Thursday, February 10, 2011

Post-Meeting Commentary for Feb. 10, 2011

After some time, I realized that imposing constraints by coupling the training of multiple extractors is actually quite intuitive. In general, if you have multiple sources of information, it's natural to ask whether one source could increase the amount of information in another, or vice versa.
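To make that intuition concrete, here is a toy sketch of one common coupling constraint, mutual exclusion between categories. Everything here (the category names, scores, and threshold) is invented for illustration; this is just the flavor of the idea, not the actual system.

```python
def apply_mutual_exclusion(scores_a, scores_b, threshold=0.8):
    """Keep a candidate in category A only if A's score clears the
    threshold AND beats category B's score for the same string.
    One extractor's confident output constrains the other's."""
    return {x for x, s in scores_a.items()
            if s >= threshold and scores_b.get(x, 0.0) < s}

# Made-up confidence scores from two independent category extractors.
city_scores = {"Pittsburgh": 0.95, "Amazon": 0.85, "Chicago": 0.90}
company_scores = {"Amazon": 0.97, "Google": 0.92}

cities = apply_mutual_exclusion(city_scores, company_scores)
companies = apply_mutual_exclusion(company_scores, city_scores)
# "Amazon" scored high as a city, but the company extractor is even more
# confident, so mutual exclusion keeps it out of the city set.
```

Each extractor on its own would happily accept "Amazon" as a city; only by looking at both sets of scores together does the error get filtered out.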

So although bootstrapped learners are common for semi-supervised learning, there are a few other approaches, such as EM, graphical models, or the graph-based scheme Prof. Smith mentioned during the seminar. Although it's difficult, I feel that if we can keep pushing the accuracy achievable from just a small set of seed examples, that would be a major step up from supervised learning, though it probably still needs a lot of work.
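A minimal sketch of what "learning from just seed data" means in the bootstrapped setting: start from a handful of labeled seeds, train a weak model, label the unlabeled pool, and promote only high-confidence predictions back into the training set. The 1-D "classifier", data, and threshold below are all made up to keep the loop self-contained.

```python
def train_centroids(labeled):
    """Fit a trivial 1-D nearest-centroid model: mean feature per class."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence); confidence shrinks near the midpoint."""
    ranked = sorted(centroids, key=lambda y: abs(x - centroids[y]))
    best, second = ranked[0], ranked[1]
    d1, d2 = abs(x - centroids[best]), abs(x - centroids[second])
    return best, d2 / (d1 + d2 + 1e-9)

def bootstrap(seeds, unlabeled, rounds=3, threshold=0.8):
    """Self-training loop: grow the labeled set from confident predictions."""
    labeled, pool = list(seeds), list(unlabeled)
    for _ in range(rounds):
        centroids = train_centroids(labeled)
        confident, rest = [], []
        for x in pool:
            y, conf = predict(centroids, x)
            (confident if conf >= threshold else rest).append((x, y))
        labeled += confident          # promote confident labels only
        pool = [x for x, _ in rest]   # ambiguous points stay unlabeled
    return train_centroids(labeled)

seeds = [(1.0, "low"), (9.0, "high")]  # two labeled seed examples
unlabeled = [1.5, 2.0, 8.0, 8.5, 5.2]  # unlabeled pool
model = bootstrap(seeds, unlabeled)
# The ambiguous point 5.2 never clears the threshold, so it is never
# promoted -- that caution is what keeps bootstrapping from drifting.
```

The appeal, and the danger, is visible even in this toy: the model only ever trusts its own confident guesses, so a bad early promotion can snowball, which is exactly where coupling constraints are supposed to help.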

I also found the papers Daniel and Brendan introduced pretty interesting. The idea of the card-pyramid-like data structure, regardless of the end goal, seems like a cool approach, although I guess its usefulness was unclear in the end. Daniel introduced the FACTORIE library, which can construct graphical models and has pretty good performance results. I recall Markov Logic Networks were mentioned as the basis for comparison, although I am unfamiliar with those at the moment.

As a note, I have yet to find anyone's presentation uninteresting, although sometimes I feel like the conversation goes a bit beyond the scope of my current knowledge.


1 comment:

  1. Another thing about the coupled bootstrap learning approach: you can (easily?) integrate totally different learners. NELL actually has an old-school logic/rule learner as one of its learners. It's not super clear how to integrate that into a probabilistic framework. They also use more straightforward logistic regression classifiers alongside it.

    If you liked that "Coupled" 2009 paper, you might want to look at their 2010 AAAI paper. It has more results too.
