Wednesday, January 19, 2011

Commentary for Jan 20th

I read the related paper "A Generative Model for Parsing Natural Language to Meaning Representations" (Lu et al., 2008). In this paper, the authors describe a generative model that produces a sentence and its meaning representation simultaneously. Much as CCG directly incorporates semantic information into its parse trees, this model describes both the form and the meaning of a sentence at the same time. However, the model generates its trees top-down and can therefore condition on only a very small amount of context. Because of this shortcoming, a discriminative reranker is required as a final step. Featurizing this model could be interesting, since doing so would let it weaken some of its many independence assumptions. Algorithmically, the model is less interesting than the UBL approach, as it is trained with standard Expectation Maximization. UBL's process of alternating between refining the lexicon and reparsing the training data (sketched below) is a more novel idea that could be useful for other problems that involve learning a lexicon or a similar structured resource.
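To make that alternation concrete, here is a minimal Python sketch of the loop as I understand it. The helper functions (parse_best, propose_entries, reestimate) are hypothetical stand-ins for the real UBL operations (CCG parsing, higher-order-unification splits of lexical entries, and parameter re-estimation); none of this is code from the paper.

```python
# A minimal sketch of the lexicon-refinement loop, with hypothetical
# helpers standing in for the actual UBL operations.

def parse_best(sentence, logical_form, lexicon, params):
    # Stand-in: return the highest-scoring parse of `sentence` that
    # yields `logical_form` under the current lexicon and parameters.
    return {"sentence": sentence, "lf": logical_form, "entries": set()}

def propose_entries(parse):
    # Stand-in: split the lexical entries used in `parse` into new,
    # more general candidates (higher-order unification in UBL).
    return parse["entries"]

def reestimate(parses, params):
    # Stand-in: update parameters so useful entries score higher.
    return params

def induce_lexicon(training_pairs, seed_lexicon, n_iterations=10):
    """Alternate between reparsing the data and refining the lexicon."""
    lexicon = set(seed_lexicon)
    params = {}
    for _ in range(n_iterations):
        # Step 1: reparse the training data under the current lexicon.
        parses = [parse_best(s, lf, lexicon, params)
                  for s, lf in training_pairs]
        # Step 2: grow the lexicon from entries proposed by those parses.
        for p in parses:
            lexicon |= propose_entries(p)
        # Step 3: re-estimate parameters before the next pass.
        params = reestimate(parses, params)
    return lexicon, params
```

The appeal of this scheme is that the two steps reinforce each other: better parses license better lexical entries, and a richer lexicon in turn yields better parses on the next pass.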

