Name: Weisi Duan
Focus paper: "An Entity-Level Approach to Information Extraction" by Aria Haghighi and Dan Klein.
This is a pre-meeting review.
I read the paper "Coreference Resolution in a Modular, Entity-Centered Model", in which the authors use a generative model that decomposes into three components. The semantic model and the mention model seem similar to the two models in the focus paper. One difference is the discourse model: here it is a log-linear model that uses more features than the one in the focus paper, which relies only on tree distance. Another difference is that in the focus paper the roles are mapped to entities one-to-one, while in this paper the types are mapped one-to-many. Finally, the variational inference in this paper seems to infer the parameters and the entities jointly. I wonder how inference would be done with Gibbs sampling in each of the two papers.

One final point concerns evaluation: since the entities can take any form, I am not sure exactly how the extracted entities are mapped to the gold standard, e.g., by overlap of the word lists of their properties.

It is interesting to see how they represent entities using variable-length word lists; this representation could also be used in WSD to represent word senses, since many senses in WordNet have only one synonym. As the paper notes, the word list can skew the mention model toward the entity, and I guess the same could be done for a word sense.
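Since neither paper spells out how predicted entities would be aligned to gold entities, here is a minimal sketch of the overlap-based mapping I have in mind: score every predicted/gold pair by Jaccard overlap of their property word lists, then greedily assign pairs one-to-one. All names, the threshold, and the greedy scheme are my own assumptions, not the papers' evaluation procedure.

```python
def jaccard(a, b):
    """Jaccard similarity between two word lists, treated as sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def match_entities(predicted, gold, threshold=0.0):
    """Greedily map each predicted entity to at most one gold entity.

    predicted, gold: dicts mapping entity id -> list of property words.
    Returns a dict from predicted id to gold id; unmatched ids are omitted.
    """
    # Score all candidate pairs above the threshold.
    pairs = []
    for p_id, p_words in predicted.items():
        for g_id, g_words in gold.items():
            score = jaccard(p_words, g_words)
            if score > threshold:
                pairs.append((score, p_id, g_id))
    pairs.sort(reverse=True)  # consider the best overlaps first

    # Greedy one-to-one assignment.
    mapping, used_gold = {}, set()
    for score, p_id, g_id in pairs:
        if p_id not in mapping and g_id not in used_gold:
            mapping[p_id] = g_id
            used_gold.add(g_id)
    return mapping

# Toy example with made-up property word lists:
predicted = {"e1": ["aria", "haghighi"], "e2": ["berkeley", "university"]}
gold = {"g1": ["aria", "haghighi", "mr."], "g2": ["uc", "berkeley"]}
print(match_entities(predicted, gold))  # {'e1': 'g1', 'e2': 'g2'}
```

A real evaluation would also have to decide how to score partial overlaps and entities left unmatched, which is exactly the ambiguity I am unsure about.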