Thursday, March 3, 2011

Pre-Meeting Summary 3/3/2011 - Alan

The focus paper introduces a task called tense sense disambiguation (TSD): given a concrete tense syntactic form in a sentence, select the correct sense from a given set of possible senses.

As an experiment, the authors compiled an English syntactic sense dictionary and used it to annotate 3,000 sentences from the British National Corpus. They developed a supervised TSD algorithm that uses basic, lexical, and POS-tag-based features. The algorithm also exploits the task structure by restricting the possible labels for each concrete syntactic form. The classifier outperforms the most-frequent-sense (MFS) baseline in all three conditions: when the abstract syntactic form (ASF) type is known, unknown, or predicted by a simple rule-based classifier.
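
To make the label-restriction idea concrete, here is a minimal sketch (not the paper's code) of how a classifier might limit its candidates to the senses licensed by each concrete syntactic form; the form names, sense inventory, and score function are all hypothetical placeholders:

    # Hypothetical sense inventory: concrete syntactic form -> possible senses.
    SENSES = {
        "past_simple": ["completed_action", "past_state", "hypothetical"],
        "present_perfect": ["resultative", "experiential", "continuative"],
    }

    def score(features, sense):
        # Placeholder for a trained model's score for this sense,
        # e.g. a linear model over basic, lexical, and POS-tag features.
        return 0.0

    def disambiguate(form, features):
        # Only senses listed for this concrete form compete, which is how
        # the task structure restricts the label set per instance.
        candidates = SENSES[form]
        return max(candidates, key=lambda s: score(features, s))

In this setting, the MFS baseline would simply return the most frequent of the candidate senses for the given form.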

So for this week, people read the following supplemental papers:

Weisi read about a prototype model for coarse-to-fine learning in multi-class classification. The model is a pipeline of independent classifiers trained at each level, and because it does not exploit the overlap in the confusion lists for certain labels, there is room for better estimation of the feature weights.
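
As a rough illustration (assuming the usual coarse-to-fine setup, not necessarily the paper's actual architecture), the pipeline can be pictured as a coarse classifier whose prediction selects which independent fine-grained classifier runs next; all labels and classifiers here are toy placeholders:

    def coarse_classify(x):
        # Placeholder top-level classifier over hypothetical coarse labels.
        return "animal"

    # One independently trained fine-grained classifier per coarse label.
    FINE_CLASSIFIERS = {
        "animal": lambda x: "dog",
        "vehicle": lambda x: "car",
    }

    def pipeline_predict(x):
        # Errors at the coarse level propagate: the fine classifier for the
        # wrong branch never sees the instance.
        coarse = coarse_classify(x)
        return coarse, FINE_CLASSIFIERS[coarse](x)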

Dong read about experiments in combining classifiers using various techniques to improve WSD performance. Although the features and combination methods themselves may not have been especially novel, the paper's end result showed improved performance from such an approach.
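
One simple combination scheme is (optionally weighted) voting; the sketch below is illustrative only and is not necessarily one of the paper's techniques:

    from collections import defaultdict

    def combine(predictions, weights=None):
        # predictions: list of (classifier_name, predicted_sense) pairs.
        # weights: optional per-classifier weights, e.g. held-out accuracy.
        votes = defaultdict(float)
        for name, sense in predictions:
            votes[sense] += (weights or {}).get(name, 1.0)
        return max(votes, key=votes.get)

    # Two of three hypothetical classifiers agree, so their sense wins.
    print(combine([("dl", "bank/finance"), ("nb", "bank/finance"),
                   ("knn", "bank/river")]))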

In somewhat of a contrast, Daniel read about novel syntactic features used to improve WSD performance. The paper uses two supervised machine learning methods, decision lists and AdaBoost, and adds features for subcategorization frames and for dependencies involving words already seen with a given sense. Thresholding was also used to trade recall for precision. In the end, the specific syntactic features combined with AdaBoost performed best, although AdaBoost has no effective thresholding mechanism.
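
The thresholding idea can be sketched as abstention: the classifier answers only when its best score clears a confidence threshold (the sense names, scores, and threshold below are made up):

    def predict_with_threshold(scores, threshold):
        # scores: dict mapping each candidate sense to a classifier score.
        best = max(scores, key=scores.get)
        if scores[best] < threshold:
            return None  # abstain: precision rises, recall falls
        return best

    print(predict_with_threshold({"s1": 0.9, "s2": 0.3}, threshold=0.5))  # s1
    print(predict_with_threshold({"s1": 0.4, "s2": 0.3}, threshold=0.5))  # None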

Dhananjay read about imposing global constraints over local pairwise temporal order decisions. Two constraints are imposed: transitivity and time expression normalization. Transitivity is enforced with integer linear programming (ILP), while time expression normalization maps all time expressions onto a single timeline. The results show a 1-2% absolute increase in accuracy.
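
The paper enforces transitivity with an ILP; the sketch below only illustrates the constraint itself on a toy set of pairwise before-decisions (the event names and decisions are hypothetical):

    def transitivity_violations(before):
        # before: set of (a, b) pairs meaning "a is ordered before b".
        # Transitivity demands: (a, b) and (b, c) imply (a, c).
        violations = []
        for (a, b) in before:
            for (b2, c) in before:
                if b == b2 and a != c and (a, c) not in before:
                    violations.append((a, b, c))
        return violations

    # e1 < e2 and e2 < e3, but (e1, e3) is missing, so the triple is flagged.
    print(transitivity_violations({("e1", "e2"), ("e2", "e3")}))

In an ILP formulation, each pairwise decision becomes a binary variable and each such triple becomes a linear constraint, so the solver returns the globally consistent ordering.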

Brendan read an overview of construction grammar. A construction is a pairing of meaning and form, and analyzing constructions requires consideration of communicative function, meaning, and general cognitive constraints. There are some good examples illustrated in the link on Brendan's post. The relevance of the theory, and how to use it or compare against it in some applicable manner, still seems unclear.


-Alan
