Thursday, April 7, 2011

Pre-meeting summary 4/7 - Dhananjay (Leader)

This week's focus paper aims to extract phrasal translation rules for MT. Alignment is modeled as a multiclass labeling problem: the labels are ITG alignments whose blocks are constrained in span (at most 3 in this paper), and the feature space is constructed by exploring the space of alignment links (using sure links where available). The labeling problem is solved with MIRA.
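To make that framing concrete, here is a toy sketch (my own, not the paper's code) of a heavily simplified label space: monotone block tilings under the span-3 limit, with a single feature that counts how many sure links a candidate covers. The real model uses full block ITG (including inverted rules and null alignments) and a much richer set of link-based features, trained with MIRA as described below.

```python
from itertools import product

MAX_SPAN = 3  # the block span limit mentioned above

def monotone_block_tilings(n_src, n_tgt, i=0, j=0):
    """Enumerate monotone tilings of an n_src x n_tgt sentence pair into
    blocks of at most MAX_SPAN x MAX_SPAN words. A deliberate simplification
    of the block-ITG label space (inverted rules and null alignments are
    left out), just to make the 'labels are constrained alignments' idea
    concrete."""
    if i == n_src and j == n_tgt:
        yield []
        return
    for w, h in product(range(1, MAX_SPAN + 1), repeat=2):
        if i + w <= n_src and j + h <= n_tgt:
            block = (i, i + w, j, j + h)   # (src_start, src_end, tgt_start, tgt_end)
            for rest in monotone_block_tilings(n_src, n_tgt, i + w, j + h):
                yield [block] + rest

def sure_link_count(tiling, sure_links):
    """One illustrative feature: how many gold 'sure' links (i, j) fall
    inside some block of this candidate alignment."""
    return sum(any(s0 <= i < s1 and t0 <= j < t1
                   for (s0, s1, t0, t1) in tiling)
               for (i, j) in sure_links)

# the candidate 'labels' for a 3-word-by-3-word pair, ranked by the toy feature
candidates = list(monotone_block_tilings(3, 3))
best = max(candidates, key=lambda t: sure_link_count(t, {(0, 0), (1, 1), (2, 2)}))
```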

For the related paper, Alan and Daniel read an unsupervised word alignment method by the same authors, Tailoring Word Alignments to Syntactic Machine Translation. The innovative step is to parse the target language and add a syntax-sensitive distortion component that conditions on the resulting tree. The idea is that the tree can alter the probabilities of transitions between alignment positions so that distortions which respect the tree structure are preferred. The model is trained with plain EM. It drastically reduces the number of alignments that cross constituent boundaries, but only mildly improves alignment scores, generally favoring recall over precision.
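To illustrate the flavor of a tree-conditioned distortion (this is my own sketch, not the paper's exact parameterization), the snippet below penalizes a jump between target positions for every constituent of the target parse that it enters or leaves partway through, on top of the usual distance penalty; in an HMM aligner such scores would be normalized into transition probabilities.

```python
def crossing_penalty(tree_spans, j_prev, j_next):
    """Count target-parse constituents that a jump from target position
    j_prev to j_next enters or leaves partway through, i.e. spans that
    contain exactly one of the two positions. tree_spans is a list of
    (start, end) constituent spans, end exclusive."""
    crossed = 0
    for start, end in tree_spans:
        inside_prev = start <= j_prev < end
        inside_next = start <= j_next < end
        if inside_prev != inside_next:
            crossed += 1
    return crossed

def distortion_score(tree_spans, j_prev, j_next, alpha=0.5, beta=0.3):
    """Unnormalized log-score for a transition between alignment positions:
    the usual distance penalty plus an extra penalty per constituent
    boundary crossed, so jumps that respect the tree are preferred.
    alpha and beta are made-up weights for illustration."""
    distance = abs(j_next - (j_prev + 1))   # 0 for a monotone one-step jump
    return -alpha * distance - beta * crossing_penalty(tree_spans, j_prev, j_next)
```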

Weisi and Dong read Statistical Phrase-Based Translation. It presents a translation model and decoder and compares different ways to build phrase translation tables. The decoder uses a beam search algorithm: each expansion step selects a contiguous sequence of untranslated foreign words and an English phrase that translates them, then updates the hypothesis cost. An important observation by the authors is that while phrases help translation, restricting them to syntactic constituents does not.
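A minimal sketch of this kind of stack-based beam search appears below; the phrase table, the flat per-word stand-in for the language model, and the pruning settings are all assumptions for illustration, and real-decoder components such as future-cost estimation and a distortion model are omitted.

```python
import math
from collections import namedtuple

# covered: frozenset of translated foreign positions; output: English so far;
# logprob: accumulated translation-model + language-model score
Hypothesis = namedtuple("Hypothesis", "covered output logprob")

def lm_score(prev_output, english_phrase):
    # placeholder "language model": a flat per-word penalty (an assumption;
    # a real decoder scores prev_output + english_phrase with an n-gram LM)
    return -0.1 * len(english_phrase.split())

def decode(foreign, phrase_table, beam_size=10, max_phrase_len=3):
    n = len(foreign)
    # stacks[k] holds hypotheses that have translated exactly k foreign words
    stacks = [[] for _ in range(n + 1)]
    stacks[0].append(Hypothesis(frozenset(), "", 0.0))
    for k in range(n):
        # histogram pruning: keep only the best few hypotheses per stack
        for hyp in sorted(stacks[k], key=lambda h: -h.logprob)[:beam_size]:
            # choose any contiguous span of still-untranslated foreign words
            for i in range(n):
                for j in range(i + 1, min(i + max_phrase_len, n) + 1):
                    span = range(i, j)
                    if any(p in hyp.covered for p in span):
                        continue
                    for eng, tm_logprob in phrase_table.get(tuple(foreign[i:j]), []):
                        new = Hypothesis(hyp.covered | set(span),
                                         (hyp.output + " " + eng).strip(),
                                         hyp.logprob + tm_logprob
                                         + lm_score(hyp.output, eng))
                        stacks[len(new.covered)].append(new)
    finished = stacks[n]
    return max(finished, key=lambda h: h.logprob) if finished else None

# toy usage with a hypothetical phrase table
table = {("das", "haus"): [("the house", math.log(0.8))],
         ("ist",): [("is", math.log(0.9))],
         ("klein",): [("small", math.log(0.7))]}
best = decode("das haus ist klein".split(), table)
# best.output is one highest-scoring ordering; the placeholder LM is
# order-insensitive, so ties between orderings are broken arbitrarily
```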

I read a paper that describes MIRA (the margin infused relaxed algorithm), which is used for the multiclass labeling problem in the focus paper. A prototype vector is maintained for each label, a similarity score between the instance and each prototype is computed, and the instance is assigned the label with the highest similarity score. Training updates the prototype vectors, making the smallest change that corrects each mistake.
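Below is a minimal sketch of a single-best variant of that update, assuming plain feature vectors and one linear prototype per label: on a margin violation, the prototypes of the gold label and the highest-scoring wrong label are moved by the smallest step that restores a margin of one, capped by an aggressiveness constant C.

```python
import numpy as np

def mira_train(X, y, n_labels, epochs=5, C=1.0):
    """Single-best MIRA sketch: one prototype (weight vector) per label.

    X: (n_samples, n_features) array; y: integer labels in [0, n_labels).
    On each margin violation, the prototypes of the gold label and the best
    wrong label are moved by the smallest step that restores a margin of 1,
    capped by the aggressiveness constant C.
    """
    W = np.zeros((n_labels, X.shape[1]))      # prototype vectors
    for _ in range(epochs):
        for x, gold in zip(X, y):
            scores = W @ x                    # similarity of x to each prototype
            rivals = scores.copy()
            rivals[gold] = -np.inf
            r = int(np.argmax(rivals))        # highest-scoring wrong label
            loss = max(0.0, 1.0 - (scores[gold] - scores[r]))
            if loss == 0.0:
                continue
            # smallest (squared-norm) update that fixes the violation, capped at C
            tau = min(C, loss / (2.0 * np.dot(x, x) + 1e-12))
            W[gold] += tau * x
            W[r] -= tau * x
    return W

def mira_predict(W, x):
    """Assign the label whose prototype is most similar to x."""
    return int(np.argmax(W @ x))
```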
