Thursday, April 14, 2011

Pre-meeting Summary (Dong Nguyen - leader)
Focus paper: Poetic Statistical Machine Translation: Rhyme and Meter

The focus paper constrains the search for translations with meter, rhyme, and length constraints. Instead of filtering translations post hoc to keep those that fit the rhyme/meter constraints, the authors incorporate the constraints as feature functions during the search for translations. One problem, also observed in the related papers, is that it is not clear how to evaluate poetry generation/translation.
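To make the feature-function idea concrete, here is a minimal sketch of a log-linear hypothesis score in which rhyme, meter, and length features sit alongside the usual translation and language model scores. This is not the paper's decoder; the feature names, values, and weights below are made up for illustration.

def log_linear_score(features, weights):
    # Log-linear model: the score of a (partial) hypothesis is the
    # weighted sum of its feature values; the decoder ranks hypotheses by it.
    return sum(weights[name] * value for name, value in features.items())

# Illustrative feature values for one candidate translation line.
# "rhyme_ok", "meter_violations", and "length_penalty" stand in for the
# constraint features; they are assumptions, not the system's actual features.
candidate = {
    "log_translation_model": -12.4,
    "log_language_model": -20.1,
    "rhyme_ok": 1.0,           # 1 if the line ends with the required rhyme
    "meter_violations": 2.0,   # number of stress-pattern mismatches
    "length_penalty": 1.0,     # deviation from the target syllable count
}

weights = {
    "log_translation_model": 1.0,
    "log_language_model": 0.5,
    "rhyme_ok": 3.0,
    "meter_violations": -2.0,
    "length_penalty": -1.0,
}

print(log_linear_score(candidate, weights))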

Most of the related papers that were read focused on rhyme and poetry in NLP; one additional paper was on machine translation.

Daniel and Dong read Automatic Analysis of Rhythmic Poetry with Applications to Generation and Translation, Greene et al., EMNLP 2010. The paper deals only with sonnets, a poetry form with a fairly strict iambic pentameter. Stress patterns of each word are modeled using finite state transducers (FSTs), and EM is used to learn the weights. The main problem with this model is that the pronunciation probabilities are context independent, which makes the processing of single words difficult. The learned word stress patterns are used for poetry generation and translation; however, in neither case did they perform a quantitative evaluation.
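As a toy illustration of the kind of stress analysis involved (a plain dictionary lookup, not the paper's FST/EM model), the sketch below checks whether a line matches the iambic pentameter template; the hand-coded stress entries are assumptions for the example, and fixing a single stress code per word is exactly the context-independence issue mentioned above.

# Toy iambic pentameter check. Each word maps to one stress string
# (0 = unstressed, 1 = stressed); the entries are hand-coded for this example.
# Assigning one fixed pattern per word is the context-independence limitation
# discussed above: monosyllables can carry either stress depending on context.
STRESS = {
    "shall": "0", "i": "1", "compare": "01", "thee": "0",
    "to": "1", "a": "0", "summers": "10", "day": "1",
}

IAMBIC_PENTAMETER = "0101010101"  # ten syllables, alternating unstressed/stressed

def line_stress(words):
    # Concatenate the per-word stress strings for a line.
    return "".join(STRESS[w] for w in words)

def is_iambic_pentameter(words):
    return line_stress(words) == IAMBIC_PENTAMETER

line = "shall i compare thee to a summers day".split()
print(line_stress(line), is_iambic_pentameter(line))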

Dhananjay read the paper Poetry Generation in COLIBRI, Diaz-Agudo et al., in Advances in Case-Based Reasoning, 2002. The paper uses a case-based reasoning approach to poetry generation. The approach is built on the idea of taking an existing poem and adapting it to the current scenario by replacing words. The method is divided into three parts: retrieval, adaptation, and revision.
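A very small sketch of the adaptation step, under strong simplifying assumptions: a word in a retrieved line is replaced only if the substitute has the same (crudely estimated) syllable count, so the line's shape is roughly preserved. The real COLIBRI system works over structured cases and an ontology, so this only conveys the general idea.

import re

def syllable_count(word):
    # Very crude syllable estimate: count vowel groups. For illustration only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def adapt_line(line, replacements):
    # Adaptation step (toy): swap in a new word only if it has the same
    # syllable count as the word it replaces, so the meter roughly survives.
    out = []
    for word in line.split():
        new = replacements.get(word)
        if new is not None and syllable_count(new) == syllable_count(word):
            out.append(new)
        else:
            out.append(word)
    return " ".join(out)

# Retrieved case (an existing line) adapted to a new topic.
retrieved = "the silent river carries all my dreams"
print(adapt_line(retrieved, {"river": "winter", "dreams": "fears"}))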

Alan read Using an on-line dictionary to find rhyming words and pronunciations for unknown words, Roy J. Byrd and Martin Chodorow, ACL 1985. This old paper focuses on implementing a system that finds rhymes and determines how to pronounce words it has not seen before. The authors take a pronunciation-based approach to identify rhyming words. To pronounce unknown input words, they look for substrings of the input word that overlap with words already present in the dictionary. Because of the lack of hard data or experiments it is hard to evaluate the system, but this might be due to the technological limitations of the time.
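A rough illustration of pronunciation-based rhyme matching (not the paper's actual system): two words are taken to rhyme when their phoneme sequences agree from the last stressed vowel onward. The tiny ARPAbet-style pronunciation dictionary is hand-coded for the example.

# Tiny hand-coded pronunciation dictionary (ARPAbet-style phonemes with
# stress digits on vowels), just for this example.
PRON = {
    "nation":  ["N", "EY1", "SH", "AH0", "N"],
    "station": ["S", "T", "EY1", "SH", "AH0", "N"],
    "notion":  ["N", "OW1", "SH", "AH0", "N"],
}

def rhyme_part(phones):
    # Return the phonemes from the last stressed vowel (stress digit 1 or 2)
    # to the end of the word.
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in "12":
            return phones[i:]
    return phones

def rhymes(word1, word2):
    return rhyme_part(PRON[word1]) == rhyme_part(PRON[word2])

print(rhymes("nation", "station"))  # True: both end ... EY1 SH AH0 N
print(rhymes("nation", "notion"))   # False: different stressed vowel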

Weisi read Discriminative training and maximum entropy models for statistical machine translation by Och and Ney, ACL 2002. The paper recasts the source-channel model of machine translation as a log-linear model, which makes adding features easier. Training is done using generalized iterative scaling.
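In log-linear form, the translation probability is a normalized exponential of M weighted feature functions h_m(e, f):

\Pr(e \mid f) = \frac{\exp\left( \sum_{m=1}^{M} \lambda_m h_m(e, f) \right)}{\sum_{e'} \exp\left( \sum_{m=1}^{M} \lambda_m h_m(e', f) \right)}

The classical source-channel model is recovered as the special case with two features, \log p(f \mid e) and \log p(e), each with weight one; the weights \lambda_m are what the generalized iterative scaling procedure estimates.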
