Focus paper: Reading tea leaves: How humans interpret topic models
Yesterday's meeting focused mainly on ways of evaluating topic models.
For the focus paper, the overall opinion was that the results weren't very strong. More details regarding the Mechanical Turk setup and results would have been nice, as well as a comparison with somewhat less similar models, such as LSA. Furthermore, it would have been nice if they had connected topic models more with human cognitive processes.
During the meeting we also talked about calculating perplexity. There seem to be multiple ways of calculating it for topic models.
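The methods differ mainly in how the per-word probabilities on held-out text are estimated, but the final step is usually the same. As a minimal sketch (the function name and toy numbers are my own, not from any specific paper), perplexity is the exponentiated negative average log-likelihood per token:

```python
import math

def perplexity(log_likelihoods, num_tokens):
    """Perplexity = exp(-(total log-likelihood) / (number of tokens))."""
    return math.exp(-sum(log_likelihoods) / num_tokens)

# Toy example: 4 tokens, each assigned probability 0.25 by the model.
lls = [math.log(0.25)] * 4
print(perplexity(lls, 4))  # 4.0, i.e. as uncertain as a uniform choice among 4
```

The disagreement between methods is about how to get those per-token log-likelihoods for held-out documents (e.g. estimating the document-topic distribution first, or using sampling-based estimators), not about this final formula.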
The author of my related paper seems to be working on presenting topics to humans. He has recently published a lot of papers on topics such as visualization of topics, external evaluation, topic labeling, etc. Thus his website (http://www.ics.uci.edu/~newman/) might be a good starting point if you're interested in this.
A paper that was mentioned during the meeting was Not-So-Latent Dirichlet Allocation: Collapsed Gibbs Sampling Using Human Judgments by Jonathan Chang (http://www.aclweb.org/anthology/W/W10/W10-0720.pdf), in which humans simulate the sampling step of Gibbs sampling and thereby construct a topic model. I haven't read it yet, but it looks very interesting.
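For reference, the sampling step being simulated is the standard collapsed Gibbs update for LDA, where one word's topic assignment is resampled from a distribution over topics given all the other assignments. A minimal sketch, assuming the count tables and hyperparameters are maintained elsewhere (all variable names here are my own):

```python
import random

def sample_topic(d, w, n_dk, n_kw, n_k, alpha, beta, V, K):
    """One collapsed Gibbs step: resample the topic of word w in document d.

    n_dk[d][k]: words in doc d assigned to topic k (current word excluded)
    n_kw[k][w]: count of word w assigned to topic k (current word excluded)
    n_k[k]:     total words assigned to topic k (current word excluded)
    """
    # p(z = k | rest) is proportional to
    # (n_dk[d][k] + alpha) * (n_kw[k][w] + beta) / (n_k[k] + V * beta)
    weights = [(n_dk[d][k] + alpha) * (n_kw[k][w] + beta) / (n_k[k] + V * beta)
               for k in range(K)]
    return random.choices(range(K), weights=weights)[0]

# Toy usage: 1 document, vocabulary of 1 word, 2 topics.
n_dk, n_kw, n_k = [[3, 0]], [[5], [0]], [5, 0]
k = sample_topic(0, 0, n_dk, n_kw, n_k, alpha=0.1, beta=0.01, V=1, K=2)
print(k)  # a topic index, 0 or 1 (0 is far more likely given these counts)
```

What the paper apparently replaces is exactly this weighted draw: instead of the computer sampling from the weights, a human judges where the word belongs.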