Thursday, November 13, 2014

PhD positions at Northwestern


The Language and Computation Lab in the Department of Linguistics at Northwestern University, directed by Dr. Klinton Bicknell, is currently looking for Ph.D. students to join the lab. Candidates should have a strong academic background; experience in computer science and/or mathematics is especially desirable.

Interested students should apply to the Ph.D. program in the Department of Linguistics, which admits students to the department as a whole rather than to a specific lab and offers all admitted students competitive five-year funding packages not tied to a specific advisor. International applicants are welcome. The department's deadline for applications to start in Fall 2015 is November 30, 2014.

Research in the Language and Computation Lab investigates how the human brain solves the computational problems of language comprehension, production, and acquisition. We use techniques from machine learning, computational linguistics, and statistics to build computational models of these language behaviors. We test the models by analyzing large behavioral and neuroscientific datasets, and also by gathering new empirical data, especially via eye tracking and crowdsourcing. For more information on current research, see the lab website at http://lcl.northwestern.edu/.

The Department of Linguistics at Northwestern has a wealth of excellent language researchers, as does the university more broadly in the Psychology, Communication Sciences & Disorders, and Electrical Engineering & Computer Science departments. Students are highly encouraged to collaborate both within Linguistics and across departments. Students can also take advantage of the university's location in the dynamic Chicago metro area. Feel free to direct any questions to Klinton at kbicknell a_t northwestern.edu.

Two new papers: Lena Jäger et al. (JML) and Logačev et al. (Cognitive Science)


Two new papers by our PhD students Lena Jäger and Pavel Logačev have been accepted for publication:
Lena Jäger, Zhong Chen, Qiang Li, Chien-Jer Charles Lin, and Shravan Vasishth. The subject-relative advantage in Chinese: Evidence for expectation-based processing. Journal of Memory and Language, in press. [ DOI | .pdf ]
Chinese relative clauses are an important test case for pitting the predictions of expectation-based accounts against those of memory-based theories. The memory-based accounts predict that object relatives should be easier to process than subject relatives because, in object relatives, less linguistic material intervenes between the head noun and the gap (or verb) that it associates with. By contrast, expectation-based accounts such as surprisal predict that the less frequently occurring object relative should be harder to process than the subject relative, because building a rarer structure is computationally more expensive. Previous studies of Chinese relative clauses suffer from the problem that local ambiguities in subject and object relatives may confound the comparison. We compared reading difficulty in subject and object relatives (in both subject- and object-modifications) in which the left context leads the reader to predict a relative clause structure as the most likely continuation; we validated this assumption about what is predicted using production data (a sentence completion study and a corpus analysis). Two reading studies (self-paced reading and eye-tracking) show that the Chinese relative clause evidence is consistent with the predictions of expectation-based accounts but not with those of memory-based theories. We present new evidence that the prediction of upcoming structure, generated through the probabilistic syntactic knowledge of the comprehender, is an important determinant of processing cost.
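As a rough illustration of the surprisal logic in this abstract, here is a minimal Python sketch. The probabilities are invented toy numbers, not estimates from the paper or its corpus study; they merely encode the finding that the subject relative is the more frequent continuation, so the rarer object relative receives higher surprisal and hence higher predicted processing cost.

```python
import math

# Hypothetical probabilities of each relative-clause structure given a
# left context that strongly predicts a relative clause. The numbers are
# illustrative only, not taken from the paper's corpus analysis.
p_structure = {
    "subject_relative": 0.7,
    "object_relative": 0.3,
}

def surprisal_bits(structure, probs):
    """Surprisal in bits: -log2 P(structure | context).
    Rarer continuations get higher surprisal, i.e., higher predicted cost."""
    return -math.log2(probs[structure])

for structure, p in p_structure.items():
    print(f"{structure}: P = {p:.2f}, "
          f"surprisal = {surprisal_bits(structure, p_structure):.2f} bits")
```

On these toy numbers the object relative comes out at about 1.74 bits versus about 0.51 bits for the subject relative, which is the direction of the expectation-based prediction the paper tests.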
Pavel Logačev and Shravan Vasishth. A Multiple-Channel Model of Task-Dependent Ambiguity Resolution in Sentence Comprehension. Cognitive Science, 2014. Accepted pending minor revision.
Traxler et al. (1998) found that ambiguous sentences are read faster than their unambiguous counterparts. This so-called ambiguity advantage has presented a major challenge to classical theories of human sentence comprehension (parsing) because its most prominent explanation, in the form of the unrestricted race model (URM), assumes that parsing is non-deterministic. Recently, Swets et al. (2008) have challenged the URM. They argue that readers strategically underspecify the representation of ambiguous sentences to save time, unless disambiguation is required by task demands. When disambiguation is required, however, readers assign sentences full structure, and Swets et al. provide experimental evidence to this effect. On the basis of their findings they argue against the URM and in favor of a model of task-dependent sentence comprehension. We show through simulations that the Swets et al. data do not constitute evidence for task-dependent parsing because they can be explained by the URM. However, we provide decisive evidence from a German self-paced reading study consistent with Swets et al.'s general claim about task-dependent parsing. Specifically, we show that under certain conditions, ambiguous sentences can be read more slowly than their unambiguous counterparts, suggesting that the parser may create several parses when required. Finally, we present the first quantitative model of task-driven disambiguation that subsumes the URM, and show that it can explain both Swets et al.'s results and our findings.
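To see why a race model predicts an ambiguity advantage in the first place, here is a small Monte Carlo sketch in Python. Everything in it is a toy assumption (Gaussian completion times, the 400 ms mean, exactly two racing analyses); it is not the model from the paper, only an illustration of the race logic: when two analyses race and either one will do, reading time is the minimum of the two, which is faster on average than a single analysis.

```python
import random
import statistics

random.seed(1)

def analysis_time():
    """Hypothetical time (ms) to build one syntactic analysis."""
    return random.gauss(400, 60)

n = 100_000
# Unambiguous sentence: only one analysis is grammatical, so its
# completion time alone determines reading time.
unambiguous = [analysis_time() for _ in range(n)]
# Ambiguous sentence under the race logic: two analyses proceed in
# parallel and the first to finish is adopted, so time is the minimum.
ambiguous = [min(analysis_time(), analysis_time()) for _ in range(n)]

print(f"mean unambiguous: {statistics.mean(unambiguous):.1f} ms")
print(f"mean ambiguous:   {statistics.mean(ambiguous):.1f} ms")
```

The minimum of two independent draws is faster on average, so the simulated ambiguous sentences are read faster with no underspecification involved; this is the race-based explanation of the ambiguity advantage that the paper's task-driven model subsumes.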