Tuesday, November 10, 2015

Understanding underspecification: A comparison of two computational implementations (Logacev et al) accepted in: Quarterly Journal of Experimental Psychology


Pavel Logačev and Shravan Vasishth. Understanding underspecification: A comparison of two computational implementations. Quarterly Journal of Experimental Psychology, 2015. Accepted. [ pdf ]
Swets et al. (2008) present evidence that the so-called ambiguity advantage (Traxler et al., 1998), which has been explained in terms of the Unrestricted Race Model, can equally well be explained by assuming underspecification in ambiguous conditions driven by task demands. Specifically, if comprehension questions require that ambiguities be resolved, the parser tends to make an attachment; when questions are about superficial aspects of the target sentence, readers tend to pursue an underspecification strategy. It is reasonable to assume that individual differences in strategy will play a significant role in the application of such strategies, so that studying average behavior may not be informative. In order to study the predictions of the good-enough processing theory, we implemented two versions of underspecification: the partial specification model (PSM), which is an implementation of the Swets et al. proposal, and a more parsimonious version, the non-specification model (NSM). We evaluate the relative fit of these two kinds of underspecification to Swets et al.'s data; as a baseline, we also fit three models that assume no underspecification. We find that a model without underspecification provides a somewhat better fit than both underspecification models, while the NSM provides a better fit than the PSM. We interpret the results as a lack of unambiguous evidence in favor of underspecification; however, given that there is considerable existing evidence for good-enough processing in the literature, it is reasonable to assume that some underspecification might occur. Under this assumption, the results can be interpreted as tentative evidence for the NSM over the PSM. More generally, our work provides a method for choosing between models of real-time processes in sentence comprehension that make qualitative predictions about the relationship between several dependent variables.
We believe that sentence processing research will greatly benefit from a wider use of such methods.
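The models compared in the paper are of course not reproduced here, but the general logic of choosing between models by relative fit can be sketched with a toy penalized-likelihood (AIC) comparison. All numbers below are made up for illustration; the model names are only loosely inspired by the abstract:

```python
import math

# Hypothetical accuracy data (made-up numbers, not from the paper):
# k correct responses out of n trials on questions about an ambiguity.
k, n = 61, 100

def binom_loglik(p, k, n):
    """Binomial log-likelihood up to a constant (which cancels in comparisons)."""
    return k * math.log(p) + (n - k) * math.log(1 - p)

# Model A: readers resolve the ambiguity; accuracy is a free parameter,
# estimated by maximum likelihood as k/n.
loglik_a, params_a = binom_loglik(k / n, k, n), 1

# Model B: readers leave the ambiguity unresolved and guess at the question;
# accuracy is fixed at chance (0.5), so there are no free parameters.
loglik_b, params_b = binom_loglik(0.5, k, n), 0

# AIC = 2 * (number of parameters) - 2 * log-likelihood; lower is better.
aic_a = 2 * params_a - 2 * loglik_a
aic_b = 2 * params_b - 2 * loglik_b
print(aic_a, aic_b)
```

With these made-up data, the free-parameter model wins despite its complexity penalty, because accuracy is clearly above chance; with near-chance data the guessing model would win. This is the same trade-off, in miniature, that any relative-fit comparison between specification and non-specification models has to negotiate.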

Saturday, October 3, 2015

Some thoughts after attending a conference in Copenhagen

I just got done with a very nice conference in Copenhagen on grammar vs lexicon.

One thing that struck me afresh in the talks and conversations there is that scientists feel compelled to hold or stand for a theoretical position. People often design their careers around a position, and then they proceed to defend it no matter what data comes their way. But doing science is very much like a forecasting problem: your job is to come up with a prediction of what will happen if a particular experiment is run.

The way we do science, however, is as follows. We first find out what the experiment showed. Then we make the "prediction" based on our favorite theory. Researchers routinely use the word prediction even when they already know the outcome of an experiment. If this were a weather forecasting problem, it would be like publishing the probability of rain yesterday. Of course you would get everything right! It is because of this unfortunate tendency to predict after the fact that people are so confident about their theories and positions. After-the-fact prediction gives an illusion of being right all the time.
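The weather analogy can be made concrete with the Brier score, the standard accuracy measure for probabilistic forecasts. In this minimal sketch (all numbers made up), an honest forecaster incurs some penalty for uncertainty, while a "forecaster" who announces probabilities after seeing the outcomes scores a perfect zero, which is exactly the illusion described above:

```python
# Brier score: mean squared difference between the forecast probability
# and the observed outcome (1 = it rained, 0 = it did not). Lower is better.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]          # what actually happened

# An honest forecaster commits to probabilities before seeing the outcomes.
honest = [0.7, 0.4, 0.6, 0.8, 0.3]

# A post-hoc "forecaster" announces probabilities after the fact,
# so the forecast is just the outcome itself.
post_hoc = [float(o) for o in outcomes]

print(brier_score(honest, outcomes))    # small but positive penalty
print(brier_score(post_hoc, outcomes))  # exactly 0.0: "right" every time
```

A proper scoring rule like this only discriminates between good and bad forecasters when the forecasts are registered in advance; applied after the fact, everyone is a superforecaster.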

I just read a great review of a book on forecasting by the greatest reviewer I have ever encountered on the web: RK, of RK's musings fame.

He discusses a book, Superforecasting, in which the author lays out the qualities of a good forecaster. I quote from the blog almost verbatim:

  • Good back of the envelope calculations
  • Starting with outside view that reduces anchoring bias
  • Subsequent to outside view, get a grip on the inside view
  • Look out for various perspectives about the problem
  • Think three/four times, think deeply to root out confirmation bias
  • It's not the raw crunching power you have that matters most. It's what you do with it.

And here is another quote from the blog, which itself is a quote from the book:

Unpack the question into components. Distinguish as sharply as you can between the known and unknown and leave no assumptions unscrutinized. Adopt the outside view and put the problem into a comparative perspective that downplays its uniqueness and treats it as a special case of a wider class of phenomena. Then adopt the inside view that plays up the uniqueness of the problem. Also explore the similarities and differences between your views and those of others, and pay special attention to prediction markets and other methods of extracting wisdom from crowds. Synthesize all these different views into a single vision as acute as that of a dragonfly. Finally, express your judgment as precisely as you can, using a finely grained scale of probability.

And finally, RK also excerpts a composite portrait of a good forecaster from the book.

Scientists in psycholinguistics tend to be the exact opposite of a good forecaster. 

They hunker down and defend to the death one position, never never never back down in the face of counterevidence, never entertain multiple alternative theories simultaneously, never express any self-doubt (at least not publicly) that their favorite position might be wrong. Whenever we write papers, we end up converging on what we claim is the most plausible explanation for the result we have found. We never end on an equivocation, because that would mean rejection from the top journal we have submitted our paper to.

If anyone other than me is reading this blog, maybe you should read RK's original review of the book, Superforecasting, and maybe also read the book (I know I will), and then think about what's wrong with the way you are doing science, because it is bass-ackwards. We are terrible forecasters, and there's a damn good reason for it!



Saturday, September 26, 2015

ESSLLI 2016 course: Sentence Comprehension as a Cognitive Process: A Computational Approach

Felix Engelmann and I will teach a one-week course at ESSLLI 2016 in Bolzano. I made a preliminary web page for the course, available here.

This is our first step towards writing our book (with the same title as the course), on contract with Cambridge University Press.


Thursday, September 3, 2015

New paper (Paape and Vasishth, Lang. and Speech): Local coherence and preemptive digging-in effects in German

Dario Paape's new paper has been accepted for publication by Language and Speech:

Title: Local coherence and preemptive digging-in effects in German
Abstract: SOPARSE (Tabor & Hutchins, 2004) predicts so-called local coherence effects: locally plausible but globally impossible parses of substrings can exert a distracting influence during sentence processing. Additionally, it predicts digging-in effects: the longer the parser stays committed to a particular analysis, the harder it becomes to inhibit that analysis. We investigated the interaction of these two predictions using German sentences. Results from a self-paced reading study show that the processing difficulty caused by a local coherence can be reduced by first allowing the globally correct parse to become entrenched, which supports SOPARSE's assumptions.
pdf: http://www.ling.uni-potsdam.de/~paape/LCpaper.pdf

Monday, August 31, 2015

New paper: Locality and expectation in separable Persian complex predicates

Here's a new paper by Molood Sadat Safavi, Samar Husain, and Shravan Vasishth, which shows evidence from two self-paced reading studies in Persian against one of the key predictions of the expectation accounts (Hale 2001, Levy 2008).

Title:
Locality and expectation in Persian complex predicates

Abstract:
In sentence comprehension, it is well-known that processing cost increases with dependency distance (Gibson 2000, Lewis and Vasishth 2005); this is often referred to as the locality effect. However, the expectation-based account (Hale 2001, Levy 2008) predicts that delaying the appearance of a verb renders it more predictable and therefore easier to process. Following up on previous work (Husain et al. 2014), we investigated whether strengthening the expectation can increase facilitation at the verb even further. We operationalize strong expectation as prediction of the lexical entry for the verb; by contrast, weak expectation refers to the prediction of some upcoming verb phrase (these are the cases discussed by Levy 2008). We used Persian for this investigation. This language has a special construction called complex predicates, which are separable Noun-Verb configurations in which the verb (the precise lexical item) is highly predictable given the noun. In two self-paced reading experiments, we delayed the appearance of the verb by interposing a relative clause (Expt 1, 42 subjects) or a long PP (Expt 2, 40 subjects). As a control, we included a simple predicate (Noun-Verb) configuration; the same distance manipulation was applied here as for complex predicates, but here, the exact lexical entry for the verb is not predicted but rather a verb phrase is predicted. Thus, we had a 2x2 design, with Expectation Strength (Strong/Weak) and Distance (Short/Long). Based on the Husain et al. study, which had a similar design using Hindi complex predicates, we expected a slowdown in the weak expectation conditions (i.e., locality effects), but a facilitation in the strong expectation conditions (i.e., expectation effects). Surprisingly, both experiments showed clear effects of locality in both the strong and weak expectation conditions.
We also find evidence that could be consistent with expectation effects: the high-predictable verbs are read faster than the low-predictable verbs. However, this result is difficult to interpret because the verbs used in the strong and weak expectation conditions are different. In sum, these studies show strong and unequivocal evidence in favor of argument-verb dependency distance influencing integration processes at the verb, falsifying a key prediction of the expectation based account of Levy 2008.

Tuesday, August 25, 2015

New paper (Engelmann, Jäger, Vasishth)

Here is a new paper by Felix Engelmann and Lena Jäger and myself that people interested in sentence comprehension processes may be interested in.

Title:
The determinants of retrieval interference in dependency resolution: Review and computational modeling

Abstract:
We report a comprehensive literature review of retrieval interference in reflexive-antecedent dependencies, number agreement, and non-agreement subject-verb dependencies, and computationally evaluate the predictions of cue-based retrieval theory with reference to published results. A novel finding from the review and modeling is that, contrary to claims in the literature, results on number agreement are not entirely compatible with cue-based retrieval theory. We also show that the cue-based retrieval account in its current form cannot explain several reported interference effects, such as (i) speed-ups observed in the presence of a syntactically unlicensed distractor when the correct dependent is a full match to the retrieval cues and (ii) slow-downs when the correct dependent only partially matches the retrieval cues. We demonstrate that these effects can be explained by two theoretical and independently motivated constructs: distractor prominence and cue confusion. The cue-based retrieval model is therefore extended to incorporate distractor prominence and cue confusion, and quantitative predictions are derived from this extended model. We show that the extended cue-based retrieval model provides a better explanation of published results than the classical retrieval account.

The pdf is here:

http://www.ling.uni-potsdam.de/~engelmann/publications/EngelmannEtAl_JML_subm_150825.doc.pdf


Tuesday, April 28, 2015

Two new papers from our lab

Two new papers have come out recently, both part of Lena Jäger's dissertation.

Lena A. Jäger, Felix Engelmann, and Shravan Vasishth. Retrieval interference in reflexive processing: Experimental evidence from Mandarin, and computational modeling. Frontiers in Psychology, 6(617), 2015. [ DOI | pdf ]
We conducted two eye-tracking experiments investigating the processing of the Mandarin reflexive ziji in order to tease apart structurally constrained accounts from standard cue-based accounts of memory retrieval. In both experiments, we tested whether structurally inaccessible distractors that fulfill the animacy requirement of ziji influence processing times at the reflexive. In Experiment 1, we manipulated animacy of the antecedent and a structurally inaccessible distractor intervening between the antecedent and the reflexive. In conditions where the accessible antecedent mismatched the animacy cue, we found inhibitory interference whereas in antecedent-match conditions, no effect of the distractor was observed. In Experiment 2, we tested only antecedent-match configurations and manipulated locality of the reflexive-antecedent binding (Mandarin allows non-local binding). Participants were asked to hold three distractors (animate vs. inanimate nouns) in memory while reading the target sentence. We found slower reading times when animate distractors were held in memory (inhibitory interference). Moreover, we replicated the locality effect reported in previous studies. These results are incompatible with structure-based accounts. However, the cue-based ACT-R model of Lewis and Vasishth (2005) cannot explain the observed pattern either. We therefore extend the original ACT-R model and show how this model not only explains the data presented in this article, but is also able to account for previously unexplained patterns in the literature on reflexive processing.

Lena A. Jäger, Lena Benz, Jens Roeser, Brian W. Dillon, and Shravan Vasishth. Teasing apart retrieval and encoding interference in the processing of anaphors. Frontiers in Psychology, 6(506), 2015. [ DOI | http ]
Two classes of account have been proposed to explain the memory processes subserving the processing of reflexive-antecedent dependencies. Structure-based accounts assume that the retrieval of the antecedent is guided by syntactic tree-configurational information without considering other kinds of information such as gender marking in the case of English reflexives. By contrast, unconstrained cue-based retrieval assumes that all available information is used for retrieving the antecedent. Similarity-based interference effects from structurally illicit distractors which match a non-structural retrieval cue have been interpreted as evidence favoring the unconstrained cue-based retrieval account since cue-based retrieval interference from structurally illicit distractors is incompatible with the structure-based account. However, it has been argued that the observed effects do not necessarily reflect interference occurring at the moment of retrieval but might equally well be accounted for by interference occurring already at the stage of encoding or maintaining the antecedent in memory, in which case they cannot be taken as evidence against the structure-based account. We present three experiments (self-paced reading and eye-tracking) on German reflexives and Swedish reflexive and pronominal possessives in which we pit the predictions of encoding interference and cue-based retrieval interference against each other. We could not find any indication that encoding interference affects the processing ease of the reflexive-antecedent dependency formation. Thus, there is no evidence that encoding interference might be the explanation for the interference effects observed in previous work. We therefore conclude that invoking encoding interference may not be a plausible way to reconcile interference effects with a structure-based account of reflexive processing.