Saturday, April 1, 2006
Abstracts (in order of presentation)
Neural underpinnings of cognitive control: Implications for the development of ambiguity resolution abilities
A century of investigation into the role of the human frontal lobes in complex cognition, including language processing, has revealed several interesting but apparently contradictory findings. In particular, the results of numerous studies suggest that left inferior frontal gyrus (LIFG), which includes Broca's area, plays a direct role in sentence-level syntactic processing. In contrast, other brain-imaging and neuropsychological data indicate that LIFG is crucial for cognitive control -- specifically, for overriding highly regularized, automatic processes, even when a task involves syntactically undemanding material (e.g., single words, a list of letters). I'll describe a unifying account of these findings (Novick, Trueswell & Thompson-Schill, 2005), which emphasizes the importance of general cognitive control mechanisms for the syntactic processing of sentences. I'll review work from my lab on individual differences in parsing behavior in neurologically intact adults and frontal-lobe patients, as well as on the parsing preferences of normally developing children. As a whole, this work suggests that: (1) LIFG is part of a network of frontal lobe subsystems that are generally responsible for the detection and resolution of incompatible stimulus representations; and (2) the role of LIFG in sentence comprehension is to implement the resolution of conflicting characterizations of the sentence, which can arise because of ambiguity in the input.
Ambiguity resolution and dynamic interpretation
The interpretation of an elided verb phrase (Bill did __ too) is often ambiguous. Investigations of the preferred analysis of ambiguous elided verb phrases, conducted jointly with Chuck Clifton, will be used as a window into the properties of syntactic versus discourse representations. In syntax, there is generally a strong preference to relate new material to material that is recent/low in the tree (as captured by Right Association, Late Closure, and Recency). In discourse representation, by contrast, one finds a preference to relate new material to material that is high in the tree, i.e., material that is part of the main assertion rather than material which is presupposed. However, the preference for high antecedents disappears under circumstances where it is the lower antecedent that expresses the main assertion of the utterance, showing that the preference for high material is not due to tree geometry per se but rather to information structure. Discourse Representation Theory (Kamp and Reyle, 1993) provides a tool for investigating discourse processing. DRT represents a discourse in terms of discourse referents and properties of those discourse referents, with negation and other operators introducing their own sub-structures. The general law of DRT is that discourse referents introduced in sub-structures are not available in larger structures. There exist many open questions concerning DRT as part of a theory of discourse processing. I will address the following:
- Does it matter how the discourse was divided into individual sentences? Does it matter whether we have a clause boundary vs. a sentence boundary?
- What different types of meaning are distinguished, and for how long are they kept distinct?
- Is there dynamic updating of the discourse representation? As new material is added, does this simply make the discourse representation larger, or can it also change the representation of already processed material?
Evidence from self-paced reading and questionnaire studies of elided verb phrases suggests that the distinction between clause and sentence boundaries must be maintained or reflected in Discourse Representation Structure (DRS). The evidence also suggests that the difference between at-issue content and not-at-issue content ('supplements' in the theory of Potts, 2005) is preserved within a sentence but not across sentence boundaries. The DRS can change dynamically: in some cases, updating the DRS goes beyond making additions to the DRS in response to new material.
Linking the Two Halves of Sentence Processing
Sentence comprehension work has aimed to develop accounts of processing in two distinct areas. One of these is ambiguity resolution, where the central debates concern whether difficult ambiguous sentences are interpreted via a constraint satisfaction process or through a sequence of misanalysis and reanalysis stages. The second domain concerns complex sentences that are viewed as having no structural ambiguity, such as center embedded structures. Explanations for comprehension difficulty here invoke the structure of working memory and other constructs largely unrelated to ambiguity resolution. I will argue that this division of the field is false and that ambiguity resolution processes are relevant to all sentence types. These claims will build upon the Production-Distribution-Comprehension (PDC) account of sentence interpretation, and I will argue that significant insight into the comprehension process in these domains can be gleaned from an understanding of how speakers make structural choices during utterance planning.
When, why, and how speakers do (and don't) avoid ambiguity
In this talk, I'll review research from my lab that (a) aims to delineate the circumstances under which speakers do and don't avoid producing ambiguous linguistic expressions; (b) describes the mechanisms by which speakers avoid (or not) producing ambiguous expressions; and (c) speculates about why production might play out as such, given the ultimate objective of successful communication. The experiments and analysis suggest (a) that speakers avoid linguistic ambiguity only under very limited circumstances; (b) that such (mostly non-)avoidance arises because of the nature of the processing relationship between the meaning representations that serve as the starting point of production, and the linguistic representations that are subsequently encoded; and (c) that this is so because the objective of the production system is to produce utterances efficiently, leaving to listeners the task of interpreting sometimes suboptimal utterances, and to the grammar of our language the guarantee that communication will succeed nonetheless.
Bottom-up and top-down constraints in language processing
When we perceive speech, we solve a cascading series of challenges. Many-to-many mappings exist at virtually every level of description of the signal: acoustic-phonetic, phonetic-lexical, lexical-phrasal, and so on. The general strategy adopted by psycholinguists has been to modularize research questions. Speech perception focuses on the apparent lack of invariance in the acoustic-phonetic mapping. Researchers in spoken word recognition make the simplifying assumption that a speech perception system will output a string of phonemes perceived from incoming speech, and focus on the problem of segmenting that phoneme string into words. Work in sentence processing typically assumes that string of words as input, and so on. I will argue that this 'division of labor' strategy complicates rather than simplifies. I will review work from my lab and others showing that at each level, there are bottom-up and top-down constraints that are ignored under our typical interface assumptions. For example, bottom-up sub-phonemic detail simplifies spoken word recognition, as do top-down pragmatic/syntactic expectations. I will appeal to the principle of lawful variability as a better guide to research strategy than the current division of labor.
Janet Dean Fodor
Disambiguation by silent prosody
There is growing evidence that in silent reading, readers tend to project a prosodic contour onto the written words and then treat this mentally created prosody as if it were part of the stimulus, so that it influences syntactic processing just as 'real' prosody does when we listen to speech. Real prosody can disambiguate syntactically ambiguous sentences, and so can silent prosody -- though perhaps not always as the writer intended, since it is the default prosody that readers project. Prosodic phrasing is shaped by its alignment with syntactic structure but also by other factors such as phrase length and focus. It can therefore offer explanations for a variety of what would otherwise be aberrant ambiguity-resolution preferences: differences in ambiguity resolution between different languages; between different constructions in the same language; even between instances of the same construction if they differ in their phrase lengths. This is important because it means that the observed contrasts in parsing can be attributed to linguistic facts at the prosody-syntax interface, and so do not undermine the hypothesis that the human parsing mechanism is universal and innate. I will report studies by CUNY students, and researchers elsewhere, on a variety of languages including Arabic, Croatian, English, German, Hebrew, Japanese, Spanish, and Portuguese.
For further information, please contact Elsi Kaiser
USC Linguistics Department