Types of Noun Phrases: Referential vs. Quantified NPs
Inherently referring noun phrases pick out individuals in the world.
e.g. John, Mary, the President, the department chair
Non-referring noun phrases do not pick out an individual (or individuals) in the world. We will call these ‘quantified NPs’ or ‘operators’
e.g. No bear, every boy, all the students, each boy, who, two rabbits, some teacher, etc.
A Paradigm with a Missing Cell
1) [No talk show host]i believes that Oprah admires himk
2) [No talk show host]i believes that Oprah admires himi
3) [No talk show host]i admires himk
4) *[No talk show host]i admires himi
The phenomenon is the same for a referential NP, e.g., ‘Geraldo’.
Example (4) cannot mean that no talk show host admires himself; it must mean that no talk show host admires some other male
1. Geraldoi believes that Oprah admires himk
2. Geraldoi believes that Oprah admires himi
3. Geraldoi admires himk
4. *Geraldoi admires himi
5. [No talk show host]i believes that Oprah admires himk
6. [No talk show host]i believes that Oprah admires himi
7. [No talk show host]i admires himk
8. *[No talk show host]i admires himi
Two Accounts of Principle B
Chomsky: Principle B applies to pronouns which are c-commanded by referential NPs and to pronouns which have operators as antecedents (i.e. 1-8).
Reinhart: Principle B applies only to pronouns which are c-commanded by an operator (i.e. 5-8). Example (4) is ruled out by pragmatic Rule I (information strength)
The Experimental Findings: Chien and Wexler (1990, Language Acquisition)
Methodology: Picture-judgment task
Results: First, children ‘appear’ to violate Principle B in sentences like ‘Mama Bear is touching her’, allowing the prohibited meaning, i.e. Mama Bear is touching herself, about 50% of the time
However, the same children do not violate Principle B when the antecedent of the pronoun is a quantified NP such as ‘every bear’. E.g., ‘Every bear is touching her’ is accepted only 15% of the time in a situation where every bear touches herself, but no bear touches another salient female who is depicted in the picture (say Goldilocks).
Examples of Pictures Used.
The reflexives make sure children allow this kind of interpretation
Main Question:can children reject the picture on the right -->
G1: under 4 yrs; G2: 4-5 yrs; G3: 5-6 yrs; G4: 6-7 yrs; A: Adults
Look at G3 for largest difference
Summary of Chien and Wexler’s Findings
In general, the youngest children didn’t perform well. It may be that the task is too hard for them, at least for some of the test sentences.
Putting the youngest children (Groups 1 and 2) aside, however, children do pretty well with reflexives. They reject the mismatch sentence/picture pairs at high rates.
Children also perform well with pronouns with a quantified NP as antecedent by 5-6 years-old (Group 3), but these children still accept coreference for pronouns with a referential NP as antecedent.
Picture Tasks
Experiments using pictures can give us an idea about what’s going on in children’s grammars, although these experiments tend to underestimate children’s knowledge, as compared to experiments using the Truth Value Judgment task
However, large numbers of subjects can be run, because the experiment is quick to carry out, though children don’t enjoy it.
So, how would the results of Chien and Wexler’s experiment stand up -- if we switch to a TVJ task?
A Target Picture
What Isn’t Shown in a Picture?
A story with ‘events’ taking place in real time. The target sentence is a possible outcome which satisfies the condition of plausible dissent, but events took a different turn, so the actual outcome makes the sentence false.
Plausible dissent is also unmet for QNPs
Not every bear is touching her -- in fact, none of them are.
What Isn’t Shown in a Picture?
Possible Outcome:
Mama Bear touches Goldilocks
Events take a turn such that Mama Bear doesn’t touch Goldilocks, but touches herself
The TVJ task
Background: Every troll dried so-and-so
Assertion: Every troll dried a salient female character: Ariel
Possible Outcome: Every troll dried Ariel.
Actual Outcomes: Only one troll dried Ariel; every troll dried herself
NB: there are other Background/Assertion pairs
Test Sentence “Every troll dried her”
The characters are introduced
Theme : A Swimming Story
The 3 trolls are planning to go to the beach. They take towels, & a watermelon -- for a picnic
The Plot Unfolds
The Trolls are swimming in the deep blue sea, when who along comes Ariel the Mermaid.
The Plot Unfolds
The trolls invite Ariel to share the watermelon with them -- once they are all dried off. Ariel’s hair & tail are very wet.
Potential Antecedent for the Pronoun
Ariel accepts, but needs to dry off. “Can you trolls help. My hair is very wet, and so is my tail.”
The Possible Outcome
“Sure, we can help. We’ll go get our towels.”
Possible Outcome
Troll 1: “I have a big towel. I’ll dry your hair, and then I’ll get dry too. I’m still wet.”
Possible Outcome
Troll 1: “Here you go Ariel. Now your hair is dry.”
Condition of Falsification
Troll 2: “Your tail is still wet. Oh look. I only brought a small towel. It’s not big enough to dry us both. And I’m wet too.”
Troll 3: “I can’t help either. Your wet tail will soak my little towel, and I need it to dry off.”
Possible Outcome becomes Untenable
Troll 1: “Now I feel better.” <troll dries self>
Troll 2: “Me too.”
Troll 3: “I’m getting all dry too.”
The Actual Outcome & Reminder
The Story Ends
“Let’s have watermelon together now.” (The towels are still next to the Trolls.)
The Linguistic Antecedent
“That was a story about Ariel and some girl trolls. And I know one thing that happened. Every troll dried her.”
The Truth Value Judgment
Child: “No.”
Kermit: “No? What really happened?”
Child: “Only one troll dried her.”
Does the Truth Value Judgment Task Help?
Yes and No.
No: Using the Truth Value Judgment task, children still distinguish between pronouns with quantified NP antecedents and referential NP antecedents, permitting local coreference with referential NPs as antecedents.
Yes: The pattern is obtained for children at least one year younger.
Thornton and Wexler (1999). Subjects: 19 children aged 4;0 to 5;1 (mean age = 4;8)
Every reindeer brushed him: 8% acceptance
Bert brushed him: 58% acceptance
Parameter Setting: Null Subjects
Null Subjects
• Child English
– Eat cookie.
• Hyams (1986)
– English children have an Italian setting of the null-subject parameter
– Trigger for change: expletive subjects
• Valian (1991)
– Usage of English children is different from Italian children (proportion)
• Wang (1992)
– Usage of English children is different from Chinese children (null objects)
Parameter Setting: Complex Predicates
(Snyder 2001)
Complex Predicates
English vs. Spanish:
a. John painted the house red (resultative)
b. Mary picked the book up (verb-particle)
c. Fred made Jeff leave (make-causative)
d. Fred saw Jeff leave (perceptual report)
e. Alice sent Sue the letter (double object)
f. Alice sent the letter to Sue (dative)
g. Bob put the book on the table (put-locative)
N-N Compounding
• English: frog man (various meanings); French: homme grenouille (fixed meaning)
• English: banana cup, wine glass, party chair, blood lady
Compounding Parameter
Resultatives Productive N-N Compounds
American Sign Language Austroasiatic (Khmer) Finno-Ugric Germanic (German, English) Japanese-Korean Sino-Tibetan (Mandarin) Tai (Thai) Basque Afroasiatic (Arabic, Hebrew) Austronesian (Javanese) Bantu (Lingala) Romance (French, Spanish) Slavic (Russian, Serbo-Croatian)
correlation is not bidirectional
Developmental Evidence
• Complex predicate properties argued to appear as a group in English children’s spontaneous speech (Stromswold & Snyder 1997)
• Appearance of N-N compounding is a good predictor of the appearance of verb-particle constructions and other complex predicate constructions, even after partialing out contributions of
– Age of reaching MLU 2.5
– Production of lexical N-N compounds
– Production of adjective-noun combinations
– Correlations are remarkably good
Language Change: Learning about Verbs
Classes of Verbs
• Verbs with syntax like pour – dribble, drip, spill, shake, spin, spew, slop, etc. (manner-of-motion)
• Verbs with syntax like fill – cover, decorate, bandage, blanket, soak, drench, adorn, etc. (change-of-state)
• Verbs with syntax like load – stuff, cram, jam, spray, sow, heap, spread, rub, dab, plaster, etc. (manner-of-motion & change-of-state)
Learning Syntax from Semantics
Manner-of-motion verbs select the Figure Frame: [VP V NP(figure) PP(ground)] (e.g. pour water into the glass)
Change-of-state verbs select the Ground Frame: [VP V NP(ground) PP(figure)] (e.g. fill the glass with water)
Linking Rules: SEMANTICS --> SYNTAX
• Assumption: linking generalizations are universal
• Shared by competing accounts of learning verb syntax & semantics
Evidence
• Semantics --> Syntax (Gropen et al., 1991, etc.)
– Teach child meaning of novel verb, e.g. this is moaking
– Elicit sentences using that verb
• Syntax --> Semantics (Naigles et al. 1992; Gleitman et al.)
– Show child multiple scenes, use syntax to draw attention
– Adults: verb-guessing task, show effects of scenes, syntax, semantics.
Language Change
Theoretical Approaches
Language Change
• How to take input and reach a new grammar
– Error signal
• Failure to analyze input
• Failure to predict input
– Cues
• Syntactic
• Non-syntactic
• How do any of these deal with problem of overgeneralization?
Triggers
Gibson & Wexler (1994)
• Triggering Learning Algorithm
– Learner starts with random set of parameter values
– For each sentence, attempts to parse sentence using current settings
– If parse fails using current settings, change one parameter value and attempt re-parsing
– If re-parsing succeeds, change grammar to new parameter setting
Greediness Constraint: adopt the new setting only if it makes the current sentence parsable
Single Value Constraint: change at most one parameter value per input
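The TLA loop can be sketched in code. This is a minimal toy instance, not Gibson & Wexler's actual simulation: the two-parameter "languages" below are invented for illustration, and parses() is a stand-in for a real parser that simply checks string membership.

```python
import random

# Toy sketch of Gibson & Wexler's Triggering Learning Algorithm (TLA).
# The parameter-to-language mapping below is invented for illustration.
LANGUAGES = {
    (0, 0): {"S V", "S V O"},
    (0, 1): {"S V", "S O V"},
    (1, 0): {"V S", "V O S"},
    (1, 1): {"V S", "O V S"},
}

def parses(grammar, sentence):
    return sentence in LANGUAGES[grammar]

def tla_step(grammar, sentence, rng):
    """One TLA update. If the current grammar fails on the sentence,
    flip one randomly chosen parameter (Single Value Constraint) and
    keep the new grammar only if it now parses (Greediness Constraint)."""
    if parses(grammar, sentence):
        return grammar
    i = rng.randrange(len(grammar))
    candidate = tuple(v ^ (j == i) for j, v in enumerate(grammar))
    return candidate if parses(candidate, sentence) else grammar

rng = random.Random(0)
g = (1, 1)            # random starting point
target = (0, 1)       # the target grammar generating the input
for _ in range(200):
    g = tla_step(g, rng.choice(sorted(LANGUAGES[target])), rng)
print(g)              # in this tiny space the learner reaches the target
```

In this 2-parameter space every starting point reaches the target, mirroring the point on the next slide; local maxima only appear once the space is extended.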
Gibson & Wexler (1994)
• For an extremely simple 2-parameter space, the learning task is easy - any starting point, any destination
• Triggers do not really exist in this model
[Diagram: two-parameter word-order space (SV/VS x VO/OV), with the grammars SVO, SOV, VOS, OVS as corners]
Gibson & Wexler (1994)
• Extending the space to 3-parameters– There are non-adjacent grammars
– There are local maxima, where current grammar and all neighbors fail
[Diagram: three-parameter space, adding ±V2 to the SV/VS and VO/OV parameters]
String: Adv S V O
– All local maxima involve impossibility of retracting a +V2 hypothesis
Gibson & Wexler (1994)
• Solutions to the local maxima problem
– #1: Constrain the initial state: V2 defaults to [-V2]; S and O parameters start unset
– #2: Extrinsic ordering
Fodor (1998)
• Unambiguous Triggers– Local maxima in TLA result from the use of ‘ambiguous triggers’
– If learning only occurs based on unambiguous triggers, local maxima should be avoided
• Difficulties– How to identify unambiguous triggers?
– An unambiguous trigger can be parsed only by grammars that include value Pi of parameter P, and by no grammar that includes value Pj.
– A parameter space with 20 binary parameters implies up to 2^20 parses for any sentence.
Fodor (1998)
• Ambiguous Trigger
– SVO can be analyzed by at least 5 of the 8 grammars in G&W’s parameter space
Fodor (1998)
• Structural Triggers Learner (STL)
– Parameters are treelets
– Learner attempts to parse input sentences using a supergrammar that contains treelets for all values of all unset parameters, e.g., 40 treelets for 20 unset binary parameters.
– Algorithm (two variants)
• #1: Adopt a parameter value/trigger structure if and only if it occurs as a part of every complete well-formed phrase marker assigned to an input sentence by the parser using the supergrammar.
• #2: Adopt a parameter value/trigger structure if and only if it occurs as a part of a unique complete well-formed phrase marker assigned to the input by the parser using the supergrammar.
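The two adoption criteria can be sketched as set operations over parses. Here a "parse" is modeled simply as the set of parameter-value treelets it contains; the treelet names and example parses are invented for illustration.

```python
# Sketch of the two STL adoption criteria (Fodor 1998), assuming a
# parse is represented as the set of treelets it uses.
def stl_adopt(parses, mode="every"):
    """Treelets the learner may adopt from one input sentence.
    mode='every'  (variant #1): treelets shared by every parse.
    mode='unique' (variant #2): all treelets, but only if the input is
    unambiguous (exactly one parse); otherwise it is uninformative."""
    if not parses:
        return set()
    if mode == "every":
        return set.intersection(*map(set, parses))
    if mode == "unique":
        return set(parses[0]) if len(parses) == 1 else set()
    raise ValueError(mode)

# An SVO sentence the supergrammar parses two ways (cf. the ambiguity
# of SVO in G&W's space):
svo_parses = [{"head-initial", "-V2"}, {"head-initial", "+V2"}]
print(stl_adopt(svo_parses, "every"))    # {'head-initial'}
print(stl_adopt(svo_parses, "unique"))   # set()
```

Variant #1 still extracts the shared treelet from an ambiguous input, while variant #2 discards the input entirely, matching the "slightly wasteful, but conservative" behavior noted below.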
Fodor (1998)
• Structural Triggers Learner
– If a sentence is structurally ambiguous, it is taken to be uninformative (slightly wasteful, but conservative)
– Unable to take advantage of collectively unambiguous sets of sentences, e.g. SVO and OVS, which entail [+V2]
– Still unclear (to me) how it manages its parsing task
Competition Model
• 2 grammars - start with even strength, both get credit for success, one gets punished for failure
• Each grammar is chosen for parsing/production as a function of its current strength
• It must be that increasing pi for one grammar decreases pj for the other grammars
• Does the presence of some punishment guarantee that a losing grammar will, over time, fail to survive?
Competition Model
• Upon the presence of an input datum s, the child– Selects a grammar Gi with the probability pi.
– Analyzes s with Gi.
– Updates competition• If successful, reward Gi by increasing pi.
• Otherwise, punish Gi by decreasing pi.
• This implies that change only occurs when a selected grammar succeeds or fails
Competition Model
• Linear reward-penalty (LR-P) scheme, with learning rate γ
– Given an input sentence s, the learner selects a grammar Gi with probability pi.
– If Gi --> s then
• p’i = pi + γ(1 - pi)
• p’j = (1 - γ)pj, for j ≠ i
– If Gi -/-> s then
• p’i = (1 - γ)pi
• p’j = γ/(N - 1) + (1 - γ)pj, for j ≠ i
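The reward-penalty update is a direct transcription into code; the learning rate is written as gamma, and the value 0.1 is only illustrative.

```python
def lrp_update(p, i, success, gamma=0.1):
    """Linear reward-penalty update after grammar i was selected.
    Rewards i on success, punishes it on failure; the probability
    vector stays normalized."""
    n = len(p)
    q = list(p)
    if success:
        q[i] = p[i] + gamma * (1 - p[i])
        for j in range(n):
            if j != i:
                q[j] = (1 - gamma) * p[j]
    else:
        q[i] = (1 - gamma) * p[i]
        for j in range(n):
            if j != i:
                q[j] = gamma / (n - 1) + (1 - gamma) * p[j]
    return q

p = [0.5, 0.5]
p = lrp_update(p, 0, success=True)   # reward grammar 0
print(p)                             # grammar 0 gains mass; sum stays 1
```

Note that on failure the punished mass is redistributed evenly over the other N - 1 grammars, which is what makes the probabilities sum to 1 after every update.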
From Grammars to Parameters
• Number of Grammars problem
– A space with n parameters implies at least 2^n grammars (e.g. 2^40 is ~1 trillion)
– Only one grammar is used at a time, which implies very slow convergence
• Competition among Parameter Values– How does this work?
– Naïve Parameter Learning model (NPL) may reward incorrect parameter values as hitchhikers, or punish correct parameter values as accomplices.
Empirical Predictions
• Hypothesis: Time to settle upon the target grammar is a function of the frequency of sentences that punish the competitor grammars
• First-pass assumptions
– Learning rate set low, so many occurrences are needed to lead to decisive changes
– Similar amount of input needed to eliminate all competitors
Empirical Predictions
• ±wh-movement
– Any occurrence of overt wh-movement punishes a [-wh-mvt] grammar
– Wh-questions are highly frequent in input to English-speaking children (~30% estimate!)
– [±wh-mvt] parameter should be set very early
– This applies to the clear-cut contrast between English and Chinese
• French: [+wh-movement] and lots of wh-in-situ
• Japanese: [-wh-movement] plus scrambling
Empirical Predictions
• Verb-raising
– Reported to be set accurately in speech of French children (Pierce, 1992)
French: Two Verb Positions
a. Il ne voit pas le canard
he sees not the duck
b. *Il ne pas voit le canard
he not sees the duck
c. *Il veut ne voir pas le canard
he wants to.see not the duck
d. Il veut ne pas voir le canard
he wants not to.see the duck
agreeing (i.e. finite) forms precede pas
non-agreeing (i.e. infinitive) forms follow pas
French Children’s Speech
• Verb forms: correct or default (infinitive)
• Verb position changes with verb form
• Just like adults
finite: 127, infinitive: 119 (Pierce, 1992)
           finite   infinitive
V-neg        121         1
neg-V          6       118
(Pierce, 1992)
Empirical Predictions
• Verb-raising
– Reported to be set accurately in speech of French children (Pierce, 1992)
– Crucial evidence
… verb neg/adv …
– Estimated frequency in adult speech: ~7%
– This frequency set as an operational definition of sufficiently frequent for early mastery (early 2’s)
Empirical Predictions
• Verb-second
– Classic argument: V2 is mastered very early by German/Dutch-speaking children (Poeppel & Wexler, 1993; Haegeman, 1995)
– Yang’s challenges
• Crucial input is infrequent
• Claims of early mastery are exaggerated
Two Verb Positions
a. Ich sah den Mann
I saw the man
b. Den Mann sah ich
the man saw I
c. Ich will [den Mann sehen]
I want the man to.see[inf]
d. Den Mann will ich [sehen]
the man want I to.see[inf]
agreeing verbs (i.e. finite verbs) appear in second position
non-agreeing verbs (i.e. infinitive verbs) appear in final position
German Children’s Speech
• Verb forms: correct or default (infinitive)
• Verb position changes with verb form
• Just like adults
finite: 208, infinitive: 43 (Andreas, age 2;2; Poeppel & Wexler, 1993)
           finite   infinitive
V-2          197         6
V-final       11        37
Andreas, age 2;2 (Poeppel & Wexler, 1993)
Empirical Predictions
• Cross-language word orders
– 1. Dutch: SVO, XVSO, OVS
– 2. Hebrew: SVO, XVSO, VSO
– 3. English: SVO, XSVO
– 4. Irish: VSO, XVSO
– 5. Hixkaryana: OVS, XOVS
– Order of elimination
• Frequent SVO input quickly eliminates #4 and #5
• Relatively frequent XVSO input eliminates #3
• OVS is needed to eliminate #2 - only ~1.3% of input
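The elimination order follows from how much of the input each competitor fails on. A back-of-the-envelope sketch: the pattern sets follow the slide, but the SVO and XVSO frequencies are invented placeholders; only the ~1.3% OVS figure comes from the text.

```python
# Which portion of Dutch input punishes each competitor grammar?
# SVO and XVSO frequencies are illustrative guesses; OVS (~1.3%)
# is the figure cited in the text.
GRAMMARS = {
    "Dutch (target)": {"SVO", "XVSO", "OVS"},
    "Hebrew":         {"SVO", "XVSO", "VSO"},
    "English":        {"SVO", "XSVO"},
    "Irish":          {"VSO", "XVSO"},
    "Hixkaryana":     {"OVS", "XOVS"},
}
INPUT_FREQ = {"SVO": 0.60, "XVSO": 0.20, "OVS": 0.013}

punishment = {
    name: sum(f for pat, f in INPUT_FREQ.items() if pat not in patterns)
    for name, patterns in GRAMMARS.items()
}
for name, mass in sorted(punishment.items(), key=lambda kv: kv[1]):
    print(f"{name:15s} punished on {mass:.1%} of sampled input")
```

Whatever the exact frequencies, the ordering is robust: Irish and Hixkaryana fail on the frequent SVO pattern, English on XVSO, while Hebrew survives until the rare OVS evidence accumulates.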
Empirical Predictions
• But what about the classic findings by Poeppel & Wexler (etc.)?
– They show mastery of V-raising, not mastery of V-2
– Yang argues that early Dutch shows lots of V-1 sentences, due to the presence of a Hebrew-type grammar (based on the Hein corpus), e.g. weet ik niet [know I not]
Empirical Predictions
• Early Argument Drop
– Resuscitates idea that early argument omission in English is due to mis-set parameter
– Overt expletive subjects (‘there’) ~1.2% frequency in input
Empirical Predictions
• Early Argument Drop
– Resuscitates idea that early argument omission in English is due to mis-set parameter
– Wang et al. (1992) argument about Chinese was based on mismatch in absolute frequencies between Chinese & English learners
– Yang: if incorrect grammar is used probabilistically, then absolute frequency match not expected - rather, ratios should match
– Ratio of null-subjects and null-objects is similar in Chinese and English learners
– Like Chinese learners, English learners do not produce wh-obj pro V? (object wh-questions with a null subject)
Empirical Predictions
• Null Subject Parameter Setting
– Italian environment
• English [- null subject] setting killed off early, due to the presence of a large amount of contradictory input
• Italian children should exhibit an adultlike profile very early
– English environment
• Italian [+ null subject] setting killed off more slowly, since contradictory input is much rarer (expletive subjects)
• The fact that null subjects are rare in the input seems to play no role
Tuesday 12/9
Poverty of the Stimulus
• Structure-dependent auxiliary fronting
– Is [the man who is sleeping] __ going to make it to class?
– *Is [the man who __ sleeping] is going to make it to class?
• Pullum: relevant positive examples exist (in the Wall Street Journal)
• Yang: even if they do exist, they’re not frequent enough to account for mastery by children age 3;2 in Crain & Nakayama’s experiments (1987)