"I'm sorry Dave, I'm afraid I can't do that": Linguistics, Statistics, and Natural Language Processing circa 2001

Lillian Lee, Cornell University

    It's the year 2000, but where are the flying cars? I was promised flying cars.
    – Avery Brooks, IBM commercial

According to many pop-culture visions of the future, technology will eventually produce the Machine that Can Speak to Us. Examples range from the False Maria in Fritz Lang's 1926 film Metropolis to Knight Rider's KITT (a talking car) to Star Wars' C-3PO (said to have been modeled on the False Maria). And, of course, there is the HAL 9000 computer from 2001: A Space Odyssey; in one of the film's most famous scenes, the astronaut Dave asks HAL to open a pod bay door on the spacecraft, to which HAL responds, "I'm sorry Dave, I'm afraid I can't do that".

Natural language processing, or NLP, is the field of computer science devoted to creating such machines — that is, enabling computers to use human languages both as input and as output. The area is quite broad, encompassing problems ranging from simultaneous multi-language translation to advanced search engine development to the design of computer interfaces capable of combining speech, diagrams, and other modalities simultaneously. A natural consequence of this wide range of inquiry is the integration of ideas from computer science with work from many other fields, including linguistics, which provides models of language; psychology, which provides models of cognitive processes; information theory, which provides models of communication; and mathematics and statistics, which provide tools for analyzing and acquiring such models.

The interaction of these ideas together with advances in machine learning (see [other chapter]) has resulted in concerted research activity in statistical natural language processing: making computers language-enabled by having them acquire linguistic information directly from samples of language itself. In this essay, we describe the history of statistical NLP; the twists and turns of the story serve to highlight the sometimes complex interplay between computer science and other fields.

Although currently a major focus of research, the data-driven, computational approach to language processing was for some time held in deep disregard because it directly conflicts with another commonly-held viewpoint: human language is so complex that language samples alone seemingly cannot yield enough information to understand it. Indeed, it is often said that NLP is "AI-complete" (a pun on NP-completeness; see [other chapter]), meaning that the most difficult problems in artificial intelligence manifest themselves in human language phenomena. This belief in language use as the touchstone of intelligent behavior dates back at least to the 1950 proposal of the Turing Test as a way to gauge whether machine intelligence has been achieved; as Turing wrote, "The question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include".

The reader might be somewhat surprised to hear that language understanding is so hard. After all, human children get the hang of it in a few years, word processing software now corrects (some of) our grammatical errors, and TV ads show us phones capable of effortless translation. One might therefore be led to believe that HAL is just around the corner. Such is not the case, however. In order to appreciate this point, we temporarily divert from describing statistical NLP's history — which touches upon Hamilton versus Madison, the sleeping habits of colorless green ideas, and what happens when one fires a linguist — to examine a few examples illustrating why understanding human language is such a difficult problem.

Ambiguity and language analysis

    At last, a computer that understands you like your mother.
    – 1985 McDonnell-Douglas ad

The snippet quoted above indicates the early confidence at least one company had in the feasibility of getting computers to understand human language. But in fact, that very sentence is illustrative of the host of difficulties that arise in trying to analyze human utterances, and so, ironically, it is quite unlikely that the system being promoted would have been up to the task. A moment's reflection reveals that the sentence admits at least three different interpretations:

1. The computer understands you as well as your mother understands you.
2. The computer understands that you like your mother.
3. The computer understands you as well as it understands your mother.

That is, the sentence is ambiguous; and yet we humans seem to instantaneously rule out all the alternatives except the first (and presumably the intended) one. We do so based on a great deal of background knowledge, including understanding what advertisements typically try to convince us of. How are we to get such information into a computer?

A number of other types of ambiguity are also lurking here. For example, consider the speech recognition problem: how can we distinguish between this utterance, when spoken, and "...a computer that understands your lie cured mother"? We also have a word sense ambiguity problem: how do we know that here "mother" means "a female parent", rather than the Oxford English Dictionary-approved alternative of "a cask or vat used in vinegar-making"? Again, it is our broad knowledge about the world and the context of the remark that allows us humans to make these decisions easily.

Now, one might be tempted to think that all these ambiguities arise because our example sentence is highly unusual (although the ad writers probably did not set out to craft a strange sentence). Or, one might argue that these ambiguities are somehow artificial because the alternative interpretations are so unrealistic that an NLP system could easily filter them out. But ambiguities crop up in many situations. For example, in "Copy the local patient files to disk" (which seems like a perfectly plausible command to issue to a computer), is it the patients or the files that are local? (Or, perhaps, the files themselves are patient? But our knowledge about the world rules this possibility out.) Again, we need to know the specifics of the situation in order to decide. And in multilingual settings, extra ambiguities may arise. Consider a certain sequence of seven Japanese characters (not reproduced here). Since Japanese is written without spaces between words, the word boundaries must first be inferred; this particular character sequence can be divided into two different word sequences, "president, both, business, general-manager" (= "a president as well as a general manager of business") and "president, subsidiary-business, Tsutomu (a name), general-manager" (= ?). It requires a fair bit of linguistic information to choose the correct alternative.

To sum up, we see that the NLP task is highly daunting, for to resolve the many ambiguities that arise in trying to analyze even a single sentence requires deep knowledge not just about language but also about the world. And so when HAL says, "I'm afraid I can't do that", NLP researchers are tempted to respond, "I'm afraid you might be right".
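The Japanese example turns on word segmentation, and it is worth seeing how quickly such ambiguity multiplies. The following is a minimal sketch (in Python; the run-together English string and the tiny lexicon are invented stand-ins for unspaced Japanese text and a real dictionary) that enumerates every way of carving an unspaced string into known words:

```python
# Toy illustration of segmentation ambiguity: enumerate all ways an
# unspaced character string can be divided into words from a lexicon.
# Unspaced English stands in here for Japanese, which is written
# without spaces between words; the lexicon is hypothetical.

def segmentations(text, lexicon):
    """Yield each division of `text` into a sequence of lexicon words."""
    if not text:
        yield []
        return
    for i in range(1, len(text) + 1):
        head = text[:i]
        if head in lexicon:
            for tail in segmentations(text[i:], lexicon):
                yield [head] + tail

lexicon = {"god", "is", "now", "here", "nowhere"}
for analysis in segmentations("godisnowhere", lexicon):
    print(" ".join(analysis))
# Prints both "god is now here" and "god is nowhere"; nothing in the
# string itself says which reading was intended.
```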
Firth things first

But before we assume that the only viable approach to NLP is a massive knowledge engineering project, let us go back to the early approaches to the problem. In the 1940s and 1950s, one prominent trend in linguistics was explicitly empirical and in particular distributional, as exemplified by the work of Zellig Harris (who started the first linguistics program in the USA). The idea was that correlations (co-occurrences) found in language data are important sources of information, or, as the influential linguist J. R. Firth declared in 1957, "You shall know a word by the company it keeps".
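Firth's slogan translates almost directly into computation. Here is a minimal sketch of the distributional idea (the miniature corpus and the window size are invented purely for illustration): represent each word by counts of the words appearing near it, and compare words by comparing those count vectors.

```python
# A cartoon of the distributional idea: characterize each word by the
# company it keeps (co-occurrence counts in a small window), and compare
# words via the cosine similarity of their count vectors.
from collections import Counter, defaultdict
from math import sqrt

corpus = ("the cat drinks milk . the dog drinks water . "
          "the cat chases the dog . the dog chases the cat .").split()

WINDOW = 2  # neighbors on each side that count as "company"
company = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            company[word][corpus[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    return dot / (sqrt(sum(n * n for n in u.values())) *
                  sqrt(sum(n * n for n in v.values())))

print(cosine(company["cat"], company["dog"]))     # high: similar company
print(cosine(company["cat"], company["chases"]))  # lower: different roles
```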
Such notions accord quite happily with ideas put forth by Claude Shannon in his landmark 1948 paper establishing the field of information theory; speaking from an engineering perspective, he identified the probability of a message's being chosen from among several alternatives, rather than the message's actual content, as its critical characteristic. Influenced by this work, Warren Weaver in 1949 proposed treating the problem of translating between languages as an application of cryptography (see [other chapter]), with one language viewed as an encrypted form of another. And, Alan Turing's work on cracking German codes during World War II led to the development of the Good-Turing formula, an important tool for computing certain statistical properties of language.

In yet a third area, 1941 saw the statisticians Frederick Mosteller and Frederick Williams address the question of whether it was Alexander Hamilton or James Madison who wrote some of the pseudonymous Federalist Papers. Unlike previous attempts, which were based on historical data and arguments, Mosteller and Williams used the patterns of word occurrences in the texts as evidence. This work led up to the famed Mosteller and Wallace statistical study which many consider to have settled the authorship of the disputed papers.

Thus, we see arising independently from a variety of fields the idea that language can be viewed from a data-driven, empirical perspective — and a data-driven perspective leads naturally to a computational perspective.

A "C" change

However, data-driven approaches fell out of favor in the late 1950's. One of the commonly cited factors is a 1957 argument by linguist (and student of Harris) Noam Chomsky, who believed that language behavior should be analyzed at a much deeper level than its surface statistics. He claimed,

    It is fair to assume that neither sentence (1) [Colorless green ideas sleep furiously] nor (2) [Furiously sleep ideas green colorless] ... has ever occurred. ... Hence, in any [computed] statistical model ... these sentences will be ruled out on identical grounds as equally "remote" from English. Yet (1), though nonsensical, is grammatical, while (2) is not.

That is, we humans know that sentence (1), which at least obeys (some) rules of grammar, is indeed more probable than (2), which is just word salad; but (the claim goes), since both sentences are so rare, they will have identical statistics — i.e., a frequency of zero — in any sample of English. Chomsky's criticism is essentially that data-driven approaches will always suffer from a lack of data, and hence are doomed to failure. This observation turned out to be remarkably prescient: even now, when billions of words of text are available on-line, perfectly reasonable phrases are not present. Thus, the so-called sparse data problem continues to be a serious challenge for statistical NLP even today.
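Chomsky's point is easy to reproduce mechanically. In the sketch below (the one-sentence "corpus" is a placeholder for any sample one likes), a raw relative-frequency estimator assigns both of his sentences exactly the same score, zero; smoothing techniques such as the Good-Turing formula mentioned earlier exist precisely to redistribute probability mass onto such unseen events.

```python
# Reproducing the "colorless green ideas" objection: under raw frequency
# counts, both sentences below are unseen, so a naive maximum-likelihood
# estimator scores them identically -- at zero.
from collections import Counter

# Placeholder corpus; imagine tens of millions of words here instead.
tokens = "the quick brown fox jumps over the lazy dog".split()
trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
total = sum(trigrams.values())

def raw_score(sentence):
    """Product of relative trigram frequencies, with no smoothing."""
    words = sentence.split()
    score = 1.0
    for t in zip(words, words[1:], words[2:]):
        score *= trigrams[t] / total  # unseen trigram -> factor of zero
    return score

print(raw_score("colorless green ideas sleep furiously"))  # 0.0
print(raw_score("furiously sleep ideas green colorless"))  # 0.0, identical
```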
And so, the effect of Chomsky's claim, together with some negative results for machine learning and a general lack of computing power at the time, was to cause researchers to turn away from empirical approaches and toward knowledge-based approaches where human experts encoded relevant information in computer-usable form.

This change in perspective led to several new lines of fundamental, interdisciplinary research. For example, Chomsky's work viewing language as a formal, mathematically-describable object has had lasting impact on both linguistics and computer science; indeed, the Chomsky hierarchy, a sequence of increasingly more powerful classes of grammars, is a staple of the undergraduate computer science curriculum. Conversely, the highly influential work of, among others, Kazimierz Adjukiewicz, Joachim Lambek, David K. Lewis, and Richard Montague adopted the lambda calculus, a fundamental concept in the study of programming languages, to model the semantics of natural languages.

The empiricists strike back

By the '80s, the tide had begun to shift once again, in part because of the work done by the speech recognition group at IBM. These researchers, influenced by ideas from information theory, explored the power of probabilistic models of language combined with access to much more sophisticated algorithmic and data resources than had previously been available. In the realm of speech recognition, their ideas form the core of the design of modern systems; and given the recent successes of such software — large-vocabulary continuous-speech recognition programs are now available on the market — it behooves us to examine how these systems work.

Given some acoustic signal, which we denote by the variable a, we can think of the speech recognition problem as that of transcription: determining what sentence w is most likely to have produced a. Probabilities arise because of the ever-present problem of ambiguity: as mentioned above, several word sequences, such as "your lie cured mother" versus "you like your mother", can give rise to similar spoken output. Therefore, modern speech recognition systems incorporate information both about the acoustic signal and the language behind the signal. More specifically, they rephrase the problem as determining which sentence w maximizes the product P(a|w) · P(w). The first term, P(a|w), measures how likely the acoustic signal would be if w were actually the sentence being uttered (again, we use probabilities because humans don't pronounce words the same way all the time). The second term, P(w), measures the probability of the sentence w itself; for example, as Chomsky noted, "colorless green ideas sleep furiously" is intuitively more likely to be uttered than the reversal of the phrase.

It is in computing this second term, P(w), where statistical NLP techniques come into play, since accurate estimation of these sentence probabilities requires developing probabilistic models of language. These models are acquired by processing tens of millions of words or more. This is by no means a simple procedure; even linguistically naive models require the use of sophisticated computational and statistical techniques because of the sparse data problem foreseen by Chomsky. But using probabilistic models, large datasets, and powerful learning algorithms (both for P(a|w) and P(w)) has led to our achieving the milestone of commercial-grade speech recognition products capable of handling continuous speech ranging over a large vocabulary.
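A drastically simplified sketch of that decoding rule follows. Every probability below is an invented stand-in (a real recognizer derives P(a|w) from acoustic models and P(w) from statistics over huge text collections), but the computation is the one just described: pick the sentence w that maximizes P(a|w) · P(w).

```python
# Cartoon of noisy-channel decoding for speech recognition: choose the
# transcription w maximizing P(a|w) * P(w).  All numbers are invented
# stand-ins for real acoustic- and language-model scores.

candidates = ["you like your mother", "your lie cured mother"]

# P(a|w): the two candidates sound nearly alike, so the acoustic model
# barely distinguishes them.
acoustic = {"you like your mother": 0.48,
            "your lie cured mother": 0.52}

# P(w): text statistics make the first sentence far more probable.
language = {"you like your mother": 1e-9,
            "your lie cured mother": 1e-13}

best = max(candidates, key=lambda w: acoustic[w] * language[w])
print(best)  # -> "you like your mother": the language model wins out
```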
But let us return to our story. Buoyed by the successes in speech recognition in the '70s and '80s (substantial performance gains over knowledge-based systems were posted), researchers began applying data-driven approaches to many problems in natural language processing, in a turn-around so extreme that it has been deemed a "revolution". Indeed, now empirical methods are used at all levels of language analysis. This is not just due to increased resources: a succession of breakthroughs in machine learning algorithms has allowed us to leverage existing resources much more effectively. At the same time, evidence from psychology shows that human learning may be more statistically-based than previously thought; for instance, work by Jenny Saffran, Richard Aslin, and Elissa Newport reveals that 8-month-old infants can learn to divide continuous speech into word segments based simply on the statistics of sounds following one another. Hence, it seems that the "revolution" is here to stay.
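The statistic the infants appear to exploit can be sketched in a few lines: the transitional probability P(next syllable | current syllable) tends to be high inside words and low across word boundaries. In the sketch below (the three nonsense "words" are invented, in the spirit of the stimuli used in such experiments), boundaries are posited wherever that probability dips.

```python
# Segmenting continuous "speech" from syllable statistics alone, in the
# spirit of Saffran, Aslin, and Newport (1996).  The nonsense words and
# the stream are invented for illustration.
import random
from collections import Counter

random.seed(0)
words = ["bidaku", "padoti", "golabu"]
stream = [random.choice(words) for _ in range(500)]   # no pauses, no cues
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

pairs = Counter(zip(syllables, syllables[1:]))
firsts = Counter(syllables[:-1])

def transition_prob(s1, s2):
    """Estimate P(s2 | s1) from the syllable stream itself."""
    return pairs[(s1, s2)] / firsts[s1]

# Within a word the estimated probability is 1.0 here; across word
# boundaries it is roughly 1/3, so a dip marks a boundary.
segmented, current = [], syllables[0]
for s1, s2 in zip(syllables, syllables[1:]):
    if transition_prob(s1, s2) < 0.9:
        segmented.append(current)
        current = s2
    else:
        current += s2

print(segmented[:6])  # six items, each one of the three hidden words
```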
Of course, we must not go overboard and mistakenly conclude that the successes of statistical NLP render linguistics irrelevant (rash statements to this effect have been made in the past, e.g., the notorious remark, "Every time I fire a linguist, my performance goes up"). The information and insight that linguists, psychologists, and others have gathered about language is invaluable in creating high-performance broad-domain language understanding systems; for instance, in the speech recognition setting described above, a better understanding of language structure can lead to better language models. Moreover, truly interdisciplinary research has furthered our understanding of the human language faculty. One important example of this is the development of the head-driven phrase structure grammar (HPSG) formalism — this is a way of analyzing natural language utterances that truly marries deep linguistic information with computer science mechanisms, such as unification and recursive data-types, for representing and propagating this information throughout the utterance's structure.

In sum, computational techniques and data-driven methods are now an integral part both of building systems capable of handling language in a domain-independent, flexible, and graceful way, and of improving our understanding of language itself.

Acknowledgments

Thanks to the members of the CSTB Fundamentals of Computer Science study — and especially Alan Biermann — for their helpful feedback. Also, thanks to Alex Acero, Takako Aikawa, Mike Bailey, Regina Barzilay, Eric Brill, Chris Brockett, Claire Cardie, Joshua Goodman, Ed Hovy, Rebecca Hwa, John Lafferty, Bob Moore, Greg Morrisett, Fernando Pereira, Hisami Suzuki, and many others for stimulating discussions and very useful comments. Rie Kubota Ando provided the Japanese example. The use of the term "revolution" to describe the re-ascendance of statistical methods comes from Julia Hirschberg's 1998 invited address to the American Association for Artificial Intelligence. I learned of the McDonnell-Douglas ad and some of its analyses from a class run by Stuart Shieber. All errors are mine alone. This paper is based upon work supported in part by the National Science Foundation under ITR/IM grant IIS-0081334 and a Sloan Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Sloan Foundation.

References

Adjukiewicz, Kazimierz. 1935. Die syntaktische Konnexität. Studia Philosophica, 1:1–27. English translation available in Storrs McCall, editor, Polish Logic 1920–1939, Clarendon Press (1967).

Chomsky, Noam. 1957. Syntactic Structures. Number IV in Janua Linguarum. Mouton, The Hague, The Netherlands.

Firth, John Rupert. 1957. A synopsis of linguistic theory 1930–1955. In the Philological Society's Studies in Linguistic Analysis. Blackwell, Oxford, pages 1–32. Reprinted in Selected Papers of J. R. Firth, edited by F. Palmer. Longman, 1968.

Good, Irving J. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3,4):237–264.

Harris, Zellig. 1951. Methods in Structural Linguistics. University of Chicago Press. Reprinted by Phoenix Books in 1960 under the title Structural Linguistics.

Lambek, Joachim. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154–169.

Lewis, David K. 1970. General semantics. Synthèse, 22:18–67.

Montague, Richard. 1974. Formal Philosophy: Selected Papers of Richard Montague. Yale University Press. Edited by Richmond H. Thomason.

Mosteller, Frederick and David L. Wallace. 1984. Applied Bayesian and Classical Inference: The Case of the Federalist Papers. Springer-Verlag. First edition published in 1964 under the title Inference and Disputed Authorship: The Federalist.

Pollard, Carl and Ivan Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press and CSLI Publications.

Saffran, Jenny R., Richard N. Aslin, and Elissa L. Newport. 1996. Statistical learning by 8-month-old infants. Science, 274(5294):1926–1928, December.

Shannon, Claude E. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379–423 and 623–656.

Turing, Alan M. 1950. Computing machinery and intelligence. Mind, LIX:433–460.

Weaver, Warren. 1949. Translation. Memorandum. Reprinted in W. N. Locke and A. D. Booth, eds., Machine Translation of Languages: Fourteen Essays, MIT Press, 1955.

For further reading

Charniak, Eugene. 1993. Statistical Language Learning. MIT Press.

Jurafsky, Daniel and James H. Martin. 2000. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall. Contributing writers: Andrew Kehler, Keith Vander Linden, and Nigel Ward.

Manning, Christopher D. and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. The MIT Press.
