
Spoken Dialog Management for Robots

Nicholas Roy, Joelle Pineau, and Sebastian Thrun
Robotics Institute, Carnegie Mellon University

Pittsburgh, PA 15217

nicholas.roy, joelle.pineau, sebastian.thrun

Abstract

Spoken dialog managers have benefited from stochastic planners such as MDPs. However, so far, MDPs do not handle noisy and ambiguous utterances from the user well. We address this problem by inverting the notion of dialog state; the state represents the user's intentions, rather than the system state. This approach allows for simple and intuitive dialog description at the sacrifice of state observability.

We use a POMDP-style approach to generate dialog policies; however, the intractability of POMDP solutions requires an approximate solution. We instead augment the state representation of the MDP by providing the system with the maximum likelihood state and a compressed representation of the belief state. In this way, the system can approximate the optimal POMDP solution, but at MDP-like speeds.

Introduction

The development of automatic speech recognition has made possible more natural human-computer interaction, and more recently, rich interaction between humans and robots. Speech recognition and speech understanding, however, are not yet at the point where a computer can reliably extract the intended meaning from every human utterance. Human speech can be both noisy and ambiguous, and many real-world systems, such as a robot in a workplace, must also be speaker-independent. Despite these difficulties, any system that manages human-robot dialogs must be able to perform reliably even with noisy and stochastic speech input.

Recent research in dialog management has shown that Markov Decision Processes (MDPs) can be useful for generating dialog policies (Young 1990; Levin, Pieraccini, & Eckert 1998): the actions are the speech productions from the system, the state is represented by the dialog as a whole, and the goal is to maximize some reward, such as fulfilling some request of the user. However, the correct way to represent the state of the dialog is still an open problem (Singh et al. 1999). A common solution is to restrict the system to a single goal, for example booking a flight in an automated travel agent system; the system state is described in terms of how close the agent is to being able to book the flight.

Such systems suffer from three principal problems. Firstly, most existing dialog systems are built for a single purpose over a single task domain, for example retrieving e-mail or making travel arrangements (Levin, Pieraccini, & Eckert 1998). While it is not impossible to extend these systems to multiple domains, the interaction between different task domains is difficult to model. Secondly, system-initiated strategies perform best with these approaches; however, mixed-initiative strategies are a more natural model for human-robot interaction. Finally, most existing dialog systems do not model confidences on the human utterances, and therefore do not account for the reliability of utterances while performing a strategy. Some systems do use the log-likelihood values for speech utterances; however, these values are only thresholded to decide whether or not the utterance needs confirmation (Niimi & Kobayashi 1996; Singh et al. 1999).

The real problem that lies at the heart of these three issues is observability. The ultimate goal of a dialog system is to satisfy a user request; however, what the user wants is at best only partially observable. A conventional MDP must know the current state at all times; therefore the state has to be wholly contained in the system. System-initiated dialogs are successful because there is no uncertainty in determining when the user wants something, and a single task-domain system works well because the relevant goals and actions are easily specified.

In this paper, we invert the conventional notion of state in a dialog. The world is viewed as partially unobservable: the underlying state is the intention of the user with respect to the dialog. The only observations about the user's state are the speech utterances given by the speech recognition system. By accepting the partial observability of the world, the dialog problem becomes one that is addressed by Partially Observable Markov Decision Processes (POMDPs). Several POMDP algorithms are the natural choice for policy generation (Kaelbling, Littman, & Cassandra 1998; Cassandra, Littman, & Zhang 1997). However, solving real-world dialog scenarios is computationally intractable for full-blown POMDP solvers, as the complexity is doubly exponential in the number of states. We therefore use an algorithm for finding approximate solutions to POMDP-style problems and apply it to dialog management. This algorithm, the Augmented MDP, was developed for mobile robot navigation in the context of Coastal Navigation (Roy & Thrun 1999), and operates by augmenting the state description with a compression of the current belief state. By representing the belief state succinctly by its entropy, belief-space planning can be approximated without the expected complexity.

In the first section of this paper, we develop the model of dialog interaction, including the effects of speech recognition and speech production on the state representation. This model allows for a more natural description of dialog problems, and in particular allows for intuitive handling of noisy and ambiguous dialogs. Few existing dialog systems can handle ambiguous input, typically relying on natural language processing to resolve semantic ambiguities (Aust & Ney 1998). Secondly, we describe the Augmented MDP method used to find the optimal policy for a given dialog system. While this algorithm was initially developed for mobile robot navigation, the dialog management problem has several features that require modification of the original algorithm. We conclude with a description of an example problem domain and compare the performance of the Augmented MDP to conventional MDP and POMDP dialog strategies.

The application for which we are developing this dialog model is a mobile robot with knowledge from several domains, interacting with many people over time. While there has been some development of speech recognition and understanding on robots (Torrance 1994; Asoh, Hara, & Matsui 1998), most research has been restricted to the desktop or telephone domains. The reliability of the audio signal on a mobile robot, coupled with the expectations of natural interaction that people have with more anthropomorphic robots, increases the demands placed on the dialog manager.

Dialog Systems and POMDPs

A Partially Observable Markov Decision Process (POMDP) is a natural way of modelling dialog processes, especially when the state of the system is viewed as the state of the user. The partial observability capabilities of a POMDP policy allow the dialog planner to recover from noisy or ambiguous utterances in a natural and autonomous way. At no time does the machine interpreter have any direct knowledge of the state of the user, i.e., what the user wants; it can only infer this state from the (noisy) speech of the user. The POMDP framework provides exactly the right mechanism for modelling uncertainty about what the user is trying to accomplish.

The POMDP consists of an underlying, unobservable Markov Decision Process. The MDP is specified by:

a set of states S
a set of actions A
a set of transition probabilities p(s' | s, a)
a set of rewards R(s, a)
an initial state s0

The transition probabilities form a structure over the set of states,connecting the states in a directed graph with arcs between states with non-zero transition probabilities.

The POMDP adds:

a set of observations O
a set of observation probabilities p(o | s)

and replaces

the initial state s0 with an initial belief b0(s), and
the set of rewards with rewards conditioned on observations as well: R(s, a, o).

The policy is one that generates an action based on the current belief. The optimal strategy for a POMDP is one that maximizes the expected reward (either to a finite or infinite horizon, possibly with discounting). Unfortunately, POMDPs for all but the most trivial problems are computationally intractable.
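To make the components listed above concrete, the tuple can be collected in a small container. This is a minimal sketch; the field names and dict-based encoding are illustrative assumptions, not the representation used by the system in the paper.

```python
from dataclasses import dataclass

@dataclass
class POMDP:
    states: list          # S
    actions: list         # A
    trans: dict           # (s, a) -> {s': p(s' | s, a)}
    rewards: dict         # (s, a, o) -> reward, conditioned on observations
    observations: list    # O
    obs_probs: dict       # (s, o) -> p(o | s)
    belief: dict          # initial belief b0: s -> p(s)

# Tiny two-state dialog example, with made-up numbers.
model = POMDP(
    states=["no_request", "want_time"],
    actions=["ask", "say_time"],
    trans={("no_request", "ask"): {"want_time": 0.5, "no_request": 0.5}},
    rewards={("want_time", "say_time", "heard_time"): 10},
    observations=["heard_time", "silence"],
    obs_probs={("want_time", "heard_time"): 0.8, ("want_time", "silence"): 0.2},
    belief={"no_request": 1.0},
)
```

Encoding the transition and observation models as sparse dicts keyed on pairs mirrors the directed-graph structure described below: only non-zero entries are stored.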

We can, however, simplify the problem by noticing that the uncertainty, or belief state of the system, tends to have a certain structure. The uncertainty that the system has is usually domain-specific. For example, it may be likely that an office-delivery system can mis-hear the name of a package recipient, but it is unlikely that the system will confuse the name of the mail recipient for a request to get coffee. In this manner, the beliefs that the system instantiating the POMDP is likely to have are typically well-localised and often uni-modal. We can strengthen the uni-modal assumption by imposing a topology on the model that connects states that are easily confused.

By making the uni-modal, localised assumption about the uncertainty, it becomes possible to summarise the belief state as its most likely state, together with the entropy of the belief state. The important point to note is that the entropy of the belief state approximates a sufficient statistic¹. Given this assumption, we can do MDP-style planning with this representation.
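The summarisation step can be sketched in a few lines; the dict-based belief representation is an assumption made for illustration.

```python
import math

def summarize_belief(belief):
    """Compress a belief (dict: state -> probability) into the pair
    (most likely state, entropy), as in the Augmented MDP."""
    s_ml = max(belief, key=belief.get)                      # maximum-likelihood state
    entropy = -sum(p * math.log(p) for p in belief.values() if p > 0)
    return s_ml, entropy

# A peaked belief has low entropy; a confused belief has high entropy.
peaked = {"want_time": 0.9, "want_weather": 0.1}
confused = {"want_time": 0.5, "want_weather": 0.5}
```

A certain belief (all mass on one state) compresses to entropy zero, so the pair degenerates to the familiar fully observable MDP state.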

The Augmented MDP

We represent the belief state b of the system as an ordered pair ⟨s_b, H(b)⟩, where s_b is the most likely state in the MDP, and H(b) is the entropy of the current belief:

    s_b = argmax_s b(s)    (1)

The second half of the state pair is the entropy of the probability distribution over the states. This is computed using:

    H(b) = − Σ_{s∈S} b(s) log b(s)    (2)

During planning, we can model the effect of actions and observations by reconstructing the full belief from the ⟨s_b, H(b)⟩ state (under the uni-modal assumption above), applying the actions and observations, and recompressing the full belief back down to its ⟨s_b, H(b)⟩ representation.

Modelling Actions and Observations

We model the effect of an action a (in the case of dialog management, taking actions usually corresponds to saying something to the user) as

    b'(s') = Σ_{s∈S} p(s' | s, a) b(s)    (3)

    ⟨s_{b'}, H(b')⟩, with s_{b'} = argmax_{s'} b'(s')    (4)

where b is the prior distribution over the states before action a, and b' is the posterior. We summarize the posterior as ⟨s_{b'}, H(b')⟩, computed from the prior ⟨s_b, H(b)⟩ and action a.

We compute the effect of receiving an utterance o:

    b'(s) = α p(o | s) b(s)    (5)

    ⟨s_{b'}, H(b')⟩, with s_{b'} = argmax_s b'(s)    (6)

where b'(s) is the posterior of the state distribution after the observation is made and p(o | s) is the probability of an utterance o given a state s, given by the POMDP model of the dialog system². α is simply a normalizer to ensure the probabilities sum to 1.
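Under a dict-based belief encoding, the action and observation updates (equations 3 and 5) can be sketched as follows; the function names and sparse-dict interfaces are assumptions for illustration.

```python
def action_update(belief, action, trans):
    # Equation (3): b'(s') = sum_s p(s' | s, a) b(s)
    posterior = {}
    for s, p in belief.items():
        for s2, p_t in trans.get((s, action), {}).items():
            posterior[s2] = posterior.get(s2, 0.0) + p_t * p
    return posterior

def observation_update(belief, utterance, obs_probs):
    # Equation (5): b'(s) = alpha * p(o | s) * b(s), with alpha a normalizer
    unnorm = {s: obs_probs.get((s, utterance), 0.0) * p
              for s, p in belief.items()}
    alpha = sum(unnorm.values())
    return {s: p / alpha for s, p in unnorm.items()}

# Example: an uncertain belief sharpened by a moderately reliable utterance.
b = {"want_time": 0.5, "want_weather": 0.5}
obs_probs = {("want_time", "heard_time"): 0.8,
             ("want_weather", "heard_time"): 0.2}
posterior = observation_update(b, "heard_time", obs_probs)
```

After both updates, the full posterior would be recompressed to ⟨most likely state, entropy⟩ as described above.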

In order to generate the optimal policy using the Augmented MDP framework, two modifications have to be made to the model specification. One is to maximise the reward for state-action-observation triplets with minimum belief entropy. We modify the reward as follows:

Figure 1: Florence Nightingale, the prototype nursing home robot used in these experiments.

The task domains include:

Time
Weather (current and for tomorrow)
TV schedules for different channels (ABC, NBC, CBS)
Delivering video messages (facemail)

The information about the weather and TV schedules is downloaded on request from the web.

²Recall that during policy execution, the observation probability is given by the speech recognition system itself.

serious (having the robot leave the room at the wrong time, or travel to the wrong destination).

An Example Dialog

Table 2 shows an example dialog from the Augmented MDP system. The left-hand columns are the emitted observation and the entropy of the system that resulted. The most interesting lines are the fourth and fifth lines of the dialog: although the true state of the user is want_facemail, the observation made is that the user is in state want_time. The conventional MDP would take the observation as given, but the observation from the speech recognition system was highly uncertain, so the entropy of the belief becomes quite high (2.0). The entropy stays high until a more certain observation is received. The system confirms the desired state before moving on. We see a similar action a little later; here, because the cost of mis-delivering a facemail is very high, the system takes a confirmation action even when the entropy is relatively low.
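The confirmation behaviour in this dialog can be sketched as a simple entropy-threshold rule: confirm when the belief is uncertain, and lower the threshold for costly actions such as delivering facemail. The thresholds, cost values, and function names below are illustrative assumptions, not the policy the Augmented MDP actually computes.

```python
def choose_action(best_state, entropy, act_for, costs, base_threshold=1.0):
    """Confirm before acting when the belief entropy is above a
    threshold; costly actions get a proportionally lower threshold."""
    action = act_for[best_state]
    threshold = base_threshold / costs.get(action, 1.0)  # costly => confirm earlier
    if entropy > threshold:
        return ("confirm", best_state)
    return ("execute", action)

# Hypothetical mapping from user states to system actions and action costs.
act_for = {"want_time": "say_time", "want_facemail": "deliver_facemail"}
costs = {"deliver_facemail": 5.0}
```

With these numbers, a belief entropy of 0.3 triggers a confirmation before delivering facemail (threshold 0.2) but not before saying the time (threshold 1.0), matching the behaviour described in the text.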

Experimental Results

We compared the performance of the three algorithms (conventional MDP, Augmented MDP and full POMDP) over the example domain. The metric used was the total reward accumulated over the course of an extended test. In order to perform this full test, the observations and states from the underlying MDP were generated stochastically from the model and then given to the policy. The action taken by the policy was returned to the model, and the policy was rewarded based on the state-action-observation triplet. The experiments were run for a total of 100 dialogs, where each dialog is considered to be a sequence of observation-action utterances from the start state no_request to the goal state request_done. Once the final state was reached, the system transitioned back to the start state deterministically.
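The evaluation loop described above can be sketched as follows; the `policy` and `model` interfaces are assumptions made for this sketch, not the paper's test harness.

```python
import random

def evaluate_policy(policy, model, n_dialogs=100, seed=0):
    """Accumulate reward over repeated dialogs: sample states and
    observations stochastically from the model, query the policy, and
    reward each (state, action, observation) triplet."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dialogs):
        state = model.start_state                    # no_request
        while state != model.goal_state:             # request_done
            obs = model.sample_observation(state, rng)
            action = policy(obs)
            total += model.reward(state, action, obs)
            state = model.sample_next_state(state, action, rng)
        # Goal reached: deterministic reset to the start state.
    return total
```

Fixing the random seed makes the comparison between strategies repeatable on the same sampled dialog sequences.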

The Restricted State Space Problem

The POMDP policy was generated using the Incremental Pruning algorithm (Cassandra, Littman, & Zhang 1997), using code provided by Tony Cassandra. The solver was unable to complete a solution for the full state space, so we used a restricted problem, with only 7 states and 2 task domains from table 1: the current time and the current weather.

Figure 3 shows the performance of the three algorithms over the course of 100 dialogs. Notice that the POMDP strategy outperformed both the conventional and Augmented MDP; it accumulated the most reward, and fastest. This good performance of the POMDP is not surprising, but the time to compute this strategy is high: 729 seconds. In terms of time to execute the strategies, although the Augmented MDP eventually accumulates more reward than the conventional planner, it takes longer to do so, because it is delayed by low-cost confirming actions.

In order to get a fairer picture of the three algorithms, the time was normalized by the length of each dialog iteration (a sequence from no_request to request_done). By disregarding how long it takes each strategy to fulfill a user request, the Augmented MDP is no longer penalized as heavily for too many confirming actions.

Figure 2: A graph of the basic Markov Decision Process underlying the dialog manager. The states run from No Request through Request begun to Request Completed, and include Want Time, Want Weather Info (current and tomorrow's weather), Want TV Info (ABC, NBC, CBS), and the facemail states (want to send facemail to Sebastian, Mike M. or Mike V., recording facemail, facemail sent).

Table 2: An example dialog from the Augmented MDP, with columns for the observation, belief entropy, action, reward, and true state.

Figure 3: A comparison of the reward gained over time for the Augmented MDP vs. the conventional MDP vs. the POMDP for the 7 state problem. The time is measured in number of actions.

Figure 4 shows the performance of the three algorithms as a function of time measured in dialogs. The POMDP strategy still outperforms the other two, but the ability of the Augmented MDP is more evident here: the Augmented MDP is accumulating reward at a rate that is closer to the POMDP strategy than to the conventional MDP. The sub-optimality of the Augmented MDP compared to the POMDP algorithm shows in that it is too conservative; the POMDP algorithm does not spend as much time confirming responses.

Figure 4: A comparison of the reward gained over time for the Augmented MDP vs. the conventional MDP for the 7 state problem. In this case, the time is measured in dialogs, or iterations of satisfying user requests.

The Full State Space Problem

Figure 5 demonstrates the algorithms on the full dialog model as given in figure 2. Because of the number of states, no POMDP solution could be computed for this problem.

Figure 5: A comparison of the reward gained over time for the Augmented MDP vs. the conventional MDP for the 17 state problem. Again, the time is measured in dialogs.

The Augmented MDP clearly outperforms the conventional MDP strategy, as it more than triples the total accumulated reward over the lifetime of the strategies, although at the cost of taking longer to reach the goal state in each dialog. Table 3 breaks down the numbers in more detail. The average reward for the Augmented MDP is 18.6 per action, which is the maximum reward for most actions, suggesting that the Augmented MDP is taking the right action about 95% of the time. Furthermore, the average reward per dialog for the Augmented MDP is 230, compared to 49.7 for the conventional MDP, which suggests that the conventional MDP is making a large number of mistakes in each dialog.

Finally, the standard deviation for the Augmented MDP is much narrower, suggesting that this algorithm is getting its rewards much more consistently than the conventional MDP.

                        Conventional MDP     Augmented MDP
Average Reward          3.8 +/- 67.2         18.6
Average Dialog Reward   49.7 +/- 193.7       230

Table 3: A comparison of the rewards accumulated for the two algorithms using the full model.

Figure 6 demonstrates the speed of the Augmented MDP algorithm. The graph shows the time to compute the optimal policy as a function of state space size. The number of possible actions and observations also changes with the state space size and is roughly equal to it. As the graph depicts, the conventional MDP solves the problems almost instantaneously. However, the POMDP takes orders of magnitude longer: after 24 hours of computation on a 400 MHz Pentium II, only 2 episodes of value iteration had occurred. While the Augmented MDP did not have the near-instant solution time for a policy, it took only 8 seconds for the full 17 state policy, and figure 6 suggests that the algorithm may scale well to larger domains.

Figure 6: A comparison of times to find the policy, as a function of the number of states in the problem, for the conventional MDP, Augmented MDP and POMDP solvers. The POMDP was unable to find a solution for the 7 and 11 state problems.

Conclusion

This paper discusses a novel way to view the dialog management problem. The domain is represented as the partially observable state of the user, where the observations are speech utterances from the user. This formulation clearly requires a POMDP-style solution; however, finding a POMDP solution for any but the most trivial dialog problem is not computationally feasible. Therefore, instead of a full belief state, we use an augmented MDP state representation for approximating the optimal policy. This representation consists of the maximum-likelihood state augmented with a compression of the belief state, represented by the entropy of the belief. This approach, the Augmented MDP, was developed for mobile robots in the context of Coastal Navigation. The Augmented MDP finds a solution that quantitatively outperforms the conventional MDP, while dramatically reducing the time to solution compared to a full POMDP (linear vs. exponential in the number of states). The Augmented MDP allows mixed-initiative and user-initiative dialogs, and handles noisy observations from the user appropriately. In particular, ambiguous observations are also handled without full-blown natural language processing. The policies generated by the Augmented MDP are sub-optimal compared to the POMDP in terms of reward gained per action; however, the Augmented MDP substantially outperforms conventional MDPs without the computational cost associated with POMDP solutions.

While the results of the Augmented MDP approach to the dialog system are promising, a number of improvements are needed. The Augmented MDP is overly cautious, refusing to commit to a particular course of action until it is completely certain. This could be avoided by having some non-linear reward structure, where achieving the goal with some arbitrarily low entropy is equivalent to achieving the goal with zero entropy.

Secondly, the compression of the belief state as the single entropy statistic is overly optimistic about the kinds of belief states that are likely to be encountered. In particular, it models ambiguities between two distinct task domains very poorly, as compared with ambiguities between two states in a single domain. The right way to compress the belief state is still an open question; multiple statistics are almost certainly needed, but which and how many is not clear.

Another problem is that the policy is relatively sensitive to the parameters of the model. At the moment, the model parameters are set by hand. While learning the parameters from scratch for a full POMDP is probably unnecessary, automatic tuning of the model parameters would definitely add to the utility of the model. Furthermore, the mechanism for computing the changes in entropy as a result of observations could also be learned. By generating synthetic speech data, it should be possible to learn the effects of different observations on the entropy, and dispense with a number of assumptions about the prior belief state.

Acknowledgements

The authors would like to thank Tom Mitchell for his advice and support of this research.

Kevin Lenzo and Mathur Ravishankar made our use of Sphinx possible, answered requests for information and made bug fixes willingly. Tony Cassandra was extremely helpful in distributing his POMDP code to us, and answering promptly any questions we had. The assistance of the Nursebot team is also gratefully acknowledged, including the members from the School of Nursing and the Department of Computer Science Intelligent Systems at the University of Pittsburgh.

This research was supported in part by Le Fonds pour la Formation de Chercheurs et l'Aide à la Recherche (Fonds FCAR).

References

Asoh, H.; Hara, I.; and Matsui, T. 1998. Structured dynamic multi-agent architecture for controlling mobile office-conversant robot. In Proc. IEEE ICRA, volume 1.

Aust, H., and Ney, H. 1998. Evaluating dialog systems used in the real world. In Proc. IEEE ICASSP, volume 2, 1053-1056.

Black, A.; Taylor, P.; and Caley, R. 1999. The Festival Speech Synthesis System, 1.4 edition.

Cassandra, A.; Littman, M. L.; and Zhang, N. L. 1997. Incremental pruning: A simple, fast, exact algorithm for partially observable Markov decision processes. In Proc. 13th Ann. Conf. on Uncertainty in Artificial Intelligence (UAI-97), 54-61.

Kaelbling, L. P.; Littman, M. L.; and Cassandra, A. R. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence 101:99-134.

Levin, E.; Pieraccini, R.; and Eckert, W. 1998. Using Markov decision process for learning dialogue strategies. In Proc. ICASSP.

Niimi, Y., and Kobayashi, Y. 1996. Dialog control strategy based on the reliability of speech recognition. In Proc. ICSLP.

Ravishankar, M. 1996. Efficient Algorithms for Speech Recognition. Ph.D. Dissertation, Carnegie Mellon.

Roy, N., and Thrun, S. 1999. Coastal navigation with mobile robots. In NIPS, volume 11.

Singh, S.; Kearns, M.; Litman, D.; and Walker, M. 1999. Reinforcement learning for spoken dialog systems. In NIPS.

Torrance, M. C. 1994. Natural communication with robots. Master's thesis, MIT Dept. of E.E. and C.S.

Young, S. R. 1990. Use of dialogue, pragmatics and semantics to enhance speech recognition. Speech Communication 9(5-6).

雅思历年真题口语题目汇总

雅思历年真题口语题目汇总 version 01old person describe an old man influenced you 1.who was he 2.when did you know him 3.what he did and explain why he influenced you part3 1.老人的经验有什么问题存在? 2.喜欢什么艺术品? 3.给老人拍照片时候注意什么呢? 4.你们国家对老年人是什么态度? 5.你认为这个社会在哪些方面对老年人不太好? 6.老人在你们家有什么影响? 7.你认为老年人在看问题的时候跟年轻人有什么不一样? 8.他们对大家有什么影响? version 02 city 1.where it is located? 2. what special for you? 3. why you want to stay there? part 3 1.please compare 100 hundred years old city and modern city and what predict about the city in the future. 2.上海是个怎样的城市 3.都有那些著名建筑

4.你想为这个城市做些什么? 5.有哪些现象有待提高或者那些提倡 version 03 room part2: 1.what's your favorite room in your home 2.what it likes you live 3.what you do in the room normally and explain why you like it part3: 1.你认识你的邻居吗? 2.城市里的房子和乡村有什么不同? 2003年9月换题后的口语topic Old person Describe a older person you know You should say:Who he or she is How you know him or her How he or she is And explain what infection he or she give you and in what aspect Further question: 1、你们国家对老年人是什么态度? 2、你认为这个社会在哪些方面对老年人不太好? 3、老人在你们家有什么影响? 4、你认为老年人在看问题的时候跟年轻人有什么不一样? 5、他们对大家有什么影响?

完整版初中英语语法大全知识点总结

英语语法大全 初中英语语法 学习提纲 一、词类、句子成分和构词法: 1、词类:英语词类分十种: 名词、形容词、代词、数词、冠词、动词、副词、介词、连词、感叹词。 1、名词(n.):表示人、事物、地点或抽象概念的名称。如:boy, morning, bag, ball, class, orange. :who, she, you, it . 主要用来代替名词。如): 2、代词(pron.3、形容词(adj..):表示人或事物的性质或特征。如:good, right, white, orange . 4、数词(num.):表示数目或事物的顺序。如:one, two, three, first, second, third, fourth. 5、动词(v.):表示动作或状态。如:am, is,are,have,see . 6、副词(adv.):修饰动词、形容词或其他副词,说明时间、地点、程度等。如:now, very, here, often, quietly, slowly. 7、冠词(art..):用在名词前,帮助说明名词。如:a, an, the. 8、介词(prep.):表示它后面的名词或代词与其他句子成分的关系。如in, on, from, above, behind. 9、连词(conj.):用来连接词、短语或句子。如and, but, before . 10、感叹词(interj..)表示喜、怒、哀、乐等感情。如:oh, well, hi, hello. 2、句子成分:英语句子成分分为七种:主语、谓语、宾语、定语、状语、表语、宾语补足语。 1、主语是句子所要说的人或事物,回答是“谁”或者“什么”。通常用名词或代词担任。如:I'm Miss Green.(我是格林小姐) 2、谓语动词说明主语的动作或状态,回答“做(什么)”。主要由动词担任。如:Jack cleans the room every day. (杰克每天打扫房间) 3、表语在系动词之后,说明主语的身份或特征,回答是“什么”或者“怎么样”。通常由名词、 代词或形容词担任。如:My name is Ping ping .(我的名字叫萍萍) 4、宾语表示及物动词的对象或结果,回答做的是“什么”。通常由名词或代词担任。如:He can spell the word.(他能拼这个词) 有些及物动词带有两个宾语,一个指物,一个指人。指物的叫直接宾语,指人的叫间接宾语。间接 宾语一般放在直接宾语的前面。如:He wrote me a letter . (他给我写了 一封信) 有时可把介词to或for加在间接宾语前构成短语,放在直接宾语后面,来强调间接宾语。如:He wrote a letter to me . (他给我写了一封信)

初中英语中主谓一致详解

主谓一致详解 【基础知识】 主谓一致指“人称”和“数”方面的一致关系。对大多数人来说,往往会在掌握主语和随后的谓语动词之间的一致问题上遇到困难。一般情况下,主谓之间的一致关系由以下三个原则支配: 语法一致原则(grammatical concord) 意义一致原则(notional concord) 就近原则(principle of proximity) (一)语法一致原则 用作主语的名词词组中心词和谓语动词在单、复数形式上的一致,就是语法一致。也就是说,如果名词中心词是单数,动词用单数形式;如果名词中心词是复数,动词用复数形式。例如: This table is a genuine antique. Both parties have their own advantages. Her job has something to do with computers. She wants to go home. They are divorcing each other. Mary was watching herself in the mirror. The bird built a nest. Susan comes home every week-end. (二)意义一致原则 有时,主语和谓语动词的一致关系取决于主语的单、复数意义,而不是语法上的单、复数形式,这样的一致关系就是意义一致。例如: Democratic government gradually take the place of an all-powerful monarchy. A barracks was attacked by the guerilla. Mumps is a kind of infectious disease. The United States is a developed country. It is the remains of a ruined palace. The archives was lost.

雅思口语真题

IELTS SPEAKING TEST Part 1 Work or study 1.Do you work or are you a student? 2.What subject(s) are you studying? 3.Why did you choose to study that subject? 4.(For high school) Why did you choose those subjects? 5.Is that a popular subject (to study) in your country? 6.Do you think it's popular because people want to gain knowledge or is there some other reason? 7.What are the most popular subjects (= university degree courses = majors) in China? 8.Did your family help you choose that course? 9.What school (or university) do you go to? 10.Why did you choose that university (or, school)? 11.Do you have any recreational or entertainment activities at your school/university? 12.How do you like your subject? 13.What do you like about your subject? 14.Do you think it's important to choose a subject you like? (Why?) 15.(For university students) What are your favourite classes/ courses/ subjects at university? 16.(For high school students) What's your favourite subject at school? 17.Is your subject interesting? (Why? / Why not?) 18.Do you think it's important to choose a subject you are interested in? (Why?) 19.What's the most interesting part of your subject? 20.(For high school) What's the most interesting of your subjects at school? 21.What are your future work plans? (after you graduate) 22.Do you think what you are studying now will be very useful (or, relevant or, important) for this type of work? 23.How will you (or, how do you plan to) get the job you want? 24.Do you think it will be easy to find that kind of work? 25.Why are you taking the IELTS test? 26.In addition to gaining knowledge, what other ways have you benefited from your school/university experience? Accommodation 1.What kind of housing/accommodation do you live in? 2.Who do you live with? 3.How long have you lived there? 4.Do you plan to live there for a long time? 5.Can you describe the place where you live? 6.Which room does your family spend most of the time in? 
7.What do you usually do in your house/flat/room? 8.Are the transport facilities to your home very good? 9.Do you prefer living in a house or a flat? 10.Please describe the room you live in. 11.Is there anything you don't like about the place where you live?

雅思历年真题口语题目汇总

雅思历年真题口语题目汇总 version01old person describe an old man influenced you 1.who was he 2.when did you know him 3.what he did and explain why he influeced you part3 1.老人的经验有什么问题存在? 2.喜欢什么艺术品? 3.给老人拍照片时候注意什么呢? 4.你们国家对老年人是什么态度? 5.你认为这个社会在哪些方面对老年人不太好? 6.老人在你们家有什么影响? 7.你认为老年人在看问题的时候跟年轻人有什么不一样? 8.他们对大家有什么影响? version02city 1.where it is located? 2.what special for you? 3.why you want to stay there? part3 1.please compare100hundred years old city and modern city and what predict about the city in the futu re. 2.上海是个怎样的城市 新东方批改网(https://www.360docs.net/doc/d2771791.html,),在线雅思作文批改,雅思口语批改。语法纠错、恶补,制定考试计划,

3.都有那些著名建筑 4.你想为这个城市做些什么? 5.有哪些现象有待提高或者那些提倡 version03room part2: 1.what's your favorite room in your home 2.what it likes you live 3.what you do in the room normally and explain why you like it part3: 1.你认识你的邻居吗? 2.城市里的房子和乡村有什么不同? 2003年9月换题后的口语topic Old person Describe a older person you know You should say:Who he or she is How you know him or her How he or she is And explain what infection he or she give you and in what aspect Further question: 1、你们国家对老年人是什么态度? 2、你认为这个社会在哪些方面对老年人不太好? 3、老人在你们家有什么影响? 新东方批改网(https://www.360docs.net/doc/d2771791.html,),在线雅思作文批改,雅思口语批改。语法纠错、恶补,制定考试计划,

六年级英语语法知识点汇总

六年级语法总复习 一、词汇 (一)一般过去时态 一般过去时态表示在过去的某个时间发生的动作或存在的状态,常和表示过去的时间状语连用。例如yesterday, last weekend ,last Saturday ,等连用。基本句型:主语+动词的过去式+其他。例句——What did you do last weekend?你上周做什么了? ——I played football last weekend.我踢足球了。 ★规则动词过去式的构成 ⒈一般在动词原形末尾加-ed。例如:play—played ⒉词尾是e的动词直接加-d。例如:dance—danced ⒊末尾只有一个辅音字母的重读闭音节词,先双写这个辅音字母,再加-ed。例如stop(停止)--stopped ⒋结尾是“辅音字母+y”的动词,变“y”为“i”,再加-ed,例如:study--studied ★一些不规则变化的动词过去式 am/is—was are—were go—went swim—swam fly—flew do—did have—had say—said see—saw take—took come—came become—became get—got draw—drew hurt—hurt read—read tell—told will—would eat—ate take—took make—made drink—drank sleep(睡觉)—slept cut(切)--cut sit(坐)—sat begin(开始)—began think—thought find—found run(跑)---ran buy—bought win—won give(给)—gave sing—sang leave—left hear(听)--heart wear—wore (二)一般现在时态 一般现在时态表示包括现在时间在内的一段时间内经常发生的动作或存在的状态,表示习惯性或客观存在的事实和真理。常与often ,always ,usually ,sometimes ,every day等连用。基本句型分为两种情况: ●主语(非第三人称)+动词原形+其他。例句:——What do you usually do on the weekend?——I usually do my homework on the weekend. ●主语(第三人称)+动词的第三人称单数形式+其他。例句: ——What does Sarah usually do on the weekend?萨拉通常在周末干什么? ——She usually does her homework on the weekend.她通常在周末做她的家庭作业。 ★动词第三人称单数形式的变化规则 ⒈一般直接在动词词尾加-s.例如:play—plays ⒉以s ,x ,ch,sh结尾的动词加-es。例如:watch—watches ⒊以辅音字母加y结尾的动词,变y为i,再加es,例如:fly—flies ⒋个别不规则变化动词,需单独记忆,例如:do—does go—goes (三)现在进行时态 现在进行时态表示说话人现在正在进行的动作。基本句型:主语+be+动词的-ing+其他。 例如:——What are you doing ?你在干什么? ——I am doing my homework..我正在做作业。 ★动词现在分词的变化规则 ⒈一般直接在词尾加ing ,例如;wash—washing ⒉以不发音e字母结尾的动词,去掉e ,再加ing.例如:make—making ⒊末尾只有一个辅音字母的重读闭音节词,要双写最后一个辅音字母再加ing.例如swim—swimming (四)一般将来时态 一般将来时态表示将来某一时间或某一段时间内发生的动作或存在的状态。常与表示将来的时间如tomorrow ,next weeken ,this afternoon 等连用。我们通常用will,be going to+动词原形来表示一般将来时态。

Subject-Verb Agreement: A Complete Explanation

Subject-Verb Agreement Explained

Subject-verb agreement means:
1) Agreement in grammatical form: the singular or plural form of the noun must agree with the verb.
2) Agreement in meaning: the notional number of the subject must agree with the singular or plural form of the verb.

I. Agreement with compound subjects
1. Subjects joined by and
When two or more singular countable nouns, uncountable nouns, or pronouns joined by and serve as the subject, the verb is singular or plural according to the meaning.
1) When the compound subject refers to different people, things, or concepts, the verb is plural:
Li Ming and Zhang Hua are good students.
Like many others, the little tramp and the naughty boy have rushed there in search of gold.
Both rice and wheat are grown in this area.
2) When the compound subject refers to the same person, thing, or concept, the verb is singular:
The professor and writer is speaking at the meeting. (the professor, who is also a writer)
A journalist and author lives on the sixth floor. (one person who is both a journalist and an author)
His lawyer and former college friend was with him on his trip to Europe. (his lawyer, who is also a former college friend)
The Premier and Foreign Minister was present at the state banquet. (the Premier, who is also Foreign Minister)
Compare:
The writer and the educator have visited our school. (two people)
The writer and educator has visited our school. (one person)
His lawyer and his former college friend were with him on his trip to Europe. (two people)
Note: when the compound subject refers to one person or thing, only one article is used; when it refers to different ones, each noun takes its own article. However, when the two nouns form a naturally contrasting pair, a single article is enough:
A boy and girl are playing tennis.
3) When the nouns of a compound subject are modified by each, every, many a, or no, the verb is singular:
Each doctor and (each) nurse working in the hospital was asked to help patients.
Every man, woman and child is entitled to take part in the activity.
Every boy and (every) girl admires him for his fine sense of humour.
Many a boy and (many a) girl has made the same mistake.
No boy and no girl is there now.
Note: many a takes a singular countable noun but has a plural meaning ("many"):
Many a student was disappointed after seeing the movie.
4) When the compound subject forms an inseparable whole, the verb is singular:
A law and rule about protecting the environment has been drawn up.
The knife and fork has been washed.
War and peace is a constant theme in history.
Note: structures commonly treated as a single unit:
a cup and saucer, a horse and cart, a knife and fork

IELTS Speaking Part 1 with Sample Answers (1): Analysis

PART 1: Names
1. Who gave you your name? — The name was given by my parents.
2. Does your name have any particular (or special) meaning? — I don't know about that.
3. Do you like your name? — I guess yes.
4. In your country, do people feel that their name is very important? — Most people think so, because a name is the first impression you give other people.
5. Would you like to change your name? — I don't think it's necessary.
6. Is it easy to change your name in your country? — Maybe there are some procedures to go through.
7. Who usually names babies in your country? — Parents or grandparents.
8. Do you have any special traditions about naming children? — It depends on the traditions of the hometown.
9. What names are most common in your hometown?

[IELTS Speaking Prediction] Sample Answers from the IELTS Speaking Topic Bank

Accurate IELTS speaking predictions by Liu Wei. About the author: Vice Principal of the Global IELTS Beijing School and one of its most popular speaking teachers, whose classes combine passion with warmth and humour with motivation, following the teaching philosophy of helping students "revise with ease and get twice the result with half the effort". She has taught IELTS for many years, holds a Master's degree in Communication from Peking University, and has made academic visits to Harvard, Yale, and other top universities. Her books include "IELTS Speaking: The Nine-Day Road to a High Score" and "Cambridge IELTS Authentic Test Analysis 8".

Focus of this month's prediction: this prediction applies to the IELTS tests in April 20xx. Super-high-frequency topics, topics returning to the question bank, and first-level priorities are all key categories for which candidates should prepare detailed answers; for second-level priorities, preparing an outline of ideas is enough.

Part 1 topics by category

I. Basic information
1. Name: what your name is, what your Chinese name means, whether you would change it in the future
2. Study or work: whether you work or study; if studying, your major or favorite subject; if working, what your job involves, whether you have received training, whether you will change jobs in the future
3. Hometown: where your hometown is, what is special about it, where you would like to live in the future
4. Weather: what weather you like, the weather in your hometown, how your hometown's weather has changed this year
5. Family: your family situation, how you spend family time, what your whole family likes doing together

II. Clothing, food, housing, and transport
1. Living: whether you live in a flat or a house, your favorite room, what you would do if you redecorated
2. Building: what buildings you like, whether people in your country prefer traditional or modern buildings, how buildings will develop in the future
3. Countryside: whether you like the countryside, differences between the countryside and the city, what kind of people like the countryside
4. Transport: features of transport in your hometown, advantages and disadvantages of public versus private transport, how it could be improved in the future
5. Boating: whether you have rowed or ridden in a boat, its benefits, whether water transport is common in your country
6. Clothes and fashion: what clothes you like, whether you like fashion, fashion's influence on life
7. Bags: whether you usually carry a bag, what type of bag, what bags you know of, whether you have ever lost a bag
8. Keeping healthy and fit: how to stay healthy, how to keep in shape
9. Daily routine and favorite time in a day: a typical day, your favorite time of day
10. Sleeping: when you go to bed and get up, what to do if you cannot sleep, how to build good sleep habits

III. Entertainment and leisure
1. Music: what kind of music you like, whether you prefer listening at home or in a concert hall, whether you play an instrument and which one
2. Dancing: whether you can dance, what dances Chinese people like, whether traditional dances have been passed down
3. Photography: whether you like photography, whether you prefer photographing others or being photographed, whether you collect photos
4. Painting: whether you can paint, the benefits of children learning to paint, any painting techniques
5. Sport: your favorite sport and its benefits
6. Museum: whether you like museums, their benefits, whether they should charge admission
7. Leisure and relaxation: what leisure activities you like, with whom, where, their benefits, what leisure activities Chinese people like
8. Weekends: how you spend weekends, whether you work overtime at weekends, whether the boss should pay for it
9. Postcards: whether you like ___, whether you have sent or received any, how they differ from email
10. Public park: whether you like parks, what you do there, whether there should be more parks

IV. Technology and life

2019 IELTS Speaking Real Test Questions (9)

New Part 2 Topic Prediction for IELTS Speaking, September–December 2019: A Meaningful Song

Part 2
Describe a song that is meaningful to you. You should say:
what song it is
when you first heard it
what the song is about
and explain why it is meaningful to you.

Part 3
1. Is it good for children to learn music?
2. How do Chinese people learn music?
3. Do men and women play different kinds of musical instruments?
4. What's the status of traditional music now?
5. Do all grades have music class?
6. Is music class important?
7. What kind of music do children like?
8. Why would some parents force their children to learn some musical instruments?

Analysis
The task asks the candidate to describe "a song that is meaningful to you". The key points to cover are: what the song is, when you first heard it, what it is about, and why it has special meaning for you.

Sample answer
Three years ago, a song came out that celebrated being young and relaxed, and I was instantly crazy about it. It was called "Young for You", and the beat and melody were off the hook! The song became a massive hit across China that year, and you could hear it being played everywhere. When I listen to the song now, it immediately takes me back to my youth. I first heard the song playing in a shopping mall and was instantly drawn to it. The singer's voice was like no other I had ever heard before, and the melody was haunting and sad. I stopped shopping and used an app on my phone to identify the tune, then immediately downloaded the album from iTunes.

Primary School English Grammar Points Summary

01 Personal pronouns
Subject forms: I, we, you, she, he, it, they
Object forms: me, us, you, her, him, it, them
Possessive adjectives: my, our, your, her, his, its, their
Possessive pronouns: mine, ours, yours, hers, his, its, theirs

02 Comparison of adjectives and adverbs
(1) In general, add -er to the adjective or adverb: older, taller, longer, stronger
(2) Put more before words of several syllables: more interesting, etc.
(3) Double the final letter, then add -er: bigger, fatter, etc.
(4) Change y to i, then add -er: heavier, earlier
(5) Irregular forms: well—better, much/many—more, etc.

03 Plural forms of countable nouns
Most nouns: add s (a book—books)
Nouns ending in a consonant + y: change y to ies (a story—stories)
Nouns ending in s, sh, ch, or x: add es (a glass—glasses, a watch—watches)
Nouns ending in o: add s or es (a piano—pianos, a mango—mangoes)
Nouns ending in f or fe: change f or fe to ves (a knife—knives, a shelf—shelves)

04 Uncountable nouns (same form for singular and plural)
bread, rice, water, juice, etc.

05 Contractions
I'm = I am, you're = you are, she's = she is, he's = he is, it's = it is, who's = who is, can't = cannot, isn't = is not, etc.

06 a/an
a book, a peach; an egg, an hour

07 Prepositions
on, in, in front of, between, next to, near, beside, at, behind
Expressing time: at six o'clock, at Christmas, at breakfast

High School English: Subject-Verb Agreement Explained

This article covers the common exam points for subject-verb agreement: plural verbs with compound subjects, the proximity principle, and agreement of the verb with the preceding subject.

Subject-verb agreement means:
1) Agreement in grammatical form: the singular or plural form must agree with the verb.
2) Agreement in meaning: the notional number of the subject must agree with the singular or plural form of the verb.
3) The proximity principle: the singular or plural form of the verb is determined by the word closest to it.

In general, an uncountable noun takes a singular verb, and a plural countable noun takes a plural verb. For example:
There is much water in the thermos.
But when an uncountable noun is preceded by a plural noun of quantity, the verb is plural. For example:
Ten thousand tons of coal were produced last year.

A compound subject takes a plural verb. For example:
Reading and writing are very important.
Note: when subjects joined by and express a single concept, i.e. refer to the same person or thing, the verb is singular; in that case only one article appears before the two nouns joined by and. For example:
The iron and steel industry is very important to our life.

Typical exam question:
The League secretary and monitor ___ asked to make a speech at the meeting.
A. is  B. was  C. are  D. were
Answer: B. First consider tense: this happened in the past, so the past tense is needed, eliminating A and C. It is tempting to choose D, since "the League secretary and monitor" looks like two people; but on closer inspection, monitor has no the before it, and in English, when one person holds several posts, the definite article appears only before the first post, with the later posts joined by and. The subject here is therefore one person, so B is correct.

IELTS Speaking Test: A Complete Analysis of Format and Content

The IELTS speaking test is a one-on-one conversation between the candidate and an examiner, who assesses the candidate's spoken English. The test has three parts, each calling on different speaking skills, and the speaking section is recorded. The details are shared below.

IELTS Speaking Part 1
Format: the examiner introduces himself or herself and checks the candidate's identity, then asks questions on topics familiar to the candidate (such as friends, interests and habits, or food). To keep the questions consistent, they are drawn from a pre-set pool.
Duration: 4–5 minutes.
What this part tests: the candidate's ability to communicate, in question-and-answer form, about everyday opinions and information, and common life experiences or situations.

IELTS Speaking Part 2
Format: the candidate gives an individual long turn. The examiner hands the candidate a task card, a pencil, and paper for notes. The card gives a topic and the points to include in the talk, and ends by prompting the candidate to explain one aspect of the topic. Using the prompts on the card effectively helps the candidate think through the topic, organize the content, and keep speaking for two minutes; making some notes during the preparation time also helps structure the talk. The candidate has one minute to prepare, after which the examiner asks the candidate to speak for 1–2 minutes on the topic. The examiner stops the candidate after two minutes and closes with one or two follow-up questions.
Duration: 3–4 minutes.
What this part tests: the candidate's ability to speak at length on a given topic (without any further prompts), using language appropriately and organizing ideas coherently. The candidate may need to draw on personal experience to complete this part.

IELTS Speaking Part 3
Format: the examiner and candidate discuss the topic raised in Part 2; the discussion is broader and more abstract, and goes deeper where appropriate.
Duration: 4–5 minutes.
What this part tests: the candidate's ability to express and justify opinions, and to analyze, discuss, and think deeply about issues.

IELTS speaking sample topic: describe something that made you laugh

Cambridge IELTS 9 Speaking Questions with Analysis: Test 3, Part 3

Summary: Cambridge IELTS 9, Test 3, Speaking Part 3. Example discussion topic: Reasons for daily travel.

Discussion topic: Reasons for daily travel
Example question:
· Why do people need to travel every day?

Teacher's note: this question asks why people move about every day. For questions about "people" in general, you can list different groups, such as commuters, students, people visiting friends and relatives, and tourists, and then expand on one or two of them.

High-scoring sample:
People need to travel for lots of different reasons. Almost everybody has to travel to work, so they can earn money and provide for their families. If a person is lucky, then they will live very close to the place where they work, so they can travel on foot or by bicycle. Sometimes, however, people live very far away, so they must travel by bus, train, car, or even sometimes by plane or boat! Lots of people also travel to see friends and family members, who might live in a different city, or even a different country. But often people just like to travel for fun, to go to somewhere new, or to see a famous tourist site or place of natural beauty.

Highlighted expressions: earn money; tourist site; provide for

Grade 9 English Grammar Points Summary

A Collection of Quick-Memory Rhymes for Junior High English Grammar

Many students find grammar dry and hard to learn, but with care and suitable study methods, English can be learned enjoyably and its grammar rules mastered. Drawing on relevant books and years of teaching experience, the author has collected and compiled the following grammar rhymes, in the hope of helping students about to take the senior high school entrance exam.

I. Basic use of articles
[Mnemonic] A noun is a bare head and usually needs a hat. A singular countable noun takes a or an: a before a consonant sound, an before a vowel sound. For specific reference, use the definite article. Generic plurals and uncountables take no the, and when a pronoun is present, no article appears at all.
[Explanation] Articles are one of the must-test grammar points in the entrance exam. The rhyme covers: ① nouns generally do not stand alone and are usually used with an article; ② a non-specific singular countable noun takes the indefinite article a or an, while a specific one takes the definite article the; ③ no article is used when a plural noun is generic, or when the noun is preceded by this, these, my, some, and the like.

II. Rules for forming noun plurals
[Mnemonic] To make a singular noun plural, remember the rules: generally add s, with a few special cases. Words ending in o mostly add s, but "two people and two dishes" never let es leave their lips. For endings in f or fe, put v and e before the s. Irregular words must be memorized one by one.
[Explanation] ① Most singular countable nouns add s, but words ending in the sounds /tʃ/, /ʃ/, or /s/ (that is, words ending in ch, sh, s, x, etc.) generally add es. ② Words ending in o add s, except the "two people" (negro, hero) and "two dishes" (tomato, potato), which add es. ③ Words ending in f or fe generally change the f or fe to ve and add s. ④ Some words have no rule and must be memorized individually, e.g. child—children, mouse—mice, deer—deer, sheep—sheep, Chinese—Chinese, ox—oxen, man—men, woman—women, foot—feet, tooth—teeth.

III. Use of the noun possessive
[Mnemonic] The possessive of a noun says whose a thing is. For an animate noun, just add 's; if the word already ends in s, the apostrophe alone will do. After coordinated nouns, separate and joint possession differ: the former adds 's to each, the latter only to the last. For an inanimate noun, use the of-possessive, with the two nouns in reversed order: that is a hard rule.
[Explanation] ① Animate nouns generally take 's for the possessive, but if the noun ends in s, add only the apostrophe. ② With coordinated nouns, add 's to each noun to show separate possession; for joint possession, add 's only to the last noun. ③ For inanimate nouns, use the of-possessive, with the order of the two nouns reversed.
