英文原文及翻译

英译汉佳作欣赏

英译汉:佳译赏析巧选主语成妙译(1)【原文】饱经沧桑的20世纪仅剩下几个春秋,人类即将跨入充满希望的21世纪。

【译文】In a few years’ time, mankind will bid farewell to the 20th century, a century full of vicissitudes, and enter into the 21st century, a century full of hopes.【赏析】1995年,联合国举办纪念成立50周年庆祝活动,江主席出席并发表演说。

原文是该篇演说的第一句,是地道的汉语。

翻译此句时,一般译者往往会亦步亦趋地将原文译为两个分句,分别以“饱经沧桑的20世纪”和“人类”作主语。

但高明的译者吃透了原文的精神,选择mankind为主语统领全句,以准确而地道的英语译出,确实是一则难得的佳译,值得翻译爱好者认真体会。

英译汉:佳译赏析之“肚里的墨水”(2)【原文】Their family had more money, more horses, more slaves than any one else in the County, but the boys had less grammar than most of their poor Cracker neighbors.【译文】他们家里的钱比人家多,马比人家多,奴隶比人家多,都要算全区第一,所缺少的只是他哥儿俩肚里的墨水,少得也是首屈一指的。

【赏析】原文选自Gone With the Wind。

译文忠实且流畅,算得上好译文,特别值得一提的是译者对grammar的处理,如果照搬字典自然难于翻译,但译者吃透了原句精神,译为“肚里的墨水”,真是再妥帖不过了。

英译汉:佳译赏析之“思前想后”(3)【原文】A nd in these meditations he fell asleep.【译文】他这么思前想后,就睡着了。

英语文摘中英对照File

一、生活感悟英文原文:Life is like a camera. Just focus on what's important and capture the good times, develop from the negatives and if things don't work out, just take another shot.中文翻译:生活就像一台相机。

只需关注重要的事物,捕捉美好时光,从挫折中成长,如果事情不尽如人意,那就再试一次。

二、名人名言英文原文:"The only way to do great work is to love what you do." – Steve Jobs中文翻译:“成就伟大事业的唯一途径,就是热爱你所做的事。

” ——史蒂夫·乔布斯三、趣味故事英文原文:Once upon a time, there was a fox who was very proud of his tail. One day, he got caught in a trap and had to sacrifice his tail to escape. Though he was free, he felt ashamed of his appearance. However, he soon realized that his life was more important than his tail.中文翻译:从前,有一只狐狸非常自豪自己的尾巴。

有一天,它不慎陷入了陷阱,为了逃脱,不得不牺牲自己的尾巴。

虽然它重获自由,但它为自己的外貌感到羞愧。

然而,它很快意识到,生命比尾巴更重要。

四、励志故事英文原文:Thomas Edison failed more than 10,000 times before he invented the light bulb. When asked about his failures, he replied, "I have not failed. I've just found 10,000 ways that won't work."中文翻译:托马斯·爱迪生在发明电灯泡之前失败了超过一万次。

英文原文及翻译

Vera Wang Honors Her Chinese Roots王薇薇以中国根为傲With nuptials(婚礼) season in full swing, Vera Wang’s wedding dress remains at the top of many a bride’s(新娘) wish list. The designer, who recently took home the lifetime achievement award from the Council of Fashion Designers of America, has been innovating in bridal design for years—using color, knits and even throwing fabric into a washing machine.随着婚礼季的全面展开,王薇薇(Vera Wang)婚纱依然是许多新娘愿望清单上的首选。

王薇薇最近刚拿到美国时装设计师协会(Council of Fashion Designers of America)颁发的终生成就奖。

多年来她一直在婚纱设计领域进行创新──运用色彩和编织手法,甚至将面料扔进洗衣机里。

Ms. Wang said that her latest collection is about construction. “I had felt that I had really messed that vocabulary of perfection for brides for a while, where there’s six fabrics to a skirt, ” she said. “I wanted to go back to something that maybe was what I started with, but in a whole new way, and that would be architecture—not simplicity—but maybe minimalism.”王薇薇说,她的最新婚纱系列重点在于构建。

名篇名译(英译汉)

名篇名译001 1.原文:It is an ill wind that blows nobody good.译文:世事皆利弊并存。

赏析:原句结构比较特殊(“It is … that …”),理解起来有点困难。

“对谁都没有好处的风才是坏风”,也就是说大多数情况下风对人都是有好处、有坏处,再引申一步就成了上面的译句。

林佩耵在《中英对译技巧》一书中(第68页)还给了几个相同结构的英文句子。

翻译的前提是理解。

有人指出,

市面上见到的翻译作品,有好多都带有因理解不正确而产生的低级错误,“信”都谈不上还妄谈什么“达”和“雅”!初学翻译的朋友,在理解原文上当不遗余力。

2.原文:Their language was almost unrestrained by any motive of prudence.译文:他们几乎爱讲什么就讲什么,全然不考虑什么谨慎不谨慎。

赏析:如果硬译,译文势必成了“他们的言论几乎不受任何深思熟虑的动机的约束”。

译者本其译,化其滞,将原句一拆为二,充分运用相关翻译技巧,译文忠实、通顺。

3.原文:Get a livelihood, and then practise virtue.译文:先谋生而后修身。

(钱钟书译)赏析:原句是祈使句,译句也传达出了训导的意味。

用“谋生”来译“Get a livelihood”,用“修身”来译“practise virtue”,可谓精当。

巧的是,原句七个词,译句也是七个汉字。

4.原文:I enjoy the clean voluptuousness of the warm breeze on my skin and the cool support of water.译文:我喜爱那洁净的暖风吹拂在我的皮肤上使我陶然欲醉,也喜爱那清亮的流水把我的身体托浮在水面。

牛津英语必修一课文原文及中文翻译

M1U1S C H O O L l i f e i n t h e U K Going to a British high school for one year was a very enjoyable and exciting experience for me. I was very happy with the school hours in Britain because school starts around 9 a.m. and ends about 3.30 p.m. This means I could get up an hour later than usual as schools in China begin before 8 a.m.On the first day; all of the new students attended an assembly in the school hall. I sat next to a girl whose name is Diane. We soon became best friends. During the assembly; the headmaster told us about the rules of the school. He also told us that the best way to earn respect was to devote oneself to study and achieve high grades. This sounded like my school in China.I had many teachers in the past year. Mr. Heywood ; my class teacher; was very helpful. My favorite teacher was Miss Burke—I loved the lessons that she gave in English Literature. In our class there were 28 students. This is about the average size for British schools. We had to move to different classrooms for different classes. We also had different students in some classes; so it was a struggle for me to remember all the faces and names.I found the homework was not as heavy as what I used to get inmy old school; but it was a bit challenging for me at firs t because all the homework was in English. I felt lucky as all my teachers gave me much encouragement and I enjoyed all my subjects: English; History; English Literature; Computer Science; Maths; Science; PE; Art; Cooking and French.My English improved a lot as I used English every day and spent an hour each day reading English books in the library.I usually went to the Computer Club during the lunch break; so I could send e-mails to my family and friends back home for free. I also had an extra French class on Tuesday evenings. Cooking was really fun as I learned how to buy; prepare and cook food. At the end of term we held a class party and we all had to cook something. I was glad that all my classmates were fond of the cake that I made.Students at that school have to study Maths; English and Science; but can stop studying some subjects if they don’t like them; for example; History and French. They can choose other subjects like Art and Computer Science or Languages such as Spanish and German. In the Art class that I took; I made a small sculpture. Though it didn’t look very beautiful when it was finished; I still liked it very much.I missed Chinese food a lot at lunch. British food is verydifferent. British people like eating dessert at the end of their main meal. After lunch; we usually played on the school field. Sometimes I played football with the boys. Sometimes I just relaxed under a tree or sat on the grass.I was very lucky to experience this different way of life. 
I look back on my time in the UK with satisfaction; and I really hope to go back and study in Manchester again.在英国的学校生活在英国上了一年的中学对我来说是一段非常令人愉快和兴奋的经历..我很喜欢英国中学的作息时间;因为学校每天上午大约9点上课;下午大约3点半放学..这意味着我每天可以比以往晚一个小时起床;因为在中国学校每天上午8点之前就开始上课了..开学第一天;所有的新生都去学校礼堂参加晨会..我当时坐在一个名叫黛安娜的女孩身边:我们很快就成了最要好的朋友..在晨会上;校长向我们宣布了校规..他还告诉我们;赢得尊重的最佳途径就是专心学习并取得好成绩..这听起来倒是像我在国内就读的学校..过去的一年里我有过许多老师..海伍德先生;我的班主任;对我的帮助很大..我最喜欢的老师是伯克小姐——我喜爱她教的英国文学课程..我们班上一一共有28个学生..英国中学的班级差不多就是这么大..我们上不同的课得去不同的教室..上某些课的时候;班上的同学也不一样;所以对我来说记住所有的人的面孔和名字可是一件难事..我发现这里布置的家庭作业不像我以前在原来学校时那么繁重;可一开始我还是觉得有些挑战性;因为所有的作业都是英语的;让我感到幸运的是;所有的老师都给了我很多鼓励;因此;我也喜欢我所学的每一门功课:英语、历史、英国文学、计算机、数学、科学、体育、艺术、烹饪和法语..我天天都在使用英语;每天还花一个小时在图书馆里阅读英文书籍;因此;我的英语有了很大进步..午饭休息时间我常去电脑俱乐部;这样我就可以免费给国内的家人和朋友发电子邮件了..我还额外选了一·门功课——每个星期二晚上去听法语..当我学会如何买菜、配菜和做菜的时候;我发现烹饪真是一种乐趣..学期末;我们班开了一个派对;我们每个人都要为派对做点吃的..我们班上所有的同学都喜欢我做的蛋糕;这可真让我高兴..数学、英语和科学是该校的必修课;但是;如果不喜欢某些科目是可以中断学习的;如历史和法语..学生可以选修其他的科目;如艺术、计算机或者是西班牙语、德语之类的语言课..在艺术课上我做了一件小雕塑..尽管完工之后它看上去并不十分漂亮;但我仍然非常喜欢它..每到吃午餐的时候我就非常想念中国菜..英国的饮食很不一样;英国人在正餐结束时喜欢吃甜点..午餐后我们通常去学校运动场上玩耍..有时我和男生们一起踢足球..有时我干脆在树下休息休息或是在草地上坐一坐..我很幸运能够体验到这样一种不同的生活方式..回顾在英国的日子我很满意;真希望有朝一日能够重返曼彻斯特;在那里读书学习..M1U1 Project Starting a new school clubWe have a radio club in our school. It is great because it is run by the students for the school. I am lucky as I am one of the hosts.It was started two years ago. One day; I just began thinking about music for everyone; so I asked the headmaster if music could be played during break times. He approved the idea; and two years later I am in charge of the radio club as the oldest student member. Our club is much more than just music. Every morning we tell our schoolmates about the weather; recent news; and some special messages that the teachers want us to broadcast.During exam time we have a special programme that tells students the things they should do for preparation. At the end of the school year; many students who are graduating use our club to give messages to their close friends and teachers.When parents come to visit the school and talk to the teachers; we often play songs sung by students; and we also give messages to inform the parents of events such as outings and school plays.I shall miss the radio club after graduation; but I know that it will continue without me. Kate JonesOur school club ‘Poets of the Next Generation’ is aliterary club that was started by our English teacher Mr. Owen. We meet on the last Friday of every month to talk about poems and poets that we like. In the club meetings; we first select poems that we love; and then read them aloud. We also discuss poems in our meetings.When I attended the first meeting; I was required to write a poem and I had to read it to the club. I was a little nervous at first; but everyone was so nice and friendly that I soon stopped worrying. I once read a poem about nature in the school courtyard. I chose an old tree and gathered everyone under it before I read. The club members said it was one of the best compositions they had heard. 
Bob Shaw我们学校有一个广播俱乐部..这个俱乐部的非常之处在于它是由学生们自己为学校创办的..我很幸运地成了其中的一名主持人..广播俱乐部是两年前创立的..有一天;我萌发了为大家播放音乐的念头;于是我就问校长能否在休息时间给同学们播放音乐;校长同意了..两年后;我作为元老负责校广播俱乐部的工作..我们的俱乐部现在不只是播放音乐..每天早上我们向同学们播报天气情况和时事新闻;还有老帅们要我们播出的一些特别告示..到了考试的时候我们就会做一档特别节目;告诉学生们复习迎考的注意事项..每当学年结束的时候;许多即将毕业离校的学生就会借助我们的广播俱乐部向他们的好友和老师留下临别致辞..每逢家长来访、与老师交谈的时候;我们常常播放一些由学生们自己演唱的歌曲..我们还会广播一些通知;告诉家长们有关诸如远足、校内戏剧表演之类的活动讯息..毕业后;我会想念广播俱乐部的;但我知道;没有我;它还会继续办下去的..凯特琼斯我们的校内俱乐部“下一代诗人”是由我们的英语老师欧文先生发起的一个文学俱乐部..每个月的最后一个星期五我们会聚在一起讨论喜爱的诗歌和诗人..聚会的时候;我们首先挑选出我们喜爱的诗歌;然后朗诵这些诗歌..我们还在聚会时讨论诗歌..我第一次参加聚会的时候;被要求写一首诗;还得当着俱乐部成员的面进行朗诵..起初我觉得有些紧张;但所有的成员都是那么亲切、友好;我很快就不担心了..有一次;我在学校花园里朗诵了一首表现大自然的诗歌..朗诵前;我选择了一棵老树;把大家都聚集在树下..俱乐部成员们都说那是他们听过的最好的诗歌之一..鲍勃肖。

中英文翻译

附录英文原文:Chinese Journal of ElectronicsVo1.15,No.3,July 2006A Speaker--Independent Continuous SpeechRecognition System Using Biomimetic Pattern RecognitionWANG Shoujue and QIN Hong(Laboratory of Artificial Neural Networks,Institute ol Semiconductors,Chinese Academy Sciences,Beijing 100083,China)Abstract—In speaker-independent speech recognition,the disadvantage of the most diffused technology(HMMs,or Hidden Markov models)is not only the need of many more training samples,but also long train time requirement. This Paper describes the use of Biomimetic pattern recognition(BPR)in recognizing some mandarin continuous speech in a speaker-independent Manner. A speech database was developed for the course of study.The vocabulary of the database consists of 15 Chinese dish’s names, the length of each name is 4 Chinese words.Neural networks(NNs)based on Multi-weight neuron(MWN) model are used to train and recognize the speech sounds.The number of MWN was investigated to achieve the optimal performance of the NNs-based BPR.This system, which is based on BPR and can carry out real time recognition reaches a recognition rate of 98.14%for the first option and 99.81%for the first two options to the Persons from different provinces of China speaking common Chinese speech.Experiments were also carried on to evaluate Continuous density hidden Markov models(CDHMM ),Dynamic time warping(DTW)and BPR for speech recognition.The Experiment results show that BPR outperforms CDHMM and DTW especially in the cases of samples of a finite size.Key words—Biomimetic pattern recognition, Speech recogniton,Hidden Markov models(HMMs),Dynamic time warping(DTW).I.IntroductionThe main goal of Automatic speech recognition(ASR)is to produce a system which will recognize accurately normal human speech from any speaker.The recognition system may be classified as speaker-dependent or speaker-independent.The speaker dependence requires that the system be personally trained with the speech of the person that will be involved with its operation in order to achieve a high recognition rate.For applications on the public facilities,on the other hand,the system must be capable of recognizing the speech uttered by many different people,with different gender,age,accent,etc.,the speaker independence has many more applications,primarily in the general area of public facilities.The most diffused technology in speaker-independent speech recognition is Hidden Markov Models,the disadvantage of it is not only the need of many more training samples,but also long train time requirement.Since Biomimetic pattern recognition(BPR) was first proposed by Wang Shoujue,it has already been applied to object recognition, face identification and face recognition etc.,and achieved much better performance.With some adaptations,such modeling techniques could be easily used within speech recognition too.In this paper,a real-time mandarin speech recognition system based on BPR is proposed,which outperforms HMMs especially in the cases of samples of a finite size.The system is a small vocabulary speaker independent continuous speech recognition one. The whole system is implemented on the PC under windows98/2000/XPenvironment with CASSANN-II neurocomputer.It supports standard 16-bit sound card .II .Introduction of Biomimetic Pattern Recognition and Multi —Weights Neuron Networks1. 
Biomimetic pattern recognition

Traditional pattern recognition aims at getting the optimal classification of different classes of samples in the feature space. BPR, by contrast, intends to find the optimal coverage of the samples of the same type. It follows from the Principle of Homology-Continuity: if there are two samples of the same class, the difference between them must change gradually, so a gradually changing sequence must exist between the two samples. In BPR theory, the construction of the sample subspace of each type of samples depends only on the type itself. More specifically, the construction of the subspace of a certain type of samples depends on analyzing the relations between the training samples of that type and utilizing the methods of “coverage of objects with complicated geometrical forms in the multidimensional space”.

2. Multi-weights neuron and multi-weights neuron networks

A multi-weights neuron can be described as Y = f[Φ(W1, W2, ..., Wm, X) − θ], where W1, W2, ..., Wm are the m weight vectors, X is the input vector, Φ is the neuron's computation function, θ is the threshold and f is the activation function. According to dimension theory, in the feature space R^n with X ∈ R^n, the equation Φ(W1, W2, ..., Wm, X) = θ constructs an (n−1)-dimensional hypersurface in the n-dimensional space, determined by the weights W1, W2, ..., Wm, which divides the n-dimensional space into two parts. If Φ(W1, W2, ..., Wm, X) = θ is a closed hypersurface, it encloses a finite subspace.

According to the principle of BPR, the subspace of a certain type of samples is determined from that type of samples itself. If we can find a set of multi-weights neurons (a multi-weights neuron network) covering all the training samples, the subspace covered by the neural network represents the sample subspace. When an unknown sample falls in the subspace, it can be judged to be of the same type as the training samples. Moreover, if a new type of samples is added, it is not necessary to retrain any of the types already trained: the training of a certain type of samples has nothing to do with the other ones.

III. System Description

The speech recognition system is divided into two main blocks. The first one is the signal pre-processing and speech feature extraction block. The other one is the multi-weights neuron network, which performs the task of BPR.

1. Speech feature extraction

Mel-based cepstral coefficients (MFCC) are used as speech features. They are calculated as follows: A/D conversion; endpoint detection using short-time energy and zero crossing rate (ZCR); pre-emphasis and Hamming windowing; fast Fourier transform; DCT transform. The number of features extracted for each frame is 16, and 32 frames are chosen for every utterance, so a 512-dimensional Mel-cepstral feature vector (16 × 32 numerical values) represents the pronunciation of every word.

2. Multi-weights neuron network architecture

As a new general-purpose theoretical model of pattern recognition, BPR is here realized by multi-weights neuron networks. To train a certain class of samples, a multi-weights neuron subnetwork is established. The subnetwork consists of one input layer, one multi-weights neuron hidden layer and one output layer. Such a subnetwork can be considered as a mapping F: R^512 → R, with F(X) = min(Y1, Y2, ..., Ym), where Yi (i = 1, 2, ..., m) is the output of the i-th of the m hidden multi-weights neurons and X ∈ R^512 is the input vector.
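To make the multi-weights neuron and the min-combination of a subnetwork concrete, here is a minimal Python sketch. The paper fixes only the general form Y = f[Φ(W1, ..., Wm, X) − θ] and F(X) = min(Y1, ..., Ym); the specific choices below (Φ as the distance from X to the nearest weight vector, f as the identity, and the classify helper) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def multi_weight_neuron(weights, x, theta, phi=None, f=None):
    """One multi-weights neuron: Y = f[Phi(W1, ..., Wm, X) - theta].
    Phi and f are pluggable; the defaults (distance to the nearest
    weight vector, identity activation) are placeholder assumptions."""
    if phi is None:
        phi = lambda ws, v: min(np.linalg.norm(v - w) for w in ws)
    if f is None:
        f = lambda t: t
    return f(phi(weights, x) - theta)

def subnetwork_output(neurons, x):
    """F(X) = min(Y1, ..., Ym) over the hidden multi-weights neurons.
    `neurons` is a list of (weight_vectors, theta) pairs."""
    return min(multi_weight_neuron(ws, x, th) for ws, th in neurons)

def classify(subnetworks, x):
    """Assign x (e.g. a 512-dim MFCC vector, 16 coefficients x 32 frames)
    to the class whose subnetwork output is smallest, i.e. whose coverage
    region x falls into most comfortably."""
    return min(subnetworks, key=lambda label: subnetwork_output(subnetworks[label], x))
```

In the paper's setting there would be 15 such subnetworks, one per dish name; a rejection threshold on the winning output could be added to flag utterances that fall outside every coverage region.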
IV. Training for MWN Networks

1. Basics of MWN network training

Training one multi-weights neuron subnetwork requires calculating the weights of the multi-weights neuron layer. The multi-weights neuron and the training algorithm used are those of Ref.[4]. In this algorithm, if the number of training samples of each class is N, we can use N − 2 neurons; in this paper N = 30. Each neuron Yi = f[(si, si+1, si+2, x)] is a function with multi-vector input and one scalar output.

2. Optimization method

According to the comments in IV.1, if there are many training samples, the number of neurons will be very large and will thus reduce the recognition speed. In the case of learning several classes of samples, knowledge of the class membership of the training samples is available. We use this information in a supervised training algorithm to reduce the network scale. When training class A, we regard the remaining training samples of the other 14 classes as class B, so there are 30 training samples in set A: {a1, a2, ..., a30} and 420 training samples in set B: {b1, b2, ..., b420}. First, select 3 samples from A; this gives a neuron Y1 = f[(ak1, ak2, ak3, x)]. Let A(0) = A, Y_Ai = f[(ak1, ak2, ak3, ai)] for i = 1, 2, ..., 30, and Y_Bj = f[(ak1, ak2, ak3, bj)] for j = 1, 2, ..., 420; let V = min_j(Y_Bj), and specify a value r, 0 < r < 1. If Y_Ai < r·V, remove ai from set A; this yields a new set A(1). We continue until set A(k) is empty; the training is then ended, and the subnetwork of class A has a hidden layer of k neurons.
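A rough sketch of this supervised selection loop, using the same illustrative neuron as in the previous sketch. The threshold test Y_Ai < r·V and the one-neuron-per-iteration structure come from the text; how the three seed samples are chosen, the value of θ, and the termination handling are assumptions.

```python
import numpy as np

def neuron_y(seeds, x, theta=0.0):
    # Illustrative Y = f[Phi(seeds, x) - theta]: distance from x to the
    # nearest seed vector, identity activation (same assumption as above).
    return min(np.linalg.norm(x - s) for s in seeds) - theta

def train_class_subnetwork(A, B, r, theta=0.0):
    """Build the hidden layer for one class (sketch).

    A: training samples of the class being covered (30 in the paper)
    B: training samples of all the other classes  (420 in the paper)
    r: pruning threshold, 0 < r < 1
    Returns a list of (seed_samples, theta) neurons; one neuron is added
    per iteration until every sample of A has been removed."""
    remaining = list(A)
    neurons = []
    while remaining:
        seeds, rest = remaining[:3], remaining[3:]   # pick 3 samples for one neuron
        neurons.append((seeds, theta))
        # V is the smallest response of the new neuron on the other classes.
        V = min(neuron_y(seeds, b, theta) for b in B)
        # The paper removes every a_i with Y_Ai < r*V; the seeds themselves
        # always satisfy this (their distance is 0), so only `rest` is tested.
        remaining = [a for a in rest if neuron_y(seeds, a, theta) >= r * V]
    return neurons
```

With r around 0.5, this pruning is what lets a class be covered with far fewer neurons than the basic algorithm uses, which is the effect the experiments in Section V report.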
V. Experiment Results

A speech database consisting of 15 Chinese dish names was developed for the course of the study. The length of each name is 4 Chinese words, that is to say, each speech sample is a continuous string of 4 words, such as “yu xiang rou si”, “gong bao ji ding”, etc. It was organized into two sets: a training set and a test set. The speech signal is sampled at 16 kHz with 16-bit resolution.

450 utterances constitute the training set used to train the multi-weights neuron networks. They belong to 10 speakers (5 males and 5 females) who come from different Chinese provinces; each speaker uttered each of the names 3 times. The test set had a total of 539 utterances, involving another 4 speakers who uttered the 15 names arbitrarily. The tests made to evaluate the recognition system were carried out with r ranging from 0.5 to 0.95 in steps of 0.05; the experiment results at the different values of r are shown in Table 1 (Experimental results at different values of r). Obviously, the networks were able to achieve full recognition of the training set at any r. From the experiments it was found that r = 0.5 achieved nearly the same recognition rate as the basic algorithm, while the MWNs used in the networks are much fewer than in the basic algorithm. (Table 2: Experiment results of the BPR basic algorithm.)

Experiments were also carried out to evaluate continuous density hidden Markov models (CDHMM), dynamic time warping (DTW) and biomimetic pattern recognition (BPR) for speech recognition, emphasizing the performance of each method across decreasing amounts of training samples as well as the required training time. The CDHMM system was implemented with 5 states per word; the Viterbi algorithm and Baum-Welch re-estimation are used for training and recognition. The reference templates for the DTW system are the training samples themselves. Both the CDHMM and DTW techniques are implemented using the programs in Ref.[11]. Table 2 gives the comparison of experiment results for the BPR basic algorithm, dynamic time warping (DTW) and the hidden Markov models (HMMs) method; the HMMs system was based on continuous density hidden Markov models (CDHMMs) and was implemented with 5 states per name.

VI. Conclusions and Acknowledgments

In this paper, a Mandarin continuous speech recognition system based on BPR is established. Besides, a training sample selection method is used to reduce the network scale. As a new general-purpose theoretical model of pattern recognition, BPR can be used in speech recognition too, and the experiment results show that it achieves a higher performance than HMMs and DTW.

References

[1] Wang Shoujue, “Biomimetic (Topological) pattern recognition - A new model of pattern recognition theory and its application”, Acta Electronica Sinica (in Chinese), Vol.30, No.10, pp.1417-1420, 2002.
[2] Wang Shoujue, Chen Xu, “Biomimetic (Topological) pattern recognition - A new model of pattern recognition theory and its application”, Proceedings of the International Joint Conference on Neural Networks, Vol.3, pp.2258-2262, July 20-24, 2003.
[3] Wang Shoujue, Zhao Xingtao, “Biomimetic pattern recognition theory and its applications”, Chinese Journal of Electronics, Vol.13, No.3, pp.373-377, 2004.
[4] Xu Jian, Li Weijun et al., “Architecture research and hardware implementation on simplified neural computing system for face identification”, Proceedings of the International Joint Conference on Neural Networks, Vol.2, pp.948-952, July 20-24, 2003.
[5] Wang Zhihai, Mo Huayi et al., “A method of biomimetic pattern recognition for face recognition”, Proceedings of the International Joint Conference on Neural Networks, Vol.3, pp.2216-2221, July 20-24, 2003.
[6] Wang Shoujue, Wang Liyan et al., “A General Purpose Neuron Processor with Digital-Analog Processing”, Chinese Journal of Electronics, Vol.3, No.4, pp.73-75, 1994.
[7] Wang Shoujue, Li Zhaozhou et al., “Discussion on the basic mathematical models of neurons in general purpose neuro-computer”, Acta Electronica Sinica (in Chinese), Vol.29, No.5, pp.577-580, 2001.
[8] Wang Shoujue, Wang Bainan, “Analysis and theory of high-dimension space geometry of artificial neural networks”, Acta Electronica Sinica (in Chinese), Vol.30, No.1, pp.1-4, 2001.
[9] Wang Shoujue, Xu Jian et al., “Multi-camera human-face personal identification system based on the biomimetic pattern recognition”, Acta Electronica Sinica (in Chinese), Vol.31, No.1, pp.1-3, 2003.
[10] Ryszard Engelking, Dimension Theory, PWN-Polish Scientific Publishers, Warszawa, 1978.
[11] Qiang He, Ying He, Matlab Programming, Tsinghua University Press, 2002.

中文翻译:

电子学报 2006年7月 第15卷第3期

基于仿生模式识别的非特定人连续语音识别系统

王守觉 秦虹(中科院半导体研究所人工神经网络实验室,北京 100083,中国)

摘要:在非特定人语音识别中,隐马尔科夫模型(HMMs)是使用最多的技术,但是它的不足之处在于:不仅需要更多的训练样本,而且训练的时间也很长。

英文原文翻译

青少年教育Y:您觉得中西方的家庭教育在塑造和培养青少年思想与性格方面有什么共同点吗?H:在这方面,家长的引导是第一要素。

中西方都很重视家庭生活质量对青少年性格培养的影响。

通常情况下,父母都是孩子的第一位老师。

小孩子经常模仿父母的言行并且很倾向于通过模仿父母的一些积极行为而获得奖励。

D:我同意何教授的观点。

无论是在西方还是东方,孩子看起来都是一个家庭的中心并且越来越变成父母希望和梦想的焦点。

同时,中西方的父母都在给小孩子施加压力。

尤其是在中国,这种压力越来越明显。

而在美国,这种压力有时也稍稍多了一点。

但是,中西方都认为孩子会影响下一代甚至把对孩子的教育看做是有关整个家庭发展的基础。

Y:何教授,您曾说另一个中西方教育的主要不同点是亚洲的学校教育通常更偏向于应试教育。

一个学生的未来可能就由高考的结果决定了。

所以,您是对当前的大学教育的入学途径表示质疑吗?H:是的。

我理解在像中国这样的教育大国的环境下进行全民教育考试改革是一件很困难的事情,但我认为或许我们可以向美国学习,尤其是可以借鉴他们的学习能力倾向测验系统,为孩子们提供更多参与考试的机会。

Y:那在美国有多少种这样的测试或者说其他可以进入大学的机会呢?D:我们有两种进入大学的测验考试:学习能力倾向测验和美国大学测验。

但是我们强调这些测验都是非强制性的。

它们不是由美国的学校强制要求的。事实上,在美国,大多数的公立学校并不要求这些考试。

所以,当你在高中生活的最后一年,你所要考虑的不是你想要进入哪所大学,或者说通过哪种考试能让你进入大学,你所要考虑的是“你想进入大学吗?”这样就可以帮助孩子减轻不少压力,让他们有更多的空间去成长为一个学习者,让他们自己去了解到:考试很重要,但考试并不能决定你是谁,也不能决定你是怎样的一个学生。

我认为这是中美两国在教育方面的最大不同点。

Y:亚洲学生通常在数学和科学学科上能取得高分。

您对此有什么看法吗?H:这让我想起一个电视节目,那是美国全国广播公司在1996年做的一个关于亚洲学生在美国大学的特别报道。

Module1Unit1课文原文及翻译牛津深圳版八年级英语上册

沪教版八年级英语上册课文原文及翻译Module 1 Unit 1 Module 1 Amazing thingsUnit 1 EncyclopaediasReadingLook it up!Here are two articles from an encyclopaedia.Da Vinci, LeonardoLeonardo da Vinci (1452-1519) was an Italian painter, inventor, musician, engineer and scientist.Da Vinci was born in the countryside.From an early age, he showed great intelligence and artistic ability.As he grew older, he learnt to do many different things.His paintings are very famous, and one, the Mona Lisa, is perhaps the most famous painting in the world.He also had many inventions.For example, his notebooks include some interesting drawings of flying machines. (See Art)DinosaursDinosaurs lived on Earth more than 60 million years before human beings.They lived everywhere on Earth.Some dinosaurs were as small as chickens. Others were as big as ten elephants. Some could even fly.Many dinosaurs ate plants. However, some dinosaurs liked to eat meat.Dinosaurs lived on Earth for more than 150 million years.Then, suddenly, they all died out. Nobody knows why.However, we can learn about them from their fossils. (See Earth history)查阅一下!下面是从百科全书两篇文章。

英文原文:TITLE: MAPCon: AN EXPERT SYSTEM TO CONFIGURECOMMUNICA TIONS NETWORKSCONTACT: H. Van Dyke Parunak, James Kindrick, and Tihamer Toth-FejelIndustrial Technology InstitutePO Box 1485Ann Arbor, MI 48106van%iti@(313) 769-4049TOPIC: Case Study: configuration.ABSTRACT: MAPCon is an expert system that performs off-line parameter configuration for local area networks that use MAP, theManufacturing Automation Protocol. This paper describes theconfiguration task in general and MAPCon in particular, anddescribes its performance as a function of network size.MAPCon: AN EXPERT SYSTEM TOCONFIGURE COMMUNICATIONS NETWORKSH. Van Dyke Parunak, James Kindrick, and Tihamer Toth-FejelIndustrial Technology Institute1. Problem DefinitionThis section outlines the challenge of network management in general and the MAP architecture in particular, and describes the specific functions that MAPCon fulfills.1.1. The Challenge of Network ManagementManagement of multi-vendor networks is difficult, since most network management products are designed for a single product line. Network management is especially critical in manufacturing LANs, where real-time manufacturing operations rely heavily on consistent network operation. One way to address this problem is to conform the network to a standard, such as the Manufacturing Automation Protocol (MAP). MAPcon 11, the system described in this case study, is the second generation (11 of a knowledgebased system designed to assist in configuring MAP networks . This function, the first that must be accomplished when designing a network, requires the correlation of a large number of parameters in complex ways in order for thenetwork to behave properly.1.2. MAP Architecture MAP complies with the architecture of a set of international standards defined by the International Organization for Standardization (ISO), based on a reference model for Open Systems Interconnection (OSI). OS1 decomposes the problem of communicating reliably between applications into seven layers: physical, data-link, network, transport, session, presentation, and application.The physical and data-link layers of MAP use the IEEE 802.4 token busspecifications, in contrast to the CSMA/CD technology familiar from Ethernet. The physical layer can be either broadband or carrierband. WCon distinguishes between broadband and carrierband, and can configure both types of networks. In MAP, the network layer uses the ISO connection-less protocol or the =called Internet protocol. The transport layer uses the IS0 class 4 transport protocol. 1,Session uses ISOsession version 2, which is a small, basic subset of ISO session protocol. Presentation uses the ISO presentation layer.MAP specifies several application layer protocols: a manufacturing message specification protocol; an association control service element; a file transfer, access and management protocol; a directory services protocol; and a network management protocol. The protocols at layers 1, 2, and 7 of MAP were partially or completely designed with manufacturing automation in mind.The entities in the MAP network architecture include stations or end systems, subnetworks or LAN [Local Area Network) segments, and interconnections or intermediate systems.Stations combine hardware and software to provide communications according to the MAP specification. They can have either a full MAP, a mini-MAP, or a MAPIEPA (Enhanced Perjormance Architecture) configuration. 
The current version of MAPCon handles only full MAP stations.A subnetwork or LAN segment is a section of a local area network on which all stations share the same token. All stations. on a segment can directly cornmunitate to all other stations on the same segment without any intermediate systems. Subnetworks are of two types: broadband and carrierband. MAPCon handles both types of subnetworks.Interconnection entities (intermediate systems) are devices that connect multiple subnetworks to form the overall MAP network. They are of three types: bridges, routers, and gateways. A bridge interconnects two or more subnetworks with similar media access control services. A router interconnects two or more subnetworks or nptworks of different types. A gateway interconnects two or more subnetworks or networks of different network architecture by performing protocol translation. The current version of MAPCon supports bridges and the MAP side of routers and gateways.1.3. Configuration ManagementConfiguration management is the collection of network management activities that allow the user to know and control the arrangement and state of a network and its entities. The present version of MAPCon lets the user set the network configuration off- line, and thus know its arrangement and state. A future version will let the user control the network configuration as the network operates. The following paragraphs discuss the functions currently provided by MAPCon.Adding or deleting stations in the subnetwork defines the actual topology of the network. Stations and interconnections that are attached to a subnetwork are specified in order to configure the network. This capability not only allows for examining the current configuration of the network but also permitsincremental changes to the configuration.Subnetworks are interconnected by attaching interconnection devices such as bridges, routers, and gateways. Adding such devices requires setting or modifying the relationships between two or more subnetworks.Users need to set the initial characteristics of entities and modify them if the original settings are inconsistent. In general, the characteristics that are set or modified correspond to operational parameters and statistical counters required and maintained by the entities. Each station or interconnection has several layers.Depending on the configuration of the station or the interconnection, specific layers must be present according to the MAP 3.0 specification. Several characteristics in each layer must be set properly across the subnetwork and network in order to configure an operational network. These characteristics include operational parameters, timers, and thresholds on statistical counters. Some of these characteristics are derived from network-wide parameters such as the type of the traffic intended on the network and the nature of the environment under which network will operate (low noise, medium noise, and high noise). Other characteristics are derived from one another. For example, inactivity time at transport is derived from retransmission time and number of retransmissions.Stations and interconnections within a subnetwork must have unique names and MAC addresses. However, there are certain restrictions on names and addresses from a global network point of view. It may be possible to have identical MAC addresses in subnetworks interconnected by a router. 
The configuration functionmust check address consistency.Similar to names and addresses, network managers and load servers must be assigned to stations and interconnections. All stations and interconnections need an associated manager. It is possible to have multiple managers in a network, up to one per subnetwork. A load server for the network is needed if there are anyloadable stations or interconnections in the network configuration.Configuration also requires consistency checks to make sure that all entities are configured correctly. Mechanisms must be provided to recognize and identify the inconsistencies for corrective actions. Consistency in addresses, network managers, and load servers must be checked before the configuration objective is completed.2. Previous ApproachesAI applications in network management have concentrated in the area of diagnosis and fault management rather than configuration.Expert systems for network configuration have been introduced by some vendors, including BBN and CASE Communications. Technical case studies of these commercial products have not been released, so few details of their structure and function are available. At least one uses OPS-83 rules as its major knowledge representation mechanism. While MAPCon does use some OPS rules, it relies heavily on knowledge represented in frames, to support a structural model ofthe network. This type of structure has also been reported in BBN’s Designe t.Previous systems deal with wide-area networks, and focus on problems of routing and resource allocation. One suggested formalization is to allocate network capacity by maximizing an economic criterion related to network revenues. 1151 However, the complexity of the problem precludes use of standard operations research techniques to address this problem. [I21 BBN’s offering includes a facility to allocate parts, such as cables, boards, and racks, to node sites, much along the lines of RI.MAP’S open architecture and bus structure remove problems that other networks must address. Because MAP networks use a bus rather than a point-bpoint architecture, routing is less of a problem. Because MAP is an open architecture, supporting equipment from different vendors, problems of configuring the boards and cables in a single station are also outside of its scope. The major problem addressed by WCon is consistency of parameter settings across different stations. This problem is not a critical one in the networks served by the other programs named above, because they are predominantly single-vendor networks, in which consistency among operating parameters can be enforced at the design or manufacturing stage of a station’s life. The open architecture that allows MAF’Con to push the ”board and cable” problem back to the manufacturer, also forces the network administrator (rather than the manufacturer) to worry about parameter consistency. Furthermore, MAP differs from these other networks in conforming to IS0 protocols for the OS1 seven layer model. The complexity of these protocols and the level of service they provide require many station parameters (62 in MAP 3.0), leading to the need for MAPCon.3. Our ApproachIn this section we discuss some details of MAPCon’s inner structure and function. We detail the different techniques of knowledge representation that it uses, and show how it performs both synthesis and analysis in its reasoning. 
Then we sketch how the interplay between these reasoning domains will increase as MAF’Con evolves from a configuration tool toward a full-fledged network supervisor.3.1、Knowledge Representation The existing network configuration systems on which details are available use heuristic knowledge in rules as their main knowledge resou rce. MAPCon’s central knowledge structure is a semantic net model of the network being configured, constructed in the Carnegie Representation Language. It does use rules, but as a means of constraining relations among these objects more than to capture shallow information about configuration preferences, It also uses some procedural computation.3.1.1. FramesThe major domain knowledge involved in MAP network configuration is the identity and interrelation of the network entities, so a frame model is both a natural way to store this information and a reasonable basis for propagating constraints among related entities.The features of frames that are most important for MAPCon aremodularity, connectivity, inheritance, and demons.MAPCon interactively guides the user in the construction of the network model. Each network entity (including subnetworks, stations, intermediate systems such as routers or bridges, and points of attachment) corresponds to a CRL frame, and its configurable parameters are slots in the frame.MAPCon frames are related to each other by relations such as my-intermediate-system, relating a point of attachment to its intermediate system, and has-elements, relating subnetworks to their component stations and points of attachment. Frames conveniently permit the system to maintain a deep understanding about the connectivity of the network. For example, intermediate systems can be one of three types. The type of intermediate system connecting two subnetworks determines the quality of service provided by the network across that connection. One type of intermediate system supports address translation, while another type does not. Thus, two interconnected subnetworks may or may not be required to share an address space, depending upon the type of intermediate system providing the connection.MAPCon frames make extensive use of inheritance. For example, a subnetwork frame shares many structural characteristics of a network frame, while the three kinds of intermediate systems have numerous points of similarity. The class structur e of MAF’Con’s ontology permits easy addition of new kinds of entities.In developing a consistent set of parameters, a change in one parameter may have a cascading effect on others. MAPCon uses demons attached to critical slots to propagate these constraints among related objects. For example, mazimum-ring-maintenance-rotation -time is a station parameter based on a user input value for the enclosing subnetwork.The key subnetwork input value propagates to the component stations over the has-elements relation, and constrains the value of the dependent parameter of each station.3.1.2. Rules and Procedural KnowledgeIn classical expert systems, rules are used to capture the heuristic, shallow knowledge of human experts. While =me MAPCon rules serve this function, most constrain relations among slots on the objects representing MAP network elements. Some relations can be constrained through inheritance among objects. In other cases, particularly when several parameters of a single element interact, procedural code attached by demons to affected slots. 
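Before the remaining cases are described, a rough sketch of the demon-style propagation just mentioned may help. It mimics the maximum-ring-maintenance-rotation-time example above with plain Python objects; the class names, the has-elements list and the direct copy of the subnetwork value into each station are illustrative stand-ins for MAPCon's CRL frames and its real derivation rules, not its actual code.

```python
class Station:
    """Frame-like object; configurable parameters live in attributes (slots)."""
    def __init__(self, name):
        self.name = name
        self.max_ring_maintenance_rotation_time = None   # dependent parameter

class Subnetwork:
    """Holds the has-elements relation and a 'demon' that fires when the
    key subnetwork-level input is set, pushing a derived value down to
    every attached station."""
    def __init__(self, name):
        self.name = name
        self.has_elements = []            # stations / points of attachment

    def set_ring_rotation_input(self, user_value):
        # Demon body: in real MAPCon the station value is *derived from*
        # the subnetwork input; here it is simply copied through.
        for element in self.has_elements:
            element.max_ring_maintenance_rotation_time = user_value

# Illustrative use: one change at the subnetwork cascades to its stations.
segment = Subnetwork("carrierband-segment-1")
segment.has_elements = [Station("plc-1"), Station("robot-controller-2")]
segment.set_ring_rotation_input(100)      # hypothetical user input, not a MAP 3.0 value
```

In the real system the demons live on CRL slots and the derived station value would follow the MAP 3.0 derivation rules rather than a straight copy.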
In the remaining cases, rules permit straightforward declarative specification of interdependencies that would otherwise require complicated, error-prone and hard-to-modify procedural code. For example, validation of an assigned network manager may require searching multiple interconnected subnetworks and is much easier to understand and implement as a (declarative) rule than an extensive (procedural) search through the semantic model.3.2. Synthesis and AnalysisIt is useful to distinguish between two classes of reasoning objectives: synthesis and analysis. Like many other real-world systems, MAPCon does both,switching between them when appropriate.The basic synthesis problem seeks, given a set of elements and a set of constraints among those elements, to assemble from the elements a structure that satisfies the constraints. The term “planning“ is often used loosely to describe this process, though we prefer to reserve it for a more specific case. We distinguish three major types ofsynthesis, which differ in the class of data [I71 used to represent time in its constraints. We can thus define configuration as synthesis in nominal time; planning in the strict sense as synthesis in ordinal time; and scheduling as synthesis in interval time.Analysis begins with a known structure and reasons about the relation between its behavior and the elements that make it up. The two major forms of analysis are prediction, which reasons from the structure and the behavior of its elements to the behavior of the whole, and interpretation, which reasons from the structure and its observed behavior to the state of its elements. Interpretation in turn can involve monitoring to detect unexpected behavior and diagnosis to explain that behavior.MAPCon is primarily a synthetic system, performing static configuration in the nominal time domain. 1181 MAF’Con determines values for 62 interdependent, configurable parameters for each configurable element (station or point of attachment) of the MAP network being modeled.The parameter setting process must follow a partial time ordering, but the resultant configuration itself is nominal with respect to time.4. PerformanceThe performance of an expert system can be measured both in terms of how well and how quickly it executes its task. It is also interesting to record the match between domain and knowledge engineering environment by noting the amount of custom code needed to build an application.4.1. How Well does MAPCon Perform?WCon's task is to determine whether the parameters in the components of a given network can be configured consistently with one another, and if they can, to perform that configuration. Its structure is such that it always succeeds in configuring a configurable network and in properly flagging an unconfigurable one. Thus, it performs its task well.4.2. How Fast does MAPCon Perform?Since expert systems are often applied in domains that suffer from combinatorial complexity, it is important to understand their speed performance as a function of problem size. While analytic complexity bounds may be available for simple cases, the most straightforward way to assess the speed performance of a full-scale system is to gather execution statistics for it. To this end, we have carried out some preliminary experiments on MAPCon.Leaving aside the user interface, the actual configuration task has three computational phases: procedural computation on the parameters supplied by the user;rule-directed reasoning; and a final phase of procedural computation. 
We recorded CPU time (on a TI Explorer) for each of these phases, and the total number of rule invocations, for five test networks that differed from one another both in the number of stations and in the number of subnetworks. Exhibit 1 shows the number of stations and subnetworks in each test network. Note that configurations 2 and 4 both have ten stations, while 3 and 5 both have twenty stations. The differences between them are thus due only to the division into subnetworks in configurations 4 and 5. In both cases, this division takes the form of a star configuration, with a single router connecting the subnetworks.

In all five configurations, the total number of rule invocations per configuration is linear in the number of stations, regardless of the number of subnetworks. In the plots accompanying this discussion, we use lower case 'x' and 'o' to represent configurations 1, 2, and 3, which we will often call collectively "123" and which compare numbers of stations in a single network. Upper case 'X' and 'O' represent configurations 1, 4, and 5 (collectively "145"), which compare results for different numbers of networks. Configuration 1 appears in both sets.

Exhibit 2 plots raw CPU time to execute the OPS rules portion of the inference cycle, as a function of the number of stations, for configurations 123. This graph shows a roughly linear increase in execution time with the size of working memory, a result in keeping with more general results on the Rete algorithm. But there is a slight convexity (cupped shape) to the curve. To see more detail, we fit by eye a straight line (y = 10.4x - 25.7) to the data and subtract it out, leaving the residuals plotted in Exhibit 3. We have removed slope and magnitude information from these residuals, and in exchange can see more clearly the slight convexity hinted at in the earlier exhibit (note the difference in scale between Exhibits 2 and 3).

翻译

原文著作(期刊)名称:MAPCon: AN EXPERT SYSTEM TO CONFIGURE COMMUNICATIONS NETWORKS
作者:Van Dyke Parunak, H.; Kindrick, J.; Toth-Fejel
原文所在位置:IEEE Xplore数据库
原文出版时间:2002
原文出版地点:美国

MAPCon:配置网络的专家系统

1、问题定义

本节总体概述了网络管理的挑战,特别说明了MAP体系结构,并介绍了MAPCon所实现的具体功能。
