中英文对照资料外文翻译文献

外文文献及翻译

1. 文献:"The Effects of Exercise on Mental Health"
翻译:运动对心理健康的影响

Abstract: This article explores the effects of exercise on mental health. The author discusses various studies that have been conducted on this topic, and presents evidence to support the claim that exercise can have positive impacts on mental well-being. The article also examines the mechanisms through which exercise affects mental health, such as the release of endorphins and the reduction of stress hormones. Overall, the author concludes that exercise is an effective strategy for improving mental health and recommends incorporating physical activity into daily routines.

摘要:本文探讨了运动对心理健康的影响。

作者讨论了在这个主题上进行的各种研究,并提出证据支持运动对心理健康有积极影响的观点。

该文章还探讨了运动如何影响心理健康的机制,如内啡肽的释放和压力激素的减少。

总的来说,作者得出结论,运动是改善心理健康的有效策略,并建议将体育活动纳入日常生活。

2. 文献: "The Benefits of Bilingualism"翻译:双语能力的好处Abstract: This paper examines the benefits of bilingualism. The author presents research findings that demonstrate the cognitiveadvantages of being bilingual, such as enhanced problem-solving skills and improved attention control. The article also explores the social and cultural benefits of bilingualism, such as increased cultural awareness and the ability to communicate with people from different backgrounds. Additionally, the author discusses the positive effects of bilingualism on mental health, highlighting its role in delaying the onset of cognitive decline and in providing a buffer against age-related memory loss. Overall, the author concludes that bilingualism offers a range of advantages and recommends promoting bilingual education and language learning. 摘要:本文研究了双语能力的好处。

参考文献中文的英文对照

在学术论文中,参考文献是非常重要的一部分,它可以提升论文的可信度和学术性,其中包括中文和英文文献。

以下是一些常见的参考文献中文和英文对照:

1. 书籍 Book
中文:王小明. 计算机网络技术. 北京:清华大学出版社,2018.
英文:Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.

2. 学术期刊 Article in Academic Journal
中文:张婷婷,李伟. 基于深度学习的影像分割方法. 计算机科学与探索,2019,13(1):61-67.
英文:Zhang, T. T., Li, W. Image Segmentation Method Based on Deep Learning. Computer Science and Exploration, 2019, 13(1): 61-67.

3. 会议论文 Conference Paper
中文:王维,李丽. 基于云计算的智慧物流管理系统设计. 2019年国际物流与采购会议论文集,2019:112-117.
英文:Wang, W., Li, L. Design of Smart Logistics Management System Based on Cloud Computing. Proceedings of the 2019 International Conference on Logistics and Procurement, 2019: 112-117.

4. 学位论文 Thesis/Dissertation
中文:李晓华. 基于模糊神经网络的水质评价模型研究. 博士学位论文,长春:吉林大学,2018.
英文:Li, X. H. Research on Water Quality Evaluation Model Based on Fuzzy Neural Network. Doctoral Dissertation, Changchun: Jilin University, 2018.

5. 报告 Report
中文:国家统计局. 2019年国民经济和社会发展统计公报. 北京:中国统计出版社,2019.
英文:National Bureau of Statistics. Statistical Communique of the People's Republic of China on the 2019 National Economic and Social Development. Beijing: China Statistics Press, 2019.

以上是一些常见的参考文献中文和英文对照,希望对大家写作有所帮助。

交通安全外文翻译文献中英文

外文文献翻译(含:英文原文及中文译文)

英文原文

POSSIBILITIES AND LIMITATIONS OF ACCIDENT ANALYSIS

S. Oppe

Abstract
Accident statistics, especially those collected at a national level, are particularly useful for the description, monitoring and prognosis of accident developments, the detection of positive and negative safety developments, the definition of safety targets and the (product) evaluation of long term and large scale safety measures. The application of accident analysis is strongly limited for problem analysis, prospective and retrospective safety analysis of newly developed traffic systems or safety measures, as well as for (process) evaluation of special short term and small scale safety measures. There is an urgent need for the analysis of accidents in real time, in combination with background behavioural research. Automatic incident detection, combined with video recording of accidents, may soon result in financially acceptable research. This type of research may eventually lead to a better understanding of the concept of risk in traffic and to well-established theories.

Keywords: consequences; purposes; describe; limitations; concerned; accident analysis; possibilities

1. Introduction
This paper is primarily based on personal experience concerning traffic safety, safety research and the role of accident analysis in this research. These experiences resulted in rather philosophical opinions as well as more practical viewpoints on research methodology and statistical analysis. A number of these findings are already published elsewhere.

From this lack of direct observation of accidents, a number of methodological problems arise, leading to continuous discussions about the interpretation of findings that cannot be tested directly. For a fruitful discussion of these methodological problems it is very informative to look at a real accident on video. It then turns out that most of the relevant information used to explain the accident will be missing in the accident record. In-depth studies also cannot recollect all the data that is necessary in order to test hypotheses about the occurrence of the accident. For a particular car-car accident, recorded on video at an urban intersection in the Netherlands, in which a car coming from a minor road collided with a car on the major road, the following questions could be asked: Why did the driver of the car coming from the minor road suddenly accelerate after coming almost to a stop and hit the side of the car from the left on the main road? Why was the approaching car not noticed? Was it because the driver was preoccupied with the two cars coming from the right and the gap before them that offered him the possibility to cross? Did he look left before, but was his view possibly blocked by the green van parked at the corner? Certainly the traffic situation was not complicated. At the moment of the accident there were no bicyclists or pedestrians present to distract his attention at the regularly overcrowded intersection. The parked green van disappeared within five minutes; the two other cars that may have been important left without a trace. It is hardly possible to observe traffic behaviour under the most relevant condition of an accident occurring, because accidents are very rare events, given the large number of trips. Given the new video equipment and the recent developments in automatic incident and accident detection, it becomes more and more realistic to collect such data at not too high costs.
In addition to this type of data, which is most essential for a good understanding of the risk increasing factors in traffic, it is also important to look at normal traffic behaviour as a reference base.

The question about the possibilities and limitations of accident analysis is not lightly answered. We cannot speak unambiguously about accident analysis. Accident analysis covers a whole range of activities, each originating from a different background and based on different sources of information: national data banks, additional information from other sources, specially collected accident data, behavioural background data etc. To answer the question about the possibilities and limitations, we first have to look at the cycle of activities in the area of traffic safety. Some of these activities are mainly concerned with the safety management of the traffic system; some others are primarily research activities. The following steps should be distinguished:

- detection of new or remaining safety problems;
- description of the problem and its main characteristics;
- analysis of the problem, its causes and suggestions for improvement;
- selection and implementation of safety measures;
- evaluation of measures taken.

Although this cycle can be carried out by the same person or group of persons, the problem has a different (political/managerial or scientific) background at each stage. We will describe the phases in which accident analysis is used. It is important to make this distinction; many fruitless discussions about the method of analysis result from ignoring it.

Politicians or road managers are not primarily interested in individual accidents. From their perspective accidents are often treated equally, because the total outcome is much more important than the whole chain of events leading to each individual accident. Therefore, each accident counts as one, and together they add up to a final safety result. Researchers are much more interested in the chain of events leading to an individual accident. They want detailed information about each accident, to detect its causes and the relevant conditions. The politician wants only those details that direct his actions. At the highest level this is the decrease in the total number of accidents. The main source of information is the national database and its statistical treatment. For him, accident analysis is looking at (subgroups of) accident numbers and their statistical fluctuations. This is the main stream of accident analysis as applied in the area of traffic safety. Therefore, we will first describe these aspects of accidents.

2. The nature of accidents and their statistical characteristics
The basic notion is that accidents, whatever their cause, appear according to a chance process. Two simple assumptions are usually made to describe this process for (traffic) accidents:

- the probability of an accident occurring is independent of the occurrence of previous accidents;
- the occurrence of accidents is homogeneous in time.

If these two assumptions hold, then accidents are Poisson distributed. The first assumption does not meet much criticism. Accidents are rare events and therefore not easily influenced by previous accidents. In some cases where there is a direct causal chain (e.g., when a number of cars run into each other) the series of accidents may be regarded as one complicated accident with many cars involved. The assumption does not apply to casualties.
Casualties are often related to the same accident, and therefore the independence assumption does not hold for them. The second assumption seems less obvious at first sight. The occurrence of accidents through time or at different locations is not equally likely. However, the assumption need not hold over long time periods; it is a rather theoretical assumption in its nature. If it holds for short periods of time, then it also holds for long periods, because the sum of Poisson distributed variables, even if their Poisson rates are different, is also Poisson distributed. The Poisson rate for the sum of these periods is then equal to the sum of the Poisson rates for these parts.

The assumption that really counts for a comparison of (composite) situations is whether two outcomes from an aggregation of situations in time and/or space have a comparable mix of basic situations: e.g., the comparison of the number of accidents on one particular day of the year with another day (the next day, or the same day of the next week etc.). If the conditions are assumed to be the same (same duration, same mix of traffic and situations, same weather conditions etc.), then the resulting numbers of accidents are outcomes of the same Poisson process. This assumption can be tested by estimating the rate parameter on the basis of the two observed values (the estimate being the average of the two values). Probability theory can be used to compute the likelihood of the equality assumption, given the two observations and their mean.

This statistical procedure is rather powerful. The Poisson assumption has been investigated many times and turns out to be supported by a vast body of empirical evidence. It has been applied in numerous situations to find out whether differences in observed numbers of accidents suggest real differences in safety. The main purpose of this procedure is to detect differences in safety. This may be a difference over time, or between different places, or between different conditions. Such differences may guide the process of improvement. Because the main concern is to reduce the number of accidents, such an analysis may lead to the most promising areas for treatment. A necessary condition for the application of such a test is that the numbers of accidents to be compared are large enough to show existing differences. In many local cases an application is not possible. Accident black-spot analysis is often hindered by this limitation, e.g., if such a test is applied to find out whether the number of accidents at a particular location is higher than average.

The procedure described can also be used if the accidents are classified according to a number of characteristics to find promising safety targets. Not only with aggregation, but also with disaggregation the Poisson assumption holds, and the accident numbers can be tested against each other on the basis of the Poisson assumptions. Such a test is rather cumbersome, because for each particular case, i.e. for each different Poisson parameter, the probabilities of all possible outcomes must be computed to apply the test. In practice, this is not necessary when the numbers are large. Then the Poisson distribution can be approximated by a Normal distribution, with mean and variance equal to the Poisson parameter. Once the mean value and the variance of a Normal distribution are given, all tests can be rephrased in terms of the standard Normal distribution with zero mean and variance one.
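A minimal Python sketch of this testing procedure (illustrative only, not part of the original paper; the function name and the example counts are invented): under the hypothesis of one common Poisson rate, the total of two counts splits binomially with p = 1/2, which gives an exact test, and for large counts the difference can be referred to the standard Normal distribution instead.

import math
from scipy import stats

def compare_accident_counts(k1, k2):
    """Test H0: accident counts k1 and k2 come from the same Poisson process."""
    n = k1 + k2
    # Exact conditional test: given the total n, k1 ~ Binomial(n, 0.5) under H0.
    exact_p = stats.binomtest(k1, n, p=0.5).pvalue
    # Normal approximation: under H0, k1 - k2 has mean 0 and variance n.
    z = (k1 - k2) / math.sqrt(n)
    approx_p = 2 * stats.norm.sf(abs(z))
    return exact_p, approx_p

# Example: 25 accidents at a site this year against 40 in the previous year.
print(compare_accident_counts(25, 40))  # both p-values fall near 0.06-0.08

Both versions implement the comparison described above; the exact form is preferable for the small counts typical of black-spot analysis.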
No computations are necessary any more; test statistics can be drawn from tables.

3. The use of accident statistics for traffic safety policy
The testing procedure described has its merits for those types of analysis that are based on the assumptions mentioned. The best example of such an application is the monitoring of safety for a country or region over a year, using the total number of accidents (possibly of a particular type, such as fatal accidents), in order to compare this number with the outcome of the year before. If sequences of accidents are given over several years, then trends in the developments can be detected and accident numbers predicted for following years. Once such a trend is established, the value for the next year or years can be predicted, together with its error bounds. Deviations from a given trend can also be tested afterwards, and new actions planned. The most famous such analysis was carried out by Smeed (1949). We will discuss this type of accident analysis in more detail later.

(1) The application of the Chi-square test for interaction is generalised to higher order classifications. Foldvary and Lane (1974), in measuring the effect of compulsory wearing of seat belts, were among the first to apply the partitioning of the total Chi-square into values for the higher order interactions of four-way tables.

(2) Tests are not restricted to overall effects, but Chi-square values can be decomposed regarding sub-hypotheses within the model. Also in the two-way table, the total Chi-square can be decomposed into interaction effects of part tables. The advantage of (1) and (2) over previous situations is that large numbers of Chi-square tests on many interrelated (sub)tables and corresponding Chi-squares were replaced by one analysis with an exact partitioning of one Chi-square.

(3) More attention is paid to parameter estimation. E.g., the partitioning of the Chi-square made it possible to test for linear or quadratic restraints on the row-parameters or for discontinuities in trends.

(4) The unit of analysis is generalised from counts to weighted counts. This is especially advantageous for road safety analyses, where corrections for period of time, number of road users, number of locations or number of vehicle kilometres are often necessary. The last option is not found in many statistical packages. Andersen (1977) gives an example for road safety analysis in a two-way table. A computer programme WPM, developed for this type of analysis of multi-way tables, is available at SWOV (see: De Leeuw and Oppe, 1976).

The accident analysis at this level is not explanatory. It tries to detect safety problems that need special attention. The basic information needed consists of accident numbers, to describe the total amount of unsafety, and exposure data to calculate risks and to find situations or (groups of) road users with a high level of risk.

4. Accident analysis for research purposes
Traffic safety research is concerned with the occurrence of accidents and their consequences. Therefore, one might say that the object of research is the accident. The researcher's interest, however, is less focused on this final outcome itself, but much more on the process that results (or does not result) in accidents. Therefore, it is better to regard the critical event in traffic as his object of study.
One of the major problems in the study of the traffic process that results in accidents is that the actual occurrence is hardly ever observed by the researcher. Investigating a traffic accident, he will try to reconstruct the event from indirect sources such as the information given by the road users involved, or by eye-witnesses, about the circumstances, the characteristics of the vehicles, the road and the drivers. As such this is not unique in science; there are more examples of an indirect study of the object of research. However, a second difficulty is that the object of research cannot be evoked. Systematic research by means of controlled experiments is only possible for aspects of the problem, not for the problem itself. The combination of indirect observation and lack of systematic control makes it very difficult for the investigator to detect which factors, under what circumstances, cause an accident. Although the researcher is primarily interested in the process leading to accidents, he has almost exclusively information about the consequences, the product of it: the accident.

Furthermore, the context of accidents is complicated. Generally speaking, the following aspects can be distinguished:

- Given the state of the traffic system, traffic volume and composition, the manoeuvres of the road users, their speeds, the weather conditions, the condition of the road, the vehicles, the road users and their interactions, accidents can or cannot be prevented.
- Given an accident, also depending on a large number of factors, such as the speed and mass of vehicles, the collision angle, the protection of road users and their vulnerability, the location of impact etc., injuries are more or less severe or the material damage is more or less substantial.

Although these aspects cannot be studied independently, from a theoretical point of view it has advantages to distinguish the number of situations in traffic that are potentially dangerous from the probability of having an accident given such a potentially dangerous situation, and also from the resulting outcome, given a particular accident. This conceptual framework is the general basis for the formulation of risk regarding the decisions of individual road users as well as the decisions of controllers at higher levels. In the mathematical formulation of risk we need an explicit description of our probability space, consisting of the elementary events (the situations) that may result in accidents, the probability for each type of event to end up in an accident, and finally the particular outcome, the loss, given that type of accident.

A different approach is to look at combinations of accident characteristics, to find critical factors. This type of analysis may be carried out on the total group of accidents or on subgroups. The accident itself may be the unit of research, but also a road, a road location, a road design (e.g. a roundabout) etc.

中文译文

交通事故分析的可能性和局限性

S. Oppe

摘要
交通事故的统计数字,尤其是国家一级的数据,对事故发展的描述、监控和预测,对积极或消极安全发展的检测,以及对安全目标的定义和长期、大规模安全措施的(产品)评估特别有益。

道路与桥梁工程中英文对照外文翻译文献

中英文对照外文翻译(文档含英文原文和中文翻译)

Bridge research in Europe

A brief outline is given of the development of the European Union, together with the research platform in Europe. The special case of post-tensioned bridges in the UK is discussed. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio, relating to the identification of voids in post-tensioned concrete bridges using digital impulse radar.

Introduction
The challenge in any research arena is to harness the findings of different research groups to identify a coherent mass of data, which enables research and practice to be better focused. A particular challenge exists with respect to Europe, where language barriers are inevitably very significant. The European Community was formed in the 1960s based upon a political will within continental Europe to avoid the European civil wars which developed into World War 2 from 1939 to 1945. The strong political motivation formed the original community, of which Britain was not a member. Many of the continental countries saw Britain's interest as being purely economic. The 1970s saw Britain joining what was then the European Economic Community (EEC), and the 1990s saw the widening of the community to a European Union, EU, with certain political goals together with the objective of a common European currency.

Notwithstanding these financial and political developments, civil engineering and bridge engineering in particular have found great difficulty in forming any kind of common thread. Indeed, the educational systems for University training are quite different between Britain and the European continental countries. The formation of the EU funding schemes (e.g. Socrates, Brite Euram and other programs) has helped significantly. The Socrates scheme is based upon the exchange of students between Universities in different member states. The Brite Euram scheme has involved technical research grants given to consortia of academics and industrial partners within a number of the states; a Brite Euram bid would normally be led by an industrialist.

In terms of dissemination of knowledge, two quite different strands appear to have emerged. The UK and the USA have concentrated primarily upon disseminating basic research in refereed journal publications (ASCE, ICE and other journals), whereas the continental Europeans have frequently disseminated basic research at conferences where the circulation of the proceedings is restricted. Additionally, language barriers have proved to be very difficult to break down. In countries where English is a strong second language there has been enthusiastic participation in international conferences based within continental Europe (e.g. Germany, Italy, Belgium, The Netherlands and Switzerland). However, countries where English is not a strong second language have been hesitant participants (e.g. France).

European research
Examples of research relating to bridges in Europe can be divided into three types of structure:

Masonry arch bridges
Britain has the largest stock of masonry arch bridges. In certain regions of the UK up to 60% of the road bridges are historic stone masonry arch bridges originally constructed for horse-drawn traffic. This is less common in other parts of Europe, as many of these bridges were destroyed during World War 2.

Concrete bridges
A large stock of concrete bridges was constructed during the 1950s, 1960s and 1970s. At the time, these structures were seen as maintenance free.
Europe also has a large number of post-tensioned concrete bridges with steel tendon ducts preventing radar inspection. This is a particular problem in France and the UK.

Steel bridges
Steel bridges went out of fashion in the UK due to their need for maintenance, as perceived in the 1960s and 1970s. However, they have been used for long span and rail bridges, and they are now returning to fashion for motorway widening schemes in the UK.

Research activity in Europe
The following gives an indication of certain areas of expertise and work being undertaken in Europe, but is by no means exhaustive. In order to illustrate the type of European research being undertaken, an example is given from the University of Edinburgh portfolio. The example relates to the identification of voids in post-tensioned concrete bridges, using digital impulse radar.

Post-tensioned concrete rail bridge analysis
Ove Arup and Partners carried out an inspection and assessment of the superstructure of a 160 m long post-tensioned, segmental railway bridge in Manchester to determine its load-carrying capacity prior to a transfer of ownership, for use in the Metrolink light rail system. Particular attention was paid to the integrity of its post-tensioned steel elements. Physical inspection, non-destructive radar testing and other exploratory methods were used to investigate for possible weaknesses in the bridge.

Since the sudden collapse of Ynys-y-Gwas Bridge in Wales, UK in 1985, there has been concern about the long-term integrity of segmental, post-tensioned concrete bridges, which may be prone to 'brittle' failure without warning. The corrosion protection of the post-tensioned steel cables, where they pass through joints between the segments, has been identified as a major factor affecting the long-term durability and consequent strength of this type of bridge. The identification of voids in grouted tendon ducts at vulnerable positions is recognized as an important step in the detection of such corrosion.

Description of bridge

General arrangement
Besses o' th' Barn Bridge is a 160 m long, three span, segmental, post-tensioned concrete railway bridge built in 1969. The main span of 90 m crosses over both the M62 motorway and the A665 Bury to Prestwich Road. Minimum headroom is 5.18 m from the A665, and the M62 is cleared by approximately 12.5 m.

The superstructure consists of a central hollow trapezoidal concrete box section 6.7 m high and 4 m wide. The majority of the south and central spans are constructed using 1.27 m long pre-cast concrete trapezoidal box units, post-tensioned together. This box section supports the in situ concrete transverse cantilever slabs at bottom flange level, which carry the rail tracks and ballast.

The centre and south span sections are of post-tensioned construction. These post-tensioned sections have five types of pre-stressing:
1. Longitudinal tendons in grouted ducts within the top and bottom flanges.
2. Longitudinal internal draped tendons located alongside the webs. These are deflected at internal diaphragm positions and are encased in in situ concrete.
3. Longitudinal Macalloy bars in the transverse cantilever slabs in the central span.
4. Vertical Macalloy bars in the 229 mm wide webs to enhance shear capacity.
5. Transverse Macalloy bars through the bottom flange to support the transverse cantilever slabs.

Segmental construction
The pre-cast segmental system of construction used for the south and centre span sections was an alternative method proposed by the contractor.
Current thinking suggests that such a form of construction can lead to 'brittle' failure of the entire structure without warning, due to corrosion of tendons across a construction joint. The original design concept had been for in situ concrete construction.

Inspection and assessment

Inspection
Inspection work was undertaken in a number of phases and was linked with the testing required for the structure. The initial inspections recorded a number of visible problems including:
- defective waterproofing on the exposed surface of the top flange;
- water trapped in the internal space of the hollow box with depths up to 300 mm;
- various drainage problems at joints and abutments;
- longitudinal cracking of the exposed soffit of the central span;
- longitudinal cracking on sides of the top flange of the pre-stressed sections;
- widespread spalling on some in situ concrete surfaces with exposed rusting reinforcement.

Assessment
The subject of an earlier paper, the objectives of the assessment were to:
- estimate the present load-carrying capacity;
- identify any structural deficiencies in the original design;
- determine reasons for existing problems identified by the inspection.

Conclusion to the inspection and assessment
Following the inspection and the analytical assessment, one major element of doubt still existed. This concerned the condition of the embedded pre-stressing wires, strands, cables or bars. For the purpose of structural analysis these elements had been assumed to be sound. However, due to the very high forces involved, a risk to the structure, caused by corrosion of these primary elements, was identified. The initial recommendations which completed the first phase of the assessment were:
1. Carry out detailed material testing to determine the condition of hidden structural elements, in particular the grouted post-tensioned steel cables.
2. Conduct concrete durability tests.
3. Undertake repairs to defective waterproofing and surface defects in concrete.

Testing procedures

Non-destructive radar testing
During the first phase investigation, at a joint between pre-cast deck segments, the observation of a void in a post-tensioned cable duct gave rise to serious concern about corrosion and the integrity of the pre-stress. However, the extent of this problem was extremely difficult to determine. The bridge contains 93 joints with an average of 24 cables passing through each joint, i.e. there were approximately 2200 positions where investigations could be carried out. At a typical section through such a joint, the 24 draped tendons within the spine did not give rise to concern, because these were protected by in situ concrete poured without joints after the cables had been stressed.

As it was clearly impractical to consider physically exposing all tendon/joint intersections, radar was used to investigate a large number of tendons and hence locate duct voids within a modest timescale. It was fortunate that the corrugated steel ducts around the tendons were discontinuous through the joints, which allowed the radar to detect the tendons and voids. The problem, however, was still highly complex due to the high density of other steel elements which could interfere with the radar signals, and the fact that the area of interest was at most 102 mm wide and embedded between 150 mm and 800 mm deep in thick concrete slabs.

Trial radar investigations
Three companies were invited to visit the bridge and conduct a trial investigation. One company decided not to proceed. The remaining two were given 2 weeks to mobilize, test and report.
Their results were then compared with physical explorations. To make the comparisons, observation holes were drilled vertically downwards into the ducts at a selection of 10 locations, which included several where voids were predicted and several where the ducts were predicted to be fully grouted. A 25 mm diameter hole was required in order to facilitate use of the chosen borescope. The results from the University of Edinburgh yielded an accuracy of around 60%.

Main radar survey, borescope verification of voids
Having completed a radar survey of the total structure, a borescope was then used to investigate all predicted voids, and in more than 60% of cases this gave a clear confirmation of the radar findings. In several other cases some evidence of honeycombing in the in situ stitch concrete above the duct was found. When viewing voids through the borescope, however, it proved impossible to determine their actual size or how far they extended along the tendon ducts, although they only appeared to occupy less than the top 25% of the duct diameter. Most of these voids, in fact, were smaller than the diameter of the flexible borescope being used (approximately 9 mm) and were seen between the horizontal top surface of the grout and the curved upper limit of the duct. In a very few cases the tops of the pre-stressing strands were visible above the grout, but no sign of any trapped water was seen. It was not possible, using the borescope, to see whether those cables were corroded.

Digital radar testing
The test method involved exciting the joints using radio frequency radar antennas at 1 GHz, 900 MHz and 500 MHz. The highest frequency gives the highest resolution but has shallow depth penetration in the concrete. The lowest frequency gives the greatest depth penetration but yields lower resolution. The data collected on the radar sweeps were recorded on a GSSI SIR System 10. This system involves radar pulsing and recording. The data from the antenna is transformed from an analogue signal to a digital signal using a 16-bit analogue-to-digital converter, giving a very high resolution for subsequent data processing. The data is displayed on site on a high-resolution color monitor. Following visual inspection it is then stored digitally on a 2.3-gigabyte tape for subsequent analysis and signal processing. The tape first of all records a 'header' noting the digital radar settings together with the trace number prior to recording the actual data. When the data is played back, one is able to clearly identify all the relevant settings, making for accurate and reliable data reproduction. At particular locations along the traces, the trace was marked using a marker switch on the recording unit or the antenna.

All the digital records were subsequently downloaded at the University's NDT laboratory on to a micro-computer. (The raw data prior to processing consumed 35 megabytes of digital data.) Post-processing was undertaken using sophisticated signal processing software. Techniques available for the analysis include changing the color transform and changing the scales from linear to a skewed distribution in order to highlight certain features. Also, the color transforms could be changed to highlight phase changes. In addition to these color transform facilities, sophisticated horizontal and vertical filtering procedures are available. Using a large screen monitor it is possible to display in split screens the raw data and the transformed processed data; a minimal sketch of one such horizontal filtering step is given below.
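The following Python sketch is illustrative only (it is not the GSSI software; the array names and sizes are invented). It shows one common form of horizontal filtering on radar data: subtracting the average trace from a B-scan suppresses reflections that are constant across the scan, such as the slab surface, so that localized features such as duct voids stand out.

import numpy as np

def remove_horizontal_background(bscan):
    """bscan has shape (n_samples, n_traces): time samples down, scan positions across."""
    mean_trace = bscan.mean(axis=1, keepdims=True)  # average reflection over all traces
    return bscan - mean_trace                       # keep only local variation

# Synthetic example: 512 time samples per trace, 200 traces across a joint.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.05, size=(512, 200))
data[100, :] += 1.0          # a flat, continuous reflector, present in every trace
data[250, 90:110] += 0.8     # a localized anomaly, e.g. a void reflection
filtered = remove_horizontal_background(data)
# After filtering, the flat reflector at sample 100 is largely removed,
# while the localized anomaly around traces 90-110 is preserved.

Vertical filtering works analogously along the time axis, and the raw and filtered arrays can be displayed side by side, as in the split-screen comparison described above.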
Thus one is able to get an accurate indication of the processing which has taken place. The computer screen displays the time domain calibrations of the reflected signals on the vertical axis. A further facility of the software was the ability to display the individual radar pulses as time domain wiggle plots. This was a particularly valuable feature when looking at individual records in the vicinity of the tendons.

Interpretation of findings
A full analysis of findings is given elsewhere. Essentially, the digitized radar plots were transformed to color line scans, and where double phase shifts were identified in the joints, voiding was diagnosed.

Conclusions
1. An outline of the bridge research platform in Europe is given.
2. The use of impulse radar has contributed considerably to the level of confidence in the assessment of the Besses o' th' Barn Rail Bridge.
3. The radar investigations revealed extensive voiding within the post-tensioned cable ducts. However, no sign of corrosion on the stressing wires had been found except for the very first investigation.

中文译文

欧洲桥梁研究

欧洲联盟共同的研究平台诞生于欧洲联盟。

毕业论文文献外文翻译----危机管理:预防,诊断和干预文献翻译-中英文文献对照翻译

毕业论文(设计)外文翻译

标题:危机管理:预防、诊断和干预

一、外文原文

标题:Crisis management: prevention, diagnosis and intervention

原文:
The premise of this paper is that crises can be managed much more effectively if the company prepares for them. Therefore, the paper shall review some recent crises, the way they were dealt with, and what can be learned from them. Later, we shall deal with the anatomy of a crisis by looking at some symptoms, and lastly discuss the stages of a crisis and recommend methods for prevention and intervention.

Crisis acknowledgment
Although many business leaders will acknowledge that crises are a given for virtually every business firm, many of these firms do not take productive steps to address crisis situations. As one survey of Chief Executive Officers of Fortune 500 companies discovered, 85 percent said that a crisis in business is inevitable, but only 50 percent of these had taken any productive action in preparing a crisis plan (Augustine, 1995). Companies generally go to great lengths to plan their financial growth and success. But when it comes to crisis management, they often fail to think about and prepare for those eventualities that may lead to a company's total failure. Safety violations, plants in need of repairs, union contracts, management succession, choosing a brand name, etc. can become crises for which many companies fail to be prepared until it is too late.

The tendency, in general, is to look at the company as a perpetual entity that requires plans for growth. Ignoring the probabilities of disaster is not going to eliminate or delay their occurrence. Strategic planning without inclusion of crisis management is like sustaining life without guaranteeing life. One reason so many companies fail to take steps to proactively plan for crisis events is that they fail to acknowledge the possibility of a disaster occurring. Like an ostrich with its head in the sand, they simply choose to ignore the situation, with the hope that by not talking about it, it will not come to pass. Hal Walker, a management consultant, points out "that decisions will be more rational and better received, and the crisis will be of shorter duration, for companies who prepare a proactive crisis plan" (Maynard, 1993).

It is said that "there are two kinds of crises: those that you manage, and those that manage you" (Augustine, 1995). Proactive planning helps managers to control and resolve a crisis. Ignoring the possibility of a crisis, on the other hand, could lead to the crisis taking on a life of its own. In 1979, the Three Mile Island nuclear power plant experienced a crisis when warning signals indicated nuclear reactors were at risk of a meltdown. The system was equipped with a hundred or more different alarms, and they all went off. But for those who should have taken the necessary steps to resolve the situation, there were no planned instructions as to what should be done first. Hence, the crisis was not acknowledged in the beginning and it became a chronic event.

In June 1997, Nike faced a crisis for which they had no existing frame of reference. A new design on the company's Summer Hoop line of basketball shoes, with the word "air" written in flaming letters, had sparked a protest by Muslims, who complained the logo resembled the Arabic word for Allah, or God. The Council of American-Islamic Relations threatened a global Nike boycott.
Nike apologized, recalled 38,000 pairs of shoes, and discontinued the line (Brindley, 1997). To create the brand, Nike had spent a considerable amount of time and money, but had never put together a general framework or policy to deal with such controversies. To their dismay, and financial loss, Nike officials had no choice but to react to the crisis. This incident has definitely signaled to the company that spending a little more time would have prevented the crisis. Nonetheless, it has taught the company a lesson in strategic crisis management planning.

In a business organization, symptoms or signals can alert the strategic planners or executives of an imminent crisis. Slipping market share, losing strategic synergy and diminishing productivity per man hour, as well as trends, issues and developments in the socio-economic, political and competitive environments, can signal crises, the effects of which can be very detrimental. After all, business failures and bankruptcies are not intended. They do not usually happen overnight. They occur more because of the lack of attention to symptoms than any other factor.

Stages of a crisis
Most crises do not occur suddenly. The signals can usually be picked up and the symptoms checked as they emerge. A company determined to address these issues realizes that the real challenge is not just to recognize crises, but to recognize them in a timely fashion (Darling et al., 1996). A crisis can consist of four different and distinct stages (Fink, 1986). The phases are: prodromal crisis stage, acute crisis stage, chronic crisis stage and crisis resolution stage.

Modern organizations are often called "organic" due to the fact that they are not immune from the elements of their surrounding environments. Very much like a living organism, organizations can be affected by environmental factors both positively and negatively. But today's successful organizations are characterized by the ability to adapt by recognizing important environmental factors, analyzing them, evaluating the impacts and reacting to them. The art of strategic planning (as it relates to crisis management) involves all of the above activities. The right strategy, in general, provides for preventive measures, and treatment or resolution efforts both proactively and reactively. It would be quite appropriate to examine the first three stages of a crisis before taking up the treatment, resolution or intervention stage.

Prodromal crisis stage
In the field of medicine, a prodrome is a symptom of the onset of a disease. It gives a warning signal. In business organizations, the warning lights are always blinking. No matter how successful the organization, a number of issues and trends may concern the business if proper and timely attention is paid to them. For example, in 1995, Barings Bank, a UK financial institution which had been in existence since 1763, suddenly and unexpectedly failed. There was ample opportunity for the bank to catch the signals that something bad was on the horizon, but the company's efforts to detect that were thwarted by an internal structure that allowed a single employee both to conduct and to oversee his own investment trades, and by the breakdown of management oversight and internal control systems (Mitroff et al., 1996). Likewise, looking in retrospect, McDonald's fast food chain was given prodromal symptoms before the elderly lady sued them for the spilling of a very hot cup of coffee on her lap, an event that resulted in a substantial financial loss and a tarnished image for the company.
Numerous consumers had complained about the temperature of the coffee. The warning light was on, but the company did not pay attention. It would have been much simpler to pick up the signal, or to check the symptom, than to face the consequences. In another case, Jack in the Box, a fast food chain, had several customers suffer intestinal distress after eating at their restaurants. The prodromal symptom was there, but the company took evasive action. Their initial approach was to look around for someone to blame. The lack of attention, the evasiveness and the carelessness angered all the constituent groups, including their customers. The unfortunate deaths that occurred as a result of the company's ignoring the symptoms, and the financial losses that followed, caused the company to realize that it would have been easier to manage the crisis directly in the prodromal stage rather than trying to shift the blame.

Acute crisis stage
A prodromal stage may be oblique and hard to detect. The examples given above are obviously prodromal, but no action was taken. According to Webster's New Collegiate Dictionary, an acute stage occurs when a symptom "demands urgent attention." Whether the acute symptom emerges suddenly or is a transformation of a prodromal stage, an immediate action is required. Diverting funds and other resources to this emerging situation may cause disequilibrium and disturbance in the whole system. It is only those organizations that have already prepared a framework for these crises that can sustain their normal operations. For example, the US public roads and bridges have for a long time reflected a prodromal stage of crisis awareness by showing cracks and occasionally a collapse. It is perhaps in light of the obsessive decision to balance the Federal budget that reacting to the problem has been delayed and ignored. This situation has entered an acute stage and, at the time of this writing, it was reported that a bridge in Maryland had just collapsed.

The reason why prodromes are so important to catch is that it is much easier to manage a crisis in this stage. In the case of most crises, it is much easier and more reliable to take care of the problem before it becomes acute, before it erupts and causes possible complications (Darling et al., 1996). In an acute stage, the task is to contain the damage; the losses, however, are incurred. Intel, the largest producer of computer chips in the USA, had to pay an expensive price for initially refusing to recall computer chips that proved unreliable on certain calculations. The firm attempted to play the issue down and later learned its lesson. At an acute stage, when accusations were made that the Pentium chips were not as fast as they claimed, Intel quickly admitted the problem, apologized for it, and set about fixing it (Mitroff et al., 1996).

Chronic crisis stage
During this stage, the symptoms are quite evident and always present. It is a period of "make or break." Being the third stage, chronic problems may prompt the company's management to once and for all do something about the situation. It may be the beginning of recovery for some firms, and a death knell for others. For example, the Chrysler Corporation was only marginally successful throughout the 1970s. It was not, however, until the company was nearly bankrupt that a management shake-out occurred. The drawback at the chronic stage is that, like a human patient, the company may get used to "quick fixes" and "band-aid" approaches.
After all, the ailment, the problem and the crisis have become an integral part of the organization's life. Management may also be so overwhelmed by prodromal and acute problems that no time or attention is paid to the chronic problems, or the managers perceive the situation to be tolerable, thus putting the crisis on a back burner.

Crisis resolution
Crises can be detected at various stages of their development. Since the existing symptoms may be related to different problems or crises, there is a great possibility that they may be misinterpreted. Therefore, the people in charge may believe they have resolved the problem. However, in practice the symptom is often neglected. In such situations, the symptom will offer another chance for resolution when it becomes acute, thereby demanding urgent care. Studies indicate that today an increasing number of companies are issue-oriented and search for symptoms. Nevertheless, the lack of experience in resolving a situation and/or inappropriate handling of a crisis can lead to a chronic stage. Of course, there is this last opportunity to resolve the crisis at the chronic stage. No attempt to resolve the crisis, or improper resolution, can lead to grim consequences that will ultimately plague the organization or even destroy it. It must be noted that an unresolved crisis may not destroy the company. But its weakening effects can ripple through the organization and create a host of other complications.

Preventive efforts
The heart of the resolution of a crisis is in the preventive efforts the company has initiated. This step, as with a human body, is actually the least expensive, but quite often the most overlooked. Preventive measures deal with sensing potential problems (Gonzales-Herrero and Pratt, 1995). Major internal functions of a company such as finance, production, procurement, operations, marketing and human resources are sensitive to the socio-economic, political-legal, competitive, technological, demographic, global and ethical factors of the external environment. What is imminently more sensible and much more manageable is to identify the processes necessary for assessing and dealing with future crises as they arise (Jackson and Schantz, 1993). At the core of this process are appropriate information systems, planning procedures, and decision-making techniques. A soundly-based information system will scan the environment, gather appropriate data, interpret this data into opportunities and challenges, and provide a concrete foundation for strategies that could function as much to avoid crises as to intervene and resolve them.

Preventive efforts, as stated before, require preparations before any crisis symptoms set in. Generally, strategic forecasting, contingency planning, issues analysis, and scenario analysis help to provide a framework that could be used in avoiding and encountering crises.

出处:Toby J. Kash and John R. Darling. Crisis management: prevention, diagnosis and intervention. Leadership & Organization Development Journal, 1998, 19(4): 179-186.

二、翻译文章

标题:危机管理:预防、诊断和干预

译文:
本文的前提是,如果公司做好准备的话,危机可以更有效地进行管理。

翻译专业中英文对照外文翻译文献

(文档含英文原文和中文翻译)

Translation Equivalence

Despite the fact that the world is becoming a global village, translation remains a major way for languages and cultures to interact and influence each other. And name translation, especially government name translation, occupies a quite significant place in international exchange.

Translation is the communication of the meaning of a source-language text by means of an equivalent target-language text. While interpreting (the facilitating of oral or sign-language communication between users of different languages) antedates writing, translation began only after the appearance of written literature. There exist partial translations of the Sumerian Epic of Gilgamesh (ca. 2000 BCE) into Southwest Asian languages of the second millennium BCE. Translators always risk inappropriate spill-over of source-language idiom and usage into the target-language translation. On the other hand, spill-overs have imported useful source-language calques and loanwords that have enriched the target languages. Indeed, translators have helped substantially to shape the languages into which they have translated. Due to the demands of business documentation consequent to the Industrial Revolution that began in the mid-18th century, some translation specialties have become formalized, with dedicated schools and professional associations. Because of the laboriousness of translation, since the 1940s engineers have sought to automate translation (machine translation) or to mechanically aid the human translator (computer-assisted translation). The rise of the Internet has fostered a world-wide market for translation services and has facilitated language localization.

It is generally accepted that translation, not as a separate entity, blooms into flower under such circumstances as culture, societal functions, politics and power relations. Nowadays, the field of translation studies is immersed in abundantly diversified translation standards, some of which are presented by renowned figures and are rather authoritative. In translation practice, however, how should we select the so-called translation standards to serve as our guidelines in the translation process, and how should we adopt the translation standards to evaluate a translation product?

In the macro-context of the flourishing of linguistic theories, theorists in the translation circle keep to the golden law of the principle of equivalence. The theory of Translation Equivalence is the central issue in western translation theories, and the presentation of this theory gave great impetus to the development and improvement of translation theory. It is not difficult for us to discover that it is the theory of Translation Equivalence that serves as the guideline in government name translation in China. Name translation, as defined, is the replacement of the name in the source language by an equivalent name or other words in the target language. Translating Chinese government names into English, similarly, is replacing the Chinese government name with an equivalent in English.

Metaphorically speaking, translation is often described as a moving trajectory going from A to B along a path, or as a container carrying something across from A to B. This view is commonly held by both translation practitioners and theorists in the West. In this view, they do not expect that this trajectory or something will change its identity as it moves or as it is carried.
In China, to translate is also understood by many people normally as "to translate the whole text sentence by sentence and paragraph by paragraph, without any omission, addition, or other changes." In both views, the source text and the target text must be "the same." This helps explain the etymological source for the term "translation equivalence." It is in essence a word which describes the relationship between the ST and the TT.

Equivalence means the state or fact or property of being equivalent. It is widely used in several scientific fields such as chemistry and mathematics. Therefore, it has come to have a strong scientific meaning that is rather absolute and concise. Influenced by this, translation equivalence has also come to have an absolute denotation, though it was first applied in translation study as a general word. From a linguistic point of view, it can be divided into three sub-types, i.e., formal equivalence, semantic equivalence, and pragmatic equivalence. In actual translation, it frequently happens that they cannot be obtained at the same time, thus forming a kind of relative translation equivalence in terms of quality. In terms of quantity, sometimes the ST and TT are not equivalent either. Absolute translation equivalence, both in quality and quantity, even though obtainable, is limited to a few cases.

The following is a brief discussion of translation equivalence study conducted by three influential western scholars: Eugene Nida, Andrew Chesterman and Peter Newmark. It is expected that their studies can instruct GNT study in China and provide translators with insightful methods.

Nida's definition of translation is: "Translation consists in reproducing in the receptor language the closest natural equivalent of the source language message, first in terms of meaning and secondly in terms of style." It is a replacement of textual material in one language (SL) by equivalent textual material in another language (TL). The translator must strive for equivalence rather than identity. In a sense, this is just another way of emphasizing the reproduction of the message rather than the conservation of the form of the utterance. The message in the receptor language should match as closely as possible the different elements in the source language, to reproduce as literally and meaningfully as possible the form and content of the original. Translation equivalence is an empirical phenomenon discovered by comparing SL and TL texts, and it is a useful operational concept like the term "unit of translation."

Nida argues that there are two different types of equivalence, namely formal equivalence and dynamic equivalence. Formal correspondence focuses attention on the message itself, in both form and content, whereas dynamic equivalence is based upon "the principle of equivalent effect." Formal correspondence consists of a TL item which represents the closest equivalent of an ST word or phrase. Nida and Taber make it clear that there are not always formal equivalents between language pairs. Therefore, formal equivalents should be used wherever possible if the translation aims at achieving formal rather than dynamic equivalence. The use of formal equivalents might at times have serious implications in the TT, since the translation will not be easily understood by the target readership.
According to Nida and Taber, formal correspondence distorts the grammatical and stylistic patterns of the receptor language, and hence distorts the message, causing the receptor to misunderstand or to labor unduly hard.

Dynamic equivalence is based on what Nida calls "the principle of equivalent effect," where the relationship between receptor and message should be substantially the same as that which existed between the original receptors and the message. The message has to be modified to the receptor's linguistic needs and cultural expectations, and aims at complete naturalness of expression. Naturalness is a key requirement for Nida. He defines the goal of dynamic equivalence as seeking the closest natural equivalent to the SL message. This receptor-oriented approach considers adaptations of grammar, of lexicon and of cultural references to be essential in order to achieve naturalness; the TL should not show interference from the SL, and the "foreignness" of the ST setting is minimized.

Nida is in favor of the application of dynamic equivalence, as a more effective translation procedure. Thus, the product of the translation process, that is the text in the TL, must have the same impact on the different readers it is addressing. Only in Nida and Taber's edition is it clearly stated that dynamic equivalence in translation is far more than mere correct communication of information.
Semantic translation requires that the translator retain the aesthetic value of the original, trying his best to keep the linguistic features and characteristic style of the author. According to semantic translation, the translator should always retain the semantic and syntactic structures of the original. Deletion and abridgement lead to distortion of the author's intention and his writing style.

翻译对等

尽管全世界正在渐渐成为一个地球村，但翻译仍然是语言和文化之间的交流互动和相互影响的主要方式之一。

汽车电子系统中英文对照外文翻译文献

(文档含英文原文和中文翻译)

The Changing Automotive Environment: High-Temperature Electronics

R. Wayne Johnson, Fellow, IEEE, John L. Evans, Peter Jacobsen, James R. (Rick) Thompson, and Mark Christopher

Abstract—The underhood automotive environment is harsh and current trends in the automotive electronics industry will be pushing the temperature envelope for electronic components. The desire to place engine control units on the engine and transmission control units either on or in the transmission will push the ambient temperature above 125 ℃. However, extreme cost pressures, increasing reliability demands (10 year/241 350 km) and the cost of field failures (recalls, liability, customer loyalty) will make the shift to higher temperatures occur incrementally. The coolest spots on the engine and in the transmission will be used. These large bodies do provide considerable heat sinking to reduce temperature rise due to power dissipation in the control unit. The majority of near-term applications will be at 150 ℃ or less, and these will be worst-case temperatures, not nominal. The transition to X-by-wire technology, replacing mechanical and hydraulic systems with electromechanical systems, will require more power electronics. Integration of power transistors and smart power devices into the electromechanical actuator will require power devices to operate at 175 ℃ to 200 ℃. Hybrid electric vehicles and fuel cell vehicles will also drive the demand for higher temperature power electronics. In the case of hybrid electric and fuel cell vehicles, the high temperature will be due to power dissipation. The alternates to high-temperature devices are thermal management systems, which add weight and cost. Finally, the number of sensors in vehicles is increasing as more electrically controlled systems are added. Many of these sensors must work in high-temperature environments. The harshest applications are exhaust gas sensors and cylinder pressure or combustion sensors. High-temperature electronics use in automotive systems will continue to grow, but it will be gradual as cost and reliability issues are addressed. This paper examines the motivation for higher temperature operation, the packaging limitations even at 125 ℃ with newer package styles, and concludes with a review of challenges at both the semiconductor device and packaging level as temperatures push beyond 125 ℃.

Index Terms—Automotive, extreme-environment electronics.

I. INTRODUCTION

IN 1977, the average automobile contained $110 worth of electronics [1]. By 2003 the electronics content was $1510 per vehicle and is expected to reach $2285 in 2013 [2]. The turning point in automotive electronics was government

TABLE I: MAJOR AUTOMOTIVE ELECTRONIC SYSTEMS

TABLE II: AUTOMOTIVE TEMPERATURE EXTREMES (DELPHI DELCO ELECTRONIC SYSTEMS) [3]

regulation in the 1970s mandating emissions control and fuel economy. The complex fuel control required could not be accomplished using traditional mechanical systems. These government regulations, coupled with increasing semiconductor computing power at decreasing cost, have led to an ever increasing array of automotive electronics. Automotive electronics can be divided into five major categories as shown in Table I.

The operating temperature of the electronics is a function of location, power dissipation by the electronics, and the thermal design. The automotive electronics industry defines high-temperature electronics as electronics operating above 125 ℃.
However, the actual temperature for various electronics mounting locations varies considerably. Delphi Delco Electronic Systems recently published the typical continuous maximum temperatures as reproduced in Table II [3]. The corresponding underhood temperatures are shown in Fig. 1. The authors note that typical junction temperatures for integrated circuits are 10 ℃ to 15 ℃ higher than ambient or baseplate temperature, while power devices can reach 25 ℃ higher. At-engine temperatures of 125 ℃ peak can be maintained by placing the electronics on the intake manifold.

Fig. 1. Engine compartment thermal profile (Delphi Delco Electronic Systems) [3].

TABLE III: THE AUTOMOTIVE ENVIRONMENT (GENERAL MOTORS AND DELPHI DELCO ELECTRONIC SYSTEMS) [4]

TABLE IV: REQUIRED OPERATION TEMPERATURE FOR AUTOMOTIVE ELECTRONIC SYSTEMS (TOYOTA MOTOR CORP.) [5]

TABLE V: MECHATRONIC MAXIMUM TEMPERATURE RANGES (DAIMLERCHRYSLER, EATON CORPORATION, AND AUBURN UNIVERSITY) [6]

Fig. 2. Automotive temperatures and related systems (DaimlerChrysler) [8].

Fig. 2 presents the temperature ranges for automotive electronic systems [8]. Fig. 3 shows an actual measured transmission temperature profile during normal and excessive driving conditions [8]. Power braking is a commonly used test condition where the brakes are applied and the engine is revved with the transmission in gear. A similar real-world situation would be applying throttle with the emergency brake applied. Note that when the temperature reached 135 ℃, the over-temperature light came on, and at the peak temperature of 145 ℃, the transmission was beginning to smell of burnt transmission fluid.

TABLE VI: 2002 INTERNATIONAL TECHNOLOGY ROADMAP FOR SEMICONDUCTORS AMBIENT OPERATING TEMPERATURES FOR HARSH ENVIRONMENTS (AUTOMOTIVE) [9]

The 2002 update to the International Technology Roadmap for Semiconductors (ITRS) did not reflect the need for higher operating temperatures for complex integrated circuits, but did recognize increasing temperature requirements for power and linear devices as shown in Table VI [9]. Higher temperature power devices (diodes and transistors) will be used for the power section of power converters and motor drives for electromechanical actuators. Higher temperature linear devices will be used for analog control of power converters and for amplification and some signal processing of sensor outputs prior to transmission to the control units. It should be noted that at the maximum rated temperature for a power device, the power handling capability is derated to zero. Thus, a 200 ℃ rated power transistor in a 200 ℃ environment would have zero current carrying capability. Thus, the actual operating environments must be lower than the maximum rating.

In the 2003 edition of the ITRS, the maximum junction temperature identified for harsh-environment complex integrated circuits was raised to 150 ℃ through 2018 [9]. The ambient operating temperature extreme for harsh-environment complex integrated circuits was defined as -40 ℃ to 125 ℃ through 2009, increasing to -40 ℃ to 150 ℃ for 2010 and beyond. Power/linear devices were not separately listed in 2003.

The ITRS is consistent with the current automotive high-temperature limitations. Delphi Delco Electronic Systems offers two production engine controllers (one on ceramic and one on thin laminate) for direct mounting on the engine. These controllers are rated for operation over the temperature range of -40 ℃ to 125 ℃. The ECU must be mounted on the coolest spot on the engine.
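A quick way to relate the ambient figures above to junction temperature is the usual Tj = Ta + Rθja · P estimate. In the Python sketch below, the thermal resistance and dissipation values are illustrative assumptions only, chosen to reproduce the 10 ℃ to 15 ℃ rise quoted for integrated circuits:

def junction_temp(t_ambient_c, power_w, r_theta_ja=12.0):
    """Tj = Ta + R_theta_JA * P; 12 C/W is an illustrative package value."""
    return t_ambient_c + r_theta_ja * power_w

# A 1 W IC at a 125 C ambient runs ~137 C at the junction; a 2 W power
# device with the same thermal path would reach ~149 C.
print(junction_temp(125.0, 1.0), junction_temp(125.0, 2.0))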
The packaging technology is consistent with 140 ℃ operation, but the ECU is limited by semiconductor and capacitor technologies to 125 ℃. The future projections in the ITRS are not consistent with the desire to place controllers on-engine or in-transmission. It will not always be possible to use the coolest location for mounting control units. Delphi Delco Electronics Systems has developed an in-transmission controller for use in an ambient temperature of 140 ℃ [10] using ceramic substrate technology. DaimlerChrysler is also designing an in-transmission controller for use with a maximum ambient temperature of 150 ℃ (Figs. 4 and 5) [11].

II. MECHATRONICS

Mechatronics, or the integration of electrical and mechanical systems, offers a number of advantages in automotive assembly. Integration of the engine controller with the engine allows pretest of the engine as a complete system prior to vehicle assembly. Likewise, with the integration of the transmission controller and the transmission, pretesting and tuning to account for machining variations can be performed at the transmission factory prior to shipment to the automobile assembly site. In addition, most of the wires connecting to a transmission controller run to the solenoid pack inside the transmission. Integration of the controller into the transmission reduces the wiring harness requirements at the automobile assembly level.

Fig. 4. Prototype DaimlerChrysler ceramic transmission controller [11].

Fig. 5. DaimlerChrysler in-transmission module [11].

The trend in automotive design is to distribute control with network communications. As the industry moves to more X-by-wire systems, this trend will continue. Automotive final assembly plants assemble subsystems and components supplied by numerous vendors to build the vehicle. Complete mechatronic subsystems simplify the design, integration, management, inventory control, and assembly of vehicles. As discussed in the previous section, higher temperature electronics will be required to meet future mechatronic designs.

III. PACKAGING CHALLENGES AT 125 ℃

Trends in electronics packaging, driven by computer and portable products, are resulting in packages which will not meet underhood automotive requirements at 125 ℃. Most notable are leadless and area array packages such as small ball grid arrays (BGAs) and quad flatpacks no-lead (QFNs). Fig. 6 shows the -40 ℃ to 125 ℃ thermal cycle test results for two sizes of QFN from two suppliers [12]. A typical requirement is for the product to survive 2000-2500 thermal cycles with <1% failure for underhood applications. Smaller I/O QFNs have been found to meet the requirements.

Fig. 7 presents the thermal cycle results for BGAs of various body sizes [13]. The die size in the BGA remained constant (8.6 × 8.6 mm). As the body size decreases, so does the reliability. Only the 23-mm BGA meets the requirements. The 15-mm BGA with the 0.56-mm-thick BT substrate nearly meets the minimum requirements. However, the industry trend is to use thinner BT substrates (0.38 mm) for BGA packages.

One solution to increasing the thermal cycle performance of smaller BGAs is to use underfill. Capillary underfill was dispensed and cured after reflow assembly of the BGA. Fig. 8 shows a Weibull plot of the thermal cycle data for the 15-mm BGAs with four different underfills. Underfill UF1 had no failures after 5500 cycles and is, therefore, not plotted.
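The Weibull presentation in Fig. 8 is standard practice for thermal-cycle failure data. As a minimal sketch (the cycles-to-failure values below are hypothetical, not the paper's data), a two-parameter Weibull fit in Python looks like this:

import numpy as np
from scipy import stats

# Hypothetical cycles-to-failure for one underfilled 15-mm BGA test leg
cycles = np.array([1800.0, 2100.0, 2300.0, 2600.0, 2900.0, 3300.0])

# Two-parameter Weibull: location fixed at zero, fit shape (beta) and scale (eta)
beta, loc, eta = stats.weibull_min.fit(cycles, floc=0)
print(f"beta = {beta:.2f}, characteristic life eta = {eta:.0f} cycles")

# Fraction expected to fail by the 2000-2500 cycle requirement window
print(f"F(2500) = {stats.weibull_min.cdf(2500, beta, loc, eta):.1%}")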
Underfill, therefore, provides a viable approach to meeting underhood automotive requirements with smaller BGAs, but adds process steps, time, and cost to the electronics assembly process.

Since portable and computer products dominate the electronics market, the packages developed for these applications are replacing traditional packages such as QFPs for new devices. The automotive electronics industry will have to continue developing assembly approaches such as underfill just to use these new packages in current underhood applications.

IV. TECHNOLOGY CHALLENGES ABOVE 125 ℃

The technical challenges for high-temperature automotive applications are interrelated, but can be divided into semiconductors, passives, substrates, interconnections, and housings/connectors. Industries such as oil well logging have successfully fielded high-temperature electronics operating at 200 ℃ and above. However, automotive electronics are further constrained by high-volume production, low cost, and long-term reliability requirements. The typical operating life for oil well logging electronics may only be 1000 h, production volumes are in the range of 10s or 100s and, while cost is a concern, it is not a dominant issue. In the following paragraphs, the technical challenges for high-temperature automotive electronics are discussed.

Semiconductors: The maximum rated ambient temperature for most silicon-based integrated circuits is 85 ℃, which is sufficient for consumer, portable, and computing product applications. Devices for military and automotive applications are typically rated to 125 ℃. A few integrated circuits are rated to 150 ℃, particularly for power supply controllers and a few automotive applications. Finally, many power semiconductor devices are derated to zero power handling capability at 200 ℃.

Nelms et al. and Johnson et al. have shown that power insulated-gate bipolar transistors (IGBTs) and metal-oxide-semiconductor field-effect transistors (MOSFETs) can be used at 200 ℃ [14], [15]. The primary limitations of these power transistors at the higher temperatures are the packaging (the glass transition temperature of common molding compounds is in the 180 ℃ to 200 ℃ range) and the electrical stress on the transistor during hard switching.

A number of factors limit the use of silicon at high temperatures. First, with a bandgap of 1.12 eV, the silicon p-n junction becomes intrinsic at high temperature (225 ℃ to 400 ℃ depending on doping levels). The intrinsic carrier concentration is given by

$n_i = \sqrt{N_C N_V}\, e^{-E_g/2kT}$  (1)

As the temperature increases, the intrinsic carrier concentration increases. When the intrinsic carrier concentration nears the doping concentration level, p-n junctions behave as resistors, not diodes, and transistors lose their switching characteristics. One approach used in high-temperature integrated circuit design is to increase the doping levels, which increases the temperature at which the device becomes intrinsic. However, increasing the doping levels decreases the depletion widths, resulting in higher electric fields within the device that can lead to breakdown.

A second problem is the increase in leakage current through a reverse-biased p-n junction with increasing temperature. Reverse-biased p-n junctions are commonly used in IC design to provide isolation between devices. The saturation current ($I_s$, the ideal reverse-bias current of the junction) is proportional to the square of the intrinsic carrier concentration

$I_s \propto n_i^2 \propto T^3 e^{-E_{g0}/kT}$  (2)

where $E_{g0}$ = bandgap energy at T = 0 K. The leakage current approximately doubles for each 10 ℃ rise in junction temperature.
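Both the intrinsic-carrier growth in (1) and the leakage doubling rule of thumb can be checked with a short calculation. The sketch below assumes silicon textbook values (Eg = 1.12 eV, with Nc·Nv scaled as T^3 from its ~300 K value) and treats the bandgap as temperature-independent, which real devices are not:

import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def leakage(i_ref, t_ref_c, t_c):
    """Rule of thumb from the text: junction leakage doubles every 10 C."""
    return i_ref * 2.0 ** ((t_c - t_ref_c) / 10.0)

def n_i(t_c, eg=1.12, nc_nv_300=2.8e19 * 1.04e19):
    """Intrinsic carrier concentration of Si per (1), in cm^-3."""
    t = t_c + 273.15
    nc_nv = nc_nv_300 * (t / 300.0) ** 3  # effective densities of states
    return np.sqrt(nc_nv) * np.exp(-eg / (2.0 * K_B * t))

print(leakage(1e-9, 25, 150) / 1e-9)  # ~5800x more leakage at 150 C
print(n_i(25), n_i(250))  # ~1e10 cm^-3 at 25 C vs ~1e14 cm^-3 at 250 C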
Increased junction leakage currents increase power dissipation within the device and can lead to latch-up of the parasitic p-n-p-n structure in complementary metal-oxide-semiconductor (CMOS) devices. Epitaxial-CMOS (epi-CMOS) has been developed to improve latch-up resistance as the device dimensions are decreased due to scaling, and provides improved high-temperature performance compared to bulk CMOS.

Silicon-on-insulator (SOI) technology replaces reverse-biased p-n junctions with insulators, typically SiO2, reducing the leakage currents and extending the operating range of silicon above 200 ℃. At present, SOI devices are more expensive than conventional p-n junction isolated devices. This is in part due to the limited use of SOI technology. With the continued scaling of device dimensions, SOI is being used in some high-performance applications, and the increasing volume may help to eventually lower the cost.

Other device performance issues at higher temperatures include gate threshold voltage shifts, decreased noise margin, decreased switching speed, decreased mobility, decreased gain-bandwidth product, and increased amplifier input-offset voltage [16]. Leakage currents also increase for insulators with increasing temperature. This results in increased gate leakage currents, and increased leakage of charge stored in memory cells (data loss). For dynamic memory, the increased leakage currents require faster refresh rates. For nonvolatile memory, the leakage limits the life of the stored data, a particular issue for FLASH memory used in microcontrollers and automotive electronics modules.

Beyond the electrical performance of the device, the device reliability must also be considered. Electromigration of the aluminum metallization is a major concern. Electromigration is the movement of the metal atoms due to their bombardment by electrons (current flow). Electromigration results in the formation of hillocks and voids in the conductor traces. The mean time to failure (MTTF) for electromigration is related to the current density ($J$) and temperature ($T$) as shown in

$\mathrm{MTTF} = A\, J^{-n} e^{E_a/kT}$  (3)

The exact rate of electromigration and resulting time to failure is a function of the aluminum microstructure. Addition of copper to the aluminum increases electromigration resistance. The trend in the industry to replace aluminum with copper will improve the electromigration resistance by up to three orders of magnitude [17].

Time-dependent dielectric breakdown (TDDB) is a second reliability concern. Time to failure due to TDDB decreases with increasing temperature. Oxide defects, including pinholes, asperities at the Si-SiO2 interface and localized changes in chemical structure that reduce the barrier height or increase the charge trapping, are common sources of early failure [18]. Breakdown can also occur due to hole trapping (Fowler-Nordheim tunneling). The holes can collect at weak spots in the Si-SiO2 interface, increasing the electric field locally and leading to breakdown [18]. The temperature dependence of the time-to-breakdown ($t_{BD}$) can be expressed as [18]

$t_{BD} \propto e^{E_{tbd}/kT}$  (4)

Values reported for $E_{tbd}$ vary in the literature due to its dependence on the oxide field and the oxide quality. Furthermore, the activation energy increases with breakdown time [18].

With proper high-temperature design, junction-isolated silicon integrated circuits can be used to junction temperatures of 150 ℃ to 165 ℃, epi-CMOS can extend the range to 225 ℃ to 250 ℃, and SOI can be used to 250 ℃ to 280 ℃ [16, pp. 224].
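Equations (3) and (4) are both Arrhenius-type accelerations, so the lifetime gained by running cooler can be estimated directly. In the sketch below, the prefactor, current-density exponent and activation energy are illustrative placeholders; real values are process-specific:

import numpy as np

K_B = 8.617e-5  # eV/K

def mttf_black(j, t_c, a=1.0, n=2.0, ea=0.7):
    """Black's equation (3): MTTF = A * J^-n * exp(Ea / kT)."""
    t_k = t_c + 273.15
    return a * j ** (-n) * np.exp(ea / (K_B * t_k))

# Lifetime gained by dropping the junction from 150 C to 105 C at the
# same current density: about 10x for Ea = 0.7 eV
print(mttf_black(1.0, 105) / mttf_black(1.0, 150))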
High-temperature, nonvolatile memory remains an issue.

For temperatures beyond the limits of silicon, silicon carbide-based semiconductors are being developed. The bandgap of SiC ranges from 2.75-3.1 eV depending on the polytype. SiC has lower leakage currents and higher electric field strength than Si. Due to its wider bandgap, SiC can be used as a semiconductor device at temperatures over 600 ℃. The primary focus of SiC device research is currently on power devices. SiC power devices may eventually find application as power devices in braking systems and direct fuel injection. High-temperature sensors have also been fabricated with SiC. Berg et al. have demonstrated a SiC-based sensor for cylinder pressure in combustion engines [19] at up to 350 ℃, and Casady et al. [20] have shown a SiC-based temperature sensor for use to 500 ℃. At present, the wafer size, cost, and device yield have made SiC devices too expensive for general automotive use. Most SiC devices are discrete, as the level of integration achieved in SiC to date is low.

Passives: Thick- and thin-film chip resistors are typically rated to 125 ℃. Naefe et al. [21] and Salmon et al. [22] have shown that thick-film resistors can be used at temperatures above 200 ℃ if the allowable absolute tolerance is 5% or greater. The resistors studied were specifically formulated with a higher softening point glass. The minimum resistance as a function of temperature was shifted from 25 ℃ to 150 ℃ to minimize the temperature coefficient of resistance (TCR) over the temperature range to 300 ℃. TaN and NiCr thin-film resistors have been shown to have less than 1% drift after 1000 h at 200 ℃ [23]. Thus, for tighter tolerance applications, thin-film chip resistors are preferred. Wire-wound resistors provide a high-temperature option for higher power dissipation levels [21].

High-temperature capacitors present more of a challenge. For low-value capacitors, negative-positive-zero (NPO) ceramic and MOS capacitors provide a low temperature coefficient of capacitance (TCC) to 200 ℃. NPO ceramic capacitors have been demonstrated to 500 ℃ [24]. Higher dielectric constant ceramics (X7R, X8R, X9U), used to achieve the high volumetric efficiency necessary for larger capacitor values, exhibit a significant capacitance decrease above the Curie temperature, which is typically between 125 ℃ and 150 ℃. As the temperature increases, the leakage current increases, the dissipation factor increases, and the breakdown strength decreases. Increasing the dielectric tape thickness to increase breakdown strength reduces the capacitance and is a tradeoff. X7R ceramic capacitors have been shown to be stable when stored at 200 ℃ [23]. X9U chip capacitors are commercially available for use to 200 ℃, but there is a significant decrease in capacitance above 150 ℃.

Consideration must also be given to the capacitor electrodes and terminations. Ni is now being substituted for Ag and PdAg to lower capacitor cost. The impact of this change on high-temperature reliability must be evaluated. The surface finish for ceramic capacitor terminations is typically Sn. The melting point of the Sn (232 ℃) and its interaction with potential solders/brazes must also be considered. Alternate surface finishes may be required.

For higher value, low-voltage requirements, wet tantalum capacitors show reasonable behavior at 200 ℃ if the hermetic seal does not lose integrity [23]. Aluminum electrolytics are also available for use to 150 ℃.
Mica paper (260 ℃) and Teflon film (200 ℃) capacitors can provide higher voltage capability, but are large and bulky [25]. High-temperature capacitors are relatively expensive. Volumetrically efficient, high-voltage, high-capacitance, high-temperature and low-cost capacitors are still needed.

Standard transformers and inductor cores with copper wire and Teflon insulation are suitable for operation to 200 ℃. For higher temperature operation, the magnetic core, the conductor metal (Ni instead of Cu) and the insulator must be selected to be compatible with the higher temperatures [16, pp. 651-652]. Specially designed transformers can be used to 450 ℃ to 500 ℃; however, they are limited in operating frequency.

Crystals are required for clock frequency generation for microcontrollers. Crystals with acceptable frequency shift over the temperature range from -55 ℃ to 200 ℃ have been demonstrated [22]. However, the selection of packaging materials and the assembly process for the crystal are key to high-temperature performance and reliability. For example, epoxies used in assembly must be compatible with 200 ℃ operation.

Substrates: Thick-film substrates with gold metallization have been used in circuits to 500 ℃ [21], [23]. Palladium silver, platinum silver, and silver conductors are more commonly used in automotive hybrids for reduced cost. Silver migration has been observed with an unpassivated PdAg thick-film conductor under bias at 300 ℃ [21]. The time-to-failure needs to be examined as a function of temperature and bias voltage, with and without passivation. Low-temperature cofired ceramic (LTCC) and high-temperature cofired ceramic (HTCC) are also suitable for high-temperature automotive applications. Embedded resistors are standard to thick-film hybrids, LTCC, and some HTCC technologies. As previously mentioned, thick-film resistors have been demonstrated at temperatures of 200 ℃. Dielectric tapes for embedded capacitors have also been developed for LTCC and HTCC. However, these embedded capacitors have not been characterized for high-temperature use.

High-Tg laminates are also available for fabrication of high-temperature printed wiring boards. Cyanate esters [Tg = 250 ℃ by differential scanning calorimetry (DSC)], polyimide (260 ℃ by DSC), and liquid crystal polymers (Tm > 280 ℃) provide options for use to 200 ℃. Cyanate ester boards have been used successfully in test vehicles at 175 ℃, but failed when exposed to 250 ℃ [26]. The higher coefficient of thermal expansion (CTE) of the laminate substrates compared to the ceramics must be considered in the selection of component attachment materials. The temperature limits of the laminates with respect to assembly temperatures must also be carefully considered. Work is ongoing to develop and implement embedded resistor and capacitor technology for laminate substrates for conventional temperature ranges. This technology has not been extended to high-temperature applications.

One method many manufacturers are using to address the higher temperatures while maintaining lower cost is the use of laminate substrates attached to metal. The typical design involves the use of higher Tg (+140 ℃ and above) laminate substrates attached to an aluminum plate (approximately 2.54 mm thick) using a sheet or liquid adhesive. To assist in thermal performance, the laminate substrate is often thinner (0.76 mm) than traditional automotive substrates for under-the-hood applications.
While this design provides improved thermal performance, the attachment of the laminate to aluminum increases the CTE of the overall substrate. The resultant CTE is very dependent on the ability of the attachment material to decouple the CTE between the laminate substrate and the metal backing. However, regardless of the attachment material used, the combination of the laminate and metal will increase the CTE of the overall substrate above that of a stand-alone laminate substrate. This impact can be quite significant on the reliability performance of components with low CTE values (such as ceramic chip resistors). Fig. 9 illustrates the impact of two laminate-to-metal attachment options compared to standard laminate substrates [27], [28]. The reliability data presented is for 2512 ceramic chip resistors attached to a 0.79-mm-thick laminate substrate attached to aluminum using two attachment materials. Notice that while one material significantly outperforms the other, both are less reliable than the same chip resistor attached to laminate without metal backing.

This decrease in reliability is also exhibited on small ball grid array (BGA) packages. Fig. 10 shows the reliability of a 15-mm BGA package attached to laminate compared to the same package attached to a laminate substrate with metal backing [27], [28]. The attachment material used for the metal-backed substrate was the best material selected from previous testing. Notice again that the metal-backed substrate deteriorates the reliability. This reliability deterioration is of particular concern since many IC packages used for automotive applications are ball grid array packages and the packaging trend is for reduced package size. These packaging trends make the use of metal-backed substrates difficult for next generation products.

One potential solution to the above reliability concern is the use of encapsulants and underfills. Fig. 11 illustrates how conformal coating can improve component reliability for surface mount chip resistors [27], [28]. Notice that the reliability varies greatly depending on material composition. However, for components which meet a marginal level of reliability, conformal coatings may assist the design in meeting the target reliability requirements. The same scenario can be found for BGA underfills. Typical underfill materials may extend the component life by a factor of two or more. For marginal IC packages, this enhancement may provide enough reliability improvement to allow the designs to meet under-the-hood requirements. Unfortunately, the improvements provided by encapsulants and underfills increase the material cost and add one or more manufacturing processes for material dispense and cure.

Interconnections: Methods of mechanical and electrical interconnection of the active and passive components to the board include chip and wire, flip-chip, and soldering of packaged parts. In chip and wire assembly, epoxy die-attach materials can be used to 165 ℃ [29]. Polyimide and silicone die-attach materials can be used to 200 ℃. For higher temperatures, SnPb (>90Pb), AuGe, AuSi, AuSn, and AuIn have been used. However, with the exception of SnPb, these are hard brazes and, with increasing die size, CTE mismatches between the die and the substrate will lead to cracking with thermal cycling.

颜色识别中英文对照外文翻译文献

中英文资料对照翻译(文档含英文原文和中文翻译)

英文原文

An Approach of Color Feature Evaluation in Color Recognition

Abstract—This paper analyzes the characteristics of five commonly used color spaces and explores their influences on color recognition respectively. Divisibility evaluation based on the distance criterion is utilized to evaluate the different color features in each color space, and experimental results show that the HSI color space has the best divisibility performance.

Keywords—color space; color recognition; feature evaluation; divisibility criterion

I. INTRODUCTION

Color is the most intuitive vision feature to describe colorful images. It has been widely used in pattern recognition for the reason that the color feature is almost free from the effects of scale, rotation and translation of the input images [1]. Colors in colorful images can be defined by different color space models, such as RGB space, CMY space, I1I2I3 space, YUV space and HSI space. Among the above color spaces, RGB is the basic and most common one and can readily be mapped into other color spaces. However, RGB space is non-uniform for color perception and is too easily influenced by light. The three color components of RGB space are correlated with each other [2]. CMY space represents colors by the complementary components of the RGB components. YUV space, frequently used in color TV systems, uses three channels, Y, U and V, to define a pixel: Y carries the brightness information, and U and V are the color differences, which denote the overall color difference instead of the difference between the three components of RGB. HSI space is a uniform space which is consistent with the human perception of colors. Its three components are mutually independent, so the color change of each component can be perceived respectively. But the non-linear transform in HSI space may lead to substantial computation, as well as singularity of the color space when the saturation is low. In YCbCr color space, the chrominance components and the luminance component are independent of each other. Besides that, the conversion from YCbCr space to RGB space is linear and simple, so YCbCr space is commonly used in the field of video encoding and compression. YUV space, YCbCr space and HSI space all represent the spectrum in two dimensions and use the third dimension to represent the intensity of color, which makes them more suitable than RGB space for occasions where light intensity changes.

Color recognition technique has been applied to many fields and has advanced rapidly, for instance, color recognition on product surfaces, license plate identification, face recognition and skin recognition [3-6]. Color recognition effects differ with the change of color space. This paper investigates color feature divisibility in the commonly used color spaces: RGB space, CMY space, YUV space, YCbCr space, I1I2I3 space and HSI space. Analysis indicates that HSI has the best divisibility performance of all the above color spaces based on the distance criterion. It provides a theory basis for color recognition.

II. COLOR SPACE AND ITS TRANSFORMATION

It is essential to build up and select a suitable color space for obtaining valid color features to characterize colorful images. Different color spaces are utilized for different research purposes. Color space means to define color by an array in three-dimensional space. In the processing of colorful images, a color space is also named a color model or color coordinates. One color space can be converted to another by certain transforms.
Below is an introduction to some color spaces and their conversions [7].

A. RGB Color Space

Red (R), green (G) and blue (B) are the three primary colors of the spectrum. All colors can be generated by the sum of the three primary colors. In digital images, the values of R, G and B range from 0 to 255. A cube in a three-dimensional coordinate space can be used to describe the RGB color space, where red, green and blue are the three axes, as shown in Fig. 1.

The main drawbacks of RGB color space are as follows:

• It is not intuitive. It is difficult to see from the RGB values the cognitive attributes that the color represents.
• It is non-uniform. The perceptual difference between two colors in RGB space is different from the distance between the two colors.
• It is dependent on hardware devices.

In a word, RGB space is device-dependent and not an intuitive, complete color description. To overcome these problems, other color spaces, which are more in line with the characteristics of color vision, are adopted. RGB space can be mapped to other color spaces readily.

B. CMY (CMYK) Color Space

CMY space has a rectangular Cartesian spatial structure. Its three primary components are cyan (C), magenta (M) and yellow (Y). Colors are obtained by subtractive mixing. CMY space is widely used in non-emissive displays such as inkjet printers. Equal amounts of the three components can generate the black color, but that black color is not pure. Generally speaking, to generate a true black, a fourth component, i.e. black, is added in. This is the CMYK color space. CMY space is not very intuitive and is non-linear. Its three components are the complementary colors of R, G and B. The transformation from RGB space to CMY space is as follows:

$C = 255 - R,\quad M = 255 - G,\quad Y = 255 - B$

C. YUV and YCbCr Color Space

YUV space and YCbCr space both use a luminance component and two chrominance components. In YUV space, Y is the luminance component, and U and V are the color differences. The Y component is independent of the other two. Moreover, YUV space can reduce the storage capacity required by digital colorful images by exploiting the characteristics of human vision. In YCbCr space, Y is the luminance component, Cb is the blue color difference component and Cr the red color difference component. Its advantages are obvious: the color components are separated from the luminance component, and a linear transformation can be performed from RGB space. The transformation from RGB space to YUV space can be approximated by the following equations:

$Y = 0.299R + 0.587G + 0.114B$
$U = -0.147R - 0.289G + 0.436B$
$V = 0.615R - 0.515G - 0.100B$
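As a minimal sketch of the two conversions above (assuming 8-bit RGB values for CMY and the BT.601-style YUV coefficients just listed), the following Python functions map an RGB triple into CMY and YUV:

def rgb_to_cmy(r, g, b):
    """Subtractive complement of RGB, 8-bit convention assumed."""
    return 255 - r, 255 - g, 255 - b

def rgb_to_yuv(r, g, b):
    """Luma plus two color-difference channels (BT.601-style weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

print(rgb_to_cmy(255, 0, 0))      # pure red -> (0, 255, 255): no cyan ink
print(rgb_to_yuv(255, 255, 255))  # white -> maximum luma, zero color difference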
D. HSI Color Space

HSI space is established from the point of view of human psychological perception. H (hue) corresponds to the dominant wavelength of a color in the spectrum. S (saturation) is equivalent to the purity of the color. I (intensity) is the brightness of the color. HSV (hue, saturation, value) and HSB (hue, saturation, brightness) are other color spaces similar to HSI color space, and all have a polar coordinate structure. Their common merit is that they can describe color intuitively, and most of them can be converted from RGB space. HSI color space has two important properties. One is that the I component is separated from the H component, i.e. the I component is independent of the image color information. The other is that the H component and S component are closely linked to the way humans perceive color; the color description ability of the H component is the closest to human vision, and the discriminating ability of the H component is the strongest [8]. Transformation from RGB space to HSI space can be explained by the following equations:

$\theta = \arccos\!\left(\frac{\tfrac{1}{2}[(R-G)+(R-B)]}{[(R-G)^2+(R-B)(G-B)]^{1/2}}\right)$

$H = \begin{cases} \theta, & B \le G \\ 360^\circ - \theta, & B > G \end{cases}\qquad S = 1 - \frac{3\min(R,G,B)}{R+G+B}\qquad I = \frac{R+G+B}{3}$

HSI color space provides a suitable space with three components that is better suited to describing color in line with human habits. However, the defect of non-linearity in color difference still exists, especially between the color and the angle of the H component [9].

E. I1I2I3 Color Space

A linear transformation from RGB space to I1I2I3 space can be explained by the following equations to get three orthogonal color features:

$I_1 = \frac{R+G+B}{3},\quad I_2 = \frac{R-B}{2},\quad I_3 = \frac{2G-R-B}{4}$  (7)

From formula (7), it can be seen that the values of the I2 and I3 components can be positive or negative. The non-correlation property of I1I2I3 space is the best in image recognition.

III. FEATURE EVALUATION OF COLOR SPACE

Through color spaces, abstract, subjective visual perception can be translated into a concrete position or vector in three-dimensional space, which makes it possible to quantify the color features of colorful images and devices. Color space is an important tool of color recognition. Each color mixing system has its corresponding color space, and different color spaces have different properties with their respective advantages and disadvantages. The validity of the color space is the key to color image processing. A divisibility criterion can be used to test different color spaces for their performance on color classification. The distance criterion is widely utilized due to its concise and clear concept. Its principle is that the smaller the distance within a class and the greater the distance between classes, the better the divisibility. Below is the presented algorithm of feature evaluation based on the distance criterion [10].

• Calculate the mean vector and covariance of the i-th class samples, where N is the total number of samples and Ni is the number of the i-th class samples.
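Only the first step of the evaluation algorithm survives in the text above, so the sketch below pairs the standard HSI conversion with one common distance-criterion score, the ratio of between-class to within-class scatter; the paper's exact criterion beyond its first step is an assumption here:

import numpy as np

def rgb_to_hsi(r, g, b):
    """RGB -> HSI in the Gonzalez-Woods form; r, g, b in [0, 1]."""
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = theta if b <= g else 360.0 - theta
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0.0 else 1.0 - min(r, g, b) / i
    return h, s, i

def divisibility(features, labels):
    """Distance-criterion score: between-class over within-class scatter.
    Larger values mean compact classes that lie far apart."""
    x, y = np.asarray(features, dtype=float), np.asarray(labels)
    grand_mean = x.mean(axis=0)
    s_within = s_between = 0.0
    for c in np.unique(y):
        xc = x[y == c]
        mc = xc.mean(axis=0)  # mean vector of the i-th class samples
        s_within += ((xc - mc) ** 2).sum()
        s_between += len(xc) * ((mc - grand_mean) ** 2).sum()
    return s_between / s_within

The same score can then be computed on the samples expressed in each candidate color space (RGB, CMY, YUV, I1I2I3, HSI) and compared directly.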
IV. EXPERIMENTAL RESULTS AND ANALYSIS

When identified by human eyes, colors are divided into eleven categories: red, green, blue, yellow, purple, orange, pink, brown, gray, white and black, as shown in Fig. 2. The evaluation algorithm is performed respectively on RGB space, CMY space, YUV space, I1I2I3 space and HSI space. The feature parameters and assessment indicators are shown in Table I. As seen from Table I, the HSI space has the best performance compared to the other four analyzed color spaces.

V. CONCLUSION

It is necessary to select an effective color space for colorful image processing. This paper analyzes and compares the performance of five common color spaces based on the divisibility criterion. Experimental results show that HSI color space has the best divisibility performance compared with the other four spaces. It provides a basis for color space selection in color recognition.

ACKNOWLEDGMENT

This work was supported by a grant from the Natural Science Foundation of Liaoning Province (Grant number: 20102153).

REFERENCES

[1] WANG Hui, LV Yan, ZHANG Ka, "Research on Color Space Applicable to Wood Species Recognition", FORESTRY MACHINERY & WOODWORKING EQUIPMENT, Vol. 37, pp. 20-22, 2009.
[2] Palus, H., "Representations of color images in different color spaces", The Color Image Processing Handbook, London: Chapman & Hall, 1998.
[3] WANG Yan-song, JIN Wei-qi, "Surface Inspection Based on Color Clustering of Mapping Chromatism", Transactions of Beijing Institute of Technology, Vol. 30, pp. 74-78, 2010.
[4] CAO Jian-qiu, WANG Hua-qing and LAN Zhang-li, "Skin Color Division Based on Modified YCrCb Color Space", JOURNAL OF CHONGQING JIAOTONG UNIVERSITY (NATURAL SCIENCE), Vol. 29, pp. 488-492, 2010.
[5] WANG Feng, MAN Li-chun and XIAO Yi-jun et al., "Color Recognition of License Plates Based on Immune Data Reduction", JOURNAL OF SICHUAN UNIVERSITY (ENGINEERING SCIENCE EDITION), Vol. 40, pp. 164-170, 2008.
[6] XU Qing, SHI Yue-xiang, XIE Wen-lan and ZHANG Zheng-zhen, "Method of face detection based on improved YUV color space", Computer Engineering and Applications, Vol. 44, pp. 158-162, 2008.
[7] HAN Xiaowei, Study on Key Technologies of Color Image Processing, Northeast University, 2005.
[8] LIU Zhongwei, ZHANG Yujin, "A Comparative and Analysis Study of Ten Color Feature-based Image Retrieval Algorithms", SIGNAL PROCESSING, Vol. 16, pp. 79-83, 2000.
[9] LIU Jin, CHEN Gi, YU Ruizhao, "Development of Computer Color Science", COMPUTER ENGINEERING, 2, 1997.
[10] YANG Shuying, Pattern Recognition and Intelligent Computation: Matlab Technology, Beijing: Electronic Industry Press, 2008.

中文译文

关于颜色识别中颜色特征分析的方法

摘要：分析五种常用的颜色空间的特征并研究其分别对颜色识别的影响。

水的性质论文中英文资料对照外文翻译文献

Properties of Water

Water is the phase in which all the main processes take place in flotation. The processes that affect the surface characteristics of particles in water include dissociation of dissolved species, hydration, and the adsorption of ions and flotation reagents. Therefore, it is important to know the properties of water.

Water is a polar compound, and water molecules interact with each other by attractive forces called van der Waals forces. These forces are closely related to the polar structure of molecules. As is well known, positive charges repel positive charges and negative charges repel negative charges, but positive charges attract negative charges. As a consequence, dipole molecules tend to take a position so that attraction between molecules occurs. This phenomenon is called Keesom orientation. Brownian motion disturbs the orientation. A dipole can also induce a dipole moment in another molecule, causing attraction between the molecules. This phenomenon is called Debye induction.

Nonpolar molecules also interact with each other. In all atoms and molecules the continuous motion of negative electrons creates rapidly fluctuating dipoles with the positive nucleus, causing attractive forces. These forces are called dispersion or London-van der Waals forces. The interaction of nonpolar molecules is caused by the dispersion forces, but with polar molecules, such as water, the Keesom orientation and Debye induction forces also occur. The total van der Waals interaction between two atoms or molecules is given by the sum of those due to orientation, induction and dispersion forces. The orientation interaction is significant only if the dipole moment is high. The induction interaction is always small. According to Coulomb's law, electrostatic forces vary inversely with the second power of the distance between two charges. The interaction due to van der Waals forces is much weaker: the forces decay inversely with the sixth power of the distance between molecules. When atoms or molecules come very close to each other, their electron clouds repel each other. Therefore, the resultants of van der Waals forces contain both attraction and repulsion terms.

In addition to van der Waals forces, hydrogen bonding is characteristic of water (Fig. 1). Hydrogen bonding occurs when an atom is attracted by rather strong forces to two atoms instead of only one, so that it may be considered to act as a bond between them. In water, the hydrogen atom is covalently attached to the oxygen (about 470 kJ/mol) but has an additional attraction (about 23.3 kJ/mol) to a neighbouring oxygen atom of a water molecule. The bond is partly (about 90%) electrostatic and partly (about 10%) covalent (Isaacs et al., 2000; Suresh and Naik, 2000). Typically hydrogen bonding occurs where the partially positively charged hydrogen atom lies between partially negatively charged oxygen and nitrogen atoms, but it is also found elsewhere, such as between fluorine atoms in HF2- and between water and the smaller halide ions F-, Cl- and Br-.

Water appears to be a simple compound but it has many specific properties. Water molecules form an infinite hydrogen-bonded network with localized and structured clustering. According to the present view, these clusters may contain 4-12 water molecules, but much larger clusters have been suggested to occur (Chaplin, 2000). The lifetime of the structured clusters is very short, picoseconds in magnitude.
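The different falloff rates of the two interactions are easy to see numerically. A short, purely illustrative Python calculation (arbitrary units, prefactors set to one):

for r in (1.0, 2.0, 3.0):
    coulomb = 1.0 / r ** 2      # electrostatic interaction ~ r^-2
    dispersion = 1.0 / r ** 6   # van der Waals interaction ~ r^-6
    print(f"r = {r}: Coulomb ~ {coulomb:.3f}, van der Waals ~ {dispersion:.4f}")

# Doubling the separation weakens the Coulomb term 4x but the van der
# Waals term 64x, which is why the latter is so short-ranged.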
Several models have been suggested for the structure of liquid water, but no model is able to describe all the anomalous properties of water so far.

Figure 1. Schematic representation of hydrogen bonding of water molecules.

Ions destroy the natural hydrogen-bonded network of water. If the energy of interaction between an ion and water dipoles is greater than the mutual attraction of water dipoles, the ion will be hydrated. Water molecules are oriented around the ion, forming new structures. The degree of hydration depends on the size and valence of the ion. Anions are hydrated more strongly than cations of the same size, because hydrogen atoms of water can approach closer than oxygen atoms of water. Ions that exhibit weaker interactions with water than water itself are known as structure-breakers, whereas ions that interact strongly with water are known as structure-makers.

Small ions are strongly hydrated, creating local order and higher local density. Large monovalent ions, such as Cs+ and I-, are very weakly solvated. Their surface charge density is low and they may be pushed around by strong water-water interactions. Their translational movement is high. Single-atom ions may also be found in clathrate structures, where the lattice of water contains cavities that are capable of enclosing molecules without any bonds between them. Smaller ions, such as Rb+ and K+, cause the partial collapse of clathrate structures through puckering, increasing the local mobility of water molecules. The smallest ions hold strongly to the first shell of their hydrating water molecules and hence there is less localized water molecule mobility. Divalent and trivalent ions are always more strongly solvated than monovalent ions. In the primary hydration shell the water molecules are most restricted in their motion, but the effect does not end there. In the second hydration shell the water molecules are freer to rotate and exchange with bulk water, and so on. The position of hydrated water molecules on anions and cations is different, and so is their ability to form hydrogen bonds. Altogether, the properties of water depend on all the ions present and their characteristics.

Ions and their hydration affect the properties of water, such as viscosity, in many ways. Hydration is an exothermic process. During the formation of the internal layers of hydrated sheaths a considerable quantity of heat is evolved. During the formation of the subsequent layers the amount of heat gradually decreases. If the temperature is increased, the hydration of ions decreases. This is explained by the rotational movement of water molecules hindering their orientation.

The increase of orientation and the stability of the oriented dipoles decrease the solubilizing properties of water. The solvent action of water is closely connected with the hydration of the dissolved ions. If the dipoles of water are already polarized, then the hydration of new ions by these water molecules is hindered. For the same reason, the conditions for the diffusion of ions become more difficult in polarized water. In addition, hydrated layers can prevent the adsorption of reagents on particles.

The ionic composition of water is determined by the solubility of particles. Minerals are soluble if their hydration energy exceeds the lattice energy. Ion hydration energy increases as the valence of the ion increases and the ionic radius decreases. Also, the energy of the crystal lattice increases.
However, hydration energy increases much more slowly with increasing ion valence than does crystal lattice energy. Therefore, an increase in valence is accompanied by a great reduction in solubility. This is why the sulphides and oxides of bivalent metals are relatively insoluble in water. The rate of dissolution depends on the nature of the mineral, the temperature and pH of the pulp, the intensity of agitation, the particle size and specific surface of the particles, and the ionic composition of water.

The specific surface of particles determines the overall area that is in contact with water and, consequently, the number of ions that are transferred into the solution per unit of time. The intensity of agitation determines the movement of ions away from the surface of particles. These factors affect the kinetics of dissolution.

The temperature and pH of the pulp affect not only the kinetics of dissolution but also the equilibrium concentrations of the dissolved substances. In most cases, the solubility of minerals increases with an increase in temperature. This is due to the higher vibrational energy of the constituents in the crystal lattice and, at the same time, to the decreased forces between the ions, which facilitates the penetration of water into the lattice.

Ionic equilibrium and solubility are important characteristics of solutions and of the chemical reactions that occur in water. A chemical reaction in solution is possible when, on collision of ions, molecules are formed in which the forces of cohesion between atoms exceed the forces of hydration. A requirement for a reaction to proceed is the removal of ions from the solution in the form of weakly dissociated molecules or nearly insoluble substances, i.e. as a precipitate or as a gas. At a high ionic concentration of a weakly soluble substance, the solubility decreases. The solubility of minerals depends on the complexes that are formed in each particular case. If the solution contains ions in common with the mineral, the solubility is decreased.

外文翻译

水的性质

水是浮选中所有主要过程发生的相。

文学作品中英文对照外文翻译文献

本文旨在汇总文学作品中的英文和中文对照外文翻译文献,共有以下几篇:
1. 《傲慢与偏见》
翻译：英文原版名为“Pride and Prejudice”，中文版由王科一翻译。

该小说是英国作家简·奥斯汀的代表作之一，描绘了19世纪英国中上层社会的生活和爱情故事。

2. 《了不起的盖茨比》
翻译：英文原版名为“The Great Gatsby”，中文版由巫宁坤翻译。

小说主要讲述了居住在纽约长岛的神秘富豪盖茨比为了追回他的旧爱黛西而付出的努力，是20世纪美国文学的经典之作。

3. 《麦田里的守望者》
翻译：英文原版名为“The Catcher in the Rye”，中文版由施咸荣翻译。

该小说主人公霍尔顿是美国现代文学中最为知名的反英雄形象之一,作品深刻地揭示了青少年内心的孤独和矛盾。

4. 《1984》
翻译：英文原版名为“1984”，中文版由董乐山翻译。

该小说是英国作家乔治·奥威尔的代表作之一，描绘了一个虚构的极权主义社会。

以上是部分文学作品的中英文对照外文翻译文献,可以帮助读者更好地理解和学习相关文学作品。


平面设计

任何时期的平面设计都可以参照一些侧重于视觉传达和展示的艺术和专业学科。

它采用多种方式相结合，利用符号、图像和文字来创造和传达具有代表性的想法和信息。

平面设计师可以运用印刷、视觉艺术和排版技术来产生最终结果。

平面设计也常常指创造沟通作品的过程（设计）以及由此产生的产品（设计作品）本身。

平面设计的常见用途包括杂志、广告、产品包装和网页设计。

例如，产品包装可能包括标志或其他艺术作品、经过编排的文字，以及形状和颜色等纯粹的设计元素，从而使整体统一。

构图是平面设计最重要的特点之一，尤其是在使用已有材料或组合不同元素时。

平面设计涵盖了人类历史上的诸多领域。在这段漫长的历史中，尤其是在视觉传达相对爆发的20和21世纪，广告艺术、平面设计和美术之间的区别有时变得模糊和重叠。

毕竟，它们有着许多相同的内容、理论、原则、做法和语言，有时还有同样的赞助人或客户。

广告艺术的最终目标是销售商品和服务；而在平面设计中，“其实质是使信息有序、使思想成形、使言论和感受得到表达”。

在中国唐朝（618-906年），公元4世纪至7世纪间刻制的木版被用于在纺织品上印刷，后来又用于复制佛经。

公元868年印制的一部佛经（《金刚经》）是已知最早的印刷书籍。

19世纪后期的欧洲，尤其是英国，平面设计开始作为独立的运动从美术中分离出来。

蒙德里安被称为平面设计之父。

他是一位出色的艺术家，他对网格的运用启发了现代广告、印刷和网页布局中使用的现代网格系统。

1849年，亨利·科尔成为英国设计教育界的主要力量之一，他在《设计与制造杂志》中向政府阐明了设计的重要性。

他组织了万国工业博览会，以庆祝现代工业技术和维多利亚式设计。

从1892年至1896年，威廉·莫里斯的凯尔姆斯科特出版社出版了工艺美术运动中一些最重要的平面设计作品，并开创了一门非常赚钱的生意：出版装帧精美的图书并以高价卖给富人。

莫里斯证明了平面设计作品本身即存在市场，并推动了将设计从生产和美术中分离出来的开拓进程。

然而，这段历史之所以重要，是因为它是对十九世纪陈旧平面设计状况的第一次重大回应。

莫里斯的工作以及整个私人出版运动，直接影响了新艺术运动的风格，并间接推动了20世纪初平面设计的发展。

谁最早创造了“平面设计”一词似乎存在争议。

有人将其归于英国设计师和大学教授Richard Guyatt，但另一种说法认为它出自20世纪初的美国图书设计师William Addison Dwiggins。

爱德华·约翰斯顿于1916年为伦敦地铁设计的标识是一个经典的现代案例，并使用了系统化设计的字体。

在20世纪20年代，苏联构成主义将“智力生产”应用于不同领域的生产。

俄国大革命之后，个人化的艺术被认为没有价值，艺术从而转向以创造功利性物品为目的。

他们设计建筑、剧院布景、海报、面料、服装、家具、徽标、菜单等。

Jan Tschichold在他1928年的著作中编纂了现代字体排印的新原则。他后来否定了书中的法西斯主义哲学立场，但这本书仍然非常有影响力。

Tschichold、包豪斯的字体排印专家赫伯特·拜耶和拉斯洛·莫霍利-纳吉，以及El Lissitzky，都被视为我们今天所知的平面设计之父。

他们首创的生产技术和风格手法在整个二十世纪被广泛使用。

随后的几年里，现代风格的平面设计获得了广泛的接受和应用。

第二次世界大战结束后，美国经济的发展对平面设计产生了更大的需求，主要是广告和包装等。

1937年，德国包豪斯设计学院的设计师们移居芝加哥，把“量产式”的极简主义带到了美国，点燃了“现代”建筑和设计的燎原之火。

世纪中叶现代设计的著名人物包括Univers和Frutiger字体的设计师阿德里安·弗鲁提格；保罗·兰德，他从20世纪30年代后期直到1996年去世，一直把包豪斯的原则应用于流行的广告和标志设计，帮助创造了一种独特的美国式欧洲极简主义，并成为企业形象设计这一平面设计分支的主要先驱；还有约瑟夫·米勒-布罗克曼，他设计的海报严谨而不失亲和力，是1950和1960年代的时代典型。

从道路标志到技术图表，从部门间备忘录到参考手册，平面设计增强了知识的传递。

改善文字的视觉呈现可以增强可读性。

设计还可以通过有效的视觉传播帮助销售产品或理念。

它被应用于产品和公司识别系统的要素，如标志、颜色和文字。

这些要素合在一起被定义为品牌。

品牌塑造已日益成为许多平面设计师提供的重要服务之一，“品牌”与“企业形象”这两个术语往往交替使用。

教科书被设计用来呈现诸如地理、科学和数学等科目。

这些出版物的版式用于呈现理论、说明和图表。

教育中使用平面设计的一个常见例子是人体解剖图。

平面设计也应用于教育材料的布局和格式，使信息更易于获取、更容易理解。

平面设计还应用于娱乐行业的装饰、布景和视觉叙事。

娱乐领域设计用途的其他例子包括小说、漫画、电影的片头和片尾，以及舞台上节目与道具的安排。

这还包括印在T恤衫和其他待售物品上的艺术作品。

从科学期刊到新闻报道，意见和事实的呈现常常借助图形和深思熟虑的视觉信息构图得到改善，这被称为信息设计。

报纸、杂志、博客、电视和电影纪录片都可以利用平面设计来传递信息和娱乐大众。

随着网络的出现，具有Adobe Flash等交互设计工具经验的信息设计师正越来越多地被用来呈现新闻报道的背景。

一个平面设计项目可能涉及对已有文字的风格化处理和呈现，也可能涉及由平面设计师开发的已有意象或图像。

例如，一篇报纸报道始于记者和摄影记者的工作，然后由平面设计师将其编排成合理的页面布局，并确定是否需要其他图形元素。

在杂志文章或广告中，往往由平面设计师或艺术总监委托摄影师或插画师创作原始作品，再将其纳入设计版面。

现代设计实践已经扩展到计算机领域，例如所见即所得的用户界面设计，通常被称为交互设计或多媒体设计。

任何图形元素在用于设计之前，都必须先通过视觉艺术技能创作出来。

这些图形通常（但并不总是）由设计师开发。

视觉艺术作品主要是视觉性的创作，可以来自传统媒介、摄影或计算机生成的艺术。

平面设计原则可以应用于每一个单独的图形艺术元素以及最终的构图。

印刷术（字体排印）是设计字体、修改字形并编排文字的艺术和技术。

字形（字符）的创建和修改使用各种插图技术。

文字的编排包括选择字体、字号、行长、行距以及字间距。

印刷术由排字工人、排版员、印刷工人、图形艺术家、艺术总监和文书人员等执行。

在数字时代到来之前，排版是一种专门职业。

数字化使排版向新一代视觉设计师和普通用户开放。

页面布局是平面设计中处理页面上元素（内容）的编排和风格的部分。从中世纪手抄书籍的早期泥金装饰页面，到错综复杂的现代杂志和目录版式，合理的页面设计长期以来一直是印刷品的重要考量。在印刷媒体中，元素通常包括文字、图片（照片），偶尔还有为不用油墨印刷的工艺（如模切/激光切割、烫金或无墨压花）预留的占位图形。

平面设计师也常常专注于界面设计，如网页设计和软件设计，此时最终用户的交互性是布局或界面设计需要考虑的因素。

平面设计师将视觉传达技巧与用户交互和在线品牌推广相结合，常常与软件开发人员和网络开发人员合作，创建网站或软件应用程序的外观和风格，以改善用户或网站访问者的交互体验。

版画是在纸张或其他材料、表面上印制艺术品的过程。

每一张版画都不是复制品而是原作，因为它并非另一件艺术作品的复制品，在技术上被称为一个“印次”。

而绘画或素描则创造出独一无二的原始艺术品。

版画由单一的原始表面创作而成，该表面在技术上被称为“版”。

常见的版材包括：用于雕版或蚀刻的金属板，通常是铜板或锌板；用于石版印刷的石料；用于木刻的木块；油毡；以及用于丝网印刷的织物网版。

但也有许多其他种类。从同一块版上印出的作品构成一个版次；在现代，通常每张都有签名和编号，形成限量版。

版画也可以装订成册，成为艺术家手工书。

一张版画可能是一种或多种技术的产物。

色彩学领域研究眼睛如何识别打印机和显示器上的颜色，以及如何解释和组织这些色彩。

眼睛的视网膜上覆盖着两种感光细胞，分别称为视杆细胞和视锥细胞。

视杆细胞对光很敏感，但对颜色不敏感。

视锥细胞则与视杆细胞恰恰相反：它们对光不太敏感，但能感知颜色。

随着科技的发展，人们越来越认识到环境问题日益严重：大气污染、森林破坏、水土流失、土地沙漠化、水资源污染、大量物种灭绝，以及石油、天然气、煤等资源枯竭。

作为工业设计师，应该有强烈的环境保护意识。其中，温室效应、臭氧层破坏和酸雨是当今全球性的三大环境问题。

温室效应就是大气变暖的效应。其形成原因是太阳短波辐射可以透过大气射入地面，而地面增暖后放出的长波辐射却被大气中的二氧化碳等气体吸收。二氧化碳就像一层厚厚的玻璃，把地球变成了一个大暖房。

甲烷、臭氧、氯氟烃以及水汽等也对温室效应有所贡献。

随着人口的急剧增加和工业的迅速发展，越来越多的二氧化碳排入大气中；又由于森林被大量砍伐，大气中原本应被森林吸收的二氧化碳没有被吸收，致使二氧化碳逐渐增加，温室效应也不断增强。

温室效应的后果十分严重，自然生态将随之发生重大变化：荒漠将扩大，土地侵蚀加重，森林退向极地，旱涝灾害加剧，雨量增加；温带冬天更湿，夏天更旱；热带变得更湿，干热的亚热带变得更干旱，迫使原有水利工程重新调整。

沿海地区将受到严重威胁。

由于气温升高，两极冰块将融化，使海平面上升，将会淹没许多城市和港口。

臭氧层破坏现象引起科学界及整个国际社会的震动。

美国的两位科学家Molina和Rowland指出，正是人为的活动造成了今天的臭氧洞。

元凶就是现在所熟知的氟利昂和哈龙。

酸雨目前已成为一种范围广泛、跨越国界的大气污染现象。酸雨破坏土壤，使湖泊酸化，危害动植物生长；刺激人的皮肤，诱发皮肤病，引起肺水肿、肺硬化；还会腐蚀金属制品、油漆、皮革、纺织品和含碳酸盐的建筑。

总而言之，人类生活的环境已经日益恶化。

而恶化的原因大部分在于人类本身的不良生活方式和不尊重客观规律：急功近利，对地球资源的使用没有科学的计划性，而且在设计、制造产品以及日常生活中缺乏保护环境的意识，以至于自毁家园，其危害不仅在于当代，而且严重影响了子孙后代的生存。

环境问题在很大程度上是人们不良的设计和生活方式造成的后果。

这给设计师们提出了一个严肃的问题：作为设计师，应肩负起保护环境的历史重任！

工业设计在为人类创造了现代生活方式的同时，也给世界带来了灾难：它加速了资源、能源的消耗，并对地球的生态平衡造成了巨大的破坏。

所以，作为工业设计师，建立环境意识体现了其道德和社会责任心。

设计师必须对自己的设计负责，必须把人类的健康幸福、自然与人类的和谐共存作为设计所遵循的中心原则。

设计师还必须掌握必要的材料、工艺、化工、制造等方面的知识，使其设计不对环境造成危害成为可能。

“可持续发展设计”这一概念的提出，对于人性的回归及世界真正意义上的发展具有划时代的意义。

它体现了设计师的道德与责任，已成为21世纪设计发展的总趋势。

从此，人类从传统工业文明发展模式转向现代生态文明发展模式。

它是社会进步、经济增长、环境保护三者之间的协同。

可持续发展是人们应遵循的一种全新的伦理、道德和价值观念。

其本质在于：充分利用现代科技，大力开发绿色资源，发展清洁生产，不断改善和优化生态环境，促使人与自然和谐发展，人口、资源和环境相互协调。
