Computer Science English Literature: Translations


Bookmark this: 23 English-literature translation tools compiled by Zhihu users to help you read the literature

01 Sogou Translate. The document translation feature of Sogou Translate has three advantages. First, you can upload a document directly, which simplifies the whole workflow into genuine one-click translation (previously it took me many, many clicks...). Second, when you read the result online, the system shows the original text and the translation side by side in real time, making comparison easy. Third, the translated document can be downloaded directly for free, which is convenient for further study or sharing.

02 Google Chrome. Suppose you want to find, on PubMed, the papers of Professor Yigong Shi with Tsinghua University as the first affiliation. In Chrome, open PubMed and search for Yigong Shi Tsinghua University; his published articles come right up.

Next, you spot a promising paper and click through, only to find that it is still entirely in English.

At this point, try Chrome's built-in page translation. It really does translate in seconds, turning the English into Chinese, and it lets you switch quickly between the Chinese and English views.

03 Adobe Acrobat. Here is another tool that translates PDF documents in seconds (I use Adobe Acrobat Pro DC; for download and installation details, search Baidu).

Note one point, though: this is Adobe Acrobat, not Adobe Reader.

Here, allow me to introduce the company that developed Adobe Acrobat: Adobe.

In the software world, Adobe is an absolute giant among giants.

For example, PS, PR, AE, In, LR, and the other tools we use every day are without exception at the top of their fields, and every one of them is an Adobe product.

Among them, Adobe has an extremely capable piece of PDF editing and processing software: Adobe Acrobat.

(It is said that PDF, now the internationally standard document format, also originated with it.) OK, on to the main topic.

What can it do? Convert PDF to Word, merge images into a PDF, edit PDFs, and so on. In short, if it involves PDF, Acrobat can handle it.

So how do we use it to translate a PDF from the literature? Step 1: open the PDF in Acrobat. Step 2: click "File" in the interface, then "Save As," and choose "HTML" as the output format. Step 3: when the export completes, you get two items: a web-page (HTML) version of the PDF, and a folder of supporting files for the page's images (delete it and the images will no longer display). Step 4: open the HTML file in Google Chrome and use Chrome's page translation; the whole document is translated in seconds.

English Literature with Translation: Computer Programs (complete Word version)

Name: Liu Junlin; Class: Communications Class 143; Student ID: 2014101108

Computer Language and Programming

I. Introduction

Programming languages, in computer science, are the artificial languages used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages, such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous, meaning that their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise.

Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address a particular kind of computing problem or for use on a particular model of computer system. For instance, programming languages such as FORTRAN and COBOL were written to solve certain general types of programming problems—FORTRAN for scientific applications, and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable, meaning that they may be used to program many types of computers. Other languages, such as machine languages, are designed to be used by one specific model of computer system, or even by one specific computer in certain research applications. The most commonly used programming languages are highly portable and can be used to effectively solve diverse types of computing problems. Languages like C, PASCAL and BASIC fall into this category.

II. Language Types

Programming languages can be classified as either low-level languages or high-level languages. Low-level programming languages, or machine languages, are the most basic type of programming languages and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of computer. High-level languages are programming languages that must first be translated into a machine language before they can be understood and processed by a computer. Examples of high-level languages are C, C++, PASCAL, and FORTRAN. Assembly languages are intermediate languages that are very close to machine languages and do not have the level of linguistic sophistication exhibited by other high-level languages, but must still be translated into machine language.

1. Machine Languages

In machine languages, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the main computer memory (Random Access Memory, or RAM), (2) a simple operation to perform, such as adding the two numbers together, (3) where in the main memory to put the result of this simple operation, and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all programmed in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical instruction in a machine language might read 10010 1100 1011 and mean add the contents of storage register A to the contents of storage register B.
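The four-part layout described above can be made concrete with a small sketch. The 16-bit format, the field widths, and the register names below are invented for illustration only; no real instruction set is implied. The point is simply how an opcode and operand locations are packed into, and unpacked from, a single machine word.

using System;

class ToyDecoder
{
    static void Main()
    {
        // Hypothetical layout: bits 15-12 opcode, bits 11-8 source register A,
        // bits 7-4 source register B, bits 3-0 destination register.
        ushort instruction = 0b0001_0001_0010_0011; // "ADD R1, R2 -> R3" in this toy format

        int opcode = (instruction >> 12) & 0xF;
        int srcA = (instruction >> 8) & 0xF;
        int srcB = (instruction >> 4) & 0xF;
        int dest = instruction & 0xF;

        Console.WriteLine($"opcode={opcode} srcA=R{srcA} srcB=R{srcB} dest=R{dest}");
    }
}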
2. High-Level Languages

High-level languages are relatively sophisticated sets of statements utilizing words and syntax from human language. They are more similar to normal human languages than assembly or machine languages and are therefore easier to use for writing complicated programs. These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in an assembly language.

3. Assembly Languages

Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine language instruction. An assembly language statement is composed with the aid of easy-to-remember commands. The command to add the contents of the storage register A to the contents of storage register B might be written ADD B, A in a typical assembly language statement. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time it takes to run a program, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.

III. Classification of High-Level Languages

High-level languages are commonly classified as procedure-oriented, functional, object-oriented, or logic languages. The most common high-level languages today are procedure-oriented languages. In these languages, one or more related blocks of statements that perform some complete function are grouped together into a program module, or procedure, and given a name such as "procedure A." If the same sequence of operations is needed elsewhere in the program, a simple statement can be used to refer back to the procedure. In essence, a procedure is just a mini-program. A large program can be constructed by grouping together procedures that perform different tasks. Procedural languages allow programs to be shorter and easier for the computer to read, but they require the programmer to design each procedure to be general enough to be used in different situations.

Functional languages treat procedures like mathematical functions and allow them to be processed like any other data in a program. This allows a much higher and more rigorous level of program construction. Functional languages also allow variables—symbols for data that can be specified and changed by the user as the program is running—to be given values only once. This simplifies programming by reducing the need to be concerned with the exact order of statement execution, since a variable does not have to be redeclared, or restated, each time it is used in a program statement. Many of the ideas from functional languages have become key parts of many modern procedural languages.

Object-oriented languages are outgrowths of functional languages. In object-oriented languages, the code used to write the program and the data processed by the program are grouped together into units called objects.
Objects are further grouped into classes, which define the attributes objects must have. A simple example of a class is the class Book. Objects within this class might be Novel and Short Story. Objects also have certain functions associated with them, called methods. The computer accesses an object through the use of one of the object's methods. The method performs some action to the data in the object and returns this value to the computer. Classes of objects can also be further grouped into hierarchies, in which objects of one class can inherit methods from another class. The structure provided in object-oriented languages makes them very useful for complicated programming tasks.
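That vocabulary can be sketched in a few lines of C#. This is an illustration added here, not code from the article: Novel and ShortStory are classes derived from Book, and Describe is a method that one subclass overrides and the other inherits unchanged. All member names are invented.

using System;

class Book
{
    public string Title;
    public virtual string Describe() => $"Book: {Title}";   // a method acting on the object's data
}

class Novel : Book
{
    public override string Describe() => $"Novel: {Title}"; // inherited, then specialized
}

class ShortStory : Book
{
    // Inherits Describe() from Book unchanged.
}

class Demo
{
    static void Main()
    {
        Book b = new Novel { Title = "Example" };
        Console.WriteLine(b.Describe()); // prints "Novel: Example"; the object's class selects the method
    }
}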
Logic languages use logic as their mathematical base. A logic program consists of sets of facts and if-then rules, which specify how one set of facts may be deduced from others, for example: If the statement X is true, then the statement Y is false. In the execution of such a program, an input statement can be logically deduced from other statements in the program. Many artificial intelligence programs are written in such languages.

IV. Language Structure and Components

Programming languages use specific types of statements, or instructions, to provide functional structure to the program. A statement in a program is a basic sentence that expresses a simple idea—its purpose is to give the computer a basic instruction. Statements define the types of data allowed, how data are to be manipulated, and the ways that procedures and functions work. Programmers use statements to manipulate common components of programming languages, such as variables and macros (mini-programs within a program).

Statements known as data declarations give names and properties to elements of a program called variables. Variables can be assigned different values within the program. The properties variables can have are called types, and they include such things as what possible values might be saved in the variables, how much numerical accuracy is to be used in the values, and how one variable may represent a collection of simpler values in an organized fashion, such as a table or array. In many programming languages, a key data type is a pointer. Variables that are pointers do not themselves have values; instead, they have information that the computer can use to locate some other variable—that is, they point to another variable.

An expression is a piece of a statement that describes a series of computations to be performed on some of the program's variables, such as X+Y/Z, in which the variables are X, Y, and Z and the computations are addition and division. An assignment statement assigns a variable a value derived from some expression, while conditional statements specify expressions to be tested and then used to select which other statements should be executed next.

Procedure and function statements define certain blocks of code as procedures or functions that can then be returned to later in the program. These statements also define the kinds of variables and parameters the programmer can choose and the type of value that the code will return when an expression accesses the procedure or function. Many programming languages also permit mini translation programs called macros. Macros translate segments of code that have been written in a language structure defined by the programmer into statements that the programming language understands.

V. History

Programming languages date back almost to the invention of the digital computer in the 1940s. The first assembly languages emerged in the late 1950s with the introduction of commercial computers. The first procedural languages were developed in the late 1950s to early 1960s: FORTRAN, created by John Backus, and then COBOL, created by Grace Hopper. The first functional language was LISP, written by John McCarthy in the late 1950s. Although heavily updated, all three languages are still widely used today. In the late 1960s, the first object-oriented languages, such as SIMULA, emerged. Logic languages became well known in the mid-1970s with the introduction of PROLOG, a language used to program artificial intelligence software. During the 1970s, procedural languages continued to develop with ALGOL, BASIC, PASCAL, C, and Ada. SMALLTALK was a highly influential object-oriented language that led to the merging of object-oriented and procedural languages in C++ and, more recently, in JAVA. Although pure logic languages have declined in popularity, variations have become vitally important in the form of relational languages for modern databases, such as SQL.

Computer Programs (beginning of the Chinese translation): I. Introduction. A computer program is a set of instructions that directs a computer to perform a particular function or combination of functions.

Microcontroller English Literature and Translation

Microcontroller (Chinese: 单片机)

A microcontroller is a small computer on a single integrated circuit that contains a processor core, memory, and programmable input/output peripherals. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications.

A microcontroller's processor core is typically a small, low-power computer dedicated to controlling the operation of the device in which it is embedded. It is often designed to provide efficient and reliable control of simple and repetitive tasks, such as switching lights on and off, or monitoring temperature or pressure sensors.

MEMORY

Microcontrollers typically have a limited amount of memory, divided into program memory and data memory. The program memory is where the software that controls the device is stored, and is often a type of Read-Only Memory (ROM). The data memory, on the other hand, is used to store data that is used by the program, and is often volatile, meaning that it loses its contents when power is removed.

INPUT/OUTPUT

Microcontrollers typically have a number of programmable input/output (I/O) pins that can be used to interface with external sensors, switches, actuators, and other devices. These pins can be programmed to perform specific functions, such as reading a sensor value, controlling a motor, or generating a signal. Many microcontrollers also support communication protocols like serial, parallel, and USB, allowing them to interface with other devices, including other microcontrollers, computers, and smartphones.

APPLICATIONS

Microcontrollers are widely used in a variety of applications, including:
- Home automation systems
- Automotive electronics
- Medical devices
- Industrial control systems
- Consumer electronics
- Robotics

CONCLUSION

In conclusion, microcontrollers are powerful and versatile devices that have become an essential component in many embedded systems. With their small size, low power consumption, and high level of integration, microcontrollers offer an effective and cost-efficient solution for controlling a wide range of devices and applications.
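The INPUT/OUTPUT description above comes down to setting and clearing individual bits in control registers. The sketch below simulates that idea on a host PC, written in C# for consistency with the code elsewhere in this collection (real firmware would normally be C or assembly); PORTA and the pin numbering are invented stand-ins for a real device's memory-mapped register.

using System;

class GpioSim
{
    static byte PORTA; // stands in for an 8-bit memory-mapped output register

    static void SetPin(int pin) => PORTA |= (byte)(1 << pin);    // drive the pin high
    static void ClearPin(int pin) => PORTA &= (byte)~(1 << pin); // drive the pin low
    static bool ReadPin(int pin) => (PORTA & (1 << pin)) != 0;

    static void Main()
    {
        SetPin(3);                      // e.g., switch an LED on
        Console.WriteLine(ReadPin(3));  // True
        ClearPin(3);                    // and off again
        Console.WriteLine(Convert.ToString(PORTA, 2).PadLeft(8, '0')); // 00000000
    }
}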

Mechatronics Thesis: English Literature and Chinese Translation

Graduation Thesis Foreign-Literature Translation. Topic: INTEGRATION OF MACHINERY. Source of the translated material: article "INTEGRATION OF MACHINERY"; book: 《Digital Image Processing》; authors: Y. Torres, J. J. Pavón, I. Nieto and J. A. Rodríguez; section: 2.4 INTEGRATION OF MACHINERY.

INTEGRATION OF MACHINERY (From ELECTRICAL AND MACHINERY INDUSTRY)

ABSTRACT: Mechatronics, the integration of machinery, is the inevitable result of the development of modern science and technology. This article summarizes the basic outline of mechatronics technology and its development background, surveys the present state of mechatronics technology at home and abroad, and analyzes its trend of development.

Key words: integration of machinery, technology, present situation, product, technique of manufacture, trend of development

0. Introduction

The unceasing development of modern science and technology has driven different disciplines to intersect with and penetrate one another enormously, bringing about a technological revolution and transformation in the engineering domain. In mechanical engineering, the rapid development of microelectronic technology and computer technology, and their penetration into the mechanical industry, formed the integration of machinery, which has caused huge changes in the industry's technical structure, product organization, functions and constitution, production methods, and management systems. Industrial production has moved from "mechanical electrification" into a development phase characterized by "the integration of machinery."

1. Outline of the integration of machinery

The integration of machinery refers to introducing electronic technology into a machine's main function, power function, information-processing function, and control function, and unifying the mechanical device, the electronic devices, and the software into one organic system. The integration of machinery has by now developed into a new discipline with its own system; it not only advances along with science and technology, but is also continually given new content. Its basic characteristic may be summarized as follows: the integration of machinery starts from a systems viewpoint and synthesizes a community of technologies, including mechanical technology, microelectronic technology, automatic control technology, computer technology, information technology, sensing and instrumentation technology, power electronics, interfacing technology, information-conversion technology, and software programming technology. According to the system's functional goals and optimization goals, it reasonably disposes and lays out the various functional units so that the specified functions are realized with multiple purposes, high quality, redundancy and reliability, and low energy consumption, and the overall system is optimized by systems-engineering technology. The functional system produced in this way becomes a mechatronic system or a mechatronic product.
Therefore, "the integration of machinery" covers two aspects, "technology" and "product." The technology of integrated machinery is a comprehensive technology based on the organic fusion of the community technologies above; it is not a simple combination or piecing-together of mechanical technology, microelectronic technology, and other new technologies. This is the basic conceptual difference between the integration of machinery and "machinery plus electricity," that is, mechanical electrification. Merely electrified mechanical engineering is still traditional machinery: its main function is still to replace and amplify human physical strength. But once the integration of machinery is developed, microelectronic devices can not only substitute for certain mechanical parts and their original functions, but can also endow the machine with many new functions, such as automatic detection, automatic processing and display of information, recording, automatic control and regulation, and automatic diagnosis and protection. That is, a mechatronic product is an extension not only of the human hand and body, but also of the human sense organs and brain; it has intelligent characteristics. This is the essential functional distinction between the integration of machinery and mechanical electrification.

2. The development of the integration of machinery

The development of the integration of machinery may be divided roughly into three stages. The period before the 1960s is the first stage, called the initial stage. In this period, people used the preliminary achievements of electronic technology to improve the performance of mechanical products spontaneously rather than deliberately. In particular, during the Second World War, the war stimulated the union of mechanical products with electronic technology, and these combined mechanical-electrical military technologies were transferred to civilian use after the war and played a positive role in the postwar economic recovery. Generally speaking, development at that time was still in a spontaneous state. Because electronic technology had not yet reached a sufficient level, the union of mechanical and electronic technology could not yet develop widely and deeply, and the products that had been developed could not be promoted on a large scale.

The 1970s and 1980s are the second stage, which may be called the vigorous development stage. In this period, the development of computer technology, control technology, and communications laid the technological base for the development of the integration of machinery.
The swift and violent development of large-scale and very-large-scale integrated circuits and microcomputers provided a full material base for the development of the integration of machinery. The characteristics of this period are: (1) the word "mechatronics" first became generally accepted in Japan and obtained quite widespread acknowledgment worldwide by the late 1980s; (2) mechatronics technology and products developed enormously; (3) various countries began to give great attention and support to mechatronics technology and products. In the late 1990s, mechatronics technology entered a new stage, making great strides toward intellectualization, and the integration of machinery entered a period of deep development. At the same time, optics, communications, and other fields entered the integration of machinery; micro-scale machining technology also began to show itself there, and new branches such as opto-mechatronics and micro-mechatronics appeared. On the other hand, thorough research has been conducted on mechatronic system modeling and design, on analysis and integration methods, and on the discipline system and development trends of mechatronics. Meanwhile, the huge progress obtained in fields such as artificial intelligence, neural networks, and optical-fiber technology has opened up a vast world for the development of mechatronics technology. This research will urge the integration of machinery to further establish a complete foundation and gradually form a complete scientific system.

Our country only began to study and apply this field at the beginning of the 1980s. The State Council established a leading group for the integration of machinery and listed the technology in the "863 Plan." When formulating the "Ninth Five-Year Plan" and the development outline for 2010, full consideration was given to the international trends in the development of mechatronics technology and the influence they might bring. Many universities, colleges and institutes, development facilities, and some large and medium-sized enterprises have done massive work on the development and application of this technology and have achieved certain results, but compared with advanced countries such as Japan a considerable disparity remains.
3. Trends in the development of the integration of machinery

The integration of machinery is a multi-disciplinary overlapping synthesis of machinery, electronics, optics, control, computing, information, and other fields; its development and progress rely on, and in turn promote, the development and progress of the related technologies. The main development directions of the integration of machinery are as follows:

3.1 Intellectualization

Intellectualization is an important development direction of 21st-century mechatronics technology. Artificial intelligence receives growing attention in mechatronics research, and robots and intelligent numerically controlled machine tools are important applications. "Intellectualization" here describes machine behavior: on the foundation of control theory, it absorbs new ideas and new methods from artificial intelligence, operations research, computer science, fuzzy mathematics, psychology, physiology, chaos dynamics, and other fields, to simulate human intelligence and give the machine abilities such as judgment, inference, logical thinking, and independent decision-making, in order to reach higher control goals. To be sure, it is impossible, and also unnecessary, to give a mechatronic product intelligence identical to a human's. But high-performance, high-speed microprocessors make it completely possible, and essential, for a mechatronic product to have preliminary intelligence, or part of a human's intelligence.

In the modern manufacturing process, information has become the determining factor controlling the manufacturing industry, and it is the most active driving factor. Enhancing the information-handling capacity of manufacturing systems has become a key point in the development of modern manufacturing science. Because the information organization and structure of a manufacturing system are multi-level, the acquisition, integration, and fusion of information exhibit a multi-dimensional character, and the organization of information is likewise multi-level. In the structural model of manufacturing information, the uniform constraint, dissemination, and processing of manufacturing information, and the management of knowledge libraries for massive manufacturing data, all still await further breakthroughs.

All kinds of artificial-intelligence tools and computational-intelligence methods have been applied widely in manufacturing, promoting the development of manufacturing intelligence. Computational-intelligence agents based on biological evolution algorithms are receiving more and more universal attention in the area of combinatorial optimization, including scheduling problems, and may hopefully break through the restriction that problem scale places on both solution speed and solution precision when solving combinatorial-optimization questions in manufacturing. Manufacturing intelligence also displays itself in intelligent dispatching, intelligent design, intelligent processing, robot learning, intelligent control, intelligent process planning, intelligent diagnosis, and other areas. Key breakthroughs on these questions may form a basic research system for product innovation.
At the frontier sciences of modern mechanical engineering, the overlapping fusion of different sciences will produce new scientific growth points, and economic development and society's progress place new requests and expectations on science and technology, thus forming the frontier science. Frontier science also deals with the border area between solved and unsolved scientific questions. Frontier science has obvious time-domain, domain, and dynamic characteristics. An important characteristic distinguishing engineering frontier science from general basic science is that it covers the key scientific and technological questions that actually appear in engineering practice.

A manufacturing system is a complex large-scale system. To satisfy the demands on manufacturing systems for agility, fast response, and fast reorganization, we must draw on the research results of information science, the life sciences, the social sciences, and other disciplines, and explore new architectures, manufacturing patterns, and effective operational mechanisms for manufacturing systems. Optimizing the organizational structure and keeping the system in a good operating condition are the essential targets of manufacturing-system modeling, simulation, and optimization. A new manufacturing-system architecture not only has vital significance for an enterprise's agility and its reorganizable ability to respond to demand, but also places higher requirements on the flexibility and dynamic reorganizability of the enterprise's shop-floor production equipment. Biological views of manufacturing are being introduced into manufacturing systems more and more to satisfy these new requirements.

Learning the organization and operation methods and techniques of complicated systems from biological phenomena is a valid way out of many hard problems that the manufacturing industry will face from now on. Bionic manufacturing imitates the self-organization, self-matching, self-growth, self-evolution, and similar functions, structures, and operating modes of biological organs in a manufacturing system and manufacturing process. Driven by this mechanism, a manufacturing system continuously perfects its own organizational structure and operating mode and thus adapts itself to the environment. It provides a theoretical foundation and the conditions of realization for the concurrent bottom-up design of products and their manufacturing-process rules, for the automatic generation and dynamic reorganization of remote production systems, and for products and manufacturing systems that tend automatically toward the optimum. Bionic manufacturing belongs to the "distant hybridization" of manufacturing science and the life sciences, and it will exert a huge influence on the manufacturing industry of the 21st century.

Mechatronics (beginning of the Chinese translation): Abstract. Mechatronics is the inevitable result of the development of modern science and technology. This paper briefly describes the basic outline of mechatronics technology and its development background.

C# Programming Language: Foreign-Literature Translation (English and Chinese)

C# Programming Language Overview: foreign-literature translation (including the English original and the Chinese translation). Source: Barnett M. C# Programming Language Overview [J]. Lecture Notes in Computer Science, 2016, 3(4): 49-59.

English original

C# Programming Language Overview
Barnett M

1. History of C, C++, C#

The C# programming language is based on the spirit of the C and C++ programming languages. This accounts for its powerful features and its easy-to-learn curve. It cannot be said that C# is the same as C and C++, but because C# is built on both, Microsoft has removed some features that had become burdensome, such as pointers. This section looks at C and C++ and traces their development into C#.

The C programming language was originally defined in the UNIX operating system. In the past, we often wrote UNIX applications in C, including a C compiler, and C was ultimately used to write UNIX itself. It is generally accepted that this academic foundation extended into the commercial world. The original Windows API was defined to work with Windows code written in C, and to this day at least the core Windows operating system APIs remain C-compatible.

From a definition point of view, C lacks a single detail that languages like Smalltalk do have: the concept of an object. You will learn more about objects in Chapter 8, "Write Object-Oriented Code." An object is a collection of data together with a set of operations on that data. Object-like code can be written in C, but the concept of the object cannot be enforced in this language. If you want to structure your code to make it like an object, that's fine. If you don't want to do this, C really will not mind. The object is not an intrinsic part of the language, and many people using this language did not spend much time on that style of programming. When object-oriented perspectives began to gain acceptance, this way of thinking about code mattered. C++ was developed to include this improvement. It was defined to be compatible with C (so that all C programs are also C++ programs and can be compiled by a C++ compiler). The main addition to the C++ language was the provision of this new object concept. C++ additionally provides derivation of class (object template) behavior.

The C++ language is a modified version of the C language. Unlike friendlier languages such as VB, C and C++ are very low-level and require a lot of coding to make your application run well, including explicit reasoning and error checking. In return, C++ can drive some very powerful applications, and the code runs very smoothly. Since the goal was to maintain compatibility with C, C++ could not break away from the low-level features of C.

When Microsoft defined C#, it retained a great deal of the feel of C and C++ statements, so the code can be recognized quickly. A big advantage of C# is that its designers did not make it compatible with C and C++. While this may seem like a wrong decision, it is actually good news. C# eliminates the things that make C and C++ difficult to work with, beginning with the quirks and defects found in C. C# started with a clean slate and had no compatibility requirements, so it could keep the strengths of its predecessors and discard the weaknesses that made C and C++ programs hard to keep alive.

2. Introducing C#

C#, the new language introduced in the .NET system, is derived from C++. However, C# is a popular, object-oriented (from beginning to end), type-safe language.

Language features

The following sections provide a quick perspective on some of the features of the C# language. If some of them are unfamiliar to you, don't worry, everything will be explained in detail in later sections.
In C#, all code and data must be attached to a class. You cannot define a variable outside a class, nor can you write any code that is not in a class. When a class object is created and run, the class is constructed; when the object of the class is released, the class is destroyed. Classes provide single inheritance, and all classes ultimately derive from the base class object. Over time, C# has provided versioning techniques to help your classes evolve while maintaining code compatibility with code that uses their earlier versions.

Let's look at an example of a class called Family. This class contains two fields to hold the first and last names of a family member, as well as a function to return the full name of the family member.

class Family
{
    public string FirstName;
    public string LastName;

    public string FullName()
    {
        return FirstName + LastName;
    }
}

Note: Single inheritance means that a C# class can inherit only from one base class.

C# lets you package your classes into collections called namespaces. A namespace helps arrange collections of classes into logical aggregations. As you start learning C#, it becomes clear that all namespaces are related to the .NET type system. Microsoft also chose to include channels that assist compatibility with previous code and APIs; these classes are also included in Microsoft's namespaces.

Types of data

C# lets you work with two types of data: value types and reference types. A value type holds the actual value. A reference type holds a reference to a value stored elsewhere in memory. Raw data types, such as character, integer, float, enumeration, and structure types, are all value types. Object and array types are treated as reference types. C# predefines the reference types object and string, and predefines value types including byte, short, unsigned short, integer, unsigned integer, long, unsigned long, float, double, boolean, character, and decimal. Every value type and reference type is ultimately backed by a primitive type object. C# also allows you to convert a value of one type to another. You can use an implicit conversion or an explicit conversion. Implicit conversions always succeed and do not lose any information (for example, you can convert an integer to a long integer without losing any information, because a long integer is longer than an integer). Explicit conversions can lose data: because a long integer can hold more values than an integer, converting from a long integer to an integer may lose some of them.

Cross-reference: see Chapter 3, "Working with Variables," to find out more about explicit and implicit conversion strategies.
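A few lines make these distinctions concrete. The sketch below is added for illustration and is not from the article: it shows an implicit widening conversion, an explicit narrowing cast that can lose information, and the copy-versus-alias behavior that separates value types from reference types.

using System;

class Conversions
{
    static void Main()
    {
        int i = 123;
        long l = i;               // implicit: widening int -> long never loses information
        long big = 3_000_000_000;
        int truncated = (int)big; // explicit cast required: the value may not fit in an int
        Console.WriteLine($"{l} {truncated}"); // the second number shows the lost information

        int v = 1;
        int copy = v;             // value type: 'copy' receives its own copy of the data
        copy = 2;                 // ...so changing it leaves 'v' untouched
        int[] arr = { 1 };
        int[] alias = arr;        // reference type: both names refer to the same array
        alias[0] = 2;             // ...so the change is visible through 'arr' as well
        Console.WriteLine($"{v} {arr[0]}"); // prints "1 2"
    }
}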
You can use both single-dimensional and multidimensional arrays in C#. A multidimensional array can be rectangular, when all rows have the same size, or jagged, when the rows have different sizes.

Classes and structures can have data members called properties and fields. You might define a structure called Employee, for example, that has a field called Name. If you define an Employee-type variable called CurrentEmployee, you can retrieve the employee's name by writing CurrentEmployee.Name. What if something should happen on access? If the employee's name must be read from a database, for example, you can write property code that says, in effect, "when someone asks for the value of the Name property, read the name from the database and return it as a string."

Functions

A function is code that can be called at any time. An example of a function appeared earlier in this chapter: the FullName function in the Family class. A function is usually combined with code that returns information, while a method usually does not return information; but for our purposes here, we refer to both as functions.

A function can have four kinds of parameters (a short sketch follows this list):
• Input parameters have values passed into the function, but the function cannot change their values.
• Output parameters have no value when they are passed to the function, but the function can give them a value and pass that value back to the caller.
• Reference parameters pass a value by reference. They have a value coming into the function, and that value can be changed inside the function.
• Params parameters define an array variable that collects the remaining arguments in the list.
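Here is a compact illustration of the four parameter kinds; the method and variable names are invented for the example.

using System;

class Parameters
{
    // 'x' is a plain input parameter; 'doubled' must be assigned before the method
    // returns (out); 'runningTotal' arrives with a value and may be changed (ref);
    // 'rest' packs any remaining arguments into an array (params).
    static void Demo(int x, out int doubled, ref int runningTotal, params int[] rest)
    {
        doubled = x * 2;
        runningTotal += x;
        foreach (int r in rest) runningTotal += r;
    }

    static void Main()
    {
        int total = 10;
        Demo(5, out int d, ref total, 1, 2, 3);
        Console.WriteLine($"{d} {total}"); // prints "10 21"
    }
}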
C# and the CLR work together to provide automatic memory management. You do not need to write code that says "reserve enough space for this object"; the CLR monitors your memory usage and automatically reclaims memory when it is no longer needed.

C# provides a large number of operators that allow you to write a large number of mathematical and bitwise expressions. Many (but not all) of them can be redefined, so you can change the behavior of these operators.

C# provides a long list of statements with which you can define various processing paths through your code. Statements using keywords like switch, while, for, break, and continue enable your code to split into different paths depending on the values of variables.

Classes can contain code and data. Each member has a visibility with respect to other objects. C# provides the accessibility ranges public, protected, internal, protected internal, and private.

Variables

Variables can be defined as constants. A constant has a fixed value that cannot be changed during the execution of your code. The value of pi, for example, is a good example of a constant, because its value will not change while your code is running. Enumeration types define specific names for constants. For example, you could define an enumerated type for the planets, using names such as Mercury and Venus in your code. If you use a variable to represent a planet, using the names of this enum type makes your code easier to read.

C# provides a built-in mechanism for defining and handling events. If you write a class that performs a long operation, you may want to raise an event when the operation ends. Clients can subscribe to this event and catch it in their own code, so they are notified when your long computation has completed. This event-handling mechanism uses delegates in C#: variables that reference a function.

Note: An event handler is a routine in your code that determines what action takes place when an event occurs, for example, the user clicking a button.

If your class holds a set of values, you can write code called an indexer so that your class can be accessed as if it were an array. Suppose you write a class called Rainbow, for example, that contains the set of colors in a rainbow. A caller might want to write something like MyRainbow[0] to retrieve the first color in the rainbow. You can write an indexer in your Rainbow class to define what is returned when a caller accesses your class as if it were an array of values.

Interfaces

C# provides interfaces, which aggregate properties, methods, and events that describe a set of functionality. A C# class can implement an interface; it thereby tells users that the class provides the set of functionality the interface describes, so that existing code has as few compatibility issues as possible. Once an interface has been published, it cannot be changed, but it can evolve through inheritance. C# classes can implement many interfaces, even though a class can inherit from only one base class.

Let's look at an example from the real world that makes the value of interfaces very clear. Many applications today support add-ins: the ability to load additional modules when they execute. To do this, an add-in must follow some rules. The add-in DLL must export a function called CEEntry, and the DLL file name must begin with CEd. When our code runs, it scans its directory for all DLLs whose names begin with CEd. When it finds one, it loads it, and then it uses GetProcAddress to find the CEEntry function in the DLL. This shows that you must obey all the rules to build an add-in; a load scheme of this kind carries more code responsibility than necessary. If we used an interface in this example, your DLL add-in would simply implement the interface. This ensures that all the necessary methods, properties, and events are present in the DLL, exactly as specified.
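The add-in scenario suggests how an interface tightens that contract. The sketch below is illustrative only; IAddIn and its members are invented here, not part of any real API. The host code depends only on the interface, and the compiler guarantees that any implementing class supplies the required members.

using System;

// The host defines the contract every add-in must satisfy.
interface IAddIn
{
    string Name { get; }
    void Execute();
}

// An add-in satisfies the contract by implementing the interface.
class HelloAddIn : IAddIn
{
    public string Name => "Hello";
    public void Execute() => Console.WriteLine("Hello from the add-in.");
}

class Host
{
    static void Run(IAddIn addIn) // the host knows only the interface, not the class
    {
        Console.WriteLine($"Loading {addIn.Name}...");
        addIn.Execute();
    }

    static void Main() => Run(new HelloAddIn());
}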
Attributes

Attributes declare additional information about your class to the CLR. In the past, if you wanted to describe your classes, you had to store the description in decentralized ways in external files, such as IDL or even HTML files. Attributes solve this problem: the developer binds information to a class, any kind of information, for example, information defining how the class acts when it is used. The possibilities are endless, which is why Microsoft includes a lot of predefined attributes in the .NET Framework.

Compiling C#

Running your C# code through the C# compiler generates two important kinds of information: code and metadata. The next sections describe these two topics and finish with a look at the binary building block of .NET code: the assembly.

Microsoft Intermediate Language (MSIL)

The code output by the C# compiler is written in an intermediate language called Microsoft Intermediate Language. MSIL is a detailed set of instructions describing how to execute your code. It contains instructions for operations such as initializing variables, invoking methods on objects, handling errors, and creating new objects. C# is not the only language whose source code turns into MSIL during the compilation process. All .NET-compatible languages, including VB.NET and Managed C++, generate MSIL when their source code is compiled. Because all .NET languages use the same runtime, code from different languages and different compilers can easily work together.

For physical CPUs, MSIL is not an executable set of instructions. The CPU knows nothing about MSIL, and MSIL knows nothing about your machine's CPU. Since your CPU cannot read MSIL, the code must be translated; this step is called just-in-time compilation, or JIT. The job of the JIT compiler is to translate your general MSIL code into machine code that the CPU can execute.

You may wonder why there is an extra step in the process. When a compiler could generate CPU-specific code immediately, why generate MSIL and defer that work? There are many reasons. First, MSIL makes it easier for your code to move to different pieces of hardware. Suppose you have written some C# code and you want it to run both on your desktop and on a handheld device. It is very likely that these two devices have different CPUs. If you only had a C# compiler that targeted one specific CPU, you would need two C# compilers: one targeting the desktop CPU and the other targeting the handheld device's CPU. You would have to compile your code twice and make sure the right binary is used on the right device. With MSIL, you only write once. The .NET Framework installed on your desktop contains a JIT compiler that translates your MSIL into CPU-specific code for your machine. The .NET Framework installed on your handheld device contains a JIT compiler that translates the same MSIL into CPU-specific code for the handheld device. You now have a single MSIL code base that can run on any device that has a .NET JIT compiler. The JIT compiler on each device takes care of your code and lets it run smoothly.

Another reason the compiler produces MSIL is that the instruction set can easily be read by a verification process. Part of the JIT compiler's job is to verify your code so that it is as safe as possible. When properly enforced, these checks ensure that your code does not execute any instructions that could make it crash. The definition of the MSIL instructions makes this checking process easier. CPU-specific instruction sets are optimized for fast execution, but they make the code difficult to read and therefore difficult to check. Having a C# compiler that emitted CPU-specific code directly would make code inspection difficult or even impossible. Letting the .NET Framework's JIT compiler verify your code ensures that your code accesses memory only through well-defined paths and that variable types are used correctly.

Metadata

The compilation process also outputs metadata, which is an important part of the .NET code-sharing story. Whether you use C# to build an application or a library that other people use in their applications, you will want to take advantage of already-compiled .NET code. That code may be provided by Microsoft as part of the .NET Framework, or it may come from another user. The key to using foreign code is letting the C# compiler know which classes and variables the other code base contains, so that it can resolve them while compiling your work and match the code you write against that code base.

Think of metadata as a directory of your compiled code. The types of the compiled source code exist in the compiled code along with the generated MSIL. The signatures of methods and the types of variables are completely described in the metadata and are ready to be read by other applications. For example, a development environment can read the metadata from a .NET library to provide IntelliSense-style listings of all the methods that are valid for a particular class.

If you have worked with COM, you may be familiar with type libraries. The goal of a type library is to provide the same directory functionality for COM objects. However, type libraries suffer from a few limitations, and in fact not all of the data about an object can be put into a type library. Metadata in .NET does not have this disadvantage: all the information needed to describe a class is placed in the metadata.

Components

Sometimes you will use C# to build a terminal application. Such applications are packaged into an executable file with .EXE as its extension. C# completely supports the creation of .EXE files.
However, there are also times when you do not build a stand-alone program. You may want to create some useful C# classes that, say, another developer wants to use in a .NET application. In that case you will not create an application; instead you will build a component. A component is a package of metadata and code. As a unit of deployment, its classes share the same level of version control, security information, and loading requirements. Think of a component as a logical DLL. If you are familiar with Microsoft Transaction Server or COM+, you can think of components as the .NET equivalent of packages.

There are two kinds of components: private components and global components. When you build your own component, you don't need to specify whether you want to create a global or a private component. You can make your code accessible to a single application only: your component is then a package similar to a DLL and is installed into the same directory as the application that uses it. The application can use the component only when it is in the application's own directory.

If you want to share your code among more applications, use global components. Global components can be used by any .NET application on the system, regardless of the directory in which the application is installed. Microsoft ships components as part of the .NET Framework, and each Microsoft component is installed as a global component. The .NET Framework SDK contains utilities for installing artifacts into, and removing them from, the global component store.

C# can be viewed, to some extent, as the programming language for the .NET Windows-oriented environment. Over the past ten years, VB and C++ finally became very powerful languages, but each brought baggage with it. For Visual Basic, the main advantage is that it is easy to understand: many programming tasks are easy to accomplish, and it largely hides the details of the Windows API and the COM component structure. The downside is that early versions of Visual Basic never implemented object orientation in a real sense (BASIC was primarily intended to make things easier for beginners to understand rather than to support writing large commercial applications), so Visual Basic could not really be a structured or object-oriented programming language.

C++, on the other hand, has its roots in the ANSI C++ language definition. It is not fully ANSI-compatible, because Microsoft wrote its C++ compiler before the ANSI definition was standardized, but it is already quite close. Unfortunately, this leads to two problems. First, ANSI C++ was developed under technical conditions of more than a decade ago, so it does not support current concepts (such as Unicode strings and generating XML documents), and some of its older syntactic structures were designed for earlier compilers (for example, the separate declaration and definition of member functions). Second, Microsoft at the same time tried to evolve C++ into a language for performing high-performance tasks on Windows, which could not be done without adding a large number of Microsoft-specific keywords and libraries to the language. The result is that on Windows the language became a very messy mixture. Just ask a C++ developer how many string types are defined this way: char*, LPTSTR, CString (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.

Now we enter the .NET era: a new environment that has brought new extensions to both languages. Microsoft added many Microsoft-specific keywords to C++ and evolved VB into VB.NET, retaining some basic VB syntax but designing it completely differently.
From a practical application perspective, VB.NET is a new language. Here we consider Visual C# .NET. Microsoft describes C# as a simple, modern, object-oriented, type-safe programming language derived from C and C++. Most independent commentators would say "derived from C, C++, and Java" instead. C# is very similar to C++ and Java. It uses braces ({}) to mark blocks of code, and semicolons to separate statements. The first impression of C# code is that it is very similar to C++ or Java code. Behind these superficial similarities, however, C# is much easier to learn than C++, though harder than Java. Its design is adapted to modern development tools more than that of the other languages. It combines Visual Basic's ease of use with the high performance and low-level memory access of C++. C# includes the following features:

●Full support for classes and object-oriented programming, including interfaces, inheritance, virtual functions, and operator overloading.
●A complete, consistent set of basic types.
●Built-in support for automatically generating XML documentation descriptions.
●Automatic cleanup of dynamically allocated memory.
●Classes or methods can be marked with user-defined attributes. This can be used for documentation purposes and has a certain impact on compilation (for example, marking a method to compile only when debugging).
●Full access to the .NET base class library and easy access to the Windows API.
●Pointers and direct memory access are available, although the C# language can access memory without them.
●Support for properties and events in VB style.
●By changing compiler options, the code can be compiled into a component that other code calls in the same way as an ActiveX control (COM component).
●C# can be used to write dynamic Web pages and XML Web services.

It should be noted that most of these features are also available in VB.NET and Managed C++. But since C# used .NET from the beginning, its support for .NET features is not only complete but also wrapped in a more suitable syntax than the other languages offer. The C# language itself is very similar to Java, but there are some improvements, because Java was not designed for use in a .NET environment.

Before ending this topic, we must also point out two limitations of C#. One is that the language is not suited to writing time-critical or extremely high-performance code, such as a loop that runs 1000 or 1050 times, immediately clearing the resources it occupies when they are not needed. In this regard, C++ may still be the best of all low-level languages. The other is that C# lacks certain key functions needed by extremely high-performance applications, such as guaranteed inlining and destructors at specific points in the code. However, such applications are very few.

Chinese translation (beginning): C# Programming Language Overview. Author: Barnett M. 1. History of C, C++, C#: The C# programming language is built on the spirit of the C and C++ programming languages.

Microcontroller English Literature with Translation

Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract

With the dearth of dedicated radiation hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from the NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms: Single Event Effects, Hardened-By-Design, microcontroller, radiation effects.

I. INTRODUCTION

NASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercial state-of-the-art technologies, NASA's performance of this task is quite challenging. One method of alleviating this is the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD).

Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely, cost-effective, and reliable manner. However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?

II. TESTING HBD DEVICES CONSIDERATIONS

Test methodologies in the United States exist to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD-883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) tests are required for device validation. So what is unique to HBD devices?

As opposed to a "regular" commercial-off-the-shelf (COTS) device or application specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how validated the design library is, as opposed to determining the device hardness. That is, by using test chips, can we "qualify" a future device using the same library?

Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A's library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked.

How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD was relying on the inherent radiation hardness of a process, some of the tests (like SEL in the earlier example) may be waived.

Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3V, is the data applicable to a 100 MHz operating frequency at 2.5V?
Dynamic considerations (i.e., non-static operation) include the propagated effects of Single Event Transients (SETs). These can be a greater concern at higher frequencies.

The point of these considerations is that the design library must be known, the coverage used during testing must be known, the test application must be thoroughly understood, and the characteristics of the foundry must be known. If all of these are applicable or have been validated by the test chip, then no testing may be necessary. A task within NASA's Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.

III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLER

With their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DOD system designs. There are existing NASA and DoD programs that are doing technology development to provide HBD. Microcontrollers are one such vehicle that is being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontroller being developed by Mission Research Corporation (MRC) and the one from IAμE (the focus of this study). As these HBD technologies become available, validation of the technology in the natural space radiation environment is required before NASA can use it in spaceflight systems.

The 8051 microcontroller is an industry standard architecture that has broad acceptance, wide-ranging applications, and development tools available. There are numerous commercial vendors that supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different technologies for hardening. MRC's approach is to use temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra low power, plus layout and architecture HBD design rules, to achieve its results. These are fundamentally different from the approach of Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation-hardened 8051, which built its 8051 microcontroller using radiation-hardened processes. This broad range of technology within one device structure makes the 8051 an ideal vehicle for performing this technology evaluation.

The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes: the standard 8051 commercial device from Intel, and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost-benefit, performance, and reliability trade study can be done.

In performing the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was used to optimize the testing so as to obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to test these structures more efficiently in the future.

IV. TEST DEVICES

Three devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated.
The other two devices are commercial 8051s manufactured by Intel and Dallas Semiconductor, respectively.

The Intel devices are the ROMless, CMOS version of the classic MCS-51 8052 microcontroller. They are rated for operation at +5V, over a temperature range of 0 to 70 °C, and at clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel's P629.0 CHMOS III-E process.

The Dallas Semiconductor devices are similar in that they are also ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 Volts over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power-fail reset, dual data pointers, and variable-speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability roughly 2.5 times greater (faster) than the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized, in order to maximize the similarity between the Dallas and Intel test codes.

The CULPRiT technology device is a version of the MCS-51-family-compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface to conventional higher-voltage parts. The CULPRiT C8051 therefore requires two separate supply voltages: the 500 mV core supply and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction-set compatible with the MCS-51 family.

V. TEST HARDWARE
The 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from the DUT itself, the other components of the DUT computer were removed from the immediate area of the irradiation beam.

A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all other components required to complete the DUT Computer, including some that are nominally unnecessary in some designs (such as external RAM, external ROM, and an address latch). The DUT Computer and the Test Control Computer were connected via a serial cable, and communications were established between the two by the Controller (which runs custom-designed serial interface software). This Controller software allowed commanding of the DUT, downloading of DUT code to the DUT, and real-time error collection from the DUT during and after irradiation; a minimal sketch of such a controller loop is given below. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored via an oscilloscope. The power supply was monitored to provide an indication of latchup.
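The actual controller software is not reproduced in the paper. The following C sketch is only an illustration of the kind of loop such a test controller might run; the helper functions, the timeout value, and all names are assumptions made for this illustration, not the real NASA test software.

#include <stdio.h>
#include <time.h>

/* Assumed platform-specific helpers (implementations not shown): */
int  serial_read_byte(void);                 /* returns -1 if no byte is waiting   */
void serial_write(const void *buf, int len); /* send commands or test code to DUT  */
void force_reset_remap(void);                /* reboot DUT into Boot/Serial Loader */
void log_byte(int b);                        /* timestamp and store received data  */

#define TELEMETRY_TIMEOUT_S 2  /* assumed: longer than one telemetry period */

void run_test(const unsigned char *test_code, int len)
{
    time_t last_telemetry = time(NULL);

    serial_write(test_code, len);  /* download test code to Program Code RAM */
    for (;;) {
        int b = serial_read_byte();
        if (b >= 0) {
            last_telemetry = time(NULL);  /* DUT is still alive */
            log_byte(b);                  /* telemetry or an error report */
        } else if (time(NULL) - last_telemetry > TELEMETRY_TIMEOUT_S) {
            /* The periodic report ceased: treat it as a single event
               functional interrupt (SEFI; see Test Methodology below). */
            printf("SEFI detected; forcing reset/remap\n");
            force_reset_remap();
            return;
        }
    }
}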
VI. TEST SOFTWARE
The 8051 test software concept is straightforward. It was designed as a modular series of small test programs, each exercising a specific part of the DUT. Since each test was standalone, the tests were loaded independently of each other for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during a test and helped pinpoint the location of errors that occurred during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT computer. In this way, individual tests could be modified at any time without the necessity of burning PROMs. Additional tests could also be developed and added without impacting the overall test design. The only permanent code resident on the DUT was the boot code and serial code loader routines that established communications between the controller PC and the DUT.

All test programs implemented:
• An external Universal Asynchronous Receiver and Transmitter (UART) for transmission of error information and communication with the controller computer.
• An external real-time clock for time-tagging data errors.
• A watchdog routine designed to provide visual verification of 8051 health and to restart the test code if necessary.
• A "foul-up" routine to reset the program counter if it wanders out of code space.
• An external telemetry data storage memory to provide backup of data in the event of an interruption in data transmission.

A brief description of each software test is given below. Note that for each test, the returned telemetry (including time tag) was sent to both the test controller and the telemetry memory, giving the highest reliability that all data was captured.

Interrupt – This test used 4 of the 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator, which was periodically compared to a known value. Unexpected values were transmitted with register information.

Logic – This test performed a series of logic and math computations and provided three types of error identification: 1) addition/subtraction, 2) logic, and 3) multiplication/division. All miscompares between computations and expected results were transmitted with other relevant register information.

Memory – This test loaded internal data memory at locations D:0x20 through D:0xFF (or D:0x20 through D:0x80 for the CULPRiT DUT), indirectly, with an 0x55 pattern. Compares were performed continuously; miscompares were corrected while error information and register values were transmitted. (An illustrative sketch of such a test follows this list of test descriptions.)

Program Counter – The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values, and miscompares were transmitted along with relevant register information.

Registers – This test loaded each of the four banks (0, 1, 2, 3) of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. The general-purpose register banks were then compared with their expected values. All miscompares were corrected, and error information was transmitted.

Special Function Registers (SFR) – This test used learned static values of 12 of the 21 available SFRs and then constantly compared the learned value with the current one. Miscompares were reloaded with the learned value, and error information was transmitted.

Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or in the stack pointer itself and were transmitted with relevant register information.
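The flight test code itself is not listed in the paper. As an illustration only, the memory test described above might look something like the following 8051-style C. The idata address-space keyword follows common 8051 compilers such as Keil C51, and report_error() stands in for the UART/telemetry reporting path; both are assumptions for this sketch, not the actual test code.

#define MEM_START 0x20
#define MEM_END   0xFF    /* 0x80 for the CULPRiT DUT */
#define PATTERN   0x55

/* Assumed helper: formats an error report (address, bad value, register
   state) and sends it out the UART and to the telemetry memory. */
void report_error(unsigned char addr, unsigned char bad_value);

void memory_test(void)
{
    unsigned char addr;

    /* Load the 0x55 pattern once, via indirect (idata) addressing. */
    for (addr = MEM_START; ; addr++) {
        *(unsigned char volatile idata *)addr = PATTERN;
        if (addr == MEM_END) break;      /* avoids 8-bit wraparound at 0xFF */
    }

    /* Then compare continuously, correcting and reporting miscompares. */
    for (;;) {
        for (addr = MEM_START; ; addr++) {
            unsigned char v = *(unsigned char volatile idata *)addr;
            if (v != PATTERN) {
                report_error(addr, v);                            /* report the upset */
                *(unsigned char volatile idata *)addr = PATTERN;  /* scrub it back    */
            }
            if (addr == MEM_END) break;
        }
    }
}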
VII. TEST METHODOLOGY
The DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. This code initialized the DUT Computer and its interface through a serial connection to the controlling computer, the "Test Controller". The DUT Computer downloaded Test Code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: it held the DUT reset line active for some time (~10 ms), and it remapped the Test Code residing in the Program Code RAM to address 0x0000 (the EPROM was no longer accessible in the DUT Computer's memory space). Upon waking from the reset, the DUT Computer again booted by executing the instruction code at address 0x0000, except this time that code was not the Boot/Serial Loader code but the Test Code.

The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI), either the DUT Computer itself or the Test Controller could terminate the test and allow the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into Boot/Serial Loader code and then executed the post-test functions. During any test of the DUT, the DUT exercised a portion of its functionality (e.g., register operations, internal RAM checks, or timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this report ceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that did not interrupt functionality (e.g., a data register miscompare), it sent a more lengthy report through the serial port describing that error and continued with the test.

VIII. DISCUSSION
A. Single Event Latchup
The main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 volts should be below the holding voltage required for latchup to occur. In addition, the cell library used also incorporates the heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV-cm2/mg. This is true in circuits operating at 5, 3.3, and 2.5 Volts, as well as in the 0.5 Volt CULPRiT circuits. In one case, a 5 Volt circuit fabricated on non-epitaxial wafers even exhibited such SEL immunity.

B. Single Event Upset
The primary storage structure used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single-upset-node assumption, the SERT cell is expected to be completely immune to SEUs occurring internal to the memory cell itself. Obviously, other mechanisms are at work. The CULPRiT 8051 results reported here are quite similar to results obtained with a CULPRiT CCSDS lossless compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051.

With the CULPRiT USES, the SEU cross-section data [7] was taken as a function of frequency at two LET values, 37.6 and 58.5 MeV-cm2/mg. In both cases the data fit well to a linear model in which cross section is proportional to clock frequency.
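Written out, the linear model just described takes the form (a reconstruction from the description above, not an equation reproduced from [7]):

    σ(f) = σ0 + k·f

where σ(f) is the measured SEU cross section at clock frequency f, the slope term k·f is the frequency-dependent contribution of SETs captured from the combinational logic, and the intercept σ0 is the zero-frequency ("dc-bias") component attributable to upsets inside the storage cells themselves. In this form, the two LET values discussed next correspond to σ0 ≈ 0 at LET 37.6 and σ0 > 0 at LET 58.5 MeV-cm2/mg.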
In the LET 37.6 case, the zero-frequency intercept occurred essentially at the zero cross-section point, indicating that virtually all of these SEUs are captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency-dependent) component sits on top of a "dc-bias" component; presumably a second upset mechanism occurs internal to the SERT cells only above a second, higher LET threshold.

The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault-tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of the combinational logic (referred to as "dual rail design") such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be occurring. It is possible that a single particle strike places an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinational logic paths may have placed dual sensitive nodes close enough together to be struck by a single particle.

At this point, the theory for the CULPRiT SEU response is that at an LET of about 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinational logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. Further, the second SEU mechanism that starts at an LET of about 40-60 occurs when the charge-collection disturbance cloud gets large enough to effectively upset multiple redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 Volts, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. The fact that the per-bit memory upset cross section for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, also indicates that the cell itself has become sensitive to upset.

IX. SUMMARY
A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross-section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory based on the CULPRiT technology is presented that explains these results.

This paper also demonstrates the test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip), and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS
The authors would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).

Artificial Intelligence: English Literature Original and Translation

Appendix 4: Original English Text

Artificial Intelligence

The term "artificial intelligence" was first put forward at the Dartmouth conference in 1956. Since then, researchers have developed many theories and principles, and the concept of artificial intelligence has expanded accordingly. Artificial intelligence is a challenging science: whoever works on it must understand computer science, psychology, and philosophy. It encompasses a wide range of fields, such as machine learning and computer vision. On the whole, one main goal of artificial intelligence research is to enable machines to perform complex tasks that would normally require human intelligence. However, what counts as "complex" differs across times and people. For example, heavy scientific and engineering calculation was once supposed to be carried out by the human brain; today computers can not only complete such calculation, but do so faster and more accurately than the brain, so people no longer regard it as a "complex task requiring human intelligence." The concrete definition of complex work changes with the times and with technical progress, and the specific goals of artificial intelligence develop accordingly: on the one hand the field keeps making new progress, and on the other hand it keeps turning toward more meaningful and more difficult targets. At present, the main material basis for studying artificial intelligence, and the machine on which it is realized, is the computer, so the history of artificial intelligence is bound up with the history of computer science and technology. Besides computer science, artificial intelligence also involves information theory, cybernetics, automation, bionics, biology, psychology, logic, linguistics, medicine, philosophy, and other disciplines. Artificial intelligence research includes: knowledge representation, automatic reasoning and search methods, machine learning, knowledge acquisition, knowledge processing systems, natural language processing, computer vision, intelligent robots, automatic programming, and so on.

Practical applications include: fingerprint identification, face recognition, retina identification, iris identification, palm-print identification, expert systems, intelligent search, theorem proving, game playing, automatic programming, and aerospace applications.

As a subject, artificial intelligence is an edge discipline, belonging to both natural science and social science.

Disciplines involved: philosophy and cognitive science, mathematics, neurophysiology, psychology, computer science, information theory, cybernetics, indeterminacy theory, and bionics.

Research categories: natural language processing, knowledge representation, intelligent search, reasoning, planning, machine learning, knowledge acquisition, combinatorial scheduling problems, perception, pattern recognition, logic program design, soft computing, management of imprecision and uncertainty, artificial life, neural networks, complex systems, and genetic algorithms modeled on human thinking.

Applications: intelligent control, robotics, language and image understanding, genetic programming, and robot factories.

Safety problems
Artificial intelligence is still being studied, but some scholars think that letting computers have IQ is very dangerous: they might turn against humanity.
Such hidden dangers have been depicted in many movies.

The definition of artificial intelligence
The definition of artificial intelligence can be divided into two parts: "artificial" and "intelligence". "Artificial" is the easier part to understand, though it is still debated. Sometimes we must consider what people are able to make, or whether human intelligence is high enough to create artificial intelligence, and so on. But generally speaking, an "artificial system" is a man-made system in the usual sense.

What "intelligence" is raises many more problems. It involves other notions such as consciousness, the self, and thought (including unconscious thought). The only intelligence we actually know is human intelligence, which is the universally accepted view; but our understanding of our own intelligence is very limited, and the elements that constitute human intelligence have yet to be identified, so it is difficult to define exactly what the "intelligence" is that we are "artificially" manufacturing. Research on artificial intelligence therefore often involves research on intelligence itself, and study of animal intelligence or of other artificial systems is widely considered to be related to the study of artificial intelligence.

Artificial intelligence currently receives extensive attention in the computer field and is applied in robots, economic and political decision-making, control systems, and simulation systems. In other areas, too, it plays an indispensable role.

Professor Nilsson of the Stanford University artificial intelligence research center gave artificial intelligence the following definition: "Artificial intelligence is the discipline concerned with knowledge — how to represent knowledge, how to acquire knowledge, and how to use it scientifically." Professor Winston of MIT, by contrast, held that "artificial intelligence is the study of how to make computers do intelligent work that, at present, only people can do." These statements reflect the basic ideas and basic content of the discipline: artificial intelligence studies the laws of human intelligent activity and how to build artificial systems with a certain degree of intelligence; it studies how to make computers complete work that previously required human intelligence, that is, the basic theory, methods, and techniques for using computer hardware and software to simulate certain intelligent human behaviors.

Artificial intelligence is a branch of computer science. Since the 1970s it has been known as one of the three advanced technologies of the age (space technology, energy technology, and artificial intelligence), and it is also considered one of the three advanced technologies of the 21st century (genetic engineering, nanoscience, and artificial intelligence). In the last three decades it has developed rapidly, has been widely applied in many fields, and has achieved great results; artificial intelligence has gradually become an independent branch of study, forming a system of its own in both theory and practice. Its research results are gradually being integrated into people's lives, creating more benefits for mankind.

Artificial intelligence is the study of using computers to simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), including the principles by which computers realize intelligence and the building of computers that behave similarly to human intelligence, so that computers can be applied at a higher level. Artificial intelligence involves computer science, philosophy, linguistics, psychology, and more.
Its scope covers almost all disciplines of natural science and social science, extending far beyond computer science alone. Artificial intelligence and the science of thinking stand in the relation of practice to theory: artificial intelligence sits at the technological application level of the science of thinking, as one of its applications. From the viewpoint of thinking, artificial intelligence cannot be limited to logical thinking alone; thinking in images and inspirational thinking must also be considered to promote breakthroughs in the field. Mathematics is often thought of as the foundation of a variety of basic sciences; as mathematics enters fields such as language and thought, the discipline of artificial intelligence must also borrow mathematical tools. Mathematical logic, fuzzy mathematics, and the like have entered the scope of artificial intelligence, where the two fields promote each other and develop faster together.

A brief history of artificial intelligence
Artificial intelligence can be traced back to the legends of ancient Egypt, but it was only with the development of computer technology from 1941 onward that creating machine intelligence finally became possible. The term "artificial intelligence" was first proposed at the Dartmouth conference in 1956. Since then, researchers have developed many theories and principles, and the concept of artificial intelligence has expanded with them. Over its not-so-long history, artificial intelligence has developed more slowly than expected, but it has kept advancing; forty years on, many AI programs have appeared, and they have influenced the development of other technologies as well. The emergence of AI programs has created immeasurable wealth for society and promoted the development of human civilization.

The computer era
In 1941 an invention appeared that revolutionized every aspect of information storage and processing. This invention, developed in both the United States and Germany, was the first electronic computer. The early machines took up several large, air-conditioned rooms and were a programmer's nightmare: running a single program required setting up thousands of connections by hand. Improvements made after 1949 produced stored-program computers, which made entering programs much easier, and advances in the theory of computer science eventually led to computer artificial intelligence. The electronic computer's way of processing data provided a medium through which artificial intelligence could one day be realized.

The beginnings of AI
Although the computer provided the necessary technical basis for AI, it was not until the early 1950s that people noticed the connection between machine and human intelligence. Norbert Wiener was an American who studied the theory of feedback. The most familiar example of feedback control is the thermostat: it compares the collected room temperature with the desired temperature and responds by turning a small heater on or off, thus controlling the ambient temperature. The importance of Wiener's study of feedback loops lies in his conclusion that, in theory, all intelligent activity is the result of feedback mechanisms — and feedback mechanisms can be simulated by machines. This finding shaped the early development of AI.

In 1955, Newell and Simon developed a program called "The Logic Theorist", considered by many to be the first AI program. It represented each problem as a tree model and then attempted to solve the problem by choosing the branch most likely to yield the correct conclusion.
"logic" to the public and the AI expert research field effect makes it AI developing an important milestone in 1956, is considered to be the father of artificial intelligence of John McCarthy organized a society, will be a lot of interest machine intelligence experts and scholars together for a month. He asked them to Vermont Dartmouth in "artificial intelligence research in summer." since then, this area was named "artificial intelligence" although Dartmouth learn not very successful, but it was the founder of the centralized and AI AI research for later laid a foundation.After the meeting of Dartmouth, AI research started seven years. Although the rapid development of field haven't define some of the ideas, meeting has been reconsidered and Carnegie Mellon university. And MIT began to build AI research center is confronted with new challenges. Research needs to establish the: more effective to solve the problem of the system, such as "logic" in reducing search; expert There is the establishment of the system can be self learning.In 1957, "a new program general problem-solving machine" first version was tested. This program is by the same logic "experts" group development. The GPS expanded Wiener feedback principle, can solve many common problem. Two years later, IBM has established a grind investigate group Herbert AI. Gelerneter spent three years to make a geometric theorem of solutions of the program. This achievement was a sensation.When more and more programs, McCarthy busy emerge in the history of an AI. 1958 McCarthy announced his new fruit: LISP until today still LISP language. In. "" mean" LISP list processing ", it quickly adopted for most AI developers.In 1963 MIT from the United States government got a pen is 22millions dollars funding for research funding. The machine auxiliary recognition from the defense advanced research program, have guaranteed in the technological progress on this plan ahead of the Soviet union. Attracted worldwide computer scientists, accelerate the pace of development of AIresearch.Large programAfter years of program. It appeared a famous called "SHRDLU." SHRDLU "is" the tiny part of the world "project, including the world (for example, only limited quantity of geometrical form of research and programming). In the MIT leadership of Minsky Marvin by researchers found, facing the object, the small computer programs can solve the problem space and logic. Other as in the late 1960's STUDENT", "can solve algebraic problems," SIR "can understand the simple English sentence. These procedures for handling the language understanding and logic.In the 1970s another expert system. An expert system is a intelligent computer program system, and its internal contains a lot of certain areas of experience and knowledge with expert level, can use the human experts' knowledge and methods to solve the problems to deal with this problem domain. That is, the expert system is a specialized knowledge and experience of the program system. Progress is the expert system could predict under certain conditions, the probability of a solution for the computer already has. Great capacity, expert systems possible from the data of expert system. It is widely used in the market. Ten years, expert system used in stock, advance help doctors diagnose diseases, and determine the position of mineral instructions miners. 
All of this became possible because of expert systems' capacity to store rules and information.

In the 1970s many new methods were also developed for AI, notably Marvin Minsky's theory of frames. Another new theory was David Marr's account of machine vision: for example, how an image can be recognized from basic information about shading, shape, color, texture, and edges. By analyzing this information, a program can infer what the image might be. Another result of the same period was the language PROLOG, created in 1972. In the 1980s, AI progressed more rapidly and moved further into business. In 1986, US sales of AI-related software and hardware reached $425 million. Expert systems were especially in demand for their utility. Companies such as Digital Equipment Corporation used the XCON expert system to configure VAX mainframes; DuPont, General Motors, and Boeing relied heavily on expert systems; and companies that produced expert-system development software, such as Teknowledge and Intellicorp, were established. In order to find and correct mistakes, auxiliary expert systems were designed on top of existing ones, along with systems such as the TVC expert system that taught users how to operate other systems.

From the lab to daily life
People began to feel the influence of computer technology and artificial intelligence directly. Computer technology no longer belonged only to a group of researchers in the laboratory: personal computers and numerous technical magazines put computing in front of ordinary people, and organizations such as the American Association for Artificial Intelligence were founded. Because of the demand this created, AI researchers were drawn into private companies in a boom: more than 150 companies, among them DEC (which employed more than 700 people engaged in AI research), together spent about a billion dollars on internal AI teams.

Other areas of AI entered the market in the 1980s as well. One was machine vision, building on the achievements of Marr and Minsky. Cameras and computers came into use on production lines for quality control. Although still very humble, these systems were able to distinguish objects of different shapes. By 1985 more than 100 companies in America were producing machine vision systems, with sales totaling about $80 million.

But the 1980s were not all good years for the AI industry. In 1986-87, demand for AI systems fell, and the industry lost nearly five hundred million dollars. Teknowledge and Intellicorp together lost more than $6 million, about one-third of their profits; the huge losses forced many research leaders to cut funding. Another disappointment was the so-called "smart truck" supported by the Defense Advanced Research Projects Agency, a project intended to develop a robot that could perform many battlefield tasks. Because of the project's defects and its hopeless prospects of success, the Pentagon stopped funding it.

Despite these setbacks, AI slowly continued to develop new technologies. Fuzzy logic, developed in Japan and the United States, can make decisions under conditions that cannot be precisely determined; and neural networks came to be regarded as a possible approach to realizing artificial intelligence. In all, the 1980s introduced AI to the market and showed its practical value; it is sure to be a key to the 21st century. AI technology passed its inspection by war in the "Desert Storm" operation, where it was used in missile systems, warning displays, and other advanced weapons. AI technology has also entered the family home, and intelligent computers increasingly attract public interest.
The emergence of networked games has enriched people's lives. Application software for the Macintosh and IBM PCs, such as voice and character recognition, became available for purchase, and AI techniques such as fuzzy logic simplified camera equipment. Growing demand for AI-related technology keeps driving new progress. In a word, artificial intelligence has changed our lives, and will inevitably continue to do so.

Appendix 3: Translated Text
Artificial Intelligence
The term "artificial intelligence" was first put forward at the Dartmouth conference in 1956.

English Literature and Chinese Translation: ASP.NET Overview

ASP.NET Overview
ASP.NET is a unified Web development model that includes the services necessary for you to build enterprise-class Web applications with a minimum of coding. ASP.NET is part of the .NET Framework, and when coding ASP.NET applications you have access to classes in the .NET Framework. You can code your applications in any language compatible with the common language runtime (CLR), including Microsoft Visual Basic and C#. These languages enable you to develop ASP.NET applications that benefit from the common language runtime, type safety, inheritance, and so on.

If you want to try ASP.NET, you can install Visual Web Developer Express using the Microsoft Web Platform Installer, a free tool that makes it simple to download, install, and service components of the Microsoft Web Platform. These components include Visual Web Developer Express, Internet Information Services (IIS), SQL Server Express, and the .NET Framework — all tools that you use to create ASP.NET Web applications. You can also use the Microsoft Web Platform Installer to install open-source ASP.NET and PHP Web applications.

Visual Web Developer
Visual Web Developer is a full-featured development environment for creating ASP.NET Web applications. It provides an ideal environment in which to build Web sites and then publish them to a hosting site. Using the development tools in Visual Web Developer, you can develop ASP.NET Web pages on your own computer. Visual Web Developer includes a local Web server that provides all the features you need to test and debug ASP.NET Web pages, without requiring Internet Information Services (IIS) to be installed.

When your site is ready, you can publish it to the host computer using the built-in Copy Web tool, which transfers your files when you are ready to share them with others. Alternatively, you can precompile and deploy a Web site by using the Build Web Site command, which runs the compiler over the entire Web site (not just the code files) and produces a Web site layout that you can deploy to a production server.

Finally, you can take advantage of the built-in support for File Transfer Protocol (FTP). Using the FTP capabilities of Visual Web Developer, you can connect directly to the host computer and then create and edit files on the server.

Web Sites and Web Application Projects
Using Visual Studio tools, you can create different types of ASP.NET projects, including Web sites, Web applications, Web services, and AJAX server controls. There is a difference between Web site projects and Web application projects. Some features work only with Web application projects, such as ASP.NET MVC and certain tools for automating Web deployment. Other features, such as Dynamic Data, work with both Web sites and Web application projects.

Page and Controls Framework
The ASP.NET page and controls framework is a programming framework that runs on a Web server to dynamically produce and render ASP.NET Web pages. ASP.NET Web pages can be requested from any browser or client device, and ASP.NET renders markup (such as HTML) to the requesting browser.
As a rule, you can use the same page for multiple browsers, because ASP.NET renders the appropriate markup for the browser making the request. However, you can also design your ASP.NET Web page to target a specific browser and take advantage of the features of that browser.

ASP.NET Web pages are completely object-oriented. Within ASP.NET Web pages you can work with HTML elements using properties, methods, and events. The ASP.NET page framework removes the implementation details of the separation of client and server inherent in Web-based applications by presenting a unified model for responding to client events in code that runs at the server. The framework also automatically maintains the state of a page and the controls on that page during the page processing life cycle.

The ASP.NET page and controls framework also enables you to encapsulate common UI functionality in easy-to-use, reusable controls. Controls are written once, can be used in many pages, and are integrated into the ASP.NET Web page in which they are placed during rendering.

The page and controls framework also provides features to control the overall look and feel of your Web site via themes and skins. You can define themes and skins and then apply them at a page level or at a control level.

In addition to themes, you can define master pages that you use to create a consistent layout for the pages in your application. A single master page defines the layout and standard behavior that you want for all the pages (or a group of pages) in your application. You can then create individual content pages that contain the page-specific content you want to display. When users request the content pages, they merge with the master page to produce output that combines the layout of the master page with the content from the content page.

The ASP.NET page framework also enables you to define the pattern for URLs that will be used in your site. This helps with search engine optimization (SEO) and makes URLs more user-friendly. The page and control framework is designed to generate HTML that conforms to accessibility guidelines.

ASP.NET Compiler
All ASP.NET code is compiled, which enables strong typing, performance optimizations, and early binding, among other benefits. Once the code has been compiled, the common language runtime further compiles ASP.NET code to native code, providing improved performance. ASP.NET includes a compiler that will compile all your application components, including pages and controls, into an assembly that the ASP.NET hosting environment can then use to service user requests.

Security Infrastructure
In addition to the security features of .NET, ASP.NET provides an advanced security infrastructure for authenticating and authorizing user access as well as performing other security-related tasks. You can authenticate users using Windows authentication supplied by IIS, or you can manage authentication using your own user database via ASP.NET forms authentication and ASP.NET membership. Additionally, you can manage the authorization to the capabilities and information of your Web application using Windows groups or your own custom role database via ASP.NET roles. You can easily remove, add to, or replace these schemes depending upon the needs of your application.

ASP.NET always runs with a particular Windows identity, so you can secure your application using Windows capabilities such as NTFS Access Control Lists (ACLs), database permissions, and so on.

State-Management Facilities
ASP.NET provides intrinsic state-management functionality that enables you to store information between page requests, such as customer information or the contents of a shopping cart.
You can save and manage application-specific, session-specific, page-specific, user-specific, and developer-defined information. This information can be independent of any controls on the page. ASP.NET also offers distributed state facilities, which enable you to manage state information across multiple instances of the same application on one computer or on several computers.

ASP.NET Overview (Chinese translation)
ASP.NET is a unified Web development model that includes the services necessary for you to build enterprise-class Web applications with a minimum of code.


[Original Text] COMPUTER VIRUSES

What are computer viruses?
According to Fred Cohen's well-known definition, a computer virus is a computer program that can infect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself. Note that a program does not have to perform outright damage (such as deleting or corrupting files) in order to be called a "virus". However, Cohen uses the terms within his definition (e.g. "program" and "modify") a bit differently from the way most anti-virus researchers use them, and classifies as viruses some things which most of us would not consider viruses.

Computer viruses are bits of code that damage or erase information, files, or software programs in your computer. Much like the viruses that infect humans, computer viruses can spread, and your computer can catch one when you download an infected file from the Internet or copy an infected file from a diskette. Once the virus is embedded in your computer's files, it can immediately start to damage or destroy information, or it can wait for a particular date or event to trigger its activity.

What are the main types of viruses?
Generally, there are two main classes of viruses. The first class consists of the file infectors, which attach themselves to ordinary program files. These usually infect arbitrary .COM and/or .EXE programs, though some can infect any program for which execution is requested, such as .SYS, .OVL, .PRG, and .MNU files.

File infectors can be either direct-action or resident. A direct-action virus selects one or more other programs to infect each time the program that contains it is executed. A resident virus hides itself in memory and thereafter infects other programs when they are executed (as in the case of the Jerusalem virus) or when certain other conditions are fulfilled. The Vienna virus is an example of a direct-action virus; most other viruses are resident.

The second class consists of the system or boot-record infectors: viruses that infect executable code found in certain system areas on a disk that are not ordinary files. On DOS systems, there are ordinary boot-sector viruses, which infect the DOS boot sector on diskettes. Examples include Brain, Stoned, Empire, Azusa, and Michelangelo. Such viruses are always resident.

Finally, a few viruses are able to infect both files and boot records (the Tequila virus is one example). These are often called "multipartite" viruses, though that name has drawn criticism; another name is "boot-and-file" virus.

File system or cluster viruses (e.g. Dir-II) are those that modify directory table entries so that the virus is loaded and executed before the desired program is. Note that the program itself is not physically altered; only the directory entry is. Some consider these infectors to be a third category of viruses, while others consider them a sub-category of the file infectors.

What are macro viruses?
Many applications provide the functionality to create macros. A macro is a series of commands that performs some application-specific task. Macros are designed to make life easier, for example by performing everyday tasks like text formatting or spreadsheet calculations.

Macros can be saved as a series of keystrokes (the application records the keys you press), or they can be written in special macro languages (usually based on real programming languages like C and BASIC). Modern applications combine both approaches, and their advanced macro languages are as complex as general-purpose programming languages.
When the macro language allows files to be modified, it becomes possible to create macros that copy themselves from one file to another. Such self-replicating macros are called macro viruses.

Most macro viruses run under Word for Windows. Since this is a very popular word processor, it provides an effective means for viruses to spread. Most macro viruses are written in the macro language WordBasic, which is based on the good old BASIC programming language but has many (hundreds of) extensions (for example, to deal with documents: edit, replace a string, obtain the name of the current document, open a new window, move the cursor, and so on).

What is a Trojan horse program?
A type of program that is often confused with viruses is the "Trojan horse" program. This is not a virus, but simply a program (often harmful) that pretends to be something else.

For example, you might download what you think is a new game, but when you run it, it deletes files on your hard drive. Or the third time you start the game, the program e-mails your saved passwords to another person.

Note: simply downloading a file to your computer won't activate a virus or Trojan horse; you have to execute the code in the file to trigger it. This could mean running a program file, or opening a Word/Excel document in a program (such as Word or Excel) that can execute any macros in the document.

What kind of files can spread viruses?
Viruses have the potential to infect any type of executable code, not just the files that are commonly called "program files". For example, some viruses infect executable code in the boot sector of floppy disks or in system areas of hard drives. Another type of virus, known as a "macro" virus, can infect word processing and spreadsheet documents that use macros. And it's possible for HTML documents containing JavaScript or other types of executable code to spread viruses or other malicious code.

Since virus code must be executed to have any effect, files that the computer treats as pure data are safe. This includes graphics and sound files such as .gif, .jpg, .mp3, and .wav, as well as plain text in .txt files. For example, just viewing picture files won't infect your computer with a virus. The virus code has to be in a form, such as an .exe program file or a Word .doc file, that the computer will actually try to execute.

How do viruses spread?
The methodology of virus infection was pretty straightforward when the first computer viruses, such as Lehigh and Jerusalem, started appearing. A virus is a small piece of computer code — usually from a few hundred bytes to a few kilobytes — that can do, well, something unexpected. Such viruses attach themselves to executable files (programs), so that the infected program, before proceeding with whatever tasks it is supposed to do, calls the virus code. One of the simplest ways to accomplish that is to append the virus code to the end of the file and insert a command at the beginning of the program file that jumps right to the beginning of the virus code. After the virus finishes, it jumps back to the point of origination in the program. Such viruses were very popular in the late eighties. The earlier ones only knew how to attach themselves to .COM files, since the structure of a .COM file is much simpler than that of an .EXE file — yet another executable file format invented for the MS-DOS operating system. This "append and jump" pattern leaves a recognizable footprint, as the sketch below illustrates.
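To make the append-and-jump footprint concrete, here is a small, strictly read-only C sketch written for this article (it is not production scanner code, and the 4096-byte threshold is an arbitrary illustrative choice). It reads the first three bytes of a .COM file and, if they form a near-JMP instruction (opcode 0xE9), reports where that jump lands relative to the end of the file; an entry jump landing near the end of a file is the footprint an appending infector leaves behind.

#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *f;
    unsigned char hdr[3];
    long size, target;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file.com\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    if (fread(hdr, 1, 3, f) != 3) { fclose(f); return 1; }
    fseek(f, 0, SEEK_END);
    size = ftell(f);
    fclose(f);

    if (hdr[0] == 0xE9) {  /* near JMP with a 16-bit signed displacement */
        /* The displacement is relative to the instruction that follows
           the 3-byte JMP, so the jump target's file offset is 3 + disp. */
        target = 3 + (long)(short)(hdr[1] | (hdr[2] << 8));
        printf("entry JMP lands %ld bytes into a %ld-byte file\n", target, size);
        if (target > 0 && size - target < 4096)
            printf("jump lands near end of file - typical of appended code\n");
    } else {
        printf("file does not start with a JMP\n");
    }
    return 0;
}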
The first virus to be closely studied was the Lehigh virus. It attached itself to a file that was loaded by the system at boot time. The virus did a lot of damage to its host, so after three or four replications the host was no longer usable; for that reason, the virus never managed to escape the university network.

When you execute program code that's infected by a virus, the virus code will also run and try to infect other programs, either on the same computer or on other computers connected to it over a network. And the newly infected programs will try to infect yet more programs.

When you share a copy of an infected file with other computer users, running the file may also infect their computers; and files from those computers may spread the infection to yet more computers.

If your computer is infected with a boot-sector virus, the virus tries to write copies of itself to the system areas of floppy disks and hard disks. Then the infected floppy disks may infect other computers that boot from them, and the virus copy on the hard disk will try to infect still more floppies. Some viruses, known as "multipartite" viruses, spread both by infecting files and by infecting the boot areas of floppy disks. A sketch of one classic safeguard against boot-sector tampering follows.
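Boot-sector infectors motivated a simple defensive habit: save a known-good copy of the boot sector (rescue disks of the era did exactly this), then periodically compare it with the current one. The C sketch below compares two 512-byte sector images that are assumed to have already been saved to ordinary files; how the sectors are dumped from the disk is platform-specific and not shown, and all names here are illustrative.

#include <stdio.h>

#define SECTOR_SIZE 512

int main(int argc, char **argv)
{
    FILE *good, *now;
    int i, a, b, diffs = 0;

    if (argc != 3) {
        fprintf(stderr, "usage: %s known_good.img current.img\n", argv[0]);
        return 2;
    }
    good = fopen(argv[1], "rb");
    now  = fopen(argv[2], "rb");
    if (!good || !now) { perror("fopen"); return 2; }

    for (i = 0; i < SECTOR_SIZE; i++) {
        a = fgetc(good);
        b = fgetc(now);
        if (a == EOF || b == EOF) { fprintf(stderr, "short read\n"); return 2; }
        if (a != b && diffs++ < 8)   /* print only the first few differences */
            printf("offset 0x%03X: %02X -> %02X\n", i, a, b);
    }
    fclose(good);
    fclose(now);

    if (diffs)
        printf("%d byte(s) differ - boot sector has been modified\n", diffs);
    else
        printf("boot sector matches the saved copy\n");
    return diffs ? 1 : 0;
}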
What do viruses do to computers?
Viruses are software programs, and they can do the same things as any other program running on a computer. The actual effect of any particular virus depends on how it was programmed by the person who wrote it.

Some viruses are deliberately designed to damage files or otherwise interfere with your computer's operation, while others don't do anything but try to spread themselves around. But even the ones that just spread themselves are harmful, since they damage files and may cause other problems in the process of spreading.

Note that viruses can't do any damage to hardware: they won't melt down your CPU, burn out your hard drive, cause your monitor to explode, etc. Warnings about viruses that will physically destroy your computer are usually hoaxes, not legitimate virus warnings.

Modern viruses can exist on any system, from MS-DOS and Windows 3.1 to MacOS, UNIX, OS/2, and Windows NT. Some are harmless, though hard to catch: they might play a jingle at Christmas or reboot your computer occasionally. Others are more dangerous: they can delete or corrupt your files, format hard drives, or do something of that sort. There are some deadly ones that can spread over networks with or without a host, transmit sensitive information over the network to a third party, or even mess with financial data on-line.

What's the story on viruses and E-mail?
You can't get a virus just by reading a plain-text E-mail message or Usenet post. What you have to watch out for are encoded messages containing embedded executable code (i.e., JavaScript in an HTML message) or messages that include an executable file attachment (i.e., an encoded program file or a Word document containing macros).

In order to activate a virus or Trojan horse program, your computer has to execute some type of code. This could be a program attached to an E-mail, a Word document you downloaded from the Internet, or something received on a floppy disk. There's no special hazard in files attached to Usenet posts or E-mail messages: they're no more dangerous than any other file.

What can I do to reduce the chance of getting viruses from E-mail?
Treat any file attachment that might contain executable code as carefully as you would any other new file: save the attachment to disk and then check it with an up-to-date virus scanner before opening it.

If your E-mail or news software has the ability to automatically execute JavaScript, Word macros, or other executable code contained in or attached to a message, I strongly recommend that you disable this feature.

My personal feeling is that if an executable file shows up unexpectedly attached to an E-mail, you should delete it unless you can positively verify what it is, who it came from, and why it was sent to you.

The recent outbreak of the Melissa virus was a vivid demonstration of the need to be extremely careful when you receive E-mail with attached files or documents. Just because an E-mail appears to come from someone you trust, this does NOT mean the file is safe or that the supposed sender had anything to do with it.

Some General Tips on Avoiding Virus Infections
Install anti-virus software from a well-known, reputable company, UPDATE it regularly, and USE it regularly. New viruses come out every single day; an anti-virus program that hasn't been updated for several months will not provide much protection against current viruses.

In addition to scanning for viruses on a regular basis, install an "on access" scanner (included in most good anti-virus software packages) and configure it to start automatically each time you boot your system. This will protect your system by checking for viruses each time your computer accesses an executable file.

Virus-scan any new programs or other files that may contain executable code before you run or open them, no matter where they come from. There have been cases of commercially distributed floppy disks and CD-ROMs spreading virus infections. (A toy example of what signature scanning looks like follows these tips.)

Anti-virus programs aren't very good at detecting Trojan horse programs, so be extremely careful about opening binary files and Word/Excel documents from unknown or "dubious" sources. This includes posts in binary newsgroups, downloads from web/ftp sites that aren't well known or don't have a good reputation, and executable files unexpectedly received as attachments to E-mail.

Be extremely careful about accepting programs or other files during on-line chat sessions: this seems to be one of the more common ways that people wind up with virus or Trojan horse problems. And if any other family members (especially younger ones) use the computer, make sure they know not to accept any files while using chat.

Do regular backups. Some viruses and Trojan horse programs will erase or corrupt files on your hard drive, and a recent backup may be the only way to recover your data. Ideally, you should back up your entire system on a regular basis. If this isn't practical, at least back up files that you can't afford to lose or that would be difficult to replace: documents, bookmark files, address books, important E-mail, etc.
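As a rough illustration of what "scan before you open" means in practice, the C sketch below searches files for a fixed byte signature, the way early pattern-based scanners identified known viruses. The signature shown is a made-up placeholder, not a real virus pattern; real products use large, frequently updated signature databases and far more sophisticated matching.

#include <stdio.h>
#include <string.h>

static const unsigned char SIG[] = { 0xDE, 0xAD, 0xBE, 0xEF };  /* placeholder */

/* Returns 1 if the signature is found, 0 if not, -1 on open failure. */
int scan_file(const char *path)
{
    unsigned char buf[4096 + sizeof SIG];
    size_t n, keep = 0;
    FILE *f = fopen(path, "rb");
    if (!f) return -1;

    while ((n = fread(buf + keep, 1, sizeof buf - keep, f)) > 0) {
        size_t total = keep + n, i;
        for (i = 0; i + sizeof SIG <= total; i++)
            if (memcmp(buf + i, SIG, sizeof SIG) == 0) { fclose(f); return 1; }
        /* Carry the tail over so a signature spanning two reads is found. */
        keep = (total < sizeof SIG - 1) ? total : sizeof SIG - 1;
        memmove(buf, buf + total - keep, keep);
    }
    fclose(f);
    return 0;
}

int main(int argc, char **argv)
{
    int i;
    for (i = 1; i < argc; i++) {
        int r = scan_file(argv[i]);
        printf("%s: %s\n", argv[i],
               r < 0 ? "could not open" : r ? "SIGNATURE FOUND" : "clean");
    }
    return 0;
}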
Dealing with Virus Infections
First, keep in mind "Nick's First Law of Computer Virus Complaints": "Just because your computer is acting strangely or one of your programs doesn't work right, this does not mean that your computer has a virus." If you haven't used a good, up-to-date anti-virus program on your computer, do that first. Many problems blamed on viruses are actually caused by software configuration errors or other problems that have nothing to do with a virus.

If you do get infected by a virus, follow the directions in your anti-virus program for cleaning it. If you have backup copies of the infected files, use those to restore the files. Check the files you restore to make sure your backups weren't infected. For assistance, check the web site and support services for your anti-virus software.

Note: in general, drastic measures such as formatting your hard drive or using FDISK should be avoided. They are frequently useless at cleaning a virus infection, and may do more harm than good unless you're very knowledgeable about the effects of the particular virus you're dealing with.

[Translation] Computer Viruses
What are computer viruses? According to Fred Cohen's well-known definition, a computer virus is a computer program that can infect other computer programs by modifying them in such a way as to embed a (possibly evolved) copy of itself.
