Computer Science Foreign Literature Translation: The Origins of C, a Brief History


Computer Science Foreign Literature Translation: A Brief History of Microcomputer Development


Appendix: Foreign Literature and Translation

Progress in Computers

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term logic family came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognized as an activity in its own right. I remember that we had some difficulty in organizing a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organization, a business or a university was able to have a minicomputer for each major department. Before long minicomputers began to spread and become more powerful. The world was hungry for computing power, and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on, people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled "The Case for the Reduced Instruction Set Computer". Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of a computer farm.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed. In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognized need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and the Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication, which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

Shortage of Electrons

Although a shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single electron effects can be contemplated.

A Brief History of Microcomputer Development (Chinese translation, excerpt): The first stored-program computers began to appear around 1950; the one we built at the University of Cambridge, the EDSAC (Electronic Delay Storage Automatic Calculator), was first used in the summer of 1949.

C# Programming Language: Foreign Literature Translation (Chinese and English)


C# Programming Language Overview: Foreign Literature Translation (includes the English original and the Chinese translation)

Source: Barnett M. C# Programming Language Overview [J]. Lecture Notes in Computer Science, 2016, 3(4): 49-59.

English original

C# Programming Language Overview
Barnett M

1. History of C, C++, C#

The C# programming language is based on the spirit of the C and C++ programming languages. This accounts for its powerful features and easy learning curve. It cannot be said that C# is the same as C and C++, but because C# is built on both, Microsoft has removed some features that had become burdensome, such as pointers. This section looks at C and C++ and traces their development into C#.

The C programming language was originally defined on the UNIX operating system. C was used to write many UNIX applications, including a C compiler, and was eventually used to write UNIX itself. Its wide acceptance in the academic arena eventually extended to the commercial world. The original Windows API was defined to work with Windows code written in C, and to this day at least the core Windows operating system APIs remain compatible with C.

From a design point of view, C lacks a detail that languages such as Smalltalk had already embraced: the concept of an object. You will learn more about objects in Chapter 8, "Writing Object-Oriented Code". For now, think of an object as a collection of data together with a set of operations on that data. Object-style code can be written in C, but the concept of an object is not enforced by the language. If you want to structure your code to resemble objects, that is fine. If you do not, C really does not mind. Objects are not an intrinsic part of the language, so many people did not pay much attention to this programming paradigm.

When the object-oriented perspective began to gain acceptance, it became clear that a new way of thinking about code was needed, and C++ was developed to include this improvement. It was defined to be compatible with C (so that all C programs are also C++ programs and can be compiled by a C++ compiler). The main addition of the C++ language is support for this new object concept: C++ adds support for classes (templates of objects) and for class derivation.

The C++ language is a modified version of the C language. Unlike easy-to-use languages such as Visual Basic, C and C++ are very low-level and require a lot of coding to make your application run well, including your own code for memory management and error checking. C and C++ can produce very powerful applications, but you need to make sure the code works correctly. Because of the goal of maintaining compatibility with C, C++ could not break away from C's low-level nature.

Microsoft designed C# to retain much of the syntax of C and C++, so that developers familiar with those languages can recognize C# code and pick it up quickly. The big advantage of C#, however, is that its designers chose not to make it compatible with C and C++. While this may seem like the wrong decision, it is actually good news. C# eliminates the things that make C and C++ difficult to work with, beginning with the quirks and defects found in C. C# starts with a clean slate and has no compatibility requirements, so it can keep the strengths of its predecessors and discard the weaknesses that made C and C++ programs hard to maintain.

2. Introducing C#

C#, the new language introduced with the .NET Framework, is derived from C++. However, C# is a modern, object-oriented (from the ground up), type-safe language.

Language features

The following sections provide a quick perspective on some of the features of the C# language. If some of them are unfamiliar to you, don't worry; everything will be explained in detail in later sections.
Classes

In C#, all code and data must be enclosed in a class. You cannot define a variable outside a class, nor can you write any code that is not in a class. Classes have constructors, which execute when an object of the class is created, and a destructor, which executes when the object of the class is destroyed. Classes support single inheritance, and all classes ultimately derive from a base class called object. C# provides versioning techniques to help your classes evolve over time while maintaining compatibility with code that uses earlier versions of your classes.

Let's look at an example of a class called Family. This class contains two string fields that hold the first and last names of a family member, as well as a method that returns the full name of the family member.

    class Family
    {
        public string FirstName;
        public string LastName;

        public string FullName()
        {
            return FirstName + LastName;
        }
    }

Note: Single inheritance means that a C# class can inherit from only one base class.

C# lets you package your classes into named collections called namespaces. Namespaces help arrange collections of classes into logical groupings. As you start learning C#, it becomes clear that all namespaces relevant to the .NET Framework begin with System. Microsoft also chose to include classes that assist with compatibility with previous code and APIs; these classes are contained in the Microsoft namespace.

Data types

C# lets you work with two kinds of data: value types and reference types. A value type holds the actual value. A reference type holds a reference to a value stored elsewhere in memory. Primitive types such as character, integer, and float, as well as enumeration and structure types, are value types. Object and array types are reference types. C# predefines reference types (object and string) and value types such as byte, short, ushort, uint, ulong, float, double, bool, char, and decimal. Both value types and reference types ultimately derive from a base type called object. C# also allows you to convert a value of one type into a value of another type. You can use either an implicit conversion or an explicit conversion. Implicit conversions always succeed and do not lose any information (for example, you can convert an int to a long without losing any information, because a long is larger than an int). An explicit conversion may lose data; for example, converting a long to an int may lose information, because a long can hold larger values than an int.

Cross-reference: Refer to Chapter 3, "Working with Variables", to find out more about explicit and implicit conversions.

You can use both single-dimensional and multidimensional arrays in C#. Multidimensional arrays can be rectangular, when every row has the same size, or jagged, when the rows have different sizes.

Classes and structures can have data members called properties and fields. You can define a structure called Employee, for example, that has a field called Name. If you define an Employee-type variable called CurrentEmployee, you can retrieve the employee's name by writing CurrentEmployee.Name. Properties are like fields, but they let you write code that specifies what should happen when the value is accessed. If the employee's name must be read from a database, for example, you can write code that says "when someone asks for the value of the Name property, read the name from the database and return it as a string".
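The sketch below shows one way the Name property just described might be written. The GetNameFromDatabase helper is a hypothetical stand-in, invented for this example, for whatever data-access code would really be used:

    class Employee
    {
        // Property: the code in the getter runs whenever someone reads employee.Name.
        public string Name
        {
            get
            {
                // Hypothetical helper standing in for real database access.
                return GetNameFromDatabase();
            }
        }

        private string GetNameFromDatabase()
        {
            // Placeholder; a real implementation would query the database here.
            return "Jane Doe";
        }
    }

From the caller's point of view, reading currentEmployee.Name looks exactly like reading a field, even though code runs behind the scenes.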
Functions

A function is a callable piece of code that may or may not return a value. An example of a function appeared earlier in this chapter: the FullName function in the Family class. A function is usually associated with code that returns information, whereas a method usually does not return information. For our purposes, however, we refer to them both as functions.

A function can have four kinds of parameters:
• Input parameters have values passed into the function, but the function cannot change those values.
• Output parameters have no value when they are passed to the function, but the function can give them a value and pass that value back to the caller.
• Reference parameters pass another value by reference. They have a value coming into the function, and that value can be changed inside the function.
• Params parameters define a variable number of arguments in a list.

C# and the CLR work together to provide automatic memory management. You do not need to write code that says "allocate enough space for this object to use". The CLR monitors your memory usage and automatically retrieves more when you need it.

C# provides a large number of operators that allow you to write mathematical and bitwise expressions. Many (but not all) of them can be redefined, so you can change how these operators work.

C# provides a long list of statements that let you define various execution paths through your code. Flow control statements that use keywords such as switch, while, for, break, and continue enable your code to branch into different paths depending on the values of your variables.

Classes can contain code and data. Each member has an accessibility scope that defines its visibility to other objects. C# provides accessibility scopes such as public, protected, internal, protected internal, and private.

Variables

Variables can be defined as constants. A constant has a fixed value that cannot be changed during the execution of your code. The value of pi, for example, is a good example of a constant, because its value will not change while your code is running. Enumeration types define specific names for a group of constants. For example, you can define an enumerated type of planets that includes names such as Mercury and Venus and use those names in your code. If you use a variable to represent a planet, using the names of this enum type makes your code easier to read.

C# provides a built-in mechanism for defining and handling events. If you write a class that performs a long operation, you may want to raise an event when the operation ends. Clients can subscribe to that event and catch it in their own code, which lets them be notified when you have completed the long operation. This event-handling mechanism uses delegates in C#, which are variables that reference a function.

Note: An event handler is a procedure in your code that determines what action takes place when an event occurs, for example the user clicking a button.

If your class holds a set of values, clients may want to access the values as if your class were an array. You can write a piece of code called an indexer so that your class can be accessed as if it were an array. Suppose you write a class called Rainbow, for example, that contains the set of colors in the rainbow. Callers might want to write MyRainbow[0] to retrieve the first color in the rainbow. You can write an indexer in your Rainbow class to define what will be returned when the caller accesses your class as if it were an array of values.
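A minimal sketch of such a Rainbow indexer, with the color list filled in as example data, might look like this:

    class Rainbow
    {
        // Example data: the colors of the rainbow, in order.
        private string[] colors =
        {
            "Red", "Orange", "Yellow", "Green", "Blue", "Indigo", "Violet"
        };

        // Indexer: lets callers write myRainbow[0] as if Rainbow were an array.
        public string this[int index]
        {
            get { return colors[index]; }
        }
    }

With this in place, MyRainbow[0] returns "Red", matching the usage described above.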
Interfaces

C# provides interfaces, which aggregate properties, methods, and events that describe a set of functionality. A C# class can implement an interface; the interface tells users the set of functionality the class provides, so that existing code can use the class with as few compatibility issues as possible. Once an interface has been published it cannot be changed, but it can evolve through inheritance. A C# class can implement many interfaces, even though it can inherit from only one base class.

Let's look at a clear real-world example that helps illustrate interfaces in C#. Many applications today support add-ins. Assume you have written a code editor that, when executed, has the ability to load add-ins. To do this, an add-in must follow some rules: the add-in DLL must export a function called CEEntry, and the DLL file name must begin with CEd. When we run our code editor, it scans its directories for all DLLs whose names begin with CEd. When it finds one, the DLL is loaded, and GetProcAddress is then used to find the CEEntry function in the DLL, verifying that you obeyed all the rules needed to build an add-in. This way of creating and loading add-ins is burdensome, because it loads the code editor with more verification duties than necessary. If an interface were used in this example, your add-in DLL could implement that interface, which would guarantee that all the necessary methods, properties, and events are present in the DLL and behave as the documentation specifies.

Attributes

Attributes declare additional information about your class to the CLR. In the past, if you wanted your classes to be self-describing, you had to use a disconnected approach and store the descriptions in external files, such as IDL or even HTML files. Attributes solve this problem by letting you, the developer, bind information to classes: any kind of information. For example, an attribute can define how a class behaves when it is used. The possibilities are endless, which is why Microsoft includes many predefined attributes in the .NET Framework.

Compiling C#

Running your C# code through the C# compiler produces two important kinds of information: code and metadata. The next sections describe these two topics and finish with a look at the binary building block of .NET code, the assembly.

Microsoft Intermediate Language (MSIL)

The code output by the C# compiler is written in an intermediate language called Microsoft Intermediate Language, or MSIL. MSIL is a detailed set of instructions that specify how your code should execute. It contains instructions for operations such as initializing variables, calling methods on objects, handling errors, and creating new objects. C# is not the only language whose source code is turned into MSIL during compilation. All .NET-compatible languages, including Visual Basic .NET and Managed C++, generate MSIL when their source code is compiled. Because all .NET languages use the same runtime, code from different languages and different compilers can easily work together.

MSIL is not an instruction set for a physical CPU. It knows nothing about your machine's CPU, and your machine knows nothing about MSIL. How, then, does your code run at all when your CPU cannot read MSIL? The answer is that the MSIL code is turned into CPU-specific code when the code runs for the first time. This process is called just-in-time compilation, or JIT. The job of the JIT compiler is to translate your generic MSIL code into machine code that the CPU can execute.

You may wonder about what seems like an extra step in the process. Why generate MSIL when a compiler could generate CPU-specific code immediately? There are several reasons. First, MSIL makes it easier for your compiled code to move to different hardware. Suppose you have written some C# code and you want it to run on your desktop and on a handheld device at the same time. It is very likely that these two devices have different CPUs. If you had only a C# compiler that targeted a specific CPU, you would need two compilers: one targeting the desktop CPU and another targeting the handheld CPU, and you would have to compile your code twice, making sure the right code was deployed to the right device. With MSIL, you write once. The .NET Framework installed on your desktop contains a JIT compiler that translates your MSIL into CPU-specific code for your machine; the .NET Framework installed on your handheld device contains a JIT compiler that translates the same MSIL into CPU-specific code for the handheld device. You now have a single MSIL code base that can run on any device that has a .NET JIT compiler, and the JIT compiler on each device takes care of making your code run there.

Another reason the compiler produces MSIL is that the instruction set can easily be read by a verifier. Part of the compiler's job is to check your code to make it as safe as possible; these checks ensure that your code does not execute any instructions that could make it crash. The definition of the MSIL instructions makes this checking process easier. CPU-specific instruction sets are optimized for fast execution, but they make the code hard to read and therefore hard to check; a C# compiler that emitted CPU-specific code directly would make code inspection difficult or even impossible. Allowing the .NET Framework's JIT compiler to verify your code ensures that the code accesses memory only through safe paths and that variable types are used correctly.

Metadata

The compilation process also outputs metadata, which is an important part of the .NET code-sharing story. Whether you use C# to build a client application or a library that other people will use in their applications, you will want to take advantage of some already compiled .NET code. That code may be provided by Microsoft as part of the .NET Framework, or it may come from another user. The key to using such foreign code is to let the C# compiler know which classes and variables live in the other code base, so that it can find them when it compiles your work and can match the code you write against that code.

Think of metadata as a directory of your compiled code. It is generated by the C# compiler and stored alongside the generated MSIL. The types of methods and variables are completely described in the metadata and are ready to be read by other applications. For example, a development environment can read the metadata from a .NET library and list all the methods that are available for a particular class.

If you have worked with COM, you may be familiar with type libraries. The goal of a type library is to provide the same kind of directory functionality for COM objects. However, type libraries suffer from a few limitations, and in fact not all of the data about an object can be put into a type library. The metadata in .NET does not have this disadvantage: all the information needed to describe a class is placed in the metadata.

Components

Sometimes you will use C# to build an end-user application. Such applications are packaged into an executable file with .EXE as its extension, and C# fully supports the creation of .EXE files. However, there are also times when you do not build a program to be run by users directly. You may want to create some useful C# classes that, for example, another developer could use in an application of their own. In that case you will not create an application; instead you will build a component. A component is a package of metadata and code. The classes it contains are deployed as a unit and share the same level of version control, security information, and deployment requirements. Think of a component as a logical DLL. If you are familiar with Microsoft Transaction Server or COM+, you can think of a component as the .NET equivalent of a package.

There are two kinds of components: private components and global components. When you build your own component, you do not need to specify whether you want it to be global or private. You can make your code accessible to a single application only: the component is then a DLL-like package installed into the same directory as the application that uses it, and the application can use it only when it is in that same directory.

If you want to share your code among many applications, you can use a global component instead. Global components can be used by any .NET application on the system, regardless of the directory in which the application is installed. Microsoft ships components as part of the .NET Framework, and each Microsoft component is installed as a global component. The .NET Framework SDK contains utilities for installing components into, and removing them from, the global component store.

C# can be viewed, to some extent, as the programming language for the .NET Windows-oriented environment. During the past ten years, although VB and C++ have become very powerful languages, some problems have accumulated. Visual Basic's main advantage is that it is easy to understand: many programming tasks are easy to accomplish, and it largely hides the details of the Windows API and the COM component structure. The downside is that Visual Basic never implemented true object orientation in its earlier versions (BASIC was primarily intended to make things easy for beginners rather than to support writing large commercial applications), so it could not really be called a structured or object-oriented programming language.

C++, on the other hand, has its roots in the ANSI C++ language definition. It is not fully ANSI-compatible, because Microsoft wrote its C++ compiler before the ANSI definition was standardized, but it is quite close. Unfortunately, this leads to two problems. First, ANSI C++ was developed under the technical conditions of more than a decade ago, so it does not support current concepts (such as Unicode strings or generating XML documents), and some of its older syntactic structures were designed for earlier compilers (for example, the separate declaration and definition of member functions). Second, Microsoft also tried to evolve C++ into a language for performing high-performance tasks on Windows, which could not be done without adding a large number of Microsoft-specific keywords and libraries to the language. The result is that on Windows the language has become very messy. Just ask a C++ developer how many ways there are to define a string: char*, LPTSTR, CString (MFC version), CString (WTL version), wchar_t*, OLECHAR*, and so on.

Now we enter the .NET era, a new environment that has brought new extensions to both languages. Microsoft added many Microsoft-specific keywords to C++ and evolved VB into VB.NET, which retains some basic VB syntax but is completely different in design. From a practical application perspective, VB.NET is a new language. Here we come to Visual C# .NET. Microsoft describes C# as a simple, modern, object-oriented, type-safe programming language derived from C and C++. Most independent commentators describe it as "derived from C, C++, and Java". C# is very similar to C++ and Java: it uses braces ({}) to mark blocks of code, and semicolons separate statements, so the first impression of C# code is that it is very similar to C++ or Java code. Behind these surface similarities, however, C# is much easier to learn than C++, though harder than Java. Its design is better adapted to modern development tools than that of other languages, and it combines Visual Basic's ease of use with the high performance and low-level memory access of C++. C# includes the following features:

● Full support for classes and object-oriented programming, including interfaces, inheritance, virtual functions, and operator overloading.
● A complete, consistent set of basic types.
● Built-in support for automatically generating XML documentation.
● Automatic cleanup of dynamically allocated memory.
● Classes or methods can be marked with user-defined attributes. This can be used for documentation purposes and can have a certain effect on compilation (for example, marking methods to be compiled only in debug builds).
● Full access to the .NET base class library and easy access to the Windows API.
● Pointers and direct memory access are available, although the C# language can access memory without them.
● Support for properties and events in the VB style.
● With a change of compiler options, code can be exposed as ActiveX controls (COM components) and called by other code in the same way.
● C# can be used to write dynamic Web pages and XML Web services.

It should be noted that most of these features are also available in VB.NET and Managed C++. But because C# was built on .NET from the beginning, its support for .NET features is not only complete but also comes with a more suitable syntax than in other languages. The C# language itself is very similar to Java, but with some improvements, because Java was not designed for use in a .NET environment. Before ending this topic, we must also point out two limitations of C#. One is that the language is not suitable for writing time-critical or extremely high-performance code, such as a loop that runs 1000 or 1050 times, or code that must immediately reclaim the resources it occupies as soon as they are no longer needed; in this regard, C++ may still be the best of the low-level languages. The second is that C# lacks certain key features needed by extremely high-performance applications, such as guaranteed inlining and destructors in particular areas of the code. However, such applications are very few.

Chinese translation

C# Programming Language Overview, by Barnett M

1. History of C, C++, and C#

The C# programming language is built on the spirit of the C and C++ programming languages.

Computer Science (C Language) Foreign Literature Translation


C#: A History of C, C++, and C#

The C# programming language was created in the spirit of the C and C++ programming languages. This accounts for its powerful features and easy learning curve. The same can't be said for C and C++, but because C# was created from the ground up, Microsoft took the liberty of removing some of the more burdensome features, such as pointers. This section takes a look at the C and C++ languages, tracing their evolution into C#.

The C programming language was originally designed for use on the UNIX operating system. C was used to create many UNIX applications, including a C compiler, and was eventually used to write UNIX itself. Its widespread acceptance in the academic arena expanded to include the commercial world, and software vendors such as Microsoft and Borland released C compilers for personal computers. The original Windows API was designed to work with Windows code written in C, and the latest set of the core Windows operating system APIs remains compatible with C to this day.

From a design standpoint, C lacked a detail that other languages such as Smalltalk had already embraced: the concept of an object. You'll learn more about objects in Chapter 8, "Writing Object-Oriented Code". For now, think of an object as a collection of data and a set of operations that can be performed on that data. Object-style coding could be accomplished using C, but the notion of an object was not enforced by the language. If you wanted to structure your code to resemble an object, fine. If you didn't, fine. C really didn't care. Objects weren't an inherent part of the language, so many people didn't pay much attention to this programming paradigm.

After the notion of object-oriented development began to gain acceptance, it became clear that C needed to be refined to embrace this new way of thinking about code. C++ was created to embody this refinement. It was designed to be backwardly compatible with C, such that all C programs would also be C++ programs and could be compiled with a C++ compiler. The major addition to the C++ language was support for this new object concept. The C++ language added support for classes, which are "templates" of objects, and enabled an entire generation of C programmers to think in terms of objects and their behavior.

The C++ language is an improvement over C, but it still has some disadvantages. C and C++ can be hard to get a handle on. Unlike easy-to-use languages like Visual Basic, C and C++ are very "low level" and require you to do a lot of coding to make your application run well. You have to write your own code to handle issues such as memory management and error checking. C and C++ can result in very powerful applications, but you need to ensure that your code works well. One bug can make the entire application crash or behave unexpectedly. Because of the C++ design goal of retaining backward compatibility with C, C++ was unable to break away from the low-level nature of C.

Microsoft designed C# to retain much of the syntax of C and C++. Developers who are familiar with those languages can pick up C# code and begin coding relatively quickly. The big advantage to C#, however, is that its designers chose not to make it backwardly compatible with C and C++. While this may seem like a bad deal, it's actually good news. C# eliminates the things that make C and C++ difficult to work with. Because all C code is also C++ code, C++ had to retain all of the original quirks and deficiencies found in C.
C# starts with a clean slate and without any compatibility requirements, so it can retain the strengths of its predecessors and discard the weaknesses that made life hard for C and C++ programmers.

Introducing C#

C#, the new language introduced in the .NET Framework, is derived from C++. However, C# is a modern, object-oriented (from the ground up), type-safe language.

Language features

The following sections take a quick look at some of the features of the C# language. If some of these concepts don't sound familiar to you, don't worry. All of them are covered in detail in later chapters.

Classes

All code and data in C# must be enclosed in a class. You can't define a variable outside of a class, and you can't write any code that's not in a class. Classes can have constructors, which execute when an object of the class is created, and a destructor, which executes when an object of the class is destroyed. Classes support single inheritance, and all classes ultimately derive from a base class called object. C# supports versioning techniques to help your classes evolve over time while maintaining compatibility with code that uses earlier versions of your classes.

As an example, take a look at a class called Family. This class contains the two fields that hold the first and last name of a family member, as well as a method that returns the full name of the family member.

    class Family
    {
        public string FirstName;
        public string LastName;

        public string FullName()
        {
            return FirstName + LastName;
        }
    }

Note: Single inheritance means that a C# class can inherit from only one base class.

C# enables you to group your classes into a collection of classes called a namespace. Namespaces have names and can help organize collections of classes into logical groupings. As you begin to learn C#, it becomes apparent that all namespaces relevant to the .NET Framework begin with System. Microsoft has also chosen to include some classes that aid in backwards compatibility and API access. These classes are contained within the Microsoft namespace.

Data types

C# lets you work with two types of data: value types and reference types. Value types hold actual values. Reference types hold references to values stored elsewhere in memory. Primitive types such as char, int, and float, as well as enumerated values and structures, are value types. Reference types hold variables that deal with objects and arrays. C# comes with predefined reference types (object and string), as well as predefined value types (sbyte, short, int, long, byte, ushort, uint, ulong, float, double, bool, char, and decimal). You can also define your own value and reference types in your code. All value and reference types ultimately derive from a base type called object.

C# allows you to convert a value of one type into a value of another type. You can work with both implicit conversions and explicit conversions. Implicit conversions always succeed and don't lose any information (for example, you can convert an int to a long without losing any data, because a long is larger than an int). Explicit conversions may cause you to lose data (for example, converting a long into an int may result in a loss of data, because a long can hold larger values than an int). You must write a cast operator into your code to make an explicit conversion happen.
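As a small illustration of the two kinds of conversion just described, a sketch like the following could be used:

    class ConversionDemo
    {
        static void Main()
        {
            int smaller = 42;
            long bigger = smaller;        // implicit conversion: always succeeds, no data loss

            long big = 5000000000;        // a value too large to fit in an int
            int truncated = (int)big;     // explicit conversion: needs a cast and may lose data

            System.Console.WriteLine(bigger);
            System.Console.WriteLine(truncated);
        }
    }

The (int) cast marks the conversion as explicit, signalling to the reader that information may be lost.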
Cross-Reference: Refer to Chapter 3, "Working with Variables", for more information about implicit and explicit conversions.

You can work with both one-dimensional and multidimensional arrays in C#. Multidimensional arrays can be rectangular, in which each of the arrays has the same dimensions, or jagged, in which each of the arrays has different dimensions.

Classes and structures can have data members called properties and fields. Fields are variables that are associated with the enclosing class or structure. You may define a structure called Employee, for example, that has a field called Name. If you define a variable of type Employee called CurrentEmployee, you can retrieve the employee's name by writing CurrentEmployee.Name. Properties are like fields, but enable you to write code to specify what should happen when code accesses the value. If the employee's name must be read from a database, for example, you can write code that says "when someone asks for the value of the Name property, read the name from the database and return the name as a string."

Functions

A function is a callable piece of code that may or may not return a value to the code that originally called it. An example of a function would be the FullName function shown earlier in this chapter, in the Family class. A function is generally associated with pieces of code that return information, whereas a method generally does not return information. For our purposes, however, we generalize and refer to them both as functions.

Functions can have four kinds of parameters:
• Input parameters have values that are sent into the function, but the function cannot change those values.
• Output parameters have no value when they are sent into the function, but the function can give them a value and send the value back to the caller.
• Reference parameters pass in a reference to another value. They have a value coming into the function, and that value can be changed inside the function.
• Params parameters define a variable number of arguments in a list.

C# and the CLR work together to provide automatic memory management. You don't need to write code that says "allocate enough memory for an integer" or "free the memory that this object was using." The CLR monitors your memory usage and automatically retrieves more when you need it. It also frees memory automatically when it detects that it is no longer being used (this is also known as Garbage Collection).

C# provides a variety of operators that enable you to write mathematical and bitwise expressions. Many (but not all) of these operators can be redefined, enabling you to change how the operators work.

C# supports a long list of statements that enable you to define various execution paths within your code. Flow control statements that use keywords such as if, switch, while, for, break, and continue enable your code to branch off into different paths depending on the values of your variables.

Classes can contain code and data. Each class member has something called an accessibility scope, which defines the member's visibility to other objects. C# supports public, protected, internal, protected internal, and private accessibility scopes.

Variables

Variables can be defined as constants. Constants have values that cannot change during the execution of your code. The value of pi, for instance, is a good example of a constant, because its value won't be changing as your code runs. Enum type declarations specify a type name for a related group of constants. For example, you could define an enum of Planets with values of Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, and use those names in your code. Using the enum names in code makes code more readable than if you used a number to represent each planet.
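A minimal sketch of the Planets enum just described might look like this:

    // Enum giving readable names to the planet constants described above.
    enum Planets
    {
        Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto
    }

    class EnumDemo
    {
        static void Main()
        {
            Planets home = Planets.Earth;      // far clearer than using the number 2
            System.Console.WriteLine(home);    // prints "Earth"
        }
    }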
C# provides a built-in mechanism for defining and handling events. If you write a class that performs a lengthy operation, you may want to invoke an event when the operation is completed. Clients can subscribe to that event and catch the event in their code, which enables them to be notified when you have completed your lengthy operation. The event handling mechanism in C# uses delegates, which are variables that reference a function.

Note: An event handler is a procedure in your code that determines the actions to be performed when an event occurs, such as the user clicking a button.

If your class holds a set of values, clients may want to access the values as if your class were an array. You can write a piece of code called an indexer to enable your class to be accessed as if it were an array. Suppose you write a class called Rainbow, for example, that contains a set of the colors in the rainbow. Callers may want to write MyRainbow[0] to retrieve the first color in the rainbow. You can write an indexer into your Rainbow class to define what should be returned when the caller accesses your class as if it were an array of values.

Interfaces

C# supports interfaces, which are groups of properties, methods, and events that specify a set of functionality. C# classes can implement interfaces, which tells users that the class supports the set of functionality documented by the interface. You can develop implementations of interfaces without interfering with any existing code, which minimizes compatibility problems. Once an interface has been published, it cannot be changed, but it can evolve through inheritance. C# classes can implement many interfaces, although the classes can only inherit from a single base class.

Let's look at a real-world example that would benefit from interfaces, to illustrate their extremely positive role in C#. Many applications available today support add-ins. Assume that you have created a code editor for writing applications. This code editor, when executed, has the capability to load add-ins. To do this, the add-in must follow a few rules. The DLL add-in must export a function called CEEntry, and the name of the DLL must begin with CEd. When we run our code editor, it scans its working directory for all DLLs that begin with CEd. When it finds one, it is loaded, and then it uses GetProcAddress to locate the CEEntry function within the DLL, thus verifying that you followed all the rules necessary to create an add-in. This method of creating and loading add-ins is very burdensome, because it burdens the code editor with more verification duties than necessary. If an interface were used in this instance, your add-in DLL could have implemented an interface, thus guaranteeing that all necessary methods, properties, and events were present within the DLL itself and functioning as the documentation specified.
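The following is a minimal sketch of how such an add-in contract could be expressed as an interface. The ICodeEditorAddIn name and its members are hypothetical, invented only to illustrate the idea described above; they are not part of the original text:

    // Hypothetical contract that every add-in would implement.
    interface ICodeEditorAddIn
    {
        string Name { get; }     // display name of the add-in
        void Initialize();       // called once when the editor loads the add-in
        void Execute();          // called when the user invokes the add-in
    }

    // An add-in simply implements the interface; the compiler guarantees that
    // all required members exist, so no CEEntry/GetProcAddress-style checks
    // are needed at load time.
    class SpellCheckAddIn : ICodeEditorAddIn
    {
        public string Name
        {
            get { return "Spell Checker"; }
        }

        public void Initialize()
        {
            // Set up any resources the add-in needs.
        }

        public void Execute()
        {
            System.Console.WriteLine("Checking spelling...");
        }
    }

The editor could then load any type that implements ICodeEditorAddIn and call its members directly, since the compiler has already guaranteed that they are present.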
Attributes

Attributes declare additional information about your class to the CLR. In the past, if you wanted to make your class self-describing, you had to take a disconnected approach, in which the documentation was stored in external files such as IDL or even HTML files. Attributes solve this problem by enabling you, the developer, to bind information to classes: any kind of information. For example, you can use an attribute to embed documentation information into a class. Attributes can also be used to bind runtime information to a class, defining how it should act when used. The possibilities are endless, which is why Microsoft includes many predefined attributes within the .NET Framework.

Compiling C#

Running your C# code through the C# compiler produces two important pieces of information: code and metadata. The following sections describe these two items and then finish up by examining the binary building block of .NET code: the assembly.

Microsoft Intermediate Language (MSIL)

The code that is output by the C# compiler is written in a language called Microsoft Intermediate Language, or MSIL. MSIL is made up of a specific set of instructions that specify how your code should be executed. It contains instructions for operations such as variable initialization, calling object methods, and error handling, just to name a few. C# is not the only language in which source code changes into MSIL during the compilation process. All .NET-compatible languages, including Visual Basic .NET and Managed C++, produce MSIL when their source code is compiled. Because all of the .NET languages compile to the same MSIL instruction set, and because all of the .NET languages use the same runtime, code from different languages and different compilers can work together easily.

MSIL is not a specific instruction set for a physical CPU. It knows nothing about the CPU in your machine, and your machine knows nothing about MSIL. How, then, does your .NET code run at all if your CPU can't read MSIL? The answer is that the MSIL code is turned into CPU-specific code when the code is run for the first time. This process is called "just-in-time" compilation, or JIT. The job of a JIT compiler is to translate your generic MSIL code into machine code that can be executed by your CPU.

You may be wondering about what seems like an extra step in the process. Why generate MSIL when a compiler could generate CPU-specific code directly? After all, compilers have always done this in the past. There are a couple of reasons for this. First, MSIL enables your compiled code to be easily moved to different hardware. Suppose you've written some C# code and you'd like it to run on both your desktop and a handheld device. It's very likely that those two devices have different CPUs.

The Development History of the C Language


The development of the C language can be divided into several stages:

1. Birth and early development (1972-1979). The C language was designed by Dennis Ritchie at Bell Labs between 1972 and 1973, originally for the purpose of writing the Unix operating system. In the years that followed, C was further developed and refined, and it gradually came to be widely used in the UNIX operating system and in a number of other projects. In 1978, Brian Kernighan and Dennis Ritchie published The C Programming Language, the classic C textbook, which spread the use of C still further.

2. Standardization (1979-1989). In 1983, the American National Standards Institute (ANSI) began work on standardizing the C language; the resulting standard is known as "ANSI C". In 1990, the International Organization for Standardization (ISO) also published an international standard based on ANSI C, known as "ISO C". Standardization gave C greater portability and compatibility and made it one of the programming languages of choice for programmers.

3. The birth and development of C++ (1983 to the present). C++ was developed on the basis of C by Bjarne Stroustrup, a colleague of Dennis Ritchie at Bell Labs. C++ added object-oriented features on top of C, making programs easier to develop and maintain. The name C++ was first formally used in 1983, and the language gradually developed into an independent programming language.

4. Standardization updates and further development (1990 to the present). The standardization of C has continued to be updated and developed. The ANSI C standard of 1989, commonly called "C89", was adopted by ISO in 1990 as "C90", with essentially identical content. In 1999, "C99" was released, introducing a number of new features and syntax. In 2011, the "C11" version was released, revising and extending C99. At present, the latest version of the C language is "C18", released in 2018.

Overall, the C language has gone through nearly fifty years of development since its birth. Through standardization and continual updates, C has been widely applied throughout computer science and has become one of the cornerstones of later language development.

Computer Science Chinese-English Translation (Foreign Literature Translation)


英文参考文献及翻译Linux - Operating system of cybertimesThough for a lot of people , regard Linux as the main operating system to make up huge work station group, finish special effects of " Titanic " make , already can be regarded as and show talent fully. But for Linux, this only numerous news one of. Recently, the manufacturers concerned have announced that support the news of Linux to increase day by day, users' enthusiasm to Linux runs high unprecedentedly too. Then, Linux only have operating system not free more than on earth on 7 year this piece what glamour, get the favors of such numerous important software and hardware manufacturers as the masses of users and Orac le , Informix , HP , Sybase , Corel , Intel , Netscape , Dell ,etc. , OK?1.The background of Linux and characteristicLinux is a kind of " free (Free ) software ": What is called free,mean users can obtain the procedure and source code freely , and can use them freely , including revise or copy etc.. It is a result of cybertimes, numerous technical staff finish its research and development together through Inte rnet, countless user is it test and except fault , can add user expansion function that oneself make conveniently to participate in. As the most outstanding one in free software, Linux has characteristic of the following:(1)Totally follow POSLX standard, expand the network operatingsystem of supporting all AT&T and BSD Unix characteristic. Because of inheritting Unix outstanding design philosophy , and there are clean , stalwart , high-efficient and steady kernels, their all key codes are finished by Li nus Torvalds and other outstanding programmers, without any Unix code of AT&T or Berkeley, so Linu x is not Unix, but Linux and Unix are totally compatible.(2)Real many tasks, multi-user's system, the built-in networksupports, can be with such seamless links as NetWare , Windows NT , OS/2 , Unix ,etc.. Network in various kinds of Unix it tests to be fastest in comparing and assess efficiency. Support such many kinds of files systems as FAT16 , FAT32 , NTFS , Ex t2FS , ISO9600 ,etc. at the same time .(3) Can operate it in many kinds of hardwares platform , including such processors as Alpha , SunSparc , PowerPC , MIPS ,etc., to various kinds of new-type peripheral hardwares, can from distribute on global numerous programmer there getting support rapidly too.(4) To that the hardware requires lower, can obtain very good performance on more low-grade machine , what deserves particular mention is Linux outstanding stability , permitted " year " count often its running times.2.Main application of Linux At present,Now, the application of Linux mainly includes:(1) Internet/Intranet: This is one that Linux was used most at present, it can offer and include Web server , all such Inter net services as Ftp server , Gopher server , SMTP/POP3 mail server , Proxy/Cache server , DNS server ,etc.. Linux kernel supports IPalias , PPP and IPtunneling, these functions can be used for setting up fictitious host computer , fictitious service , VPN (fictitious special-purpose network ) ,etc.. 
Operating Apache Web server on Linux mainly, the occupation rate of market in 1998 is 49%, far exceeds the sum of such several big companies as Microsoft , Netscape ,etc..(2) Because Linux has outstanding networking ability , it can be usedin calculating distributedly large-scaly, for instance cartoon making , scientific caculation , database and file server ,etc..(3) As realization that is can under low platform fullness of Unix that operate , apply at all levels teaching and research work of universities and colleges extensively, if Mexico government announce middle and primary schools in the whole country dispose Linux and offer Internet service for student already.(4) Tabletop and handling official business appliedly. Application number of people of in this respect at present not so good as Windows of Microsoft far also, reason its lie in Lin ux quantity , desk-top of application software not so good as Windows application far not merely,because the characteristic of the freedom software makes it not almost have advertisement that support (though the function of Star Office is not second to MS Office at the same time, but there are actually few people knowing).3.Can Linux become a kind of major operating system?In the face of the pressure of coming from users that is strengthened day by day, more and more commercial companies transplant its application to Linux platform, comparatively important incident was as follows, in 1998 ①Compaq and HP determine to put forward user of requirement truss up Linux at their servers , IBM and Dell promise to offer customized Linux system to user too. ②Lotus announce, Notes the next edition include one special-purpose edition in Linux. ③Corel Company transplants its famous WordPerfect to on Linux, and free issue. Corel also plans to move the other figure pattern process products to Linux platform completely.④Main database producer: Sybase , Informix , Oracle , CA , IBM have already been transplanted one's own database products to on Linux, or has finished Beta edition, among them Oracle and Informix also offer technical support to their products.4.The gratifying one is, some farsighted domestic corporations have begun to try hard to change this kind of current situation already. Stone Co. not long ago is it invest a huge sum of money to claim , regard Linux as platform develop a Internet/Intranet solution, regard this as the core and launch Stone's system integration business , plan to set up nationwide Linux technical support organization at the same time , take the lead to promote the freedom software application and development in China. In addition domestic computer Company , person who win of China , devoted to Linux relevant software and hardware application of system popularize too. Is it to intensification that Linux know , will have more and more enterprises accede to the ranks that Linux will be used with domestic every enterprise to believe, more software will be planted in Linux platform. Meanwhile, the domestic university should regard Linux as the original version and upgrade already existing Unix content of courses , start with analysing the source code and revising the kernel and train a large number of senior Linux talents, improve our country's own operating system. 
Only by truly mastering the operating system can our country's software industry shake off its current passive state of assiduous imitation and of being led by the nose by others, and create the conditions for fundamentally revitalizing the software industry.

Chinese translation (excerpt): Linux, the operating system of the network era. Although for many people, building a huge workstation cluster with Linux as the main operating system and completing the special-effects work for "Titanic" already counts as an impressive showing...
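To make the POSIX-compliance claim in the characteristics list above a little more concrete, here is a minimal illustrative sketch, not part of the original article: a C++ program that uses only standard POSIX calls (uname and sysconf) and should build unchanged on Linux or any other POSIX system, for example with g++ posix_info.cpp -o posix_info.

```cpp
#include <sys/utsname.h>  // uname(): identify the running kernel (POSIX)
#include <unistd.h>       // sysconf(): query system configuration (POSIX)
#include <cstdio>

int main() {
    utsname info{};
    if (uname(&info) == 0) {
        // On Linux, sysname is "Linux"; the same code runs on other
        // POSIX systems and simply reports their names instead.
        std::printf("system:  %s\n", info.sysname);
        std::printf("release: %s\n", info.release);
        std::printf("machine: %s\n", info.machine);
    }
    long page = sysconf(_SC_PAGESIZE);  // memory page size in bytes (POSIX)
    if (page > 0) {
        std::printf("page size: %ld bytes\n", page);
    }
    return 0;
}
```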

The C Programming Language (English edition)

Abstract: 1. An introduction to the C programming language; 2. The history of C; 3. The characteristics of C; 4. The application areas of C; 5. The future development of C.

Body: C is a programming language used widely throughout the computing field. Its classic English-language reference is titled "The C Programming Language". C was originally developed by Dennis Ritchie at Bell Labs in the early 1970s as a high-level programming language for the Unix operating system. Since then, C has become one of the most popular programming languages in the world and has contributed enormously to the development of computer science.

The history of C can be traced back to the late 1960s. Dennis Ritchie, then working at Bell Labs to improve the performance of the Unix operating system, set out to develop a new programming language, which became C. The name comes from its predecessor, the language B. C was designed as a high-level language that nevertheless allows low-level access to the machine, which makes it well suited to writing system-level software.

C has many notable characteristics, among them portability, conciseness, efficiency, and powerful control structures. It supports structures, functions, pointers, and related programming concepts, enabling programmers to write high-quality code. Its portability means that programs written in C can run on different operating systems and hardware platforms, which has helped make it such a widely used language.
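The structures, functions, and pointers just mentioned are easiest to see in a short program. The following sketch was written for this translation and is not part of the original text; it compiles both as C and as C++:

```cpp
#include <stdio.h>

/* A simple aggregate type: a 2-D point. */
struct Point {
    double x;
    double y;
};

/* A function that takes a pointer to a Point and moves it in place. */
void translate(struct Point *p, double dx, double dy) {
    p->x += dx;   /* pointer dereference with the -> operator */
    p->y += dy;
}

int main(void) {
    struct Point p = {1.0, 2.0};
    translate(&p, 3.0, -1.0);           /* pass the address of p */
    printf("(%.1f, %.1f)\n", p.x, p.y); /* prints (4.0, 1.0) */
    return 0;
}
```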

C is applied widely across computer science. It is used extensively in operating systems, embedded systems, hardware drivers, game development, and similar areas. Many well-known pieces of software and operating systems, such as Windows, Linux, and Unix, are written largely in C, and the importance of C within computer science has made learning it a required step for many programmers.

As computer science continues to develop, the C language keeps evolving as well. Modern revisions of the standard, such as C11 and C17, added features such as standardized multithreading support and other language and library improvements, so C retains strong vitality in the face of new programming challenges.

Computer Science and Technology: Foreign Literature Translation (English text with Chinese translation)

Attachment 1: Translation of the foreign-language material

Mass Storage

Because a computer's main memory is volatile and limited in capacity, most computers have additional storage devices called mass storage systems, which include magnetic disks, CDs, and magnetic tape. Compared with main memory, mass storage systems have the advantages of lower volatility, large capacity, and low cost, and in many cases the storage medium can be removed from the computer for archival purposes.

The terms online and offline are customarily used to describe devices that are, or are not, connected to the machine. Online means that the device or the information is attached to the computer and available to it without human intervention; offline means that human intervention is required before the device or information can be used, perhaps to switch the device on or to insert the medium holding the information into some mechanism.

The main disadvantage of mass storage systems is that they typically require mechanical motion and therefore need considerably more time, whereas all the work of main memory is carried out electronically.

1. Magnetic disks

The mass storage device in widest use today is the magnetic disk, in which thin spinning platters coated with magnetic material are used to hold data. Read/write heads are mounted above and/or below the platters, so that as a platter spins, each head traces out a circle, called a track, on the upper or lower surface of the platter. By repositioning the read/write heads, different concentric tracks can be accessed.

Usually a disk storage system consists of several platters mounted on a common spindle, with enough space between them for the heads to slide between the platters. All the heads in a disk move together. Consequently, whenever the heads move to a new position, a new set of tracks becomes accessible. Each such set of tracks is called a cylinder.

Because a track can hold more information than we normally want to manipulate at one time, each track is divided into arcs called sectors, and the information recorded in a sector is a continuous string of bits. On a traditional disk, every track is divided into the same number of sectors, and every sector holds the same number of bits. (Consequently, the bits near the center of the platter are stored more densely than those near the edge.) A disk storage system therefore consists of many individual sectors, each of which can be accessed as an independent string of bits; the number of tracks per surface and the number of sectors per track vary from one disk system to another. Sector sizes are generally no more than a few KB; 512 bytes or 1024 bytes is typical.
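The geometry just described (cylinders, heads, tracks, and sectors) determines both the capacity of a disk and how a linear block number maps onto a physical location. The sketch below is illustrative only; the geometry values are invented for the example rather than taken from any real drive.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative CHS (cylinder/head/sector) geometry.
struct DiskGeometry {
    std::uint32_t cylinders;        // sets of vertically aligned tracks
    std::uint32_t heads;            // one head per recording surface
    std::uint32_t sectorsPerTrack;  // same on every track (traditional layout)
    std::uint32_t bytesPerSector;   // typically 512 or 1024
};

// Capacity in bytes = cylinders * heads * sectors/track * bytes/sector.
std::uint64_t capacityBytes(const DiskGeometry &g) {
    return static_cast<std::uint64_t>(g.cylinders) * g.heads *
           g.sectorsPerTrack * g.bytesPerSector;
}

// Classic mapping from a linear block address to a cylinder/head/sector triple.
void lbaToChs(const DiskGeometry &g, std::uint64_t lba,
              std::uint32_t &c, std::uint32_t &h, std::uint32_t &s) {
    c = static_cast<std::uint32_t>(lba / (g.heads * g.sectorsPerTrack));
    h = static_cast<std::uint32_t>((lba / g.sectorsPerTrack) % g.heads);
    s = static_cast<std::uint32_t>(lba % g.sectorsPerTrack) + 1;  // sectors count from 1
}

int main() {
    DiskGeometry g{1024, 16, 63, 512};  // made-up example geometry
    std::printf("capacity: %llu bytes\n",
                static_cast<unsigned long long>(capacityBytes(g)));
    std::uint32_t c = 0, h = 0, s = 0;
    lbaToChs(g, 123456, c, h, s);
    std::printf("block 123456 -> cylinder %u, head %u, sector %u\n",
                static_cast<unsigned>(c), static_cast<unsigned>(h),
                static_cast<unsigned>(s));
    return 0;
}
```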

Computer Department Foreign Literature Translation: The History of Computing

Graduation Project (Thesis) Foreign Literature Translation. Department: Computer Information and Technology. Major: Computer Science and Technology. Class: ; Name: ; Student ID: ; Source of the foreign text: . Attachments: 1. Original text; 2. Translation. March 2012.

History of computing

Main article: History of computing hardware

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations.

Limited-function early computers

[Figure caption: The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.]

The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. A few devices are worth mentioning, though, such as some mechanical aids to computing, which were very successful and survived for centuries until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC, a descendant of which won a speed competition against a modern desk calculating machine in Japan in 1946; the slide rule, invented in the 1620s, which was carried on five Apollo space missions, including to the Moon; and arguably the astrolabe and the Antikythera mechanism, an ancient astronomical computer built by the Greeks around 80 BC. The Greek mathematician Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.

Around the end of the 10th century, the French monk Gerbert d'Aurillac brought back from Spain the drawings of a machine invented by the Moors that answered either Yes or No to the questions it was asked. Again in the 13th century, the monks Albertus Magnus and Roger Bacon built talking androids without any further development.

In 1642, the Renaissance saw the invention of the mechanical calculator, a device that could perform all four arithmetic operations without relying on human intelligence. The mechanical calculator was at the root of the development of computers in two separate ways. Initially, it was in trying to develop more powerful and more flexible calculators that the computer was first theorized by Charles Babbage and then developed. Secondly, development of a low-cost electronic calculator, successor to the mechanical calculator, resulted in the development by Intel of the first commercially available microprocessor integrated circuit.

First general-purpose computers

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.
Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed; nevertheless his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. This machine was given to the Science Museum in South Kensington in 1910.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of ideas and technologies that would later prove useful in the realization of practical computers had begun to appear: Boolean algebra, the vacuum tube (thermionic valve), punched cards and tape, and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concepts of algorithm and computation with the Turing machine, providing a blueprint for the electronic digital computer. Of his role in the creation of the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine".

[Figure caption: EDSAC was one of the first computers to implement the stored-program (von Neumann) architecture.]

[Figure caption: Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.]

[Figure caption: The Atanasoff–Berry Computer (ABC) was the world's first electronic digital computer, albeit not programmable. Atanasoff is considered to be one of the fathers of the computer.]

Conceived in 1937 by Iowa State College physics professor John Atanasoff, and built with the assistance of graduate student Clifford Berry, the machine was not programmable, being designed only to solve systems of linear equations. The computer did employ parallel computation. A 1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC computer derived from the Atanasoff–Berry Computer.

The first program-controlled computer was invented by Konrad Zuse, who built the Z3, an electromechanical computing machine, in 1941. The first programmable electronic computer was the Colossus, built in 1943 by Tommy Flowers.

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation.
Later models added greater sophistication, including complex arithmetic and programmability.

A succession of steadily more powerful and flexible computing devices were constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult. Notable achievements include:

Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working machine featuring binary arithmetic, including floating point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, therefore being the world's first operational computer.

The non-programmable Atanasoff–Berry Computer (commenced in 1937, completed in 1941), which used vacuum tube based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (being approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.

The secret British Colossus computers (1943), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.

The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.

The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general-purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Stored-program architecture

[Figure caption: Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England.]

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored-program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of which was completed in 1948 at the University of Manchester in England, the Manchester Small-Scale Experimental Machine (SSEM or "Baby"). The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described by von Neumann's paper, EDVAC, was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined.
While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted research on ternary computers, devices that operated on a base-three numbering system of -1, 0, and 1 rather than the conventional binary numbering system upon which most computers are based. They designed the Setun, a functional ternary computer, at Moscow State University. The device was put into limited production in the Soviet Union, but was supplanted by the more common binary architecture.

Semiconductors and microprocessors

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s they had been largely replaced by semiconductor transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, and they started to appear as a replacement for mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household. Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

Chinese translation (excerpt): The History of Computing. Main article: history of computing hardware. The first recorded use of the word "computer" was in 1613, referring to a person who carried out calculations, and the word continued with that meaning until the middle of the 20th century.


Foreign original

The Origins of C++: A Little History

Computer technology has evolved at an amazing rate over the past few decades. Today a notebook computer can compute faster and store more information than the mainframe computers of the 1960s. Computer languages have evolved, too. The changes may not be as dramatic, but they are important. Bigger, more powerful computers spawn bigger, more complex programs, which, in turn, raise new problems in program management and maintenance.

In the 1970s, languages such as C and Pascal helped usher in an era of structured programming, a philosophy that brought some order and discipline to a field badly in need of these qualities. Besides providing the tools for structured programming, C also produced compact, fast-running programs, along with the ability to address hardware matters, such as managing communication ports and disk drives. These gifts helped make C the dominant programming language in the 1980s. Meanwhile, the 1980s witnessed the growth of a new programming paradigm: object-oriented programming, or OOP, as embodied in languages such as Smalltalk and C++. Let's examine these two developments, C and OOP, a bit more closely.

The Mechanics of Creating a Program

Suppose you've written a C++ program. How do you get it running? The exact steps depend on your computer environment and the particular C++ compiler you use, but they should resemble the following steps:

1. Use a text editor of some sort to write the program and save it in a file. This file constitutes the source code for your program.

2. Compile the source code. This means running a program that translates the source code to the internal language, called machine language, used by the host computer. The file containing the translated program is the object code for your program.

3. Link the object code with additional code. For example, C++ programs normally use libraries. A C++ library contains object code for a collection of computer routines, called functions, to perform tasks such as displaying information onscreen or calculating the square root of a number. Linking combines your object code with object code for the functions you use and with some standard startup code to produce a runtime version of your program. The file containing this final product is called the executable code.
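As a concrete illustration of these three steps (not part of the original text), here is a minimal program together with the commands one common toolchain, GCC, would use. The file names are invented for the example.

```cpp
// hello.cpp -- step 1: source code written with a text editor.
#include <iostream>
#include <cmath>   // we call a library function, std::sqrt()

int main() {
    // The library object code for std::sqrt and for iostream is combined
    // with our object code at link time, along with the startup code.
    std::cout << "square root of 2 is " << std::sqrt(2.0) << "\n";
    return 0;
}

// Step 2 (compile to object code):  g++ -c hello.cpp      -> produces hello.o
// Step 3 (link into an executable): g++ hello.o -o hello  -> run with ./hello
```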
CAsyncSocket

A CAsyncSocket object represents a Windows Socket, an endpoint of network communication. Class CAsyncSocket encapsulates the Windows Sockets API, providing an object-oriented abstraction for programmers who want to use Windows Sockets in conjunction with MFC.

This class is based on the assumption that you understand network communications. You are responsible for handling blocking, byte-order differences, and conversions between Unicode and multibyte character set (MBCS) strings. If you want a more convenient interface that manages these issues for you, see class CSocket.

To use a CAsyncSocket object, call its constructor, then call the Create function to create the underlying socket handle (type SOCKET), except on accepted sockets. For a server socket call the Listen member function, and for a client socket call the Connect member function. The server socket should call the Accept function upon receiving a connection request. Use the remaining CAsyncSocket functions to carry out communications between sockets. Upon completion, destroy the CAsyncSocket object if it was created on the heap; the destructor automatically calls the Close function.

CSocket

Class CSocket derives from CAsyncSocket and inherits its encapsulation of the Windows Sockets API. A CSocket object represents a higher level of abstraction of the Windows Sockets API than that of a CAsyncSocket object. CSocket works with classes CSocketFile and CArchive to manage the sending and receiving of data.

A CSocket object also provides blocking, which is essential to the synchronous operation of CArchive. Blocking functions, such as Receive, Send, ReceiveFrom, SendTo, and Accept (all inherited from CAsyncSocket), do not return a WSAEWOULDBLOCK error in CSocket. Instead, these functions wait until the operation completes. Additionally, the original call will terminate with the error WSAEINTR if CancelBlockingCall is called while one of these functions is blocking.

To use a CSocket object, call the constructor, then call Create to create the underlying SOCKET handle (type SOCKET). The default parameters of Create create a stream socket, but if you are not using the socket with a CArchive object, you can specify a parameter to create a datagram socket instead, or bind to a specific port to create a server socket. Connect to a client socket using Connect on the client side and Accept on the server side. Then create a CSocketFile object and associate it to the CSocket object in the CSocketFile constructor. Next, create a CArchive object for sending and one for receiving data (as needed), then associate them with the CSocketFile object in the CArchive constructor. When communications are complete, destroy the CArchive, CSocketFile, and CSocket objects.

CSocketFile

A CSocketFile object is a CFile object used for sending and receiving data across a network via Windows Sockets. You can attach the CSocketFile object to a CSocket object for this purpose. You also can, and usually do, attach the CSocketFile object to a CArchive object to simplify sending and receiving data using MFC serialization.

To serialize (send) data, you insert it into the archive, which calls CSocketFile member functions to write data to the CSocket object. To deserialize (receive) data, you extract from the archive. This causes the archive to call CSocketFile member functions to read data from the CSocket object.

Tip: Besides using CSocketFile as described here, you can use it as a stand-alone file object, just as you can with CFile, its base class. You can also use CSocketFile with any archive-based MFC serialization functions. Because CSocketFile does not support all of CFile's functionality, some default MFC serialize functions are not compatible with CSocketFile. This is particularly true of the CEditView class. You should not try to serialize CEditView data through a CArchive object attached to a CSocketFile object using CEditView::SerializeRaw; use CEditView::Serialize instead. The SerializeRaw function expects the file object to have functions, such as Seek, that CSocketFile does not have.
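The call sequence described above is easier to follow in code. The sketch below is a minimal, hypothetical client-side example, assuming an MFC project in which AfxSocketInit has already been called (for example in InitInstance); the host name, port, and message are invented for the illustration, and error handling is reduced to simple checks.

```cpp
// Hypothetical sketch of the client-side call sequence described above.
#include <afxwin.h>
#include <afxsock.h>

BOOL SendAndReceiveOneString()
{
    CSocket sock;
    if (!sock.Create())                          // create the underlying SOCKET
        return FALSE;
    if (!sock.Connect(_T("example.com"), 7000))  // client side: Connect
        return FALSE;

    CSocketFile file(&sock);                  // attach the file to the socket
    CArchive arSend(&file, CArchive::store);  // archive for sending
    CArchive arRecv(&file, CArchive::load);   // archive for receiving

    arSend << CString(_T("hello"));           // serialize (send) a string
    arSend.Flush();

    CString reply;
    arRecv >> reply;                          // deserialize (receive) a reply

    // On scope exit the objects are destroyed in reverse order of creation:
    // the archives, then the CSocketFile, then the CSocket (which closes).
    return TRUE;
}
```

A server would instead call Listen on one CSocket and Accept the incoming connection into a second CSocket before attaching the CSocketFile and archives to that accepted socket.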
Using the MFC Source Files

The Microsoft Foundation Class Library (MFC) supplies full source code. Header files (.H) are in the MFC\Include directory; implementation files (.Cpp) are in the MFC\Src directory.

Note: The MFC\Src directory contains a makefile you can use with NMake to build MFC library versions, including a browse version. A browse version of MFC is useful for tracing through the calling structure of MFC itself. The file Readme.Txt in that directory explains how to use this makefile.

This family of articles explains the conventions that MFC uses to comment the various parts of each class, what these comments mean, and what you should expect to find in each section. ClassWizard and AppWizard use similar conventions for the classes they create for you, and you will probably find these conventions useful for your own code.

You might be familiar with the public, protected, and private C++ keywords. When looking at the MFC header files, you'll find that each class may have several of each of these. For example, public member variables and functions might be under more than one public keyword. This is because MFC separates member variables and functions based on their use, not by the type of access allowed. MFC uses private sparingly; even items considered implementation details are generally protected and many times are public. Although access to the implementation details is discouraged, MFC leaves the decision to you.

Chinese translation (excerpt): The Origins of C++: A Little History. Over the past few decades, computer technology has developed at an astonishing rate.
