Department of Computer Science Foreign Literature Translation: The History of Computing


Foreign Literature Translation: Computers


Original text: Computers

The modern world of high technology could not have come about except for the development of the computer. Different types and sizes of computers find uses throughout society in the storage and handling of data, from secret governmental files to banking transactions to private household accounts. Computers have opened up a new era in manufacturing through the techniques of automation, and they have enhanced modern communication systems. They are essential tools in almost every field of research and applied technology, from constructing models of the universe to producing tomorrow's weather reports, and their use has in itself opened up new areas of conjecture. Database services and computer networks make available a great variety of information sources. The same advanced techniques also make possible invasions of privacy and of restricted information sources, and computer crime has become one of the many risks that society must face if it is to enjoy the benefits of modern technology.

A computer is an electronic device that can receive a set of instructions, or program, and then carry out this program by performing calculations on numerical data or by compiling and correlating other forms of information. The main types of computers are the microcomputer, minicomputer, mainframe computer, and supercomputer. Microminiaturization, the effort to compress more circuit elements into smaller and smaller chip space, has become the major trend in computer development. Besides, researchers are trying to develop more powerful and more advanced computers.

Users work with the computer through the operating system rather than operating directly on the hardware; the operating system is the bridge between the user and the machine. Every general-purpose computer must have an operating system to run other programs. Operating systems perform basic tasks and provide a software platform, and the choice of operating system determines to a great extent which applications can be run. The operating system is therefore very important.

The operating system is the system software in charge of controlling the computer's resources. Specifically, it is the most fundamental and most important part of the computer's software system: it manages and controls all the hardware and software resources of the system, makes the various resources cooperate and work in concert, brings their functions into full play, and raises the efficiency of the system. At the same time it serves as the interface between the user and the computer system, making the computer convenient to use. The operating system is a large management and control program that comprises five main management functions: process and processor management, job management, storage management, device management, and file management. Classified by function, microcomputer operating systems can be divided into single-user single-tasking systems, single-user multi-tasking systems, and multi-user multi-tasking systems. At present there are several kinds of operating systems in use, such as DOS, OS/2, UNIX, XENIX, Linux, Windows 2000, NetWare, etc.

In order for a computer to perform the required task, it must be given instructions in a language that it understands.
However, the computer's own binary-based language, or machine language, is difficult for humans to use. Therefore, people devised assembly language to shorten and simplify the process. To make computers friendlier to use, programmers invented high-level languages, such as COBOL, FORTRAN, Pascal, and C++, which made computers easier to use. Nowadays, HTML and XML are very useful languages as well.

The term database is often used to describe a collection of related data that is organized into an integrated, sophisticated structure that provides different people with varied access to the same data. A database management system is an extremely complex set of software programs that controls the organization, storage, and retrieval of data in a database. A successful DBMS is often characterized by four principal features: (1) data security and integrity; (2) interactive query; (3) interactive data entry and updating; (4) data independence. Intelligent databases are becoming more popular in that they can provide more validation. Research on new types of database systems is under way.

Computer Science Foreign Literature and Translation


Microsoft Visual Studio

Visual Studio is a development environment from Microsoft. It can be used to create Windows applications and web applications for the Windows platform, as well as web services, smart-device applications, and Office add-ins.

Visual Studio is an integrated development environment (IDE) from Microsoft. It can be used to develop console and graphical user interface applications, along with Windows Forms applications, web sites, web applications, and web services, in both native code and managed code, for platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework, and Microsoft Silverlight.

Visual Studio includes a code editor supporting IntelliSense and code refactoring.

The integrated debugger works both as a source-level debugger and as a machine-level debugger.

Other built-in tools include a forms designer for building GUI applications, a web designer, a class designer, and a database schema designer.

It accepts plug-ins that enhance functionality at almost every level, including adding support for source-control systems (such as Subversion and Visual SourceSafe) and adding new toolsets such as editors and visual designers for domain-specific languages, or tools for other aspects of the software development lifecycle (for example Team Explorer, the Team Foundation Server client).

Visual Studio supports different programming languages by means of language services, which allow the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists.

Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and F# (as of Visual Studio 2010). Support for other languages, such as M, Python, and Ruby, can be added by installing language services separately.

It also supports XML/XSLT, HTML/XHTML, JavaScript, and CSS. Language-specific versions of Visual Studio also exist for particular users: Microsoft Visual Basic, Visual J#, Visual C#, and Visual C++.

Computer Science Foreign Literature Translation: The Origins of C++ (A Little History)


Original text: The Origins of C++: A Little History

Computer technology has evolved at an amazing rate over the past few decades. Today a notebook computer can compute faster and store more information than the mainframe computers of the 1960s. Computer languages have evolved, too. The changes may not be as dramatic, but they are important. Bigger, more powerful computers spawn bigger, more complex programs, which, in turn, raise new problems in program management and maintenance.

In the 1970s, languages such as C and Pascal helped usher in an era of structured programming, a philosophy that brought some order and discipline to a field badly in need of these qualities. Besides providing the tools for structured programming, C also produced compact, fast-running programs, along with the ability to address hardware matters, such as managing communication ports and disk drives. These gifts helped make C the dominant programming language in the 1980s. Meanwhile, the 1980s witnessed the growth of a new programming paradigm: object-oriented programming, or OOP, as embodied in languages such as Smalltalk and C++. Let's examine C and OOP a bit more closely.

The Mechanics of Creating a Program

Suppose you've written a C++ program. How do you get it running? The exact steps depend on your computer environment and the particular C++ compiler you use, but they should resemble the following steps:

1. Use a text editor of some sort to write the program and save it in a file. This file constitutes the source code for your program.

2. Compile the source code. This means running a program that translates the source code to the internal language, called machine language, used by the host computer. The file containing the translated program is the object code for your program.

3. Link the object code with additional code. For example, C++ programs normally use libraries. A C++ library contains object code for a collection of computer routines, called functions, to perform tasks such as displaying information onscreen or calculating the square root of a number. Linking combines your object code with object code for the functions you use and with some standard startup code to produce a runtime version of your program. The file containing this final product is called the executable code.
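As a concrete illustration of these three steps, here is a minimal sketch. The file name and the g++ commands are assumptions for a typical GNU toolchain, not part of the original text:

// hello.cpp -- step 1: source code, written with any text editor.
//
// Step 2 (compile):  g++ -c hello.cpp      produces the object file hello.o
// Step 3 (link):     g++ hello.o -o hello  combines hello.o with library
//                                          and startup code into the executable
#include <iostream>  // library whose object code the linker pulls in

int main() {
    std::cout << "Hello, C++!" << std::endl;  // calls a library routine
    return 0;
}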
CAsyncSocket

A CAsyncSocket object represents a Windows Socket, an endpoint of network communication. Class CAsyncSocket encapsulates the Windows Sockets API, providing an object-oriented abstraction for programmers who want to use Windows Sockets in conjunction with MFC. This class is based on the assumption that you understand network communications: you are responsible for handling blocking, byte-order differences, and conversions between Unicode and multibyte character set (MBCS) strings. If you want a more convenient interface that manages these issues for you, see class CSocket.

To use a CAsyncSocket object, call its constructor, then call the Create function to create the underlying socket handle (type SOCKET), except on accepted sockets. For a server socket call the Listen member function, and for a client socket call the Connect member function. The server socket should call the Accept function upon receiving a connection request. Use the remaining CAsyncSocket functions to carry out communications between sockets. Upon completion, destroy the CAsyncSocket object if it was created on the heap; the destructor automatically calls the Close function.

CSocket

Class CSocket derives from CAsyncSocket and inherits its encapsulation of the Windows Sockets API. A CSocket object represents a higher level of abstraction of the Windows Sockets API than that of a CAsyncSocket object. CSocket works with classes CSocketFile and CArchive to manage the sending and receiving of data. A CSocket object also provides blocking, which is essential to the synchronous operation of CArchive. Blocking functions, such as Receive, Send, ReceiveFrom, SendTo, and Accept (all inherited from CAsyncSocket), do not return a WSAEWOULDBLOCK error in CSocket. Instead, these functions wait until the operation completes. Additionally, the original call will terminate with the error WSAEINTR if CancelBlockingCall is called while one of these functions is blocking.

To use a CSocket object, call the constructor, then call Create to create the underlying socket handle (type SOCKET). The default parameters of Create create a stream socket, but if you are not using the socket with a CArchive object, you can specify a parameter to create a datagram socket instead, or bind to a specific port to create a server socket. Connect to a client socket using Connect on the client side and Accept on the server side. Then create a CSocketFile object and associate it to the CSocket object in the CSocketFile constructor. Next, create a CArchive object for sending and one for receiving data (as needed), then associate them with the CSocketFile object in the CArchive constructor. When communications are complete, destroy the CArchive, CSocketFile, and CSocket objects.

CSocketFile

A CSocketFile object is a CFile object used for sending and receiving data across a network via Windows Sockets. You can attach the CSocketFile object to a CSocket object for this purpose. You also can, and usually do, attach the CSocketFile object to a CArchive object to simplify sending and receiving data using MFC serialization. To serialize (send) data, you insert it into the archive, which calls CSocketFile member functions to write data to the CSocket object. To deserialize (receive) data, you extract from the archive. This causes the archive to call CSocketFile member functions to read data from the CSocket object.

Tip: Besides using CSocketFile as described here, you can use it as a stand-alone file object, just as you can with CFile, its base class. You can also use CSocketFile with any archive-based MFC serialization functions. Because CSocketFile does not support all of CFile's functionality, some default MFC serialize functions are not compatible with CSocketFile. This is particularly true of the CEditView class. You should not try to serialize CEditView data through a CArchive object attached to a CSocketFile object using CEditView::SerializeRaw; use CEditView::Serialize instead. The SerializeRaw function expects the file object to have functions, such as Seek, that CSocketFile does not have.
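A minimal client-side sketch of the CSocket/CSocketFile/CArchive pattern just described; it assumes an MFC application in which AfxSocketInit has already been called, and the address, port, and message are illustrative only:

// Hedged sketch of the pattern above, not a complete application.
#include <afxsock.h>

void SendGreeting()
{
    CSocket sock;
    sock.Create();                                // create the underlying SOCKET handle
    if (sock.Connect(_T("127.0.0.1"), 9000))      // client side: connect to a server
    {
        CSocketFile file(&sock);                  // attach the socket to a CSocketFile
        CArchive arOut(&file, CArchive::store);   // archive for sending
        CString msg = _T("hello");
        arOut << msg;                             // serialize (send) the data
        arOut.Flush();
    }                                             // CArchive and CSocketFile destroyed here
    sock.Close();                                 // the destructor would also close it
}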
Using the MFC Source Files

The Microsoft Foundation Class Library (MFC) supplies full source code. Header files (.h) are in the MFC\Include directory; implementation files (.cpp) are in the MFC\Src directory.

Note: The MFC\Src directory contains a makefile you can use with NMAKE to build MFC library versions, including a browse version. A browse version of MFC is useful for tracing through the calling structure of MFC itself. The file Readme.txt in that directory explains how to use this makefile.

This family of articles explains the conventions that MFC uses to comment the various parts of each class, what these comments mean, and what you should expect to find in each section. ClassWizard and AppWizard use similar conventions for the classes they create for you, and you will probably find these conventions useful for your own code.

You might be familiar with the public, protected, and private C++ keywords. When looking at the MFC header files, you'll find that each class may have several of each of these. For example, public member variables and functions might be under more than one public keyword. This is because MFC separates member variables and functions based on their use, not by the type of access allowed. MFC uses private sparingly; even items considered implementation details are generally protected, and many times they are public. Although access to the implementation details is discouraged, MFC leaves the decision to you.

Computer Science and Technology Foreign Literature Translation: Databases


Original text: Database

1.1 Database concept

The database concept has evolved since the 1960s to ease increasing difficulties in designing, building, and maintaining complex information systems (typically with many concurrent end-users, and with a large amount of diverse data). It has evolved together with database management systems (DBMSs), which enable the effective handling of databases. Though the terms database and DBMS define different entities, they are inseparable: a database's properties are determined by its supporting DBMS and vice-versa. The Oxford English Dictionary cites a 1962 technical report as the first to use the term "data-base."

With the progress in technology in the areas of processors, computer memory, computer storage, and computer networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown by orders of magnitude. For decades it has been unlikely that a complex information system could be built effectively without a proper database supported by a DBMS. The utilization of databases is now so widespread that virtually every technology and product relies on databases and DBMSs for its development and commercialization, or may even have them embedded in it. Organizations and companies, from small to large, also depend heavily on databases for their operations.

No widely accepted exact definition exists for DBMS. However, a system needs to provide considerable functionality to qualify as a DBMS, and its supported data collection needs to meet respective usability requirements (broadly defined by the requirements below) to qualify as a database. Thus, a database and its supporting DBMS are defined here by a set of general requirements listed below. Virtually all existing mature DBMS products meet these requirements to a great extent, while less mature ones either meet them or converge to meet them.

1.2 Evolution of database and DBMS technology

The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing.

In the earliest database systems, efficiency was perhaps the primary concern, but it was already recognized that there were other important objectives. One of the key aims was to make the data independent of the logic of application programs, so that the same data could be made available to different applications.

The first generation of database systems were navigational:[2] applications typically accessed data by following pointers from one record to another. The two main data models at this time were the hierarchical model, epitomized by IBM's IMS system, and the Codasyl model (network model), implemented in a number of products such as IDMS.

The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. This was considered necessary to allow the content of the database to evolve without constant rewriting of applications. Relational systems placed heavy demands on processing resources, and it was not until the mid-1980s that computing hardware became powerful enough to allow them to be widely deployed. By the early 1990s, however, relational systems were dominant for all large-scale data processing applications, and they remain dominant today (2012) except in niche areas.
The dominant database language is standard SQL for the relational model, which has influenced database languages for other data models as well.

Because the relational model emphasizes search rather than navigation, it does not make relationships between different entities explicit in the form of pointers, but represents them instead using primary keys and foreign keys. While this is a good basis for a query language, it is less well suited as a modeling language. For this reason a different model, the entity-relationship model, which emerged shortly afterwards (1976), gained popularity for database design.
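As a toy illustration of keys standing in for pointers, consider the following sketch; the tables, column names, and data are invented for illustration, and a real DBMS would of course use indexes rather than nested loops:

// In the relational model, an Employee refers to its Department by a key
// value (deptId), not by holding a pointer to the Department record.
#include <iostream>
#include <string>
#include <vector>

struct Department { int id; std::string name; };              // primary key: id
struct Employee   { int id; std::string name; int deptId; };  // foreign key: deptId

int main() {
    std::vector<Department> departments{{1, "Finance"}, {2, "HR"}};
    std::vector<Employee>   employees{{100, "Alice", 1}, {101, "Bob", 2}};

    // A "join" searches by content (matching key values), not by following links.
    for (const auto& e : employees)
        for (const auto& d : departments)
            if (e.deptId == d.id)
                std::cout << e.name << " works in " << d.name << '\n';
    return 0;
}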
In the period since the 1970s, database technology has kept pace with the increasing resources becoming available from the computing platform: notably the rapid increase in the capacity and speed (and reduction in price) of disk storage, and the increasing capacity of main memory. This has enabled ever larger databases and higher throughputs to be achieved.

The rigidity of the relational model, in which all data is held in tables with a fixed structure of rows and columns, has increasingly been seen as a limitation when handling information that is richer or more varied in structure than the traditional 'ledger-book' data of corporate information systems: for example, document databases, engineering databases, multimedia databases, or databases used in the molecular sciences. Various attempts have been made to address this problem, many of them gathering under banners such as post-relational or NoSQL. Two developments of note are the object database and the XML database. The vendors of relational databases have fought off competition from these newer models by extending the capabilities of their own products to support a wider variety of data types.

1.3 General-purpose DBMS

A DBMS has evolved into a complex software system, and its development typically requires thousands of person-years of development effort. Some general-purpose DBMSs, like Oracle, Microsoft SQL Server, and IBM DB2, have been undergoing upgrades for thirty years or more. General-purpose DBMSs aim to satisfy as many applications as possible, which typically makes them even more complex than special-purpose databases. However, the fact that they can be used "off the shelf", as well as their cost being amortized over many applications and instances, makes them an attractive alternative (versus one-time development) whenever they meet an application's requirements.

Though attractive in many cases, a general-purpose DBMS is not always the optimal solution. When certain applications are pervasive, with many operating instances, each with many users, a general-purpose DBMS may introduce unnecessary overhead and too large a "footprint" (too large an amount of unnecessary, unutilized software code). Such applications usually justify dedicated development. Typical examples are email systems, though they need to possess certain DBMS properties: email systems are built in a way that optimizes the handling and managing of email messages, and do not need significant portions of general-purpose DBMS functionality.

1.4 Database machines and appliances

In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at lower cost. Examples were the IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine. Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).

1.5 Database research

Database research has been an active and diverse area, with many specializations, carried out since the early days of dealing with the database concept in the 1960s. It has strong ties with database technology and DBMS products. Database research has taken place at the research and development groups of companies (notably at IBM Research, which contributed technologies and ideas to virtually every DBMS existing today), at research institutes, and in academia. Research has been done through both theory and prototypes. The interaction between research and database-related product development has been very productive for the database area, and many related key concepts and technologies emerged from it. Notable examples are the relational and entity-relationship models, the atomic transaction concept and related concurrency control techniques, query languages and query optimization methods, RAID, and more. Research has provided deep insight into virtually all aspects of databases, though not always pragmatically or effectively (and it cannot and should not always be so: research is exploratory in nature and does not always lead to accepted or useful ideas). Ultimately, market forces and real needs determine the selection of problem solutions and related technologies, also among those proposed by research; however, occasionally it is not the best and most elegant solution that wins (e.g., SQL). Throughout their history, DBMSs and their databases have to a great extent been the outcome of such research, while real product requirements and challenges triggered database research directions and sub-areas.

The database research area has several notable dedicated academic journals (e.g., ACM Transactions on Database Systems (TODS), Data and Knowledge Engineering (DKE), and more) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE, and more), as well as an active and quite heterogeneous (subject-wise) research community all over the world.

1.6 Database architecture

Database architecture (to be distinguished from DBMS architecture; see below) may be viewed, to some extent, as an extension of data modeling. It is used to conveniently answer the requirements of different end-users of the same database, as well as for other benefits. For example, the financial department of a company needs the payment details of all employees as part of the company's expenses, but not the many other details about employees that are the interest of the human resources department. Thus different departments need different views of the company's database; both views include the employees' payments, possibly at different levels of detail (and presented in different visual forms). To meet such requirements effectively, database architecture consists of three levels: external, conceptual, and internal.
Clearly separating the three levels was a major feature of the relational database model implementations that dominate 21st-century databases.[13]

The external level defines how each end-user type understands the organization of its respective relevant data in the database, i.e., the different needed end-user views. A single database can have any number of views at the external level.

The conceptual level unifies the various external views into a coherent whole, global view.[13] It provides the common denominator of all the external views. It comprises all the generic data needed by the end-users, i.e., all the data from which any view may be derived or computed. It is expressed in the simplest possible way in terms of such generic data and comprises the backbone of the database. It is outside the scope of the various database end-users; it serves database application developers and is defined by the database administrators who build the database.

The internal level (or physical level) is in fact part of the database implementation inside a DBMS (see the Implementation section below). It is concerned with cost, performance, scalability, and other operational matters. It deals with the storage layout of the conceptual level, provides supporting storage structures such as indexes to enhance performance, and occasionally stores the data of individual views (materialized views), computed from generic data, if a performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize the overall database usage by all its end-users according to the database goals and priorities.

All three levels are maintained and updated according to changing needs by database administrators, who often also participate in the database design.

The above three-level database architecture also relates to, and is motivated by, the concept of data independence, which has long been described as a desired database property and was one of the major initial driving forces of the relational model. In the context of this architecture it means that changes made at a certain level do not affect definitions and software developed against higher-level interfaces, and are incorporated at the higher level automatically. For example, changes in the internal level do not affect application programs written using conceptual-level interfaces, which saves substantial change work that would otherwise be needed.

In summary, the conceptual level is a level of indirection between internal and external. On one hand it provides a common view of the database, independent of the different external view structures, and on the other hand it is uncomplicated by details of how the data is stored or managed (the internal level). In principle every level, and even every external view, can be presented by a different data model. In practice a given DBMS usually uses the same data model for both the external and the conceptual levels (e.g., the relational model).
The internal level, which is hidden inside the DBMS and depends on its implementation (see the Implementation section below), requires a different level of detail and uses its own data structure types, typically different in nature from the structures of the external and conceptual levels, which are exposed to DBMS users (e.g., the data models above): while the external and conceptual levels are focused on and serve DBMS users, the concern of the internal level is effective implementation details.

Computer Science Foreign Literature Translation: A Brief History of Microcomputer Development


Appendix: Foreign Literature and Translation

Progress in Computers

The first stored-program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted, and the term logic family came into use. Those who have worked in the computer field will remember TTL, ECL, and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognized as an activity in its own right. I remember that we had some difficulty in organizing a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased, and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high-level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organization, a business or a university was able to have a minicomputer for each major department.

Before long, minicomputers began to spread and become more powerful.
The world was hungry for computing power, and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on, people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level, and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid-1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks. Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done, and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer. Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one.
They refer to such a roomful by the attractive name of computer farm.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed. In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognized need for a 64-bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication, which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost.
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

Shortage of Electrons

Although a shortage of electrons has not so far appeared as an obvious limitation, in the long term it may become so. Perhaps this is where the exploitation of non-conventional CMOS will lead us. However, some interesting work has been done, notably by Haroon Ahmed and his team working in the Cavendish Laboratory, on the direct development of structures in which a single electron more or less makes the difference between a zero and a one. However, very little progress has been made towards practical devices that could lead to the construction of a computer. Even with exceptionally good luck, many tens of years must inevitably elapse before a working computer based on single-electron effects can be contemplated.

Computer Science Chinese-English Translation (Foreign Literature Translation)


English reference and translation

Linux: The Operating System of the Network Era

Although, for a lot of people, serving as the main operating system of the huge workstation cluster that produced the special effects for 'Titanic' would already count as a full display of talent, for Linux this is only one piece of news among many. Recently the manufacturers concerned have announced growing support for Linux, and users' enthusiasm for Linux is running higher than ever. What glamour, then, does this free operating system, little more than seven years old, possess that it wins the favor of the masses of users and of such important software and hardware manufacturers as Oracle, Informix, HP, Sybase, Corel, Intel, Netscape, Dell, etc.?

1. The background and characteristics of Linux

Linux is a kind of "free software": free means that users can obtain the program and its source code freely, and can use them freely, including revising or copying them. It is a product of the network era: numerous technical staff completed its research and development together through the Internet, countless users participated in testing it and removing faults, and users can conveniently add extension functions that they make themselves. As the most outstanding piece of free software, Linux has the following characteristics:

(1) It fully follows the POSIX standard and is an extended network operating system supporting all AT&T and BSD Unix characteristics. Because it inherits Unix's outstanding design philosophy and has a clean, robust, efficient, and stable kernel, with all of its key code written by Linus Torvalds and other outstanding programmers, without any Unix code from AT&T or Berkeley, Linux is not Unix, but Linux and Unix are fully compatible.

(2) It is a true multi-tasking, multi-user system with built-in network support, and can link seamlessly with NetWare, Windows NT, OS/2, Unix, etc. In comparative tests its networking is rated fastest among various kinds of Unix. It supports many kinds of file systems at the same time, such as FAT16, FAT32, NTFS, Ext2FS, ISO9660, etc.

(3) It can run on many kinds of hardware platforms, including processors such as Alpha, Sun SPARC, PowerPC, MIPS, etc., and support for various kinds of new peripheral hardware can be obtained rapidly from the numerous programmers distributed around the globe.

(4) Its hardware requirements are low, so very good performance can be obtained on lower-grade machines; what deserves particular mention is Linux's outstanding stability, with uptimes often counted in years.

2. Main applications of Linux

At present, the application of Linux mainly includes:

(1) Internet/Intranet: this is the area where Linux is used most at present. It can offer all Internet services, including Web servers, FTP servers, Gopher servers, SMTP/POP3 mail servers, Proxy/Cache servers, DNS servers, etc. The Linux kernel supports IP aliasing, PPP, and IP tunneling; these functions can be used for setting up virtual hosts, virtual services, VPNs (virtual private networks), etc.
The Apache web server runs mainly on Linux; its market share in 1998 was 49%, far exceeding the combined share of such big companies as Microsoft and Netscape.

(2) Because Linux has outstanding networking ability, it can be used in large-scale distributed computing, for instance animation production, scientific computation, and database and file servers.

(3) As a full implementation of Unix that can run on low-end platforms, it is applied extensively in teaching and research work at universities and colleges of all levels; the Mexican government, for example, has already announced that primary and middle schools across the whole country will deploy Linux and offer Internet service to students.

(4) Desktop and office applications. The number of users in this respect at present is still far below that of Microsoft Windows. The reason lies not merely in the fact that the quantity of Linux desktop application software is far below that of Windows; because of the characteristics of free software, it has almost no advertising support (even though the functionality of StarOffice is not second to MS Office, actually few people know of it).

3. Can Linux become a major operating system?

In the face of pressure from users that strengthens day by day, more and more commercial companies are porting their applications to the Linux platform. The comparatively important events in 1998 were as follows: ① Compaq and HP decided, at the request of users, to pre-install Linux on their servers, and IBM and Dell promised to offer customized Linux systems to users too. ② Lotus announced that the next edition of Notes would include a special-purpose Linux edition. ③ Corel ported its famous WordPerfect to Linux and issued it free of charge; Corel also plans to move its other graphics-processing products to the Linux platform completely. ④ The main database producers Sybase, Informix, Oracle, CA, and IBM have already ported their own database products to Linux, or have finished beta editions; among them, Oracle and Informix also offer technical support for their products.

4. The gratifying thing is that some farsighted domestic corporations have already begun to try hard to change this current situation. Stone Co. not long ago announced that it would invest a huge sum of money to develop an Internet/Intranet solution on the Linux platform, launch Stone's system-integration business with this as the core, and at the same time set up a nationwide Linux technical support organization, taking the lead in promoting the application and development of free software in China. In addition, other domestic computer companies are likewise devoted to popularizing Linux-related software and hardware application systems. It can be believed that, as understanding of Linux deepens, more and more domestic enterprises will join the ranks of Linux users, and more software will be ported to the Linux platform. Meanwhile, domestic universities should regard Linux as the original version, upgrade their existing Unix course content, start from analysing the source code and revising the kernel, train a large number of senior Linux talents, and improve our country's own operating system.
Only by really mastering the operating system can the software industry of our country be rid of its present passive state of sedulous imitation and of being led by the nose by others, and create the conditions for fundamentally revitalizing our software industry.

Computer Science Foreign Literature Translation (Complete)


Graduation Project (Thesis) Foreign Literature Translation
Major: Computer Science and Technology
Name: Wang Chengming
Student ID: 06120186
Source: The History of the Internet
Attachments: 1. Original text; 2. Translation

Attachment 1: Original text

The History of the Internet

The Beginning - ARPAnet

The Internet started as a project by the US government. The object of the project was to create a means of communications between long distance points, in the event of a nationwide emergency or, more specifically, nuclear war. The project was called ARPAnet, and it is what the Internet started as. Funded specifically for military communication, the engineers responsible for ARPAnet had no idea of the possibilities of an "Internet."

By definition, an 'Internet' is four or more computers connected by a network. ARPAnet achieved its network by using a protocol called TCP/IP. The basic idea behind this protocol was that if information sent over a network failed to get through on one route, it would find another route to work with, as well as establishing a means for one computer to "talk" to another computer, regardless of whether it was a PC or a Macintosh.

By the 80's, ARPAnet, just years away from becoming the more well known Internet, had 200 computers. The Defense Department, satisfied with ARPAnet's results, decided to fully adopt it into service, and connected many military computers and resources into the network. ARPAnet then had 562 computers on its network. By the year 1984, it had over 1,000 computers on its network.

In 1986 ARPAnet (supposedly) shut down, but only the organization shut down; the existing networks still existed between the more than 1,000 computers. It shut down due to a failed link-up with NSF, which wanted to connect its five country-wide supercomputers into ARPAnet. With the funding of NSF, new high-speed lines were successfully installed at line speeds of 56k (a normal modem nowadays) through telephone lines in 1988. By that time, there were 28,174 computers on the (by then so named) Internet. In 1989 there were 80,000 computers on it; by 1990, there were 290,000. Another network was built to support the incredible number of people joining. It was constructed in 1992.

Today - The Internet

Today, the Internet has become one of the most important technological advancements in the history of humanity. Everyone wants to get 'on line' to experience the wealth of information of the Internet. Millions of people now use the Internet, and it's predicted that by the year 2003 every single person on the planet will have Internet access. The Internet has truly become a way of life in our time and era, and is evolving so quickly it's hard to determine where it will go next, as computer and network technology improve every day.

HOW IT WORKS:

It's a standard thing: people using the Internet, shopping, playing games, conversing in virtual Internet environments. The Internet is not a 'thing' itself. The Internet cannot just "crash." It functions the same way as the telephone system, only there is no Internet company that runs the Internet. The Internet is a collection of millions of computers that are all connected to each other, or have the means to connect to each other. The Internet is just like an office network, only it has millions of computers connected to it.

The main thing about how the Internet works is communication. How does a computer in Houston know how to access data on a computer in Tokyo to view a webpage? Internet communication, communication among computers connected to the Internet, is based on a language. This language is called TCP/IP.
TCP/IP establishes a language for a computer to access and transmit data over the Internet system. But TCP/IP assumes that there is a physical connection between one computer and another. This is not usually the case. There would have to be a network wire that went to every computer connected to the Internet, but that would make the Internet impossible to access.

The physical connection that is required is established by way of modems, phone lines, and other modem cable connections (like cable modems or DSL). Modems on computers read and transmit data over established lines, which could be phone lines or data lines. The actual hard-core connections are established among computers called routers. A router is a computer that serves as a traffic controller for information.

To explain this better, let's look at how a standard computer might view a webpage.

1. The user's computer dials into an Internet Service Provider (ISP). The ISP might in turn be connected to another ISP, or have a straight connection into the Internet backbone.

2. The user launches a web browser like Netscape or Internet Explorer and types in an internet location to go to.

3. Here's where the tricky part comes in. First, the computer sends data about its data request to a router. A router is a very high-speed, powerful computer running special software. The collection of routers in the world makes what is called a "backbone," on which all the data on the Internet is transferred. The backbone presently operates at a speed of several gigabytes per second. Such a speed compared to a normal modem is like comparing the heat of the sun to the heat of an ice cube. Routers handle data that is going back and forth. A router puts small chunks of data into packages called packets, which function similarly to envelopes. So, when the request for the webpage goes through, it uses TCP/IP protocols to tell the router what to do with the data, where it's going, and overall where the user wants to go.

4. The router sends these packets to other routers, eventually leading to the target computer. It's like whisper down the lane (only the information remains intact).

5. When the information reaches the target web server, the web server then begins to send the web page back. A web server is the computer where the webpage is stored; it runs a program that handles requests for the webpage and sends the webpage to whoever wants to see it.

6. The webpage is put in packets, sent through routers, and arrives at the user's computer, where the user can view the webpage once it is assembled.

The packets which contain the data also contain special information that lets routers and other computers know how to reassemble the data in the right order.
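The request/response exchange in the steps above can be sketched in a few lines of code. This is a hedged illustration using POSIX sockets on a Unix-like system; the host name and the hand-written HTTP request are assumptions for the example, not part of the original text:

// Minimal sketch of fetching a webpage over TCP/IP.
#include <cstdio>
#include <netdb.h>
#include <unistd.h>
#include <sys/socket.h>

int main() {
    addrinfo hints{}, *res;
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;        // TCP: a reliable, ordered byte stream

    // Steps 1-2: look up the target computer's address (DNS).
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    // Steps 3-4: open a TCP connection; TCP/IP and the routers in between
    // handle splitting the bytes into packets and reassembling them in order.
    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

    // Step 5: ask the web server for a page.
    const char request[] = "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n";
    send(fd, request, sizeof(request) - 1, 0);

    // Step 6: read the reply, which arrives as reassembled packets.
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}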
With millions of web pages and millions of users, using the Internet is not always easy for a beginning user, especially for someone who is not entirely comfortable with using computers. Below you can find tips, tricks, and help on how to use the main services of the Internet.

Before you access webpages, you must have a web browser to actually be able to view the webpages. Most Internet access providers provide you with a web browser in the software they usually give to customers. The fact that you are viewing this page means that you have a web browser. The top two browsers in use are Netscape Communicator and Microsoft Internet Explorer. Netscape can be found at /doc/bedc387343323968011c9268.html and MSIE can be found at /doc/bedc387343323968011c9268.html /ie.

Next you must be familiar with actually using webpages. A webpage is a collection of hyperlinks, images, text, forms, menus, and multimedia. To "navigate" a webpage, simply click the links it provides or follow its own instructions (if it has a form you need to use, it will probably instruct you how to use it). Basically, everything about a webpage is made to be self-explanatory. That is the nature of a webpage, to be easily navigable.

"Oh no! A 404 error! 'Cannot find web page?'" is a common remark made by new web users. Sometimes websites have errors. But an error on a website is not the user's fault, of course. A 404 error means that the page you tried to go to does not exist. This could be because the site is still being constructed and the page hasn't been created yet, or because the site author made a typo in the page. There's nothing much to do about a 404 error except e-mailing the site administrator (of the page you wanted to go to) and telling him/her about the error.

A Javascript error is the result of a programming error in the Javascript code of a website. Not all websites utilize Javascript, but many do. Javascript is different from Java, and most browsers now support Javascript. If you are using an old version of a web browser (Netscape 3.0, for example), you might get Javascript errors because sites utilize Javascript versions that your browser does not support. So, you can try getting a newer version of your web browser.

E-mail stands for Electronic Mail, and that's what it is. E-mail enables people to send letters, and even files and pictures, to each other. To use e-mail, you must have an e-mail client, which is just like a personal post office, since it retrieves and stores e-mail. Secondly, you must have an e-mail account. Most Internet service providers provide free e-mail account(s). Some services offer free e-mail, like Hotmail and Geocities. After configuring your e-mail client with your POP3 and SMTP server address (your e-mail provider will give you that information), you are ready to receive mail.

An attachment is a file sent in a letter. If someone sends you an attachment and you don't know who it is, don't run the file, ever. It could be a virus or some other kind of nasty program. You can't get a virus just by reading e-mail; you'll have to physically execute some form of program for a virus to strike.

A signature is a feature of many e-mail programs. A signature is added to the end of every e-mail you send out. You can put a text graphic, your business information, anything you want.

Imagine that a computer on the Internet is an island in the sea. The sea is filled with millions of islands. This is the Internet. Imagine an island communicates with other islands by sending ships to them and receiving ships. The island has ports to accept and send out ships. A computer on the Internet has access nodes called ports. A port is just a symbolic object that allows the computer to operate on a network (or the Internet). This method is similar to the island/ocean symbolism above.

Telnet refers to accessing ports on a server directly with a text connection. Almost every kind of Internet function, like accessing web pages, "chatting," and e-mailing, is done over a Telnet connection. Telnetting requires a Telnet client.
A telnet program comes with the Windows system, so Windows users can access telnet by typing "telnet" (without the quotes) in the Run dialog. Linux has it built into the command line: telnet. A popular telnet program for Macintosh is NCSA Telnet. Any server software (web page daemon, chat daemon) can be accessed via telnet, although it is not usually meant to be accessed in such a manner. For instance, it is possible to connect directly to a mail server and check your mail by interfacing with the e-mail server software, but it's easier to use an e-mail client (of course).

There are millions of webpages that come from all over the world, yet how will you know what the address of a page you want is? Search engines save the day. A search engine is a very large website that allows you to search its own database of websites. For instance, if you wanted to find a website on dogs, you'd search for "dog" or "dogs" or "dog information." Here are a few search engines:

1. Altavista (/doc/bedc387343323968011c9268.html ) - Web spider & indexed
2. Yahoo (/doc/bedc387343323968011c9268.html ) - Web spider & indexed collection
3. Excite (/doc/bedc387343323968011c9268.html ) - Web spider & indexed
4. Lycos (/doc/bedc387343323968011c9268.html ) - Web spider & indexed
5. Metasearch (/doc/bedc387343323968011c9268.html ) - Multiple search

A web spider is a program used by search engines that goes from page to page, following any link it can possibly find. This means that a search engine can literally map out as much of the Internet as its own time and speed allow for. An indexed collection uses hand-added links. For instance, on Yahoo's site you can click on Computers & the Internet, then on Hardware, then on Modems, etc., and along the way through the sections there are sites available which relate to the section you're in. Metasearch searches many search engines at the same time, finding the top choices from about 10 search engines, making searching a lot more effective. Once you are able to use search engines, you can effectively find the pages you want.

With the arrival of networking and multi-user systems, security has always been on the mind of system developers and system operators. Since the dawn of AT&T and its phone network, hackers have been known by many: hackers who find ways, all the time, of breaking into systems. It used to not be that big of a problem, since networking was limited to big corporate companies or government computers which could afford the necessary computer security.

The biggest problem nowadays is personal information. Why should you be careful while making purchases via a website? Let's look quickly at how the Internet works. The user is transferring credit card information to a webpage. Looks safe, right? Not necessarily. As the user submits the information, it is being streamed through a series of computers that make up the Internet backbone. The information is in little chunks, in packages called packets. Here's the problem: while the information is being transferred through this big backbone, what is preventing a "hacker" from intercepting this data stream at one of the backbone points?

Big Brother is not watching you if you access a web site, but users should be aware of potential threats while transmitting private information. There are methods of enforcing security, like password protection, and most importantly, encryption. Encryption means scrambling data into a code that can only be unscrambled on the "other end."
Browsers like Netscape Communicator and Internet Explorer feature encryption support for making on-line transfers, and some encryption schemes work better than others. One of the best-known systems is DES (the Data Encryption Standard), adopted as a US government standard; it was deemed so difficult to 'crack' that letting it fall into another country's hands was treated as a security risk.

DES uses a single key of information to unlock an entire document. The problem for an attacker is that there are 2^56 possible keys, roughly 72 quadrillion, so the system is highly difficult to break by brute force. One DES-encrypted document was cracked and decoded, but it took the combined effort of some 14,000 computers networked over the Internet quite a while to do it, and most hackers don't have that many resources available.

Attachment 2: Translated Text

The Historical Origins of the Internet: ARPAnet

The Internet was developed by the US government as a project.

Graduation Project (Thesis) Foreign Literature Translation
Department: Computer Information and Technology
Major: Computer Science and Technology
Class:    Name:    Student No.:    Source of the foreign text:
Attachments: 1. Original text; 2. Translation
March 2012

History of computing

Main article: History of computing hardware

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued with the same meaning until the middle of the 20th century. From the end of the 19th century the word began to take on its more familiar meaning: a machine that carries out computations.

Limited-function early computers

(Figure: The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.)

The history of the modern computer begins with two separate technologies, automated calculation and programmability, but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. A few devices are worth mentioning, though. Some mechanical aids to computing were very successful and survived for centuries until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC, a descendant of which won a speed competition against a modern desk calculating machine in Japan in 1946; the slide rule, invented in the 1620s, which was carried on five Apollo space missions, including to the Moon; and arguably the astrolabe and the Antikythera mechanism, an ancient astronomical computer built by the Greeks around 80 BC. The Greek mathematician Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when. This is the essence of programmability.

Around the end of the 10th century, the French monk Gerbert d'Aurillac brought back from Spain the drawings of a machine, invented by the Moors, that answered either Yes or No to the questions it was asked. Again in the 13th century, the monks Albertus Magnus and Roger Bacon built talking androids without any further development.

In 1642, the Renaissance saw the invention of the mechanical calculator, a device that could perform all four arithmetic operations without relying on human intelligence. The mechanical calculator was at the root of the development of computers in two separate ways. Initially, it was in trying to develop more powerful and more flexible calculators that the computer was first theorized by Charles Babbage, and then developed. Secondly, the development of a low-cost electronic calculator, successor to the mechanical calculator, resulted in the development by Intel of the first commercially available microprocessor integrated circuit.

First general-purpose computers

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template, which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine.
Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed; nevertheless his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. This machine was given to the Science Museum in South Kensington in 1910.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of ideas and technologies that would later prove useful in the realization of practical computers had begun to appear: Boolean algebra, the vacuum tube (thermionic valve), punched cards and tape, and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concepts of algorithm and computation with the Turing machine, providing a blueprint for the electronic digital computer. Of his role in the creation of the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine."

(Figure: EDSAC was one of the first computers to implement the stored-program (von Neumann) architecture.)

(Figure: Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.)

The Atanasoff–Berry Computer (ABC) was the world's first electronic digital computer, albeit not programmable, and Atanasoff is considered to be one of the fathers of the computer. Conceived in 1937 by Iowa State College physics professor John Atanasoff, and built with the assistance of graduate student Clifford Berry, the machine was not programmable, being designed only to solve systems of linear equations, but it did employ parallel computation. A 1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC computer derived from the Atanasoff–Berry Computer.

The first program-controlled computer was invented by Konrad Zuse, who built the Z3, an electromechanical computing machine, in 1941. The first programmable electronic computer was the Colossus, built in 1943 by Tommy Flowers.

George Stibitz is internationally recognized as a father of the modern digital computer. While working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he dubbed the "Model K" (for "kitchen table," on which he had assembled it), which was the first to use binary circuits to perform an arithmetic operation.
Later models added greater sophistication, including complex arithmetic and programmability.

A succession of steadily more powerful and flexible computing devices was constructed in the 1930s and 1940s, gradually adding the key features that are seen in modern computers. The use of digital electronics (largely invented by Claude Shannon in 1937) and more flexible programmability were vitally important steps, but defining one point along this road as "the first digital electronic computer" is difficult. Notable achievements include:

Konrad Zuse's electromechanical "Z machines." The Z3 (1941) was the first working machine featuring binary arithmetic, including floating-point arithmetic and a measure of programmability. In 1998 the Z3 was proved to be Turing complete, which by that criterion makes it the world's first operational computer.

The non-programmable Atanasoff–Berry Computer (commenced in 1937, completed in 1941), which used vacuum-tube-based computation, binary numbers, and regenerative capacitor memory. The use of regenerative memory allowed it to be much more compact than its peers (approximately the size of a large desk or workbench), since intermediate results could be stored and then fed back into the same set of computation elements.

The secret British Colossus computers (1943), which had limited programmability but demonstrated that a device using thousands of tubes could be reasonably reliable and electronically reprogrammable. They were used for breaking German wartime codes.

The Harvard Mark I (1944), a large-scale electromechanical computer with limited programmability.

The U.S. Army's Ballistic Research Laboratory ENIAC (1946), which used decimal arithmetic and is sometimes called the first general-purpose electronic computer (since Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however, ENIAC had an inflexible architecture which essentially required rewiring to change its programming.

Stored-program architecture

(Figure: Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program computer, at the Museum of Science and Industry in Manchester, England.)

Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and elegant design, which came to be known as the "stored-program architecture" or von Neumann architecture. This design was first formally described by John von Neumann in the paper First Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop computers based on the stored-program architecture commenced around this time, the first of which was completed in 1948 at the University of Manchester in England: the Manchester Small-Scale Experimental Machine (SSEM or "Baby"). The Electronic Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge University, was the first practical, non-experimental implementation of the stored-program design and was put to use immediately for research work at the university. Shortly thereafter, the machine originally described in von Neumann's paper, the EDVAC, was completed but did not see full-time use for an additional two years.

Nearly all modern computers implement some form of the stored-program architecture, making it the single trait by which the word "computer" is now defined.
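To make the fetch-decode-execute idea behind the stored-program design concrete, here is a minimal sketch (my illustration, not part of the original article) of a machine whose program lives in the same memory as its data; the three-instruction set is invented purely for this example.

```python
# A toy stored-program machine: instructions and data share one memory,
# and a fetch-decode-execute loop interprets them. The instruction set
# (LOADI, ADD, HALT) is invented for this sketch.
def run(memory):
    acc = 0   # accumulator register
    pc = 0    # program counter: index of the next instruction in memory
    while True:
        op, arg = memory[pc]          # fetch
        pc += 1
        if op == "LOADI":             # decode + execute
            acc = arg                 # load an immediate value
        elif op == "ADD":
            acc += memory[arg][1]     # add a value stored in memory
        elif op == "HALT":
            return acc
        else:
            raise ValueError(f"unknown opcode {op!r}")

# Program and data occupy the same memory; changing the cells below
# reprograms the machine without any rewiring, unlike ENIAC.
memory = [
    ("LOADI", 2),     # acc = 2
    ("ADD", 4),       # acc += value held in memory cell 4
    ("HALT", None),
    ("HALT", None),   # padding
    ("DATA", 40),     # a data cell; run() reads its second field
]

print(run(memory))    # -> 42
```

Because the program is just memory contents, reprogramming means writing new values, exactly the flexibility that rewiring-based machines lacked.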
While the technologies used in computers have changed dramatically since the first electronic, general-purpose computers of the 1940s, most still use the von Neumann architecture.

Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted research on ternary computers, devices that operated on a base-three numbering system of -1, 0, and 1 rather than the conventional binary numbering system upon which most computers are based. They designed the Setun, a functional ternary computer, at Moscow State University. The device was put into limited production in the Soviet Union, but was supplanted by the more common binary architecture.

Semiconductors and microprocessors

Computers using vacuum tubes as their electronic elements were in use throughout the 1950s, but by the 1960s they had been largely replaced by semiconductor transistor-based machines, which were smaller, faster, cheaper to produce, required less power, and were more reliable. The first transistorised computer was demonstrated at the University of Manchester in 1953. In the 1970s, integrated circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further decreased size and cost and further increased the speed and reliability of computers. By the late 1970s, many products such as video recorders contained dedicated computers called microcontrollers, which started to appear as replacements for mechanical controls in domestic appliances such as washing machines. The 1980s witnessed home computers and the now ubiquitous personal computer. With the evolution of the Internet, personal computers are becoming as common as the television and the telephone in the household. Modern smartphones are fully programmable computers in their own right, and as of 2009 may well be the most common form of such computers in existence.

The History of Computing

Main article: History of computing hardware

The first use of the word "computer" was recorded in 1613; it referred to a person who carried out calculations, or computations, and the word kept that same meaning until the middle of the 20th century.
