Foreign Literature Translation (2): Software


EDA Technology and Software (Translated Foreign Literature)


EDA Technology and Software

EDA is the abbreviation of Electronic Design Automation. It emerged in the early 1990s from the concepts of computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided testing (CAT), and computer-aided engineering (CAE). In EDA technology the computer is the tool: working on an EDA software platform, the designer describes the design in a hardware description language such as VHDL, and the computer then automatically performs logic compilation, minimization, partitioning, synthesis, optimization, placement and routing, and simulation for a particular target chip, through to fitting, logic mapping, and programming download.

1. EDA Technology Concepts

EDA technology developed on the basis of electronic CAD. Taking computer software as the working platform, it combines the latest achievements of electronic technology, computer technology, information processing, and intelligent technology to automate the design of electronic products. With EDA tools, a designer can begin from a concept, an algorithm, or a protocol; much of the work is then completed by the computer, and the whole flow of an electronic product — from circuit design and performance analysis to IC layout or PCB layout — can be carried out automatically.

The scope of EDA is now very wide. It is applied in machinery, electronics, communications, aerospace, chemical engineering, mining, biology, medicine, the military, and other fields, and is in extensive use in large companies, enterprises, institutions, and teaching and research departments. In aircraft manufacturing, for example, everything from design, performance testing, and characteristic analysis to flight simulation may involve EDA technology.
EDA technology is mainly applied in electronic circuit design, PCB design, and IC design, and can be divided into the system level, the circuit level, and the physical implementation level.

2. Development Environments: MAX+PLUS II and Quartus II

Altera Corporation is one of the world's three major CPLD/FPGA manufacturers. Its devices achieve high performance and integration not only because of advanced process technology and new logic architectures, but also because of its modern design tool MAX+PLUS II, the third generation of Altera's PLD development system. It provides an architecture-independent design environment in which designers of Altera CPLDs can easily perform design entry, fast processing, and device programming. MAX+PLUS II offers comprehensive logic design capabilities, including schematic, text, and waveform design entry; compilation; logic synthesis; simulation and timing analysis; and device programming. Its schematic entry in particular is considered among the easiest to use, with one of the friendliest user interfaces of any PLD development software. MAX+PLUS II can develop designs for all Altera CPLD/FPGA families other than APEX 20K.

The MAX+PLUS II development system has many outstanding features:

① Open interfaces.

② Architecture independence: MAX+PLUS II supports Altera's Classic, ACEX 1K, MAX 3000, MAX 5000, MAX 7000, MAX 9000, FLEX 6000, FLEX 8000, and FLEX 10K programmable logic device families, with gate counts from 600 to 250,000, providing a truly architecture-independent programmable logic design environment.
The MAX+PLUS II compiler also provides powerful logic synthesis and optimization, reducing the burden on the designer.

③ Multi-platform support: MAX+PLUS II runs on PCs under Windows NT 4.0, Windows 98, and Windows 2000, and also on Sun SPARCstation, HP 9000 Series 700/800, and IBM RISC System/6000 workstations.

④ Full integration: design entry, processing, and verification are fully integrated within the programmable logic development tool, so designs can be debugged more quickly and the development cycle is shortened.

⑤ Modular tools: designers can choose among the various design entry, editing, verification, and device programming tools to form their own style of development environment, and can add new features while retaining the original ones. Because the MAX+PLUS II series supports many devices, designers do not need to learn new development tools in order to work with new device architectures.

⑥ Hardware description language (HDL) support: MAX+PLUS II supports several HDLs for design entry, including standard VHDL, Verilog HDL, and Altera's own hardware description language, AHDL.

⑦ MegaCore functions: MegaCores are pre-verified HDL netlist files for implementing complex system-level functions. They provide optimized designs for ACEX 1K, MAX 7000, MAX 9000, FLEX 6000, FLEX 8000, and FLEX 10K devices.
Users can purchase MegaCores from Altera; using them reduces the design task, so designers can devote more time and energy to improving the design and the final product.

⑧ OpenCore feature: MAX+PLUS II has an open-kernel (OpenCore) characteristic that lets designers evaluate cores for their own designs before purchasing them.

MAX+PLUS II also offers several design entry methods:

① Graphic (schematic) entry: schematic entry in MAX+PLUS II is easier to use than in other software, because MAX+PLUS II provides rich component libraries for the designer to call. In particular, the mf library in MAX2LIB includes almost all of the 74-series devices, and the prim library provides all of the discrete digital circuit devices, so anyone with a knowledge of digital circuits can use MAX+PLUS II for CPLD/FPGA design with almost no extra learning. MAX+PLUS II also includes a variety of special logic macrofunctions (Macro-Functions) and parameterized megafunction (Mega-Function) modules. Making full use of these modules greatly reduces the designer's workload and shortens the design cycle.

② Text entry: the MAX+PLUS II text entry and compilation system supports three input languages: AHDL, VHDL, and Verilog.

③ Waveform entry: if the input and output waveforms are known, waveform entry can also be used.

④ Mixed entry: the MAX+PLUS II design environment supports mixing graphic, text, and waveform entry. In a schematic or waveform design, a text-edited module can be used through an include statement ("module_name.inc") or called in the form Function (...) Return (...). Similarly, a module entered as text can be called from the schematic editor; the results of AHDL compilation can be used in VHDL, and the results of VHDL compilation can likewise be used from AHDL or from a schematic.
Such flexible entry methods bring great convenience to designers.

Altera's Quartus II is a comprehensive PLD development software package. It supports schematic, VHDL, Verilog HDL, and AHDL (Altera Hardware Description Language) design entry, integrates its own simulator, and can carry a design from entry all the way to hardware configuration of the PLD. Quartus II runs on Windows XP, Linux, and Unix; in addition to completing the design flow with Tcl scripts, it provides a complete graphical user interface, and it is fast, uniform in interface, feature-rich, and easy to use.

Quartus II supports IP cores, including the LPM/MegaFunction macrofunction module library, allowing users to take full advantage of mature modules, simplifying design complexity and speeding up the design. Good support for third-party EDA tools also allows users to apply familiar third-party tools at the various stages of the design flow. In addition, Quartus II, combined with the DSP Builder tool and Matlab/Simulink, can conveniently implement a variety of DSP applications; it supports Altera's system-on-a-programmable-chip (SOPC) development, uniting system-level design, embedded software development, and programmable logic design in one comprehensive development platform.

As the previous generation of Altera's PLD design software, MAX+PLUS II was widely used for its excellent ease of use. Altera has now stopped updating and supporting MAX+PLUS II. Compared with it, Quartus II not only supports a richer set of device types but also changes the graphical interface; it includes many design aids such as SignalTap II, Chip Editor, and RTL Viewer, integrates the SOPC and HardCopy design flows, and inherits the friendly graphical interface and ease of use of MAX+PLUS II.
As a programmable logic design environment, Altera's Quartus II is welcomed by more and more digital system designers for its strong design capability and intuitive interface. Quartus II is the fourth-generation programmable logic (PLD) software development platform. It supports workgroup-based design requirements, including Internet-based collaborative design, and is compatible with development tools from EDA vendors such as Cadence, Exemplar Logic, Mentor Graphics, Synopsys, and Synplicity. The software improves the LogicLock module design feature, adds a FastFit compiler option, promotes netlist editing performance, improves debugging capability, and supports products such as the MAX 7000/MAX 3000 devices.

3. The Development Language VHDL

VHDL (Very High Speed Integrated Circuit Hardware Description Language) is a language that can describe the function of a hardware circuit, its signal connectivity, and its timing. It can express the characteristics of a hardware circuit more effectively than a schematic. Using VHDL, one can proceed from the general requirements of the system, work down through the detailed design content, and finally complete the overall design of the system hardware. VHDL has been adopted by the IEEE as an industry-standard design language, which facilitates the reuse and sharing of design results. At present it cannot be applied to analog circuit design, although research on this is under way.
The structure of a VHDL program includes the entity (Entity), the architecture (Architecture), the configuration (Configuration), the package (Package), and the library (Library). The entity is the basic unit of a VHDL design, made up of the entity declaration and the architecture: the entity declaration describes the design's external interface signals, while the architecture describes the system's behavior, structure, or dataflow. A configuration selects the required design units from a library to form different versions with different specifications, so that the function of the designed system can be changed. A package records the data types, constants, subprograms, and so on that design modules share. Libraries store compiled entities, architectures, packages, and configurations; some are developed by the user in the course of a project, while others are supplied by manufacturers.

The main features of VHDL are:

① Power and flexibility: VHDL has powerful language constructs, and clear, concise code can describe complex control logic. VHDL also supports hierarchical design, design libraries, and the construction of reusable components. It has become a standard hardware description language for design, simulation, and synthesis.

② Device independence: VHDL allows designers to produce a design without first selecting a specific device. The same design description can be implemented on many different device architectures, so at the description stage the designer can concentrate on design ideas; only afterwards is a specific device specified for synthesis and fitting.

③ Portability: because VHDL is a standard language, a VHDL design is supported by different EDA tools.
A design can be moved from one simulation tool to another, from one synthesis tool to another, and from one working platform to another, and the skills learned with one EDA tool carry over to others.

④ Top-down design methodology: the traditional approaches are bottom-up design and flat design. Bottom-up design starts from the lowest-level modules and gradually builds up the functional modules of a complex circuit. Its advantage is clear: because the design is hierarchical, with sub-modules usually divided by structure or function, the circuit has clear levels and a clear structure, is easy to develop cooperatively, and is easy to archive and communicate. Its shortcoming is equally obvious: the overall design concept emerges only at the end, so months of low-level design can be wasted. Flat design treats the whole circuit as a single module, with no division by structure or function and no hierarchy; for small circuits this saves time and effort, but as circuit complexity grows its shortcomings become acute. Top-down design instead begins with a description of the top-level circuit (the top model), which is simulated with EDA software; if the top-level simulation results meet the requirements, the top level is divided into lower-level modules, which are designed and simulated level by level until the entire circuit is complete. Compared with the first two approaches, top-down design has obvious advantages.

⑤ Rich data types: as a hardware description language, VHDL is very rich in data types. Besides the dozens of data types predefined by the language itself, user-defined data types may also be declared in VHDL programs.
The std_logic data type in particular allows VHDL to model the complex signals of a circuit most realistically.

⑥ Convenient modeling: VHDL statements may be synthesizable, simulatable, or both, and the language has strong behavioral description capability, so it is particularly suitable for signal modeling. Current VHDL synthesizers can synthesize complex arithmetic descriptions (for example, Quartus II 2.0 and later versions can add, subtract, multiply, and divide std_logic_vector data), so for modeling and simulating complex circuits VHDL is appropriate both as a simulation language and as a synthesis language.

⑦ Rich libraries and packages: the packages supported by VHDL are very rich, mostly stored as libraries in specific directories that the user can draw on at any time — for example, the std_logic_1164, std_logic_arith, and std_logic_unsigned packages of the IEEE library. For CPLD/FPGA synthesis, EDA software vendors also provide various libraries and packages, and users can store their own results in libraries for continued use in later designs.

⑧ Hardware semantics: VHDL is a hardware modeling language and therefore differs greatly from ordinary computer languages. An ordinary computer language executes to the beat of the CPU clock, one instruction after another, so instructions are sequential — executed in order — and each takes a definite time to execute. A VHDL description corresponds to a hardware circuit and follows the characteristics of hardware: its statements have no order of execution — they execute concurrently — and, unlike the instructions of ordinary software, they take no fixed time per statement, but only follow the delays of the hardware itself.
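The concurrent-execution semantics described in ⑧ can be sketched with a toy Python "simulator" (an illustrative sketch only — real VHDL simulators are far more elaborate, and all names here are invented):

```python
def settle(signals, assignments, max_cycles=100):
    """Repeatedly evaluate every concurrent assignment against a frozen
    snapshot of the signals, then commit all new values at once,
    until no signal changes (the circuit has settled)."""
    for _ in range(max_cycles):
        snapshot = dict(signals)            # every statement sees the same old values
        new = {sig: fn(snapshot) for sig, fn in assignments.items()}
        if all(signals[s] == v for s, v in new.items()):
            return signals                  # stable: no further events
        signals.update(new)
    raise RuntimeError("circuit did not settle")

# Two concurrent assignments, like  c <= a and b;  d <= not c;  in VHDL.
# Their textual order is irrelevant: both are evaluated against the same snapshot,
# and the values propagate over successive evaluation rounds until stable.
sigs = settle(
    {"a": True, "b": True, "c": False, "d": False},
    {"c": lambda s: s["a"] and s["b"], "d": lambda s: not s["c"]},
)
```

After settling, `c` carries `a and b` and `d` carries `not c`, regardless of the order in which the two assignments were written — the point made above about statements having no order of execution.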

Software Engineering (Translated Foreign Literature)


1. Software Engineering

Software is the sequence of instructions, in one or more programming languages, that comprises a computer application automating some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system life-cycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirements to define a system architecture. The data and action components are encapsulated — that is, combined together — to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained to communicate only via messages. At a minimum, a message indicates the receiver and the action requested; messages may be more elaborate, including the sender and the data to be acted upon.

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used.
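The encapsulation and message-passing ideas above can be sketched in a few lines of Python (the class and its names are invented for illustration, not taken from the text):

```python
class Account:
    """An encapsulated abstract data type: the data (balance) and the
    allowable actions on it are combined in one object, and callers
    interact with it only through messages (method calls)."""

    def __init__(self, owner, balance=0):
        self._owner = owner        # internal state, hidden from callers
        self._balance = balance

    def deposit(self, amount):     # a message carrying data to be acted upon
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):             # a minimal message: receiver + requested action
        return self._balance

acct = Account("alice")
acct.deposit(100)
```

Knowing the data you want (an account's balance) also tells you the allowable processes against it (`deposit`, `balance`) — exactly the property the paragraph describes.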
Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the types of applications needed by businesses are constantly changing and becoming more complex, and our ability to develop and disseminate knowledge about how to build systems successfully for new technologies and new application types lags seriously behind technological and business change. Another reason is that we are not always free to do what we like: it is hard to change habits and cultures from the old way of doing things, and hard to get users to agree to a new sequence of events or an unfamiliar format for documentation.

You might then ask: if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.

2. Database Systems

1. Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize its value. In addition to the databases within an organization, a vast new demand is growing for database services that will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate.
The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data computers have access to, the greater their potential power. In all walks of life and in all areas of industry, data banks will change what it is possible for man to do. At the end of this century, historians will look back on the coming of computer data banks and their associated facilities as a step that changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press. Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information.

The vast majority of this information is not yet computerized. However, the cost of data storage hardware is dropping more rapidly than other costs in data processing, and it will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored: the computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, and so on. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

Two main technology developments are likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times, but with capacities smaller than disks. Disks themselves may be increased somewhat in capacity. For the longer-term future, a number of new technologies currently in research laboratories may replace disks and may provide very large microsecond-access-time devices.
A steady stream of new storage devices is thus likely to reach the marketplace over the next five years, rapidly lowering the cost of storing data. Given the available technologies, it is likely that on-line databases will use two or three levels of storage: one solid-state, with microsecond access time, and one electromagnetic, with access time of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use; in a few years' time, stores of this size may be common.

A particularly important consideration in database design is to store the data so that they can be used for a wide variety of applications, and so that the way they are used can be changed quickly and easily. On computer installations prior to the database era it was remarkably difficult to change the way data were used. Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of database design are important. First, it should be possible to interrogate and search the database without the lengthy operation of writing programs in conventional programming languages.
Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a database is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a database management system (DBMS) to manage the data.

Database design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams, and requires several steps to produce a structure acceptable to the particular DBMS:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical database design.

Developing a database structure from user requirements is called database design. Most practitioners agree that there are two separate phases to the database design process: the design of a logical database structure, processable by the database management system (DBMS), which describes the user's view of the data; and the selection of a physical structure, such as the indexed sequential or direct access method of the intended DBMS.

Current database design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them.
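Steps (2) and (3) of the E-R approach above — going from an entity model to table-style logical schemas — can be sketched mechanically in Python (the entity names, attributes, and the derivation rule shown are invented for illustration; real E-R-to-schema mapping handles far more cases):

```python
# A tiny entity model: entity types with their attributes (first attribute
# plays the role of the key), plus one many-to-many relationship.
entities = {
    "Student": ["student_id", "name"],
    "Course":  ["course_id", "title"],
}
relationships = [("Student", "Course", "enrolled_in")]

def derive_schema(entities, relationships):
    """Each entity type becomes a table; each many-to-many relationship
    becomes a linking table holding the two entities' key columns."""
    tables = {name: list(attrs) for name, attrs in entities.items()}
    for left, right, rel_name in relationships:
        tables[rel_name] = [entities[left][0], entities[right][0]]
    return tables

schema = derive_schema(entities, relationships)
```

The point of the sketch is the philosophy the paragraph describes: the designer states the entity types and relationships up front, and the logical schema falls out of them, rather than being discovered by normalization.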
The advent of the DBMS revised the emphasis in data and program design approaches. There are many interlocking questions in the design of database systems, and many types of technique that one can use to answer them — so many, in fact, that one often sees valuable approaches being overlooked and vital questions not being asked. There will soon be new storage devices, new software techniques, and new types of databases. The details will change, but most of the principles will remain; the reader should therefore concentrate on the principles.

2. Database Systems

The conceptions used for describing files and databases have varied substantially, even within the same organization. A database may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use them; and a common, controlled approach is used in adding new data and in modifying and retrieving existing data within the database. A system is said to contain a collection of databases if they are entirely separate in structure. A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, a DBMS, and the database itself.

One of the most important characteristics of most databases is that they will constantly need to change and grow. Easy restructuring of the database must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a database.
It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, database organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A database used for many applications can have multiple interconnections between the data items we may wish to record; in this way it can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer to locate the record or tuple by means of an index or addressing algorithm.

If the function of a database were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. The data may be described either logically or physically. The logical database description is referred to as a schema. A schema is a chart of the types of data that are used: it gives the names of the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record.
When we talk about a "personnel record", this is really a record type: there are no data values associated with it. The term schema is used to mean an overall chart of all the data-item types and record types stored in a database. Many different subschemas — charts of the data as individual applications view them — can be derived from one schema. The schema and the subschemas are both used by the database management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently, and must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine, fairly automatically, an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small databases, many databases involve millions of bytes, and an inefficient implementation can be disastrous. We discuss data models in the following section.

3. Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time there are three main underlying structures for database management systems: relational, hierarchical, and network. The hierarchical and network structures have been used for DBMSs since the 1960s; the relational structure was introduced in the early 1970s.

In the relational model, entities and their relationships are represented by two-dimensional tables.
Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain, or range, of possible values.

The end user is presented with a simple data model: his or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically. The relational data model removes the details of storage structure and access strategy from the user interface, and so provides a relatively high degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements providing faster and more reliable hardware may answer the question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root; the nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at succeeding levels, and the node in the preceding level becomes the parent node of the new dependent nodes. A parent node can have one dependent child node or many children. The major advantage of the hierarchical data model is the existence of proven database management systems that use it as the basic structure.
There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in the stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas; an area contains records, and a record in turn may consist of fields. A set, which is a grouping of records, may reside in one area or span a number of areas. A set type is based on an owner record type and a member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex, however, and the application programmer must be familiar with the logical structure of the data base.

4. Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the data-base management system on the logical data model. There are three main models, as mentioned above: hierarchical, relational, and network.

The physical model is a framework for the database as it is to be stored on physical devices. The model must be constructed with full regard for the performance of the resulting database. One should analyze the physical model with the average frequencies of occurrence of the groupings of data elements, with expected space estimates, and with time estimates for retrieving and maintaining the data.

The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on those keys.
The physical designer must have expert knowledge of the DBMS functions, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.

Records exist on storage devices in a given physical sequence, and this sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need the records in different sequences. The most common method of ordering records is to have them in sequence by a key, that key which is most commonly used for addressing them. An index is then required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.

Hashing has been used for addressing random-access storage since such storage first came into existence in the mid-1950s, but nobody had the temerity to use the word "hashing" until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing: first, it finds most records with only one seek; second, insertions and deletions can be handled without added complexity. Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers.
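A minimal Python sketch of these two ideas together (all key values and record contents are invented for illustration): hashing computes a record's home bucket in one step, and a small chain of records inside each bucket absorbs collisions, so insertions and deletions need no index maintenance.

```python
# A tiny hash file: NBUCKETS buckets, each holding a chain of records.
NBUCKETS = 8

buckets = [[] for _ in range(NBUCKETS)]  # each bucket is a chain

def home_address(key):
    # The hashing algorithm: key -> bucket number, one "seek".
    return hash(key) % NBUCKETS

def insert(key, record):
    # Insertion needs no index maintenance: append to the chain.
    buckets[home_address(key)].append((key, record))

def find(key):
    # One hash computation locates the bucket; the chain is followed
    # only for keys that collided into the same bucket.
    for k, rec in buckets[home_address(key)]:
        if k == key:
            return rec
    return None

insert("EMP1001", {"name": "Ada"})
insert("EMP1002", {"name": "Grace"})
print(find("EMP1002"))
```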
The software that is used to retrieve the chained records makes them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter, because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise need to be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5. Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages.
A data description language is the means of declaring to the data-base management system what data structures will be used. A data description language giving a logical data description should perform the following functions:

- It should give a unique name to each data-item type, file type, data base, and other data subdivision.
- It should identify the types of data subdivision, such as data item, segment, record, and base file.
- It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
- It may define the length of the data items and the range of values that a data item can assume.
- It may specify the sequence of records in a file, or the sequence of groups of records in the data base.
- It may specify means of checking for errors in the data.
- It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level and, if necessary, may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined; it is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used, or of searching requirements, so that the physical technique can be selected optimally, but such indications should not be logically limiting.

Most DBMS have their own languages for defining the schemas that are used.
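A toy illustration in Python of the kind of information such a declaration carries (the record name, item names, encodings, and ranges are all invented; real DBMS use their own DDL syntax, not Python): names, subdivision types, encodings, lengths, value ranges, and a privacy lock, with error checking driven by the description rather than by application code.

```python
# A logical data description expressed as a plain data structure.
schema = {
    "record": "EMPLOYEE",
    "items": [
        {"name": "EMP-NO", "encoding": "binary", "length": 4,
         "range": (1, 99999), "key": "primary"},
        {"name": "NAME", "encoding": "character", "length": 30},
        {"name": "SALARY", "encoding": "binary", "length": 4,
         "range": (0, 500000), "privacy": "payroll-only"},
    ],
}

def check(record_values):
    # Error checking driven by the description, not application code.
    for item in schema["items"]:
        value = record_values[item["name"]]
        if "range" in item:
            low, high = item["range"]
            assert low <= value <= high, f"{item['name']} out of range"
    return True

print(check({"EMP-NO": 1001, "NAME": "Ada", "SALARY": 52000}))
```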
In most cases these data description languages are different from conventional programming languages, because conventional languages do not have the capability to define the variety of relationships that may exist in the schemas.

Appendix B: Translated Text

1. Software Engineering

Software is a sequence of instructions, written in one or more programming languages, that automates the application of computers to certain tasks.

CAT_systran: SYSTRAN Machine Translation Software Technical Training (Session 1), Beijing, 7 July 2006. PROPRIETARY & CONFIDENTIAL.

Main contents: a brief overview of the SYSTRAN 6 system and its features. Surviving slide fragments mention: terminology extraction; new language pairs (FAEN, HIEN, UREN, CSEN, UKEN, SKEN, PLEN, ARFR, SREN); SYSTRAN Lite on PDA; MT system complexity, roughly (number of rules, dictionary size) × number of language pairs, with high flexibility and stability required given the intrinsic complexity of language description; and added value for multiple enterprise applications of the inbound/inward communication type, such as self-service translation over the intranet (Office, email, chat, collaboration), market intelligence based on foreign-language content sources, and cross-lingual research and translation on knowledge bases.
Introduction to English Translation Tools

These translation tools have been collected to give you comprehensive information about English translation tools; I hope they are useful. Online English translation tools now come in every shape and size, so which ones are genuinely useful? Based on my own years of translation experience, here are several English translation tools that work well.

1. Google Translate. For multilingual translation, Google is the name everyone knows: it offers instant translation among 80 languages worldwide, making it the online tool that covers the most languages of any English translation tool.

2. Yike Chuanshuo (译客传说). A mobile platform built for translators, it connects translators in real time and raises their efficiency. It provides translation logs, terminology recording backed by a cloud term base, industry news, job postings, one-click résumé submission, translation business features, and a community for making contacts: a pocket playground for translators.

3. iCAT computer-aided translation software. iCAT provides a cloud terminology-management platform with more than 20 million terms for translators to collect and use. Supported languages include Simplified Chinese, Traditional Chinese, English, Japanese, Korean, German, French, Russian, Spanish, and others. It exports the finished translation in three formats: target text only, paragraph-by-paragraph bilingual, and side-by-side bilingual.

4. Gaoxiao Yiyun (高校译云). The only university-based translation platform, it brings together university translation resources and market translation demand in a dedicated communication channel. If you have translation needs, you can find a suitable university translation team there.

Anyone who translates regularly knows how much translation software matters: a good tool can multiply translation speed while also raising translation quality.

1. iol8 Huoyun Yike (火云译客). Its slogan is "born for translators", and the software lives up to it: every module follows translators' needs, combining terminology management, dictionary lookup, term sharing, online translation, and online discussion and sharing in one package. Backed by the resources of the iol8 language-services network, it offers a 20-million-entry authoritative term base and helps users annotate and review translations quickly, preventing mistranslations and omissions. Translation speed rises rapidly.

2. TOLQ, a crowdsourced web-page translation service platform. Tolq helps enterprises with language translation. It can quickly translate web pages into as many as 35 languages, so that your website can be understood quickly and without barriers by most of the world.

3. WorldLingo. WorldLingo is a well-known international translation company whose website offers online translation of text, documents, websites, and email, subject to a word-count limit.

Software Engineering Graduation Thesis: Literature Translation (Chinese-English)

Student graduation design (thesis), translated text. Student name:  Student ID:  Major: Software Engineering. Translation title (Chinese/English): Qt Creator白皮书 (Qt Creator Whitepaper). Source: Qt network. Advisor's review signature: 

Translated text: Qt Creator Whitepaper

Qt Creator is a complete integrated development environment (IDE) for creating applications with the Qt application framework.

Qt is designed for developing applications and user interfaces once and deploying them across multiple desktop and mobile operating systems. This paper introduces Qt Creator and the features it offers Qt developers across the application development life cycle.

Introduction to Qt Creator

One of Qt Creator's main advantages is that it allows a development team to share a project across different development platforms (Microsoft Windows, Mac OS X, and Linux) with common tools for development and debugging. Its main goal is to meet the needs of Qt developers who are looking for simplicity, ease of use, productivity, extensibility, and openness, while lowering the barrier to entry for newcomers to Qt. Qt Creator's key features let developers accomplish the following tasks:

- Get started with Qt application development quickly and easily, with project wizards and fast access to recent projects and sessions.
- Design user interfaces for Qt widget-based applications with the integrated editor, Qt Designer.
- Develop applications with the advanced C++ code editor, which provides powerful features such as code completion, code snippets, refactoring, and viewing a file's outline (i.e., the symbol hierarchy of a file).
- Build, run, and deploy Qt projects that target multiple desktop and mobile platforms, such as Microsoft Windows, Mac OS X, Linux, Nokia MeeGo, and Maemo.
- Debug with the GNU and CDB debuggers through a graphical front end that understands Qt class structures.
- Use code analysis tools to check for memory-management problems in your applications.
- Deploy applications to MeeGo mobile devices, and create application installation packages for Symbian and Maemo devices that can be published in the Ovi Store and other channels.
- Easily access information with the integrated, context-sensitive Qt help system.

Software Engineering Foreign-Literature Translation
English original: SSH is an integration of Spring + Struts + Hibernate, and is one of the more popular web application frameworks.

Spring

Lightweight: Spring is lightweight in terms of both size and overhead. A complete Spring framework can be distributed in JAR files totaling only about 1 MB, and the processing overhead Spring requires is negligible.

Inversion of control: Spring promotes loose coupling through a technique known as inversion of control (IoC). With IoC, the objects an object depends on are passed in passively, rather than the object creating or looking up its dependencies itself. You can think of IoC as the reverse of JNDI: instead of an object requesting its dependencies from a container, the container injects the dependencies into the object when it is initialized, without being asked.

Aspect-oriented programming: Spring provides rich support for aspect-oriented programming, allowing cohesive development by separating application business logic from system-level services. Application objects implement only what they are supposed to do, the business logic, and are not responsible for other system-level concerns.

Container: Spring contains and manages the configuration and life cycle of application objects; in this sense it is a container. You can configure how each of your beans is created (based on a configurable prototype, a bean can be a single instance or a new instance generated each time it is needed) and how the beans are related to one another. Spring should not, however, be confused with traditional heavyweight EJB containers, which are often large, unwieldy, and difficult to use.

Struts

Struts provides components corresponding to Model, View, and Controller.

ActionServlet: the core Struts controller, responsible for intercepting requests from users.

Action: a class typically supplied by the user. This controller receives requests from the ActionServlet and, according to the request, calls the model's business-logic methods to process it, then returns the result to a JSP page for display.

The Model part. It consists of ActionForm and JavaBeans. ActionForm packages the user's request parameters into an ActionForm object, which the ActionServlet forwards to the Action; the Action processes the user request according to the parameters in the ActionForm. JavaBeans encapsulate the underlying business logic, including database access.

The View part. This part is implemented with JSP. Struts provides a rich tag library, which reduces scripting; custom tag libraries can interact effectively with the Model and add practical functionality.

The Controller component. The Controller consists of two parts: the system's core controller and the business-logic controller. The system core controller corresponds to the ActionServlet. It is supplied by the Struts framework, inherits from HttpServlet, and can therefore be configured as a standard Servlet. It is responsible for all HTTP requests and decides, according to the user's request, whether to hand off to a business-logic controller. The business-logic controller, responsible for processing the user request, has no processing capability of its own; it calls the Model to complete the work. It corresponds to the Action part.

Hibernate

Hibernate is an open-source object-relational mapping framework. It provides a very lightweight object wrapper around JDBC, letting Java programmers manipulate the database with object-oriented idioms. Hibernate can be used anywhere JDBC is used: in Java client programs, in Servlet/JSP web applications, and, most significantly, in a J2EE architecture in place of CMP to handle data persistence.

Hibernate has five core interfaces: Session, SessionFactory, Query, Transaction, and Configuration. These five interfaces are used in any Hibernate development; through them, you can not only access persistent objects but also control transactions.

Chinese translation: SSH is an integration framework of Spring + Struts + Hibernate, and is currently one of the more popular open-source frameworks for web applications.
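Spring's inversion-of-control idea is language-neutral, so here is a minimal sketch in Python rather than Java (the class names and the tiny "container" are invented for illustration): the service object does not create or look up its dependency; the container passes it in at construction time.

```python
# Without IoC, OrderService would construct its own data-access
# object; with IoC, the dependency is passed in ("injected") instead.
class InMemoryRepository:
    def find(self, order_id):
        return {"id": order_id, "status": "shipped"}

class OrderService:
    def __init__(self, repository):
        self.repository = repository  # injected, not created here

    def status(self, order_id):
        return self.repository.find(order_id)["status"]

def build_container():
    # A toy container: wires the objects together in one place.
    repo = InMemoryRepository()
    return {"order_service": OrderService(repo)}

service = build_container()["order_service"]
print(service.status(42))
```

Because the dependency arrives from outside, a test can inject a fake repository without touching OrderService, which is the loose coupling the text describes.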

Foreign-Literature Translation: Software Testing Strategies
Appendix: English Literature

SOFTWARE TESTING STRATEGIES

A strategy for software testing integrates software test-case design methods into a well-planned series of steps that result in the successful construction of software. Just as important, a software testing strategy provides a road map for the software developer, the quality assurance organization, and the customer: a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Therefore, any testing strategy must incorporate test planning, test-case design, test execution, and resultant data collection.

I. INTEGRATION TESTING

A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is "putting them together": interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; subfunctions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems. Sadly, the list goes on and on.

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All modules are combined in advance, and the entire program is tested as a whole. Chaos usually results! A set of errors is encountered. Correction is difficult, because isolation of causes is complicated by the vast expanse of the entire program.
Once these errors are corrected, new ones appear, and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.

1.1 Top-Down Integration

Top-down integration is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.

Depth-first integration would integrate all modules on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, modules M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built. Breadth-first integration incorporates all modules directly subordinate at each level, moving across the structure horizontally. From the figure, modules M2, M3, and M4 would be integrated first.
The next control level, M5, M6, and so on, follows.

The integration process is performed in a series of steps:

(1) The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
(2) Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
(3) Tests are conducted as each module is integrated.
(4) On completion of each set of tests, another stub is replaced with the real module.
(5) Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step (2) until the program structure is built.

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated. For example, consider a classic transaction structure in which a complex series of interactive inputs are requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) may be demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.
The tester is left with three choices: (1) delay many tests until stubs are replaced with actual modules, (2) develop stubs that perform limited functions that simulate the actual module, or (3) integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex. The third approach is called bottom-up testing.

1.2 Bottom-Up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Because modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

(1) Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction.
(2) A driver (a control program for testing) is written to coordinate test-case input and output.
(3) The cluster is tested.
(4) Drivers are removed and clusters are combined, moving upward in the program structure.

Modules are combined to form clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block). Modules in clusters 1 and 2 are subordinate to M1. Drivers D1 and D2 are removed, and the clusters are interfaced directly to M1. Similarly, driver D3 for cluster 3 is removed prior to integration with module M2.
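A minimal sketch in Python of the stub-and-driver machinery described above (the module names and return values are invented for illustration): a stub stands in for a not-yet-integrated subordinate module during top-down testing, while a driver exercises a low-level cluster during bottom-up testing.

```python
# Top-down: the real upper-level module is tested against a stub.
def validate_input_stub(data):
    # Stub: a limited function that simulates the actual module.
    return True

def main_control(data, validate=validate_input_stub):
    # Upper-level control logic under test; 'validate' will later be
    # replaced, one stub at a time, by the actual module.
    return "accepted" if validate(data) else "rejected"

# Bottom-up: a driver coordinates test-case input/output for a cluster.
def format_record(name, amount):  # an atomic low-level module
    return f"{name}:{amount:.2f}"

def cluster_driver():
    # Driver: feeds test cases into the cluster and collects results.
    return [format_record("A", 1.5), format_record("B", 2.0)]

print(main_control({"x": 1}))  # exercised against the stub
print(cluster_driver())
```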
Both M1 and M2 will ultimately be integrated with M3, and so forth.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.

1.3 Regression Testing

Each time a new module is added as part of integration testing, the software changes. New data-flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.

In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.

Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture-playback tools. Capture-playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:

(1) a representative sample of tests that will exercise all software functions;
(2) additional tests that focus on software functions that are likely to be affected by the change;
(3) tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large.
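A small sketch in Python of keeping a regression suite manageable (the test names, tags, and change set are invented for illustration): each test is tagged with the program functions it exercises, and only the representative sample plus tests touching changed functions are re-executed.

```python
# Each regression test is tagged with the functions it exercises and
# whether it belongs to the representative sample (class 1 above).
suite = [
    {"name": "t_login_smoke",   "functions": {"login"},  "sample": True},
    {"name": "t_report_totals", "functions": {"report"}, "sample": True},
    {"name": "t_login_expiry",  "functions": {"login"},  "sample": False},
    {"name": "t_export_csv",    "functions": {"export"}, "sample": False},
]

def select_regression_tests(changed_functions):
    # Keep the representative sample, plus tests whose tagged
    # functions overlap the changed (or affected) functions.
    selected = []
    for test in suite:
        if test["sample"] or test["functions"] & changed_functions:
            selected.append(test["name"])
    return selected

print(select_regression_tests({"login"}))
```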
Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.

1.4 Comments on Integration Testing

There has been much discussion of the relative advantages and disadvantages of top-down versus bottom-up integration testing. In general, the advantages of one strategy tend to result in disadvantages for the other. The major disadvantage of the top-down approach is the need for stubs, and the attendant testing difficulties that can be associated with them. Problems associated with stubs may be offset by the advantage of testing major control functions early. The major disadvantage of bottom-up integration is that "the program as an entity does not exist until the last module is added". This drawback is tempered by easier test-case design and a lack of stubs.

Selection of an integration strategy depends upon software characteristics and, sometimes, the project schedule. In general, a combined approach (sometimes called sandwich testing), which uses a top-down strategy for upper levels of the program structure coupled with a bottom-up strategy for subordinate levels, may be the best compromise.

As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics: (1) it addresses several software requirements; (2) it has a high level of control (resides relatively high in the program structure); (3) it is complex or error-prone (cyclomatic complexity may be used as an indicator); or (4) it has definite performance requirements. Critical modules should be tested as early as possible. In addition, regression tests should focus on critical-module function.

II. SYSTEM TESTING

2.1 Recovery Testing

Many computer-based systems must recover from faults and resume processing within a prespecified time.
In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time, or severe economic damage will occur.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialization, checkpointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

2.2 Security Testing

Any computer-based system that manages sensitive information, or causes actions that can improperly harm (or benefit) individuals, is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; disgruntled employees who attempt to penetrate for revenge; and dishonest individuals who attempt to penetrate for illicit personal gain.

Security testing attempts to verify that the protection mechanisms built into a system will in fact protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for invulnerability from flank or rear attack."

During security testing, the tester plays the role of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry; and so on.

Given enough time and resources, good security testing will ultimately penetrate a system.
The role of the system designer is to make the cost of penetration greater than the value of the information that will be obtained.

2.3 Stress Testing

During earlier software testing steps, white-box techniques resulted in a thorough evaluation of normal program functions and performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?"

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example: (1) special tests may be designed that generate ten interrupts per second when one or two is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources may be executed; (4) test cases that may cause thrashing in a virtual operating system may be designed; or (5) test cases that may cause excessive hunting for disk-resident data may be created. Essentially, the tester attempts to break the program.

A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms), a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing, or profound performance degradation. This situation is analogous to a singularity in a mathematical function. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.

2.4 Performance Testing

For real-time and embedded systems, software that provides the required function but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process.
Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.

Software Testing Strategies (Chinese translation): A strategy for software testing integrates software test-case design methods into a well-planned series of steps, so that the development of the software can be completed successfully.

Research on the Application of Computer-Assisted Translation in Sci-Tech English Translation

With the continuous advance of technology, computer-assisted translation has become the mainstream tool for translating scientific and technical English.

It brings enormous gains in efficiency and quality to translation work. This paper explores the application of computer-assisted translation in sci-tech English translation from three angles: its definition, its functions, and examples of its use.

1. The definition of computer-assisted translation

Computer-assisted translation (Computer Assisted Translation, abbreviated CAT) is the process of rapidly identifying and translating text with the help of CAT software. In this process the translator can use the computer to assist with translation, dictionary lookup, checking, editing, and terminology management. CAT sits midway between human translation and machine translation: it preserves the human translator's way of thinking while also drawing on the translation resources a machine translation system provides, such as dictionaries, sentence banks, and term bases.

2. The functions of computer-assisted translation

The functions of CAT software mainly include translation memory, terminology management, and machine translation.

2.1 Translation memory. A translation memory is a database of the terms and phrases that individuals and enterprises use in the course of translation. It stores pairs of source sentences and their translations; each time the user works in the translation tool, the memory helps both assisted and automatic translation to go faster and raises the quality of the result. The chief function of translation memory is to improve consistency, avoiding obscure terminology and repeated retranslation, and to increase productivity.
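A minimal sketch in Python of a translation memory's exact-match lookup (the segment pairs are invented for illustration): previously translated segments are stored as source-target pairs and reused verbatim when a segment repeats, which is how a TM enforces consistency and avoids retranslation.

```python
# A toy translation memory: source segment -> approved translation.
memory = {}

def store(source, target):
    # Record a source-target sentence pair after human approval.
    memory[source] = target

def translate(source):
    # Exact match: reuse the stored translation for consistency.
    if source in memory:
        return memory[source], "100% match"
    return None, "no match (send to the translator)"

store("Close the valve.", "关闭阀门。")

print(translate("Close the valve."))   # reused from memory
print(translate("Open the valve."))    # new segment, human translates
```

Real CAT tools also return fuzzy (partial) matches with a similarity score; this sketch shows only the exact-match case.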

2.2 Terminology management. Terminology management is a system for managing the terms peculiar to each industry. It automatically searches the enterprise term base for relevant terms, helping translators find the right vocabulary quickly and improving the accuracy, consistency, and professionalism of the translation. Its advantages include greater translation accuracy, a higher rate of standardization, simpler translation work in multilingual settings, and stronger quality control.

2.3 Machine translation. Machine translation (Machine Translation) is the process of automatically converting text in one language into another by means of computer software. It simulates the human translation process with computer programs, so that the computer can translate the source text into the target language automatically. Although machine translation cannot match human translation in accuracy, it still plays an important role in sci-tech English translation: it can quickly produce a preliminary translation, which human translators then check and revise, reducing translation cost and speeding up the work.


Beijing ********** Graduation Design (Thesis) Literature Translation. Topic: Design of a Beijing Metro Fare Calculation System. Student name:  Student ID:  Major: Software Engineering, Class of 2011. Advisor:  (Lecturer). Department: Computer Science and Technology. 11 December 2014.

JSP

JSP (JavaServer Pages) is a dynamic web page technology standard established by Sun Microsystems together with many other companies that it invited to participate.

JSP technology is similar to ASP technology: Java program fragments (scriptlets) and JSP tags are inserted into a conventional HTML page file (*.htm, *.html) to form a JSP file (*.jsp).

Web applications developed with JSP are cross-platform: they run on Linux as well as on other operating systems.

JSP technology uses XML-like tags and scriptlets written in the Java programming language to encapsulate the processing logic that generates dynamic pages.

Through scriptlets and tags, a page can also access application logic residing in server-side resources.

JSP separates page logic from page design and display, and supports reusable, component-based design, making web application development fast and easy.

When a web server receives a request for a JSP page, it first executes the program fragments, then returns the execution results, together with the HTML source of the JSP file, to the client.

The inserted Java program fragments can operate on databases, redirect pages, and so on, implementing whatever functions the dynamic page requires.

Like JSP, a Java servlet executes on the server side; what is returned to the client is usually HTML text, so the client needs nothing more than a browser to view it.

The final version of the JSP 1.0 specification was released in September 1999, and the 1.1 specification followed in December.

The current specification is the newer JSP 1.2, and a draft of the JSP 2.0 specification for public comment has already appeared.

A JSP page consists of HTML source with Java code embedded in it.

When the server receives a request for the page from the client, it processes this Java code and returns the resulting HTML page to the client's browser.

Java servlets are the technical foundation of JSP, and developing large web applications requires Java servlets and JSP working together.

JSP has all the hallmarks of Java technology: simple and easy to use, fully object-oriented, platform-independent, safe and reliable, and aimed primarily at the Internet.

A few years ago, Marty was invited to take part in a small (20-person) seminar on software technology. The person sitting next to Marty was James Gosling, the inventor of the Java programming language.

A few seats away sat a senior manager from a large software company in Washington.

During the discussion, the seminar chair raised the topic of Jini, which at the time was a new Java technology, and asked the manager for his view.

The manager said they were keeping an eye on the technology, and that if it caught on, they would follow the company's "embrace and extend" strategy.

At this point, Gosling casually interjected: "What you really mean is disgrace and distend."

In the end, a JSP page is translated into a servlet.

Fundamentally, therefore, any task that a JSP page performs can also be accomplished with a servlet.

This underlying equivalence, however, does not mean that servlets and JSP pages are equally suited to every situation.

The issue is not technical capability, but the difference between the two in convenience, productivity, and maintainability.

After all, anything that can be done in the Java programming language on a given platform could equally be done in assembly language, yet the choice of language still matters a great deal.
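The translation described above can be pictured without the servlet API: the container turns static template text into print calls and expressions into printed values, so a page fragment like `Hello, <%= name %>!` becomes plain Java writing to an output stream. The sketch below is a simplified illustration of that idea, not the code a real container generates (a real generated class extends HttpServlet and writes to the response stream):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Simplified picture of what a JSP container generates from a page:
// static template text becomes out.print("...") calls, and each
// <%= expr %> expression becomes out.print(expr).
public class GeneratedPageSketch {
    static String render(String name) {
        StringWriter buffer = new StringWriter();
        PrintWriter out = new PrintWriter(buffer);
        out.print("<html><body>");    // from static template text
        out.print("Hello, ");         // from static template text
        out.print(name);              // from the expression <%= name %>
        out.print("!</body></html>"); // from static template text
        out.flush();
        return buffer.toString();
    }

    public static void main(String[] args) {
        System.out.println(render("world"));
    }
}
```

This is why the text insists that JSP and servlets are equivalent in capability: the page is the servlet, just written in a form where the HTML dominates and the Java is embedded, rather than the reverse.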

Compared with using servlets alone, JSP offers the following advantages:

a. Writing and maintaining the HTML is easier in JSP. You can use ordinary HTML inside a JSP page: no extra backslashes, no extra double quotes, and no lurking Java syntax.

b. You can use standard website development tools. Even HTML tools that know nothing about JSP can be used, because they simply ignore the JSP tags.

c. The development team can divide the work: Java programmers work on the dynamic code, while web developers concentrate on the presentation layer. For large projects this division is extremely important. Depending on the size of the development team and the complexity of the project, you can adopt either a weaker separation or a stronger separation between static HTML and dynamic content.

Of course, none of this discussion means you should stop using servlets and use only JSP. Almost all projects use both technologies. For some requests within a project, you will combine the two technologies under the MVC architecture. We always want to use the right tool for the job, and servlets alone will not fill your toolbox.

Strengths of JSP technology:

(1) Write once, run anywhere. On this point Java does even better than PHP: apart from the system itself, no code needs to change.

(2) Multi-platform support. You can develop on virtually any platform, deploy the system in any environment, and extend it in any environment. Compared with the limitations of ASP/PHP, the difference is obvious.

(3) Strong scalability. From a single small Jar file that can run servlets/JSP, to multiple servers providing clustering and load balancing, to multiple application servers handling transactions and messaging — from one server to countless servers — Java demonstrates tremendous vitality.

(4) Diverse and powerful development tool support. Somewhat as with ASP, Java already has many excellent development tools, many of them available for free, and many of them run smoothly on a variety of platforms.

Weaknesses of JSP technology:

(1) As with ASP, some of Java's advantages are also its fatal problems. Precisely because of its cross-platform capability and its extreme scalability, the complexity of the product is greatly increased.

(2) Java's execution speed is achieved by keeping classes resident in memory, so in some cases the memory it consumes per user does yield the "worst price/performance ratio." On top of that, it also consumes disk space for a whole series of .java files and .class files, along with the corresponding versioned files.

Why you need to know servlets:

(1) JSP pages are converted into servlets. You cannot understand how JSP works without understanding servlets.

(2) JSP is composed of static HTML, special-purpose JSP tags, and Java code. What kind of Java code? Servlet code, of course! If you do not know servlet programming, you cannot write that code.

(3) Some tasks are better accomplished with servlets than with JSP. JSP is good at generating pages composed of large amounts of well-organized, structured HTML or other character data. Servlets are good at generating binary data, building pages with highly variable structure, and performing tasks that produce little or no output (such as redirects).

(4) Some tasks are best accomplished by combining servlets with JSP, rather than by using either servlets or JSP alone.

Compared with JavaScript: JavaScript and the Java programming language are entirely different things. The former generates dynamic HTML on the client, building part of the page as the browser loads it. This is a useful capability, but one that overlaps little with the capabilities of JSP (which runs only on the server side).

Like conventional HTML pages, JSP pages may contain SCRIPT tags with JavaScript in them.

In fact, JSP can even be used to dynamically generate the JavaScript that is sent to the client.

So JavaScript is not a competing technology; it is a complementary one.

JavaScript can also be used on the server side, most notably on SUN ONE (formerly iPlanet), IIS, and BroadVision servers.

However, Java is more powerful, more flexible, more reliable, and more portable.

JSP's nine built-in objects: request, response, out, session, application, config, pageContext, page, exception.

1. The request object: this object encapsulates the information submitted by the user; by calling the appropriate methods of the object you can obtain that encapsulated information — in other words, you use this object to retrieve the information the user submitted.

2. The response object: responds dynamically to the client's request and sends data back to the client.

3. The session object.

(1) What is a session: the session object is a JSP built-in object. It is created automatically when the first JSP page is loaded, and it handles session management. The period from when a client opens a browser and connects to the server until the client closes the browser and leaves the server is called a session.

(2) The session object's ID: when a client visits a JSP page on the server for the first time, the JSP engine creates a session object and assigns it an ID number of type String; the engine also sends this ID number to the client to be stored in a cookie, establishing the correspondence between the client and its session object. As the client repeatedly moves between pages on that server, or repeatedly refreshes a page, the server needs the session object as its means of recognizing that it is the same client.

When the client subsequently connects to other pages on the server, no new session object is assigned to that client. Only after the client closes the browser does the server cancel that client's session object, and the correspondence between the client and its session disappears. If the client reopens the browser and reconnects to the server, the server creates a new session object for the client.
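The session bookkeeping described above — a server-generated ID that maps each client to its own private state until the session ends — can be simulated in plain Java. This is only a sketch of the idea; a real container additionally handles cookie transport, timeouts, and concurrent access:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the session bookkeeping a JSP/servlet container performs:
// each new client receives a generated ID, and that ID maps to the
// client's private attribute store until the session is invalidated.
public class SessionRegistry {
    private final Map<String, Map<String, Object>> sessions = new HashMap<>();

    // Called on a client's first request: create a session, return its ID.
    public String createSession() {
        String id = UUID.randomUUID().toString();
        sessions.put(id, new HashMap<>());
        return id;
    }

    public void setAttribute(String id, String key, Object value) {
        sessions.get(id).put(key, value);
    }

    public Object getAttribute(String id, String key) {
        return sessions.get(id).get(key);
    }

    // Called when the session ends (browser closed, timeout, logout).
    public void invalidate(String id) {
        sessions.remove(id);
    }

    public static void main(String[] args) {
        SessionRegistry registry = new SessionRegistry();
        String alice = registry.createSession();
        String bob = registry.createSession();
        registry.setAttribute(alice, "cart", "3 items");
        registry.setAttribute(bob, "cart", "empty");
        // Each client sees only its own state, keyed by its session ID.
        System.out.println(registry.getAttribute(alice, "cart"));
        System.out.println(registry.getAttribute(bob, "cart"));
        registry.invalidate(alice); // ends only Alice's session
    }
}
```

In a real container the ID travels back and forth in a cookie (or URL rewriting), which is exactly the client-to-session correspondence the text describes.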

4. The application object.

What is application: the application object is created when the server starts. As a client browses among the various pages of the site it is visiting, this application object remains one and the same object until the server shuts down.
