Software Engineering (Translated Foreign Literature)
Foreign Literature and Translations for Computer Science Majors

Microsoft Visual Studio

Visual Studio is a development environment from Microsoft. It can be used to create Windows applications and web applications for the Windows platform, as well as web services, smart device applications, and Office add-ins.
Visual Studio is an integrated development environment (IDE) from Microsoft. It can be used to develop console and graphical user interface applications, Windows Forms applications, web sites, web applications, and web services, in both native code and managed code, for platforms supported by Microsoft Windows, Windows Mobile, Windows CE, the .NET Framework, the .NET Compact Framework, and Microsoft Silverlight.
Visual Studio includes a code editor supporting IntelliSense and code refactoring.
The integrated debugger works both as a source-level debugger and as a machine-level debugger.
Other built-in tools include a forms designer for building GUI applications, a web designer, a class designer, and a database schema designer.
It accepts plug-ins that enhance functionality at almost every level, including adding support for source control systems (such as Subversion and Visual SourceSafe) and adding new toolsets such as editors and visual designers for domain-specific languages, or tools for other aspects of the software development life cycle (such as Team Explorer, the client for Team Foundation Server).
Visual Studio supports different programming languages by means of language services, which allow the code editor and debugger to support (to varying degrees) nearly any programming language, provided a language-specific service exists.
Built-in languages include C/C++ (via Visual C++), VB.NET (via Visual Basic .NET), C# (via Visual C#), and F# (as of Visual Studio 2010). Support for other languages such as M, Python, and Ruby can be added by installing separate language services.
It also supports XML/XSLT, HTML/XHTML, JavaScript, and CSS. Versions of Visual Studio tailored to a single language also exist: Microsoft Visual Basic, Visual J#, Visual C#, and Visual C++.
Software Engineering Graduation Project (Thesis): Literature Review and Foreign Literature Translation for the Design and Implementation of a JSP-Based Hotel Management System
Literature Review. Preface: My graduation project is titled "Design and Implementation of a JSP-Based Hotel Management System." The system was conceived against the background of the increasingly visible development of the service industry, in which the modernization of hotels has become an inevitable trend.
Most hotels abroad have entered the computer age, whereas a considerable share of hotels in China, especially small and medium-sized ones, still rely on manual management; such a management mechanism can no longer keep up with the times.
Moreover, with the growth of the hotel industry, customer information is increasing explosively, and hotels demand ever more automated and accurate information management; the system was conceived against this background. Once completed, the software can be used in the development and management of the hotel industry.
Using computers to manage customer information has advantages that manual management cannot match, such as fast retrieval, convenient lookup, easy modification, high reliability, large storage capacity, fast data processing, good confidentiality, long service life, and low cost.
These advantages can greatly improve the efficiency of hotel information management.
Based on the research results of domestic and foreign scholars on hotel management systems, and drawing on their successful experience, this paper develops a hotel management system.
This paper reviews the prior literature and, combining it with the author's own perspective, puts forward its own views.
With the continuous advancement of science and technology and the maturing of computer science, increasingly mature computer technology will replace the traditional manual mode to realize modern management of hotel information. Its powerful capabilities are now widely recognized; it has entered every field of human society and is playing an ever more important role.
Guo Zhen (2009), in "JSP Programming Tutorial," systematically introduces the knowledge involved in JSP development, including a JSP overview, JSP development fundamentals, JSP syntax, JSP built-in objects, JavaBean technology, Servlet technology, practical JSP components, JSP database application development, and advanced JSP programming, and, through a comprehensive JSP development example (a personal blog), presents the JSP development workflow and the integrated application of the related technologies.
Li Gang (2008), in "Crazy Java Handout," introduces Java programming in depth, explaining Java not merely from a knowledge perspective but from a problem-solving perspective, and uses many practical case studies (a Gobang game, a five-card-stud game, a QQ-style game lobby, etc.) to present the Java development workflow and the integrated application of the related technologies.
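As a minimal illustration of the Servlet technology these tutorials cover (a hedged sketch: the class name, form parameters, and page flow are hypothetical, not taken from the cited books), a request handler for a hotel check-in might look like this:

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: a servlet that handles a hotel check-in form post.
public class CheckInServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String guestName = req.getParameter("guestName"); // assumed form field names
        String roomNo = req.getParameter("roomNo");
        // A real system would validate the input and store it in a database.
        resp.setContentType("text/html;charset=UTF-8");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>Checked in " + guestName
                + " to room " + roomNo + "</body></html>");
    }
}
```

In a JSP-based system like the one reviewed here, a JSP page would typically render the form and a servlet of this kind would process it.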
Foreign Literature Translation of "Software Engineering: A Practitioner's Approach"

Appendix: Software Engineering: A Practitioner's Approach, by Roger S. Pressman, Ph.D. (pp. 340-343)

13.3 DESIGN PRINCIPLES

Software design is both a process and a model. The design process is a sequence of steps that enable the designer to describe all aspects of the software to be built. It is important to note, however, that the design process is not simply a cookbook. Creative skill, past experience, a sense of what makes "good" software, and an overall commitment to quality are critical success factors for a competent design.

The design model is the equivalent of an architect's plans for a house. It begins by representing the totality of the thing to be built (e.g., a three-dimensional rendering of the house) and slowly refines the thing to provide guidance for constructing each detail (e.g., the plumbing layout). Similarly, the design model that is created for software provides a variety of different views of the computer software.

Basic design principles enable the software engineer to navigate the design process. Davis suggests a set of principles for software design, which have been adapted and extended in the following list:

• The design process should not suffer from "tunnel vision." A good designer should consider alternative approaches, judging each based on the requirements of the problem, the resources available to do the job, and the design concepts presented in Section 13.4.
• The design should be traceable to the analysis model. Because a single element of the design model often traces to multiple requirements, it is necessary to have a means for tracking how requirements have been satisfied by the design model.
• The design should not reinvent the wheel. Systems are constructed using a set of design patterns, many of which have likely been encountered before. These patterns should always be chosen as an alternative to reinvention. Time is short and resources are limited! Design time should be invested in representing truly new ideas and integrating those patterns that already exist.
• The design should "minimize the intellectual distance" between the software and the problem as it exists in the real world. That is, the structure of the software design should (whenever possible) mimic the structure of the problem domain.
• The design should exhibit uniformity and integration. A design is uniform if it appears that one person developed the entire thing. Rules of style and format should be defined for a design team before design work begins. A design is integrated if care is taken in defining interfaces between design components.
• The design should be structured to accommodate change. The design concepts discussed in the next section enable a design to achieve this principle.
• The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered. Well-designed software should never "bomb." It should be designed to accommodate unusual circumstances, and if it must terminate processing, do so in a graceful manner.
• Design is not coding, coding is not design. Even when detailed procedural designs are created for program components, the level of abstraction of the design model is higher than source code. The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded.
• The design should be assessed for quality as it is being created, not after the fact. A variety of design concepts (Section 13.4) and design measures (Chapters 19 and 24) are available to assist the designer in assessing quality.
• The design should be reviewed to minimize conceptual (semantic) errors. There is sometimes a tendency to focus on minutiae when the design is reviewed, missing the forest for the trees. A design team should ensure that major conceptual elements of the design (omissions, ambiguity, inconsistency) have been addressed before worrying about the syntax of the design model.

When these design principles are properly applied, the software engineer creates a design that exhibits both external and internal quality factors. External quality factors are those properties of the software that can be readily observed by users (e.g., speed, reliability, correctness, usability). Internal quality factors are of importance to software engineers. They lead to a high-quality design from the technical perspective. To achieve internal quality factors, the designer must understand basic design concepts.

13.4 DESIGN CONCEPTS

A set of fundamental software design concepts has evolved over the past four decades. Although the degree of interest in each concept has varied over the years, each has stood the test of time. Each provides the software designer with a foundation from which more sophisticated design methods can be applied. Each helps the software engineer to answer the following questions:

• What criteria can be used to partition software into individual components?
• How is function or data structure detail separated from a conceptual representation of the software?
• What uniform criteria define the technical quality of a software design?

M. A. Jackson once said: "The beginning of wisdom for a [software engineer] is to recognize the difference between getting a program to work, and getting it right." Fundamental software design concepts provide the necessary framework for "getting it right."

13.4.1 Abstraction

When we consider a modular solution to any problem, many levels of abstraction can be posed. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more procedural orientation is taken. Problem-oriented terminology is coupled with implementation-oriented terminology in an effort to state a solution. Finally, at the lowest level of abstraction, the solution is stated in a manner that can be directly implemented. Wasserman provides a useful definition:

The psychological notion of "abstraction" permits one to concentrate on a problem at some level of generalization without regard to irrelevant low level details; use of abstraction also permits one to work with concepts and terms that are familiar in the problem environment without having to transform them to an unfamiliar structure . . .

Each step in the software process is a refinement in the level of abstraction of the software solution. During system engineering, software is allocated as an element of a computer-based system. During software requirements analysis, the software solution is stated in terms "that are familiar in the problem environment." As we move through the design process, the level of abstraction is reduced. Finally, the lowest level of abstraction is reached when source code is generated.

As we move through different levels of abstraction, we work to create procedural and data abstractions. A procedural abstraction is a named sequence of instructions that has a specific and limited function. An example of a procedural abstraction would be the word open for a door. Open implies a long sequence of procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob and pull door, step away from moving door, etc.).

A data abstraction is a named collection of data that describes a data object (Chapter 12). In the context of the procedural abstraction open, we can define a data abstraction called door. Like any data object, the data abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing direction, opening mechanism, weight, dimensions). It follows that the procedural abstraction open would make use of information contained in the attributes of the data abstraction door.

Many modern programming languages provide mechanisms for creating abstract data types. For example, the Ada package is a programming language mechanism that provides support for both data and procedural abstraction. The original abstract data type is used as a template or generic data structure from which other data structures can be instantiated.

Control abstraction is the third form of abstraction used in software design. Like procedural and data abstraction, control abstraction implies a program control mechanism without specifying internal details. An example of a control abstraction is the synchronization semaphore used to coordinate activities in an operating system. The concept of the control abstraction is discussed briefly in Chapter 14.

13.4.2 Refinement

Stepwise refinement is a top-down design strategy originally proposed by Niklaus Wirth. A program is developed by successively refining levels of procedural detail. A hierarchy is developed by decomposing a macroscopic statement of function (a procedural abstraction) in a stepwise fashion until programming language statements are reached. An overview of the concept is provided by Wirth:

In each step (of the refinement), one or several instructions of the given program are decomposed into more detailed instructions. This successive decomposition or refinement of specifications terminates when all instructions are expressed in terms of any underlying computer or programming language . . . As tasks are refined, so the data may have to be refined, decomposed, or structured, and it is natural to refine the program and the data specifications in parallel.

Every refinement step implies some design decisions. It is important that . . . the programmer be aware of the underlying criteria (for design decisions) and of the existence of alternative solutions . . .

The process of program refinement proposed by Wirth is analogous to the process of refinement and partitioning that is used during requirements analysis. The difference is in the level of implementation detail that is considered, not the approach.

Refinement is actually a process of elaboration. We begin with a statement of function (or description of information) that is defined at a high level of abstraction. That is, the statement describes function or information conceptually but provides no information about the internal workings of the function or the internal structure of the information. Refinement causes the designer to elaborate on the original statement, providing more and more detail as each successive refinement (elaboration) occurs.

Abstraction and refinement are complementary concepts. Abstraction enables a designer to specify procedure and data and yet suppress low-level details. Refinement helps the designer to reveal low-level details as design progresses. Both concepts aid the designer in creating a complete design model as the design evolves.
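To make the open/door discussion concrete, here is a minimal Java sketch of the two abstractions; the attributes mirror those listed in the text, while the code itself is an illustrative assumption rather than anything given by Pressman:

```java
// Data abstraction: a named collection of attributes describing a door.
public class Door {
    enum SwingDirection { INWARD, OUTWARD }

    String doorType;          // e.g., "panel" or "sliding" (illustrative values)
    SwingDirection swing;     // swing direction
    double weightKg;          // weight
    boolean isOpen;

    // Procedural abstraction: "open" names a sequence of steps whose
    // internal details the caller does not need to know.
    public void open() {
        if (!isOpen) {
            // walk to the door, grasp knob, turn knob, pull door...
            isOpen = true;
        }
    }
}
```

Note how open makes use of information held in the attributes of door, exactly as the text describes.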
Software System Development: Foreign Literature with Chinese Translation

Software system development: foreign literature with Chinese translation (the document contains the English original and the Chinese translation).

Process Models in Software Engineering
Walt Scacchi

Abstract: A software system goes through a series of stages, from initial development through maintenance and from one version upgrade to the next. This article summarizes and organizes a number of approaches that describe how software systems are developed.
It starts from the background and definition of the traditional software life cycle, which most textbooks discuss and which current software development practice follows, and then turns to the more comprehensive software development models that are the cornerstone of current software engineering technology.

Keywords: software life cycle; models; prototyping

1 Introduction

The explicit models that the software industry developed for large software projects can be traced back to the 1950s and 1960s.
On the whole, the sole purpose of these early software life cycle models was to provide a reasonable conceptual scheme for managing the development of software systems.
Such a scheme could then serve as a basis for planning, organizing, staffing, coordinating, budgeting, and directing software development activities.
Since the 1960s, many classic descriptions of the software life cycle have appeared (for example, Hosier 1961, Royce 1970, Boehm 1976, Distaso 1980, Scacchi 1984, Somerville 1999).
Royce (1970) used the now-familiar "waterfall" diagram to present the concept of the life cycle; the diagram summarizes how difficult developing a large software system is, because it involves complex engineering tasks that may require continual rework before they are completed.
These diagrams were also often adopted in introductory presentations for people involved in developing large software systems (for example, customers of custom software) who might be unfamiliar with the wide variety of technical problems that nevertheless had to be solved.
These classic software life cycle models usually include some of the following activities:
● System initiation/planning: where does the system come from? In most cases, a new system replaces or supplements existing information-processing mechanisms, whether they were previously automated, manual, or informal.
● Requirements analysis and specification: states the problem the new software system is to solve: its operational capabilities, the performance characteristics it must achieve, and the conditions needed to support system operation and maintenance.
● Functional or prototype specification: identifies the potential objects of computation, their attributes and relationships, the operations that change these objects, the constraints that restrict system behavior, and so on.
● Partitioning and selection: given the requirements and the functional specification, divide the system into manageable modules that mark logical subsystems, and then determine whether new, existing, or reusable software corresponding to these modules is available for reuse.
Software Development: Foreign Literature with Chinese Translation

Software development: foreign literature with Chinese translation (the document contains the English original and the Chinese translation).

Translation: Simulation Software for Electrically Large Complex Cavities Based on Secondary Development of UG

Abstract: Shooting-and-bouncing-ray (SBR) simulation software has been built through secondary development of Unigraphics (UG).
The core ray-tracing algorithm is based on the optimized non-uniform rational B-spline (NURBS) surface intersection routines built into UG, which yields very high accuracy in ray-path tracing without any meshing and thus preserves the accuracy of the original cavity model.
It also remains effective for arbitrarily complex cavities, since even the shadowing (masking) process is handled in the same way.
Cavity geometric modeling and its scattering simulation are integrated into a unified platform, forming an easy-to-use, comprehensive, and universal environment for the electromagnetic modeling of complex cavities.
Numerical results obtained with the software developed in this paper for modeling scattering from complex cavities are presented to show its accuracy and efficiency.

Keywords: electrically large complex cavity; radar cross section; secondary development of UG; shooting and bouncing rays (SBR); ray tracing

I. Introduction

The analysis of the radar cross section (RCS) of electrically large complex cavities, such as inlets and exhausts or dihedral and trihedral corner reflectors, is one of the most important topics in computational electromagnetics.
For electrically large complex cavity structures, only high-frequency methods such as shooting and bouncing rays (SBR) [1][2][3] are suitable.
Traditionally, SBR is applied in three steps: first, model the cavity in CAD software, mesh the surface of its inner walls, and export the meshing results; second, find the reflection points of the rays on the surfaces through ray-surface intersection and shadowing computations; finally, compute the RCS of the rays exiting the cavity.
Although such mesh-based ray tracing can in theory be applied to cavities of arbitrary shape, it suffers from inaccurate ray paths in complex cavities, which leads to poor RCS accuracy.
For electrically large complex cavities, the efficiency of ray tracing is also low, because separating cavity modeling from RCS computation complicates the simulation process.
To solve these problems, a powerful CAD package is used to model electrically large complex cavities and compute their RCS on the same platform.
The developed software has the following advantages: 1) cavity modeling and RCS computation are integrated in UG, so the simulation process is greatly simplified.
2) No surface meshing is needed, and rays can be traced with high accuracy and efficiency in a cavity of arbitrary shape.
3) The software is general for electromagnetic scattering from concave reflector structures such as cavities and corner reflectors.
The ray-tracing method of the new software based on secondary development of UG is discussed next, together with RCS simulation results.
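The operation that SBR repeats at every wall hit is a specular reflection. As a generic, minimal sketch of that single step (not the paper's UG/NURBS-based code, whose ray-surface intersection is far more involved), the reflected direction is r = d - 2(d.n)n for an incident direction d and a unit surface normal n:

```java
// Generic specular-reflection step of shooting and bouncing rays (SBR).
// Illustrative sketch only, not the UG-based implementation described above.
public final class Reflect {
    // Returns r = d - 2 (d . n) n, where n is a unit surface normal.
    public static double[] reflect(double[] d, double[] n) {
        double dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2];
        return new double[] {
            d[0] - 2 * dot * n[0],
            d[1] - 2 * dot * n[1],
            d[2] - 2 * dot * n[2]
        };
    }

    public static void main(String[] args) {
        // A ray travelling straight down hits a horizontal wall (normal +z).
        double[] r = reflect(new double[] {0, 0, -1}, new double[] {0, 0, 1});
        System.out.printf("reflected: (%.1f, %.1f, %.1f)%n", r[0], r[1], r[2]);
    }
}
```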
Foreign Literature Translation: Software and Software Engineering

Foreign literature translation: Software and Software Engineering: the emergence of software

As the decade of the 1980s began, a front page story in Business Week magazine trumpeted the following headline: "Software: The New Driving Force." Software had come of age—it had become a topic for management concern. During the mid-1980s, a cover story in Fortune lamented "A Growing Gap in Software," and at the close of the decade, Business Week warned managers about "the Software Trap—Automate or Else." As the 1990s dawned, a feature story in Newsweek asked "Can We Trust Our Software?" and The Wall Street Journal related a major software company's travails with a front page article entitled "Creating New Software Was an Agonizing Task . . ." These headlines, and many others like them, were a harbinger of a new understanding of the importance of computer software—the opportunities that it offers and the dangers that it poses.

Software has now surpassed hardware as the key to the success of many computer-based systems. Whether a computer is used to run a business, control a product, or enable a system, software is the factor that differentiates. The completeness and timeliness of information provided by software (and related databases) differentiate one company from its competitors. The design and "human friendliness" of a software product differentiate it from competing products with an otherwise similar function. The intelligence and function provided by embedded software often differentiate two similar industrial or consumer products. It is software that can make the difference.

During the first three decades of the computing era, the primary challenge was to develop computer hardware that reduced the cost of processing and storing data. Throughout the decade of the 1980s, advances in microelectronics resulted in more computing power at increasingly lower cost. Today, the problem is different. The primary challenge during the 1990s is to improve the quality (and reduce the cost) of computer-based solutions—solutions that are implemented with software.

The power of a 1980s-era mainframe computer is available now on a desk top. The awesome processing and storage capabilities of modern hardware represent computing potential. Software is the mechanism that enables us to harness and tap this potential.

The context in which software has been developed is closely coupled to almost five decades of computer system evolution. Better hardware performance, smaller size, and lower cost have precipitated more sophisticated computer-based systems. We've moved from vacuum tube processors to microelectronic devices that are capable of processing 200 million connections per second. In popular books on "the computer revolution," Osborne characterized a "new industrial revolution," Toffler called the advent of microelectronics part of "the third wave of change" in human history, and Naisbitt predicted that the transformation from an industrial society to an "information society" will have a profound impact on our lives. Feigenbaum and McCorduck suggested that information and knowledge will be the focal point for power in the twenty-first century, and Stoll argued that the "electronic community" created by networks and software is the key to knowledge interchange throughout the world. As the 1990s began, Toffler described a "power shift" in which old power structures (governmental, educational, industrial, economic, and military) will disintegrate as computers and software lead to a "democratization of knowledge."

Figure 1-1 depicts the evolution of software within the context of computer-based system application areas. During the early years of computer system development, hardware underwent continual change while software was viewed by many as an afterthought. Computer programming was a "seat-of-the-pants" art for which few systematic methods existed. Software development was virtually unmanaged—until schedules slipped or costs began to escalate. During this period, a batch orientation was used for most systems. Notable exceptions were interactive systems such as the early American Airlines reservation system and real-time defense-oriented systems such as SAGE. For the most part, however, hardware was dedicated to the execution of a single program that in turn was dedicated to a specific application.

Figure 1.1 Evolution of software

During the early years, general-purpose hardware became commonplace. Software, on the other hand, was custom-designed for each application and had a relatively limited distribution. Product software (i.e., programs developed to be sold to one or more customers) was in its infancy. Most software was developed and ultimately used by the same person or organization. You wrote it, you got it running, and if it failed, you fixed it. Because job mobility was low, managers could rest assured that you'd be there when bugs were encountered.

Because of this personalized software environment, design was an implicit process performed in one's head, and documentation was often nonexistent. During the early years we learned much about the implementation of computer-based systems, but relatively little about computer system engineering. In fairness, however, we must acknowledge the many outstanding computer-based systems that were developed during this era. Some of these remain in use today and provide landmark achievements that continue to justify admiration.

The second era of computer system evolution (Figure 1.1) spanned the decade from the mid-1960s to the late 1970s. Multiprogramming and multiuser systems introduced new concepts of human-machine interaction. Interactive techniques opened a new world of applications and new levels of hardware and software sophistication. Real-time systems could collect, analyze, and transform data from multiple sources, thereby controlling processes and producing output in milliseconds rather than minutes. Advances in on-line storage led to the first generation of database management systems.

The second era was also characterized by the use of product software and the advent of "software houses." Software was developed for widespread distribution in a multidisciplinary market. Programs for mainframes and minicomputers were distributed to hundreds and sometimes thousands of users. Entrepreneurs from industry, government, and academia broke away to "develop the ultimate software package" and earn a bundle of money.

As the number of computer-based systems grew, libraries of computer software began to expand. In-house development projects produced tens of thousands of program source statements. Software products purchased from the outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon. All of these programs—all of these source statements—had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance. Effort spent on software maintenance began to absorb resources at an alarming rate. Worse yet, the personalized nature of many programs made them virtually unmaintainable. A "software crisis" loomed on the horizon.

The third era of computer system evolution began in the mid-1970s and continues today. The distributed system—multiple computers, each performing functions concurrently and communicating with one another—greatly increased the complexity of computer-based systems. Global and local area networks, high-bandwidth digital communications, and increasing demands for 'instantaneous' data access put heavy demands on software developers.

The third era has also been characterized by the advent and widespread use of microprocessors, personal computers, and powerful desk-top workstations. The microprocessor has spawned a wide array of intelligent products—from automobiles to microwave ovens, from industrial robots to blood serum diagnostic equipment. In many cases, software technology is being integrated into products by technical staff who understand hardware but are often novices in software development.

The personal computer has been the catalyst for the growth of many software companies. While the software companies of the second era sold hundreds or thousands of copies of their programs, the software companies of the third era sell tens and even hundreds of thousands of copies. Personal computer hardware is rapidly becoming a commodity, while software provides the differentiating characteristic. In fact, as the rate of personal computer sales growth flattened during the mid-1980s, software-product sales continued to grow. Many people in industry and at home spent more money on software than they did to purchase the computer on which the software would run.

The fourth era in computer software is just beginning. Object-oriented technologies (Chapters 8 and 12) are rapidly displacing more conventional software development approaches in many application areas. Authors such as Feigenbaum and McCorduck [FEI83] and Allman [ALL89] predict that "fifth-generation" computers, radically different computing architectures, and their related software will have a profound impact on the balance of political and industrial power throughout the world. Already, "fourth-generation" techniques for software development (discussed later in this chapter) are changing the manner in which some segments of the software community build computer programs. Expert systems and artificial intelligence software have finally moved from the laboratory into practical application for wide-ranging problems in the real world. Artificial neural network software has opened exciting possibilities for pattern recognition and human-like information processing abilities.

As we move into the fourth era, the problems associated with computer software continue to intensify:

Hardware sophistication has outpaced our ability to build software to tap hardware's potential.
Our ability to build new programs cannot keep pace with the demand for new programs.
Our ability to maintain existing programs is threatened by poor design and inadequate resources.

In response to these problems, software engineering practices—the topic to which this book is dedicated—are being adopted throughout the industry.

An Industry Perspective

In the early days of computing, computer-based systems were developed using hardware-oriented management. Project managers focused on hardware because it was the single largest budget item for system development. To control hardware costs, managers instituted formal controls and technical standards. They demanded thorough analysis and design before something was built. They measured the process to determine where improvements could be made. Stated simply, they applied the controls, methods, and tools that we recognize as hardware engineering. Sadly, software was often little more than an afterthought.

In the early days, programming was viewed as an "art form." Few formal methods existed and fewer people used them. The programmer often learned his or her craft by trial and error. The jargon and challenges of building computer software created a mystique that few managers cared to penetrate. The software world was virtually undisciplined—and many practitioners of the day loved it!

Today, the distribution of costs for the development of computer-based systems has changed dramatically. Software, rather than hardware, is often the largest single cost item. For the past decade managers and many technical practitioners have asked the following questions:

Why does it take so long to get programs finished?
Why are costs so high?
Why can't we find all errors before we give the software to our customers?
Why do we have difficulty in measuring progress as software is being developed?

These, and many other questions, are a manifestation of the concern about software and the manner in which it is developed—a concern that has led to the adoption of software engineering practices.
English Literature Original and Translation for Software Engineering

English literature original and translation. Student: Zhao Fan; student ID: 1021010639; school: School of Software; major: Software Engineering; advisors: Wu Min, Gu Chenxin; June 2014.

English original: The Use of Skinning

What is skinning? In character setup, the final step in the preparations for animation is skinning: using the skinning tools to bind the character model to the skeleton system that has been set up for it. Only after this procedure can the finished character model be rendered and turned into animation. The pose the skeleton holds during the skinning process is called the bind pose. After skinning, moving the bones deforms the skin. Sometimes the deformation is inappropriate, and the bones or the skin must be changed accordingly; the relevant commands can then restore the skeleton to its bind position and disconnect the association between bones and skin. In Maya, the bones and skin can be disconnected and reconnected at any time. There are direct skinning methods (smooth skinning and rigid skinning) and indirect ones (lattice or wrap deformers used together with smooth or rigid skinning).

In recent years more and more 3D animation packages have appeared, competition in the market is fierce, and software companies keep developing and updating their products to make them more approachable, but Maya remains the mainstream 3D animation software. To create a character with bones, flesh, and spirit is the dream of every CG digital artist. Whether a digital character has charm is a test of the animator's understanding of life. To give a digital character bones and flesh, the producer must have a full grasp of the character's body and motor functions. In addition, whether a character looks realistic depends crucially on the design and production of its skin, so skilled technical and creative mastery of the skinning tools of the animation software is essential. Skinning is the final step in preparing for animation; once this procedure is done, the designed movements can be produced. If the skinning work is not done well, trouble follows during animation, so skinning is very important.

Because of its accuracy and realism, 3D animation is developing rapidly, and nowadays it is used everywhere: architecture, planning, landscape design, product demonstrations, simulation, film animation, advertising, character animation, virtual reality, and more all fully reflect its current importance. If 3D animation is compared to puppet animation in real life, then the puppet corresponds to the Maya model, the puppet performer corresponds to the Maya animator, and the steel joints in the puppet's body are the skeleton system. The bones will not appear in the final rendering; their role is only that of a support that can simulate how real bones drive the major joints to move, rotate, and so on. When the skeleton is set up, we bind the model to it; this step is like mounting the various external parts onto a robot. Then, through the various settings, keyframe animation is added to the bones, which in turn drives the corresponding joints of the bound model. Thus, in the final animation, a stiff, motionless model acquires vitality. Seen as a whole the rigging process may be more tedious than keyframe animation, but rigging is the core, and the soul, of 3D animation.

Rigging plays a vital role in a 3D animation. Good rigging makes animation production easier, and lets designers adjust a figure's actions faster and more conveniently. Every step of the binding affects the final animation; binding is the premise on which animation is done. Good binding makes it convenient for animators to animate, makes the animation more fluid, and allows the characters to perform with even more life. Besides body rigging there is also facial binding, which lets characters speak and show different facial expressions. Everything in binding is done for the sake of the animation, and a good binding setup is based mainly on the style and workflow of the whole production. Rigging is an indispensable part of 3D animation.

The 3D animation production pipeline is: modeling, texturing, binding, animation, rendering, effects, compositing. Every link is connected. The model and the materials determine the style of the animation; the binding and the animation determine its fluency; and rendering, effects, and compositing determine the colors and the final result.

Three-dimensional animation, also known as 3D animation, is an emerging technology. 3D animation gives a three-dimensional realism, even for subtle animal hair, and this effect has been widely applied to film and television production, education, medicine, and many other areas. Movie crashes, deformations, and fantasy scenes are all 3D animation brought into real life. Designers first create a virtual scene in 3D animation software, then build models according to proportion, set the motion trajectories of the models and the parameters of the virtual camera as required, animate, and finally assign specific materials to the models and place the lights. When all this is done, the rendering is output and the final picture is generated. DreamWorks' "Shrek" and Pixar's "Finding Nemo" achieved a visual impact that two-dimensional animation cannot match.

The animated film "Finding Nemo" made extensive use of Maya scene technology. Producing the animation of 77,000 jellyfish was one of the most formidable challenges for both the technical staff and the artists. These pink translucent jellyfish demanded patience and skill above all; one can say that with the jellyfish, animated sea creatures took a big step forward, and the skinning technology served them very well. The skinning techniques used on the film's characters are so good that every character is vivid; whether in expression or in action everything is smooth, and the underwater world they inhabit is beautiful. Creating with Maya requires, first, a full understanding of its technology: creative imagination is free, but the use of the technology has its limits. With smooth (flexible) skinning, many characters were bound smoothly for editing; weight-redistribution tools were needed to adjust the skeleton's control over the model through properly weighted points, so that every detail of the clownfish is soft and realistic. Around a joint the weights should be smoothed out so that the joint is isolated from other influences and a movement does not drag neighboring parts along. Rigid skinning was used less; rigidly bound lattice objects must be created at suitable positions to help the bones with joint motion.

In "Finding Nemo" there is a great deal of facial animation, and good facial skinning techniques are what make the facial expressions work; facial animation is animation in its own right, and facial animation technology is becoming ever more capable. The early skinning must be done well so that it does not impair the expressions later. This is how the film explores Maya digital technology for creation: playing to the styling advantages of its imagery and to the industrial processes that creative personnel need, covering the 3D portion of the content (the Maya part), the 2D hand-drawn part, and post-compositing, from technical production to artistic pursuit, capturing the whole production cycle of creation from several angles. Among the Maya techniques used in "Finding Nemo," smooth skinning appears many times; the clownfish faces carry a great deal of smooth binding, making them more expressive, and Maya's technical advantages are applied within certain limits. Realistic 3D imaging technology gives the animation spatial depth and density and a sense of space, and brings out the mysterious underwater world to the fullest. Lifelike action likewise lends realistic density to the necessary shots and open-water scenes; exploring this was the main goal of the 3D animation in this Maya-based movie.
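The deformation that skin weights drive during smooth binding can be summarized by the standard linear blend skinning formula, v' = sum over joints i of w_i * M_i * v. The sketch below shows the generic algorithm, not Maya's internal code, and the data layout is an illustrative assumption:

```java
// Generic linear blend skinning: each deformed vertex is a weight-blended
// sum of the vertex transformed by each influencing joint.
public final class LinearBlendSkinning {
    // Apply a 3x4 affine matrix (row-major, 12 values) to a 3D point.
    static double[] transform(double[] m, double[] v) {
        return new double[] {
            m[0] * v[0] + m[1] * v[1] + m[2]  * v[2] + m[3],
            m[4] * v[0] + m[5] * v[1] + m[6]  * v[2] + m[7],
            m[8] * v[0] + m[9] * v[1] + m[10] * v[2] + m[11]
        };
    }

    // weights[i] is joint i's influence on this vertex; the weights sum to 1,
    // which is what "redistributing weights" around a joint preserves.
    static double[] skin(double[] vertex, double[][] jointMatrices, double[] weights) {
        double[] out = new double[3];
        for (int i = 0; i < weights.length; i++) {
            double[] p = transform(jointMatrices[i], vertex);
            out[0] += weights[i] * p[0];
            out[1] += weights[i] * p[1];
            out[2] += weights[i] * p[2];
        }
        return out;
    }
}
```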
Foreign Literature Translation: Software Engineering

Software engineering
From: /zh-cn/%E8%BD%AF%E4%BB%B6%E5%B7%A5%E7%A8%8B (Chinese translation: about 2,860 characters)

Software engineering is the discipline that studies the use of engineering methods to build and maintain effective, practical, and high-quality software. It involves programming languages, databases, software development tools, system platforms, standards, design patterns, and so on.

In modern society, software is used in many ways. Typical software includes email, embedded systems, human-machine interfaces, office suites, operating systems, compilers, databases, and games. Meanwhile, almost all sectors use computer software applications, such as industry, agriculture, banking, aviation, and government departments. These applications promote economic and social development, make people's work more efficient, and improve the quality of life.

Software engineers are the people who create software applications; by specialty, software engineers can be divided into system analysts, software designers, system architects, programmers, testers, and so on. The term "programmer" is also often used to refer to the various kinds of software engineers.

Origin

In view of the difficulties encountered in software development, the North Atlantic Treaty Organization (NATO) organized the first conference on software engineering in 1968, at which the term "software engineering" was put forward to define the knowledge required for software development, and it was suggested that "software development should be an activity similar to engineering." Since software engineering was formally proposed in 1968, a large body of research results has accumulated and a great deal of technology has been widely practiced; through the joint efforts of academia and industry, software engineering is gradually developing into a professional discipline.

Definitions

● The establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently.
● The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software.
● The theories, methods, and tools concerned with developing, managing, and updating software products.
● A body of knowledge or discipline whose aim is to produce software of good quality, delivered on time, within budget, and satisfying users' needs.
● The practical application of scientific knowledge in the design and construction of computer programs and the accompanying documentation, and their subsequent operation and maintenance.
● The systematic application of the technical and management expertise related to producing and maintaining software products, enabling software to be developed and modified within limited time and cost.
● The discipline of knowledge that teams of engineers use to develop large software systems.
● A systematic approach to the analysis, design, implementation, and maintenance of software.
● The systematic application of tools and techniques in the development of computer-based applications.

Software engineering and computer science

Whether software development is ultimately a science or an engineering discipline has long been debated. In fact, software development has characteristics of both, but this does not mean that the two can be confused with each other. Many people think that software engineering is based on computer science and information science just as traditional engineering is based on physics and chemistry. In the U.S., about 40% of software engineers have a degree in computer science; elsewhere in the world the ratio is similar. They will not necessarily use knowledge of computer science every day, but they use software engineering knowledge every day.

For example, Peter McBreen holds that software "engineering" implies a higher degree of rigor and proven processes that are not suitable for all stages and types of software development. In his book "Software Craftsmanship: The New Imperative," McBreen puts forward the so-called "craftsmanship" argument: the key factor in successful software development is the skill of the developers, not the process by which the software is "manufactured."

Software engineering and computer programming

Software engineering exists in all aspects of software development across a variety of applications. Program design typically includes the iterative process of designing and coding; it is one stage of software development. Software engineering seeks to provide guidance for all aspects of a software project, from the feasibility analysis of the software until the maintenance work after the software is completed. Software engineering holds that software development is closely related to marketing activities, such as software sales, user training, and the installation of the associated hardware and software. Software engineering methodology holds that program design should not be carried out by programmers working independently of the team, and that the writing of programs cannot be divorced from the software requirements, the design, and the customers' interests. Software engineering is the embodiment of the industrialized development of computer programs.

The software crisis

Software engineering is rooted in the software crisis that arose in the 1960s, 1970s, and 1980s. At that time, many software projects came to tragic ends. Many significantly exceeded their planned schedules; some led to the loss of property; some even led to casualties. At the same time, software developers found software development increasingly difficult.

The OS/360 operating system is considered a typical case. It is still used on the IBM 360 series mainframes. Even after decades of experience, extremely complex software projects still lack a working system that covers everything included in the original design. OS/360 was the first large software project, employing about 1,000 programmers. Fred Brooks admitted in his subsequent masterpiece "The Mythical Man-Month" that in the project he managed he made a mistake worth millions of dollars.

Property losses: software errors may result in significant property damage. The explosion of the European Ariane rocket is one of the most painful lessons.

Casualties: as computer software is widely used in industries closely related to life, such as hospitals, software errors may also result in personal injury or death. A case used extensively in software engineering is the Therac-25 accidents. Between June 1985 and January 1987, six known medical accidents in which the Therac-25 delivered excessive doses led to death or severe radiation burns. In industry, some embedded systems have caused machines to fail to operate normally, endangering the people involved.

Methodology

Software engineering covers many aspects, including project management, analysis, design, programming, testing, and quality control. Software design methods can be distinguished as heavyweight and lightweight methods. Heavyweight methods produce large amounts of formal documentation; well-known heavyweight development methodologies include ISO 9000, CMM, and the Rational Unified Process (RUP).

Lightweight development processes do not formally require large amounts of documentation. Well-known lightweight methods include Extreme Programming (XP) and agile processes.

According to the article "The New Methodology," heavyweight methods present a "defensive" posture. In software organizations applying heavyweight methods, because the project manager participates little, or not at all, in program design and cannot grasp the project's progress from its details, a kind of "fear" arises, and the programmers are constantly asked to write many "software development documents." Lightweight methods, by contrast, present an "offensive" attitude, reflected especially in the four values XP emphasizes: "communication, simplicity, feedback, and courage." Some people believe that heavyweight methods are suitable for large software teams (dozens of people or more) and lightweight methods for small teams (a few to a dozen people). Of course, the merits and drawbacks of heavyweight and lightweight methods are much debated, and the various methods are constantly evolving.

Some methodologists think that these methods should be strictly followed in development and implementation. But not everyone has the conditions to implement them. In fact, which method is used for software development depends on many factors and is subject to environmental constraints.

Software development process

The software development process has evolved and improved with the development of technology, from the early waterfall model, through the later spiral iterative development, to the recently rising agile development methodology (Agile); these reflect the different awareness and understanding of the development process in different eras and the methods suited to different types of projects.

Note the important distinction between the software development process and software process improvement. Terms such as ISO 15504, ISO 9000, CMM, and CMMI are elaborated within the framework of software process improvement; they provide a series of standards and policies to guide software organizations in improving the quality of the software development process and the capability of the organization, and do not give the definition of a specific development process.

Developments in software engineering

"Agile development" is considered an important development in software engineering. It stresses that software development should be able to respond comprehensively to possible future changes and uncertainties. Agile development is considered a "lightweight" approach; the most prestigious lightweight method is Extreme Programming (XP). Corresponding to lightweight methods are "heavyweight methods," which emphasize the development process as the center rather than being people-centered; examples include CMM/PSP/TSP.

Aspect-oriented programming (AOP) is considered another important development in software engineering in recent years. An aspect refers to a collection of objects and functions that together complete one function. Related topics include generic programming and templates.
Foreign literature material

1. Software Engineering

Software is the sequences of instructions in one or more programming languages that comprise a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system life-cycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirement to define a system architecture. The data and action components are encapsulated, that is, they are combined together, to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained only to communicate via messages. At a minimum, messages indicate the receiver and action requested. Messages may be more elaborate, including the sender and data to be acted upon.

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the type of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree with a new sequence of events or an unfamiliar format for documentation.

You might ask then, if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.
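A minimal Java sketch of the encapsulation and message-passing idea described above (the account example is invented for illustration, not taken from the text):

```java
// Encapsulation: the data and the allowable processes against that data
// are combined; other objects interact only by sending messages (calls).
public class Account {
    private double balance;  // encapsulated data, unreachable from outside

    public void deposit(double amount) { balance += amount; }

    public boolean withdraw(double amount) {
        if (amount > balance) return false; // the object guards its own state
        balance -= amount;
        return true;
    }

    public double balance() { return balance; }
}
```

A caller never touches balance directly; each method call plays the role of a message naming the receiver (the account) and the requested action.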
2. Database System

1. Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the areas of what it is possible for man to do. At the end of this century, historians will look back to the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information. The vast majority of this information is not yet computerized. However, the cost of data storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access time. Second, there are solid-state technologies that will give microsecond access time but whose capacities are smaller than disks. Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies currently being worked on in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next five years, rapidly lowering the cost of storing data.

Given the available technologies, it is likely that on-line databases will use two or three levels of storage: one solid-state with microsecond access time, one electromagnetic with access time of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in database design is to store the data so that they can be used for a wide variety of applications and so that the way they are used can be changed quickly and easily. On computer installations prior to the database era it has been remarkably difficult to change the way data are used.
Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of database design are important. First, it should be possible to interrogate and search the database without the lengthy operation of writing programs in conventional programming languages. Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a database is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of database management system (DBMS) to manage data.

Database design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical database design process.

Developing a database structure from user requirements is called database design. Most practitioners agree that there are two separate phases to the database design process: the design of a logical database structure that is processable by the database management system (DBMS) and describes the user's view of data, and the selection of a physical structure such as the indexed sequential or direct access method of the intended DBMS.

Current database design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them. The advent of the DBMS revised the emphasis in data and program design approaches.

There are many interlocking questions in the design of database systems and many types of technique that one can use in answer to the questions; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of databases. The details will change, but most of the principles will remain.
Therefore, the reader should concentrate on the principles.

2. Database system

The conceptions used for describing files and databases have varied substantially, even within the same organization.

A database may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use them; a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the database. One system is said to contain a collection of databases if they are entirely separate in structure.

A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, a DBMS, and a database.

One of the most important characteristics of most databases is that they will constantly need to change and grow. Easy restructuring of the database must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a database can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a database. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, database organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A database used for many applications can have multiple interconnections between the data items about which we may wish to record. It can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as that key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a database were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. Describing the data logically is different from describing them physically.

The logical database description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record. When we talk about a "personnel record," this is really a record type. There are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a database.
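In Java terms, a rough analogy (invented for illustration, not the book's notation): a record type corresponds to a class, an instance of the record to an object, and the primary key to the attribute the addressing mechanism uses to locate a record:

```java
import java.util.HashMap;
import java.util.Map;

// A "record type": the names and types of the attributes, with no values yet.
class PersonnelRecord {
    final String employeeId;  // primary key: uniquely identifies one record
    String name;
    String department;

    PersonnelRecord(String employeeId, String name, String department) {
        this.employeeId = employeeId;
        this.name = name;
        this.department = department;
    }
}

class PersonnelFile {
    // Locating a record by primary key, as an index or addressing
    // algorithm would in a real database.
    private final Map<String, PersonnelRecord> byId = new HashMap<>();

    void insert(PersonnelRecord r) { byId.put(r.employeeId, r); }
    PersonnelRecord lookup(String employeeId) { return byId.get(employeeId); }
}
```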
Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently, so it must organize its system buffers in such a way that different data operations can be in process together. It provides a data definition language for specifying the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language that enables one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous. We discuss data models in the following section.

3、Three Data Models

Logical schemas are defined as data models with the underlying structure of particular data-base management systems superimposed on them. At the present time there are three main underlying structures for data-base management systems:

Relational
Hierarchical
Network

The hierarchical and network structures have been used for DBMS since the 1960s; the relational structure was introduced in the early 1970s.

In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model: his or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically.

The relational data model removes the details of storage structure and access strategy from the user interface, and in doing so it provides a relatively high degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements providing faster and more reliable hardware may answer this question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root, and the nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at succeeding levels, with the node at the preceding level becoming the parent node of the new dependent nodes; a parent node can have a single child node as a dependent, or many children nodes.
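A minimal Java sketch of this tree-like structure follows, assuming purely illustrative company, department, and employee names: each node carries the attributes describing its entity, the node without a parent is the root, and every lower-level node hangs off exactly one parent.

```java
import java.util.ArrayList;
import java.util.List;

public class HierarchySketch {
    static class Node {
        final String attributes;              // data attributes at this node
        final List<Node> children = new ArrayList<>();

        Node(String attributes) { this.attributes = attributes; }

        Node addChild(String attrs) {         // this node becomes the parent
            Node child = new Node(attrs);
            children.add(child);
            return child;
        }
    }

    static void print(Node n, int depth) {    // walk the tree from the root down
        System.out.println("  ".repeat(depth) + n.attributes);
        for (Node c : n.children) print(c, depth + 1);
    }

    public static void main(String[] args) {
        Node root = new Node("Company: Acme");        // the root node
        Node sales = root.addChild("Dept: Sales");    // child of the root
        sales.addChild("Emp: Smith");                 // children of Sales
        sales.addChild("Emp: Allen");
        root.addChild("Dept: Research");
        print(root, 0);
    }
}
```

Note that in this structure an employee node can be reached only by first visiting its department node, which is exactly the access restriction discussed next.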
The major advantage of the hierarchical data model is the existence of proven data-base management systems that use the hierarchical data model as their basic structure. There is a reduction of data dependency, but since any child node is accessible only through its parent node, the many-to-many relationship can be implemented only in a clumsy way, and this often results in redundancy in the stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas; an area contains records; and a record, in turn, may consist of fields. A set, which is a grouping of records, may reside in one area or span a number of areas. A set type is based on an owner record type and a member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily in this model. The network data model is very complex, however, and the application programmer must be familiar with the logical structure of the data base.

4、Logical Design and Physical Design

Logical design of data bases is mainly concerned with superimposing the constructs of the data-base management system on the logical data model. There are three main models, as mentioned above: hierarchical, relational, and network.

The physical model is a framework of the data base to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting data base. One should carry out an analysis of the physical model with the average frequencies of occurrence of the groupings of data elements, with expected space estimates, and with time estimates for retrieving and maintaining the data.

The data-base designer may find it necessary to have multiple entry points into a data base, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on those keys. The physical designer must have expertise in the DBMS functions, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.

Records exist on storage devices in a given physical sequence, and this sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need the records in different sequences.

The most common method of ordering records is to have them in sequence by a key: the key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.

Hashing has been used for addressing random-access storage since such storage first came into existence in the mid-1950s, although nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing on the suspicion that it is complicated. In fact, it is simple to use, and it has two important advantages over indexing: first, it finds most records with only one seek, and second, insertions and deletions can be handled without added complexity.
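The principle can be seen in a toy Java sketch. This is an illustration of hash addressing in general, not of any real DBMS's scheme, and the keys are made-up record numbers: the key is hashed directly to a bucket number, so a record is usually found with a single access, and an insertion or deletion touches only the one bucket, with no index to rebuild.

```java
import java.util.ArrayList;
import java.util.List;

public class HashSketch {
    static final int BUCKETS = 8;
    // each bucket holds the keys that hash to it (collisions chain here)
    static final List<List<Integer>> store = new ArrayList<>();

    static int bucketOf(int key) { return Math.floorMod(key, BUCKETS); }

    public static void main(String[] args) {
        for (int i = 0; i < BUCKETS; i++) store.add(new ArrayList<>());

        // insertion: hash the key and append to its bucket
        for (int key : new int[] {7369, 7499, 7521, 7566}) {
            store.get(bucketOf(key)).add(key);
        }

        // retrieval: one hash computation leads straight to the right bucket
        int wanted = 7521;
        boolean found = store.get(bucketOf(wanted)).contains(wanted);
        System.out.println(wanted + (found ? " found" : " not found")
                + " in bucket " + bucketOf(wanted));

        // deletion: remove from the one bucket; nothing else moves
        store.get(bucketOf(wanted)).remove(Integer.valueOf(wanted));
    }
}
```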
Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers. The software that is used to retrieve the chained records makes them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter, because the records have to be read anyway; in most search operations, however, the chains have to be followed through records which would not otherwise be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5、Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to the data-base management system what data structures will be used.

A data description language giving a logical data description should perform the following functions:

It should give a unique name to each data-item type, file type, data base, and other data subdivision.
It should identify the types of data subdivision, such as data item, segment, record, and data-base file.
It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
It may define the length of the data items and the range of values that a data item can assume.
It may specify the sequence of records in a file, or the sequence of groups of records in the data base.
It may specify means of checking for errors in the data.
It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level and, if necessary, may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined; it is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, nor should it specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used, or of searching requirements, so that the physical technique can be selected optimally, but such indications should not be logically limiting.

Most DBMS have their own languages for defining the schemas that are used. In most cases these data description languages differ from conventional programming languages, because conventional programming languages lack the capability to define the variety of relationships that may exist in the schemas.

Appendix B  Translated Text

1、Software Engineering

Software is a sequence of instructions, written in one or more programming languages, which enables a computer to automate the performance of certain tasks.