English Translation Tools: Foreign-Literature Translation for the Software Engineering Capstone Project


Worth bookmarking: 23 English-literature translation tools compiled by Zhihu users, to help with reading the literature


01 Sogou Translate: Sogou Translate's document translation feature has three advantages. First, you can upload a document directly, which simplifies the workflow; this is what I would call genuinely one-click translation (what I did before took a great many clicks...). Second, when reading the result online, the system shows the original and the translation side by side in real time, which makes comparison easy. Third, the translated document can be downloaded for free, convenient for further study or sharing.

02 Google Chrome: Picture this scenario: you want to find, on PubMed, the papers of Prof. Yigong Shi with Tsinghua University as the first affiliation. In Chrome, open PubMed and search in the format "Yigong Shi Tsinghua University" to pull up his publications.

Next, you spot one that looks pretty good and click through, and... it is still entirely in English.

This is when you can try Chrome's built-in page translation: it really does translate in seconds, turning the English into Chinese, and it lets you switch quickly between the Chinese and English versions of the page.

03 Adobe Acrobat: Here let me introduce another gem that translates PDF documents in seconds (I use Adobe Acrobat Pro DC; readers can look up how to download and install it themselves).

Note one thing, though: this is Adobe Acrobat, not Adobe Reader.

Here, allow me to say a word about the company behind Adobe Acrobat: Adobe.

In the software world, Adobe is an absolute giant among giants.

By way of example: the tools we use all the time, such as PS, PR, AE, In, and LR, are without exception top of their respective fields, and every one of them is an Adobe product.

Among them is an outstanding PDF editing and processing tool: Adobe Acrobat.

(PDF itself, now the internationally accepted document storage format, is said to have originated with it.) OK, on to the main topic.

What can it do? Convert PDF to Word, merge images into a PDF, edit PDFs, and more; if it involves PDF, Acrobat can basically handle it.

So how do we use it to translate a PDF of the literature?
Step 1: open the PDF in Acrobat.
Step 2: click "File" on the menu bar, then "Save As", and choose "HTML" as the save format.
Step 3: when the export finishes you get two files: one is the PDF converted into an HTML web page; the other holds the page's supporting images (delete it and the images in the page will not display).
Step 4: open the HTML file with Google Chrome and apply Chrome's page translation, and the translation appears in seconds.
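For readers comfortable with a little scripting, the same PDF-to-HTML step can also be done in code. Below is a minimal sketch, assuming the third-party pdfminer.six package is installed (pip install pdfminer.six); the file names are placeholders, and this is an alternative route, not part of the Acrobat workflow above.

```python
# A sketch of the PDF-to-HTML conversion done in code, using the
# third-party pdfminer.six package (pip install pdfminer.six).
# "paper.pdf" / "paper.html" are placeholder file names.
from pdfminer.high_level import extract_text_to_fp
from pdfminer.layout import LAParams

with open("paper.pdf", "rb") as pdf_in, open("paper.html", "wb") as html_out:
    # output_type="html" makes pdfminer emit an HTML rendering of the text,
    # which Chrome (and its page translator) can then open like any web page.
    extract_text_to_fp(pdf_in, html_out, laparams=LAParams(), output_type="html")
```

The resulting HTML can then be opened in Chrome and translated exactly as in Step 4.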

Which apps offer translations of college English textbooks?


There are many apps that can help students translate college English textbooks. Here are some commonly used ones. 1. Google Translate: Google Translate is a widely used translation tool that offers translation between many languages.

It takes text, voice, or photo input, and it supports offline translation.

Although the quality of Google Translate's output sometimes has problems, it remains a quick and convenient tool, well suited to simple translations in a pinch.

2. Pleco: Pleco is a professional Chinese-English dictionary tool aimed mainly at foreign students learning Chinese.

Besides its dictionary functions, Pleco offers OCR-based photo translation, which can recognize and translate text photographed from an English textbook.

In addition, Pleco provides a wealth of example sentences, helping students better understand and use English words and phrases.

3. Microsoft Translator: Microsoft Translator is a multilingual translation tool developed by Microsoft.

It supports text translation, speech translation, and live translation, covering translation needs in different scenarios.

Compared with other translation tools, Microsoft Translator is relatively accurate and stable, and it handles complex sentences and technical terms fairly well.

4. Youdao Dictionary (有道词典): Youdao Dictionary is a comprehensive translation tool covering Chinese, English, Japanese, Korean, and other languages.

It translates words, phrases, and sentences, and offers native-speaker pronunciation and real-time online translation.

Youdao Dictionary also lists inflections, synonyms, antonyms, and other details, helping students better understand the content of their English textbooks.

Summary: all of the apps above can help students translate college English textbooks. Each has its own characteristics and strengths; choose whichever suits your needs.

When translating with these apps, be aware of their limitations and errors, and interpret the results in context as much as possible, to keep the translation accurate and fluent.
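For bulk use, the same kind of service can also be called from a script rather than an app. A minimal sketch, assuming the third-party deep-translator package (pip install deep-translator), which is an unofficial wrapper around the Google Translate web endpoint, so quality and availability match the app's; the sample sentence is arbitrary:

```python
# A sketch of scripted translation with the third-party deep-translator
# package (pip install deep-translator); this is an unofficial Google
# Translate wrapper, not an official API.
from deep_translator import GoogleTranslator

translator = GoogleTranslator(source="en", target="zh-CN")
print(translator.translate("Software engineering is the systematic "
                           "application of tools and techniques."))
```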

Software Engineering (Foreign Literature with Translation)


Foreign text:

1. Software Engineering

Software is the sequence of instructions in one or more programming languages that comprises a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system lifecycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirements to define a system architecture. The data and action components are encapsulated, that is, they are combined together, to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained only to communicate via messages. At a minimum, messages indicate the receiver and the action requested. Messages may be more elaborate, including the sender and the data to be acted upon.

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the type of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like, and it is hard to change habits and cultures from the old way of doing things, as well as to get users to agree with a new sequence of events or an unfamiliar format for documentation.

You might ask then, if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.

2. Data Base Systems

1) Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s.
Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize its value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the realm of what it is possible for man to do. By the end of this century, historians will look back to the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information. The vast majority of this information is not yet computerized. However, the cost of data-storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times but whose capacities are smaller than those of disks. Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies currently working in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next 5 years, rapidly lowering the cost of storing data.

Given the available technologies, it is likely that on-line data bases will use two or three levels of storage: one solid-state with microsecond access time, one electromagnetic with access time of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in data base design is to store the data so that they can be used for a wide variety of applications and so that the way they are used can be changed quickly and easily. On computer installations prior to the data base era it has been remarkably difficult to change the way data are used.
Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of data base design are important. First, it should be possible to interrogate and search the data base without the lengthy operation of writing programs in conventional programming languages. Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a data base is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of data base management system (DBMS) to manage data.

Data base design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) The physical data base design process.

Developing a data base structure from user requirements is called data base design. Most practitioners agree that there are two separate phases to the data base design process: the design of a logical data base structure that is processable by the data base management system (DBMS) and describes the user's view of data, and the selection of a physical structure such as the indexed sequential or direct access method of the intended DBMS.

Current data base design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data has been defined and structured in terms of the individual applications that use them. The advent of the DBMS revised the emphasis in data and program design approaches.

There are many interlocking questions in the design of data-base systems, and many types of technique that one can use in answer to them; so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of data bases. The details will change, but most of the principles will remain.
Therefore, the reader should concentrate on the principles.

2) Data Base Systems

The terminology used for describing files and data bases has varied substantially, even within the same organization.

A data base may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use the data; a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the data base. One system is said to contain a collection of data bases if they are entirely separate in structure.

A data base may be designed for batch processing, real-time processing, or in-line processing. A data base system involves application programs, a DBMS, and a data base.

One of the most important characteristics of most data bases is that they will constantly need to change and grow. Easy restructuring of the data base must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a data base can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a data base. It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, data-base organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A data base used for many applications can have multiple interconnections between the data items about which we may wish to record facts; in this way it can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes, and one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a data base were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. Describing the data logically is different from describing it physically. The logical data base description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relations between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record. When we talk about a "personnel record", this is really a record type. There are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a data base.
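The schema and primary-key ideas above can be made concrete with a short sketch. This uses Python's built-in sqlite3 module as a modern stand-in (SQL DDL is not the notation of this 1970s-era text); the table and column names are invented for illustration:

```python
# A sketch of a schema and a primary key using Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
# The schema names the entity (employee), its attributes, and its key.
conn.execute("""
    CREATE TABLE employee (
        emp_id INTEGER PRIMARY KEY,  -- uniquely identifies one record (tuple)
        name   TEXT NOT NULL,
        dept   TEXT
    )
""")
conn.execute("INSERT INTO employee VALUES (1, 'Alice', 'R&D')")
# The DBMS locates the record through the key rather than scanning the file.
print(conn.execute("SELECT name FROM employee WHERE emp_id = 1").fetchone())
```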
Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss the data model in the following.

3) Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems: relational, hierarchical, and network.

The hierarchical and network structures have been used for DBMSs since the 1960s. The relational structure was introduced in the early 1970s. In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model. His or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically. The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively higher degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMSs based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root. The nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels. The node in the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent, or many children nodes.
The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure. There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in a redundancy in stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set, which is a grouping of records, may reside in an area or span a number of areas. A set type is based on the owner record type and the member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex, however, and the application programmer must be familiar with the logical structure of the data base.

4) Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the data base management system on the logical data model. There are three main models, as mentioned above: hierarchical, relational, and network.

The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrence of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data. The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys. The physical designer must have expertise in the DBMS functions, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.

Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose. The most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need records in different sequences.

The most common method of ordering records is to have them in sequence by a key, that key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if they are nonsequential.

Hashing has been used for addressing random-access storage since it first came into existence in the mid-1950s. But nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek, and second, insertions and deletions can be handled without added complexity.
Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems use chains to interconnect records also. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers. The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5) Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to the data-base management system what data structures will be used. A data description language giving a logical data description should perform the following functions:

It should give a unique name to each data-item type, file type, data base, and other data subdivision.
It should identify the types of data subdivision, such as data item, segment, record, and base file.
It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
It may define the length of the data items and the range of the values that a data item can assume.
It may specify the sequence of records in a file or the sequence of groups of records in the data base.
It may specify means of checking for errors in the data.
It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level, and if necessary may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined. It is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used, or of searching requirements, so that the physical technique can be selected optimally, but such indications should not be logically limiting.

Most DBMSs have their own languages for defining the schemas that are used. In most cases these data description languages are different from other programming languages, because other programming languages do not have the capability to define the variety of relationships that may exist in the schemas.

Appendix B: Translated Text. 1. Software Engineering: Software is a sequence of instructions, written in one or more programming languages, that automates the application of a computer to some business function.

What software is there for translating English?


There are many kinds of software for translating English.

Here are some common translation tools:
1. Google Translate: Google Translate is one of the most popular and widely used free translation tools.

It supports many languages, including English and the other major languages.

Users can enter text, speech, or images for translation.

2. Baidu Translate: Baidu Translate is a translation tool from Baidu, China's largest search engine.

It supports many languages and offers both online and offline translation.

3. DeepL Translator: DeepL Translator is an emerging tool that uses artificial intelligence for translation.

It is considered one of the most accurate and fluent translation tools currently available.

4. Youdao Translate: Youdao Translate is a translation tool from the Chinese internet company NetEase.

It offers multiple languages and translation modes, with both online and offline translation.

5. 百度翻译官 (Baidu Translate): 百度翻译官 is a translation app released by Baidu.

It supports many languages and provides text, voice, and image translation.

6. Eudic (欧路词典): Eudic is a dictionary and translation app.

It offers a large collection of dictionaries and translation resources, helping users look up definitions and example sentences and translate text.

Beyond these common tools, there are many other translation utilities and online platforms available.

Users can choose the translation software that suits their own needs and preferences.

Chinese-English Translation Software


Chinese-English translation software is a class of tool that translates Chinese text into English or English text into Chinese.

Such software provides fast, convenient translation, helping users remove language barriers in cross-language communication.

Below are several common Chinese-English translation tools.

1. Google Translate: Google Translate is one of the most popular free translation tools today.

It provides accurate Chinese-to-English and English-to-Chinese translation and supports many other languages as well.

Google Translate uses machine learning trained on large corpora and keeps improving its translation quality.

2. Baidu Translate: Baidu Translate is a powerful tool that provides precise translation between Chinese and English.

Type the text to be translated into the input box and it is quickly rendered into English or Chinese.

Baidu Translate also supports voice input and recorded-speech translation, making translation even more convenient.

3. Youdao Dictionary: Youdao Dictionary is a well-known online translation tool.

It translates quickly and accurately, turning Chinese text into English or English text into Chinese.

Youdao Dictionary also has a rich lexicon and example sentences that help users make better sense of the results.

4. Eudic: Eudic is a professional English-Chinese bidirectional translation tool.

It offers offline translation and delivers high-quality Chinese-English translation.

Eudic likewise provides a rich lexicon and example sentences, giving users a fuller picture of the results.

5. NiuTrans (小牛翻译): NiuTrans is a convenient, easy-to-use Chinese-English translation tool.

It supports three modes, text translation, scan translation, and voice translation, to meet different users' needs.

NiuTrans also offers automatic language detection, smart line wrapping, offline translation, and other features for a better translation experience.

In summary, Chinese-English translation software draws on machine learning and big data to deliver fast, accurate translation.

Users can pick the tool that suits their needs and overcome the language barrier in cross-language communication.

Translating English Literature with Zotero


If you are writing a paper in English or doing academic research, you know that translating English literature matters a great deal.

If your English is limited, or your sources come from abroad, you may need some assistive tools.

Today we introduce one important translation tool: Zotero.

Zotero is a free reference manager that helps you easily save, manage, and share your literature.

Beyond that, it has a built-in translation tool that can help you translate English literature.

Here are the steps for translating English literature with Zotero. Step 1: download and install Zotero.

You can download and install it from Zotero's official website.

Step 2: add your literature.

You can add an item to Zotero by copying and pasting its URL or ISBN.

The best way, of course, is to install Zotero's browser extension, which lets you capture items directly from web pages.

Step 3: select the item to translate.

Select the item in Zotero, right-click, and choose "Translate" from the pop-up menu.

Step 4: choose the target language.

In the translation menu that pops up, choose the language to translate into.

Zotero supports translation across many languages, including English, Chinese, German, French, and Spanish.

Step 5: wait for the translation to complete.

Zotero automatically connects to a translation engine and translates the entire document for you within seconds.

Once it finishes, you can read the translated text, which helps you understand the literature better.

It is worth noting that although Zotero has a built-in translation tool, the translation quality is not guaranteed.

Zotero merely calls a few free translation engines and cannot guarantee that everything is translated accurately.

So when using the translation tool, we still need to verify and adjust the results.

In summary, translating English literature with Zotero is very simple and takes only a few easy steps.

Bear in mind, though, that the output is not guaranteed to be precise; verify and adjust the translated text.

Best of all is to learn English and translate the literature yourself: you will understand it better and improve your language skills.
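As an aside for scripting-minded readers, Zotero also exposes a web API, so library items can be pulled out and run through any translation service in code. A minimal sketch, assuming the third-party pyzotero and deep-translator packages (pip install pyzotero deep-translator); LIBRARY_ID and API_KEY are placeholders that come from your own Zotero account settings, and this is an alternative to the GUI steps above, not part of them:

```python
# A sketch: read library items via the Zotero web API (pyzotero) and
# machine-translate their titles (deep-translator). All credentials below
# are placeholders.
from pyzotero import zotero
from deep_translator import GoogleTranslator

LIBRARY_ID = "1234567"   # placeholder: your Zotero user library ID
API_KEY = "your-key"     # placeholder: an API key from zotero.org settings

zot = zotero.Zotero(LIBRARY_ID, "user", API_KEY)
translator = GoogleTranslator(source="en", target="zh-CN")
for item in zot.top(limit=5):            # the five most recently added items
    title = item["data"].get("title", "")
    if title:
        print(title, "->", translator.translate(title))
```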

Writing a Paper? Translation Software Can Help: A Look at Several Translation Tools


The translation tools netizens use most are, at home, Kingsoft PowerWord (金山词霸) and Youdao Dictionary and, from abroad, Lingoes and Microsoft Bing Dictionary. All of these heavyweights of the translation market have recently shipped updates, but how do they actually perform? Worth a closer look! Round 1: basic specs. All versions tested here are the PC editions. Lingoes and Bing have the best compatibility; both run on Win8, which the two domestic tools do not support.

Youdao has the smallest footprint, while Bing, at 15.8 MB, is the largest of the four.

Round 2: installation. Of the four, only Youdao Dictionary has a serious bundling problem: even after I unchecked every extra one by one, something still got installed! Kingsoft PowerWord (金山词霸): 1. Price: free. 2. Version: 4.0. 3. Category: domestic software / language tools. 4. Size: 7361 KB. 5. Developer: Kingsoft. 6. Popularity: 6397. 7. Language: Simplified Chinese. 8. Release date: 2013-3-22. In brief: Kingsoft PowerWord is a dictionary application from Kingsoft.

Round 3: main interface design. [Screenshots in the original: the Kingsoft, Youdao, Lingoes, and Bing main interfaces.] Verdict: looks matter, but all four dictionaries go for a minimalist style, with no gaudy skins and, better yet, no ads; that is the attitude software ought to have! Of the four, I like Bing Dictionary's main interface best: bright colors, no superfluous buttons, clean and generous. I am less fond of Lingoes, which feels thin in both color and layout.

Youdao and Kingsoft, for their part, are merely middle-of-the-road.

Youdao Dictionary (有道词典): 1. Price: free. 2. Version: 5.4.40.9488 (official). 3. Category: domestic software / translation. 4. Size: 5445 KB. 5. Developer: NetEase. 6. Popularity: 6397. 7. Language: Simplified Chinese. 8. Release date: 2012-12-15. In brief: Youdao Dictionary is a small but powerful translation tool from NetEase Youdao.

Round 4: dictionary resources. 1. Kingsoft PowerWord provides 16 general-purpose dictionaries, all enabled by default, with offline dictionaries available for download.

2. Youdao Dictionary supports two kinds of custom dictionaries: one is the Youdao Dictionary enhanced edition, which fully includes the 21st Century Unabridged English-Chinese Dictionary and the New English-Chinese Dictionary, with definitions available offline; the other is any dictionary compatible with the StarDict format.

To add a dictionary in this format, just search for "stardict" or "星际译王" (StarDict's Chinese name) in a search engine to find StarDict-format dictionary downloads.

Capstone Project (Thesis) Foreign Material Translation (Student Copy)


Capstone project foreign material translation. School: School of Information Science and Engineering. Major: Software Engineering. Name: XXXXX. Student ID: XXXXXXXXX. Foreign source: Think In Java (to be cited in the foreign language). Attachments: 1. Translation of the foreign material; 2. The foreign original.

Attachment 1: Translation of the foreign material. Network programming. Historically, network programming has tended to be difficult, complex, and highly error-prone.

The programmer had to master a mass of network-related details, sometimes down to a deep understanding of the hardware.

Generally, one needed to understand the different "layers" of the networking protocol.

And every networking library came with a large number of functions for connecting, packing, and unpacking blocks of information; for shipping those blocks back and forth; and for handshaking, among other things.

It was painful work.

The idea of networking itself, however, is not that hard.

We want to get information sitting on some machine elsewhere and move it over here, or the other way around.

It is very much like reading and writing a file, except that the file lives on a remote machine, and the remote machine gets to decide what to do with the data we request or send.

One of Java's finest features is its notion of "painless networking".

The low-level details of networking have been abstracted away as far as possible and are managed inside the JVM and Java's local machine installation.

The programming model we use is that of a file; in fact, the network connection (a "socket") is wrapped in system objects, so it can be used with the same method calls as any other data stream.

On top of that, Java's built-in multithreading is extremely convenient for the other networking chore: handling multiple connections at once.

This chapter explains Java's networking support through a series of easy-to-follow examples.
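The passage above is about Java's stream-wrapped sockets; the same "a network connection behaves like a file" idea can be sketched in Python (used here only for consistency with the other sketches in this compilation; a Java program would use java.net.Socket with its input/output streams). example.com is a placeholder host:

```python
# A sketch of treating a network connection like a file: socket.makefile()
# turns a connection into a file-like object, paralleling Java's
# stream-wrapped sockets. example.com is a placeholder reachable host.
import socket

with socket.create_connection(("example.com", 80)) as conn:
    f = conn.makefile("rwb")          # read/write the socket like a file
    f.write(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    f.flush()
    print(f.readline().decode().strip())   # e.g. "HTTP/1.0 200 OK"
```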

15.1 Identifying a machine. Naturally, to tell one machine from another, and to make sure you are connected to the machine you intend, there must be a mechanism that uniquely identifies every machine on a network.

Early networks only solved the problem of giving machines unique names within the local network environment.

Java, however, is aimed at the whole Internet, which requires a mechanism to identify machines from all over the world.

To this end, the concept of the IP (Internet) address is used.

IP exists in two forms: (1) the form everyone is most familiar with, DNS (Domain Name Service).

My own domain name is .

So, supposing I have a computer named Opus within my own domain, its domain name could then be .
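A minimal sketch of the DNS form of identification: resolving a human-readable domain name to its numeric IP address with Python's standard library (again Python for consistency; a Java program would use java.net.InetAddress.getByName for the same job). example.com is a placeholder domain:

```python
# Resolve a domain name (DNS form) to its numeric IP address.
import socket

print(socket.gethostbyname("example.com"))   # prints the host's IP address
```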


English Translation Tools: Foreign-Literature Translation for the Software Engineering Capstone Project

Undergraduate capstone foreign literature translation. English title: Software Database: An Object-Oriented Perspective. Chinese title: 软件数据库的面向对象的视角. Student: 宋兰兰. School: School of Information Engineering. Department: Software Engineering. Major: Software Engineering. Class: Software 09-1. Supervisor: Lecturer 关玉欣. June 2013, Inner Mongolia University of Technology.

A HISTORICAL PERSPECTIVE

From the earliest days of computers, storing and manipulating data have been a major application focus. The first general-purpose DBMS was designed by Charles Bachman at General Electric in the early 1960s and was called the Integrated Data Store. It formed the basis for the network data model, which was standardized by the Conference on Data Systems Languages (CODASYL) and strongly influenced database systems through the 1960s. Bachman was the first recipient of ACM's Turing Award (the computer science equivalent of a Nobel prize) for work in the database area; he received the award in 1973.

In the late 1960s, IBM developed the Information Management System (IMS) DBMS, used even today in many major installations. IMS formed the basis for an alternative data representation framework called the hierarchical data model. The SABRE system for making airline reservations was jointly developed by American Airlines and IBM around the same time, and it allowed several people to access the same data through a computer network. Interestingly, today the same SABRE system is used to power popular Web-based travel services such as Travelocity!

In 1970, Edgar Codd, at IBM's San Jose Research Laboratory, proposed a new data representation framework called the relational data model. This proved to be a watershed in the development of database systems: it sparked rapid development of several DBMSs based on the relational model, along with a rich body of theoretical results that placed the field on a firm foundation. Codd won the 1981 Turing Award for his seminal work. Database systems matured as an academic discipline, and the popularity of relational DBMSs changed the commercial landscape. Their benefits were widely recognized, and the use of DBMSs for managing corporate data became standard practice.

In the 1980s, the relational model consolidated its position as the dominant DBMS paradigm, and database systems continued to gain widespread use. The SQL query language for relational databases, developed as part of IBM's System R project, is now the standard query language. SQL was standardized in the late 1980s, and the current standard, SQL-92, was adopted by the American National Standards Institute (ANSI) and the International Standards Organization (ISO). Arguably, the most widely used form of concurrent programming is the concurrent execution of database programs (called transactions). Users write programs as if they were to be run by themselves, and the responsibility for running them concurrently is given to the DBMS. James Gray won the 1999 Turing Award for his contributions to the field of transaction management in a DBMS.

In the late 1980s and the 1990s, advances were made in many areas of database systems. Considerable research has been carried out into more powerful query languages and richer data models, and there has been a big emphasis on supporting complex analysis of data from all parts of an enterprise. Several vendors (e.g., IBM's DB2, Oracle 8, Informix UDS) have extended their systems with the ability to store new data types such as images and text, and with the ability to ask more complex queries.
Specialized systems have been developed by numerous vendors for creating data warehouses, consolidating data from several databases, and carrying out specialized analysis.

An interesting phenomenon is the emergence of several enterprise resource planning (ERP) and management resource planning (MRP) packages, which add a substantial layer of application-oriented features on top of a DBMS. Widely used packages include systems from Baan, Oracle, PeopleSoft, SAP, and Siebel. These packages identify a set of common tasks (e.g., inventory management, human resources planning, financial analysis) encountered by a large number of organizations and provide a general application layer to carry out these tasks. The data is stored in a relational DBMS, and the application layer can be customized to different companies, leading to lower overall costs for the companies, compared to the cost of building the application layer from scratch.

Most significantly, perhaps, DBMSs have entered the Internet Age. While the first generation of Web sites stored their data exclusively in operating system files, the use of a DBMS to store data that is accessed through a Web browser is becoming widespread. Queries are generated through Web-accessible forms, and answers are formatted using a markup language such as HTML in order to be easily displayed in a browser. All the database vendors are adding features to their DBMSs aimed at making them more suitable for deployment over the Internet.

Database management continues to gain importance as more and more data is brought on-line and made ever more accessible through computer networking. Today the field is being driven by exciting visions such as multimedia databases, interactive video, digital libraries, a host of scientific projects such as the human genome mapping effort and NASA's Earth Observation System project, and the desire of companies to consolidate their decision-making processes and mine their data repositories for useful information about their businesses. Commercially, database management systems represent one of the largest and most vigorous market segments. Thus the study of database systems could prove to be richly rewarding in more ways than one!

INTRODUCTION TO PHYSICAL DATABASE DESIGN

Like all other aspects of database design, physical design must be guided by the nature of the data and its intended use. In particular, it is important to understand the typical workload that the database must support; the workload consists of a mix of queries and updates. Users also have certain requirements about how fast certain queries or updates must run or how many transactions must be processed per second. The workload description and users' performance requirements are the basis on which a number of decisions have to be made during physical database design.

To create a good physical database design and to tune the system for performance in response to evolving user requirements, the designer needs to understand the workings of a DBMS, especially the indexing and query processing techniques supported by the DBMS. If the database is expected to be accessed concurrently by many users, or is a distributed database, the task becomes more complicated, and other features of a DBMS come into play.

DATABASE WORKLOADS

The key to good physical design is arriving at an accurate description of the expected workload. A workload description includes the following elements:
1. A list of queries and their frequencies, as a fraction of all queries and updates.
2. A list of updates and their frequencies.
3. Performance goals for each type of query and update.

For each query in the workload, we must identify:

Which relations are accessed.
Which attributes are retained (in the SELECT clause).
Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.

Similarly, for each update in the workload, we must identify:

Which attributes have selection or join conditions expressed on them (in the WHERE clause) and how selective these conditions are likely to be.
The type of update (INSERT, DELETE, or UPDATE) and the updated relation.
For UPDATE commands, the fields that are modified by the update.

Remember that queries and updates typically have parameters; for example, a debit or credit operation involves a particular account number. The values of these parameters determine the selectivity of selection and join conditions.

Updates have a query component that is used to find the target tuples. This component can benefit from a good physical design and the presence of indexes. On the other hand, updates typically require additional work to maintain indexes on the attributes that they modify. Thus, while queries can only benefit from the presence of an index, an index may either speed up or slow down a given update. Designers should keep this trade-off in mind when creating indexes.
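Before moving on to tuning, the workload elements just listed can be rendered as a small data structure, purely as an illustration; the relation names, attributes, and frequencies below are invented:

```python
# An illustrative rendering of a workload description; every name and
# number here is invented for the sketch.
from dataclasses import dataclass

@dataclass
class QuerySpec:
    relations: list       # which relations are accessed
    selected: list        # attributes retained (SELECT clause)
    conditions: list      # selection/join conditions (WHERE clause)
    frequency: float      # fraction of all queries and updates

workload = [
    QuerySpec(["employee"], ["name"], ["dept = ?"], 0.6),
    QuerySpec(["employee", "dept"], ["name", "budget"],
              ["employee.dept_id = dept.id"], 0.4),
]
print(sum(q.frequency for q in workload))   # frequencies here sum to 1.0
```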
NEED FOR DATABASE TUNING

Accurate, detailed workload information may be hard to come by while doing the initial design of the system. Consequently, tuning a database after it has been designed and deployed is important; we must refine the initial design in the light of actual usage patterns to obtain the best possible performance.

The distinction between database design and database tuning is somewhat arbitrary. We could consider the design process to be over once an initial conceptual schema is designed and a set of indexing and clustering decisions is made. Any subsequent changes to the conceptual schema or the indexes, say, would then be regarded as a tuning activity. Alternatively, we could consider some refinement of the conceptual schema (and physical design decisions affected by this refinement) to be part of the physical design process. Where we draw the line between design and tuning is not very important.

OVERVIEW OF DATABASE TUNING

After the initial phase of database design, actual use of the database provides a valuable source of detailed information that can be used to refine the initial design. Many of the original assumptions about the expected workload can be replaced by observed usage patterns; in general, some of the initial workload specification will be validated, and some of it will turn out to be wrong. Initial guesses about the size of data can be replaced with actual statistics from the system catalogs (although this information will keep changing as the system evolves). Careful monitoring of queries can reveal unexpected problems; for example, the optimizer may not be using some indexes as intended to produce good plans. Continued database tuning is important to get the best possible performance.

TUNING THE CONCEPTUAL SCHEMA

In the course of database design, we may realize that our current choice of relation schemas does not enable us to meet our performance objectives for the given workload with any (feasible) set of physical design choices. If so, we may have to redesign our conceptual schema (and re-examine physical design decisions that are affected by the changes that we make).

We may realize that a redesign is necessary during the initial design process or later, after the system has been in use for a while. Once a database has been designed and populated with data, changing the conceptual schema requires a significant effort in terms of mapping the contents of the relations that are affected. Nonetheless, it may sometimes be necessary to revise the conceptual schema in light of experience with the system. We now consider the issues involved in conceptual schema (re)design from the point of view of performance.

Several options must be considered while tuning the conceptual schema:

We may decide to settle for a 3NF design instead of a BCNF design.
If there are two ways to decompose a given schema into 3NF or BCNF, our choice should be guided by the workload.
Sometimes we might decide to further decompose a relation that is already in BCNF.
In other situations we might denormalize. That is, we might choose to replace a collection of relations obtained by a decomposition from a larger relation with the original (larger) relation, even though it suffers from some redundancy problems. Alternatively, we might choose to add some fields to certain relations to speed up some important queries, even if this leads to a redundant storage of some information (and consequently, a schema that is in neither 3NF nor BCNF).

This discussion of normalization has concentrated on the technique of decomposition, which amounts to vertical partitioning of a relation. Another technique to consider is horizontal partitioning of a relation, which would lead to our having two relations with identical schemas. Note that we are not talking about physically partitioning the tuples of a single relation; rather, we want to create two distinct relations (possibly with different constraints and indexes on each).

Incidentally, when we redesign the conceptual schema, especially if we are tuning an existing database schema, it is worth considering whether we should create views to mask these changes from users for whom the original schema is more natural.

TUNING QUERIES AND VIEWS

If we notice that a query is running much slower than we expected, we have to examine the query carefully to find the problem. Some rewriting of the query, perhaps in conjunction with some index tuning, can often fix the problem. Similar tuning may be called for if queries on some view run slower than expected.

When tuning a query, the first thing to verify is that the system is using the plan that you expect it to use. It may be that the system is not finding the best plan for a variety of reasons. Some common situations that are not handled efficiently by many optimizers follow:
A selection condition involving null values.
Selection conditions involving arithmetic or string expressions, or conditions using the OR connective. For example, if we have a condition E.age = 2*D.age in the WHERE clause, the optimizer may correctly utilize an available index on E.age but fail to utilize an available index on D.age. Replacing the condition by E.age/2 = D.age would reverse the situation. (A runnable sketch of this rewrite appears at the end of this excerpt.)
Inability to recognize a sophisticated plan such as an index-only scan for an aggregation query involving a GROUP BY clause.

If the optimizer is not smart enough to find the best plan (using access methods and evaluation strategies supported by the DBMS), some systems allow users to guide the choice of a plan by providing hints to the optimizer; for example, users might be able to force the use of a particular index or choose the join order and join method. A user who wishes to guide optimization in this manner should have a thorough understanding of both optimization and the capabilities of the given DBMS.

OTHER TOPICS

MOBILE DATABASES

The availability of portable computers and wireless communications has created a new breed of nomadic database users. At one level these users are simply accessing a database through a network, which is similar to distributed DBMSs. At another level the network, as well as the data and user characteristics, now has several novel properties, which affect basic assumptions in many components of a DBMS, including the query engine, transaction manager, and recovery manager.

Users are connected through a wireless link whose bandwidth is ten times less than Ethernet and 100 times less than ATM networks. Communication costs are therefore significantly higher in proportion to I/O and CPU costs. Users' locations are constantly changing, and mobile computers have a limited battery life. Therefore, the true communication costs are connection time and battery usage in addition to bytes transferred, and they change constantly depending on location.
It is important to minimize the space overhead because exceeding available physical memory would lead to swapping pages to disk (through the operating system’s virtual memory mechanisms), greatlyslowing down execution.Page-oriented data structures become less important (since pages are no longer the unit of data retrieval), and clustering is not important (since the cost of accessing any region of main memory is uniform).8内蒙古工业大学本科毕业设计外文文献翻译(一)从历史的角度回顾从数据库的早期开始,存储和操纵数据就一直是主要的应用焦点。
