Computer Software Engineering: Foreign Literature Translation
Software Engineering (Translated Foreign Literature)

Foreign Literature Source Material

1、Software Engineering

Software is the sequence of instructions, in one or more programming languages, that comprises a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systematic application of tools and techniques in the development of computer-based applications.

The software engineering process describes the steps it takes to develop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.

Object-oriented methodology is an approach to system life-cycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirements to define a system architecture. The data and action components are encapsulated, that is, they are combined to form abstract data types. Encapsulation means that if I know what data I want, I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and sideways relationships are accommodated. Encapsulated objects are constrained to communicate only via messages. At a minimum, a message indicates the receiver and the action requested; messages may be more elaborate, including the sender and the data to be acted upon.

That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used.
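The encapsulation and message ideas above can be sketched in a few lines of Java. The EmployeeRecord class, its fields, and its methods are hypothetical examples, not taken from the text: the data are private, and the only way to act on them is to send the object a message (a method call) naming the requested action and carrying the data to be acted upon.

```java
// A minimal sketch of encapsulation: data and the allowable actions on them
// are combined into one abstract data type. EmployeeRecord and its fields
// are illustrative assumptions, not from the original article.
public class EmployeeRecord {
    private String name;   // encapsulated data: reachable only via methods
    private double salary;

    public EmployeeRecord(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    // A "message" at minimum names the receiver and the requested action;
    // here it also carries the data to be acted upon (the raise amount).
    public void raiseSalary(double amount) {
        if (amount < 0) throw new IllegalArgumentException("negative raise");
        this.salary += amount;
    }

    public double getSalary() { return salary; }
    public String getName()  { return name; }

    public static void main(String[] args) {
        EmployeeRecord e = new EmployeeRecord("Ada", 1000.0);
        e.raiseSalary(250.0);   // send a message to the encapsulated object
        System.out.println(e.getName() + " now earns " + e.getSalary());
    }
}
```

Because the salary field is private, every change goes through `raiseSalary`, so the allowable processes against the data are fixed by the type itself.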
Part of the reason for continuing problems in application development is that we are constantly trying to hit a moving target. Both the technology and the types of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.

Another reason for continuing problems in application development is that we aren't always free to do what we like. It is hard to change habits and cultures from the old way of doing things, and hard to get users to agree to a new sequence of events or an unfamiliar format for documentation.

You might ask, then: if many organizations don't use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineers can speed change in their organizations by demonstrating how the tools and techniques enhance the quality of both the product and the process of building a system.

2、Data Base System

1、Introduction

The development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Data will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.

The files of data which computers can use are growing at a staggering rate.
The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data-processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change what it is possible for man to do. At the end of this century, historians will look back on the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.

Some of the most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need for information. The vast majority of this information is not yet computerized. However, the cost of data-storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored: the computer industry is improving its capability to store line drawings, data in facsimile form, photographs, human speech, and so on. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.

There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access times. Second, there are solid-state technologies that will give microsecond access times but whose capacities are smaller than those of disks. Disks themselves may be increased in capacity somewhat. For the longer-term future there are a number of new technologies, currently working in research labs, which may replace disks and may provide very large microsecond-access-time devices.
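The storage hierarchy these technologies suggest, a small fast solid-state level in front of a large slower level, can be sketched as a toy Java class. Everything here, from the class name to the capacity of two entries, is an illustrative assumption rather than a description of any real device: on a miss, a record is paged up into the fast level, evicting the least recently used entry.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// A toy two-level store: a tiny "solid-state" level in front of a large
// "electromagnetic" level, with a paging mechanism that moves data between
// the levels. All names and sizes are hypothetical.
public class TwoLevelStore {
    private static final int FAST_CAPACITY = 2;

    private final Map<String, String> slowLevel = new HashMap<>();
    private final Map<String, String> fastLevel =
        new LinkedHashMap<>(16, 0.75f, true) {      // access-order LRU map
            protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                return size() > FAST_CAPACITY;       // evict when over capacity
            }
        };

    public void store(String key, String value) { slowLevel.put(key, value); }

    public String fetch(String key) {
        String v = fastLevel.get(key);
        if (v == null) {                 // miss: page up from the slow level
            v = slowLevel.get(key);
            if (v != null) fastLevel.put(key, v);
        }
        return v;
    }

    public boolean inFastLevel(String key) { return fastLevel.containsKey(key); }

    public static void main(String[] args) {
        TwoLevelStore s = new TwoLevelStore();
        s.store("A", "record A");
        s.store("B", "record B");
        s.store("C", "record C");
        s.fetch("A"); s.fetch("B"); s.fetch("C");   // A is evicted from the fast level
        System.out.println("A in fast level: " + s.inFastLevel("A"));
    }
}
```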
A steady stream of new storage devices is thus likely to reach the marketplace over the next 5 years, rapidly lowering the cost of storing data. Given the available technologies, it is likely that on-line data bases will use two or three levels of storage: one solid-state with microsecond access times, and one electromagnetic with access times of a fraction of a second. If two, three, or four levels of storage are used, physical storage organization will become more complex, probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operations and associative memory.

Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion-bit on-line stores are now in use. In a few years' time, stores of this size may be common.

A particularly important consideration in data base design is to store the data so that they can be used for a wide variety of applications, and so that the way they are used can be changed quickly and easily. On computer installations prior to the data base era it was remarkably difficult to change the way data were used. Different programmers view the data in different ways and constantly want to modify them as new needs arise. Modification, however, can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish. Consequently, data processing has tended to become frozen into its old data structures.

To achieve the flexibility of data usage that is essential in most commercial situations, two aspects of data base design are important. First, it should be possible to interrogate and search the data base without the lengthy operation of writing programs in conventional programming languages.
Second, the data should be independent of the programs which use them, so that they can be added to or restructured without the programs being changed.

The work of designing a data base is becoming increasingly difficult, especially if it is to perform in an optimal fashion. There are many different ways in which data can be structured, and different types of data need to be organized in different ways. Different data have different characteristics, which ought to affect the data organization, and different users have fundamentally different requirements. So we need a kind of data base management system (DBMS) to manage data.

Data base design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach. The entity-relationship (E-R) approach uses entity-relationship diagrams. The E-R approach requires several steps to produce a structure that is acceptable to the particular DBMS. These steps are:

(1) Data analysis.
(2) Producing and optimizing the entity model.
(3) Logical schema development.
(4) Physical data base design process.

Developing a data base structure from user requirements is called data base design. Most practitioners agree that there are two separate phases to the data base design process: the design of a logical data base structure that is processable by the data base management system (DBMS) and describes the user's view of data, and the selection of a physical structure, such as the indexed sequential or direct access method of the intended DBMS.

Current data base design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application-program dependent, since the data have been defined and structured in terms of the individual applications that use them.
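As a rough illustration of the entity-relationship starting point, a list of entity types and the relationships among them, here is a minimal Java sketch. The Department and Employee entity types and the one-to-many relationship between them are hypothetical examples, not from the text.

```java
import java.util.ArrayList;
import java.util.List;

// Entity-relationship sketch: two entity types (Department, Employee) and a
// one-to-many relationship between them, expressed as object references.
// All names are illustrative assumptions.
public class ERSketch {
    static class Department {
        final String name;
        final List<Employee> employees = new ArrayList<>(); // one-to-many side
        Department(String name) { this.name = name; }
    }

    static class Employee {
        final String name;
        final Department dept;                              // many-to-one side
        Employee(String name, Department dept) {
            this.name = name;
            this.dept = dept;
            dept.employees.add(this);  // keep both sides of the link consistent
        }
    }

    public static void main(String[] args) {
        Department sales = new Department("Sales");
        new Employee("Ada", sales);
        new Employee("Bob", sales);
        System.out.println(sales.name + " has " + sales.employees.size() + " employees");
    }
}
```

In an E-R diagram, Department and Employee would be the entity boxes and the `employees`/`dept` references the relationship line between them.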
The advent of the DBMS revised the emphasis in data and program design approaches. There are many interlocking questions in the design of data-base systems, and many types of technique that one can use in answering them: so many, in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.

There will soon be new storage devices, new software techniques, and new types of data bases. The details will change, but most of the principles will remain. Therefore, the reader should concentrate on the principles.

2、Data base system

The terminology used for describing files and data bases has varied substantially within the same organization. A data base may be defined as a collection of interrelated data stored together, with as little redundancy as possible, to serve one or more applications in an optimal fashion; the data are stored so that they are independent of the programs which use the data; a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the data base. One system is said to contain a collection of data bases if they are entirely separate in structure.

A data base may be designed for batch processing, real-time processing, or in-line processing. A data base system involves application programs, a DBMS, and the data base itself.

One of the most important characteristics of most data bases is that they will constantly need to change and grow. Easy restructuring of the data base must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application programs, and in general should cause as little upheaval as possible. The ease with which a data base can be changed will have a major effect on the rate at which data-processing applications can be developed in a corporation.

The term data independence is often quoted as being one of the main attributes of a data base.
It implies that the data and the application programs which use them are independent, so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, data-base organization is concerned with the representation of relationships between data items and records, as well as with how and where the data are stored. A data base used for many applications can have multiple interconnections between the data items about which we may wish to record information; it can describe the real world. A data item represents an attribute, and the attribute must be associated with the relevant entity. We assign values to the attributes; one attribute has a special significance in that it identifies the entity.

An attribute or set of attributes which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as the key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.

If the function of a data base were merely to store data, its organization would be simple. Most of the complexities arise from the fact that it must also show the relationships between the various items of data that are stored. Describing the data logically is different from describing them physically. The logical data base description is referred to as a schema. A schema is a chart of the types of data that are used. It gives the names of the entities and attributes, and specifies the relationships between them. It is a framework into which the values of the data items can be fitted.

We must distinguish between a record type and an instance of the record.
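A minimal Java sketch of these ideas, with hypothetical field names: `Tuple` below is a record type, each object built from it is an instance of that record type, and the index locates a tuple by its primary key without scanning the whole file.

```java
import java.util.HashMap;
import java.util.Map;

// The primary key uniquely identifies one record or tuple; an index maps key
// values to records so a record can be located without a lengthy search.
// Record and field names are illustrative assumptions.
public class PrimaryKeyIndex {
    // A record type: employeeId plays the role of the primary key.
    record Tuple(String employeeId, String name) {}

    private final Map<String, Tuple> index = new HashMap<>();

    public void insert(Tuple t) {
        // The primary key must identify exactly one record.
        if (index.containsKey(t.employeeId()))
            throw new IllegalStateException("duplicate primary key: " + t.employeeId());
        index.put(t.employeeId(), t);
    }

    public Tuple locate(String primaryKey) { return index.get(primaryKey); }

    public static void main(String[] args) {
        PrimaryKeyIndex file = new PrimaryKeyIndex();
        file.insert(new Tuple("E001", "Ada"));   // instances of the record type
        file.insert(new Tuple("E002", "Bob"));
        System.out.println(file.locate("E002").name());
    }
}
```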
When we talk about a "personnel record", this is really a record type; there are no data values associated with it. The term schema is used to mean an overall chart of all of the data-item types and record types stored in a data base. Many different subschemas can be derived from one schema. The schema and the subschemas are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.

A DBMS will usually be handling multiple data calls concurrently; it must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous. We will discuss data models in the following.

3、Three Data Models

Logical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems:

Relational
Hierarchical
Network

The hierarchical and network structures have been used for DBMS since the 1960s. The relational structure was introduced in the early 1970s. In the relational model, the entities and their relationships are represented by two-dimensional tables.
Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.

The end user is presented with a simple data model: his or her requests are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically. The relational data model removes the details of storage structure and access strategy from the user interface, and so provides a relatively high degree of data independence. To be able to make use of this property of the relational data model, however, the design of the relations must be complete and accurate.

Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with the required efficiency on a large scale. It appears today that technological improvements providing faster and more reliable hardware may answer this question positively.

The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point. The highest node of the hierarchical tree structure is called the root; the nodes at succeeding lower levels are called children. A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow at the succeeding levels; the node in the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent, or many children nodes. The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure.
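The parent-child structure just described can be sketched in Java. The node contents here are hypothetical; the point is that the hierarchy starts at a root and every other node is reached through its parent.

```java
import java.util.ArrayList;
import java.util.List;

// Hierarchical-model sketch: a tree of nodes, each holding an attribute of
// the entity at that point. The names stored in the nodes are illustrative.
public class HierarchyDemo {
    static class Node {
        final String name;                        // attribute describing the entity
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }

        Node addChild(String childName) {         // this node becomes the parent
            Node child = new Node(childName);
            children.add(child);
            return child;
        }
    }

    public static void main(String[] args) {
        Node root = new Node("Company");          // every hierarchy starts at a root
        Node dept = root.addChild("Sales");
        dept.addChild("Ada");
        dept.addChild("Bob");
        // A child node is accessible only through its parent:
        System.out.println(root.children.get(0).children.get(1).name);
    }
}
```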
There is a reduction of data dependency, but any child node is accessible only through its parent node, and the many-to-many relationship can be implemented only in a clumsy way. This often results in redundancy in the stored data.

The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas. An area contains records; in turn, a record may consist of fields. A set, which is a grouping of records, may reside in one area or span a number of areas. A set type is based on the owner record type and the member record type. The many-to-many relationship, which occurs quite frequently in real life, can be implemented easily. The network data model is very complex, however, and the application programmer must be familiar with the logical structure of the data base.

4、Logical Design and Physical Design

Logical design of databases is mainly concerned with superimposing the constructs of the data base management system on the logical data model. There are three main models, mentioned above: hierarchical, relational, and network.

The physical model is a framework for the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrence of the groupings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.

The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access, it may be necessary to invert the segment on the keys.
The physical designer must have expertise in the functions of the DBMS, an understanding of the characteristics of direct-access devices, and knowledge of the applications.

Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices. Records exist on storage devices in a given physical sequence. This sequencing may be employed for some purpose; the most common purpose is that records are needed in a given sequence by certain data-processing operations, and so they are stored in that sequence. Different applications may need the records in different sequences.

The most common method of ordering records is to have them in sequence by a key: the key most commonly used for addressing them. An index is required to find any record without a lengthy search of the file. If the data records are laid out sequentially by key, the index for that key can be much smaller than if the records were nonsequential.

Hashing has been used for addressing random-access storages since they first came into existence in the mid-1950s, but nobody had the temerity to use the word hashing until 1968. Many systems analysts have avoided the use of hashing on the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek; second, insertions and deletions can be handled without added complexity. Indexing, however, can be used with a file which is sequential by prime key, and this is an overriding advantage for some batch-processing applications.

Many data-base systems also use chains to interconnect records. A chain refers to a group of records scattered within the files and interconnected by a sequence of pointers.
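A toy Java sketch of hash addressing, combined with a pointer chain inside each bucket: the key is turned directly into a bucket address, so most records are found with a single probe, and colliding records are linked together by pointers. All names are illustrative assumptions; no real DBMS is implied.

```java
// Hashing sketch: a key hashes to a bucket address; records that collide in
// the same bucket are chained together by pointer fields. Names are
// illustrative assumptions.
public class HashedFile {
    private static final int BUCKETS = 8;

    static class Record {
        final String key;
        final String data;
        Record next;                               // chain pointer to the next record
        Record(String key, String data) { this.key = key; this.data = data; }
    }

    private final Record[] buckets = new Record[BUCKETS];

    private int address(String key) {              // hash the key to a bucket address
        return Math.floorMod(key.hashCode(), BUCKETS);
    }

    public void insert(String key, String data) {  // no index maintenance needed
        Record r = new Record(key, data);
        int a = address(key);
        r.next = buckets[a];                       // push onto the bucket's chain
        buckets[a] = r;
    }

    public String find(String key) {               // one probe, then follow the chain
        for (Record r = buckets[address(key)]; r != null; r = r.next)
            if (r.key.equals(key)) return r.data;
        return null;
    }

    public static void main(String[] args) {
        HashedFile f = new HashedFile();
        f.insert("E001", "Ada");
        f.insert("E002", "Bob");
        System.out.println(f.find("E001"));
    }
}
```

The trade-off the text describes is visible here: lookups and insertions are cheap, but the records are in no useful physical sequence, which is why indexing by prime key wins for sequential batch processing.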
The software that is used to retrieve the chained records will make them appear to the application programmer as a contiguous logical file.

The primary disadvantage of chained records is that many read operations are needed in order to follow lengthy chains. Sometimes this does not matter, because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise need to be read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.

Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.

5、Data Description Languages

It is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages.
A data description language is the means of declaring to the data-base management system what data structures will be used. A data description language giving a logical data description should perform the following functions:

It should give a unique name to each data-item type, file type, data base, and other data subdivision.
It should identify the types of data subdivision, such as data item, segment, record, and base file.
It may define the type of encoding the program uses in the data items (binary, character, bit string, etc.).
It may define the length of the data items and the range of values that a data item can assume.
It may specify the sequence of records in a file, or the sequence of groups of records in the data base.
It may specify means of checking for errors in the data.
It may specify privacy locks for preventing unauthorized reading or modification of the data. These may operate at the data-item, segment, record, file, or data-base level, and if necessary may be extended to the contents (values) of individual data items. The authorization may, on the other hand, be separately defined; it is more subject to change than the data structures, and changes in authorization procedures should not force changes in application programs.

A logical data description should not specify addressing, indexing, or searching techniques, or specify the placement of data on the storage units, because these topics are in the domain of physical, not logical, organization. It may give an indication of how the data will be used, or of searching requirements, so that the physical technique can be selected optimally; but such indications should not be logically limiting.

Most DBMS have their own languages for defining the schemas that are used.
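As a rough sketch of the kind of declaration such a language makes (a unique name, a length, an allowed range of values) and the error checking it enables, here is a hypothetical Java fragment. Real data description languages are declarative rather than Java; the DataItemType class and the EMPLOYEE-AGE item are illustrative assumptions only.

```java
// A sketch of a logical data description: each data-item type is declared
// with a unique name, a maximum length, and a range of permitted values,
// which the system can use to check incoming data for errors.
// This is an illustration, not a real DBMS data description language.
public class DataDescription {
    record DataItemType(String name, int maxLength, int minValue, int maxValue) {
        // Error checking derived from the declaration.
        boolean accepts(String value) {
            if (value.length() > maxLength) return false;
            try {
                int v = Integer.parseInt(value);
                return v >= minValue && v <= maxValue;
            } catch (NumberFormatException e) {
                return false;           // not encoded as a number at all
            }
        }
    }

    public static void main(String[] args) {
        DataItemType age = new DataItemType("EMPLOYEE-AGE", 3, 16, 99);
        System.out.println(age.accepts("42"));   // within length and range
        System.out.println(age.accepts("150"));  // out of range: rejected
    }
}
```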
In most cases these data description languages are different from other programming languages, because other programming languages do not have the capability to define the variety of relationships that may exist in the schemas.

Appendix B: Translated Text

1、Software Engineering

Software is a sequence of instructions, written in one or more programming languages, that enables a computer application to automate some business function.
Foreign Literature Translation for the Software Engineering Major

English original:

SSH is an integration framework of Spring + Struts + Hibernate, and is currently one of the more popular open-source frameworks for Web applications.

Spring

Lightweight: Spring is lightweight in terms of both size and overhead. A complete Spring framework can be distributed in a single JAR file of little more than 1 MB, and the processing overhead that Spring requires is negligible.

Inversion of control: Spring promotes loose coupling through a technique known as inversion of control (IoC). With IoC, the objects on which an object depends are passed to it passively, rather than being created or looked up by the object itself. You can think of IoC as JNDI in reverse: instead of the object requesting its dependencies from a container, the container hands the dependencies to the object on its own initiative when the object is initialized.

Aspect-oriented programming: Spring provides rich support for aspect-oriented programming, permitting cohesive development by separating the application's business logic from system-level services. Application objects implement only what they are supposed to do, namely the business logic; they are not responsible for other system-level concerns.

Container: Spring contains and manages the configuration and life cycle of application objects, and in this sense it is a container. You can configure how each of your beans is created: based on a configurable prototype, a bean can be a single instance, or a new instance can be generated each time one is needed, and you can also configure how the beans are related to one another. However, Spring should not be confused with traditional heavyweight EJB containers, which are often large, unwieldy, and difficult to use.

Struts

Struts provides components corresponding to the Model, the View, and the Controller.

ActionServlet: the core controller of Struts, responsible for intercepting requests from the user.

Action: this class is typically supplied by the user. It receives requests from the ActionServlet and, according to the request, calls the business-logic methods of the Model to process the request; the results are then returned to a JSP page for display.

The Model part

The Model consists of ActionForms and JavaBeans. An ActionForm packages the user's request parameters into an ActionForm object; the ActionServlet forwards this object to the Action, which processes the user's request according to the request parameters it carries. JavaBeans encapsulate the underlying business logic, including database access.

The View part

This part is implemented by JSP. Struts provides a rich tag library; using the tag library reduces the amount of scripting, and a custom tag library can interact effectively with the Model, which adds practical functionality.

The Controller component

The Controller component is composed of two parts: the core system controller and the business-logic controller.

The core system controller corresponds to the ActionServlet. This controller is supplied with the Struts framework and inherits from the HttpServlet class, so it can be configured as a standard Servlet. The controller is responsible for all HTTP requests, and then decides, according to the user's request, whether to hand control over to a business-logic controller.

The business-logic controller is responsible for processing a user request. It has no processing capability of its own; instead, it calls the Model to complete the work. It corresponds to the Action part.

Hibernate

Hibernate is an open-source object-relational mapping framework. It provides a very lightweight object wrapper around JDBC, so that Java programmers can manipulate the database with arbitrary objects and object-oriented programming thinking. Hibernate can be applied wherever JDBC is used: in Java client programs, in Servlet/JSP Web applications, and, most significantly, in the EJB J2EE architecture to replace CMP and accomplish data persistence.

Hibernate has five core interfaces: Session, SessionFactory, Query, Transaction, and Configuration. These five interfaces are used in any development with Hibernate. Through these interfaces one can not only access persistent objects but also carry out transaction control.

Chinese translation:

SSH is an integration framework of Spring + Struts + Hibernate, and is currently one of the more popular open-source frameworks for Web applications.
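The inversion-of-control idea described above can be sketched without any framework at all. The classes below are hypothetical stand-ins, with the wiring done by hand where Spring would do it from configuration: OrderService does not create or look up its PaymentGateway; the dependency is handed to it from outside.

```java
// A framework-free sketch of inversion of control. All names are
// illustrative assumptions; Spring would perform the wiring shown in
// main() from configuration instead.
public class IocSketch {
    interface PaymentGateway { boolean charge(double amount); }

    // A stand-in implementation; loose coupling lets us swap it freely.
    static class FakeGateway implements PaymentGateway {
        public boolean charge(double amount) { return amount > 0; }
    }

    static class OrderService {
        private final PaymentGateway gateway;     // dependency is injected,
        OrderService(PaymentGateway gateway) {    // not created or looked up here
            this.gateway = gateway;
        }
        boolean placeOrder(double amount) { return gateway.charge(amount); }
    }

    public static void main(String[] args) {
        // The "container" role: wire the objects together from the outside.
        OrderService service = new OrderService(new FakeGateway());
        System.out.println("order placed: " + service.placeOrder(9.99));
    }
}
```

Because OrderService depends only on the PaymentGateway interface, a real implementation, a test double, or a differently configured bean can be injected without changing OrderService itself, which is exactly the loose coupling the text attributes to IoC.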
Foreign Literature Translation for Software Engineering

Dalian Jiaotong University, 2012 Undergraduate Graduation Project (Thesis): Foreign Literature Translation

Original text:

New Competencies for HR

What does it take to make it big in HR? What skills and expertise do you need? Since 1988, Dave Ulrich, professor of business administration at the University of Michigan, and his associates have been on a quest to provide the answers. This year, they've released an all-new 2007 Human Resource Competency Study (HRCS). The findings and interpretations lay out professional guidance for HR for at least the next few years.

"People want to know what set of skills high-achieving HR people need to perform even better," says Ulrich, co-director of the project along with Wayne Brockbank, also a professor of business at the University of Michigan.

Conducted under the auspices of the Ross School of Business at the University of Michigan and The RBL Group in Salt Lake City, with regional partners including the Society for Human Resource Management (SHRM) in North America and other institutions in Latin America, Europe, China and Australia, HRCS is the longest-running, most extensive global HR competency study in existence. "In reaching our conclusions, we've looked across more than 400 companies and are able to report with statistical accuracy what HR executives say and do," Ulrich says.

"The research continues to demonstrate the dynamic nature of the human resource management profession," says SHRM President and CEO Susan R. Meisinger, SPHR. "The findings also highlight what an exciting time it is to be in the profession. We continue to have the ability to really add value to an organization."

"HRCS is foundational work that is really important to HR as a profession," says Cynthia McCague, senior vice president of the Coca-Cola Co., who participated in the study. "They have created and continue to enhance a framework for thinking about how HR drives organizational performance."

What's New

Researchers identified six core competencies that high-performing HR professionals embody.
These supersede the five competencies outlined in the 2002 HRCS (the last study published), reflecting the continuing evolution of the HR profession. Each competency is broken out into performance elements.

"This is the fifth round, so we can look at past models and compare where the profession is going," says Evren Esen, survey program manager at SHRM, which provided the sample of HR professionals surveyed in North America. "We can actually see the profession changing. Some core areas remain the same, but others, based on how the raters assess and perceive HR, are new." (For more information, see "The Competencies and Their Elements," at right.)

To some degree, the new competencies reflect a change in nomenclature or a shuffling of the competency deck. However, there are some key differences. Five years ago, HR's role in managing culture was embedded within a broader competency. Now its importance merits a competency of its own. Knowledge of technology, a stand-alone competency in 2002, now appears within Business Ally. In other instances, the new competencies carry expectations that promise to change the way HR views its role. For example, the Credible Activist calls for HR to eschew neutrality and to take a stand: to practice the craft "with an attitude."

To put the competencies in perspective, it's helpful to view them as a three-tier pyramid with Credible Activist at the pinnacle.

Credible Activist. This competency is the top indicator in predicting overall outstanding performance, suggesting that mastering it should be a priority. "You've got to be good at all of them, but, no question, [this competency] is key," Ulrich says. "But you can't be a Credible Activist without having all the other competencies. In a sense, it's the whole package."

"It's a deal breaker," agrees Dani Johnson, project manager of the Human Resource Competency Study at The RBL Group in Salt Lake City. "If you don't come to the table with it, you're done.
It permeates everything you do."

The Credible Activist is at the heart of what it takes to be an effective HR leader. "The best HR people do not hold back; they step forward and advocate for their position," says Susan Harmansky, SPHR, senior director of domestic restaurant operations for HR at Papa John's International in Louisville, Ky., and former chair of the Human Resource Certification Institute. "CEOs are not waiting for HR to come in with options—they want your recommendations; they want you to speak from your position as an expert, similar to what you see from legal or finance executives."

"You don't want to be credible without being an activist, because essentially you're worthless to the business," Johnson says. "People like you, but you have no impact. On the other hand, you don't want to be an activist without being credible. You can be dangerous in a situation like that."

Below Credible Activist on the pyramid is a cluster of three competencies: Cultural Steward, Talent Manager/Organizational Designer and Strategy Architect.

Cultural Steward. HR has always owned culture. But with Sarbanes-Oxley and other regulatory pressures, and CEOs relying more on HR to manage culture, this is the first time it has emerged as an independent competency. Of the six competencies, Cultural Steward is the second highest predictor of performance of both HR professionals and HR departments.

Talent Manager/Organizational Designer. Talent management focuses on how individuals enter, move up, across or out of the organization. Organizational design centers on the policies, practices and structure that shape how the organization works. Their linking reflects Ulrich's belief that HR may be placing too much emphasis on talent acquisition at the expense of organizational design. Talent management will not succeed in the long run without an organizational structure that supports it.

Strategy Architect.
Strategy Architects are able to recognize business trends and their impact on the business, and to identify potential roadblocks and opportunities. Harmansky, who recently joined Papa John's, demonstrates how the Strategy Architect competency helps HR contribute to the overall business strategy. "In my first months here, I'm spending a lot of time traveling, going to see stores all over the country. Every time I go to a store, while my counterparts on the management team are talking about [operational aspects], I'm talking to the people who work there. I'm trying to find out what the issues are surrounding people. How do I develop them? I'm looking for my business differentiator on the people side so I can contribute to the strategy."

When Charlease Deathridge, SPHR, HR manager of McKee Foods in Stuarts Draft, Va., identified a potential roadblock to implementing a new management philosophy, she used the Strategy Architect competency. "When we were rolling out 'lean manufacturing' principles at our location, we administered an employee satisfaction survey to assess how the workers viewed the new system. The satisfaction scores were lower than ideal. I showed [management] how a negative could become a positive, how we could use the data and follow-up surveys as a strategic tool to demonstrate progress."

Anchoring the pyramid at its base are two competencies that Ulrich describes as "table stakes—necessary but not sufficient." Except in China, where HR is at an earlier stage in professional development and there is great emphasis on transactional activities, these competencies are looked upon as basic skills that everyone must have. There is some disappointing news here. In the United States, respondents rated significantly lower on these competencies than the respondents surveyed in other countries.

Business Ally. HR contributes to the success of a business by knowing how it makes money, who the customers are, and why they buy the company's products and services.
For HR professionals to be Business Allies (and Credible Activists and Strategy Architects as well), they should be what Ulrich describes as "business literate." The mantra about understanding the business—how it works, the financials and strategic issues—remains as important today as it did in every iteration of the survey over the past 20 years. Yet progress in this area continues to lag.

"Even these high performers don't know the business as well as they should," Ulrich says. In his travels, he gives HR audiences 10 questions to test their business literacy.

Operational Executor. These skills tend to fall into the range of HR activities characterized as transactional or "legacy." Policies need to be drafted, adapted and implemented. Employees need to be paid, relocated, hired, trained and more. Every function here is essential, but—as with the Business Ally competency—high-performing HR managers seem to view them as less important and score higher on the other competencies. Even some highly effective HR people may be running a risk in paying too little attention to these nuts-and-bolts activities, Ulrich observes.

Practical Tool

In conducting debriefings for people who participated in the HRCS, Ulrich observes how delighted they are at the prescriptive nature of the exercise. The individual feedback reports they receive (see "How the Study Was Done") offer them a road map, and they are highly motivated to follow it.

Anyone who has been through a 360-degree appraisal knows that criticism can be jarring. It's risky to open yourself up to others' opinions when you don't have to. Add the prospect of sharing the results with your boss and colleagues who will be rating you, and you may decide to pass. Still, it's not surprising that highly motivated people like Deathridge jumped at the chance for the free feedback.

"All of it is not good," says Deathridge. "You have to be willing to face up to it.
You go home, work it out and say, 'Why am I getting this bad feedback?'"

But for Deathridge, the results mostly confirmed what she already knew. "I believe most people know where they're weak or strong. For me, it was most helpful to look at how close others' ratings of me matched with my own assessments. ... There's so much to learn about what it takes to be a genuine leader, and this study helped a lot."

Deathridge says the individual feedback report she received helped her realize the importance of taking a stand and developing her Credible Activist competency. "There was a situation where I had a line manager who wanted to discipline someone," she recalls. "In the past, I wouldn't have been able to stand up as strongly as I did. I was able to be very clear about how I felt. I told him that he had not done enough to document the performance issue, and that if he wanted to institute discipline it would have to be at the lowest level. In the past, I would have been more deferential and said, 'Let's compromise and do it at step two or three.' But I didn't do it; I spoke out strongly and held my ground."

This was the second study for Shane Smith, director of HR at Coca-Cola. "I did it for the first time in 2002. Now I'm seeing some traction in the things I've been working on. I'm pleased to see the consistency with my evaluations of my performance when compared to my raters."

What It All Means

Ulrich believes that HR professionals who would have succeeded 30, 20, even 10 years ago are not as likely to succeed today. They are expected to play new roles. To do so, they will need the new competencies. Ulrich urges HR to reflect on the new competencies and what they reveal about the future of the HR profession. His message is direct and unforgiving. "Legacy HR work is going, and HR people who don't change with it will be gone." Still, he remains optimistic that many in HR are heeding his call.
"Twenty percent of HR people will never get it; 20 percent are really top performing. The middle 60 percent are moving in the right direction," says Ulrich.

"Within that 60 percent there are HR professionals who may be at the table but are not contributing fully," he adds. "That's the group I want to talk to. ... I want to show them what they need to do to have an impact."

As a start, Ulrich recommends HR professionals consider initiating three conversations. "One is with your business leaders. Review the competencies with them and ask them if you're doing them. Next, pose the same questions to your HR team. Then, ask yourself whether you really know the business or if you're glossing on the surface." Finally, set your priorities. "Our data say: 'Get working on that Credible Activist!'"

Robert J. Grossman, a contributing editor of HR Magazine, is a lawyer and a professor of management studies at Marist College in Poughkeepsie, N.Y.

From: HR Magazine, 2007-06, Robert J. Grossman

Translated text: New Competencies for HR. What does it take to achieve greater success in the HR field, and what professional knowledge and skills are required? Since 1988, Dave Ulrich, professor of business administration at the University of Michigan, and his assistants have been researching this question.
Software Engineering Foreign Literature Translation

Xi'an University of Posts and Telecommunications Graduation Project (Thesis) Foreign Literature Translation. Department: School of Computer Science. Major: Software Engineering. Class: Software 0601. Student name: (blank). Supervisor name: (blank). Title: Associate Professor. Duration: March 8, 2010 to June 11, 2010.

Classes

One of the most compelling features about Java is code reuse. But to be revolutionary, you've got to be able to do a lot more than copy code and change it. That's the approach used in procedural languages like C, and it hasn't worked very well. Like everything in Java, the solution revolves around the class. You reuse code by creating new classes, but instead of creating them from scratch, you use existing classes that someone has already built and debugged. The trick is to use the classes without soiling the existing code.

➢ Initializing the base class

Since there are now two classes involved—the base class and the derived class—instead of just one, it can be a bit confusing to try to imagine the resulting object produced by a derived class. From the outside, it looks like the new class has the same interface as the base class and maybe some additional methods and fields. But inheritance doesn't just copy the interface of the base class. When you create an object of the derived class, it contains within it a subobject of the base class. This subobject is the same as if you had created an object of the base class by itself. It's just that from the outside, the subobject of the base class is wrapped within the derived-class object.

Of course, it's essential that the base-class subobject be initialized correctly, and there's only one way to guarantee this: perform the initialization in the constructor by calling the base-class constructor, which has all the appropriate knowledge and privileges to perform the base-class initialization. Java automatically inserts calls to the base-class constructor in the derived-class constructor.

➢ Guaranteeing proper cleanup

Java doesn't have the C++ concept of a destructor, a method that is automatically called when an object is destroyed.
The reason is probably that in Java, the practice is simply to forget about objects rather than to destroy them, allowing the garbage collector to reclaim the memory as necessary. Often this is fine, but there are times when your class might perform some activities during its lifetime that require cleanup. As mentioned in Chapter 4, you can't know when the garbage collector will be called, or if it will be called. So if you want something cleaned up for a class, you must explicitly write a special method to do it, and make sure that the client programmer knows that they must call this method.

Note that in your cleanup method, you must also pay attention to the calling order for the base-class and member-object cleanup methods in case one subobject depends on another. In general, you should follow the same form that is imposed by a C++ compiler on its destructors: first perform all of the cleanup work specific to your class, in the reverse order of creation. (In general, this requires that base-class elements still be viable.) Then call the base-class cleanup method.

➢ Name hiding

If a Java base class has a method name that's overloaded several times, redefining that method name in the derived class will not hide any of the base-class versions (unlike C++). Thus overloading works regardless of whether the method was defined at this level or in a base class. That said, it's far more common to override methods of the same name, using exactly the same signature and return type as in the base class. It can be confusing otherwise (which is why C++ disallows it—to prevent you from making what is probably a mistake).

➢ Choosing composition vs. inheritance

Both composition and inheritance allow you to place subobjects inside your new class (composition explicitly does this—with inheritance it's implicit).
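The cleanup ordering described above — class-specific work first, in reverse order of creation, then the base-class cleanup — might be sketched like this (all class and method names here are invented for illustration; `dispose` is not a standard Java method):

```java
// Sketch of an explicit cleanup protocol: each class cleans up its own
// resources first, then delegates to the base class, mirroring the
// reverse-of-creation order recommended for C++ destructors.
import java.util.ArrayList;
import java.util.List;

class CleanupLog {
    static List<String> entries = new ArrayList<>();  // records cleanup order
}

class Base {
    void dispose() {
        CleanupLog.entries.add("Base cleaned");
    }
}

class Derived extends Base {
    @Override
    void dispose() {
        CleanupLog.entries.add("Derived cleaned");  // own cleanup first...
        super.dispose();                            // ...then the base class
    }
}

public class CleanupDemo {
    public static void main(String[] args) {
        new Derived().dispose();
        System.out.println(CleanupLog.entries);  // [Derived cleaned, Base cleaned]
    }
}
```

Because the client must call `dispose()` explicitly, forgetting the call leaks whatever resource the class holds; that is exactly the risk the text warns about.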
You might wonder about the difference between the two, and when to choose one over the other. Composition is generally used when you want the features of an existing class inside your new class, but not its interface. That is, you embed an object so that you can use it to implement functionality in your new class, but the user of your new class sees the interface you've defined for the new class rather than the interface from the embedded object. For this effect, you embed private objects of existing classes inside your new class.

Sometimes it makes sense to allow the class user to directly access the composition of your new class; that is, to make the member objects public. The member objects use implementation hiding themselves, so this is a safe thing to do. When the user knows you're assembling a bunch of parts, it makes the interface easier to understand.

When you inherit, you take an existing class and make a special version of it. In general, this means that you're taking a general-purpose class and specializing it for a particular need.

➢ The final keyword

Java's final keyword has slightly different meanings depending on the context, but in general it says "This cannot be changed." You might want to prevent changes for two reasons: design or efficiency. Because these two reasons are quite different, it's possible to misuse the final keyword. The following sections discuss the three places where final can be used: for data, methods, and classes.

➢ Final data

Many programming languages have a way to tell the compiler that a piece of data is "constant." A constant is useful for two reasons: it can be a compile-time constant that won't ever change, or it can be a value initialized at run time that you don't want changed. In the case of a compile-time constant, the compiler is allowed to "fold" the constant value into any calculations in which it's used; that is, the calculation can be performed at compile time, eliminating some run-time overhead.
In Java, these sorts of constants must be primitives and are expressed with the final keyword. A value must be given at the time of definition of such a constant. A field that is both static and final has only one piece of storage that cannot be changed.

When using final with object references rather than primitives, the meaning gets a bit confusing. With a primitive, final makes the value a constant, but with an object reference, final makes the reference a constant. Once the reference is initialized to an object, it can never be changed to point to another object. However, the object itself can be modified; Java does not provide a way to make any arbitrary object a constant. (You can, however, write your class so that objects have the effect of being constant.) This restriction includes arrays, which are also objects.

➢ Final methods

There are two reasons for final methods. The first is to put a "lock" on the method to prevent any inheriting class from changing its meaning. This is done for design reasons when you want to make sure that a method's behavior is retained during inheritance and cannot be overridden.

The second reason for final methods is efficiency. If you make a method final, you are allowing the compiler to turn any calls to that method into inline calls. When the compiler sees a final method call, it can (at its discretion) skip the normal approach of inserting code to perform the method call mechanism (push arguments on the stack, hop over to the method code and execute it, hop back and clean off the stack arguments, and deal with the return value) and instead replace the method call with a copy of the actual code in the method body. This eliminates the overhead of the method call. Of course, if a method is big, then your code begins to bloat, and you probably won't see any performance gains from inlining, since any improvements will be dwarfed by the amount of time spent inside the method.
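The final-data semantics described above can be shown in a few lines (the class and field names are invented for this sketch):

```java
// Sketch of final semantics: a final primitive is a true constant, while a
// final reference is a fixed pointer to an object that remains mutable.
public class FinalDemo {
    static final int LIMIT = 10;          // compile-time constant
    static final int[] DATA = {1, 2, 3};  // final reference, mutable contents

    public static void main(String[] args) {
        // LIMIT = 11;        // would not compile: cannot assign to a final
        // DATA = new int[5]; // would not compile: the reference is constant
        DATA[0] = 99;         // legal: the array object itself is mutable
        System.out.println(DATA[0] + " " + LIMIT);  // 99 10
    }
}
```

The two commented-out lines are the compile-time errors the text refers to; the legal mutation of `DATA[0]` demonstrates why a final reference does not make the referenced object constant.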
It is implied that the Java compiler is able to detect these situations and choose wisely whether to inline a final method. However, it's best to let the compiler and JVM handle efficiency issues and make a method final only if you want to explicitly prevent overriding.

➢ Final classes

When you say that an entire class is final (by preceding its definition with the final keyword), you state that you don't want to inherit from this class or allow anyone else to do so. In other words, for some reason the design of your class is such that there is never a need to make any changes, or for safety or security reasons you don't want subclassing.

Note that the fields of a final class can be final or not, as you choose. The same rules apply to final for fields regardless of whether the class is defined as final. However, because it prevents inheritance, all methods in a final class are implicitly final, since there's no way to override them. You can add the final specifier to a method in a final class, but it doesn't add any meaning.

➢ Summary

Both inheritance and composition allow you to create a new type from existing types. Typically, however, composition reuses existing types as part of the underlying implementation of the new type, and inheritance reuses the interface. Since the derived class has the base-class interface, it can be upcast to the base, which is critical for polymorphism, as you'll see in the next chapter.

Despite the strong emphasis on inheritance in object-oriented programming, when you start a design you should generally prefer composition during the first cut and use inheritance only when it is clearly necessary. Composition tends to be more flexible. In addition, by using the added artifice of inheritance with your member type, you can change the exact type, and thus the behavior, of those member objects at run time.
Therefore, you can change the behavior of the composed object at run time. When designing a system, your goal is to find or create a set of classes in which each class has a specific use and is neither too big (encompassing so much functionality that it's unwieldy to reuse) nor annoyingly small (you can't use it by itself or without adding functionality).
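The composition-versus-inheritance guidance summarized above can be illustrated with a minimal sketch (both toy classes are invented for this example):

```java
// Contrast of the two reuse mechanisms:
//  - TextStack HAS-A list (composition): the list is hidden behind a new
//    interface of push/pop.
//  - CountingList IS-A list (inheritance): the existing list interface is
//    kept and specialized.
import java.util.ArrayList;

class TextStack {                      // composition: embeds a private list
    private final ArrayList<String> items = new ArrayList<>();
    void push(String s) { items.add(s); }
    String pop() { return items.remove(items.size() - 1); }
}

class CountingList extends ArrayList<String> {  // inheritance: specializes
    int addCount = 0;
    @Override
    public boolean add(String s) { addCount++; return super.add(s); }
}

public class ReuseDemo {
    public static void main(String[] args) {
        TextStack st = new TextStack();
        st.push("a"); st.push("b");
        System.out.println(st.pop());     // b

        CountingList cl = new CountingList();
        cl.add("x"); cl.add("y");
        System.out.println(cl.addCount);  // 2
    }
}
```

Note how `TextStack` hides the list interface (you cannot call `get` or `remove` on it), while `CountingList` can be upcast to `ArrayList<String>` and used anywhere a list is expected, which is the polymorphism benefit the summary mentions.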
Glossary of Software Engineering Terms (English-Chinese)

abstract class (抽象类) — a class that provides common behavior for a set of subclasses but never has instances of its own. An abstract class represents a concept; the classes derived from it represent implementations of that concept.
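The abstract-class entry above can be illustrated with a small sketch (the class names are invented for this example):

```java
// An abstract class captures a concept; derived classes implement it.
abstract class Shape {
    abstract double area();                         // shared in form only
    String describe() { return "area=" + area(); }  // shared behavior
}

class Square extends Shape {
    final double side;
    Square(double side) { this.side = side; }
    @Override double area() { return side * side; }
}

public class AbstractDemo {
    public static void main(String[] args) {
        Shape s = new Square(3);           // `new Shape()` would not compile
        System.out.println(s.describe());  // area=9.0
    }
}
```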
abstraction (抽象) — the creation of a view or model that omits unnecessary details in order to focus on a particular set of relevant details.

access modifier (存取权限) — a keyword that controls access to a class, method, or attribute. In Java the access modifiers are public, private, protected, and package (the default).

accessor methods (存取器方法) — methods provided by an object that define access to its instance variables. An accessor that returns the value of an instance variable is called a getter; one that assigns a value to an instance variable is called a setter.

acceptance (验收) — the act by which a customer accepts ownership of a software product as partial or complete fulfillment of a contract.

action (动作) — the specification of an executable statement that forms an abstraction of a computational process. An action typically causes a change in system state, achieved by sending a message to an object or by changing a link or an attribute value.

action sequence (动作序列) — an expression that resolves to a sequence of actions occurring one after another.

action state (动作状态) — a state representing the execution of an atomic action, typically the invocation of an operation.

activation (激活) — the execution of an action.

active class (主动类) — a class representing a thread of control in the system. See active object.

activity (活动) — a unit of work that a role is asked to perform.

active object (主动对象) — an object that owns a thread and can initiate control activity; an instance of an active class.

activity graph (活动图) — a special case of a state machine, used to model a process involving one or more classifiers. Contrast: statechart diagram. Synonym: activity diagram.

actor (主角) — someone or something outside the system that interacts with the system.

actor class (主角类) — defines a set of actor instances, each of which plays the same role with respect to the system; a closely related set of roles played by the users of use cases when interacting with those use cases. An actor has one role for each use case it communicates with.
Software Engineering Undergraduate Graduation Foreign Literature Translation Materials

Software Engineering Undergraduate Graduation Foreign Literature Translation. School code: 10128. Undergraduate Graduation Project Foreign Literature Translation, January 2015.

The Test Library Management System of Framework Based on SSH

The application-system requirements of small and medium-sized enterprises call for greater flexibility, safety, and a high performance-price ratio. The traditional J2EE framework cannot adapt to these needs, but an application system based on SSH (Struts+Spring+Hibernate) technology can satisfy them better. This paper analyses some integration theory and key technologies of SSH, and on that basis constructs a lightweight WEB framework that integrates the three technologies, forming a lightweight WEB framework based on SSH that has achieved good results in practical applications.

Introduction

Generally, the J2EE platform[27] used in large enterprise applications can solve the application's reliability, safety, and stability well, but its weaknesses are the high price and the long construction cycle. For small or medium enterprise applications, the alternative is a lightweight WEB system framework, including the commonly used approaches based on Struts and Hibernate. With the wide adoption of Spring, the combination of the three technologies may be a better choice for a lightweight WEB framework. It uses a layered structure and provides a well-integrated framework for Web applications at all levels, minimizing interlayer coupling and increasing development efficiency. This framework can solve a lot of problems, with good maintainability and scalability. It can achieve the separation of user interface from business logic, the separation of business logic from database operation, correct procedure control logic, and so on.
This paper studies the technology and principles of Struts, Spring, and Hibernate, presenting a proven lightweight WEB application framework for enterprises.

Hierarchical Web Mechanism

A hierarchical Web framework includes the user presentation layer, business logic layer, data persistence layer, expansion layer, and so on; each layer serves a different function, and together they complete the whole application. The whole system is divided into logic modules that are relatively independent of one another, and each module can be implemented according to a different design. This enables parallel development, rapid integration, good maintainability, and scalability.

Struts MVC Framework

To ensure reuse and efficiency in the development process, building a Web application with J2EE technology requires selecting a system framework with good performance. Only in this way can we avoid wasting time on adjusting configuration and achieve application development efficiently and quickly. So, in the course of practice, programmers arrived at some successful and proven development patterns, such as MVC and O/R mapping; many technologies, including the Struts and Hibernate frameworks, realize these patterns. However, the Struts framework only settles the separation between the view layer and the business logic and control layers; it does not provide flexible support for complex data-persistence processes. On the contrary, the Hibernate framework offers powerful and flexible support for complex data persistence. Therefore, how to integrate the two frameworks into a flexible, low-coupling solution that is easy to maintain for an information system is a research task that engineering staff study constantly.

Model-View-Controller (MVC) is a popular design pattern. It divides an interactive system into three components, each of which specializes in one task. The model contains the application data and manages the core functionality.
The visual display of the model and the feedback to the users are managed by the view. The controller not only interprets the inputs from the user, but also directs the model and the view to change appropriately. MVC separates the system functionality from the system interface so as to enhance system scalability and maintainability.

Struts is a typical MVC framework[32], and it contains the three aforementioned components. The model level is composed of JavaBean and EJB components. The controller is realized by Action and ActionServlet, and the view layer consists of JSP files. The central controller controls the action execution: it receives a request and redirects it to the appropriate module controller. Subsequently, the module controller processes the request and returns results to the central controller using a JavaBean object, which stores any object to be presented in the view layer, including an indication of the module views that must be presented. The central controller redirects the returned JavaBean object to the main view, which displays its information.

Spring Framework Technology

Spring is a lightweight J2EE application development framework, which uses the model of Inversion of Control (IoC) to separate the actual application from its configuration and dependency specifications. Committed to J2EE application solutions at all levels, Spring does not attempt to replace existing frameworks, but rather "welds" the objects of a J2EE application at all levels together through POJO management. In addition, developers are free to choose some or all of the Spring framework, since Spring modules are not totally interdependent. As a major business-level detail, Spring employs the idea of delayed (dependency) injection to assemble code for the sake of improving the scalability and flexibility of the systems built.
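The inversion-of-control idea described above can be reduced to a plain-Java sketch, with no Spring involved; every class and method name here is invented for illustration:

```java
// IoC in miniature: the service does not construct its own dependency;
// an external assembler ("the container" role) injects it, so the service
// depends only on an interface and can be rewired without being changed.
interface Mailer {
    String send(String msg);
}

class ConsoleMailer implements Mailer {
    public String send(String msg) { return "sent: " + msg; }
}

class OrderService {
    private final Mailer mailer;  // dependency is supplied, not created here
    OrderService(Mailer mailer) { this.mailer = mailer; }
    String placeOrder(String item) { return mailer.send("order " + item); }
}

public class IocDemo {
    public static void main(String[] args) {
        // Wiring happens in one place, outside the business classes.
        OrderService svc = new OrderService(new ConsoleMailer());
        System.out.println(svc.placeOrder("book"));  // sent: order book
    }
}
```

In Spring itself this wiring is driven by configuration rather than hand-written assembly, which is what lets the application be separated from its dependency specifications.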
Thus, systems achieve centralized business processing and reduced code duplication through the Spring AOP module.

Hibernate Persistence Framework

Hibernate is an open-source framework built on DAO design patterns that achieves mapping (O/R mapping) between objects and a relational database.

During Web system development, the traditional approach interacts with the database directly through JDBC. However, this method not only involves a heavy workload, but also requires revising complex JDBC SQL code whenever the business logic changes slightly; so both developing and maintaining the system are inconvenient. Considering the large difference between Java's object-oriented relations and the structure of a relational database, it is necessary to introduce a direct mapping mechanism between objects and the database. This mapping should rely on configuration files as much as possible, so that when the business logic changes in the future, only the mapping files need modifying rather than the Java source code. Therefore the O/R mapping pattern emerged, and Hibernate is one of its most outstanding realizations.

Hibernate encapsulates JDBC in a lightweight way, letting Java programmers operate on a relational database with object-oriented programming thinking. It is an implementation technology for the persistence layer. Compared to other persistence-layer technologies such as JDBC, EJB, and JDO, Hibernate is easy to grasp and more in line with object-oriented thinking. Hibernate has its own query language (HQL), which is fully object-oriented. The basic structure of a Hibernate application is shown in figure 6.1. Hibernate is a data persistence framework, and its core technology is object/relational database mapping (ORM).
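The O/R mapping idea itself can be sketched in plain Java with no Hibernate dependency. This toy derives an SQL INSERT statement from an object's fields by reflection, which is a radically simplified version of what an ORM's mapping layer does; the entity class and field names are invented:

```java
// Toy O/R mapping sketch: an object's fields become SQL columns and values.
// A real ORM (Hibernate) adds identity, caching, dialects, sessions, etc.
import java.lang.reflect.Field;

class Question {                  // a "persistent" entity for a test library
    String title = "2+2?";
    int score = 5;
}

public class OrmSketch {
    static String insertSql(Object entity) throws IllegalAccessException {
        StringBuilder cols = new StringBuilder(), vals = new StringBuilder();
        for (Field f : entity.getClass().getDeclaredFields()) {
            if (cols.length() > 0) { cols.append(", "); vals.append(", "); }
            cols.append(f.getName());                       // column name
            vals.append("'").append(f.get(entity)).append("'");  // value
        }
        return "INSERT INTO " + entity.getClass().getSimpleName()
             + " (" + cols + ") VALUES (" + vals + ")";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(insertSql(new Question()));
    }
}
```

The point of the sketch is the direction of the mapping: the Java class is the source of truth, and the SQL is derived from it by configuration and reflection rather than hand-written per query.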
Hibernate is generally considered a bridge between Java applications and the relational database, owing to providing durable data services for applications and allowing developers to use an object-oriented approach to the management and manipulation of the relational database. Furthermore, it furnishes an object-oriented query language, HQL.

Responsible for the mapping between Java classes and the relational database, Hibernate is essentially middleware providing database services. It supplies durable data services for applications by utilizing databases and several profiles, such as hibernate.properties and XML mapping files.

Web Services Technologies

The introduction of annotations into Java EE 5 makes it simple to create sophisticated Web service endpoints and clients with less code and a shorter learning curve than was possible with earlier Java EE versions. Annotations — first introduced in Java SE 5 — are modifiers you can add to your code as metadata. They don't affect program semantics directly, but the compiler, development tools, and runtime libraries can process them to produce additional Java language source files, XML documents, or other artifacts and behavior that augment the code containing the annotations (see Resources). Later in the article, you'll see how you can easily turn a regular Java class into a Web service by adding simple annotations.

Web Application Technologies

Java EE 5 welcomes two major pieces of front-end technology — JSF and JSTL — into the specification to join the existing JavaServer Pages and Servlet specifications. JSF is a set of APIs that enable a component-based approach to user-interface development. JSTL is a set of tag libraries that support embedding procedural logic, access to JavaBeans, SQL commands, localized formatting instructions, and XML processing in JSPs.
The most recent releases of JSF, JSTL, and JSP support a unified expression language (EL) that allows these technologies to integrate more easily (see Resources).

The cornerstone of Web services support in Java EE 5 is JAX-WS 2.0, which is a follow-on to JAX-RPC 1.1. Both of these technologies let you create RESTful and SOAP-based Web services without dealing directly with the tedium of XML processing and data binding inherent to Web services. Developers are free to continue using JAX-RPC (which is still required of Java EE 5 containers), but migrating to JAX-WS is strongly recommended. Newcomers to Java Web services might as well skip JAX-RPC and head right for JAX-WS. That said, it's good to know that both of them support SOAP 1.1 over HTTP 1.1 and so are fully compatible: a JAX-WS Web services client can access a JAX-RPC Web services endpoint, and vice versa.

The advantages of JAX-WS over JAX-RPC are compelling. JAX-WS:

• Supports the SOAP 1.2 standard (in addition to SOAP 1.1).
• Supports XML over HTTP. You can bypass SOAP if you wish. (See the article "Use XML directly over HTTP for Web services (where appropriate)" for more information.)
• Uses the Java Architecture for XML Binding (JAXB) for its data-mapping model. JAXB has complete support for XML schema and better performance (more on that in a moment).
• Introduces a dynamic programming model for both server and client. The client model supports both a message-oriented and an asynchronous approach.
• Supports Message Transmission Optimization Mechanism (MTOM), a W3C recommendation for optimizing the transmission and format of a SOAP message.
• Upgrades Web services interoperability (WS-I) support. (It supports Basic Profile 1.1; JAX-RPC supports only Basic Profile 1.0.)
• Upgrades SOAP attachment support. (It uses the SOAP with Attachments API for Java [SAAJ] 1.3; JAX-RPC supports only SAAJ 1.2.)

You can learn more about the differences by reading the article "JAX-RPC versus JAX-WS."

The wsimport tool in JAX-WS automatically handles many of the mundane details of Web service development and integrates easily into a build process in a cross-platform manner, freeing you to focus on the application logic that implements or uses a service. It generates artifacts such as services, service endpoint interfaces (SEIs), asynchronous response code, exceptions based on WSDL faults, and Java classes bound to schema types by JAXB.

JAX-WS also enables high-performing Web services. See Resources for a link to an article ("Implementing High Performance Web Services Using JAX-WS 2.0") presenting a benchmark study of equivalent Web service implementations based on the new JAX-WS stack (which uses two other Web services features in Java EE 5 — JAXB and StAX) and a JAX-RPC stack available in J2EE 1.4. The study found 40% to 1000% performance increases with JAX-WS in various functional areas under different loads.

Conclusion

Each framework has its advantages and disadvantages. The lightweight J2EE structure integrates Struts, Hibernate, and Spring technology, making full use of the powerful data processing of Struts, the management flexibility of Spring, and the maturity of Hibernate. Based on practice, it puts forward an open-source solution suitable for small and medium-sized enterprise applications. An application system developed on this architecture has loose interlayer coupling, a distinct structure, a short development cycle, and good maintainability. In addition, combined with commercial project development, the solution has achieved good effect.
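The annotations-as-metadata mechanism described in the Web services section can be illustrated with a small stdlib-only sketch. Note that `@WebOp` here is a made-up annotation, not the JAX-WS API; it only shows, in miniature, how a framework can read annotations back at run time to decide how to expose a method:

```java
// Annotations are metadata: they change nothing by themselves, but tools
// and libraries can read them via reflection and act on them.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)   // keep the annotation at run time
@interface WebOp {
    String name();
}

class Calculator {
    @WebOp(name = "addOp")            // metadata attached to the method
    public int add(int a, int b) { return a + b; }
}

public class AnnotationSketch {
    static String exposedName(Class<?> c, String method) throws Exception {
        Method m = c.getMethod(method, int.class, int.class);
        WebOp op = m.getAnnotation(WebOp.class);
        return op == null ? null : op.name();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exposedName(Calculator.class, "add"));  // addOp
    }
}
```

JAX-WS applies the same pattern at a much larger scale: its real annotations mark classes and methods, and the container generates the service endpoint plumbing from that metadata.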
The lightweight framework makes parallel development and maintenance of commercial systems convenient, and the approach can be carried forward to business systems in other industries.

Through research and practice, we can easily find that the Struts/Spring/Hibernate framework uses the maturity of Struts in the presentation layer, the flexibility of Spring in business management, and the convenience of Hibernate in the persistence layer; integrating the three frameworks into a whole makes development and maintenance more convenient and handy. This approach can also play a key role when applied to other business systems. Of course, how to optimize system performance, improve users' access speed, and strengthen the security of the system framework are all questions the author still needs to address in further work.

An Examination Question Bank Management System Based on the SSH Framework

Application systems for small and medium-sized enterprises require very good flexibility, security, and cost-effectiveness. The traditional J2EE architecture cannot satisfy these requirements, but application systems built on the SSH framework meet them well. This article analyzes the integration theory and key technologies of SSH: integrating the three technologies yields a lightweight SSH-based Web framework, which plays an important role in practical applications.
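The layering that the SSH stack enforces can be sketched in plain Java without the frameworks themselves. The sketch below only illustrates the separation of presentation, business, and persistence layers described above; all class and method names are invented for this example, and in a real SSH application Struts, Spring, and Hibernate would supply the wiring that `main()` does by hand here.

```java
// Plain-Java sketch of the layer separation an SSH (Struts/Spring/Hibernate)
// application enforces; all names are invented for illustration.
import java.util.HashMap;
import java.util.Map;

public class LayeredSketch {
    // Persistence layer (the role Hibernate plays).
    interface QuestionDao { String findById(int id); }

    // Business layer (the role a Spring-managed service plays).
    static class QuestionService {
        private final QuestionDao dao;
        QuestionService(QuestionDao dao) { this.dao = dao; } // constructor injection
        String render(int id) { return "Question: " + dao.findById(id); }
    }

    public static void main(String[] args) {
        // In-memory stand-in for the database.
        Map<Integer, String> db = new HashMap<>();
        db.put(1, "What does ACID stand for?");

        QuestionService service = new QuestionService(db::get);
        // Presentation layer (the role a Struts action plays) calls the service.
        System.out.println(service.render(1));
    }
}
```

Because each layer depends only on an interface, any layer can be replaced (an in-memory map here, a Hibernate DAO in production) without touching the others, which is exactly the loose coupling the conclusion credits to the SSH architecture.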

I. Translation of the foreign material: Java development 2.0: Sharding with Hibernate Shards, scaling out the relational database

Andrew Glover, author and developer, Beacon50

Abstract: Sharding isn't for every Web site, but it is one approach to meeting the demands of big data. For some shops, sharding means being able to keep a trusted RDBMS without sacrificing data scalability or system performance. In this installment of the Java development 2.0 series, you can find out when sharding works and when it doesn't, and then get your hands busy sharding a simple application capable of handling terabytes of data.

Date: 31 August 2010. Level: Intermediate.

When a relational database attempts to store terabytes of data in a single table, overall performance usually degrades. Indexing all of that data is obviously time-consuming, for writes as well as for reads. NoSQL data stores are especially suited to storing large-scale data, but NoSQL is a non-relational database approach. For developers who prefer the ACID-ity and entity structure of a relational database, and for projects that require that structure, sharding is an exciting alternative.

Sharding, an offshoot of database partitioning, is not a native database technology: it happens at the application level. Among the various sharding implementations, Hibernate Shards is probably the most popular in the Java™ technology world. This nifty project lets you work almost seamlessly with a sharded data set via POJOs mapped to logical databases. When you use Hibernate Shards, you don't need to map your POJOs specifically to shards. You map them just as you would map to any common relational database using the Hibernate approach; Hibernate Shards manages the low-level sharding tasks for you.

So far in this series, I have used a simple domain based on an analogy of races and runners to demonstrate the various data storage technologies. This month, I will use this familiar example to introduce a practical sharding strategy and then implement it with Hibernate Shards.
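The core of any application-level sharding strategy is a deterministic rule that maps an entity to a shard. As a rough illustration of the idea (this is not the Hibernate Shards API; the class and method names below are invented for the sketch), a hash-based shard-selection rule might look like this:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a deterministic, hash-based shard chooser of the kind a
// Hibernate Shards shard-selection strategy would encapsulate. All names are
// invented for this sketch.
public class ShardChooser {
    private final int shardCount;

    public ShardChooser(int shardCount) {
        if (shardCount <= 0) {
            throw new IllegalArgumentException("need at least one shard");
        }
        this.shardCount = shardCount;
    }

    // Map an entity key (e.g., a race ID) to a shard index in [0, shardCount).
    public int shardFor(String key) {
        // Math.floorMod keeps the result non-negative even if hashCode() < 0.
        return Math.floorMod(key.hashCode(), shardCount);
    }

    public static void main(String[] args) {
        ShardChooser chooser = new ShardChooser(4);
        List<String> raceIds = new ArrayList<>();
        raceIds.add("race-100");
        raceIds.add("race-101");
        raceIds.add("race-102");
        for (String id : raceIds) {
            int shard = chooser.shardFor(id);
            System.out.println(id + " -> shard " + shard);
            // The same key must always land on the same shard.
            if (shard != chooser.shardFor(id) || shard < 0 || shard >= 4) {
                throw new AssertionError("non-deterministic or out-of-range shard");
            }
        }
        System.out.println("OK");
    }
}
```

A rule this simple makes rebalancing hard when shards are added later, which is why real deployments often prefer virtual buckets or a directory lookup; the point here is only that shard selection is ordinary application logic, not something the database does for you.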
Foreign literature translation --- Software and Software Engineering

Foreign text: Software and Software Engineering: the emergence of software

As the decade of the 1980s began, a front-page story in Business Week magazine trumpeted the following headline: "Software: The New Driving Force." Software had come of age; it had become a topic for management concern. During the mid-1980s, a cover story in Fortune lamented "A Growing Gap in Software," and at the close of the decade, Business Week warned managers about "The Software Trap: Automate or Else." As the 1990s dawned, a feature story in Newsweek asked "Can We Trust Our Software?" and The Wall Street Journal related a major software company's travails with a front-page article entitled "Creating New Software Was an Agonizing Task ..." These headlines, and many others like them, were a harbinger of a new understanding of the importance of computer software: the opportunities that it offers and the dangers that it poses.

Software has now surpassed hardware as the key to the success of many computer-based systems. Whether a computer is used to run a business, control a product, or enable a system, software is the factor that differentiates. The completeness and timeliness of information provided by software (and related databases) differentiate one company from its competitors. The design and "human friendliness" of a software product differentiate it from competing products with an otherwise similar function. The intelligence and function provided by embedded software often differentiate two similar industrial or consumer products. It is software that can make the difference.

During the first three decades of the computing era, the primary challenge was to develop computer hardware that reduced the cost of processing and storing data. Throughout the decade of the 1980s, advances in microelectronics resulted in more computing power at increasingly lower cost.
Today, the problem is different. The primary challenge during the 1990s is to improve the quality (and reduce the cost) of computer-based solutions, solutions that are implemented with software. The power of a 1980s-era mainframe computer is available now on a desktop. The awesome processing and storage capabilities of modern hardware represent computing potential. Software is the mechanism that enables us to harness and tap this potential.

The context in which software has been developed is closely coupled to almost five decades of computer system evolution. Better hardware performance, smaller size, and lower cost have precipitated more sophisticated computer-based systems. We've moved from vacuum-tube processors to microelectronic devices that are capable of processing 200 million connections per second. In popular books on "the computer revolution," Osborne characterized a "new industrial revolution," Toffler called the advent of microelectronics part of "the third wave of change" in human history, and Naisbitt predicted that the transformation from an industrial society to an "information society" will have a profound impact on our lives. Feigenbaum and McCorduck suggested that information and knowledge will be the focal point for power in the twenty-first century, and Stoll argued that the "electronic community" created by networks and software is the key to knowledge interchange throughout the world. As the 1990s began, Toffler described a "power shift" in which old power structures (governmental, educational, industrial, economic, and military) will disintegrate as computers and software lead to a "democratization of knowledge."

Figure 1-1 depicts the evolution of software within the context of computer-based system application areas. During the early years of computer system development, hardware underwent continual change while software was viewed by many as an afterthought.
Computer programming was a "seat-of-the-pants" art for which few systematic methods existed. Software development was virtually unmanaged, until schedules slipped or costs began to escalate. During this period, a batch orientation was used for most systems. Notable exceptions were interactive systems such as the early American Airlines reservation system and real-time defense-oriented systems such as SAGE. For the most part, however, hardware was dedicated to the execution of a single program that in turn was dedicated to a specific application.

Evolution of software

During the early years, general-purpose hardware became commonplace. Software, on the other hand, was custom-designed for each application and had a relatively limited distribution. Product software (i.e., programs developed to be sold to one or more customers) was in its infancy. Most software was developed and ultimately used by the same person or organization. You wrote it, you got it running, and if it failed, you fixed it. Because job mobility was low, managers could rest assured that you'd be there when bugs were encountered. Because of this personalized software environment, design was an implicit process performed in one's head, and documentation was often nonexistent. During the early years we learned much about the implementation of computer-based systems, but relatively little about computer system engineering. In fairness, however, we must acknowledge the many outstanding computer-based systems that were developed during this era. Some of these remain in use today and provide landmark achievements that continue to justify admiration.

The second era of computer system evolution (Figure 1.1) spanned the decade from the mid-1960s to the late 1970s. Multiprogramming and multiuser systems introduced new concepts of human-machine interaction. Interactive techniques opened a new world of applications and new levels of hardware and software sophistication.
Real-time systems could collect, analyze, and transform data from multiple sources, thereby controlling processes and producing output in milliseconds rather than minutes. Advances in on-line storage led to the first generation of database management systems.

The second era was also characterized by the use of product software and the advent of "software houses." Software was developed for widespread distribution in a multidisciplinary market. Programs for mainframes and minicomputers were distributed to hundreds and sometimes thousands of users. Entrepreneurs from industry, government, and academia broke away to "develop the ultimate software package" and earn a bundle of money.

As the number of computer-based systems grew, libraries of computer software began to expand. In-house development projects produced tens of thousands of program source statements. Software products purchased from the outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon. All of these programs, all of these source statements, had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance. Effort spent on software maintenance began to absorb resources at an alarming rate. Worse yet, the personalized nature of many programs made them virtually unmaintainable. A "software crisis" loomed on the horizon.

The third era of computer system evolution began in the mid-1970s and continues today. The distributed system (multiple computers, each performing functions concurrently and communicating with one another) greatly increased the complexity of computer-based systems.
Global and local area networks, high-bandwidth digital communications, and increasing demands for "instantaneous" data access put heavy demands on software developers.

The third era has also been characterized by the advent and widespread use of microprocessors, personal computers, and powerful desktop workstations. The microprocessor has spawned a wide array of intelligent products, from automobiles to microwave ovens, from industrial robots to blood serum diagnostic equipment. In many cases, software technology is being integrated into products by technical staff who understand hardware but are often novices in software development.

The personal computer has been the catalyst for the growth of many software companies. While the software companies of the second era sold hundreds or thousands of copies of their programs, the software companies of the third era sell tens and even hundreds of thousands of copies. Personal computer hardware is rapidly becoming a commodity, while software provides the differentiating characteristic. In fact, as the rate of personal computer sales growth flattened during the mid-1980s, software-product sales continued to grow. Many people in industry and at home spent more money on software than they did to purchase the computer on which the software would run.

The fourth era in computer software is just beginning. Object-oriented technologies (Chapters 8 and 12) are rapidly displacing more conventional software development approaches in many application areas. Authors such as Feigenbaum and McCorduck [FEI83] and Allman [ALL89] predict that "fifth-generation" computers, radically different computing architectures, and their related software will have a profound impact on the balance of political and industrial power throughout the world. Already, "fourth-generation" techniques for software development (discussed later in this chapter) are changing the manner in which some segments of the software community build computer programs.
Expert systems and artificial intelligence software have finally moved from the laboratory into practical application for wide-ranging problems in the real world. Artificial neural network software has opened exciting possibilities for pattern recognition and human-like information processing abilities.

As we move into the fourth era, the problems associated with computer software continue to intensify:
• Hardware sophistication has outpaced our ability to build software to tap hardware's potential.
• Our ability to build new programs cannot keep pace with the demand for new programs.
• Our ability to maintain existing programs is threatened by poor design and inadequate resources.

In response to these problems, software engineering practices, the topic to which this book is dedicated, are being adopted throughout the industry.

An Industry Perspective

In the early days of computing, computer-based systems were developed using hardware-oriented management. Project managers focused on hardware because it was the single largest budget item for system development. To control hardware costs, managers instituted formal controls and technical standards. They demanded thorough analysis and design before something was built. They measured the process to determine where improvements could be made. Stated simply, they applied the controls, methods, and tools that we recognize as hardware engineering. Sadly, software was often little more than an afterthought.

In the early days, programming was viewed as an "art form." Few formal methods existed and fewer people used them. The programmer often learned his or her craft by trial and error. The jargon and challenges of building computer software created a mystique that few managers cared to penetrate. The software world was virtually undisciplined, and many practitioners of the day loved it!

Today, the distribution of costs for the development of computer-based systems has changed dramatically.
Software, rather than hardware, is often the largest single cost item. For the past decade, managers and many technical practitioners have asked the following questions:
• Why does it take so long to get programs finished?
• Why are costs so high?
• Why can't we find all errors before we give the software to our customers?
• Why do we have difficulty in measuring progress as software is being developed?

These, and many other questions, are a manifestation of the concern about software and the manner in which it is developed, a concern that has led to the adoption of software engineering practices.

Translation: Software and Software Engineering: the emergence of software. As the decade of the 1980s began, a front-page story in Business Week magazine trumpeted the headline "Software: Our New Driving Force!" Software had come of age; it became a topic of general concern.
Annals of the University of Petrosani, Economics, 2010, X(4), 201-214

Indexing Strategies for Optimizing Queries on MySQL

Abstract: This article investigates MySQL's indexing capabilities. It begins by reviewing how indexes work, as well as their structure. Next, it reviews the indexing features specific to each of the major MySQL data storage engines. The article then examines a broad range of situations in which indexes may help speed up an application, in addition to examining how indexes provide that assistance. In this article we present the index usage types, B-tree, hash, and bitmap, for optimizing queries, although MySQL has also implemented spatial R-tree indexes. The index type corresponds to the particular kind of internal algorithm and data structure used to implement the index. In MySQL, support for a particular index type depends on the storage engine.

Keywords: optimizing database structure, database performance, optimizing queries, indexes, B-tree index, hash index, bitmap index, storage engines

Considerations on MySQL database indexing

Database tuning is the process of improving database performance by minimizing response time (the time it takes a statement to complete) and maximizing throughput (the number of statements a database can handle concurrently per second) (Schwarty et al., 2004). Tuning is a team exercise performed jointly by DBAs, database designers, application designers, and database users, based on an understanding of the database. Tuning both depends on and affects the following: table design, relationships, index design, and other components; query design, the size of data read and retrieved, and the order of execution; the nature and frequency of read and insert/update/delete operations; the partitioning of work between the database server and the client; the timing, events, and effects of loading tables, indexes, or parts thereof into memory; and the concurrency characteristics of statements.

Database performance becomes an important issue in the presence of large amounts of data, complex queries, queries that manipulate large amounts of data, long-running queries, queries that lock everyone else out, large numbers of simultaneous transactions, large numbers of users, and limited bandwidth. In general, most database systems are designed to perform well. The best improvements can be achieved in the initial design phase, but sometimes not enough information is yet available about the characteristics of the database. Later, altering a large database in production use can be expensive, and practical considerations put constraints on what can be changed. Tuning can make the difference between a query taking milliseconds or minutes, or even longer, to execute.

The database system is the core of a management information system; database-based online transaction processing (OLTP) and online analytical processing (OLAP) are among the most important computer applications in banking, commerce, government, and other sectors (Williams & Lane, 2007). Judging from the application of most systems, query operations occupy the largest share of the various database operations, and the query operation, based on the SELECT statement, is the most expensive of the SQL statements. For example, if the amount of data accumulates to a certain extent, such as a bank-account table accumulating millions or even tens of millions of records, a full table scan often takes tens of minutes, or even hours. A query strategy better than a full table scan can often reduce the query to a few minutes, from which we can see the importance of query-optimization technology.

Many programmers think that query optimization is the task of the DBMS (database management system) and has little to do with the SQL statements the programmer writes; this is wrong. A good query plan can often improve performance many times over. A query plan is generated, after optimization, from the collection of SQL statements submitted by a user. The DBMS processes a query plan as follows: after the lexical and syntactic checks of the query, the statement is submitted to the DBMS's query optimizer; after algebraic optimization and access-path optimization, a pre-compilation module processes the statement and generates the query plan, which is then executed by the system at the right time, and the final result is returned to the user. Actual database products (such as Oracle, Sybase, and so on) all use cost-based optimization in their recent versions; this kind of optimization estimates the cost of different query plans from the information in the system dictionary tables and then selects a better plan (Opell, 2006). Although databases have become quite good at query optimization, the optimization performed by the system is based on the SQL statements submitted by the user; it is hard to imagine an originally poor query plan becoming efficient after system optimization, so the quality of the statements users write is essential. We do not discuss system query optimization here, focusing instead on plans for improving users' queries.

Like beauty, the most attractive indexing strategy is very much in the eye of the beholder. After indexes are in place for primary, join, and filter keys (a universal standard of indexing beauty, perhaps?), what works for application A might be the wrong approach for application B. Application A might be a transactional system that supports tens of thousands of quick interactions with the database, whose data modifications must be made in milliseconds. Application B might be a decision-support system in which users create an ample assortment of server-hogging queries. These two applications require very different indexing tactics.

MySQL's optimizer always tries to use the information at hand to develop the most efficient query plan. However, requirements change over time; users and applications can introduce unpredicted requests at any point. These requests might include new transactions, reports, integration, and so forth.

Indexing is the most important tool you have for speeding up queries. Other techniques are available to you too, but generally the one thing that makes the biggest difference is the proper use of indexes. On the MySQL mailing list, people often ask for help in making a query run faster. In a surprisingly large number of cases, there are no indexes on the tables in question, and adding indexes often solves the problem immediately. It doesn't always work like that, because optimization isn't always simple. Nevertheless, if you don't use indexes, in many cases you are just wasting time trying to improve performance by other means. Use indexing first to get the biggest performance boost, and then see what other techniques might be helpful.

The particular details of index implementations vary for the different MySQL storage engines. For example, for a MyISAM table, the table's data rows are kept in a data file, and index values are kept in an index file. You can have more than one index on a table, but they are all stored in the same index file. Each index in the index file consists of a sorted array of key records that are used for fast access into the data file. By contrast, the BDB and InnoDB storage engines do not separate data rows and index values in the same way, although both maintain indexes as sets of sorted values. By default, the BDB engine uses a single file per table to store both data and index values. The InnoDB engine uses a single tablespace within which it manages data and index storage for all InnoDB tables. InnoDB can be configured to create each table with its own tablespace, but even so, a table's data and indexes are stored in the same tablespace file.

MySQL uses indexes in several ways. As just described, indexes are used to speed up searches for rows matching the terms of a WHERE clause, or for rows that match rows in other tables when performing joins. For queries that use the MIN() or MAX() functions, the smallest or largest value in an indexed column can be found quickly without examining every row. MySQL can often use indexes to perform sorting and grouping operations quickly for ORDER BY and GROUP BY clauses. Sometimes MySQL can use an index to read all the information required for a query. Suppose you are selecting values from an indexed numeric column in a MyISAM table, and you are not selecting other columns from the table. In this case, when MySQL reads an index value from the index file, it obtains the same value it would get by reading the data file. There is no reason to read values twice, so the data file need not even be consulted (Dubois, 2008). In general, if MySQL can figure out how to use an index to process a query more quickly, it will.

There are also disadvantages: there are costs both in time and in space. In practice, these drawbacks tend to be outweighed by the advantages, but you should know what they are. First, indexes speed up retrievals but slow down inserts and deletes, as well as updates of values in indexed columns. That is, indexes slow down most operations that involve writing. This occurs because writing a record requires writing not only the data row but also changes to any indexes. The more indexes a table has, the more changes need to be made, and the greater the average performance degradation (Dubois, 2008). Second, an index takes up disk space, and multiple indexes take up correspondingly more space. This might cause a table to reach a size limit more quickly than if there were no indexes: for a MyISAM table, indexing it heavily may cause the index file to reach its maximum size more quickly than the data file. For BDB tables, which store data and index values together in the same file, adding indexes causes the table to reach its maximum file size more quickly. All InnoDB tables located within the InnoDB shared tablespace compete for the same common pool of space, and adding indexes depletes storage within this tablespace more quickly. However, unlike the files used for MyISAM and BDB tables, the InnoDB shared tablespace is not bound by your operating system's file-size limit, because it can be configured to use multiple files. As long as you have additional disk space, you can expand the tablespace by adding new components to it. InnoDB tables that use individual tablespaces are constrained in the same way as BDB tables, because data and index values are stored together in a single file.
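The point-lookup benefit of an index can be made concrete with a plain-Java analogy: a sorted structure such as a `TreeMap` plays the role of a B-tree index over one column, replacing a full scan of the rows with a logarithmic lookup. This is only an analogy for how a storage engine uses its index file, not MySQL code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Analogy for a B-tree index: rows live in an unordered "data file" (a list),
// while a TreeMap keyed on one column points back at row positions.
public class IndexAnalogy {
    record Row(int accountId, String owner) {}

    public static void main(String[] args) {
        List<Row> table = new ArrayList<>();
        TreeMap<Integer, Integer> index = new TreeMap<>(); // accountId -> row position
        for (int i = 0; i < 100_000; i++) {
            table.add(new Row(i * 7 % 100_000, "owner-" + i));
            index.put(table.get(i).accountId(), i);
        }

        int wanted = 35_000;

        // Full table scan: inspect rows one by one (O(n)).
        Row byScan = null;
        for (Row r : table) {
            if (r.accountId() == wanted) { byScan = r; break; }
        }

        // Indexed lookup: O(log n) through the sorted structure.
        Row byIndex = table.get(index.get(wanted));

        System.out.println("scan=" + byScan.owner() + " index=" + byIndex.owner());
        System.out.println(byScan.owner().equals(byIndex.owner()) ? "MATCH" : "MISMATCH");
    }
}
```

The sorted map also answers range predicates and ordered traversals cheaply (`index.subMap(...)`, `index.firstKey()`), which mirrors why MySQL can satisfy ORDER BY, MIN(), and MAX() from a B-tree index without touching every row.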
Annals of the University of Petrosani, Economics, 2010, X(4), 201-214

INDEXING STRATEGIES FOR OPTIMIZING QUERIES ON MYSQL

ANCA MEHEDINTU, CERASELA PIRVU, CRISTI ETEGAN

ABSTRACT: This article investigates MySQL's index capabilities. It begins by reviewing how indexes work, as well as their structure. Next, it reviews indexing features specific to each of the major MySQL data storage engines. This article then examines a broad range of situations in which indexes might help speed up your application, in addition to examining how indexes can be of assistance. In this article we present index usage types: B-trees, hash and bitmap, in order to optimize queries, although MySQL has also implemented spatial R-tree indexes. The index type corresponds to the particular kinds of internal algorithms and data structures used to implement the index. In MySQL, support for a particular index type is dependent upon the storage engine.

KEY WORDS: Optimizing database structure, Database performance, Optimizing queries, Indexes, B-tree Index, Hash Index, Bitmap Index, Storage engines

CONSIDERATIONS ON THE MYSQL DATABASE INDEXING

Database tuning is the process of improving database performance by minimizing response time (the time it takes a statement to complete) and maximizing throughput (the number of statements a database can handle concurrently per second) (Schwarty et al., 2004). Tuning is a team exercise, collectively performed by DBAs, database designers, application designers and database users, based on an understanding of the database. Tuning both depends on and impacts the following: table design, relationships, index design, and other components; query design, size of data read and retrieved, and order of execution; nature and frequency of read and insert/update/delete operations; partitioning of the work between the database server and the client; timing, events and effect of loading tables, indexes or parts thereof into memory; concurrency characteristics of statements.

Database performance becomes an important issue in the presence of large amounts of data, complex queries, queries manipulating large amounts of data, long running queries, queries that lock everyone else out, large numbers of simultaneous transactions, large numbers of users and limited bandwidth. In general, most database systems are designed for good performance. The best improvements can be achieved in the initial design phase, but sometimes not enough information is available about the characteristics of a database. Later, altering a large database in production use can be expensive, and practical considerations put constraints on what can be changed. Tuning can make the difference between a query taking milliseconds or minutes or even more to execute.

The database system is the core of management information systems; database-based online transaction processing (OLTP) and online analytical processing (OLAP) are among the most important computer applications in banking, business, government and other departments (Williams & Lane, 2007). From the application of most systems, the query operation occupies the largest share of the various database operations, and the query operation, based on the SELECT statement, is the SQL statement with the largest cost. For example, if the amount of data accumulates to a certain extent, such as a bank-account information table accumulating millions or even tens of millions of records, a full table scan often requires tens of minutes, and even a few hours. A query strategy better than the full table scan can often reduce the query to a few minutes, so we can see the importance of query optimization technology.

Many programmers think that query optimization is a DBMS (database management system) task that has little to do with the SQL statements the programmer writes, which is wrong. A good query plan can often improve performance many times over. A query plan is generated, after optimization, from the collection of SQL statements submitted by a user. The DBMS deals with a query plan as follows: after the lexical and syntax checks of the query, the statement is submitted to the DBMS's query optimizer; after algebraic optimization and optimization of the access path, a pre-compilation module processes the statement and generates the query plan, which is then executed by the system at the right time, and the final results are returned to the user. Actual database products (such as Oracle, Sybase, etc.) all use the cost-based optimization method in their recent versions; this optimization estimates the costs of different query plans from the information in the system dictionary tables, and then selects a better plan (Opell, 2006). While databases have become better at query optimization, the optimization performed by the system is based on the SQL statements submitted by the user; it is difficult to imagine an originally worse query plan becoming efficient after system optimization, so how well users write their statements is essential. We do not discuss system query optimization here, focusing instead on plans to improve the user's query solutions.

Like beauty, the most attractive indexing strategy is very much in the eye of the beholder. After indexes are in place for primary, join, and filter keys (a universal standard of indexing beauty, perhaps?), what works for application A might be the wrong approach for application B. Application A might be a transactional system that supports tens of thousands of quick interactions with the database, and its data modifications must be made in milliseconds. Application B might be a decision support system in which users create an ample assortment of server-hogging queries. These two applications require very different indexing tactics.

MySQL's optimizer always tries to use the information at hand to develop the most efficient query plans. However, requirements change over time; users and applications can introduce unpredicted requests at any point. These requests might include new transactions, reports, integration, and so forth.

Indexing is the most important tool you have for speeding up queries. Other techniques are available to you, too, but generally the one thing that makes the most difference is the proper use of indexes. On the MySQL mailing list, people often ask for help in making a query run faster. In a surprisingly large number of cases, there are no indexes on the tables in question, and adding indexes often solves the problem immediately. It doesn't always work like that, because optimization isn't always simple. Nevertheless, if you don't use indexes, in many cases you're just wasting your time trying to improve performance by other means. Use indexing first to get the biggest performance boost and then see what other techniques might be helpful.

The particular details of index implementations vary for different MySQL storage engines. For example, for a MyISAM table, the table's data rows are kept in a data file, and index values are kept in an index file. You can have more than one index on a table, but they're all stored in the same index file. Each index in the index file consists of a sorted array of key records that are used for fast access into the data file. By contrast, the BDB and InnoDB storage engines do not separate data rows and index values in the same way, although both maintain indexes as sets of sorted values. By default, the BDB engine uses a single file per table to store both data and index values. The InnoDB engine uses a single tablespace within which it manages data and index storage for all InnoDB tables. InnoDB can be configured to create each table with its own tablespace, but even so, a table's data and indexes are stored in the same tablespace file.

MySQL uses indexes in several ways. As just described, indexes are used to speed up searches for rows matching terms of a WHERE clause or rows that match rows in other tables when performing joins. For queries that use the MIN() or MAX() functions, the smallest or largest value in an indexed column can be found quickly without examining every row. MySQL can often use indexes to perform sorting and grouping operations quickly for ORDER BY and GROUP BY clauses. Sometimes MySQL can use an index to read all the information required for a query. Suppose that you're selecting values from an indexed numeric column in a MyISAM table, and you're not selecting other columns from the table. In this case, when MySQL reads an index value from the index file, it obtains the same value that it would get by reading the data file. There's no reason to read values twice, so the data file need not even be consulted (Dubois, 2008). In general, if MySQL can figure out how to use an index to process a query more quickly, it will.

There are also disadvantages: there are costs both in time and in space. In practice, these drawbacks tend to be outweighed by the advantages, but you should know what they are. First, indexes speed up retrievals but slow down inserts and deletes, as well as updates of values in indexed columns. That is, indexes slow down most operations that involve writing. This occurs because writing a record requires writing not only the data row but also changes to any indexes. The more indexes a table has, the more changes need to be made, and the greater the average performance degradation (Dubois, 2008). Second, an index takes up disk space, and multiple indexes take up correspondingly more space. This might cause a table to reach a size limit more quickly than if there were no indexes: for a MyISAM table, indexing it heavily may cause the index file to reach its maximum size more quickly than the data file. For BDB tables, which store data and index values together in the same file, adding indexes causes the table to reach the maximum file size more quickly. All InnoDB tables that are located within the InnoDB shared tablespace compete for the same common pool of space, and adding indexes depletes storage within this tablespace more quickly. However, unlike the files used for MyISAM and BDB tables, the InnoDB shared tablespace is not bound by your operating system's file-size limit, because it can be configured to use multiple files. As long as you have additional disk space, you can expand the tablespace by adding new components to it. InnoDB tables that use individual tablespaces are constrained the same way as BDB tables, because data and index values are stored together in a single file.
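The write-side cost described above (every insert must also update each index structure) can be illustrated with the same plain-Java analogy. This is a sketch of the principle, not MySQL internals: each secondary "index" is a sorted map that must be touched on every row insert, so write work grows with the number of indexes.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch of index maintenance cost: each extra index means extra work per insert.
public class IndexWriteCost {
    record Row(int id, String name, int balance) {}

    public static void main(String[] args) {
        List<Row> table = new ArrayList<>();
        // Two secondary "indexes", each kept sorted like a B-tree.
        TreeMap<String, Integer> nameIndex = new TreeMap<>();
        TreeMap<Integer, Integer> balanceIndex = new TreeMap<>();

        long indexUpdates = 0;
        for (int i = 0; i < 1000; i++) {
            Row r = new Row(i, "acct-" + i, i * 13);
            table.add(r);                  // write the data row...
            nameIndex.put(r.name(), i);    // ...plus one write per index
            balanceIndex.put(r.balance(), i);
            indexUpdates += 2;
        }

        // Every row insert triggered one update per index: 1000 rows x 2 indexes.
        System.out.println("rows=" + table.size() + " indexUpdates=" + indexUpdates);
    }
}
```

This is why the article recommends indexing the columns that actually serve as primary, join, and filter keys rather than indexing everything: each additional index buys faster reads on one access path at the price of extra work on every write.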