Foreign-Language Literature with Chinese Translations: Databases
Translation of a Foreign Reference (Chinese)

High-Speed Railway Mobile Communication System Based on 4G LTE Technology

Prof. K. S. Solanki, Kratika Chouhan
Ujjain Engineering College, Ujjain, Madhya Pradesh, India

Abstract: Over time, high-speed railways (HSR) have come to require reliable, safe train operation and passenger communication. To achieve this goal, HSR systems need higher bandwidth and shorter response times; legacy HSR technology must evolve, new technology must be developed, and existing architectures must be improved while costs are controlled. To meet this requirement, HSR adopted GSM-R, an evolution of GSM, but GSM-R cannot satisfy customers' needs. The new technology LTE-R was therefore adopted; it provides higher bandwidth and greater customer satisfaction at high speeds. This paper introduces LTE-R, presents a comparison between GSM-R and LTE-R, and describes which railway mobile communication system performs better at high speed.

Keywords: high-speed railway, LTE, GSM, communication and signalling systems

I. Introduction

High-speed rail raises the requirements placed on the mobile communication system. With this improvement, the network architecture and hardware must accommodate train speeds of up to 500 km/h. HSR also needs fast handover. To solve these problems, HSR requires a new technology called LTE-R; an HSR system based on LTE-R provides high data rates, greater bandwidth, and low latency. LTE-R can handle the ever-growing traffic volume, ensure passenger safety, and deliver real-time multimedia information. As train speeds continue to rise, a reliable broadband communication system is essential for HSR mobile communication. Quality-of-service (QoS) measures for HSR applications include the data rate, bit error rate (BER), and transmission delay. Meeting HSR's operational needs calls for a new system whose capabilities are aligned with LTE, one that offers new services yet can still coexist with GSM-R for a long time. When selecting a suitable wireless communication system for HSR, issues such as performance, services, attributes, frequency band, and industrial support must be considered. Compared with third-generation (3G) systems, a 4G LTE system has a simple, flat architecture, high data rates, and low latency. Given LTE's performance and level of maturity, LTE-Railway (LTE-R) is likely to become the next-generation HSR communication system.

II. LTE-R System Description

Consideration of LTE-R's frequency and spectrum usage is very important for providing more efficient data transmission for high-speed railway (HSR) communication.
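To get a feel for why 500 km/h stresses handover, a back-of-the-envelope sketch helps. The cell diameter below is an illustrative assumption, not a figure from the paper:

```python
# Rough sketch (illustrative numbers only): at high train speeds the time
# spent inside one cell shrinks to tens of seconds, so handover must be fast.

def cell_dwell_seconds(speed_kmh: float, cell_diameter_km: float) -> float:
    """Time a train spends inside one cell, assuming travel along the diameter."""
    speed_km_per_s = speed_kmh / 3600.0
    return cell_diameter_km / speed_km_per_s

# A train at 500 km/h crossing assumed 3 km cells:
dwell = cell_dwell_seconds(500, 3)   # 21.6 s per cell
handovers_per_minute = 60 / dwell    # roughly 2.8 handovers every minute
print(dwell, handovers_per_minute)
```

Halving the speed doubles the dwell time, which is one reason legacy GSM-R handover budgets become tight as operating speeds rise.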
Survey of Translated Foreign References on Big Data

Original text: Data Mining and Data Publishing

Data mining is the extraction of interesting patterns or knowledge from huge amounts of data. The initial idea of privacy-preserving data mining (PPDM) was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. Privacy-preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task may be unknown at the time of data publishing. PPDP studies how to transform raw data into a version that is immunized against privacy attacks but that still supports effective data mining tasks. Privacy preservation for both data mining (PPDM) and data publishing (PPDP) has become increasingly popular because it allows the sharing of privacy-sensitive data for analysis purposes. One well-studied approach is the k-anonymity model [1], which in turn led to other models such as confidence bounding, l-diversity, t-closeness, (α,k)-anonymity, etc. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. The aim of this paper is to present a survey of the most common attack techniques for anonymization-based PPDM and PPDP and to explain their effects on data privacy. Although data mining is potentially useful, many data holders are reluctant to provide their data for data mining for fear of violating individual privacy. In recent years, studies have been carried out to ensure that the sensitive information of individuals cannot be identified easily. Anonymity models and k-anonymization techniques have been the focus of intense research in the last few years.
In order to ensure the anonymization of data while at the same time minimizing the information loss resulting from data modifications, several extended models have been proposed, discussed as follows.

1. k-Anonymity

k-anonymity is one of the most classic models: a technique that prevents joining attacks by generalizing and/or suppressing portions of the released microdata so that no individual can be uniquely distinguished from a group of size k. In a k-anonymous table, a data set is k-anonymous (k ≥ 1) if each record in the data set is indistinguishable from at least (k − 1) other records within the same data set. The larger the value of k, the better the privacy is protected. k-anonymity can ensure that individuals cannot be uniquely identified by linking attacks.

2. Extending Models

Since k-anonymity does not provide sufficient protection against attribute disclosure, the notion of l-diversity attempts to solve this problem by requiring that each equivalence class have at least l well-represented values for each sensitive attribute. l-diversity has some advantages over k-anonymity, because a k-anonymous data set permits strong attacks due to a lack of diversity in the sensitive attributes. In this model, an equivalence class is said to have l-diversity if there are at least l well-represented values for the sensitive attribute. There are, however, semantic relationships among attribute values, and different values have very different levels of sensitivity; the related (α,k)-anonymity model therefore requires that, after anonymization, the frequency (as a fraction) of any sensitive value in an equivalence class be no more than α.

3. Related Research Areas

Several polls show that the public has an increased sense of privacy loss. Since data mining is often a key component of information systems, homeland security systems, and monitoring and surveillance systems, it gives the wrong impression that data mining is a technique for privacy intrusion.
This lack of trust has become an obstacle to the benefits of the technology. For example, the potentially beneficial data mining research project Terrorism Information Awareness (TIA) was terminated by the US Congress due to its controversial procedures for collecting, sharing, and analyzing the trails left by individuals. Motivated by privacy concerns about data mining tools, a research area called privacy-preserving data mining (PPDM) emerged in 2000. The initial idea of PPDM was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. The solutions were often tightly coupled with the data mining algorithms under consideration. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task is sometimes unknown at the time of data publishing. Furthermore, some PPDP solutions emphasize preserving data truthfulness at the record level, but PPDM solutions often do not preserve that property. PPDP differs from PPDM in several major ways, as follows:

1) PPDP focuses on techniques for publishing data, not techniques for data mining. In fact, it is expected that standard data mining techniques will be applied to the published data. In contrast, the data holder in PPDM needs to randomize the data in such a way that data mining results can be recovered from the randomized data. To do so, the data holder must understand the data mining tasks and algorithms involved. This level of involvement is not expected of the data holder in PPDP, who usually is not an expert in data mining.

2) Both randomization and encryption do not preserve the truthfulness of values at the record level; therefore, the released data are basically meaningless to the recipients.
In such a case, the data holder in PPDM may consider releasing the data mining results rather than the scrambled data.

3) PPDP primarily "anonymizes" the data by hiding the identity of record owners, whereas PPDM seeks to directly hide the sensitive data. Excellent surveys and books on randomization and cryptographic techniques for PPDM can be found in the existing literature.

A family of research work called privacy-preserving distributed data mining (PPDDM) aims at performing some data mining task on a set of private databases owned by different parties. It follows the principle of Secure Multiparty Computation (SMC) and prohibits any data sharing other than the final data mining result. Clifton et al. present a suite of SMC operations, such as secure sum, secure set union, secure size of set intersection, and scalar product, that are useful for many data mining tasks. In contrast, PPDP does not perform the actual data mining task but is concerned with how to publish the data so that the anonymized data are useful for data mining. We can say that PPDP protects privacy at the data level while PPDDM protects privacy at the process level. They address different privacy models and data mining scenarios. In the field of statistical disclosure control (SDC), research work focuses on privacy-preserving publishing methods for statistical tables. SDC focuses on three types of disclosure, namely identity disclosure, attribute disclosure, and inferential disclosure. Identity disclosure occurs if an adversary can identify a respondent from the published data. Revealing that an individual is a respondent of a data collection may or may not violate confidentiality requirements. Attribute disclosure occurs when confidential information about a respondent is revealed and can be attributed to the respondent. Attribute disclosure is the primary concern of most statistical agencies in deciding whether to publish tabular data.
Inferential disclosure occurs when individual information can be inferred with high confidence from statistical information in the published data. Some other works in SDC focus on the study of the non-interactive query model, in which the data recipients can submit one query to the system. This type of non-interactive query model may not fully address the information needs of data recipients because, in some cases, it is very difficult for a data recipient to accurately construct a query for a data mining task in one shot. Consequently, there is a series of studies on the interactive query model, in which the data recipients, including adversaries, can submit a sequence of queries based on previously received query results. The database server is responsible for keeping track of all queries of each user and determining whether or not the currently received query has violated the privacy requirement with respect to all previous queries. One limitation of any interactive privacy-preserving query system is that it can only answer a sublinear number of queries in total; otherwise, an adversary (or a group of corrupted data recipients) will be able to reconstruct all but a 1 − o(1) fraction of the original data, which is a very strong violation of privacy. When the maximum number of queries is reached, the query service must be closed to avoid privacy leaks. In the case of the non-interactive query model, the adversary can issue only one query and, therefore, the non-interactive query model cannot achieve the same degree of privacy defined by the interactive model. One may consider privacy-preserving data publishing a special case of the non-interactive query model. This paper presents a survey of the most common attack techniques for anonymization-based PPDM and PPDP and explains their effects on data privacy.
k-anonymity protects respondents' identity and reduces linking attacks. In the case of a homogeneity attack, however, the simple k-anonymity model fails, and a concept that prevents this attack is needed; the solution is l-diversity. With all tuples arranged in well-represented form, an adversary is diverted across l places, or l sensitive attribute values. l-diversity is limited in the case of a background-knowledge attack, because no one can predict the knowledge level of an adversary. It is observed that generalization and suppression are also applied to attributes that do not need this extent of privacy, which reduces the precision of the published table. e-NSTAM (extended Sensitive Tuples Anonymity Method) is applied to sensitive tuples only and reduces information loss; this method also fails in the case of multiple sensitive tuples. Generalization with suppression also causes data loss, because suppression withholds values that do not fit the k factor. Future work on this front can include defining a new privacy measure alongside l-diversity for multiple sensitive attributes, and generalizing attributes without suppression using other techniques for achieving k-anonymity, because suppression reduces the precision of the published table.
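The k-anonymity test described above (every quasi-identifier combination shared by at least k records) can be sketched in a few lines. This is a minimal illustration, not code from the paper; the column names and sample values are invented:

```python
# Minimal sketch of the k-anonymity check: group records by their
# quasi-identifier values and verify every group has at least k members.

from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """rows: list of dicts; quasi_identifiers: column names treated as QIs."""
    groups = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

# Hypothetical generalized microdata ('476**' and '2*' are suppressed digits):
records = [
    {"zip": "476**", "age": "2*", "disease": "flu"},
    {"zip": "476**", "age": "2*", "disease": "cancer"},
    {"zip": "479**", "age": "3*", "disease": "flu"},
    {"zip": "479**", "age": "3*", "disease": "flu"},
]

print(is_k_anonymous(records, ["zip", "age"], k=2))  # True
```

Note that the last QI group is 2-anonymous but not 2-diverse (both sensitive values are "flu"), which is exactly the homogeneity attack that motivates l-diversity.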
Foreign Literature Translation with English–Chinese Comparison: Databases

Database Management Systems

A database (sometimes spelled "data base") is also called an electronic database, referring to any collection of data, or information, that is specially organized for rapid search and retrieval by a computer. Databases are structured to facilitate the storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. Databases can be stored on magnetic disk or tape, optical disk, or some other secondary storage device.

A database consists of a file or a set of files. The information in these files may be broken down into records, each of which consists of one or more fields. Fields are the basic units of data storage, and each field typically contains information pertaining to one aspect or attribute of the entity described by the database. Using keywords and various sorting commands, users can rapidly search, rearrange, group, and select the fields in many records to retrieve or create reports on particular aggregates of data.

Complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information.
In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users. Several major trends are emerging that enhance the value and usefulness of database management systems:

Managers, who require more up-to-date information to make effective decisions.
Customers, who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.
Users, who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.
Organizations, which discover that information has a strategic value and utilize their database systems to gain an edge over their competitors.

The Database Model

A data model describes a way to structure and manipulate the data in a database. The structural part of the model specifies how data should be represented (such as trees, tables, and so on). The manipulative part of the model specifies the operations with which to add, delete, display, maintain, print, search, select, sort, and update the data.

Hierarchical Model

The first database management systems used a hierarchical model; that is, they arranged records into a tree structure. Some records are root records and all others have unique parent records.
The structure of the tree is designed to reflect the order in which the data will be used; that is, the record at the root of a tree will be accessed first, then records one level below the root, and so on. The hierarchical model was developed because hierarchical relationships are commonly found in business applications. As you know, an organization chart often describes a hierarchical relationship: top management is at the highest level, middle management at lower levels, and operational employees at the lowest levels. Note that within a strict hierarchy, each level of management may have many employees or levels of employees beneath it, but each employee has only one manager. Hierarchical data are characterized by this one-to-many relationship among data.

In the hierarchical approach, each relationship must be explicitly defined when the database is created. Each record in a hierarchical database can contain only one key field, and only one relationship is allowed between any two fields. This can create a problem because data do not always conform to such a strict hierarchy.

Relational Model

A major breakthrough in database research occurred in 1970 when E. F. Codd proposed a fundamentally different approach to database management called the relational model, which uses a table as its data structure. The relational database is the most widely used database structure. Data are organized into related tables. Each table is made up of rows, called records, and columns, called fields. Each record contains fields of data about some specific item. For example, in a table containing information on employees, a record would contain fields of data such as a person's last name, first name, and street address. Structured Query Language (SQL) is a query language for manipulating data in a relational database. It is nonprocedural, or declarative: the user need only supply an English-like description that specifies the operation and the record or combination of records to be retrieved.
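The relational table and declarative query just described can be illustrated with a short, hypothetical sketch, here using Python's built-in sqlite3 module; the table and column names are invented for the example:

```python
# Illustration of the relational model: rows are records, columns are
# fields, and the SQL query states WHAT to fetch, not HOW to fetch it.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (last_name TEXT, first_name TEXT, street TEXT)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Codd", "Edgar", "1 Relation Rd"), ("Smith", "Ann", "2 Table St")],
)

# Declarative query: the DBMS's optimizer chooses the access path.
rows = conn.execute(
    "SELECT first_name, last_name FROM employees ORDER BY last_name"
).fetchall()
print(rows)  # [('Edgar', 'Codd'), ('Ann', 'Smith')]
```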
A query optimizer translates the description into a procedure to perform the database manipulation.

Network Model

The network model creates relationships among data through a linked-list structure in which subordinate records can be linked to more than one parent record. This approach combines records with links, which are called pointers. The pointers are addresses that indicate the location of a record. With the network approach, a subordinate record can be linked to a key record and at the same time itself be a key record linked to other sets of subordinate records. The network model historically has had a performance advantage over other database models. Today, such performance characteristics are only important in high-volume, high-speed transaction processing such as automatic teller machine networks or airline reservation systems.

Both hierarchical and network databases are application specific. If a new application is developed, maintaining the consistency of databases in different applications can be very difficult. For example, suppose a new pension application is developed. The data are the same, but a new database must be created.

Object Model

The newest approach to database management uses an object model, in which records are represented by entities called objects that can both store data and provide methods or procedures to perform specific tasks. The query language used for the object model is the same object-oriented programming language used to develop the database application. This can create problems because there is no simple, uniform query language such as SQL. The object model is relatively new, and only a few examples of object-oriented databases exist. It has attracted attention because developers who choose an object-oriented programming language want a database based on an object-oriented model.

Distributed Database

Similarly, a distributed database is one in which different parts of the database reside on physically separated computers.
One goal of distributed databases is access to information without regard to where the data might be stored. Keep in mind that once the users and their data are separated, communication and networking concepts come into play. Distributed databases require software that resides partially in the larger computer. This software bridges the gap between personal and large computers and resolves the problems of incompatible data formats. Ideally, it would make the mainframe databases appear to be large libraries of information, with most of the processing accomplished on the personal computer.

A drawback to some distributed systems is that they are often based on what is called a mainframe-centric model, in which the larger host computer is seen as the master and the terminal or personal computer is seen as a slave. There are some advantages to this approach. With databases under centralized control, many of the problems of data integrity that we mentioned earlier are solved. But today's personal computers, departmental computers, and distributed processing require computers and their applications to communicate with each other on a more equal, or peer-to-peer, basis. In a database, the client/server model provides the framework for distributing databases.

One way to take advantage of many connected computers running database applications is to distribute the application into cooperating parts that are independent of one another. A client is an end user or computer program that requests resources across a network. A server is a computer running software that fulfills those requests across a network. When the resources are data in a database, the client/server model provides the framework for distributing databases.

A file server is software that provides access to files across a network. A dedicated file server is a single computer dedicated to being a file server.
This is useful, for example, if the files are large and require fast access. In such cases, a minicomputer or mainframe would be used as a file server. A distributed file server spreads the files around on individual computers instead of placing them on one dedicated computer. Advantages of the latter include the ability to store and retrieve files on other computers and the elimination of duplicate files on each computer. A major disadvantage, however, is that individual read/write requests are moved across the network, and problems can arise when updating files. Suppose a user requests a record from a file and changes it while another user requests the same record and changes it too. The solution to this problem is called record locking, which means that the first request makes other requests wait until the first request is satisfied. Other users may be able to read the record, but they will not be able to change it.

A database server is software that services requests to a database across a network. For example, suppose a user types in a query for data on his or her personal computer. If the application is designed with the client/server model in mind, the query-language part on the personal computer simply sends the query across the network to the database server and requests to be notified when the data are found. Examples of distributed database systems can be found in the engineering world. Sun's Network File System (NFS), for example, is used in computer-aided engineering applications to distribute data among the hard disks in a network of Sun workstations.

Distributing databases is an evolutionary step because it is logical that data should exist at the location where they are being used. Departmental computers within a large corporation, for example, should have data reside locally, yet those data should be accessible by authorized corporate management when they want to consolidate departmental data.
DBMS software will protect the security and integrity of the database, and the distributed database will appear to its users as no different from the non-distributed database.

In this information age, the data server has become the heart of a company. This one piece of software controls the rhythm of most organizations and is used to pump the information lifeblood through the arteries of the network. Because of the critical nature of this application, the data server is also one of the most popular targets for hackers. If a hacker owns this application, he can cause the company's "heart" to suffer a fatal arrest. Ironically, although most users are now aware of hackers, they still do not realize how susceptible their database servers are to hack attacks. Thus, this article presents a description of the primary methods of attacking database servers (also known as SQL servers) and shows you how to protect yourself from these attacks.

You should note that this information is not new. Many technical white papers go into great detail about how to perform SQL attacks, and numerous vulnerabilities have been posted to security lists that describe exactly how certain database applications can be exploited. This article was written for the curious non-SQL experts who do not care to know the details, and as a review for those who do use SQL regularly.

What Is a SQL Server?

A database application is a program that provides clients with access to data. There are many variations of this type of application, ranging from the expensive enterprise-level Microsoft SQL Server to the free and open-source MySQL. Regardless of the flavor, most database server applications have several things in common. First, database applications use the same general programming language known as SQL, or Structured Query Language. This language, also known as a fourth-generation language due to its simple syntax, is at the core of how a client communicates its requests to the server.
Using SQL in its simplest form, a programmer can select, add, update, and delete information in a database. However, SQL can also be used to create and design entire databases, perform various functions on the returned information, and even execute other programs. To illustrate how SQL can be used, the following are examples of a simple standard SQL query and a more powerful one:

Simple: "Select * from dbFurniture.tblChair"
This returns all information in the table tblChair from the database dbFurniture.

Complex: "EXEC master..xp_cmdshell 'dir c:\'"
This short SQL command returns to the client the list of files and folders under the c:\ directory of the SQL server. Note that this example uses an extended stored procedure that is exclusive to MS SQL Server.

The second function that database server applications share is that they all require some form of authenticated connection between client and host. Although the SQL language is fairly easy to use, at least in its basic form, any client that wants to perform queries must first provide some form of credentials that will authorize the client; the client also must define the format of the request and response. This connection is defined by several attributes, depending on the relative location of the client and what operating systems are in use. We could spend a whole article discussing various technologies such as DSN connections, DSN-less connections, RDO, ADO, and more, but these subjects are outside the scope of this article. If you want to learn more about them, a little Googling will provide you with more than enough information. However, the following is a list of the more common items included in a connection request:

Database source
Request type
Database
User ID
Password

Before any connection can be made, the client must define what type of database server it is connecting to.
This is handled by a software component that provides the client with the instructions needed to create the request in the correct format. In addition to the type of database, the request type can be used to further define how the client's request will be handled by the server. Next comes the database name, and finally the authentication information.

All the connection information is important, but by far the weakest link is the authentication information (or lack thereof). In a properly managed server, each database has its own users with specifically designated permissions that control what type of activity they can perform. For example, one user account would be set up as read-only for applications that need only access information. Another account should be used for inserts or updates, and maybe even a third account would be used for deletes. This type of account control ensures that any compromised account is limited in functionality. Unfortunately, many database programs are set up with null or easy passwords, which leads to successful hack attacks.
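The weak-authentication problem above is compounded when client code splices user input directly into SQL text. As a hedged illustration (using Python's built-in sqlite3 as a stand-in for any SQL server; the table and data are invented), bound parameters close off this classic attack path:

```python
# Contrast a string-built query, which lets attacker input become SQL,
# with a parameterized query, where the driver treats input as a literal.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "x' OR '1'='1"

# Unsafe: the input rewrites the WHERE clause, so every row matches.
unsafe_sql = f"SELECT secret FROM users WHERE name = '{malicious}'"
leaked = conn.execute(unsafe_sql).fetchall()
print(len(leaked))     # 1 row leaks despite the bogus user name

# Safe: the whole string is bound as a value; nothing matches.
safe_rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(len(safe_rows))  # 0 rows
```

Combined with the per-task accounts described above (read-only, insert/update, delete), parameter binding keeps a compromised input field from turning into a full data leak.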
Literature Retrieval, Covering Chinese and Foreign Databases

Chinese Database Retrieval

I. Standards Database
1. The search topic is the theory of translation and interpreting. "Translation" (翻译) is chosen as the main keyword and "interpreting" (口译) as the secondary keyword. Based on the search requirements, the Standards Database among the Chinese databases is selected.
2. On the library homepage, select Chinese databases and enter the Standards Database.
3. Choose the search terms: search the keyword field for "translation" and the full text for "interpreting", set the search scope to full text, and click "Search"; the results page appears.
4. After a refining search, the first and only result, "Specification for Translation Service, Part 2: Interpretation" (翻译服务规范 第2部分:口译), is the material needed.
5. Open it and download it for reading.
6. The search is complete.

II. VIP Journals Full-Text Database (including social sciences)
1. The same topic and keywords as above; based on the search requirements, the VIP journals full-text database among the Chinese databases is selected.
2. On the library homepage, select Chinese databases and enter the VIP journals full-text database.
3. Set the scope to all journals, the year range to 2000–2010, the latest-update range to all data, and the display to 20 summary entries, with "translation" and "interpreting" as keywords; the results page appears.
4. After a refining search, the sixth result, "On How to Do English Interpreting Well" (谈如何做好英语口译), is the material needed.
5. Open it and download it for reading.
6. The search is complete.

III. CNKI (China National Knowledge Infrastructure)
1. The same topic and keywords; based on the search requirements, CNKI among the Chinese databases is selected.
2. On the library homepage, select Chinese databases and enter CNKI.
3. Choose advanced search. Search field: title; term "translation" AND term "interpreting"; time range: 2000–2009; sort by relevance; fuzzy matching. Click "Search"; the results page appears.
4. After a refining search, the seventh result, "The Cultural Function of Interpreting and Its Translation Strategies" (口译的文化功能及翻译策略), is the material needed.
5. Open it and download it for reading.
6. The search is complete.

IV. Wanfang Data
1. The same topic and keywords; based on the search requirements, the Wanfang Data system among the Chinese databases is selected.
2. On the library homepage, select Chinese databases and enter the Wanfang Data system.
3. All fields: "translation interpreting"; resource browsing scope: browse by database category. Click "Search"; the results page appears.
4. After a refining search, the third result, "A Preliminary Study of Cultural Translation Strategies in Interpreting" (口译中的文化翻译策略初探), is the material needed.
5. Open it and download it for reading.
6. The search is complete.

Foreign Database Retrieval

I. SpringerLink
1. The search topic is the theory of translation and interpreting, with "translating" as the main keyword and "interpreting" as the secondary keyword. Based on the search requirements, SpringerLink among the foreign databases is selected.
2. From the university library homepage, select foreign databases and enter SpringerLink.
3. Enter the search term "ti:translating"; the results page appears.
4. A refining search shows that the fourth record, "Interpreting and considering the development of the goal orientation in the transformation of Chinese normal universities", is the one needed.
5. Download and read it.
6. The search is complete.

II. Kluwer
1. The same topic and keywords ("translating", "interpreting"); based on the search requirements, Kluwer among the foreign databases is selected.
2. From the university library homepage, select foreign databases, enter Kluwer, and choose the complex (advanced) query.
3. Enter the search condition "contains: translating and interpreting" and click search; no matching records are found. Re-enter the conditions: term "translating", publication dates January 2000 to December 2009, all file types, other settings unchanged; click search and the results page appears.
4. The results still do not contain what is needed, so the term "interpreting" is entered and the search is run again; the results page appears.
5. Inspection shows that record 36, "Re-interpreting some common objections to three transgenic applications: GM foods, xenotransplantation and germ line gene modification (GLGM)", is the one needed.
6. Download and read it.

III. EBSCO
1. The same topic and keywords ("translating", "interpreting"); based on the search requirements, EBSCO among the foreign databases is selected.
2. From the university library homepage, select foreign databases, enter EBSCO, and enter the search conditions: term "translating" in the TX All Text field, search mode "find all my search terms", limit results to full text, publication dates January 2000 to December 2009, other settings unchanged; click search.
3. Inspection shows that the sixth record, "Translator's adaptation in translating. (English)", is the one needed.
4. Download and read it.
5. The search is complete.
Foreign Literature Translation: Databases

Transact-SQL Cookbook

Chapter 1: Pivot Tables

1.1 Using a Pivot Table

1.1.1 Problem

Solving various problems often requires support for a sequence of elements. For example, given a date range, you might wish to produce one row for each date in the range. Or, you might wish to translate a series of values returned in separate rows into a series of values in separate columns of the same row. To implement this kind of functionality, you can use a permanent table that stores a series of sequential numbers. Such a table is called a pivot table. Many recipes in this book use a pivot table, and in all cases the table's name is Pivot. This recipe shows you how to create that table.

1.1.2 Solution

First, create the Pivot table. Next, create a table named Foo that will help you populate the pivot table:

CREATE TABLE Pivot (
    i INT,
    PRIMARY KEY (i)
)

CREATE TABLE Foo (
    i CHAR(1)
)

The Foo table is a simple support table into which you should insert the following 10 rows:

INSERT INTO Foo VALUES ('0')
INSERT INTO Foo VALUES ('1')
INSERT INTO Foo VALUES ('2')
INSERT INTO Foo VALUES ('3')
INSERT INTO Foo VALUES ('4')
INSERT INTO Foo VALUES ('5')
INSERT INTO Foo VALUES ('6')
INSERT INTO Foo VALUES ('7')
INSERT INTO Foo VALUES ('8')
INSERT INTO Foo VALUES ('9')

Using the 10 rows in the Foo table, you can easily populate the Pivot table with 1,000 rows. To get 1,000 rows from 10 rows, join Foo to itself three times to create a Cartesian product:

INSERT INTO Pivot
SELECT f1.i + f2.i + f3.i
FROM Foo f1, Foo f2, Foo f3

If you list the rows of the Pivot table, you will see that it has the required number of elements, numbered from 0 through 999.
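The same trick can be sketched outside T-SQL. Below is a hedged Python equivalent using the built-in sqlite3 module, with SQLite's || concatenation operator standing in for T-SQL's string-concatenating +; the three-way self-join of ten digit rows again yields the numbers 0 through 999:

```python
# Re-creation of the Pivot/Foo recipe in SQLite: a Cartesian product of
# three copies of the digits '0'..'9' builds the strings '000'..'999',
# which SQLite's INTEGER affinity stores as the numbers 0..999.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Pivot (i INT, PRIMARY KEY (i))")
conn.execute("CREATE TABLE Foo (i CHAR(1))")
conn.executemany("INSERT INTO Foo VALUES (?)", [(str(d),) for d in range(10)])

# Three-way self-join: 10 * 10 * 10 = 1,000 combinations.
conn.execute(
    "INSERT INTO Pivot SELECT f1.i || f2.i || f3.i FROM Foo f1, Foo f2, Foo f3"
)

count, lo, hi = conn.execute("SELECT COUNT(*), MIN(i), MAX(i) FROM Pivot").fetchone()
print(count, lo, hi)  # 1000 0 999
```

The PRIMARY KEY on Pivot.i doubles as a sanity check: the insert would fail if any of the 1,000 generated numbers collided.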
Traffic Safety Foreign Literature Translation (Chinese and English)

Foreign literature translation (including the English original and the Chinese translation).

English original:

POSSIBILITIES AND LIMITATIONS OF ACCIDENT ANALYSIS

S. Oppe

Abstract

Accident statistics, especially those collected at a national level, are particularly useful for the description, monitoring and prognosis of accident developments, the detection of positive and negative safety developments, the definition of safety targets and the (product) evaluation of long-term and large-scale safety measures. The application of accident analysis is strongly limited for problem analysis, prospective and retrospective safety analysis of newly developed traffic systems or safety measures, as well as for (process) evaluation of special short-term and small-scale safety measures. There is an urgent need for the analysis of accidents in real time, in combination with background behavioural research. Automatic incident detection, combined with video recording of accidents, may soon result in financially acceptable research. This type of research may eventually lead to a better understanding of the concept of risk in traffic and to well-established theories.

Keywords: consequences; purposes; description; limitations; concerns; accident analysis; possibilities

1. Introduction

This paper is primarily based on personal experience concerning traffic safety, safety research and the role of accident analysis in this research. These experiences resulted in rather philosophical opinions as well as more practical viewpoints on research methodology and statistical analysis. A number of these findings have already been published elsewhere. From this lack of direct observation of accidents, a number of methodological problems arise, leading to continuous discussions about the interpretation of findings that cannot be tested directly. For a fruitful discussion of these methodological problems it is very informative to look at a real accident on video. It then turns out that most of the relevant information used to explain the accident will be missing from the accident record.
In-depth studies also cannot recollect all the data that is necessary in order to test hypotheses about the occurrence of the accident. For a particular car-car accident, recorded on video at an urban intersection in the Netherlands, between a car coming from a minor road and a car on the major road, the following questions could be asked: Why did the driver of the car coming from the minor road suddenly accelerate after coming almost to a stop and hit the side of the car from the left at the main road? Why was the approaching car not noticed? Was it because the driver was preoccupied with the two cars coming from the right and the gap before them that offered him the possibility to cross? Did he look left before, but was his view possibly blocked by the green van parked at the corner? Certainly the traffic situation was not complicated. At the moment of the accident there were no bicyclists or pedestrians present to distract his attention at the regularly overcrowded intersection. The parked green van disappeared within five minutes; the two other cars that may have been important left without a trace. It is hardly possible to observe traffic behaviour under the most relevant condition of an accident occurring, because accidents are very rare events, given the large number of trips. Given the new video equipment and the recent developments in automatic incident and accident detection, it becomes more and more realistic to collect such data at not too high costs. In addition to this type of data, which is most essential for a good understanding of the risk-increasing factors in traffic, it is also important to look at normal traffic behaviour as a reference base. The question about the possibilities and limitations of accident analysis is not lightly answered. We cannot speak unambiguously about accident analysis.
Accident analysis covers a whole range of activities, each originating from a different background and based on different sources of information: national data banks, additional information from other sources, especially collected accident data, behavioural background data etc. To answer the question about the possibilities and limitations, we first have to look at the cycle of activities in the area of traffic safety. Some of these activities are mainly concerned with the safety management of the traffic system; some others are primarily research activities. The following steps should be distinguished:

- detection of new or remaining safety problems;
- description of the problem and its main characteristics;
- the analysis of the problem, its causes and suggestions for improvement;
- selection and implementation of safety measures;
- evaluation of measures taken.

Although this cycle can be carried out by the same person or group of persons, the problem has a different (political/managerial or scientific) background at each stage. We will describe the phases in which accident analysis is used. It is important to make this distinction. Many fruitless discussions about the method of analysis result from ignoring this distinction. Politicians or road managers are not primarily interested in individual accidents. From their perspective accidents are often treated equally, because the total outcome is much more important than the whole chain of events leading to each individual accident. Therefore, each accident counts as one and they add up all together to a final safety result. Researchers are much more interested in the chain of events leading to an individual accident. They want to get detailed information about each accident, to detect its causes and the relevant conditions. The politician wants only those details that direct his actions. At the highest level this is the decrease in the total number of accidents.
The main source of information is the national database and its statistical treatment. For him, accident analysis is looking at (subgroups of) accident numbers and their statistical fluctuations. This is the main stream of accident analysis as applied in the area of traffic safety. Therefore, we will first describe these aspects of accidents.

2. The nature of accidents and their statistical characteristics

The basic notion is that accidents, whatever their cause, appear according to a chance process. Two simple assumptions are usually made to describe this process for (traffic) accidents:

- the probability of an accident to occur is independent from the occurrence of previous accidents;
- the occurrence of accidents is homogeneous in time.

If these two assumptions hold, then accidents are Poisson distributed. The first assumption does not meet much criticism. Accidents are rare events and therefore not easily influenced by previous accidents. In some cases where there is a direct causal chain (e.g., when a number of cars run into each other) the series of accidents may be regarded as one complicated accident with many cars involved. The assumption does not apply to casualties. Casualties are often related to the same accident and therefore the independency assumption does not hold. The second assumption seems less obvious at first sight. The occurrences of accidents through time or on different locations are not equally likely. However, the assumption need not hold over long time periods. It is a rather theoretical assumption in its nature. If it holds for short periods of time, then it also holds for long periods, because the sum of Poisson distributed variables, even if their Poisson rates are different, is also Poisson distributed.
The Poisson rate for the sum of these periods is then equal to the sum of the Poisson rates for these parts. The assumption that really counts for a comparison of (composite) situations is whether two outcomes from an aggregation of situations in time and/or space have a comparable mix of basic situations. E.g., the comparison of the number of accidents on one particular day of the year, as compared to another day (the next day, or the same day of the next week etc.). If the conditions are assumed to be the same (same duration, same mix of traffic and situations, same weather conditions etc.), then the resulting numbers of accidents are the outcomes of the same Poisson process. This assumption can be tested by estimating the rate parameter on the basis of the two observed values (the estimate being the average of the two values). Probability theory can be used to compute the likelihood of the equality assumption, given the two observations and their mean. This statistical procedure is rather powerful. The Poisson assumption is investigated many times and turns out to be supported by a vast body of empirical evidence. It has been applied in numerous situations to find out whether differences in observed numbers of accidents suggest real differences in safety. The main purpose of this procedure is to detect differences in safety. This may be a difference over time, or between different places or between different conditions. Such differences may guide the process of improvement. Because the main concern is to reduce the number of accidents, such an analysis may lead to the most promising areas for treatment. A necessary condition for the application of such a test is that the numbers of accidents to be compared are large enough to show existing differences. In many local cases an application is not possible.
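The equality test sketched above can be made concrete. Conditional on the total of the two observed counts, the first count follows a Binomial(n, 1/2) distribution when both counts share one Poisson rate, which yields an exact two-sided p-value. A minimal sketch in Python (the example counts are illustrative, not taken from the text):

```python
from math import comb

def poisson_equality_pvalue(n1: int, n2: int) -> float:
    """Exact two-sided test that two counts share one Poisson rate.

    Conditional on the total n = n1 + n2, n1 is Binomial(n, 1/2) under
    the equal-rates hypothesis; the p-value sums the probabilities of
    all outcomes no more likely than the one observed.
    """
    n = n1 + n2
    p_obs = comb(n, n1) / 2**n
    return sum(comb(n, k) / 2**n for k in range(n + 1)
               if comb(n, k) / 2**n <= p_obs + 1e-12)

# 60 accidents one year vs. 90 the next: is the difference real?
p = poisson_equality_pvalue(60, 90)
print(round(p, 4))  # a small p-value suggests a genuine change in safety
```

When the two counts are equal, the observed outcome is the most likely one and the p-value is 1, as expected.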
Accident black-spot analysis is often hindered by this limitation, e.g., if such a test is applied to find out whether the number of accidents at a particular location is higher than average. The procedure described can also be used if the accidents are classified according to a number of characteristics to find promising safety targets. Not only with aggregation, but also with disaggregation the Poisson assumption holds, and the accident numbers can be tested against each other on the basis of the Poisson assumptions. Such a test is rather cumbersome, because for each particular case, i.e. for each different Poisson parameter, the probabilities for all possible outcomes must be computed to apply the test. In practice, this is not necessary when the numbers are large. Then the Poisson distribution can be approximated by a Normal distribution, with mean and variance equal to the Poisson parameter. Once the mean value and the variance of a Normal distribution are given, all tests can be rephrased in terms of the standard Normal distribution with zero mean and variance one. No computations are necessary any more, but test statistics can be drawn from tables.

3. The use of accident statistics for traffic safety policy

The testing procedure described has its merits for those types of analysis that are based on the assumptions mentioned. The best example of such an application is the monitoring of safety for a country or region over a year, using the total number of accidents (eventually of a particular type, such as fatal accidents), in order to compare this number with the outcome of the year before. If sequences of accidents are given over several years, then trends in the developments can be detected and accident numbers predicted for following years. Once such a trend is established, then the value for the next year or years can be predicted, together with its error bounds. Deviations from a given trend can also be tested afterwards, and new actions planned.
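For large counts, the Normal approximation described above turns the same comparison into a one-line z statistic. A sketch, again with illustrative numbers rather than data from the text:

```python
from math import erf, sqrt

def poisson_z_test(n1: int, n2: int) -> tuple[float, float]:
    """Normal-approximation test of equal Poisson rates for two counts.

    Under the null hypothesis, n1 - n2 has mean 0 and variance n1 + n2
    (each count estimates its own Poisson variance), giving a standard
    z statistic and a two-sided p-value from the Normal distribution.
    """
    z = (n1 - n2) / sqrt(n1 + n2)
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided tail
    return z, p

z, p = poisson_z_test(60, 90)
print(f"z = {z:.2f}, p = {p:.4f}")
```

For 60 vs. 90 accidents the approximation gives a p-value close to the exact conditional test, which is why tables of the standard Normal distribution suffice in practice.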
The most famous one is carried out by Smeed (1949). We will discuss this type of accident analysis in more detail later.

(1) The application of the Chi-square test for interaction is generalised to higher order classifications. Foldvary and Lane (1974), in measuring the effect of compulsory wearing of seat belts, were among the first who applied the partitioning of the total Chi-square in values for the higher order interactions of four-way tables.

(2) Tests are not restricted to overall effects, but Chi-square values can be decomposed regarding sub-hypotheses within the model. Also in the two-way table, the total Chi-square can be decomposed into interaction effects of part tables. The advantage of (1) and (2) over previous situations is that large numbers of Chi-square tests on many interrelated (sub)tables and corresponding Chi-squares were replaced by one analysis with an exact partitioning of one Chi-square.

(3) More attention is put to parameter estimation. E.g., the partitioning of the Chi-square made it possible to test for linear or quadratic restraints on the row-parameters or for discontinuities in trends.

(4) The unit of analysis is generalised from counts to weighted counts. This is especially advantageous for road safety analyses, where corrections for period of time, number of road users, number of locations or number of vehicle kilometres are often necessary. The last option is not found in many statistical packages.

Andersen (1977) gives an example for road safety analysis in a two-way table. A computer programme WPM, developed for this type of analysis of multi-way tables, is available at SWOV (see: De Leeuw and Oppe 1976). The accident analysis at this level is not explanatory. It tries to detect safety problems that need special attention. The basic information needed consists of accident numbers, to describe the total amount of unsafety, and exposure data to calculate risks and to find situations or (groups of) road users with a high level of risk.
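The Chi-square machinery referred to above is easy to reproduce for the simplest case, a two-way (2x2) table of accident counts. The sketch below computes the Pearson statistic by hand; the before/after by treated/control layout and the numbers are assumptions for illustration:

```python
def chi_square_2x2(table):
    """Pearson Chi-square for a 2x2 table of accident counts.

    Rows might be before/after a safety measure, columns treated vs.
    control sites; expected counts come from the row and column totals.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Accidents at treated vs. control sites, before and after a measure:
chi2 = chi_square_2x2([(120, 100), (80, 110)])
print(round(chi2, 3))  # compare with the 3.84 critical value (1 df, 5%)
```

A statistic above the 3.84 critical value would suggest a real interaction between the measure and the accident counts, which is exactly the kind of effect the partitioning methods generalise to higher-order tables.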
4. Accident analysis for research purposes

Traffic safety research is concerned with the occurrence of accidents and their consequences. Therefore, one might say that the object of research is the accident. The researcher's interest, however, is less focused at this final outcome itself, but much more at the process that results (or does not result) in accidents. Therefore, it is better to regard the critical event in traffic as his object of study. One of the major problems in the study of the traffic process that results in accidents is that the actual occurrence is hardly ever observed by the researcher. Investigating a traffic accident, he will try to reconstruct the event from indirect sources such as the information given by the road users involved, or by eye-witnesses, about the circumstances, the characteristics of the vehicles, the road and the drivers. As such this is not unique in science; there are more examples of an indirect study of the object of research. However, a second difficulty is that the object of research cannot be evoked. Systematic research by means of controlled experiments is only possible for aspects of the problem, not for the problem itself. The combination of indirect observation and lack of systematic control makes it very difficult for the investigator to detect which factors, under what circumstances, cause an accident. Although the researcher is primarily interested in the process leading to accidents, he has almost exclusively information about the consequences, the product of it, the accident. Furthermore, the context of accidents is complicated.
Generally speaking, the following aspects can be distinguished:

- Given the state of the traffic system (traffic volume and composition, the manoeuvres of the road users, their speeds, the weather conditions, the condition of the road, the vehicles, the road users and their interactions), accidents can or cannot be prevented.
- Given an accident, and depending on a large number of factors, such as the speed and mass of vehicles, the collision angle, the protection of road users and their vulnerability, the location of impact etc., injuries are more or less severe or the material damage is more or less substantial.

Although these aspects cannot be studied independently, from a theoretical point of view it has advantages to distinguish the number of situations in traffic that are potentially dangerous, from the probability of having an accident given such a potentially dangerous situation, and also from the resulting outcome, given a particular accident. This conceptual framework is the general basis for the formulation of risk regarding the decisions of individual road users as well as the decisions of controllers at higher levels. In the mathematical formulation of risk we need an explicit description of our probability space, consisting of the elementary events (the situations) that may result in accidents, the probability for each type of event to end up in an accident, and finally the particular outcome, the loss, given that type of accident. A different approach is to look at combinations of accident characteristics, to find critical factors. This type of analysis may be carried out at the total group of accidents or at subgroups. The accident itself may be the unit of research, but also a road, a road location, a road design (e.g. a roundabout) etc.

Chinese Translation

The Possibilities and Limitations of Traffic Accident Analysis

S. Oppe

Abstract: Traffic accident statistics, especially national-level data, are particularly useful for monitoring and predicting accident developments, for detecting positive or negative safety developments, and for defining safety targets and evaluating long term and large scale safety measures.
SQL Server Database Management: Translated Foreign Literature

This article presents a translation of a foreign-language document on SQL Server database management.

Abstract: The document introduces the basic principles and strategies of SQL Server database management. The author points out that important decisions should be based on independent analysis, avoiding excessive reliance on outside help. Content that cannot be verified should not be cited. The document also stresses the importance of favouring simple strategies and avoiding legal complexity.
Overview: The document describes the basic principles and strategies of SQL Server database management in detail, including:

1. Independent decision-making: decisions in database administration should be based on independent analysis. Rather than relying heavily on user instructions or outside help, the database administrator should draw on professional knowledge and experience.

2. Simple strategies: to avoid legal complexity and faulty decisions, simple strategies should be adopted. This means avoiding citation of unverifiable content and using only reliable, verifiable information.

3. Database management guidelines: the document proposes a number of guidelines for SQL Server administration, including planning and designing the database structure, effective data backup and recovery strategies, user permission management, and performance optimisation.
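Of the guidelines listed above, backup and recovery is the most mechanical to illustrate. The sketch below uses Python's sqlite3 online-backup API rather than SQL Server itself, an assumption made purely so the example is self-contained; the table name and file path are hypothetical:

```python
import os
import sqlite3
import tempfile

# A throwaway source database with one table of user records.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO users (name) VALUES (?)",
                [("alice",), ("bob",)])
src.commit()

# Online backup: copies every page of src into dst while src stays usable.
backup_path = os.path.join(tempfile.mkdtemp(), "backup.db")
dst = sqlite3.connect(backup_path)
with dst:
    src.backup(dst)

# Recovery check: the backup file holds the same rows as the source.
restored = dst.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(restored)  # 2
```

The same principle applies under SQL Server (BACKUP DATABASE / RESTORE DATABASE): a backup is only a valid recovery strategy if restoring it and verifying the data is part of the routine.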
Conclusion: By introducing the basic principles and strategies of SQL Server database management, the document emphasises the importance of independent decision-making and simple strategies. Database administrators should rely on their own knowledge and experience, avoid excessive dependence on outside help, and adopt simple strategies for managing databases. Following the database management guidelines is also an important means of ensuring database security and performance. The above is an overview and summary of "SQL Server Database Management: Translated Foreign Literature"; for further detail, please read the original document.
Civil Engineering: Foreign Literature and Translation

Undergraduate Graduation Design: Foreign Literature and Translation

Title: Designing Against Fire of Building
Source: Guodao database
Publication date: 2008.3.25
School: School of Civil Engineering
Major: Civil Engineering
Class: Civil Engineering (minor) 091
Name: xxxx

Foreign literature: Designing Against Fire of Building
xxx

ABSTRACT: This paper considers the design of buildings for fire safety. It is found that fire and the associated effects on buildings are significantly different to other forms of loading such as gravity live loads, wind and earthquakes and their respective effects on the building structure. Fire events are derived from the human activities within buildings or from the malfunction of mechanical and electrical equipment provided within buildings to achieve a serviceable environment. It is therefore possible to directly influence the rate of fire starts within buildings by changing human behaviour, improved maintenance and improved design of mechanical and electrical systems. Furthermore, should a fire develop, it is possible to directly influence the resulting fire severity by the incorporation of fire safety systems such as sprinklers and to provide measures within the building to enable safer egress from the building. The ability to influence the rate of fire starts and the resulting fire severity is unique to the consideration of fire within buildings, since other loads such as wind and earthquakes are directly a function of nature. The possible approaches for designing a building for fire safety are presented using an example of a multi-storey building constructed over a railway line. The design of both the transfer structure supporting the building over the railway and the levels above the transfer structure are considered in the context of current regulatory requirements.
The principles and assumptions associated with various approaches are discussed.

1 INTRODUCTION

Other papers presented in this series consider the design of buildings for gravity loads, wind and earthquakes. The design of buildings against such load effects is to a large extent covered by engineering based standards referenced by the building regulations. This is not the case, to nearly the same extent, in the case of fire. Rather, it is building regulations such as the Building Code of Australia (BCA) that directly specify most of the requirements for fire safety of buildings, with reference being made to Standards such as AS3600 or AS4100 for methods for determining the fire resistance of structural elements. The purpose of this paper is to consider the design of buildings for fire safety from an engineering perspective (as is currently done for other loads such as wind or earthquakes), whilst at the same time putting such approaches in the context of the current regulatory requirements. At the outset, it needs to be noted that designing a building for fire safety is far more than simply considering the building structure and whether it has sufficient structural adequacy. This is because fires can have a direct influence on occupants via smoke and heat and can grow in size and severity, unlike other effects imposed on the building. Notwithstanding these comments, the focus of this paper will be largely on design issues associated with the building structure. Two situations associated with a building are used for the purpose of discussion. The multi-storey office building shown in Figure 1 is supported by a transfer structure that spans over a set of railway tracks. It is assumed that a wide range of rail traffic utilises these tracks, including freight and diesel locomotives.
The first situation to be considered from a fire safety perspective is the transfer structure. This is termed Situation 1 and the key questions are: what level of fire resistance is required for this transfer structure and how can this be determined? This situation has been chosen since it clearly falls outside the normal regulatory scope of most building regulations. An engineering solution, rather than a prescriptive one, is required. The second fire situation (termed Situation 2) corresponds to a fire within the office levels of the building and is covered by building regulations. This situation is chosen because it will enable a discussion of engineering approaches and how these interface with the building regulations, since both engineering and prescriptive solutions are possible.

2 UNIQUENESS OF FIRE

2.1 Introduction

Wind and earthquakes can be considered to be "natural" phenomena over which designers have no control, except perhaps to choose the location of buildings more carefully on the basis of historical records and to design buildings to resist sufficiently high loads or accelerations for the particular location. Dead and live loads in buildings are the result of gravity. All of these loads are variable and it is possible (although generally unlikely) that the loads may exceed the resistance of the critical structural members, resulting in structural failure. The nature and influence of fires in buildings are quite different to those associated with other "loads" to which a building may be subjected. The essential differences are described in the following sections.

2.2 Origin of Fire

In most situations (ignoring bush fires), fire originates from human activities within the building or the malfunction of equipment placed within the building to provide a serviceable environment.
It follows therefore that it is possible to influence the rate of fire starts by influencing human behaviour, limiting and monitoring human behaviour and improving the design of equipment and its maintenance. This is not the case for the usual loads applied to a building.

2.3 Ability to Influence

Since wind and earthquake are directly functions of nature, it is not possible to influence such events to any extent. One has to anticipate them and design accordingly. It may be possible to influence the level of live load in a building by conducting audits and placing restrictions on contents. However, in the case of a fire start, there are many factors that can be brought to bear to influence the ultimate size of the fire and its effect within the building. It is known that occupants within a building will often detect a fire and deal with it before it reaches a significant size. It is estimated that less than one fire in five (Favre, 1996) results in a call to the fire brigade, and for fires reported to the fire brigade, the majority will be limited to the room of fire origin. In occupied spaces, olfactory cues (smell) provide powerful evidence of the presence of even a small fire. The addition of a functional smoke detection system will further improve the likelihood of detection and of action being taken by the occupants. Fire fighting equipment, such as extinguishers and hose reels, is generally provided within buildings for the use of occupants, and many organisations provide training for staff in respect of the use of such equipment. The growth of a fire can also be limited by automatic extinguishing systems such as sprinklers, which can be designed to have high levels of effectiveness. Fires can also be limited by the fire brigade depending on the size and location of the fire at the time of arrival.

2.4 Effects of Fire

The structural elements in the vicinity of the fire will experience the effects of heat.
The temperatures within the structural elements will increase with time of exposure to the fire, the rate of temperature rise being dictated by the thermal resistance of the structural element and the severity of the fire. The increase in temperatures within a member will result in both thermal expansion and, eventually, a reduction in the structural resistance of the member. Differential thermal expansion will lead to bowing of a member. Significant axial expansion will be accommodated in steel members by either overall or local buckling or yielding of localised regions. These effects will be detrimental for columns, but for beams forming part of a floor system they may assist in the development of other load resisting mechanisms (see Section 4.3.5). With the exception of the development of forces due to restraint of thermal expansion, fire does not impose loads on the structure but rather reduces stiffness and strength. Such effects are not instantaneous but are a function of time, and this is different to the effects of loads such as earthquake and wind that are more or less instantaneous. Heating effects associated with a fire will not be significant, or the rate of loss of capacity will be slowed, if:

(a) the fire is extinguished (e.g. by an effective sprinkler system);
(b) the fire is of insufficient severity (insufficient fuel); and/or
(c) the structural elements have sufficient thermal mass and/or insulation to slow the rise in internal temperature.

Fire protection measures such as providing sufficient axis distance and dimensions for concrete elements, and sufficient insulation thickness for steel elements, are examples of (c). These are illustrated in Figure 2. The two situations described in the introduction are now considered.

3 FIRE WITHIN BUILDINGS

3.1 Fire Safety Considerations

The implications of fire within the occupied parts of the office building (Figure 1) (Situation 2) are now considered.
Fire statistics for office buildings show that about one fatality is expected in an office building for every 1000 fires reported to the fire brigade. This is an order of magnitude less than the fatality rate associated with apartment buildings. More than two thirds of fires occur during occupied hours and this is due to the greater human activity and the greater use of services within the building. It is twice as likely that a fire that commences out of normal working hours will extend beyond the enclosure of fire origin. A relatively small fire can generate large quantities of smoke within the floor of fire origin. If the floor is of open-plan construction with few partitions, the presence of a fire during normal occupied hours is almost certain to be detected through the observation of smoke on the floor. The presence of full height partitions across the floor will slow the spread of smoke and possibly also the speed at which the occupants detect the fire. Any measures aimed at improving housekeeping, fire awareness and fire response will be beneficial in reducing the likelihood of major fires during occupied hours. For multi-storey buildings, smoke detection systems and alarms are often provided to give "automatic" detection and warning to the occupants. An alarm signal is also transmitted to the fire brigade. Should the fire not be able to be controlled by the occupants on the fire floor, they will need to leave the floor of fire origin via the stairs. Stair enclosures may be designed to be fire-resistant but this may not be sufficient to keep the smoke out of the stairs. Many buildings incorporate stair pressurisation systems whereby positive airflow is introduced into the stairs upon detection of smoke within the building. However, this increases the forces required to open the stair doors and makes it increasingly difficult to access the stairs.
It is quite likely that excessive door opening forces will exist (Fazio et al, 2006). From a fire perspective, it is common to consider that a building consists of enclosures formed by the presence of walls and floors. An enclosure that has sufficiently fire-resistant boundaries (i.e. walls and floors) is considered to constitute a fire compartment and to be capable of limiting the spread of fire to an adjacent compartment. However, the ability of such boundaries to restrict the spread of fire can be severely limited by the need to provide natural lighting (windows) and access openings between the adjacent compartments (doors and stairs). Fire spread via the external openings (windows) is a distinct possibility given a fully developed fire. Limiting the window sizes and geometry can reduce, but not eliminate, the possibility of vertical fire spread. By far the most effective measure in limiting fire spread, other than the presence of occupants, is an effective sprinkler system that delivers water to a growing fire, rapidly reducing the heat being generated and virtually extinguishing it.

3.2 Estimating Fire Severity

In the absence of measures to extinguish developing fires, or should such systems fail, severe fires can develop within buildings. In fire engineering literature, the term "fire load" refers to the quantity of combustibles within an enclosure and not the loads (forces) applied to the structure during a fire. Similarly, fire load density refers to the quantity of fuel per unit area. It is normally expressed in terms of MJ/m2 or kg/m2 of wood equivalent. Surveys of combustibles for various occupancies (i.e. offices, retail, hospitals, warehouses, etc.) have been undertaken and a good summary of the available data is given in FCRC (1999). As would be expected, the fire load density is highly variable.
Publications such as the International Fire Engineering Guidelines (2005) give fire load data in terms of the mean and 80th percentile. The latter level of fire load density is sometimes taken as the characteristic fire load density and is sometimes taken as being distributed according to a Gumbel distribution (Schleich et al, 1999). The rate at which heat is released within an enclosure is termed the heat release rate (HRR) and normally expressed in megawatts (MW). The application of sufficient heat to a combustible material results in the generation of gases, some of which are combustible. This process is called pyrolisation. Upon coming into contact with sufficient oxygen these gases ignite, generating heat. The rate of burning (and therefore of heat generation) is therefore dependent on the flow of air to the gases generated by the pyrolising fuel. This flow is influenced by the shape of the enclosure (aspect ratio), and the position and size of any potential openings. It is found from experiments with single openings in approximately cubic enclosures that the rate of burning is directly proportional to A√h, where A is the area of the opening and h is the opening height. It is known that for deep enclosures with single openings, burning will occur initially closest to the opening, moving back into the enclosure once the fuel closest to the opening is consumed (Thomas et al, 2005). Significant temperature variations throughout such enclosures can be expected. The use of the word 'opening' in relation to real building enclosures refers to any openings present around the walls, including doors that are left open and any windows containing non fire-resistant glass. It is presumed that such glass breaks in the event of development of a significant fire.
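The A√h relationship above is simple enough to compute directly. The sketch below evaluates the ventilation factor and a ventilation-controlled burning rate; the coefficient 5.5 kg/min per m^(5/2) is Kawagoe's classic empirical value for roughly cubic compartments with a single opening, which is an assumption brought in here and not a value given in the text:

```python
from math import sqrt

def ventilation_factor(opening_area_m2: float, opening_height_m: float) -> float:
    """A*sqrt(h): the opening term the burning rate is proportional to."""
    return opening_area_m2 * sqrt(opening_height_m)

def burning_rate_kg_per_min(opening_area_m2: float, opening_height_m: float,
                            k: float = 5.5) -> float:
    """Ventilation-controlled burning rate, R = k * A * sqrt(h).

    k = 5.5 kg/min per m^(5/2) is Kawagoe's coefficient (assumed here),
    with the fuel expressed as wood equivalent.
    """
    return k * ventilation_factor(opening_area_m2, opening_height_m)

# A single window 1.5 m wide and 2 m high (area 3 m^2, height 2 m):
R = burning_rate_kg_per_min(3.0, 2.0)
print(round(R, 1))  # kg of wood-equivalent fuel burned per minute
```

Because the rate scales with A√h, a taller opening of the same area feeds the fire faster, which is why both the size and the geometry of openings matter when estimating fire severity.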
If the windows could be prevented from breaking and other sources of air to the enclosure limited, then the fire would be prevented from becoming a severe fire. Various methods have been developed for determining the potential severity of a fire within an enclosure. These are described in SFPE (2004). The predictions of these methods are variable and are mostly based on estimating a representative heat release rate (HRR) and the proportion of the total fuel likely to be consumed during the primary burning stage (Figure 4). Further studies of enclosure fires are required to assist with the development of improved models, as the behaviour is very complex.

3.3 Role of the Building Structure

If the design objectives are to provide an adequate level of safety for the occupants and protection of adjacent properties from damage, then the structural adequacy of the building in fire need only be sufficient to allow the occupants to exit the building and for the building to ultimately deform in a way that does not lead to damage or fire spread to a building located on an adjacent site. These objectives are those associated with most building regulations, including the Building Code of Australia (BCA). There could be other objectives, including protection of the building against significant damage. In considering these various objectives, the following should be taken into account when considering the fire resistance of the building structure.

3.3.1 Non-Structural Consequences

Since fire can produce smoke and flame, it is important to ask whether these outcomes will threaten life safety within other parts of the building before the building is compromised by a loss of structural adequacy. Is search and rescue by the fire brigade not feasible given the likely extent of smoke? Will the loss of use of the building due to a severe fire result in major property and income loss?
If the answer to these questions is in the affirmative, then it may be necessary to minimise the occurrence of a significant fire rather than simply assuming that the building structure needs to be designed for high levels of fire resistance. A low-rise shopping centre with levels interconnected by large voids is an example of such a situation.

3.3.2 Other Fire Safety Systems

The presence of other systems (e.g. sprinklers) within the building to minimise the occurrence of a serious fire can greatly reduce the need for the structural elements to have high levels of fire resistance. In this regard, the uncertainties of all fire-safety systems need to be considered. Irrespective of whether the fire safety system is the sprinkler system, stair pressurisation, compartmentation or the system giving the structure a fire-resistance level (e.g. concrete cover), there is an uncertainty of performance. Uncertainty data is available for sprinkler systems (because it is relatively easy to collect) but is not readily available for the other fire safety systems. This sometimes results in designers and building regulators considering that only sprinkler systems are subject to uncertainty. In reality, it would appear that sprinkler systems have a high level of performance and can be designed to have very high levels of reliability.

3.3.3 Height of Building

It takes longer for a tall building to be evacuated than a short building, and therefore the structure of a tall building may need to have a higher level of fire resistance. The implications of collapse of tall buildings on adjacent properties are also greater than for buildings of only several storeys.

3.3.4 Limited Extent of Burning

If the likely extent of burning is small in comparison with the plan area of the building, then the fire cannot have a significant impact on the overall stability of the building structure.
Examples of situations where this is the case are open-deck carparks and very large area buildings such as shopping complexes, where the fire-affected part is likely to be small in relation to the area of the building floor plan.

3.3.5 Behaviour of Floor Elements

The effect of real fires on composite and concrete floors continues to be a subject of much research. Experimental testing at Cardington demonstrated that when parts of a composite floor are subject to heating, large-displacement behaviour can develop that greatly assists the load-carrying capacity of the floor beyond that which would be predicted by considering only the behaviour of the beams and slabs in isolation. These situations have been analysed by both yield line methods that take into account the effects of membrane forces (Bailey, 2004) and finite element techniques. In essence, the methods illustrate that it is not necessary to insulate all structural steel elements in a composite floor to achieve high levels of fire resistance. This work also demonstrated that exposure of a composite floor having unprotected steel beams to a localised fire will not result in failure of the floor. A similar real fire test on a multistorey reinforced concrete building demonstrated that the real structural behaviour in fire was significantly different from that expected using small-displacement theory as for normal-temperature design (Bailey, 2002), with the performance being superior to that predicted by considering isolated member behaviour.

3.4 Prescriptive Approach to Design

The building regulations of most countries provide prescriptive requirements for the design of buildings for fire. These requirements are generally not subject to interpretation, and compliance with them makes for simpler design approval, although not necessarily the most cost-effective designs. These provisions are often termed deemed-to-satisfy (DTS) provisions.
All aspects of designing buildings for fire safety are covered: the provision of emergency exits, spacings between buildings, occupant fire-fighting measures, detection and alarms, measures for automatic fire suppression, air and smoke handling requirements and, last but not least, requirements for compartmentation and fire resistance levels for structural members. However, there is little evidence that the requirements have been developed from a systematic evaluation of fire safety. Rather, it would appear that many of the requirements have been added one to another to deal with another fire incident or to incorporate a new form of technology. There does not appear to have been any real attempt to determine which provisions have the most significant influence on fire safety and whether some of the former provisions could be modified. The FRL requirements specified in the DTS provisions are traditionally considered to result in member resistances that will only rarely experience failure in the event of a fire. This is why it is acceptable to use the above arbitrary point-in-time load combination for assessing members in fire. There have been attempts to evaluate the various deemed-to-satisfy provisions (particularly the fire-resistance requirements) from a fire-engineering perspective, taking into account the possible variations in enclosure geometry, opening sizes and fire load (see FCRC, 1999). One of the outcomes of this evaluation was the recognition that deemed-to-satisfy provisions necessarily cover a broad range of buildings and thus must, on average, be quite onerous because of the magnitude of the above variations. It should be noted that the DTS provisions assume that compartmentation works and that fire is limited to a single compartment. This means that fire is normally only considered to exist at one level.
Thus floors are assumed to be heated from below and columns only over one storey height.

3.5 Performance-Based Design

An approach that offers substantial benefits for individual buildings is the move towards performance-based regulations. This is permitted by regulations such as the BCA, which state that a designer must demonstrate that the particular building will achieve the relevant performance requirements. The prescriptive provisions (i.e. the DTS provisions) are presumed to achieve these requirements. It is necessary to show that any building that does not conform to the DTS provisions will achieve the performance requirements. But what are the performance requirements? Most often the specified performance is simply a set of performance statements (such as with the Building Code of Australia) with no quantitative level given. Therefore, although these statements remind the designer of the key elements of design, they do not, in themselves, provide any measure against which to determine whether the design is adequately safe. Possible acceptance criteria are now considered.

3.5.1 Acceptance Criteria

Some guidance as to the basis for acceptable designs is given in regulations such as the BCA. These and other possible bases are now considered in principle.

(i) Compare the levels of safety (with respect to achieving each of the design objectives) of the proposed alternative solution with those associated with a corresponding DTS solution for the building. This comparison may be done on either a qualitative or quantitative risk basis, or perhaps a combination. In this case, the basis for comparison is an acceptable DTS solution. Such an approach requires a "holistic" approach to safety whereby all aspects relevant to safety, including the structure, are considered.
This is, by far, the most common basis for acceptance.

(ii) Undertake a probabilistic risk assessment and show that the risk associated with the proposed design is less than that associated with common societal activities such as using public transport. Undertaking a full probabilistic risk assessment can be very difficult for all but the simplest situations. Assuming that such an assessment is undertaken, it will be necessary for the stakeholders to accept the nominated level of acceptable risk. Again, this requires a "holistic" approach to fire safety.

(iii) Present a design where it is demonstrated that all reasonable measures have been adopted to manage the risks and that any possible measures that have not been adopted will have negligible effect on the risk of not achieving the design objectives.

(iv) As far as the building structure is concerned, benchmark the acceptable probability of failure in fire against that for normal-temperature design. This is similar to the approach used when considering Building Situation 1 but only considers the building structure and not the effects of flame or smoke spread. It is not a holistic approach to fire safety.

Finally, the questions of arson and terrorism must be considered. Deliberate acts of fire initiation range from relatively minor incidents to acts of mass destruction. Acts of arson are well within the accepted range of fire events experienced by buildings (e.g. 8% of fire starts in offices are deemed "suspicious"). The simplest act is to use a small heat source to start a fire. The resulting fire will develop slowly in one location within the building and will most probably be controlled by the various fire-safety systems within the building. The outcome is likely to be the same even if an accelerant is used to assist fire spread. An important illustration of this occurred during the race riots in Los Angeles in 1992 (Hart, 1992), when fires were started in many buildings, often at multiple locations.
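Criterion (iv) above, benchmarking the probability of structural failure in fire against the normal-temperature case, reduces to a simple product of probabilities if fire occurrence and structural response are treated as independent. The sketch below is purely illustrative: the event probabilities and the target value are assumed numbers, not figures from this paper or from any code.

```python
def annual_fire_failure_probability(p_severe_fire_per_year,
                                    p_failure_given_fire):
    """Annual probability of structural failure due to fire, assuming
    fire occurrence and structural response are independent events."""
    return p_severe_fire_per_year * p_failure_given_fire

# Illustrative (assumed) inputs only:
p_fire = 1e-3   # severe, fully developed fire per building per year
p_fail = 5e-3   # structure fails given that such a fire occurs
p_annual = annual_fire_failure_probability(p_fire, p_fail)

# Benchmark against a nominal normal-temperature design target;
# the 1e-5 per year figure is assumed for illustration.
target = 1e-5
acceptable = p_annual <= target
```

The point of the sketch is only that each barrier (ignition control, suppression, structural resistance) multiplies down the annual failure probability; agreeing on the target value is the stakeholder question raised in criterion (ii).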
In the case of buildings with sprinkler systems, the damage was limited and the fires significantly controlled. Although the intent was to destroy the buildings, the fire-safety systems were able to limit the resulting fires. Security measures are provided with systems such as sprinkler systems and include:

- locking of valves
- anti-tamper monitoring
- location of valves in secure locations

Furthermore, access to significant buildings is often restricted by security measures. The very fact that the above steps have been taken demonstrates that acts of destruction within buildings are considered, although most acts of arson do not involve any attempt to disable the fire-safety systems. At the one end of the spectrum is "simple" arson and at the other end, extremely rare acts where attempts are made to destroy the fire-safety systems along with substantial parts of the building. This can only be achieved through massive impact or the use of explosives. The latter may be achieved through explosives being introduced into the building or from outside by missile attack. The former could result from missile attack or from the collision of a large aircraft. The greater the destructiveness of the act, the greater the means and knowledge required. Conversely, the more extreme the act, the less confidence there can be in designing against such an act. This is because the more extreme the event, the harder it is to predict precisely and the less understood will be its effects. The important point to recognise is that if sufficient means can be assembled, then it will always be possible to overcome a particular building design. Thus these acts are completely different from the other loadings to which a building is subjected, such as wind, earthquake and gravity loading.
This is because such acts of destruction are the work of intelligent beings and take into account the characteristics of the target. Should high-rise buildings be designed for given terrorist activities, then terrorists will simply use greater means to achieve the end result. For example, if buildings were designed to resist the impact effects of a certain size of aircraft, then the use of a larger aircraft or more than one aircraft could still achieve destruction of the building. An appropriate strategy is therefore to minimise the likelihood of means of mass destruction getting into the hands of persons intent on such acts. This is not an engineering solution associated with the building structure. It should not be assumed that structural solutions are always the most appropriate, or indeed, possible. In the same way, aircraft are not designed to survive a major fire or a crash landing, but steps are taken to minimise the likelihood of either occurrence. The mobilisation of large quantities of fire load (the normal combustibles on the floors) simultaneously on numerous levels throughout a building is well outside the fire situations envisaged by current fire test standards and prescriptive regulations. Risk management measures to avoid such a possibility must be considered.

4 CONCLUSIONS

Fire differs significantly from other "loads" such as wind, live load and earthquakes in respect of its origin and its effects. Due to the fact that fire originates from human activities or equipment installed within buildings, it is possible to directly influence the potential effects on the building by reducing the rate of fire starts and providing measures to directly limit fire severity. The design of buildings for fire safety is mostly achieved by following the prescriptive requirements of building codes such as the BCA.
For situations that fall outside the scope of such regulations, or where proposed designs are not in accordance with the prescriptive requirements, it is possible to undertake performance-based fire engineering designs. However, there are no design codes, standards or detailed methodologies available for undertaking such designs. Building regulations require that such alternative designs satisfy performance requirements and give some guidance as to the basis for acceptance of these designs (i.e. acceptance criteria). This paper presents a number of possible acceptance criteria, all of which use the measure of risk level as the basis for comparison. Strictly, when considering the risks...
English original 2: DBA Survivor: Become a Rock Star DBA, by Thomas LaRock, published by Apress, 2010.

You know that a database is a collection of logically related data elements that may be structured in various ways to meet the multiple processing and retrieval needs of organizations and individuals. There's nothing new about databases: early ones were chiseled in stone, penned on scrolls, and written on index cards. But now databases are commonly recorded on magnetizable media, and computer programs are required to perform the necessary storage and retrieval operations. You'll see in the following pages that complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.) A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers.
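This ad-hoc query capability can be illustrated with any modern DBMS. Below is a minimal sketch using Python's built-in sqlite3 module; the table and data are invented for the example:

```python
import sqlite3

# In-memory database; the orders table and its rows are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 45.5)])

# An ad-hoc, unplanned question answered without writing a special program:
total_by_region = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(total_by_region)   # [('east', 165.5), ('west', 80.0)]
conn.close()
```

The query is expressed declaratively; the DBMS, not the user, decides how to locate and aggregate the stored items, which is exactly the "manage the stored data items" role described above.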
In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information [4]. The availability of a DBMS, however, offers users a much faster alternative communications path.

If the DBMS provides a way to interactively interrogate and update the database, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multiuser organization. These controls are only available when a set of application programs are customized for each data entry and updating function.

Software packages for personal computers that perform some of the DBMS functions have been very popular. Personal computers were intended for use by individuals for personal information storage and processing. These machines have also been used extensively by small enterprises and by professionals such as doctors, architects, engineers, lawyers and so on. By the nature of the intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that is important is data independence. Data independence, as stated earlier, means that application programs and user queries need not be aware of the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage. The user can store, access and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization.
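The multiuser controls mentioned above, where an update either completes as a unit or leaves no trace, are what DBMS transactions provide. A minimal sketch using Python's sqlite3 module; the account data and the overdraft rule are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('a', 100.0)")
conn.commit()

# A withdrawal that violates a business rule is rolled back as a unit,
# leaving the stored data unchanged.
try:
    conn.execute("UPDATE accounts SET balance = balance - 150.0 "
                 "WHERE name = 'a'")
    (balance,) = conn.execute(
        "SELECT balance FROM accounts WHERE name = 'a'").fetchone()
    if balance < 0:
        raise ValueError("overdraft not permitted")
    conn.commit()
except ValueError:
    conn.rollback()   # undo the uncommitted update

(balance,) = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'a'").fetchone()
print(balance)   # 100.0 -- the failed update was undone
conn.close()
```

A real multiuser installation adds audit logging and access control on top of this; the point here is only that the rollback restores the database to its state before the failed operation.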
We will not discuss details of specific PC DBMS software packages here. Let us summarize the strengths and weaknesses of personal computer database software systems.

The most obvious positive factor is the user-friendliness of the software. A user with no prior computer background would be able to use the system to store personal and professional data, retrieve it and perform related processing. The user should, of course, satisfy himself about the quality of the software and its freedom from errors (bugs) so that investments in data are protected.

For the programmer implementing applications with them, the advantage lies in the support for applications development in terms of input screen generation, output report generation etc. offered by these systems.

The main negative point concerns the absence of data protection features. Unless encrypted, data can be accessed by whoever has access to the machine. Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is that of performance. If data volumes grow to a few thousands of records, performance could be a bottleneck. For organizations where growth in data volumes is expected, availability of the same or compatible software on large machines should be considered.

This is one of the most common misconceptions about database management systems that are used in personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. That is not the same as users who create and manage personal files that are not part of the mainstream company system.

Transaction Management of Databases

The objective of long-duration transactions is to model long-duration, interactive database access sessions in application environments.
The fundamental assumption about the short duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. The implementation of the traditional model of transactions may cause intolerably long waits when transactions attempt to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.

The objective of a transaction model is to provide a rigorous basis for automatically enforcing a criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of serializability. Serializability is enforced in conventional database systems through the use of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A "transaction" that doesn't provide a basis for automatically enforcing database consistency is not really a transaction. To be sure, a long-duration transaction need not adopt serializability as its consistency criterion. However, there must be some consistency criterion.

Version Management of Databases

Despite a large number of proposals on version support in the context of computer-aided design and software engineering, the absence of a consensus on version semantics has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering. For databases, it may be necessary to manage not only versions of single objects (e.g. a software module or a document) but also versions of a collection of objects (e.g. a compound document, a user manual, etc.)
and perhaps even versions of the schema of the database (e.g. a table or a class, a collection of tables or classes).

Broadly, there are three directions of research and development in versioning. First is the notion of "parameterized versioning", that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters. This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning. The second is to revisit these plausible choices for every aspect of versioning, with the view to discarding some of them as either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database.

There is no consensus on the definition of the term "management information system". Some writers prefer alternative terminology such as "information processing system", "information and decision system", "organizational information system", or simply "information system" to refer to the computer-based information processing system which supports the operations, management, and decision-making functions of an organization. This text uses "MIS" because it is descriptive and generally understood; it also frequently uses "information system" instead of "MIS" to refer to an organizational information system.

A definition of a management information system, as the term is generally understood, is an integrated, user-machine system for providing information to support operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures; models for analysis, planning, control and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure; rather, it means that the parts fit into an overall design.
The elements of the definition are highlighted below.

Computer-based user-machine system. Conceptually, a management information system can exist without computers, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in a management information system, but the extent to which information use should be computerized. The concept of a user-machine system implies that some tasks are best performed by humans, while others are best done by machine. The user of an MIS is any person responsible for entering input data, instructing the system, or utilizing the information output of the system. For many problems, the user and the computer form a combined system with results obtained through a set of interactions between the computer and the user.

User-machine interaction is facilitated by operation in which the user's input-output device (usually a visual display terminal) is connected to the computer. The computer can be a personal computer serving only one user or a large computer that serves a number of users through terminals connected by communication lines. The user input-output device permits direct input of data and immediate output of results. For instance, a person using the computer interactively in financial planning poses "what if" questions by entering input at the terminal keyboard; the results are displayed on the screen in a few seconds.

The computer-based user-machine characteristics of an MIS affect the knowledge requirements of both system developer and system user. "Computer-based" means that the designer of a management information system must have a knowledge of computers and of their use in processing. The "user-machine" concept means the system designer should also understand the capabilities of humans as system components (as information processors) and the behavior of humans as users of information.

Information system applications should not require users to be computer experts.
However, users need to be able to specify their information requirements; some understanding of computers, the nature of information, and its use in various management functions aids users in this task.

Management information systems typically provide the basis for integration of organizational information processing. Individual applications within information systems are developed for and by diverse sets of users. If there are no integrating processes and mechanisms, the individual applications may be inconsistent and incompatible. Data items may be specified differently and may not be compatible across applications that use the same data. There may be redundant development of separate applications when actually a single application could serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.

The first step in integration of information system applications is an overall information system plan. Even though application systems are implemented one at a time, their design can be guided by the overall plan, which determines how they fit in with other functions. In essence, the information system is designed as a planned federation of small systems.

Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permits diverse applications to share data, meet audit and control requirements, and be shared by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that the equipment selected be compatible with the centralized database. The trend in information system design is toward separating application processing from the data used to support it.
The separate database is the mechanism by which data items are integrated across many applications and made consistently available to a variety of users. The need for a database in MIS is discussed below.

The terms "information" and "data" are frequently used interchangeably; however, information is generally defined as data that is meaningful or useful to the recipient. Data items are therefore the raw material for producing information.

The underlying concept of a database is that data needs to be managed in order to be available for processing and have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system.

When all access to and use of the database is controlled through a database management system, all applications utilizing a particular data item access the same data item, which is stored in only one place. A single updating of the data item updates it for all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.

It is usually insufficient for human recipients to receive only raw data or even summarized data. Data usually needs to be processed and presented in such a way that the result is directed toward the decision to be made. To do this, processing of data items is based on a decision model. For example, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.

Decision models can be used to support different stages in the decision-making process. "Intelligence" models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions.
Choice models such as optimization models may be used to find the most desirable solution. In other words, multiple approaches are needed to meet a variety of decision situations. In a comprehensive information system, the decision maker has available a set of general models that can be applied to many analysis and decision situations, plus a set of very specific models for unique decisions. Similar models are available for planning and control. The set of models is the model base for the MIS. Models are generally most effective when the manager can use interactive dialog to build a plan or to iterate through several decision choices under different conditions.

Chinese translation 2: DBA Survivor: Become a Rock Star DBA. As is well known, a database is a collection of logically related data elements; these data elements can be organized in different structures to meet the varied processing and retrieval needs of organizations and individuals.