Foreign Literature and Its Translation: E-Government Information
Translation of Foreign References (Chinese version)

High-Speed Railway Mobile Communication Systems Based on 4G LTE Technology
Prof. K. S. Solanki, Kratika Chouhan, Ujjain Engineering College, Ujjain, Madhya Pradesh, India

Abstract: As they develop, high-speed railways (HSR) require reliable, secure train operation and passenger communication. To achieve this, HSR systems need higher bandwidth and shorter response times, and the older HSR technologies must be developed further: new technologies introduced, existing architectures improved, and costs controlled. To meet this requirement, HSR adopted GSM-R, an evolution of GSM, but it cannot satisfy customer demand. The newer LTE-R technology was therefore adopted; it provides higher bandwidth and greater customer satisfaction at high speed. This paper introduces LTE-R, presents a comparison between GSM-R and LTE-R, and discusses which railway mobile communication system is better at high speed.
Keywords: high-speed railway, LTE, GSM, communication and signalling systems

I. Introduction
High-speed railways place growing demands on the mobile communication system. With this improvement, the network architecture and hardware must cope with train speeds of up to 500 km/h. HSR also requires fast handover. To address these issues, HSR needs a new technology called LTE-R; an HSR based on LTE-R provides high data rates, greater bandwidth, and low latency. LTE-R can handle growing traffic volumes, ensure passenger safety, and deliver real-time multimedia information. As train speeds keep rising, a reliable broadband communication system is essential for HSR mobile communication. Quality-of-service (QoS) measures for HSR applications include data rate, bit error rate (BER), and transmission delay. Meeting HSR operational requirements calls for a new system whose capabilities are consistent with LTE, which offers new services yet can still coexist with GSM-R for a long time. When an HSR system selects a suitable wireless communication system, performance, services, attributes, frequency bands, and industry support must all be considered. Compared with third-generation (3G) systems, the 4G LTE system has a simple flat architecture, high data rates, and low latency. Given LTE's performance and maturity, LTE-Railway (LTE-R) is likely to become the next-generation HSR communication system.

II. LTE-R System Description
Considering LTE-R's frequency and spectrum usage is very important for providing more efficient data transmission for high-speed railway (HSR) communication.
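As a rough illustration of the fast-handover requirement mentioned in the introduction, the short sketch below estimates how often a train crosses cell boundaries at 500 km/h for a few assumed cell radii; the radii are illustrative assumptions, not values from the paper.

```python
# Rough estimate of how often a train crosses cell boundaries at HSR speed.
# The cell radii are illustrative assumptions, not values from the paper.

def handover_interval_s(speed_kmh: float, cell_diameter_m: float) -> float:
    """Seconds spent crossing one cell along its diameter at the given speed."""
    speed_ms = speed_kmh * 1000.0 / 3600.0
    return cell_diameter_m / speed_ms

if __name__ == "__main__":
    for radius_m in (1000.0, 2000.0, 4000.0):   # assumed cell radii
        dwell = handover_interval_s(500.0, 2 * radius_m)
        print(f"radius {radius_m/1000:.0f} km -> roughly one handover every {dwell:.0f} s")
```

At 500 km/h a 2 km cell radius already implies a handover roughly every half minute, which is why handover latency is treated as a first-order requirement for LTE-R.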
Latest Format Example for Foreign Literature Translation

Format example for a foreign literature translation. Undergraduate graduation project (foreign literature translation): translated text and original of the foreign reference. School: School of Information Engineering. Major: Information Engineering (Electronic Information Engineering track). Class: Grade 2006, Class (4). Student ID: 3206003186. Student: Ke Siyi. Supervisor: Tian Nili. June 2010.

Contents
Getting to Know Microsoft SQL Server (Chinese translation)
1 Section A: Introduction
2 Section B: Database Scalability Revisited
3 Section C: Features for Database Development
Get Your Arms around Microsoft SQL Server (English original)
1 Section A: Introduction to SQL Server 2005
2 Section B: Database Scalability Revisited
3 Section C: Features for Database Development

Getting to Know Microsoft SQL Server
1 Section A: Introduction
SQL Server 2005 is the most anticipated product in Microsoft's SQL product line. After millions of emails, hundreds of specifications, and dozens of revisions, Microsoft positioned SQL Server 2005 as its latest database development platform for Windows-based database applications. This section points out some of the important features of the SQL Server 2005 product. SQL Server 2005 covers nearly everything in OLTP and OLAP technology; this flagship Microsoft database product covers almost everything. After more than five years in the making, the software is entirely different from any of its predecessors. This section surveys most of the product's functionality. Readers looking for particular features and technologies can pull out the most important and most interesting topics, including the history of the SQL Server engine's evolution, the various SQL Server 2005 editions, scalability, availability, maintenance of very large databases, and business intelligence, as follows:
● Database engine enhancements. SQL Server 2005 makes many improvements to the database engine and introduces new features.
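For readers who want to exercise the engine from application code, the following minimal sketch runs a small query against a SQL Server instance over ODBC. It assumes the third-party pyodbc package and an installed ODBC driver; the server name, database and credentials are placeholders, not details from the text.

```python
# Minimal sketch: querying a SQL Server database over ODBC.
# Assumes the pyodbc package and an ODBC driver are installed;
# server, database and credentials are placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # List the five most recently created user tables in the database.
    cursor.execute(
        "SELECT TOP 5 name, create_date FROM sys.tables ORDER BY create_date DESC"
    )
    for name, created in cursor.fetchall():
        print(name, created)
```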
Internet Web Services: Foreign Literature Translation (English original with Chinese translation)

An Internet-Based Logistics Management System for Enterprise Chains

Developing the internet-based application tool
Web services offer new opportunities in the business landscape, facilitating a global marketplace where businesses rapidly create innovative products and serve customers better. Whatever the business need is, Web services have the flexibility to meet the demand and allow outsourcing to be accelerated. In turn, the developer can focus on building core competencies to create customer and shareholder value. Application development is also more efficient because existing Web services, regardless of where they were developed, can easily be reused.
Many of the technology requirements for Web services exist today, such as open standards for business-to-business applications, mission-critical transaction platforms, and secure integration and messaging products. However, to enable robust and dynamic integration of applications, industry standards and tools that extend the capabilities of today's business-to-business interoperability are required. The key to taking full advantage of Web services is to understand what Web services are and how the market is likely to evolve. One needs to be able to invest in platforms and applications today that will enable the developer to quickly and effectively realize these benefits, as well as to meet specific needs and increase business productivity.
Typically, there are two basic technologies to be implemented when dealing with internet-based applications, namely server-based and client-based. Both technologies have their strong points regarding development of the code and the facilities they provide. Server-based applications involve the development of dynamically created web pages. These pages are transmitted to the web browser of the client and contain code in the form of HTML and JavaScript. The HTML part is the static part of the page that contains forms and controls for user needs, and the JavaScript part is the dynamic part of the page. Typically, the structure of the code can be completely changed through the intervention of web server mechanisms added on the transmission part and implemented by server-based languages such as ASP, JSP, PHP, etc. This leads to the development of an integrated dynamic-page application where the user's wishes regarding problem peculiarities (calculating shortest paths, executing routing algorithms, transacting with the database, etc.) are implemented by appropriately invoking different parts of the dynamic content of such pages. In server-based applications, all calculations are executed on the server. In client-based applications, Java applets prevail. Communication of the user is guaranteed by the well-known Java mechanism that acts as the medium between the user and the code. Everything is executed on the client side. Data in this case have to be retrieved once, and this might be the time-consuming part of the transaction.
In server-based applications, server resources are used for all calculations, and this requires powerful server facilities with respect to hardware and software. Client-based applications are burdened with data transmission (chiefly related to road network data). There is a remedy to that, namely caching: once loaded, the data are left in the cache archives of the web browser to be instantly recalled when needed.
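To make the server-based option concrete, here is a minimal sketch of a dynamic endpoint in which the calculation runs entirely on the server and only the result reaches the browser. It uses Python's standard http.server purely for illustration in place of ASP/JSP/PHP, and the /distance route and its parameters are assumptions, not part of the system described here.

```python
# Minimal sketch of the server-based style: the computation runs on the server,
# the browser only receives the result. Standard library only; the /distance
# route and its parameters are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/distance":
            self.send_error(404)
            return
        q = parse_qs(url.query)
        x1, y1, x2, y2 = (float(q[k][0]) for k in ("x1", "y1", "x2", "y2"))
        result = {"euclidean_km": ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```

A request such as http://localhost:8080/distance?x1=0&y1=0&x2=3&y2=4 would receive {"euclidean_km": 5.0}; a client-based design would instead ship the road data to the browser and compute there.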
In our case, a client-based application was developed. The main reason was the demand, from the users' point of view, for discretion regarding their clients' personal data. In fact, this information was kept secret in our system even from the server side involved.
Data management plays a major role in the good functioning of our system. This role becomes more substantial when the distribution takes place within a large and detailed road network such as that of a major complex city. More specifically, in order to produce the proposed routing plan, the system uses information about:
● the locations of the depot and the customers within the road network of the city (their coordinates attached to the map of the city),
● the demand of the customers serviced,
● the capacity of the vehicles used,
● the spatial characteristics of the road segments of the network examined,
● the topography of the road network,
● the speed of the vehicle, considering the spatial characteristics of the road and the area within which it moves,
● the synthesis of the company's fleet of vehicles.
Consequently, the system combines, in real time, the available spatial characteristics with all the other information mentioned above, and tools for modelling, spatial, non-spatial and statistical analysis, and image processing, forming a scalable, extensible and interoperable application environment. The validation and verification of customers' addresses ensure the accurate estimation of travel times and distances travelled. In the case of a bound on the total route duration, underestimates of travel time may lead to failure of the programmed routing plan, whereas overestimates can lower the utilization of drivers and vehicles and create unproductive wait times as well (Assad, 1991). The data corresponding to the area of interest involved two different levels of detail: a more detailed network, appropriate for geocoding (approximately 250,000 links), and a less detailed one for routing (about 10,000 links). The two networks overlapped exactly. The tool provides solutions to the problem of effectively determining the shortest path, expressed in terms of travel time or distance travelled, within a specific road network, using Dijkstra's algorithm (Winston, 1993). In particular, Dijkstra's algorithm is used in two cases during the process of developing the routing plan. In the first case, it calculates the travel times between all possible pairs of depot and customers so that the optimizer can generate the vehicle routes connecting them, and in the second case it determines the shortest path between two nodes (depot or customer) involved in the routing plan, as determined previously by the algorithm. Because U-turn and left-/right-turn restrictions were taken into consideration for network junctions, an arc-based variant of the algorithm was used (Jiang, Han, & Chen, 2002).
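A minimal sketch of the node-based form of Dijkstra's algorithm used for the depot-customer travel-time computations described above; the arc-based variant with U-turn and turn restrictions (Jiang, Han, & Chen, 2002) would track incoming-arc states instead of plain nodes and is omitted here. The toy network is an assumption.

```python
# Minimal node-based Dijkstra sketch for travel times between depot and customers.
# The arc-based variant with turn restrictions mentioned in the text is omitted.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbour, travel_time), ...]}; shortest times from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network (travel times in minutes) -- illustrative only.
network = {
    "depot": [("a", 4), ("b", 7)],
    "a": [("customer1", 6), ("b", 2)],
    "b": [("customer2", 3)],
    "customer1": [], "customer2": [],
}
print(dijkstra(network, "depot"))   # shortest travel times from the depot
```

Running this for every depot-customer pair yields the travel-time matrix that the route optimizer consumes.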
The system uses the optimization algorithms mentioned in the following part in order to automatically generate the set of vehicle routes (which vehicles should deliver to which customers and in which order), minimizing simultaneously the vehicle costs and the total distance travelled by the vehicles. This process involves activities that tend to be more strategic and less structured than operational procedures. The system helps planners and managers to view information in new ways and examine issues such as:
● the average cost per vehicle and per route,
● the vehicle and capacity utilization,
● the service level and cost,
● the modification of the existing routing scenario by adding or subtracting customers.
In order to support the above activities, the interface of the proposed system provides a variety of capabilities for analysing geographic and tabulated data. Moreover, the system can graphically represent each vehicle route separately, cutting it off from the final routing plan and offering the user the capability of perceiving the road network and the locations of the depot and customers in full detail.
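The management views listed above (average cost per vehicle and route, capacity utilization, service level) reduce to simple aggregations over the generated routes; the sketch below computes two of them from an assumed route structure, purely for illustration.

```python
# Minimal sketch of two of the management indicators named above:
# average distance per route and fleet capacity utilization.
# The route records are illustrative assumptions.
routes = [
    {"vehicle_capacity": 100, "delivered": 85, "distance_km": 42.0},
    {"vehicle_capacity": 100, "delivered": 60, "distance_km": 31.5},
]

avg_distance = sum(r["distance_km"] for r in routes) / len(routes)
utilization = sum(r["delivered"] for r in routes) / sum(r["vehicle_capacity"] for r in routes)

print(f"average distance per route: {avg_distance:.1f} km")
print(f"fleet capacity utilization: {utilization:.0%}")
```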
Big Data: Foreign Literature Translation and Reference Review

Original text: Data Mining and Data Publishing
Data mining is the extraction of interesting patterns or knowledge from huge amounts of data. The initial idea of privacy-preserving data mining (PPDM) was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. Privacy-preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task may be unknown at the time of data publishing. PPDP studies how to transform raw data into a version that is immunized against privacy attacks but that still supports effective data mining tasks. Privacy preservation for both data mining (PPDM) and data publishing (PPDP) has become increasingly popular because it allows sharing of privacy-sensitive data for analysis purposes. One well-studied approach is the k-anonymity model [1], which in turn led to other models such as confidence bounding, l-diversity, t-closeness, (α,k)-anonymity, etc. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. The aim of this paper is to present a survey of most of the common attack techniques for anonymization-based PPDM and PPDP and explain their effects on data privacy.
Although data mining is potentially useful, many data holders are reluctant to provide their data for data mining for fear of violating individual privacy. In recent years, studies have been made to ensure that the sensitive information of individuals cannot be identified easily.
Anonymity models and k-anonymization techniques have been the focus of intense research in the last few years. In order to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications, several extending models have been proposed, which are discussed as follows.
1. k-Anonymity
k-anonymity is one of the most classic models; it is a technique that prevents joining attacks by generalizing and/or suppressing portions of the released microdata so that no individual can be uniquely distinguished from a group of size k. In a k-anonymous table, a data set is k-anonymous (k ≥ 1) if each record in the data set is indistinguishable from at least (k − 1) other records within the same data set. The larger the value of k, the better the privacy is protected. k-anonymity can ensure that individuals cannot be uniquely identified by linking attacks.
2. Extending Models
k-anonymity does not provide sufficient protection against attribute disclosure. The notion of l-diversity attempts to solve this problem by requiring that each equivalence class has at least l well-represented values for each sensitive attribute. The technology of l-diversity has some advantages over k-anonymity, because a k-anonymous dataset permits strong attacks due to lack of diversity in the sensitive attributes. In this model, an equivalence class is said to have l-diversity if there are at least l well-represented values for the sensitive attribute. This matters because there are semantic relationships among the attribute values, and different values have very different levels of sensitivity.
In the (α,k)-anonymity model, after anonymization the frequency (as a fraction) of each sensitive value in any equivalence class is no more than α.
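To make the k-anonymity, l-diversity and (α,k) conditions above concrete, the sketch below groups a toy table by its quasi-identifiers and checks the first two properties; the records, the choice of quasi-identifiers and the thresholds are illustrative assumptions.

```python
# Minimal sketch: checking k-anonymity and l-diversity on a toy table.
# The records, quasi-identifiers and thresholds are illustrative assumptions.
from collections import defaultdict

records = [
    {"age": "20-30", "zip": "479**", "disease": "flu"},
    {"age": "20-30", "zip": "479**", "disease": "cancer"},
    {"age": "20-30", "zip": "479**", "disease": "flu"},
    {"age": "30-40", "zip": "480**", "disease": "hepatitis"},
    {"age": "30-40", "zip": "480**", "disease": "flu"},
]
quasi_identifiers = ("age", "zip")

# Equivalence classes: records sharing the same quasi-identifier values.
groups = defaultdict(list)
for r in records:
    groups[tuple(r[q] for q in quasi_identifiers)].append(r["disease"])

def is_k_anonymous(groups, k):
    # every equivalence class has at least k records
    return all(len(v) >= k for v in groups.values())

def is_l_diverse(groups, l):
    # every equivalence class has at least l distinct sensitive values
    return all(len(set(v)) >= l for v in groups.values())

print("2-anonymous:", is_k_anonymous(groups, 2))   # True
print("2-diverse:  ", is_l_diverse(groups, 2))     # True
```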
3. Related Research Areas
Several polls show that the public has an increased sense of privacy loss. Since data mining is often a key component of information systems, homeland security systems, and monitoring and surveillance systems, it gives the wrong impression that data mining is a technique for privacy intrusion. This lack of trust has become an obstacle to the benefits of the technology. For example, the potentially beneficial data mining research project Terrorism Information Awareness (TIA) was terminated by the US Congress due to its controversial procedures for collecting, sharing, and analyzing the trails left by individuals. Motivated by the privacy concerns around data mining tools, a research area called privacy-preserving data mining (PPDM) emerged in 2000. The initial idea of PPDM was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. The solutions were often tightly coupled with the data mining algorithms under consideration. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task is sometimes unknown at the time of data publishing. Furthermore, some PPDP solutions emphasize preserving data truthfulness at the record level, but PPDM solutions often do not preserve such a property. PPDP differs from PPDM in several major ways, as follows:
1) PPDP focuses on techniques for publishing data, not techniques for data mining. In fact, it is expected that standard data mining techniques will be applied to the published data. In contrast, the data holder in PPDM needs to randomize the data in such a way that data mining results can be recovered from the randomized data. To do so, the data holder must understand the data mining tasks and algorithms involved. This level of involvement is not expected of the data holder in PPDP, who usually is not an expert in data mining.
2) Both randomization and encryption do not preserve the truthfulness of values at the record level; therefore, the released data are basically meaningless to the recipients. In such a case, the data holder in PPDM may consider releasing the data mining results rather than the scrambled data.
3) PPDP primarily "anonymizes" the data by hiding the identity of record owners, whereas PPDM seeks to directly hide the sensitive data. Excellent surveys and books on randomization and cryptographic techniques for PPDM can be found in the existing literature.
A family of research work called privacy-preserving distributed data mining (PPDDM) aims at performing some data mining task on a set of private databases owned by different parties. It follows the principle of Secure Multiparty Computation (SMC) and prohibits any data sharing other than the final data mining result. Clifton et al. present a suite of SMC operations, like secure sum, secure set union, secure size of set intersection, and scalar product, that are useful for many data mining tasks. In contrast, PPDP does not perform the actual data mining task, but is concerned with how to publish the data so that the anonymized data are useful for data mining. We can say that PPDP protects privacy at the data level while PPDDM protects privacy at the process level. They address different privacy models and data mining scenarios.
In the field of statistical disclosure control (SDC), research focuses on privacy-preserving publishing methods for statistical tables. SDC focuses on three types of disclosure, namely identity disclosure, attribute disclosure, and inferential disclosure. Identity disclosure occurs if an adversary can identify a respondent from the published data. Revealing that an individual is a respondent of a data collection may or may not violate confidentiality requirements. Attribute disclosure occurs when confidential information about a respondent is revealed and can be attributed to the respondent. Attribute disclosure is the primary concern of most statistical agencies in deciding whether to publish tabular data. Inferential disclosure occurs when individual information can be inferred with high confidence from statistical information in the published data.
Some other works in SDC focus on the study of the non-interactive query model, in which the data recipients can submit one query to the system. This type of non-interactive query model may not fully address the information needs of data recipients because, in some cases, it is very difficult for a data recipient to accurately construct a query for a data mining task in one shot. Consequently, there is a series of studies on the interactive query model, in which the data recipients, including adversaries, can submit a sequence of queries based on previously received query results. The database server is responsible for keeping track of all queries of each user and determining whether or not the currently received query has violated the privacy requirement with respect to all previous queries. One limitation of any interactive privacy-preserving query system is that it can only answer a sublinear number of queries in total; otherwise, an adversary (or a group of corrupted data recipients) will be able to reconstruct all but a 1 − o(1) fraction of the original data, which is a very strong violation of privacy. When the maximum number of queries is reached, the query service must be closed to avoid privacy leaks. In the case of the non-interactive query model, the adversary can issue only one query and, therefore, the non-interactive query model cannot achieve the same degree of privacy as defined by the interactive model. One may consider privacy-preserving data publishing a special case of the non-interactive query model.
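The limitation of the interactive query model noted above, that only a bounded number of queries can be answered before the service must close, can be sketched as a simple per-service query budget; the budget value, the refusal policy and the aggregate being computed are assumptions for illustration only.

```python
# Minimal sketch of an interactive query service that stops answering once a
# fixed query budget is exhausted, as discussed above. The budget value and
# the aggregate being computed are illustrative assumptions.
class BudgetedQueryService:
    def __init__(self, data, max_queries):
        self.data = data
        self.remaining = max_queries

    def average(self, predicate):
        if self.remaining <= 0:
            raise RuntimeError("query budget exhausted; service closed to avoid privacy leak")
        self.remaining -= 1
        selected = [x for x in self.data if predicate(x)]
        return sum(selected) / len(selected) if selected else None

service = BudgetedQueryService(data=[30_000, 45_000, 52_000, 61_000], max_queries=3)
print(service.average(lambda salary: salary > 40_000))
```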
This paper presents a survey of most of the common attack techniques for anonymization-based PPDM and PPDP and explains their effects on data privacy. k-anonymity is used to protect respondents' identity and reduces linking attacks; in the case of a homogeneity attack, a simple k-anonymity model fails, and a concept that prevents this attack is needed — the solution is l-diversity. All tuples are arranged in well-represented form so that an adversary is diverted across l places, or l sensitive attribute values. l-diversity is limited in the case of a background-knowledge attack, because no one can predict the knowledge level of an adversary. It is observed that, when using generalization and suppression, these techniques are also applied to attributes that do not need this extent of privacy, and this reduces the precision of the published table. e-NSTAM (extended Sensitive Tuples Anonymity Method) is applied to sensitive tuples only and reduces information loss; this method also fails in the case of multiple sensitive tuples. Generalization with suppression is also a cause of data loss, because suppression emphasizes not releasing values that are not suited to the k factor. Future work on this front can include defining a new privacy measure along with l-diversity for multiple sensitive attributes, focusing on generalizing attributes without suppression by using other techniques that achieve k-anonymity, because suppression reduces the precision of the published table.
English Literature and Translation: A Reliability Information System Based on the Internet

English material translation: The Reliability of Internet-Based Information Systems

Abstract: This paper focuses on the reliability of information systems as wide area networks and client/server architectures develop. It analyses how the tasks of existing clients of the system are transformed into HTTP requests performed on the server, including analysis and advanced graphics. The article also explains the technical background of the World Wide Web and of client/server systems. With such a system, design engineers and reliability analysts can analyse the reliability of a system more quickly and easily.
Keywords: information system, WWW, client/server architecture, reliability
1. Introduction
Information systems have a wide range of practical applications and can help decision makers form decisive strategies. An information system is generally built on a model of the data flows of a particular organizational structure. In reliability engineering, researchers face difficulties in accessing and analysing data. During system development, data are accumulated from which analysts obtain reliability figures. Component data, such as the failure rate of each computer component, and application-specific data (for example, the importance of the application, the number of pins of a part, and so on) are very important for the designers of the system. Within organizations, the client/server architecture has become established as a good way of integrating computer data. Compared with the traditional centralized computing environment, a client/server environment lets users share data and applications, and processes are easier to handle [1]. The ability to balance the workload of the application plays an important role in a client/server system.
The Internet has developed as a means of interactive data display and distribution. It has been very successful in standardizing the interaction between clients and servers. Likewise, if no special resources are required to develop the client and server software or the network protocol, an Internet-based system can be created quickly.
In this paper, we explain an Internet-based, client/server implementation of a reliability information system. Chapter II provides an overview of client/server computing with the Internet. Chapter III describes the implementation details of the reliability information system, and Chapter IV summarizes the work and discusses further study.
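Since the system's purpose includes combining component failure rates into system-level reliability figures, the sketch below shows the standard series-system calculation under a constant failure-rate (exponential) assumption; the component rates are made up for illustration and are not ERIS data.

```python
# Minimal sketch of the standard series-system reliability calculation under a
# constant failure-rate (exponential) assumption. The component failure rates
# are made up for illustration; they are not data from the ERIS system.
import math

def component_reliability(failure_rate_per_hour: float, hours: float) -> float:
    """R(t) = exp(-lambda * t) for a constant failure rate lambda."""
    return math.exp(-failure_rate_per_hour * hours)

def series_system_reliability(failure_rates, hours):
    """A series system works only if every component works: multiply reliabilities."""
    r = 1.0
    for lam in failure_rates:
        r *= component_reliability(lam, hours)
    return r

rates = [2e-6, 5e-6, 1e-5]      # assumed failures per hour for three components
print(f"R(8760 h) = {series_system_reliability(rates, 8760):.4f}")
```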
2. The Internet and the client/server architecture
A client/server structure can be described as two cooperating processes running a number of tasks. It supports the integrity and scalability of information systems [2]. Lyu (1995) identified four advantages of the client/server structure: cost reduction, productivity improvement, a longer system life cycle, and better availability. Therefore, the client/server architecture is considered a viable structure for information systems.
With the development of the Internet, the simplest way to realize a client/server structure is to let a web browser carry out the client task of displaying and formatting the information obtained from the server. Many bibliographic retrieval systems are typical examples. When a web browser is used as a client to access an existing client/server platform, only one class of system code (HTML and help code) needs to be maintained.
But for other systems, in which the client software performs additional tasks on behalf of the server or the user, a coordination mechanism is needed for a web-browser-based client to run these jobs. A typical solution is to use Common Gateway Interface (CGI) programs. However, for various reasons this approach is not satisfactory. In a CGI-based system, all tasks usually handled by the client must be simulated by the CGI program, which increases the burden on the server. Another way of accessing client/server applications from a standard Internet browser was invented by Dossick and Kaiser [3]. They proposed an HTTP proxy that connects to the existing client/server network system. The HTTP proxy intercepts HTTP requests for data and translates them into the original set of requests, which it forwards to the source system.
It is also feasible to create browser-based clients for client/server systems using embedded, browser-specific tools similar to Netscape's APIs. However, web-based client software generated with such APIs is limited to a proprietary platform as well as a dedicated web browser. Such unnecessary restrictions offset many of the benefits of creating web-based clients.
3. The system
The Electronics and Telecommunications Research Institute (ETRI) has developed a reliability information system called ERIS. It can synthesize the failure rate of a computer system from its components and perform reliability calculations [4]. The ERIS clients consist of two platform-neutral components for different hardware platforms: workstations and personal computers. Users who are not familiar with the UNIX environment find it inconvenient to use.
The reliability software tools described by Birolini should also be noted. For software to be useful to users, among other requirements, a sufficiently large database is very important. In a stand-alone environment, each user may keep an independent data store, which wastes computing resources and time. Most existing tools are stand-alone, so sharing data between users is inconvenient. Based on the requirements identified in testing ERIS and the opinions collected, we set the following requirements:
- Friendly user interface: the man-machine interface is very important for handling large amounts of data effectively, and it also helps users understand the results of the analysis.
- Openness: the information service must be widely usable. Openness also means that end users of the reliability information can easily access other applications from the client.
- Data sharing: once data have been entered into the DBMS, the data should be shared by other users.
- User management: stored user information should allow the growing number of users to be handled effectively.
- Security: the design must properly protect the data from being exposed to the outside world. Only users with a correct user identification number (ID) and password may enter the database server.
Based on the above requirements, the functions of ERIS were developed as follows. The system is divided into two categories: user/database management and reliability analysis.
We developed ERIS by combining Internet connectivity with the original client/server structure.
A web browser can be used effectively by all users to display and format information, so web-browser-based management concepts are used for ERIS. ERIS allows administration through the user's web browser application. Home users can apply for an ERIS ID through the web. Once his or her ID is registered in the user database, the user can download the ERIS client program from any location.
The ERIS client is implemented like a Windows program. It lets the server hand the original application-specific functions over to the client. The clients have a better, easy-to-use and user-friendly interface for storing data, and they provide a good query facility for the design process in reliability studies. The server processes and client processes exchange data in line with the TCP/IP protocol standards. ERIS therefore has a composite structure combining the Internet with the client/server architecture. There are two server processes, CGI and COM. The CGI component resolves the HTTP requests issued from a web browser client and returns the corresponding results. The COM process manages data-link requests; it contains a temporary database, an error-filter component, and user information. Only authenticated users and verified information data can be registered.
The server operating system is UNIX running on workstations, and the clients are PCs. The Informix database management system is used to manage users and data. The server processes are written in the ESQL/C language, and the clients are developed with the MS Visual C++ and Delphi development tools.
4. Conclusion
ERIS is a system developed for wide use by design engineers and reliability analysts. By combining the Internet with the client/server concept, we quickly set up an environment for understanding design reliability within the scope of engine design. Through the use of the Internet, the time needed to distribute and install the tool has been greatly reduced. ERIS also provides services to other organizations via the Internet. The development of Internet technology will stimulate the change from traditional client/server systems to Internet-based systems.
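The client and server processes are said to exchange data over standard TCP/IP; the sketch below shows a minimal request/response exchange of that general kind using Python sockets. The port and the one-line message format are assumptions, not the actual ERIS protocol.

```python
# Minimal sketch of a TCP request/response exchange of the kind described
# between the ERIS client and server processes. The port and the one-line
# message format are assumptions, not the actual ERIS protocol.
import socket
import threading

srv = socket.create_server(("localhost", 5050))   # listening socket ready before the client connects

def serve_one():
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024).decode().strip()        # e.g. "RELIABILITY cpu_board"
        conn.sendall(f"OK {request}: R=0.9876\n".encode())

threading.Thread(target=serve_one, daemon=True).start()

with socket.create_connection(("localhost", 5050)) as cli:
    cli.sendall(b"RELIABILITY cpu_board\n")
    print(cli.recv(1024).decode().strip())

srv.close()
```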
Internet Finance Security: Foreign Literature Translation (English original with Chinese translation)

Database Security in a Web Environment
Introduction
Databases have been common in government departments and commercial enterprises for many years. Today, databases in any organization are increasingly opened up to a multiplicity of suppliers, customers, partners and employees – an idea that would have been unheard of a few years ago. Numerous applications and their associated data are now accessed by a variety of users requiring different levels of access via manifold devices and channels – often simultaneously. For example:
• Online banks allow customers to perform a variety of banking operations – via the Internet and over the telephone – whilst maintaining the privacy of account data.
• E-commerce merchants and their service providers must store customer, order and payment data on their merchant server – and keep it secure.
• HR departments allow employees to update their personal information – whilst protecting certain management information from unauthorized access.
• The medical profession must protect the confidentiality of patient data – whilst allowing essential access for treatment.
• Online brokerages need to be able to provide large numbers of simultaneous users with up-to-date and accurate financial information.
This complex landscape leads to many new demands upon system security. The global growth of complex web-based infrastructures is driving a need for security solutions that provide mechanisms to segregate environments; perform integrity checking and maintenance; enable strong authentication and non-repudiation; and provide for confidentiality. In turn, this necessitates comprehensive business and technical risk assessment to identify the threats, vulnerabilities and impacts, and from this define a security policy. This leads to security definitions throughout the infrastructure – operating system, database management system, middleware and network.
Financial, personal and medical information systems and some areas of government have strict requirements for security and privacy. Inappropriate disclosure of sensitive information to the wrong parties can have severe social, legal and regulatory consequences. Failure to address the basics can result in substantial direct and consequential financial losses – witness the fraud losses through the compromise of several million credit card numbers in merchants' databases [Occf], plus associated damage to brand image and loss of consumer confidence.
This article discusses some of the main issues in database and web server security, and also considers important architecture and design issues.
A Simple Model
At the simplest level, a web server system consists of front-end software and back-end databases with interface software linking the two. Normally, the front-end software will consist of server software and the network server operating system, and the back-end database will be a relational or object-oriented database fulfilling a variety of functions, including recording transactions and maintaining accounts and inventory. The interface software typically consists of Common Gateway Interface (CGI) scripts used to receive information from forms on web sites, to perform online searches and to update the database.
Depending on the infrastructure, middleware may be present; in addition, security management subsystems (with session and user databases) that address the web server's and related applications' requirements for authentication, access control and authorization may be present.
Communications between this subsystem and either the web server, middleware or database are via application program interfaces (APIs). This simple model is depicted in Figure 1.
Figure 1: A Simple Model.
Security can be provided by the following components:
• Web server.
• Middleware.
• Operating system.
• Database and database management system.
• Security management subsystem.
The security of such a system addresses aspects of authenticity, integrity and confidentiality and is dependent on the security of the individual components and their interactions. Some of the most common vulnerabilities arise from poor configuration, inadequate change control procedures and poor administration. However, even if these areas are properly addressed, vulnerabilities still arise. The appropriate combination of people, technology and processes holds the key to providing the required physical and logical security. Attention should additionally be paid to the security aspects of planning, architecture, design and implementation.
In the following sections, we consider some of the main security issues associated with databases, database management systems, operating systems and web servers, as well as important architecture and design issues. Our treatment seeks only to outline the main issues and the interested reader should refer to the references for a more detailed description.
Database Security
Database management systems normally run on top of an operating system and provide the security associated with a database. Typical operating system security features include memory and file protection, resource access control and user authentication. Memory protection prevents the memory of one program interfering with that of another and limits access to and use of objects, employing techniques such as memory segmentation. The operating system also protects access to other objects (such as instructions, input and output devices, files and passwords) by checking access with reference to access control lists. Security mechanisms in common operating systems vary tremendously and, for those that are lacking, there exists special-purpose security software that can be integrated with the existing environment. However, this can be an expensive, time-consuming task and integration difficulties may also adversely impact application behaviors.
Most database management systems consist of a number of modules – including database querying and database and file management – along with authorization, concurrent access and database description tables. These management systems also use a variety of languages: a data definition language supports the logical definition of the database; developers use a data manipulation language; and a query language is used by non-specialist end-users.
Database management systems have many of the same security requirements as operating systems, but there are significant differences since the former are particularly susceptible to the threat of improper disclosure, modification of information and also denial of service. Some of the most important security requirements for database management systems are:
• Multi-level access control.
• Confidentiality.
• Reliability.
• Integrity.
• Recovery.
These requirements, along with security models, are considered in the following sections.
Multi-Level Access Control
In a multi-application and multi-user environment, administrators, auditors, developers, managers and users – collectively called subjects – need access to database objects, such as tables, fields or records.
Access control restricts the operations available to a subject with respect to particular objects and is enforced by the database management system. Mandatory access controls require that each controlled object in the database must be labeled with a security level, whereas discretionary access controls may be applied at the choice of a subject.
Access control in database management systems is more complicated than in operating systems since, in the latter, all objects are unrelated whereas in a database the converse is true. Databases are also required to make access decisions based on a finer degree of subject and object granularity. In multi-level systems, access control can be enforced by the use of views – filtered subsets of the database – containing the precise information that a subject is authorized to see.
A general principle of access control is that a subject with high-level security should not be able to write to a lower-level object, and this poses a problem for database management systems that must read all database objects and write new objects. One solution to this problem is to use a trusted database management system.
Confidentiality
Some databases will inevitably contain what is considered confidential data. For example, it could be inherently sensitive or its source may be sensitive, or it may belong to a sensitive table, thus making it difficult to determine what is actually confidential. Disclosure is also difficult to define, as it can be direct, indirect, involve the disclosure of bounds or even mere existence.
An inference problem exists in database management systems whereby users can infer sensitive information from relatively insensitive queries. A trivial example is a request for the average salary of a group of employees where the number of employees turns out to be just one, thus revealing that employee's salary. However, much more sophisticated statistical inference attacks can also be mounted. This highlights the fact that, although the data itself may be properly controlled, confidential information may still leak out.
Controls can take several forms: not divulging sensitive information to unauthorized parties (which depends on the respective subject and object security levels), logging what each user knows, or masking response data. The first control can be implemented fairly easily, the second quickly becomes unmanageable for a large number of users, and the third leads to imprecise responses and also exemplifies the trade-off between precision and security. Polyinstantiation refers to multiple instances of a data object existing in the database, and it can provide a partial solution to the inference problem whereby different data values are supplied, depending on the security level, in response to the same query. However, this makes consistency management more difficult.
Another issue arises when the security level of an aggregate amount is different from that of its elements (a problem commonly referred to as aggregation). This can be addressed by defining appropriate access control using views.
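The salary-inference example above suggests one simple control: refuse aggregate queries whose underlying group is too small. The sketch below applies such a minimum-group-size rule; the threshold and the toy data are illustrative assumptions, and real systems combine this with the other controls described.

```python
# Minimal sketch of a query-size restriction against the inference example above:
# refuse aggregates computed over too few rows. Threshold and rows are assumptions.
MIN_GROUP_SIZE = 5

employees = [
    {"dept": "sales", "salary": 40_000},
    {"dept": "sales", "salary": 45_000},
    {"dept": "it",    "salary": 70_000},   # a department with a single employee
]

def average_salary(dept):
    rows = [e["salary"] for e in employees if e["dept"] == dept]
    if len(rows) < MIN_GROUP_SIZE:
        raise PermissionError(f"query refused: only {len(rows)} matching rows")
    return sum(rows) / len(rows)

try:
    print(average_salary("it"))
except PermissionError as err:
    print(err)    # the single IT salary is not revealed
```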
Reliability, Integrity and Recovery
Arguably, the most important requirements for databases are to ensure that the database presents consistent information to queries and can recover from any failures. An important aspect of consistency is that transactions execute atomically; that is, they either execute completely or not at all.
Concurrency control addresses the problem of allowing simultaneous programs access to a shared database, while avoiding incorrect behavior or interference. It is normally addressed by a scheduler that uses locking techniques to ensure that the transactions are serializable and independent. A common technique used in commercial products is two-phase locking (or variations thereof), in which the database management system controls when transactions obtain and release their locks according to whether or not transaction processing has been completed. In a first phase, the database management system collects the necessary data for the update; in a second phase, it updates the database. This means that the database can recover from incomplete transactions by repeating either of the appropriate phases. This technique can also be used in a distributed database system using a distributed scheduler arrangement.
System failures can arise from the operating system and may result in corrupted storage. The main copy of the database is used for recovery from failures and communicates with a cached version that is used as the working version. In association with the logs, this allows the database to recover to a very specific point in the event of a system failure, either by removing the effects of incomplete transactions or applying the effects of completed transactions. Instead of having to recover the entire database after a failure, recovery can be made more efficient by the use of checkpointing. It is used during normal operations to write additional updated information – such as logs, before-images of incomplete transactions, after-images of completed transactions – to the main database, which reduces the amount of work needed for recovery. Recovery from failures in distributed systems is more complicated, since a single logical action is executed at different physical sites and the prospect of partial failure arises.
Logical integrity, at field level and for the entire database, is addressed by the use of monitors to check important items such as input ranges, states and transitions. Error-correcting and error-detecting codes are also used.
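A minimal sketch of the locking discipline behind two-phase locking in its usual growing/shrinking formulation, which emphasizes lock acquisition and release rather than the two update steps described above; the lock granularity and API shape are assumptions.

```python
# Minimal sketch of (strict) two-phase locking: locks are only acquired while a
# transaction is active (growing phase) and are all released together at commit
# (shrinking phase). Lock granularity and API shape are illustrative assumptions.
import threading

class LockManager:
    def __init__(self):
        self._locks = {}                  # object id -> threading.Lock
        self._guard = threading.Lock()

    def lock_for(self, obj_id):
        with self._guard:
            return self._locks.setdefault(obj_id, threading.Lock())

class Transaction:
    def __init__(self, manager):
        self.manager = manager
        self.held = []

    def read_write(self, obj_id):
        lock = self.manager.lock_for(obj_id)
        lock.acquire()                    # growing phase: acquire, never release mid-transaction
        self.held.append(lock)

    def commit(self):
        for lock in reversed(self.held):  # shrinking phase: release everything at once
            lock.release()
        self.held.clear()

manager = LockManager()
t = Transaction(manager)
t.read_write("account:42")
t.read_write("account:43")
t.commit()
```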
Security Models
Various security models exist that address different aspects of security in operating systems and database management systems. For example, the Bell-LaPadula model defines security in terms of mandatory access control and addresses confidentiality only. The Bell-LaPadula model, and other models including the Biba model for integrity, are described more fully in [Cast95] and [Pfle89]. These models are implementation-independent and provide a powerful insight into the properties of secure systems, lead to design policies and principles, and some form the basis for security evaluation criteria.
Web Server Security
Web servers are now one of the most common interfaces between users and back-end databases, and as such, their security becomes increasingly important. Exploitation of vulnerabilities in the web server can lead to unforeseen attacks on middleware and back-end databases, bypassing any controls that may be in place. In this section, we focus on common web server vulnerabilities and how the authentication requirements of web servers and databases are met.
In general, a web server platform should not be shared with other applications and should be the only machine allowed to access the database. Using a firewall can provide additional security – either between the web server and users or between the web server and the back-end database – and often the web server is placed on a demilitarized zone (DMZ) of a firewall. While firewalls can be used to block certain incoming connections, they must allow HTTP (and HTTPS) connections through to the web server, and so attacks can still be launched via the ports associated with these connections.
Vulnerabilities
Vulnerabilities appear on a weekly basis and, here, we prefer to focus on some general issues rather than specific attacks. Common web server vulnerabilities include:
• No policy exists.
• The default configuration is on.
• Reusable passwords appear in the clear.
• Unnecessary ports available for network services are not disabled.
• New security holes are not tracked. Even if they are, well-known vulnerabilities are not always fixed, as the source code patches are not applied by the system administrator and old programs are not re-compiled or removed.
• Security tools are not used to scan the network for weaknesses and changes or to detect intrusions.
• Faulty and buggy software – for example, buffer overflow and stack smashing attacks.
• Automatic directory listings – this is of particular concern for the interface software directories.
• Server root files are generally visible or accessible.
• Lack of logs and backups.
• File access is often not explicitly configured by the system administrator according to the security policy. This applies to configuration, client, administration and log files, administration programs, and CGI program sources and executables. CGI scripts allow dynamic web pages and make program development (in, for example, Perl) easy and rapid. However, their successful exploitation may allow execution of malicious programs, launching of denial-of-service attacks and, ultimately, privilege escalation on a server.
Web Server and Database Authentication
While user, browser and web server authentication are relatively well understood [Garf97], [Ghos98] and [Tree98], the introduction of additional components, such as databases and middleware, raises a number of authentication issues. There are a variety of options for authentication in a simple model (Figure 1). Firstly, both the web server and the database management system can individually authenticate a user. This option requires the user to authenticate twice, which may be unacceptable in certain applications, although a single sign-on device (which aims to manage authentication in a user-transparent way) may help. Secondly, a common approach is for the database to automatically grant user access based on web server authentication. However, this option should only be used for accessing publicly available information. Finally, the database may grant user access employing the web server authentication credentials as a basis for its own user authentication, using security management subsystems (Figure 1). We consider this last option in more detail.
Web-based communications use the stateless HTTP protocol, with the implication that state, and hence authentication, is not preserved when browsing successive web pages. Cookies, or files placed on the user's machine by a web server, were developed as a means of addressing this issue and are often used to provide authentication. However, after initial authentication, there is typically no re-authentication per page in the same realm, only the use of unencrypted cookies (sometimes in association with IP addresses).
This approach provides limited security, as both cookies and IP addresses can be tampered with or spoofed.
A stronger authentication method, commonly used by commercial implementations, uses digitally signed cookies. This allows additional systems, such as databases, to use digitally signed cookie data, including a session ID, as a basis for authentication. When a user has been authenticated by a web server (using a password, for example), a session ID is assigned and is stored in a security management subsystem database. When a user subsequently requests information from a database, the database receives a copy of the session ID, the security management subsystem checks this session ID against its local copy and, if authentication is successful, user access is granted to the database.
The session ID is typically transmitted in the clear between the web server and database, but may be protected by SSL or even by physical security measures. The communications between the browser and web servers, and between the web servers and the security management subsystem (and its databases), are normally protected by SSL and use a web server security API that is used to digitally sign and verify browser cookies. The communications between the back-end databases and the security management subsystem (and its databases) are also normally protected by SSL and use a database security API that verifies session IDs originating from the database and provides additional user authorization credentials. The web server security API is generally proprietary while, for the database security API, many vendors have adopted standards such as the Generic Security Services API (GSS-API) or CORBA [RFC2078] and [Corba].
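The digitally signed cookie scheme described above can be sketched with an HMAC over the session ID: the issuing component signs the value when the cookie is set, and the verifying component recomputes and compares the tag before trusting it. The key handling and cookie layout here are illustrative assumptions, not a particular vendor's security API.

```python
# Minimal sketch of a signed-cookie check: the session ID is signed with an HMAC
# when the cookie is issued and verified before it is trusted. Key handling and
# cookie layout are illustrative assumptions.
import hmac
import hashlib
import secrets

SIGNING_KEY = secrets.token_bytes(32)   # in practice, a key shared by the verifying components

def issue_cookie(session_id: str) -> str:
    tag = hmac.new(SIGNING_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify_cookie(cookie: str) -> str | None:
    session_id, _, tag = cookie.rpartition(".")
    expected = hmac.new(SIGNING_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(tag, expected) else None

cookie = issue_cookie("session-8f2c")
print(verify_cookie(cookie))                         # session-8f2c
print(verify_cookie("evil-session." + "0" * 64))     # None (forged tag rejected)
```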
Architecture and Design
Security requirements for designing, building and implementing databases are important so that the systems, as part of the overall infrastructure, meet their requirements in actual operation. The various security models provide an important insight into the design requirements for databases and their management systems.
Secure Database Management System Architectures
In multi-level database management systems, a variety of architectures are possible: trusted subject, integrity locked, kernels and replicated. Trusted subject is used by most of the leading database management system vendors and can be integrated in existing products. Basically, the trusted subject architecture allows users to access a database via an untrusted front end, a trusted database management system and a trusted operating system. The operating system provides physical access to the database and the database management system provides multi-level object protection.
The other architectures – integrity locked, kernels and replicated – all vary in detail, but they use a trusted front end and an untrusted database management system. For details of these architectures and research prototypes, the reader is referred to [Cast95]. Different architectures are suited to different environments: for example, the trusted subject architecture is less integrated with the underlying operating system and is best suited when a trusted path can be assured between applications and the database management system.
Secure Database Management System Design
As discussed above, there are several fundamental differences between operating system and database management system design, including object granularity, multiple data types, data correlations and multi-level transactions. Other differences include the fact that database management systems include both physical and logical objects and that the database lifecycle is normally longer.
These differences must be reflected in the design requirements, which include:
• Access, flow and inference controls.
• Access granularity and modes.
• Dynamic authorization.
• Multi-level protection.
• Polyinstantiation.
• Auditing.
• Performance.
These requirements should be considered alongside basic information integrity principles, such as:
• Well-formed transactions – to ensure that transactions are correct and consistent.
• Continuity of operation – to ensure that data can be properly recovered, depending on the extent of a disaster.
• Authorization and role management – to ensure that distinct roles are defined and users are authorized.
• Authenticated users – to ensure that users are authenticated.
• Least privilege – to ensure that users have the minimal privilege necessary to perform their tasks.
• Separation of duties – to ensure that no single individual has access to critical data.
• Delegation of authority – to ensure that the database management system policies are flexible enough to meet the organization's requirements.
Of course, some of these requirements and principles are not met by the database management system itself, but by the operating system and also by organizational and procedural measures.
Database Design Methodology
Various approaches to design exist, but most contain the same main stages. The principal aim of a design methodology is to provide a robust, verifiable design process and also to separate policies from how policies are actually implemented. An important requirement during any design process is that different design aspects can be merged, and this equally applies to security.
A preliminary analysis should be conducted that addresses the system risks, environment, existing products and performance. Requirements should then be analyzed with respect to the results of a risk assessment. Security policies should be developed that include specification of granularity, privileges and authority.
These policies and requirements form the input to the conceptual design, which concentrates on subjects, objects and access modes without considering implementation details. Its purpose is to express information and process flows in a complete and consistent way.
The logical design takes into account the operating system and database management system that will be used and which of the security requirements can be provided by which mechanisms. The physical design considers the actual physical realization of the logical design and, indeed, may result in a revision of the conceptual and logical phases due to physical constraints.
Security Assurance
Once a product has been developed, its security assurance can be assessed by a number of methods including formal verification, validation, penetration testing and certification. For example, if a database is to be certified as TCSEC Class B1, then it must implement the Bell-LaPadula mandatory access control model, in which each controlled object in the database must be labeled with a security level.
Most of these methods can be costly and lengthy to perform and are typically specific to particular hardware and software configurations.
However, the international Common Criteria certification scheme provides the added benefit of a mutual recognition arrangement, thus avoiding the prospect of multiple certifications in different countries.
Conclusion
This article has considered some of the security principles that are associated with databases and how these apply in a web-based environment. It has also focused on important architecture and design principles. These principles have focused mainly on the prevention, assurance and recovery aspects, but other aspects, such as detection, are equally important in formulating a total information protection strategy. For example, host-based intrusion detection systems as well as a robust and tested set of business recovery procedures should be considered.
Any fit-for-purpose, secure e-business infrastructure should address all the above aspects: prevention, assurance, detection and recovery. Certain industries are now starting to specify their own set of global, secure e-business requirements. International card payment associations have recently started to require minimum information security standards from electronic commerce merchants handling credit card data, to help manage fraud losses and associated impacts such as brand-image damage and loss of consumer confidence.
Foreign Literature with Chinese Translation: Databases

English original 2: DBA Survivor: Become a Rock Star DBA, by Thomas LaRock, published by Apress, 2010.
You know that a database is a collection of logically related data elements that may be structured in various ways to meet the multiple processing and retrieval needs of organizations and individuals. There's nothing new about databases – early ones were chiseled in stone, penned on scrolls, and written on index cards. But now databases are commonly recorded on magnetizable media, and computer programs are required to perform the necessary storage and retrieval operations.
You'll see in the following pages that complex data relationships and linkages may be found in all but the simplest databases. The system software package that handles the difficult tasks associated with creating, accessing, and maintaining database records is called a database management system (DBMS). The programs in a DBMS package establish an interface between the database itself and the users of the database. (These users may be applications programmers, managers and others with information needs, and various OS programs.)
A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer who, when time permits, will write one or more programs to extract the data and prepare the information [4]. The availability of a DBMS, however, offers users a much faster alternative communications path.
If the DBMS provides a way to interactively interrogate and update the database, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multiuser organization. These controls are only available when a set of application programs is customized for each data entry and updating function.
Software packages for personal computers that perform some of the DBMS functions have been very popular. Personal computers were intended for use by individuals for personal information storage and processing. These machines have also been used extensively by small enterprises and professionals such as doctors, architects, engineers, lawyers and so on. By the nature of the intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less of a concern. In fact, the only aspect of a database system that remains important is data independence. Data independence, as stated earlier, means that application programs and user queries need not be aware of the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage.
The user can store, access and manipulate data a( a high level (close to (he application) and be totally shielded from the10low level (close to the machine) details of data organization. We will not discuss details of specific PC DBMS software packages here. Let us summarize in the following the strengths and weaknesses of personal computer data-base software systems:The most obvious positive factor is the user friendliness of the software. A user with no prior computer background would be able to use the system to store personal and professional data, retrieve and perform relayed processing. The user should, of course, satiety himself about the quality of software and the freedom from errors (bugs) so that invest-merits in data arc protected.For the programmer implementing applications with them, the advantage lies in the support for applications development in terms of input screen generations, output report generation etc. offered by theses stems.The main negative point concerns absence of data protection features. Unless encrypted, data cane accessed by whoever has access to the machine Data can be destroyed through mistakes or malicious intent. The second weakness of many of the PC-based systems is that of performance. If data volumes grow up to a few thousands of records, performance could be a bottleneck.For organization where growth in data volumes is expected, availability of. the same or compatible software on large machines should be considered.This is one of the most common misconceptions about database management systems that are used in personal computers. Thoroughly comprehensive and sophisticated business systems can be developed in dBASE, Paradox and other DBMSs. However, they are created by experienced programmers using the DBMS's own programming language. Thai is not the same as users who create and manage personal10files that are not part of the mainstream company system.Transaction Management of DatabaseThe objective of long-duration transactions is to model long-duration, interactive Database access sessions in application environments. The fundamental assumption about short-duration of transactions that underlies the traditional model of transactions is inappropriate for long-duration transactions. The implementation of the traditional model of transactions may cause intolerably long waits when transactions aleph to acquire locks before accessing data, and may also cause a large amount of work to be lost when transactions are backed out in response to user-initiated aborts or system failure situations.The objective of a transaction model is to pro-vide a rigorous basis for automatically enforcing criterion for database consistency for a set of multiple concurrent read and write accesses to the database in the presence of potential system failure situations. The consistency criterion adopted for traditional transactions is the notion of scrializability. Scrializa-bility is enforced in conventional database systems through theuse of locking for automatic concurrency control, and logging for automatic recovery from system failure situations. A “transaction’’ that doesn't provide a basis for automatically enforcing data-base consistency is not really a transaction. To be sure, a long-duration transaction need not adopt seri-alizability as its consistency criterion. 
However, there must be some consistency criterion.Version System Management of DatabaseDespite a large number of proposals on version support in the context of computer aided design and software engineering, the absence of a consensus on version semantics10has been a key impediment to version support in database systems. Because of the differences between files and databases, it is intuitively clear that the model of versions in database systems cannot be as simple as that adopted in file systems to support software engineering.For data-bases, it may be necessary to manage not only versions of single objects (e.g. a software module, document, but also versions of a collection of objects (e.g. a compound document, a user manual, etc. and perhaps even versions of the schema of database (c.g. a table or a class, a collection of tables or classes).Broadly, there arc three directions of research and development in versioning. First is the notion of a parameterized versioning", that is, designing and implementing a versioning system whose behavior may be tailored by adjusting system parameters This may be the only viable approach, in view of the fact that there are various plausible choices for virtually every single aspect of versioning.The second is to revisit these plausible choices for every aspect of versioning, with the view to discardingsome of themes either impractical or flawed. The third is the investigation into the semantics and implementation of versioning collections of objects and of versioning the database.There is no consensus of the definition of the te rm “management information system”. Some writers prefer alternative terminology such as “information processing system”, "information and decision syste m, “organizational information syste m”, or simply “i nformat ion system” to refer to the computer-based information processing system which supports the operations, management, and decision-making functions of an organization. This text uses “MIS” because i t is descriptive and generally understood; it also frequently uses "information system”instead of ''MIS” t o refer to an organizational information system.10A definition of a management information system, as the term is generally understood, is an integrated, user-machine system for providing information 丨o support operations, management, and decision-making functions in an organization. The system utilizes computer hardware and software; manual procedures: models for analysis planning, control and decision making; and a database. The fact that it is an integrated system does not mean that it is a single, monolithic structure: rather, ii means that the parts fit into an overall design. The elements of the definition arc highlighted below: Computer-based user-machine system.Conceptually, a management information can exist without computer, but it is the power of the computer which makes MIS feasible. The question is not whether computers should be used in management information system, but the extent to whichinformation use should be computerized. The concept of a user-machine system implies that some (asks are best performed humans, while others are best done by machine. The user of an MIS is any person responsible for entering input da(a, instructing the system, or utilizing the information output of the system. 
For many problems, the user and the computer form a combined system with results obtained through a set of interactions between the computer and the user.User-machine interaction is facilitated by operation in which the user's input-output device (usually a visual display terminal) is connected lo the computer. The computer can be a personal computer serving only one user or a large computer that serves a number of users through terminals connected by communication lines. The user input-output device permits direct input of data and immediate output of results. For instance, a person using The computer interactively in financial planning poses 4t what10if* questions by entering input at the terminal keyboard; the results are displayed on the screen in a few second.The computer-based user-machine characteristics of an MIS affect the knowledge requirements of both system developer and system user, “computer-based” means that the designer of a management information system must have a knowledge of computers and of their use in processing. The “user-machine” concept means the system designer should also understand the capabilities of humans as system components (as information processors) and the behavior of humans as users of information.Information system applications should not require users Co be computer experts. However, users need to be able lo specify(heir information requirements; some understanding of computers, the nature of information, and its use in various management function aids users in this task.Management information system typically provide the basis for integration of organizational information processing. Individual applications within information systems arc developed for and by diverse sets of users. If there are no integrating processes and mechanisms, the individual applications may be inconsistent and incompatible. Data item may be specified differently and may not be compatible across applications that use the same data. There may be redundant development of separate applications when actually a single application could serve more than one need. A user wanting to perform analysis using data from two different applications may find the task very difficult and sometimes impossible.The first step in integration of information system applications is an overall information system plan. Even though application systems are implemented one at a10time, their design can be guided by the overall plan, which determines how they fit in with other functions. In essence, the information system is designed as a planed federation of small systems.Information system integration is also achieved through standards, guidelines, and procedures set by the MIS function. The enforcement of such standards and procedures permit diverse applications to share data, meet audit and control requirements, and be shares by multiple users. For instance, an application may be developed to run on a particular small computer. Standards for integration may dictate that theequipment selected be compatible with the centralized database. The trend in information system design is toward separate application processing form the data used to support it. The separate database is the mechanism by which data items are integrated across many applications and made consistently available to a variety of users. 
The need for a database in MIS is discussed below.The term “information” and “data” are frequently used interchangeably; However, information is generally defined as data that is meaningful or useful to The recipient. Data items are therefore the raw material for producing information.The underlying concept of a database is that data needs to be managed in order to be available for processing and have appropriate quality. This data management includes both software and organization. The software to create and manage a database is a database management system.When all access to any use of database is controlled through a database management system, all applications utilizing a particular data item access the same data item which is stored in only one place. A single updating of the data item updates it for10all uses. Integration through a database management system requires a central authority for the database. The data can be stored in one central computer or dispersed among several computers; the overriding requirement is that there be an organizational function to exercise control.It is usually insufficient for human recipients to receive only raw data or even summarized data. Data usually needs to be processed and presented in such a way that Che result is directed toward the decision to be made. To do this, processing of dataitems is based on a decision model.For example, an investment decision relative to new capital expenditures might be processed in terms of a capital expenditure decision model.Decision models can be used to support different stages in the decision-making process. “Intelligence’’ models can be used to search for problems and/or opportunities. Models can be used to identify and analyze possible solutions. Choice models such as optimization models maybe used to find the most desirable solution.In other words, multiple approaches are needed to meet a variety of decision situations. The following are examples and the type of model that might be included in an MIS to aid in analysis in support of decision-making; in a comprehensive information system, the decision maker has available a set of general models that can be applied to many analysis and decision situations plus a set of very specific models for unique decisions. Similar models are available tor planning and control. The set of models is the model base for the MIS.Models are generally most effective when the manager can use interactive dialog (o build a plan or to iterate through several decision choices under different conditions.10中文译文2:《数据库幸存者:成为一个摇滚名明星》众所周知,数据库是逻辑上相关的数据元的汇集.这些数据元可以按不同的结构组织起来,以满足单位和个人的多种处理和检索的需要。
Electronics literature: Chinese-English translation

外文翻译原文:Progress in ComputersThe first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC was first used in the summer of 1949.These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.As far as computing circuits were concerned, we found ourselves with an embarass de richess. For example, we could use vacuum tube diodes for gates as we did in the EDSAC or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term families of logic came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering along with the rapidly developing subject of electronics.dubbed in the IEE light current electrical engineering.properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers’ ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice Consolidation in the 1960sBy the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the worldhad increased and they were much more reliable than the very early ones . To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip flops. This led to a range of chips known as the 7400 series. The gates and flip flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.These chips made a new kind of computer possible. It was called a minicomputer. It was something less that a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.Before long minicomputers began to spread and become more powerful. 
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry ‘going the other way’. As time goes on people get more for their money, not less.Research in Computer Hardware.The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discreet transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputerwith fancy capability logic.The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.The RISC Movement and Its AftermathEarly computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upo n the designer’s intuition.In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instructions Set Computer.Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computersThe RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately born out in practice.Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of, computers available to do their simulations, not just one. 
They refer to such a roomful by the attractive nameof computer farm.The x86 Instruction SetLittle is now heard of pre-RISC instruction sets with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.This dominance of x86 disappoints people like myself who come from the research wings.both academic and industrial.of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research oriented people would like to think otherwise. high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel’s major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious, on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson’s books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.The IA-64 instruction set.Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.Chips for the latest IA-64 processor, namely, the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost. 
Such at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]The Relentless Drive towards Smaller TransistorsThe scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.Falling unit cost was important to the industry because, if the latest chipsare cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.The degree of integration is measured by the feature size, which, for a given technology, is best defined as the half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building upSuspension of LawIn March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy’s law.or Sod’s law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949. 
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem;A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time.The single-chip computerAt each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.Eventually, shrinkage proceeded to the point at which the whole processor except for the caches could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution and other forms of prediction followedEqually surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91.the giant computer at the top of the System 360 range.are now to be found on microcomputersMurphy’s Law remained in a state of suspension. No longer did it make se nse to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easyUnfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is a currently cause for some concern.The Semiconductor Road MapThe extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of theinternational semiconductor industry.At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However about 1980 significant and far reaching changes took place in the laws. The concept of pre-competitive research was introduced. 
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.The agent by which the pre-competitive research in the semi-conductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.Every two years SIA produces a new version of a document known as the International Technological Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title ‘Roadmap’ was issued in 1994 but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail.over a 15 year horizon. the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months.that is, if Moore’s law is to be maintained.-and if the cost per chip is to fall.In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller.My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004 and in the event the industry got there a little sooner.I presented the above information from the 1994 Roadmap, along with such other information that I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing and delivered on 8 February 1996.The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster. 
In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that the continuation of progress at present rate will be at risk as we approach 2005 when the roadmap projects that progress will stall without research break-throughs in most technical areas “. This was the most specific statement about the Red Brick Wall, that had so far come from the SIA and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming above, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met willdepend on the general economic climate as well as on the financial strength of the semiconductor industry itself.。
E-Government Information

1. What does government informatization mean?
Government informatization is the process by which government makes effective use of modern information and communication technology and, through various information service facilities, optimizes and transforms its business processes, organizational structure, personnel competence and other aspects.

2. How is e-government defined in the broad and the narrow sense?
In the broad sense, e-government refers to the use of information and communication technology to carry out the administrative activities of party committees, people's congresses, CPPCC committees, governments, judicial organs, the military system, and enterprises and public institutions (e-party affairs, e-people's congress, e-CPPCC). In the narrow sense, e-government means that government applies modern information and communication technology in its management and service functions, reorganizes and optimizes its organizational structure and workflows, transcends the constraints of time, space and departmental boundaries, and provides society with high-quality, standardized and transparent services in an all-round way; it is a transformation of the means of government administration.
3. What are the components of e-government?
(1) Digitization and networking of the internal office functions of government departments;
(2) Real-time, permission-controlled information sharing among government functional departments over computer networks;
(3) Two-way information exchange and decision-making between government departments and the public and enterprises via the network.

4. Understand the driving forces behind e-government development.
(1) The rapid development of information technology;
(2) The government's own need for reform and development;
(3) The push of social demand for informatization and democratization.

5. What are the application modes of e-government?
(1) Government to Employee (G2E); (2) Government to Government (G2G); (3) Government to Business (G2B); (4) Government to Citizen (G2C).

6. What are the functions of e-government?
(1) Improve work efficiency and reduce office costs;
(2) Accelerate departmental integration and close regulatory loopholes;
(3) Improve service levels and facilitate public supervision;
(4) Drive the informatization of society as a whole.

7. What are the main problems in the development of e-government in China?
(1) Insufficient understanding of e-government among civil servants and the public;
(2) Lack of overall planning and unified standards for e-government development;
(3) Reform of the e-government management system is far from complete;
(4) The overall level of application is still low;
(5) The competence of civil servants needs to be improved;
(6) E-government legislation lags behind;
(7) Lack of a correct understanding of e-government security issues.

8. What are the meaning and content of government innovation?
Meaning: the practical approach by which governments at all levels, in order to adapt to the needs of public management and the administrative environment, keep pace with the times in transforming their concepts and functions, explore new administrative methods and channels, form new organizational structures, business processes and administrative norms, comprehensively improve administrative efficiency, and better fulfil their administrative responsibilities.
Content: reform and innovation of government concepts, government management and innovation, government functions and innovation, government services and innovation, reform and innovation of government services, government business process reengineering and innovation, and reform and innovation of working methods.

9. What is the concept of government process optimization and reengineering?
It refers to the fundamental rethinking and radical redesign of business processes so as to achieve dramatic improvements in critical measures such as cost, quality, service and speed, with an emphasis on using information technology to the full to obtain major gains. (This is the classic definition of business process reengineering, here applied to government processes.)
10. What are the steps of government process optimization and reengineering?
(1) Draw up a plan; (2) prepare for optimization and reengineering; (3) examine the existing processes; (4) redesign; (5) implement the new processes; (6) evaluate and feed back.

11. What methods are used for government process optimization and reengineering? (In particular, master the flowchart method; p. 41.)
Methods include: the flowchart method, the role activity diagram method, the IDEF family of methods, the Unified Modeling Language (UML) method, the Petri net method, the workflow method, and flexible modeling techniques.

12. Understand the "three networks and one repository" and the government information resource management center (p. 54).
The "three networks" are the intranet, the extranet and the dedicated business network; the "one repository" is the government information resource repository.
The intranet is the internal office network of government organs. It is based on the local area networks of the various government departments, is built on a classified (secure) communication platform, and mainly runs relatively independent e-government application systems such as party and government decision-making and command, macro-regulation, administrative execution, emergency handling, supervision and inspection, and information query.
The extranet is the public management and service network. It is built on the public communication platform and is mainly used to publish government information and provide services to society.
The dedicated network is the office business resource network. It links party and government organs at all levels, from the central government down to local governments, together with related business departments at higher and lower levels. According to institutional functions, it is conditionally interconnected with the intranet within its business scope so that classified information can be shared at the appropriate regional level. Logical isolation is applied between the dedicated network and the intranet.
The government information resource repository contains the business data and management information of party and government organs and of the various industries. It is distributed over the three networks and serves different users according to classification level and usage requirements.
The government information resource management center maintains a metadata repository of information resources and provides rich information resources for departments to access, including population information, information on legal entities, natural resource information, spatial and geographic information, and macroeconomic data. On the basis of data storage and backup, it provides data sharing, data exchange and decision support services to government departments, enterprises and the public. It adopts a top-down hierarchical structure, divided into national, provincial/municipal and prefecture-level government data centers (GDCs), each responsible for data services at its own level or in its own sector. The information it stores falls mainly into three categories: basic, public-interest and comprehensive information. It adopts a unified directory service system.
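Because the repository is discovered through a unified directory (metadata catalogue), a minimal sketch of such a catalogue may help make the idea concrete. This is an illustrative assumption, not the actual GDC design: the resource names, fields and lookup logic below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceEntry:
    """One metadata record in a hypothetical government resource catalogue."""
    resource_id: str          # e.g. "pop-2023"
    title: str                # human-readable name
    category: str             # "basic" | "public-interest" | "comprehensive"
    classification: str       # "public" | "internal" | "classified"
    owner_department: str     # department responsible for the data
    keywords: list = field(default_factory=list)

class DirectoryService:
    """Very small in-memory stand-in for a unified directory service."""
    def __init__(self):
        self._entries = {}

    def register(self, entry):
        self._entries[entry.resource_id] = entry

    def search(self, keyword, max_classification="public"):
        order = ["public", "internal", "classified"]
        allowed = order[: order.index(max_classification) + 1]
        return [e for e in self._entries.values()
                if keyword in e.keywords and e.classification in allowed]

if __name__ == "__main__":
    ds = DirectoryService()
    ds.register(ResourceEntry("pop-2023", "Population statistics 2023",
                              "basic", "public", "Statistics Bureau",
                              keywords=["population"]))
    print(ds.search("population"))
```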
13. What is the hierarchical model of an e-government system?
From the bottom up it can be divided into the network system layer, the information management layer, the application service layer and the application business layer. The layered model is framed by two cross-cutting elements: the e-government standards and specifications system, and the e-government-oriented security system.
14. Understand VPNs and the OSI reference model (seven layers).
A VPN (Virtual Private Network) uses a technique known as tunnelling, which allows governments and enterprises to build, on top of the public network, multiple mutually independent and secure virtual private networks that connect branch offices, distributed sites and mobile users.
The OSI model, the Open Systems Interconnection reference model, is a standard framework proposed by the International Organization for Standardization (ISO) in an attempt to allow all kinds of computers to interconnect as networks on a worldwide scale. The OSI framework is divided into seven layers: the physical layer, the data link layer, the network layer, the transport layer, the session layer, the presentation layer and the application layer.
15. What does the TCP/IP protocol suite contain?
TCP/IP is the protocol suite in widespread use on the Internet. It is in fact a family of protocols rather than a single protocol, and it originated with the US Department of Defense. It can be mapped onto four layers: (1) the network access layer, responsible for transmitting frames onto the line and receiving frames from it; (2) the internet layer, which performs routing; (3) the transport layer, which manages sessions between computers and comprises the TCP and UDP protocols; and (4) the application layer.
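As a concrete illustration of the transport and application layers just described, the following minimal sketch uses Python's standard socket module to run a TCP echo server and client on the local machine. The port number and message are arbitrary choices for the example.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050        # loopback address and an arbitrary test port

def echo_server():
    # TCP (SOCK_STREAM) sits at the transport layer; IP below it handles routing.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)        # application-layer behaviour: echo back

if __name__ == "__main__":
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.3)                   # crude wait so the server is listening
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello from the application layer")
        print(cli.recv(1024).decode())
```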
16. What are data storage and backup?
Data storage backup is the process of copying the whole system, or selected data sets, from the hard disks or disk arrays of the application host to other storage media, in order to prevent data loss caused by operating mistakes or system failures. Typical backup media include disk arrays, tape libraries, optical disc towers or optical disc libraries, and optical-disc network mirror servers.
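A minimal sketch of the idea follows, assuming a plain directory-to-archive copy with an integrity checksum. The paths are placeholders; a real e-government backup system would add scheduling, retention policies and off-site media handling.

```python
import hashlib
import tarfile
from datetime import datetime
from pathlib import Path

SOURCE = Path("/data/app")        # placeholder: data to protect
BACKUP_DIR = Path("/backup")      # placeholder: backup medium mount point

def make_backup():
    if not SOURCE.exists():
        raise SystemExit(f"source {SOURCE} not found; adjust the placeholder path")
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"app-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)   # copy the data set into the archive
    # Record a checksum so a later restore can verify the archive is intact.
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    archive.with_suffix(".sha256").write_text(f"{digest}  {archive.name}\n")
    return archive

if __name__ == "__main__":
    print("backup written to", make_backup())
```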
17. What is single sign-on technology? (p. 84)
Single Sign On (SSO) is currently one of the more popular solutions for integrating enterprise business systems. SSO means that, across multiple application systems, a user needs to log in only once to access all of the application systems that trust one another. This function is normally included in operations and internal-control audit systems and in 4A (account, authentication, authorization, audit) platforms; its purpose is to simplify the login process, protect account names and passwords, and manage accounts in a unified way.
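The core mechanism can be sketched with a signed token: the user authenticates once against a central identity service, receives a token, and each trusting application verifies the token instead of asking for the password again. This is an illustrative minimum using only the Python standard library; key management, token renewal and logout are omitted, and the shared secret and service names are made up for the example.

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"demo-secret-shared-by-trusting-apps"   # placeholder only

def issue_token(user, ttl_seconds=3600):
    """Identity service: called once after the user's single login."""
    payload = json.dumps({"sub": user, "exp": time.time() + ttl_seconds}).encode()
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(sig).decode())

def verify_token(token):
    """Any trusting application: accept the token instead of a fresh login."""
    payload_b64, _, sig_b64 = token.partition(".")
    payload = base64.urlsafe_b64decode(payload_b64)
    sig = base64.urlsafe_b64decode(sig_b64)
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        return None                       # forged or corrupted token
    claims = json.loads(payload)
    return claims if claims["exp"] > time.time() else None

if __name__ == "__main__":
    token = issue_token("clerk01")
    print("portal A accepts:", verify_token(token) is not None)
    print("portal B accepts:", verify_token(token) is not None)
```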
18. Understand the internal official-document processing applications (p. 87): outgoing-document (发文) management and incoming-document (收文) management; the details are left to the textbook.
19. What does an e-government decision support system (DSS) contain? Urban emergency response coordination (p. 106)?
The content comprises: formulating decision plans, issuing decision directives, executing the directives, and system feedback and correction.
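The four stages above form a simple control loop, which the sketch below models as a small iteration; the plan text, evaluation rule and round limit are illustrative assumptions rather than a prescribed DSS design.

```python
def run_decision_cycle(execute, evaluate, max_rounds=3):
    """Formulate -> issue -> execute -> feedback, correcting the plan as needed."""
    plan = "initial emergency-response plan"          # formulate the plan
    for round_no in range(1, max_rounds + 1):
        directive = f"round {round_no}: {plan}"       # issue the directive
        result = execute(directive)                   # execute the directive
        if evaluate(result):                          # feedback: did it succeed?
            return result
        plan = plan + " (corrected)"                  # correction feeds the next round
    return None

if __name__ == "__main__":
    outcome = run_decision_cycle(
        execute=lambda d: {"directive": d, "resolved": "corrected" in d},
        evaluate=lambda r: r["resolved"],
    )
    print(outcome)
```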
20. What are the applications of mobile government?
Application modes: message-based applications (typically SMS-based: SMS early warning, SMS announcements, SMS notifications); applications based on the mobile internet (GPRS/CDMA data transmission and, in the future, 3G/4G); and location-based applications (using GPS positioning or the positioning capability of the mobile network itself).

21. What makes up government information resources?
Their content comprises: government decision-making information; information serving all sectors of society; feedback information; and inter-government exchanges.
步骤:见书129页23.以政府信息资源的目录服务体系为依托的信息交换?交换体系是以统一的国家电子政务网络为依托,支持区域、跨部门政府信息资源交换与共享的信息系统。
交换流程为:需求方:需要资源的部门系统提供资源请求;交换平台;目录服务;提供方:交换平台根据目录返回结果定位资源。
24.一站式服务的形式?(142页)一站式服务的特点为:以网络为工具,以用户为中心,以应用为灵魂,以便民为母的。
形式有:以利用现代计算机和通信网络技术,提供政府网上服务,提供全面的政务信息,政府资源共享,个性化服务(据功能)25.政府电子化采购的流程(145页)?流程步骤为:生成采购单、发布采购需求信息、供应商应标、网上开标与定标、签订采购合同、供应商供货、货款结算和支付26.电子政务安全问题产生原因?①:技术保障措施不完善:计算机系统本身的脆弱性;软件系统存在缺陷;网络的开放性②:管理体系不健全:组织及人员风险;管理制度不完善;安全策略有漏洞;缺乏应急体系③:基础设施建设不健全:法律体系不健全;安全标准体系不完整、电子政务的信任体系问题④:社会服务体系问题27电子政务的安全需求是什么?需求是:保护政务信息资源价值不受侵犯,保证信息资产的拥有者面临最小的风险和获取最大的安全利益,使政务的信息基础设施、信息应用服务和信息内容为抵御上述威胁而具有保密性、完整性、真实性、可用性和可控性的能力,从而确保一个政府部门能够有效地完成法律所赋予的政府职能。
表现为:信息的真实性、保密性、完整性、不可否认性、可用性。
28.了解几种网络安全技术?A.防火墙:①:一般分为数据包过滤型、应用级网关型、代理服务器型。
②安全策略是:一切未被允许的都是禁止的;一切未被禁止的都是允许的。
B. 电子签名技术:是将摘要信息用发送者的私钥加密,与原文一起传送给接收者。
接收者只有用发送者的公钥才能解密被加密的摘要信息,然后用HASH函数对收到的原文产生一个摘要信息,与解密的摘要信息对比。
如果相同,则说明收到的信息是完整的,在传输过程中没有被修改,否则说明信息被修改过,因此数字签名能够验证信息的完整性。
C.非对称密钥加密技术:需要使用一对相关的密钥:一个加密,一个解密。
分为两种基本模式:加密模式和验证模式D.入侵检测系统:是依照一定得安全策略,对网络、系统的运行状况进行监视,尽可能的发现各种攻击企图、攻击行为或攻击结果,以保证网络系统的机密性、完整性和可用性。
类型有:基于主机的入侵检测系统,基于网络的入侵检测系统、分布式入侵检测系统。
方法有:基于特征的检测、基于异常的检测、完整性检验。
E. 防病毒系统:分为主机防病毒和网络防病毒。
主机房病毒式安装防病毒软件进行实时监测,发现病毒立即清除或修复。
网络防病毒是有安全提供商提供一整套的解决方案,针对网络进行全面防护。
F. 漏洞扫描系统:功能有:动态分析系统的安全漏洞,检查用户网络中的安全隐患,发布检测报告,提供有关漏洞的详细信息和最佳解决对策,杜绝漏洞、降低风险,防患未然。
G.安全认证:用户访问系统之前经过身份认证系统,监控器根据用户身份和授权数据库决定用户能否访问某个资源。
电子政务系统必须建立基于CA认证体制的身份认证系统。
数字证书用来证明数字证书持有者的身份。
29.PMI与PKI的概念和区别?概念:PKI即公钥基础设施,又叫公钥体系,是一种利用公钥加密技术为电子商务、电子政务提供一整套安全基础平台的技术和规范,采用数字证书来管理公钥。
PMI即授权管理基础设施,是国家信息安全基础设施的一个重要组成部分,目标是向用户和应用程序提供授权管理服务,提供用户身份到应用授权的映射功能,提供与实际应用出息模式相对应的、与具体应用系统开发和管理无关的授权和访问控制机制,简化具体应用系统的开发和维护。
区别:①:解决问题的不同。
PKI解决对身份的认证问题。
PMI 是身份认证之后,决定你具有的权限和能做什么的问题。