Cloud Technology and Services: Chinese-English Bilingual Foreign Literature Translations
An English Essay on Cloud Computing

English answer: Cloud computing has revolutionized the way businesses and individuals store, process, and access data and applications. It offers multiple benefits, including improved flexibility, reduced costs, increased security, enhanced collaboration, and innovative services.

Flexibility is one of the key advantages of cloud computing. It allows users to access their data and applications from anywhere with an internet connection, which makes it easier for businesses to operate remotely and for individuals to work from home.

Reduced costs are another advantage. Businesses can avoid the upfront costs of purchasing and maintaining their own IT infrastructure and instead pay for cloud services on a pay-as-you-go basis.

Increased security is another key benefit. Cloud providers have invested heavily in security measures to protect their customers' data, which can make cloud computing a more secure option than storing data on-premises.

Enhanced collaboration is a further advantage. Cloud-based applications make it easier for teams to collaborate on projects, because they can be accessed from anywhere and allow users to share files and work together in real time.

Innovative services round out the list. Cloud providers are constantly developing new services, such as artificial intelligence, machine learning, and data analytics, which can help businesses improve their operations and make better decisions.

Overall, cloud computing offers a number of benefits that can help businesses and individuals improve their operations, reduce costs, and increase innovation.
Chinese-English Reference Citation Pairs

In academic papers, the reference list is a crucial component: it strengthens a paper's credibility and scholarly quality, and it may include both Chinese and English sources.
The following are common examples of Chinese references paired with their English counterparts:

1. Book
Chinese: 王小明. 计算机网络技术. 北京:清华大学出版社,2018.
English: Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.

2. Journal article
Chinese: 张婷婷,李伟. 基于深度学习的影像分割方法. 计算机科学与探索,2019,13(1):61-67.
English: Zhang, T. T., Li, W. Image Segmentation Method Based on Deep Learning. Computer Science and Exploration, 2019, 13(1): 61-67.

3. Conference paper
Chinese: 王维,李丽. 基于云计算的智慧物流管理系统设计. 2019年国际物流与采购会议论文集,2019:112-117.
English: Wang, W., Li, L. Design of a Smart Logistics Management System Based on Cloud Computing. Proceedings of the 2019 International Conference on Logistics and Procurement, 2019: 112-117.

4. Thesis/Dissertation
Chinese: 李晓华. 基于模糊神经网络的水质评价模型研究. 博士学位论文,长春:吉林大学,2018.
English: Li, X. H. Research on a Water Quality Evaluation Model Based on Fuzzy Neural Networks. Doctoral dissertation, Changchun: Jilin University, 2018.

5. Report
Chinese: 国家统计局. 2019年国民经济和社会发展统计公报. 北京:中国统计出版社,2019.
English: National Bureau of Statistics. Statistical Communiqué of the People's Republic of China on the 2019 National Economic and Social Development. Beijing: China Statistics Press, 2019.

These are some of the most common Chinese-English reference pairs; we hope they are helpful for your writing.
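Where references are managed programmatically, the English pattern shown above (Author. Title. City: Publisher, Year.) is easy to generate from structured metadata. The following is a minimal sketch; the field names and the sample entry are illustrative assumptions, not part of any standard library or citation tool.

```python
# Minimal sketch: render reference metadata in the English style shown above.
# The dict fields and the sample entry are illustrative assumptions.

def format_book_reference(ref: dict) -> str:
    """Render a book reference as 'Author. Title. City: Publisher, Year.'"""
    return "{author}. {title}. {city}: {publisher}, {year}.".format(**ref)

if __name__ == "__main__":
    book = {
        "author": "Wang, X.",
        "title": "Computer Network Technology",
        "city": "Beijing",
        "publisher": "Tsinghua University Press",
        "year": 2018,
    }
    print(format_book_reference(book))
    # Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.
```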
Cloud Computing Foreign Literature Translation References

(This document contains the English original together with its Chinese translation.)

Original text:

Technical Issues of Forensic Investigations in Cloud Computing Environments
Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economic opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

Acknowledgment—We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of that data. Security policies, a company's main pillar of security, cannot easily be deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform its own investigation without depending on third parties. In the cloud, this is no longer possible: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing sensitive information and data as well, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information on digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30] focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around the isolation of multi-tenant platforms [12], the security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet they lack practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. The aspects of forensics in virtual systems have also been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, the NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. [1] had already introduced problems within cloud environments. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues of and solutions for cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of specialized personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing, where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology discussed in section IV-B3 provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together to decipher the real story of what happened and to provide a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases are reprocessed in the Presentation Phase. The report created in this phase is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand.

Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios, often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court, it is crucial that the chain of custody is preserved.
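As described for the Securing Phase, a bitwise copy must be collected in a way that maximizes and documents its integrity, and the chain of custody must record who handled the evidence and when. A common supporting practice is to hash the acquired image and log a custody entry. Below is a minimal sketch using only the Python standard library; the file path and handler name are placeholders, not details from the paper.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a (potentially large) evidence image."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_record(image_path: str, handler: str) -> dict:
    """Create one chain-of-custody entry for an acquired evidence image."""
    return {
        "image": image_path,
        "sha256": sha256_of_file(image_path),
        "handler": handler,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Placeholder path: in practice this is the bitwise copy of the media.
    record = custody_record("evidence/disk.img", handler="Investigator A")
    print(json.dumps(record, indent=2))
```

Recomputing the digest at any later point and comparing it with the recorded value demonstrates that the copy has not been altered since acquisition.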
B. Cloud Computing

According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. This new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used:

In the Infrastructure as a Service (IaaS) model, the customer uses a virtual machine provided by the CSP to install his own system on it. The system can be used like any other physical computer, with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of the software development process, this service model can be a driving force. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most cases, this service can be accessed through an API or a thin-client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence.

Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for one organization, but may not provide the scalability and agility of public offers. The additional notions of community and hybrid clouds are not exclusively covered within this work. However, independently of the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, over the data pushed into the application, and over the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults: Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and pose a constant threat to customers and CSP. This model also includes a malicious CSP, albeit one assumed to be rare in real-world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults: Inconsistencies in technical systems or processes in the cloud are not necessarily caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e., loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong interest in discovering the reasons and deploying corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system, but the data is still accessible, since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators, who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another; e.g., a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in turn be used by investigators. Data can also be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources of evidential data in cloud environments, discuss the technical issues of digital investigations in XaaS environments, and suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between the different parties involved. The browser on the client, often acting as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently of the model used, the following three components could act as sources of potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where, e.g., data is stored or processes are handled, contains potential evidence [2], [3]. In most cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide a variety of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. In practice, however, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information, and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in the case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the model used (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most scenarios, the user agent (e.g., the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications, which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers make strong use of third-party extensions [17], which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is essential to investigate the appearance of malicious JavaScript [18] leading to, e.g., unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to answer the above questions, even if private mode is switched on [19].
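To illustrate the kind of browser artifact mentioned above: persistent cookies are typically stored in an SQLite database inside the browser profile. The sketch below assumes a Firefox-style cookies.sqlite with a moz_cookies table; the path and exact schema vary by browser and version, so both are assumptions to verify against the evidence copy.

```python
import sqlite3

def list_cookie_hosts(cookie_db: str, limit: int = 20) -> list:
    """Read host/name pairs of persistent cookies from a profile database.

    Assumes a Firefox-style 'moz_cookies' table; the schema differs across
    browsers and versions, so verify the table layout on the evidence copy.
    """
    # Open the database read-only so the evidence file is not modified.
    conn = sqlite3.connect(f"file:{cookie_db}?mode=ro", uri=True)
    try:
        cur = conn.execute(
            "SELECT host, name FROM moz_cookies LIMIT ?", (limit,)
        )
        return cur.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    # Placeholder path: work on a copy of the client's profile database.
    for host, name in list_cookie_hosts("evidence/cookies.sqlite"):
        print(host, name)
```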
B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation: they no longer have the option of seizing physical data storage. Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is in question whether the preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases, this forces the investigator to rely on high-level logs, which may be provided by the CSP. In the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.

a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: current globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information have been accessed by the adversary. For the victim, this situation can have a tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g., due to storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10].

Suggested Solution: In private SaaS scenarios, this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purposes of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
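If a public SaaS CSP exposed such a compliance/forensics API, retrieving access, error and event logs for a time window could look like the sketch below. The endpoint, query parameters and token header are entirely hypothetical; no real CSP API is implied.

```python
import json
import urllib.request

# Hypothetical endpoint and token: placeholders, not a real CSP API.
BASE_URL = "https://api.example-csp.com/v1/forensics/logs"
API_TOKEN = "REPLACE_ME"

def fetch_logs(log_type: str, start: str, end: str) -> list:
    """Fetch 'access', 'error' or 'event' logs for an ISO 8601 time window."""
    url = f"{BASE_URL}?type={log_type}&start={start}&end={end}"
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {API_TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for entry in fetch_logs("access", "2011-01-01T00:00:00Z", "2011-01-02T00:00:00Z"):
        print(entry)
```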
Furthermore, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim that this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].
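One way to realize the suggested push-only, tamper-evident logging is to hash-chain and MAC each record before it leaves the application, so that a later runtime compromise cannot silently rewrite history. Below is a minimal sketch using only the Python standard library; the shared key and the transport to the customer-controlled log server are assumptions, and confidentiality in transit would be layered on top (e.g., TLS) rather than shown here.

```python
import hmac
import hashlib
import json
from datetime import datetime, timezone

# Assumption: a logging key provisioned out of band and held by the customer.
SECRET_KEY = b"customer-held-logging-key"

class ChainedLogger:
    """Hash-chain log records and MAC them so tampering is detectable."""

    def __init__(self, key: bytes):
        self._key = key
        self._prev = "0" * 64  # genesis value for the hash chain

    def emit(self, message: str) -> dict:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "msg": message,
            "prev": self._prev,  # links this record to its predecessor
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["mac"] = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        self._prev = hashlib.sha256(payload).hexdigest()
        # In a real deployment the record would now be pushed (e.g. over TLS)
        # to a logging server under the customer's control.
        return record

if __name__ == "__main__":
    log = ChainedLogger(SECRET_KEY)
    print(log.emit("user alice logged in"))
    print(log.emit("configuration changed by alice"))
```

A verifier holding the key can recompute each MAC and each chain link; any dropped, reordered or altered record breaks the chain from that point on.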
3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial, and not only for recovering from an incident: forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This is due to the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before being transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is, in the end, still under the control of the CSP. The CSP controls the hypervisor, which is, e.g., responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customer VMs communicate with the hardware and can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM, thereby leading to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down in order to collect an image (dead virtual instance). This situation completely changed with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Thanks to the invention of snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLAs. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that …
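For the snapshot analysis just described, customers with access to the hypervisor management API can freeze a running instance programmatically. Below is a minimal sketch using the libvirt Python bindings; the connection URI and domain name are placeholders, and whether memory state is captured depends on the hypervisor and the snapshot flags used, so treat the details as assumptions to adapt.

```python
import libvirt  # requires the libvirt-python bindings on the host

SNAPSHOT_XML = """
<domainsnapshot>
  <name>forensic-snapshot</name>
  <description>State frozen for investigation</description>
</domainsnapshot>
"""

def snapshot_vm(uri: str, domain_name: str) -> str:
    """Create a snapshot of a running VM so its state can be analyzed."""
    conn = libvirt.open(uri)  # e.g. 'qemu:///system' on a KVM host
    try:
        dom = conn.lookupByName(domain_name)
        snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)
        return snap.getName()
    finally:
        conn.close()

if __name__ == "__main__":
    # Placeholder URI and VM name: adjust for the environment under analysis.
    print(snapshot_vm("qemu:///system", "target-vm"))
```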
Cloud Computing: Foreign Literature Translation

Graduation Project Report: English Literature and Chinese Translation
Student name:    Student ID:
School: School of Computer and Control Engineering    Major:
Supervisor:
June 2017

English literature: Cloud Computing
1. Cloud Computing at a Higher Level

In many ways, cloud computing is simply a metaphor for the Internet, the increasing movement of compute and data resources onto the Web. But there's a difference: cloud computing represents a new tipping point for the value of network computing. It delivers higher efficiency, massive scalability, and faster, easier software development. It's about new programming models, new IT infrastructure, and the enabling of new business models.
For those developers and enterprises who want to embrace cloud computing, Sun is developing critical technologies to deliver enterprise scale and systemic qualities to this new paradigm:

(1) Interoperability: while most current clouds offer closed platforms and vendor lock-in, developers clamor for interoperability.
云计算外文翻译参考文献

云计算外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)原文:Technical Issues of Forensic Investigations in Cloud Computing EnvironmentsDominik BirkRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyRuhr-University BochumHorst Goertz Institute for IT SecurityBochum, GermanyAbstract—Cloud Computing is arguably one of the most discussedinformation technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers(CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.I. INTRODUCTIONAlthough the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the companie’s assets. Normally,if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: The CSP obtains all the power over the environmentand thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics. proper manner which is hardly be possible due to the lackof circumstantial information. For auditors, this situation does not change: Questions who accessed specific data and information cannot be answered by the customers, if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensible information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations. 
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environment have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: ”[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts,and on acquisition logistics and legal issues related to cloud computing env ironments.” This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcendents peoples’ private and professional life, thus mirroring more and more of peoples’actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in section II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including thecross-disciplinary aspects. We conclude in section V.II. RELATED WORKVarious works have been published in the field of cloud security and privacy [9], [35], [30] focussing on aspects for protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensis. Within this context, cryptographic proofs for verifying data integrity mainly in cloud storage offers have been proposed,yet lacking of practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notionof virtual introspection [25]. 
In addition, the NIST already addressed Web Service Forensics [22] which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al [36] published a paper discussing the issues of incident response in cloud environments - unfortunately no specific issues and solutions of cloud forensics have been proposed which will be done within this work.III. TECHNICAL BACKGROUNDA. Traditional Digital ForensicsThe notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be proper isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps each having its own specificpurpose:1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionallydo not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.B. 
Cloud ComputingAccording to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure asa Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of software development process this service model can be propellent. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model leading to further issues concerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.C. Fault ModelBe it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:1) Maliciously Intended FaultsInternal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he isassumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threads and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. 
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. 
Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. 
Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. 
Furthermore, due to the limited ability of receiving forensic information from the server and of proving the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSPs, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSPs normally claim this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSPs offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial, and not only for recovering from an incident. Forensic investigations also gain leverage from such information, which contributes to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is a consequence of the customer's ability to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data; a sketch of such a push-only, signed log forwarder is given below.
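The paper suggests signing and encrypting log data before pushing it to a customer-controlled logging server but prescribes no concrete mechanism. The following Python sketch is one possible interpretation, assuming a TLS-protected logging endpoint and a pre-shared HMAC key; the host name, port and key handling are invented for illustration.

# Push-only log forwarder: each entry is HMAC-signed, then sent over
# TLS to a logging server under the customer's control. The receiving
# server removes the "hmac" field and recomputes it over the remainder
# to verify integrity. Host, port and key distribution are assumptions.
import hashlib
import hmac
import json
import socket
import ssl

SHARED_KEY = b"pre-shared-key-from-secure-provisioning"  # assumption
LOG_HOST, LOG_PORT = "logs.customer.example", 6514        # assumption

def sign(entry: dict) -> dict:
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hmac"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def push(entries) -> None:
    context = ssl.create_default_context()  # verifies the server certificate
    with socket.create_connection((LOG_HOST, LOG_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=LOG_HOST) as tls:
            for entry in entries:
                tls.sendall((json.dumps(sign(entry)) + "\n").encode())

push([{"ts": "2011-06-01T12:00:00Z", "level": "warn",
       "msg": "admin interface login from unknown address"}])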
Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM ultimately remains under the control of the CSP. The CSP controls the hypervisor, which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customers' VMs communicate with the hardware and can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and could therefore lead to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation changed completely with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Due to the invention of snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLAs. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common, providing evidence data that …
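As a concrete illustration of the snapshot workflow sketched above, the following Python fragment drives a libvirt/KVM host through the virsh command-line tool. It is a minimal sketch under stated assumptions - a running domain named "target-vm" with qcow2 storage, and an investigator with shell access to the hypervisor host - not a statement about how the hypervisors named above implement snapshots.

# Create a snapshot of a running VM and preserve its metadata for
# offline analysis. Assumes a libvirt/KVM host, the virsh CLI on PATH
# and a running domain "target-vm"; names and paths are illustrative.
import subprocess
from datetime import datetime, timezone

DOMAIN = "target-vm"  # assumed domain name

def create_forensic_snapshot(domain: str) -> str:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    name = f"forensic-{stamp}"
    # Full system checkpoint of the running domain (disk plus memory
    # state for qcow2-backed internal snapshots).
    subprocess.run(
        ["virsh", "snapshot-create-as", domain, name,
         "--description", "evidence snapshot"],
        check=True,
    )
    # Keep the snapshot metadata alongside the evidence.
    xml = subprocess.run(
        ["virsh", "snapshot-dumpxml", domain, name],
        check=True, capture_output=True, text=True,
    ).stdout
    with open(f"{name}.xml", "w") as meta:
        meta.write(xml)
    return name

if __name__ == "__main__":
    print("created snapshot:", create_forensic_snapshot(DOMAIN))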
Chinese-English Bilingual Foreign Literature Translation: 5G Wireless Communication Networks

Translation: Cellular Architecture and Key Technologies for 5G Wireless Communication Networks

Abstract - Fourth generation (4G) wireless communication systems have been deployed, or are about to be deployed, in many countries. However, with the explosion of wireless mobile devices and services, there are still challenges that 4G cannot accommodate, such as the spectrum crisis and high energy consumption. Wireless system designers face a continuously growing demand for the high data rates and mobility required by new wireless applications, and have therefore started research on fifth generation (5G) wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and we discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks and visible light communications. Future challenges facing these potential technologies are also discussed.

Introduction: The sound use of innovations in information and communication technology (ICT) is becoming increasingly important to the growth of the world economy. Wireless communication networks are perhaps the most critical element of the global ICT strategy and underpin many other industries; they form one of the fastest growing and most dynamic sectors in the world. The European Mobile Observatory (EMO) reported that the mobile communications industry had total revenues of 174 billion euros in 2010, thereby surpassing the aerospace and pharmaceutical industries. The development of wireless technology has greatly improved people's ability to communicate and to conduct business and social life. The remarkable achievements of wireless mobile communication are reflected in the rapid pace of technological innovation. From the first appearance of second generation (2G) mobile communication systems in 1991 to the initial launch of third generation (3G) systems in 2001, wireless mobile networks have been transformed from a pure technology system into a network capable of carrying large amounts of multimedia content. 4G wireless systems were designed to meet the requirements of IMT-Advanced, using IP for all services. In 4G systems, advanced air interfaces employ orthogonal frequency division multiplexing (OFDM), multiple-input multiple-output (MIMO) and link adaptation technologies. 4G wireless networks can support data rates of up to 1 Gb/s at low mobility, such as nomadic or local wireless access, and up to 100 Mb/s at high mobility, such as mobile access; a back-of-the-envelope capacity calculation below illustrates why MIMO and wide bandwidths make such rates plausible.
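The translated passage states these rates without derivation. As an illustrative aside (not part of the original article), the idealized Shannon capacity of a MIMO link with n independent spatial streams, C = n * B * log2(1 + SNR), shows how antenna count and bandwidth scale the achievable rate; the parameter values below are arbitrary examples.

# Idealized MIMO link capacity: C = n * B * log2(1 + SNR), with n
# spatial streams and bandwidth B in hertz. The values are arbitrary
# and only illustrate the scaling, not any real deployed system.
import math

def mimo_capacity_bps(streams: int, bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10.0)
    return streams * bandwidth_hz * math.log2(1 + snr_linear)

for streams in (1, 4, 8):
    c = mimo_capacity_bps(streams, bandwidth_hz=20e6, snr_db=20)
    print(f"{streams} streams over 20 MHz at 20 dB SNR: {c / 1e6:.0f} Mb/s")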
LTE systems and their extension, LTE-Advanced, have recently been deployed around the globe as practical 4G systems, or will be in the near future. Nevertheless, the number of users subscribing to mobile broadband systems still grows dramatically every year.
Cloud Computing: Foreign Literature and Translation

1. Introduction

Cloud computing is an Internet-based computing model that provides a variety of services through shared computing resources. With the spread and application of cloud computing, many researchers have studied the field in depth. This document introduces a piece of foreign literature on cloud computing and provides a corresponding translation.

2. Overview of the Foreign Literature

Authors: Antonio Fernández Anta, Chryssis Georgiou, Evangelos Kranakis
Year of publication: 2019

The literature surveys the development and application of cloud computing. It introduces the basic concepts of cloud computing, including its characteristics, architecture and service models, as well as the challenges and prospects of cloud computing.

3. Research Content

The survey covers the basic concepts of cloud computing and the related technologies. It first presents a definition of cloud computing and a comparison with traditional computing, and examines the advantages and shortcomings of cloud computing in depth. It then describes the architecture of cloud computing, including cloud service providers, cloud service consumers and the basic components of cloud services. Following the architectural overview, it presents the three service models of cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Each service model is introduced in terms of its definition, characteristics and application cases, giving the reader a deeper understanding. In addition, the survey discusses the challenges of cloud computing, including security, privacy protection, performance and reliability, and explores the prospects and future directions of cloud computing.

4. Translation of the Literature

"Cloud Computing: A Survey" is a comprehensive introduction to cloud computing. It explains in detail the definition, architecture and service models of cloud computing, and discusses its advantages, shortcomings and challenges. It also offers predictions on the future development of cloud computing. For readers studying cloud computing and related fields, the survey provides a good reference resource: it helps readers understand the basic concepts, architecture and service models of cloud computing, and it prompts readers to think about the challenges cloud computing faces and how to respond to them.

5. Conclusion
Computer Networks: Web Server for Embedded Systems - Chinese-English Translation (Foreign Literature Translation)

Web Server for Embedded Systems

After the "everybody-in-the-Internet wave" now obviously follows the "everything-in-the-Internet wave". Most coffee, vending and washing machines are still not reachable over the worldwide net. However, embedded Internet integration for remote maintenance and diagnostics, as well as the so-called M2M (Machine-to-Machine) communication, is growing at a considerable rate. Remote maintenance and diagnostics of components and systems by web browsers, via the Internet or a local intranet, carries a very high weight for many development projects. In numerous development departments, people work on completely web-based configurations and services for embedded systems. The remaining days of the classic user interface made of a small LC display with a front panel and a few function keys are over. Through future evolutions in the field of the mobile Internet, Bluetooth-based PANs (Personal Area Networks) and the rapidly growing M2M communication, a further innovative advance is to be expected.

The central functional unit for getting access to an embedded system via web browser is the web server. Such web servers bring the desired HTML (HyperText Markup Language) pages and pictures over the worldwide Internet, or a local network, to the web browser. This happens HTTP-based (HyperText Transfer Protocol). A TCP/IP protocol stack - that means it is based on sophisticated and established standards - manages the entire communication. Web server (HTTP server) and browser (HTTP client) are TCP/IP applications. HTTP has achieved a phenomenal distribution in recent years: millions of users around the world now surf HTTP-based in the World Wide Web, and today almost every personal computer offers the necessary support for this protocol. This status increasingly holds for embedded systems as well; HTTP is spreading there at a fast rate too.

1. TCP/IP-based HTTP as Communication Platform

HTTP is a simple protocol that is based on a TCP/IP protocol stack (figure 1.A). HTTP uses TCP (Transmission Control Protocol). TCP is a relatively complex and high-quality protocol for transferring data over the subordinate IP protocol. TCP itself always guarantees a safeguarded connection between two communication partners based on an extensive three-way-handshake procedure. As a result, data transfer via HTTP is always protected. Due to the extensive TCP protocol mechanisms, HTTP offers only low-grade performance.

Figure 1: TCP/IP stack and HTTP programming model

HTTP is based on a simple client/server concept. HTTP server and client communicate via a TCP connection; as the default, TCP port number 80 is used. The server works completely passively. It waits for a request (order) from a client. This request normally refers to the transmission of specific HTML documents, which possibly have to be generated dynamically by CGI. As the result of the request, the server answers with a response that usually contains the desired HTML documents, among other things (figure 1.B).

GET /test.htm HTTP/1.1
Accept: image/gif, image/jpeg, */*
User-Agent: Mozilla/4.0
Host: 192.168.0.1

Listing 1.A: HTTP GET request

HTTP/1.1 200 OK
Date: Mon, 06 Dec 1999 20:55:12 GMT
Server: Apache/1.3.6 (Linux)
Content-length: 82
Content-type: text/html

<html><head><title>Test-Seite</title></head><body>Test-Seite</body></html>

Listing 1.B: HTTP response as result of the GET request from listing 1.A

HTTP requests normally consist of several text lines, which are transmitted to the server by TCP. Listing 1.A shows an example. The first line characterizes the request type (GET), the requested object (/test.htm) and the HTTP version used (HTTP/1.1). In the second request line the client tells the server which kinds of files it is able to evaluate. The third line includes information about the client software. The fourth and last line of the request from listing 1.A informs the server about the IP address of the client. Depending on the type of request and the client software used, some further lines may follow; a blank line marks the end of the request.

HTTP responses usually consist of two parts. First there is a header of individual text lines; then follows an optional content object. This content object may consist of text lines - in case of an HTML file - or of a binary file when a GIF or JPEG image is to be transferred. The first line of the header is especially important: it works as a status or error message. If an error occurs, only the header, or a part of it, is transmitted as the answer.
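As an illustration of Listing 1.A in action (not part of the original paper), the following Python sketch opens a TCP connection and sends the same GET request by hand; the target address is the example IP from the listing and is assumed to host a web server.

# Send the HTTP GET request from Listing 1.A over a raw TCP socket
# and print the response (header plus content object). The address
# 192.168.0.1 is the example host from the listing, assumed reachable.
import socket

REQUEST = (
    "GET /test.htm HTTP/1.1\r\n"
    "Accept: image/gif, image/jpeg, */*\r\n"
    "User-Agent: Mozilla/4.0\r\n"
    "Host: 192.168.0.1\r\n"
    "Connection: close\r\n"   # ask the server to close after responding
    "\r\n"                    # blank line terminates the request
)

with socket.create_connection(("192.168.0.1", 80), timeout=5) as conn:
    conn.sendall(REQUEST.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

print(response.decode("iso-8859-1"))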
2. Functional Principle of a Web Server

Simplified, a web server can be imagined as a special kind of file server. Figure 2.A shows an overview. The web server receives an HTTP GET request from the web browser; with this request, a specific file is required as the answer (step 1 in figure 2.A). After that, the web server tries to get access to the file system of the requested computer and attempts to find the desired file (step 2). After a successful search, the web server reads the entire file (step 3) and transmits it as an answer - an HTTP response consisting of header and content object - to the web browser (step 4). If the web server cannot find the appropriate file in the file system, an error message (an HTTP response containing only the header) is simply sent as the response to the client. A minimal single-process server along these lines is sketched below.

Figure 2: Functional principle of web server and browser

The web content is built from individual files. The base is formed by static files with HTML pages. Within such HTML files there are references to further embedded files - typically pictures in GIF or JPEG format, although references to other objects, for example Java applets, are also possible. After a web browser has received an HTML file from a web server, this file is evaluated and searched for external references. Now steps 1 to 4 from figure 2.A run again for every external reference in order to request the respective file from the corresponding web server. Please note that such a reference consists of the name or IP address of a web server (e.g. "ssv-embedded.de") as well as the name of the desired file (e.g. "picture1.gif"). So virtually every reference can refer to another web server. In other words, an HTML file could be located on the server "ssv-embedded.de" while the required picture - externally referenced by this HTML file - is located on another web server. Finally, this (worldwide) networking of separate objects is the origin of the name World Wide Web (WWW). All files required from a web server are requested by a browser following the procedure shown in figure 2.A. Normally these files are stored in the file system of the server, and the webmaster has to update them from time to time.

A further elementary functionality of a web server is the Common Gateway Interface (CGI) mentioned before. Originally this technology was made only for simple forms embedded into HTML pages. The data resulting from filling in a form is transmitted to a web server via an HTTP GET or POST request (step 1 in figure 2.B). Such a GET or POST request fundamentally includes the name of the CGI program needed for the evaluation of the form. This program has to reside on the web server; normally the directory "/cgi-bin" is used as the storage location. As a result of the GET or POST request, the web server starts the CGI program located in the subdirectory "/cgi-bin" and delivers the received data in the form of parameters (step 2). The outputs of the CGI program are passed to the web server (step 3). Then the web server sends them as the response to the web browser (step 4).
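The following Python sketch condenses steps 1 to 4 into a minimal single-process file server of the kind discussed later for embedded use (boa and thttpd likewise serve one connection at a time). The document root and the port are assumptions made for illustration; real embedded servers such as boa are written in C.

# Minimal single-process web server illustrating steps 1-4: receive a
# GET request, look up the file, read it, send the response. DOCROOT
# and the port are illustrative; port 8080 avoids needing root rights.
import os
import socket

DOCROOT = os.path.realpath("/srv/www")   # assumed document root
TYPES = {".htm": "text/html", ".html": "text/html",
         ".gif": "image/gif", ".jpg": "image/jpeg"}

def handle(conn):
    request = conn.recv(4096).decode("iso-8859-1")          # step 1
    path = request.split(" ")[1] if " " in request else "/"
    if path.endswith("/"):
        path += "index.html"
    filename = os.path.realpath(os.path.join(DOCROOT, path.lstrip("/")))
    if filename.startswith(DOCROOT) and os.path.isfile(filename):  # step 2
        with open(filename, "rb") as f:                     # step 3
            body = f.read()
        ctype = TYPES.get(os.path.splitext(filename)[1], "application/octet-stream")
        header = (f"HTTP/1.0 200 OK\r\nContent-type: {ctype}\r\n"
                  f"Content-length: {len(body)}\r\n\r\n")
        conn.sendall(header.encode("ascii") + body)         # step 4
    else:
        conn.sendall(b"HTTP/1.0 404 Not Found\r\n\r\n")     # header-only error

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 8080))
server.listen(1)   # single process: requests are served one after another
while True:
    conn, _ = server.accept()
    try:
        handle(conn)
    finally:
        conn.close()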
3. Dynamically Generated HTML Pages

In contrast to a company web site server, which informs people about the product program and services through static pages and pictures, an embedded web server has to supply dynamically generated content. The embedded web server generates the dynamic pages at the moment of the first access by a browser. How else could we check the actual temperature of a system via the Internet? Static HTML files are of little interest for an embedded web server; mostly, information about the firmware version and service instructions is stored in HTML format. All other tasks are normally handled via dynamically generated HTML.

There are two different technologies for generating a specific HTML page at the moment of the request: first, so-called server-side scripting, and second, CGI programming. With server-side scripting, script code is embedded into an HTML page. If required, this code is executed on the server (server-sided). Numerous script languages are available for this, all usable inside an HTML page. In the Linux community PHP is used mostly; the favourite of Microsoft is VBScript. It is also possible to insert Java directly into HTML pages - Sun has named this technology JSP (Java Server Pages). The HTML page with the script code is statically stored in the file system of the web server. Before this server file is delivered to the client, a special program replaces the entire script code with dynamically generated standard HTML. The web browser sees nothing of the script language.

Figure 3: Single steps of server-side scripting

Figure 3 shows the single steps of server-side scripting. In step 1 the web browser requests a specific HTML file via an HTTP GET request. The web server recognizes the specific extension of the desired file (for example *.ASP or *.PHP instead of *.HTM and/or *.HTML) and starts a so-called scripting engine (step 2). This program gets the desired HTML file, including the script code, from the file system (step 3), executes the script code and produces a new HTML file without script code (step 4). The included script code is replaced by dynamically generated HTML. This new HTML file is read by the web server (step 5) and sent to the web browser (step 6). A toy version of such a scripting engine is sketched below. If server-side scripting is supposed to be used by an embedded web server, the necessary additional resources have to be considered. A simple example: in order to execute the PHP code embedded into an HTML page, additional program modules are necessary on the server. A scripting engine together with the embedded web server has to be stored in the flash memory chip of an embedded system, and at run time more main memory is required.
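To make the scripting-engine idea concrete, here is a deliberately tiny substitution engine in Python - an illustration invented for this text, not how PHP or JSP actually work. Placeholders of the form <?= name ?> in a stored HTML file are replaced with live values from the device before the page is delivered.

# Toy "scripting engine": replace <?= name ?> placeholders in a static
# HTML template with live values, producing plain HTML for the browser.
# The placeholder syntax and the temperature source are invented for
# illustration only.
import re

def read_temperature() -> float:
    return 42.5  # stand-in for a real sensor read on the device

def render(template: str, context: dict) -> str:
    def substitute(match):
        name = match.group(1)
        return str(context[name])   # raises KeyError for unknown names
    return re.sub(r"<\?=\s*(\w+)\s*\?>", substitute, template)

template = ("<html><body>"
            "Current system temperature: <?= temperature ?> &deg;C"
            "</body></html>")
print(render(template, {"temperature": read_temperature()}))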
4. Web Server Running under Linux

When speaking about web servers in connection with Linux, most people immediately think of Apache. According to the Netcraft survey, this program is the most widely used web server worldwide. Apache is an enhancement of the legendary NCSA server. The name Apache itself is not a reference to the Native American tribe: it is a construct from "A Patchy Server", because the first version was put together from different code and patch files. Moreover, there are numerous other web servers, even for Linux. Most of them stand under the GPL (like Apache) and can be used licence-free; an extensive overview of available servers can be found on the web. Every web server has its advantages and disadvantages. Some are developed for specific functions and have very special qualities. Others distinguish themselves through their reaction rate under many simultaneous requests, as well as through the variety of their configuration settings. Others again are designed to need minimal resources and offer very few setting possibilities, as well as only one connection to a client at a time.

The most important consideration for an embedded web server is the actual resource requirement. Embedded systems sometimes offer only minimal resources, which mostly have to be shared with Linux. Meanwhile there are numerous high-performance 32-bit 386/486-microcontroller or (Strong)ARM-based embedded systems that have just 8 Mbytes of RAM and 2 Mbytes of flash ROM (figure 4). From this ROM (read-only memory, i.e. flash memory chips), a complete Linux, based on a 2.2 or 2.4 kernel with TCP/IP protocol stack and web server, is booted. HTML pages and the programs to generate the dynamic web pages are also stored in the ROM. The footprint of such an embedded system is similar to that of a slightly oversized postage stamp, so it is quite understandable that there is no place for a powerful web server like Apache.

Figure 4: Embedded web server module with StrongARM and Linux

Then again, the capability of an Apache is not needed to visualize the counter of a photocopier or the status of a percolator via web server and browser. In most cases a single small web server is quite enough. Two such representatives are boa and thttpd. Both web servers are primarily used in connection with embedded systems running under Linux. The configuration options of boa and thttpd are limited, but quite sufficient; in addition, the source code is available to the customer. The binary files for these servers are always smaller than 80 Kbytes and can be integrated into most embedded systems without problems. For the dynamic generation of HTML pages, both servers offer only CGI (Common Gateway Interface) as an extension; further technologies, like server-side includes (SSI), are not available.

The great difference between an embedded web server and Apache is, next to the limited configuration settings, the maximum possible number of simultaneous requests. High-performance servers like Apache immediately create a separate process for every incoming request from a client; all further steps are then executed inside this process. This requires very good programming and a lot of free memory at run time, but on the other hand many web browsers can access such a web server simultaneously. Embedded web servers like boa and thttpd work with one single process, as in the sketch above. If two users access an embedded web server simultaneously, one of them has to wait a few fractions of a second. In the environment of embedded systems, that is absolutely justifiable: it is first of all a matter of remote maintenance, remote configuration and similar tasks. Not many simultaneous requests are expected.
The DIL/NetPCs DNP/1110 - Using the Embedded Linux

List of Figures
Figure 1: TCP/IP stack and HTTP programming model
Figure 2: Functional principle of web server and browser
Figure 3: Single steps of server-side scripting
Figure 4: Embedded web server module with StrongARM and Linux

Listings
Listing 1.A: HTTP GET request
Listing 1.B: HTTP response as result of the GET request from listing 1.A

Contact
SSV Embedded Systems
Heisterbergallee 72
D-30453 Hannover
Tel. +49-(0)511-40000-0
Fax +49-(0)511-40000-40
Email: sales@ist1.de
Web: www.ssv-embedded.de

Document History (Sadnp05.Doc)
Revision 1.00 - 24.05.2002 - First version - KDW

This document is meant for internal use only. The contents of this document can change at any time without announcement. No guarantee is given for the accuracy of the statements. Copyright © SSV EMBEDDED SYSTEMS 2002. All rights reserved. INFORMATION PROVIDED IN THIS DOCUMENT IS PROVIDED 'AS IS' WITHOUT WARRANTY OF ANY KIND. The user assumes the entire risk as to the accuracy and the use of this document. Some names within this document can be trademarks of their respective holders.

Beijing University of Technology, graduation project (translated text): Web Server for Embedded Systems.
Chinese-English Bilingual Foreign Literature Translation on Cloud Technology and Services
信息被视为手段这样做。
此信息是用来跟踪性能和提高财务业绩主要通过优化利用公司资产。
能力和速度,这是可能的收集信息并将其分发到当前的技术该组织正在不断增加,事实上,有超过了行业的能力,接受和利用它。
今天,生产运营商被淹没在数据的结果一种改进的监控资产的能力。
智能电机保护和智能仪器和条件监控系统经常提供32多块每个设备的信息都与相关的报警。
通常运营商没有装备来理解或行动在这个信息。
生产企业需要充分利用标的物专门为这个目的,通过定位他们的工程人员区域中心。
这些工程师需要配备足够的知识能够理解和接受适当的行动来处理警报和警报通过这些智能设备。
可用的信息可以是有用的,在寻找方法增加生产,减少计划外的维护和最终减少停机时间。
然而,寻找信息在实时,或在实时获得有用的信息,而不花了显着的非生产时间,使数据有用的是一个巨大的挑战。
本文将介绍云技术作为一种获取可视化和报告的经济方法条件为基础的数据。
然后,它将讨论使用云技术,使工程资源与现场数据可通过网络浏览器访问的安全格式技术。
我们将覆盖资产的方法多个云服务的优化和效益飞行员和项目。
当重工业公司在全球范围看,世界级运营实现整体设备效率(OEE)得分百分之91。
从历史上看,石油和天然气行业的滞后,这得分十分以上(Aberdeen 集团“操作风险管理”十月2011)。
OEE是质量的商,可用性和效率得分。
这些,可用性似乎影响了石油和天然气行业的最大程度。
在石油和天然气的可用性得分的根源更深的研究导致旋转资产作为故障的根本原因,在70%的情况下,失去的生产或计划外停机。
鉴于这一行业的关键资产失败的斗争,但有方法,以帮助推动有效性得分较高,以实现经营效率的目标。
.在未来十年中,海上石油储量的追求将涉及复杂的提取方法和海底提取技术的广泛使用。
提取下一波的储量将需要复杂的设施,在海底,以及更传统的表面生产设施的形式。
面对日益复杂的生产系统,石油和天然气经营公司正在努力成功地经营这些复杂的设施与一个退休池的主题专业知识。
该行业的答复,这一现象是显着的使用智能设备组织的斗争,把数据转换成过程控制系统,成本很高,然后向工程师提出所有的数据,每一次发生的事件。
工程师然后花时间挖掘数据,建立数据表和创建有用的信息,从桩。
由此他们可以开始工作的问题或问题。
不幸的是,这项工作需要的经验,资产和数据。
更换行业和组织的知识是一个挑战,推动生产企业探索定位主题的专业知识,在资源的枢纽,跨多个资产。
挑战来自于2个方面。
首先,生产系统,无论是在表面上,在海底,需要复杂的生产系统和子系统的许多组件。
其结果是,操作这些设施的自动化系统是非常复杂的,往往需要运营商与200000多个标签的数据和由此产生的报警接口。
其次,将标的物专家在区域枢纽创建的安全风险,必须解决的设计自动化系统。
启用24小时的实时数据访问远程定位的主题的专业知识,需要建立在互联网技术的隧道,使监控从远程办公室,平板电脑和移动设备。
一个健全的安全策略是需要这种支持的一部分。
传统的方法继续影响停机时间。
简而言之,如果工程或维护需要建立一个电子表格,查找数据,操纵它,然后通过它来查找真相,在采取行动时损失的时间会导致停机时间的增加。
基于角色的可视化和报告,与非生产性工作的自动化配对,创造一个环境,推动合作和提高性能。
改进的风险为基础的方法,以尽量减少资产故障是一个可能的答案,以推动资产性能。
它不是完整的,除非系统的地方,使总有效率的维护,每个人在企业中扮演着一个维护的一部分,因为每个装备。
实施全面生产性维护开始与授权的集中式主题的专业知识与现场数据,并给他们的手段,以配合实时操作和维护人员。
它包括提供每一个人的工作与资产的确切信息所需的时间所需的作用,以保持资产健康。
高级维修和设施工程人员是最佳的位置,利用数据的基础上,结合历史的情况下,以推动可靠性提高资产。
协作工具驱动共享这反过来又创造了一个环境,经验不足的员工获得从老员工更好的装备的年轻工程师的知识,操作和维修人员的优化与资产为退休工人。
这种组合的数据利用率,资产性能管理和知识共享导致通过全面的经营绩效管理的优化资产。
经营绩效管理的路径是时间和速度。
创建协作区和分析和工作流程自动化系统,可以涉及重大投资。
资源来执行的系统往往是稀缺的,导致在建设的系统,将使一个企业的经营业绩管理的操作障碍。
今天,技术的存在,使工作流自动化和协作使用简单的网络工具。
简单的浏览器访问允许使用智能手机,平板电脑和个人电脑的人员合作。
问题是,企业在他们的防火墙内构建技术,还是利用云计算技术来推动合作作为付费服务?答案需要一个更广泛的数据的使用和数据策略。
要实现可靠和效率指标,利用现有的数据,石油和天然气经营企业必须制定数据策略,解决了一些问题。
是数据所需的合作也受监管或投资者的期望?如果是这样的话,一个企业应该考虑在其企业结构中持有数据,并在基础设施投资,以存储和展示它。
资本预算和它的资源可以限制的能力,以实现所需的合作,建立系统的能力,具有挑战性的可靠性和效率目标的追求。
购买应用程序作为一个基于云的服务,可以加快与一个支付的路径,作为一个模型。
云技术提供了一种车辆来操作和管理数据。
采购软件作为一种服务,使一个组织能够通过更好地利用现有的数据,更快地进入运营卓越的领域。
采购业务管理工具作为一种服务,避免使用资本预算和专门规避资金跨资产问题。
服务是为他们所使用的经营预算。
利用云计算共享操作条件数据创建一个操作改进的环境。
“私有云”利用同样的技术可以部署,如果企业有考虑,需要所有的数据,以保持其企业内(法规遵从,投资者的遵守等)在开发策略,利用数据驱动的学科之间的合作,企业需要开发的数据策略。
一些数据,例如,安全记录,测试记录和记录和记录被调节,并为这些已授权存储。
其他类型的数据有投资者的合规性存储和安全要求。
在投资者或监管数据的情况下,企业仍然希望协同工具,但可能需要在其公司的结构中的数据,再看图5,协同工具和仪表板也可在传统的资本项目,购买软件和应用程序的开发经营范围内的企业数据结构。
本文的下一部分将解决传统的基础设施的方法和云计算方法,用于在一个组织中的有效性的例子。
云技术可以在标准的服务器体系结构中使用数据和操作的数据和自动化的非生产性的工作,利用数据。
虚拟机管理程序环境允许多个应用程序运行在数据中心的传统集群技术。
应用程序存在,允许数据的积累,从不同的来源与联邦,然后利用创建基于角色的可视化和自动化的手动任务(如生产分配,以及测试验证,监管系统测试文件等)。
图6演示了一个在操作公司的防火墙中使用云技术的测试验证。
图7然后演示了一个操作符的角色视图。
技术有助于显着减少非生产性的操作,同时提出的数据,在一种方式,推动人员集中的领域,他们可以影响业务,以提高效率。
图10和9显示了工作流自动化的一个实例,并将所得的仪表板呈现给含有丰富信息的人员,以帮助提高个人在资产上工作的有效性。
自动化系统既需要大量的资本投资,也需要持续的运营费用,以维持基础设施建设。
在许多情况下,合作是需要的,但确保投资是困难的,因为组织障碍,跨资产的资本项目或根本原因是不存在的资源。
利用云计算的基础,应用程序可以建立在云基础设施,并提供服务。
在这种方式的合作是可能的经营预算的限制,使用一个基金,建立模型。
在大多数情况下,所需的工具的成本效益的好处是显着低于传统的系统建设的成本。
下面的例子是一个达拉斯的抽水系统的制造商在泥浆处理系统和压裂系统,用于钻井的非常规的威尔斯将被用来说明。
我们会打电话给XYZ公司。
客户XYZ 送卡车进入现场,他们提供的服务,石油和天然气企业开发油气田的土地为基础的操作。
抽水系统具有高度的局部自动化,但泵操作人员不具备作出决定时,停止抽水和执行维护。
例如,卡车需要定期保养,一些过滤器需要更换,如每八个小时。
结合需要进行定期维护与客户寻找经营者的肩膀上,偶尔的错误是与阀门定位和/或运行的卡车,直到它打破了试图取悦一个焦虑的客户。
这大大的维护成本,停机时间是一个大问题。
English original:

AN EMERGING FRONTIER: ASSET OPTIMIZATION UTILIZING CLOUD TECHNOLOGY AND SERVICES

Eric Fidler, Sr. Member, and Richard Paes, Sr. Member

Abstract - Maximizing return on investment is a major focus for all corporations, and information is seen as the means by which to do so. This information is used to track performance and to improve financial results, primarily by optimizing the use of company assets. The ability and speed with which it is possible to gather information and distribute it with current technology to the organization is constantly increasing and, in fact, has surpassed the ability of industry to accept and utilize it. Today, production operators are overwhelmed with data as a result of an improved ability to monitor assets. Intelligent motor protection, intelligent instrumentation and condition monitoring systems often provide more than 32 pieces of information per device, all with related alarms. Often operators are not equipped to understand or act on this information. Production companies need to leverage subject matter expertise for this purpose by locating their engineering staff in regional hubs. These engineers need to be equipped with sufficient knowledge to be able to interpret and take appropriate action to deal with the warnings and alarms generated by these intelligent devices. The information available can be useful in finding ways to increase production, reduce unplanned maintenance and ultimately reduce downtime. However, finding the information in real time, or getting useful information in real time without significant non-productive time spent making the data useful, is a big challenge. This paper will introduce cloud technology as an economical method of gaining visualization of, and reporting on, condition-based data. It will then discuss the use of cloud technology to empower engineering resources with live data, available in a secure format accessible through web browser technology. We shall cover the approach to asset optimization and the benefits found in several cloud services pilots and projects.

INTRODUCTION

When heavy industry firms are viewed on a global basis, world-class operations achieve an Overall Equipment Effectiveness (OEE) score of 91 percent. Historically, the oil and gas industry lags this score by ten points or more (Aberdeen Group, "Operational Risk Management", October 2011). OEE is the product of quality, availability and efficiency scores. Of these, availability seems to affect the oil and gas industry score to the greatest extent. A deeper look into the root causes of availability scores in oil and gas points to rotating assets as the root cause of failure in 70% of instances of lost production or unplanned downtime. Given this, the industry struggles with failure of critical assets, but there are ways to help drive effectiveness scores higher in order to achieve operational efficiency objectives.
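To make the OEE arithmetic concrete, the sketch below multiplies the three component scores; the sample values are invented to land near the world-class 91 percent figure quoted above and do not come from the paper.

# OEE = availability x efficiency (performance) x quality.
# The component values are illustrative, chosen to land near the
# world-class 91 percent mark cited in the introduction.
def oee(availability: float, efficiency: float, quality: float) -> float:
    return availability * efficiency * quality

score = oee(availability=0.95, efficiency=0.97, quality=0.99)
print(f"OEE: {score:.1%}")   # -> OEE: 91.2%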
Pursuit of offshore reserves in the next decade will involve complex extraction methods and extensive use of subsea extraction technologies. Extraction of the next wave of reserves will require complex facilities on the ocean floor, as well as more traditional surface production facilities. Facing the growing complexity of production systems, oil and gas operating companies are struggling to operate these complex facilities successfully with a retiring pool of subject matter expertise. The industry's reply to this phenomenon is significant use of intelligent devices, each delivering useful data: intelligent breakers, protective relays, measurement relays, process instrumentation and condition monitoring systems all gather useful information. With sophisticated motor controllers such as adjustable speed drives (ASDs), which have literally hundreds of parameters available to be monitored, it is easy to see that the amount of information which can be captured is staggering. In fact, managing and interpreting this body of information now becomes the issue. Organizations struggle to bring the data into process control systems at great cost, and then present all of the data to engineers each time an event occurs. Engineers then spend time mining the data, building spreadsheets and creating useful information out of the pile; from this they can start to work on the problem or problems. Unfortunately, this work requires experience with both the asset and the data. Replacing industry and organizational knowledge is a challenge that drives production companies to explore locating subject matter expertise in hubs, where resources are leveraged across several assets.

The challenge comes about in two areas. First, production systems - both on the surface and on the sea floor - require complex systems and subsystems with many components. As a result, the automation systems that operate these facilities are extremely complex, often requiring operators to interface with more than 200,000 tags of data and the resulting alarms. Second, placing subject matter experts in regional hubs creates security risks that must be addressed in the design of automation systems. Enabling 24-hour live-data access for remotely located subject matter expertise requires establishing tunnels in Internet technology to empower monitoring from remote offices, tablet PCs and mobile devices. A sound security strategy is required as part of this enablement.

OPERATIONS PERFORMANCE MANAGEMENT FOR IMPROVED RELIABILITY

Traditional methods continue to impact downtime. In short, if engineering or maintenance has to build a spreadsheet, hunt down data, manipulate it, and then sort through it to find the truth, the time lost in taking action will lead to increased downtime. Role-based visualization and reporting, paired with the automation of non-productive work, creates an environment that drives collaboration and improves performance. Improved visibility, with a risk-based approach to minimizing asset failure, is a possible answer to drive asset performance. It is not complete unless the systems put in place empower Total Productive Maintenance, in which each person in the enterprise plays a part in maintenance as each is equipped to do. Implementing Total Productive Maintenance begins with empowering centralized subject matter expertise with live data and giving them the means to collaborate with operations and maintenance personnel in real time. It includes providing each person working with the asset the exact information required for their role, in the time required, to keep the asset healthy. Senior maintenance and facilities engineering personnel are best positioned to use data - condition-based combined with historical - to drive reliability improvement in the asset. Collaboration tools drive knowledge sharing, which in turn creates an environment where less experienced employees gain from the knowledge of older employees, better equipping younger engineers, operations and maintenance personnel to engage optimally with the asset as older workers retire.
This combination of data utilization, asset performance management and knowledge sharing leads to anoptimized asset through total Operations Performance Management.The path to Operations Performance Management is one of time and pace. Creating collaborative zones and systems for analytics and work flow automation can involve significant IT investment. Resources to execute the systems are often scarce leading to barriers in building the systems that would enable an enterprise to move to Performance Management of operations. Today, technology exists to empower workflow automation and collaboration using simple Web tools. Simple browser access allows personnel collaboration using smart phones, tablet PCs and personal computers. The question is, does an enterprise build the technology within their firewalls or does it take advantage of cloud computing technology to drive collaboration as a paid service?The answer requires a broader view of data usage and data strategies. To achieve reliability and efficiency targets leveraging available data, Oil and Gas operating companies must develop data strategies that address a number of considerations. Is the data required for collaboration also governed by regulatory or investor expectations? If that is the case, an enterprise should likely consider holding the data within their corporate structure and investing in infrastructure to store and present it. Capital budgets and IT resources can limit the ability to build the systems required to achieve the desired collaboration –challenging the pursuit of reliability and efficiency targets. Purchasing the applications as a cloud- based service can speed up the path to collaboration with a pay-as-you-go model.Cloud technology provides a vehicle to manipulate and manage data. Purchasing software as a service enables an organization to move faster into the realm of operational excellence through better utilization of existing data. Purchasing operations management tools as a service avoids using capital budgets and specifically circumvents issues of funding across assets. Services are paid for as they are used with operating budgets. Utilizing cloudcomputing to share operating conditional data creates an environment for operational improvement. ‘Private clouds’leveraging the same technologies can be deployed if an enterprise has considerations that require all of the data to remain within their enterprise (regulatory compliance, investor compliance, etc.)In developing strategies that utilize data to drive collaboration between disciplines, an enterprise needs to develop a data strategy. Some of the data such as, safety records, testing records and flow records are regulated and as such have mandated storage. Other types of data have investor compliance storage and security requirements. In the case of investor or regulatory data an enterprise still desires collaborative tools but likely needs to hold the data within its company structure. Looking at Figure 5 again, collaborative tools and dash boarding are also available in traditional methods of a capital project, purchased software and application development operating within the enterprise data structures. 
The next two sections of this paper will address examples of a traditional infrastructure approach and a cloud computing approach used to drive effectiveness in an organization. Cloud technology enables the manipulation of data and the automation of non-productive tasks within standard server architectures owned and operated by the enterprise utilizing the data. Hypervisor environments allow multiple applications to run on traditional cluster technology in data centers. Applications exist which permit the accumulation of data from disparate sources, with federation then utilized to create role-based visualization and automation of manual tasks (e.g. production allocation, well test verification, regulatory system testing documentation, etc.). Figure 6 demonstrates a well test validation accomplished utilizing cloud technology within an operating company's firewall; Figure 7 then demonstrates the resulting role view for an operator. The technology helps to significantly reduce non-productive operations while presenting the data in a way that drives personnel to focus on areas where they can impact the business to improve efficiencies. Figures 9 and 10 show an example of workflow automation and the resulting dashboard presented to personnel, containing role-rich information designed to help improve the effectiveness of the individuals working on the asset.

Automation systems can require both significant capital investment to build and ongoing operating expenses to maintain the infrastructure. In many cases collaboration is desired, but securing the investment is difficult due to organizational barriers in funding capital projects across assets, or simply because the resources do not exist. Leveraging cloud computing foundations, applications can be built within the cloud infrastructure and provided as a service. Collaborating in this fashion is possible within the constraints of operating budgets, using a fund-as-built model. In most cases, the cost of the tools required to gain the efficiency benefits is significantly lower than the cost of building traditional systems.

The following example of a Dallas-based manufacturer of pumping systems, used in mud-handling and fracturing systems for drilling non-conventional wells, will be used to illustrate. We will call the company XYZ. Customers of XYZ send trucks into the field, where they provide services to oil and gas enterprises developing oil and gas fields in land-based operations. The pumping systems have a high degree of local automation, but pump operations personnel are not equipped to make decisions regarding when to stop pumping and perform maintenance. For example, the trucks require regular maintenance, and some of the filters require replacement as often as every eight hours. Combine the need for regular maintenance with customers looking over the operator's shoulders, and occasional mistakes are made with valve positioning, and/or the truck is run until it breaks in an attempt to please an anxious customer. This drives significant maintenance costs, and downtime is a big problem. A simple condition-based rule of the kind such a monitoring system might enforce is sketched below.
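As a closing illustration (invented, not from the paper), a minimal condition-based maintenance rule for the filter-replacement case described above might look like this: run-hours are accumulated per filter, and an alert is raised as the eight-hour service interval approaches.

# Minimal condition-based maintenance rule for the XYZ filter example:
# accumulate run hours per filter and flag service near the 8-hour
# interval. Thresholds and identifiers are illustrative assumptions.
SERVICE_INTERVAL_H = 8.0
WARN_MARGIN_H = 0.5

class FilterMonitor:
    def __init__(self, name: str):
        self.name = name
        self.run_hours = 0.0

    def log_runtime(self, hours: float) -> None:
        self.run_hours += hours

    def status(self) -> str:
        if self.run_hours >= SERVICE_INTERVAL_H:
            return f"{self.name}: STOP - replace filter now"
        if self.run_hours >= SERVICE_INTERVAL_H - WARN_MARGIN_H:
            return f"{self.name}: schedule replacement soon"
        return f"{self.name}: ok ({SERVICE_INTERVAL_H - self.run_hours:.1f} h left)"

mud_filter = FilterMonitor("mud-pump filter A")
for shift_hours in (3.0, 3.0, 1.8):
    mud_filter.log_runtime(shift_hours)
    print(mud_filter.status())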