Distributed Systems Design Graduation Thesis: Foreign-Literature Translation and Original Text
Foreign-Literature Translation on Spring and MyBatis

Undergraduate Graduation Design (Thesis): Foreign-Literature Translation

Original title: MVC Design Pattern for the multi framework distributed applications using XML, spring and struts framework
Translated title (Chinese): 使用XML,Spring和struts以MVC为设计模式的多分布式应用程序框架
Author's school: 计算机与遥感信息技术学院 (School of Computer and Remote Sensing Information Technology)
Class: B12511
Author: 王硕 (Wang Shuo)
Student ID: 20124051117
Supervisor: 耿炎 (Geng Yan), Dean
Completed: January 2015
Prepared by the Office of Academic Affairs, 北华航天工业学院 (North China Institute of Aerospace Engineering)

Author of the original: Praveen Gupta (transliterated 普利文·古塔), India
Original source: International Journal on Computer Science and Engineering

MVC Design Pattern for Multi-Framework Distributed Applications Using XML, Spring and Struts

Abstract: Model-View-Controller (MVC) is a fundamental design pattern used to separate the user interface from the business logic.
In recent years applications have grown ever larger, and the MVC design pattern loosely couples the code in the different application layers.
This paper proposes a web application framework on the J2EE platform that extends MVC with XML and is easy to maintain.
It is a multi-tier system comprising a presentation layer, a business layer, a data persistence layer and a database layer; because the code in each layer is independent, maintainability and reusability are greatly improved.
In this paper we implement the Spring and Struts frameworks using MVC.
Our research shows that when an application is designed with multiple frameworks, applying the MVC concept makes it easier to consolidate the application into a single, more uniform framework.
Keywords: MVC, Spring, XML

I. Introduction
In recent years the web has become a very complex problem domain.
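As an illustrative aside (not part of the translated paper), the separation the abstract describes can be seen in a minimal Spring MVC controller. The package, class and view names below are hypothetical; a sketch only, assuming a standard Spring MVC setup.

```java
// Illustrative sketch of MVC separation in a Spring web application.
// Package, class and view names are hypothetical, not taken from the paper.
package com.example.orders;

import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Service;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

import java.util.List;

@Service
class OrderService {                       // business layer (the "Model" side)
    List<String> findOpenOrders() {
        return List.of("order-1001", "order-1002");
    }
}

@Controller
class OrderController {                    // "Controller": mediates between view and model
    private final OrderService orders;

    OrderController(OrderService orders) { // constructor injection wired by Spring
        this.orders = orders;
    }

    @GetMapping("/orders")
    public String listOrders(Model model) {
        model.addAttribute("orders", orders.findOpenOrders());
        return "orderList";                // logical view name resolved to a template (the "View")
    }
}
```

Because the controller only forwards data between the service and a named view, either side can be replaced (for example, a Struts Action in place of the controller, or a different template engine for the view) without touching the business logic, which is the loose coupling the abstract refers to.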
Hadoop Distributed Storage Platform: Foreign-Literature Translation

Hadoop distributed storage platform foreign-literature translation (the document contains the English original and its Chinese translation side by side).

Original text:

Technical Issues of Forensic Investigations in Cloud Computing Environments
Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldomly discussed, but in this regard highly relevant open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

(The authors thank the reviewers for their helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.)

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional life, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30] focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet they lack practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notion of virtual introspection [25].
In addition, the NIST already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, and that it is complete and clear to understand. Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real-world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.

B. Cloud Computing
According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used:

In the Infrastructure as a Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of the software development process this service model can be a propellant. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence. Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults: Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and pose a constant threat to customers and CSP. In this model, a malicious CSP is also included, albeit assumed to be rare in real-world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults: Inconsistencies in technical systems or processes in the cloud do not implicitly have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another, e.g. a typical file transfer over a network can be seen as a data-in-motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments, discuss the technical issues of digital investigations in XaaS environments and suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where e.g. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can be still running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several pieces of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].

B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage.
Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.

a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: current globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g. due to storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
Furthermore, due to the limited ability of receiving forensic information from the server and proving integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer (a small illustrative sketch follows at the end of this excerpt). Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial, not only for recovering from an incident. Forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused by the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP.
The CSP controls the hypervisor, which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customers' VM communicate with the hardware and theoretically can intervene in executed processes on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore lead to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned by one click, including also the running system's memory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common, providing evidence data that is not available on powered-down systems. The technique of live investigation …
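The "Suggested Solution" for PaaS environments above proposes signing and encrypting log records before they leave the application and pushing them to a logging server under the customer's control. The sketch below illustrates only that idea; the key provisioning, record format and transport are assumptions for illustration, not part of the paper.

```java
// Sketch (assumption-laden) of the paper's PaaS logging idea: sign, then encrypt each
// log record on the application side before shipping it to a customer-controlled server.
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.security.Signature;

public class SignedLogShipper {
    public static void main(String[] args) throws Exception {
        // In practice these keys would be provisioned by the customer, not generated ad hoc.
        KeyPair signingKeys = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        SecretKey logKey = KeyGenerator.getInstance("AES").generateKey();

        String record = "2015-01-12T10:04:33Z app=shop level=WARN msg=failed-login user=alice";

        // 1) Sign the plaintext record so later tampering is detectable.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(signingKeys.getPrivate());
        signer.update(record.getBytes(StandardCharsets.UTF_8));
        byte[] signature = signer.sign();

        // 2) Encrypt the record so eavesdroppers on the way to the log server learn nothing.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, logKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(record.getBytes(StandardCharsets.UTF_8));

        // 3) "Push-only" transfer: send iv + ciphertext + signature to the central logging server here.
        System.out.printf("shipped %d encrypted bytes, %d signature bytes%n",
                ciphertext.length, signature.length);
    }
}
```

The design intent, as the paper describes it, is that the verification key stays with the customer, so a runtime compromise of the application can add new (signed or unsigned) records but cannot silently rewrite records that were already pushed.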
孙明明 (Sun Mingming): Foreign-Literature Translation

Graduation Design (Thesis) Foreign-Literature Translation
School: 机械电子工程学院 (School of Mechanical and Electronic Engineering)
Major: Mechanical Design, Manufacturing and Automation
Name: 孙明明 (Sun Mingming)
Student ID: 070501504
Source: The advantages of PLC control, filed under PLC Articles
Attachments: 1. Translated text; 2. Original text.
(Attachments to be written in the foreign language) Attachment 1: Translated text — The Advantages of PLC Control. Any control system goes through four stages from its conception to a working plant.
A PLC system offers advantages at every stage.
The first stage is design: the plant's requirements are studied and the control strategy is devised. With a conventional scheme, the design and construction of the operating platform must be complete before this design work can proceed.
A PLC system needs only a rough idea of the probable size of the machine and the I/O requirement (how many inputs and outputs).
At this stage the input and output cards are cheap, so a healthy spare capacity can be built in, allowing for omissions and future expansion.
Next comes construction.
With a conventional scheme, every job is a "one-off", which inevitably causes delays and added cost.
A PLC system is simply bolted together from standard parts.
While this is being done, writing of the PLC program (or at least a detailed program specification) can begin.
The next stage is installation, a tedious and expensive business as sensors, actuators and limit switches are fitted and cabled back to the main system.
A distributed PLC system using serial links and pre-built, pre-tested interfaces simplifies installation and brings enormous cost benefits.
Most of the PLC programming is done at this stage.
Finally comes commissioning, and this is where the real advantages of the PLC are found.
No plant ever works first time.
Human nature being what it is, there are always some oversights.
Modifications to a conventional system are time-consuming and expensive; by contrast, provided the PLC designer has allowed spare memory capacity, spare I/O and some spare cores in the multicore cables, most changes can be made quickly and relatively cheaply.
An added advantage is that all changes are recorded in the PLC's program, so commissioning modifications are not lost or left undocumented, a problem that frequently occurs with conventional systems.
There is an additional fifth stage, maintenance, which starts once the plant is working and has been handed over to production.
All plant develops faults, and most equipment spends a good part of its life in some form of fault mode.
The Hadoop Distributed File System: Architecture and Design (Foreign-Literature Translation)

Foreign-Literature Translation
Original source: The Hadoop Distributed File System: Architecture and Design
Chinese title: Hadoop分布式文件系统:架构和设计
Name: XXXX   Student ID: ************
April 8, 2013

English original
The Hadoop Distributed File System: Architecture and Design
Source: /docs/r0.18.3/hdfs_design.html

Introduction
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project. HDFS is part of the Apache Hadoop Core project. The project URL is /core/.

Assumptions and Goals

Hardware Failure
Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system's data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.

Streaming Data Access
Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access. POSIX imposes many hard requirements that are not needed for applications that are targeted for HDFS. POSIX semantics in a few key areas has been traded to increase data throughput rates.

Large Data Sets
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.

Simple Coherency Model
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A Map/Reduce application or a web crawler application fits perfectly with this model. There is a plan to support appending-writes to files in the future.

"Moving Computation is Cheaper than Moving Data"
A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.

Portability Across Heterogeneous Hardware and Software Platforms
HDFS has been designed to be easily portable from one platform to another.
This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.

NameNode and DataNodes
HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.

The NameNode and DataNode are pieces of software designed to run on commodity machines. These machines typically run a GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine but in a real deployment that is rarely the case.

The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.

The File System Namespace
HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS does not yet implement user quotas or access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features.

The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.

Data Replication
HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time.

The NameNode makes all decisions regarding replication of blocks.
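Since this per-file replication factor is exposed to applications through the HDFS Java API (described later under Accessibility), a short sketch is given here; the NameNode address, path and factors are example values only, not taken from the document.

```java
// Sketch: specifying a replication factor per file through the HDFS Java API.
// The cluster URI, path and replication factors are illustrative values only.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://namenode:9000");   // NameNode address (example)

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/user/example/data.log");

        // Replication factor chosen at file creation time ...
        FSDataOutputStream out = fs.create(file, (short) 3);
        out.writeUTF("hello hdfs");
        out.close();

        // ... and changed later; the NameNode records the new factor and
        // re-replicates or removes block replicas as needed.
        fs.setReplication(file, (short) 2);
        fs.close();
    }
}
```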
The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.

Replica Placement: The First Baby Steps
The placement of replicas is critical to HDFS reliability and performance. Optimizing replica placement distinguishes HDFS from most other distributed file systems. This is a feature that needs lots of tuning and experience. The purpose of a rack-aware replica placement policy is to improve data reliability, availability, and network bandwidth utilization. The current implementation for the replica placement policy is a first effort in this direction. The short-term goals of implementing this policy are to validate it on production systems, learn more about its behavior, and build a foundation to test and research more sophisticated policies.

Large HDFS instances run on a cluster of computers that commonly spread across many racks. Communication between two nodes in different racks has to go through switches. In most cases, network bandwidth between machines in the same rack is greater than network bandwidth between machines in different racks.

The NameNode determines the rack id each DataNode belongs to via the process outlined in Rack Awareness. A simple but non-optimal policy is to place replicas on unique racks. This prevents losing data when an entire rack fails and allows use of bandwidth from multiple racks when reading data. This policy evenly distributes replicas in the cluster which makes it easy to balance load on component failure. However, this policy increases the cost of writes because a write needs to transfer blocks to multiple racks.

For the common case, when the replication factor is three, HDFS's placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack. This policy cuts the inter-rack write traffic which generally improves write performance. The chance of rack failure is far less than that of node failure; this policy does not impact data reliability and availability guarantees. However, it does reduce the aggregate network bandwidth used when reading data since a block is placed in only two unique racks rather than three. With this policy, the replicas of a file do not evenly distribute across the racks. One third of replicas are on one node, two thirds of replicas are on one rack, and the other third are evenly distributed across the remaining racks. This policy improves write performance without compromising data reliability or read performance.

The current, default replica placement policy described here is a work in progress.

Replica Selection
To minimize global bandwidth consumption and read latency, HDFS tries to satisfy a read request from a replica that is closest to the reader. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If an HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.

Safemode
On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state. The NameNode receives Heartbeat and Blockreport messages from the DataNodes. A Blockreport contains the list of data blocks that a DataNode is hosting.
Each block has a specified minimum number of replicas. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode. After a configurable percentage of safely replicated data blocks checks in with the NameNode (plus an additional 30 seconds), the NameNode exits the Safemode state. It then determines the list of data blocks (if any) that still have fewer than the specified number of replicas. The NameNode then replicates these blocks to other DataNodes.

The Persistence of File System Metadata
The HDFS namespace is stored by the NameNode. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata. For example, creating a new file in HDFS causes the NameNode to insert a record into the EditLog indicating this. Similarly, changing the replication factor of a file causes a new record to be inserted into the EditLog. The NameNode uses a file in its local host OS file system to store the EditLog. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode's local file system too.

The NameNode keeps an image of the entire file system namespace and file Blockmap in memory. This key metadata item is designed to be compact, such that a NameNode with 4 GB of RAM is plenty to support a huge number of files and directories. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint. In the current implementation, a checkpoint only occurs when the NameNode starts up. Work is in progress to support periodic checkpointing in the near future.

The DataNode stores HDFS data in files in its local file system. The DataNode has no knowledge about HDFS files. It stores each block of HDFS data in a separate file in its local file system. The DataNode does not create all files in the same directory. Instead, it uses a heuristic to determine the optimal number of files per directory and creates subdirectories appropriately. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory. When a DataNode starts up, it scans through its local file system, generates a list of all HDFS data blocks that correspond to each of these local files and sends this report to the NameNode: this is the Blockreport.

The Communication Protocols
All HDFS communication protocols are layered on top of the TCP/IP protocol. A client establishes a connection to a configurable TCP port on the NameNode machine. It talks the ClientProtocol with the NameNode. The DataNodes talk to the NameNode using the DataNode Protocol. A Remote Procedure Call (RPC) abstraction wraps both the Client Protocol and the DataNode Protocol. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.

Robustness
The primary objective of HDFS is to store data reliably even in the presence of failures.
The three common types of failures are NameNode failures, DataNode failures and network partitions.

Data Disk Failure, Heartbeats and Re-Replication
Each DataNode sends a Heartbeat message to the NameNode periodically. A network partition can cause a subset of DataNodes to lose connectivity with the NameNode. The NameNode detects this condition by the absence of a Heartbeat message. The NameNode marks DataNodes without recent Heartbeats as dead and does not forward any new IO requests to them. Any data that was registered to a dead DataNode is not available to HDFS any more. DataNode death may cause the replication factor of some blocks to fall below their specified value. The NameNode constantly tracks which blocks need to be replicated and initiates replication whenever necessary. The necessity for re-replication may arise due to many reasons: a DataNode may become unavailable, a replica may become corrupted, a hard disk on a DataNode may fail, or the replication factor of a file may be increased.

Cluster Rebalancing
The HDFS architecture is compatible with data rebalancing schemes. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold. In the event of a sudden high demand for a particular file, a scheme might dynamically create additional replicas and rebalance other data in the cluster. These types of data rebalancing schemes are not yet implemented.

Data Integrity
It is possible that a block of data fetched from a DataNode arrives corrupted. This corruption can occur because of faults in a storage device, network faults, or buggy software. The HDFS client software implements checksum checking on the contents of HDFS files. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block. (A small illustrative sketch of this idea follows at the end of this excerpt.)

Metadata Disk Failure
The FsImage and the EditLog are central data structures of HDFS. A corruption of these files can cause the HDFS instance to be non-functional. For this reason, the NameNode can be configured to support maintaining multiple copies of the FsImage and EditLog. Any update to either the FsImage or EditLog causes each of the FsImages and EditLogs to get updated synchronously. This synchronous updating of multiple copies of the FsImage and EditLog may degrade the rate of namespace transactions per second that a NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use.

The NameNode machine is a single point of failure for an HDFS cluster. If the NameNode machine fails, manual intervention is necessary. Currently, automatic restart and failover of the NameNode software to another machine is not supported.

Snapshots
Snapshots support storing a copy of data at a particular instant of time. One usage of the snapshot feature may be to roll back a corrupted HDFS instance to a previously known good point in time. HDFS does not currently support snapshots but will in a future release.

Data Organization

Data Blocks
HDFS is designed to support very large files.
Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 64 MB. Thus, an HDFS file is chopped up into 64 MB chunks, and if possible, each chunk will reside on a different DataNode.

Staging
A client request to create a file does not reach the NameNode immediately. In fact, initially the HDFS client caches the file data into a temporary local file. Application writes are transparently redirected to this temporary local file. When the local file accumulates data worth over one HDFS block size, the client contacts the NameNode. The NameNode inserts the file name into the file system hierarchy and allocates a data block for it. The NameNode responds to the client request with the identity of the DataNode and the destination data block. Then the client flushes the block of data from the local temporary file to the specified DataNode. When a file is closed, the remaining un-flushed data in the temporary local file is transferred to the DataNode. The client then tells the NameNode that the file is closed. At this point, the NameNode commits the file creation operation into a persistent store. If the NameNode dies before the file is closed, the file is lost.

The above approach has been adopted after careful consideration of target applications that run on HDFS. These applications need streaming writes to files. If a client writes to a remote file directly without any client side buffering, the network speed and the congestion in the network impacts throughput considerably. This approach is not without precedent. Earlier distributed file systems, e.g. AFS, have used client side caching to improve performance. A POSIX requirement has been relaxed to achieve higher performance of data uploads.

Replication Pipelining
When a client is writing data to an HDFS file, its data is first written to a local file as explained in the previous section. Suppose the HDFS file has a replication factor of three. When the local file accumulates a full block of user data, the client retrieves a list of DataNodes from the NameNode. This list contains the DataNodes that will host a replica of that block. The client then flushes the data block to the first DataNode. The first DataNode starts receiving the data in small portions (4 KB), writes each portion to its local repository and transfers that portion to the second DataNode in the list. The second DataNode, in turn, starts receiving each portion of the data block, writes that portion to its repository and then flushes that portion to the third DataNode. Finally, the third DataNode writes the data to its local repository. Thus, a DataNode can be receiving data from the previous one in the pipeline and at the same time forwarding data to the next one in the pipeline. Thus, the data is pipelined from one DataNode to the next.

Accessibility
HDFS can be accessed from applications in many different ways. Natively, HDFS provides a Java API for applications to use. A C language wrapper for this Java API is also available. In addition, an HTTP browser can also be used to browse the files of an HDFS instance. Work is in progress to expose HDFS through the WebDAV protocol.

FS Shell
HDFS allows user data to be organized in the form of files and directories.
It provides a command-line interface called FS shell that lets a user interact with the data in HDFS. The syntax of this command set is similar to other shells (e.g. bash, csh) that users are already familiar with. Here are some sample action/command pairs:

FS shell is targeted for applications that need a scripting language to interact with the stored data.

DFSAdmin
The DFSAdmin command set is used for administering an HDFS cluster. These are commands that are used only by an HDFS administrator. Here are some sample action/command pairs:

Browser Interface
A typical HDFS install configures a web server to expose the HDFS namespace through a configurable TCP port. This allows a user to navigate the HDFS namespace and view the contents of its files using a web browser.

Space Reclamation

File Deletes and Undeletes
When a file is deleted by a user or an application, it is not immediately removed from HDFS. Instead, HDFS first renames it to a file in the /trash directory. The file can be restored quickly as long as it remains in /trash. A file remains in /trash for a configurable amount of time. After the expiry of its life in /trash, the NameNode deletes the file from the HDFS namespace. The deletion of a file causes the blocks associated with the file to be freed. Note that there could be an appreciable time delay between the time a file is deleted by a user and the time of the corresponding increase in free space in HDFS.

A user can Undelete a file after deleting it as long as it remains in the /trash directory. If a user wants to undelete a file that he/she has deleted, he/she can navigate the /trash directory and retrieve the file. The /trash directory contains only the latest copy of the file that was deleted. The /trash directory is just like any other directory with one special feature: HDFS applies specified policies to automatically delete files from this directory. The current default policy is to delete files from /trash that are more than 6 hours old. In the future, this policy will be configurable through a well defined interface.

Decrease Replication Factor
When the replication factor of a file is reduced, the NameNode selects excess replicas that can be deleted. The next Heartbeat transfers this information to the DataNode. The DataNode then removes the corresponding blocks and the corresponding free space appears in the cluster. Once again, there might be a time delay between the completion of the setReplication API call and the appearance of free space in the cluster.

Chinese translation (original source: /docs/r0.18.3/hdfs_design.html)
I. Introduction
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware.
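The Data Integrity section above describes how the HDFS client checksums each block on write and re-verifies it on read. The sketch below illustrates only that idea with CRC32 over fixed-size chunks; it is not the actual HDFS implementation, and the chunk size and data are made up.

```java
// Conceptual sketch of client-side block checksumming, in the spirit of the
// Data Integrity section above. Not the real HDFS code; the chunk size is illustrative.
import java.util.ArrayList;
import java.util.List;
import java.util.zip.CRC32;

public class BlockChecksums {
    static final int CHUNK = 4 * 1024;   // per-chunk checksum granularity (example value)

    // Compute one checksum per chunk of the block, as the writer would.
    static List<Long> checksums(byte[] data) {
        List<Long> sums = new ArrayList<>();
        for (int off = 0; off < data.length; off += CHUNK) {
            CRC32 crc = new CRC32();
            crc.update(data, off, Math.min(CHUNK, data.length - off));
            sums.add(crc.getValue());
        }
        return sums;
    }

    // Re-verify on read: any mismatch means the replica we fetched is corrupt
    // and another DataNode holding a replica of the block should be tried.
    static boolean verify(byte[] data, List<Long> expected) {
        return checksums(data).equals(expected);
    }

    public static void main(String[] args) {
        byte[] block = new byte[3 * CHUNK];
        for (int i = 0; i < block.length; i++) block[i] = (byte) i;

        List<Long> stored = checksums(block);   // would live in a hidden checksum file
        block[CHUNK + 7] ^= 0x1;                // simulate corruption on one replica
        System.out.println("verified: " + verify(block, stored));  // prints false
    }
}
```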
Distributed Control System (DCS) Design and Application Examples (English Edition)

Distributed Control System (DCS) Design and Application Examples (English Edition)

I. Introduction

Introduction: Distributed Control Systems (DCS) represent a pivotal technology in industrial automation, offering efficient and reliable control over complex processes across multiple locations. These systems decentralize control functions, enhancing system resilience and flexibility.

Introduction (Chinese translation, rendered in English): Distributed Control Systems (DCS) play a key role in industrial automation, providing effective and reliable control of complex processes at multiple sites. By decentralizing control functions, these systems improve resilience and flexibility.
II. DCS Design Principles

Design Fundamentals: The design of a DCS revolves around the principles of modularity and redundancy. It consists of controllers, sensors, actuators, and human-machine interfaces (HMIs), all interconnected through a communication network. Each component is designed to perform specific tasks while maintaining real-time data exchange for coordinated operation.

Design Fundamentals (Chinese translation, rendered in English): The design of a DCS is centered on the principles of modularity and redundancy.
Professional Foreign-Literature Translation: Distributed Generation

Impact of High Penetration of Distributed Generation on System Design and Operations

(1. Bartosz Wojszczyk, Accenture, Raleigh 27601, U.S.; 2. Omar Al-Juburi, Accenture, San Francisco 94105, U.S.; 3. Joy Wang, Accenture, Shanghai 200020, China)

ABSTRACT: This paper addresses the topic of massive utility-oriented deployment of Distributed Generation (DG) in power systems. High penetration of DG presents significant challenges to design/engineering practices as well as to the reliable operation of the power system. This paper examines the impact of large-scale DER implementation on system design, reliable operation and performance and includes practical examples from utility demonstration projects. It also presents a vision for the utility of the future and describes DG technologies being implemented by utilities.

KEY WORDS: distributed energy resources; distributed generation; power system design and operation

0 Introduction
Distributed generation (DG) or decentralized generation is not a new industry concept. In 1882, Thomas Edison built his first commercial electric plant—"Pearl Street". This power station provided 110V DC electricity to 59 customers in lower Manhattan. In 1887, there were 121 Edison power stations in the United States delivering DC electricity to customers. These first power plants were run on water or coal. Centralized power generation became possible when it was recognized that alternating current power could be transported at relatively low costs and reduce power losses across great distances by taking advantage of the ability to raise the voltage at the generation station and lower the voltage near customer loads. In addition, the concepts of improved system performance (system stability) and more effective generation asset utilization provided a platform for wide-area/global grid integration. In recent years, there has been a rapidly growing interest in wide deployment of DG. Commercially available technologies for DG are based on combustion engines, micro- and mini-gas turbines, wind turbines, fuel cells, various photovoltaic (PV) solutions, low-head hydro units and geothermal systems.

Deregulation of the electric utility industry (in some countries), environmental concerns associated with traditional fossil fuel generation power plants, volatility of electric energy costs, Federal and State regulatory support of "green" energy and rapid technological developments all support the proliferation of DG units in electric utility systems. The growing rate of DG deployment suggests that alternative energy-based solutions play an increasingly important role in the smart grid and modern utility.

Large-scale implementation of DG can lead to situations in which the distribution/medium voltage network evolves from a "passive" (local/limited automation, monitoring and control) system to one that actively (global/integrated, self-monitoring, semi-automated) responds to the various dynamics of the electric grid. This poses a challenge for design, operation and management of the power grid as the network no longer behaves as it once did. Consequently, the planning and operation of new systems must be approached somewhat differently with a greater amount of attention paid to global system challenges.

The principal goal of this paper is to address the topic of high penetration of distributed generation and its impact on grid design and operations.
The following sections describe a vision for the modern utility, the DG technology landscape, and DG design/engineering challenges, and highlight some of the utility DG demonstration projects.

1 Vision for modern utilities
1.1 Centralized vs. distributed
The bulk of electric power used worldwide is produced at central power plants, most of which utilize large fossil fuel combustion, hydro or nuclear reactors. A majority of these central stations have an output between 30MW (industrial plant) and 1.7GW. This makes them relatively large in terms of both physical size and facility requirements as compared with DG alternatives. In contrast, DG is:
1) Installed at various locations (closer to the load) throughout the power system and mostly operated by independent power producers or consumers.
2) Not centrally dispatched (although the development of “virtual” power plants, where many decentralized DG units operate as one single unit, may be an exception to this definition).
3) Defined by power rating in a wide range from a few kW to tens of MW (in some countries the MW limitation is defined by standards, e.g. in the US, IEEE 1547 defines DG up to 10MW – either as a single unit or aggregate capacity).
4) Connected to the distribution/medium voltage network - which generally refers to the part of the network that has an operating voltage of 600V up to 110kV (depends on the utility/country).

The main reasons why central, rather than distributed, generation still dominates current electricity production include economy of scale, fuel cost and availability, and lifetime. Increasing the size of a production unit decreases the cost per MW; however, the advantage of economy of scale is decreasing—technological advances in fuel conversion have improved the economy of small units. Fuel cost and availability is still another reason to keep building large power plants. Additionally, with a lifetime of 25~50 years, large power plants will continue to remain the prime source of electricity for many years to come.

The benefits of distributed generation include: higher efficiency; improved security of supply; improved demand-response capabilities; avoidance of overcapacity; better peak load management; reduction of grid losses; network infrastructure cost deferral; power quality support; reliability improvement; and environmental and aesthetic concerns (it offers a wide range of alternatives to traditional power system design). DG offers extraordinary value because it provides a flexible range of combinations between cost and reliability. In addition, DG may eventually become a more desirable generation asset because it is “closer” to the customer and is more economical than central station generation and its associated transmission infrastructure. The disadvantages of DG are ownership and operation, fuel delivery (machine-based DG, remote locations), cost of connection, dispatchability and controllability (wind and solar).

1.2 Development of “smart grid”
In recent years, there has been a rapidly growing interest in what is called “Smart Grid – Digitized Grid – Grid of the Future”. The main drivers behind this market trend are grid performance, technology enhancement and stakeholders’ attention.
The main vision behind this market trend is the use of enhanced power equipment/technologies, monitoring devices (sensors), digital and fully integrated communications, and embedded digital processing to make the power grid observable (able to measure the states of critical grid elements), controllable (able to affect the state of any critical grid element), automated (able to adapt and self-heal), and user-friendly (bi-directional utility–customer interaction). The Smart Grid concept should be viewed through the modern utility perspective of remaining profitable (good value to shareholders), continuing to grow revenue streams, providing superior customer service, investing in technologies, making product offerings cost effective and pain free for customers to participate, and partnering with new players in the industry to provide more value to society. It is important to recognize that there is merit in the Smart Grid concept and that it should be viewed as bringing evolutionary rather than revolutionary changes to the industry.

In general, this market trend requires a new approach to system design, re-design, and network integration and implementation needs. In addition, utilities will have to develop well-defined engineering and construction standards and operation and maintenance practices addressing high penetration levels of DG.

2 DG technology landscape
DG systems can utilize either well-established conventional power generation technologies such as low/high temperature fuel cells, diesel, combustion turbines, combined cycle turbines, low-head hydro or other rotating machines, renewable energy technologies including PV, concentrated PV (CPV), solar concentrators, thin-film, solar thermal and wind/mini-wind turbines, or technologies that are emerging on the market (e.g. tidal/wave, etc.). Each of the DG technologies has its own advantages and disadvantages which need to be taken into consideration during the selection process.

3 DR interconnection requirements
DR interconnection design and engineering details depend on the specific installation size (kW vs. MW); however, the overall components of the installation should include the following:
1) DG prime mover (or prime energy source) and its power converter.
2) Interface/step-up transformer.
3) Grounding (when needed—grounding type depends on utility specific system requirements).
4) Microprocessor protective relays for:
① Three-, single-phase fault detection and DG overload.
② Islanding and abnormal system conditions detection.
③ Voltage and current unbalances detection.
④ Undesirable reverse power detection.
⑤ Machine-based DG synchronization.
5) Disconnect switches and/or switchgear(s).
6) Metering, control and data logging equipment.
7) Communication link(s) for transfer trip and dispatch control functions (when needed).

4 Impact of DR integration and “penetration” level
Integration of DG may have an impact on system performance.
This impact can be assessed based on:
1) Size and type of DG design: power converter type, unit rating, unit impedance, protective relay functions, interface transformer, grounding, etc.
2) Type of DG prime mover: wind, PV, ICE, CT, etc.
3) Intended DG operating mode(s): load shaving, base-load CHP, power export market, Volt-Var control, etc.
4) Interaction with other DG(s) or load(s).
5) Location in the system and the characteristics of the grid such as:
① Network, auto-looped, radial, etc.
② System impedance at the connection point.
③ Voltage control equipment types, locations and settings.
④ Grounding design.
⑤ Protection equipment types, locations, and settings.
⑥ And other.

DR system impact is also dependent on the “penetration” level of the DG connected to the grid. There are a number of factors that should be considered when evaluating the penetration level of DG in the system. Examples of DG penetration level factors include:
1) DG as a percent of feeder or local interconnection point peak load (varies with location on the feeder).
2) DG as a percent of substation peak load or substation capacity.
3) DG as a percent of voltage drop capacity at the interconnection point (varies with location on the feeder).
4) DG source fault current contribution as a percent of the utility source fault current (at various locations).

4.1 DG impact on voltage regulation
Voltage regulation, and in particular the voltage rise effect, is a key factor that limits the amount (penetration level) of DG that can be connected to the system. Fig. 2 shows the first example of a network with a relatively large (MW size) DG interconnected in close proximity to the utility substation. Careful investigation of the voltage profile indicates that during heavy load conditions, with connected DG, voltage levels may drop below the levels permitted by standards. The reason for this condition is that a relatively large DG reduces the circuit current value seen by the Load Tap Changer (LTC) in the substation (DG current contribution). Since the LTC sees “less” current (representing a light load) than the actual value, it will lower the tap setting to avoid a “light load, high voltage” condition. This action makes the actual “heavy load, low voltage” condition even worse. As a general rule, if the DG contributes less than 20% of the load current, then the DG current contribution effect will be minor and can probably be ignored in most cases.

However, if the real power (P) flow direction reverses, toward the substation (Fig. 4), the line voltage regulator (VR) will operate in the reverse mode (primary control). Since the voltage at the substation is a stronger source than the voltage at the DG (and cannot be lowered by the VR), the VR will increase the number of taps on the secondary side; therefore, the voltage on the secondary side increases dramatically.

Bi-directional voltage regulators have several control modes for coordination with DG operation. Bi-directionality can be defined based on real (P) and/or reactive (Q) power flow. However, reactive power support (Q) from DG is generally prohibited by standards in many countries. Therefore, VR bi-directionality is set for co-generation modes (real current).
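As a back-of-the-envelope illustration of the voltage rise effect discussed above, the following is an editorial sketch using the standard feeder approximation; it is not a formula given in the paper, and the symbols P, Q, R, X and V are assumptions introduced here:

```latex
% Approximate voltage change at the DG point of common coupling (PCC).
% Editorial illustration; symbols are not defined in the paper:
%   P, Q : real / reactive power injected by the DG
%   R, X : resistance / reactance of the feeder section back to the source
%   V    : nominal voltage at the PCC
\Delta V \approx \frac{P R + Q X}{V},
\qquad
\frac{\Delta V}{V} \approx \frac{P R + Q X}{V^{2}}
```

For example, treating the circuit as a single-phase equivalent, a 1 MW injection at unity power factor through 0.3 Ω of feeder resistance at a nominal 7.2 kV raises the voltage by roughly 1,000,000 × 0.3 / 7200 ≈ 42 V, i.e. about 0.6%; larger injections or weaker (higher-impedance) feeders push this figure toward the limits set by standards.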
4.2 DG impact on power quality
Two aspects of power quality are usually considered to be important during evaluation of DG impact on system performance: voltage flicker conditions and harmonic distortion of the voltage. Depending on the particular circumstance, a DG can either decrease or increase the quality of the voltage received by other users of the distribution/medium voltage network. Power quality is an increasingly important issue and generation is generally subject to the same regulations as loads. The effect of increasing the grid fault current by adding generation often leads to improved power quality; however, it may also have a negative impact on other aspects of system performance (e.g. protection coordination). A notable exception is that a single large DG, or an aggregate of small DG connected to a “weak” grid, may lead to power quality problems during starting and stopping conditions or output fluctuations (both normal and abnormal). For certain types of DG, such as wind turbines or PV, current fluctuations are a routine part of operation due to varying wind or sunlight conditions. Other types of DG such as ICE or CT can also have fluctuations due to various factors (e.g. cylinders misfiring and pulsation torque - one misfiring cylinder in an 1800 rpm engine translates to a 15 Hz pulsation frequency).

Harmonics may cause interference with the operation of some equipment, including overheating/de-rating of transformers, cables and motors, leading to shorter life. In addition, they may interfere with some communication systems located in close proximity to the grid. In extreme cases they can cause resonant over-voltages, blown fuses, failed equipment, etc. DG technologies have to comply with harmonic levels pre-specified by standards.

In order to mitigate harmonic impact in the system the following can be implemented:
1) Use an interface transformer with a delta winding or ungrounded winding to minimize injection of triplen harmonics.
2) Use a grounding reactor in the neutral to minimize triplen harmonic injection.
3) Specify a rotating generator with a 2/3 winding pitch design.
4) Apply filters or use phase canceling transformers.
5) For inverters: specify PWM inverters with high switching frequency. Avoid line commutated inverters or low switching frequency PWM – otherwise more filters may be needed.
6) Place DG at locations with high ratios of utility short circuit current to DG rating.

A screening criterion to determine whether detailed studies are required (stiffness factor) to assess DG impact on power quality can be based on the ratio between the available utility system fault current at the point of DG connection and the DG’s full load rated output current.
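To make that screening criterion concrete, here is a small editorial example; the numeric values are assumed for illustration and do not come from the paper:

```latex
% Stiffness (short-circuit) ratio at the point of DG interconnection.
%   I_{sc,utility} : available utility fault current at the connection point
%   I_{DG,rated}   : DG full-load rated output current
\text{stiffness ratio} = \frac{I_{sc,\,utility}}{I_{DG,\,rated}}
```

For instance, with 8 kA of available utility fault current and a DG unit rated at 150 A, the ratio is 8000 / 150 ≈ 53. A high ratio like this (a “stiff” connection point) usually suggests that flicker and harmonic problems are unlikely, while a low ratio suggests that a detailed power quality study is warranted; the screening thresholds themselves vary by utility, so these numbers are only illustrative.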
4.3 DG impact on ferroresonance
Classic ferroresonance conditions can happen with or without interconnected DG (e.g. resonance between transformer magnetization reactance and underground cable capacitance on an open phase). However, adding DG to the system can increase the likelihood of ferroresonance under conditions such as: DG connected rated power higher than the rated power of the connected load, presence of large capacitor banks (30% to 400% of unit rating), or during DG formation on a non-grounded island.

4.4 DG impact on system protection
Some DG will contribute current to a short-circuit (fault) current on the feeder. The current contribution will raise fault levels and in some cases may change the fault current flow direction. The impact of DG fault current contributions on system protection coordination must be considered. The amount of current contribution, its duration and whether or not there are any protection coordination issues depend on:
1) Size and location of DG on the feeder.
2) Type of DG (inverter, synchronous machine, induction machine) and its impedance.
3) DG protection equipment settings (how fast it trips).
4) Impedance, protection and configuration of the feeder.
5) Type of DG grounding and interface transformer.

Machine-based DG (ICE, CT, some micro turbines and wind turbines) injects fault current levels of 4-10 times their rated current, with a time contribution between 1/3 cycle and several cycles depending on the machine. Inverters contribute about 1-2 times their rated current to faults and can trip off very quickly--many in less than 1 cycle under ideal conditions. Generally, if fault current levels are changed by less than 5% by the DG, then it is unlikely that the fault current contribution will have an impact on the existing system/equipment operation. Utilities must also consider the interrupting capability of the equipment, e.g. circuit breakers, reclosers and fuses must have sufficient capacity to interrupt the combined DG and utility source fault levels.

5 DG interconnection – utility demonstration project examples
5.1 Utility 1: ground-fault current contribution for a synchronous DG and changing transformer configuration
Ground-fault current contribution for 100 kVA, 500 kVA and 2 MVA synchronous DG is being investigated on some rural feeders in the U.S. In addition, during the investigation, the transformer configuration (Delta/Wye/Grounded Wye) on the DG side and the utility side was changed. The DG fault current contribution changes in a range from less than 1% (for the Delta/Delta transformer configuration) to approx. 30% for a 2 MVA DG with a Delta/Grounded Wye transformer configuration. Slight changes of the fault current for the non-grounded utility side are due to an increase in pre-fault voltage.

5.2 Utility 2: customer-based reliability enhancement on rural feeders – planned islanding application
The planned islanding application is being investigated on some rural feeders in Canada to improve the reliability of supply for rural communities, where the corresponding distribution substation is only supplied by a single high voltage (HV) line [4]. Customers on those feeders may experience sustained power outages for several hours a few times per year due to environmental and weather impacts that cause line break-downs and power outages. A local independent power producer (IPP) equipped with additional equipment is employed to supply the load downstream of the substation when the HV line is down or during maintenance periods of the substation. The IPP is paid an additional bonus if it can successfully serve the load during a power outage.

5.3 Utility 3: peaking generation and DG for demand reduction application
This application addresses a utility approach toward peak shaving and demand reduction which is attractive to those LDCs that purchase electricity from larger utility companies based on a particular rate structure. The cost of electricity for an LDC is normally calculated based on the energy rate (MWh), the maximum demand (MW) and surcharges due to exceeding the agreed-upon maximum demand. The peak-time cost of electricity can be as high as 10~20 times the regular rates. In this case, the LDC may install peaking units or have an agreement with specific customers in the LDC’s area that already have on-site backup generation units.
The peaking units are operated only during peak-load time for a total of 100 to 200 hours per year, based on 5~10 min dispatch commands. In return, the participating facilities are paid for the total power supplied during peak-demand periods at an agreed-upon rate that compensates for both generation/maintenance costs and plant upgrading costs in order to respond to utility dispatch commands.

5.4 Utility 4: energy storage applications for firming up intermittency of distributed renewable generation (DRG)
Medium to high penetration of renewable energy resources (RES) can cause large power fluctuations due to the variable and dynamic nature of the primary energy source, such as wind and solar photovoltaic generation. Power fluctuations may cause reverse power flow toward the main grid, especially during light load conditions of the feeder. Furthermore, due to inherent intermittent resource characteristics, the firm capacity of a large RES-based DG may be very low, and ultimately the utility grid will still be the main provider of the spinning reserve capacity and emergency backup generation in the area. Deployment of distributed energy storage units, when adequately sized and properly co-located with RES integration, has been explored by several utility companies in the U.S. to firm up power fluctuations of the high penetration of renewable energy (wind and solar) and to reduce adverse impacts on the main grid. Fig. 10 shows an application of energy storage that locally compensates the variation in power output of a large wind farm and averages out the power fluctuations. Hence, the power flow measured at the point of common coupling (PCC) can be controlled based on a pre-scheduled profile, below the permissible demand capacity of the feeder.

The controlled level of power flow at the PCC also drastically reduces the reserve capacity requirement for management of the load on this feeder, since the energy storage unit can provide back-up generation in case of a sudden reduction in wind power production; therefore, it increases the load carrying capacity of the wind farm.

6 Conclusion
A growing number of electric utilities worldwide are seeking ways to provide excellent energy services and become more customer-focused, competitive, efficient, innovative, and environmentally responsible. Distributed Generation is becoming an important element of the electric utility’s Smart Grid portfolio in the 21st century. Present barriers to widespread implementation of DG are being reduced as technologies mature and financial incentives (including government- and investor-supported funding) materialize. However, there are still technical challenges that need to be addressed and effectively overcome by utilities. Distributed Generation should become part of day-to-day planning, design and operation processes/practices.
Special consideration should be given to the following:
1) Transmission and distribution substation designs that are able to handle significant penetration of DG.
2) Equipment rating margins for fault level growth (due to added DG).
3) Protective relays and settings that can provide reliable and secure operation of the system with interconnected DG (that can handle multiple sources, reverse flow, variable fault levels, etc.).
4) Feeder voltage regulation and voltage-drop design approaches that factor in possible significant penetration of DG.
5) Service restoration practices that reduce the chance of interference from DG in the process and even take advantage of DG to enhance reliability where possible.
6) Grounding practices and means to control DG-induced ground fault overvoltages.

References
[1] KEMA Consulting. Power Quality and Utilization Guide, Section 8 - Distributed Generation and Renewables [M/OL]. Leonardo Energy, Copper Development Association. http://www.Copperinfo..
[2] Willis H L, Scott W G. Distributed Power Generation: Planning and Evaluation [M]. New York: Marcel Dekker, 2000.
[3] Wojszczyk B, Katiraei F. Distributed Energy Resources - Control, Operation, and Utility Interconnection [C] // Seminar for Various North American Utilities, 2007 & 2008.
[4] Abbey C, Katiraei F, Brothers C, et al. Integration of distributed generation and wind energy in Canada [C] // IEEE PES GM, Montreal, 2006: 7.
The Part-Time Parliament (translation of the Paxos paper)

1.2. Requirements
A modern parliament could employ a secretary to record its every action, but in Paxos no one was willing to remain in the parliamentary chamber throughout every session to act as secretary. Instead, each legislator maintained a ledger in which he recorded the sequence of decrees that had been passed, each decree bearing a number. For example, the ledger of legislator Λ (translator's note: since Greek letters are difficult to type, each Greek name in the original is represented here by a single letter)
2.1. Mathematical Results
2.2. The Preliminary Protocol
Our knowledge of the Paxon Parliament is therefore fragmentary. Although the basic protocol is known, we know nothing about many of the details, and those details are exactly what interests us, so I will venture to speculate on how the Paxons might have handled them.
Distributed System Design: Graduation Thesis Foreign Literature Translation and Original Text

locks, and so on. When working in Visio UML, however, we approach issues such as concurrency at a more abstract level, where they do not necessarily map onto programming threads. Sometimes it is enough to select the isActive checkbox on a class shape in a class diagram to mark a class that may be accessed concurrently.
Partial failure. Failures in a distributed system introduce new failure types that do not exist in a local system. For example, a network link connecting two remote objects may go down. The remote machine may be shut down, or it may crash. For an object on a remote machine
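To make the partial-failure point concrete, the sketch below is an editorial illustration, not code from the chapter; the IAccountService interface and its method are invented. It shows a client that treats a remote call as something that can fail independently of the caller and retries a limited number of times before reporting the failure:

```csharp
using System;
using System.Net.Sockets;
using System.Runtime.Remoting;

// Hypothetical remote interface; in a real system it would live in an
// assembly shared by client and server.
public interface IAccountService
{
    decimal GetBalance(string accountId);
}

public static class PartialFailureDemo
{
    // A remote call can fail even though the calling process is healthy:
    // the network link may be down, or the remote machine may have crashed.
    public static decimal GetBalanceWithRetry(IAccountService service, string accountId)
    {
        const int maxAttempts = 3;
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return service.GetBalance(accountId);   // may travel over the network
            }
            catch (Exception ex) when (ex is SocketException || ex is RemotingException)
            {
                if (attempt == maxAttempts)
                    throw;                               // give up and surface the partial failure
                Console.WriteLine("Remote call failed (" + ex.Message + "), retrying...");
            }
        }
    }
}
```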
During the proof-of-concept stage, it is a good idea to prototype the system, deploy its components on a representative network, and check whether the quality of service meets the requirements.
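A minimal way to check one such quality-of-service figure, the round-trip latency of a remote call, might look like the following editorial sketch; the measured operation and the iteration count are assumptions, not part of the chapter:

```csharp
using System;
using System.Diagnostics;

public static class LatencyProbe
{
    // Measures the average round-trip time of an operation by calling it
    // repeatedly; "call" stands for any remote invocation under test.
    public static double AverageMilliseconds(Action call, int iterations)
    {
        // Warm-up call so one-time costs (JIT, connection setup) are not counted.
        call();

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            call();
        }
        watch.Stop();
        return watch.Elapsed.TotalMilliseconds / iterations;
    }
}

// Example use (the service object and account id are placeholders):
//   double ms = LatencyProbe.AverageMilliseconds(() => service.GetBalance("12345"), 100);
//   Console.WriteLine("Average round trip: " + ms + " ms");
```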
Memory access patterns. Remote components run in different processes, and each process has its own address space. A pointer to a memory address in one process is not valid in another process's address space.
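This is why remote components exchange serialized copies of data, or talk to proxies, rather than passing raw memory addresses. The sketch below is an editorial illustration of the two marshaling choices .NET offers for crossing such a boundary; the class names are invented:

```csharp
using System;

// Marshal-by-value: instances are serialized and a COPY travels to the
// other process, so no pointer into this process's address space leaks out.
[Serializable]
public class AccountSnapshot
{
    public string AccountId;
    public decimal Balance;
}

// Marshal-by-reference: the object stays in its home process/AppDomain and
// callers in other processes receive a proxy instead of a memory address.
public class AccountService : MarshalByRefObject
{
    public AccountSnapshot GetSnapshot(string accountId)
    {
        // Returned by value: the caller receives a serialized copy.
        return new AccountSnapshot { AccountId = accountId, Balance = 100m };
    }
}
```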
In .NET, things are a little more involved because of the introduction of AppDomains and contexts. In .NET a process can be divided into one or more application domains, and each AppDomain can be divided into one or more contexts. A method call on an object in another AppDomain
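As a small editorial illustration of crossing an AppDomain boundary inside a single process on the classic .NET Framework (the type and domain names are placeholders, not from the book):

```csharp
using System;

// A type that can be called across AppDomain boundaries via a proxy.
public class Greeter : MarshalByRefObject
{
    public string Hello(string name)
    {
        return "Hello, " + name + " (from AppDomain '" +
               AppDomain.CurrentDomain.FriendlyName + "')";
    }
}

public static class AppDomainDemo
{
    public static void Main()
    {
        // Create a second application domain inside the same process.
        AppDomain worker = AppDomain.CreateDomain("Worker");

        // Instantiate Greeter in the other domain; what comes back is a proxy.
        Greeter remote = (Greeter)worker.CreateInstanceAndUnwrap(
            typeof(Greeter).Assembly.FullName,
            typeof(Greeter).FullName);

        // This call crosses the AppDomain boundary.
        Console.WriteLine(remote.Hello("world"));

        AppDomain.Unload(worker);
    }
}
```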
such a way that the HTML page is loaded in one process while the COM component implementation runs in another, then the system is distributed, regardless of which COM component is loaded.
There is more than one way to classify distributed systems. For example, we can have one class of 'locally distributed systems' (sometimes called logically distributed systems), whose components run in different processes on the same machine, and another class of 'generally distributed systems' (sometimes called physically distributed systems), whose components run in different processes on different machines.
With the help of a good infrastructure, a distributed system needs only some extra care, mainly during the additional packaging and distribution stages of its components. The infrastructure we are interested in here is, of course, the .NET Framework.
In this chapter we will use a common banking application built on the .NET Framework as an example, to demonstrate the different design decisions a designer faces when designing a distributed system. However, because activities such as writing use cases, class diagrams,
Chapter 7: Distributed System Design
Overview
Designing a distributed system is an iterative process that runs from requirements analysis through modular decomposition to packaging and deployment strategy. The process is largely the same as that for designing a non-distributed system, one whose objects all run in the same process. However, there are essential differences between a distributed system and a non-distributed one, as pointed out in the seminal paper "A Note on Distributed Computing" by Jim Waldo, Geoff Wyant, Ann Wollrath and Sam Kendall.
remote procedure call (ORPC) protocols, such as DCOM, and CORBA-based systems using IIOP/GIOP (Internet Inter-ORB Protocol / General Inter-ORB Protocol), are examples of this class. Readers interested in more detail about these protocols, their similarities and their differences, can refer to the MSDN article by Don Box, "A Young Person's Guide to the Simple Object Access Protocol", MSDN Magazine, March 2000
These differences cannot be completely hidden, no matter how the modules are wrapped up in different frameworks. The design of a distributed system must therefore take extra precautions and take these factors, which are specific to distributed systems, into account. Fortunately, most of the knowledge we gained from designing non-distributed systems carries over. UML would not be of much value if it required us to use very different toolsets for different tasks. All of the material covered in the earlier chapters of this book remains valid whether we are designing a distributed system or a non-distributed one.
used. After that we look at the banking example application. Through this example we will see how to decide which classes should be .NET Remoting types, how to decide the activation mode of each .NET Remoting type, which code elements should be grouped into a component, how to prepare a component diagram, how to build a component, and how to prepare a deployment diagram. We will also discuss the technical details of building and distributing .NET assemblies, and the mapping between .NET assemblies and UML components.
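As a hedged sketch of the kind of decision described here, which classes become .NET Remoting types and which activation mode each one gets, the fragment below registers a hypothetical AccountManager as a server-activated Singleton over TCP and shows a client obtaining a proxy. The type, port and URI are assumptions for illustration, not taken from the book's banking example:

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// The remotable type: it lives on the server, and clients talk to a proxy.
public class AccountManager : MarshalByRefObject
{
    public decimal GetBalance(string accountId)
    {
        return 100m; // placeholder implementation
    }
}

public static class Server
{
    public static void Main()
    {
        // Listen for remoting calls on TCP port 8085.
        ChannelServices.RegisterChannel(new TcpChannel(8085), false);

        // Server-activated, Singleton mode: one instance serves all clients.
        // WellKnownObjectMode.SingleCall would create a new instance per call instead.
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(AccountManager), "AccountManager.rem", WellKnownObjectMode.Singleton);

        Console.WriteLine("Server running. Press Enter to exit.");
        Console.ReadLine();
    }
}

// Client side (typically a separate executable):
//   AccountManager mgr = (AccountManager)Activator.GetObject(
//       typeof(AccountManager), "tcp://localhost:8085/AccountManager.rem");
//   Console.WriteLine(mgr.GetBalance("12345"));
```

Which activation mode fits is itself a design decision: a Singleton shares state across all callers, while SingleCall and client-activated objects trade that sharing for scalability or per-client state.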
component make up the application system. The system can be local or distributed depending on IE and the COM component. In our example, the system is distributed if the COM component lives in an EXE file (we call this kind of COM component an out-of-process component), and local if the COM component lives in a DLL file (we call this kind of COM component an in-process component). However, if IE is
and activity diagrams are the same in the initial stages regardless of what system we are designing, we do not intend to cover the entire design process from head to toe. Instead, we will put the emphasis on the parts that are relevant to distributed systems. The chapter is structured as follows: we first deal with the vocabulary issue and explain what we mean by a distributed system. Then we have some discussion of .NET's distributed infrastructure, namely .NET Remoting, which will be
spaces. In contrast, all the components of a local system run in a single process and are confined to a single address space. Therefore, a simple rule of thumb for judging whether a system is local or distributed is to ask whether a pointer to a memory address would still be valid if it were passed to another component of the system. Here, a component is a part of a system. It can be an executable, a function exposed by a dynamic link library (DLL), or an object in a
Object-Based Distributed Systems in .NET
In this section we lay the groundwork for the discussion by answering the following two questions: What do we mean by a distributed system? What are the main issues that challenge the designer of a robust and reliable distributed system?
Distributed Systems and Local Systems
So what exactly is a distributed system, and what is a local system? Simply put, a distributed system consists of components that run in different processes, possibly (but not necessarily) on different machines. In other words, it consists of components that access different address
Graduation Design (Thesis) Foreign Literature Translation
Title of literature: Distributed System Design    Source of literature:    Publication date:    School (Department):    Major:    Class:    Name:    Student No.:    Supervisor:    Translation date: 2017.02.14
Professional UML with Visual Studio .NET: Unmasking Visio for Enterprise Architects
issue: /msdnmag/issues/0300/soap/soap.asp
Another example of an ORPC protocol is SOAP (Simple Object Access Protocol), which is at the core of the Web services standards. What makes SOAP different from other ORPC protocols is its transport-neutral nature and what it uses as the format for recording message data
browser, such as an HTML page, and so on. Take an HTML page object as an example. Suppose you have an HTML page on your machine's hard drive. You open it with the IE (Microsoft Internet Explorer) browser. When opened, the HTML page loads a COM component and calls one of the methods of that component's COM objects. Together, the HTML page, the IE browser it is running in, and the COM
ORPC (Object Remote Procedure Call) Protocols
Another way to classify distributed systems is by how remote components are consumed. For example, some distributed systems make calls to remote functions look like calls to functions in the same address space; systems based on remote procedure call (RPC) belong to this class. Other distributed systems make remote objects (instances of types) look like local objects; the components that make up such a system communicate in terms of object-oriented types. Object
(This paper is available at /features/tenyears/volcd/papers/intros/15Waldo.pdf as well as on other websites.)
The main differences are latency, different memory access patterns, concurrency, and partial failure. We will explain these differences in more detail in later sections. For now, the point is that designing a distributed system differs from designing a non-distributed one,