Big Data, Cloud Computing Technology and Auditing: Translated Foreign Literature (Latest Translations)


Survey of Translated Foreign Literature on Big Data


(This document contains the English original and its Chinese translation.)

Original text: Data Mining and Data Publishing

Data mining is the extraction of interesting patterns or knowledge from huge amounts of data. The initial idea of privacy-preserving data mining (PPDM) was to extend traditional data mining techniques to work with data that has been modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. Privacy-preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task may be unknown at the time of data publishing. PPDP studies how to transform raw data into a version that is immunized against privacy attacks but still supports effective data mining tasks. Privacy preservation for both data mining (PPDM) and data publishing (PPDP) has become increasingly popular because it allows the sharing of privacy-sensitive data for analysis purposes. One well-studied approach is the k-anonymity model [1], which in turn led to other models such as confidence bounding, l-diversity, t-closeness, (α,k)-anonymity, etc. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. The aim of this paper is to present a survey of the most common attack techniques against anonymization-based PPDM and PPDP and to explain their effects on data privacy.

Although data mining is potentially useful, many data holders are reluctant to provide their data for data mining for fear of violating individual privacy. In recent years, studies have been carried out to ensure that the sensitive information of individuals cannot be identified easily.

Anonymity models and k-anonymization techniques have been the focus of intense research in the last few years. In order to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications, several extended models have been proposed, which are discussed as follows.

1. k-Anonymity

k-anonymity is one of the most classic models. It is a technique that prevents linking attacks by generalizing and/or suppressing portions of the released microdata so that no individual can be uniquely distinguished from a group of size k. A data set is k-anonymous (k ≥ 1) if each record in the data set is indistinguishable from at least (k − 1) other records within the same data set. The larger the value of k, the better the privacy is protected. k-anonymity can ensure that individuals cannot be uniquely identified by linking attacks.
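To make the k-anonymity requirement above concrete, here is a minimal sketch (not part of the original survey; the attribute names and records are illustrative) of how one might check whether a released table is k-anonymous with respect to a chosen set of quasi-identifiers:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs in at least
    k records, i.e. each record is indistinguishable from at least (k - 1) others."""
    groups = Counter(tuple(r[qi] for qi in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Illustrative generalized microdata: ZIP code and age have already been coarsened.
released = [
    {"zip": "479**", "age": "2*", "disease": "flu"},
    {"zip": "479**", "age": "2*", "disease": "cancer"},
    {"zip": "130**", "age": "3*", "disease": "flu"},
    {"zip": "130**", "age": "3*", "disease": "flu"},
]
print(is_k_anonymous(released, ["zip", "age"], k=2))  # True: both groups have size 2
```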
2. Extending Models

k-anonymity does not provide sufficient protection against attribute disclosure. The notion of l-diversity attempts to solve this problem by requiring that each equivalence class has at least l well-represented values for each sensitive attribute. l-diversity has some advantages over k-anonymity, because a k-anonymous data set still permits strong attacks when the sensitive attributes lack diversity. In this model, an equivalence class is said to have l-diversity if there are at least l well-represented values for the sensitive attribute. However, there are semantic relationships among the attribute values, and different values have very different levels of sensitivity. In the (α,k)-anonymity model, after anonymization the frequency (as a fraction) of any sensitive value within an equivalence class is no more than α.

3. Related Research Areas

Several polls show that the public has an increased sense of privacy loss. Since data mining is often a key component of information systems, homeland security systems, and monitoring and surveillance systems, this gives the wrong impression that data mining is a technique for privacy intrusion. This lack of trust has become an obstacle to the benefits of the technology. For example, the potentially beneficial data mining research project Terrorism Information Awareness (TIA) was terminated by the US Congress due to its controversial procedures for collecting, sharing, and analyzing the trails left by individuals. Motivated by the privacy concerns about data mining tools, a research area called privacy-preserving data mining (PPDM) emerged in 2000. The initial idea of PPDM was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. The solutions were often tightly coupled with the data mining algorithms under consideration. In contrast, privacy-preserving data publishing (PPDP) is not necessarily tied to a specific data mining task, and the data mining task is sometimes unknown at the time of data publishing. Furthermore, some PPDP solutions emphasize preserving data truthfulness at the record level, whereas PPDM solutions often do not preserve this property. PPDP differs from PPDM in several major ways, as follows:

1) PPDP focuses on techniques for publishing data, not techniques for data mining. In fact, it is expected that standard data mining techniques will be applied to the published data. In contrast, the data holder in PPDM needs to randomize the data in such a way that data mining results can be recovered from the randomized data. To do so, the data holder must understand the data mining tasks and algorithms involved. This level of involvement is not expected of the data holder in PPDP, who usually is not an expert in data mining.

2) Both randomization and encryption do not preserve the truthfulness of values at the record level; therefore, the released data are basically meaningless to the recipients. In such a case, the data holder in PPDM may consider releasing the data mining results rather than the scrambled data.

3) PPDP primarily "anonymizes" the data by hiding the identity of record owners, whereas PPDM seeks to directly hide the sensitive data. Excellent surveys and books on randomization and cryptographic techniques for PPDM can be found in the existing literature.

A family of research work called privacy-preserving distributed data mining (PPDDM) aims at performing some data mining task on a set of private databases owned by different parties. It follows the principle of Secure Multiparty Computation (SMC) and prohibits any data sharing other than the final data mining result. Clifton et al. present a suite of SMC operations, such as secure sum, secure set union, secure size of set intersection, and scalar product, that are useful for many data mining tasks. In contrast, PPDP does not perform the actual data mining task, but is concerned with how to publish the data so that the anonymized data are useful for data mining. We can say that PPDP protects privacy at the data level while PPDDM protects privacy at the process level. They address different privacy models and data mining scenarios.
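To give a flavor of the SMC operations listed above, the following is a minimal sketch of the classic ring-based secure sum (an illustrative simplification, not the construction of Clifton et al. verbatim): the initiating party masks its input with a random value, every other party adds its own private value to the running total, and the initiator removes the mask at the end, so no single party ever sees another party's individual input (collusion between a party's neighbors is out of scope here).

```python
import random

def secure_sum(private_values, modulus=10**9):
    """Ring-based secure sum sketch: every intermediate total is masked by the
    initiator's random value, so no party learns another party's private input."""
    mask = random.randrange(modulus)                # known only to the initiating party
    running = (mask + private_values[0]) % modulus  # initiator sends its masked value onward
    for v in private_values[1:]:                    # each remaining party adds its own value
        running = (running + v) % modulus
    return (running - mask) % modulus               # initiator removes the mask at the end

print(secure_sum([12, 7, 30]))  # 49, without any party revealing its own value
```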
In the field of statistical disclosure control (SDC), research focuses on privacy-preserving publishing methods for statistical tables. SDC focuses on three types of disclosure, namely identity disclosure, attribute disclosure, and inferential disclosure. Identity disclosure occurs if an adversary can identify a respondent from the published data. Revealing that an individual is a respondent of a data collection may or may not violate confidentiality requirements. Attribute disclosure occurs when confidential information about a respondent is revealed and can be attributed to the respondent. Attribute disclosure is the primary concern of most statistical agencies in deciding whether to publish tabular data. Inferential disclosure occurs when individual information can be inferred with high confidence from statistical information in the published data.

Some other works in SDC study the non-interactive query model, in which the data recipients can submit one query to the system. This type of non-interactive query model may not fully address the information needs of data recipients because, in some cases, it is very difficult for a data recipient to accurately construct a query for a data mining task in one shot. Consequently, there is a series of studies on the interactive query model, in which the data recipients, including adversaries, can submit a sequence of queries based on previously received query results. The database server is responsible for keeping track of all queries of each user and determining whether the currently received query violates the privacy requirement with respect to all previous queries. One limitation of any interactive privacy-preserving query system is that it can only answer a sublinear number of queries in total; otherwise, an adversary (or a group of corrupted data recipients) will be able to reconstruct all but a 1 − o(1) fraction of the original data, which is a very strong violation of privacy. When the maximum number of queries is reached, the query service must be closed to avoid privacy leaks. In the case of the non-interactive query model, the adversary can issue only one query and, therefore, the non-interactive query model cannot achieve the same degree of privacy as defined by the interactive model. One may consider privacy-preserving data publishing to be a special case of the non-interactive query model.
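The bookkeeping described above for the interactive query model can be pictured with a small sketch (a hypothetical illustration, not from the paper; class and parameter names are invented): the server remembers every query per recipient and closes the service once a fixed total budget of answered queries is exhausted.

```python
class InteractiveQueryGuard:
    """Sketch of an interactive query server's bookkeeping: record every query
    per recipient and stop answering once the total query budget is used up."""

    def __init__(self, max_total_queries):
        self.max_total_queries = max_total_queries  # kept sublinear in the data size
        self.history = {}                           # recipient -> list of past queries
        self.total_answered = 0
        self.closed = False

    def ask(self, recipient, query, answer_fn):
        if self.closed:
            raise RuntimeError("query service closed to avoid a privacy leak")
        # A real system would also check `query` against all previous queries
        # (self.history) for privacy violations before answering it.
        self.history.setdefault(recipient, []).append(query)
        self.total_answered += 1
        if self.total_answered >= self.max_total_queries:
            self.closed = True                      # budget exhausted, shut the service
        return answer_fn(query)

guard = InteractiveQueryGuard(max_total_queries=2)
print(guard.ask("alice", "AVG(age)", lambda q: 41.5))
print(guard.ask("bob", "COUNT(*)", lambda q: 1200))
# A third call would raise RuntimeError: the query budget is exhausted.
```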
This paper presents a survey of the most common attack techniques against anonymization-based PPDM and PPDP and explains their effects on data privacy. k-anonymity is used to protect respondents' identities and to mitigate linking attacks. In the case of a homogeneity attack, a simple k-anonymity model fails, and we need a concept that prevents this attack; the solution is l-diversity. All tuples are arranged in a well-represented form, so the adversary is diverted across l well-represented sensitive attribute values. l-diversity is limited in the case of a background knowledge attack, because no one can predict the knowledge level of an adversary. It is also observed that, when generalization and suppression are used, these techniques are applied even to attributes that do not need this extent of privacy, which reduces the precision of the published table.

e-NSTAM (extended Sensitive Tuples Anonymity Method) is applied to sensitive tuples only and reduces information loss, but this method also fails in the case of multiple sensitive tuples. Generalization with suppression is also a cause of data loss, because suppression emphasizes not releasing values that do not fit the k factor. Future work on this front can include defining a new privacy measure, alongside l-diversity, for multiple sensitive attributes, and focusing on generalizing attributes without suppression, using other techniques to achieve k-anonymity, since suppression reduces the precision of the published table.

Translation: Data Mining and Data Publishing — Data mining is the extraction of interesting patterns or knowledge from huge amounts of data.

Computer Science English Literature with Translation


Management Information System Overview

Management Information System is what we often call MIS: a system composed of people, computers, and other elements that can collect, transmit, store, maintain, and use information. It emphasizes management and emphasizes information, and it is becoming increasingly widespread in the modern information society. MIS is a new discipline that crosses a number of areas, such as scientific management and systems science, operations research, statistics, and computer science. On the basis of these disciplines, methods of information gathering and processing are formed, thereby weaving together a vertical and horizontal system.

In the 20th century, along with the vigorous development of the global economy, many economists proposed new management theories. In the 1950s, Simon put forward the idea that management depends on information and decision-making. In the same period Wiener published control theory, holding that management is a control process. In 1958, Gail wrote: "Management will obtain timely and accurate information at lower cost to achieve better control." During this period computers began to be used in accounting, and the term "data processing" appeared.

In 1970, Walter T. Kennevan gave a definition of the term management information system: providing, in verbal or written form, at the right time, to managers, staff, and outside personnel, information about the past, the present, and the projected future of the enterprise and its environment. This definition involved no application model and made no mention of computer applications. In 1985, the founder of management information systems, Professor Gordon B. Davis of the University of Minnesota, gave a more complete definition: "A management information system is a system that uses computer hardware and software resources, manual operations, analysis, planning, control and decision-making models, and a database. It provides information to support the operation, management, and decision-making functions of an enterprise or organization." This comprehensive definition explains the goal, functions, and composition of a management information system, and also reflects the level that management information systems had reached at the time.

With the continuous improvement of science and technology and the increasing maturity of computer science, the computer has come to accompany our study and work. Today computers are very low in price yet greatly improved in performance, and they are used in many areas. The computer became so popular mainly because of the following aspects: First, the computer can substitute for much complex labor. Second, the computer can greatly enhance people's work efficiency. Third, the computer can save a lot of resources. Fourth, the computer can make sensitive documents more secure.

The application and popularization of computers reach all fields of economic and social life, so the old management methods no longer suit social development. Many people still rely on the previous manual methods, which greatly hinders economic development. In recent years, as the scale of university enrollment has kept growing and the number of students in school has increased, educational administration has become an increasingly complex and heavy workload, consuming a great deal of manpower and material resources, while the existing level of student achievement management remains low. People have long used traditional document-based methods to manage student achievement, and this kind of management has many shortcomings, such as low efficiency and poor confidentiality; in addition, over time a large number of documents and data accumulate, which brings many difficulties to finding, updating, and maintaining the data. Such a mechanism can no longer meet the development of the times and has become more and more of a bottleneck in the day-to-day management of schools. In the information age this traditional management method will inevitably be replaced by computer-based information management.

As part of computer application, using computers to manage student performance information has advantages that manual management cannot match, for example: rapid retrieval, convenient lookup, high reliability, large storage capacity, good confidentiality, long life, and low cost. These advantages can greatly improve the efficiency of student performance management; scientific and standardized management is also an important condition for connecting with the world. Therefore, developing such a set of management software is a very necessary thing.

The design ideas are all for the sake of the user: the interface should be attractive, clear, and as simple to operate as possible; as a practical operating system it should also have good fault tolerance, so that when the user makes a mistake a timely warning is given and the user can correct it promptly.
To take full advantage of the functions of Visual FoxPro, the software should be designed to be powerful while occupying as few system resources as possible.

Visual FoxPro's command structure and working methods: Visual FoxPro was originally called FoxBASE, a database product introduced by the U.S. company Fox Software that ran on DOS and was compatible with the dBASE family. After Microsoft acquired Fox Software, the product was further developed so that it could run on Windows, and its name was changed to Visual FoxPro. Visual FoxPro is a powerful relational database rapid application development tool. With Visual FoxPro you can create desktop database applications, client/server applications, and component-based programs for Web services, and you can also use ActiveX controls or API functions to extend its capabilities.

First, working methods
1. Interactive operation
(1) Command operation: in the VF command window, operations are completed by typing commands of various kinds at the keyboard.
(2) Menu operation: VF uses menus, windows, and dialogs to provide interactive operation with graphical interface features.
(3) Assisted operation: VF provides a wide range of user-friendly tools, such as wizards, designers, and builders.
2. Program execution: in VF, a group of commands written in the programming language is saved in a .PRG program file; the file is then run, the commands in it are executed automatically, and the results are displayed.

Second, command structure
1. Command structure: VF commands are usually composed of two parts. The first part is the command verb, also known as the keyword, which designates the function of the command; the second part consists of the command clauses, which state the operation targets, operating conditions, and other information. The general form of a VF command is:
<command verb> [<command clauses>]
2. Symbols used in command formats: VF uses a unified convention of symbols in its command formats, with the following meanings: angle brackets enclose required items whose parameters must be entered according to the given format; square brackets enclose optional items whose parameters the user chooses and enters according to specific requirements.

Third, the Project Manager
1. Creation — command window: CREATE PROJECT <file name>
2. Project Manager tabs:
All — displays and manages all types of documents in the application project; the "All" tab contains the five tabs to its right in their entirety.
Data — manages the various types of data files in the application project: databases, free tables, views, and query documents.
Documents — displays forms, reports, labels, and other such documents.
Classes — displays and manages the class library documents used in the application project, including VF's system class libraries and class libraries designed by the user.
Code — manages the program code documents used in the project, such as program files (.PRG), API libraries, and the applications generated for the project (.APP).
3. The work area: the Project Manager work area is the window in which all types of documents are displayed and managed.
4. Command buttons: the buttons to the right of the Project Manager work area provide commands for operating on the documents in the work area.

Fourth, using the Project Manager
1. Command button functions:
New — with a certain type of document selected in the work area window, the New button adds a new document to the Project Manager window.
Add — independent files created with the "New" command under the VF "File" menu or the "Wizard" command under the "Tools" menu can be added to the Project Manager for unified organization and management.
Modify — modifies documents that already exist in the project, still using the design interface for that type of document.
Run — with a specific document highlighted in the work area window, runs that document.
Remove — removes the selected document from the project.
Build — builds the related documents in the project into an application or executable file.

Database system design: database design here refers to logical database design, that is, organizing the data according to a given classification system and logical hierarchy; it is user-oriented. Database design requires integrating the archival data and data needs of the various departments of the enterprise, analyzing the relationships among the various data items, and designing the database in accordance with the DBMS.

Translation: Overview of Management Information Systems — A management information system is what we often call MIS (Management Information System): a system composed of people, computers, and other elements that can collect, transmit, store, maintain, and use information; in a modern society that emphasizes management and information, it is becoming more and more widespread.

Audit Risk: Translated Foreign Literature (Latest Translation)


Source: C. E. Hogan. The Discussion of Audit Risk Control [J]. Contemporary Accounting Research, 2015, 25(1): 219.

Original text: The Discussion of Audit Risk Control

C. E. Hogan

Abstract

For any market, seeking the optimal allocation of resources is an internal requirement, and this requires complete information between market participants. In reality, however, information asymmetry inevitably exists between investors and investees, creditors and debtors, regulators and the regulated; the audit industry arose to eliminate this information asymmetry. Certified public accountants verify the financial statements and other information that enterprises disclose, and auditing is the process of bringing the information held by market participants as close as possible to complete information. Since the audit conclusion is a subjective conclusion reached by certified public accountants on the basis of sampling, it usually cannot deliver absolutely perfect information; audit risk is therefore inherent in auditing itself and cannot be evaded.

Keywords: audit risk; audit risk management; risk control

1 Introduction

The auditing profession has developed into an indispensable organic part of the market economy; auditing holds an important place in the establishment and maintenance of capital market development, and a financial market without auditing is hard to imagine. In recent years, however, cases against accounting firms and certified public accountants have erupted repeatedly, and the many lawsuits and high damages have hurt the development of the whole industry. Statistics published in 2002 in an American accounting journal show that the lawsuits brought against auditors in the United States over the past 15 years far exceeded the total number in the industry's previous 105-year history; in 2007 alone, the international accounting firms Ernst & Young, KPMG, Deloitte and PwC received compensation lawsuits in which the amount claimed exceeded $1 billion in six cases and was between $350 million and $1 billion in twelve. Strengthening research on audit risk and its management therefore not only concerns the interests and reputation of audit firms, but also relates to the construction of the economic system; it benefits the construction of the audit industry and the benign, healthy development of the auditing profession, and it helps contain or block the chain reaction caused by audit risk, so that audit resources flow in directions with economic and social benefits, promoting the reasonable allocation of social resources and social stability.

2 Literature review

In 1978, D. H. Roberts proposed the ultimate audit risk model, whose mathematical expression is: ultimate risk = inherent risk × control risk × analytical detection risk × (sampling risk + non-sampling risk). In 1981, the Auditing Standards Board (AICPA), in Statement on Auditing Standards No. 39, "Audit Sampling", put forward a new audit risk model. This theory holds that audit risk is composed of four detailed risks: inherent risk, control risk, analytical review risk, and detailed test risk; inherent risk and control risk express the risk of material error in the financial statements, while analytical review and detailed tests express the risk that material errors in the financial statements are not found. In 1983, the Auditing Standards Board (AICPA), in Statement on Auditing Standards No. 47, "Audit Risk and Materiality in Conducting an Audit" (SAS No. 47), explained the audit risk model and revised it. The revised model is: audit risk = inherent risk × control risk × detection risk. Because this model includes the main audit risk factors, shows the quantitative relationship between them, and is easy to measure, operable, and applicable, most audit organizations and the international accounting firms use it, and the independent auditing standards also adopt it. In 2004, the International Standards on Auditing revised the SAS No. 47 audit model and put forward a new audit risk model, whose abstract expression is: audit risk = risk of material misstatement × detection risk. This model merges control risk and inherent risk into a comprehensive risk, called the risk of material misstatement. The model holds that audit risk depends on the magnitude of the risk of material misstatement and of detection risk; the certified public accountant shall implement risk assessment procedures, evaluate the risk of material misstatement, and, according to the results of the assessment, further design and implement audit procedures to control detection risk, so as to reduce audit risk to an acceptable level. Some institutions and scholars have also put forward their own views on audit risk. One view proposed in 1983 is: audit risk = inherent risk × control risk × analytical detection risk × substantive test risk [6]. The Auditing Practices Committee (APC) in 1988 put forward the model: audit risk = inherent risk × control risk × sampling risk. In 1997, Alvin A. Arens and James K. Loebbecke, in their monograph Auditing: An Integrated Approach, adopted the systems-based and risk-based audit approaches: on the basis of a risk assessment of the audited units, the various factors influencing the economic activities of the audited units are comprehensively analyzed and evaluated, the scope and focus of the audit are determined according to the quantified risk level, and the substantive examination is then carried out.
3 Audit risk management and control

3.1 Audit project management and control

In the engagement stage, the auditor should first carefully choose the auditee. Industry information such as the industry's development level, its correlation with macro-economic conditions, and the type of market helps auditors make a preliminary judgment about the client's current operating situation and thus assign an initial risk level. For the client's own information, the focus should be on its management level, governance, ability to continue operating, the quality of its senior management personnel, and so on. Auditors should pay special attention to unusual moves when getting to know the client; especially in the audit of a listed company, any sign of abnormal behavior is a risk signal. A related-party relationship between the auditor and the client will affect the independence of the audit; therefore, when deciding whether to accept a new client, such relationships, which weaken the independence of certified public accountants, must be avoided. In the engagement phase, the firm can circulate the list of new clients to its professional auditors.

The implementation stage of the audit consists of two phases: the phase of compliance testing of internal control and substantive testing of business, and the phase of detailed analytical testing and balance testing. This stage is guided by the audit plan, oriented toward audit risk control, and takes obtaining audit evidence as its basic goal: first the internal control system of the audited unit and its compliance are tested, and the audit plan is revised according to the test results; then substantive testing is applied to the data of the accounting report items, which are evaluated and appraised according to the test results.

The way certified public accountants achieve the audit objective is to implement audit procedures, and the result is reflected through the audit report. The audit report reflects the client's final request and the quality with which the audit work was accomplished, and it also contains the judgment and conclusion on the audited matters. The audit report stage is therefore the last part of controlling the quality and the degree of risk of the audit project.

3.2 Audit industry risk management and control

A sound system of audit laws and regulations is the basic measure for guarding against audit risk. The audit theory system must have a tight inner logic in order to become a mature discipline and guide audit practice. The audit standards system, with revised auditing standards at its core, should pay attention to improving the application of the audit risk model and perfecting the specific procedures and methods for implementing risk-oriented auditing, such as the evaluation of the internal control system, control tests and confirmation, audit sampling methods, the expected level of audit risk used in the testing phase, inherent risk, control risk, detection risk, and the evaluation of legal responsibility and litigation risk, so as to establish for auditors a normative and principled system of technical guidance that makes their practice rule-based and law-based.

The institute of certified public accountants should give full play to its function as an industry association, further promote the improvement of industry standards, strengthen supervision, and establish credit rating, filing, peer review, and experience exchange systems. In addition, the institute of certified public accountants should promote legislation and the building of rules and regulations, and take measures to protect the lawful rights and interests of its members. On the basis of exploration in practice and the summarization of experience, standards and guidelines that audit work must comply with should be formulated as soon as possible, with clear stipulations on audit procedures, content, working papers, use of language, and so on; the supervision and constraint mechanism should be strengthened, and the relevant regulations and systems for peer review should be established and perfected.

3.3 Audit environment risk management and control

The audit environment is constantly changing: the transformation from industrial society to the information society and the knowledge economy era, the progressive realization of economic globalization, the gradual introduction of the modern enterprise system, the further improvement of corporate governance structures, the wide application of information technology in audit practice, and so on.
Also playing an important role in the audit environment are the auditor's quality and skills, society's expectations of and requirements for auditing, the development of related disciplines, and so on. The improvement and reform of the audit environment cannot be achieved by the auditing profession or the institute of certified public accountants alone; it needs the joint efforts of the whole society, for example a correct and widespread public understanding of the auditing profession to reduce the audit expectation gap, improving the standardization of capital market operations and the transparency of information disclosure, and perfecting the construction of the accounting legal system.

4 Conclusions

Auditing is an important aspect of monitoring the development of the social economy and optimizing the allocation of resources, and it is particularly important for the prosperous and stable development of the capital market. Audit risk management runs through all aspects of audit activities. Public accounting firms and certified public accountants, as the main body of audit risk management, must pay attention to strengthening audit risk management in daily audit practice; they need to improve themselves and address the causes of audit risk, so as to control audit risk more effectively.

Translation: The Discussion of Audit Risk Control

C. E. Hogan

Abstract: For any market, seeking the optimal allocation of resources is an internal requirement, and this requires complete information between market participants. In reality, however, information asymmetry inevitably exists between investors and investees, creditors and debtors, regulators and the regulated; the audit industry arose to eliminate this information asymmetry.

Chinese References with English Equivalents


In academic papers, references are a very important part: they add to a paper's credibility and academic quality, and they include both Chinese and English sources.

The following are some common Chinese references with their English counterparts:

1. Book
Chinese: 王小明. 计算机网络技术. 北京:清华大学出版社,2018.
English: Wang, X. Computer Network Technology. Beijing: Tsinghua University Press, 2018.

2. Article in an Academic Journal
Chinese: 张婷婷,李伟. 基于深度学习的影像分割方法. 计算机科学与探索,2019,13(1):61-67.
English: Zhang, T. T., Li, W. Image Segmentation Method Based on Deep Learning. Computer Science and Exploration, 2019, 13(1): 61-67.

3. Conference Paper
Chinese: 王维,李丽. 基于云计算的智慧物流管理系统设计. 2019年国际物流与采购会议论文集,2019:112-117.
English: Wang, W., Li, L. Design of a Smart Logistics Management System Based on Cloud Computing. Proceedings of the 2019 International Conference on Logistics and Procurement, 2019: 112-117.

4. Thesis/Dissertation
Chinese: 李晓华. 基于模糊神经网络的水质评价模型研究. 博士学位论文,长春:吉林大学,2018.
English: Li, X. H. Research on a Water Quality Evaluation Model Based on Fuzzy Neural Networks. Doctoral Dissertation, Changchun: Jilin University, 2018.

5. Report
Chinese: 国家统计局. 2019年国民经济和社会发展统计公报. 北京:中国统计出版社,2019.
English: National Bureau of Statistics. Statistical Communiqué of the People's Republic of China on the 2019 National Economic and Social Development. Beijing: China Statistics Press, 2019.

The above are some common Chinese references with their English counterparts; we hope they are helpful for your writing.

Cloud Computing: Translated Foreign References


(This document contains the English original and its Chinese translation.)

Original text: Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud also for processing sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional life, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30] focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet practical implementations are lacking [24], [37], [23]. Traditional computer forensics already has well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, the NIST has already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately no specific issues and solutions of cloud forensics were proposed, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results, and that it is complete and clear to understand.

Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.
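The Securing Phase and the chain of custody described above both hinge on being able to show later that the acquired evidence was not altered. As a minimal, hedged illustration (not from the paper; file paths and names are hypothetical), one common practice is to record a cryptographic hash of an acquired image together with who took custody of it and when:

```python
import datetime
import hashlib
import json

def record_custody(image_path, custodian, log_path="chain_of_custody.jsonl"):
    """Hash an acquired evidence image and append a custody record so that
    any later modification of the image can be detected."""
    sha256 = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha256.update(chunk)
    entry = {
        "image": image_path,
        "sha256": sha256.hexdigest(),
        "custodian": custodian,
        "acquired_at": datetime.datetime.utcnow().isoformat() + "Z",
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage after a bitwise copy or an exported VM snapshot:
# record_custody("/evidence/vm-snapshot.img", custodian="J. Doe")
```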
B. Cloud Computing

According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure as a Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of the software development process this service model can be a driving force. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence. Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.

C. Fault Model

Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:

1) Maliciously Intended Faults

Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and constitute a constant threat to customers and CSP. In this model, a malicious CSP is also included, albeit it is assumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.

2) Unintentional Faults

Inconsistencies in technical systems or processes in the cloud do not have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.

IV. TECHNICAL ISSUES

Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another, e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environments, as well as suggest several solutions to these problems.

A. Sources and Nature of Evidence

Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.

1) Virtual Cloud Instance: The VM within the cloud, where, e.g., data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.

2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several pieces of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.

3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.

a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: in ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been the victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].

B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage.
B. Investigations in XaaS Environments

Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation: they no longer have the option of seizing physical data storage. Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is questionable whether the established findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the problems mentioned.

1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as the network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see Section IV-A3). In many cases this forces the investigator to rely on high-level logs which may eventually be provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offerings have no chance to analyze potential incidents.

a) Data Provenance: The notion of digital provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Although data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: current globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have a tremendous impact: if sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g. due to storage reasons, and the customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10].

Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
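The paper does not define such an interface, so the following sketch assumes a purely hypothetical REST endpoint, token and JSON layout; it only illustrates how a customer-side collector could periodically pull such logs and fingerprint them for later evidential use.

    # Sketch of a customer-side collector pulling access logs from a
    # hypothetical CSP forensics API. Endpoint, token and response layout are
    # invented for illustration; real providers expose different interfaces.
    import hashlib
    import json
    import requests

    API = "https://csp.example.com/v1/forensics/logs"    # hypothetical endpoint
    TOKEN = "customer-api-token"                          # hypothetical credential

    resp = requests.get(
        API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"type": "access", "since": "2016-01-01T00:00:00Z"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()

    # Store the raw response together with its hash so it can later serve as
    # evidence whose integrity can be checked.
    raw = json.dumps(records, sort_keys=True).encode()
    digest = hashlib.sha256(raw).hexdigest()
    with open(f"/evidence/access-logs-{digest[:12]}.json", "wb") as f:
        f.write(raw)
    print(f"stored response, sha256={digest}")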
Furthermore, due to the limited ability to receive forensic information from the server and to prove the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.

2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer theoretically obtains the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim that this transfer is encrypted, but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries who can compromise the application during runtime should not be able to alter these log files afterwards.

Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].

3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial not only for recovering from an incident. Forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This is because the customer can install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data.
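Picking up the sign-then-encrypt forwarding suggested for the PaaS and IaaS settings above, the following minimal sketch signs each log record with a customer-held HMAC key and encrypts it before it leaves the instance. Key names and the record layout are placeholders; key distribution, secure key storage and the actual push transport to the logging server are deliberately left out.

    # Sketch: sign each log record, then encrypt it before forwarding to a
    # logging server controlled by the customer. Keys are placeholders; a real
    # deployment would have to solve key distribution and secure storage.
    import hashlib
    import hmac
    import json
    import time
    from cryptography.fernet import Fernet   # pip install cryptography

    SIGNING_KEY = b"customer-held-signing-key"    # hypothetical
    ENCRYPTION_KEY = Fernet.generate_key()        # would normally be pre-shared
    fernet = Fernet(ENCRYPTION_KEY)

    def protect(record: dict) -> bytes:
        payload = json.dumps(record, sort_keys=True).encode()
        tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        envelope = json.dumps({"payload": payload.decode(), "hmac": tag}).encode()
        # Encryption keeps the record opaque to eavesdroppers in transit.
        return fernet.encrypt(envelope)

    token = protect({"ts": time.time(), "event": "admin_login", "source": "10.0.0.5"})
    # token would now be pushed (push-only) to the central logging server,
    # where it is decrypted and the HMAC is verified.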
Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. The CSP controls the hypervisor, which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how the customer's VMs communicate with the hardware and can theoretically intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore lead to the leakage of the secret key. Although this risk can be disregarded in most cases, the impact on high-security environments is tremendous.

a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation changed completely with the advent of snapshot technology, which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned with one click, including the running system's memory. Thanks to snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which downtime of a system is not feasible or practical due to existing SLA. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances are becoming more common, providing evidence data that...
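To connect the snapshot discussion to off-host analysis, a snapshot's exported memory can already be triaged with very simple means before a full memory forensics framework is brought in. The sketch below sweeps the exported image for known indicators of compromise; the image path and the indicator list are assumptions introduced only for illustration.

    # Sketch: a naive keyword sweep over an exported snapshot memory image.
    # Real analysis would use a memory forensics framework, but even a simple
    # scan for known indicators can support triage. Path and indicators are
    # illustrative assumptions.
    import mmap

    MEMORY_IMAGE = "/evidence/customer-vm.mem"
    INDICATORS = [b"evil-c2.example.net", b"/tmp/.hidden_miner"]

    with open(MEMORY_IMAGE, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
            for ioc in INDICATORS:
                count = 0
                pos = mem.find(ioc)
                while pos != -1:
                    count += 1
                    pos = mem.find(ioc, pos + 1)
                print(ioc.decode(), "occurrences:", count)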

Information and Computing Science: Chinese-English Foreign Literature Translation

Chinese-English Foreign Literature Translation (the document contains the English original and the Chinese translation)

[Abstract] Under the network environment, the joint construction and sharing of library information resources means that libraries of all levels and types, based on users' demand for social information, use advanced information technologies such as computers, telecommunications, electronics and multimedia over the network to carry out cooperative development and use of their various collection resources and network resources in a highly integrated way. The rapid development of the market economy, the continuous renewal of networking and the arrival of the information age have determined that the future trend of library development will be the joint construction and sharing of information resources, a point on which society has already reached consensus. This is because the joint construction and sharing of information resources is an important way for libraries to resolve the contradiction between the explosion of knowledge and information and the insufficiency of any single collection.

[Key Words] network; library; information resources; joint construction and sharing

1. The joint construction and sharing of information resources is the path that future libraries must take to develop and use information resources. The rapid development of the market economy, the continuous renewal of networking and the arrival of the information age have determined that the future trend of library development will be the joint construction and sharing of information resources, a point on which society has already reached consensus. This is because...

Big Data Technology: Professional English

Big Data Technology is a multidisciplinary field that encompasses various aspects of data collection, storage, processing, analysis, and visualization. In this specialized field, professionals utilize advanced tools and techniques to handle vast amounts of data generated from diverse sources such as social media, sensors, mobile devices, and enterprise systems. They employ technologies like Hadoop, Spark, NoSQL databases, and machine learning algorithms to extract valuable insights, identify patterns, and make data-driven decisions. Proficiency in programming languages like Python, R, Java, and Scala is crucial for implementing algorithms and building scalable data processing systems. Additionally, expertise in data warehousing, data modeling, and data governance is essential for ensuring the quality and integrity of data throughout its lifecycle. Moreover, strong analytical skills and domain knowledge are indispensable for interpreting results and deriving actionable recommendations from complex datasets. As the volume and complexity of data continue to grow exponentially, the demand for skilled professionals in Big Data Technology is expected to rise, offering lucrative career opportunities in various industries such as finance, healthcare, retail, and telecommunications.
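Since the passage mentions Spark and Python by name, a minimal PySpark sketch may help make the vocabulary concrete; the input file and column names are invented for illustration and do not come from the passage.

    # Minimal PySpark sketch: load a CSV of retail transactions and compute
    # revenue per region. File path and column names are invented examples.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("revenue-by-region").getOrCreate()

    df = spark.read.csv("transactions.csv", header=True, inferSchema=True)
    result = (
        df.groupBy("region")
          .agg(F.sum("amount").alias("total_revenue"),
               F.count("*").alias("transactions"))
          .orderBy(F.desc("total_revenue"))
    )
    result.show()
    spark.stop()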

Cloud Computing: Foreign Literature Translation

Graduation project report: English literature and Chinese translation. School of Computer and Control Engineering, June 2017. English literature: Cloud Computing

Cloud Computing at a Higher Level

In many ways, cloud computing is simply a metaphor for the Internet, the increasing movement of compute and data resources onto the Web. But there is a difference: cloud computing represents a new tipping point for the value of network computing. It delivers higher efficiency, massive scalability, and faster, easier software development. It is about new programming models, new IT infrastructure, and the enabling of new business models.

For those developers and enterprises who want to embrace cloud computing, Sun is developing critical technologies to deliver enterprise scale and systemic qualities to this new paradigm: (1) Interoperability: while most current clouds offer closed platforms and vendor lock-in, developers clamor for interoperability.


Graduation project attachment, foreign literature translation: original text and translation. Source: Chaudhuri S. Big data, cloud computing technology and the audit [J]. IT Professional Magazine, 2016, 2(4): 38-51.

Original text

Big data, cloud computing technology and the audit

Chaudhuri S

Abstract

At present, big data, along with the development of cloud computing technology, is having a significant impact on global economic and social life. Big data and cloud computing technology provide modern auditing with new techniques and methods; audit organizations and audit personnel should grasp the content and characteristics of big data and cloud computing technology in order to promote the further development of modern audit technology and methods.

Keywords: big data, cloud computing technology, audit, advice

1 Related concepts

1.1 Big data

The word "data" means "known" in Latin and can also be interpreted as "fact". In 2009, the concept of "big data" gradually began to spread in society. The concept truly became popular when the Obama administration announced its high-profile big data research and development plan in 2012, which marks the point at which the era of big data really began to enter social and economic life. "Big data", or "huge amounts of data", refers to an amount of data so large that current mainstream software tools cannot collect, analyze and process it, or convert it into information useful to decision-makers, within a reasonable period of time. The International Data Corporation (IDC) describes "big data" as a new generation of architectures and technologies designed to derive value more economically and efficiently from high-frequency, high-volume data of different structures and types, and uses the term to describe the huge amounts of data produced in the age of the information explosion and to name the related technological developments and innovations. Big data has four characteristics: first, the data volume is huge, jumping from the TB level to the PB level; second, processing speed is high, fundamentally different from traditional data mining technology; third, data types are numerous, including pictures, location information, video, web logs and other forms; fourth, the value density is low while the commercial value is high.

1.2 Cloud computing

The concept of "cloud computing" was created by large Internet companies such as Google and IBM while handling huge amounts of data in practice. On August 9, 2006, Google CEO Eric Schmidt put forward the concept of "cloud computing" for the first time at a search engine industry conference. In October 2007, Google and IBM began to promote a cloud computing technology plan on university campuses in the United States; the project hoped to reduce the cost of distributed computing technology in academic research and to provide the related hardware, software and technical support to these universities (Michael Miller, 2009). There are many definitions of "cloud computing" around the world. "Cloud computing" is an Internet-based model for the provision, use and delivery of related services, in which dynamically scalable and often virtualized resources are provided over the Internet.
The American National Institute of Standards and Technology (NIST) defined cloud computing in 2009 as a pay-by-usage model that provides available, convenient, on-demand network access to a shared pool of configurable computing resources (including networks, servers, storage, applications, services, etc.); these resources can be provisioned quickly, with very little management effort or interaction with the service provider.

1.3 The relationship between big data and cloud computing

Overall, big data and cloud computing are complementary. Big data mainly focuses on the actual business and on the "data": it provides the technology and methods for data collection, mining and analysis and emphasizes data storage capacity. Cloud computing focuses on the "computing": it pays attention to IT infrastructure, provides IT solutions and emphasizes computing capacity, that is, data processing capability. If there were no big-data stores of data, then however strong the computing capacity of the cloud, it would be hard to find a place to apply it; if there were no cloud computing capability to process the data, then however rich the stored data, it could not in the end be used in practice. From a technical point of view, big data relies on cloud computing: massive data storage technology, massive data management technology and the MapReduce programming model are key technologies of cloud computing and are also the foundations of big data technology. For data to become "big", what matters most is the technology provided by the cloud computing platform. Once data is on the "cloud", the former segmentation of data storage is broken, data becomes easier to collect and obtain, and big data can be presented to people. In terms of focus, the emphases of big data and cloud computing differ. The emphasis of big data is on all sorts of data and on broad, deep mining of huge amounts of data to discover the value in the data, forcing companies to shift from being "business-driven" to being "data-driven". The cloud mainly provides, through the Internet, widely available and scalable computing and storage resources and capabilities; its emphasis is on IT resources, processing capacity and a variety of applications, helping enterprises save IT deployment costs. Cloud computing benefits the IT department of an enterprise, while big data benefits the enterprise's business management departments.

2 Analysis of the influence of big data and cloud computing technology on the audit

2.1 Big data and cloud computing technology promote the development of the continuous audit mode

In a traditional audit, the auditor audits only after the audited business has been completed, and the audit process does not cover all data and information but only a part of it. Such after-the-event and limited auditing makes it difficult to evaluate the audited entity's complex production, operation and management systems correctly and in time, and it is too slow for evaluating the authenticity and legitimacy of increasingly frequent and complex operation and management activities. With the rapid development of information technology, more and more audit organizations have begun to implement continuous auditing to solve the problem of the time lag between audit results and economic activity. However, auditors are often limited by current business conditions and information technology means: unstructured data cannot be digitized, or the relevant detailed data cannot be obtained, so doubts and judgments cannot be pursued further or more deeply.
Big data and cloud computing technology can promote the development of the continuous audit mode and allow information technology, big data and cloud computing to be combined to better effect, especially for specific industries with higher "real-time" demands on business data and risk control, such as banking, securities and insurance; in these industries continuous auditing is imminent.

2.2 Big data and cloud computing technology promote the application of the overall audit mode

The current audit mode implements sampling audits based on the evaluation of audit risk. Since it is impossible to collect and analyze all of the audited entity's economic and business data, the current audit mode mainly depends on audit sampling, inferring the whole from a part: samples are extracted from the matter under audit, and the overall situation of the audit object is then deduced from them. In the sampling audit mode, because the sample drawn is limited and many specific business activities are ignored, auditors may fail to find and reveal major fraud of the audited entity, hiding significant audit risks. For the auditor, big data and cloud computing technology are not only available technical means; they also make an overall audit mode feasible. Using big data and cloud computing technology to collect and analyze data across industries and across enterprises, the random sampling method is no longer needed, and an overall audit mode that collects and analyzes all the data can be used. The overall audit mode based on big data and cloud computing technology analyzes all the data related to the audit object, allows the auditor to establish an overall way of thinking about the audit, and can bring revolutionary change to modern auditing. Auditors implementing the overall audit mode can avoid audit sampling risk, as illustrated in the sketch at the end of this subsection. If all the data can be gathered, more subtle and in-depth information can be seen; deep analysis of the data from multiple perspectives can discover details hidden in the data that are of value to audit problems. At the same time, an auditor implementing the overall audit mode can find problems that could not be found under the audit sampling mode.
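To illustrate the difference between sampling and full-population analysis described above, the following sketch applies the same simple risk rule twice, once to a random sample and once to the whole ledger. The file name, column names and approval threshold are invented for illustration; the point is only that screening every record leaves no sampling risk behind.

    # Sketch: the same risk rule applied to a sample versus the full population.
    # Ledger file, column names and the approval threshold are invented examples.
    import pandas as pd

    ledger = pd.read_csv("ledger.csv")   # assumed columns: voucher_id, amount, approver

    def risky(df: pd.DataFrame) -> pd.DataFrame:
        # Flag payments at or above the approval threshold and duplicated vouchers.
        over_limit = df["amount"] >= 50000
        duplicated = df.duplicated(subset=["voucher_id"], keep=False)
        return df[over_limit | duplicated]

    # Traditional approach: audit a random sample and extrapolate.
    sample_findings = risky(ledger.sample(n=200, random_state=1))

    # Overall audit mode: screen every record, so no sampling risk remains.
    population_findings = risky(ledger)

    print(len(sample_findings), "findings in the sample")
    print(len(population_findings), "findings in the full population")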
2.3 Big data and cloud computing technology promote the integrated application of audit results

At present, the auditor's audit results are mainly provided to the audited entity in the audit report, whose format is fixed, whose content is limited and which contains little information. As big data and cloud computing technology are widely used in auditing, besides the audit report, the large amounts of information and data collected, mined, analyzed and processed during the audit can also be provided to the audited entity to improve management, promoting the integrated application of audit results and improving their comprehensive application effect. First, the auditor summarizes and generalizes the large amounts of data and related information obtained in the audit, finds the inner rules, common problems and development trends in finance, business, operation and management, and through this summary produces macroscopic and comprehensive audit information which provides the audited entity's investors and other stakeholders with supporting data, correlation analysis and decision-making suggestions, thus helping to improve the audited entity's management level. Second, by using big data and cloud computing technology, auditors can analyze and process the same problem in different categories, integrating and refining it from different angles and at different levels to satisfy needs at different levels. Third, the auditor can retain audit results intelligently: with big data and cloud computing technology, recurring problems can be encoded and fixed in the system in order to calculate or determine their development trend and give early warning to the auditees.

3 Big data and cloud computing technology promote the application of correlation in audit evidence

Auditors should base their audit opinion on sufficient and appropriate audit evidence and issue the audit report accordingly. However, in the big data and cloud computing environment, auditors face both the screening and testing of a huge amount of data and the challenge of collecting appropriate audit evidence. When collecting audit evidence, the traditional line of thinking is to collect evidence based on causal relationships; big data analysis, in contrast, makes much more use of correlation analysis to gather and discover audit evidence. From the perspective of discovering audit evidence, big data technology provides unprecedented interdisciplinary, quantitative dimensions and makes a great deal of relevant information available for audit recording and analysis. Big data and cloud computing technology have not changed the causal relationships between things, but the development and use of correlation within big data and cloud computing reduce the dependence of data analysis on causal logic and favor analysis based on the correlation of data; validating data on the basis of correlation analysis is one of the important characteristics of big data and cloud computing technology. In the big data and cloud computing environment, the audit evidence the auditor can collect is mostly electronic evidence. Electronic evidence is itself very complex, and cloud computing technology makes it even more difficult to obtain causal evidence. Auditors should therefore shift from long-term dependence on causality to making use of correlation when collecting and discovering audit evidence.
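As a small, hedged illustration of correlation-driven evidence screening, the sketch below ranks which operational indicators move most closely with reported revenue. The data file and column names are invented; correlation only points the auditor at relationships worth examining and does not establish any causal claim.

    # Sketch: rank operational indicators by their correlation with reported
    # revenue. Data file and column names are invented; correlation points at
    # relationships worth examining, it does not establish causation.
    import pandas as pd

    monthly = pd.read_csv("monthly_indicators.csv")
    indicators = ["revenue", "shipments", "electricity_kwh", "payroll", "logistics_cost"]

    correlations = (
        monthly[indicators].corr()["revenue"]
        .drop("revenue")
        .sort_values(key=lambda s: s.abs(), ascending=False)
    )
    print(correlations)

    # An auditor might follow up on indicators whose correlation with revenue
    # is unexpectedly weak or strong compared with industry experience.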
