Data Mining Techniques: Foreign Literature with Chinese-English Translations


Introduction to Data Mining (English Edition)

Data Mining Introduction

Data mining is the process of extracting valuable insights and patterns from large datasets. It involves the application of various techniques and algorithms to uncover hidden relationships, trends, and anomalies that can be used to inform decision-making and drive business success. In today's data-driven world, the ability to effectively harness the power of data has become a critical competitive advantage for organizations across a wide range of industries.

One of the key strengths of data mining is its versatility. It can be applied to a wide range of domains, from marketing and finance to healthcare and scientific research. In the marketing realm, for example, data mining can be used to analyze customer behavior, identify target segments, and develop personalized marketing strategies. In the financial sector, data mining can be leveraged to detect fraud, assess credit risk, and optimize investment portfolios.

At the heart of data mining lies a diverse set of techniques and algorithms. These include supervised learning methods, such as regression and classification, which can be used to predict outcomes based on known patterns in the data. Unsupervised learning techniques, such as clustering and association rule mining, can be employed to uncover hidden structures and relationships within datasets. Additionally, advanced algorithms like neural networks and decision trees have proven to be highly effective in tackling complex, non-linear problems.

The process of data mining typically involves several key steps, each of which plays a crucial role in extracting meaningful insights from the data. The first step is data preparation, which involves cleaning, transforming, and integrating the raw data into a format that can be effectively analyzed. This step is particularly important, as the quality and accuracy of the input data can significantly impact the reliability of the final results.

Once the data is prepared, the next step is to select the appropriate data mining techniques and algorithms to apply. This requires a deep understanding of the problem at hand, as well as the strengths and limitations of the available tools. Depending on the specific goals of the analysis, the data mining practitioner may choose to employ a combination of techniques, each of which can provide unique insights and perspectives.

The next phase is the actual data mining process, where the selected algorithms are applied to the prepared data. This can involve complex mathematical and statistical calculations, as well as the use of specialized software and computing resources. The results of this process may include the identification of patterns, trends, and relationships within the data, as well as the development of predictive models and other data-driven insights.

Once the data mining process is complete, the final step is to interpret and communicate the findings. This involves translating the technical results into actionable insights that can be easily understood by stakeholders, such as business leaders, policymakers, or scientific researchers. Effective communication of data mining results is crucial, as it enables decision-makers to make informed choices and take appropriate actions based on the insights gained.

One of the most exciting aspects of data mining is its continuous evolution and the emergence of new techniques and technologies.
As the volume and complexity of data continue to grow, the need for more sophisticated and powerful data mining tools and algorithms has become increasingly pressing. Advances in areas such as machine learning, deep learning, and big data processing have opened up new frontiers in data mining, enabling practitioners to tackle increasingly complex problems and extract even more valuable insights from the data.

In conclusion, data mining is a powerful and versatile tool that has the potential to transform the way we approach a wide range of challenges and opportunities. By leveraging the power of data and the latest analytical techniques, organizations can gain a deeper understanding of their operations, customers, and markets, and make more informed, data-driven decisions that drive sustainable growth and success. As the field of data mining continues to evolve, it is clear that it will play an increasingly crucial role in shaping the future of business, science, and society as a whole.
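The supervised and unsupervised techniques mentioned in this introduction can be made concrete with a short, hedged sketch. The example below uses scikit-learn and a built-in toy dataset; the library and dataset are illustrative assumptions, not something the article itself prescribes.

```python
# A minimal sketch of the supervised vs. unsupervised techniques described
# above, using scikit-learn on a toy dataset. Library choice and dataset are
# illustrative assumptions, not part of the original article.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: classification predicts a known target from labeled data.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: clustering groups records without using the labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == c).sum()) for c in range(3)])
```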

Data Mining: Data sets from the book "The Analysis of Time Series"

Data sets from the book "The Analysis of Time Series"

Abstract: This file is a text file giving details about the time series analysed in 'The Analysis of Time Series' by Chris Chatfield. The 5th edn was published in 1996 and the 6th edn in 2003.

Chinese keywords: data mining, time series, analysis, Chris Chatfield
English keywords: Data mining, Time series, Analysis, Chris Chatfield
Data format: TEXT
Data usage: The data can be used for data mining and analysis.

Data details:

Abstract
This file is a text file giving details about the time series analysed in 'The Analysis of Time Series' by Chris Chatfield. The 5th edn was published in 1996 and the 6th edn in 2003.

Data Description
An individual series can readily be abstracted from this file.

Figure 1.1 - the Beveridge wheat price index series is available in many places, such as /depts/maths/data/ts/
Figure 1.2 - Average air temperature (deg C) in Recife in successive months for 1953-1962.
Figure 1.3 - The Chatfield-Prothero data for Company X. Monthly sales for January 1965 to November 1971. From Series A, 1973, p. 295-. Includes the last 6 observations from the follow-up paper replying to the Box and Jenkins paper - see p. 251 of Series A 1973. Example 5.1 in 6th edn.
Values are coded - hence no unit of measurement. Time period from Aug 1988 to July 1992 - 4 years.
Figure 11.1 - the sunspots data is available in many places but is repeated here for convenience. Monthly sunspot numbers from 1749 to 1983.
Table D.2 in 5th edn and 14.2 in 6th edn. Yield (%) on British short term government securities in successive months from about 1950 to about 1971.
Figure D.2 in 5th edn or Fig 14.2 in 6th edn. Monthly totals of international airline passengers (in thousands) for 1949-1960. This data is readily available elsewhere but is repeated here for convenience.

Reference
'The Analysis of Time Series' by Chris Chatfield. The 5th edn was published in 1996 and the 6th edn in 2003.
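Since the file described above is plain text with one series per block, a short, hedged sketch of how one of these series might be pulled into Python for analysis is shown below. The file name and column layout are assumptions made purely for illustration; the actual layout of Chatfield's file is not specified here.

```python
# Illustrative sketch only: the exact layout of Chatfield's data file is not
# specified here, so the file name and parsing assumptions are hypothetical.
import pandas as pd

# Suppose the airline-passenger series (monthly totals, 1949-1960) has been
# extracted into a two-column CSV: month, passengers_thousands.
series = pd.read_csv(
    "airline_passengers.csv",        # hypothetical file name
    parse_dates=["month"],
    index_col="month",
).squeeze("columns")

# Basic exploration typical of a time-series data mining task.
print(series.describe())
print(series.groupby(series.index.year).mean())  # yearly averages
print(series.rolling(12).mean().tail())          # 12-month moving average
```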

Big Data Foreign Literature Translation and Reference Review

(The document contains the English original and its Chinese translation.)

Original text: Data Mining and Data Publishing

Data mining is the extraction of a vast number of interesting patterns or knowledge from huge amounts of data. The initial idea of privacy-preserving data mining (PPDM) was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. Privacy-preserving data mining considers the problem of running data mining algorithms on confidential data that is not supposed to be revealed even to the party running the algorithm. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task may be unknown at the time of data publishing. PPDP studies how to transform raw data into a version that is immunized against privacy attacks but that still supports effective data mining tasks. Privacy preservation for both data mining (PPDM) and data publishing (PPDP) has become increasingly popular because it allows sharing of privacy-sensitive data for analysis purposes. One well-studied approach is the k-anonymity model [1], which in turn led to other models such as confidence bounding, l-diversity, t-closeness, (α,k)-anonymity, etc. In particular, all known mechanisms try to minimize information loss, and such an attempt provides a loophole for attacks. The aim of this paper is to present a survey of most of the common attack techniques for anonymization-based PPDM and PPDP and explain their effects on data privacy.

Although data mining is potentially useful, many data holders are reluctant to provide their data for data mining for fear of violating individual privacy. In recent years, studies have been made to ensure that the sensitive information of individuals cannot be identified easily.

Anonymity models and k-anonymization techniques have been the focus of intense research in the last few years. In order to ensure anonymization of data while at the same time minimizing the information loss resulting from data modifications, several extending models have been proposed, which are discussed as follows.

1. k-Anonymity

k-anonymity is one of the most classic models; it prevents joining attacks by generalizing and/or suppressing portions of the released microdata so that no individual can be uniquely distinguished from a group of size k. In k-anonymous tables, a data set is k-anonymous (k ≥ 1) if each record in the data set is indistinguishable from at least (k − 1) other records within the same data set. The larger the value of k, the better the privacy is protected. k-anonymity can ensure that individuals cannot be uniquely identified by linking attacks.

2. Extending Models

Since k-anonymity does not provide sufficient protection against attribute disclosure, the notion of l-diversity attempts to solve this problem by requiring that each equivalence class has at least l well-represented values for each sensitive attribute. l-diversity has some advantages over k-anonymity, because a k-anonymous dataset permits strong attacks due to lack of diversity in the sensitive attributes. In this model, an equivalence class is said to have l-diversity if there are at least l well-represented values for the sensitive attribute. This matters because there are semantic relationships among the attribute values, and different values have very different levels of sensitivity.
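As a concrete, hedged illustration of the two definitions above, the sketch below checks k-anonymity and (distinct) l-diversity of a toy microdata table with pandas. The column names, the choice of quasi-identifiers, and the use of the simple "distinct values" form of l-diversity are assumptions made purely for the example.

```python
# Hedged sketch: verify k-anonymity and distinct l-diversity of a released table.
# Quasi-identifier and sensitive-attribute column names are illustrative only.
import pandas as pd

released = pd.DataFrame({
    "age_range":  ["20-30", "20-30", "20-30", "30-40", "30-40", "30-40"],
    "zip_prefix": ["476**", "476**", "476**", "479**", "479**", "479**"],
    "disease":    ["flu", "flu", "cancer", "flu", "asthma", "cancer"],
})
quasi_identifiers = ["age_range", "zip_prefix"]
sensitive = "disease"

groups = released.groupby(quasi_identifiers)

# k-anonymity: every equivalence class must contain at least k records.
k = groups.size().min()
print(f"table is {k}-anonymous")

# Distinct l-diversity: every equivalence class must contain at least l
# distinct values of the sensitive attribute.
l = groups[sensitive].nunique().min()
print(f"table satisfies distinct {l}-diversity")
```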
After anonymization, in any equivalence class the frequency (as a fraction) of a sensitive value is no more than α (this α bound is the idea behind the (α,k)-anonymity model mentioned earlier).

3. Related Research Areas

Several polls show that the public has an increased sense of privacy loss. Since data mining is often a key component of information systems, homeland security systems, and monitoring and surveillance systems, it gives a wrong impression that data mining is a technique for privacy intrusion. This lack of trust has become an obstacle to the benefit of the technology. For example, the potentially beneficial data mining research project Terrorism Information Awareness (TIA) was terminated by the US Congress due to its controversial procedures of collecting, sharing, and analyzing the trails left by individuals. Motivated by the privacy concerns around data mining tools, a research area called privacy-preserving data mining (PPDM) emerged in 2000. The initial idea of PPDM was to extend traditional data mining techniques to work with data modified to mask sensitive information. The key issues were how to modify the data and how to recover the data mining result from the modified data. The solutions were often tightly coupled with the data mining algorithms under consideration. In contrast, privacy-preserving data publishing (PPDP) may not necessarily be tied to a specific data mining task, and the data mining task is sometimes unknown at the time of data publishing. Furthermore, some PPDP solutions emphasize preserving the data truthfulness at the record level, but PPDM solutions often do not preserve such a property. PPDP differs from PPDM in several major ways, as follows:

1) PPDP focuses on techniques for publishing data, not techniques for data mining. In fact, it is expected that standard data mining techniques are applied on the published data. In contrast, the data holder in PPDM needs to randomize the data in such a way that data mining results can be recovered from the randomized data. To do so, the data holder must understand the data mining tasks and algorithms involved. This level of involvement is not expected of the data holder in PPDP, who usually is not an expert in data mining.

2) Both randomization and encryption do not preserve the truthfulness of values at the record level; therefore, the released data are basically meaningless to the recipients. In such a case, the data holder in PPDM may consider releasing the data mining results rather than the scrambled data.

3) PPDP primarily "anonymizes" the data by hiding the identity of record owners, whereas PPDM seeks to directly hide the sensitive data. Excellent surveys and books on randomization and cryptographic techniques for PPDM can be found in the existing literature.

A family of research work called privacy-preserving distributed data mining (PPDDM) aims at performing some data mining task on a set of private databases owned by different parties. It follows the principle of Secure Multiparty Computation (SMC), and prohibits any data sharing other than the final data mining result. Clifton et al. present a suite of SMC operations, like secure sum, secure set union, secure size of set intersection, and scalar product, that are useful for many data mining tasks. In contrast, PPDP does not perform the actual data mining task, but is concerned with how to publish the data so that the anonymized data are useful for data mining. We can say that PPDP protects privacy at the data level while PPDDM protects privacy at the process level. They address different privacy models and data mining scenarios.
In the field of statistical disclosure control (SDC), the research works focus on privacy-preserving publishing methods for statistical tables. SDC focuses on three types of disclosures, namely identity disclosure, attribute disclosure, and inferential disclosure. Identity disclosure occurs if an adversary can identify a respondent from the published data. Revealing that an individual is a respondent of a data collection may or may not violate confidentiality requirements. Attribute disclosure occurs when confidential information about a respondent is revealed and can be attributed to the respondent. Attribute disclosure is the primary concern of most statistical agencies in deciding whether to publish tabular data. Inferential disclosure occurs when individual information can be inferred with high confidence from statistical information in the published data.

Some other works in SDC focus on the study of the non-interactive query model, in which the data recipients can submit one query to the system. This type of non-interactive query model may not fully address the information needs of data recipients because, in some cases, it is very difficult for a data recipient to accurately construct a query for a data mining task in one shot. Consequently, there is a series of studies on the interactive query model, in which the data recipients, including adversaries, can submit a sequence of queries based on previously received query results. The database server is responsible for keeping track of all queries of each user and determining whether or not the currently received query has violated the privacy requirement with respect to all previous queries. One limitation of any interactive privacy-preserving query system is that it can only answer a sublinear number of queries in total; otherwise, an adversary (or a group of corrupted data recipients) will be able to reconstruct all but a 1 − o(1) fraction of the original data, which is a very strong violation of privacy. When the maximum number of queries is reached, the query service must be closed to avoid privacy leaks. In the case of the non-interactive query model, the adversary can issue only one query and, therefore, the non-interactive query model cannot achieve the same degree of privacy defined by the interactive model. One may consider that privacy-preserving data publishing is a special case of the non-interactive query model.

This paper presents a survey of most of the common attack techniques for anonymization-based PPDM and PPDP and explains their effects on data privacy. k-anonymity is used to protect respondents' identity and reduces the risk of linking attacks; in the case of a homogeneity attack, a simple k-anonymity model fails, and we need a concept which prevents this attack; the solution is l-diversity. All tuples are arranged in a well-represented form, so an adversary is diverted to l places or l sensitive attribute values. l-diversity is limited in the case of a background knowledge attack, because no one can predict the knowledge level of an adversary. It is observed that, using generalization and suppression, we also apply these techniques to attributes which do not need this extent of privacy, and this reduces the precision of the published table.
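A minimal, hedged sketch of the interactive query model's bookkeeping described above: the server records each recipient's queries and refuses service once a per-user budget (standing in for the sublinear bound) is exhausted. The class name, the fixed budget, and the omitted privacy audit are illustrative assumptions, not a construction from the paper.

```python
# Hedged sketch of an interactive privacy-preserving query server's bookkeeping.
# The fixed per-user budget stands in for the sublinear bound discussed above;
# real systems derive the bound from the dataset size and privacy model.
from collections import defaultdict

class InteractiveQueryServer:
    def __init__(self, max_queries_per_user: int):
        self.max_queries = max_queries_per_user
        self.history = defaultdict(list)   # user -> list of past queries

    def submit(self, user: str, query: str) -> str:
        past = self.history[user]
        if len(past) >= self.max_queries:
            return "service closed: query budget exhausted"
        # A real auditor would check `query` against `past` for privacy
        # violations before answering; here we only track the history.
        past.append(query)
        return f"answer to {query!r} (queries used: {len(past)}/{self.max_queries})"

server = InteractiveQueryServer(max_queries_per_user=3)
for q in ["count age>40", "count age>40 and zip=476**", "avg salary", "count smokers"]:
    print(server.submit("analyst-1", q))
```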
e-NSTAM (extended Sensitive Tuples Anonymity Method) is applied to sensitive tuples only and reduces information loss, but this method also fails in the case of multiple sensitive tuples. Generalization with suppression is also a cause of data loss, because suppression emphasizes not releasing values which are not suited to the k factor. Future work on this front can include defining a new privacy measure along with l-diversity for multiple sensitive attributes, and focusing on generalizing attributes without suppression, using other techniques that achieve k-anonymity, because suppression reduces the precision of the published table.

Translation (excerpt): Data Mining and Data Publishing. Data mining is the extraction of a large number of interesting patterns or knowledge from huge amounts of data.

Data Mining Techniques Graduation Thesis: Chinese-English Foreign Literature Translation and Review

Introduction to Data Mining (English original)

Abstract: Microsoft® SQL Server™ 2005 provides an integrated environment for creating and working with data mining models. This tutorial uses four scenarios, targeted mailing, forecasting, market basket, and sequence clustering, to demonstrate how to use the mining model algorithms, mining model viewers, and data mining tools that are included in this release of SQL Server.

Introduction

The data mining tutorial is designed to walk you through the process of creating data mining models in Microsoft SQL Server 2005. The data mining algorithms and tools in SQL Server 2005 make it easy to build a comprehensive solution for a variety of projects, including market basket analysis, forecasting analysis, and targeted mailing analysis. The scenarios for these solutions are explained in greater detail later in the tutorial.

The most visible components in SQL Server 2005 are the workspaces that you use to create and work with data mining models. The online analytical processing (OLAP) and data mining tools are consolidated into two working environments: Business Intelligence Development Studio and SQL Server Management Studio. Using Business Intelligence Development Studio, you can develop an Analysis Services project disconnected from the server. When the project is ready, you can deploy it to the server. You can also work directly against the server. The main function of SQL Server Management Studio is to manage the server. Each environment is described in more detail later in this introduction. For more information on choosing between the two environments, see "Choosing Between SQL Server Management Studio and Business Intelligence Development Studio" in SQL Server Books Online.

All of the data mining tools exist in the data mining editor. Using the editor you can manage mining models, create new models, view models, compare models, and create predictions based on existing models.

After you build a mining model, you will want to explore it, looking for interesting patterns and rules. Each mining model viewer in the editor is customized to explore models built with a specific algorithm. For more information about the viewers, see "Viewing a Data Mining Model" in SQL Server Books Online.

Often your project will contain several mining models, so before you can use a model to create predictions, you need to be able to determine which model is the most accurate. For this reason, the editor contains a model comparison tool called the Mining Accuracy Chart tab. Using this tool you can compare the predictive accuracy of your models and determine the best model.

To create predictions, you will use the Data Mining Extensions (DMX) language. DMX extends SQL, containing commands to create, modify, and predict against mining models. For more information about DMX, see "Data Mining Extensions (DMX) Reference" in SQL Server Books Online. Because creating a prediction can be complicated, the data mining editor contains a tool called Prediction Query Builder, which allows you to build queries using a graphical interface. You can also view the DMX code that is generated by the query builder.

Just as important as the tools that you use to work with and create data mining models are the mechanics by which they are created. The key to creating a mining model is the data mining algorithm.
The algorithm finds patterns in the data that you pass it, and it translates them into a mining model; it is the engine behind the process.

Some of the most important steps in creating a data mining solution are consolidating, cleaning, and preparing the data to be used to create the mining models. SQL Server 2005 includes the Data Transformation Services (DTS) working environment, which contains tools that you can use to clean, validate, and prepare your data. For more information on using DTS in conjunction with a data mining solution, see "DTS Data Mining Tasks and Transformations" in SQL Server Books Online.

In order to demonstrate the SQL Server data mining features, this tutorial uses a new sample database called AdventureWorksDW. The database is included with SQL Server 2005, and it supports OLAP and data mining functionality. In order to make the sample database available, you need to select the sample database at installation time in the "Advanced" dialog for component selection.

Adventure Works

AdventureWorksDW is based on a fictional bicycle manufacturing company named Adventure Works Cycles. Adventure Works produces and distributes metal and composite bicycles to North American, European, and Asian commercial markets. The base of operations is located in Bothell, Washington with 500 employees, and several regional sales teams are located throughout their market base.

Adventure Works sells products wholesale to specialty shops and to individuals through the Internet. For the data mining exercises, you will work with the AdventureWorksDW Internet sales tables, which contain realistic patterns that work well for data mining exercises. For more information on Adventure Works Cycles see "Sample Databases and Business Scenarios" in SQL Server Books Online.

Database Details

The Internet sales schema contains information about 9,242 customers. These customers live in six countries, which are combined into three regions: North America (83%), Europe (12%), and Australia (7%). The database contains data for three fiscal years: 2002, 2003, and 2004. The products in the database are broken down by subcategory, model, and product.

Business Intelligence Development Studio

Business Intelligence Development Studio is a set of tools designed for creating business intelligence projects. Because Business Intelligence Development Studio was created as an IDE environment in which you can create a complete solution, you work disconnected from the server. You can change your data mining objects as much as you want, but the changes are not reflected on the server until after you deploy the project.

Working in an IDE is beneficial for the following reasons: The Analysis Services project is the entry point for a business intelligence solution. An Analysis Services project encapsulates mining models and OLAP cubes, along with supplemental objects that make up the Analysis Services database. From Business Intelligence Development Studio, you can create and edit Analysis Services objects within a project and deploy the project to the appropriate Analysis Services server or servers.

If you are working with an existing Analysis Services project, you can also use Business Intelligence Development Studio to work connected to the server. In this way, changes are reflected directly on the server without having to deploy the solution.

SQL Server Management Studio

SQL Server Management Studio is a collection of administrative and scripting tools for working with Microsoft SQL Server components.
This workspace differs from Business Intelligence Development Studio in that you are working in a connected environment where actions are propagated to the server as soon as you save your work.

After the data has been cleaned and prepared for data mining, most of the tasks associated with creating a data mining solution are performed within Business Intelligence Development Studio. Using the Business Intelligence Development Studio tools, you develop and test the data mining solution, using an iterative process to determine which models work best for a given situation. When the developer is satisfied with the solution, it is deployed to an Analysis Services server. From this point, the focus shifts from development to maintenance and use, and thus to SQL Server Management Studio. Using SQL Server Management Studio, you can administer your database and perform some of the same functions as in Business Intelligence Development Studio, such as viewing mining models and creating predictions from them.

Data Transformation Services

Data Transformation Services (DTS) comprises the Extract, Transform, and Load (ETL) tools in SQL Server 2005. These tools can be used to perform some of the most important tasks in data mining: cleaning and preparing the data for model creation. In data mining, you typically perform repetitive data transformations to clean the data before using the data to train a mining model. Using the tasks and transformations in DTS, you can combine data preparation and model creation into a single DTS package.

DTS also provides DTS Designer to help you easily build and run packages containing all of the tasks and transformations. Using DTS Designer, you can deploy the packages to a server and run them on a regularly scheduled basis. This is useful if, for example, you collect data weekly and want to perform the same cleaning transformations each time in an automated fashion. You can work with a Data Transformation project and an Analysis Services project together as part of a business intelligence solution, by adding each project to a solution in Business Intelligence Development Studio.

Mining Model Algorithms

Data mining algorithms are the foundation from which mining models are created. The variety of algorithms included in SQL Server 2005 allows you to perform many types of analysis. For more specific information about the algorithms and how they can be adjusted using parameters, see "Data Mining Algorithms" in SQL Server Books Online.

Microsoft Decision Trees

The Microsoft Decision Trees algorithm supports both classification and regression and it works well for predictive modeling. Using the algorithm, you can predict both discrete and continuous attributes.

In building a model, the algorithm examines how each input attribute in the dataset affects the result of the predicted attribute, and then it uses the input attributes with the strongest relationship to create a series of splits, called nodes. As new nodes are added to the model, a tree structure begins to form. The top node of the tree describes the breakdown of the predicted attribute over the overall population. Each additional node is created based on the distribution of states of the predicted attribute as compared to the input attributes. If an input attribute is seen to cause the predicted attribute to favor one state over another, a new node is added to the model. The model continues to grow until none of the remaining attributes create a split that provides an improved prediction over the existing node.
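A hedged sketch of the split-based tree growth just described, using scikit-learn's decision tree on a built-in dataset; the library and dataset are illustrative assumptions, and this stands in for, rather than reproduces, the Microsoft Decision Trees implementation configured through Analysis Services.

```python
# Illustrative sketch of decision-tree growth: the learner repeatedly picks the
# input attribute whose split best separates the states of the predicted
# attribute, forming a tree of nodes. scikit-learn is used purely for
# illustration and is unrelated to the SQL Server 2005 implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rules show the root node (overall population) and the splits
# chosen on the most informative input attributes.
print(export_text(tree, feature_names=list(X.columns)))
```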
Overall, the model seeks to find a combination of attributes and their states that creates a disproportionate distribution of states in the predicted attribute, therefore allowing you to predict the outcome of the predicted attribute.

Microsoft Clustering

The Microsoft Clustering algorithm uses iterative techniques to group records from a dataset into clusters containing similar characteristics. Using these clusters, you can explore the data, learning more about the relationships that exist, which may not be easy to derive logically through casual observation. Additionally, you can create predictions from the clustering model created by the algorithm. For example, consider a group of people who live in the same neighborhood, drive the same kind of car, eat the same kind of food, and buy a similar version of a product. This is a cluster of data. Another cluster may include people who go to the same restaurants, have similar salaries, and vacation twice a year outside the country. Observing how these clusters are distributed, you can better understand how the records in a dataset interact, as well as how that interaction affects the outcome of a predicted attribute.

Microsoft Naïve Bayes

The Microsoft Naïve Bayes algorithm quickly builds mining models that can be used for classification and prediction. It calculates probabilities for each possible state of the input attribute, given each state of the predictable attribute, which can later be used to predict an outcome of the predicted attribute based on the known input attributes. The probabilities used to generate the model are calculated and stored during the processing of the cube. The algorithm supports only discrete or discretized attributes, and it considers all input attributes to be independent. The Microsoft Naïve Bayes algorithm produces a simple mining model that can be considered a starting point in the data mining process. Because most of the calculations used in creating the model are generated during cube processing, results are returned quickly. This makes the model a good option for exploring the data and for discovering how various input attributes are distributed in the different states of the predicted attribute.

Microsoft Time Series

The Microsoft Time Series algorithm creates models that can be used to predict continuous variables over time from both OLAP and relational data sources. For example, you can use the Microsoft Time Series algorithm to predict sales and profits based on the historical data in a cube. Using the algorithm, you can choose one or more variables to predict, but they must be continuous. You can have only one case series for each model. The case series identifies the location in a series, such as the date when looking at sales over a length of several months or years. A case may contain a set of variables (for example, sales at different stores). The Microsoft Time Series algorithm can use cross-variable correlations in its predictions. For example, prior sales at one store may be useful in predicting current sales at another store.

Microsoft Neural Network

In Microsoft SQL Server 2005 Analysis Services, the Microsoft Neural Network algorithm creates classification and regression mining models by constructing a multilayer perceptron network of neurons. Similar to the Microsoft Decision Trees algorithm provider, given each state of the predictable attribute, the algorithm calculates probabilities for each possible state of the input attribute.
The algorithm provider processes the entire set of cases, iteratively comparing the predicted classification of the cases with the known actual classification of the cases. The errors from the initial classification of the first iteration of the entire set of cases are fed back into the network and used to modify the network's performance for the next iteration, and so on. You can later use these probabilities to predict an outcome of the predicted attribute, based on the input attributes. One of the primary differences between this algorithm and the Microsoft Decision Trees algorithm, however, is that its learning process optimizes network parameters toward minimizing the error, while the Microsoft Decision Trees algorithm splits rules in order to maximize information gain. The algorithm supports the prediction of both discrete and continuous attributes.

Microsoft Linear Regression

The Microsoft Linear Regression algorithm is a particular configuration of the Microsoft Decision Trees algorithm, obtained by disabling splits (the whole regression formula is built in a single root node). The algorithm supports the prediction of continuous attributes.

Microsoft Logistic Regression

The Microsoft Logistic Regression algorithm is a particular configuration of the Microsoft Neural Network algorithm, obtained by eliminating the hidden layer. The algorithm supports the prediction of both discrete and continuous attributes.

Chinese translation (excerpt): Introduction to Data Mining. Abstract: Microsoft® SQL Server™ 2005 provides an integrated environment for creating and working with data mining models.
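As a hedged aside on the last two algorithms described above: the same relationship holds in generic libraries, where a logistic regression behaves like a perceptron-style network with the hidden layer removed. The sketch below compares the two in scikit-learn on synthetic data; the library, dataset, and network size are assumptions for illustration, unrelated to the SQL Server 2005 implementations.

```python
# Hedged sketch: logistic regression vs. a small multilayer perceptron,
# echoing the relationship described above (logistic regression ~ a neural
# network without a hidden layer). Purely illustrative.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", round(logit.score(X_test, y_test), 3))
print("neural network accuracy:     ", round(mlp.score(X_test, y_test), 3))
```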

Data Mining - Clustering: Undergraduate Thesis Foreign Literature Translation and Original Text

Graduation Project (Thesis) Foreign Literature Translation
Chinese title: Cluster Analysis
English title: Clustering
Source:
Publication date:
Department:
Major: Automation
Class:
Name:
Student ID:
Supervisor:
Translation date: 2017.02.14

Foreign translation
English title: Data mining - clustering
Translated title: Data Mining - Cluster Analysis
Major: Automation
Name: ****
Class and student ID: ****
Supervisor: ******
Source of translation: Data Mining, by Ian H. Witten and Eibe Frank

Clustering

5.1 INTRODUCTION

Clustering is similar to classification in that data are grouped. However, unlike classification, the groups are not predefined. Instead, the grouping is accomplished by finding similarities between data according to characteristics found in the actual data. The groups are called clusters. Some authors view clustering as a special type of classification. In this text, however, we follow a more conventional view in that the two are different. Many definitions for clusters have been proposed:

● Set of like elements. Elements from different clusters are not alike.
● The distance between points in a cluster is less than the distance between a point in the cluster and any point outside it.

A term similar to clustering is database segmentation, where like tuples (records) in a database are grouped together. This is done to partition or segment the database into components that then give the user a more general view of the data. In this text, we do not differentiate between segmentation and clustering. A simple example of clustering is found in Example 5.1. This example illustrates the fact that determining how to do the clustering is not straightforward.

As illustrated in Figure 5.1, a given set of data may be clustered on different attributes. Here a group of homes in a geographic area is shown. The first type of clustering is based on the location of the home. Homes that are geographically close to each other are clustered together. In the second clustering, homes are grouped based on the size of the house.

Clustering has been used in many application domains, including biology, medicine, anthropology, marketing, and economics. Clustering applications include plant and animal classification, disease classification, image processing, pattern recognition, and document retrieval. One of the first domains in which clustering was used was biological taxonomy. Recent uses include examining Web log data to detect usage patterns.

When clustering is applied to a real-world database, many interesting problems occur:

● Outlier handling is difficult. Here the elements do not naturally fall into any cluster. They can be viewed as solitary clusters. However, if a clustering algorithm attempts to find larger clusters, these outliers will be forced to be placed in some cluster. This process may result in the creation of poor clusters by combining two existing clusters and leaving the outlier in its own cluster.
● Dynamic data in the database implies that cluster membership may change over time.
● Interpreting the semantic meaning of each cluster may be difficult. With classification, the labeling of the classes is known ahead of time. However, with clustering, this may not be the case. Thus, when the clustering process finishes creating a set of clusters, the exact meaning of each cluster may not be obvious. Here is where a domain expert is needed to assign a label or interpretation for each cluster.
● There is no one correct answer to a clustering problem. In fact, many answers may be found. The exact number of clusters required is not easy to determine. Again, a domain expert may be required. For example, suppose we have a set of data about plants that have been collected during a field trip.
Without any prior knowledge of plant classification, if we attempt to divide this set of data into similar groupings, it would not be clear how many groups should be created.
● Another related issue is what data should be used for clustering. Unlike learning during a classification process, where there is some a priori knowledge concerning what the attributes of each classification should be, in clustering we have no supervised learning to aid the process. Indeed, clustering can be viewed as similar to unsupervised learning.

We can then summarize some basic features of clustering (as opposed to classification):

● The (best) number of clusters is not known.
● There may not be any a priori knowledge concerning the clusters.
● Cluster results are dynamic.

The clustering problem is stated as shown in Definition 5.1. Here we assume that the number of clusters to be created is an input value, k. The actual content (and interpretation) of each cluster, K_j, 1 ≤ j ≤ k, is determined as a result of the function definition. Without loss of generality, we will view the result of solving a clustering problem as the creation of a set of clusters K = {K_1, K_2, ..., K_k}.

DEFINITION 5.1. Given a database D = {t_1, t_2, ..., t_n} of tuples and an integer value k, the clustering problem is to define a mapping f : D → {1, ..., k} where each t_i is assigned to one cluster K_j, 1 ≤ j ≤ k. A cluster K_j contains precisely those tuples mapped to it; that is, K_j = {t_i | f(t_i) = K_j, 1 ≤ i ≤ n, and t_i ∈ D}.

A classification of the different types of clustering algorithms is shown in Figure 5.2. Clustering algorithms themselves may be viewed as hierarchical or partitional. With hierarchical clustering, a nested set of clusters is created. Each level in the hierarchy has a separate set of clusters. At the lowest level, each item is in its own unique cluster. At the highest level, all items belong to the same cluster. With hierarchical clustering, the desired number of clusters is not input. With partitional clustering, the algorithm creates only one set of clusters. These approaches use the desired number of clusters to drive how the final set is created. Traditional clustering algorithms tend to be targeted to small numeric databases that fit into memory. There are, however, more recent clustering algorithms that look at categorical data and are targeted to larger, perhaps dynamic, databases. Algorithms targeted to larger databases may adapt to memory constraints by either sampling the database or using data structures, which can be compressed or pruned to fit into memory regardless of the size of the database. Clustering algorithms may also differ based on whether they produce overlapping or nonoverlapping clusters. Even though we consider only nonoverlapping clusters, it is possible to place an item in multiple clusters. In turn, nonoverlapping clusters can be viewed as extrinsic or intrinsic. Extrinsic techniques use labeling of the items to assist in the classification process. These algorithms are the traditional classification supervised learning algorithms in which a special input training set is used. Intrinsic algorithms do not use any a priori category labels, but depend only on the adjacency matrix containing the distance between objects. All algorithms we examine in this chapter fall into the intrinsic class.

The types of clustering algorithms can be further classified based on the implementation technique used. Hierarchical algorithms can be categorized as agglomerative or divisive.
"Agglomerative" implies that the clusters are created in a bottom-up fashion, while divisive algorithms work in a top-down fashion. Although both hierarchical and partitional algorithms could be described using the agglomerative vs. divisive label, it typically is more associated with hierarchical algorithms. Another descriptive tag indicates whether each individual element is handled one by one, serial (sometimes called incremental), or whether all items are examined together, simultaneous. If a specific tuple is viewed as having attribute values for all attributes in the schema, then clustering algorithms could differ as to how the attribute values are examined. As is usually done with decision tree classification techniques, some algorithms examine attribute values one at a time, monothetic. Polythetic algorithms consider all attribute values at one time. Finally, clustering algorithms can be labeled based on the mathematical formulation given to the algorithm: graph theoretic or matrix algebra. In this chapter we generally use the graph approach and describe the input to the clustering algorithm as an adjacency matrix labeled with distance measures.

We discuss many clustering algorithms in the following sections. This is only a representative subset of the many algorithms that have been proposed in the literature. Before looking at these algorithms, we first examine possible similarity measures and examine the impact of outliers.

5.2 SIMILARITY AND DISTANCE MEASURES

There are many desirable properties for the clusters created by a solution to a specific clustering problem. The most important one is that a tuple within one cluster is more like tuples within that cluster than it is similar to tuples outside it. As with classification, then, we assume the definition of a similarity measure, sim(t_i, t_l), defined between any two tuples t_i, t_l ∈ D. This provides a more strict and alternative clustering definition, as found in Definition 5.2. Unless otherwise stated, we use the first definition rather than the second. Keep in mind that the similarity relationship stated within the second definition is a desirable, although not always obtainable, property.

A distance measure, dis(t_i, t_j), as opposed to similarity, is often used in...
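To connect Definition 5.1 to something runnable, here is a hedged sketch of a partitional clustering of a small numeric database into k clusters. The use of scikit-learn's k-means, the toy data, and the Euclidean distance are illustrative assumptions, not choices made in the chapter.

```python
# Hedged sketch of Definition 5.1: a mapping f that assigns each tuple t_i in
# the database D to exactly one of k clusters. k-means (a partitional,
# intrinsic algorithm using Euclidean distance) is used purely for illustration.
import numpy as np
from sklearn.cluster import KMeans

# A tiny numeric database D of tuples (e.g., house location x/y and size).
D = np.array([
    [1.0, 1.2, 90], [1.1, 0.9, 120], [0.9, 1.0, 95],
    [5.0, 5.2, 200], [5.1, 4.8, 210], [4.9, 5.0, 60],
])

k = 2
f = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(D)

# f[i] is the cluster K_j assigned to tuple t_i; group tuples by cluster.
for j in range(k):
    members = np.where(f == j)[0]
    print(f"K_{j + 1}: tuples {list(members + 1)}")
```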

Data Acquisition Systems: Chinese-English Foreign Literature Translation

(Chinese-English translation; the document contains the English original and its Chinese translation.)

Data Acquisition Systems

Data acquisition systems are used to acquire process operating data and store it on secondary storage devices for later analysis. Many of the data acquisition systems acquire this data at very high speeds and very little computer time is left to carry out any necessary, or desirable, data manipulations or reduction. All the data are stored on secondary storage devices and manipulated subsequently to derive the variables of interest. It is very often necessary to design special purpose data acquisition systems and interfaces to acquire the high speed process data. This special purpose design can be an expensive proposition.

Powerful mini- and mainframe computers are used to combine the data acquisition with other functions such as comparisons between the actual output and the desirable output values, and to then decide on the control action which must be taken to ensure that the output variables lie within preset limits. The computing power required will depend upon the type of process control system implemented. Software requirements for carrying out proportional, ratio or three term control of process variables are relatively trivial, and microcomputers can be used to implement such process control systems. It would not be possible to use many of the currently available microcomputers for the implementation of high speed adaptive control systems which require the use of suitable process models and considerable online manipulation of data.

Microcomputer based data loggers are used to carry out intermediate functions such as data acquisition at comparatively low speeds, simple mathematical manipulations of raw data and some forms of data reduction. The first generation of data loggers, without any programmable computing facilities, was used simply for slow speed data acquisition from up to one hundred channels. All the acquired data could be punched out on paper tape or printed for subsequent analysis. Such hardwired data loggers are being replaced by the new generation of data loggers which incorporate microcomputers and can be programmed by the user. They offer an extremely good method of collecting the process data, using standardized interfaces, and subsequently performing the necessary manipulations to provide the information of interest to the process operator. The data acquired can be analyzed to establish correlations, if any, between process variables and to develop mathematical models necessary for adaptive and optimal process control.

The data acquisition function carried out by data loggers varies from one system to another. Simple data logging systems acquire data from a few channels while complex systems can receive data from hundreds, or even thousands, of input channels distributed around one or more processes. The rudimentary data loggers scan the selected number of channels, connected to sensors or transducers, in a sequential manner and the data are recorded in a digital format. A data logger can be dedicated in the sense that it can only collect data from particular types of sensors and transducers. It is best to use a nondedicated data logger since any transducer or sensor can be connected to the channels via suitable interface circuitry. This facility requires the use of appropriate signal conditioning modules.

Microcomputer controlled data acquisition facilitates the scanning of a large number of sensors.
The scanning rate depends upon the signal dynamics which means that some channels must be scanned at very high speeds in order to avoid aliasing errors while there is very little loss of information by scanning other channels at slower speeds. In some data logging applications the faster channels require sampling at speeds of up to 100 times per second while slow channels can be sampled once every five minutes. The conventional hardwired, non-programmable data loggers sample all the channels in a sequential manner and the sampling frequency of all the channels must be the same. This procedure results in the accumulation of very large amounts of data, some of which is unnecessary, and also slows down the overall effective sampling frequency. Microcomputer based data loggers can be used to scan some fast channels at a higher frequency than other slow speed channels.The vast majority of the user programmable data loggers can be used to scan up to 1000 analog and 1000 digital input channels. A small number of data loggers, with a higher degree of sophistication, are suitable for acquiring data from up to 15, 000 analog and digital channels. The data from digital channels can be in the form of Transistor- Transistor Logic or contact closure signals. Analog data must be converted into digital format before it is recorded and requires the use of suitable analog to digital converters (ADC).The characteristics of the ADC will define the resolution that can be achieved and the rate at which the various channels can be sampled. An in-crease in the number of bits used in the ADC improves the resolution capability. Successive approximation ADC's arefaster than integrating ADC's. Many microcomputer controlled data loggers include a facility to program the channel scanning rates. Typical scanning rates vary from 2 channels per second to 10, 000 channels per second.Most data loggers have a resolution capability of ±0.01% or better, It is also pos-sible to achieve a resolution of 1 micro-volt. The resolution capability, in absolute terms, also depends upon the range of input signals, Standard input signal ranges are 0-10 volt, 0-50 volt and 0-100 volt. The lowest measurable signal varies form 1 t, volt to 50, volt. A higher degree of recording accuracy can be achieved by using modules which accept data in small, selectable ranges. An alternative is the auto ranging facil-ity available on some data loggers.The accuracy with which the data are acquired and logged-on the appropriate storage device is extremely important. It is therefore necessary that the data acquisi-tion module should be able to reject common mode noise and common mode voltage. Typical common mode noise rejection capabilities lie in the range 110 dB to 150 dB. A decibel (dB) is a tern which defines the ratio of the power levels of two signals. Thus if the reference and actual signals have power levels of N, and Na respectively, they will have a ratio of n decibels, wheren=10 Log10(Na /Nr)Protection against maximum common mode voltages of 200 to 500 volt is available on typical microcomputer based data loggers.The voltage input to an individual data logger channel is measured, scaled and linearised before any further data manipulations or comparisons are carried out.In many situations, it becomes necessary to alter the frequency at which particu-lar channels are sampled depending upon the values of data signals received from a particular input sensor. Thus a channel might normally be sampled once every 10 minutes. 
If, however, the sensor signals approach the alarm limit, then it is obviously desirable to sample that channel once every minute or even faster so that the operators can be informed, thereby avoiding any catastrophes. Microcomputer controlledintel-ligent data loggers may be programmed to alter the sampling frequencies depending upon the values of process signals. Other data loggers include self-scanning modules which can initiate sampling.The conventional hardwired data loggers, without any programming facilities, simply record the instantaneous values of transducer outputs at a regular samplingin-terval. This raw data often means very little to the typical user. To be meaningful, this data must be linearised and scaled, using a calibration curve, in order to determine the real value of the variable in appropriate engineering units. Prior to the availability of programmable data loggers, this function was usually carried out in the off-line mode on a mini- or mainframe computer. The raw data values had to be punched out on pa-per tape, in binary or octal code, to be input subsequently to the computer used for analysis purposes and converted to the engineering units. Paper tape punches are slow speed mechanical devices which reduce the speed at which channels can be scanned. An alternative was to print out the raw data values which further reduced the data scanning rate. It was not possible to carry out any limit comparisons or provide any alarm information. Every single value acquired by the data logger had to be recorded eventhough it might not serve any useful purpose during subsequent analysis; many data values only need recording when they lie outside the pre-set low and high limits.If the analog data must be transmitted over any distance, differences in ground potential between the signal source and final location can add noise in the interface design. In order to separate common-mode interference form the signal to be recorded or processed, devices designed for this purpose, such as instrumentation amplifiers, may be used. An instrumentation amplifier is characterized by good common-mode- rejection capability, a high input impedance, low drift, adjustable gain, and greater cost than operational amplifiers. They range from monolithic ICs to potted modules, and larger rack-mounted modules with manual scaling and null adjustments. When a very high common-mode voltage is present or the need for extremely-lowcom-mon-mode leakage current exists(as in many medical-electronics applications),an isolation amplifier is required. Isolation amplifiers may use optical or transformer isolation.Analog function circuits are special-purpose circuits that are used for a variety of signal conditioning operations on signals which are in analog form. When their accu-racy is adequate, they can relieve the microprocessor of time-consuming software and computations. Among the typical operations performed are multiplications, division, powers, roots, nonlinear functions such as for linearizing transducers, rimsmeasure-ments, computing vector sums, integration and differentiation, andcurrent-to-voltage or voltage- to-current conversion. 
Many of these operations can be purchased in available devices as multiplier/dividers, log/antilog amplifiers, and others.When data from a number of independent signal sources must be processed by the same microcomputer or communications channel, a multiplexer is used to channel the input signals into the A/D converter.Multiplexers are also used in reverse, as when a converter must distribute analog information to many different channels. The multiplexer is fed by a D/A converter which continually refreshes the output channels with new information.In many systems, the analog signal varies during the time that the converter takes to digitize an input signal. The changes in this signal level during the conversion process can result in errors since the conversion period can be completed some time after the conversion command. The final value never represents the data at the instant when the conversion command is transmitted. Sample-hold circuits are used to make an acquisition of the varying analog signal and to hold this signal for the duration of the conversion process. Sample-hold circuits are common in multichannel distribution systems where they allow each channel to receive and hold the signal level.In order to get the data in digital form as rapidly and as accurately as possible, we must use an analog/digital (A/D) converter, which might be a shaft encoder, a small module with digital outputs, or a high-resolution, high-speed panel instrument. These devices, which range form IC chips to rack-mounted instruments, convert ana-log input data, usually voltage, into an equivalent digital form. The characteristics of A/D converters include absolute and relative accuracy, linearity, monotonic, resolu-tion, conversion speed, and stability. A choice of input ranges, output codes, and other features are available. The successive-approximation technique is popular for a large number ofapplications, with the most popular alternatives being the counter-comparator types, and dual-ramp approaches. The dual-ramp has been widely-used in digital voltmeters.D/A converters convert a digital format into an equivalent analog representation. The basic converter consists of a circuit of weighted resistance values or ratios, each controlled by a particular level or weight of digital input data, which develops the output voltage or current in accordance with the digital input code. A special class of D/A converter exists which have the capability of handling variable reference sources. These devices are the multiplying DACs. Their output value is the product of the number represented by the digital input code and the analog reference voltage, which may vary form full scale to zero, and in some cases, to negative values.Component Selection CriteriaIn the past decade, data-acquisition hardware has changed radically due to ad-vances in semiconductors, and prices have come down too; what have not changed, however, are the fundamental system problems confronting the designer. Signals may be obscured by noise, rfi,ground loops, power-line pickup, and transients coupled into signal lines from machinery. Separating the signals from these effects becomes a matter for concern.Data-acquisition systems may be separated into two basic categories:(1)those suited to favorable environments like laboratories -and(2)those required for hostile environments such as factories, vehicles, and military installations. 
The latter group includes industrial process control systems where temperature information may be gathered by sensors on tanks, boilers, wats, or pipelines that may be spread over miles of facilities. That data may then be sent to a central processor to provide real-time process control. The digital control of steel mills, automated chemical production, and machine tools is carried out in this kind of hostile environment. The vulnerability of the data signals leads to the requirement for isolation and other techniques.At the other end of the spectrum-laboratory applications, such as test systems for gathering information on gas chromatographs, mass spectrometers, and other sophis-ticated instruments-the designer's problems are concerned with the performing of sen-sitive measurements under favorable conditions rather than with the problem ofpro-tecting the integrity of collected data under hostile conditions.Systems in hostile environments might require components for wide tempera-tures, shielding, common-mode noise reduction, conversion at an early stage, redun-dant circuits for critical measurements, and preprocessing of the digital data to test its reliability. Laboratory systems, on the other hand, will have narrower temperature ranges and less ambient noise. But the higher accuracies require sensitive devices, and a major effort may be necessary for the required signal /noise ratios.The choice of configuration and components in data-acquisition design depends on consideration of a number of factors:1. Resolution and accuracy required in final format.2. Number of analog sensors to be monitored.3. Sampling rate desired.4. Signal-conditioning requirement due to environment and accuracy.5. Cost trade-offs.Some of the choices for a basic data-acquisition configuration include:1 .Single-channel techniques.A. Direct conversion.B. Preamplification and direct conversion.C. Sample-hold and conversion.D. Preamplification, sample-hold, and conversion.E. Preamplification, signal-conditioning, and direct conversion.F. Preamplification, signal-conditioning, sample-hold, and conversion.2. Multichannel techniques.A. Multiplexing the outputs of single-channel converters.B. Multiplexing the outputs of sample-holds.C. Multiplexing the inputs of sample-holds.D. Multiplexing low-level data.E. More than one tier of multiplexers.Signal-conditioning may include:1. Radiometric conversion techniques.B. Range biasing.D. Logarithmic compression.A. Analog filtering.B. Integrating converters.C. Digital data processing.We shall consider these techniques later, but first we will examine some of the components used in these data-acquisition system configurations.MultiplexersWhen more than one channel requires analog-to-digital conversion, it is neces-sary to use time-division multiplexing in order to connect the analog inputs to a single converter, or to provide a converter for each input and then combine the converter outputs by digital multiplexing.Analog MultiplexersAnalog multiplexer circuits allow the timesharing of analog-to-digital converters between a numbers of analog information channels. An analog multiplexer consists of a group of switches arranged with inputs connected to the individual analog channels and outputs connected in common(as shown in Fig. 1).The switches may be ad-dressed by a digital input code.Many alternative analog switches are available in electromechanical and solid-state forms. 
Electromechanical switch types include relays, stepper switches,cross-bar switches, mercury-wetted switches, and dry-reed relay switches. The best switching speed is provided by reed relays(about 1 ms).The mechanical switches provide high do isolation resistance, low contact resistance, and the capacity to handle voltages up to 1 KV, and they are usually inexpensive. Multiplexers using mechanical switches are suited to low-speed applications as well as those having high resolution requirements. They interface well with the slower A/D converters, like the integrating dual-slope types. Mechanical switches have a finite life, however, usually expressed innumber of operations. A reed relay might have a life of 109 operations, which wouldallow a 3-year life at 10 operations/second.Solid-state switch devices are capable of operation at 30 ns, and they have a life which exceeds most equipment requirements. Field-effect transistors(FETs)are used in most multiplexers. They have superseded bipolar transistors which can introduce large voltage offsets when used as switches.FET devices have a leakage from drain to source in the off state and a leakage from gate or substrate to drain and source in both the on and off states. Gate leakage in MOS devices is small compared to other sources of leakage. When the device has a Zener-diode-protected gate, an additional leakage path exists between the gate and source.Enhancement-mode MOS-FETs have the advantage that the switch turns off when power is removed from the MUX. Junction-FET multiplexers always turn on with the power off.A more recent development, the CMOS-complementary MOS-switch has the advantage of being able to multiplex voltages up to and including the supply voltages. A±10-V signal can be handled with a ±10-V supply.Trade-off Considerations for the DesignerAnalog multiplexing has been the favored technique for achieving lowest system cost. The decreasing cost of A/D converters and the availability of low-cost, digital integrated circuits specifically designed for multiplexing provide an alternative with advantages for some applications. A decision on the technique to use for a givensys-tem will hinge on trade-offs between the following factors:1. Resolution. The cost of A/D converters rises steeply as the resolution increases due to the cost of precision elements. At the 8-bit level, the per-channel cost of an analog multiplexer may be a considerable proportion of the cost of a converter. At resolutions above 12 bits, the reverse is true, and analog multiplexing tends to be more economical.2. Number of channels. This controls the size of the multiplexer required and the amount of wiring and interconnections. Digital multiplexing onto a common data bus reduces wiring to a minimum in many cases. Analog multiplexing is suited for 8 to 256 channels; beyond this number, the technique is unwieldy and analog errors be-come difficult to minimize. Analog and digital multiplexing is often combined in very large systems.3. Speed of measurement, or throughput. High-speed A/D converters can add a considerable cost to the system. If analog multiplexing demands a high-speedcon-verter to achieve the desired sample rate, a slower converter for each channel with digital multiplexing can be less costly.4. Signal level and conditioning. Wide dynamic ranges between channels can be difficult with analog multiplexing. 
Signals less than 1V generally require differential low-level analog multiplexing which is expensive, with programmable-gain amplifiers after the MUX operation. The alternative of fixed-gain converters on each channel, with signal-conditioning designed for the channel requirement, with digital multi-plexing may be more efficient.5. Physical location of measurement points. Analog multiplexing is suitedfor making measurements at distances up to a few hundred feet from the converter, since analog lines may suffer from losses, transmission-line reflections, and interference. Lines may range from twisted wire pairs to multiconductor shielded cable, depending on signal levels, distance, and noise environments. Digital multiplexing is operable to thousands of miles, with the proper transmission equipment, for digital transmission systems can offer the powerful noise-rejection characteristics that are required for29 Data Acquisition Systems long-distance transmission.Digital MultiplexingFor systems with small numbers of channels, medium-scale integrated digital multiplexers are available in TTL and MOS logic families. The 74151 is a typical example. Eight of these integrated circuits can be used to multiplex eight A/D con-verters of 8-bit resolution onto a common data bus.This digital multiplexing example offers little advantages in wiring economy, but it is lowest in cost, and the high switching speed allows operation at sampling rates much faster than analog multiplexers. The A/D converters are required only to keep up with the channel sample rate, and not with the commutating rate. When large numbers of A/D converters are multiplexed, the data-bus technique reduces system interconnections. This alone may in many cases justify multiple A/D converters. Data can be bussed onto the lines in bit-parallel or bit-serial format, as many converters have both serial and parallel outputs. A variety of devices can be used to drive the bus, from open collector and tristate TTL gates to line drivers and optoelectronic isolators. Channel-selection decoders can be built from 1-of-16 decoders to the required size. This technique also allows additional reliability in that a failure of one A/D does not affect the other channels. An important requirement is that the multiplexer operate without introducing unacceptable errors at the sample-rate speed. For a digital MUX system, one can determine the speed from propagation delays and the time required to charge the bus capacitance.Analog multiplexers can be more difficult to characterize. Their speed is a func-tion not only of internal parameters but also external parameters such as channel, source impedance, stray capacitance and the number of channels, and the circuit lay-out. The user must be aware of the limiting parameters in the system to judge their ef-fect on performance.The nonideal transmission and open-circuit characteristics of analog multiplexers can introduce static and dynamic errors into the signal path. These errors include leakage through switches, coupling of control signals into the analog path, and inter-actions with sources and following amplifiers. Moreover, the circuit layout can com-pound these effects.Since analog multiplexers may be connected directly to sources which may have little overload capacity or poor settling after overloads, the switches should have a break-before-make action to prevent the possibility of shorting channels together. 
It may be necessary to avoid shorted channels when power is removed, and a channels-off with power-down characteristic is desirable. In addition to the channel-addressing lines, which are normally binary-coded, it is useful to have inhibit or enable lines to turn all switches off regardless of the channel being addressed. This simplifies the external logic necessary to cascade multiplexers and can also be useful in certain modes of channel addressing. Another requirement for both analog and digital multiplexers is the tolerance of line transients and overload conditions, and the ability to absorb the transient energy and recover without damage.
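The arithmetic behind several of the steps described in this article, the decibel ratio n = 10 log10(Na/Nr), the resolution of an ideal ADC, and the scaling of raw counts into engineering units via a calibration curve, is simple enough to sketch. The Python below is purely illustrative; the full-scale range, bit width and calibration points are assumed values, not figures taken from any particular logger, and a real system would use the transducer's full calibration curve rather than a straight line between two points.

```python
import math

def power_ratio_db(n_actual, n_reference):
    """n = 10 * log10(Na / Nr): the decibel relation quoted above."""
    return 10.0 * math.log10(n_actual / n_reference)

def adc_lsb_size(full_scale_volts, n_bits):
    """Smallest resolvable step of an ideal n-bit ADC over the given range."""
    return full_scale_volts / (2 ** n_bits)

def counts_to_engineering_units(counts, n_bits, full_scale_volts,
                                volts_low, volts_high, eng_low, eng_high):
    """Scale and linearise a raw ADC reading using a two-point calibration."""
    volts = counts * full_scale_volts / (2 ** n_bits)
    fraction = (volts - volts_low) / (volts_high - volts_low)
    return eng_low + fraction * (eng_high - eng_low)

# Assumed example: a 12-bit converter on a 0-10 V range reading a 0-200 degC sensor.
print(adc_lsb_size(10.0, 12))                        # about 2.44 mV per count
print(power_ratio_db(10 ** 12, 1))                   # a 120 dB power ratio
print(counts_to_engineering_units(2048, 12, 10.0,
                                  0.0, 10.0, 0.0, 200.0))  # mid-scale -> 100 degC
```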

Data Mining Paper (English Version)

Jilin Province's Population Growth and Energy Consumption Analysis

Major: Statistics
Student No.: 0401083710
Name: Niu Fukuan

[Summary] Since the third technological revolution, energy has become the lifeline of the national economy, while the energy on Earth is limited, so competition among the major powers has led to a number of wars related to, or simply fought for, oil. Competition for control of the world's resources and energy contributed to the outbreak of two world wars. China is currently in a period of high energy consumption; CNPC, Sinopec and CNOOC, the three state-owned oil giants, have been "going out" to develop international markets. Jilin Province, as an energy-producing and energy-consuming province of China, is also active in the corresponding energy diplomacy. Under economic globalization and increasingly fierce competition for energy, China's energy policy still has many imperfections, which to a certain extent affect the energy and population development of Jilin Province and of China; to some extent the existing population pressure can even be described as an energy crisis.

[Keywords] Energy consumption; Population; Growth; Analysis

Data source
I select data from the "China Statistical Yearbook 2009": comprehensive annual data for Jilin Province, 1995-2007 (Table 1). Record the total population (year-end) as the annual data sequence {Xt}, and household energy consumption (kg of standard coal) as the annual data sequence {Yt}.

Table 1. Total population and household energy consumption, annual data (recovered rows, 2001-2007)

Year | Total population (year-end) | Energy consumption (kg of standard coal) | ln Xt | ln Yt
2001 | 127627 | 16629798.1 | 11.75686723 | 16.62670671
2002 | 128453 | 17585215.7 | 11.76331836 | 16.68256909
2003 | 129227 | 19888035.3 | 11.76932583 | 16.80562887
2004 | 129988 | 21344029.6 | 11.77519742 | 16.87628261
2005 | 130756 | 23523004.4 | 11.78108827 | 16.97348941
2006 | 131448 | 25592925.6 | 11.78636662 | 17.05782653
2007 | 132129 | 26861825.7 | 11.791534   | 17.10621672

1. Timing diagram
First, the total population (year-end) annual data series {Xt} and the household energy consumption (kg of standard coal) annual data series {Yt} from Table 1 are plotted as timing diagrams, in order to observe whether the two series are stationary. The EVIEWS output is shown below.

Figure 1. Timing diagram of the total population (year-end) sequence
Figure 2. Timing diagram of the household energy consumption (kg of standard coal) sequence

Figure 1 is the timing diagram of the sequence {Xt}; Figure 2 is the timing diagram of the sequence {Yt}. Both figures show that the total population (year-end) and the household energy consumption (kg of standard coal) exhibit a rising trend; neither the annual population series {Xt} nor the annual energy consumption series {Yt} is stationary, and the two may have a long-term cointegration relationship.
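The original analysis was run in EVIEWS; the same data handling can be sketched in Python. The snippet below is illustrative only (it is not the author's code) and uses just the 2001-2007 rows recovered in Table 1 to build the two series and draw the timing diagrams with pandas and matplotlib.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Rows recovered from Table 1: year, total population, energy consumption (kg of standard coal).
data = {
    "year": [2001, 2002, 2003, 2004, 2005, 2006, 2007],
    "population": [127627, 128453, 129227, 129988, 130756, 131448, 132129],
    "energy": [16629798.1, 17585215.7, 19888035.3, 21344029.6,
               23523004.4, 25592925.6, 26861825.7],
}
df = pd.DataFrame(data).set_index("year")

fig, axes = plt.subplots(1, 2, figsize=(10, 3))
df["population"].plot(ax=axes[0], title="Total population (year-end), Xt")
df["energy"].plot(ax=axes[1], title="Household energy consumption, Yt")
plt.tight_layout()
plt.show()
```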
2. Data smoothing

(1) Taking logarithms
Figures 1 and 2 show intuitively that the data sequences {Xt} and {Yt} have a significant growth trend and are clearly non-stationary. Therefore the total population sequence {Xt} and the household energy consumption (kg of standard coal) sequence {Yt} are first log-transformed to eliminate heteroscedasticity: logx = lnXt and logy = lnYt, in the hope of turning the trend into an approximately linear one. The EVIEWS timing diagrams of the logged sequences are shown below; the population sequence {logx} appears in Figure 3 and the energy consumption sequence {logy} in Figure 4.

Figure 3. Timing diagram of {logx}    Figure 4. Timing diagram of {logy}

Figures 3 and 4 show that the exponential trend of the total population sequence {logx} and of the household energy consumption (kg of standard coal) sequence {logy} has been largely eliminated, and the two display an obvious long-term co-movement, an important prerequisite for cointegration modeling. However, the logged sequences are still non-stationary, so ADF unit root tests are applied to {logx} and {logy} (Tables 2 and 3); the test results are shown below.

(2) Unit root test
The ADF unit root test is applied to the province's total population and household energy consumption sequences. The results obtained with EVIEWS are as follows:

Table 2. Unit root test of the total population sequence {logx}

From Table 2: the ADF statistic of the total population sequence is -0.784587, clearly larger than the critical value at the 1% level (-4.3260), larger than the critical value at the 5% level (-3.2195), and also larger than the critical value at the 10% level (-2.7557), so the total population sequence {logx} is non-stationary.

Table 3. Unit root test of the household energy consumption (kg of standard coal) sequence {logy}

From Table 3: the ADF statistic of the energy consumption sequence is 0.489677, clearly larger than the critical value at the 1% level (-4.3260), larger than the critical value at the 5% level (-3.2195), and also larger than the critical value at the 10% level (-2.7557), so the energy consumption sequence {logy} is non-stationary.

(3) Differencing
Because the logged time series are still not stationary, the logged total population sequence {logx} and the logged household energy consumption (kg of standard coal) sequence {logy} are differenced further; the differenced sequences are recorded as {▽logx} and {▽logy}. ADF unit root tests are then applied to the second-order difference of the total population sequence {▽logx} and the second-order difference of the energy consumption sequence {▽logy} (Tables 4 and 5); the test results are shown below.

Table 4. Unit root test of {▽logx}

Table 4 shows that the ADF statistic of the second-order differenced total population sequence {▽logx} is -10.6278, clearly smaller than the critical value at the 1% level (-6.292057), smaller than the critical value at the 5% level (-4.450425), and also smaller than the critical value at the 10% level (-3.701534), so the second-order differenced total population sequence {▽logx} is stationary.

Table 5. Unit root test of {▽logy}

Table 5 shows that the ADF statistic of the second-order differenced household energy consumption (kg of standard coal) sequence {▽logy} is -6.395029, clearly smaller than the critical value at the 1% level (-4.4613), smaller than the critical value at the 5% level (-3.2695), and also smaller than the critical value at the 10% level (-2.7822), so the second-order differenced energy consumption sequence {▽logy} is stationary.
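An equivalent of the ADF unit root tests reported above can be run with statsmodels, continuing from the series built in the previous sketch. This is illustrative only: the paper used the full 1995-2007 sample in EVIEWS, while only seven rows were recovered here, so the statistics will not reproduce the reported values, and maxlag is fixed at 1 to keep the test feasible on such a short sample.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

logx = np.log(df["population"])   # df from the previous snippet
logy = np.log(df["energy"])

def adf_report(series, name):
    stat, pvalue, usedlag, nobs, crit, icbest = adfuller(series.dropna(), maxlag=1)
    print(name, "ADF =", round(stat, 4), "p-value =", round(pvalue, 4))
    print("  critical values:", {k: round(v, 4) for k, v in crit.items()})

adf_report(logx, "logx (level)")
adf_report(logy, "logy (level)")
adf_report(logx.diff().diff(), "logx (2nd difference)")
adf_report(logy.diff().diff(), "logy (2nd difference)")
```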
3. Cointegration

(1) Cointegration regression
Cointegration theory was put forward by Engle and Granger in the 1980s. It starts from the analysis of non-stationary time series and explores the long-run equilibrium relationship contained in non-stationary variables, providing a new approach to modeling non-stationary time series.

Since the population time series {Xt} and the household energy consumption time series {Yt} have been log-transformed, and the resulting series {logx} and {logy} are both integrated of the same (second) order, a cointegration relationship may exist between them. The results obtained with EVIEWS are as follows:

Table 6. Cointegrating regression

From Table 6:
D(LNE2) = -0.054819 - 101.8623 D(LOGX2)
t = (-1.069855) (-1.120827)
R2 = 0.122487  DW = 1.593055

(2) Stationarity check of the residual sequence
From EVIEWS, the analysis of the residual sequence gives:

Table 7. Unit root test of the residual series

From Table 7: the ADF statistic of the second-order differenced residuals is -5.977460, clearly smaller than the critical value at the 1% level (-4.6405), smaller than the critical value at the 5% level (-3.3350), and also smaller than the critical value at the 10% level (-2.8169). Therefore the second-order difference of the residual et is a stationary time series. It can be expressed as follows:

D(ET,2) = -0.042260 - 1.707007 D(ET(-1),2)
t = (-0.783744) (-5.977460)
DW = 1.603022  EG = -5.977460

Since EG = -5.977460 and, from the AEG cointegration test critical value table (N = 2, α = 0.05, T = 16), the EG value is less than the critical value, the hypothesis that the residual sequence et is stationary is accepted. It can therefore be concluded that total population and household energy consumption are two variables with a long-term cointegration relationship.

4. Establishing the ECM model
From the above analysis, the second-order difference of the logged total population time series {▽logx} and the second-order difference of the logged energy consumption time series {▽logy} are stationary, and the second-order difference of the residual et is also stationary. Taking the second-order difference of the logged energy consumption series as the dependent variable, and the second-order difference of the logged total population series and the second-order differenced residual et as explanatory variables, the regression is estimated with EVIEWS and gives the following results:

Table 8. ECM model results

From Table 8, the standard ECM regression model can be written as follows:

D(logY2) = -0.047266 - 154.4568 D(LNP2) + 0.171676 D(ET2)
t = (-1.469685) (-2.528562) (1.755694)
R2 = 0.579628  DW = 1.760658

The regression coefficients of the ECM equation pass the significance test, and the error correction coefficient is positive, in line with the forward correction mechanism. The estimation results show that changes in the province's household energy consumption depend not only on changes in the total population, but also on the previous year's deviation of the total population from its equilibrium level. In addition, the regression results show that short-term changes in the total population have a positive impact on household energy consumption. Because the short-term adjustment coefficient is significant, deviations of the province's annual household energy consumption from its long-run equilibrium value can be corrected well.
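The two-step Engle-Granger procedure and an error-correction regression of the kind described above can be sketched as follows, continuing from the earlier snippets. The sketch uses the textbook form with first differences and the lagged equilibrium error, which is slightly simpler than the second-difference specification reported in the paper, and with the short recovered sample the numbers will not match the EVIEWS estimates.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Step 1: cointegrating regression of logy on logx (levels).
ols_level = sm.OLS(logy, sm.add_constant(logx)).fit()
resid = ols_level.resid

# Step 2: ADF test on the residuals; a stationary residual implies cointegration.
eg_stat = adfuller(resid, maxlag=1)[0]
print("Engle-Granger ADF statistic on residuals:", round(eg_stat, 4))

# Error-correction model: short-run changes plus the lagged equilibrium error.
d_logy = logy.diff().dropna()
d_logx = logx.diff().dropna()
ecm_X = sm.add_constant(pd.concat([d_logx, resid.shift(1)], axis=1).dropna())
ecm_X.columns = ["const", "d_logx", "ect_lag1"]
ecm = sm.OLS(d_logy.loc[ecm_X.index], ecm_X).fit()
print(ecm.params)
```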
5. ARMA model

(1) Model identification
After differencing, the non-stationary series have been turned into stationary series, so an ARMA model can be used. Before estimation, the autocorrelation and partial autocorrelation functions of the stationary annual household energy consumption series {logy} are computed; the results are as follows:

Table 9. Autocorrelation and partial autocorrelation of {logy}

From Table 9, the autocorrelation falls within the random interval after lag K = 1, and the partial autocorrelation also falls within the random interval after K = 1. Therefore an ARMA(1,1) model can be established for the household energy consumption sequence {logy}. The parameters of the ARMA(1,1) model are estimated next, with the following results:

Table 10. ARMA(1,1) model parameter estimation

From Table 10, the estimated ARMA(1,1) model is:
D(LNE,2) = 0.014184 + 0.008803 D(LNE,2)t-1 - 0.858461 u t-1

(2) ARMA(1,1) model test
The residuals of the model are tested for white noise. If the residuals are not a white noise sequence, the ARMA(1,1) model needs further improvement; if they are a white noise process, the original model is accepted. The residual test results are as follows:

Table 11. ARMA(1,1) model residual test

Table 11 shows that the p-values of the Q statistics are greater than 0.05, so the residual series of the ARMA(1,1) model is a white noise sequence and the ARMA(1,1) model is accepted. The model is then used to forecast changes in household energy consumption, with the following results:

Figure 5. Forecast of household energy consumption

From the forecast of household energy consumption in Jilin Province, it can be seen that household energy consumption rises every year, which shows that household energy consumption in Jilin Province will continue on an upward trend for many years to come. And because the total population and household energy consumption change in the same direction, the total population will also continue to increase over the coming years.
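The ARMA(1,1) fit, residual check and forecast described above can be approximated with statsmodels' ARIMA class, continuing from the earlier snippets. Again this is an illustrative sketch rather than the original EVIEWS estimation; with only the seven recovered observations the fit is a demonstration of the workflow, not a reproduction of the reported coefficients.

```python
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# ARMA(1,1) on the second difference of log energy, mirroring the model above.
series = logy.diff().diff().dropna()
model = ARIMA(series, order=(1, 0, 1)).fit()
print(model.summary())

# Ljung-Box Q test on the residuals: large p-values suggest white noise.
print(acorr_ljungbox(model.resid, lags=[1], return_df=True))

# Forecast the next three periods of the differenced series.
print(model.forecast(steps=3))
```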
6. Problems
Based on the cointegration analysis of the relationship between the total population and household energy consumption of Jilin Province, there is a long-term, stable equilibrium relationship in which the two interact and promote each other. The above analysis gives a more accurate understanding of energy consumption in Jilin Province and supports better proposals on energy conservation. At present, Jilin Province faces the following energy problems:
(1) Heavy industry still accounts for a large proportion of the economy;
(2) Energy-intensive industries are large in scale and growing rapidly, which offsets the effect of energy saving;
(3) Energy consumption is still coal-based.

7. Recommendations
(1) Control population growth and actively cooperate with the national family planning policy to ease the pressure of per-capita consumption.
(2) Raise awareness of the importance of energy saving, implement the energy-saving target responsibility system, and ensure that energy-efficiency measures are carried out. Conscientiously implement the statistics, monitoring and evaluation systems for energy saving issued by the State Council, with strict accountability.
(3) Speed up industrial restructuring and the transformation of economic development, to overcome resource, energy and other bottlenecks, and take a new industrialization path with high technological content, good economic returns, low resource consumption, little environmental pollution and full use of human resources.
(4) Pay attention to quality improvement and optimization of the structure, so that restructuring ultimately improves the overall quality of industry and raises the quality and efficiency of economic growth.
(5) Strengthen the development and promotion of energy-saving technologies, strengthen energy security, and promote renewable and clean energy. Adhere to the combination of technical progress and the deepening of reform and opening up. Take enhancing independent innovation capability as the central link of adjusting the industrial structure and changing the growth mode, speed up building the innovation system, and strive to solve the major scientific and technological problems constraining development. Vigorously promote circular-economy demonstration pilot enterprises, and actively carry out comprehensive utilization of resources and recycling of renewable resources. Actively promote the construction of solar, wind, biogas, biodiesel and other renewable energy.

References
[1] Wang Yan. Applied Time Series Analysis. China Renmin University Press, 2008.12
[2] Pang Hao. Econometrics. Science Press, 2006.1

Foreign Literature with Chinese-English Translation (the document contains the English original and a Chinese translation)

English Original: Introduction to Data Mining

Abstract: Microsoft® SQL Server™ 2005 provides an integrated environment for creating and working with data mining models. This tutorial uses four scenarios, targeted mailing, forecasting, market basket, and sequence clustering, to demonstrate how to use the mining model algorithms, mining model viewers, and data mining tools that are included in this release of SQL Server.

Introduction

The data mining tutorial is designed to walk you through the process of creating data mining models in Microsoft SQL Server 2005. The data mining algorithms and tools in SQL Server 2005 make it easy to build a comprehensive solution for a variety of projects, including market basket analysis, forecasting analysis, and targeted mailing analysis. The scenarios for these solutions are explained in greater detail later in the tutorial.

The most visible components in SQL Server 2005 are the workspaces that you use to create and work with data mining models. The online analytical processing (OLAP) and data mining tools are consolidated into two working environments: Business Intelligence Development Studio and SQL Server Management Studio. Using Business Intelligence Development Studio, you can develop an Analysis Services project disconnected from the server. When the project is ready, you can deploy it to the server. You can also work directly against the server. The main function of SQL Server Management Studio is to manage the server. Each environment is described in more detail later in this introduction. For more information on choosing between the two environments, see "Choosing Between SQL Server Management Studio and Business Intelligence Development Studio" in SQL Server Books Online.

All of the data mining tools exist in the data mining editor. Using the editor you can manage mining models, create new models, view models, compare models, and create predictions based on existing models.

After you build a mining model, you will want to explore it, looking for interesting patterns and rules. Each mining model viewer in the editor is customized to explore models built with a specific algorithm. For more information about the viewers, see "Viewing a Data Mining Model" in SQL Server Books Online.

Often your project will contain several mining models, so before you can use a model to create predictions, you need to be able to determine which model is the most accurate. For this reason, the editor contains a model comparison tool called the Mining Accuracy Chart tab. Using this tool you can compare the predictive accuracy of your models and determine the best model.

To create predictions, you will use the Data Mining Extensions (DMX) language. DMX extends SQL, containing commands to create, modify, and predict against mining models. For more information about DMX, see "Data Mining Extensions (DMX) Reference" in SQL Server Books Online. Because creating a prediction can be complicated, the data mining editor contains a tool called Prediction Query Builder, which allows you to build queries using a graphical interface. You can also view the DMX code that is generated by the query builder.

Just as important as the tools that you use to work with and create data mining models are the mechanics by which they are created. The key to creating a mining model is the data mining algorithm.
The algorithm finds patterns in the data that you pass it, and it translates them into a mining model — it is the engine behind the process.Some of the most important steps in creating a data mining solution are consolidating, cleaning, and preparing the data to be used to create the mining models. SQL Server 2005 includes the Data Transformation Services (DTS) working environment, which contains tools that you can use to clean, validate, and prepare your data. For more information on using DTS in conjunction with a data mining solution, see "DTS Data Mining Tasks and Transformations" in SQL Server Books Online.In order to demonstrate the SQL Server data mining features, this tutorial uses a new sample database called AdventureWorksDW. The database is included with SQL Server 2005, and it supports OLAP and data mining functionality. In order to make the sample database available, you need to select the sample database at the installati on time in the “Advanced” dialog for component selection.Adventure WorksAdventureWorksDW is based on a fictional bicycle manufacturing company named Adventure Works Cycles. Adventure Works produces and distributes metal and composite bicycles to North American, European, and Asian commercial markets. The base of operations is located in Bothell, Washington with 500 employees, and several regional sales teams are located throughout their market base.Adventure Works sells products wholesale to specialty shops and to individuals through the Internet. For the data mining exercises, you will work with the AdventureWorksDW Internet sales tables, which contain realistic patterns that work well for data mining exercises.For more information on Adventure Works Cycles see "Sample Databases and Business Scenarios" in SQL Server Books Online.Database DetailsThe Internet sales schema contains information about 9,242 customers. These customers live in six countries, which are combined into three regions:North America (83%)Europe (12%)Australia (7%)The database contains data for three fiscal years: 2002, 2003, and 2004.The products in the database are broken down by subcategory, model, and product.Business Intelligence Development StudioBusiness Intelligence Development Studio is a set of tools designed for creating business intelligence projects. Because Business Intelligence Development Studio was created as an IDE environment in which you can create a complete solution, you work disconnected from the server. You can change your data mining objects as much as you want, but the changes are not reflected on the server until after you deploy the project.Working in an IDE is beneficial for the following reasons:The Analysis Services project is the entry point for a business intelligence solution. An Analysis Services project encapsulates mining models and OLAP cubes, along with supplemental objects that make up the Analysis Services database. From Business Intelligence Development Studio, you can create and edit Analysis Services objects within a project and deploy the project tothe appropriate Analysis Services server or servers.If you are working with an existing Analysis Services project, you can also use Business Intelligence Development Studio to work connected the server. In this way, changes are reflected directly on the server without having to deploy the solution.SQL Server Management StudioSQL Server Management Studio is a collection of administrative and scripting tools for working with Microsoft SQL Server components. 
This workspace differs from Business Intelligence Development Studio in that you are working in a connected environment where actions are propagated to the server as soon as you save your work.After the data has been cleaned and prepared for data mining, most of the tasks associated with creating a data mining solution are performed within Business Intelligence Development Studio. Using the Business Intelligence Development Studio tools, you develop and test the data mining solution, using an iterative process to determine which models work best for a given situation. When the developer is satisfied with the solution, it is deployed to an Analysis Services server. From this point, the focus shifts from development to maintenance and use, and thus SQL Server Management Studio. Using SQL Server Management Studio, you can administer your database and perform some of the same functions as in Business Intelligence Development Studio, such as viewing, and creating predictions from mining models.Data Transformation ServicesData Transformation Services (DTS) comprises the Extract, Transform, and Load (ETL) tools in SQL Server 2005. These tools can be used to perform some of the most important tasks in data mining: cleaning and preparing the data for model creation. In data mining, you typically perform repetitive data transformations to clean the data before using the data to train a mining model. Using the tasks and transformations in DTS, you can combine data preparation and model creation into a single DTS package.DTS also provides DTS Designer to help you easily build and run packages containing all of the tasks and transformations. Using DTS Designer, you can deploy the packages to a server and run them on a regularly scheduled basis. This is useful if, for example, you collect data weekly data and want to perform the same cleaning transformations each time in an automated fashion.You can work with a Data Transformation project and an Analysis Services project together as part of a business intelligence solution, by adding each project to a solution in Business Intelligence Development Studio.Mining Model AlgorithmsData mining algorithms are the foundation from which mining models are created. The variety of algorithms included in SQL Server 2005 allows you to perform many types of analysis.For more specific information about the algorithms and how they can be adjusted using parameters, see "Data Mining Algorithms" in SQL Server Books Online.Microsoft Decision TreesThe Microsoft Decision Trees algorithm supports both classification and regression and it works well for predictive modeling. Using the algorithm, you can predict both discrete and continuous attributes.In building a model, the algorithm examines how each input attribute in the dataset affects the result of the predicted attribute, and then it uses the input attributes with the strongest relationship to create a series of splits, called nodes. As new nodes are added to the model, a tree structure begins to form. The top node of the tree describes the breakdown of the predicted attribute over the overall population. Each additional node is created based on the distribution of states of the predicted attribute as compared to the input attributes. If an input attribute is seen to cause the predicted attribute to favor one state over another, a new node is added to the model. The model continues to grow until none of the remaining attributes create a split that provides an improved prediction over the existing node. 
The model seeks to find a combination of attributes and their states that creates a disproportionate distribution of states in the predicted attribute, therefore allowing you to predict the outcome of the predicted attribute.Microsoft ClusteringThe Microsoft Clustering algorithm uses iterative techniques to group records from a dataset into clusters containing similar characteristics. Using these clusters, you can explore the data, learning more about the relationships that exist, which may not be easy to derive logically through casual observation. Additionally, you can create predictions from the clustering model created by the algorithm. For example, consider a group of people who live in the same neighborhood, drive the same kind of car, eat the same kind of food, and buy a similar version of a product. This is a cluster of data. Another cluster may include people who go to the same restaurants, have similar salaries, and vacation twice a year outside the country. Observing how these clusters are distributed, you can better understand how the records in a dataset interact, as well as how that interaction affects the outcome of a predicted attribute.Microsoft Naïve BayesThe Microsoft Naïve Bayes algorithm quickly builds mining models that can be used for classification and prediction. It calculates probabilities for each possible state of the input attribute, given each state of the predictable attribute, which can later be used to predict an outcome of the predicted attribute based on the known input attributes. The probabilities used to generate the model are calculated and stored during the processing of the cube. The algorithm supports only discrete or discretized attributes, and it considers all input attributes to be independent. TheMicrosoft Naïve Bayes algorithm produces a simple mining model that can be considered a starting point in the data mining process. Because most of the calculations used in creating the model are generated during cube processing, results are returned quickly. This makes the model a good option for exploring the data and for discovering how various input attributes are distributed in the different states of the predicted attribute.Microsoft Time SeriesThe Microsoft Time Series algorithm creates models that can be used to predict continuous variables over time from both OLAP and relational data sources. For example, you can use the Microsoft Time Series algorithm to predict sales and profits based on the historical data in a cube.Using the algorithm, you can choose one or more variables to predict, but they must be continuous. You can have only one case series for each model. The case series identifies the location in a series, such as the date when looking at sales over a length of several months or years.A case may contain a set of variables (for example, sales at different stores). The Microsoft Time Series algorithm can use cross-variable correlations in its predictions. For example, prior sales at one store may be useful in predicting current sales at another store.Microsoft Neural NetworkIn Microsoft SQL Server 2005 Analysis Services, the Microsoft Neural Network algorithm creates classification and regression mining models by constructing a multilayer perceptron network of neurons. Similar to the Microsoft Decision Trees algorithm provider, given each state of the predictable attribute, the algorithm calculates probabilities for each possible state of the input attribute. 
The algorithm provider processes the entire set of cases, iteratively comparing the predicted classification of the cases with the known actual classification of the cases. The errors from the initial classification of the first iteration of the entire set of cases are fed back into the network and used to modify the network's performance for the next iteration, and so on. You can later use these probabilities to predict an outcome of the predicted attribute, based on the input attributes. One of the primary differences between this algorithm and the Microsoft Decision Trees algorithm, however, is that its learning process optimizes network parameters toward minimizing the error, while the Microsoft Decision Trees algorithm splits rules in order to maximize information gain. The algorithm supports the prediction of both discrete and continuous attributes.

Microsoft Linear Regression

The Microsoft Linear Regression algorithm is a particular configuration of the Microsoft Decision Trees algorithm, obtained by disabling splits (the whole regression formula is built in a single root node). The algorithm supports the prediction of continuous attributes.

Microsoft Logistic Regression

The Microsoft Logistic Regression algorithm is a particular configuration of the Microsoft Neural Network algorithm, obtained by eliminating the hidden layer. The algorithm supports the prediction of both discrete and continuous attributes.
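The decision-tree and Naïve Bayes ideas the tutorial describes can also be illustrated outside SQL Server with a small, self-contained Python example. The scikit-learn classifiers below are not the Microsoft implementations, and the miniature targeted-mailing-style data set is invented purely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# Invented targeted-mailing-style cases: [age, yearly income (thousands), number of children]
X = [
    [34, 60, 1], [45, 90, 2], [23, 30, 0], [52, 120, 3],
    [31, 45, 0], [40, 80, 2], [28, 38, 1], [60, 150, 2],
]
y = [1, 1, 0, 1, 0, 1, 0, 1]   # 1 = bought a bike, 0 = did not

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)   # split-based model
bayes = GaussianNB().fit(X, y)                                         # probability-based model

new_case = [[37, 70, 1]]
print("decision tree prediction:", tree.predict(new_case))
print("naive Bayes class probabilities:", bayes.predict_proba(new_case).round(3))
```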
