Introduction to Data Mining - 6. Clustering


Clustering Methods


This has given rise to a family of algorithms for measuring similarity.
Introduction to the Principles of Cluster Analysis
Measuring similarity (from a statistical perspective): distance. Q-type clustering (the main topic here).
Mainly used for classifying samples. Commonly used distances (applicable only to clustering with interval-scale variables) include:
Minkowski distances (including the absolute/Manhattan, Euclidean, and Chebyshev distances), the Lance (Canberra) distance, the Mahalanobis distance, and the oblique-space distance. Details are omitted here; interested readers may consult 《应用多元分析》 (2nd ed.) by 王学民.
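For readers who want to experiment, here is a minimal Python sketch of three of these distances (the sample points and covariance matrix below are invented for illustration):

```python
import numpy as np

def minkowski(x, y, q):
    """Minkowski distance: q = 1 gives the absolute (Manhattan) distance,
    q = 2 the Euclidean distance; large q approaches Chebyshev."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(np.abs(x - y) ** q) ** (1.0 / q))

def chebyshev(x, y):
    """Chebyshev distance: the largest coordinate-wise difference."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.max(np.abs(x - y)))

def mahalanobis(x, y, cov):
    """Mahalanobis distance, which accounts for correlations between
    variables via a covariance matrix of the data."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

x, y = [1.0, 2.0], [4.0, 6.0]
print(minkowski(x, y, 1), minkowski(x, y, 2), chebyshev(x, y))
print(mahalanobis(x, y, np.array([[2.0, 0.3], [0.3, 1.0]])))
```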
First, be clear about the goal of clustering: to make the distances between clusters as large as possible and the distances within clusters as small as possible. A clustering algorithm can fix the number of clusters according to the research objective, but the resulting classification must have a convincing interpretation.
In practice, the number of clusters is more often determined by experience: test the clustering results for different numbers of clusters until a reasonably satisfactory classification is found.
Unstable clustering methods
There is no absolute rule for choosing an algorithm.
When the clustering result is used as a descriptive or exploratory tool, several algorithms can be tried on the same data to see what the data may reveal.
This method uses the information in all samples and is considered one of the better hierarchical clustering methods.
Widely used between-cluster distances:
Centroid method (centroid hierarchical method)
The distance between cluster centroids; insensitive to outliers, giving more stable results.
Ward's method (minimum-variance method)
$D^2_{KL} = W_M - W_K - W_L$, where cluster $M$ is the result of merging clusters $K$ and $L$, and $W$ denotes a cluster's within-cluster sum of squared deviations from its centroid.
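A small numeric sketch of this merge cost, with W computed as the within-cluster sum of squared deviations from the centroid (the toy clusters K and L are invented):

```python
import numpy as np

def wss(points):
    """Within-cluster sum of squared deviations from the centroid (W)."""
    pts = np.asarray(points, float)
    return float(np.sum((pts - pts.mean(axis=0)) ** 2))

# Invented toy clusters K and L; M is the cluster obtained by merging them.
K = [[0.0, 0.0], [1.0, 0.0]]
L = [[5.0, 5.0], [6.0, 5.0], [5.0, 6.0]]
D2 = wss(K + L) - wss(K) - wss(L)   # D^2 = W_M - W_K - W_L
print(D2)
```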

Research objective: mine the calling-behavior characteristics of different customer groups. A demonstration with SAS/Enterprise Miner follows.
Q&A
Recommended references
《应用多元分析》(第二版)王学民 上海财经大学出版社
《应用多元统计分析》, i.e., Applied Multivariate Statistical Analysis, Richard A. Johnson and Dean W. Wichern, 中国统计出版社

On Cluster Analysis in Data Mining

Clustering: the records in a database can be partitioned into a series of meaningful subsets, called clusters.

Clustering deepens people's understanding of objective reality and is a prerequisite for concept description and deviation analysis.

Clustering techniques mainly include traditional pattern-recognition methods and numerical taxonomy.

In the early 1980s, Michalski proposed conceptual clustering. Its key point is that, when partitioning objects, one considers not only the distances between objects but also requires each resulting class to have an intensional (conceptual) description, thereby avoiding some of the one-sidedness of traditional techniques.

Statistical analysis: common statistical methods include regression analysis (multiple regression, autoregression, etc.), discriminant analysis (Bayesian analysis, Fisher discriminant, nonparametric discriminant, etc.), cluster analysis (hierarchical clustering, dynamic clustering, etc.), and exploratory analysis (principal component analysis, correlation analysis, etc.).

The process can be divided into three stages: collecting data, analyzing data, and drawing inferences.

Throughout the process, clustering is based on statistical distances and similarity coefficients.

How to measure closeness: statistical distances and similarity coefficients.

Artificial neural networks: neural networks have recently attracted growing attention because they provide a relatively simple yet effective way to tackle large, complex problems.

A neural network can easily handle problems with hundreds of parameters (of course, the neural networks found in real organisms are far more complex than the program-simulated networks discussed here).

Neural networks are commonly used for two kinds of problems: classification and regression.

Structurally, a neural network can be divided into an input layer, an output layer, and hidden layers (see Figure 4).

Each node of the input layer corresponds to one predictor variable.

The nodes of the output layer correspond to the target variables; there may be more than one.

Between the input and output layers lie the hidden layers (invisible to the network's user); the number of hidden layers and the number of nodes per layer determine the network's complexity.

Except for input-layer nodes, every node in the network is connected to many nodes that precede it (called its input nodes), each connection carrying a weight Wxy. The node's value is obtained by feeding the sum of the products of its input nodes' values and the corresponding connection weights into a function, called the activation or squashing function.

As in Figure 5, the value that node 4 passes on to node 6 can be computed as W14 × (value of node 1) + W24 × (value of node 2). Each node in the network thus represents either the value of a predictor variable (nodes 1 and 2) or a combination of such values (nodes 3-6).
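A minimal sketch of that weighted-sum-plus-squashing computation (all weights and input values are invented; the logistic sigmoid stands in for whichever squashing function a given network uses):

```python
import math

def node_value(inputs, weights):
    """A node's value: the weighted sum of its input nodes' values fed
    through a squashing (activation) function, here the logistic sigmoid."""
    s = sum(w * v for w, v in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical values for the Figure-5 fragment: node 4 combines the
# values of nodes 1 and 2 via weights W14 and W24 (all numbers invented).
w14, w24 = 0.8, -0.5
node4 = node_value([0.6, 0.9], [w14, w24])
print(node4)
```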

Data Mining: Concepts and Techniques, Chapter 1 (slides)
Data utilization
Drowning in data, yet starving for knowledge
The evolution of information technology
Data mining: automated analysis of massive data sets. Evolution: file processing -> database management systems -> advanced database systems and advanced data analysis.
Definition: the extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) patterns or knowledge from large amounts of data.
Also known as "knowledge discovery in databases" (KDD).
Selection and transformation
Evaluation and presentation
Chapter 1 Introduction
1.1 Why Data Mining? 1.2 What Is Data Mining? 1.3 What Kinds of Data Can Be Mined? 1.4 What Kinds of Patterns Can Be Mined? 1.5 Which Technologies Are Used? 1.6 Which Kinds of Applications Are Targeted? 1.7 Major Issues in Data Mining 1.8 Summary
The data explosion
Massive, explosively growing volumes of data
Sources: the Web, e-commerce, individuals. Types: images, text, and more.
Consider a single online-shopping transaction; its payment process involves at least the following database operations:
1. Update the inventory of the purchased goods.
2. Record the customer's payment, possibly interacting with a banking system.
3. Generate the order and store it in the database.
4. Update customer information, such as purchase counts.
Other types of data
Stock trading data, text, images, audio and video, and types yet unknown
1.4.1 Class/Concept Description: Characterization and Discrimination
Class/concept
Data characterization
A summary of the general characteristics or features of the target class of data
Data discrimination
A comparison of the general features of target-class data objects against the general features of objects from one or more contrasting classes
Characterization and discrimination
1.4.2 Mining Frequent Patterns, Associations, and Correlations
Frequent patterns are patterns that occur frequently in the data.
1. Frequent itemsets, frequent subsequences, and frequent substructures. 2. Mining frequent patterns can uncover associations and correlations in the data, e.g., single-dimensional versus multidimensional associations.
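As an illustration of frequent-itemset mining, here is a small level-wise (Apriori-style) sketch; the transactions and the support threshold are invented, and a real implementation would add candidate pruning:

```python
def frequent_itemsets(transactions, min_support):
    """Level-wise search: keep the itemsets whose support count reaches
    min_support, then build candidates that are one item larger."""
    level = {frozenset([item]) for t in transactions for item in t}
    frequent = {}
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c for c, n in counts.items() if n >= min_support}
        frequent.update({c: counts[c] for c in survivors})
        # candidates for the next level: unions that are one item larger
        level = {a | b for a in survivors for b in survivors
                 if len(a | b) == len(a) + 1}
    return frequent

txns = [frozenset(t) for t in (("milk", "bread"),
                               ("milk", "bread", "eggs"),
                               ("bread", "eggs"))]
print(frequent_itemsets(txns, min_support=2))
```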

A Survey of Clustering Algorithms in Data Mining

胡庆林, 叶念渝, 朱明富 (Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074)

Abstract: Clustering algorithms are a very important class of techniques in the field of data mining.

Following a classification of clustering algorithms, this survey introduces, analyzes, and evaluates representative algorithms in each category.

Finally, it recommends algorithms with respect to the cluster shapes they can discover, the databases they suit, and their sensitivity to the order of the input data, as a reference for choosing a clustering algorithm.

Keywords: data mining, cluster analysis, clustering algorithms. CLC number: TP301.6

1 Introduction

Data mining is the process of extracting implicit, previously unknown, yet potentially useful information and knowledge from large volumes of incomplete, noisy, fuzzy, and random data.

When people use data mining tools to identify models and relationships in data, clustering is usually the first step.

Therefore, choosing a good clustering algorithm for the research situation at hand is critical to the subsequent work.

Definition of clustering: clustering is the process of partitioning data into groups.

The clustering task is accomplished by determining the similarity between data items on pre-specified attributes, so that the most similar items gather into clusters.

How clustering differs from classification: in clustering, the classes are determined by the data themselves; in classification, the classes are defined in advance by the data analyst.

Classification of clustering algorithms: they generally fall into five categories, namely hierarchical, partitioning, density-based, grid-based, and model-based methods.

2 Hierarchical Clustering Algorithms

Hierarchical clustering algorithms decompose the given data objects hierarchically.

Depending on whether the hierarchical decomposition proceeds bottom-up or top-down, they are divided into agglomerative (bottom-up) and divisive (top-down) algorithms.

2.1 The agglomerative idea: initially, every member is a cluster of its own; subsequent iterations merge mutually close clusters into new clusters, until all members form a single cluster.

Representative algorithms: the single-link, complete-link, and average-link algorithms.

2.1.1 Single-link algorithm. The main idea is to find maximal connected subgraphs: two clusters are merged if at least one edge connects them and the shortest distance between them is at most a given threshold.

2.1.2 Complete-link algorithm. This algorithm looks for a clique rather than a connected component; a clique is a maximal graph in which every pair of vertices is joined by an edge.
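A minimal sketch of agglomerative clustering under these linkage rules, using SciPy's hierarchical-clustering routines (the toy data and the distance threshold are invented):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Invented toy data; each row is one object.
X = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])

Z_single = linkage(X, method="single")      # merge on smallest inter-cluster edge
Z_complete = linkage(X, method="complete")  # merge on largest pairwise distance
Z_average = linkage(X, method="average")    # merge on mean pairwise distance

# Cut the single-link dendrogram at distance threshold 1.0, mirroring the
# threshold-based merging described above.
print(fcluster(Z_single, t=1.0, criterion="distance"))
```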

Data Mining: Concepts and Techniques (slides)

User GUI API, data cube API
Mining results
Layer 4: user interface
OLAP engine
Layer 3: OLAP/OLAM
Steps of the KDD process (continued)
Choosing the mining algorithm(s); data mining: the search for interesting patterns; pattern evaluation and knowledge presentation
Visualization, transformation, removal of redundant patterns, etc.
Use of the discovered knowledge
Data mining and business intelligence
Increasing the potential to support business decision-making
Decision making
Data presentation; visualization techniques
Data mining; information discovery
We are drowning in data but starving for knowledge. The solution: data warehousing and data mining.
Data warehousing and online analytical processing (OLAP); extracting interesting knowledge (rules, regularities, patterns, constraints, etc.) from the data in large databases.
The evolution of data processing technology
1960s: data collection, database creation, IMS and network DBMS
1970s: the relational data model and relational DBMS implementations
Customer profiling
Data mining can tell us which kinds of customers buy which products (clustering or classification).
Identifying customer requirements
Identifying the best products for different customers; using prediction to find the factors that attract new customers
Providing summary information
Various multidimensional summary reports; statistical summary information (central tendency and variance of the data)
Corporate analysis and risk management
[Slide residue duplicating earlier slides: the KDD steps (search for interesting patterns; visualization; transformation; removal of redundant patterns; use of discovered knowledge); the business-intelligence pyramid (end users making decisions; business analysts with data presentation and visualization; data analysts with data mining, data exploration, OLAP/MDA, statistical analysis, query and reporting; DBAs with data warehouses, data marts, and data sources such as documents, files, information providers, and OLTP database systems); and the data mining system architecture (data cleaning, integration, and filtering over databases or a data warehouse; data mining engine; pattern evaluation; graphical user interface; knowledge base); concept description]

English-Chinese Translation of Clustering Literature

School of Electrical and Information Engineering, foreign-literature translation. English title: Data mining - clustering. Chinese title: 数据挖掘—聚类分析. Major: Automation. Name: ****. Class and student ID: ****. Supervisor: ******. Source: Data Mining, Ian H. Witten and Eibe Frank. April 26, 2010.

Clustering

5.1 INTRODUCTION

Clustering is similar to classification in that data are grouped. However, unlike classification, the groups are not predefined. Instead, the grouping is accomplished by finding similarities between data according to characteristics found in the actual data. The groups are called clusters. Some authors view clustering as a special type of classification. In this text, however, we follow a more conventional view in that the two are different. Many definitions for clusters have been proposed:
● Set of like elements. Elements from different clusters are not alike.
● The distance between points in a cluster is less than the distance between a point in the cluster and any point outside it.

A term similar to clustering is database segmentation, where like tuples (records) in a database are grouped together. This is done to partition or segment the database into components that then give the user a more general view of the data. In this text, we do not differentiate between segmentation and clustering. A simple example of clustering is found in Example 5.1. This example illustrates the fact that determining how to do the clustering is not straightforward.

As illustrated in Figure 5.1, a given set of data may be clustered on different attributes. Here a group of homes in a geographic area is shown. The first type of clustering is based on the location of the home. Homes that are geographically close to each other are clustered together. In the second clustering, homes are grouped based on the size of the house.

Clustering has been used in many application domains, including biology, medicine, anthropology, marketing, and economics. Clustering applications include plant and animal classification, disease classification, image processing, pattern recognition, and document retrieval. One of the first domains in which clustering was used was biological taxonomy. Recent uses include examining Web log data to detect usage patterns.

When clustering is applied to a real-world database, many interesting problems occur:
● Outlier handling is difficult. Here the elements do not naturally fall into any cluster. They can be viewed as solitary clusters. However, if a clustering algorithm attempts to find larger clusters, these outliers will be forced to be placed in some cluster. This process may result in the creation of poor clusters by combining two existing clusters and leaving the outlier in its own cluster.
● Dynamic data in the database implies that cluster membership may change over time.
● Interpreting the semantic meaning of each cluster may be difficult. With classification, the labeling of the classes is known ahead of time. However, with clustering, this may not be the case. Thus, when the clustering process finishes creating a set of clusters, the exact meaning of each cluster may not be obvious. Here is where a domain expert is needed to assign a label or interpretation for each cluster.
● There is no one correct answer to a clustering problem. In fact, many answers may be found. The exact number of clusters required is not easy to determine. Again, a domain expert may be required. For example, suppose we have a set of data about plants that have been collected during a field trip.
Without any prior knowledge of plant classification, if we attempt to divide this set of data into similar groupings, it would not be clear how many groups should be created.
● Another related issue is what data should be used for clustering. Unlike learning during a classification process, where there is some a priori knowledge concerning what the attributes of each classification should be, in clustering we have no supervised learning to aid the process. Indeed, clustering can be viewed as similar to unsupervised learning.

We can then summarize some basic features of clustering (as opposed to classification):
● The (best) number of clusters is not known.
● There may not be any a priori knowledge concerning the clusters.
● Cluster results are dynamic.

The clustering problem is stated as shown in Definition 5.1. Here we assume that the number of clusters to be created is an input value, $k$. The actual content (and interpretation) of each cluster $K_j$, $1 \le j \le k$, is determined as a result of the function definition. Without loss of generality, we will view the result of solving a clustering problem as a set of clusters being created: $K = \{K_1, K_2, \ldots, K_k\}$.

DEFINITION 5.1. Given a database $D = \{t_1, t_2, \ldots, t_n\}$ of tuples and an integer value $k$, the clustering problem is to define a mapping $f: D \to \{1, \ldots, k\}$ where each $t_i$ is assigned to one cluster $K_j$, $1 \le j \le k$. A cluster $K_j$ contains precisely those tuples mapped to it; that is, $K_j = \{t_i \mid f(t_i) = K_j, 1 \le i \le n, \text{ and } t_i \in D\}$.

A classification of the different types of clustering algorithms is shown in Figure 5.2. Clustering algorithms themselves may be viewed as hierarchical or partitional. With hierarchical clustering, a nested set of clusters is created. Each level in the hierarchy has a separate set of clusters. At the lowest level, each item is in its own unique cluster. At the highest level, all items belong to the same cluster. With hierarchical clustering, the desired number of clusters is not input. With partitional clustering, the algorithm creates only one set of clusters. These approaches use the desired number of clusters to drive how the final set is created. Traditional clustering algorithms tend to be targeted to small numeric databases that fit into memory. There are, however, more recent clustering algorithms that look at categorical data and are targeted to larger, perhaps dynamic, databases. Algorithms targeted to larger databases may adapt to memory constraints by either sampling the database or using data structures, which can be compressed or pruned to fit into memory regardless of the size of the database. Clustering algorithms may also differ based on whether they produce overlapping or nonoverlapping clusters. Even though we consider only nonoverlapping clusters, it is possible to place an item in multiple clusters. In turn, nonoverlapping clusters can be viewed as extrinsic or intrinsic. Extrinsic techniques use labeling of the items to assist in the classification process. These algorithms are the traditional classification supervised learning algorithms in which a special input training set is used. Intrinsic algorithms do not use any a priori category labels, but depend only on the adjacency matrix containing the distance between objects. All algorithms we examine in this chapter fall into the intrinsic class.

The types of clustering algorithms can be further classified based on the implementation technique used. Hierarchical algorithms can be categorized as agglomerative or divisive.
"Agglomerative" implies that the clusters are created in a bottom-up fashion, while divisive algorithms work in a top-down fashion. Although both hierarchical and partitional algorithms could be described using the agglomerative vs. divisive label, it typically is more associated with hierarchical algorithms. Another descriptive tag indicates whether each individual element is handled one by one, serial (sometimes called incremental), or whether all items are examined together, simultaneous. If a specific tuple is viewed as having attribute values for all attributes in the schema, then clustering algorithms could differ as to how the attribute values are examined. As is usually done with decision tree classification techniques, some algorithms examine attribute values one at a time, monothetic. Polythetic algorithms consider all attribute values at one time. Finally, clustering algorithms can be labeled based on the mathematical formulation given to the algorithm: graph theoretic or matrix algebra. In this chapter we generally use the graph approach and describe the input to the clustering algorithm as an adjacency matrix labeled with distance measures.

We discuss many clustering algorithms in the following sections. This is only a representative subset of the many algorithms that have been proposed in the literature. Before looking at these algorithms, we first examine possible similarity measures and examine the impact of outliers.

5.2 SIMILARITY AND DISTANCE MEASURES

There are many desirable properties for the clusters created by a solution to a specific clustering problem. The most important one is that a tuple within one cluster is more like tuples within that cluster than it is similar to tuples outside it. As with classification, then, we assume the definition of a similarity measure, $sim(t_i, t_l)$, defined between any two tuples, $t_i, t_l \in D$. This provides a more strict and alternative clustering definition, as found in Definition 5.2. Unless otherwise stated, we use the first definition rather than the second. Keep in mind that the similarity relationship stated within the second definition is a desirable, although not always obtainable, property.

A distance measure, $dis(t_i, t_j)$, as opposed to similarity, is often used in clustering. The clustering problem then has the desirable property that, given a cluster $K_j$, $\forall t_{jl}, t_{jm} \in K_j$ and $t_i \notin K_j$, $dis(t_{jl}, t_{jm}) \le dis(t_{jl}, t_i)$.

Some clustering algorithms look only at numeric data, usually assuming metric data points. Metric attributes satisfy the triangular inequality. The cluster can then be described by using several characteristic values. Given a cluster $K_m$ of $N$ points $\{t_{m1}, t_{m2}, \ldots, t_{mN}\}$, we make the following definitions [ZRL96]:

Here the centroid is the "middle" of the cluster; it need not be an actual point in the cluster. Some clustering algorithms alternatively assume that the cluster is represented by one centrally located object in the cluster called a medoid. The radius is the square root of the average mean squared distance from any point in the cluster to the centroid, and the diameter is the square root of the average mean squared distance between all pairs of points in the cluster. We use the notation $M_m$ to indicate the medoid for cluster $K_m$.

Many clustering algorithms require that the distance between clusters (rather than elements) be determined. This is not an easy task given that there are many interpretations for distance between clusters. Given clusters $K_i$ and $K_j$, there are several standard alternatives to calculate the distance between clusters.
A representative list is:
● Single link: smallest distance between an element in one cluster and an element in the other. We thus have $dis(K_i, K_j) = \min(dis(t_{il}, t_{jm}))$, $\forall t_{il} \in K_i, t_{il} \notin K_j$ and $\forall t_{jm} \in K_j, t_{jm} \notin K_i$.
● Complete link: largest distance between an element in one cluster and an element in the other. We thus have $dis(K_i, K_j) = \max(dis(t_{il}, t_{jm}))$, $\forall t_{il} \in K_i, t_{il} \notin K_j$ and $\forall t_{jm} \in K_j, t_{jm} \notin K_i$.
● Average: average distance between an element in one cluster and an element in the other. We thus have $dis(K_i, K_j) = \mathrm{mean}(dis(t_{il}, t_{jm}))$, $\forall t_{il} \in K_i, t_{il} \notin K_j$ and $\forall t_{jm} \in K_j, t_{jm} \notin K_i$.
● Centroid: if clusters have a representative centroid, then the centroid distance is defined as the distance between the centroids. We thus have $dis(K_i, K_j) = dis(C_i, C_j)$, where $C_i$ is the centroid for $K_i$ and similarly for $C_j$.
● Medoid: using a medoid to represent each cluster, the distance between the clusters can be defined by the distance between the medoids: $dis(K_i, K_j) = dis(M_i, M_j)$.

5.3 OUTLIERS

As mentioned earlier, outliers are sample points with values much different from those of the remaining set of data. Outliers may represent errors in the data (perhaps a malfunctioning sensor recorded an incorrect data value) or could be correct data values that are simply much different from the remaining data. A person who is 2.5 meters tall is much taller than most people. In analyzing the height of individuals, this value probably would be viewed as an outlier.

Some clustering techniques do not perform well with the presence of outliers. This problem is illustrated in Figure 5.3. Here if three clusters are found (solid line), the outlier will occur in a cluster by itself. However, if two clusters are found (dashed line), the two (obviously) different sets of data will be placed in one cluster because they are closer together than the outlier. This problem is complicated by the fact that many clustering algorithms actually have as input the number of desired clusters to be found.

Clustering algorithms may actually find and remove outliers to ensure that they perform better. However, care must be taken in actually removing outliers. For example, suppose that the data mining problem is to predict flooding. Extremely high water level values occur very infrequently, and when compared with the normal water level values may seem to be outliers. However, removing these values may not allow the data mining algorithms to work effectively because there would be no data that showed that floods ever actually occurred.

Outlier detection, or outlier mining, is the process of identifying outliers in a set of data. Clustering, or other data mining, algorithms may then choose to remove or treat these values differently. Some outlier detection techniques are based on statistical techniques. These usually assume that the set of data follows a known distribution and that outliers can be detected by well-known tests such as discordancy tests. However, these tests are not very realistic for real-world data because real-world data values may not follow well-defined data distributions. Also, most of these tests assume a single attribute value, and many attributes are involved in real-world datasets. Alternative detection techniques may be based on distance measures.
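A minimal sketch computing four of these between-cluster distances for two small clusters (the points are invented; Euclidean distance is assumed at the element level):

```python
import numpy as np

def cluster_distances(Ki, Kj):
    """Single-link, complete-link, average, and centroid distances between
    two clusters of numeric points (Euclidean at the element level)."""
    Ki, Kj = np.asarray(Ki, float), np.asarray(Kj, float)
    # all pairwise distances between elements of the two clusters
    pair = np.linalg.norm(Ki[:, None, :] - Kj[None, :, :], axis=2)
    return {
        "single": float(pair.min()),       # min over all cross-cluster pairs
        "complete": float(pair.max()),     # max over all cross-cluster pairs
        "average": float(pair.mean()),     # mean over all cross-cluster pairs
        "centroid": float(np.linalg.norm(Ki.mean(axis=0) - Kj.mean(axis=0))),
    }

print(cluster_distances([[0, 0], [1, 0]], [[4, 3], [5, 3]]))
```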

Glossary of Data Mining Terms

Chapter 1. 1. Data Mining: the non-trivial process of extracting valid, novel, potentially useful, and ultimately understandable patterns from the large amounts of data stored in databases, data warehouses, or other information repositories.

2. Artificial Intelligence: a new technological science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence.

Artificial intelligence is a branch of computer science that seeks to understand the essence of intelligence and to produce a new kind of intelligent machine that can respond in ways similar to human intelligence.

3. Machine Learning: the study of how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills, and reorganize existing knowledge structures to keep improving their own performance.

4. Knowledge Engineering: applying the principles and methods of artificial intelligence to provide the means for solving application problems that require expert knowledge.

5. Information Retrieval: the process and techniques of organizing information in a certain way and finding relevant information according to the needs of information users.

6. Data Visualization: the study of the visual representation of data, where the visual representation is defined as information abstracted in some schematic form, including the attributes and variables of the corresponding information units.

7. Online Transaction Processing (OLTP) systems: collect and process transaction-related data in real time and keep the state of shared databases and other files up to date.

In online transaction processing, transactions are executed immediately; this contrasts with batch processing, where a batch of transactions is stored for a period of time and then executed.

8. Online Analytical Processing (OLAP): a class of software technologies that enable analysts, managers, or executives to access information quickly, consistently, and interactively from multiple perspectives, thereby gaining deeper insight into the data.

9. Decision Support Systems: computer applications that assist decision makers in semi-structured or unstructured decision making through data, models, and knowledge, in an interactive human-machine fashion.

They provide decision makers with an environment for analyzing problems, building models, and simulating decision processes and alternatives, invoking various information resources and analysis tools to help decision makers improve the level and quality of their decisions.

Cluster Analysis in Data Mining (slides)
Border Point
❖Border Point: points with low density but in the neighbourhood of a core point
❖ Noise Point: points that are neither core points nor border points
DBSCAN
[Figure: two panels illustrating "directly density-reachable" (a point p in the Eps-neighbourhood of a core point q) and "density-reachable" (p reachable from q through a chain of directly density-reachable points)]
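A bare-bones DBSCAN sketch built on these notions; the data, eps, and min_pts values are invented, and a production implementation would use a spatial index for the neighbourhood queries:

```python
import numpy as np

def neighbours(X, i, eps):
    """Indices of the points in the Eps-neighbourhood of point i."""
    return np.flatnonzero(np.linalg.norm(X - X[i], axis=1) <= eps)

def dbscan(X, eps, min_pts):
    """Bare-bones DBSCAN: grow clusters from core points; border points
    join a nearby core point's cluster; the rest stay noise (-1)."""
    labels = np.full(len(X), -1)
    cluster = 0
    for i in range(len(X)):
        if labels[i] != -1 or len(neighbours(X, i, eps)) < min_pts:
            continue  # already clustered, or not a core point
        labels[i] = cluster
        seeds = list(neighbours(X, i, eps))
        while seeds:                       # expand via density-reachability
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster
                nj = neighbours(X, j, eps)
                if len(nj) >= min_pts:     # j is itself a core point
                    seeds.extend(nj)
        cluster += 1
    return labels

X = np.array([[0, 0], [0, 0.3], [0.3, 0], [5, 5], [5, 5.2], [9, 9]])
print(dbscan(X, eps=0.5, min_pts=2))      # -> two clusters and one noise point
```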
K-Means Revisited
model parameters
latent parameters
Expectation Maximization and Gaussian Mixtures
m: the number of data points; n: the number of mixture components; z_ij: whether instance i is generated by the j-th Gaussian
❖Choose K cluster centres randomly.
❖Each data point is assigned to its closest centroid.
❖Use the mean of each cluster to update each centroid.
❖Repeat until there are no new assignments (see the sketch below).
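A minimal sketch of those four steps (toy data invented; the loop stops when the centroids stop moving, a common stand-in for "no new assignments"):

```python
import numpy as np

def kmeans(X, k, rng=np.random.default_rng(0)):
    """Plain K-means following the four steps above (no empty-cluster
    handling; toy usage only)."""
    # 1. choose K cluster centres randomly (here: k distinct data points)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    while True:
        # 2. assign each data point to its closest centroid
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # 3. use the mean of each cluster to update each centroid
        new_centres = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # 4. repeat until the centroids stop moving
        if np.allclose(new_centres, centres):
            return labels, centres
        centres = new_centres

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
print(kmeans(X, 2)[0])
```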
$s(i) = \frac{b(i) - a(i)}{\max\{b(i), a(i)\}}$
Silhouette
[Figure: scatter plot of sample points in four clusters alongside their silhouette values]
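A minimal per-point silhouette computation matching the formula above (toy data invented; every cluster is assumed to have at least two points):

```python
import numpy as np

def silhouette(X, labels):
    """Per-point silhouette s(i) = (b(i) - a(i)) / max{b(i), a(i)}, where
    a(i) is the mean distance to the point's own cluster and b(i) the mean
    distance to the nearest other cluster."""
    X = np.asarray(X, float)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = np.empty(len(X))
    for i, li in enumerate(labels):
        own = (labels == li) & (np.arange(len(X)) != i)
        a = dists[i, own].mean()
        b = min(dists[i, labels == lj].mean()
                for lj in np.unique(labels) if lj != li)
        s[i] = (b - a) / max(a, b)
    return s

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
print(silhouette(X, np.array([0, 0, 1, 1])))
```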
Examples of Clustering Applications
Earthquake studies: observed earthquake epicenters should be clustered along continent faults. Biology: categorizing genes with similar functionality. WWW
As a stand-alone tool to gain insight into the data distribution; as a preprocessing step for other algorithms
Clustering: Rich Applications and Multidisciplinary Efforts
Cluster analysis
Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters
Unsupervised learning: no predefined classes. Typical applications:
Document classification; clustering Weblog data to discover groups with similar access patterns
What Is Good Clustering?
A good clustering method will produce high quality clusters with
$d(i, j) = \sqrt[q]{|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q}$, where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional data objects, and $q$ is a positive integer
Pattern Recognition; GIS
Create thematic maps in GIS by clustering feature spaces Detect spatial clusters or for other spatial mining tasks
Image Processing; Economic Science (especially market research); Software packages
Dissimilarity matrix
(one mode)
$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$
Types of data in cluster analysis
high intra-class similarity and low inter-class similarity
The quality of a clustering result depends on both the similarity measure used by the method and its implementation The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns
The answer is typically highly subjective.
Requirements of Clustering
Scalability
Ability to deal with different types of attributes
Ability to handle dynamic data
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input parameters
Ability to deal with noise and outliers
Insensitivity to the order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability
Cluster Analysis
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Density-Based Methods
Grid-Based Methods
Outlier Analysis
Summary
Interval-scaled variables
Binary variables
Nominal variables
Ordinal variables
Ratio variables
Variables of mixed types
Interval-valued variables
Measure the Quality of Clustering
Dissimilarity/Similarity metric: similarity is expressed in terms of a distance function, typically a metric d(i, j). The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables. Weights may be assigned to different variables based on applications and data semantics. It is hard to define "similar enough" or "good enough".
Introduction to Data Mining
Instructor: Associate Prof. Ying Liu
Graduate University of Chinese Academy of Sciences Fictitious Economy and Data Science Research Center
Data Structures
Data matrix
(two modes)
$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$
What is Cluster Analysis?
Cluster: a collection of data objects
Similar to one another within the same cluster; dissimilar to the objects in other clusters
Review
Data mining—core of knowledge discovery process
[Figure: the KDD process, from databases through data cleaning and integration to the data warehouse, then selection and transformation, data mining, and pattern evaluation]
S-Plus, SPSS, SAS
Examples of Clustering Applications
Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
Land use: identification of areas of similar land use in an earth observation database
Insurance: identifying groups of motor insurance policy holders with a high average claim cost
City-planning: identifying groups of houses according to their house type, value, and geographical location