Facial Expression Recognition Based on ICA and HMM
Facial Expression Analysis and Recognition Models in Facial Expression Recognition AI

In recent years, with the continuous development of artificial intelligence, facial expression recognition AI has attracted growing attention.
As a key component of this technology, facial expression analysis and recognition models play a crucial role in achieving accurate expression recognition.
This article discusses these models in order to deepen understanding of the technology.
I. Facial Expression Analysis Models
A facial expression analysis model processes and analyzes face images to determine which expression category a face belongs to.
The model first performs face detection and facial landmark localization to accurately obtain the face region and the positions of the facial feature points.
It then extracts appearance features of the face, such as color and texture, and converts them into numerical feature vectors.
Finally, a machine learning algorithm is trained on these feature vectors to build a classifier that recognizes and categorizes different expressions.
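The extract-features-then-train-a-classifier pipeline described above can be sketched with a minimal nearest-centroid classifier standing in for SVM/ANN training; the 2-D "feature vectors" and labels below are synthetic, not from a real face dataset:

```python
import numpy as np

def train_centroids(features, labels):
    """Compute one mean feature vector (centroid) per expression class."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(centroids, feature):
    """Assign the expression class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

# Synthetic 2-D "texture feature" vectors for two expression classes.
feats = [np.array([0.0, 0.0]), np.array([0.1, 0.2]),
         np.array([5.0, 5.0]), np.array([4.8, 5.1])]
labs = ["neutral", "neutral", "happy", "happy"]
model = train_centroids(feats, labs)
print(classify(model, np.array([4.9, 5.0])))  # prints "happy"
```

A real system would replace the centroid rule with an SVM or neural network and use feature vectors extracted from detected face regions.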
At present, common facial expression analysis models fall into two groups: traditional machine learning models and deep learning models.
Traditional machine learning models, such as the support vector machine (SVM) and the artificial neural network (ANN), require features to be selected and extracted manually and are trained with specific algorithms.
Deep learning models instead learn image features automatically through multi-layer neural network structures, further improving accuracy and robustness.
In facial expression analysis, the deep convolutional neural network (CNN) is one of the classic deep learning models.
Through combinations of convolutional and pooling layers, it extracts image features efficiently.
For expression recognition, recurrent neural network (RNN) structures can additionally capture sequential information, such as expression changes over time.
These deep learning models achieve strong expression recognition accuracy and are already widely used in practice.
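The convolution and pooling operations that let a CNN extract image features can be illustrated with a hand-rolled 2-D valid convolution and 2x2 max pooling in NumPy; this shows the operations only, on a made-up 4x4 patch and filter, not a trained network:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D 'valid' convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def maxpool2x2(fmap):
    """Non-overlapping 2x2 max pooling (trailing odd rows/cols are dropped)."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(16.0).reshape(4, 4)   # toy 4x4 "face patch"
edge_kernel = np.array([[1.0, -1.0]])   # horizontal gradient filter
fmap = conv2d_valid(image, edge_kernel)  # 4x3 feature map
pooled = maxpool2x2(fmap)                # 2x1 pooled map
```

In a real CNN these operations are stacked many times with learned kernels, followed by fully connected layers for classification.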
II. Facial Expression Recognition Models
A facial expression recognition model builds on expression analysis: it identifies a specific expression by discriminating among facial expressions.
The input face image is first preprocessed, including normalization, denoising, and enhancement, to improve recognition accuracy.
A trained expression analysis model then determines the expression category of the face in the image.
Finally, the result is compared against a predefined expression library to determine the specific expression the input image conveys.
The accuracy and robustness of the recognition model depend directly on the underlying expression analysis model.
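The preprocessing step (normalization, enhancement) can be sketched as follows; the min-max normalization and histogram equalization below operate on a tiny synthetic patch and are only one plausible choice of operations:

```python
import numpy as np

def normalize(img):
    """Min-max normalize pixel intensities to the range [0, 1]."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def hist_equalize(img):
    """Histogram equalization of an 8-bit grayscale image (contrast enhancement)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize CDF to [0, 1]
    return (cdf * 255).astype(np.uint8)[img]           # map pixels through the CDF

face = np.array([[50, 52], [54, 200]], dtype=np.uint8)  # low-contrast toy patch
eq = hist_equalize(face)   # intensities spread over the full 0-255 range
```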
Face Recognition Based on PCA and ICA

刘直芳; 游志胜; 王运琼
[Journal] 《激光技术》 (Laser Technology)
[Year (Volume), Issue] 2004, 28(1)
[Abstract] A face recognition method combining principal component analysis (PCA) and independent component analysis (ICA) is proposed. The preprocessed images are first reduced in dimension: PCA removes second-order correlation and performs the dimensionality reduction. ICA is then applied to obtain the independent basis components of the face images, and these independent bases are used to construct a subspace. Finally, recognition is carried out using the projection coefficients of the image to be identified in this subspace. The proposed algorithm is compared with the traditional PCA face recognition algorithm on two different datasets; the experimental results show that the combined PCA-ICA algorithm outperforms traditional PCA.
[Pages] 4 (P78-81)
[Authors] 刘直芳; 游志胜; 王运琼
[Affiliation] Institute of Graphics and Image, Sichuan University, Chengdu 610064, China (all three authors)
[Language] Chinese
[CLC] TP391.41
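The abstract's pipeline, PCA for decorrelation and dimensionality reduction followed by recognition via projection coefficients in a subspace, can be sketched in NumPy. This toy version on synthetic vectors implements only the PCA stage and a nearest-neighbor match on the coefficients; the ICA step that the paper inserts between the two is omitted:

```python
import numpy as np

def pca_fit(X, k):
    """Fit PCA on row-vector images X (n x d): returns mean and top-k eigenvectors."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigen-decomposition of the covariance; eigh returns ascending eigenvalues.
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs[:, ::-1][:, :k]          # top-k principal axes
    return mu, W

def project(x, mu, W):
    """Projection coefficients of one image in the PCA subspace."""
    return (x - mu) @ W

rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 6))    # 10 synthetic 6-pixel "face images"
mu, W = pca_fit(gallery, k=3)
codes = np.array([project(g, mu, W) for g in gallery])

# Identify a probe by nearest projection coefficients (the abstract's final step;
# the full method would first apply ICA to these PCA outputs).
probe = gallery[4] + 1e-3 * rng.normal(size=6)
match = int(np.argmin(np.linalg.norm(codes - project(probe, mu, W), axis=1)))
```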
Research and Applications of Facial Expression Recognition Technology

I. Introduction
Facial expression recognition uses computer vision and pattern recognition to automatically identify and classify the expressions appearing in face images or video.
With the rapid development of artificial intelligence and computer vision, it has found wide use in daily life and industry.
This article reviews the research progress of facial expression recognition technology and its application areas.
II. Research Progress in Facial Expression Recognition
1. Traditional methods. Traditional facial expression recognition is based on image feature extraction and classification. Common feature extraction methods include principal component analysis (PCA) and linear discriminant analysis (LDA), which analyze texture, statistical, and shape features extracted from the face image. Classifiers such as support vector machines (SVM) and artificial neural networks then assign the expression class.
2. Deep learning methods. In recent years, with the broad adoption of deep learning, methods based on it have made notable progress, and the deep convolutional neural network (CNN) has become the mainstream model in the field. Through stacked convolution and pooling operations, a CNN automatically learns high-level semantic features of the face image and classifies them through fully connected layers.
3. Multimodal methods. To improve the accuracy and robustness of expression recognition, researchers fuse information from multiple sensors, for example combining audio signals and eye-tracking data through multimodal fusion to raise classification accuracy. Some studies also combine natural language processing techniques, such as sentiment lexicons and sentiment syntax, with expression classification for joint emotion recognition.
III. Application Areas
1. Intelligent interaction. Facial expression recognition plays an important role in intelligent interaction. A smartphone, for example, can adjust screen brightness and volume automatically based on the user's expression. It is also applied in virtual reality (VR) and augmented reality (AR) to create more natural, immersive user experiences.
2. Sentiment analysis. Facial expression recognition enables automatic identification and analysis of people's emotional states, which matters for advertising, market research, and similar fields. For example, a product's appeal can be gauged from consumers' expressions, informing business decisions.
Thesis Proposal: Research on an Eyebrow Recognition Method Based on HMM

1. Research Background
Eyebrows play an important role in face recognition, emotion analysis, and human-computer interaction.
Their shape and motion directly affect emotional expression and body language, giving them promising applications in human-computer interaction and emotion recognition.
Eyebrows are also an important feature for face recognition, since their shape and motion reflect a person's individuality and expression.
On this basis, this study will use a hidden Markov model (HMM) based eyebrow recognition method to explore how meaningful information can be extracted from eyebrow shape and motion, toward applications such as emotion recognition and face recognition.
2. Research Content
This study will proceed as follows:
(1) Collect an eyebrow motion dataset. Capture eyebrow shape and motion data with equipment such as high-speed cameras, facial trackers, and infrared cameras, then preprocess the data and extract features.
(2) Build the HMM. Based on the preliminary work, build an HMM for eyebrow shape and motion that describes the state transitions and emission probability distributions of eyebrow movement.
(3) Implement the recognition algorithm. Design and implement an HMM-based eyebrow recognition algorithm from the model above, and verify it experimentally.
(4) Analysis and application. Analyze the experimental results, evaluate the algorithm's performance, and discuss how to apply it in fields such as emotion recognition and face recognition.
3. Research Significance
(1) This study will investigate the regularities of eyebrow motion in depth and propose an HMM-based eyebrow recognition method, offering new approaches for face recognition and emotion recognition.
(2) It will help improve machine perception in human-computer interaction and virtual reality, providing technical support for affective computing and behavior recognition.
(3) It will also promote interdisciplinary exchange between computer science, psychology, and sociology, serving as a reference and example for cross-disciplinary research.
Applications of Face Recognition Technology in Video Investigation

Abstract: Traditionally, monitoring footage and searching for information on target individuals were handled manually.
Constrained by human attention, observational skill, knowledge, and subjective judgment, such work inevitably suffers oversights and omissions.
This paper focuses on face recognition technology, which integrates computer image processing with biostatistics: facial feature points are extracted from video by image processing, mathematical models are built, and recognition is carried out on biostatistical principles.
On top of existing video resources, and by optimizing and structuring deep learning algorithms, face recognition enables intelligent storage, comparison, and retrieval, so that face information can be collected, captured, and processed quickly and efficiently, supporting real-time identification of suspects and effective enforcement.
Keywords: face recognition; video investigation; application
1. Overview of Face Recognition Technology
Face recognition is based on human facial features: given an input face image or video stream, the system first judges whether a face is present; if so, it reports the size and position of each face and of the main facial organs, then extracts identity features and compares them against known faces to identify the person.
The process relies on a chain of techniques, including face image acquisition, face localization, recognition preprocessing, and identity confirmation and lookup. It is non-compulsory and contactless, and compared with biometric techniques such as fingerprint scanning and iris recognition it has distinct advantages: the face is the most intuitive information source visible to the naked eye; data can be acquired without the subject's awareness; and recognition is accurate, fast, and reliable, with a low false-acceptance rate. Even low-quality images or video can be professionally processed by an image processing system to bring out facial detail, giving investigators leads, narrowing the scope of an investigation, and speeding case resolution. These advantages of directness, efficiency, precision, and ease of acquisition are why face recognition is applied in video investigation work.
2. Application Workflow of Face Recognition
2.1 Face capture. The system ingests real-time video streams (road surveillance, highway checkpoints, social video, etc.) or face snapshots and video returned from capture devices, judges whether a face is present, and separates out the face data.
2.2 Face detection. Determine whether facial features exist in the image.
If they do, report the coordinates and region size of each face in the image; for a real-time video stream, additionally track how each detected face and its position change over time.
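As a toy stand-in for the detection step, the sketch below locates a known patch in a synthetic image by exhaustive normalized cross-correlation; production systems use learned detectors (Haar cascades, CNNs) rather than this brute-force template search, and all data here is synthetic:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def find_template(image, template):
    """Slide the template over the image; return top-left corner of best match."""
    th, tw = template.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            score = ncc(image[i:i+th, j:j+tw], template)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, size=(12, 12))
face = rng.uniform(0, 1, size=(4, 4))   # stand-in "face template"
scene[5:9, 3:7] = face                   # embed the template at (5, 3)
pos, score = find_template(scene, face)  # recovers the embedded location
```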
Real-Time Expression Recognition Based on Support Vector Machines

张鹏, 贾银山, 刘陪胜

Abstract: Recognizing facial expressions and inferring emotion from real-time images is a challenging research topic. This paper presents a method for recognizing facial expressions in real time from video images. The ASM method and an improved LK optical flow algorithm are used for facial feature localization and feature tracking, and the extracted facial feature displacements are taken as the input of a support vector machine classifier. Experiments show that the combination of support vector machines and feature tracking recognizes facial expressions well.

[Fig. 1: labeling and tracking of facial feature points]

Facial expression is a way of characterizing emotion, cognition, and physical state, and their roles in social interaction [1]. From the standpoint of automatic recognition, a facial expression can be viewed as a deformation of the components of the face and their spatial relations, or as a change in facial color. Research on automatic facial expression recognition therefore centers on representing and classifying the static or dynamic characteristics of these deformations or color changes. This paper proposes a method that recognizes facial expressions in real-time images.
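The feature tracking mentioned in the abstract builds on the Lucas-Kanade (LK) optical flow idea: solve a least-squares system of image gradients for the motion of a patch between frames. A minimal classical LK estimate on a synthetic moving Gaussian blob (not the paper's improved variant, and with no ASM localization) might look like:

```python
import numpy as np

def lucas_kanade_patch(I1, I2):
    """Least-squares LK flow estimate (vx, vy) for one whole patch."""
    Iy, Ix = np.gradient(I1)            # np.gradient returns d/drow, d/dcol
    It = I2 - I1                        # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                            # (vx, vy) in pixels

# Gaussian blob shifted right by 0.4 px between two frames.
y, x = np.mgrid[0:21, 0:21].astype(float)
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 10.0) ** 2) / 8.0)
v = lucas_kanade_patch(blob(10.0), blob(10.4))   # v[0] close to 0.4, v[1] near 0
```

In the paper's setting, displacements like these, measured at ASM-located facial feature points, become the SVM's input features.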
2 Expression Classification

2.1 Support Vector Machines
The support vector machine (SVM) [6] is a classifier based on the structural risk minimization principle.

As shown in Fig. 1, the active shape model (ASM) is used to locate the facial features, from which the accurate positions and displacements of the other feature points are obtained. This eliminates feature-displacement errors caused by the distance to the camera and by head motion (such as translation or head tilt), improving recognition accuracy.

(Microcomputer Applications, Vol. 26, 2010)
An Expression Recognition Method Based on a Hybrid Classifier
张志平; 汪庆淼
[Journal] 《计算机工程》 (Computer Engineering)
[Year (Volume), Issue] 2010, 36(23)
[Abstract] The hidden Markov model (HMM) is well suited to continuous dynamic sequence signals, while the support vector machine (SVM) and the K-nearest-neighbor classifier (KNN) excel at pattern classification. Exploiting these characteristics, a hybrid (HMM+KNN)+SVM classifier is designed. The HMM and the KNN first judge a test sample; when their decisions agree, the result is output directly, otherwise the SVM is brought in to re-judge the sample. Experimental results show that the resulting classifier outperforms any single classifier's decision and achieves effective expression recognition.
[Pages] 4 (P139-141, 145)
[Authors] 张志平; 汪庆淼
[Affiliation] Network Center, Soochow University, Suzhou 215006, Jiangsu, China; School of Computer Science and Technology, Soochow University, Suzhou 215006, Jiangsu, China
[Language] Chinese
[CLC] TP311.52
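The abstract's fusion rule can be written out directly; the per-classifier predictions below are stubs, where a real system would obtain them from trained HMM, KNN, and SVM models:

```python
def hybrid_decision(hmm_pred, knn_pred, svm_pred):
    """(HMM+KNN)+SVM fusion: if HMM and KNN agree, output their label;
    otherwise defer to the SVM's decision."""
    return hmm_pred if hmm_pred == knn_pred else svm_pred

# Stub predictions for two test samples from three hypothetical classifiers.
print(hybrid_decision("happy", "happy", "sad"))    # HMM and KNN agree: prints "happy"
print(hybrid_decision("fear", "sad", "surprise"))  # disagreement -> SVM: prints "surprise"
```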
A Facial Expression Recognition Method Fusing Independent Component Analysis and Support Vector Clustering

周书仁; 梁昔明
[Journal] 《计算机应用》 (Journal of Computer Applications)
[Year (Volume), Issue] 2011, 31(6)
[Pages] 4 (P1605-1608)
[Keywords] expression recognition; independent component analysis; locally constrained support vector clustering; mixture of factor analysis
[Abstract] A new facial expression recognition method based on Independent Component Analysis (ICA) and Support Vector Clustering (SVC) is proposed, aimed at the problems of facial expression feature extraction and automatic clustering. First, the facial expression features are extracted by ICA; then the radius of Locally Constrained Support Vector Clustering (LCSVC) is obtained through the inter-parameter adjustment of a Mixture of Factor Analysis (MFA) method. This effectively suppresses disturbances at the edges of the expression clusters and performs better than SVC alone. A test sample is classified by comparing the new and old radius values, and the experimental results show that the proposed method is effective.
[Authors] 周书仁; 梁昔明
[Affiliation] School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha 410114, China; School of Information Science and Engineering, Central South University, Changsha 410083, China
[Language] Chinese
[CLC] TP391.41

Clustering, as an effective means of naturally grouping data or of unsupervised classification, has attracted growing attention from researchers [1-3].
Chapter 5: Speech Recognition Based on Statistical Models (HMM)
Template matching:
- Vocabulary: W(k), 1 ≤ k ≤ N; reference templates R(k), 1 ≤ k ≤ N
- Distortion measure: Dk = D(T, R(k)), the DTW distance between the test utterance T and template R(k)
- Decision: n = argmin{Dk}, 1 ≤ k ≤ N

Statistical modeling:
- Vocabulary: W(k), 1 ≤ k ≤ N; reference models M(k), 1 ≤ k ≤ N
- Probability measure: P(T|M(k)), the probability that model M(k) generates T
- Decision: n = argmax{P(T|M(k))}, 1 ≤ k ≤ N
5.3 The Three Basic Problems of HMMs and Their Solutions

5.3.1 The three basic problems
- Model evaluation: compute P(O|λ)
- Optimal path: find Q = q1 q2 … qT
- Model training: estimate A, B, π
[Figure: an example HMM, showing its states with their transition probabilities and the emission probabilities of the symbols a and b]
P(Qj|λ) = P(qj1) P(qj2|qj1) P(qj3|qj2) … P(qjT|qjT-1)

∴ P(O, Qj|λ) = aj0,j1 bj1(o1) aj1,j2 bj2(o2) … ajT-1,jT bjT(oT)
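Evaluating P(O|λ) by enumerating every state sequence Qj, as in the expressions above, costs O(N^T); the forward algorithm obtains the same value in O(N²T). A minimal NumPy sketch with a made-up two-state, two-symbol model:

```python
import numpy as np

def forward_prob(pi, A, B, obs):
    """P(O | lambda) by the forward algorithm: alpha_t(i) = P(o_1..o_t, q_t = i)."""
    alpha = pi * B[:, obs[0]]             # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # induction: sum over predecessor states
    return alpha.sum()                    # termination

# Illustrative two-state model emitting symbols {a=0, b=1}.
pi = np.array([1.0, 0.0])                 # initial state distribution
A = np.array([[0.6, 0.4],                 # state transition matrix
              [0.2, 0.8]])
B = np.array([[0.5, 0.5],                 # emission probabilities per state
              [0.2, 0.8]])
p = forward_prob(pi, A, B, [0, 1, 1])     # P(O = a b b | lambda)
```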
A Facial Expression Recognition Model Using Support Vector Machines (IJMSC-V4-N4-5)
I.J. Mathematical Sciences and Computing, 2018, 4, 56-65. Published Online November 2018 in MECS. DOI: 10.5815/ijmsc.2018.04.05

A Facial Expression Recognition Model using Support Vector Machines
Sivaiah Bellamkonda a*, N.P. Gopalan b
a,b Department of Computer Applications, National Institute of Technology, Tiruchirappalli 620015, India
Received: 11 November 2017; Accepted: 19 July 2018; Published: 08 November 2018

Abstract
Facial Expression Recognition (FER) has gained interest among researchers due to its inevitable role in human-computer interaction. In this paper, an FER model is proposed using principal component analysis (PCA) as the dimensionality reduction technique, Gabor wavelets and local binary patterns (LBP) as the feature extraction techniques, and a support vector machine (SVM) as the classification technique. The experimentation was done on the Cohn-Kanade, JAFFE, and MMI facial expression datasets and on real-time facial expressions captured with a webcam. The proposed methods outperform the existing methods surveyed.

Index Terms: Facial Expression Recognition, Principal Component Analysis, Support Vector Machine, Gabor Wavelets, Local Binary Pattern, Machine Vision, Human Computer Interaction.

© 2018 Published by MECS Publisher. Selection and/or peer review under responsibility of the Research Association of Modern Education and Computer Science.

1. Introduction
Facial expression [1], or emotion, is an important nonverbal cue in communication. For the development of intelligent robotics, critical health care, student satisfaction, and other application areas, emotional interaction between machine and human has become fundamental, so researchers are intensely trying to improve the accuracy [15-18] of FER systems. Ekman and Friesen [2] introduced the 6 basic facial expressions (emotions) shown in Fig. 1: Angry, Disgust, Happy, Fear, Sad, and Surprise.
According to Mehrabian [3], 55% of communicative cues can be judged from facial expression, which implies that facial expression is the major modality in human communication.

Fig. 1. Six Basic Expressions from the JAFFE Database

Expressions are formed when facial muscles stretch or contract; in the Facial Action Coding System (FACS) [5], each muscle movement is considered an Action Unit (AU), and combinations of AUs form the various expressions. Detecting AUs, and hence facial expressions from AUs, is very challenging, so researchers concentrate either on active patches or on the whole face, called the holistic approach. Numerous feature extraction and classification techniques have been developed. SVM is a popular classifier for FER systems, while some researchers use neural networks [15], hidden Markov models [19, 20], and even KNNs [21, 22].

2. Related Work
Muzammil Abdulrahman [9] proposed a facial expression recognition model using the Gabor wavelet transform along with PCA and LBP: Gabor wavelets extract features from the images, and the PCA and LBP algorithms serve as dimensionality reduction techniques. The experiments were carried out on the JAFFE database.
Yanpeng Liu [1] proposed an FER model using LBP as the feature extraction technique, with features extracted from active facial patches and PCA used for dimensionality reduction. A softmax regression classifier performs the classification; experimentation was done on the Cohn-Kanade facial expression dataset.
Parth Patel [4] proposed an FER model using PCA and the discrete wavelet transform (DWT) to extract features from the face, with SVMs for classification, evaluated on the JAFFE database.
Muzammil Abdulrahman [6] proposed a model for FER using SVMs, with the PCA and LBP algorithms used to extract features.
SVMs are used for classification; experiments were conducted on both the JAFFE and Mevlana University Facial Expression (MUFE) databases.
Mahesh Kumbhar [8] proposed a model for FER using Gabor wavelets: a 2-D Gabor function extracts the features and PCA performs the dimensionality reduction. Feed-forward neural networks with 32 inputs, two hidden layers, 40 to 60 hidden neurons, and four output neurons were used for classification, with experiments on the JAFFE database.
This paper is organized into six sections. The first discusses the importance of and motivation for facial expression recognition; the second reviews the literature; the third presents the proposed FER methodology; the fourth covers the implementation of the proposed method; the fifth gives results and discussion; and the sixth concludes the paper.

3. Methodology
In this paper, we introduce a model combining three methods, Gabor wavelets, PCA, and LBP, for the FER task: PCA is used for dimensionality reduction, then Gabor wavelets or LBP are used for feature extraction. Fig. 2 shows the proposed method in detail.

Fig. 2. System Architecture of the Proposed Method

In the training phase, the system is trained with still training images that are already preprocessed (face detection and cropping). The dimension of each input face image is reduced using PCA, then the Gabor wavelet or local binary pattern technique extracts the distinctive features. An image database constructed from these features is used to train the SVM classifier.
Once the classifier is trained and has acquired the classification pattern, the system is ready to test real-world expression inputs from the webcam. In the testing phase, the input image is captured from the webcam as the real-time testing sample. Since a webcam image contains unwanted detail such as other objects and background, the face region is detected and the detected face is cropped before processing. The dimension of the input test image is then reduced using PCA as in the training phase, features are extracted with Gabor wavelets or the local binary pattern, and the obtained features are passed to the SVM classifier to find the label of the expression.

3.1. Principal Component Analysis
PCA [10-12] can be used as both a dimensionality reduction technique and a feature extraction technique: it identifies patterns in data (similarities and differences), and once patterns are found the data can be compressed, which acts as dimensionality reduction. Assume there are N images {x1, x2, x3, …, xN}, each represented as a vector of size t. PCA maps the original t-sized feature vector onto an f-sized feature subvector such that f is always smaller than t. The obtained feature vector Yi ∈ R^f is given by equation (1):

Yi = W_PCA^T xi,  i = 1, 2, 3, …, N    (1)

Here W_PCA is the transformation matrix and i indexes the images. The columns of W_PCA are the f eigenvectors with the f largest eigenvalues of the scatter matrix S_r, where S_r is defined by equation (2):

S_r = Σ_{i=1}^{N} (xi − μ)(xi − μ)^T    (2)

where μ ∈ R^t is the mean image of all the images.

3.2. Gabor Wavelets
The specialty of Gabor wavelets [11-14] is that they extract local features of an image at different orientations in both the spatial and frequency domains.
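Equations (1) and (2) of Section 3.1 can be sketched in NumPy on synthetic data (images are mean-centred before projection, consistent with the scatter-matrix definition; the data here is random, not faces):

```python
import numpy as np

def pca_transform(X, f):
    """W_PCA = top-f eigenvectors of the scatter matrix S_r; Y_i = W^T (x_i - mu)."""
    mu = X.mean(axis=0)
    Xc = X - mu
    S_r = Xc.T @ Xc                      # scatter matrix, equation (2)
    vals, vecs = np.linalg.eigh(S_r)     # eigh returns ascending eigenvalues
    W = vecs[:, ::-1][:, :f]             # eigenvectors of the f largest eigenvalues
    return Xc @ W, W                     # projected features, equation (1)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))              # 8 "images" flattened to t = 5 features
Y, W = pca_transform(X, f=2)             # reduced to f = 2 dimensions
```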
The GWs find essential characteristics of an image such as frequency selectivity, orientation selectivity, spatial localization, and the quadrature phase relationship. The GW kernel in the spatial domain uses a Gaussian envelope and can be defined by equation (3):

ψ(x, y, ϖ, θ) = (1 / (2πσ²)) exp(−(x′² + y′²) / (2σ²)) exp(iϖx′)    (3)

where (x, y) is the pixel position, ϖ is the central frequency, θ is the orientation, and σ is the standard deviation. The rotated coordinates x′ and y′ are defined by equations (4) and (5):

x′ = x cos θ + y sin θ    (4)
y′ = −x sin θ + y cos θ    (5)

For example, Fig. 3 shows the magnitude of the Gabor filter bank (GFB) at four scales and the real part of the GFB at four scales and six orientations.

Fig. 3. (a) Magnitude of the GFB at four scales; (b) Real part of the Gabor kernels at four scales and six orientations.

The GW feature representation φ_{m,n}(x, y) is obtained by convolving the GFB ψ(x, y, ϖ, θ) with the input image, as given by equation (6):

φ_{m,n}(x, y) = I(x, y) ∗ ψ(x, y, ϖ, θ)    (6)

3.3. Local Binary Patterns
LBP [5, 16] has its roots in 2-D texture analysis. LBP summarizes the local pattern of an image by comparing each pixel with its neighboring pixels: each neighbor of a center pixel is set to 1 if its intensity is greater than or equal to the center value (the threshold) and to 0 otherwise, and the resulting binary pattern is taken as the local image descriptor. The operator was initially defined on a 3x3 pixel matrix, yielding 8-bit codes from the 8 pixels around the central pixel. Fig. 4 gives an example of the basic LBP operator.

Fig. 4. LBP Operator Working Example

3.4. Support Vector Machines
SVM [7] is a supervised learning technique with associated learning algorithms that analyze data for classification or regression. The SVM training algorithm builds a model from the training data that assigns new data to one category or the other, making it a non-probabilistic binary linear classifier.
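The basic 3x3 LBP operator described in Section 3.3 can be written out directly; the sample patch below is made up:

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: neighbors clockwise from the top-left pixel,
    thresholded against the centre value."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i, j] >= c else 0 for i, j in order]
    # Interpret the bit string MSB-first as an integer in [0, 255].
    return sum(b << k for k, b in enumerate(reversed(bits)))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
code = lbp_code(patch)   # bit string 10001111 -> 143
```

Applying this code to every pixel neighborhood and histogramming the results gives the LBP texture descriptor used for feature extraction.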
When the SVM was introduced, it was used for two-category classification only; multi-class classification is a combination of two or more two-category classifications. A support vector machine maximizes the gap between classes, which also improves classification accuracy, and it can cope with the small sample sizes of FER and the large variance between different expressions.
FER is a nonlinear classification problem. An SVM uses linear algebra and geometry to separate the input data in a high-dimensional feature space through a selected nonlinear mapping function. This mapping is the kernel function, and the learning algorithm is formulated in terms of kernel functions, which include the linear, polynomial, RBF, and sigmoid kernels. In this paper, the RBF kernel function is used.
Two decomposition strategies are used: one-against-many and one-against-one. For n classes, one-against-many generates n different classifiers, while one-against-one takes each pair of classes as one SVM classifier, generating n × (n − 1)/2 sub-classifiers. FER is thus handled as a multi-class classification problem.

4. Experimentation
The proposed model was implemented in the MATLAB environment, and testing was conducted on the Cohn-Kanade, JAFFE, and MMI facial expression datasets. The Cohn-Kanade database is a benchmark dataset for FER containing 97 subjects, all university students ranging in age from 18 to 30 years; 65% are female, 15% are African-American, and 3% are Asian or Latino. Images are saved at 640x480 pixels with 8-bit grayscale values and were resized to 200x200 pixels for training.
The JAFFE database is also used in the experiment; it contains 210 images of 10 people showing 7 expressions, each image saved at a resolution of 256x256.
In our experimentation, the original images are used without being altered. The MMI database contains images of 20 subjects with 31 different expressions per subject; participants followed the FACS coding system while capturing expressions, and the images, captured at a resolution of 1200x1900 pixels, include all six basic emotions.

5. Results and Discussion
The accuracy of the system is measured as the percentage of correctly classified expressions, and a confusion matrix lists which emotions are classified correctly or wrongly. Accuracy is defined by equation (7):

Accuracy = (Number of correctly classified samples / Total number of testing samples) × 100    (7)

The experiments were conducted with the benchmark Cohn-Kanade, JAFFE, and MMI facial expression datasets. The training images were pre-arranged in respective folders for convenience of reading and database construction; the testing images are different images than the training images.

Table 1 is the confusion matrix for PCA dimensionality reduction combined with Gabor feature extraction on the JAFFE database. The average accuracy of this method is 84.17%.

Table 1. PCA with Gabor Wavelet on JAFFE Dataset
          Anger  Disgust  Fear  Happy  Sad  Surprise
Anger       82      4      3      5     4      2
Disgust      5     85      5      1     2      2
Fear         5      2     79      4     7      3
Happy        3      2      6     85     1      3
Sad          2      3      3      4    86      2
Surprise     3      4      4      1     0     88

Table 2 depicts the confusion matrix of PCA dimensionality reduction combined with Gabor feature extraction on the MMI database. The average accuracy of this method is 85.83%.

Table 2. PCA with Gabor Wavelet on MMI Dataset
          Anger  Disgust  Fear  Happy  Sad  Surprise
Anger       88      2      3      2     3      2
Disgust      3     86      3      3     3      2
Fear         3      2     82      4     6      3
Happy        2      3      6     85     1      3
Sad          2      3      3      4    86      2
Surprise     2      4      3      2     1     88

Table 3 is the confusion matrix of PCA dimensionality reduction combined with Gabor feature extraction on the Cohn-Kanade database. The average accuracy of this method is 93%.

Table 3. PCA with Gabor Wavelet on Cohn-Kanade Dataset
          Anger  Disgust  Fear  Happy  Sad  Surprise
Anger       94      2      1      2     1      2
Disgust      1     92      1      1     1      2
Fear         1      2     94      1     2      0
Happy        2      1      2     92     2      1
Sad          1      1      1      2    93      2
Surprise     1      2      1      2     1     93

Table 4 lists the confusion matrix for PCA dimensionality reduction combined with LBP extraction on the JAFFE database. The average accuracy of this method is 86%.

Table 4. PCA with LBP on JAFFE Dataset
          Anger  Disgust  Fear  Happy  Sad  Surprise
Anger       86      3      3      4     2      2
Disgust      4     86      5      1     2      2
Fear         4      2     80      4     7      3
Happy        3      2      5     87     1      2
Sad          1      3      3      3    88      2
Surprise     2      4      4      1     0     89

Table 5 is the confusion matrix for PCA dimensionality reduction combined with LBP extraction on the MMI database. The average accuracy of this method is 88.16%.

Table 5. PCA with LBP on MMI Dataset
          Anger  Disgust  Fear  Happy  Sad  Surprise
Anger       88      3      3      2     3      1
Disgust      2     88      3      3     3      1
Fear         4      2     84      4     3      2
Happy        3      2      5     87     1      2
Sad          1      2      3      3    90      2
Surprise     2      3      2      1     0     92

Table 6 is the confusion matrix for PCA dimensionality reduction combined with LBP extraction on the Cohn-Kanade database. The average accuracy of this method is 96.83%.

Table 6. PCA with LBP on Cohn-Kanade Dataset
          Anger  Disgust  Fear  Happy  Sad  Surprise
Anger       97      1      1      1     0      0
Disgust      0     98      0      0     1      1
Fear         1      0     96      2     1      0
Happy        1      1      1     95     1      1
Sad          1      0      1      1    97      0
Surprise     0      0      1      1     0     98

The summary of the average performances is listed in Table 7. The obtained results were compared with the existing methods reviewed in the literature. The combination of PCA with LBP outperforms the other methods, and the PCA dimensionality reduction with LBP feature extraction works especially well on the Cohn-Kanade dataset.

Table 7. Accuracies Obtained in Percentages
Reference            JAFFE   Cohn-Kanade   MMI
Yanpeng Liu [1]      96.30       -          -
Parth Patel [4]      96.67       -          -
Muzammil [6]         87          -         77
Mahesh [8]           72.50       -          -
Proposed PCA+Gabor   84.17     93.00      85.83
Proposed PCA+LBP     88.00     96.83      88.16

Fig. 5 shows a graphical comparison of the accuracies obtained by the proposed model and the existing models.
The graph clearly shows that the proposed model improves on the existing methods; for the Cohn-Kanade dataset in particular, the local binary pattern outperforms all the other methods.

Fig. 5. Comparison of Obtained Accuracy with Existing Models

6. Conclusion
In this paper, PCA is used for dimensionality reduction, which drastically improves the processing speed of the overall system. Gabor wavelets and local binary patterns are then used as feature extraction techniques, which improve the accuracy of the proposed system by capturing the most distinguishable features of the expressions. SVM is used as the classifier, giving better classification than the other classifier algorithms. The overall setup improves accuracy compared with existing models: with PCA for dimensionality reduction and SVM as the classifier throughout, the proposed model gives average accuracies of 84.17% for JAFFE, 85.83% for MMI, and 93.00% for Cohn-Kanade using Gabor wavelet features, and 88.00% for JAFFE, 88.16% for MMI, and 96.83% for Cohn-Kanade using LBP features.

References
[1] Yanpeng Liu, Yuwen Cao, Yibin Li, Ming Liu, Rui Song, "Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches", IEEE International Conference on Real-time Computing and Robotics, June 6-9, 2016, Angkor Wat, Cambodia, pp. 368-341, 2016.
[2] P. Ekman and W. Friesen, "Facial Action Coding System: A Technique for the Measurement of Facial Movements", Consulting Psychologists Press, California, 1978.
[3] A. Mehrabian, "Communication without Words", Psychology Today, Vol. 2, No. 4, pp. 53-56, 1968.
[4] Parth Patel, Khushali Raval, "Facial Expression Recognition Using DWT-PCA with SVM Classifier", International Journal for Scientific Research & Development, Vol. 3, Issue 03, pp. 1531-1537, 2015.
[5] Yuan Luo, Cai-ming Wu, Yi Zhang, "Facial expression recognition based on fusion feature of PCA and LBP with SVM", Optik - International Journal for Light and Electron Optics, Volume 124, Issue 17, pp. 2767-2770, 2013.
[6] Muzammil Abdurrahman, Alaa Eleyan, "Facial expression recognition using Support Vector Machines", 23rd IEEE Signal Processing and Communications Applications Conference (SIU), 16-19 May 2015, Malatya, Turkey, 2015.
[7] Anushree Basu, Aurobinda Routray, Suprosanna Shit, Alok Kanti Deb, "Human emotion recognition from facial thermal image based on fused statistical feature and multi-class SVM", 2015 Annual IEEE India Conference (INDICON), 17-20 Dec. 2015, New Delhi, India, 2015.
[8] Mahesh Kumbhar, Manasi Patil, Ashish Jadhav, "Facial Expression Recognition using Gabor Wavelet", International Journal of Computer Applications, Volume 68, No. 23, pp. 13-18, 2013.
[9] Muzammil Abdulrahman, Tajuddeen R. Gwadabe, Fahad J. Abdu, Alaa Eleyan, "Gabor Wavelet Transform Based Facial Expression Recognition Using PCA and LBP", IEEE 22nd Signal Processing and Communications Applications Conference (SIU 2014), Trabzon, Turkey, 23-25 April 2014, pp. 2265-2268, 2014.
[10] Shilpa Sharma, Kumud Sachdeva, "Face Recognition using PCA and SVM with Surf Technique", International Journal of Computer Applications, Volume 129, No. 4, pp. 41-47, 2015.
[11] Vinay A., Vinay S. Shekhar, K. N. Balasubramanya Murthy, S. Natarajan, "Face Recognition using Gabor Wavelet Features with PCA and KPCA - A Comparative Study", 3rd International Conference on Recent Trends in Computing (ICRTC-2015), Procedia Computer Science 57, pp. 650-659, 2015.
[12] Anurag De, Ashim Saha, M. C. Pal, "A Human Facial Expression Recognition Model based on Eigen Face Approach", International Conference on Advanced Computing Technologies and Applications (ICACTA-2015), pp. 282-289, 2015.
[13] Liangke Gui, Tadas Baltrusaitis, Louis-Philippe Morency, "Curriculum Learning for Facial Expression Recognition", 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), Washington, DC, USA, 2017.
[14] Zhiming Su, Jingying Chen, Haiqing Chen, "Dynamic facial expression recognition using autoregressive models", 7th International Congress on Image and Signal Processing (CISP), 14-16 Oct. 2014, Dalian, China, 2014.
[15] Mao Xu, Wei Cheng, Qian Zhao, Li Ma, Fang Xu, "Facial Expression Recognition based on Transfer Learning from Deep Convolutional Networks", 11th IEEE International Conference on Natural Computation (ICNC), 15-17 Aug. 2015, Zhangjiajie, China, 2015.
[16] Pan Z., Polceanu M., Lisetti C., "On Constrained Local Model Feature Normalization for Facial Expression Recognition", in Traum D. et al. (eds), Intelligent Virtual Agents (IVA 2016), Lecture Notes in Computer Science, Vol. 10011, Springer, Cham, 2016.
[17] Shaoping Zhu, "Pain Expression Recognition Based on pLSA Model", The Scientific World Journal, Volume 2014, 2014.
[18] Myunghoon Suk, Balakrishnan Prabhakaran, "Real-time Mobile Facial Expression Recognition System - A Case Study", 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 23-28 June 2014, Columbus, OH, USA, 2014.
[19] Zineb Elgarrai, Othmane El Meslouhi, Mustapha Kardouchi, Hakim Allali, "Robust facial expression recognition system based on hidden Markov models", International Journal of Multimedia Information Retrieval, Volume 5, Issue 4, pp. 229-236, 2016.
[20] Arnaud Ahouandjinou, Eugene Ezin, Kokou Assogba, Cina Motamed, Mikael Mousse, Bethel Atohoun, "Robust Facial Expression Recognition Using Evidential Hidden Markov Model", https://hal.archives-ouvertes.fr/hal-01448729, 2017.
[21] Guo Y., Zhao G., Pietikäinen M., "Dynamic Facial Expression Recognition Using Longitudinal Facial Expression Atlases", in Fitzgibbon A. et al. (eds), Computer Vision - ECCV 2012, Lecture Notes in Computer Science, Vol. 7573, Springer, Berlin, Heidelberg, 2012.
[22] Nikunj Bajaj, S L Happy, Aurobinda Routray, "Dynamic Model of Facial Expression Recognition based on Eigen-face Approach", Proceedings of Green Energy and Systems Conference, Long Beach, CA, USA, 25 November 2013.

Authors' Profiles
Dr. N.P. Gopalan is Professor at the Department of Computer Applications, National Institute of Technology, Tiruchirappalli, Tamil Nadu, India. He obtained his Ph.D. from the Indian Institute of Science, Bangalore, India. His research interests are in data mining, distributed computing, cellular automata, theoretical computer science, image processing, and machine intelligence.
B. Sivaiah is pursuing a Ph.D. at the Department of Computer Applications, National Institute of Technology, Tiruchirappalli. He obtained his B.Tech. and M.Tech. in Computer Science & Engineering from Jawaharlal Nehru Technological University, Hyderabad, Andhra Pradesh, India in 2007 and 2010 respectively. His current research interests include image processing, neural networks, machine learning, and intelligent systems.

How to cite this paper: Sivaiah Bellamkonda, N.P. Gopalan, "A Facial Expression Recognition Model using Support Vector Machines", International Journal of Mathematical Sciences and Computing (IJMSC), Vol. 4, No. 4, pp. 56-65, 2018. DOI: 10.5815/ijmsc.2018.04.05