Classification and Recognition of Printed Hindi Characters Based on Artificial Neural Networks (IJIGSP-V4-N6-3)
Research on a Printed Digit Recognition Algorithm Based on Neural Networks

Research on a Printed Digit Recognition Algorithm Based on Neural Networks. Abstract: Printed numeral recognition is a branch of Optical Character Recognition (OCR) and an important component of text recognition.
Using VC (Visual C++) as the platform, this work applies artificial neural networks (mainly a BP network) to recognize printed digits.
Keywords: digit recognition; image preprocessing; feature extraction; neural network. Introduction: Recognition technology is now widely applied across many fields.
To recognize the digits in an image, the image must first be processed, and the quality of this processing directly determines the quality of recognition. The steps, in order, are: reading the image, converting it to grayscale, binarizing the grayscale image according to a quantization threshold, and then segmenting the character information in the binarized image.
After this preprocessing, features are extracted and fed into a trained BP network for recognition.
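To make the thresholding step above concrete, here is a minimal Python sketch of binarization with a fixed quantization threshold; the threshold value of 128 and the toy patch are assumptions for illustration, not values from the paper.

```python
import numpy as np

def binarize(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Map a grayscale image (0-255) to a binary image: 1 for ink, 0 for background.

    Assumes dark characters on a light background, so pixels below the
    threshold are treated as character pixels.
    """
    return (gray < threshold).astype(np.uint8)

# Toy example: a 4x4 grayscale patch with a dark stroke in the second column.
patch = np.array([[250, 30, 240, 245],
                  [248, 25, 242, 250],
                  [251, 28, 239, 247],
                  [249, 27, 244, 246]], dtype=np.uint8)
print(binarize(patch))   # the second column becomes 1, the rest 0
```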
1 Recognition workflow. The workflow follows the steps outlined in the introduction and consists of two main parts: image preprocessing, and recognition of the printed digits with a neural network.
Preprocessing pipeline: image input → grayscale conversion → binarization → compaction and rearrangement → normalization → character segmentation → feature extraction.
Neural-network digit recognition pipeline: sample training → character feature input → recognition and output of the result.
2 Overview of the neural-network-based feature extraction algorithm. After preprocessing, the originally disordered characters become a neatly arranged row of characters of identical size; here each character is normalized to 8 pixels wide and 16 pixels high, which greatly simplifies feature extraction.
The extracted features are stored in a feature vector, which is then fed into the neural network to recognize the character.
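A minimal sketch, assuming the simplest possible choice of features, of how a normalized 8×16 character bitmap can be turned into a feature vector for the network's input layer; the text does not commit to this exact feature set at this point.

```python
import numpy as np

char_bitmap = np.zeros((16, 8), dtype=np.uint8)             # height 16, width 8, binary pixels
feature_vector = char_bitmap.flatten().astype(np.float32)   # one feature per pixel
print(feature_vector.shape)                                  # (128,) -> size of the input layer
```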
From the discussion above we can conclude that the feature extraction algorithm is the key to the whole recognition process; its quality directly determines whether recognition succeeds.
There are many algorithms for extracting features from characters in an image; several important ones are introduced below.
2.1 Skeleton feature extraction. Because images come from different sources, the strokes in them are drawn with different numbers of pixels, which shows up as differences in stroke thickness and makes otherwise identical characters differ greatly.
A Brief Introduction to Autoencoder Applications, and Handwritten Mongolian Word Recognition Based on Recurrent Neural Networks

Overall idea of CTC
◦ The training procedure is similar to that of a conventional neural network: build a loss function and train with the BP algorithm. The difference is that a conventional network's training criterion is per frame, i.e. it minimizes the error of each frame, whereas the CTC criterion is defined over a whole sequence (for example an entire utterance in speech recognition), e.g. maximizing p(z|x). Computing the sequence probability is complicated because one output sequence corresponds to many paths, so the forward-backward algorithm is introduced to simplify the computation.
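As an illustration of the sequence-level criterion and the forward recursion mentioned above, here is a hedged numpy sketch of the standard CTC forward (alpha) pass; it is a generic textbook version, not code from these slides, and the toy posteriors are invented.

```python
import numpy as np

def ctc_forward(probs: np.ndarray, label: list, blank: int = 0) -> float:
    """Forward (alpha) recursion of CTC.

    probs: (T, C) per-frame posteriors over C symbols (row-normalized).
    label: target sequence as symbol indices, without blanks.
    Returns p(label | input), summed over all paths that collapse to `label`.
    """
    # Extended label: blanks interleaved around every symbol, length 2*L+1.
    ext = [blank]
    for s in label:
        ext += [s, blank]
    T, S = probs.shape[0], len(ext)

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]
    if S > 1:
        alpha[0, 1] = probs[0, ext[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s - 1 >= 0:
                a += alpha[t - 1, s - 1]
            # Skipping over the previous blank is allowed unless it would
            # merge two identical consecutive labels.
            if s - 2 >= 0 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]

    # The target may end in its last symbol or in the trailing blank.
    return alpha[T - 1, S - 1] + (alpha[T - 1, S - 2] if S > 1 else 0.0)

# Toy check with uniform posteriors over {blank, 'a', 'b'} and target "ab".
uniform = np.full((4, 3), 1.0 / 3)
print(ctc_forward(uniform, [1, 2]))
```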
Number of RNN layers: 3, 5, 5, 5 (DBLSTM)
Mongolian word recognition accuracy (top-1): 72.25%, 75.1%, 76%, 79.13% (DBLSTM)
Decoding means finding, for an input sequence x, the most probable output sequence l rather than the most probable single output path, because output paths and output sequences are in a many-to-one relationship.
Exact search is computationally very expensive, so heuristic decoding methods are generally used.
Best Path Decoding
1. Sequence features are obtained with a sliding-window technique, using PCA to reduce the dimensionality of window pixels, a CNN to extract window image features, or an autoencoder to extract window image features. 2. Decoding stage: unconstrained decoding and constrained decoding.
Constrained decoding can use a language model to improve recognition accuracy.
Data preprocessing
◦ Normalize images to 128 × 64. ◦ Variable-length data: adjust the effective length according to the length of each handwritten Mongolian word, and finally pad all samples to a uniform length. ◦ Online data: construct valid online (pen-trajectory) data from the raw data.
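A small sketch of the pad-to-uniform-length step described above; the maximum length, feature dimension and padding value are made-up example choices.

```python
import numpy as np

def pad_sequences(seqs, max_len, feat_dim, pad_value=0.0):
    """Right-pad variable-length feature sequences to a common length.

    seqs: list of arrays shaped (t_i, feat_dim) with t_i <= max_len.
    Returns (batch, max_len, feat_dim) plus the original lengths,
    so the model (or a CTC loss) can ignore the padded frames.
    """
    batch = np.full((len(seqs), max_len, feat_dim), pad_value, dtype=np.float32)
    lengths = np.zeros(len(seqs), dtype=np.int32)
    for i, s in enumerate(seqs):
        batch[i, : len(s)] = s
        lengths[i] = len(s)
    return batch, lengths

# Two "words" of different lengths, each frame a 64-dimensional feature vector.
a, b = np.ones((10, 64)), np.ones((17, 64))
batch, lengths = pad_sequences([a, b], max_len=20, feat_dim=64)
print(batch.shape, lengths)   # (2, 20, 64) [10 17]
```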
In DNN training for speech recognition, every frame has a corresponding state label; for example, five input frames x1, x2, x3, x4, x5 may be labeled with states a1, a1, a1, a2, a2. CTC differs in that it introduces a blank output state, and outputs and labels satisfy the equivalence F(a−ab−) = F(−aa−−abb) = aab: many output paths map to one output sequence.
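The path-to-sequence mapping and the best-path decoding discussed above can be sketched as follows; here '-' stands for the blank, and this is a generic illustration rather than the implementation behind these slides.

```python
import numpy as np

def collapse(path: str, blank: str = "-") -> str:
    """Map an output path to its label sequence: merge repeats, then drop blanks."""
    out, prev = [], None
    for ch in path:
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return "".join(out)

# The equivalence from the text: both paths collapse to "aab".
assert collapse("a-ab-") == collapse("-aa--abb") == "aab"

def best_path_decode(probs: np.ndarray, symbols, blank_index=0) -> str:
    """Greedy (best-path) decoding: take the most likely symbol per frame, then collapse."""
    path = "".join(symbols[int(row.argmax())] for row in probs)
    return collapse(path, blank=symbols[blank_index])

probs = np.array([[0.6, 0.3, 0.1],   # '-' most likely
                  [0.1, 0.8, 0.1],   # 'a'
                  [0.2, 0.7, 0.1],   # 'a'
                  [0.1, 0.2, 0.7]])  # 'b'
print(best_path_decode(probs, symbols=["-", "a", "b"]))   # "ab"
```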
Real-Time Recognition of Unsegmented Handwritten Devnagari Signatures Based on Artificial Neural Networks (IJIGSP-V5-N4-4)

I.J. Image, Graphics and Signal Processing, 2013, 4, 30-37. Published Online April 2013 in MECS. DOI: 10.5815/ijigsp.2013.04.04. Real Time Recognition of Handwritten Devnagari Signatures without Segmentation Using Artificial Neural Network. Shailendra Kumar Dewangan, Assistant Professor, Department of Electronics & Instrumentation Engg, Chhatrapati Shivaji Institute of Technology, Durg, Chhattisgarh, India. Abstract—Handwritten signatures are the most commonly used method for authentication of a person as compared to other biometric authentication methods. For this purpose Neural Networks (NN) can be applied in the process of verification of handwritten signatures that are electronically captured. This paper presents a real time or online method for recognition and verification of handwritten signatures using an NN architecture. Various features of the signature such as height, length, slant, Hu's moments etc. are extracted and used for training of the NN. The objective of online signature verification is to decide whether a signature originates from a given signer. This recognition and verification process is based on the instant signature image obtained from the genuine signer and a few images of the original signatures which are already part of the reference database. The process of Devnagari signature verification can be divided into the sub-processes of pre-processing, feature extraction, feature matching, feature comparison and classification. This stepwise analysis allows us to gain better control over the precision of the different components. Index Terms—Artificial Neural Networks (ANN), Handwritten Signature Verification (HSV), Hu's moment invariants, Real time recognition method, Signature Recognition. I. INTRODUCTION. Development of a signature recognition system for Devnagari is difficult because there are compound character shapes in the script and the characters in words are topologically connected. Here our focus is on the recognition of online handwritten Devnagari (Hindi) signatures that can be used in common applications like bank cheques, commercial forms, government records, bill processing systems, post code recognition, signature verification, passport readers, and online document recognition generated by the expanding technological society. Handwriting has continued to persist as a means of communication and recording information in day-to-day life even with the introduction of new technologies [1]. Challenges in handwritten signature recognition lie in the variation and distortion of handwritten signatures, since different people may use a different style of handwriting and direction to draw the same shape of any Devnagari character [2]. This overview describes the nature of handwritten language, how it is translated into electronic data, and the basic concepts behind written language recognition algorithms. Handwritten Devnagari signatures are imprecise in nature, as their corners are not always sharp, lines are not perfectly straight, and curves are not necessarily smooth, unlike a printed signature. Furthermore, a Devnagari character can be drawn in different sizes and orientations, in contrast to handwriting which is often assumed to be written on a baseline in an upright position [3]. Therefore, a robust online Devnagari handwritten signature recognition system has to account for all of these factors. An approach using an Artificial Neural Network is considered for recognition of handwritten Devnagari signatures. In this paper a method is proposed for the recognition of Devnagari handwritten signatures.
The novelty of this proposed work is that no segmentation is required. In Section 1 the proposed work is introduced, along with the objective and motivation of this work; previous work related to this theme and the architecture of the system are also discussed briefly. In Section 2 the methodology is discussed along with a brief description of the performance parameters used for obtaining the results. Section 3 gives the result analysis and discussion. Finally, Section 4 gives a brief conclusion and the scope for further work. A. Proposed Work. This proposed work requires a sample database of handwritten Devnagari signatures and observes various features of the signature sample. On the basis of its comparison with the signature samples available in the database, a result is produced about the authenticity of the signature sample. This algorithm can be applied to different types of applications where human authentication is required. B. Objective. The objective of this work is to design a system for recognition of handwritten signatures in the Devnagari language. Our first goal is to find the method or function which gives the maximum accuracy for recognizing a signature when it is arbitrarily selected and compared with the signature database. For these types of images it is very difficult to choose a function which is best for the process of feature extraction, comparison and standardization. C. Motivation. The motivation behind the project is the growing need for a foolproof signature verification scheme which can guarantee the maximum possible security against fake signatures. The prospect of minimizing the memory required for storing the extracted feature points has also been a driving force in the commencement of this project. D. Previous Work. Devnagari script recognition is very challenging due to the complex structural properties of the script. Two different methods that can be used for extracting features from handwritten Devnagari characters are the Curvelet Transform and Character Geometry [1]. Topographical features of strokes, visible with respect to views from different directions, can be successfully used for recognition of faces, handwritten signatures and printed characters [2]. Three feature extraction techniques, Hu's Moment Invariants, Zernike moments and Krawtchouk moments, can be used for recognition of signatures. These methods provide statistical data which can be tested for various parameters like the length of the feature vector, redundancy and accuracy [3]. A Neural Network based on-line signature verification system can be designed by using a set of feature vectors that represent the characters of the signatures. By forming and verifying feature sets, the false acceptance (FA) and false rejection (FR) rates can be reduced [4]. A writer-independent offline handwritten signature verification system based on grey level feature extraction and the Real Adaboost algorithm can be designed by using both global and local features of the script [5]. Object recognition can also be successfully performed by using methods based on image moments. Various types of moments (geometric moments, complex moments) and moment-based invariants with respect to various image degradations and distortions (rotation, scaling, affine transform, image blurring, etc.) can be used as shape descriptors for classification [6].
Efficient numerical algorithms can also be used for moment computation, and practical examples of using moment invariants in real applications have been demonstrated [7]. E. Architecture of Recognition System. The entire process of recognition of a handwritten signature can be categorized into six sections. The architecture of the signature recognition system is shown in Fig. 1. (a) The first section is to obtain a sample signature image from the signer. This sample image is later compared with the database, which is generated by collecting genuine signatures from various human subjects. (b) In the second section the loaded sample signature image is cropped, because the further testing process is done on that cropped section only. Cropping refers to the process of removing unwanted areas from a photographic or illustrated image. Fig. 1: Architecture of Signature Recognition System. (c) In the third section the cropped image is converted into a binary image. A binary image is a digital image that has only two possible values for each pixel. Typically the two colors used for a binary image are black and white, though any two colors can be used. (d) In the fourth section all the specified features of the sample signature are extracted. Feature extraction involves simplifying the amount of resources required to describe a large set of data accurately. When performing analysis of complex data, one of the major problems stems from the number of variables involved. (e) In the fifth section all the features of the sample signature image are compared with the signature database. (f) On the basis of this comparison it is decided whether the sample signature belongs to the signature database, i.e. whether it is an authorized signature. F. Designing Process. The entire designing process can be subdivided into two steps, described as follows. a) System Design Process: In order to make a project which is applicable to real-world situations, careful brainstorming is required. The first point of discussion is how many signature images are taken as input for generalizing the features of the sample signature for our study; it is also important to decide in which format the images should be taken. Here we have collected 10 signature samples from each person. b) Program Design: In our program we have developed software using MATLAB for analysis of Devnagari handwritten signatures using moment invariant analysis. The first step of the process is the acquisition of the features of the particular signature sample; then, according to the available features, a threshold value is obtained and compared with the database of sample signatures. After comparison the signature sample is identified as genuine or unauthorized. II. METHODOLOGY. This section describes the methodology behind the system development. It discusses the pre-processing performed, the signature database, and the NN features. Signature recognition systems need to preprocess the data [4][5]. This includes a series of operations to get the results. The major steps are as follows. A. Data Acquisition: For recognition purposes, at the first stage the signatures are required to be processed by the system. Therefore they should be in digital image format. We need to obtain the instant sample signature image from the signer for the verification purpose. For this purpose devices such as digital tablets, shown in Fig. 2, can be used.
The data acquisition process is one where the real-time inputs of the signature from the digitizing tablet and the special pen are read into the CPU for processing and for storing the signature into the database. The digitizing tablet sends the real-time inputs to the CPU for further processing and storage. B. Signature Pre-processing: The signature samples are required to be normalized and resized to proper dimensions. Then the background noise is removed and thinning of the signature sample is performed. This yields a signature template which can be used for extracting the features. Only minimal signature pre-processing is therefore required. C. Feature Extraction: The features extracted from signatures or handwriting play a vital role in the success of any feature-based HSV system. If a poorly constructed feature set is used with little insight into the writer's natural style, then no amount of modeling or analysis is going to result in a successful system. Further, it is necessary to have multiple, meaningful features in the input vector to guarantee useful learning by the NN. In this paper the most important feature considered for the process of signature verification is Hu's moment invariants. Fig. 2: Digital Tablet. D. Hu's Moment Invariants: In image processing, computer vision and related fields, image moments are very useful for image analysis. Hu derived these expressions from algebraic invariants applied to the moment generating function under a rotation transformation [6]. They consist of groups of nonlinear centralized moment expressions. The result is a set of absolute orthogonal (i.e. rotation) moment invariants, which can be used for scale, position, and rotation invariant pattern identification. Simple properties of an image found via image moments include its area, total intensity, centroid and information about its orientation. The invariants are computed from normalized central moments up to order three. For an image f(x,y), the raw moments and the centroid coordinates are

$$m_{pq} = \sum_x \sum_y x^p y^q f(x,y), \qquad \bar{x} = \frac{m_{10}}{m_{00}} \ (2.1), \qquad \bar{y} = \frac{m_{01}}{m_{00}} \ (2.2)$$

The central moments are defined by (2.3)-(2.4)

$$\mu_{pq} = \int\!\!\int (x-\bar{x})^p (y-\bar{y})^q f(x,y)\,dx\,dy$$

and, if f(x,y) is a digital image, this becomes (2.5)

$$\mu_{pq} = \sum_x \sum_y (x-\bar{x})^p (y-\bar{y})^q f(x,y)$$

The central moments of order up to 3 are (2.6)-(2.14):

$$\mu_{00} = m_{00}, \quad \mu_{10} = \mu_{01} = 0, \quad \mu_{11} = m_{11} - \bar{x} m_{01}, \quad \mu_{20} = m_{20} - \bar{x} m_{10}, \quad \mu_{02} = m_{02} - \bar{y} m_{01},$$
$$\mu_{21} = m_{21} - 2\bar{x} m_{11} - \bar{y} m_{20} + 2\bar{x}^2 m_{01}, \quad \mu_{12} = m_{12} - 2\bar{y} m_{11} - \bar{x} m_{02} + 2\bar{y}^2 m_{10},$$
$$\mu_{30} = m_{30} - 3\bar{x} m_{20} + 2\bar{x}^2 m_{10}, \quad \mu_{03} = m_{03} - 3\bar{y} m_{02} + 2\bar{y}^2 m_{01}$$

Central moments are translation invariant. Information about image orientation can be derived by first using the second order central moments to construct a covariance matrix, whose elements are (2.15)-(2.17)

$$\mu'_{20} = \mu_{20}/\mu_{00}, \qquad \mu'_{02} = \mu_{02}/\mu_{00}, \qquad \mu'_{11} = \mu_{11}/\mu_{00}$$

It is possible to calculate moments which are invariant under translation, changes in scale, and also rotation; these are built from the scale-normalized central moments $\eta_{pq} = \mu_{pq} / \mu_{00}^{\,1+(p+q)/2}$. The most frequently used are the Hu set of invariant moments:

$$M_1 = \eta_{20} + \eta_{02} \ (2.18)$$
$$M_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2 \ (2.19)$$
$$M_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2 \ (2.20)$$
$$M_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2 \ (2.21)$$
$$M_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] \ (2.22)$$
$$M_6 = (\eta_{20} - \eta_{02})\big[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03}) \ (2.23)$$
$$M_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})\big[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2\big] + (3\eta_{12} - \eta_{30})(\eta_{21} + \eta_{03})\big[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2\big] \ (2.24)$$

The first one, $M_1$, is analogous to the moment of inertia around the image's centroid, where the pixels' intensities are analogous to physical density. The last one, $M_7$, is skew invariant, which enables it to distinguish mirror images of otherwise identical images.
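To make the formulas above concrete, the following numpy sketch computes the seven Hu invariants directly from the definitions; it is a generic illustration rather than the author's MATLAB code (with OpenCV one would typically call cv2.moments and cv2.HuMoments instead), and the test image is made up.

```python
import numpy as np

def hu_moments(img: np.ndarray) -> np.ndarray:
    """Compute Hu's seven moment invariants of a 2-D intensity (or binary) image."""
    y, x = np.mgrid[: img.shape[0], : img.shape[1]].astype(np.float64)
    f = img.astype(np.float64)

    def m(p, q):                      # raw moment m_pq
        return np.sum((x ** p) * (y ** q) * f)

    m00 = m(0, 0)
    xb, yb = m(1, 0) / m00, m(0, 1) / m00

    def mu(p, q):                     # central moment mu_pq
        return np.sum(((x - xb) ** p) * ((y - yb) ** q) * f)

    def eta(p, q):                    # scale-normalized central moment eta_pq
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)

    M1 = n20 + n02
    M2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    M3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    M4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    M5 = ((n30 - 3 * n12) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    M6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    M7 = ((3 * n21 - n03) * (n30 + n12) * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03) * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([M1, M2, M3, M4, M5, M6, M7])

# A translated copy of the same shape should give (nearly) identical invariants.
img = np.zeros((64, 64)); img[10:30, 10:20] = 1
shifted = np.roll(img, (15, 20), axis=(0, 1))
print(np.allclose(hu_moments(img), hu_moments(shifted)))   # True
```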
A general theory on deriving complete and independent sets of rotation invariant moments was proposed by J. Flusser [7] and T. Suk [8]. They showed that the traditional Hu invariant set is neither independent nor complete: $M_2$ and $M_3$ are not very useful for pattern recognition, as they are dependent, and the original Hu set is missing a third-order independent moment invariant, given by

$$M_8 = \eta_{11}\big[(\eta_{30} + \eta_{12})^2 - (\eta_{03} + \eta_{21})^2\big] - (\eta_{20} - \eta_{02})(\eta_{30} + \eta_{12})(\eta_{03} + \eta_{21}) \ (2.25)$$

E. Performance Parameters: To ensure good performance of the signature recognition system, a few parameters need to be set. The performance parameters for our Devnagari handwritten signature recognition system are discussed below. a) False Accept Rate or False Match Rate (FAR or FMR): The false acceptance rate is given by the number of fake signatures accepted by the system with respect to the total number of comparisons made. It is the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percentage of invalid inputs which are incorrectly accepted. b) False Reject Rate or False Non-Match Rate (FRR or FNMR): The false rejection rate is the total number of genuine signatures rejected by the system with respect to the total number of comparisons made. Both FAR and FRR depend on the threshold variance parameter used to decide the genuineness of an image. If we choose a high threshold variance then the FRR is reduced, but at the same time the FAR increases. If we choose a low threshold variance then the FAR is reduced, but at the same time the FRR increases. The FRR is a measure of the probability that a biometric system will fail to identify an individual who is properly enrolled. It measures the percentage of valid inputs which are incorrectly rejected. c) Receiver Operating Characteristic or Relative Operating Characteristic (ROC): The ROC plot is a visual characterization of the trade-off between the FAR and the FRR. In general, the matching algorithm performs a decision based on a threshold which determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts. Correspondingly, a higher threshold will reduce the FAR but increase the FRR. This graph highlights the differences between systems in the high-performance region. d) Equal Error Rate or Crossover Error Rate (EER or CER): The rate at which both accept and reject errors are equal. The value of the EER can easily be obtained from the ROC curve, and it is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is the most accurate. III. EXPERIMENTAL SET-UP. We have been concerned here with evaluating moment invariants as a feature space in a pattern recognition problem, predicting how they will behave given the classes and noise conditions of the problem. For this purpose, we accumulated enough background to help understand the nature of moment invariants and derived basic performance results in terms of the classes and noise. A. Training and Testing. A total of 500 genuine signatures were collected from a population of 50 human subjects, 22 women and 28 men, of whom seven are left-handed writers.
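To illustrate the FAR, FRR and AER definitions above, here is a small hedged sketch that evaluates them from similarity scores at a given threshold; the score arrays and the 0-100 similarity scale are invented for the example and are not data from this paper.

```python
import numpy as np

def error_rates(genuine_scores, forgery_scores, threshold):
    """FAR, FRR and AER for a similarity threshold.

    A signature is accepted when its similarity score >= threshold, so
    FAR = accepted forgeries / all forgeries, FRR = rejected genuine / all genuine.
    """
    far = np.mean(np.asarray(forgery_scores) >= threshold)
    frr = np.mean(np.asarray(genuine_scores) < threshold)
    return far, frr, (far + frr) / 2.0          # AER = (FAR + FRR) / 2

# Made-up similarity scores on a 0-100 scale.
genuine = np.array([88, 74, 91, 66, 83, 79])
forged  = np.array([22, 41, 35, 58, 19, 30])

# Sweep thresholds; the EER is (approximately) where FAR and FRR cross.
for t in range(10, 100, 10):
    far, frr, aer = error_rates(genuine, forged, t)
    print(f"threshold={t:3d}  FAR={far:.2f}  FRR={frr:.2f}  AER={aer:.2f}")
```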
Our main task is to recognize the signature with good feature recognition techniques which provide good results on the signature recognition dataset. We train the system using a training set of signatures obtained from a person. Designing a classifier is a separate area of research. The decision thresholds required for the classification are calculated by considering the variation of features within the training set. A separate set of thresholds (user specific) is calculated for each person enrolled; some systems also use a common threshold for all users. For training and testing, a general fuzzy min-max (FMM) neural network is used in our work. Fuzzy networks are useful for clustering (unsupervised) and classification (supervised) problems. Application of FMM is an attempt to improve the classification performance of the recognition system. Using FMM training, the total time required to train the database was less than 173.77 µs, and the time taken to recognize a signature sample was less than 125.67 ms. B. Reason for using Hu's Moment Invariants. This work is based on analysis of the values of Hu's seven moment invariants. They consist of groups of nonlinear centralized moment expressions. The result is a set of absolute orthogonal (i.e. rotation) moment invariants, which can be used for scale, position, and rotation invariant pattern identification. These can be efficiently used in a simple pattern recognition application for successfully identifying various patterns [9]. The first one, M1, is analogous to the moment of inertia around the image's centroid, where the pixels' intensities are analogous to physical density. The last one, M7, is skew invariant, which enables it to distinguish mirror images of otherwise identical images. We analyzed the fluctuation in moment invariant values of the same image for different positions. We took the original test image, the test image scaled by one half, and the mirrored test image. For analyzing the rotation effect we rotated the test image by 45°, 90° and 180°. These images are shown in Fig. 3 as (a) Original Test Image, (b) Scaled by one half, (c) Rotated by 45°, (d) Rotated by 90°, (e) Rotated by 180°, (f) Mirrored Image. Fig. 3: Different images for analysis. The moment values were observed for these six differently scaled and rotated images of the same signature image. The experimental values for these six images are shown in Table 3.1. Table 3.1: Closeness of Moment Invariant values for Different Images. We found close agreement of the moment values, independent of translation, scale change, mirroring and rotation (as shown in Table 3.1). The only difference is in the sign of M7, which occurs for the mirrored image. This property can be used to detect whether the image is a mirrored image. This was the reason for using Hu's moment invariants in this work. C. Results and Discussions. After training on the signature database we started testing each sample signature. On the basis of recognition and rejection, the experimental results for FRR, FAR and AER are shown in Table 3.2. These results show that as the similarity threshold, and with it the false reject rate (FRR), increases, the false acceptance rate (FAR) decreases. The results of our examination show that in this method the best value for the percentage of signature similarity is nearly 31%. At this point we obtain the minimum error rate (MinErrRate = min(FAR, FRR)).
The Average Error Rate (AER) is defined as AER = (FAR + FRR) / 2 (3.1). Table 3.2: Variation of FAR and FRR with the threshold variance. The Average Error Rate (AER) is smallest for SS ∈ [25, 35]; in other words, we have the best performance of the system for 25% ≤ SS ≤ 35%. In this interval we have the minimum value of the Average Error Rate (AER), as shown in Fig. 4. Fig. 4: Performance of the HDSR system. The minimum value of the Average Error Rate (AER) achieved in our work is 33.0%. The point where the values of the FAR and the FRR meet is called the Equal Error Rate (EER), as shown in Fig. 5. Fig. 5: Signature Similarity Percentage. For the best performance we should therefore consider 25% ≤ SS ≤ 35%. The fundamental result of this study is that the average of the minimum errors is not obtained at the maximum surface similarity. In other words, if the correctness of a signature required very high similarity to the original one, correct signatures would be rejected because of minor differences, and this would decrease the efficiency of the system. The method was tested using genuine and forged signatures produced by 50 different persons. The highest accuracy rate achieved by our Devnagari handwritten signature recognition system was 96.12%. The proposed algorithm can be used as an effective signature verification system. The algorithm was successfully made rotation invariant by rotation of the image. The Equal Error Rate (EER) was found to be 31%; it is the point where the values of the FAR and the FRR meet (shown in Fig. 5). The EER can be further improved by using better techniques for rotation, blurring and thinning. Using this algorithm, random and simple forgeries can be easily detected, and a large number of skilled forgeries can also be rejected. It uses a compact and memory-efficient storage of feature points which reduces memory overhead and results in faster comparison of the data to be verified. The overall experimental results are summarized in Table 3.3. However, it is not sufficient to verify the validity of a signature only by comparing its physical image. Table 3.3: Overall Experimental Results. IV. CONCLUSION. Recognition of Devnagari handwritten signatures is a complex problem which is not easily solvable. This work has presented a review of moment-based invariant functions, their history, basic principles, and methods of constructing them. We demonstrated that invariant functionals can be used in image analysis as features for description and recognition of objects in degraded images. The invariant-based approach is a significant step towards robust and reliable object recognition methods. It has a deep practical impact because many pattern recognition problems would not be solvable otherwise. In practice, image acquisition is always degraded by unrecoverable errors, and knowledge of invariants with respect to these errors is a crucial point. This observation should influence future research directions and should also be incorporated into education. To decrease the fluctuation of moment invariants, the image spatial resolution must be higher than the threshold of scaling and rotation. However, the resolution cannot be too high, because the computation increases remarkably as the resolution increases. Therefore, the choice of resolution must balance computation and resolution in the real application.
From the experimental studies, we find that the choice of image spatial resolution is very important for keeping the features invariant. Scope of Future Work: The concepts of neural networks as well as Hu's moment invariants hold a lot of promise for building systems with high accuracy. The approach can be extended to the recognition of words, sentences and documents. Another research interest will be character images degraded or blurred for various reasons. This approach can be used in multilingual character recognition as well.
REFERENCES
[1] Brijmohan Singh, Ankush Mittal and Debashis Ghosh, "An Evaluation of Different Feature Extractors and Classifiers for Offline Handwritten Devnagari Character Recognition", Journal of Pattern Recognition Research, pp. 269-277, 2011.
[2] Soumen Bag and Gaurav Harit, "Topographic Feature Extraction for Bengali and Hindi Character Images", International Journal of Signal & Image Processing, Vol. 2, No. 2, pp. 181-196, June 2011.
[3] Bhupendra M. Chaudhari, Abhay B. Nehete, Kantilal P. Rane and Ulhas B. Shinde, "Efficient Feature Extraction Technique for Signature Recognition", International Journal of Advanced Engineering & Application (IJAEA), pp. 64-70, January 2011.
[4] Nan Xu, Li Cheng, Yan Guo, Xiaogang Wu and Jiali Zhao, "A Method for Online Signature Verification Based on Neural Network", Proc. 3rd IEEE International Conference on Communication Software and Networks (ICCSN), Wuhan, China, pp. 357-360, May 2011.
[5] Juan Hu and Youbin Chen, "Writer Independent Offline Handwritten Signature Verification based on Real Adaboost", Proc. 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC), pp. 6095-6098, August 2011.
[6] M. K. Hu, "Visual Pattern Recognition by Moment Invariants", IRE Transactions on Information Theory, Vol. 8, pp. 179-187, 1962.
[7] J. Flusser, "On the Independence of Rotation Moment Invariants", Pattern Recognition, Vol. 33, pp. 1405-1410, 2000.
[8] J. Flusser and T. Suk, "Rotation Moment Invariants for Recognition of Symmetric Objects", IEEE Transactions on Image Processing, Vol. 15, pp. 3784-3790, 2006.
[9] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", 3rd Edition, Prentice Hall of India, New Delhi, India, 2009.
AUTHOR'S PROFILE: Shailendra Kumar Dewangan received his B.E. in Electronics & Telecommunication in 2005 and M.E. in Communication Engineering from Shri Shankaracharya College of Engineering & Technology (SSCET), Bhilai, Chhattisgarh, India. He is currently working as an Assistant Professor in the Department of Electronics & Instrumentation Engineering at Chhatrapati Shivaji Institute of Technology (CSIT), Durg, Chhattisgarh, India. His areas of interest include digital signal processing, information security, digital watermarking, and advancements in communication technology. He has published multiple articles in various international and national journals and conferences. He also holds lifetime membership of the Indian Society of Technical Education (ISTE) and associate membership of the Institution of Electronics & Telecommunication Engineers (IETE).
Research on a Printed Character Recognition System Based on BP Neural Networks

Research on a Printed Character Recognition System Based on BP Neural Networks
ABSTRACT
This article is mainly concerned with the automatic recognition of printed English letters and digits in images. For the system design, the paper introduces neural network pattern recognition technology and presents the design of a character recognition system based on a BP neural network. The whole system is divided into three main modules (preprocessing, feature extraction and rough classification, and the BP-based character classifier), each described in detail. To improve the performance of the system and reduce the error rate and rejection rate as much as possible, the article carefully analyses the key and difficult problems encountered in the design of these three modules and proposes the following solutions. 1. In the design of the preprocessing module, a variety of image processing techniques are involved; a series of algorithms is proposed, including removal of discrete noise, character segmentation and slope adjustment. The realization of these algorithms lays a solid foundation for character feature extraction. 2. Character feature extraction is the key point that decides the success of the overall design. After comparing several popular feature extraction methods, a rough classification method based on the features of closed curves and vertical lines in character skeletons is presented. It evenly divides the original set of 62 characters into three subsets to lower the difficulty of follow-up processing. The paper also proposes an extraction algorithm that combines rough grid features with regulated projection features; it balances the overall and local features of characters and makes it easier to tell similar English letters apart. 3. In the design of the BP-based character classifier, we carefully studied the key problems in network design, which consist of network framework design, parameter design, network training and network recognition, and put forward a set of network design options to optimize network performance. The effectiveness of the optimization program was verified in the final system test.
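As a rough illustration of the combined coarse-grid and projection features described in the abstract, the sketch below computes grid density features and row/column projection profiles for a binary character image; the 4×4 grid and the normalization choices are assumptions for the example, not the exact parameters used in the thesis.

```python
import numpy as np

def grid_and_projection_features(char_img: np.ndarray, grid=(4, 4)) -> np.ndarray:
    """Concatenate coarse-grid density features with normalized projections.

    char_img: binary character image (1 = ink), e.g. 16x8 after normalization.
    Returns a 1-D feature vector: one ink-density value per grid cell,
    followed by the row and column projection profiles scaled to [0, 1].
    """
    h, w = char_img.shape
    gh, gw = grid
    densities = []
    for i in range(gh):
        for j in range(gw):
            cell = char_img[i * h // gh:(i + 1) * h // gh,
                            j * w // gw:(j + 1) * w // gw]
            densities.append(cell.mean())          # fraction of ink pixels per cell
    row_proj = char_img.sum(axis=1) / w            # horizontal projection profile
    col_proj = char_img.sum(axis=0) / h            # vertical projection profile
    return np.concatenate([densities, row_proj, col_proj])

char = np.zeros((16, 8), dtype=np.uint8)
char[2:14, 3:5] = 1                                # a crude vertical stroke, like "1"
print(grid_and_projection_features(char).shape)    # (16 + 16 + 8,) = (40,)
```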
Character Recognition with Artificial Neural Networks

Network Information Engineering, 2020, No. 16. Character Recognition Using Artificial Neural Networks. Huang Fang (Airforce Aviation Repair Institute of Technology, Changsha, Hunan, 410124).
Abstract: Artificial neural networks are one of the mainstream pattern recognition technologies in use today. They are a simple model that simulates how the brain learns, and they are widely applied to character recognition in fields such as information, control, communication, economics, and medicine. After an artificial neural network is trained, validated and tested with the neural network toolbox in MATLAB, it can recognize a character set even under noise interference.
Keywords: artificial neural network; character recognition; MATLAB.
0 Introduction. After years of technical development, Artificial Neural Networks (ANN) have acquired capabilities such as adaptive learning and distributed processing, and are widely applied in fields such as information, control, transportation, economics, and medicine, for example in research on fault diagnosis, automatic control, interactive intelligence, market analysis, and decision optimization.
A Categorized Introduction to Recognition Methods for Printed Text

The recognition method is the core of the whole system.
Pattern recognition methods used for Chinese character recognition can be roughly divided into structural pattern recognition, statistical pattern recognition, and combinations of the two.
They are introduced below.
Structural pattern recognition. Chinese characters are a special kind of pattern: although the structure of printed characters is rather complex, it follows fairly strict rules.
In other words, Chinese character shapes contain rich structural information. One can try to extract structural features carrying this information, together with the rules by which they compose characters, and use them as the basis for recognizing the characters; this is structural pattern recognition.
Structural pattern recognition was the main approach in early research on Chinese character recognition.
Its starting point is the compositional structure of Chinese characters.
In terms of their construction, Chinese characters are built from strokes (dots, horizontal strokes, vertical strokes, left-falling and right-falling strokes, etc.) and radicals; they can also be regarded as being built from even smaller structural primitives.
These structural primitives and the relations between them can describe Chinese characters precisely, just as an article is composed of words, phrases and sentences according to the rules of grammar.
For this reason the method is also called syntactic pattern recognition.
At recognition time, the structural information above and syntactic analysis are used to recognize the character, rather like a logical inference engine.
Describing Chinese character shapes in this way is theoretically quite appropriate; its main advantages are strong adaptability to font variation and a strong ability to distinguish similar characters. In practice, however, its main problem is poor robustness to interference, because real text images contain all kinds of distortions, such as skew, warping, broken or touching strokes, stains on the paper, and poor contrast.
These factors directly affect the extraction of the structural primitives; if the primitives cannot be obtained accurately, the subsequent inference process has nothing to build on.
In addition, the descriptions used in structural pattern recognition are rather complex, so the complexity of the matching process is correspondingly high.
Therefore, in printed Chinese character recognition, purely structural pattern recognition has gradually declined, and syntactic recognition methods are increasingly being challenged.
Statistical pattern recognition. Statistical decision theory developed early and its theory is relatively mature.
Its essence is to extract a set of statistical features from the pattern to be recognized, and then make a classification decision with a decision function determined by some criterion.
Statistical pattern recognition of Chinese characters treats the character bitmap as a whole, and the features used are obtained from extensive statistics over that whole.
Statistical features are robust to interference, and the matching and classification algorithms are simple and easy to implement.
Their weakness is limited discriminative power: they are less able to distinguish similar characters.
Common statistical pattern recognition methods include: (1) Template matching.
Template matching does not require a feature extraction step.
The character image itself serves as the feature and is compared with the templates in a dictionary; the template class with the highest similarity is the recognition result.
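A minimal sketch of template matching as just described: the character bitmap is compared against every template in a small dictionary and the most similar class wins. The toy templates, the 16×16 size and the use of normalized correlation as the similarity measure are assumptions for the example.

```python
import numpy as np

def match_template(char_img: np.ndarray, templates: dict) -> str:
    """Return the label of the template most similar to char_img.

    Similarity is normalized correlation between the flattened, zero-mean
    bitmaps (one of several possible similarity measures).
    """
    x = char_img.astype(np.float64).ravel()
    x = (x - x.mean()) / (x.std() + 1e-9)
    best_label, best_score = None, -np.inf
    for label, tpl in templates.items():
        t = tpl.astype(np.float64).ravel()
        t = (t - t.mean()) / (t.std() + 1e-9)
        score = float(np.dot(x, t)) / x.size
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy 16x16 templates: a vertical bar for "1" and a hollow box for "0".
one = np.zeros((16, 16)); one[2:14, 7:9] = 1
zero = np.zeros((16, 16)); zero[2:14, 4:12] = 1; zero[4:12, 6:10] = 0
noisy_one = one.copy(); noisy_one[5, 3] = 1          # a stray noise pixel
print(match_template(noisy_one, {"1": one, "0": zero}))   # "1"
```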
Research on Handwritten Character Recognition Technology Based on Artificial Neural Networks

Research on Handwritten Character Recognition Technology Based on Artificial Neural Networks. Handwritten character recognition is an important research direction in the field of artificial intelligence.
In everyday life, handwritten characters appear widely in bank cheques, personal signatures, letters, forms, and similar settings.
However, because everyone's writing style is different, recognizing handwritten characters has always been a challenging task.
To address this problem, researchers have long tried to use Artificial Neural Networks (ANN) for handwritten character recognition.
First, an artificial neural network is a computational model that imitates the structure and function of the brain's neural networks.
It consists of a large number of neurons and the connections between them; the weights of these connections can be trained to perform information processing and pattern recognition.
The strength of artificial neural networks is their ability to learn automatically: by learning the patterns and features in a training set, they can classify and recognize accurately.
Handwritten character recognition based on artificial neural networks usually involves the following steps.
First, a dataset of handwritten characters must be collected and prepared.
The quality and diversity of the dataset are crucial for training an effective neural network.
The dataset may contain many different handwritten character samples, including characters in different fonts, sizes, and stroke variations.
In addition, each character sample must be labeled so that training and evaluation can be carried out.
Next, the dataset needs to be preprocessed.
The goal of preprocessing is to convert handwritten character images into a numerical representation that the neural network can understand and process.
Common preprocessing steps include grayscale conversion, binarization, and image size normalization.
Grayscale conversion turns a color image into a grayscale image, simplifying subsequent processing.
Binarization converts the grayscale image into a black-and-white image, separating the character from the background.
Size normalization adjusts all character samples to the same size, reducing scale differences between samples.
Then, a suitable neural network architecture must be designed.
The structure of a neural network includes an input layer, hidden layers, and an output layer.
The input layer receives the preprocessed handwritten character image; the hidden layers process the image's features and patterns; the output layer classifies the character.
Commonly used network models include the Multi-Layer Perceptron (MLP) and the Convolutional Neural Network (CNN).
Choosing an appropriate network model and parameters is a key step in handwritten character recognition.
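To illustrate the input/hidden/output structure just described, here is a hedged numpy sketch of a one-hidden-layer MLP (with a single backpropagation step) for classifying 16×16 character images into 10 classes; the layer sizes, activations and learning rate are illustrative assumptions, not the configuration of any particular system discussed in this text.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 16 * 16, 64, 10      # input, hidden and output layer sizes

W1, b1 = rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.1, (n_hidden, n_out)), np.zeros(n_out)

def forward(x):
    """x: (batch, 256) flattened character images in [0, 1]."""
    h = np.tanh(x @ W1 + b1)                              # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)            # softmax class probabilities

def train_step(x, y_onehot, lr=0.1):
    """One backpropagation (gradient descent) step on a mini-batch."""
    global W1, b1, W2, b2
    h, p = forward(x)
    d_logits = (p - y_onehot) / len(x)                    # cross-entropy gradient
    dW2, db2 = h.T @ d_logits, d_logits.sum(axis=0)
    d_h = (d_logits @ W2.T) * (1 - h ** 2)                # tanh derivative
    dW1, db1 = x.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

# Toy batch: 8 random "images" with random labels, just to show the shapes.
x = rng.random((8, n_in))
y = np.eye(n_out)[rng.integers(0, n_out, 8)]
train_step(x, y)
print(forward(x)[1].shape)                                # (8, 10)
```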
Research on a Printed Character Recognition System Based on BP Neural Networks

In today's society, the explosive growth of data constantly tests people's ability to collect and handle data and information, which pours out of every corner of daily life.
This system implements printed character recognition on the MATLAB platform, using a BP neural network for character learning and training.
Character recognition technology began with printed character recognition. The main workflow of the system is: convert the image to grayscale and remove noise.
The image to be recognized must then be binarized to further separate the characters from the background (the quality of binarization directly determines the recognition rate).
Characters are segmented from the binarized image, normalized, and saved.
At the same time, a BP neural network is built and trained with a standard character template library, and the matching result is output as the recognition result.
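The character segmentation step mentioned above can be sketched with a simple vertical-projection cut; this is a generic illustration, not the exact algorithm of the system, and the toy line image is made up.

```python
import numpy as np

def segment_characters(binary_line: np.ndarray):
    """Split a binarized text line (1 = ink) into per-character images.

    Columns whose ink count is zero are treated as gaps between characters.
    """
    col_ink = binary_line.sum(axis=0)          # vertical projection profile
    chars, start = [], None
    for j, ink in enumerate(col_ink):
        if ink > 0 and start is None:
            start = j                          # character begins
        elif ink == 0 and start is not None:
            chars.append(binary_line[:, start:j])
            start = None                       # character ends at a blank column
    if start is not None:
        chars.append(binary_line[:, start:])
    return chars

# Toy line with two strokes separated by blank columns.
line = np.zeros((16, 20), dtype=np.uint8)
line[3:13, 2:5] = 1
line[3:13, 9:14] = 1
print([c.shape for c in segment_characters(line)])   # [(16, 3), (16, 5)]
```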
1 Image preprocessing. The task of preprocessing is to convert the visual image into a binary image that a PC can recognize, before character features are extracted for recognition.
Preprocessing is particularly important: if it goes wrong, the recognition performance of the whole system suffers.
Preprocessing for character recognition involves many steps, such as grayscale conversion, binarization, and image normalization, to make the image easier for the computer to recognize.
1.1 Image grayscale conversion. Most common color images are represented by the superposition of three color channels, red, green, and blue (the RGB color model).
The R, G, and B values of different colors at a pixel can differ greatly.
Through research, a rule can be applied to make the R, G, and B components of each pixel equal, thereby converting the image to grayscale.
Converting the character image to grayscale reduces the amount of raw image data and avoids banding artifacts, which facilitates feature extraction in later processing.
There are many different grayscale conversion algorithms. This design uses a standard, mainstream method: the rgb2gray function, which converts an RGB color image to grayscale by computing a weighted sum of the R, G, and B components, 0.2989 × R + 0.5870 × G + 0.1140 × B. The rgb2gray function removes hue and saturation information while retaining luminance.
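The weighted sum used by rgb2gray can be reproduced in a few lines; the random test image below is only an example.

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Luminance-weighted grayscale conversion, same weights as MATLAB's rgb2gray."""
    weights = np.array([0.2989, 0.5870, 0.1140])
    return rgb @ weights            # (H, W, 3) -> (H, W)

rgb = np.random.default_rng(0).random((4, 4, 3))
print(rgb_to_gray(rgb).shape)       # (4, 4)
```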
1.2 Image denoising. Based on the characteristics of noise and its spectral distribution, many denoising methods, each with its own strengths, have been developed.
A degraded image contaminated by noise can be restored with linear filtering, but most linear filters are low-pass in nature: while removing noise they also blur the edges of the image.
I.J. Image, Graphics and Signal Processing, 2012, 6, 15-21. Published Online July 2012 in MECS. DOI: 10.5815/ijigsp.2012.06.03. Classification and Recognition of Printed Hindi Characters Using Artificial Neural Networks. B. Indira, Kasturba Gandhi Degree & PG College for Women, Secunderabad, A.P., India (indsneha@). M. Shalini, Kasturba Gandhi Degree & PG College for Women, Secunderabad, A.P., India (shalini_praveenkumar@yahoo.co.in). M. V. Ramana Murthy, Chairman, Department of Computer Science, Faculty of Science, Osmania University, Hyderabad, India (mv.rm50@). Mahaboob Sharief Shaik, Faculty of Computing & Information Technology, King Abdul Aziz University, Jeddah, KSA (mshaik@.sa). Abstract—Character recognition is one of the important tasks in pattern recognition. The complexity of the character recognition problem depends on the character set to be recognized. The neural network is one of the most widely used and popular techniques for the character recognition problem. This paper discusses the classification and recognition of printed Hindi vowels and consonants using Artificial Neural Networks. The vowels and consonants of the Hindi character set can be divided into subgroups based on certain significant characteristics. For each group, a separate network is designed and trained to recognize the characters which belong to that group. When a test character is given, the appropriate neural network is invoked to recognize the character in that group, based on the features in that character. The accuracy of the network is analyzed by giving various test patterns to the system. Index Terms—Pattern Recognition, Character Recognition, Artificial Neural Network, Feature extraction, Thinning. I. INTRODUCTION. Pattern recognition is defined as the field concerned with machine recognition of meaningful regularities in noisy and complex environments [1]. There are various applications of pattern recognition such as character recognition, online signature verification, face recognition and so on. Character recognition is the electronic conversion of scanned images of printed or handwritten text into machine-readable text. A character recognition system is the base for many different types of applications in various fields, many of which we use in our daily lives. Hindi character recognition is a challenging problem in pattern recognition, and neural networks are one of the most commonly used techniques for character recognition and classification due to their learning and generalization abilities. This paper describes and discusses the classification and recognition of printed Hindi characters using Artificial Neural Networks. Some of the previous approaches related to this work are given in section II. The entire recognition process is explained in section III. Section IV gives the training procedure of the neural networks. Testing and results are discussed in section V, and finally concluding remarks are given in section VI. II. REVIEW OF PREVIOUS APPROACHES. A good text recognizer has many commercial and practical applications such as processing cheques in
The problem of text recognition has been attempted by many different approaches; some of them are Template matching, Feature extraction, Geometric approach and neural networks.Template matching approach is one of the most simplistic approaches. This is based on matching the stored data against the character to be recognized. Template matching involves determining similarities between the given template and stored database and output the image that produces the higher similarity measure. This technique works effectively with recognition of standard fonts, but gives poor performance with handwritten characters, noisy characters and deformed images.The objective of feature extraction is to capture the essential characteristics of the symbols and this is one of the most difficult problems of pattern recognition. In this approach, statistical distribution of points is analyzed and orthogonal properties are extracted. For each symbol a feature vector is calculated and stored in database, and recognition is performed by finding distance of feature vector of input image with those stored in the database and giving the symbol with minimum deviation. This is very sensitive to noise and edge thickness, but performs well on handwritten character set.In geometric approach an attempt is made to extract features that are quite explicit and can be very easily interpreted. These features depend upon the physical properties, such as number of joints, relative position; number of end points, aspect ratio etc. Classes formed on the basis of these geometric features are quite distinct, with not much of overlapping. The main draw back with this approach is that this approach depends heavily on the character set.Neural network techniques are more popular to perform Character Recognition. It has been reported that Neural Networks could produce high recognition accuracy. Neural Networks with various architectures and training algorithms have been applied successfully for Character recognition. In this, neural network is first trained by the multiple sample images of each alphabet. Then, in the recognition processes, the neural network recognizes the given input symbol. Neural networks are capable of providing good recognition even at the presence of noise but the draw back is they require a lot of training time.Character recognition remains a highly challenging task. Hindi character recognition is one of the most difficult tasks of optical character recognition. This section gives a brief overview of related research work. The research work pertaining to character recognition of Indian languages is very limited.Dr. P.S. Deshpande et.al, proposed a novel approach on character encoding and regular expressions for shape recognition in their paper [2]. The method is independent of the specific aspect of individual shapes, such as thickness of line, size of character and shapes. In this, features are extracted in the form of regular expression. They achieved an accuracy of 90%.A Devanagari text recognition system was designed by Veena Bansali [3] in her research work by integrating knowledge sources, features of characters such as horizontal zero crossings, moments, aspect ratios, position of vertex points and pixel density, with structural description of characters.Aditi Goyal, Kartikay Khandelwal, Piyush Keshri [4], in their paper discussed about various image pre-processing, feature extraction and classification algorithms, to design high performance OCR software for handwritten Hindi alphabets. 
Image preprocessing included median filtering, background removal, thresholding and sparsity removal. In feature selection and extraction, histograms of oriented gradients were used. This provides a flexible feature and helps to deal with high bias and high variance issues. The basic back-propagation algorithm is used to determine the weight matrix. Features were tested on a reduced training set using naïve Bayes and support vector machines. They observed that SVM gave better results than naïve Bayes. The performance obtained with handwritten letters is 98%. Pooja Agarwal, Hanmandlu and Brejesh, in their paper "Coarse Classification of Handwritten Hindi Characters" [5], described a system for the categorization of the complete handwritten Hindi character set into subgroups based on some similarity measure. They proposed an algorithm for finding and removing the header line and identifying the position of the vertical bar in a handwritten Hindi character. Experimental results demonstrate that their algorithm is effective, achieving a classification rate of 97.25%. Optical character recognition for printed Devanagari script using Artificial Neural Networks was presented by Raghuraj Singh et al. [6]. The papers [7, 8, 9, 10] proposed techniques for OCR systems for different fonts and sizes of printed Devanagari script and achieved a high recognition rate. III. RECOGNITION PROCESS. Character recognition is one of the important tasks in pattern recognition. The complexity of the character recognition problem depends on the character set to be recognized. The character recognition process is dependent upon a number of factors like various font sizes, noise, broken lines or characters etc., and these factors influence the results of the recognition system [11]. The Artificial Neural Network is one of the techniques widely used for the character recognition problem and is considered a powerful classifier on account of its high computation rate accomplished by massive parallelism [12, 14]. There are four different phases in the character recognition process, namely character acquisition, preprocessing stages, grouping of characters and character recognition. Figure 1: different phases in the character recognition process. A. Character Acquisition. Character acquisition is the first phase in any image processing or pattern recognition task. In this paper the images of Hindi characters, in tiff, jpg, bmp, and gif format, are obtained through a scanner. After obtaining the digital image, the next step is to apply preprocessing in order to improve the image clarity and also the accuracy of recognition rates. B. Preprocessing Stages. Preprocessing is an important step of applying a number of procedures for smoothing, enhancing, filtering etc., to make a digital image usable by the subsequent algorithms and to improve its readability for Optical Character Recognition software. The various stages involved in the preprocessing are: Figure 2: preprocessing stages. 1) Binarization. Binarization (thresholding) refers to the conversion of a gray-scale image into a binary image. There are two approaches for conversion of a gray level image to binary form. The first one is global thresholding, which picks one threshold value for the entire image, based on estimation of the background level from the intensity histogram of the image.
The other one is local or adaptive threshold which uses different values for each pixel according to the local area information.The purpose of binarization is to identify the extent of objects and also to concentrate on the shape analysis, in which case the intensities of pixels are less significant than the shape of a region.2)Noise EliminationNoise that exists in images is one of the major obstacles in pattern recognition tasks. The quality of image degrades with noise. Noise can occur at different stages like image capturing, transmission and compression. Various standard algorithms, filters and morphological operations are available for removing noise that exists in images. Gaussian filter is one of the popular and effective noise removal techniques. Noise elimination is also called as smoothing. It can be used to reduce fine textured noise and to improve the quality of the image. The techniques like morphological operations are used to connect unconnected pixels, to remove isolated pixels, and also in smoothening pixels boundary.3)Size normalizationNormalization is applied to obtain characters of uniform size. It provides a tremendous reduction in data size. The character patterns have different sizes. The input to the neural network is an array of fixed size. Hence to make the image suitable to this size, size normalization is required. Normalization should reduce the size of the image without getting the structure of the image altered. In this paper, the sizes of Hindi characters are reduced to the size of 32 x 32.C.Grouping of CharactersAfter preprocessing of character, features of character are extracted. This step is heart of the system. This step helps in classifying the characters based on their features. The vowels and consonants of Hindi character set are divided into sub groups based on certain significant characteristics. The vertical bar feature and its position in the character is used to group the vowels and consonants in to sub groups. The characters are classified in to 3 sub groups. The first sub group consists of character without any vertical bar. Characters with vertical bar at right side of the character are in second sub group and the third group includes the characters having a vertical bar in the middle of the character.D.Character RecognitionCharacter recognition system is the base for many different types of applications in various fields, many of which we use in our daily life. Cost effective and less time consuming businesses, post offices, banks, security systems, number plate recognition system and even the field of robotics employ this system as the base of their operations. Recently, neural network became very popular as a technique to perform character recognition[15, 16, 17]. It has been reported that neural networks are capable of providing good recognition rate even at the presence of noise where other methods normally fail. The inherent pattern recognition abilities of layered neural networks lend itself perfectly to this type of task, by autonomously learning the complex mappings in high dimensional input data. In this work, the problem of character recognition is solved using neural networks. Neural network maps the set of input values to set of output values. The multi layer feed forward connectionist model trained by back propagation (gradient – descent) is used to recognize the given input character.IV.THINNINGThinning is a morphological operation that is used to remove selected foreground pixels from binary images. 
Thinning extracts the shape information of the characters. Thinning is also called skeletonization. Skeletonization refers to the process of reducing the width of a line from many pixels to just single pixel. This process can remove irregularities in letters and in turn, makes the recognition algorithm simpler because they only have to operate on a character stroke, which is only one pixel wide. It also reduces the memory space required for storing the information about the input characters and also reduces the processing time too. The final stage in preprocessing is thinning. Image thinning extracts a skeleton of the image without loss of the topological properties [13]. The thinning algorithm consists of both boundary pixel analysis and connectivity analysis. The binary image before and after thinning is given in the figure 3.The above preprocessing steps are applied to all vowels and consonants of Hindi characters.Figure 3: Original and the thinned imageV.TRAINING THE NEURAL NETWORK Recognition of printed Hindi character is performed by giving the input image of the character. The given image is first converted into a gray scale image. Then the gray level image is converted into a binary image using threshold. Afterwards noise is eliminated by using filters. The next step is size normalization followed by thinning which extracts the skeleton of the image without any loss of the topological properties. After preprocessing of character, features of character are extracted. This step helps in classifying the characters based on their features.In this work, Hindi characters can be classified into three subgroups. Hence three feed forward neural networks are designed to recognize the characters in each sub group. The back propagation learning algorithm is used to train each network with the characters in that group as input examples to that network. This network takes input-output vector pairs during training. During training the weights of thenetwork are iteratively adjusted to minimize error. The input image, number of neurons in each layer, learning rate, momentum and error value is given as input. Theintegrated module takes its input from the output of any one of the three networks and with the help of the lookup table of that subgroup, it recognizes and classifies the given character.VI.TESTING AND RESULTS:The vowels and consonants of Hindi character set are divided into 3 subgroups based on certain significant characteristics. For each subgroup, a separate feedforward neural network is designed to recognize the character which belongs to that group. Back propagation algorithm is used to train each network with examples. Finally, after training the neural networks with proper set of examples of each sub group, the performance of the system is tested with various test patterns with and without noise. The system recognized the character which had a noise up to 40%. Overall performance of network is tested with test samples. It achieved a recognition rate in the range of 76% - 95% for various samples. The results also show that the recognition accuracy and efficiency of the network increases with more number of training samples.VII.CONCLUSIONCharacter recognition is one of the important applications of pattern recognition. 
Instead of using only one neural network for recognizing and classifying Hindi vowels and consonants, we divided the characters into three subgroups based on certain significant features, and three feed-forward neural networks are designed and trained to recognize the characters in each subgroup. It is observed that recognition accuracy is increased by using the concept of subgroups instead of a single network. This work is limited to recognition of Hindi vowels and consonants. A good recognition rate is achieved for the following characters, since these characters are simple in nature. A poor recognition rate is achieved for the following characters, since these characters have close resemblance with ya and va.
REFERENCES
[1] Jie Liu, Jigui Sun, Shengsheng Wang, "Pattern Recognition: An Overview", International Journal of Computer Science and Network Security, Vol. 6, No. 6, June 2006.
[2] P. S. Deshpande, Latesh Malik, Sandhya Arora, "Characterizing Handwritten Devanagari Characters using Evolved Regular Expressions", 1-4244-0549-1/06, IEEE, 2006.
[3] Veena Bansal, "Integrating Knowledge Sources in Devanagari Text Recognition", PhD thesis, 1999.
[4] Aditi Goyal, Kartikay Khandelwal, Piyush Keshri, "Optical Character Recognition for Handwritten Hindi", Stanford University, CS229 Machine Learning, Fall 2010.
[5] Pooja Agarwal, M. Hanmandlu, Brejesh Lall, "Coarse Classification of Handwritten Hindi Characters", International Journal of Advanced Science and Technology, Vol. 10, September 2009.
[6] Raghuraj Singh, C. S. Yadav, Prabhat Verma, Vibhash Yadav, "Optical Character Recognition (OCR) for Printed Devnagari Script using Artificial Neural Network", International Journal of Computer Science & Communication, Vol. 1, No. 1, January-June 2010, pp. 91-95.
[7] Latesh Malik, P. S. Deshpande, "Recognition of Printed Devanagari Characters with Regular Expression in Finite State Models", Proceedings of the International Workshop on Machine Intelligence Research, 2009.
[8] Veena Bansal and R. M. K. Sinha, "Segmentation of Touching and Fused Devanagari Characters", IIT Kanpur.
[9] Swamy Saran Atul and Swapneel Prasanth Mishra, "Hand-Written Devnagari Character Recognition", NIT Rourkela, 2007.
[10] Line Eikvil, "Optical Character Recognition", December 1993.
[11] Dyashankar Singh, Sajay Kr. Singh, Dr. (Mrs) Mitreyee Dutta, "Hand Written Character Recognition Using Twelve Directional Feature Input and Neural Network", International Journal of Computer Applications (0975-8887), Vol. 1, No. 3, 2010.
[12] A. K. Jain, Mohiuddin, "Artificial Neural Networks: A Tutorial", IEEE Computer, 29, 31-44, 1996.
[13] Anil K. Jain, Orivind Due Trier and Torfinn Taxt, "Feature Extraction Methods for Character Recognition - A Survey", Pattern Recognition, Vol. 29, No. 4, pp. 641-662, 1996.
[14] B. Indira, "Artificial Neural Networks and its use in Automatic Recognition of Vehicle Registration Numbers", Ph.D. thesis, 2008.
[15] Brain Clow, "A Comparison of Neural Network Training Methods for Character Recognition", Carleton University, April 2003.
[16] Dave Anderson and George McNeil, "A DACS State of the Art Report on Artificial Neural Network Technology", International Conference on Computer Vision, New York, 2003.
[17] E. Barnard, "Optimization for Training Neural Nets", IEEE Transactions on Neural Networks, Vol. 3, No. 2, March 1992, pp. 232-240.
M. Shalini completed her MCA from Osmania University and M.Phil. from Sri Padmavathi Mahila University in the area of artificial neural networks. She has more than 17 years of teaching experience.
She is currently working as an Associate Professor in the Department of Computer Science (PG Courses) at Kasturba Gandhi College for Women, affiliated to Osmania University, Hyderabad, Andhra Pradesh, India. Her research interests include artificial neural networks, image processing, AI and web design. Dr. B. Indira completed her MCA from Kakatiya University and Ph.D. from Sri Padmavathi Mahila University in the area of artificial neural networks. She has more than 16 years of teaching experience. She is currently working as an Associate Professor in the Department of Computer Science (PG Courses) at Kasturba Gandhi College for Women, affiliated to Osmania University, Hyderabad, Andhra Pradesh, India. Her research interests include artificial neural networks, image processing, AI and cloud computing. Dr. M. V. Ramana Murthy is currently working as Chairman, Department of Computer Science, Faculty of Science, Osmania University, Hyderabad, India. His area of interest is information security. He has more than 28 years of teaching experience and has several publications. Mr. Mahaboob Sharief Shaik completed his master's degree in computer applications in 1998 and is presently working as a Lecturer at the Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia. His areas of interest are network/information security, image processing and databases.