Robot Speech Recognition: Graduation Thesis

Translated Foreign Literature (Chinese and English)

Appendix 1: Translation of the Foreign Literature

Improved Speech Recognition Method for Intelligent Robots

2. Overview of Speech Recognition

Speech recognition has received more and more attention recently owing to its important theoretical significance and practical value. Up to now, most speech recognition has been based on conventional linear-system theory, such as the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). As research on speech recognition has deepened, it has been found that the speech signal is a complex nonlinear process; if speech recognition research is to achieve a breakthrough, nonlinear-system theory must be introduced. Recently, with the development of nonlinear-system theories such as artificial neural networks (ANN), chaos, and fractals, it has become possible to apply these theories to speech recognition. The study in this paper therefore introduces ANN together with chaos and fractal theory into the speech recognition process.

Speech recognition can be divided into speaker-dependent and speaker-independent modes. A speaker-dependent system is trained on the pronunciation of a single person; its recognition rate for that person's commands is high, while commands from other speakers are recognized poorly or not at all. A speaker-independent system is trained on the pronunciations of people of different ages, sexes, and regions, so it can recognize the commands of a group of speakers. Speaker-independent systems are generally more widely used, since the user is not required to carry out any training. Extracting speaker-independent features from the speech signal is therefore a fundamental problem for such recognition systems.

Speech recognition, which comprises training and recognition, can be viewed as a pattern recognition task. Generally, the speech signal is treated as a time sequence and characterized by the powerful Hidden Markov Model (HMM). Through feature extraction, the speech signal is converted into feature vectors that serve as observations. In the training procedure, these observations are fed into the estimation of the HMM model parameters, which include the probability density functions of the observations in their corresponding states, the transition probabilities between states, and so on. After parameter estimation, the trained model can be applied to the recognition task: the input observations are recognized as the resulting words, and the accuracy can be evaluated. The whole process is illustrated in Fig. 1.
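The likelihood computation described above can be illustrated with a minimal forward-algorithm sketch for a discrete-observation HMM; the model quantities (pi, A, B) below are illustrative placeholders, not parameters from the paper:

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: P(obs | model) for a discrete-observation HMM.

    pi[i]   - initial probability of state i
    A[i][j] - transition probability from state i to state j
    B[i][o] - probability of emitting observation symbol o in state i
    """
    # Initialize with the first observation.
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    # Propagate the forward probabilities through the remaining observations.
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
                 for j in range(len(pi))]
    return sum(alpha)
```

In recognition, this likelihood would be evaluated under each trained word model, and the input is assigned to the word whose model yields the highest score.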

Fig. 1 Block diagram of the speech recognition system

3. Theory and Method

Extracting speaker-independent features from the speech signal is a fundamental problem of speaker recognition systems. The standard approach is to use Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC). Both are linear procedures based on the assumption that speaker characteristics arise from vocal-tract resonances; these features form the basic spectral structure of the speech signal. The nonlinear information in speech signals, however, is not easily captured by these feature extraction methods, so we use the fractal dimension to measure nonlinear speech turbulence.

This paper investigates and implements a speech recognition system using both traditional LPCC features and nonlinear multiscale fractal dimension features.

3.1 Linear Predictive Cepstral Coefficients

Linear prediction coefficients (LPC) are the parameter set obtained from linear prediction analysis of speech; they capture correlation characteristics between adjacent speech samples. Linear prediction analysis rests on the basic idea that a speech sample can be approximated by a linear combination of several past samples. By minimizing the sum of squared differences between the real speech samples within a given short-time analysis frame and the predicted samples, a unique set of prediction coefficients can be determined.

LPC can be used to estimate the cepstrum of the speech signal; this is a special processing method in short-time cepstral analysis of speech. The system function of the vocal-tract model is obtained from linear prediction analysis as

H(z) = \frac{1}{1 - \sum_{k=1}^{p} a_k z^{-k}}    (1)

where p is the linear prediction order, a_k (k = 1, 2, ..., p) are the prediction coefficients, and h(n) is the impulse response. Suppose the cepstrum of h(n) is \hat{h}(n); then (1) can be expanded as (2):

\hat{H}(z) = \log H(z) = \sum_{n=1}^{\infty} \hat{h}(n)\, z^{-n}    (2)

Substituting (1) into (2) and differentiating both sides with respect to z^{-1}, (2) becomes (3):

\left(1 - \sum_{k=1}^{p} a_k z^{-k}\right) \sum_{n=1}^{\infty} n\,\hat{h}(n)\, z^{-(n-1)} = \sum_{k=1}^{p} k\, a_k\, z^{-(k-1)}    (3)

Equating the coefficients of equal powers of z^{-1} on both sides yields equation (4):

n\,\hat{h}(n) = n\, a_n + \sum_{k=1}^{n-1} k\,\hat{h}(k)\, a_{n-k}, \quad 1 \le n \le p    (4)

so that \hat{h}(n) can be obtained recursively from the prediction coefficients:

\hat{h}(1) = a_1; \qquad \hat{h}(n) = a_n + \sum_{k=1}^{n-1} \frac{k}{n}\,\hat{h}(k)\, a_{n-k}, \; 1 < n \le p; \qquad \hat{h}(n) = \sum_{k=n-p}^{n-1} \frac{k}{n}\,\hat{h}(k)\, a_{n-k}, \; n > p    (5)

The cepstral coefficients computed in the way of (5) are called LPCC, where n is the LPCC order.
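The LPC-to-LPCC recursion of (5) can be sketched in Python as follows (a minimal illustration; the function name, argument layout, and cepstrum order are our own choices, not from the paper):

```python
def lpc_to_lpcc(a, n_ceps):
    """Convert LPC coefficients [a_1, ..., a_p] of the all-pole model
    H(z) = 1 / (1 - sum_k a_k z^{-k}) into n_ceps LPCC values via the
    recursion in Eq. (5)."""
    p = len(a)
    c = [0.0] * (n_ceps + 1)              # c[n] holds h^(n); c[0] unused
    for n in range(1, n_ceps + 1):
        if n <= p:
            acc = a[n - 1]                # the a_n term exists only for n <= p
            for k in range(1, n):
                acc += (k / n) * c[k] * a[n - k - 1]
        else:
            acc = 0.0
            for k in range(n - p, n):     # for n > p the sum runs k = n-p .. n-1
                acc += (k / n) * c[k] * a[n - k - 1]
        c[n] = acc
    return c[1:]
```

For a one-pole model with a_1 = a, the recursion reproduces the known expansion log(1/(1 - a z^{-1})) = sum_n (a^n / n) z^{-n}, which is a quick sanity check of the implementation.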

Before extracting the LPCC parameters, the speech signal must undergo pre-emphasis, framing, windowing, endpoint detection, and so on. The endpoint detection of the Chinese command word "Forward" is shown in Fig. 2; the speech waveform of that word and the LPCC parameter waveform after endpoint detection are shown in Fig. 3.
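The pre-processing steps mentioned above (pre-emphasis, framing, windowing) can be sketched as follows; the filter coefficient 0.95 and the 256/128-sample frame length and shift are common textbook defaults assumed here, not values given in the paper:

```python
import math

def preemphasize(signal, alpha=0.95):
    """Pre-emphasis filter y[n] = x[n] - alpha * x[n-1]."""
    return [signal[0]] + [signal[n] - alpha * signal[n - 1]
                          for n in range(1, len(signal))]

def frame_and_window(signal, frame_len=256, hop=128):
    """Split the signal into overlapping frames and apply a Hamming window."""
    window = [0.54 - 0.46 * math.cos(2 * math.pi * i / (frame_len - 1))
              for i in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        frames.append([s * w for s, w in zip(frame, window)])
    return frames
```

Endpoint detection (typically by short-time energy and zero-crossing rate) would then select the voiced frames before LPCC extraction.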

Fig. 2 Endpoint detection of the Chinese command word "Forward"

Fig. 3 Speech waveform of the Chinese command word "Forward" and the LPCC parameter waveform after endpoint detection

3.2 Speech Fractal Dimension Computation

The fractal dimension is a quantity derived from the scaling relation that defines a fractal, and it measures the self-similarity of a structure; what fractal analysis measures is the fractal dimension [6-7]. From the viewpoint of measurement, the fractal dimension extends dimension from integers to fractions, breaking the restriction of general topology that the dimension of a set must be an integer; the fractal dimension, mostly fractional, is an extension of dimension in Euclidean geometry.

There are many definitions of fractal dimension, e.g., the similarity dimension, Hausdorff dimension, information dimension, correlation dimension, capacity dimension, box-counting dimension, and so on. Among them, the Hausdorff dimension is the oldest and most important; for any set F it is defined as [3]

D(F) = \lim_{\varepsilon \to 0} \frac{\ln M_\varepsilon(F)}{\ln(1/\varepsilon)}

where M_\varepsilon(F) denotes the number of units of size \varepsilon needed to cover the subset F.
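A practical way to estimate such a dimension for a sampled waveform is box counting: normalize the curve into the unit square, count the boxes it occupies at several scales, and fit the slope of log N(ε) against log(1/ε). The sketch below makes illustrative choices (the scale list and the least-squares fit) that are not specified in the paper:

```python
import math

def box_counting_dimension(samples, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a 1-D waveform viewed as a
    curve in the plane.  For each scale s the unit square is partitioned
    into s x s boxes and the occupied boxes are counted; the dimension is
    the least-squares slope of log N against log s (= log(1/eps))."""
    n = len(samples)
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1.0
    xs, ys = [], []
    for s in scales:
        boxes = set()
        for i, v in enumerate(samples):
            bx = min(int(i / n * s), s - 1)            # time-axis box index
            by = min(int((v - lo) / span * s), s - 1)  # amplitude-axis box index
            boxes.add((bx, by))
        xs.append(math.log(s))
        ys.append(math.log(len(boxes)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

A smooth curve such as a straight line should yield a value near 1, while a rougher, more turbulent waveform yields a larger value.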

After endpoint detection, the speech waveform of the Chinese command word "Forward" and its fractal dimension waveform are shown in Fig. 4.

Fig. 4 Speech waveform of the Chinese command word "Forward" and its fractal dimension waveform after endpoint detection

3.3 Improved Feature Extraction Method

Considering the respective advantages of LPCC and the fractal dimension in representing the speech signal, we mix the two in the feature extraction: the fractal dimension characterizes the self-similarity, periodicity, and randomness of the speech waveform in time, while the LPCC features provide high speech quality and a high recognition rate.

Owing to the obvious advantages of artificial neural networks, namely nonlinearity, self-adaptability, robustness, and strong self-learning ability, their good classification and input-output mapping capabilities make them well suited to speech recognition problems.

Since the number of input nodes of an ANN is fixed, the feature parameters must be time-normalized before being fed into the neural network [9]. In our experiments, the LPCC and the fractal dimension of each sample are passed separately through the time-normalization network: the LPCC is 4 frames of data (LPCC1, LPCC2, LPCC3, LPCC4, each frame a 14-D parameter), and the fractal dimension is normalized to 12 frames of data (FD1, FD2, ..., FD12, each frame 1-D), so that the feature vector of each sample has 4*14 + 12*1 = 68 dimensions, ordered so that the first 56 dimensions are LPCC and the remaining 12 are fractal dimensions. Such a mixed feature vector can thus represent both the linear and the nonlinear characteristics of the speech signal.
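The assembly of the mixed 68-D feature vector described above can be expressed directly; the function name and input layout are illustrative, not from the paper:

```python
def build_feature_vector(lpcc_frames, fd_frames):
    """Concatenate 4 frames of 14-D LPCC with 12 frames of 1-D fractal
    dimension into a single 68-D feature vector: LPCC first, FD last."""
    assert len(lpcc_frames) == 4 and all(len(f) == 14 for f in lpcc_frames)
    assert len(fd_frames) == 12
    vec = [x for frame in lpcc_frames for x in frame]  # first 56 dims: LPCC
    vec.extend(fd_frames)                              # last 12 dims: FD
    return vec
```

Keeping the linear (LPCC) and nonlinear (FD) parts in fixed positions lets the ANN input layer stay at a constant 68 nodes across all samples.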

Architectures and Features of ASR

Automatic speech recognition (ASR) is a cutting-edge technology that allows a computer, or even a hand-held PDA (Myers, 2000), to identify words that are read aloud or spoken into any sound-recording device. The ultimate goal of ASR technology is 100% accuracy for all words intelligibly spoken by any person, regardless of vocabulary size, background noise, or speaker variables (CSLU, 2002). However, most ASR engineers admit that the current accuracy for a large vocabulary unit of speech remains below 90%. Dragon's Naturally Speaking and IBM's ViaVoice, for example, show a baseline recognition accuracy of only 60% to 80%, depending on accent, background noise, type of utterance, etc. (Ehsani & Knodt, 1998). More expensive systems reported to outperform these two are Subarashii (Bernstein, et al., 1999), EduSpeak (Franco, et al., 2001), Phonepass (Hinks, 2001), the ISLE Project (Menzel, et al., 2001), and RAD (CSLU, 2003). ASR accuracy is expected to improve.

Among the several types of speech recognizers used in ASR products, the Hidden Markov Model (HMM) is the dominant algorithm and has proven to be an effective method for dealing with large units of speech (Ehsani & Knodt, 1998). A detailed description of how the HMM works is beyond the scope of this paper but can be found in any text on language processing; among the best are Jurafsky & Martin (2000) and Hosom, Cole, and Fanty (2003). Put simply, an HMM computes the probable match between the input it receives and the phonemes contained in a database of hundreds of native-speaker recordings (Hinks, 2003, p. 5). That is, an HMM-based speech recognizer computes, on the basis of probability theory, how close the phonemes of a spoken input are to a corresponding model. A high likelihood represents good pronunciation; a low likelihood represents poor pronunciation (Larocca, et al., 1991).

While ASR has been commonly used for purposes such as business dictation and special-needs accessibility, its presence in the language-learning market has increased dramatically in recent years (Aist, 1999; Eskenazi, 1999; Hinks, 2003). Early ASR-based software programs adopted template-based recognition systems that perform pattern matching using dynamic programming or other time-normalization techniques (Dalby & Kewley-Port, 1999). These programs include Talk to Me (Auralog, 1995), the Tell Me More series (Auralog, 2000), Triple-Play Plus (Mackey & Choi, 1998), New Dynamic English (DynEd, 1997), English Discoveries (Edusoft, 1998), and See It, Hear It, SAY IT! (CPI, 1997). Most of these programs provide no feedback on pronunciation accuracy beyond simply indicating, on the basis of the closest pattern match, which written dialogue choice the user has made; learners are not told how accurate their pronunciation is. In particular, Neri, et al. (2002) criticize the graphical waveforms presented in products such as Talk to Me and Tell Me More, because they look flashy to buyers but give no meaningful feedback to users. The 2000 version of Talk to Me has incorporated more of the features that Hinks (2003), for example, believes are useful to learners:

★ A visual signal allows learners to compare their intonation with that of the model speaker.

★ The learner's pronunciation accuracy is scored on a scale of seven (the higher the better).

★ Words whose pronunciation fails to be recognized are identified and clearly highlighted.

Appendix 2: Original Foreign Text (Photocopy)

Improved speech recognition method

for intelligent robot

2. Overview of speech recognition

Speech recognition has received more and more attention recently due to its important theoretical meaning and practical value [5]. Up to now, most speech recognition is based on conventional linear-system theory, such as the Hidden Markov Model (HMM) and Dynamic Time Warping (DTW). With the deep study of speech recognition, it has been found that the speech signal is a complex nonlinear process. If the study of speech recognition is to break through, nonlinear-system theory must be introduced into it. Recently, with the development of nonlinear-system theories such as artificial neural networks (ANN), chaos and fractals, it has become possible to apply these theories to speech recognition. Therefore, the study in this paper is based on ANN, and chaos and fractal theories are introduced to process speech recognition.

Speech recognition is divided into two ways: speaker dependent and speaker independent. Speaker dependent refers to a pronunciation model trained by a single person; the identification rate for the training person's orders is high, while others' orders have a low identification rate or cannot be recognized. Speaker independent refers to a pronunciation model trained by persons of different ages, sexes and regions; it can identify a group of persons' orders. Generally, speaker-independent systems are more widely used, since the user is not required to conduct the training. So extraction of speaker-independent features from the speech signal is the fundamental problem of speaker recognition systems.

Speech recognition can be viewed as a pattern recognition task, which includes training and recognition. Generally, the speech signal can be viewed as a time sequence and characterized by the powerful hidden Markov model (HMM). Through feature extraction, the speech signal is transferred into feature vectors which act as observations. In the training procedure, these observations are fed to estimate the model parameters of the HMM. These parameters include the probability density function for the observations and their corresponding states, the transition probability between the states, etc. After the parameter estimation, the trained models can be used for the recognition task. The input observations will be recognized as the resulting words and the accuracy can be evaluated. The whole process is illustrated in Fig. 1.

Fig. 1 Block diagram of speech recognition system

3. Theory and method

Extraction of speaker-independent features from the speech signal is the fundamental problem of speaker recognition systems. The standard methodology for solving this problem uses Linear Predictive Cepstral Coefficients (LPCC) and Mel-Frequency Cepstral Coefficients (MFCC). Both methods are linear procedures based on the assumption that speaker features have properties caused by the vocal tract resonances. These features form the basic spectral structure of the speech signal. However, the non-linear information in speech signals is not easily extracted by the present feature extraction methodologies. So we use the fractal dimension to measure non-linear speech turbulence.

This paper investigates and implements a speaker identification system using both traditional LPCC and non-linear multiscaled fractal dimension feature extraction.

3.1 Linear Predictive Cepstral Coefficients

Linear prediction coefficients (LPC) are a parameter set obtained when we do linear prediction analysis of speech. They describe correlation characteristics between adjacent speech samples. Linear prediction analysis is based on the following basic concept: a speech sample can be estimated approximately by a linear combination of some past speech samples. According to the principle of minimizing the sum of squared differences between the real speech samples in a certain short-time analysis frame and the predicted samples, a unique group of prediction coefficients can be determined.

LPC coefficients can be used to estimate the speech signal cepstrum. This is a special processing method in the analysis of the speech signal short-time cepstrum. The system function of the channel model is obtained by linear prediction analysis as follows:

H(z) = \frac{1}{1 - \sum_{k=1}^{p} a_k z^{-k}}    (1)

where p represents the linear prediction order, a_k (k = 1, 2, ..., p) represent the prediction coefficients, and the impulse response is represented by h(n). Suppose the cepstrum of h(n) is represented by \hat{h}(n); then (1) can be expanded as (2):

\hat{H}(z) = \log H(z) = \sum_{n=1}^{\infty} \hat{h}(n)\, z^{-n}    (2)

Substituting (1) into (2) and differentiating both sides with respect to z^{-1}, (2) becomes (3):

\left(1 - \sum_{k=1}^{p} a_k z^{-k}\right) \sum_{n=1}^{\infty} n\,\hat{h}(n)\, z^{-(n-1)} = \sum_{k=1}^{p} k\, a_k\, z^{-(k-1)}    (3)

Equating the coefficients of equal powers of z^{-1} on both sides gives equation (4):

n\,\hat{h}(n) = n\, a_n + \sum_{k=1}^{n-1} k\,\hat{h}(k)\, a_{n-k}, \quad 1 \le n \le p    (4)

so that \hat{h}(n) can be obtained recursively:

\hat{h}(1) = a_1; \qquad \hat{h}(n) = a_n + \sum_{k=1}^{n-1} \frac{k}{n}\,\hat{h}(k)\, a_{n-k}, \; 1 < n \le p; \qquad \hat{h}(n) = \sum_{k=n-p}^{n-1} \frac{k}{n}\,\hat{h}(k)\, a_{n-k}, \; n > p    (5)

The cepstrum coefficients calculated in the way of (5) are called LPCC, where n represents the LPCC order.

Before we extract the LPCC parameters, we should carry out pre-emphasis, framing, windowing and endpoint detection on the speech signal. The endpoint detection of the Chinese command word "Forward" is shown in Fig. 2; the speech waveform of the Chinese command word "Forward" and the LPCC parameter waveform after endpoint detection are shown in Fig. 3.

3.2 Speech Fractal Dimension Computation

The fractal dimension is a quantitative value derived from the scaling relation in the sense of fractals, and also a measure of the self-similarity of a structure; what fractal analysis measures is the fractal dimension [6-7]. From the viewpoint of measuring, the fractal dimension is extended from integers to fractions, breaking the limit of general topology that the dimension of a set must be an integer. The fractal dimension, mostly fractional, is an extension of dimension in Euclidean geometry.

There are many definitions of fractal dimension, e.g., the similarity dimension, Hausdorff dimension, information dimension, correlation dimension, capacity dimension, box-counting dimension, etc. Among them, the Hausdorff dimension is the oldest and also the most important; for any set F it is defined as [3]

D(F) = \lim_{\varepsilon \to 0} \frac{\ln M_\varepsilon(F)}{\ln(1/\varepsilon)}

where M_\varepsilon(F) denotes how many units of size \varepsilon are needed to cover the subset F.

In this paper, the box-counting dimension (DB) of F is obtained by partitioning the plane with square grids of side \varepsilon and counting the number N(\varepsilon) of squares that intersect the curve; it is defined as [8]

D_B(F) = \lim_{\varepsilon \to 0} \frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)}

The speech waveform of the Chinese command word "Forward" and the fractal dimension waveform after endpoint detection are shown in Fig. 4.

3.3 Improved feature extraction method

Considering the respective advantages of LPCC and the fractal dimension in expressing the speech signal, we mix both to form the feature signal; that is, the fractal dimension denotes the self-similarity, periodicity and randomness of the speech time waveform, while the LPCC feature is good for speech quality and a high identification rate.

Due to the ANN's obvious advantages of nonlinearity, self-adaptability, robustness and self-learning ability, its good classification and input-output mapping abilities are suitable for resolving the speech recognition problem.

Since the number of ANN input nodes is fixed, time regularization is carried out on the feature parameters before they are input to the neural network [9]. In our experiments, the LPCC and fractal dimension of each sample need to pass through the time-regularization network separately. The LPCC is 4 frames of data (LPCC1, LPCC2, LPCC3, LPCC4, each frame parameter being 14-D), and the fractal dimension is regularized into 12 frames of data (FD1, FD2, ..., FD12, each frame parameter being 1-D), so that the feature vector of each sample has 4*14 + 1*12 = 68 dimensions; the order is that the first 56 dimensions are LPCC and the remaining 12 dimensions are fractal dimensions. Thus, such a mixed feature parameter can show the speech's linear and nonlinear characteristics as well.

Architectures and Features of ASR

ASR is a cutting-edge technology that allows a computer or even a hand-held PDA (Myers, 2000) to identify words that are read aloud or spoken into any sound-recording device. The ultimate purpose of ASR technology is to allow 100% accuracy with all words that are intelligibly spoken by any person regardless of vocabulary size, background noise, or speaker variables (CSLU, 2002). However, most ASR engineers admit that the current accuracy level for a large vocabulary unit of speech (e.g., the sentence) remains less than 90%. Dragon's Naturally Speaking or IBM's ViaVoice, for example, show a baseline recognition accuracy of only 60% to 80%, depending upon accent, background noise, type of utterance, etc. (Ehsani & Knodt, 1998). More expensive systems that are reported to outperform these two are Subarashii (Bernstein, et al., 1999), EduSpeak (Franco, et al., 2001), Phonepass (Hinks, 2001), the ISLE Project (Menzel, et al., 2001) and RAD (CSLU, 2003). ASR accuracy is expected to improve.

Among several types of speech recognizers used in ASR products, both implemented and proposed, the Hidden Markov Model (HMM) is one of the most dominant algorithms and has proven to be an effective method of dealing with large units of speech (Ehsani & Knodt, 1998). Detailed descriptions of how the HMM works go beyond the scope of this paper and can be found in any text concerned with language processing; among the best are Jurafsky & Martin (2000) and Hosom, Cole, and Fanty (2003). Put simply, an HMM computes the probable match between the input it receives and the phonemes contained in a database of hundreds of native-speaker recordings (Hinks, 2003, p. 5). That is, a speech recognizer based on HMM computes how close the phonemes of a spoken input are to a corresponding model, based on probability theory. High likelihood represents good pronunciation; low likelihood represents poor pronunciation (Larocca, et al., 1991).

While ASR has been commonly used for such purposes as business dictation and special-needs accessibility, its market presence for language learning has increased dramatically in recent years (Aist, 1999; Eskenazi, 1999; Hinks, 2003). Early ASR-based software programs adopted template-based recognition systems which perform pattern matching using dynamic programming or other time-normalization techniques (Dalby & Kewley-Port, 1999). These programs include Talk to Me (Auralog, 1995), the Tell Me More Series (Auralog, 2000), Triple-Play Plus (Mackey & Choi, 1998), New Dynamic English (DynEd, 1997), English Discoveries (Edusoft, 1998), and See it, Hear It, SAY IT! (CPI, 1997). Most of these programs do not provide any feedback on pronunciation accuracy beyond simply indicating which written dialogue choice the user has made, based on the closest pattern match. Learners are not told the accuracy of their pronunciation. In particular, Neri, et al. (2002) criticize the graphical wave forms presented in products such as Talk to Me and Tell Me More because they look flashy to buyers, but do not give meaningful feedback to users. The 2000 version of Talk to Me has incorporated more of the features that Hinks (2003), for example, believes are useful to learners:

★A visual signal allows learners to compare their intonation to that of the model speaker.

★The learners' pronunciation accuracy is scored on a scale of seven (the higher the better).

★Words whose pronunciation fails to be recognized are highlighted.
