Evolving Hierarchical RBF Neural Networks for Breast Cancer Detection
Artificial Intelligence Control Technology Courseware: Neural Network Control

For example, in the auditory system, nerve cells and fibers are arranged according to the frequencies to which they are most sensitive. Kohonen therefore proposed that when a neural network receives external input, it organizes itself into different regions, and different regions respond to different input patterns with different characteristics; that is, different neurons respond optimally to stimuli of different kinds, forming an ordered map in the topological sense. Such an ordered map is also called a self-organizing feature map.
If the input vector is X = (x_1, x_2, ..., x_n) and the weight vector is W = (w_1, w_2, ..., w_n), the deviation E between the neuron's desired output d and its actual output y is defined as:
E = d − y
Perceptron learning rule
The perceptron uses the sign function as its transfer function. When the actual output matches the desired output, the weights are left unchanged; otherwise the weights are adjusted according to the standard perceptron update W ← W + η·E·X, where η is the learning rate and E = d − y is the error defined above.
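A minimal NumPy sketch of this update (the bipolar sign activation, the bias handling, and the learning-rate value are assumptions taken from the standard perceptron, not details given on the slide):

```python
import numpy as np

def sign(u):
    # Bipolar sign transfer function assumed for the perceptron
    return 1.0 if u >= 0 else -1.0

def train_perceptron(X, d, eta=0.1, epochs=100):
    """X: (n_samples, n_features); d: desired outputs in {-1, +1}."""
    w = np.zeros(X.shape[1])          # weight vector W
    b = 0.0                           # threshold handled as a bias term
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = sign(np.dot(w, x) + b)    # actual output
            e = target - y                # deviation E = d - y
            if e != 0:                    # adjust only when the output is wrong
                w += eta * e * x
                b += eta * e
    return w, b
```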
Single-neuron network
By abstracting and simulating the structure and function of a biological neuron, a single-neuron model is obtained from a mathematical point of view. In this model, the inputs are the neuron's input signals, reflecting that a neuron receives multiple external stimuli at the same time; each input has a corresponding weight, one per input feature, expressing its importance; the model also has the neuron's internal state, an external input signal, and a threshold (also called a bias).
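A minimal sketch of the single-neuron computation just described (the names x, w, and theta are illustrative stand-ins for the symbols lost from the slide, and tanh is just one possible transfer function):

```python
import numpy as np

def neuron_output(x, w, theta, f=np.tanh):
    """Single neuron: weighted sum of the inputs minus the threshold, passed through f."""
    u = np.dot(w, x) - theta      # internal state: u = sum_i w_i * x_i - theta
    return f(u)                   # transfer (activation) function

# Example: three inputs with their weights and a threshold
y = neuron_output(x=np.array([0.5, 1.0, -0.2]),
                  w=np.array([0.8, -0.4, 0.3]),
                  theta=0.1)
```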
Third-generation neural networks:
In 2006, Geoffrey Hinton proposed a deep network model, the Deep Belief Network (DBN), which brought neural networks into a period of rapid development of deep learning. Deep learning is a new area of machine learning research that uses unsupervised training to imitate mechanisms of the human brain for processing data such as text and images.
... a control scheme that approximates the system through the neurons and the weights of their interconnections.
EfficientNet Explained - Reply

What is EfficientNet? EfficientNet is an efficient convolutional neural network (CNN) architecture proposed by a Google research team in 2019.
By jointly optimizing the network's depth, width, and resolution, it achieves higher accuracy and better efficiency on image classification tasks than other state-of-the-art (SOTA) models.
EfficientNet's key innovation is a method called compound scaling, which chooses suitable ratios for the network's different dimensions (depth, width, and resolution) so that the network performs well along all three.
This means EfficientNet not only improves accuracy but also reduces the number of trainable parameters, making the model more efficient.
The core idea of EfficientNet is to scale depth, width, and resolution in fixed relative proportions across the network, balancing model size against performance.
Specifically, EfficientNet uses a compound scaling coefficient phi (φ) that controls the overall scale of the network.
The base scaling constants paired with φ are determined by a small grid search with validation: in the EfficientNet work, large-scale experiments on ImageNet yielded α = 1.2 for depth, β = 1.1 for width, and γ = 1.15 for resolution (with α·β²·γ² ≈ 2), and φ is then increased to scale the network up.
In the EfficientNet architecture, scaling is first applied along the depth dimension.
Depth is extended by replicating a repeated block of a baseline model (for example, a ResNet-style block).
Scaling the depth increases both the number of sub-layers and the overall depth of the network.
Next comes scaling along the width dimension, i.e., increasing the number of channels/feature dimensions.
To balance performance across stages, EfficientNet limits the range of this expansion.
In this way, EfficientNet maintains high computational efficiency even at higher-resolution stages.
Finally, scaling is applied along the resolution dimension, i.e., adjusting the resolution of the input image.
By gradually increasing the resolution during training, EfficientNet improves the network's ability to handle higher-resolution images.
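A hedged sketch of compound scaling: the α, β, γ constants below are the grid-searched values reported in the EfficientNet paper, while the baseline dimensions and the rounding are simplifying assumptions rather than the official implementation.

```python
import math

# Grid-searched base coefficients from the EfficientNet paper,
# chosen so that alpha * beta**2 * gamma**2 is approximately 2.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi, base_depth, base_width, base_resolution):
    """Scale depth, width and input resolution with a single coefficient phi."""
    depth = math.ceil(base_depth * ALPHA ** phi)              # layers per stage
    width = int(round(base_width * BETA ** phi))              # channels
    resolution = int(round(base_resolution * GAMMA ** phi))   # input image size
    return depth, width, resolution

# Example: scaling an EfficientNet-B0-like baseline stage up with phi = 2
print(compound_scale(phi=2, base_depth=3, base_width=32, base_resolution=224))
```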
EfficientNet Explained

EfficientNet Explained. I. Introduction: EfficientNet is an efficient convolutional neural network architecture proposed by a Google research team in 2019.
It is optimized by scaling network depth, width, and resolution in a unified way, improving model performance when computational resources are limited.
EfficientNet has performed excellently on many computer vision tasks and has become one of the most closely watched models in the field.
II. Network architecture: EfficientNet adopts a method called compound scaling, which scales the network's depth, width, and resolution jointly to improve performance under a limited compute budget.
Specifically, EfficientNet uses a single compound coefficient φ to control the scaling of depth, width, and resolution at the same time, so that the model makes full use of the available compute while reaching better performance.
III. Performance: EfficientNet has achieved excellent results on a variety of computer vision tasks, reaching state-of-the-art performance in image classification, object detection, and semantic segmentation.
Its efficient architecture delivers strong performance even with limited computational resources, which has made EfficientNet a model that many computer vision researchers and engineers study and use.
IV. Application areas: thanks to its efficiency and strong results, EfficientNet is used across a wide range of computer vision applications.
For example, in image recognition on smartphones, visual perception for autonomous driving, and medical image recognition, EfficientNet plays an important role and has become one of the most closely watched models in today's AI landscape.
V. Outlook: as computational resources grow and deep learning continues to advance, EfficientNet is expected to develop further and show its strengths in even more application areas.
In the future, it may also see further progress in model compression and acceleration, delivering better performance in resource-constrained environments and contributing more to the development of AI technology.
VI. Summary: both its architecture and its performance have made EfficientNet one of the most closely watched models in the field.
Breakthroughs in Deep Learning in 2024 and the Frontier of Innovative Applications

Application prospects: new neural network architectures have broad application prospects in natural language processing, computer vision, speech recognition, and other fields, laying a foundation for the further development of artificial intelligence.
Innovation in deep learning algorithms
Algorithm optimization: optimizing deep learning algorithms for specific problems to improve their efficiency and accuracy.
Speech recognition: using deep learning for speech-to-text, speech synthesis, and related tasks
Recommender systems: using deep learning to analyze user behavior and deliver personalized recommendations
Breakthroughs in deep learning technology in 2024
The emergence of new neural network architectures
Overview: new neural network architectures achieved major breakthroughs in 2024, opening up new directions for the field of deep learning.
Concrete impact: the new architectures perform strongly on complex tasks and improve model generalization, providing solid support for solving practical problems.
Object detection: one of deep learning's computer vision applications; models trained on large amounts of data automatically recognize and localize objects in images.
Image generation: using deep learning to generate high-quality images for virtual reality, game development, and other fields.
Image recognition: using deep learning to classify, recognize, and analyze images, widely applied in face recognition, object recognition, and other areas.
Hardware acceleration: developing more efficient hardware accelerators suited to deep learning workloads, such as GPUs and TPUs, to raise computing speed and lower cost.
Multimodal fusion: applying deep learning to data from multiple modalities, such as images, speech, and text, fusing them to improve information utilization and cross-modal retrieval.
Challenges of deep learning interpretability
The black-box nature of deep learning models; the tension between interpretability and model performance; the lack of unified evaluation standards and methods; challenges and limitations in practical applications
Research on Low-Light Image Enhancement Technology Based on Deep Learning

Deep learning is one of the most active research directions in artificial intelligence today and has been applied in many areas, such as speech recognition, image recognition, and natural language processing.
It is also widely used in image processing, and one such application is low-light image enhancement.
Low-light image enhancement refers to processing images captured with insufficient light or in poor lighting conditions so that they become clearer, brighter, and richer in detail.
This is a very challenging problem: low-light images typically suffer from missing information, increased noise, and color distortion caused by the lack of light, and traditional image processing methods struggle to handle them effectively.
Deep learning, with the feature learning and representation capabilities of convolutional neural networks, can address the low-light enhancement problem effectively.
Achieving low-light image enhancement requires solving several problems. 1. Building a deep learning model suited to low-light enhancement. Traditional enhancement methods are mostly built on color-space transforms or gradient-domain transforms, but such methods cannot capture high-level features and semantic information well, nor can they easily exploit complex Markov random field models.
Deep-learning-based methods, in contrast, can learn higher-level features and use the hierarchical nature of the model to progressively extract semantic information from the image, making low-light enhancement more accurate and refined.
When building the deep learning model, the training data must be selected and processed carefully to ensure the model's generalization ability and robustness.
The model should also be trained on images with different degrees of low light to improve its adaptability.
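As an illustration only, a toy convolutional enhancer of the kind such a model might start from (a generic sketch, not any specific published low-light architecture) could be written in PyTorch as:

```python
import torch
import torch.nn as nn

class SimpleEnhancer(nn.Module):
    """Toy enhancement network: maps a dark RGB image to an enhanced RGB image."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1), nn.Sigmoid(),   # output kept in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Example: enhance a batch of two simulated 128x128 low-light images
model = SimpleEnhancer()
enhanced = model(torch.rand(2, 3, 128, 128) * 0.2)   # dark inputs in [0, 0.2]
```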
2. Choosing an appropriate loss function. The loss function is one of the key components of deep learning.
For low-light enhancement, traditional losses often measure only pixel-wise error and cannot make good use of the image's global features and semantic information.
Deep-learning-based methods can draw on richer prior knowledge and choose loss functions that better ensure the quality of the output.
For low-light enhancement, researchers have proposed various losses, such as the mean absolute error and structural-similarity (SSIM) based losses.
These losses can improve the enhancement effect and image quality and make the model more stable and robust.
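A hedged sketch of one such combined objective, pairing an L1 (mean absolute error) term with a simplified single-scale SSIM term; the 0.84 weighting and the box-filter SSIM are common simplifications rather than any specific paper's exact loss:

```python
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-scale SSIM using an average-pooling (box) window."""
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    sigma_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)) / \
               ((mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2))
    return ssim_map.mean()

def enhancement_loss(pred, target, alpha=0.84):
    """Weighted combination of an SSIM term and a pixel-wise L1 term."""
    return alpha * (1 - ssim(pred, target)) + (1 - alpha) * F.l1_loss(pred, target)
```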
3. Improving the model's efficiency and speed. Low-light enhancement involves processing large amounts of image data; if the deep learning model is not efficient and fast enough, enhancement cannot run in real time, which greatly degrades the user experience.
Sample Essay: "A Review of Virtual and Augmented Reality Technology (2024)"

"A Review of Virtual and Augmented Reality Technology", Part One. I. Introduction. With the rapid development of science and technology, virtual reality (VR) and augmented reality (AR) have gradually become hot topics in today's technology landscape.
Virtual-real augmentation technology combines virtual information and content with the real environment by technical means, bringing people a new kind of immersive experience.
This article reviews its definition, characteristics, application areas, and development prospects.
II. Definition and characteristics. Virtual-real augmentation technology fuses virtual information with the real environment: using advanced computer graphics, sensing, and human-computer interaction techniques, it embeds virtual content into the real environment so that users experience what the virtual information brings while remaining in the real world.
Its main characteristics are as follows. 1. Immersion: users can be fully immersed in the blended virtual-real environment and gain a realistic sense of presence.
2. Interactivity: users can interact with the virtual information in real time through various devices, for example via gesture recognition and speech recognition.
3. Real-time operation: the virtual information is fused with the real environment in real time, giving users a real-time interactive experience.
III. Application areas. The technology is applied very widely, mainly in the following areas. 1. Entertainment: games, film, and music are its main application areas.
With it, users can enjoy more realistic gaming and film-watching experiences.
2. Education: it can present teaching content in a more vivid and concrete form, helping students understand and master knowledge.
3. Healthcare: it can be used for surgical simulation, rehabilitation training, and medical education, improving the quality and efficiency of medical care.
4. Business: it can be used for product display, advertising, and shopping experiences, improving user experience and purchase intent.
IV. Current status and prospects. The technology has already made great progress, and major technology companies are actively investing in its research and development.
As the technology advances and its application areas keep expanding, its prospects are very broad.
In the future it will become more widespread and mature, bringing people richer and more realistic experiences.
A Hyperspectral Image Super-Resolution Network Based on a Grouped Two-Stage Bidirectional Convolutional LSTM Method

Smart City Practice, No. 04, 2024 - INTELLIGENT CITY
Hyperspectral image super-resolution network based on a grouped two-stage bidirectional convolutional long short-term memory method
LIN Jian-jun (1), HOU Jun-yi (2), YANG Cui-yun (2)
(1. Department of Information Engineering, Yantai Vocational College, Yantai 264670, Shandong, China; 2. College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266000, Shandong, China)
Abstract: This paper proposes a grouped two-stage Bi-ConvLSTM network (GDBN) that can make full use of the spatial and spectral information of images. By using a band-wise grouping strategy, it effectively relieves the computational burden and protects the spectral information. At different stages of the encoder, a shallow information extraction module and a deep feature extraction module extract information at different levels: the shallow module fully captures shallow feature information at different scales, while the deep module captures the high-frequency feature information of the image. A channel attention mechanism is also introduced to enhance the network's ability to organize features, and extensive experiments on the natural CAVE dataset show that the method generally outperforms current mainstream deep learning methods.
Keywords: bidirectional convolutional long short-term memory network; hyperspectral image super-resolution; channel attention; neural network; deep learning
CLC number: TP391; Document code: A; Article ID: 2096-1936(2024)04-0001-03; DOI: 10.19301/ki.zncs.2024.04.001
In recent years, single-image super-resolution methods based on deep learning [1-2] have developed extensively.
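The abstract above does not spell out its attention layer, but a minimal channel-attention block in the squeeze-and-excitation style (a sketch under that assumption, not the authors' exact GDBN module) can be written in PyTorch as:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels (e.g. spectral-band groups) by learned importance."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                               # scale each channel by its weight

# Example: attention over 31 spectral bands of a hyperspectral feature map
attn = ChannelAttention(channels=31)
out = attn(torch.randn(2, 31, 64, 64))
```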
Introduction to Neural Networks (PPT Courseware)

The basic building block of the nervous system is the neuron (nerve cell), the basic unit that processes and transmits information between the parts of the body.
Each neuron consists of a cell body, an axon that connects it to other neurons, and a number of shorter outward-reaching branches called dendrites.
The axon's function is to transmit the neuron's output signal (excitation) to other neurons; the many nerve terminals at its end allow the excitation to be transmitted to multiple neurons at the same time.
Combining neural networks with expert systems, fuzzy logic, genetic algorithms, and other techniques makes it possible to design new types of intelligent control systems.
(4) Optimization computation: conventional control systems frequently involve solving constrained optimization problems, and neural networks provide an effective way of solving them.
... estimating the model parameters under a given conventional model structure. ② Using the linear and nonlinear characteristics of neural networks, static, dynamic, inverse-dynamic, and predictive models of linear and nonlinear systems can be established, enabling the modeling of nonlinear systems (a small identification sketch follows after this list).
(2) Neural network controllers: acting as the controller of a real-time control system, a neural network can effectively control uncertain or poorly known systems and disturbances, so that the control system achieves the required dynamic and static characteristics. (3) Combining neural networks with other algorithms
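A minimal sketch of item ② above, identifying a dynamic model of a nonlinear system from input/output data with a small neural network (the simulated plant and the NARX-style regressor choice are assumptions for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Simulated nonlinear plant (an assumed example, not a specific real system)
rng = np.random.default_rng(0)
u = rng.uniform(-1.0, 1.0, 500)               # input sequence
y = np.zeros(500)                             # output sequence
for k in range(2, 500):
    y[k] = 0.6 * np.sin(y[k - 1]) + 0.3 * y[k - 2] + 0.5 * u[k - 1]

# NARX-style regressors: [y(k-1), y(k-2), u(k-1)] -> y(k)
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
t = y[2:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X, t)                               # identified one-step-ahead dynamic model
print("one-step-ahead fit R^2:", model.score(X, t))
```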
4. The new connectionism period (1986-present): neural networks moved from theory into application, and neural network chips and neurocomputers appeared.
The main application areas of neural networks include pattern recognition and image processing (speech, fingerprints, fault detection, image compression, etc.), control and optimization, system identification, forecasting and management (market forecasting, risk analysis), and communications.
Principles of neural networks: research in neurophysiology and neuroanatomy shows that the human brain is extremely complex, a mesh-like structure woven from more than one hundred billion neurons, with roughly 14 billion neurons in the cerebral cortex and about 100 billion in the cerebellar cortex. The brain carries out high-level activities such as intelligence and thought, and the desire to simulate its activity with mathematical models gave rise to the study of neural networks.
(2) Learning and forgetting: because of the plasticity of neuronal structure, synaptic transmission can be strengthened or weakened, so neurons are capable of both learning and forgetting. The three key factors that determine the performance of a neural network model are: the characteristics of the neurons, the topology of their interconnections, and the learning rules.
Evolving Hierarchical RBF Neural Networks for Breast Cancer Detection

Yuehui Chen, Yan Wang, and Bo Yang
School of Information Science and Engineering, Jinan University, Jinan 250022, P.R. China
yhchen@

I. King et al. (Eds.): ICONIP 2006, Part III, LNCS 4234, pp. 137-144, 2006. © Springer-Verlag Berlin Heidelberg 2006

Abstract. Hierarchical RBF networks consist of multiple RBF networks assembled in different levels or in a cascade architecture. In this paper, an evolved hierarchical RBF network was employed to detect breast cancer. For evolving a hierarchical RBF network model, Extended Compact Genetic Programming (ECGP), a tree-structure based evolutionary algorithm, and Differential Evolution (DE) are used to find an optimal detection model. The performance of the proposed method was then compared with the Flexible Neural Tree (FNT), Neural Network (NN), and RBF Neural Network (RBF-NN) using the same breast cancer data set. Simulation results show that the obtained hierarchical RBF network model has fewer variables, a reduced number of input features, and high detection accuracy.

1 Introduction

Breast cancer is the most common cancer in women in many countries. Most breast cancers are detected as a lump/mass on the breast, or through self-examination or mammography [1]. Screening mammography is the best tool available for detecting cancerous lesions before clinical symptoms appear [7]. Surgery through a biopsy or lumpectomy has also been the most common method of removal. Fine needle aspiration (FNA) of breast masses is a cost-effective, non-traumatic, and mostly non-invasive diagnostic test that obtains the information needed to evaluate malignancy. Recently, a new, less invasive technique, which uses super-cooled nitrogen to freeze and shrink a non-cancerous tumor and destroy the blood vessels feeding the growth of the tumour, has been developed [2] in the USA.

Various artificial intelligence techniques have been used to improve the diagnosis procedures and to aid the physician's efforts [3][4][5][6]. In our previous studies, the performance of the Flexible Neural Tree (FNT) [11], Neural Network (NN), Wavelet Neural Network (WNN) and an ensemble method to detect breast cancer has been evaluated [12].

Hierarchical RBF networks (HRBF) consist of multiple RBF networks assembled in different levels or in a cascade architecture, in which a problem is divided and solved in more than one step. Mat Isa et al. used a Hierarchical Radial Basis Function (HiRBF) network to increase RBF performance in diagnosing cervical cancer [14]. The HiRBF cascaded together two RBF networks, where both networks have different structures but use the same learning algorithms. The first network classifies all data and performs a filtering process to ensure that only certain attributes are fed to the second network. The study shows that the HiRBF performs better compared to a single RBF. Hierarchical RBF networks have also proved effective in the reconstruction of smooth surfaces from sparse, noisy data points [15]. In order to improve model generalization performance, a selective combination of multiple neural networks using a Bayesian method was proposed in [16].

In this paper, an automatic method for constructing HRBF networks is proposed. Based on a pre-defined instruction/operator set, the HRBF network can be created and evolved. The HRBF network allows input variable selection. In our previous studies, in order to optimize the Flexible Neural Tree (FNT) and the hierarchical TS fuzzy model (H-TS-FS), the hierarchical structure of the FNT and H-TS-FS was evolved using the Probabilistic Incremental Program Evolution algorithm (PIPE) [10][11] and Ant Programming [13] with specific instructions. In this research, the hierarchical structure is evolved using Extended Compact Genetic Programming (ECGP), a tree-structure based evolutionary algorithm. The fine tuning of the parameters encoded in the structure is accomplished using the DE algorithm. The proposed method interleaves both optimizations. The novelty of this paper is in the use of the hierarchical RBF network model for selecting the important input variables and for breast cancer detection.

The paper is organized as follows. The RBF network is introduced in Section 2. An optimal design method for constructing the HRBF networks is described in Section 3. Section 4 gives the simulation results. Finally, in Section 5 we present some concluding remarks.

2 The RBF Network

An RBF network is a feed-forward neural network with one hidden layer of RBF units and a linear output layer. By an RBF unit we mean a neuron with multiple real inputs x = (x_1, ..., x_n) and one output y computed as:

y = φ(ξ), ξ = ||x − c||_C / b    (1)

where φ: R → R is a suitable activation function; here we consider the Gaussian radial basis function φ(z) = e^{−z²}. The center c ∈ R^n, the width b ∈ R and an n×n real matrix C are a unit's parameters; ||·||_C denotes a weighted norm defined as ||x||²_C = (Cx)^T (Cx) = x^T C^T C x. Thus, the network represents the following real function f: R^n → R^m:

f_s(x) = Σ_{j=1}^{h} w_{js} e^{−(||x − c||_C / b)²}, s = 1, ..., m,    (2)

where w_{js} ∈ R are the weights of the s-th output unit and f_s is the s-th network output.

Fig. 1. An RBF neural network (left), an example of a hierarchical RBF network (middle), and a tree-structural representation of the HRBF network (right)

The goal of RBF network learning is to find suitable values of the RBF units' parameters and the output layer's weights, so that the RBF network function approximates a function given by a set of examples of inputs and desired outputs T = {x(t), d(t); t = 1, ..., k}, called a training set. The quality of the learned RBF network is measured by the error function:

E = (1/2) Σ_{t=1}^{k} Σ_{j=1}^{m} e_j²(t), e_j(t) = d_j(t) − f_j(t).    (3)

3 The Hierarchical RBF Network

3.1 Encoding and Calculation

A function set F and a terminal instruction set T used for generating a HRBF network model are described as S = F ∪ T = {+_2, +_3, ..., +_N} ∪ {x_1, ..., x_n}, where +_i (i = 2, 3, ..., N) denote non-leaf node instructions taking i arguments, and x_1, x_2, ..., x_n are leaf node instructions taking no arguments. The output of a non-leaf node is calculated as a HRBF network model (see Fig. 1). From this point of view, the instruction +_i is also called a basis function operator with i inputs. In this research, the Gaussian radial basis function is used, and the number of radial basis functions in the hidden layer of the network is the same as the number of inputs, that is, m = n.

In the creation process of the HRBF network tree, if a nonterminal instruction, i.e., +_i (i = 2, 3, 4, ..., N), is selected, i real values are randomly generated and used to represent the connection strengths between the node +_i and its children. In addition, 2×n² adjustable parameters a_i and b_i are randomly created as radial basis function parameters. The output of the node +_i can be calculated by using Eqn. (1) and Eqn. (2). The overall output of the HRBF network tree can be computed from left to right by a depth-first method, recursively. Finding an optimal or near-optimal HRBF network structure is formulated as a product of evolution. In our previous studies, Genetic Programming (GP) and Probabilistic Incremental Program Evolution (PIPE) were explored for structure optimization of the FNT. In this paper, the ECGP is employed to find an optimal or near-optimal structure of HRBF networks.

3.2 Tree Structure Optimization by ECGP

Finding an optimal or near-optimal HRBF is formulated as a product of evolution. In our previous studies, Genetic Programming (GP) and Probabilistic Incremental Program Evolution (PIPE) were explored for structure optimization of the FNT [10][11]. In this paper, Extended Compact Genetic Programming (ECGP) [17] is employed to find an optimal or near-optimal HRBF structure.

ECGP is a direct extension of ECGA to the tree representation, which is based on the PIPE prototype tree. In ECGA, Marginal Product Models (MPMs) are used to model the interaction among genes, represented as random variables, given a population of Genetic Algorithm individuals. MPMs are represented as measures of marginal distributions on partitions of random variables. ECGP is based on the PIPE prototype tree, and thus each node in the prototype tree is a random variable. ECGP decomposes or partitions the prototype tree into sub-trees, and the MPM factorises the joint probability of all nodes of the prototype tree into a product of marginal distributions on a partition of its sub-trees. A greedy search heuristic is used to find an optimal MPM model under the framework of minimum encoding inference. ECGP can represent the probability distribution for more than one node at a time. Thus, it extends PIPE in that the interactions among multiple nodes are considered.

3.3 Parameter Optimization with the DE Algorithm

The DE algorithm was first introduced by Storn and Price in 1995 [8]. It resembles the structure of an evolutionary algorithm (EA), but differs from traditional EAs in its generation of new candidate solutions and by its use of a 'greedy' selection scheme. DE works as follows: first, all individuals are randomly initialized and evaluated using the fitness function provided. Afterwards, the following process is executed as long as the termination condition is not fulfilled: for each individual in the population, an offspring is created using the weighted difference of parent solutions. The offspring replaces the parent if it is fitter; otherwise, the parent survives and is passed on to the next iteration of the algorithm. In generation k, we denote the population members by x_1^k, x_2^k, ..., x_N^k. The DE algorithm is given as follows [9]:

S1 Set k = 0, and randomly generate N points x_1^0, x_2^0, ..., x_N^0 from the search space to form an initial population;
S2 For each point x_i^k (1 ≤ i ≤ N), execute the DE offspring generation scheme to generate an offspring x_i^{k+1};
S3 If the given stop criterion is not met, set k = k + 1 and go to step S2.

The DE offspring generation approach used is given as follows:

S1 Choose one point x_d randomly such that f(x_d) ≤ f(x_i^k), and another two points x_b, x_c randomly from the current population, together with a subset S = {j_1, ..., j_m} of the index set {1, ..., n}, with m < n and all j_i mutually different;
S2 Generate a trial point u = (u_1, u_2, ..., u_n) as follows:
DE Mutation. Generate a temporary point z as
z = (F + 0.5) x_d + (F − 0.5) x_i + F (x_b − x_c),    (4)
where F is a given control parameter;
DE Crossover. For j ∈ S, u_j is chosen to be z_j; otherwise u_j is chosen to be (x_i^k)_j;
S3 If f(u) ≤ f(x_i^k), set x_i^{k+1} = u; otherwise, set x_i^{k+1} = x_i^k.

3.4 Procedure of the General Learning Algorithm

The general learning procedure for constructing the HRBF network can be described as follows.

S1 Create an initial population randomly (HRBF network trees and their corresponding parameters);
S2 Structure optimization is achieved by using the ECGP algorithm;
S3 If a better structure is found, then go to step S4, otherwise go to step S2;
S4 Parameter optimization is achieved by the DE algorithm. In this stage, the architecture of the HRBF network model is fixed, and it is the best tree developed during the end of the run of the structure search;
S5 If the maximum number of local search steps is reached, or no better parameter vector is found for a significantly long time, then go to step S6; otherwise go to step S4;
S6 If a satisfactory solution is found, then the algorithm is stopped; otherwise go to step S2.

3.5 Variable Selection Using the HRBF Network Paradigm

It is often a difficult task to select important variables for a classification or regression problem, especially when the feature space is large. A conventional RBF neural network usually cannot do this. In the HRBF network framework, the nature of the model construction procedure allows the HRBF network to identify important input features while building a HRBF network model that is computationally efficient and effective. The mechanisms of input selection in the HRBF network construction procedure are as follows. (1) Initially, the input variables are selected to formulate the HRBF network model with the same probabilities; (2) the variables which contribute more to the objective function are enhanced and have a higher opportunity to survive in the next generation through the evolutionary procedure; (3) the evolutionary operators, i.e., crossover and mutation, provide an input selection method by which the HRBF network selects appropriate variables automatically.

Table 1. Comparative results of the FNT, NN, RBF [12] and the proposed HRBF network classification methods for the detection of breast cancer

Cancer type    FNT (%)   NN (%)   RBF-NN (%)   HRBF (%)
Benign         93.31     94.01    94.12        96.83
Malignant      93.45     95.42    93.21        96.83

Table 2. The important features selected by the HRBF network

x0, x1, x2, x3, x6, x7, x9, x18, x20, x25, x27, x29

Fig. 2. The optimized HRBF network for breast cancer detection

4 Simulations

As a preliminary study, we made use of the Wisconsin breast cancer data set from the UCI machine-learning database repository [18]. This data set has 30 attributes (30 real-valued input features) and 569 instances, of which 357 are of benign and 212 are of malignant type. The data set is randomly divided into a training data set and a test data set. The first 285 data are used for training and the remaining 284 data are used for testing the performance of the different models. All the models were trained and tested with the same set of data.

The instruction set used to create an optimal HRBF network classifier is S = F ∪ T = {+_2, ..., +_5} ∪ {x_0, x_1, ..., x_29}, where x_i (i = 0, 1, ..., 29) denotes the 30 input features. The optimal hierarchical HRBF network for the breast cancer detection problem is shown in Figure 2. The classification results for the testing data set are shown in Table 1. For comparison purposes, the detection performances of the FNT, NN and RBF-NN are also shown in Table 1 (for details, see [12]). The important features for constructing the HRBF network models are shown in Table 2.

Table 3. Comparison of false positive rate (fp) and true positive rate (tp) for FNT, NN, RBF-NN [12] and the hierarchical HRBF network

Cancer type    FNT fp/tp (%)    NN fp/tp (%)    RBF-NN fp/tp (%)    HRBF fp/tp (%)
Benign         3.88 / 91.71     4.85 / 93.37    6.6 / 97.14         2.91 / 96.69
Malignant      2.76 / 86.41     4.97 / 96.12    9.2 / 96.87         3.31 / 97.09

It should be noted that the obtained HRBF network classifier has a smaller size and reduced features, with high accuracy in breast cancer detection. A Receiver Operating Characteristics (ROC) analysis of the FNT, NN, RBF-NN and the HRBF network model is shown in Table 3.

5 Conclusion

In this paper, we presented an optimized HRBF network for the detection of breast cancer and compared the results with some advanced artificial intelligence techniques, i.e., FNT, NN and RBF-NN. As depicted in Table 1, the preliminary results are very encouraging. The best accuracy was offered by the HRBF network method, followed by the RBF neural network for detecting benign types and the PSO-trained neural network for detecting the malignant type of cancer. An important advantage of the HRBF network model is the ability to reduce the number of input variables, as presented in Table 2. The ROC analysis (Table 3) illustrates that the RBF neural network has the highest false positive rate and the HRBF network model has the lowest false positive rates for detecting benign and malignant cancer. The time required to construct these models is not very long, and we hope these tools will assist the physician's effort to improve the currently available automated ways to diagnose breast cancer.

Acknowledgment

This research was partially supported by the Natural Science Foundation of China under contract number 60573065, and the Provincial Science and Technology Development Program of Shandong under contract number SDSP2004-0720-03.

References

1. DeSilva, C.J.S. et al., Artificial Neural Networks and Breast Cancer Prognosis, The Australian Computer Journal, 26, pp. 78-81, 1994.
2. The Weekend Australian, Health Section, pp. 7, July 13-14, 2002.
3. David B. Fogel, Eugene C. Wasson, Edward M. Boughton and Vincent W. Porto, A step toward computer-assisted mammography using evolutionary programming and neural networks, Cancer Letters, Volume 119, Issue 1, pp. 93-97, 1997.
4. Charles E. Kahn, Jr, Linda M. Roberts, Katherine A. Shaffer and Peter Haddawy, Construction of a Bayesian network for mammographic diagnosis of breast cancer, Computers in Biology and Medicine, Volume 27, Issue 1, pp. 19-29, 1997.
5. Shinsuke Morio, Satoru Kawahara, Naoyuki Okamoto, Tadao Suzuki, Takashi Okamoto, Masatoshi Haradas and Akio Shimizu, An expert system for early detection of cancer of the breast, Computers in Biology and Medicine, Volume 19, Issue 5, pp. 295-305, 1989.
6. Barbara S. Hulka and Patricia G. Moorman, Breast Cancer: Hormones and Other Risk Factors, Maturitas, Volume 38, Issue 1, pp. 103-113, 2001.
7. Jain, R. and Abraham, A., A Comparative Study of Fuzzy Classifiers on Breast Cancer Data, Australasian Physical And Engineering Sciences in Medicine, Australia, Volume 27, No. 4, pp. 147-152, 2004.
8. R. Storn and K. Price, "Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces," Technical report, International Computer Science Institute, Berkeley, 1995.
9. K. Price, "Differential Evolution vs. the Functions of the 2nd ICEO," In Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC'97), Indianapolis, USA, pp. 153-157, 1997.
10. Chen, Y., Yang, B., Dong, J., Nonlinear systems modelling via optimal design of neural trees, International Journal of Neural Systems, 14, pp. 125-138, 2004.
11. Chen, Y., Yang, B., Dong, J., Abraham, A., Time-series forecasting using flexible neural tree model, Information Science, Vol. 174, Issues 3/4, pp. 219-235, 2005.
12. Chen, Y., Abraham, A., Yang, B., Hybrid Neurocomputing for Breast Cancer Detection, The Fourth International Workshop on Soft Computing as Transdisciplinary Science and Technology (WSTST'05), pp. 884-892, Springer, 2005.
13. Chen, Y., Yang, B. and Dong, J., Evolving Flexible Neural Networks using Ant Programming and PSO algorithm, International Symposium on Neural Networks (ISNN'04), LNCS 3173, pp. 211-216, 2004.
14. N.A. Mat Isa, Mashor, M.Y., and Othman, N.H., "Diagnosis of Cervical Cancer using Hierarchical Radial Basis Function (HiRBF) Network," In Sazali Yaacob, R. Nagarajan, Ali Chekima (Eds.), Proceedings of the International Conference on Artificial Intelligence in Engineering and Technology, pp. 458-463, 2002.
15. S. Ferrari, I. Frosio, V. Piuri, and N. Alberto Borghese, "Automatic Multiscale Meshing Through HRBF Networks," IEEE Trans. on Instrumentation and Measurement, vol. 54, no. 4, pp. 1463-1470, 2005.
16. Z. Ahmad, J. Zhang, "Bayesian selective combination of multiple neural networks for improving long-range predictions in nonlinear process modelling," Neural Comput & Applic., Vol. 14, pp. 78-87, 2005.
17. K. Sastry and D.E. Goldberg, "Probabilistic model building and competent genetic programming," In R.L. Riolo and B. Worzel, editors, Genetic Programming Theory and Practise, chapter 13, pp. 205-220, Kluwer, 2003.
18. Merz, J. and Murphy, P.M., UCI repository of machine learning databases, /-learn/MLRepository.html, 1996.