COCOMO Estimation Based on Neural Networks (IJISA-V4-N9-3)
Artificial Intelligence Core Algorithms: Mock Exam Questions with Reference Answers

1. A classification model based on a neural network is a: A. generative model B. discriminative model C. neither D. both. Answer: B
2. Optimizers are an important part of training neural networks. Which of the following is NOT a purpose of using an optimizer? A. speeding up convergence B. reducing the difficulty of setting parameters by hand C. avoiding overfitting D. escaping local extrema. Answer: C
3. In scikit-learn, the DBSCAN algorithm is very sensitive to the choice of the ( ) parameter. A. p B. eps C. n_jobs D. algorithm. Answer: B
4. L1 and L2 regularization are commonly used in traditional machine learning to reduce generalization error. Which of the following statements is correct? A. L1 regularization can perform feature selection B. both L1 and L2 regularization can perform feature selection C. L2 regularization can perform feature selection D. neither L1 nor L2 regularization can perform feature selection. Answer: A
5. ReLU is not differentiable at zero; how is this handled in backpropagation? A. set the derivative to 0 B. set it to infinity C. leave it undefined D. set it to an arbitrary value. Answer: A
6. In the code array = np.arange(10, 31, 5), what does the 5 represent? A. the number of elements B. the step size C. the first element D. the last element. Answer: B
7. The purpose of lossless compression in image processing is to ( ). A. filter out irrelevant signals in the image B. filter out high-frequency signals in the image C. filter out low-frequency signals in the image D. filter out redundant signals in the image. Answer: D
8. For DBSCAN with the parameter Eps fixed, a relatively large value of MinPts will cause: A. all clusters to be distinguished well B. only regions of high-density points to be grouped into clusters, with the rest treated as noise C. regions of low-density points to be grouped into clusters, with the rest treated as noise D. no effect. Answer: B
9. To cope with the large weight-storage requirements of convolutional network models, researchers designed an ultra-lightweight model at a modest cost in accuracy: A. KNN B. RNN C. BNN D. VGG. Answer: C
10. In a feedforward neural network, as error backpropagation (the BP algorithm) propagates the error from the output toward the input, which parameters does the algorithm adjust? A. the size of the input data B. whether connections exist between neurons C. the connection weights between neurons in adjacent layers D. the connection weights between neurons in the same layer. Answer: C
11. LSTM is a classic sequence-oriented model that can model natural-language sentences and other time-series signals; it is a kind of ( ). A. recurrent neural network B. convolutional neural network C. naive Bayes D. deep residual network. Answer: A
12. The core training signal of ( ) is the "distinguishability" of images.
The COCOMO Estimation Model

The COCOMO model is a structured cost-estimation model developed at TRW and proposed by Boehm. It is an accurate, easy-to-use cost-estimation method.
By level of detail, COCOMO comes in three forms: the basic COCOMO model, the intermediate COCOMO model, and the detailed COCOMO model.
The basic model is a static single-variable model: it computes software development effort with an empirical function whose single independent variable is the estimated number of source lines of code (LOC).
The intermediate COCOMO model builds on the basic model and then adjusts the effort estimate using cost drivers covering product, hardware, personnel, and project attributes.
The detailed COCOMO model includes all the features of the intermediate model and in addition accounts for the influence of each step of software engineering (such as analysis and design).
Taking the development environment into account, the model divides software development projects into three types:
1. Organic: relatively small, simple software projects. The developers understand the development goals well, have rich experience with related software systems, are familiar with the software's operating environment, face few hardware constraints, and the program is not very large (< 50,000 lines).
2. Embedded: the software must run under tight hardware, software, and operational constraints, and is usually tightly coupled with complex hardware devices. Requirements on interfaces, data structures, and algorithms are demanding. The software can be of any size. Examples include large and complex transaction-processing systems, large or very large operating systems, aerospace control systems, and large command-and-control systems.
3. Semidetached: between the two types above. Size and complexity are moderate or higher, up to a maximum of about 300,000 lines.
The COCOMO model defines the following variables:
L — number of delivered source instructions, excluding comments; 1 KDSI = 1000 DSI.
E — development effort in person-months (MM), where 1 MM = 19 person-days = 152 person-hours = 1/12 person-year.
D — development schedule, in months.
With these definitions, the three models are applied as follows.
Basic COCOMO model:
1. Since COCOMO is a cost-analysis method based on lines of code, first estimate the code size L of the software (in kLOC, i.e., thousands of lines of code).
2. Then apply the formulas E = a * L^b and D = c * E^d to obtain the estimated effort and development time.
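As a quick illustration of these formulas, here is a minimal Python sketch of the basic COCOMO calculation. It uses the (a, b, c, d) coefficients quoted in the parameter table later in this collection; the function name and the sample 32 kLOC input are our own, purely for illustration.

```python
# Minimal sketch of basic COCOMO: E = a * L^b, D = c * E^d.
BASIC_COCOMO = {
    # mode: (a, b, c, d) -- values as listed in the parameter table below
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in months, average staffing)."""
    a, b, c, d = BASIC_COCOMO[mode]
    effort = a * kloc ** b        # E = a * L^b
    schedule = c * effort ** d    # D = c * E^d
    return effort, schedule, effort / schedule

effort, schedule, staff = basic_cocomo(32, "organic")   # hypothetical 32 kLOC project
print(f"effort = {effort:.1f} PM, schedule = {schedule:.1f} months, staff = {staff:.1f}")
```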
MSCOCO Evaluation Metrics

Introduction. MSCOCO (Microsoft Common Objects in Context) is a general-purpose dataset for object detection, segmentation, and image captioning. In computer vision, evaluating model performance is essential, and a set of evaluation metrics is needed to measure how well a model performs on the MSCOCO dataset. This article introduces MSCOCO's evaluation metrics, how they are computed, and where they are applied.
Overview of the MSCOCO dataset. MSCOCO is a widely used computer-vision dataset containing images of many kinds, such as people, animals, and vehicles. For every image it annotates object locations, object categories, and object segmentations, and it also provides textual captions.
Overview of the evaluation metrics. Evaluation metrics quantify a model's performance on the detection, segmentation, and captioning tasks. MSCOCO's metrics mainly include Average Precision (AP) for object detection, Average Precision (AP) for segmentation, and BLEU, METEOR, CIDEr, and similar metrics for image captioning.
Object detection metrics. The main evaluation metric for object detection is Average Precision (AP). AP is based on the overlap between predicted bounding boxes and ground-truth boxes, and it captures both how accurately the model localizes objects and its recall. The computation is roughly as follows:
1. For each class, rank the predicted boxes by confidence and match each prediction to a ground-truth box according to their IoU (Intersection over Union).
2. For each IoU threshold (COCO commonly reports AP at 0.5, at 0.75, and averaged over thresholds from 0.50 to 0.95), compute Precision and Recall.
3. Compute each class's Average Precision (AP), i.e., the area under the Precision-Recall curve.
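To make the IoU matching step concrete, here is a small self-contained sketch of box IoU; the function name and the [x1, y1, x2, y2] box format are our own illustrative choices, not part of the official COCO evaluation API.

```python
def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    # Intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = area(A) + area(B) - intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 ≈ 0.143
```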
Segmentation metrics. The evaluation metric for segmentation is likewise Average Precision (AP), but it is computed slightly differently: the overlap is measured between the predicted segmentation mask and the ground-truth mask. The computation is roughly as follows:
1. For each class, rank the predictions and match predicted masks to ground-truth masks according to their IoU.
Age Prediction in Face Recognition Based on the Eigenface Algorithm (IJMECS-V5-N9-6)

explored. In contrast to other face variations, aging variations present several unique characteristics which make age estimation a challenging task. Since human faces provide a lot of information, many topics have drawn attention and thus have been studied intensively. The most prominent of these is face recognition. Other research topics include predicting feature faces, classifying gender and expressions from facial images, and so on. However, very few studies have been done on age classification or age estimation. In this research, we try to prove that a computer can estimate or classify human age according to features extracted from facial images using eigenfaces. Humans are not perfect at predicting the age of subjects from facial information. The accuracy of age prediction by humans depends on various factors, such as the ethnic origin of the person shown in an image, the overall conditions under which the face is observed, and the actual abilities of the observer to perceive and analyse facial information. The aim of this experiment was to get an indication of the accuracy of age estimation by humans, so that we can compare their performance with the performance achieved by machines.
II. LITERATURE REVIEW
Accurate age prediction is one of the most important issues in human communication. It is an essential part of human-computer interaction. With the advancement of technology, one thing that concerns the whole world, and developing countries in particular, is the tremendous increase in population. With such a rapid rate of increase, it is becoming difficult to recognize each and every individual, because records of every individual at different periods of life must be maintained in either digital or hard-copy form. Sometimes a database has the required information for a particular individual, but it is of no use because it is now obsolete. With age a person's facial features change, and it becomes difficult to identify a person given images taken at two different ages. Age prediction from human faces is a challenging problem with a host of applications in forensics, security, biometrics, electronic customer relationship management, entertainment and cosmetology. The main challenge is the huge heterogeneity in facial feature changes due to aging across different humans. Being able to determine the facial changes associated with age is a hard problem,
Research on the COCOMO II Software Estimation Model Based on a Bayesian Calibration Algorithm

Abstract: To improve the estimation accuracy of the COCOMO II software cost estimation model, a multiple-regression analysis method, the Bayesian calibration algorithm, is used to calibrate it. On a logically consistent basis, inferences are drawn from prior information and sample information, and the resulting posterior results improve the estimation accuracy. Experimental results show that the COCOMO II model calibrated with the Bayesian algorithm further improves the precision of the data.
The COCOMO II model consists of three sub-models: the Application Composition model, the Early Design model, and the Post-Architecture model. As its name suggests, the Post-Architecture model is applied after the software architecture has been soundly defined and established. It is based on source lines of code (SLOC) and/or function points together with 5 scale-exponent factors and 17 effort-multiplier factors; it takes the number of source lines of code (SLOC) and/or function points as the input for project size and uses adjusted
Chapter 5: The Basic COCOMO Model

Software engineering economics is concerned with the economic questions associated with software engineering; its core content is the estimation of software cost and schedule. This chapter discusses the basic problem of estimating software cost with the Constructive Cost Model (the COCOMO model) and introduces the basic COCOMO model. The contents include:
The basic COCOMO model
Definitions and assumptions
The previous chapter introduced the definitions of the software life-cycle phases and activities used in the COCOMO model. Below are some additional definitions and assumptions that form the basis for applying the COCOMO model.
The basic cost driver is the number of delivered source instructions (DSI) produced during project development, defined as follows:
Delivered — this term normally means that non-delivered support software, such as test drivers, is excluded. However, if developing such software requires the same effort as the delivered software, with its own reviews, test plans, documentation, and so on, then it should also be counted.
Source instruction — this term covers all program instructions written by project members that are converted into machine code by preprocessors, compilers, and assemblers. It excludes comments and unmodified utility software. It includes job control language, format statements, and data declarations. Counting is done in lines of code.
The COCOMO Model

The COCOMO model — a common method of software size and cost estimation. Estimation models for computer software are derived from actual data of previously completed projects and are used in the planning phase of a software project. Because a model is derived from "past" and "partial" data, no estimation model can fully fit all current software projects and every development environment; the results such models produce should be treated as reference values only.
In 1981 Boehm proposed the Constructive Cost Model, abbreviated CoCoMo. It is built on a static, single-variable model. The CoCoMo model has three levels — basic, intermediate, and detailed — used at three different stages of software development. The basic CoCoMo model is used early in system development to estimate the effort of the whole system (including software maintenance) and the time needed for development. The intermediate CoCoMo model is used to estimate the effort and development time of individual subsystems. The detailed CoCoMo model is used to estimate independent software components, such as the modules within a subsystem.
Basic CoCoMo model:
E = a * L^b
D = c * E^d
where E is the effort in person-months (PM), D is the development time in months (M), and L is the estimated number of lines of code for the project, in thousands of lines. The constants a, b, c, d take the values in the table below. Boehm divided software into organic, semidetached, and embedded types, so that software in different application domains and of different complexity can use the parameters a, b, c, d appropriate to its type.
Software type   a    b     c    d     Typical scope
Organic         2.4  1.05  2.5  0.38  ordinary application programs
Semidetached    3.0  1.12  2.5  0.35  utility programs, compilers, etc.
Embedded        3.6  1.20  2.5  0.32  real-time processing, control programs, operating systems
Intermediate CoCoMo model: based on the basic model, the effort equation is multiplied by an effort adjustment factor (EAF):
E = a * L^b * EAF
where L is the number of lines of object code of the software product, and a, b are constants with the values below.
Intermediate CoCoMo parameters:
Software type   a    b
Organic         3.2  1.05
Semidetached    3.0  1.12
Embedded        2.8  1.20
The effort adjustment factor (EAF) depends on product attributes, computer attributes, personnel attributes, and project attributes. Product attributes include: 1. required software reliability; 2. software complexity; 3. database size.
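Continuing the sketch, here is a minimal illustration of the intermediate CoCoMo calculation with an EAF. The cost-driver dictionary and its multiplier values below are placeholders chosen for the example, not Boehm's published rating table.

```python
# Sketch: intermediate CoCoMo effort, E = a * L^b * EAF.
INTERMEDIATE_COCOMO = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def intermediate_effort(kloc, mode, effort_multipliers):
    """EAF is the product of the individual cost-driver multipliers."""
    a, b = INTERMEDIATE_COCOMO[mode]
    eaf = 1.0
    for multiplier in effort_multipliers.values():
        eaf *= multiplier
    return a * kloc ** b * eaf

# Illustrative ratings only (look up real multipliers in Boehm's tables):
drivers = {"RELY": 1.15, "CPLX": 1.30, "ACAP": 0.86, "TOOL": 0.91}
print(f"{intermediate_effort(50, 'semidetached', drivers):.1f} person-months")
```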
Estimation Equations of the Intermediate COCOMO Model

K: human effort (in person-years)
Ck: a constant related to the state of technology
For a poor development environment, Ck = 2500; for a good development environment, Ck = 10000; for an excellent development environment, Ck = 12500.
5. Cost estimation based on lines of code
Expected size: Le = (a + 4m + b) / 6
Estimated deviation: Ld = sqrt( Σ_{i=1..n} ((b_i − a_i) / 6)^2 )
where a is the estimated number of source lines in the best (optimistic) case, m the estimate in the normal (most likely) case, and b the estimate in the worst (pessimistic) case.
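A small worked example of the three-point size estimate; the component numbers are invented for illustration, and the deviation line follows our reconstruction of the garbled formula above.

```python
import math

def expected_loc(components):
    """components: list of (a, m, b) = (optimistic, most likely, pessimistic) LOC estimates."""
    le = sum((a + 4 * m + b) / 6 for a, m, b in components)             # expected size
    ld = math.sqrt(sum(((b - a) / 6) ** 2 for a, m, b in components))   # combined deviation
    return le, ld

le, ld = expected_loc([(2000, 3000, 5000), (1000, 1500, 2500)])  # made-up estimates
print(f"expected size = {le:.0f} LOC, deviation = {ld:.0f} LOC")
```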
RMMM
Table: sample risk projection table
Risk                                           Category        Probability   Possible impact
Loss of funding                                Budget risk     40%           1
Changing requirements                          Product size    80%           2
Technology does not deliver expected results   Technical risk  30%           1
Lack of training on tools                      Staffing risk   80%           3
Staff inexperience                             Staffing risk   30%           2
Frequent staff turnover                        Staffing risk   60%           2
3. The COCOMO model
(1) Basic COCOMO model
Estimation equations: ED = r * S^c,  TD = a * (ED)^b
ED: total development effort; TD: development time; S: number of source instructions; r, c, a, b: empirical constants that depend on the overall type of the project.
Overall project types:
Organic: a small product developed in a familiar, in-house development environment.
Embedded: the computing environment is often tightly constrained, for example in time and space, so for the same software size development is harder, the estimated effort is much larger, and productivity is much lower.
Semidetached: between the organic and embedded types.
3. The COCOMO model
(2) Intermediate COCOMO model
Estimation equations: ED = r * S^c,  TD = a * (ED)^b
ED: total development effort; TD: development time; S: number of source instructions; r, c, a, b: empirical constants that depend on the overall type of the project.
I.J. Intelligent Systems and Applications, 2012, 9, 22-28Published Online August 2012 in MECS (/)DOI: 10.5815/ijisa.2012.09.03COCOMO Estimates Using Neural NetworksAnupama Kaushik,Assistant Professor, Dept. of IT, Maharaja Surajmal Institute of Technology, GGSIP University, Delhi, Indiaanupama@msit.inAshish Chauhan, Deepak Mittal, Sachin GuptaDept. of IT, Maharaja Surajmal Institute of Technology, GGSIP University, Delhi, IndiaAshish.chauhan004@; deepakm905@; sachin.gupta_15@Abstract—Software cost estimation is an important phase in software development.It predicts the amount of effort and development time required to build a software system. It is one of the most critical tasks and an accurate estimate provides a strong base to the development procedure. In this paper, the most widely used software cost estimation model, the Constructive Cost Model (COCOMO) is discussed. The model is implemented with the help of artificial neural networks and trained using the perceptron learning algorithm. The COCOMO dataset is used to train and to test the network. The test results from the trained neural network are compared with that of the COCOMO model. The aim of our research is to enhance the estimation accuracy of the COCOMO model by introducing the artificial neural networks to it.Index Terms—Artificial Neural Network, Constructive Cost Model, Perceptron Network, Software Cost EstimationI.IntroductionSoftware cost estimation is one of the most significant activities in software project management. Accurate cost estimation is important because it can help to classify and prioritize development projects to determine what resources to commit to the project and how well these resources will be used. The accuracy of the management decisions will depend on the accuracy of the software development parameters. These parameters include effort estimation, development time estimation, cost estimation, team size estimation, risk analysis, etc. These estimates are calculated in the early development phases of the project. So, we need a good model to calculate these parameters. An early and accurate estimation model reduces the possibilities of conflicts between members in the later stages of project development.In the last few decades many software cost estimation models have been developed. The algorithmic models also known as conventional models use a mathematical formula to predict project cost based on the estimates of project size, the number of software engineers, and other process and product factors [1]. These models can be built by analysing the costs and attributes of completed projects and finding the closest fit formula to actual experience. COCOMO (Constructive Cost Model), is the best known algorithmic cost model published by Barry Boehm in 1981 [2]. It was developed from the analysis of sixty three software projects. These conventional approaches lacks in terms of effectiveness and robustness in their results. These models require inputs which are difficult to obtain during the early stages of a software development project. They have difficulty in modelling the inherent complex relationships between the contributing factors and are unable to handle categorical data as well as lack of reasoning capabilities [3]. 
The limitations of algorithmic models led to the exploration of the non-algorithmic models which are soft computing based.Non algorithmic models for cost estimation encompass methodologies on fuzzy logic (FL), artificial neural networks (ANN) and evolutionary computation (EC).These methodologies handle real life situations by providing flexible information processing capabilities. This paper proposed a neural network technique using perceptron learning algorithm for software cost estimation which is based on COCOMO model. Neural networks have been found as one of the best techniques for software cost estimation. Now-a-days many researchers and scientists are constantly working on developing new software cost estimation techniques using neural networks [4, 5, 6, 7].The rest of the paper is organized as follows: section 2 and 3 describes the COCOMO model and artificial neural network concepts respectively.Section3 and 4 discusses the related work and proposed neural model. Section 4 and 5 presents the proposed model and the training algorithm implemented. Section 6 discusses the experimental results and evaluation criteria. Finally Section 7 concludes the paper.II.COCOMO ModelThe COCOMO model [2] is the most widely used algorithmic cost estimation technique due to itssimplicity. This model provides us with the effort in person months, the development time in months and the team size in persons. It makes use of mathematical equations to calculate these parameters. The COCOMO model is a hierarchy of software cost estimation models and they are:1. Basic Model- It estimates effort for the small to medium sized software projects in a quick and rough fashion and takes the formE = a (SIZE) b (1) where E is effort applied in Person-Months and SIZE is measured in thousand delivered source instructions. The coefficients a and b are dependent upon the three modes of development of projects. Boehm proposed three modes of projects:(a)Organic mode- It is for small sized projects upto2-50 KLOC (thousand lines of code) withexperienced developers in a familiar environment.(b)Semi detached mode- It is for medium sizedprojects upto 50-300 KLOC with averageprevious experience on similar projects.(c)Embedded mode- It is for large and complexprojects typically over 300 KLOC with developershaving very little previous experience.2. Intermediate Model- The Basic COCOMO does not take account of the software development environment. Boehm introduced a set of 15 cost drivers in the Intermediate COCOMO that adds accuracy to the Basic COCOMO. The cost drivers are grouped into four categories:1.Product attributes(a)Required software reliability (RELY)(b)Database size (DATA)(c)Product complexity (CPLX)puter attributes(a)Execution time constraint (TIME)(b)Main storage constraint (STOR)(c)Virtual machine volatility (VIRT)(d)Computer turnaround time (TURN)3.Personnel attributes(a)Analyst capability (ACAP)(b)Application experience (AEXP)(c)Programmer capability (PCAP)(d)Virtual machine experience (VEXP)(e)Programming language experience (LEXP)4.Project attributes(a)Modern programming practices (MODP)(b)Use of software tools (TOOLS)(c)Required development schedule (SCED)Table I Coefficients for Intermediate COCOMOThe Cost drivers have up to six levels of rating: Very Low, Low, Nominal, High, Very High, and Extra High. Each rating has a corresponding real number known as effort multiplier, based upon the factor and the degree to which the factor can influence productivity. 
The estimated effort in person-months (PM) for the intermediate COCOMO is given as:Effort = a×[SIZE]b ×i=1Π15 EM i (2) In equation (2) the coefficient “a”is known as productivity coefficient and the coefficient “b” is t he scale factor. They are based on the different modes of project as given in Table I. The contribution of effort multipliers corresponding to the respective cost drivers is introduced in the effort estimation formula by multiplying them together. The numerical value of the ith cost driver is EM i (Effort Multiplier).3. Detailed Model- Boehm introduced two more capabilities in this model and they are, Phase sensitive effort multipliers which help in determining the manpower allocation for each phase of the project and three level product hierarchy. These are module, subsystem and system levels. The ratings of the cost drivers are done at appropriate level.This research used intermediate COCOMO model because it has estimation accuracy that is greater than the basic version, and at the same time comparable to the detailed version.III.Artificial Neural NetworksArtificial neural networks (ANN) [8] are the interconnection of the artificial neurons. They are used to solve the artificial intelligence problems without the need for creating a real biological model. These networks focus on hypothetical matters from an information processing point of view. ANN’s possess large number of highly interconnected processing elements called neurons. Each neuron is connected with the other by a connection link. Each connection link is associated with weights which contain information about the input signal. This information is used by the neuron net to solve a particular problem. Each neuron has an internal state of its own. This internal state is called the activation level of neuron, which is the function of the inputs the neuron receives. There are a number of activation functions that can be applied over net input such as Gaussian, Linear, Sigmoid and Tanh. Figure 1 shows the structure of a basic neural network.Fig.1 Basic Neural NetworkA basic neural network consists of a number of inputs applied by some weights, combined together to give an output. The feedback from the output is again put into the inputs to adjust the applied weights and to train the network. This structure of the neural networks help to solve the practical, non linear, decision making problems easily.The neural network used in our approach is perceptron neural network [9]. The perceptron is a network that learns concepts, i.e. it can learn to respond with True (1) or False (0) for inputs presented to it, by repeatedly studying examples provided to it. This network weights and biases could be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is called the perceptron learning rule. Perceptron Neural Network is selected due to its ability to generalize from its training vectors and work with randomly distributed connections.Vectors from a training set are presented to the network one after another. If the network's output is correct, no change is made. Otherwise, the weights and biases are updated using the perceptron learning rule. An entire pass through all of the input training vectors is called an epoch. When such an entire pass of the training set has occurred without error, training is complete. At this time any input training vector may be presented to the network and it will respond with the correct output vector. 
If a vector P not in the training set is presented to the network, the network will tend to exhibit generalization by responding with an output similar to target vectors for input vectors close to the previously unseen input vector P.The activation function is one of the key components of the perceptron as in the most common neural network architectures. It determines, based on the inputs, whether the perceptron activates or not. Basically, the perceptron takes all of the weighted input values and adds them together. If the sum is above or equal to some value (called the threshold) then the perceptron fires. Otherwise, the Perceptron does not. The output of the perceptron network is given byy = f(y in) (3) where f(y in) is activation function and is defined asf(y in ) = (4)IV.Relevant WorkArtificial neural networks are good at modeling complex non linear relationships. Since last many years, there have been many researchers who have worked upon the cost estimation of the projects using the artificial neural networks. Many researchers have applied the neural networks approach to estimate software development effort [10, 11, 12, 13, 14, 15 and 16]. A recent study by Jorgensen [17] provides a detailed review of different studies on the software development effort. Prasad Reddy P.V.G.D, Sudha K.R, Rama Sree P and Ramesh S.N.S.V.S.[18] explain the radial study of the Neural Network. Another study by Samson et al. [12] uses an albus multilayer perceptron in order to predict software effort. They use Boehm’s COCOMO dataset. Srinivasan and Fisher [15] report the use of a neural network with a back propagation learning algorithm. They found that the neural network outperformed other techniques. K. Vinay Kumar, V. Ravi, Mahil Carr, and N. Raj Kiran [19] use the wavelet neural network for predicting software development cost. N. Tadayon [20] also reports the use of neural network with a back propagation learning algorithm. However it was not clear how the dataset was divided for training and validation purposes. B. Tirimula Rao et al. [21] provided a novel neural network approach for software cost estimation using functional link artificial neural network. COCOMO is arguably the most popular and widely used software estimation model, which integrates valuable expertknowledge [2].Fig. 2 Architecture of the proposed Neural NetworkV.Proposed Neural NetworkFigure 2 shows the basic structure of the proposednetwork. The performance of a neural network depends on its architecture and their parameter settings. There are many parameters governing the architecture of the neural network including the number of layers, the number of nodes in each layer, the transfer function in each node, learning algorithm parameters and the weights which determine the connectivity between nodes. Inappropriate selection of network patterns and learning rules may cause serious difficulties in network performance and training. The problem is to decide the number of layers and number of nodes in the layers and the learning algorithm as well. However, the criterion is to select the minimum nodes which would not impair the network performance. The number of layers and nodes should be minimized to amplify the performance. In our network, there are 17 inputs to the network which are size of the project in KLOC, 15 effort multipliers, actual effort of the project and one bias value. These inputs enter the network as weighted inputs. 
The effort is calculated using equation (5).The weights are initialized as W i= 1 for i = 1 to 17, learning rate, α = 0.001 and bias b = 1. The inputs, as received, are multiplied to the weights and provided to the network. As the Propagation network uses summation of the inputs but the COCOMO model uses its multiplication, a log function is used to neutralize them. So, the equation (2) is modified as:log (Effort) = log (a×[SIZE]b ×i=1Π15 EM i) (5) The output obtained by the above equation, is compared using the activation function and the output signal is sent forward. According to the output of the activation function, the weights applied on the inputs are modified. When the output of activation function is 1, the difference between actual effort and effort calculated is found to check if it is in permissible limit or not. If it is in the permissible limit, the output is accepted else the weights are adjusted. This completes with one epoch of the project.The algorithm for training the above network and for calculating new set of weights is depicted in the following steps:Step 1: Initialize the weights, bias and learning rate α. Step 2: Perform steps 3-8 until stopping condition is false.Step 3: Perform steps 4-7 for each training pair.Step 4: The input layer receives input signal and sends it to the hidden layer by applying identityactivation functions on all the input units fromi=1 to 17.Step 5: Each hidden unit j= 1 to 5 sums its weighted input signals to calculate net input given by:The activation functions as given by equation (4) are applied over the above net input to calculate the output response: = )Step 6: Calculate the output i.e. effort at the output layer using the same procedure as in step 5 andconsidering all the weights for j=1 to 5 as 1. Step 7: Compare the actual effort with the computed effort, if the difference is within thepermissible limit the output is accepted elsethe weights are updated as follow:w i(new) = w i(old) + α × input(i)Step 8: Check for the stopping condition i.e. if there is no change in weights then stop the training process, else start again from Step 3.VI.Evaluation Criteria and ResultsThe experiments are done with the proposed neural network model by taking some of the original projects from COCOMO dataset. COCOMO dataset is publicly available which consists of 63 projects [22]. We have divided the entire dataset into two sets, training set and validation set to get more accuracy of prediction. The model is implemented in Matlab.The evaluation consists in comparing the accuracy of the estimated effort with the actual effort. There are many evaluation criteria for software effort estimation among them we applied the most frequent one which is Magnitude of Relative Error (MRE) which is defined as in equation (6).MRE= (6) Table 1 shows some of the experimental values which were tested. These values are then compared with the actual effort of the model. The comparison tells us about the efficiency of our network. Each row of the table corresponds to a project data which specifies the size of the project, the actual effort of the project, the effort multiplier values and finally the effort calculated by our project. The input values are entered in the project through a GUI (Graphical User Interface). The model is implemented in Matlab. Table 2 shows the actual effort, the estimated effort and the MRE value for the experimented projects. 
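To make the training procedure of Section V concrete, the sketch below re-creates the idea in Python rather than Matlab: the size and the 15 effort multipliers are log-transformed so that the multiplicative COCOMO equation becomes a weighted sum, a perceptron-style pass produces an estimate, and the weights are nudged with the paper's rule w_i(new) = w_i(old) + α × input(i) whenever the error is outside a permissible limit. The MRE of equation (6) is taken as |actual − estimated| / actual. This is our illustrative reconstruction, not the authors' implementation; the tolerance, epoch limit, and toy data are placeholders.

```python
import math

ALPHA = 0.001        # learning rate, as stated in the paper
TOLERANCE = 0.10     # permissible relative error (placeholder value)

def features(kloc, effort_multipliers):
    """Log-transform size and the 15 effort multipliers so their product becomes a sum; append a bias."""
    return [math.log(kloc)] + [math.log(em) for em in effort_multipliers] + [1.0]

def train(projects, epochs=100):
    """projects: list of (kloc, [15 effort multipliers], actual effort in person-months)."""
    weights = [1.0] * 17                      # size + 15 multipliers + bias, initialised to 1
    for _ in range(epochs):
        updated = False
        for kloc, ems, actual in projects:
            x = features(kloc, ems)
            estimate = math.exp(sum(w * xi for w, xi in zip(weights, x)))   # undo the log
            mre = abs(actual - estimate) / actual                           # equation (6)
            if mre > TOLERANCE:                                             # outside permissible limit
                weights = [w + ALPHA * xi for w, xi in zip(weights, x)]     # perceptron-style update
                updated = True
        if not updated:                                                     # stopping condition
            break
    return weights

# Toy data for illustration only (not the COCOMO'81 dataset):
data = [(32.0, [1.0] * 15, 120.0), (10.0, [1.0] * 15, 35.0)]
print([round(w, 3) for w in train(data)[:3]])
```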
Figure 3 is the graphical representation of the actual and the calculated effort of 15 projects of COCOMO dataset [22]. Through this graph, it can be observed that the difference between the actual and the calculated effort is quite less which shows that the proposed algorithm is an accurate and precise algorithm.Table 1 Experimental StudiesTable 2 Comparisons of ResultsFig. 3: Actual and Calculated EffortVII. ConclusionA reliable and accurate estimate of software development effort has always been a challenge for both the industrial and academic communities. There are several software effort forecasting models that can be used in forecasting future software development effort. We have constructed a cost estimation model based on artificial neural networks. Our idea consists in the use of a model that maps COCOMO model to a neural network with minimal number of layers and nodes to increase the performance of the network. The neural network that we have used to predict the software development effort is the Perceptron network. We have use d the COCOM0’81 dataset to train and to test the network. It is observed that the obtained accuracy of the network is acceptable.Thus, it is concluded that the use of the artificial neural network algorithm to model the COCOMO estimation algorithm is an efficient way to find the values of the project estimates. It provides us with nearly accurate values.AcknowledgementThe authors would like to thank the anonymous reviewers for their careful reading of this paper and for their helpful comments.References[1] Ch. Satyananda Reddy, KVSN Raju,“AnImproved Fuz zy Approach for COCOMO’s EffortEstimation using Gaussian Membership Function,” Journal of Software ”, Volume 4, No. 5, July 2009. [2] Boehm, B.W., “Software EngineeringEconomics,” Prentice -Hall, Englewood Cliffs, NJ, USA, 1994.[3] M.O. Saliu, M.Ahmed, “Soft Computing basedEffort Prediction Systems –A Survey, in : E.Damiani, L.C. Jain (Eds),” Computational Intelligence in Software Engineering, Springer-Verlag, July 2004, ISBN 3-540-22030-5.[4] Dawson, C.W., “A neur al network approach tosoftware projects effort estimation,” Transaction Information and Communication Technologies, Vol.16, pages 9, 1996.[5] Idri, A. Khoshgoftaar, T.M. Abran, A., “Canneural networks be easily interpreted in software cost estimation?,” Pro ceedings of the IEEE Internation Conference on Fuzzy Systems, FUZZ-IEEE’02, Vol.:2, 1162-1167, 2002.[6] Finnie, G.R. and Wittig, G.E., “AI tools forsoftware development effort estimation,” In proceedings of the IEEE International Conference on Software Engineering: Education and Practice, Washington DC, pp 346-353, 1996.[7] B. Tirimula Rao, B. Sameet, G. Kiran Swathi, K.Vikram Gupta, Ch. Ravi Teja, S. Sumana, “A novel neural network approach for software cost estimation using Functional Link Artificial Neural Network(FLANN)”, International Journal of Computer Science and Network Society, Vol.9 No.6, June 2009.[8] Stephen Marsland , Jonathan Shapiro, and UlrichNehmzow. “A self-organising network that grows when required ”, Journal Neural Networks, Vol. 15 Issue (8-9):1041- 1058, 2002.[9] S.N. Sivanandam, S.N. Deepa, Principles of SoftComputing, Wiley India (2007).[10] Ch. Satyananda Reddy and KVSVN Raju, “ AnOptimal Neural Network Model for Software Effort Estimation”, Int.J. 
of Software Engineering, IJSE Vol.3 No.1 January 2010[11] Jorgerson, M., “Experience with accuracy ofsoftware maintenance task effort prediction models,” IEEE Transactions on Software Engineering, Volume 21 (8), 674–681, 1995.[12]Samson, B., Ellison, D., Dugard, P., “Softwarecost estimation using an Albus perceptron (CMAC),” Journal of Information and Software Technology, Volume 39 (1), 55–60, 1997.[13]Schofield, C., “Non-algorithmic effort estimationtechniques,” Technical Report TR98-01, 1998. [14]Seluca, C., “An investigation into software eff ortestimation using a back propagation neural network,” M.Sc.Thesis, Bournemouth University, UK, 1995.[15]Srinivasan, K., Fisher, D., “Machine learningapproaches to estimating software development effort,” IEEE Transactions on Software Engineering, Volume 21 (2), 126–137, 1995. [16]Wittig, G., Finnie, G., “Estimating softwaredevelopment effort with connectionist models,”Journal of Information and Software Technology, Volume 39 (7), 469–476, 1997.[17]Hughes, R.T., “An evaluation of machine learningtechniques for software effort estimation,”University of Brighton, 1996.[18]Prasad Reddy P.V.G.D, Sudha K.R, Rama Sree Pand Ramesh S.N.S.V.S., “Software Effort Estimation using Radial Basis and Generalized Regression Neural Networks”, Journal of Computing, Volume 2, Issue 5, May 2010, ISSN 2151-9617[19]K. Vinay Kumar, V. Ravi, Mahil Carr, N. RajKiran, “Software development cost estimation using wavelet neural networks”, The journal of Systems and Software 81(2008) 1853-1867. [20]N. Tadayon, “Neural Network Approach forSoftware Cost Estimation”, proceedings of the International Conference on Information Technology: Coding and Computing(ITCC’05), Vol. 2, pp. 815-818, 2005.[21]B. Tirimula Rao, B. Sameet, G. Kiran Swathi, K.Vikram Gupta, Ch. Ravi Teja, S. Sumana, “A novel neural network approach for software cost estimation using Functional Link Artificial Neural Network (FLANN)”, International Journal of Computer Science and Network Society, Vol. 9 No.6, June 2009.[22]Anupama Kaushik received her B.E (Computer Science) from Bharathiyar University and M.Tech (Information Technology) from Tezpur University. She joined Department of Information Technology of Maharaja Surajmal Institute of Technology as an Assistant Professor in 2004. Her research area includes Software Engineering, Object Oriented Software Engineering and Soft Computing.Ashish Chauhan is a student pursuing his B.Tech from Department of Information Technology of Maharaja Surajmal Institute of Technology. This work was a part of their project on Software Cost Estimation. His research area includes Software Engineering and Artificial Neural Networks.Deepak Mittal is a student pursuing his B.Tech from Department of Information Technology of Maharaja Surajmal Institute of Technology. This work was a part of their project on Software Cost Estimation. His research area includes Software Engineering and Artificial Neural Networks.Sachin Gupta is a student pursuing his B.Tech from Department of Information Technology of Maharaja Surajmal Institute of Technology. This work was a part of their project on Software Cost Estimation. His research area includes Software Engineering and Artificial Neural Networks.。