Calibrating Software Cost Models to Department of Defense Databases: A Review of Ten Studies
MATLAB Toolbox Installation Tutorial

1.1 If the toolbox is on the MATLAB installation disc, rerun the installer and select the toolbox. 1.2 If the toolbox was downloaded separately, in most cases you only need to unpack it into a directory of your choice.
2. Add that directory to the search path via File -> Set Path.
3. After adding the path, go to File -> Preferences -> General -> Toolbox Path Caching and click "Update Toolbox Path Cache".
4. Run `which newtoolbox_command.m` to check whether the toolbox is accessible.
If the newly added path is displayed, the toolbox is ready to use.
Alternatively: copy your toolbox folder into the "toolbox" folder of the MATLAB installation directory, open the Set Path dialog from the File menu, click "Add Folder", select your folder, and finally click "Save".
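The same steps can also be scripted from the MATLAB command line. A minimal sketch, assuming the toolbox was unpacked to the hypothetical folder C:\toolboxes\newtoolbox (substitute your own path and command name):

```matlab
% Add the toolbox folder (and all of its subfolders) to the search path.
addpath(genpath('C:\toolboxes\newtoolbox'));   % hypothetical install location

% Make the path change persist across MATLAB sessions.
savepath;

% Refresh the toolbox path cache; this is the command-line equivalent of
% the "Update Toolbox Path Cache" button under Preferences.
rehash toolboxcache

% Verify that the new toolbox is visible on the path.
which newtoolbox_command   % should print the file's full path
```

If `which` prints the path you just added, the installation succeeded.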
MATLAB Toolboxes

- Binaural-modeling software for MATLAB/Windows - /zsmcode.html, /home/Michael_Akeroyd/download2.html
- Statistical Parametric Mapping (SPM) - /spm/ext/
- BOOTSTRAP MATLAB TOOLBOX - .au/downloads/bootstrap_toolbox.html
- The DSS package for MATLAB - algorithms for performing linear, deflation and symmetric DSS - http://www.cis.hut.fi/projects/dss/package/
- Psychtoolbox - /download.html
- Multisurface Method Tree with MATLAB - /~olvi/uwmp/msmt.html
- A Matlab toolbox for every single topic - /~baum/toolboxes.html (e.g., BrainStorm, for MEG and EEG data visualization and processing)
- CLAWPACK - software package for computing numerical solutions to hyperbolic partial differential equations using a wave propagation approach - /~claw/
- DIPimage - image processing toolbox
- PRTools - pattern recognition toolbox (+ neural networks)
- NetLab - neural network toolbox
- FSTB - fuzzy systems toolbox
- Fusetool - image fusion toolbox - http://www.metapix.de/toolbox.htm
- WAVEKIT - wavelet toolbox
- Gat - genetic algorithm toolbox
- TSTOOL - MATLAB package for nonlinear time series analysis: time-delay reconstruction, Lyapunov exponents, fractal dimensions, mutual information, surrogate data tests, nearest neighbor statistics, return times, Poincare sections, nonlinear prediction - http://www.physik3.gwdg.de/tstool/
- Data description toolbox (dd_tools) - data description, outlier and novelty detection; D.M.J. Tax, March 26, 2004 - http://www-ict.ewi.tudelft.nl/~davidt/dd_tools/dd_manual.html
- MBE - http://www.pmarneffei.hku.hk/mbetoolbox/
- Metabolic network toolbox for Matlab - http://www.molgen.mpg.de/~lieberme/pages/network_matlab.html
- Pharmacokinetics toolbox for Matlab - http://page.inf.fu-berlin.de/~lieber/seiten/pbpk_toolbox.html
- The Spider - a complete object-oriented environment for machine learning in Matlab; base learning algorithms can be plugged together and compared with, e.g., model selection, statistical tests and visual plots, combining the power of objects (reusability, composition, shared code) with the power of Matlab for machine learning research - http://www.kyb.tuebingen.mpg.de/bs/people/spider/index.html
- Schwarz-Christoffel Toolbox - /matlabcentral/fileexchange/loadFile.do?objectId=1316&objectType=file
- XML Toolbox - /matlabcentral/fileexchange/loadFile.do?objectId=4278&objectType=file
- FIR/TDNN Toolbox for MATLAB - beta toolbox for FIR (Finite Impulse Response) and TD (Time Delay) neural networks - /interval-comp/dagstuhl.03/oish.pdf
- Miscellaneous - http://www.dcsc.tudelft.nl/Research/Software/index.html

Astronomy
- Saturn and Titan trajectories, MATLAB astronomy codes - /~abrecht/Matlab-codes/

Audio
- MA Toolbox for Matlab - implements similarity measures for audio - http://www.oefai.at/~elias/ma/index.html
- MAD - Matlab Auditory Demonstrations - /~martin/MAD/docs/mad.htm
- Music Analysis toolbox for Matlab - feature extraction from raw audio signals for content-based music retrieval - http://www.ai.univie.ac.at/~elias/ma/
- WarpTB - Matlab toolbox for warped DSP, by Aki Harma and Matti Karjalainen - http://www.acoustics.hut.fi/software/warp/
- MATLAB-related software - http://www.dpmi.tu-graz.ac.at/~schloegl/matlab/
- Biomedical signal data formats (EEG machine-specific file formats with Matlab import routines) - http://www.dpmi.tu-graz.ac.at/~schloegl/matlab/eeg/
- MPEG encoding library for MATLAB movies, created by David Foti - enables MATLAB users to read (MPGREAD) or write (MPGWRITE) MPEG movies; should help the Video Quality project
- Filter Design package - http://www.ee.ryerson.ca:8080/~mzeytin/dfp/index.html
- Octave, by Christophe Couvreur - generates normalized A-weighting, C-weighting, octave and one-third-octave digital filters - /matlabcentral/fileexchange/loadFile.do?objectType=file&objectId=69
- Source Coding MATLAB Toolbox - /users/kieffer/programs.html

Bio Medical Informatics
- CGH-Plotter - MATLAB toolbox for CGH-data analysis - code: http://sigwww.cs.tut.fi/TICSP/CGH-Plotter/; poster: http://sigwww.cs.tut.fi/TICSP/CSB2003/Posteri_CGH_Plotter.pdf
- The Brain Imaging Software Toolbox - http://www.bic.mni.mcgill.ca/software/
- MRI Brain Segmentation - /matlabcentral/fileexchange/loadFile.do?objectId=4879

Chemometrics (providing PCA)
- Matlab Molecular Biology & Evolution Toolbox - enables evolutionary biologists to analyze and view DNA and protein sequences; James J. Cai - http://www.pmarneffei.hku.hk/mbetoolbox/
- Toolbox provided by Prof. Massart's research group - http://minf.vub.ac.be/~fabi/publiek/
- Collection of routines from Prof. Age Smilde's research group - http://www-its.chem.uva.nl/research/pac
- Multivariate Toolbox, written by Rune Mathisen - /~mvartools/index.html
- Matlab code and datasets - http://www.acc.umu.se/~tnkjtg/chemometrics/dataset.html

Chaos
- Chaotic Systems Toolbox - /matlabcentral/fileexchange/loadFile.do?objectId=1597&objectType=file
- HOSA Toolbox - http://www.mathworks.nl/matlabcentral/fileexchange/loadFile.do?objectId=3013&objectType=file

Chemistry
- MetMAP (Metabolical Modeling, Analysis and oPtimization) - http://webpages.ull.es/users/sympbst/pag_ing/pag_metmap/index.htm
- DoseLab - a set of software programs for quantitative comparison of measured and computed radiation dose distributions
- GenBank Overview - /Genbank/GenbankOverview.html; Matlab: /matlabcentral/fileexchange/loadFile.do?objectId=1139

Coding
- Code for the estimation of scaling exponents - http://www.cubinlab.ee.mu.oz.au/~darryl/secondorder_code.html

Control
- Control Tutorial for Matlab - /group/ctm/

Communications
- Channel Learning Architecture toolbox - supplement to the article "HiperLearn: A High Performance Learning Architecture" - http://www.isy.liu.se/cvl/Projects/hiperlearn/
- Source Coding MATLAB Toolbox - /users/kieffer/programs.html
- TCP/UDP/IP Toolbox 2.0.4 - /matlabcentral/fileexchange/loadFile.do?objectId=345&objectType=file
- Home Networking Basis: Transmission Environments and Wired/Wireless Protocols, by Walter Y. Chen - /support/books/book5295.jsp?category=new&language=-1
- MATLAB M-files and Simulink models - /matlabcentral/fileexchange/loadFile.do?objectId=3834&objectType=file
- OPNML/MATLAB Facilities - /OPNML_Matlab/

Mesh Generation
- QMG - /home/vavasis/qmg-home.html

Finite Elements and Vibration
- OpenFEM - an open-source finite element toolbox
- CALFEM - an interactive computer program for teaching the finite element method (FEM) - http://www.byggmek.lth.se/Calfem/frinfo.htm
- The Engineering Vibration Toolbox - /people/faculty/jslater/vtoolbox/vtoolbox.html
- SaGA - Spatial and Geometric Analysis Toolbox, by Kirill K. Pankratov - /~glenn/kirill/saga.html
- MexCDF and NetCDF Toolbox for Matlab 5 and 6 - /staffpages/cdenham/public_html/MexCDF/nc4ml5.html
- CUEDSID - Cambridge University System Identification Toolbox - /jmm/cuedsid/
- Kriging Toolbox - /software/Geostats_software/MATLAB_KRIGING_TOOLBOX.htm
- Monte Carlo (Dr Nando) - http://www.cs.ubc.ca/~nando/software.html
- RIOTS - "The Most Powerful Optimal Control Problem Solver" - /~adam/RIOTS/

Excel
- MATLAB xlsheets - /matlabcentral/fileexchange/loadFile.do?objectId=4474&objectType=file
- write2excel - /matlabcentral/fileexchange/loadFile.do?objectId=4414&objectType=file

Finite Element Modeling (FEM)
- OpenFEM - an open-source finite element toolbox
- NLFET - nonlinear finite element toolbox for MATLAB; a framework for setting up, solving, and interpreting results for nonlinear static and dynamic finite element analysis
- GetFEM - C++ library for finite element method elementary computations, with a Matlab interface - http://www.gmm.insa-tlse.fr/getfem/
- FELIPE - FEA package with a neat interface to MATLAB - /~blstmbr/felipe/

Finance
- A New MATLAB-Based Toolbox for Computer Aided Dynamic Technical Trading, by Stephanos Papadamou and George Stephanides (Department of Applied Informatics, University of Macedonia Economic & Social Sciences, Thessaloniki, Greece) - /fen31/one_time_articles/dynamic_tech_trade_matlab6.htm; paper: :8089/eps/prog/papers/0201/0201001.pdf
- CompEcon Toolbox for Matlab - /~pfackler/compecon/toolbox.html

Genetic Algorithms
- The Genetic Algorithm Optimization Toolbox (GAOT) for Matlab 5 - /mirage/GAToolBox/gaot/
- Genetic Algorithm Toolbox, written and distributed by Andy Chipperfield (Sheffield University, UK) - /uni/projects/gaipp/gatbx.html; manual: /~gaipp/ga-toolbox/manual.pdf
- Genetic and Evolutionary Algorithm Toolbox (GEATbx)
- Evolutionary Algorithms for MATLAB - /links/ea_matlab.html
- Genetic/Evolutionary Algorithms for MATLAB - http://www.systemtechnik.tu-ilmenau.de/~pohlheim/EA_Matlab/ea_matlab.html

Graphics
- VideoToolbox - C routines for visual psychophysics on Macs, by Denis Pelli - /VideoToolbox/; paper: /pelli/pubs/pelli1997videotoolbox.pdf
- 4D toolbox - /~daniel/links/matlab/4DToolbox.html

Images
- Eyelink Toolbox - /eyelinktoolbox/; paper: /eyelinktoolbox/EyelinkToolbox.pdf
- CellStats - automated statistical analysis of color-stained cell images in Matlab - http://sigwww.cs.tut.fi/TICSP/CellStats/
- SDC Morphology Toolbox for MATLAB - a powerful collection of state-of-the-art gray-scale morphological tools for image segmentation, non-linear filtering, pattern recognition and image analysis
- Image Acquisition Toolbox - /products/imaq/
- Halftoning Toolbox for MATLAB - /~bevans/projects/halftoning/toolbox/index.html
- DIPimage - a scientific image processing toolbox for MATLAB - http://www.ph.tn.tudelft.nl/DIPlib/dipimage_1.html
- PNM Toolbox - http://home.online.no/~pjacklam/matlab/software/pnm/index.html

ICA / KICA and KPCA
- ICA TU Toolbox - http://mole.imm.dtu.dk/toolbox/menu.html
- MISEP Linear and Nonlinear ICA Toolbox - http://neural.inesc-id.pt/~lba/ica/mitoolbox.html
- Kernel Independent Component Analysis (kernel-ica version 1.2) - /~fbach/kernel-ica/index.htm
- KPCA - see the software section of Kernel Machines

Kernel
- Statistical Pattern Recognition Toolbox - http://cmp.felk.cvut.cz/~xfrancv/stprtool/
- MATLABArsenal - a MATLAB wrapper for classification - /tmp/MATLABArsenal.htm

Markov
- MapHMMBOX 1.1 - Matlab toolbox for hidden Markov modelling using maximum a posteriori EM; prerequisites: Matlab 5.0, Netlab; last updated 18 March 2002 - /~parg/software/maphmmbox_1_1.tar
- HMMBOX 4.1 - Matlab toolbox for hidden Markov modelling using variational Bayes; prerequisites: Matlab 5.0, Netlab; last updated 15 February 2002 - /~parg/software/hmmbox_3_2.tar, /~parg/software/hmmbox_4_1.tar
- Markov Decision Process (MDP) Toolbox for Matlab, Kevin Murphy, 1999 - /~murphyk/Software/MDP/MDP.zip
- Markov Decision Process (MDP) Toolbox v1.0 for MATLAB - http://www.inra.fr/bia/T/MDPtoolbox/
- Hidden Markov Model (HMM) Toolbox for Matlab - /~murphyk/Software/HMM/hmm.html
- Bayes Net Toolbox for Matlab - /~murphyk/Software/BNT/bnt.html

Medical
- EEGLAB - open-source Matlab toolbox for physiological research (formerly the ICA/EEG Matlab toolbox) - /~scott/ica.html
- MATLAB Biomedical Signal Processing Toolbox - /Toolbox/
- Powerful package for neurophysiological data analysis (Igor Kagan's webpage) - /Matlab/Unitret.html
- EEG / MRI Matlab Toolbox
- Microarray data analysis toolbox (MDAT) - normalization, adjustment and analysis of gene expression data; Knowlton N, Dozmorov IM, Centola M (Department of Arthritis and Immunology, Oklahoma Medical Research Foundation, Oklahoma City, OK, USA 73104). A Matlab toolbox using normalization based on a normally distributed background and differential gene expression based on five statistical measures; the objects are open source and can be adapted to your application. MDAT v1.0 requires Matlab and is freely available at /publications/2004/knowlton/MDAT.zip

MIDI
- MIDI Toolbox version 1.0 (GNU General Public License) - http://www.jyu.fi/musica/miditoolbox/

Misc.
- MATLAB - The Graphing Tool - /~abrecht/matlab.html
- 3-D Circuits - the Circuit Animation Toolbox for MATLAB - /other/3Dcircuits/
- SendMail - http://carol.wins.uva.nl/~portegie/matlab/sendmail/
- Coolplot - http://www.reimeika.ca/marco/matlab/coolplots.html

MPI (Matlab Parallel Interface)
- Cornell Multitask Toolbox for MATLAB - /Services/Software/CMTM/
- Beolab Toolbox for v6.5, by Thomas Abrahamsson (Professor, Chalmers University of Technology, Applied Mechanics, Goteborg, Sweden) - http://www.mathworks.nl/matlabcentral/fileexchange/loadFile.do?objectId=1216&objectType=file
- PARMATLAB

Neural Networks
- SOM Toolbox - http://www.cis.hut.fi/projects/somtoolbox/
- Bayes Net Toolbox for Matlab - /~murphyk/Software/BNT/bnt.html
- NetLab - /netlab/
- Random Neural Networks - /~ahossam/rnnsimv2/; ftp:///pub/contrib/v5/nnet/rnnsimv2/
- NNSYSID Toolbox - tools for neural-network-based identification of nonlinear dynamic systems - http://www.iau.dtu.dk/research/control/nnsysid.html

Oceanography
- WAFO - Wave Analysis for Fatigue and Oceanography - http://www.maths.lth.se/matstat/wafo/
- ADCP toolbox for MATLAB (USGS, USA) - presented at the Hydroacoustics Workshop in Tampa and at ADCPs in Action in San Diego - /operations/stg/pubs/ADCPtools
- SEA-MAT - Matlab tools for oceanographic analysis; a collaborative effort to organize and distribute Matlab tools for the oceanographic community
- Ocean Toolbox - http://www.mar.dfo-mpo.gc.ca/science/ocean/epsonde/programming.html
- Eugene D. Gallagher (Associate Professor, Environmental, Coastal & Ocean Sciences) - /edgwebp.htm

Optimization
- MODCONS - a MATLAB toolbox for multi-objective control system design - /mecheng/jfw/modcons.html
- Lazy Learning Package - http://iridia.ulb.ac.be/~lazy/
- SDPT3 version 3.02 - MATLAB software for semidefinite-quadratic-linear programming - .sg/~mattohkc/sdpt3.html
- Minimum Enclosing Balls - Matlab code - /meb/
- SOSTOOLS - Sum of Squares Optimization Toolbox for MATLAB; user's guide: /sostools/sostools.pdf
- PSOt - a Particle Swarm Optimization Toolbox for use with Matlab, by Brian Birge; PSO is introduced briefly, the use of the toolbox is explained with examples, and a link to downloadable code is provided

Plot
- gbplot - /software/plotting/gbplot/

Signal Processing
- Filter Design with Motorola DSP56K - http://www.ee.ryerson.ca:8080/~mzeytin/dfp/index.html
- Change Detection and Adaptive Filtering Toolbox - http://www.sigmoid.se/
- Signal Processing Toolbox - /products/signal/
- ICA TU Toolbox - http://mole.imm.dtu.dk/toolbox/menu.html
- Time-Frequency Toolbox for Matlab - http://crttsn.univ-nantes.fr/~auger/tftb.html
- VoiceBox - speech processing toolbox - /hp/staff/dmb/voicebox/voicebox.html
- Least Squares Support Vector Machines (LS-SVM) - http://www.esat.kuleuven.ac.be/sista/lssvmlab/
- WaveLab802 - the wavelet toolbox, by David Donoho, Mark Reynold Duncan, Xiaoming Huo, Ofer Levi - /~wavelab/
- Time-series Matlab scripts - http://wise-obs.tau.ac.il/~eran/MATLAB/TimeseriesCon.html
- Uvi_Wave Wavelet Toolbox - http://www.gts.tsc.uvigo.es/~wavelets/index.html

Support Vector Machine
- MATLAB Support Vector Machine Toolbox, by Dr Gavin Cawley (School of Information Systems, University of East Anglia) - /~gcc/svm/toolbox/
- LS-SVM - SISTA SVM toolboxes - /dmi/svm/
- LSVM - Lagrangian Support Vector Machine - /dmi/lsvm/

Statistics
- Logistic regression (SAGA) - /SAGA/software/saga/
- Multi-Parametric Toolbox (MPT) - a tool (not only) for multi-parametric optimization - http://control.ee.ethz.ch/~mpt/
- ARfit - a Matlab package for the estimation of parameters and eigenmodes of multivariate autoregressive models - http://www.mat.univie.ac.at/~neum/software/arfit/
- The Dimensional Analysis Toolbox for MATLAB - home: http://www.sbrs.de/; paper: http://www.isd.uni-stuttgart.de/~brueckner/Papers/similarity2002.pdf
- FATHOM for Matlab - /personal/djones/
- PLS-toolbox
- Multivariate analysis toolbox (N-way Toolbox, with paper) - http://www.models.kvl.dk/source/nwaytoolbox/index.asp
- Classification Toolbox for Matlab - http://tiger.technion.ac.il/~eladyt/classification/index.htm
- Matlab toolbox for Robust Calibration - http://www.wis.kuleuven.ac.be/stat/robust/toolbox.html
- Statistical Parametric Mapping - /spm/spm2.html
- EVIM - a software package for extreme value analysis in Matlab, by Ramazan Gencay, Faruk Selcuk and Abdurrahman Ulugulyagci, 2001; manual (pdf): evim.pdf; software (zip): evim.zip
- Time Series Analysis - http://www.dpmi.tu-graz.ac.at/~schloegl/matlab/tsa/
- Bayes Net Toolbox for Matlab, written by Kevin Murphy - /~murphyk/Software/BNT/bnt.html; other toolboxes: /information/toolboxes.html
- ARfit - /~tapio/arfit/
- M-Fit - http://www.ill.fr/tas/matlab/doc/mfit4/mfit.html
- Dimensional Analysis Toolbox for Matlab
- The NaN-toolbox - a statistics toolbox for Octave and Matlab that handles data with and without missing values - http://www-dpmi.tu-graz.ac.at/~schloegl/matlab/NaN/
- Iterative Methods for Optimization - Matlab codes - /~ctk/matlab_darts.html
- Multiscale Shape Analysis (MSA) Matlab Toolbox 2000 - p.br/~cesar/projects/multiscale/
- Multivariate Ecological & Oceanographic Data Analysis (FATHOM), from David Jones - /personal/djones/
- glmlab - Generalized Linear Models in MATLAB - .au/staff/dunn/glmlab/glmlab.html
- Spatial and Geometric Analysis (SaGA) toolbox
- Interesting audio links with FAQ, VC++, on the topic

Machine Learning Websites

Chinese sites and laboratories: Peking University Visual and Auditory Information Processing Laboratory; Beijing University of Posts and Telecommunications, Pattern Recognition and Intelligent Systems; Fudan University Intelligent Information Processing Open Laboratory; IEEE Computer Society Beijing mirror site; Computer Science Forum; robot soccer; National Laboratory of Pattern Recognition; Pattern Recognition and Neural Computing Laboratory (PARNEC), Nanjing University of Aeronautics and Astronautics; Institute of Machine Learning and Data Mining (LAMDA), Nanjing University; Nanjing University Artificial Intelligence Laboratory; State Key Laboratory for Novel Software Technology, Nanjing University; Garden of Artificial Life; Data Mining Research Institute; Microsoft Research Asia; Artificial Intelligence Center, University of Science and Technology of China; Institute of Computing Technology, Chinese Academy of Sciences (CAS); Bioinformatics Laboratory, Institute of Computing Technology, CAS; Institute of Software, CAS; Institute of Automation, CAS; Artificial Intelligence Laboratory, Institute of Automation, CAS.

International sites and organizations: ACL Special Interest Group on Natural Language Learning (SIGNLL); ACM; ACM Digital Library; ACM SIGART; ACM SIGIR; ACM SIGKDD; ACM SIGMOD; Adaptive Computation Group at University of New Mexico; AI at Johns Hopkins; AI Bibliographies; AI Topics (a dynamic online library of introductory information about artificial intelligence); Ant Colony Optimization; ARIES Laboratory (Advanced Research in Intelligent Educational Systems); Artificial Intelligence Research in Environmental Sciences (AIRIES); Austrian Research Institute for AI (OFAI); Back Issues of Neuron Digest; BibFinder (a computer science bibliography search engine integrating many other engines); BioAPI Consortium; Biological and Computational Learning Center at MIT; Biometrics Consortium; Boosting site; Brain-Style Information Systems Research Group at RIKEN Brain Science Institute, Japan; British Computer Society Specialist Group on Expert Systems; Canadian Society for Computational Studies of Intelligence (CSCSI); CI Collection of BibTex Databases; CITE (the first-stop source for computational intelligence information and services on the web); Classification Society of North America; CMU Advanced Multimedia Processing Group; CMU Web->KB Project; Cognitive and Neural Systems Department of Boston University; Cognitive Sciences Eprint Archive (CogPrints); COLT (Computational Learning Theory); Computational Neural Engineering Laboratory at the University of Florida; Computational Neurobiology Lab at California, USA; Computer Science Department of National University of Singapore; Data Mining Server Online, held by Rudjer Boskovic Institute; Database Group at Simon Fraser University, Canada; DBLP (Computer Science Bibliography); Digital Biology (about creating artificial life); Distributed AI Unit at Queen Mary & Westfield College, University of London; Distributed Artificial Intelligence at HUJI; DSI Neural Networks group at the Universita di Firenze, Italy; EA-related literature at the EvALife research group at DAIMI, University of Aarhus, Denmark; Electronic Research Group at Aberdeen University; Elsevier Computer Science; European Coordinating Committee for Artificial Intelligence (ECCAI); European Network of Excellence in ML (MLnet); European Neural Network Society (ENNS); Evolutionary Computing Group at University of the West of England; Evolutionary Multi-Objective Optimization Repository; Explanation-Based Learning at University of Illinois at Urbana-Champaign; Face Detection Homepage; Face Recognition Vendor Test; Face Recognition Homepage; Face Recognition Research Community; Fingerpass; FTP site of Jude Shavlik's Machine Learning Group (University of Wisconsin-Madison); GA-List Searchable Database; Genetic Algorithms Digest Archive; Genetic Programming Bibliography; Gesture Recognition Homepage; HCI Bibliography Project (extended bibliographic information, including abstract, keywords, table of contents and section headings, for most Human-Computer Interaction publications dating back to 1980 and selected publications before 1980); IBM Research; IEEE; IEEE Computer Society; IEEE Neural Networks Society; Illinois Genetic Algorithms Laboratory (IlliGAL); ILP Network of Excellence; Inductive Learning at University of Illinois at Urbana-Champaign; Intelligent Agents Repository; Intellimedia Project at North Carolina State University; Interactive Artificial Intelligence Resources; International Association of Pattern Recognition; International Biometric Industry Association; International Joint Conference on Artificial Intelligence (IJCAI); International Machine Learning Society (IMLS); International Neural Network Society (INNS); Internet Softbot Research at University of Washington; Japanese Neural Network Society (JNNS); Java Agents for Meta-Learning Group (JAM) at the Computer Science Department, Columbia University, for fraud and intrusion detection using meta-learning agents; Kernel Machines; Knowledge Discovery Mine; Laboratory for Natural and Simulated Cognition at McGill University, Canada; Learning Laboratory at Carnegie Mellon University; Learning Robots Laboratory at Carnegie Mellon University; Laboratoire d'Informatique et d'Intelligence Artificielle (IIA-ENSAIS); Machine Learning Group of Sydney University, Australia; Mammographic Image Analysis Society; MDL Research on the Web; Mirek's Cellebration (1D and 2D cellular automata explorer); MIT Artificial Intelligence Laboratory; MIT Media Laboratory; MIT Media Laboratory Vision and Modeling Group; MLNET (a European network of excellence in machine learning, case-based reasoning and knowledge acquisition); MLnet Machine Learning Archive at GMD (includes papers, software, and data sets); MIRALab at University of Geneva (leading research on virtual human simulation); Neural Adaptive Control Technology (NACT); Neural Computing Research Group at Aston University, UK; Neural Information Processing Group at Technical University of Berlin; NIPS; NIPS Online; Neural Network Benchmarks, Technical Reports, and Source Code, maintained by Scott Fahlman at CMU (source code includes Quickprop, Cascade-Correlation, Aspirin/Migraines); Neural Networks FAQ by Lutz Prechelt; Neural Networks FAQ by Warren S. Sarle; Neural Networks: Freeware and Shareware Tools; Neural Network Group at Department of Medical Physics and Biophysics, University of; Neural Network Group at Universite Catholique de Louvain; Neural Network Group at Eindhoven University of Technology; Neural Network Hyperplane Animator (a program that allows easy visualization of training data and weights in a back-propagation neural network); Neural Networks Research at TUT/ELE; Neural Networks Research Centre at Helsinki University of Technology, Finland; Neural Network Speech Group at Carnegie Mellon University; Neural Text Classification with Neural Networks; Nonlinearity and Complexity Homepage; OFAI and IMKAI library information system, provided by the Department of Medical Cybernetics and Artificial Intelligence at the University of Vienna (IMKAI) and the Austrian Research Institute for Artificial Intelligence (OFAI), containing over 36,000 items (books, research papers, conference papers, journal articles) from many subareas of AI; OntoWeb (ontology-based information exchange for knowledge management and electronic commerce); Portal on Neural Network Forecasting; PRAG (Pattern Recognition and Application Group at University of Cagliari); Quest Project at IBM Almaden Research Center (an academic website focusing on classification and regression trees, maintained by Tjen-Sien Lim); Reinforcement Learning at Carnegie Mellon University; ResearchIndex (NECI Scientific Literature Digital Library, indexing over 200,000 computer science articles); ReVision (Reviewing Vision in the Web!); RIKEN (The Institute of Physical and Chemical Research, Japan); Salford Systems; SANS (Studies of Artificial Neural Systems, at the Royal Institute of Technology, Sweden); Santa Fe Institute; Scirus (a search engine locating scientific information on the Internet); Second Moment (The News and Business Resource for Applied Analytics); SEL-HPC Article Archive (sections for neural networks, distributed AI, theorem proving, and a variety of other computer science topics); SOAR Project at University of Southern California; Society for AI and Statistics; SVM of ANU Canberra; SVM of Bell Labs; SVM of GMD-First Berlin; SVM of MIT; SVM of Royal Holloway College; SVM of University of Southampton; SVM-workshop at NIPS97; TechOnLine (TechOnLine University offers free online courses and lectures); UCI Machine Learning Group; UMass Distributed Artificial Intelligence Laboratory; UTCS Neural Networks Research Group of the Artificial Intelligence Lab, Computer Science Department, University of Texas at Austin; Vivisimo Document Clustering (a powerful search engine which returns clustered results); Worcester Polytechnic Institute Artificial Intelligence Research Group (AIRG); Xerion neural network simulator, developed and used by the connectionist group at the University of Toronto; Yale's CTAN Advanced Technology Center for Theoretical and Applied Neuroscience; ZooLand (Artificial Life Resource).
Lecture 4: Software Process Models

A safety-critical system requires a very structured development process. For a system with rapidly changing requirements, a less formal, flexible process is more useful.
Waterfall Model
Separate and distinct phases of specification and development.

Incremental Development
Specification and development are interleaved; the system is developed as a series of versions (increments), with each version adding functionality to the previous version.
Throw-away prototyping
Benefits of Incremental Development
The cost of accommodating changing customer requirements is reduced. Customers give feedback early, which helps identify problems sooner. More rapid delivery and deployment of useful software to the customer is possible, even if not all of the functionality has been included.
CLB Precision Calibrator - OMEGASW-I Software Data Sheet

Model OMEGASW-I: Your Reliable FieldReporter Calibration Data Management Software. Software for use with the CLB Precision Calibrator.

Calibration Test Procedures
Calibration test procedures have been prepared for temperature, pressure and electrical instruments. Specific range settings and other instrument details are entered to establish an optimal calibration point pattern.

Ambient Temperature Sample
Before each calibration test, an ambient-temperature sample is recorded with the Pt100 probe to report calibration temperature conditions.

Tag Number Database
Serial number, area, hook-up and loop information can be entered or edited for each tag number. The serial number will be compared with the serial number recorded in the field; any difference in serial numbers is reported.

Instrument Details
All instrument data entered to set up a procedure are stored in a database. Select a tag number and press Details to view or print instrument settings and procedure characteristics.

Calibrator Shows Test Results
Test point results are reported for verification after a calibration test has been finished.

Short Cut for Test Procedure Setup
Do you have instruments with equal specifications? Test procedures made before may be used for new tag numbers to avoid a full setup.

Single Channel Datalogger
With the OMEGASW-I, a calibrator can be used as a single-channel data logger. Temperature, pressure or electrical signals are recorded at preset intervals. Data is recorded on the PCMCIA card and can be printed. A 1 Mb PCMCIA card can contain as many as 9,000 samples.

Datalogger Timers
Logger procedures are prepared with take-sample timer settings. Timers are available to set the time between samples and to set the time between groups of samples.

Test Bench Applications
Test point sequences as composed in your procedure allow you to take recordings of output and input readings simultaneously. Inputs and outputs can be combined freely to adapt to your application. Very useful for engine performance tests and process monitoring.

Editor
Your report has space for additional text. Before printing, you can edit personal report comments. These comments are saved under the appropriate tag number and can be re-edited each time you want to print a report.

Pass/Fail Tolerance
A level expressed as a percentage of span or reading can be entered to assist technicians in the field. Findings exceeding pressure or temperature transmitter tolerances are displayed on the calibrator screen for verification.

Calibration Equipment
The system has an equipment database to comply with most types of process instruments. OMEGASW-I has been prepared for the CLB calibrator and future calibrator models.

Messages and Instructions
Messages can be freely edited to be prompted in the field. Messages may be inserted in the test procedure to prompt before each calibration point. With pressure instruments, the calibrator display automatically shows the pressure to apply for each calibration point.

Calibration Strategy
Calibration points and sequences are selectable to check calibrations on specific points and on hysteresis. Technicians can select the type of routine from the calibrator keypad. Available routines are As Found and As Left.

Technician Work Orders
A work order is a selection of instrument tag numbers due for calibration. You can have one or more work orders. Work orders can be edited by adding or deleting tag numbers.

Work Orders Prepared for Each Loop
If calibration is performed loop by loop, OMEGASW-I offers a filter to show all instruments in a specific loop.

Work Order Notes
You can enter notes for each work order. Notes can contain specific work order instructions like safety warnings or information about the tools you need. Notes are printed together with the work order.

Downloading Work Orders
A selection of prepared work orders is downloaded onto the PCMCIA card for field calibrations or logging. Card access for downloading is made either by an integral PC card slot or by the calibrator card slot.

Uploading Result Files
Card access for uploading is made either by an integral computer card slot or by the calibrator card slot. The upload screen in OMEGASW-I allows you to move one or more files from the PCMCIA card into a directory on your PC hard disk.

Results Files
After calibration or logging, the PCMCIA card contains results in DBF-formatted DOS files. These files are ready to be read by OMEGASW-I or commercially available databases. As the files contain all results neatly arranged in a tabular format, you can print a simple document straightaway.

Graphs
Calibration of pressure or temperature transmitters can also be shown in a graph. Simply press the GRAPH key to show or print calibration graphs for both As Found and As Left findings. A zoom facility allows you to adapt the % error scale.

Reports
The OMEGASW-I allows you to report calibrations extensively. Simply select tag number, date and test routine. Press REPORT to print a full ISO 9000 document.

Two Ways to Do Your Field Calibrations
LEVEL-1: Calibrator built-in test procedures are used to record calibrations manually. Level-1 shows calibrator screens to enter and record tag number, serial number and name. Very suitable for service contracting companies who do not have test procedures prepared in OMEGASW-I.
LEVEL-2: Prepared test procedures composed in OMEGASW-I are used to record calibrations automatically. Simply select a tag number to start a test procedure from the keypad. The test procedure will proceed non-stop or will wait before each recording for your acceptance.

Package Contents
Consists of compressed programs on two 3.5" disks and includes an operating manual in English, German and French, one 1 Mb PCMCIA SRAM Type 2 card, and an RS-232 cable with COM port adaptor.

To Order (Specify Model No.)
Model Number | Price | Description
OMEGASW-I | $1295 | Software for use with CLB Calibrator
Point Cloud Data Conversion into Solid Models via Point-Based Voxelization

Point Cloud Data Conversion into Solid Models via Point-Based Voxelization
Tommy Hinks(1); Hamish Carr(2); Linh Truong-Hong(3); and Debra F. Laefer, M.ASCE(4)

Abstract: Automated conversion of point cloud data from laser scanning into formats appropriate for structural engineering holds great promise for exploiting increasingly available aerially and terrestrially based pixelized data for a wide range of surveying-related applications, from environmental modeling to disaster management. This paper introduces a point-based voxelization method to automatically transform point cloud data into solid models for computational modeling. The fundamental viability of the technique is visually demonstrated for both aerial and terrestrial data. For aerial and terrestrial data, this was achieved in less than 30 s for data sets up to 650,000 points. In all cases, the solid models converged without any user intervention when processed in a commercial finite-element method program. DOI: 10.1061/(ASCE)SU.1943-5428.0000097. (c) 2013 American Society of Civil Engineers.

CE Database subject headings: Data processing; Surveys; Finite element method; Information management.
Author keywords: Terrestrial; Aerial; Laser scanning; LiDAR; Voxelization; Computational modeling; Solid models; Finite element.

(1) Doctoral Recipient, School of Computer Science and Informatics, Univ. College Dublin, Belfield, Dublin 4, Ireland. E-mail: ******************
(2) Senior Lecturer, School of Computing, Faculty of Engineering, Univ. of Leeds, Leeds LS2 9JT, U.K. E-mail: h.carr@
(3) Post-doctoral Researcher, Urban Modelling Group, School of Civil, Structural and Environmental Engineering, Univ. College Dublin, Belfield, Dublin 4, Ireland. E-mail: linh.truonghong@gmail
(4) Associate Professor, Lead PI, Urban Modelling Group, School of Civil, Structural and Environmental Engineering, Univ. College Dublin, Belfield, Dublin 4, Ireland (corresponding author). E-mail: *******************

Note: This manuscript was submitted on November 16, 2011; approved on September 10, 2012; published online on September 13, 2012. Discussion period open until October 1, 2013; separate discussions must be submitted for individual papers. This paper is part of the Journal of Surveying Engineering, Vol. 139, No. 2, May 1, 2013, ASCE, ISSN 0733-9453/2013/2-72-83/$25.00.

Introduction

Laser scanning has achieved great prominence within the civil engineering community in recent years for topics as divergent as coastline monitoring (Olsen et al. 2009, 2011), airport layout optimization (Parrish and Nowak 2009), and ground-displacement identification for water-system risk assessment (Stewart et al. 2009). Additionally, there has been strong motivation to obtain further functionality from laser scanning and other remote-sensing data, including three-dimensional (3D) volume estimation for mining (Mukherji 2012), road documentation (Dong et al. 2007), structural identification (Shan and Lee 2005; Zhang et al. 2012), and emergency planning (Laefer and Pradhan 2006). Furthermore, computational responses of city-scale building groups are increasingly in demand for heightened urbanization, disaster management, and microclimate modeling, but input data are typically too expensive as a result of the need for manual surveying. Additionally, current software tools for transforming remote-sensing data into computational models have one or more of the following problems: a low degree of reliability, an inability to capture potentially critical details, and/or a need for a high degree of human interaction. To date, a seamless, automated, and robust transformation pipeline from remote-sensing data into city-scale computational models does not exist. This paper lays the groundwork for a key advancement in such a pipeline. The procedure proposed herein reconstructs building facades from point clouds, which is a fundamental step for generating city-scale computational models.

Facade Reconstruction

In recent years, developments in laser-scanning technology and flight-path planning have allowed aerial laser scanning (ALS) to acquire point cloud data quickly and accurately at a city scale, thereby having the potential for reconstructing 3D building surfaces across an entire city in nearly real time. A number of approaches based on semiautomatic (Lang and Forstner 1996) and automatic (Henricsson et al. 1996) techniques have been proposed to reconstruct building models from such data sets, but automatically extracting highly detailed, accurate, and complex buildings still remains a challenge (Haala and Kada 2010). The semiautomatic procedures need human operator intelligence. The automatic visual modeling of urban areas from ALS data tends to extract sample points for an individual building by applying segmentation techniques and then reconstructing each building individually. In such cases, vertical facade surfaces are not portrayed in detail, and outlines may be of relatively low accuracy unless ground planes are integrated, which requires either a priori information or manual intervention. Unfortunately, the effectiveness of engineering modeling often depends largely on the geometric accuracy and details of the building models; thus the current mismatch.

Presently, commercial products are generally semiautomatic (Laefer et al. 2011), whereas in the computer graphics and photogrammetry communities, researchers have focused on automated surface reconstruction from dense and regular sample points (Hoppe 1994; Kazhdan et al. 2006). Unfortunately, ALS data are often sparse and irregular, and may contain major occlusions on vertical surfaces owing to street- and self-shadowing (Hinks et al. 2009). Dedicated urban modeling surface-reconstruction approaches generally use the major building planes (Chen and Chen 2007) and can be described as either model-driven or data-driven. Model-driven techniques use a fixed set of geometric primitives that are fitted to the point data. Such techniques can be effective when a data set is sparse, because the fitting of geometric primitives does not require complete data. In contrast, data-driven techniques derive surfaces directly from the point data and are capable of modeling arbitrarily shaped buildings. Generally, data-driven approaches are more flexible than model-driven approaches, but are often sensitive to noise in the input data.

For strictly visual representation, model-driven approaches can be effective. For example, Haala et al. (1998) proposed four different primitives and their combinations to automatically derive the 3D building geometry of houses from ALS and existing ground planes. Similarly, Maas and Vosselman (1999) introduced an invariant moment-based algorithm for the parameters of a standard gabled-roof house type that allowed for modeling asymmetric elements such as dormers. However, these efforts assume homogeneous point distributions, which is unrealistic. You et al. (2003) also adapted a set of geometric primitives and fitting strategies to model complex buildings with irregular shapes, but the approach required user intervention and generated only limited wall details. Hu et al. (2004) used a combination of linear and nonlinear fitting primitives to reconstruct a complex building, in which aerial imagery was used to refine the models.

In contrast, many data-driven techniques operating on ALS data reconstruct roof shapes directly from sample points of roof planes. Subsequently, the remainder of the building is simply extruded to the ground level from the roof-shape outlines. Vosselman and Dijkman (2001) used a Hough transform for extraction of plane faces (roof planes) from the ALS data, and then 3D building models were reconstructed by combining ground planes and the detected roof planes. Hofmann et al. (2003) introduced a method to extract planar roof faces by analyzing triangle mesh slopes and orientations from a triangular irregular network structure generated from ALS data. More recently, Dorninger and Pfeifer (2008) used an alpha-shape approach to determine a roof outline from point clouds of the roof projected onto a horizontal plane. Also, Zhou and Neumann (2010) created impressive buildings for a large urban area by using a volumetric modeling approach in which roof planes were determined based on a normal vector obtained from analysis of grid cells belonging to roof layers. However, these models are also extruded and lack vertical-wall details.

Therefore, this paper presents an automated approach to converting point clouds of individual buildings into solid models for structural analysis by means of computational analysis, in which the point clouds that were semiautomatically segmented from Light Detection and Ranging (LiDAR) data become the input (Fig. 1). Notably, this proposed approach focuses on reconstructing solid models by using voxel grids, with the critical parameter being either the voxel size or the number of voxel grids; for more details on collecting ALS and terrestrial laser scanning (TLS) data and on segmenting point clouds, see Truong-Hong (2011) and Hinks (2011).

Fig. 1. Workflow of the proposed approach. *Collection and preparation of LiDAR data involve multiple steps outside the scope of this paper's scientific contribution; these generally include planning, collection, registration, and filtering; see Truong-Hong (2011) and Hinks (2011) for further details.

Solid Modeling

To generate building models directly from point cloud data for engineering simulations (e.g., FEM), there are three dominant methods: (1) constructive solid geometry (CSG), where objects are represented using Boolean combinations of simpler objects; (2) boundary representations (B-reps), where object surfaces are represented either explicitly or implicitly; and (3) spatial subdivision representations, where an object domain is decomposed into cells with simple topologic and geometric structure, such as regular grids and octrees (Goldman 2009; Hoffmann and Rossignac 1996). There are many extensive treatises available for in-depth consideration of this topic (Bohm et al. 1984; Rossignac and Requicha 1984, 1999).

Generating solid models automatically from point cloud data is particularly important because the cost of manually creating solid models of existing objects is far greater than the associated hardware, software, and training costs. As such, spatial subdivision representations are used extensively for creating solid models of buildings, in which regular grids or octrees are employed to decompose an entire object into nonoverlapping 3D regions, commonly referred to as voxels. Voxels are usually connected and described by a simple topologic and geometric structure. In grids, a volume is subdivided into smaller regions by appropriate planes parallel to the coordinate system axes, typically using a Cartesian coordinate system. An initial voxel bounding all point data is recursively divided into eight subvoxels, organized in a hierarchical structure (Samet 1989).

Fig. 2. Octree representation
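The recursive subdivision just described is straightforward to express in code. A minimal MATLAB sketch of an octree over a point set, under simplifying assumptions (half-open cells, subdivision stopping at a point-count threshold or maximum depth); this is an illustration, not the authors' implementation:

```matlab
function node = pointOctree(P, lo, hi, maxPts, maxDepth)
% Octree over an N-by-3 point set P inside the box [lo, hi]:
% subdivide any cell holding more than maxPts points, down to at
% most maxDepth further levels. Cells are half-open, so each point
% falls in exactly one child.
    node.lo = lo;  node.hi = hi;
    node.n  = size(P, 1);                 % points in this cell
    node.children = {};
    if node.n <= maxPts || maxDepth == 0
        return;                           % leaf cell
    end
    mid = (lo + hi) / 2;
    node.children = cell(1, 8);
    for k = 0:7
        % Bits of k choose the low or high half along x, y, z.
        sel = [bitget(k,1), bitget(k,2), bitget(k,3)];
        clo = lo .* (sel == 0) + mid .* (sel == 1);
        chi = mid .* (sel == 0) + hi  .* (sel == 1);
        inCell = all(P >= clo & P < chi, 2);   % implicit expansion (R2016b+)
        node.children{k+1} = pointOctree(P(inCell, :), clo, chi, ...
                                         maxPts, maxDepth - 1);
    end
end
```

For example, `tree = pointOctree(P, min(P,[],1), max(P,[],1) + 1e-9, 50, 8)` builds the hierarchy over a point cloud P. Labeling cells black, white, or gray additionally requires testing each cell against the solid, which the paper derives from the point data itself.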
Voxels may be labeled white, black, or gray based on their positions (Fig. 2). Black voxels are completely inside the solid, whereas white voxels are completely outside; voxels with both black and white children are gray (Hoffmann and Rossignac 1996).

In an application of spatial subdivision for surface reconstruction, Curless and Levoy (1996) presented a volumetric method for integrating range images to reconstruct an object's surface based on a cumulative weighted signed-distance function. Unfortunately, the approach is not suited for arbitrary objects. In related work, Guarnieri and Pontin (2005) built a triangulated mesh of an object's surface by combining a consensus surface [as proposed by Wheeler et al. (1998)], an octree representation, and the marching-cubes algorithm (Lorensen and Cline 1987). This multifaceted algorithm can reduce the effect of noise owing to surface sampling, sensor measurements, and registration errors. However, for optimal results, the method requires modification of parameters that depend heavily on input-data characteristics, such as the voxel size, the threshold value for the angle, and the distance between two consecutive neighbor-range viewpoints.

Voxelization

Critical to octree/quadtree representations for further processing is voxelization. This term describes the conversion of any type of geometric or volumetric object, such as a curve, surface, solid, or computed tomographic data, into volumetric data stored in a 3D array of voxels (Karabassi et al. 1999). Initially, a voxel grid divides a bounded 3D region into a set of cells, which are referred to as voxels. The division is typically conducted in the axial directions of a Cartesian coordinate system. Before voxelization, three pairs of coordinate values $[x_{\min}, x_{\max}]$, $[y_{\min}, y_{\max}]$, and $[z_{\min}, z_{\max}]$ are created along the three axes X, Y, and Z, defining a global system (Fig. 3). The basic idea of a voxelization algorithm is to examine whether voxels belong to the object of interest and to assign a value of 1 or 0, respectively (Karabassi et al. 1999); a further description of voxel grids is available in Cohen and Kaufman (1990).

An initial voxel bounding all point cloud data in 3D Euclidean space $\mathbb{R}^3$ is subdivided into subset voxels by grids along the x-, y-, and z-coordinates in a Cartesian coordinate system. Each voxel in the subset is represented by an index $v(i, j, k)$, where $i \in [0, N_x - 1]$, $j \in [0, N_y - 1]$, and $k \in [0, N_z - 1]$ (Fig. 3). With the dimensions of individual voxels $\Delta x$, $\Delta y$, $\Delta z$, the numbers of voxels $N_x$, $N_y$, $N_z$ along each direction are given in Eqs. (1)-(3):

$$N_x = \frac{x_{\max} - x_{\min}}{\Delta x} + 1 \qquad (1)$$
$$N_y = \frac{y_{\max} - y_{\min}}{\Delta y} + 1 \qquad (2)$$
$$N_z = \frac{z_{\max} - z_{\min}}{\Delta z} + 1 \qquad (3)$$

Fig. 3. Voxel grid spanning a volume in 3D space bounded by $[x_{\min}, x_{\max}]$, $[y_{\min}, y_{\max}]$, and $[z_{\min}, z_{\max}]$, where $\Delta x$, $\Delta y$, and $\Delta z$ are voxel sizes and $N_x$, $N_y$, and $N_z$ are the numbers of voxels in each direction.

The voxel has eight lattice vertices associated with six rectangular faces (Fig. 3). Each interior voxel has 26 neighboring voxels, with eight sharing a vertex, 12 sharing an edge, and six sharing a face. Conversely, an exterior or interior voxel on a hole's boundary often has only 17 neighboring voxels (four sharing a vertex, eight sharing an edge, and five sharing a face). Moreover, most existing voxelization techniques operate on surface representations of objects, where a significant part of the problem is to identify through which voxels the surfaces pass. Such methods are referred to as surface-based voxelization (Cohen-Or and Kaufman 1995) [Fig. 4(a-c)]. In contrast, the point-based voxelization in this paper operates directly on the point data and does not require a derived surface [Fig. 4(a-c)]. Point-based voxelization is conceptually much simpler than surface-based voxelization algorithms, and whereas the mechanisms are well known, they have not been applied to generating solid models of buildings from LiDAR data.

Fig. 4. Point-based voxelization avoids surface reconstruction and operates directly on point data.

As mentioned earlier, each voxel is classified as active or inactive (corresponding to binary values) based on the sample points within that voxel [Eq. (4)]:

$$f(v) = \begin{cases} \text{active} & \text{if } n \ge T_n \\ \text{inactive} & \text{if } n < T_n \end{cases} \qquad (4)$$

where the argument $n$ = number of points mapping to a voxel, and $T_n$ = user-specified threshold value. Typically, $T_n = 1$, which means that voxels containing at least one mapping point are classified as active and all others as inactive. More sophisticated density-based classification functions can be designed. An example is shown in Fig. 5.

Fig. 5. Voxelization model of the front building of Trinity College, Dublin, Ireland, created by a voxel grid: (a) input data set of 245,000 ALS points; (b) voxelization model with voxel size $\Delta x = \Delta y = \Delta z = 0.25$ m; (c) voxel classification with the threshold $T_n = 1$ and voxelization model with about 5,000 active voxels ($n$ is the largest number of points mapping to a single voxel).
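Eqs. (1)-(4) translate almost directly into code. A minimal MATLAB sketch of point-based voxelization (illustrative only; the variable names are not from the paper):

```matlab
function active = voxelize(P, dx, dy, dz, Tn)
% Point-based voxelization: classify voxels as active/inactive (Eq. 4).
% P  - N-by-3 array of point coordinates [x y z]
% Tn - threshold on the number of points per voxel (typically 1)
    lo = min(P, [], 1);                     % [xmin ymin zmin]
    hi = max(P, [], 1);                     % [xmax ymax zmax]
    d  = [dx dy dz];
    N  = floor((hi - lo) ./ d) + 1;         % Eqs. (1)-(3), integer counts

    % Map each point to its voxel index v(i,j,k), 0-based as in the paper.
    ijk = floor((P - lo) ./ d);             % implicit expansion (R2016b+)
    lin = sub2ind(N, ijk(:,1)+1, ijk(:,2)+1, ijk(:,3)+1);

    n      = accumarray(lin, 1, [prod(N) 1]);  % points mapping to each voxel
    active = reshape(n >= Tn, N);              % logical Nx-by-Ny-by-Nz grid
end
```

For example, `active = voxelize(P, 0.25, 0.25, 0.25, 1)` corresponds to the 0.25 m voxel size and threshold $T_n = 1$ used for the Trinity College model in Fig. 5.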
Proposed Conversion of Voxelized Models into Solid Models

To reconstruct vertical surfaces of building models, a voxel grid is used to divide data points in a bounded 3D region into smaller voxels. Important facade features such as windows and doors are subsequently detected based on a voxel's characteristics, where an inactive voxel represents the inside of an opening. Consequently, building models are converted into an appropriate format for computational processing.

An object is defined by its surface boundary, which then must be converted into an appropriate solid representation compatible with commercial computational packages. Although many schemes are available, B-reps are herein adopted because of their compatibility with commercial structural-analysis software, e.g., ANSYS software (Laefer et al. 2011). The proposed method defines both the geometry and topology of an object by a set of nonoverlapping faces that approximate the boundary of the solid model. This section presents a brief description of the B-rep scheme implemented in the proposed approach; for more details, see Goldman (2009). Geometry is defined by key singular points, with each point representing a specific location in space. Topology is defined by connections between key points. When used together, they can define a solid model (Fig. 6). Data structures for describing B-reps often capture the incidence relations between a face and its bounding edges, and between an edge and its bounding vertices.

Fig. 6. Solid model components
Fig. 7. Face orientation as dictated by the right-hand rule

Key points are represented by a 3D coordinate of a singular point. An edge is defined as the connection between exactly two key points; for example, the edge $e_{ij} = \{P_i, P_j\}$ is the edge with starting point $P_i$ and ending point $P_j$. Notably, edges have an orientation; as such, $e_{ij} = -e_{ji}$, and thus the edges $e_{ij}$ and $e_{ji}$ would be flipped. Edge flipping is important when defining an orientable face for distinguishing the inside from the outside.

Similarly, faces represent surfaces of a solid model that are connections between edges. The faces are further connected to form volumes. A face is defined as a list of edges $f = \{e_{01}, e_{12}, \ldots, e_{(n-2)(n-1)}\}$ that forms a closed path. A face consisting of three key points is a triangle, whereas a face consisting of four key points is a quadrilateral.
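The orientation convention $e_{ij} = -e_{ji}$ maps naturally onto a signed edge list. A minimal MATLAB sketch of such a B-rep data layout (illustrative, not the authors' implementation):

```matlab
% Key points: one 3D coordinate per row.
P = [0 0 0;    % P1
     1 0 0;    % P2
     1 1 0;    % P3
     0 1 0];   % P4

% Edges as ordered key-point index pairs; negating an edge index
% flips its orientation (e.g., e21 = -e12).
E = [1 2;      % e12
     2 3;      % e23
     3 4;      % e34
     4 1];     % e41

% A face is a closed loop of signed edge indices; the right-hand
% rule then distinguishes the inside from the outside.
face    = [1 2 3 4];   % quadrilateral P1-P2-P3-P4
flipped = -fliplr(face);   % the same face traversed in reverse

% Expand a signed edge index into its ordered key-point pair.
edgePts = @(k) (k > 0) * E(abs(k),:) + (k < 0) * fliplr(E(abs(k),:));
disp(edgePts(-1));     % prints [2 1], i.e., the flipped edge e21
```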
A Software Reliability Growth Model Considering Environmental Differences

A software reliability growth model considering environmental differences. Received 2011-01-07; revised 2011-02-28.
Funding: National Science and Technology Major Project "Core Electronic Devices, High-end Generic Chips and Basic Software" (2009ZX01039-003-001-002).
About the authors: HAN Xuan (1985-), male, from Ezhou, Hubei; M.S. candidate; main research interest: software reliability testing. LEI Hang (1960-), male, from Zigong, Sichuan; professor, Ph.D.; main research interest: software reliability engineering.
Article ID: 1001-9081(2011)07-1759-03. DOI: 10.3724/SP.J.1087.2011.01759. (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China) (hanxuan_85@)
Abstract (translated): In software reliability growth models, the difference between the testing environment and the actual operating environment leads to misjudgment of the failure intensity function. Building on the M-O logarithmic Poisson execution time model, the classic model in Musa's execution-time family, a logarithmic Poisson model considering environmental factors is proposed. The model captures the changing behavior of the failure intensity function well, and parameter estimation formulas are given. Experiments on failure data sets show that the model achieves a good fit.
Keywords: software reliability growth model; M-O logarithmic Poisson execution time model; Non-Homogeneous Poisson Process (NHPP) model; failure intensity; environmental factor. CLC number: TP311.5. Document code: A.

Software reliability growth model considering environmental difference
HAN Xuan, LEI Hang (School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China)
Abstract: The failure intensity function will be misjudged in a Software Reliability Growth Model (SRGM) because of the difference between the testing environment and the user environment. A logarithmic Poisson model considering an environment factor was proposed, based on the M-O logarithmic Poisson execution time model, the representative of Musa's execution-time models. It can better represent the changing behavior of the failure intensity function, and parameter estimation equations are provided. Experimental results based on failure data sets show that the proposed model fits the data better than several other models.
Key words: software reliability growth model; M-O logarithmic Poisson execution time model; Non-Homogeneous Poisson Process (NHPP) model; failure intensity; environmental factor

0 Introduction
With the widespread application of computers, software systems have grown ever larger, with increasingly complex structure and functionality, and software quality has drawn growing attention.
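For reference, the baseline M-O (Musa-Okumoto) logarithmic Poisson execution time model that the proposed model extends has mean value function $\mu(\tau) = \frac{1}{\theta}\ln(\lambda_0\theta\tau + 1)$ and failure intensity $\lambda(\tau) = \lambda_0/(\lambda_0\theta\tau + 1)$, where $\lambda_0$ is the initial failure intensity and $\theta$ the intensity decay parameter. A minimal MATLAB sketch fitting these two parameters to cumulative failure counts by least squares; the data below are hypothetical placeholders, and the paper's environmental-factor extension is not reproduced here:

```matlab
% Hypothetical failure data: execution time and cumulative failure counts.
t = [10 25 40 60 85 110 140 175 215 260];   % execution time (CPU hours)
y = [ 8 16 22 28 33  37  41  44  47  49];   % cumulative failures observed

% M-O model: mu(tau) = (1/theta)*log(lambda0*theta*tau + 1)
mu = @(p, tau) (1/p(2)) .* log(p(1)*p(2).*tau + 1);   % p = [lambda0 theta]

% Least-squares fit of [lambda0, theta] to the cumulative counts.
sse  = @(p) sum((y - mu(p, t)).^2);
p0   = [1, 0.05];                            % rough starting guess
pHat = abs(fminsearch(@(p) sse(abs(p)), p0)); % abs() keeps parameters positive

% Fitted failure intensity lambda(tau) = lambda0/(lambda0*theta*tau + 1).
lambda = @(tau) pHat(1) ./ (pHat(1)*pHat(2).*tau + 1);
fprintf('lambda0 = %.3f, theta = %.4f, intensity at t_end = %.3f\n', ...
        pHat(1), pHat(2), lambda(t(end)));
```

The environmental-factor model of the paper modifies this failure intensity to account for the gap between test and field conditions; its estimation equations follow the same maximum-likelihood/least-squares pattern.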
Coordinating Models and Metrics to Manage Software Projects

Paper appeared in: Raffo, D. M., W. Harrison, and J. Vandeville, "Coordinating Models and Metrics to Manage Software Projects", International Journal of Software Process Improvement and Practice.

Coordinating Models and Metrics to Manage Software Projects

David Raffo, Portland State University, School of Business, PO Box 751, Portland, OR 97207 USA, 503-725-8508, davidr@
Warren Harrison, Portland State University, Computer Science, PO Box 751, Portland, OR 97207 USA, 503-725-3108, warren@
Joseph Vandeville, Northrop Grumman, Software Engineering, P.O. Box 9650, Melbourne, FL 32902 USA, 407-951-5287, vandejo1@

ABSTRACT
In previous work we developed techniques for modeling software development processes quantitatively in terms of development cost, product quality, and project schedule using simulation. This work has predominately been applied to the software project management planning function. The goal of our current work is to develop a "forward-looking" approach that integrates metrics with simulation models of the software development process in order to support the software project management controlling function. This "forward-looking" approach provides predictions of project performance and the impact of various management decisions. It can be used to assess the project's conformance to planned schedule and resource consumption. This paper reports on work with a leading software development firm to create an approach that includes a flexible metrics repository and a discrete event simulation model.

Keywords
Software Process, Project Management, Simulation Modeling, Software Measurement Repositories

1 INTRODUCTION
Effective project management is critical to the success of software development projects. For large projects, software project management is often broken into two separate functions: planning and control. Planning is forward looking. It tells us what needs to be done, when it is to be done, how it is to be done, and who is going to do it. Planning usually occurs prior to embarking upon a project or early in the project life cycle. Controlling is intended to keep events on course by identifying and correcting deviations from the plan. It has a more narrow and immediate focus. It is intended to alert managers to significant deviations from plan while the project is in process.

The planning function can address a variety of useful questions regarding the potential performance of a company's software development process: questions related to predicting effort, cost, schedule, and product quality; developing forecasts of staffing levels over the duration of the project; addressing resource constraints and resource allocation; predicting service levels for product support; analyzing risks; and so forth [KeMR 99].

The control function is different in focus, timing, and scope. It involves the on-going management of a project. It compares the likely final state of the project (actual performance) with respect to cost, quality, and schedule with the state called for by the original plan. Deviations are reported to the project manager, who then may choose to take some specific action to bring the project back into control. Timely and accurate data are necessary to provide an accurate picture of where the project currently is and to make a prediction of where the project is likely to go.
The ability to quantitatively monitor and assess software projects helps support the Quantitative Process Management and Software Quality Management Level 4 Key Process Areas (KPAs) of the Capability Maturity Model (CMM) [Humphrey 89, Paulk+ 93]. In a domain where "gut feel" and subjective estimates are common, software project managers have often looked for tools and an approach to provide quantitative data on current project status and quantitative estimates of potential project outcomes.

In previous work we developed techniques for modeling software development processes quantitatively in terms of development cost, product quality, and project schedule using simulation. This work has predominantly been applied to the software project management planning function [Raffo 96; RVM 99]. The goal of our current research is to develop a "forward-looking" approach that integrates metrics with simulation models of the software development process in order to support the software project management control function. The forward-looking approach provides predictions of project performance and of the impact of various management decisions. By combining metrics and predictive models, it provides a more comprehensive picture than can be achieved using metrics alone. In addition, the predictive models can support managers as they attempt to replan and bring the project back on track. A key element of this approach is the development of a flexible metrics repository which links corporate databases with software process simulation models. This paper reports on work with a leading software development firm to create an approach that includes a flexible metrics repository and a discrete event simulation model based on the company's software development process.

Section 2 provides background on the various models used to predict performance of software development processes. Primary attention is given to software process simulation models, which support the project planning function by capturing process-level issues. In addition, the expanded information needs required to support the control activity, and the resulting need for the metrics repository, are shown. Section 3 presents the metrics repository and how it supports the controlling function by providing up-to-date information in a flexible manner. Section 4 provides an overview of a discrete event simulation model that has been developed; a distinction is made between the representation used by the discrete event simulation model and other process modeling approaches that makes the discrete event paradigm highly compatible with the metrics repository described in Section 3. Section 5 discusses the controlling activity, outcome-based control limits (OBCLs), and the decisions supported by our approach. Section 6 discusses potential benefits, conclusions, and future work.

2 MODELING APPROACHES USED TO SUPPORT THE PLANNING ACTIVITY
Planning is forward looking. A variety of models and methods have been applied to support project planning. These models and methods have been used to predict various aspects of the performance of software development projects. Some of these approaches have been static models applied to one dimension of performance such as product quality or reliability [MIO 87, Musa 75, Trivedi 82]. Other approaches utilize high-level models to estimate costs and predict project-level performance. These models typically use summary inputs of the project and abstract out details of the underlying software development process.
These models may also capture multiple dimensions of performance such as cost and schedule. Well-known examples of these types of models include COCOMO [Boehm 81, Boehm+ 95] and SLIM [Putnam 78, Putnam 80].

In recent work, Raffo developed the Process Tradeoff Analysis (PTA) Method. This method builds on previous work by Kellner et al. at the Software Engineering Institute (SEI) [KH 88, KH 89, Kellner 89A, Kellner 89B, Kellner 90] by developing a quantitative approach to evaluating potential process changes in terms of development cost, product quality, and project schedule [Raffo 96]. The core of the PTA Method addresses evaluating process alternatives quantitatively by developing stochastic state-based simulation models of each process alternative. These models explicitly capture process-level details, including complex interdependencies among process components. The PTA Method has been applied to real-world process change problems at leading software development firms [Raffo 96, RK 96, RVM 99].

Recently, other researchers have also modeled software development processes at a more detailed level than COCOMO and SLIM models, using a variety of software process simulation techniques such as discrete event simulation, system dynamics or continuous simulation, knowledge-based systems, and scheduling simulation [Scacchi 99, Madachy 94, Tvedt 95]. These models capture project-level issues, which is a critical feature needed to support planning activities. In addition, the state-based and discrete event simulation process models [KoMR 98, RK 95, Raffo 96] are stochastic and can provide a quantitative assessment of risk.

The above approaches have the advantage of utilizing high-level parameters that are typically available for software development projects. Using these parameters and inputs, which are often captured at the end of a project, useful information and project-level estimates can be predicted for future projects. However, in order to:

1. support planning decisions related to the project and processes being used,
2. evaluate alternative variations of different processes and make choices (e.g., formal inspections or reviews, different ways to conduct integration test, and so forth), and
3. allocate resources among different process sub-tasks,

more detailed models which capture process-level and product-level issues are needed.

Project-status metrics are used to provide indicators for the project. However, these metrics provide only snapshots of specific individual aspects of project status. By incorporating these metrics into a life-cycle model of the software development process, a more complete, multi-dimensional performance picture can be created.

By capturing the details related to actually executing software projects, software process simulation models take a very significant step forward in supporting project planning and control activities. This step forward is attained by modeling the software development process at a finer level of granularity and utilizing lower-level project data. However, the timeliness of data sources for these models (i.e., data obtained from past projects) has remained the same. In order to provide an accurate picture of current project status, up-to-date project information is needed.

The work presented in this paper describes an approach for integrating metrics with simulation models of the software development process.
We describe necessary support tools such as a flexible metrics repository and discuss the interface between the repository and the simulation models.

3 SOFTWARE METRICS REPOSITORY
Up-to-date project and process information is necessary to support planning and control decisions about a project. The metrics repository stores the necessary information and provides the critical link between raw project metrics and model parameters. Since the model needs to provide a timely view of the project at all times, the repository must facilitate the collection of data on a "real-time" basis. However, because of the natural evolution of models and their corresponding information needs, and the different levels of granularity at which data may be captured, the repository must also possess a high degree of flexibility.

The repository is based upon a "transformation view" of the software development process. Artifacts such as specifications, designs, and code are "transformed" by the application of a "transformation process" into a new artifact. For instance, a design artifact may be transformed into a code artifact by the application of a "programming transformation". Artifacts possess certain properties, such as "size", "volatility", "complexity", etc., and the transformation process possesses other properties such as "resources consumed", "errors made", etc. The transformation process, as well as the artifacts themselves, can be represented in a simplified entity-relationship diagram (Figure 1 - Entity-Relationship Diagram of a Portion of the Transformation Process). In short, this model denotes that an Artifact is related to another Artifact through some "transforms" relationship.

In order to provide an historical record of the state of the project as it progresses through (or is "transformed by") the development process, we record snapshots of project characteristics each time a significant "transformation" occurs. The threshold of significance can be adjusted to record snapshots at whatever level of detail is necessary. For instance, snapshots could be taken every time a changed piece of code is checked back into the revision control system, or they could be taken every time the project proceeds to the next phase of the process. The level of significance selected will have an impact on the degree to which the repository can be used to facilitate control activities (more detail supports greater control).

It is quite easy to characterize the entire development process as a series of transformations. The process begins with a requirements specification, which can be considered the initial artifact. The artifact is transformed into a design artifact through the application of a "design transformation" (i.e., this represents the design activity). Should additional granularity be desired, the "design activity" could be decomposed into several transformations, such as transforming the requirements artifact into an architectural design artifact. The architectural design artifact could then be transformed into a structure chart artifact, which could finally be transformed into a detailed design artifact. The detailed design artifact may be transformed into code through a programming transformation. The code artifact could be transformed into an "inspected code" artifact through the inspection transformation. Inspected code may be transformed into "final code", "tested code", "delivered code", etc.

A notable point is that the relationships between artifacts and transformations are many-to-many. That is, a design artifact may be transformed into several different code artifacts; for instance, a structure chart artifact may be transformed into a code artifact for every node in its tree. Likewise, several different artifacts may be transformed into a single artifact: a code artifact and a maintenance request artifact may be transformed into another, revised code artifact.
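A minimal sketch of this artifact/transformation view as a data model (illustrative only; the names, fields, and example values below are assumptions, not the paper's actual schema):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Artifact:
    """A work product (spec, design, code, ...) with measured properties."""
    name: str
    kind: str                      # e.g. "design", "code", "text document"
    properties: Dict[str, float] = field(default_factory=dict)  # size, complexity, ...

@dataclass
class Transformation:
    """A process step that turns input artifacts into output artifacts."""
    name: str                      # e.g. "programming", "inspection"
    inputs: List[Artifact]         # many-to-many: several inputs ...
    outputs: List[Artifact]        # ... may yield several outputs
    properties: Dict[str, float] = field(default_factory=dict)  # effort, faults, duration

# A project "snapshot" history is simply the list of transformations recorded so far.
design = Artifact("CSU-12 design", "design", {"pages": 14, "tbd_count": 2})
code = Artifact("csu12.c", "code", {"source_lines": 420, "decision_count": 57})
log: List[Transformation] = [
    Transformation("programming", [design], [code],
                   {"staff_hours": 80, "faults_injected": 9, "duration_days": 10}),
]
```

The many-to-many relationship described above falls out naturally from the `inputs` and `outputs` lists: the same design artifact can appear as an input to several transformations, and one transformation can consume several artifacts.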
As each artifact or transformation is recorded, the repository attempts to characterize the state of the project at each snapshot by maintaining information about the transformation and the artifact(s) produced by the transformation. This information is intended to characterize particular "domains" associated with the transformation and the artifact. It should be recognized that the manner in which we operationalize a particular domain might differ from project to project, and from organization to organization. For instance, suppose a company has some large projects with formal inspections and data collection. At the same time, the same company also has smaller projects where the data is stored by project, but only to a certain depth. These differences can be accommodated by the level of significance that triggers a "transformation", as well as by including a description of the measurement scheme used along with each measurement (Figure 2 - Entity-Relationship Diagram Showing Design Artifact Properties).

The domains of interest associated with a transformation include: the resources expended in the process of doing the transformation, the number of defects found, the number of faults injected, and the duration of each transformation. The resource expenditure domain may involve calendar days, staff hours, equivalent FTE, etc. As a standard method of operationalizing this domain, we propose actual staff hours expended (including both paid and unpaid overtime). However, if this information is not available, then some other measure of resource expenditure, with a description of the counting rules, exceptions, etc., would be acceptable. Clearly, as we move farther and farther away from a "canonical measure of resource consumption", models built upon this data become less and less reliable. The number of faults found and injected during a transformation should be noted. In general, it is safe to say that a transformation either inserts faults or finds them. For instance, an inspection transformation will find faults, while a coding transformation will create them.

The domains of interest associated with each artifact (i.e., code unit, specification, design, etc.) involve size, complexity, volatility, and quality (we might better term this "correctness"). Each of these can be operationalized in a variety of ways. Rather than selecting sophisticated and complicated measurement techniques, we have instead chosen to promote the use of simple, easily understood and captured metrics. However, the repository accommodates multiple metrics, and we encourage the collection of additional, more sophisticated metrics as well. Table 1 lists some domains and possible methods of operationalizing them. Artifacts such as design documents, bug reports, change requests, etc. are not addressed specifically in this table; however, depending on their level of formality, they should be able to be characterized as a "text document" or a "code document".
Table 1 - Potential Metrics Domains

Size, Code (traditional programming languages): Source Lines, the number of non-blank lines in each artifact. Alternates: Software Science N and V; number of statements; number of subroutines/functions.

Size, Text Documents: Text Lines, the number of non-blank lines in each artifact; figures should be counted based on the percentage of the page they cover. Alternates: number of sentences; number of paragraphs; number of pages; number of "key words" such as Shall.

Complexity, Code (traditional programming languages): Decision Count, the number of decision statements in the code, such as IF, WHILE, REPEAT, CASE, etc. Alternates: Software Science E; number of local and global variables; level of control-flow nesting.

Complexity, Text Documents: Readability, the Flesch Reading Ease Score applied to the document. Alternates: Flesch-Kincaid Grade Level Score; Gunning Fog Index.

Volatility, Code: Number of changes since the last version of the artifact. Alternates: percent of total changes to the artifact that occurred since the last version; change in units of each size measure listed above since the last version; percent of total changes in units of each size measure since the last version.

Volatility, Text Documents: Number of "TBDs" in the artifact. Alternates: number of changes since the last version of the artifact in units of each size measure listed above; percent of total changes to the artifact that occurred since the last version, in units of each size measure listed above.

Quality, Code: Number of faults (not failures) found in the artifact. Alternates: number of test cases (from the test plan) passed and failed so far.

Quality, Text Documents: Number of faults (not failures) found in the artifact. Alternates: none.

While the repository supports flexibility of data, it also provides a framework for maintaining the current state of the project. Depending on the level of granularity and frequency of recording, a project manager should be able to determine the current artifacts being developed, as well as all the pertinent information currently available about preceding artifacts. However, data collection, especially of resource consumption properties, is an inherently manual process, and the data can only be as timely and reliable as the data-collection personnel make it. Therefore, some of the properties will be implemented as "computed fields" by providing links to artifacts such as the project defect tracking system and by inserting "update triggers" in transformation sources such as revision control systems and makefile scripts.

The repository is currently implemented using Microsoft Access™ with linkage to the PR-Tracker™ bug tracking system. However, numerous other implementation approaches are under investigation, including PROLOG and other commercial DBMS packages. The repository must be concerned both with linking to various data-capture mechanisms and with interfacing with the simulation model.
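As an illustration of how the simple operationalizations in Table 1 might be computed (a sketch, not the authors' tooling; the regular expression for decision statements is a naive assumption that a real counter would replace with a parser):

```python
import re

def source_lines(path: str) -> int:
    """Size, Code: number of non-blank lines (Table 1)."""
    with open(path) as f:
        return sum(1 for line in f if line.strip())

# Naive decision-statement matcher for a C-like language.
DECISION_RE = re.compile(r"\b(if|while|for|case|repeat)\b")

def decision_count(path: str) -> int:
    """Complexity, Code: number of decision statements (Table 1)."""
    with open(path) as f:
        return sum(len(DECISION_RE.findall(line)) for line in f)
```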
4 COMBINING PROCESS MODELS WITH THE DATA REPOSITORY
Given the discussion in Section 3, it is clear that a detailed artifact-based process model of software development projects would be compatible with the metrics repository. This section briefly describes our on-going work with Northrop Grumman Corporation to develop an artifact- or entity-based model that provides a compatible interface to the metrics repository described in Section 3 and is linked to it through parameters that are generated by a set of database queries. These parameters are updated during various significant project transformations to provide timely information that can be used by the model to make improved predictions.

Since mid-1996, Northrop Grumman has been sponsoring research into the use of stochastic simulation models to support software process improvement and quantitative project management issues. The goal of this research is to develop a quantitative simulation model of the software development process that can be used in a forward-looking, predictive fashion to simulate the impact of proposed process changes prior to deployment on software projects. This work is being conducted in collaboration with Portland State University and the Software Engineering Research Center (SERC), an NSF-sponsored Industry/University Cooperative Research Center. A new discrete event simulation model has been developed to simulate the activities and artifacts of one of Northrop Grumman's large-scale software development projects. This model contains cost, schedule, and quality data that have been collected from past projects. The research project has been expanded in scope to explore the potential new capabilities of integrating up-to-date metrics information, as provided by the metrics repository described above, with the discrete event simulation model of the software development process.

Northrop Grumman's SBMS Melbourne site develops software for airborne radar surveillance and battle management systems. The portion of the software development process modeled consists of traditional software life cycle activities. These activities consist of 71 distinct development steps. The selection of steps was based on the availability of distinct metrics: if two activities could not be distinguished using their metric data, they were treated as a single activity. For example, if process measurements do not distinguish between compiling code and fixing any resulting compilation errors, then this is treated as a single process step.

The model can be viewed as a hierarchy of process steps (Figure 3 - TJU Project Life Cycle Process Model). At the highest level, the model consists of four life cycle phases. Preliminary Design consists of software requirements analysis and high-level software architectural design; it consists of designing CSCIs (Computer Software Configuration Items) and CSCs (Computer Software Components) and identifying CSUs (Computer Software Units). Detailed Design consists of designing individual CSUs. Code and Unit Test is the implementation phase, which consists of writing computer code, compiling units, testing individual units, and the first level of unit integration. CPET (Computer Program Engineering Test) consists of internal integration and testing activities which are not formally witnessed by the customer.

Each of the four life cycle phases is decomposed in the model into several main tasks, and these in turn are decomposed into sub-tasks. The architecture of the simulation model replicates the architecture of the actual software development process in that some activities are executed sequentially and some concurrently, through the use of multiple entities. The overall project schedule (duration) for each simulation run is equal to the duration of the critical path. However, the critical path for the project results from the precedence relationships (flexible or fixed) that are encoded into this process architecture, as well as from the individual process step durations (determined based upon input contract constraints or expended effort) and the process branching decisions that occur as each simulation run is executed.
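A minimal sketch of the discrete event idea just described (illustrative only; the actual model was built in Extend with 71 steps, and the step names and duration samples below are made-up placeholders): step durations are drawn from historical samples, entities flow through sequential and concurrent steps, and repeated runs yield a distribution of project duration.

```python
import random
import statistics

# Hypothetical historical duration samples (days) per process step.
HISTORY = {
    "design": [30, 42, 35, 38],
    "code":   [55, 70, 61, 66],
    "test":   [25, 33, 28, 40],
    "docs":   [20, 22, 27, 19],   # runs concurrently with "code"
}

def sample(step: str) -> float:
    """Draw a duration from the empirical sample for one simulated team."""
    return random.choice(HISTORY[step])

def one_run() -> float:
    """Sequential design -> (code || docs) -> test; duration = critical path."""
    return sample("design") + max(sample("code"), sample("docs")) + sample("test")

runs = [one_run() for _ in range(1000)]
print(f"schedule: mean={statistics.mean(runs):.1f} days, "
      f"stdev={statistics.stdev(runs):.1f} days")
```

Multiple runs give the mean and variance of schedule outcomes, which is exactly the stochastic behavior the text attributes to the Northrop Grumman model.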
Earned value is accrued by each activity as the project progresses. The amount of earned value assigned to each activity is based upon planned earned value allocations that are input before the simulation model is run. The model can help update earned value allocations by estimating the amount of effort associated with each sub-task.

When the simulation model is run, parameters for each execution are drawn from populations of data that were collected from the various project teams. Using multiple runs, the model provides the mean and variance of the performance results that may be experienced from team to team. Hence, the results of the simulation are stochastic, capturing the inherent uncertainty associated with real-world development. The process model was developed using Extend™ from Imagine That, Inc.

At SBMS Melbourne the model has been used to analyze several process change problems and to evaluate alternative process configurations for a significant new project bid proposal. The analyses have included flexible "what if" assessments of an initial review of process change ideas.

One key distinction between the system dynamics and state-based simulation models, as compared to a discrete event simulation model, is the handling of process artifacts. In the discrete event simulation model, individual artifacts are represented and each artifact is able to retain distinct attributes. In other words, rather than representing a generic code or design artifact, in the discrete event simulation model we represent a particular code module with a certain size, complexity, number of defects, and so forth. It is clear that this added detail is highly compatible with the structure and output of the metrics repository. The added detail also provides substantial scope for addressing a number of interesting questions, such as: How does the process react if only 20% of the modules contain 80% of the defects? How does the process react if a few code modules are very large or highly complex rather than uniform throughout? What is the effect of a high or low level of fan-out of code modules from design? In short, this representation enables us to look at important questions related to core project management issues. Using the updated information from the repository enables improved accuracy in the predictions, rather than basing the predictions upon initial project estimates of key model parameters (e.g., size and so forth). As will be discussed in Section 5, this approach supports the active planning and re-planning activities described earlier as part of the management controlling function.

5 THE CONTROL ACTIVITY
The purpose of this section is to illustrate the capability that can be achieved by linking the flexible metrics repository to the software process simulation model to provide real-time metrics combined with short-term performance prediction. The control function is different from the planning function in terms of focus, timing, and scope. It involves the on-going management of a project. It compares the likely final state of the project (actual performance) with respect to cost, quality, and schedule with the state called for by the original plan.
Deviations are reported to the project manager, who then may choose to take some specific action to bring the project back "in control".

We introduce the concept of predicted project performance (i.e., performance predicted by the model) as being "in control" or "out of control", meaning the project is or is not adhering to the plan within reasonable bounds for the performance measures under consideration (in our model these performance measures are cost, quality, and schedule). This is different from the definition typically used in statistical process control (SPC), where control limits are derived statistically from previous measurements of the process; in SPC, the control limits are determined independently of the desired outcome. We define outcome-based control limits (OBCLs), which are used as guides for assessing whether the project is "in control" from a project management perspective. OBCLs can correspond to internal performance goals or can reflect contract performance requirements.

The decision as to whether a project is "out of control" requires (a) constant monitoring of the current state of the project and (b) an objective, accurate, and meaningful way to compare the current state to the planned state. Software process simulation models address this issue very well. Not only can process models identify changes that will have a significant impact on the project, they can also distinguish deviations from the plan that will not affect the project. For instance, although coding on a particular module may begin late, it may have no noticeable impact on the project if it is not on the critical path.

By incorporating up-to-date metrics data, as described in Section 3, with the model described in Section 4, estimated parameters become more accurate, the time frame for the estimate is reduced, and more is known about the actual status of the project. In this mode, the model predicts the likely end-of-project cost, quality, and schedule performance, and this performance is compared to the outcome-based control limits. If the project is within the OBCLs, confidence is increased that the current approach will achieve the desired performance targets. On the other hand, if performance is outside of acceptable bounds, management is alerted that action needs to be taken. The project manager may have a large number of possible actions that could be taken to bring a project back into control; information on which options are most likely to be successful, and on their relative cost and risk, is essential.

When a significant deviation between planned and actual behavior is identified, the project manager can take several steps. First, he can attempt to determine whether the deviation is significant. We propose the use of a computer simulation of the project to help predict the final outcome of the project, given the deviation from the plan. The result of the simulation will help the manager decide if the project is truly in trouble.
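A sketch of an OBCL check against simulated outcomes (the paper does not specify a decision rule; the "fraction of runs within the limit" rule and all numbers below are assumptions for illustration):

```python
def obcl_check(predicted_costs, obcl_cost, confidence=0.8):
    """
    Decide whether predicted cost is "in control" with respect to an
    outcome-based control limit: here, "in control" means at least
    `confidence` of the simulated outcomes fall at or below the OBCL.
    """
    within = sum(1 for c in predicted_costs if c <= obcl_cost)
    fraction = within / len(predicted_costs)
    return fraction >= confidence, fraction

# Monte Carlo end-of-project cost predictions (person-months) from the model.
predicted = [410, 455, 498, 430, 520, 470, 445, 462, 488, 501]
in_control, frac = obcl_check(predicted, obcl_cost=500)
print(f"in control: {in_control} ({frac:.0%} of runs within the OBCL)")
```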
If the deviation suggests that the project may be in trouble, the project manager can change aspects of the simulation representing various process steps and explore the results of process changes in response to the control deviation. Potential actions to bring the process back under control can be analyzed and compared for effectiveness, risk, and cost.

6 CONCLUSIONS AND FUTURE WORK
The work presented here utilizes quantitative simulation models of the software development process in a new way that supports on-going management planning and control, and supports corporate efforts toward achieving Levels 4 and 5 of the CMM. This work is closely tied to our previous process modeling research, which predicts the impact of…
Surveying Engineering Specialized English Translation (Chinese Version)
E-mail: sxlong2013@foxmail.com; ychbai@
Received Sept. 24, 2013; Revision accepted Dec. 3, 2013; Crosschecked Dec. 20, 2013
Abstract: A riverbed topographic survey is one of the most important tasks in river model experiments. To improve measurement efficiency and solve the riverbed interference problem of traditional methods, this study discussed two measurement methods that use digital image-processing technology to obtain topographic information. A new and improved approach for calibrating camera radial distortion, derived from originally distorted images captured by our camera, was proposed to enhance the accuracy of image measurement. Based on perspective projection transformation, we described a 3D reconstruction method using multiple images, characterized by an approximated maximum likelihood estimation (AMLE) method that considers the first-order error propagation of the residual error to compute the transformation parameters. Moreover, a theoretical derivation of 3D topography from the grey information of a single image was carried out. With the diffuse illumination model, assuming that the ideal grey value and the topographic elevation value are positively correlated, we derived a novel closed-form formula relating 3D topographic elevation, grey value, grey gradient, and the solar direction vector. Experimental results showed that both methods offer practical advantages, though neither is perfect.
Key words: Riverbed topographic survey; Radial distortion calibration; Projection transformation; Grey information transformation
doi:10.1631/jzus.A1300317  Document code: A  CLC number: TV149.2
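Radial distortion calibration of the kind mentioned in this abstract typically assumes the standard polynomial model sketched below. This is generic background, not the paper's improved method; the coefficients k1 and k2 are placeholders to be estimated during calibration.

```python
import numpy as np

def radial_distort(xy_norm: np.ndarray, k1: float, k2: float) -> np.ndarray:
    """
    Apply the standard polynomial radial distortion model to normalized
    image coordinates (origin at the principal point):
        x_d = x_u * (1 + k1*r^2 + k2*r^4)
    """
    r2 = np.sum(xy_norm**2, axis=1, keepdims=True)
    return xy_norm * (1.0 + k1 * r2 + k2 * r2**2)

def radial_undistort(xy_dist: np.ndarray, k1: float, k2: float, iters: int = 10):
    """Invert the model by fixed-point iteration (common practice)."""
    xy = xy_dist.copy()
    for _ in range(iters):
        r2 = np.sum(xy**2, axis=1, keepdims=True)
        xy = xy_dist / (1.0 + k1 * r2 + k2 * r2**2)
    return xy
```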
Short English Essays about GPS (Reading)
GNSS (Global Navigation Satellite Systems) are widely used for positioning, timing, and navigation in many civilian and military fields.
The GPS system is based on one-way radio ranging and CDMA technology; among current satellite navigation systems it is the most technically representative and the most widely used.
Here is a carefully collected set of short English essays about GPS for your reading and study.

Short English essay about GPS: GPS Cell Phones

Motivated by the events of September 11, 2001, and problems with 911 calls from cellular phones, the FCC requires that by the end of 2005 all cell phone carriers must be able to trace the location of cell phone calls to within a range of no more than 100 meters.

Cell phones are already available with GPS technology installed. These systems are not the same as the GPS devices used by hikers, mariners, and drivers. Lower-cost models do not allow the user to enter data such as mapping software. All systems require a wireless network.

Cell phones with GPS technology use AGPS (Assisted Global Positioning System): "assisted" because the system uses both cell phone towers and satellites as location finders.

There are advantages and disadvantages to the new technology. The cost to implement the program will be passed on to consumers: cell phones will cost more. Privacy is a real concern with the general public, especially in this day of identity theft; it is a concern that unknown people will be able to access your location. Also, there is a possibility that the spam you are flooded with on your home computer will now be sent to your cell phone.

Using GPS cell phones to track people has some great advantages. Locating kids and family can be a blessing. Remember, though, that if you try to locate someone who is out of your calling area, you will be charged extra.

The obvious benefit for the consumer is the issue of emergency aid, and that was the catalyst for this whole idea of GPS cell phones. A 911 call that can be quickly located, emergency roadside assistance, locating persons missing in remote areas; the list goes on. If coverage is available, then GPS cell phones save lives.

Many carriers already have GPS cell phones available. You can buy the basic model for emergency tracking, or you can pay for the technology that turns the cell phone into a sophisticated mapping/PDA system. Problems are still an issue with the advanced features: the more you use them, the greater the drain on the battery. Increasing battery size also increases the cell phone size, and that is a problem for most consumers, who want ever smaller, lighter devices to carry around. At this time Japan seems to have the edge on developing the high-end miniature GPS cell phone.

Sacrificing privacy for safety is the issue, and I suspect that it would only take one positive outcome in an emergency situation to make the decision for you.

As the systems become more and more refined, camera and PDA capabilities are being included in the phone itself. Developments in GPS cell phone technology are continuing. If programmers can solve the issues of privacy, then the potential for GPS cell phones is incredible. It will no longer be an issue of "Can you hear me now?" Rather, the question will be, "Can you find me now?"

Anne King is a sports and recreation writer in Boise, Idaho. For more information on GPS cell phones, visit Maps GPS, which also provides practical information on GPS and maps that everyone can use. The website includes product reviews and a maps/GPS glossary.

Short English essay about GPS: Mysterious dark matter may be detectable with GPS satellites

The everyday use of a GPS device might be to find your way around town or even navigate a hiking trail, but for two physicists, the Global Positioning System might be a tool for directly detecting and measuring dark matter, so far an elusive but ubiquitous form of matter responsible for the formation of galaxies.
Andrei Derevianko, of the University of Nevada, Reno, and his colleague Maxim Pospelov, of the University of Victoria and the Perimeter Institute for Theoretical Physics in Canada, have proposed a method for a dark-matter search with GPS satellites and other atomic clock networks that compares times from the clocks and looks for discrepancies.

"Despite solid observational evidence for the existence of dark matter, its nature remains a mystery," said Derevianko, a professor in the College of Science at the University. "Some research programs in particle physics assume that dark matter is composed of heavy-particle-like matter. This assumption may not hold true, and significant interest exists for alternatives. Modern physics and cosmology fail dramatically in that they can only explain 5 percent of mass and energy in the universe in the form of ordinary matter, but the rest is a mystery."

There is evidence that dark energy is about 68 percent of the mystery mass and energy. The remaining 27 percent is generally acknowledged to be dark matter, even though it is not visible and eludes direct detection and measurement.

"Our research pursues the idea that dark matter may be organized as a large gas-like collection of topological defects, or energy cracks," Derevianko said. "We propose to detect the defects, the dark matter, as they sweep through us with a network of sensitive atomic clocks. The idea is, where the clocks go out of synchronization, we would know that dark matter, the topological defect, has passed by. In fact, we envision using the GPS constellation as the largest human-built dark-matter detector."

Their research was well received by the scientific community when the theory was presented at scientific conferences this year, and their paper on the topic appears today in the online version of the scientific journal Nature Physics, ahead of the print version.

Short English essay about GPS: With the widespread use of technologies like GPS and Internet-connected cameras

With the widespread use of technologies like GPS and Internet-connected cameras, parents can now keep an eye on their children wherever they are.

The combination of high-tech tools and highly protective parents has set off the creation of a GPS-equipped wristwatch that allows parents to listen in real time to what their children are doing. The device has caused complaints from teachers since some parents began equipping their children with the watch so they can listen in on what happens at school.

A teacher surnamed You at a well-known primary school in Pudong New Area discovered someone was eavesdropping on her lessons after a parent of one of her students posted something she had said online while class was still in session. You soon found out the parent had given her child a watch that could transmit the sounds in the classroom to her mobile phone. Soon, other parents decided to equip their own children with the devices.

You was annoyed to find out her students' parents were spying on her; it made her feel as if they distrusted her. After You told her fellow teachers what had happened, they discovered that some of their students were also wearing the watches.
Research on Visual Inspection Algorithms for Defects in Textured Objects (Outstanding Graduate Thesis)
Abstract
In highly competitive industrial automation, machine vision plays a pivotal role in product quality control, and its application to defect inspection has become increasingly common. Compared with conventional inspection techniques, automated visual inspection systems are more economical, faster, more efficient, and safer. Textured objects are ubiquitous in industrial production: substrates used in semiconductor assembly and packaging, light-emitting diodes, printed circuit boards in modern electronic systems, and cloth and fabrics in the textile industry can all be regarded as objects with texture features. This thesis focuses on defect inspection techniques for textured objects, aiming to provide efficient and reliable algorithms for their automated inspection.

Texture is an important feature for describing image content, and texture analysis has been successfully applied to texture segmentation and texture classification. This research proposes a defect inspection algorithm based on texture analysis and reference comparison. The algorithm tolerates image registration errors caused by object distortion and is robust to the influence of texture. It is designed to provide rich and physically meaningful descriptions of the detected defect regions, such as their size, shape, brightness contrast, and spatial distribution. Moreover, when a reference image is available, the algorithm can be used to inspect both homogeneously and non-homogeneously textured objects, and it also achieves good results on non-textured objects.

Throughout the inspection process, we adopt steerable-pyramid texture analysis and reconstruction. Unlike traditional wavelet-based texture analysis, we add tolerance-control algorithms in the wavelet domain to handle object distortion and texture influence, thereby achieving tolerance of object distortion and robustness to texture. Finally, steerable-pyramid reconstruction ensures that the physical properties of the defect regions are recovered accurately. In the experimental stage, we tested a series of images of practical application value. The results show that the proposed defect inspection algorithm for textured objects is efficient and easy to implement.
Keywords: defect detection, texture, object distortion, steerable pyramid, reconstruction
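A highly simplified sketch of the reference-comparison idea (assumptions: a Gaussian pyramid stands in for the thesis's steerable pyramid, the per-level tolerance is a plain threshold rather than the thesis's tolerance-control algorithms, and image dimensions are assumed divisible by 2**(levels-1)):

```python
import numpy as np
from scipy import ndimage

def pyramid(img, levels=3):
    """Gaussian pyramid as a simplified stand-in for a steerable pyramid."""
    out = [img.astype(float)]
    for _ in range(levels - 1):
        out.append(ndimage.gaussian_filter(out[-1], sigma=1.0)[::2, ::2])
    return out

def defect_map(test, reference, thresh=30.0):
    """Reference comparison: flag pixels whose multi-scale residual is large."""
    mask = np.zeros(test.shape, dtype=bool)
    for t, r in zip(pyramid(test), pyramid(reference)):
        resid = np.abs(t - r) > thresh
        # Upsample each level's detections back to full resolution.
        zoom = (test.shape[0] / resid.shape[0], test.shape[1] / resid.shape[1])
        mask |= ndimage.zoom(resid.astype(float), zoom, order=0) > 0.5
    return mask
```

Connected components of the resulting mask could then be measured for size, shape, and contrast, which is the kind of physically meaningful defect description the abstract calls for.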
Fluke 54 II Calibrator User Guide Manual
Changing the Temperature Setpoint (cont'd…)
Pressing and holding a key will cause the setpoint temperature to advance more quickly to a desired value. Three scanning speeds are provided: slow, medium, and fast. The lower and upper setpoint limits are 23°F and 257°F, respectively. While the minimum and maximum settings are changeable, changing them is not advised, as it may result in damage to the calibrator.

Heat-Up/Cool-Down Transition Time
Heating Times:
  -5°C (23°F) to 23°C (73.4°F): 1 minute
  23°C (73.4°F) to 100°C (212°F): 3 minutes
  100°C (212°F) to 125°C (257°F): 5 minutes
Cooling Times:
  125°C (257°F) to 100°C (212°F): 1 minute
  100°C (212°F) to 0°C (32°F): 8 minutes
  0°C (32°F) to -5°C (23°F): 4 minutes

Testing/Calibrating Temperature Probes
When calibrating probes at different temperature points, start at the lowest temperature and work up to the highest temperature. Do not jump up and down from a very hot temperature to a relatively cooler temperature; this will reduce the time it takes for the probe well to re-stabilize after you change the setpoint. When placing probes into the well, make sure the probe tip goes all the way down to the bottom of the probe well, the full 4.5". This will ensure the highest degree of accuracy possible when taking your reading. After calibrating each probe, remove it from the well and place it on a protected surface to cool. If you have another probe to calibrate, place it into the probe well and allow the calibrator a few minutes to re-stabilize.

CL1500 Bench-Top Dry Block Calibrator
It is the policy of OMEGA Engineering, Inc. to comply with all worldwide safety and EMC/EMI regulations that apply. OMEGA is constantly pursuing certification of its products to the European New Approach Directives. OMEGA will add the CE mark to every appropriate device upon certification. The information contained in this document is believed to be correct, but OMEGA accepts no liability for any errors it contains, and reserves the right to alter specifications without notice. WARNING: These products are not designed for use in, and should not be used for, human applications. (MQS4695/0417)

Overheat Reset Switch
If the unit is operated at high temperatures in elevated ambient temperatures, an overheat condition may occur. In an overheat situation a mechanical reset switch inside the unit will pop and open the heater circuit. The controller will still have power. While the controller will be demanding heat from the heater, the process temperature will fall or rise continuously until it equalizes with room temperature. If an overheat condition occurs, let the unit cool off for one hour. If this does not correct the problem, contact the factory. Do not turn the calibrator off until completing the cool-down procedure. When you have finished working with the calibrator, you must cool the unit down to ambient temperature if you intend to move it and/or return it to storage.

Serial Cable Connection
The CL1500 features a serial port that allows bi-directional data transfer via a three-conductor cable consisting of signal ground, receive input, and transmit output. It is recommended that less than fifty feet of cable be used between the computer and this instrument. For detailed information on RS232 communication and software, refer to Section 4 in the User's Guide (/manuals/manualpdf/M4695.pdf).
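As a sketch only, here is how a computer might read from the CL1500's serial port using pyserial. The port name, baud rate, and framing below are placeholders: the instrument's actual communication settings and command set are specified in Section 4 of the User's Guide and are not reproduced here.

```python
import serial  # pyserial

# Placeholder settings; consult the CL1500 User's Guide (Section 4)
# for the instrument's actual baud rate, framing, and command set.
with serial.Serial(port="COM1", baudrate=9600, timeout=1.0) as port:
    line = port.readline()  # read one response line, if any arrives
    if line:
        print(line.decode("ascii", errors="replace").strip())
```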
WARRANTY/DISCLAIMER
OMEGA ENGINEERING, INC. warrants this unit to be free of defects in materials and workmanship for a period of 13 months from date of purchase. OMEGA's WARRANTY adds an additional one (1) month grace period to the normal one (1) year product warranty to cover handling and shipping time. This ensures that OMEGA's customers receive maximum coverage on each product.

If the unit malfunctions, it must be returned to the factory for evaluation. OMEGA's Customer Service Department will issue an Authorized Return (AR) number immediately upon phone or written request. Upon examination by OMEGA, if the unit is found to be defective, it will be repaired or replaced at no charge. OMEGA's WARRANTY does not apply to defects resulting from any action of the purchaser, including but not limited to mishandling, improper interfacing, operation outside of design limits, improper repair, or unauthorized modification. This WARRANTY is VOID if the unit shows evidence of having been tampered with or shows evidence of having been damaged as a result of excessive corrosion; or current, heat, moisture or vibration; improper specification; misapplication; misuse or other operating conditions outside of OMEGA's control. Components in which wear is not warranted include but are not limited to contact points, fuses, and triacs.

OMEGA is pleased to offer suggestions on the use of its various products. However, OMEGA neither assumes responsibility for any omissions or errors nor assumes liability for any damages that result from the use of its products in accordance with information provided by OMEGA, either verbal or written. OMEGA warrants only that the parts manufactured by the company will be as specified and free of defects. OMEGA MAKES NO OTHER WARRANTIES OR REPRESENTATIONS OF ANY KIND WHATSOEVER, EXPRESSED OR IMPLIED, EXCEPT THAT OF TITLE, AND ALL IMPLIED WARRANTIES INCLUDING ANY WARRANTY OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE HEREBY DISCLAIMED.

LIMITATION OF LIABILITY: The remedies of purchaser set forth herein are exclusive, and the total liability of OMEGA with respect to this order, whether based on contract, warranty, negligence, indemnification, strict liability or otherwise, shall not exceed the purchase price of the component upon which liability is based. In no event shall OMEGA be liable for consequential, incidental or special damages.

CONDITIONS: Equipment sold by OMEGA is not intended to be used, nor shall it be used: (1) as a "Basic Component" under 10 CFR 21 (NRC), used in or with any nuclear installation or activity; or (2) in medical applications or used on humans. Should any Product(s) be used in or with any nuclear installation or activity, medical application, used on humans, or misused in any way, OMEGA assumes no responsibility as set forth in our basic WARRANTY/DISCLAIMER language, and, additionally, purchaser will indemnify OMEGA and hold OMEGA harmless from any liability or damage whatsoever arising out of the use of the Product(s) in such a manner.

RETURN REQUESTS/INQUIRIES
Direct all warranty and repair requests/inquiries to the OMEGA Customer Service Department. BEFORE RETURNING ANY PRODUCT(S) TO OMEGA, PURCHASER MUST OBTAIN AN AUTHORIZED RETURN (AR) NUMBER FROM OMEGA'S CUSTOMER SERVICE DEPARTMENT (IN ORDER TO AVOID PROCESSING DELAYS). The assigned AR number should then be marked on the outside of the return package and on any correspondence.

FOR WARRANTY RETURNS, please have the following information available BEFORE contacting OMEGA:
1. Purchase Order number under which the product was PURCHASED,
2. Model and serial number of the product under warranty, and
3. Repair instructions and/or specific problems relative to the product.

FOR NON-WARRANTY REPAIRS, consult OMEGA for current repair charges. Have the following information available BEFORE contacting OMEGA:
1. Purchase Order number to cover the COST of the repair or calibration,
2. Model and serial number of the product, and
3. Repair instructions and/or specific problems relative to the product.

OMEGA's policy is to make running changes, not model changes, whenever an improvement is possible. This affords our customers the latest in technology and engineering. OMEGA is a registered trademark of OMEGA ENGINEERING, INC. © Copyright 2017 OMEGA ENGINEERING, INC. All rights reserved. This document may not be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine-readable form, in whole or in part, without the prior written consent of OMEGA ENGINEERING, INC.

Available Models
  CL1500 (Standard): hole size, see Fig. 2
  CL1500M (Metric): hole size, see Fig. 2
  * Add suffix -230 for 230 Vac models.

(Figure: Probe Wells, with dimensions.)

Mounting
Mount the unit on a bench, table top, or shelf in a horizontal position, and operate it at least ten inches from any air obstructions to the fan, front panel, rear panel, bottom, and top of the unit, in an ambient environment between the specified 0 to 45°C (32 to 113°F).

Ambient Temperature
The calibration block of the CL1500 can achieve any temperature within the specified range of -5 to 125°C (+23 to 257°F) when operated in a normal ambient temperature 23°C (72°F) environment. As long as the ambient temperature does not exceed 25°C (75°F), the block will achieve its lower limit temperature of -5°C (23°F). The minimum block temperature the unit can achieve is proportionally higher with increased ambient temperature: an increase in ambient temperature of 1°C (1.8°F) above 23°C (72°F) raises the minimum probe well temperature by approximately 0.8°C (1.4°F).

Using This Quick Start Manual
Use this Quick Start Manual with your CL1500 Series Bench-Top Dry Block Calibrator for easy installation and basic operation. For detailed information, refer to the User's Guide (Manual Number M4695).

PRECAUTIONS:
• Follow all safety precautions and operating instructions outlined in this quick start and the accompanying User's Guide.
• Never leave your calibrator unattended when in use.
• Keep out of reach of all children.
• Never touch the probe well or probes when hot without proper protection.
• Never place any objects other than temperature probes in the well.
• Do not operate in flammable or explosive environments.
• Never operate with a power cord other than the one provided with your unit.
• Turn the unit off and disconnect the main power cord before attempting any maintenance or fuse replacement.
• Never disconnect the main power cord or main power source when the unit is still hot.
• Do not connect or operate this unit with a non-grounded, non-polarized outlet or power source.
• This unit is intended for indoor use only. Avoid exposure to moisture or high humidity. Never operate the unit outside.
• Do not return your unit to storage when hot; allow the unit to cool down to ambient temperature.

General Information
The Model CL1500 is a portable, rugged, bench-top, hot/cold dry block calibration source with a built-in precision PID digital controller.
The calibrator is used to test and calibrate temperature probes of various diameters. The calibration block has 6 holes to accommodate probes of varying diameter. It is available in both standard and metric versions. It can be set to any temperature between -5 and 125°C (+23 to 257°F). (Figure: The Effect of Increased Ambient Temperature on Operating Temperature.)

Power Connection
International (230 Vac~, 50/60 Hz models only): On "-230Vac" models, an international-style power cord with the proper color code and approvals is provided with stripped wire ends for connection to the proper connector used in your country or local area; this connector is not provided. Make sure when installing your connector to the wire ends that the ground connection has been made. Certification: CE (CL1500 ~230 Vac only). Hazardous voltages are present inside the calibrator's enclosure when connected to the AC mains supply; do not remove the top or bottom cover of the calibrator for any reason.

Front Panel Controls and Indicators
Process Temperature: This field displays the current temperature of the calibration block.
Setpoint Temperature: This field displays the desired calibration block temperature. Once the block reaches this desired temperature, both displays will read the same value.
Parameter/Access Key: Used to index through parameters or to access menu levels; press to save settings and exit a menu level.
Raise Key: Used to scroll up through available parameter settings, increase values, or change menu levels (hold for fast-step progression).
Lower Key: Used to scroll down through available parameter settings, decrease values, or change menu levels (hold for fast-step progression).
Mode Key: This key is inactive.

Changing the Temperature Setpoint
The CL1500's upper display indicates the calibration block temperature, known as the Process Variable (PV), while the lower display indicates the programmed setpoint, known as the Setpoint Variable (SV). Changes to the setpoint, units of measure, and communication settings are made via the Raise and Lower keys.

(Front- and back-panel labels: Raise Key, Parameter/Access Key, Calibration, Fan Vent, AC Power Input.)
CALIBRATING SOFTWARE COST MODELS TO DEPARTMENT OF DEFENSE DATABASES - A REVIEW OF TEN STUDIES

Daniel V. Ferens, Air Force Research Laboratory
David S. Christensen, University of West Florida
1 February, 1998

ABSTRACT
There are many sophisticated parametric models for estimating the size, cost, and schedule of software projects. In general, the predictive accuracy of these models is no better than within 25 percent of actual cost or schedule, about one half of the time (Thibodeau, 1981; IIT Research Institute, 1988). Several authors assert that a model's predictive accuracy can be improved by calibrating (adjusting) its default parameters to a specific environment (Kemerer, 1987; Van Genuchten and Koolen, 1991; Andolfi et al., 1996). This paper reports the results of a long-term project that tests this assertion.

From 1995 to 1997, masters students at the Air Force Institute of Technology calibrated selected software cost models to databases provided by two Air Force product centers. Nine parametric software models (REVIC, SASET, PRICE-S, SEER-SEM, SLIM, SOFTCOST, CHECKPOINT, COCOMO II, and SAGE) were calibrated. Data from the product centers were extracted and stratified for specific software estimation models, calibrated to specific environments, and validated using hold-out samples. The project was limited to software development cost or effort, although the same procedures could be used for size, schedule, or other estimating applications.

Results show that calibration does not always improve a model's predictive accuracy. Although one model which uses function points did show significantly improved accuracy (Mertes, 1996), the results could not be replicated on another database (Marzo, 1997).

INTRODUCTION
Software costs are continuing to rise in the Department of Defense (DOD) and other government agencies (Mosemann, 1996). To better understand and control these costs, DOD agencies often use parametric cost models for software development cost and schedule estimation. However, the accuracy of these models is poor when the default values embedded in the models are used (Boehm, 1991; Brooks, 1975). Even after the software cost models are calibrated to DOD databases, most have been shown to be accurate to within only 25 percent of actual cost or schedule, about half the time. For example, Thibodeau (1981) reported the accuracy of early versions of the PRICE-S and SLIM models to be within 25 and 30 percent, respectively, on military ground programs. The IIT Research Institute (1988) reported similar results on eight Ada programs, with the most accurate model within only 30 percent of actual cost or schedule, 62 percent of the time.

Further, the level of accuracy reported by these studies is likely overstated because most studies have failed to use hold-out samples to validate the calibrated models. Instead of reserving a sample of the database for validation, the same data used to calibrate the models were used to assess accuracy (Ourada and Ferens, 1992). In a study using 28 military ground software data points, Ourada (1991) showed that failure to use a hold-out sample overstates a model's accuracy. One half of the data was used to calibrate the Air Force's REVIC model. The remaining half was used to validate the calibrated model.
REVIC was accurate to within 30 percent, 57 percent of the time on the calibration subset, but only 28 percent of the time on the validation subset. Validating on a hold-out sample is clearly more relevant because new programs being estimated are, by definition, not in the calibration database. The purpose of this study is to calibrate and properly evaluate the accuracy of selected software cost estimation models using hold-out samples. The expectation is that calibration improves the estimating accuracy of a model (Kemerer, 1987; Van Genuchten and Koolen, 1991; Andolfi et al., 1996).

THE DECALOGUE PROJECT
This paper describes the results of a long-term project at the Air Force Institute of Technology to calibrate and validate selected software cost estimation models. Software databases were provided by two Air Force product centers: the Space and Missile Systems Center (SMC) and the Electronic Systems Center (ESC). The project has been nicknamed the "Decalogue project" because ten masters theses extensively document the procedures and results of calibrating each software cost estimation model.

The Decalogue project is organized into three phases, corresponding to when the theses were completed: five theses were completed in 1995, two in 1996, and three in 1997. Lessons learned during each phase were applied to the next phase. A brief description of each phase and its results follows.

PHASE I
Five theses were completed in 1995. Each thesis student calibrated a specific software cost model (Revised Enhanced Intermediate Version of COCOMO (REVIC), Software Architecture Sizing and Estimating Tool (SASET), PRICE-S, SEER-SEM, and SLIM) using the SMC software database. REVIC and SASET are owned by the government; the remaining models are privately owned.

The SMC database was developed by Management Consulting and Research, and contains detailed historical data for over 2,500 software programs (MCR, 1995). The database includes inputs for REVIC, SASET, PRICE-S, and SEER-SEM for some of the 2,500 projects, but none specifically for SLIM.

The details of each thesis project are described in the separate thesis reports (1995) of Weber, Vegas, Galonsky, Rathmann, and Kressin; each is available from the Defense Technical Information Center. Additional detail is also available from Ferens and Christensen (1997). Here, only the highlights are provided, including a short description of the software models, the calibration methodology, and the results.

REVIC. This model is the Air Force Cost Analysis Agency's computerized variant of the Constructive Cost Model (COCOMO), developed by Dr. Barry Boehm. REVIC is calibrated the same way as COCOMO (Boehm, 1981). The nominal intermediate equations for REVIC are of the form E = A(KDSI)^B, where E is effort in person-months, KDSI is thousands of delivered source instructions, and A and B are the constants to be calibrated. The equations can be modified by calibrating A, B, or both A and B. In calibrating the model, the product of nineteen effort adjustment factors is computed for each program and used to adjust for program variation. A large database is highly desirable if both A and B are calibrated.
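A minimal sketch of this calibration (not the REVIC tool itself): divide actual effort by the product of the effort multipliers to obtain nominal effort, then fit A and B by least squares in log space, since E = A(KDSI)^B is linear in the logs. The data below are made-up placeholders.

```python
import numpy as np

# Placeholder historical data: (KDSI, actual person-months, product of the
# nineteen effort adjustment factors) for each completed program.
kdsi = np.array([12.0, 45.0, 80.0, 150.0, 32.0])
effort = np.array([55.0, 260.0, 520.0, 1150.0, 170.0])
eaf = np.array([1.10, 0.95, 1.20, 1.05, 0.88])

nominal = effort / eaf                 # remove program-specific adjustments
B, logA = np.polyfit(np.log(kdsi), np.log(nominal), 1)
A = np.exp(logA)
print(f"calibrated: E = {A:.2f} * KDSI^{B:.2f} (times the EAF product)")
```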
PRICE-S. As discussed in the User's Manual (PRICE-S, 1993), this model is calibrated by running it in the ECIRP ("PRICE" backwards) mode. In this mode, the actual cost or effort, and all inputs except the model's productivity factor (PROFAC), are entered into the model. The inputs include program size, language, application mix, hardware utilization, integration difficulty, platform, and several complexity factors. The output is a value of PROFAC for each project analyzed. PROFAC, which captures the skill levels, experience, efficiency, and productivity of an organization, is a very sensitive parameter; small changes in PROFAC result in relatively large effort estimation differences.

SEER-SEM. There are several versions of SEER-SEM; Rathmann (1995) used version 4.0 for his thesis project. According to the User's Manual (SEER-SEM, 1994), this version of the model can be calibrated in either of two ways. The first way is to calibrate an "effective technology rating" (ETR), a parameter that reflects relative productivity. To calibrate the ETR, the user must enter values for size, effort or schedule, and "knowledge base" parameters. Knowledge base parameters include information about the platform, application, acquisition method, development method, and development standard used. Instead of calibrating the ETR, the user may calibrate effort and schedule adjustment factors from historical data. These factors are multipliers for which the nominal value is 1.0; factors greater than 1.0 result in longer schedules and greater effort. The factors, like the ETR, can be included in a custom knowledge base for future programs. While the factors are easier to understand and work with than the ETR, more input data are needed. Rathmann (1995) used this latter method in his thesis.

SLIM. Version 3.2 of the Software Life Cycle Model is calibrated by entering actual size, effort, schedule, and number of defects on historical programs. The model outputs a "productivity index" (PI) and a "manpower buildup index" (MBI) for each program. Since the user cannot directly enter MBI into the model, the calibrated PI is of most interest to the user. Like PROFAC in PRICE-S, the PI, which measures the total development environment, is also very sensitive.

SASET. This model was developed by Martin Marietta under contract to the United States Navy and Air Force (Ratliff, 1993). A calibration tool, the Database Management System Calibration Tool (Harbert, et al., 1992), is available with the model. The tool adjusts the model's "productivity calibration constants" (PCCs) for the type of software (systems, application, or support) using the size, effort, and complexity of past programs. The calibration can be further refined by adjusting for different classes of software (avionics, ground, manned space, etc.). As usual, there are default values for these constants if the user cannot calibrate the model.

Calibration rules. The five models were calibrated to a portion of the SMC database. The database was divided into the following subsets: military ground, avionics, unmanned space, missiles, and military mobile. The military ground subset was further divided into command and control programs and signal processing programs. Each subset was then divided into calibration and hold-out samples using three rules:

(1) If there were fewer than nine data points, the subset was considered too small for a hold-out sample and could not be validated.
(2) If there were between nine and eleven data points, eight were randomly selected for calibration and the rest were used for validation.
(3) If there were twelve or more data points, two-thirds were randomly selected for calibration and the rest were used for validation.
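A minimal sketch of these three splitting rules as code (the function name and list representation are illustrative, not from the study):

```python
import random

def split_subset(programs):
    """Partition one database subset per the three Decalogue rules:
    fewer than 9 points: no hold-out sample; 9-11 points: 8 calibrate,
    rest validate; 12 or more: two-thirds calibrate, one-third validate."""
    n = len(programs)
    if n < 9:
        return programs, []                 # too small to validate
    shuffled = random.sample(programs, n)   # random selection
    k = 8 if n <= 11 else round(2 * n / 3)
    return shuffled[:k], shuffled[k:]
```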
Results. Table 1 summarizes the results of Phase I. Due to an oversight, not all five theses reported RRMS; thus, only MMRE and PRED(.25) are shown. "Validation sample size" is the number of data points in the hold-out sample used for validation. For some models, the military ground subsets (signal processing and command and control) were combined into an overall military ground subset to obtain a sufficiently large sample size for validation.

TABLE 1
REVIC, SASET, PRICE-S, SEER-SEM, AND SLIM CALIBRATION RESULTS (1995)

                                   Validation   Pre-Calibration    Post-Calibration
Model     Data Set                Sample Size   MMRE   PRED(.25)   MMRE    PRED(.25)
REVIC     Military Ground               5        1.21   0           0.86    0
          Unmanned Space                4        0.43   0.50        0.31    0.50
SASET     Avionics                      1        1.76   0           0.22*   1.00*
          Military Ground              24       10.04   0           0.58    0
PRICE-S   Military Ground              11        0.30   0.36        0.29    0.36
          Unmanned Space                4        0.34   0.50        0.34    0.50
SEER-SEM  Avionics                      1        0.46   0           0.24*   1.00*
          Command and Control           7        0.31   0.43        0.31    0.29
          Signal Processing             7        1.54   0.29        2.10    0.43
          Military Mobile               4        0.39   0.25        0.46    0.25
SLIM      Command and Control           3        0.62   0           0.67    0
* Met Conte's criteria

As shown in Table 1, most of the calibrated models were inaccurate. In the two instances where the calibrated models met Conte's criteria, only one data point was used for validation. Thus, these results are not compelling evidence that calibration improves accuracy. In fact, in some cases the calibrated model was less accurate than the model before calibration.

These results may be due in part to the nature of the databases available to DOD agencies. In the SMC database, the developing contractors are not identified. Therefore, the data may represent an amalgamation of many different development processes, programming styles, etc., which are consistent within contracting organizations but vary widely across contractors. Furthermore, because of inconsistencies in software data collection among different DOD efforts, actual cost data and other data may be inconsistent and unreliable.[1]

[1] This problem was addressed in Phase III of the Decalogue project, where the ESC database was used. The ESC database contains an identifier for each contributing contractor.

PHASE II.

In 1996 two additional models, SoftCost-OO and CHECKPOINT, were calibrated by two masters students. Details are provided in their thesis reports (Southwell, 1996; Mertes, 1996). A brief description of each model, the calibration procedures, and the results of Phase II follow.

SoftCost-OO. The SoftCost Object-Oriented (OO) model is a commercial model originally developed by Don Reifer and marketed by Resource Calculations, Inc. The model is a modification of SoftCost-Ada, developed by Reifer during the late 1980s. In addition to size, SoftCost-OO uses twenty-eight parameters in four categories (product, process, personnel, and project) to adjust effort and schedule for a particular program. Key parameters include system architecture, application type, OO program experience, analyst capability, and reuse costs and benefits. The model is calibrated by simultaneously adjusting two factors: an average work force factor, and a productivity factor in thousands of executable source lines of code per person-month. Currently, these factors must be calibrated off-line using an electronic spreadsheet; an on-line capability is envisioned for the future (SoftCost-OO, 1994).
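The off-line, spreadsheet-style calibration described above amounts to searching the two factors for the values that best reproduce historical effort. The sketch below illustrates this with a coarse grid search minimizing MMRE. The effort function is a stand-in (SoftCost-OO's actual equations are proprietary and not reproduced here), and the historical values are invented.

```python
# Grid search over two calibration factors, minimizing MMRE on
# historical programs. `estimate` is a hypothetical stand-in for the
# real model; data values are illustrative.
def estimate(kesloc, workforce, productivity):
    # Stand-in effort model: person-months from size and the two factors.
    return (kesloc / productivity) * workforce

history = [(50.0, 260.0), (120.0, 700.0), (30.0, 150.0)]  # (KESLOC, PM)

def mmre(workforce, productivity):
    errs = [abs(estimate(size, workforce, productivity) - pm) / pm
            for size, pm in history]
    return sum(errs) / len(errs)

# Search workforce 0.5..3.0 and productivity 0.1..5.0 in steps of 0.1.
best = min(((w / 10, p / 10) for w in range(5, 31) for p in range(1, 51)),
           key=lambda wp: mmre(*wp))
print("workforce factor %.1f, productivity %.1f KESLOC/PM" % best)
```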
CHECKPOINT. The CHECKPOINT model is a commercial model marketed by Software Productivity Research (SPR) and is based on the work of Capers Jones. It is unique among the models calibrated in this study because the internal algorithms are based on function points instead of lines of code.[2] If a user inputs lines of code and language, the model converts lines of code to function points using preset values for the language specified. A user can obtain a basic estimate by specifying (1) the nature and scope of the project, (2) the project class and type, and (3) complexity ratings for design, code, and data. The complexity ratings are used to adjust the function point count. A user may also enter values for more than one hundred detailed parameters in five categories (process, technology, personnel, environment, and special factors). In addition, a user can calibrate CHECKPOINT by creating templates from historical programs (SPR, 1993). These templates are used to set default values for selected input parameters on new programs.

[2] Function points are weighted sums of five attributes or functions of a software program (inputs, outputs, inquiries, interfaces, and master files). Based on their analysis of more than 30 data processing programs, Albrecht and Gaffney (1983) report that function points may be superior to SLOC as predictors of software development cost or effort.
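For readers unfamiliar with the metric, an unadjusted function point count is simply the weighted sum the footnote describes. The weights below are the classic Albrecht average-complexity weights; both they and the sample counts are illustrative, and CHECKPOINT's internal counting rules are more elaborate.

```python
# Simple (unadjusted) function point count, assuming the classic
# Albrecht weights for average complexity. Weights and sample counts
# are illustrative, not taken from the study.
ALBRECHT_WEIGHTS = {
    "inputs": 4,
    "outputs": 5,
    "inquiries": 4,
    "master_files": 10,
    "interfaces": 7,
}

def function_points(counts):
    """counts maps each attribute name to how many the program has."""
    return sum(ALBRECHT_WEIGHTS[k] * v for k, v in counts.items())

print(function_points({"inputs": 24, "outputs": 18, "inquiries": 10,
                       "master_files": 6, "interfaces": 4}))  # -> 314
```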
Calibration rules. With a few exceptions related to the subsets calibrated and the hold-out sample rules, the two models were calibrated and validated using the same methods used in Phase I. A seventh subset of the SMC database, ground in-support-of-space (designated "Ground Support" in Tables 2, 3, and 4), was used for both models. For SoftCost-OO, three additional subsets for European Space Agency programs were added, since SoftCost-OO is used extensively in Europe. For CHECKPOINT, the missile subset was not used, and no European programs were used. In addition, data were obtained on Management Information System (MIS) programs written in COBOL from a local contractor, and a subset for COBOL programs was added to determine whether stratification by language would provide better results. Finally, the rules for determining the sizes of the calibration and hold-out samples were changed to avoid the problem of "single-point validations" experienced in Phase I. Specifically, if there were eight or more data points in a subset, half were used for calibration and the other half for validation; if there were fewer than eight data points, that subset was not used.

Results. The following three tables show the results of calibrating each model. For SoftCost-OO (Table 2), calibration almost always improved the accuracy of the model, although none of the subsets met Conte's criteria. For CHECKPOINT, all but one subset met the criteria when predicting development effort (Table 3), but none met the criteria when predicting schedule (Table 4).

TABLE 2
SOFTCOST CALIBRATION RESULTS (1996)

                           Validation   Pre-Calibration           Post-Calibration
Data Set                  Sample Size   MMRE   RRMS   PRED(.25)   MMRE   RRMS   PRED(.25)
Ground Support                 15       2.73   3.13   0.13        1.80   1.96   0.20
Ground Support (Europe)        25       3.05   3.61   0.08        0.67   0.84   0.36
Unmanned Space                  5       0.56   1.05   0.20        0.48   0.92   0.20
Unmanned Space (Europe)         7       1.79   0.79   0.14        1.27   0.84   0.14
Avionics                        5       0.71   0.76   0.20        0.85   0.56   0.20
Command and Control             6       1.90   3.43   0.17        0.52   0.87   0.50
Signal Processing               9       0.43   0.61   0.11        0.28   0.64   0.44
Military Mobile                 5       0.63   0.51   0.20        0.42   0.40   0.20

Since CHECKPOINT uses function points as a measure of size, they were used when sufficient data points were available for the subsets; otherwise, source lines of code (SLOC) were used. For the three function point effort subsets, there was substantial improvement in accuracy after the model was calibrated, especially for the MIS COBOL subset. Except for the Command and Control subset, the SLOC effort subsets met Conte's criteria both before and after calibration. Although calibration did not significantly improve accuracy for these subsets (primarily because SLOC are an output of CHECKPOINT, not an input), the accuracy was very good even without calibration. The CHECKPOINT results for effort estimation are especially noteworthy because the inputs for this model were not even considered when the SMC database was developed.

TABLE 3
CHECKPOINT CALIBRATION RESULTS (EFFORT, 1996)

                           Validation   Pre-Calibration           Post-Calibration
Data Set                  Sample Size   MMRE   RRMS   PRED(.25)   MMRE   RRMS   PRED(.25)
Effort - Function Points
MIS - COBOL                     6       0.54   0.10   0.67        0.02*  0.01*  1.00*
Military Mobile - Ada           4       1.38   0.41   0.25        0.19*  0.06*  0.75*
Avionics                        4       0.82   0.68   0.50        0.16*  0.11*  0.75*
Effort - SLOC
Command and Control             6       0.19*  0.14*  0.50        0.16*  0.16*  0.50
Signal Processing              10       0.09*  0.08*  1.00*       0.09*  0.08*  1.00*
Unmanned Space                  5       0.05*  0.05*  1.00*       0.04*  0.06*  1.00*
Ground Support                  4       0.05*  0.06*  1.00*       0.05*  0.06*  1.00*
COBOL Programs                  4       0.05*  0.05*  1.00*       0.05*  0.05*  1.00*
* Met Conte's criteria

TABLE 4
CHECKPOINT CALIBRATION RESULTS (SCHEDULE, 1996)

                           Validation   Pre-Calibration           Post-Calibration
Data Set                  Sample Size   MMRE   RRMS   PRED(.25)   MMRE   RRMS   PRED(.25)
MIS - COBOL                     6       0.31   0.37   0.17        0.29   0.72   0.33
Unmanned Space                  5       0.60   0.62   0.00        0.50   0.68   0.00
Ground Support                  4       0.60   0.62   0.00        0.60   0.62   0.00
COBOL Programs                  4       0.60   0.60   0.00        0.60   0.60   0.00

Although these results are promising, it should not be assumed that CHECKPOINT will do as well in other environments. The best results for the CHECKPOINT model were for the MIS COBOL data set, which was obtained from a single contractor. Data from multiple contractors, which often characterize DOD databases, are more difficult to calibrate accurately. Furthermore, CHECKPOINT is a function point model. If the user wants to input size in SLOC (which is usually the case), the user or model must first convert the SLOC to function points. Unfortunately, the conversion ratios are sometimes subject to significant variations. Thus, the SLOC effort results for CHECKPOINT may not be repeatable elsewhere.
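To see why the conversion matters, consider "backfiring" SLOC into function points with a language ratio. The ratios below are illustrative values of the kind published by SPR (actual published tables vary by source and language level), and the 20 percent swing is an assumed variation showing how directly ratio uncertainty propagates into the size, and hence into the effort estimate.

```python
# Backfiring: converting SLOC to function points with a language ratio.
# Ratios here are illustrative, not authoritative published figures.
SLOC_PER_FP = {"COBOL": 107, "C": 128, "Ada": 71, "Assembly": 320}

def backfire(sloc, language):
    return sloc / SLOC_PER_FP[language]

fp = backfire(50_000, "COBOL")
print(f"Nominal: {fp:.0f} FP")

# An assumed 20 percent swing in the ratio shifts the function point
# count, and any estimate driven by it, by a similar margin:
for ratio in (107 * 0.8, 107 * 1.2):
    print(f"ratio {ratio:.0f}: {50_000 / ratio:.0f} FP")
```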
PHASE III.

In 1997 three models (COCOMO II, SAGE, and CHECKPOINT) were calibrated. COCOMO II, the successor to Boehm's COCOMO model (1981), was calibrated to the SMC database. SAGE, a commercial model developed by Randy Jensen, was calibrated to the SMC and ESC databases. Finally, CHECKPOINT was calibrated to the ESC database to determine whether the unusually high accuracy reported by Mertes (1996) could be achieved on a different database. As before, the details are documented in the 1997 thesis reports (Bernheisel, Marzo, and Shrum); here, only the highlights are described.

The COCOMO II "Post-Architecture" model, the long-awaited successor to COCOMO, is expected to have an on-line calibration capability; however, it was not available for this study. Instead, the model was calibrated using the procedure described in Phase I for REVIC. Since the data sets were relatively small, only the coefficient of the effort equation was calibrated; the exponent was set to an "average" value of 1.153.

SAGE is a commercial model developed by Dr. Randy Jensen (1996). It currently has an on-line calibration capability in which the effort and schedule equations are calibrated simultaneously by adjusting a "basic technology constant" (Ctb) for effort and a "system complexity" factor (D) for schedule. Due to time limitations, only Ctb was calibrated for this study, and a value of 15, typical for the types of subsets calibrated here, was used for D. The basic technology constant accounts for an organization's personnel capability and experience, use of modern practices and tools, and computer turnaround and terminal response times. Higher values of Ctb represent higher productivity, and result in relatively lower costs and shorter schedules.

The SMC database was stratified into the seven categories used in Phase II (1996). No changes were made except that a more recent edition of the database was used.

The ESC database contains information on 52 projects and 312 computer software configuration items (Marzo, 1997). It contains contractor identifiers and language, but no information on application type. It does contain inputs for the SEER-SEM model, for which it was originally developed. The ESC database was initially stratified by contractor, since it was believed that a model can be more accurate when calibrated for a specific developer (Kemerer, 1987). For CHECKPOINT, the ESC database was also stratified by language alone and by combinations of contractor and language.

Calibration rules. The techniques used to calibrate the models were significantly improved over those used in the earlier phases. In the past, small data sets reduced the meaningfulness of the calibration. Indeed, making statistically valid inferences from small data sets of completed software projects is a common limitation of any calibration study. To overcome this limitation, each model was calibrated multiple times by drawing random samples from the data set. The remaining hold-out samples were used for validation, and the averages of the validation results became the measure of accuracy. This technique, known as "resampling," is becoming an increasingly popular and acceptable substitute for more conventional statistical techniques (University of Maryland, 1997).

The resampling technique is flexible. For CHECKPOINT, resampling was used only on the small data sets (8-12 data points); four random samples from each small data set were used to calibrate and validate the model. For COCOMO II, only data sets of twelve or more data points were used, and resampling was accomplished on all data sets by using 80 percent of the data points (selected randomly) for calibration and the remaining 20 percent for validation. The process was repeated five times, and the results were averaged.
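The COCOMO II procedure is what is now commonly called repeated random sub-sampling validation. A minimal sketch follows, assuming caller-supplied `calibrate` and `evaluate` functions (hypothetical names standing in for whatever fitting and accuracy routines a study uses):

```python
import random

def resample_validate(data, calibrate, evaluate, frac=0.8, runs=5, seed=1):
    """Repeated random sub-sampling validation: a fraction of the
    points calibrates the model, the remainder validates it, and the
    run results are averaged."""
    rng = random.Random(seed)
    scores = []
    for _ in range(runs):
        sample = data[:]
        rng.shuffle(sample)
        k = round(frac * len(sample))
        model = calibrate(sample[:k])          # fit on the calibration split
        scores.append(evaluate(model, sample[k:]))  # score on the hold-out
    return sum(scores) / len(scores)
```

Averaging over several random splits reduces the luck-of-the-draw effect that a single hold-out split suffers on small data sets.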
For SAGE, all data sets having four or more points were used, with an even more comprehensive resampling procedure: simulation software (Crystal Ball) was used to select two data points for validation and the rest for calibration. Instead of limiting the number of runs to four or five, all possible subsets were run.

Results. Table 5 shows the results of the CHECKPOINT calibration using the ESC database. Unlike the results reported by Mertes (1996), none of the data sets met any of Conte's criteria, even those for a single contractor. This may be due in part to the lack of function point counts in the ESC database; only SLOC are provided for all data points. However, since Mertes' results using CHECKPOINT with SLOC were also very good, it is difficult to account for the differences between the results of Mertes (1996) and Shrum (1997).

TABLE 5
CHECKPOINT CALIBRATION RESULTS (1997)

                           Validation   Pre-Calibration           Post-Calibration
Data Set                  Sample Size   MMRE   RRMS   PRED(.25)   MMRE   RRMS   PRED(.25)
Ada Language                    8       1.21   1.34   0.00        1.70   2.54   0.50
Assembly Language              11       0.83   1.44   0.09        2.05   1.20   0.18
FORTRAN Language               12       0.73   1.12   0.17        0.70   2.31   0.17
JOVIAL Language                 7       0.71   1.22   0.00        0.44   0.68   0.43
Contractor B                    4**     0.60   0.74   0.13        0.64   0.49   0.25
Contractor J                   11       0.69   0.91   0.18        1.33   1.43   0.18
Ada and Contractor R            5**     0.59   0.57   0.05        0.39   0.72   0.45
CMS2 and Contractor M           5**     0.91   1.13   0.00        0.69   0.64   0.10
FORTRAN and Contractor A        7       0.82   0.84   0.00        0.44   0.88   0.29
JOVIAL and Contractor J         6       0.80   1.42   0.00        0.37   0.70   0.33
** Resampling used for this set

Table 6 shows the results for COCOMO II, where calibration slightly improved the model's predictive accuracy, but none of the subsets met Conte's criteria. It is possible that better results may be attained when the on-line calibration capability is incorporated into the model.

TABLE 6
COCOMO II CALIBRATION RESULTS (1997)

                              Total      Pre-Calibration           Post-Calibration
Data Set                  Sample Size   MMRE   RRMS   PRED(.25)   MMRE   RRMS   PRED(.25)
Command and Control            12       0.39   0.49   0.30        0.33   0.53   0.40
Signal Processing              19       0.45   0.63   0.33        0.38   0.53   0.40
Ground Support                 15       0.71   1.16   0.07        0.66   0.95   0.20
Military Mobile                12       0.79   0.95   0.10        0.68   0.74   0.00

Table 7 shows the results for SAGE on both databases. Although calibration sometimes resulted in improved accuracy, only a few sets met Conte's criteria. This is somewhat surprising for the ESC data sets, where individual contractors are identified by a code letter and Ctb should be consistent for a company. It may be that software programs are developed differently even within a single company. Also, it is possible that the results would be better if the simultaneous effort and schedule calibration capability now integrated into SAGE had been used.

TABLE 7
SAGE CALIBRATION RESULTS (1997)

                              Total      Pre-Calibration           Post-Calibration
Data Set                  Sample Size   MMRE   RRMS   PRED(.25)   MMRE   RRMS   PRED(.25)
SMC - Avionics                  9       0.45   0.54   0.21        0.39   0.52   0.24
      Command and Control      10       0.23*  0.23*  0.70        0.29   0.30   0.45
      Signal Processing        16       0.39   0.43   0.44        0.50   0.54   0.20
      Unmanned Space            7       0.66   0.69   0.14        0.59   0.88   0.30
      Ground Support           14       0.32   0.44   0.43        0.32   0.44   0.43
      Military Mobile          10       0.37   0.47   0.29        0.41   0.52   0.36
      Missile                   4       0.66   0.89   0.00        0.67   0.44   0.24
ESC - Contractor A             17       0.48   0.57   0.17        0.41   0.40   0.31
      Contractor J             17       0.37   0.47   0.33        0.47   0.57   0.14
      Contractor R              6       0.32   0.36   0.32        0.21*  0.23*  0.54
* Met Conte's criteria
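Before turning to conclusions, note that the exhaustive SAGE procedure described above (every possible pair of points held out once) is easy to reproduce without simulation software. The study used Crystal Ball; a direct enumeration such as the Python sketch below, using the same hypothetical `calibrate` and `evaluate` callables as the previous sketch, produces the identical set of runs.

```python
from itertools import combinations

def leave_two_out(data, calibrate, evaluate):
    """Exhaustive resampling: every possible pair of data points
    serves once as the hold-out sample; results are averaged."""
    scores = []
    for pair in combinations(range(len(data)), 2):
        holdout = [data[i] for i in pair]
        cal = [p for i, p in enumerate(data) if i not in pair]
        scores.append(evaluate(calibrate(cal), holdout))
    return sum(scores) / len(scores)
```

For a data set of n points this runs n(n-1)/2 calibrations, which is entirely practical at the sample sizes in Table 7.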
CONCLUSIONS

Calibration does not always improve a model's predictive accuracy. Most of the calibrated models evaluated in this project failed to meet Conte's criteria. The one exception was the calibration of CHECKPOINT to the SMC database (Mertes, 1996), where almost all of the calibrated data sets met Conte's criteria, both for function point and SLOC applications. Unfortunately, this result could not be replicated on the ESC database (Shrum, 1997) using a superior validation technique. Overall, none of the models was shown to be accurate to within 25 percent of actual cost or effort even half the time.

This does not mean the Decalogue project was a failure. Much was learned about the models, their strengths and weaknesses, and the challenges of calibrating them to DOD databases. One major insight of the project is that the use of a hold-out sample is essential for meaningful model calibration; without one, the predictive accuracy of the model is probably overstated. Since all new projects are outside of the historical database(s), validation on a hold-out sample is much more meaningful than the more common practice of analyzing within-database performance. The calibrations performed in 1997 also developed and applied resampling as a superior technique for validating small samples. It is better than using a single hold-out subset, and it can be done easily with modern software such as Excel and Crystal Ball. Hopefully, the findings...