Model-based testing through a GUI
Software Testing Terminology: Chinese-English Glossary

data definition C-use pair: 数据定义C-use使用对
data definition P-use coverage: 数据定义P-use覆盖
data definition P-use pair: 数据定义P-use使用对
data definition: 数据定义
data definition-use coverage: 数据定义使用覆盖
data definition-use pair: 数据定义使用对
data definition-use testing: 数据定义使用测试
Check In: 检入
Check Out: 检出
Closeout: 收尾
code audit: 代码审计
Code coverage: 代码覆盖
Code Inspection: 代码检视
Core team: 核心小组
corrective maintenance: 故障检修
correctness: 正确性
coverage: 覆盖率
coverage item: 覆盖项
crash: 崩溃
Beta testing: β测试
Black Box Testing: 黑盒测试
Blocking bug: 阻碍性错误
Bottom-up testing: 自底向上测试
boundary value coverage: 边界值覆盖
boundary value testing: 边界值测试
Bug bash: 错误大扫除
bug fix: 错误修正
Bug report: 错误报告
RAD6

IBM MW & Server Toolkits
ISV & Customer Tools
WebSphere Studio
WebSphere Studio Workbench
–Based on Java; an integrated development environment that supports development and testing of all J2EE components and improves programming efficiency
V6.0 Application Server
Separate Install (Local or Remote)
Annotation-based Programming
Web Services
Build and consume Web Services – EJB and Java bean JSR 101/109 Web Services
Support for setting conformance levels – WS-I SSBP 1.0 (Simple SOAP Basic Profile 1.0) and WS-I AP (Attachments Profile 1.0)
Generate JAX-RPC handlers with wizard – skeleton handler generated and deployment descriptors updated
Secure Web Service requests/responses with WS-Security – security values specified in J2EE deployment descriptor editors
Integrated SOAP Monitor available for viewing Web Service traffic – integrated into the creation wizard
Software Verification Terminology: Chinese-English Glossary

Acceptance testing | 验收测试Acceptance Testing|可接受性测试Accessibility test | 软体适用性测试actual outcome|实际结果Ad hoc testing | 随机测试Algorithm analysis | 算法分析algorithm|算法Alpha testing | α测试analysis|分析anomaly|异常application software|应用软件Application under test (AUT) | 所测试的应用程序Architecture | 构架Artifact | 工件ASQ|自动化软件质量(Automated Software Quality)Assertion checking | 断言检查Association | 关联Audit | 审计audit trail|审计跟踪Automated Testing|自动化测试Backus-Naur Form|BNF范式baseline|基线Basic Block|基本块basis test set|基本测试集Behaviour | 行为Bench test | 基准测试benchmark|标杆/指标/基准Best practise | 最佳实践Beta testing | β测试Black Box Testing|黑盒测试Blocking bug | 阻碍性错误Bottom-up testing | 自底向上测试boundary value coverage|边界值覆盖boundary value testing|边界值测试Boundary values | 边界值Boundry Value Analysis|边界值分析branch condition combination coverage|分支条件组合覆盖branch condition combination testing|分支条件组合测试branch condition coverage|分支条件覆盖branch condition testing|分支条件测试branch condition|分支条件Branch coverage | 分支覆盖branch outcome|分支结果branch point|分支点branch testing|分支测试branch|分支Breadth Testing|广度测试Brute force testing| 强力测试Buddy test | 合伙测试Buffer | 缓冲Bug | 错误Bug bash | 错误大扫除bug fix | 错误修正Bug report | 错误报告Bug tracking system| 错误跟踪系统bug|缺陷Build | 工作版本(内部小版本)Build Verfication tests(BVTs)| 版本验证测试Build-in | 内置Capability Maturity Model (CMM)| 能力成熟度模型Capability Maturity Model Integration (CMMI)| 能力成熟度模型整合capture/playback tool|捕获/回放工具Capture/Replay Tool|捕获/回放工具CASE|计算机辅助软件工程(computer aided software engineering)CAST|计算机辅助测试cause-effect graph|因果图certification |证明change control|变更控制Change Management |变更管理Change Request |变更请求Character Set | 字符集Check In |检入Check Out |检出Closeout | 收尾code audit |代码审计Code coverage | 代码覆盖Code Inspection|代码检视Code page | 代码页Code rule | 编码规范Code sytle | 编码风格Code Walkthrough|代码走读code-based testing|基于代码的测试coding standards|编程规范Common sense | 常识Compatibility Testing|兼容性测试complete path testing |完全路径测试completeness|完整性complexity |复杂性Component testing | 组件测试Component|组件computation data use|计算数据使用computer system security|计算机系统安全性Concurrency user | 并发用户Condition coverage | 条件覆盖condition coverage|条件覆盖condition outcome|条件结果condition|条件configuration control|配置控制Configuration item | 配置项configuration management|配置管理Configuration testing | 配置测试conformance criterion| 一致性标准Conformance Testing| 一致性测试consistency | 一致性consistency checker| 一致性检查器Control flow graph | 控制流程图control flow graph|控制流图control flow|控制流conversion testing|转换测试Core team | 核心小组corrective maintenance|故障检修correctness |正确性coverage |覆盖率coverage item|覆盖项crash|崩溃criticality analysis|关键性分析criticality|关键性CRM(change request management)| 变更需求管理Customer-focused mindset | 客户为中心的理念体系Cyclomatic complexity | 圈复杂度data corruption|数据污染data definition C-use pair|数据定义C-use使用对data definition P-use coverage|数据定义P-use覆盖data definition P-use pair|数据定义P-use使用对data definition|数据定义data definition-use coverage|数据定义使用覆盖data definition-use pair |数据定义使用对data definition-use testing|数据定义使用测试data dictionary|数据字典Data Flow Analysis | 数据流分析data flow analysis|数据流分析data flow coverage|数据流覆盖data flow diagram|数据流图data flow testing|数据流测试data integrity|数据完整性data use|数据使用data validation|数据确认dead code|死代码Debug | 调试Debugging|调试Decision condition|判定条件Decision coverage | 判定覆盖decision coverage|判定覆盖decision outcome|判定结果decision table|判定表decision|判定Defect | 缺陷defect density | 缺陷密度Defect Tracking |缺陷跟踪Deployment | 部署Depth Testing|深度测试design for sustainability |可延续性的设计design of experiments|实验设计design-based testing|基于设计的测试Desk checking | 桌前检查desk checking|桌面检查Determine Usage Model | 确定应用模型Determine Potential Risks | 确定潜在风险diagnostic|诊断DIF(decimation in frequency) | 
按频率抽取dirty testing|肮脏测试disaster recovery|灾难恢复DIT (decimation in time)| 按时间抽取documentation testing |文档测试domain testing|域测试domain|域DTP DETAIL TEST PLAN详细确认测试计划Dynamic analysis | 动态分析dynamic analysis|动态分析Dynamic Testing|动态测试embedded software|嵌入式软件emulator|仿真End-to-End testing|端到端测试Enhanced Request |增强请求entity relationship diagram|实体关系图Encryption Source Code Base| 加密算法源代码库Entry criteria | 准入条件entry point |入口点Envisioning Phase | 构想阶段Equivalence class | 等价类Equivalence Class|等价类equivalence partition coverage|等价划分覆盖Equivalence partition testing | 等价划分测试equivalence partition testing|参考等价划分测试equivalence partition testing|等价划分测试Equivalence Partitioning|等价划分Error | 错误Error guessing | 错误猜测error seeding|错误播种/错误插值error|错误Event-driven | 事件驱动Exception handlers | 异常处理器exception|异常/例外executable statement|可执行语句Exhaustive Testing|穷尽测试exit point|出口点expected outcome|期望结果Exploratory testing | 探索性测试Failure | 失效Fault | 故障fault|故障feasible path|可达路径feature testing|特性测试Field testing | 现场测试FMEA|失效模型效果分析(Failure Modes and Effects Analysis)FMECA|失效模型效果关键性分析(Failure Modes and Effects Criticality Analysis)Framework | 框架FTA|故障树分析(Fault Tree Analysis)functional decomposition|功能分解Functional Specification |功能规格说明书Functional testing | 功能测试Functional Testing|功能测试G11N(Globalization) | 全球化Gap analysis | 差距分析Garbage characters | 乱码字符glass box testing|玻璃盒测试Glass-box testing | 白箱测试或白盒测试Glossary | 术语表GUI(Graphical User Interface)| 图形用户界面Hard-coding | 硬编码Hotfix | 热补丁I18N(Internationalization)| 国际化Identify Exploratory Tests –识别探索性测试IEEE|美国电子与电器工程师学会(Institute of Electrical and Electronic Engineers)Incident 事故Incremental testing | 渐增测试incremental testing|渐增测试infeasible path|不可达路径input domain|输入域Inspection | 审查inspection|检视installability testing|可安装性测试Installing testing | 安装测试instrumentation|插装instrumenter|插装器Integration |集成Integration testing | 集成测试interface | 接口interface analysis|接口分析interface testing|接口测试interface|接口invalid inputs|无效输入isolation testing|孤立测试Issue | 问题Iteration | 迭代Iterative development| 迭代开发job control language|工作控制语言Job|工作Key concepts | 关键概念Key Process Area | 关键过程区域Keyword driven testing | 关键字驱动测试Kick-off meeting | 动会议L10N(Localization) | 本地化Lag time | 延迟时间LCSAJ|线性代码顺序和跳转(Linear Code Sequence And Jump)LCSAJ coverage|LCSAJ覆盖LCSAJ testing|LCSAJ测试Lead time | 前置时间Load testing | 负载测试Load Testing|负载测试Localizability testing| 本地化能力测试Localization testing | 本地化测试logic analysis|逻辑分析logic-coverage testing|逻辑覆盖测试Maintainability | 可维护性maintainability testing|可维护性测试Maintenance | 维护Master project schedule |总体项目方案Measurement | 度量Memory leak | 内存泄漏Migration testing | 迁移测试Milestone | 里程碑Mock up | 模型,原型modified condition/decision coverage|修改条件/判定覆盖modified condition/decision testing |修改条件/判定测试modular decomposition|参考模块分解Module testing | 模块测试Monkey testing | 跳跃式测试Monkey Testing|跳跃式测试mouse over|鼠标在对象之上mouse leave|鼠标离开对象MTBF|平均失效间隔实际(mean time between failures)MTP MAIN TEST PLAN主确认计划MTTF|平均失效时间(mean time to failure)MTTR|平均修复时间(mean time to repair)multiple condition coverage|多条件覆盖mutation analysis|变体分析N/A(Not applicable) | 不适用的Negative Testing | 逆向测试, 反向测试, 负面测试negative testing|参考负面测试Negative Testing|逆向测试/反向测试/负面测试off by one|缓冲溢出错误non-functional requirements testing|非功能需求测试nominal load|额定负载N-switch coverage|N切换覆盖N-switch testing|N切换测试N-transitions|N转换Off-the-shelf software | 套装软件operational testing|可操作性测试output domain|输出域paper audit|书面审计Pair Programming | 成对编程partition testing|分类测试Path coverage | 路径覆盖path coverage|路径覆盖path sensitizing|路径敏感性path testing|路径测试path|路径Peer review | 同行评审Performance | 性能Performance indicator| 性能(绩效)指标Performance 
testing | 性能测试Pilot | 试验Pilot testing | 引导测试Portability | 可移植性portability testing|可移植性测试Positive testing | 正向测试Postcondition | 后置条件Precondition | 前提条件precondition|预置条件predicate data use|谓词数据使用predicate|谓词Priority | 优先权program instrumenter|程序插装progressive testing|递进测试Prototype | 原型Pseudo code | 伪代码pseudo-localization testing|伪本地化测试pseudo-random|伪随机QC|质量控制(quality control)Quality assurance(QA)| 质量保证Quality Control(QC) | 质量控制Race Condition|竞争状态Rational Unified Process(以下简称RUP)|瑞理统一工艺recovery testing|恢复性测试Refactoring | 重构regression analysis and testing|回归分析和测试Regression testing | 回归测试Release | 发布Release note | 版本说明release|发布Reliability | 可靠性reliability assessment|可靠性评价reliability|可靠性Requirements management tool| 需求管理工具Requirements-based testing | 基于需求的测试Return of Investment(ROI)| 投资回报率review|评审Risk assessment | 风险评估risk|风险Robustness | 强健性Root Cause Analysis(RCA)| 根本原因分析safety critical|严格的安全性safety|(生命)安全性Sanity Testing|理智测试Schema Repository | 模式库Screen shot | 抓屏、截图SDP|软件开发计划(software development plan)Security testing | 安全性测试security testing|安全性测试security.|(信息)安全性serviceability testing|可服务性测试Severity | 严重性Shipment | 发布simple subpath|简单子路径Simulation | 模拟Simulator | 模拟器SLA(Service level agreement)| 服务级别协议SLA|服务级别协议(service level agreement)Smoke testing | 冒烟测试Software development plan(SDP)| 软件开发计划Software development process| 软件开发过程software development process|软件开发过程software diversity|软件多样性software element|软件元素software engineering environment|软件工程环境software engineering|软件工程Software life cycle | 软件生命周期source code|源代码source statement|源语句Specification | 规格说明书specified input|指定的输入spiral model |螺旋模型SQAP SOFTWARE QUALITY ASSURENCE PLAN 软件质量保证计划SQL|结构化查询语句(structured query language)Staged Delivery|分布交付方法state diagram|状态图state transition testing |状态转换测试state transition|状态转换state|状态Statement coverage | 语句覆盖statement testing|语句测试statement|语句Static Analysis|静态分析Static Analyzer|静态分析器Static Testing|静态测试statistical testing|统计测试Stepwise refinement | 逐步优化storage testing|存储测试Stress Testing | 压力测试structural coverage|结构化覆盖structural test case design|结构化测试用例设计structural testing|结构化测试structured basis testing|结构化的基础测试structured design|结构化设计structured programming|结构化编程structured walkthrough|结构化走读stub|桩sub-area|子域Summary| 总结SVVP SOFTWARE Vevification&Validation PLAN| 软件验证和确认计划symbolic evaluation|符号评价symbolic execution|参考符号执行symbolic execution|符号执行symbolic trace|符号轨迹Synchronization | 同步Syntax testing | 语法分析system analysis|系统分析System design | 系统设计system integration|系统集成System Testing | 系统测试TC TEST CASE 测试用例TCS TEST CASE SPECIFICATION 测试用例规格说明TDS TEST DESIGN SPECIFICATION 测试设计规格说明书technical requirements testing|技术需求测试Test | 测试test automation|测试自动化Test case | 测试用例test case design technique|测试用例设计技术test case suite|测试用例套test comparator|测试比较器test completion criterion|测试完成标准test coverage|测试覆盖Test design | 测试设计Test driver | 测试驱动test environment|测试环境test execution technique|测试执行技术test execution|测试执行test generator|测试生成器test harness|测试用具Test infrastructure | 测试基础建设test log|测试日志test measurement technique|测试度量技术Test Metrics |测试度量test procedure|测试规程test records|测试记录test report|测试报告Test scenario | 测试场景Test Script|测试脚本Test Specification|测试规格Test strategy | 测试策略test suite|测试套Test target | 测试目标Test ware | 测试工具Testability | 可测试性testability|可测试性Testing bed | 测试平台Testing coverage | 测试覆盖Testing environment | 测试环境Testing item | 测试项Testing plan | 测试计划Testing procedure | 测试过程Thread testing | 线程测试time sharing|时间共享time-boxed | 固定时间TIR test incident report 测试事故报告ToolTip|控件提示或说明top-down testing|自顶向下测试TPS TEST PEOCESS SPECIFICATION 
测试步骤规格说明Traceability | 可跟踪性traceability analysis|跟踪性分析traceability matrix|跟踪矩阵Trade-off | 平衡transaction|事务/处理transaction volume|交易量transform analysis|事务分析trojan horse|特洛伊木马truth table|真值表TST TEST SUMMARY REPORT 测试总结报告Tune System | 调试系统TW TEST WARE |测试件Unit Testing |单元测试Usability Testing|可用性测试Usage scenario | 使用场景User acceptance Test | 用户验收测试User database |用户数据库User interface(UI) | 用户界面User profile | 用户信息User scenario | 用户场景V&V (Verification & Validation) | 验证&确认validation |确认verification |验证version |版本Virtual user | 虚拟用户volume testing|容量测试VSS(visual source safe) |VTP Verification TEST PLAN验证测试计划VTR Verification TEST REPORT验证测试报告Walkthrough | 走读Waterfall model | 瀑布模型Web testing | 网站测试White box testing | 白盒测试Work breakdown structure (WBS) | 任务分解结构Zero bug bounce (ZBB) | 零错误反弹。
Software Testing English Terms and Abbreviations

软件测试常用英语词汇静态测试:Non-Execution-Based Testing或Static testing 代码走查:Walkthrough代码审查:Code Inspection技术评审:Review动态测试:Execution-Based Testing白盒测试:White-Box Testing黑盒测试:Black-Box Testing灰盒测试:Gray-Box Testing软件质量保证SQA:Software Quality Assurance软件开发生命周期:Software Development Life Cycle冒烟测试:Smoke Test回归测试:Regression Test功能测试:Function Testing性能测试:Performance Testing压力测试:Stress Testing负载测试:Volume Testing易用性测试:Usability Testing安装测试:Installation Testing界面测试:UI Testing配置测试:Configuration Testing文档测试:Documentation Testing兼容性测试:Compatibility Testing安全性测试:Security Testing恢复测试:Recovery Testing单元测试:Unit Test集成测试:Integration Test系统测试:System Test验收测试:Acceptance Test测试计划应包括:测试对象:The Test Objectives测试范围: The Test Scope测试策略: The Test Strategy测试方法: The Test Approach,测试过程: The test procedures,测试环境: The Test Environment,测试完成标准:The test Completion criteria测试用例:The Test Cases测试进度表:The Test Schedules风险:Risks接口:Interface最终用户:The End User正式的测试环境:Formal Test Environment确认需求:Verifying The Requirements有分歧的需求:Ambiguous Requirements运行和维护:Operation and Maintenance.可复用性:Reusability可靠性: Reliability/Availability电机电子工程师协会IEEE:The Institute of Electrical and Electronics Engineers) 正确性:Correctness实用性:Utility健壮性:Robustness可靠性:Reliability软件需求规格说明书:SRS (software requirement specification )概要设计:HLD (high level design )详细设计:LLD (low level design )统一开发流程:RUP (rational unified process )集成产品开发:IPD (integrated product development )能力成熟模型:CMM (capability maturity model )能力成熟模型集成:CMMI (capability maturity model integration )戴明环:PDCA (plan do check act )软件工程过程组:SEPG (software engineering process group )集成测试:IT (integration testing )系统测试:ST (system testing )关键过程域:KPA (key process area )同行评审:PR (peer review )用户验收测试:UAT (user acceptance testing )验证和确认:V&V (verification & validation )控制变更委员会:CCB (change control board )图形用户界面:GUI (graphic user interface )配置管理员:CMO (configuration management officer )平均失效间隔时间:(MTBF mean time between failures )平均修复时间:MTTR (mean time to restoration )平均失效时间:MTTF (mean time to failure )工作任务书:SOW (statement of work )α测试:alpha testingβ测试:beta testing适应性:Adaptability可用性:Availability功能规格说明书:Functional Specification软件开发中常见英文缩写和各类软件开发文档的英文缩写:英文简写文档名称MRD market requirement document (市场需求文档)PRD product requirement document (产品需求文档)SOW 工作任务说明书PHB Process Handbook (项目过程手册)EST Estimation Sheet (估计记录)PPL Project Plan (项目计划)CMP Software Management Plan( 配置管理计划)QAP Software Quality Assurance Plan (软件质量保证计划)RMP Software Risk Management Plan (软件风险管理计划)TST Test Strategy(测试策略)WBS Work Breakdown Structure (工作分解结构)BRS Business Requirement Specification(业务需求说明书)SRS Software Requirement Specification(软件需求说明书)STP System Testing plan (系统测试计划)STC System Testing Cases (系统测试用例)HLD High Level Design (概要设计说明书)ITP Integration Testing plan (集成测试计划)ITC Integration Testing Cases (集成测试用例)LLD Low Level Design (详细设计说明书)UTP Unit Testing Plan ( 单元测试计划)UTC Unit Testing Cases (单元测试用例)UTR Unit Testing Report (单元测试报告)ITR Integration Testing Report (集成测试报告)STR System Testing Report (系统测试报告)RTM Requirements Traceability Matrix (需求跟踪矩阵)CSA Configuration Status Accounting (配置状态发布)CRF Change Request Form (变更申请表)WSR Weekly Status Report (项目周报)QSR Quality Weekly Status Report (质量工作周报)QAR Quality Audit Report(质量检查报告)QCL Quality Check List(质量检查表)PAR Phase Assessment Report (阶段评估报告)CLR Closure Report (项目总结报告)RFF Review Finding Form (评审发现表)MOM Minutes of Meeting (会议纪要)MTX Metrics Sheet (度量表)CCF ConsistanceCheckForm(一致性检查表)BAF Baseline Audit Form(基线审计表)PTF Program Trace Form(问题跟踪表)领测国际科技(北京)有限公司领测软件测试网 /软件测试中英文对照术语表A• Abstract test case (High level test case) :概要测试用例• 
Acceptance:验收• Acceptance criteria:验收标准• Acceptance testing:验收测试• Accessibility testing:易用性测试• Accuracy:精确性• Actual outcome (actual result) :实际输出/实际结果• Ad hoc review (informal review) :非正式评审• Ad hoc testing:随机测试• Adaptability:自适应性• Agile testing:敏捷测试• Algorithm test (branch testing) :分支测试• Alpha testing:alpha 测试• Analyzability:易分析性• Analyzer:分析员• Anomaly:异常• Arc testing:分支测试• Attractiveness:吸引力• Audit:审计• Audit trail:审计跟踪• Automated testware:自动测试组件• Availability:可用性B• Back-to-back testing:对比测试• Baseline:基线• Basic block:基本块• Basis test set:基本测试集• Bebugging:错误撒播• Behavior:行为• Benchmark test:基准测试• Bespoke software:定制的软件• Best practice:最佳实践• Beta testing:Beta 测试领测国际科技(北京)有限公司领测软件测试网 /• Big-bang testing:集成测试• Black-box technique:黑盒技术• Black-box testing:黑盒测试• Black-box test design technique:黑盒测试设计技术• Blocked test case:被阻塞的测试用例• Bottom-up testing:自底向上测试• Boundary value:边界值• Boundary value analysis:边界值分析• Boundary value coverage:边界值覆盖率• Boundary value testing:边界值测试• Branch:分支• Branch condition:分支条件• Branch condition combination coverage:分支条件组合覆盖率• Branch condition combination testing:分支条件组合测试• Branch condition coverage:分支条件覆盖率• Branch coverage:分支覆盖率• Branch testing:分支测试• Bug:缺陷• Business process-based testing:基于商业流程的测试C• Capability Maturity Model (CMM) :能力成熟度模型• Capability Maturity Model Integration (CMMI) :集成能力成熟度模型• Capture/playback tool:捕获/回放工具• Capture/replay tool:捕获/重放工具• CASE (Computer Aided Software Engineering) :电脑辅助软件工程• CAST (Computer Aided Software Testing) :电脑辅助软件测试• Cause-effect graph:因果图• Cause-effect graphing:因果图技术• Cause-effect analysis:因果分析• Cause-effect decision table:因果判定表• Certification:认证• Changeability:可变性• Change control:变更控制• Change control board:变更控制委员会• Checker:检查人员• Chow's coverage metrics (N-switch coverage) :N 切换覆盖率• Classification tree method:分类树方法• Code analyzer:代码分析器• Code coverage:代码覆盖率领测国际科技(北京)有限公司领测软件测试网 /• Code-based testing:基于代码的测试• Co-existence:共存性• Commercial off-the-shelf software:商用离岸软件• Comparator:比较器• Compatibility testing:兼容性测试• Compiler:编译器• Complete testing:完全测试/穷尽测试• Completion criteria:完成标准• Complexity:复杂性• Compliance:一致性• Compliance testing:一致性测试• Component:组件• Component integration testing:组件集成测试• Component specification:组件规格说明• Component testing:组件测试• Compound condition:组合条件• Concrete test case (low level test case) :详细测试用例• Concurrency testing:并发测试• Condition:条件表达式• Condition combination coverage:条件组合覆盖率• Condition coverage:条件覆盖率• Condition determination coverage:条件判定覆盖率• Condition determination testing:条件判定测试• Condition testing:条件测试• Condition outcome:条件结果• Confidence test (smoke test) :信心测试(冒烟测试)• Configuration:配置• Configuration auditing:配置审核• Configuration control:配置控制• Configuration control board (CCB) :配置控制委员会• Configuration identification:配置标识• Configuration item:配置项• Configuration management:配置管理• Configuration testing:配置测试• Confirmation testing:确认测试• Conformance testing:一致性测试• Consistency:一致性• Control flow:控制流• Control flow graph:控制流图• Control flow path:控制流路径• Conversion testing:转换测试• COTS (Commercial Off-The-Shelf software) :商业离岸软件• Coverage:覆盖率• Coverage analysis:覆盖率分析领测国际科技(北京)有限公司领测软件测试网 /• Coverage item:覆盖项• Coverage tool:覆盖率工具• Custom software:定制软件• Cyclomatic complexity:圈复杂度• Cyclomatic number:圈数D• Daily build:每日构建• Data definition:数据定义• Data driven testing:数据驱动测试• Data flow:数据流• Data flow analysis:数据流分析• Data flow coverage:数据流覆盖率• Data flow test:数据流测试• Data integrity testing:数据完整性测试• Database integrity testing:数据库完整性测试• Dead code:无效代码• Debugger:调试器• Debugging:调试• Debugging tool:调试工具• Decision:判定• Decision condition 
coverage:判定条件覆盖率• Decision condition testing:判定条件测试• Decision coverage:判定覆盖率• Decision table:判定表• Decision table testing:判定表测试• Decision testing:判定测试技术• Decision outcome:判定结果• Defect:缺陷• Defect density:缺陷密度• Defect Detection Percentage (DDP) :缺陷发现率• Defect management:缺陷管理• Defect management tool:缺陷管理工具• Defect masking:缺陷屏蔽• Defect report:缺陷报告• Defect tracking tool:缺陷跟踪工具• Definition-use pair:定义-使用对• Deliverable:交付物• Design-based testing:基于设计的测试• Desk checking:桌面检查领测国际科技(北京)有限公司领测软件测试网 /• Development testing:开发测试• Deviation:偏差• Deviation report:偏差报告• Dirty testing:负面测试• Documentation testing:文档测试• Domain:域• Driver:驱动程序• Dynamic analysis:动态分析• Dynamic analysis tool:动态分析工具• Dynamic comparison:动态比较• Dynamic testing:动态测试E• Efficiency:效率• Efficiency testing:效率测试• Elementary comparison testing:基本组合测试• Emulator:仿真器、仿真程序• Entry criteria:入口标准• Entry point:入口点• Equivalence class:等价类• Equivalence partition:等价区间• Equivalence partition coverage:等价区间覆盖率• Equivalence partitioning:等价划分技术• Error:错误• Error guessing:错误猜测技术• Error seeding:错误撒播• Error tolerance:错误容限• Evaluation:评估• Exception handling:异常处理• Executable statement:可执行的语句• Exercised:可执行的• Exhaustive testing:穷尽测试• Exit criteria:出口标准• Exit point:出口点• Expected outcome:预期结果• Expected result:预期结果• Exploratory testing:探测测试领测国际科技(北京)有限公司领测软件测试网 /F• Fail:失败• Failure:失败• Failure mode:失败模式• Failure Mode and Effect Analysis (FMEA) :失败模式和影响分析• Failure rate:失败频率• Fault:缺陷• Fault density:缺陷密度• Fault Detection Percentage (FDP) :缺陷发现率• Fault masking:缺陷屏蔽• Fault tolerance:缺陷容限• Fault tree analysis:缺陷树分析• Feature:特征• Field testing:现场测试• Finite state machine:有限状态机• Finite state testing:有限状态测试• Formal review:正式评审• Frozen test basis:测试基线• Function Point Analysis (FPA) :功能点分析• Functional integration:功能集成• Functional requirement:功能需求• Functional test design technique:功能测试设计技术• Functional testing:功能测试• Functionality:功能性• Functionality testing:功能性测试G• glass box testing:白盒测试H• Heuristic evaluation:启发式评估• High level test case:概要测试用例• Horizontal traceability:水平跟踪领测国际科技(北京)有限公司领测软件测试网 /I• Impact analysis:影响分析• Incremental development model:增量开发模型• Incremental testing:增量测试• Incident:事件• Incident management:事件管理• Incident management tool:事件管理工具• Incident report:事件报告• Independence:独立• Infeasible path:不可行路径• Informal review:非正式评审• Input:输入• Input domain:输入范围• Input value:输入值• Inspection:审查• Inspection leader:审查组织者• Inspector:审查人员• Installability:可安装性• Installability testing:可安装性测试• Installation guide:安装指南• Installation wizard:安装向导• Instrumentation:插装• Instrumenter:插装工具• Intake test:入口测试• Integration:集成• Integration testing:集成测试• Integration testing in the large:大范围集成测试• Integration testing in the small:小范围集成测试• Interface testing:接口测试• Interoperability:互通性• Interoperability testing:互通性测试• Invalid testing:无效性测试• Isolation testing:隔离测试• Item transmittal report:版本发布报告• Iterative development model:迭代开发模型K• Key performance indicator:关键绩效指标领测国际科技(北京)有限公司领测软件测试网 /• Keyword driven testing:关键字驱动测试L• Learnability:易学性• Level test plan:等级测试计划• Link testing:组件集成测试• Load testing:负载测试• Logic-coverage testing:逻辑覆盖测试• Logic-driven testing:逻辑驱动测试• Logical test case:逻辑测试用例• Low level test case:详细测试用例M• Maintenance:维护• Maintenance testing:维护测试• Maintainability:可维护性• Maintainability testing:可维护性测试• Management review:管理评审• Master test plan:综合测试计划• Maturity:成熟度• Measure:度量• Measurement:度量• Measurement scale:度量粒度• Memory leak:内存泄漏• Metric:度量• Migration testing:移植测试• Milestone:里程碑• Mistake:错误• Moderator:仲裁员• Modified condition decision coverage:改进的条件判定覆盖率• Modified condition decision testing:改进的条件判定测试• 
Modified multiple condition coverage:改进的多重条件判定覆盖率• Modified multiple condition testing:改进的多重条件判定测试• Module:模块• Module testing:模块测试• Monitor:监视器• Multiple condition:多重条件• Multiple condition coverage:多重条件覆盖率领测国际科技(北京)有限公司领测软件测试网 /• Multiple condition testing:多重条件测试• Mutation analysis:变化分析• Mutation testing:变化测试N• N-switch coverage:N 切换覆盖率• N-switch testing:N 切换测试• Negative testing:负面测试• Non-conformity:不一致• Non-functional requirement:非功能需求• Non-functional testing:非功能测试• Non-functional test design techniques:非功能测试设计技术O• Off-the-shelf software:离岸软件• Operability:可操作性• Operational environment:操作环境• Operational profile testing:运行剖面测试• Operational testing:操作测试• Oracle:标准• Outcome:输出/结果• Output:输出• Output domain:输出范围• Output value:输出值P• Pair programming:结队编程• Pair testing:结队测试• Partition testing:分割测试• Pass:通过• Pass/fail criteria:通过/失败标准• Path:路径• Path coverage:路径覆盖• Path sensitizing:路径敏感性• Path testing:路径测试领测国际科技(北京)有限公司领测软件测试网 / • Peer review:同行评审• Performance:性能• Performance indicator:绩效指标• Performance testing:性能测试• Performance testing tool:性能测试工具• Phase test plan:阶段测试计划• Portability:可移植性• Portability testing:移植性测试• Postcondition:结果条件• Post-execution comparison:运行后比较• Precondition:初始条件• Predicted outcome:预期结果• Pretest:预测试• Priority:优先级• Probe effect:检测成本• Problem:问题• Problem management:问题管理• Problem report:问题报告• Process:流程• Process cycle test:处理周期测试• Product risk:产品风险• Project:项目• Project risk:项目风险• Program instrumenter:编程工具• Program testing:程序测试• Project test plan:项目测试计划• Pseudo-random:伪随机Q• Quality:质量• Quality assurance:质量保证• Quality attribute:质量属性• Quality characteristic:质量特征• Quality management:质量管理领测国际科技(北京)有限公司领测软件测试网 /R• Random testing:随机测试• Recorder:记录员• Record/playback tool:记录/回放工具• Recoverability:可复原性• Recoverability testing:可复原性测试• Recovery testing:可复原性测试• Regression testing:回归测试• Regulation testing:一致性测试• Release note:版本说明• Reliability:可靠性• Reliability testing:可靠性测试• Replaceability:可替换性• Requirement:需求• Requirements-based testing:基于需求的测试• Requirements management tool:需求管理工具• Requirements phase:需求阶段• Resource utilization:资源利用• Resource utilization testing:资源利用测试• Result:结果• Resumption criteria:继续测试标准• Re-testing:再测试• Review:评审• Reviewer:评审人员• Review tool:评审工具• Risk:风险• Risk analysis:风险分析• Risk-based testing:基于风险的测试• Risk control:风险控制• Risk identification:风险识别• Risk management:风险管理• Risk mitigation:风险消减• Robustness:健壮性• Robustness testing:健壮性测试• Root cause:根本原因S• Safety:安全领测国际科技(北京)有限公司领测软件测试网 /• Safety testing:安全性测试• Sanity test:健全测试• Scalability:可测量性• Scalability testing:可测量性测试• Scenario testing:情景测试• Scribe:记录员• Scripting language:脚本语言• Security:安全性• Security testing:安全性测试• Serviceability testing:可维护性测试• Severity:严重性• Simulation:仿真• Simulator:仿真程序、仿真器• Site acceptance testing:定点验收测试• Smoke test:冒烟测试• Software:软件• Software feature:软件功能• Software quality:软件质量• Software quality characteristic:软件质量特征• Software test incident:软件测试事件• Software test incident report:软件测试事件报告• Software Usability Measurement Inventory (SUMI) :软件可用性调查问卷• Source statement:源语句• Specification:规格说明• Specification-based testing:基于规格说明的测试• Specification-based test design technique:基于规格说明的测试设计技术• Specified input:特定输入• Stability:稳定性• Standard software:标准软件• Standards testing:标准测试• State diagram:状态图• State table:状态表• State transition:状态迁移• State transition testing:状态迁移测试• Statement:语句• Statement coverage:语句覆盖• Statement testing:语句测试• Static analysis:静态分析• Static analysis tool:静态分析工具• Static analyzer:静态分析工具• Static code analysis:静态代码分析• Static code analyzer:静态代码分析工具• Static testing:静态测试• Statistical 
testing:统计测试领测国际科技(北京)有限公司领测软件测试网 /• Status accounting:状态统计• Storage:资源利用• Storage testing:资源利用测试• Stress testing:压力测试• Structure-based techniques:基于结构的技术• Structural coverage:结构覆盖• Structural test design technique:结构测试设计技术• Structural testing:基于结构的测试• Structured walkthrough:面向结构的走查• Stub: 桩• Subpath: 子路径• Suitability: 符合性• Suspension criteria: 暂停标准• Syntax testing: 语法测试• System:系统• System integration testing:系统集成测试• System testing:系统测试T• Technical review:技术评审• Test:测试• Test approach:测试方法• Test automation:测试自动化• Test basis:测试基础• Test bed:测试环境• Test case:测试用例• Test case design technique:测试用例设计技术• Test case specification:测试用例规格说明• Test case suite:测试用例套• Test charter:测试宪章• Test closure:测试结束• Test comparator:测试比较工具• Test comparison:测试比较• Test completion criteria:测试比较标准• Test condition:测试条件• Test control:测试控制• Test coverage:测试覆盖率• Test cycle:测试周期• Test data:测试数据• Test data preparation tool:测试数据准备工具领测国际科技(北京)有限公司领测软件测试网 / • Test design:测试设计• Test design specification:测试设计规格说明• Test design technique:测试设计技术• Test design tool: 测试设计工具• Test driver: 测试驱动程序• Test driven development: 测试驱动开发• Test environment: 测试环境• Test evaluation report: 测试评估报告• Test execution: 测试执行• Test execution automation: 测试执行自动化• Test execution phase: 测试执行阶段• Test execution schedule: 测试执行进度表• Test execution technique: 测试执行技术• Test execution tool: 测试执行工具• Test fail: 测试失败• Test generator: 测试生成工具• Test leader:测试负责人• Test harness:测试组件• Test incident:测试事件• Test incident report:测试事件报告• Test infrastructure:测试基础组织• Test input:测试输入• Test item:测试项• Test item transmittal report:测试项移交报告• Test level:测试等级• Test log:测试日志• Test logging:测试记录• Test manager:测试经理• Test management:测试管理• Test management tool:测试管理工具• Test Maturity Model (TMM) :测试成熟度模型• Test monitoring:测试跟踪• Test object:测试对象• Test objective:测试目的• Test oracle:测试标准• Test outcome:测试结果• Test pass:测试通过• Test performance indicator:测试绩效指标• Test phase:测试阶段• Test plan:测试计划• Test planning:测试计划• Test policy:测试方针• Test Point Analysis (TPA) :测试点分析• Test procedure:测试过程领测国际科技(北京)有限公司领测软件测试网 /• Test procedure specification:测试过程规格说明• Test process:测试流程• Test Process Improvement (TPI) :测试流程改进• Test record:测试记录• Test recording:测试记录• Test reproduceability:测试可重现性• Test report:测试报告• Test requirement:测试需求• Test run:测试运行• Test run log:测试运行日志• Test result:测试结果• Test scenario:测试场景• Test script:测试脚本• Test set:测试集• Test situation:测试条件• Test specification:测试规格说明• Test specification technique:测试规格说明技术• Test stage:测试阶段• Test strategy:测试策略• Test suite:测试套• Test summary report:测试总结报告• Test target:测试目标• Test tool:测试工具• Test type:测试类型• Testability:可测试性• Testability review:可测试性评审• Testable requirements:需求可测试性• Tester:测试人员• Testing:测试• Testware:测试组件• Thread testing:组件集成测试• Time behavior:性能• Top-down testing:自顶向下的测试• Traceability:可跟踪性U• Understandability:易懂性• Unit:单元• unit testing:单元测试• Unreachable code:执行不到的代码领测国际科技(北京)有限公司领测软件测试网 /• Usability:易用性• Usability testing:易用性测试• Use case:用户用例• Use case testing:用户用例测试• User acceptance testing:用户验收测试• User scenario testing:用户场景测试• User test:用户测试V• V -model:V 模式• Validation:确认• Variable:变量• Verification:验证• Vertical traceability:垂直可跟踪性• Version control:版本控制• Volume testing:容量测试W• Walkthrough:走查• White-box test design technique:白盒测试设计技术• White-box testing:白盒测试• Wide Band Delphi:Delphi 估计方法。
Common Testing Terms (Chinese-English, with Explanations)

Acceptance Testing -- 可接受性测试: confirmatory testing, generally performed by the user/customer, to determine whether a product can be accepted.
actual outcome -- 实际结果: the result actually produced by the object under test under specific conditions.
Ad Hoc Testing -- 随机测试: the tester exercises system functions at random, trying to make the system fail.
algorithm -- 算法: (1) a well-defined, finite set of rules for solving a problem in a finite number of steps; (2) any sequence of operations that performs a specific task.
algorithm analysis -- 算法分析: a software verification and validation task that ensures the selected algorithms are correct, appropriate, and stable, and that they satisfy all accuracy, size, and timing requirements.
Alpha Testing -- Alpha测试: early testing of a product carried out by selected users, generally in a controlled environment.
analysis -- 分析: (1) decomposition into atomic parts or basic principles so as to determine the properties of the whole; (2) a process of reasoning that shows a particular result to be a consequence of the assumed premises; (3) the methodical study of a problem, in which the problem is broken into small related units for further detailed study.
anomaly -- 异常: any result observed in documentation or software operation that deviates from expectations.
application software -- 应用软件: software that satisfies specific needs.
architecture -- 构架: the organizational structure of a system or component.
ASQ -- 自动化软件质量 (Automated Software Quality): the use of software tools to improve software quality.
assertion -- 断言: a logical expression specifying a state that a program must already have reached, or a condition that a set of program variables must satisfy at a particular point during program execution.
assertion checking -- 断言检查: checking of assertions that the user has embedded in a program.
audit -- 审计: an independent examination of one or more work products to evaluate their conformance to specifications, standards, contracts, or other criteria.
audit trail -- 审计跟踪: a chronological record of system audit activities.
Automated Testing -- 自动化测试: testing carried out with automated test tools, generally without human intervention; most commonly applied in GUI and performance testing.
Reference: "Automated Model-Based Testing of Community-Driven Open-Source GUI Applications"

Automated Model-Based Testing of Community-Driven Open-Source GUI Applications
Zheng-Wen Shen, 2006/07/12

Reference: Qing Xie and Atif Memon, "Automated Model-Based Testing of Community-Driven Open-Source GUI Applications", 22nd International Conference on Software Maintenance (ICSM 2006), Philadelphia, PA, USA, Sep. 25-27, 2006.

Outline: 1. Introduction, 2. Testing Loops, 3. Overview of GUI Model, 4. Experiment, 5. Conclusions

1. Introduction
• Open-source software (OSS) development takes place on the world-wide web (WWW): communities of programmers are distributed world-wide, and code churn rates are unprecedented.
• Problems with OSS: there is little direct inter-developer communication (CVS commit log messages, bug reports, change requests, and comments), and developers work on loosely coupled parts of the application code, so a local change may inadvertently break other parts of the overall software.

2. Testing Loops
• Capture/replay (CR) tool test cases are fragile: an input event sequence may no longer execute on the GUI, and the expected output stored with the test case becomes obsolete.
• Model-based techniques generate and maintain test cases automatically during OSS evolution, employing GUI models to generate the test cases.
• Continuous GUI testing (figure).

3. Overview of GUI Model
• An event-flow graph (EFG) model represents all possible event sequences that may be executed on a GUI, i.e., all executable paths of the software. Example: Microsoft Word has 4210 events in total; 80 events open menus, 346 events open windows, 196 events close windows, and the remaining 3588 events interact with the underlying code.
• An event-interaction graph (EIG) is used to test interactions between loosely coupled parts of an OSS. System-interaction events = non-structural events + close-window events. Test cases consist of event-flow paths that start and end with system-interaction events, without any intermediate system-interaction events.
• Path in an EFG and EIG (figure).
• Test case generation: (1) test cases are short, so they are generated and executed very quickly; (2) they consist only of system-interaction events; (3) the expected state is stored only for system-interaction events; (4) all system-interaction events are executed, so most of the GUI's functionality is covered; (5) each test case is independent, and the suite can be distributed.
• Test oracle creation: oracles for crash tests (crashes during test execution may be used to identify serious problems in the software); oracles for smoke tests (the software does what it was doing before modifications; reference testing); oracles for comprehensive testing (a specifications-based approach: precondition + effect).

4. Experiment
• Research questions: Do popular web-based, community-driven, GUI-based OSS have problems that can be detected by our automated techniques? Do these problems persist across multiple versions of the OSS?
• Procedure: execute the fully automatic crash-testing process on the applications and report problems (a crash corresponds to an uncaught exception); determine how long these problems have been in the application code. The overall process executed without any human intervention in 5-8 hours.
• Subject applications:
– FreeMind, a premier mind-mapping software: 0.0.2, 0.1.0, 0.4, 0.7.1, 0.8.0RC5, 0.8.0
– Gantt Project, a project scheduling application: 1.6, 1.9.11, 1.10.3, 1.11, 1.11.1, 2.pre1
– JMSN, a pure Java Microsoft MSN messenger clone: 0.9a, 0.9.2, 0.9.7, 0.9.8b7, 0.9.9b1
– Crossword Sage, a tool for creating (and solving) professional-looking crosswords with powerful word-suggestion capabilities: 0.1, 0.2, 0.3.0, 0.3.1, 0.3.2, and 0.3.5

FreeMind bugs:
1. NullPointerException when trying to open a non-existent file (0.0.2, 0.1.0)
2. FileNotFoundException when trying to save a file with a very long file name (0.0.2, 0.1.0, 0.4)
3. NullPointerException when clicking on some buttons on the main toolbar when no file is open (0.1.0)
4. NullPointerException when clicking on some menu items if no file is open (0.1.0, 0.4, 0.7.1, 0.8.0RC5)
5. NullPointerException when trying to save a "blank" file (0.1.0)
6. NullPointerException when adding a new node after toggling a folded node (0.4)
7. FileNotFoundException when trying to import a non-existent file (0.4, 0.7.1, 0.8.0RC5, 0.8.0)
8. FileNotFoundException when trying to export a file with a very long file name (0.7.1, 0.8.0RC5, 0.8.0)
9. NullPointerException when trying to split a node in the "Edit a long node" window (0.7.1, 0.8.0RC5, 0.8.0)
10. NumberFormatException when setting non-numeric input while expecting a number in the "preferences setting" window (0.8.0RC5, 0.8.0)

Gantt Project bugs:
1. NumberFormatException when setting non-numeric inputs while expecting a number in the "New task" window (1.6)
2. FileNotFoundException when trying to open a non-existent file (1.6)
3. FileNotFoundException when trying to save a file with a very long file name (1.6, 1.9.11, 1.10.3, 1.11, 1.11.1, 2.pre1)
4. NullPointerException after confirming any preferences setting (1.9.11)
5. NullPointerException when trying to save the content to a server (1.9.11)
6. NullPointerException when trying to import a non-existent file (1.9.11, 1.10.3, 1.11, 1.11.1, 2.pre1)
7. InterruptedException when trying to open a new window (1.10.3)
8. Runtime error when trying to send e-mail (1.11, 1.11.1, 2.pre1)

JMSN bugs:
1. InvocationTargetException when trying to refresh the buddy list (0.9a, 0.9.2)
2. FileNotFoundException when trying to submit a bug/request report because the submission page doesn't exist (0.9a, 0.9.2, 0.9.5, 0.9.7, 0.9.8b7, 0.9.9b2)
3. NullPointerException when trying to check the validity of the login data (0.9.7, 0.9.8b7, 0.9.9b2)
4. SocketException and NullPointerException when stopping a socket that has been started (0.9.8b7, 0.9.9b2)

Crossword Sage bugs:
• NullPointerException in Crossword Builder when trying to delete a word (0.3.0, 0.3.1)
• NullPointerException in Crossword Builder when trying to suggest a new word (0.3.0, 0.3.1, 0.3.2, 0.3.5)
• NullPointerException in Crossword Builder when trying to write a clue for a word (0.3.0, 0.3.1, 0.3.2, 0.3.5)
• NullPointerException when loading a new crossword file (0.3.5)
• NullPointerException when splitting a word (0.3.5)
• NullPointerException when publishing the crossword (0.3.5)

4.1 Results
• Some bugs existed across applications because the applications share open-source GUI components (FileSave); such components should sanitize their inputs.
• Many bugs are persistent across versions.
• There are fewer bugs in the first version than in later versions, which is consistent with our experience.
• The reasons for crashes: (1) invalid text input (validity, size); (2) a widget enabled when it should be disabled; (3) an object declared but not initialized; (4) obsolete external resources.

5. Conclusions
• Recognition of the nature of the WWW enables the separation of GUI testing steps by level of automation, feedback, and resource utilization.
• Demonstration that resources may be better utilized by defining a concentric loop-based GUI testing approach.
• Demonstration that popular GUI-based OSS developed on the WWW have flaws that can be detected by our fully automated approach.
• Future work: a more detailed study of the overall benefits of this technique; extending the subject applications (testing TerpOffice incrementally); web applications that have complex back-ends; and the interaction between the three test loops, i.e., whether one loop can benefit from the execution of the inner loops and whether additional loops are needed.
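The slides above generate short test cases from an event-interaction graph and treat any uncaught exception as a crash. The sketch below is a minimal illustration of that idea under stated assumptions: the example graph, the event names, and the execute_event hook are hypothetical placeholders, not the tooling used by Xie and Memon.

```python
# Minimal sketch of crash testing over an event-interaction graph (EIG).
# The graph, the event names, and execute_event() are illustrative
# placeholders, not the instrumentation used in the referenced work.

def short_test_cases(graph):
    """Enumerate every length-two event sequence allowed by the EIG."""
    for event, successors in graph.items():
        for succ in successors:
            yield (event, succ)

def execute_event(event):
    """Placeholder: a real harness would drive one GUI event here."""
    pass

def crash_test(graph):
    """Run each short test case; an uncaught exception is the crash oracle."""
    crashes = []
    for case in short_test_cases(graph):
        try:
            for event in case:
                execute_event(event)
        except Exception as exc:        # crash oracle: uncaught exception
            crashes.append((case, repr(exc)))
    return crashes

# A hypothetical EIG as an adjacency list: an edge (e1, e2) means that
# system-interaction event e2 may directly follow e1.
example_eig = {
    "File.Open": ["Dialog.OK", "Dialog.Cancel"],
    "File.Save": ["Dialog.OK", "Dialog.Cancel"],
    "Dialog.OK": ["File.Open", "File.Save", "Edit.Paste"],
    "Dialog.Cancel": ["File.Open", "File.Save", "Edit.Paste"],
    "Edit.Paste": ["File.Open", "File.Save"],
}

if __name__ == "__main__":
    print(f"{len(list(short_test_cases(example_eig)))} test cases, "
          f"{len(crash_test(example_eig))} crashes")
```

Because each test case is only a couple of events long and stores expected state only at system-interaction events, such suites can be regenerated cheaply whenever the GUI changes, which is the point of the "testing loops" in the slides.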
ENCOUNTER TRUE-TIME ATPG

Encounter True-Time ATPGPart of the Encounter Test family, Encounter True-Time ATPG offers robust automated test patterngeneration (ATPG) engines, proven to generate the highest quality tests for all standard design-for-test (DFT) methods, styles, and flows. It supports not only industry-standard stuck-at and transition fault models, but also raises the bar on fault detection by providing defect-based, user-definable modeling capability with its patented pattern fault technology.Pattern fault technology is what enables the Encounter “gate-exhaustive” coverage(GEC) methodology, proven to be two-to-four times more efficient at detecting gate intrinsic faults than any other static methodologies available on the market (e.g. SSF, N-Detect).For delay test, True-Time ATPGincludes a dynamic timing engine and uses either circuit timing information or constraints to automaticallygenerate transition-based fault tests and faster-than-at-speed tests for identifying very deep sub-micron design-process feature defects (e.g. certain small delay defects).Figure 1: Encounter True-Time ATPG provides a timing-based ATPG engine driven by SDF or SDC informationOn-product clock generation (OPCG) produces and applies patterns to effectively capture this class of faults while minimizing false failures. Use of SDF or SDC information ensures the creation of a highly accurate timing-based pattern set.True-Time ATPG optimizes test coverage through a combination of topological random resistant fault analysis (RRFA) and deterministic fault analysis (DFA)with automated test point insertion—far superior to traditional test coverage algorithms. RRFA is used for early optimi-zation of test coverage, pattern density, and runtime performance. DFA is applied downstream for more detailed circuit-level fault analysis when the highest quality goals must be met.To reduce scan test time while maintaining the highest test coverage, True-Time technology provides intelligent ATPG with on-chip compression (XOR- or MISR-based). It is also power-aware and uses patented technologies to significantly reduce and manage power consumption during manufacturing test.True-Time ATPG also offers a customizable environment to suityour project development needs.The GUI provides highly interactive capabilities for coverage analysis and debug; it includes a powerful sequence analyzer that boosts productivity. 
Encounter True-Time ATPG is available in two offerings: Basic and Advanced.Benefits• Ensures high quality of shipped silicon with production-proven 2-4x reduction in test escapes• Provides superior partial scan coverage with proprietary pattern fault modeling and sequential ATPG algorithms• Optimizes test coverage with RRFA and DFA test point insertion methodology • Boosts productivity by integrating with Encounter RTL Compiler• Delivers superior runtime throughput with high-performance model build and fault simulation engines as well as distributed ATPG • Lowers cost of test with patterncompaction and compressiontechniques that maintain fullscan coverage• Balances tester costs with diagnosticsmethodologies by offering flexiblecompression architectures with fullX masking capabilities (includingOPMISR+ and XOR-based solutions)• Supports low pin-count testingvia JTAG control of MBIST andhigh-compression ratio technology• Supports reduced pin-count testing forI/O test• Interfaces with Encounter Power Systemfor accurate power calculation andpattern IR drop analysis• Reduces circuit and switching activityduring manufacturing test to managepower consumption• Reduces false failures due tovoltage drop• Provides a GUI with powerfulinteractive analysis capabilitiesincluding a schematic viewer andsequence analyzerEncounter TestPart of the Encounter digital design andimplementation platform, the EncounterTest product family delivers an advancedsilicon verification and yield learningsystem. Encounter Test comprises threeproduct technologies:• Encounter DFT Architect: ensuresease of use, productivity, and predict-ability in generating ATPG-readynetlists containing DFT structures, fromthe most basic to the most complex;available as an add-on option toEncounter RTL Compiler• Encounter True-Time ATPG: ensuresthe fewest test escapes and the highestquality shipped silicon at the lowestdevelopment and production costs• Encounter Diagnostics: delivers themost accurate volume and precisiondiagnostics capabilities to accelerateyield ramp and optimize device andfault modelingEncounter Test also offers a flexible APIusing the PERL language to retrieve designdata from its pervasive database. 
Thisunique capability allows you to customizeSoC Test Infrastructure• Maximize productivity• Maximize predictabilityTest Pattern Generation• Maximize product quality• Minimize test costsDiagnostic• Maximize yeld and ramp• Maximize silicon bring-upEncounter DFT Architect• Full-chip test infrastructure• Scan compression(XOR and MISR), BIST,IEEE1500, 1149.1/6• ATPG-aware insertionverification• Power-aware DFT and ATPGEncounter True-Time ATPG• Stuck-at, at-speed, andfaster-than-at-speed testing• Design timing drivestest timing• High-quality ATPGEncounter Diagnostics• Volume mode finds criticalyield limiters• Precision mode locatesroot cause• Unsurpassed silicon bring-upprecisionSiliconFigure 2: Encounter Test offers a complete RTL-to-silicon verification flow and methodologies that enable the highest quality IC devices at the lowest costreporting, trace connections in the design, and obtain information that might be helpful for debugging design issues or diagnostics.FeaturesTrue-Time ATPG BasicTrue-Time ATPG Basic contains thestuck-at ATPG engine, which supports:• High correlation test coverage, easeof use, and productivity through integration with the Encounter RTL Compiler synthesis environment• Full scan, partial scan, and sequential ATPG for edge-triggered andLSSD designs• Stuck-at, IDDQ, and I/O parametric fault models• Core-based testing, test data migration, and test reuse• Special support for custom designs such as data pipelines, scan control pipelines, and safe-scan• Test pattern volume optimization using RRFA-based test point insertion• Test coverage optimization usingDFA-based test point insertion• Pre-defined (default) and user-defined defect-based fault modeling andgate-exhaustive coverage based on pattern fault technology• Powerful GUI with interactive analysis capabilitiesPattern fault capability enables defect-based testing with a patented technology for accurately modeling the behavior of nanometer defects, such as bridges and opens for ATPG and diagnostics, and for specifying the complete test of a circuit. The ATPG engine, in turn, uses this definition wherever the circuit is instan-tiated within a design. By default, pattern faults are used to increase coverage of XOR, LATCH, FLOP, TSD, and MUX primi-tives. They can also be used to model unique library cells and transition and delay-type defects.True-Time ATPG AdvancedTrue-Time ATPG Advanced offers thesame capabilities as the Basic configu-ration, plus delay test ATPG functionality.It uses post-layout timing data from theSDF file to calculate the path delay of allpaths in the design, including distributiontrees of test clocks and controls. Usingthis information, you can decide on thebest cycle time(s) to test for in a givenclock domain.True-Time ATPG Advanced is capableof generating tests at multiple testfrequencies to detect potential early yieldfailures and certain small delay defects.You can specify your own cycle time orlet True-Time ATPG calculate one basedon path lengths. It avoids generating testsalong paths that exceed tester cycle timeand/or mask transitions along paths thatexceed tester cycle time. True-Time ATPGgenerates small delay defect patternsbased on longest path analysis to ensurepattern efficiency.A unique feature of the Advancedoffering is its ability to generate faster-than-at-speed tests to detect small delaydefects that would otherwise fail duringsystem test or result in early field failures.True-Time ATPG Advanced also usestester-specific constraint informationduring test pattern generation. 
Thecombination of actual post-layout timingand tester constraint information withTrue-Time ATPG Advanced algorithmsensures that the test patterns will work“first pass” on the tester.The test coverage optimizationmethodology is expanded beyond RRFAand DFA-based test point insertion(TPI) technology. The combinationof both topological and circuit-levelfault analysis with automated TPIprovides the most advanced capabilityfor ensuring the highest possible testcoverage while controlling the numberof inserted test points. DFA-based TPIBridge TestingFigure 3: Pattern faults model any type ofbridge behavior; net pair lists automaticallycreate bridging fault models; ATPG anddiagnostics use the models to detect andisolate bridgesFigure 4: Power-aware ATPG for scan and capture modes prevents voltage-drop–induced failures in test modeCadence is transforming the global electronics industry through a vision called EDA360.With an application-driven approach to design, our software, hardware, IP, and services helpcustomers realize silicon, SoCs, and complete systems efficiently and profitably. © 2012 Cadence Design Systems, Inc. All rights reserved. Cadence, the Cadence logo, Conformal, Encounter, and VoltageStorm are registered trademarks of Cadence Design Systems, Inc. All other s are properties of their respective holders.has links to Encounter Conformal ® Equivalence Checker to ensure the most efficient, logically equivalent netlist modifications with maximum controllability and observability.The ATPG engine works with multiple compression architectures to generate tests that cut costs by reducing scan test time and data volume. Actual compression ratios are driven by the compression architecture as well asdesign characteristics (e.g. available pins, block-level structures). Users can achieve compression ratios exceeding 100x.Flexible compression options allow you to select a multiple input signature register (MISR) architecture with the highest compression ratio, or an exclusive-or (XOR)–based architecture that enables a highly efficientcombinational compression ratio and a one-pass diagnostics methodology. Both architectures support a broadcast type or XOR-based decompressor.On-product MISR plus (OPMISR+) uses a MISR-based output compression, which eliminates the need to check the response at each cycle. XOR-based compression uses an XOR-tree–based output compression to enable a one-pass flow through diagnostics.Additionally, intelligent ATPG algorithms minimize full-scan correlation issues and reduce power consumption, deliv-ering demonstrated results of >99.5 stuck-at test coverage with >100x test time reduction. Optional X-state masking capability is available on a per-chain/ per-cycle basis. Masking is usuallyrequired when using delay test because delay ATPG may generate unknown states in the circuit.Using the Common Power Format (CPF), True-Time ATPG Advanced automatically generates test modes to enable individual power domains to be tested independently or in small groups. This, along with automaticrecognition and testing of power-specific structures (level shifters, isolation logic, state retention registers) ensures the highest quality for low-power devices.Power-aware ATPG uses industry-leading techniques to manage and significantly reduce power consumption due to scan and capture cycles during manufacturing test. The benefit is reduced risk of false failures due to voltage drop and fewer reliability issues due to excessive power consumption. 
True-Time ATPG Advanced uses algorithms that limit switching during scan testing to further reduce power consumption.Encounter Test offers a flexible API using the PERL language to retrievedesign data from its pervasive database. This unique capability allows users to customize reporting, trace connections in the design, and obtain information that might be helpful for debugging design issues or diagnostics.Platforms• Sun Solaris (64-bit)• HP-UX (64-bit)• Linux (32-bit, 64-bit)• IBM AIX (64-bit)Cadence Services and Support• Cadence application engineers can answer your technical questions by telephone, email, or Internet—they can also provide technical assistance and custom training • Cadence certified instructors teach more than 70 courses and bring their real-world experience into the classroom • More than 25 Internet Learning Series (iLS) online courses allow you the flexibility of training at your own computer via the Internet • Cadence Online Support gives you24x7 online access to a knowledgebase of the latest solutions, technicaldocumentation, software downloads, and more。
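As generic background to the fault-model terminology in the datasheet above: a single stuck-at fault assumes one circuit node is permanently fixed at 0 or 1, and stuck-at coverage is the fraction of such faults that at least one test pattern distinguishes from the fault-free circuit. The toy sketch below only illustrates that definition; the two-gate netlist and patterns are invented, and it has no relation to how Encounter True-Time ATPG computes coverage.

```python
# Toy illustration of stuck-at fault coverage on the netlist
# y = (a AND b) OR NOT c. Generic background only; not a vendor flow.

from itertools import product

NODES = ["a", "b", "c", "ab", "nc", "y"]

def simulate(pattern, fault=None):
    """Evaluate the netlist; fault is a (node, stuck_value) pair or None."""
    def assign(node, value):
        # Apply the stuck-at fault if it sits on this node.
        if fault is not None and fault[0] == node:
            return fault[1]
        return value
    a = assign("a", pattern["a"])
    b = assign("b", pattern["b"])
    c = assign("c", pattern["c"])
    ab = assign("ab", a & b)          # AND gate
    nc = assign("nc", 1 - c)          # inverter
    return assign("y", ab | nc)       # OR gate, primary output

def stuck_at_coverage(patterns):
    """Fraction of single stuck-at faults detected by at least one pattern."""
    faults = [(node, v) for node in NODES for v in (0, 1)]
    detected = sum(
        any(simulate(p) != simulate(p, fault) for p in patterns)
        for fault in faults)
    return detected / len(faults)

if __name__ == "__main__":
    all_patterns = [dict(zip("abc", bits)) for bits in product((0, 1), repeat=3)]
    print(f"coverage with exhaustive patterns: {stuck_at_coverage(all_patterns):.0%}")
```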
Model-Based Testing Through a GUI

Antti Kervinen (1), Mika Maunumaa (1), Tuula Pääkkönen (2), and Mika Katara (1)
(1) Tampere University of Technology, Institute of Software Systems, P.O. Box 553, FI-33101 Tampere, FINLAND, {stname}@tut.fi
(2) Nokia Technology Platforms, P.O. Box 68, FI-33721 Tampere, FINLAND

In Proceedings of the 5th International Workshop on Formal Approaches to Testing of Software (FATES 2005), Edinburgh, Scotland, UK, July 2005. Number 3997 in Lecture Notes in Computer Science, pages 16-31. Springer 2006. © Springer-Verlag.

Abstract. So far, model-based testing approaches have mostly been used in testing through various kinds of APIs. In practice, however, testing through a GUI is another equally important application area, which introduces new challenges. In this paper, we introduce a new methodology for model-based GUI testing. This includes using Labeled Transition Systems (LTSs) in conjunction with action word and keyword techniques for test modeling. We have also conducted an industrial case study where we tested a mobile device and were able to find previously unreported defects. The test environment included a standard MS Windows GUI testing tool as well as components implementing our approach. Assessment of the results from an industrial point of view suggests directions for future development.

1 Introduction

System testing through a GUI can be considered as one of the most challenging types of testing. It is often done by a separate testing team of domain experts that can validate that the clients' requirements have been fulfilled. However, the domain experts often lack programming skills and require easy-to-use tools to support their work. Compared to application programming interface (API) testing, GUI testing is made more complex by the various user interface issues that need to be dealt with. Such issues include input of user commands and interpretation of the output results, for instance, using text recognition in some cases. Developers are often reluctant to implement system-level APIs only for the purposes of testing. Moreover, general-purpose testing tools need to be adapted to use such APIs.

In contrast, a GUI is often available and there are several general-purpose GUI testing tools, which can be easily taken into use. Among the test automation community, however, GUI testing tools are not considered an optimal solution. This is largely due to bad experiences in using so-called capture/replay tools that capture key presses, as well as mouse movement, and replay those in regression tests. The bad experiences are mostly involved with high maintenance costs associated with such a tool [1]. The GUI is often the most volatile part of the system and possible changes to it affect the GUI test automation scripts. In the worst case, the selected capture/replay tool uses bitmap comparisons to verify the results of the test runs. False negative results can then be obtained from minor changes in the look and feel of the system. In practice, such test automation needs maintenance whenever the GUI is changed.
For instance, a single action word can be defined to open a selected file whose name is given as a parameter. The idea is that domain experts can design the test cases easily using action words even before the system implementation has been started. Test automation engineers then define the keywords that implement the action words using the scripting language provided by the GUI automation tool.

Although some tools use smarter comparison techniques than pure bitmaps and provide advanced test design concepts, such as keywords and action words, the maintenance costs can still be significant. Moreover, such tools seldom find new bugs and return the investment only when the same test suites are run several times, such as in regression testing. The basic problem is the static and linear nature of the test cases. Even if only 10% of the test cases needed to be updated whenever the system under test changes, this could mean modifying one hundred test cases in a regression suite of one thousand tests.

Our goal is to improve the status of GUI testing with model-based techniques. Firstly, by using test models to generate test runs, we will not run into difficulties with maintaining large test suites. Secondly, we have better chances of finding previously undetected defects, since we are able to vary the order of events. Towards these ends, we propose a test automation approach based on Labeled Transition Systems (LTSs) as well as action words and keywords. The idea is to describe a test model as an LTS whose transitions correspond to action words. This should be made as easy as possible, also for testers with no programming skills. The maintenance effort should be localized to a single model or a few component models. The action machines we introduce are composed in parallel with refinement machines that map the action words to sequences of keywords. The resulting composite LTS is then read into a general-purpose GUI testing tool that interprets the keywords and walks through the model using some heuristics. The tool also verifies the test results and handles the reporting.

The contributions of this paper lie in formalizing the above scheme, introducing a novel test model architecture, and applying the approach in an industrial case study. Finally, we have assessed the results from an industrial point of view.

The rest of the paper is structured as follows. Sections 2 and 3 describe our approach in detail as well as the case study we have conducted. The assessment of the results is given in Section 4.
Related work is discussed in Section 5 and conclusions are drawn in Section 6.

2 Building a Test Model Architecture

In the following, we develop a layered test model architecture for testing several concurrently running applications through a GUI. The basis for the layering is in the keyword and action word techniques, and therefore we first introduce how to adapt these concepts to model-based testing. As a running example, we use testing of Symbian applications. Symbian [4] is an operating system targeted at mobile devices such as smartphones and PDAs. The variety of features available resembles testing of PC applications, but there are also characteristics of embedded systems. For instance, there is no access to the resources of the GUI. In the following, the term system under test (SUT) refers to a device running Symbian OS.

2.1 Action Words and Keywords

As Buwalda [3] recommends in his description of action-based testing, test designers should focus on high-level concepts in test design. This means modeling business processes and picking interesting sequences of events for discovering possible errors. These high-level events are called action words. The test automation engineer then implements the action words with keywords, which act as the concrete implementation layer of test automation.

An example of a keyword from our Symbian test environment is kwPressKey, modeling a key press. The keyword could be used, for instance, in a sequence that models starting a calculator application. Such a sequence would correspond to a single action word, say awStartCalculator. Thus, action words represent abstract operations like "make a phone call", "open Calculator", etc. Implementations of action words can consist of sequences of keywords with related parameters as test data. However, the difference between keywords and action words is somewhat in the eye of the beholder. The most generic keywords can almost be considered action words in the sense of functionality; the main difference is in the purpose of use and the level of abstraction.

Our focus is on the state machine side of action-based testing. We do not consider decision tables, which are recommended as one alternative for handling test combinations [3]. However, there have been industrial implementations using spreadsheets to describe keyword combinations for running test cases, and they have proven quite useful. Experience also suggests that the keywords need to be well described and agreed upon jointly, so that the same test cases can be shared throughout an organization.
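To make the relation between the two levels concrete, the sketch below encodes one action word as a fixed keyword sequence. The keyword names follow the kwPressKey/kwVerifyText convention used in our environment, but the particular key parameters and the run_action_word helper are illustrative assumptions, not taken from the case study's library.

```python
# A hypothetical refinement of one action word into a keyword sequence.
# Keyword names follow the paper's convention; the parameters and the helper
# function are illustrative assumptions, not the case study's actual library.
ACTION_WORDS = {
    "awStartCalculator": [
        ("kwPressKey", "AppMenu"),       # open the application menu
        ("kwPressKey", "Center"),        # select the highlighted application
        ("kwVerifyText", "Calculator"),  # check that the application opened
    ],
}

def run_action_word(name, execute_keyword):
    """Execute every keyword of an action word through a keyword executor."""
    for keyword, param in ACTION_WORDS[name]:
        execute_keyword(keyword, param)
```

In the model-based setting described next, a refinement is expressed not as a fixed list like this but as a refinement machine, which allows alternative keyword sequences for the same action word.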
2.2 Test Model, Action Machines and Refinement Machines

We use the TVT verification toolset [5] to create test models. With the tools, the most natural way to express the behavior of a SUT is an LTS. We use two tools in the toolset: a graphical editor for drawing LTSs, and a parallel composition tool for calculating the behavior of a system in which its input LTSs run in parallel. We compose our test model LTS from small, hand-drawn LTSs with the parallel composition tool. The test model specifies a part of the externally observable behavior of the SUT. At most that part will be tested when the model is used in test runs.

In our test model architecture, hand-drawn LTSs are divided into two classes. Action machines are the model-based equivalent of test cases expressed with action words, whereas refinement machines give executable contents to action words, that is, refinement from action words to keywords. In the following we formalize these concepts.

Fig. 1. Transition splitter and parallel composition: action machine A, its split version A_s, refinement machine R, and their composition P = ∥_R(A_s, R).

Definition 1 (LTS). A labeled transition system, abbreviated LTS, is defined as a quadruple (S, Σ, ∆, ŝ) where S denotes a set of states, Σ is a set of actions (alphabet), ∆ ⊆ S × Σ × S is a set of transitions and ŝ ∈ S is an initial state.

Our test model is a deterministic LTS. An LTS (S, Σ, ∆, ŝ) is deterministic if there is no state in which any leaving transitions share the same action name (label). For example, there are four such LTSs in Figure 1, with their initial states marked with filled circles.

Action machines and refinement machines are LTSs whose alphabets include action words and keywords, respectively. In Figure 1, A is an action machine and R is a refinement machine. Action machines describe what should be tested at the action word level. In A, the application should first be started, then verified to be running, and finally quitted. After quitting, the application should be started again, and so on. Refinement machines specify how the action words in action machines can be implemented. Keyword sequences that implement an action word a are written in between start_a and end_a transitions. In Figure 1, R refines two action words of A. Firstly, it provides two alternative implementations for action word awStartC: to start an application, a user can either use a shortcut key (by pressing "SoftRight") or select the application from a menu. Secondly, verification that the application is running is always carried out by checking that the text "Camera" is on the screen. The action word for quitting the application is not refined by R, but another refinement machine can be used for that.

During test execution, we keep track of the current state of the test model, starting from the initial state. One of the transitions leaving the current state is chosen. If the label of the transition is not a keyword, the test execution continues from the destination state of the transition. Otherwise, the action corresponding to the keyword is taken: a key is pressed, a text is searched for on the display, etc. These actions either succeed or fail. For example, text verification succeeds if and only if the searched text can be found on the display. Because failing an action is sometimes allowed, or even required, we need a way to specify the expected outcomes of actions in the test model. For that, we use the labeling of transitions. There can be two labels (with and without a tilde) for some keywords; kwVerifyText<'Clock alarm'> and ~kwVerifyText<'Clock alarm'>, for instance. The former label states that in the source state of the transition, searching for the text 'Clock alarm' is allowed to succeed, while the latter allows the search to fail. If the taken action succeeded (failed), a transition without (with) a tilde is searched for in the current state. If there is no such transition, an error has been found (that is, the behavior differs from what is required in the test model). Otherwise, the test execution continues from the destination state of the transition.

Hence, our testing method resembles the "exploration testing" introduced in [6]. However, we do not need separate output actions. This is because the only way we can observe the behavior of the SUT is to examine its GUI through the latest screen capture. In addition, there are many actions that are neither input (keyword) nor output actions. They can be used in debugging (in the execution log, one can see which action word we tried to execute when an error was detected) and in measuring coverage (for instance, covered high-level actions can be determined).
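As a minimal illustration of Definition 1, the following sketch encodes an LTS as a Python named tuple and rebuilds the action machine A of Figure 1 as a three-state cycle. The state names s0–s2 are our own, since the figure is not reproduced here; the determinism check mirrors the definition above.

```python
from collections import namedtuple

# An LTS per Definition 1: states S, alphabet Sigma, transitions Delta,
# and the initial state s_hat.
LTS = namedtuple("LTS", ["states", "alphabet", "transitions", "initial"])

def is_deterministic(lts):
    """No state may have two leaving transitions with the same label."""
    seen = set()
    for (source, action, _target) in lts.transitions:
        if (source, action) in seen:
            return False
        seen.add((source, action))
    return True

# Action machine A of Fig. 1: start the application, verify it, quit it, repeat.
# The state names are illustrative, not taken from the figure.
A = LTS(
    states={"s0", "s1", "s2"},
    alphabet={"awStartC", "awVerifyC", "awQuitC"},
    transitions={("s0", "awStartC", "s1"),
                 ("s1", "awVerifyC", "s2"),
                 ("s2", "awQuitC", "s0")},
    initial="s0",
)
assert is_deterministic(A)
```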
2.3 Composing a Test Model

We use parallel composition for two different purposes. The main purpose is to create test models that enable extensive testing of concurrent systems. This means that we can test many applications simultaneously. It is clearly more efficient than testing only one application at a time, because interactions between the applications are then also tested. The other purpose is to refine the action machines by injecting the keywords of their refinement machines into the correct places in them.

Refinement could be carried out to some extent by replacing transitions labeled by action words with the sequences of transitions specified in the refinement machines. However, this kind of macro expansion mechanism would always expand an action word to the same keywords, which might not be wanted. For example, it is handy to expand action word "show image" to keywords "select the second menu item" and "press show button" when it is executed for the first time. Later on, the second item should be selected by default in the image menu, and therefore the action word should be expanded to keyword "press show button" only. We avoid the limits of the macro expansion mechanism by splitting transitions in the action machines and then letting the parallel composition do the refinement. The transition splitter divides transitions with given labels in two by adding a new state between the original source and destination states. If the label of a split transition is "a", the new transitions are labeled "start_a" and "end_a".

Definition 2 (Transition splitter split_A). Let L be an LTS (S, Σ, ∆, ŝ) and A a set of actions. S_new = { s_(s,a,s') | (s, a, s') ∈ ∆ ∧ a ∈ A } is a set of new states (S ∩ S_new = ∅). Then split_A(L) is an LTS (S', Σ', ∆', ŝ') where
– S' = S ∪ S_new
– Σ' = (Σ \ A) ∪ { start_a | a ∈ A } ∪ { end_a | a ∈ A }
– ∆' = { (s, a, s') ∈ ∆ | a ∉ A } ∪ { (s, start_a, s_(s,a,s')) | (s, a, s') ∈ ∆ ∧ a ∈ A } ∪ { (s_(s,a,s'), end_a, s') | (s, a, s') ∈ ∆ ∧ a ∈ A }
– ŝ' = ŝ

In Figure 1, LTS A_s is obtained from A by splitting the transitions labeled awStartC and awVerifyC.
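A direct transcription of Definition 2, reusing the LTS tuple from the sketch above; the new intermediate state is represented simply by the original transition triple, which keeps it distinct from the old states (assuming those are not triples themselves).

```python
def split(lts, action_set):
    """Transition splitter of Definition 2: every transition (s, a, s') with
    a in action_set becomes s --start_a--> s_(s,a,s') --end_a--> s'."""
    states = set(lts.states)
    alphabet = ((lts.alphabet - action_set)
                | {"start_" + a for a in action_set}
                | {"end_" + a for a in action_set})
    transitions = set()
    for (source, action, target) in lts.transitions:
        if action not in action_set:
            transitions.add((source, action, target))
        else:
            new_state = (source, action, target)   # the fresh state s_(s,a,s')
            states.add(new_state)
            transitions.add((source, "start_" + action, new_state))
            transitions.add((new_state, "end_" + action, target))
    return LTS(states, alphabet, transitions, lts.initial)
```

For example, split(A, {"awStartC", "awVerifyC"}) yields the split machine A_s of Figure 1.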
As already mentioned, we construct the test model with parallel composition. We use a parallel composition that resembles the one given in [7]. Whereas traditional parallel compositions synchronize syntactically identical actions of the participating processes, our parallel composition is given explicitly the combinations of actions that should be synchronized and the results of the synchronous executions. This way we can state, for example, that action a in process P_x is synchronized with action b in process P_y and that their synchronous execution is observed as action c (the result). The set of combinations and results is called rules. The parallel composition combines component LTSs into a single composite LTS in the following way.

Definition 3 (Parallel composition ∥_R). ∥_R(L_1, ..., L_n) is the parallel composition of n LTSs according to rules R, where LTS L_i = (S_i, Σ_i, ∆_i, ŝ_i). Let Σ_R be a set of resulting actions and √ a "pass" symbol such that ∀i: √ ∉ Σ_i. The rule set R ⊆ (Σ_1 ∪ {√}) × ··· × (Σ_n ∪ {√}) × Σ_R. Now ∥_R(L_1, ..., L_n) = (S, Σ, ∆, ŝ), where
– S = S_1 × ··· × S_n
– Σ = { a ∈ Σ_R | ∃ a_1, ..., a_n : (a_1, ..., a_n, a) ∈ R }
– ((s_1, ..., s_n), a, (s'_1, ..., s'_n)) ∈ ∆ if and only if there is (a_1, ..., a_n, a) ∈ R such that for every i (1 ≤ i ≤ n) either
  • (s_i, a_i, s'_i) ∈ ∆_i, or
  • a_i = √ and s'_i = s_i
– ŝ = (ŝ_1, ..., ŝ_n)

A rule in a parallel composition associates an array of actions (or "pass" symbols √) of the input LTSs with an action of the resulting LTS. The action is the result of the synchronous execution of the actions in the array. If there is a √ instead of an action, the corresponding LTS does not participate in the synchronous execution described by the rule. In Figure 1, P is the parallel composition of A_s and R with the rules

R = { (start_awStartC, start_awStartC, start_awStartC),
      (end_awStartC, end_awStartC, end_awStartC),
      (start_awVerifyC, start_awVerifyC, start_awVerifyC),
      (end_awVerifyC, end_awVerifyC, end_awVerifyC),
      (awQuitC, √, awQuitC),
      (√, kwPressKey<'AppMenu'>, kwPressKey<'AppMenu'>),
      (√, kwPressKey<'Center'>, kwPressKey<'Center'>),
      (√, kwPressKey<'SoftRight'>, kwPressKey<'SoftRight'>),
      (√, kwVerifyText<'Camera'>, kwVerifyText<'Camera'>) }

Fig. 2. Test model architecture: action machines split_A(G), split_A(TS) and split_A(V) are connected to refinement machines R_G1, R_G2, R_TS and R_V through start_aw*/end_aw* actions; G and V are connected to TS through INT and IRET, and to each other through FROM V IRST G.

When using such rules, the results of the parallel composition include action words (with start_ and end_ prefixes) and keywords (and possibly some other actions). However, when test models are walked through during a test run, communication with the SUT takes place only when keywords are encountered.
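The composition of Definition 3 can be sketched as follows, again on top of the LTS tuple used above. To keep the sketch practical it only builds the part of S_1 × ··· × S_n that is reachable from the initial state, which is all that a test run can ever visit; a rule is a tuple whose last element is the resulting action and whose other elements are component actions or the pass symbol.

```python
from collections import deque

PASS = "PASS"   # stands for the "pass" symbol (the √ of Definition 3)

def parallel_composition(rules, *ltss):
    """Rule-based parallel composition, restricted to reachable states.
    Each rule is (a_1, ..., a_n, result): component i executes a_i,
    or keeps its state if a_i is PASS."""
    initial = tuple(lts.initial for lts in ltss)
    states, alphabet, transitions = {initial}, set(), set()
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for rule in rules:
            *actions, result = rule
            successors = []
            for comp_state, action, lts in zip(state, actions, ltss):
                if action == PASS:
                    successors.append(comp_state)
                    continue
                targets = [t for (s, a, t) in lts.transitions
                           if s == comp_state and a == action]
                if not targets:
                    break                          # rule not enabled in this state
                successors.append(targets[0])      # components are deterministic
            else:
                new_state = tuple(successors)
                alphabet.add(result)
                transitions.add((state, result, new_state))
                if new_state not in states:
                    states.add(new_state)
                    queue.append(new_state)
    return LTS(states, alphabet, transitions, initial)
```

Encoding the rule set above as Python tuples, with PASS in place of √, this function would reproduce the composite machine P = ∥_R(A_s, R) of Figure 1.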
2.4 Test Model Architecture

In the SUT, several applications can run simultaneously, but only one can be active at a time. The active application receives all user input except the input that activates a task switcher. The user can activate an already running application with the task switcher. This setting forces us to restrict the concurrency (interleavings of actions) in the test model. Otherwise, the test model would allow executing first one keyword in one application and then another keyword in another application without activating the other application first. This would lead to a situation where the test model assumes that both applications have received one input, but in reality the first application received two inputs and the other none.

Because the activation itself must be expressed as a sequence of keywords, it is natural to model the task switcher as a special application, a sort of scheduler. The task switcher starts executing when an active application is interrupted, and stops when it activates another (or the same) application. Although the absence of interleaved actions might make the parallel composition look like an unnecessarily complicated tool for building the model, it is not. The composition generates a test model that contains all combinations of states in which the tested application can be inactive. Thus, it enables rigorous testing of every application in every combination of states of the other applications in the background.

Technically, we have one action machine for every application to be tested, and one action machine for task switching: for instance, action machines G (Gallery application), V (Voice recorder application) and TS (task switcher). Action machines are synchronized with each other and with their refinement machines, as shown in Figure 2. Before the synchronization, all action words of the action machines are split. In the figure, lines that connect action machines to refinement machines represent synchronizing the split action words of the connected processes. For instance, we have a rule for synchronizing split_A(G) and R_G1 with action start_awVerifyImageList and another rule for split_A(G) and R_G2 with start_awViewImage. There are also rules that allow execution of every keyword in the refinement machines without synchronization.

Synchronizations that take care of task switching are presented with lines that connect G and V to TS in the figure. Both G and V include actions INT and IRET that represent interrupting the application and returning from the interrupt. Initially, the Gallery application is active. If G executes INT synchronously with TS, G goes to a state where it waits for IRET. Meanwhile, TS executes keywords that activate another (or the same) application in the SUT and then executes IRET synchronously with the corresponding action machine.

Finally, there is a connector labeled FROM V IRST G in Figure 2. It represents the "go to Gallery" function of Voice recorder. In our SUT, the function activates the Gallery application and opens its sound clips menu. Voice recorder is deactivated but left in the background. In the test model, action FROM V IRST G is the result of synchronizing action IGOTO<Gallery> in V with action IRST<VoiceRecorder> in G. The first action leads V to an interrupted state from which it can continue only by executing IRET synchronously with TS. The second action lets G continue from an interrupted state, but forces it to a state where the sound clip menu is assumed to be on the screen.

Formally, our test model TM is obtained from the expression

TM = ∥_R( split_A(G), split_A(TS), split_A(V), R_G1, R_G2, R_TS, R_V )

where set A contains all the action words and rule set R is as outlined above.
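Assuming that the hand-drawn component LTSs (G, TS, V, R_G1, R_G2, R_TS, R_V) and the rule set RULES have already been constructed, the expression above could be evaluated with the earlier sketches roughly as follows; deriving the action word set from the "aw" prefix is our own convention here, not something prescribed by the method.

```python
# Assembling TM from the component machines with the earlier sketches.
# G, TS, V, R_G1, R_G2, R_TS, R_V and RULES are assumed to exist already;
# picking action words by their "aw" prefix is an assumption of this sketch.
ACTION_WORD_SET = {a for machine in (G, TS, V)
                     for a in machine.alphabet if a.startswith("aw")}

TM = parallel_composition(
    RULES,                          # rule set R as outlined above
    split(G, ACTION_WORD_SET),      # split_A(G)
    split(TS, ACTION_WORD_SET),     # split_A(TS)
    split(V, ACTION_WORD_SET),      # split_A(V)
    R_G1, R_G2, R_TS, R_V)
```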
One advantage of this architecture is that it allows us to reuse the component LTSs with a variety of SUTs. For example, if the GUI of some application changes, all we need to change is the refinement machine of the corresponding action machine. If a feature of an application should not be tested, it is enough to remove the corresponding action words from the application's action machine. If an application should not be tested, we simply drop its LTSs from the parallel composition. Accordingly, if a new application should be tested, we add the action machine and the refinement machine for it (TS and R_TS must also be changed to be able to activate the new application, but they are simple enough to be generated automatically). Moreover, if we test a new SUT with the same features but a completely different GUI, we redraw the refinement machines for the SUT but use the same action machines.

While refinement machines can be changed without touching their action machines, changing an action machine easily causes changes in its refinement machines. If a new action word is introduced in an action machine, either its refinement machine has to be extended correspondingly or a new refinement machine has to be added to the parallel composition. In addition, changing the order of action words inside an action machine may cause changes in its refinement machine. For example, action word awChooseFirstImageInGallery can be unfolded to different sequences of keywords depending on the state of the SUT in which the action word is executed. In one state, the Gallery application may already show the image list. Then the action can be completed by a keyword that selects the first item in the list. However, in another state, Gallery may show a list of voice samples, and therefore the refinement should first bring up the list of images before the first image can be selected. Thus, action words may contain hidden assumptions about the SUT's state in which the action takes place. Of course, one can make these assumptions explicit, for example, by extending the action label: awChooseFirstImageInGalleryWhenImageListIsShown.

Fig. 3. Test environment

3 System Testing on Symbian Platform

The above theory was developed in conjunction with an industrial case study. In this section, we describe the case study, including the test environment and setting that we used. Moreover, we outline the implementation of our model-based test engine and explain the modeling process concerning keyword selection and the creation of the test model itself. In addition, we briefly evaluate our results.

3.1 Test Environment and Setting

The system we tested was a Symbian-based mobile device with Bluetooth capability. The test execution system was installed on a PC, and it consisted of two main components: a test automation tool, including our test execution engine, and remote control software for the SUT. The test environment is depicted as a UML deployment diagram in Figure 3. We applied the TVT tools for creating the test model.

As the test automation tool we used Mercury's QuickTest Pro (QTP) [8]. QTP is a GUI testing tool for MS Windows capable of capturing information about window resources, such as buttons and text fields, and providing access to those resources through an API. The tool also enables writing and executing test procedures using a scripting language (Visual Basic Script, VBScript in the following) and recording a test log when requested.

The remote control tool we used was m-Test by Intuwave [9]. It provides access to the GUI of the SUT and to some internal information, such as a list of running processes.
m-Test makes it possible to navigate remotely through the GUI (see Figure 4, on the left-hand side). The GUI resources visible on the display cannot be obtained; only a bitmap of the display is available. m-Test can also recognize the text visible on the display and convert it back to characters (see Figure 4, on the right-hand side). m-Test supports various ways to connect to the SUT; in the study we used a Bluetooth radio link.

Moreover, in the beginning of the study we obtained a VBScript function library. It was originally developed to serve as a library of keyword implementations for conventional test procedures in system testing of the SUT. For example, for pushing a button there was a function called 'Key', etc.

Fig. 4. Inputs and outputs of SUT as seen from m-Test

3.2 Test Engine

The test execution engine, which executes the LTS state machine, consisted of four parts: execution engine TestRunner, state model Model, transition selector Chooser, and keyword proxy SUTAdapter (see Figure 3). TestRunner was responsible for executing the transition events selected by Chooser, using the keyword function library via SUTAdapter. Based on the result of executing a keyword, TestRunner determines whether the test run should continue or not. If the run can continue, the cycle repeats until the number of executed transitions exceeds the maximum number of steps. The test designer determines this step limit, which is provided as a parameter.

Model was constructed from states (State), transitions (Transition), and their connections to each other. The test model (LTS) is read from a file and translated into a state object model, which provides access to the states and transitions.

Chooser selects the transition to be executed in the current state. The selection method can be random or probabilistic, based on weights attached to the transitions. Naturally, a more advanced Chooser could also be based on an operation profile [10] or game theory [11], for instance. Since the schedule of our case study was tight, we chose the random selection algorithm because it was the easiest to implement.
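The cycle that TestRunner, Model, Chooser and SUTAdapter implement can be outlined as follows. This is only a sketch of the walk described in Section 2.2 and above: transitions are triples as in the earlier sketches, keywords are recognized by a "kw" prefix, tilde labels encode an expected keyword failure, and sut_adapter.execute is a stand-in for the keyword proxy, assumed to return True when the keyword succeeds on the SUT.

```python
import random

def run_test(model, sut_adapter, max_steps, rng=random):
    """Walk the composite LTS: pick transitions, execute keywords through the
    keyword proxy, and compare the outcome to what the model allows."""
    state = model.initial
    for _ in range(max_steps):                    # step limit set by the test designer
        leaving = [t for t in model.transitions if t[0] == state]
        if not leaving:
            return "no transitions left"          # dead end in the test model
        _, label, target = rng.choice(leaving)    # random Chooser
        keyword = label.lstrip("~")
        if not keyword.startswith("kw"):
            state = target                        # action words etc.: just advance
            continue
        succeeded = sut_adapter.execute(keyword)  # keyword proxy talks to the SUT
        expected = keyword if succeeded else "~" + keyword
        match = [t for t in leaving if t[1] == expected]
        if not match:
            return "error found"                  # SUT behavior differs from the model
        state = match[0][2]
    return "step limit reached"
```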
The keyword function library that we obtained served as our initial keyword implementation. However, during the early phases of the study it became apparent that it was not suitable for our purposes. The library had too much built-in control over the test procedure. In contrast, our approach requires that the test verdict be given by the test engine. The reason is that sometimes a failure of the SUT is actually the desired result. In addition, since the flow control was partly embedded in the library, we did not have any keyword that would report the status of the SUT. For that reason, we created a keyword proxy (SUTAdapter). Its purpose was to hide the original function library keywords, use or re-implement them to fit our purpose, and add some new keywords.

Table 1. Keyword categories

Category            Keyword / description                                Param.
Command             kwPressKey                                           'keyLeft'
                    kwWriteText                                          'Hello'
Navigate            Select a menu item;
                    activates an application if started
Query               kwVerifyText                                         'Move'
Control             Activates a device to receive subsequent commands;
                    start an application
State verification  kwIsMenuSelected                                     'Move'

3.3 Keyword Categories

We discovered that there must be at least five types of keywords: command, navigate, query, control, and state verification. As an example, some keywords from each category are shown in Table 1. The command type keywords are the most obvious ones: they send input to the SUT, for instance, "press key" or "write text". Navigation keywords, such as "select menu item", are used to navigate in the GUI. Query keywords are used to compare texts or images on the display. Control keywords are used to manage the state of the SUT. These four keyword groups are well suited for most common testing situations. However, our approach allowed us to create several situations where the state verification keywords were also needed.

State verification keywords verify that the SUT is in some particular state (for instance, "Is menu text selected") or that some sporadic event, like a phone call, has occurred. These keywords were essential, because the environment did not allow us to capture such information otherwise. The state of the SUT was only available through indirect clues extracted from the display bitmap. Because of this, the test model occasionally misinterpreted the state of the SUT or missed an event. This made test modeling somewhat more complicated than we anticipated. The biggest difference between the query and state verification keywords is in the intent of their use. Queries are used to determine the presence of texts etc. on the display, whereas state verification keywords check whether the GUI is in a required state. The latter are used to detect whether the SUT is in a wrong state, i.e., whether a failure has occurred.

Missing an event was the most common error in the model, and it occurred often when exact timing was required (as in testing an alarm). This problem was probably caused by the slow communication between QTP and m-Test. There were several occasions when an event was missed just because the execution of a keyword was too slow or the execution time varied between runs.