Foreign Literature Translation: Software Testing Strategies


Computer Science Foreign Literature Translation: An Introduction to Visual Basic


Graduation Project (Thesis) Foreign Literature Translation. Department: Computer Science. Major: Computer Science and Technology. Name: (blank). Student ID: 111001203. Source: Learn Visual Basic in 24 Hours, Hour 1, "Visual Basic at Work". Attachments: 1. Translated text; 2. Original text.

Attachment 1: Translated Text. An Introduction to Visual Basic

What is Visual Basic? Microsoft Visual Basic 5.0 is the latest and best incarnation of the old BASIC language, giving you a complete Windows application development system in one package. Visual Basic (often called VB) lets you write, edit, and test Windows applications. In addition, VB includes tools you can use to write and compile help files, ActiveX controls, and even Internet applications.

Visual Basic is itself a Windows application. You load and execute the VB system just as you do other Windows programs, and you use this running VB program to create other programs. Although VB is an extremely useful tool, it is still only a tool that programmers (people who write programs) use to write, test, and run Windows applications.

Programmers often use the terms program and application interchangeably, but the term application seems to fit best when you are describing a Windows program, because a Windows program typically consists of several files. These files work together in the form of a project. The project produces the final program, which the user loads and runs from Windows by double-clicking an icon or by starting the application from the Windows Start menu.

The role of programming tools has evolved over the past 45 years along with computer hardware. A programming language today, such as Visual Basic, differs greatly from the programming languages of just a few years ago. The visual nature of the Windows operating system demands more advanced tools than were available before. Before windowed environments, a programming language was a simple text-based tool with which you wrote programs. Today you need more than just a language: you need a graphical development tool that works inside the Windows system and that can take advantage of all the graphics, multimedia, online, and multitasking capabilities that Windows offers.

Software Testing Management: Foreign Literature Translation (English-Chinese)


Value-Based Software Test Management

Abstract: Studies show that testing has become a very important part of the software development process, accounting for thirty to fifty percent of total software development cost. Yet testing is usually organized neither to maximize business value nor to serve the mission of the project. Path testing, branch testing, instruction testing, mutation testing, scenario testing, requirements testing, and similar techniques implicitly treat every aspect of the software as equally important. In practice, however, eighty percent of the value often comes from twenty percent of the software. To obtain the greatest return on investment from software testing, test management needs to maximize its value contribution. In this chapter we motivate the need for value-based testing, describe practices that support value-based test management, sketch a framework for value-based test management, and illustrate that framework with an example.

Keywords: value-based software testing, value-based testing, test cost, test benefit, test management

11.1 Introduction

Testing is the most important and most widely used method in the software quality assurance process. Verification and validation aim to ensure, through comprehensive analysis and testing, that the software performs its functions correctly, and to assure software quality and reliability. In IEEE 610.12 (1990), testing is defined as the activity of executing a system or component under specified conditions, observing and recording the results, and evaluating the system or component. Testing is widely used in practice and plays an important role in the quality assurance strategies of many organizations. Software affects the daily lives of millions of people and carries out critical tasks, so it will become even more important in the near future.

Studies show that testing typically consumes 30% to 50% of software development cost. For safety-critical systems, even higher percentages are not unusual. The challenge for software testing is therefore to find more effective ways to test. The value of software test management lies in reducing test cost while still satisfying requirements; value-oriented test management also provides good guidance toward project goals and business value.

In Chapter 1, Boehm gives an example of the potential costs of testing, in which 50% of the testing benefit is achieved with 7% of the cost by concentrating on the customer billing types. Although one hundred percent testing is an unrealistic goal, there is still considerable room for improvement and savings by adjusting the testing approach to the expected value. The motivation for value-based software engineering is that current software engineering research and practice treat every requirement, test case, test object, and product defect as equally important.
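The value-based idea described above can be made concrete with a toy sketch (not part of the original chapter; every test name, value, and cost below is an invented assumption): hypothetical test cases are scheduled greedily by their value-to-cost ratio within a limited budget, so high-value tests run first.

```python
# Toy sketch of value-based test prioritization (illustrative only).
# Each test case has an estimated business value and an execution cost;
# we greedily schedule tests by value/cost ratio within a cost budget.

def prioritize(tests, budget):
    """Return the names of the tests to run, ordered by value-to-cost ratio."""
    ranked = sorted(tests, key=lambda t: t["value"] / t["cost"], reverse=True)
    selected, spent = [], 0
    for t in ranked:
        if spent + t["cost"] <= budget:
            selected.append(t["name"])
            spent += t["cost"]
    return selected

tests = [
    {"name": "billing_happy_path", "value": 50, "cost": 2},
    {"name": "rare_admin_report",  "value": 5,  "cost": 4},
    {"name": "login_security",     "value": 30, "cost": 3},
    {"name": "ui_color_theme",     "value": 1,  "cost": 1},
]

print(prioritize(tests, budget=6))  # highest value per unit cost first
```

Under a budget of 6 the low-ratio `rare_admin_report` is skipped, mirroring the 80/20 observation that most testing value concentrates in a small part of the software.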

A Development Plan for High-Quality Translation Software


In today's globalized world, the demand for translation software keeps growing. Traditional translation software, however, often suffers from problems such as low accuracy and poor understanding of context. To address these problems, we propose a new development plan for high-quality translation software. Its core idea is to combine artificial intelligence and machine learning techniques to improve the quality and effectiveness of the software. The development steps are as follows:

1. Data collection: We will collect large multilingual corpora, including books, news articles, and blogs. These corpora serve as training data for the machine learning models.

2. Data preprocessing: To improve translation quality, the collected corpora are preprocessed, including part-of-speech tagging, named entity recognition, and semantic analysis. Through these steps the system can better understand the context and meaning of the source text.

3. Model training: Based on the preprocessed corpora, we train translation models with deep learning techniques. These models analyze input sentences and output the corresponding translations. We will adopt sequence-to-sequence architectures such as recurrent neural networks (RNNs) or the Transformer to optimize translation quality.

4. Evaluation and tuning: To ensure stable translation quality and performance, the models are evaluated and tuned, including testing on a validation set and adjusting parameters according to the results.

5. Release: After repeated testing and tuning, the translation system goes live. Users can access the translation service through our platform or an API.

In addition, the plan includes the following innovations:

1. Context awareness: The translation model translates based on contextual information, meaning the system can better understand the semantics and context of a sentence. For example, for the word "bank", the system can decide from context whether it means a financial institution or a riverbank.

2. User feedback: The system interacts with users and collects feedback data. By analyzing this feedback we can further improve translation quality and correct translation errors.

3. Domain adaptation: The translation models are optimized for different domains. For specific fields such as law or medicine, we collect specialized corpora and train customized models.

With this plan, we believe the translation software can achieve higher translation quality and accuracy.
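To make the context-awareness point concrete, here is a deliberately tiny dictionary-based sketch (not the proposed system, which would use a trained neural model; the cue-word lists are invented assumptions): it picks a sense for the ambiguous word "bank" by looking at neighboring words.

```python
# Toy word-sense disambiguation for "bank", sketching how surrounding
# context selects a translation. A real system would use a trained
# sequence-to-sequence model instead of hand-written keyword lists.

RIVER_CUES = {"river", "water", "fishing", "shore"}
MONEY_CUES = {"money", "loan", "account", "deposit"}

def sense_of_bank(sentence):
    words = set(sentence.lower().replace(".", "").split())
    river_hits = len(words & RIVER_CUES)
    money_hits = len(words & MONEY_CUES)
    return "riverbank" if river_hits > money_hits else "financial institution"

print(sense_of_bank("He sat on the bank of the river."))    # riverbank
print(sense_of_bank("She opened an account at the bank."))  # financial institution
```

The same principle, learned from data rather than hand-coded, is what the context-aware model in step 3 is meant to provide.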

Software Testing Strategies and Techniques


Software testing is a crucial part of the software development process. By systematically verifying a piece of software's functionality, performance, and stability, we can ensure that it meets users' expectations and runs in a stable, reliable way. To improve testing efficiency and quality, testers need to master a number of strategies and techniques. This article introduces several common software testing strategies and techniques and analyzes their applicable scenarios, strengths, and weaknesses.

I. Black-box testing and white-box testing

Two commonly used testing methods are black-box testing and white-box testing. Black-box testing is based on the functional requirements: the tester looks only at the software's inputs and outputs, without considering the internal implementation details. White-box testing is based on the code logic: the tester must understand the software's internal structure and implementation and design tests from the code. Each has its strengths, and the choice should depend on the situation.

1. Black-box testing. Black-box testing suits situations where the tester does not know, or does not care about, the internal implementation. By supplying different inputs and operations, the tester verifies whether the functionality meets the requirements and checks whether the software handles abnormal situations correctly. Black-box testing can cover the software's different functional modules and can also help uncover some latent performance and security problems.

2. White-box testing. White-box testing suits situations where the tester knows the internal implementation very well. The tester can design test cases from the code logic and use code-coverage tools to assess how complete the tests are. White-box testing can find problems caused by logic errors in the program and can achieve high test coverage. However, it requires the tester to have some programming and debugging skill, and it is sensitive to code changes, so the tests must be maintained promptly.
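As a small black-box illustration (the function and its specification are my own invention, not from the article): the tests below exercise only inputs and outputs, including boundary values and an abnormal input, without any knowledge of the implementation.

```python
# Black-box testing sketch: grade() is tested purely through its
# input/output behavior, including boundary values and invalid input,
# without looking at how it is implemented.

def grade(score):
    """Invented spec: 0-59 is 'fail', 60-100 is 'pass'; anything else is invalid."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Boundary-value cases derived only from the specification:
cases = {0: "fail", 59: "fail", 60: "pass", 100: "pass"}
for score, expected in cases.items():
    assert grade(score) == expected

# Abnormal input must be rejected, not silently accepted:
try:
    grade(101)
    raise AssertionError("expected ValueError")
except ValueError:
    pass

print("all black-box cases passed")
```

A white-box tester, by contrast, would read the `if` conditions in `grade` and design cases to cover each branch, then measure coverage with a tool.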

II. Static testing and dynamic testing

Besides classifying by method, software testing can also be classified by testing phase; the common division is into static testing and dynamic testing. Static testing is carried out mainly in the early stages of development: problems are found and fixed by analyzing documents and designs. Dynamic testing is carried out in the later stages: the software is tested by running it and verifying its functionality.

1. Static testing. Static testing mainly includes requirements analysis, code review, and static analysis. Requirements analysis validates the software requirements to ensure they are accurate and complete. Code review finds and fixes potential problems through line-by-line inspection and evaluation of the code. Static analysis assesses code quality and maintainability in terms of readability, complexity, conformance to standards, and similar criteria.
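A minimal sketch of automated static analysis (my own example, not from the article): Python's standard `ast` module parses source code without running it, and a crude complexity metric, the number of branching statements per function, can be computed from the syntax tree.

```python
import ast

# Static-analysis sketch: parse source code (without executing it) and
# count branching statements per function as a crude complexity metric.

SOURCE = """
def ok(x):
    return x + 1

def messy(x):
    if x > 0:
        if x > 10:
            for i in range(x):
                if i % 2:
                    x -= 1
    return x
"""

def branch_counts(source):
    tree = ast.parse(source)
    counts = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, (ast.If, ast.For, ast.While))
                           for n in ast.walk(node))
            counts[node.name] = branches
    return counts

print(branch_counts(SOURCE))  # {'ok': 0, 'messy': 4}
```

Flagging functions whose count exceeds a threshold is exactly the kind of readability/complexity check static testing performs before the code is ever run.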

Software Testing Terms: English-Chinese Glossary (Part)

Testing coverage : 测试覆盖
Test design : 测试设计
Test driver : 测试驱动
Testing environment : 测试环境
Artifact : 工件
Automated Testing : 自动化测试
Architecture : 构架
Assertion checking : 断言检查
Audit : 审计
Application under test (AUT) : 所测试的应用程序
Hotfix : 热补丁
G11N(Globalization) : 全球化
Gap analysis : 差距分析
Garbage characters : 乱码字符
Glossary : 术语表
Glass-box testing : 白箱测试或白盒测试
GUI(Graphical User Interface) : 图形用户界面
Decision coverage : 判定覆盖
Debug : 调试
Defect : 缺陷
defect density : 缺陷密度
Deployment : 部署
Desk checking : 桌前检查
Blocking bug : 阻碍性错误
Bottom-up testing : 自底向上测试
Branch coverage : 分支覆盖
Brute force testing : 强力测试
Bug : 错误
Bug report : 错误报告
Load testing : 负载测试
Maintenance : 维护

Software Testing Terms: English-Chinese Glossary


Acceptance testing | 验收测试Acceptance Testing|可接受性测试Accessibility test |软体适用性测试actual outcome|实际结果Ad hoc testing | 随机测试Algorithm analysis |算法分析algorithm|算法Alpha testing | α测试analysis|分析anomaly|异常application software|应用软件Application under test (AUT)|所测试的应用程序Architecture |构架Artifact |工件ASQ|自动化软件质量(Automated Software Quality)Assertion checking |断言检查Association |关联Audit | 审计audit trail|审计跟踪Automated Testing|自动化测试Backus—Naur Form|BNF范式baseline|基线Basic Block|基本块basis test set|基本测试集Behaviour |行为Bench test | 基准测试benchmark|标杆/指标/基准Best practise |最佳实践Beta testing | β测试Black Box Testing|黑盒测试Blocking bug |阻碍性错误Bottom—up testing |自底向上测试boundary value coverage|边界值覆盖boundary value testing|边界值测试Boundary values |边界值Boundry Value Analysis|边界值分析branch condition combination coverage|分支条件组合覆盖branch condition combination testing|分支条件组合测试branch condition coverage|分支条件覆盖branch condition testing|分支条件测试branch condition|分支条件Branch coverage |分支覆盖branch outcome|分支结果branch point|分支点branch testing|分支测试branch|分支Breadth Testing|广度测试Brute force testing| 强力测试Buddy test | 合伙测试Buffer | 缓冲Bug |错误Bug bash | 错误大扫除bug fix | 错误修正Bug report |错误报告Bug tracking system|错误跟踪系统bug|缺陷Build | 工作版本(内部小版本)Build Verfication tests(BVTs)| 版本验证测试Build-in |内置Capability Maturity Model (CMM)| 能力成熟度模型Capability Maturity Model Integration (CMMI)|能力成熟度模型整合capture/playback tool|捕获/回放工具Capture/Replay Tool|捕获/回放工具CASE|计算机辅助软件工程(computer aided software engineering)CAST|计算机辅助测试cause—effect graph|因果图certification |证明change control|变更控制Change Management |变更管理Change Request |变更请求Character Set |字符集Check In |检入Check Out |检出Closeout |收尾code audit |代码审计Code coverage |代码覆盖Code Inspection|代码检视Code page | 代码页Code rule | 编码规范Code sytle |编码风格Code Walkthrough|代码走读code-based testing|基于代码的测试coding standards|编程规范Common sense | 常识Compatibility Testing|兼容性测试complete path testing |完全路径测试completeness|完整性complexity |复杂性Component testing |组件测试Component|组件computation data use|计算数据使用computer system security|计算机系统安全性Concurrency user |并发用户Condition coverage 
|条件覆盖condition coverage|条件覆盖condition outcome|条件结果condition|条件configuration control|配置控制Configuration item |配置项configuration management|配置管理Configuration testing | 配置测试conformance criterion| 一致性标准Conformance Testing|一致性测试consistency |一致性consistency checker|一致性检查器Control flow graph | 控制流程图control flow graph|控制流图control flow|控制流conversion testing|转换测试Core team |核心小组corrective maintenance|故障检修correctness |正确性coverage |覆盖率coverage item|覆盖项crash|崩溃criticality analysis|关键性分析criticality|关键性CRM(change request management)|变更需求管理Customer-focused mindset | 客户为中心的理念体系Cyclomatic complexity |圈复杂度data corruption|数据污染data definition C—use pair|数据定义C—use使用对data definition P-use coverage|数据定义P-use覆盖data definition P-use pair|数据定义P-use使用对data definition|数据定义data definition—use coverage|数据定义使用覆盖data definition-use pair |数据定义使用对data definition—use testing|数据定义使用测试data dictionary|数据字典Data Flow Analysis |数据流分析data flow analysis|数据流分析data flow coverage|数据流覆盖data flow diagram|数据流图data flow testing|数据流测试data integrity|数据完整性data use|数据使用data validation|数据确认dead code|死代码Debug |调试Debugging|调试Decision condition|判定条件Decision coverage | 判定覆盖decision coverage|判定覆盖decision outcome|判定结果decision table|判定表decision|判定Defect | 缺陷defect density |缺陷密度Defect Tracking |缺陷跟踪Deployment |部署Depth Testing|深度测试design for sustainability |可延续性的设计design of experiments|实验设计design—based testing|基于设计的测试Desk checking |桌前检查desk checking|桌面检查Determine Usage Model | 确定应用模型Determine Potential Risks | 确定潜在风险diagnostic|诊断DIF(decimation in frequency)|按频率抽取dirty testing|肮脏测试disaster recovery|灾难恢复DIT (decimation in time)| 按时间抽取documentation testing |文档测试domain testing|域测试domain|域DTP DETAIL TEST PLAN详细确认测试计划Dynamic analysis |动态分析dynamic analysis|动态分析Dynamic Testing|动态测试embedded software|嵌入式软件emulator|仿真End—to—End testing|端到端测试Enhanced Request |增强请求entity relationship diagram|实体关系图Encryption Source Code Base|加密算法源代码库Entry criteria | 准入条件entry point |入口点Envisioning Phase |构想阶段Equivalence class |等价类Equivalence Class|等价类equivalence 
partition coverage|等价划分覆盖Equivalence partition testing |等价划分测试equivalence partition testing|参考等价划分测试equivalence partition testing|等价划分测试Equivalence Partitioning|等价划分Error |错误Error guessing |错误猜测error seeding|错误播种/错误插值error|错误Event-driven | 事件驱动Exception handlers |异常处理器exception|异常/例外executable statement|可执行语句Exhaustive Testing|穷尽测试exit point|出口点expected outcome|期望结果Exploratory testing |探索性测试Failure | 失效Fault |故障fault|故障feasible path|可达路径feature testing|特性测试Field testing |现场测试FMEA|失效模型效果分析(Failure Modes and Effects Analysis)FMECA|失效模型效果关键性分析(Failure Modes and Effects Criticality Analysis) Framework |框架FTA|故障树分析(Fault Tree Analysis)functional decomposition|功能分解Functional Specification |功能规格说明书Functional testing |功能测试Functional Testing|功能测试G11N(Globalization) | 全球化Gap analysis |差距分析Garbage characters | 乱码字符glass box testing|玻璃盒测试Glass—box testing |白箱测试或白盒测试Glossary | 术语表GUI(Graphical User Interface)| 图形用户界面Hard—coding | 硬编码Hotfix | 热补丁I18N(Internationalization)| 国际化Identify Exploratory Tests –识别探索性测试IEEE|美国电子与电器工程师学会(Institute of Electrical and Electronic Engineers)Incident 事故Incremental testing | 渐增测试incremental testing|渐增测试infeasible path|不可达路径input domain|输入域Inspection |审查inspection|检视installability testing|可安装性测试Installing testing | 安装测试instrumentation|插装instrumenter|插装器Integration |集成Integration testing |集成测试interface | 接口interface analysis|接口分析interface testing|接口测试interface|接口invalid inputs|无效输入isolation testing|孤立测试Issue |问题Iteration | 迭代Iterative development|迭代开发job control language|工作控制语言Job|工作Key concepts |关键概念Key Process Area | 关键过程区域Keyword driven testing |关键字驱动测试Kick—off meeting |动会议L10N(Localization) |本地化Lag time | 延迟时间LCSAJ|线性代码顺序和跳转(Linear Code Sequence And Jump)LCSAJ coverage|LCSAJ覆盖LCSAJ testing|LCSAJ测试Lead time |前置时间Load testing | 负载测试Load Testing|负载测试Localizability testing| 本地化能力测试Localization testing |本地化测试logic analysis|逻辑分析logic—coverage testing|逻辑覆盖测试Maintainability |可维护性maintainability testing|可维护性测试Maintenance |维护Master project schedule 
|总体项目方案Measurement |度量Memory leak | 内存泄漏Migration testing |迁移测试Milestone |里程碑Mock up |模型,原型modified condition/decision coverage|修改条件/判定覆盖modified condition/decision testing |修改条件/判定测试modular decomposition|参考模块分解Module testing |模块测试Monkey testing | 跳跃式测试Monkey Testing|跳跃式测试mouse over|鼠标在对象之上mouse leave|鼠标离开对象MTBF|平均失效间隔实际(mean time between failures)MTP MAIN TEST PLAN主确认计划MTTF|平均失效时间 (mean time to failure)MTTR|平均修复时间(mean time to repair)multiple condition coverage|多条件覆盖mutation analysis|变体分析N/A(Not applicable) | 不适用的Negative Testing |逆向测试, 反向测试, 负面测试negative testing|参考负面测试Negative Testing|逆向测试/反向测试/负面测试off by one|缓冲溢出错误non—functional requirements testing|非功能需求测试nominal load|额定负载N-switch coverage|N切换覆盖N-switch testing|N切换测试N-transitions|N转换Off—the—shelf software | 套装软件operational testing|可操作性测试output domain|输出域paper audit|书面审计Pair Programming |成对编程partition testing|分类测试Path coverage | 路径覆盖path coverage|路径覆盖path sensitizing|路径敏感性path testing|路径测试path|路径Peer review |同行评审Performance | 性能Performance indicator|性能(绩效)指标Performance testing |性能测试Pilot |试验Pilot testing |引导测试Portability |可移植性portability testing|可移植性测试Positive testing |正向测试Postcondition |后置条件Precondition | 前提条件precondition|预置条件predicate data use|谓词数据使用predicate|谓词Priority | 优先权program instrumenter|程序插装progressive testing|递进测试Prototype |原型Pseudo code |伪代码pseudo-localization testing|伪本地化测试pseudo—random|伪随机QC|质量控制(quality control)Quality assurance(QA)| 质量保证Quality Control(QC) |质量控制Race Condition|竞争状态Rational Unified Process(以下简称RUP)|瑞理统一工艺Recovery testing |恢复测试recovery testing|恢复性测试Refactoring |重构regression analysis and testing|回归分析和测试Regression testing |回归测试Release |发布Release note |版本说明release|发布Reliability |可靠性reliability assessment|可靠性评价reliability|可靠性Requirements management tool|需求管理工具Requirements—based testing |基于需求的测试Return of Investment(ROI)|投资回报率review|评审Risk assessment |风险评估risk|风险Robustness | 强健性Root Cause Analysis(RCA)| 根本原因分析safety critical|严格的安全性safety|(生命)安全性Sanity testing | 健全测试Sanity 
Testing|理智测试Schema Repository | 模式库Screen shot |抓屏、截图SDP|软件开发计划(software development plan)Security testing | 安全性测试security testing|安全性测试security.|(信息)安全性serviceability testing|可服务性测试Severity | 严重性Shipment |发布simple subpath|简单子路径Simulation |模拟Simulator | 模拟器SLA(Service level agreement)|服务级别协议SLA|服务级别协议(service level agreement)Smoke testing |冒烟测试Software development plan(SDP)| 软件开发计划Software development process|软件开发过程software development process|软件开发过程software diversity|软件多样性software element|软件元素software engineering environment|软件工程环境software engineering|软件工程Software life cycle | 软件生命周期source code|源代码source statement|源语句Specification |规格说明书specified input|指定的输入spiral model |螺旋模型SQAP SOFTWARE QUALITY ASSURENCE PLAN 软件质量保证计划SQL|结构化查询语句(structured query language)Staged Delivery|分布交付方法state diagram|状态图state transition testing |状态转换测试state transition|状态转换state|状态Statement coverage |语句覆盖statement testing|语句测试statement|语句Static Analysis|静态分析Static Analyzer|静态分析器Static Testing|静态测试statistical testing|统计测试Stepwise refinement | 逐步优化storage testing|存储测试Stress Testing |压力测试structural coverage|结构化覆盖structural test case design|结构化测试用例设计structural testing|结构化测试structured basis testing|结构化的基础测试structured design|结构化设计structured programming|结构化编程structured walkthrough|结构化走读stub|桩sub-area|子域Summary| 总结SVVP SOFTWARE Vevification&Validation PLAN| 软件验证和确认计划symbolic evaluation|符号评价symbolic execution|参考符号执行symbolic execution|符号执行symbolic trace|符号轨迹Synchronization | 同步Syntax testing |语法分析system analysis|系统分析System design | 系统设计system integration|系统集成System Testing | 系统测试TC TEST CASE 测试用例TCS TEST CASE SPECIFICATION 测试用例规格说明TDS TEST DESIGN SPECIFICATION 测试设计规格说明书technical requirements testing|技术需求测试Test |测试test automation|测试自动化Test case | 测试用例test case design technique|测试用例设计技术test case suite|测试用例套test comparator|测试比较器test completion criterion|测试完成标准test coverage|测试覆盖Test design | 测试设计Test driver |测试驱动test environment|测试环境test execution technique|测试执行技术test execution|测试执行test 
generator|测试生成器test harness|测试用具Test infrastructure | 测试基础建设test log|测试日志test measurement technique|测试度量技术Test Metrics |测试度量test procedure|测试规程test records|测试记录test report|测试报告Test scenario | 测试场景Test Script|测试脚本Test Specification|测试规格Test strategy | 测试策略test suite|测试套Test target | 测试目标Test ware | 测试工具Testability | 可测试性testability|可测试性Testing bed | 测试平台Testing coverage |测试覆盖Testing environment | 测试环境Testing item |测试项Testing plan |测试计划Testing procedure | 测试过程Thread testing |线程测试time sharing|时间共享time—boxed |固定时间TIR test incident report 测试事故报告ToolTip|控件提示或说明top—down testing|自顶向下测试TPS TEST PEOCESS SPECIFICATION 测试步骤规格说明Traceability | 可跟踪性traceability analysis|跟踪性分析traceability matrix|跟踪矩阵Trade—off | 平衡transaction|事务/处理transaction volume|交易量transform analysis|事务分析trojan horse|特洛伊木马truth table|真值表TST TEST SUMMARY REPORT 测试总结报告Tune System | 调试系统TW TEST WARE |测试件Unit Testing |单元测试Usability Testing|可用性测试Usage scenario | 使用场景User acceptance Test |用户验收测试User database |用户数据库User interface(UI) | 用户界面User profile | 用户信息User scenario |用户场景V&V (Verification & Validation) | 验证&确认validation |确认verification |验证version |版本Virtual user | 虚拟用户volume testing|容量测试VSS(visual source safe)|VTP Verification TEST PLAN验证测试计划VTR Verification TEST REPORT验证测试报告Walkthrough | 走读Waterfall model | 瀑布模型Web testing | 网站测试White box testing |白盒测试Work breakdown structure (WBS) |任务分解结构Zero bug bounce (ZBB)|零错误反弹。

Software Testing Foreign Literature Translation: Research on GUI Automation Testing


Appendix 1: Translated Text. Research on GUI Automation Testing

Abstract: This paper points out the shortcomings of the record-and-playback technique currently used in automated testing. To address the difficulty of maintaining and extending test code against a constantly changing graphical user interface, it adopts object-based capture and designs GUIATF, a testing framework built on the Windows message mechanism, achieving highly flexible and easily extensible GUI test automation.

Keywords: software testing; regression testing; automation

0. Introduction

Testing is an activity aimed at evaluating an attribute or capability of a program or system and determining whether it meets its required results. Throughout software development, from requirements analysis through system design to implementation, problems inevitably appear, so software testing becomes the key technique for assuring software quality. Testing involves a large and rather repetitive workload, especially in the regression testing performed late in the process (regression tests are run whenever the software undergoes evolutionary or corrective change): previously found problems must be verified as fixed in the new version, and most of the testing work is repeated. Automating software testing allows large numbers of tests to be executed repeatedly under program control, which not only saves a great deal of labor but also improves test efficiency and guarantees test quality.

1. Shortcomings of record-and-playback

Record-and-playback techniques are currently applied to automated GUI testing. During the software development cycle the system is continually updated and maintained, and to guarantee test quality the test code must adapt well to the changing system; in other words, tests also need maintenance. Recorded test scripts are tied to a concrete interface and concrete operations: once the interface a script runs against changes, playback fails; even a mere change in a control's position or in the screen resolution can make a GUI automation test fail. The maintenance cost of record-based automation is therefore very high. Moreover, a recorded script is fixed, so playback follows the recorded steps exactly and has no flexibility.

2. An automated testing framework

A much-discussed problem in current software testing is how to automate GUI testing efficiently while keeping the test code highly flexible. This paper proposes GUIATF (Graphics User Interface Automation Testing Framework), a GUI automation testing framework based on object-capture technology, which lets testers create test code conveniently and maintain it flexibly.
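The contrast between coordinate-based recording and object-based capture can be sketched as follows (a platform-independent toy, not the actual GUIATF implementation; the control names and object map are invented): tests address controls by logical name, so moving a control does not break them.

```python
# Toy object-based lookup: tests refer to controls by logical name via an
# object map, so layout changes only require updating the map, not the tests.

class Control:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self.clicks = 0

    def click(self):
        self.clicks += 1

OBJECT_MAP = {
    "login_button": Control("btnLogin", x=120, y=300),
    "username":     Control("txtUser", x=120, y=200),
}

def click(logical_name):
    OBJECT_MAP[logical_name].click()  # found by identity, not by coordinates

click("login_button")

# The UI changes: the button moves. Coordinate-based playback would now
# click the wrong spot; the object-based test keeps working unchanged.
OBJECT_MAP["login_button"].x, OBJECT_MAP["login_button"].y = 400, 50
click("login_button")

print(OBJECT_MAP["login_button"].clicks)  # 2
```

In the real framework the lookup would go through the Windows message mechanism rather than a Python dictionary, but the maintenance argument is the same: only the map tracks the changing interface.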

[Computer Science Literature Translation] Software Testing


附录2 英文文献及其翻译原文:Software testingThe chapter is about establishment of the system defects objectives according to test programs. You will understand the followings: testing techniques that are geared to discover program faults, guidelines for interface testing, specific approaches to object-oriented testing, and the principles of CASE tool support for testing. Topics covered include Defect testing, Integration testing, Object-oriented testing and Testing workbenches.The goal of defect testing is to discover defects in programs. A successful defect test is a test which causes a program to behave in an anomalous way. Tests show the presence not the absence of defects. Only exhaustive testing can show a program is free from defects. However, exhaustive testing is impossible.Tests should exercise a system's capabilities rather than its components. Testing old capabilities is more important than testing new capabilities. Testing typical situations is more important than boundary value cases.An approach to te sting where the program is considered as a ‘black-box’. The program test cases are based on the system specification. Test planning can begin early in the software process. Inputs causing anomalous behaviour. Outputs which reveal the presence of defects.Equivalence partitioning. Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition where the program behaves in an equivalent way for each class member. Test cases should be chosen from each partition.Structural testing. Sometime it is called white-box testing. Derivation of test cases according to program structure. Knowledge of the program is used to identify additional test cases. Objective is to exercise all program statements, not all path combinations.Path testing. The objective of path testing is to ensure that the set of test cases is such that each path through the program is executed at least once. 
The starting point for path testing is a program flow graph that shows nodes representing program decisions and arcs representing the flow of control.Statements with conditions are therefore nodes in the flow graph. Describes the program control flow. Each branch is shown as a separate path and loops are shown by arrows looping back to the loop condition node. Used as a basis for computing the cyclomatic complexity.Cyclomatic complexity = Number of edges -Number of nodes +2The number of tests to test all control statements equals the cyclomatic complexity.Cyclomatic complexity equals number ofconditions in a program. Useful if used with care. Does not imply adequacy of testing. Although all paths are executed, all combinations of paths are not executed.Cyclomatic complexity: Test cases should be derived so that all of these paths are executed.A dynamic program analyser may be used to check that paths have been executed.Integration testing.Tests complete systems or subsystems composed of integrated components. Integration testing should be black-box testing with tests derived from the specification.Main difficulty is localising errors. Incremental integration testing reduces this problem. Tetsing approaches. Architectural validation. Top-down integration testing is better at discovering errors in the system architecture.System demonstration.Top-down integration testing allows a limited demonstration at an early stage in the development. Test observation: Problems with both approaches. Extra code may be required to observe tests. Takes place when modules or sub-systems are integrated to create larger systems. Objectives are to detect faults due to interface errors or invalid assumptions about interfaces. Particularly important for object-oriented development as objects are defined by their interfaces.A calling component calls another component and makes an error in its use of its interface e.g. parameters in the wrong order.Interface misunderstanding. 
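The formula V(G) = E - N + 2 given above can be checked mechanically. Here is a small sketch (the flow graph is an invented example, not from the text) that counts the edges and nodes of a control-flow graph given as an adjacency list:

```python
# Cyclomatic complexity sketch: V(G) = E - N + 2 for a connected
# control-flow graph. The graph below is an invented example with one
# if/else decision, so we expect a complexity of 2.

def cyclomatic_complexity(graph):
    nodes = len(graph)
    edges = sum(len(succs) for succs in graph.values())
    return edges - nodes + 2

# entry -> decision -> then/else -> exit
flow_graph = {
    "entry":    ["decision"],
    "decision": ["then", "else"],
    "then":     ["exit"],
    "else":     ["exit"],
    "exit":     [],
}

print(cyclomatic_complexity(flow_graph))  # 2 -> two independent paths to test
```

The result matches the rule in the text that the complexity equals the number of conditions plus one: one decision gives two paths, hence two test cases.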
A calling component embeds assumptions about the behaviour of the called component which are incorrectTiming errors: The called and the calling component operate at different speeds and out-of-date information is accessed.Interface testing guidelines: Design tests so that parameters to a called procedure are at the extreme ends of their ranges. Always test pointer parameters with null pointers. Design tests which cause the component to fail. Use stress testing in message passing systems. In shared memory systems, vary the order in which components are activated.Stress testingExercises the system beyond its maximum design load. Stressing the system often causes defects to come to light. Stressing the system test failure behaviour. Systems should not fail catastrophically. Stress testing checks for unacceptable loss of service or data. Particularly relevant to distributed systems which can exhibit severe degradation as a network becomes overloaded.The components to be tested are object classes that are instantiated as objects. Larger grain than individual functions so approaches to white-box testing have to be extended. No obvious ‘top’ to the system for top-down integration and testing. Object-oriented testing. Testing levels.Testing operations associated with objects.Testing object classes. Testing clusters of cooperating objects. Testing the complete OO systemObject class testing. Complete test coverage of a class involves. Testing all operations associated with an object. Setting and interrogating all object attributes. Exercising the object in all possible states.Inheritance makes it more difficult to design object class tests as the information to be tested is not localized. Weather station object interface. Test cases are needed for all e a state model toidentify state transiti.ons for testing.Object integration. Levels of integration are less distinct in objectoriented systems. 
Cluster testing is concerned with integrating and testing clusters of cooperating objects. Identify clusters using knowledge of the operation of objects and the system features that are implemented by these clusters. Testing is based on a user interactions with the system. Has the advantage that it tests system features as experienced byusers.Thread testing.Tests the systems response to events as processing threads through the system. Object interaction testing. Tests sequences of object interactions that stop when an object operation does not call on services from another objectScenario-based testing. Identify scenarios from use-cases and supplement these with interaction diagrams that show the objects involved in the scenario. Consider the scenario in the weather station system where a report is generated.Input of report request with associated acknowledge and a final output of a report. Can be tested by creating raw data and ensuring that it is summarised properly. Use the same raw data to test the WeatherData object.Testing workbenches.Testing is an expensive process phase. Testing workbenches provide a range of tools to reduce the time required and total testing costs. Most testing workbenches are open systems because testing needs are organisation-specific. Difficult to integrate with closed design and analysis workbenchesTetsing workbench adaptation. Scripts may be developed for user interface simulators and patterns for test data generators. Test outputs may have to be prepared manually for comparison. Special-purpose file comparators may be developed.Key points:Test parts of a system which are commonly used rather than those which are rarely executed. Equivalence partitions are sets of test cases where the program should behave in an equivalent way. Black-box testing is based on the system specification. 
Structural testing identifies test cases which cause all paths through the program to be executed.Test coverage measures ensure that all statements have been executed at least once. Interface defects arise because of specification misreading, misunderstanding, errors or invalid timing assumptions. To test object classes, test all operations, attributes and states.Integrate object-oriented systems around clusters of objects.译文:软件测试本章的目标是介绍通过测试程序发现程序中的缺陷的相关技术。
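The equivalence-partition idea from the key points above can be shown with a hedged example (the function and its partitions are my own illustration, not from the chapter): inputs are grouped into classes within which the program should behave equivalently, and one representative per class is tested.

```python
# Equivalence partitioning sketch: classify an "age" input into invented
# partitions and test one representative value from each partition,
# rather than every possible input.

def ticket_price(age):
    """Invented spec: below 0 invalid, 0-12 child, 13-64 adult, 65+ senior."""
    if age < 0:
        raise ValueError("invalid age")
    if age <= 12:
        return "child"
    if age <= 64:
        return "adult"
    return "senior"

# One representative per equivalence class:
representatives = {6: "child", 30: "adult", 70: "senior"}
for age, expected in representatives.items():
    assert ticket_price(age) == expected

print("one representative per partition was enough")
```

Boundary values (0, 12, 13, 64, 65) would be added on top of these representatives, since defects cluster at partition edges.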


Appendix: English Literature

SOFTWARE TESTING STRATEGIES

A strategy for software testing integrates software test case design methods into a well-planned series of steps that result in the successful construction of software. As important, a software testing strategy provides a road map for the software developer, the quality assurance organization, and the customer—a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required. Therefore, any testing strategy must incorporate test planning, test case design, test execution, and resultant data collection.

一 INTEGRATION TESTING

A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is "putting them together"—interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; subfunctions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems—sadly, the list goes on and on.

Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All modules are combined in advance and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because isolation of causes is complicated by the vast expanse of the entire program.
Once these errors are corrected, new ones appear and the process continues in a seemingly endless, loop.Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.1.1 Top-Down IntegrationTop-Down Integration is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.Depth-first integration would integrate all modules on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left hand path, modules M1,M2, and M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central and right hand control paths are built. Breadth-first integration incorporates all modules directly subordinate at each level, moving across the structure horizontally. From the figure, modules M2, M3, and M4 would be integrated first. 
The next control level, M5, M6, and so on, follows.The integration process is performed in a series of steps:(1)The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.(2)Depending on the integration approach selected (i.e., depth- or breadth-first), subordinate stubs are replaced one at a time with actual modules.(3)Tests are conducted as each module is integrated.(4)On completion of each set of tests, another stub is replaced with the real module.(5)Regression testing may be conducted to ensure that new errors have not been introduced.The process continues from step 2 until the program structure is built.The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major controlprogram do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated. For example, consider a classic transaction structure in which a complex series of interactive inputs path are requested, acquired, and validated via an incoming path. The incoming path may be integrated in a top-down manner. All input processing (for subsequent transaction dispatching) maybe demonstrated before other elements of the structure have been integrated. Early demonstration of functional capability is a confidence builder for both the developer and the customer.Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels, Stubs replace low-level modules at the beginning of top-down testing; therefore no significant data can flow upward in the program structure. 
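The stub-based, top-down procedure in steps (1)-(5) can be sketched as follows (the module names and behavior are invented for illustration): the main control module is tested first against a stub, and the stub is then replaced by the real subordinate module and the test is re-run.

```python
# Top-down integration sketch: test the main control module first,
# substituting a stub for its subordinate, then replace the stub with
# the real module and re-run the same test.

def real_tax(amount):
    return round(amount * 0.2, 2)   # the actual subordinate module

def stub_tax(amount):
    return 0.0                      # simplified stand-in for early testing

def total_price(amount, tax_module):
    """Main control module; its subordinate is injected so tests can swap it."""
    return amount + tax_module(amount)

# Steps 1-3: drive the main module with the stub in place.
assert total_price(100.0, stub_tax) == 100.0

# Steps 4-5: replace the stub with the real module and test again
# (a regression check that integration introduced no new errors).
assert total_price(100.0, real_tax) == 120.0

print("top-down integration steps passed")
```

The stub here performs a "limited function that simulates the actual module", which is the second of the three choices the text lists for the no-upward-data problem.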
The tester is left with three choices: 1 delay many tests until stubs are replaced with actual modules, 2 develop stubs that perform limited functions that simulate the actual module, or 3 integrate the software from the bottom of the hierarchy upward.The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over correspondence between specific tests and incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of top-down approach. The second approach is workable, but can lead to significant overhead, as stubs become more and more complex. The third approach is called bottom-up testing.1.2 Bottom-Up IntegrationBottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., modules at the lowest level in the program structure). Because modules are integrated from the bottom up, processing required for modules subordinate to a given level is always available and the need for stubs is eliminated.A bottom-up integration strategy may be implemented with the following steps:1 Low-level modules are combined into clusters (sometimes called builds) that perform a specific software subfunction.1. A driver (a control program for testing) is written to coordinate test case input and output.2 .The cluster is tested.3.Drivers are removed and clusters are combined moving upward in the program structure.Modules are combined to form clusters 1,2, and 3. Each of the clusters is tested using a driver (shown as a dashed block) Modules in clusters 1 and 2 are subordinate to M1. Drivers D1 and D2 are removed, and the clusters are interfaced directly to M1. Similarly, driver D3 for cluster 3 is removed prior to integration with module M2. 
Both M1 and M2 will ultimately be integrated with M3, and so forth.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top-down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.

1.3 Regression Testing

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects.

In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.

Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture-playback tools. Capture-playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite (the subset of tests to be executed) contains three different classes of test cases:

1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on the software components that have been changed.

As integration testing proceeds, the number of regression tests can grow quite large.
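One way classes 2 and 3 of such a suite might be selected automatically is sketched below; the test registry, tag names, and selection rule are invented for illustration:

```python
# Hypothetical regression-subset selection: each test is tagged with the
# program functions it touches; after a change, only tests whose tags
# intersect the changed components are re-executed.

TEST_REGISTRY = {
    "test_login":    {"auth"},
    "test_checkout": {"billing", "auth"},
    "test_report":   {"reporting"},
}

def select_regression_suite(changed_components):
    # Tests affected by, or focused on, the change (classes 2 and 3).
    return sorted(
        name for name, tags in TEST_REGISTRY.items()
        if tags & changed_components
    )

suite = select_regression_suite({"auth"})
assert suite == ["test_checkout", "test_login"]
```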
Therefore, the regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred.

1.4 Comments on Integration Testing

There has been much discussion of the relative advantages and disadvantages of top-down versus bottom-up integration testing. In general, the advantages of one strategy tend to result in disadvantages for the other strategy. The major disadvantage of the top-down approach is the need for stubs and the attendant testing difficulties that can be associated with them. Problems associated with stubs may be offset by the advantage of testing major control functions early. The major disadvantage of bottom-up integration is that "the program as an entity does not exist until the last module is added." This drawback is tempered by easier test case design and a lack of stubs.

Selection of an integration strategy depends upon software characteristics and, sometimes, the project schedule. In general, a combined approach (sometimes called sandwich testing) that uses a top-down strategy for the upper levels of the program structure, coupled with a bottom-up strategy for subordinate levels, may be the best compromise.

As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics: (1) addresses several software requirements; (2) has a high level of control (resides relatively high in the program structure); (3) is complex or error-prone (cyclomatic complexity may be used as an indicator); or (4) has definite performance requirements. Critical modules should be tested as early as possible. In addition, regression tests should focus on critical module function.

二 SYSTEM TESTING

2.1 Recovery Testing

Many computer-based systems must recover from faults and resume processing within a prespecified time.
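Recovery of this kind can be probed with a small checkpoint-and-restart harness. The following is a minimal sketch; the `Processor` class and its per-item checkpointing scheme are assumptions made for illustration:

```python
# Hypothetical recovery-test sketch: inject a fault mid-run and verify
# that processing resumes from the last checkpoint with no item lost
# or duplicated.

class Processor:
    def __init__(self):
        self.checkpoint = 0
        self.done = []

    def run(self, items, fail_at=None):
        for i in range(self.checkpoint, len(items)):
            if fail_at is not None and i == fail_at:
                raise RuntimeError("injected fault")
            self.done.append(items[i])
            self.checkpoint = i + 1  # persist progress after each item

p = Processor()
items = ["a", "b", "c", "d"]
try:
    p.run(items, fail_at=2)   # force the software to fail
except RuntimeError:
    pass
p.run(items)                  # recovery: resume from the checkpoint
assert p.done == ["a", "b", "c", "d"]
```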
In some cases, a system must be fault tolerant; that is, processing faults must not cause overall system function to cease. In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.

Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), re-initialization, checkpointing mechanisms, and data recovery are evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.

2.2 Security Testing

Any computer-based system that manages sensitive information or causes actions that can improperly harm (or benefit) individuals is a target for improper or illegal penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; disgruntled employees who attempt to penetrate for revenge; and dishonest individuals who attempt to penetrate for illicit personal gain.

Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. To quote Beizer: "The system's security must, of course, be tested for invulnerability from frontal attack—but must also be tested for invulnerability from flank or rear attack."

During security testing, the tester plays the role of the individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire passwords through external clerical means; may attack the system with custom software designed to break down any defenses that have been constructed; may purposely cause system errors, hoping to penetrate during recovery; may browse through insecure data, hoping to find the key to system entry; and so on.

Given enough time and resources, good security testing will ultimately penetrate a system.
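One such probe can be sketched against a toy login interface; the `AuthSystem` class and its three-attempt lockout policy are assumptions made for illustration, not a real API:

```python
# Hypothetical security-test sketch: hammer a login interface with
# repeated bad passwords and verify that a lockout defense engages.

class AuthSystem:
    def __init__(self, password, max_attempts=3):
        self._password = password
        self._max_attempts = max_attempts
        self._failures = 0
        self.locked = False

    def login(self, attempt):
        if self.locked:
            return False
        if attempt == self._password:
            self._failures = 0
            return True
        self._failures += 1
        if self._failures >= self._max_attempts:
            self.locked = True
        return False

auth = AuthSystem("s3cret")
# The tester plays the attacker: repeated guessing must trip the lockout.
for guess in ("aaa", "bbb", "ccc"):
    assert not auth.login(guess)
assert auth.locked
# Once locked, even the correct password is rejected.
assert not auth.login("s3cret")
```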
The role of the system designer is to make the penetration cost greater than the value of the information that will be obtained.

2.3 Stress Testing

During earlier software testing steps, white-box techniques resulted in a thorough evaluation of normal program functions and performance. Stress tests are designed to confront programs with abnormal situations. In essence, the tester who performs stress testing asks: "How high can we crank this up before it fails?"

Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example, (1) special tests may be designed that generate 10 interrupts per second, when one or two is the average rate; (2) input data rates may be increased by an order of magnitude to determine how input functions will respond; (3) test cases that require maximum memory or other resources may be executed; (4) test cases that may cause thrashing in a virtual operating system may be designed; or (5) test cases that may cause excessive hunting for disk-resident data may be created. Essentially, the tester attempts to break the program.

A variation of stress testing is a technique called sensitivity testing. In some situations (the most common occur in mathematical algorithms), a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. This situation is analogous to a singularity in a mathematical function. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing.

2.4 Performance Testing

For real-time and embedded systems, software that provides the required function but does not conform to performance requirements is unacceptable. Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process.
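Example (2) above, cranking the input volume up by an order of magnitude per step, might look like the following in outline; the workload and the time budget are illustrative assumptions:

```python
# Hypothetical stress-test sketch: ramp the input volume tenfold per
# step until a fixed processing budget is exceeded, i.e., until the
# program "breaks" its budget.

import time

def process(batch):
    # Stand-in for the system under test.
    return sum(x * x for x in batch)

def stress_until_budget(budget_seconds=0.05, start=1_000, factor=10, max_steps=5):
    n = start
    for _ in range(max_steps):
        t0 = time.perf_counter()
        process(range(n))
        if time.perf_counter() - t0 > budget_seconds:
            return n  # first volume at which the budget is exceeded
        n *= factor   # crank it up an order of magnitude
    return None       # budget never exceeded within max_steps

breaking_point = stress_until_budget()
# breaking_point is either the offending volume or None on this machine.
```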
Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained.

Software Testing Strategies

A software testing strategy integrates software test-case design methods into a well-planned series of steps, so that the development of software can be completed successfully.
