Common English for Software Testing


Chinese-English Glossary of Software Testing Terms

Abstract Test Case (High Level Test Case) — 抽象测试用例
Acceptance — 验收
Acceptance Criteria — 验收准则: The criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity (IEEE 610).
Acceptance Testing — 验收测试: Formal testing, usually performed by users, against user needs, requirements, and business processes, to determine whether a system satisfies the acceptance criteria and can be accepted (IEEE 610).
Accessibility Testing — 可达性测试: Testing how easily users with disabilities can use a software product or component, which also covers able users under temporary impairment (for example, while holding a baby).
Accuracy — 准确性: The capability of the software product to provide the right or agreed results with the needed degree of precision (ISO 9126). See also Functionality.
Actual Outcome (Actual Result) — 实际结果: The behavior produced or observed when a component or system is tested.
Ad Hoc Review — 非正式评审: An informal review, as opposed to a formal review.
Ad Hoc Testing — 随机测试: Informal test execution, with no formal test preparation, no expected results, and no test design technique guiding the execution.
Adaptability — 适应性: The capability of the software product to be adapted to different specified environments without additional modification (ISO 9126). See also Portability.
Agile Testing — 敏捷测试: Testing practice for projects using agile methods such as extreme programming, emphasizing a test-first design paradigm. See also Test Driven Development.
Algorithm Test [TMap] — 算法测试: See Branch Testing.
Alpha Testing — Alpha 测试: Simulated or actual operational testing conducted by potential users or an independent test team, usually in the development environment or a simulated operational environment, but outside the development organization.

Capture/Replay Tool
捕获/回放工具
See Capture/Playback Tool.
CASE
计算机辅助软件工程
Computer Aided Software Engineering.
CAST
计算机辅助软件测试
Acronym for Computer Aided Software Testing: the use of computer-based software tools to assist the testing process. See also Test Automation.
Black-Box Testing
黑盒测试
Functional or non-functional testing that does not consider the internal structure of the component or system.
Black-Box Test Design Technique
黑盒测试设计技术
A technique for designing or selecting test cases based on the functional or non-functional specification of the component or system, without reference to its internal structure.
Bottom-Up Testing
自底向上测试
An incremental integration testing strategy in which the lowest-level components are tested first and then used as the basis for testing progressively higher-level components, until all components have been integrated into the system. See also Integration Testing.
Analyzability
可分析性
The capability of the software product to be diagnosed for deficiencies or causes of failures, or to have the parts to be modified identified (ISO 9126). See also Maintainability.
Analyzer
分析器
See Static Analyzer.
Anomaly
异常
Any condition that deviates from expectations based on requirements documents, design documents, user documents, standards, or personal expectations. Anomalies may be identified during, but not limited to, review, test analysis, compilation, or use of the software product or its documentation. See also Defect, Deviation, Error, Fault, Failure, Incident, Problem.
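Black-box test design techniques such as equivalence partitioning work purely from the specification, never from the code. A minimal sketch of the idea; the `shipping_fee` function and its partitions are invented for illustration, not taken from this glossary:

```python
# Hypothetical spec: orders under 100 yuan pay a 10-yuan shipping fee,
# orders of 100 yuan or more ship free; negative totals are invalid.
def shipping_fee(order_total):
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    return 0 if order_total >= 100 else 10

# Black-box design: one representative test case per equivalence partition,
# chosen from the specification alone, without reading the implementation.
valid_partitions = [
    (50, 10),   # partition 1: 0 <= total < 100
    (150, 0),   # partition 2: total >= 100
]
for order_total, expected in valid_partitions:
    assert shipping_fee(order_total) == expected

# The invalid partition (total < 0) is exercised separately.
try:
    shipping_fee(-1)
    assert False, "expected ValueError"
except ValueError:
    pass
```

Each partition contributes one representative value, because the specification promises the same behavior for every value inside a partition.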

Software Testing English

Unix and command-line vocabulary:
Invoking — 调用; Recursive — 递归的; Pathnames — 路径名; slash (/) — 斜杠; Logout — 注销; Password Requirements — 密码规则; Directories — 目录; cd (change directory) — 切换目录; Sorts — 排序; Abnormal — 异常的; Display — 显示; dynamically — 动态地; commands — 命令; executable — 可执行的; linking format — 链接格式; executables — 可执行文件; chmod — 修改权限; permission denied — 拒绝访问; Octal (sometimes written with a Q suffix) — 八进制; hexadecimal — 十六进制; decimal — 十进制.

Testing terms:
1. Static testing: Non-Execution-Based Testing / Static Testing — 静态测试 (Walkthrough — 代码走查; Code Inspection — 代码审查; Review — 技术评审)
2. Dynamic testing: Execution-Based Testing — 动态测试
3. White-Box Testing — 白盒测试
4. Black-Box Testing — 黑盒测试
5. Gray-Box Testing — 灰盒测试
6. SQA: Software Quality Assurance — 软件质量保证
7. Software Development Life Cycle — 软件开发生命周期
8. Smoke Test — 冒烟测试
9. Regression Test — 回归测试
10. Function Testing — 功能测试
11. Performance Testing — 性能测试
12. Stress Testing — 压力测试
13. Load/Volume Testing — 负载测试
14. Usability Testing — 易用性测试
15. Installation Testing — 安装测试
16. UI Testing — 界面测试
17. Configuration Testing — 配置测试
18. Documentation Testing — 文档测试
19. Compatibility Testing — 兼容性测试
20. Security Testing — 安全性测试
21. Recovery Testing — 恢复测试
22. Unit Test — 单元测试
23. Integration Test — 集成测试
24. System Test — 系统测试
25. Acceptance Test — 验收测试
26. A test plan should include: The Test Objectives — 测试目标; The Test Scope — 测试范围; The Test Strategy — 测试策略; The Test Approach — 测试方法; The Test Procedures — 测试过程; The Test Environment — 测试环境; The Test Completion Criteria — 测试完成标准; The Test Cases — 测试用例; The Test Schedules — 测试进度表; Risks — 风险
27. A master test plan — 主测试计划
28. The Requirements Specification — 需求规格说明书
29. The Requirements Phase — 需求分析阶段
30. Interface — 接口
31. The End User — 最终用户
32. Formal Test Environment — 正式的测试环境
33. Verifying the Requirements — 确认需求
34. Ambiguous Requirements — 有歧义的需求
35. Operation and Maintenance — 运行和维护
36. Reusability — 可复用性
37. Reliability/Availability — 可靠性/可用性
38. IEEE: The Institute of Electrical and Electronics Engineers — 电气与电子工程师协会
39. Software should be tested for: Correctness — 正确性; Utility — 实用性; Performance — 性能; Robustness — 健壮性; Reliability — 可靠性.
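Several of the test levels listed above, unit testing in particular, are routinely automated. A minimal sketch using Python's standard unittest module; the `add` function is a made-up unit under test:

```python
import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    """A unit test exercises one small unit in isolation."""

    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()
```

Run as a script; in a regression suite the same tests are re-run after every change to catch newly introduced faults.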

UAT Terminology

UAT (User Acceptance Testing) refers to user acceptance testing, which plays an important role in the software development process.

To build a solid understanding of UAT terminology, this article covers its basic concepts, process, key steps, and common vocabulary.

1. Basic concepts. UAT (User Acceptance Testing) is the process by which end users accept a software system or product, with the goal of verifying that the system meets user needs and expectations.

Users simulate real usage scenarios in a realistic environment, testing the system's usability, stability, compatibility, and other qualities.

2. Process. A UAT effort typically includes the following stages:

(1) Test planning: define test objectives, scope, resources, and schedule.

(2) Test case design: design test cases from the user requirements and functionality, covering both normal and exceptional situations.

(3) Test environment preparation: set up the test environment, including hardware and software configuration.

(4) Test execution: execute the test cases step by step, record results, and report problems.

(5) Problem resolution and feedback: developers fix the problems and testing is repeated until the problems are resolved.

(6) Acceptance testing: end users run the tests and confirm whether the system meets the requirements.

(7) Final review and acceptance: review the test results and decide whether the system passes.

3. Key steps:

(1) Confirm user requirements: communicate thoroughly with users before testing starts to ensure the requirements are correctly understood.

(2) Prepare the test plan: produce a detailed plan that states the test objectives, scope, strategy, and resources.

(3) Design test cases: write thorough test cases based on the user requirements, making sure every functional module is covered.

(4) Prepare the test environment: build an environment similar to production, including hardware, software, and data.

(5) Execute the tests: run the tests according to the plan and the cases, recording results and problems.

(6) Track and resolve problems: log and track problems promptly, and make sure fixes are verified.

(7) Execute UAT: users test against the test cases and evaluate whether the system meets their needs.

(8) Fix and retest: after developers fix a problem, test again to confirm it is resolved.

(9) Accept and close: users perform acceptance, confirm the system meets expectations, and close the test.

4. Common vocabulary:
1. UAT Plan — UAT计划
2. Test Case — 测试用例
3. Test Scenario — 测试场景
4. Test Environment — 测试环境
5. Test Data — 测试数据
6. Test Execution — 测试执行
7. Defect — 缺陷
8. Bug — 错误
9. Issue — 问题
10. Acceptance Criteria — 验收标准

5. Summary. UAT terminology is the common vocabulary of user acceptance testing and matters greatly to the software development process and to product release.
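The workflow and vocabulary above can be combined into a simple record of one UAT case. This is only an illustrative sketch; the field names and helper are assumptions, not a standard UAT schema:

```python
# Illustrative UAT test-case record using the vocabulary above
# (Test Case, Test Data, Acceptance Criteria, Defect).
# The field names are invented for this sketch.
uat_case = {
    "test_case": "TC-001 log in with a valid account",
    "test_data": {"user": "demo", "password": "secret"},
    "acceptance_criteria": "user reaches the home page",
    "status": "not run",   # not run / passed / failed
    "defects": [],         # defect IDs raised during execution
}

def record_result(case, passed, defect_id=None):
    """Steps 4-5 of the flow: execute, record the result, report problems."""
    case["status"] = "passed" if passed else "failed"
    if defect_id is not None:
        case["defects"].append(defect_id)
    return case

# A failed execution raises a defect, which is then fixed and retested.
record_result(uat_case, passed=False, defect_id="BUG-17")
assert uat_case["status"] == "failed"
assert uat_case["defects"] == ["BUG-17"]
```

Acceptance (step 9) would flip the status to "passed" once the defect is verified as fixed.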

Software Testing English Terms and Abbreviations (compiled by 欧阳引擎)

Common software testing vocabulary — 欧阳引擎 (2021.01.01)

Alpha testing — α测试
Beta testing — β测试
Adaptability — 适应性
Availability — 可用性
Functional Specification — 功能规格说明书

Common process abbreviations:
SRS — software requirement specification(软件需求规格说明书)
HLD — high level design(概要设计)
LLD — low level design(详细设计)
RUP — rational unified process(统一开发流程)
IPD — integrated product development(集成产品开发)
CMM — capability maturity model(能力成熟度模型)
CMMI — capability maturity model integration(能力成熟度模型集成)
PDCA — plan, do, check, act(戴明环)
SEPG — software engineering process group(软件工程过程组)
IT — integration testing(集成测试)
ST — system testing(系统测试)
KPA — key process area(关键过程域)
PR — peer review(同行评审)
UAT — user acceptance testing(用户验收测试)
V&V — verification & validation(验证和确认)
CCB — change control board(变更控制委员会)
GUI — graphic user interface(图形用户界面)
CMO — configuration management officer(配置管理员)
MTBF — mean time between failures(平均失效间隔时间)
MTTR — mean time to restoration(平均修复时间)
MTTF — mean time to failure(平均失效时间)
SOW — statement of work(工作任务书)

Common document abbreviations in software development:
MRD — Market Requirement Document(市场需求文档)
PRD — Product Requirement Document(产品需求文档)
SOW — Statement of Work(工作任务说明书)
PHB — Process Handbook(项目过程手册)
EST — Estimation Sheet(估计记录)
PPL — Project Plan(项目计划)
CMP — Configuration Management Plan(配置管理计划)
QAP — Software Quality Assurance Plan(软件质量保证计划)
RMP — Software Risk Management Plan(软件风险管理计划)
TST — Test Strategy(测试策略)
WBS — Work Breakdown Structure(工作分解结构)
BRS — Business Requirement Specification(业务需求说明书)
SRS — Software Requirement Specification(软件需求说明书)
STP — System Testing Plan(系统测试计划)
STC — System Testing Cases(系统测试用例)
HLD — High Level Design(概要设计说明书)
ITP — Integration Testing Plan(集成测试计划)
ITC — Integration Testing Cases(集成测试用例)
LLD — Low Level Design(详细设计说明书)
UTP — Unit Testing Plan(单元测试计划)
UTC — Unit Testing Cases(单元测试用例)
UTR — Unit Testing Report(单元测试报告)
ITR — Integration Testing Report(集成测试报告)
STR — System Testing Report(系统测试报告)
RTM — Requirements Traceability Matrix(需求跟踪矩阵)
CSA — Configuration Status Accounting(配置状态发布)
CRF — Change Request Form(变更申请表)
WSR — Weekly Status Report(项目周报)
QSR — Quality Weekly Status Report(质量工作周报)
QAR — Quality Audit Report(质量检查报告)
QCL — Quality Check List(质量检查表)
PAR — Phase Assessment Report(阶段评估报告)
CLR — Closure Report(项目总结报告)
RFF — Review Finding Form(评审发现表)
MOM — Minutes of Meeting(会议纪要)
MTX — Metrics Sheet(度量表)
CCF — Consistency Check Form(一致性检查表)
BAF — Baseline Audit Form(基线审计表)
PTF — Problem Trace Form(问题跟踪表)

Chinese-English glossary of software testing terms (领测国际科技(北京)有限公司 领测软件测试网)

A
• Abstract test case (high level test case):概要测试用例
• Acceptance:验收
• Acceptance criteria:验收标准
• Acceptance testing:验收测试
• Accessibility testing:可达性测试
• Accuracy:精确性
• Actual outcome (actual result):实际结果
• Ad hoc review (informal review):非正式评审
• Ad hoc testing:随机测试
• Adaptability:适应性
• Agile testing:敏捷测试
• Algorithm test (branch testing):分支测试
• Alpha testing:alpha测试
• Analyzability:易分析性
• Analyzer:分析器
• Anomaly:异常
• Arc testing:分支测试
• Attractiveness:吸引力
• Audit:审计
• Audit trail:审计跟踪
• Automated testware:自动测试组件
• Availability:可用性

B
• Back-to-back testing:对比测试
• Baseline:基线
• Basic block:基本块
• Basis test set:基本测试集
• Bebugging:错误撒播
• Behavior:行为
• Benchmark test:基准测试
• Bespoke software:定制的软件
• Best practice:最佳实践
• Beta testing:Beta测试
• Big-bang testing:大爆炸集成测试
• Black-box technique:黑盒技术
• Black-box testing:黑盒测试
• Black-box test design technique:黑盒测试设计技术
• Blocked test case:被阻塞的测试用例
• Bottom-up testing:自底向上测试
• Boundary value:边界值
• Boundary value analysis:边界值分析
• Boundary value coverage:边界值覆盖率
• Boundary value testing:边界值测试
• Branch:分支
• Branch condition:分支条件
• Branch condition combination coverage:分支条件组合覆盖率
• Branch condition combination testing:分支条件组合测试
• Branch condition coverage:分支条件覆盖率
• Branch coverage:分支覆盖率
• Branch testing:分支测试
• Bug:缺陷
• Business process-based testing:基于业务流程的测试

C
• Capability Maturity Model (CMM):能力成熟度模型
• Capability Maturity Model Integration (CMMI):集成能力成熟度模型
• Capture/playback tool:捕获/回放工具
• Capture/replay tool:捕获/重放工具
• CASE (Computer Aided Software Engineering):计算机辅助软件工程
• CAST (Computer Aided Software Testing):计算机辅助软件测试
• Cause-effect graph:因果图
• Cause-effect graphing:因果图技术
• Cause-effect analysis:因果分析
• Cause-effect decision table:因果判定表
• Certification:认证
• Changeability:可变性
• Change control:变更控制
• Change control board:变更控制委员会
• Checker:检查人员
• Chow's coverage metrics (N-switch coverage):N切换覆盖率
• Classification tree method:分类树方法
• Code analyzer:代码分析器
• Code coverage:代码覆盖率
• Code-based testing:基于代码的测试
• Co-existence:共存性
• Commercial off-the-shelf software:商用现货软件
• Comparator:比较器
• Compatibility testing:兼容性测试
• Compiler:编译器
• Complete testing:完全测试/穷尽测试
• Completion criteria:完成标准
• Complexity:复杂性
• Compliance:一致性
• Compliance testing:一致性测试
• Component:组件
• Component integration testing:组件集成测试
• Component specification:组件规格说明
• Component testing:组件测试
• Compound condition:组合条件
• Concrete test case (low level test case):详细测试用例
• Concurrency testing:并发测试
• Condition:条件表达式
• Condition combination coverage:条件组合覆盖率
• Condition coverage:条件覆盖率
• Condition determination coverage:条件判定覆盖率
• Condition determination testing:条件判定测试
• Condition testing:条件测试
• Condition outcome:条件结果
• Confidence test (smoke test):信心测试(冒烟测试)
• Configuration:配置
• Configuration auditing:配置审核
• Configuration control:配置控制
• Configuration control board (CCB):配置控制委员会
• Configuration identification:配置标识
• Configuration item:配置项
• Configuration management:配置管理
• Configuration testing:配置测试
• Confirmation testing:确认测试
• Conformance testing:一致性测试
• Consistency:一致性
• Control flow:控制流
• Control flow graph:控制流图
• Control flow path:控制流路径
• Conversion testing:转换测试
• COTS (Commercial Off-The-Shelf software):商用现货软件
• Coverage:覆盖率
• Coverage analysis:覆盖率分析
• Coverage item:覆盖项
• Coverage tool:覆盖率工具
• Custom software:定制软件
• Cyclomatic complexity:圈复杂度
• Cyclomatic number:圈数

D
• Daily build:每日构建
• Data definition:数据定义
• Data driven testing:数据驱动测试
• Data flow:数据流
• Data flow analysis:数据流分析
• Data flow coverage:数据流覆盖率
• Data flow test:数据流测试
• Data integrity testing:数据完整性测试
• Database integrity testing:数据库完整性测试
• Dead code:无效代码
• Debugger:调试器
• Debugging:调试
• Debugging tool:调试工具
• Decision:判定
• Decision condition coverage:判定条件覆盖率
• Decision condition testing:判定条件测试
• Decision coverage:判定覆盖率
• Decision table:判定表
• Decision table testing:判定表测试
• Decision testing:判定测试技术
• Decision outcome:判定结果
• Defect:缺陷
• Defect density:缺陷密度
• Defect Detection Percentage (DDP):缺陷发现率
• Defect management:缺陷管理
• Defect management tool:缺陷管理工具
• Defect masking:缺陷屏蔽
• Defect report:缺陷报告
• Defect tracking tool:缺陷跟踪工具
• Definition-use pair:定义-使用对
• Deliverable:交付物
• Design-based testing:基于设计的测试
• Desk checking:桌面检查
• Development testing:开发测试
• Deviation:偏差
• Deviation report:偏差报告
• Dirty testing:负面测试
• Documentation testing:文档测试
• Domain:域
• Driver:驱动程序
• Dynamic analysis:动态分析
• Dynamic analysis tool:动态分析工具
• Dynamic comparison:动态比较
• Dynamic testing:动态测试

E
• Efficiency:效率
• Efficiency testing:效率测试
• Elementary comparison testing:基本比较测试
• Emulator:仿真器、仿真程序
• Entry criteria:入口标准
• Entry point:入口点
• Equivalence class:等价类
• Equivalence partition:等价区间
• Equivalence partition coverage:等价区间覆盖率
• Equivalence partitioning:等价划分技术
• Error:错误
• Error guessing:错误猜测技术
• Error seeding:错误撒播
• Error tolerance:错误容限
• Evaluation:评估
• Exception handling:异常处理
• Executable statement:可执行的语句
• Exercised:已执行的
• Exhaustive testing:穷尽测试
• Exit criteria:出口标准
• Exit point:出口点
• Expected outcome:预期结果
• Expected result:预期结果
• Exploratory testing:探索性测试

F
• Fail:失败
• Failure:失效
• Failure mode:失效模式
• Failure Mode and Effect Analysis (FMEA):失效模式和影响分析
• Failure rate:失效频率
• Fault:缺陷
• Fault density:缺陷密度
• Fault Detection Percentage (FDP):缺陷发现率
• Fault masking:缺陷屏蔽
• Fault tolerance:缺陷容限
• Fault tree analysis:缺陷树分析
• Feature:特征
• Field testing:现场测试
• Finite state machine:有限状态机
• Finite state testing:有限状态测试
• Formal review:正式评审
• Frozen test basis:冻结的测试依据
• Function Point Analysis (FPA):功能点分析
• Functional integration:功能集成
• Functional requirement:功能需求
• Functional test design technique:功能测试设计技术
• Functional testing:功能测试
• Functionality:功能性
• Functionality testing:功能性测试

G
• Glass box testing:白盒测试

H
• Heuristic evaluation:启发式评估
• High level test case:概要测试用例
• Horizontal traceability:水平跟踪

I
• Impact analysis:影响分析
• Incremental development model:增量开发模型
• Incremental testing:增量测试
• Incident:事件
• Incident management:事件管理
• Incident management tool:事件管理工具
• Incident report:事件报告
• Independence:独立
• Infeasible path:不可行路径
• Informal review:非正式评审
• Input:输入
• Input domain:输入范围
• Input value:输入值
• Inspection:审查
• Inspection leader:审查组织者
• Inspector:审查人员
• Installability:可安装性
• Installability testing:可安装性测试
• Installation guide:安装指南
• Installation wizard:安装向导
• Instrumentation:插装
• Instrumenter:插装工具
• Intake test:入口测试
• Integration:集成
• Integration testing:集成测试
• Integration testing in the large:大范围集成测试
• Integration testing in the small:小范围集成测试
• Interface testing:接口测试
• Interoperability:互通性
• Interoperability testing:互通性测试
• Invalid testing:无效性测试
• Isolation testing:隔离测试
• Item transmittal report:版本发布报告
• Iterative development model:迭代开发模型

K
• Key performance indicator:关键绩效指标
• Keyword driven testing:关键字驱动测试

L
• Learnability:易学性
• Level test plan:等级测试计划
• Link testing:组件集成测试
• Load testing:负载测试
• Logic-coverage testing:逻辑覆盖测试
• Logic-driven testing:逻辑驱动测试
• Logical test case:逻辑测试用例
• Low level test case:详细测试用例

M
• Maintenance:维护
• Maintenance testing:维护测试
• Maintainability:可维护性
• Maintainability testing:可维护性测试
• Management review:管理评审
• Master test plan:综合测试计划
• Maturity:成熟度
• Measure:度量
• Measurement:度量
• Measurement scale:度量粒度
• Memory leak:内存泄漏
• Metric:度量
• Migration testing:移植测试
• Milestone:里程碑
• Mistake:错误
• Moderator:仲裁员
• Modified condition decision coverage:改进的条件判定覆盖率
• Modified condition decision testing:改进的条件判定测试
• Modified multiple condition coverage:改进的多重条件判定覆盖率
• Modified multiple condition testing:改进的多重条件判定测试
• Module:模块
• Module testing:模块测试
• Monitor:监视器
• Multiple condition:多重条件
• Multiple condition coverage:多重条件覆盖率
• Multiple condition testing:多重条件测试
• Mutation analysis:变异分析
• Mutation testing:变异测试

N
• N-switch coverage:N切换覆盖率
• N-switch testing:N切换测试
• Negative testing:负面测试
• Non-conformity:不一致
• Non-functional requirement:非功能需求
• Non-functional testing:非功能测试
• Non-functional test design techniques:非功能测试设计技术

O
• Off-the-shelf software:现货软件
• Operability:可操作性
• Operational environment:操作环境
• Operational profile testing:运行剖面测试
• Operational testing:操作测试
• Oracle:测试预言
• Outcome:输出/结果
• Output:输出
• Output domain:输出范围
• Output value:输出值

P
• Pair programming:结对编程
• Pair testing:结对测试
• Partition testing:分割测试
• Pass:通过
• Pass/fail criteria:通过/失败标准
• Path:路径
• Path coverage:路径覆盖
• Path sensitizing:路径敏感化
• Path testing:路径测试
• Peer review:同行评审
• Performance:性能
• Performance indicator:绩效指标
• Performance testing:性能测试
• Performance testing tool:性能测试工具
• Phase test plan:阶段测试计划
• Portability:可移植性
• Portability testing:移植性测试
• Postcondition:后置条件
• Post-execution comparison:运行后比较
• Precondition:前置条件
• Predicted outcome:预期结果
• Pretest:预测试
• Priority:优先级
• Probe effect:探测效应
• Problem:问题
• Problem management:问题管理
• Problem report:问题报告
• Process:流程
• Process cycle test:处理周期测试
• Product risk:产品风险
• Project:项目
• Project risk:项目风险
• Program instrumenter:程序插装工具
• Program testing:程序测试
• Project test plan:项目测试计划
• Pseudo-random:伪随机

Q
• Quality:质量
• Quality assurance:质量保证
• Quality attribute:质量属性
• Quality characteristic:质量特征
• Quality management:质量管理

R
• Random testing:随机测试
• Recorder:记录员
• Record/playback tool:记录/回放工具
• Recoverability:可复原性
• Recoverability testing:可复原性测试
• Recovery testing:恢复测试
• Regression testing:回归测试
• Regulation testing:一致性测试
• Release note:版本说明
• Reliability:可靠性
• Reliability testing:可靠性测试
• Replaceability:可替换性
• Requirement:需求
• Requirements-based testing:基于需求的测试
• Requirements management tool:需求管理工具
• Requirements phase:需求阶段
• Resource utilization:资源利用
• Resource utilization testing:资源利用测试
• Result:结果
• Resumption criteria:继续测试标准
• Re-testing:再测试
• Review:评审
• Reviewer:评审人员
• Review tool:评审工具
• Risk:风险
• Risk analysis:风险分析
• Risk-based testing:基于风险的测试
• Risk control:风险控制
• Risk identification:风险识别
• Risk management:风险管理
• Risk mitigation:风险消减
• Robustness:健壮性
• Robustness testing:健壮性测试
• Root cause:根本原因

S
• Safety:安全
• Safety testing:安全性测试
• Sanity test:健全测试
• Scalability:可扩展性
• Scalability testing:可扩展性测试
• Scenario testing:情景测试
• Scribe:记录员
• Scripting language:脚本语言
• Security:安全性
• Security testing:安全性测试
• Serviceability testing:可维护性测试
• Severity:严重性
• Simulation:仿真
• Simulator:仿真程序、仿真器
• Site acceptance testing:现场验收测试
• Smoke test:冒烟测试
• Software:软件
• Software feature:软件功能
• Software quality:软件质量
• Software quality characteristic:软件质量特征
• Software test incident:软件测试事件
• Software test incident report:软件测试事件报告
• Software Usability Measurement Inventory (SUMI):软件可用性调查问卷
• Source statement:源语句
• Specification:规格说明
• Specification-based testing:基于规格说明的测试
• Specification-based test design technique:基于规格说明的测试设计技术
• Specified input:特定输入
• Stability:稳定性
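Boundary value analysis, listed in the glossary above, selects inputs at the edges of a valid range and just outside them. A small sketch; the 1-100 range and the helper names are invented for illustration:

```python
def boundary_values(low, high):
    """Classic boundary-value-analysis inputs for a closed integer
    range [low, high]: each boundary plus its invalid neighbor."""
    return [low - 1, low, high, high + 1]

# Hypothetical spec: a field accepts integers from 1 to 100.
values = boundary_values(1, 100)
assert values == [0, 1, 100, 101]

def accepts(n):
    """Hypothetical validator under test."""
    return 1 <= n <= 100

# Expected verdicts at the boundaries: invalid, valid, valid, invalid.
assert [accepts(v) for v in values] == [False, True, True, False]
```

Off-by-one defects cluster at exactly these points, which is why the invalid neighbors 0 and 101 are tested alongside the boundaries themselves.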

Software Testing Vocabulary (English)

A
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing:
• Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
• The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

B
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
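The Automated Testing entry above centers on executing tests and comparing actual outcomes to predicted outcomes without manual intervention. A bare-bones sketch of that comparison loop; the function under test and the test table are invented:

```python
def square(n):
    """Hypothetical function under test."""
    return n * n

# Each row: a test input and the predicted outcome.
test_table = [(0, 0), (3, 9), (-4, 16)]

def run_tests(table):
    """Execute every case and compare actual to predicted outcomes,
    collecting any mismatches for the test report."""
    failures = []
    for given, predicted in table:
        actual = square(given)
        if actual != predicted:
            failures.append((given, predicted, actual))
    return failures

assert run_tests(test_table) == []   # every actual outcome matched
```

Real test frameworks add setup of preconditions and reporting around this same core loop.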
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases" or values that are usually out of range as defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

C
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated output effects which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left.
All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, operating systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Component Testing: See Unit Testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.

D
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or functional / program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it.
See also Static Testing.

E
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.

F
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.
• Testing the features and operational behavior of a product to ensure they correspond to its specifications.
• Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

H
High Order Tests: Black-box tests conducted once the software has been integrated.

I
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs correctly under different conditions, such as a new installation, an upgrade, or installation on platforms with minimal or unusual configurations.

J

K

L
Load Testing: See Performance Testing.
Localization Testing: This term refers to making software specifically designed for a specific locality.
Loop Testing: A white box testing technique that exercises program loops.

M
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something, such as the number of bugs per lines of code.
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or an application does not crash out.

N
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

O

P
Path Testing: Testing in which all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass".
See also Negative Testing.

Q
Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

R
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

S
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time.
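The Race Condition entry above describes unmoderated concurrent writes to a shared resource. A small sketch of the standard remedy, a lock that moderates simultaneous access; the counter example is invented for illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(times):
    """Each read-modify-write of the shared counter is guarded by the
    lock, so no two threads interleave inside the critical section."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1

# Two concurrent writers to the same shared resource.
threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With the lock, no updates are lost.
assert counter == 200_000
```

Removing the `with lock:` line reintroduces the race: the final count may then fall short, and concurrency testing tries to provoke exactly such interleavings.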
For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software. See also White Box Testing.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

T

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing:
• The process of exercising software to verify that it satisfies specified requirements and to detect errors.
• The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
• The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Automation: See Automated Testing.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case:
• Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements, test steps, verification steps, prerequisites, outputs, test environment, etc.
• A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development: A testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref. IEEE Std 829.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing: A variation of top-down testing in which the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.
Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.

U

Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end user would conduct their day-to-day activities.
Unit Testing: Testing of individual software components.

V

Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.

W

Walkthrough: A review of requirements, designs, or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end user.
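Several of the entries above (Unit Testing, Test Case, Test Suite, Test Driver/Harness) fit together in one minimal sketch using Python's standard unittest module. The `add` function here is a made-up unit under test, purely for illustration:

```python
import unittest

# A toy "unit under test": the smallest software component exercised in isolation.
def add(a: int, b: int) -> int:
    return a + b

# A Test Case: inputs, execution steps, and expected outcomes for one objective.
class AddTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# A Test Suite: a collection of test cases grouped together; the TextTestRunner
# plays the role of the test driver/harness that executes them.
def run_suite() -> unittest.TestResult:
    suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
    return unittest.TextTestRunner(verbosity=0).run(suite)

if __name__ == "__main__":
    result = run_suite()
    print("all tests passed:", result.wasSuccessful())
```

The returned `TestResult` records how many tests ran and whether any failed, which is what a regression run inspects.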

软件测试英语面试题及答案

1. What is software testing?

Software testing is the process of evaluating a software application or system to determine whether it meets the specified requirements and to identify any defects or issues that might be present. It is a key phase in the software development life cycle and plays a crucial role in ensuring the quality and reliability of the software product.

Answer: Software testing is a systematic process that involves verifying and validating a software application to ensure it meets the requirements and is free from defects. It is essential to improve the quality of the software and to ensure that it functions correctly under various conditions.

2. What are the different types of software testing?

There are several types of software testing, including:
- Functional Testing: Testing individual components or features with both expected and unexpected inputs and comparing the actual results with the expected results.
- Non-functional Testing: Evaluating the performance, reliability, usability, and other attributes of the software.
- Regression Testing: Ensuring that new changes to the software have not adversely affected existing features.
- Integration Testing: Testing the combination of software components to ensure they work together as expected.
- System Testing: Testing the complete, integrated software system to evaluate its compliance with the specified requirements.
- Acceptance Testing: The final testing stage, where the software is tested to ensure it meets the user's acceptance criteria.

Answer: The various types of software testing are designed to cover different aspects of software quality. They include functional, non-functional, regression, integration, system, and acceptance testing, each serving a specific purpose in the overall testing process.

3. What is the difference between white box testing and black box testing?

- White Box Testing: Also known as structural testing or code-based testing, it involves testing the software with knowledge of its internal structure and workings. It is used to check the internal logic and flow of the program.
- Black Box Testing: This type of testing is performed without any knowledge of the internal workings of the application. It focuses on the functionality of the software and how it responds to inputs.

Answer: White box testing requires an understanding of the software's internal code and structure, while black box testing is based on the software's functionality and external behavior. The choice between the two depends on the testing objectives and the information available to the tester.

4. What is the purpose of test cases and test suites?

Test cases are detailed descriptions of the test scenarios that are designed to verify specific aspects of the software. They include the inputs, expected results, and the steps to execute the test. A test suite is a collection of test cases that are grouped together to cover a particular feature or functionality of the software.

Answer: Test cases and test suites are essential for structured testing. They provide a systematic approach to testing, ensuring that all aspects of the software are evaluated. Test cases help in identifying defects, while test suites help in organizing and prioritizing the testing efforts.

5. How do you handle a situation where you find a bug that is not reproducible?

When a bug is not reproducible, it can be challenging to diagnose and fix. The steps to handle such a situation include:
- Documenting the Bug: Record all the details about the bug, including the steps taken, the environment, and any error messages.
- Analyzing the Bug: Try to understand the conditions under which the bug might occur by analyzing the logs, code, and system state.
- Isolating the Bug: Attempt to isolate the bug by changing one variable at a time to see if the bug can be reproduced.
- Communicating with the Team: Discuss the bug with the development team to get insights and possible solutions.
- Prioritizing the Bug: If the bug cannot be reproduced, it may be necessary to prioritize it based on its impact and the likelihood of it occurring again.

Answer: Reproducibility is key to resolving bugs. However, when a bug is not reproducible, thorough documentation, analysis, isolation, communication, and prioritization are crucial steps in managing the issue effectively.

6. How do you prioritize testing efforts?

Prioritizing testing efforts is essential to ensure that the most critical parts of the software are tested first. The factors that influence prioritization include:
- Risk Assessment: Testing areas with the highest risk of failure first.
- Business Value: Prioritizing features that provide the most value to the business.
- User Impact: Focusing on features that impact the user experience the most.
- Resource Availability: Considering the availability of testing resources.
- Development Progress: Aligning testing with the development schedule to ensure that testing is completed in time.

Answer: Effective prioritization of testing efforts is a balance between risk, value, user impact, resource availability, and development progress. It's important to have a clear understanding of these factors and to revisit priorities as the project evolves.
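The prioritization factors listed under question 6 can be turned into a simple weighted score. The weights and the sample feature data below are purely illustrative assumptions, not a standard formula:

```python
# Hypothetical scoring scheme: risk, business value, and user impact are the
# factors named in the answer above; the 3/2/1 weights are an assumption.
def priority_score(risk: int, business_value: int, user_impact: int) -> int:
    return 3 * risk + 2 * business_value + user_impact

def prioritize(test_areas: list[dict]) -> list[str]:
    """Return test-area names ordered from highest to lowest priority."""
    ranked = sorted(
        test_areas,
        key=lambda t: priority_score(t["risk"], t["business_value"], t["user_impact"]),
        reverse=True,
    )
    return [t["name"] for t in ranked]

# Made-up features rated 1-5 on each factor.
areas = [
    {"name": "report export", "risk": 1, "business_value": 2, "user_impact": 1},
    {"name": "payment flow",  "risk": 5, "business_value": 5, "user_impact": 4},
    {"name": "login",         "risk": 4, "business_value": 3, "user_impact": 5},
]
print(prioritize(areas))  # payment flow ranks first
```

In practice the scores would come from a risk-assessment session, but the sorting idea is the same.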

测试常用英语短语

Acceptance Testing--可接受性测试 一般由用户/客户进行的确认是否可以接受一个产品的验证性测试。
actual outcome--实际结果 被测对象在特定的条件下实际产生的结果。
Ad Hoc Testing--随机测试 测试人员通过随机的尝试系统的功能,试图使系统中断。
algorithm--算法 (1)一个定义好的有限规则集,用于在有限步骤内解决一个问题;(2)执行一个特定任务的任何操作序列。
algorithm analysis--算法分析 一个软件的验证确认任务,用于保证选择的算法是正确的、合适的和稳定的,并且满足所有精确性、规模和时间方面的要求。
Alpha Testing--Alpha测试 由选定的用户进行的产品早期性测试。这个测试一般在可控制的环境下进行。
analysis--分析 (1)分解到一些原子部分或基本原则,以便确定整体的特性;(2)一个推理的过程,显示一个特定的结果是假设前提的结果;(3)一个问题的方法研究,并且问题被分解为一些小的相关单元作进一步详细研究。
anomaly--异常 在文档或软件操作中观察到的任何与期望违背的结果。
application software--应用软件 满足特定需要的软件。
architecture--构架 一个系统或组件的组织结构。
ASQ--自动化软件质量(Automated Software Quality) 使用软件工具来提高软件的质量。
assertion--断言 指定一个程序必须已经存在的状态的一个逻辑表达式,或者一组程序变量在程序执行期间的某个点上必须满足的条件。
assertion checking--断言检查 用户在程序中嵌入的断言的检查。
audit--审计 一个或一组工作产品的独立检查,以评价与规格、标准、契约或其它准则的符合程度。
audit trail--审计跟踪 系统审计活动的一个时间记录。
Automated Testing--自动化测试 使用自动化测试工具来进行测试,这类测试一般不需要人干预,通常在GUI、性能等测试中用得较多。
Backus-Naur Form--BNF范式 一种分析语言,用于形式化描述语言的语法。
baseline--基线 一个已经被正式评审和批准的规格或产品,它作为进一步开发的一个基础,并且必须通过正式的变更流程来变更。
Basic Block--基本块 一个或多个顺序的可执行语句块,不包含任何分支语句。
basis test set--基本测试集 根据代码逻辑引出来的一个测试用例集合,它保证能获得100%的分支覆盖。
behaviour--行为 对于一个系统的一个函数的输入和预置条件组合以及需要的反应。一个函数的所有规格包含一个或多个行为。
benchmark--标杆/指标/基准 一个标准,根据该标准可以进行度量或比较。
Beta Testing--Beta测试 在客户场地,由客户进行的对产品预发布版本的测试。这个测试一般是不可控的。
big-bang testing--大锤测试/一次性集成测试 非渐增式集成测试的一种策略,测试的时候把所有系统的组件一次性组合成系统进行测试。
Black Box Testing--黑盒测试 根据软件的规格对软件进行的测试,这类测试不考虑软件内部的运作原理,因此软件对用户来说就像一个黑盒子。
bottom-up testing--由低向上测试 渐增式集成测试的一种,其策略是先测试底层的组件,然后逐步加入较高层次的组件进行测试,直到系统所有组件都加入到系统。
boundary value--边界值 一个输入或输出值,它处在等价类的边界上。
boundary value coverage--边界值覆盖 通过测试用例,测试组件等价类的所有边界值。
boundary value testing--边界值测试 通过边界值分析方法来生成测试用例的一种测试策略。
Boundary Value Analysis--边界值分析 该分析一般与等价类一起使用。经验认为软件的错误经常在输入的边界上产生,因此边界值分析就是分析软件输入边界的一种方法。
branch--分支 在组件中,控制从任何语句到其它任何非直接后续语句的一个条件转换,或者是一个无条件转换。
branch condition--分支条件
branch condition combination coverage--分支条件组合覆盖 在每个判定中所有分支条件结果组合被测试用例覆盖到的百分比。
branch condition combination testing--分支条件组合测试 通过执行分支条件结果组合来设计测试用例的一种方法。
branch condition coverage--分支条件覆盖 每个判定中分支条件结果被测试用例覆盖到的百分比。
branch condition testing--分支条件测试 通过执行分支条件结果来设计测试用例的一种方法。
branch coverage--分支覆盖 通过测试执行到的分支的百分比。
branch outcome--分支结果 见判定结果(decision outcome)。
branch point--分支点 见判定(decision)。
branch testing--分支测试 通过执行分支结果来设计测试用例的一种方法。
Breadth Testing--广度测试 在测试中测试一个产品的所有功能,但是不测试更细节的特性。
bug--缺陷
capture/playback tool--捕获/回放工具 参考capture/replay tool。
Capture/Replay Tool--捕获/回放工具 一种测试工具,能够捕获在测试过程中传递给软件的输入,并且能够在以后的时间中,重复这个执行的过程。这类工具一般在GUI测试中用得较多。
CASE--计算机辅助软件工程(computer aided software engineering) 用于支持软件开发的一个自动化系统。
CAST--计算机辅助测试 在测试过程中使用计算机软件工具进行辅助的测试。
cause-effect graph--因果图 一个图形,用来表示输入(原因)与结果之间的关系,可以被用来设计测试用例。
certification--证明 一个过程,用于确定一个系统或组件与特定的需求相一致。
change control--变更控制 一个用于计算机系统或系统数据修改的过程,该过程是质量保证程序的一个关键子集,需要被明确地描述。
code audit--代码审计 由一个人、组或工具对源代码进行的一个独立的评审,以验证其与设计规格、程序标准的一致性。正确性和有效性也会被评价。
Code Coverage--代码覆盖率 一种分析方法,用于确定在一个测试套执行后,软件的哪些部分被执行到了,哪些部分没有被执行到。
Code Inspection--代码检视 一个正式的同行评审手段,在该评审中,作者的同行根据检查表对程序的逻辑进行提问,并检查其与编码规范的一致性。
Code Walkthrough--代码走读 一个非正式的同行评审手段,在该评审中,代码被使用一些简单的测试用例进行人工执行,程序变量的状态被手工分析,以分析程序的逻辑和假设。
code-based testing--基于代码的测试 根据从实现中引出的目标设计测试用例。
coding standards--编程规范 一些编程方面需要遵循的标准,包括命名方式、排版格式等内容。
Compatibility Testing--兼容性测试 测试软件是否和系统的其它与之交互的元素之间兼容,如:浏览器、操作系统、硬件等。
complete path testing--完全路径测试 参考穷尽测试(exhaustive testing)。
completeness--完整性 实体的所有必须部分必须被包含的属性。
complexity--复杂性 系统或组件难于理解或验证的程度。
Component--组件 一个最小的软件单元,有着独立的规格。
Component Testing--组件测试 参考单元测试。
computation data use--计算数据使用 一个不在条件中的数据使用。
computer system security--计算机系统安全性 计算机软件和硬件对偶然的或故意的访问、使用、修改或破坏的一种保护机制。
condition--条件 一个不包含布尔操作的布尔表达式,例如:A<B。
condition coverage--条件覆盖 通过测试执行到的条件的百分比。
condition outcome--条件结果 条件为真为假的评价。
configuration control--配置控制 配置管理的一个方面,包括评价、协调、批准和实现配置项的变更。
configuration management--配置管理 一套技术和管理方面的原则,用于确定和文档化一个配置项的功能和物理属性、控制对这些属性的变更、记录和报告变更处理和实现的状态、以及验证与指定需求的一致性。
conformance criterion--一致性标准 判断组件在一个特定输入值上的行为是否符合规格的一种方法。
Conformance Testing--一致性测试 测试一个系统的实现是否和其基于的规格相一致的测试。
consistency--一致性 在系统或组件的各组成部分和文档之间没有矛盾、一致的程度。
consistency checker--一致性检查器 一个软件工具,用于测试设计规格中需求的一致性和完整性。
control flow--控制流 程序执行中所有可能的事件顺序的一个抽象表示。
control flow graph--控制流图 通过一个组件的可能替换控制流路径的一个图形表示。
conversion testing--转换测试 用于测试已有系统的数据是否能够转换到替代系统上的一种测试。
corrective maintenance--故障检修 用于纠正硬件或软件中故障的维护。
correctness--正确性 软件遵从其规格的程度;软件在其规格、设计和编码中没有故障的程度;软件、文档和其它项满足需求的程度;软件、文档和其它项满足用户明显的和隐含的需求的程度。
coverage--覆盖率 用于确定测试所执行到的覆盖项的百分比。
coverage item--覆盖项 作为测试基础的一个入口或属性:如语句、分支、条件等。
crash--崩溃 计算机系统或组件突然并完全地丧失功能。
criticality--关键性
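The boundary value entries above (boundary value, boundary value coverage, boundary value testing / 边界值分析) all revolve around picking test inputs on and just around the edges of an equivalence class. A minimal sketch, assuming a made-up integer input range such as an age field limited to 18–60:

```python
def boundary_values(lo: int, hi: int) -> list[int]:
    """Classic boundary-value candidates for an integer input range [lo, hi]:
    just below, on, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Example: a field that accepts ages 18..60 (an assumed example range).
print(boundary_values(18, 60))  # [17, 18, 19, 59, 60, 61]
```

The first and last values fall outside the valid equivalence class, so they double as negative test cases.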


计算机软件英语词汇表

1. Application - 应用程序
2. Software - 软件
3. Program - 程序
4. Code - 代码
5. Algorithm - 算法
6. Database - 数据库
7. Operating system - 操作系统
8. User interface - 用户界面
9. Debugging - 调试
10. Compiler - 编译器
11. Interpreter - 解释器
12. Version control - 版本控制
13. Integration - 集成
14. Testing - 测试
15. Documentation - 文档
16. Web development - 网站开发
17. Mobile app - 移动应用
18. Interface - 接口
19. Function - 函数
20. Object-oriented programming - 面向对象编程
21. Back-end - 后端
22. Front-end - 前端
23. Database management - 数据库管理
24. Network - 网络
25. Security - 安全
26. Framework - 框架
27. Bug - 错误
28. Patch - 补丁
29. Update - 更新
30. Install - 安装
31. Uninstall - 卸载
32. Optimization - 优化
33. Scalability - 可扩展性
34. User experience - 用户体验
35. User acceptance testing - 用户验收测试
36. Integration testing - 集成测试
37. Regression testing - 回归测试
38. Alpha testing - 内部测试
39. Beta testing - 公测
40. Deployment - 部署


Actual Fix Time 实际修改时间
Assigned To 被分配给
Closed in Version 被关闭的版本
Closing Date 关闭日期
Defect ID 缺陷编号
Description 描述
Detected By 被(谁)发现
Detected in Version 被发现的版本
Detected on Date 被发现的日期
Estimated Fix Time 估计修改的时间
Modified 修正
Planned Closing Version 计划关闭的版本
Priority 优先级
Project 项目
R&D Comments 研发人员备注
Reproducible 可重现
Severity 严重程度
Status 状态
Summary 概要
Creation Date 创建日期
Description 描述
Designer 设计人员
Estimated DevTime 估计设计和生成测试的时间
Execution Status 执行状态
Modified 修正
Path 路径
Status 状态
Steps 步骤
Template 模版
Test Name 测试名称
Type 类型
Actual 实际结果
Description 描述
Exec Date 执行日期
Exec Time 执行时间
Expected 期望结果
Source Test 测试资料
Status 状态
Step Name 步骤名称
Duration 执行的期限
Exec Date 执行日期
Exec Time 执行时间
Host 主机
Operating System 操作系统
OS Build Number 操作系统生成的编号
OS Service Pack 操作系统的服务软件包
Run Name 执行名称
Run VC Status 执行 VC 的状态
Run VC User 执行 VC 的用户
Run VC Version 执行 VC 的版本
Status 状态
Test Version 测试版本
Tester 测试员
Attachment 附件
Author 作者
Cover Status 覆盖状态
Creation Date 创建日期
Creation Time 创建时间
Description 描述
Modified 修正
Name 名称
Priority 优先级
Product 产品
ReqID 需求编号
Reviewed 被检查
Type 类型
Exec Date 执行日期
Modified 被修正
Planned Exec Date 计划执行的日期
Planned Exec Time 计划执行的时间
Planned Host Name 计划执行的主机名称
Responsible Tester 负责测试的人员
Status 状态
Test Version 测试的版本
Tester 测试员
Time 时间
Close Date 关闭日期
Description 描述
Modified 修正
Open Date 开放日期
Status 状态
Test Set 测试集合
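The defect-tracking field names above (Defect ID, Severity, Priority, Status, ...) can be modeled as a simple record type. The dataclass below is an illustrative sketch only; the status and severity string values are assumptions, not tied to any particular tool:

```python
from dataclasses import dataclass, field
from datetime import date

# Field names mirror the defect-tracking vocabulary listed above; the example
# values and the New -> Open -> Fixed -> Closed lifecycle are assumptions.
@dataclass
class DefectRecord:
    defect_id: int
    summary: str
    severity: str            # e.g. "Critical", "Major", "Minor"
    priority: str            # e.g. "High", "Medium", "Low"
    status: str = "New"      # New -> Open -> Fixed -> Closed
    detected_by: str = ""
    detected_in_version: str = ""
    detected_on_date: date = field(default_factory=date.today)
    reproducible: bool = True

# A made-up bug report using these fields.
bug = DefectRecord(
    defect_id=1042,
    summary="Login button unresponsive after session timeout",
    severity="Major",
    priority="High",
    detected_by="tester01",
    detected_in_version="2.3.1",
)
print(bug.status)
```

A real tracker adds workflow rules (who may move a defect between statuses), but the record shape is essentially this.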

Acceptance testing(验收测试),系统开发生命周期方法论的一个阶段,这时相关的用户和/或独立测试人员根据测试计划和结果对系统进行测试和接收。它让系统用户决定是否接收系统。它是一项确定产品是否能够满足合同或用户所规定需求的测试。这是管理性和防御性控制。
Ad hoc testing(随机测试),没有书面测试用例、记录期望结果、检查列表、脚本或指令的测试。主要是根据测试者的经验对软件进行功能和性能抽查。随机测试是根据测试说明书执行用例测试的重要补充手段,是保证测试覆盖完整性的有效方式和过程。
Alpha testing(α测试),是由一个用户在开发环境下进行的测试,也可以是公司内部的用户在模拟实际操作环境下进行的受控测试,Alpha测试不能由程序员或测试员完成。
Automated Testing(自动化测试),使用自动化测试工具来进行测试,这类测试一般不需要人干预,通常在GUI、性能等测试中用得较多。
Beta testing(β测试),软件的多个用户在一个或多个用户的实际使用环境下进行的测试。开发者通常不在测试现场,Beta测试不能由程序员或测试员完成。
Black box testing(黑盒测试),指测试人员不关心程序具体如何实现的一种测试方法。根据软件的规格对软件进行各种输入和观察软件的各种输出结果来发现软件的缺陷的测试,这类测试不考虑软件内部的运作原理,因此软件对用户来说就像一个黑盒子。
Bug(错误),有时称作defect(缺陷)或error(错误),软件程序中存在的编程错误,可能会带来不必要的副作用,软件的功能和特性与设计规格说明书或用户需求不一致的方面。软件缺陷表现特征为:软件未达到产品说明书标明的功能;软件出现产品说明书指明不会出现的错误;软件功能超出产品说明书指明的范围;虽然产品说明书未指出但是软件应达到的目标;软件测试人员或用户认为软件难以理解、不易使用、运行速度缓慢等问题。
Bug report(错误报告),也称为"Bug record(错误记录)",记录发现的软件错误信息的文档,通常包括错误描述、复现步骤、抓取的错误图像和注释等。
Bug tracking system(错误跟踪系统,BTS),也称为"Defect tracking system,DTS",管理软件测试缺陷的专用数据库系统,可以高效率地完成软件缺陷的报告、验证、修改、查询、统计、存储等任务。尤其适用于大型多语言软件的测试管理。
Build(工作版本),软件开发过程中用于内部测试的功能和性能等不完善的软件版本。工作版本既可以是系统的可操作版本,也可以是展示要在最终产品中提供的部分功能的部分系统。
Compatibility Testing(兼容性测试),也称"Configuration testing(配置测试)",测试软件是否和系统的其它与之交互的元素之间兼容,如:浏览器、操作系统、硬件等。验证测试对象在不同的软件和硬件配置中的运行情况。
Capture/Replay Tool(捕获/回放工具),一种测试工具,能够捕获在测试过程中传递给软件的输入,并且能够在以后的时间中,重复这个执行的过程。这类工具一般在GUI测试中用得较多。
Crash(崩溃),计算机系统或组件突然并完全地丧失功能,例如软件或系统突然退出或没有任何反应(死机)。
Debug(调试),开发人员确定引起错误的根本原因和确定可能的修复措施的过程。一般发生在子系统或单元模块编码完成时,或者根据测试错误报告指出错误以后,开发人员需要执行调试过程来解决已存在的错误。
Deployment(部署),也称为shipment(发布),对内部IT系统而言,指它的第一个版本通过彻底的测试、形成产品、交付给付款客户的阶段。
Dynamic testing(动态测试),通过执行软件的手段来测试软件。
Exception(异常/例外),一个引起正常程序执行挂起的事件。
Functional testing(功能测试),也称为behavioral testing(行为测试),根据产品特征、操作描述和用户方案,测试一个产品的特性和可操作行为以确定它们满足设计需求。本地化软件的功能测试,用于验证应用程序或网站对目标用户能正确工作。使用适当的平台、浏览器和测试脚本,以保证目标用户的体验将足够好,就像应用程序是专门为该市场开发的一样。
Garbage characters(乱码字符),程序界面中显示的无意义的字符,例如,程序对双字节字符集的字符不支持时,这些字符不能正确显示。
GB 18030 testing(GB 18030测试),软件支持GB 18030字符集标准能力的测试,包括GB 18030字符的输入、输出、显示、存储的支持程度。
Installation testing(安装测试),确保该软件在正常情况和异常情况的不同条件下,例如,进行首次安装、升级、完整的或自定义的安装都能进行安装。异常情况包括磁盘空间不足、缺少目录创建权限等。核实软件在安装后可立即正常运行。安装测试包括测试安装代码以及安装手册。安装手册提供如何进行安装,安装代码提供安装一些程序能够运行的基础数据。
Integration testing(集成测试),被测试系统的所有组件都集成在一起,找出被测试系统组件之间关系和接口中的错误。该测试一般在单元测试之后进行。
International testing(国际化测试),国际化测试的目的是测试软件的国际化支持能力,发现软件的国际化的潜在问题,保证软件在世界不同区域中都能正常运行。国际化测试使用每种可能的国际输入类型,针对任何区域性或区域设置检查产品的功能是否正常,软件国际化测试的重点在于执行国际字符串的输入/输出功能。国际化测试数据必须包含东亚语言、德语、复杂脚本字符和英语(可选)的混合字符。
Localizability testing(本地化能力测试),本地化能力是指不需要重新设计或修改代码,将程序的用户界面翻译成任何目标语言的能力。为了降低本地化能力测试的成本,提高测试效率,本地化能力测试通常在软件的伪本地化版本上进行。本地化能力测试中发现的典型错误包括:字符的硬编码(即软件中需要本地化的字符写在了代码内部),对需要本地化的字符长度设置了固定值,在软件运行时以控件位置定位,图标和位图中包含了需要本地化的文本,软件的用户界面与文档术语不一致等。
Load testing(负载测试),通过测试系统在资源超负荷情况下的表现,以发现设计上的错误或验证系统的负载能力。在这种测试中,将使测试对象承担不同的工作量,以评测和评估测试对象在不同工作量条件下的性能行为,以及持续正常运行的能力。负载测试的目标是确定并确保系统在超出最大预期工作量的情况下仍能正常运行。此外,负载测试还要评估性能特征,例如,响应时间、事务处理速率和其他与时间相关的方面。
Localization testing(本地化测试),本地化测试的对象是软件的本地化版本。本地化测试的目的是测试特定目标区域设置的软件本地化质量。本地化测试的环境是在本地化的操作系统上安装本地化的软件。从测试方法上可以分为基本功能测试、安装/卸载测试、当地区域的软硬件兼容性测试。测试的内容主要包括软件本地化后的界面布局和软件翻译的语言质量,包含软件、文档和联机帮助等部分。
Performance testing(性能测试),评价一个产品或组件与性能需求是否符合的测试。包括负载测试、强度测试、数据库容量测试、基准测试等类型。
Pilot testing(引导测试),软件开发中,验证系统在真实硬件和客户基础上处理典型操作的能力。在软件外包测试中,引导测试通常是客户检查软件测试公司测试能力的一种形式,只有通过了客户特定的引导测试,软件测试公司才能接受客户真实软件项目的软件测试。
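The load testing(负载测试)and performance testing(性能测试)entries above come down to measuring timing behavior while an operation is repeated many times. A minimal sketch, where `transaction` is a stand-in placeholder for whatever operation the system under test would perform:

```python
import time

# `transaction` is a placeholder assumption; in a real load test it would
# exercise the system under test (e.g. submit a request).
def transaction() -> None:
    sum(range(1000))  # trivial stand-in work

def run_load(iterations: int) -> dict:
    """Execute the transaction repeatedly and collect simple timing statistics."""
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        transaction()
        timings.append(time.perf_counter() - start)
    return {
        "runs": len(timings),
        "avg_s": sum(timings) / len(timings),
        "max_s": max(timings),
    }

report = run_load(1000)
print(report["runs"], "transactions timed")
```

Real load tests add concurrency, ramp-up, and percentile reporting, but the measure-while-repeating loop is the core idea.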
