Best Practices for Software Performance Engineering (2003)

Connie U. Smith, Ph.D.
Performance Engineering Services
PO Box 2640, Santa Fe, New Mexico 87504-2640
(505) 988-3811/

Lloyd G. Williams, Ph.D.
Software Engineering Research
264 Ridgeview Lane, Boulder, Colorado 80302
(303) 938-9847
boulderlgw@

Copyright © 2003, Performance Engineering Services and Software Engineering Research. All rights reserved.

Performance—responsiveness and scalability—is a make-or-break quality for software. Software Performance Engineering (SPE) provides a systematic, quantitative approach to constructing software systems that meet performance objectives. It prescribes ways to build performance into new systems rather than try to fix them later. Many companies successfully apply SPE, and they attest to the financial, quality, customer satisfaction, and other benefits of doing it right the first time. This paper describes 24 best practices for applying SPE to proactively manage the performance of new applications. They are vital for successful, proactive SPE efforts, and they are among the practices of world-class SPE organizations. They will help you to establish new SPE programs and fine-tune existing efforts in line with practices used by the best software development projects.

1.0 INTRODUCTION

Performance—responsiveness and scalability—is a make-or-break quality for software. Software performance engineering (SPE) [Smith and Williams 2002], [Smith 1990] provides a systematic, quantitative approach to constructing software systems that meet performance objectives. With SPE, you detect problems early in development, and use quantitative methods to support cost-benefit analysis of hardware solutions versus software requirements or design solutions, or a combination of software and hardware solutions.

SPE is a software-oriented approach; it focuses on architecture, design, and implementation choices. It uses model predictions to evaluate trade-offs in software functions, hardware size, quality of results, and resource requirements. The models assist developers in controlling resource requirements by enabling them to select architecture and design alternatives with acceptable performance characteristics. The models aid in tracking performance throughout the development process and prevent problems from surfacing late in the life cycle (typically during final testing).

SPE also prescribes principles and performance patterns for creating responsive software, performance antipatterns for recognizing and correcting common problems, the data required for evaluation, procedures for obtaining performance specifications, and guidelines for the types of evaluation to be conducted at each development stage. It incorporates models for representing and predicting performance as well as a set of analysis methods.

This paper presents 24 "best practices" for SPE in four categories: project management, performance modeling, performance measurement, and techniques. A best practice is:

"a process, technique, or innovative use of technology, equipment or resources that has a proven record of success in providing significant improvement in cost, schedule, quality, performance, safety, environment, or other measurable factors which impact an organization." [Javelin 2002]

The best practices presented here are based on:

• observations of companies that are successfully applying SPE,
• interviews and discussions with practitioners in those companies, and
• our own experience in applying SPE techniques on a variety of consulting assignments.

Many of them can be found in the Performance Solutions book [Smith and Williams 2002]. Ten of them were presented in [Smith and Williams 2003a]. This paper builds on the earlier paper and puts them in the four categories.

These best practices represent documented strategies and tactics employed by highly admired companies to manage software performance. They have implemented these practices and refined their use to place themselves and their practitioners among the best in the business for their ability to deliver software that meets performance objectives and is on-time and within budget.

2.0 PROJECT MANAGEMENT BEST PRACTICES

These are practices adopted by managers of software development projects and/or managers of SPE specialists who work with development managers.

2.1 Perform An Early Estimate Of Performance Risk

It is important to understand your level of performance risk. A risk is anything that has the possibility of endangering the success of the project. Risks include: the use of new technologies, the ability of the architecture to accommodate changes or evolution, market factors, schedule, and others.

If failing to meet your performance goals would endanger the success of your project, you have a performance risk. If your project supports a critical business function and/or will be deployed with high visibility (such as a key, widely publicized web application), then failing to meet performance objectives may result in a business failure and you have an extreme performance risk. Inexperienced developers, lack of familiarity with the technology, a cutting-edge application, and an aggressive schedule all increase your risk of performance failure.

To assess the level of performance risk, begin by identifying potential risks. You will find an overview of software risk assessment and control in [Boehm 1991]. Once you have identified potential risks, try to determine their impact. The impact of a risk has two components: its probability of happening, and the severity of the damage that would occur if it did. For example, if a customer were unable to access a Web site within the required time, the damage to the business might be extreme. However, it may also be that the team has implemented several similar systems, so the probability of this happening might be very small. Thus, the impact of this risk might be classified as moderate. If there are multiple performance risks, ranking them according to their anticipated impact will help you address them systematically.

2.2 Match The Level of SPE Effort To The Performance Risk

SPE is a risk-driven process. The level of risk determines the amount of effort that you put into SPE activities. If the level of risk is small, the SPE effort can be correspondingly small. If the risk is high, then a more significant SPE effort is needed. For a low-risk project, the amount of SPE effort required might be about 1% of the total project budget. For high-risk projects, the SPE effort might be as high as 10% of the project budget.

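The risk assessment in Section 2.1 and the effort sizing in Section 2.2 can be sketched with a small calculation: rank candidate risks by expected impact (probability times severity), then map the overall risk level to an SPE budget fraction in the 1% to 10% range mentioned above. The risk names, scoring scales, thresholds, and budget figure below are illustrative assumptions, not values prescribed by SPE.

```python
# Illustrative sketch: rank performance risks and size the SPE effort.
# All names, scales, thresholds, and dollar figures are assumptions.

risks = [
    # (risk name, probability of occurring, severity if it occurs on a 1-5 scale)
    ("New, unfamiliar middleware technology", 0.6, 4),
    ("Aggressive delivery schedule",          0.8, 3),
    ("Widely publicized web front end",       0.5, 5),
    ("Team has built similar systems before", 0.1, 4),
]

# Expected impact = probability x severity; address the highest-ranked risks first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, prob, severity in ranked:
    print(f"{name:40s} expected impact = {prob * severity:.1f}")

# Map the overall risk level to an SPE effort in the 1%-10% range (Section 2.2).
overall = max(prob * severity for _, prob, severity in risks)
spe_fraction = 0.01 if overall < 1.0 else (0.05 if overall < 3.0 else 0.10)
project_budget = 2_000_000  # assumed total project budget, in dollars
print(f"Suggested SPE effort: {spe_fraction:.0%} of budget, "
      f"about ${spe_fraction * project_budget:,.0f}")
```
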
2.3 Track SPE Costs And Benefits

Successful application of SPE is often invisible. If you are successfully managing performance, you do not have performance problems. Because of this, it is necessary to continually justify your SPE efforts. In fact, we have heard managers ask "Why do we have performance engineers if we don't have performance problems?"

It is important to track the costs and benefits of applying SPE so that you can document its financial value and justify continued efforts. The costs of SPE include salaries for performance specialists, tools, and support equipment such as workstations for performance analysts or a dedicated performance testing facility. The benefits are usually costs due to poor performance that you reduce or avoid as a result of applying SPE. These include: costs of refactoring or tuning, contractual penalties, user support costs, and lost revenue, as well as intangible costs such as damaged customer relations.

Once you have this information, it is easy to calculate the return on investment (ROI) [Reifer 2002] for your SPE efforts. The return on investment for SPE is typically more than high enough to justify its continued use (see, for example, [Williams, et al. 2002] and [Williams and Smith 2003b]).

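As a concrete illustration of the cost/benefit tracking above, the sketch below computes a simple ROI, defined as (benefits minus costs) divided by costs. The cost and benefit categories follow the text; the dollar amounts are made-up assumptions, and a real accounting (for example, per [Reifer 2002]) would be more detailed.

```python
# Illustrative ROI sketch for an SPE effort; all dollar figures are assumptions.

spe_costs = {
    "performance specialists (salaries)": 300_000,
    "modeling and measurement tools":      60_000,
    "dedicated performance test facility": 90_000,
}

# Benefits: costs due to poor performance that were reduced or avoided.
avoided_costs = {
    "refactoring and tuning avoided": 500_000,
    "contractual penalties avoided":  200_000,
    "user support and lost revenue":  150_000,
}

total_cost = sum(spe_costs.values())
total_benefit = sum(avoided_costs.values())
roi = (total_benefit - total_cost) / total_cost

print(f"SPE cost:    ${total_cost:,}")
print(f"SPE benefit: ${total_benefit:,}")
print(f"ROI:         {roi:.0%}")
```
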
2.4 Integrate SPE Into Your Software Development Process and Project Schedule

To be effective, SPE should not be an "add-on"; it should be an integral part of the way in which you approach software development. Integrating SPE into the software process avoids two problems that we have seen repeatedly in our consulting practice. One is over-reliance on individuals. When you rely on individuals to perform certain tasks instead of making them part of the process, those tasks are frequently forgotten when those individuals move to a different project or leave the company.

The second reason for making SPE an integral part of your software process is that many projects fall behind schedule during development. Because performance problems are not always apparent, managers or developers may be tempted to omit SPE studies in favor of meeting milestones. If SPE milestones are defined and enforced, it is more difficult to omit them.

2.5 Establish Precise, Quantitative Performance Objectives And Hold Developers and Managers Accountable For Meeting Them

Precise, quantitative performance objectives help you to control performance by explicitly stating the required performance in a format that is rigorous enough so that you can quantitatively determine whether the software meets that objective. Well-defined performance objectives also help you evaluate architectural and design alternatives and trade-offs and select the best way of meeting performance (and other quality) requirements.

It is important to define one or more performance objectives for each performance scenario. Throughout the modeling process, you can compare model results to the objective, to determine if there is significant risk of failing to meet the objectives, and take appropriate action early. And, as soon as you can get measurements from a performance test, you can determine whether or not the software meets the objective.

A well-defined performance objective would be something like: "The end-to-end time for completion of a 'typical' correct ATM withdrawal performance scenario must be less than 1 minute, and a screen result must be presented to the user within 1 second of the user's input." Vague statements such as "The system must be efficient" or "The system shall be fast" are not useful as performance objectives.

For some types of systems you may define different performance objectives, depending on the intensity of requests. For example, the response time objective for a customer service application may be 1 second with 500 users or less, 2 seconds for 500 to 750 users, and 3 seconds for up to 1,000 users.

Unless performance objectives are clearly defined, it is unlikely that they will be met. In fact, establishing specific, quantitative, measurable performance objectives is so central to the SPE process that we have made it one of the performance principles [Smith and Williams 2002]. When a team is accountable for, and rewarded for, achieving their system's performance, they are more likely to manage it effectively. If the team is only accountable for completion time and budget, there is no incentive to spend time or money on performance.

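The customer service example above can be captured as an explicit, checkable objective. The sketch below is one possible encoding, assuming a simple table of (maximum users, required response time) bands and a hypothetical check function; it is not a notation prescribed by SPE.

```python
# A load-dependent response time objective, following the customer service
# example in the text: 1 s for 500 users or less, 2 s for 501-750, 3 s up to 1,000.
OBJECTIVE = [
    (500, 1.0),    # (maximum concurrent users, required response time in seconds)
    (750, 2.0),
    (1000, 3.0),
]

def required_response_time(users: int) -> float:
    """Return the response time objective (seconds) for a given user load."""
    for max_users, limit in OBJECTIVE:
        if users <= max_users:
            return limit
    raise ValueError(f"No objective defined beyond {OBJECTIVE[-1][0]} users")

def meets_objective(users: int, measured_seconds: float) -> bool:
    """Compare a model prediction or performance test result to the objective."""
    return measured_seconds <= required_response_time(users)

# Example: a prediction of 1.4 s at 600 users meets the 2 s objective.
print(meets_objective(600, 1.4))   # True
print(meets_objective(450, 1.3))   # False: the objective at 450 users is 1 s
```

The same table can be compared against model predictions early in development and against measured results once performance tests exist.
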
2.6 Identify Critical Use Cases And Focus On The Scenarios That Are Important To Performance

Use cases describe categories of behavior of a system or one of its subsystems. They capture the user's view of what the system is supposed to do. Critical use cases are those that are important to responsiveness as seen by users, or those for which there is a performance risk. That is, critical use cases are those for which the system will fail, or be less than successful, if performance goals are not met.

Not every use case will be critical to performance. The 80-20 rule applies here: a small subset of the use cases (≤20%) accounts for most of the uses (≥80%) of the system. The performance of the system is dominated by these heavily used functions. Thus, these should be your first concern when assessing performance.

Don't overlook important functions that are used infrequently but must perform adequately when they are needed. An example of an infrequently used function whose performance is important is recovery after some failure or outage. While this may not occur often, it may be critical that it be done quickly.

Each use case is described by a set of scenarios that describe the sequence of actions required to execute the use case. Not all of these scenarios will be important from a performance perspective. For example, variants are unlikely to be executed frequently and, thus, will not contribute significantly to overall performance.

For each critical use case, focus on the scenarios that are executed frequently, and on those that are critical to the user's perception of performance. For some systems, it may also be important to include scenarios that are not executed frequently, but whose performance is critical when they are executed, such as recovery from an outage.

Select the scenarios, get consensus that they are the most important, then focus on their design and implementation to expedite processing and thus optimize their responsiveness. People are more likely to have confidence in the model results if they agree that the scenarios and workloads used to obtain the results are representative of those that are actually likely to occur. Otherwise, it is easy to rationalize that any poor performance predicted by the models is unlikely, because the performance scenarios chosen will not be the dominant workload functions. The scenarios also drive the measurement studies by specifying the conditions that should be performance tested.

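One way to apply the 80-20 observation above is to rank use cases by expected frequency, keep the smallest set that covers most of the workload, and always retain the ones flagged as critical regardless of frequency (such as recovery). The sketch below illustrates this; the use case names, frequencies, and the 80% cutoff are invented for the example.

```python
# Illustrative selection of performance-critical use cases. Names and
# frequencies are assumptions; "critical" marks low-frequency but vital cases.

use_cases = [
    # (name, relative frequency, critical even if infrequent?)
    ("Browse catalog",        0.45, False),
    ("Search",                0.30, False),
    ("Place order",           0.15, False),
    ("Account maintenance",   0.07, False),
    ("Recovery after outage", 0.01, True),
    ("Bulk data export",      0.02, False),
]

# Keep heavily used cases until roughly 80% of the workload is covered ...
selected, covered = [], 0.0
for name, freq, critical in sorted(use_cases, key=lambda u: u[1], reverse=True):
    if covered < 0.80:
        selected.append(name)
        covered += freq

# ... and always keep infrequent-but-critical cases such as recovery.
selected += [name for name, freq, critical in use_cases
             if critical and name not in selected]

print(selected)   # ['Browse catalog', 'Search', 'Place order', 'Recovery after outage']
```
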
2.7 Perform an Architecture Assessment to Ensure That the Software Architecture Will Support Performance Objectives

Recent interest in software architectures has underscored the importance of architecture in determining software quality. While decisions made at every phase of the development process are important, architectural decisions have the greatest impact on quality attributes such as modifiability, reusability, reliability, and performance. As Clements and Northrop note [Clements and Northrop 1996]:

"Whether or not a system will be able to exhibit its desired (or required) quality attributes is largely determined by the time the architecture is chosen."

While a good architecture cannot guarantee attainment of performance objectives, a poor architecture can prevent their achievement.

Architectural decisions are among the earliest made in a software development project. They are also the most costly to fix if, when the software is completed, the architecture is found to be inappropriate for meeting quality objectives. Thus, it is important to be able to assess the impact of architectural decisions on quality objectives such as performance and reliability at the time that they are made.

Performance cannot be retrofitted into an architecture without significant rework; it must be designed into software from the beginning. Thus, if performance is important, it is vital to spend the up-front time necessary to ensure that the architecture will not hinder attainment of performance requirements. The "make it run, make it run right, make it run fast" approach is dangerous. Our experience is that performance problems are most often due to inappropriate architectural choices rather than inefficient coding. By the time the architecture is fixed, it may be too late to achieve adequate performance by tuning.

The method that we use for assessing the performance of software architectures is known as PASA℠ [Williams and Smith 2002]. It was developed from our experience in conducting performance assessments of software architectures in a variety of application domains including web-based systems, financial applications, and real-time systems. PASA uses the principles and techniques of software performance engineering (SPE) to determine whether an architecture is capable of supporting its performance objectives. The method may be applied to new development to uncover potential problems when they are easier and less expensive to fix. It may also be used when upgrading legacy systems to decide whether to continue to commit resources to the current architecture or migrate to a new one.

2.8 Secure The Commitment To SPE At All Levels Of The Organization

The successful adoption of SPE requires commitment at all levels of the organization. This is typically not a problem with developers. Developers are usually anxious to do whatever is needed to improve the quality of their software.

If there is a problem with commitment, it usually comes from middle managers who are constantly faced with satisfying many conflicting goals. They must continually weigh schedule and cost against quality of service benefits. Without a strong commitment from middle managers, these other concerns are likely to force SPE aside. Commitment from upper management is necessary to help middle managers resolve these conflicting goals.

2.9 Establish an SPE Center of Excellence to Work with Performance Engineers on Project Teams

It is important that you designate one or more individuals to be responsible for performance engineering. You are unlikely to be successful without a performance engineer (or a performance manager) who is responsible for:

• Tracking and communication of performance issues
• Establishing a process for identifying and responding to situations that jeopardize the attainment of the performance objectives
• Assisting team members with SPE tasks
• Formulating a risk management plan based on shortfall and activity costs
• Ensuring that SPE tasks are properly performed

The responsible person should be high enough in the organization to cause changes when they are necessary. The performance engineering manager should report either to the project manager or to that person's manager.

The person responsible for performance engineering should be in the development organization rather than the operations organization. You will have problems if responsibility for SPE is in the operations organization because developers will likely put priority on meeting schedules over making changes to reduce operational costs.

Making SPE a function of the capacity planning group is also a mistake in most organizations, even though that group usually already employs individuals with performance modeling expertise. While some capacity planners have the performance engineering skills, most are mathematical experts who are too far removed from the software issues to be effective.

With the "SPE Center of Excellence" approach, members of the development team are trained in the basic SPE techniques. In the early phases of a project, the developers can apply these techniques to construct simple models that support architectural and design decisions. This allows developers to get feedback on the performance characteristics of their architecture and design in a timely fashion. Later, as the models become more complex, someone from the SPE Center can take them over to conduct more detailed studies that require more technical expertise.

The SPE Center develops tools, builds performance expertise, and assists developers with modeling problems. A member of this group may also review the team's models to confirm that nothing important has been overlooked. The central group can also develop reusable models or reference models, as well as provide data on the overhead for the organization's hardware/software platforms. Finally, the performance group can provide assistance in conducting measurements.

2.10 Ensure that Developers and Performance Specialists Have SPE Education, Training, and Tools

SPE consists of a comprehensive set of methods. Education and experience in these methods improves the architectures and designs created by developers. It helps performance specialists interface with developers, and shortens the time necessary for SPE studies. Performance tuning experience is helpful for SPE, but it is not the same as proactive performance engineering. To be proficient you need additional education and training.

Tools are essential for SPE. Modeling tools expedite SPE studies and limit the mathematical background required for performance analysts to construct and solve the models. Measurement tools are vital for obtaining resource consumption data, evaluating performance against objectives, and verifying and validating results. However, simply acquiring a set of tools will not guarantee success. You must also have the expertise to know when and how to use them. It is also important to know when the result reported by a tool is unreasonable, so that problems with models or measurements can be detected and corrected.

The project team must have confidence in both the predictive capabilities of the models and the analyst's skill in using them. Without this confidence, it is easier to attribute performance problems predicted by the models to modeling errors, rather than to actual problems with the software. If the developers understand the models and how they were created, they are more likely to have confidence in them.

2.11 Require Contractors To Use SPE On Your Products

You should require your contractors (e.g., external developers, suppliers, etc.) to use SPE in developing your products to avoid unpleasant surprises when the products are delivered.

It is also important to specify deliverables that will allow you to assess whether SPE is being properly applied. These deliverables fall into four broad categories:

• Plans: These artifacts are targeted primarily at project management. They include technical plans for each development phase, as well as configuration management plans, policies, and procedures governing the production and maintenance of other SPE artifacts.
• Performance objectives: These artifacts include specifications for key performance scenarios, along with quantitative, measurable criteria for evaluating the performance of the system under development. They also include specifications for the execution environment(s) to be evaluated.
• Performance models and results: This category includes the performance models for key scenarios and operating environments, along with the model solutions for comparison to performance objectives.
• Performance validation, verification, and measurement reports (V&V): This category includes documentation and measurement results that demonstrate that the models are truly representative of the software's performance, and that the software will meet performance requirements.

3.0 PERFORMANCE MODELING BEST PRACTICES

These are best practices used by performance engineers who model the software architecture and design.

3.1 Use Performance Models To Evaluate Architecture And Design Alternatives Before Committing to Code

Today's software systems have stringent requirements for performance, availability, security, and other quality attributes. In most cases, there are trade-offs that must be made among these properties. For example, performance and security often conflict with one another.

It's unlikely that these trade-offs will sort themselves out, and ignoring them early in the development process is a recipe for disaster. The "make it run, make it run right, make it run fast" approach is dangerous.

While it is possible to refactor code after it has been written to improve performance, refactoring is not free. It takes time and consumes resources. The more complex the refactoring, the more time and resources it requires. When performance problems arise, they are most often at the architecture or design level. Thus, refactoring to solve performance problems is likely to involve multiple components and their interfaces. The result is that later refactoring efforts are likely to be large and very complex.

One company we worked with used a modeling study to estimate that refactoring their architecture would save approximately $2 million in hardware capacity. However, because the changes to the architecture were so extensive, they decided that it would be more economical to purchase the additional hardware. Another company used historical data to determine that its cost for refactoring to improve performance was approximately $850,000 annually [Williams, et al. 2002].

Simple performance models can provide the information needed to identify performance problems and evaluate architecture and design alternatives for correcting them. These models are inexpensive to construct and evaluate. They eliminate the need to implement the software and measure it before understanding its performance characteristics. And, they provide a quantitative basis for making trade-offs among quality attributes such as reliability, security, and performance.

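The sketch below is a minimal example of the kind of simple, inexpensive model referred to above: an open queueing model in which each device is approximated as an M/M/1 queue, so the residence time at a device with per-transaction demand D is D / (1 - U), where utilization U equals arrival rate times D. The scenario, device names, and demand values are assumptions for illustration; SPE's actual software and system execution models [Smith and Williams 2002] carry considerably more structure.

```python
# Minimal open queueing model sketch: each device treated as an M/M/1 queue.
# Demands (seconds per transaction) and the two designs are illustrative assumptions.

def response_time(arrival_rate: float, demands: dict[str, float]) -> float:
    """Predicted response time (seconds) for one transaction of the scenario."""
    total = 0.0
    for device, demand in demands.items():
        utilization = arrival_rate * demand          # U = lambda * D
        if utilization >= 1.0:
            raise ValueError(f"{device} saturates (utilization {utilization:.0%})")
        total += demand / (1.0 - utilization)        # M/M/1 residence time
    return total

# Two design alternatives for the same scenario, differing in CPU and disk demand.
design_a = {"web CPU": 0.010, "app CPU": 0.030, "database disk": 0.040}
design_b = {"web CPU": 0.010, "app CPU": 0.020, "database disk": 0.015}  # caches reads

for rate in (5, 10, 20):   # transactions per second
    print(f"{rate:>3} tps:  A = {response_time(rate, design_a):.3f} s   "
          f"B = {response_time(rate, design_b):.3f} s")
```

Even a model this simple shows how the two alternatives diverge as the arrival rate grows, which is often enough to reject a design before any code is written.
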
3.2 Start With The Simplest Model That Identifies Problems With The System Architecture, Design, Or Implementation Plans, Then Add Details As Your Knowledge Of The Software Increases

The early SPE models are easily constructed and solved to provide feedback on whether the proposed software is likely to meet performance goals. These simple models are sufficient to identify problems in the architecture or early design phases of the project. You can easily use them to evaluate many alternatives because they are easy to construct and evaluate. Later, as more details of the software are known, you can construct and solve more realistic (and complex) models.

Later in the development process, as the design and implementation proceed and more details are known, you expand the SPE models to include additional information in areas that are critical to performance.

3.3 Use Best- And Worst-Case Estimates Of Resource Requirements To Establish Bounds On Expected Performance And Manage Uncertainty In Estimates

SPE models rely upon estimates of resource requirements for the software execution. The precision of the model results depends on the quality of these estimates. Early in the software process, however, your knowledge of the details of the software is sketchy, and it is difficult to precisely estimate resource requirements. Because of this, SPE uses adaptive strategies, such as the best- and worst-case strategy.

For example, when there is high uncertainty about resource requirements, you use estimates of the upper and lower bounds of these quantities. Using these estimates, you produce predictions of the best-case and worst-case performance. If the predicted best-case performance is unsatisfactory, you look for feasible alternatives. If the worst-case prediction is satisfactory, you proceed to the next step of the development process with confidence. If the results are somewhere in between, the model analyses identify critical components whose resource estimates have the greatest effect, and you can focus on obtaining more precise data for them.

Best- and worst-case analysis identifies when performance is sensitive to the resource requirements of a few components, identifies those components, and permits assessment of the severity of problems as well as the likelihood that they will occur. When performance goals can never be met, best- and worst-case results also focus attention on potential design problems and solutions rather than on model assumptions. If you make all the best-case assumptions and the predicted performance is still not acceptable, it is hard to fault the assumptions.

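The strategy above can be illustrated with a small calculation: carry a lower and an upper bound on each component's resource demand, sum them to bound the scenario's predicted time, and compare both bounds to the objective. The step names, bounds, and the 2-second objective are assumptions, and for clarity this sketch ignores contention (it simply sums demands); an SPE study would solve the full model for both cases.

```python
# Best- and worst-case sketch: bound each processing step's demand, then compare
# the bounded prediction to the objective. All values are illustrative assumptions.

OBJECTIVE = 2.0   # seconds, assumed response time objective for the scenario

steps = {
    # step name: (best-case estimate, worst-case estimate), in seconds
    "parse request":   (0.05, 0.10),
    "authorization":   (0.10, 0.30),
    "business logic":  (0.20, 0.60),
    "database access": (0.30, 1.50),
    "format response": (0.05, 0.10),
}

best = sum(lo for lo, hi in steps.values())
worst = sum(hi for lo, hi in steps.values())
print(f"predicted range: {best:.2f} s to {worst:.2f} s (objective {OBJECTIVE} s)")

if best > OBJECTIVE:
    print("Even the best case misses the objective: look for design alternatives.")
elif worst <= OBJECTIVE:
    print("Even the worst case meets the objective: proceed with confidence.")
else:
    # In between: find the steps whose uncertainty matters most and refine them.
    spread = sorted(steps, key=lambda s: steps[s][1] - steps[s][0], reverse=True)
    print("Refine estimates for:", spread[:2])
```
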
3.4 Establish A Configuration Management Plan For Creating Baseline Performance Models and Keeping Them Synchronized With Changes To The Software

Many of the SPE artifacts evolve with the software. For example, performance scenarios and the models that represent them will be augmented as the design evolves. Managing changes to these SPE artifacts is similar to the configuration management used to manage changes to designs or code. Configuration management also makes it possible to ensure that a particular version of a performance model is accurately matched to the version of the design that it represents. While it isn't essential for many systems to have a formal configuration management plan, safety-critical systems and others require both the plan and the control of SPE artifacts.

Baselines for scenarios and models should be established following their initial validation and verification. Once an artifact has been baselined, it may only be changed using the established change control procedure.

The configuration management plan should specify how to identify an artifact (e.g., CustomerOrder software model v1.2), the criteria for establishing a baseline for an artifact, and the procedure to be used when making a change.
