AutoTest Introduction


Introduction to Autotest, a Linux Kernel Testing Tool

【Introduction】
Autotest is a framework for fully automated testing. It is designed primarily to test the Linux kernel, though it is useful for many other purposes, such as qualifying new hardware, virtualization testing, and general user-space program testing on Linux platforms. It is an open-source project under the GPL and is used and developed by a number of organizations, including Google, IBM, Red Hat, and many others.

Testing is not just about running tests; testing is about finding and fixing bugs. We have to:
∙ Run the tests
∙ Find a bug
∙ Classify the bug
∙ Hand the bug off to a developer
∙ Developer investigates the bug (cyclical)
∙ Developer tests a proposed fix (cyclical)
∙ Fix checked in
∙ New release issued to the test team
Many test systems are oriented only around the first two (or even one!) of these steps. This is massively inefficient.

【Autotest vs. other harnesses】
∙ ONE harness for performance, stress, multi-machine (distributed) testing, etc.
∙ Consistent results and logging structure
∙ Web and CLI front end for controlling test cases
∙ Web and CLI analysis back end
∙ Shared machine pool and scheduler
∙ EASY to write new tests: low entry barrier
∙ Open source – share tests with vendors
∙ Control files are powerful!
∙ Proven scaling – 6,000+ machines

【Setup】
∙ Clone directly from stonekim/autotest on GitHub. This fork adds a new feature: test cases no longer have to live under the tests directory and can be placed anywhere.

Introduction to Basic Autotest Commands

Copy files to/from 10.132.44.245
To upload a file from the local machine to the Sun server, copy it to the FTP server 10.132.44.245:/agte/
\\10.132.44.245  username: agte  password: nba245
Start/stop/check autotest

Start autotest:
  /usr/auto/bin/auto &   (or simply: auto&)

Stop autotest:
  rastop auto.exe

Check the auto process:
  leozh1@fxcmpbubst1% ps -A | grep auto
    202 ?        0:00 automoun
    203 ?        0:05 automoun
   5858 ?     4186:45 auto.exe
   5742 ?        0:00 auto
  leozh1@fxcmpbubst1%
Download all test records related to a serial number (e.g. foc0000001) from the Cisco database to the current server; this is generally done before running snfind.

snfind

leozh1@fxcmpbubst1% snfind
[fxcmpbubst1] Enter serial number or MAC adr (0=exit,1=email,2=server,3=help,p=print) - foc00000022
SNFIND FOR FOC00000022 ON fxcmpbubst1
DATE     RUNTIME ACCT      LINE ID UUT TYPE   P/F AREA   PARENTSN    PARENTUUT   TAN VID GID MACHINE TEST
--------------------------------------------------------------------------------------------------------
05/19/09 18:56  0:00 autoprog1  -74-6536-01   S  PCBST   FO61212006V -----       _fxcpcbubst3 6
10/13/09 23:14  0:00 autoprog1  -73-12850-01  S  PCBP2   ------      --          _fxc 0
10/17/09 23:05  0:00 prod       -73-11649-03  S  ASSY    ------      --          fxcmpb
10/17/09 23:46  0:00 prod       -73-11649-03  S  ASSY    ------      --          fxcmpb
10/17/09 23:49  0:00 prod       -73-11649-03  S  ASSY    ------      --          fxcmpb
10/17/09 23:50  0:00 prod       -73-11649-03  P  ASSY    FOC00000001 73-10015-07 ---fxcmpbubst1 4
========================================================================================================
[fxcmpbubst1] Enter serial number or MAC adr (0=exit,1=email,2=server,3=help,p=print) - 0
leozh1@fxcmpbubst1%

Query test records on the server; you can trace the test records of shipped UUTs at any time.

Automated Testing Techniques (PPT Slides)

What Can Be Automated?

Software requirements do not change frequently: the stability of the test scripts determines the maintenance cost of automated testing; if requirements change too often, maintenance becomes too expensive.
The project cycle is long enough: automated testing is not suitable for very short projects or for new features and products rushed to release.
The automated test scripts are reusable: if the scripts see little reuse, the effort invested in them is wasted.
Automation Tools

PHPUnit overview: PHPUnit is a lightweight PHP testing framework. It is a complete port of the JUnit 3 series to PHP 5 and a member of the xUnit family of testing frameworks (all based on the design of patterns pioneer Kent Beck). Similar frameworks in other languages include JUnit (Java), NUnit (C#), unittest (Python), and RSpec (Ruby).
/files/selenium-server-standalone-2.22.0.jar
Installing the Selenium server: with JDK 1.6 or 1.7 installed, you can run selenium-server. On the command line, enter: java -jar selenium-server-standalone-2.22.0.jar
Writing Test Cases and the Testing Process

Recording test scripts: use Selenium to record the test operations, convert them into a PHPUnit script, and save it to the corresponding file. Add appropriate Assert statements at the points you want to check.
Running the Selenium Server: in a CMD window, run: java -jar selenium-server-standalone-2.22.0.jar
Sharing on Automated Testing Techniques
宋现锋 (@潜龙0318)

Contents
What is automation?
What can be automated?
Automation tools
Writing test cases and the testing process
Issues to note when writing automated test cases
Automated test cases written so far
What Is Automation?

Automation refers to a process in which machines, equipment, systems, or processes (production or management) achieve an intended goal with little or no direct human participation, through automatic detection, information processing, analysis and judgment, and manipulation and control, according to human requirements. In the testing field, automation means "turning human-driven testing behavior into a process executed by machines."
The seven steps of automated testing: improve the automated testing process, define requirements, prove the concept, support product testability, design for sustainability, deploy according to plan, and face the challenges of success.

Automated Testing and Integration Testing Frameworks in Python

With the rapid development of the software industry, testing has become increasingly important. In software development, testing is one of the key steps for ensuring software quality. There are many types of software testing; the two main types are manual testing and automated testing. Automated testing uses dedicated software tools and scripts to execute test tasks automatically. Python is a popular programming language that provides many tools and frameworks for testing. This paper focuses on the automated testing and integration testing frameworks in Python.

1. Automated Testing
1.1 Introduction
Automated testing is the process of executing test cases automatically. It can significantly improve test efficiency and accuracy, because it can run large numbers of tests without human intervention. It relies on dedicated software tools and scripts, and can be divided into interface testing, functional testing, performance testing, security testing, and so on.

1.2 Automated testing tools and frameworks in Python
Python provides many tools and frameworks for automated testing. Some commonly used ones are listed below:
1) Pytest: Pytest is a mature Python test framework. It provides rich built-in assertion support, making test cases easy to write, and it offers a plugin mechanism and extensive command-line options for extending its functionality.
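To make the pytest description above concrete, here is a minimal sketch (the file name and the add function under test are invented for illustration): pytest discovers files and functions whose names start with test_, and plain assert statements are all that is needed.

```python
# test_math_utils.py -- pytest discovers files and functions named test_*
# `add` is a hypothetical function under test, defined inline for brevity.

def add(a, b):
    """Return the sum of two numbers."""
    return a + b

def test_add_integers():
    # pytest rewrites bare asserts to show both operands on failure
    assert add(2, 3) == 5

def test_add_floats():
    # these values are exactly representable, so == is safe;
    # pytest.approx is the usual tool for inexact floats
    assert add(0.5, 0.25) == 0.75
```

Running `pytest test_math_utils.py -v` would execute both functions and report each result separately.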

2) Unittest: Unittest is Python's built-in test framework. It provides the basic structure for test cases and a set of assertion methods, and it supports data-driven tests and test suites.
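A minimal unittest sketch (the normalize helper and the test class are illustrative, not from the original text), showing the basic TestCase structure, an assertion method, and subTest as a lightweight form of the data-driven testing mentioned above:

```python
import unittest

def normalize(s):
    """Hypothetical helper under test: trim whitespace and lowercase."""
    return s.strip().lower()

class NormalizeTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(normalize("  Hello "), "hello")

    def test_data_driven(self):
        # subTest reports each case independently, so one failing
        # input does not hide the others
        cases = [("A", "a"), (" b ", "b"), ("", "")]
        for raw, expected in cases:
            with self.subTest(raw=raw):
                self.assertEqual(normalize(raw), expected)

if __name__ == "__main__":
    unittest.main(exit=False)
```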

3) Selenium: Selenium is an automation tool mainly used for testing web applications. It can simulate user operations in a browser and test a web application's functionality and performance, and test scripts can be written in Python.
4) Robot Framework: Robot Framework is a popular automation framework that provides a rich set of keywords and supports multiple test types, such as interface testing, web application testing, and database testing. Test scripts can be written in Python.

Auto Test Training Materials
[Fixture configuration table from the original slides: EMP WCDMA 3G auto-test station setup (3G_QC_CM), listing phone models (KG130, KG208, KG270, KG276, KG275, KG288, KG290, KG296, KG370, KM380, KP160, KP100, KP105, KP105a, KP106A, KP106b, KP110, KP110a, KP115a, KP130), supported GSM/WCDMA band combinations (850/900/1800/1900/2100), RF cable assignments (RF500 green, RF600, RF800 blue, plus an ordinary I/O cable), and MMI/COMM port numbers.]

Automated Test Script Writing Standards

I. Introduction
These automated test script writing standards are a set of rules and conventions established to ensure the readability, maintainability, and extensibility of test scripts and to improve testing efficiency and quality. This document describes each aspect of the standards in detail.

II. Naming conventions
1. Script file names: file names should be descriptive, clearly expressing the script's function and purpose. Use combinations of lowercase letters, digits, and underscores for easy identification and maintenance.
2. Function and variable names: names should be descriptive, clearly expressing their use and meaning. Use camelCase, i.e., first letter lowercase and subsequent words capitalized.

III. Script structure
1. Import modules: first import the required modules, such as selenium and unittest.
2. Define the test class: using the unittest framework, define a test class that inherits from unittest.TestCase.
3. Define test methods: define test methods in the test class; each test method should test only one feature point or scenario.
4. Initialization: before each test method, a setUp() method initializes the test environment, e.g., starting the browser and opening the page.
5. Test methods: write the concrete test steps and assertions, making sure each test method can run and verify independently.
6. Cleanup: after each test method, a tearDown() method cleans up the test environment, e.g., closing the browser and clearing caches.
7. Test suite: at the end of the script, build a test suite that organizes all the test methods for batch execution.
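The seven-part structure above can be sketched as follows. This is a hypothetical example: a real UI script would drive a browser via selenium in setUp/tearDown, which is stubbed out here with a plain object so the skeleton stays self-contained.

```python
import unittest  # 1. Import modules

class FakeBrowser:
    """Stand-in for a selenium webdriver, so the skeleton runs anywhere."""
    def open(self, url):
        self.url = url
    def close(self):
        self.url = None

class LoginPageTest(unittest.TestCase):  # 2. Test class inherits TestCase
    def setUp(self):
        # 4. Initialization: start the "browser", open the page
        self.browser = FakeBrowser()
        self.browser.open("http://example.com/login")

    def test_page_url(self):
        # 3/5. One feature point per test method, with assertions
        self.assertEqual(self.browser.url, "http://example.com/login")

    def tearDown(self):
        # 6. Cleanup: close the "browser"
        self.browser.close()

def suite():
    # 7. Test suite: organize all test methods for batch execution
    s = unittest.TestSuite()
    s.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(LoginPageTest))
    return s

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(suite())
```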

IV. Comment conventions
1. File comments: at the top of each script file, add a header comment with the script name, author, version, modification date, and similar information.
2. Function comments: at the start of each function, add a comment describing its purpose, parameters, and return value.
3. Line comments: at the end of a code line, add a comment explaining what that line does.

V. Code style
1. Indentation: indent with four spaces; do not use tabs.
2. Blank lines: add blank lines between functions and classes, between logical blocks inside a function, and between code blocks to improve readability.
3. Line length: no line should exceed 80 characters; wrap longer lines appropriately.
4. Spaces: add spaces around operators and after commas and colons to improve readability.
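Put together, a small file following these naming, comment, and style conventions might look like this (all names are invented for illustration; camelCase is used because these standards call for it, even though PEP 8 prefers snake_case):

```python
"""login_test_utils.py

Author:  QA team (example)
Version: 1.0
Updated: 2024-01-01
Helpers shared by the login test scripts.
"""

MAX_RETRY = 3  # retry limit for flaky network calls


def buildLoginUrl(host, useHttps=True):
    """Build the login page URL.

    Args:
        host: server host name, e.g. "example.com".
        useHttps: whether to use the https scheme.

    Returns:
        The full login URL as a string.
    """
    scheme = "https" if useHttps else "http"  # pick scheme per the flag
    return "%s://%s/login" % (scheme, host)
```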

PPC4 Pressure Controller/Calibrator — Operation and Maintenance Manual

2. Installation
2.1 Unpacking and inspection
2.3.1 Preparation
2.3.2 Front and rear panels
2.3.2.1 Front panel
2.3.2.2 Rear panel
2.3.3 Power connection
2.3.4 Connecting the pressure supply (SUPPLY port)
2.3.5 Connecting a vacuum pump (EXHAUST port)
2.3.6 Connecting an external Q-RPT in an RPM4 reference pressure monitor
2.3.7 Connecting the unit under test (TEST(+) and TEST(-) ports)
2.3.7.1 Installing the self-purging liquid trap (SPLT)
2.3.7.2 Installing the dual volume unit (DVU), G15K and BG15K Q-RPTs
2.3.8 ATM port
2.3.9 Checking/setting the security level
2.3.10 Turning off absolute and negative gauge modes (AXXX PPT)

Automated Testing and Continuous Integration (CI) in Python

Automated testing and continuous integration are crucial parts of software development. Python, a powerful yet easy-to-learn programming language, offers many tools and frameworks that support both. This article covers the basic concepts, tools, and best practices of automated testing and continuous integration in Python.

1. Automated testing
1.1 Basic concepts
In software development, testing is one of the key activities for ensuring software quality. Automated testing replaces manual testing by executing test cases through scripts or programs. It improves test efficiency, reduces human error, and allows test cases to run continuously.

1.2 Test frameworks in Python
Python offers several test frameworks, of which the most commonly used are unittest and pytest. unittest is a test framework in the Python standard library that provides a set of tools for writing and running tests. pytest is a third-party framework that is more flexible and easier to use than unittest.

1.3 Test-driven development (TDD)
Test-driven development is a development methodology that requires writing test cases before writing the functional code. Python's automated testing supports TDD well: by writing test cases and making them pass, we can quickly find and fix problems during development, ensuring code quality and stability.
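A compact TDD round-trip sketch (the is_palindrome function and its tests are invented for illustration): the tests for a not-yet-written function are written first, then the minimal implementation that makes them pass.

```python
import unittest

# Step 1 (red): the tests are written before the implementation exists,
# so they would fail with a NameError if run at this point.
class IsPalindromeTest(unittest.TestCase):
    def test_simple_palindrome(self):
        self.assertTrue(is_palindrome("level"))

    def test_non_palindrome(self):
        self.assertFalse(is_palindrome("python"))

    def test_ignores_case(self):
        self.assertTrue(is_palindrome("Noon"))

# Step 2 (green): the minimal code that satisfies the tests.
def is_palindrome(s):
    s = s.lower()
    return s == s[::-1]

if __name__ == "__main__":
    unittest.main(exit=False)
```

A refactor step would follow, with the tests guarding against regressions.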

2. Continuous integration (CI)
2.1 Basic concepts
Continuous integration is a development practice in which developers frequently integrate code into the mainline, with automated builds and tests verifying the code's reliability. CI helps teams find and solve problems quickly and keeps the code consistent and deployable.

2.2 CI tools for Python
Several continuous integration tools work well with Python, such as Jenkins, Travis CI, and CircleCI. They integrate with version control systems to automatically trigger builds, run tests, and even deploy to production.

2.3 CI best practices
To keep continuous integration running smoothly, follow a few best practices. First, make sure every commit passes the tests. Second, fix failing builds as early as possible and periodically clean up obsolete builds. In addition, promptly repair broken test cases to keep the test suite accurate and stable.


Autotest—Testing the Untestable

John Admanski, Google Inc., jadmanski@
Steve Howard, Google Inc., showard@

Abstract

Increased automated testing has been one of the most popular and beneficial trends in software engineering. Yet low-level systems such as the kernel and hardware have proven extremely difficult to test effectively, and as a result much kernel testing has taken place in a manual and relatively ad-hoc manner. Most existing test frameworks are designed to test higher-level software isolated from the underlying platform, which is assumed to be stable and reliable. Testing the underlying platform itself requires a completely new set of assumptions and these must be reflected in the framework's design from the ground up. The design must incorporate the machine under test as an important component of the system and must anticipate failures at any level within the kernel and hardware. Furthermore, the system must be capable of scaling to hundreds or even thousands of machines under test, enabling the simultaneous testing of many different development kernels each on a variety of hardware platforms. The system must therefore facilitate efficient sharing of machine resources among developers and handle automatic upkeep of the fleet. Finally, the system must achieve end-to-end automation to make it simple for developers to perform basic testing and incorporate their own tests with minimal effort and no knowledge of the framework's internals. At the same time, it must accommodate complex cluster-level tests and diverse, specialized testing environments within the same scheduling, execution and reporting framework.
Autotest is an open-source project that overcomes these challenges to enable large-scale, fully automated testing of low-level systems and detection of rare bugs and subtle performance regressions. Using Autotest at Google, kernel developers get per-checkin testing on a pool of hundreds of machines, and hardware test engineers can qualify thousands of new machines in a short time frame. This paper will cover the above challenges and present some of the solutions successfully employed in Autotest. It will focus on the layered system architecture and how that enables the distribution of not only the test execution environment but the entire test control system, as well as the leveraging of Python to provide simple but infinitely extensible job control and test harnesses, and the automatic system health monitoring and machine repairs used to isolate users from the management of the test bed.

1 Introduction

Autotest is a framework for fully automated testing of low-level systems, including kernels and hardware. It is designed to provide end-to-end automation for functional and performance tests against running kernels or hardware with as little manual setup as possible. This automation allows testing to be performed with less wasted effort, greater frequency, and higher consistency. It also allows tests to be easily pushed upstream to various developers, moving testing earlier into the development cycle. Using Autotest, kernel and hardware engineers can achieve much greater test coverage than such components usually receive. This typical lack of effective low-level systems testing comes with good reason: automated testing of such systems is a difficult task and presents many challenges distinct from userspace software testing. This paper introduces the requirements Autotest aims to meet and some of the unique challenges that arise from these requirements, including robust testing in the face of system instability, scaling to thousands of test machines, and minimizing complexity of test execution and test development. The paper will discuss solutions for each of these challenges that have been employed in Autotest to achieve effective, fully automated low-level systems testing.

2 Background

High-quality automated testing is a necessity for any large, long-lived software project to maintain stability while permitting rapid development. This is as true for the Linux kernel and other system software as it is for user-space software. However, so far the benefits of automated testing have been most successfully realized within user-space applications.

Figure 1: High level operation of a complete Autotest system

Most existing test automation frameworks are targeted at software running on top of the platform provided by the hardware and operating system, the realm in which nearly all software operates. By taking advantage of the assumption that an application is running in a reliable standardized environment provided by the platform, a framework can abstract away and simplify most of the underlying system. When attempting to provide the same services for kernel (and hardware) testing, this assumption is no longer reasonable since the underlying system is an integral component of what is being tested.
This was part of the original motivation for the development of the first versions of Autotest and its predecessor, IBM Autobench [5][4]. Autotest begins with the goal of testing the underlying platform itself, and this goal engenders a unique set of requirements. Firstly, because the platform on which Autotest runs is itself under test, Autotest must be built from the ground up to assume system instability. This requires graceful handling of kernel panics, hardware lockups, network failures, and other unexpected failures. In addition, tasks such as kernel installation and hardware configuration must be simple, commonplace activities in Autotest. Secondly, because the platform under test cannot be easily virtualized, every running test requires a physical machine. Hardware virtualization may be used for basic kernel testing, but as it fails to produce accurate performance results and can mask platform-specific functional issues it is useful only for the most basic kernel functional verification. Autotest is therefore built to run every test on a physical machine, both for kernel and hardware testing. This makes coordination among multiple machines a core necessity in Autotest and furthermore implies that scaling requires distribution of testing among hundreds or even thousands of machines. This additionally creates a need for a system of efficient sharing of test machines between users to maximize utilization over such a large test fleet. Finally, Autotest must fulfill the generic requirements of any testing framework. In particular, Autotest must minimize the overhead imposed on test developers. It must be trivial to incorporate existing tests, easy to write simple new tests, and possible to write complex multi-process or multimachine tests, all within the same basic framework. Furthermore, developing tests should be a simple, familiar process, requiring interaction with only a small subset of the available infrastructure. Tests must therefore be easily executable by hand and simultaneously pluggable into a large-scale scheduling system. These levels of abstraction are broken down into distinct modules discussed in more detail throughout this paper. As illustrated in Figure 1, the lowest layer of the system is the Autotest client, a simple test framework that runs on individual machines. The next layer, Autoserv, is designed to run on centralized test servers to automatically install and execute clients and to coordinate multi-machine tests. The outermost layer consists of a single frontend and job scheduler to allow multiple users to share a single test fleet and results repository. Note that the dependencies go in only one direction, making the design more modular and allowing users to interact with the system on multiple levels. On a large scale users can push a button on a web interface to launch a complete test suite on a large cluster of machines, while on a small scale users can run a single test on a local workstation by executing a shell command.

2.1 Related work

The Linux Test Project "has a goal to deliver test suites to the open source community that validate the reliability, robustness, and stability of Linux" [1]. It is a collection of functional and stress tests for the Linux kernel and related features as well as a client infrastructure for test execution. The client infrastructure eases the execution of many tests (there are over 3,000 tests included), supports running tests in parallel, can generate background stress during test execution, and generates a report of test results at the end of a run. LTP is not, however, intended to be a general-purpose, fully-automated kernel testing framework. There are a number of Autotest goals that are specifically non-goals of LTP [8]. It is essentially a collection of tests and is therefore suitable for inclusion into Autotest as a test, and indeed such inclusion has been easily done. An automation framework called Xentest was developed for testing the Xen virtualization project. David Barrera et al. note that "testing
Linux under Xen and testing Linux itself are very much alike" and perform part of their testing by "running standard test suites under Linux running on top of Xen", including LTP [3]. Since testing Xen is much like testing the underlying hardware itself, the goals of Autotest share much in common with those of Xentest, both from a kernel testing and a hardware testing point of view. Xentest is a collection of scripts with support for building and booting Xen, running tests under it, and gathering results logs together. It does not support any automated analysis of test results to determine pass/fail conditions. Test runs are configurable by a control file using the Python ConfigParser module. This provides simple configuration but lacks any programmatic power within control files. Finally, Xentest is built closely around Xen and does not aim to be a generic framework for kernel or hardware testing. On the other hand, Autotest could be used to perform Xen testing much like Xentest does, and some work has been done on this in the past. Crackerjack is another test automation system, one designed specifically for regression testing [10]. It focuses on finding incompatible API changes between kernel versions. This is valuable testing but is a narrower focus from that of Autotest. Two frameworks that address the problem of distributed kernel testing are PyReT [6] and ANTS [2]. The former depends on a shared file system for all communications while the latter uses a serial console. Both of these requirements on test machines were deemed too restrictive for Autotest, which relies solely on an SSH connection for communications. ANTS is quite robust to test machine failures, as it configures all test machines from scratch using network booting and is capable of using remote power control to reset and recover machines that have become unresponsive. The system additionally includes a machine reservation tool so that machines can be shared between developers and the automated system without conflict. These are all important features that have found their way into Autotest. However, the system is built strictly for nightly testing and does not support a general queue of user-customizable jobs. It includes very limited results analysis in the form of an email report upon completion of the night's tests. It runs a number of open-source tests (including LTP) but does not support more complex, multimachine tests. Finally, the system is proprietary and therefore of little direct utility to the community. For distributed performance testing of the kernel there exist systems presented by Alexander Ufimtsev [9] and Tim Chen [7]. In both systems, test machines operate autonomously, running a client harness which monitors the kernel repository, building and testing new releases as they appear. In this sense, the systems are built around the specific purpose of per-release testing, although the latter system includes support for testing arbitrary patches on any kernel. Both systems' clients transmit results to a central repository, a remote server in the former case and a shared database in the latter. The former system includes some automated analysis for regression detection based on differences from previous averages, a task not yet implemented in Autotest. The latter system includes a web frontend displaying graphs of each benchmark over kernel versions, with support for displaying profiler information, rerunning tests or bisecting to find the patch responsible for a regression.
Autotest includes partial support for these features but could benefit from improvements in this area.

3 Autotest Client

The most basic requirement that Autotest is intended to fulfill is to provide an environment for running tests on a machine in a way that meets the following criteria:

1. The lowest, most bare-metal access must be available.
2. Test results are available in a standard machine-parseable way.
3. Standard tests developed outside of the framework can be easily run within it.

The first of the criteria, low-level system access, seems fairly self-evident when writing tests which are aimed at the kernel and the hardware itself. To test a particular component of a system, the test must be written using tools that have access to the standard API for that component. Since C is the lingua franca of the systems world, a C API can generally be counted on as being available, but even that isn't always the case. When creating a file system during a test, mkfs is going to be the easiest and most readily available mechanism; so as well as being able to easily incorporate custom C, the framework must also make it easy to work with external tools. This initial requirement could have been satisfied by writing the framework itself in C, but that would ultimately have conflicted with the other requirements that Autotest was expected to meet. First, this would've made calling out to external applications ultimately more difficult; while functions like fork, exec, popen and system provide all the basic mechanisms needed to launch an external process and collect results from it, working with them in C requires a relatively large amount of boilerplate compared to a higher-level scripting language such as Perl or Python. This only becomes more true if the output of the executed process needs to be manipulated and/or parsed in any way. The second requirement, that test results be logged in a standard way, almost guarantees that the test will need to do string manipulation, another task simplified by using a scripting language. To meet these somewhat conflicting requirements, the Autotest framework itself was written in Python, with utilities provided to simplify the compilation and execution of C code. Tests themselves are implemented by creating a Python module defining a test subclass, satisfying a standardized, pre-defined interface. Individual tests are packaged up in a directory and can be bundled along with whatever additional resources are needed, such as data files, C code to be compiled and executed, or even pre-compiled binaries if necessary. This also satisfies the third of the three requirements, the ability to run standard tests written independently of Autotest. All that is required is to bundle the components necessary for the test with a simple Python wrapper. The wrapper is responsible for setting up any necessary environment, executing the underlying test, and translating the results from the form produced by the test into Autotest standard logging calls. The wrappers are generally quite simple; the median size of a test wrapper in the current Autotest distribution is only 38 lines. Using Python for implementing tests also provides an easy mechanism for bundling up suites of tests or customizing the execution of specific tests. Tests themselves are executed by writing a "control file" which is simply a Python script executing in a predefined environment. It can be a single line saying "execute this test", a more complex script that executes a whole sequence of tests, or even a script that conditionally executes tests depending on what hardware and kernel are running on the machine. The environment provided by Autotest contains additional utilities that allow control files to put the machine into any state necessary for executing tests, even if it requires installing a kernel and rebooting the machine. Having the full power of Python available allows test runners to perform limitless customization without having to learn a custom job control language. This power does come with one major drawback, though. Due to the dynamic nature of Python and the power available to control files, it is impossible to statically determine much information about a job. For example, it is impossible to know in advance what tests a job will run, and indeed the set of tests run may potentially be nondeterministic. This limitation has not been severe enough to outweigh the benefits of this approach.

3.1 Installation Problems

As this system was put into use at Google, the installation of Autotest onto test machines quickly became a serious performance issue. Allowing test developers to bundle data, source code and even binaries with their tests made it easy to write tests but allowed the installation size to grow dramatically. The situation could be somewhat alleviated by minimizing how often an install was necessary, but in practice this only helps if the test framework can be pre-installed on the systems. The solution to this problem is a fairly standard one: rather than treating Autotest and its test suite as a single, monolithic package, break it up into a set of packages:

• a core package containing the framework itself
• packages for the various utilities and dependencies such as profilers, compilers and any non-standard system utilities that would need to be installed
• packages for the individual tests

Each package is able to declare other packages as dependencies. The core package can be installed everywhere and is fairly lightweight, consisting only of a set of Python source files without any of the more heavyweight data and binaries required by some tests. When executing a job, the framework is then able to dynamically download and install any packages needed to execute a specific test.

4 Autotest Server

4.1 Distributing test runs across machines

The Autotest client provides sufficient infrastructure for running low-level tests but it only executes tests and collects results on a single machine. To test a kernel on multiple hardware configurations, a tester would
need to install the test client on multiple machines, manually run jobs on each of these machines, and examine the results scattered across these systems. This deficiency led to the development of Autoserv, an Autotest Server, a separate layer designed around the client. It allows a user to run a test by executing a server process on a machine other than the test machine. The server process will connect to the remote test machine via SSH, install an Autotest client, run a job on the client, and then pull the results back from the test machine. Localizing these server runs to a single machine allows users to run test jobs on arbitrary sets of machines while collecting all the results into a central location for analysis.

4.2 Recovering failed test systems

Once users start running tests on larger sets of machines, dealing with crashed systems becomes a much more common occurrence. As the number of test machines increases, bad kernels (and random chance) are going to result in more failed systems. When testing on a single machine, manual intervention is the simplest method of dealing with failure, but this does not scale to hundreds or thousands of machines. Automation becomes necessary with two major requirements:

• Automatically detect and report on test machine failures
• Provide a mechanism for repairing broken systems

Handling these requirements entirely within the client running on the test machine is impractical; detecting and reporting a kernel panic or hardware failure will not even be possible when the crash kills the test processes on the machine. Similarly, repair may require re-imaging a machine, which will wipe out the client itself. With job execution controlled from a remote machine, handling these requirements becomes feasible. Autoserv implements support for monitoring serial console output, network console output and general syslog output in /var/log. It can also interact with external services that collect crash dumps and even power cycle the machine if that capability is available. In the very worst case the server process can at least clearly log the failure of the job (and any tests it was running) along with the last known state of the failed test machine. Automated repair can also be performed. This is implemented in Autoserv in an escalating fashion, first by making several attempts to put the machine back into a known good state, then by optionally calling out to any local infrastructure in place to carry out a complete reinstallation of the machine, and finally, if necessary, by escalating the repair process to a human. Testing on large numbers of machines now becomes much more practical when systems broken by bad kernels (or bad tests) can be put back into a working state with a minimum of human intervention.

4.3 Multi-machine tests

Remote control of test execution also introduces the opportunity to run single tests that span multiple machines. While this could be done with the Autotest client alone by running the client on a master test system and having it drive other slave test systems, this would require duplicating most of the "remote control" infrastructure from the server directly into the client. This could also be problematic from a security point of view since, rather than routing control through a single server, the test machines would require much more liberal access to one another. Since Autotest already established the need for a separate server mechanism, it was natural to extend it to support "server-side" testing. Instead of only providing a fixed set of server operations (install client and run job, repair, etc.), Autoserv allows testers to supply a Python control file for execution on the server, just like on the client. This can be used to implement, for example, a network test with the following flow:

• Install Autotest client on two machines
• Launch "network server" job on one machine
• Launch "network client" job on one machine
• Wait for both jobs to complete and collect results

No single-machine networking test can duplicate the same
results,particularly when attempting to quantify networking performance and not just test the stability of the network stack.This also allows for execution of larger-scale cluster testing.Although this begins to creep beyond the scope of systems testing it still has significant value,not as a way to test the cluster applications but rather as a way of testing the impact of kernel and hardware changes on larger-scale applications.A smaller-scale cluster test can follow a workflow similar to that for network test-ing.Alternatively,a server job can make use of pre-existing cluster setup and management tools,simply driving the external services and collecting results af-terwards.4.4Mitigating Network UnreliabilityWhile one of the primary goals of Autoserv is to in-crease reliability,it also introduces new unreliabilities as an unfortunate side effect.The primary issue is that it in-troduces a new point of failure,the connection between the server and the client machines.Working directly with the client,a user can launch a job on a machine and return after expected completion,and any transient network issues will not affect the test result.This is no longer the case when the job is being controlled by a re-mote server that continuously monitors the test machine.The problem can be alleviated somewhat by periodically polling the remote machine rather than continually mon-itoring it,but ultimately this only reduces susceptibility to the problem.Implementing more reliable communications over OpenSSH ultimately proved too difficult,primarily due to the lack of control over and visibility into network failure modes.One alternative considered was to usea completely separate communication mechanism,butthis was rejected as ing SSH provides Autotest with a robust and secure mechanism for com-munication and remote execution,without requiring the large investment of time and labor required to invent a custom protocol that would then need to be installed on every test 
Instead, the solution was to add an alternative SSH implementation that uses a Python package (paramiko) instead of launching an external OpenSSH process. Using an in-process library allowed tighter integration and communication between Autoserv and the SSH implementation, allowing the use of long-lived SSH connections with automatic recovery from network failure. At the same time, modifications were made to the Autotest client to allow it to be run as a detachable daemon so that the automatic connection recovery could re-attach to clients with no impact on the local testing.

Adding paramiko support had the additional benefit of reducing the overhead of executing SSH operations from Autoserv by performing them in-process, as well as simplifying the use of multi-channel SSH sessions to avoid the cost of continually creating and terminating new sessions. Within Autoserv this is implemented in such a way that the paramiko-based implementation can be used as a drop-in replacement for the OpenSSH-based one, allowing testers to make use of whichever is better suited to their needs. OpenSSH works better "out of the box" with most Linux configurations, while paramiko, which requires more setup and configuration, ultimately allows for more reliable, lightweight connections.

5 Scheduler and Frontend

5.1 Shared machine pool

Autoserv provides a convenient and reliable way for individual users to test small numbers of platforms. As a standalone application, however, it cannot possibly fulfill the requirement of scaling to thousands of machines and achieving efficient utilization of a shared machine pool. To address these needs, the Autotest service architecture provides a layer on top of Autoserv that allows Autotest to operate as a shared service rather than a standalone application. Rather than execute the Autotest client or server directly, users interact with a central service instance through a web- or command-line-based interface. The service maintains a shared machine pool and a global queue of test jobs requested by users.
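The shared-service model just described can be reduced to a toy sketch: a pool of machines plus a global FIFO queue of jobs, with the frontend enqueuing work and the scheduler pairing jobs with idle machines. All names here are invented for illustration and do not reflect the real Autotest database schema.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Machine:
    hostname: str
    locked: bool = False

@dataclass
class Job:
    name: str
    owner: str

class SharedPool:
    """Toy model of the central service: a shared machine pool plus a
    global FIFO queue of test jobs submitted by users."""

    def __init__(self, machines):
        self.machines = machines
        self.queue = deque()

    def submit(self, job):
        self.queue.append(job)   # jobs arrive via the web/CLI frontend

    def next_assignment(self):
        """Pair the oldest queued job with any idle machine, if possible."""
        idle = [m for m in self.machines if not m.locked]
        if self.queue and idle:
            machine = idle[0]
            machine.locked = True      # machine is now busy running Autoserv
            return self.queue.popleft(), machine
        return None

pool = SharedPool([Machine("host1"), Machine("host2")])
pool.submit(Job("kernbench", "alice"))
print(pool.next_assignment())
```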
There are three major components that make this usage model possible. The Autotest Frontend is an interface for users to schedule and monitor test jobs and manage the machine pool. The Autotest Scheduler is responsible for executing and monitoring Autoserv to run tests on machines in the pool in response to user requests. Finally, the results analysis interface, not discussed in this paper, provides a common interface to view, aggregate and analyze test results.

The Autotest Frontend is a web application for scheduling tests, monitoring ongoing testing, and managing test machines. It operates on a database which tracks the available tests, the machines in the shared test bed, and the global queue of test jobs that have been scheduled by users. The scheduler interacts with the frontend through this database, executing test jobs that have been scheduled and updating the statuses of jobs and machines based on execution progress.

The frontend supports a number of features to help users organize the machine pool. First, the system supports access control lists to restrict the set of users that can run tests on certain machines. Some machines may be open for general testing, but some users, particularly hardware testers, will have dedicated machines that cannot be used by others. Second, the system supports tagging of machines with arbitrary labels. The most common usage of this feature is to mark the platform of a machine, which is often important for both job scheduling and results analysis. Labels can additionally be used to declare machine capabilities, such as remote power control, or to group together large numbers of machines for easier scheduling.

The scheduler is a daemon running on the server whose primary purpose is to execute and monitor Autoserv processes. The scheduler continuously matches up scheduled test jobs with available machines, launches Autoserv processes to execute these jobs, and monitors these processes to completion. It updates the database with the status of each job throughout execution, allowing the user to track job progress.
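The access-control and label checks described above combine into a simple host-eligibility filter that the scheduler can apply when matching jobs to machines. This is a hedged sketch: the field names are invented for illustration and are not the actual frontend database schema.

```python
def eligible_hosts(job, hosts, acls):
    """Apply both frontend checks: the job's owner must be on the host's
    access control list, and the host must carry every label the job
    requires (a platform label, a capability such as remote power
    control, etc.). All field names are illustrative."""
    return [
        h for h in hosts
        if job["owner"] in acls.get(h["hostname"], [])
        and job["labels"] <= h["labels"]   # set containment: host has all required labels
    ]

hosts = [
    {"hostname": "host1", "labels": {"x86_64", "remote-power"}},
    {"hostname": "host2", "labels": {"x86_64"}},
]
acls = {"host1": ["alice"], "host2": ["alice", "bob"]}
job = {"owner": "alice", "labels": {"x86_64", "remote-power"}}
print([h["hostname"] for h in eligible_hosts(job, hosts, acls)])  # only host1 passes both checks
```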
Upon completion, the scheduler executes a parser to read Autoserv's structured results logs into a database of test results. The user can then perform powerful analysis of these results through a special results analysis interface.

An important feature of the scheduler is its statelessness. While it maintains plenty of in-memory state, all important state can be reconstructed from the database. This is exactly what happens upon scheduler startup, ensuring that when the scheduler needs to restart, all tests will continue running uninterrupted and machine time won't be wasted. This is critical for minimizing user impact during deployments of new Autotest versions or after a scheduler crash.
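The statelessness property can be illustrated with a minimal sketch: on startup the scheduler rebuilds its in-memory view purely from database rows, re-attaching to jobs still marked as running rather than restarting them. The row format here is invented for illustration.

```python
class Scheduler:
    """Minimal sketch of the scheduler's statelessness: all in-memory
    state is rebuilt from database rows at startup, so a restart after
    an upgrade or crash simply re-attaches to running jobs."""

    def __init__(self, db_rows):
        self.active = {}          # job_id -> hostname, rebuilt below
        self._recover(db_rows)

    def _recover(self, db_rows):
        for row in db_rows:
            if row["status"] == "Running":
                # Re-attach to the still-running Autoserv job instead
                # of restarting it and wasting machine time.
                self.active[row["job_id"]] = row["hostname"]

# Simulate a restart: a fresh instance reconstructs its state from the DB
rows = [
    {"job_id": 1, "hostname": "host1", "status": "Running"},
    {"job_id": 2, "hostname": "host2", "status": "Completed"},
]
print(Scheduler(rows).active)
```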
