AIX Performance Tuning


Case Study: Diagnosing and Resolving an AIX Performance Degradation

Case summary (customer name, date, problem keywords)
[Date handled] August 1, 2016
[Customer] Hua Xia credit card center
[Host information] Host: IBM 8205-E6D, four LPARs. Operating system: AIX 7100-03-05. Database: (not specified; the template calls for a detailed hardware and database-version description here.)
[Business system] (business system name; version information if available)
[Keywords] LPAR, CPU folding, performance optimization (three keywords)

[Handled by] System integration: Liu Dangqi

[Problem description] Symptom: on AIX partitions that use shared processors, Java-based applications may be delayed when system load is low, and transaction times become longer.

[Analysis] The main cause is that, when the partition's load is low, the AIX CPU-folding feature keeps only one virtual CPU unfolded, so all threads are dispatched onto the first hardware thread of that CPU.

[Solution] Disable the operating system's CPU-folding feature through the HMC/ASMI settings (a command-line sketch appears at the end of this case).

**Impact of disabling CPU folding:** it switches off the kernel's automatic scheduling optimization for the micro-partitioned environment; all VPs are dispatched to the hypervisor regardless of whether they carry real load; hypervisor latency increases, and physical-resource affinity may also suffer.

**Benefits of disabling CPU folding:** when a partition is sized very well, for example the EC:VP ratio is always kept at no less than 1:2 and the shared processor pool has never been constrained, disabling folding can bring a measurable performance gain, mainly by removing VPM management overhead and avoiding the delay of unfolding CPUs. Follow-up monitoring showed a clear performance improvement.

**About processor folding:** Virtual Processor Management, also known as processor folding (CPU folding), is a Power virtualization feature that controls the number of virtual processors (VPs) an LPAR uses.

With current AIX defaults, processor folding is enabled for micro-partitions (shared-processor partitions) and disabled for dedicated-processor LPARs.

Processor folding serves two main purposes: 1) energy saving: if all the VPs that map to a physical core are folded, the PowerVM hypervisor can put that core into a low-power state.
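The case above disables folding from the HMC/ASMI. On AIX levels that expose the virtual-processor-management tunables, folding can also be inspected and switched off from inside the operating system with schedo; a minimal sketch, assuming an AIX 6.1/7.1 level where the vpm_xvcpus tunable exists:

```
# Show the current folding-related tunable and its allowed range
schedo -L vpm_xvcpus

# Disable processor folding entirely (-1 = folding off);
# -p also records the change for subsequent reboots
schedo -p -o vpm_xvcpus=-1

# Restore the default behaviour (0 = folding on, no extra VPs kept unfolded)
schedo -p -o vpm_xvcpus=0
```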

AIX 5L Memory Performance Tuning: Monitoring Memory Usage with ps, sar, svmon, and vmstat

Monitoring the memory usage of an AIX system with these commands, and then tuning memory on the basis of what you observe, is basic work for any system administrator. Yet the most important part of tuning the memory subsystem does not involve actual tuning at all. Before you tune a system you must understand how the host really behaves, which means the AIX administrator must know which tools to use and how to analyze the data they capture.

To repeat a point made in other recent tuning articles: before you can tune a system correctly you must first monitor the host, whether it runs in a logical partition (LPAR) or on its own physical server. Many commands can capture and analyze data, so you need to know them and know which one best fits the task at hand. Once the relevant data has been captured, it has to be analyzed. Some problems look at first like central processing unit (CPU) problems but, after analysis, turn out to be memory or I/O problems, provided you captured the data with the right tools and know how to interpret it. Only when this work has been done correctly should you consider making actual changes to the system.

Just as a doctor cannot treat an illness without knowing your history and current symptoms, you need to diagnose a subsystem before tuning it. Tuning the memory subsystem while the real bottleneck is CPU or I/O is useless and may even harm the host. This article will help you understand why doing the diagnostic work correctly matters; performance tuning is about much more than the tuning itself. Among the tools covered, some are generic monitoring tools available in every version of UNIX, while others were written specifically for AIX; some have been optimized for AIX Version 5.3, and a few new tools were developed specifically for AIX 5.3. Finally, the importance of establishing baseline data cannot be overstated.
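A minimal set of invocations for the four tools named in the title, as one might use them to collect a baseline; the intervals, counts, and sort column are illustrative choices, not values taken from the article:

```
# Virtual-memory and paging activity: 5 samples, 2 seconds apart
vmstat 2 5

# Global snapshot of real memory, paging space, and pinned pages
svmon -G

# Per-process memory usage: header line, then the ten largest RSS consumers
# (RSS is column 7 of "ps gv" output)
ps gv | head -1
ps gv | sort -rn -k 7 | head -10

# Paging statistics over time with AIX sar
sar -r 2 5
```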

Tuxedo Performance Tuning Notes

Tuxedo 9.0 for AIX with Oracle 10 XA connectivity. Posted by chinakkee, 2006-11-13 09:54.

System description: Tuxedo 9.0, installed in /opt/bea/tuxedo9.0; Oracle 10.2.0.1, installed in /u01/app/oracle.

Part 1: Installing Tuxedo 9 for AIX

1. Create a user named tuxedo with group bea.
2. Create /opt/bea as the Tuxedo installation directory and run the console installer:

$ mkdir /opt/bea
$ chown tuxedo.bea /opt/bea
$ chmod 770 /opt/bea
# bootinfo -K
64
$ sh tuxedo9_aix53_64.bin -i console
Preparing to install...
WARNING: /tmp does not have enough disk space!
Attempting to use /home/tuxedo for install base and tmp dir.
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Preparing CONSOLE Mode Installation...
===========================================================
Choose Locale...
----------------
->1- English
CHOOSE LOCALE BY NUMBER: 1
===========================================================
(created with InstallAnywhere by Zero G)
-------------------------------------------------------------------------------
===========================================================
Introduction
------------
BEA End User Clickwrap 001205
Copyright (c) BEA Systems, Inc.
All Rights Reserved.
DO YOU ACCEPT THE TERMS OF THIS LICENSE AGREEMENT? (Y/N): y
===========================================================
Choose Install Set
------------------
Please choose the Install Set to be installed by this installer.
->1- Full Install
  2- Server Install
  3- Full Client Install
  4- Jolt Client Install
  5- ATMI Client Install
  6- CORBA Client Install
  7- Customize...
ENTER THE NUMBER FOR THE INSTALL SET, OR PRESS <ENTER> TO ACCEPT THE DEFAULT: 1
===========================================================
Choose BEA Home
---------------
1- Create new BEA Home
2- Use existing BEA Home
Enter a number: 2
1- /opt/bea
Existing BEA Home directory: 1
===========================================================
Choose Product Directory
------------------------
1- Modify Current Selection (/opt/bea/tuxedo9.0)
2- Use Current Selection (/opt/bea/tuxedo9.0)
Enter a number: 2
===========================================================
Pre-Installation Summary
------------------------
Please Review the Following Before Continuing:
Product Name:   Tuxedo 9.0
Install Folder: /opt/bea/tuxedo9.0
Link Folder:    /home/tuxedo
Disk Space Information (for Installation Target):
Required:  386,803,702 bytes
Available: 2,625,392,640 bytes
PRESS <ENTER> TO CONTINUE:
===========================================================
Ready To Install
----------------
InstallAnywhere is now ready to install Tuxedo 9.0 onto your system at the
following location: /opt/bea/tuxedo9.0
PRESS <ENTER> TO INSTALL:
===========================================================
Installing...
-------------
[==================|==================|==================|==================]
[------------------|------------------|------------------|------------------]
===========================================================
Configure tlisten Service
-------------------------
Password: tuxedo
Verify Password: tuxedo
Password Accepted! Press "Enter" to continue.
===========================================================
SSL Installation Choice.
------------------------
Would you like to install SSL Support?
->1- Yes
  2- No
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT: 2
===========================================================
License Installation Choice
---------------------------
Would you like to install your license now?
->1- Yes
  2- No
ENTER THE NUMBER FOR YOUR CHOICE, OR PRESS <ENTER> TO ACCEPT THE DEFAULT: 2
===========================================================
Installation Complete
---------------------
Congratulations. Tuxedo 9.0 has been successfully installed to:
/opt/bea/tuxedo9.0
PRESS <ENTER> TO EXIT THE INSTALLER:

After installation, rename the license file to lic.txt and copy it to $TUXDIR/udataobj/.

Part 2: Configuring Tuxedo 9 to connect to Oracle 10g

Prerequisites: the Oracle 10g client and a C compiler are installed on the Tuxedo host (it does not have to be Visual Age C/C++), and the tuxedo user can connect to the Oracle database through sqlplus.

1. Oracle-side configuration:

sqlplus system@testcrm
SQL> @$ORACLE_HOME/rdbms/admin/xaview.sql
SQL> grant select on v$xatrans$ to public with grant option;
SQL> grant select on v$pending_xatrans$ to public with grant option;
SQL> grant select on EMP to scott;
SQL> grant select on DBA_PENDING_TRANSACTIONS to scott;

Note: the scott account is locked by default; unlock it with "alter user scott account unlock;".

Selected AIX Configuration Parameters

1. Remote clients can log in with login and ftp but cannot log in with telnet:
   1) Use ps -ef to check whether the telnetd daemon is running.
   2) Check whether the telnet port in /etc/services is 23; if it is not, change it to 23 and then run refresh -s inetd.

2. Setting up a Chinese environment on AIX. There are two ways to use Chinese on AIX. The first is to select Chinese as the language while installing AIX, so that the installed system displays Chinese automatically (this method is not recommended, because it is less flexible than the second one).

The second is to install AIX in English and configure the Chinese environment manually after the system is up:
1. Put the first CD of the AIX installation media into the drive.
2. Run: smitty --> System Environments --> Manage Language Environment --> Change/Show Primary Language Environment --> Change/Show Cultural Convention, Language, or Keyboard. In the menu that follows, move the cursor in turn to the fields Primary CULTURAL Convention, Primary LANGUAGE Translation, and Primary KEYBOARD, press <F4>, and choose "IBM-eucCN" from the pop-up list to set each field to Simplified Chinese. After you press Enter, the system installs the Chinese-environment filesets from the CD automatically.

When this completes, reboot the system; the user interface will then be in Simplified Chinese.

To type Chinese, switch input methods with the following keys.
Versions before AIX 4.3.3: <Shift>+F1 through <Shift>+F4 switch to the various Chinese input methods; the right <Alt> key switches back to English input.
AIX 4.3.3: Ctrl+F2 intelligent ABC; Ctrl+F4 Pinyin; Ctrl+F5 Wubi; Ctrl+F6 Zhengma; Ctrl+F7 Biaoxingma; Ctrl+F9 internal code; Ctrl+F10 English half-width.
In addition, AIX includes two other Chinese locales, UTF-8 and GBK; they differ from IBM-eucCN in that they also cover traditional Chinese characters.
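To verify the result from the command line (a small sketch; the fileset name pattern is an assumption about how the Chinese locale filesets are named):

```
# Locale currently in effect for this session
locale

# System-wide default language, normally recorded in /etc/environment
grep LANG /etc/environment

# Chinese locale filesets that are installed (eucCN "zh_CN", GBK "Zh_CN")
lslpp -l | grep -i zh_CN
```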

AIX System Parameter Configuration

The AIX kernel is a dynamic kernel and most kernel parameters adjust themselves automatically, so after the system is installed the parameters normally worth changing are the following.

Part 1: Standalone environment

1. Maximum number of licensed user logins (maxlogins). Size it according to the number of users and change it with smitty chlicense. The parameter is recorded in /etc/security/login.cfg, and the change takes effect after the system is rebooted.

2. User resource limits. These parameters live in /etc/security/limits and can be set to -1, i.e. unlimited. Edit /etc/security/limits with vi; all changes take effect the next time the user logs in. A typical default stanza, with the suggested changes:

default:
        fsize = 2097151     (change to -1)
        core = 2097151
        cpu = -1
        data = 262144       (change to -1)
        rss = 65536
        stack = 65536
        nofiles = 2000

3. Paging space. Check the size of the paging space. With less than 2 GB of physical memory it should be at least 1.5 times the physical memory; with more than 2 GB it can be adjusted as appropriate. When creating paging spaces, spread them across different disks to improve performance. Use smitty chps to enlarge an existing paging space, or smitty mkps to add another one.

4. Kernel parameters. Use lsattr -El sys0 to check parameters such as maxuproc, minpout, and maxpout. maxuproc is the maximum number of processes per user; if the system runs DB2 or Oracle, raise it from the default of 128 to 500. An increase of maxuproc takes effect immediately, while a decrease requires an AIX reboot. When an application performs heavy sequential reads and writes that hurt the response time of foreground programs, consider setting maxpout to 33 and minpout to 16, using smitty chgsys.
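The same changes can be made non-interactively; a short sketch using the values suggested above (on AIX 5.3 and later, maxpout and minpout are attributes of sys0):

```
# Check the current values
lsattr -El sys0 -a maxuproc -a maxpout -a minpout

# Raise the per-user process limit (takes effect immediately)
chdev -l sys0 -a maxuproc=500

# Enable I/O pacing with the high/low water marks suggested above
chdev -l sys0 -a maxpout=33 -a minpout=16
```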

5. File system space. In general, keep the usage of the /, /usr, /var, and /tmp file systems below 80%, and give /tmp at least 300 MB. A full file system can stop the system from working properly; in particular, if one of the base AIX file systems such as / (the root file system) fills up, users may not be able to log in.
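To check usage and grow a file system from the command line (a minimal sketch; the size increment is illustrative and assumes a JFS2 file system with free space left in its volume group):

```
# Show file-system usage in GB units
df -g / /usr /var /tmp

# Grow /tmp by 512 MB
chfs -a size=+512M /tmp
```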

AIX 5.3 Host Performance Assessment: Memory

The lines below are an excerpt of vmo tunable settings and vmstat -v counters.
lrubucket = 131072
maxclient% = 80
maxfree = 1088
maxperm = 4587812
maxperm% = 80
nokilluid = 0
npskill = 49152
npsrpgmax = 393216
npsrpgmin = 294912
npsscrubmax = 393216
312417 pending disk I/Os blocked with no pbuf
0 paging space I/Os blocked with no psbuf
2878 filesystem I/Os blocked with no fsbuf
defps = 1
force_relalias_lite = 0
framesets = 2
htabscale = n/a
kernel_heap_psize = 4096

1.4.2 Using vmstat to determine memory usage
Mainly check the memory, page, and faults columns of the vmstat output; see the CPU-assessment section earlier for a detailed description of these fields.
1.4.3 The svmon command
# svmon -G -i 2 2
If the system is paging out to paging space, it may be because the number of file pages in memory has dropped below maxperm, so some computational pages are also being paged out in order to satisfy maxfree. In that case, consider lowering maxperm to a value below the current numperm, which stops computational pages from being paged out. On 5.2 ML4 and later versions there is another way to protect computational pages: set the parameter lru_file_repage=0, which tells the VMM to prefer replacing file pages when it performs page replacement.
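A command-line sketch of the two remedies just described (values are illustrative; -p makes the change persist across reboots):

```
# Compare the observed numperm/numclient with the minperm/maxperm thresholds
vmstat -v | grep -i perm

# Remedy 1: drop maxperm% below the observed numperm percentage (example value)
vmo -p -o maxperm%=20

# Remedy 2 (AIX 5.2 ML4 and later): prefer stealing file pages over
# computational pages during page replacement
vmo -p -o lru_file_repage=0
```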

Common AIX Commands

AIX (Advanced Interactive eXecutive) is a UNIX operating system from IBM that is widely used on enterprise-class servers.

This article lists commonly used AIX commands to help readers understand and work with the operating system.

Part 1: System management commands
1. whoami: show the name of the currently logged-in user
2. hostname: show the host name
3. uname -a: display system information such as the kernel version and hardware platform
4. uptime: show how long the system has been running and the load averages
5. date: show the current date and time
6. topas: monitor system performance in real time, including CPU and memory utilization
7. lparstat -i: show LPAR (Logical Partition) information, including the partition configuration and resource usage
8. lsdev: list devices
9. errpt: view the system error log, used for troubleshooting
10. ps -ef: list the processes currently running on the system
11. mksysb: create a system backup
12. bootlist: set the boot device order
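A few of these combined into a quick first health check (the errpt filters and the head count are illustrative choices):

```
# Identity, OS level, and uptime of the box
hostname; uname -a; uptime

# Permanent hardware errors recorded in the error log
errpt -d H -T PERM | more

# Partition entitlement and virtual-processor configuration
lparstat -i | head -20
```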

Part 2: File and directory management commands
1. ls: list the files and subdirectories in the current directory
2. pwd: print the path of the current working directory
3. cd: change the working directory
4. mkdir: create a new directory
5. rm: delete files or directories
6. cp: copy files or directories
7. mv: move files or directories
8. find: search for files matching given conditions
9. du: show the disk usage of a directory or file
10. df: show file-system usage
11. cat: display the contents of a file
12. vi: edit text files

Part 3: User and permission management commands
1. useradd: create a new user
2. userdel: delete a user
3. passwd: change a user's password
4. chuser: change a user's attributes
5. chown: change the owner of a file or directory
6. chmod: change the permissions of a file or directory
7. chgrp: change the group of a file or directory
8. groups: show the groups a user belongs to
9. su: switch to another user identity
10. visudo: edit the sudoers file to configure sudo permissions (requires sudo to be installed)
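AIX also provides native equivalents for the account commands above (mkuser/rmuser alongside useradd/userdel); a small sketch with an illustrative user name:

```
# Create a user, set an initial password, then raise one of its limits
mkuser home=/home/appuser appuser
passwd appuser
chuser fsize=-1 appuser

# Verify the attributes just set
lsuser -a fsize home appuser
```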

AIX Interview Questions

When applying for AIX-related positions, interviewers often ask AIX questions to assess the candidate's technical skills and professional knowledge. This article presents some common AIX interview questions together with answers and explanations.

1. What is the AIX operating system?
AIX (Advanced Interactive eXecutive) is a UNIX-based operating system developed by IBM. It is designed for IBM Power Systems servers and is used mainly for enterprise applications and databases.

2. Briefly describe the characteristics and advantages of AIX.
AIX has the following characteristics and advantages:
- High reliability: AIX uses redundant design and solid error detection and recovery mechanisms to keep the system running continuously and stably.
- Strong scalability: AIX supports multiprocessor and multithreading technology, uses hardware resources efficiently, and meets high-performance and scalability requirements.
- Good security: AIX provides a rich set of security features and mechanisms, such as access control, permission management, and authentication, to protect the system and its data.
- Management and tuning: AIX ships a range of administration tools and performance-tuning mechanisms that make system management and optimization easier.
- Strong compatibility: AIX is compatible with other UNIX-like operating systems and supports porting many software packages and applications.

3. Explain how to create a file system in AIX.
In AIX, a file system is built on a logical volume. You can use the mkfs command on an existing logical volume, or the crfs command, which creates the logical volume, the file system, and its /etc/filesystems entry in one step. For example, to create a JFS2 file system on an existing logical volume:
```
mkfs -V jfs2 /dev/testlv
```
where /dev/testlv is the logical volume that will hold the file system (an illustrative name). Note that AIX uses JFS/JFS2 rather than Linux file systems such as ext3, and file systems live on logical volumes rather than on disk partitions.
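In practice crfs is the more common route; a sketch with illustrative volume-group, mount-point, and size values:

```
# Create a 1 GB JFS2 file system in volume group datavg, mounted at /data
crfs -v jfs2 -g datavg -m /data -a size=1G

# Mount it and confirm the new file system
mount /data
df -g /data
```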

4. How do you check the status of network interfaces on an AIX system?
Use the ifconfig command. For example, to see the status of all network interfaces:
```
ifconfig -a
```
This displays detailed information for every interface on the system, such as the interface name, IP address, netmask, and interface flags. (On AIX, the MAC address of an adapter is reported by entstat or netstat -v rather than by ifconfig.)
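Two companion commands that are often used alongside ifconfig (the adapter name ent0 is an illustrative example):

```
# One line per interface and address family, including the link-level row
netstat -in

# Detailed adapter statistics, including the hardware (MAC) address
entstat -d ent0 | more
```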

5. How do you view processes and their resource usage on an AIX system?
Use the ps command. For example, to see all processes:
```
ps -ef
```
This lists every process with details such as the process ID, parent process ID, start time, and command. For per-process CPU and memory percentages, use ps aux (or a monitor such as topas) instead.


IBM TRAINING®A26AIX Performance TuningJaqui LynchLas Vegas, NVAIX Performance TuningUpdated Presentation will be at:/papers/pseries-a26-aug06.pdfJaqui LynchSenior Systems EngineerMainline Information SystemsAgenda•AIX v5.2 versus AIX v5.3•32 bit versus 64 bit •Filesystem Types•DIO and CIO•AIX Performance Tunables •Oracle Specifics •Commands•ReferencesNew in AIX 5.2•P5support•JFS2•Large Page support (16mb)•Dynamic LPAR•Small Memory Mode–Better granularity in assignment of memory to LPARs •CuOD•xProfiler•New Performance commands–vmo, ioo, schedo replace schedtune and vmtune •AIX 5.1 Status–Will not run on p5 hardware–Withdrawn from marketing end April 2005–Support withdrawn April 2006AIX 5.3•New in5.3–With Power5 hardware•SMT•Virtual Ethernet•With APV–Shared Ethernet–Virtual SCSI Adapter–Micropartitioning–PLMAIX 5.3•New in5.3–JFS2 Updates•Improved journaling•Extent based allocation•1tb filesystems and files with potential of 4PB•Advanced Accounting•Filesystem shrink for JFS2•Striped Columns–Can extend striped LV if a disk fills up•1024 disk scalable volume group–1024 PVs, 4096 LVs, 2M pps/vg•Quotas•Each VG now has its own tunable pbuf pool–Use lvmo commandAIX 5.3•New in5.3–NFSv4 Changes•ACLs–NIM enhancements•Security•Highly available NIM•Post install configuration of Etherchannel and Virtual IP –SUMA patch tool–Last version to support 32 bit kernel–MP kernel even on a UP–Most commands changed to support LPAR stats–Forced move from vmtune to ioo and vmo–Page space scrubbing–Plus lots and lots of other things32 bit versus 64 bit•32 Bit•Up to 96GB memory •Uses JFS for rootvg •Runs on 32 or 64 bit hardware •Hardware all defaults to 32 bit•JFS is optimized for 32 bit• 5.3 is last version of AIX with 32 bit kernel •64 bit•Allows > 96GB memory •Current max is 256GB (arch is 16TB) except 590/595 (1TB & 2TB)•Uses JFS2 for rootvg •Supports 32 and 64 bit apps•JFS2 is optimized for 64 bitFilesystem Types•JFS•2gb file max unless BF •Can use with DIO •Optimized for 32 bit •Runs on 32 bit or 64 bit •Better for lots of small file creates and deletes •JFS2•Optimized for 64 bit •Required for CIO •Can use DIO•Allows larger file sizes •Runs on 32 bit or 64 bit •Better for large files and filesystemsGPFSClustered filesystemUse for RACSimilar to CIO –noncached, nonblocking I/ODIO and CIO•DIO–Direct I/O–Around since AIX v5.1–Used with JFS–CIO is built on it–Effectively bypasses filesystem caching to bring data directlyinto application buffers–Does not like compressed JFS or BF (lfe) filesystems•Performance will suffer due to requirement for 128kb I/O –Reduces CPU and eliminates overhead copying data twice–Reads are synchronous–Bypasses filesystem readahead–Inode locks still used–Benefits heavily random access workloadsDIO and CIO•CIO–Concurrent I/O–Only available in JFS2–Allows performance close to raw devices–Use for Oracle dbf and control files, and online redo logs,not for binaries–No system buffer caching–Designed for apps (such as RDBs) that enforce writeserialization at the app–Allows non-use of inode locks–Implies DIO as well–Benefits heavy update workloads–Not all apps benefit from CIO and DIO –some arebetter with filesystem caching and some are saferthat wayPerformance Tuning•CPU–vmstat, ps, nmon•Network–netstat, nfsstat, no, nfso•I/O–iostat, filemon, ioo, lvmo•Memory–lsps, svmon, vmstat, vmo, iooNew tunables•Old way–Create rc.tune and add to inittab•New way–/etc/tunables•lastboot•lastboot.log•Nextboot–Use –p –o options–ioo–p –o options–vmo–p –o options–no –p –o options–nfso–p –o options–schedo-p –o 
optionsTuneables1/3•minperm%–Value below which we steal from computational pages -default is 20%–We lower this to something like 5%, depending on workload•Maxperm%–default is 80%–This is a soft limit and affects ALL file pages (including those in maxclient)–Value above which we always steal from persistent–Be careful as this also affects maxclient–We no longer tune this –we use lru_file_repage instead–Reducing maxperm stops file caching affecting programs that are running•maxclient–default is 80%–Must be less than or equal to maxperm–Affects NFS, GPFS and JFS2–Hard limit by default–We no longer tune this –we use lru_file_repage instead•numperm–This is what percent of real memory is currently being used for caching ALL file pages •numclient–This is what percent of real memory is currently being used for caching GPFS, JFS2 and NFS •strict_maxperm–Set to a soft limit by default –leave as is•strict_maxclient–Available at AIX 5.2 ML4–By default it is set to a hard limit–We used to change to a soft limit –now we do notTuneables2/3•maxrandwrt–Random write behind–Default is 0 –try 32–Helps flush writes from memory before syncd runs•syncd runs every 60 seconds but that can be changed–When threshhold reached all new page writes are flushed to disk–Old pages remain till syncd runs•Numclust–Sequential write behind–Number of 16k clusters processed by write behind•J2_maxRandomWrite–Random write behind for JFS2–On a per file basis–Default is 0 –try 32•J2_nPagesPerWriteBehindCluster–Default is 32–Number of pages per cluster for writebehind•J2_nRandomCluster–JFS2 sequential write behind–Distance apart before random is detected•J2_nBufferPerPagerDevice–Minimum filesystem bufstructs for JFS2 –default 512, effective at fs mountTuneables3/3•minpgahead, maxpgahead, J2_minPageReadAhead & J2_maxPageReadAhead–Default min =2 max = 8–Maxfree–minfree>= maxpgahead•lvm_bufcnt–Buffers for raw I/O. 
Default is 9–Increase if doing large raw I/Os (no jfs)•numfsbufs–Helps write performance for large write sizes–Filesystem buffers•pv_min_pbuf–Pinned buffers to hold JFS I/O requests–Increase if large sequential I/Os to stop I/Os bottlenecking at the LVM–One pbuf is used per sequential I/O request regardless of the number of pages–With AIX v5.3 each VG gets its own set of pbufs–Prior to AIX 5.3 it was a system wide setting•sync_release_ilock–Allow sync to flush all I/O to a file without holding the i-node lock, and then use the i-node lock to do the commit.–Be very careful –this is an advanced parameter•minfree and maxfree–Used to set the values between which AIX will steal pages–maxfree is the number of frames on the free list at which stealing stops (must be >=minfree+8)–minfree is the number used to determine when VMM starts stealing pages to replenish the free list–On a memory pool basis so if 4 pools and minfree=1000 then stealing starts at 4000 pages– 1 LRUD per pool, default pools is 1 per 8 processors•lru_file_repage–Default is 1 –set to 0–Available on >=AIX v5.2 ML5 and v5.3–Means LRUD steals persistent pages unless numperm< minperm•lru_poll_interval–Set to10–Improves responsiveness of the LRUD when it is runningNEW Minfree/maxfree•On a memory pool basis so if 4 pools andminfree=1000 then stealing starts at 4000pages•1 LRUD per pool•Default pools is 1 per 8 processors•Cpu_scale_memp can be used to changememory pools•Try to keep distance between minfree andmaxfree<=1000•Obviously this may differvmstat -v•26279936 memory pages•25220934 lruable pages•7508669 free pages• 4 memory pools•3829840 pinned pages•80.0 maxpin percentage•20.0 minperm percentage•80.0 maxperm percentage•0.3 numperm percentage All filesystem buffers•89337 file pages•0.0 compressed percentage•0 compressed pages•0.1 numclient percentage Client filesystem buffers only•80.0 maxclient percentage•28905 client pages•0 remote pageouts scheduled•280354 pending disk I/Os blocked with no pbuf LVM –pv_min_pbuf •0 paging space I/Os blocked with no psbuf VMM –fixed per page dev •2938 filesystem I/Os blocked with no fsbuf numfsbufs•7911578 client filesystem I/Os blocked with no fsbuf•0 external pager filesystem I/Os blocked with no fsbuf j2_nBufferPerPagerDevice •Totals since boot so look at 2 snapshots 60 seconds apart•pbufs, psbufs and fsbufs are all pinnedno -p -o rfc1323=1no -p -o sb_max=1310720no -p -o tcp_sendspace=262144no -p -o tcp_recvspace=262144no -p -o udp_sendspace=65536no -p -o udp_recvspace=655360nfso -p -o nfs_rfc1323=1nfso -p -o nfs_socketsize=60000nfso -p -o nfs_tcp_socketsize=600000vmo -p -o minperm%=5vmo -p -o minfree=960vmo -p -o maxfree=1088vmo -p -o lru_file_repage=0vmo -p -o lru_poll_interval=10ioo -p -o j2_maxPageReadAhead=128ioo -p -o maxpgahead=16ioo -p -o j2_maxRandomWrite=32ioo -p -o maxrandwrt=32ioo -p -o j2_nBufferPerPagerDevice=1024ioo -p -o pv_min_pbuf=1024ioo -p -o numfsbufs=2048ioo -p -o j2_nPagesPerWriteBehindCluster=32Increase the following if using raw LVMs (default is 9)Ioo –p –o lvm_bufvnt=12Starter Set of tunablesNB please test these before putting intoproduction vmstat -IIGNORE FIRST LINE -average since bootRun vmstat over an interval (i.e. 
vmstat 2 30)System configuration: lcpu=24 mem=102656MB ent=0kthr memory page faults cpu---------------------------------------------------------------------------r b avm fre re pi po fr sr cy in sy cs us sy id wa pc ec 56 1 18637043 7533530 0 0 0 0 0 0 4298 24564 986698 2 0 0 12.00 100.057 1 18643753 7526811 0 0 0 0 0 0 3867 25124 9130 98 2 0 0 12.00 100.0System configuration: lcpu=8 mem=1024MB ent=0.50kthr memory page faults cpu------------------------------------------------------------------------------r b p avm fre fi fo pi po fr sr in sy cs us sy id wa pc ec1 1 0 170334 968 96 163 0 0 190 511 11 556 662 1 4 90 5 0.03 6.81 1 0 170334 1013 53 85 0 0 107 216 7 268 418 02 92 5 0.02 4.4Pc = physical processors consumed –if using SPPEc = %entitled capacity consumed –if using SPPFre may well be between minfree and maxfreefr:sr ratio 1783:2949 means that for every 1783 pages freed 2949 pages had to be examined. ROT was 1:4 –may need adjustingTo get a 60 second average try: vmstat 60 2Memory and I/O problems•iostat–Look for overloaded disks and adapters•vmstat•vmo and ioo(replace vmtune)•sar•Check placement of JFS and JFS2 filesystems and potentially the logs•Check placement of Oracle or database logs•fileplace and filemon•Asynchronous I/O•Paging•svmon–svmon-G >filename•nmon•Check error logsioo Output•lvm_bufcnt= 9•minpgahead= 2•maxpgahead= 8•maxrandwrt = 32 (default is 0)•numclust= 1•numfsbufs= 186•sync_release_ilock= 0•pd_npages= 65536•pv_min_pbuf= 512•j2_minPageReadAhead = 2•j2_maxPageReadAhead = 8•j2_nBufferPerPagerDevice = 512•j2_nPagesPerWriteBehindCluster = 32•j2_maxRandomWrite = 0•j2_nRandomCluster = 0vmo OutputDEFAULTS maxfree= 128 minfree= 120 minperm% = 20 maxperm% = 80 maxpin% = 80 maxclient% = 80 strict_maxclient = 1 strict_maxperm = 0OFTEN SEEN maxfree= 1088 minfree= 960 minperm% = 10 maxperm% = 30 maxpin% = 80 Maxclient% = 30 strict_maxclient = 0 strict_maxperm = 0numclient and numperm are both 29.9So numclient-numperm=0 aboveMeans filecaching use is probably all JFS2/NFS/GPFSRemember to switch to new method using lru_file_repageiostatIGNORE FIRST LINE -average since bootRun iostat over an interval (i.e. 
iostat2 30)tty: tin tout avg-cpu: % user % sys % idle % iowait physc% entc0.0 1406.0 93.1 6.9 0.0 0.012.0 100.0Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk1 1.0 1.5 3.0 0 3hdisk0 6.5 385.5 19.5 0 771hdisk14 40.5 13004.0 3098.5 12744 13264 hdisk7 21.0 6926.0 271.0 440 13412 hdisk15 50.5 14486.0 3441.5 13936 15036 hdisk17 0.0 0.00.00 0iostat–a AdaptersSystem configuration: lcpu=16 drives=15tty: tin tout avg-cpu: % user % sys % idle % iowait0.4 195.3 21.4 3.3 64.7 10.6Adapter: Kbps tps Kb_read Kb_wrtnfscsi1 5048.8 516.9 1044720428 167866596Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk6 23.4 1846.1 195.2 381485286 61892408 hdisk9 13.9 1695.9 163.3 373163554 34143700 hdisk8 14.4 1373.3 144.6 283786186 46044360 hdisk7 1.1 133.5 13.8 628540225786128 Adapter: Kbps tps Kb_read Kb_wrtnfscsi0 4438.6 467.6 980384452 85642468Disks: % tm_act Kbps tps Kb_read Kb_wrtn hdisk5 15.2 1387.4 143.8 304880506 28324064 hdisk2 15.5 1364.4 148.1 302734898 24950680 hdisk3 0.5 81.4 6.8 3515294 16043840 hdisk4 15.8 1605.4 168.8 369253754 16323884 iostat-DExtended Drive Reporthdisk3 xfer: %tm_act bps tps bread bwrtn0.5 29.7K 6.8 15.0K 14.8Kread: rps avgserv minserv maxserv timeouts fails29.3 0.1 0.1784.5 0 0write: wps avgserv minserv maxserv timeouts fails133.6 0.0 0.3 2.1S 0 0 wait: avgtime mintime maxtime avgqsz qfull0.0 0.00.2 0.0 0iostat Otheriostat-A async IOSystem configuration: lcpu=16 drives=15aio: avgc avfc maxg maif maxr avg-cpu: % user % sys % idle % iowait150 0 5652 0 12288 21.4 3.3 64.7 10.6Disks: % tm_act Kbps tps Kb_read Kb_wrtnhdisk6 23.4 1846.1 195.2 381485298 61892856hdisk5 15.2 1387.4 143.8 304880506 28324064hdisk9 13.9 1695.9 163.3 373163558 34144512iostat-m pathsSystem configuration: lcpu=16 drives=15tty: tin tout avg-cpu: % user % sys % idle % iowait0.4 195.3 21.4 3.3 64.7 10.6Disks: % tm_act Kbps tps Kb_read Kb_wrtnhdisk0 1.6 17.0 3.7 1190873 2893501Paths: % tm_act Kbps tps Kb_read Kb_wrtnPath0 1.6 17.0 3.7 1190873 2893501lvmo•lvmo output••vgname= rootvg(default but you can change with –v)•pv_pbuf_count= 256–Pbufs to add when a new disk is added to this VG •total_vg_pbufs= 512–Current total number of pbufs available for the volume group.•max_vg_pbuf_count= 8192–Max pbufs that can be allocated to this VG•pervg_blocked_io_count= 0–No. I/O's blocked due to lack of free pbufs for this VG •global_pbuf_count= 512–Minimum pbufs to add when a new disk is added to a VG •global_blocked_io_count= 46–No. 
I/O's blocked due to lack of free pbufs for all VGslsps–a(similar to pstat)•Ensure all page datasets the same size although hd6 can be bigger -ensure more page space than memory–Especially if not all page datasets are in rootvg–Rootvg page datasets must be big enough to hold the kernel •Only includes pages allocated (default)•Use lsps-s to get all pages (includes reserved via early allocation (PSALLOC=early)•Use multiple page datasets on multiple disks –Parallelismlsps outputlsps-aPage Space Physical Volume Volume Group Size %Used Active Auto Typepaging05 hdisk9 pagvg01 2072MB 1 yes yes lvpaging04 hdisk5 vgpaging01 504MB 1 yes yes lvpaging02 hdisk4 vgpaging02 168MB 1 yes yes lvpaging01 hdisk3 vgpagine03 168MB 1 yes yes lvpaging00 hdisk2 vgpaging04 168MB 1 yes yes lvhd6 hdisk0 rootvg512MB 1 yes yes lvlsps-sTotal Paging Space Percent Used3592MB 1%Bad Layout aboveShould be balancedMake hd6 the biggest by one lp or the same size as the others in a mixedenvironment like thisSVMON Terminology•persistent–Segments used to manipulate files and directories •working–Segments used to implement the data areas of processesand shared memory segments•client–Segments used to implement some virtual file systems likeNetwork File System (NFS) and the CD-ROM file system•/infocenter/pseries/topi c/com.ibm.aix.doc/cmds/aixcmds5/svmon.htmsvmon-Gsize inuse free pin virtualmemory 26279936 18778708 7501792 3830899 18669057pg space 7995392 53026work pers clnt lpagepin 3830890 0 0 0in use 18669611 80204 28893 0In GB Equates to:size inuse free pin virtualmemory 100.25 71.64 28.62 14.61 71.22pg space 30.50 0.20work pers clnt lpagepin 14.61 0 0 0in use 71.22 0.31 0.15 0General Recommendations•Different hot LVs on separate physical volumes•Stripe hot LV across disks to parallelize•Mirror read intensive data•Ensure LVs are contiguous–Use lslv and look at in-band % and distrib–reorgvg if needed to reorg LVs•Writeverify=no•minpgahead=2, maxpgahead=16 for 64kb stripe size•Increase maxfree if you adjust maxpgahead•Tweak minperm, maxperm and maxrandwrt•Tweak lvm_bufcnt if doing a lot of large raw I/Os•If JFS2 tweak j2 versions of above fields•Clean out inittab and rc.tcpip and inetd.conf, etc for things that should not start–Make sure you don’t do it partially–i.e. 
portmap is in rc.tcpip and rc.nfsOracle Specifics•Use JFS2 with external JFS2 logs(if high write otherwise internal logs are fine)•Use CIO where it will benefit you–Do not use for Oracle binaries•Leave DISK_ASYNCH_IO=TRUE in Oracle•Tweak the maxservers AIO settings•If using JFS–Do not allocate JFS with BF (LFE)–It increases DIO transfer size from 4k to 128k–2gb is largest file size–Do not use compressed JFS –defeats DIOTools•vmstat –for processor and memory•nmon–/collaboration/wiki/display/WikiPtype/nmon–To get a 2 hour snapshot (240 x 30 seconds)–nmon-fT-c 30 -s 240–Creates a file in the directory that ends .nmon•nmon analyzer–/collaboration/wiki/display/WikiPtype/nmonanalyser–Windows tool so need to copy the .nmon file over–Opens as an excel spreadsheet and then analyses the data•sar–sar-A -o filename 2 30 >/dev/null–Creates a snapshot to a file –in this case 30 snaps 2 seconds apart •ioo, vmo, schedo, vmstat–v•lvmo•lparstat,mpstat•Iostat•Check out Alphaworks for the Graphical LPAR tool•Many many moreOther tools•filemon–filemon -v -o filename -O all–sleep 30–trcstop•pstat to check async I/O–pstat-a | grep aio| wc–l•perfpmr to build performance info forIBM if reporting a PMR–/usr/bin/perfpmr.sh300lparstatlparstat-hSystem Configuration: type=shared mode=Uncapped smt=On lcpu=4 mem=512 ent=5.0 %user %sys %wait %idle physc%entc lbusy app vcsw phint%hypv hcalls0.0 0.5 0.0 99.5 0.00 1.0 0.0 -1524 0 0.5 154216.0 76.3 0.0 7.7 0.30 100.0 90.5 -321 1 0.9 259Physc–physical processors consumed%entc–percent of entitled capacityLbusy–logical processor utilization for system and userVcsw–Virtual context switchesPhint–phantom interrupts to other partitions%hypv-%time in the hypervisor for this lpar–weird numbers on an idle system may be seen/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/lparstat.htmmpstatmpstat–sSystem configuration: lcpu=4 ent=0.5Proc1Proc00.27%49.63%cpu0cpu2cpu1cpu30.17%0.10% 3.14%46.49%Above shows how processor is distributed using SMTAsync I/OTotal number of AIOs in usepstat–a | grep aios| wc–lOr new way is:ps–k | grep aio| wc-l4205AIO max possible requestslsattr –El aio0 –a maxreqsmaxreqs4096 Maximum number of REQUESTS TrueAIO maxserverslsattr –El aio0 –a maxserversmaxservers 320 MAXIMUM number of servers per cpu TrueNB –maxservers is a per processor setting in AIX 5.3Look at using fastpathFastpath can now be enabled with DIO/CIOSee Session A23 by Grover Davidson for a lot more info on Async I/OI/O Pacing•Useful to turn on during backups (streaming I/Os)•Set high value to multiple of (4*n)+1•Limits the number of outstanding I/Osagainst an individual file•minpout–minimum•maxpout–maximum•If process reaches maxpout then it issuspended from creating I/O untiloutstanding requests reach minpoutNetwork•no –a & nfso-a to find what values are set to now•Buffers–Mbufs•Network kernel buffers•thewall is max memory for mbufs•Can use maxmbuf tuneable to limit this or increase it–Uses chdev–Determines real memory used by communications–If 0 (default) then thewall is used–Leave it alone–TCP and UDP receive and send buffers–Ethernet adapter attributes•If change send and receive above then also set it here–no and nfso commands–nfsstat–rfc1323 and nfs_rfc1323netstat•netstat–i–Shows input and output packets and errors foreach adapter–Also shows collisions•netstat–ss–Shows summary info such as udp packets droppeddue to no socket•netstat–m–Memory information•netstat–v–Statistical information on all adaptersNetwork tuneables•no -a•Using no–rfc1323 = 1–sb_max=1310720(>= 
1MB)–tcp_sendspace=262144–tcp_recvspace=262144–udp_sendspace=65536(at a minimum)–udp_recvspace=655360•Must be less than sb_max•Using nfso–nfso-a–nfs_rfc1323=1–nfs_socketsize=60000–nfs_tcp_socketsize=600000•Do a web search on “nagle effect”•netstat–s | grep“socket buffer overflow”nfsstat•Client and Server NFS Info •nfsstat–cn or –r or –s–Retransmissions due to errors•Retrans>5% is bad–Badcalls–Timeouts–Waits–ReadsUseful Links• 1. Ganglia–• 2. Lparmon–/tech/lparmon• 3. Nmon–/collaboration/wiki/display/WikiPtype/nmon• 4. Nmon Analyser–/collaboration/wiki/display/WikiPtype/nmonanalyser • 5. Jaqui's AIX* Blog–Has a base set of performance tunables for AIX 5.3 /blosxomjl.cgi/• 6. vmo command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds6/vmo.htm •7. ioo command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/ioo.htm •8. vmstat command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/ioo.htm •9. lvmo command–/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.doc/cmds/aixcmds3/ioo.htm •10. eServer Magazine and AiXtra–/•Search on Jaqui AND Lynch•Articles on Tuning and Virtualization•11. Find more on Mainline at:–/ebrochureQuestions?Supplementary SlidesDisk Technologies•Arbitrated–SCSI20 or 40 mb/sec–FC-AL 100mb/sec–Devices arbitrate for exclusive control–SCSI priority based on address •Non-Arbitrated–SSA80 or 160mb/sec–Devices on loop all treated equally–Devices drop packets of data on loopAdapter Throughput-SCSI100%70%Bits Maxmby/s mby/s Bus DevsWidth •SCSI-15 3.588•Fast SCSI10788•FW SCSI20141616•Ultra SCSI201488•Wide Ultra SCSI 4028168•Ultra2 SCSI402888•Wide Ultra2 SCSI80561616•Ultra3 SCSI1601121616•Ultra320 SCSI3202241616•Ultra640 SCSI6404481616•Watch for saturated adaptersCourtesy of /terms/scsiterms.htmlAdapter Throughput-Fibre100%70%mbit/s mbit/s•13393•266186•530371• 1 gbit717• 2 gbit1434•SSA comes in 80 and 160 mb/secRAID Levels•Raid-0–Disks combined into single volume stripeset–Data striped across the disks•Raid-1–Every disk mirrored to another–Full redundancy of data but needs extra disks–At least 2 I/Os per random write•Raid-0+1–Striped mirroring–Combines redundancy and performanceRAID Levels•RAID-5–Data striped across a set of disks–1 more disk used for parity bits–Parity may be striped across the disks also–At least 4 I/Os per random write(read/write to data and read/write toparity)–Uses hot spare technology。
