NetApp NAS Storage Linux NFS Tuning Guide v1.0
NetApp Storage Device Installation and Configuration Manual

NetApp Storage Device Configuration Notes - Revision History

Contents
1 Purpose
2 Terminology and Abbreviations
3 Network Topology and Environment
4 Installation and Configuration
4.1 NetApp Hardware Installation
4.2 Device Initialization and System Setup
4.2.1 Device Initialization
4.2.2 System Setup
4.3 Operating System Installation
4.3.1 Register the CIFS service on the existing system and upload the operating system files to the FAS storage system
4.4 Application Configuration
4.4.1 System Parameter Configuration
4.4.2 Registering the Required Services
4.4.3 Creating a Volume and Exporting Space
4.4.4 Creating a Qtree and Applying Quota Limits
4.4.5 Configuring AutoSupport
4.4.6 Configuring Snapshot Policies and Data Recovery Methods
4.4.7 Data Recovery After a Disk Failure
4.4.8 Configuring Cluster

1 Purpose
This document describes in detail the installation and configuration of the NetApp FAS storage system, introduces commonly used commands, and explains reliability maintenance, fault diagnosis, and recovery, so that development, test, customer-service, and field-maintenance staff can install, use, and maintain the NetApp FAS storage system.
2 Terminology and Abbreviations

3 Network Topology and Environment
[Figure 3.1: NetApp FAS storage system network topology: a NetApp FAS3240A connected through a switch to ESX Server hosts]
The NetApp FAS storage system is used as NAS storage and is connected to the hosts through a 10 Gigabit Ethernet switch.
4 Installation and Configuration
4.1 NetApp Hardware Installation
Hardware installation of the storage device mainly consists of cabling between shelves, installing disks, racking the shelves, and powering on; these steps are usually performed by a NetApp support engineer.
Use the "DB-9 to RJ-45" adapter cable shipped with the storage device to connect the CONSOLE port of the FAS storage system to the serial port of a host running Windows. Install SecureCRT on the Windows host and create a new connection using the serial protocol; choose the port parameter according to whether the cable is attached to COM1 or COM2, set the remaining parameters as shown in Figure 4.1, and log in to the FAS storage system over the serial connection.
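Figure 4.1 is not reproduced here; the settings below are the typical NetApp serial console parameters and are given as an assumption to verify against your controller's documentation:

Protocol:      Serial
Port:          COM1 or COM2 (whichever port the cable is attached to)
Baud rate:     9600
Data bits:     8
Parity:        None
Stop bits:     1
Flow control:  None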
NetApp Configuration Manual

2. System Basic Maintenance Guide
2.1. Entering the Management Interface
2.2. Basic System Information
2.3. System Log Information
2.4. Configuring AutoSupport
2.5. Setting the Time Zone, Time, and Date
2.6. Shutdown and Restart
2.7. Managing and Creating Volumes
2.8. Managing and Creating Qtrees
2.9. Disk Quotas
2.10. Configuring and Managing Snapshots
2.11. CIFS Information
2.12. CIFS Shares
2.13. Enabling the home directory Feature
2.14. Connecting Windows over iSCSI
2.15. Managing Network Ports
2.16. Other Network Parameters
2.17. Changing the root Password
2.18. Real-Time System Status Monitoring
2.6. Shutdown and Restart
Shutdown and restart can be performed from the Filer - Shut Down and Reboot page, where a delay time can also be set. For NetApp systems it is strongly recommended to shut down and power off through this procedure; otherwise the NVRAM battery may be drained excessively, which can prevent the system from booting normally the next time. For cluster systems (such as the FAS2040A, FAS3140A, etc.), the cluster function must be disabled before shutdown: telnet to either controller and run the command cf disable, then shut down each controller as described above. After power-on, telnet to either controller again and run cf enable to re-enable the cluster function. Note: when powering the system off, power off the controllers first and then the disk shelves; power on in the reverse order.
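A minimal sketch of the cluster shutdown and startup sequence described above; the prompts filer1/filer2 are placeholders, and halt is the usual Data ONTAP 7-mode shutdown command, so verify it against your ONTAP release:

filer1> cf disable      # disable clustering from either controller before shutdown
filer1> halt            # shut down each controller in turn
filer2> halt
                        # power off controllers first, then disk shelves; power on in reverse order
filer1> cf enable       # after both controllers are back up, re-enable clustering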
NetApp Storage Installation, Configuration, and Maintenance Manual V1.0

NetApp Storage System Installation, Configuration, and Maintenance Manual. Document information: this installation and maintenance manual was customized for XXX and supplements the standard NetApp documentation.
Contents
1 Job Planning Steps
2 Configuration Steps
2.1 Assign Disk Ownership and Create the Root Volume
2.2 Check and Update the Firmware of Each Component
2.3 Check and Update the Storage Operating System Version
2.4 Enter Software Licenses
2.5 Run SETUP for Initial Configuration
2.6 Resize the Root Volume
2.7 Configure VLANs
2.8 Modify the HOSTS File
2.9 Modify the /etc/rc File
2.10 Configure the AutoSupport Service
2.11 Configure SSH
2.12 Configure SNMP
2.13 Configure NTP
2.14 Configure the MTA
2.15 Configure IPspace
2.16 Configure MultiStore
2.17 Configure CIFS
2.18 Configure iSCSI
2.19 Configure FCP
2.20 Configure NFS
2.21 Configure Deduplication
2.22 Configure SnapRestore
2.23 Disaster Recovery with SnapMirror
3 Routine Maintenance
3.1 Normal Power-On and Power-Off
3.2 Maintenance Tools
3.2.1 FilerView Graphical Management Interface
3.2.2 Command Line (CLI)
3.3 Space Management: Aggregates, Volumes, and LUNs
3.4 Basic Use of Common Commands
3.5 Routine System Checks
3.5.1 Visual Inspection
3.5.2 Scheduled System Checks
3.6 AutoSupport Overview and Configuration
4 Troubleshooting Process
4.1 Support Channels
4.1.1 The NetApp on the Web (NOW) site and services
4.1.2 GSC (Global Support Center)
4.2 Case-Opening Process
4.3 Faulty-Part Replacement Process

1 Job Planning Steps

2 Configuration Steps
Configuration parameter table

2.1 Assign Disk Ownership and Create the Root Volume

2.2 Check and Update the Firmware of Each Component

2.3 Check and Update the Storage Operating System Version

2.4 Enter Software Licenses
Add each license with the license add XXXXXXX command; after all licenses have been entered, run the license command to verify them.
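A brief sketch of the licensing step on the Data ONTAP CLI; the license code shown is a placeholder, not a real key:

filer> license add ABCDEFG      # repeat for each licensed feature code
filer> license                  # list installed licenses to confirm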
NetApp Storage Solution

Storage Solution
NetApp Inc., March 2010

Contents
1 Technical Characteristics of NetApp Storage Products
1.1 NetApp's Unified Storage Concept
1.2 Security and Reliability
1.3 Ease of Use and Maintenance
1.4 Multiple Data Protection Methods
1.5 Ease of Upgrade and Expansion
1.6 System Management
2 Advantages of NetApp FAS Storage for Audio/Video Storage Applications
2.1 Cross-Platform File Sharing
2.2 NAS Storage Helps Improve Server Performance
2.3 NetApp NAS Storage Has an Always-Consistent File System
2.4 Unmatched Scalability
2.5 Flexible Hot-Spare Mechanism
2.6 High Manageability

1 Technical Characteristics of NetApp Storage Products
1.1 NetApp's Unified Storage Concept
NetApp is the only vendor whose storage architecture is designed around unification: a single platform supports NAS, FC-SAN, and IP-SAN, with the NFS, CIFS, iSCSI, and FCP data-access protocols. Users enable all of these capabilities simply by configuring the software protocols, without purchasing additional hardware.
For users, a large number of existing and future applications will be deployed on this storage system, and depending on their own characteristics these applications may have different storage requirements. As an open platform, unified storage is necessary and indeed essential in such an environment. To make unified storage easier to implement and integrate, NetApp has unique and leading technology in this area, ensuring that applications using different protocols can run in a single, unified environment.

By established convention, however, SAN refers to a network model in which data is passed between the computing platform and the storage platform in blocks, while NAS is narrowly defined as a network model in which data is passed between the computing platform and the storage platform as files. The emergence of iSCSI further shows that whether the network model is implemented over Fibre Channel or Ethernet is a relatively secondary matter; the transport is only the vehicle that carries the data blocks. The difference between SAN and NAS therefore lies in whether blocks or files are transferred, and more fundamentally in where the file system resides. For an enterprise, only a very small portion of data can be used solely in a SAN or solely in a NAS environment.
Linux NFS Configuration Method

NFS Server Configuration

1. Check the NFS server installation. Verify that the nfs-utils and portmap packages are installed on the Linux system (RHEL installs both by default):
# rpm -q nfs-utils portmap

2. Check whether the NFS services are running:
# service nfs status
# service portmap status

3. If the services are not running, start them (NFS is not fully enabled by default):
# service nfs start
# service portmap start

4. Edit the NFS server configuration file. The configuration file is /etc/exports; it defines the directories that the NFS server shares:
# vi /etc/exports
The format of an exports entry is:
/home * (sync,ro)
where /home is the shared directory, * means all hosts, and (sync,ro) are the options. Options are placed in parentheses and separated by commas:
sync: the NFS server writes to disk synchronously, so data is not easily lost; recommended for all NFS shares.
ro: the exported directory is read-only; cannot be combined with rw.
rw: the exported directory is read-write; cannot be combined with ro.
The client host field in the exports file can be written in several forms: a host given by IP address, a host given by domain name, all hosts in a network segment, all hosts in a domain, or all hosts.

5. Re-export the shared directories. The exportfs tool manages the exports file; the following command makes new exports settings take effect:
# exportfs -rv

6. Display the directories exported by the NFS server on the local host:
# showmount -e

7. Display the exported directories that have been mounted by NFS clients:
# showmount -d

8. Mount the shared directory from another Linux system:
# mount -t nfs <NFS server>:<shared directory> /mnt/daoda

9. View the contents of the mount point:
# cd /mnt; ll

10. Unmount the NFS share:
# umount /mnt/

Summary:
1. Before configuring the NFS server, use ping to make sure the two Linux systems can reach each other; if they cannot, stop the firewall from the graphical environment or with:
# service iptables stop
2. Make sure the commands are entered correctly.
3. After changing the exports file, run exportfs -rv to make the changes take effect.
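To illustrate the client-address forms listed in step 4, here is a sample /etc/exports using placeholder paths, addresses, and hostnames; adjust them to your own network:

/share1  192.168.1.10(rw,sync)           # a single host, by IP address
/share2  client1.example.com(ro,sync)    # a single host, by domain name
/share3  192.168.1.0/24(rw,sync)         # all hosts in a network segment
/share4  *.example.com(ro,sync)          # all hosts in a domain
/share5  *(ro,sync)                      # all hosts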
NetApp Operations Manual

NetApp FAS Series Storage Operations Manual

Contents
1. NetApp Storage System
2. System Basic Maintenance Guide
2.1. Entering the Management Interface
2.2. Basic System Information
2.3. System Log Information
2.4. Configuring AutoSupport
2.5. Setting the Time Zone, Time, and Date
2.6. Miscellaneous Settings
2.7. Shutdown and Restart
2.8. Managing and Creating Volumes
2.9. Managing and Creating Qtrees
2.10. Disk Quotas
2.11. Configuring and Managing Snapshots
2.12. CIFS Information
2.13. CIFS Shares
2.14. Enabling the home directory Feature
2.15. Connecting Windows over iSCSI
2.16. Managing Network Ports
2.16.1. VIF multiple-mode bonding and the corresponding Cisco switch-side configuration commands
2.17. Other Network Parameters
2.18. Changing the root Password
2.19. Real-Time System Status Monitoring
Appendix 1: Disk Replacement Procedure
Appendix 2: Configuring a Time Synchronization Server

1. NetApp Storage System
NetApp systems provide users on a wide range of platforms with seamless access to all enterprise data.
NetApp's full range of Fibre Channel network storage systems supports NFS and CIFS for file access and FCP and iSCSI for block access, so a NetApp storage system can be integrated very easily into a NAS or SAN environment while protecting existing information. NetApp's design provides optimized, consolidated, high-performance data access for application servers and server clusters in dedicated-access environments as well as for users in multi-user environments. NetApp storage systems deliver field-proven data availability of more than 99.998%, reducing costly downtime, whether planned or unplanned, and maximizing access to critical data. They provide data manageability, scalability, interoperability, and availability in a simple, easy-to-use environment, lowering your total cost of ownership and strengthening your competitive advantage.
Linux NFS Complete Configuration Manual
An NFS server can be thought of as a file server: it lets your PC mount, over the network, the file systems shared by a remote NFS server into its own system, so that to the client, using the remote NFS files feels just like using local files. Since its creation, the NFS protocol has gone through several versions, such as NFS v2 (RFC 1094) and NFS v3 (RFC 1813); the newest version is v4 (RFC 3010).
II. Main differences between NFS protocol versions
The main differences of v3 relative to v2:
1. File size: v2 supports only 32-bit file sizes (4 GB), while NFS v3 adds support for 64-bit file sizes.
2. Transfer size: v3 does not fix the transfer size, whereas v2 can be set to at most 8 KB; the size is set with the rsize and wsize options.
3. Complete status information: v3 adds and improves many error and success messages, which greatly helps server configuration and management.
4. Support for TCP: v2 supports only UDP, which is very limiting in demanding network environments; v3 adds support for TCP.
5. Asynchronous write capability.
6. Improved server mount performance.
7. Better I/O write performance.
9. Better network efficiency, making network operation more effective.
10. Stronger disaster recovery capability.

Asynchronous writes (new in v3): whether NFS v3 uses asynchronous writes is an optional feature. An NFS v3 client sends an asynchronous write request to the server, and the server is not required to have written the data to stable storage before replying to the client. The server can decide when to write the data, or gather several write requests together, process them, and then write them out. The client keeps a copy of the data in case the server cannot write it out completely.
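On a Linux client you can observe the commit requests that accompany these asynchronous v3 writes; a quick, hedged check using standard nfs-utils tools (output layout varies between releases):

nfsstat -c        # client-side RPC/NFS statistics; the v3 section includes a commit counter
nfsstat -o net    # network-level statistics such as retransmissions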
Linux NFS Optimization
Linux NFS-HOWTO

5. Optimizing NFS Performance

Careful analysis of your environment, both from the client and from the server point of view, is the first step necessary for optimal NFS performance. The first sections will address issues that are generally important to the client. Later (Section 5.3 and beyond), server side issues will be discussed. In both cases, these issues will not be limited exclusively to one side or the other, but it is useful to separate the two in order to get a clearer picture of cause and effect.

Aside from the general network configuration - appropriate network capacity, faster NICs, full duplex settings in order to reduce collisions, agreement in network speed among the switches and hubs, etc. - one of the most important client optimization settings are the NFS data transfer buffer sizes, specified by the mount command options rsize and wsize.

5.1. Setting Block Size to Optimize Transfer Speeds

The mount command options rsize and wsize specify the size of the chunks of data that the client and server pass back and forth to each other. If no rsize and wsize options are specified, the default varies by which version of NFS we are using. The most common default is 4K (4096 bytes), although for TCP-based mounts in 2.2 kernels, and for all mounts beginning with 2.4 kernels, the server specifies the default block size.

The theoretical limit for the NFS V2 protocol is 8K. For the V3 protocol, the limit is specific to the server. On the Linux server, the maximum block size is defined by the value of the kernel constant NFSSVC_MAXBLKSIZE, found in the Linux kernel source file ./include/linux/nfsd/const.h. The current maximum block size for the kernel, as of 2.4.17, is 8K (8192 bytes), but the patch set implementing NFS over TCP/IP transport in the 2.4 series, as of this writing, uses a value of 32K (defined in the patch as 32*1024) for the maximum block size.

All 2.4 clients currently support up to 32K block transfer sizes, allowing the standard 32K block transfers across NFS mounts from other servers, such as Solaris, without client modification. The defaults may be too big or too small, depending on the specific combination of hardware and kernels. On the one hand, some combinations of Linux kernels and network cards (largely on older machines) cannot handle blocks that large. On the other hand, if they can handle larger blocks, a bigger size might be faster.

You will want to experiment and find an rsize and wsize that works and is as fast as possible. You can test the speed of your options with some simple commands, if your network environment is not heavily used. Note that your results may vary widely unless you resort to using more complex benchmarks, such as Bonnie, Bonnie++, or IOzone.

The first of these commands transfers 16384 blocks of 16k each from the special file /dev/zero (which if you read it just spits out zeros really fast) to the mounted partition. We will time it to see how long it takes. So, from the client machine, run the timed dd write shown in the sketch below. This creates a 256Mb file of zeroed bytes. In general, you should create a file that's at least twice as large as the system RAM on the server, but make sure you have enough disk space! Then read back the file into the great black hole on the client machine (/dev/null) with the corresponding timed dd read, also sketched below. Repeat this a few times and average how long it takes.
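The HOWTO's example commands were omitted from this copy; the following is a reconstruction based on the description above, assuming the NFS export is mounted at /mnt/home (adjust the path, bs, and count for your own test):

time dd if=/dev/zero of=/mnt/home/testfile bs=16k count=16384     # timed 256 MB write to the NFS mount
time dd if=/mnt/home/testfile of=/dev/null bs=16k                 # timed read back into /dev/null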
Be sure to unmount and remount the filesystem each time (both on the client and, if you are zealous, locally on the server as well), which should clear out any caches.

Then unmount, and mount again with a larger and smaller block size. They should be multiples of 1024, and not larger than the maximum block size allowed by your system. Note that NFS Version 2 is limited to a maximum of 8K, regardless of the maximum block size defined by NFSSVC_MAXBLKSIZE; Version 3 will support up to 64K, if permitted. The block size should be a power of two since most of the parameters that would constrain it (such as file system block sizes and network packet size) are also powers of two. However, some users have reported better successes with block sizes that are not powers of two but are still multiples of the file system block size and the network packet size.

Directly after mounting with a larger size, cd into the mounted file system and do things like ls, explore the filesystem a bit to make sure everything is as it should be. If the rsize/wsize is too large the symptoms are very odd and not 100% obvious. A typical symptom is incomplete file lists when doing ls, and no error messages, or reading files failing mysteriously with no error messages. After establishing that the given rsize/wsize works you can do the speed tests again. Different server platforms are likely to have different optimal sizes.

Remember to edit /etc/fstab to reflect the rsize/wsize you found to be the most desirable.

If your results seem inconsistent, or doubtful, you may need to analyze your network more extensively while varying the rsize and wsize values. In that case, here are several pointers to benchmarks that may prove useful:
- Bonnie /bonnie/
- Bonnie++ .au/bonnie++/
- IOzone file system benchmark /
- The official NFS benchmark, SPECsfs97 /osg/sfs97/

The easiest benchmark with the widest coverage, including an extensive spread of file sizes, and of IO types - reads & writes, rereads & rewrites, random access, etc. - seems to be IOzone. A recommended invocation of IOzone (for which you must have root privileges) includes unmounting and remounting the directory under test, in order to clear out the caches between tests, and including the file close time in the measurements. Assuming you've already exported /tmp to everyone from the server foo, and that you've installed IOzone in the local directory, an invocation using the options listed below should work.

The benchmark should take 2-3 hours at most, but of course you will need to run it for each value of rsize and wsize that is of interest. The web site gives full documentation of the parameters, but the specific options used above are:
- -a Full automatic mode, which tests file sizes of 64K to 512M, using record sizes of 4K to 16M
- -R Generate report in excel spreadsheet form (The "surface plot" option for graphs is best)
- -c Include the file close time in the tests, which will pick up the NFS version 3 commit time
- -U Use the given mount point to unmount and remount between tests; it clears out caches
- -f When using unmount, you have to locate the test file in the mounted file system

5.2. Packet Size and Network Drivers

While many Linux network card drivers are excellent, some are quite shoddy, including a few drivers for some fairly standard cards.
It is worth experimenting with your network card directly to find out how it can best handle traffic. Try pinging back and forth between the two machines with large packets using the -f and -s options with ping (see ping(8) for more details) and see if a lot of packets get dropped, or if they take a long time for a reply. If so, you may have a problem with the performance of your network card.

For a more extensive analysis of NFS behavior in particular, use the nfsstat command to look at nfs transactions, client and server statistics, network statistics, and so forth. The "-o net" option will show you the number of dropped packets in relation to the total number of transactions. In UDP transactions, the most important statistic is the number of retransmissions, due to dropped packets, socket buffer overflows, general server congestion, timeouts, etc. This will have a tremendously important effect on NFS performance, and should be carefully monitored. Note that nfsstat does not yet implement the -z option, which would zero out all counters, so you must look at the current nfsstat counter values prior to running the benchmarks.

To correct network problems, you may wish to reconfigure the packet size that your network card uses. Very often there is a constraint somewhere else in the network (such as a router) that causes a smaller maximum packet size between two machines than what the network cards on the machines are actually capable of. TCP should autodiscover the appropriate packet size for a network, but UDP will simply stay at a default value. So determining the appropriate packet size is especially important if you are using NFS over UDP.

You can test for the network packet size using the tracepath command: from the client machine, just type tracepath server 2049 and the path MTU should be reported at the bottom. You can then set the MTU on your network card equal to the path MTU, by using the MTU option to ifconfig, and see if fewer packets get dropped. See the ifconfig man pages for details on how to reset the MTU.

In addition, netstat -s will give the statistics collected for traffic across all supported protocols. You may also look at /proc/net/snmp for information about current network behavior; see the next section for more details.

5.3. Overflow of Fragmented Packets

Using an rsize or wsize larger than your network's MTU (often set to 1500, in many networks) will cause IP packet fragmentation when using NFS over UDP. IP packet fragmentation and reassembly require a significant amount of CPU resource at both ends of a network connection. In addition, packet fragmentation also exposes your network traffic to greater unreliability, since a complete RPC request must be retransmitted if a UDP packet fragment is dropped for any reason. Any increase of RPC retransmissions, along with the possibility of increased timeouts, are the single worst impediment to performance for NFS over UDP.

Packets may be dropped for many reasons. If your network topography is complex, fragment routes may differ, and may not all arrive at the Server for reassembly. NFS Server capacity may also be an issue, since the kernel has a limit of how many fragments it can buffer before it starts throwing away packets. With kernels that support the /proc filesystem, you can monitor the files /proc/sys/net/ipv4/ipfrag_high_thresh and /proc/sys/net/ipv4/ipfrag_low_thresh.
Once the number of unprocessed, fragmented packets reaches the number specified by ipfrag_high_thresh (in bytes), the kernel will simply start throwing away fragmented packets until the number of incomplete packets reaches the number specified by ipfrag_low_thresh.

Another counter to monitor is IP: ReasmFails in the file /proc/net/snmp; this is the number of fragment reassembly failures. If it goes up too quickly during heavy file activity, you may have a problem.

5.4. NFS over TCP

A new feature, available for both 2.4 and 2.5 kernels but not yet integrated into the mainstream kernel at the time of this writing, is NFS over TCP. Using TCP has a distinct advantage and a distinct disadvantage over UDP. The advantage is that it works far better than UDP on lossy networks. When using TCP, a single dropped packet can be retransmitted, without the retransmission of the entire RPC request, resulting in better performance on lossy networks. In addition, TCP will handle network speed differences better than UDP, due to the underlying flow control at the network level.

The disadvantage of using TCP is that it is not a stateless protocol like UDP. If your server crashes in the middle of a packet transmission, the client will hang and any shares will need to be unmounted and remounted.

The overhead incurred by the TCP protocol will result in somewhat slower performance than UDP under ideal network conditions, but the cost is not severe, and is often not noticeable without careful measurement. If you are using gigabit ethernet from end to end, you might also investigate the usage of jumbo frames, since the high speed network may allow the larger frame sizes without encountering increased collision rates, particularly if you have set the network to full duplex.

5.5. Timeout and Retransmission Values

Two mount command options, timeo and retrans, control the behavior of UDP requests when encountering client timeouts due to dropped packets, network congestion, and so forth. The -o timeo option allows designation of the length of time, in tenths of seconds, that the client will wait until it decides it will not get a reply from the server, and must try to send the request again. The default value is 7 tenths of a second. The -o retrans option allows designation of the number of timeouts allowed before the client gives up, and displays the Server not responding message. The default value is 3 attempts. Once the client displays this message, it will continue to try to send the request, but only once before displaying the error message if another timeout occurs. When the client reestablishes contact, it will fall back to using the correct retrans value, and will display the Server OK message.

If you are already encountering excessive retransmissions (see the output of the nfsstat command), or want to increase the block transfer size without encountering timeouts and retransmissions, you may want to adjust these values. The specific adjustment will depend upon your environment, and in most cases, the current defaults are appropriate.

5.6. Number of Instances of the NFSD Server Daemon

Most startup scripts, Linux and otherwise, start 8 instances of nfsd. In the early days of NFS, Sun decided on this number as a rule of thumb, and everyone else copied. There are no good measures of how many instances are optimal, but a more heavily-trafficked server may require more. You should use at the very least one daemon per processor, but four to eight per processor may be a better rule of thumb.
If you are using a 2.4 or higher kernel and you want to see how heavily each nfsd thread is being used, you can look at the file /proc/net/rpc/nfsd. The last ten numbers on the th line in that file indicate the number of seconds that the thread usage was at that percentage of the maximum allowable. If you have a large number in the top three deciles, you may wish to increase the number of nfsd instances. This is done upon starting nfsd using the number of instances as the command line option, and is specified in the NFS startup script (/etc/rc.d/init.d/nfs on Red Hat) as RPCNFSDCOUNT. See the nfsd(8) man page for more information.

5.7. Memory Limits on the Input Queue

On 2.2 and 2.4 kernels, the socket input queue, where requests sit while they are currently being processed, has a small default size limit (rmem_default) of 64k. This queue is important for clients with heavy read loads, and servers with heavy write loads. As an example, if you are running 8 instances of nfsd on the server, each will only have 8k to store write requests while it processes them. In addition, the socket output queue - important for clients with heavy write loads and servers with heavy read loads - also has a small default size (wmem_default). Several published runs of the NFS benchmark SPECsfs specify usage of a much higher value for both the read and write value sets, [rw]mem_default and [rw]mem_max. You might consider increasing these values to at least 256k. The read and write limits are set in the proc file system using (for example) the files /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max. The rmem_default value can be increased in three steps; the following method is a bit of a hack but should work and should not cause any problems:
- Increase the size listed in the file.
- Restart NFS (for example, on Red Hat systems, via the nfs init script).
- You might return the size limits to their normal size in case other kernel systems depend on it.
This last step may be necessary because machines have been reported to crash if these values are left changed for long periods of time.

5.8. Turning Off Autonegotiation of NICs and Hubs

If network cards auto-negotiate badly with hubs and switches, and ports run at different speeds, or with different duplex configurations, performance will be severely impacted due to excessive collisions, dropped packets, etc. If you see excessive numbers of dropped packets in the nfsstat output, or poor network performance in general, try playing around with the network speed and duplex settings. If possible, concentrate on establishing a 100BaseT full duplex subnet; the virtual elimination of collisions in full duplex will remove the most severe performance inhibitor for NFS over UDP. Be careful when turning off autonegotiation on a card: the hub or switch that the card is attached to will then resort to other mechanisms (such as parallel detection) to determine the duplex settings, and some cards default to half duplex because it is more likely to be supported by an old hub. The best solution, if the driver supports it, is to force the card to negotiate 100BaseT full duplex.

5.9. Synchronous vs. Asynchronous Behavior in NFS

The default export behavior for both NFS Version 2 and Version 3 protocols, used by exportfs in nfs-utils versions prior to Version 1.11 (the latter is in the CVS tree, but not yet released in a package, as of January, 2002) is "asynchronous".
This default permits the server to reply to client requests as soon as it has processed the request and handed it off to the local file system, without waiting for the data to be written to stable storage. This is indicated by the async option denoted in the server's export list. It yields better performance at the cost of possible data corruption if the server reboots while still holding unwritten data and/or metadata in its caches. This possible data corruption is not detectable at the time of occurrence, since the async option instructs the server to lie to the client, telling the client that all data has indeed been written to the stable storage, regardless of the protocol used.

In order to conform with "synchronous" behavior, used as the default for most proprietary systems supporting NFS (Solaris, HP-UX, RS/6000, etc.), and now used as the default in the latest version of exportfs, the Linux Server's file system must be exported with the sync option. Note that specifying synchronous exports will result in no option being seen in the server's export list. You can export a couple of file systems to everyone, using slightly different options, and then look at what the exported file system parameters look like (see the sketch after Section 5.10). If your kernel is compiled with the /proc filesystem, then the file /proc/fs/nfs/exports will also show the full list of export options.

When synchronous behavior is specified, the server will not complete (that is, reply to the client) an NFS version 2 protocol request until the local file system has written all data/metadata to the disk. The server will complete a synchronous NFS version 3 request without this delay, and will return the status of the data in order to inform the client as to what data should be maintained in its caches, and what data is safe to discard. There are 3 possible status values, defined in an enumerated type, nfs3_stable_how, in include/linux/nfs.h. The values, along with the subsequent actions taken due to these results, are as follows:
- NFS_UNSTABLE - Data/Metadata was not committed to stable storage on the server, and must be cached on the client until a subsequent client commit request assures that the server does send data to stable storage.
- NFS_DATA_SYNC - Metadata was not sent to stable storage, and must be cached on the client. A subsequent commit is necessary, as is required above.
- NFS_FILE_SYNC - No data/metadata need be cached, and a subsequent commit need not be sent for the range covered by this request.

In addition to the above definition of synchronous behavior, the client may explicitly insist on total synchronous behavior, regardless of the protocol, by opening all files with the O_SYNC option. In this case, all replies to client requests will wait until the data has hit the server's disk, regardless of the protocol used (meaning that, in NFS version 3, all requests will be NFS_FILE_SYNC requests, and will require that the Server returns this status). In that case, the performance of NFS Version 2 and NFS Version 3 will be virtually identical.

If, however, the old default async behavior is used, the O_SYNC option has no effect at all in either version of NFS, since the server will reply to the client without waiting for the write to complete.
In that case the performance differences between versions will also disappear.

Finally, note that, for NFS version 3 protocol requests, a subsequent commit request from the NFS client at file close time, or at fsync() time, will force the server to write any previously unwritten data/metadata to the disk, and the server will not reply to the client until this has been completed, as long as sync behavior is followed. If async is used, the commit is essentially a no-op, since the server once again lies to the client, telling the client that the data has been sent to stable storage. This again exposes the client and server to data corruption, since cached data may be discarded on the client due to its belief that the server now has the data maintained in stable storage.

5.10. Non-NFS-Related Means of Enhancing Server Performance

In general, server performance and server disk access speed will have an important effect on NFS performance. Offering general guidelines for setting up a well-functioning file server is outside the scope of this document, but a few hints may be worth mentioning:
- If you have access to RAID arrays, use RAID 1/0 for both write speed and redundancy; RAID 5 gives you good read speeds but lousy write speeds.
- A journalling filesystem will drastically reduce your reboot time in the event of a system crash. Currently, ext3 will work correctly with NFS version 3. In addition, Reiserfs version 3.6 will work with NFS version 3 on 2.4.7 or later kernels (patches are available for previous kernels). Earlier versions of Reiserfs did not include room for generation numbers in the inode, exposing the possibility of undetected data corruption during a server reboot.
- Additionally, journalled file systems can be configured to maximize performance by taking advantage of the fact that journal updates are all that is necessary for data protection. One example is using ext3 with data=journal so that all updates go first to the journal, and later to the main file system. Once the journal has been updated, the NFS server can safely issue the reply to the clients, and the main file system update can occur at the server's leisure. The journal in a journalling file system may also reside on a separate device such as a flash memory card so that journal updates normally require no seeking. With only rotational delay imposing a cost, this gives reasonably good synchronous IO performance. Note that ext3 currently supports journal relocation, and ReiserFS will (officially) support it soon. The Reiserfs tool package found at ftp:///pub/reiserfsprogs/reiserfsprogs-3.x.0k.tar.gz contains the reiserfstune tool, which will allow journal relocation. It does, however, require a kernel patch which has not yet been officially released as of January, 2002.
- Using an automounter (such as autofs or amd) may prevent hangs if you cross-mount files on your machines (whether on purpose or by oversight) and one of those machines goes down. See the Automount Mini-HOWTO for details.
- Some manufacturers (Network Appliance, Hewlett Packard, and others) provide NFS accelerators in the form of Non-Volatile RAM. NVRAM will boost access speed to stable storage up to the equivalent of async access.
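The exportfs commands referred to in Section 5.9 were left out of this copy; the following is a hedged reconstruction with placeholder paths (/usr/local and /tmp), based on the behavior described there:

/usr/sbin/exportfs -o rw,sync *:/usr/local     # export one file system synchronously
/usr/sbin/exportfs -o rw *:/tmp                # and one with the old async default
# list the resulting export parameters; per Section 5.9, the sync export shows
# no extra option, while the async export is flagged with "async"
/usr/sbin/exportfs -v
cat /proc/fs/nfs/exports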
Linux NFS Server Performance Optimization
NFS (Network File System) is a component of distributed computing systems that allows remote file systems to be shared and mounted across heterogeneous networks. NFS was developed by Sun and has become a standard for file services (RFC 1094, RFC 1813). Its greatest strength is that it lets computers running different operating systems share data over the network, so it can also be regarded as a file server. An NFS file server is one of the most common network services on Linux. Although its rules are simple, there is a lot behind them. An NFS server can be viewed as a file server: it lets your PC mount, over the network, the file systems shared by a remote NFS server into its own system, and to the client, using the remote NFS files feels just like using local files.
I. Choosing the Hardware
As computer technology has developed, I/O devices, led by the hard disk, have an ever greater influence on overall system performance. For communication servers (messaging/e-mail/VOD), fast I/O is key and disk I/O throughput is the main bottleneck. Data warehouses (large-scale commercial data storage, cataloguing, indexing, data analysis, high-speed business computing, and so on) need good network and disk I/O throughput. Database (ERP/OLTP) servers need strong CPU processing power together with good disk I/O throughput. The main bottlenecks for NFS network file system performance are disk I/O performance and network bandwidth.
SCSI (Small Computer System Interface) technology is widely used in network servers and workstations that demand high performance, and it has become the standard interface choice for network servers. Speeds have grown from 5 MBps with the original SCSI-1 to 320 MBps in 2005. The internal transfer rate is the decisive factor in evaluating a drive's overall performance; hard disk data transfer rates are divided into internal and external rates. The external transfer rate, also called the burst data transfer rate or interface transfer rate, is the speed at which data is delivered from the drive's cache. Because the internal transfer rate is lower than the external transfer rate, only the internal transfer rate is a true measure of drive performance. SCSI drive technology has a greater advantage in internal transfer rate.
Introduction to NetApp Storage and Backup Concepts
01 Unified Storage
NetApp provides a unified storage platform that supports multiple data access methods, including block, file, and object.

02 Data Protection
NetApp provides comprehensive data protection features, including snapshots, clones, and remote replication.

03 High Performance
NetApp storage systems use advanced caching, multipath I/O, and related technologies to deliver high-performance data access.
Storage Architecture and Components

Storage Controller
The storage controller is the core component of a storage system; it handles data access requests and manages the storage device.

Why Data Backup Matters
Data is a key enterprise asset; any data loss can cause business interruption, financial loss, and reputational damage. Regular data backup is therefore essential for business continuity and data security.

Challenges
As enterprise data volumes keep growing, data backup faces challenges such as high storage cost, short backup windows, and the risk of failed restores.

Traditional Backup Methods and Their Limitations

Traditional Backup Methods
Common traditional backup methods include full, incremental, and differential backups. A full backup copies all data, while incremental and differential backups copy only the data that has changed since the last full or incremental backup, respectively.

Unified Storage Solution

Flexibility and Efficiency
NetApp's unified storage solution is designed to give enterprises flexible and efficient management of storage resources. It supports block, file, and object storage, meeting the needs of different applications and workloads.

Data Protection and Recovery
The unified storage solution provides comprehensive data protection and recovery features, including snapshots, clones, and remote replication, to keep enterprise data secure and available.

Simplified Management and Optimized Performance

Hard Disk Drives
Hard disk drives are the physical media on which data is stored; they typically connect to the storage controller through SAS or SATA interfaces.

Cache
The cache is high-speed memory in the storage controller used to stage data and accelerate data access.

Network Interfaces
Network interfaces connect the storage system to servers and other network devices, typically carrying data over protocols such as FC, iSCSI, or NFS.
NetApp NAS Storage Linux NFS Tuning Guide
© 2005 Network Appliance. All rights reserved.

I. Introduction
More and more Network Appliance customers are recognizing the value of Linux in their enterprises. In the past, the Linux NFS client lagged behind the rest of Linux in providing the levels of stability, performance, and scalability appropriate for enterprise workloads. Recently, however, the NFS client has made great progress, and its performance and its ability to operate under poor network conditions continue to improve. Technical support for the Linux NFS client has also matured, although it has not yet reached the level required of a critical part of enterprise infrastructure. Because more and more customers are coming to rely on the Linux NFS client, we provide this document to help close the gap. It covers two topics: first, the NetApp storage system parameter settings, and second, the recommended Linux NFS client parameter settings.
II. NetApp Storage System Settings
The Data ONTAP operating system developed by Network Appliance is tuned for NFS service by default, so we only need to confirm that a few common parameters and features are set appropriately for the customer's application.

First check the volume options, as shown below:

nas> vol options vol_name
nosnap=on, nosnapdir=on, minra=off, no_atime_update=on, nvfail=off,
snapmirrored=off, create_ucode=on, convert_ucode=on, maxdirsize=10470,
fs_size_fixed=off, guarantee=volume, svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,
fractional_reserve=100, extent=off, try_first=volume_grow

Because Snapshot copies consume some system resources, if you are certain Snapshots will not be used, set nosnap=on and nosnapdir=on; if the volume will use Snapshots as a backup tool, set both parameters to off.
To change a volume option, use a command like the following:

nas> vol options vol_name nosnap on

To disable Snapshots from the FilerView graphical interface, navigate as shown in the original figure: cnidc-storage -> Volumes -> Snapshots -> Manage.

If the storage is organized with aggregates (an aggregate is defined as a pool of disks from which space is allocated to volumes), check the aggregate options as follows:

nas> aggr options aggr_name
root, diskroot, nosnap=on, raidtype=raid_dp, raidsize=16,
snapmirrored=off, resyncsnaptime=60, fs_size_fixed=off,
snapshot_autodelete=on, lost_write_protect=on

Here nosnap=on likewise means the aggregate does not use Snapshots.

The NFS options of the storage system can normally be left at their defaults; the listing below can be used for comparison and confirmation during installation and tuning:

nas> options nfs
nfs.access *
nfs.export.allow_provisional_access on
nfs.export.auto-update on
nfs.export.harvest.timeout 1800
nfs.export.neg.timeout 3600
nfs.export.pos.timeout 36000
nfs.hide_snapshot off
nfs.ifc.xmt.high 16
nfs.ifc.xmt.low 8
nfs.kerberos.enable off
nfs.locking.check_domain on
nfs.mount_rootonly on
nfs.mountd.trace off
group.strict off
nfs.notify.carryover on
nfs.per_client_stats.enable off
nfs.require_valid_mapped_uid off
nfs.response.trace off
nfs.response.trigger 60
nfs.rpcsec.ctx.high 0
nfs.rpcsec.ctx.idle 360
nfs.tcp.enable on
nfs.udp.enable on
nfs.udp.xfersize 32768
nfs.v2.df_2gb_lim off
nfs.v3.enable on
nfs.v4.acl.enable off
nfs.v4.enable off
nfs.v4.id.domain
nfs.v4.read_delegation off
nfs.v4.write_delegation off
nfs.webnfs.enable off
nfs.webnfs.rootdir XXX
nfs.webnfs.rootdir.set off

As the listing above shows, the command "options nfs" lists the current NFS-related settings of the NetApp storage system.
For example, the following command changes an NFS parameter setting: options nfs.webnfs.enable off.

To confirm that NFS over TCP and NFS v3 are enabled, enter the following commands:
options nfs.tcp.enable on
options nfs.v3.enable on

III. Recommended Linux NFS Client Settings
If no mount options are specified, the Linux mount command (or the automounter) automatically selects the following defaults:

mount -o rw,fg,vers=2,udp,rsize=4096,wsize=4096,hard,intr,timeo=7,retrans=5

In most environments NFS will work correctly without changing these defaults. Almost all NFS servers support NFS versions 2 and 3 over UDP. The rsize and wsize values are relatively small because some network environments fragment large UDP datagrams, and lost fragments can hurt performance. To accommodate slower servers and networks, the default RPC retransmission timeout is 0.7 seconds.
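Before tuning, it can help to confirm which options a Linux client has actually negotiated; two standard checks (shown as a sketch, using nfs-utils and the proc filesystem):

nfsstat -m          # lists each NFS mount together with its effective mount options
cat /proc/mounts    # shows the options the kernel recorded for every mounted filesystem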
When a server mounts the filer over UDP, the following options can be used:

mount -t nfs -o rw,fg,vers=3,udp,timeo=4,retrans=5,rsize=32768,wsize=32768,hard,nointr,nolock,nocto,actimeo=600

When a server mounts the filer over TCP, the following options can be used:

mount -t nfs -o rw,fg,vers=3,tcp,timeo=600,retrans=2,rsize=32768,wsize=32768,hard,nointr,nolock,nocto,actimeo=600

In this example we use the fg option to ensure that the NFS files are available before the FTP or HTTP server starts. The ro option means the FTP or HTTP server will never write to the files; the rw option means it will. File locking is not needed here, so the nolock option is used to eliminate NLM overhead. The nocto option helps reduce the number of GETATTR and LOOKUP operations, at the cost of weaker cache consistency with other clients. The FTP server will learn about changes to files on the server once its attribute cache times out (usually after about one minute). Increasing the attribute cache timeout also reduces the attribute cache revalidation rate.
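For reference, the TCP mount above written as an /etc/fstab entry; the server name filer and the export path /vol/vol1 are placeholders for your environment:

filer:/vol/vol1  /mnt/nfs  nfs  rw,fg,vers=3,tcp,timeo=600,retrans=2,rsize=32768,wsize=32768,hard,nointr,nolock,nocto,actimeo=600  0 0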
The following tuning applies to specific Linux releases.

Increasing the transport socket buffers that your client uses for NFS traffic helps reduce resource contention on the client, reduces performance variance, and increases maximum data and operation throughput. In Linux kernels after 2.4.20 the client selects the optimal socket buffer sizes automatically, so the following steps are not required:
1. Log in to the client as root.
2. cd to /proc/sys/net/core.
3. Enter the command "echo 262143 > rmem_max".
4. Enter the command "echo 262143 > wmem_max".
5. Enter the command "echo 262143 > rmem_default".
6. Enter the command "echo 262143 > wmem_default".
7. Remount the NFS file systems on the client.
This is especially useful for NFS over UDP and when using Gigabit Ethernet. You should consider adding it to the system startup scripts that run before the system mounts NFS file systems. The values shown are the largest safe socket buffer sizes we have tested. On clients with less than 16 MB of memory, keep the default socket buffer sizes to conserve memory.

Red Hat releases after 7.2 include a file called /etc/sysctl.conf; changes added to this file are applied at every reboot. Add the following lines to the /etc/sysctl.conf file on your Red Hat system:

net.core.rmem_max = 262143
net.core.wmem_max = 262143
net.core.rmem_default = 262143
net.core.wmem_default = 262143

Another TCP improvement
The following settings help reduce the workload on the client and the filer when running NFS over TCP:
1. echo 0 > /proc/sys/net/ipv4/tcp_sack
2. echo 0 > /proc/sys/net/ipv4/tcp_timestamps
These settings disable optional TCP features, which saves a little processing and a small amount of network bandwidth.
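To make the two TCP settings above persistent across reboots as well, the corresponding /etc/sysctl.conf lines would be the following (the sysctl names map directly to the /proc paths above):

net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0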
If you build your own kernel, be sure CONFIG_SYNCOOKIES is disabled. SYN cookies add extra processing at both ends of a socket and slow down TCP connections. Some Linux distributors ship kernels with SYN cookies enabled.