Rate Windows for Efficient Network and IO Throttling

合集下载

Avamar备份说明书

Avamar备份说明书

Balaji PanchanathanEMCJayalakshmi SureshEMCPravin Ashok KumarEMCEFFICIENT AVAMAR BACKUPS OVER WAN AND SIZING LINKSTable of ContentsIntroduction (3)New in Avamar 7.1 for WAN (3)Sizing of WAN Links (4)Configuration (5)Type of WAN simulations and their configurations (8)How to measure the traffic on the appliance (9)Performance test results over WAN (10)DTLT (12)AER (12)Observations (13)Recommendations (13)Conclusion (14)Appendix (15)References (20)Disclaimer: The views, processes or methodologies published in this article are those of the authors. They do not necessarily reflect EMC Corporation’s views, processes or methodologies.IntroductionThis article will focus on four things:1. New features in Avamar® 7.1 which help WAN backups2. How to estimate the bandwidth required for WAN links based on the application anddata size. This depends on the dedupe rate for the application and use of a WANemulator to measure it (Linux, open source tools, i.e. netem, etc.)3. Performance number for desktop/laptop (DTLT), Avamar Extended Retention (AER),Data Domain® (DD) with different encryption strengths4. Broad recommendations for the customerNew in Avamar 7.1 for WANStarting with Avamar 7.1, WAN is a supported configuration with Data Domain as the target storage device. With this support, metadata can be stored in Avamar and the data can be moved across the WAN to the Data Domain device.A salient feature is support of a 60-minute outage of the WAN Link and support for over-WAN backup to Data Domain as the target.Figure 1 depicts a type of network configuration that is supported.Figure 1This support provides customers the flexibility to deploy AVE’s in each remote office and have one Data Domain in a central location. Optionally, the customer can have one central Avamar server and deploy Data Domain virtual edition in each branch office.Sizing of WAN LinksThe customer has to estimate the size of the WAN links required for backing up their data. To do so, open source tools like netem, available on any Avamar server or Linux machine, can be used.Customers can use the results of the test shown below to decide where Avamar and Data Domain needs to be deployed, i.e. in a remote location or in a central office.The following set of test equipment will enable customers to easily perform the test decide for themselves.∙ESX Server - host∙Avamar Server – AVE virtual edition∙Data Domain – Virtual Edition∙Linux – SLES 11 SP1∙Windows client∙Linux ClientAvamar server virtual edition, Data Domain virtual edition, and Linux WAN emulator can be installed in a single ESX host.The network diagram will look like that which is shown in Figure 2.Figure 2Configuration∙Client should be in the same network as that of one interface of the network appliance ∙Server should be in the same network as that of one interface of the network appliance ∙Server and client should be on different network∙Data Domain should be on a different network on the same ESX hostFollow the steps below:1.ESX Configuration: Step to add new network to the ESX.o Log in to the ESX host using vSphere Cliento Click on Configurationo Click on Networkingo Click on Add networking (which will be displayed on the right side)o Select Virtual machineo Use the network label as VM Network 1o Repeat the above steps again and add VM Network 2, 3, 4, and 52.ESX VM Appliance Configuration: We need to add four interfaces to the SLES machine.The interfaces can be added by following the steps below.o Log in to the ESX host using vSphere Cliento Deploy the VM using the vmdk fileo Add disk capacityo Power Ono Right click on the VMo Edit Settingso Click on ADDo Select Ethernet Adaptero Select the VM network (for the second interface, select VM network 1 (by default first interface will be added). For the third interface, select VM Network 2)Shown below is the sample snapshot after the interfaces are added.3.ESX Configuration for client and server:o Log in in to the ESX host using vSphere Cliento Right click on the Client VMo Edit Settingso Click on the Network Adaptor and then change the label to VM Network 1o In a similar way, click on the Server VM (AVE) and then change the label to VM Network 24.IP Address on the network appliance:o Give the command ifconfig –a and get the list of interfaces (ex: eth0, eth2, eth5, etc. and then configure the IPs using the commands below (replacing theinterface respectively)i. Ifconfig eth0 10.110.209.230 netmask 255.255.252.0ii. Ifconfig eth1 192.168.2.11 netmask 255.255.255.0iii. Ifconfig eth2 192.168.1.3 netmask 255.255.255.0iv. Ifconfig eth3 192.168.3.1 netmask 255.255.255.0After configuring the IP address, the configs can be checked by using the ifconfigcommand.5.Routing-related configo Sysctl –p net.ipv4.ip_forward=1o On Client sidei. route add –net 192.168.1.0 netmask 255.255.255.0 gw 192.168.2.11ii. route add –net 192.168.3.0 netmask 255.255.255.0 gw 192.168.2.11b. On Server sidei. route add –net 192.168.2.0 netmask 255.255.255.0 gw 192.168.1.3ii. route add –net 192.168.3.0 netmask 255.255.255.0 gw 192.168.1.3c. On Data Domain sidei. route add –net 192.168.1.0 netmask 255.255.255.0 gw 192.168.3.1ii. route add –net 192.168.2.0 netmask 255.255.255.0 gw 192.168.3.1d. Route-related config can be checked using route –n commandNote: In the above sample ifconfig and route commands, the ipaddress/netmask should be replaced by your ip/netmask, respectivelyType of WAN simulations and their configurations1. Drop, delay, out-of-order(TCP Level)2. Bandwidth throttlework impairments can be done on both client/server interfacesCommands to simulate Network impairments:After executing the command, we can check whether those settings are in effect using the tc filter show dev <interface>command.How to measure the traffic on the applianceThe iptraf tool, installed and running the iptraf command, will help monitor traffic on the appliance.Follow the steps below:∙On the command line, run the command iptraf∙Enter a key to continue∙Select General Interface statistics∙Select a file to which you want to log the statsThe screen will display the traffic flowing through each of the interfaces.Below is the snapshot of how it will look after following the steps above.Performing the test set up above and using those commands, customers can simulate different WAN conditions, i.e. Drop rate, bandwidth throttle, etc.Customers can also disable WAN conditions on Avamar® and have only WAN condition for Data Domain (and vice versa), enabling them to check which application offers better results and decide on the architecture .Performance test results over WANOur testing on filesystem backup over WAN delivered the results below.Bandwidth Throttle results: With 1MbpsWith 10MbpsResults for different WAN profiles we have tested in desktop laptop environment (DTLT) are shown below.AERThe Avamar Extended Retention (AER) feature is used for Avamar backup retention to tape and restore those retained backups to clients. Formerly called Direct-to-Tape Out (DTO), it is an archiving solution for Avamar.Main tasks involved in AER are∙Exports (Identifying the backups and pushing it to tape libraries which is attached to AER Node),∙Imports (Moves the backup from Tape to AER Node (physical storage).∙Restore (Registering the client to AER and restoring the respective backups to Client).Observations∙Impact of delay on restore is greater compared to backup in exports. Additionally, there is 50% greater impact on restore compared to backup.∙Impact of bandwidth throttle is greater in restore, at least 10x worse. These things should be taken into account when the customer wants to restore (import) from AERnode.Recommendations∙WAN throughput different between medium and high encryption is minimal∙Backup window required for different clients/applications∙Test with different CPU throttles and test whether CPU usage has any impact on WAN throughput. Our assumption is that the bottleneck is only the network and thisassumption needs to be validated.Broad recommendations based on the tests conducted∙Data Domain performs better if the delay is less, in the range of 5-100ms. If the delay is 500ms, Avamar performance is much better, by at least 2x. However, with bandwidthless than 1Mbps, even with 500ms delay, Data Domain is better.∙The impact of delay when the available bandwidth is 1Mbps is much less, roughly a 20% drop in performance for Avamar and 5% for Data Domain when the delay increases from 5ms to 500ms. Hence, with bandwidth throttle, it is better to use Data Domain as storage target rather than Avamar.ConclusionPerformance numbers in WAN conditions are given in this article. The same can be used for sizing the WAN links. Customers can also easily test their numbers using open source tools like netem/tc, etc. This will help customers avoid surprises and evaluate the different products available to select the best product. This set of WAN tools cannot only be used with Avamar but also with other backup products to select the right product and right WAN size.AppendixBelow is the bandwidth script which can be used on the Linux SLES box (WAN Emulator). Using the script, bandwidth throttle can be applied and tests can be conducted.#!/bin/bash## tc uses the following units when passed as a parameter.# kbps: Kilobytes per second# mbps: Megabytes per second# kbit: Kilobits per second# mbit: Megabits per second# bps: Bytes per second# Amounts of data can be specified in:# kb or k: Kilobytes# mb or m: Megabytes# mbit: Megabits# kbit: Kilobits# To get the byte figure from bits, divide the number by 8 bit### Name of the traffic control command.TC=tc# The network interface we're planning on limiting bandwidth.IF=eth5 # Interface4# Download limit (in mega bits)DNLD=10mbit # DOWNLOAD Limit# Upload limit (in mega bits)UPLD=10mbit # UPLOAD Limitit# IP address of the machine we are controllingIP=192.168.4.12 # Host IP# Filter options for limiting the intended interface.U32="$TC filter add dev $IF protocol ip parent 1:0 prio 1 u32" start() {# We'll use Hierarchical Token Bucket (HTB) to shape bandwidth. # For detailed configuration options, please consult Linux man# page.$TC qdisc add dev $IF root handle 1: htb default 30$TC class add dev $IF parent 1: classid 1:1 htb rate $DNLD $TC class add dev $IF parent 1: classid 1:2 htb rate $UPLD$U32 match ip dst $IP/32 flowid 1:1$U32 match ip src $IP/32 flowid 1:2# The first line creates the root qdisc, and the next two lines# create two child qdisc that are to be used to shape download# and upload bandwidth.## The 4th and 5th line creates the filter to match the interface. # The 'dst' IP address is used to limit download speed, and the # 'src' IP address is used to limit upload speed.}stop() {# Stop the bandwidth shaping.$TC qdisc del dev $IF root}restart() {# Self-explanatory.stopsleep 1start}show() {# Display status of traffic control status. $TC -s qdisc ls dev $IF}case "$1" instart)echo -n "Starting bandwidth shaping: " startecho "done";;stop)echo -n "Stopping bandwidth shaping: " stopecho "done";;restart)echo -n "Restarting bandwidth shaping: " restartecho "done";;show)echo "Bandwidth shaping status for $IF:" showecho "";;*)pwd=$(pwd)echo "Usage: tc.bash {start|stop|restart|show}" ;;esacexitReferences/collaborate/workgroups/networking/netem/linux/man-pages/man8/tc-netem.8.html/index.php/tag/traffic-shaping//2.2/manual.htmlhttp://www.slashroot.in/linux-iptraf-and-iftop-monitor-and-analyse-network-traffic-and-bandwidth https:///watch?v=Y5un7JTGp3ohttps:///jterrace/1895081/man/8/ifconfig/od/commands/l/blcmdl8_route.htm/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-networking-guide.pdf/tools/traffic-control.phpEMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO RESPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license。

企业核心骨干网交换路由器enterasys Networks X - Pedition 8600 产

企业核心骨干网交换路由器enterasys Networks X - Pedition 8600 产

Industry-Leading Performance and Control at the Network CoreEnterasys Networks’ award-winning X-Pedition family represents a new generation of switch routing solutions engineered to support today’s rapidly expanding enterprises. Built particularly for the backbone, the 16-slot X-Pedition 8600 switch router combines wire-speed performance at gigabit rates, pinpoint control of application flows, and superior routing capacity to ensure high availability of internal and external networks including business-critical web content, ERP applications, voice/video/data, e-commerce and more. The high-capacity X-Pedition 8600 delivers full-function, wire-speed IP/IPX routing—both unicast (IP:RIP ,OSPF , BGP , IPX:RIP) and multicast (IGMP , DVMRP , PIM-DM, PIM-SM). Powered by a non-blocking 32 Gigabit per second switching fabric, the X-Pedition 8600’s throughput exceeds 30 million packets per second and can be configured with up to 240 10/100 ports or 60 Gigabit Ethernet ports.Enterprise backbone requirements are met through massive table capacity and redundancy. The X-Pedition is also the industry’s first Gigabit switching router with WAN capabilities. The WAN interfaces extend the benefits of the X-Pedition to remote locations, providing network administrators application-level control from the desktop to the WAN edge, all at wire speed.The unique X-Pedition architecture enables you to route or switch packets based on the information in Layer 4 or on the traditional source-destination information in Layer 3. This application-level control allows the X-Pedition to guarantee security and end-to-end Quality of Service (QoS) while maintaining wire-speed throughput. QoS policies may encompass all the applications in the network, groups of users, or relate specifically to a single host-to-host application flow.•High-capacity, multilayer switch router for enterprise backbones—Full-function IP/IPX routing for unicast and multicast traffic—32 Gbps non-blocking switching fabric; 30 Mpps routing throughput —Up to 60 Gigabit Ethernet ports; up to 240 10/100 ports—Built-in support for 10 Gig, optical networks and emerging technologies •Full application support from the desktop to the WAN—Wire-speed Layer 4 application flow switching—Maintains wire-speed performance with all other features enabled —Supports HSSI, FDDI, ATM and serial WAN interfaces —Ready now for multicast voice and video applications•Pinpoint control to prioritize applications, improve e-business operation—Wire-speed, application-level QoS for end-to-end reliability —Application load balancing and content verification—Supports DiffServ, Weighted Fair Queuing and Rate Limiting (CAR)•Superior fault tolerance to ensure 24x7 network availability—Redundant power supplies and CPUs to protect from failures —Load sharing to enhance performance through redundant links•Advanced security features for greater peace of mind—Secure Harbour™ framework protects against internal and external abuse —Wire-speed Layer 2/3/4 security filters•Standards-based, intuitive management for fast, easy troubleshooting—Full support for RMON and RMON 2—Comprehensive SNMP-based management via NetSight™ AtlasThe X-Pedition 8600 is easily configured and managed through NetSight Atlas network management software,which includes plug-in applications for ACL, inventory and policy management. The X-Pedition Switch Router is fully standards-based and completely interoperable with existing networking equipment.Guaranteeing Quality of ServiceWith global enterprise more dependent than ever on the applications that support their business—from e-commerce and SAP to emerging multicast video applications—quality of service (QoS) becomes a top priority.QoS refers to a set of mechanisms for guaranteeing levels of bandwidth, maximum latency limits, and controlled interpacket timing. Enterasys’ X-Pedition 8600 delivers true standards-based QoS by integrating wire-speed Layer 4 switching with policy-based traffic classification and prioritization. Because Enterasys’ custom ASICs can read deeper into the packet, all the way to Layer 4, traffic can be identified, classified, and prioritized at the application level.Unmatched Performance with Wire-Speed Routing and SwitchingThe X-Pedition 8600 minimizes network congestion by routing more than 30 million packets per second (pps). The 32 Gbps switching fabric in the X-Pedition delivers full-function unicast and multicast wire-speed IP/IPX routing at gigabit speeds on all ports.The X-Pedition 8600’s custom ASICs switch or route traffic at wire speed based on Layer 2, Layer 3 and Layer 4 information. These ASICs also store QoS policies and security filters, providing wire-speed performance even when QoS and security filters are enabled. As a result, network managers no longer need to make compromises when it comes to performance and functionality; the X-Pedition delivers both.Application-Level QoS and Access Control—at Wire SpeedBased on Layer 2, Layer 3 and Layer 4 information, the X-Pedition allows network managers to identify traffic and set QoS policies, without compromising wire-speed performance.The X-Pedition can guarantee bandwidth on an application-by-application basis, thereby accommodating high-priority traffic even during peak periods of usage. QoS policies can be broad enough to encompass all the applications in the network, or relate specifically to a single host-to-host application flow.Unlike conventional routers, the X-Pedition’s performance does not degrade when security filters are imple-mented. Wire-speed security, obtained through 20,000 filters, enables network managers to benefit from both performance and security. Filters can be set based on Layer 2, Layer 3 or Layer 4 information, enabling network managers to control access based not only on IP addresses, but also on host-to-host application flows.Wire-Speed Multicast to Support Convergence ApplicationsThe X-Pedition’s switching fabric is capable of replicating packets in hardware, eliminating performance bottlenecks caused by conventional software-based routers. By providing the necessary infrastructure, the X-Pedition turns the network into an efficient multicast medium, supporting Protocol Independent Multicasting-Sparse Mode (PIM-SM), DVMRP and per-port IGMP .Industry-Leading CapacityLarge networks require large table capacities for storing routes, application flows, QoS rules, VLAN information and security filters. The X-Pedition 8600 provides table capacities that are an order of magnitude greater than most other solutions available today, supporting up to 250,000 routes, 4,000,000 application flows and 800,000 Layer 2 MAC addresses.How the X-Pedition Supports QoS•Wire-Speed Routing on Every Port —Removesrouting as the bottleneck and avoids “switch when you can, route when you must”schemes which are often complicated and proprietary •Massive Non-Blocking Backplane —Prevents overloaded output wires from clogging the switching hardware and isolates points of network congestion so that other traffic flows are unaffected•Large Buffering Capacity —Avoids packet loss during transient bursts that exceed output wire capacity •T raffic Classification and Prioritization —Enables policy-based QoS which guarantees throughput and minimizes latency forimportant traffic during times of congestion•Layer 4 Flow Switching —Provides application-level manageability, enabling the implementation of trueend-to-end QoS (e.g., RSVP)•Intuitive QoS Management Interface —Allows powerful QoS policies to beimplemented and maintained quickly and easily•Detailed NetworkInstrumentation —Facilitates network baselining and troubleshooting, delivering insight into the behavior of network trafficFull-function wire-speed IP/IPX routing enables the X-Pedition to scale seamlessly as the network evolves.The chassis-based X-Pedition can be configured with up to 240 10/100 ports or up to 60 Gigabit Ethernet ports. More than 4,000 VLANs, 20,000 security filters and large per-port buffers provide the capacity to handle peak traffic across even the largest enterprise backbones.Comprehensive Management for Easy Deployment, Changes and T roubleshootingVLAN Management —The X-Pedition can be configured to support VLANs based on ports and work managers can use Layer 2 VLANs with 802.1p prioritization and 802.1Q tagging, and can configure VLANs guided wizards within NetSight Atlas management software.Extensive Performance Monitoring —The X-Pedition paves the way for proactive planning of bandwidth growth and efficient network troubleshooting by providing RMON and RMON2 capabilities per port. Easy-to-Use, Java-Based Management —The X-Pedition’s rich functionality is made easy to use through NetSight Atlas, a command console that provides extensive configuration and monitoring of the X-Pedition as well as your entire Enterasys network. NetSight Atlas allows network managers to use any Java-enabled client station across the enterprise to remotely manage the X-Pedition 8600. NetSight Atlas can run on Solaris and Windows NT/2000/XP environments.Why the X-Pedition is a Better Backbone Router•Best-Selling Modular Layer 3Switch Router•Wire-Speed Performance with All Features Enabled •First to Support WAN Interfaces•Part of an Integrated End-to-End Solution•Pinpoint Application Control from the Desktop to the WAN •Multilayer Security Filters Don’t Sacrifice Performance •Award-Winning, Time-T ested Solution•Highly Manageable, Easily ConfigurableX-Pedition, NetSight and Secure Harbour are trademarks of Enterasys Networks. All other products or services mentioned are identified by the trademarks or servicemarks of their respective companies or organizations. NOTE: Enterasys Networks reserves the right to change specifications without notice. Please contact your representative to confirm current specifications.TECHNICAL SPECIFICATIONSPerformanceWire-speed IP/IPX unicast and multicast routing32 Gbps non-blocking switching fabric30 Million packets per second routing and Layer 4 switchingthroughputCapacity240 Ethernet/Fast Ethernet ports (10/100Base-TX or100Base-FX)60 Gigabit Ethernet ports (1000Base-LX or 1000Base-FX)Up to 25,000 routesUp to 4,000,000 Layer 4 application flowsUp to 800,000 Layer 2 MAC addressesUp to 250,000 Layer 3 routesUp to 20,000 security/access control filters3 MB buffering per Gigabit port1 MB buffering per 10/100 port4,096 VLANsPower System120VAC, 6A MaxRedundant CPU and power supplyHot-swappable media modulesPHYSICAL SPECIFICATIONSDimensions48.9 cm (19.25”) x 43.82 cm (17.25”) x 31.12 cm (12.25”)Weight61.75 lb. (28.0 kg)ENVIRONMENTAL SPECIFICATIONSOperating T emperature0°C to 40°C (32°F to 104°F)Relative Humidity5% to 95% noncondensingPROTOCOLS AND STANDARDSIP RoutingRIPv1/v2, OSPF, BGP-4IPX RoutingRIP, SAPMulticast SupportIGMP, DVMRP, PIM-DM, PIM-SMQoSApplication level, RSVPIEEE 802.1pIEEE 802.1QIEEE 802.1d Spanning T reeIEEE 802.3IEEE 802.3uIEEE 802.3xIEEE 802.3zRFC 1213 - MIB-2RFC 1493 - Bridge MIBRFC 1573 - Interfaces MIBRFC 1643 - Ethernet like interface MIBRFC 1163 - A Border Gateway Protocol (BGP)RFC 1267 - BGP-3RFC 1771 - BGP-4RFC 1657 - BGP-4 MIBRFC 1058 - RIP v1RFC 1723 - RIP v2 Carrying Additional InformationRFC 1724 - RIP v2 MIBRFC 1757 - RMONRFC 1583 - OSPF Version 2RFC 1253 - OSPF v2 MIBRFC 2096 - IP Forwarding MIBRFC 1812 - Router RequirementsRFC 1519 - CIDRRFC 1157 - SNMPRFC 2021 - RMON2RFC 2068 - HTTPRFC 1717 - The PPP Multilink ProtocolRFC 1661 - PPP (Point to Point Protocol)RFC 1634 - IPXWANRFC 1662 - PPP in HDLC FramingRFC 1490 - Multiprotocol Interconnect over Frame RelayORDERING INFORMATIONSSR-16X-Pedition 8600 switch router 16-slot base system includingchassis, backplane, modular fan, and a single switch fabricmodule (SSR-SF-16). Requires new CM2 Control ModuleSSR-PS-16Power Supply for the X-Pedition switch router 8600SSR-PS-16-DCDC Power Supply Module for the X-Pedition 8600SSR-SF-16Switch fabric module for the X-Pedition 8600. One moduleships with the base system (SSR-16). Order only if second isrequired for redundancy.SSR-PCMCIAX-Pedition 8600 and 8000 8MB PCMCIA card (ships with SSR-RS-ENT, second required for redundant CM configuration)SSR-CM2-64X-Pedition switch router Control Module with 64 MB memorySSR-CM3-128X-Pedition switch router Control Module with 128 MB memorySSR-CM4-256X-Pedition switch router Control Module with 256 MB memorySSR-MEM-128New CM2 memory upgrade kit (For CM2 series only)SSR-RS-ENTX-Pedition Switch Router Services for L2, L3, L4 Switchingand IP (Ripv2, OSPF) IPX (RIP/SAP) Routing. One requiredwith every chassis, shipped on PC card.© 2002 Enterasys Networks, Inc. All rights reserved. Lit. #9012476-111/02。

国内外智能窗户发展的研究现状

国内外智能窗户发展的研究现状

国内外智能窗户发展的研究现状In recent years, the development of smart windows, both domestically and internationally, has gained significant attention and research efforts. Smart windows, also known as switchable windows or dynamic windows, are designed to regulate the amount of light and heat passing through them, enhancing energy efficiency and providing greater comfort to occupants. This essay will discuss the current research status of smart windows, covering various aspects such as technologies, applications, challenges, and future prospects.One of the key areas of research in smart windows is the development of different technologies to achieve switchable properties. Several technologies have been explored, including electrochromic, thermochromic, photochromic, and suspended particle device (SPD) technologies. Electrochromic windows, for instance, use an electrical current to change the tint of the window, allowing control over the amount of light transmitted.Thermochromic windows, on the other hand, respond to temperature changes, darkening or lightening accordingly. These technologies have shown promising results and are being further refined to improve their performance and durability.The application of smart windows is another area of active research. Smart windows have the potential to be used in various sectors, including residential, commercial, and automotive. In residential buildings, they can help reduce energy consumption by minimizing the need for artificial lighting and heating or cooling systems. In commercial buildings, smart windows can enhance occupant comfort and productivity while reducing energy costs. Additionally, the automotive industry is exploring the integration of smart windows to improve the overall driving experience and energy efficiency of vehicles. Research efforts are focused on optimizing the design and functionality of smart windows for different applications.Despite the progress made in the development of smart windows, there are still challenges that need to beaddressed. One of the main challenges is the cost of production and installation. Currently, smart windows are relatively expensive compared to traditional windows, making their widespread adoption a challenge. Researchers are working towards developing cost-effective manufacturing processes and materials to reduce the overall cost. Another challenge is the durability and longevity of smart windows. The switchable properties of smart windows should be maintained over an extended period, and the windows should be able to withstand environmental factors such as temperature variations and UV exposure. Ongoing research is focused on improving the durability and reliability ofsmart windows.Looking ahead, the future of smart windows appears promising. With advancements in materials science and nanotechnology, researchers are exploring innovative materials and manufacturing techniques to enhance the performance and functionality of smart windows. For example, the integration of nanomaterials and thin-film technology can lead to more efficient and durable smart windows. Furthermore, the Internet of Things (IoT) is expected toplay a significant role in the development of smart windows. By connecting smart windows to a network, users can control and monitor the windows remotely, creating a moreintelligent and responsive environment.In conclusion, the research on smart windows, both domestically and internationally, is progressing rapidly. Various technologies, such as electrochromic and thermochromic, are being explored to achieve switchable properties. The applications of smart windows span across residential, commercial, and automotive sectors, with a focus on energy efficiency and occupant comfort. Challenges such as cost and durability are being addressed through ongoing research. The future of smart windows looks promising, with advancements in materials science and the integration of IoT technology. Overall, smart windows have the potential to revolutionize the way we interact with our built environment, providing energy-efficient and comfortable spaces.。

南京地铁英语作文

南京地铁英语作文

南京地铁英语作文Nanjing Metro, also known as Nanjing Subway, is a rapid transit system serving the city of Nanjing, the capital of Jiangsu Province in China. It is one of the busiest metro systems in the country and has been expanding rapidly in recent years. The metro system is a vital part of thecity's transportation network, providing a convenient and efficient way for residents and visitors to travel around the city.The Nanjing Metro currently consists of 10 lines, with a total length of over 400 kilometers. It connects thecity's major residential areas, commercial districts, and tourist attractions, making it an essential mode of transportation for the city's residents. The metro systemis known for its punctuality and cleanliness, and it is a popular choice for commuters and tourists alike.One of the most impressive aspects of the Nanjing Metro is its commitment to providing a comfortable and convenienttravel experience for passengers. The stations and trains are well-maintained and equipped with modern facilities, such as air conditioning, Wi-Fi, and electronic displays showing train schedules and route maps. The trains are spacious and well-designed, with ample seating and standing room for passengers.In addition to its convenience and comfort, the Nanjing Metro is also known for its commitment to safety and security. The stations and trains are equipped with CCTV cameras and security personnel to ensure the safety of passengers. The metro system also has clear and easy-to-understand signage in both Chinese and English, making it accessible to international visitors.The Nanjing Metro has played a significant role in reducing traffic congestion and improving air quality in the city. By providing an efficient and reliablealternative to driving, the metro system has helped to reduce the number of cars on the road, leading to less pollution and a more sustainable urban environment.In conclusion, the Nanjing Metro is an essential part of the city's transportation infrastructure, providing a safe, comfortable, and convenient way for residents and visitors to travel around the city. With its extensive network, modern facilities, and commitment to safety and sustainability, the metro system has become a model for public transportation in China. Whether you are a local resident or a tourist visiting Nanjing, the metro is the best way to explore the city and experience its vibrant culture and history.。

剑桥商务英语证书考试(BEC)-第2章剑桥商务英语证书考试(BEC)高级真题及详解(第3辑)-Te

剑桥商务英语证书考试(BEC)-第2章剑桥商务英语证书考试(BEC)高级真题及详解(第3辑)-Te

剑桥商务英语证书考试(BEC)-第2章剑桥商务英语证书考试(BEC)⾼级真题及详解(第3辑)-TeTest 3READING 1 hourPART ONEQuestions 1-8Look at the statements below and at the five extracts on the opposite page from the annual reports of five mobile phone companies.Which company (A, B, C, D or E) does each statement (1-8) refer toFor each statement (1-8), mark one letter (A, B, C, D or E) on your Answer Sheet. You will need to use some of these letters more than once.There is an example at the beginning, (0).Example:0 This company has no direct competition.1 This company is still making a financial loss.2 This company is having part of its business handled by an outside agency.3 This company has grown without undue expense.4 This company is trying to find out what the market response will be to a newproduct.5 This company continues to lose customers.6 This company aims to target a specific group of consumers.7 This company is finding it less expensive than before to attract new customers.8 This company has rationalized its outlets.AOur management team is dedicated to delivering operational excellence and improved profitability. In the coming year, we will focus our marketing on professional young adults, who represent the high value segment of the market and who according to independent research are most likely to adopt our more advanced mobile data products. Customer retention is central to our strategy, and we have been successful in reversing the customer loss of recent years by loyalty and upgrade schemes. A restructuring programme, resulting from changing marketing conditions, has seen our workforce scaled down to 6,100 people. BAs the only network operator in the country, our marketing is aimed at expanding the size of the market. In the business sector, we have targeted small and medium-sized businesses by offering standardised services, and large customersby offering tailored telecommunications solutions. We have been at the forefront of introducing new telecommunications technology and services and have recently distributed 150 of our most advanced handsets to customers to assess the likely demand for advanced data services. Last year, the industry recognized our achievement when we won a national award for technological progress.CA new management team has driven our improved performance here. It is committed to bringing the business into profitability within three years after reaching break-even point in the next financial year. We are focused on delivering rising levels of customer service and an improvement in the quality and utilization of our network. Good progress has been made on all these fronts. The cost of acquiring new subscribers has been reduced and new tariffs have been introduced to encourage greater use of the phone in the late evening.DWe have continued to expand our network in a cost-efficient manner and have consolidated our retail section by combining our four wholly-owned retail businesses into a single operating unit. We expect this to enhance our operational effectiveness and the consistency of our service. Our ambition is to give customers the best retail experience possible. We were, therefore, delighted earlier this yearwhen we won a major European award for customer service. This was particularly pleasing to us as we have always given high priority to customer satisfaction and operational excellence.EHere, we are focused on continuously realizing cost efficiencies as well as improving the level of customer satisfaction and retention. We have already taken effective measures to reduce customer loss and to strengthen our delivery of customer service. The quality of our network has improved significantly over the past year and an increase in the utilization of our network is now a priority. The operation of our customer service centre has been outsourced to a call centre specialist and this has led to a substantial increase in the level of service.【答案与解析】1. C 这家公司依旧财政亏损。

计算机操作系统英文论文

计算机操作系统英文论文

Introduction to the operating system of the new technology Abstract:the Operating System (Operating System, referred to as OS) is an important part of a computer System is an important part of the System software, it is responsible for managing the hardware and software resources of the computer System and the working process of the entire computer coordination between System components, systems and between users and the relationship between the user and the user. With the appearance of new technology of the operating system functions on the rise. Operating system as a standard suite must satisfy the needs of users as much as possible, so the system is expanding, function of increasing, and gradually formed from the development tools to system tools and applications of a platform environment. To meet the needs of users. In this paper, in view of the operating system in the core position in the development of computer and technological change has made an analysis of the function of computer operating system, development and classification of simple analysis and elaborationKey words: computer operating system, development,new technology Operating system is to manage all the computer system hardware resources include software and data resources; Control program is running; Improve the man-machine interface; Provide support for other application software, etc., all the computer system resourcesto maximize the role, to provide users with convenient, efficient, friendly service interface.The operating system is a management computer hardware and software resources program, is also the kernel of the computer system and the cornerstone. Operating system have such as management and configuration memory, decided to system resources supply and demand of priorities, control input and output devices, file system and other basic network operation and management affairs. Operating system is to manage all the computer system hardware resources include software and data resources; Control program is running; Improve the man-machine interface; Provide support for other application software, etc., all the computer system resources to maximize the role, to provide users with convenient, efficient, friendly service interface. Operating system is a huge management control procedures, including roughly five aspects of management functions, processes and processor management, operation management, storage management, equipment management, file management. At present the common operating system on microcomputer DOS, OS / 2, UNIX, XENIX, LINUX, Windows, Netware, etc. But all of the operating system with concurrency, sharing, four basic characteristics of virtual property and uncertainty. At present there are many different kinds of operating system, it is difficultto use a single standard unified classification. Divided according to the application field, can be divided into the desktop operating system, server operating system, the host operating system, embedded operating system.1.The basic introduction of the operating system(1)The features of the operating systemManagement of computer system hardware, software, data and other resources, as far as possible to reduce the work of the artificial allocation of resources and people to the machine's intervention, the computer automatically work efficiency into full play.Coordinate the relationship between and in the process of using various resources, make the computer's resources use reasonable scheduling, both low and high speed devices running with each other.To provide users with use of a computer system environment, easy to use parts of a computer system or function. Operating system, through its own procedures to by all the resources of the computer system provides the function of the abstract, the function of the formation and the equivalent of the operating system, and image, provide users with convenient to use the computer.(2)The development of the operating systemOperating system originally intended to provide a simple sorting ability to work, after updating for auxiliary more complex hardwarefacilities and gradual evolution.Starting from the first batch mode, also come time sharing mechanism, in the era of multiprocessor comes, the operating system also will add a multiprocessor coordination function, even the coordination function of distributed systems. The evolution of the other aspects also like this.On the other hand, on a personal computer, personal computer operating system of the road, following the growth of the big computer is becoming more and more complex in hardware, powerful, and practice in the past only large computer functions that it step by step.Manual operation stage. At this stage of the computer, the main components is tube, speed slow, no software, no operating system. User directly using a machine language program, hands-on completely manual operation, the first will be prepared machine program tape into the input, and then start the machine input the program and data into a computer, and then through the switch to start the program running and computing, after the completion of the printer output. The user must be very professional and technical personnel to achieve control of the computer.Batch processing stage. Due to the mid - 1950 - s, the main components replaced by the transistor computer, running speed hadthe very big enhancement, the software also began to develop rapidly, appeared in the early of the operating system, it is the early users to submit the application software for management and monitoring program of the batch.Multiprogramming system phase. As the medium and small-scale integrated circuit widely application in computer systems, the CPU speed is greatly increased, in order to improve the utilization rate of CPU and multiprogramming technology is introduced, and the special support multiprogramming hardware organization, during this period, in order to further improve the efficiency of CPU utilization, a multichannel batch system, time-sharing system, etc., to produce more powerful regulatory process, and quickly developed into an important branch of computer science, is the operating system. Collectively known as the traditional operating system.Modern operating systems. Large-scale, the rapid development of vlsi rapidly, a microprocessor, optimization of computer architecture, computer speed further improved, and the volume is greatly reduced, for personal computers and portable computer appeared and spread. Its the biggest advantage is clear structure, comprehensive functions, and can meet the needs of the many USES and operation aspects.2. New technology of the operating systemFrom the standpoint of the operating system of the new technology, it mainly includes the operating system structure design of the micro kernel technology and operating system software design of the object-oriented technology.(1) The microkernel operating system technologyA prominent thought in the design of modern operating systems is the operating system of the composition and function of more on a higher level to run (i.e., user mode), and leave a small kernel as far as possible, use it to complete the core of the operating system is the most basic function, according to the technology for micro kernel (Microkernel) technology.The microkernel structure(1) Those most basic, the most essential function of the operatingsystem reserved in the kernel(2)Move most of the functionality of the operating system intothe kernel, and each operating system functions exist in theform of a separate server process, and provide services.(3)In user space outside of the kernel including all operatingsystem, service process also includes the user's applicationprocess. Between these processes is the client/server mode.Micro kernel contains the main ingredient(1) Interrupt and the exception handling mechanism(2)Interprocess communication mechanisms(3)The processor scheduling mechanism(4)The basic mechanism of the service functionThe realization of the microkernelMicro kernel implementation "micro" is a major problem and performance requirements of comprehensive consideration. To do "micro" is the key to implementation mechanism and strategy, the concept of separation. Due to the micro kernel is the most important of news communication between processes and the interrupt processing mechanism, the following briefly describes the realization of both.Interprocess communication mechanismsCommunication service for the client and the server is one of the main functions of the micro kernel, is also the foundation of the kernel implement other services. Whether to send the request and the server reply messages are going through the kernel. Process of news communication is generally through the port (port). A process can have one or more ports, each port is actually a message queue or message buffer, they all have a unique port ID (port) and port authority table, the table is pointed out that this process can be interactive communications and which process. Ports ID and kernel power table maintenance.Interrupt processing mechanismMicro-kernel structure separation mechanism will interrupt and the interrupt processing, namely the interrupt mechanism on micro kernel, and put the interrupt handling in user space corresponding service process. Micro kernel interruption mechanism, is mainly responsible for the following work:(1) When an interrupt occurs to identify interrupt;(2) Put the interrupt signal interrupt data structure mapping tothe relevant process;(3) The interrupt is transformed into a message;(4) Send a message to the user space in the process of port, butthe kernel has nothing to do with any interrupt handling.(5) Interrupt handling is to use threads in a system.The advantages of the microkernel structure(1) Safe and reliableThe microkernel to reduce the complexity of the kernel, reduce the probability of failure, and increases the security of the system.(2) The consistency of the interfaceWhen required by the user process services, all based on message communication mode through the kernel to the server process. Therefore, process faces is a unified consistent processescommunication interface.(3) Scalability of the systemSystem scalability is strong, with the emergence of new hardware and software technology, only a few change to the kernel.(4) FlexibilityOperating system has a good modular structure, can independently modify module and can also be free to add and delete function, so the operating system can be tailored according to user's need.(5) CompatibilityMany systems all hope to be able to run on a variety of different processor platform, the micro kernel structure is relatively easy to implement.(6) Provides support for distributed systemsOperating under the microkernel structure system must adopt client/server mode. This model is suitable for distributed systems, can provide support for distributed systems.The main drawback of microkernelUnder the micro-kernel structure, a system service process need more patterns (between user mode and kernel mode conversion) and process address space of the switch, this increases costs, affected the speed of execution.3 .Object-oriented operating system technologyObject-oriented operating system refers to the operating system based on object model. At present, there have been many operating system used the object-oriented technology, such as Windows NT, etc. Object-oriented has become a new generation of an important symbol of the operating system.The core of object-oriented conceptsIs the basic idea of object-oriented to construct the system as a series of collections of objects. The object refers to a set of data and the data of some basic operation encapsulated together formed by an entity. The core of object-oriented concept includes the following aspects:(1) EncapsulationIn object-oriented encapsulation is the meaning of a data set and the data about the operation of the packaging together, form a dynamic entity, namely object. Encapsulated within the request object code and data to be protected.(2) InheritanceInheritance refers to some object can be inherited some features and characteristics of the object.(3) PolymorphismPolymorphism refers to a name a variety of semantics, or the same interface multiple implementations. Polymorphism inobject-oriented languages is implemented by overloading and virtual functions.(4) The messageNews is the way of mutual requests and mutual cooperation between objects. An object through the message to activate another object. The message typically contains a request object identification and information necessary to complete the work.Object-oriented operating systemIn object-oriented operating system, the object as a concurrent units, all system resources, including documents, process and memory blocks are considered to be an object, such as the operating system resources are all accomplished through the use of object services.The advantages of object-oriented operating system:(1)Can reduce operating system throughout its life period whena change is done to the influence of the system itself.For example, if the hardware has changed, will force the operating system also changes, in this case, as long as change the object representing the hardware resources and the operation of the object of service, and those who use only do not need to change the object code.(2)Operating system access to its resources and manipulation are consistent .Operating system to produce an event object, delete, and reference, and it produces reference, delete, and a process object using the same method, which is implemented by using a handle to the object. Handle to the object, refers to the process to a particular object table in the table.(3)Security measures to simplify the operating system.Because all the objects are the same way, so when someone tries to access an object, security operating system will step in and approved, regardless of what the object is.(4)Sharing resources between object for the process provides a convenient and consistent approach.Object handle is used to handle all types of objects. The operating system can by tracking an object, how many handle is opened to determine whether the object is still in use. When it is no longer used, the operating system can delete the object.ConclusionIn the past few decades of revolutionary changes have taken place in the operating system: technological innovation, the expansionof the user experience on the upgrade, application field and the improvement of function. As in the past few decades, over the next 20 years there will be huge changes in operating system. See we now use the operating system is very perfect. Believe that after the technology of the operating system will still continue to improve, will let you use the more convenient. Believe that the operating system in the future will make our life and work more colorful.。

物流专业英语

物流专业英语

一.短语翻译(英译中)Unit 1Part 11.anchor sectors 支柱产业2.cargo container handling capacity 货物集装箱处理能力3.put in place 出台相关政策4.priority use of land resources 优先使用土地资源5.sector-by—sector 各个部门6。

tax concessions 税收优惠7。

tax-free zones 免税区8.automate much of the paperwork 文书工作自动化9.Rail freight traffic 铁路货物10。

public spending 政府开支11.terminal operators 码头营运商12。

Three Gorges Dam project 三峡工程13.Joint ventures 合资企业14.Small and medium—sized 中小型Part 21。

tendered forwarding services 提供货运代理2。

purchasing business 采购业务3。

customs-declarations 清关证明4。

explore the logistical facilities and services考察物流设施和服务5。

through the courtesy of 承蒙6。

regular freight forwarding 正规货运代理7.Shipping Agent 装船代理8.Cargo Forwarding Agent 货运代理9.Customs Clearance Agent 清关代理10。

under separate cover 在另函内11.for your information 供你参考Unit 2Part 11.work-in-process 在制品2。

identification cards 身份证3。

希尔顿·皮尔顿(HPE)10G双端口546SFP+网卡设备概述说明书

希尔顿·皮尔顿(HPE)10G双端口546SFP+网卡设备概述说明书

QuickSpecs HPE Ethernet 10G 2-port 546SFP+ Adapter OverviewHPE Ethernet 10G 2-port 546SFP+ AdapterThis adapter is part of an extended catalog of products tailored for customers in specific markets or with specific workloads, requiring the utmost in performance or value, but typically have a longer lead-timeThe HPE Ethernet 10Gb 2-port 546SFP+ Adapter for ProLiant servers is designed to optimize Cloud efficiency, and improve performance and security of applications – especially where I/O, block storage and database performance are critical and the need for maximum VM density and up-scalability are greatest.The HPE Ethernet 546SFP+ can provide up to 40 Gb/s of converged bi-directional Ethernet bandwidth, helping to alleviate network bottlenecks.HPE Ethernet 10G 2-port 546SFP+ AdapterPlatform InformationModelsHPE Ethernet 10Gb 2-port 546SFP+ Adapter779793-B21 Kit Contents HPE Ethernet 10Gb 2-port 546SFP+ AdapterQuick install cardProduct warranty statementCompatibility -Supported Servers HPE ProLiant DL380 Gen9HPE ProLiant DL360 Gen9HPE ProLiant DL180 Gen9HPE ProLiant DL160 Gen9HPE ProLiant DL120 Gen9HPE ProLIant DL80 Gen9HPE ProLiant DL60 Gen9HPE ProLiant ML350 Gen9HPE ProLiant ML150 Gen9HPE ProLiant ML110 Gen9HPE Apollo 6000 Gen9HPE Apollo 2000 Gen9NOTE:This is a list of supported servers. Some may be discontinued.At a Glance Features•Dual 10 Gb ports provide up to 40 Gb bi-directional per adapter•Industry-leading throughput and latency performance•Over eight million small packets/s, ideal for web/mobile applications, mobile messaging, and social media•Tunnel Offload support for VXLAN and NVGRE•Support for Preboot eXecution Environment (PXE)•Optimized host virtualization density with SR-IOV support•Converges RoCE with LAN traffic on a single 10 GbE wire•RDMA over Converged Ethernet (RoCE) for greater server efficiency and lower latency•Advanced storage offload processing freeing up valuable CPU cycles•Supports UEFI and legacy boot options•Greater bandwidth with PCIe 3.0•Includes 128 MB of onboard memory•Jumbo Frames support•Supports receive-side scaling (RSS) for the efficient distribution of network receive processing across multiple CPUs in multiprocessor systems•Support for Windows SMB Direct•Supports VMWare NetQueue, Microsoft Virtual Machine Queue (VMQ) for WindowsThroughput-Theoretical Bandwidth This adapter delivers 20 Gb/s bi-directional Ethernet transfer rate per port (40 Gb/s per adapter), providing the network performance needed to improve response times and alleviate bottlenecks.802.1p QoS Tagging IEEE quality of service (QoS) 802.1p tagging allows the adapter to mark or tag frames with a priority level across a QoS-aware network for improved traffic flow.802.1Q VLANs IEEE 802.1Q virtual local area network (VLAN) protocol allows each physical port of this adapter to be separated into multiple virtual NICs for added network segmentation and enhanced security and performance.VLANs increase security by isolating traffic between users. Limiting the broadcast traffic to within the sameVLAN domain also improves performance.Configuration Utilities This adapter ships with a suite of operating system-tailored configuration utilities that allow the user to enable initial diagnostics and configure adapter teaming. This includes a patented teaming GUI for Microsoft Windowsoperating systems. Additionally, support for scripted installations of teams in a Microsoft Windows environmentallow for unattended OS installations.DPDK This adapter supports DPDK with benefit for packet processing acceleration and use in NFV deployments. Interrupt Coalescing Interrupt coalescing (interrupt moderation) groups multiple packets, thereby reducing the number of interrupts sent to the host. This process optimizes host efficiency, leaving the CPU available for other duties.Jumbo Frames This adapter supports Jumbo Frames (also known as extended frames), permitting up to a 9,200 byte (KB) transmission unit (MTU) when running Ethernet I/O traffic. This is over five times the size of a standard 1500-byte Ethernet frame. With Jumbo Frames, networks can achieve higher throughput performance and greaterCPU utilization. These attributes are particularly useful for database transfer and tape backup operations. LED Indicators LED indicators show link integrity and network activity for easy troubleshooting.Management Support This adapter ships with agents that can be managed from HPE Systems Insight Manager or other management application that support SNMP.Message Signaled Interrupt (Extended) (MSI-X)Message Signaled Interrupt (Extended) provides performance benefits for multi-core servers by load balancing interrupts between CPUs/cores.PCI Express Interface This adapter is designed with an eight lane (x8) PCI Express bus based on the PCIe 3.0 standard. The adapter is backward compatible with four lane (x4) PCI Express, automatically auto-sensing between x8 and x4 slots.Preboot eXecution Environment (PXE)Support for PXE enables automatic deployment of computing resources remotely from anywhere. It allows a new or existing server to boot over the network and download software, including the operating system, from a management/ deployment server at another location on the network.Additionally, PXE enables decentralized software distribution and remote troubleshooting and repairs.RoCE RoCE is an accelerated I/O delivery mechanism that allows data to be transferred directly from the user memory of the source server to the user memory of the destination server bypassing the operating system(OS) kernel. Because the RDMA data transfer is performed by the DMA engine on the adapter's networkprocessor, the CPU is not used for the data movement, freeing it to perform other tasks such as hosting morevirtual workloads (increased VM density). RDMA also bypasses the host's TCP/IP stack, in favor of upper layerInfiniBand protocols implemented in the adapter's network processor. The bypass of the TCP/IP stack and theremoval of a data copy step reduce overall latency to deliver accelerated performance for applications such asMicrosoft Hyper-V Live Migration, Microsoft SQL and Microsoft SharePoint with SMB Direct.Server Integration This adapter is a validated, tested, and qualified solution that is optimized for HPE ProLiant servers. Hewlett Packard Enterprise validates a wide variety of major operating systems drivers with the full suite of web-basedenterprise management utilities including HPE Intelligent Provisioning and HPE Systems Insight Manager thatsimplify network management.This approach provides a more robust and reliable networking solution than offerings from other vendors andprovides users with a single point of contact for both their servers and their network adapters.TCP/UDP/IP For overall improved system response, this adapter supports standard TCP/IP offloading techniques including:TCP/IP, UDP checksum offload (TCO) moves the TCP and IP checksum offloading from the CPU to the networkadapter. Large send offload (LSO) or TCP segmentation offload (TSO) allows the TCP segmentation to behandled by the adapter rather than the CPU.Tunnel Offload Minimize the impact of overlay networking on host performance with tunnel offload support for VXLAN and NVGRE. By offloading packet processing to adapters, customers can use overlay networking to increaseVM migration flexibility and virtualized overlay networks with minimal impact to performance. HPE TunnelOffloading increases I/O throughput, reduces CPU utilization, and lowers power consumption. Tunnel Offloadsupports VMware's VXLAN and Microsoft's NVGRE solutions.Warranty Maximum: The remaining warranty of the HPE product in which it is installed (to a maximum three-year, limited warranty).Minimum: One year limited warranty.NOTE:Additional information regarding worldwide limited warranty and technicalsupport is available at: /us/en/enterprise/servers/warranty/index.aspx#.V4e3tPkrJhEService and SupportService and Support NOTE:This adapter is covered under HPE Support Services/ Service Contract applied to the HPEProLiant Server or enclosure. No separate HPE Support Services need to be purchased.Most HPE branded options sourced from HPE that are compatible with your product will be covered underyour main product support at the same level of coverage, allowing you to upgrade freely. Additional support isrequired on select workload accelerators, switches, racks and UPS options 12KVA and over. Coverage of theUPS battery is not included under HPE support services; standard warranty terms and conditions apply.Warranty and Support Services Warranty and Support Services will extend to include HPE options configured with your server or storage device. The price of support service is not impacted by configuration details. HPE sourced options that are compatible with your product will be covered under your server support at the same level of coverage allowing you to upgrade freely. Installation for HPE options is available as needed. To keep support costs low for everyone, some high value options will require additional support. Additional support is only required on select high value workload accelerators, fibre switches, InfiniBand and UPS options 12KVA and over. Coverage of the UPS battery is not included under TS support services; standard warranty terms and conditions apply.Protect your business beyond warranty with HPE Support Services HPE Technology Services delivers confidence, reduces risk and helps customers realize agility and stability. Connect to HPE to help prevent problems and solve issues faster. HPE Support Services enable you to choose the right service level, length of coverage and response time as you purchase your new server, giving you full entitlement to the support you need for your IT and business.Protect your product, beyond warranty.Parts and Materials Hewlett Packard Enterprise will provide HPE-supported replacement parts and materials necessary to maintain the covered hardware product in operating condition, including parts and materials for availableand recommended engineering improvements. Parts and components that have reached their maximumsupported lifetime and/or the maximum usage limitations as set forth in the manufacturer's operating manual,product quick-specs, or the technical product data sheet will not be provided, repaired, or replaced as part ofthese services. The defective media retention service feature option applies only to Disk or eligible SSD/FlashDrives replaced by Hewlett Packard Enterprise due to malfunction.For more information Visit the Hewlett Packard Enterprise Service and Support website.Related OptionsCables - Direct Attach HPE BladeSystem c-Class 10GbE SFP+ to SFP+ 0.5m Direct Attach Copper Cable487649-B21 HPE BladeSystem c-Class 10GbE SFP+ to SFP+ 1m Direct Attach Copper Cable487652-B21HPE BladeSystem c-Class 10GbE SFP+ to SFP+ 3m Direct Attach Copper Cable487655-B21HPE BladeSystem c-Class 10GbE SFP+ to SFP+ 5m Direct Attach Copper Cable537963-B21HPE FlexNetwork X240 10G SFP+ to SFP+ 0.65m Direct Attach Copper Cable JD095CHPE FlexNetwork X240 10G SFP+ to SFP+ 1.2m Direct Attach Copper Cable JD096CHPE FlexNetwork X240 10G SFP+ to SFP+ 3m Direct Attach Copper Cable JD097CHPE FlexNetwork X240 10G SFP+ to SFP+ 5m Direct Attach Copper Cable JG081CHPE FlexNetwork X240 10G SFP+ SFP+ 7m Direct Attach Copper Cable JC784CNOTE:Direct Attach Cable (DAC) must be purchased separately for copper environments.Cables - Fiber Optic HPE LC to LC Multi-mode OM3 2-Fiber 0.5m 1-Pack Fiber Optic Cable AJ833A HPE LC to LC Multi-mode OM3 2-Fiber 1.0m 1-Pack Fiber Optic Cable AJ834AHPE LC to LC Multi-mode OM3 2-Fiber 5.0m 1-Pack Fiber Optic Cable AJ836AHPE LC to LC Multi-mode OM3 2-Fiber 15.0m 1-Pack Fiber Optic Cable AJ837AHPE LC to LC Multi-mode OM3 2-Fiber 30.0m 1-Pack Fiber Optic Cable AJ838AHPE LC to LC Multi-mode OM3 2-Fiber 50.0m 1-Pack Fiber Optic Cable AJ839ANOTE:Fiber transceivers and cables must be purchased separately for fiber-optic environments.Transceivers HPE BladeSystem c-Class 10Gb SFP+ SR Transceiver455883-B21 NOTE:Fiber transceivers and cables must be purchased separately for fiber-optic environments.Technical SpecificationsGeneral Specifications Network Processor Mellanox Connect X-3 ProData Rate Two ports, each at 20 Gb/s bi-directional; 40 Gb/s aggregate bi-directionaltheoretical bandwidth.Bus type PCI Express 3.0 (Gen 3) x8Form Factor Stand-up cardIEEE Compliance802.3ae, 802.1Q, 802.3x, 802.1p, 802.3ad/LACP, 802.1AB(LLDP),802.1Qbg, 802.1Qbb, 802.1QazPower and Environmental Specifications Power8.4W typical, 9.7W maximumTemperature - Operating0° to 55°C (32° to 131°F)Temperature - Non-Operating-40° to 70° C (-40° to 158° F)Humidity - Operating15% to 80% non-condensingHumidity - Non-operating10% to 90% non-condensingEmissions Classification FCC Class A, VCCI Class A, BSMI Class A, CISPR 22 Class A, ACA Class A,EN55022 Class A, EN55024-1, ICES-003 Class A, MIC Class ARoHS Compliance 6 of 6Safety UL Mark (USA and Canada)CE MarkEn 60590-1Operating System and Virtualization Support The Operating Systems supported by this adapter are based on the server OS support. Please refer to the OS Support Matrix athttps:///us/en/servers/server-operating-systems.html.Environment-friendly Products and Approach - End-of-life Management and Recycling Hewlett Packard Enterprise offers end-of-life product return, trade-in, and recycling programs, in many geographic areas, for our products. Products returned to Hewlett Packard Enterprise will be recycled, recovered or disposed of in a responsible manner.The EU WEEE Directive (2012/19/EU) requires manufacturers to provide treatment information for each product type for use by treatment facilities. This information (product disassembly instructions) is posted on the Hewlett Packard Enterprise web site. These instructions may be used by recyclers and other WEEE treatment facilities as well as Hewlett Packard Enterprise OEM customers who integrate and re-sell Hewlett Packard Enterprise equipment.Summary of ChangesSign up for updates© Copyright 2018 Hewlett Packard Enterprise Development LP. The information contained herein is subject to changewithout notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the expresswarranty statements accompanying such products and services. Nothing herein should be construed as constituting anadditional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions containedherein.c04543733 - 15182 - Worldwide - V8 -05-February-2018。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Rate Windows for Efficient Network and I/O ThrottlingKyung D. Ryu, Jeffrey K. Hollingsworth, and Peter J. KeleherDept. of Computer ScienceUniversity of Maryland{kdryu,hollings,keleher} @This paper proposes and evaluates a new mechanism for I/O and network rate policing. The goal of the proposed system is to provide an simple, yet effective way to enforce resource limits on target classes of jobs in a system. The basic approach is useful for several types of systems including running background jobs on idle workstations, and providing resource limits on network intensive applications such as virtual web server hosting. Our approach is quite simple, we use a sliding window average of recent events to compute the average rate for a target resource. The assigned limit is enforced by forcing application processes to sleep when they issue requests that would bring their resource utilization out of the allowable profile. Our experimental results that show that we are able to provide the target resource limitations within a few percent, and do so with no measurable slowdown of the overall system.Contact author:Dr. Peter KeleherComputer Science DepartmentA. V. Williams Bldg.University of MarylandCollege Park, MD 20742-3255301 405-0345Fax: 301 405-6707keleher@Rate Windows for Efficient Network and I/O ThrottlingKyung D. Ryu, Jeffrey K. Hollingsworth, and Peter J. KeleherDept. of Computer ScienceUniversity of Maryland{kdryu,hollings,keleher}@AbstractThis paper proposes and evaluates a new mechanism for I/O and network rate policing. The goal of the proposed system is to provide an simple, yet effective way to enforce resource limits on target classes of jobs in a system. The basic approach is useful for several types of systems including running background jobs on idle workstations, and providing resource limits on network intensive applications such as virtual web server hosting. Our approach is quite simple, we use a sliding window average of recent events to compute the average rate for a target resource The assigned limit is enforced by forcing application processes to sleep when they issue requests that would bring their resource utilization out of the allowable profile. Our experimental results that show that we are able to provide the target resource limitations within a few percent, and do so with no measurable slowdown of the overall system.1. IntroductionThis paper proposes and evaluates rate windows, a new mechanism for I/O and network rate policing. Integrated with our existing Linger-Longer infrastructure for policing CPU and memory consumption [15], the rate windows give unprecedented control over the resource use of user applications. More specifically, rate windows is a low-overhead facility that gives us the ability to set hard per-process bounds on I/O and network usage.Current general-purpose UNIX systems provide no support for prioritizing access to other resources such as memory, communication and I/O. Priorities are, to some degree, implied by the corresponding CPU scheduling priorities. For example, physical pages used by a lower-priority process will often be lost to higher-priority processes. LRU-like page replacement policies are more likely to page out the lower-priority process's pages, because it runs less frequently. However, this might not be true with a higher-priority process that is not computationally intensive, and a lower priority process that is. We therefore need an additional mechanism to control the memory allocation between local and guest processes. Like CPU scheduling, this modification should not affect the memory allocation (or page replacement) between processes in the same class.This ability has applications in several areas; we perform a detailed investigation of two in this paper. First, we show that network and I/O throttling is crucial in order to provide guarantees to users who allow their workstations to be used in Condor-like systems. Condor-like facilities allow guest processes to efficiently exploit otherwise-idle workstation resources. The opportunity for harvesting cycles in idle workstations has long been recognized [12], since the majority of workstation cycles go unused. In combination with ever-increasing needs for cycles, this presents an obvious opportunity to better exploit existing resources.However, most such policies waste many opportunities to exploit cycles because of overlyconservative estimates of resource contention. Our linger-longer approach [14] exploits these opportunities by delaying migrating guest processes off of a machine in the hope of exploiting fine-grained idle periods that exist even while users are actively using their computers. These idle periods, on the order of tens of milliseconds, occur when users are thinking, or waiting for external events such as disks or networks. Our prior work consisted of new mechanisms and policies that limit the use of CPU cycles and memory by guest jobs. The work proposed in this paper complements that work in extending similar protection to network and I/O bandwidth usage.Second, we show that rate windows can be used to efficiently provide rate policing of network connections. Rating limiting is useful both for managing resource allocations of competing users (such as virtual hosting of web servers) and can be used for rate-based clocking of network protocols as a means of improving the utilization of networks with high bandwidth-delay products [7, 13].The rest of this paper is organized as follows. Section 2 describes the implementation of rate windows and evaluates its use with micro-benchmarks. Section 3 reviews the Linger-Longer infrastructure, motivates the use of rate windows for Linger-Longer. In particular, we show that a significant class of guest applications is still able to affect host processes via network and I/O contention. Further we show that there is no general way to prevent this using CPU and memory policing that still allows the guest to make progress. Section 4 describes the use of rate windows in policing file I/O, and Section 5 describes its use with network I/O/. Finally, Section 6 reviews related work and Section Error! Reference source not found. concludes. 2. CPU and memory policingBefore discussing rate windows, we place this work in the context of the Linger-Longer resource-policing infrastructure [14]. The Linger-Longer infrastructure is based on the thesis that current Condor-like [11] policies waste many opportunities to exploit idle cycles because of overly conservative estimates of resource contention. We believe that overall throughput is maximized if systems implement fine-grained cycle stealing by leaving guest jobs on machine even when resource-intensive host jobs start up. However, the host job will be adversely affected unless the guest job’s resource use is strictly limited. Our earlier work strictly bounded CPU and memory use by guest jobs through use of a few, simple modifications to existing kernel policies.These policies rely on two new mechanisms. First, a new guest priority prevents guest processes from running when runnable host processes are present. The change essentially establishes guest processes as a different class, such that guest processes are not chosen if any runnable host processes exist. This is true even if the host processes have lower runtime priorities than the guest process. Note that running with “nice –19” is not sufficient, as the nice’d process can still consume between 8%, 15%, and 40% of the CPU for Linux (2.0.32), Solaris (SunOS 5.5), and AIX (4.2), respectively [15].We verified that the scheduler reschedules processes any time a host process unblocks while a guest process is running. This is the default behavior on Linux, but not on many BSD derived operating systems. One potential problem of our strict priority policy is that it could cause priority inversion. Priority inversion occurs when a higher priority process is not able to run due to a lower priority process holding a shared resource. However, this is not possible in our applicationdomain because guest and host processes do not share locks, or any other non-revocable resources.Our second mechanism limited guest consumption of memory resources. Unfortunately, memory is more difficult to deal with than the CPU. The cost of reclaiming the processor from a running process in order to run a new process consists only of saving processor state and restoring cache state. The cost of reclaiming page frames from a running process is negligible for clean pages, but quite large for modified pages because they need to be flushed to disk before being reclaimed. The simple solution to this problem is to permanently reserve physical memory for the host processes. The drawback is that many guest processes are quite large. Simulations and graphics rendering applications can often fill all available memory. Hence, not allowing guest processes to use the majority of physical memory would prevent a large class of applications from taking advantage of idle cycles.We therefore decided not to impose any hard restrictions on the number of physical pages that can be used by a guest process. Instead, we implemented a policy that establishes low and high thresholds for the number of physical pages used by guest processes. Essentially, the page replacement policy prefers to evict a page from a host process if the total number of physical pages held by the guest process is less than the low threshold. The replacement policy defaults to the standard clock-based pseudo-LRU policy up until the upper threshold. Above the high threshold, the policy prefers to evict a guest page. The effect of this policy is to encourage guest processes to steal pages from host processes until the lower threshold is reached, to encourage host processes to steal from guest processes above the high threshold, and to allow them to compete evenly in the region between the two thresholds. However, the host priority will lead to the number of pages held by the guest processes being closer to the lower threshold, because the host processes will run more frequently.We modified the Linux kernel to support this prioritized page replacement. Two new global kernel variables were added for the memory thresholds, and are configurable at run-time via system calls. The kernel keeps track of resident memory size for guest processes and host processes. Periodically, the virtual memory system triggers the page-out mechanism. When it scans in-memory pages for replacement, it checks the resident memory size of guest processes against the memory thresholds. If they are below the lower thresholds, the host processes’ pages are scanned first for page-out. Resident sizes of guest processes larger than the upper threshold cause the guest processes’ pages to be scanned first.Figure 1: Threshold validations – Low and high thresholds are set to 50MB and 70 MB. At time 90, the host job becomes I/O-bound. Host process acquires 150 MB when running without contention, guest process acquires 128 MB without contention. Total available memory is 179 MB.Between the two thresholds, older pages are paged out first no matter what processes own them.We validated our memory threshold modifications by tracking the resident memory size of host and guest processes for two CPU-intensive applications with large memory footprints. The result is shown in Figure 1. The chart shows memory competition between a guest and a host process. The application behavior and memory thresholds shown are not meant to be representative, but were constructed to demonstrate that the memory thresholds are strictly enforced by our modifications to Linux’s page replacement policy. The guest process starts at time 20 and grabs 128MB. The host process starts at time 38 and quickly grabs a total of 128 MB. Note that the host actually touches 150 MB. It is prevented from obtaining all of this memory by the low threshold. Since the guest process’ total memory has dropped to the low threshold, all replacements come from host pages. Hence, no more pages can be stolen from the guest. At time 90, the host process turns into a highly I/O-bound application that uses little CPU time. When this happens, the guest process becomes a stronger competitor for physical pages, despite the lower CPU priority, and slowly steals pages from the host process. This continues until time 106, at which point the guest process reaches the high threshold and all replacements come from its own pages. For this experiment, we deliberately set the limits very high to demonstrate the mechanism. 5/10% of total memory are very acceptable guest job memory limits for most of cases. However, these values can be adapted at run time to meet the different requirements of applications.3. Rate Windows3.1 PolicyFirst, we distinguish between “unconstrained” and “constrained” job classes. The default for all processes is unconstrained; jobs must be explicitly put into constrained classes. The unconstrained class is allowed to consume all available I/O. Each distinct constrained class has a different threshold bandwidth, defining the maximum aggregate bandwidth that all processes in that class can consume. As an optimization, however, if there is only one class of constrained jobs, and no I/O-bound unconstrained jobs, the constrained jobs are allowed unfettered access to the available bandwidth.We identify the presence of unconstrained I/O-bound jobs by monitoring I/O bandwidth, moving the system into the throttled state when unconstrained bandwidth exceeds thresh high, and into the unthrottled state when unconstrained bandwidth drops below thresh low. Note that thresh low is lower than thresh high, providing hysteresis to the system to prevent oscillations between throttled and un-throttled mode when the I/O rate is near the threshold. The state of the system is reflected in the global variable throttled. Note that the current unconstrained bandwidth is not an instantaneous measure; it is measured over the life of the rate window, defined below.3.2 MechanismThe implementation of rate windows is straightforward. We currently have a hard-coded set of job equivalence classes, although this could be easily generalized for an arbitrary number. Each class has two kernel window structures, one for file I/O and one for network I/O. Each window structure contains a circular queue, implemented via a 100-element array (see Figure 2). The window structure describes the last I/O operations performed by jobs in the class, plus a few other scalar variables. The window structure only describes I/O events that occurred during the previous 5 seconds, so there may be fewer than 100 operations in the array. We experimented with several different window sizes before arriving at these constants, but it is clearly possible that new environments or applications could be best served by using other values. We provide a means of tuning these and other parameters from a user-level tool.We currently trigger our mechanism via hooks placed high in the kernel, at each of the kernel calls that implement I/O and network communication: read(), write(), send(), etc. Each hook calls rate_check() with process ID, I/O length, and I/O type. The process ID is used to map to an I/O class, and the I/O type is used to distinguish between file and network I/O. The rate_check() routine maintains a sliding window of operations performed for each class of service and for the overall system. We maintain a window of up to 100 recent events. However, to prevent using too old of information, we limit the sliding window to a fixed interval of time (currently 5 seconds).Define B w, the window bandwidth, as the total amount of I/O in the window’s operations, including the new operation. Define T w, the window time, as the interval from the beginning of the oldest operation in the window until the expected completion of the new operation, assuming it starts immediately. Let R t be the threshold bandwidth per second for this class. We then allow the new operation to proceed immediately if the class is currently throttled and:wtwBRT>Otherwise, we calculate the sleep() delay as follows:delay wwtBTR=−This process is illustrated graphically in Figure 3. Note that we have upper and lower bounds on allowable sleep times.Sleep durations that are too small degrade overall efficiency, so durations under our lower bound are set to zero. Sleep durations that are too large tend to make the stream bursty. If our computed delay is above the computed threshold we break the I/O into multiple pieces and spreadFigure 3: Policing I/O Requests.the total delay over the pieces. This is will not affect application execution since for file I/O requests will eventually be broken into individual disk blocks and for network connections TCP provides a byte-oriented stream rather than a record oriented one2816.We chose Linux as our target operating system for several reasons. First, it is one of the most widely used UNIX operating systems. Second, the source code is open and widely available. Since many active Linux users build their own customized kernels, our mechanisms could easily be patched into existing installations by end users. This is important because most PCs are deployed on people's desks, and cycle-stealing approaches are probably more applicable to desktop environments than to server environments. Also since our mechanism simply requires the ability to intercept I/O calls, it would be easy to implement as a loadable kernel modules on systems that defined an API to intercept I/O calls. Windows 2000 (nee Window NT) and the stackable filesystem [9] provide the required calls.In order to provide the finer granularity of sleep time to allow our policing to be implemented, we augmented the standard 2.22816 For UDP this is not a problem since the max user level request is constrained by the network’s MTU. Linux kernel with extensions developed by KURT Real-time Linux project [3].4. File I/O PolicingIn order to validate our approach, we conducted a series of micro-benchmarks and application benchmarks. The purpose of these experiments is three fold. First, we want to show that our mechanism doesn’t introduce any significant delay on normal operation of the system. Second, we want to show that we can effectively police the I/O rates. Third, since our policing mechanism sits above the file buffer cache, it will be conservative in policing the disk since hits in cache will be charged against a job classes’s overall file I/O limit. We wanted to measure this affect.We first measured resource usage in order to verify that the use of rate windows does not add significant overhead to the system. We ran a single tar program by itself both with and without rate windows enabled. The completion time of the tar application with rate windows enabled was less than the variation between consecutive runs of the experiment. This was expected, as there are no computationally expensive portions of the algorithm. Note that this experiment does not account for the system cost of extra context switches caused by sleeping guest jobs.Second, we ran two instances of tar, one as a guest job and one as a host job. Figure 4a represents a run with throttling enabled, and Figure 4 shows a run without throttling. There is no caching between the two because they have disjoint input. The guest job is intended to be representative of those used by cycle-stealing schedulers such as Condor. Unless specified otherwise, a “guest” job is assumed to be constrained to 10% of the maximum I/O or network bandwidth, whereas a “host” process has unconstrained use of all bandwidth.In both figures, the guest job starts first, followed somewhat later by the host job. At this point, the guest job throttles down to its 10% rate. When the host job finishes, the guest job throttles back up after the rate window empties. The sequence on the left is with throttling, on the right without. Note that the version with I/O throttling is less thrifty with resources (the jobs finish later). This is a design decision: our goal is to prevent undue degradation of unconstrained host job performance regardless of the effect on any guest jobs.We look at the behavior of one of the tar processes in more detail in Figure 5. The point of this figure is that despite the frequent and varied file I/O calls, and despite the buffer cache, disk I/O’s get issued at regular intervals that precisely match the threshold value set for this experiment. Note that actual disk I/O.(a) (b)Figure 4: File I/O of competing tar applications with (left) and without (right) file I/O policing. Figure 5: I/O sizes vs. time for tarOur third set of micro-benchmark experiments is designed to look at the distribution of sleep times for a guest process. For this case, we ran three different applications. The first application was again a run of the tar utility. Second, we ran the agrep utility across the source directory for the Linux kernel looking for a simple pattern that did not occur in the files searched. Third, we ran a compile workload that consisted of compiling a library of C++ methods that were divided among 34 files plus 45 header files. This third test was designed to stress the gap between monitoring at the file request level and the disk I/O level since all of the common header files would remain in the file buffer cache for the duration of the experiment.A histogram (100 buckets) of the sleep durations is shown in Figure 6. We have omitted those events that have no delay since their frequency completely dominates the rest of the values. Figure 6(a) shows the results for the tar application. In this figure, there is a large spike in the delay time at 20msec since this is exactly the mean delay required for the I/O the must common sized I/O request of 10K bytes to be limited to 500 KB/sec. Figure 6(b) shows the results for the compilation workload. In this example, the most popular sleep time is the maximum sleep duration of 100msec. This is due to the fact, that at several periods during the application execution, the program is highly I/O intensive and our mechanism was straining to keep the I/O rate throttled down. Figure 6(c) shows the sleep time distribution for the agrep application. The results for this application show that the most popular sleep time (other than no sleep) was 2-3 ms. This is very close to the mean sleep time of 2.5 ms forthis application.(a) Tar(b) Compile Workload(c) AgrepFigure 6: Distribution of Sleep Times for Tar program.Fourth, we examine the relationship between file I/O and disk I/O. File I/O can dilate because i) file I/O’s can be done in small sizes, but disk I/O is always rounded up to the next multiple of the page size, and ii) the buffer cache’s read-ahead policy may bring speculatively bring in disk blocks that are never referenced. File I/O can also attenuate due to buffer cache hits, which is a consequence on the I/O locality of the applications. We measured 1) the total amount of file I/O requested, 2) the actual I/O requests performed by the disk, 3) the total number of I/O events 4) the total number of I/O events that were delayed by sleep calls, 5) the total amount of sleep time, 6) the total runtime of the workload, and 7) the average actual disk I/O rate (total disk I/O’s divided by execution time). The results are shown in Table 1.Looking first at the difference between file I/O and disk I/O, note that file I/O is equal to the disk I/O for tar, 14% less for agrep, and 233% larger for compile. Notice that for the two I/O intensive applications, the overall I/O rate for the application is very close to the target rate.We did not observe poor read-ahead behavior in our experiments; agrep’s behavior is due to small reads being rounded to larger disk pages. The low file I/O number for compile, of course, is due to good buffer cache locality.Since the temporal extent of our window automatically adapts based on the effective I/O rate (due to the limit of 100 items), we wanted to look at how the size of this window changed during the execution of a program. Figure 8 shows the distribution of the effective window size for the compilation workload. The bar chart on the left shows the effective window size (in seconds) for the workload when it is run without any I/O rate control. The curve on the right shows the same information when I/O rate control is enabled. In both cases the effective window size is much less than the upper limited of 5 seconds. The average size for the limited case is 0.98 seconds, and 1.71 seconds for the limited case. <what is the conclusion for this ??>Metric Tar Agrep Compile Total File I/O 103.0 MB 50.0 MB 23.3 MB Total Disk I/O 103.0 MB 58.1 MB 10.0 MB Total I/O Events 17,430 11,526 3,859 Total Sleep Events 6,928 3,324 1,004 Total Sleep Time 178.0 sec 83.3 sec 29.1 Sec Total Execution Time 211.2 sec 108.7 sec 70.6 Sec Average I/O Rate 487 KB/sec 534 KB/sec 141 KB/secTable 1: I/O Application BehaviorThe full story of the I/O dilation is seen when we look at the time varying behavior of the I/O. Figure 7 shows the one-second average I/O rates for the compile workload. Notice that although this workload has considerable hits in the file buffer cache, our mechanism ensured that the actual disk I/O rate was less than the target rate of 500KB/sec. The requested I/O rate peaks are higher than our target limit, due to the fact we average I/O requests over a 5 second window and we are showing data over a 1 second window in this figure.Although we do not claim that our set of I/O-intensive applications is representative, our experiments support our intuition that file I/O dilation is not a problem. Rather, the main concern is that of lost opportunity. Consider an example where we would like to share all available bandwidth equally between two applications. We can set thresholds for each application at half of the maximum achievable disk bandwidth. However, good buffer cache locality would mean that file I/O at this rate would generate less, possibly much less, disk I/O. Such attenuation represents unused bandwidth.There are two potential approaches to recouping this lost bandwidth. The first is to add a hook into the buffer cache to check for a cache miss before adding the I/O to our window, and deciding whether to sleep. We deliberately have not taken this path because we wish to keep our system at as high a level as possible. We could currently move our entire system into libc without loss of functionality or accuracy. This would be compromised if we put hooks deeper into the kernel.Figure 7: I/O Rates for the Compile Workload.Figure 8: Comparison of Effective Window Size (Compilation workload).A second approach is to use statistics from proc file system to apply a “normalization factor” to our limit calculations. Of necessity, this would be inexact. The advantage is that it can be implemented entirely outside of the kernel. We are currently pursuing this approach, but the mechanism is not yet in place.5. Network I/O policingPolicing network I/O is easier than file I/O because there is no analogue to the file buffer cache or read ahead, which dilate and attenuate the effective disk I/O rate. Hence, network bandwidth is a somewhat better target for our current implementation of rate windows than file I/O. Since contention for network resources is probably more common than disk bandwidth contention, this preference is fortuitous.5.1 Linger longer: Throttling guest processes Most of the experiments in Section 4 assumed the use of rate windows in a linger-longer context. We ran one more linger-longer experiment, this time with network I/O as the target. One of the main complaints about Condor and similar systems is that the act of moving a guest job from a newly loaded host often induces significant overhead to retrieve the application’s checkpoint. Further, periodic checkpointing for fault tolerance produces bursty network traffic. This experiment shows that even the checkpoint is throttled and can be prevented from affect host jobs.Figure 9 shows two instances of a guest process moving off of a node because a host process suddenly becomes active. Moving off the node entails writing a 90MB checkpoint file over the network. This severely reduces available bandwidth for the host workload (a web server2817 in this case) in the unthrottled case shown in Figure 9a. Only after the checkpoint is finished does the web server claim most of the bandwidth.In the throttled case shown in Figure 9b, the condor daemon’s network write of the checkpoint consumes a majority of the bandwidth only until the host web server starts up. At this point, the system enters throttling mode and the bandwidth available to the checkpoint is reduced to the guest class’s threshold. Once the web server becomes idle again, the checkpoint resumes writing at the higher rate.5.2 Rate-based network clockingFinally, we look at the use of rate windows to perform an approximation of rate-based clocking of network traffic. Such clocking has been proposed as a method of preventing network contention and improving utilization in transport protocols. Specifically, modifying the TCP2817 The host process could be any network intensive process such as an FTP or a Web browser.(a) (b)Figure 9: Guest job checkpoint vs. host web server。

相关文档
最新文档