Mellanox 4036 QDR Switch White Paper


Mellanox CS7510 Smart Switch Description

PRODUCT BRIEF: CS7510 InfiniBand Switch, a 324-Port EDR 100Gb/s InfiniBand Smart Director Switch

Mellanox provides the world's first smart switch, enabling in-network computing through the Co-Design Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. The CS7510 system provides the highest-performing fabric solution in a 16U form factor, delivering 64Tb/s of full bi-directional bandwidth with 400ns port latency.

SCALING OUT DATA CENTERS WITH EXTENDED DATA RATE (EDR) INFINIBAND
Faster servers based on PCIe 3.0, combined with high-performance storage and applications that use increasingly complex computations, are causing data bandwidth requirements to spiral upward. As servers are deployed with next-generation processors, High-Performance Computing (HPC) environments and Enterprise Data Centers (EDC) will need every last bit of bandwidth delivered by Mellanox's next generation of EDR InfiniBand high-speed smart switches.

SUSTAINED NETWORK PERFORMANCE
Built with Mellanox's latest Switch-IB™ InfiniBand switch devices, the CS7510 provides up to 324 ports of 100Gb/s full bi-directional bandwidth. The CS7510 modular chassis switch provides an excellent price/performance ratio for medium to extremely large clusters, along with the reliability and manageability expected from a director-class switch.

The CS7510 is the world's first smart network switch, designed to enable in-network computing through the Co-Design SHARP technology. The Co-Design architecture enables all active data center devices to accelerate the communication frameworks, resulting in order-of-magnitude application performance improvements.

WORLD-CLASS DESIGN
The CS7510 is an elegant director switch designed for performance, serviceability, energy savings and high availability. It comes with highly efficient, 80 PLUS Gold and Energy Star certified AC power supplies. The leaf and spine blades and management modules, as well as the power supplies and fan units, are all hot-swappable to help eliminate downtime.

COLLECTIVE COMMUNICATION ACCELERATION
Collective is a term used to describe communication patterns in which all members of a group of communication endpoints participate. Collectives have implications for overall application performance and scale. The CS7510 introduces the Co-Design SHARP technology, which enables the switch to manage collective communications using embedded hardware. Switch-IB 2 improves the performance of selected collective operations by processing the data as it traverses the network, eliminating the need to send data multiple times between endpoints. This decreases the amount of data traversing the network and frees up CPU resources for computation rather than using them to process communication.

MANAGEMENT
The CS7510, with its dual-core x86 CPU, comes with an onboard subnet manager, enabling simple out-of-the-box fabric bring-up for up to 2048 nodes. The CS7510 runs the same MLNX-OS® software package as Mellanox FDR products to deliver complete chassis management of the firmware, power supplies, fans and ports.

FEATURES

Mellanox CS7510
- 16U modular chassis
- 36 QSFP28 EDR 100Gb/s InfiniBand ports per dual-IC leaf blade

Switch Specifications
- Compliant with IBTA 1.21 and 1.3
- 9 virtual lanes: 8 data + 1 management
- 256 to 4Kbyte MTU
- 4x 48K-entry linear forwarding database

Management Ports
- DHCP
- Familiar industry-standard CLI
- Management over IPv6
- Management IP
- SNMP v1, v2, v3
- Web UI

Fabric Management
- On-board Subnet Manager supporting fabrics of up to 2048 nodes
- Unified Fabric Manager™ (UFM™) Agent

Connectors and Cabling
- QSFP28 connectors
- Passive copper or active fiber cables
- Optical modules

Indicators
- Per-port status LEDs: link, activity
- System status LEDs: system, fans, power supplies
- Port error LED
- Unit ID LED

Physical Characteristics
- Dimensions: 28''H x 17.64''W x 30.3''D
- Weight (fully populated): 275kg (606lb)

Power Supply
- Hot-swappable with N+N redundancy
- Input range: 180-265VAC
- Frequency: 47-63Hz, single-phase AC

Cooling
- Hot-swappable fan trays
- Front-to-rear air flow
- Auto heat-sensing fans

Power Consumption (typical, fully populated)
- Passive cables: 4939W
- Active cables: 6543W

COMPLIANCE

Safety
- CB, cTUVus, CE, CU

EMC (Emissions)
- CE, FCC, VCCI, ICES, RCM

Operating Conditions
- Operating: 0ºC to 40ºC
- Non-operating: -40ºC to 70ºC
- Humidity (operating): 10% to 85%, non-condensing
- Altitude (operating): -60 to 3200m

Acoustic
- ISO 7779
- ETS 300 753

Others
- RoHS compliant
- 1-year warranty
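For context, the collective operations that SHARP offloads are the ones applications typically issue through MPI. Below is a minimal sketch in C of one such collective, a global sum via MPI_Allreduce. It is generic MPI code, not SHARP-specific; when a fabric's SHARP support is enabled in the MPI library, a reduction of this shape is what the switch can aggregate in-network instead of the hosts.

```c
/* Minimal MPI allreduce: each rank contributes one value and every
 * rank receives the global sum.  Generic MPI code; with SHARP enabled
 * in the underlying library, this reduction can be executed by the
 * switch hardware rather than by the endpoints. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* each rank's contribution */
    double global = 0.0;

    /* All ranks participate; the result is delivered to every rank. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %g\n", size, global);

    MPI_Finalize();
    return 0;
}
```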

Cloud Computing Materials

Overview of the Mellanox InfiniBand High-Speed Network Acceleration Solution at the Southwest Petroleum Exploration Research Institute

Oil plays a key role in the modern industrial system; because it is non-renewable and cannot easily be replaced by other energy sources in the short term, oil companies treat exploration as their top priority.

The goal of this work is to locate oil reservoirs in strata thousands of meters below the surface. Strong technical support is the key to improving exploration efficiency and return on investment, and rapidly evolving, ever faster high-performance computing systems are an indispensable part of the required technology.

Using high-performance computing to make petroleum exploration more accurate and efficient is now a consensus across the world's oil industry.

Compute clusters built on standard technologies are increasingly favored for their excellent price/performance and complete ecosystem support.

Mellanox InfiniBand, the highest-performing standard network interconnect for today's compute clusters, can further accelerate petroleum exploration software through its high bandwidth, low latency, high scalability and low CPU utilization.

In the second quarter of 2012, the Southwest Petroleum Exploration Research Institute deployed a new system built on Mellanox InfiniBand QDR (40Gb/s), bringing new momentum to the institute's exploration work and greatly increasing the precision and complexity of its seismic data processing.

This article describes the optimized system architecture and how an InfiniBand cluster accelerates petroleum exploration.

Introduction to InfiniBand Technology

InfiniBand is a standard defined by the InfiniBand Trade Association (IBTA).

It is an I/O interconnect technology originally conceived to replace the PCI bus.

InfiniBand is applied mainly in enterprise networks and data centers, and can also be used in high-speed line-rate routers, switches and large telecom equipment.

The design idea behind InfiniBand is to use a central fabric, that is, central InfiniBand switches, to establish a single connection link among remote storage, network and server devices, with the central InfiniBand switch directing the traffic.

On the TOP500 list of the world's fastest supercomputers published in November 2011, 42% of the systems used InfiniBand as their interconnect, including 55 of the top 100, and the share has been rising year over year.

Huawei CloudEngine Series Switches VXLAN Technology White Paper
About This Chapter
This chapter describes how VXLAN works:
2.1 Basic Concepts
2.2 Packet Format
2.3 Tunnel Establishment and Maintenance
2.4 Data Packet Forwarding
2.5 VXLAN QoS
2 Principles
Table 2-1 Controller-related concepts

Controller: the control-plane server of the OpenFlow protocol; all path computation and management is performed by this standalone controller. Typically, a blade server can serve as the controller.

Forwarder: the forwarding-plane device of the OpenFlow protocol; it handles only data forwarding.

OpenFlow protocol: a key protocol in SDN and the communication channel between the controller and the forwarders. The controller delivers information to the forwarders through OpenFlow.
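As background for the packet-format discussion referenced in section 2.2, here is a rough C sketch of the 8-byte VXLAN header defined in RFC 7348. The struct and helper names are illustrative assumptions, not taken from Huawei's implementation.

```c
/* VXLAN header as defined in RFC 7348: 8 bytes carried inside UDP
 * (destination port 4789).  Illustrative sketch only. */
#include <stdint.h>
#include <string.h>

struct vxlan_hdr {
    uint8_t flags;            /* bit 0x08 (the I flag) marks a valid VNI */
    uint8_t reserved1[3];     /* must be zero */
    uint8_t vni_rsvd[4];      /* 24-bit VNI, then 8 reserved bits */
};

/* Fill a VXLAN header for a given 24-bit VNI, byte by byte so the
 * result is independent of host endianness. */
static void vxlan_hdr_init(struct vxlan_hdr *h, uint32_t vni)
{
    memset(h, 0, sizeof(*h));
    h->flags = 0x08;                     /* valid-VNI flag */
    h->vni_rsvd[0] = (uint8_t)(vni >> 16);
    h->vni_rsvd[1] = (uint8_t)(vni >> 8);
    h->vni_rsvd[2] = (uint8_t)(vni);     /* last byte stays reserved/zero */
}
```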
Document version: 01, released 2014-09-20
Huawei Technologies Co., Ltd.
Copyright © Huawei Technologies Co., Ltd. 2014. All rights reserved. No part of this document may be excerpted, reproduced or transmitted in any form by any organization or individual without the company's prior written permission.

NSFOCUS Network Intrusion Prevention System Product White Paper

© 2011 NSFOCUS ■ Copyright Notice: Unless otherwise specifically noted, all text, document formats, illustrations, photographs, methods, processes and other content appearing in this document are copyrighted by NSFOCUS and protected by applicable intellectual-property and copyright law.

No individual or organization may copy or quote any portion of this document in any form without NSFOCUS's written authorization.

Contents

I. Preface
II. Why an Intrusion Prevention System Is Needed
2.1 Limitations of firewalls
2.2 Shortcomings of intrusion detection systems
2.3 Characteristics of intrusion prevention systems
III. How to Evaluate an Intrusion Prevention System
IV. The NSFOCUS Network Intrusion Prevention System
4.1 Architecture
4.2 Main functions
4.3 Product features
4.3.1 An intrusion detection mechanism fusing multiple techniques
4.3.2 Deep intrusion prevention across layers 2-7
4.3.3 Powerful firewall functions
4.3.4 Advanced web threat defense
4.3.5 Flexible, efficient virus defense
4.3.6 Object-based virtual systems
4.3.7 Application-based traffic management
4.3.8 Practical Internet behavior management
4.3.9 Flexible deployment options
4.3.10 Strong management capabilities
4.3.11 A complete reporting system
4.3.12 Comprehensive high availability
4.3.13 Rich response methods
4.3.14 Highly reliable self-protection
4.4 Solutions
4.4.1 Multi-link protection
4.4.2 Switched protection
4.4.3 Routed protection
4.4.4 Hybrid protection
V. Conclusion

I. Preface

With the development of networking and information technology, and especially the broad adoption of the Internet, networks are gradually changing how people live and work.

More and more government and enterprise organizations have built network-dependent business information systems such as e-government, e-commerce, online banking and online office work, profoundly influencing every industry, and the importance of information security keeps rising.

In recent years, the security problems enterprises face have grown ever more complex and threats are increasing rapidly, especially blended threats such as hacking, worms, trojans and backdoors, spyware, botnets, DDoS attacks, spam, and network resource abuse (P2P downloads, instant messaging, online games, video), which greatly trouble users and severely damage enterprise information networks.

Mellanox High-Speed, Low-Cost DAC Cable Description

INTERCONNECT PRODUCT BRIEF: MCP1600-E0xxEyy, 100Gb/s QSFP28 Direct Attach Copper Cable

Mellanox® MCP1600-E0xxEyy DAC cables are high-speed, cost-effective alternatives to fiber optics in InfiniBand 100Gb/s EDR applications.

Mellanox QSFP28 passive copper cables contain four high-speed copper pairs, each operating at data rates of up to 25Gb/s. Each QSFP28 port comprises an EEPROM providing product information, which can be read by the host system. Raw cables are provided from different sources to ensure supply-chain robustness.

Mellanox's unique quality passive copper cable solutions provide power-efficient connectivity for short-distance interconnects. They enable higher port bandwidth, density and configurability at low cost and with reduced power requirements in the data center.

Rigorous cable production testing ensures the best out-of-the-box installation experience, performance and durability.

The original brief includes Tables 1-5 (absolute maximum ratings; operational, electrical and mechanical specifications; part numbers and descriptions) and mechanical schematics (Figure 1, assembly bending radius; Figure 2, cable length definition). The accompanying notes:

Note 1. The minimum assembly bending radius (close to the connector) is 10x the cable's outer diameter. The repeated bend (far from the connector) is also 10x the cable's outer diameter. The single bend (far from the connector) is 5x the cable's outer diameter.

Note 2. In the part numbers MCP1600-E0xxEyy, xx = reach and yy = wire gauge.

Warranty Information: Mellanox LinkX direct attach copper cables include a 1-year limited hardware warranty, which covers parts repair or replacement.

Mellanox Switch Study Notes

I. Basic Terminology

1. IPoIB: runs the TCP/IP protocol over an InfiniBand network. The IB network is used, IP addresses are configured, and both ends are IB adapters.

2. EoIB: runs the Ethernet protocol over an IB network; an IP address is configured, but one end is an IB adapter (uplink) and the other end is an Ethernet NIC (downlink). Simply put, IPoIB needs only InfiniBand switches, whereas EoIB requires a BridgeX gateway. The two ports of a BridgeX can both be connected to the IB network or both to Ethernet, or port 1 to IB and port 2 to Ethernet, but not port 1 to Ethernet and port 2 to IB.

The three Ethernet ports of the BridgeX do not pass traffic to one another. (Open question: what exactly is the BridgeX as a product, and where are ports 1 and 2?)

3. SRP: the SCSI RDMA Protocol, an IB SAN protocol also known as the SCSI Remote Protocol. Its main purpose is to carry SCSI commands and data over an InfiniBand network via RDMA, similar to iSCSI.

The protocol is storage-oriented and provides high-bandwidth, high-performance storage access.

The SCSI protocol itself transfers commands, status and block data between hosts and storage devices.

Note: RDMA (Remote Direct Memory Access) is a technology created to eliminate the server-side data-processing latency incurred during network transfers.

RDMA sends data across the network directly into a computer's memory, moving it quickly from one system into remote system memory without involving the operating system, so it consumes very little of the host's processing capacity. It eliminates external memory copies and context switches, freeing memory bandwidth and CPU cycles to improve application performance.
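On Linux hosts, RDMA-capable adapters are usually reached through the verbs API. The sketch below is a minimal example, assuming libibverbs is installed (link with -libverbs): it only enumerates RDMA devices and queries the state of port 1. A full RDMA transfer additionally involves protection domains, memory registration and queue pairs.

```c
/* Enumerate RDMA-capable devices and report the state of port 1,
 * using the standard libibverbs API.  Minimal sketch: error handling
 * is abbreviated. */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)
            printf("%s: port 1 state=%d, active MTU enum=%d\n",
                   ibv_get_device_name(list[i]),
                   port.state, port.active_mtu);

        ibv_close_device(ctx);
    }
    ibv_free_device_list(list);
    return 0;
}
```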

4. MPI: aimed mainly at computational applications (heavy computation, encryption and decryption, and so on) that need parallel processing; it is widely used in HPC.

MPI is a tool that helps users run parallel computations; it has its own standards (MPI-1, MPI-2, etc.) and open-source implementations.

It does two things: (1) it starts a process on each node that loads and runs the program, and (2) it handles the communication among those processes while the program runs. Parallel execution can be understood this way: with 8 nodes, a program launched under MPI produces 8 sets of output; with 4 nodes, 4 sets, as the sketch below shows.
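A minimal sketch of that launch model, assuming an MPI implementation such as Open MPI or MPICH is available: mpirun starts one copy of the program per requested process, and each copy prints its own rank, so a launch across 8 slots yields 8 lines of output.

```c
/* Minimal MPI program: mpirun launches one copy per process slot,
 * and each copy reports its rank.
 * Typical build/run:  mpicc hello.c -o hello && mpirun -np 8 ./hello */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("rank %d of %d is running\n", rank, size);

    MPI_Finalize();
    return 0;
}
```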

Mellanox Ethernet Network Device User Manual

SOLUTION BRIEF: Delivering In-Memory Computing Using Mellanox Ethernet Infrastructure and MinIO's Object Storage Solution

EXECUTIVE SUMMARY
Analytic tools such as Spark, Presto and Hive are transforming how enterprises interact with and derive value from their data. Designed to be in-memory, these computing and analytical frameworks process volumes of data 100x faster than Hadoop MapReduce and HDFS, transforming batch processing tasks into real-time analysis. These advancements have created new business models while accelerating the process of digital transformation for existing enterprises.

A critical component in this revolution is the performance of the networking and storage infrastructure deployed in support of these modern computing applications. Considering the volumes of data that must be ingested, stored, and analyzed, it quickly becomes evident that the storage architecture must be both highly performant and massively scalable.

This solution brief outlines how the promise of in-memory computing can be delivered using high-speed Mellanox Ethernet infrastructure and MinIO's ultra-high-performance object storage solution.

KEY BUSINESS BENEFITS

MinIO and Mellanox: Better Together. High-performance object storage requires the right server and networking components. With industry-leading performance combined with the best innovation to accelerate data infrastructure, Mellanox provides the networking foundation needed to connect in-memory computing applications with MinIO high-performance object storage. Together, they allow in-memory compute applications to access and process large amounts of data to provide high-speed business insights.

Simple to Deploy, Simpler to Manage. MinIO can be installed and configured within minutes simply by downloading a single binary and executing it. The number of configuration options and variations has been kept to a minimum, resulting in near-zero system administration tasks and few paths to failure. Upgrading MinIO is done with a single command which is non-disruptive and incurs zero downtime. MinIO is distributed under the terms of the Apache License Version 2.0 and is actively developed on GitHub. MinIO's development community starts with the MinIO engineering team and includes all of the 4,500 members of MinIO's Slack workspace. Since 2015, MinIO has gathered over 16K stars on GitHub, making it one of the top 25 Golang projects by star count.

IN-MEMORY COMPUTING
With data constantly flowing from multiple sources (logfiles, time-series data, vehicles, sensors, and instruments), the compute infrastructure must constantly improve to analyze data in real time. In-memory computing applications, which load data into the memory of a cluster of servers thereby enabling parallel processing, are achieving speeds up to 100x faster than traditional Hadoop clusters that use MapReduce to analyze and HDFS to store data.

Although Hadoop was critical to helping enterprises understand the art of the possible in big data analytics, other applications such as Spark, Presto, Hive, H2O.ai, and Kafka have proven to be more effective and efficient tools for analyzing data. The reality of running large Hadoop clusters is one of immense complexity, requiring expensive administrators and a highly inefficient aggregation of compute and storage. This has driven the adoption of tools like Spark, which are simpler to use and take advantage of the massive benefits afforded by disaggregating storage and compute. These solutions, based on low-cost, memory-dense compute nodes, allow developers to move analytic workloads into memory where they execute faster, thereby enabling a new class of real-time analytical use cases.

These modern applications are built using cloud-native technologies and, in turn, use cloud-native storage. The emerging standard for both the public and private cloud, object storage is prized for its near-infinite scalability and simplicity, storing data in its native format while offering many of the same features as block or file. By pairing object storage with high-speed, high-bandwidth networking and robust compute, enterprises can achieve remarkable price/performance results.

DISAGGREGATE COMPUTE AND STORAGE
Designed in an era of slow 1GbE networks, Hadoop (MapReduce and HDFS) achieved its performance by moving compute tasks closer to the data. A Hadoop cluster often consists of many hundreds or thousands of server nodes that combine both compute and storage. The YARN scheduler first identifies where the data resides, then distributes the jobs to the specific HDFS nodes. This architecture can deliver performance, but at a high price, measured in low compute utilization, costs to manage, and costs associated with its complexity at scale. Also, in practice, enterprises don't experience high levels of data locality, with the result being suboptimal performance.

Due to improvements in storage and interconnect speeds, it has become possible to send and receive data remotely at high speeds with little (less than 1 microsecond) to no latency difference compared to local storage. As a result, it is now possible to separate storage from compute with no performance penalty. Data analysis is still possible in near real time because the interconnect between the storage and the compute is fast enough to support such demands.

By combining dense compute nodes, large amounts of RAM, ultra-high-speed networks and fast object storage, enterprises are able to disaggregate storage from compute, creating the flexibility to upgrade, replace, or add individual resources independently. This also allows for better planning for future growth, as compute and storage can be added independently and when necessary, improving utilization and budget control. Multiple processing clusters can now share high-performance object storage so that different types of processing, such as advanced queries, AI model training, and streaming data analysis, can run on their own independent clusters while sharing the same data stored on the object storage. The result is superior performance and vastly improved economics.

HIGH PERFORMANCE OBJECT STORAGE
With in-memory computing, it is now possible to process volumes of data much faster than with Hadoop MapReduce and HDFS. Supporting these applications requires a modern data infrastructure with a storage foundation that provides both the performance required by these applications and the scalability to handle the immense volume of data created by the modern enterprise.

Building large clusters of storage is best done by combining simple building blocks together, an approach proven out by the hyper-scalers. By joining one cluster with many other clusters, MinIO can grow to provide a single, planet-wide global namespace. MinIO's object storage server has a wide range of optimized, enterprise-grade features, including erasure code and bitrot protection for data integrity, identity management, access management, WORM and encryption for data security, and continuous replication and lambda compute for dynamic, distributed data.

MinIO object storage is the only solution that provides throughput rates over 100GB/sec and scales easily to store thousands of petabytes of data under a single namespace. MinIO runs Spark queries faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms.

LATENCY AND THROUGHPUT
Industry-leading performance and IT efficiency, combined with the best of open innovation, assist in accelerating big data analytics workloads which require intensive processing. Mellanox ConnectX® adapters reduce CPU overhead through advanced hardware-based stateless offloads and flow-steering engines. This allows big data applications utilizing TCP or UDP over IP transport to achieve the highest throughput, allowing completion of heavier analytic workloads in less time for big data clusters, so organizations can unlock and efficiently scale data-driven insights while increasing application densities for their business.

Mellanox Spectrum® Open Ethernet switches feature consistently low latency and can support a variety of non-blocking, lossless fabric designs while delivering data at line-rate speeds. Spectrum switches can be deployed in a modern spine-leaf topology to efficiently and easily scale for future needs. Spectrum also delivers packet processing without buffer fairness concerns. The single shared buffer in Mellanox switches eliminates the need to manage port mapping and greatly simplifies deployment. In an object storage environment, fluid resource pools greatly benefit from fair load balancing. As a result, Mellanox switches deliver optimal and predictable network performance for data analytics workloads.

Mellanox 25, 50 or 100G Ethernet adapters, together with Spectrum switches, form an industry-leading end-to-end, high-bandwidth, low-latency Ethernet fabric. The combination of in-memory processing for applications and high-performance object storage from MinIO, along with the reduced latency and throughput improvements made possible by Mellanox interconnects, creates a modern data center infrastructure that provides a simple yet highly performant and scalable foundation for AI, ML, and big data workloads.

CONCLUSION
Advanced applications that use in-memory computing, such as Spark, Presto and Hive, are revealing business opportunities to act in real time on information pulled from large volumes of data. These applications are cloud-native, which means they are designed to run on the computing resources in the cloud, a place where Hadoop HDFS is being replaced in favor of data infrastructures that disaggregate storage from compute. These applications now use object storage as the primary storage vehicle, whether running in the cloud or on-premises.

Employing Mellanox networking and MinIO object storage allows enterprises to disaggregate compute from storage, achieving both performance and scalability. By connecting dense processing nodes to MinIO object storage nodes with high-performance Mellanox networking, enterprises can deploy object storage solutions that provide throughput rates over 100GB/sec and scale easily to store thousands of petabytes of data under a single namespace. The joint solution runs queries faster, captures streaming data more effectively, and shortens the time needed to test, train and deploy AI algorithms, effectively replacing existing Hadoop clusters with a data infrastructure solution, based on in-memory computing, that consumes a smaller data center footprint yet provides significantly more performance.

WANT TO LEARN MORE?
Learn more about MinIO object storage: https://min.io/
Learn more about the Mellanox end-to-end Ethernet storage fabric: /ethernet-storage-fabric/

NSFOCUS O&M Security Management System

NSFOCUS O&M Security Management System Product White Paper © 2018 NSFOCUS ■ Copyright Notice: Unless otherwise specifically noted, all text, document formats, illustrations, photographs, methods, processes and other content appearing in this document are copyrighted by NSFOCUS and protected by applicable intellectual-property and copyright law.

No individual or organization may copy or quote any portion of this document in any form without NSFOCUS's written authorization.

Contents

I. Background
1.1 Shared O&M accounts and coarse-grained privilege management
1.2 Coarse-grained audit logs, easily lost, hard to trace
1.3 Pressure of regulatory compliance
1.4 Heavy and tedious O&M work
1.5 The boom in virtualization and cloud technology
II. Product Overview
2.1 The O&M security management system
2.2 Goals
2.3 Application scenarios
2.3.1 Administrators define O&M management policies
2.3.2 Ordinary O&M users access target devices
2.4 System value
III. Product Description
3.1 System functions
3.2 System architecture
IV. Product Features
4.1 A multi-dimensional, fine-grained authentication and authorization system
4.1.1 Flexible user authentication methods
4.1.2 Fine-grained O&M access control
4.1.3 Multi-dimensional O&M access authorization
4.2 An efficient, intelligent asset management system
4.2.1 Intelligent inspection of managed devices and device accounts
4.2.2 Efficient management of devices and device accounts
4.3 A rich variety of O&M channels
4.3.1 Web access (B/S)
4.3.2 Client access (C/S)
4.3.3 Seamless cross-platform management
4.3.4 Strong application extensibility
4.4 High-fidelity, easy-to-understand, quick-to-locate auditing
4.4.1 Dual-layer auditing of database operations at the graphical and command-line levels
4.4.2 Auditing based on unique identities
4.4.3 Full-session auditing of O&M behavior
4.4.4 "Zero-management" audit data
4.4.5 Text search to locate and replay session recordings
4.5 Stable and reliable system security safeguards
4.5.1 System security safeguards
4.5.2 Data security safeguards
4.6 Rapid deployment, simple and easy to use
4.6.1 Physically out of band, logically in line
4.6.2 Configuration wizard
V. Customer Benefits

Figures: 1.1 The current relationship between users and O&M accounts; 2.1 Core approach; 2.2 O&M administrators define policies; 2.3 Ordinary users access target devices; 3.1 System functions; 3.2 System architecture; front-end proxy architecture diagram; 4.1 Dual-layer graphical and command-line auditing of database operations; 4.2 Text search to locate recording playback; 4.3 Product deployment.

I. Background

With the development of information technology, the IT systems of enterprises and public institutions keep growing: network scale is expanding rapidly and device counts are surging. The focus of construction is gradually shifting from building the network platform to an operations and maintenance phase characterized by deepening applications and improving returns, and IT operations and security management are steadily converging.


Mellanox GD4036 High-Performance QDR InfiniBand Switch
Mellanox is a global leader in grid backbone solutions, committed to providing the most comprehensive solutions for next-generation data center networked computing.

Mellanox's products span everything from servers to network switching to storage interconnect. With advanced architectural design and rigorous quality control, Mellanox hardware and software have won wide acclaim from users worldwide; the company has established OEM partnerships with IBM, HP, SGI, NEC, SUN and other partners, and Mellanox products and solutions are deployed across a broad range of industries.

As an industry-leading high-performance InfiniBand switching solution, the Mellanox GD4036 delivers an unprecedented level of performance and scalability for high-performance computing clusters and grids.

The GD4036 switch lets high-performance applications run on distributed server, storage and network resources.

The Mellanox GD4036 provides carefully engineered QDR 40Gb/s connectivity for 36 nodes in a single chassis. Multiple GD4036 units, on their own or combined with other Mellanox products, can build larger cluster computing systems with anywhere from a dozen to several thousand nodes; the internally non-blocking switch design gives the cluster the most reliable and efficient communication environment possible.
I. Mellanox GD4036 Modules
The Mellanox GD4036 consists mainly of the chassis, sPSU power modules and the corresponding system cooling modules. The whole unit uses a modular, cable-free design with tight-fitting connectors, which greatly improves reliability while making installation and subsequent maintenance easier.

GD4036 chassis and components

The Mellanox GD4036 chassis follows industry-standard design and fully conforms to 19-inch rack mounting, supporting both network and server cabinets. At 1U high, several GD4036 units can easily be deployed in a standard 42U cabinet, and the included rack rails make cabinet installation easier.
1. External Ports

The Mellanox GD4036 chassis provides 36 QDR 40Gb/s ports, for a total throughput of 2.88Tb/s.
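The aggregate figure follows directly from the port count and the full-duplex QDR line rate:

\[
36~\text{ports} \times 40~\text{Gb/s} \times 2~\text{(transmit and receive)} = 2880~\text{Gb/s} = 2.88~\text{Tb/s}
\]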
2. Management Module

The Mellanox GD4036 has a built-in subnet manager, so there is no need to install subnet manager software on the servers. The management module provides a standard DB9 serial port and an Ethernet interface for remote management.

It also provides a standard USB port for software and firmware upgrades.

Besides the standard 36-port switch silicon, the GD4036 also includes an onboard monitoring system consisting of a low-power CPU and associated memory. Running the corresponding firmware, it monitors the power modules, fan cooling status, system operating temperature and other details in real time; for the switch ports it supports enabling, disabling and rate adjustment.
3. Fan Module

The Mellanox GD4036 chassis comes with one cooling module as standard, containing two redundant fans; it supports hot-swap replacement, improving availability and serviceability.

4. Power Modules

The chassis ships with two power modules with a maximum rating of 350W each, ensuring the system can run with N+1 or N+N redundant power. Each power module powers all modules in the chassis, so users need not worry about the zoned-power failure problems found in some competing products.

All power modules sit on the two sides of the chassis and support hot-swap replacement, and they are not on the same side as the external cable connections. This design takes full account of the challenge cabling poses to future power-system maintenance and makes servicing much more convenient.

II. Mellanox GD4036 Reliability

The Mellanox GD4036 is fully modular, and all components are joined through advanced, reliable connector modules, fundamentally guaranteeing the reliability of the system's internal communication links.

For management, the Mellanox GD4036 has a built-in subnet manager and an integrated Device Manager providing hardware monitoring, and it fully supports management by the Unified Fabric Manager (UFM) software.

For power, the Mellanox GD4036 supports N+1 and N+N redundant power configurations; every power module supplies the entire chassis and shares the load evenly, and power modules support hot-swap replacement, ensuring uninterrupted system operation.
The Mellanox GD4036 provides one cooling module with two cooling fans that back each other up, and the advanced chassis design maximizes cooling efficiency.

The Mellanox GD4036 has passed FCC, UL, CB, VCCI and other international certifications, winning recognition from authoritative bodies for its product design, operational reliability and electromagnetic stability.
III. Mellanox GD4036 Performance

The Mellanox GD4036 uses a fully interconnected, non-blocking architecture providing a total throughput of 2.88Tb/s; the latency between two ports on the same internal switching element is under 100 nanoseconds.
IV. Mellanox GD4036 Management

The Mellanox GD4036 offers command-line management as well as the advanced UFM management software. With Mellanox UFM, the InfiniBand network is no longer a mysterious black box; monitoring and management of the entire network become transparent and systematic.

1. Key Features of Mellanox UFM

● Application-centric network management;
● Unlimited scalability, with seamless support for applications, databases and storage systems;
● Intuitive display of network traffic and device health, ensuring users have a clear and deep grasp of network operation;
● Advanced congestion detection, analysis and optimization;
● Communication routing optimization based on application workflows and network topology;
● Configurable and tunable fault alerting, keeping users fully informed of the network's communication status;
● Network partitioning and multi-service-level partitioning, easy for users to define and adjust;
● Multiple independent, application-based communication domains within a single shared network;
● Centralized management of InfiniBand network devices, making device administration far easier in large networks;
● A secure and reliable HA architecture that keeps the UFM management system highly available;
● An API that lets users integrate UFM into an existing umbrella management system.

2. Mellanox UFM Network Visibility and Control

Mellanox UFM integrates an advanced monitoring engine that provides real-time monitoring of the InfiniBand switches and of the hosts connected to the InfiniBand network.

Mellanox UFM provides a customizable dashboard showing network health and the CPU, memory and disk usage of hosts. From the dashboard one can readily see the top 10 servers by network bandwidth consumption (the count is configurable), the top 10 congestion points in the network, a real-time list of network fault alerts, where the network's congestion hot spots are, and more.
3. Automatic Topology Discovery and Real-Time Bottleneck Display

Mellanox UFM automatically discovers the network topology and draws the corresponding topology map. Through real-time monitoring of the communication links, it automatically detects congestion hot spots in the network and displays them to the user graphically. This helps users pinpoint network congestion precisely and provides reference data for the next round of performance tuning.

4. Partition Optimization and Routing Optimization

Mellanox UFM provides advanced communication optimization. For compute groups with different requirements (low latency, high bandwidth, and so on), corresponding logical compute resource groups can be created; communication among nodes within a group is automatically optimized for the configured requirement, so network traffic is tiered and overall communication efficiency improves greatly.
Mellanox UFM also provides its own Traffic Optimized Routing (TOR) algorithm: after optimization, congestion hot spots are automatically rebalanced, greatly reducing the loss of overall computing efficiency caused by contention for network bandwidth.

5. Fabric-Wide Communication Log Collection

Mellanox UFM automatically collects and stores communication logs for the entire network, switches as well as compute and I/O nodes, providing strong support for analyzing communication behavior and troubleshooting faults.

6. InfiniBand Devices and Host Platforms Supported by Mellanox UFM

Switch platforms supported by Mellanox UFM:
Mellanox GD2004/2012 series
Mellanox Vantage series
Mellanox GD4000 series
Mellanox IS5000 series
Host platforms supported by Mellanox UFM:
RedHat 5.1/5.2/5.3/5.4/5.5/5.6/6.0
CentOS 5.1/5.2/5.3/5.4/5.5/5.6/6.0
Windows 2003/2008
V. Mellanox GD4036 at OEM Vendors

The Mellanox GD4036 switch has passed the rigorous OEM qualification testing of IBM, HP, Fujitsu, Sugon, Inspur and several other server vendors, is on the market, and has already won a substantial number of orders, further demonstrating that the GD4036's reliability, performance and scalability have earned the full recognition of many international vendors and users.
