vplex_geosynchrony_5.3_technical_differences_srg


Welcome to VPLEX GeoSynchrony 5.3 Technical Differences course.

Click the Notes tab to view text that corresponds to the audio recording.

Click the Supporting Materials tab to download a PDF version of this eLearning.

Copyright © 1996, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, Data Domain, RSA, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra, Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation Technology, Common Information Model, Configuration Intelligence, Configuresoft, Connectrix, CopyCross, CopyPoint, Dantz, DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E-Lab, EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, Infra, InputAccel, InputAccel Express, Invista, Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, nLayers, OnAlert, OpenScale, PixTools, Powerlink, PowerPath, PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, the RSA logo, SafeLine, SAN Advisor, SAN Copy, SAN Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix, Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, VMAX, Vblock, Viewlets, Virtual Matrix, Virtual Matrix Architecture, Virtual Provisioning, VisualSAN, VisualSRM, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression, xPresso, YottaYotta, the EMC logo, and where information lives, are registered trademarks or trademarks of EMC Corporation in the United States and other countries.

All other trademarks used herein are the property of their respective owners.

© Copyright 2014 EMC Corporation. All rights reserved. Published in the USA.

The purpose of this course is to cover the VPLEX GeoSynchrony 5.3 features: VPLEX Integrated Array Services (VIAS), IPv6 support, performance monitoring improvements, and vStorage APIs for Array Integration (VAAI) XCOPY support. Performance monitoring improvements include virtual volume statistics and the ability to monitor events and alerts from an external client.

This course is intended for those with prior VPLEX experience who are involved in VPLEX solution design, implementations and post-installation break/fix.

This module provides an overview of the features and enhancements in VPLEX GeoSynchrony 5.3 including VIAS, VAAI XCOPY, IPv6 and events and alerts.

VPLEX GeoSynchrony 5.3 introduces key features and enhancements that improve storage provisioning, scalability and monitoring. With VIAS, administrators are now able to provision storage using a single management interface. VPLEX also supports IPv6 on VS2 engines. Administrators are also able to obtain performance metrics for all virtual volumes to improve overall system scalability. Additionally, alerts and events are more informative. As a result, VPLEX is easier to monitor and support. With VAAI XCOPY support, ESXi servers offload storage-related tasks to VPLEX for processing; this frees up ESXi CPU cycles for other jobs.

VIAS provides a single management interface that makes provisioning storage from VPLEX easier for the administrator when using VMAX and VNX arrays. To use VIAS, VPLEX must be zoned to the arrays. Once zoned, administrators can provision and de-provision volumes, as needed, directly from VPLEX.

With VIAS, volumes can be provisioned from pools.

Using VIAS, VPLEX communicates with the arrays to create volumes from pools and mask them to VPLEX. Next, the volumes are imported into VPLEX. Once imported, VPLEX presents them to hosts as virtual volumes.

Using a single management interface makes provisioning VPLEX storage time-efficient and allows end-to-end stack integration.

VPLEX GeoSynchrony 5.3 now supports IPv6. This brings a vastly larger address space and improved security, as IPsec is part of the IPv6 protocol. IPv6 provides more efficient routing because prefixes can be aggregated into a single prefix for the IPv6 Internet. It also provides more efficient packet processing: an IPv6 packet contains no IP-level checksum, so the checksum does not have to be recalculated at each hop. Lastly, it provides IP mobility, as nodes that use IPv6 addresses are able to change their physical connection without having to change their IPv6 address.
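The prefix-aggregation point above can be illustrated with Python's standard ipaddress module: two adjacent /64 prefixes collapse into a single /63 route. The prefixes are arbitrary documentation addresses, not VPLEX-specific values.

```python
import ipaddress

# Two adjacent IPv6 /64 prefixes, as a site might advertise them individually.
prefixes = [
    ipaddress.ip_network("2001:db8::/64"),
    ipaddress.ip_network("2001:db8:0:1::/64"),
]

# collapse_addresses merges adjacent or overlapping networks into the
# smallest covering set -- here, a single /63 prefix.
aggregated = list(ipaddress.collapse_addresses(prefixes))
print(aggregated)  # [IPv6Network('2001:db8::/63')]
```

Routers on the IPv6 Internet benefit from this because one aggregated entry replaces many specific ones in the routing table.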

External clients can now receive events and alerts using an Advanced Message Queuing Protocol (AMQP) client. Management Servers in a VPLEX environment contain an AMQP Server. The AMQP server acts as a consolidator of event data and as a server for clients to obtain event data. VPLEX events are generated on both the Directors and the Management Servers and sent to the AMQP Servers. Event data is also passed across the VPLEX management network to the Management Servers.

However, if there is a cluster partition, event data cannot be forwarded between Management Servers. Events are also logged on the Directors themselves. Event data sent to the Management Server is translated from VPLEX-internal format into event data that AMQP clients can consume and view.

This data is then sent for viewing on external clients that have registered for events and alerts.
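A registered external client (typically built on an AMQP client library such as pika) would subscribe to the Management Server's AMQP server and process incoming event messages. The sketch below shows only the client-side filtering step, and the JSON message layout (severity and description fields) is a hypothetical example, not the documented VPLEX payload schema.

```python
import json

def filter_alerts(raw_messages, min_severity="warning"):
    """Keep only events at or above a severity threshold.

    The message format (JSON with 'severity' and 'description' fields)
    is a hypothetical illustration -- the real VPLEX AMQP payload schema
    is defined in the product documentation.
    """
    order = {"info": 0, "warning": 1, "error": 2, "critical": 3}
    threshold = order[min_severity]
    alerts = []
    for raw in raw_messages:
        event = json.loads(raw)
        if order.get(event.get("severity", "info"), 0) >= threshold:
            alerts.append(event)
    return alerts

# Two sample messages as they might arrive from the AMQP queue.
messages = [
    '{"severity": "info", "description": "director heartbeat"}',
    '{"severity": "error", "description": "back-end path lost"}',
]
print(filter_alerts(messages))  # keeps only the error event
```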

Prior to VAAI XCOPY support, ESXi servers were responsible for all copy tasks. This limited the speed of storage vMotion and inefficiently utilized ESXi CPU for copy operations.

With VPLEX GeoSynchrony 5.3, ESXi servers initiate the copy process, then VPLEX performs the copy process. As a result, ESXi CPU cores can focus on other tasks.

Three key enhancements of VPLEX GeoSynchrony 5.3 are:

1. The Management Server(s), directors, and VPLEX Witness are all upgraded to the same operating system level (SuSE Linux Enterprise Server 11 SP3). This improves system security and integrity.

2. Performance monitoring has been enhanced: metrics can be collected for each virtual volume, including IOPS, latency, and throughput bandwidth for reads and writes.

3. VPLEX can also sync with an external NTP server. This ensures VPLEX shares the same time as other systems in the environment.
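Per-volume IOPS, bandwidth, and average latency are conventionally derived from raw counters sampled over an interval. The counter names below are illustrative, not VPLEX's actual statistics names; the arithmetic is the generic calculation.

```python
def interval_metrics(ops_start, ops_end, bytes_start, bytes_end,
                     total_latency_us, interval_s):
    """Derive per-volume metrics from two counter samples.

    Counter names are illustrative placeholders; the formulas are the
    standard way IOPS, bandwidth, and mean latency fall out of counters.
    """
    ops = ops_end - ops_start
    iops = ops / interval_s
    bandwidth_mb_s = (bytes_end - bytes_start) / interval_s / 1e6
    avg_latency_us = total_latency_us / ops if ops else 0.0
    return {"iops": iops, "bandwidth_mb_s": bandwidth_mb_s,
            "avg_latency_us": avg_latency_us}

# 5-second sample: 4000 reads moving 32 MB with 8000 us of summed latency.
m = interval_metrics(1000, 5000, 0, 32_000_000, 8000, 5)
print(m)  # 800 IOPS, 6.4 MB/s, 2.0 us average latency
```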

This module focuses on the proper implementation of VIAS. Configuration and provisioning procedures are covered as part of this module.

The Storage Management Initiative-Specification (SMI-S) provider allows a VPLEX system to monitor and control storage resources on back-end arrays. As a result, VPLEX is able to provision volumes from storage pools that exist on back-end arrays. A Linux or Windows SMI-S server installed with Solutions Enabler with SMI version 1.5 or greater is required. The SMI-S provider (also known as Array Management Provider) must be registered with VPLEX. It can be registered from any Management Server. The credentials to register the Array Management Provider are stored in an EMC lockbox on the Management Server and are not stored in clear text.

Back-end arrays must be authorized to use the SMI-S provider. To authorize VMAX arrays, gatekeepers must be presented to the SMI-S provider via Fibre Channel. To authorize VNX or CX4 series arrays, the IP address of both storage processors must be registered with the SMI-S provider. Authorization can be confirmed by running symcfg list on the SMI-S Server.

It is also required to have at least one storage pool on each array and one storage view on VPLEX.

This video demonstration shows how to register an Array Management Provider (AMP) with VPLEX. The demo assumes that Solutions Enabler SMI has been successfully installed and that best practices have been followed to authorize the SMI-S provider to manage the storage arrays.

In VPLEX GeoSynchrony 5.3, virtual volumes can be provisioned from both storage pools and storage volumes that exist on back-end arrays. Pool-based provisioning uses VIAS and requires pools to be created on the EMC back-end arrays. During pool-based provisioning, VPLEX automatically creates storage volumes from pools, masks the volumes to VPLEX, and creates the associated VPLEX objects such as devices, virtual volumes, and consistency groups. It also automatically places virtual volumes into existing VPLEX storage views for hosts to access. In the first release of provisioning with VIAS, only one provisioning job can run at a time; subsequent jobs started while the first job is still in progress will error out with a descriptive message. After the initial job completes, another provisioning job can be started. This applies to both the VPLEX GUI and CLI.

Virtual volumes can also be provisioned using existing storage volumes that have already been presented to VPLEX from any back-end arrays, including supported third party arrays. VPLEX can automatically create VPLEX objects and present virtual volumes to hosts through existing storage views. It is recommended to use storage volume provisioning if back-end arrays consist of volumes that have existing data that must be protected.

It is also important to note that a mapping file is no longer required to add storage volumes to VPLEX from VNX arrays.

This video will demonstrate how to provision virtual volumes from pools.

Pool-based provisioning is performed by specifying the storage pool to use when running the virtual-volume provision command. This command accepts the base name, the pools, the number of volumes to create, and the capacity of each volume. It can also be scripted using the REST API. Here is an example of creating 6 virtual volumes that are 5 gigabytes in capacity.
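For the REST scripting path, a client would POST a JSON body to the Management Server. The sketch below only builds the request body; the argument names and their layout are assumptions for illustration, not the authoritative schema, which is defined in the VPLEX REST API documentation for GeoSynchrony 5.3.

```python
import json

def build_provision_request(base_name, pools, count, capacity_gb):
    """Build a pool-based provisioning request body.

    The argument string layout below is an illustrative placeholder;
    consult the VPLEX REST API reference for the real command syntax.
    """
    args = ("--base-name {0} --pools {1} "
            "--number-of-vols {2} --capacity {3}GB").format(
                base_name, ",".join(pools), count, capacity_gb)
    return json.dumps({"args": args})

# Mirror of the slide's example: 6 virtual volumes of 5 GB each.
body = build_provision_request("app_vol", ["SATA-Pool"], 6, 5)
print(body)
```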

The Storage Volume Provisioning Wizard is similar to the Pool Based Provisioning Wizard. The only difference is that with storage volume provisioning, storage volumes must be selected to be used as virtual volumes.

Storage volume based provisioning can also be run using the storage-tool compose command. Here is an example of creating a virtual volume using two storage volumes.

Storage volumes claimed using VIAS are unclaimed and returned to the back-end array by using the Unclaim Storage option. This automatically unmasks the volumes provisioned to VPLEX. Before unclaiming, storage volumes must not be part of a storage view or consistency group, and they must not be associated with any virtual volumes, devices, or extents.
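The unclaim preconditions above amount to a simple eligibility check. The sketch below models them with a plain dictionary for illustration; VPLEX itself performs the real validation when Unclaim Storage is invoked.

```python
def can_unclaim(volume):
    """Return True if a VIAS-claimed storage volume meets the unclaim
    preconditions: no storage view, consistency group, virtual volume,
    device, or extent association.

    'volume' is a plain dict used only for illustration.
    """
    blockers = ("storage_view", "consistency_group",
                "virtual_volume", "device", "extent")
    return not any(volume.get(b) for b in blockers)

print(can_unclaim({"name": "vol_01"}))                     # True
print(can_unclaim({"name": "vol_02", "extent": "ext_1"}))  # False
```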

Clicking the Unclaim storage button opens a window that indicates whether integrated services are being used for the volume.

Selecting the checkbox Delete integrated services storage volumes on the array and return the capacity to the storage pools causes the storage volume capacity to be returned to the pool after being unclaimed. By default, this option is not checked.

Once VIAS has been configured on VPLEX, the system uses two logs for all VIAS-related tasks. The first is bole.log, which contains the VIAS provisioning requests. This example shows a volume that was provisioned from the SATA-Pool on a VMAX array.

This is the location of the bole.log file on the Management Server.

The via.log provides more in-depth information than bole.log, including specific details about each task: execution time, volume name, consistency group, storage views, capacity, and so on.

This is the location of the via.log file on the Management Server.
