Windows Server 2016 Hyper-Converged Solution Overview


Win Server 2016 Operations Manual

Preface

This manual provides a detailed guide to the Win Server 2016 operating system, along with answers to common questions. It is intended to help users get started quickly and resolve problems encountered along the way.

For the best experience, please read this manual in the order presented below.

Chapter 1: Installing Win Server 2016

1.1 System Requirements

Win Server 2016 places certain demands on hardware and software. Before starting the installation, make sure your computer meets the minimum configuration requirements.

1.2 Installation Steps

This section walks you through the Win Server 2016 installation process with detailed steps and tips. Please follow the steps in order.

Chapter 2: Configuring Basic Settings

2.1 System Language and Regional Settings

Win Server 2016 supports a wide range of languages and regional settings. This section describes how to configure them after installation.
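As a concrete sketch, these settings can also be changed from PowerShell. The locale values below (en-US, GeoId 244 for the United States) are illustrative placeholders, not recommendations:

```powershell
# Illustrative values only - substitute your own locale and region.
Set-WinSystemLocale -SystemLocale en-US   # locale for non-Unicode programs (takes effect after reboot)
Set-Culture -CultureInfo en-US            # date, time, and number formats for the current user
Set-WinHomeLocation -GeoId 244            # home country/region (244 = United States)
Get-WinSystemLocale                       # verify the configured system locale
```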

2.2 Network Settings

Network configuration is a key part of server setup. This section explains in detail how to configure network parameters and connections.

Chapter 3: Managing Users and Permissions

3.1 User Management

Win Server 2016 lets you create and manage multiple user accounts. This section shows how to add, delete, and edit user accounts and how to set their permissions.
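A minimal sketch of these operations using PowerShell's built-in LocalAccounts cmdlets; the account name, description, and group below are hypothetical examples:

```powershell
# Hypothetical account used for illustration.
$pw = Read-Host -AsSecureString "Password for jsmith"
New-LocalUser -Name "jsmith" -Password $pw -FullName "J. Smith"      # add an account
Set-LocalUser -Name "jsmith" -Description "Help desk operator"       # edit its properties
Add-LocalGroupMember -Group "Remote Desktop Users" -Member "jsmith"  # grant a permission via group membership
Remove-LocalUser -Name "jsmith"                                      # delete the account
```

In a domain environment, the equivalent Active Directory cmdlets (New-ADUser and related) would be used instead.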

3.2 Group Management

Groups are a way to categorize and manage users. This section describes how to create and manage groups and how to add user accounts to the appropriate groups.
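For a domain-joined environment, a hedged sketch using the ActiveDirectory module (the group and member names are hypothetical, and the module requires the AD DS management tools to be installed):

```powershell
# Hypothetical group and members, for illustration only.
New-ADGroup -Name "FileServerAdmins" -GroupScope Global -GroupCategory Security
Add-ADGroupMember -Identity "FileServerAdmins" -Members "jsmith","mlee"
Get-ADGroupMember -Identity "FileServerAdmins" | Select-Object Name   # confirm membership
```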

Chapter 4: File and Folder Management

4.1 Creating and Editing Files

Win Server 2016 offers several ways to create and edit files. This section introduces a few common approaches.

4.2 Managing Files and Folders

File and folder management is critical to server operation and data storage. This section covers it in detail, including renaming, copying, pasting, and deleting.

Chapter 5: Network Service Configuration

5.1 DHCP Service Configuration

The DHCP service automatically assigns IP addresses and other network configuration information to clients. This section shows how to configure and manage it.
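As an illustrative sketch (the scope name, address ranges, and server addresses are invented for the example, and authorizing the server assumes an Active Directory domain):

```powershell
# Install the DHCP role, create a scope, and set common options.
Install-WindowsFeature DHCP -IncludeManagementTools
Add-DhcpServerv4Scope -Name "LAN" -StartRange 192.168.1.100 `
    -EndRange 192.168.1.200 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -ScopeId 192.168.1.0 `
    -Router 192.168.1.1 -DnsServer 192.168.1.10
Add-DhcpServerInDC   # authorize this DHCP server in Active Directory
```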

5.2 DNS Service Configuration

The DNS service resolves domain names to their corresponding IP addresses. This section describes how to configure and manage it.
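A hedged sketch of a basic setup; the zone name, host name, and address are placeholders:

```powershell
# Install the DNS role, create a primary zone, and add an A record.
Install-WindowsFeature DNS -IncludeManagementTools
Add-DnsServerPrimaryZone -Name "corp.example.com" -ZoneFile "corp.example.com.dns"
Add-DnsServerResourceRecordA -ZoneName "corp.example.com" `
    -Name "fs01" -IPv4Address 192.168.1.20
Resolve-DnsName fs01.corp.example.com -Server localhost   # verify resolution
```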

Chapter 6: Security Configuration

6.1 Firewall Settings

A firewall protects the server from unauthorized access. This section shows how to set up and configure firewall rules.
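For example, a rule like the following could be created from PowerShell; the application name and port are hypothetical:

```powershell
# Open inbound TCP 8080 on the Domain profile for a line-of-business app.
New-NetFirewallRule -DisplayName "LOB App (TCP 8080)" -Direction Inbound `
    -Protocol TCP -LocalPort 8080 -Action Allow -Profile Domain
Get-NetFirewallRule -DisplayName "LOB App (TCP 8080)" | Format-List Enabled,Action
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled True   # keep the firewall enabled
```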

Windows Server 2016 Hyper-V: Introduction and Configuration

Hyper-V is Microsoft's virtualization solution. It lets users deploy and run virtual machines on Windows systems.

A traditional Windows operating system is divided into an application layer and a kernel layer: applications run in Ring 3, and the operating system kernel runs in Ring 0. Once Hyper-V is installed, the hypervisor runs at Ring -1, a privilege level higher than either of these. This privilege level is provided by the CPU and lets the hypervisor trap privileged operations issued by virtual machines and emulate them.

At the same time, the original host operating system and the virtual machines' operating systems run at the same privilege level: the host OS is called the management operating system, and the virtual machines are called guest operating systems.

The management operating system runs a set of components for managing virtual machines, known as the virtualization stack, which interacts with the underlying hypervisor to provide virtualization services. The virtualization stack includes:

1) VSP (Virtualization Service Provider): handles I/O requests from virtual machines;
2) VMBus (Virtual Machine Bus): provides data communication between the management OS and guest OSes, that is, between the VSP and the VSC (Virtualization Service Client) in the virtual machine;
3) VMMS (Virtual Machine Management Service): together with worker threads in the management OS (one per virtual machine), manages the virtual machine lifecycle, including creating, starting, stopping, saving, and deleting virtual machines;
4) VID (Virtual Infrastructure Driver): coordinates the VMMS and the worker threads, and manages communication between the guest operating systems and the management operating system.

Hyper-V depends on hardware virtualization technology, such as Intel's VT-x, and also requires second-level address translation support from the hardware, such as Intel's EPT (Extended Page Tables).
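These hardware prerequisites can be checked before installing the role; a sketch:

```powershell
# Lists the Hyper-V requirement properties; on a suitable host, the
# virtualization-firmware and SLAT (e.g. Intel EPT) entries should report True.
Get-ComputerInfo -Property "HyperV*"
# systeminfo.exe reports the same data under "Hyper-V Requirements".
```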

Windows Server 2016 Overview

Windows Server 2016 is Microsoft's operating system for enterprise network environments. It provides a rich set of features and tools designed to strengthen network security, improve performance and reliability, and simplify management and deployment.

In this article, we walk through an overview of Windows Server 2016 step by step.

Step 1: Introduction and Background

Windows Server 2016 is Microsoft's latest server operating system. It carries forward the strengths of its predecessors and introduces a range of new features and improvements. As an operating system for enterprise applications, it is highly scalable and flexible, able to meet business requirements at a wide range of scales. It also offers strong security and reliability, protecting enterprise data and networks from external threats and attacks.

Step 2: Features

Windows Server 2016 provides many features that help enterprises improve efficiency and productivity. Some of the most important include:

1. Hyper-V virtualization platform: Windows Server 2016 integrates the latest Hyper-V virtualization technology, delivering high performance, high availability, and strong scalability, so enterprises can make better use of hardware resources and lower costs.

2. Nano Server: Nano Server is a minimal installation option introduced with Windows Server 2016. It has a smaller attack surface and faster startup times, making it well suited to cloud environments and containerized applications.

3. Storage Spaces Direct: Windows Server 2016 introduces Storage Spaces Direct, which pools the local storage devices of multiple servers into highly available shared storage, providing higher performance and reliability.

4. Container support: Unlike earlier versions, Windows Server 2016 provides full container support, making it easier for enterprises to deploy, manage, and migrate applications.

5. Storage tiering: Tiered storage spaces let hot and cold data live on different media, delivering better performance and flexibility.

Hyper-Converged Solution Overview

First, the concepts need to be clarified: to talk about hyper-convergence, we have to start with converged architecture. A converged architecture, also called an integrated system, achieves rapid deployment through rack-level integration and pre-configuration, but still uses the traditional three-tier server/network/storage architecture.

Typical converged offerings include VCE's Vblock, NetApp's FlexPod, Oracle's Exadata, Huawei's FusionCube, and H3C's UIS.

Note that a converged architecture physically takes the form of a rack or blade chassis that integrates server, network, and storage nodes and ships with virtualization software installed. A hyper-converged architecture, by contrast, is built on commodity server hardware and uses virtualization and distributed technologies to fuse compute, storage, and virtualization into one system, with no need for dedicated SAN storage. Compared with converged architecture, hyper-convergence sheds the constraints of rack/chassis integration and of the traditional three-tier architecture, making it more elastic.

Second, understand each vendor's philosophy, positioning, and technical lineage. Hyper-convergence has become a hot topic for two reasons. First, with rising hardware performance density, convenient network interconnection, and maturing software intelligence, hyper-convergence is a natural step in the evolution of IT infrastructure. Second, it is a disruptive innovation over traditional architectures: it allocates and manages IT resources more efficiently than traditional (especially centralized-storage) architectures, and so it hits traditional storage the hardest. This also explains why the champions of the hyper-convergence idea are Nutanix, SimpliVity, Pivot3, Maxta, and VMware, vendors that were never dominant in the enterprise (storage) market.

For storage giants such as EMC, IBM, and NetApp, hyper-convergence is not a welcome concept; they have an existing business to protect. Hyper-converged startups such as Nutanix and SimpliVity, all founded in 2009, drew on the architecture of Google-style Internet data centers: using software-defined techniques, they aggregate server compute and storage resources and pool and manage them at the virtualization layer, creating a new deployment model for virtualized environments. They are the most thorough revolutionaries in this market.

VMware's VSAN, released in 2014, is likewise a storage solution designed specifically for virtual environments. Through deep coupling with vSphere, it extends the software-defined data center down into the storage layer (previously VMware helped dedicated storage vendors better support virtualized workloads through VAAI, VASA, and VVOL; now it has decided to enter the fray itself) and consolidates VMware's advantage in the virtualized data center.

Hyper-Converged Solution Deployment

In today's era of rapid IT development, hyper-converged solutions are an emerging class of IT solution that is gradually being adopted across industries.

They not only improve the efficiency of IT resource consolidation, but also improve system performance and reliability. This article discusses the deployment of hyper-converged solutions to help readers better understand and apply them.

I. Overview

A hyper-converged solution consolidates compute, storage, networking, management, and other infrastructure in a software-defined way. It virtualizes traditional hardware, enabling enterprises to deliver IT services more efficiently, flexibly, and reliably. Hyper-converged solutions meet enterprise needs for resource consolidation, performance, and cost control, and are therefore widely used in scenarios such as cloud computing and virtual desktop infrastructure.

II. Deployment Steps

1. Requirements analysis: Before deploying a hyper-converged solution, start with a requirements analysis. Talk with the relevant business units to understand their business needs, the scale of their IT resources, and their performance requirements. Based on these requirements, select a suitable hyper-converged vendor and product.

2. Infrastructure preparation: A hyper-converged solution must be deployed on appropriate hardware. Before deployment, purchase and prepare the physical equipment: based on the requirements analysis, select suitable servers, storage devices, and network equipment, then install and configure them.

3. Software installation and configuration: Deployment also requires installing and configuring software. Follow the product documentation to install and configure the hyper-converged management software, the virtualization software, the storage software, and so on.

4. Network connection and testing: Once the solution is in place, connect it to the enterprise's existing network, then configure and debug the network to make sure the system works correctly.

5. Data migration and testing: Next, migrate the enterprise's existing data to the hyper-converged system. This involves data backup, migration, and testing to ensure data integrity and availability.

6. Operations and management: After deployment, the system needs ongoing operations and management, including monitoring, troubleshooting, performance tuning, and security management. Effective operations keep the hyper-converged solution stable and reliable.

Windows Server 2016: Introduction and Installation

Editions

Windows Server 2016 Essentials edition is designed for small businesses. It corresponds to Windows Small Business Server in earlier versions of Windows Server. This edition supports up to 25 users and 50 devices. It supports two processor cores and up to 64 GB of RAM. It does not support many Windows Server 2016 features, including virtualization.

Windows Server 2016 Standard edition is designed for physical server environments with little or no virtualization. It provides many of the roles and features available in the Windows Server 2016 operating system. This edition supports up to 64 sockets and up to 4 TB of RAM. It includes licenses for up to two virtual machines and supports Nano Server installation.

Windows Server 2016 Datacenter edition is designed for highly virtualized infrastructures, including private cloud and hybrid cloud environments.

It provides all of the roles and features available in the Windows Server 2016 operating system. This edition supports up to 64 sockets, up to 640 processor cores, and up to 4 TB of RAM. It provides unlimited virtual machine licenses for virtual machines running on the same hardware. It also includes new features such as Storage Spaces Direct and Storage Replica, along with the features required for new scenarios such as shielded virtual machines and the software-defined data center.

Microsoft Hyper-V Server 2016 is a standalone virtualization server for running virtual machines, and it includes all of the new virtualization features of Windows Server 2016. The host operating system carries no licensing cost, but each virtual machine must be licensed separately. This edition supports up to 64 sockets and up to 4 TB of RAM. It can be joined to a domain. Apart from limited file services, it does not support other Windows Server 2016 roles.

Server 2016 NIC Teaming Modes

A NIC team can carry traffic to iSCSI servers, and Hyper-V virtual machines on the same host can also generate heavy network traffic between themselves, so teaming helps both the host and the physical network handle the load. Based on the server's current configuration, the team chooses the optimal load-balancing mode: address hash or Hyper-V port.

In the "Standby adapter" list you can select a NIC and set it to standby. A standby NIC does not normally carry traffic and takes no part in load distribution; when a NIC in the team fails, the standby NIC takes over its work. The teamed logical NIC can then be assigned an IP address, default gateway, and other parameters as needed; for a physical host this configuration is fairly simple.

NIC teaming can also be used inside a virtual machine (Figure 1: creating a NIC team). In the virtual machine's settings, under the network adapter's "Advanced Features", enable "Allow this network adapter to be part of a team in the guest operating system", then create the team inside the guest (Figure 2: using NIC teaming in a virtual machine) to improve network performance.

To remove such a configuration: in the virtual machine, open the NIC Teaming management window from Server Manager and delete the existing team. Then open the virtual machine's settings, select each virtual network adapter in turn, and set its "Virtual switch" to "Not connected". On the Hyper-V host, delete the corresponding virtual switch (for example "Switch1"). Next, open the NIC Teaming management window on the host and delete the existing team (for example "nic1"). The host team can then be rebuilt and bound to a new virtual switch (for example "vswitch1", created with the -AllowManagement option so that the management OS can share the adapter); once the switch is backed by the team, the adapters are in an aggregated state, and any adapter not in use shows as inactive. Finally, select the target virtual machine and reconnect each of its virtual network adapters to the new switch.

IT Hyper-Converged Solutions: Ecosystem Analysis

Advantages
• Simple: storage designed for virtual machines that runs on any standard x86 hardware
• Policy-based automation that meets SLAs, with per-VM storage policies
• Significantly lower TCO while improving the user experience
• Start small and scale performance, capacity, and cost linearly
• Deep integration with the VMware software platform
vSAN Solution Configuration

There are two ways to build Virtual SAN nodes.
The core of data center infrastructure:
• With the arrival of the "Internet+" era, hyper-converged architectures with Internet DNA are becoming the first choice of enterprise customers, accelerating the transformation of business systems from traditional architectures to Internet/cloud architectures;
• Traditional IT vendors' data center solutions have already fallen behind those of Internet companies and newer technology vendors.
Hyper-convergence: a disruptive innovation (simple, convenient, flexible)

What is driving the growth of hyper-convergence?
Competitive comparison of hyper-converged products

The HCI vendor landscape
HCI solution evaluation criteria
• Platform support: virtualization platforms (VMware, Hyper-V, KVM); hardware platform compatibility; flash support (all-flash or hybrid); cloud integration (OpenStack)
• Scalability: maximum node count and capacity limits; per-expansion node and capacity limits; performance that grows linearly with node count
• Management: management interface; storage-layer features and API; integration with the virtualization platform's management tools
• Simplicity: fast deployment, easy use, convenient maintenance
VMware vSAN 6.2

Overview
• VMware Virtual SAN clusters the SSDs and HDDs of multiple servers to create shared storage
• Redefines the hypervisor layer to build combined compute and storage clusters
• Policy-based management for self-tuning, VM-centric storage
• Scale-out architecture with built-in SSD caching and a tiered HDD/SSD design

Windows Server 2016 Hyper-Converged Solution: Virtual Machines and Software-Defined Storage on the Same Cluster

Windows Server 2016 Technical Preview introduces Storage Spaces Direct, which enables building highly available (HA) storage systems with local storage. This is a significant step forward in Microsoft Windows Server software-defined storage (SDS), as it simplifies the deployment and management of SDS systems and also unlocks the use of new classes of disk devices, such as SATA and NVMe disk devices, that were previously not possible with clustered Storage Spaces with shared disks. Windows Server 2016 provides a hyper-converged solution by allowing the same set of servers to provide SDS through Storage Spaces Direct (S2D) and to serve as the hosts for virtual machines using Hyper-V.

How to Use this Guide

This document provides both an introductory overview and specific standalone examples of how to deploy a hyper-converged solution with Storage Spaces Direct. Before taking any action, it is recommended that you do a quick read-through of this document to familiarize yourself with the overall approach, to get a sense for the important notes associated with some steps, and to acquaint yourself with the additional supporting resources and documentation.

Hyper-Converged Solution with Software-Defined Storage Overview

In the hyper-converged configuration described in this guide, Storage Spaces Direct seamlessly integrates with the features that make up the Windows Server software-defined storage stack today, including the Clustered Shared Volume File System (CSVFS), Storage Spaces, and Failover Clustering. The hyper-converged deployment scenario has the Hyper-V (compute) and Storage Spaces Direct (storage) components on the same cluster. Virtual machines' files are stored on local CSVs. This allows a Hyper-V compute cluster to scale together with the storage it is using.
Once Storage Spaces Direct is configured and the CSV volumes are available, configuring and provisioning Hyper-V is the same process, and uses the same tools, that you would use with any other Hyper-V deployment on a failover cluster. Figure 5 illustrates the hyper-converged deployment scenario.

FIGURE 5: Hyper-converged: the same cluster configured for Storage Spaces Direct and the hosting of virtual machines

Hardware Requirements

We are working with our hardware partners to define and validate specific hardware configurations, including SAS HBAs, SATA SSDs and HDDs, RDMA-enabled network adapters, and so on, to ensure a good user experience. You should contact your hardware vendors for the solutions that they have verified are compatible for use with Storage Spaces Direct. If you would like to evaluate Storage Spaces Direct in Windows Server 2016 Technical Preview without investing in hardware, you can use Hyper-V virtual machines; see Testing Storage Spaces Direct using Windows Server 2016 virtual machines. For more information about hardware options, see Hardware options for evaluating Storage Spaces Direct in Technical Preview 4.

Note: Storage Spaces Direct does not support disks connected via multiple paths, or the Microsoft Multipath I/O (MPIO) software stack.

Example Hardware for this Guide

For simplicity, this guide references a specific set of hardware that we were able to test. This is for example purposes only, and most of the steps are not specific to hardware. Where something is specific to hardware, it will be noted. There are many hardware vendors with solutions that are compatible with the hyper-converged system described in this guide, and this hardware example does not indicate a preference over other systems or hardware vendors.
Due to limited resources and the time constraints imposed by TP5, we can offer detailed guidance only for a specific subset of tested hardware configurations at this time:

- Server: Dell 730XD; BIOS 1.5.54
- HBA: Dell HBA330; firmware 9.17.20.07 A00
- Network interfaces: Mellanox ConnectX-3 Pro (dual-port 10 Gb, SFP+) for RoCEv2 networks; firmware 2.34.50.60 or newer
- Top-of-rack (TOR) switch: Cisco Nexus 3132; BIOS version 1.7.0

Information Gathering

The following information will be needed as inputs to configure, provision, and manage the hyper-converged system; having it on hand when you start will speed up the process and make it easier for you:

- Server names: you should be familiar with your organization's naming policies for computers, files, paths, and other resources, as you will be provisioning several servers with Nano installations and each will need a unique server name.
- Domain name: you will be joining computers to your domain, and you will need to specify the domain name. It would be good to familiarize yourself with your internal domain naming and domain-joining policies.
- Administrator password for the new servers: when the Nano images are created, the command to create the images will prompt you for the password for the local administrator account.
- For RDMA configurations:
  - Top-of-rack switch make/model
  - Network adapter make/model. There are two types of RDMA protocols; note which type your RDMA adapter uses (RoCEv2 or iWARP).
  - VLAN IDs to be used for the two network interfaces used by the management OS on the hyper-converged hosts.
You should be able to obtain this from your network administrator.

Nano or Full/Core Install Options

Hyper-converged deployments can be done using either Nano or Full installations of Windows Server 2016 Preview. Nano is a new install type for Windows Server 2016; see this link for more information on the advantages of using Nano and on deploying and managing Nano Server. This guide focuses on deploying hyper-converged systems using Nano Server, and the "Deploy the operating system" section is a step-by-step method of deploying Nano Server. However, the steps in the "Configure the Network" and "Configure Storage Spaces Direct" sections are identical whether you are using Nano, Full, or Core installations.

For Full and Core installations, instead of following the "Deploy the operating system" section in this guide, you can deploy Windows Server 2016 Datacenter as you would for any other failover cluster deployment. This includes joining the servers to an Active Directory domain and installing the Hyper-V role and the Failover Clustering feature, plus the Data Center Bridging feature if you are using RoCE RDMA devices. Nano Server installations require all management to be done remotely, except for what can be done through the Nano Recovery Console. On Full and Core installations you can use the remote management steps in this guide, or in some cases you can log into the servers and run the commands and management locally.

Nano: Installing and Configuring a Hyper-Converged System

This section includes instructions to install and configure the components of a hyper-converged system using Windows Server 2016 Technical Preview with a Nano Server configuration of the operating system. The act of deploying a hyper-converged system can be divided into three high-level phases:

1. Deploy the operating system
2. Configure the network
3.
Configure Storage Spaces Direct

Figure 6 illustrates the process for building a hyper-converged solution using Windows Server 2016 Technical Preview.

Figure 6: Process for building a hyper-converged solution using Windows Server 2016 Technical Preview.

You can tackle these steps a few at a time or all at once, but they do need to be completed in the order shown in Figure 6. After describing some prerequisites and terminology, we describe each of the three phases in more detail and provide examples.

Important: This preview release should not be used in production environments.

Prerequisites and Terminology

The provisioning and deployment process for a Windows Server Nano Server involves specific steps that include:

- Creating a bootable .VHDX file for each Nano Server
- Copying the bootable .VHDX files to a physical host and configuring the host to boot from the .VHDX files
- Remotely managing the newly deployed host machines running Nano Server

Note: The image creation machine and the management machine (defined below) can be the same machine. The critical factor is that the machine from which you are managing must be of the same version (or higher) as the Nano Servers being managed. For Windows Server 2016 Technical Preview 5 evaluation, we recommend that your management machine be running WS2016 TP5, so that you can efficiently manage the Nano Servers (which are also running TP5).

1. Image creation machine. The instructions in this guide include creating bootable Nano .VHDX files for each server. It's a simple process, but you will need a system (Windows 10, or Windows Server 2012 R2 or later) where you can use PowerShell to create and temporarily store the .VHDX files that will be copied to the servers. The cmdlet modules used to create the image are imported from the Windows Server 2016 preview ISO; the instructions below have details on this process.

2. Management machine.
For the purposes of this document, the machine that has the management tools to remotely manage the Nano Servers will be referred to as the management system. The management system machine has the following requirements:

a. Runs Windows Server 2016 Technical Preview 5, domain-joined to the same domain as the Nano systems or to a fully trusted domain.
b. Has Remote Server Administration Tools (RSAT) and the PowerShell modules for Hyper-V and Failover Clustering. RSAT tools and PowerShell modules are available on Windows Server 2016 and can be installed without installing other features. They are also available by installing the RSAT package for Windows clients.
c. Can run inside a virtual machine or on a physical machine.
d. Requires network connectivity to the Nano Servers.

3. Host machines. In the example below, the expectation is that you start with physical machines that are booted into a Windows Server operating system (Full or Core). We will be copying the VHDX files to the host machines and then rebooting into the Nano operating system that was created in the VHDX files. Booting from a VHDX file is the method of deployment outlined in this guide; other methods of deploying VHDX boot files can also be used.

Deploy the Operating System

Deploying the operating system is composed of the following tasks:

1. Acquire an ISO image of Windows Server 2016 TP5
2. Use the ISO and PowerShell to create the new Nano Server images
3. Copy the new Nano Server images to the host machines
4. Reboot into the new Nano Server image
5. Connect to and manage the Nano Servers from the management system machine

Complete the steps below to create and deploy Nano Server as the operating system on your host machines in a hyper-converged system. Note: the "Getting Started with Nano Server" guide has many more examples and detailed explanations of how to deploy and manage a Nano Server.
The instructions below are solely intended to illustrate one of many possible deployments; you need to find an approach that fits your organization's needs and situation.

Acquire an ISO image of Windows Server 2016 TP5 Datacenter

Download a copy of the Datacenter ISO from <link to Technet> to your image creation machine and note the path.

Use the ISO and PowerShell to Create the new Nano Server Images

There are other methods to deploy Nano, but for this example we provide the set of steps below. If you want to learn more about creating and managing different kinds of Nano deployments or images, see the "Getting Started with Nano Server" guide, starting in the section "To quickly deploy Nano Server on a physical server".

Note: If your deployment isn't using a RoCEv2 RDMA adapter, you can remove the "-Packages Microsoft-NanoServer-DCB-Package" parameter from the PowerShell cmdlet string below. Our example hardware for this guide does use RoCEv2 RDMA adapters and Data Center Bridging, so the DCB package is included in the example.

Note: If you are going to manage the servers with System Center, add the following items to the "-Packages" section of the "New-NanoServerImage" command: Microsoft-NanoServer-SCVMM-Package, Microsoft-NanoServer-SCVMM-Compute-Package.

Note: If you have drivers that are recommended by your hardware vendor, it is simplest to inject the network drivers into the image during the "New-NanoServerImage" step below. If you don't, you may be able to use the in-box drivers via the -OEMDrivers parameter in the "New-NanoServerImage" command, and then update the drivers using Windows Update after deployment. It is important to have the drivers that your hardware vendor recommends, so that the networks provide the best reliability and performance possible.

1. On the image creation machine, mount the Windows Server Technical Preview ISO. To mount the ISO, in File Explorer select and right-click the ISO, then choose Mount.
Once the mounted drive is opened, navigate to the \NanoServer\NanoServerImageGenerator directory and copy its contents to your desired working folder on the image creation machine, where you want to create and store your new Nano Server images. In this example, the NanoServerImageGenerator directory will be copied to C:\NanoBuild\NanoBuildScripts.

2. Start Windows PowerShell as an administrator, change directory to the working folder where you copied the "NanoServerImageGenerator" contents, and run the following command to import the Nano Server Image Generator PowerShell module. This module will enable you to create the new Nano Server images.

Import-Module .\NanoServerImageGenerator -Verbose

3. Copy your network drivers to a directory and note the path. The example in the next step will use c:\WS2016TP5_Drivers.

4. Before using the following PowerShell commands to create the new Nano Server images, please read the rest of this section to get an overview of the entire task. Some features need specific packages to be specified in the "New-NanoServerImage" command below. In this step, you will create a unique image for each host machine; we need four images, one for each physical host in the hyper-converged setup. Creating each Nano Server image can take several minutes depending on the size of the drivers and other packages being included; it is not unusual for a large image to take 30 minutes to complete the creation process.

- Create the images one at a time. Because of possible file collisions, we recommend creating the images one at a time.
- You will be prompted to input a password for the Administrator accounts of your new Nano Servers. Type carefully and note your password for later use; you will use these passwords to log into the new Nano Servers.
- You will need the following information (at a minimum):
  - MediaPath: the path to the mounted Windows Server Preview ISO.
It will usually be something like D:\.
  - TargetPath: the path where the resulting .VHDX file will be located. Note: this path needs to exist before running the New-NanoServerImage cmdlet.
  - ComputerName: the name that the Nano Server will use and be accessed by.
  - DomainName: the fully qualified name of the domain that your server will join.
  - DriversPath: the folder where the expanded drivers that you want to inject into the image are kept.
  - Other options: if you want a richer understanding of all the input parameters associated with New-NanoServerImage, see the "Getting Started with Nano Server" guide.

New-NanoServerImage -MediaPath <MediaPath> -TargetPath <TargetPath> -ComputerName <ComputerName> -Compute -Storage -Clustering -DomainName <DomainName> -OEMDrivers -DeploymentType Host -Edition Datacenter -EnableRemoteManagementPort -ReuseDomainNode -DriversPath <DriversPath> -Packages Microsoft-NanoServer-DCB-Package

The following is an example of how you can execute the same thing in a script:

# Example definition of variable names and values
$myNanoServerName = "myComputer-1"
$myNanoImagePath = ".\Nano\NanoServerPhysical"
$myNanoServerVHDXname = "myComputer-1.VHDX"
$myDomainFQDN = ""
$MediaPath = "d:\"
$myDriversPath = "C:\WS2016TP5_Drivers"

New-NanoServerImage -MediaPath $MediaPath -TargetPath "$myNanoImagePath\$myNanoServerVHDXname" -ComputerName $myNanoServerName -Compute -Storage -Clustering -DomainName $myDomainFQDN -OEMDrivers -DeploymentType Host -Edition Datacenter -EnableRemoteManagementPort -ReuseDomainNode -DriversPath $myDriversPath -Packages Microsoft-NanoServer-DCB-Package

When you complete this task, you should have one VHDX file for each of the four hyper-converged systems that you are provisioning.

Other packages that you may want to include:

Desired State Configuration (an example feature that requires this is the Software Defined Network feature).
The packages to include are:

- Desired State Configuration: Microsoft-NanoServer-DSC-Package
- Shielded VM: Microsoft-NanoServer-SecureStartup-Package, Microsoft-NanoServer-ShieldedVM-Package
- Managing Nano with System Center Virtual Machine Manager or Operations Manager: Microsoft-NanoServer-SCVMM-Package, Microsoft-NanoServer-SCVMM-Compute-Package

Copy the new Nano Server images to the Host machines

The tasks in this section assume that the servers that will be used for the hyper-converged system (the host machines) are booted into a Windows Server operating system and accessible on the network.

1. Log in as an administrator on the host machines that will be the nodes of the hyper-converged system.
2. Copy the VHDX files that you created earlier to each respective host machine, and configure each host machine to boot from the new VHDX using the following steps:
   - Mount the VHDX. In Windows Explorer, right-click the VHDX file and choose "Mount". Note: in this example, it is mounted under D:\.
   - Open a PowerShell console with administrator privileges.
   - Change to the "Windows" directory of the mounted VHD. In this example the command would be: cd d:\windows
   - Enable booting to the VHDX: Bcdboot.exe d:\windows
   - Unmount the VHD. In Windows Explorer, right-click the drive letter in the left-hand navigation pane and select "Eject". THIS STEP IS IMPORTANT; THE SYSTEM MAY HAVE ISSUES BOOTING IF YOU DON'T UNMOUNT THE VHDX.

Reboot into the new Nano Server image

1. Reboot the host machines. They will automatically boot into the new Nano Server VHDX images.
2. Log into the Nano Recovery Console. After the host machines are booted, they will show a logon screen for the Nano Server Recovery Console (see the "Nano Server Recovery Console" section in the Nano guide). Enter "Administrator" for the user name and the password you specified earlier when creating the new Nano Server images.
For the Domain field, you can leave it blank or enter the computer name of your Nano Server.

3. Acquire the IP address of the Nano Server. You will use these IP addresses to connect to the Nano Servers in the next section, so it is suggested that you write them down or note them somewhere. Steps to acquire the IP address in the Nano Recovery Console:

   i. Select Networking, then press Enter.
   ii. Identify, from the network adapter list, the adapter being used to connect to the system to manage it. If you aren't sure which one, look at each of them and identify the addresses.
   iii. Select your Ethernet adapter, then press Enter.
   iv. Note your IPv4 address for later use.

Note: While you are in the Nano Recovery Console, you may also specify static IP addresses for networks if DHCP is not available.

Connecting to and managing the Nano Servers from the Management system machine

You will need a management system machine running the same build of Windows Server 2016 to manage and configure your Nano deployment. Remote Server Administration Tools (RSAT) for Windows Server 2016 is not suggested for this scenario, since some of the Windows 10 storage APIs may not yet be fully compatible at the time of this preview release.

1. On the management system, install the Failover Clustering and Hyper-V management tools. This can be done through Server Manager using the "Add Roles and Features" wizard: on the "Features" page, select "Remote Server Administration Tools" and then select the tools to install.

2. On the management system machine, configure TrustedHosts; this is a one-time configuration on the management system machine. Open a PowerShell console with administrator privileges and execute the following, which configures the trusted hosts to allow all hosts:

Set-Item WSMan:\localhost\Client\TrustedHosts "*"

After the one-time configuration above, you will not need to repeat Set-Item.
However, each time you close and reopen the PowerShell console, you should establish a new remote PowerShell session to the Nano Server by running the commands below.

3. Enter the PowerShell session, using either the Nano Server name or the IP address that you acquired from the Recovery Console earlier in this document. You will be prompted for a password after you execute this command; enter the administrator password you specified when creating the Nano VHDX.

Enter-PSSession -ComputerName <myComputerName> -Credential LocalHost\Administrator

Examples of doing the same thing in a way that is more useful in scripts, in case you need to do this more than once:

Example 1, using an IP address:

$ip = "10.100.0.1"
$user = "$ip\Administrator"
Enter-PSSession -ComputerName $ip -Credential $user

Example 2, doing something similar with the computer name instead of the IP address:

$myNanoServer1 = "myNanoServer-1"
$user = "$myNanoServer1\Administrator"
Enter-PSSession -ComputerName $myNanoServer1 -Credential $user

Adding domain accounts

So far this guide has had you deploying and configuring individual nodes with the local administrator account, <ComputerName>\Administrator. Managing a hyper-converged system, including the cluster, storage, and virtualization components, often requires using a domain account that is in the Administrators group on each node. The following steps are done from the management system. For each server of the hyper-converged system, use a PowerShell console that was opened with administrator privileges and, in a PSSession, issue the following command to add your domain account(s) to the Administrators local security group (see the section above for how to connect to the Nano systems using PSSession):

net localgroup Administrators <Domain\Account> /add

Network Configuration

The following assumes two RDMA NIC ports (one dual-port, or two single-port). In order to deploy Storage Spaces Direct, the Hyper-V switch must be deployed with RDMA-enabled host virtual NICs.
Complete the following steps to configure the network on each server.

Note: Skip this network configuration section if you are testing Storage Spaces Direct inside virtual machines; RDMA is not available for networking inside a virtual machine.

Configure the Top of Rack (TOR) Switch

Our example configuration uses a network adapter that implements RDMA using RoCEv2. Network QoS for this type of RDMA requires that the TOR switch have specific capabilities set for the network ports that the NICs are connected to.

Enable Network Quality of Service (Network QoS)

Network QoS is used in this hyper-converged configuration to ensure that the software-defined storage system has enough bandwidth to communicate between the nodes, ensuring resiliency and performance. Do the following steps from a management system, using Enter-PSSession to connect to each of the servers.

Note: For Windows Server 2016 Technical Preview, there are multiple vendors supporting these RDMA network capabilities. Check with your network interface card vendor to verify which of their products support hyper-converged RDMA networking in Technical Preview 5.

1. Set a network QoS policy for SMB Direct, which is the protocol the software-defined storage system uses:

New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

The output should look something like this:

Name           : SMB
Owner          : Group Policy (Machine)
NetworkProfile : All
Precedence     : 127
JobObject      :
NetDirectPort  : 445
PriorityValue  : 3

2. Turn on flow control for SMB:

Enable-NetQosFlowControl -Priority 3

3. Disable flow control for other traffic:

Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

4. Get a list of the network adapters to identify the target adapters (the RDMA adapters):

Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed

The output should look something like the following.
The Mellanox ConnectX-3 Pro adapters are the RDMA network adapters, and in this example configuration they are the only ones connected to a switch.

[MachineName]: PS C:\Users\User\Documents> Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed

Name       InterfaceDescription                                       Status       LinkSpeed
----       --------------------                                       ------       ---------
NIC3       QLogic BCM57800 Gigabit Ethernet (NDIS VBD Client) #46     Disconnected 0 bps
Ethernet 2 Mellanox ConnectX-3 Pro Ethernet Adapter #2                Up           10 Gbps
SLOT #     Mellanox ConnectX-3 Pro Ethernet Adapter                   Up           10 Gbps
NIC4       QLogic BCM57800 Gigabit Ethernet (NDIS VBD Client) #47     Disconnected 0 bps
NIC1       QLogic BCM57800 10 Gigabit Ethernet (NDIS VBD Client) #44  Disconnected 0 bps
NIC2       QLogic BCM57800 10 Gigabit Ethernet (NDIS VBD Client) #45  Disconnected 0 bps

5. Apply the network QoS policy to the target adapters (the RDMA adapters). Use the "Name" of the target adapters for -InterfaceAlias, as in the following example:

Enable-NetAdapterQos -InterfaceAlias "<adapter1>","<adapter2>"

Using the example above, the command would look like this:

Enable-NetAdapterQos -InterfaceAlias "Ethernet 2","SLOT #"

6. Create a traffic class and give SMB Direct a minimum of 30% of the bandwidth. The name of the class will be "SMB":

New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 30 -Algorithm ETS

Create a Hyper-V Virtual Switch with SET and RDMA vNICs

The Hyper-V virtual switch allows the physical NIC ports to be used for both the host and the virtual machines, and it enables RDMA from the host, which allows for more throughput, lower latency, and less system (CPU) impact. The physical network interfaces are teamed using the Switch Embedded Teaming (SET) feature that is new in Windows Server 2016. Do the following steps from a management system, using Enter-PSSession to connect to each of the servers.

1.
Identify the network adapters (you will use this information in step 2):

Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed

[MachineName]: PS C:\Users\User\Documents> Get-NetAdapter | FT Name,InterfaceDescription,Status,LinkSpeed

Name       InterfaceDescription                                       Status       LinkSpeed
----       --------------------                                       ------       ---------
NIC3       QLogic BCM57800 Gigabit Ethernet (NDIS VBD Client) #46     Disconnected 0 bps
Ethernet 2 Mellanox ConnectX-3 Pro Ethernet Adapter #2                Up           10 Gbps
SLOT #     Mellanox ConnectX-3 Pro Ethernet Adapter                   Up           10 Gbps
NIC4       QLogic BCM57800 Gigabit Ethernet (NDIS VBD Client) #47     Disconnected 0 bps
NIC1       QLogic BCM57800 10 Gigabit Ethernet (NDIS VBD Client) #44  Disconnected 0 bps
NIC2       QLogic BCM57800 10 Gigabit Ethernet (NDIS VBD Client) #45  Disconnected 0 bps

2. Create the virtual switch connected to both of the physical network adapters, and enable Switch Embedded Teaming (SET). You may notice a message that your PSSession lost its connection; this is expected, and your session will reconnect.

New-VMSwitch -Name SETswitch -NetAdapterName "<adapter1>","<adapter2>" -EnableEmbeddedTeaming $true

Using the Get-NetAdapter example above, the command would look like this:
