ESXi multi-NIC setup
ESXi multi-VLAN configuration

Source: http://08180701.blog.51cto.com/478869/410652 (VMware ESXi: multiple VLANs over a single NIC, 2010-10-26)
Background: a VMware ESXi server with a single NIC needs to carry multiple VLANs.
Solution: first set the physical switch port that connects to the VMware ESXi server to trunk mode, then create separate virtual switches (port groups) on the ESXi server and give each one a VLAN ID that matches the corresponding VLAN number on the physical switch. Finally, attach each virtual machine's network adapter to the matching virtual switch.
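As a rough command-line illustration of the same idea (a sketch only; the original post does this through the vSphere Client GUI, and the port group names, VLAN IDs, and vSwitch0 below are made-up examples):
# On the ESXi shell: add one port group per VLAN on the existing vSwitch and tag it
# with a VLAN ID that is trunked on the physical switch port
esxcli network vswitch standard portgroup add --portgroup-name="VLAN10" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VLAN10" --vlan-id=10
esxcli network vswitch standard portgroup add --portgroup-name="VLAN20" --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name="VLAN20" --vlan-id=20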
VMware ESXi 6.0 configuration notes

Contents: 1. Health status  2. Time configuration  3. DNS and routing  4. Virtual machine startup and shutdown  5. Networking
1. Health status
Click the host's "Configuration" tab and then "Health Status" to see the health of each hardware component on the host; if any hardware has a warning or fault, it shows up here.
2. Time configuration
On the "Configuration" tab, click "Time Configuration" to see the current time and NTP (Network Time Protocol) information. Click "Properties" in the upper right to change the time and NTP settings; under "Options" in the NTP configuration you can set NTP parameters and add NTP servers as your environment requires.
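The same settings can also be adjusted from the ESXi shell when shell or SSH access is enabled; a minimal sketch (the NTP server name is just a placeholder, and file locations may vary slightly between ESXi builds):
# Append an NTP server to the host NTP configuration, then restart the NTP service
echo "server pool.ntp.org" >> /etc/ntp.conf
/etc/init.d/ntpd restart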
3. DNS and routing
On the "Configuration" tab, click "DNS and Routing" to see the current host name, DNS servers, and default gateway. Click "Properties" in the upper right to change the host name, DNS, and default gateway.
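For reference, the same values can be set with esxcli from the ESXi shell; a sketch in which the host name, domain, DNS server, and gateway are all placeholder values:
esxcli system hostname set --host=esxi01 --domain=example.local
esxcli network ip dns server add --server=192.168.1.53
esxcli network ip route ipv4 add --network=default --gateway=192.168.1.254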
4. Virtual machine startup and shutdown
On the "Configuration" tab, click "Virtual Machine Startup/Shutdown" to see the startup and shutdown options for each virtual machine. This setting controls whether a VM is allowed to start and stop automatically with the host system: when the host shuts down, the VM is powered off automatically, and when the host boots, the VM either starts automatically or stays powered off, according to what is configured here.
For frequently used virtual machines, it is recommended to set them to start together with the host system.
5. Networking
1. On the "Configuration" tab, click "Networking" to see the current network settings. In the screenshot there is one standard switch, vSwitch0, which carries a "VM Network" port group, a "Management Network", and four physical adapters, two of which are not connected; the host management IP address is shown under "Management Network".
As the figure shows, all powered-on virtual machines and the "Management Network" reach the outside through the four physical adapters on the right.
2. Click "Add Networking" in the upper right and the following dialog appears, offering two connection types: (1) Virtual Machine (VM Network), a port group used by all virtual network adapters, comparable to the downlink port group of a physical switch.
It is mainly used by the virtual machines created on ESXi; the network selected earlier when creating a virtual machine refers to exactly this.
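To inspect the same standard-switch layout from the ESXi shell, commands like the following can be used (a sketch; the exact output format varies between ESXi releases):
# List standard vSwitches with their uplinks and port groups
esxcli network vswitch standard list
# List the physical NICs (vmnics) and their link state
esxcli network nic list
# Show the VMkernel interfaces and their IPv4 settings
esxcli network ip interface ipv4 get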
ESXi: how multi-core CPU settings work for Windows guests

Physical CPU (i.e. socket): a real CPU package; for example, 2.
Cores: how many cores one CPU has; for example, 8.
Hyper-threading: Intel's hyper-threading technology, which presents each physical core as an extra logical core.
So virtualization software such as VMware ESXi ends up with this many logical CPUs: physical CPUs (sockets) × cores × 2 = 2 × 8 × 2 = 32. Linux places no limit on the number of physical CPUs (sockets).
Windows 10 Professional supports at most 2 sockets (physical CPUs).
On Windows 10, if ESXi presents 16 vCPUs, then because the guest can use at most two sockets (two physical CPUs), each virtual CPU needs to be configured with 8 cores.
That way there are exactly 2 sockets.
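A quick way to check these counts on the ESXi host itself is from the ESXi shell (a sketch; requires shell access, and the values reported will reflect your own hardware rather than the 2 × 8 × 2 example above):
# Reports CPU Packages (sockets), CPU Cores, and CPU Threads (logical CPUs)
esxcli hardware cpu global get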
Setting the Number of Cores per CPU in a Virtual Machine: A How-to Guide
When creating virtual machines, you should configure processor settings for them. With hardware virtualization, you can select the number of virtual processors for a virtual machine and set the number of sockets and processor cores. How many cores per CPU should you select for optimal performance? Which configuration is better: setting fewer processors with more CPU cores or setting more processors with fewer CPU cores? This blog post explains the main principles of processor configuration for VMware virtual machines.
Terminology
First of all, let's go over the definitions of the terms you should know when configuring CPU settings, to help you understand the working principle. Knowing what each term means allows you to avoid confusion about the number of cores per CPU, CPU cores per socket, and the number of CPU cores vs speed.
A CPU socket is a physical connector on the motherboard to which a single physical CPU is connected. A motherboard has at least one CPU socket. Server motherboards usually have multiple CPU sockets that support multiple multicore processors. CPU sockets are standardized for different processor series. Intel and AMD use different CPU sockets for their processor families.
A CPU (central processing unit, microprocessor chip, or processor) is a computer component. It is the electronic circuitry with transistors that is connected to a socket. A CPU executes instructions to perform calculations, run applications, and complete tasks. When the clock speed of processors came close to the heat barrier, manufacturers changed the architecture of processors and started producing processors with multiple CPU cores. To avoid confusion between physical processors and logical processors or processor cores, some vendors refer to a physical processor as a socket.
A CPU core is the part of a processor containing the L1 cache. The CPU core performs computational tasks independently, without interacting with other cores and the external components of a "big" processor that are shared among cores. Basically, a core can be considered as a small processor built into the main processor that is connected to a socket. Applications should support parallel computations to use multicore processors rationally.
Hyper-threading is a technology developed by Intel engineers to bring parallel computation to processors that have one processor core. The debut of hyper-threading was in 2002 when the Pentium 4 HT processor was released and positioned for desktop computers. An operating system detects a single-core processor with hyper-threading as a processor with two logical cores (not physical cores). Similarly, a four-core processor with hyper-threading appears to an OS as a processor with 8 cores. The more threads run on each core, the more tasks can be done in parallel. Modern Intel processors have both multiple cores and hyper-threading. Hyper-threading is usually enabled by default and can be enabled or disabled in BIOS. AMD simultaneous multi-threading (SMT) is the analog of hyper-threading for AMD processors.
A vCPU is a virtual processor that is configured as a virtual device in the virtual hardware settings of a VM. A virtual processor can be configured to use multiple CPU cores. A vCPU is connected to a virtual socket.
CPU overcommitment is the situation when you provision more logical processors (CPU cores) of a physical host to VMs residing on the host than the total number of logical processors on the host.
NUMA (non-uniform memory access) is a computer memory design used in multiprocessor computers. The idea is to provide separate memory for each processor (unlike UMA, where all processors access shared memory through a bus). At the same time, a processor can access memory that belongs to other processors by using a shared bus (all processors access all memory on the computer). A CPU has a performance advantage of accessing its own local memory faster than other memory on a multiprocessor computer.
These basic architectures are mixed in modern multiprocessor computers. Processors are grouped on a multicore CPU package or node. Processors that belong to the same node share access to memory modules as with the UMA architecture. Also, processors can access memory from a remote node via a shared interconnect. Processors do so for the NUMA architecture, but with slower performance. This memory access is performed through the CPU that owns that memory rather than directly.
NUMA nodes are CPU/memory couples that consist of a CPU socket and the closest memory modules. NUMA is usually configured in BIOS as the node interleaving or interleaved memory setting.
An example: an ESXi host has two sockets (two CPUs) and 256 GB of RAM. Each CPU has 6 processor cores. This server contains two NUMA nodes. Each NUMA node has 1 CPU socket (one CPU), 6 cores, and 128 GB of RAM. ESXi always tries to allocate memory for a VM from its native (home) NUMA node. A home node can be changed automatically if there are changes in VM loads and ESXi server loads.
Virtual NUMA (vNUMA) is the analog of NUMA for VMware virtual machines. A vNUMA consumes hardware resources of more than one physical NUMA node to provide optimal performance. The vNUMA technology exposes the NUMA topology to a guest operating system. As a result, the guest OS is aware of the underlying NUMA topology for the most efficient use. The virtual hardware version of a VM must be 8 or higher to use vNUMA. Handling of vNUMA was significantly improved in VMware vSphere 6.5, and this feature is no longer controlled by the CPU cores per socket value in the VM configuration. By default, vNUMA is enabled for VMs that have more than 8 logical processors (vCPUs). You can enable vNUMA manually for a VM by editing the VMX configuration file of the VM and adding the line numa.vcpu.min=X, where X is the number of vCPUs for the virtual machine.
Calculations
Let's find out how to calculate the number of physical CPU cores, logical CPU cores, and other parameters on a server. The total number of physical CPU cores on a host machine is calculated with the formula:
(The number of processor sockets) x (The number of cores per processor) = The number of physical processor cores
*Only processor sockets with installed processors must be considered.
If hyper-threading is supported, calculate the number of logical processor cores by using the formula:
(The number of physical processor cores) x (2 threads per physical core) = The number of logical processors
Finally, use a single formula to calculate available processor resources that can be assigned to VMs:
(CPU sockets) x (CPU cores) x (threads)
For example, if you have a server with two processors, each having 4 cores and supporting hyper-threading, then the total number of logical processors that can be assigned to VMs is
2 (CPUs) x 4 (cores) x 2 (HT) = 16 logical processors
One logical processor can be assigned as one processor or one CPU core for a VM in VM settings. As for virtual machines, due to hardware emulation features, they can use multiple processors and CPU cores in their configuration for operation. One physical CPU core can be configured as a virtual CPU or a virtual CPU core for a VM.
The total amount of clock cycles available for a VM is calculated as:
(The number of logical sockets) x (The clock speed of the CPU)
For example, if you configure a VM to use 2 vCPUs with 2 cores when you have a physical processor whose clock speed is 3.0 GHz, then the total clock speed is 2 x 2 x 3 = 12 GHz. If CPU overcommitment is used on an ESXi host, the available frequency for a VM can be less than calculated if VMs perform CPU-intensive tasks.
Limitations
The maximum number of virtual processor sockets assigned to a VM is 128. If you want to assign more than 128 virtual processors, configure a VM to use multicore processors.
The maximum number of processor cores that can be assigned to a single VM is 768 in vSphere 7.0 Update 1. A virtual machine cannot use more CPU cores than the number of logical processor cores on the physical machine.
CPU hot add. If a VM has 128 vCPUs or less than 128 vCPUs, then you cannot use the CPU hot add feature for this VM and edit the CPU configuration of a VM while the VM is in the running state.
OS CPU restrictions. If an operating system has a limit on the number of processors, and you assign more virtual processors to a VM, the additional processors are not identified and used by the guest OS. Limits can be caused by OS technical design and OS licensing restrictions. Note that there are operating systems that are licensed per socket and per CPU core (for example, ).
CPU support limits for some operating systems:
Windows 10 Pro – 2 CPUs
Windows 10 Home – 1 CPU
Windows 10 Workstation – 4 CPUs
Windows Server 2019 Standard/Datacenter – 64 CPUs
Windows XP Pro x64 – 2 CPUs
Windows 7 Pro/Ultimate/Enterprise – 2 CPUs
Windows Server 2003 Datacenter – 64 CPUs
Configuration Recommendations
For older vSphere versions, I recommend using sockets over cores in the VM configuration. At first, you might not see a significant difference between CPU sockets and CPU cores in the VM configuration in terms of VM performance. Be aware of some configuration features.
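Before applying the recommendations below, it can also help to confirm what topology a guest actually sees; a minimal sketch for a Linux guest (assuming the lscpu utility, which ships with most distributions):
# Sockets, cores per socket, threads per core, and total CPUs as seen by the guest
lscpu | grep -E '^CPU\(s\)|Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core'
# NUMA layout as seen by the guest (reflects vNUMA if it is exposed)
lscpu | grep -i numa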
Remember about NUMA and vNUMA when you consider setting multiple virtual processors (sockets) for a VM to have optimal performance. If vNUMA is not configured automatically, mirror the NUMA topology of the physical server. Here are some recommendations for VMs in VMware vSphere 6.5 and later:
When you define the number of logical processors (vCPUs) for a VM, prefer the cores-per-socket configuration. Continue until the count exceeds the number of CPU cores on a single NUMA node on the ESXi server. Use the same logic until you exceed the amount of memory that is available on a single NUMA node of your physical ESXi server.
Sometimes, the number of logical processors for your VM configuration is more than the number of physical CPU cores on a single NUMA node, or the amount of RAM is higher than the total amount of memory available for a single NUMA node. Consider dividing the count of logical processors (vCPUs) across the minimum number of NUMA nodes for optimal performance.
Don't set an odd number of vCPUs if the CPU count exceeds the number of CPU cores of a single NUMA node. The same applies if the assigned memory exceeds the amount of memory of a single NUMA node on the physical server.
Don't create a VM that has a number of vCPUs larger than the count of physical processor cores on your physical host.
If you cannot disable vNUMA due to your requirements, don't enable the vCPU Hot-Add feature.
If vNUMA is enabled in vSphere prior to version 6.5, and you have defined the number of logical processors (vCPUs) for a VM, select the number of virtual sockets for the VM while keeping the cores-per-socket amount equal to 1 (the default value). This is because the one-core-per-socket configuration enables vNUMA to select the best vNUMA topology for the guest OS automatically. This automatic configuration is optimal for the underlying physical topology of the server. If vNUMA is enabled, and you're using the same number of logical processors (vCPUs) but increase the number of virtual CPU cores and reduce the number of virtual sockets by the same amount, then vNUMA cannot set the best NUMA configuration for the VM. As a result, VM performance is affected and can degrade.
If a guest operating system and other software installed on a VM are licensed on a per-processor basis, configure the VM to use fewer processors with more CPU cores. For example, Windows Server 2012 R2 is licensed per socket, and Windows Server 2016 is licensed on a per-core basis.
If you use CPU overcommitment in the configuration of your VMware virtual machines, keep in mind these values:
1:1 to 3:1 – There should be no problems in running VMs
3:1 to 5:1 – Performance degradation is observed
6:1 – Prepare for problems caused by significant performance degradation
CPU overcommitment with normal values can be used in test and dev environments without risks.
Configuration of VMs on ESXi Hosts
First of all, determine how many logical processors (total number of CPUs) of your physical host are needed for a virtual machine for proper work with sufficient performance. Then define how many virtual sockets with processors (Number of Sockets in vSphere Client) and how many CPU cores (Cores per Socket) you should set for the VM, keeping in mind the previous recommendations and limitations. The table below can help you select the needed configuration.
If you need to assign more than 8 logical processors for a VM, the logic remains the same. To calculate the number of logical CPUs in vSphere Client, multiply the number of sockets by the number of cores. For example, if you need to configure a VM to use 2 processor sockets, each with 2 CPU cores, then the total number of logical CPUs is 2 x 2 = 4. It means that you should select 4 CPUs in the virtual hardware options of the VM in vSphere Client to apply this configuration.
Let me explain how to configure CPU options for a VM in VMware vSphere Client. Enter the IP address of your vCenter Server in a web browser, and open VMware vSphere Client. In the navigator, open Hosts and Clusters, and select the virtual machine that you want to configure. Make sure that the VM is powered off to be able to change the CPU configuration.
Right-click the VM, and in the context menu, hit Edit Settings to open the virtual machine settings. Expand the CPU section in the Virtual Hardware tab of the Edit Settings window.
CPU. Click the drop-down menu in the CPU string, and select the total number of needed logical processors for this VM. In this example, I select 4 logical processors for the Ubuntu VM (blog-Ubuntu1).
Cores per Socket. In this string, click the drop-down menu, and select the needed number of cores for each virtual socket (processor).
CPU Hot Plug. If you want to use this feature, select the Enable CPU Hot Add checkbox. Remember the limitations and requirements.
Reservation. Select the guaranteed minimum allocation of CPU clock speed (frequency, MHz, or GHz) for the virtual machine on an ESXi host or cluster.
Limit. Select the maximum CPU clock speed for a VM processor. This frequency is the maximum frequency for the virtual machine, even if this VM is the only VM running on the ESXi host or cluster with more free processor resources. The set limit is true for all virtual processors of a VM. If a VM has 2 single-core processors, and the limit is 1000 MHz, then both virtual processors work with a total clock speed of 1000 MHz (500 MHz for each core).
Shares. This parameter defines the priority of resource consumption by virtual machines (Low, Normal, High, Custom) on an ESXi host or resource pool. Unlike the Reservation and Limit parameters, the Shares parameter is applied for a VM only if there is a lack of CPU resources within an ESXi host, resource pool, or DRS cluster. Available options for the Shares parameter:
Low – 500 shares per virtual processor
Normal – 1000 shares per virtual processor
High – 2000 shares per virtual processor
Custom – set a custom value
The higher the Shares value is, the higher the amount of CPU resources provisioned for a VM within an ESXi host or a resource pool.
Hardware virtualization. Select this checkbox to expose hardware-assisted virtualization to the guest OS. This option is useful if you want to run a VM inside a VM for testing or educational purposes.
Performance counters. This feature is used to allow an application installed within the virtual machine to be debugged and optimized after measuring CPU performance.
Scheduling Affinity. This option is used to assign a VM to a specific processor. The entered value can be like this: "0, 2, 4-7".
I/O MMU. This feature allows VMs to have direct access to hardware input/output devices such as storage controllers, network cards, and graphics cards (rather than using emulated or paravirtualized devices). I/O MMU is also called Intel Virtualization Technology for Directed I/O (Intel VT-d) and AMD I/O Virtualization (AMD-Vi). I/O MMU is disabled by default. Using this option is deprecated in vSphere 7.0. If I/O MMU is enabled for a VM, the VM cannot be migrated with vMotion and is not compatible with snapshots, memory overcommit, suspended VM state, physical NIC sharing, and .
If you use a standalone ESXi host and use VMware Host Client to configure VMs in a web browser, the configuration principle is the same as for VMware vSphere Client.
If you connect to vCenter Server or an ESXi host in VMware Workstation and open the VM settings of a vSphere VM, you can edit the basic configuration of virtual processors. Click VM > Settings, select the Hardware tab, and click Processors. On the following screenshot, you see the processor configuration for the same Ubuntu VM that was configured before in vSphere Client. In the graphical user interface (GUI) of VMware Workstation, you select the number of virtual processors (sockets) and the number of cores per processor. The number of total processor cores (logical cores of physical processors on an ESXi host or cluster) is calculated and displayed below automatically. In the interface of vSphere Client, you set the number of total processor cores (the CPU option), select the number of cores per processor, and then the number of virtual sockets is calculated and displayed.
Configuring VM Processors in PowerCLI
If you prefer using the command-line interface to configure components of VMware vSphere, use PowerCLI to edit the CPU configuration of VMs. Let's find out how to edit the CPU configuration for a VM whose name is Ubuntu19 in PowerCLI. The commands are used for VMs that are powered off.
To configure a VM to use two single-core virtual processors (two virtual sockets are used), use the command:
get-VM -name Ubuntu19 | set-VM -NumCpu 2
Enter another number if you want to set another number of processors (sockets) for a VM. In the following example, you see how to configure a VM to use two dual-core virtual processors (2 sockets are used):
$VM = Get-VM -Name Ubuntu19
$VMSpec = New-Object -Type VMware.Vim.VirtualMachineConfigSpec -Property @{ "NumCoresPerSocket" = 2 }
$VM.ExtensionData.ReconfigVM_Task($VMSpec)
$VM | Set-VM -NumCPU 2
Once a new CPU configuration is applied to the virtual machine, this configuration is saved in the VMX configuration file of the VM. In my case, I check the Ubuntu19.vmx file located in the VM directory on the datastore (/vmfs/volumes/datastore2/Ubuntu19/). Lines with the new CPU configuration are located at the end of the VMX file:
numvcpus = "2"
cpuid.coresPerSocket = "2"
If you need to reduce the number of processors (sockets) for a VM, use the same command as shown before with a lower quantity. For example, to set one processor (socket) for a VM, use this command:
get-VM -name Ubuntu19 | set-VM -NumCpu 1
The main advantage of using PowerCLI is the ability to configure multiple VMs in bulk. This is important and convenient if the number of virtual machines to configure is high. Use VMware cmdlets and the syntax of Microsoft PowerShell to create scripts.
Conclusion
This blog post has covered the configuration of virtual processors for VMware vSphere VMs. Virtual processors for virtual machines are configured in VMware vSphere Client and in PowerCLI. The performance of applications running on a VM depends on the correct CPU and memory configuration. In VMware vSphere 6.5 and later versions, set more cores per CPU for virtual machines and use the CPU cores per socket approach. If you use vSphere versions older than vSphere 6.5, configure the number of sockets without increasing the number of CPU cores for a VM, due to the different behavior of vNUMA in newer and older vSphere versions. Take into account the licensing model of the software you need to install on a VM. If the software is licensed using a per-CPU model, configure more cores per CPU in the VM settings. When using virtual machines in VMware vSphere, don't forget about backup. Use NAKIVO Backup & Replication to back up your virtual machines, including VMs that have multiple cores per CPU. Regular backup helps you protect your data and recover the data in case of a disaster.
pfSense installation and setup for ESXi

pfSense installation, April 1, 2016, 09:18
ESXi 6 network setup. The following is the final network configuration.
In the ESXi networking configuration, choose Add Networking.
Select Virtual Machine, click Next, untick all the physical NIC options, and click Next.
On the next step, change the network label to VM LAN; the network ESXi ships with can be renamed VM WAN so the internal and external networks are easy to tell apart.
Click Finish; the result looks like the figure.
Attach the virtual machines' networking to the virtual networks just created.
Put the two virtual NICs into VM WAN and VM LAN respectively.
The other virtual machines need the same kind of setting.
Once that is done, start installing the pfSense server.
1. Download the latest version; any mirror in the download list will do.
https:///download/
2. Create the virtual machine: choose New Virtual Machine in ESXi, and for the guest OS select Other -> FreeBSD (64-bit). 1 CPU, 1 GB of RAM, and an 8 GB disk are enough.
Press Enter to boot the installer files from the CD.
Choose 99 to install to the local hard disk.
Select the last entry to start the installation, then pick the first option, the quick/easy install.
If you would rather customize the installation, pick the second option instead. Select OK and press Enter to continue. When the installation finishes, choose Reboot, and remember to remove the installation CD after rebooting.
After the reboot, the following console screen appears:
0) Logout (SSH only)
1) Assign Interfaces
2) Set interface(s) IP address
3) Reset webConfigurator password
4) Reset to factory defaults
5) Reboot system
6) Halt system
7) Ping host
8) Shell
9) pfTop
10) Filter Logs
11) Restart webConfigurator
12) pfSense Developer Shell
13) Upgrade from console
14) Enable Secure Shell (sshd)
Choose 2 to set the WAN and LAN IP addresses; then pick whether to change the WAN or the LAN IP (1 or 2) and press Enter.
Enter the internal (LAN) IP address, 172.16.10.110, and press Enter; do the same on the external (WAN) side with its own IP address.
Enter the subnet mask as a prefix length, here 16, and press Enter; set the WAN side the same way.
When asked whether to set the external IP here, just press Enter if you are not setting it; the same goes for the IPv6 prompt, press Enter to skip. When asked whether to enable DHCP, choose n and configure it later from the web interface.
When the settings are done, press Enter to continue.
Use the IP address configured above for web management.
Open the web management page at http://172.16.10.110; username: admin, password: pfsense.
Configuring NAT internet access on a VMware ESXi server

When using VMware Workstation, we often configure VM networking in NAT mode; compared with bridged mode, this lets the virtual machines share the host's network connection without needing IP addresses of their own.
On ESXi, which uses the vSwitch as its network switching device, there is no such NAT option.
In real environments, though, we still often run short of IP addresses, for example when there is only one public IP but a whole pile of virtual machines that need internet access.
In that case a software router is the way to get there.
First look at the network environment before any changes: in vSphere Client select the host, then on the right click "Configuration" -> "Networking". You can see that the host currently has one virtual switch, vSwitch0, which forms the VM Network; it is connected to the host's physical NIC vmnic0, so this network is connected to the outside.
Four virtual machines are connected to this network.
As things stand, for these four VMs to get online, each must have its own IP address in this subnet.
To share a single connection, we have to add an internal network, say 10.10.10.*, and then use routing to map requests from this subnet out to the external network.
First create the internal network on the host. Still on the "Networking" page, click "Add Networking..." and choose to create a virtual machine network. The next step is the key one: create a new virtual switch, but do not tie it to a physical NIC, so untick vmnic1; the preview below will then show "No adapters".
The reason for doing this is that we want all traffic from this network to be forwarded to VM Network rather than going out through a physical NIC by itself.
In the next step you can give it a name, for example NAT Network.
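The same internal-only switch can also be created from the ESXi shell; a rough sketch (the name vSwitch1 is assumed here, "NAT Network" matches the example above, and no uplink is added on purpose so the switch stays isolated from the physical NICs):
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name="NAT Network" --vswitch-name=vSwitch1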
Next comes the software router; its job is to connect the two networks and forward requests from the internal network out to the external one.
I recommend pfSense. It comes as an OVA file: choose "Deploy OVF Template..." from the File menu in vSphere Client to deploy it. The process is straightforward, so I won't screenshot every step.
After deployment, be sure to edit its settings. As a router it must have two network adapters: define adapter 1 as the external side and connect it to VM Network, and adapter 2 as the internal side and connect it to NAT Network. Also note down the MAC addresses of these two adapters; they will be needed later.
ESXi + pfSense with a public IP: port forwarding to internal servers

There are many ways to expose the ports of servers on an internal network; ngrok and frp have been covered before, and today we'll do it with ESXi + pfSense.
0. Prerequisites
An ESXi server with two NICs, version 5.5 or later; one public IP (a fixed IP is best); a pfSense image.
1. Topology & planning
1. Internal subnet 192.168.0.0/24, with the office router gateway at 192.168.0.1.
2. The ESXi server's two NICs: one connects to the internal switch, the other to the external link (for example, the fiber modem); the internal management address is 192.168.0.151.
3. pfSense is installed on ESXi and uses both of the ESXi NICs; its internal subnet matches the existing LAN, i.e. 192.168.0.0/24, and its internal gateway address is 192.168.0.234.
4. There is one VM on ESXi, IP 192.168.0.219, running a web service (nginx).
2. ESXi network configuration
Configure two vSwitches, one for each physical NIC. Note that when configuring the NIC that faces the external network, the IP settings have to match the actual network; in this environment the uplink is a China Telecom fiber modem, so automatic (DHCP) addressing is used.
The internal-facing vSwitch can be given an IP range, for example 192.168.0.1-192.168.0.127.
If an internal NIC was already configured earlier, just use the existing one; no extra configuration is needed.
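For reference, the external-facing vSwitch described above could be created from the ESXi shell roughly like this (a sketch; VM_DX is the port group name used in this walkthrough, while vSwitch1 and vmnic1 are assumed names and may differ on your host):
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=VM_DX --vswitch-name=vSwitch1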
After configuration the topology is as follows: vSwitch0 is used for the internal network, connects to the internal switch, and carries the VM Network virtual machine network; vSwitch1 is used for the external network, connects to the Telecom fiber modem, and carries the VM_DX virtual machine network.
3. Install pfSense
Create a new virtual machine on ESXi; 2 vCPUs, 2 GB of RAM, and an 8 GB disk are enough. Give it two NICs, one connected to VM Network and the other to VM_DX. Mount the pfSense image in the CD/DVD drive and power on to install.
Accept the defaults all the way through; at the end check that the NICs and IPs correspond correctly, and manually set the internal IP to 192.168.0.234. If the NICs are mapped and configured correctly, pfSense can now be managed from a PC on the internal network at 192.168.0.234, username admin, password pfsense.
4. Configure port forwarding for the internal server in pfSense
Go to Firewall -> NAT and click the Add button to add a forwarding rule. Here port 80 of 192.168.0.219 is mapped to port 9090 on the public IP. If the mapping should also be reachable from inside the LAN via the public IP, enable hairpinning: set NAT reflection to NAT + Proxy.
VMware ESXi configuration

I simply appended a 1 to the name to tell it apart. There is a bug here: after you finish entering the IP and the netmask, closing the settings dialog produces an error, as shown in the figure:
For example, my netmask is a 23-bit 255.255.254.0 and the IP is 192.168.1.11, yet with the gateway set to 192.168.0.254 it reports that they are not in the same subnet. Ignore it and carry on. Once the NIC has been added, go back to Networking on the Configuration screen and click Properties to continue setting the gateway.
Windows Server 2008 R2
Hyper-V Server 2008 and ESXi are both bare-metal host systems, not the VMware Workstation or Virtual PC we use day to day. ESXi and Hyper-V are each a complete system. By way of analogy, hypervisors like VMware Workstation are just an application running on an operating system; the features they provide all depend on the host OS (Linux or Windows), and their performance is affected by that OS. ESXi and Hyper-V, on the other hand, are complete host systems: ESXi is derived from Linux and Hyper-V from Windows. These two systems are host systems only, with no extra functionality, and both need a separate management system to manage them (which is also the drawback of the free products).
I tried it out, and Hyper-V is a hassle: there is no free client tool like VMware Infrastructure Client 2.5 to manage the host. After several hours on Microsoft's website, so far I only know that it can be managed remotely with SCVMM 2008 or with the Hyper-V manager built into Windows Server 2008 x86-64 (the Vista x86-64 edition works too). SCVMM 2008 offers a 180-day trial; I had hoped to use only the remote VM management tool, the VMM Administrator Console, to manage the Hyper-V host that was already installed, but it also requires joining a domain.
ESXi dual-NIC, dual-IP setup
Recently a machine in the server room died, so we bought a new one (Dell 730, dual CPUs with 20 cores in total, 64 GB of RAM, three 4 TB disks).
Since virtualization is mature these days and genuinely easier to use and maintain, we decided to deploy it with vSphere.
Deploying the machine was easy enough, but because the old physical server had a dual-IP setup, I hit a small snag when virtualizing, mainly because at first I did not understand the concept of a virtual switch.
I. Setting up the network
1. Click Configuration -> Networking; by default there is only one virtual switch.
All the physical ports are aggregated into this virtual switch for redundancy.
This is why, when I first configured the VMs' IP addresses, the VMs could ping each other but could not reach the physical switch.
2. Add a virtual switch
Click the Add Networking button.
Click Next, select the physical port of the ESXi host, and create a new virtual switch.
Below is the result after adding it:
II. Assigning a network to a VM's virtual NIC
Select the VM and click the edit virtual machine settings button.
Select the virtual switch set up earlier.
With this configuration the VM can ping the physical switch normally, and that part is done.
III. Dual-IP configuration for a Windows VM
Windows is fairly simple: configure each NIC and its corresponding gateway separately.
In the advanced settings, change the next-hop metric from automatic to a fixed value, and that's it.
IV. Dual-IP configuration for a Linux VM
Dual IP on Ubuntu is a bit more involved. Unlike RHEL, the Ubuntu installer does not automatically activate every connected NIC; it activates only one, and the other has to be configured by hand.
1. Find the NICs
sudo lshw -C network
2. Edit /etc/network/interfaces and add the configuration for the new NIC
vi /etc/network/interfaces and change its contents as follows:
auto eth0
iface eth0 inet static
address 192.168.4.213
netmask 255.255.255.0
auto eth1
iface eth1 inet static
address 58.200.200.15
netmask 255.255.255.128
gateway 58.200.200.1
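After saving the file, the new interface can be brought up and checked; a short sketch, assuming the classic ifupdown tooling that this Ubuntu release still uses:
sudo ifup eth1        # bring up the newly configured interface
ip addr show eth1     # confirm 58.200.200.15 is assigned
ip route              # confirm the default route points at 58.200.200.1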
3. Add routes
After the steps above, each network can be pinged from its own side.
But when accessed from the outside, only one of the two IPs is reachable. This is the usual asymmetric-routing problem with a single default gateway, and it is solved with policy routing and separate routing tables. First look at the routing tables defined in /etc/iproute2/rt_tables:
cat /etc/iproute2/rt_tables
# reserved values
255 local
254 main
253 default
252 net0
251 net1
0 unspec
#
# local
#
#1 inr.ruhep
[root@localhost ~]#
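The net0 and net1 entries in the listing above are custom routing tables. If they are not already present on your system, they can be added first; a sketch:
echo "252 net0" | sudo tee -a /etc/iproute2/rt_tables
echo "251 net1" | sudo tee -a /etc/iproute2/rt_tables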
Use ip route to add a default route per table:
ip route add 127.0.0.0/8 dev lo table net1
ip route add default via 172.16.8.1 dev eth0 src 172.16.8.11 table net1
ip rule add from 172.16.8.11 table net1
ip route add 127.0.0.0/8 dev lo table net0
ip route add default via 10.120.6.1 dev eth1 src 10.120.6.78 table net0
ip rule add from 10.120.6.78 table net0
ip route flush table net1
ip route flush table net0
After this, both IPs can be reached from outside, and if one link goes down the other keeps working normally.
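To confirm that the policy routing is in place, the rules and the per-table routes can be listed; a quick sketch:
ip rule show                  # should show the "from <address> lookup net0/net1" rules
ip route show table net0
ip route show table net1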
V. Running the script automatically at boot
This used to be simple: just add the commands to rc.local. But the freshly installed, latest Ubuntu 16.04.3 has dropped rc.local, so, following other documentation, I handled it as below.
First create a systemd service unit:
1. sudo vi /etc/systemd/system/rc-local.service
[Unit]
Description=/etc/rc.local Compatibility
ConditionPathExists=/etc/rc.local
[Service]
Type=forking
ExecStart=/etc/rc.local start
TimeoutSec=0
StandardOutput=tty
RemainAfterExit=yes
SysVStartPriority=99
# The SysVStartPriority line can be deleted; the boot log seems to report that it is ignored.
[Install]
WantedBy=multi-user.target
2. sudo systemctl enable rc-local.service
Then just edit /etc/rc.local in the old format as before.
Finally, remember to chmod +x /etc/rc.local
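For completeness, the /etc/rc.local referred to above would simply contain the routing commands from section IV plus a shebang line; a sketch that reuses the addresses from the routing example earlier in this article:
#!/bin/sh
# /etc/rc.local - re-add the dual-IP policy routes at boot
ip route add default via 172.16.8.1 dev eth0 src 172.16.8.11 table net1
ip rule add from 172.16.8.11 table net1
ip route add default via 10.120.6.1 dev eth1 src 10.120.6.78 table net0
ip rule add from 10.120.6.78 table net0
exit 0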
And with that it all works; very satisfying.
Sharing this article in the hope that it saves you, the reader, a few detours.