Implementing Solaris SUN Cluster 3.2.3 + Oracle 10g on VMware - Part 2: Solaris Operating System Installation


Oracle Solaris Cluster for SAP Configuration Guide

Oracle Solaris Cluster for SAP Configuration Guide
Solaris Cluster 3.3 3/13 and 4.x
ORACLE WHITE PAPER | MAY 2016

Disclaimer
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Table of Contents
Disclaimer
1. Introduction
2. Oracle Solaris Cluster Framework and Data Services
3. SAP Scenarios
4. Oracle Solaris Cluster Geographic Edition
5. Licensing Information
6. References

1. Introduction
The global economy offers unprecedented opportunity for companies to increase customers and revenue, but there is a downside from an IT point of view: systems must be continuously available. The Oracle Solaris operating system includes features such as predictive self-healing and the fault manager, which are designed to keep the system and applications up and running even if there is a hardware failure. Oracle Solaris Cluster software protects against system failure for even higher availability. In the SAP environment, high availability (HA) is very important, as most SAP products are business critical and must always stay up and running. This configuration guide is intended to help identify the Oracle Solaris Cluster data services for SAP products that provide high availability for common SAP implementation scenarios.

Oracle Solaris Cluster is a high availability cluster software product for the Oracle Solaris operating system. It improves the availability of hardware and software services by combining redundant computers (known as cluster nodes), the Solaris Cluster framework software, and cluster data services (known as agents) for the applications. Applications are administered and monitored in resource groups, which consist of one or more resources. Resource groups can be configured as failover, scalable, or multiple-master, depending on the application requirements.

Oracle Solaris Cluster provides extended support for Oracle Solaris Zones, enabling server consolidation in an HA environment. These virtualization capabilities allow scalable, multiple-master, or failover applications and their associated Oracle Solaris Cluster data services to run unmodified within zone clusters. Zone clusters on Oracle Solaris Cluster provide administrative isolation with full service protection through fine-grained monitoring of applications, policy-based restart, and failover within a virtual cluster. This type of environment offers multiple layers of availability. For example, a failed application can be configured to first try to restart in its own zone. If the restart fails, it can attempt to start in another zone using the Oracle Solaris Cluster failover capability. If this fails again, it can attempt to reboot the zone using the Oracle Solaris Cluster framework and start the application after the zone is rebooted.

The Oracle Solaris Cluster Geographic Edition provides a disaster recovery feature. For more information, see chapter 4.

2. Oracle Solaris Cluster Framework and Data Services
The Oracle Solaris Cluster framework and data services can be used to improve the availability of SAP components running on the Oracle Solaris operating system. They eliminate every single point of failure (SPOF) in the system.
All tiers of an SAP system can be consolidated within an Oracle Solaris Cluster environment, enabling a single point of management with data services for database instance(s), Standalone Central Services instance(s), Enqueue Replication Server instance(s), Primary and Additional Application Server instance(s), and liveCache instance(s). Earlier SAP versions with a Central Instance are also supported on Solaris Cluster.

The following is a list of Solaris Cluster data services that are useful in SAP environments:

» Oracle Solaris Cluster Data Service for SAP NetWeaver – This agent supports SAP Standalone Central Services instance(s), Enqueue Replication Server instance(s), Primary and Additional Application Server instance(s), and earlier SAP versions with a Central Instance. The Additional Application Server instances can be configured as multiple-master resources, which may run on multiple cluster nodes at the same time, or as failover resources, which run on one node only. All other instances are to be configured as failover resources.
» Oracle Solaris Cluster Data Service for Oracle Database – This agent supports the Oracle Database single-instance configuration. The Oracle database runs on one node and can fail over to other nodes.
» Oracle Solaris Cluster Data Service for Oracle Real Application Clusters – This agent supports the Oracle RAC database, which may run on multiple cluster nodes simultaneously.
» Oracle Solaris Cluster Data Service for Oracle External Proxy – This agent is used for interrogating the status of an Oracle Database or Oracle RAC database that is not on the same cluster as the SAP application, but on an external cluster.
» Oracle Solaris Cluster Data Service for SAP MaxDB – This consists of two agents, one for the MaxDB database and the other for the xserver processes. The MaxDB database is to be configured in restart and failover mode. The xserver processes are responsible for all connections to the MaxDB database. The xserver resource must be configured in multiple-master mode and runs on each of the cluster nodes that the MaxDB database might fail over to.
» Oracle Solaris Cluster Data Service for SAP liveCache – This consists of two agents, liveCache and xserver. The liveCache agent is responsible for making the SAP liveCache database highly available in restart and failover mode. The xserver agent takes care of the health of the xserver processes, which establish all connections to SAP liveCache. It must be configured in multiple-master mode and runs on each of the cluster nodes that liveCache may fail over to.
» Oracle Solaris Cluster Data Service for Sybase ASE – This agent supports the Sybase database in restart and failover mode.
» Oracle Solaris Cluster Data Service for NFS – This agent can be used for sharing SAP central directories via NFS.
» Oracle Solaris Cluster Data Service for Oracle Solaris Zones – This agent supports HA for Solaris Zones. The zone, together with its application, can be switched over from one cluster node to another. Please note that this is not the same as an Oracle Solaris Zone Cluster.

The Solaris Cluster Data Service for SAP NetWeaver can be integrated with sapstartsrv via the SAP HA Connector and is certified by SAP with the SAP HA Interface Certification NW-HA-CLU740.
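All of these data services plug into the same resource-group model described in the introduction. As a minimal, hedged illustration of that model (not taken from this white paper), the following shows how a failover resource group with a SUNW.HAStoragePlus storage resource might be created with the Solaris Cluster CLI; the group name, resource name, and mount point are assumptions chosen for the example:

clresourcetype register SUNW.HAStoragePlus
clresourcegroup create sap-rg
clresource create -g sap-rg -t SUNW.HAStoragePlus -p FileSystemMountPoints=/sapmnt sap-hasp-rs
clresourcegroup online -M sap-rg

Application-specific resources (the SAP instances, the database, and so on) are then added to such a group with the same clresource command, using the resource types delivered by the corresponding data service.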
The SAP NetWeaver agent is currently available on two Solaris Cluster versions:
» Oracle Solaris Cluster 4.x for the Solaris 11.x operating systems
» Oracle Solaris Cluster 3.3 3/13 for the Solaris 10 operating system

For more information about the Oracle Solaris Cluster framework and data services, please refer to the Oracle Solaris Cluster 4.3 documentation and Oracle IT Infrastructure Solutions for SAP High Availability.

3. SAP Scenarios
Oracle Solaris Cluster supports all the SAP products based on SAP NetWeaver 7.0x and 7.x with the SAP kernel updated to version 720/720_EXT, 721/721_EXT, 722/722_EXT, 740, 741, 742, or 745.

Historically, there are three installation scenarios for an HA SAP system. For a new SAP system installation, it is highly recommended to configure the Standalone Central Services (ASCS/SCS/ERS) scenario.

» Standalone Central Services (ASCS/SCS/ERS) scenario – This is the default high-availability SAP system installation with sapinst, covering ABAP, Java, and ABAP+Java systems. The single point of failure in this scenario is the ASCS/SCS instance(s), which consist of the Message Server and Enqueue Server processes. The Enqueue Server process holds the enqueue lock table in shared memory. The ERS instance(s) hold the Enqueue Replication Server process, which replicates the enqueue lock table on another cluster node. The (A)SCS and ERS instance(s) must be configured in failover resource group(s). In case of failure, the (A)SCS instance must fail over to the node where the ERS instance is running and take over the replicated enqueue lock table, thus preventing enqueue data loss. The ERS instance will then be switched over automatically by the Solaris Cluster agent to another available cluster node to re-establish the redundancy of the enqueue lock table. The Primary Application Server instance may also hold some unique services, making it a single point of failure as well; it can be protected in a failover resource group. One or more Additional Application Server instance(s) can be configured in failover or multiple-master resource groups. In multiple-master resource groups, the Solaris Cluster agent takes care of the given number of application server instances; if one of the application server instances fails, it may start another one to keep the capacity the same as before.
» ABAP Central Instance scenario – This is the legacy SAP system architecture. Only earlier versions of ABAP-only SAP systems may have this scenario. The single point of failure in this scenario is the Central Instance, which holds the Message Server and the Enqueue Server together with the Primary Application Server in one instance. Dialog Instances or Additional Application Server instances can be configured in failover or multiple-master resource groups as well. Since this scenario will not be supported on SAP NetWeaver 7.5, it is recommended to migrate to the (A)SCS/ERS scenario as soon as possible. For migration, please refer to SAP note 2271095.
» ABAP Central Instance + Java Central Services scenario – Only ABAP+Java SAP systems based on NetWeaver 2004 with SAP kernel 640 can have this scenario. Since this is only in customer-specific support by SAP, it is highly recommended to move from this scenario to the (A)SCS/ERS scenario as soon as possible. For migration, please refer to the white paper Migration to Solaris Cluster Data Service for SAP NetWeaver.
4. Oracle Solaris Cluster Geographic Edition
The Oracle Solaris Cluster Geographic Edition extends Oracle Solaris Cluster by using multiple clusters in separate locations, especially over long distances, to provide service continuity in case of a disaster at one location. For more information, please refer to the Oracle Solaris Cluster 4.3 Geographic Edition Overview.

5. Licensing Information
For ordering and licensing questions, please refer to the Oracle Solaris Cluster 4.3 Licensing Information User Manual.

6. References
For more information about HA SAP on Oracle Solaris Cluster, please refer to the Oracle Solaris Cluster 4.3 documentation and Oracle IT Infrastructure Solutions for SAP High Availability.

Solaris Cluster Architecture

Storage technologies shown on the original slide: Solaris Volume Manager, Veritas Volume Manager**; replication and mirroring; file systems; Sun StorageTek Traffic Manager, Sun StorageTek Availability Suite**, Sun StorageTek 99x0 TrueCopy**, Sun StorageTek 9900 ShadowImage**, EMC SRDF**
Sun High Availability Solutions
Solaris Cluster and the Sun Java Availability Suite form a service-level management platform:
- High availability with no single point of failure; global storage is configured, and cluster nodes are monitored through heartbeats
- Service-level management: cluster services, including network, data, and applications, are supported through agents
- Easy to implement and use
- Open and able to be integrated
- Low risk and low total cost of ownership
Diagram (slide residue): Solaris Cluster protecting services such as a database, file server, mail server, and applications, running on the OS and on LDoms (I/O domains and guest domains).
Solaris Cluster 3.2 11/09 (U3) comprises: Sun Cluster 3.2, Sun Cluster Geographic Edition 3.2, Sun Cluster Agents 3.2, and the Sun Developer Tools.

A Very Detailed SUN Cluster for Oracle (Raw Devices) Document - FZPU Dual-Node Maintenance Guide

FZPU Dual-Node Maintenance Guide

1. Disk resource maintenance

1. View disk set information
root@web-db1 # metaset -s ora_data

2. Remove disk resource d5 from disk set ora_data
root@web-db1 # metaset -s ora_data -d -f /dev/did/rdsk/d5
(-f means force removal)

3. Add disk d5 to disk set ora_data
root@web-db1 # metaset -s ora_data -a /dev/did/rdsk/d5

4. Switch the disk set owner
metaset -s ora_data -t
Note: regarding disk set ownership, the node on which the disk set was created becomes its default owner. During implementation, if cluster resources have not yet been configured, a node may fail to configure or write to the disk set; in that case, run metaset -s ora_data -t on the node that should take ownership.

5. Remove node host web-db2 from disk set ora_data
root@web-db1 # metaset -s ora_data -d -f -h web-db2
(-f means force removal)

6. Add node host web-db2 to disk set ora_data
root@web-db1 # metaset -s ora_data -a (-M) -h web-db2

7. Initialize d5 and d6 as metadevices
metainit -s ora_data d50 1 1 /dev/did/dsk/d5
metainit -s ora_data d60 1 1 /dev/did/dsk/d6

8. Switch the disk set to node web-db2
root@web-db1 # scswitch -z -D ora_data -h web-db2

9. View metadevice information
root@web-db1 # metastat -s ora_data

10. View detailed soft partition information
root@web-db1 # metastat -s ora_data d51

11. Create a soft partition
root@web-db1 # metainit -s ora_data d51 -p d50 10g

Detailed steps for creating a new soft partition:
a. Create the disk set ora_data
metaset -s ora_data -a -h web-db1 web-db2
b. Add the shared disks d4 and d5 to disk set ora_data
metaset -s ora_data -a /dev/did/rdsk/d4s0
metaset -s ora_data -a /dev/did/rdsk/d5s0
c. Initialize d4 and d5 as metadevices
metainit -s ora_data d50 1 1 /dev/did/dsk/d5
metainit -s ora_data d60 1 1 /dev/did/dsk/d6
d. Carve out the soft partitions
metainit -s ora_data d401 -p d40 20m
metainit -s ora_data d402 -p d40 20m
metainit -s ora_data d403 -p d40 30G
metainit -s ora_data d404 -p d40 30G
metainit -s ora_data d405 -p d40 500m
metainit -s ora_data d406 -p d40 25m
metainit -s ora_data d407 -p d40 500m
metainit -s ora_data d408 -p d40 4G
metainit -s ora_data d409 -p d40 1G
e. Check the status of disk set ora_data
root@web-db1 # metastat -s ora_data
f. Mount the soft partition
Taking d421 as the device holding archived logs, mounted on /mnt/arch, as an example:
metainit -s ora_data d421 -p d40 100G
newfs /dev/md/ora_data/dsk/d421
mount -F ufs /dev/md/ora_data/dsk/d421 /mnt/arch
Note: make sure web-db1 is the owner of disk set ora_data at this point.
Edit /etc/vfstab on web-db1 and web-db2 and add:
/dev/md/ora_data/dsk/d421 /dev/md/ora_data/rdsk/d421 /mnt/arch ufs 2 no logging

12. Delete a soft partition
root@web-db1 # metaclear -s ora_data -r d51

13. Delete all soft partitions under metadevice d50 (use with caution)
root@web-db1 # metaclear -s ora_data -p d50

14. Creating a multi-owner disk set for Oracle RAC fails
root@web-db2 # metaset -s ora_data -M -a -h web-db1 web-db2
metaset: web-db2: ora_data: node web-db1 is not in membership list
Cause: the SUNW.rac_svm resource type has not been registered.
Solution:
1: Register the resource type
root@web-db2 # scrgadm -a -t SUNW.rac_svm
2: Edit the /var/run/nodelist file on every node
root@web-db2 # vi /var/run/nodelist
1 web-db1 192.168.1.1
2 web-db2 192.168.1.2

15. How to change the number of secondary nodes required for a device group
The default number of secondary nodes for a device group is set to one.
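The procedure for item 15 is not included in this excerpt. As a hedged sketch only, not taken from the original document, the secondary-node count of a device group can typically be changed with the Sun Cluster device-group commands and then verified, for example:

scconf -c -D name=ora_data,numsecondaries=2
cldevicegroup set -p numsecondaries=2 ora_data
cldevicegroup show ora_data

The first form is the classic scconf CLI; the second is the object-oriented CLI introduced with Sun Cluster 3.2; the third displays the device-group properties to confirm the change.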

Solaris Operating System Installation Manual (AMS Cluster Environment)

1. Overview
The primary and standby servers each need three IP addresses: two that clients can use to log in, and one used for the cluster.

On the primary and standby machines, 100.100.100.1 and 100.100.100.2 are the IP addresses used for internal communication between the AMS primary and standby servers.

The two internal-communication addresses can be connected back to back with a direct network cable.

You may set your own private-network IPs, but before doing so be sure to check the operator's network plan to avoid conflicts with private addresses already allocated there.

Try not to use 192.168.1.x (most wireless routers and ONUs now hand out internal addresses in this subnet). In this document the internal-communication IPs use the private addresses 100.100.100.1 and 100.100.100.2. (Note: the Solaris installation in this document is preparation for a 5520 AMS cluster. If you only want to follow this document to install the operating system without installing the AMS cluster, or you are installing a standalone 5520, you do not need to configure NIC 3 for internal communication.)
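For reference, a minimal sketch of how these internal addresses can be made persistent in the Solaris 10 network files after installation; the interface name e1000g2 and the private host names are assumptions used only for illustration:

/etc/hostname.e1000g2 on the primary server (a single line giving the interface's host name):
ams1-priv
/etc/hosts additions on both servers:
100.100.100.1   ams1-priv
100.100.100.2   ams2-priv
/etc/netmasks addition:
100.100.100.0   255.255.255.0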

2. Installing the Solaris operating system
The Solaris installation needed for a 5520 AMS cluster differs slightly from the standalone version; disk partitioning is the main difference, so please pay attention to it.

2.1 Put the Solaris 10 x86 installation CD in the optical drive, power on the server, and let it complete its self-test.

Note: for installing Solaris on an HP ProLiant server without a monitor, refer to the document "HP ProLiant 服务器无显示器安装步骤.doc".

2.2 After the self-test completes, boot from the Solaris installation CD, select "Solaris", and press Enter.

2.3 When the information shown in the figure appears, select "1" to start the installation.

2.4 Select US-English and press F2 to continue.

2.5 Press Enter to continue.

2.6 Click once on a blank area with the mouse, then press Enter to continue.

2.7 Select the language to use during installation: choose "0" and press Enter.

2.8 Click Next.

2.9 Select Networked and click Next.

2.10 Hold down the Ctrl key while selecting the required network interfaces. The Qinzhou site needs three IP addresses configured, so hold Ctrl and select e1000g0, e1000g1, and e1000g2, then click Next.

2.11 Configure the first network interface: select e1000g0 and click Next.

Summary of Problems That May Be Encountered on SUN Solaris Servers (Parts 1-3)

1) Q: I have run into the following problem: when telnetting to a SUN machine it reports the error below:
No utmpx entry. You must exec "login" from the lowest level "shell".
After doing the following:
cd /var/adm
mv utmpx utmpxbak
touch utmpx
telnet returned to normal, but when running the login command on that machine's console, the same error as above was reported again.

And after rebooting the machine, telnet still reports the same error!

A: Go into single-user mode and empty (do not delete) these two files:
# cat /dev/null > /var/adm/wtmpx
# cat /dev/null > /var/adm/utmpx
Then restart the system. I searched many foreign forums and they all say the same thing:
The problem comes if the utmp or wtmp file becomes corrupted. You need to initialize these and reboot the system to correct the error. These files are log files and can be initialized without affecting the system, as long as you reboot the system after truncating the files. Perform these steps:
1. Bring the system into System Maintenance mode.
2. Make copies of the files /etc/utmp, /etc/utmpx, /etc/wtmp, and /etc/wtmpx before proceeding with the next step.
3. Delete the contents of these files by executing the following commands:
# > /etc/utmp
# > /etc/wtmp
# > /etc/utmpx
# > /etc/wtmpx
4. Shut down the system:
# shutdown -y -g0
Restart the system.

2) Q: I assign a value using setenv PATH=$PATH:/path/to/my/program, but it always complains about a syntax or modifier problem.
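The answer to this last question is not included in the excerpt. As a note that is not from the original: in the C shell, setenv takes the variable name and the value as two separate arguments with no equals sign, while the Bourne/Korn shell uses the = form:

setenv PATH ${PATH}:/path/to/my/program        (csh/tcsh)
PATH=$PATH:/path/to/my/program; export PATH    (sh/ksh)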

Oracle Solaris Cluster Data Service Guide

Oracle® Solaris Cluster Data Service for Oracle Guide, part number E23231-04, March 2014. Copyright © 2000, 2014, Oracle and/or its affiliates. All rights reserved.


How to Install VMware Tools on Solaris in VMware and Add a Windows Shared Folder

1. On the host, select VM > Install VMware Tools.
If an earlier version of VMware Tools is installed, the menu item is Update VMware Tools. If the current version is installed, the menu item is Reinstall VMware Tools.
2. On the guest, log in as root.
3. If necessary, mount the VMware Tools virtual CD-ROM image.
Usually, the Solaris volume manager vold mounts the CD-ROM under /cdrom/vmwaretools. If the CD-ROM is not mounted, restart the volume manager using the following commands:
/etc/init.d/volmgt stop
/etc/init.d/volmgt start
4. Change to a working directory (for example, /tmp):
cd /tmp
5. Extract VMware Tools:
gunzip -c /cdrom/vmwaretools/vmware-solaris-tools.tar.gz | tar xf -
6. Run the VMware Tools installer:
cd vmware-tools-distrib
./vmware-install.pl
Respond to the configuration prompts. Press Enter to accept the default value.
7. Log out of the root account:
exit
8. (Optional) Start your graphical environment.
9. In an X terminal, to start the VMware User process, enter the following command:
vmware-user

Afterwards you can use /etc/init.d/vmware-tools to start/stop the service. To add a shared folder, go to VM > Settings > Options > Shared Folders in VMware, click Add, select the directory, and click OK. The new folder will then appear under /mnt/hgfs in Solaris.
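To confirm that the tools are working, two quick checks can be done (hedged; the guest daemon's name can differ between VMware Tools versions):

ps -ef | grep vmware-guestd
ls /mnt/hgfs

The first should show the tools guest daemon running; the second lists the shared folders exposed through the hgfs filesystem.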

Summary of Problems and Solutions When Installing Oracle on Solaris

Oracle for Solaris installation error FAQ
Disclaimer: everything below was collected from forums. Please point out any mistakes, and feel free to add anything that is missing. ^_^

Q: Is there a difference between 64-bit and 32-bit Oracle for Solaris?
A: Yes; which one to install depends on the operating system.

To check whether the operating system is 32-bit or 64-bit:
# isainfo -v

Q: While installing Oracle, I cannot swap in the second disc.
A: When the installer prompts you to insert the second disc, you must change the current directory to some other directory.

Use cd / to leave the CD-ROM directory; the disc can then be ejected, after which you can mount the second disc.

Q: Installing Oracle 8i on Solaris 8 x86, ./runInstaller says "file cannot be executed".
A: Check the following:
1. Is the database version correct? Make sure the installation media you are using is for SPARC or for x86 as appropriate.
2. As the oracle user, run: $ DISPLAY=IP:0.0; export DISPLAY
3. Check the file permissions with ls -al; run chmod +x ./runInstaller, or chmod -R +x on the whole directory.

Q: Does installing Oracle 9i on Solaris require a Java virtual machine?
A: No.

Q: An error appears at 25% while installing Oracle 8i. When installing Oracle on Solaris there is an error message: File not found /usr/local/java/java1.2.2 - Ignore or Abort.
A: You can manually point it to /usr/java1.2.

Q: During installation a window pops up: "Unable to find make utility in location: /usr/ccs/bin/make".
A: The make utility is required. Set the environment variable first, as follows:
PATH=$PATH:/usr/ccs/bin; export PATH
Then try the installation again. If that does not work, download a make (and gcc), build and install it, and it should then work.

Q: "Error invoking target install (in makefile /exprot/home/oracel/orant/cts/lib/ins_ctx.mk)"
A: Run make as the oracle user; if it cannot be found, add it to the PATH.
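Several of the answers above boil down to preparing the oracle user's environment before running the installer. A minimal sketch of a Bourne-shell profile fragment; the directory paths and the X display address are assumptions for illustration only:

ORACLE_BASE=/export/home/oracle
ORACLE_HOME=$ORACLE_BASE/product/10.2.0
PATH=$PATH:/usr/ccs/bin:$ORACLE_HOME/bin
DISPLAY=192.168.1.100:0.0
export ORACLE_BASE ORACLE_HOME PATH DISPLAY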


Implementing SUN Cluster 3.2.3 on VMware Server 2.0.2 - Part 2: SUN Solaris 10 U9 x86 64-bit Installation
Author: Guo Hui (郭晖)
Contact email: kelantas@
1. Define the SUN Solaris 10 virtual machine. I will not describe the definition process with detailed screenshots.

I will just describe here what the virtual machine needs to be defined as.

CPU: 2
Memory: 2540 MB
Disks: 2 x 73 GB, both attached to SCSI Controller 0
NICs: 3 - one on VMnet0 (bridged), one on VMnet1 (host-only), one on VMnet2 (host-only)
CD-ROM drive: 1, pointing at the Solaris 10 U9 .iso. My screenshot points at the Sun Cluster 3.2.3 iso; remember to change it to the correct one.

See the screenshots for the detailed configuration.
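For reference, the same definition expressed as a fragment of the virtual machine's .vmx file might look roughly like this. This is only a hedged sketch: the disk and ISO file names, the guestOS value, and the way the third NIC is attached to VMnet2 are assumptions, and in practice the VMware Server 2.0 web console writes these entries for you:

numvcpus = "2"
memsize = "2540"
guestOS = "solaris10-64"
scsi0.present = "TRUE"
scsi0:0.fileName = "guosol10a1-disk0.vmdk"
scsi0:1.fileName = "guosol10a1-disk1.vmdk"
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"
ethernet1.present = "TRUE"
ethernet1.connectionType = "hostonly"
ethernet2.present = "TRUE"
ethernet2.connectionType = "custom"
ethernet2.vnet = "VMnet2"
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "sol-10-u9-ga-x86-dvd.iso"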
Then you can start the virtual machine and begin installing the operating system.
Build two virtual machines. Apart from the host name and the IP address of the first NIC, the installations are identical.

Host name 1: guosol10a1, IP: 192.168.1.31, netmask: 255.255.255.0
Host name 2: guosol10a2, IP: 192.168.1.32, netmask: 255.255.255.0
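It also helps if each node can resolve the other by name before the cluster software is installed. A minimal /etc/hosts sketch for guosol10a1 (on guosol10a2, move the loghost alias to the 192.168.1.32 line); the cluster's private interconnect addresses are assigned later by scinstall and are not listed here:

127.0.0.1      localhost
192.168.1.31   guosol10a1   loghost
192.168.1.32   guosol10a2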
Install according to the following screenshots:
The 250 MB on slice s7 is reserved for the metadb (state database replicas), and the 600 MB /globaldevices partition is for the cluster.
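Once Solaris and Solaris Volume Manager are in place, the state database replicas are what that s7 slice is for. A hedged sketch, assuming the boot disk shows up as c1t0d0 (check with format first):

metadb -a -f -c 3 c1t0d0s7
metadb -i

The first command creates three replicas on slice 7; the second lists them to verify.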
