RAC One Node Conversion Steps


Oracle Solaris Cluster for SAP Configuration Guide

Oracle Solaris Cluster for SAP Configuration Guide
Solaris Cluster 3.3 3/13 and 4.x
ORACLE WHITE PAPER | MAY 2016

Disclaimer
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Table of Contents
Disclaimer
1. Introduction
2. Oracle Solaris Cluster Framework and Data Services
3. SAP Scenarios
4. Oracle Solaris Cluster Geographic Edition
5. Licensing Information
6. References

1. Introduction
The global economy offers unprecedented opportunity for companies to increase customers and revenue, but there is a downside from an IT point of view: systems must be continuously available. The Oracle Solaris operating system includes features such as predictive self-healing and a fault manager that are designed to keep systems and applications up and running even if there is a hardware failure. Oracle Solaris Cluster software protects against system failure for even higher availability. In the SAP environment, high availability (HA) is very important, as most SAP products are business critical and must always stay up and running. This configuration guide is intended to help identify the Oracle Solaris Cluster data services for SAP products that provide high availability for common SAP implementation scenarios.

Oracle Solaris Cluster is a high availability cluster software product for the Oracle Solaris operating system. It improves the availability of hardware and software services through redundant computers (known as cluster nodes), the Solaris Cluster framework software, and cluster data services (known as agents) for the applications.
Applications are administered and monitored in resource groups, which consist of one or more resources. Resource groups can be configured as failover, scalable, or multiple-master, depending on the application requirements.

Oracle Solaris Cluster provides extended support for Oracle Solaris Zones, enabling server consolidation in an HA environment. These virtualization capabilities allow scalable, multiple-master, or failover applications and their associated Oracle Solaris Cluster data services to run unmodified within zone clusters. Zone clusters on Oracle Solaris Cluster provide administrative isolation with full service protection through fine-grained monitoring of applications, policy-based restart, and failover within a virtual cluster. This type of environment offers multiple layers of availability. For example, a failed application can be configured to first try to restart in its zone. If the restart fails, it can attempt to start in another zone using the Oracle Solaris Cluster failover capability. If this fails again, it can attempt to reboot the zone using the Oracle Solaris Cluster framework and start the application after the zone is rebooted.

Oracle Solaris Cluster Geographic Edition provides a disaster recovery feature; for more information see Chapter 4.

2. Oracle Solaris Cluster Framework and Data Services
The Oracle Solaris Cluster framework and data services can be used to improve the availability of SAP components running on the Oracle Solaris operating system, eliminating every single point of failure (SPOF) in the system. All tiers of an SAP system can be consolidated within an Oracle Solaris Cluster environment, enabling a single point of management with data services for database instance(s), standalone Central Services instance(s), Enqueue Replication Server instance(s), Primary and Additional Application Server instance(s), and liveCache instance(s).
Earlier SAP versions with a Central Instance are also supported on Solaris Cluster. The following is a list of useful Solaris Cluster data services in SAP environments:

» Oracle Solaris Cluster Data Service for SAP NetWeaver – This agent supports SAP standalone Central Services instance(s), Enqueue Replication Server instance(s), Primary and Additional Application Server instance(s), and earlier SAP versions with a Central Instance. The Additional Application Server instances can be configured as multiple-master resources, which may run on multiple cluster nodes at the same time, or as failover resources, which run on one node only. All other instances are to be configured as failover resources.
» Oracle Solaris Cluster Data Service for Oracle Database – This agent supports the Oracle Database single-instance configuration. The Oracle database runs on one node and can fail over to other nodes.
» Oracle Solaris Cluster Data Service for Oracle Real Application Clusters – This agent supports the Oracle RAC database, which may run on multiple cluster nodes simultaneously.
» Oracle Solaris Cluster Data Service for Oracle External Proxy – This agent is used for interrogating the status of an Oracle Database or Oracle RAC database that is not on the same cluster as the SAP application, but on an external cluster.
» Oracle Solaris Cluster Data Service for SAP MaxDB – This consists of two agents, one for the MaxDB database and the other for the xserver processes. The MaxDB database is configured in restart and failover mode. The xserver processes are responsible for all connections to the MaxDB database; the xserver resource must be configured in multiple-master mode and runs on each of the cluster nodes that the MaxDB database might fail over to.
» Oracle Solaris Cluster Data Service for SAP liveCache – This consists of two agents, liveCache and xserver. The liveCache agent is responsible for making the SAP liveCache database highly available in restart and failover mode. The xserver agent takes care of the health of the xserver processes, which establish all connections to SAP liveCache. It must be configured in multiple-master mode and runs on each of the cluster nodes that liveCache may fail over to.
» Oracle Solaris Cluster Data Service for Sybase ASE – This agent supports the Sybase database in restart and failover mode.
» Oracle Solaris Cluster Data Service for NFS – This agent can be used for sharing SAP central directories via NFS.
» Oracle Solaris Cluster Data Service for Oracle Solaris Zones – This agent supports HA for Solaris Zones. A zone, with its application, can be switched over from one cluster node to another. Note that this is not the same as an Oracle Solaris Zone Cluster.

The Solaris Cluster Data Service for SAP NetWeaver can be integrated with sapstartsrv via the SAP HA Connector and is certified by SAP with SAP HA Interface Certification NW-HA-CLU740. This agent is currently available on two Solaris Cluster versions:
» Oracle Solaris Cluster 4.x for Solaris 11.x operating systems
» Oracle Solaris Cluster 3.3 3/13 for the Solaris 10 operating system

For more information about the Oracle Solaris Cluster framework and data services, please refer to the Oracle Solaris Cluster 4.3 documentation and Oracle IT Infrastructure Solutions for SAP High Availability.

3. SAP Scenarios
Oracle Solaris Cluster supports all SAP products based on SAP NetWeaver 7.0x and 7.x with the SAP kernel updated to 720/720_EXT, 721/721_EXT, 722/722_EXT, 740, 741, 742, or 745. Historically, there are three installation scenarios for an HA SAP system. For a new SAP system installation, it is highly recommended to configure the Central Services (ASCS/SCS/ERS) scenario.

» Standalone Central Services (ASCS/SCS/ERS) scenario – This is the default high-availability SAP system installation with sapinst, including ABAP, Java, and ABAP+Java systems.
The single point of failure in this scenario is the ASCS/SCS instance(s), which consist of the Message Server and Enqueue Server processes. The Enqueue Server process holds the enqueue lock table in shared memory. The ERS instance(s) hold the Enqueue Replication Server process, which replicates the enqueue lock table on another cluster node. The (A)SCS and ERS instance(s) must be configured in failover resource group(s). In case of failure, the (A)SCS instance must fail over to the node where the ERS instance is running and take over the replicated enqueue lock table, thus preventing loss of enqueue data. The ERS instance is then switched over automatically by the Solaris Cluster agent to another available cluster node to re-establish the redundancy of the enqueue lock table. The Primary Application Server instance may also hold some unique services, making it a single point of failure as well; it can be protected in a failover resource group. One or more Additional Application Server instance(s) can be configured in failover or multiple-master resource groups. In multiple-master resource groups, the Solaris Cluster agent takes care of the given number of application server instances: if one of them fails, it may start another one to keep the capacity the same as before.

» ABAP Central Instance scenario – This is the legacy SAP system architecture; only earlier versions of ABAP-only SAP systems may have this scenario. The single point of failure here is the Central Instance, which holds the Message Server and the Enqueue Server together with the Primary Application Server in one instance. Dialog Instances or Additional Application Server instances can be configured in failover or multiple-master resource groups as well. Since this scenario is not supported on SAP NetWeaver 7.5, it is recommended to migrate to the (A)SCS/ERS scenario as soon as possible. For migration, please refer to SAP note 2271095.

» ABAP Central Instance + Java Central Services scenario – Only ABAP+Java SAP systems based on NetWeaver 2004 with SAP kernel 640 can have this scenario. Since it is only in customer-specific support by SAP, it is highly recommended to move to the (A)SCS/ERS scenario as soon as possible. For migration, please refer to the white paper Migration to Solaris Cluster Data Service for SAP NetWeaver.

4. Oracle Solaris Cluster Geographic Edition
Oracle Solaris Cluster Geographic Edition extends Oracle Solaris Cluster by using multiple clusters in separate locations, especially over long distances, to provide service continuity in case of a disaster in one location. For more information, please refer to the Oracle Solaris Cluster 4.3 Geographic Edition Overview.

5. Licensing Information
For ordering and licensing questions, please refer to the Oracle Solaris Cluster 4.3 Licensing Information User Manual.

6. References
For more information about HA SAP on Oracle Solaris Cluster, please refer to the Oracle Solaris Cluster 4.3 documentation and Oracle IT Infrastructure Solutions for SAP High Availability.

Oracle Corporation, World Headquarters: 500 Oracle Parkway, Redwood Shores, CA 94065, USA. Worldwide inquiries: Phone +1.650.506.7000, Fax +1.650.506.7200.

Copyright © 2016, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document.
This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

Solaris Cluster Architecture

[Slide: storage software stack — volume management: Solaris Volume Manager, Veritas Volume Manager**; replication and mirroring / file system: Sun StorageTek Traffic Manager, Sun StorageTek Availability Suite**, Sun StorageTek 99x0 TrueCopy**, Sun StorageTek 9900 ShadowImage**, EMC SRDF**]
Partner technical training, August 2010
Sun High Availability Solutions
Solaris Cluster and Sun Java Availability Suite: a service-level management platform
- High availability with no single point of failure; global storage configuration; cluster nodes monitored through heartbeats
- Service-level management: clustered services supported through agents, covering network, data, and applications
- Easy to deploy and use
- Open and integrable
- Low risk, low total cost of ownership
[Slide diagrams: databases, file servers, mail servers, and applications each running on Solaris Cluster on top of the OS; LDOM topologies with guest domains and I/O domains]

Solaris Cluster 3.2 11/09 (U3) components: Sun Cluster 3.2, Sun Cluster Geographic Edition 3.2, Sun Cluster Agent 3.2, Sun Developer Tools

Oracle Generic Patch Installation Manual

Oracle Generic Patch Installation Manual

1. Overview
In recent years, as our users' Oracle databases have been applied ever more deeply and their environments have grown more complex, common Oracle bugs are occasionally triggered. We can therefore no longer require our engineers merely to know how to install the database: when a database bug is hit, they should also be able to locate and install the corresponding fix competently.

This document is intended to guide engineers in reviewing and checking the patch status of a database environment, and in installing the Oracle patch files they are given.

2. The Oracle Patch System
Like other enterprise software, an Oracle database release goes through a long lifecycle of publication and maintenance. After a release ships, Oracle periodically publishes update patches on its website during the release's lifetime. By release cadence and degree of aggregation, these fall into several classes: maintenance releases (Version), patch sets (Patch Set), critical patch updates (Critical Patch Update), patch set updates (Patch Set Update), and one-off patches. Briefly, their differences are:

Ø Version / maintenance release – All patches against the previous maintenance release are consolidated, new functionality or substantial changes are added, and the whole is tested and packaged as a software release, e.g. 11.2.

Ø Patch Set – A fully tested, cumulative set of fixes released between two product versions (generally once or twice a year), e.g. 11.2.0.2 → 11.2.0.3.

Ø Critical Patch Update (CPU) – A set of high-priority fixes, often for security issues, delivered once a quarter. CPUs are cumulative with respect to earlier security fixes: installing only the most recent one suffices, since it contains all previous CPU patches. A CPU may also contain other fixes, to resolve conflicts with non-security patches (i.e. to reduce the need for merge requests). CPUs have since been renamed Security Patch Update (SPU).
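On a real system, whether a given patch is already present is checked with `$ORACLE_HOME/OPatch/opatch lsinventory`. The following sketch shows the idea end to end; since OPatch is not available here, the inventory text is stubbed, and the patch numbers are illustrative only:

```shell
# Sketch: check whether a one-off patch is installed. In production, replace
# the stub below with: $ORACLE_HOME/OPatch/opatch lsinventory
# The patch IDs used are hypothetical examples.
inventory() {
  cat <<'EOF'
Patch  17478514  : applied on Tue Jan 14 10:00:00 CST 2014
Patch  13343438  : applied on Mon Mar 05 09:30:00 CST 2012
EOF
}

check_patch() {  # usage: check_patch PATCH_ID
  if inventory | grep -q "Patch  $1 "; then
    echo "patch $1 already installed"
  else
    echo "patch $1 not installed"
  fi
}

check_patch 17478514   # present in the stub inventory
check_patch 19121551   # absent from the stub inventory
```

The same grep-based filter works unchanged on live `opatch lsinventory` output, since applied patches are listed one per `Patch NNN` line.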

CLUSTERPRO MC ApplicationMonitor 1.2 for Linux User Guide

Installing ApplicationMonitor (see also rpm(8)) on Red Hat Enterprise Linux 5:

# rpm -ivh /mnt/cdrom/depot/clusterpro-mc-am-1.2.0-1.x86_64.rpm

Verify the installed package:

# rpm -qa clusterpro-mc-am
clusterpro-mc-am-1.2.0-1

ODA X5-2 Introduction

Oracle ODA X5-2: workload-matched configurations, optimized at every stage of engineering — documented high-load tuning, stress and scalability testing, real production stress testing, patch rollback testing, and fault-injection testing. Every component, from application to disk, is jointly designed, tested, and validated: component interoperability testing and validation, end-to-end functional validation, early development testing, and clearly defined consolidation targets — end-to-end, comprehensive, deep optimization by a collaborative engineering team.

The challenge with today's traditional two-node HA database systems: complexity and high cost, the need for many specialized skills, and downtime risk that is hard to control.

ODA is a two-node HA database system of integrated design, combining compute, storage, networking, operating system, database, and management in one: shared storage (DAS / FC-SAN / IP-SAN), heartbeat monitoring, and database server nodes 1 and 2 — a highly available database system in a single chassis.

ODA: four generations in five years, evolving steadily
- ODA V1 (Sep 2011), ODA X3-2 (Mar 2013), ODA X4-2 (Dec 2013), ODA X5-2 (Jan 2015)
- Stable hardware architecture: 2 compute nodes, shared DAS storage
- Stable software architecture: Appliance Manager, Oracle Database
- Continuously enriched features: virtualization, Flash Cache, DB 12c support, EM 12c support

Overall positioning of ODA
- Database platform: departmental mission-critical databases (OLTP, data warehouses, and data marts; runs Oracle's high availability software suite including Oracle Real Application Clusters); consolidated databases (supports the Multitenant and In-Memory database options); development/test databases (quickly and efficiently provision snapshots).
- Application platform: integrated virtualization supports all application tiers in one system for unified deployment; certified for all Oracle software (Oracle E-Business Suite, JD Edwards, PeopleSoft, Oracle WebLogic Server, Oracle Enterprise Manager); certified for hundreds of ISV applications.

Thousands of customers have deployed ODA across education, finance, manufacturing, retail, healthcare, and other industries, and hundreds of industry ISVs have joined the ODA Exastack program. In the Chinese market, ODA is steadily gaining ground.

ODA X5-2 standard hardware configuration
- 2 compute nodes, totalling: 72 CPU cores, 512 GB memory, 4x 600 GB boot disks, 8 external 10GbE ports, 4x 40Gb internal InfiniBand links
- Shared storage shelf: 800 GB write-cache SSD, 1600 GB database-cache SSD, 128 TB of SAS-2 disk (expandable to 256 TB)

ODA X5-2 highlights: (1) sharply higher hardware specifications; (2) integrated 40Gb/s InfiniBand interconnect; (3) added Flash Cache technology; (4) seamless integration with EM 12c.

Redundant internal storage connectivity: redundant server nodes; redundant SAS HBAs with automatic failover by multipathing software; redundant I/O controller modules; redundant HDDs and SSDs managed automatically by ASM. (Slot-map diagram of nodes 0/1, HBAs, I/O modules, and slots 0-23 omitted.)

ODA X5-2's integrated hardware/software design greatly improves database performance:
- ODA Flash Cache: shared between the two compute nodes; raises the application cache hit rate; about 6x faster than running with no cache.
- ODA Flash Logs: the database writes redo logs to flash with microsecond latency, about 4x faster than traditional storage media.
- ODA Flash Files: ACFS file system metadata is stored on flash, optimizing ACFS reads and writes and database/VM snapshot operations.

Applicable scenario 1: a single RAC database, with application servers in front of the ODA database system. Applicable scenario 2: consolidating multiple databases (applications 1/2/3 on DB1/DB2/DB3) onto one ODA. Both fit small and mid-sized enterprises, single departments of large enterprises, and remote branches of group enterprises — for newly built systems or the replacement of aging traditional architectures.

1. Extreme simplicity: full hardware/software installation and deployment in a few hours; one-button patching and upgrades; highly automated, near-zero-administration storage; factory-built database deployment templates and application-oriented VM templates; one-stop management and support.
2. High reliability: fully redundant hardware (redundant server nodes, storage I/O modules, physical cabling, power supplies, and fans); double/triple data mirroring; RAC clustering and HA for databases and applications; intra-node and inter-node VM high availability.
3. High performance: the highest-performing two-socket server architecture (top CPU performance with Xeon E5-2699 v3, top memory bandwidth with DDR4-2133 memory, top storage bandwidth with 12Gb/s storage controllers, top network bandwidth with 4x 10GbE external service ports); multi-tier storage design (memory <-> flash redo log <-> flash data <-> HDD); 40Gb/s high-bandwidth, low-latency InfiniBand interconnect.
4. High security: factory pre-integrated, pre-tested, and pre-tuned, minimizing the technical risks of on-site deployment; integrated with the unbreakable enterprise-grade Linux operating system and the world's top-ranked database software, greatly strengthening system security; built-in database-level disaster recovery software guards against data loss from catastrophic events.

Deployment and operations are extremely simple: days become hours. The ODA X5-2 VM deployment process uses the # oakcli tool: step 1, import the VM template; step 2, customize it (optional); step 3, clone it; step 4, start the VM. Deploying a database RAC system is fast — the appliance management software provides one-button installation, deployment, configuration, patching, diagnostics, and tuning: unbox and rack, connect power and network, run the one-button installer — about 2 hours overall to install, manage, and maintain.

ODA versus a traditional build-your-own solution: ODA saves money. Build-your-own requires installation expertise, optimization expertise, and network, storage, and system administration; ODA needs only one engineer using the Oracle Appliance Manager graphical tool to complete all of this, greatly cutting the operating cost of business application systems.

ODA's built-in double assurance of reliability — hardware: redundant compute nodes; redundant mirrored internal boot disks for the OS; redundant physical connections; redundant 12Gb/s storage I/O links; redundant 40Gb/s IB cluster heartbeat interconnect; redundant 10Gb/s external service network; double- or triple-mirrored data storage; redundant hot-swap power supplies and fans. Software: Oracle Database 11g/12c Enterprise Edition (Real Application Clusters, RAC One Node); Oracle Grid Infrastructure (Oracle Clusterware); Oracle Linux (UEK2 kernel); Oracle Appliance Manager (system checks for all components, diagnostics collection).

Business value: faster launch of new services, stronger competitiveness, lower risk, lower TCO. ODA X5-2's enduring commercial value — complete, simple, reliable, low cost: low acquisition cost (capacity on demand aligns license spend to demand); low operating cost (better use of expensive, highly skilled staff; less time spent on maintenance); fast time to value (bring new functionality to market more rapidly); better availability (built-in high availability and best practices lead to greater uptime).

Sun Cluster 3.x Training (Planning, Installation, Configuration, and Administration)

Sun Cluster 3.x: software architecture and failover principles

Key modules and concepts:

Persistent Reservations and Reservation Keys — each node is allocated a 64-bit key on the quorum device. If, for whatever reason, node 2 leaves the cluster, node 1 in effect removes node 2's 64-bit key from the quorum device.

Voting mechanism — the Sun Cluster architecture introduces a "voting system" in which:
1. Each node has one vote.
2. A shared disk designated as a quorum device has one vote.
3. The cluster can form and operate normally only while a majority of all possible votes is present (more than 50 percent).

When the nodes can no longer communicate, each may try to continue on its own — this is called split-brain operation. It is resolved as follows:
1. Both nodes first attempt to reserve the designated quorum device.
2. The first node to acquire the quorum device remains a cluster member.
3. The node that loses the race for the quorum device leaves the cluster.

Planning a two-node system:
1. Hardware interconnect planning — a key problem that must be solved is the scsi-initiator-id (illustrated with a T3 single-port disk array).
2. Redundancy:
- Redundant Servers: some components within each node are redundant, and the nodes back each other up.
- Redundant Public Networks: each node's service-facing network ports are redundant.
- Redundant Private Networks: the network ports over which the two nodes communicate with each other are redundant.
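The vote arithmetic above can be sketched in a few lines of shell. This is only an illustration of the majority rule, not a Sun Cluster tool; the vote counts correspond to a two-node cluster with one quorum device (3 possible votes):

```shell
# Illustrative sketch of the quorum majority rule: a partition survives
# only if it holds a strict majority (> 50%) of all possible votes.
has_quorum() {  # usage: has_quorum PRESENT_VOTES TOTAL_VOTES
  present=$1
  total=$2
  # strict majority: 2 * present > total
  [ $((2 * present)) -gt "$total" ]
}

total=3   # node 1 + node 2 + quorum device

# One node plus the quorum device: 2 of 3 votes -> cluster forms.
if has_quorum 2 "$total"; then
  echo "node + quorum device: cluster forms"
fi

# A lone node without the quorum device: 1 of 3 votes -> it must leave.
if ! has_quorum 1 "$total"; then
  echo "lone node without quorum device: node leaves the cluster"
fi
```

This is why the quorum device's extra vote breaks the tie in a two-node split-brain: whichever node wins the reservation race holds 2 of 3 votes.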

Ericsson HD BSC Commissioning: Personal Summary

Contents: 1 Overview (1.1 HD BSC background; 1.2 main commissioning steps); 2 Pre-commissioning preparation; 3 Local commissioning (3.1 powering up the equipment; 3.2 starting the CP; 3.3 starting the APG; 3.4 hardware data loading and testing — 3.4.1 size alteration of hardware SAE, with other SAE sized by reference to the live network; 3.4.2 RP and EM definition and deblocking; 3.4.3 GS definition and testing; 3.4.4 SNT definition and testing; 3.4.5 DIP, SDIP, and position definition; 3.4.6 TRA data definition; 3.5 CP & APG NE test; 3.6 APG data configuration — 3.6.1 enabling OSS and configuring APG accounts and passwords; 3.6.2 defining STS; 3.6.3 configuring the virus definitions; 3.7 local transmission loopback test); 4 A-interface data (4.1 signalling points and related data; 4.2 clock definition; 4.3 circuit identification code (CIC) definition — it must match the MGW side; 4.4 semi-permanent definitions); 5 Gb-interface data (5.1 notes on some Gb data); 6 Bringing up micro cells (6.1 loading BTS software; 6.2 loading BTS data; 6.3 precautions); 7 Test calls (7.1 test-call content); 8 References.

1 Overview
1.1 HD BSC background
APG43 hardware structure. The CP has five working states: EX, SB/WO, SB/SE, SB/UP, SB/HA.

EX: the CP is on the executive side.

SB/WO: the CP is on the standby side in its normal working state.

SB/SE: the CP is in the separated state; operating on it does not affect the other side.

SB/UP: the standby CP is updating from the executive CP. This is a transient state during state changes in which the executive side copies unsynchronized data to the standby side and checks the hardware, typically taking 3-5 minutes; on success the state becomes SB/WO, on failure SB/HA.

Oracle RAC Study Notes: CRS Installation

10gR2 CRS and RDBMS Installation

Main preparation before installing CRS and RDBMS:
• Redundant network interfaces — the commonly used option per platform, with the cluster-software alternative in parentheses:
- Linux: NIC bonding
- IBM AIX: EtherChannel (HACMP Swap Adapter)
- HP-UX: Auto Port Aggregation, APA (MC/ServiceGuard Local Switch)
- Sun: Trunking; IP Multipathing (IPMP)
• Install and configure OCFS2 (Linux); install and configure ASMLib (Linux) — is this mandatory on Linux? No; it is used for ASM performance.
• Hardware checks: RAM (1 GB), disk space, swap, /tmp (400 MB)
• Software verification/configuration: version, packages, patches
• Setting up/verifying kernel parameters: shmmax etc. for the database; UDP parameters for the RAC interconnect; LLT parameters (only for VCS) for the interconnect
After entering the correct names, click OK. Then click Add to add additional Cluster Nodes with the appropriate network names. The Public node and Virtual Host names should properly resolve in DNS and the /etc/hosts file. Use nslookup to verify that the names properly resolve in DNS. Verify both the fully qualified and short names. All should resolve properly. Also verify reverse lookups using nslookup to resolve the IP addresses to the proper host names.
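The forward/reverse consistency check described above can be automated. A real check would call nslookup or getent against DNS; in this self-contained sketch the lookups are stubbed with a sample hosts table, and the host names and addresses are illustrative only:

```shell
# Sketch: verify that each name resolves to an IP and that the IP resolves
# back to the same name. In production, swap the awk stubs for nslookup or
# "getent hosts". Names/IPs below are hypothetical examples.
forward() { awk -v h="$2" '$2 == h {print $1}' "$1"; }   # name -> IP
reverse() { awk -v a="$2" '$1 == a {print $2}' "$1"; }   # IP -> name

check() {  # usage: check HOSTS_FILE NAME
  ip=$(forward "$1" "$2")
  back=$(reverse "$1" "$ip")
  if [ -n "$ip" ] && [ "$back" = "$2" ]; then
    echo "$2 OK ($ip)"
  else
    echo "$2 MISMATCH"
  fi
}

hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.0.11 rac01.example.com
192.168.0.12 rac02.example.com
EOF

check "$hosts" rac01.example.com
check "$hosts" rac02.example.com
rm -f "$hosts"
```

Run the same check for both the fully qualified and the short names, for every public node and virtual host name, before starting the installer.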

In-depth Analysis of the Oracle Database Appliance (ODA), an Integrated Database Server and Storage Array

ODA chassis – "Cluster in a Box"


Independent power buffer chips, individually wired between the PSUs and the server nodes; each server node's chassis control and status are accessed independently — there is no centralized chassis management function.

ODA value positioning — the Oracle Database Appliance provides:
1. Cost savings: consolidating databases lowers cost and reduces the complexity of managing multiple server/storage environments (roughly USD 70K and 1,000 working hours saved over three years); spend is minimized through a "pay-as-you-grow" pricing model (expand from 4 to 24 cores); overall cost of ownership is reduced, including support, system administration, cooling and power, and software licensing (if running on non-Oracle hardware).
2. Enterprise-ready and simple management: a complete, pre-configured, tested, and certified appliance (servers, storage, networking); IT maintenance tasks are simplified by upgrading and patching through a single channel; automated performance and high-availability event alerting with automated corrective actions; self-managing storage (minimal storage-administrator involvement required).
3. High availability: redundant hardware and software greatly minimize downtime and administrator intervention.

Shared disks: HDD slots 0 through 19 are used for DATA & RECO; the disks get redundant triple mirroring for HDDs and double mirroring for SSDs; local fault protection is provided through multipath groups. (Slot-map diagram omitted: SAS-15, SAS-11, SAS-7, SAS-3.)

5. RAC Resource Coordination

5.1 Overview of RAC Resource Coordination
In a cluster, all instances must be synchronized so they can share resources. RAC uses the Global Resource Directory to record information about resource usage across the cluster database; the Global Cache Service (GCS) and Global Enqueue Service (GES) manage the information in this directory. Each instance maintains part of the Global Resource Directory in its SGA.

Resource coordination in RAC happens at the instance level and at the cluster-database level. Instance-level coordination is called local resource coordination; cluster-level coordination is called global resource coordination. Local resource coordination in a cluster database is exactly the same as the mechanism in single-instance Oracle: row- and block-level access, space management, SCN creation, and data dictionary cache and library cache management are identical. Global resource coordination is somewhat more complex, as detailed below.

5.1.1 The Contents of the Global Resource Directory
Each instance's portion of the Global Resource Directory holds the current state of all shared resources. Oracle also uses the information in the Global Resource Directory during instance failure and cluster reconfiguration. The shared-resource information in the directory includes:
- the data block identifier, such as the data block address;
- the mode in which instances hold the block: (N) Null, (S) Shared, (X) Exclusive;
- the role with which each instance currently holds the block: local or global.

5.1.2 RAC Synchronization Processes
When an instance requests a resource (for example, a data block), RAC manages the instance's request for the block locally. If the block is subsequently modified by one or more instances, Oracle escalates synchronization to the global level to allow shared access to the block by multiple instances in the cluster. Synchronization in this case requires internode messaging, preparing consistent-read versions of the block, and shipping copies of the block across the cluster database. When one or more instances request a block that another instance is updating, the Global Cache Service processes (LMSn) locate, prepare, and ship the latest block image. To coordinate GCS processing, the GES transmits internode messages over the interconnect. Thus, in RAC, when a remote instance requests a resource held locally by some instance, the original local coordination is escalated to global coordination. This holds for the enqueues and past images of data blocks described in the following sections.

5.1.3 Enqueues
An enqueue is a memory structure that serializes updates to specific rows. No enqueue coordination is involved unless another instance requests a resource to which an enqueue has already been assigned. Global enqueues need no configuration: their number is computed automatically at Oracle startup and recorded in the alert.log file.

5.1.4 Past Images
A past image (PI) — distinct from a before image — is a copy of a dirty block that Oracle maintains until the write of the block completes. The GCS manages past images and also uses them during failure recovery.

5.2 Resource Modes and Roles
Because data blocks are shipped between instances, the same block can exist in multiple caches. These blocks can be held by several instances in different modes, depending on whether an instance modifies the block or only performs consistent reads on it. The modes and roles the GCS uses for resource coordination in RAC are as follows.

5.2.1 Resource Modes
The resource mode determines whether the holder may modify the resource. Table 5-1 compares the three modes: Null (N) mode, Shared (S) mode, and Exclusive (X) mode.

5.2.2 Resource Roles
Oracle assigns a resource role to the instance holding a resource. The role is either local or global; the two are mutually exclusive. A local role converts to a global role as follows: if a block has been modified by the local instance and shipped to another instance, it is considered globally managed, and the GCS assigns the block the global role. If a resource exists in only one cache, its role is local. If a block is modified in one instance and then shipped to another, the buffers containing it are considered globally dirty, and the resource takes the global role.

5.2.3 Global Cache Service Operations
The GCS tracks the location, mode, and role of data blocks; it therefore also manages each instance's access privileges on resources. Oracle uses the GCS to guarantee cache coherency when the current version of a block resides in one instance's buffer cache and another instance requests the block for modification. If an instance reads a block in exclusive mode, subsequent transactions in that instance can share access to the block without GCS intervention — but only as long as the block is not shipped out of the local cache. If the block is shipped out of the local cache, the GCS upgrades its role to global and updates the Global Resource Directory. Whether the resource converts from exclusive mode to another mode depends on how other instances use it. Consider the following example.

5.2.3.1 Global Cache Service Processing Example
Suppose one instance in the cluster database needs to update a data block, and at the same time another instance also needs to update the same block. Without the cache coherency provided by the GCS, both instances could update the block simultaneously. With GCS synchronization, only one instance can update the block; the other instance waits. To ensure that only one instance updates a block at any given time, the GCS maintains cache coherency. An instance need not hold the block in exclusive mode until its transaction ends: for example, if one instance modifies one row of a block without committing, a transaction from another instance can still modify a different row of the same block.

5.2.3.2 Cache Coherency and the Global Cache Service
To ensure cache coherency, the GCS requires instances to acquire resources cluster-wide — not just at the local instance level — before modifying a data block. In this way, the GCS synchronizes global cache access and restricts modification of a block to one instance at a time. Oracle's multiversion block architecture distinguishes the current data block from one or more consistent-read (CR) versions of the block. The current block contains changes for all committed and yet-to-be-committed transactions. A consistent-read version of the block represents a consistent snapshot of the block at a point in time. The LMSn processes produce consistent-read versions by applying rollback segment information to past images. In RAC, both the current block and the consistent-read blocks are managed by the GCS. If an instance (A) holds a data block it has modified, and another instance (B) requests that block, the instance holding the block maintains a past image (PI) of it. If instance B fails, Oracle can reconstruct the current and consistent-read versions of the block by reading the PIs in instance A.

5.3 System Change Number Processing
To synchronize resources in the cluster — data blocks in particular — Oracle must track successive changes to blocks. That is, Oracle records each change to a block by assigning a unique numeric identifier to each version of the block. This allows Oracle to generate redo logs in order for possible future recovery. Oracle assigns an SCN to every transaction. Conceptually, there is a global serial point that generates SCNs. In single-instance Oracle, the SGA maintains and increments the SCN for a database mounted in exclusive mode. In RAC, the SCN must be maintained globally, and implementations vary across platforms: the SCN can be managed by the GCS, by the Lamport SCN generation scheme, by a hardware clock, or by a dedicated SCN server.

5.3.1 Lamport SCN Generation
The Lamport SCN generation scheme is efficient and scalable because it generates SCNs in parallel on all instances. In this scheme, all messages between instances carry SCNs, so multiple instances can generate SCNs in parallel without extra communication between them. On most platforms, Oracle uses the Lamport SCN generation scheme when the value of MAX_COMMIT_PROPAGATION_DELAY is larger than a platform-dependent threshold; this value defaults to 7 seconds. If you change this value, you are in effect changing your cluster database's SCN generation scheme. After instance startup, the alert.log shows whether the Lamport SCN generation scheme is in use. If MAX_COMMIT_PROPAGATION_DELAY is set below the threshold, Oracle uses the hardware clock SCN generation scheme.
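The core of a Lamport-style scheme — every message carries the sender's counter, and the receiver advances to the maximum of the two values plus one — can be sketched in a few lines. This is a toy illustration of the ordering rule only, not Oracle's SCN implementation, and the counter values are made up:

```shell
# Toy Lamport-clock merge: on receiving a message, an "instance" sets its
# counter to max(local, received) + 1, so causally later events always get
# larger numbers. Values are illustrative, not real SCNs.
recv() {  # usage: recv LOCAL_SCN MSG_SCN -> prints new local SCN
  local_scn=$1
  msg_scn=$2
  if [ "$msg_scn" -gt "$local_scn" ]; then
    local_scn=$msg_scn
  fi
  echo $((local_scn + 1))
}

a=10
b=25
a=$(recv "$a" "$b")   # instance A receives a message stamped 25
echo "$a"             # prints 26: A has jumped ahead of the sender's stamp
```

Because every internode message piggybacks an SCN, all instances converge on a consistent ordering without any extra round trips — which is why the scheme scales.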


Oracle RAC convert to RAC One Node: Steps
Copyright by Jieyancai

Environment: Red Hat 6.3 x64 + Oracle 11.2.0.3 RAC + ASM
Database name: zxm; instance names: zxm1, zxm2; target: convert to RAC One Node.
All operations below are performed as the oracle user on node rac01.

1. Stop the database and remove the original instances

srvctl stop database -d zxm
srvctl remove instance -d zxm -i zxm1
srvctl remove instance -d zxm -i zxm2

2. Add a single instance

srvctl add instance -d zxm -i zxm_1 -n rac01

3. Add a service

Create the service manually:

begin
  dbms_service.CREATE_SERVICE('APP', 'APP',
    FAILOVER_METHOD  => dbms_service.FAILOVER_METHOD_BASIC,
    FAILOVER_TYPE    => dbms_service.FAILOVER_TYPE_SELECT,
    FAILOVER_RETRIES => 300);
end;
/

srvctl add service -d zxm -s app -r zxm_1 -P BASIC -e SELECT -m BASIC -y AUTOMATIC

Check the service:

oracle@rac01:/home/oracle(zxm1)>srvctl config service -d zxm
Service name: app
Service is enabled
Server pool: zxm_app
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: zxm_1
Available instances:
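For clients of this TAF service, a tnsnames.ora entry along the following lines could be used. The VIP host names and the port are assumptions based on the rac01/rac02 environment above (a real system may use SCAN or differently named VIPs), and DELAY is an example value; the failover attributes mirror the SELECT/BASIC settings configured above:

```
APP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac01-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac02-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = app)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 300)(DELAY = 5))
    )
  )
```

Clients connecting through such an entry follow the app service as the database relocates between rac01 and rac02.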

4. Start the service manually

srvctl start service -d zxm -s app

5. Convert to RAC One Node

srvctl convert database -d zxm -c RACONENODE -i zxm_1

Check the result:

oracle@rac01:/home/oracle(zxm1)>srvctl config database -d zxm
Database unique name: zxm
Database name: zxm
Oracle home: /home/oracle/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/zxm/spfilezxm.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: zxm
Database instances:
Disk Groups: DATA,RECO
Mount point paths:
Services: app
Type: RACOneNode
Online relocation timeout: 30
Instance name prefix: zxm
Candidate servers: rac01
Database is administrator managed

oracle@rac01:/home/oracle(zxm1)>srvctl config service -d zxm
Service name: app
Service is enabled
Server pool: zxm
Cardinality: 1
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: SELECT
Failover method: BASIC
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: BASIC
Edition:
Preferred instances: zxm_1
Available instances:

6. Relocate the database

srvctl relocate database -d zxm -n rac02

Check the result:

oracle@rac01:/home/oracle(zxm1)>crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac01
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac01
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac01
ora.OCR.dg     ora....up.type ONLINE    ONLINE    rac01
ora.RECO.dg    ora....up.type ONLINE    ONLINE    rac01
ora.asm        ora.asm.type   ONLINE    ONLINE    rac01
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac01
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora.jyc.db     ora....se.type OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac01
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac01
ora.ons        ora.ons.type   ONLINE    ONLINE    rac01
ora....SM1.asm application    ONLINE    ONLINE    rac01
ora....01.lsnr application    ONLINE    ONLINE    rac01
ora.rac01.gsd  application    OFFLINE   OFFLINE
ora.rac01.ons  application    ONLINE    ONLINE    rac01
ora.rac01.vip  ora....t1.type ONLINE    ONLINE    rac01
ora....SM2.asm application    ONLINE    ONLINE    rac02
ora....02.lsnr application    ONLINE    ONLINE    rac02
ora.rac02.gsd  application    OFFLINE   OFFLINE
ora.rac02.ons  application    ONLINE    ONLINE    rac02
ora.rac02.vip  ora....t1.type ONLINE    ONLINE    rac02
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac01
ora....app.svc ora....ce.type ONLINE    ONLINE    rac02
ora.zxm.db     ora....se.type ONLINE    ONLINE    rac02

oracle@rac01:/home/oracle(zxm1)>crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       rac01
               ONLINE  ONLINE       rac02
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac01
               ONLINE  ONLINE       rac02
ora.OCR.dg
               ONLINE  ONLINE       rac01
               ONLINE  ONLINE       rac02
ora.RECO.dg
               ONLINE  ONLINE       rac01
               ONLINE  ONLINE       rac02
ora.asm
