Oracle RAC Brain Split Resolution

About a week ago, a senior Oracle engineer explained to me and a customer how split brain is handled in RAC. According to him, when split brain occurs the nodes race to grab the voting disks; the nodes that manage to seize (n/2+1) of the voting disks survive, while the nodes that fail to grab them are evicted from the cluster.

I have to say that this explanation was far too casual. If even a veteran engineer who has been doing maintenance work since Oracle 6 can make this kind of conceptual mistake, it only shows how fast Oracle technology keeps changing.

Before looking at how split brain (Brain Split) is handled, it is worth introducing the working framework of Oracle RAC CSS (Cluster Synchronization Services):

Oracle RAC CSS provides two background services: Group Management (GM for short) and Node Monitor (NM for short). GM provides the group and lock services. At any point in time one node in the cluster acts as the GM master node; the other nodes send their GM requests serially to the master node, and the master node broadcasts cluster membership changes to the other nodes. Group membership is synchronized on every cluster reconfiguration, and every node interprets the membership change information independently.

The Node Monitor (NM) service keeps node information consistent with other vendors' cluster software through skgxn (skgxn - libskgxn.a, the library that provides node monitoring). NM also maintains the network heartbeat and disk heartbeat we all know, which prove that a node is still alive; when a cluster member stops producing a normal network heartbeat or disk heartbeat, NM evicts it from the cluster, and the evicted node reboots.

NM learns the endpoints it needs to listen on and talk to from the records in the OCR (the OCR stores the interconnect information), and sends its heartbeat over the network to the other cluster members. It also monitors the network heartbeat coming from every other cluster member; such a heartbeat occurs every second, and if no network heartbeat from a node is received within the number of seconds specified by misscount (by the way: in 10.2.0.1 the default misscount on Linux is 60s and 30s on other platforms, or 600s if third-party vendor clusterware is used, and 10.2.0.1 does not yet have disktimeout; from 10.2.0.4 misscount is 60s and disktimeout is 200s; from 11.2 misscount is 30s: CRS-4678: Successful get misscount 30 for Cluster Synchronization Services, CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services), that node is considered "dead". NM is also responsible for initiating cluster reconfiguration when other nodes join or leave the cluster.
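
For reference, the effective values on a running cluster can be queried directly — a quick check using crsctl verbs that exist in recent 10.2 patchsets and later (the CRS-4678 messages quoted above are exactly the output of these commands on 11.2):

[oracle@vrh1 ~]$ crsctl get css misscount
[oracle@vrh1 ~]$ crsctl get css disktimeout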

In split-brain scenarios NM also monitors the voting disk to learn about the other competing sub-clusters. Sub-clusters deserve a short introduction. Imagine an environment with a large number of nodes — take the 128-node cluster Oracle has actually built as our thought experiment. When a network failure occurs there are several possibilities. One is a global network failure, in which none of the 128 nodes can exchange network heartbeats with any other node, producing as many as 128 isolated-island "sub-clusters". Another is a partial network failure, in which the 128 nodes split into several parts, each containing more than one node; these parts are what we call sub-clusters. During such a failure the nodes inside a sub-cluster can still exchange vote messages (vote mesg) with each other, but the sub-clusters and isolated nodes can no longer talk to one another over the regular interconnect — and this is where NM reconfiguration needs the voting disk.

Because NM relies on the voting disk to resolve the communication breakdowns caused by network failures, the voting disk must be accessible at all times. In normal operation every node performs a disk heartbeat: once every second it writes its heartbeat into a block on the voting disk. At the same time CSS reads a block known as the "kill block" every second; when the content of the kill block indicates that the local node has been evicted from the cluster, CSS reboots the node itself.

To keep the disk heartbeat and the kill-block reads working at all times, CSS requires that at least (N/2+1) of the voting disks remain accessible to a node; this guarantees that any two nodes always share at least one voting disk that both of them can access. Under normal circumstances (note: genuinely calm, normal circumstances) a node happily survives as long as the online voting disks it can access outnumber the ones it cannot; once the inaccessible voting disks outnumber the accessible ones, the Cluster Synchronization Services process fails and the node reboots. So the claim that 2 voting disks provide enough redundancy and that 3 or more are unnecessary is wrong. Oracle recommends at least 3 voting disks per cluster.
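
As an aside, the voting disks currently registered with CSS can be listed at any time; the output below is illustrative (/dev/asm-votedisk1 is the device that appears in a log excerpt later in this article):

[oracle@vrh1 ~]$ crsctl query css votedisk
0.     0    /dev/asm-votedisk1

located 1 votedisk(s).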

Addendum 1:

Question:
Some readers asked: must the number of voting disks be odd?

Answer:
Actually we only recommend using an odd number of vote disks; odd is not mandatory. In 10gR2 the upper limit on the number of vote disks is 32.

Question:
Can we use 2 or 4 vote disks?

Answer:
Yes, we can. But even counts such as 2 or 4 fare badly under the hard disk-heartbeat rule that "at least (N/2+1) voting disks must be accessible to the node":
With 2 vote disks, not a single vote-disk heartbeat failure can be tolerated.
With 3 vote disks, at most 1 vote-disk heartbeat failure can be tolerated.
With 4 vote disks, still at most 1 vote-disk heartbeat failure can be tolerated — the same tolerance as with 3, yet with more vote disks to look after, which increases management cost and the risk introduced.
With 5 vote disks, at most 2 vote-disk heartbeat failures can be tolerated.
With 6 vote disks, still at most 2 vote-disk heartbeat failures can be tolerated — and as with 4 versus 3, the extra disk over 5 brings only unreasonable management cost and risk.
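
The pattern above is plain integer arithmetic: a node must still reach floor(N/2)+1 disks, so it tolerates N-(floor(N/2)+1) failures. A minimal sketch that prints the table:

#!/bin/bash
# For N voting disks a node must access at least floor(N/2)+1 of them;
# any more failures than N-(floor(N/2)+1) triggers the eviction protocol.
for n in 1 2 3 4 5 6 7; do
    required=$(( n/2 + 1 ))          # bash integer division is floor()
    tolerated=$(( n - required ))
    echo "votedisks=$n required=$required tolerated_failures=$tolerated"
done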

Addendum 2:
Question:
If the network heartbeat between the nodes is normal, and a node can heartbeat to more vote disks than it cannot reach — say there are 3 votedisks and exactly 1 vote disk's disk heartbeat times out — will a Brain Split occur?

Answer:
This situation triggers neither a Brain Split nor the node eviction protocol. When a single voting disk — or any number below (N/2+1) — suffers a disk heartbeat failure, possibly caused by a transient I/O error while the node accessed the voting disk, CSS immediately marks those failed voting disks OFFLINE. Even with some voting disks OFFLINE, at least (N/2+1) voting disks remain available, which guarantees that the eviction protocol is not invoked, so no node gets rebooted. The Disk Ping Monitor Thread of the node monitor module (DPMT - clssnmDiskPMT) then repeatedly retries the failed OFFLINE voting disks; if a disk becomes accessible to I/O again and the data on it verifies as free of corruption, CSS marks that voting disk ONLINE again. If, however, the voting disk still cannot be accessed within 45s (the 45s is derived from misscount and an internal algorithm), DPMT writes a warning into cssd.log, such as:

[    CSSD]2011-11-11 20:11:20.668 >WARNING: clssnmDiskPMT: long disk latency >(45940 ms) to voting disk (0//dev/asm-votedisk1)

Suppose the RAC environment that produced the clssnmDiskPMT warning above has 3 voting disks in total, and asm-votedisk1 has already been marked OFFLINE because of an I/O error or some other reason. If a second votedisk now runs into trouble and its disk heartbeat fails as well, the node drops below the required number (2) of votedisks, the eviction protocol is invoked, and the node reboots.

A heartbeat failure of a single voting disk, or of any number below (N/2+1), merely generates a warning (Warning) rather than a fatal error: since the majority of the vote disks can still be accessed, the warnings are non-fatal and the eviction protocol is not triggered.

When an actual NM reconfiguration of the cluster takes place, all active nodes and all nodes in the process of joining the cluster participate in the reconfig; nodes that fail to acknowledge (ack) are excluded from the new membership. A reconfig actually consists of several phases:
1. Initialization phase — the reconfig manager (the node with the lowest cluster member number) signals the other nodes to start the reconfig
2. Voting phase — every node sends the membership it knows of to the reconfig manager
3. Split-brain check phase — the reconfig manager checks for a split brain
4. Eviction phase — the reconfig manager evicts the non-member nodes
5. Update phase — the reconfig manager sends the authoritative membership information to the member nodes


During the split-brain check phase the Reconfig Manager identifies the nodes that have a disk heartbeat but no network heartbeat, uses the network heartbeat (where still possible) together with the disk heartbeat information to count the nodes inside each competing sub-cluster, and decides which sub-cluster should survive based on the following two factors:

  1. The sub-cluster with the largest number of nodes wins (Sub-cluster with largest number of Nodes)
  2. If the sub-clusters are of equal size, the sub-cluster containing the lowest node number wins (Sub-cluster with lowest node number); in a 2-node RAC environment, for example, node 1 always wins.

The decision is then enforced with IO fencing (remote power reset) based on the STONITH algorithm.

Addendum:

The STONITH algorithm is a commonly used I/O fencing algorithm and the remote node shutdown interface that RAC requires. The idea is extremely simple: when software running on one node wants to make sure the other nodes in the cluster cannot use a certain resource, it pulls the other nodes' power plugs — a simple, reliable and somewhat brutal algorithm. The strength of STONITH is that it needs no special hardware and does not limit the scalability of the cluster.

The Process Monitor module of Oracle Clusterware is responsible for implementing IO fencing, guaranteeing that no corruption is caused by nodes/instances working out of step. The concrete work of Process Monitor is done by the hangcheck timer or by oprocd. On Linux there was no oprocd before 10.2.0.4 (the other Unix platforms had it from 10.2.0.1), so the hangcheck timer software had to be installed before installing RAC to guarantee IO fencing; from 10.2.0.4 Linux has oprocd as well — see the article <Know about RAC Clusterware Process OPROCD>. These fencing processes are typically locked in memory, run in real time, sleep for a fixed interval, and run as root; if such a process wakes up and finds it is already too late, it forces a reboot, and if the process itself fails the node reboots too. oprocd is therefore a very important process in a RAC environment — do not try to kill it by hand.
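
On a Linux 10.2.0.4 node you can quickly verify which fencing component is present — a simple sketch, assuming the standard process and kernel module names:

[root@vrh1 ~]# ps -ef | grep '[o]procd'

/* oprocd runs as root and is locked in memory */

[root@vrh1 ~]# lsmod | grep hangcheck

/* the hangcheck-timer kernel module, used before 10.2.0.4 */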


Once the split-brain check completes, the eviction phase begins. The evicted nodes receive the eviction messages addressed to them (if the network is still usable); if the message cannot be delivered that way, the eviction notice is written into the "kill block" on the voting disk instead. The reconfig manager then waits for the evicted nodes to signal that they have received the eviction notice, either through network communication or through status information on the voting disk.

As you can see, during its Brain Split check Oracle CSS does its best to keep the largest sub-cluster alive so that the RAC system offers the highest availability — it does not, as that senior engineer claimed, decide which nodes survive by having them race to seize the voting disks during Cluster Reconfiguration.

Below are two example scenarios from a three-node RAC environment:

1. Node 1's network fails, and nodes 2 and 3 form a sub-cluster; nodes 2 and 3 evict node 1 through the voting disk:

Below is node 1's ocssd.log:
[    CSSD]2011-04-23 17:11:42.943 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 50 3.280308e-268artbeat fatal, eviction in 29.610 seconds
[    CSSD]2011-04-23 17:11:42.943 [3042950032] >TRACE:   clssnmPollingThread: node vrh2 (2) is impending reconfig, flag 1037, misstime 30390
[    CSSD]2011-04-23 17:11:42.943 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 50 3.280308e-268artbeat fatal, eviction in 29.150 seconds

Node 1 starts the misscount countdown against nodes 2 and 3

[    CSSD]2011-04-23 17:11:42.943 [3042950032] >TRACE:   clssnmPollingThread: node vrh3 (3) is impending reconfig, flag 1037, misstime 30850
[    CSSD]2011-04-23 17:11:42.943 [3042950032] >TRACE:   clssnmPollingThread: diskTimeout set to (57000)ms impending reconfig status(1)
[    CSSD]2011-04-23 17:11:44.368 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 50 3.280308e-268artbeat fatal, eviction in 28.610 seconds
[    CSSD]2011-04-23 17:12:04.778 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 75 3.280308e-268artbeat fatal, eviction in 14.580 seconds
[    CSSD]2011-04-23 17:12:04.779 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 75 3.280308e-268artbeat fatal, eviction in 14.120 seconds
[    CSSD]2011-04-23 17:12:06.207 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 75 3.280308e-268artbeat fatal, eviction in 13.580 seconds
[    CSSD]2011-04-23 17:12:17.719 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 3.280308e-268artbeat fatal, eviction in 5.560 seconds
[    CSSD]2011-04-23 17:12:17.719 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 3.280308e-268artbeat fatal, eviction in 5.100 seconds
[    CSSD]2011-04-23 17:12:19.165 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 3.280308e-268artbeat fatal, eviction in 4.560 seconds
[    CSSD]2011-04-23 17:12:19.165 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 3.280308e-268artbeat fatal, eviction in 4.100 seconds
[    CSSD]2011-04-23 17:12:20.642 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 3.280308e-268artbeat fatal, eviction in 3.560 seconds
[    CSSD]2011-04-23 17:12:20.642 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 3.280308e-268artbeat fatal, eviction in 3.100 seconds
[    CSSD]2011-04-23 17:12:22.139 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 3.280308e-268artbeat fatal, eviction in 2.560 seconds
[    CSSD]2011-04-23 17:12:22.139 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 3.280308e-268artbeat fatal, eviction in 2.100 seconds
[    CSSD]2011-04-23 17:12:23.588 [3042950032] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 3.280308e-268artbeat fatal, eviction in 1.550 seconds
[    CSSD]2011-04-23 17:12:23.588 [3042950032] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 3.280308e-268artbeat fatal, eviction in 1.090 seconds

Node 2's ocssd.log:

[    CSSD]2011-04-23 17:11:53.054 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 50 8.910601e-269artbeat fatal, eviction in 29.800 seconds
[    CSSD]2011-04-23 17:11:53.054 [3053439888] >TRACE:   clssnmPollingThread: node vrh1 (1) is impending reconfig, flag 1037, misstime 30200
[    CSSD]2011-04-23 17:11:53.054 [3053439888] >TRACE:   clssnmPollingThread: diskTimeout set to (57000)ms impending reconfig status(1)
[    CSSD]2011-04-23 17:11:54.516 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 50 8.910601e-269artbeat fatal, eviction in 28.790 seconds
[    CSSD]2011-04-23 17:12:14.826 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 75 8.910601e-269artbeat fatal, eviction in 14.800 seconds
[    CSSD]2011-04-23 17:12:16.265 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 75 8.910601e-269artbeat fatal, eviction in 13.800 seconds
[    CSSD]2011-04-23 17:12:27.755 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 8.910601e-269artbeat fatal, eviction in 5.800 seconds
[    CSSD]2011-04-23 17:12:29.197 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 8.910601e-269artbeat fatal, eviction in 4.800 seconds
[    CSSD]2011-04-23 17:12:30.658 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 8.910601e-269artbeat fatal, eviction in 3.800 seconds
[    CSSD]2011-04-23 17:12:32.133 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 8.910601e-269artbeat fatal, eviction in 2.800 seconds
[    CSSD]2011-04-23 17:12:33.602 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 8.910601e-269artbeat fatal, eviction in 1.790 seconds
[    CSSD]2011-04-23 17:12:35.126 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 8.910601e-269artbeat fatal, eviction in 0.800 seconds

[    CSSD]2011-04-23 17:12:35.399 [117574544] >TRACE:   clssnmHandleSync: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:12:35.399 [117574544] >TRACE:   clssnmHandleSync: Acknowledging sync: src[3] srcName[vrh3] seq[21] sync[10]

clssnmHandleSync acknowledges the sync message sent by node 3

[    CSSD]2011-04-23 17:12:35.399 [5073104] >USER:    NMEVENT_SUSPEND [00][00][00][0e]

A Node Monitoring SUSPEND event occurs

[    CSSD]2011-04-23 17:12:35.405 [117574544] >TRACE:   clssnmSendVoteInfo: node(3) syncSeqNo(10)

Node 2 sends its vote message (vote mesg) to node 3 via clssnmSendVoteInfo

[    CSSD]2011-04-23 17:12:35.415 [117574544] >TRACE:   clssnmUpdateNodeState: node 0, state (0/0) unique (0/0) prevConuni(0) birth (0/0) (old/new)
[    CSSD]2011-04-23 17:12:35.415 [117574544] >TRACE:   clssnmUpdateNodeState: node 1, state (3/0) unique (1303592601/1303592601) prevConuni(0) birth (9/9) (old/new)
[    CSSD]2011-04-23 17:12:35.415 [117574544] >TRACE:   clssnmDiscHelper: vrh1, node(1) connection failed, con (0xb7e80ae8), probe((nil))
[    CSSD]2011-04-23 17:12:35.415 [117574544] >TRACE:   clssnmDeactivateNode: node 1 (vrh1) left cluster

Node 1 is confirmed to have left the cluster

[    CSSD]2011-04-23 17:12:35.415 [117574544] >TRACE:   clssnmUpdateNodeState: node 2, state (3/3) unique (1303591210/1303591210) prevConuni(0) birth (2/2) (old/new)
[    CSSD]2011-04-23 17:12:35.415 [117574544] >TRACE:   clssnmUpdateNodeState: node 3, state (3/3) unique (1303591326/1303591326) prevConuni(0) birth (3/3) (old/new)
[    CSSD]2011-04-23 17:12:35.415 [117574544] >USER:    clssnmHandleUpdate: SYNC(10) from node(3) completed
[    CSSD]2011-04-23 17:12:35.416 [117574544] >USER:    clssnmHandleUpdate: NODE 2 (vrh2) IS ACTIVE MEMBER OF CLUSTER
[    CSSD]2011-04-23 17:12:35.416 [117574544] >USER:    clssnmHandleUpdate: NODE 3 (vrh3) IS ACTIVE MEMBER OF CLUSTER
[    CSSD]2011-04-23 17:12:35.416 [117574544] >TRACE:   clssnmHandleUpdate: diskTimeout set to (200000)ms
[    CSSD]2011-04-23 17:12:35.416 [3021970320] >TRACE:   clssgmReconfigThread:  started for reconfig (10)
[    CSSD]2011-04-23 17:12:35.416 [3021970320] >USER:    NMEVENT_RECONFIG [00][00][00][0c]
[    CSSD]2011-04-23 17:12:35.417 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock crs_version type 2
[    CSSD]2011-04-23 17:12:35.417 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(0) grock(crs_version) birth(9/9)
[    CSSD]2011-04-23 17:12:35.418 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_FAILOVER type 3
[    CSSD]2011-04-23 17:12:35.418 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock EVMDMAIN type 2
[    CSSD]2011-04-23 17:12:35.418 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(EVMDMAIN) birth(9/9)
[    CSSD]2011-04-23 17:12:35.418 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock CRSDMAIN type 2
[    CSSD]2011-04-23 17:12:35.418 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(CRSDMAIN) birth(9/9)
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh1 type 3
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh2 type 3
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh3 type 3
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock ocr_crs type 2
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(ocr_crs) birth(9/9)
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock #CSS_CLSSOMON type 2
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(#CSS_CLSSOMON) birth(9/9)
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmEstablishConnections: 2 nodes in cluster incarn 10
[    CSSD]2011-04-23 17:12:35.419 [3063929744] >TRACE:   clssgmPeerDeactivate: node 1 (vrh1), death 10, state 0x80000000 connstate 0xa
[    CSSD]2011-04-23 17:12:35.419 [3063929744] >TRACE:   clssgmPeerListener: connects done (2/2)
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmEstablishMasterNode: MASTER for 10 is node(2) birth(2)
[    CSSD]2011-04-23 17:12:35.419 [3021970320] >TRACE:   clssgmMasterCMSync: Synchronizing group/lock status
[    CSSD]2011-04-23 17:12:35.428 [3021970320] >TRACE:   clssgmMasterSendDBDone: group/lock status synchronization complete
[    CSSD]CLSS-3000: reconfiguration successful, incarnation 10 with 2 nodes

[    CSSD]CLSS-3001: local node number 2, master node number 2

Reconfiguration completes

[    CSSD]2011-04-23 17:12:35.440 [3021970320] >TRACE:   clssgmReconfigThread:  completed for reconfig(10), with status(1)

Below is node 3's ocssd.log:

[    CSSD]2011-04-23 17:12:36.303 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 50 1.867300e-268artbeat fatal, eviction in 29.220 seconds
[    CSSD]2011-04-23 17:12:36.303 [3053439888] >TRACE:   clssnmPollingThread: node vrh1 (1) is impending reconfig, flag 1037, misstime 30780
[    CSSD]2011-04-23 17:12:36.303 [3053439888] >TRACE:   clssnmPollingThread: diskTimeout set to (57000)ms impending reconfig status(1)
[    CSSD]2011-04-23 17:12:57.889 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 75 1.867300e-268artbeat fatal, eviction in 14.220 seconds
[    CSSD]2011-04-23 17:13:10.674 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 1.867300e-268artbeat fatal, eviction in 5.220 seconds
[    CSSD]2011-04-23 17:13:12.115 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 1.867300e-268artbeat fatal, eviction in 4.220 seconds
[    CSSD]2011-04-23 17:13:13.597 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 1.867300e-268artbeat fatal, eviction in 3.210 seconds
[    CSSD]2011-04-23 17:13:15.024 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 1.867300e-268artbeat fatal, eviction in 2.220 seconds
[    CSSD]2011-04-23 17:13:16.504 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 1.867300e-268artbeat fatal, eviction in 1.220 seconds
[    CSSD]2011-04-23 17:13:17.987 [3053439888] >WARNING: clssnmPollingThread: node vrh1 (1) at 90 1.867300e-268artbeat fatal, eviction in 0.220 seconds
[    CSSD]2011-04-23 17:13:18.325 [3053439888] >TRACE:   clssnmPollingThread: Eviction started for node vrh1 (1), flags 0x040d, state 3, wt4c 0
[    CSSD]2011-04-23 17:13:18.326 [3032460176] >TRACE:   clssnmDoSyncUpdate: Initiating sync 10
[    CSSD]2011-04-23 17:13:18.326 [3032460176] >TRACE:   clssnmDoSyncUpdate: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:13:18.326 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (11)
[    CSSD]2011-04-23 17:13:18.326 [3032460176] >TRACE:   clssnmSetupAckWait: node(2) is ALIVE
[    CSSD]2011-04-23 17:13:18.326 [3032460176] >TRACE:   clssnmSetupAckWait: node(3) is ALIVE
[    CSSD]2011-04-23 17:13:18.327 [3032460176] >TRACE:   clssnmSendSync: syncSeqNo(10)
[    CSSD]2011-04-23 17:13:18.329 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(11), ackCount(2)
[    CSSD]2011-04-23 17:13:18.329 [89033616] >TRACE:   clssnmHandleSync: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:13:18.329 [89033616] >TRACE:   clssnmHandleSync: Acknowledging sync: src[3] srcName[vrh3] seq[21] sync[10]
[    CSSD]2011-04-23 17:13:18.330 [8136912] >USER:    NMEVENT_SUSPEND [00][00][00][0e]
[    CSSD]2011-04-23 17:13:18.332 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(11)
[    CSSD]2011-04-23 17:13:18.332 [3032460176] >TRACE:   clssnmDoSyncUpdate: Terminating node 1, vrh1, misstime(60010) state(5)
[    CSSD]2011-04-23 17:13:18.332 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (13)
[    CSSD]2011-04-23 17:13:18.332 [3032460176] >TRACE:   clssnmSetupAckWait: node(2) is ACTIVE
[    CSSD]2011-04-23 17:13:18.332 [3032460176] >TRACE:   clssnmSetupAckWait: node(3) is ACTIVE
[    CSSD]2011-04-23 17:13:18.334 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(13), ackCount(2)
[    CSSD]2011-04-23 17:13:18.335 [89033616] >TRACE:   clssnmSendVoteInfo: node(3) syncSeqNo(10)
[    CSSD]2011-04-23 17:13:18.337 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(13)

The above completes the vote message exchange between nodes 2 and 3; these messages contain the node identifier, the GM peer-to-peer listening endpoint, and the view of cluster membership.

[    CSSD]2011-04-23 17:13:18.337 [3032460176] >TRACE:   clssnmCheckDskInfo: Checking disk info...

Node 3 starts checking the information on the voting disk

[    CSSD]2011-04-23 17:13:18.337 [3032460176] >TRACE: clssnmCheckDskInfo: node 1, vrh1, state 5 with leader 1 has smaller cluster size 1; my cluster size 2 with leader 2

Another sub-cluster is discovered: it contains node 1, node 1 is that sub-cluster's leader, and it is the smaller sub-cluster; nodes 3 and 2 form the largest sub-cluster, with node 2 as its leader

[    CSSD]2011-04-23 17:13:18.337 [3032460176] >TRACE:   clssnmEvict: Start
[    CSSD]2011-04-23 17:13:18.337 [3032460176] >TRACE: clssnmEvict: Evicting node 1, vrh1, birth 9, death 10, impendingrcfg 1, stateflags 0x40d

The eviction of node 1 is initiated

[    CSSD]2011-04-23 17:13:18.337 [3032460176] >TRACE:   clssnmSendShutdown: req to node 1, kill time 443294
[    CSSD]2011-04-23 17:13:18.339 [3032460176] >TRACE:   clssnmDiscHelper: vrh1, node(1) connection failed, con (0xb7eaf220), probe((nil))
[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmWaitOnEvictions: Start
[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmWaitOnEvictions: node 1, vrh1, undead 1
[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmCheckKillStatus: Node 1, vrh1, down, LATS(443144),timeout(150)

clssnmCheckKillStatus checks whether node 1 is down

[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (15)
[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmSetupAckWait: node(2) is ACTIVE
[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmSetupAckWait: node(3) is ACTIVE
[    CSSD]2011-04-23 17:13:18.340 [3032460176] >TRACE:   clssnmSendUpdate: syncSeqNo(10)
[    CSSD]2011-04-23 17:13:18.341 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(15), ackCount(2)
[    CSSD]2011-04-23 17:13:18.341 [89033616] >TRACE:   clssnmUpdateNodeState: node 0, state (0/0) unique (0/0) prevConuni(0) birth (0/0) (old/new)
[    CSSD]2011-04-23 17:13:18.341 [89033616] >TRACE:   clssnmUpdateNodeState: node 1, state (5/0) unique (1303592601/1303592601) prevConuni(1303592601) birth (9/9) (old/new)
[    CSSD]2011-04-23 17:13:18.341 [89033616] >TRACE:   clssnmDeactivateNode: node 1 (vrh1) left cluster

[    CSSD]2011-04-23 17:13:18.341 [89033616] >TRACE:   clssnmUpdateNodeState: node 2, state (3/3) unique (1303591210/1303591210) prevConuni(0) birth (2/2) (old/new)
[    CSSD]2011-04-23 17:13:18.341 [89033616] >TRACE:   clssnmUpdateNodeState: node 3, state (3/3) unique (1303591326/1303591326) prevConuni(0) birth (3/3) (old/new)
[    CSSD]2011-04-23 17:13:18.342 [89033616] >USER:    clssnmHandleUpdate: SYNC(10) from node(3) completed
[    CSSD]2011-04-23 17:13:18.342 [89033616] >USER:    clssnmHandleUpdate: NODE 2 (vrh2) IS ACTIVE MEMBER OF CLUSTER
[    CSSD]2011-04-23 17:13:18.342 [89033616] >USER:    clssnmHandleUpdate: NODE 3 (vrh3) IS ACTIVE MEMBER OF CLUSTER
[    CSSD]2011-04-23 17:13:18.342 [89033616] >TRACE:   clssnmHandleUpdate: diskTimeout set to (200000)ms
[    CSSD]2011-04-23 17:13:18.347 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(15)
[    CSSD]2011-04-23 17:13:18.348 [3032460176] >TRACE:   clssnmDoSyncUpdate: Sync 10 complete!
[    CSSD]2011-04-23 17:13:18.350 [3021970320] >TRACE:   clssgmReconfigThread:  started for reconfig (10)
[    CSSD]2011-04-23 17:13:18.350 [3021970320] >USER:    NMEVENT_RECONFIG [00][00][00][0c]
[    CSSD]2011-04-23 17:13:18.351 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock crs_version type 2
[    CSSD]2011-04-23 17:13:18.352 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(0) grock(crs_version) birth(9/9)
[    CSSD]2011-04-23 17:13:18.353 [3063929744] >TRACE:   clssgmDispatchCMXMSG(): got message type(7) src(2) incarn(10) during incarn(9/9)
[    CSSD]2011-04-23 17:13:18.354 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_FAILOVER type 3
[    CSSD]2011-04-23 17:13:18.355 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock EVMDMAIN type 2
[    CSSD]2011-04-23 17:13:18.355 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(EVMDMAIN) birth(9/9)
[    CSSD]2011-04-23 17:13:18.355 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock CRSDMAIN type 2
[    CSSD]2011-04-23 17:13:18.355 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(CRSDMAIN) birth(9/9)
[    CSSD]2011-04-23 17:13:18.355 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh1 type 3
[    CSSD]2011-04-23 17:13:18.355 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh2 type 3
[    CSSD]2011-04-23 17:13:18.356 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh3 type 3
[    CSSD]2011-04-23 17:13:18.356 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock ocr_crs type 2
[    CSSD]2011-04-23 17:13:18.356 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(ocr_crs) birth(9/9)
[    CSSD]2011-04-23 17:13:18.356 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock #CSS_CLSSOMON type 2
[    CSSD]2011-04-23 17:13:18.356 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(1) grock(#CSS_CLSSOMON) birth(9/9)
[    CSSD]2011-04-23 17:13:18.357 [3021970320] >TRACE:   clssgmEstablishConnections: 2 nodes in cluster incarn 10
[    CSSD]2011-04-23 17:13:18.366 [3063929744] >TRACE:   clssgmPeerDeactivate: node 1 (vrh1), death 10, state 0x80000000 connstate 0xa
[    CSSD]2011-04-23 17:13:18.367 [3063929744] >TRACE:   clssgmHandleDBDone(): src/dest (2/65535) size(68) incarn 10
[    CSSD]2011-04-23 17:13:18.367 [3063929744] >TRACE:   clssgmPeerListener: connects done (2/2)
[    CSSD]2011-04-23 17:13:18.369 [3021970320] >TRACE:   clssgmEstablishMasterNode: MASTER for 10 is node(2) birth(2)

Update phase

[    CSSD]CLSS-3000: reconfiguration successful, incarnation 10 with 2 nodes

[    CSSD]CLSS-3001: local node number 3, master node number 2

[    CSSD]2011-04-23 17:13:18.372 [3021970320] >TRACE:   clssgmReconfigThread:  completed for reconfig(10), with status(1)

2. In the other scenario node 1 has not joined the cluster and node 2's network fails; because node 2 has the smaller node number, it evicts node 3 through the voting disk.

Below is node 2's ocssd.log:

[    CSSD]2011-04-23 17:41:48.643 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 50 8.910601e-269artbeat fatal, eviction in 29.890 seconds
[    CSSD]2011-04-23 17:41:48.643 [3053439888] >TRACE:   clssnmPollingThread: node vrh3 (3) is impending reconfig, flag 1037, misstime 30110
[    CSSD]2011-04-23 17:41:48.643 [3053439888] >TRACE:   clssnmPollingThread: diskTimeout set to (57000)ms impending reconfig status(1)
[    CSSD]2011-04-23 17:41:50.132 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 50 8.910601e-269artbeat fatal, eviction in 28.890 seconds
[    CSSD]2011-04-23 17:42:10.533 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 75 8.910601e-269artbeat fatal, eviction in 14.860 seconds
[    CSSD]2011-04-23 17:42:11.962 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 75 8.910601e-269artbeat fatal, eviction in 13.860 seconds
[    CSSD]2011-04-23 17:42:23.523 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 8.910601e-269artbeat fatal, eviction in 5.840 seconds
[    CSSD]2011-04-23 17:42:24.989 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 8.910601e-269artbeat fatal, eviction in 4.840 seconds
[    CSSD]2011-04-23 17:42:26.423 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 8.910601e-269artbeat fatal, eviction in 3.840 seconds
[    CSSD]2011-04-23 17:42:27.890 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 8.910601e-269artbeat fatal, eviction in 2.840 seconds
[    CSSD]2011-04-23 17:42:29.382 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 8.910601e-269artbeat fatal, eviction in 1.840 seconds
[    CSSD]2011-04-23 17:42:30.832 [3053439888] >WARNING: clssnmPollingThread: node vrh3 (3) at 90 8.910601e-269artbeat fatal, eviction in 0.830 seconds
[    CSSD]2011-04-23 17:42:32.020 [3053439888] >TRACE:   clssnmPollingThread: Eviction started for node vrh3 (3), flags 0x040d, state 3, wt4c 0
[    CSSD]2011-04-23 17:42:32.020 [3032460176] >TRACE:   clssnmDoSyncUpdate: Initiating sync 13
[    CSSD]2011-04-23 17:42:32.020 [3032460176] >TRACE:   clssnmDoSyncUpdate: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:42:32.020 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (11)
[    CSSD]2011-04-23 17:42:32.020 [3032460176] >TRACE:   clssnmSetupAckWait: node(2) is ALIVE
[    CSSD]2011-04-23 17:42:32.020 [3032460176] >TRACE:   clssnmSendSync: syncSeqNo(13)
[    CSSD]2011-04-23 17:42:32.021 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(11), ackCount(1)
[    CSSD]2011-04-23 17:42:32.021 [117574544] >TRACE:   clssnmHandleSync: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:42:32.021 [117574544] >TRACE:   clssnmHandleSync: Acknowledging sync: src[2] srcName[vrh2] seq[13] sync[13]
[    CSSD]2011-04-23 17:42:32.021 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(11)
[    CSSD]2011-04-23 17:42:32.021 [3032460176] >TRACE:   clssnmDoSyncUpdate: Terminating node 3, vrh3, misstime(60000) state(5)
[    CSSD]2011-04-23 17:42:32.021 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (13)
[    CSSD]2011-04-23 17:42:32.021 [3032460176] >TRACE:   clssnmSetupAckWait: node(2) is ACTIVE
[    CSSD]2011-04-23 17:42:32.021 [5073104] >USER:    NMEVENT_SUSPEND [00][00][00][0c]
[    CSSD]2011-04-23 17:42:32.021 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(13), ackCount(1)
[    CSSD]2011-04-23 17:42:32.022 [117574544] >TRACE:   clssnmSendVoteInfo: node(2) syncSeqNo(13)
[    CSSD]2011-04-23 17:42:32.022 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(13)
[    CSSD]2011-04-23 17:42:32.022 [3032460176] >TRACE:   clssnmCheckDskInfo: Checking disk info...
[    CSSD]2011-04-23 17:42:32.022 [3032460176] >TRACE: clssnmCheckDskInfo: node 3, vrh3, state 5 with leader 3 has smaller cluster size 1; my cluster size 1 with leader 2

After checking the voting disk, node 2 finds that the sub-cluster led by node 3 is the smaller "sub-cluster" (node 3's node number is higher than node 2's), while node 2's own sub-cluster is the winning one.

[    CSSD]2011-04-23 17:42:32.022 [3032460176] >TRACE: clssnmEvict: Start
[    CSSD]2011-04-23 17:42:32.022 [3032460176] >TRACE: clssnmEvict: Evicting node 3, vrh3, birth 3, death 13, impendingrcfg 1, stateflags 0x40d
[    CSSD]2011-04-23 17:42:32.022 [3032460176] >TRACE: clssnmSendShutdown: req to node 3, kill time 1643084

Node 2 initiates the eviction of node 3 and the shutdown request

[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmDiscHelper: vrh3, node(3) connection failed, con (0xb7e79bb0), probe((nil))
[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmWaitOnEvictions: Start
[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmWaitOnEvictions: node 3, vrh3, undead 1
[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmCheckKillStatus: Node 3, vrh3, down, LATS(1642874),timeout(210)
[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (15)
[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmSetupAckWait: node(2) is ACTIVE
[    CSSD]2011-04-23 17:42:32.023 [3032460176] >TRACE:   clssnmSendUpdate: syncSeqNo(13)
[    CSSD]2011-04-23 17:42:32.024 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(15), ackCount(1)
[    CSSD]2011-04-23 17:42:32.024 [117574544] >TRACE:   clssnmUpdateNodeState: node 0, state (0/0) unique (0/0) prevConuni(0) birth (0/0) (old/new)
[    CSSD]2011-04-23 17:42:32.024 [117574544] >TRACE:   clssnmUpdateNodeState: node 1, state (0/0) unique (0/0) prevConuni(0) birth (0/0) (old/new)
[    CSSD]2011-04-23 17:42:32.024 [117574544] >TRACE:   clssnmUpdateNodeState: node 2, state (3/3) unique (1303591210/1303591210) prevConuni(0) birth (2/2) (old/new)
[    CSSD]2011-04-23 17:42:32.024 [117574544] >TRACE:   clssnmUpdateNodeState: node 3, state (5/0) unique (1303591326/1303591326) prevConuni(1303591326) birth (3/3) (old/new)
[    CSSD]2011-04-23 17:42:32.024 [117574544] >TRACE:   clssnmDeactivateNode: node 3 (vrh3) left cluster

[    CSSD]2011-04-23 17:42:32.024 [117574544] >USER:    clssnmHandleUpdate: SYNC(13) from node(2) completed
[    CSSD]2011-04-23 17:42:32.024 [117574544] >USER:    clssnmHandleUpdate: NODE 2 (vrh2) IS ACTIVE MEMBER OF CLUSTER
[    CSSD]2011-04-23 17:42:32.024 [117574544] >TRACE:   clssnmHandleUpdate: diskTimeout set to (200000)ms
[    CSSD]2011-04-23 17:42:32.024 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(15)
[    CSSD]2011-04-23 17:42:32.024 [3032460176] >TRACE:   clssnmDoSyncUpdate: Sync 13 complete!
[    CSSD]2011-04-23 17:42:32.024 [3021970320] >TRACE:   clssgmReconfigThread:  started for reconfig (13)
[    CSSD]2011-04-23 17:42:32.024 [3021970320] >USER:    NMEVENT_RECONFIG [00][00][00][04]
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock crs_version type 2
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(2) grock(crs_version) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_FAILOVER type 3
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(0) grock(_ORA_CRS_FAILOVER) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock EVMDMAIN type 2
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(3) grock(EVMDMAIN) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock CRSDMAIN type 2
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(3) grock(CRSDMAIN) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh1 type 3
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(0) grock(_ORA_CRS_MEMBER_vrh1) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock _ORA_CRS_MEMBER_vrh3 type 3
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(0) grock(_ORA_CRS_MEMBER_vrh3) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock ocr_crs type 2
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(3) grock(ocr_crs) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupGrocks: cleaning up grock #CSS_CLSSOMON type 2
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmCleanupOrphanMembers: cleaning up remote mbr(3) grock(#CSS_CLSSOMON) birth(3/3)
[    CSSD]2011-04-23 17:42:32.025 [3021970320] >TRACE:   clssgmEstablishConnections: 1 nodes in cluster incarn 13
[    CSSD]2011-04-23 17:42:32.026 [3063929744] >TRACE:   clssgmPeerDeactivate: node 3 (vrh3), death 13, state 0x0 connstate 0xf
[    CSSD]2011-04-23 17:42:32.026 [3063929744] >TRACE:   clssgmPeerListener: connects done (1/1)
[    CSSD]2011-04-23 17:42:32.026 [3021970320] >TRACE:   clssgmEstablishMasterNode: MASTER for 13 is node(2) birth(2)
[    CSSD]2011-04-23 17:42:32.026 [3021970320] >TRACE:   clssgmMasterCMSync: Synchronizing group/lock status
[    CSSD]2011-04-23 17:42:32.026 [3021970320] >TRACE:   clssgmMasterSendDBDone: group/lock status synchronization complete
[    CSSD]CLSS-3000: reconfiguration successful, incarnation 13 with 1 nodes

[    CSSD]CLSS-3001: local node number 2, master node number 2

完成reconfiguration

[    CSSD]2011-04-23 17:42:32.027 [3021970320] >TRACE:   clssgmReconfigThread:  completed for reconfig(13), with status(1)

Below is node 3's ocssd.log:

[    CSSD]2011-04-23 17:42:33.204 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 50 1.867300e-268artbeat fatal, eviction in 29.360 seconds
[    CSSD]2011-04-23 17:42:33.204 [3053439888] >TRACE:   clssnmPollingThread: node vrh2 (2) is impending reconfig, flag 1039, misstime 30640
[    CSSD]2011-04-23 17:42:33.204 [3053439888] >TRACE:   clssnmPollingThread: diskTimeout set to (57000)ms impending reconfig status(1)
[    CSSD]2011-04-23 17:42:55.168 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 75 1.867300e-268artbeat fatal, eviction in 14.330 seconds
[    CSSD]2011-04-23 17:43:08.182 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 1.867300e-268artbeat fatal, eviction in 5.310 seconds
[    CSSD]2011-04-23 17:43:09.661 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 1.867300e-268artbeat fatal, eviction in 4.300 seconds
[    CSSD]2011-04-23 17:43:11.144 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 1.867300e-268artbeat fatal, eviction in 3.300 seconds
[    CSSD]2011-04-23 17:43:12.634 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 1.867300e-268artbeat fatal, eviction in 2.300 seconds
[    CSSD]2011-04-23 17:43:14.053 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 1.867300e-268artbeat fatal, eviction in 1.300 seconds
[    CSSD]2011-04-23 17:43:15.467 [3053439888] >WARNING: clssnmPollingThread: node vrh2 (2) at 90 1.867300e-268artbeat fatal, eviction in 0.300 seconds
[    CSSD]2011-04-23 17:43:15.911 [3053439888] >TRACE:   clssnmPollingThread: Eviction started for node vrh2 (2), flags 0x040f, state 3, wt4c 0
[    CSSD]2011-04-23 17:43:15.911 [3032460176] >TRACE:   clssnmDoSyncUpdate: Initiating sync 13
[    CSSD]2011-04-23 17:43:15.911 [3032460176] >TRACE:   clssnmDoSyncUpdate: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:43:15.911 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (11)
[    CSSD]2011-04-23 17:43:15.911 [3032460176] >TRACE:   clssnmSetupAckWait: node(3) is ALIVE
[    CSSD]2011-04-23 17:43:15.911 [3032460176] >TRACE:   clssnmSendSync: syncSeqNo(13)
[    CSSD]2011-04-23 17:43:15.911 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(11), ackCount(1)
[    CSSD]2011-04-23 17:43:15.912 [89033616] >TRACE:   clssnmHandleSync: diskTimeout set to (57000)ms
[    CSSD]2011-04-23 17:43:15.912 [89033616] >TRACE:   clssnmHandleSync: Acknowledging sync: src[3] srcName[vrh3] seq[29] sync[13]
[    CSSD]2011-04-23 17:43:15.912 [8136912] >USER:    NMEVENT_SUSPEND [00][00][00][0c]
[    CSSD]2011-04-23 17:43:15.912 [3032460176] >TRACE:   clssnmWaitForAcks: done, msg type(11)
[    CSSD]2011-04-23 17:43:15.912 [3032460176] >TRACE:   clssnmDoSyncUpdate: Terminating node 2, vrh2, misstime(60010) state(5)
[    CSSD]2011-04-23 17:43:15.912 [3032460176] >TRACE:   clssnmSetupAckWait: Ack message type (13)
[    CSSD]2011-04-23 17:43:15.912 [3032460176] >TRACE:   clssnmSetupAckWait: node(3) is ACTIVE
[    CSSD]2011-04-23 17:43:15.913 [89033616] >TRACE:   clssnmSendVoteInfo: node(3) syncSeqNo(13)
[    CSSD]2011-04-23 17:43:15.912 [3032460176] >TRACE:   clssnmWaitForAcks: Ack message type(13), ackCount(1)
[    CSSD]2011-04-23 17:43:15.913 [3032460176] >TRACE:   clssnmCheckDskInfo: Checking disk info...
[    CSSD]2011-04-23 17:43:15.913 [3032460176] >ERROR: clssnmCheckDskInfo: Aborting local node to avoid splitbrain.
[    CSSD]2011-04-23 17:43:15.913 [3032460176] >ERROR: : my node(3), Leader(3), Size(1) VS Node(2), Leader(2), Size(1)

After reading the voting disk, node 3 finds the kill block and aborts itself to avoid a split brain!

[    CSSD]2011-04-23 17:43:15.913 [3032460176] >ERROR:   ###################################
[    CSSD]2011-04-23 17:43:15.913 [3032460176] >ERROR:   clssscExit: CSSD aborting
[    CSSD]2011-04-23 17:43:15.913 [3032460176] >ERROR:   ###################################
[    CSSD]--- DUMP GROCK STATE DB ---
[    CSSD]----------
[    CSSD]  type 2, Id 4, Name = (crs_version)
[    CSSD]  flags: 0x1000
[    CSSD]  grant: count=0, type 0, wait 0
[    CSSD]  Member Count =2, master 2
[    CSSD]   . . . . .
[    CSSD]     memberNo =2, seq 2
[    CSSD]     flags = 0x0, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]   . . . . .
[    CSSD]     memberNo =1, seq 12
[    CSSD]     flags = 0x1000, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 2, nodeBirth 2
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 3, Id 11, Name = (_ORA_CRS_FAILOVER)
[    CSSD]  flags: 0x0
[    CSSD]  grant: count=1, type 3, wait 1
[    CSSD]  Member Count =1, master -3
[    CSSD]   . . . . .
[    CSSD]     memberNo =0, seq 0
[    CSSD]     flags = 0x12, granted 1
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 2, Id 2, Name = (EVMDMAIN)
[    CSSD]  flags: 0x1000
[    CSSD]  grant: count=0, type 0, wait 0
[    CSSD]  Member Count =2, master 2
[    CSSD]   . . . . .
[    CSSD]     memberNo =2, seq 1
[    CSSD]     flags = 0x0, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 2, nodeBirth 2
[    CSSD]     privateDataSize = 508
[    CSSD]     publicDataSize = 504
[    CSSD]   . . . . .
[    CSSD]     memberNo =3, seq 2
[    CSSD]     flags = 0x0, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 508
[    CSSD]     publicDataSize = 504
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 2, Id 5, Name = (CRSDMAIN)
[    CSSD]  flags: 0x1000
[    CSSD]  grant: count=0, type 0, wait 0
[    CSSD]  Member Count =1, master 3
[    CSSD]   . . . . .
[    CSSD]     memberNo =3, seq 2
[    CSSD]     flags = 0x0, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 128
[    CSSD]     publicDataSize = 128
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 3, Id 12, Name = (_ORA_CRS_MEMBER_vrh1)
[    CSSD]  flags: 0x0
[    CSSD]  grant: count=1, type 3, wait 1
[    CSSD]  Member Count =1, master -3
[    CSSD]   . . . . .
[    CSSD]     memberNo =0, seq 0
[    CSSD]     flags = 0x12, granted 1
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 3, Id 12, Name = (_ORA_CRS_MEMBER_vrh3)
[    CSSD]  flags: 0x0
[    CSSD]  grant: count=1, type 3, wait 1
[    CSSD]  Member Count =1, master -3
[    CSSD]   . . . . .
[    CSSD]     memberNo =0, seq 0
[    CSSD]     flags = 0x12, granted 1
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 2, Id 3, Name = (ocr_crs)
[    CSSD]  flags: 0x1000
[    CSSD]  grant: count=0, type 0, wait 0
[    CSSD]  Member Count =2, master 3
[    CSSD]   . . . . .
[    CSSD]     memberNo =3, seq 2
[    CSSD]     flags = 0x0, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 32
[    CSSD]   . . . . .
[    CSSD]     memberNo =2, seq 12
[    CSSD]     flags = 0x1000, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 2, nodeBirth 2
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 32
[    CSSD]----------
[    CSSD]----------
[    CSSD]  type 2, Id 1, Name = (#CSS_CLSSOMON)
[    CSSD]  flags: 0x1000
[    CSSD]  grant: count=0, type 0, wait 0
[    CSSD]  Member Count =2, master 2
[    CSSD]   . . . . .
[    CSSD]     memberNo =2, seq 1
[    CSSD]     flags = 0x1000, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 2, nodeBirth 2
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]   . . . . .
[    CSSD]     memberNo =3, seq 2
[    CSSD]     flags = 0x1000, granted 0
[    CSSD]     refCnt = 1
[    CSSD]     nodeNum = 3, nodeBirth 3
[    CSSD]     privateDataSize = 0
[    CSSD]     publicDataSize = 0
[    CSSD]----------
[    CSSD]--- END OF GROCK STATE DUMP ---
[    CSSD]------- Begin Dump -------

Changing the Public and Private Network Interfaces for Oracle Clusterware

For various reasons we may need to change the network interface that an already installed Oracle clusterware uses for its public or private network (How to Change Interconnect/Public Interface IP or Subnet in Oracle Clusterware). These public/private network definitions are stored in the OCR, and we modify them by running the oifcfg command as the root user:

/* Dump the OCR contents with the ocrdump command */

[oracle@vrh1 ~]$ ocrdump ocr

[oracle@vrh1 ~]$ cat ocr

[SYSTEM.css.interfaces]UNDEF 
[SYSTEM.css.interfaces.global]UNDEF 
[SYSTEM.css.interfaces.global.eth0]UNDEF 
[SYSTEM.css.interfaces.global.eth0.131|d32|d1|d0]UNDEF 
[SYSTEM.css.interfaces.global.eth0.131|d32|d1|d0]ORATEXT : public

[SYSTEM.css.interfaces.global.eth1]UNDEF
[SYSTEM.css.interfaces.global.eth1.131|d33|d1|d0]UNDEF :
[SYSTEM.css.interfaces.global.eth1.131|d33|d1|d0]ORATEXT : cluster_interconnect

.............

/* The css misscount parameter and the location of the voting disk are in fact also stored in the OCR */

[SYSTEM.css.misscount]UB4 (10) : 60
[SYSTEM.css.diskfile]ORATEXT : /dev/raw/raw2

1. First, use oifcfg getif to see the current public/interconnect network configuration:

[root@vrh1 ~]# who am i
root     pts/1        2011-04-18 18:23 (192.168.0.1)

[root@vrh1 ~]# cd $ORA_CRS_HOME/bin

[root@vrh1 bin]# oifcfg getif
eth0  131.32.1.0  global  public
eth1  131.33.1.0  global  cluster_interconnect

/* The 131.32.1.0 and 131.33.1.0 subnets are configured here for the public and cluster_interconnect networks respectively */

2. Make sure the necessary changes for the new interface IPs have been made in /etc/hosts, and make sure CRS is currently running (otherwise oifcfg fails with PRIF-10: failed to initialize the cluster registry and the network cannot be configured). Then use the oifcfg setif command to add the new public network interface:

[root@vrh1 bin]# oifcfg delif -global eth0

/* First delete the old public network interface definition */

[root@vrh1 bin]# oifcfg setif -global eth0/192.168.1.0:public

/* Note that specifying the 192.168.1.0 subnet is enough here; you do not enter a full IP address */

/* Since the operations above are -global, running them on a single node is sufficient */
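
As a sanity check before (or after) setif, oifcfg can also list the interfaces and subnets the node can see; the output below is illustrative:

[root@vrh1 bin]# oifcfg iflist
eth0  192.168.1.0
eth1  192.168.0.0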

3. Changing the interconnect (cluster_interconnect) interface works the same way:

[root@vrh1 bin]# oifcfg delif -global eth1

[root@vrh1 bin]# oifcfg setif -global eth1/192.168.0.0:cluster_interconnect

/* As with the public network, specifying the subnet is enough */

4. Verify the interface changes, and check which interconnect the database instance is currently using:

[oracle@vrh1 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.0.0  global  cluster_interconnect

SQL> select * from x$skgxpia;

ADDR           INDX    INST_ID PUB_SKGXPI PICKED_SKGXPIA
-------- ---------- ---------- ---------- -----------------------------------
NAME_SKGXPIA    IP_SKGXPIA
--------------- ----------------
4572EC54          0          1 N          OSD
eth1            131.33.1.107

/* The database instance is still using the old interconnect network */

[root@vrh1 ~]# crsctl stop crs

[root@vrh1 ~]# crsctl start crs

[oracle@vrh1 ~]$ sqlplus / as sysdba

Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> 
SQL> select * from x$skgxpia;

ADDR           INDX    INST_ID PUB_SKGXPI PICKED_SKGXPIA
-------- ---------- ---------- ---------- -----------------------------------
NAME_SKGXPIA    IP_SKGXPIA
--------------- ----------------
4572EC54          0          1 N          OSD
eth1            192.168.0.107

/* CRS (and with it the instance) must be restarted before the instance picks up the updated cluster interconnect interface */

Convert your single instance to 10g RAC manually

With RAC becoming popular in China, requests to convert an existing single-node database into RAC have become common. Such a single-instance-to-RAC conversion is actually not difficult: it can be done directly on the original single-node database by changing a few initialization parameters and running a couple of scripts. Here we walk through the main steps and the points to watch out for, using a concrete example.

The node names and instance names in the example environment map as follows; note that the instance names are PROD21 and PROD22.

Node Name    Instance Name    DB Name
vrh1         PROD21           PROD2
vrh2         PROD22           PROD2

All of the following steps should be performed on node 1.

Step 1. Install clusterware on every node and make sure CRS starts properly; the clusterware installation itself is not covered here.

[oracle@vrh1 u01]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

/* Make sure CRS is running normally on every RAC node */

[oracle@vrh1 u01]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.4.0]

Step 2. Install the Oracle Database Software with the Real Application Clusters option on every node. As a rule the DB version should match the clusterware version, so since CRS in our environment is 10.2.0.4.0, the DB should preferably be 10.2.0.4.0 as well. The Database Software can be installed on shared storage or on a separate filesystem on each node. We also need to adjust a series of environment variables such as ORACLE_HOME and LD_LIBRARY_PATH to point at the new software directories, and set the operating system kernel parameters Oracle recommends for RAC, such as net.core.wmem_max.

Step 3. Start the netca tool from the newly installed DB Software home and configure a new Listener for the Cluster; once the new Listener is configured, shut down the old single-node Listener.

Step 4. Back up the original single-node database and restore it onto the shared storage the RAC will use; if ASM will be the shared storage solution, refer to Note 452758.1 <How to Convert a Single-Instance ASM to Cluster ASM>.

Step 5. Back up the original instance's parameter file and add the following parameters:

*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
<SID1>.undo_tablespace=undotbs (undo tablespace which already exists)
<SID1>.instance_name=<SID1>
<SID1>.instance_number=1
<SID1>.thread=1
<SID1>.local_listener=<LISTENERNAME>_<HOSTNAME1>
<SID2>.instance_name=<SID2>
<SID2>.instance_number=2
<SID2>.local_listener=<LISTENERNAME>_<HOSTNAME2>
<SID2>.thread=2
<SID2>.undo_tablespace=UNDOTBS2
<SID2>.cluster_database = TRUE
<SID2>.cluster_database_instances = 2 

In our example environment:

SQL> create pfile from spfile;
File created.

[oracle@vrh1 u01]$ cd $ORACLE_HOME/dbs

[oracle@vrh1 dbs]$ cp initPROD2.ora initPROD2.ora.bak

/* The above backs up the pfile-style parameter file */

*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_management=AUTO
PROD21.undo_tablespace=undotbs
PROD21.instance_name=PROD21
PROD21.instance_number=1
PROD21.thread=1
PROD21.local_listener=LISTENER

PROD22.instance_name=PROD22
PROD22.instance_number=2
PROD22.local_listener=LISTENER
PROD22.thread=2

PROD22.undo_tablespace=undotbs2
PROD22.cluster_database= TRUE
PROD22.cluster_database_instances= 2

/* These are the RAC-related initialization parameters we added to the parameter file in our example environment */

As explained before, <SID1> corresponds to <DB_NAME>1 and <SID2> to <DB_NAME>2; since DB_NAME here is PROD2, the two instance names are PROD21 and PROD22 — hopefully that does not cause any confusion.

Step 6. Modify the "control_files" parameter in the parameter file so that the control files point to their new paths on the shared storage.

Step 7. Create a server parameter file (SPFILE) from the text pfile. If we are not going to use a shared SPFILE, then, because the instance name on the original node has changed from PROD2 to PROD21, the spfile should be named spfilePROD21.ora; if we do want a shared SPFILE, follow the next step instead.

SQL> create spfile='?/dbs/spfilePROD2.ora' from pfile='?/dbs/initPROD2.ora';
File created.

Step 8. We normally use a shared SPFILE, so copy the SPFILE created above onto the shared storage, and leave a pointer to it in the local $ORACLE_HOME/dbs/initSID.ora parameter file:

[oracle@vrh1 dbs]$ cp spfilePROD2.ora /share1

[oracle@vrh1 dbs]$ cat initPROD21.ora
spfile='/share1/spfilePROD2.ora'

Step 9. Create a password file named after the new SID:

[oracle@vrh1 dbs]$ cd $ORACLE_HOME/dbs
[oracle@vrh1 dbs]$ orapwd file=orapwPROD21 password=oracle entries=10

Step 10. Start the instance to the mount state:
[oracle@vrh1 dbs]$ echo $ORACLE_SID
PROD21

SQL> startup mount;

ORACLE instance started.

Total System Global Area 629145600 bytes
Fixed Size 1269016 bytes
Variable Size 318767848 bytes
Database Buffers 301989888 bytes
Redo Buffers 7118848 bytes
Database mounted.

Step 11. Remember to rename the datafiles and online redo logs to the shared device, e.g.:

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/share1/redo01.log
/share1/redo02.log
/share1/redo03.log

SQL> alter database rename file '/share1/redo01.log' to '/u01/redo01.log';
Database altered.

SQL>  alter database rename file '/share1/redo02.log' to '/u01/redo02.log';
Database altered.
......................

/* Don't forget the tempfiles */

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/share1/temp01.dbf

SQL> alter database rename file '/share1/temp01.dbf' to '/u01/temp01.dbf';
Database altered.

Step 12. Add redo log groups for thread 2, and enable the thread:

SQL> alter database add logfile thread 2 group 4 '/u01/redo04.log' size 50M;
Database altered.
SQL> alter database add logfile thread 2 group 5 '/u01/redo05.log' size 50M;
Database altered.

SQL> alter database enable public thread 2;
Database altered.

............

Step 13. Open the database from node 1 and create the UNDO TABLESPACE that instance 2 (and any further instances) will use:

SQL> alter database open;
Database altered.

SQL>  CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '/u01/undotbs02.dbf' size 100M;
Tablespace created.

Step 14. Run the ?/rdbms/admin/catclust.sql script from node 1 to create the cluster-related views:

SQL> @?/rdbms/admin/catclust
...........

Grant succeeded.

Synonym created.

PL/SQL procedure successfully completed.


The following steps should be performed on node 2 (and on any additional nodes).

Step 15. On node 2, set sensible ORACLE_HOME (pointing at the newly installed DB software home) and ORACLE_SID (PROD22 in our environment) environment variables. You may also need to create the audit_file_dest directory for audit files and the background_dump_dest directory for background trace files for instance 2; if the instance cannot resolve these directories it may fail with "ORA-09925: Unable to create audit trail file". As described in Step 2, the recommended operating system kernel parameters must be configured on node 2 as well.
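
A minimal sketch of that preparation (the ORACLE_HOME path and admin directory layout below are only examples; adjust them to your installation):

[oracle@vrh2 ~]$ export ORACLE_SID=PROD22
[oracle@vrh2 ~]$ export ORACLE_HOME=/s01/oracle/product/10.2.0/db_1
[oracle@vrh2 ~]$ mkdir -p $ORACLE_HOME/admin/PROD2/adump

/* a candidate audit_file_dest */

[oracle@vrh2 ~]$ mkdir -p $ORACLE_HOME/admin/PROD2/bdump

/* a candidate background_dump_dest */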

Step 16. Create a $ORACLE_HOME/dbs/initSID.ora pfile pointing at the shared SPFILE, as shown in Step 8:

[oracle@vrh2 ~]$ echo $ORACLE_SID
PROD22

[oracle@vrh2 ~]$ cd $ORACLE_HOME/dbs

[oracle@vrh2 dbs]$ cat initPROD22.ora
spfile='/u01/spfilePROD2.ora'

Step 17. Create a password file of the form orapwSID on node 2:

[oracle@vrh2 dbs]$ cd $ORACLE_HOME/dbs
[oracle@vrh2 dbs]$ orapwd file=orapwPROD22 password=oracle entries=10

Step 18. Start the instance on node 2 and open the database:

SQL> startup mount;
ORACLE instance started.

Total System Global Area  629145600 bytes
Fixed Size                  1269016 bytes
Variable Size             318767848 bytes
Database Buffers          301989888 bytes
Redo Buffers                7118848 bytes
Database mounted.

SQL> alter database open;
Database altered.

SQL> select instance_name,status from gv$instance;

INSTANCE_NAME    STATUS
---------------- ------------
PROD22           OPEN
PROD21           OPEN

This completes the conversion from a single-node database to a RAC database, but the RAC database's resources are not yet registered in the OCR, so they cannot be configured or monitored with commands such as srvctl or crs_stat. We need to add these resources to the OCR by hand.

The following commands can be run from any one node:

srvctl add database -d <database name> -o <ORACLE_HOME path> -p <spfile location and name> 

srvctl add instance -d <database name> -i <instance 1 name> -n <node 1 name > 

srvctl add instance -d <database name> -i <instance 2 name> -n <node 2 name >

In our example environment the resources can be added as follows:
[oracle@vrh2 ~]$ srvctl add database -d PROD2 -o $ORACLE_HOME -p /u01/spfilePROD2.ora 

[oracle@vrh2 ~]$ srvctl add instance -d PROD2 -i PROD21 -n vrh1

[oracle@vrh2 ~]$ srvctl add instance -d PROD2 -i PROD22 -n vrh2

[oracle@vrh2 ~]$ crs_start ora.PROD2.db

[oracle@vrh2 ~]$ crs_stat -t
Name           Type           Target    State     Host       
------------------------------------------------------------
ora....21.inst application    ONLINE    ONLINE    vrh1       
ora....22.inst application    ONLINE    ONLINE    vrh2       
ora.PROD2.db   application    ONLINE    ONLINE    vrh1       
ora.vrh1.gsd   application    ONLINE    ONLINE    vrh1       
ora.vrh1.ons   application    ONLINE    ONLINE    vrh1       
ora.vrh2.gsd   application    ONLINE    ONLINE    vrh2       
ora.vrh2.ons   application    ONLINE    ONLINE    vrh2

Adding a Node to a 10g RAC Cluster

The RAC architecture opens a new road for scaling a system: just keep adding new servers to the Cluster. Here we look at how to add a node to a 10g RAC Cluster; it is actually not complicated — one might even say it is easy. Adding a node to an existing Oracle 10g RAC roughly involves the following steps:

  1. Configure the hardware and operating system environment on the new server node
  2. Add the node to the Cluster
  3. Install the Oracle Database software on the new node
  4. Configure the LISTENER for the new node
  5. Add an instance on the new node with DBCA

We assume the current RAC environment has 2 nodes, vrh1 and vrh2, that clusterware and database are both version 10.2.0.4, and that the new node to join the Cluster is vrh3.

I. Configure the operating system environment on the new server node

  1. This includes configuring the public network and private network interfaces the node will use; don't forget to add the existing nodes' network information to the hosts file, and to push the complete hosts file to the existing nodes in the Cluster so that every address resolves everywhere
  2. Extend the oracle (or other DBA) user's ssh user equivalence: append the contents of the new node's id_rsa.pub and id_dsa.pub files to the authorized_keys file, and make sure the same authorized_keys file exists on every node (see the quick check after this list)
  3. Adjust the operating system kernel parameters on the new node so they satisfy the memory requirements of the instance that will run there as well as the udp network parameters recommended for a 10g RAC Cluster
  4. Set the new node's system time to agree with the other nodes, or configure the NTP service
  5. Make sure CRS is running normally on every node of the existing Cluster, otherwise addNode will fail
  6. Create the installation directories for the clusterware and database software, using the same paths as on the existing nodes
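
Before running addNode.sh, the equivalence and clock preparation can be verified from an existing node with a simple loop — a minimal sketch:

[oracle@vrh1 ~]$ for h in vrh1 vrh2 vrh3 vrh1-priv vrh2-priv vrh3-priv; do ssh $h date; done

/* every ssh should print a timestamp without prompting for a password,
   and the timestamps should agree to within a few seconds */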

II. Add the new node to the RAC Cluster
1. Log in as the oracle user on an existing RAC node (vrh1), set a suitable DISPLAY environment variable, and run the addNode.sh script from the $ORA_CRS_HOME/oui/bin directory

[oracle@vrh1 ~]$ export DISPLAY=IP:0.0
[oracle@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin
[oracle@vrh1 bin]$ ./addNode.sh

2. If OUI launches successfully the welcome screen appears; click NEXT:
Screenshot

3. On the "Specify Cluster Nodes to Add to Installation" screen enter the public node name, private node name and VIP node name (all of which should already be configured in /etc/hosts on every node):
Screenshot-1

4. On the "Cluster Node Addition Summary" screen review the summary information and click NEXT:
Screenshot-2

5. The "Cluster Node Addition Progress" screen appears. After OUI has successfully copied the installed clusterware software to the new node, it prompts the user to run several scripts: orainstRoot.sh (on the new node), rootaddnode.sh (on the existing node running OUI), and root.sh (on the new node):
Screenshot-3

6. Run the three scripts as prompted:

/* Run the orainstRoot.sh script on the new node */

[root@vrh3 ~]# /s01/oraInventory/orainstRoot.sh 
Changing permissions of /s01/oraInventory to 770.
Changing groupname of /s01/oraInventory to oinstall.
The execution of the script is complete

/* Run the rootaddnode.sh script on the existing node where OUI is running */

[root@vrh1 ~]# /s01/oracle/product/10.2.0/crs/install/rootaddnode.sh
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Attempting to add 1 new nodes to the configuration
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node {nodenumber}: {nodename} {private interconnect name} {hostname}
node 3: vrh3 vrh3-priv vrh3
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/s01/oracle/product/10.2.0/crs/bin/srvctl add nodeapps -n vrh3 -A vrh3-vip/255.255.255.0/eth0 -o /s01/oracle/product/10.2.0/crs

/* Run the root.sh script on the new node */

[root@vrh3 ~]# /s01/oracle/product/10.2.0/crs/root.sh 
WARNING: directory '/s01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/s01/oracle/product' is not owned by root
WARNING: directory '/s01/oracle' is not owned by root
WARNING: directory '/s01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
OCR LOCATIONS =  /dev/raw/raw1
OCR backup directory '/s01/oracle/product/10.2.0/crs/cdata/crs' does not exist. Creating now
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/s01/oracle/product/10.2.0' is not owned by root
WARNING: directory '/s01/oracle/product' is not owned by root
WARNING: directory '/s01/oracle' is not owned by root
WARNING: directory '/s01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node {nodenumber}: {nodename} {private interconnect name} {hostname}
node 1: vrh1 vrh1-priv vrh1
node 2: vrh2 vrh2-priv vrh2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        vrh1
        vrh2
        vrh3
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Creating VIP application resource on (0) nodes.
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.

7. Once the scripts above have completed, return to the OUI screen and click OK; the installation success screen appears.

III. Installing the Oracle Database Software on the New Node
1. Log on to an existing RAC node as the oracle user, set a working DISPLAY environment variable, and run the addNode.sh script from the $ORACLE_HOME/oui/bin directory:

[oracle@vrh1 ~]$ export DISPLAY=IP:0.0

[oracle@vrh1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@vrh1 bin]$ ./addNode.sh 

2. If OUI is invoked successfully the welcome screen appears; click NEXT.

3. On the "Specify Cluster Nodes to Add to Installation" screen, check the node(s) on which the Oracle Database software should be installed, here vrh3.

4. On the "Cluster Node Addition Summary" screen, review the installation Summary and click Next.

5. The "Cluster Node Addition Progress" screen appears. After the Oracle Database software has been copied and installed remotely, the user is prompted to run the root.sh script.

6. Run the root.sh script as prompted on the new node (here vrh3).

IV. Configure the LISTENER on the new node (vrh3) with the netca tool; this step is not expanded here.

V. Adding an Instance on the New Node with DBCA
1. Log on to an existing RAC node as the oracle user, set a working DISPLAY environment variable, and run the dbca command:

2. The DBCA welcome screen appears; select "Oracle Real Application Clusters" and click Next.

3. Select Instance Management and click Next.

4. Select "Add an instance" and click Next.

5. Select the database to which the instance will be added and supply the username and password of a SYSDBA user.

6. Confirm the correct instance name and node, and click Next.

7. Review the Storage information and click Next.

8. Review the Summary information and click OK.

9. Wait for the progress bar to complete; when asked whether to "perform another operation", choose No.

10. Finally, verify that the node was added successfully by querying the gv$instance view; if the addition succeeded you should see information for all nodes (see the sketch below).
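A minimal verification sketch for step 10, using the node names from this example (expect one row per instance):

SQL> select inst_id, instance_name, host_name, status from gv$instance order by inst_id;

If vrh3 was added successfully, three rows are returned and the new instance should show STATUS = OPEN.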

A Case of ORA-19808

On an 11.2.0.2 RAC system on Linux, one node raised ORA-19808 during startup mount; the log follows:


SQL> alter database mount;
alter database mount
*
ERROR at line 1:
ORA-01105: mount is incompatible with mounts by other instances
ORA-19808: recovery destination parameter mismatch


SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


[oracle@rh3 ~]$ oerr ora 19808
19808, 00000, "recovery destination parameter mismatch"
// *Cause:  The value of parameters DB_RECOVERY_FILE_DEST and
//          DB_RECOVERY_FILE_DEST_SIZE must be same in all instances.
//          instance. All databases must have same recovery destination
//          parameters.
// *Action: Check DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE
//          values in all instances.

SQL> col name for a30
SQL> col value for a30

SQL> select name ,value,inst_id from gv$system_parameter  where name='db_recovery_file_dest_size';

NAME                           VALUE                             INST_ID
------------------------------ ------------------------------ ----------
db_recovery_file_dest_size     0                                       1
db_recovery_file_dest_size     42278584320                             2

This error occurred because the starting node used a pfile-style parameter file whose db_recovery_file_dest_size value did not match the db_recovery_file_dest_size of the instance that was already up.

The situation cannot arise when a shared server parameter file is used; alternatively, the problem can be fixed by changing db_recovery_file_dest_size on the starting node to match.
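A minimal sketch of the fix, reusing the value 42278584320 seen on instance 2 above (adjust to your own environment; the init<SID>.ora name is the usual pfile convention):

/* pfile case: make the entry in the starting node's init<SID>.ora match the running instance */
*.db_recovery_file_dest_size=42278584320

/* spfile case: one shared setting for all instances avoids the mismatch entirely */
SQL> alter system set db_recovery_file_dest_size=42278584320 sid='*' scope=both;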

Oracle RAC Internal Error: A Case of ORA-00600 [keltnfy-ldmInit]

On a 2-node 10.2.0.2 RAC system on SunOS, the internal error ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], [] appeared recently. At the time of the error, an operator had mistakenly changed the hostname of host 1 with the hostname command; the ORA-00600 errors then appeared one after another, while the OS log showed that the RAC CSS process had terminated unexpectedly. The logs follow:

================== OS Message=====================
Jan 10 11:15:10 cupd25k-a root: [ID 702911 user.error] Cluster Ready Services completed waiting on dependencies.
Jan 10 11:15:16 cupd25k-a root: [ID 702911 user.error] Duplicate Oracle CLSMON found. Killing and restarting it.
Jan 10 11:15:16 cupd25k-a root: [ID 702911 user.error] Oracle CSS daemon failed to start up. Check CRS logs for diagnostics.
Jan 10 11:15:16 cupd25k-a root: [ID 702911 user.error] Oracle CLSMON terminated with unexpected status 137. Respawning

/* The "Duplicate Oracle CLSMON found" here should refer to the OCLSOMON process:
"In Oracle 10.2.0.2 and above there is an additional process called OCLSOMON
which monitors the CSS daemon for hangs or scheduling issues and can reboot a
node if there is a perceived hang. OCLSOMON is spawned in init.cssd and runs
as the Oracle user."
   The OCLSOMON process was introduced in 10.2.0.2 and later to monitor the CSS process;
   it may reboot the node when a hang or an OS scheduling problem occurs,
   and it is spawned by the init.cssd script.  */

==================oclsmon.log======================
2011-01-10 11:15:11.376
unspecified member number is (1)
Member 1 group OCLSMON_ in use. Is oclsmon already up?
2011-01-10 11:15:11.479
Internal Error Information:
  Category: 8
  Operation: skgxnreg: the member number is i
  Location: skgxnreg_7
  Other:
  Dep: 1
2011-01-10 11:15:11.737
unspecified member number is (1)
Member 1 group OCLSMON_ in use. Is oclsmon already up?
2011-01-10 11:15:11.751
Internal Error Information:
  Category: 8
  Operation: skgxnreg: the member number is i
  Location: skgxnreg_7
  Other:
  Dep: 1
2011-01-10 11:15:12.006
unspecified member number is (1)
Member 1 group OCLSMON_ in use. Is oclsmon already up?
2011-01-10 11:15:12.023
Internal Error Information:
  Category: 8
  Operation: skgxnreg: the member number is i
  Location: skgxnreg_7
  Other:
  Dep: 1
2011-01-10 11:15:12.278
unspecified member number is (1)
Member 1 group OCLSMON_ in use. Is oclsmon already up?
2011-01-10 11:15:12.293
Internal Error Information:
  Category: 8
  Operation: skgxnreg: the member number is i
  Location: skgxnreg_7
  Other:
  Dep: 1

/*  skgxn is what Oracle Clusterware uses to monitor skgxn events (i.e. matters relating to third-party clusterware; this site was presumably running Sun Cluster).
    It appears that changing the hostname caused a fatal error in Oracle CSS and more than one OCLSOMON process to be started ("Duplicate Oracle CLSMON found"),
    ending in "Oracle CSS daemon failed to start up. Check CRS logs for diagnostics".
    With the Oracle instances still up, the unexpected death of the CSS process on node 25k-a
    can leave the LMD (Global Enqueue Service daemon) and LMON processes of every instance on that node unable to work properly, hanging the instances. */

==========================alert.log====================
Errors in file /oracle/oracle/admin/BOCPCS/udump/bocpcs1_ora_12320.trc:
ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []

=========================part of trace file===============
*** 2011-01-10 11:11:02.957
ksedmp: internal or fatal error
ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
Current SQL information unavailable - no session.
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
ksedmp()+716         CALL     ksedst()             FFFFFFFF7FFF9D40 ?
                                                   000000000 ? 0FFFFFFFF ?
                                                   FFFFFFFF7FFF8EE8 ?
                                                   FFFFFFFF7FFFA640 ?
                                                   000000008 ?
kgerinv()+200        PTR_CALL 0000000000000000     000000002 ? 10638A1CC ?
                                                   000000001 ? 000000000 ?
                                                   10638A000 ? 10638A1CC ?
kgeasnmierr()+28     CALL     kgerinv()            106384B98 ? 000000000 ?
                                                   105D3B940 ? 000000002 ?
                                                   FFFFFFFF7FFFDFF0 ?
                                                   000001430 ?
keltnfy()+784        CALL     kgeasnmierr()        106384B98 ? 1064DCBF0 ?
                                                   105D3B940 ? 000000002 ?
                                                   000000000 ? 00000002E ?
kscnfy()+552         PTR_CALL 0000000000000000     10639B498 ? 38001E7A8 ?
                                                   1055AC5D0 ? 10639B498 ?
                                                   000102C00 ? 10638A1C0 ?
ksucrp()+2436        CALL     kscnfy()             000008000 ? 000808214 ?
                                                   100C4C220 ? 1055C6680 ?
                                                   00000000F ? 000000001 ?
opiino()+2056        CALL     ksucrp()             000106387 ? 380007608 ?
                                                   000000000 ? 000380000 ?
                                                   000106000 ? 106387618 ?
opiodr()+1488        PTR_CALL 0000000000000000     10555A000 ?
                                                   FFFFFFFF7FFFF1C8 ?
                                                   00010555A ? 000106000 ?
                                                   105C83000 ? 000000001 ?
opidrv()+828         CALL     opiodr()             106391000 ? 000000000 ?
                                                   106390DD8 ? 106390000 ?
                                                   106391BD0 ? 000106000 ?
sou2o()+80           CALL     opidrv()             106394358 ? 000000001 ?
                                                   00000003C ? 000000000 ?
                                                   00000003C ? 000106000 ?
opimai_real()+124    CALL     sou2o()              FFFFFFFF7FFFF788 ?
                                                   00000003C ? 000000004 ?
                                                   FFFFFFFF7FFFF7B0 ?
                                                   105C82000 ? 000105C82 ?
main()+152           CALL     opimai_real()        000000002 ?
                                                   FFFFFFFF7FFFF888 ?
                                                   103F1BBCC ? 10632DB10 ?
                                                   002411E44 ? 000014400 ?
_start()+380         CALL     main()               000000002 ? 000000008 ?
                                                   000000000 ?
                                                   FFFFFFFF7FFFF898 ?
                                                   FFFFFFFF7FFFF9A8 ?
                                                   FFFFFFFF7C700200 ?

/* The trace file above shows "no session":
    the keltnfy-ldmInit internal error was hit while the server process was starting up */

The Metalink note Startup Database Produces Ora-00600: [Keltnfy-Ldminit] [ID 336447.1]
explains that this internal error is generally caused by improper network configuration on the host; clearly, using
the hostname command to set an unresolvable host name can trigger this ORA-00600 [keltnfy-ldmInit] internal error.

Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.3 - Release: 10.2 to 10.2
Information in this document applies to any platform.
***Checked for relevance on 09-Jun-2010***
Symptoms

A startup nomount of an Oracle 10g Release 2 database produces the following exception in the alert log

Starting up ORACLE RDBMS Version: 10.2.0.1.0.
Errors in file /opt/oracle/10.2/admin/ORCL/udump/ORCL_ora_535.trc:
ORA-00600: internal error code, arguments: [keltnfy-ldmInit], [46], [1], [], [], [], [], []
USER: terminating instance due to error 600
Instance terminated by USER, pid = 535
Cause

The problem is related to getting host information.
In this case, ldmInit()/sldmInit() is failing with error 46 : LDMERR_HOST_NOT_FOUND

The following exception may also occur :

LDMERR_SOSD_INIT        OSD init failed, to be specific in these OSD failures
LDMERR_BAD_ADDR         bad address when system call gethostname failed
LDMERR_HOST_NOT_FOUND   gethostbyname system call fails
LDMERR_NO_SUPPORT       when specific address type is not supported

Development has fixed two bugs so far regarding this issue

Bug:5438154 - Abstract: ORA-600[KELTNFY-LDMINIT]  STARTING THE DB
Release Notes:
ldmInit returned LDMERR_HOST_NOT_FOUND for a machine with a huge alias list/address list
Workaround:
reduce the alias list of the machine

Bug:5486074 - Abstract: ORA-600 [KELTNFY-LDMINIT] WHEN DNS IS NOT AVAILABLE
Release Notes:
Internal error is raised by the Server Generated Alert subsystem when it cannot determine the Host Name or
Network Address. This can be caused by the DNS server being unavailable.

Solution

The fix for 5486074 does not fix any underlying error from gethostbyname(); it just changes the internal error to a warning message:

 "Warning: keltnfy call to ldmInit failed with error 46"

You will still need to fix the network configuration issue.

These are the checks you can do to verify the host information

      Check permission on /etc/hosts 

$ ls -l /etc/hosts
-rw-r--r--  2 root root 194 Oct 17  2006 /etc/hosts

      Check if /etc/hosts file is correctly configured

              ( all of this on one line ). 

Check the hostname:
$ hostname
$ ping `hostname`

Make sure you are able to ping the hostname
      Check if /etc/nodename is correctly configured

If you have DNS set up, note that ping is not a tool for diagnosing DNS problems; better tools are nslookup, dnsquery, or dig.

$ nslookup
$ nslookup
$ nslookup 

The forward and reverse lookup should succeed and return consistent address/info.  

 Check nsswitch.conf

$ more nsswitch.conf
hosts:      files dns
Make sure host lookup is also done through the /etc/hosts file and not just DNS; it is recommended that files come before dns.
Also check resolv.conf, to confirm that DNS itself is working properly.

Clearly, using the hostname command on a production host is dangerous: you can hardly guarantee that a nudge from a colleague while you are typing won't make you enter the wrong name. Some people say the rm command should be banned in production environments; that special treatment applies just as well to hostname. So what can we use instead of hostname to display the host name? There are many choices; here is the one I recommend:

-bash-3.00$ oslevel -r 
5300-07

-bash-3.00$ hostname
askmaclean.com

-bash-3.00$ uname -n
askmaclean.com

/* uname -n fully meets the need! */
That's great!

Diagnosing RAC Global Cache Block Loss (gc blocks lost)

In an Oracle RAC environment, the AWR automatic workload report, Statspack and Grid Control all expose the global cache workload statistics collected by the Oracle database software, among them the statistics on lost global cache blocks (the lost blocks may be gc cr block lost or gc current block lost). If any node in the cluster shows large numbers of lost global cache blocks (abbreviated below as gc blocks lost), the private interconnect may have a problem, or network packet processing may be inefficient. Monitoring and evaluating these global cache statistics helps ensure that the interconnect Global Cache and Global Enqueue Services (GCS/GES), and the cluster as a whole, work properly. gc blocks lost generally indicate a packet-processing problem that warrants further investigation. In addition, gc blocks lost problems often arrive together with gc cr multiblock waits (waits for transfers of multiple consecutive blocks through the global cache). A quick check of the underlying counters is sketched below.
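A minimal sketch for reading the relevant counters directly, assuming a release where these statistic names exist in gv$sysstat (10g and later):

SQL> select inst_id, name, value from gv$sysstat where name in ('gc cr block lost','gc current block lost') order by inst_id, name;

Steadily increasing values on a node are the cue to start the network checks described below.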

For now, the prime suspect that causes or aggravates gc blocks lost is a wrongly or improperly configured interconnect. Next we look at how to track down the causes of gc blocks lost.

Although the impact of gc blocks lost on a cluster shows up mostly in performance, we cannot rule out its contributing to a node/instance eviction. Node membership management for Oracle Clusterware and Oracle RAC instances relies on heartbeats over the interconnect; if network heartbeats are lost continuously, a node/instance eviction can occur. Below we list the primary and secondary symptoms gc blocks lost may produce:

Primary symptoms:

  • 'gc cr block lost' or 'gc current block lost' appears among an instance's Top 5 wait events

Secondary symptoms:

  • SQL traces report many occurrences of the gc cr requests and gc current request wait events
  • Long waits on gc cr multiblock requests
  • Poor application performance and throughput
  • ifconfig or other network tools show large numbers of packet send/receive errors
  • netstat reports errors/retransmits/reassembly failures
  • Failure of one or more nodes
  • Abnormal CPU usage driven by network processing

Below we list the various factors that can cause gc blocks lost:
1. Undersized UDP receive (rx) buffers / UDP socket buffer overflows
Description: In real environments Oracle RAC global cache block processing is bursty and continuous; while the OS waits for an available CPU, received packets must be held in the protocol's buffers. When buffer space runs out, packets can be dropped silently, which in turn causes global cache block loss. On most UNIX platforms the `netstat -s` or `netstat -su` commands let us see UDP overflows (UDPInOverflows), packet receive errors, dropped frames, or packets discarded because the buffers were full.
Action: Packet loss is most often attributable to an inadequate UDP receive buffer size on the receiving server, leading to buffer overflows and global cache block loss. When the OS-configured UDP receive (rx) buffer size is below 128K, Oracle opens its sockets with a UDP rx buffer size of 128K; when the OS setting is above 128K, Oracle honors that value unchanged. The UDP receive buffer size Oracle uses also grows with larger database block sizes (>8k), but never beyond the limit imposed by the OS. In environments where the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter is set above 4, the excessive 'global cache cr requests' wait timeouts caused by inadequate UDP buffer settings usually come with easy-to-observe UDP buffer overflows, packet loss and lost cache blocks. To mitigate the problem, increasing the UDP buffer sizes is an effective approach; lowering DB_FILE_MULTIBLOCK_READ_COUNT can also help. A sketch of the Linux-side settings follows.
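A minimal Linux sketch for inspecting and raising the UDP socket buffer limits (the 4 MB values below are purely illustrative; use the values your platform and Oracle version recommend):

# inspect the current kernel limits
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max

# raise the limits for the running kernel (example values)
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304

# persist the change across reboots
echo "net.core.rmem_max=4194304" >> /etc/sysctl.conf
echo "net.core.wmem_max=4194304" >> /etc/sysctl.conf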
On most UNIX/Linux platforms the following commands help us see UDP socket buffer overflows or packet loss:

[maclean@rh2 ~]$ netstat -s
Ip:
    103300 total packets received
    0 forwarded
    0 incoming packets discarded
    103296 incoming packets delivered
    105287 requests sent out
Icmp:
    101 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 75
        echo replies: 26
    175 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 119
        echo request: 56
IcmpMsg:
        InType0: 26
        InType3: 75
        OutType3: 119
        OutType8: 56
Tcp:
    30355 active connections openings
    73 passive connection openings
    29589 failed connection attempts
    35 connection resets received
    3 connections established
    93218 segments received
    102780 segments send out
    68 segments retransmited
    0 bad segments received.
    29644 resets sent
Udp:
    2264 packets received
    46 packets to unknown port received.
    0 packet receive errors
    2270 packets sent
TcpExt:
    17 invalid SYN cookies received
    59 ICMP packets dropped because they were out-of-window
    181 TCP sockets finished time wait in fast timer
    166 delayed acks sent
    1 delayed acks further delayed because of locked socket
    Quick ack mode was activated 3 times
    6247 packets directly queued to recvmsg prequeue.
    6427 packets directly received from backlog
    554572 packets directly received from prequeue
    4171 packets header predicted
    1039 packets header predicted and directly queued to user
    9183 acknowledgments not containing data received
    4216 predicted acknowledgments
    2 times recovered from packet loss due to SACK data
    TCPDSACKUndo: 14
    18 congestion windows recovered after partial ack
    0 TCP data loss events
    2 fast retransmits
    46 other TCP timeouts
    6 DSACKs sent for old packets
    19 DSACKs received
    26 connections reset due to unexpected data
    25 connections reset due to early user close
    9 connections aborted due to timeout
IpExt:
    InMcastPkts: 4168
    InBcastPkts: 3505

[maclean@rh2 ~]$ netstat -su
IcmpMsg:
    InType0: 26
    InType3: 75
    OutType3: 119
    OutType8: 56
Udp:
    2264 packets received
    46 packets to unknown port received.
    0 packet receive errors
    2270 packets sent
IpExt:
    InMcastPkts: 4168
    InBcastPkts: 3505

UDP packet loss also tends to increase latency, reduce bandwidth, raise CPU usage (both kernel and user), and consume extra memory through packet retransmission.

2. Poor interconnect performance and high CPU usage; `netstat -s` reports packet reassembly failures
Description: Large UDP datagrams may need to be fragmented and sent in multiple frames (depending on the Maximum Transmission Unit, MTU); the receiving server must reassemble these fragments. High CPU usage (sustained or in frequent spikes) and inadequate reassembly buffers and UDP buffer space can all cause reassembly failures. On the receiving server, `netstat -s` can report large numbers of 'reassembles failed' and 'fragments dropped after timeout' in its IP statistics. Fragmented packets are kept for reassembly only for a limited time; packets that are not reassembled successfully may be dropped and requested again. When there is no space left for reassembly, packets are dropped silently.

`netstat -s` shows in the IP statistics:
     3104582 fragments dropped after timeout
     34550600 reassemblies required
     8961342 packets reassembled ok
     3104582 packet reassembles failed.

Action: Increase the size of the fragment reassembly buffers, giving reassembly more space; increase the time fragments are held for reassembly; and increase the UDP receive buffers to reduce network latency and soften the impact of reassembly failures and CPU usage on network stack processing. See the sketch after the /proc parameters below.

On Linux the following thresholds can be raised to enlarge the reassembly buffer space:
/proc/sys/net/ipv4/ipfrag_low_thresh (default 196608)
/proc/sys/net/ipv4/ipfrag_high_thresh (default 262144)

To change the fragment reassembly timeout, modify:
/proc/sys/net/ipv4/ipfrag_time (default 30)
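A minimal sketch of applying these changes (the doubled values are illustrative only; note that ipfrag_high_thresh must stay above ipfrag_low_thresh):

# enlarge the reassembly buffer space (bytes)
echo 262144 > /proc/sys/net/ipv4/ipfrag_low_thresh
echo 524288 > /proc/sys/net/ipv4/ipfrag_high_thresh

# hold fragments for 60 seconds instead of the default 30
echo 60 > /proc/sys/net/ipv4/ipfrag_time

# the equivalent sysctl form, if preferred
sysctl -w net.ipv4.ipfrag_high_thresh=524288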

The above covers the two most likely causes of gc blocks lost performance problems; for more information see the original note on gc lost blocks diagnostics. Also, because UNIX platforms differ, the commands shown above may not be available on yours for observing UDP overflows and packet loss; in that case the OSWatcher tool can be used to collect the relevant network information.

A Case of ORA-00600 [kclchkinteg_2] and [kjmsm_epc] Internal Errors

On a 9.2.0.6 RAC system on AIX, the alert log first reported that an LMS process (Lock Manager Server process, which exists only on RAC systems) hit ORA-00600 [kclchkinteg_2], followed by ORA-00600 [kjmsm_epc], after which the instance crashed. The relevant log follows:

Tue Dec 1 01:20:25 2009
Errors in file /oracle/admin/crmdb/bdump/crmdb1_lms7_2143190.trc:
ORA-00600: internal error code, arguments: [kclchkinteg_2], [], [], [], [], [], [], []
Tue Dec 1 01:20:25 2009
Errors in file /oracle/admin/crmdb/bdump/crmdb1_lms7_2143190.trc:
ORA-00600: internal error code, arguments: [kclchkinteg_2], [], [], [], [], [], [], []
Tue Dec 1 01:20:26 2009
Trace dumping is performing id=[cdmp_20091201012026]
Tue Dec 1 01:20:40 2009
Errors in file /oracle/admin/crmdb/bdump/crmdb1_lms7_2143190.trc:
ORA-00600: internal error code, arguments: [kjmsm_epc], [], [], [], [], [], [], []
Tue Dec 1 01:20:41 2009
Errors in file /oracle/admin/crmdb/bdump/crmdb1_lms7_2143190.trc:
ORA-00603: ORACLE server session terminated by fatal error
ORA-00600: internal error code, arguments: [kjmsm_epc], [], [], [], [], [], [], []
Tue Dec 1 01:20:43 2009

After checking with Metalink, this unexpected instance termination was identified as "Bug 3671867 - OERI[kclchkinteg_2] possible in RAC environment":

Hdr: 3671867 10.1.0.2 RDBMS 10.1.0.2 RAC PRODID-5 PORTID-212 ORA-600
Abstract: INSTANCE TERMINATED WITH ORA-600 [KCLCHKINTEG_2]
PROBLEM:
--------
During testing, one instance was terminated with ORA-600 [KCLCHKINTEG_2]

DIAGNOSTIC ANALYSIS:
--------------------
Alert log contains:

Thu Jun  3 12:57:26 2004
Private_strands 0 at log switch
Thread 1 advanced to log sequence 153
  Current log# 1 seq# 153 mem# 0: +EMDBD1/dbotazc0/onlinelog/group_1.267.1
  Current log# 1 seq# 153 mem# 1: +EMDBD1/dbotazc0/onlinelog/group_1.268.1
Thu Jun  3 13:41:45 2004
Private_strands 0 at log switch
Thread 1 advanced to log sequence 154
  Current log# 2 seq# 154 mem# 0: +EMDBD1/dbotazc0/onlinelog/group_2.269.1
  Current log# 2 seq# 154 mem# 1: +EMDBD1/dbotazc0/onlinelog/group_2.270.1
Thu Jun  3 13:55:06 2004
Errors in file /DBA/nest/oracle/DBOTA1/logs/dbota1_dbw0_82244.trc:
ORA-600: internal error code, arguments: [kclchkinteg_2], [], [], [], [],
[], [], []
Thu Jun  3 13:55:08 2004
Errors in file /DBA/nest/oracle/DBOTA1/logs/dbota1_dbw0_82244.trc:
ORA-600: internal error code, arguments: [kclchkinteg_2], [], [], [], [],
[], [], []
Thu Jun  3 13:55:08 2004
DBW0: terminating instance due to error 471
Thu Jun  3 13:55:08 2004
Trace dumping is performing id=[cdmp_20040603135508]
Thu Jun  3 13:55:10 2004
Dump system state for local instance only
Thu Jun  3 13:55:10 2004
Trace dumping is performing id=[cdmp_20040603135510]
The only trace file we have been sent is the dbw trace.
WORKAROUND:
-----------
None
RELATED BUGS:
-------------
There are several which end in KCLCHKINTEG but not KCLCHKINTEG_2
REPRODUCIBILITY:
----------------
Appears only to have happened once so far.
TEST CASE:
----------
N/A
STACK TRACE:
------------
ksedmp ksfdmp kgerinv kgeasnmierr kclassertle kclchkinteg kclfwrite1 kcbbic1
kcbbiop kcbbdrv ksbabs ksbrdp opirip opidrv sou2o main start

SUPPORTING INFORMATION:
-----------------------
Bruce Carter has looked at this and suggested a bug be raised.
Affects:
    Product (Component)	Oracle Server (Rdbms)
    Range of versions believed to be affected	Versions < 10.2
    Versions confirmed as being affected
        * 10.1.0.3
    Platforms affected	Generic (all / most platforms affected)
Fixed:
    This issue is fixed in
        * 9.2.0.7 (Server Patch Set)
        * 10.1.0.4 (Server Patch Set)
        * 10.2.0.1 (Base Release)
Symptoms:
Related To:
    * Internal Error May Occur (ORA-600)
    * Instance May Crash
    * ORA-600 [kclchkinteg_2]
    * RAC (Real Application Clusters) / OPS
Description
    ORA-600 [kclchkinteg_2] possible in RAC environment

Oracle Development confirmed that bug 3671867 is fixed as of the 9.2.0.7 patch set (and hence also in 9.2.0.8); the suggested remedy is to apply the 9.2.0.8 patch set or the one-off patch for bug 3671867. Because this bug can terminate an instance unexpectedly, its severity is very high, and DBAs who still manage 9i RAC systems should take note.
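A quick, hedged way to confirm afterwards that the one-off patch landed (opatch output formats vary across versions):

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 3671867

If the patch set route was taken instead, a simple "select * from v$version;" should report 9.2.0.8.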
