11gR2 New Feature: the LMHB (Lock Manager Heart Beat) Background Process

LMHB is a background process newly introduced in 11gR2. The official documentation describes it as the Global Cache/Enqueue Service Heartbeat Monitor: it monitors the heartbeat of the LMON, LMD, and LMSn processes to ensure they are running normally without blocking or spinning, and it exists in both database and ASM instances in Oracle RAC.

In other words, the process watches over the critical RAC background processes (LMON, LMD, LMSn and so on) and guarantees that none of them gets blocked or starts spinning. LMHB is presumably short for Lock Manager Heartbeat.

 

Let's look at the process's trace file to get an idea of what it does:

Controlled by the function kjfmGCR_HBCheckAll, LMHB checks the status and wait chains of the LMSn, LCKn, LMON, and LMD processes and dumps them at alternating intervals of 100s -> 80s -> 100s -> 80s:

 

*** 2012-02-03 00:03:10.066
==============================
LMS0 (ospid: 17247) has not moved for 77 sec (1328245389.1328245312)
kjfmGCR_HBCheckAll: LMS0 (ospid: 17247) has status 2
  : waiting for event 'gcs remote message' for 0 secs with wait_id 15327.
  ===[ Wait Chain ]===
  Wait chain is empty.
kjgcr_Main: KJGCR_ACTION - id 5

*** 2012-02-03 00:04:50.091
==============================
LMS0 (ospid: 17247) has not moved for 88 sec (1328245489.1328245401)
kjfmGCR_HBCheckAll: LMS0 (ospid: 17247) has status 2
  : waiting for event 'gcs remote message' for 0 secs with wait_id 24546.
  ===[ Wait Chain ]===
  Wait chain is empty.
kjgcr_Main: KJGCR_ACTION - id 5

 LCK0 (ospid: 2662) has not moved for 95 sec (1309746735.1309746640)
  kjfmGCR_HBCheckAll: LCK0 (ospid: 2662) has status 6
  ==================================================
  === LCK0 (ospid: 2662) Heartbeat Report
  ==================================================
  LCK0 (ospid: 2662) has no heartbeats for 95 sec. (threshold 70 sec)
   : Not in wait; last wait ended 80 secs ago.
   : last wait_id 2317342 at 'libcache interrupt action by LCK'.
  ..
  .
   Session Wait History:
       elapsed time of 1 min 20 sec since last wait
    0: waited for 'libcache interrupt action by LCK'
  ..

 

Roughly every 3 minutes it dumps the top CPU users, i.e. the sessions consuming the most CPU:

 

*** 2012-02-03 00:05:30.102
kjgcr_SlaveReqBegin: message queued to slave
kjgcr_Main: KJGCR_ACTION - id 3
CPU is high.  Top oracle users listed below:
     Session           Serial         CPU
      29                 7             0
     156                23             0
       3                 1             0
       4                 1             0
       5                 1             0

*** 2012-02-03 00:08:30.147
kjgcr_SlaveReqBegin: message queued to slave
kjgcr_Main: KJGCR_ACTION - id 3
CPU is high.  Top oracle users listed below:
     Session           Serial         CPU
      29                 7             0
     156                23             0
       3                 1             0
       4                 1             0
       5                 1             0

 

If a session is found to be burning an extreme amount of CPU, an internal algorithm may activate a resource management plan, or even kill the offending process:

 

*** 2012-02-03 00:08:35.149
kjgcr_Main: Reset called for action high cpu, identify users, count 0

*** 2012-02-03 00:08:35.149
kjgcr_Main: Reset called for action high cpu, kill users, count 0

*** 2012-02-03 00:08:35.149
kjgcr_Main: Reset called for action high cpu, activate RM plan, count 0

*** 2012-02-03 00:08:35.149
kjgcr_Main: Reset called for action high cpu, set BG into RT, count 0
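The reset lines above hint at a staged escalation: identify the heavy users, activate an RM plan, and only then kill sessions. A conceptual sketch of such staged logic follows; the thresholds and action names are invented for illustration and are not Oracle's internal algorithm:

```python
def choose_actions(cpu_pct_by_session, warn_pct=80.0, kill_pct=95.0):
    """Map per-session CPU usage to staged watchdog actions, loosely
    mimicking LMHB's 'identify users' / 'activate RM plan' / 'kill users'
    escalation. Both thresholds are made up for this sketch."""
    actions = []
    # Stage 1: any session above the warning threshold gets identified.
    hot = {sid: pct for sid, pct in cpu_pct_by_session.items() if pct >= warn_pct}
    if hot:
        actions.append(("identify users", sorted(hot)))
        # Stage 2: throttle the hot sessions via a resource plan.
        actions.append(("activate RM plan", None))
    # Stage 3: only the most extreme consumers become kill candidates.
    victims = [sid for sid, pct in hot.items() if pct >= kill_pct]
    if victims:
        actions.append(("kill users", sorted(victims)))
    return actions

print(choose_actions({29: 97.0, 156: 85.0, 3: 10.0}))
```

The point of the staging is that killing a session is the last resort: throttling via a resource plan is attempted first, which matches the order of the reset actions in the trace.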

 

Starting with 11.2.0.2, LMHB delegates the actual work to GCRn slave processes (Global Conflict Resolution Slave Process: "Performs synchronous tasks on behalf of LMHB. GCRn processes are transient slaves that are started and stopped as required by LMHB to perform synchronous or resource intensive tasks."). LMHB starts and stops the GCRn processes itself, so that multiple slaves can carry out synchronous or resource-intensive tasks (such as killing processes) on its behalf.
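The transient-slave pattern described above (start a helper only for the duration of one expensive task, then let it exit) can be sketched with ordinary threads. This is a conceptual analogy in Python, not the actual GCRn implementation:

```python
import threading
import queue

def run_in_transient_slave(task, *args):
    """Start a short-lived worker ('slave'), hand it one synchronous task,
    wait for the result, and let the worker exit -- loosely analogous to
    LMHB spawning a GCRn slave on demand and stopping it afterwards."""
    result_q = queue.Queue()

    def slave():
        result_q.put(task(*args))

    worker = threading.Thread(target=slave, name="GCR0-analog")
    worker.start()           # the slave exists only while the task runs
    result = result_q.get()  # the caller blocks: the task is synchronous
    worker.join()            # the slave stops once the work is done
    return result

print(run_in_transient_slave(sum, [1, 2, 3]))  # → 6
```

Offloading to a disposable slave keeps the monitor itself responsive: LMHB can keep heartbeat-checking while a slave does the resource-intensive part.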

As the traces show, most of the internal functions LMHB calls begin with kjgcr or kjfmGCR, where GCR stands for Global Conflict Resolution, e.g.:

kjgcr_Main: KJGCR_ACTION - id 5

A GCRn process trace looks like this:

*** 2011-11-28 02:42:44.466
kjgcr_SlaveActionCbk: Callback failed, check trace
Dumping GCR slave work message at 0x96b81fc0
GCR layer information: type = 1, index = 0
Unformatted dump of ksv layer header:

 

LMHB exists to improve RAC availability: in resource-starved environments it will proactively try to kill the most resource-hungry server processes so that critical RAC background processes such as LMS can keep working. And because it periodically records the wait events of LMS, LMON, etc. as well as per-session CPU usage, the LMHB trace file can also serve as one of the sources of evidence when diagnosing RAC problems. This is a notable new feature and enhancement in RAC since 11.2.0.1.

Related hidden parameters:

_lm_hb_callstack_collect_time: hb diagnostic call stack collection time in seconds (default 5s)
_lm_hb_disable_check_list: list of process names to be disabled in heartbeat check (default none)

 

11.2 is the first release to include LMHB, so the feature is not yet fully mature; in practice, on RAC systems under heavy resource pressure, LMHB can actually do more harm than good. If you have run into related problems, or have seen strange behavior on 11.2 RAC, the following MOS notes are worth a look:

 

ORA-29770 LMHB Terminates Instance as LMON Waited for Control File IO or LIBRARY CACHE or ROW CACHE Event for too Long [ID 1197674.1]
Bug 8888434 – LMHB crashes the instance with LMON waiting on controlfile read [ID 8888434.8]
Bug 11890804 – LMHB crashes instance with ORA-29770 after long “control file sequential read” waits [ID 11890804.8]
Bug 11890804: LMHB TERMINATE INSTANCE WHEN LMON WAIT CHANGE FROM CF READ AFTER 60 SEC
Bug 13467673: CSS MISSCOUNT AND ALL ASM DOWN WITH ORA-29770 BY LMHB
Bug 13390052: KJFMGCR_HBCHECKALL MESSAGES ARE CONTINUOUSLY LOGGED IN LMHB TRACE FILE.
Bug 13322797: LMHB TERMINATES THE INSTANCE DUE TO ERROR 29770
Bug 11827088 – Latch ‘gc element’ contention, LMHB terminates the instance [ID 11827088.8]

Bug 13061883: LMHB IS TERMINATING THE INSTANCE DURING SHUTDOWN IMMEDIATE
Bug 12564133 – ORA-600[1433] in LMHB process during RAC reconfiguration [ID 12564133.8]
Bug 12886605: ESSC: LMHB TERMINATE INSTANCE DUE TO 29770 – LMON WAIT ENQ: AM – DISK OFFLINE
Bug 12757321: LMHB TERMINATING THE INSTANCE DUE TO ERROR 29770
Bug 10296263: LMHB (OSPID: 15872): TERMINATING THE INSTANCE DUE TO ERROR 29770
Bug 11899415: ORA-29771 AND LMHB (OSPID: XXXX) KILLS USER (OSPID: XXX
Bug 10431752: SINGLE NODE RAC: LMHB TERMINATES INSTANCE DUE TO 29770
Bug 11656856: LMHB (OSPID: 27701): TERMINATING THE INSTANCE DUE TO ERROR 29770
Bug 10411143: INSTANCE CRASHES WITH IPC SEND TIMEOUT AND LMHB TERMINATES WITH ORA-29770
Bug 11704041: DATABASE INSTANCE CRASH BY LMHB PROCESS
Bug 10412545: ORA-29770 LMHB TERMINATE INSTANCE DUE TO VARIOUS LONG CSS WAIT
Bug 10147827: INSTANCE TERMINATED BY LMHB WITH ERROR ORA-29770
Bug 10016974: ORA-29770 LMD IS HUNG FOR MORE THAN 70 SECONDS AND LMHB TERMINATE INSTANCE
Bug 9376100: LMHB TERMINATING INSTANCE DUE ERROR 29770

 

 


Comments

  1. maclean says:

    Fri Mar 08 08:33:31 2013
    WARNING: Heavy swapping observed on system in last 5 mins.
    pct of memory swapped in [3.91%] pct of memory swapped out [7.33%].
    Please make sure there is no memory pressure and the SGA and PGA
    are configured correctly. Look at DBRM trace file for more details.

  2. harryzhang says:

That message is from the alert.log, though, and it is not especially accurate.
