latch free: a SQL performance problem caused by cache buffer handles

A few months ago, a colleague in Xinjiang reported a performance problem with a customer's single-instance database: a certain SQL statement had been running for a long time without finishing, and while it ran a large number of sessions were stuck in latch free waits. Since we were thousands of kilometers apart, we communicated by email; he first sent me the explain_log for the problem SQL. The execution plan of the statement was as follows:

-----------------------------------------------------------------------------------------------
| Id  | Operation                               |  Name               | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                     |     1 |  1956 |    13 |
|*  1 |  FILTER                                 |                     |       |       |       |
|   2 |   NESTED LOOPS OUTER                    |                     |     1 |  1956 |    13 |
|   3 |    NESTED LOOPS OUTER                   |                     |     1 |  1942 |    12 |
|   4 |     NESTED LOOPS OUTER                  |                     |     1 |  1924 |    11 |
|   5 |      NESTED LOOPS                       |                     |     1 |  1906 |    10 |
|   6 |       NESTED LOOPS                      |                     |     1 |  1856 |     9 |
|   7 |        NESTED LOOPS                     |                     |     5 |  9055 |     8 |
|   8 |         NESTED LOOPS OUTER              |                     |     5 |  8575 |     7 |
|   9 |          NESTED LOOPS OUTER             |                     |     5 |  8280 |     6 |
|  10 |           NESTED LOOPS OUTER            |                     |     5 |  7785 |     5 |
|  11 |            NESTED LOOPS OUTER           |                     |     5 |  7290 |     4 |
|  12 |             NESTED LOOPS OUTER          |                     |     5 |  4130 |     3 |
|* 13 |              TABLE ACCESS BY INDEX ROWID| S_PROD_INT          |     5 |  2960 |     2 |
|* 14 |               INDEX RANGE SCAN          | S_PROD_INT_M53      |   326 |       |     5 |
|  15 |              TABLE ACCESS BY INDEX ROWID| S_PROD_LN           |     1 |   234 |     1 |
|* 16 |               INDEX UNIQUE SCAN         | S_PROD_LN_P1        |     1 |       |       |
|  17 |             TABLE ACCESS BY INDEX ROWID | S_PROD_INT_CRSE     |     1 |   632 |     1 |
|* 18 |              INDEX RANGE SCAN           | S_PROD_INT_CRSE_U1  |     1 |       |     1 |
|  19 |            TABLE ACCESS BY INDEX ROWID  | S_LIT               |     1 |    99 |     1 |
|* 20 |             INDEX UNIQUE SCAN           | S_LIT_P1            |     1 |       |       |
|  21 |           TABLE ACCESS BY INDEX ROWID   | S_LIT               |     1 |    99 |     1 |
|* 22 |            INDEX UNIQUE SCAN            | S_LIT_P1            |     1 |       |       |
|  23 |          TABLE ACCESS BY INDEX ROWID    | S_PROD_INT_TNTX     |     1 |    59 |     1 |
|* 24 |           INDEX RANGE SCAN              | S_PROD_INT_TNTX_U1  |     1 |       |     1 |
|  25 |         TABLE ACCESS BY INDEX ROWID     | S_VOD               |     1 |    96 |     1 |
|* 26 |          INDEX RANGE SCAN               | S_VOD_U2            |     1 |       |     2 |
|* 27 |        TABLE ACCESS BY INDEX ROWID      | S_VOD_VER           |     1 |    45 |     1 |
|* 28 |         INDEX RANGE SCAN                | S_VOD_VER_U1        |     2 |       |     4 |
|  29 |       TABLE ACCESS BY INDEX ROWID       | S_ISS_OBJ_DEF       |     1 |    50 |     1 |
|* 30 |        INDEX RANGE SCAN                 | S_ISS_OBJ_DEF_M3    |     1 |       |     2 |
|  31 |      TABLE ACCESS BY INDEX ROWID        | S_CTLG_CAT          |     1 |    18 |     1 |
|* 32 |       INDEX UNIQUE SCAN                 | S_CTLG_CAT_P1       |     1 |       |       |
|  33 |     TABLE ACCESS BY INDEX ROWID         | S_CTLG_CAT          |     1 |    18 |     1 |
|* 34 |      INDEX UNIQUE SCAN                  | S_CTLG_CAT_P1       |     1 |       |       |
|  35 |    TABLE ACCESS BY INDEX ROWID          | S_CTLG_CAT          |     1 |    14 |     1 |
|* 36 |     INDEX UNIQUE SCAN                   | S_CTLG_CAT_P1       |     1 |       |       |
|  37 |   NESTED LOOPS                          |                     |     1 |   137 |     7 |
|  38 |    NESTED LOOPS                         |                     |     4 |   456 |     6 |
|  39 |     NESTED LOOPS                        |                     |     4 |   364 |     5 |
|  40 |      NESTED LOOPS                       |                     |     2 |   142 |     4 |
|  41 |       NESTED LOOPS                      |                     |     1 |    53 |     3 |
|  42 |        NESTED LOOPS                     |                     |     3 |   120 |     2 |
|* 43 |         INDEX RANGE SCAN                | S_CTLG_CAT_PROD_U1  |     3 |    57 |     2 |
|* 44 |         TABLE ACCESS BY INDEX ROWID     | S_CTLG_CAT          |     1 |    21 |     1 |
|* 45 |          INDEX UNIQUE SCAN              | S_CTLG_CAT_P1       |     1 |       |       |
|* 46 |        TABLE ACCESS BY INDEX ROWID      | S_CTLG              |     1 |    13 |     1 |
|* 47 |         INDEX UNIQUE SCAN               | S_CTLG_P1           |     1 |       |       |
|  48 |       TABLE ACCESS BY INDEX ROWID       | S_PARTY_CTGRY       |     2 |    36 |     1 |
|* 49 |        INDEX RANGE SCAN                 | S_PARTY_CTGRY_F1    |     2 |       |     1 |
|  50 |      TABLE ACCESS BY INDEX ROWID        | S_PARTY_RPT_REL     |     2 |    40 |     1 |
|* 51 |       INDEX RANGE SCAN                  | S_PARTY_RPTREL_F50  |     2 |       |     1 |
|  52 |     TABLE ACCESS BY INDEX ROWID         | S_PARTY_PER         |     1 |    23 |     1 |
|* 53 |      INDEX RANGE SCAN                   | S_PARTY_PER_F1      |     1 |       |     2 |
|* 54 |    INDEX RANGE SCAN                     | S_PARTY_PER_U1      |     1 |    23 |     3 |
|  55 |   NESTED LOOPS                          |                     |     1 |   157 |     8 |
|  56 |    NESTED LOOPS                         |                     |     1 |   144 |     7 |
|  57 |     NESTED LOOPS                        |                     |     1 |   125 |     6 |
|  58 |      NESTED LOOPS                       |                     |     1 |   104 |     5 |
|  59 |       NESTED LOOPS                      |                     |     1 |    86 |     4 |
|  60 |        NESTED LOOPS                     |                     |     4 |   264 |     3 |
|  61 |         NESTED LOOPS                    |                     |     2 |    86 |     2 |
|* 62 |          INDEX RANGE SCAN               | S_PARTY_PER_U1      |     2 |    46 |     4 |
|  63 |          TABLE ACCESS BY INDEX ROWID    | S_POSTN             |     1 |    20 |     1 |
|* 64 |           INDEX UNIQUE SCAN             | S_POSTN_P1          |     1 |       |       |
|* 65 |         INDEX RANGE SCAN                | S_PARTY_PER_U1      |     2 |    46 |     3 |
|* 66 |        INDEX RANGE SCAN                 | S_PARTY_RPT_REL_U2  |     1 |    20 |     1 |
|* 67 |       INDEX RANGE SCAN                  | S_PARTY_CTGRY_U1    |     1 |    18 |     1 |
|* 68 |      TABLE ACCESS BY INDEX ROWID        | S_CTLG_CAT          |     1 |    21 |     1 |
|* 69 |       INDEX UNIQUE SCAN                 | S_CTLG_CAT_P1       |     1 |       |       |
|* 70 |     INDEX RANGE SCAN                    | S_CTLG_CAT_PROD_U1  |     1 |    19 |     1 |
|* 71 |    TABLE ACCESS BY INDEX ROWID          | S_CTLG              |     1 |    13 |     1 |
|* 72 |     INDEX UNIQUE SCAN                   | S_CTLG_P1           |     1 |       |       |
|  73 |   NESTED LOOPS                          |                     |     1 |    58 |     3 |
|  74 |    NESTED LOOPS                         |                     |     3 |   135 |     2 |
|* 75 |     INDEX RANGE SCAN                    | S_CTLG_CAT_PROD_U1  |     2 |    38 |     2 |
|* 76 |     TABLE ACCESS BY INDEX ROWID         | S_CTLG_CAT          |     1 |    26 |     1 |
|* 77 |      INDEX UNIQUE SCAN                  | S_CTLG_CAT_P1       |     1 |       |       |
|* 78 |    TABLE ACCESS BY INDEX ROWID          | S_CTLG              |     1 |    13 |     1 |
|* 79 |     INDEX UNIQUE SCAN                   | S_CTLG_P1           |     1 |       |       |
-----------------------------------------------------------------------------------------------

The SQL joins close to forty tables and indexes, all of them with nested loops, which is typical of a Siebel application. Its execution plan is so complex that I did not even want to look at it; since analysing the plan any further would offend my "laziness" too much, my first thought was to start from the wait interface instead, because after all OWI is "useless". As I could not work on the system remotely, and repeated rounds of information gathering would have made the whole process very slow, I asked in one go for a 10046 trace, an errorstack dump, a latches dump, and a complete RDA report of the database.
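
For reference, this kind of diagnostic data is typically gathered from SQL*Plus with oradebug against the server process of the hung session; a minimal sketch (the OS pid 15465 matches the trace file name that appears below, and the dump levels shown are the usual choices, not anything mandated by this case):

SQL> oradebug setospid 15465
SQL> oradebug event 10046 trace name context forever, level 12   -- SQL trace with binds and waits
SQL> oradebug dump errorstack 3                                   -- process state / call stack
SQL> oradebug dump latches 2                                      -- latch state information
SQL> oradebug tracefile_name                                      -- where the trace was written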

Since the waits were latch free, we first have to identify the latch involved before any further analysis. Let's start with the 10046 trace:

cat siebdb1_ora_15465.trc | grep "nam='latch free'" | awk '{print $7}' | sort | uniq -c
1 p1=-4611685984823284696
169 p1=-4611685984823317640
1 p1=-4611685988591416672
1 p1=-4611685988592487696
2 p1=-4611685988692060472
1 p1=-4611685988694684144
1 p1=-4611685988705526816
1 p1=-4611685988733017136
1 p1=-4611685988779484112
1 p1=-4611685988784059264
1 p1=-4611685988787329592
1 p1=-4611685988788323152
1 p1=-4611685988800312448
2 p1=-4611685988805347840
24 p1=-4611685988806714976
2 p1=-4611685988837854328
7 p1=-4611685988841028520
1 p1=-4611685988869432648
1 p1=-4611685988871320920

Processed this way, the trace shows that various latch free waits occurred during execution, and the most frequent one is the latch with p1=-4611685984823317640. For the latch free wait event, p1 is the address of the latch the session is waiting for. -4611685984823317640 converted to hexadecimal is C0000007D2F56B78. Let's see what this latch actually is:

 c0000007d2f56b78 cache buffer handles level=3
        Location from where latch is held: kcbzfs:
        Context saved from call: 0
        state=busy
  gotten 2734921346 times wait, failed first 656272521 sleeps 44808579
  gotten 0 times nowait, failed: 0
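
For reference, the same latch could also be located from SQL*Plus instead of a latches dump. A minimal sketch, assuming the negative P1 is simply the 64-bit latch address printed as a signed number (so 2^64 is added back), and that cache buffer handles has only a parent latch, as discussed below:

-- convert the signed P1 value from the 10046 trace into a hex address
select to_char(-4611685984823317640 + 18446744073709551616, 'XXXXXXXXXXXXXXXX') as laddr
  from dual;

-- look the address up among the parent latches
select name, level#, gets, misses, sleeps
  from v$latch_parent
 where addr = hextoraw('C0000007D2F56B78');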

When a session needs to pin a buffer header, it first has to obtain a buffer handle, and obtaining a buffer handle in turn requires acquiring the cache buffer handles latch. To avoid excessive contention on this latch, each session is allowed to cache a small set of buffer handles, also called the reserved set. The size of this reserved set is controlled by the hidden parameter _db_handles_cached (default 5); with these cached handles, a session running SQL of moderate complexity does not need to acquire the latch over and over again.
At the same time there are very complex statements that need to pin a large number of buffers at once; if free handles are available, a session is allowed to request handles beyond the 5 it has cached. To limit the resources any single session can occupy, Oracle's internal algorithm tries to let each process pin only a "fair share" of the buffers, so the total number of handles a single session can hold at one time is db_block_buffers/processes, a value exposed by the hidden parameter _cursor_db_buffers_pinned. Also, cache buffer handles has no child latches, only the parent latch itself, so when a system runs many complex SQL statements and the buffer handles a session can cache fall far short of what it needs, this latch becomes a bottleneck for the whole system and can even lead to potential deadlock/starvation issues.
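
A minimal sketch (run as SYS, against the X$ fixed tables) of how the two hidden parameters mentioned above can be checked on the instance in question:

-- current values and descriptions of the hidden parameters
select i.ksppinm as name, v.ksppstvl as value, i.ksppdesc as description
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm in ('_db_handles_cached', '_cursor_db_buffers_pinned');
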
Combined with the actual figures from the latch dump above, the hit ratio of the cache buffer handles latch is 1 - 656272521/2734921346 = 76%. As a rule of thumb, once the hit ratio of a latch drops below 95% we can consider contention on it to be fairly severe; here a parent latch has a hit ratio below 80%, so it is hard to imagine how badly performance must have degraded.
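
The same ratio can be read live from V$LATCH; a small sketch:

-- hit ratio of the cache buffer handles latch
select name, gets, misses, sleeps,
       round(100 * (1 - misses / nullif(gets, 0)), 2) as hit_pct
  from v$latch
 where name = 'cache buffer handles';
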
Digging through the RDA my colleague had sent, the resource limit section showed that the session high-water mark of this instance had never exceeded 2500, yet processes was set to 6000. This is a mistake many junior DBAs and implementation vendors make: they assume that setting a generous processes value wards off ORA-00020 and lets them rest easy, without realising that many of Oracle's hidden parameters are sized from the process count. The total amount of resources is finite, so the more processes you allow for, the fewer resources each individual session can theoretically command; this holds not only in Oracle but almost everywhere.
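
On a live system the same high-water marks can be read directly from V$RESOURCE_LIMIT; a small sketch:

-- compare the configured limits with the historical high-water marks
select resource_name, current_utilization, max_utilization, limit_value
  from v$resource_limit
 where resource_name in ('processes', 'sessions');
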
Most likely the oversized processes parameter drove _cursor_db_buffers_pinned down, while the problem SQL, on top of being very complex, joined everything with nested loops and therefore needed to keep buffers of many different tables and indexes pinned at the same time. At the start of execution the session managed to pin part of those buffers, but when it eventually hit the ceiling set by _cursor_db_buffers_pinned it hung while still holding the cache buffer handles latch, which in turn left a large number of other sessions blocked on latch free waits.
My main recommendation to the colleague at the time was to lower the processes parameter; judging from the system's V$RESOURCE_LIMIT dynamic view, setting processes to 3000 would be more than enough. At the same time, the hidden parameter _db_handles_cached can be increased to ease contention on this latch.
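
A hedged sketch of what those two changes would look like (the value 10 for _db_handles_cached is illustrative, and the MOS note quoted below ends up recommending 40 or more for deeply nested NL SQL; both changes assume an spfile and require an instance restart):

alter system set processes = 3000 scope = spfile;
alter system set "_db_handles_cached" = 10 scope = spfile;
-- restart the instance for both parameters to take effect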

A few days later my colleague wrote back to say that the developers had worked around the problem by other means, but I still suggested that he include the changes to processes and _db_handles_cached in any offline maintenance window that might come up.

A few months after this case, I stumbled upon the following note on MOS:

Hdr: 9328032 10.2.0.3 RDBMS 10.2.0.3 BUFFER CACHE PRODID-5 PORTID-23
Abstract: INCREASE IN CACHE BUFFER CHAINS/HANDLES WAITS

*** 01/29/10 05:41 am ***
TAR:
----
3-1375394871

PROBLEM:
--------
Today, and despite no significant change in workload we have increased waits
on these two latches:

29Jan2010_1030_1045 : Top 5 Timed Events
29Jan2010_1030_1045 : ~~~~~~~~~~~~~~~~~~
29Jan2010_1030_1045 : Event                             Waits  Time (s)  Avg wait (ms)  %Total Call Time  Wait Class
29Jan2010_1030_1045 : db file sequential read       4,912,142    50,169             10              40.7  User I/O
29Jan2010_1030_1045 : CPU time                                   45,456                             36.9
29Jan2010_1030_1045 : latch: cache buffers chains   1,347,911    12,255              9              10.0  Concurrency
29Jan2010_1030_1045 : latch: cache buffer handles   1,364,489    10,662              8               8.7  Other
29Jan2010_1030_1045 : log file sync                   482,281     7,101             15               5.8  Commit
29Jan2010_1030_1045 :
-------------------------------------------------------------     

DIAGNOSTIC ANALYSIS:
--------------------
See below

WORKAROUND:
-----------
None

RELATED BUGS:
-------------

REPRODUCIBILITY:
----------------

TEST CASE:
----------

STACK TRACE:
------------

SUPPORTING INFORMATION:
-----------------------

24 HOUR CONTACT INFORMATION FOR P1 BUGS:
----------------------------------------

DIAL-IN INFORMATION:
--------------------

IMPACT DATE:
------------ 
Notes so far:

There seems to be 5 - 12% increase across a number of statistics
between the OK sample and the problem sample.
eg:                          Total           per second  per TX
  26th> execute count       6,322,503        7,025.2      14.6
  29th> execute count       6,828,911        7,552.3      14.7
        8% increase in execute count

  26th> data blocks consistent reads - u 16,567,225  18,408.4  38.2
  29th> data blocks consistent reads - u 18,700,073  20,681.1  40.3
        12.6% increase in consistent read undo records applied

cache buffer handles
~~~~~~~~~~~~~~~~~~~~
 There is about an 8% increase in gets of this latch between
 the two reports:
                               Gets        miss%   wait time
  26th> cache buffer handles  80,320,092   21.0        56
  29th> cache buffer handles  87,158,750   55.9     10661

 And the latch was already showing 21% miss rate before so was
 showing signs of being close to becoming a problem.

 The latch misses stats show:
                            Location      Sleeps   Waiter Sleeps
   cache buffer handles     kcbzgs        726,520  832,967
   cache buffer handles     kcbzfs        638,106  531,875

 This latch is used just to get / release buffer state objects.

 Each process caches up to "_db_handles_cached" of these against
 its own process so once we are warmed up and have processes
 then we should only use this latch if we are after pinning more
 than 5 buffers in the process state.

 As the misses seem balanced across kcbzgs and kcbzfs then the
 gets do not appear to be due to warming up so this points at
 SQLs on the system needing to pin more than 5 buffers at a
 time. If we increase "_db_handles_cached" then we should be able
 to alleviate the load on this latch.

 eg: See PID 4174 from yesterday we see it has 9 buffers pinned
     and is running:
       SELECT   /*+ LEADING (PROD, ITM, ORDX)
          INDEX (ITM S_ORDER_ITEM_BT_W1_X )*/ ...
     So we can see there are SQLs needing many more than 5 buffers
     pinned at a time.
*** 01/29/10 10:06 am ***
cache buffers chains
 ~~~~~~~~~~~~~~~~~~~~            Gets         Miss%  Avg Slps/Miss  Wait Time (s)
 26th>  cache buffers chains    866,844,547    1.0    0.0     83
 29th>  cache buffers chains    900,769,954    1.4    0.1  12258
        4% increase in gets , large increase in wait time.

 Previous miss rate was 1%, now 1.4%.

 James already checked and mostly the spread looks reasonable
  across child latches.
                                           per second
     26th>     Transactions:                481.56
     29th>     Transactions:                513.79
               6.5% more transactions processed.

 This is looking like increased load is starting to see the
 latch contention. It may be sensible here to look at the
 higher "buffer gets" SQLs to see if they really are well
 tuned. eg: 8y9msp8p3gv23 in the 1130-1145 reports are
 showing increased executions but also increased buffers
 per execution. It would seem sensible to check out the
 plans and use of the top buffer gets SQLs.

 It would also be wise to confirm we are not hitting bug 6455161
 as that affects this release and I cannot see that patch in place.
 ie: Verify:
     select owner||'.'||Object_name||'.'||subobject_name
       from dba_objects
      where object_id<>data_object_id
        and object_type like 'INDEX%'
      order by owner,object_name,subobject_name;
 That would not be a difference from 26th but if this is hit
 your gets may be far higher than needed for certain types of
 query.
*** 01/29/10 02:14 pm *** (CHG: Sta->11)
*** 01/29/10 02:14 pm ***
*** 01/29/10 02:40 pm *** (CHG: Sta->16)
*** 01/29/10 03:03 pm ***
*** 01/29/10 11:33 pm ***
Many of the indexes are on SIEBEL objects so
bug 6455161 looks like it could occur here.
*** 01/30/10 12:02 am ***
*** 01/30/10 12:41 am ***
I checked a few top buffer gets SQLs from older AWR that we have
awrrpt_1_67673_67677 . It looks like the heaviest SQLs are
probably not using indexes which are exposed to 6455161 but a couple
may be:

 av5skvp0mkxm1
  SELECT /*+ ALL_ROWS */ T3.CONFLICT_ID, T3.LAST_UPD ...
  FROM SIEBEL.S_LST_OF_VAL T1, SIEBEL.S_SRM_ACTION T2, SIEBEL.S_SRM_ACTION T3

   ...

  Indexes on the q1.log list which might be used by that query in NL:
    SIEBEL.S_SRM_ACTION_M1    SIEBEL.S_SRM_ACTION_P1
    SIEBEL.S_SRM_ACTION_U1
    SIEBEL.S_SRM_ACT_PARAM_P1 SIEBEL.S_SRM_ACT_PARAM_U1

 7khg9mx4cv5c5
  Queries from SIEBEL.S_ORG_EXT with lots of other tables

  Might use SIEBEL.S_ORG_EXT_M1 from the list
*** 01/30/10 12:59 am ***
=====
>>> Side issue unrelated to the issue here:
      From patch list patches_28012010.lis
      customer has 6868080 installed. This was superceded by 8244734
=====
*** 01/30/10 07:06 am *** (CHG: Sta->10)
*** 01/30/10 07:06 am ***
Suggestions for this:

a> Set "_db_handles_cached" >= 10

b> Get and install a patch for bug 6455161
    This seems like this is minimal impact at present
    as problem indexes are not used in the top buffer
    gets SQLs BUT one index rebuild could lead to mass
    problems, and it may give some small reductions
    in buffer gets on certain SQLs, just not the top
    ones.
    It is a small focused fix with no clashes on current
    list.

c> Double check some of the top buffer gets SQL plans.
    They may be the best they could be but are worth
    checking as there is little in the AWRs to show
    any special reason for cache buffers chains increase
    other than the slightly higher workload.
*** 02/01/10 07:32 am ***

>> SQL "av5skvp0mkxm1" may be susceptible to 6455161.
*** 02/01/10 08:03 am ***
Reviewing level of "_db_handles_cached" as some of the top
SQLs seem very deep NL outer.

Testing this here.

*** 02/01/10 11:05 am ***
For repeatedly executed deep NL SQLs higher handle caching should
improve loading on the "cache buffer handles" latch, but it
has no impact on the chains latch as NLs and NL outers keep buffers
pinned anyway.

So suggestions are still>>
 Set "_db_handles_cached" >= 40 (given nested depth of some SQL)
   See if extra memory should be added to shared pool to accomodate that

 Install 6455161

 Drill more into SQL to see if any can be improved esp wrt buffer
  gets needed .
 Also
   Recheck V$latch_children spread for cache buffers chains
   and recheck X$BH blocks on warmest children
   See if there is any session skew on the latch waits
   (any sessions more prone to waits than others)

The case described in this note is similar to ours: there, too, the most "suspicious" SQL statements turned out to contain large numbers of Nested Loop outer joins. This seems to be a common problem in Siebel applications, and the symptom the customer in the note first noticed was likewise contention on the cache buffer handles latch. The final recommendations from Oracle MOS were:

1. Increase _db_handles_cached to at least 10, later revised upward to at least 40;

2. The customer's database in this note was 10.2.0.3 on SPARC (64-bit); on that basis MOS recommended applying the one-off patch for Bug 6455161;

3. Review the execution plans of the SQL statements with the highest buffer gets (a sketch of one way to list them follows below).
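
For point 3, a minimal sketch of listing the top statements by buffer gets on a 10g instance (the row limit of 10 is arbitrary):

-- top 10 statements by total buffer gets, with gets per execution
select *
  from (select sql_id, executions, buffer_gets,
               round(buffer_gets / nullif(executions, 0)) as gets_per_exec
          from v$sqlarea
         order by buffer_gets desc)
 where rownum <= 10;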


Comments

4 responses to “latch free: a SQL performance problem caused by cache buffer handles”

  1. admin

    Bug 6455161 Higher CPU / Higher “cache buffer chains” latch gets / Higher “consistent gets” after truncate/Rebuild
    This note gives a brief overview of bug 6455161.
    The content was last updated on: 18-MAY-2010
    This bug is marked as an important issue.
    Affects:

    Product (Component) Oracle Server (Rdbms)
    Range of versions believed to be affected   Versions >= 10.1.0.2 but < 11
    Versions confirmed as being affected        10.1.0.2, 10.1.0.3, 10.1.0.4, 10.1.0.5,
                                                10.2.0.1, 10.2.0.2, 10.2.0.3
    Platforms affected                          Generic (all / most platforms affected)
    It is believed to be a regression in default behaviour thus:
    Regression introduced in 10.1.0.2

    Fixed:
    This issue is fixed in
      * 10.2.0.3 Patch 21 on Windows Platforms
      * 10.2.0.4 (Server Patch Set)
      * 11.1.0.6 (Base Release)

    Symptoms: Related To:
      * Latch Contention
      * Performance Of Query/ies Affected
      * Excessive CPU Usage
      * Truncate

    Description
    This problem is introduced in 10.1.0.2.
    The issue is that the index root block is pinned and unpinned for each key lookup
    during a Nested Loop on Indexes that have OBJECT_ID <> DATA_OBJECT_ID
    (as viewed in DBA_OBJECTS).
    Each pin/unpin operation obtains the “cache buffer chains” latch, uses extra cpu
    and increases the “CONSISTENT GETS” statistic.

    At the individual statement level queries doing Nested Loops with
    Indexes lookups may do more “cache buffer chains” latch gets and have
    a larger than expected “CONSISTENT GETS” statistic (“query” column on tkprof).

    At the instance level it is possible to get some contention on the
    “cache buffer chains” latch and some increase in the CPU utilization.

    This may have an influence over various diagnostic tools including sqlplus’
    “autotrace”, AWR reports, and Statspack.

    The DATA_OBJECT_ID for an index changes if an ALTER INDEX REBUILD or
    TRUNCATE TABLE operation has been performed on the index.

    To find a list of affected objects:

    select owner||'.'||Object_name||'.'||subobject_name
      from dba_objects
     where object_id <> data_object_id
       and object_type like 'INDEX%'
     order by owner, object_name, subobject_name;

    *** Note that this only affects use of such indexes in a NESTED LOOPS operation. ***
    *** Also Note It is a normal and expected to have indexes with object_id<>data_object_id.
    This is not a corruption. It is a normal state used internally for other purposes.
    There is no need to recreate the indexes after the fix is applied.

    Workaround
    Drop and Recreate all indexes which OBJECT_ID <> DATA_OBJECT_ID.
    This synchronizes the two values.
    (Note that it is not sufficient to REBUILD the index, it must
    be explicitly dropped then recreated)

  2. admin

    Bug 8244734 NUMA Pool misconfigured at startup (ORA-4031)
    This note gives a brief overview of bug 8244734.
    The content was last updated on: 27-MAY-2009
    Affects:

    Product (Component) Oracle Server (Rdbms)
    Range of versions believed to be affected   Versions < 11.2
    Versions confirmed as being affected        10.2.0.3, 10.2.0.4
    Platforms affected                          Generic (all / most platforms affected)

    Fixed:
    This issue is fixed in
      * 10.2.0.5 (Server Patch Set)
      * 11.2 (Future Release)

    Symptoms: Related To:
      * Error May Occur
      * Shared Pool Affected
      * ORA-4031
      * NUMA Related
      * _NUMA_POOL_SIZE
      * _ENABLE_NUMA_OPTIMIZATION

    Description
    This fix supercedes the fix from bug 6868080. ORA-4031 can occur when the NUMA
    option is used due to either an imbalance between shared pool subpools or due to
    underconfiguration of the NUMA pool at startup. For imbalances subpool 1 shows
    memory allocated to "NUMA pool X" where X is a number.

  3. admin

    Bug 6868080 ORA-4031 with NUMA
    This note gives a brief overview of bug 6868080.
    The content was last updated on: 21-APR-2010
    Affects:

    Product (Component) Oracle Server (Rdbms)
    Range of versions believed to be affected   Versions < 11.2
    Versions confirmed as being affected        10.2.0.4, 11.1.0.6
    Platforms affected                          Generic (all / most platforms affected)
    Note that this fix has been superceded by the fix in Bug:8244734

    Fixed:
    This issue is fixed in
      * 10.2.0.5 (Server Patch Set)
      * 11.1.0.7 (Server Patch Set)
      * 11.2.0.1 (Base Release)

    Symptoms: Related To:
      * Error May Occur
      * Shared Pool Affected
      * ORA-4031
      * NUMA Related
      * This fix has been superceeded
      * _NUMA_POOL_SIZE
      * _ENABLE_NUMA_OPTIMIZATION

    Description
    ORA-4031 can occur when the NUMA option is used due to imbalance between shared
    pool subpools. Subpool 1 shows memory allocated to "NUMA pool X" where X is a
    number.

    Note: This fix has been superceded by the fix for bug 8244734

  4. ricky

    A fresh angle on the analysis, nicely done, worth studying and digging into. But why not start from SQL tuning? Out of habit my first instinct would have been to attack it from the SQL tuning side.
