The Impact of Direct Path Reads on Delayed Block Cleanout

In Oracle 11g, a serial full table scan may use direct path reads in place of the db file scattered read mechanism used in all earlier versions. Direct path read clearly has several advantages:

1. It reduces latch usage and so avoids potential latch contention.

2. The size of each physical I/O no longer depends on which blocks happen to be in the buffer cache. Imagine an 8-block extent in which blocks 1, 3, 5 and 7 are cached while blocks 2, 4, 6 and 8 are not: a conventional scan would read that extent with four separate db file sequential reads for blocks 2, 4, 6 and 8, a dreadful pattern that is often far less efficient than a single read of all 8 blocks in the extent. Oracle tries to avoid this by not keeping large-table blocks in the cache where possible (blocks read in are placed at the cold end of the LRU list), but direct path read avoids the problem entirely, reading as many physical blocks as possible in each I/O.

Of course, direct path reads also introduce some drawbacks:

1. Before a segment can be read via direct path, Oracle must perform a segment-level checkpoint (a segment checkpoint) on the object (see the sketch after this list).

2. It can lead to repeated delayed block cleanout operations (we assume you know what delayed block cleanout is).
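
Both effects are visible in the session's wait events. Below is a minimal sketch, assuming an 11g session with access to v$session_event; the segment checkpoint shows up as the 'enq: KO - fast object checkpoint' wait before a serial direct path scan:

-- Check this session's waits for the object checkpoint and the direct reads
-- ('enq: KO - fast object checkpoint' precedes a serial direct path scan)
select event, total_waits, time_waited
  from v$session_event
 where sid = sys_context('userenv', 'sid')
   and event in ('enq: KO - fast object checkpoint', 'direct path read');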

Metalink note [ID 793845.1] describes this change in the new release:

Applies to:

Oracle Server – Enterprise Edition – Version: 11.1.0.6 to 11.1.0.7
This problem can occur on any platform.

Symptoms

After migrating an 11g database from a standalone to a 4-node RAC, a noticeable
increase in 'direct path read' waits was observed at times.
Here are the cache sizes and Top 5 wait events.

Cache Sizes                       Begin        End
~~~~~~~~~~~                  ---------- ----------
               Buffer Cache:     3,232M     3,616M  Std Block Size:         8K
           Shared Pool Size:     6,736M     6,400M      Log Buffer:     8,824K
Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
DB CPU                                           13,916          42.1
direct path read                  1,637,344      13,359      8   40.4 User I/O
db file sequential read              47,132       1,111     24    3.4 User I/O
DFS lock handle                     301,278       1,028      3    3.1 Other
db file parallel read                14,724         554     38    1.7 User I/O

Changes

Migrated from a standalone database to a 4-node RAC.
Moved from Unix file system storage to ASM.

Using Automatic Shared Memory Management (ASMM).
The setting of db_cache_size in spfile/pfile is low compared to normal workload requirements.

Cause

There have been changes in 11g in the heuristics to choose between direct path reads or reads through buffer cache for serial table scans.
In 10g, serial table scans for “large” tables used to go through cache (by default) which is not the case anymore.  In 11g, this decision to read via direct path or through cache is based on the size of the table, buffer cache size and various other stats.
Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.
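
The note does not spell out those "other stats"; one input commonly cited is the hidden parameter _small_table_threshold. It is undocumented, so treat the following check as an assumption to verify on your own version; a minimal sketch, run as SYS:

-- Undocumented threshold (in blocks) below which a table counts as "small"
select x.ksppinm name, y.ksppstvl value
  from x$ksppi x, x$ksppcv y
 where x.indx = y.indx
   and x.ksppinm = '_small_table_threshold';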

Solution

When using Automatic Shared Memory Management (ASMM) with the buffer cache low limit set well below the normal workload requirements, and usually just after startup, 11g might choose to do serial direct path read scans for large tables that do not fit in the SGA. When ASMM later grows the buffer cache due to increased demand, 11g might stop doing serial direct path read scans for those same large tables. If you want to prevent this from happening, note the buffer cache and shared pool requirements for a normal workload and set the low limits of the buffer cache and shared pool in the spfile/pfile close to these normal workload values:
db_cache_size
shared_pool_size
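
For example, given the workload in the AWR snippet above, the low limits might be pinned roughly like this (the values are illustrative only; substitute your own normal-workload numbers):

-- Pin ASMM low limits near the normal-workload values (illustrative numbers)
alter system set db_cache_size = 3200M scope=both;
alter system set shared_pool_size = 6400M scope=both;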

Let's now test the impact of direct path reads on delayed block cleanout:

SQL> create table tv as select rownum rn,rpad('A',600,'Z') rp from dual
2       connect by level <=300000;

Table created.
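
Before scanning, it is worth a quick sanity check that the segment is large enough to qualify for serial direct path reads under the heuristics above (the exact cutoff varies with version and cache size, so this is only a rough check):

-- Segment size in blocks; compare against the small-table threshold
select blocks, bytes/1024/1024 mb
  from user_segments
 where segment_name = 'TV';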

Open a new session, session A:

SQL> set linesize 200 pagesize 1400;
SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                  27281
25 physical reads                                                         27273
25 physical reads direct                                                  27273
25 cleanouts only - consistent read gets                                      0

-- Clearly the query used direct path reads (physical reads direct = physical reads)

SQL> update tv set rn=rn+1;                        -- bulk-update every row

SQL> alter system flush buffer_cache;
-- Flush the buffer cache before committing: the modified blocks leave the cache with their ITLs still open, so the commit cannot clean them out, which sets up the delayed block cleanout scenario; then commit

System altered.

SQL> commit;

Commit complete.

Open another new session, session B:

SQL> set linesize 200 pagesize 1400;
SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                  54554
25 physical reads                                                         27273
25 physical reads direct                                                  27273
25 redo size                                                                  0
25 cleanouts only - consistent read gets                                  27273
-- With direct path read the query performed delayed block cleanout, but generated no redo

SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                109104
25 physical reads                                                        54546
25 physical reads direct                                                 54546
25 redo size                                                                 0
25 cleanouts only - consistent read gets                                 54546

The second query again used direct path reads and performed the same number of delayed block cleanout operations, and again generated no redo. Evidently the cleanout performed under direct path read applies only to the block images read from disk into the PGA; it never touches the actual blocks on disk, and therefore produces no redo.
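
To verify that the on-disk blocks still carry open ITLs after all these direct path reads, one option is to dump a block of TV and inspect the ITL entries in the resulting trace file. A sketch, where the file and block numbers returned by the query must be substituted into the dump command by hand:

-- Find one block of TV, then dump it and look at its ITL entries in the trace
select dbms_rowid.rowid_relative_fno(rowid) file#,
       dbms_rowid.rowid_block_number(rowid) block#
  from tv
 where rownum = 1;

alter system dump datafile 4 block 519;  -- substitute the file#/block# returned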

Next we switch to a conventional serial full table scan; setting event 10949 prevents the use of direct path reads (this event is described in a separate note).

SQL> ALTER SESSION SET EVENTS '10949 TRACE NAME CONTEXT FOREVER';

Session altered.
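
As an aside, the undocumented parameter _serial_direct_read is often cited as controlling the same behavior; since it is undocumented, treat this as an assumption to verify on your version, and prefer event 10949 as shown above:

-- Undocumented alternative to event 10949 (assumption; verify on your version)
alter session set "_serial_direct_read" = false;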

SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                163662
25 physical reads                                                        81819
25 physical reads direct                                                 54546
25 redo size                                                           1966560
25 cleanouts only - consistent read gets                                 81819

SQL> select count(*) from tv;

COUNT(*)
----------
300000

SQL> select vm.sid, vs.name, vm.value
2    from v$mystat vm, v$sysstat vs
3   where vm.statistic# = vs.statistic#
4     and vs.name in ('cleanouts only - consistent read gets',
5                     'session logical reads',
6                     'physical reads',
7                     'physical reads direct','redo size');

SID NAME                                                                  VALUE
---------- ---------------------------------------------------------------- ----------
25 session logical reads                                                190947
25 physical reads                                                        95673
25 physical reads direct                                                 54546
25 redo size                                                           1966560
25 cleanouts only - consistent read gets                                 81819

The first conventional full table scan performed the same amount of delayed block cleanout as the direct path reads did, but this time it generated a large amount of redo; this is the classic delayed block cleanout scenario. The subsequent read then no longer needs to clean any blocks or generate redo: we are reading a "clean" table segment.

These tests show how the direct path read mechanism, used much more widely in 11g, affects segments that still require delayed block cleanout. Because nothing actually modifies the blocks, the cleanout must be repeated on every direct path read of the segment, yet no dirty blocks are produced; there is therefore nothing to write back, and no redo is generated. The extra load shows up mostly as CPU time.



Comments

2 responses to “The Impact of Direct Path Reads on Delayed Block Cleanout”

  1. admin

    KEEP BUFFER POOL Does Not Work for Large Objects on 11g
    Applies to:

    Oracle Server – Enterprise Edition – Version: 11.1.0.7 and later [Release: 11.1 and later ]
    Information in this document applies to any platform.
    Symptoms

    While doing a full table scan, the buffer cache KEEP pool is not used to read the blocks of a (KEEP) table whose size is >10% of DB_KEEP_CACHE_SIZE.

    These blocks are always read through DIRECT READ, which can cause performance problems due to the non-caching / re-reading of KEEP object blocks.

    Changes

    Upgraded to 11g.
    Cause

    Problem is caused by Bug 8897574 which will be fixed in 12.1:
    Abstract: Keep Buffer Pool Does Not Work

    Due to this bug, tables with size >10% of cache size, were being treated as ‘large tables’ for their reads and this resulted in execution of a new SERIAL_DIRECT_READ path in 11g.
    Solution

    Check if a patch exists for your platform and version; otherwise contact Support for assistance.

    With the bug fix applied, any object in the KEEP buffer pool whose size is less than DB_KEEP_CACHE_SIZE is considered a small or medium sized object.
    This will cache the read blocks and avoid subsequent direct reads for these objects.

    As per the development update in the bug, to work around this issue we can set event 10949 at level 1.

  2. Charlie 木匠 (@mujiang)

    A question: when delayed block cleanout happens under direct path read, do the cleanouts write the dirty blocks back into the data files?
    If not, then direct path read is reading the pre-COMMIT old data from the data files, without guaranteeing query-level read consistency.

    Also, in your first test, why did every direct path read perform delayed block cleanout?
    Writing the dirty blocks once should be enough; why keep writing them over and over?

    Forgive my ignorance.

    Thanks,
    木匠
