Diagnosing RAC Global Cache Lost Blocks (gc blocks lost)

In an Oracle RAC environment, the global cache workload statistics collected by the database software can be found in AWR (Automatic Workload Repository) reports, Statspack, or Grid Control. Among them are the global cache lost block statistics (the lost blocks show up as 'gc cr block lost' or 'gc current block lost'). A large number of lost global cache blocks (hereafter gc blocks lost) on any node of the cluster may indicate a problem with the private interconnect or inefficient packet processing. Monitoring and evaluating these global cache statistics helps keep the interconnect, the Global Cache and Global Enqueue Services (GCS/GES), and the cluster as a whole healthy. Lost global cache blocks generally point to a packet-processing problem on the network that warrants further investigation. In addition, gc blocks lost problems are often accompanied by gc cr multiblock waits (transfers of multiple contiguous blocks through the global cache).
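As a quick first check, the cumulative counts for these two wait events can be read from v$system_event; a minimal sketch, assuming SYSDBA access on a cluster node (time_waited is reported in centiseconds):

[maclean@rh2 ~]$ sqlplus -s "/ as sysdba" <<'EOF'
-- cumulative waits for the two lost-block events since instance startup
select event, total_waits, time_waited
  from v$system_event
 where event in ('gc cr block lost', 'gc current block lost');
EOF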

As things stand, the most likely culprit behind causing or aggravating gc blocks lost is a misconfigured or improperly working private interconnect. Below we look at how to track down the causes of gc blocks lost.

Although the impact of gc blocks lost on the cluster shows up mostly as degraded performance, we cannot rule out the possibility of it causing a node/instance eviction. Node membership for the Oracle Clusterware cluster and for Oracle RAC instances relies on heartbeats over the private interconnect; if network heartbeats are lost persistently, a node/instance eviction can occur. The primary and secondary symptoms that gc blocks lost may produce are listed below:

Primary symptoms:

  • 'gc cr block lost' or 'gc current block lost' shows up among an instance's Top 5 wait events

Secondary symptoms:

  • SQL traces report multiple occurrences of the gc cr request and gc current request wait events
  • Long waits on gc cr multiblock requests
  • Poor application performance and throughput
  • ifconfig or other network utilities report a large number of packet send/receive errors (see the quick check after this list)
  • netstat reports errors/retransmits/reassembly failures
  • Failure of one or more nodes
  • Abnormal CPU utilization caused by network processing
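A quick way to inspect the interface-level symptoms above (a sketch; eth1 stands in for whatever NIC carries your private interconnect):

[maclean@rh2 ~]$ netstat -i            # per-interface RX-ERR/RX-DRP/TX-ERR/TX-DRP counters
[maclean@rh2 ~]$ /sbin/ifconfig eth1 | egrep 'errors|dropped|overruns'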

Below we list the various possible causes of gc blocks lost:
1. Undersized UDP receive (rx) buffers / UDP socket buffer overflows
Description: In real-world environments, Oracle RAC global cache block processing is bursty and back-to-back; while the OS waits for CPU to become available, incoming packets must be staged in the buffers of the relevant protocol. When buffer space runs out, packets may be dropped silently, which in turn causes global cache block loss. On most UNIX platforms, the `netstat -s` or `netstat -su` command helps us spot UDP overflows (UDPInOverflows), packet receive errors, dropped frames, or packets discarded because the buffer was full.
Remedy: In most cases packet loss is attributable to an inadequate UDP receive buffer size on the receiving server, leading to buffer overflows and global cache block loss. When the UDP receive (rx) buffer size configured at the OS level is smaller than 128K, Oracle opens its sockets with a UDP rx buffer size of 128K; if the OS setting is larger than 128K, Oracle honors that value unchanged. The UDP receive buffer size Oracle uses increases with larger database block sizes (>8K), but never beyond the limit imposed by the OS. In environments where the DB_FILE_MULTIBLOCK_READ_COUNT initialization parameter is set above 4, excessive 'global cache cr request' wait timeouts caused by inadequate UDP buffer settings usually make the UDP buffer overflows, packet loss, and lost cache blocks easy to observe. Increasing the UDP buffer size is an effective way to mitigate the problem; alternatively, the DB_FILE_MULTIBLOCK_READ_COUNT parameter can be lowered.
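On Linux, the kernel caps that limit UDP socket receive buffers can be inspected and raised with sysctl; a minimal sketch follows (the 4 MB values are illustrative assumptions, not tuned recommendations; size them for your platform and workload):

# inspect the current kernel caps on socket receive buffers
sysctl net.core.rmem_default net.core.rmem_max

# raise them for the running kernel (example values only)
sysctl -w net.core.rmem_default=4194304
sysctl -w net.core.rmem_max=4194304

# append to /etc/sysctl.conf to persist across reboots
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304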
On most UNIX/Linux platforms, the following commands help us understand whether UDP socket buffers are overflowing or packets are being dropped:

[maclean@rh2 ~]$ netstat -s
Ip:
    103300 total packets received
    0 forwarded
    0 incoming packets discarded
    103296 incoming packets delivered
    105287 requests sent out
Icmp:
    101 ICMP messages received
    0 input ICMP message failed.
    ICMP input histogram:
        destination unreachable: 75
        echo replies: 26
    175 ICMP messages sent
    0 ICMP messages failed
    ICMP output histogram:
        destination unreachable: 119
        echo request: 56
IcmpMsg:
        InType0: 26
        InType3: 75
        OutType3: 119
        OutType8: 56
Tcp:
    30355 active connections openings
    73 passive connection openings
    29589 failed connection attempts
    35 connection resets received
    3 connections established
    93218 segments received
    102780 segments send out
    68 segments retransmited
    0 bad segments received.
    29644 resets sent
Udp:
    2264 packets received
    46 packets to unknown port received.
    0 packet receive errors
    2270 packets sent
TcpExt:
    17 invalid SYN cookies received
    59 ICMP packets dropped because they were out-of-window
    181 TCP sockets finished time wait in fast timer
    166 delayed acks sent
    1 delayed acks further delayed because of locked socket
    Quick ack mode was activated 3 times
    6247 packets directly queued to recvmsg prequeue.
    6427 packets directly received from backlog
    554572 packets directly received from prequeue
    4171 packets header predicted
    1039 packets header predicted and directly queued to user
    9183 acknowledgments not containing data received
    4216 predicted acknowledgments
    2 times recovered from packet loss due to SACK data
    TCPDSACKUndo: 14
    18 congestion windows recovered after partial ack
    0 TCP data loss events
    2 fast retransmits
    46 other TCP timeouts
    6 DSACKs sent for old packets
    19 DSACKs received
    26 connections reset due to unexpected data
    25 connections reset due to early user close
    9 connections aborted due to timeout
IpExt:
    InMcastPkts: 4168
    InBcastPkts: 3505

[maclean@rh2 ~]$ netstat -su
IcmpMsg:
    InType0: 26
    InType3: 75
    OutType3: 119
    OutType8: 56
Udp:
    2264 packets received
    46 packets to unknown port received.
    0 packet receive errors
    2270 packets sent
IpExt:
    InMcastPkts: 4168
    InBcastPkts: 3505

In addition, UDP packet loss usually brings increased latency, reduced bandwidth, higher CPU utilization (both kernel and user), and extra memory consumed by packet retransmission.

2. Poor interconnect performance and high CPU utilization; `netstat -s` reports packet reassembly failures
Description: Large UDP datagrams may have to be fragmented and sent as multiple frames (depending on the size of the Maximum Transmission Unit, MTU); the receiving server must then reassemble these fragments. High CPU utilization (sustained or in frequent spikes) and inadequate reassembly buffers and UDP buffer space can cause packet reassembly to fail. On the receiving server, the `netstat -s` report can show a large number of 'reassembles failed' and 'fragments dropped after timeout' entries in the IP statistics. Fragmented packets have a holding time for reassembly; packets that are not reassembled successfully may be dropped and have to be requested again. When no reassembly space is available, packets are dropped silently.
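As a quick check (eth1 is a placeholder for your interconnect NIC), the interface MTU that governs fragmentation can be read as follows:

[maclean@rh2 ~]$ /sbin/ip link show eth1 | grep -o 'mtu [0-9]*'
[maclean@rh2 ~]$ netstat -i            # the MTU column lists each interface's MTU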

`netstat -s` reports the following IP statistics:
     3104582 fragments dropped after timeout
     34550600 reassemblies required
     8961342 packets reassembled ok
     3104582 packet reassembles failed.

Remedy: Increase the size of the fragment reassembly buffers, allocating more space for reassembly; increase the holding time for fragments awaiting reassembly; and increase the UDP receive buffer to lower network latency and mitigate the impact of reassembly failures and CPU utilization on network stack processing.

On Linux, we can raise the following thresholds to enlarge the reassembly buffer space:
/proc/sys/net/ipv4/ipfrag_low_thresh (default 196608)
/proc/sys/net/ipv4/ipfrag_high_thresh (default 262144)

To change the fragment reassembly holding time, modify:
/proc/sys/net/ipv4/ipfrag_time (default 30)
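A minimal sketch of applying these on Linux (the values are illustrative assumptions, not tuned recommendations):

# enlarge the IP fragment reassembly buffer space (example values)
sysctl -w net.ipv4.ipfrag_low_thresh=393216
sysctl -w net.ipv4.ipfrag_high_thresh=524288

# hold fragments for 60 seconds instead of the default 30
sysctl -w net.ipv4.ipfrag_time=60

# equivalent via /proc, if preferred
echo 524288 > /proc/sys/net/ipv4/ipfrag_high_thresh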

The above are the two most likely causes of gc blocks lost performance problems; for more information see the original note, gc lost blocks diagnostics. Also, given the differences between UNIX platforms, you may not be able to use the commands shown above to observe UDP overflows and packet loss; in that case the OSWatcher utility can be used to collect the relevant network information.



3 responses to “Diagnosing RAC Global Cache Lost Blocks (gc blocks lost)”

  1. maclean

    select instance,
    class,
    lost,
    lost_time,
    cr_block,
    cr_block_time,
    cr_2hop,
    cr_2hop_time,
    cr_3hop,
    cr_3hop_time,
    cr_busy,
    cr_busy_time,
    cr_congested,
    cr_congested_time,
    current_block,
    current_block_time,
    current_2hop,
    current_2hop_time,
    current_3hop,
    current_3hop_time,
    current_busy,
    current_busy_time,
    current_congested,
    current_congested_time
    from gv$instance_cache_transfer
    where inst_id = USERENV('Instance')
    and instance in (select instance_number from gv$instance);

  2. hustegg^0^

    Re “depending on the size of the Medium Transmission Unit (MTU)”: that should be Maximum Transmission Unit.

  3. flu

    May I ask, does Solaris have equivalent parameters?
