Script: Listing Oracle's Hourly Redo Log Generation

The following script estimates the amount of redo generated by an Oracle database per hour over the recent past. Because the estimate is derived from the timing and size of the generated archived logs, the figures are approximations and should be treated as indicative only:

WITH times AS
 (SELECT /*+ MATERIALIZE */
         hour_end_time
    FROM (SELECT (TRUNC(SYSDATE, 'HH') + (2 / 24)) - (ROWNUM / 24) hour_end_time
            FROM DUAL
          CONNECT BY ROWNUM <= (1 * 24) + 3),
         v$database
   WHERE log_mode = 'ARCHIVELOG')
SELECT hour_end_time, NVL(ROUND(SUM(size_mb), 3), 0) size_mb, i.instance_name
  FROM (SELECT hour_end_time,
               CASE WHEN (hour_end_time - (1 / 24)) > lag_next_time
                    THEN (next_time + (1 / 24) - hour_end_time) * (size_mb / (next_time - lag_next_time))
                    ELSE 0 END
             + CASE WHEN hour_end_time < lead_next_time
                    THEN (hour_end_time - next_time) * (lead_size_mb / (lead_next_time - next_time))
                    ELSE 0 END
             + CASE WHEN lag_next_time > (hour_end_time - (1 / 24)) THEN size_mb ELSE 0 END
             + CASE WHEN next_time IS NULL
                    THEN (1 / 24) * LAST_VALUE(CASE WHEN next_time IS NOT NULL AND lag_next_time IS NULL
                                                    THEN 0
                                                    ELSE (size_mb / (next_time - lag_next_time)) END
                                               IGNORE NULLS)
                                    OVER (ORDER BY hour_end_time DESC, next_time DESC)
                    ELSE 0 END size_mb
          FROM (SELECT t.hour_end_time,
                       arc.next_time,
                       arc.lag_next_time,
                       LEAD(arc.next_time) OVER (ORDER BY arc.next_time ASC) lead_next_time,
                       arc.size_mb,
                       LEAD(arc.size_mb) OVER (ORDER BY arc.next_time ASC) lead_size_mb
                  FROM times t,
                       (SELECT next_time,
                               size_mb,
                               LAG(next_time) OVER (ORDER BY next_time) lag_next_time
                          FROM (SELECT next_time, SUM(size_mb) size_mb
                                  FROM (SELECT DISTINCT a.sequence#,
                                                        a.next_time,
                                                        ROUND(a.blocks * a.block_size / 1024 / 1024) size_mb
                                          FROM v$archived_log a,
                                               (SELECT /*+ no_merge */
                                                       CASE WHEN TO_NUMBER(pt.VALUE) = 0 THEN 1
                                                            ELSE TO_NUMBER(pt.VALUE) END VALUE
                                                  FROM v$parameter pt
                                                 WHERE pt.name = 'thread') pt
                                         WHERE a.next_time > SYSDATE - 3
                                           AND a.thread# = pt.VALUE
                                           AND ROUND(a.blocks * a.block_size / 1024 / 1024) > 0)
                                 GROUP BY next_time)) arc
                 WHERE t.hour_end_time = (TRUNC(arc.next_time(+), 'HH') + (1 / 24)))
         WHERE hour_end_time > TRUNC(SYSDATE, 'HH') - 1 - (1 / 24)),
       v$instance i
 WHERE hour_end_time <= TRUNC(SYSDATE, 'HH')
 GROUP BY hour_end_time, i.instance_name
 ORDER BY hour_end_time
 /
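The core of the query is the set of CASE expressions that prorate each archived log's size across the hour boundaries it straddles: a log's volume is assumed to accrue linearly between the previous log's NEXT_TIME and its own NEXT_TIME, and each hour bucket receives its proportional share. A minimal Python sketch of that proration idea (function name and data shape are illustrative, not part of the script):

```python
from datetime import datetime, timedelta

def hourly_redo(logs):
    """logs: list of (next_time, size_mb) tuples sorted by next_time.
    Returns {hour_start: mb}. Each log's size is spread linearly over
    the interval (previous log's next_time, its own next_time]; the
    first log is skipped because its generation interval is unknown
    (the LAG value is NULL, as in the query)."""
    buckets = {}
    for (prev, _), (cur, size) in zip(logs, logs[1:]):
        span = (cur - prev).total_seconds()
        t = prev
        while t < cur:
            hour_start = t.replace(minute=0, second=0, microsecond=0)
            seg_end = min(cur, hour_start + timedelta(hours=1))
            # fraction of the log's lifetime that falls inside this hour
            frac = (seg_end - t).total_seconds() / span
            buckets[hour_start] = buckets.get(hour_start, 0.0) + size * frac
            t = seg_end
    return buckets
```

For example, an 8 MB log completed at 02:30 whose predecessor completed at 00:30 contributes 2 MB to the 00:00 hour, 4 MB to 01:00, and 2 MB to 02:00.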

Sample Output:

HOUR_END_TIME      SIZE_MB INSTANCE_NAME
--------------- ---------- ----------------
2011/9/29 1:00        2.92 VPROD1
2011/9/29 2:00        2.92 VPROD1
2011/9/29 3:00        2.92 VPROD1
2011/9/29 4:00        2.92 VPROD1
2011/9/29 5:00        2.92 VPROD1
2011/9/29 6:00        2.92 VPROD1
2011/9/29 7:00        2.92 VPROD1
2011/9/29 8:00        2.92 VPROD1
2011/9/29 9:00        2.92 VPROD1
2011/9/29 10:00       2.92 VPROD1
2011/9/29 11:00       2.92 VPROD1
2011/9/29 12:00      3.537 VPROD1
2011/9/29 13:00       3.55 VPROD1
2011/9/29 14:00       3.55 VPROD1
2011/9/29 15:00       3.55 VPROD1
2011/9/29 16:00       3.55 VPROD1
2011/9/29 17:00       3.55 VPROD1
2011/9/29 18:00       3.55 VPROD1
2011/9/29 19:00       3.55 VPROD1
2011/9/29 20:00       3.55 VPROD1


Comments

  1. Another approach: collect the redo size from each AWR snapshot:

     SELECT to_date(end_time, 'yyyy-mm-dd hh24:mi:ss') AS day
           ,trunc(redo_size / 1024 / 60 / 60) "redo_size_KB/s"
           -- ,500000 baseline
       FROM (SELECT to_char(c.end_interval_time, 'yyyy-mm-dd hh24:mi:ss') AS end_time
                   ,decode(sign(a.value - b.value), -1, 0, a.value - b.value) AS redo_size
               FROM sys.wrh$_sysstat a
                   ,sys.wrh$_sysstat b
                   ,sys.wrm$_snapshot c
              WHERE a.stat_id = 1236385760
                AND b.stat_id = 1236385760
                AND a.snap_id = b.snap_id + 1
                AND a.instance_number = 1
                AND b.instance_number = 1
                AND c.instance_number = 1
                AND a.snap_id = c.snap_id
              ORDER BY a.snap_id) x
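     The AWR query works because 'redo size' is a cumulative counter: the per-interval volume is the difference between consecutive snapshots, and the DECODE(SIGN(...)) clamps a negative delta (counter reset after an instance restart) to zero. A minimal sketch of that delta logic (function name and data shape are illustrative):

     ```python
     def redo_per_snapshot(samples):
         """samples: list of (snap_end_time, cumulative_redo_bytes),
         ordered by snapshot. Returns per-interval redo volumes, with
         negative deltas (counter resets) clamped to zero."""
         out = []
         for (_, prev), (end, cur) in zip(samples, samples[1:]):
             out.append((end, max(cur - prev, 0)))
         return out
     ```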

  2. chengwill says:

    Both methods carry too much estimation error. It is closer to reality to switch the log manually first and then calculate from the file sizes at the operating-system level.

