An Introduction to RMAN Memory Usage: PGA and SGA

A backup to disk uses PGA memory as its backup buffers; this memory is allocated from the memory of the channel process. If the operating system is not configured with native asynchronous I/O, you can use the DBWR_IO_SLAVES parameter to have I/O slaves fill the output buffers in memory. If DBWR_IO_SLAVES is set to any nonzero value, RMAN automatically allocates four I/O slaves to coordinate loading data blocks into the buffer memory. To do this, RMAN must use a shared memory area, so the memory buffers for a disk backup are pushed into the shared pool, or into the large pool if one exists.

If tape I/O slaves are not used, the memory for tape output buffers is allocated in the PGA. Tape I/O slaves are enabled by setting the init.ora parameter BACKUP_TAPE_IO_SLAVES=TRUE; when necessary, this parameter can also be set dynamically in the server parameter file (spfile). With BACKUP_TAPE_IO_SLAVES set to TRUE, RMAN creates one slave process per channel to assist with the backup work. To coordinate this, RMAN pushes the memory allocation into the SGA.
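When running with an spfile, a sketch of setting this dynamically (BACKUP_TAPE_IO_SLAVES is deferred-modifiable, so the change applies to sessions started afterwards; verify against your version's documentation):

```sql
-- Enable one tape I/O slave per channel (takes effect for new sessions)
ALTER SYSTEM SET backup_tape_io_slaves = TRUE SCOPE=BOTH DEFERRED;
```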

The Large Pool in the Oracle SGA

The large pool is a specific area of the SGA within the Oracle memory structure. It is configured with the LARGE_POOL_SIZE parameter in the init.ora or SPFILE, whose value is specified as a number of bytes. The large pool serves certain allocations that need shared memory but are not tied to the ordinary operations of the shared pool. Its occupants are mainly limited to RMAN memory buffers (when I/O slaves are used) and the shared server. The large pool is also used for Java connections, and if PARALLEL_AUTOMATIC_TUNING (deprecated in 10g) is set to TRUE, it additionally holds the parallel query slaves' buffers.

In practice, a large pool is not strictly required. Without one, everything that would occupy the large pool simply uses space in the shared pool instead. That is not the end of the world, but it is better to carve out a separate space of the RMAN buffers' own. That way, SQL and PL/SQL parsing and the other ordinary operations of the shared pool are not affected by RMAN backups, and vice versa; it also makes tuning the Oracle memory used by RMAN simpler and more direct.

If either I/O slave option is configured and no large pool has been set up, the memory is allocated from the shared pool area of the SGA. If you want to use I/O slaves but have not configured a large pool, we recommend creating one, sized according to the total number of channels allocated for backups (plus 1 MB for overhead).
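As a rough sketch of that guideline, assuming four backup channels at the 16 MB-per-channel figure that appears in the sizing formula later in this post, plus 1 MB of overhead:

```sql
-- 4 channels * 16 MB + 1 MB overhead = 65 MB (illustrative sizing only)
ALTER SYSTEM SET large_pool_size = 65M SCOPE=SPFILE;
```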

If you back up to tape, you need a Media Management Server product. If the Media Manager runs on the same system as the target database, the tape subsystem will demand additional system resources; be sure to take this into account when tuning backups.



4 responses to "An Introduction to RMAN Memory Usage: PGA and SGA"

  1. admin

    Advice On How To Improve RMAN Performance
    Applies to:

    Oracle Server – Enterprise Edition – Version: 9.2.0.1 to 10.2.0.1 – Release: 9.2 to 10.2
    Information in this document applies to any platform.
    Goal

    How to boost RMAN performance?

    Solution

    Backup and recovery tuning requires a good understanding of the hardware and
    software in use, such as disk speed, I/O, buffering, and any MML used for
    network backup.

    Many factors can affect backup performance, and finding the solution to a slow
    backup is often a process of trial and error. To get the best performance for
    a backup, follow these suggested steps:

    Step 1: Remove RATE Parameters from Configured and Allocated Channels
    =========================================================
    The RATE parameter on a channel is intended to reduce, rather than increase, backup throughput, so
    that more disk bandwidth is available for other database operations.

    If your backup is not streaming to tape, then make sure that the RATE parameter is not set on the
    ALLOCATE CHANNEL or CONFIGURE CHANNEL commands.
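    For example, a previously configured RATE can be dropped by resetting the channel configuration to its defaults (a sketch; note that CLEAR removes all options on the configured channel, not just RATE):

```sql
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt CLEAR;   -- reset, dropping any RATE
RMAN> SHOW CHANNEL;                              -- verify no RATE remains
```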

    Step 2 : Consider Using I/O Slaves
    ========================

    – If You Use Synchronous Disk I/O, Set DBWR_IO_SLAVES

    If and only if your disk does not support asynchronous I/O, then try setting the DBWR_IO_SLAVES initialization parameter to a nonzero value. Any nonzero value for DBWR_IO_SLAVES causes a fixed number (four) of disk I/O slaves to be used for backup and restore, which simulates asynchronous I/O.
    If I/O slaves are used, I/O buffers are obtained from the SGA. The large pool is used, if configured. Otherwise, the shared pool is used.

    Note: By setting DBWR_IO_SLAVES, the database writer processes will use slaves as well.
    You may need to increase the value of the PROCESSES initialization parameter.
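    A minimal sketch of both settings; DBWR_IO_SLAVES is a static parameter, so an spfile change and an instance restart are required, and the PROCESSES value shown is purely illustrative:

```sql
ALTER SYSTEM SET dbwr_io_slaves = 4 SCOPE=SPFILE;  -- any nonzero value starts 4 slaves
ALTER SYSTEM SET processes = 300 SCOPE=SPFILE;     -- headroom for the slave processes
-- restart the instance for both changes to take effect
```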

    – Use Tape slaves To keep the tape streaming (continually moving) by simulating
    asynchronous I/O

    Set the “init.ora” parameter:
    BACKUP_TAPE_IO_SLAVES = true

    This causes one tape I/O slave to be assigned to each channel server process.

    In 8i/9i/10g, if the DUPLEX option is specified, then tape I/O slaves must be
    enabled. In this case, for DUPLEX=N, there are N tape slaves per channel. These
    N slaves all operate on the same four output buffers. Consequently, a buffer is
    not freed until all slaves have finished writing to tape.

    Step 3: If You Fail to Allocate Shared Memory, Set LARGE_POOL_SIZE
    =========================================================
    Set this initialization parameter if the database reports an error in the alert.log stating that it does not have enough memory and that it will not start I/O slaves.

    The message should resemble the following:
    ksfqxcre: failure to allocate shared memory means sync I/O will be used whenever async I/O to file not supported natively

    When attempting to get shared buffers for I/O slaves, the database does the following:

    * If LARGE_POOL_SIZE is set, then the database attempts to get memory from the large pool. If this value is not large enough, then an error is recorded in the alert log, the database does not try to get buffers from the shared pool, and asynchronous I/O is not used.
    * If LARGE_POOL_SIZE is not set, then the database attempts to get memory from the shared pool.
    * If the database cannot get enough memory, then it obtains I/O buffer memory from the PGA and writes a message to the alert.log file indicating that synchronous I/O is used for this backup.

    The memory from the large pool is used for many features, including the shared server (formerly called multi-threaded server), parallel query, and RMAN I/O slave buffers. Configuring the large pool prevents RMAN from competing with other subsystems for the same memory.

    Requests for contiguous memory allocations from the shared pool are usually small (under 5 KB) in size. However, it is possible that a request for a large contiguous memory allocation can either fail or require significant memory housekeeping to release the required amount of contiguous memory. Although the shared pool may be unable to satisfy this memory request, the large pool is able to do so. The large pool does not have a least recently used (LRU) list; the database does not attempt to age memory out of the large pool.

    Use the LARGE_POOL_SIZE initialization parameter to configure the large pool. To see in which pool (shared pool or large pool) the memory for an object resides, query V$SGASTAT.POOL.
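    For example, a query along these lines shows where the buffers landed (the "KSFQ buffers" component name is how RMAN buffer memory is commonly reported in V$SGASTAT, but verify on your version):

```sql
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE name LIKE '%KSFQ%'
 ORDER BY bytes DESC;
```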

    The formula for setting LARGE_POOL_SIZE is as follows:

    LARGE_POOL_SIZE = number_of_allocated_channels *
    (16 MB + ( 4 * size_of_tape_buffer ) )
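    For example, with four allocated tape channels and the default 256 KB tape buffer, the formula works out to:

```
LARGE_POOL_SIZE = 4 * (16 MB + (4 * 256 KB))
                = 4 * 17 MB
                = 68 MB
```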

    Step 4: Tune RMAN Tape Streaming Performance Bottlenecks
    ================================================
    There are several tasks you can perform to identify and remedy bottlenecks that affect RMAN’s performance on tape backups:
    Using BACKUP… VALIDATE To Distinguish Between Tape and Disk Bottlenecks

    One reliable way to determine whether the tape streaming or disk I/O is the bottleneck in a given backup job is to compare the time required to run backup tasks with the time required to run BACKUP VALIDATE of the same tasks.
    BACKUP VALIDATE of a backup to tape performs the same disk reads as a real backup but performs no tape I/O. If the time required for the BACKUP VALIDATE to tape is significantly less than the time required for a real backup to tape, then writing to tape is the likely bottleneck.
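    For example, to time the disk-read side of a tape backup without any tape I/O (a sketch):

```sql
RMAN> run {
  allocate channel t1 type 'SBT_TAPE';
  backup validate database;   -- same disk reads as a real backup, no tape writes
}
```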

    Using Multiplexing to Improve Tape Streaming with Disk Bottlenecks

    In some situations when performing a backup to tape, RMAN may not be able to send data blocks to the tape drive fast enough to support streaming.

    For example, during an incremental backup, RMAN only backs up blocks changed since a previous datafile backup as part of the same strategy. If you do not turn on change tracking, RMAN must scan entire datafiles for changed blocks, and fill output buffers as it finds such blocks. If there are not many changed blocks, RMAN may not fill output buffers fast enough to keep the tape drive streaming.

    You can improve performance by increasing the degree of multiplexing used for backing up. This increases the rate at which RMAN fills tape buffers, which makes it more likely that buffers are sent to the media manager fast enough to maintain streaming.
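    The multiplexing level is governed by the number of files read concurrently per channel, i.e. the smaller of FILESPERSET and the channel's MAXOPENFILES; a sketch with illustrative values:

```sql
-- Allow up to 16 input files to be read concurrently per channel
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt MAXOPENFILES 16;
RMAN> BACKUP DATABASE FILESPERSET 16;
```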

    Using Incremental Backups to Improve Backup Performance With Tape Bottlenecks

    If writing to tape is the source of a bottleneck for your backups, consider using incremental backups as part of your backup strategy. Incremental level 1 backups write only the changed blocks from datafiles to tape, so that any bottleneck on writing to tape has less impact on your overall backup strategy. In particular, if tape drives are not locally attached to the node running the database being backed up, then incremental backups can be faster.
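    A minimal sketch of such a strategy:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   -- periodic baseline
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   -- changed blocks only
```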

    Step 5: Query V$ Views to Identify Bottlenecks
    =====================================
    If none of the previous steps improves backup performance, then try to determine the exact source of the bottleneck. Use the V$BACKUP_SYNC_IO and V$BACKUP_ASYNC_IO views to determine the source of backup or restore bottlenecks and to see detailed progress of backup jobs.

    V$BACKUP_SYNC_IO contains rows when the I/O is synchronous to the process (or thread on some platforms) performing the backup.
    V$BACKUP_ASYNC_IO contains rows when the I/O is asynchronous.
    Asynchronous I/O is obtained either with I/O processes or because it is supported by the underlying operating system.

    To determine whether your tape is streaming when the I/O is synchronous, query the EFFECTIVE_BYTES_PER_SECOND column in the V$BACKUP_SYNC_IO or V$BACKUP_ASYNC_IO view.
    If EFFECTIVE_BYTES_PER_SECOND is less than the raw capacity of the hardware, then the tape is not streaming. If EFFECTIVE_BYTES_PER_SECOND is greater than the raw capacity of the hardware, the tape may or may not be streaming.
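    For example (a sketch against the asynchronous view; the synchronous view exposes the same column):

```sql
SELECT device_type, type, filename, effective_bytes_per_second
  FROM v$backup_async_io
 WHERE type = 'OUTPUT';
```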

    Compression may cause the EFFECTIVE_BYTES_PER_SECOND to be greater than the speed of real I/O.
    Identifying Bottlenecks with Synchronous I/O

    With synchronous I/O, it is difficult to identify specific bottlenecks because all synchronous I/O is a bottleneck to the process. The only way to tune synchronous I/O is to compare the rate (in bytes/second) with the device’s maximum throughput rate. If the rate is lower than the rate that the device specifies, then consider tuning this aspect of the backup and restore process. The DISCRETE_BYTES_PER_SECOND column in the V$BACKUP_SYNC_IO view displays the I/O rate. If you see data in V$BACKUP_SYNC_IO, then the problem is that you have not enabled asynchronous I/O or you are not using disk I/O slaves.
    Identifying Bottlenecks with Asynchronous I/O

    Long waits are the number of times the backup or restore process told the operating system to wait until an I/O was complete. Short waits are the number of times the backup or restore process made an operating system call to poll for I/O completion in a nonblocking mode. Ready indicates the number of times the I/O was already complete and ready for use, so no operating system call was needed to poll for completion.

    The simplest way to identify the bottleneck is to query V$BACKUP_ASYNC_IO for the datafile that has the largest ratio for LONG_WAITS divided by IO_COUNT.
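    A sketch of that query:

```sql
SELECT filename, long_waits, io_count,
       long_waits / io_count AS wait_ratio
  FROM v$backup_async_io
 WHERE io_count > 0
 ORDER BY wait_ratio DESC;
```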

    Note:
    If you have synchronous I/O but you have set BACKUP_DISK_IO_SLAVES, then the I/O will be displayed in V$BACKUP_ASYNC_IO.

    Also the following is a recommended for improving RMAN performance on AIX5L based system..
    ===================================================================
    IBM suggests the following AIX-related advice:

    1. Set AIXTHREAD_SCOPE=S in /etc/environment.

    2. ” ioo -o maxpgahead=256 ” to set maxpgahead parameter
    Initial settings were : Min/Maxpgahead 2 16

    3. ” vmo -o minfree=360 -o maxfree=1128 ” to set minfree and maxfree…
    Initial settings were : Min/Maxfree 240 256

    These settings yielded 15-20% performance improvements for RMAN backups on AIX 5L based systems.

  2. admin

    How to Calculate Rman Memory Allocation In Large Pool
    Applies to:

    Oracle Server – Enterprise Edition – Version: 9.2.0.1 to 9.2.0.1 – Release: 9.2 to 9.2
    Oracle Server – Enterprise Edition – Version: 9.2.0.1 to 9.2.0.8 [Release: 9.2 to 9.2]
    Information in this document applies to any platform.
    Goal

    How to Calculate the Memory Required by RMAN in Large Pool
    Solution

    Contrary to the rman documentation at 9i, the amount of memory required by RMAN to be set aside
    in LARGE POOL when asynchronous io is simulated with the use of slaves is NOT 16Mb per channel.

    For a disk backup:

    Each channel is allocated 4*1Mb (output buffers) plus 16*1Mb (input buffers)

    For a tape backup:

    Each channel is allocated 4 * size_of_tape_buffer (output buffers) plus 16 * 1Mb (input buffers)

    Additionally, when the controlfile is included in the backup the memory allocated jumps by a
    further 16 Mb (for the channel doing the controlfile backup) to 32Mb plus the output buffers
    (tape or disk).

    Bug 4513611 (still with Development) has been raised to clarify the documentation and
    confirm if this is expected behaviour.

    In the meantime, to cater for all circumstances, large pool should be set as follows (bearing in
    mind that the controlfile can only be backed up by one channel) :

    For disk channels

    LARGE_POOL_SIZE = (No:channels * (16 + 4)) +16 Mb

    For tape channels:

    LARGE_POOL_SIZE = (No:channels * (16 + 4 * size_of_tape_buffer)) + 16 Mb

  3. admin

    PURPOSE

    The purpose of this note is to show how RMAN makes use of memory buffers
    for backup/restore operations, and also how the use of i/o slaves can
    affect this.

    SCOPE & APPLICATION

    This note is intended for DBAs and Support Personnel.

    RMAN I/O Slaves and Memory Usage
    ================================

    Contents:

    1.0 How Does RMAN make use of memory buffers?
    2.0 Size of Input/Output Buffers
    3.0 Why Use I/O Slaves?
    4.0 Configuring I/O Slaves

    1.0 How Does RMAN make use of memory buffers?
    =============================================

    For each backup/restore operation, every server session (ie, RMAN channel)
    allocates

    a. 4 input buffers for every disk file
    b. 4 output buffers for every backup piece

    memory(input) = #buffers * #files * buffersize
    = 4 * #files * buffersize

    #files = total number of files concurrently open

    To reduce the amount of memory used by RMAN, limit the number of concurrently
    open files per channel with MAXOPENFILES. For example, before setting
    MAXOPENFILES:
    memory(input) = 4 * 100 (files) * 8192 * 64
    After setting MAXOPENFILES = 4:
    memory(input) = 4 * 4 (files) * 8192 * 64

    This can be illustrated by the following:

    RMAN> run {
    allocate channel c1 type 'SBT_TAPE';
    backup datafile 1,2;
    }


    The server process reads data from the disk file into one of the input buffers.
    A given buffer is dedicated to a file whilst a server process is operating on
    that file. When one buffer fills up, the server process writes to one of the
    other three. The buffers are used in a circular fashion.

    The input buffers will contain blocks that do not need to be backed up, as well
    as those that do.

    A ‘memory copy’ routine is used to copy the required data from an input to an
    output buffer. This is where block corruption is checked (ie, validate header,
    compute checksums if enabled).

    2.0 Size of Input/Output Buffers
    ================================

    a. input buffers
    —————-

    NOTE : DB_FILE_DIRECT_IO_COUNT is not available in Oracle9i onwards.
    In Oracle9i, it is replaced by a hidden _DB_FILE_DIRECT_IO_COUNT which
    governs the size of direct I/Os in BYTES (not blocks). The default is
    1Mb but will be sized down if the max_io_size of the system is smaller.

    The input buffer size is:
    buffersize = db_block_size * db_file_direct_io_count

    As there are 4 input buffers, the total input buffer memory use per channel is:
    memory(input) = #buffers * #files * buffersize
    = 4 * #files * buffersize

    For example, if 2 channels are used, and each of these channels backs up 3
    files, then for each channel

    memory(input) = 4 * 3 * db_block_size * db_file_direct_io_count
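    With an 8 KB block size and a direct I/O count of 64, for instance, this works out to:

```
buffersize    = 8192 * 64      = 512 KB
memory(input) = 4 * 3 * 512 KB = 6 MB per channel
```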

    b. output buffers
    —————–

    For disk channels, the output buffer size is:
    buffersize = db_block_size * db_file_direct_io_count

    For SBT_TAPE channels, the output buffer size in Oracle8/8i is O/S-dependent
    (on Solaris it defaults to 64k); on 9i/10g it defaults to 256k on all
    platforms. The BLKSIZE argument to 'allocate channel ...' can be used to
    override the default value.

    As there are 4 output buffers,
    memory(output) = #buffers * buffersize
    = 4 * buffersize

    c. Allocation of Memory
    ———————–

    This memory is allocated from the channel server process PGA, unless i/o slaves
    are used. I/O slave memory is allocated from the SGA in order for the memory to
    be shared between the I/O slave and the channel server process. In this case,
    Oracle recommends the ‘large pool’ feature is used, i.e. Set the “init.ora”
    parameter to:

    LARGE_POOL_SIZE = <size>

    where <size> is the size of the large pool, calculated from the above.

    If the I/O slave cannot acquire the required memory from the SGA, then an
    ORA-04031 error is asserted (see “alert.log”), and the operation continues
    synchronously by allocating memory from the channel server’s PGA.

    3.0 Why Use I/O Slaves?
    =======================

    For optimal performance during backup/restore operations, the goal should
    be to keep the tape streaming i.e. continually moving. Stopping and starting
    tapes are expensive operations. Additionally, potential tape stretching will
    lower the life span of the tape.

    I/O slaves can be used to provide such a performance enhancement by simulating
    asynchronous I/O. There are two types of I/O slaves; disk slaves and
    tape slaves.

    By default, all I/O to tape is synchronous. This means that the channel server
    process is blocked from doing any work while waiting for a tape to complete a
    write. Tape i/o slaves allow the channel server process to continue to fill and
    process buffers whilst the tape write is completing.

    It is also important to quickly fill the input buffers with data. On platforms
    that do not support asynchronous I/O, the channel server process can be
    blocked on a file read, thus preventing it from processing the buffers.
    Disk I/O slaves can be used to asynchronously read from files,
    thus enabling channel server process to continue to process the buffers.

    This is especially important during incremental backups, or backups of ’empty’
    files, where the number of modified buffers is sufficiently low that the tape
    is writing faster than the output buffers are being filled.

    4.0 Configuring I/O Slaves
    ==========================

    a. Disk Slaves
    ————–

    For Oracle 8.0, set the “init.ora” parameter

    BACKUP_DISK_IO_SLAVES = <n>

    where <n> is the number of disk i/o slaves to start.

    Oracle recommends that no more than 4 disk slaves are started. In this case,
    extra channels should be considered.

    For Oracle 8i/9i/10g, set the “init.ora” parameter

    DBWR_IO_SLAVES > 0

    This causes 4 disk i/o slaves to be started.

    Note that every channel server process doing a backup/restore will be assigned
    this number of disk i/o slaves.

    b. Tape Slaves
    ————–

    Set the “init.ora” parameter

    BACKUP_TAPE_IO_SLAVES = true

    This causes one tape I/O slave to be assigned to each channel server process.

    In 8i/9i/10g, if the DUPLEX option is specified, then tape I/O slaves must be
    enabled. In this case, for DUPLEX=N, there are N tape slaves per channel. These
    N slaves all operate on the same four output buffers. Consequently, a buffer is
    not freed until all slaves have finished writing to tape.

    c. init.ora
    ———–

    Each I/O slave is an Oracle server process. The "init.ora" parameters
    PROCESSES and SESSIONS need to be set accordingly.

  4. admin

    RMAN Myths Dispelled: Common RMAN Performance Misconceptions
    PURPOSE
    ——-

    This document will help dispel some of the common misconceptions
    related to proper usage of Recovery Manager (RMAN).

    SCOPE & APPLICATION
    ——————-

    All RMAN users who are already familiar with basic RMAN functionality
    will benefit from this note.

    RMAN: Common Myths Dispelled
    —————————–

    The following document is meant as a reference point for common
    misconceptions about Oracle’s Recovery Manager (RMAN). RMAN is one of
    the key elements in any high availability solution; however, many of
    its core features are not fully understood. The following list of
    common myths may grow (or shrink), so check back with this document
    from time to time.

    Myth #1. RMAN only backs up those datablocks which contain data.

    During a full datafile backup, every block in the datafile is read into
    a memory buffer (see NOTE:73354.1 for more information on memory usage)
    called an input buffer. If the block has ever been written to by an
    oracle process, then the block is backed up EVEN IF THE BLOCK IS EMPTY.
    The reason for this is that RMAN was designed to backup the database even
    if the database is closed, and therefore does not access space management
    information in the data dictionary to see if a block is on the freelist
    or not (if all data has been deleted from it). So, the only blocks that
    are skipped at the time of backup are blocks that have never been written to
    by an Oracle process. Even blocks now on the freelist after a drop/truncate
    table will be backed up. Keep in mind, though, that ALL blocks are read
    into memory and checked.

    Starting with 10gR2, the above behavior changed: for locally managed
    tablespaces, RMAN reads the file bitmap headers and backs up only the used
    extents.

    Myth #2: Incremental backups will take significantly less time to backup
    than full backups.

    Unfortunately, this cannot be said with 100% accuracy. While an
    incremental backup MAY take less time overall than a full backup of the same
    datafile, the process by which RMAN checks each block for changes does not
    differ between a full backup and an incremental backup.

    During an incremental backup (at a level greater than 0), RMAN pulls every
    block in a datafile into a buffer in memory to check its SCN number. If
    the SCN is newer than the SCN recorded by the level 0 incremental backup (the
    full backup), then the block is pushed from an input buffer to an output
    buffer and then written to the backup piece. Every block in the datafile is
    still checked, in the same fashion as described in Myth #1, and for the same
    reasons. So the performance increase only occurs at the write to our
    allocated device (disk or tape), and thus any decrease in time depends on
    how many blocks are written, how many channels are allocated, how many
    devices being written to, etc.

    10g introduces Block Change Tracking which allows us to track which blocks have
    changed since the last incremental backup. Use of the Block Change Tracking
    feature will give very fast incremental backup performance as we then do not
    need to scan the entire file for changed blocks. For more information on this
    feature see: Note 262853.1 10G RMAN Fast Incremental Backups.
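    A sketch of enabling the feature (the tracking file path is hypothetical):

```sql
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
       USING FILE '/u01/oradata/chg_trk.f';   -- path is an example only
SQL> SELECT status, filename FROM v$block_change_tracking;
```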

    Myth #3: I can specify incremental backups during a restore operation.

    An incremental backup will never be used when a RESTORE command is given.
    RESTORE will go back to the last full (incremental level 0), and restore
    from that backup. Even if you use a tag on the incremental backup, like
    this:

    run {
    allocate channel x type disk;
    backup incremental level=1
    format='c:\backup\%U.test'
    tag='test_inc_1'
    datafile 7;
    }

    If you then specify the tag on the restore, you will get the following
    error stack:

    RMAN> run{
    2> allocate channel x type disk;
    3> restore datafile 7 from tag='test_inc_1';
    4> }

    RMAN-03022: compiling command: allocate
    RMAN-03023: executing command: allocate
    RMAN-08030: allocated channel: x
    RMAN-08500: channel x: sid=15 devtype=DISK

    RMAN-03022: compiling command: restore

    RMAN-03022: compiling command: IRESTORE
    RMAN-03026: error recovery releasing channel resources
    RMAN-08031: released channel: x
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure during compilation of command
    RMAN-03013: command type: restore
    RMAN-03002: failure during compilation of command
    RMAN-03013: command type: IRESTORE
    RMAN-06026: some targets not found – aborting restore
    RMAN-06023: no backup or copy of datafile 7 found to restore

    An incremental backup is not a functional backup. It is an 'add-on'
    to a full backup, and therefore cannot be 'restored.' Its only function
    is to provide a faster MTTR (Mean Time To Recovery) during a RECOVER
    operation. In fact, it is impossible for the user to specify that an
    incremental backup be used during recovery.

    Let’s use the following RECOVER command as an example:

    RMAN> run{
    2> allocate channel x type disk;
    3> restore datafile 7;
    4> recover datafile 7;
    5> }

    RMAN will first restore datafile 7 from the last incremental
    level 0 backup. Then, it will evaluate its two options for the recovery.
    One option is to restore incremental level backups that exist since the
    last level 0, then finish up with any archivelogs that exist from the
    last incremental backup to the current point in time. The second option is
    to ignore the incremental backups and simply apply all archive logs from
    the level 0 backup until the current point in time. RMAN evaluates both
    options and if incremental backups are present will use them in preference to archivelogs.

    Myth #4: RMAN utilizes the LARGE_POOL area of the SGA for allocating
    memory for input/output buffers.

    The LARGE_POOL is only used if I/O Slaves are specified by one or
    both of the initialization parameters DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES.
    If DBWR_IO_SLAVES has a non-zero value and/or BACKUP_TAPE_IO_SLAVES is set to TRUE,
    then RMAN uses the LARGE_POOL because the I/O Slave Processes must coordinate their
    buffer read/writes in a common area. If no value for LARGE_POOL_SIZE is set in the init.ora file,
    and if I/O slaves are configured to be used, then RMAN uses the SHARED_POOL for
    input and output buffers. This can put severe strain on the Shared Pool,
    and should be avoided. This is often indicated by RMAN receiving an ora-4031
    error from the target database.

    Myth #5: The more channels I allocate, the faster my backup will complete.

    Allocating more than one channel is an effective way of parallelizing a
    backup, but the bottleneck is going to be the devices to which we are writing,
    and the payoff from each additional channel decreases rapidly in a roughly
    parabolic trend. If there are four processes each waiting on the others to
    complete writes to the same device, then inevitably some processes sit idle,
    waiting for the opportunity to write to the device. The trade-off can also be
    very expensive; note:73354.1 discusses the memory overhead incurred for each
    channel allocated.

    If we are writing to tape, and we have only one tape device, chances are that
    allocating more channels will increase the speed of our backup and keep the
    tape device streaming. However, hardware multiplexing to tape is NOT
    recommended: although backup performance will see a marked improvement, it has
    a detrimental effect on restore performance. Bearing in mind that a tape is a
    serial device, if you parallelize 4 backup pieces to a single tape, RMAN will
    request those four backup pieces simultaneously during a restore, but your
    media manager will only be able to supply one file at a time (with a tape
    rewind PER file).
