Veritas ODM Async and Oracle

In short, for Veritas VxFS and ODM files, filesystemio_options does not take effect ("This parameter is not applicable to VxFS files, ODM files, or Quick I/O files."), so it is essential to check whether the ODM shared object is correctly linked:

 

ls -l $ORACLE_HOME/lib/libodm*
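When ODM is correctly enabled, libodm10.so in $ORACLE_HOME/lib is typically a soft link into the Veritas ODM package. The output below is only an illustrative sketch; the library name, sizes, and target path vary by platform and Oracle version:

$ ls -l $ORACLE_HOME/lib/libodm*
lrwxrwxrwx   1 oracle  dba   26 ...  libodm10.so -> /opt/VRTSodm/lib/libodm.so
-rw-r--r--   1 oracle  dba  ...      libodmd10.so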

 

 

For JFS2 (on AIX), the general recommendation is to set filesystemio_options=SETALL.

(Since version 10g, Oracle opens data files located on the JFS2 file system with the O_CIO option if the filesystemio_options initialization parameter is set to either directIO or setall.)
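A minimal sketch of setting the parameter; it is not dynamic, so it must go to the spfile and the instance must be restarted:

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET filesystemio_options=SETALL SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> SHOW PARAMETER filesystemio_options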

 

 

Symptoms

Disk utility output shows 100% usage for the disk continuously:

ProcList  CPU Rpt  Mem Rpt  Disk Rpt  NextKeys  SlctProc  Help  Exit
GlancePlus C.04.50.00      19:15:39  bplita3  ia64          Current   Avg   High
--------------------------------------------------------------------------------
CPU  Util  S SN NU U                                        |  90%    90%    90%
Disk Util  F F                                              | 100%   100%   100%  <=== Disk too heavily loaded
In the Statspack or AWR report, high ‘free buffer waits’ are seen even after considerably increasing db_cache_size and the number of database writers.

Statspack Report

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.99       Redo NoWait %:   99.99
            Buffer  Hit   %:   58.92    In-memory Sort %:   99.82
            Library Hit   %:   89.19        Soft Parse %:   83.51
         Execute to Parse %:   50.76         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   81.48     % Non-Parse CPU:   93.21
....

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                        % Total
Event                          Waits      Time (s)     Ela Time
---------------------------- ---------- ------------ ----------
free buffer waits                76,390       74,807      55.82
enqueue                           4,071       11,625       8.67
.....

 

Cause
The problem is related to disk I/O: buffered file system writes cannot keep pace with the database's write load, so DBWR falls behind and sessions wait on free buffers.

Solution

 

1. Use the O/S striping software

Try to use the O/S striping software to distribute database files over as many disks as you can.
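As an illustration only, on HP-UX LVM a striped logical volume could be created as below; the volume group vg01, the 4-way stripe, the 64 KB stripe size, and the 10240 MB size are all hypothetical values:

# Create a 10240 MB logical volume striped across 4 disks with a 64 KB stripe size
# lvcreate -i 4 -I 64 -L 10240 -n lv_ora vg01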

2. Use Direct IO

Mount the filesystem with the direct I/O options.
For example:

% mount -F vxfs -o remount,nodatainlog,mincache=direct,convosync=direct /dev/vg00/lv_ora /soft/oracle
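To verify that the options are in effect, the active mount options can be listed; this is a sketch, and the exact output format varies by HP-UX/VxFS version:

% mount -v | grep /soft/oracle
/dev/vg00/lv_ora on /soft/oracle ... nodatainlog,mincache=direct,convosync=direct ...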

mincache and convosync

“mincache=direct” => bypass buffer cache on read
“convosync=direct” => force direct I/O for DB writers

mincache=direct and convosync=direct allow data to be transferred directly between the Oracle buffer cache and disk in both directions. This avoids double buffering by bypassing the file system buffer cache and can improve physical read/write performance. However, workloads that previously avoided a physical disk read because the required block was in the file system buffer cache may be negatively impacted.

If your filesystem is mounted with these options, then the FILESYSTEMIO_OPTIONS default setting of ASYNCH can be kept and direct I/O will still be used.

Parameters in Oracle influencing the use of Direct IO

FILESYSTEMIO_OPTIONS defines the I/O operations on filesystem files. This parameter should not normally be set by the user.
The value may be any of the following:
asynch – Set by default on HP. This allows asynchronous IO to be used where supported by the OS.
directIO – This allows direct IO to be used where supported by the OS. Direct IO bypasses any Unix buffer cache. As of 10.2 most platforms will try to use the directio option for NFS mounted disks (and will also check that NFS attributes are sensible).
setall – Enables both ASYNC and DIRECT IO.
none – This disables ASYNC IO and DIRECT IO so that Oracle uses normal synchronous writes, without any direct IO options.
See

Document 120697.1 Init.ora Parameter “FILESYSTEMIO_OPTIONS” Reference Note
DISK_ASYNCH_IO controls whether I/O to datafiles, control files, and logfiles is asynchronous (that is, whether parallel server processes can overlap I/O requests with CPU processing during table scans). If your platform supports asynchronous I/O to disk, Oracle recommends that you leave this parameter set to its default value. However, if the asynchronous I/O implementation is not stable, you can set this parameter to false to disable asynchronous I/O. If your platform does not support asynchronous I/O to disk, this parameter has no effect.

If you set DISK_ASYNCH_IO to false, then you should also set DBWR_IO_SLAVES or DB_WRITER_PROCESSES to a value other than its default of zero in order to simulate asynchronous I/O.
DB_WRITER_PROCESSES or DBWR_IO_SLAVES
see the comments for DISK_ASYNCH_IO above; a minimal sketch follows.
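Where asynchronous I/O has to be disabled, an init.ora sketch (the writer/slave counts here are illustrative, not a recommendation):

# Disable async I/O and compensate with multiple DB writers (or I/O slaves)
disk_asynch_io=false
db_writer_processes=4
# alternatively: dbwr_io_slaves=4 (a nonzero slave count forces a single DBWR)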

Again the default setting of ASYNCH can be used when implementing direct I/O on HP-UX / Veritas.

3. Concurrent I/O

An alternative solution to Direct I/O is to use Concurrent I/O. Concurrent I/O is available in OnlineJFS 5.0.1.

To enable Concurrent I/O, the filesystem must be mounted with “-o cio”.
For example:

mount -F vxfs -o nodatainlog,cio /soevxfs/redos /oracle/mnt/redos

Please note that remount should not be used to enable Concurrent I/O on already-mounted filesystems; unmount and mount again instead, as sketched below.
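In practice that means unmounting and mounting again, shown here with the same paths as the example above:

# umount /oracle/mnt/redos
# mount -F vxfs -o nodatainlog,cio /soevxfs/redos /oracle/mnt/redos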

“-o cio”

Concurrent I/O allows multiple processes to read from or write to the same file without blocking other read(2) or write(2) calls. With Concurrent I/O, read and write operations are not serialized. This option is generally used by applications that require high performance for accessing data and do not perform overlapping writes to the same file; it is the responsibility of the application or its threads to coordinate write activity to the same file. Concurrent I/O also avoids double buffering by bypassing the filesystem buffer cache, which improves physical read/write performance significantly. Concurrent I/O performs very close to raw logical volumes.

 


 

We assume the Veritas ODM driver is installed, mounted, and available.

The following steps enable or disable ODM for an Oracle database (note that the path to the ODM files may change with different OS versions).

Enable ODM

Log in as oracle user.

i) Shutdown the database

ii) Change directories:
$ cd $ORACLE_HOME/lib

iii) Take a backup of existing original ODM library

PA systems

$ mv $ORACLE_HOME/lib/libodm10.sl $ORACLE_HOME/lib/libodm10.sl.org

IA systems

$ mv $ORACLE_HOME/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so.org

(Note: the 9i library is named libodm9.so, the 10g library libodm10.so, and the 11g library libodm11.so.)

iv) create a soft link to veritas ODM library

PA systems

ln -s /opt/VRTSodm/lib/libodm.sl $ORACLE_HOME/lib/libodm10.sl

IA systems

ln -s /opt/VRTSodm/lib/libodm.so $ORACLE_HOME/lib/libodm10.so

v) Start the database and check

Once the database instance is enabled with ODM, the following message is displayed in the Oracle alert log:

Example:

“Oracle instance running with ODM: VERITAS 4.1 ODM Library, Version 1.1”
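A quick way to check for the message from the shell; the alert log path below assumes a 10g-style bdump location and is only illustrative:

$ grep -i "running with ODM" $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log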
Disable ODM

Log in as oracle user.

i) Shutdown the database

ii) Change directories:
$ cd $ORACLE_HOME/lib
iii) Remove the soft link

PA systems
$ rm libodm10.sl

IA systems
$ rm libodm10.so

iv) Copy the original $ORACLE_HOME/lib/libodm10.sl or libodm10.so backup file back to disable the ODM library.

PA systems
$ cp $ORACLE_HOME/lib/libodm10.sl.org $ORACLE_HOME/lib/libodm10.sl

IA systems
$ cp $ORACLE_HOME/lib/libodm10.so.org $ORACLE_HOME/lib/libodm10.so

v) Start the database

 

 

Applies to:

Oracle Server – Enterprise Edition – Version: 10.2.0.2 to 11.1.0.7 – Release: 10.2 to 11.1
Information in this document applies to any platform.
Database installations using Veritas Storage Foundation for Oracle / Oracle RAC (SFRAC)
and making use of the Veritas ODM (Oracle Disk Manager) library
Symptoms

The Veritas shared library used by Oracle Disk Manager could be lost upon RDBMS patchset
install such as patchset versions 10.2.0.2, 10.2.0.3, 10.2.0.4 or 11.1.0.7.
If this problem is not identified, poor I/O performance or a change in the I/O profile of the database can result, such as cached I/O being done to a Veritas filesystem rather than the direct I/O that was used before applying the patchset.
Changes

Applying one of the currently available RDBMS patchsets:

10.2.0.2
10.2.0.3
10.2.0.4
11.1.0.7

on platforms AIX, Linux, HP-UX, Solaris.

RDBMS patchset versions released after this note was written can cause the same problem.
Cause

Upon patchset install, the library libodm<Version>.so in the $ORACLE_HOME/lib directory is replaced by the patchset's own libodm<Version>.so, which is created as a soft link to the dummy library libodmd<Version>.so.

In addition, because the oracle binary is relinked during patchset install, on Linux, HP-UX, or Solaris the dummy static library $ORACLE_HOME/rdbms/lib/libodm<Version>.a will be linked in; on AIX the make command links libodm<Version>.so rather than the static library.
If the previously installed library in $ORACLE_HOME/lib was a soft link to (or a copy of) the Veritas libodm, and that library was linked into the oracle binary, then the ODM functionality is lost after the patchset install.

If the database is affected by this problem, it is visible in the alert.log file through the disappearance of a message like the one below:

“Oracle instance running with ODM: VERITAS 4.1.20.00 ODM Library, Version 1.1 ”

 

 

Solution

If the Veritas libodm has been lost, the following steps need to be executed to put it back in place:

Example: Oracle RDBMS after 10.2.0.4 patchset install on Solaris

su - <oracle install user>
ldd $ORACLE_HOME/bin/oracle     # currently shows no libodm10.so

Create soft link of Veritas ODM library
(below examples are valid for Oracle 10g and Veritas 5.0 installs)
AIX, HP-UX IA64, Linux, Sparc-Solaris:
mv $ORACLE_HOME/lib/libodm10.so $ORACLE_HOME/lib/libodm10.so.10204
HP-UX PA:
mv $ORACLE_HOME/lib/libodm10.sl $ORACLE_HOME/lib/libodm10.sl.10204

And then
AIX:
ln -s /opt/VRTSodm/libodm64.so $ORACLE_HOME/lib/libodm10.so
HP-UX PA:
ln -s /opt/VRTSodm/lib/libodm.sl $ORACLE_HOME/lib/libodm10.sl
HP-UX IA64:
ln -s /opt/VRTSodm/lib/libodm.sl $ORACLE_HOME/lib/libodm10.so
Linux:
ln -s /opt/VRTSodm/lib/libodm64.so $ORACLE_HOME/lib/libodm10.so
Sparc-Solaris:
ln -s /opt/VRTSodm/lib/sparcv9/libodm.so $ORACLE_HOME/lib/libodm10.so
Avoid linking the rdbms/lib static dummy library:

mv $ORACLE_HOME/rdbms/lib/libodm10.a $ORACLE_HOME/rdbms/lib/libodm10.a.10204
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk ioracle

Confirm that libodm10.so is linked to the oracle binary:

ldd $ORACLE_HOME/bin/oracle     # now shows libodm10.so

Starting up the Oracle instance then shows a message like:
“Oracle instance running with ODM: VERITAS … ”
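A hedged verification sketch from the shell:

$ ldd $ORACLE_HOME/bin/oracle | grep odm
# expect a line for libodm10.so; if nothing is printed, the relink did not pick up the library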

References

Bug 7359739: 3X”LOG FILE SYNC” DATAGUARD AFTER UPGRADING DB TO 10.2.0.4 FROM 10.2.0.2

Bug 7010362: STATIC VERSION OF ODM LIBRARY(LIBODM10.A) GETTING LINKED TO ORACLE BINARY

 

In Oracle9i Release 2 (9.2), you can use the filesystemio_options init.ora parameter to enable or disable asynchronous I/O, direct I/O, or Concurrent I/O on file system files. This parameter is used only on files that reside in non-VxFS filesystems. This parameter is not applicable to VxFS files, ODM files, or Quick I/O files.
See your Oracle documentation for more details.

http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/pdf/sf_ora_admin.pdf

 

 

 

 



Comments

One response to “Veritas ODM Async and Oracle”

  1. Maclean Liu

    Solution

    A) Please verify that Veritas ODM is correctly enabled and configured as in the following example (your ODM version & Oracle version could be different):

    1. Check the VRTSdbed license:

    For Database Edition 3.0:

    # /usr/sbin/vxlicense -t DATABASE_EDITION | egrep 'Feature name:|Expiration date:'
    vrts:vxlicense: INFO: Feature name: DATABASE_EDITION [100]
    vrts:vxlicense: INFO: Expiration date: No expiration date

    For Database Edition 3.5:

    # /sbin/vxlictest -n "VERITAS Database Edition for Oracle" -f "ODM"
    ODM feature is licensed

    2. Check that the VRTSodm package is installed:

    For Database Edition 3.0 or Database Edition 3.5:

    # pkginfo VRTSodm
    system VRTSodm VERITAS Oracle Disk Manager

    3. Determine whether the Oracle version is 32-bit or 64-bit:

    # cd $ORACLE_HOME/bin
    # file oracle

    For a 32-bit version, the output is as follows:

    oracle: ELF 32-bit MSB executable SPARC Version 1, dynamically linked, not stripped

    For a 64-bit version, the output is as follows:

    oracle: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped

    4. Check that libodm.so is present and correctly linked:

    For Database Edition 3.0 or Database Edition 3.5:

    If running 32-bit Oracle, use the following command:

    # ls -lL /usr/lib/libodm.so
    -rw-r--r-- 1 root sys 13288 Apr 25 18:42 /usr/lib/libodm.so

    # cmp $ORACLE_HOME/lib/libodm9.so /usr/lib/libodm.so
    # echo $?
    0

    Alternatively, if you are running 64-bit Oracle, use the following command:

    # ls -lL /usr/lib/sparcv9/libodm.so
    -rw-r--r-- 1 root sys 16936 Apr 25 18:42 /usr/lib/sparcv9/libodm.so

    # cmp $ORACLE_HOME/lib/libodm9.so /usr/lib/sparcv9/libodm.so
    # echo $?
    0

    5. If the above link test fails, recreate the links as follows:

    If running 32-bit Oracle, use the following command:

    # ln -s /opt/VRTSodm/lib/libodm.so $ORACLE_HOME/lib/libodm9.so

    Alternatively, if you are running 64-bit Oracle, use the following command:

    # ln -s /opt/VRTSodm/lib/sparcv9/libodm.so $ORACLE_HOME/lib/libodm9.so

    6. Check that the instance is using the Oracle Disk Manager function:

    # cat /dev/odm/stats >/dev/null
    # echo $?
    0

    B) For ODM configuration with Oracle Server 11.2 and DNFS please check the next manual:

    http://www.oracle.com/pls/db112/homepage

    =)> Oracle® Grid Infrastructure Installation Guide
    11g Release 2 (11.2) for HP-UX
    Part Number E17211-09

    ==)> 3 Configuring Storage for Grid Infrastructure for a Cluster and Oracle Real Application Clusters (Oracle RAC)

    ===)> 3.2.8 Enabling Direct NFS Client Oracle Disk Manager Control of NFS
