Backup Script: Expdp Schema to ASM Storage

Below is a shell script that backs up a schema dump file to ASM storage every day:

#!/bin/bash
#asmcmd mkdir DATA/ASM_DATAPUMP_BACKUP
#create directory ASM_DATAPUMP_BACKUP as '+DATA/ASM_DATAPUMP_BACKUP';
#grant read,write on directory ASM_DATAPUMP_BACKUP to system;
#create directory LOGDIR as '/s01/logdir';
#grant read,write on directory LOGDIR to system;
#create directory DMPBACKUP as '/s01/dmpbackup';
#grant read,write on directory DMPBACKUP to system;
export ORACLE_HOME=/s01/oracle/product/11.2.0/dbhome_1;
export ORA_CRS_HOME=/s01/grid;
export ORACLE_SID=PROD1;
export PATH=$ORACLE_HOME/bin:/s01/grid:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin:$ORACLE_HOME/OPatch;
thisday=`date +%Y%m%d`;
expfilename='maclean'$thisday'.dmp';
explogname='maclean'$thisday'.log';
yesterday=`date -d "1 days ago" +%Y%m%d`;
yesterdayfile='maclean'$yesterday'.dmp';
expdp system/system schemas=maclean directory=ASM_DATAPUMP_BACKUP dumpfile=$expfilename logfile=LOGDIR:$explogname;
#TRANSFER FILE FROM ASM TO FILESYSTEM
sqlplus / as sysdba <<EOF
exec DBMS_FILE_TRANSFER.COPY_FILE('ASM_DATAPUMP_BACKUP','$expfilename','DMPBACKUP','$expfilename');
EOF
export ORACLE_SID=+ASM1;
export ORACLE_HOME=/s01/grid;
$ORACLE_HOME/bin/asmcmd rm DATA/ASM_DATAPUMP_BACKUP/$yesterdayfile;
#/usr/bin/find /s01/dmpbackup -mtime 1 -name 'maclean*.dmp'  -exec /bin/rm -rf {} \;
#asmcmd cp DATA/ASM_DATAPUMP_BACKUP/$expfilename /s01/dmpbackup;
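The script's naming and retention scheme boils down to two date-stamped file names: today's dump is created, and yesterday's is the one removed from ASM afterwards. A minimal sketch of just that logic (no Oracle calls; `date -d` is GNU syntax, as used in the script):

```shell
# Derive today's and yesterday's dump-file names, as the backup script does.
thisday=$(date +%Y%m%d)                    # e.g. 20110502
yesterday=$(date -d "1 days ago" +%Y%m%d)  # GNU date relative-date syntax
expfilename="maclean${thisday}.dmp"        # created by today's expdp run
yesterdayfile="maclean${yesterday}.dmp"    # removed from ASM afterwards
echo "create: $expfilename  remove: $yesterdayfile"
```

One caveat with the script as written: `asmcmd rm` deletes yesterday's ASM copy unconditionally, so it may be worth checking that the DBMS_FILE_TRANSFER copy succeeded before removing it.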

Data Pump Failed with ORA-04031/ORA-04030?

The Data Pump export/import utilities (expdp/impdp) were introduced in 10g. The Data Pump workflow is shown in the figure below:

[Figure: Data Pump workflow]

The notorious ORA-04031/ORA-04030 errors come up frequently when using the Data Pump utilities. The memory components that matter most to Data Pump are the PGA and, within the SGA, the shared pool and the streams pool. Expdp/Impdp's load on the shared pool comes mainly from the long list of package bodies it must load while running, including:

PACKAGE_NAME                             TYPE                 SHARABLE_MEM
---------------------------------------- -------------------- ------------
SYS.KUPM$MCP                             PACKAGE BODY               425448
SYS.KUPW$WORKER                          PACKAGE BODY               386000
SYS.DBMS_METADATA_INT                    PACKAGE BODY               325856
SYS.DBMS_REPCAT_UTL                      PACKAGE BODY               269064
SYS.DBMS_METADATA                        PACKAGE BODY               226624
SYS.DBMS_DATAPUMP                        PACKAGE BODY               192888
SYS.DBMS_PRVTAQIS                        PACKAGE BODY               147288
SYS.DBMS_PRVTAQIM                        PACKAGE BODY               142680
SYS.KUPF$FILE                            PACKAGE BODY               142008
SYS.DBMS_METADATA_UTIL                   PACKAGE BODY               115224
SYS.KUPD$DATA                            PACKAGE BODY               109400
SYS.DBMS_LOGREP_EXP                      PACKAGE BODY               102648
SYS.DBMS_SCHED_MAIN_EXPORT               PACKAGE BODY                86816
SYS.DBMS_SYS_SQL                         PACKAGE BODY                72280
SYS.DBMS_AW                              PACKAGE BODY                68128
SYS.DBMS_SQL                             PACKAGE BODY                68064
SYS.DBMS_DM_EXP_INTERNAL                 PACKAGE BODY                57040
SYS.DBMS_AW_EXP                          PACKAGE BODY                52256
SYS.KUPC$QUE_INT                         PACKAGE BODY                52088
SYS.DBMS_CUBE_EXP                        PACKAGE BODY                48432
SYS.KUPV$FT_INT                          PACKAGE BODY                47184
SYS.DBMS_LOGREP_UTIL                     PACKAGE BODY                45856
SYS.DBMS_CDC_EXPDP                       PACKAGE BODY                44616
SYS.DBMS_EXPORT_EXTENSION                PACKAGE BODY                38728
SYS.DBMS_CDC_UTILITY                     PACKAGE BODY                37712
SYS.KUPV$FT                              PACKAGE BODY                34536
SYS.DBMS_DM_MODEL_EXP                    PACKAGE BODY                33904
SYS.DBMS_AQ                              PACKAGE BODY                33512
SYS.DBMS_IJOB                            PACKAGE BODY                33488
SYS.DBMS_AQ_SYS_EXP_INTERNAL             PACKAGE BODY                29464
SYS.DBMS_FILE_GROUP_EXP                  PACKAGE BODY                29440
SYS.KUPD$DATA_INT                        PACKAGE BODY                29424
SYS.DBMS_RULE_EXP_RL_INTERNAL            PACKAGE BODY                29400
SYS.KUPP$PROC                            PACKAGE BODY                25368
SYS.DBMS_AQ_IMPORT_INTERNAL              PACKAGE BODY                25352
SYS.DBMS_AQADM                           PACKAGE BODY                25320
SYS.DBMS_ODCI                            PACKAGE BODY                21200
SYS.UTL_XML                              PACKAGE BODY                21200
SYS.DBMS_CDC_DPUTIL                      PACKAGE BODY                21192
SYS.DBMS_STREAMS_DATAPUMP_UTIL           PACKAGE BODY                21144
SYS.KUPF$FILE_INT                        PACKAGE BODY                17104
SYS.DBMS_SESSION                         PACKAGE BODY                17048
SYS.DBMS_AQ_SYS_EXP_ACTIONS              PACKAGE BODY                17048
SYS.DBMS_STREAMS_DATAPUMP                PACKAGE BODY                17032
SYS.KUPC$QUEUE_INT                       PACKAGE BODY                17032
SYS.DBMS_RULE_ADM                        PACKAGE BODY                17032
SYS.DBMS_LOCK                            PACKAGE BODY                17032
SYS.DBMS_RULEADM_INTERNAL                PACKAGE BODY                12952
SYS.DBMS_TRANSFORM_EXIMP_INTERNAL        PACKAGE BODY                12952
SYS.DBMS_REFRESH_EXP_SITES               PACKAGE BODY                12952
SYS.DBMS_REPCAT_RGT_EXP                  PACKAGE BODY                12936
SYS.UTL_RAW                              PACKAGE BODY                12936
SYS.DBMS_FLASHBACK                       PACKAGE BODY                12936
SYS.DBMS_TRANSFORM_EXIMP                 PACKAGE BODY                12936
SYS.DBMS_SCHED_JOB_EXPORT                PACKAGE BODY                12936
SYS.DBMS_REFRESH_EXP_LWM                 PACKAGE BODY                12936
SYS.DBMS_SCHED_EXPORT_CALLOUTS           PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_QUEUE_TABLES             PACKAGE BODY                 8856
SYS.DBMS_DEFER_IMPORT_INTERNAL           PACKAGE BODY                 8856
SYS.DBMS_INTERNAL_SAFE_SCN               PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_INDEX_TABLES             PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_HISTORY_TABLES           PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_SIGNATURE_TABLES         PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_DEQUEUELOG_TABLES        PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_SUBSCRIBER_TABLES        PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_CMT_TIME_TABLES          PACKAGE BODY                 8856
SYS.DBMS_AQ_EXP_TIMEMGR_TABLES           PACKAGE BODY                 8856
SYS.DBMS_RULE_EXP_RULES                  PACKAGE BODY                 8840
SYS.DBMS_RULE_EXP_UTLI                   PACKAGE BODY                 8840
SYS.DBMS_AQADM_INV                       PACKAGE BODY                 8840
SYS.DBMS_ZHELP_IR                        PACKAGE BODY                 8840
SYS.KUPC$QUEUE                           PACKAGE BODY                 8840
SYS.DBMS_AQ_EXP_ZECURITY                 PACKAGE BODY                 8840
SYS.KUPU$UTILITIES_INT                   PACKAGE BODY                 8840
SYS.KUPU$UTILITIES                       PACKAGE BODY                 8840
SYS.DBMS_REPCAT_EXP                      PACKAGE BODY                 8840
SYS.DBMS_CDC_EXPVDP                      PACKAGE BODY                 8840
Total: well under 10 MB
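A listing like the one above can be produced with a query of roughly this shape against v$db_object_cache (a sketch; the original query is not shown, so the SYS filter is an assumption):

```sql
-- Sketch: package bodies currently kept in the shared pool, largest first.
SELECT owner || '.' || name AS package_name,
       type,
       sharable_mem
  FROM v$db_object_cache
 WHERE type = 'PACKAGE BODY'
   AND owner = 'SYS'
 ORDER BY sharable_mem DESC;
```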

Internally, Data Pump is built on Advanced Queuing (AQ), so it also uses the streams pool. The related streams pool components include:

POOL         NAME                            BYTES
------------ -------------------------- ----------
streams pool sob_kgqmrec                     19584
streams pool Sender info                     17616
streams pool recov_kgqbtctx                  11904
streams pool kwqbcqini:spilledovermsgs        3168
streams pool kgqbt_alloc_block                2096
streams pool recov_kgqmsub                    1608
streams pool kwqbsinfy:bqg                    1232
streams pool recov_kgqmctx                    1104
streams pool kwqbsinfy:mpr                    1088
streams pool kwqbsinfy:sta                     768
streams pool kgqmsub                           584
streams pool fixed allocation callback         448
streams pool kwqbsinfy:cco                     376
streams pool image handles                     288
streams pool kwqbsinfy:bms                     256
streams pool name_kgqmsub                      256
streams pool spilled:kwqbl                     256
streams pool deqtree_kgqmctx                   144
streams pool substree_kgqmctx                  144
streams pool kgqmdm_fl_1                       144
streams pool time manager index                144
streams pool msgtree_kgqmctx                   144
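The breakdown above reads like v$sgastat output; a query of roughly this shape reproduces it (a sketch, since the original query is not shown):

```sql
-- Sketch: current streams pool allocations, largest first.
SELECT pool, name, bytes
  FROM v$sgastat
 WHERE pool = 'streams pool'
 ORDER BY bytes DESC;
```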

If the streams pool is sized too small, Expdp/Impdp can likewise terminate unexpectedly with ORA-04031 ("streams pool", ...); see <EXPDP Fails With ORA-04031 ("streams pool", ...) [ID 457724.1]> for details.

In addition, Data Pump may allocate the PX msg pool from the large pool for parallel-process communication, but this memory overhead is small.

Data Pump's PGA consumption shows up mainly in the koh-kghu sessi sub-heap. Known Bug 10404544 (ORA-4030 DURING EXPDP) and Bug 7681160 (EXPDP FAILS WITH ORA-4030 WHEN SELECT FROM SYS.KU$_PROCACT_SCHEMA_VIEW) report that this sub-heap (together with the kxs-heap-w sub-heap) can leak memory when running expdp on 10.2.0.4. Let's take a concrete look at the PGA usage of the Data Pump master process DM00:

SQL> oradebug setospid 5278;
Oracle pid: 51, Unix process pid: 5278, image: oracle@rh2.oracle.com (DM00)
SQL> oradebug dump heapdump 536870917;
Statement processed.
SQL> oradebug tracefile_name;
/s01/orabase/diag/rdbms/prod/PROD1/trace/PROD1_dm00_5278.trc
SQL> select pga_alloc_mem / 1024 / 1024, pga_used_mem / 1024 / 1024, pname
2    from v$process
3   where addr = '00000000DCBC51F8'
4  /
PGA_ALLOC_MEM/1024/1024 PGA_USED_MEM/1024/1024 PNAME
----------------------- ---------------------- -----
8.06555557             7.25846481 DM00
[oracle@rh2 ~]$ egrep "HEAP DUMP heap name|Total heap size|Permanent space" \
/s01/orabase/diag/rdbms/prod/PROD1/trace/PROD1_dm00_5278.trc
HEAP DUMP heap name="session heap"  desc=0x7fae4fc167f8
Total heap size    =  4517512
Permanent space    =   192664
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4f9db150
Total heap size    =   362416
Permanent space    =       80
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4fc47fb0
Total heap size    =   336104
Permanent space    =       80
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4f984660
Total heap size    =   156792
Permanent space    =       80
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4f988c88
Total heap size    =   154912
Permanent space    =       80
HEAP DUMP heap name="Alloc environm"  desc=0x7fae4fc3e090
Total heap size    =   129512
Permanent space    =      416
HEAP DUMP heap name="Alloc statemen"  desc=0x7fae4f8c91f8
Total heap size    =    10264
Permanent space    =      696
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4f9f3650
Total heap size    =    10192
Permanent space    =     1056
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4fc4fed0
Total heap size    =     9912
Permanent space    =     1488
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4f8e4690
Total heap size    =     9080
Permanent space    =      560
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4f8aeda0
Total heap size    =     8992
Permanent space    =      568
HEAP DUMP heap name="pga heap"  desc=0xb7c8ba0
Total heap size    =  2353064
Permanent space    =   654544
HEAP DUMP heap name="KFK_IO_SUBHEAP"  desc=0x7fae4fdfaa98
Total heap size    =   730640
Permanent space    =       80
HEAP DUMP heap name="koh-kghu call "  desc=0x7fae4fb650d0
Total heap size    =   246080
Permanent space    =       80
HEAP DUMP heap name="diag pga"  desc=0x7fae500347e0
Total heap size    =    70816
Permanent space    =    13472
HEAP DUMP heap name="Alloc environm"  desc=0x7fae4fdd15c8
Total heap size    =    68064
Permanent space    =     1336
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4fb50ee0
Total heap size    =    12272
Permanent space    =     2096
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4fb66c00
Total heap size    =    11784
Permanent space    =     1856
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4fc14fc0
Total heap size    =     9816
Permanent space    =     1392
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4fc10068
Total heap size    =     9664
Permanent space    =     1240
HEAP DUMP heap name="PLS PGA hp"  desc=0x7fae4fdeeab8
Total heap size    =    46784
Permanent space    =       80
HEAP DUMP heap name="top call heap"  desc=0xb7ce3c0
Total heap size    =   458584
Permanent space    =     1920
HEAP DUMP heap name="callheap"  desc=0xb7cd578
Total heap size    =   343104
Permanent space    =    65536
HEAP DUMP heap name="callheap"  desc=0xb7cd4c0
Total heap size    =    21616
Permanent space    =      968
HEAP DUMP heap name="kti call subhe"  desc=0x7fae4fdee018
Total heap size    =    20584
Permanent space    =       80
HEAP DUMP heap name="callheap"  desc=0x7fae4f6bd108
Total heap size    =     8952
Permanent space    =      200
HEAP DUMP heap name="callheap"  desc=0x7fae4f6bd030
Total heap size    =     2072
Permanent space    =     1672
HEAP DUMP heap name="top uga heap"  desc=0xb7ce5e0
Total heap size    =  4520496
Permanent space    =       80
HEAP DUMP heap name="session heap"  desc=0x7fae4fc167f8
Total heap size    =  4517512
Permanent space    =   192664
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4f9db150
Total heap size    =   362416
Permanent space    =       80
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4fc47fb0
Total heap size    =   336104
Permanent space    =       80
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4f984660
Total heap size    =   156792
Permanent space    =       80
HEAP DUMP heap name="koh-kghu sessi"  desc=0x7fae4f988c88
Total heap size    =   154912
Permanent space    =       80
HEAP DUMP heap name="Alloc environm"  desc=0x7fae4fc3e090
Total heap size    =   129512
Permanent space    =      416
HEAP DUMP heap name="Alloc statemen"  desc=0x7fae4f8c91f8
Total heap size    =    10264
Permanent space    =      696
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4f9f3650
Total heap size    =    10192
Permanent space    =     1056
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4fc4fed0
Total heap size    =     9912
Permanent space    =     1488
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4f8e4690
Total heap size    =     9080
Permanent space    =      560
HEAP DUMP heap name="Alloc server h"  desc=0x7fae4f8aeda0
Total heap size    =     8992
Permanent space    =      568

When facing this kind of memory leak, Oracle Support internally uses a dump-analysis tool called heap.awk (similar to ass.awk) to locate the problematic sub-heap. As an alternative to heap.awk, we can use the free graphical tool Membai.

[Figure: heap dump analysis]

Summary:

The Data Pump utilities Expdp/Impdp need to allocate memory from the PGA and from the shared pool, streams pool, and large pool in the SGA. To keep Expdp/Impdp clear of ORA-04031/ORA-04030 errors, set sensible values for the pga_aggregate_target and sga_target (or memory_target) initialization parameters under automatic memory management. If the SGA is managed manually, make sure shared_pool_size is adequate, and size the streams pool and large pool at roughly 150 MB each.

Resolving a UDE-31623 Error: One Case

An 11.2.0.2 database on Linux x86-64 hit an ORA-31623 error while exporting with the Data Pump expdp utility; the full log is below:

[oracle@rh2 dbs]$ expdp system/password dumpfile=s01:sh.dmp schemas=sh
Export: Release 11.2.0.2.0 - Production on Mon May 2 22:46:57 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1
/* The following entries appeared in the alert log together with the ORA-31623 error above */
2011-05-02 22:47:05.792000 +08:00
Errors in file /s01/orabase/diag/rdbms/prod/PROD1/trace/PROD1_ora_22060.trc  (incident=54525):
ORA-04031: unable to allocate 56 bytes of shared memory 
("streams pool","unknown object","streams pool","fixed allocation callback")
Incident details in: /s01/orabase/diag/rdbms/prod/PROD1/incident/incdir_54525/PROD1_ora_22060_i54525.trc
2011-05-02 22:47:08.189000 +08:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
Dumping diagnostic data in directory=[cdmp_20110502224708], requested by (instance=1, osid=22060), summary=[incident=54525].
2011-05-02 22:47:10.502000 +08:00
Sweep [inc][54525]: completed
Sweep [inc2][54525]: completed
SQL> show parameter memory
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address             integer     0
memory_max_target                    big integer 912M
memory_target                        big integer 912M
shared_memory_address                integer     0

The log above shows that the export process hit ORA-04031 while allocating memory from the streams pool for a "fixed allocation callback", which in turn caused the UDE-31623 error. Why does expdp allocate from the streams pool? Starting with 11g, Data Pump uses Advanced Queuing to control the start, stop, and restart of its jobs. If the current size of the streams pool is 0, then clearly no memory can be allocated from it; on this system only the AMM parameter memory_target was set, with no explicit streams_pool_size, and that is what triggered the problem.
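The streams pool's current size is easy to confirm from standard views; either of these queries shows whether AQ (and hence Data Pump job control) has any memory to allocate from:

```sql
-- Check the streams pool's current size; 0 bytes means AQ has
-- nothing to allocate from and expdp cannot start its job.
SELECT component, current_size
  FROM v$sga_dynamic_components
 WHERE component = 'streams pool';

-- Or via v$sgainfo:
SELECT name, bytes, resizeable
  FROM v$sgainfo
 WHERE name = 'Streams Pool Size';
```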
Searching MOS for these symptoms turns up the note "UDE-31623 Error with Data Pump Export":

Applies to:
Oracle Server - Enterprise Edition - Version: 11.2.0.0. and later   [Release: 11.2 and later ]
Information in this document applies to any platform.
Goal
Data Pump Export does not start:
$ expdp system/ directory=test dumpfile=system.dmp logfile=abcd.log full=y
Export: Release 11.2.0.1.0 - Production on Thu Mar 25 12:11:31 2010
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
UDE-31623: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3263
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4488
ORA-06512: at line 1
Solution
For this Data Pump error, you will need to configure the database with some Streams Pool.
Perform the following query to find out how much STREAMS_POOL_SIZE has been allocated
select * from v$sgainfo;
...
Streams Pool Size                               0 Yes
Data Pump now uses Advanced Queuing (AQ) internally, so it can handle job stops/starts/re-starts.  
However, AQ using the Streams Pool, where the database currently has the STREAMS_POOL_SIZE set to ZERO.
Manually set the STREAMS_POOL_SIZE, re-start the database and re-attempt the Data Pump Export.
Alternately, turn on Automatic Shared Memory management (ASMM) by setting SGA_TARGET or MEMORY_TARGET.

The note above gets one thing wrong: setting the memory_target parameter does not help resolve this UDE-31623 problem. The best fix is still to set streams_pool_size explicitly. Unfortunately, although streams_pool_size is nominally a dynamically modifiable parameter, in practice it could not be modified online here:

SQL> col name for a20
SQL>
SQL> select name,isinstance_modifiable from v$system_parameter where name='streams_pool_size';
NAME                 ISINS
-------------------- -----
streams_pool_size    TRUE
SQL> alter system set streams_pool_size=60M sid='PROD1';
alter system set streams_pool_size=60M sid='PROD1'
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool
SQL> alter system set streams_pool_size=60M sid='PROD1' scope=spfile;
System altered.
Restart Instance .................
[oracle@rh2 dbs]$ expdp system/password dumpfile=s01:sh1.dmp schemas=sh
Export: Release 11.2.0.2.0 - Production on Mon May 2 23:28:33 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  system/******** dumpfile=s01:sh1.dmp schemas=sh
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 273.8 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/DIMENSION
. . exported "SH"."CUSTOMERS"                            9.853 MB   55500 rows
. . exported "SH"."COSTS":"COSTS_Q1_1998"                139.5 KB    4411 rows
. . exported "SH"."COSTS":"COSTS_Q1_1999"                183.5 KB    5884 rows
. . exported "SH"."COSTS":"COSTS_Q1_2000"                120.6 KB    3772 rows
. . exported "SH"."COSTS":"COSTS_Q1_2001"                227.8 KB    7328 rows
. . exported "SH"."COSTS":"COSTS_Q2_1998"                79.52 KB    2397 rows
. . exported "SH"."COSTS":"COSTS_Q2_1999"                132.5 KB    4179 rows
. . exported "SH"."COSTS":"COSTS_Q2_2000"                119.0 KB    3715 rows
. . exported "SH"."COSTS":"COSTS_Q2_2001"                184.5 KB    5882 rows
. . exported "SH"."COSTS":"COSTS_Q3_1998"                131.1 KB    4129 rows
. . exported "SH"."COSTS":"COSTS_Q3_1999"                137.3 KB    4336 rows
. . exported "SH"."COSTS":"COSTS_Q3_2000"                151.4 KB    4798 rows
. . exported "SH"."COSTS":"COSTS_Q3_2001"                234.4 KB    7545 rows
. . exported "SH"."COSTS":"COSTS_Q4_1998"                144.7 KB    4577 rows
. . exported "SH"."COSTS":"COSTS_Q4_1999"                159.0 KB    5060 rows
. . exported "SH"."COSTS":"COSTS_Q4_2000"                160.2 KB    5088 rows
. . exported "SH"."COSTS":"COSTS_Q4_2001"                278.4 KB    9011 rows
. . exported "SH"."SALES":"SALES_Q1_1998"                1.412 MB   43687 rows
. . exported "SH"."SALES":"SALES_Q1_1999"                2.071 MB   64186 rows
. . exported "SH"."SALES":"SALES_Q1_2000"                2.012 MB   62197 rows
. . exported "SH"."SALES":"SALES_Q1_2001"                1.965 MB   60608 rows
. . exported "SH"."SALES":"SALES_Q2_1998"                1.160 MB   35758 rows
. . exported "SH"."SALES":"SALES_Q2_1999"                1.754 MB   54233 rows
. . exported "SH"."SALES":"SALES_Q2_2000"                1.802 MB   55515 rows
. . exported "SH"."SALES":"SALES_Q2_2001"                2.051 MB   63292 rows
. . exported "SH"."SALES":"SALES_Q3_1998"                1.633 MB   50515 rows
. . exported "SH"."SALES":"SALES_Q3_1999"                2.166 MB   67138 rows
. . exported "SH"."SALES":"SALES_Q3_2000"                1.909 MB   58950 rows
. . exported "SH"."SALES":"SALES_Q3_2001"                2.130 MB   65769 rows
. . exported "SH"."SALES":"SALES_Q4_1998"                1.581 MB   48874 rows
. . exported "SH"."SALES":"SALES_Q4_1999"                2.014 MB   62388 rows
. . exported "SH"."SALES":"SALES_Q4_2000"                1.814 MB   55984 rows
. . exported "SH"."SALES":"SALES_Q4_2001"                2.257 MB   69749 rows
. . exported "SH"."SUPPLEMENTARY_DEMOGRAPHICS"           697.3 KB    4500 rows
. . exported "SH"."FWEEK_PSCAT_SALES_MV"                 419.8 KB   11266 rows
. . exported "SH"."PROMOTIONS"                           58.89 KB     503 rows
. . exported "SH"."TIMES"                                380.8 KB    1826 rows
. . exported "SH"."CAL_MONTH_SALES_MV"                   6.312 KB      48 rows
. . exported "SH"."CHANNELS"                              7.25 KB       5 rows
. . exported "SH"."COUNTRIES"                            10.20 KB      23 rows
. . exported "SH"."PRODUCTS"                             26.17 KB      72 rows
. . exported "SH"."COSTS":"COSTS_1995"                       0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_1996"                       0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_H1_1997"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_H2_1997"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q1_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q1_2003"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q2_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q2_2003"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q3_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q3_2003"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q4_2002"                    0 KB       0 rows
. . exported "SH"."COSTS":"COSTS_Q4_2003"                    0 KB       0 rows
. . exported "SH"."DIMENSION_EXCEPTIONS"                     0 KB       0 rows
. . exported "SH"."SALES":"SALES_1995"                       0 KB       0 rows
. . exported "SH"."SALES":"SALES_1996"                       0 KB       0 rows
. . exported "SH"."SALES":"SALES_H1_1997"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_H2_1997"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q1_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q1_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q2_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q2_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q3_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q3_2003"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q4_2002"                    0 KB       0 rows
. . exported "SH"."SALES":"SALES_Q4_2003"                    0 KB       0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/s01/sh1.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 23:30:20
/* 11g records more Data Pump information in the alert log; here the DM00 master process can be seen starting successfully */
DM00 started with pid=55, OS id=25347, job SYSTEM.SYS_EXPORT_SCHEMA_01
2011-05-02 23:28:35.814000 +08:00
Starting background process SMCO
SMCO started with pid=56, OS id=25349
DW00 started with pid=57, OS id=25351, wid=1, job SYSTEM.SYS_EXPORT_SCHEMA_01
2011-05-02 23:28:54.029000 +08:00
XDB installed.
XDB initialized.

Single Quotes Needed in expdp QUERY?

When Data Pump is used without a parfile, the query condition given in the QUERY parameter must be wrapped in single quotes. When a parfile is used, the single quotes must not be added; adding them instead triggers an LPX-314: an internal failure occurred error:

[maclean@rh2 mesg]$ expdp maclean/maclean directory=dump  tables=estimate_me query=estimate_me:"where t1<2000"
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 November, 2010 22:32:33
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** directory=dump tables=estimate_me query=estimate_me:where t1<2000
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "MACLEAN"."ESTIMATE_ME" failed to load/unload and is being skipped due to error:
ORA-00936: missing expression
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
/s01/dump/expdat.dmp
Job "MACLEAN"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 22:32:36
/* Without a parameter file and without single quotes, the export fails with ORA-00936: missing expression */
[maclean@rh2 mesg]$ expdp maclean/maclean directory=dump  tables=estimate_me query=estimate_me:'"where t1<2000"'
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 November, 2010 22:33:39
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** directory=dump tables=estimate_me query=estimate_me:"where t1<2000"
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MACLEAN"."ESTIMATE_ME"                     32.42 KB    1999 rows
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
/s01/dump/expdat.dmp
Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 22:33:43
/* Without a parameter file, wrapping the QUERY value in single quotes makes the export succeed */
[maclean@rh2 mesg]$ cat quote.par
tables=estimate_me
query=estimate_me:'"where t1<2000"'
directory=dump
[maclean@rh2 mesg]$ rm /s01/dump/expdat.dmp
[maclean@rh2 mesg]$ expdp maclean/maclean parfile=quote.par 
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 November, 2010 22:35:08
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** parfile=quote.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "MACLEAN"."ESTIMATE_ME" failed to load/unload and is being skipped due to error:
ORA-06502: PL/SQL: numeric or value error
ORA-31605: the following was returned from LpxXSLSetTextVar in routine kuxslSetParam:
LPX-314: an internal failure occurred
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
/s01/dump/expdat.dmp
Job "MACLEAN"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 22:35:11
/* With the parfile option and single quotes around the QUERY value in the parameter file, the export fails with LPX-314: an internal failure occurred */
[maclean@rh2 mesg]$ cat quote.par
tables=estimate_me
query=estimate_me:"where t1<2000"
directory=dump
[maclean@rh2 mesg]$ rm /s01/dump/expdat.dmp
[maclean@rh2 mesg]$ expdp maclean/maclean parfile=quote.par           
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 November, 2010 22:36:27
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** parfile=quote.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MACLEAN"."ESTIMATE_ME"                     32.42 KB    1999 rows
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
/s01/dump/expdat.dmp
Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 22:36:30
/* With a parfile and no single quotes around the query value, the export succeeds */
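The difference between the three runs comes down to which layer strips the quotes. A minimal shell sketch, using a hypothetical `show_argv` helper in place of expdp, of what the client actually receives in each case:

```shell
#!/bin/bash
# show_argv is a hypothetical stand-in for the expdp client: it prints
# each argument it receives, one per line, wrapped in brackets.
show_argv() { printf '[%s]\n' "$@"; }

# Command line: the shell strips the OUTER single quotes, so the client
# receives the double quotes intact -- the form that works.
show_argv query=estimate_me:'"where t1<2000"'

# Parfile: no shell is involved, so single quotes written in the file are
# passed through literally -- which is what triggers the LPX-314 failure.
printf '%s\n' "query=estimate_me:'\"where t1<2000\"'" > /tmp/quote.par
cat /tmp/quote.par
```

Run standalone, the first call prints `[query=estimate_me:"where t1<2000"]`, confirming that the single quotes never reach expdp when typed on the command line.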

1. QUERY in Parameter file.

Using the QUERY parameter in a parameter file is the preferred method. Put double quotes around the text of the WHERE clause.

Example to export the following data with the Export Data Pump client:

* from table scott.emp all employees whose job is analyst or whose salary is 3000 or more; and
* from table hr.departments all departments of the employees whose job is analyst or whose salary is 3000 or more.

File: expdp_q.par
-----------------
DIRECTORY = my_dir
DUMPFILE = exp_query.dmp
LOGFILE = exp_query.log
SCHEMAS = hr, scott
INCLUDE = TABLE:"IN ('EMP', 'DEPARTMENTS')"
QUERY = scott.emp:"WHERE job = 'ANALYST' OR sal >= 3000"
# place following 3 lines on one single line:
QUERY = hr.departments:"WHERE department_id IN (SELECT DISTINCT
department_id FROM hr.employees e, hr.jobs j WHERE e.job_id=j.job_id
AND UPPER(j.job_title) = 'ANALYST' OR e.salary >= 3000)"

-- Run Export DataPump job:

%expdp system/manager parfile=expdp_q.par

Note that in this example the TABLES parameter cannot be used, because all table names specified with the TABLES parameter must reside in the same schema.
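The note's parfile approach can be scripted end to end. A sketch, reusing the parameters shown above; the /tmp path, directory object, and credentials are illustrative placeholders:

```shell
#!/bin/bash
# Write the parameter file. No shell-quoting gymnastics are needed here,
# because expdp reads these lines directly rather than through a shell.
cat > /tmp/expdp_q.par <<'EOF'
DIRECTORY = my_dir
DUMPFILE = exp_query.dmp
LOGFILE = exp_query.log
SCHEMAS = hr, scott
INCLUDE = TABLE:"IN ('EMP', 'DEPARTMENTS')"
QUERY = scott.emp:"WHERE job = 'ANALYST' OR sal >= 3000"
EOF

# Then run the job against a real database (commented out here):
# expdp system/manager parfile=/tmp/expdp_q.par
grep -c '^' /tmp/expdp_q.par   # counts the parameter lines written
```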
2. QUERY on Command line.

The QUERY parameter can also be used on the command line. Again, put double quotes around the text of the WHERE clause.

Example to export the following data with the Export Data Pump client:

* table scott.dept; and
* from table scott.emp all employees whose name starts with an 'A'

-- Example Windows platforms:
-- Note that the double quote character needs to be 'escaped'
-- Place following statement on one single line:

D:\> expdp scott/tiger DIRECTORY=my_dir DUMPFILE=expdp_q.dmp
LOGFILE=expdp_q.log TABLES=emp,dept QUERY=emp:\"WHERE ename LIKE 'A%'\"

-- Example Unix platforms:
-- Note that all special characters need to be 'escaped'

% expdp scott/tiger DIRECTORY=my_dir \
DUMPFILE=expdp_q.dmp LOGFILE=expdp_q.log TABLES=emp,dept \
QUERY=emp:\"WHERE ename LIKE \'A\%\'\"

-- Example VMS platform:
-- Using three double-quote characters

$ expdp scott/tiger DIRECTORY=my_dir -
DUMPFILE=exp_cmd.dmp LOGFILE=exp_cmd.log TABLES=emp,dept -
QUERY=emp:"""WHERE ename LIKE 'A%'"""
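On Unix shells there is a less error-prone alternative to backslash-escaping every special character: single-quote the whole QUERY value. A sketch comparing what reaches the client in each style (`show_argv` is a hypothetical helper standing in for expdp):

```shell
#!/bin/bash
# show_argv is a hypothetical stand-in for expdp's command-line parsing.
show_argv() { printf '[%s]\n' "$@"; }

# Backslash-escaped form from the note above: the double quotes survive,
# but the unquoted spaces split the value into four separate words
# (expdp still accepts this form, as the note shows).
show_argv QUERY=emp:\"WHERE ename LIKE \'A\%\'\"

# Single-quoting the whole value keeps it as one word; only the embedded
# single quotes around A% need the '\'' escape sequence.
show_argv QUERY=emp:'"WHERE ename LIKE '\''A%'\''"'
```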

Note that with the original export client two jobs were required:
-- Example Windows platforms:
-- Place following statement on one single line:

D:\> exp scott/tiger FILE=exp_q1.dmp LOG=exp_q1.log TABLES=emp
QUERY=\"WHERE ename LIKE 'A%'\"

D:\> exp scott/tiger FILE=exp_q2.dmp LOG=exp_q2.log TABLES=dept

-- Example Unix platforms:

> exp scott/tiger FILE=exp_q1.dmp LOG=exp_q1.log TABLES=emp \
QUERY=\"WHERE ename LIKE \'A\%\'\"

> exp scott/tiger FILE=exp_q2.dmp LOG=exp_q2.log TABLES=dept

-- Example VMS platform:

$ exp scott/tiger FILE=exp_q1.dmp LOG=exp_q1.log TABLES=emp -
QUERY="""WHERE ename LIKE 'A%'"""

$ exp scott/tiger FILE=exp_q2.dmp LOG=exp_q2.log TABLES=dept

Note that with the original export client in Oracle8i on VMS, the syntax was different (also note the extra space that is needed between the two single quotes):
... QUERY="'WHERE ename LIKE \'A%\' '"
That is: [double_quote][single_quote]WHERE ename LIKE [backslash][single_quote]A%[backslash][single_quote][space][single_quote][double_quote]

No way to disable datapump estimate?

Last night I went to a customer site to assist with an export/import job. This was a 10.2.0.1 system on Solaris 10, and the export used the expdp Data Pump utility. The source was a single table of roughly 120 GB holding the past year's data; only the most recent three months were actually needed, so the QUERY parameter was used and a parallelism of 2 was specified.
The job had never been tested beforehand and the command was written ad hoc. When it actually ran, the estimate phase alone took well over ten minutes. By default estimate uses the BLOCKS method, which predicts the dump file size from the database blocks allocated to the object segments; there is also a STATISTICS method, which derives the size from the objects' statistics:

The block method of estimate is the least accurate. We investigate how many blocks were allocated for the storage segments for the tables specified. If the table had been recently analyzed, the estimate=statistics would be the most accurate. So remember, it is important to generate statistics before calculating the estimate using estimate=statistics method. Additionally, ESTIMATE=BLOCKS method could be inaccurate when:

a) The table was created with a much bigger initial extent size than was needed for the actual table data

b) Many rows have been deleted from the table, or a very small percentage of each block is used.

When statistics have been explicitly set via the DBMS_STATS package in order to force certain optimizer plans, instead of being directly collected from the data, ESTIMATE=STATISTICS may not report correct results.

In practice, however, neither method can correctly estimate the dump file size when QUERY is used:

[maclean@rh2 udump]$ expdp maclean/maclean directory=dump  tables=estimate_me query=estimate_me:'"where t1<2000"'  estimate=statistics
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 November, 2010 21:28:26
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** directory=dump tables=estimate_me query=estimate_me:"where t1<2000" estimate=statistics
Estimate in progress using STATISTICS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "MACLEAN"."ESTIMATE_ME"                     104.7 MB
Total estimation using STATISTICS method: 104.7 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MACLEAN"."ESTIMATE_ME"                     32.42 KB    1999 rows
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
/s01/dump/expdat.dmp
Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 21:28:29
/* The STATISTICS method sizes the dump file from the table statistics, which makes it easy to manipulate */
begin
dbms_stats.set_table_stats(ownname => 'MACLEAN',tabname => 'ESTIMATE_ME',numrows => 999999999999999,numblks => 99999999999999999999);
end;
/
PL/SQL procedure successfully completed.
Estimate in progress using STATISTICS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "MACLEAN"."ESTIMATE_ME"                     10244 GB
Total estimation using STATISTICS method: 10244 GB
/* On the repeated export, Data Pump picked up the faked statistics */
[maclean@rh2 udump]$ expdp maclean/maclean directory=dump  tables=estimate_me query=estimate_me:'"where t1<2000"'  estimate=blocks
Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 November, 2010 21:31:58
Copyright (c) 2003, 2007, Oracle.  All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "MACLEAN"."SYS_EXPORT_TABLE_01":  maclean/******** directory=dump tables=estimate_me query=estimate_me:"where t1<2000" estimate=blocks
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "MACLEAN"."ESTIMATE_ME"                       192 MB
Total estimation using BLOCKS method: 192 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MACLEAN"."ESTIMATE_ME"                     32.42 KB    1999 rows
Master table "MACLEAN"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MACLEAN.SYS_EXPORT_TABLE_01 is:
/s01/dump/expdat.dmp
Job "MACLEAN"."SYS_EXPORT_TABLE_01" successfully completed at 21:32:01
/* The BLOCKS method above estimated the dump file at 192 MB, and the STATISTICS method at 104.7 MB;
yet the 1999 rows actually exported with the QUERY parameter occupy only 32 KB!
The estimate scored a spectacular own goal here.
*/
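The mismatch is easy to see once you know what BLOCKS actually computes: allocated blocks times the block size, with no awareness of QUERY at all. A sketch of the arithmetic (the block size and block count are assumed example values matching the 192 MB run above, not queried from a database):

```shell
#!/bin/bash
# Assumed example values: an 8 KB block size and the block count that
# dba_segments would report for a 192 MB segment.
block_size=8192
allocated_blocks=24576
estimate_bytes=$((block_size * allocated_blocks))
# The estimate is the same no matter how few rows QUERY will keep.
echo "$((estimate_bytes / 1024 / 1024)) MB"   # 192 MB
```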

A quick Metalink search turned up a bug on 10.2.0.1 where expdp spends a very long time in the estimate phase:

EXPDP Slow and Estimate Takes Long Time [ID 822283.1]
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 10.2.0.3
This problem can occur on any platform.
Symptoms
Data Pump Export is very slow and a long time is spent estimating the data size.
EXPDP is stuck for a long time at:
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** parfile=dpexp.par
Estimate in progress using BLOCKS method...
An AWR report taken while EXPDP was running slowly shows that the query below took a long time to complete:
SELECT /*+rule*/ SYS_XMLGEN(VALUE(KU$),
XMLFORMAT.createFormat2('TABLE_DATA_T', '7')), 0 ,KU$.BASE_OBJ.NAME
,KU$.BASE_OBJ.OWNER_NAME ,'TABLE' ,to_char(KU$.BYTES_ALLOC) ,
to_char(KU$.ET_PARALLEL) ,KU$.FGAC ,KU$.NONSCOPED_REF ,KU$.XMLSCHEMACOLS
,KU$.NAME , KU$.NAME ,'TABLE_DATA' ,KU$.PART_NAME ,KU$.SCHEMA_OBJ.OWNER_NAME
,KU$.TS_NAME , KU$.TRIGFLAG
,decode(KU$.SCHEMA_OBJ.TYPE_NUM,2,decode(bitand(KU$.PROPERTY,8192),8192,
'NESTED TABLE','T'),19,'PARTITION',20,'PARTITION','SUBPARTITION')
,to_char(KU$.UNLOAD_METHOD)
FROM SYS.KU$_TABLE_DATA_VIEW KU$
WHERE NOT BITAND(KU$.BASE_OBJ.FLAGS,128)!=0 AND NOT (BITAND
(KU$.BASE_OBJ.FLAGS,16)=16) AND KU$.BASE_OBJ.NAME IN
('PA_DRAFT_INVOICES_ALL') AND KU$.BASE_OBJ.OWNER_NAME IN ('PA') AND NOT
EXISTS (SELECT 1 FROM SYS.KU$NOEXP_TAB A WHERE A.OBJ_TYPE='TABLE' AND
A.NAME=KU$.BASE_OBJ.NAME AND A.SCHEMA=KU$.BASE_OBJ.OWNER_NAME) AND NOT EXISTS
(SELECT 1 FROM SYS.KU$NOEXP_TAB A WHERE A.OBJ_TYPE='SCHEMA' AND
A.NAME=KU$.BASE_OBJ.OWNER_NAME);
Cause
The issue here is with using RBO.
Data pump is not choosing Cost Based Optimizer.
This is because of the 'unpublished' bug 5929373 - DATA PUMP TAKES LONG TIME TO EXPORT DATA
This bug is closed as duplicate of another bug << 5464834 >>
Solution
1. Upgrade to 10.2.0.4. The bug is fixed in 10.2.0.4.
Or
2. Apply << patch 5464834 >> if available for your platform:
* Login to Metalink
* Go to 'Patches and Updates'
* Click on 'Simple Search'
* Enter the patch number 5464834
* Select your platform from the dropdown list.
* The patch, if available, will be displayed.
* Download and apply the patch.

The bug can be resolved by upgrading, but that was no help in the moment. My first thought was to disable the estimate altogether, yet after checking a number of sources it appears there is currently no way to turn the estimate off, which is the sad part. After patiently waiting out the estimate phase, the actual data export took only ten minutes or so; judged by that, DataPump has a rather bureaucratic way of doing business.

Separately, when the number of dump files specified is smaller than the requested parallelism and the volume of data is large, ORA-39095 may be raised and the export job terminates unexpectedly:

39095, 00000, "Dump file space has been exhausted: Unable to allocate %s bytes"
// *Cause:  The Export job ran out of dump file space before the job was
//          completed.
// *Action: Reattach to the job and add additional dump files to the job
//          restarting the job.
EXPDP generating ORA-39095 : " dump file space has been exhausted"
Applies to:
Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.2 - Release: 10.2 to 11.2
Information in this document applies to any platform.
Symptoms
Checked for relevance on 10-22-2010
Errors while performing Datapump Export:
- when exporting a large number of tables, the export completely stops
- the jobs may remain in the datapump export tables
- No trace generated at the time of the failure.
ERROR:
ORA-39095: "dump file space has been exhausted. Unable to allocate 4096 bytes". Job system.sys_export_full_02 stops due to fatal error.
Cause
The customer specified only a limited number of files in the "dumpfile" parameter, with a limited "filesize", i.e.:
full=Y
directory=dpump_dir
dumpfile=full_1.dmp,full_2.dmp,full_3.dmp,full_4.dmp
filesize=3000000000
This means 4 dump files with a filesize of 3000000000 bytes (2.79 GB) each.
The dumpfile parameter is the issue here.
Solution
1- You can use the dynamic format (i.e. dumpfile=full_%U.dmp):
The 'wildcard' specification for the dump file can expand up to 99 files. If 99 files have been generated before the export has completed, it will again return the ORA-39095 error.
2- If this is yet not enough and more files are needed, a workaround would be to specify a bigger 'filesize' parameter.
3- If this is inconvenient, another option is to use this syntax:
dumpfile=fullexp%U.dmp, fullexp2_%U.dmp, fullexp3_%U.dmp
which can expand up to 3*99 files.
If encountering problems containing the dump in a single directory using this solution, you may prefer this syntax:
dumpfile=dmpdir1:fullexp1_%U.dmp, dmpdir2:fullexp2_%U.dmp, dmpdir3:fullexp3_%U.dmp
(assuming the 3 directory objects listed above had been already created first).
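The 99-file ceiling mentioned above exists because %U is just a two-digit counter. A sketch of how one template expands (the template name is illustrative):

```shell
#!/bin/bash
# Expand a DataPump-style %U template the way the server does: a
# two-digit counter from 01 to 99, so one template yields at most 99 files.
expand_template() {
  local template=$1 i
  for i in $(seq -w 1 99); do
    echo "${template/\%U/$i}"
  done
}
expand_template "fullexp%U.dmp" | head -3   # fullexp01.dmp .. fullexp03.dmp
expand_template "fullexp%U.dmp" | wc -l     # 99
```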

The fix is simple: drop the parallelism, or specify a dynamic (%U) dump file name.
Noted for the record; now back to catch up on sleep!

