Oracle DUL frequently asked questions

dictv8.ddl does not unload the data dictionary!

Check whether the database was migrated from Oracle7 to Oracle8. A sure sign of this is that the rfile# of the first SYSTEM datafile is NOT 1 but 4 or 16.
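If the database can still be opened, a quick way to check this is the rfile# column in v$datafile (a minimal sketch; file 1 is the first SYSTEM datafile):

SQL> select file#, rfile# from v$datafile where file# = 1;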

The solution requires DUL version 8.0.6.3 or higher. Starting from this version DUL has a special bootstrap command, which enables you to generate a correct dict.ddl in a few easy steps.
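A minimal sketch of those steps, assuming control.dul already lists the first SYSTEM tablespace datafile (the executable name and prompt are illustrative; the exact output differs per DUL version):

$ dul
dul> bootstrap;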

DUL and large files (files > 2GB)

The current versions for VMS and NT should not have any problems with these large files. Starting from DUL version 8.0.6.7, DUL reports whether it can do 32-bit I/O (no large file support) or 64-bit I/O with large file support. Also starting from 8.0.6.7, DUL uses large file I/O on all Unix versions, provided the operating system supports the 64-bit POSIX interface. If large file support is not available, use the workarounds below.

DUL and large database files workaround

The solution is to split the big data file into multiple smaller parts. Each part should not be bigger than 2GB, because 32-bit I/O functions fail during open() of such a file.

Example: the database block size is 8K and the datafile bigdata.dbf is about 5GB. To be more precise, it has 655360 data blocks according to the Oracle RDBMS; the size according to v$datafile is 5368709120 bytes. In real life the file is 8192 bytes bigger, because an extra leader block is added to it. This block zero can cause some confusion. We are going to split the file into parts that each contain 250000 real Oracle blocks, comfortably below the 262144 blocks of 8K that fit in 2GB. This means that the first part has to be one block bigger because of block zero.

The actual commands to split the file are:

dd if=bigdata.dbf of=part1.dbf bs=8k count=250001
dd if=bigdata.dbf of=part2.dbf bs=8k skip=250001 count=250000
dd if=bigdata.dbf of=part3.dbf bs=8k skip=500001
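As a sanity check (the byte counts follow from the 8K block size): part1.dbf should be 250001 × 8192 = 2048008192 bytes, part2.dbf 250000 × 8192 = 2048000000 bytes, and part3.dbf the remaining 155360 blocks, 1272709120 bytes:

ls -l part*.dbf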

The control file should look like this (assuming tablespace 5 datafile 6):

5 6 part1.dbf endblock=250000
5 6 part2.dbf startblock=250001 endblock=500000
5 6 part3.dbf startblock=500001

DUL and large output files workaround

Even if all the data files are small, it is possible that a generated output file is bigger than 2GB. The export mode of DUL generates the smallest output files of all the output formats. If even that file is too big, you should use the named pipe trick. The Unix command to make a named pipe varies between platforms; it could be mkfifo or mknod.

Example:

# create the named pipe
mkfifo SCOTT_BIGEMP.dat
# start a subshell that will read from the pipe
( dd bs=100k of=part1 count=20000
  dd bs=100k of=part2 count=20000
  dd bs=100k of=part3 count=20000 ) < SCOTT_BIGEMP.dat &
# run DUL and unload the table
unload table scott.bigemp;
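Each dd in the subshell copies at most 20000 × 100k = 2048000000 bytes, so every part stays just under the 2GB limit; add more dd lines if you expect more output.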

“itl count too high” when unloading an Oracle8i database

For Oracle8i there is a subtle change in the block header. The only solution is to use a more recent version of DUL (at least 8.0.5.3).

Scan database reports index-organized tables for normal tables

Or “bad sub type” while running the generated script.

There has been a mix-up in DUL in the type numbers for Index and Table. When printing the statements, the indexes were printed as tables and vice versa. The workaround is to edit the generated script. The solution is to use a more recent version of DUL (at least 8.0.5.4).

Can DUL be ported to a desktop platform?

DUL versions 3 and higher are available for NT and Windows 95.

Can you port DUL to platform xyz?

If you supply me with an account, bduijnen, on such a machine, it should not be too difficult. Access to an ANSI-C compiler is absolutely necessary. The make, rlogin and rcp capabilities are very much appreciated. To test DUL, access to a database is required (both normal DBA access and read access to the datafiles).

Can I use DUL to unload offline or read-only tablespaces of a production system?

Because DUL bypasses the transaction and cache layers in the RDBMS, it is not a good idea to use it against a production system: you can get incorrect results.

What do you mean by “Support and Internal use only”?

The idea is that you can use DUL for a customer to retrieve otherwise lost data. Most of the time we go on site, because using DUL is not so easy. (And we charge a normal consultancy fee.)

Do you have DUL for NetWare?

Sorry, I do not have DUL for NetWare.

NetWare is a bit of a special OS, more a file server than anything else. Another reason is that I do not have the proper environment to build and test it.

But do not worry, DUL can work cross-platform. In general people use a Windows 95 or NT client to access the data files and unload from there. (The NT executable should run on Windows 95 as well.)
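A minimal sketch of the control.dul on such a client, after copying the datafiles over in binary mode (the drive letter, file names and tablespace/file numbers are hypothetical):

1 1 D:\dulwork\system01.dbf
5 6 D:\dulwork\bigdata.dbf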

How to use DUL on VMS?

The latest executables for VMS are linked, not shared; I hope this allows the executables to run on multiple VMS versions. The executable is shipped in a backup saveset together with the matching dict?.ddl scripts. After downloading, the file will have the wrong record layout and you must convert it first. Use the following commands:

$!create correctly formatted backup saveset
$ set file/attribute=(rfm:fix,lrl:32256) dul4vms.bck
$!extract the files from the saveset into the current directory
$ backup/log  dul4vms.bck/save *

Does DUL support raw devices?

DUL will work on raw devices, but DUL is not raw device aware. On some platforms the datafile does not start at the beginning of the raw device, and DUL does not skip this extra part automatically. The easiest way to configure DUL in this case is the optional extra offset in the control file. The extra offsets that I am aware of are 4K on AIX raw devices and 64K for DEC Unix:

1 /dev/rdsk/rawdevice/aix 4096
2 /dev/rdsk/rawdevice/decunix 65536

For all the others I never found an extra offset. If you find a new one, let me know so I can add it to the list.
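One way to check a platform for an unknown offset (the device name is hypothetical) is to dump the start of the raw device and look where the Oracle data begins; anything before it that looks like a volume header suggests an extra offset:

dd if=/dev/rdsk/rawdevice bs=4k count=4 | od -x | more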

Many corrupted block messages during scan database on raw devices

It is quite common to make the raw device bigger than the actual data file. In other words, the last part of the raw device has never been formatted as proper Oracle blocks. Check the block numbers reported in the error messages: you should see that the errors start at a certain block, and from there on almost every block is bad.

To prevent extra problems, DUL does not use the size as stored in the file header, so it will read the whole raw device, including the unused part at the end. It might also error out with a strange error at the end of the device, depending on how the device driver behaves when you read past the end of the device.

The solution is to specify the size of the datafile in the control.dul. Just add to the datafile line:

blocks size_in_oracle_blocks
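For example, extending the AIX line from above (the block count is hypothetical; take the real number of blocks from v$datafile):

1 /dev/rdsk/rawdevice/aix 4096 blocks 250000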
