Oracle DUL Salvaging Oracle Data with Data UnLoader

If you cannot recover the data by yourself, ask Parnassusdata, the professional Oracle database recovery team, for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: service@parnassusdata.com

ORACLE PRM-DUL Download: http://zcdn.parnassusdata.com/DUL5108.zip

 


 

 

Salvaging Data with Data UnLoader
Full Agenda
Chapter 1: Overview
Chapter 2: Basics
Chapter 3: Parameters and Commands 
Chapter 4: Simple Data Salvage Scenarios
Chapter 5: Complex Data Salvage Scenarios
Chapter 6: Loading Data
Chapter 7: Internal Files
Chapter 8: Labs
Chapter 1: Overview
•	Prereqs: 
•	General knowledge of Oracle Database Architecture
•	High level file structures
•	Instance processes
•	General knowledge of Oracle Database Operations
•	Database states: started, mounted, open, shutdown 
•	export/import utilities
•	SQL*Loader
Overview Agenda
•	Concepts
•	When to use Data UnLoader?
•	“Normal” Data Recovery
•	Features
•	Limitations
•	Key Scoping Questions
•	Expectations
Overview: 
Concepts: What is Data UnLoader?
•	A C program developed by Bernard van Duijnen
•	Extracts only DATA for tables and clusters directly from database files (*.dbf)
•	Creates standard Export Dump files (*.dmp) or the combination SQL*Loader control/data files (.ctl/.dat) for loading data into a new or functioning database 
•	A stand-alone utility that has no dependence on the database instance being mounted or open
	DUL is strictly for Support and internal use only. 
Overview: 
Concepts: How it’s done
•	Unformats data by directly scanning datafiles (.dbf)
•	Data/segment/extent information and metadata is extracted from SYSTEM.dbf file if available
•	Data/segment/extent information is extracted using heuristics and user knowledge if SYSTEM.dbf file is NOT available
•	Does dirty reads (only data written to disk is extracted; data in memory is lost; archive logs are not used) 
•	Later enhancements have been made to extract data from standard export (.exp) or data pump files (.edp) as an alternative (although still a work in progress)
Overview: 
Concepts: Security
•	Data Security
•	Because DUL directly reads .dbf files
•	The database can be completely down
•	There is no database userid to audit access
•	Extracted data is easily viewable
•	Program Security
•	The availability and distribution of the DUL executable must be tightly limited and controlled
Overview: 
Concepts: History
•	Originally
•	An engineer was engaged on-site only 
•	The engineer was responsible for installing and removing the DUL executable from the client machine
•	Now
•	Remote is the preferred, faster method of delivery
•	Some low level, time-based “copy” protection has been implemented to expire DUL after 45 days
	Note: Most government contracts will not allow remote access and on-site is the only option.
Overview: 
Concepts: “Remote” Access Options
•	Customer could give direct access to host (VPN/putty)
•	This usually involves customer’s Security department and is either denied or takes too long for all approvals
•	Conduct a Webex or Beehive shared session to view and control activity with the DBA
•	The DUL.exe needs to be e-mailed/ftp’d to the DBA
•	Talk the DBA through the process on the phone
•	The DUL.exe needs to be e-mailed/ftp’d to the DBA
•	Pre-configure control files, e-mail to DBA, receive logs
•	ASE should be fairly knowledgeable of DUL
Overview: 
When to use Data UnLoader?
•	Generally, a decision is made by Global Product Support (GPS) from a Service Request
•	Only after all documented and undocumented data recovery methods have been evaluated and rejected
•	When “normal” methods of data/database recovery are not applicable or not practicable
	Note: While the most dramatic use of this process is for a downed database, it may also be used when the database is up and just partial data is missing or corrupt.
Overview: 
When to use Data UnLoader?
•	Examples
•	Customer has no usable backup
•	Inadequate backups/archives result in data loss
•	Internal errors or corruption are preventing the database from opening
•	MTTR is not acceptable using normal recovery methods (e.g. too many archive logs to apply)
•	Loss of SYSTEM datafile
•	A corrupt datafile has taken a tablespace offline
Overview:
When to use Data UnLoader?
•	DUL is intended to retrieve data that cannot be retrieved by other means
•	It is not an alternative to Database Recovery
•	It is a last resort, not for normal production usage
•	Note: There can be logical inconsistencies in the data retrieved.
	There is no guarantee of success!
•	This is primarily because of the unpredictable nature of corruption, and because not all database features and data types have been implemented.
Overview:
“Normal” Database Recovery
•	Current database recovery options (manual) 
•	restore last backup and roll forward with archive logs
•	import from last full backup export
•	use SQL*Loader to re-load the data from source
•	use parallel create table as select (PCTS)
•	use transportable tablespaces
•	build a cloned database, export data, then import
•	RMAN (more automated)
•	Initiate an RMAN session with appropriate directives
Overview: 
“Normal” Database Recovery
•	Undocumented init.ora database parameters
•	_corrupted_rollback_segments
•	_allow_resetlogs_corruption  
•	Diagnostic/Editing tools
•	opatch (very old tool, see Notes 28864.1, 33182.1)
•	Block Browser/Editor (BBED) 
•	Note: BBED is no longer distributed starting with 11g
Overview:
“Normal” DB Recovery: Limitations
•	No alternatives in the case of loss of SYSTEM tablespace or datafile(s) 
•	The database must be in reasonably good condition or recovery is not possible even with the undocumented parameters 
•	Editing datafiles is cumbersome, dangerous, and not guaranteed to work
•	Extensive internal structure knowledge required
•	Certain corruptions are beyond editing
Overview: 
Supported Features
•	Works against Oracle v6 through v11  (12?)
•	Available for multiple (but not all) platforms
•	Linux, AIX, Solaris, Windows
•	DUL for HP-UX has been unavailable since 8/13/12
•	Cross platform extracts are possible
•	NOT an alternative to standard methods of recovery
•	NOT an officially supported utility
•	Use only as a last resort!
Overview: 
Supported Features
•	Reads data dictionary if SYSTEM datafile is available
•	However, it can extract data without SYSTEM datafile
•	Without SYSTEM, user/object/column names are lost
•	Can unload individual tables, tables by schema (users), tables by object number, or the entire database
•	Can unload extents from incomplete tables
•	Can unload cross platforms
•	e.g. unload on Windows, load on UNIX
•	Supports standard Oracle database constructs 
Overview: 
Supported Features: Constructs
•	chained rows, migrated rows 
•	hash/index clusters
•	longs, raws, blobs
•	rowids, dates, numbers
•	multiple free list groups
•	segment high water mark
•	unlimited extents
•	partitioned tables
Overview: 
Supported “New” Features
•	Configurable for Automatic Storage Management (ASM)
•	Easier configuration (tablespace and relative file numbers are now read directly from .dbf files)
•	Unexp (extract from an export file)
•	Unpump (extract from a data pump file)
•	SQL*Loader can now be used for long raws and blobs
•	Note:  The current version of DUL is 10.2.0.5.31
•	Base Release from 6/9/2011
Overview: 
Limitations / Restrictions
•	Database can be corrupt, but individual blocks must be good.  An error will go to the log for each bad block.
•	DUL unloads only table and cluster data
•	No support for MLSLABELS
•	No support for multibyte character sets
•	No support for some 11g features (e.g. encryption)
•	DUL v10 can be used against v11 datafiles
Overview: 
Limitations on 11g features (11/2012)
•	11g secure file lobs are in beta testing
•	Label security is not supported yet
•	Encryption is not supported yet
•	ASM disks on exadata cells are not supported yet
•	11g ASM with variable extent sizes and ASM striping support has only recently become available
•	Complex types not supported in export_mode 
•	The generated dump file in export_mode=true does not contain metadata
Overview:
Key Scoping Questions
•	Before proceeding with an engagement, find out:
•	What platform? OS version? DB version? Blocksize?
•	Is the DB up or down? Can it be mounted? Opened? 
•	Is the SYSTEM.dbf datafile available? Readable?
•	Is remote access possible?  Is on-site required? 
•	How large is the DB or table(s) to be extracted?
•	Which data format is preferred? Export? SQL*Loader?
•	Is there enough available disk space for output?
•	Remind the customer that there is no guarantee of success.
Overview: 
Expectations
•	From ASE
•	Assess the customer environment and conditions 
•	Determine what data is needed or requested
•	Configure DUL for extracting data
•	“Best effort” to run DUL to deliver export dump files or SQL*Loader files with data to the customer
•	Assist customer with 
•	Interpreting the DUL log
•	Planning for loading and verifying the data
Overview: 
Expectations
•	From Customer
•	Access to all required .dbf files (telnet, owc, ftp?)
•	Create new database (required if current db won’t open)
•	Load and verify data
•	If SYSTEM.dbf was missing, rename application objects and columns with assistance from application developers or functional staff members
•	Rebuild all required missing objects (indexes, triggers..)
•	Verify application functionality
Overview: 
Expectations
•	General questions from customer
•	How long will a DUL engagement take?
•	The setup time depends on 
•	what .dbf files are available, where and when
•	what kind of corruption is encountered 
•	The extract time is dependent on the size of the database
•	The time to interpret results depends on
•	availability of SYSTEM.dbf
•	The time to reload the new database
•	The customer may not need ASE at this point, once a few loads are completed
Overview:
The Future of DUL
•	Bernard continues to enhance DUL, however it is more difficult to support new features of 11g and 12c like encryption and compression.
	As more customers embrace unsupported new features, DUL will have more limited application.
•	Newer versions of the Oracle Database have additional data protection/recovery features (e.g. recycle bin, flash recovery/flashback options)
	As more customers embrace newer recovery features, there should be less calls for DUL.
Overview:
The Future of DUL
•	There will always be hardware/software failures and human error which cause databases to crash and lead to data loss.
•	There will always be customers using older versions of the database.
•	DUL will continue to be a valuable tool for salvaging data, although with limitations.
Quiz
1) After configuring and running DUL, an unknown error appears in the dul.log.  Which of the following should be done to get help quickly?
•	Open a SEV-1 SR with GPS immediately
•	Send an e-mail to service@parnassusdata.com or call his cell phone
•	Send an e-mail to service@parnassusdata.com
Quiz
2) The DUL program should not be executed while the database is mounted. 
•	True
•	False
Quiz
3) What is the output from a DUL run?
•	Export .dmp files
•	SQL*Loader .dat files
•	SQL*Loader .ctl files
•	A dul.log
•	All of the above
•	No output, DUL will restore data in place
Quiz
4) As long as the customer has a good SYSTEM.dbf datafile, DUL will extract all the required customer data and format an export dump file or SQL*Loader .dat data file.
•	True 
•	False
Quiz
5) As long as the customer has a good SYSTEM.dbf datafile, the only additional work for the customer will be to import the export files or run SQL*Loader.
•	True
•	False 
Quiz
6) When the customer is missing the SYSTEM.dbf datafile, DUL can still extract usable data. The ASE will rename table and column names to their appropriate application usable identities.
•	True
•	False
Chapter 2: Basics
•	Prereqs: 
•	Intermediate knowledge of Oracle Database Server
•	Use of imp utility, interactive and batch modes
•	Use of SQL*Loader 
•	General knowledge of UNIX/Windows command level
•	vi or other text editor on UNIX
•	WordPad on Windows
•	From https://stbeehive.oracle.com/teamcollab/overview/DUL
•	Download DataUnLoader_Toolkit.zip
•	Download DataUnLoader_Labs.zip
Basics: Agenda
•	Data UnLoader Architecture
•	What is needed?
•	Configuring DUL
•	Installing and Running DUL
•	Reviewing the output log
Basics:
Data UnLoader Architecture
•	INPUT
•	.dbf datafiles (can be offline copy, at alternate location)
•	2 .dul “control” files (not the DB control files)
•	User commands at DUL> prompt
•	dul.exe or dul (single file is about 500k)
•	OUTPUT
•	.dmp files (for importing later) or
•	.ctl and .dat files (for SQL*Loader)
•	Naming will be schema_tablename.filetype
•	log file (dul.log)
Basics:
What is needed?
•	Missing data? Got .dbf? Need dul.exe? Go to 
•	https://stbeehive.oracle.com/teamcollab/overview/DUL
•	Download executable for your desired platform
•	Best Practice: If going on-site, download all platforms ahead of time in case internal Oracle access is later prohibited.
•	Create and Configure an init.dul
•	OS and DB dependent parameters, directives
•	Create and Configure a control.dul 
•	Map datafiles to location on disk 
•	ASM is now supported (new feature)
Basics:
Go to the New DUL Website
Basics:
Download executable
Basics:
Configuring an init.dul
•	Contains parameters to help DUL understand the format and platform of the datafiles (.dbf) and what output format to use
•	This plain text file has information specifying  
•	DUL cache size
•	Details of header layout
•	Oracle block size
•	Output file format
•	SQL*Loader format and record size
•	Other optional directives
Basics:
Configuring an init.dul
•	Configuring the DUL cache sizes
•	These must be large enough to hold all entries from the DBA_TAB_COLUMNS, _TABLES, _OBJECTS, and _USERS tables
•	DC_COLUMNS =
•	DC_TABLES =
•	DC_OBJECTS = 
•	DC_USERS =
•	Note: DUL now uses these as minimum values and can dynamically increase as needed when run.
Basics:
Configuring an init.dul
•	Configuring the platform dependent parameters 
•	OSD_BIG_ENDIAN_FLAG = true/false
-	“Endianness” refers to the ordering of bytes within a word
-	On a big-endian platform, the 4 bytes of a word are stored high to low: the most significant byte is at the lowest address
-	On a little-endian platform, the least significant byte is stored at the lowest address 
-	Note: For DB >= 10g, this is the only osd parm needed
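The byte-order difference can be seen in a quick Python sketch (illustration only; DUL reads this flag from init.dul or, in recent versions, infers the format from the .dbf headers):

```python
import struct

# The same 32-bit word written to disk under each byte order.
word = 0x0A0B0C0D

big = struct.pack(">I", word)     # big-endian: most significant byte first
little = struct.pack("<I", word)  # little-endian: least significant byte first

print(big.hex())     # 0a0b0c0d -> high byte at the lowest address
print(little.hex())  # 0d0c0b0a -> low byte at the lowest address
```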
Basics:
Configuring an init.dul
•	OSD_DBA_FILE_BITS =
-	The number of bits in a database block address (dba) used for the low-order part of the file number.
•	OSD_C_STRUCT_ALIGNMENT =
-	Structure layout in data file headers.
•	OSD_FILE_LEADER_SIZE =
-	Number of blocks/bytes before the Oracle file header. Unix datafiles have an extra leading block (file size, block size, magic number)
Basics:
Configuring an init.dul
•	Database specific parameters
•	db_block_size = 8192 /* db default block size */
•	compatible = 9            /* db version */
•	Sql*Loader or Export format parameters
•	export_mode = false   /* for sql*loader, true=exp */
•	ldr_enclose_char = "  /* delimiter for sql*loader */
•	ldr_phys_rec_size = 81  /* output length */
•	Note: Additional parameters and values will be detailed in Chapter 3.
Basics:
Configuring an init.dul (sample)
dc_columns = 200000
dc_tables = 40000
dc_objects = 40000
dc_users = 2048
osd_big_endian_flag = true  # Only osd parm needed on >=10g
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_file_leader_size = 1
osd_word_size = 32
db_block_size = 8192
compatible=9
export_mode=false
ldr_enclose_char = "
ldr_phys_rec_size = 81
Basics:
Minimal init.dul
Latest Update from Bernard (7/7/14)
	DUL will now try to figure out all configuration parameters from the dbf headers
	Init.dul only needs:
	COMPATIBLE=n
	OSD_BIG_ENDIAN_FLAG=true/false 
–	For cross platform only
	When specified, DUL will use the parameter values that you set in init.dul
	A GENERIC_init.dul is included in DUL ToolKit
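Under those rules, a minimal init.dul might contain nothing more than the following (the values are illustrative; set COMPATIBLE to the source database version):

```
compatible = 11
# only needed for cross-platform unloads:
osd_big_endian_flag = false
```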
Basics:
Configuring a control.dul
•	Specifies where the data to be extracted is currently located
•	Datafile locations can be obtained at the OS level
•	If the database can mount, use SQL on v$datafile
•	Now supports 3 formats for line entries
•	file_piece_spec (traditional)
•	asm_disk_spec 
•	block_index_spec
Basics:
control.dul: file_piece_spec
•	[[tablespace_no] relative_file_number] data_file_name
[ extra_leader_offset ]         
[ startblock block_no ]         
[ endblock block_no ]
[ block_size nk ]
Note:  New with version >= 10, the tablespace_no and relative_file_number are optional!  They can now be read directly from the .dbf file header (if not corrupt). 
Basics:
Creating control.dul: file_piece_spec
If the datafile headers are corrupt or otherwise not accessible to DUL, you may be able to run the SQL below, provided the database is mountable, or run it on a standby database. 
sqlplus /nolog 
connect / as sysdba 
startup mount
set trimspool on pagesize 0 linesize 256 feedback off 
column name format a200 
spool control.dul 
select ts#, rfile#, name from v$datafile; 
exit 
Basics:
Creating control.dul: file_piece_spec
•	The previous SQL assumes the database is mounted
•	However, it is also possible to generate the control.dul from a database control file 
•	>strings <database_control_file_name> > control.dul
•	You can get the ts# and rfile# numbers by doing a dump from another database (if version < 10):
alter system dump datafile 'c:\app\jdydek\oradata\orcl\users01.dbf' block min 1 block max 2;
•	Look for these fields in the trace file:
File Number=4
RelFno: 4
Basics:
Sample Output: alter system dump datafile
Start dump data block from file D:\MY_DATA\DULCLASS\USERS.DBF minblk 1 maxblk 2
V10 STYLE FILE HEADER:
Compatibility Vsn = 169869568=0xa200100
Db ID=2596130611=0x9abdcf33, Db Name='XE'
Activation ID=0=0x0
Control Seq=323=0x143, File size=12800=0x3200
File Number=4, Blksiz=8192, File Type=3 DATA
…
File Space Header Block:
Header Control:
RelFno: 4, Unit: 8, Size: 12800, Flag: 9
AutoExtend: YES, Increment: 1280, MaxSize: 655360
Initial Area: 7, Tail: 12800, First: 42, Free: 1557
Deallocation scn: 0.0
Header Opcode:
Save: No Pending Op
End dump data block from file D:\MY_DATA\DULCLASS\USERS.DBF minblk 2 maxblk 2
Basics:
control.dul: with optional ts# and file#
0 1 /p012/oracle/GTU5ASP/system.dbf
1 2 /p014/oracle/GTU5ASP/undotbs01.dbf
2 3 /p015/oracle/GTU5ASP/sysaux01.dbf
4 4 /p028/oracle/GTU5ASP/pinn_data01.dbf
0 5 /p030/oracle/GTU5ASP/system02.dbf
4 6 /p020/oracle/GTU5ASP/pinn_data02.dbf
1 7 /p018/oracle/GTU5ASP/undotbs02.dbf
4 8 /p033/oracle/GTU5ASP/pinn_data03.dbf
•	The above can be obtained by running the following SQL on the database in ‘mount’ mode:
select ts#, rfile#, name from v$datafile;
•	Note: Even without ts# and rfile#, this is useful to capture all datafile names
Basics:
control.dul: without optional parms
/p012/oracle/GTU5ASP/system.dbf
/p014/oracle/GTU5ASP/undotbs01.dbf
/p015/oracle/GTU5ASP/sysaux01.dbf
/p028/oracle/GTU5ASP/pinn_data01.dbf
/p030/oracle/GTU5ASP/system02.dbf
/p020/oracle/GTU5ASP/pinn_data02.dbf
/p018/oracle/GTU5ASP/undotbs02.dbf
/p033/oracle/GTU5ASP/pinn_data03.dbf
•	Note: DUL v10 can now read the tablespace number and relative file number directly from .dbf headers if not corrupt. 
Basics:
control.dul: asm_disk_spec
•	DISK device_name [ disk group options ]
•	disk group option ::= GROUP disk_group_name
| DISK_NO disk_number_in_group
| F1B1 File1_Block1_location
•	Note: The device name is generally sufficient. Other properties are retrieved from the file headers
Basics:
Sample control.dul: asm_disk_spec
# ASM disks for disk groups
disk /media/maxtor/asm/dgn1
disk /media/maxtor/asm/dgn2
disk /media/maxtor/asm/dgn3
disk /media/maxtor/asm/dgodd
# system datafile in the first asm disk group
+DGN/db102/datafile/system.257.621616979
# users datafile in a different disk group
+DGODD/db102/datafile/users.257.621616683
•	11/27/12: The latest DUL executables have a new layer of ASM support for fine-grained striping and variable extent sizes.
Basics:
control.dul: block_index_spec
•	BLOCK INDEX block_index_name
•	A block index is a way to access Oracle blocks on corrupt file systems
•	Before using it in control.dul, use the DUL command:
•	CREATE BLOCK INDEX index_name ON device ;
•	Useful to merge multiple disk images or to unload from corrupted file systems. This is only useful in extreme file-system corruption scenarios.
Basics:
Installing and Running DUL
•	dul.exe is command and parameter driven, using a structure similar to SQL*Plus (about 18 commands)
•	See Chapter 3 and the online manual for command details
•	There is no “installer” program. Just copy the .exe program to a desired directory. Create init.dul and control.dul with a text editor.
•	Run DUL from a UNIX shell or Windows DOS prompt.
•	>./dul.exe       /* for UNIX */
•	>dul               /* for Windows */
•	Best Practice:  DUL output goes to the same directory where the executable is located.  Create a separate directory for dul.exe and output while letting the control.dul direct the input.
Basics:
Running DUL: Common Commands
•	Some common, standard commands (interactive or batched)
•	SHOW PARAMETERS;	/* display all parm settings */
•	BOOTSTRAP;		/* read dictionary metadata */
•	SCAN DATABASE;	/* scans all known datafiles */
•	SCAN TABLES; or SCAN EXTENTS;
•	UNLOAD DATABASE;	/* extracts all users/tables */
•	UNLOAD TABLE schema.table (…..) ;
•	UNLOAD USER user_name;
•	UNLOAD OBJECT number;
•	EXIT; or BYE;		/* end dul session */
•	Chapter 3 will review details of all commands
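The commands above can also be batched in a command file passed as DUL's first argument. A hypothetical command file (the user name scott is illustrative):

```
rem salvage.dul - run as: dul salvage.dul
show parameters;
rem bootstrap requires a readable SYSTEM.dbf
bootstrap;
unload user scott;
exit;
```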
Basics:
“New” commands / features
•	Updated user guide online (getting better all the time)
•	ASM support (control.dul format is different)
•	SCAN LOB SEGMENT FOR …
•	UNEXP TABLE schema.table_name
•	UNPUMP …  (wip, check with Bernard)
•	dul.exe has some built-in “copy” protection
•	Always download the current version
•	Date sensitive inactivation may stop the program from  executing
	Cannot start now
	You need a more recent DUL version for this os
Basics:
Reviewing the dul.log
•	Each successive execution of dul.exe will create a log
•	By default, the log is named “dul.log” 
•	When doing multiple runs of DUL, 
•	By default, each run will overwrite the dul.log
•	Set RESET_LOGFILE=false to append dul.logs for each run or rename dul.log between each run
•	Note:  Although you may set only about 10 parameters in the init.dul, there are actually about 50 possible parameters.  Those not set will have default values.  The log will show all parameters’ values for that specific run.
Basics:
Sample dul.log
DUL version 10.2.0.5.17 with 64-bits i/o
Init.dul parameter settings:
_SLPE_DEBUG = FALSE
ALLOW_CHECKSUM_MISMATCH = FALSE
ALLOW_DBA_MISMATCH = FALSE
ALLOW_OTHER_OBJNO = FALSE
ALLOW_TRAILER_MISMATCH = FALSE
ASM_DO_HARD_CHECKS = TRUE
AUTO_UPDATE_CHECKSUM = TRUE
AUTO_UPDATE_TRAILER = TRUE
BUFFER = 1048576
CF_FILES = 1022
CF_TABLESPACES = 64
COMPATIBLE = 10
CONTROL_FILE = control.dul
DB_BLOCK_SIZE = 8192
DB_NAME = 
Basics:
Sample dul.log
…
UNEXP_VERBOSE = FALSE
USE_LOB_FILES = FALSE
USE_SCANNED_EXTENT_MAP = false
VERIFY_NUMBER_PRECISION = TRUE
WARN_RECREATE_FILES = TRUE
WRITABLE_DATAFILES = FALSE
Reading USER.dat 42 entries loaded
Reading OBJ.dat 16246 entries loaded and sorted 16246 entries
Reading TAB.dat 1512 entries loaded
Reading COL.dat 58768 entries loaded and sorted 58768 entries
Reading TABPART.dat
DUL: Error: File TABPART.dat, line 1: token missing
DUL: Warning: Ignoring file TABPART.dat cache
Basics:
Sample dul.log
Database national character set is AL16UTF16
Entries from control file control.dul:
DUL: Error: open( 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF', Read Only)
OS error 2: No such file or directory
DUL: Error: open( 'C:\ORACLEXE\ORADATA\XE\DULDAT01.DBF', Read Only)
OS error 2: No such file or directory
DUL: Error: open( 'D:\DULDAT02-corrupt.DBF', Read Only)
OS error 2: No such file or directory
DUL> show parameters
…
DUL> bye
Goodbye
Basics:
Simple vs. Complex Data Extracts 
•	Simple: With the SYSTEM datafile
•	Schema names, table names, column names and attributes are accessible
•	BOOTSTRAP; reads info from the data dictionary
•	Complex: Without the SYSTEM tablespace
•	Schema names, table names, attribute names must be mapped by the customer
•	SCAN DATABASE; SCAN TABLES; 
or SCAN EXTENTS;
•	Set use_scanned_extent_map=true to UNLOAD
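A sketch of the complex (no SYSTEM.dbf) flow as a DUL session; the object number 12345 is a placeholder, real numbers come from the SCAN output:

```
scan database;
scan tables;
alter session set use_scanned_extent_map = true;
unload object 12345;
```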
Basics
Other Complex Data Extracts 
•	Complex:  With or without SYSTEM.dbf
•	Extract data from a truncated table
•	Extract data from a dropped table
•	Extract data from a table whose rows were deleted
•	Extract original data from a truncated table that has since been partially inserted into
•	Note: Beginning with 10g, new database features (e.g. recycle bin, flashback) can avoid many of the above scenarios, but only if they are properly configured.
Basics:
Preparing to engage
•	DUL Preflight Questions list
•	Get preflight answers via e-mail or scoping call
•	Set customer expectations
•	Download current exe zip/gz file for platform
•	DUL Utility (Note: date effective copy protection)
•	DUL Manual
•	If going on-site, copy to a CD or USB drive
•	Note that all platforms can fit on a single CD
•	Cross platform DUL is possible
•	If not going on-site, e-mail or ftp to customer
Quiz
1) If misconfigured and DUL produces an error message, what steps are necessary to rerun?
•	Modify the parameters and restart
•	Reinstall DUL and move .dbf files to new directory
•	Backup the log and datafiles
•	DUL cannot be rerun
Quiz	
2) Explain the function of these files:
•	init.dul
•	control.dul
•	dul.exe
•	dul.log
•	system.dbf
Quiz
3) Explain the differences in the state of the data dictionary after each of the following and which technique would be used to extract data:
a) the database won’t open
b) the database won’t mount
c) a table is truncated
d) table rows are deleted
e) a table is dropped
Chapter 3: 
Parameters and Commands
•	Prereqs: 
•	Intermediate knowledge of Oracle Database Server
•	Use of imp utility, interactive and batch modes
•	Use of SQL*Loader 
•	General knowledge of UNIX/Windows command level
•	vi or other text editor on UNIX
•	WordPad on Windows
Parameters and Commands 
Agenda
•	Startup sequence
•	Most Useful Parameters 
•	Rarely Used Parameters
•	Undocumented Parameters
•	Most Useful Commands
•	Rarely Used Commands
Parameters and Commands
Startup Sequence
>dul [optional command filename]
•	The parameter file (default "init.dul") is processed. 
•	The control file (default "control.dul") is processed.
•	Tries to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL's data dictionary cache. 
•	Tries to load seg.dat and col.dat. 
•	Display “DUL>” prompt and accept commands (or run the commands in the filename specified as a first argument in the command line) 
Parameters and Commands 
Most Useful Parameters
•	CONTROL_FILE = control.dul
•	File name where .dbf location input comes from
•	“control.dul” is the default
•	LOGFILE = dul.log
•	File name where the run output is written
•	“dul.log” is the default
•	RESET_LOGFILE = [TRUE/false]
•	If true, each run of DUL overwrites the previous dul.log
Parameters and Commands 
Most Useful Parameters
•	DB_NAME=text
•	DB_ID=number
•	These optional parameters can be used to tell DUL exactly which database instance (by name or id) to process.
•	Useful if customer unintentionally supplies .dbf files from multiple databases.
•	DUL will report mismatched .dbf files in the dul.log
•	It would be best to remove these .dbf files from the control.dul and rerun the dul session. 
Parameters and Commands 
Most Useful Parameters
•	EXPORT_MODE=[true/false]
•	If true, DUL outputs export (.dmp) files
•	If false, DUL outputs SQL*Loader files (.dat and .ctl)
•	When using SQL*Loader output
•	LDR_ENCLOSE_CHAR = "   /* delimiter  char */
•	LDR_OUTPUT_IN_UTF8 =[true/false]
•	LDR_PHYS_REC_SIZE = 81
•	If 0, no fixed size; each line ends with new line character
•	If >2, each line is a fixed size
Parameters and Commands 
Most Useful Parameters
•	BUFFER=1048576
•	Specifies a number of bytes for buffering a row
•	The row output buffer size is used in both export and SQL*Loader modes
•	Each row is first stored in this buffer
•	Only complete rows without errors are written to the output file
•	DUL will fail if this size is exceeded and must be restarted
•	In most cases, the default 1 MB is sufficient. If DUL fails with a "buffer too small" error, don't panic; simply increase BUFFER and restart.
Parameters and Commands 
Most Useful Parameters
•	FILE_SIZE_IN_MB=number
•	A number specifying the maximum dump (.dmp) file size in megabytes. The default 0 means unlimited.
•	This is useful when the customer has limited disk space
•	Dump files larger than this are split into multiple pieces as DUL is running
•	This gives the customer a chance to copy files to another file system and reclaim space by deleting
•	Each partial dump file can be imported independently
Parameters and Commands 
Most Useful Parameters
•	COMPATIBLE=[6,7,8,9,10,11]
•	Specify the database version 
•	DB_BLOCK_SIZE=8192
•	Specify the default database block size
•	USE_SCANNED_EXTENT_MAP = [FALSE/true]
•	When false (default), use extent map from seg header
•	When true, use extent map from external ext.dat file created from SCAN command to unload tables
•	The ext.dat may be generated separately in a different run when more complex conditions require it
Parameters and Commands 
Rarely Used Parameters
•	ALLOW_CHECKSUM_MISMATCH = FALSE
•	If true, skips checks for block checksum
•	ALLOW_DBA_MISMATCH = FALSE
•	If true, skips check for block address
•	ALLOW_OTHER_OBJNO = FALSE
•	If true, skips check for valid object numbers
•	ALLOW_TRAILER_MISMATCH = FALSE
•	If true, skips check for block trailer
•	Be careful before setting any of these to TRUE!
Parameters and Commands 
Rarely Used Parameters
•	FILE = text
•	text is used as a base to generate 8.3 DOS filenames
•	MAX_OPEN_FILES = 8
•	The maximum number of .dbf files left open concurrently at the OS level
•	PARSE_HEX_ESCAPES = FALSE
•	If false (default), \\xhh hex escape sequences are not interpreted while parsing
•	If true, \\ escape sequences can be specified in character strings
•	This is also used for specifying multi-byte characters
Parameters and Commands 
Rarely Used Parameters
•	WARN_RECREATE_FILES = TRUE
•	If true, log warning message when files are overwritten
•	If false, suppress warning messages
•	WRITABLE_DATAFILES = FALSE
•	If false (default), DUL is ensured to be READ-ONLY
•	If true, DUL can do some limited datafile editing in command mode
Parameters and Commands 
Undocumented Parameters
•	_SLPE_DEBUG=FALSE
•	ASM_DO_HARD_CHECKS = TRUE
•	AUTO_UPDATE_CHECKSUM = TRUE
•	AUTO_UPDATE_TRAILER = TRUE
•	CF_FILES = 1022
•	CF_TABLESPACES = 64
•	DEFAULT_CHARACTER_SET = 
•	DEFAULT_NATIONAL_CHARACTER_SET = 
Parameters and Commands 
Undocumented Parameters
•	DB_ID =
•	DB_NAME =
•	FEEDBACK = 0
•	OSD_MAX_THREADS = 1055
•	SCAN_DATABASE_SCANS_LOB_SEGMENTS = TRUE
•	SCAN_STEP_SIZE = 512
•	TRACE_FLAGS = 0
Parameters and Commands 
Undocumented Parameters
•	UNEXP_MAX_ERRORS = 1000
•	UNEXP_VERBOSE = FALSE
•	USE_LOB_FILES = FALSE
•	VERIFY_NUMBER_PRECISION = TRUE
•	Although functions of some of the undocumented parameters may be intuitive, it is recommended to first e-mail and ask Bernard to be sure.
Parameters and Commands 
Most Useful Commands
•	[ALTER SESSION] SET init.dul parameter =  value ;
•	Most parameters can be changed dynamically
•	BOOTSTRAP;
•	This scans the data dictionary and builds external files used for mapping USERs, TABLES, EXTENTS, and COLUMNS during later UNLOAD commands.  
•	Only functions if SYSTEM.dbf is available  
•	DESCRIBE  owner_name.table_name ;
•	REM any_text_till_end_of_line 
•	REM  is NOT allowed inside ddl statements 
Parameters and Commands 
Most Useful Commands
•	SCAN DATABASE;
•	Scans all blocks of all configured data files. 
•	Two or three files are generated
•	SEG.dat
–	found segment headers 
•	EXT.dat
–	contiguous table/cluster data blocks
•	SCANNEDLOBPAGE.dat
–	information for each lob datablock
–	If SCAN_DATABASE_SCANS_LOB_SEGMENTS=TRUE
Parameters and Commands 
Most Useful Commands
•	SCAN TABLES;
•	Uses SEG.dat and EXT.dat as input.        
•	Scans for all tables in all data segments (a header block and at least one matching extent with at least 1 table).
•	SCAN EXTENTS;
•	Uses SEG.dat and EXT.dat as input.
•	Scans all extents for which no corresponding segment header has been found.  Only useful if a tablespace is incomplete or a segment header is corrupt.
Parameters and Commands 
Most Useful Commands
•	SHOW DBA dba ; 
•	dba to file_no block_no calculator	   
•	SHOW DBA  rfile_no block_no ;  
•	file_no block_no to dba calculator	   
•	SHOW SIZES ;                   
•	shows size of important structures	   
•	SHOW PARAMETER;                
•	shows the current values of all parameters	   
•	SHOW LOBINFO;                  
•	shows lob indexes found with SCAN DATABASE       
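The SHOW DBA conversions are simple bit operations on the 32-bit data block address. A minimal Python sketch of the arithmetic, assuming the common osd_dba_file_bits = 10 split (10 high bits for the relative file number, 22 low bits for the block number; the split is platform dependent):

```python
OSD_DBA_FILE_BITS = 10                 # common value; platform dependent
BLOCK_BITS = 32 - OSD_DBA_FILE_BITS    # 22 bits left for the block number

def dba_to_file_block(dba):
    """SHOW DBA dba: split a 32-bit data block address into (rfile#, block#)."""
    return dba >> BLOCK_BITS, dba & ((1 << BLOCK_BITS) - 1)

def file_block_to_dba(rfile_no, block_no):
    """SHOW DBA rfile_no block_no: combine the pair back into a dba."""
    return (rfile_no << BLOCK_BITS) | block_no

dba = file_block_to_dba(4, 11)   # file 4, block 11
print(dba, dba_to_file_block(dba))
```

If osd_dba_file_bits differs on your platform, only the two constants change.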
Parameters and Commands 
Most Useful Commands
•	SHOW DATAFILES; 
•	summary of configured datafiles       
•	SHOW ASM DISKS; 
•	summary of configured asm disks       
•	SHOW ASM FILES; 
•	summary of configured datafiles on asm       
•	SHOW ASM FILE  cid ;
•	extent information for asm file
Parameters and Commands 
Most Useful Commands
•	UNLOAD DATABASE;  /* everything except SYS */
•	UNLOAD USER user_name; 
•	UNLOAD [TABLE]  [ schema_name. ] table_name 
[ PARTITION ( partition_name ) ] 
[ SUBPARTITION ( sub_partition_name ) ] 
[ ( column_definitions) ] 
[  cluster_clause  ] 
[  storage_clause  ] ;
Parameters and Commands 
Most Useful Commands
•	UNLOAD EXTENT  table_name               
[ (  column_definitions  ) ]              
[ TABLESPACE  tablespace_no  ]               
FILE  extent_start_file_number      
BLOCK extent_start_block_number 
BLOCKS  extent_size_in_oracle_blocks ; 
Parameters and Commands 
Most Useful Commands
•	UNLOAD LOB SEGMENT FOR 
[schema_name.]table_name [(column name)] ;
•	UNLOAD LOB SEGMENT STORAGE 
( SEGOBJNO data obj#) ;
•	EXIT, QUIT, EOF, BYE all cause DUL to terminate.
Parameters and Commands 
Rarely Used Commands
•	COMMIT;     
•	Writes the changed blocks to the data file
•	Most of the time, DUL is used as READ-ONLY!
•	CREATE BLOCK INDEX  index_name  ON  device ;
•	A block index contains the address of valid oracle blocks found in a corrupt file system. Useful to merge multiple disk images or to unload from corrupted file systems. 
•	This is only useful in extreme file system corruption scenarios
Parameters and Commands 
Rarely Used Commands
•	DUMP [ TABLESPACE  tablespace_no ]     
[ FILE  file_no  ]     
[ BLOCK  block_no  ]     
[ LEVEL  level_no  ] ;     
•	Not a complete blockdump, mainly used for debugging.     The data block address is remembered.
•	EXTRACT  asm file name  to  output file name  ;  
•	Copies any ASM file from a disk group to the file system.     (there was a problem with online redologs, this needs more testing)
Parameters and Commands 
Rarely Used Commands
•	MERGE block_index INTO [  segment  ];
•	The merge command uses the information in the index file to locate possible data blocks. It looks for a combination of file numbers and object id. 
•	Each candidate block is compared to the current block in the datafile. 
•	If the current block is bad or has an older scn, the candidate will be written into the datafile. 
•	This is only useful in extreme file system corruption scenarios.
Parameters and Commands 
Rarely Used Commands
•	ROLLBACK; 
•	Cancels the UPDATE statements
•	SCAN DUMP FILE  dump file name 
[ FROM  begin offset  ]
[ UNTIL  end offset  ];
•	Scans an export dump file to provide the create/insert statements and the offsets in the dump file
Parameters and Commands 
Rarely Used Commands
•	SCAN LOB SEGMENT  storage clause ;
•	SCAN LOB SEGMENT FOR  
table name  [.  column name] ;
•	Scans the lob segment to produce LOBPAGE.dat information, but ONLY for this segment. 
•	Probably quicker and smaller. 
•	For partitioned objects use SCAN DATABASE;
Parameters and Commands 
Rarely Used Commands
•	UPDATE [ block_address ] 
SET UB1|UB2|UB4 @ offset_in_block = new_value ;
•	UPDATE [ block_address ] 
SET  block element name  = new_value ;        
•	Now and then we can repair something.
•	Patches the current block and dumps it.  
•	You can issue multiple UPDATE commands.  
•	Block is not written yet, use COMMIT to write. 
Parameters and Commands 
Rarely Used Commands
•	UNEXP [TABLE] [  owner. ]  table name        
(  column list  ) [ DIRECT ]       
DUMP FILE  dump file name       
FROM  begin offset  [ UNTIL  end offset  ]       
[ MINIMUM  min_number_of_cols  COLUMNS ] ;        
•	To unload data from a corrupted exp dump file. 
•	No special setup required, just the compatible parameter.       
•	The begin offset should be where a row actually begins. 
Parameters and Commands 
Rarely Used Commands
•	UNPUMP       
•	To unload data from a corrupted expdp (datapump) dump file.       
•	This is still a work in progress.  While the basic commands work, it is rather complex to use. 
•	Contact Bernard directly if this is needed.
Quiz
1) DUL is run at a command prompt and cannot be run in batch mode.
•	True
•	False
Quiz
2) DUL can run in parallel when used in batch mode
•	True
•	False
Chapter 4: Simple Scenarios
•	Prereqs: 
•	Intermediate knowledge of Oracle Database Server
•	Use of imp utility, interactive and batch modes
•	Use of SQL*Loader 
•	General knowledge of UNIX/Windows command level
•	vi or other text editor on UNIX
•	WordPad on Windows
Simple Scenarios
Agenda
•	Simple Data Recovery 
•	Full Database Data extract
•	Partial Database Data extract
•	Schema Data extract
•	Table Data extract
•	Extent Data extract
•	Data Dictionary extract
Simple Scenarios
Assumptions / Process
•	The SYSTEM.dbf datafile is available and not corrupt
•	The engineer doesn’t need to know anything about the application tables’ structure, column types, etc.
•	The metadata (user names, table names, column names, column types) is read from the SYSTEM.dbf with the BOOTSTRAP; command.
•	Information generated during the BOOTSTRAP command is stored in OS text files and can be read by subsequent DUL sessions.
Simple Scenarios
Full Database Extract
•	Unloading the entire database with all available .dbf files will generate output for all schemas (except SYS), even ones you may not be interested in (e.g. SYSTEM, XDB, MDSYS, OUTLN, etc.)
•	Configure init.dul and control.dul
•	Run dul
DUL>bootstrap;
DUL>unload database;
DUL>exit
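The two configuration files referenced above are plain text. A minimal init.dul sketch follows; the values are illustrative and must match the source platform and database (the OSD parameters are the same ones whose symptoms appear under the typical-error examples in this chapter):

```
osd_big_endian_flag    = false
osd_dba_file_bits      = 10
osd_c_struct_alignment = 32
osd_file_leader_size   = 0
db_block_size          = 8192
compatible             = 11
export_mode            = true
control_file           = control.dul
```

With export_mode = true DUL writes export dump (.dmp) files; set it to false for SQL*Loader .ctl/.dat output. control.dul then simply lists the datafiles to read, one per line, for example c:\app\jdydek\oradata\orcl\SYSTEM01.DBF.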
Simple Scenarios
Full Database Extract
While a Full Database Extract may be the fastest way to get some kind of result (particularly if customer is in a hurry), it will also create many more files than needed.
•	 If disk space is an issue for the customer (in the case of a very large database), creating too many files could be a problem
•	 Doing a Full Database Extract will take the maximum amount of time because each table extract is done serially
Simple Scenarios
Full Database Extract
Best Practice: Know exactly what data is critical for the customer and limit the DUL extract run.
•	 Limit configured dbfs in control.dul to only those that contain data the customer needs.
•	 DUL will save time by not scanning unnecessary dbfs
Best Practice: DUL does not automatically run in parallel, however multiple Partial Database Extract sessions can run in parallel if properly configured to save time.
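The parallel best practice above can be sketched as follows. This is a minimal illustration only: the dul binary name is assumed to be on the PATH, and each working directory still needs its own copies of init.dul and control.dul.

```python
import os
import tempfile

def prepare_parallel_sessions(schemas, base_dir):
    """Create one working directory per schema, each with its own DUL
    command file, so sessions do not overwrite each other's dul.log."""
    commands = []
    for schema in schemas:
        workdir = os.path.join(base_dir, f"dul_{schema.lower()}")
        os.makedirs(workdir, exist_ok=True)
        with open(os.path.join(workdir, "unload.txt"), "w") as f:
            f.write("bootstrap;\n")
            f.write(f"unload USER {schema};\n")
            f.write("exit\n")
        # hypothetical invocation; each session runs from its own directory
        commands.append(f"cd {workdir} && dul < unload.txt > run.log 2>&1 &")
    return commands

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        for cmd in prepare_parallel_sessions(["SCOTT", "APPS"], tmp):
            print(cmd)
```

Each generated command starts one background DUL session; the schemas list would come from the customer's priorities.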
Simple Scenarios
Partial Database Extract
•	Best Practice: To save time, eliminate dbfs from the control.dul that don’t contain critical data. These are typically UNDOTBS, SYSAUX, IDX, INDEX, TEMP, etc
•	The customer may identify other unnecessary dbfs
•	Configure init.dul and control.dul
•	Run dul
DUL>bootstrap;
DUL>unload database;
DUL>exit
Simple Scenarios
Partial Database Extract
While the commands for a Partial Database Extract may be the same as a Full, DUL is only aware of the .dbf files in the control.dul and consequently will limit output
•	 DUL will output an error message for objects whose tablespaces are referenced in the data dictionary but have no corresponding .dbf in the control.dul
•	 These errors can be ignored, although you may have to explain them to the customer
Simple Scenarios
Partial Database Extract
•	Best Practice: If the customer is uncertain about the contents of their application .dbf files, leave them in the control.dul, otherwise data may be missed
•	To save time, consider running parallel DUL sessions with each configured for a different schema or a different subset of tables (if the database is large)
•	If running multiple DUL sessions, consider running from separate directories to avoid overwriting dul.log
Simple Scenarios
Partial Database Extract
•	Another scenario requiring a Partial Extract: a customer has a single tablespace that has gone offline, possibly because only one of its datafiles is corrupt.
•	In this case, only include the dbfs for that tablespace (and system) and exclude all others from control.dul
•	Configure init.dul and control.dul
•	Run dul
DUL>bootstrap;
DUL>unload database;
DUL>exit
Simple Scenarios
Partial Database Extract
•	Best Practice: To eliminate many error messages and also save time, consider extracting at the schema or table level if the customer is able to be specific.
•	Note: In many real world scenarios, the database may not be down.  The data to be extracted may be a very small portion of the entire database.
•	A truncated table
•	A dropped user
•	A corrupted tablespace
Simple Scenarios
Schema Data Extract
•	Unload only the schema that the customer needs. 
•	Configure init.dul and control.dul
•	Run dul
DUL>bootstrap;
DUL>unload USER scott;
DUL>unload USER apps;
…
DUL>exit
Simple Scenarios
Table Data Extract
•	Unload only the exact tables that the customer needs
•	Configure init.dul and control.dul
•	Run dul
DUL>bootstrap;
DUL>unload TABLE scott.dept;
DUL>unload TABLE apps.po_orders;	
…
DUL>exit
Simple Scenarios
Table Data Extract
•	There are optional parameters to the UNLOAD TABLE command which can further limit what data is extracted.
•	UNLOAD [TABLE]  [  schema_name . ]  table_name
[ PARTITION(  partition_name ) ]              
[ SUBPARTITION(  sub_partition_name ) ]              
[ (  column_definitions ) ]              
[  cluster_clause  ]              
[  storage_clause  ] ;
Simple Scenarios
Table Data Extract
•	storage_clause ::=        
STORAGE ( storage_specification ) 
•	storage_specification ::= 
OBJNO object_id_number 
|       TABNO cluster_table_number
|       SEGOBJNO cluster/data_obj_number /*v7/v8 */
|       BLOCK data_segment_header_block_number
Simple Scenarios
Extent Data Extract
•	Taking the limits further, data can be extracted by extent (just a partial table, with limited rows)
•	This can be useful when the customer has partial table corruption and has already marked specific rows or blocks as unusable.
•	Configure init.dul and control.dul
•	Run dul
DUL>bootstrap;
DUL>unload EXTENT scott.customer …;
DUL>exit
Simple Scenarios
Extent Data Extract
•	UNLOAD EXTENT  table_name               
[ (  column_definitions  ) ]              
[ TABLESPACE  tablespace_no  ]               
FILE  extent_start_file_number              
BLOCK extent_start_block_number               
BLOCKS  extent_size_in_oracle_blocks ;
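A concrete (hypothetical) instance, guessing two column definitions and an 8-block extent starting at file 4, block 530:

```
UNLOAD EXTENT scott.customer ( col001 NUMBER, col002 VARCHAR2(11) )
FILE 4 BLOCK 530 BLOCKS 8 ;
```

The column definitions and the BLOCKS count are assumptions for illustration; BLOCKS must be large enough to cover the extent being salvaged.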
Simple Scenarios
Data Dictionary Data Extract
•	Since DUL only extracts data, it is up to the customer DBA to reconstruct indexes, triggers, etc from previously saved create scripts.
•	If the customer has not maintained create scripts for these other objects, it may be possible to construct by extracting source code from the data dictionary.
	UNLOAD table sys.source$;
	Note: If this data is extracted, DO NOT load into SYS on another database.  Import this data to a separate schema and write SQL to reverse engineer scripts.
Simple Scenarios
Review Output
•	dul.log
•	All parameter settings for that run are listed
•	db_id and/or db_name found is listed
•	All recognized datafiles are listed along with file#, relative file#,  startblock, #blocks, blocksize, and offset
•	Row counts for each schema.table written
•	Possible warning and error messages 
•	Best Practice: The exact version of DUL for this run is displayed in the header.  If there are any questions, send this log to the service@parnassusdata.com distribution list
Simple Scenarios
Review Output
•	Extracted files
•	schema_tablename.dmp or .dat or .ctl
•	dul.log example
Found db_id = 1321768231
0 1 c:\app\jdydek\oradata\orcl\SYSTEM01.DBF startblock 0 blocks 93441 block size 8192 (off=0)
4 4 c:\app\jdydek\oradata\orcl\USERS01.DBF startblock 0 blocks 641 block size 8192 (off=0)
…
DUL> unload table locationd (col001 number, col002 varchar2(11)) storage (extents (file 4 block 530));
. unloading table                 LOCATIOND       6 rows unloaded
DUL> exit
Simple Scenarios
Review Output Messages
•	In general, there is no comprehensive list of warnings or errors that may occur during a DUL session. 
•	Check the on-line user’s guide or e-mail helpdul_nl.
•	Warnings
•	These may typically be ignored, however try to understand why they are occurring.
•	Errors
•	These may cause DUL to terminate, or it may continue on with no results.  Use best judgment from reviewing the output dul.log.
Simple Scenarios
Review Output Typical Errors
•	You need a more recent DUL version for this os
•	Cause: DUL is recompiled approximately every 45 days with a different expiration date.  
•	Download the current available version from website. 
•	If it is current, e-mail Bernard immediately.
•	version GLIBC_2.11 not found (required by dul)
•	Cause: There are 2 versions of DUL for Linux
•	Download and try the alternate version of DUL for Linux (one build uses dynamically linked libraries)
Simple Scenarios
Review Output Typical Errors
•	Each table extracted should show a row count but: 
. unloading table    LOGSTDBY$FLASHBACK_SCN
DUL: Error: No entry in control file for block: ts# = 1 rfile# = 2 block# = 2050
DUL: Error: While processing ts# 1 file# 2 block# 2050
DUL: Error: Could not read/parse segment header
0 rows unloaded
•	Cause: By excluding datafiles like SYSAUX that don’t contain any customer application data, DUL will not find matching tablespaces and blocks that do exist in the data dictionary.  This is expected.
Simple Scenarios
Review Output Typical Errors
•	During startup, you might see:
DUL: Error: open( 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF', Read Only)
OS error 2: No such file or directory
•	Cause: A datafile specified in the control.dul is not at the location specified.
•	Either correct the location in the control.dul
•	Or ignore if the file is not really needed and was meant to be excluded.  DUL will continue and not terminate.
Simple Scenarios
Review Output Typical Errors
•	Unsupported feature / data type
DUL: Error: Type REF, typcode 110 is not supported(yet)
0000000000 00220208 6d6aa16a cad6403a b6c27d8e .".. mj.j ..@: ..}. 
•	Cause: DUL may find data types it doesn’t recognize
•	E-mail the error message to Bernard for an interpretation and impact statement. 
Simple Scenarios
Review Output Typical Errors
•	OS Dependent errors
•	If osd_dba_file_bits size is wrong:
DUL: Warning: Block[1][2] DBA in block mismatch [4][2] 
DUL: Warning: Bad cache layer header file#=1, block#=2 
•	If osd_c_struct_alignment is wrong:
DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262145 OS error 2: No such file or directory 
•	If db_block_size is wrong:
DUL: Warning: Block[1][2] DBA in block mismatch [513][1159680] 
DUL: Warning: File=1, block 2: illegal block version 2
DUL: Warning: Block[1][2] Illegal block type[0]
DUL: Warning: Bad cache layer header file#=1, block#=2
Quiz
1) You must have all .dbf files for the database available for DUL to retrieve customer required data.
•	True
•	False
Chapter 5: Complex Scenarios
•	Prereqs: 
•	Intermediate knowledge of Oracle Database Server
•	Use of imp utility, interactive and batch modes
•	Use of SQL*Loader 
•	General knowledge of UNIX/Windows command level
•	vi or other text editor on UNIX
•	WordPad on Windows
Complex Scenarios
Agenda
•	Assumptions / Process
•	Analyzing the entire database
•	Extract data from truncated table
•	Extract data from dropped table
•	Extract data from deleted table
Complex Scenarios
Assumptions / Process
•	The SYSTEM.dbf file is NOT available or is corrupt
•	Best Practice: Before proceeding, check whether the customer has an older, backup version of SYSTEM.dbf.  Unless there have been major changes to the database, internal object information may be the same and usable even from an old SYSTEM.dbf.
•	Column data types will be guessed by DUL as it tries to extract, but table and column names are lost
•	The guessed column data types/precision can be wrong
•	dul.log will contain statistics (counts, confidence level)  
Complex Scenarios
Assumptions / Process
•	A functional person with in-depth knowledge about the application and tables is required
•	The unloaded data has no value if you don’t know which table it came from.
•	DUL will only identify tables by object number
•	DUL will only identify columns by column number
•	Best Practice: DUL generates row counts for each table.  DUL row counts may be matched to tables if the customer has any recent reports with similar information.
Complex Scenarios
Assumptions / Process
•	This process requires at least 2 runs of DUL
•	Run DUL to analyze all objects, data types and create generic UNLOAD commands for each object
•	Review dul.log statistics and object to table mappings and column data types with the customer.  
•	Modify the UNLOAD commands if necessary.
Note: The customer’s actual column names may be substituted for the generated “COL001, COL002, COL003…”, and the table name for the object number, at this time.
•	Run DUL to process each modified UNLOAD command by object number, possibly in a batch file
Complex Scenarios
Assumptions / Process
•	Note: DUL will not find ending columns if they only contain NULLs
•	Trailing NULL columns are not stored in the database
•	The number of columns may not match the source table
•	Tables that have been dropped will be found and generate an UNLOAD with valid object number
•	When a table is dropped, the description is removed from the data dictionary only, however rows with the object number will still exist in the datafile.
•	Tables without rows will go unreported
Complex Scenarios
DUL Run 1: Analyze datafiles
•	Configure init.dul and control.dul
•	Be sure USE_SCANNED_EXTENT_MAP=false
•	Run dul
DUL>scan database;
DUL>set USE_SCANNED_EXTENT_MAP=true
DUL>scan tables;   /* or scan extents */
DUL>exit
Complex Scenarios
Review dul.log from 1st Run
•	Review statistics and generated UNLOAD commands
•	For each found object number, DUL will: 
•	Display its column analysis confidence level statistics
•	Post up to 5 rows of data in plain text 
•	Format a generic UNLOAD command with “best guess” column attributes
•	Tablename will be ‘OBJNOnnn’
•	Columns will be ‘COL001, COL002, etc’
•	Use the plain text to verify the identity of the table and to validate the data types that DUL guessed.
Complex Scenarios
Review dul.log from 1st Run
Analyzing segment: data object id 12088 segment header at ( file 4 block 11)   heap organized table
Col    Seen  Max PCT  PRINT  NUMBERS DATES TIMESTAMP WITH TZ INTRVAL ROWIDS LOB
no   count Size NUL 75%100% AnyNice AnyNice AnyNice AnyNice Y2M D2S AnyNice
1       4    2   0   0   0 100 100   0   0   0   0   0   0   0   0   0   0   0
2       4   22   0 100 100   0   0   0   0   0   0   0   0   0   0  50   0   0
DUL: Warning: No character set id for CHAR Column
"1" "Europe"                                                                    
"2" "Americas"                                                                  
"3" "Asia"                                                                      
"4" "Middle East and Africa"                                                    
UNLOAD TABLE OBJNO12088 ( COL001 NUMBER, COL002 VARCHAR2(22) )
STORAGE( DATAOBJNO 12088 );
Complex Scenarios
Modifying UNLOAD command
At this time, the generated UNLOAD command can be edited to match customer specifications:
UNLOAD TABLE OBJNO12088 ( COL001 NUMBER, COL002 VARCHAR2(22))
STORAGE( DATAOBJNO 12088 );
Might become:
UNLOAD TABLE location ( loc_id NUMBER (10), description VARCHAR2(25))
STORAGE( DATAOBJNO 12088 );
Complex Scenarios
DUL Run 2: Execute UNLOADs
•	Cut and Paste each relevant UNLOAD command:
•	To another text file for a batch run or
•	To another on-line DUL session if the number of UNLOADS is small and the amount of rows small
dul
DUL>UNLOAD TABLE location ( loc_id NUMBER (10), description VARCHAR2(25)) STORAGE( DATAOBJNO 12088 );
. unloading table                  LOCATION       4 rows unloaded
Complex Scenarios
Extract Data from Truncated Table
•	Since Oracle 8, every object has 2 object ids.
•	The standard object id is used for tracking dependencies in the data dictionary.
•	The data object id is used to mark the data blocks in a data segment.
•	When an object is created, the two are equal
•	Each time a table is truncated, the data object id is incremented.
•	To get data from before a truncate, you need the previous data object id (often the current data object id – 1)
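For example, reusing the hypothetical LOCATION table from the complex-scenario walkthrough: if the table was truncated once, its current data object id might be 12089 while the pre-truncate rows are still stamped 12088, so the unload targets the old id:

```
REM current dataobj# is 12089 after one TRUNCATE;
REM rows from before the truncate still carry 12088
UNLOAD TABLE location ( loc_id NUMBER(10), description VARCHAR2(25) )
STORAGE( DATAOBJNO 12088 );
```

All object numbers here are assumptions for illustration; always confirm the previous data object id before unloading, since those blocks may have been reused.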
Complex Scenarios
Data Extract from Truncated Tables
•	SQL to identify tables that have been truncated
SELECT name, obj#, dataobj# from sys.obj$ 
where obj# <> dataobj#; 
•	SQL to identify usernames and internal id
SELECT user_id, username from sys.dba_users
order by username;   /* or user_id */
Complex Scenarios
Data Extract from Truncated Table
REM Find truncated objects for a specific userid
set pages 60
col username        format a12 trunc
select d.username,
o.owner#, o.name, o.obj#, o.dataobj#
from sys.obj$ o,
sys.dba_users d
where o.obj#<>o.dataobj#         /* Truncated? */      
and o.owner# = d.user_id
and d.username = upper('&1')   /* For 1st parm */
order by owner#, name;
Complex Scenarios
Extract Data from Dropped Table
•	Beginning with 10g, Oracle has implemented a recycle bin. It is enabled by default.
•	If the database is up, the customer can recover a dropped table normally:
	Flashback table schema.tablename to before drop;
•	If the database is down or Flashback has not been configured, then the key to extracting data is identifying the correct data object id.
	From the dul.log, you may see multiple objects with similar row counts. The customer will need to identify.    
Complex Scenarios
Extract Data from Deleted Table
Test your understanding of Data UnLoader with the following scenario:
•	Flashback Recovery Area has not been configured or the version of the database does not support flashback
•	When trying to remove an employee, the customer forgets a WHERE clause for employee id and does:
	DELETE from hr.emp;  
	COMMIT;    /* all rows are deleted… now what?  */
•	How should DUL be configured to extract the data deleted from the hr.emp table?
Quiz
•	What steps are required after running DUL with no SYSTEM.dbf datafile available?
•	Review dul.log with customer
•	Identify tables based on row count
•	Modify UNLOAD commands to use proper table and column names
•	Cut / Paste modified UNLOAD commands
•	Rerun DUL and enter each UNLOAD command
•	All of the above
Chapter 6: Loading Data
•	Depending on customer needs, the output from a DUL session could possibly be
•	hundreds of export dump files (*.dmp) 
•	hundreds of SQL*Loader files (*.dat, *.ctl)
•	Some of these are temporary workfiles and can be ignored
•	Or just a few of each
•	hundreds of LOB*.dat or *.ctl
•	These are temporary DUL workfiles and can be ignored
•	Customer usable output will be named in the form of schema_tablename.dmp, .dat or .ctl
Loading Data
Next Steps
•	Once the output files are generated, the ASE would typically disengage and the customer DBA would take over. 
•	However some customers may need help loading the data.
•	If many files have been generated, the scripts on the following pages may help to batch them together.
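Since those scripts are not reproduced here, a sketch of the idea: scan the output directory and emit one load command per usable file. The imp/sqlldr invocations and the newuser credentials are illustrative assumptions, not a fixed recipe.

```python
import glob
import os

def build_load_commands(directory, connect="newuser/newuser"):
    """Emit one load command per usable DUL output file:
    imp for export dumps (.dmp), sqlldr for SQL*Loader control files (.ctl)."""
    commands = []
    for dmp in sorted(glob.glob(os.path.join(directory, "*.dmp"))):
        commands.append(f"imp {connect} file={dmp} full=y ignore=y")
    for ctl in sorted(glob.glob(os.path.join(directory, "*.ctl"))):
        commands.append(f"sqlldr {connect} control={ctl}")
    return commands
```

The resulting command list can be written to a batch file or shell script and reviewed before running; temporary DUL workfiles (e.g. LOB*.dat) should first be removed from the directory so they are not picked up.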
Chapter 7: DUL Internal Files
•	Prereqs: 
•	Intermediate knowledge of Oracle Database Server
•	Use of imp utility, interactive and batch modes
•	Use of SQL*Loader 
•	General knowledge of UNIX/Windows command level
•	vi or other text editor on UNIX
•	WordPad on Windows
•	Access to Excel for importing and parsing files
DUL Internal Files
•	DUL generates 2 types of temporary internal files
•	.dat
•	.ctl
•	Critical internal files used
•	EXT.dat			Extent map
•	SEG.dat			Segment map
•	SEEN_TAB.dat		Found table names
•	SEEN_COL.dat		Found column names
DUL Internal Files
•	EXT.dat and SEG.dat are generated during the initial scan of the database
•	DUL uses these files to scan datafiles for content
•	SEEN_COL.dat and SEEN_TAB.dat are used to produce the object statistics reported during a scan table/extent operation
•	All files can be used to perform advanced unload recovery operations with DUL
DUL Internal Files
•	EXT.dat  (created during a SCAN DATABASE;)
•	Contains info about contiguous table/cluster data blocks
•	Consists of 9 columns
•	Object ID
•	File# of segment header
•	Block# of segment header
•	File#
•	Block# of the first block after the segment header
•	Number of blocks = block offset with segment pointing to HWM
•	Number of tables, 1 if regular table
DUL Internal Files
•	SEG.dat  (created during a SCAN DATABASE;)
•	Contains info of segment headers
•	Consists of 4 columns
•	Object ID
•	File#
•	Rfile#
•	Block#
DUL Internal Files
•	SEEN_TAB.dat (created during a SCAN TABLES;)
•	Contains info on Table objects found
•	Consists of 6 columns
•	Object ID
•	Table #
•	Block #
•	??
•	# of columns
•	Row count
DUL Internal Files
•	SEEN_COL.dat (created during a SCAN TABLES;)
•	Consists of 16 columns
•	Object ID
•	Table #
•	Column #
•	Field Type
•	Max Internal Size
•	Null found
•	Number of rows
•	75% Character
DUL Internal Files
•	SEEN_COL.dat (continued)
•	100% Character
•	Number
•	% Nice Number
•	% Date
•	% Row ID
DUL Internal Files
Examples
•	SEG.dat
"77876" "4" "4" "522 "
•	EXT.dat
"4" "16777227" "6" "5" "77876" "1" "1" "1" "0"
•	SEEN_TAB.dat  
77876 4 522 0 2 5
•	SEEN_COL.dat
77876 0 1 2 2 F 5 0 0 0 5 5 0 0 0 0
77876 0 2 1 11 F 5 0 5 5 0 0 0 0 3 0
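The example lines above can be parsed mechanically before importing into Excel. A small Python sketch following the column layouts described earlier in this chapter (the unknown "??" column of SEEN_TAB.dat is read but ignored):

```python
import shlex

def parse_seg_line(line):
    """SEG.dat: object id, file#, rfile#, block# (fields are quoted)."""
    obj, file_no, rfile_no, block = (int(v) for v in shlex.split(line))
    return {"obj#": obj, "file#": file_no, "rfile#": rfile_no, "block#": block}

def parse_seen_tab_line(line):
    """SEEN_TAB.dat: object id, table#, block#, ??, #columns, row count."""
    obj, tab_no, block, _, n_cols, n_rows = (int(v) for v in line.split())
    return {"obj#": obj, "table#": tab_no, "block#": block,
            "columns": n_cols, "rows": n_rows}

# the sample lines from this chapter
seg = parse_seg_line('"77876" "4" "4" "522 "')
tab = parse_seen_tab_line("77876 4 522 0 2 5")
print(seg, tab)
```

From here the dictionaries can be written out as CSV for spreadsheet analysis, or joined on the object id to match segments to their table statistics.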
DUL Internal Files
Examples
•	To better understand the object numbers found, the columns found and the row counts, these files may be imported into Excel for analysis
Chapter 8: Exercises / Labs
•	Prereqs: Chapter 1
•	None
•	Prereqs: Chapter 2
•	Access to internal Oracle DUL website
•	Prereqs: Chapters 3-7 
•	Oracle Database 11g installed on laptop or
•	Access to a test/development Oracle 11g database
•	Full DBA administrative privileges (for verification)
Exercises / Labs
•	Chapter 1: Overview
•	none
Exercises / Labs
•	Chapter 2: Basics
•	Verify access to new DUL website, SSO is required
•	https://stbeehive.oracle.com/teamcollab/overview/DUL
•	Bookmark DUL website under Favorites
•	Download DUL for Windows (or preferred platform)
•	Expand zip/gz file to local directory
•	Download and save DUL User’s Guide as a local Word Document 
•	Browse Announcements, Forums/Help, Tasks, WikiPages/All Pages for available information
•	Add yourself to the helpdul_nl distribution list
Exercises / Labs
•	Chapters 3-7: Lab setup steps
•	Note: In a real world scenario, the database may be down, but to simplify lab setup and verification of results, you can keep the database up.  
•	DUL can run against .dbf files whether the database is up or down.
•	Login to your database as SYSDBA and run the following script to create a labuser, some tables under labuser, and a newuser for importing data to.
•	@create
Exercises / Labs
•	Chapter 3: Parameters and Commands 
•	Create an init.dul from copy of the generic_init.dul
•	Modify OSD parameters for your platform
•	Review and modify other parameters as desired
•	Generate a control.dul from the database
•	Run SQL against v$datafile if the database is mounted
	@control_dul.sql
•	Generate a control.dul from OS, edit as needed
•	Capture a file list of .dbf files from the oradata directory
	strings CONTROL01.dbf > control.dul 
Exercises / Labs
•	Chapter 4: Simple Data Salvage Scenarios
1) Perform a Full Database Extract using the init.dul and control.dul from Chapter 3 exercises
•	dul < Chap4_Ex1.txt
•	Review the log
•	Review the output directory 
•	Notice how many files were created
•	How many of these are temporary work files?
•	How many of these are usable for loading?
•	Review error messages
•	Did it complete?
Exercises / Labs
•	Chapter 4: Simple Data Salvage Scenarios
2) Perform a Partial Database Extract 
•	Edit the control.dul from Exercise 1
•	Remove SYSAUX01.dbf, UNDOTBS01.dbf, EXAMPLE01.dbf (or similar .dbfs on your db)
•	CleanDir.bat
•	dul < Chap4_Ex2.txt
•	Review the log
•	Notice different error messages.
•	Why?
•	Review how many files were created this time?
Exercises / Labs
•	Chapter 4: Simple Data Salvage Scenarios
3) Perform a targeted Schema extract
•	UNLOAD USER labuser;
•	Check the log for table names and row counts
4) Perform a Table extract
•	UNLOAD TABLE sys.source$;
•	Note: Data from this table may be used to reconstruct source scripts for indexes, triggers, etc.
•	Check the log for row counts and any errors
•	Import and verify all data from Chapter 4
Exercises / Labs
•	Chapter 5: Complex Data Salvage Scenarios
•	@actions   /* truncate, drop, delete tables */
•	Edit control.dul, remove SYSTEM01.dbf
•	dul < Chap5_Ex1.txt
•	Review dul.log
•	Extract some UNLOAD commands from log
•	Modify some UNLOAD commands
•	Run UNLOAD commands
•	Import and verify data from Chapter 5
Exercises / Labs
•	Chapter 6: Loading Data
•	Import extracted .dmp or .dat/.ctl files into a new schema on your test database
•	Run import.bat supplied for import from Windows
•	Review data imported to newuser
•	Chapter 7: Internal Files
•	Parse the ext.dat, seg.dat, seen_tab.dat and seen_col.dat text files into an Excel spreadsheet
•	Review row counts, table column statistics, object nos
