DUL Oracle Data Unloader Download

If you cannot recover the data by yourself, ask Parnassusdata, the professional ORACLE database recovery team for help.

Parnassusdata Software Database Recovery Team

Service Hotline:  +86 13764045638 E-mail: [email protected]

 

ORACLE PRM-DUL Download: http://zcdn.parnassusdata.com/DUL5108.zip

 

Oracle DUL is an internal Oracle database recovery tool, developed within the company by Bernard van Duijnen of Oracle Support in the Netherlands:
DUL is not an Oracle product
DUL is not a product supported by Oracle
DUL is strictly restricted to internal use by Oracle Support and service departments
Customers are not entitled to use DUL themselves; to have DUL used at all, a customer must first purchase Oracle's standard PS support service, and any use outside Oracle requires internal Oracle approval
One of the reasons DUL is so strictly controlled is that it uses Oracle source code, which must itself be strictly controlled

Starting with DUL 9, Bernard van Duijnen added a time lock to the DUL software in order to limit its use outside Oracle. He periodically compiles DUL (DUL is written in C) for the various platforms and uploads it to an internal Oracle workspace (based on stbeehive); Oracle Support staff can log in over the internal VPN and download it. For example, a version released on October 1 with a 30-day lock simply stops working after November 1, and changing the OS time is basically useless: Oracle data files also record the current time, and DUL reads the time from the data files, so it is practically impossible for an ordinary user to keep DUL working by changing the clock.
Note that Bernard van Duijnen does not provide DUL for the HP-UX platform, so there is no corresponding HP-UX build of DUL.
Also, older versions of Oracle DUL basically do not work with current 10g, 11g and 12c databases because they are too old. The use of DUL is strictly controlled in the United States; within China it is essentially Oracle's ACS (Advanced Customer Services) department that uses it, and Oracle ACS on-site service is still quite expensive to purchase.
The appendix provides an Oracle ACS presentation document describing the DUL service (this is, of course, an on-site service; if the customer has not already purchased the annual PS standard service, the even more expensive ACS advanced on-site service cannot be bought at all):

 

 

 

Below are download links for DUL 10, but because of the time lock they regularly stop working.

 

DUL FOR LINUX platform (updated to PRM-DUL)

DUL FOR Windows platform (updated to PRM-DUL)

ParnassusData Software (the company Maclean works for) has developed PRM-DUL, a product similar to DUL. On top of DUL's foundation it adds a graphical interface (GUI) and DataBridge functionality (data can be transferred directly into a target database over a DBLINK, without first landing on disk as SQL*Loader files), among other features. Because PRM-DUL is written in Java, it runs on every platform, including HP-UX.

 

 

PRM-DUL free version download:
http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_3206.zip
PRM-DUL manual: http://www.parnassusdata.com/sites/default/files/ParnassusData%20Recovery%20Manager%20For%20Oracle%20Database%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C%20v0.3.pdf

The free version of PRM-DUL can by default extract up to one million rows from each table, so if the database is small (no table has more than one million rows) the free PRM-DUL can be used directly. If the database is large and the data is very important, you can consider purchasing an Enterprise Edition license for PRM-DUL; the PRM-DUL Enterprise Edition software is licensed per database, and the license price is 7,500 RMB (including 17% VAT).
PRM-DUL also provides some free licenses:
Several free, openly published PRM-DUL Enterprise Edition license keys

If the Oracle database still cannot be recovered after using DUL, you can consider a recovery service:
ParnassusData Software covers almost all Oracle recovery scenarios, including: a database that cannot be opened, mistakenly executed DROP, TRUNCATE or DELETE of tables, ASM disk groups that cannot be MOUNTed, and so on.

 

If you cannot handle the recovery yourself, you can contact the ParnassusData professional Oracle database recovery team to recover the data for you!
ParnassusData Database Recovery Team

Service hotline: +86 13764045638 Email: [email protected]

 

Current recovery options 
restore and rollforward
export/import
use SQL*Loader to re-load the data
(parallel) create table as select (PCTS)
Transportable Tablespace


Diagnostic tools
orapatch
BBED (block browser/editor) 
Undocumented parameters
_corrupted_rollback_segments, _allow_resetlogs_corruption  etc... 
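
For context, a minimal sketch of how these undocumented parameters are typically placed in an init.ora/pfile before a forced open; the rollback segment names below are placeholders, and such settings should only ever be used under Oracle Support guidance:

# init.ora fragment (hypothetical segment names; last-resort settings only)
_allow_resetlogs_corruption = true
_corrupted_rollback_segments = (rbs01, rbs02, rbs03)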


No alternatives in the case of loss of SYSTEM tablespace datafile(s) 
The database must be in ‘reasonably’ good condition or else recovery is not possible (even with the undocumented parameters!) 
Patching is very ‘cumbersome’ and is not always guaranteed to work
Certain corruptions are beyond patching
Bottom line - loss of data!!


The most common problem is the fact that customer’s backup strategy does not match their business needs. 
E.g. the customer takes weekly backups of the database, but in the event of a restore their business need is to be up and running within (say) 10 hours.   This is not feasible since the ‘rollforward’ of one week’s worth of archive logs would (probably) take more than 10 hours!!


Building a cloned database, exporting data, and importing it into the recovery database.
Building a cloned database and using Transportable Tablespaces for recovery. 


DUL could be a possible solution
DUL (?) - Bernard says ‘Life is DUL without it!’
bottom line - salvage as much data as possible



DUL is intended to retrieve data that cannot be retrieved otherwise
It is NOT an alternative for restore/rollforward, EXP, SQL*Plus etc. 
It is meant to be a last resort, not for normal production usage
Note: There are logical consistency issues with the data retrieved


DUL should not be used where data can be salvaged using one of the supported mechanisms (restore/rollforward, exp/imp etc…)


Doesn’t need the database or the instance to be open
Does not use recovery, archive logs etc…
It doesn’t care about data consistency
more tolerant to data corruptions
Does not require the SYSTEM tablespace to recover


DUL is a utility that can unload data from “badly damaged” databases. 
DUL will scan a database file, recognize table header blocks, access extent information, and read all rows 
Creates a SQL*Loader or Export formatted output
matching SQL*Loader control file is also generated
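
For illustration only (the table and column names are hypothetical, and the file DUL actually writes differs in details such as field terminators and enclosure characters), a matching SQL*Loader control file has this general shape:

LOAD DATA
INFILE 'SCOTT_EMP.dat'
INTO TABLE SCOTT.EMP
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(EMPNO, ENAME, JOB, MGR, HIREDATE DATE "DD-MON-YYYY", SAL, COMM, DEPTNO)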




DUL version 3 (still in testing!) supports IMP loadable dump file.  More on DUL version 3 later...



Read the Oracle data dictionary if the SYSTEM tablespace files are available 
Analyze all rows to determine 
number of columns, column datatypes and column lengths


If the SYSTEM tablespace datafiles are not available DUL does its own analysis, more on this later...



DUL can handle all row types
normal rows, migrated rows, chained rows, multiple extents, clustered tables, etc. 
The utility runs completely unattended, minimal manual intervention is needed.
Cross platform unloading is supported



DUL can open other datafiles if there are extents in those datafiles.
Although DUL can handle it, LONG RAW presents a problem for SQL*Loader - we’ll talk about this shortly...

For cross platform unloading the configuration parameters within "init.dul" will have to be modified to match those of the original platform and O/S rather than the platform from which the unload is being done.
DUL unloads in the physical order of the columns. The cluster key columns are always unloaded first.
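
As a hedged illustration (the exact values depend on the source platform; the parameter names are the same ones used in the init.dul samples later in this document), an unload run on a little-endian Linux host against datafiles copied from a big-endian source might describe the original platform like this:

# init.dul fragment: values describe the ORIGINAL (big-endian) platform,
# not the host running the unload
osd_big_endian_flag = true
osd_dba_file_bits = 6
osd_c_struct_alignment = 32
osd_file_leader_size = 1
db_block_size = 8192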


Recovers data directly from Oracle data files 
the Database (RDBMS) is bypassed 
Does dirty reads, it assumes that every transaction is committed
Does not check if media recovery has been done
DATABASE CORRUPT - BLOCKS OK 
Support for Locally Managed Tablespaces


DUL does not require that media recovery be done.
Since DUL reads the data directly from datafiles, it reads data that is committed as well as uncommitted. Therefore the data salvaged by DUL can potentially be logically corrupt. It is up to the DBA and/or the application programmers to validate the data.


The database can be copied from a different operating system than the DUL-host 
Supports all database constructs: 
row chaining, row migration, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS, trailing NULL columns etc...
DUL should work with all versions 6, 7, 8 and 9
Enhanced to support 9i functionality. 


DUL has been tested with versions from 6.0.36 up to 7.2.2. The old block header layout (pre 6.0.27.2) also works! 


The main new features are: 
  Support for Oracle version 6, 7, 8 and 9 
  Support for Automatic Space Managed Segments 
  New bootstrap procedure: just use ‘bootstrap;’. No more 
       dictv6.ddl, dictv7.ddl or dictv8.ddl files 
  LOB are supported in SQL*Loader mode only 
  (Sub)Partitioned tables can be unloaded 
  Unload a single (Sub)Partition 
  Improved the SCAN TABLES command 
  Support for the timestamp and interval datatypes 
  Stricter checking of negative numbers 
  (Compressed) Index Organized Tables can be unloaded 
  Very strict checking of row status flags 
  Unload index to see what rows you are missing 
  Objects, nested tables and varrays are not supported (internal  
        preparation for varray support ) 


DUL has been tested with versions from 6.0.36 up to 9.0.1. The old block header layout (pre 6.0.27.2) also works! 
The latest version is DUL92, which is mostly bug fixes. The main changes are: 
     fix for problem with startup when db_block_size = 16K 
     fix for scan database and Automatic Space Managed Segments 
     fix for block type errors on high block types; new max is 51 
     Support for Automatic Space Managed Segments 
     phase zero of new unformat command 
     internal preparation for varray support 
     Bug fix in the stricter checking of negative numbers 
     Bug fix in the unloading of clustered tables 


The database can be corrupted, but an individual data block used must be 100% correct
blocks are checked to make sure that they are not corrupt and belong to the correct segment
DUL can and will only unload table/cluster data. 
it will not dump triggers, constraints, stored procedures nor create scripts for tables or views
But the data dictionary tables describing these can be unloaded
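
For instance (a hedged illustration; these are standard dictionary base tables, and the unloaded rows still have to be turned back into CREATE statements by hand), view text lives in SYS.VIEW$, PL/SQL source in SYS.SOURCE$ and constraint definitions in SYS.CDEF$, so with a usable dictionary they can be unloaded like any other tables:

DUL> unload table sys.view$;
DUL> unload table sys.source$;
DUL> unload table sys.cdef$;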


Note: If during an unload a bad block is encountered, an error message is printed in the loader file and to standard output. Unloading will continue with the next row or block. 


MLSLABELS (trusted oracle) are not supported
No special support for multi byte character sets
DUL can unload (LONG) RAWs, but there is no way to reload these 1-to-1 with SQL*Loader
SQL*Loader cannot be used to load  LONG RAW data.



DUL can unload (long) raws, but there is no way to reload these 1-to-1 with SQL*Loader. There is no suitable format in SQL*Loader
to preserve all long raws. Use the export mode instead or write a Pro*C program to load the data.



DUL and large files (files > 2GB) 
Starting from DUL version 8.0.6.7 DUL will report whether it can do 32-bit I/O (no large file support) or 64-bit I/O with large file support.
DUL support for raw devices
DUL will work on raw devices. But DUL is not raw device aware.


Raw Devices:
On some platforms we skip the first part of the raw device. DUL does not automatically skip this extra part. The easiest way to configure DUL in this case is the optional extra offset in the control file. The extra offsets that I am aware of are 4K on AIX raw devices and 64K for Dec Unix. 
DUL does not use the size as stored in the file header. So DUL will read the whole raw device including the unused part at the end.


There are two configuration files for DUL
init.dul
control.dul
Configuration parameters are platform specific.


If you do decide that DUL is the only way to go, then here is how to go about configuring and using DUL.  Good Luck!!


Contains parameters to help DUL understand the format of the database files 
Has information on  
DUL cache size
Details of header layout
Oracle block size
Output file format
Sql*Loader format and record size. 
etc...


Sample init.dul file for Solaris looks like:
# The dul cache must be big enough to hold all entries from the Dictionary dollar tables.
dc_columns = 200000
dc_tables = 20000
dc_objects = 20000
dc_users = 40
# OS specific parameters
big_endian_flag = true
dba_file_bits = 6
align_filler = 1
db_leading_offset = 1
# Database specific parameters
db_block_size = 2048
# Sql*Loader format parameters
ldr_enclose_char = "
ldr_phys_rec_size = 81


Used to translate the file numbers to file names
Each entry on a separate line, first the file_number then the data_file_name
A third optional field is an extra positive or negative byte offset, that will be added to all fseek() operations for that datafile.


This optional field makes it possible to skip over the extra block for AIX on raw devices or to unload from fragments of a datafile.

The control file would look like : 
  1  /u04/bugmnt/tar9569610.6/gs/sysgs.dbf                                
  2  /u04/bugmnt/tar9569610.6/gs/rbs.dbf                                  
  3  /u04/bugmnt/tar9569610.6/gs/user.dbf         
  4  /u04/bugmnt/tar9569610.6/gs/index.dbf                   
  5  /u04/bugmnt/tar9569610.6/gs/test.dbf
When the database is up and running, v$dbfile contains the above information (see the query sketch below).
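
One possible way to build these entries, assuming the database (or a clone of it) can still be mounted and SQL*Plus is available, is to spool file numbers and names straight from v$dbfile; the optional extra offset column can be added afterwards where needed:

SET HEADING OFF PAGESIZE 0 FEEDBACK OFF LINESIZE 200
SPOOL control.dul
SELECT file# || '  ' || name FROM v$dbfile ORDER BY file#;
SPOOL OFF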



# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40

# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 6
osd_c_struct_alignment = 32
osd_file_leader_size = 1

# database parameters
db_block_size = 8192

# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81

#ADD PARAMETERS
export_mode=true  # still needed with dul9
compatible=9


# AIX version 7 example with one file on raw device
   1 /usr/oracle/dbs/system.dbf
   8 /dev/rdsk/data.dbf 4096

   # Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
   0  1 /fs1/oradata/PMS/system.dbf
   1  2 /tmp/huge_file_part1 startblock 1 endblock 1000000
   1  2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
   1  2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000



Case 1: Data dictionary usable


Case 1:  
SYSTEM tablespace available
Case 2:  
Using DUL without the SYSTEM tablespace


Straightforward method  	                     
Execute ‘dul’ from the OS prompt, then ‘bootstrap;’ from within DUL
No need to know the application table structures, column types, etc...


DUL> unload table hr.emp_trunc;

DUL: Error: No entry in OBJ$ for "EMP_TRUNC" type = 2
DUL: Error: Could not resolve object id
DUL: Error: Missing dictionary information, cannot unload table
DUL> scan database;

Case 2: Without the SYSTEM tablespace 

Needs in-depth knowledge of the application and the application tables
The unloaded data has little value if you do not know which table it came from
Column types can be guessed by DUL, but table and column names are lost
The guessed column types can be wrong


Note: 
1) Any old SYSTEM tablespace from the same database, even weeks old, can be of great help!
2) If you recreate the tables (from the original CREATE TABLE scripts) then the structural information of a "lost" table can be matched to the "seen" tables scanned, with two SQL*Plus scripts (fill.sql and getlost.sql).

Steps to follow: 
1. Configure DUL for the target database. This means creating a correct init.dul and control.dul. 
2. SCAN DATABASE; : scan the database for extents and segments. 
3. SCAN TABLES; : scan the found segments for rows. 
4. SCAN EXTENTS; : scan the found extents. 
5. Identify the lost tables from the output of step 3. 
6. UNLOAD the identified tables. 



DUL will not find “last” columns that only contain NULLs
Trailing NULL columns are not stored in the database
Tables that have been dropped can be seen
When a table is dropped, the description is removed from the data dictionary only
Tables without rows will go unnoticed


During startup DUL goes through the following steps: 
the parameter file "init.dul" is processed
the DUL control file (default "control.dul") is scanned
try to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL's data dictionary cache
try to load seg.dat and col.dat. 
accept DDL-statements or run the DDL script specified as first argument
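
In practice this means DUL can be started either interactively or with a script, roughly as sketched below (salvage.ddl is just a placeholder name for a file of DUL DDL statements):

$ dul
$ dul salvage.ddl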



DUL versions 3, 8, 9 and 92 are available. 

http://www-sup.nl.oracle.com/dul/index.html

 executables, user's guide and configuration guide
Available on most common platforms
Solaris
AIX
NT
HP etc...


DUL version 9 is currently available on: 
aix
alphavms62
att3000
dcosx
hp.tar.bin
osf1
rm4000.tar.bin   
sco
sequen
sunos
sunsol2
vaxvms55
vaxvms61
win95
winnt 

DUL with Dictionary


 Configure init.dul and control.dul
 Load DUL
 Bootstrap
 Unload database, user, or table (see the example session below)
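
A minimal sketch of such a session, assuming the SYSTEM datafile is listed in control.dul; the schema and table names are placeholders, and in practice you would pick whichever unload scope (database, user or table) fits the recovery:

$ dul
DUL> bootstrap;
DUL> unload table scott.emp;
DUL> unload user scott;
DUL> exit;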


DUL without Dictionary


 Configure init.dul and control.dul (the control file should include
   only the datafiles that need to be recovered).
 Load DUL
 alter session set use_scanned_extent_map = true
 scan database
 scan tables
 Using the found table definitions, construct an unload 
   statement:
unload table dul2.emp (EMPLOYEE_ID number(22), FIRST_NAME varchar2(20),
    LAST_NAME varchar2(25), EMAIL varchar2(25), PHONE_NUMBER varchar2(20),
    HIRE_DATE date, JOB_ID varchar2(10), SALARY number(22),
    COMMISSION_PCT number(22), MANAGER_ID number(22), DEPARTMENT_ID number(22))
storage (dataobjno 28200);




 

