Introduction to the Oracle DUL Data Unloader data recovery software

 

 

If you cannot solve the problem yourself, the ParnassusData professional Oracle database recovery team can help you recover your data!

ParnassusData professional database recovery team

Consulting hotline: +86 13764045638    Email: service@parnassusdata.com

Oracle DUL Data Unloader recovery tool: a summary of information

 

Oracle DUL is an internal database recovery tool of Oracle Corporation, developed by Bernard van Duijnen of Oracle Support in the Netherlands:

 

  • DUL is not an Oracle product
  • DUL is not a product supported by Oracle
  • DUL is strictly restricted to internal use by Oracle's after-sales Support organization
  • Use of DUL requires Oracle's internal approval; you must first have purchased Oracle's standard Product Support (PS), otherwise you are not even eligible to use DUL
  • One reason DUL is so strictly controlled is that it incorporates parts of the Oracle source code, so it must be tightly controlled

 


 

Starting roughly with DUL 9, Bernard van Duijnen added a software time lock to DUL to restrict outside use. He periodically compiles DUL for the different platforms (DUL is written in C) and uploads the builds to Oracle's internal DUL workspace (a stbeehive-based space), which Oracle Support staff can download after logging in over the internal VPN. For example, if bernard.van.duijnen publishes a build on October 1 with a 30-day time lock, that build effectively stops working around November 1. DUL does not simply read the OS time, so changing the OS time does not help: an Oracle datafile also records a current time, and DUL reads the time from the datafile. An ordinary user is not going to alter that timestamp just to run DUL.

Note that bernard.van.duijnen does not provide DUL for the HP-UX platform, so there is no HP-UX build of DUL.

Also, early DUL versions basically no longer work against current 10g, 11g, or 12c databases, because they are simply too old. In the USA the use of DUL is strictly controlled; in China it is essentially only Oracle ACS (Advanced Customer Services) that uses it with customers, and on-site ORACLE ACS service is quite expensive.

The attachment is a document describing the DUL service provided by Oracle ACS (on-site service from Oracle is of course fairly expensive, and it presumes the customer already pays for the yearly standard PS support; otherwise the ACS on-site service cannot even be purchased):

https://www.askmaclean.com/wp-content/uploads/2014/01/DUL.pdf

 

 

The English user manual for DUL 10:

DUL User’s and Configuration Guide V10.2.4.27

https://www.askmaclean.com/wp-content/uploads/2014/01/DUL-Users-and-Configuration-Guide-V10.2.4.27.pdf

 

Download links for DUL 10 follow; because of the time lock they expire periodically.

DUL FOR LINUX platform (superseded by PRM-DUL)

DUL FOR Windows platform (superseded by PRM-DUL)

 

 

DUL can extract data from severely damaged databases. It scans Oracle datafiles directly, recognizes segment header blocks, walks the extent information, and reads the actual row data from there. DUL can then generate SQL*Loader-style input files or EXP-format DMP files.

If the SYSTEM tablespace datafiles are still available, DUL reads the Oracle data dictionary. Otherwise DUL reads the rows directly and uses internal heuristics to determine column types and column lengths.

DUL can handle essentially all common row types, including regular rows, migrated rows, chained rows, multi-extent segments, and cluster tables, without extra manual intervention. Cross-platform extraction also works. DUL extracts data straight from the Oracle datafiles without an Oracle instance. It performs dirty reads, assuming every transaction has been committed. DUL does not check whether media recovery is needed; even corrupted data blocks can be read. Both DMT and LMT tablespaces are supported. Because it performs dirty reads, data recovered by DUL should generally be validated by the application.

For compatibility, DUL can process datafiles copied over from different operating systems. Most database structures are supported: chained rows, migrated rows, hash/index clusters, LONG, RAW, ROWID, DATE, NUMBER, multiple freelists, high water mark, NULLs, and so on. DUL is compatible with ORACLE 6, 7, 8, 9, 10g, 11g, and 12c.

 

[Figure: PRM Schema-level DataBridge]

 

ParnassusData (the company Maclean works for) has developed PRM-DUL, a product in the same class as DUL. On top of DUL's capabilities it adds a graphical interface (GUI) and DataBridge (data need not land on disk as SQL*Loader files; as with a DBLINK, it is transferred directly into a target database). Because PRM-DUL is written in Java, it runs on all platforms, including HP-UX.

Download the free edition of PRM-DUL:

http://parnassusdata.com/sites/default/files/ParnassusData_PRMForOracle_3206.zip

PRM-DUL user manual: http://www.parnassusdata.com/sites/default/files/ParnassusData%20Recovery%20Manager%20For%20Oracle%20Database%E7%94%A8%E6%88%B7%E6%89%8B%E5%86%8C%20v0.3.pdf

 

By default the free edition of PRM-DUL extracts at most 10,000 rows per table. If your database is small enough that no table exceeds 10,000 rows, you can simply use the free edition. If your database is larger and the data is really important, consider the PRM-DUL Enterprise Edition: one Enterprise license covers one database and costs 7,500 RMB (including 17% VAT).

PRM-DUL also offers some free licenses:

A few free PRM-DUL Enterprise Edition License Keys

 

 

 

 

 

 

 

 

If your Oracle database recovery case still cannot be solved after using DUL, consider recovery through a service:

ParnassusData currently covers almost every Oracle recovery scenario, including databases that cannot open; tables mistakenly dropped, truncated, or deleted; ASM diskgroups that cannot be mounted; and more.

 

 

PRM-DUL is developed in Java, which guarantees that PRM runs across platforms: Unix platforms such as AIX, Solaris, and HP-UX; Linux platforms such as RedHat, Oracle Linux, and SUSE; and Windows.

 

Operating system platforms supported by PRM-DUL:

 

Platform Name Supported
AIX POWER Yes
Solaris Sparc Yes
Solaris X86 Yes
Linux X86 Yes
Linux X86-64 Yes
HPUX Yes
MacOS Yes

 

 

Database versions currently supported by PRM-DUL

 

ORACLE DATABASE VERSION Supported
Oracle 7 Yes
Oracle 8 Yes
Oracle 8i Yes
Oracle 9i Yes
Oracle 10g Yes
Oracle 11g Yes
Oracle 12c Yes

 

 

Languages currently supported by PRM-DUL:

 

 

Language Character set Corresponding encoding
Chinese (Simplified/Traditional) ZHS16GBK GBK
Chinese (Simplified/Traditional) ZHS16DBCS CP935
Chinese (Simplified/Traditional) ZHT16BIG5 BIG5
Chinese (Simplified/Traditional) ZHT16DBCS CP937
Chinese (Simplified/Traditional) ZHT16HKSCS CP950
Chinese (Simplified/Traditional) ZHS16CGB231280 GB2312
Chinese (Simplified/Traditional) ZHS32GB18030 GB18030
Japanese JA16SJIS SJIS
Japanese JA16EUC EUC_JP
Japanese JA16DBCS CP939
Korean KO16MSWIN949 MS949
Korean KO16KSC5601 EUC_KR
Korean KO16DBCS CP933
French WE8MSWIN1252 CP1252
French WE8ISO8859P15 ISO8859_15
French WE8PC850 CP850
French WE8EBCDIC1148 CP1148
French WE8ISO8859P1 ISO8859_1
French WE8PC863 CP863
French WE8EBCDIC1047 CP1047
French WE8EBCDIC1147 CP1147
German WE8MSWIN1252 CP1252
German WE8ISO8859P15 ISO8859_15
German WE8PC850 CP850
German WE8EBCDIC1141 CP1141
German WE8ISO8859P1 ISO8859_1
German WE8EBCDIC1148 CP1148
Italian WE8MSWIN1252 CP1252
Italian WE8ISO8859P15 ISO8859_15
Italian WE8PC850 CP850
Italian WE8EBCDIC1144 CP1144
Thai TH8TISASCII CP874
Thai TH8TISEBCDIC TIS620
Arabic AR8MSWIN1256 CP1256
Arabic AR8ISO8859P6 ISO8859_6
Arabic AR8ADOS720 CP864
Spanish WE8MSWIN1252 CP1252
Spanish WE8ISO8859P1 ISO8859_1
Spanish WE8PC850 CP850
Spanish WE8EBCDIC1047 CP1047
Portuguese WE8MSWIN1252 CP1252
Portuguese WE8ISO8859P1 ISO8859_1
Portuguese WE8PC850 CP850
Portuguese WE8EBCDIC1047 CP1047
Portuguese WE8ISO8859P15 ISO8859_15
Portuguese WE8PC860 CP860

 

 

Table storage types supported by PRM-DUL:

 

Table storage type Supported
Cluster tables YES
Index-organized tables, partitioned or not YES
Ordinary heap tables, partitioned or not YES
Ordinary heap tables with basic compression YES (Future)
Ordinary heap tables with advanced compression NO
Ordinary heap tables with hybrid columnar compression NO
Ordinary heap tables with encryption NO
Tables with virtual columns NO
Chained rows, migrated rows YES

 

 

Note: for virtual columns and 11g optimized default columns, the data extraction itself may be fine, but the corresponding columns are lost. Both are new features introduced after 11g and are rarely used.

 

 

Data types supported by PRM-DUL

 

Data Type Supported
BFILE No
Binary XML No
BINARY_DOUBLE Yes
BINARY_FLOAT Yes
BLOB Yes
CHAR Yes
CLOB and NCLOB Yes
Collections (including VARRAYS and nested tables) No
Date Yes
INTERVAL DAY TO SECOND Yes
INTERVAL YEAR TO MONTH Yes
LOBs stored as SecureFiles Future
LONG Yes
LONG RAW Yes
Multimedia data types (including Spatial, Image, and Oracle Text) No
NCHAR Yes
Number Yes
NVARCHAR2 Yes
RAW Yes
ROWID, UROWID Yes
TIMESTAMP Yes
TIMESTAMP WITH LOCAL TIMEZONE Yes
TIMESTAMP WITH TIMEZONE Yes
User-defined types No
VARCHAR2 and VARCHAR Yes
XMLType stored as CLOB No
XMLType stored as Object Relational No

 

 

PRM-DUL support for ASM

 

 

Feature Supported
Extract data directly from ASM without copying to a file system YES
Copy datafiles out of ASM YES
Repair ASM metadata YES
Graphical display of the ASM black box Future

 

 

 

The following is the ORACLE DUL user manual:

 

Principles and feature list of ORACLE DUL:

 

Standalone C program

DUL is a standalone C program that retrieves rows from tables in datafiles directly. The Oracle RDBMS software is not used at all. DUL performs dirty reads: it assumes every transaction has been committed, and it does not check whether media recovery needs to be done.

Last resort

DUL is intended to retrieve data that cannot be retrieved by any other means. It is not an alternative to EXP, SQL*Plus, and the like, but a last resort, not for normal production use.

Before you use DUL you should know that the RDBMS has many hidden features to force a bad database open. Undocumented init.ora parameters and events can be used to skip roll forward, disable rollback, disable specific SMON actions, advance the database SCN, and more.

 

Database corrupt – blocks OK

The database can be corrupted, but individual data blocks used must be 100% correct. During all unloads the blocks are checked to make sure they are not corrupted and belong to the correct segment. If a corrupt block is encountered during scanning, an error message is printed in the loader file and on standard output; the unload then continues with the next row or block.

Rows in clusters/tables/indexes

DUL can and will only unload data from indexes/tables/clusters. It does not dump triggers or stored procedures, nor does it create SQL scripts for tables or views (although the data dictionary tables describing them can be unloaded). The data is unloaded in a format suitable for SQL*Loader or IMP; a matching SQL*Loader control file is generated as well.

DUL can unload indexes and index-organized tables. An index unload can be used to determine how many rows a table should have, or to identify missing rows.

 

Cross-platform unloading

 

Cross-platform unloading is supported. The database can be copied from an operating system different from that of the DUL host. (Databases/systems covered so far: Sequent/ptx, Vax Vms, Alpha Vms, MVS, HP9000/8xx, IBM AIX, SCO Unix, Alpha OSF/1, Intel Windows NT.)

The configuration parameters in "init.dul" must be adjusted to match the original platform and OS, not the platform on which the unload is executed.

 

Robust

DUL will not dump, spin, or hang no matter how heavily corrupted the database is; (almost) all Oracle features are supported.

Full support for all database structures: chained rows, migrated rows, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLs, trailing NULL columns, unlimited extents, the new Oracle8 data block layout, and partitioned tables.

Later additions are LOBs, compressed indexes, and 9iR2 compressed tables. Varrays and abstract data types (ADTs, user-defined objects) are partly supported in SQL*Loader mode.

ASM is fully supported; files can be extracted from ASM disk groups. No mounted ASM instance is used; the disks are accessed directly. Non-default ASM allocation unit sizes are supported.

Data can be recovered from export dump files with the unexp command suite. Some initial unpump work has been done to support data pump files.

 

Supported RDBMS versions

DUL works with Oracle 6 and later. DUL has been tested with all releases from 6.0.26 up to 10.2. Even the old block header layout (pre 6.0.27.2) is supported.

Multi-byte support

DUL is essentially a single-byte application. Its command parser does not understand multi-byte characters, but unloading any multi-byte database is possible; there are workarounds for all possible cases.

DUL can convert to UTF8; this is for NCLOBS stored in UTF16.

 

Restrictions

MLSLABELS

Trusted Oracle multi-level security labels are not supported.

(LONG) RAW

DUL can unload (long) raws. There is now a suitable format in SQL*Loader to preserve all long raws, so long raws and blobs can be unloaded in both modes.

 

ORACLE8 object option and LOBs

 

Nested tables are not yet supported; if you need them, let me know and I will add support. Varrays and ADTs are supported, including those stored as a kernel LOB. CLOBS and NCLOBS are supported in both SQL*Loader mode and EXP mode. BLOBS are best handled in EXP mode; the hex format generated in SQL*Loader mode is currently not loaded correctly.

Portability

DUL can be ported to any operating system with an ANSI-C compiler. It has been ported to many UNIX variants, VMS, and WindowsNT. Currently all builds are done with gcc and a cross-compilation environment on Linux.

 

RDBMS internals

A good working knowledge of Oracle RDBMS internals is a prerequisite for successfully using DUL. The Data Server Internals (DSI) documentation and courses, for instance, provide a good foundation; there is even a dedicated DUL module.

Setting up and using DUL

Configuration files

There are two configuration files for DUL. "init.dul" contains all configuration parameters (cache sizes, details of the header layout, Oracle block size, output file format). In the control file, "control.dul", the database's datafile names and ASM disks can be specified.

Data dictionary available

The Oracle data dictionary is available if the datafiles that make up the SYSTEM tablespace are available. The numbers Oracle assigned to these files and the names you give them (which need not be the original names Oracle knows) must be listed in the "control.dul" file. You also need to include the file numbers and names of the other tablespaces from which you eventually want to unload tables and their data. Leaving them out does not affect the data dictionary unload step, but it does affect the table unloads afterwards.

 

Using DUL when USER$, OBJ$, TAB$ and COL$ can be unloaded

The steps are as follows:

  • Configure DUL for the target database. This means creating a correct init.dul and control.dul. The numbers and names of the SYSTEM tablespace datafiles must be included in the control.dul file, together with those of any tablespaces from which you want to unload tables and data. For Oracle8 and later, the tablespace number and relative file number must be specified for each datafile.
  • Use the "BOOTSTRAP;" command to prepare for unloading. During the bootstrap a compatible segment is found and the bootstrap$ table is located and unloaded. The old "dul dictv7.ddl" is no longer needed.
  • Unload the tables whose datafiles are included in the control file, using one of the following commands:
    • "UNLOAD TABLE [owner.]table;" (don't forget the semicolon)
      • This unloads the table definition and the table's data.
    • "UNLOAD USER user_name;"
      • This unloads all tables and data of the specified user.
    • "UNLOAD DATABASE;"

This unloads all available database tables (except those of user SYS).

 

 

No data dictionary available

If the datafiles of the SYSTEM tablespace are not available, data can still be unloaded, but the USER, TABLE, and COLUMN names are unknown. Identifying the tables can be a daunting task, but it can be (and has been) done. You need in-depth knowledge of your application and its tables. Column types can be guessed by DUL, but the table and column names are lost. Any SYSTEM tablespace from the same database, even weeks old, can be of great help! Most of the information DUL uses does not change (only dataobj# changes, during truncates or index rebuilds).

 

Using DUL without the SYSTEM tablespace

The steps are as follows:

  • Configure DUL for the target database: create a correct init.dul and control.dul (see the port-specific parameters). In this case the control.dul file needs the numbers and names of the datafiles from which tables and data will be unloaded, but no SYSTEM tablespace information.
  • SCAN DATABASE; : scans the database and builds the extent and segment maps
  • SCAN TABLES; or SCAN EXTENTS; : gathers row statistics
  • Identify the lost tables from the output of step 3.
  • Unload the identified tables.

 

Automated search

 

To ease the hunt for the lost tables: the scanned statistics in seen_tab.dat and seen_col.dat can be loaded into a fresh database. If you can rebuild the tables (hopefully the create table scripts are still available), the structure information of the "lost" tables can be matched against the scanned information of the "seen" tables with two SQL*Plus scripts (fill.sql and getlost.sql).

 

Hints and pitfalls

 

  • Names are not really relevant to DUL, only to the person who must load the data. But if you do not know which table the unloaded data came from, it has no value.
  • The guessed column types can be wrong, even though the algorithm is conservative and decides UNKNOWN when in doubt.
  • Trailing NULL columns are not stored in the database. So if the last columns contain only NULLs, the scanner will not find them. (During an unload, trailing NULL columns are handled correctly.)
  • When a table is dropped, only its description is removed from the data dictionary. The data blocks are not overwritten unless they are reused for a new segment, so the scanning software can still see a dropped table.
  • Tables without rows will go unnoticed. Newer objects have higher object ids than old objects. If a table was rebuilt, or if there are test and production versions of the same table, the object id can be used to decide.

DDL (DUL Description Language) unload statement overview

DUL uses an SQL-like command interface. There are DDL statements to unload extents, tables, users, or the entire database. The data dictionary information needed can be specified in the DDL statement or taken from a previously unloaded data dictionary. The following three statements will unload the DEPT table. The most common form, when the data dictionary and the extent map are available:

UNLOAD TABLE scott.dept;

All relevant information can also be specified in the statement:

REM Columns with type in the correct order

REM The segment header location in the storage clause

UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
        STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

 

Oracle version 6:

REM version 6 data blocks have segment header location in each block
ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
        STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

Oracle 7:

REM Oracle7 data blocks have object id in each block

ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
        STORAGE( OBJNO 1501 );

 

DUL output formats

Only rows that are completely intact are written to the output file. For this, each row is buffered. The buffer size can be changed with the init.dul parameter BUFFER. A high BUFFER value does not mean higher speed; it just needs to be big enough to hold one complete row. Incomplete or corrupt rows are not written out. The init.dul parameter FILE_SIZE_IN_MB can be used to split the output (at proper boundaries) into multiple files, each of which can be loaded individually.

There are three different output format modes:

  • Export mode
  • SQL*Loader mode: stream data files
  • SQL*Loader mode: fixed physical record data files

 

Export mode

The generated file is completely different from a table-mode export generated by EXP! The file is the minimal format that IMP can load. A separate IMP-loadable file is created per table; it is a single-table dump file. It contains a header, an insert table statement, and the table data. Table grants, storage clauses, and triggers are not included. A minimal create table statement is included (no storage clause, just column names and types without precision). The character set indication in the generated file header is V6 style: it is set to indicate an ASCII-based character set.

To enable export mode, set the init.dul parameter EXPORT_MODE to TRUE.

Because the generated pseudo dump file does not contain character set information, set NLS_LANG to match the original database. In export mode no character set conversion is done.

 

SQL*Loader modes

In these modes the data is either not converted at all, or everything is converted to UTF8 if LDR_OUTPUT_IN_UTF8 is set. This setting is needed in mixed character set environments, since the contents of one data file must have a single character set.

When loading the data you may need to set NLS_LANG to match the original database, to prevent unwanted character set conversion.

In both SQL*Loader output formats the columns are separated by spaces and enclosed in double quotes. Any double quote in the data is doubled; SQL*Loader recognizes this and loads only one. The character used to enclose the columns can be changed from a double quote to anything you like with the init.dul parameter LDR_ENCLOSE_CHAR.
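The enclose-and-double convention can be sketched in Python (a hypothetical illustration of the format, not DUL code; `enclose` plays the role of LDR_ENCLOSE_CHAR):

```python
# Sketch of the SQL*Loader-style field formatting described above:
# fields are separated by spaces, enclosed in a quote character, and
# any embedded enclose character is doubled so SQL*Loader reads it
# back as a single one.
def format_record(fields, enclose='"', sep=' '):
    return sep.join(
        enclose + f.replace(enclose, enclose * 2) + enclose
        for f in fields
    )

print(format_record(['7369', 'SMITH', 'say "hi"']))
# "7369" "SMITH" "say ""hi"""
```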

There are two types of physical record modes:

Stream mode

Nothing special happens in stream mode; a newline is printed after each record. This is a compact format that can be used if the data does not contain newlines. To enable stream mode, set LDR_PHYS_REC_SIZE = 0 in init.dul.

 

Fixed physical records

This mode is essential if the data can contain newlines. One logical record (one complete row) can be composed of multiple physical records. The default record length is 81, which fits nicely on a VT220 screen. The physical record size can be specified with LDR_PHYS_REC_SIZE in init.dul.

Output file names

The file names generated are: owner name_table name.ext. The extension is ".dmp" for IMP-loadable files; ".dat" and ".ctl" are used for the SQL*Loader data file and control file. To prevent variable substitution and other nasty effects, strange characters are stripped (only alphanumerics and '_' are allowed).
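The naming rule might look like this in Python; keeping exactly the alphanumerics and '_' is the rule stated above, everything else here is illustrative:

```python
import re

# Build an output file name as described: owner name_table name.ext,
# with everything that is not alphanumeric or '_' stripped out.
def output_name(owner, table, ext):
    clean = lambda s: re.sub(r'[^A-Za-z0-9_]', '', s)
    return f"{clean(owner)}_{clean(table)}.{ext}"

print(output_name('SCOTT', 'EMP$HIST', 'dmp'))   # SCOTT_EMPHIST.dmp
```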

If the FILE parameter is set, the generated names will be FILEnnn.ext. This can be used for file systems that do not support long enough file names. (Old windows with the 8.3 file name format.)

 

Some DUL internals

Required information

 

To unload table data from a database block, the following information must be known:

  1. Column/cluster information: the number and types of the columns. The maximum length of CHAR or VARCHAR columns. The number of cluster columns and the number of tables in the cluster. This information can be supplied in the unload statement or taken from a previously unloaded USER$, OBJ$, TAB$ and COL$.
  2. Segment/extent information: when unloading a table, the extent table in the data segment header block is used to locate all data blocks. The location of this segment header block (file number and block number) is taken from the data dictionary or can be specified in the unload statement. If the segment header is incorrect or unavailable, another method must be used: DUL can build its own extent map by scanning the entire database (a separate run of DUL with the scan database statement).

Binary headers

 

The C structs in the block headers are not copied directly; they are retrieved with special functions. All offsets of the struct members are programmed into DUL. This approach makes cross-platform unloads possible (e.g. unloading on HP of datafiles created on MVS). Apart from byte order, only four layout types have been found so far:

  1. Vax VMS and Netware: no alignment padding between struct members.
  2. Korean Ticom Unix machines: 16-bit alignment of struct members.
  3. MS/DOS: 16-bit alignment and 16-bit word size.
  4. Rest of the world (including Alpha VMS): struct members aligned to their own size.

 

Machine dependencies

 

The machine dependencies (of the database) are configured with the following parameters:

  • The byte order of a word (big/little endian)
  • The number of bits for the low part of the FILE# in a DBA (block address)
  • The alignment of members in a C struct
  • The number of blocks or bytes before the Oracle file header block
  • The size of a word used in the segment header structure

Unloading the data dictionary

 

DUL can unload the database's data dictionary if the dictionary files exist and are not corrupted. To use the data dictionary, the internal tables must first be unloaded to external files: (USER$, OBJ$, TAB$ and COL$). The bootstrap command will find and unload the required tables.

 

DDL (DUL Description Language) specification

[ ALTER SESSION ] SET init.dul parameter =  value ;
     Most parameters can be changed on the fly. 

BOOTSTRAP [LOCATE | GENERATE | COMPLETE
           | UNLOAD   Bootstrap$ segment header block address ];
     Bootstraps the data dictionary. Default is COMPLETE.
     LOCATE finds and unloads the bootstrap$ table.
     GENERATE builds a ddl file based on information in the cache.
     COMPLETE is in fact LOCATE, followed by GENERATE (two times)

COMMIT;
     Writes the changed block to the data file.

CREATE BLOCK INDEX  index_name  ON  device ;


 

A block index contains the addresses of valid Oracle blocks found in a corrupt file system. It is used to merge multiple disk images or to unload from corrupted file systems. Only useful in cases of extreme file system corruption.

 

DESCRIBE  owner_name  . table_name ;

DUMP [ TABLESPACE  tablespace_no ]
     [ FILE  file_no  ]
     [ BLOCK  block_no  ]
     [ LEVEL  level_no  ] ;
     Not a complete blockdump, mainly used for debugging.
     The block address is remembered.

EXTRACT  asm file name  to  output file name  ;
     Copies any ASM file from a disk group to the file system.
     (there was a problem with online redologs this needs more testing)

MERGE block_index INTO [  segment  ];



The merge command uses the information in the index file to locate possible data blocks for each combination of file number and object id; each candidate block is compared to the current block in the datafile. If the current block is bad, or has an older scn, the candidate is written into the datafile. This is only useful in cases of extreme file system corruption.

 

 

REM any_text_you_like_till_End_Of_Line : comment
REM  NOT allowed inside ddl statements. ( To avoid a two layer lexical scan).

ROLLBACK; # Cancels the UPDATE statements.

SHOW     DBA  dba ;                # dba -> file_no block_no calculator
	   | DBA  rfile_no block_no ;  # file_no block_no -> dba calculator
	   | SIZES ;                   # show some size of important structs
	   | PARAMETER;                # shows the values of all parameters
	   | LOBINFO;                  # lob indexes found with SCAN DATABASE
       | DATAFILES;                # summary of configured datafiles
       | ASM DISKS;                # summary of configured asm disks
       | ASM FILES;                # summary of configured datafiles on asm
       | ASM FILE  cid      # extent information for asm file

UNEXP [TABLE] [  owner  . ]  table name 
       (  column list  ) [ DIRECT ]
       DUMP FILE  dump file name
       FROM  begin offset  [ UNTIL  end offset  ]
       [ MINIMUM  minimal number of columns  COLUMNS ] ;

       To unload data from a corrupted exp dump file. No special setup
       or configuration is required, just the compatible parameter.
       The start offset should be where a row actually begins.

UNPUMP
       To unload data from a corrupted expdp (datapump) dump file.
       This is still work in progress, the basic commands work
       but rather complex to use. Contact me if this is needed.

UNLOAD DATABASE;

UNLOAD USER user_name;

UNLOAD [TABLE]  [  schema_name . ]  table_name 
              [ PARTITION(  partition_name ) ]
              [ SUBPARTITION(  sub_partition_name ) ]
              [ (  column_definitions ) ]
              [  cluster_clause  ]
              [  storage_clause  ] ;

UNLOAD EXTENT  table_name 
              [ (  column_definitions  ) ]
              [ TABLESPACE  tablespace_no  ] 
              FILE  extent_start_file_number
              BLOCK extent_start_block_number 
              BLOCKS  extent_size_in oracle_blocks ;

UNLOAD LOB SEGMENT FOR [  schema_name . ]  table_name   [ (  column name  ) ] ;

UNLOAD LOB SEGMENT STORAGE ( SEGOBJNO data obj#) ;

UPDATE [ block_address ] SET UB1|UB2|UB4 @ offset_in_block = new_value ;
UPDATE [ block_address ] SET  block element name  = new_value ;
        Now and then we can repair something.
	Patches the current block and dumps it.
	You can issue multiple UPDATE commands.
	Block is not written yet, use COMMIT to write. 

storage_clause ::=
        STORAGE ( storage_specification  [ more_storage_specs ] )

storage_specification ::=
        OBJNO object_id_number 
|       TABNO cluster_table_number
|       SEGOBJNO cluster/data_object_number       /* v7/v8 style data block id */
|       FILE  data_segment_header_file_number     /* v6 style data block id */
        BLOCK data_segment_header_block_number )   
|       any_normal_storage_specification_but_silently_ignored

SCAN DATABASE;

 

Scans all blocks of all datafiles. Two or three files are produced:

  1. A .dat file with information about the found segment headers (index/cluster/table): (object id, file number, and block number).
  2. A .dat file with information about contiguous table/cluster data blocks: (object id (V7), file and block number of the segment header (V6), file and block number of the first block, number of blocks, number of tables).
  3. A .dat file with information for each LOB datablock. This file (optional, only with init.dul: SCAN_DATABASE_SCANS_LOB_SEGMENTS=TRUE) can be huge, and the required memory size can be a problem. Its purpose is twofold: 1: to possibly work around a corrupt LOB index when unloading a table; 2: to unload LOB segments (deleted LOBs, or LOB segments without a LOB index or parent table). Meaning of the fields in SCANNEDLOBPAGE.dat: (segobj#, lobid, fat_page_no, version (wrap, base), ts#, file#, block#).

 

DDL (DUL Description Language) overview

Rules for UNLOAD EXTENT and UNLOAD TABLE:

Extent map

UNLOAD TABLE requires an extent map. In 99.99% of cases the extent map in the segment header is available. In the rare 0.01% the segment header is lost; an extent map can then be built with the scan database command. This self-built extent map is only used during an unload if the parameter USE_SCANNED_EXTENT_MAP is set to TRUE.

All data blocks have an id of the segment they belong to, but there is a fundamental difference between V6 and V7. Data blocks created by Oracle version 6 have the address of the segment header block; data blocks created by Oracle7 have the segment's object id in the header.

 

Column specification

The column definitions must be specified in the order the columns are stored in the segment, that is, ordered by col$.segcol#. This is not necessarily the same as the column order in the CREATE TABLE statement: cluster columns are moved to the front, LONGs to the end, and columns added with ALTER TABLE are always stored last.

Unloading a single extent

UNLOAD EXTENT can be used to unload one or more adjacent blocks. The extent to unload must be specified with the STORAGE clause: to specify a single extent, use STORAGE ( EXTENTS( FILE fno BLOCK bno BLOCKS #blocks) ) (FILE and BLOCK specify the first block, BLOCKS the size of the extent).

DUL-specific column types

There are two extra DUL-specific data types:

  1. IGNORE: the column will be skipped as if it did not exist.
  2. UNKNOWN: a heuristic guess will be made for each column.

In SQL*Loader mode there are more DUL-specific data types:

  1. HEXRAW: the column is hex dumped.
  2. LOBINFO: show some information from the LOB locators.
  3. BINARY NUMBER: a machine word, as used in a LOB index.

 

Identifying USER$, OBJ$, TAB$ and COL$

DUL uses the same bootstrap procedure as the RDBMS: it uses the root dba from the system datafile header to locate the bootstrap$ table. Depending on the version, the root dba is either the location of the compatibility segment containing the bootstrap$ address or, in newer versions, the address of the bootstrap$ table itself. The bootstrap$ table is unloaded and its contents parsed to find the first four tables (USER$, OBJ$, TAB$ and COL$). The other tables are unloaded based on the information in these first four.

 

Overview of the SCAN commands

SCAN TABLES and SCAN EXTENTS scan for the same information and generate similar output. All columns of all rows are inspected. For each column the following statistics are gathered:

  • How often the row occurs in a data block.
  • The maximum internal column length.
  • How often the column is NULL.
  • How often the column consists of at least 75% printable ASCII.
  • How often the column consists of 100% printable ASCII.
  • How often the column is a valid Oracle number.
  • How often the column is a nice number (not many leading or trailing zeros).
  • How often the column is a valid date.
  • How often the column is a possibly valid rowid.

These statistics are combined and a column type is suggested. Using that suggestion, five rows are unloaded to show the result. The statistics are dumped to two files (seen_tab.dat and seen_col.dat). There are SQL*Loader and SQL*Plus scripts to automate part of the identification process (currently known as the getlost option).
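The voting scheme above can be sketched roughly as follows; this is a toy illustration of the idea only, and DUL's real statistics, thresholds, and number/date validation are more elaborate:

```python
# Toy version of the scan voting described above: collect simple
# statistics over the values seen at one column position, then
# propose a type.
def guess_type(values):
    seen = [v for v in values if v is not None]     # NULLs don't vote
    if not seen:
        return 'UNKNOWN'
    printable = sum(all(32 <= b < 127 for b in v) for v in seen)
    def is_number(v):
        try:
            float(v)
            return True
        except ValueError:
            return False
    numeric = sum(is_number(v.decode('ascii', 'replace')) for v in seen)
    if numeric == len(seen):
        return 'NUMBER'
    if printable == len(seen):
        return 'CHAR'
    return 'UNKNOWN'    # conservative: when in doubt, UNKNOWN

print(guess_type([b'800', b'1600', None, b'1250']))  # NUMBER
print(guess_type([b'SMITH', b'ALLEN', b'WARD']))     # CHAR
```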

 

Describe

There is a describe command. It shows the dictionary information for a table, as available in DUL's dictionary cache.

DUL startup sequence

During startup DUL goes through the following steps:

  • The parameter file "init.dul" is processed.
  • The DUL control file (default "control.dul") is scanned.
  • Try to load the dumps of USER$, OBJ$, TAB$ and COL$, if available, into DUL's data dictionary cache.
  • Try to load seg.dat and col.dat.
  • Accept DDL statements, or run the DDL script specified as the first argument.

 

DUL parameters that can be specified in init.dul

ALLOW_TRAILER_MISMATCH

BOOLEAN

Strongly discouraged; will rarely produce more rows. Use only if you fully understand what it means and why you would want it. It skips the check for correct block trailers. Blocks that fail this test are broken fragments anyway; using this only saves you the trouble of patching some blocks.

 

ALLOW_DBA_MISMATCH

BOOLEAN

Strongly discouraged; will rarely produce more rows. Use only if you fully understand what it means and why you would want it. It skips the check for the correct block address (DBA). Blocks that fail this test are broken fragments anyway; using this only saves you the trouble of patching some blocks.

 

ALLOW_OTHER_OBJNO

BOOLEAN

If your dictionary is older than your datafiles, the data object ids can differ for truncated tables. With this parameter set to true, DUL issues a warning but uses the value from the segment header. All other blocks are fully checked. For special cases only.

ASCII2EBCDIC

BOOLEAN

(Var)char fields must be translated from EBCDIC to ASCII (for unloading MVS databases on an ASCII host).

BUFFER

NUMBER (bytes)

Size of the row output buffer used in both export and SQL*Loader mode. Each row is first stored in this buffer; only complete, error-free rows are written to the output file.

COMPATIBLE

NUMBER

Database version; valid values are 6, 7, 8, or 9. This parameter must be specified.

 

CONTROL_FILE

TEXT

Name of the DUL control file (default: "control.dul").

 

DB_BLOCK_SIZE

NUMBER

Oracle block size in bytes (maximum 32K).

 

DC_COLUMNS

NUMBER

DC_OBJECTS

NUMBER

DC_TABLES

NUMBER

DC_USERS

NUMBER

Sizes of the DUL dictionary caches. If one is set too low, the cache is automatically resized.

 

EXPORT_MODE

BOOLEAN

Use export mode or SQL*Loader mode.

 

FILE

TEXT

Base for the generated (dump or data) file names. Used for 8.3 DOS-like file systems.

 

FILE_SIZE_IN_MB

NUMBER (Megabytes)

Maximum dump file size. Dump files are split into multiple parts; each file has a complete header and can be loaded individually.

 

LDR_ENCLOSE_CHAR

TEXT

The character used to enclose fields in SQL*Loader mode.

LDR_PHYS_REC_SIZE

NUMBER

Physical record size for the generated loader data file.

 

LDR_PHYS_REC_SIZE = 0: no fixed records; each record ends with a newline.

LDR_PHYS_REC_SIZE > 2: fixed record size.

MAX_OPEN_FILES

Maximum number of datafiles kept open at the same time at the operating system level.

 

OSD_BIG_ENDIAN_FLAG

Byte order in machine words. Big endian is also known as MSB first. DUL sets a default appropriate to the machine it is running on. To understand why this is called big endian, read Gulliver's Travels.

 

 

OSD_DBA_FILE_BITS
The size in bits of the file number in a DBA; more precisely, the size of the low-order part of the file number.
OSD_FILE_LEADER_SIZE
Bytes/blocks added before the real Oracle file header block.
OSD_C_STRUCT_ALIGNMENT
C struct member alignment (0, 16, or 32). The default of 32 is correct for most ports.
OSD_WORD_SIZE
Size of a machine word, always 32 except for MS/DOS (16).
PARSE_HEX_ESCAPES
Boolean, default FALSE
Use \\xhh hex escape sequences in strings while parsing. If set to true, strange characters can be specified using escape sequences. This feature is also meant for specifying multi-byte characters.
USE_SCANNED_EXTENT_MAP
BOOLEAN
Use the scanned extent map in ext.dat when unloading a table. The normal algorithm uses the extent map in the segment header; this parameter is only useful if some segment headers are missing or incorrect.
WARN_RECREATE_FILES
BOOLEAN (TRUE)
Set to FALSE to suppress the warning message when existing files are overwritten.
WRITABLE_DATAFILES
BOOLEAN (FALSE)
Normal use of DUL only reads the database files. However, UPDATE and SCAN RAW DEVICE also write; this parameter prevents accidental damage.

Sample init.dul:
# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40

# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_file_leader_size = 1

# database parameters
db_block_size = 8k

# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81
Configuring the port-dependent parameters
Starting from RDBMS version 10G the osd parameters are easy to configure; generally all parameters can be left at their defaults. The only one that may need attention is osd_big_endian_flag when doing a cross-platform unload, where the original database platform differs from the current machine: if osd_big_endian_flag is set incorrectly, this is detected at startup when the file headers are checked.
Known parameter collections
For pre-10G databases there is a list of known parameters for almost every platform on the OSD (Operating System Dependent) wiki page. If your platform is not in the list, you can determine the parameters using the suggestions below. (And then please inform me, so I can add them to the list.)
osd_big_endian_flag
Big endian or little endian (byte order in machine words): HP, SUN, and mainframes are generally big endian: OSD_BIG_ENDIAN_FLAG = TRUE. DEC and Intel platforms are little endian: OSD_BIG_ENDIAN_FLAG = FALSE. The default is correct for the platform DUL is running on.
There is no standard trick for this; the following may work on a UNIX system:
  echo dul | od -x
  If the output is like:
     0000000 6475 6c0a
     0000004
  You are on a big endian machine (OSD_BIG_ENDIAN_FLAG=TRUE).

  If you see:
     0000000 7564 0a6c
     0000004
  This is a little endian machine (OSD_BIG_ENDIAN_FLAG=FALSE).
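The same check can also be done without od; for example, in Python (an illustration, not part of DUL):

```python
import struct
import sys

# Pack a 32-bit word in native byte order and look at the first byte:
# this is exactly the property OSD_BIG_ENDIAN_FLAG describes (for the
# platform the datafiles came from, not necessarily this machine).
word = struct.pack('=I', 0x01020304)
if word[0] == 0x01:
    print('big endian machine    -> OSD_BIG_ENDIAN_FLAG = TRUE')
else:
    print('little endian machine -> OSD_BIG_ENDIAN_FLAG = FALSE')

# sys.byteorder gives the same answer directly
print(sys.byteorder)
```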
osd_dba_file_bits
The number of bits in a DBA used for the low-order part of the file number. Execute the following query:
SQL> select dump(chartorowid('0.0.1')) from dual;
  
  Typ=69 Len=6: 8,0,0,0,0,0    ->       osd_dba_file_bits =  5 (SCO)
  Typ=69 Len=6: 4,0,0,0,0,0    ->       osd_dba_file_bits =  6 (Sequent , HP)
  Typ=69 Len=6: 1,0,0,0,0,0    ->       osd_dba_file_bits =  8 (NCR,AIX)
  Typ=69 Len=6: 0,16,0,0,0,0   ->       osd_dba_file_bits = 12 (MVS)
  Typ=69 Len=10: 0,0,0,0,0,64,0,0,0,0      osd_dba_file_bits = 10 (Oracle8)      
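As an illustration of what these settings describe, here is a sketch of splitting a DBA into file# and block# once the number of file bits is known, assuming the Oracle8-style layout with the file number in the high-order 10 bits (a hypothetical helper, not DUL code):

```python
# Sketch: split a 32-bit data block address (DBA) into (file#, block#),
# assuming the file number occupies the top `file_bits` bits
# (10 for the Oracle8 layout, per the table above).
def decode_dba(dba, file_bits=10):
    block_bits = 32 - file_bits
    file_no = dba >> block_bits
    block_no = dba & ((1 << block_bits) - 1)
    return file_no, block_no

# file 1, block 2 in the Oracle8 layout:
print(decode_dba((1 << 22) | 2))   # (1, 2)
```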
OSD_C_STRUCT_ALIGNMENT
Structure layout in the datafile headers. 0: no padding between members in a C struct (VAX/VMS only); 16: some Korean Ticom machines and MS/DOS; 32: structure members are aligned to their member size (everything else, including ALPHA/VMS). Check with the following query:
SELECT * FROM v$type_size
WHERE type IN ( 'KCBH', 'KTNO', 'KCBH', 'KTBBH', 'KTBIT', 'KDBH'
              , 'KTECT', 'KTETB', 'KTSHC') ;
In general, osd_c_struct_alignment = 32 and the following output:
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              24
KTB      KTBBH    TRANSACTION FIXED HEADER                 48
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           44
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            8

8 rows selected.
Only for VAX/VMS and Netware will you see osd_c_struct_alignment = 0 and the following output:
COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              23
KTB      KTBBH    TRANSACTION FIXED HEADER                 42
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           39
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            7

8 rows selected.
If you get a different list, it will require some major hacking and sniffing, and possibly significant changes to DUL. (Email Bernard.van.Duijnen@oracle.com)
osd_file_leader_size

The number of blocks/bytes before the Oracle file header. Unix datafiles have one extra leading block (file size, block size magic number). A large value (>100) is interpreted as a byte offset; a small value is taken as a number of Oracle blocks.
Unix    :       osd_file_leader_size = 1
Vms     :       osd_file_leader_size = 0
Desktop :       osd_file_leader_size = 1 (or 512 for old personal oracle)
Others  :       Unknown ( Use Andre Bakker's famous PATCH utility to find out)
                An Oracle7 file header block starts with the pattern 0X0B010000.
An extra byte offset can be added in an optional third field in control.dul (for instance for AIX raw devices or DEC UNIX datafiles on raw devices).
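The leader handling boils down to an extra offset applied before every block seek. A minimal sketch in Python, under the assumptions just stated (small values count Oracle blocks, large values are byte offsets; block numbers here are 0-based from the Oracle file header block, which is an assumption of this sketch):

```python
import io

# Sketch of block addressing with a leader, per the rules above: a
# small osd_file_leader_size counts Oracle blocks, a large value
# (>100) is a byte offset. `extra_offset` models the optional third
# control.dul field for raw devices.
def read_block(f, block_no, block_size, leader=1, extra_offset=0):
    leader_bytes = leader * block_size if leader <= 100 else leader
    f.seek(extra_offset + leader_bytes + block_no * block_size)
    return f.read(block_size)

# toy "datafile": block size 4, one leading OS block 'LLLL'
f = io.BytesIO(b'LLLLAAAABBBB')
print(read_block(f, 1, 4))   # b'BBBB'
```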
Control file syntax specification
The control file (default name "control.dul") is used to specify ASM disks, block indexes, and datafile names. The format of the control file has been extended.
Currently there are three types of specifications in the DUL control file. Each entry goes on a separate line, and the ASM disks must precede the ASM files.
control_file_line ::= asm_disk_spec | file_piece_spec | block_index_spec
If compatible is 10 or higher you can also specify ASM disks. It is generally sufficient to specify the device name; all properties are retrieved automatically by header inspection. The full syntax is only needed when header inspection is impossible, that is, for disks with corrupted headers. The syntax is:
DISK  device name [  disk group options  ]

 disk group option  ::= GROUP  disk group name 
                           | DISK_NO  disk number in group 
                           | F1B1  File1 Block1 location
A block index is a way to access Oracle blocks on a corrupt file system. In general a corrupt file system is not wiped out entirely; it is not empty. Thanks to the specific layout of Oracle blocks, it is possible to recognize data blocks and store their locations in the block index. See also the create block index command. A block_index_name is a normal identifier and is used to construct a unique file name.
BLOCK INDEX  block_index_name
Each entry can contain a part of a datafile. The smallest unit is a single data block, which makes it possible to split datafiles that are too big for DUL into multiple parts, each part smaller than 2GB.
In general it is sufficient to specify the file name, even for a single block. If compatible is 10 or higher, the file number and tablespace number are read from the file header.
If the specified details differ from the file header, DUL gives a warning but uses your specification. This is to be able to unload files with a corrupted header block. For debugging, the file header can be dumped.
The optional extra leader offset is an extra byte offset added to all lseek() operations for that datafile. This makes it possible to skip over the extra 4K block of some AIX raw devices, or the extra 64K on Tru64 raw devices.
file_piece_spec ::= 
         [ [ tablespace_no ] relative_file_number]data_file_name
         [ optional extra leader offset ]
         [ startblock block_no ]
         [ endblock block_no ]
Examples
# AIX version 7 example with one file on raw device
   1 /usr/oracle/dbs/system.dbf
   8 /dev/rdsk/data.dbf 4096

# Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
   0  1 /fs1/oradata/PMS/system.dbf
   1  2 /tmp/huge_file_part1 startblock 1 endblock 1000000
   1  2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
   1  2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000

# ASM disks for two disk groups
disk /media/maxtor/asm/dgn1
disk /media/maxtor/asm/dgn2
disk /media/maxtor/asm/dgn3
disk /media/maxtor/asm/dgn4
disk /media/maxtor/asm/dgodd

# system datafile in the first asm disk group
+DGN/db102/datafile/system.257.621616979

# users datafile in a different disk group
+DGODD/db102/datafile/users.257.621616683

# a so called big file tablespace, use 1024 for the file#
 8 1024 /home/oracle/v102/dbs/bigfilets

# Or let DUL find out itself from the header
 /home/oracle/v102/dbs/bigfilets

# one tablespace with a different block size
/home/oracle/v102/dbs/ts16k.dbf block_size 16k

# or let DUL find out by header inspection
/home/oracle/v102/dbs/ts16k.dbf
Sample unload session: data dictionary usable for DUL
1. Create a suitable "init.dul"
2. Create control.dul:

sqlplus /nolog
    connect / as sysdba
    startup mount
    set trimspool on pagesize 0 linesize 256 feedback off
    column name format a200
    spool control.dul
    select ts#, rfile#, name from v$datafile;
    exit
edit the result

3. Start DUL and bootstrap;
$ dul

Data UnLoader 10.2.1.16 - Oracle Internal Only - on Thu Jun 28 11:37:24 2007
with 64-bit io functions

Copyright (c) 1994 2007 Bernard van Duijnen All rights reserved.

 Strictly Oracle Internal use Only
DUL> bootstrap;
Probing file = 1, block = 377
. unloading table                BOOTSTRAP$      57 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 57 entries loaded
Parsing Bootstrap$ contents
DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 10
 OBJ$: segobjno 18, file 1
 TAB$: segobjno 2, tabno 1, file 1
 COL$: segobjno 2, tabno 5, file 1
 USER$: segobjno 10, tabno 1, file 1
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$   52275 rows unloaded
. unloading table                      TAB$    1943 rows unloaded
. unloading table                      COL$   59310 rows unloaded
. unloading table                     USER$      70 rows unloaded
Reading USER.dat 70 entries loaded
Reading OBJ.dat
 52275 entries loaded and sorted 52275 entries
Reading TAB.dat 1943 entries loaded
Reading COL.dat 59310 entries loaded and sorted 59310 entries
Reading BOOTSTRAP.dat 57 entries loaded
...
Some more messages for all the other TABLES
...
Database character set is WE8ISO8859P1
Database national character set is AL16UTF16
DUL> unload user SCOTT;
About to unload SCOTT's tables ...
. unloading table                       EMP      14 rows unloaded
Sample unload session: data dictionary NOT usable for DUL
1. Create a suitable "init.dul" (see the configuration guide)
2. Create control.dul (see above)
3. Scan the database for segment headers and extents:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:10:16 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL> scan database;
data file 1 20480 blocks scanned
data file 4 7680 blocks scanned
data file 5 512 blocks scanned
DUL>quit
4. Restart DUL and scan the found tables for column statistics; this generates a huge amount of output:
echo scan tables \; | dul > scan.out&

[ many lines here]


Object id 1601 table number 0
UNLOAD TABLE T1601_0 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER, C5 DATE
        , C6 NUMBER, C7 NUMBER, C8 NUMBER )
    STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));

Colno  Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
    1    14        3    0%   0%   0% 100% 100%   0%   0%
    2    14        6    0% 100% 100% 100%  14%   0%  21%
    3    14        9    0% 100% 100% 100%  14%   0%   0%
    4    14        3    7%   0%   0% 100% 100%   0%   0%
    5    14        7    0%   0%   0%   0%   0% 100%   0%
    6    14        3    0%   0%   0% 100% 100%   0%   0%
    7    14        2   71%   0%   0% 100% 100%   0%   0%
    8    14        2    0%   0%   0% 100% 100%   0%   0%

"7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"

"7499" "-0.000025253223" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00" "1600" "30+

0" "30"

"7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500" "30"

"7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" "" "20"

"7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250" "1400" "30"

[ many more lines here ]
This looks familiar; use the above information and your knowledge of the emp table to compose:
UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
        hiredate date, sal number, comm number, deptno number)
 STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
5. Use this statement to unload emp:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:46:33 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Loaded 350 segments
Loaded 204 extents
Extent map sorted
DUL> UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
DUL 2> hiredate date, sal number, comm number deptno number)
DUL 3> STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
. unloading table                       EMP      14 rows unloaded
DUL>quit
 
Sample unload session: incorrect init.dul parameters
Wrong osd_dba_file_bits size
This can generate output similar to the following. Normally this should not happen, since you should create a demo database and check the parameters with the queries documented for DUL (in its HTML pages).
The mismatch in the DBAs is only in the file number part (the first number in brackets); the second number, the block number, is correct.
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:40:33 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [4][2]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][3] DBA in block mismatch [4][3]
DUL: Warning: Bad cache layer header file#=1, block#=3

...........and etc..........
Wrong osd_file_leader_size
This may generate output like the following, but many other variations are possible. In this case we are a fixed number of blocks off: the file number is correct, and the difference in the block numbers is constant:
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:44:23 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.

DUL: Warning: Block[1][2] DBA in block mismatch [1][3]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][3] DBA in block mismatch [1][4]
DUL: Warning: Bad cache layer header file#=1, block#=3

...........and etc..........
Wrong osd_c_struct_alignment
This may generate output similar to the following:
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:46:10 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
. unloading table OBJ$

DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262145
OS error 2: No such file or directory

DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262146
OS error 2: No such file or directory

...........and etc..........
Wrong db_block_size
The following output is generated when DB_BLOCK_SIZE is set too low. The correct value was 4096, but it was set to 2048. Normally this parameter should be taken from the Oracle instance's init.ora file, in which case it will not be set incorrectly.
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Thu Sep 4 12:38:25 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [513][1159680]
DUL: Warning: File=1, block 2: illegal block version 2
DUL: Warning: Block[1][2] Illegal block type[0]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][4] DBA in block mismatch [1][2]
DUL: Warning: File[1]Block[4]INCSEQ mismatch[90268!=0]
DUL: Warning: Bad cache layer header file#=1, block#=4

DUL: Warning: Block[1][6] DBA in block mismatch [1][3]
DUL: Warning: File[1]Block[6]INCSEQ mismatch[139591710!=86360346]
DUL: Warning: Bad cache layer header file#=1, block#=6

...........and etc..........
QUOTE MISSING
If you get the following error, the data dictionary tables USER$, OBJ$, TAB$ and COL$ were not created correctly. To fix this, simply delete all .dat and .ctl files created by dictv6.ddl or dictv7.ddl and restart.
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:49:30 1997

DUL: Error: Quote missing
Salvaging data from a corrupted EXP dump file - the UNEXP tutorial
This is hard if you do not know anything about the structure of an EXP dump file, so here is a quick explanation. Apart from the file header, the dump file has markers that identify the various sections. In each table section there are SQL statements. The most interesting part is the create table statement, followed by the insert into table statement. The insert statement is directly followed by the bind information: the number of columns, and for each column its type, bind length and some other information. Then comes the actual column data: each column is preceded by a two-byte length, followed by the actual column data. There are some tricks for longer columns. The end of the column data is marked by the special length marker 0xFFFF. There is no marker at the beginning of a row, so resynchronization after corruption is trial and error, and corruption is generally not detected immediately. The format is slightly different for DIRECT exports, so you must use the DIRECT option of unexp for a DIRECT export. The offset to specify is the beginning of a row. In general the first one is directly after the bind array, but for maximum flexibility you can start anywhere in the row data.
The first step is to scan the dump file to find the offsets and the SQL statements. Each output line starts with the offset at which the item was found.
DUL>  scan dump file expdat.dmp;
0: CSET: 1 (US7ASCII)                # Character set info from the header
3: SEAL EXPORT:V10.02.01             # the Seal - the exp version tag
20: DBA SYSTEM                       # exp done as SYSTEM
8461: CONNECT SCOTT                  # section for user SCOTT
8475: TABLE "EMP"
                                     # complete create table statement
8487: CREATE TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))  PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS

                                     # Insert statement
8829: INSERT INTO "EMP" ("EMPNO", "ENAME", "JOB", "MGR", "HIREDATE",
"SAL", "COMM", "DEPTNO") VALUES (:1, :2, :3, :4, :5, :6, :7, :8)

                                     # BIND information
8957: BIND information for 8 columns
 col[  1] type 2 max length 22
 col[  2] type 1 max length 10 cset 31 (WE8ISO8859P1) form 1
 col[  3] type 1 max length 9 cset 31 (WE8ISO8859P1) form 1
 col[  4] type 2 max length 22
 col[  5] type 12 max length 7
 col[  6] type 2 max length 22
 col[  7] type 2 max length 22
 col[  8] type 2 max length 22
Conventional export                  # Conventional means NOT DIRECT

9003: start of table data            # Here begins the first row
Now create the unexp statement from the create table statement, the direct/conventional information, and the offset of the start of the column data.
UNEXP TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))
dump file expdat.dmp from 9003;

Unloaded 14 rows, end of table marker at 9670 # so we have our famous 14 rows
This creates a normal SQL*Loader file and a matching control file. An extra column is added to the output file describing the status of the row: a P means the row is partial (some columns missing), an R means resynchronized (the first row after a resynchronization), and an O means overlap (the previous row had errors, but the new row partly overlaps the previous one).
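The two-byte length framing described above can be illustrated with a toy parser. This is only a sketch of the idea, assuming big-endian length prefixes and ignoring the real format's special cases (long columns, NULL handling, DIRECT mode):

```python
import struct

END_MARKER = 0xFFFF  # special length value that ends the table data

def parse_rows(buf, ncols, big_endian=True):
    """Toy parser for the conventional-export layout sketched above:
    each column is a 2-byte length followed by that many bytes of
    column data, and the special length 0xFFFF ends the table data.
    Rows have no start marker, so columns are counted to split rows."""
    fmt = ">H" if big_endian else "<H"
    rows, row, pos = [], [], 0
    while pos + 2 <= len(buf):
        (length,) = struct.unpack_from(fmt, buf, pos)
        pos += 2
        if length == END_MARKER:          # end-of-table marker
            break
        row.append(buf[pos:pos + length])
        pos += length
        if len(row) == ncols:
            rows.append(row)
            row = []
    return rows

# Two 3-column rows followed by the end marker:
data = b"".join(
    struct.pack(">H", len(c)) + c
    for c in [b"10", b"ACCOUNTING", b"NEW YORK",
              b"20", b"RESEARCH", b"DALLAS"]
) + struct.pack(">H", END_MARKER)

assert parse_rows(data, 3) == [
    [b"10", b"ACCOUNTING", b"NEW YORK"],
    [b"20", b"RESEARCH", b"DALLAS"],
]
```

Because there is no row-start marker, a wrong starting offset desynchronizes every following column, which is why resynchronizing a corrupted dump is trial and error.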






Contents
~~~~~~~~~~~~~~~~~
1. Introduction
2. Using DUL
2.1 Create an appropriate init.dul file
2.2 Create the control.dul file
2.3 Unload the object information
2.4 Invoke DUL
2.5 Rebuild the database
3. How to rebuild object definitions stored in the data dictionary?
4. How to unload data when the segment header block is corrupted?
5. How to unload data when the file header block is corrupted?
6. How to unload data without the SYSTEM tablespace?
7. Appendix A: Where to find the executables?
8. References
1. Introduction
~~~~~~~~~~~~~~~
This document explains how to use DUL; it is not a complete explanation of Bernard's data unloading capabilities.
This document is for internal use only and should not be given to customers at any time; DUL should always be used by, or under the supervision of, an analyst.
DUL (Data UnLoader) attempts to retrieve data from Oracle databases that cannot be retrieved otherwise. It is not a replacement for the export utility or
SQL*Loader. The database may be corrupted, but an individual data block must be 100% correct. During all unloads the blocks are checked to make sure they are not corrupted and belong to the correct segment. If DUL detects a corrupted block, an error message is printed to the loader file and to standard output, but this does not terminate the unloading of the next row or block.

2. Using DUL
~~~~~~~~~~~~
First you must gather the information about the objects that exist in the data blocks; these statistics will be loaded into the DUL dictionary in order to unload the database objects.
This information is retrieved from the USER$, OBJ$, TAB$ and COL$ tables that were created at database creation. They can be unloaded because the object numbers of these tables are fixed, due to the rigid nature of sql.bsq. DUL can find this information in the SYSTEM tablespace, so the SYSTEM tablespace data file(s) must be included in the control file; if those data file(s) are not available, see chapter 6.
2.1 Create an appropriate “init.dul” file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
REM Platform specific parameters (NT)
REM A list of these parameters for the most common platforms can be obtained.
osd_big_endian_flag=false
osd_dba_file_bits=10
osd_c_struct_alignment=32
osd_file_leader_size=1
osd_word_size = 32
REM Sizes of the DUL dictionary caches. If one of these is too low, startup will fail.
dc_columns=2000000
dc_tables=10000
dc_objects=1000000
dc_users=400
dc_segments=100000
REM Location and name of the control file; the default is control.dul in the current directory.
control_file = D:\Dul\control_orcl.dul

REM Database block size. It can be found in the init.ora file, or retrieved by issuing “show parameter %db_block_size%” in Server Manager
REM (svrmgr23/30/l). Change this parameter to the block size of the corrupted database.
db_block_size=4096

REM If the data must be unloaded in export/import format, this can/must be specified.
REM This creates a file suitable for the Oracle import utility, although the generated file is completely different from a table mode export generated by the EXP utility.
REM It is a single table dump file with a create table structure statement and the table data.
REM Grants, storage clauses and triggers are NOT included in this dump file!
export_mode=true

REM The compatible parameter can be specified and can be either 6, 7 or 8.
compatible=8

REM This parameter is optional. It can be specified on platforms that do not support long file names (e.g. 8.3 DOS), or when the file name format DUL uses (“owner_name.table_name.ext”) is not acceptable.
REM The dump files will then be named dump001.ext, dump002.ext, and so on.
file = dump

The complete list is available in the “DUL Parameters” section of the HTML documentation, but this init.dul file will work in most cases and contains all the parameters needed for a successful unload.
2.2 Create the “control.dul” file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some knowledge of the logical tablespaces and the physical data files is needed, or you can issue the following queries while the database is still mounted:
Oracle 6, 7
———–
> connect internal
> spool control.DUL
> select * from v$dbfile;
> spool off

Oracle 8
——–
> connect internal
> spool control.DUL
> select ts#, rfile#, name from v$datafile;
> spool off

Edit the spool file and change, if needed, the datafile locations, and strip out unnecessary information like table headers, feedback lines, etc.
A sample control file looks something like this:
REM Oracle7 control file
1 D:\DUL\DATAFILE\SYS1ORCL.DBF
3 D:\DUL\DATAFILE\DAT1ORCL.DBF
7 D:\DUL\DATAFILE\USR1ORCL.DBF
REM Oracle8 control file
0 1 D:\DUL\DATAFILE\SYS1ORCL.DBF
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF
1 3 D:\DUL\DATAFILE\USR2ORCL.DBF
2 4 D:\DUL\DATAFILE\DAT1ORCL.DBF

Note: Each entry can contain a part of a datafile. This is useful when you need to split a datafile that is too big for DUL, so that each part is smaller than, say, 2GB. For example:
REM Oracle8 with a datafile split into multiple parts, each part smaller than 1GB!
0 1 D:\DUL\DATAFILE\SYS1ORCL.DBF
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 1 endblock 1000000
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 1000001 endblock 2000000
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 2000001 endblock 2550000
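The startblock/endblock ranges above can be generated mechanically from the file's total block count and the desired part size. A small sketch (the helper name and the per-part byte limit are illustrative, not part of DUL):

```python
def split_entry(ts, rfile, path, total_blocks, block_size=8192,
                part_limit=2 * 1024**3):
    """Emit control.dul lines covering total_blocks of one datafile
    as startblock/endblock ranges, each range below part_limit bytes."""
    blocks_per_part = part_limit // block_size
    lines, start = [], 1
    while start <= total_blocks:
        end = min(start + blocks_per_part - 1, total_blocks)
        lines.append(f"{ts} {rfile} {path} startblock {start} endblock {end}")
        start = end + 1
    return lines

# Cover a 2550000-block file in parts of at most ~1GB (4K blocks):
for line in split_entry(1, 2, r"D:\DUL\DATAFILE\USR1ORCL.DBF",
                        total_blocks=2550000, block_size=4096,
                        part_limit=10**9):
    print(line)
```

The generated ranges are contiguous and the last range is clipped to the real end of the file, mirroring the hand-written example above.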
2.3 Unload the object information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Start the DUL tool with the appropriate DDL (DUL Description Language) script. Depending on the database version, there are three scripts available to unload the USER$, OBJ$, TAB$ and COL$ tables.
Oracle6 :> dul8.exe dictv6.ddl
Oracle7 :> dul8.exe dictv7.ddl
Oracle8 :> dul8.exe dictv8.ddl

Data UnLoader: Release 8.0.5.3.0 – Internal Use Only – on Tue Jun 22 22:19:
Copyright (c) 1994/1999 Bernard van Duijnen All rights reserved.

Parameter altered
Session altered.
Parameter altered
Session altered.
Parameter altered
Session altered.
Parameter altered
Session altered.
. unloading table OBJ$ 2271 rows unloaded
. unloading table TAB$ 245 rows unloaded
. unloading table COL$ 10489 rows unloaded
. unloading table USER$ 22 rows unloaded
. unloading table TABPART$ 0 rows unloaded
. unloading table IND$ 274 rows unloaded
. unloading table ICOL$ 514 rows unloaded
. unloading table LOB$ 13 rows unloaded

Life is DUL without it
This unloads the data of the USER$, OBJ$, TAB$ and COL$ data dictionary tables into SQL*Loader files; this cannot be manipulated into dump files of the import format. The parameter export_mode = false is hardcoded into the ddl scripts and cannot be changed to the value “true”, since that would cause DUL to fail with the error:

. unloading table OBJ$
DUL: Error: Column “DATAOBJ#” actual size(2) greater than length in column
definition(1)
………….etc……………

2.4 調用DUL
~~~~~~~~~~~~~~
Invoke DUL in interactive mode, or prepare a script that contains all the ddl commands needed to unload the necessary data from the database. I will describe the most commonly used commands in this document; this is not the complete list of parameters that can be specified. The complete list can be found in the “DDL Description” section of the HTML documentation.
DUL> unload database;
=> this will unload the entire database tables (including sys' tables)
DUL> unload user user_name;
=> this will unload all tables owned by the specified user.
DUL> unload table user_name.table_name;
=> this will unload the specified table owned by that user.
DUL> describe user_name.table_name;
=> this will show the table columns and their relative pointers into the datafile(s), for a table owned by the specified user.

DUL> scan database;
=> scans all blocks of all data files.
Two files are generated:
1: seg.dat, with information on the segment headers found (index/cluster/table):
(object id, file number and block number).
2: ext.dat, with information on contiguous blocks of table/cluster data:
(object id (V7), file and block number of the segment header (V6), file number and block number of the first block, number of blocks, number of tables)
DUL> scan tables;
=> uses seg.dat and ext.dat as input.
Scans all tables in all data segments (a header block and at least one extent that matches at least one table).

2.5 Rebuild the database
~~~~~~~~~~~~~~~~~~~~~~~~
Create the new database and use import or SQL*Loader to restore the data retrieved by DUL. Note that when you only unloaded the table structures and data, the indexes, grants, PL/SQL and triggers will no longer exist in the new database. To get an exact copy of the original database you will need to rerun your creation scripts for the tables, indexes, PL/SQL, and so on.
If you do not have these scripts, you will need to perform the steps described in section 3 of this document.

3. How to rebuild object definitions stored in the data dictionary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to rebuild PL/SQL (packages, procedures, functions and triggers), grants, indexes, constraints and storage clauses (the old table structures) with DUL, but it is a bit tricky. You need to unload the relevant data dictionary tables with DUL and then load these tables into a healthy database, making sure to use a user other than SYS or SYSTEM. Loading the data dictionary tables of a corrupted database into the dictionary of a healthy database could corrupt the healthy database as well.

Example: retrieving the details of pl/sql packages / procedures / functions from a corrupted database:
1) Follow the steps explained in the “Using DUL” section and unload the data dictionary table “source$”.
2) Create a new user in a healthy database and specify the desired default and temporary tablespaces.
3) Grant connect, resource and imp_full_database to the new user.
4) Import/load the table “source$” into the newly created schema:
e.g.: imp80 userid=newuser/passw file=d:\dul\scott_emp.dmp
log=d:\dul\impemp.txt full=y
5) You can now query the table to rebuild the pl/sql packages / procedures / functions from the corrupted database. Scripts that generate such PL/SQL creation scripts can be found in WebIv.
The same steps can be used to rebuild indexes, constraints and storage parameters, or to regrant rights to the appropriate users. Note that you always need some kind of script that can rebuild the objects and that covers all the features of the corrupted database version. For example: if the corrupted database is version 7.3.4 and you have several bitmap indexes, you will not be able to rebuild the bitmap indexes successfully with a script that only supports version 7.3.2 or earlier!

4. How to unload data when the segment header block is corrupted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When DUL cannot retrieve data block information in the normal way, it can scan the database to create its own segment/extent map. This scan of the database is necessary before the data can be unloaded from the data files.
(To illustrate this example I copied an empty block over the segment header block.)
1) Create appropriate “init.dul” (see 2.1) and “control.dul” (see 2.2) files.
2) Unload the table. This will fail and indicate that there is corruption in the segment header block:
DUL> unload table scott.emp;
. unloading table EMP
DUL: Warning: Block is never used, block type is zero
DUL: Error: While checking tablespace 6 file 10 block 2
DUL: Error: While processing block ts#=6, file#=10, block#=2
DUL: Error: Could not read/parse segment header
0 rows unloaded
3) Run the scan database command:
DUL> scan database;
tablespace 0, data file 1: 10239 blocks scanned
tablespace 6, data file 10: 2559 blocks scanned
4) Indicate to DUL that it should use its own generated extent map rather than the segment header information.
DUL> alter session set use_scanned_extent_map = true;
Parameter altered
Session altered.
DUL> unload table scott.emp;
. unloading table EMP 14 rows unloaded

5. How to unload data when the datafile header block is corrupted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A corruption in the datafile header block is always reported at the moment you open the database. This is unlike a corruption in a segment header block (see point 4), where the database can be opened successfully and the corruption is reported only when you query the table. DUL has no problem recovering from this situation, although there are other ways to recover, such as patching the datafile header block.
You will receive errors like:
ORACLE instance started.
Total System Global Area 11739136 bytes
Fixed Size 49152 bytes
Variable Size 7421952 bytes
Database Buffers 4194304 bytes
Redo Buffers 73728 bytes
Database mounted.
ORA-01122: database file 10 failed verification check
ORA-01110: data file 10: ‘D:\DATA\TRGT\DATAFILES\JUR1TRGT.DBF’
ORA-01251: Unknown File Header Version read for file number 10

6. How to unload data without the SYSTEM tablespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If data files are not available for the SYSTEM tablespace, the unload can still continue, but the object information cannot be retrieved from the data dictionary tables USER$, OBJ$, TAB$ and COL$. So owner names, table names and column names will not be loaded into the DUL dictionary. Identifying the tables can be an overwhelming task and requires in-depth knowledge of the RDBMS internals.
First of all, you need a good knowledge of your application and its tables.
Column types can be guessed by DUL, but table and column names are lost.
Any old SYSTEM tablespace from the same database (even weeks old) can be a great help!
1) Create the “init.dul” and “control.dul” files as in steps 1 and 2 above. In this case the control file will contain all the data files you want to restore from, but it does not need the SYSTEM tablespace information.
2) Then invoke DUL and type the following command:
DUL> scan database;
data file 6 1280 blocks scanned
This creates the extent and segment map. The DUL command interpreter may also be terminated at this point.
3) Re-invoke the DUL command interpreter and do the following:
Data UnLoader: Release 8.0.5.3.0 – Internal Use Only – on Tue Aug 03 13:33:
Copyright (c) 1994/1999 Oracle Corporation, The Netherlands. All rights res
Loaded 4 segments
Loaded 2 extents
Extent map sorted
DUL> alter session set use_scanned_extent_map = true;
DUL> scan tables; (or scan extents;)
Scanning tables with segment header
Oid 1078 fno 6 bno 2 table number 0
UNLOAD TABLE T_O1078 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN )
STORAGE ( TABNO 0 EXTENTS( FILE 6 BLOCK 2));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 4 2 0% 0% 0% 100% 100% 0% 0%
2 4 10 0% 100% 100% 100% 0% 0% 0%
3 4 8 0% 100% 100% 100% 0% 0% 50%
"10" "ACCOUNTING" "NEW YORK"
"20" "RESEARCH" "DALLAS"
"30" "SALES" "CHICAGO"
"40" "OPERATIONS" "BOSTON"
Oid 1080 fno 6 bno 12 table number 0
UNLOAD TABLE T_O1080 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER,
C5 DATE, C6 NUMBER, C7 NUMBER, C8 NUMBER )
STORAGE ( TABNO 0 EXTENTS( FILE 6 BLOCK 12));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 14 3 0% 0% 0% 100% 100% 0% 0%
2 14 6 0% 100% 100% 100% 0% 0% 21%
3 14 9 0% 100% 100% 100% 0% 0% 0%
4 14 3 7% 0% 0% 100% 100% 0% 0%
5 14 7 0% 0% 0% 0% 0% 100% 0%
6 14 3 0% 0% 0% 100% 100% 0% 0%
7 14 2 71% 0% 0% 100% 100% 0% 0%
8 14 2 0% 0% 0% 100% 100% 0% 0%
"7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"
"7499" "ALLEN" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00" "1600" "300" "30"
"7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500" "30"
"7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" "" "20"
"7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250" "1400" "30"
Note : it might be best that you redirect the output to a logfile since
commands like the “scan tables” can produce a lot of output.
On Windows NT you can do the following command :
C:\> dul8 > c:\temp\scan_tables.txt
scan tables;
exit;
4) Identify the lost tables from the output of step 3. If you look carefully at the output above, you will notice that the unload syntax is already given, but the table name will be of the format T_O<object_id> and the column names will be of the format C<column_number>; the data types will not match the original data types exactly.
Look especially for strings like “Oid 1078 fno 6 bno 2 table number 0”, where:
oid = object id, will be used to unload the object
fno = (data) file number
bno = block number
5) Unload the identified tables with the “unload table” command:
DUL> unload table dept (deptno number(2), dname varchar2(14),
loc varchar2(13)) storage (OBJNO 1078)
Unloading extent(s) of table DEPT 4 rows.







Comments: DUL and the ASM disk header copy

Disk header copy
Recent ASM versions keep an extra copy of the ASM disk header. This copy can be used to repair the real header using kfed with the repair option.
Location
The copy is stored as the last block of the PST. That means it is in the last block of allocation unit 1 (the original is block 0 of au 0). The default size of an allocation unit is 1M and the default metadata block size is 4K, so there are 256 blocks in each au. Normally, therefore, the copy is in au 1 block 254 (ASM counts from zero; the original is in allocation unit 0 block 0).
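The arithmetic in the last paragraph can be checked with a few lines. This sketch simply encodes the rule stated above (the function name is illustrative):

```python
def header_copy_location(au_size=1024 * 1024, block_size=4096):
    """Return (au, block) of the ASM disk header copy as described
    above: it lives in allocation unit 1, at block blocks_per_au - 2
    when counting from zero (block 254 for the 1M/4K defaults).
    The original header is at allocation unit 0, block 0."""
    blocks_per_au = au_size // block_size
    return 1, blocks_per_au - 2

# Defaults: 1M AU, 4K blocks -> 256 blocks per AU, copy at au 1 block 254.
assert header_copy_location() == (1, 254)

# Non-default 4MB AU: 1024 blocks per AU, copy at au 1 block 1022.
assert header_copy_location(au_size=4 * 1024 * 1024) == (1, 1022)
```

For non-default allocation unit sizes this is why kfed needs the ausz argument, as shown below: the copy's offset on disk depends on the AU size.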
kfed repair
Once it has been determined that the only problem is a lost/corrupt disk header, the fix is as simple as:
$ kfed repair <disk>
If the AU size is non-standard, the above will fail with:
KFED-00320: Invalid block num1 =  [3] , num2 =  [1] , error = [type_kfbh]
But this is expected and no harm is done. You simply need to specify the correct AU size. For example, for a 4MB AU the command is:
$ kfed repair <disk> ausz=4194304
DUL
Will DUL check/use the header copy (always? or only when the primary header is corrupt?) and warn to use kfed when the header is corrupt but the copy is good?
References
Bug 5061821: OS utilities can destroy ASM disk headers. Fixed in 11.2.0.1, 11.1.0.7, 10.2.0.5 and later.
Note 417687.1: Creating a new ASM disk header after the existing one is corrupted.
rdbms/src/client/tools/kfed/kfed.c




Unloading scanned lobs with DUL

The idea is to unload LOBs directly from the LOB segment.
We need to figure out:
1. Is it a CLOB or a BLOB?
2. For a CLOB: its character set, single byte or UCS16, byte size
3. The chunk size; the lob page# in the header is a fat page number
4. Missing pages should be emitted as all zeroes
5. The size is not known; just strip trailing zeroes (but down to byte size)?
Changes implemented:
1. Added the LOB segment id to the scanned lob page information, since most properties apply to the segment. That is, the layout of the scanned lob page cache is now: segid, lobid, fatpageno (chunk#), version wrap, vsn base, ts#, file#, block#
Things to do:
1. A command to unload all LOBs; the LOB segment provides all the common properties for the command.
2. A command to unload a single lob from a LOB segment, given its lobid, an optional size, and an optional list of DBAs.
3. An analyze command that generates the unload LOB segment commands.
4. An analyze command that generates a command to unload each LOB from a segment.
Things to consider:
1. Change file#, block# to a single dba? Pro: no calculations; con: more difficult to read?




I am working on a data salvage job with a customer. The DUL tool downloaded from the DUL website does not work!

dul4x86-linux.tar.gz gives the error: version ‘GLIBC_2.11’ not found

dul4i386-linux.as3.tar.gz gives the error: You need a more recent DUL version for this os.

The customer's Linux version: 2.6.32-400.33.2.el5uek

Please help!!!


There are two versions for Linux, and it is the second one that starts normally. Because of the built-in copy protection, you must make sure to download the latest version from Bernard's website. If you do have the latest download, then only Bernard can recompile and redistribute a new executable version. DUL expires roughly every 45 days.




We are at a critical moment and need DUL to extract data from a production-down database.
- The database is in NOARCHIVELOG mode, on a Windows 64-bit platform.
- No database backup has been taken since go-live.
- The database is small, but very important.
- Since this morning a media failure has corrupted all of the database's data files except the ones holding the actual customer data.
- According to the dbverify output, the SYSTEM tablespace is 100% corrupted.
- There is no backup of the SYSTEM tablespace anywhere; even the test system was created as a new database, so the object ids and rfile numbers are not the same.

We tried the following:
1. Unload the data from production using its own SYSTEM data file (cannot bootstrap, since the SYSTEM data file is corrupted).
2. Unload the data using the SYSTEM data file from TEST (bootstrap succeeds, but unloading fails because of mismatched rfile#, ts# and object ids; the expected result, but worth a try).
3. Unload the data using only the data files holding the actual data. This successfully generates scanned_tables, and we have asked the customer for a list of tables to map against, but I am not sure they can provide clear information.

Any advice is appreciated, along the lines of:
- Is there any way to repair the corrupted SYSTEM tablespace data files and use them for the unload?
- Or is there a way to use the SYSTEM data file from TEST (a different, freshly installed database) despite the mismatched rfile#, ts# and object ids?


 

 

 

 

 

 

 

 

 

 

DUL’s PRINCIPLES and FEATURE LIST

STANDALONE C-PROGRAM

DUL is a standalone C program that directly retrieves rows from tables in data files. The Oracle RDBMS software is NOT used at all. DUL does dirty reads, it assumes that every transaction is committed. Nor does it check/require that media recovery has been done.

LAST RESORT

DUL is intended to retrieve data that cannot be retrieved otherwise. It is NOT an alternative for EXP, SQL*Plus etc. It is meant to be a last resort, not for normal production usage.

Before you use DUL you must be aware that the rdbms has many hidden features to force a bad database open. Undocumented init.ora parameters and events can be used to skip roll forward, to disable rollback, disable certain SMON actions, advance the database scn and more.

DATABASE CORRUPT – BLOCKS OK

The database can be corrupted, but an individual data block used must be 100% correct. During all unloading checks are made to make sure that blocks are not corrupted and belong to the correct segment. If during a scan a bad block is encountered, an error message is printed in the loader file and to standard output. Unloading will continue with the next row or block.

ROWS in CLUSTERS/TABLES/INDEXES

DUL can and will only unload index/table/cluster data. It will NOT dump triggers, stored procedures nor create sql scripts for tables or views. (But the data dictionary tables describing them can be unloaded). The data will be unloaded in a format suitable for SQL*Loader or IMP. A matching control file for SQL*Loader is generated as well.

DUL can unload indices and index organized tables. Index unload is useful to determine how many rows a table should have or to identify the missing rows.

CROSS PLATFORM UNLOADING

Cross-platform unloading is supported. The database can be copied from a different operating system than the DUL-host. (Databases/systems done so far: Sequent/ptx, Vax Vms, Alpha Vms, MVS, HP9000/8xx, IBM AIX, SCO Unix, Alpha OSF/1, Intel Windows NT).

The configuration parameters within “init.dul” will have to be modified to match those of the original platform and O/S rather than the platform from which the unload is being done.

ROBUST

DUL will not dump, spin or hang no matter how badly corrupted the database is.

(NEARLY) ALL ORACLE FEATURES SUPPORTED

Full support for all database constructs: row chaining, row migration, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS, trailing NULL columns, and unlimited extents, new block layout of Oracle8, partitioned tables.

Later additions are lobs, compressed indexes, 9ir2 compressed tables. Varrays and ADTs (user defined objects) are partly supported in sql*loader mode.

ASM is fully supported, files can be extracted from an asm disk group. No mounted ASM instance is used, the disks are accessed directly. Non default asm allocation unit sizes are supported.

Data can be recovered from export dump files with the unexp command suite. Some initial work has been done for unpump to support data pump files.

SUPPORTED RDBMS VERSIONS

DUL should work with all versions starting oracle 6. DUL has been tested with versions from 6.0.26 up to 10.2. Even the old block header layout (pre 6.0.27.2) is supported.

MULTI BYTE SUPPORT

DUL itself is essentially a single byte application. The command parser does not understand multi byte characters, but it is possible to unload any multi byte database. For all possible caveats there is a work around.

DUL can optionally convert to UTF8. This is for NCLOBS that are stored in UTF16.

RESTRICTIONS

MLSLABELS

Multi Level Security Labels of trusted oracle are not supported.

(LONG) RAW

DUL can unload (long) raws. Nowadays there is a suitable format in SQL*Loader to preserve all long raws, so long raws and blobs can be unloaded in both modes.

ORACLE8 OBJECT OPTION AND LOBS

Nested tables are not yet supported, if they are needed let me know and it will be added. Varrays and ADTs are supported, also those that are stored as a kernel lob. CLOBS, NCLOBS are supported both in SQL*Loader mode and in exp mode. BLOBS are best handled in exp mode, the generated hex format in SQL*Loader mode is not loaded correctly currently.

PORTABLE

DUL can be ported to any operating system with an ANSI-C compiler. DUL has been ported to many UNIX variants, VMS and WindowsNT. Currently all builds are done using gcc and a cross compiler environment on Linux.

RDBMS INTERNALS

A good knowledge of the Oracle RDBMS internals is a prerequisite to be able to use DUL successfully. For instance the Data Server Internals (DSI) courses give a good foundation. There is even a module dedicated to DUL.

 

SETTING UP and USING DUL

CONFIGURATION FILES

There are two configuration files for DUL. “init.dul” contains all configuration parameters. (size of caches, details of header layout, oracle block size, output file format) In the control file, “control.dul”, the database data file names and the asm disks can be specified.

DATA DICTIONARY AVAILABLE

The Oracle data dictionary is available if the data files which made up the SYSTEM TableSpace are available and useable. The number which Oracle assigned to these files and the name you have given them, which does not have to be the original name which Oracle knew, must be included in the “control.dul” file. You also need to eventually include the file numbers and names of any files from other TableSpaces for which you wish to eventually unload TABLES and their data. The lack of inclusion of these files will not affect the data dictionary unload step but it will affect later TABLE unloading.

USING DUL WHEN USER$, OBJ$, TAB$ and COL$ CAN BE  UNLOADED

Steps to follow:

  1. configure DUL for the target database. This means creating a correct init.dul and control.dul. The SYSTEM TableSpace’s data file numbers and names must be included within the control.dul file along with any data files for TableSpaces from which you wish to unload TABLEs and their data. For Oracle8 and higher the tablespace number and the relative file number must be specified for each datafile.
  2. Use the ” BOOTSTRAP; ” command to prepare for unloading. The bootstrap process will find a compatibility segment, then find and unload the bootstrap$ table. The old ” dul dictv7.ddl” style scripts are no longer needed.
  3. Unload the tables for which data files have been included within the “control.dul” file. Use one of the following commands:
    • “UNLOAD TABLE [owner.]table;” (do not forget the semicolon)
      • This will unload the one table definition and the table’s data.
    • “UNLOAD USER user_name;”
      • This unloads all tables and data for the specified user.
    • “UNLOAD DATABASE;”
      • This unloads all of the database tables available (except the user SYS).

NO DATA DICTIONARY AVAILABLE

If data files are not available for the SYSTEM TableSpace the unload can still continue but USER, TABLE and COLUMN names will not be known. Identifying the tables can be an overwhelming task. But it can be (and has been) done. You need in depth knowledge about your application and the application tables. Column types can be guessed by DUL, but table and column names are lost. Any old SYSTEM tablespace from the same database but weeks old can be of great help! Most of the information that DUL uses does not change. (Only the dataobj# changes, during truncate or index rebuild.)

USING DUL WITHOUT SYSTEM TABLESPACE

Steps to follow:

  1. configure DUL for the target database. This means creating a correct init.dul and control.dul. (See Port specific parameters ). In this case control.dul file will need the numbers and names of datafiles from which TABLEs and data will be unloaded but it does not require the SYSTEM TableSpace’s information.
  2. SCAN DATABASE; : scan the database, build extent and segment map
  3. SCAN TABLES; or SCAN EXTENTS; : gather row statistics
  4. Identify the lost tables from the output of step 3.
  5. UNLOAD the identified tables.
AUTOMATED SEARCH

To ease the hunt for the lost tables: the scanned statistical information in seen_tab.dat and seen_col.dat can be loaded into a fresh database. If you recreate the tables ( Hopefully the create table scripts are still available) then structure information of a “lost” table can be matched to the “seen” tables scanned information with two SQL*Plus scripts. (fill.sql and getlost.sql).

HINTS AND PITFALLS
  • Names are not really relevant for DUL, only for the person who must load the data. But the unloaded data does not have any value, if you do not know from which table it came.
  • The guessed column types can be wrong. Even though the algorithm is conservative and decides UNKNOWN if not sure.
  • Trailing NULL columns are not stored in the database. So if the last columns only contain NULL’s then the scanner will NOT find them. (During unload trailing NULL columns are handled correctly).
  • When a table is dropped, the description is removed from the data dictionary only. The data blocks are not overwritten unless they are reused for a new segment. So the scanner software can see a table that has been dropped.
  • Tables without rows will go unnoticed.
  • Newer objects have a higher object id than older objects. If a table is recreated, or if there is a test and a production version of the same table, the object id can be used to decide.

DDL (DUL Description Language) UNLOAD STATEMENT OVERVIEW

DUL uses an SQL like command interface. There are DDL statements to unload extents, tables, users or the entire database. Data dictionary information required can be specified in the ddl statements or taken from the previously unloaded data dictionary. The following three statements will unload the DEPT table. The most common form is if the data dictionary and the extent map are available:

         UNLOAD TABLE scott.dept;

All relevant information can be specified in the statement as well:

        REM Columns with type in the correct order
        REM The segment header location in the storage clause
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

Oracle version 6:

        REM version 6 data blocks have segment header location in each block
        ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));

Oracle7:

        REM Oracle7 data blocks have object id in each block

        ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
        UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
               STORAGE( OBJNO 1501 );

DUL’S OUTPUT FORMAT.

Only complete good rows are written to the output file. For this each row is buffered. The size of the buffer can be changed with the init.dul parameter BUFFER. There is no speed gained with a high BUFFER parameter; it should just be big enough to hold a complete row. Incomplete or bad rows are not written out. The FILE_SIZE_IN_MB init.dul parameter can be used to split the output (at a proper boundary) into multiple files; each file can be loaded individually.

There are three different modes of output format.

  • Export mode
  • SQL*Loader mode: stream data files
  • SQL*Loader mode: Fixed physical record data files

EXPORT MODE

The generated file is completely different from a table mode export generated by EXP! The file is the minimal format that IMP can load. For each table a separate IMP loadable file will be generated. It is a single table dump file. It contains a header, an insert table statement and the table data. Table grants, storage clauses, or triggers will not be included. A minimal create table statement is included (no storage clause, just column names and types without precision). The character set indication in the generated header is V6 style: it is set to mean ASCII based character set.

To enable export mode, set the init.dul parameter EXPORT_MODE to TRUE.

As the generated pseudo dump file does not contain character set information set NLS_LANG to match that of the original database. In export mode no character set conversion is done.

SQL*LOADER MODES

The data in the data files is either not converted at all, or everything is converted to UTF8 if LDR_OUTPUT_IN_UTF8 is set. This setting is required in mixed character set environments as the contents of a data file must have a single character set.

When loading the data you probably need to set NLS_LANG to match that of the original database to prevent unwanted character set conversion.

For both SQL*Loader output formats the columns will be space separated and enclosed in double quotes. Any double quote in the data will be doubled. SQL*Loader recognizes this and will load only one. The character used to enclose the columns can be changed from double quote to any character you like with the init.dul parameter LDR_ENCLOSE_CHAR.
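The enclosure convention just described (columns space separated, each enclosed, any embedded enclosure character doubled so SQL*Loader loads it as one) can be sketched as:

```python
def format_row(cols, enclose='"'):
    """Render one output row in the SQL*Loader stream style described
    above: columns space separated, each enclosed, with any enclosure
    character inside the data doubled. A sketch of the convention,
    not DUL's actual formatter."""
    return " ".join(
        enclose + c.replace(enclose, enclose * 2) + enclose for c in cols
    )

assert format_row(["7369", "SMITH", "CLERK"]) == '"7369" "SMITH" "CLERK"'
assert format_row(['say "hi"']) == '"say ""hi"""'
```

Changing the enclose argument mirrors what the LDR_ENCLOSE_CHAR parameter does for the generated data files.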

There are two styles of physical record organization:

Stream Mode

Nothing special is done in stream mode, a newline is printed after each record. This is a compact format and can be used if the data does not contain newline characters. To enable stream mode set LDR_PHYS_REC_SIZE = 0 in init.dul.

Fixed Physical Records

This mode is essential if the data can contain newlines. One logical record, one complete row, can be composed of multiple physical records. The default record length is 81; this fits nicely on the screen of a VT220. The physical record size can be specified with LDR_PHYS_REC_SIZE in init.dul.

OUTPUT FILE NAMES

The file names generated are: owner name_table name.ext. The extension is “.dmp” for IMP loadable files. “.dat” and “.ctl” are used for the SQL*Loader datafile and the control file. To prevent variable substitution and other unwanted side effects, strange characters are stripped. (Only alphanumerics and ‘_’ are allowed.)
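A sketch of the naming rule just described, assuming the stated filter (alphanumerics and ‘_’ only) is applied to the owner_table stem; the helper name is illustrative and the exact rule DUL applies may differ in detail:

```python
import re

def output_name(owner, table, ext):
    """Build an output file name as described above: owner and table
    joined with '_', with everything except alphanumerics and '_'
    stripped to avoid shell variable substitution and similar
    side effects."""
    raw = f"{owner}_{table}"
    return re.sub(r"[^A-Za-z0-9_]", "", raw) + "." + ext

assert output_name("SCOTT", "EMP", "dmp") == "SCOTT_EMP.dmp"
# Strange characters such as '$' and '-' are dropped:
assert output_name("SCOTT", "T$WEIRD-NAME", "ctl") == "SCOTT_TWEIRDNAME.ctl"
```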

If the FILE parameter is set the generated names will be FILEnnn.ext. This possibility is a work around if the file system does not support long enough file names. (Old windows with the 8.3 filename format)

SOME DUL INTERNALS

REQUIRED INFORMATION

To unload table data from a database block the following information must be known:

  1. Column/Cluster Information: The number and type of the columns. For char or varchar columns the maximum length as well. The number of cluster columns and the table number in the cluster. This information can be supplied in the unload statement or it can be taken from the previously unloaded USER$, OBJ$, TAB$ and COL$.
  2. Segment/Extent information: When unloading a table the extent table in the data segment header block is used to locate all data blocks. The location of this segment header block (file number and block number) is taken from the data dictionary or can be specified in the unload statement. If the segment header is not correct/available then another method must be used. DUL can build its own extent map by scanning the whole database. (in a separate run of DUL with the scan database statement.)
BINARY HEADERS

C-Structs in block headers are not copied directly, they are retrieved with specialized functions. All offsets of structure members are programmed into DUL. This approach makes it possible to cross-unload. (Unload an MVS created data file on an HP) Apart from byte order only four layout types have been found so far.

  1. Vax VMS and Netware : No alignment padding between structure members.
  2. Korean Ticom Unix machines : 16 bit alignment of structure members.
  3. MS/DOS 16 bit alignment and 16 bit wordsize.
  4. Rest of the world (Including Alpha VMS) structure member alignment on member size.
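A toy offset calculator shows how the same member list lands on different offsets under the three padding rules above (an illustration of the alignment idea only, not DUL's actual code):

```python
def member_offsets(sizes, rule):
    """Compute struct member offsets for a list of member sizes under
    a padding rule: 'none' (no padding, Vax VMS / Netware style),
    'align16' (16-bit alignment, Ticom style), or 'natural'
    (each member aligned on its own size, most other platforms)."""
    offsets, pos = [], 0
    for size in sizes:
        if rule == "none":
            align = 1
        elif rule == "align16":
            align = min(size, 2)
        else:  # natural alignment on member size
            align = size
        pos = (pos + align - 1) // align * align  # round up to alignment
        offsets.append(pos)
        pos += size
    return offsets

members = [1, 4, 2]  # e.g. a 1-byte, a 4-byte and a 2-byte member
assert member_offsets(members, "none") == [0, 1, 5]
assert member_offsets(members, "align16") == [0, 2, 6]
assert member_offsets(members, "natural") == [0, 4, 8]
```

Reading the same header bytes with the wrong rule shifts every member after the first padded one, which is why osd_c_struct_alignment must match the platform that created the data file.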
MACHINE DEPENDENCIES

Machine dependencies (of the database) are configurable with parameters:

  • Order of bytes in a word (big/little endian).
  • Number of bits for the low part of the FILE# in a DBA (Block Address).
  • Alignment of members in a C-struct.
  • Number of blocks or bytes before the oracle file header block.
  • Size of a word used in the segment header structure.
UNLOADING THE DATA DICTIONARY

DUL can use the data dictionary of the database to be unloaded if the files for it exist and are uncorrupted. For the data dictionary to be used, internal tables must be unloaded first to external files: (USER$, OBJ$, TAB$ and COL$). The bootstrap command will find and unload the required tables.

DDL ( DUL DESCRIPTION LANGUAGE ) SPECIFICATION

[ ALTER SESSION ] SET init.dul parameter =  value ;
     Most parameters can be changed on the fly. 

BOOTSTRAP [LOCATE | GENERATE | COMPLETE
           | UNLOAD   Bootstrap$ segment header block address ];
     Bootstraps the data dictionary. Default is COMPLETE.
     LOCATE finds and unloads the bootstrap$ table.
     GENERATE builds a ddl file based on information in the cache.
     COMPLETE is in fact LOCATE, followed by GENERATE (two times)

COMMIT;
     Writes the changed block to the data file.

CREATE BLOCK INDEX  index_name  ON  device ;

A block index contains the addresses of valid oracle blocks found in a corrupt file system. Useful to merge multiple disk images or to unload from corrupted file systems. This is only useful in extreme file system corruption scenarios.

DESCRIBE  owner_name  . table_name ;

DUMP [ TABLESPACE  tablespace_no ]
     [ FILE  file_no  ]
     [ BLOCK  block_no  ]
     [ LEVEL  level_no  ] ;
     Not a complete blockdump, mainly used for debugging.
     The block address is remembered.

EXTRACT  asm file name  to  output file name  ;
     Copies any ASM file from a disk group to the file system.
     (there was a problem with online redo logs; this needs more testing)

MERGE block_index INTO [  segment  ];

The merge command uses the information in the index file to locate possible data blocks. It looks for a combination of file number and object id; each candidate block is compared to the current block in the datafile. If the current block is bad, or has an older scn, the candidate will be written into the datafile. This is only useful in extreme file system corruption scenarios.

REM any_text_you_like_till_End_Of_Line : comment
REM  NOT allowed inside ddl statements. ( To avoid a two layer lexical scan).

ROLLBACK; # Cancels the UPDATE statements.

SHOW     DBA  dba ;                # dba -> file_no block_no calculator
	   | DBA  rfile_no block_no ;  # file_no block_no -> dba calculator
	   | SIZES ;                   # show some size of important structs
	   | PARAMETER;                # shows the values of all parameters
	   | LOBINFO;                  # lob indexes found with SCAN DATABASE
       | DATAFILES;                # summary of configured datafiles
       | ASM DISKS;                # summary of configured asm disks
       | ASM FILES;                # summary of configured datafiles on asm
       | ASM FILE  cid      # extent information for asm file

UNEXP [TABLE] [  owner  . ]  table name 
       (  column list  ) [ DIRECT ]
       DUMP FILE  dump file name
       FROM  begin offset  [ UNTIL  end offset  ]
       [ MINIMUM  minimal number of columns  COLUMNS ] ;

       To unload data from a corrupted exp dump file. No special setup
       or configuration is required, just the compatible parameter.
       The start offset should be where a row actually begins.

UNPUMP
       To unload data from a corrupted expdp (datapump) dump file.
       This is still work in progress; the basic commands work,
       but it is rather complex to use. Contact me if this is needed.

UNLOAD DATABASE;

UNLOAD USER user_name;

UNLOAD [TABLE]  [  schema_name . ]  table_name 
              [ PARTITION(  partition_name ) ]
              [ SUBPARTITION(  sub_partition_name ) ]
              [ (  column_definitions ) ]
              [  cluster_clause  ]
              [  storage_clause  ] ;

UNLOAD EXTENT  table_name 
              [ (  column_definitions  ) ]
              [ TABLESPACE  tablespace_no  ] 
              FILE  extent_start_file_number
              BLOCK extent_start_block_number 
              BLOCKS  extent_size_in oracle_blocks ;

UNLOAD LOB SEGMENT FOR [  schema_name . ]  table_name   [ (  column name  ) ] ;

UNLOAD LOB SEGMENT STORAGE ( SEGOBJNO data obj#) ;

UPDATE [ block_address ] SET UB1|UB2|UB4 @ offset_in_block = new_value ;
UPDATE [ block_address ] SET  block element name  = new_value ;
        Now and then we can repair something.
	Patches the current block and dumps it.
	You can issue multiple UPDATE commands.
	Block is not written yet, use COMMIT to write. 

storage_clause ::=
        STORAGE ( storage_specification  [ more_storage_specs ] )

storage_specification ::=
        OBJNO object_id_number 
|       TABNO cluster_table_number
|       SEGOBJNO cluster/data_object_number       /* v7/v8 style data block id */
|       FILE  data_segment_header_file_number     /* v6 style data block id */
        BLOCK data_segment_header_block_number )   
|       any_normal_storage_specification_but_silently_ignored

SCAN DATABASE;

Scans all blocks of all data files. Two or three files are generated:

  1. SEG.dat information of found segment headers (index/cluster/table): (object id, file number, and block number).
  2. EXT.dat information of contiguous table/cluster data blocks. (object id(V7), file and block number of segment header (V6), file number and block number of first block, number of blocks, number of tables)
  3. SCANNEDLOBPAGE.dat: information for each lob data block. This file (optional, only if init.dul:SCAN_DATABASE_SCANS_LOB_SEGMENTS=TRUE) can possibly be huge, and the required memory size can be problematic as well. The purpose is twofold: 1: to possibly work around corrupt lob indexes during unload table; 2: to unload lob segments (for deleted lobs, or lob segments without a lob index or parent table). Meaning of the fields in SCANNEDLOBPAGE.dat: (segobj#, lobid, fat_page_no, version(wrap, base), ts#, file#, block#)
SCAN DUMP FILE  dump file name
        [ FROM  begin offset  ]
        [ UNTIL  end offset  ];

        Scans an export dump file to produce the
        create/insert statements and the offsets in the dump file.

SCAN LOB SEGMENT    storage clause ;
SCAN LOB SEGMENT FOR  table name  [.  column name] ;
        Scans the lob segment to produce LOBPAGE.dat information,
        but then for this segment only. Probably quicker and
        smaller. For partitioned objects use scan database.

SCAN TABLES;
        Uses SEG.dat and EXT.dat as input.
        Scans all tables in all data segments (a header block and at least one
        matching extent with at least 1 table).

SCAN EXTENTS;
        Uses SEG.dat and EXT.dat as input.
        All extents for which no corresponding segment header has been found.
        (Only useful if a tablespace is not complete, or a segment header
        is corrupt).

EXIT QUIT and EOF all cause DUL to terminate.

DDL ( DUL DESCRIPTION LANGUAGE ) DESCRIPTION

Rules for UNLOAD EXTENT and UNLOAD TABLE:

Extent Map

UNLOAD TABLE requires an extent map. In 99.99% of the cases the extent map in the segment header is available. In the rare 0.01% where the segment header is lost, an extent map can be built with the scan database command. The self-built extent map will ONLY be used during an unload if the parameter USE_SCANNED_EXTENT_MAP is set to TRUE.

All data blocks have some ID of the segment they belong to. But there is a fundamental difference between V6 and V7. Data blocks created by Oracle version 6 have the address of the segment header block. Data blocks created by Oracle7 have the segment object id in the header.

Column Specification

The column definitions must be specified in the order the columns are stored in the segment, that is ordered by col$.segcol#. This is not necessarily the same order as the columns were specified in the create table statement. Cluster columns are moved to the front, longs to the end. Columns added to the table with the alter table command are always stored last.

Unloading a single extent

UNLOAD EXTENT can be used to unload 1 or more adjacent blocks. The extent to be unloaded must be specified with the STORAGE clause: To specify a single extent use: STORAGE ( EXTENTS( FILE fno BLOCK bno BLOCKS #blocks) ) (FILE and BLOCK specify the first block, BLOCKS the size of the extent)

DUL specific column types

There are two extra DUL specific data types:

  1. IGNORE: the column will be skipped as if it was not there at all.
  2. UNKNOWN: a heuristic guess will be made for each column.

In SQL*Loader mode there are even more DUL specific data types:

  1. HEXRAW: column is HEX dumped.
  2. LOBINFO: show some information from LOB locators.
  3. BINARY NUMBER: Machine word as used in a LOB index.

Identifying USER$, OBJ$, TAB$ and COL$

DUL uses the same bootstrap procedure as the rdbms. That is, it uses the root dba from the system datafile header to locate the bootstrap$ table. Depending on the version, this root dba is either the location of the compatibility segment containing the bootstrap$ address or, for the newer versions, the address of the bootstrap$ table itself. The bootstrap$ table is unloaded and its contents are parsed to find the first four tables (USER$, OBJ$, TAB$ and COL$). The other tables are unloaded based on information in these first four.

DESCRIPTION OF SCAN COMMANDS

SCAN TABLES and SCAN EXTENTS scan for the same information and produce similar output. ALL columns of ALL rows are inspected. For each column the following statistics are gathered:

  • How often the column is seen in a data block.
  • The maximum internal column length.
  • How often the column IS NULL.
  • How often the column consists of at least 75% printable ascii.
  • How often the column consists of 100% printable ascii.
  • How often the column is a valid oracle number.
  • How often the column is a nice number. (not many leading or trailing zeros)
  • How often the column is a valid date.
  • How often the column is a possible valid rowid.

These statistics are combined and a column type is suggested. Using this suggestion, five rows are unloaded to show the result. These statistics are dumped to two files (seen_tab.dat and seen_col.dat). There are SQL*Loader and SQL*Plus scripts available to automate a part of the identification process. (Currently known as the getlost option.)
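As an illustration only (DUL's real heuristics and thresholds are internal and not published), a type guesser driven by the statistics above might look like:

```python
def suggest_column_type(stats):
    """stats holds per-column percentages as gathered during the scan,
    keyed after the columns of seen_col.dat.  The thresholds here are
    invented for illustration; DUL's actual rules are internal."""
    if stats["date_pct"] == 100:
        return "DATE"
    if stats["rowid_pct"] == 100:
        return "ROWID"
    # all values are valid numbers and most look "nice" -> NUMBER
    if stats["num_pct"] == 100 and stats["nice_num_pct"] >= 75:
        return "NUMBER"
    if stats["ascii100_pct"] == 100:
        return "CHAR"
    return "UNKNOWN"

# e.g. a column that is 100% valid dates
print(suggest_column_type({"date_pct": 100, "rowid_pct": 0, "num_pct": 0,
                           "nice_num_pct": 0, "ascii100_pct": 0}))  # DATE
```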

DESCRIBE

There is a describe command. It will show the dictionary information for the table, available in DUL’s dictionary cache.

DUL STARTUP SEQUENCE

During startup DUL goes through the following steps:

  • the parameter file “init.dul” is processed.
  • the DUL control file (default “control.dul”) is scanned.
  • Try to load dumps of the USER$, OBJ$, TAB$ and COL$ if available into DUL’s data dictionary cache.
  • Try to load seg.dat and ext.dat.
  • Accept DDL-statements or run the DDL script specified as first arg.

DUL parameters to be specified in init.dul:

ALLOW_TRAILER_MISMATCH
BOOLEAN
Strongly discouraged to use; will seldom produce more rows. Use only if you fully understand what it means and why you want it. Skips the check for the correct block trailer. The blocks failing this test are split or corrupt. But it saves you the trouble of patching some blocks.
ALLOW_DBA_MISMATCH
BOOLEAN
Strongly discouraged to use; will seldom produce more rows. Use only if you fully understand what it means and why you want it. Skips the check for the correct block address. The blocks failing this test are probably corrupt. But it saves you the trouble of patching some blocks.
ALLOW_OTHER_OBJNO
BOOLEAN
If your dictionary is older than your datafiles then the data object id’s can differ for truncated tables. With this parameter set to true it will issue a warning but use the value from segment header. All other blocks are fully checked. This is for special cases only.
ASCII2EBCDIC
BOOLEAN
Must (var)char fields be translated from EBCDIC to ASCII. (For unloading an MVS database on an ASCII host)
BUFFER
NUMBER (bytes)
Row output buffer size used in both export and SQL*Loader mode. Each row is first stored in this buffer. Only complete rows without errors are written to the output file.
COMPATIBLE
NUMBER
Database version; valid values are 6, 7, 8 or 9. This parameter must be specified.
CONTROL_FILE
TEXT
Name of the DUL control file (default: “control.dul”).
DB_BLOCK_SIZE
NUMBER
Oracle block size in bytes (Maximum 32 K)
DC_COLUMNS
NUMBER
DC_OBJECTS
NUMBER
DC_TABLES
NUMBER
DC_USERS
NUMBER
Sizes of dul dictionary caches. If one of these is too low the cache will be automatically resized.
EXPORT_MODE
BOOLEAN
EXP-like output mode or SQL*Loader format.
FILE
TEXT
Base for (dump or data) file name generation. Use this on 8.3 DOS like file systems
FILE_SIZE_IN_MB
NUMBER (Megabytes)
Maximum dump file size. Dump files are split into multiple parts. Each file has a complete header and can be loaded individually.
LDR_ENCLOSE_CHAR
TEXT
The character to enclose fields in SQL*Loader mode.
LDR_PHYS_REC_SIZE
NUMBER
Physical record size for the generated loader datafile.
LDR_PHYS_REC_SIZE = 0 No fixed records, each record is terminated with a newline.
LDR_PHYS_REC_SIZE > 2: Fixed record size.
MAX_OPEN_FILES
Maximum # of database files that are concurrently kept open at the OS level.
OSD_BIG_ENDIAN_FLAG
Byte order in machine word. Big Endian is also known as MSB first. DUL sets the default according to the machine it is running on. For an explanation of why this is called Big Endian, you should read Gulliver's Travels.
OSD_DBA_FILE_BITS
File Number Size in DBA in bits. Or to be more precise the size of the low order part of the file number.
OSD_FILE_LEADER_SIZE
bytes/blocks added before the real oracle file header block
OSD_C_STRUCT_ALIGNMENT
C Structure member alignment (0,16 or 32). The default of 32 is correct for most ports.
OSD_WORD_SIZE
Size of a machine word always 32, except for MS/DOS(16)
PARSE_HEX_ESCAPES
Boolean default FALSE
Use \\xhh hex escape sequences in strings while parsing. If set to true then strange characters can be specified using escape sequences. This feature is also for specifying multi-byte characters.
USE_SCANNED_EXTENT_MAP
BOOLEAN
Use the scanned extent map in ext.dat when unloading a table. The normal algorithm uses the extent map in the segment header. This parameter is only useful if some segment headers are missing or incorrect.
WARN_RECREATE_FILES
BOOLEAN (TRUE)
Set to FALSE to suppress the warning message if an existing file is overwritten.
WRITABLE_DATAFILES
BOOLEAN (FALSE)
Normal use of DUL will only read the database files. However the UPDATE and the SCAN RAW DEVICE will write as well. The parameter is there to prevent accidental damage.

SAMPLE init.dul :

# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40

# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_file_leader_size = 1

# database parameters
db_block_size = 8k

# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81

Configuring the port dependent parameters

Starting from rdbms version 10G osd parameters are easy to configure. Typically all parameters can be used at their defaults. The only one that might need attention is osd_big_endian_flag, when doing a cross platform unload, where the original database platform is different from the current machine. If osd_big_endian_flag is set incorrectly, it is detected at startup, when doing file header inspection.

Collection of known Parameters

For pre-10G databases there is a list of known parameters in the osd wiki page list of osd (Operating System Dependent) parameters for almost every platform. If your platform is not in the list, you can use the suggestions below to determine the parameters. (And then please inform me so I can add them to the list.)

osd_big_endian_flag

big endian or little endian (byte order in machine words): HP, SUN and mainframes are generally big endian: OSD_BIG_ENDIAN_FLAG = TRUE. DEC and Intel platforms are little endian: OSD_BIG_ENDIAN_FLAG = FALSE. The default is correct for the platform where DUL is running on.

There is no standard trick for this, the following might work on a unix system:

  echo dul | od -x
  If the output is like:
     0000000 6475 6c0a
     0000004
  You are on a big endian machine (OSD_BIG_ENDIAN_FLAG=TRUE).

  If you see:
     0000000 7564 0a6c
     0000004
  This is a little endian machine (OSD_BIG_ENDIAN_FLAG=FALSE).
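The od trick above can also be mirrored in a few lines of Python: pack the bytes of "dul\n" into one 32-bit machine word and compare the native byte order against forced big-endian. This is only a convenience check, not part of DUL:

```python
import struct
import sys

word_native = struct.unpack("=I", b"dul\n")[0]   # native byte order
word_big    = struct.unpack(">I", b"dul\n")[0]   # forced big-endian

big_endian = word_native == word_big
print("OSD_BIG_ENDIAN_FLAG =", "TRUE" if big_endian else "FALSE")
```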
osd_dba_file_bits

The number of bits in a dba used for the low order part of file number. Perform the following query:

  SQL> select dump(chartorowid('0.0.1')) from dual;
  
  Typ=69 Len=6: 8,0,0,0,0,0    ->       osd_dba_file_bits =  5 (SCO)
  Typ=69 Len=6: 4,0,0,0,0,0    ->       osd_dba_file_bits =  6 (Sequent , HP)
  Typ=69 Len=6: 1,0,0,0,0,0    ->       osd_dba_file_bits =  8 (NCR,AIX)
  Typ=69 Len=6: 0,16,0,0,0,0   ->       osd_dba_file_bits = 12 (MVS)
  Typ=69 Len=10: 0,0,0,0,0,64,0,0,0,0      osd_dba_file_bits = 10 (Oracle8)       
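The parameter controls how a 32-bit DBA splits into a file number and a block number: the file number occupies the top osd_dba_file_bits bits, the block number fills the rest. A sketch of the split, using the usual Oracle8 value of 10 file bits (the helpers are illustrative, not DUL code):

```python
def dba_split(dba, osd_dba_file_bits=10):
    # File number sits in the high osd_dba_file_bits bits,
    # block number in the remaining low bits.
    block_bits = 32 - osd_dba_file_bits
    return dba >> block_bits, dba & ((1 << block_bits) - 1)

def dba_make(file_no, block_no, osd_dba_file_bits=10):
    return (file_no << (32 - osd_dba_file_bits)) | block_no

# file 1, block 377 -- the location probed for bootstrap$ in the
# sample session later in this document
dba = dba_make(1, 377)
print(hex(dba), dba_split(dba))   # 0x400179 (1, 377)
```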
OSD_C_STRUCT_ALIGNMENT

Structure layout in data file headers. 0: no padding between members in a C-struct (VAX/VMS only). 16: some Korean Ticom machines and MS/DOS. 32: structure members are member-size aligned (all others, including ALPHA/VMS). Check the following query:

SELECT * FROM v$type_size
WHERE type IN ( 'KCBH', 'KTNO', 'KCBH', 'KTBBH', 'KTBIT', 'KDBH'
              , 'KTECT', 'KTETB', 'KTSHC') ;

In general osd_c_struct_alignment = 32 and the following output is expected:

COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              24
KTB      KTBBH    TRANSACTION FIXED HEADER                 48
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           44
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            8

8 rows selected.

For VAX/VMS and Netware ONLY osd_c_struct_alignment = 0 and this output is expected:

COMPONEN TYPE     DESCRIPTION                      SIZE
-------- -------- -------------------------------- ----------
K        KTNO     TABLE NUMBER IN CLUSTER                   1
KCB      KCBH     BLOCK COMMON HEADER                      20
KTB      KTBIT    TRANSACTION VARIABLE HEADER              23
KTB      KTBBH    TRANSACTION FIXED HEADER                 42
KDB      KDBH     DATA HEADER                              14
KTE      KTECT    EXTENT CONTROL                           39
KTE      KTETB    EXTENT TABLE                              8
KTS      KTSHC    SEGMENT HEADER                            7

8 rows selected.

If there is a different list this will require some major hacking and sniffing and possibly a major change to DUL. (Email Bernard.van.Duijnen@oracle.com)

osd_file_leader_size

Number of blocks/bytes before the oracle file header. Unix datafiles have an extra leading block (file size, block size, magic number). A large number (> 100) is seen as a byte offset; a small number is seen as a number of oracle blocks.

Unix    :       osd_file_leader_size = 1
Vms     :       osd_file_leader_size = 0
Desktop :       osd_file_leader_size = 1 (or 512 for old personal oracle)
Others  :       Unknown ( Use Andre Bakker's famous PATCH utility to find out)
                An Oracle7 file header block starts with the pattern 0X0B010000.

You can add an additional byte offset in control.dul in the optional third field (for instance for AIX or DEC UNIX data files on raw device)
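In the spirit of the PATCH approach above, a rough sketch that searches a raw file image for the Oracle7 header pattern 0x0B010000 and derives the leader size from its position. Illustrative only; arbitrary data may match the pattern by accident, and newer versions use different headers:

```python
ORACLE7_MAGIC = bytes([0x0B, 0x01, 0x00, 0x00])

def guess_file_leader(image, block_size):
    """Return (leading_blocks, extra_bytes) before the oracle file
    header, or None if the pattern is not found in the image."""
    pos = image.find(ORACLE7_MAGIC)
    if pos < 0:
        return None
    return pos // block_size, pos % block_size

# synthetic image: one 2048-byte leading block, then the header pattern
image = b"\x00" * 2048 + ORACLE7_MAGIC + b"\x00" * 100
print(guess_file_leader(image, 2048))   # (1, 0)
```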

Control file syntax specification

A control file (default name “control.dul”) is used to specify asm disks, block indexes and the data file names. The format of the control file has been extended.

Currently there are three types of specifications in the DUL control file. Each entry on a separate line. The asm disks must precede the asm files.

  control_file_line ::= asm_disk_spec | file_piece_spec | block_index_spec

If COMPATIBLE is 10 or higher you can also specify asm disks. It is generally sufficient to specify the device name. All properties are automatically retrieved by header inspection. The full syntax is only needed when header inspection is impossible, that is for disks with corrupt headers. The syntax is:

DISK  device name [  disk group options  ]

 disk group option  ::= GROUP  disk group name 
                           | DISK_NO  disk number in group 
                           | F1B1  File1 Block1 location

A block index is a way to access oracle blocks on corrupt file systems. In general a corrupt file system is not wiped out; it is not empty. Due to the specific layout of oracle blocks it is possible to recognize data blocks and store their location in the block index. See also the create block index command. A block_index_name is a normal identifier; it is used to construct a unique file name.

BLOCK INDEX  block_index_name

Each entry can contain a part of a datafile. The smallest unit is a single data block. This way it is possible to split datafiles that are too big for DUL into parts, where each part is smaller than 2GB.

In general it is sufficient to specify the file name. Even for a single block. If compatible is 10 or higher the file numbers and the tablespace numbers will be read from the file header.

If the specified details are different from the file header DUL will give a warning but use your specification. This is to be able to unload files with a corrupted header block. For debugging it is possible to dump the file header.

The optional extra leader offset is an extra byte offset that will be added to all lseek() operations for that datafile. This makes it possible to skip over the extra 4k block for some AIX raw devices, or the extra 64K on Tru64 raw devices.

  file_piece_spec ::= 
         [ [ tablespace_no ] relative_file_number]data_file_name
         [ optional extra leader offset ]
         [ startblock block_no ]
         [ endblock block_no ]

Examples

   # AIX version 7 example with one file on raw device
   1 /usr/oracle/dbs/system.dbf
   8 /dev/rdsk/data.dbf 4096
   # Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
   0  1 /fs1/oradata/PMS/system.dbf
   1  2 /tmp/huge_file_part1 startblock 1 endblock 1000000
   1  2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
   1  2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000
# ASM disks for two disk groups
disk /media/maxtor/asm/dgn1
disk /media/maxtor/asm/dgn2
disk /media/maxtor/asm/dgn3
disk /media/maxtor/asm/dgn4
disk /media/maxtor/asm/dgodd

# system datafile in the first asm disk group
+DGN/db102/datafile/system.257.621616979

# users datafile in a different disk group
+DGODD/db102/datafile/users.257.621616683

# a so called big file tablespace, use 1024 for the file#
 8 1024 /home/oracle/v102/dbs/bigfilets

# Or let DUL find out itself from the header
 /home/oracle/v102/dbs/bigfilets

# one tablespace with a different block size
/home/oracle/v102/dbs/ts16k.dbf block_size 16k

# or let DUL find out by header inspection
/home/oracle/v102/dbs/ts16k.dbf

Sample unload session: data dictionary usable for DUL

    1. create a suitable “init.dul”
    2. create a control.dul
    sqlplus /nolog
    connect / as sysdba
    startup mount
    set trimspool on pagesize 0 linesize 256 feedback off
    column name format a200
    spool control.dul
    select ts#, rfile#, name from v$datafile;
    exit
edit the result
For Oracle7 a different query must be used, since the ts# and rfile# columns do not exist there:
   select name from v$datafile;
    3. start DUL and bootstrap;
$ dul

Data UnLoader 10.2.1.16 - Oracle Internal Only - on Thu Jun 28 11:37:24 2007
with 64-bit io functions

Copyright (c) 1994 2007 Bernard van Duijnen All rights reserved.

 Strictly Oracle Internal use Only
DUL> bootstrap;
Probing file = 1, block = 377
. unloading table                BOOTSTRAP$      57 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 57 entries loaded
Parsing Bootstrap$ contents
DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 10
 OBJ$: segobjno 18, file 1
 TAB$: segobjno 2, tabno 1, file 1
 COL$: segobjno 2, tabno 5, file 1
 USER$: segobjno 10, tabno 1, file 1
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table                      OBJ$   52275 rows unloaded
. unloading table                      TAB$    1943 rows unloaded
. unloading table                      COL$   59310 rows unloaded
. unloading table                     USER$      70 rows unloaded
Reading USER.dat 70 entries loaded
Reading OBJ.dat
 52275 entries loaded and sorted 52275 entries
Reading TAB.dat 1943 entries loaded
Reading COL.dat 59310 entries loaded and sorted 59310 entries
Reading BOOTSTRAP.dat 57 entries loaded
...
Some more messages for all the other TABLES
...
Database character set is WE8ISO8859P1
Database national character set is AL16UTF16
DUL> unload user SCOTT;
About to unload SCOTT's tables ...
. unloading table                       EMP      14 rows unloaded

Example unload session: data dictionary UNUSABLE for DUL

    1. create a suitable “init.dul” (See config guide)
    2. create a control.dul (see above)
    3. scan the database for segment headers and extents:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:10:16 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL> scan database;
data file 1 20480 blocks scanned
data file 4 7680 blocks scanned
data file 5 512 blocks scanned
DUL>quit
    4. Restart DUL and scan the found tables for column statistics; this creates a huge amount of output:
echo scan tables \; | dul > scan.out&

[ many lines here]


Object id 1601 table number 0
UNLOAD TABLE T1601_0 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER, C5 DATE
        , C6 NUMBER, C7 NUMBER, C8 NUMBER )
    STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));

Colno  Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
    1    14        3    0%   0%   0% 100% 100%   0%   0%
    2    14        6    0% 100% 100% 100%  14%   0%  21%
    3    14        9    0% 100% 100% 100%  14%   0%   0%
    4    14        3    7%   0%   0% 100% 100%   0%   0%
    5    14        7    0%   0%   0%   0%   0% 100%   0%
    6    14        3    0%   0%   0% 100% 100%   0%   0%
    7    14        2   71%   0%   0% 100% 100%   0%   0%
    8    14        2    0%   0%   0% 100% 100%   0%   0%

"7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"

"7499" "-0.000025253223" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00" "1600" "30+

0" "30"

"7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500" "30"

"7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" "" "20"

"7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250" "1400" "30"

[ many more lines here ]

This looks familiar, use the above information and your knowledge of the emp table to compose:

UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
        hiredate date, sal number, comm number, deptno number)
 STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
    5. use this statement to unload emp:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:46:33 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Loaded 350 segments
Loaded 204 extents
Extent map sorted
DUL> UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
DUL 2> hiredate date, sal number, comm number, deptno number)
DUL 3> STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
. unloading table                       EMP      14 rows unloaded
DUL>quit
 

Example unload session: Incorrect init.dul Parameters

WRONG osd_dba_file_bits size

This can generate output similar to below. Normally this should not happen, since you should create a demo database and check this setting via the query documented in the DUL HTML pages.

The mismatch in DBA’s is only in the file number (the first number in brackets). The second number, the block number, is correct.

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:40:33 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [4][2]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][3] DBA in block mismatch [4][3]
DUL: Warning: Bad cache layer header file#=1, block#=3

...........and etc..........

WRONG osd_file_leader_size

This may create output similar to below, but many other flavours are possible. In this case we are a fixed number of blocks off. The file number is correct; the difference in the block numbers is constant:

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:44:23 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.

DUL: Warning: Block[1][2] DBA in block mismatch [1][3]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][3] DBA in block mismatch [1][4]
DUL: Warning: Bad cache layer header file#=1, block#=3

...........and etc..........

WRONG osd_c_struct_alignment

This may generate output similar to the following:

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:46:10 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
. unloading table OBJ$

DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262145
OS error 2: No such file or directory

DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262146
OS error 2: No such file or directory

...........and etc..........

WRONG db_block_size

The following output was generated when db_block_size was set too small. The correct value was 4096, but it was set to 2048. Normally, the value for this parameter should be taken from the Oracle instance's init.ora file, in which case it will be correctly set.

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Thu Sep 4 12:38:25 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [513][1159680]
DUL: Warning: File=1, block 2: illegal block version 2
DUL: Warning: Block[1][2] Illegal block type[0]
DUL: Warning: Bad cache layer header file#=1, block#=2

DUL: Warning: Block[1][4] DBA in block mismatch [1][2]
DUL: Warning: File[1]Block[4]INCSEQ mismatch[90268!=0]
DUL: Warning: Bad cache layer header file#=1, block#=4

DUL: Warning: Block[1][6] DBA in block mismatch [1][3]
DUL: Warning: File[1]Block[6]INCSEQ mismatch[139591710!=86360346]
DUL: Warning: Bad cache layer header file#=1, block#=6

...........and etc..........

QUOTE MISSING

If you get the following error, it is caused by the data dictionary tables “USER$, OBJ$, TAB$ and COL$” not being correctly generated. To fix this error, simply delete all .dat and .ctl files created by dictv6.ddl or dictv7.ddl, and restart.

Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:49:30 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL: Error: Quote missing

Salvaging data from corrupt EXP dump files – UNEXP Tutorial

If you do not know anything about the structure of an EXP dump file, this can be difficult. Here is a quick explanation. Apart from the file header, a dump file has MARKERS that identify the various sections. In each table section there will be SQL statements. The most interesting part is the create table statement, followed by the insert into table statement. The insert statement is directly followed by the bind information (number of columns, and for each column its type, bind length and a little more). Then follow the actual rows. Each column is preceded by a two byte length, followed by the actual column data. There are several tricks possible for longer columns. The end of the column data is marked by the special length marker 0xFFFF. There is no marker for the beginning of a row. Resynching after a corruption is trial and error. Corruptions are generally not immediately detectable. The format is slightly different for DIRECT export, so you will have to use the DIRECT option for DIRECT exports. The offset to be specified is the beginning of a row. In general the first one is directly behind the bind array, but for optimal flexibility you can start anywhere in the row data.
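The two-byte-length encoding described above can be sketched like this. The byte order of the length field and the exact terminator conventions are assumptions for illustration; real dump files follow the source platform and export version:

```python
import struct

END_MARKER = 0xFFFF   # special length value; per the text it ends the column data

def read_row(buf, offset, ncols):
    """Read one conventional-export row: each column is a 2-byte length
    followed by that many bytes of data.  Returns (columns, next_offset),
    or (None, next_offset) when the end marker is hit."""
    cols = []
    for _ in range(ncols):
        (length,) = struct.unpack_from(">H", buf, offset)  # assumed big-endian
        offset += 2
        if length == END_MARKER:
            return None, offset
        cols.append(buf[offset:offset + length])
        offset += length
    return cols, offset

# synthetic data: two columns ("abc", "hi"), then the end marker
blob = b"\x00\x03abc\x00\x02hi\xff\xff"
print(read_row(blob, 0, 2))   # ([b'abc', b'hi'], 9)
```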

The first step is to scan the dump file to find the offsets and the sql statements. Each output line starts with the offset where the item is found.

DUL>  scan dump file expdat.dmp;
0: CSET: 1 (US7ASCII)                # Character set info from the header
3: SEAL EXPORT:V10.02.01             # the Seal - the exp version tag
20: DBA SYSTEM                       # exp done as SYSTEM
8461: CONNECT SCOTT                  # section for user SCOTT
8475: TABLE "EMP"
                                     # complete create table statement
8487: CREATE TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))  PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS

                                     # Insert statement
8829: INSERT INTO "EMP" ("EMPNO", "ENAME", "JOB", "MGR", "HIREDATE",
"SAL", "COMM", "DEPTNO") VALUES (:1, :2, :3, :4, :5, :6, :7, :8)

                                     # BIND information
8957: BIND information for 8 columns
 col[  1] type 2 max length 22
 col[  2] type 1 max length 10 cset 31 (WE8ISO8859P1) form 1
 col[  3] type 1 max length 9 cset 31 (WE8ISO8859P1) form 1
 col[  4] type 2 max length 22
 col[  5] type 12 max length 7
 col[  6] type 2 max length 22
 col[  7] type 2 max length 22
 col[  8] type 2 max length 22
Conventional export                  # Conventional means NOT DIRECT

9003: start of table data            # Here begins the first row

Now build an unexp statement from the create table statement and the direct/conventional information and the start of the column data.

UNEXP TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))
dump file expdat.dmp from 9003;

Unloaded 14 rows, end of table marker at 9670 # so we have our famous 14 rows

This builds the normal SQL*Loader file and a matching control file. In the output file one extra column is added, related to the status of the row: P means the row is Partial (some columns missing), R means Resynch (it is the first row after a resynchronisation), and O means Overlap (the previous row had errors, but the new row partly overlaps it).

Table Of Contents
~~~~~~~~~~~~~~~~~
1. Introduction
2. Using DUL
2.1 Create an appropriate init.dul file
2.2 Create the control.dul file
2.3 Unload the object information
2.4 Invoke DUL
2.5 Rebuild the database
3. How to rebuild object definitions that are stored in the data dictionary ?
4. How to unload data when the segment header block is corrupted ?
5. How to unload data when the file header block is corrupted ?
6. How to unload data without the system tablespace ?
7. Appendix A : Where to find the executables ?
8. References

1. Introduction
~~~~~~~~~~~~~~~

This document explains how to use DUL rather than giving a full
explanation of Bernard’s Data UnLoader capabilities.

This document is for internal use only and should not be given to customers
at any time; DUL should always be used by, or under the supervision of, a
support analyst.

DUL (Data UnLoader) is intended to retrieve data from the Oracle Database that
cannot be retrieved otherwise. This is not an alternative for the export
utility or SQL*Loader. The database may be corrupted, but an individual data
block used must be 100% correct. During all unloading, checks are made to
ensure that blocks are not corrupted and belong to the correct segment. If a
corrupted block is detected by DUL, an error message is printed in the loader
file and to standard output, but this does not terminate the unloading of
the next row or block.

2. Using DUL
~~~~~~~~~~~~

First you must retrieve the necessary information about the objects that
exist in the database; this information will be loaded into the DUL
dictionary to unload the database objects.

This information is retrieved from the USER$, OBJ$, TAB$ and COL$ tables that
were created at database creation time; they can be unloaded based on the
fact that object numbers are fixed for these tables due to the rigid nature
of sql.bsq. DUL can find the information in the system tablespace, therefore
the system tablespace datafile(s) must be included in the control file; if
these datafile(s) are not present, see chapter 6.
2.1 Create an appropriate “init.dul” file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

REM Platform specific parameters (NT)
REM A List of parameters for the most common platforms can be obtained from
REM
osd_big_endian_flag=false
osd_dba_file_bits=10
osd_c_struct_alignment=32
osd_file_leader_size=1
osd_word_size = 32

REM Sizes of dul dictionary caches. If one of these is too low startup will
REM fail.
dc_columns=2000000
dc_tables=10000
dc_objects=1000000
dc_users=400
dc_segments=100000

REM Location and filename of the control file, default value is control.dul
REM in the current directory
control_file = D:\Dul\control_orcl.dul

REM Database blocksize, can be found in the init<SID>.ora file or can be
REM retrieved by doing “show parameter %db_block_size%” in server manager
REM (svrmgr23/30/l). Change this parameter to the block size of the
REM crashed database.
db_block_size=4096

REM Can/must be specified when data is needed in export/import format.
REM This will create a file suitable for use by the oracle import utility,
REM although the generated file is completely different from a table mode
REM export generated by the EXP utility. It is a single table dump file
REM with only a create table structure statement and the table data.
REM Grants, storage clauses, triggers are not included in this dump file !
export_mode=true

REM Compatible parameter must be specified and can be either 6, 7 or 8
compatible=8

REM This parameter is optional and can be specified on platforms that do
REM not support long file names (e.g. 8.3 DOS) or when the file format that
REM DUL uses “owner_name.table_name.ext” is not acceptable. The dump files
REM will be something like dump001.ext, dump002.ext, etc in this case.
file = dump

A complete list can be obtained in the html section “DUL Parameters”,
although this init.dul file will work in most cases and contains all
accurate parameters to successfully complete the unloading.
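Getting osd_big_endian_flag wrong is a common reason for startup failures. One hedged sanity check is to read the two-byte block-size field from a copy of the datafile header in both byte orders and see which interpretation yields a plausible power-of-two block size. The field offset below is hypothetical (header layouts differ per version and platform); only the technique is the point:

```python
import struct

PLAUSIBLE_BLOCK_SIZES = {2048, 4096, 8192, 16384, 32768}
FIELD_OFFSET = 20  # hypothetical position of the block-size field

def guess_endianness(header: bytes):
    """Return ('little'|'big'|'unknown', block_size) by testing which
    byte order makes the block-size field plausible."""
    le = struct.unpack_from("<H", header, FIELD_OFFSET)[0]
    be = struct.unpack_from(">H", header, FIELD_OFFSET)[0]
    if le in PLAUSIBLE_BLOCK_SIZES and be not in PLAUSIBLE_BLOCK_SIZES:
        return "little", le
    if be in PLAUSIBLE_BLOCK_SIZES and le not in PLAUSIBLE_BLOCK_SIZES:
        return "big", be
    return "unknown", None

# demo with a synthetic header containing 8192 stored little-endian
hdr = bytearray(64)
struct.pack_into("<H", hdr, FIELD_OFFSET, 8192)
print(guess_endianness(bytes(hdr)))  # ('little', 8192)
```

A "little" answer suggests osd_big_endian_flag=false (as in the NT example above); "big" suggests true.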

2.2 Create the “control.dul” file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A good knowledge about the logical tablespace and physical datafile
structure is needed or you can do the following queries when the database
is mounted :

Oracle 6, 7
———–
> connect internal
> spool control.DUL
> select * from v$dbfile;
> spool off

Oracle 8
——–
> connect internal
> spool control.DUL
> select ts#, rfile#, name from v$datafile;
> spool off

Edit the spool file and change, if needed, the datafile location and strip
out unnecessary information like table headers, feedback lines, etc…
A sample control file looks something like this :

REM Oracle7 control file
1 D:\DUL\DATAFILE\SYS1ORCL.DBF
3 D:\DUL\DATAFILE\DAT1ORCL.DBF
7 D:\DUL\DATAFILE\USR1ORCL.DBF

REM Oracle8 control file
0 1 D:\DUL\DATAFILE\SYS1ORCL.DBF
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF
1 3 D:\DUL\DATAFILE\USR2ORCL.DBF
2 4 D:\DUL\DATAFILE\DAT1ORCL.DBF

Note : Each entry can contain a part of a datafile, this can be useful when
you need to split datafiles that are too big for DUL, so that each
part is smaller than for example 2GB. For example :

REM Oracle8 with a datafile split into multiple parts, each part is
REM smaller than 1GB !
0 1 D:\DUL\DATAFILE\SYS1ORCL.DBF
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 1 endblock 1000000
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 1000001 endblock 2000000
1 2 D:\DUL\DATAFILE\USR1ORCL.DBF startblock 2000001 endblock 2550000
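The startblock/endblock arithmetic above is easy to get wrong by hand. Here is a minimal sketch that emits such split entries, assuming 1-based inclusive block numbers as in the example (the path and numbers are just illustrations):

```python
def split_entries(ts_no, file_no, path, file_bytes, block_size,
                  max_bytes=1 << 30):
    """Yield Oracle8-style control.dul lines that split one datafile into
    parts no larger than max_bytes, using startblock/endblock ranges."""
    total_blocks = file_bytes // block_size
    blocks_per_part = max_bytes // block_size
    start = 1
    while start <= total_blocks:
        end = min(start + blocks_per_part - 1, total_blocks)
        yield f"{ts_no} {file_no} {path} startblock {start} endblock {end}"
        start = end + 1

# demo: a tiny 10-block "file" split into parts of at most 4 blocks
for line in split_entries(1, 2, r"D:\DUL\DATAFILE\USR1ORCL.DBF",
                          10 * 4096, 4096, max_bytes=4 * 4096):
    print(line)
```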
2.3 Unload the object information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the DUL utility with the appropriate ddl (Dul Description Language)
script. There are 3 scripts available to unload the USER$, OBJ$, TAB$ and
COL$ tables according to the database version.

Oracle6 :> dul8.exe dictv6.ddl
Oracle7 :> dul8.exe dictv7.ddl
Oracle8 :> dul8.exe dictv8.ddl

Data UnLoader: Release 8.0.5.3.0 – Internal Use Only – on Tue Jun 22 22:19:
Copyright (c) 1994/1999 Bernard van Duijnen All rights reserved.

Parameter altered
Session altered.
Parameter altered
Session altered.
Parameter altered
Session altered.
Parameter altered
Session altered.
. unloading table OBJ$ 2271 rows unloaded
. unloading table TAB$ 245 rows unloaded
. unloading table COL$ 10489 rows unloaded
. unloading table USER$ 22 rows unloaded
. unloading table TABPART$ 0 rows unloaded
. unloading table IND$ 274 rows unloaded
. unloading table ICOL$ 514 rows unloaded
. unloading table LOB$ 13 rows unloaded

Life is DUL without it

This will unload the data of the USER$, OBJ$, TAB$ and COL$ data dictionary
tables into SQL*Loader files; these cannot be manipulated into dump files
of the import format. The parameter export_mode = false is hardcoded into
the ddl scripts and cannot be changed to the value “true”, since this will
cause DUL to fail with the error:

. unloading table OBJ$
DUL: Error: Column “DATAOBJ#” actual size(2) greater than length in column
definition(1)
………….etc……………
2.4 Invoke DUL
~~~~~~~~~~~~~~

Start DUL in interactive mode, or prepare a script that contains all the
ddl commands to unload the necessary data from the database. I will
describe in this document the most used commands; this is not a complete
list of possible parameters that can be specified. A complete list can be
found in the section “DDL Description”.

DUL> unload database;
=> this will unload all tables of the entire database (including SYS’s tables)

DUL> unload user <username>;
=> this will unload all the tables owned by that particular user.

DUL> unload table <username.table_name>;
=> this will unload the specified table owned by that username

DUL> describe <owner_name.table_name>;
=> will show the table columns with their relative pointers to the
datafile(s) owned by the specified user.

DUL> scan database;
=> Scans all blocks of all data files.
Two files are generated:
1: seg.dat information of found segment headers (index/cluster/table)
(object id, file number, and block number).
2: ext.dat information of contiguous table/cluster data blocks.
(object id(V7), file and block number of segment header (V6),
file number and block number of first block,
number of blocks, number of tables)

DUL> scan tables;
=> Uses seg.dat and ext.dat as input.
Scans all tables in all data segments (a header block and at least one
matching extent with at least 1 table).
2.5 Rebuild the database
~~~~~~~~~~~~~~~~~~~~~~~~

Create the new database and use import or SQL*Loader to restore the data
retrieved by DUL. Note that only the data was unloaded; table structures,
indexes, grants, PL/SQL and triggers will no longer exist in the new
database. To obtain an exact copy of the database as before you will need
to rerun your creation scripts for the tables, indexes, PL/SQL, etc.

If you don’t have these scripts then you will need to perform the steps
described in section 3 of this document.
3. How to rebuild object definitions that are stored in the data dictionary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You want to rebuild PL/SQL (packages, procedures, functions or triggers),
grants, indexes, constraints or storage clauses (old table structure) with
DUL. This can be done but is a little bit tricky. You need to unload the
relevant data dictionary tables using DUL and then load these tables into a
healthy database; be sure to use a different user than SYS or SYSTEM.
Loading the data dictionary tables of the crashed database into the healthy
database dictionary could corrupt the healthy database as well.

Detailed explanation to retrieve for example pl/sql packages / procedures /
functions from a corrupted database :

1) Follow the steps explained in the “Using DUL” section and unload the data
dictionary table “source$”

2) Create a new user into a healthy database and specify the desired default
and temporary tablespace.

3) Grant connect, resource, imp_full_database to the new user.

4) Import/load the table “source$” into the newly created schema:

e.g.: imp80 userid=newuser/passw file=d:\dul\scott_emp.dmp
log=d:\dul\impemp.txt full=y

5) You can now query from the table <newuser.source$> to rebuild the pl/sql
procedures/functions from the corrupted database. Scripts can be found on
WebIv to generate such PL/SQL creation scripts.
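As a sketch of what such a generation script does, the rows of the reloaded source$ copy can be stitched back into source text. This assumes you have fetched them as (obj#, line, text) tuples, e.g. with a query like select obj#, line, source from newuser.source$; a real rebuild script would also prepend the appropriate CREATE OR REPLACE header:

```python
from collections import defaultdict

def rebuild_sources(rows):
    """Group SOURCE$-style (obj_no, line_no, text) tuples by object and
    concatenate each object's lines in line-number order."""
    by_obj = defaultdict(list)
    for obj_no, line_no, text in rows:
        by_obj[obj_no].append((line_no, text))
    return {obj: "".join(text for _, text in sorted(lines))
            for obj, lines in by_obj.items()}

# demo: rows may arrive out of order; line numbers restore the order
rows = [(10, 2, "begin null; end;\n"), (10, 1, "procedure p is\n")]
print(rebuild_sources(rows)[10])
```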

The same steps can be followed to recreate indexes, constraints, and storage
parameters, or to regrant privileges to the appropriate users. Please note
that you always need to use a script of some kind that can recreate the
objects and include all the features of the crashed database version. For
example : when the crashed database is of version 7.3.4 and you have several
bitmap indexes, if you would use a script that supports version 7.3.2 or
prior, then you won’t be able to recreate the bitmap indexes successfully !
4. How to unload data when the segment header block is corrupted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When DUL can’t retrieve data block information in the normal way, it can
scan the database to create its own segment/extent map. The procedure of
scanning the database is necessary to unload the data from the datafiles.
(To illustrate this example I copied an empty block on top of the segment
header block.)

1) Create an appropriate “init.dul” (see 2.1) and “control.dul” (see 2.2) file.

2) Unload the table. This will fail and indicate that there is a corruption in
the segment header block:

DUL> unload table scott.emp;
. unloading table EMP
DUL: Warning: Block is never used, block type is zero
DUL: Error: While checking tablespace 6 file 10 block 2
DUL: Error: While processing block ts#=6, file#=10, block#=2
DUL: Error: Could not read/parse segment header
0 rows unloaded

3) run the scan database command :

DUL> scan database;
tablespace 0, data file 1: 10239 blocks scanned
tablespace 6, data file 10: 2559 blocks scanned

4) Indicate to DUL that it should use its own generated extent map rather than
the segment header information.

DUL> alter session set use_scanned_extent_map = true;
Parameter altered
Session altered.
DUL> unload table scott.emp;
. unloading table EMP 14 rows unloaded
5. How to unload data when the datafile header block is corrupted
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A corruption in the datafile header block is always listed at the moment you
open the database. This is not like a segment header block corruption (see
point 4), where the database can be successfully opened and the corruption
is listed at the moment you query a table. DUL has no problems recovering
from such situations, although there are other alternatives for recovering
from this situation, like patching the datafile header block.

The error you will receive looks something like :
ORACLE instance started.
Total System Global Area 11739136 bytes
Fixed Size 49152 bytes
Variable Size 7421952 bytes
Database Buffers 4194304 bytes
Redo Buffers 73728 bytes
Database mounted.
ORA-01122: database file 10 failed verification check
ORA-01110: data file 10: ‘D:\DATA\TRGT\DATAFILES\JUR1TRGT.DBF’
ORA-01251: Unknown File Header Version read for file number 10
6. How to unload data without the system tablespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If datafiles are not available for the system tablespace the unload can
still continue, but the object information can’t be retrieved from the data
dictionary tables USER$, OBJ$, TAB$ and COL$. So owner names, table names
and column names will not be loaded into the DUL dictionary. Identifying
the tables can be an overwhelming task, and a good knowledge of the RDBMS
internals is needed here. First of all you need a good knowledge of your
application and its tables. Column types can be guessed by DUL, but table
and column names will be lost.

Any old system tablespace from the same database (may be weeks old) can be a
great help !

1) Create the “init.dul” file and the “control.dul” file as explained in
sections 2.1 and 2.2 above. In this case the control file will contain all
the datafiles from which you want to restore, but it does not require the
system tablespace information.

2) Then you invoke DUL and type the following command :

DUL> scan database;
data file 6 1280 blocks scanned

This will build the extent and segment map. The DUL command interpreter
will probably be terminated as well.

3) Reinvoke the DUL command interpreter and do the following :

Data UnLoader: Release 8.0.5.3.0 – Internal Use Only – on Tue Aug 03 13:33:

Copyright (c) 1994/1999 Oracle Corporation, The Netherlands. All rights res
Loaded 4 segments
Loaded 2 extents
Extent map sorted
DUL> alter session set use_scanned_extent_map = true;
DUL> scan tables; (or scan extents;)

Scanning tables with segment header

Oid 1078 fno 6 bno 2 table number 0

UNLOAD TABLE T_O1078 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN )
STORAGE ( TABNO 0 EXTENTS( FILE 6 BLOCK 2));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 4 2 0% 0% 0% 100% 100% 0% 0%
2 4 10 0% 100% 100% 100% 0% 0% 0%
3 4 8 0% 100% 100% 100% 0% 0% 50%
“10” “ACCOUNTING” “NEW YORK”
“20” “RESEARCH” “DALLAS”
“30” “SALES” “CHICAGO”
“40” “OPERATIONS” “BOSTON”

Oid 1080 fno 6 bno 12 table number 0

UNLOAD TABLE T_O1080 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER,
C5 DATE, C6 NUMBER, C7 NUMBER, C8 NUMBER )
STORAGE ( TABNO 0 EXTENTS( FILE 6 BLOCK 12));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 14 3 0% 0% 0% 100% 100% 0% 0%
2 14 6 0% 100% 100% 100% 0% 0% 21%
3 14 9 0% 100% 100% 100% 0% 0% 0%
4 14 3 7% 0% 0% 100% 100% 0% 0%
5 14 7 0% 0% 0% 0% 0% 100% 0%
6 14 3 0% 0% 0% 100% 100% 0% 0%
7 14 2 71% 0% 0% 100% 100% 0% 0%
8 14 2 0% 0% 0% 100% 100% 0% 0%
“7369” “SMITH” “CLERK” “7902” “17-DEC-1980 AD 00:00:00” “800” “” “20”
“7499” “ALLEN” “SALESMAN” “7698” “20-FEB-1981 AD 00:00:00” “1600” “300” “30”
“7521” “WARD” “SALESMAN” “7698” “22-FEB-1981 AD 00:00:00” “1250” “500” “30”
“7566” “JONES” “MANAGER” “7839” “02-APR-1981 AD 00:00:00” “2975” “” “20”
“7654” “MARTIN” “SALESMAN” “7698” “28-SEP-1981 AD 00:00:00” “1250” “1400” “30”

Note : it might be best to redirect the output to a logfile, since commands
like “scan tables” can produce a lot of output.
On Windows NT you can use the following command :
C:\> dul8 > c:\temp\scan_tables.txt
scan tables;
exit;

4) Identify the lost tables from the output of step 3; if you look carefully
at the output above you will notice that the unload syntax is already
given, but the table name will be of the format T_O<objectno> and the
column names will be of the format C<no>; datatypes will not be an exact
match of the datatypes as they were before.

Look especially for strings like “Oid 1078 fno 6 bno 2 table number 0” where:
oid = object id, will be used to unload the object
fno = (data)file number
bno = block number
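When scan tables reports many objects, it helps to inventory these lines mechanically. A small sketch that pulls (oid, fno, bno, table number) tuples out of saved scan output (e.g. the scan_tables.txt logfile mentioned above):

```python
import re

# matches lines such as "Oid 1078 fno 6 bno 2 table number 0"
LINE = re.compile(r"Oid (\d+) fno (\d+) bno (\d+) table number (\d+)")

def parse_scan_output(text):
    """Return a list of (oid, fno, bno, table_no) tuples found in text."""
    return [tuple(map(int, m.groups())) for m in LINE.finditer(text)]

sample = """Oid 1078 fno 6 bno 2 table number 0
Oid 1080 fno 6 bno 12 table number 0"""
print(parse_scan_output(sample))  # [(1078, 6, 2, 0), (1080, 6, 12, 0)]
```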

5) Unload the identified tables with the “unload table” command :

DUL> unload table dept (deptno number(2), dname varchar2(14),
loc varchar2(13)) storage (OBJNO 1078);
Unloading extent(s) of table DEPT 4 rows.
