Because enterprise database staff lack experience, or there is no DBA at all, it can happen that the database goes down and cannot be opened, and there is no backup. In that case there are generally two options:
a. force the database open;
b. use the Oracle DUL tool to extract the data.
DUL is generally used only when even a forced open fails, or when a tablespace file cannot be brought online and the data in that tablespace must be salvaged. Note that this tool requires an intact SYSTEM tablespace.
DUL is an Oracle-internal tool. Originally it had no restrictions; it now has a 30-day time limit, i.e. a downloaded copy stops working after 30 days of use. Remember that this tool exists only to salvage data: it extracts the data of the tables in the database, but objects other than tables — indexes, stored procedures, functions, triggers and the like — cannot be recovered with it. Even for the tables themselves the structure is not extracted exactly: for a column defined as id NUMBER(20) NOT NULL, the imported definition is only id NUMBER — the precision (20) and the NOT NULL constraint are lost. So it can only be called data salvage. The tool produces output in two formats: a .dmp file that can be imported directly with imp, and a SQL*Loader file that can only be loaded with sqlldr. The remaining objects can be obtained in two ways:
1. Obtain the schema definition (without data) from the application vendor.
2. Take all files except SYSTEM and UNDO offline, then force the database open. If that works, the schema definition can be exported:
exp system/password owner=appuser rows=n
Once the schema definition is available, first import the empty schema — the user's objects without any data — then disable the foreign keys and triggers, and only then import the data extracted by DUL. Note that for tables with large data volumes the indexes must be dropped first and recreated in parallel after the data has been loaded.
Of these two methods the second is preferable, because the database contains sequences: with the first method the sequence values at the vendor may be much smaller than those in the production database — we have run into exactly this situation. If the second method is impossible and only the first can be used, we may need to manually increase the sequence cache.
The following, taken from the Oracle documentation, describes DUL in detail.
DUL is a standalone C program that directly retrieves rows from tables in data files. The Oracle RDBMS software is NOT used at all. DUL does dirty reads; it assumes that every transaction is committed. Nor does it check/require that media recovery has been done.
DUL is intended to retrieve data that cannot be retrieved otherwise. It is NOT an alternative for EXP, SQL*Plus etc. It is meant to be a last resort, not for normal production usage.
Before you use DUL you must be aware that the rdbms has many hidden features to force a bad database open. Undocumented init.ora parameters and events can be used to skip roll forward, to disable rollback, disable certain SMON actions, advance the database scn and more.
The database can be corrupted, but an individual data block used must be 100% correct. During all unloading, checks are made to make sure that blocks are not corrupted and belong to the correct segment. If during a scan a bad block is encountered, an error message is printed in the loader file and to standard output. Unloading will continue with the next row or block.
DUL can and will only unload index/table/cluster data. It will NOT dump triggers, stored procedures, nor create sql scripts for tables or views. (But the data dictionary tables describing them can be unloaded.) The data will be unloaded in a format suitable for SQL*Loader or IMP. A matching control file for SQL*Loader is generated as well.
DUL can unload indices and index organized tables. Index unload is useful to determine how many rows a table should have or to identify the missing rows.
Cross-platform unloading is supported. The database can be copied from a different operating system than the DUL-host. (Databases/systems done so far: Sequent/ptx, Vax Vms, Alpha Vms, MVS, HP9000/8xx, IBM AIX, SCO Unix, Alpha OSF/1, Intel Windows NT.)
The configuration parameters within "init.dul" will have to be modified to match those of the original platform and O/S rather than the platform from which the unload is being done.
DUL will not dump, spin or hang no matter how badly corrupted the database is.
Full support for all database constructs: row chaining, row migration, hash/index clusters, longs, raws, rowids, dates, numbers, multiple free list groups, segment high water mark, NULLS, trailing NULL columns, unlimited extents, the new block layout of Oracle8, and partitioned tables.
Later additions are lobs, compressed indexes, and 9iR2 compressed tables. Varrays and ADTs (user defined objects) are partly supported in sql*loader mode.
ASM is fully supported; files can be extracted from an asm disk group. No mounted ASM instance is used, the disks are accessed directly. Non-default asm allocation unit sizes are supported.
Data can be recovered from export dump files with the unexp command suite. Some initial work has been done for unpump to support data pump files.
DUL should work with all versions starting with Oracle 6. DUL has been tested with versions from 6.0.26 up to 10.2. Even the old block header layout (pre 6.0.27.2) is supported.
DUL itself is essentially a single byte application. The command parser does not understand multi byte characters, but it is possible to unload any multi byte database. For all possible caveats there is a work around.
DUL can optionally convert to UTF8. This is for NCLOBS that are stored in UTF16.
Multi Level Security Labels of Trusted Oracle are not supported.
DUL can unload (long) raws. Nowadays there is a suitable format in SQL*Loader to preserve all long raws, so long raws and blobs can be unloaded in both modes.
Nested tables are not yet supported; if they are needed let me know and it will be added. Varrays and ADTs are supported, also those that are stored as a kernel lob. CLOBS and NCLOBS are supported both in SQL*Loader mode and in exp mode. BLOBS are best handled in exp mode; the generated hex format in SQL*Loader mode is not loaded correctly currently.
DUL can be ported to any operating system with an ANSI-C compiler. DUL has been ported to many UNIX variants, VMS and Windows NT. Currently all builds are done using gcc and a cross compiler environment on Linux.
A good knowledge of the Oracle RDBMS internals is a prerequisite to be able to use DUL successfully. For instance the Data Server Internals (DSI) courses give a good foundation. There is even a module dedicated to DUL.
There are two configuration files for DUL. "init.dul" contains all configuration parameters (size of caches, details of header layout, oracle block size, output file format). In the control file, "control.dul", the database data file names and the asm disks can be specified.
The Oracle data dictionary is available if the data files which made up the SYSTEM tablespace are available and usable. The number which Oracle assigned to these files and the name you have given them, which does not have to be the original name which Oracle knew, must be included in the "control.dul" file. You also need to include the file numbers and names of any files from other tablespaces for which you wish to eventually unload TABLES and their data. The lack of inclusion of these files will not affect the data dictionary unload step but it will affect later TABLE unloading.
Steps to follow:
If data files are not available for the SYSTEM tablespace the unload can still continue, but USER, TABLE and COLUMN names will not be known. Identifying the tables can be an overwhelming task, but it can be (and has been) done. You need in-depth knowledge about your application and the application tables. Column types can be guessed by DUL, but table and column names are lost. Any old SYSTEM tablespace from the same database, even weeks old, can be of great help! Most of the information that DUL uses does not change (only the dataobj# changes, during truncate or index rebuild).
Steps to follow:
To ease the hunt for the lost tables: the scanned statistical information in seen_tab.dat and seen_col.dat can be loaded into a fresh database. If you recreate the tables (hopefully the create table scripts are still available) then structure information of a "lost" table can be matched to the "seen" tables' scanned information with two SQL*Plus scripts (fill.sql and getlost.sql).
DUL uses an SQL-like command interface. There are DDL statements to unload extents, tables, users or the entire database. Data dictionary information required can be specified in the ddl statements or taken from the previously unloaded data dictionary. The following three statements will unload the DEPT table. The most common form is used if the data dictionary and the extent map are available:
UNLOAD TABLE scott.dept;
All relevant information can be specified in the statement as well:
REM Columns with type in the correct order
REM The segment header location in the storage clause
UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));
Oracle version 6:
REM version 6 data blocks have segment header location in each block
ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
STORAGE( EXTENTS ( FILE 1 BLOCK 1205 ));
Oracle7:
REM Oracle7 data blocks have object id in each block
ALTER SESSION SET USE_SCANNED_EXTENT_MAP = TRUE;
UNLOAD TABLE dept( deptno NUMBER, dname CHAR, loc CHAR)
STORAGE( OBJNO 1501 );
Only complete good rows are written to the output file. For this each row is buffered. The size of the buffer can be changed with the init.dul parameter BUFFER. There is no speed gained with a high BUFFER parameter; it should just be big enough to hold a complete row. Incomplete or bad rows are not written out. The FILE_SIZE_IN_MB init.dul parameter can be used to split the output (at a proper boundary) into multiple files; each file can be loaded individually.
There are three different modes of output format.
The generated file is completely different from a table mode export generated by EXP! The file is the minimal format that IMP can load. For each table a separate IMP loadable file will be generated. It is a single table dump file. It contains a header, an insert table statement and the table data. Table grants, storage clauses, or triggers will not be included. A minimal create table statement is included (no storage clause, just column names and types without precision). The character set indication in the generated header is V6 style. It is set to mean ASCII based character set.
To enable export mode, set the init.dul parameter EXPORT_MODE to TRUE.
As the generated pseudo dump file does not contain character set information, set NLS_LANG to match that of the original database. In export mode no character set conversion is done.
The data in the generated file is either not converted at all, or everything is converted to UTF8 if LDR_OUTPUT_IN_UTF8 is set. This setting is required in mixed character set environments, as the contents of a data file must have a single character set.
When loading the data you probably need to set NLS_LANG to match that of the original database to prevent unwanted character set conversion.
For both SQL*Loader output formats the columns will be space separated and enclosed in double quotes. Any double quote in the data will be doubled. SQL*Loader recognizes this and will load only one. The character used to enclose the columns can be changed from double quote to any character you like with the init.dul parameter LDR_ENCLOSE_CHAR.
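The quoting rule described above is easy to reproduce. Here is a minimal sketch in Python (not part of DUL; the function name is made up) of how a field is enclosed and embedded enclose characters are doubled:

```python
def enclose_field(value: str, enclose_char: str = '"') -> str:
    """Enclose a field the way the SQL*Loader output format requires:
    wrap it in the enclose character and double every embedded
    occurrence so the loader reads it back as a single character."""
    doubled = value.replace(enclose_char, enclose_char * 2)
    return f"{enclose_char}{doubled}{enclose_char}"

# A value containing a double quote has that quote doubled inside the enclosures.
print(enclose_field('6" pipe'))   # "6"" pipe"
```

Changing `enclose_char` here corresponds to setting LDR_ENCLOSE_CHAR in init.dul.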
There are two styles of physical record organization:
Nothing special is done in stream mode; a newline is printed after each record. This is a compact format and can be used if the data does not contain newline characters. To enable stream mode set LDR_PHYS_REC_SIZE = 0 in init.dul.
This mode is essential if the data can contain newlines. One logical record, one complete row, can be composed of multiple physical records. The default record length is 81; this fits nicely on the screen of a VT220. The physical record size can be specified with LDR_PHYS_REC_SIZE in init.dul.
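The relationship between one logical record and its fixed-size physical records can be sketched as follows (an illustration of the idea only, not DUL's actual code; real continuation handling may differ):

```python
def physical_records(logical_record: str, phys_rec_size: int = 81) -> list:
    """Split one logical record (one complete row) into fixed-size
    physical records, padding the last one out to the record size.
    Illustrative sketch of the LDR_PHYS_REC_SIZE > 2 mode."""
    chunks = [logical_record[i:i + phys_rec_size]
              for i in range(0, len(logical_record), phys_rec_size)]
    chunks[-1] = chunks[-1].ljust(phys_rec_size)
    return chunks

recs = physical_records("x" * 200, 81)
print(len(recs), [len(r) for r in recs])  # 3 [81, 81, 81]
```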
The file names generated are: owner_name_table_name.ext. The extension is ".dmp" for IMP loadable files; ".dat" and ".ctl" are used for the SQL*Loader datafile and the control file. To prevent variable substitution and other unwanted side effects, strange characters are stripped (only alphanumeric characters and '_' are allowed).
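The naming rule can be sketched like this (illustrative Python; the helper name is made up):

```python
import re

def output_file_name(owner: str, table: str, ext: str) -> str:
    """Build owner_table.ext, keeping only alphanumerics and '_'
    so the name causes no variable substitution or other side effects."""
    safe = lambda s: re.sub(r'[^A-Za-z0-9_]', '', s)
    return f"{safe(owner)}_{safe(table)}.{ext}"

# The '$' is stripped because only alphanumerics and '_' survive.
print(output_file_name("SCOTT", "EMP$HIST", "dmp"))  # SCOTT_EMPHIST.dmp
```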
If the FILE parameter is set the generated names will be FILEnnn.ext. This possibility is a work around if the file system does not support long enough file names. (Old Windows with the 8.3 filename format.)
To unload table data from a database block the following information must be known:
C-structs in block headers are not copied directly; they are retrieved with specialized functions. All offsets of structure members are programmed into DUL. This approach makes it possible to cross-unload (unload an MVS created data file on an HP). Apart from byte order, only four layout types have been found so far.
Machine dependencies (of the database) are configurable with parameters:
DUL can use the data dictionary of the database to be unloaded if the files for it exist and are uncorrupted. For the data dictionary to be used, internal tables must be unloaded first to external files (USER$, OBJ$, TAB$ and COL$). The bootstrap command will find and unload the required tables.
[ ALTER SESSION ] SET init.dul parameter = value ;
Most parameters can be changed on the fly.
BOOTSTRAP [LOCATE | GENERATE | COMPLETE
| UNLOAD Bootstrap$ segment header block address ];
Bootstraps the data dictionary. Default is COMPLETE.
LOCATE finds and unloads the bootstrap$ table.
GENERATE builds a ddl file based on information in the cache.
COMPLETE is in fact LOCATE, followed by GENERATE (two times).
COMMIT;
Writes the changed block to the data file.
CREATE BLOCK INDEX index_name ON device ;
A block index contains the addresses of valid oracle blocks found in a corrupt file system. Useful to merge multiple disk images or to unload from corrupted file systems. This is only useful in extreme file system corruption scenarios.
DESCRIBE owner_name . table_name ;
DUMP [ TABLESPACE tablespace_no ]
[ FILE file_no ]
[ BLOCK block_no ]
[ LEVEL level_no ] ;
Not a complete block dump, mainly used for debugging.
The block address is remembered.
EXTRACT asm file name to output file name ;
Copies any ASM file from a disk group to the file system.
(There was a problem with online redo logs; this needs more testing.)
MERGE block_index INTO [ segment ];
The merge command uses the information in the index file to locate possible data blocks; it looks for a combination of file number and object id. Each candidate block is compared to the current block in the datafile. If the current block is bad, or has an older scn, the candidate will be written into the datafile. This is only useful in extreme file system corruption scenarios.
REM any_text_you_like_till_End_Of_Line : comment
REM NOT allowed inside ddl statements. ( To avoid a two layer lexical scan).
ROLLBACK; # Cancels the UPDATE statements.
SHOW DBA dba ; # dba -> file_no block_no calculator
| DBA rfile_no block_no ; # file_no block_no -> dba calculator
| SIZES ; # show some size of important structs
| PARAMETER; # shows the values of all parameters
| LOBINFO; # lob indexes found with SCAN DATABASE
| DATAFILES; # summary of configured datafiles
| ASM DISKS; # summary of configured asm disks
| ASM FILES; # summary of configured datafiles on asm
| ASM FILE cid # extent information for asm file
UNEXP [TABLE] [ owner . ] table name
( column list ) [ DIRECT ]
DUMP FILE dump file name
FROM begin offset [ UNTIL end offset ]
[ MINIMUM minimal number of columns COLUMNS ] ;
To unload data from a corrupted exp dump file. No special setup
or configuration is required, just the compatible parameter.
The start offset should be where a row actually begins.
UNPUMP
To unload data from a corrupted expdp (datapump) dump file.
This is still work in progress; the basic commands work
but it is rather complex to use. Contact me if this is needed.
UNLOAD DATABASE;
UNLOAD USER user_name;
UNLOAD [TABLE] [ schema_name . ] table_name
[ PARTITION( partition_name ) ]
[ SUBPARTITION( sub_partition_name ) ]
[ ( column_definitions ) ]
[ cluster_clause ]
[ storage_clause ] ;
UNLOAD EXTENT table_name
[ ( column_definitions ) ]
[ TABLESPACE tablespace_no ]
FILE extent_start_file_number
BLOCK extent_start_block_number
BLOCKS extent_size_in_oracle_blocks ;
UNLOAD LOB SEGMENT FOR [ schema_name . ] table_name [ ( column name ) ] ;
UNLOAD LOB SEGMENT STORAGE ( SEGOBJNO data obj#) ;
UPDATE [ block_address ] SET UB1|UB2|UB4 @ offset_in_block = new_value ;
UPDATE [ block_address ] SET block element name = new_value ;
Now and then we can repair something.
Patches the current block and dumps it.
You can issue multiple UPDATE commands.
Block is not written yet, use COMMIT to write.
storage_clause ::=
STORAGE ( storage_specification [ more_storage_specs ] )
storage_specification ::=
OBJNO object_id_number
| TABNO cluster_table_number
| SEGOBJNO cluster/data_object_number /* v7/v8 style data block id */
| FILE data_segment_header_file_number /* v6 style data block id */
BLOCK data_segment_header_block_number
| any_normal_storage_specification_but_silently_ignored
SCAN DATABASE;
Scans all blocks of all data files. Two or three files are generated:
SCAN DUMP FILE dump file name
[ FROM begin offset ]
[ UNTIL end offset ];
Scans an export dump file to produce the
create/insert statements and the offsets in the dump file.
SCAN LOB SEGMENT storage clause ;
SCAN LOB SEGMENT FOR table name [. column name] ;
Scans the lob segment to produce LOBPAGE.dat information,
but for this segment only. Probably quicker and
smaller. For partitioned objects use scan database.
SCAN TABLES;
Uses SEG.dat and EXT.dat as input.
Scans all tables in all data segments (a header block and at least one
matching extent with at least 1 table).
SCAN EXTENTS;
Uses SEG.dat and EXT.dat as input.
Scans all extents for which no corresponding segment header has been found.
(Only useful if a tablespace is not complete, or a segment header
is corrupt.)
EXIT QUIT and EOF all cause DUL to terminate.
UNLOAD TABLE requires an extent map. In 99.99% of the cases the extent map in the segment header is available. In the rare 0.01% where the segment header is lost, an extent map can be built with the scan database command. The self-built extent map will ONLY be used during an unload if the parameter USE_SCANNED_EXTENT_MAP is set to TRUE.
All data blocks have some ID of the segment they belong to. But there is a fundamental difference between V6 and V7. Data blocks created by Oracle version 6 have the address of the segment header block; data blocks created by Oracle7 have the segment object id in the header.
The column definitions must be specified in the order the columns are stored in the segment, that is, ordered by col$.segcol#. This is not necessarily the same order as the columns were specified in the create table statement. Cluster columns are moved to the front, longs to the end. Columns added to the table with the alter table command are always stored last.
UNLOAD EXTENT can be used to unload one or more adjacent blocks. The extent to be unloaded must be specified with the STORAGE clause. To specify a single extent use: STORAGE ( EXTENTS( FILE fno BLOCK bno BLOCKS #blocks ) ) (FILE and BLOCK specify the first block, BLOCKS the size of the extent).
There are two extra DUL specific data types:
In SQL*Loader mode there are even more DUL specific data types:
DUL uses the same bootstrap procedure as the rdbms. That is, it uses the root dba from the system datafile header to locate the bootstrap$ table. Depending on the version this root dba is either the location of the compatibility segment containing the bootstrap$ address or, for the newer versions, the address of the bootstrap$ table itself. The bootstrap$ table is unloaded and its contents are parsed to find the first four tables (USER$, OBJ$, TAB$ and COL$). The other tables are unloaded based on information in these first four.
SCAN TABLES and SCAN EXTENTS scan for the same information and produce similar output. ALL columns of ALL rows are inspected. For each column the following statistics are gathered:
These statistics are combined and a column type is suggested. Using this suggestion five rows are unloaded to show the result. These statistics are dumped to two files (seen_tab.dat and seen_col.dat). There are SQL*Loader and SQL*Plus scripts available to automate a part of the identification process. (Currently known as the getlost option.)
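To give an idea of how such per-column statistics could drive a type suggestion, here is a deliberately simplified sketch (my own invention, not DUL's actual heuristics) using the Dat%/Num%/C100-style counters that appear in the scan output later in this document:

```python
def suggest_column_type(pct_date: float, pct_number: float,
                        pct_printable: float) -> str:
    """Crude type suggestion from per-column statistics:
    prefer the most restrictive interpretation that fits every value.
    (Simplified illustration; DUL's real rules are more involved.)"""
    if pct_date == 100:        # every value parses as an internal DATE
        return "DATE"
    if pct_number == 100:      # every value parses as an internal NUMBER
        return "NUMBER"
    if pct_printable == 100:   # every value is fully printable text
        return "CHAR"
    return "UNKNOWN"

print(suggest_column_type(100, 0, 0))    # DATE
print(suggest_column_type(0, 100, 100))  # NUMBER
print(suggest_column_type(0, 14, 100))   # CHAR
```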
There is a describe command. It will show the dictionary information for the table, as available in DUL's dictionary cache.
During startup DUL goes through the following steps:
ALLOW_TRAILER_MISMATCH
BOOLEAN
Use is strongly discouraged and will seldom produce more rows. Use only if you fully understand what it means and why you want it. Skips the check for a correct block trailer. The blocks failing this test are split or corrupt. But it saves you the trouble of patching some blocks.
ALLOW_DBA_MISMATCH
BOOLEAN
Use is strongly discouraged and will seldom produce more rows. Use only if you fully understand what it means and why you want it. Skips the check for the correct block address. The blocks failing this test are probably corrupt. But it saves you the trouble of patching some blocks.
ALLOW_OTHER_OBJNO
BOOLEAN
If your dictionary is older than your datafiles then the data object ids can differ for truncated tables. With this parameter set to true DUL will issue a warning but use the value from the segment header. All other blocks are fully checked. This is for special cases only.
ASCII2EBCDIC
BOOLEAN
Must (var)char fields be translated from EBCDIC to ASCII? (For unloading an MVS database on an ASCII host.)
BUFFER
NUMBER (bytes)
Row output buffer size used in both export and SQL*Loader mode. Each row is first stored in this buffer. Only complete rows without errors are written to the output file.
COMPATIBLE
NUMBER
Database version; valid values are 6, 7, 8 or 9. This parameter must be specified.
CONTROL_FILE
TEXT
Name of the DUL control file (default: "control.dul").
DB_BLOCK_SIZE
NUMBER
Oracle block size in bytes (maximum 32K).
DC_COLUMNS
NUMBER
DC_OBJECTS
NUMBER
DC_TABLES
NUMBER
DC_USERS
NUMBER
Sizes of the dul dictionary caches. If one of these is too low the cache will be automatically resized.
EXPORT_MODE
BOOLEAN
Use EXPort-like output mode or SQL*Loader format.
FILE
TEXT
Base for (dump or data) file name generation. Use this on 8.3 DOS-like file systems.
FILE_SIZE_IN_MB
NUMBER (Megabytes)
Maximum dump file size. Dump files are split into multiple parts. Each file has a complete header and can be loaded individually.
LDR_ENCLOSE_CHAR
TEXT
The character used to enclose fields in SQL*Loader mode.
LDR_PHYS_REC_SIZE
NUMBER
Physical record size for the generated loader datafile.
LDR_PHYS_REC_SIZE = 0: no fixed records, each record is terminated with a newline.
LDR_PHYS_REC_SIZE > 2: fixed record size.
MAX_OPEN_FILES
Maximum number of database files that are concurrently kept open at the OS level.
OSD_BIG_ENDIAN_FLAG
Byte order in machine words. Big Endian is also known as MSB first. DUL sets the default according to the machine it is running on. For an explanation of why this is called Big Endian, you should read Gulliver's Travels.
OSD_DBA_FILE_BITS
File number size in a DBA in bits. Or, to be more precise, the size of the low order part of the file number.
OSD_FILE_LEADER_SIZE
Bytes/blocks added before the real oracle file header block.
OSD_C_STRUCT_ALIGNMENT
C structure member alignment (0, 16 or 32). The default of 32 is correct for most ports.
OSD_WORD_SIZE
Size of a machine word: always 32, except for MS/DOS (16).
PARSE_HEX_ESCAPES
BOOLEAN (default FALSE)
Use \\xhh hex escape sequences in strings while parsing. If set to true then strange characters can be specified using escape sequences. This feature is also for specifying multi-byte characters.
USE_SCANNED_EXTENT_MAP
BOOLEAN
Use the scanned extent map in ext.dat when unloading a table. The normal algorithm uses the extent map in the segment header. This parameter is only useful if some segment headers are missing or incorrect.
WARN_RECREATE_FILES
BOOLEAN (TRUE)
Set to FALSE to suppress the warning message if an existing file is overwritten.
WRITABLE_DATAFILES
BOOLEAN (FALSE)
Normal use of DUL will only read the database files. However, the UPDATE and SCAN RAW DEVICE commands will write as well. This parameter is there to prevent accidental damage.
SAMPLE init.dul :
# sample init.dul configuration parameters
# these must be big enough for the database in question
# the cache must hold all entries from the dollar tables.
dc_columns = 200000
dc_tables = 10000
dc_objects = 10000
dc_users = 40
# OS specific parameters
osd_big_endian_flag = false
osd_dba_file_bits = 10
osd_c_struct_alignment = 32
osd_file_leader_size = 1
# database parameters
db_block_size = 8k
# loader format definitions
LDR_ENCLOSE_CHAR = "
LDR_PHYS_REC_SIZE = 81
Starting from rdbms version 10g the osd parameters are easy to configure. Typically all parameters can be used at their defaults. The only one that might need attention is osd_big_endian_flag, when doing a cross-platform unload where the original database platform is different from the current machine. If osd_big_endian_flag is set incorrectly, it is detected at startup during file header inspection.
For pre-10g databases there is a list of known parameters in the osd wiki page (list of osd (Operating System Dependent) parameters) for almost every platform. If your platform is not in the list you can use the suggestions below to determine the parameters. (And then please inform me so I can add them to the list.)
Big endian or little endian (byte order in machine words): HP, SUN and mainframes are generally big endian: OSD_BIG_ENDIAN_FLAG = TRUE. DEC and Intel platforms are little endian: OSD_BIG_ENDIAN_FLAG = FALSE. The default is correct for the platform where DUL is running.
There is no standard trick for this; the following might work on a unix system:
echo dul | od -x
If the output is like:
0000000 6475 6c0a
0000004
You are on a big endian machine (OSD_BIG_ENDIAN_FLAG=TRUE).
If you see:
0000000 7564 0a6c
0000004
This is a little endian machine (OSD_BIG_ENDIAN_FLAG=FALSE).
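The od output above can also be reproduced without od. A small Python check (illustrative, not part of DUL) that groups the bytes of "dul\n" into 16-bit words under both byte orders:

```python
import struct

data = b"dul\n"  # the bytes that `echo dul` writes

# od -x prints the input as 16-bit words in the host byte order.
big    = ["%04x" % w for w in struct.unpack(">2H", data)]
little = ["%04x" % w for w in struct.unpack("<2H", data)]

print(big)     # ['6475', '6c0a']  -> OSD_BIG_ENDIAN_FLAG = TRUE
print(little)  # ['7564', '0a6c']  -> OSD_BIG_ENDIAN_FLAG = FALSE
```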
The number of bits in a dba used for the low order part of the file number. Perform the following query:
SQL> select dump(chartorowid('0.0.1')) from dual;
Typ=69 Len=6: 8,0,0,0,0,0 -> osd_dba_file_bits = 5 (SCO)
Typ=69 Len=6: 4,0,0,0,0,0 -> osd_dba_file_bits = 6 (Sequent , HP)
Typ=69 Len=6: 1,0,0,0,0,0 -> osd_dba_file_bits = 8 (NCR,AIX)
Typ=69 Len=6: 0,16,0,0,0,0 -> osd_dba_file_bits = 12 (MVS)
Typ=69 Len=10: 0,0,0,0,0,64,0,0,0,0 -> osd_dba_file_bits = 10 (Oracle8)
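The pattern in the 6-byte dump output can be understood as follows: rowid '0.0.1' is file 1, block 0, so the only set bit in the leading 4-byte dba is file number 1's low bit, and its position yields osd_dba_file_bits. A sketch (my own reconstruction, covering only the Len=6 cases above, not the Oracle8 Len=10 layout):

```python
def dba_file_bits(dump_bytes):
    """Given the first four dumped bytes of the rowid for '0.0.1'
    (file 1, block 0), the dba holds file 1 shifted above the block
    bits; 32 minus that shift is osd_dba_file_bits."""
    dba = int.from_bytes(bytes(dump_bytes), "big")
    return 32 - (dba.bit_length() - 1)

print(dba_file_bits([8, 0, 0, 0]))   # 5  (SCO)
print(dba_file_bits([4, 0, 0, 0]))   # 6  (Sequent, HP)
print(dba_file_bits([1, 0, 0, 0]))   # 8  (NCR, AIX)
print(dba_file_bits([0, 16, 0, 0]))  # 12 (MVS)
```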
Structure layout in data file headers. 0: no padding between members in a C-struct (VAX/VMS only). 16: some Korean Ticom machines and MS/DOS. 32: structure members are member-size aligned (all others, including ALPHA/VMS). Check the following query:
SELECT * FROM v$type_size
WHERE type IN ( 'KCBH', 'KTNO', 'KTBBH', 'KTBIT', 'KDBH'
, 'KTECT', 'KTETB', 'KTSHC') ;
In general osd_c_struct_alignment = 32 and the following output is expected:
COMPONEN TYPE DESCRIPTION SIZE
-------- -------- -------------------------------- ----------
K KTNO TABLE NUMBER IN CLUSTER 1
KCB KCBH BLOCK COMMON HEADER 20
KTB KTBIT TRANSACTION VARIABLE HEADER 24
KTB KTBBH TRANSACTION FIXED HEADER 48
KDB KDBH DATA HEADER 14
KTE KTECT EXTENT CONTROL 44
KTE KTETB EXTENT TABLE 8
KTS KTSHC SEGMENT HEADER 8
8 rows selected.
For VAX/VMS and Netware ONLY, osd_c_struct_alignment = 0 and this output is expected:
COMPONEN TYPE DESCRIPTION SIZE
-------- -------- -------------------------------- ----------
K KTNO TABLE NUMBER IN CLUSTER 1
KCB KCBH BLOCK COMMON HEADER 20
KTB KTBIT TRANSACTION VARIABLE HEADER 23
KTB KTBBH TRANSACTION FIXED HEADER 42
KDB KDBH DATA HEADER 14
KTE KTECT EXTENT CONTROL 39
KTE KTETB EXTENT TABLE 8
KTS KTSHC SEGMENT HEADER 7
8 rows selected.
If there is a different list this will require some major hacking and sniffing and possibly a major change to DUL. (Email Bernard.van.Duijnen@oracle.com)
Number of blocks/bytes before the oracle file header. Unix datafiles have an extra leading block (file size, block size, magic number). A large number (> 100) is seen as a byte offset; a small number is seen as a number of oracle blocks.
Unix : osd_file_leader_size = 1
Vms : osd_file_leader_size = 0
Desktop : osd_file_leader_size = 1 (or 512 for old personal oracle)
Others : Unknown ( Use Andre Bakker's famous PATCH utility to find out)
An Oracle7 file header block starts with the pattern 0X0B010000.
You can add an additional byte offset in control.dul in the optional third field (for instance for AIX or DEC UNIX data files on raw devices).
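The blocks-versus-bytes interpretation of osd_file_leader_size described above can be sketched as follows (my own illustration of the stated rule, not DUL code):

```python
def leader_offset_bytes(osd_file_leader_size: int, db_block_size: int) -> int:
    """A small value (<= 100) counts Oracle blocks to skip before the
    file header; a large value (> 100) is already a byte offset."""
    if osd_file_leader_size > 100:
        return osd_file_leader_size
    return osd_file_leader_size * db_block_size

print(leader_offset_bytes(1, 8192))    # 8192 (typical Unix datafile)
print(leader_offset_bytes(512, 8192))  # 512  (old Personal Oracle)
```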
A control file (default name "control.dul") is used to specify asm disks, block indexes and the data file names. The format of the control file has been extended.
Currently there are three types of specifications in the DUL control file, each entry on a separate line. The asm disks must precede the asm files.
control_file_line ::= asm_disk_spec | file_piece_spec | block_index_spec
If COMPATIBLE is 10 or higher you can also specify asm disks. It is generally sufficient to specify the device name. All properties are automatically retrieved by header inspection. The full syntax is only needed when header inspection is impossible, that is for disks with corrupt headers. The syntax is:
DISK device name [ disk group options ]
disk group option ::= GROUP disk group name
| DISK_NO disk number in group
| F1B1 File1 Block1 location
A block index is a way to access oracle blocks on corrupt file systems. In general a corrupt file system is not wiped out, it is not empty. Due to the specific layout of oracle blocks it is possible to detect data blocks and store their location in the block index. See also the create block index command. A block_index_name is a normal identifier; it is used to construct a unique file name.
BLOCK INDEX block_index_name
Each entry can contain a part of a datafile. The smallest unit is a single data block. This way it is possible to split datafiles that are too big for DUL into parts where each part is smaller than 2GB.
In general it is sufficient to specify the file name, even for a single block. If compatible is 10 or higher the file numbers and the tablespace numbers will be read from the file header. If the specified details are different from the file header, DUL will give a warning but use your specification. This makes it possible to unload files with a corrupted header block. For debugging it is possible to dump the file header.
The optional extra leader offset is an extra byte offset that will be added to all lseek() operations for that datafile. This makes it possible to skip over the extra 4k block for some AIX raw devices, or the extra 64K on Tru64 raw devices.
file_piece_spec ::=
[ [ tablespace_no ] relative_file_number ] data_file_name
[ optional extra leader offset ]
[ startblock block_no ]
[ endblock block_no ]
# AIX version 7 example with one file on raw device
1 /usr/oracle/dbs/system.dbf
8 /dev/rdsk/data.dbf 4096
# Oracle8 example with a datafile split in multiple parts, each part smaller than 2GB
0 1 /fs1/oradata/PMS/system.dbf
1 2 /tmp/huge_file_part1 startblock 1 endblock 1000000
1 2 /tmp/huge_file_part2 startblock 1000001 endblock 2000000
1 2 /mnt3/huge_file_part3 startblock 2000001 endblock 2550000
# ASM disks for two disk groups
disk /media/maxtor/asm/dgn1
disk /media/maxtor/asm/dgn2
disk /media/maxtor/asm/dgn3
disk /media/maxtor/asm/dgn4
disk /media/maxtor/asm/dgodd
# system datafile in the first asm disk group
+DGN/db102/datafile/system.257.621616979
# users datafile in a different disk group
+DGODD/db102/datafile/users.257.621616683
# a so called big file tablespace, use 1024 for the file#
8 1024 /home/oracle/v102/dbs/bigfilets
# Or let DUL find out itself from the header
/home/oracle/v102/dbs/bigfilets
# one tablespace with a different block size
/home/oracle/v102/dbs/ts16k.dbf block_size 16k
# or let DUL find out by header inspection
/home/oracle/v102/dbs/ts16k.dbf
1. create a suitable "init.dul"
2. create a control.dul
sqlplus /nolog
connect / as sysdba
startup mount
set trimspool on pagesize 0 linesize 256 feedback off
column name format a200
spool control.dul
select * from v$dbfile;
exit
edit the result
For Oracle8 a different query must be used:
select ts#, rfile#, name from v$datafile;
3. start DUL and bootstrap;
$ dul
Data UnLoader 10.2.1.16 - Oracle Internal Only - on Thu Jun 28 11:37:24 2007
with 64-bit io functions
Copyright (c) 1994 2007 Bernard van Duijnen All rights reserved.
Strictly Oracle Internal use Only
DUL> bootstrap;
Probing file = 1, block = 377
. unloading table BOOTSTRAP$ 57 rows unloaded
DUL: Warning: Dictionary cache DC_BOOTSTRAP is empty
Reading BOOTSTRAP.dat 57 entries loaded
Parsing Bootstrap$ contents
DUL: Warning: Recreating file "dict.ddl"
Generating dict.ddl for version 10
OBJ$: segobjno 18, file 1
TAB$: segobjno 2, tabno 1, file 1
COL$: segobjno 2, tabno 5, file 1
USER$: segobjno 10, tabno 1, file 1
Running generated file "@dict.ddl" to unload the dictionary tables
. unloading table OBJ$ 52275 rows unloaded
. unloading table TAB$ 1943 rows unloaded
. unloading table COL$ 59310 rows unloaded
. unloading table USER$ 70 rows unloaded
Reading USER.dat 70 entries loaded
Reading OBJ.dat
52275 entries loaded and sorted 52275 entries
Reading TAB.dat 1943 entries loaded
Reading COL.dat 59310 entries loaded and sorted 59310 entries
Reading BOOTSTRAP.dat 57 entries loaded
...
Some more messages for all the other TABLES
...
Database character set is WE8ISO8859P1
Database national character set is AL16UTF16
DUL> unload user SCOTT;
About to unload SCOTT's tables ...
. unloading table EMP 14 rows unloaded
1. create a suitable "init.dul" (See config guide)
2. create a control.dul (See above)
3. scan the database for segment headers and extents:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:10:16 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL> scan database;
data file 1 20480 blocks scanned
data file 4 7680 blocks scanned
data file 5 512 blocks scanned
DUL>quit
4. Restart DUL and scan the found tables for column statistics; this creates a huge amount of output:
echo scan tables \; | dul > scan.out&
[ many lines here]
Object id 1601 table number 0
UNLOAD TABLE T1601_0 ( C1 NUMBER, C2 UNKNOWN, C3 UNKNOWN, C4 NUMBER, C5 DATE
, C6 NUMBER, C7 NUMBER, C8 NUMBER )
STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
Colno Seen MaxIntSz Null% C75% C100 Num% NiNu% Dat% Rid%
1 14 3 0% 0% 0% 100% 100% 0% 0%
2 14 6 0% 100% 100% 100% 14% 0% 21%
3 14 9 0% 100% 100% 100% 14% 0% 0%
4 14 3 7% 0% 0% 100% 100% 0% 0%
5 14 7 0% 0% 0% 0% 0% 100% 0%
6 14 3 0% 0% 0% 100% 100% 0% 0%
7 14 2 71% 0% 0% 100% 100% 0% 0%
8 14 2 0% 0% 0% 100% 100% 0% 0%
"7369" "SMITH" "CLERK" "7902" "17-DEC-1980 AD 00:00:00" "800" "" "20"
"7499" "-0.000025253223" "SALESMAN" "7698" "20-FEB-1981 AD 00:00:00" "1600" "30+
0" "30"
"7521" "WARD" "SALESMAN" "7698" "22-FEB-1981 AD 00:00:00" "1250" "500" "30"
"7566" "JONES" "MANAGER" "7839" "02-APR-1981 AD 00:00:00" "2975" "" "20"
"7654" "MARTIN" "SALESMAN" "7698" "28-SEP-1981 AD 00:00:00" "1250" "1400" "30"
[ many more lines here ]
5. Identify the lost tables from the output. This looks familiar, so use the above information and your knowledge of
the emp table to compose:
UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
hiredate date, sal number, comm number, deptno number)
STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
6. use this statement to unload emp:
$ dul
UnLoader: Version 2.0.0.0 - Very Restricted on Tue May 16 11:46:33 1995
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Loaded 350 segments
Loaded 204 extents
Extent map sorted
DUL> UNLOAD TABLE emp ( empno number, ename char, job char, mgr number,
DUL 2> hiredate date, sal number, comm number, deptno number)
DUL 3> STORAGE ( TABNO 0 EXTENTS( FILE 1 BLOCK 10530));
. unloading table EMP 14 rows unloaded
DUL>quit
This can generate output similar to below. Normally this should not
happen, since you should create a demo database and check this via the DUL
documented (in the html page) query.
The mismatch in the DBAs is only in the file number (first number in brackets) part.
The second number, the block number, is correct.
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:40:33 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [4][2]
DUL: Warning: Bad cache layer header file#=1, block#=2
DUL: Warning: Block[1][3] DBA in block mismatch [4][3]
DUL: Warning: Bad cache layer header file#=1, block#=3
...........and etc..........
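The bracketed numbers in these warnings are decoded DBAs (data block addresses). A DBA packs the relative file number and the block number into one 32-bit value; in Oracle 8 and later the top 10 bits hold the file number and the low 22 bits the block number (the split exposed by DBMS_UTILITY.DATA_BLOCK_ADDRESS_FILE/_BLOCK). A minimal sketch of that decoding, for reading warnings like the ones above:

```python
# Decode/encode an Oracle data block address (DBA).
# Layout: 10-bit relative file#, 22-bit block# (Oracle 8+ default).

BLOCK_BITS = 22

def dba_split(dba):
    """Return (relative file#, block#) for a packed DBA."""
    return dba >> BLOCK_BITS, dba & ((1 << BLOCK_BITS) - 1)

def dba_make(file_no, block_no):
    """Pack a relative file# and block# into one DBA."""
    return (file_no << BLOCK_BITS) | block_no

# "Block[1][2] DBA in block mismatch [4][2]": the block DUL read as
# file 1, block 2 carries the DBA for file 4, block 2 in its header --
# only the file number differs, so the control file numbering is wrong.
print(dba_split(dba_make(4, 2)))  # (4, 2)
```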
This may create output similar to below, but many other flavours are possible. In this case we are a fixed number of
blocks off. The file number is correct; the difference in the block numbers is constant:
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:44:23 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [1][3]
DUL: Warning: Bad cache layer header file#=1, block#=2
DUL: Warning: Block[1][3] DBA in block mismatch [1][4]
DUL: Warning: Bad cache layer header file#=1, block#=3
...........and etc..........
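A constant block-number difference like this often points at extra operating-system header blocks in front of the Oracle blocks that DUL is not skipping correctly (the osd_file_leader_size setting in init.dul); the corresponding byte offset is simply the block difference times db_block_size. A trivial illustration of that arithmetic (whether the bytes must be added or skipped is a configuration detail to verify against the DUL config guide):

```python
# Convert a constant block-number difference into a byte offset,
# e.g. for DUL's osd_file_leader_size setting (semantics per config guide).

def leader_bytes(block_diff, db_block_size):
    """Byte offset corresponding to a constant block-number difference."""
    return block_diff * db_block_size

# Output above: DUL expected block 2 but the header said block 3,
# a constant difference of 1 block; with a 4k block size:
print(leader_bytes(1, 4096))  # 4096
```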
This may generate output similar to the following:
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:46:10 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
. unloading table OBJ$
DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262145
OS error 2: No such file or directory
DUL: Warning: file# 0 is out of range
DUL: Warning: Cannot read data block file#=0, block# = 262146
OS error 2: No such file or directory
...........and etc..........
The following output was generated when the db_block_size was set too
small. The correct value was 4096 and it was set to 2048. Normally, the
value for this parameter should be taken from the Oracle instance's init.ora
file, and it will then not be set incorrectly.
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Thu Sep 4 12:38:25 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
Session altered.
Session altered.
Session altered.
Session altered.
Session altered.
DUL: Warning: Block[1][2] DBA in block mismatch [513][1159680]
DUL: Warning: File=1, block 2: illegal block version 2
DUL: Warning: Block[1][2] Illegal block type[0]
DUL: Warning: Bad cache layer header file#=1, block#=2
DUL: Warning: Block[1][4] DBA in block mismatch [1][2]
DUL: Warning: File[1]Block[4]INCSEQ mismatch[90268!=0]
DUL: Warning: Bad cache layer header file#=1, block#=4
DUL: Warning: Block[1][6] DBA in block mismatch [1][3]
DUL: Warning: File[1]Block[6]INCSEQ mismatch[139591710!=86360346]
DUL: Warning: Bad cache layer header file#=1, block#=6
...........and etc..........
If you get the following error it is caused by the data dictionary tables
"USER$, OBJ$, TAB$ and COL$" not being correctly generated. To
fix this error, simply delete all .dat and .ctl files created by dictv6.ddl
or dictv7.ddl and restart.
Data UnLoader: Release 3.2.0.1 - Internal Use Only - on Wed Sep 3 10:49:30 1997
Copyright (c) 1994/95 Oracle Corporation, The Netherlands. All rights reserved.
DUL: Error: Quote missing
If you do not know anything about the structure of an EXP dump file this can be difficult, so here is a quick explanation. Apart from the file header, a dump file has MARKERS that identify the various sections. In each table section there will be SQL statements. The most interesting part is the create table statement, followed by the insert into table statement. The insert statement is directly followed by the bind information (the number of columns and, for each column, its type, its bind length and a little more). That in turn is followed by the actual column data: each column is preceded by a two-byte length, followed by the actual column data. There are several possible tricks for longer columns. The end of the column data is marked by the special length marker 0xFFFF.
There is no marker for the beginning of a row, so resynching after a corruption is trial and error, and corruptions are generally not immediately detectable. The format is slightly different for DIRECT exports, so you will have to use the DIRECT option for DIRECT exports. The offset to be specified is the beginning of a row: in general the first one directly behind the bind array, but for optimal flexibility you can start anywhere in the row data.
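As an illustration of the row layout just described (a two-byte length before each column, 0xFFFF as the end-of-row marker), here is a sketch over a synthetic byte stream. The little-endian length encoding is an assumption made for the example; real dump files follow the platform that wrote them:

```python
# Parse rows of length-prefixed columns, as in an exp dump's table data:
# each column is a 2-byte length followed by that many bytes of data;
# the special length 0xFFFF marks the end of a row.
# Assumption for this sketch: little-endian length fields.
import struct

END_OF_ROW = 0xFFFF

def parse_rows(buf, offset=0):
    """Return a list of rows; each row is a list of raw column values."""
    rows, row = [], []
    while offset + 2 <= len(buf):
        (length,) = struct.unpack_from("<H", buf, offset)
        offset += 2
        if length == END_OF_ROW:        # end-of-row marker
            rows.append(row)
            row = []
            continue
        row.append(buf[offset:offset + length])
        offset += length
    return rows

def col(data):
    """Encode one column value with its 2-byte length prefix."""
    return struct.pack("<H", len(data)) + data

# Synthetic stream with two rows: ("7369", "SMITH") and ("7499",)
stream = (col(b"7369") + col(b"SMITH") + struct.pack("<H", END_OF_ROW)
          + col(b"7499") + struct.pack("<H", END_OF_ROW))
print(parse_rows(stream))  # [[b'7369', b'SMITH'], [b'7499']]
```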
The first step is to scan the dump file to find the offsets and the SQL statements. Each output line starts with the offset where the item is found.
DUL> scan dump file expdat.dmp;
0: CSET: 1 (US7ASCII) # Character set info from the header
3: SEAL EXPORT:V10.02.01 # the Seal - the exp version tag
20: DBA SYSTEM # exp done as SYSTEM
8461: CONNECT SCOTT # section for user SCOTT
8475: TABLE "EMP"
# complete create table statement
8487: CREATE TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0)) PCTFREE 10 PCTUSED 40
INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1
FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "USERS" LOGGING NOCOMPRESS
# Insert statement
8829: INSERT INTO "EMP" ("EMPNO", "ENAME", "JOB", "MGR", "HIREDATE",
"SAL", "COMM", "DEPTNO") VALUES (:1, :2, :3, :4, :5, :6, :7, :8)
# BIND information
8957: BIND information for 8 columns
col[ 1] type 2 max length 22
col[ 2] type 1 max length 10 cset 31 (WE8ISO8859P1) form 1
col[ 3] type 1 max length 9 cset 31 (WE8ISO8859P1) form 1
col[ 4] type 2 max length 22
col[ 5] type 12 max length 7
col[ 6] type 2 max length 22
col[ 7] type 2 max length 22
col[ 8] type 2 max length 22
Conventional export # Conventional means NOT DIRECT
9003: start of table data # Here begins the first row
Now build an unexp statement from the create table statement, the direct/conventional information, and the start of the column data.
UNEXP TABLE "EMP" ("EMPNO" NUMBER(4, 0), "ENAME" VARCHAR2(10),
"JOB" VARCHAR2(9), "MGR" NUMBER(4, 0), "HIREDATE" DATE, "SAL" NUMBER(7, 2),
"COMM" NUMBER(7, 2), "DEPTNO" NUMBER(2, 0))
dump file expdat.dmp from 9003;
Unloaded 14 rows, end of table marker at 9670 # so we have our famous 14 rows
This builds the normal SQL*Loader file and a matching control file.
In the output file one extra column is added that is related to the
status of the row: a P means the row is Partial (some columns missing),
R means Resynch (it is the first row after a resynchronisation),
and O means Overlap (the previous row had errors, but the new row
partly overlaps the other one).
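These status flags can be used to post-filter a damaged unload before loading it. The sketch below assumes, purely for illustration, that the status flag is the first quoted field of each loader line and that the separator is a comma; check the generated control file for the real layout:

```python
# Hypothetical post-filter for DUL's SQL*Loader output: keep only rows
# whose status flag is neither P (partial), R (resynch) nor O (overlap).
# The status-column position (first field) and the comma separator are
# assumptions for illustration only.
SUSPECT = {"P", "R", "O"}

def clean_rows(lines, sep=","):
    """Yield only the loader lines whose status flag is empty/clean."""
    for line in lines:
        status = line.split(sep, 1)[0].strip().strip('"')
        if status not in SUSPECT:
            yield line

rows = ['"P","7369","SMITH"', '"","7499","ALLEN"', '"R","7521","WARD"']
print(list(clean_rows(rows)))  # ['"","7499","ALLEN"']
```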