Oracle Datapump Interview Questions and Answers
In which cases are imp/exp used?
-Eliminate database fragmentation
-Schema refresh (move the schema from one database to another; see the example after this list)
-Detect database corruption by ensuring that all the data can be read (if the data can be read, there is no block corruption)
-Transporting tablespaces between databases
-Backup database objects
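For example, a typical schema refresh with the classic exp/imp utilities might look like the following sketch (the schema name, passwords and file names are only placeholders):
exp system/passwd owner=scott file=scott_schema.dmp log=scott_exp.log
imp system/passwd fromuser=scott touser=scott file=scott_schema.dmp log=scott_imp.log ignore=y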
What are the common IMP/EXP problems?
ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or the table could be dropped/truncated and re-imported if we need to do a table refresh.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows (see the example after this list).
ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.
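As a rough illustration of the IGNORE, BUFFER and COMMIT advice above, a refresh-style import into existing tables could be run like this (user, file and buffer values are only examples):
imp system/passwd file=expdat.dmp log=imp_refresh.log full=y ignore=y commit=y buffer=10000000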
How can we check whether a Data Pump dump file is corrupted or not? Sometimes we may be in a situation where we need to check whether a dump file exported long ago is still valid, or the application team says that the dump file we provided is corrupted.
Use the SQLFILE parameter with an import (impdp) command to detect corruption. This parameter makes impdp read the entire Data Pump export dump file and report whether any corruption is detected.
impdp system/*** directory=dump_dir dumpfile=expdp.dmp logfile=corruption_check.log sqlfile=corruption_check.sql
This will write all the DDL statements (which would be executed if an import were performed) into the file mentioned in the command; no data is actually imported.
How can we find the elapsed time for a particular object type during a Data Pump export? Data Pump has an undocumented parameter, METRICS, that reports how long it took to export the different object types.
expdp system/passwd directory=dump_dir dumpfile=expdp_full.dmp logfile=expdp_full.log full=y metrics=y
How to move a table from one tablespace to another?
You can use either of the methods below:
1.Export the table, drop the table, create the table definition in the new tablespace, and then import the data using imp ignore=y (see the sketch after the SQL below).
2.Create a new table in the new tablespace, then drop the original table and rename the temporary table to the original table name.
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM real_table;
DROP TABLE real_table;
RENAME temp_name to real_table;
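A rough sketch of method 1 with the classic utilities, assuming the table is called real_table (user, password and file names are placeholders):
exp scott/passwd tables=real_table file=real_table.dmp log=real_table_exp.log
DROP TABLE real_table;
-- re-create real_table in the new tablespace from its original DDL, then:
imp scott/passwd tables=real_table file=real_table.dmp log=real_table_imp.log ignore=y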
What is the difference between SQL*loader and Import utilities?
Both of these utilities are used for loading data into the database. The difference is that the Import utility relies on the data being produced by the corresponding Oracle Export utility, while SQL*Loader is a high-speed data loading mechanism that can load data produced by other utilities from different data sources. SQL*Loader loads data from standard OS (flat) files into Oracle database tables, whereas Export/Import moves existing data in Oracle's own format to and from Oracle databases.
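As a minimal illustration of the SQL*Loader side, a control file describing a flat file might look like this (the file, table and column names are hypothetical):
LOAD DATA
INFILE 'employees.csv'
INTO TABLE employees
FIELDS TERMINATED BY ','
(emp_id, emp_name, salary)
It would then be run with something like sqlldr scott/passwd control=employees.ctl log=employees.log, whereas imp only understands dump files produced by exp.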
How to re-organize schema?
We can use the dbms_redefinition package for online re-organization of schema objects. Otherwise, you can recreate or re-organize your schema using the import/export or Data Pump utilities.
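A minimal sketch of the DBMS_REDEFINITION approach, assuming the table SCOTT.EMP is being reorganized into a pre-created interim table SCOTT.EMP_INTERIM that already has the desired new storage (all names are placeholders):
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP');                  -- check the table can be redefined online
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM'); -- start copying data to the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
      uname => 'SCOTT', orig_table => 'EMP', int_table => 'EMP_INTERIM',
      num_errors => l_errors);                                        -- copy indexes, triggers, constraints, grants
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INTERIM');-- swap the original and interim tables
END;
/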
How can we improve the EXP performance?
1.Set the BUFFER parameter to a high value (e.g. 2M; see the example after this list)
2.If you run multiple export sessions, ensure they write to different physical disks.
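For the first point, BUFFER is given in bytes, so a roughly 2 MB buffer could be requested like this (user, schema and file names are illustrative):
exp system/passwd owner=scott file=scott.dmp log=scott_exp.log buffer=2000000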
How can we improve the IMP performance?
1.Import the table using the INDEXFILE parameter (no data is imported, but a file containing the index creation statements is generated), then import the data and recreate the indexes (see the sketch after this list)
2.Store the dump file to be imported on a separate physical disk from the oracle data files
3.If there are any constraints on the target table, the constraints should be disabled during the import and enabled after import
4.Set the BUFFER parameter to a high value (e.g. BUFFER=30000000, ~30MB) together with COMMIT=y, or leave COMMIT=n (the default behavior: import commits only after each table is loaded; however, this uses a lot of rollback segment or undo space for huge tables)
5.Use the direct path to import the data (DIRECT=y)
6.(if possible) Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init<SID>.ora file
7.(if possible) Set the LOG_BUFFER to a big value and restart oracle.
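A hedged sketch of points 1 and 4 together, assuming the dump file and names below (the first run only generates the index DDL, the second loads the data without indexes):
imp scott/passwd file=big_table.dmp log=idx_gen.log full=y indexfile=big_table_indexes.sql
imp scott/passwd file=big_table.dmp log=data_load.log full=y ignore=y indexes=n buffer=30000000 commit=y
The generated big_table_indexes.sql can then be run in SQL*Plus to recreate the indexes after the data load.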
How does Data Pump work internally? What resources are utilized at the network level and in memory?
How do we export to dump files on different mount points (multiple dump files) at the same time, with both EXP and EXPDP, when the destination has limited space available?
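A minimal sketch, assuming two directory objects dump_dir1 and dump_dir2 already point at mount points with free space (names and sizes are only examples):
expdp system/passwd full=y dumpfile=dump_dir1:full_%U.dmp,dump_dir2:full_%U.dmp filesize=2GB logfile=dump_dir1:full_exp.log
exp system/passwd full=y file=/u01/exports/full1.dmp,/u02/exports/full2.dmp filesize=2GB log=full_exp.log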