Oracle Export/Import: Tips and Tricks (1)

Some tips about how to identify and work around limited storage for dump files when working with export/import…

In the big database migration projects I work on we are presented with a wide range of technical options for migrating our databases, but in my experience logical backup/restore (that is, original Export/Import and Datapump) is the method of choice for the overwhelming majority of the migrations we need to do, because of the flexibility it offers compared to other methods.

These utilities do bring some particular challenges: we need somewhere to write the dump files, which can be difficult on mature systems where storage is limited and there are few options to add more, and they can be too slow for some migrations.

In this series of blogs I will present tips and tricks for optimising the performance and storage utilisation of Exp/Imp and Datapump, and I will also illustrate the use of some other features that are particularly useful in these kinds of big DB migration projects. Some of this stuff will be old news to experienced DBAs, but I hope it will offer some value.

So let’s start with an old one: how to optimise the use of available storage…

Requirement: We need to migrate a schema from a 9i database. We are not sure whether there is sufficient storage available on the old server and we cannot add any more, so we need to figure out whether exp/imp is a viable option for migrating this schema.

Step 1: Size the dump file using dba_segments

I usually first look in dba_segments and see how big the table and lob segments are for the schema we’re interested in…

SELECT round(sum(bytes)/1048576) MBytes
from dba_segments
where owner = 'MYSCHEMA'
and segment_type not like '%INDEX%';

We are excluding index related segments since they occupy no significant space within an export dump file; only the metadata to recreate the index segments is written to the dump file.

Of course, segment size is not an accurate predictor of how big the eventual dump file will be, since the bytes column in dba_segments reflects the blocks allocated to a segment regardless of how much data they actually contain. Still, this query is a quick check, and if it gives us a number that will comfortably fit into the space we have available then we need look no further; we have enough space.
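If that quick check looks marginal, a rough cross-check of the actual data volume can be taken from the optimizer statistics. This is only a sketch: it assumes the statistics for MYSCHEMA are reasonably up to date, and it ignores out-of-line LOB data, which dba_segments does include.

SELECT round(sum(num_rows * avg_row_len)/1048576) MBytes
from dba_tables
where owner = 'MYSCHEMA';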

Step 2: Size the dump file using a dummy export

If the schema looks too big based on the segment size check, or we simply need more information, the next step is to execute a dummy export. Here we can make use of named pipes in a Unix/Linux environment to get a pretty accurate estimate of the likely dump file size.

Named pipes provide inter-process communication in a Unix/Linux environment, channelling the output of one command into another. Used with Oracle’s original export/import they give us some pretty cool options, as we will see.

Based on the example in Oracle Metalink Note 106465.1, performing a dummy export works like this…

  • Create a pipe called exp.pipe (syntax may differ depending on platform):

cd /tmp

mknod exp.pipe p

  • Start reading from the exp.pipe using dd, dump the output to the bit bucket (/dev/null), set blocksize to 1K and execute in background:

dd if=/tmp/exp.pipe of=/dev/null bs=1024 &

  • Start the export, writing to the pipe:

exp myschema/mypwd file=/tmp/exp.pipe …

  • At the end of the export, look for the number of records written by dd…

Export Terminated successfully without warnings.

5+0 records in

5+0 records out

This tells us dd wrote five 1K records, so our dump file would be 5 KB in total, but because we are writing to /dev/null we don’t consume any actual storage by running this test. Obviously, in a real migration the numbers will be very much larger.

For Datapump it’s a little more straightforward, since we can use the ESTIMATE_ONLY parameter to achieve the same thing.
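As a minimal sketch (the schema name and credentials are placeholders), an estimate-only Datapump run looks something like this; ESTIMATE defaults to BLOCKS and can be switched to STATISTICS if the optimizer statistics are current:

expdp myschema/mypwd schemas=MYSCHEMA estimate_only=yes estimate=blocks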

As well as confirming whether we have enough storage, this figure allows us to estimate how long it is likely to take to copy the files to the target environment.
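For example, purely as an illustration, a 10 GB dump file copied over a link that sustains around 10 MB/s will take in the order of 1,000 seconds, or roughly 17 minutes, before any import work can begin.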

It will also provide a crude metric for how long the export will take. Reading the data and throwing it away to /dev/null will be faster than writing a real dump file, so it will not give an accurate export time, but if even the dummy export takes longer than the time we have available, we already know we will have to rethink our plans for this migration: maybe undertake some baseline testing, tune the export or renegotiate the migration window.
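One simple way to capture that crude timing on Unix/Linux is to prefix the dummy export with the time command; the exp parameters here are just placeholders along the lines of those used elsewhere in this post:

time exp myschema/mypwd file=/tmp/exp.pipe parfile=exp.par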

Step 3: Zip the export using named pipes and gzip

If the dummy export suggests the dump is still too big for the available space we can then look at using named pipes again, this time to zip up the file using gzip.

We create a named pipe, redirect its output into gzip and write the output from that into our target file name. Then start the export, using the pipe as the filename…

mkfifo /tmp/exp.pipe

gzip -c < /tmp/exp.pipe > filename.gz &

exp un/pwd file=/tmp/exp.pipe parfile=exp.par

Typically gzip will reduce the dump file to a fifth of its original size. Once the export is complete we copy the compressed dump file to the target environment and import it through another named pipe using the following commands…

mkfifo /tmp/exp.pipe

gzip -dc filename.gz > /tmp/exp.pipe &

imp un/pwd file=/tmp/exp.pipe parfile=exp.par

This is an old trick with many variants, using compress instead of gzip for example, but it’s still worth its weight in gold when working with legacy systems.
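For reference, a compress-based variant of the same trick might look like the sketch below, assuming the legacy platform ships compress/uncompress and reusing the same placeholder names as above:

mkfifo /tmp/exp.pipe

compress -c < /tmp/exp.pipe > filename.Z &

exp un/pwd file=/tmp/exp.pipe parfile=exp.par

and on the import side:

mkfifo /tmp/exp.pipe

uncompress -c filename.Z > /tmp/exp.pipe &

imp un/pwd file=/tmp/exp.pipe parfile=exp.par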

In the next blog I will show an example of combining ssh with named pipes to allow exp to write its dump file on another server, and an example of a “rolling export”.
