Oracle Export/Import: Tips and Tricks (2)

Some thoughts about using named pipes with ssh and rolling export/import to work around space limitations when executing export/import in big migration projects.

In my previous blog I mentioned some tips for using Oracle’s original exp/imp utility in combination with named pipes to deal with the scenario where we have limited storage available on a server. I showed how we can use original export/import with named pipes to execute dummy exports, giving us accurate metrics on the likely size of the export dump file, and how pushing the export through a named pipe to gzip the output inline is a well-known trick that typically yields a dump file about 20% of the size it would otherwise be.

But what if even that is too big for the available storage? Here are some more ideas…

ssh & named pipes – exporting to a remote server

For Original Export/Import we can combine a named pipe on our database server with an ssh process on a separate server that will collect the output from exp and write it out to that server. This allows us to configure a so-called “Bridge Server” with its own pool of storage that can be reused for multiple migrations through the life of the project. It means we always have enough storage for our dumps whilst avoiding the pain of having to continuously attach and remove storage volumes from our legacy servers, which can be time-consuming, may require outages and may be impossible for some environments.

So, assume you have a DB server DBSRV1, from which we want to execute an export, and a Bridge Server BRSRV2, configured with lots of lovely storage. The way it might work is this…

Step 1: Create a named pipe on the DB Server…

@DBSRV1$ mkfifo /tmp/exp.pipe

Step 2: Start an ssh process on the Bridge Server to cat the output from the pipe and write it out to a dump file…

@BRSRV2$ nohup ssh -q DBSRV1 "cat /tmp/exp.pipe" > /u01/app/oracle/MigDumps/myexp.dmp &

…so now we have a pipe set up on the DB server and a process on the Bridge Server waiting to read whatever comes out of it and write it to a dump file.

Step 3: Start the export on the DB server…

@DBSRV1$ exp un/pw file=/tmp/exp.pipe …

And away it goes. We will be writing across a public LAN, so it may be slow, depending on things like network load and whether there are firewalls in the way, but in most cases it will be fast enough in my experience.
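If the network is the bottleneck, the inline gzip trick from the previous post can be moved to the DB server end of the ssh session, so that only compressed data crosses the wire. This is just a sketch of the idea (the output path is illustrative, and it assumes gzip is available on DBSRV1):

@BRSRV2$ nohup ssh -q DBSRV1 "gzip -c < /tmp/exp.pipe" > /u01/app/oracle/MigDumps/myexp.dmp.gz &

The export in Step 3 is unchanged; exp still writes to /tmp/exp.pipe, and gzip compresses the stream before ssh carries it over to the Bridge Server.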

Rolling Migrations with Filesize

The next tip is to consider the little-remembered FILESIZE parameter, which is available for both Data Pump and original export/import, and which allows us to execute what I call a rolling export/import.

FILESIZE specifies the maximum size for each dump file that export will create. Export will write only the number of bytes you specify to each dump file. When the amount of data Export must write exceeds the maximum value you specified for FILESIZE, it will get the name of the next export file from the FILE parameter and continue exporting into that file.

So how does this help us? Well, as each file closes we can copy it to the target server, validate it and then delete it from the source server. It means we don’t need storage to accommodate the entire dump for an export, only for the files currently being exported to or those being copied to the target environment.

An example using Original Export/Import…

@DBSRV1$ exp un/pw filesize=100MB file=exp1.dmp,exp2.dmp,exp3.dmp,…,exp10.dmp …

In this case export will write to file exp1.dmp until it reaches 100MB, then it will start writing to exp2.dmp. At that point we can copy exp1.dmp to our target server and, if required, start the import straight away. When exp2.dmp is completed we can copy that, validate it, delete it from the source server and continue the import, and so on. We would need to execute the import in interactive mode so that once it has imported from exp1.dmp it will prompt and wait for the next file.

Thus we are starting the import whilst the export is still running, saving time, and we only need enough storage on our source server to accommodate a few files at a time, not the whole dump file set.
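As a rough illustration of the shuffling involved for each completed file, the handling on the source server might look something like this (the target server name TGTSRV, the paths, and the choice of scp and md5sum for the copy and validation are illustrative assumptions only):

@DBSRV1$ scp exp1.dmp TGTSRV:/u01/app/oracle/MigDumps/    # copy the completed file
@DBSRV1$ md5sum exp1.dmp                                  # compare against md5sum of the copy on TGTSRV
@DBSRV1$ rm exp1.dmp                                      # free the space while exp fills exp2.dmp

…and so on for each file as export closes it.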

Clearly, this is not an elegant solution: it’s difficult to be sure how many files will exist on the source server at any one time, and the technique is not easily automated, so it will probably need a dedicated DBA to manually shepherd it from end to end. I would only recommend using it in exceptional circumstances.

Pipes and filesize combined

One variation of this rolling export/import technique is that, with Original Export/Import, we can combine it with named pipes and gzip. This generates multiple files in gzipped format, so they are much faster to copy to the target environment.

So we might do this…

Step 1: Execute a dummy export to accurately size the export dump…

mknod /tmp/exp.pipe p

dd if=/tmp/exp.pipe of=/dev/null bs=1048576 &

exp myschema/mypwd  file=/tmp/exp.pipe

Step 2: Decide how many files to use for the export.

We’d need to rehearse the migration once or twice to get the optimal filesize. For example, the dummy export may tell us the dump will be 100GB; if we select a file size of 10GB we will generate about 10 files, and if they are compressed inline using named pipes and gzip the resulting gzipped files will be about 2GB each. A typical transfer rate using a file transfer accelerator on a 100 Mbps line is maybe 30 GB/hr, so a 2GB file will take 4 or 5 minutes to copy.
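For what it’s worth, here are the same sums in one place (all figures are the working assumptions above, not measurements):

# 100GB dump / 10GB filesize     -> ~10 dump files
# 10GB x ~20% gzip ratio         -> ~2GB per compressed file
# 2GB at ~30GB per hour          -> ~4 minutes per file transfer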

Step 3: Define named pipes and start a background inline gzip process for each of the files we expect to export to

mkfifo /tmp/exp1.pipe

mkfifo /tmp/exp2.pipe

mkfifo /tmp/exp10.pipe

gzip -c < /tmp/exp1.pipe > filename1.gz &

gzip -c < /tmp/exp2.pipe > filename2.gz &

gzip -c < /tmp/exp10.pipe > filename10.gz &

Step 4: Start the export

exp un/pwd file=/tmp/exp1.pipe,/tmp/exp2.pipe,…,/tmp/exp10.pipe parfile=exp.par
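The contents of exp.par are not shown here; as a rough idea, it would carry the sizing and schema settings from the earlier steps, something along these lines (the 10GB figure and the schema name are just the assumptions used above):

# exp.par – illustrative only
owner=myschema
filesize=10GB
log=exp.log

The log parameter ties in with the next step, since that is the logfile we will be watching.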

Step 5: Monitor the export logfile to see when each file is completed

tail -f exp.log

Step 6: When the first dump is written, copy it to the target environment and begin the import.
