Dump & Load Scripts (^d) - FREE

Have ProTop generate example dump and load scripts for your database using the FREE version of ProTop.

NOTE:  These are sample scripts.  They will not dump and load your entire database for you; they simply illustrate how to go about a dump and load of the application data tables in your database. They do not include other schema components such as audit policies, SQL views, etc., as these vary according to the OE components you are using, and they do not handle special large object (CLOB/BLOB) data types.

You should be versed in operating system scripting (.bat or .sh) to understand and use the scripts produced.  

These are UNSUPPORTED, use them AT YOUR OWN RISK!

Dump & Load scripts (^d) - The Free Version


With the free version of ProTop, only a simple, single-threaded D&L system is created.  The commercial version of ProTop adds the ability to optimize the D&L with various options.  For more information on what the PAID version of ProTop can do for your Dump & Load, see this Kbase article:  Dump & Load Scripts (^d) - PAID

When the ^d key is executed from within ProTop RT (free), it presents the "basic" dialog:

[Screenshot: the ProTop RT (free) Dump & Load dialog]

And provides these features:

  • The .st file has simple data, index, and LOB areas
  • The dump scripts provide for a single-threaded dump

Fields in the form above:

ProTop RT Label         Description
Block Size              Target database block size
Dump & Load Work Dir    Directory in which the newly created dump and load scripts are placed
Target DB Dir           Directory in which the scripts place the target database's data files
Target BI Dir           Directory in which the scripts place the target database's BI files
Target AI Dir           Directory in which the scripts place the target database's AI files

A Note on Database Block Size

Many valid recommendations regarding database block size exist, but we tend to favor 8192 (the largest possible value) in all cases. Other common answers are: 

a) 4096 on Windows and Linux, 8192 on UNIX 
b) match the file system block size to avoid torn pages.
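
Whichever value you choose, the block size is fixed when the new database structure is created. A minimal sketch, assuming an 8 KB block size and illustrative database and structure file names (not necessarily how the generated build scripts invoke it):

    prostrct create newdb newdb.st -blocksize 8192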

To create the scripts, edit the fields and then press F1 or ctrl-x to run the script generator. When it finishes, ProTop displays a confirmation message.

You can find the dump and load files and scripts in two directories inside the Dump & Load Work Dir you provided.  For example:

protop mb117sports 
^d
...
Dump & Load Work Dir:  /tmp
...
ctrl-x
q

cd /tmp

ls -l
drwxrwxrwx 2 mb users 4096 Nov 27 16:59 build
drwxrwxrwx 2 mb users 4096 Nov 27 16:59 mb117sports.dl


>>> The Dump & Load scripts are in mb117sports.dl: <<<


cd mb117sports.dl

ls -l
total 24
-rwxr-xr-x 1 mb users 2829 Nov 27 16:59 dlenv
-rwxr-xr-x 1 mb users 7302 Nov 27 16:59 load.sh
-rwxr-xr-x 1 mb users  211 Nov 27 16:59 mb117sports.clean.sh
-rwxr-xr-x 1 mb users  871 Nov 27 16:59 mb117sports.dump00.sh
-rwxr-xr-x 1 mb users  684 Nov 27 16:59 mb117sports.zdumpall.sh


>>> And the index rebuild scripts are in build: <<<


cd ../build

ls -l
total 52
-rwxr-xr-x 1 mb users  3146 Nov 27 16:59 build.sh
-rwxr-xr-x 1 mb users  1225 Nov 27 16:59 idxbuild.sh
-rwxr-xr-x 1 mb users 17058 Nov 27 16:59 mb117sports.df
-rwxr-xr-x 1 mb users 16303 Nov 27 16:59 mb117sports.df.noarea
-rwxr-xr-x 1 mb users   483 Nov 27 16:59 mb117sports.st
-rwxr-xr-x 1 mb users   861 Nov 27 16:59 mb117sports.tblmv.sh
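
The mb117sports.st above is the structure file for the new database. As a rough illustration of the simple layout created by the free version, a structure file with basic data, index, and LOB areas might look something like this sketch; the area names, area numbers, records-per-block and cluster-size values, and extent paths are illustrative assumptions, not necessarily what ProTop writes:

    b /tmp/bi
    d "Schema Area":6,32;1 /tmp/db
    d "Data":7,128;8 /tmp/db
    d "Index":8,32;8 /tmp/db
    d "LOB":9,64;8 /tmp/db
    a /tmp/ai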

Using the FREE ProTop Generated Dump and Load Scripts 

  1. test, test, and then test again!!! 

  2. for this test I am using /tmp for my dump & load work directory and my sports database

  3. when the database is down, run a table analysis to get a baseline record count for each table; you'll compare these counts with a table analysis run after the dump & load to verify that the new database contains all of the records that were dumped
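
     A hedged example of capturing that baseline, assuming the source database is named sports and sits in the current directory (the report path is illustrative):

        proutil sports -C tabanalys > /tmp/tabanalys.before.txt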

  4. probkup the database you want to D&L
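
     A hedged one-liner for this step; the backup file path is illustrative:

        probkup sports /backup/sports.before.bck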

  5. run 'protop sports' and execute the ^d command key to generate the scripts tailored to that database into /tmp/

  6. run /tmp/sports.dl/sports.zdumpall.sh (see the sketch below) - this will:

    - create /tmp/dump, /tmp/load, /tmp/log and /tmp/stage directories

    - start the dump (a binary dump) against the sports db and put the data in the /tmp/stage directory
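
     The generated script does all of this for you; as a rough, hedged idea of what a single-threaded binary dump boils down to, each table gets its own proutil dump (the table names below are examples only):

        # create the work directories used by the dump and load steps
        mkdir -p /tmp/dump /tmp/load /tmp/log /tmp/stage
        # one binary dump per table; each lands in /tmp/stage as <table>.bd
        proutil sports -C dump customer /tmp/stage
        proutil sports -C dump order /tmp/stage
        # ...and so on for every application table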

  7. while the dump is running, cd to /tmp/build and run build.sh (sketched below) to:

    1. (delete and re)create the new database structure

    2. enable licensed features

    3. start the db and writers

    4. load schema

    5. move user tables & indexes to their proper storage areas

    6. back up the new empty database
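
     build.sh automates all of the steps above; as a hedged sketch of the equivalent manual commands (the database name, paths, port, enabled feature, and area names are illustrative, and the generated script may load the schema and move tables differently):

        prostrct create newdb newdb.st -blocksize 8192   # create the new structure
        procopy $DLC/empty8 newdb                        # seed it with an empty 8K metaschema
        proutil newdb -C enablelargefiles                # example: enable a licensed feature
        proserve newdb -S 20000                          # start the broker
        probiw newdb                                     # before-image writer
        proapw newdb                                     # asynchronous page writer
        # load the schema (.df), e.g. via the Data Dictionary, then move tables and
        # indexes into their target areas, e.g.:
        proutil newdb -C tablemove customer "Data" "Index"
        probkup online newdb /backup/newdb.empty.bck     # back up the new empty database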

  8. verify that the dump is complete - all of the expected tables should have a .bd file in the /tmp/stage directory
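
     A quick, hedged way to spot-check this from the shell:

        ls /tmp/stage/*.bd | wc -l    # should match the number of application tables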

  9. (this is repeatable) cd /tmp/sports.dl and run load.sh (sketched below), which will prompt you for responses to various options but in general will:

    1. shut down the new database (because this is repeatable)

    2. clean up from previous runs according to your responses

    3. restore the copy of the new empty db

    4. load the data

    5. build indexes

    6. run a table analysis and compare with the first one to verify all records are accounted for
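
     load.sh wraps all of this; as a hedged sketch of the equivalent manual commands (names and paths are illustrative, and index-rebuild tuning parameters are omitted for brevity):

        proshut newdb -by                            # make sure the target database is down
        prorest newdb /backup/newdb.empty.bck        # restore the empty, schema-loaded copy
        for f in /tmp/stage/*.bd; do                 # binary load every dump file
            proutil newdb -C load "$f"
        done
        proutil newdb -C idxbuild all                # rebuild all indexes
        proutil newdb -C tabanalys > /tmp/tabanalys.after.txt
        # spot-check record counts per table against the baseline report
        grep -i customer /tmp/tabanalys.before.txt /tmp/tabanalys.after.txt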

  10. review /tmp/summary.rpt for issues

  11. test, test, and test again to familiarize yourself with all the moving parts