Adding an existing table to replication without taking an outage

The application I support replicates a subset of the tables in the production environment via GoldenGate to a reporting database. Below are the steps to add an existing production table (with data) to the replication process without taking an outage. (I refer to the table as the “NEW” table, but it is an existing table in production that is new to the replication process.)

Perform these steps on the source of the replication

01. ADD TRANDATA to enable Oracle GoldenGate to acquire the transaction information that it needs from the transaction records. You can do this either in GoldenGate or in the database.

In GoldenGate

a. Start the GoldenGate command interpreter

ggsci

b. Log in to the database

dblogin userid ggs_owner@<database_name>, password <password_string>, encryptkey <key_name>

c. Turn on trandata

ADD TRANDATA PROD_SCHEMA.CUSTOMER
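
You can confirm from the same GGSCI session that supplemental logging is now enabled for the table:

INFO TRANDATA PROD_SCHEMA.CUSTOMER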

In the database

a. Either turn it on at the database level

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

b. Or at the table level

ALTER TABLE PROD_SCHEMA.CUSTOMER ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
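
Whichever route you take, you can verify from SQL*Plus that supplemental logging is in place using the standard dictionary views:

SELECT supplemental_log_data_min FROM v$database;

SELECT log_group_name, log_group_type
FROM   dba_log_groups
WHERE  owner = 'PROD_SCHEMA'
AND    table_name = 'CUSTOMER';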

02. Add the new table to the extract process.

a. Edit the extract parm

cd /ggs/dirprm
vi exa01.prm

Add the line:

TABLE PROD_SCHEMA.CUSTOMER; 
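
For context, a minimal extract parameter file might look something like the sketch below; the trail name and the pre-existing ORDERS table are assumptions about the environment, not part of this procedure:

EXTRACT EXA01
USERID ggs_owner@<database_name>, PASSWORD <password_string>, ENCRYPTKEY <key_name>
EXTTRAIL ./dirdat/ea
TABLE PROD_SCHEMA.ORDERS;
TABLE PROD_SCHEMA.CUSTOMER;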

b. Edit the data pump parm

cd /ggs/dirprm
vi dpump01.prm

Add the line:

TABLE PROD_SCHEMA.CUSTOMER; 
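
Similarly, a data pump parameter file is typically little more than a routing definition plus the TABLE list; a sketch, where the target host, manager port, and remote trail name are assumptions:

EXTRACT DPUMP01
RMTHOST <target_host>, MGRPORT 7809
RMTTRAIL ./dirdat/ra
TABLE PROD_SCHEMA.ORDERS;
TABLE PROD_SCHEMA.CUSTOMER;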

03. Recycle the extract and pump

cd /ggs
ggsci
stop EXA01
stop DPUMP01
info all

Make sure that the extract and pump are shut down

start EXA01
start DPUMP01
info all

Make sure that the extract and pump are now running.
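
Once both processes are back up, you can confirm the extract is actually capturing the new table by generating some DML against it and checking the process statistics (STATS is a standard GGSCI command):

stats extract EXA01, table PROD_SCHEMA.CUSTOMER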

04. Now we will export the data consistent to an SCN. I prefer to create the table structures on the target by hand rather than through Data Pump, since we are not replicating all of the tables and I want the structures in place without referential integrity constraints and the like. Additionally, my target uses different tablespace names, which are easier to resolve in hand-written SQL. If I have to do a large number of tables, I may use Data Pump with the CONTENT=METADATA_ONLY option (a sketch follows).
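
If you do take the METADATA_ONLY route, the export parfile would look much like the data parfile shown later in this step with CONTENT swapped; a sketch, assuming the same DEAN_DATA_PUMP directory object, with placeholder job and file names:

job_name=dean_expdp_meta_gg
CONTENT=METADATA_ONLY
DIRECTORY=DEAN_DATA_PUMP
CLUSTER=N
exclude=statistics
tables=PROD_SCHEMA.CUSTOMER
DUMPFILE=dean_expdp_meta_gg.dmp
LOGFILE=dean_expdp_meta_gg.log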

a. Get the current SCN of the database

sqlplus / as sysdba
 
select to_char(dbms_flashback.get_system_change_number) current_scn from dual;
 
CURRENT_SCN
----------------------------------------
13438379883675
 
exit;
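
As an alternative, the current SCN is also exposed in V$DATABASE, which avoids needing EXECUTE on DBMS_FLASHBACK:

select current_scn from v$database;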

b. Create a parm file for the export as shown:

vi dean_expdp_data_gg.par
 
job_name=dean_expdp_data_gg
CONTENT=DATA_ONLY
DIRECTORY=DEAN_DATA_PUMP
CLUSTER=N
exclude=statistics
exclude=tablespace
flashback_scn=13438379883675
tables=PROD_SCHEMA.CUSTOMER
DUMPFILE=dean_expdp_data_gg.dmp
LOGFILE=dean_expdp_data_gg.log

c. Run the export with the following command

rm nohup.out
nohup expdp \'/ as sysdba\' parfile=dean_expdp_data_gg.par &
tail -f nohup.out
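
Before transferring anything, confirm the export completed cleanly. A quick check, assuming the log lands wherever the DEAN_DATA_PUMP directory object points:

grep -i "ORA-" <data_pump_dir_path>/dean_expdp_data_gg.log
tail -5 <data_pump_dir_path>/dean_expdp_data_gg.log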

05. After the export is complete, FTP the dump file to the target server of the replication process.
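
Any binary-safe transfer works; an scp example (in place of FTP) with placeholder paths:

scp <data_pump_dir_path>/dean_expdp_data_gg.dmp oracle@<target_host>:<target_data_pump_dir_path>/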

Perform these steps on the target of the replication

06. Run an import as shown below

a. Create a parameter file for the import as shown:

vi dean_impdp_data_gg.par
 
job_name=dean_impdp_data_gg
DIRECTORY=DEAN_DATA_PUMP
CLUSTER=N
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=TRUNCATE
DUMPFILE=dean_expdp_data_gg.dmp
LOGFILE=dean_impdp_data_gg.log

b. Run the import with the following command

rm nohup.out
nohup impdp \'/ as sysdba\' parfile=dean_impdp_data_gg.par &
tail -f nohup.out

07. While the import is running (or after), edit the replicat’s parameter file as follows

cd /ggs/dirprm
 
vi rep01.prm
MAP PROD_SCHEMA.CUSTOMER, TARGET PROD_SCHEMA.CUSTOMER, FILTER (@GETENV("TRANSACTION", "CSN") > 13438379883675);
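
The FILTER clause discards any change records whose commit sequence number (CSN, which maps to the SCN on Oracle) is at or below the export SCN, since those rows are already in the dump file; this avoids replaying transactions the import has already delivered. Once the replicat has processed well past that SCN, the filter is no longer needed and the MAP can be simplified on a later recycle:

MAP PROD_SCHEMA.CUSTOMER, TARGET PROD_SCHEMA.CUSTOMER;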

Stop the replicat

stop REP01
info all

Make sure the replicat has stopped and then start it back up to get the new parameters

start REP01
info all

Make sure the replicat has started and monitor the ggserr.log to make sure there are no errors.
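
A couple of ways to keep an eye on it, assuming the /ggs install location used earlier:

tail -f /ggs/ggserr.log
 
ggsci
info REP01, detail
stats replicat REP01, table PROD_SCHEMA.CUSTOMER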

Author: Dean Capps

Database consultant at Amazon Web Services.