This week we were challenged to design a solution that would allow a database administrator to move a single-instance Oracle database from one AWS EC2 machine to another, with as much automation as possible.
A colleague had already written the required CloudFormation template to create the new EC2 instance and to install Oracle on it via an Ansible playbook, all executed via Jenkins. The remaining challenge was restoring the database on the new EC2 instance. To automate the backup and restore part of the process, I created a Python script that accepted the following parameters as input:
- The name of the database
- The location of the script directory containing the database's maintenance scripts
- The AWS S3 bucket to use for transporting the data
The Python script could be downloaded from an S3 bucket that we had set up as a code repository. Once the script had been downloaded onto the source server, the database administrator could run it with the above parameters. The script would then create the required backup and restore scripts, copy the crontab entries, and stage the maintenance scripts for transfer to the new server.
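As a minimal sketch, the parameter handling in a script like this might use `argparse`. The option names below are illustrative assumptions, not the actual ones from the project:

```python
import argparse


def parse_args(argv=None):
    """Parse the three inputs the migration script needs.

    Option names are hypothetical; the real script may differ.
    """
    parser = argparse.ArgumentParser(
        description="Generate backup/restore scripts for an Oracle EC2 migration")
    parser.add_argument("--db-name", required=True,
                        help="name of the database (ORACLE_SID)")
    parser.add_argument("--script-dir", required=True,
                        help="directory holding the database maintenance scripts")
    parser.add_argument("--s3-bucket", required=True,
                        help="S3 bucket used to transport backups and scripts")
    return parser.parse_args(argv)
```

Taking the inputs as named options rather than positional arguments keeps the invocation self-documenting for the DBA running it.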
The backup script uses RMAN to back up the database (datafiles, archive logs, spfile, and control file) and then uploads everything to the S3 bucket.
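A sketch of how the generated backup script might be structured, assuming the AWS CLI is configured on the instance; the RMAN channel format and S3 prefix are illustrative, not the project's exact commands:

```python
import subprocess


def build_rman_backup_commands(db_name: str, backup_dir: str) -> str:
    """Return RMAN command-file text for a full backup: datafiles,
    archive logs, spfile, and control file."""
    return f"""CONFIGURE CONTROLFILE AUTOBACKUP ON;
RUN {{
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '{backup_dir}/{db_name}_%U';
  BACKUP DATABASE PLUS ARCHIVELOG;
  BACKUP SPFILE;
  BACKUP CURRENT CONTROLFILE;
}}
"""


def upload_backup(backup_dir: str, bucket: str) -> None:
    # Push every backup piece to the transport bucket
    # (assumes the AWS CLI is installed and configured on the instance).
    subprocess.run(["aws", "s3", "sync", backup_dir,
                    f"s3://{bucket}/backup/"], check=True)
```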
The next step would be to run the CloudFormation template (to create a new EC2 instance) and the Ansible playbook (to install Oracle, ASM, etc.). We added a parameter to the Ansible playbook to accept the name of the S3 bucket that holds the backups. At the end of the creation and install, the playbook would:
- Copy the script generated on the source server from the S3 bucket
- Run the restore script, which would:
  - Download all the files from the S3 bucket
  - Create the adump directory
  - Create an spfile from the backup
  - Create a control file
  - Restore and recover the database
  - Register the database with the listener
  - Add an entry in /etc/oratab
  - Restore the cron entries to the crontab
  - Copy the maintenance scripts from the source server into the same directory on the target server
  - Open the database in read/write mode
  - Log all of the above to a file for review
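The core of those restore steps could be sketched along these lines; the RMAN sequence, paths, and helper names are illustrative assumptions, not the script the project actually generates:

```python
import subprocess


def download_backup(bucket: str, staging_dir: str) -> None:
    # Pull the backup pieces from the transport bucket
    # (assumes the AWS CLI is configured on the new instance).
    subprocess.run(["aws", "s3", "sync",
                    f"s3://{bucket}/backup/", staging_dir], check=True)


def build_rman_restore_commands(staging_dir: str) -> str:
    """Return RMAN command-file text that catalogs the downloaded
    backup pieces, restores, recovers, and opens the database."""
    return f"""STARTUP NOMOUNT;
RESTORE CONTROLFILE FROM AUTOBACKUP;
ALTER DATABASE MOUNT;
CATALOG START WITH '{staging_dir}' NOPROMPT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
"""


def add_oratab_entry(db_name: str, oracle_home: str) -> None:
    # Register the restored database in /etc/oratab so dbstart/dbshut
    # can see it (hypothetical helper; requires root or oracle ownership).
    with open("/etc/oratab", "a") as oratab:
        oratab.write(f"{db_name}:{oracle_home}:N\n")
```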
All of this worked as expected. The one weak link is the amount of time it takes to transfer the data to and from the S3 bucket. An alternative would be to FTP the data directly between the servers, but I wanted to try using an S3 bucket.