AWS – EC2 with Oracle for an I/O-intensive application

This morning we received a request to create an EC2 instance in AWS for an Oracle database. The application that we intended to benchmark on this instance was very I/O intensive. In the past we had been using general-purpose disks rather than provisioned IOPS volumes. To ensure that this application got the best I/O performance, the application team wanted the disks provisioned at the highest possible IOPS rate. Amazon imposes a restriction that the IOPS:GB ratio of a provisioned IOPS volume cannot exceed 30:1. We were also striping the disks to improve I/O, with each ASM disk group consisting of 4 disks. We did the math and came up with the structure below:

Disk group   IOPS per disk   # of disks   Size per disk (GB)   IOPS:GB ratio
Arch         3000            4            100                  30
ASM1         6000            4            200                  30
ASM2         6000            4            200                  30

Total size = 2,000 GB
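
For reference, volumes like these can be created with boto3 before being presented to ASM. The sketch below provisions one 4-disk group of 200 GB io1 volumes at 6,000 IOPS each; the region, availability zone, instance ID, and device names are placeholders, not values from our environment.

```python
import boto3

# Placeholder values -- substitute your own region, AZ, and instance ID.
REGION = "us-east-1"
AVAILABILITY_ZONE = "us-east-1a"
INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name=REGION)

# One ASM disk group: 4 x 200 GB provisioned-IOPS (io1) volumes at 6000 IOPS each,
# i.e. the maximum 30:1 IOPS:GB ratio.
volume_ids = []
for _ in range(4):
    vol = ec2.create_volume(
        AvailabilityZone=AVAILABILITY_ZONE,
        Size=200,          # GB per disk
        VolumeType="io1",  # provisioned IOPS SSD
        Iops=6000,         # 6000 IOPS / 200 GB = 30:1
    )
    volume_ids.append(vol["VolumeId"])

# Wait for the volumes to become available, then attach them to the instance.
ec2.get_waiter("volume_available").wait(VolumeIds=volume_ids)

for i, vol_id in enumerate(volume_ids):
    ec2.attach_volume(
        VolumeId=vol_id,
        InstanceId=INSTANCE_ID,
        Device="/dev/sd" + chr(ord("f") + i),  # /dev/sdf, /dev/sdg, ...
    )
```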

Automate the move of a single-instance Oracle database to a new EC2 instance

This week we were challenged to come up with a solution that would allow a database administrator to move a single-instance Oracle database from one AWS EC2 machine to another. We were tasked with building in as much automation as possible.

A colleague had already written the required CloudFormation template to create the new EC2 instance, along with an Ansible playbook to install Oracle on it; all of this was executed via Jenkins. The remaining challenge was to restore the database on the new EC2 instance. To automate the backup and restore part of the process, I created a Python script that accepted the following parameters as input:

  • Name of the database
  • Location of the script directory that contained maintenance scripts for the database
  • An AWS S3 bucket to be used for transporting the data

The Python script could be downloaded from an S3 bucket that we had set up as a code repository. Once the script had been downloaded onto the source server, the database administrator could run it with the parameters above. The script would then create the required backup and restore scripts, copy the crontab entries, and stage the maintenance scripts for transfer to the new server.
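
The outline below is a simplified sketch of that script. The argument names, staging paths, and generated file contents are illustrative only, not the actual ones from our repository.

```python
#!/usr/bin/env python
"""Stage backup/restore artifacts for moving an Oracle database to a new EC2 host.

Simplified sketch -- argument names, paths, and templates are illustrative only.
"""
import argparse
import subprocess
from pathlib import Path


def parse_args():
    parser = argparse.ArgumentParser(description="Prepare database move artifacts")
    parser.add_argument("--db-name", required=True, help="Name of the database (ORACLE_SID)")
    parser.add_argument("--script-dir", required=True, help="Directory holding the maintenance scripts")
    parser.add_argument("--s3-bucket", required=True, help="S3 bucket used to transport the data")
    return parser.parse_args()


def main():
    args = parse_args()
    stage = Path("/tmp") / f"{args.db_name}_move"
    stage.mkdir(parents=True, exist_ok=True)

    # Save the current crontab entries so they can be restored on the target server.
    cron = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
    (stage / "crontab.txt").write_text(cron.stdout)

    # Generate the backup and restore scripts from templates (details omitted here).
    (stage / "backup.sh").write_text(f"# RMAN backup of {args.db_name} to s3://{args.s3_bucket}\n")
    (stage / "restore.sh").write_text(f"# RMAN restore of {args.db_name} from s3://{args.s3_bucket}\n")

    # Stage the maintenance scripts alongside the generated scripts.
    subprocess.run(["cp", "-r", args.script_dir, str(stage / "maintenance_scripts")], check=True)


if __name__ == "__main__":
    main()
```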

The backup script backs up the database via RMAN (the datafiles, archive logs, spfile, and control file) and then uploads everything to the S3 bucket.
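
The RMAN and upload portion of the backup script looks roughly like the following; the backup destination, bucket name, and key prefix are placeholders.

```python
import subprocess
from pathlib import Path

import boto3

BACKUP_DIR = Path("/u01/backup/MYDB")   # placeholder backup destination
S3_BUCKET = "my-transport-bucket"       # placeholder bucket name

# Standard RMAN commands covering the datafiles, archive logs, spfile, and control file.
RMAN_COMMANDS = f"""
RUN {{
  BACKUP DATABASE FORMAT '{BACKUP_DIR}/db_%U' PLUS ARCHIVELOG FORMAT '{BACKUP_DIR}/arch_%U';
  BACKUP SPFILE FORMAT '{BACKUP_DIR}/spfile_%U';
  BACKUP CURRENT CONTROLFILE FORMAT '{BACKUP_DIR}/ctl_%U';
}}
"""

# Run the backup by piping the commands to RMAN.
subprocess.run(["rman", "target", "/"], input=RMAN_COMMANDS, text=True, check=True)

# Upload every backup piece to the transport bucket.
s3 = boto3.client("s3")
for piece in BACKUP_DIR.iterdir():
    s3.upload_file(str(piece), S3_BUCKET, f"backups/{piece.name}")
```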

The next step would be to run the CloudFormation template (to create the new EC2 instance) and the Ansible playbook (to install Oracle, ASM, etc.). We added a parameter to the Ansible playbook to accept the name of the S3 bucket that holds the backups. At the end of the creation and installation, the playbook would (a rough sketch of the restore script follows this list):

  • Copy the script generated on the source server from the S3 bucket
  • Run the restore script which would
    • Download all the files from the S3 bucket
    • Create the adump directory
    • Create an spfile from the backup
    • Create a control file
    • Restore and recover the database
    • Register the database with the listener
    • Add an entry in /etc/oratab
    • Restore cron entries to the crontab
    • Copy the maintenance scripts from the source server into the same directory path on the target server
    • Open the database in read/write mode
    • Log all of the above to a file for review
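
A condensed sketch of that restore flow is shown below. The SID, paths, ORACLE_HOME, backup piece names, and the exact RMAN commands are illustrative placeholders; the real script also handles error checking and recovery options.

```python
import logging
import subprocess
from pathlib import Path

import boto3

# Placeholder values -- the real script derives these from its input parameters.
DB_NAME = "MYDB"
S3_BUCKET = "my-transport-bucket"
ORACLE_HOME = "/u01/app/oracle/product/19.0.0/dbhome_1"
RESTORE_DIR = Path("/u01/restore") / DB_NAME

# Log every step to a file for later review.
logging.basicConfig(filename=f"/tmp/restore_{DB_NAME}.log", level=logging.INFO)


def run(cmd, stdin=None):
    """Run a command, log it, and raise if it fails."""
    logging.info("Running: %s", cmd)
    return subprocess.run(cmd, input=stdin, text=True, check=True)


# Download all of the backup pieces from the S3 bucket.
s3 = boto3.client("s3")
RESTORE_DIR.mkdir(parents=True, exist_ok=True)
for obj in s3.list_objects_v2(Bucket=S3_BUCKET, Prefix="backups/").get("Contents", []):
    s3.download_file(S3_BUCKET, obj["Key"], str(RESTORE_DIR / Path(obj["Key"]).name))

# Create the adump directory the instance expects.
Path(f"/u01/app/oracle/admin/{DB_NAME}/adump").mkdir(parents=True, exist_ok=True)

# Restore the spfile and control file, then restore, recover, and open the database.
rman_cmds = f"""
STARTUP NOMOUNT;
RESTORE SPFILE FROM '{RESTORE_DIR}/spfile_piece';
STARTUP FORCE NOMOUNT;
RESTORE CONTROLFILE FROM '{RESTORE_DIR}/ctl_piece';
ALTER DATABASE MOUNT;
CATALOG START WITH '{RESTORE_DIR}' NOPROMPT;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
"""
run(["rman", "target", "/"], stdin=rman_cmds)

# Register the database with the listener.
run(["sqlplus", "-S", "/ as sysdba"], stdin="ALTER SYSTEM REGISTER;\nEXIT;\n")

# Add an entry to /etc/oratab.
with open("/etc/oratab", "a") as oratab:
    oratab.write(f"{DB_NAME}:{ORACLE_HOME}:Y\n")

# Restore the crontab entries and copy the maintenance scripts
# (the real script uses the same directory path as on the source server).
run(["crontab", str(RESTORE_DIR / "crontab.txt")])
run(["cp", "-r", str(RESTORE_DIR / "maintenance_scripts"), "/home/oracle/scripts"])

logging.info("Restore of %s complete", DB_NAME)
```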

All of this worked as expected. The one weak link was the amount of time it took to transfer the backups to and from the S3 bucket. An alternative would be to FTP the data directly between the servers, but I wanted to try using an S3 bucket.