Experiences of a DBA
Over the last few months I have been pursuing my AWS certifications. As of last week I have completed my:
AWS Certified Solutions Architect
AWS Certified Developer
AWS Certified SysOps Administrator
While creating a new Oracle database with ASM on an AWS EC2 platform I encountered the below errors while attempting to mount one of the ASM disk groups:
ALTER DISKGROUP VOTE_DISK mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "VOTE_DISK" cannot be mounted
ORA-15040: diskgroup is incomplete
When I checked v$asm_diskgroup, I saw:

SQL> select NAME,TOTAL_MB,FREE_MB from v$asm_diskgroup;

NAME                             TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
ASMDATA1                           716796     716726
DBVOT                                   0          0
ASMDATA2                           716796     716726
ASMARCH                            716796     716726
The goal was to create the VOTE_DISK disk group on the DBVOT volume, which was in turn mapped to the /dev/bqdx disk. The AWS Console indicated that /dev/bqdx was attached to the EC2 instance, but it was not visible in the OS for some reason. I issued the command:
/usr/bin/echo -e "o\nn\np\n1\n\n\nt\n8e\nw" | /usr/sbin/fdisk /dev/bqdx
Which resulted in:
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): Building a new DOS disklabel with disk identifier 0x29ba6b8d.
Command (m for help): Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p): Partition number (1-4, default 1): First sector (2048-104857599, default 2048): Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599): Using default value 104857599
Partition 1 of type Linux and of size 50 GiB is set
Command (m for help): Selected partition 1
Hex code (type L to list all codes): Changed type of partition 'Linux' to 'Linux LVM'
Command (m for help): The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
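For readability, the keystrokes piped into fdisk break down as in the sketch below. The fdisk line itself is left commented out because it is destructive and should only ever be run against the intended disk:

```shell
# Each token fed to fdisk answers one interactive prompt:
#   o       - create a new empty DOS partition table
#   n       - new partition
#   p       - primary partition
#   1       - partition number 1
#   (blank) - accept the default first sector
#   (blank) - accept the default last sector (use the whole disk)
#   t       - change the partition type
#   8e      - type code for Linux LVM
#   w       - write the partition table and exit
KEYS='o\nn\np\n1\n\n\nt\n8e\nw'
# /usr/bin/echo -e "$KEYS" | /usr/sbin/fdisk /dev/bqdx   # destructive: run deliberately
printf '%b\n' "$KEYS" | wc -l   # sanity check: nine answers
```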
However, when I attempted to create the disk I got the error:
root@10-118-134-71:/root # /usr/sbin/oracleasm createdisk DBVOT /dev/bqdx1
Device "/dev/bqdx1" is already labeled for ASM disk "DBVOT"
I then attempted to drop the disk group
SQL> drop diskgroup DBVOT;
drop diskgroup DBVOT
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15001: diskgroup "DBVOT" does not exist or is not mounted
I then tried with the FORCE option and succeeded
SQL> drop diskgroup DBVOT force including contents;

Diskgroup dropped.
I then deleted the disk
root@10-118-134-71:/root # /usr/sbin/oracleasm deletedisk DBVOT
Clearing disk header: done
Dropping disk: done
and checked to make sure it was no longer visible
[oracle@10-118-134-71 ~]$ /usr/sbin/oracleasm listdisks
DB1
DG10
DG11
DG12
DG2
DG3
DG4
DG5
DG6
DG7
DG8
DG9
Then I recreated the disk
root@10-118-134-71:/root # /usr/sbin/oracleasm createdisk DBVOT /dev/bqdx1
Writing disk header: done
Instantiating disk: done
and listed the disks to confirm
root@10-118-134-71:/root # /usr/sbin/oracleasm listdisks
DG1
DG10
DG11
DG12
DG2
DG3
DG4
DG5
DG6
DG7
DG8
DG9
DBVOT
Then I created the disk group in ASM:
SQL> CREATE DISKGROUP DBVOT EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/DBVOT';

Diskgroup created.
After the EC2 is built, run the below commands as root. Depending on the network security, you may have to either set or unset proxy variables.
/usr/bin/yum install -y binutils.x86_64 compat-libcap1.x86_64 compat-libstdc++-33.i686 \
compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 glibc-devel.i686 glibc-devel.x86_64 glibc.i686 \
glibc.x86_64 libXtst.x86_64 libXtst.i686 ksh.x86_64 libaio.x86_64 libaio.i686 libaio-devel.i686 \
libaio-devel.x86_64 libgcc.i686 libgcc.x86_64 libstdc++.x86_64 libstdc++.i686 libstdc++-devel.i686 \
libstdc++-devel.x86_64 libXi.i686 libXi.x86_64 make.x86_64 sysstat.x86_64
The above command will install all of the prerequisites required for the Oracle Client 12c.
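As a quick sanity check after the yum run, you can loop over the package names and report anything that did not get installed. This is just a sketch with an abbreviated package list; extend it with the full list above if you use it:

```shell
# Report any prerequisite RPM that is still missing (abbreviated package list).
missing=0
for pkg in binutils gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libstdc++ make sysstat; do
  rpm -q "$pkg" >/dev/null 2>&1 || { echo "MISSING: $pkg"; missing=$((missing + 1)); }
done
echo "$missing package(s) missing"
```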
Create the install group
/usr/sbin/groupadd oinstall
Create the dba group
/usr/sbin/groupadd dba
Create the oracle user and add to the groups
/usr/sbin/useradd -g oinstall -G dba oracle
Set the oracle password
passwd oracle
Respond to the above prompt with the oracle password that you want to use.
Make a directory for the software
mkdir /apps/opt/oracle
Change ownership of the directory to the oracle user and the oinstall group
chown oracle:oinstall /apps/opt/oracle
Change to oracle user
su - oracle
Copy the software into the above directory via ftp, sftp, scp, etc. The directory will then contain the file
linuxamd64_12102_client.zip
Unzip the software
unzip linuxamd64_12102_client.zip
Change directory to the client directory that was unzipped.
cd /apps/opt/oracle/client
Run the installer in silent mode
./runInstaller -silent \
  -responseFile /apps/opt/oracle/client/response/client_install.rsp \
  oracle.install.client.installType=Runtime \
  UNIX_GROUP_NAME=dba \
  INVENTORY_LOCATION=/apps/opt/oracle/oraInventory \
  SELECTED_LANGUAGES=en \
  ORACLE_HOME=/apps/opt/oracle/base/rtclient64 \
  ORACLE_BASE=/apps/opt/oracle/base \
  waitForCompletion=true
At the end of the installation, you should see the message
The installation of Oracle Client 12c was successful. Please check '/apps/opt/oracle/oraInventory/logs/silentInstall2017-08-30_06-00-25PM.log' for more details
Exit from oracle back to root
exit
As a root user, execute the following script
/apps/opt/oracle/oraInventory/orainstRoot.sh
The above script will display messages similar to
Changing permissions of /apps/opt/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /apps/opt/oracle/oraInventory to dba.
The execution of the script is complete.
Set up environment variables for the oracle user. Go to the oracle user’s home
su - oracle
cd
Edit the oracle user's profile
vi .bash_profile
Set up the below variables to make it easier for users to access sqlplus, etc.
# Oracle client variables
export ORACLE_HOME=/apps/opt/oracle/base/rtclient64
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:$PATH
which sqlplus
Test sqlplus
sqlplus user_id

SQL*Plus: Release 12.1.0.2.0 Production on Wed Aug 30 18:23:27 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
Oracle Label Security, OLAP, Data Mining and Real Application Testing options
This week one of our applications is attempting to benchmark the performance of an Oracle database running on an Exadata machine versus running in the AWS cloud on an x1.16xlarge EC2 instance. Our initial thought was to define the storage as two ASM disk groups, each consisting of four 400 GB disks at 6,000 IOPS each. Assuming that a given data file was spread across all four disks in a disk group, we could see aggregate speeds approaching 20,000 IOPS per disk group.
During testing, Amazon recommended that the ASM data be placed on single disks running at 20,000 IOPS, the current maximum that can be provisioned. Additionally, AWS requires that the IOPS:GB ratio not exceed 50:1. To meet these requirements, we decided to create two mount points, each of 1,000 GB at 20,000 IOPS. An additional 1,000 GB at 20,000 IOPS was also added for the archive logs.
We created a JSON template to spin up the three disks and attach them to the existing EC2 instance. Before attaching, we made sure that the device names were not already in use by currently attached disks. After the disks were attached, we added them to ASM and waited for the rebalance to complete. Once the rebalance finished, we dropped the old disks that were provisioned at 6,000 IOPS. The application saw a 67% decrease in elapsed time on some of the queries.
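We drove this through a CloudFormation JSON template, but the equivalent steps with the aws CLI look roughly like the sketch below. The availability zone, volume ID, instance ID, and device name are placeholders, and the commands are printed rather than executed:

```shell
SIZE_GB=1000
IOPS=20000
MAX_RATIO=50
# AWS rejects volumes whose IOPS:GB ratio exceeds the limit, so check first.
if [ $(( IOPS / SIZE_GB )) -gt "$MAX_RATIO" ]; then
  echo "IOPS:GB ratio $(( IOPS / SIZE_GB )):1 exceeds ${MAX_RATIO}:1" >&2
  exit 1
fi
# Printed rather than run; drop the leading 'echo' to execute for real.
echo aws ec2 create-volume --volume-type io1 --size "$SIZE_GB" --iops "$IOPS" \
     --availability-zone us-east-1a
echo aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
     --instance-id i-0123456789abcdef0 --device /dev/sdf
```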
This morning we received a request to create an EC2 instance in AWS for an Oracle database. The application that was intending to benchmark on this EC2 instance tended to be very I/O intensive. In the past we had been using general-purpose disks rather than provisioned IOPS. To ensure that this application got the best I/O performance, they wanted the disks to be provisioned at the highest possible IOPS rate. Amazon has a restriction that the IOPS:GB ratio cannot exceed 30:1. We were striping the disks to improve I/O such that each ASM group consisted of 4 disks. We did some math and came up with the below structure:
Group    IOPS per disk    # of disks    Size per disk (GB)    Ratio
Arch             3,000             4                   100     30:1
ASM1             6,000             4                   200     30:1
ASM2             6,000             4                   200     30:1

Total size = 2,000 GB
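The arithmetic behind these numbers can be sketched as a quick shell check, reading the IOPS figure as per-disk, which is what makes the 30:1 ratio work out:

```shell
MAX_RATIO=30
check_group() {
  # args: group name, IOPS per disk, number of disks, size per disk (GB)
  name=$1; iops=$2; disks=$3; size_gb=$4
  ratio=$(( iops / size_gb ))
  echo "$name: ${ratio}:1 across $disks disks"
  [ "$ratio" -le "$MAX_RATIO" ] || echo "WARNING: $name exceeds ${MAX_RATIO}:1"
}
check_group Arch 3000 4 100
check_group ASM1 6000 4 200
check_group ASM2 6000 4 200
# Total usable size across all three groups
echo "Total size: $(( 4*100 + 4*200 + 4*200 )) GB"
```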
This week we were challenged to come up with a solution that would allow a database administrator to move a single instance of an Oracle database on an AWS EC2 machine to a different EC2 machine. We were tasked with coming up with as much automation as possible to accomplish this goal.
A colleague had already written the required CloudFormation template to create the new EC2 instance and to install Oracle on it via an Ansible playbook. All of this was executed via Jenkins. The challenge now was to be able to restore the database on the new EC2 instance. In order to automate the backup and restore part of the process, I created a python script that accepted the required database and backup parameters as input.
The python script could be downloaded from an S3 bucket that we had set up as a code repository. Once the python script had been downloaded onto the source server, the database administrator could run the script and provide the above parameters. Python would then create the required backup and restore scripts, copy the crontab entries and stage the maintenance scripts for transfer to the new server.
The backup script backs up the database via RMAN commands (including the database, archive logs, spfile, control file) and then uploads all of this to an S3 bucket.
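The generated backup script looks roughly like the sketch below. The staging directory, file names, and bucket name are hypothetical, and the rman and aws lines are commented out since they only make sense on the source host:

```shell
BACKUP_DIR=/apps/oracle/backup           # hypothetical staging directory
S3_BUCKET=s3://db-migration-backups      # hypothetical bucket name
# Write an RMAN command file covering the database, archive logs, spfile, and control file.
cat > /tmp/db_backup.rman <<EOF
RUN {
  BACKUP DATABASE FORMAT '$BACKUP_DIR/db_%U.bkp';
  BACKUP ARCHIVELOG ALL FORMAT '$BACKUP_DIR/arch_%U.bkp';
  BACKUP SPFILE FORMAT '$BACKUP_DIR/spfile_%U.bkp';
  BACKUP CURRENT CONTROLFILE FORMAT '$BACKUP_DIR/ctl_%U.bkp';
}
EOF
# rman target / cmdfile=/tmp/db_backup.rman          # run on the source host
# aws s3 cp "$BACKUP_DIR" "$S3_BUCKET/" --recursive  # upload the backup pieces
echo "Generated RMAN script with $(wc -l < /tmp/db_backup.rman) lines"
```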
The next step would be to run the CloudFormation template (to create a new EC2 instance) and the Ansible playbook (to install Oracle, ASM, etc.). We added a parameter to the Ansible playbook to accept the name of the S3 bucket that holds the backups. At the end of the creation and install, the playbook would download the backup from that S3 bucket and restore the database on the new EC2 instance.
All of this worked as expected. The one weak link is the amount of time that it would take to transfer the information to and from the S3 bucket. An alternative would be to FTP the data between the servers but I wanted to try using an S3 bucket.
The below MS Word Document contains the steps that I used to transfer data from an on-prem server/database into an Oracle RDS instance in the AWS Cloud.