Resolving security alerts by removing an old kernel

While working on an Oracle instance running on an Oracle Linux EC2 server, I encountered a recurring security alert indicating that the server was using an outdated kernel version. Even though the server was up to date with patches, the alert persisted. This blog post outlines the steps I took to address the problem efficiently, allowing me to continue my work without having to create a new server.

Identifying the Issue:

To determine the kernel versions installed on my system, I used the command:

rpm -qa | grep kernel

Among the listed versions, I noticed an older kernel, “kernel-uek-5.4.17-2136.319.1.3.el7uek.x86_64,” that was causing the security alert.

Solution Steps:

Switching to Root User:
To make the necessary changes, I switched to the root user using the command:

sudo su root

Removing the Old Kernel:
To remove the outdated kernel version, I utilized the following command:

yum remove kernel-uek-5.4.17-2136.319.1.3.el7uek.x86_64

This command triggered the removal process and displayed the following output:

Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package kernel-uek.x86_64 0:5.4.17-2136.319.1.3.el7uek will be erased
--> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================================================================
Package                                     Arch                                    Version                                                       Repository                                   Size
=====================================================================================================================================================================================================
Removing:
kernel-uek                                  x86_64                                  5.4.17-2136.319.1.3.el7uek                                    @ol7_UEKR6                                  115 M

Transaction Summary
=====================================================================================================================================================================================================
Remove  1 Package

Installed size: 115 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Erasing    : kernel-uek-5.4.17-2136.319.1.3.el7uek.x86_64                                                                                                                                      1/1
  Verifying  : kernel-uek-5.4.17-2136.319.1.3.el7uek.x86_64                                                                                                                                      1/1

Removed:
  kernel-uek.x86_64 0:5.4.17-2136.319.1.3.el7uek

Complete!

Selectively recovering data in an Aurora PostgreSQL database

Using automated backups and snapshots, we can restore an AWS RDS database
– to the most current point in time
– to a prior point in time
– and either use the restored database going forward
– or restore particular tables or rows back into the original database

In summary, we have multiple options depending on the nature and extent of the data loss.

In this example, I will demonstrate a method to restore certain rows in one table in the original database.

Create a test table

CREATE TABLE dc_schema.test_table (
                object_id numeric NOT NULL,
                "owner" varchar(128) NULL,
                object_name varchar(128) NULL,
                CONSTRAINT test_table_pk PRIMARY KEY (object_id)
);

I then inserted some data. For demonstration purposes, let us assume that the rows with object_id between 30,000 and 40,000 are critical to the application. Further, assume that someone accidentally deleted this data with the following SQL:

delete
from   dc_schema.test_table
where  object_id between 30000 and 40000
;

We now need this data restored as it is critical to the operation of the application.

On the AWS RDS Console page, select the DB cluster, click on “Actions” and then on “Restore to point in time”. On the next page, set the custom date and time to a point before the data was deleted. Most of the options will be pre-populated, except for a few such as the DB instance identifier.

Click on the orange button at the bottom right of the page to begin the recovery. This will take some time to complete.
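
If you prefer to script this step, the same restore can be performed through the API. Below is a minimal Python/boto3 sketch (not part of the original walkthrough); the cluster identifiers, subnet group, instance class, and timestamp are placeholders, and an Aurora cluster restored this way still needs a DB instance added before it can be queried.

import datetime
import boto3

rds = boto3.client("rds")

# Restore the cluster to a point just before the accidental delete
# (identifiers and timestamp are placeholders).
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="red-cluster",
    DBClusterIdentifier="red-recovery-cluster",
    RestoreToTime=datetime.datetime(2023, 1, 1, 12, 0, tzinfo=datetime.timezone.utc),
    DBSubnetGroupName="my-subnet-group",
)

# The restored Aurora cluster has no instances yet; add one so it can be queried.
rds.create_db_instance(
    DBInstanceIdentifier="red-recovery-instance-1",
    DBClusterIdentifier="red-recovery-cluster",
    DBInstanceClass="db.r5.large",
    Engine="aurora-postgresql",
)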

Some points to keep in mind with respect to the restored database:

– You can use the restored DB instance as soon as its status is available. The DB instance continues to load data in the background. This is known as lazy loading.

– If you access data that hasn’t been loaded yet, the DB instance immediately downloads the requested data from Amazon S3, and then continues loading the rest of the data in the background.

– To help mitigate the effects of lazy loading on tables to which you require quick access, you can perform operations that involve full-table scans, such as SELECT *. This allows Amazon RDS to download all of the backed-up table data from S3.
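
As an illustration of that last point, a warm-up scan can be run against the restored instance as soon as it is available. Here is a minimal sketch using psycopg2 (psycopg2 and the connection details are my assumptions, not part of the original post):

import psycopg2

# Connection details for the restored instance are placeholders.
conn = psycopg2.connect(
    host="REPLACE_WITH_RESTORED_CLUSTER_ENDPOINT",
    port=5432,
    dbname="clientdb",
    user="dbadmin",
    password="password_here",
)

with conn, conn.cursor() as cur:
    # A full-table scan forces RDS to pull the table's backed-up data from S3.
    cur.execute("SELECT * FROM dc_schema.test_table")
    while cur.fetchmany(10000):
        pass

conn.close()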

The restore process creates a new instance. While the restore is in progress, you can continue to use the original database. To simulate this, I ran a Python program to add another 100,000 rows to the table from which the data was accidentally deleted. Avoiding new keys in the range of the deleted data is something the application team has to manage; in this particular example, I am creating keys higher than the deleted range to ensure that there is no primary key collision.
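
The insert program itself is not shown here; the sketch below gives an idea of what it could look like, again using psycopg2 with placeholder connection details and generating keys well above the deleted 30,000 to 40,000 range.

import psycopg2

# Placeholder connection details for the ORIGINAL (production) database.
conn = psycopg2.connect(
    host="REPLACE_WITH_ORIGINAL_CLUSTER_ENDPOINT",
    port=5432,
    dbname="clientdb",
    user="dbadmin",
    password="password_here",
)

with conn, conn.cursor() as cur:
    # Start the keys at 1,000,000 to avoid any collision with the deleted range.
    rows = [(1000000 + i, "DC_SCHEMA", "TEST_OBJECT_" + str(i)) for i in range(100000)]
    cur.executemany(
        "INSERT INTO dc_schema.test_table (object_id, owner, object_name) "
        "VALUES (%s, %s, %s)",
        rows,
    )

conn.close()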

After the restore was completed, I connected to the restored database to confirm that the deleted data was present. I then switched back to the original database, from which the critical data was deleted, and created a link to the recovered database with the following commands:

Create an extension:

CREATE EXTENSION dblink;

Create a dblink:

SELECT dblink_connect
(
  'link_to_recovery',
  'dbname=clientdb 
   port=5432  
   host= red-recovery-cluster…….rds.amazonaws.com 
   user=dbadmin
   password=password_here'
);

Execute the following SQL to select the data from the recovered database and insert it into the original database:

insert
into    dc_schema.test_table
SELECT  * 
from    dblink
(
'link_to_recovery', 
'SELECT object_id, owner, object_name 
 FROM   dc_schema.test_table 
 where  object_id between 30000 and 40000'
)
AS 
p(object_id numeric, owner varchar(128), object_name varchar(128))
;

Assuming that you have recovered all the data that was accidentally deleted, you can drop the recovered database. I will do this via AWS CLI commands running in CloudShell.

First, we delete the instance created under the cluster with the command:

aws rds delete-db-instance \
  --db-instance-identifier red-recovery-cluster \
  --skip-final-snapshot \
  --delete-automated-backups

After the instance is deleted, I deleted the cluster with the following command:

aws rds delete-db-cluster \
  --db-cluster-identifier red-recovery-cluster \
  --skip-final-snapshot

Please note that the above commands delete the database along with automated backups and without a final snapshot.
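
If you script this cleanup instead, keep in mind that the cluster cannot be removed while its instance still exists. Here is a Python/boto3 sketch of the same two deletions, with a waiter in between (identifiers match the CLI example above):

import boto3

rds = boto3.client("rds")

# Delete the instance first; the cluster cannot be dropped while it has members.
rds.delete_db_instance(
    DBInstanceIdentifier="red-recovery-cluster",
    SkipFinalSnapshot=True,
    DeleteAutomatedBackups=True,
)

# Wait for the instance to disappear, then remove the cluster itself.
rds.get_waiter("db_instance_deleted").wait(DBInstanceIdentifier="red-recovery-cluster")
rds.delete_db_cluster(
    DBClusterIdentifier="red-recovery-cluster",
    SkipFinalSnapshot=True,
)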

CloudWatch alarms

One of the methods to be alerted on an issue with one of your AWS resources is to use a CloudWatch alarm that sends an email (or page) to a subscriber. One issue with this approach is that “An alarm invokes actions only when the alarm changes state.” This is documented at:

Using Amazon CloudWatch alarms

In other words, you will receive one email or page when the event occurs. For example, if you have an alarm based on the CPU utilization of the database exceeding a certain threshold, the alert will be sent out once, when the database first breaches the threshold. If you have a requirement to be paged every (for example) five minutes while the issue continues, you will need to develop some other methodology. One suggestion is to use a combination of services such as EventBridge and Lambda to keep checking the alarm state.

As an option for DBAs used to handling alerts via scripting, I created a small script (see below) that can be executed via cron on an EC2 server. It monitors the CloudWatch alarm status and keeps paging for as long as an alarm remains active.

#!/bin/bash

#
## alarms that are to be checked
#
alarm_names=("mydb-CPUUtilizationAlarm" "mydb-FreeLocalStorageAlarm")

#
## write out a file header
#
echo 'Report on CloudWatch alarms' > alarm_status.txt
echo 'Executed at ' `date` >> alarm_status.txt
echo ' ' >> alarm_status.txt
printf "%-50s %-10s\n" "Alarm Name" "Status" >> alarm_status.txt
pager_flag='n'

#
## get the status of each alarm
#
for cur_value in "${alarm_names[@]}"
do
    alarm_status=$(aws cloudwatch describe-alarms \
                       --alarm-names "$cur_value" \
                       --query 'MetricAlarms[0].StateValue' \
                       --output text)
    if [ "$alarm_status" == "ALARM" ]; then
        pager_flag='y'
    fi
    printf "%-50s %-5s\n" "$cur_value" "$alarm_status" >> alarm_status.txt
done

#
## mail to the appropriate person/team
# 
mail -s "CloudWatch alarm status" dba-group@company.com < alarm_status.txt

#
## check if a page out is required
#
if  [ "$pager_flag" == 'y' ]; then
    mail -s "CloudWatch alarm status - Alarm detected" dba-pager@company.com < alarm_status.txt
fi
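
For DBAs who would rather do the same check in Python (for example as a Lambda function on an EventBridge schedule, as mentioned earlier), here is a minimal boto3 sketch. The alarm names match the script above; publishing the page through an SNS topic instead of mail is my assumption, and the topic ARN is a placeholder.

import boto3

# Same alarm names as in the shell script above; the SNS topic ARN is a placeholder.
ALARM_NAMES = ["mydb-CPUUtilizationAlarm", "mydb-FreeLocalStorageAlarm"]
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:dba-pager"


def check_alarms(event=None, context=None):
    cloudwatch = boto3.client("cloudwatch")
    sns = boto3.client("sns")

    alarms = cloudwatch.describe_alarms(AlarmNames=ALARM_NAMES)["MetricAlarms"]
    report = "\n".join(f"{a['AlarmName']:<50} {a['StateValue']}" for a in alarms)

    # Page on every run for as long as any alarm is still in the ALARM state.
    if any(a["StateValue"] == "ALARM" for a in alarms):
        sns.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject="CloudWatch alarm status - Alarm detected",
            Message=report,
        )
    return report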

Using Python/BOTO3 code to put test data into a DynamoDB table

In this example, I created a Python script that uses the Boto3 SDK to write data to a DynamoDB table. The script defines a function called put_item() that takes in four arguments: part_key, sort_key, alt_sort_key, and more_info. The function prints out the part_key, sort_key, and alt_sort_key, writes an item to a DynamoDB table with the given attributes, and prints out the response in JSON format.

The script also defines a main logic section that iterates through a list of alphabet letters and a list of numbers to generate different combinations of part_key, sort_key, and alt_sort_key. It calls the put_item() function for each combination of keys, with the more_info argument generated by repeating alt_sort_key 3,029 times (string multiplication in Python), producing a larger item.

The script uses the os module to clear the console before execution and the json module to print out responses in JSON format. It also imports the boto3 module to create a resource for DynamoDB operations.

#--------------------------------------------------------------------
#
# Author      : Dean Capps 
# Description : Put an item in a DynamoDB table
#
#--------------------------------------------------------------------

print("Starting")

import os
os.system('clear')

#
## import Python SDK for AWS
#
import boto3
import json

#
## create a boto3 resource for DynamoDB operations
#
dynamodb = boto3.resource("dynamodb")

#
## Write (put_item) into a dynamodb table with different 
## ReturnConsumedCapacity options
def put_item(part_key, sort_key, alt_sort_key, more_info):
    print(part_key, sort_key, alt_sort_key)
    try:
        TableName = dynamodb.Table('dean_test_table')
        response = TableName.put_item(
            Item={
                    'part_key'     : part_key,
                    'sort_key'     : sort_key,
                    'alt_sort_key' : alt_sort_key,
                    'more_info'    : more_info
            },
            #ReturnConsumedCapacity="NONE"
            ReturnConsumedCapacity="INDEXES"
        )
        # print(json.dumps(response, indent=2))
        
        # response = TableName.put_item(
            # Item={
                    # 'part_key':'b',
                    # 'sort_key':'4',
                    # 'alt_sort_key':'b1'
            # },
            # ReturnConsumedCapacity="TOTAL"
        # )
        # print(json.dumps(response, indent=2))
        
        # response = TableName.put_item(
            # Item={
                    # 'part_key':'d',
                    # 'sort_key':'1',
                    # 'alt_sort_key':'c1',
                    # 'more_info': {
                        # 'field-1':'Field 01 data',
                        # 'field-2':'Field 02 data'
                    # }
            # },
            # ReturnConsumedCapacity="INDEXES"
        #)
        print(json.dumps(response, indent=2))
        
    except Exception as e:
        print("Error writing to table")
        print(e)



#
## main logic
#
part_key = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
sort_key = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20' ]
for i in range(len(part_key)):
    for j in range(len(sort_key)):
        part_key_temp = part_key[i]
        sort_key_temp = sort_key[j]
        alt_sort_key  = part_key_temp + sort_key_temp
        more_info     = alt_sort_key*3029
        put_item(part_key_temp, sort_key_temp, alt_sort_key, more_info)

Using Python/BOTO3 code to create a DynamoDB table

This week I have been experimenting with the interface between DynamoDB and Python/BOTO3. In this example, I am creating a DynamoDB table along with a local secondary index (LSI) and a global secondary index (GSI). It is important that the following order be maintained in the table specification within the “dynamodb.create_table” structure:

a. Specify the key schema (Primary key in the RDBMS world)

b. Next specify the attributes of the table (Columns in the RDBMS world). Note that I have specified one extra attribute (alt_sort_key), which will be used in the LSI

c. In the next chunk of code, I create an LSI with the partition key matching the table’s partition key and the alternate sort key, alt_sort_key. Also included in the LSI specification is the projection clause, which is the set of attributes that are to be copied from the table into the LSI. DynamoDB provides three different options for this:

KEYS_ONLY – Each item in the index consists only of the table partition key and sort key values, plus the index key values

INCLUDE – In addition to the attributes described in KEYS_ONLY, the secondary index will include other non-key attributes that you specify.

ALL – The secondary index includes all of the attributes from the source table.

d. The last structure is the GSI. Note that the LSI uses capacity from the table, while the GSI requires that you specify the capacity separately.

The entire create-table call is wrapped in try/except logic to handle errors.

#! /usr/bin/env python
#--------------------------------------------------------------------
#
# Author      : Dean Capps 
# Description : Create a DynamoDB table
#
#--------------------------------------------------------------------
#
print("Starting")

import os
os.system('clear')

#
## import Python SDK for AWS
#
import boto3

#
## create a boto3 client for DynamoDB operations
#
dynamodb = boto3.client("dynamodb")

#
## Create the table
##
## Keep the order of
##   a. key schema
##   b. attributes
##   c. LSI
##   d. GSI
#
try:
    response = dynamodb.create_table(
        TableName="dean_test_table",  
        KeySchema=[
            {
                'AttributeName': 'part_key',
                'KeyType': 'HASH'
            },
            {
                "AttributeName": "sort_key",                
                'KeyType': 'RANGE'                
            }
        ],
        AttributeDefinitions=[
            {
                "AttributeName": "part_key",
                "AttributeType": "S"
            },
            {
                "AttributeName": "sort_key",
                "AttributeType": "S"
            },
            {
                "AttributeName": "alt_sort_key",
                "AttributeType": "S"
            }
        ],
        LocalSecondaryIndexes=[
            {
                'IndexName': 'dean_test_table_lsi',
                'KeySchema': [
                    {
                        'AttributeName': 'part_key',
                        'KeyType': 'HASH'
                    },
                    {
                        'AttributeName': 'alt_sort_key',
                        'KeyType': 'RANGE'
                    }
                ],
                'Projection': {
                    'ProjectionType': 'ALL'
                },
            }
        ],      
        GlobalSecondaryIndexes=[
            {
                'IndexName': 'dean_table_gsi',
                'KeySchema': [
                    {
                        'AttributeName': 'alt_sort_key',
                        'KeyType': 'HASH'
                    },
                ],
                'Projection': {
                    'ProjectionType': 'ALL'
                },
                'ProvisionedThroughput' :{
                    'ReadCapacityUnits': 1,
                    'WriteCapacityUnits': 1,
                }
            }
        ],        
        ProvisionedThroughput={
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5
        }
    )
    print("Table created successfully")
except Exception as e:
    print("Error creating table:")
    print(e)

Manipulating CSV files with Python

I had a CSV file with over 50 columns, of which I needed only 11, in a slightly different order. I had been manipulating the file manually, but got frustrated after the second time I had to do this repetitive task and turned to Python to see if I could write some quick and dirty code. As with most things in Python, it was relatively easy and quick to accomplish:

import csv
with open("file_with_many_columns.csv", "r", newline="") as source:
    rdr = csv.reader(source)
    with open("file_with_columns_needed.csv", "w", newline="") as result:
        wtr = csv.writer(result)
        for r in rdr:
            #
            ## keep only the 11 columns needed, in the new order
            #
            wtr.writerow((r[2], r[1], r[0], r[3], r[4], r[5], r[6], r[7], r[27], r[44], r[45]))

The number in the square brackets corresponds to the column number in the original file. Like most things in Python, numbering starts at zero, with [0] being the first or “A” column. The order of the columns in the writerow call is the order in which they will appear in the output file.
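
If the input file has a header row, selecting columns by name rather than position is a little more robust when the layout changes. Here is a variant using csv.DictReader/DictWriter; the column names below are made up for illustration:

import csv

# Hypothetical header names -- replace with the 11 columns you actually need.
wanted = ["id", "name", "owner", "region", "status", "type",
          "created", "updated", "size", "cost", "notes"]

with open("file_with_many_columns.csv", "r", newline="") as source, \
     open("file_with_columns_needed.csv", "w", newline="") as result:
    rdr = csv.DictReader(source)
    # extrasaction="ignore" drops the 40-odd columns we do not care about.
    wtr = csv.DictWriter(result, fieldnames=wanted, extrasaction="ignore")
    wtr.writeheader()
    for row in rdr:
        wtr.writerow(row)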

Hope this helps with your use case.

Automating website scraping

For many years I have had a personal quest to develop a simple way to automate the collection of information from websites. In 2017, I played around with a Python/Selenium-based solution. See blog post Python/Selenium example for details.

While the Python/Selenium-based solution was interesting, it did not meet all my requirements, such as ease of use, low development time, and flexibility. While browsing the web, I ran across Microsoft Power Automate, which looked interesting. I was pleasantly surprised to see that a desktop version of the software was included in my Microsoft 365 subscription. I downloaded and installed the product and attempted to automate the collection of information.

The use case was to get the account balance from my brokerage account, the current DOW and S&P numbers, and write the information to a CSV file with the current date. The steps are as follows:

01. Click on create a new flow (aka program unit)
02. Set up variables for user names and passwords by clicking on “Variables” on the left and dragging the function “Set variable” into the body of the flow
03. Launch a new web browser instance with “Launch new name-of-browser”. I was not able to get the Firefox plugins to work; however, Microsoft Edge worked well
04. Populate the user name and password with “Populate text field on web page”. The product displays a UI when you are on the web page so you simply have to highlight the field on the web page and assign the variable set in 02. above
05. Usually you have to click on a login button of some sort. This is accomplished with the “Press button on web page” function
06. I coded a “Wait” for 30 seconds to let the web page complete the log on
07. Use the “Extract data from web page” to scrape the required data and store it into a variable
08. The “Get current date and time” function retrieves the current date into a variable
09. Write the data into a file with the “Write text to file” function
10. Close the web browser with the “Close web browser” function

Repeat the above steps for each web page you need to extract the information from. The individual functions can be moved up or down in the flow and can be copied to repeat a function.

One issue I ran into was that the data is written to the file in columnar format. I am sure that I will be able to change this as I get more familiar with the product. In the interim, I created a few lines of code in Python and executed it from the flow with the “Run DOS command” function to format the data into a CSV file. Now, when I want the information from these three websites, I run the flow and open the CSV file in Excel.
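
The reformatting code is not reproduced in the original post; the snippet below is a rough sketch of the kind of thing it could be, assuming the flow writes one scraped value per line (balance, DOW, S&P) and using made-up file names.

from datetime import date

# Assumes the flow wrote one value per line: account balance, DOW, S&P
# (file names are placeholders).
with open("scraped_values.txt") as f:
    values = [line.strip() for line in f if line.strip()]

# Append one dated row per run so the CSV opens cleanly in Excel.
with open("daily_numbers.csv", "a") as out:
    out.write(",".join([date.today().isoformat()] + values) + "\n")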

The tool also has debugging options, such as running the flow from a particular step or stopping at a step. The product is intuitive and easy to learn for anyone with a few basic coding skills.

Complete documentation is here.

Creating an AWS RDS instance via the command line interface (CLI)

We recently ran into an unusual circumstance while working with a customer: we were unable to create an RDS instance via the console and ended up creating the instance via the command line. The command is also useful because it can be placed in a script to create multiple instances.

The complete documentation for all options can be found at AWS CLI reference.

Code with explanation of the options

I will attempt to explain each option of the command below. The complete code in one unit is at the bottom of the post.

aws rds create-db-instance \
--db-name MYDB \

The name of the database. This option has different results depending on the database engine you use. This example is for Oracle.

--db-instance-identifier mydb \

The DB instance identifier. This parameter is stored as a lowercase string.

--allocated-storage 300 \

The amount of storage in gibibytes (GiB) to allocate for the DB instance. The constraints on the storage are different depending on the database engine you use.

--db-instance-class db.m5.2xlarge \

The compute and memory capacity of the DB instance. Not all DB instance classes are available in all Amazon Web Services Regions, or for all database engines.

--engine oracle-se2-cdb \

The type of database engine to be used for this instance. In this example, it is Oracle, Standard Edition container database.

--master-username admin \

The account name for the master user.

--master-user-password admin_pwd \

The password for the master user. Please pick a secure password.

--vpc-security-group-ids "sg1" "sg2" \

A list of Amazon EC2 VPC security groups to associate with this DB instance, supplied as a space-separated list of quoted strings, for example "sg1" "sg2".

--db-subnet-group-name "sub_net_1" \

A DB subnet group to associate with this DB instance.

--db-parameter-group-name default.oracle-se2-cdb-19 \

The name of the DB parameter group to associate with this DB instance. If you do not specify a value, then the default DB parameter group for the specified DB engine and version is used.

--backup-retention-period 2 \

The number of days for which automated backups are retained. Setting this parameter to a positive number enables backups. Setting this parameter to 0 disables automated backups.

--port 1521 \

The database port. There are different default values depending on the database engine.

--no-multi-az \

Specifies that the database is not to be deployed in multi-AZ mode. This is a Boolean and can be specified as --multi-az or --no-multi-az

--engine-version 19.0.0.0.ru-2022-04.rur-2022-04.r1 \

The version number of the database engine to use. For a list of valid engine versions, use the DescribeDBEngineVersions operation.

--auto-minor-version-upgrade \

Indicates whether minor engine upgrades are applied automatically to the DB instance during the maintenance window. By default, minor engine upgrades are applied automatically.

--license-model license-included \

License model information for this DB instance.

--option-group-name my_option_group \

Indicates that the DB instance should be associated with the specified option group.

--character-set-name WE8ISO8859P1 \

For supported engines, such as Oracle, this value indicates that the DB instance should be associated with the specified Character Set.

--nchar-character-set-name AL16UTF16 \

The name of the NCHAR character set for the Oracle DB instance.

--no-publicly-accessible \

Whether the DB instance is publicly accessible

--storage-type gp2 \

Specifies the storage type to be associated with the DB instance.

--storage-encrypted \

Specifies that the DB instance storage is encrypted with the below KMS Key

--kms-key-id  \

Specify the ARN of the KMS key to be used to encrypt the storage

--copy-tags-to-snapshot \

Specifies that the tags associated with the database instance will be applied to the snapshots of the database

--no-enable-iam-database-authentication \

A value that indicates whether to enable mapping of Amazon Web Services Identity and Access Management (IAM) accounts to database accounts. IAM database authentication works with MariaDB, MySQL, and PostgreSQL. With this authentication method, you don’t need to use a password when you connect to a DB instance. Instead, you use an authentication token.

--enable-performance-insights \

Whether to enable Performance Insights for the DB instance. Performance Insights expands on existing Amazon RDS monitoring features to illustrate and help you analyze your database performance.

--performance-insights-kms-key-id  \

The ARN of the KMS key used to encrypt your performance insights. If you do not specify a value, then Amazon RDS uses your default KMS key.

--performance-insights-retention-period 7 \

The duration to retain the performance insights data in days.

--deletion-protection \

Specifies that this DB instance has deletion protection enabled.

--max-allocated-storage 500 \

The upper limit in gibibytes (GiB) to which Amazon RDS can automatically scale the storage of the DB instance.

--region us-east-1

The region that the database will be deployed in.

Complete code in one unit

aws rds create-db-instance \
--db-name MYDB \
--db-instance-identifier mydb \
--allocated-storage 300 \
--db-instance-class db.m5.2xlarge \
--engine oracle-se2-cdb \
--master-username admin \
--master-user-password admin_pwd \
--vpc-security-group-ids "sg1" "sg2" \
--db-subnet-group-name "sub_net_1" \
--db-parameter-group-name default.oracle-se2-cdb-19 \
--backup-retention-period 2 \
--port 1521 \
--no-multi-az \
--engine-version 19.0.0.0.ru-2022-04.rur-2022-04.r1 \
--auto-minor-version-upgrade \
--license-model license-included \
--option-group-name my_option_group \
--character-set-name WE8ISO8859P1 \
--nchar-character-set-name AL16UTF16 \
--no-publicly-accessible \
--storage-type gp2 \
--storage-encrypted \
--kms-key-id  \
--copy-tags-to-snapshot \
--no-enable-iam-database-authentication \
--enable-performance-insights \
--performance-insights-kms-key-id  \
--performance-insights-retention-period 7 \
--deletion-protection \
--max-allocated-storage 500 \
--region us-east-1

Oracle Active Data Guard on AWS RDS

On Aug 23, 2022, AWS announced support for managed Oracle Data Guard Switchover and Automated Backups for replicas. See the announcement at:

AWS RDS for Oracle now supports managed Oracle Data Guard

Oracle DBAs are familiar with creating a physical standby in an on-premises environment and opening it in read-only (i.e. Active Data Guard) mode. AWS has now automated the process of creating the standby and the process of switching over in the context of an RDS (i.e. managed database service). All the manual tasks of taking an RMAN backup, transferring to the standby, restoring and recovering the database, setting up Data Guard Manager, etc. can be accomplished by a few clicks on the AWS console. This blog describes the steps required to create and test this scenario.

Prerequisites
1. The database instance size must be db.t3.large at the minimum.
2. The primary database should be using an option group that is exclusive to this database.
3. Ensure you have the required KMS key in both regions
4. Create a parameter group in the target region that exactly matches the parameter group from the source region

Creating a replica
After you have created a primary database in any region:
1. Select the primary database and under the actions button, select “Create replica”.
2. Choose multi-AZ on the target also
3. Click on the create replica button. Replica creation will take approximately 1.5 hours. This may vary based on the size of the database.
4. Change the backup option on the replica. The replica will be created with the backup retention period set to “0”. This is contrary to the standard practice of not backing up a Data Guard standby; however, for the replica to switch over to primary at some point in the future, the retention period must be greater than “0”. (A scripted equivalent of these steps is sketched below.)
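
For reference, the console steps above can also be scripted. Here is a Python/boto3 sketch under the stated prerequisites; the identifiers, regions, instance class, and KMS key are placeholders, and a real call would also carry the option group and parameter group prepared earlier.

import boto3

# Client in the TARGET region where the replica will live (regions are placeholders).
rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica",
    # Cross-region replicas reference the source by ARN (placeholder).
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:111122223333:db:mydb",
    DBInstanceClass="db.t3.large",      # minimum size per the prerequisites
    MultiAZ=True,                       # step 2: multi-AZ on the target as well
    KmsKeyId="REPLACE_WITH_TARGET_REGION_KMS_KEY",
    SourceRegion="us-east-1",
)

# Step 4: once available, enable automated backups so the replica can later become primary.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb-replica")
rds.modify_db_instance(
    DBInstanceIdentifier="mydb-replica",
    BackupRetentionPeriod=2,
    ApplyImmediately=True,
)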

Performing a switchover across regions
A switchover is a role reversal between the primary database and one of its standby databases. A switchover guarantees no data loss. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role, and the standby database transitions to the primary role.

1. Confirm there is no lag. This can be done via the console in the “Replication” section, under the “Lag” column, or via the following SQL:

SELECT ARCH.thread#                        "Thread",
       ARCH.sequence#                      "Last Sequence Received",
       APPL.sequence#                      "Last Sequence Applied",
       ( ARCH.sequence# - APPL.sequence# ) "Difference"
FROM   (SELECT thread#,
               sequence#
        FROM   v$archived_log
        WHERE  ( thread#, first_time ) IN (SELECT thread#,
                                                  Max(first_time)
                                           FROM   v$archived_log
                                           GROUP  BY thread#)) ARCH,
       (SELECT thread#,
               sequence#
        FROM   v$log_history
        WHERE  ( thread#, first_time ) IN (SELECT thread#,
                                                  Max(first_time)
                                           FROM   v$log_history
                                           GROUP  BY thread#)) APPL
WHERE  ARCH.thread# = APPL.thread#
ORDER  BY 1;

2. Confirm that there is no pending maintenance on either the primary or the replica

3. Initiate the switchover from the replica (not the primary!).

4. Select the RDS instance, click on “Switch over replica”, and agree to the warning panel that is displayed

5. The former replica will display the message “The Switchover to the read replica started” under “Logs & events”, “Recent events” section. (Remember to sort the messages such that the latest messages are displayed.)

6. After the switchover is complete, both databases will display “The Switchover to the read replica finished successfully”

7. Applications will have to reconnect to the databases

8. You can confirm that the data is replicating from the new primary via the console or SQL
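
The switchover can also be initiated through the API rather than the console. Here is a boto3 sketch, run against the region that currently hosts the replica (the instance identifier is a placeholder):

import boto3

# Step 3: the switchover is initiated from the replica, not the primary.
rds = boto3.client("rds", region_name="us-west-2")

# Managed Oracle Data Guard switchover of the replica to the primary role.
rds.switchover_read_replica(DBInstanceIdentifier="mydb-replica")

# Wait for the instance to report 'available' again after the role change.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb-replica")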

Test reboot with multi-AZ failover – Primary
Initiate a reboot with failover on the current primary.

1. Select the RDS instance, click on “Reboot” under “Actions”, and check the option “Reboot With Failover?”

2. After about 8 to 10 minutes, the database will complete the multi-AZ failover. Under “Logs & events”, “Recent events” section, look for messages “Multi-AZ instance failover started” and “Multi-AZ instance failover completed”.

A similar test can be done on the Replica.
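
If you want to drive this test from a script as well, here is a short boto3 sketch; the instance identifier is a placeholder, and ForceFailover=True is the API equivalent of checking “Reboot With Failover?” in the console.

import boto3

rds = boto3.client("rds")

# Reboot the current primary and force a failover to the standby AZ.
rds.reboot_db_instance(
    DBInstanceIdentifier="mydb",   # placeholder
    ForceFailover=True,
)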

Observations
1. After a switchover, both the databases may show as Read-Only. This can be ignored

2. You cannot temporarily shut down the database for the customary 7 days as it has a read replica

3. Ensure that there is no pending maintenance before issuing a switchover

4. If you are using a modified character set on your primary, the console may show the replica’s character set as “AL32UTF8”. However, within the database itself, the character set matches the primary. This can be confirmed with:

select  property_value 
from    DATABASE_properties 
where   property_name='NLS_CHARACTERSET'
;

5. The console will display a message that the switchover is complete. However, this is displayed prematurely. The actual switchover will take additional time. Monitor the status of the two database instances under the “Status” column.

Trigger with a subprogram in PL/SQL

Last week I had a requirement to create a trigger on a table that had to perform some logic and then run additional updates based on the outcome of that logic. The logic ran to multiple IF statements, resulting in long, messy code. Additionally, there was a second requirement to execute this chunk of code from two places within the trigger. I moved the code to a subprogram within the trigger’s PL/SQL block, resulting in a modular structure. This is a simplified example of the trigger code to demonstrate the technique.

The below lines declare a subprogram with some logic that creates an update statement. The EXECUTE IMMEDIATE is commented out for testing and demonstration:

  PROCEDURE perform_calculations(cust_num number) IS
    BEGIN 

        v_upd_sql := '';

        IF :OLD.CUST_ZIP <> :NEW.CUST_ZIP THEN
            v_upd_sql := ' UPDATE DEAN_CUSTOMER SET CUST_ZIP_OLD = ' || :OLD.CUST_ZIP || ' WHERE CUST_NUM = ''' || CUST_NUM || '''';
        END IF;

        --EXECUTE IMMEDIATE v_upd_sql;
        dbms_output.put_line('SQL --->' || v_upd_sql);
    END;

The below line of code calls the subprogram:

        perform_calculations(:old.cust_num);

In the real-world scenario I was working on, the code issued a BULK COLLECT similar to the one below and ran the subprogram for each row collected:

        v_main_sql := ' SELECT ROWID FROM ' || v_table_name ||
                      ' WHERE COL01 = ' || :OLD.COL01;

        EXECUTE IMMEDIATE v_main_sql BULK COLLECT INTO var01;
        FOR i IN 1..var01.COUNT
        LOOP
            perform_calculations(var01(i));
        END LOOP;

“var01” was defined as shown below:

TYPE var01_type IS TABLE OF ROWID;
var01 var01_type;

Code

create or replace TRIGGER UPDATE_TRIGGER AFTER UPDATE ON DEAN_CUSTOMER FOR EACH ROW 
DECLARE 
  v_upd_sql         VARCHAR2(8000); 

  table_not_found EXCEPTION;  
  PRAGMA EXCEPTION_INIT (table_not_found, -942);  

  PROCEDURE perform_calculations(cust_num number) IS
    BEGIN 

        v_upd_sql := '';

        IF :OLD.CUST_ZIP <> :NEW.CUST_ZIP THEN
            v_upd_sql := ' UPDATE DEAN_CUSTOMER SET CUST_ZIP_OLD = ' || :OLD.CUST_ZIP || ' WHERE CUST_NUM = ''' || CUST_NUM || '''';
        END IF;

        --EXECUTE IMMEDIATE v_upd_sql;
        dbms_output.put_line('SQL --->' || v_upd_sql);
    END;

    BEGIN 
        dbms_output.put_line(' ');  
        dbms_output.put_line('********** Trigger DC invoked **********');  
        dbms_output.put_line(' ');  
        perform_calculations(:old.cust_num);
END; 

Table definition

CREATE TABLE DEAN_CUSTOMER
(
  "CUST_NUM"     NUMBER(8,0),
  "CUST_NAME"    VARCHAR2(10 BYTE), 
  "CUST_ZIP"     NUMBER(12,0),
  "CUST_ZIP_OLD" NUMBER(12,0)
) SEGMENT CREATION DEFERRED 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
  NOCOMPRESS LOGGING
  TABLESPACE "USERS"
;