Instructions to run the SQL Tuning Advisor

A brief write-up on running the SQL Tuning Advisor. It seems that every time I run an ADDM report, I see output that includes “Run SQL Tuning Advisor on the SELECT statement with SQL_ID”

          Findings and Recommendations
          ----------------------------
 
Finding 1: Top SQL Statements
Impact is .22 active sessions, 95% of total activity.
-----------------------------------------------------
SQL statements consuming significant database time were found. These
statements offer a good opportunity for performance improvement.
 
   Recommendation 1: SQL Tuning
   Estimated benefit is .22 active sessions, 95% of total activity.
   ----------------------------------------------------------------
   Action
      Run SQL Tuning Advisor on the SELECT statement with SQL_ID
      "gbwan1836hj3c".
      Related Object
         SQL statement with SQL_ID gbwan1836hj3c.
         SELECT time_key + (:"SYS_B_00" / :"SYS_B_01") period,
         in_value,
         out_value
         FROM  (SELECT time_key,

Create an SQL Tuning task as follows:

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_id      => 'gbwan1836hj3c',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 500,
                          task_name   => 'gbwan1836hj3c_tuning_task',
                          description => 'Tuning task1 for statement gbwan1836hj3c');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

Run the tuning task

EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'gbwan1836hj3c_tuning_task');

The above will take a couple of minutes to run. Depending on the complexity of the query, the time_limit (specified in seconds) may need to be increased.

After the task is complete, query the output:

set long 65536
set longchunksize 65536
set linesize 100
select dbms_sqltune.report_tuning_task('gbwan1836hj3c_tuning_task') from dual;

I was using SQL Developer, where the output looks a bit strange. Copy and paste it into Notepad to see it formatted properly.

Oracle 12c – Data guard standby fallen behind – logs files available on primary

I had a situation today wherein a standby had stopped applying logs about 20 days ago and the monitoring had somehow been ignored. As we still had the logs available on the primary database, I copied them from ASM to an external file system on the primary with this script:

#!/bin/ksh
for i in $(asmcmd ls +DG_ASM_PROD/PRODDB/ARCHIVELOG/2017_11_16); 
do
  asmcmd cp +DG_ASM_PROD/PRODDB/ARCHIVELOG/2017_11_16/$i /backup/dean
done
for i in $(asmcmd ls +DG_ASM_PROD/PRODDB/ARCHIVELOG/2017_11_17); 
do
  asmcmd cp +DG_ASM_PROD/PRODDB/ARCHIVELOG/2017_11_17/$i /backup/dean
done
.
.
.
.
for i in $(asmcmd ls +DG_ASM_PROD/PRODDB/ARCHIVELOG/2017_12_06); 
do
  asmcmd cp +DG_ASM_PROD/PRODDB/ARCHIVELOG/2017_12_06/$i /backup/dean
done
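The repeated per-date blocks above can be collapsed into a single loop that generates the directory names. A minimal sketch, assuming GNU date and the same disk group path; the directories are printed as a dry run rather than passed to asmcmd:

```shell
#!/bin/sh
# Emit one ASM archive-log directory per day in the range (dry run).
# Assumes GNU date; the disk group path is the one used above.
archive_dirs() {
  d=$1
  while :; do
    echo "+DG_ASM_PROD/PRODDB/ARCHIVELOG/$(echo "$d" | tr - _)"
    [ "$d" = "$2" ] && break
    d=$(date -d "$d + 1 day" +%Y-%m-%d)
  done
}

archive_dirs 2017-11-16 2017-12-06
```

Each printed directory can then be fed to the existing asmcmd ls/cp loop instead of pasting one block per day.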

After all the files were copied from ASM (+DG_ASM_PROD/PRODDB/ARCHIVELOG) to a non-ASM file system (/backup/dean), I transferred them by logging on to the standby and issuing:

cd /backup/PRODDR/dean
sftp oracle@prodsrvr.verizon.com
get /backup/dean/* .

It probably would have been faster to compress the files first, but this was a quick and dirty solution and we had good network speed.

After the files had landed on the standby, I generated SQL statements to register them with the below awk command:

ls -l thread* | awk '{print "ALTER DATABASE REGISTER PHYSICAL LOGFILE '\''/backup/PRODDR/dean/"$9 "'\'';"}' > temp.sql
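To illustrate what the awk command produces, here is a dry run against a couple of hypothetical archived-log file names in a scratch directory (real log names will differ):

```shell
#!/bin/sh
# Dry run of the register-statement generation against made-up file names.
workdir=$(mktemp -d)
cd "$workdir" || exit 1
touch thread_1_seq_1001.arc thread_1_seq_1002.arc

# Same awk one-liner as above: field 9 of "ls -l" is the file name
ls -l thread* | awk '{print "ALTER DATABASE REGISTER PHYSICAL LOGFILE '\''/backup/PRODDR/dean/"$9 "'\'';"}' > temp.sql
cat temp.sql
```

The resulting temp.sql contains one ALTER DATABASE REGISTER PHYSICAL LOGFILE statement per file.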

After the commands were ready in the temp.sql file, I stopped MRP with

alter database recover managed standby database cancel;

And registered all the logs by executing the above sql file

SQL> @temp.sql

After all the files were registered, I restarted MRP with

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE  DISCONNECT FROM SESSION;

And waited for log apply to catch up.

Connected to an idle instance despite the fact that the database processes are running

For some odd reason, we were seeing a message indicating that the database was down:

[oracle@proddb1 trace]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Nov 30 23:21:39 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> 

All the Oracle processes were running

[oracle@proddb1 trace]$ ps -ef | grep -i mon
579      19405     1  0 Nov27 ?        00:00:21 ora_pmon_PRODDB1
579      19437     1  0 Nov27 ?        00:00:07 ora_smon_PRODDB1
579      19447     1  0 Nov27 ?        00:01:53 ora_mmon_PRODDB1
579      20369     1  0 Nov27 ?        00:00:02 ora_tmon_PRODDB1

Oracle SID, home, base etc were set correctly:

[oracle@proddb1 trace]$ echo $ORACLE_BASE;echo $ORACLE_HOME;echo $ORACLE_SID;hostname;uname -a
/apps/opt/oracle/product
/apps/opt/oracle/product/12.1.0.2/db_1
PRODDB1
proddb1.vzbi.com
Linux proddb1.vzbi.com 3.10.0-327.el7.x86_64 #1 SMP Thu Oct 29 17:29:29 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux

The one unusual thing was that the “*mon” processes were owned by userid 579 rather than “oracle”. I checked /etc/passwd:

[oracle@proddb1]$ cat /etc/passwd | grep -i oracle
oracle:x:1579:122:Oracle Support:/home/oracle:/bin/bash

The Systems Administrator had changed the oracle userid from 579 to 1579 (to allow FTP/SCP etc. commands) while Oracle was running. I killed the SMON process to shut down the database and brought it back up; however, the issue persisted. I then had to remove the below files:

[oracle@proddb1 ~]$ ls -la /var/tmp/.oracle
total 8
drwxrwxrwt  2 root dba  4096 Nov 30 22:49 .
drwxrwxrwt. 5 root root 4096 Nov 27 16:18 ..
srwxrwxrwx  1  579 dba     0 Nov 22 19:55 s#13746.1
srwxrwxrwx  1  579 dba     0 Nov 22 19:55 s#13746.2
srwxrwxrwx  1  579 dba     0 Nov 30 22:49 s#18231.1
srwxrwxrwx  1  579 dba     0 Nov 30 22:49 s#18231.2
srwxrwxrwx  1  579 dba     0 Nov 10 19:42 s#20024.1
srwxrwxrwx  1  579 dba     0 Nov 10 19:42 s#20024.2
srwxrwxrwx  1  579 dba     0 Nov 22 18:26 s#23729.1
srwxrwxrwx  1  579 dba     0 Nov 22 18:26 s#23729.2
srwxrwxrwx  1  579 dba     0 Nov 22 18:16 s#3272.1
srwxrwxrwx  1  579 dba     0 Nov 22 18:16 s#3272.2
srwxrwxrwx  1  579 dba     0 Nov 22 21:10 s#8843.1
srwxrwxrwx  1  579 dba     0 Nov 22 21:10 s#8843.2
srwxrwxrwx  1  579 dba     0 Nov 30 22:49 sREGISTER_PROD

[oracle@proddb1 datapump]$ ls -l /dev/shm 
total 9348
-rwxrwx--- 1 579 dba   4096 Nov 27 21:11 JOXSHM_EXT_0_PRODDB1_458756
-rwxrwx--- 1 579 dba   4096 Nov 27 21:11 JOXSHM_EXT_100_PRODDB1_458756
-rwxrwx--- 1 579 dba   4096 Nov 27 21:11 JOXSHM_EXT_101_PRODDB1_458756
-rwxrwx--- 1 579 dba   4096 Nov 27 21:11 JOXSHM_EXT_102_PRODDB1_458756
-rwxrwx--- 1 579 dba   8192 Nov 27 21:11 JOXSHM_EXT_103_PRODDB1_458756
-rwxrwx--- 1 579 dba   4096 Nov 27 21:11 JOXSHM_EXT_104_PRODDB1_458756
-rwxrwx--- 1 579 dba   4096 Nov 27 21:11 JOXSHM_EXT_105_PRODDB1_458756

In addition to the above, the listener and other files/semaphores were owned by the “old” oracle id (579). Finally, we decided to bounce the server to get rid of all the old semaphores.
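Leftover files owned by the old numeric uid can be hunted down with find's -uid test. A sketch against a scratch directory, using the current uid as a stand-in for 579 so it can run unprivileged (the real invocation would be along the lines of find /var/tmp/.oracle /dev/shm -uid 579):

```shell
#!/bin/sh
# Find files still owned by a numeric uid. In the real incident the uid was
# 579 and the paths were /var/tmp/.oracle and /dev/shm; here a scratch
# directory and the current uid stand in for illustration.
old_uid=$(id -u)
scratch=$(mktemp -d)
touch "$scratch/s#13746.1" "$scratch/sREGISTER_PROD"

find "$scratch" -type f -uid "$old_uid"
```

Anything the find prints is a candidate for cleanup once the database and listener are down.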

Losing index after disabling constraints

I had a table defined with a partitioned primary key. In order to load the data, I issued

ALTER TABLE SCHEMA.TABLE_NAME DISABLE CONSTRAINT PK_TABLE_NAME;

After this alter was issued, the index vanished from DBA_INDEXES and DBA_IND_PARTITIONS, because disabling the constraint drops its supporting index. When I re-enabled the constraint, the index was recreated as a non-partitioned index. In order to get past this, I had to change the alter to

ALTER TABLE SCHEMA.TABLE_NAME DISABLE CONSTRAINT PK_TABLE_NAME KEEP INDEX;

Resolving ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PROCESS_TABLE_EXISTS_ACTION

I was running an export to get an estimate of the size of the dump file for the data in a list of tables. The parameter file contained the below options:

job_name=dean_expdp_data
estimate_only=yes
CONTENT=DATA_ONLY
DIRECTORY=DEAN_DATA_PUMP
PARALLEL=4
exclude=statistics
exclude=DB_LINK
exclude=tablespace
exclude=DIRECTORY
schemas=SCHEMA1,SCHEMA2,SCHEMA3
exclude=table:"IN('BIG_TABLE_01','BIG_TABLE_02')"
LOGFILE=dean_expdp_data.log

Ran the command

expdp \'/ as sysdba \' parfile=dean_expdp_data.par

and got error

[oracle@pdcigusprdb1 datapump]$ expdp \'/ as sysdba \' parfile=dean_expdp_data.par

Export: Release 12.1.0.2.0 - Production on Fri Nov 17 19:01:54 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYS"."DEAN_EXPDP_DATA":  "/******** AS SYSDBA" parfile=dean_expdp_data.par 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PROCESS_TABLE_EXISTS_ACTION [TABLE_DATA:"SCHEMA"."TABLE_NAME"] 
ORA-31600: invalid input value  for parameter TABLE_EXISTS_ACTION in function KUPW$WORKER.PROCESS_TABLE_EXISTS_ACTION

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 11259

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name
0x75a5a7aa0     27116  package body SYS.KUPW$WORKER
0x75a5a7aa0     11286  package body SYS.KUPW$WORKER
0x75a5a7aa0     20583  package body SYS.KUPW$WORKER
0x75a5a7aa0     15739  package body SYS.KUPW$WORKER
0x75a5a7aa0     10613  package body SYS.KUPW$WORKER
0x75a5a7aa0     13357  package body SYS.KUPW$WORKER
0x75a5a7aa0     17209  package body SYS.KUPW$WORKER
0x75a5a7aa0      5235  package body SYS.KUPW$WORKER
0x75a5a7aa0     12091  package body SYS.KUPW$WORKER
0x75a5a7aa0      2081  package body SYS.KUPW$WORKER
0x7174e4340         2  anonymous block

KUPC$QUEUE_INT.SEND
KUPC$QUEUE_INT.SEND
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
In procedure CREATE_TABLE_DATA_OBJECT_ROWS
In function GATHER_PARSE_ITEMS
In function CHECK_FOR_REMAP_NETWORK
Nothing to remap
In procedure BUILD_OBJECT_STRINGS
In function PROCESS_TABLE_EXISTS_ACTION
In procedure DETERMINE_FATAL_ERROR with ORA-31600: invalid input value  for parameter TABLE_EXISTS_ACTION in function KUPW$WORKER.PROCESS_TABLE_EXISTS_ACTION
Job "SYS"."DEAN_EXPDP_DATA" stopped due to fatal error at Fri Nov 17 19:01:56 2017 elapsed 0 00:00:02

According to Oracle Support note “DataPump Export (EXPDP) Fails With Errors ORA-39126 ORA-31600 (Doc ID 1508068.1)”, the workaround is to

use CONTENT=ALL instead of CONTENT=DATA_ONLY when using the ESTIMATE_ONLY=Y parameter

I changed the parameter file, reran the export, and got this error:

Export: Release 12.1.0.2.0 - Production on Fri Nov 17 19:42:50 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORA-31626: job does not exist
ORA-31633: "SYS.DEAN_EXPDP_DATA"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1048
ORA-00955: name is already used by an existing object

As the previous export had failed, I had to clean up the old job in dba_datapump_jobs:

SQL> SELECT owner_name, job_name, operation, job_mode,
  2         state, attached_sessions
  3  FROM   dba_datapump_jobs
  4  WHERE  owner_name = 'SYS' AND state = 'NOT RUNNING'
  5  ORDER BY 1, 2;

OWNER_NAME JOB_NAME         OPERATION  JOB_MODE STATE         ATTACHED_SESSIONS
---------- ---------------- ---------- -------- ------------- -----------------
SYS        DEAN_EXPDP_DATA  EXPORT     SCHEMA   NOT RUNNING                   0

Drop the master table left behind by the failed job:

SQL> drop table DEAN_EXPDP_DATA;

Table dropped.

SQL> 

Reran the export and got the estimate. However, this somewhat defeats the purpose, as CONTENT=ALL includes the metadata in the size of the export.

Total estimation using BLOCKS method: 596.3 MB
Job "SYS"."DEAN_EXPDP_DATA" successfully completed at Fri Nov 17 19:47:31 2017 elapsed 0 00:00:02

Renaming/relocating data files is easier in Oracle 12c

Prior to 12c, the process to rename or relocate a data file consisted of:

-  taking the tablespace offline
-  renaming the data files using OS commands
-  issuing an 
     ALTER TABLESPACE data_01 
           RENAME DATAFILE '/u01/oradata/data_01.dbf' TO '/u02/oradata/data_01.dbf';
-  bringing the tablespace back online

With 12c, this is much easier and can be done online, with a single statement:

ALTER DATABASE MOVE DATAFILE '/u01/oradata/data_01.dbf' TO '/u02/oradata/data_01.dbf';

Dropping a RAC database without DBCA

One way to drop a RAC database without using DBCA.

This happened to be a smaller non-production database and I had the luxury of taking an export before I dropped the database. All the users had already moved to the new database.

Create a data pump directory (if you do not already have one).

CREATE or replace DIRECTORY dean_data_pump AS '/backup/UATDB/DATAPUMP';

Perform an export with the command:

cd /backup/UATDB/DATAPUMP
nohup expdp \'/ as sysdba \' directory=DEAN_DATA_PUMP dumpfile=UATDB_full_2017_09_18.dmp logfile=UATDB_full_2017_09_18.log full=yes &

Monitor the nohup.out file for errors.

Check the current status of the database. This is a 2 node RAC and is currently running on both nodes

[uatsrvr2-UATDB] srvctl status database -d UATDB
Instance UATDB1 is running on node uatsrvr1
Instance UATDB2 is running on node uatsrvr2

Stop all nodes of the RAC

srvctl stop database -d UATDB

Repeat the status check

[uatsrvr2-UATDB] srvctl status database -d UATDB
Instance UATDB1 is not running on node uatsrvr1
Instance UATDB2 is not running on node uatsrvr2

Start SQL*Plus to change the cluster_database parameter

[uatsrvr2-UATDB2] sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Mon Sep 18 10:35:26 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 4810256384 bytes
Fixed Size                  2261368 bytes
Variable Size            1342181000 bytes
Database Buffers         3456106496 bytes
Redo Buffers                9707520 bytes
Database mounted.
SQL> alter system set cluster_database=FALSE scope=spfile sid='*';

System altered.

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

Bring up the database in exclusive mode with restricted session

SQL> startup mount exclusive restrict;

ORACLE instance started.

Total System Global Area 4810256384 bytes
Fixed Size                  2261368 bytes
Variable Size            1342181000 bytes
Database Buffers         3456106496 bytes
Redo Buffers                9707520 bytes
Database mounted.

Check that the database is in restricted session with the cluster option off, and then exit from SQL*Plus

SQL> select logins,parallel from v$instance;

LOGINS     PAR
---------- ---
RESTRICTED NO

SQL> exit

Connect to the database via RMAN

[uatsrvr2-UATDB2] rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Mon Sep 18 10:43:07 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: UATDB2 (DBID=407684853, not open)

Drop the database, including all the backups

RMAN> DROP DATABASE INCLUDING BACKUPS;

database name is "UATDB" and DBID is 407684853

Do you really want to drop all backups and the database (enter YES or NO)? yes

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2 device type=DISK

List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
3535    3535    1   1   AVAILABLE   DISK        /backup01/UATDB/backups/UATDB_db_20170911_eise5da4_1_1.bus
.
.
Many more lines of output
.
.
database name is "UATDB" and DBID is 407684853
database dropped

RMAN> 

Remove the configuration from SRVCTL

[uatsrvr2-UATDB] srvctl remove database -d UATDB
Remove the database UATDB? (y/[n]) y
[uatsrvr2-UATDB] 

The following steps will have to be performed on all nodes of the RAC. Edit the /etc/oratab and remove the entry for the database.

If auditing is turned on, delete the audit files from the audit destination.

Delete any CRON entries, trace files, password files, etc.
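The oratab edit can be scripted on each node. A sketch against a scratch copy (the SIDs and Oracle home below are made up; on a real node the file is /etc/oratab):

```shell
#!/bin/sh
# Remove a database's entry from a scratch copy of oratab.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
UATDB:/apps/opt/oracle/product/11.2.0.4/db_1:N
OTHERDB:/apps/opt/oracle/product/11.2.0.4/db_1:Y
EOF

# Keep every line whose SID field is not UATDB
grep -v '^UATDB:' "$oratab" > "$oratab.new" && mv "$oratab.new" "$oratab"
cat "$oratab"
```

Other entries in the file are left untouched; only the dropped database's line is removed.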

Installing the Oracle Client 12c on an AWS Linux EC2 – Silent Mode

After the EC2 is built, run the below commands as root. Depending on the network security, you may have to either set or unset proxy variables.

/usr/bin/yum install -y binutils.x86_64 compat-libcap1.x86_64 compat-libstdc++-33.i686 \
compat-libstdc++-33.x86_64 gcc.x86_64 gcc-c++.x86_64 glibc-devel.i686 glibc-devel.x86_64 glibc.i686 \
glibc.x86_64 libXtst.x86_64 libXtst.i686 ksh.x86_64 libaio.x86_64 libaio.i686 libaio-devel.i686 \
libaio-devel.x86_64 libgcc.i686 libgcc.x86_64 libstdc++.x86_64 libstdc++.i686 libstdc++-devel.i686 \
libstdc++-devel.x86_64 libXi.i686 libXi.x86_64 make.x86_64 sysstat.x86_64

The above command will install all of the prerequisites required for the Oracle Client 12c.

Create the install group

/usr/sbin/groupadd oinstall

Create the dba group

/usr/sbin/groupadd dba

Create the oracle user and add to the groups

/usr/sbin/useradd -g oinstall -G dba oracle

Set the oracle password

passwd oracle

Respond to the above prompt with the oracle password that you want to use.

Make a directory for the software

mkdir /apps/opt/oracle

Change ownership of the directory to the oracle user and the oinstall group

chown oracle:oinstall /apps/opt/oracle

Change to oracle user

su - oracle

Copy the software into the above directory via ftp, sftp, scp, etc. The directory will then contain the file

linuxamd64_12102_client.zip

Unzip the software

unzip linuxamd64_12102_client.zip

Change directory to the client directory that was unzipped.

cd /apps/opt/oracle/client

Run the installer in silent mode

./runInstaller -silent \
 -responseFile /apps/opt/oracle/client/response/client_install.rsp    \
   oracle.install.client.installType=Runtime           \
   UNIX_GROUP_NAME=dba                                 \
   INVENTORY_LOCATION=/apps/opt/oracle/oraInventory            \
   SELECTED_LANGUAGES=en                               \
   ORACLE_HOME=/apps/opt/oracle/base/rtclient64            \
   ORACLE_BASE=/apps/opt/oracle/base                           \
   waitForCompletion=true

At the end of the installation, you should see the message

The installation of Oracle Client 12c was successful.
Please check '/apps/opt/oracle/oraInventory/logs/silentInstall2017-08-30_06-00-25PM.log' for more details

Exit from oracle back to root

exit

As a root user, execute the following script

/apps/opt/oracle/oraInventory/orainstRoot.sh

The above script will display messages similar to

Changing permissions of /apps/opt/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /apps/opt/oracle/oraInventory to dba.
The execution of the script is complete.

Set up environment variables for the oracle user. Go to the oracle user’s home directory

su - oracle
cd

Edit the oracle user’s profile

vi .bash_profile

Set up the below variables to make it easier for users to access sqlplus etc. (the final “which sqlplus” is just a check that the PATH change worked).

# Oracle client variables
export ORACLE_HOME=/apps/opt/oracle/base/rtclient64
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
PATH=$ORACLE_HOME/bin:$PATH
which sqlplus
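A quick sanity check that the profile variables took effect can be scripted as below, using the same paths as this install (adjust for your host):

```shell
#!/bin/sh
# Sanity-check the client environment variables set in .bash_profile above.
# The paths are the ones used in this install and may differ elsewhere.
ORACLE_HOME=/apps/opt/oracle/base/rtclient64
LD_LIBRARY_PATH=$ORACLE_HOME/lib:${LD_LIBRARY_PATH:-}
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_HOME LD_LIBRARY_PATH PATH

path_ok() {
  # The client bin directory should be first on the PATH
  case "$PATH" in
    "$ORACLE_HOME/bin":*) return 0 ;;
    *) return 1 ;;
  esac
}
path_ok && echo "client environment looks sane"
```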

Test sqlplus

sqlplus user_id

SQL*Plus: Release 12.1.0.2.0 Production on Wed Aug 30 18:23:27 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password: 

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining and Real Application Testing options

AWS – EC2 with Oracle for an I/O-intensive application – Part 02

This week one of our applications is attempting to benchmark the performance of an Oracle database running on an Exadata machine versus running in the AWS cloud on an x1.16xlarge EC2. Our initial thought was to define the storage as two ASM disk groups, each consisting of four 400 GB disks provisioned at 6,000 IOPS. Assuming that a given data file was spread across all four disks in a disk group, we could see speeds approaching 20,000 IOPS per disk group.

During testing, Amazon recommended that the ASM data be placed on single disks provisioned at 20,000 IOPS, which is also the current maximum that can be provisioned per volume. Additionally, AWS requires that the IOPS:GB ratio not exceed 50:1. To meet these requirements, we decided to create two mount points, each of 1,000 GB at 20,000 IOPS. An additional 1,000 GB volume at 20,000 IOPS was added for the archive logs.
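The 50:1 rule worked out for the sizes above; a quick check that a 20,000 IOPS volume needs at least 400 GB:

```shell
#!/bin/sh
# AWS provisioned-IOPS rule of thumb used above: IOPS:GB at most 50:1,
# so a volume provisioned at 20,000 IOPS must be at least 20000/50 GB.
iops=20000
ratio=50
min_gb=$((iops / ratio))
echo "minimum volume size for $iops IOPS: $min_gb GB"
```

At 1,000 GB per volume, the provisioned sizes are comfortably above that minimum.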

We created a JSON template to spin up the three disks and attach them to the existing EC2. Before attaching, make sure that the mount points are not already in use by currently attached disks. After the disks were attached, we added them to ASM and waited for the rebalance to complete. Once the rebalance finished, we dropped the old disks that were provisioned at 6,000 IOPS. The application saw a 67% decrease in elapsed time on some of the queries.

Resolving a persistent locking issue hours after it occurred

A fellow DBA supported an application that compiled a number of packages every night as part of a scheduled script. A sample command, and the error it began to raise:

ALTER PACKAGE schema.package_name COMPILE PACKAGE;
 
ERROR at line 1:
ORA-04021: timeout occurred while waiting to lock object
 

Over the last couple of days this script had begun to fail every night. I used the below SQL to identify the blocking session:

select    * 
from      gv$active_session_history
where     sample_time > sysdate -  (interval '1' day)
  and     sql_opname = 'ALTER PACKAGE'
  and not blocking_Session is null
order by  sample_time
;

This SQL returned rows identifying the blocking sessions, with the sample time exactly matching the script’s scheduled run time. I took the sample time, blocking session id, session serial# and instance id from those rows and plugged them into the below SQL (fill in the predicate values from the previous query):

select * 
from   gv$active_session_history
where sample_time between 
                  to_date('07-AUG-17 08.01 PM',
                          'dd-mon-yy hh.mi pm') 
                  and 
                  to_date('07-AUG-17 09.32 PM',
                          'dd-mon-yy hh.mi pm')
  and  session_id      = 
  and session_serial# = 
  and inst_id         = 
order by sample_time
;

This provided me with the last sql_id that the blocking session had executed.

select  * 
from    gv$sql
where   inst_id = 
  and   sql_id in 'sql id'
;

The last SQL that the blocking session had executed happened to be a SELECT. However, working on the assumption that the session had previously performed uncommitted DML, I killed it, after which the ALTER PACKAGE script was able to execute successfully.