RMAN – Mismatched objects

One of my production databases was scripted to write backups to /backup01. For reasons not germane to this post, this mount point became full and I had to move the backups to /backup02. I copied the old backup pieces over and modified the script to write to /backup02 starting with the next run. However, the next execution produced the warning below.

RMAN-06207: WARNING: 1 objects could not be deleted for DISK channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Backup Piece    /backup01/GPn4sbh1bm_1_1.bus

This was a direct result of my moving the backup pieces from one location to another outside of RMAN, which left the status recorded in the RMAN repository out of sync with what was actually on disk. To resolve the warning, I issued a crosscheck of the affected backup piece:

RMAN> crosscheck backuppiece '/backup01/GPn4sbh1bm_1_1.bus';
 
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1011 instance=instance name device type=DISK
crosschecked backup piece: found to be 'EXPIRED'
backup piece handle=/backup01/GPn4sbh1bm_1_1.bus RECID=125099 STAMP=951616886
Crosschecked 1 objects

followed by a delete obsolete:

RMAN> delete obsolete;                         
 
RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 7 days
using channel ORA_DISK_1
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           125099 10-AUG-17         
  Backup Piece       125099 10-AUG-17          /backup01/GPn4sbh1bm_1_1.bus
 
Do you really want to delete the above objects (enter YES or NO)? y
deleted backup piece
backup piece handle=/backup01/GPn4sbh1bm_1_1.bus RECID=125099 STAMP=951616886
Deleted 1 objects

This resolved the issue.
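
Since the old backup pieces were copied to /backup02 outside of RMAN, the repository also knew nothing about the copies. If those copies still need to be usable for restores, they can be re-registered with a catalog command. A minimal sketch, assuming the copied pieces sit directly under /backup02 (the path is illustrative):

RMAN> catalog start with '/backup02/';

RMAN lists the files it finds under that prefix and prompts for confirmation before cataloging them.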

Resolving a persistent locking issue hours after it occurred

A fellow DBA supported an application with a nightly scheduled script that recompiled a number of packages. A sample command and the error it began to raise:

ALTER PACKAGE schema.package_name COMPILE PACKAGE;
 
ERROR at line 1:
ORA-04021: timeout occurred while waiting to lock object
 

Over the last couple of days, this script had begun to fail every night. I used the SQL below against the active session history to identify the blocking session:

select    *
from      gv$active_session_history
where     sample_time > sysdate - (interval '1' day)
  and     sql_opname = 'ALTER PACKAGE'
  and     blocking_session is not null
order by  sample_time
;

This SQL returned rows identifying the blocks, with sample times exactly matching the script's scheduled run time. Using the sample time, blocking session ID, session serial#, and instance ID from those rows, I ran the SQL below:

select   *
from     gv$active_session_history
where    sample_time between
                  to_date('07-AUG-17 08.01 PM',
                          'dd-mon-yy hh.mi pm')
                  and
                  to_date('07-AUG-17 09.32 PM',
                          'dd-mon-yy hh.mi pm')
  and    session_id      = 
  and    session_serial# = 
  and    inst_id         = 
order by sample_time
;

This provided the SQL_ID of the last statement the blocking session had executed, which I then looked up in gv$sql:

select  *
from    gv$sql
where   inst_id = 
  and   sql_id  = 'sql id'
;

The last SQL that the blocking session had executed happened to be a select. However, working on the assumption that it had previously performed DML, I terminated the session, and the ALTER PACKAGE script was then able to execute successfully.
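
For completeness, if the blocking session is still connected when you track it down, it can be terminated from SQL. A minimal sketch with placeholder values (the SID, serial#, and instance number here are illustrative, not the ones from this incident):

-- 1234 = SID, 56789 = serial#, @1 = RAC instance number of the blocker
alter system kill session '1234,56789,@1' immediate;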

AWS – EC2 with Oracle for an I/O intensive application

This morning we received a request to create an EC2 instance in AWS for an Oracle database. The application to be benchmarked on this instance was very I/O intensive. In the past we had been using general-purpose disks rather than provisioned IOPS. To ensure that this application got the best possible I/O performance, the team wanted the disks provisioned at the highest available IOPS rate. Amazon has a restriction that the IOPS:GB ratio cannot exceed 30:1. We stripe disks to improve I/O, with each ASM disk group consisting of 4 disks. We did the math and came up with the structure below; the arithmetic check follows the table.

Disk group   IOPS per disk   # of disks   Size per disk (GB)   IOPS:GB ratio
Arch              3000            4              100                 30
ASM1              6000            4              200                 30
ASM2              6000            4              200                 30

Total size = 2000 GB
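
Checking the figures against the 30:1 cap and the requested total:

100 GB x 30 IOPS/GB = 3000 IOPS per Arch disk
200 GB x 30 IOPS/GB = 6000 IOPS per ASM1/ASM2 disk
Total size = 4 x 100 GB + 4 x 200 GB + 4 x 200 GB = 2000 GB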