ORA-16139 Physical standby with a switchover status of “NOT ALLOWED”

A colleague requested assistance with an issue he encountered while attempting to perform a switchover from a physical standby to a primary. The environment consisted of two 4-node RACs, each running Oracle 11.2.0.4. Both RACs had been started as single-node databases before the switchover was attempted.

The status on the standby (being switched to a primary) was as follows:

SELECT database_role, switchover_status FROM v$database;

DATABASE_ROLE    SWITCHOVER_STATUS
PHYSICAL STANDBY NOT ALLOWED

The alert log indicated that not all logs had been applied to this standby before the switchover to primary was attempted:

Database not available for switchover
  End-Of-REDO archived log file has not been recovered
  Incomplete recovery SCN:0:115789224 archive SCN:0:115809474
Database not available for switchover
  End-Of-REDO archived log file has not been recovered
  Incomplete recovery SCN:0:115789224 archive SCN:0:115809474
Switchover: Media recovery required - standby not in limbo
ORA-16139 signalled during: ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN...

At this point we brought up the database as a standby with the below commands:

ALTER DATABASE MOUNT STANDBY DATABASE;

alter database recover managed standby database finish force; 

This caused the last archive logs from the primary to be applied and the following to be displayed in the alert log:

Attempt to do a Terminal Recovery (BOSTON1)
Media Recovery Start: Managed Standby Recovery (BOSTON1)
 started logmerger process
Wed Jul 25 22:24:53 2018
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 64 slaves
Wed Jul 25 22:24:54 2018
Media Recovery Log +DG/BOSTON/1_4912_948659911.arc
Identified End-Of-Redo (switchover) for thread 1 sequence 4912 at SCN 0x0.6e71cc2
Resetting standby activation ID 2219145217 (0x84457801)
Media Recovery End-Of-Redo indicator encountered
Media Recovery Applied through change 115809474
Media Recovery Complete: End-Of-REDO (BOSTON1)
Attempt to set limbo arscn 0:115809474 irscn 0:115809474
Completed: alter database recover managed standby database finish force

After the above, we issued:

alter database commit to switchover to primary with session shutdown;

And the alert log indicated that it was successful:

alter database commit to switchover to primary with session shutdown
ALTER DATABASE SWITCHOVER TO PRIMARY (BOSTON1)
Maximum wait for role transition is 15 minutes.
Backup controlfile written to trace file /opt/oracle/product/diag/rdbms/BOSTON/BOSTON1/trace/BOSTON1_ora_45373.trc
SwitchOver after complete recovery through change 115809474
Online log +DG/BOSTON/redo01.log: Thread 1 Group 1 was previously cleared
Online log +DG/BOSTON/redo02.log: Thread 1 Group 2 was previously cleared
Online log +DG/BOSTON/redo03.log: Thread 2 Group 3 was previously cleared
Online log +DG/BOSTON/redo04.log: Thread 2 Group 4 was previously cleared
Online log +DG/BOSTON/redo05.log: Thread 3 Group 5 was previously cleared
Online log +DG/BOSTON/redo06.log: Thread 3 Group 6 was previously cleared
Online log +DG/BOSTON/redo07.log: Thread 4 Group 7 was previously cleared
Online log +DG/BOSTON/redo08.log: Thread 4 Group 8 was previously cleared
Standby became primary SCN: 115809472
Switchover: Complete - Database mounted as primary
Completed: alter database commit to switchover to primary with session shutdown

At this point the database was in the below mode and switchover status:

SELECT open_mode, database_role, switchover_status FROM v$database;

OPEN_MODE            DATABASE_ROLE    SWITCHOVER_STATUS
MOUNTED              PRIMARY          NOT ALLOWED

We then re-enabled log_archive_dest_state_2, since this database was now the primary, and bounced the database.
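
A sketch of the commands, assuming an spfile (the destination number depends on the Data Guard configuration):

ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE SCOPE=BOTH SID='*';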

shutdown immediate;

startup;

The database was now in the below mode and switchover status:

SELECT open_mode, database_role, switchover_status FROM v$database;

OPEN_MODE            DATABASE_ROLE    SWITCHOVER_STATUS
READ WRITE           PRIMARY          RESOLVABLE GAP

After a couple of minutes, the status changed to:

SELECT open_mode, database_role, switchover_status FROM v$database;

OPEN_MODE            DATABASE_ROLE    SWITCHOVER_STATUS
READ WRITE           PRIMARY          SESSIONS ACTIVE

The standby status was as follows:

SELECT open_mode, database_role, switchover_status FROM v$database;

OPEN_MODE            DATABASE_ROLE    SWITCHOVER_STATUS
MOUNTED              PHYSICAL STANDBY NOT ALLOWED

This is an expected status as per "Physical Standby Switchover_status Showing Not Allowed (Doc ID 1392763.1)". When the command to switch roles is issued on the primary, the primary generates a special end-of-redo (EOR) marker that is placed in the header of the current online redo log sequence. That log sequence is then archived locally and sent to all standby databases. Only after a standby has received and applied the EOR does v$database.switchover_status change from "NOT ALLOWED" to "TO PRIMARY" or "SESSIONS ACTIVE".
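
A quick way to confirm that the end-of-redo archived log has been received and applied on the standby is a query along these lines (a sketch; column availability varies slightly by version):

SELECT thread#, sequence#, end_of_redo_type, applied FROM v$archived_log WHERE end_of_redo_type IS NOT NULL;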

We cut a few logs on the new primary and checked to ensure that they were applied on the standby. We then shut down the standby and brought it up as a 4-node RAC, and then shut down the primary and brought it up as a 4-node RAC.
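
For reference, the log-cut check was along these lines (switch a log on the new primary, then confirm the apply on the standby):

ALTER SYSTEM ARCHIVE LOG CURRENT;

SELECT thread#, MAX(sequence#) FROM v$archived_log WHERE applied = 'YES' GROUP BY thread#;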

ORA-10458, ORA-01196, ORA-01110 errors on a physical standby

This morning we had an issue with a physical standby server. The first notification was that log apply was falling behind. It turned out that the standby server had been rebooted, but the listener and the database had not been restarted afterward. When we started the database, we saw the below errors in the alert log:

Standby crash recovery aborted due to error 16016.
Errors in file /opt/app/oracle/diag/rdbms/dr/PRODDR/trace/PRODDR_ora_21598.trc:
ORA-16016: archived log for thread 1 sequence# 16661 unavailable
Recovery interrupted!
Some recovered datafiles maybe left media fuzzy
Media recovery may continue but open resetlogs may fail
Completed standby crash recovery.
Errors in file /opt/app/oracle/diag/rdbms/dr/PRODDR/trace/PRODDR_ora_21598.trc:
ORA-10458: standby database requires recovery
ORA-01196: file 1 is inconsistent due to a failed media recovery session
ORA-01110: data file 1: '/appl_ware/oradata/data/PRODDR/system01.dbf'
ORA-10458 signalled during: ALTER DATABASE OPEN...

A lot of posts on Google indicated that a recovery was needed. Before doing that, we decided to try starting MRP to see if that would resolve the issue. We issued

ALTER DATABASE MOUNT STANDBY DATABASE;

followed by the command to start MRP

alter database recover managed standby database using current logfile disconnect from session;

and the problem was resolved.
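
To confirm that MRP was back up and applying redo, a check along these lines:

SELECT process, status, thread#, sequence#, block# FROM v$managed_standby WHERE process LIKE 'MRP%';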

ORA-15032, ORA-15017 and ORA-15040 issues with an ASM disk group

While creating a new Oracle database with ASM on an AWS EC2 instance, I encountered the below errors when attempting to mount one of the ASM disk groups:

ALTER DISKGROUP VOTE_DISK mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "VOTE_DISK" cannot be mounted
ORA-15040: diskgroup is incomplete

When I checked v$asm_diskgroup, I saw:

SQL> select NAME,TOTAL_MB,FREE_MB from v$asm_diskgroup;  

NAME                             TOTAL_MB    FREE_MB
------------------------------ ---------- ----------
ASMDATA1                            716796     716726
DBVOT                                    0          0
ASMDATA2                            716796     716726
ASMARCH                             716796     716726

The goal was to create the VOTE_DISK disk group on the DBVOT volume, which was in turn mapped to the /dev/bqdx disk. The AWS Console indicated that /dev/bqdx was attached to the EC2 instance, but it was not visible in the OS for some reason. I issued the below command to partition the disk:

/usr/bin/echo -e "o\nn\np\n1\n\n\nt\n8e\nw" | /usr/sbin/fdisk /dev/bqdx

Which resulted in:

Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): Building a new DOS disklabel with disk identifier 0x29ba6b8d.

Command (m for help): Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): Partition number (1-4, default 1): First sector (2048-104857599, default 2048): Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-104857599, default 104857599): Using default value 104857599
Partition 1 of type Linux and of size 50 GiB is set

Command (m for help): Selected partition 1
Hex code (type L to list all codes): Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

However, when I attempted to create the disk I got the error:

root@10-118-134-71:/root # /usr/sbin/oracleasm createdisk DBVOT /dev/bqdx1
Device "/dev/bqdx1" is already labeled for ASM disk "DBVOT"

I then attempted to drop the disk group

SQL> drop diskgroup DBVOT;
drop diskgroup DBVOT
*
ERROR at line 1:
ORA-15039: diskgroup not dropped
ORA-15001: diskgroup "DBVOT" does not exist or is not mounted

I then tried with the FORCE option and succeeded

SQL> drop diskgroup DBVOT force including contents;

Diskgroup dropped.

I then deleted the disk

root@10-118-134-71:/root # /usr/sbin/oracleasm deletedisk DBVOT 
Clearing disk header: done
Dropping disk: done

and checked to make sure it was no longer visible

[oracle@10-118-134-71 ~]$ /usr/sbin/oracleasm listdisks
DB1
DG10
DG11
DG12
DG2
DG3
DG4
DG5
DG6
DG7
DG8
DG9

Then I recreated the disk

root@10-118-134-71:/root # /usr/sbin/oracleasm createdisk DBVOT /dev/bqdx1
Writing disk header: done
Instantiating disk: done

and listed the disks to confirm

root@10-118-134-71:/root # /usr/sbin/oracleasm listdisks
DG1
DG10
DG11
DG12
DG2
DG3
DG4
DG5
DG6
DG7
DG8
DG9
DBVOT

Then I created the disk in ASM

SQL> CREATE DISKGROUP DBVOT EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/DBVOT';

Diskgroup created.
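
As a quick sanity check that the new disk group was usable, something like:

SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup WHERE name = 'DBVOT';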

TNS-12535, ns secondary err code: 12560 and users unable to log on

This afternoon the users of one of my applications reported that they were unable to sign on using a particular user account. The user account in question was open and unexpired. The users were able to sign on to other accounts without any issues. The alert logs on all nodes of the RAC were being flooded with messages such as:

Fatal NI connect error 12170.

  VERSION INFORMATION:
        TNS for Linux: Version 11.2.0.4.0 - Production
        Oracle Bequeath NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
        TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.4.0 - Production
  Time: 15-JUN-2018 19:17:55
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12535
    
TNS-12535: TNS:operation timed out
    ns secondary err code: 12560
    nt main err code: 505
    
TNS-00505: Operation timed out
    nt secondary err code: 110
    nt OS err code: 0
  Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=xxx.xxx.xxx.xx)(PORT=nnn))
Fri Jun 15 19:18:37 2018

Despite bouncing all nodes of the RAC, flushing the shared pool, and so on, users were still unable to connect with this particular account. Existing connections using this account continued to work.

After some false leads, a colleague noticed that there were a number of library cache locks with the user account set to “null”. These were being generated by a set of new servers that the application was attempting to configure using the user account that was experiencing the issue. After these new servers were shut down and the database bounced, normal functionality was restored.
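
For reference, sessions holding or waiting on these library cache locks can be spotted with a query similar to the one below (a sketch; in our case the username showed as null):

SELECT inst_id, sid, serial#, username, event, seconds_in_wait FROM gv$session WHERE event = 'library cache lock';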

Applying an incremental copy to make a physical standby current

One of our standby RACs had fallen behind in the log apply process:

DESTINATION           THREAD#    APPLIED   NOTAPPLD      DELTA
------------------ ---------- ---------- ---------- ----------
1 - From Primary            1       1187       1231         44
                            2       1520       1572         52
******************            ---------- ---------- ----------
Hash                                2707       2803         96

MRP was waiting on the below log sequence:

   INST_ID PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#
---------- --------- ------------ ---------- ---------- ----------
         2 MRP0      WAIT_FOR_GAP          2       1521          0

This log file was no longer available on the primary, so I had to use an incremental copy to get the standby in sync with the primary. At this point I cancelled MRP on the standby, shut down the RAC via SRVCTL, and started node 2 as a standalone in mount mode.

On the primary, I identified the first change number (SCN) of the log immediately preceding the sequence that the standby had reported as missing. Using this change number, I created an incremental backup:

backup incremental from scn 14521210213 database format '/backup/dean/backup_%U.bak';
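
For reference, the SCN above came from a query on the primary along these lines (the thread and sequence shown are from the gap reported above; use the sequence immediately before the first missing one):

SELECT first_change# FROM v$archived_log WHERE thread# = 2 AND sequence# = 1520;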

I also created a backup of the control file:

backup current controlfile for standby format '/backup/dean/stnd_ctl_file.ctl';

These files were transferred to the standby via FTP commands.

After the files are transferred to the standby, sign into SQLPLUS and stop MRP with

alter database recover managed standby database cancel;

Stop all nodes of the RAC with

srvctl status database -d PRODDB
srvctl stop database -d PRODDB -o immediate

Bring up the database as a single node in SQLPLUS

startup nomount;
Alter database mount standby database;

Exit from SQLPLUS, start RMAN, and catalog the directory to which the backups were FTPed:

catalog start with '/backup/dean';

Then recover the database with:

recover database noredo;

After the recovery is complete, restore the standby control file:

shutdown immediate;
startup nomount;
restore standby controlfile from '/backup/dean/stnd_ctl_file.ctl';

Since the data file locations on the standby are different from those on the primary, the restored control file has to be updated. Start the database in mount mode:

alter database mount standby database;

At this point, the instance thinks the data file locations are those of the primary, as the restored control file came from the primary. Switch the data file locations in RMAN with the below commands (the CATALOG command should point to the data file location on the standby):

catalog start with '+DG_CLU_ORA1/D99Xsn';
switch database to copy;

In my experience, the "switch database to copy;" will switch the undo, sysaux, users and system data files successfully. The others (i.e., application-specific data files) tend to fail with the below error:

RMAN> switch database to copy;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 08/27/2018 17:40:53
RMAN-06571: datafile 41 does not have recoverable copy

Use REPORT SCHEMA to confirm that the data file exists on the standby:

RMAN>  report schema;

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA
Report of database schema for database with db_unique_name IPTSDV

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    0        SYSTEM               ***     +DG_CLU_ORA1/D99Xsn/system.259.841589723
2    0        SYSAUX               ***     +DG_CLU_ORA1/D99Xsn/sysaux.260.841589731
.
.
.
41   1024     emp_stats_DAT_001_DC  ***     +DG_CLU_ORA1/D99Xsn/emp_stats_dat_001_dc.2405.983050989

If the file does exist, catalog it with:

RMAN> catalog datafilecopy '+DG_CLU_ORA1/D99Xsn/emp_stats_dat_001_dc.2405.983050989';

cataloged datafile copy
datafile copy file name=+DG_CLU_ORA1/D99Xsn/emp_stats_dat_001_dc RECID=197 STAMP=985283856

The switch database to copy should now work:

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DG_CLU_ORA1/D99Xsn/system.1853.983049983"
datafile 2 switched to datafile copy "+DG_CLU_ORA1/D99Xsn/sysaux.1793.983050037"
datafile 3 switched to datafile copy "+DG_CLU_ORA1/D99Xsn/undotbs1.1805.983050783"
.
.
.
datafile 41 switched to datafile copy "+DG_CLU_ORA1/D99Xsn/emp_stats_dat_001_dc.2405.983050989"

After this, I started MRP on the last node with:

alter database recover managed standby database using current logfile disconnect from session;

The standby database was now current with the primary

DESTINATION           THREAD#    APPLIED   NOTAPPLD      DELTA
------------------ ---------- ---------- ---------- ----------
1 - From Primary            1       1233       1234          1
                            2       1575       1575          0
******************            ---------- ---------- ----------
Hash                                2808       2809          1

To make sure everything was perfect, I cut a log on the primary and monitored to ensure that was applied on the standby.
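
A further quick check on the standby is the Data Guard transport and apply lag (a sketch, assuming v$dataguard_stats is populated):

SELECT name, value FROM v$dataguard_stats WHERE name IN ('transport lag', 'apply lag');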

Another issue I faced was that the online redo log file entries were still pointing to the primary's log file locations. This can be corrected by running the output of the below SQL:

set line 200
select 'ALTER DATABASE RENAME FILE ' || '''' || member || '''' || ' to ' || '''' || replace(member,'primary_log_location','standby_log_location') || '''' || ';'
from v$logfile where member like '%primary db name%';

Output will look like:

ALTER DATABASE RENAME FILE '+DG1/primary/ONLINELOG/group_2_a.log' to '+DG3/standby/ONLINELOG/group_2_a.log';

Splitting a partition

I encountered a strange setup in which a table had been created as partitioned, but with only a MAXVALUE partition, resulting in what was effectively a single-partition table. An extract of the original DDL was as follows:

  CREATE TABLE "SCHEMA"."TABLE_NAME"
   ( "VERSION_NUM" VARCHAR2(5 BYTE),
       .
       .
       .
     "PARTITION_COLUMN" DATE,
       .
       .
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "TABLE_DAT_001"
  PARTITION BY RANGE ("PARTITION_COLUMN")
 (PARTITION "PARTITION_MAX" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE
  TABLESPACE "DE_SUSP_IND_001" )
  ENABLE ROW MOVEMENT ;

I needed to split this into two partitions such that all rows with a value less than 2017-11-12 would be in a separate partition. I split the partitions as shown:

ALTER TABLE "SCHEMA"."TABLE_NAME"
      SPLIT PARTITION "PARTITION_MAX" AT (TO_DATE('2017-11-12','YYYY-MM-DD')) 
      INTO (PARTITION "PARTITION_20171111", PARTITION "PARTITION_MAX") UPDATE INDEXES PARALLEL
;

The above resulted in two partitions,

PARTITION "PARTITION_20171111"  VALUES LESS THAN (TO_DATE(' 2017-11-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))

And

PARTITION "PARTITION_MAX"  VALUES LESS THAN (MAXVALUE) SEGMENT CREATION IMMEDIATE

After this was complete, I gathered statistics to check that the row count in the partitions matched the actual distribution of data.
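
A sketch of that check (owner and table names are the placeholders from the DDL above):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCHEMA', tabname => 'TABLE_NAME', granularity => 'PARTITION');

SELECT partition_name, num_rows
  FROM dba_tab_partitions
 WHERE table_owner = 'SCHEMA' AND table_name = 'TABLE_NAME';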

Recovering a tablespace with a LOB tablespace/auxiliary table

Due to a miscommunication with the development team, the DBA team accidentally refreshed the data in an environment that was actively being used for a test cycle. The users requested that we recover the database back to the most recent image copy, which we had taken earlier that morning. I created a REXX routine to generate the approximately 600 recover jobs needed to perform this as quickly as possible. Some of the jobs failed because the base table had a LOB column that pointed to an auxiliary table in a LOB tablespace.

The table structure was defined as follows (the DDL has been edited to remove options that are not relevant):

     CREATE TABLE                                             
       PRODA.MY_TABLE_01                                      
        (                                                     
        ACCOUNT_NAME   CHAR(50) NOT NULL WITH DEFAULT          
         FOR SBCS DATA                                        
       ,ACCOUNT_TXT   CLOB(1 M) NOT NULL WITH DEFAULT          
         FOR SBCS DATA                                        
        )                                                     
         IN PRODAB00.TSPACE57  
         CCSID EBCDIC                                         
         NOT VOLATILE                                         
         APPEND NO                                                                           
     ;                                
     CREATE                           
       LOB TABLESPACE TSPACE58        
         IN PRODAB00
         USING STOGROUP PRODASMS      
         PRIQTY -1                    
         SECQTY -1                    
         ERASE NO                         
         GBPCACHE SYSTEM                  
         NOT LOGGED                       
         DSSIZE 4 G                       
         BUFFERPOOL BP32K1                
         LOCKSIZE ANY                     
         LOCKMAX SYSTEM                   
         CLOSE NO                         
     ;                                    
     CREATE AUXILIARY TABLE               
         PRODA.MY_TABLE_01_AUX_01         
         IN PRODAB00.TSPACE58             
         STORES PRODA.MY_TABLE_01         
         APPEND NO                        
         COLUMN ACCOUNT_TXT                
         PART 1                           
     ;                                    

The initial recover was coded as follows:

//SYSIN    DD  *                                   
  RECOVER  TABLESPACE  PRODAB00.TSPACE57           
           TORBA       X'0000000015D79FB6B7E0'     
/*                                                 

This generated the error:

DSNUGUTC -  RECOVER TABLESPACE PRODAB00.TSPACE57 TORBA X'0000000015D79FB6B7E0'
DSNU1316I -DBXC 102 15:41:03.83 DSNUCAIN - THE FOLLOWING TABLESPACES ARE MISSING FROM THE RECOVERY LIST
PRODAB00.TSPACE58

I changed the JCL as follows:

//SYSIN    DD  *                                  
  RECOVER  TABLESPACE  PRODAB00.TSPACE57          
           TABLESPACE  PRODAB00.TSPACE58          
           TORBA       X'0000000015D79FB6B7E0'    
/*                                                

And the recover completed:

DSNUCBMD - RECOVER TABLESPACE PRODAB00.TSPACE57   START
DSNUCBAL - THE IMAGE COPY DATA SET HLQ.PRODAB00.TSPACE57.NVE.F1010801 WITH DATE=20180411 AND TIME=071133 IS PARTICIPATING IN RECOVERY OF TABLESPACE PRODAB00.TSPACE57

DSNUCBMD - RECOVER TABLESPACE PRODAB00.TSPACE58   START                    
DSNUCBAL - THE IMAGE COPY DATA SET HLQ.PRODAB00.TSPACE58.NVE.F1010801 WITH DATE=20180411 AND TIME=071133 IS PARTICIPATING IN RECOVERY OF TABLESPACE PRODAB00.TSPACE58
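
In hindsight, the REPORT utility can be used to list the LOB table spaces that belong to a base table space before coding the RECOVER, so that the complete recovery list is known up front (a sketch):

//SYSIN    DD  *
  REPORT TABLESPACESET TABLESPACE PRODAB00.TSPACE57
/*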

Moving temp table spaces on a RAC with users online

Please see the blog post entitled "Moving undo table spaces on a RAC with users online" for some additional background. In summary, we had to free up space on a certain ASM disk group. We had about 175 GB allocated to temp and decided to move it with users actively accessing the database.

First, take the file offline:

ALTER DATABASE TEMPFILE '+ASM_DG_01/PRODPRM/temp01.dbf' OFFLINE;

Copy the file from the old ASM disk group to the new disk group. Set your environment to the ASM instance's Oracle home and then start ASMCMD.

cp +ASM_DG_01/PRODPRM/temp01.dbf +ASM_DG_04/PRODPRM/temp01.dbf

(Note that this is a copy, not a move.)

Rename the temp file in the database in sqlplus. If you changed your environment to the ASM instance's Oracle home, remember to change back to your RAC instance's Oracle home:

ALTER DATABASE RENAME FILE '+ASM_DG_01/PRODPRM/temp01.dbf' TO '+ASM_DG_04/PRODPRM/temp01.dbf';

Bring it back online:

ALTER DATABASE TEMPFILE '+ASM_DG_04/PRODPRM/temp01.dbf' ONLINE;
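
To confirm the temp file is now served from the new disk group and is online, a check along these lines:

SELECT name, status FROM v$tempfile;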

Now delete the file from the old ASM disk group. Again, set your environment to the ASM instance's Oracle home and then start ASMCMD.

rm +ASM_DG_01/PRODPRM/temp01.dbf

The space utilized in the ASM disk group should now be reduced.

Moving undo table spaces on a RAC with users online

We were running out of space on an ASM disk group and had to free up space ASAP. Since our UNDO tablespaces were sized at about 200 GB and were on the ASM disk group that was full, we decided to move them.

This was the primary database of a Data Guard configuration with a physical standby, so I needed to turn off standby file management:

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='MANUAL';

Create a new undo tablespace for RAC node 01 in the new ASM disk group:

create undo tablespace UNDOTBS1A datafile '+ASM_DG_01/PRODPRM/undotbs1a.dbf' size 20G;

Set the new undo tablespace as the undo tablespace for RAC node 01:

ALTER SYSTEM SET UNDO_TABLESPACE=UNDOTBS1A SCOPE=BOTH SID='PRODPRM1';

Since this is a multi-node RAC, we can take node 01 offline to disconnect any sessions still attached to the old undo tablespace:

srvctl stop instance -d PRODPRM -i PRODPRM1 -o immediate

To be safe, make sure the RAC node is down with:

srvctl status database -d PRODPRM

Also, check the alert log on this particular node to make sure it was a clean shutdown.

With the RAC node down, drop the old undo tablespace:

drop tablespace UNDOTBS1 including contents and datafiles;

Bring up RAC node 01; it will now be using the new undo tablespace:

srvctl start instance -d PRODPRM -i PRODPRM1
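
To confirm the instance came up using the new undo tablespace, a check along these lines:

SELECT inst_id, value FROM gv$parameter WHERE name = 'undo_tablespace';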

Turn standby file management back on:

ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT='AUTO';