The downside of making ALTER SYSTEM commands with scope=spfile in a RAC environment

There are often occasions where I have to change database parameters, some of which can only be modified with scope=spfile as opposed to scope=both. The intention is to make the change now and pick it up on the next scheduled bounce of the RAC. Recently, a colleague had made such a change and we were waiting for the next opportunity to bounce the RAC. Unfortunately, the server supporting node 4 of the RAC crashed, and after the server came back up, Cluster Services attempted to restart node 4. At this point we encountered the below error:

[srvrnode4-OMZPOAW4] srvctl start instance -d RACDB -i RACDB4
PRCR-1013 : Failed to start resource ora.racdb.db
PRCR-1064 : Failed to start resource ora.racdb.db on node srvrnode4
CRS-5017: The resource action "ora.racdb.db start" encountered the following error: 
ORA-01105: mount is incompatible with mounts by other instances
ORA-01677: standby file name conversion parameters differ from other instance
. For details refer to "(:CLSN00107:)" in "/oracle/product/diag/crs/srvrnode4/crs/trace/crsd_oraagent_oracle.trc".

At this point, nodes 1-3 were running with a set of parameters in their memory which differed from the parameters that were in the spfile that node 4 was attempting to start with. We were then left with two options:

a. shutdown nodes 1-3 of the RAC and start all nodes together from the spfile

b. reset the spfile to the values that nodes 1-3 were running with and start node 4

Option a was not possible as this would have impacted the users. Option b was a difficult choice because:

01. we would undo the work we had done pending a bounce

02. since a few days had passed since the ALTER SYSTEM scope=spfile commands were issued, we were not 100% sure of all the changes that had been made

We decided to back up the spfile to a pfile (preserving all the desired changes) and then compare the pfile against the parameters in memory on nodes 1-3. For each of the differences, we issued an ALTER SYSTEM command with scope=spfile. When this was complete, we were able to start node 4, as the spfile now matched what was in the memory of nodes 1-3. Instead of going back to square one with the desired changes in the spfile pending a bounce, we changed the .profile to display the required commands as a reminder to issue them with the next bounce of the RAC.
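The comparison step can be sketched in Python. This is a rough illustration only, assuming the spfile has been backed up to a pfile and the in-memory settings of nodes 1-3 have been spooled to a second name=value file; the file names and helper functions here are hypothetical, not part of the actual incident.

```python
def load_params(path):
    """Parse a pfile-style name=value file into a dict."""
    params = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            name, sep, value = line.partition('=')
            if sep:
                # Strip any instance prefix such as '*.' or 'RACDB1.'
                params[name.strip().split('.')[-1]] = value.strip()
    return params


def diff_params(spfile_path, memory_path):
    """Return {parameter: (spfile_value, in_memory_value)} for mismatches."""
    spfile = load_params(spfile_path)
    memory = load_params(memory_path)
    return {name: (value, memory.get(name))
            for name, value in spfile.items()
            if memory.get(name) != value}
```

Each parameter reported by diff_params would then be a candidate for an ALTER SYSTEM ... scope=spfile to bring the spfile back in line with what nodes 1-3 are running.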

REXX RC=1 SQLSTATE=01506 on DSNREXX PREPARE statement after migrating to DB2 12

A colleague had a REXX process in place that had been working with DB2 11. After the upgrade to DB2 12, the REXX exec started failing with RC=1 on a PREPARE statement. The SQL statement being prepared contained logic to generate the boundaries used in the predicates referencing a timestamp column in a table:

  FROM creator.table_name       
WHERE TIME_STAMP_COL BETWEEN             
      'EndTS' - 2 HOURS-01 MONTHS  
  AND 'EndTS' + 2 HOURS-01 MONTHS  
   OR TIME_STAMP_COL BETWEEN             
      'EndTS' - 2 HOURS-02 MONTHS  
  AND 'EndTS' + 2 HOURS-02 MONTHS  
   OR TIME_STAMP_COL BETWEEN             
      'EndTS' - 2 HOURS-03 MONTHS  
  AND 'EndTS' + 2 HOURS-03 MONTHS  
   OR TIME_STAMP_COL BETWEEN             
      'EndTS' - 2 HOURS-04 MONTHS  
  AND 'EndTS' + 2 HOURS-04 MONTHS  
GROUP BY DATE(TIME_STAMP_COL)       

The REXX code PREPAREs this statement with

address DSNREXX "EXECSQL PREPARE S40 FROM :SQL" 

In DB2 11, this would return RC=0. After the upgrade to DB2 12, this started returning RC=1. The SQLCA contained:

SQLCODE = 0
SQLERRD.1 = 0
SQLERRD.2 = 0
SQLERRD.3 = 999
SQLERRD.4 = 1142028335
SQLERRD.5 = 0
SQLERRD.6 = 0
SQLWARN.1 =
SQLWARN.2 =
SQLWARN.3 =
SQLWARN.4 = 
SQLWARN.5 =           
SQLWARN.6 = W
SQLWARN.7 =  
SQLWARN.8 =
SQLWARN.9 =
SQLWARN.10 =          
SQLERRP = DSN
SQLSTATE = 01506 <---
SQLERRMC =    

Per IBM

01506 = An adjustment was made to a DATE or TIMESTAMP value to correct an invalid date resulting from an arithmetic operation.

The issue arose because the month arithmetic in the SQL could generate an invalid date for February (for example, February 30). Under DB2 11, the PREPARE had silently ignored this; DB2 12 surfaces the adjustment as the SQLSTATE 01506 warning.
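To see how an invalid February date can arise, here is a small Python sketch (not DB2 code) of month arithmetic that clamps the day to the end of the target month, the same kind of adjustment DB2 reports with SQLSTATE 01506:

```python
import calendar
import datetime


def add_months(ts, months):
    """Shift a timestamp by a number of months. If the original day does
    not exist in the target month (e.g. February 30), clamp it to the
    month's last day and flag that an adjustment was made."""
    month_index = ts.month - 1 + months
    year = ts.year + month_index // 12
    month = month_index % 12 + 1
    last_day = calendar.monthrange(year, month)[1]
    adjusted = ts.day > last_day
    return ts.replace(year=year, month=month, day=min(ts.day, last_day)), adjusted


# April 30 minus 2 months would be 'February 30' -- an invalid date,
# so the day is adjusted down to February 28 and flagged.
result, adjusted = add_months(datetime.datetime(2019, 4, 30, 12, 0), -2)
```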

Removing Database Vault from an Oracle 12c database

We had a change in requirements that necessitated the removal of Database Vault from a 12c database. Initially, the documentation indicated that this was not possible. However, a colleague provided the solution. The steps are:

As SYS run:

EXEC DBMS_MACADM.DISABLE_DV;

Output should be:

PL/SQL procedure successfully completed.

Now bounce the database. After the bounce, as SYS run

alter session set recyclebin=off;

Followed by:

@?/rdbms/admin/dvremov.sql

Script to connect to multiple Linux servers with Python & Paramiko

I needed a quick and dirty script to check whether I had an account and a valid password on a list of servers. Instead of manually logging on to each server to confirm my access, I created a Python script using Paramiko. The script reads a text file containing a list of servers, logs on to each server, and tests whether the logon with my user account succeeds. For the script to execute successfully, you need to have Paramiko installed in addition to Python.

Script:

import time
import paramiko
import getpass

my_id = 'johnsmith'
my_password = getpass.getpass('Password:')

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

out_file = open('connection_results.txt', 'w')
in_file = open('list_of_servers.txt', 'r')
for host in in_file:
    host = host.strip()
    if not host:
        continue
    print("Checking server", host)

    try:
        ssh.connect(host, username=my_id, password=my_password)

        # Open an interactive shell and display the login banner
        terminal = ssh.invoke_shell()
        terminal.send('\n')
        time.sleep(5)
        print(terminal.recv(9999).decode('utf-8'))

        # Run a simple command to confirm the logon actually worked
        (stdin, stdout, stderr) = ssh.exec_command('hostname')
        for line in stdout.readlines():
            print("Connected to", line.strip())
            out_file.write("connected to " + line)

        terminal.send('exit\n')
        ssh.close()

    except Exception:
        out_file.write("Could not connect to " + host + "\n")

in_file.close()
out_file.close()

SQL*Loader-704/ORA-12154 errors

A user was attempting to run SQLLDR with the below command:

sqlldr sys@DBUAT control=/db1/DB_UAT_data.ctl log=/db1/DB_UAT_data.log

The above command prompted for the SYS password and then displayed the below error message:

SQL*Loader-704: Internal error: ulconnect: OCIServerAttach [0]
ORA-12154: TNS:could not resolve the connect identifier specified 

A TNSPING to the DBUAT database was successful.

The issue was caused by the SYS password containing the “@” symbol. A quick workaround was to change the command to:

sqlldr \'/ as sysdba \' control=/db1/DB_UAT_data.ctl log=/db1/DB_UAT_data.log

ORA-31693, ORA-29913, ORA-31640, ORA-19505, ORA-27037, Linux-x86_64 Error: 2: No such file or directory on IMPDP

I recently stepped in for another Database Administrator who was no longer with the group. The project being supported was in the middle of migrating from a non-RAC environment to a 4 node RAC environment. The move was designed to be accomplished via expdp, FTP, and then impdp. I typically separate the schemas from the data, as this allows me a moment to check that all the schema objects migrated successfully before performing the usually larger data migration.

In this particular case, the schemas imported successfully. When I started the data import I got the below errors:

Import: Release 11.2.0.4.0 - Production on Mon Feb 25 12:49:46 2019

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Data Mining and Real Application Testing options
Master table "SYS"."DEAN_IMPDP_DATA" successfully loaded/unloaded
Starting "SYS"."DEAN_IMPDP_DATA":  "/******** AS SYSDBA" parfile=dean_impdp_data.par 
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
ORA-31693: Table data object "TABLE_OWNER"."TABLE_NAME" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31640: unable to open dump file "/data_pump/dean_expdp_data.dmp" for read
ORA-19505: failed to identify file "/data_pump/dean_expdp_data.dmp"
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

What was further misleading was that some tables reported the above error while others loaded successfully.

The file /data_pump/dean_expdp_data.dmp existed and had the required permissions.

The root cause of the error turned out to be that the data was exported from a single instance server and the import was being run on a RAC. The import operations for some of the tables were being tasked to the other nodes of the RAC that did not have access to the /data_pump/dean_expdp_data.dmp file and hence the “ORA-27037: unable to obtain file status“ errors.

I changed the import’s par file to add “CLUSTER=NO” and the issue was resolved.
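For reference, the relevant portion of the parameter file ended up along these lines (the directory object name here is illustrative, not the actual value used):

```
DIRECTORY=data_pump_dir
DUMPFILE=dean_expdp_data.dmp
CLUSTER=NO
```

With CLUSTER=NO, Data Pump confines the worker processes to the instance the job was started on, so every worker can see the dump file.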

Creating a database link on behalf of another user

I had to create a database link on behalf of another user whose password I did not know. This was performed as the “SYS” user in an Oracle 11.2.0.4.0 environment.

The requirements were as follows:
a. User test_user would connect to test_db
b. Execute an SQL over a db link as uat_user on uat_db
c. Database link should be named test_2_uat
d. uat_user's password on uat_db was known. Assume it is mypassword
e. the tnsnames file has an entry for uat_db, which can be verified with
tnsping uat_db

Steps:
01. Log on to test_db as SYS user

02. Create the below procedure:

CREATE or replace PROCEDURE test_user.DEAN_DB_LINK AS
BEGIN
  EXECUTE IMMEDIATE 'CREATE DATABASE LINK test_2_uat '
       ||'CONNECT TO UAT_USER IDENTIFIED BY "mypassword" '
       ||'USING ''uat_db''';
END DEAN_DB_LINK;
/

03. After the above is created, grant test_user the privilege to create a database link:

GRANT CREATE DATABASE LINK TO TEST_USER;

04. Execute the above procedure (as SYS)

EXECUTE TEST_USER.DEAN_DB_LINK;

05. Check that the link was created

SELECT * FROM DBA_DB_LINKS;

06. Revoke the privilege from test_user

REVOKE CREATE DATABASE LINK FROM TEST_USER;

Update – March 21, 2019

A friend reminded me to clean up the procedure:

drop PROCEDURE test_user.DEAN_DB_LINK;

This is based on the information at:
How to Create a Database Link in Another User’s Schema

Cleaning up Oracle audit files resulting in “cannot execute [Argument list too long]”

Just a quick little Unix tip:

Over the Christmas/New Year's holidays, one of my applications' audit file destinations started filling up, causing the database to block new logons and generate page outs. The root cause appeared to be a script on an application server that, for some unknown reason, was attempting to log on and failing about 2,000 times a minute. This generated roughly the same number of audit files, filling the mount point and/or exceeding the inodes. The regular housekeeping script was unable to clean up the files as it was executing a command similar to:

rm *_ora_3*_20181231*.aud

This would result in the error:

-ksh: rm: /bin/rm: cannot execute [Argument list too long]

This being the holiday season, we were unable to find a contact to either stop or fix the application script. After some Google searches, I was able to rapidly delete the files with the below command:

find /app/oracle/admin/instance1/adump -maxdepth 1 -name "*.aud" -print0 | xargs -0 rm
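As an alternative to shelling out, the same cleanup can be sketched in Python; deleting files one unlink at a time sidesteps the oversized argument list that broke the plain rm. The function name and default pattern below are just placeholders:

```python
import fnmatch
import os


def purge_audit_files(adump_dir, pattern='*.aud'):
    """Delete files matching pattern in adump_dir, one unlink at a time,
    so no oversized shell argument list is ever built."""
    removed = 0
    with os.scandir(adump_dir) as entries:
        for entry in entries:
            if entry.is_file() and fnmatch.fnmatch(entry.name, pattern):
                os.unlink(entry.path)
                removed += 1
    return removed
```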

Using Python to search for key words in text files

As I have previously blogged, Python is very useful for solving everyday tasks. This week, I was tasked with reviewing resumes of potential candidates. As some of the resumes were unexpectedly long (8 pages in one instance), I came up with a list of search terms and then used the below Python code to identify the presence of these terms in each resume. The script generates a skills matrix of candidate name and skill as a CSV that can be opened in Excel and filtered as needed. Additionally, it counts the number of search terms found in each resume to help rank candidates.

This process does not in any way replace reading the resumes. It is just a convenient way to generate a cross reference of skills and candidates.

import os

searchstrings = ('install', 'patch', 'upgrade', 'migrate', 'shell', 'scripts', 'rac', 'asm', 'performance', 'data guard', 'golden gate', 'import', 'export', 'pump', 'loader', 'rman', 'recovery', 'tde', 'db2', 'mysql', 'sybase', 'mongo', 'teradata', 'postgres', 'postgresql', 'cassandra', 'mssql', 'aws', 'jenkins', 'json', 'cloud', 'oci')
src_dir = 'C:/temp/Resumes/temp/'  # base directory containing the resumes

report_file_name = 'C:/temp/Resumes/temp.txt'
report_file = open(report_file_name, 'w')

for resume in os.listdir(src_dir):
    path = os.path.join(src_dir, resume)
    with open(path) as resume_file:
        for line in resume_file:
            for word in searchstrings:
                if word in line.lower():
                    report_file.write(resume + ',' + word + '\n')

report_file.close()

#
## Sort the data to remove the duplicates
#
duplicate_lines = open(report_file_name, 'r').readlines()
content_set = sorted(set(duplicate_lines))

unique_file_name = 'C:/temp/Resumes/report.csv'
unique_file = open(unique_file_name, 'w')
for line in content_set:
    unique_file.write(line)
unique_file.close()

#
## Count the number of skills that each person has
#
prev_name = None
skill_ctr = 0
unique_file = open(unique_file_name, 'r')
for line in unique_file:
    curr_name = line.split(',')[0]
    if curr_name == prev_name:
        skill_ctr += 1
    else:
        if prev_name is not None:
            print(prev_name, 'has', skill_ctr, 'skills.')
        prev_name = curr_name
        skill_ctr = 1  # count the line that starts the new name
if prev_name is not None:
    print(prev_name, 'has', skill_ctr, 'skills.')
unique_file.close()