printf wrapping/tabbing on spaces in variables

If you need to print the contents of variables that happen to contain spaces via printf, you would typically code as follows:

#!/bin/bash
export var_01='This is variable 01'
export var_02='continuing in variable 02'
printf '%-30s %-30s\n' $var_01 $var_02 >> temp.txt

However, this will output as follows:

This                           is                            
variable                       01                            
continuing                     in                            
variable                       02   

Instead of the expected:

This is variable 01            continuing in variable 02   

Because the unquoted variables undergo word splitting, printf receives each word as a separate argument and reapplies the format string. To prevent this, enclose the variables in double quotes:

#!/bin/bash
export var_01='This is variable 01'
export var_02='continuing in variable 02'
printf '%-30s %-30s\n' "$var_01" "$var_02" >> temp.txt
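
A quick way to see the word splitting in action is a throwaway format that brackets each argument; this is a minimal illustration only (the bracketed format is not part of the original script):

#!/bin/bash
var_01='This is variable 01'
printf '[%s]\n' $var_01      # unquoted: word splitting yields four arguments, format reused four times
printf '[%s]\n' "$var_01"    # quoted: a single argument, prints [This is variable 01]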

Storing the result from a v$ table into a variable in a script

Typically while selecting from a V$ table in a Unix script, I have to escape the “$” character as shown below:

sqlplus -s $sqlplus_command << EOF > backup_check_temp.txt
  select  value 
  from    V\$RMAN_CONFIGURATION 
  where   NAME = 'RETENTION POLICY'
  ;  
  exit;
EOF

The above will successfully run the SQL and write the output in the spool file.

However, if you want the output in a Unix variable for further processing instead of in a spool file, you would change the code to

export ret_period=`sqlplus -s $sqlplus_command << EOF
                   select   value 
                   from     V\$RMAN_CONFIGURATION 
                   where    NAME = 'RETENTION POLICY'
                   ;
EOF`
if  [ -z "$ret_period" ]; then
    export ret_period='Not found'
fi    

This will fail with:

+ export 'ret_period=                select value from V where NAME = '\''RETENTION POLICY'\''
                                       *
ERROR at line 1:
ORA-00942: table or view does not exist'

The reason is that the second method of coding (backtick command substitution) results in two levels of interpretation, each of which requires the “$” character to be escaped. To get the code working, it has to be coded as follows (note the doubled backslash, “\\”):

export ret_period=`sqlplus -s $sqlplus_command << EOF
                   select   value 
                   from     V\\$RMAN_CONFIGURATION 
                   where    NAME = 'RETENTION POLICY'
                   ;
EOF`
if  [ -z "$ret_period" ]; then
    export ret_period='Not found'
fi    
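
As an aside, the extra backslash is only needed because of the backtick form of command substitution. The $( ... ) form does not add that extra backslash-processing pass, so the single \$ escape from the spool-file version should work unchanged; a sketch of that alternative:

export ret_period=$(sqlplus -s $sqlplus_command << EOF
                   select   value
                   from     V\$RMAN_CONFIGURATION
                   where    NAME = 'RETENTION POLICY'
                   ;
EOF
)
if  [ -z "$ret_period" ]; then
    export ret_period='Not found'
fi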

Flashback Data Archive

We had a requirement to track how, when, and by whom changes were made to the data in some of our sensitive tables. In the past, I would accomplish this by creating a trigger that would write before-and-after images to an audit table along with some other information such as the time of the change, the current user making the change, the IP address of the user making the change, etc. With 11g, Oracle introduced “Total Recall”, which was subsequently renamed to Flashback Data Archive (FDA or FBA) in 12c. This feature allows for long-term storage of “undo” data in a separate area known as the flashback archive.

Since my target database was on Oracle 12c, I decided to try out this new technology as opposed to a trigger solution.

Setup steps

Create a TABLESPACE and associated data file

CREATE TABLESPACE "FDA" DATAFILE
  '+DG_ORA1/datafile/fda_01.dbf' SIZE 1G
  NOLOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
  NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;

Create the flashback archive pointing to the above TABLESPACE. Additionally, set the quota and retention

CREATE FLASHBACK ARCHIVE DEFAULT OA_FDA TABLESPACE FDA
  QUOTA 1G RETENTION 30 day;
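
To confirm the archive was created with the intended retention and quota, the data dictionary can be queried; a quick check, assuming the standard DBA_FLASHBACK_ARCHIVE and DBA_FLASHBACK_ARCHIVE_TS views:

SELECT   flashback_archive_name, retention_in_days, status
FROM     dba_flashback_archive
;
SELECT   flashback_archive_name, tablespace_name, quota_in_mb
FROM     dba_flashback_archive_ts
;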

If you add more space to the TABLESPACE associated with the flashback archive, remember to increase the quota assigned above to the flashback archive. It is important to allocate enough space to this TABLESPACE as running out of space here will also impact the operations on the base tables.
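
The quota can be raised without recreating the archive; a hedged example of the ALTER syntax (the 2G figure is illustrative):

ALTER FLASHBACK ARCHIVE OA_FDA MODIFY TABLESPACE FDA QUOTA 2G;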

In order to capture additional information about the user account, terminal, program, etc. that made the change, set the context level as shown below. There are other options for the level; however, I found “ALL” to be a good choice.

exec dbms_flashback_archive.set_context_level(level=> 'ALL');

Give the schema owner quota on the flashback archive TABLESPACE

ALTER USER ACCT_OWNER QUOTA UNLIMITED ON FDA;
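
If the schema owner will run the ALTER TABLE ... FLASHBACK ARCHIVE statements itself rather than having a DBA do it, it may also need the object privilege on the archive; a hedged example:

GRANT FLASHBACK ARCHIVE ON OA_FDA TO ACCT_OWNER;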

Add the tables that you want to monitor to the flashback archive

ALTER TABLE "ACCT_OWNER"."ACCT_MASTER"  FLASHBACK ARCHIVE OA_FDA;

For every table that is added to the flashback archive, Oracle creates three additional tables named

SYS_FBA_DDL_COLMAP_91245
SYS_FBA_HIST_91245
SYS_FBA_TCRV_91245

I am not exactly sure how the number is assigned, but every table added to the flashback archive gets a number and these three tables. They are created in the flashback data archive TABLESPACE created at the beginning of this process.
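
A reasonable guess is that the numeric suffix is the OBJECT_ID of the base table; this is an assumption, but it is easy to compare with a lookup such as:

-- Compare the suffix (e.g. 91245) with the base table's object id (assumed correspondence)
SELECT   object_id
FROM     dba_objects
WHERE    owner = 'ACCT_OWNER'
  AND    object_name = 'ACCT_MASTER'
  AND    object_type = 'TABLE'
;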

You can list the tables that are currently in the flashback data archive with the following SQL statement

SELECT     *
FROM       dba_flashback_archive_tables
ORDER BY   owner_name, table_name
;

Viewing the information if you have the key columns

The following SQL statement will display the changes to a given row of the table. Note that the SQL statement is run on the actual table and the optimizer uses the indexes on that table, so code the predicates appropriately

SELECT 
         versions_startscn
        ,versions_starttime
        ,versions_endtime
        ,versions_xid
        ,case versions_operation
              when 'D' then 'Delete'
              when 'I' then 'Insert'
              when 'U' then 'Update'
              else          versions_operation 
         end as Action    
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','SESSION_USER') AS session_user
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','HOST') AS host
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','module') AS module
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','os_user') AS os_user
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','ip_address') AS ip_address
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','AUTHENTICATED_IDENTITY') 
               as AUTHENTICATED_IDENTITY
        ,DBMS_FLASHBACK_ARCHIVE.get_sys_context
              (versions_xid, 'USERENV','DATABASE_ROLE') as test
        ,RO.*
FROM    "ACCT_OWNER"."ACCT_MASTER"  versions between 
                 timestamp (systimestamp - interval '1' day) and 
                 systimestamp RO
WHERE   key_value_col = 1622310
  and   versions_xid is not null
  order by 
        versions_starttime desc
;

Note:
a. The table is aliased to RO at the end of the FROM clause (i.e. after the versions between part of the SQL). This allows me to see all the columns with RO.*.

b. In some cases I saw rows where versions_xid was not populated, and this causes the DBMS_FLASHBACK_ARCHIVE.get_sys_context calls to fail; hence the versions_xid is not null predicate.

c. If you attempt to read data from before any changes have been made and written to the flashback archive, you get a null error. Adjust the time period specified in the versions clause above.

d. There is a slight delay between modifying the table and seeing the change in the flashback data archive. Changes to the flashback archive are written by a background process called “fbda” (a quick check is shown below). This is not a significant issue, and I assume I was only able to notice it because I was testing on a quiet system where I was the only user.
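
To confirm that the fbda process is present, it can be looked up in the background-process view; a minimal check, assuming the standard V$BGPROCESS view:

SELECT   name, description
FROM     v$bgprocess
WHERE    name LIKE 'FBDA%'
;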

Viewing the information if you DO NOT have the key columns
The above SQL will answer the question of who modified the data in a particular row. If you are attempting to identify all the changes that were made to a given table (i.e. you do not have key column values), the above SQL will not return in any reasonable amount of time. The issue is that each row in the result set initiates three calls to DBMS_FLASHBACK_ARCHIVE.get_sys_context for every column that uses the function. For example, if there are 1,000 rows in the flashback archive and I am using the get_sys_context column 7 times in the above SQL, there will be (1,000 rows * 7 columns * 3 calls) = 21,000 calls to DBMS_FLASHBACK_ARCHIVE.get_sys_context.

Also, note that the versions clause will list all of the rows that existed in the specified timeframe, not just the rows that were modified in that timeframe. This is discussed further below.

As a result of these gotchas, the SQL below performs better if you are searching for all the rows that were modified in a given time frame

SELECT 
         versions_startscn
        ,versions_starttime
        ,versions_endtime
        ,versions_xid
        ,case versions_operation
              when 'D' then 'Delete'
              when 'I' then 'Insert'
              when 'U' then 'Update'
              else          versions_operation 
         end as Action    
        ,RO.*
FROM    "ACCT_OWNER"."ACCT_MASTER"  versions between 
                 timestamp (systimestamp - interval '1' day) and 
                 systimestamp RO
  order by 
        versions_starttime desc
;

The above query assumes that you want to see the changes in descending chronological order. If you want to see the information by a key column, then order by that column. After you have identified the particular row that you are interested in, use the key column value and run the earlier SQL with the DBMS_FLASHBACK_ARCHIVE.get_sys_context columns.

Altering a table – Adding a column
If you add a column, the rows that already existed in the flashback archive now also show the new column, with no values, which is pretty much the expected behavior for any table.

Altering a table – Modifying a column
If you modify an existing column of the table that is currently in the flashback archive, you will see a new column named something like M_50389_old_column_name together with the previous column.

Dropping a table that is part of the flashback archive
Before dropping a table that is part of the flashback archive, you need to remove it from the flashback data archive; otherwise you will get the error

drop table "ACCT_OWNER"."ACCT_MASTER_DEAN"
Error report -
ORA-55610: Invalid DDL statement on history-tracked table
55610. 00000 -  "Invalid DDL statement on history-tracked table"
*Cause:    An attempt was made to perform certain DDL statement that is
           disallowed on tables that are enabled for Flashback Archive.
*Action:   No action required.

Remove the table from the flashback archive as follows

ALTER TABLE "ACCT_OWNER"."ACCT_MASTER"         NO FLASHBACK ARCHIVE;

and then it can be dropped.

Any attempt to read the flashback data for a table that has been dropped results in the error

ORA-01466: unable to read data - table definition has changed
01466. 00000 -  "unable to read data - table definition has changed"
*Cause:    Query parsed after tbl (or index) change, and executed
           w/old snapshot
*Action:   commit (or rollback) transaction, and re-execute

Rows that exist vs. rows that were modified in the versions interval

When I started this exercise, I assumed that a query based on versions between timestamp “A” and timestamp “B” would only show me the rows that were modified between these two timestamps. However, my queries would return rows that had been modified outside of this range. This is documented at Oracle support in Doc id 2686158.1 which states “the VERSIONS BETWEEN clause is used to qualify rows which “existed” in the time range, and it is not something used to qualify rows which were “inserted” in the time range”. The column versions_starttime indicates the time of the actual change.
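
Since versions_starttime indicates the time of the actual change, one way to restrict the result to rows actually modified in the window is to add a predicate on it; a sketch based on the earlier query:

SELECT   versions_starttime, versions_operation, RO.*
FROM     "ACCT_OWNER"."ACCT_MASTER"  versions between
                 timestamp (systimestamp - interval '1' day) and
                 systimestamp RO
WHERE    versions_starttime >= systimestamp - interval '1' day
ORDER BY versions_starttime desc
;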

Performance implications
In a quiet system, I ran a workload of 10,000 inserts, 10,000 updates followed by 10,000 deletes. Each SQL statement was followed by a commit in an attempt to simulate an OLTP workload. After the AWR snapshot had rolled over to a new snapshot interval, I turned on the flashback archive and ran the same workload. An AWR difference report did not show any noticeable difference in resource consumption. However, this could have been due to the small size of my workload. Users testing in a somewhat busy nonproduction environment were also unable to identify any perceptible increase in elapsed time.

Locking
I naively assumed that since the rows were being written to the flashback archive via the FBDA process there would be no locking. However, we did see instances of locking. A colleague found the documents below on Oracle support:

2423880.1 - FBDA: Massive Performance Problem After 
            enabling FDA (Doc ID 2423880.1)
            Request patch for bug 22239699
 
22239699  - Bug - FBDA TCRV delete maintenance colliding with 
                  archiving delete causing TX contention 
 
2344671.1 - Flashback Data Archive: Error ORA-1 and ORA-60 In 
            SQL Merge Into SYS_FBA_TCRV DEADLOCK ERRORS
 
2370465.1 - FDA - Flashback Data Archive Usage and Best 
            Practices (a.k.a. Total Recall)

Summary
In summary, it was relatively easy to set up the flashback archive and add or remove tables. Volume testing in a nonproduction environment has not shown any noticeable increase in resource consumption. Ensuring that there is enough space in the flashback archive TABLESPACE is very important, as running out of space impacts the application.
