The MS PowerPoint below contains the steps to build a shadow database.
Category: Databases
Adabas 8391 Blocksize
Steps taken to convert to large block size
1. Unload the existing database contents using ADAORD with the RESTRUCTUREDB option
ADAORD RESTRUCTUREDB DBASSODEV=8391,DBDATADEV=8391,LPB=32760
2. Reformat existing datasets
ADAFRM ASSOFRM SIZE=26704,DEVICE=8391
3. Redefine database using ADADEF
ADADEF DEFINE DBNAME='TEST-DATABASE-44'
ADADEF ASSOSIZE=26704,DATASIZE=30042,WORKSIZE=1100
ADADEF MAXFILES=255
*
ADADEF FILE=1,CHECKPOINT
ADADEF NAME='CHECKPOINT-FILE',MAXISN=50000,UISIZE=5B
ADADEF NISIZE=20B,DSSIZE=500B
/*
4. Reload the restructured data from step 1
ADAORD STORE ALLFILES,OVERWRITE
Performance recommendations for ADARUN parameters:
All of these notes assume a conversion from 3390 to 8391; other blocksizes are not considered. There are four major areas to consider:
A. LBP
B. Work dataset size and parameters.
C. PLOG parameters.
D. Cache parameters.
A. Increase LBP.
Double the size of the old value.
NOTE: You may have to increase the region size on the nuc step. The practical upper limit on the buffer pool size is driven by the number of buffer pool headers that must be scanned on every buffer flush. Traditionally, we like to stay under 30-40M. With 8391, there are fewer headers (the ASSO blocksize is 1.6x bigger), so the buffer pool can be larger.
CAUTION: If you do not increase LBP, performance may be degraded due to more I/O (fewer blocks fit in LBP) and an increased number of buffer flushes (the pool fills with update blocks, leaving none to overwrite). Also consider doubling the size of LFIOP if this is a large-volume database and the need is substantiated by the PDS history (mbr=APSDB2).
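For illustration only (the numbers below are hypothetical, not taken from any particular database), the ADARUN change might look like this, with the region size on the nuc step raised to match:
Old 3390 setting: ADARUN LBP=30000000
New 8391 setting: ADARUN LBP=60000000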
B. Work dataset size and parameters.
Considerations:
1. Has work dataset size been increased or not?
2. Parameters for the primary versus secondary?
(AD.P.ADA---.MPM.CNTL(ADARUN or SECRUN))
3. Ideal sizes for Work or just convert?
1. If the overall size of Work has been increased and it is at least 2x larger than the old, you can just use the same LP and LWKP2 values as before. If it is not at least 2x bigger or you want ideal sizes, then move on.
2. Secondary LP and LWKP2 sizes.
a. Make changes to:
AD.P.ADA---.MPM.CNTL(SECRUN)
AD.P.ADA---.MPM.CNTL(SECNUCPS)
b. Determine size of SEC Work in blocks for an 8391.
Either do an Adarep on the secondary or use:
(number of cylinders) * 60
Call this WORKSIZE
c. Use LWKP2=100 (100 blocks)
d. For LP, use the smaller of
LP=65500
or
LP= WORKSIZE - 200
(A worked example of the secondary and primary sizing appears at the end of this section.)
3. Primary LP and LWKP2 sizes, straight conversion.
a. Make changes to:
AD.P.ADA---.MPM.CNTL(ADARUN)
AD.ADA---.CNTL(ADA---PS)
b. If the overall size of Work is the same as before (same number of cylinders) and you do not want ideal sizes, just the equivalent of the old settings, then use:
New LP = (old LP) * 0.42 (can round up)
New LWKP2 = (old LWKP2) * 0.42 (can round up)
4. Primary LP and LWKP2 sizes, ideal sizes.
a. Make changes to:
AD.P.ADA---.MPM.CNTL(ADARUN)
AD.ADA---.CNTL(ADA---PS)
b. Determine size of the primary Work in blocks for an 8391.
Either do an Adarep on the primary or use:
(number of cylinders) * 60
Call this WORKSIZE
c. Work III size.
For this client, use 1000 blocks.
NOTE: For other shops, more rigorous sizing formulas apply. Work III needs to be at least 50 blocks (otherwise the nuc won't start).
If users get a response code 73, then this area is currently filled up. The immediate problem could be resolved by stopping the Sx users or issuing DELUI commands. We can then revisit resizing this area, probably by shrinking LP.
d. For LWKP2, use
New LWKP2 = (old LWKP2) * 0.42 (can round up)
NOTE: This sizing applies to this client only; for other shops, more rigorous formulas apply.
Work II is used only for complex searches, which are a rarity here. The percentage of complex searches here is less than 1%, so the chance of two running concurrently is low; this sizing is much smaller than at most other Adabas shops. In the worst case, if multiple very large complex searches need to run, there may be a longer delay before they get to run (but maybe that is fitting punishment). If someone tries a complex or non-descriptor search on a file of more than 16M, they will get a response code 74 (which maybe they should). We can reconsider this sizing at a later time.
e. For LP, use the smaller of
LP=65500
or
LP= WORKSIZE - LWKP2 - 1000
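As a worked example of the sizing arithmetic above (the cylinder counts and old values are hypothetical, purely to illustrate the formulas):
Secondary Work of 300 cylinders:
WORKSIZE = 300 * 60 = 18000 blocks
LWKP2 = 100
LP = smaller of 65500 and (18000 - 200) = 17800
Primary Work, straight conversion, old LP of 40000 and old LWKP2 of 2000:
New LP = 40000 * 0.42 = 16800 (round up)
New LWKP2 = 2000 * 0.42 = 840 (round up)
Primary Work of 500 cylinders, ideal sizes, old LWKP2 of 2000:
WORKSIZE = 500 * 60 = 30000 blocks
Work III = 1000 blocks
LWKP2 = 2000 * 0.42 = 840, rounded up to 900
LP = smaller of 65500 and (30000 - 900 - 1000) = 28100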
C. PLOG parameters.
1. DUALPLD=8391
2. Determine size of PLOG in cylinders.
DUALPLS= cyls * 60
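For example, a dual PLOG allocated on 100 cylinders (a hypothetical figure) gives:
DUALPLD=8391
DUALPLS = 100 * 60 = 6000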
D. Cache parameters.
Not fully explored at this time. ADARUN CASSOMAXS and CDATAMAXS should be doubled to keep the same number of blocks in memory, but paging problems may result.
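As a hypothetical illustration (the starting values are invented), doubling the cache parameters might look like:
Old 3390 settings:
ADARUN CASSOMAXS=100000000
ADARUN CDATAMAXS=200000000
New 8391 settings (doubled):
ADARUN CASSOMAXS=200000000
ADARUN CDATAMAXS=400000000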
Mysterious contention!
One of our applications began to experience "ORA-02049: timeout: distributed transaction waiting for lock" errors during their processing. The application was connecting via a service name to a four node RAC cluster.
GoldenGate replication from DB2 on z/OS to Oracle on Linux
This is a short presentation on the operational commands used to support GoldenGate replication from DB2 on z/OS to Oracle on Linux. Click on the link below to open an MS PowerPoint presentation.
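The presentation itself contains the details, but for context, day-to-day GoldenGate operations are driven from GGSCI; a minimal sketch of the kinds of commands involved (the group names EXTDB2 and REPORA are placeholders, not taken from the presentation):
INFO ALL (status and lag of every Extract and Replicat group)
START EXTRACT EXTDB2 / STOP EXTRACT EXTDB2 (control the capture process on z/OS)
START REPLICAT REPORA / STOP REPLICAT REPORA (control the apply process on Linux)
LAG REPLICAT REPORA (how far behind the apply side is running)
STATS EXTRACT EXTDB2, TOTAL (operation counts since startup)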
Adding an existing table to replication without taking an outage
The application I support replicates a subset of the tables in the production environment via GoldenGate to a reporting database. Below are the steps to add an existing production table (with data) to the replication process without taking an outage.
DB2 10 to DB2 11 migration production subsystems phase 1
On September 24th we migrated 5 standalone DB2 subsystems and 2 subsystems that were members of the same data sharing group. The migrations themselves had no issues, and we were up and running on DB2 11 within a few hours on all of these subsystems.
Within a few hours, and over the next couple of days, we started having several issues.
- SQL failures; research determined that one subsystem's UNION_COLNAME_7 zparm was set to the default value of NO instead of YES. Correcting the zparm corrected the problems.
- Dynamic SQL query running long. The query had an IN subquery, and we created an index to get the query to run in a timely manner. (LAC4 problem)
- Another dynamic SQL query running long. This query also had an IN subquery, and we created another index to get it to run in a couple of minutes instead of an hour and a half. (FLG3 problem)
- Static SQL query running long after its package was rebound during the DB2 11 upgrade. This query also had an IN subquery and was now consuming 20 CPU minutes instead of 90 CPU seconds. (TTMAGP1 problem)
Since we had three SQL performance issues, all involving IN subqueries, we questioned whether a common defect had been introduced with DB2 11, so we opened a Service Request with IBM for research. We provided documentation on each issue, and IBM eventually determined that two of them were separate defects with APARs opened, while the FLG3 issue was simply data skew and an access path selection change with DB2 11, which is something we deal with every time we upgrade DB2 versions.
APAR PI71415 was created for the LAC4 issue above. The FLG3 issue was the data skew issue, and IBM provided a workaround to modify the subquery to prefer a better access path. APAR PI70237 was created for the TTMAGP1 issue above.
IBM did supply APAR fixes for each of the above items, and we successfully tested them, but at this time we are waiting for APAR closure on the two items, with an actual PTF to apply and test, before continuing our DB2 11 migrations.
Load JCL with a cursor to copy data from a remote DB2 subsystem
This is an example of a job that uses a cursor in a load utility to copy data from a remote DB2 subsystem to a local DB2 subsystem.
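The full JCL is behind the link, but a minimal sketch of the underlying technique (the DB2 cross-loader: a LOAD utility reading from a declared cursor that selects from a three-part remote name) might look like the lines below. The subsystem name DB2A, the location alias REMOTEDB2, and the table names are placeholders, and a real job would also need the usual utility work datasets.
//LOADCUR EXEC DSNUPROC,SYSTEM=DB2A,UID='LOADCUR'
//SYSIN   DD *
  EXEC SQL
    DECLARE C1 CURSOR FOR
    SELECT * FROM REMOTEDB2.MYSCHEMA.MYTABLE
  ENDEXEC
  LOAD DATA INCURSOR(C1) REPLACE
    INTO TABLE MYSCHEMA.MYTABLE
/*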
DB2 10 to DB2 11 migration test subsystems
Our first test subsystem migration occurred on August 11th, followed the next day by another migration, and then we let those "burn in" for over a week before planning any additional migrations. We did not experience any issues with the first two migrations, so on August 24th we migrated 5 more test DB2 subsystems.
Problems identified after 5 additional test subsystem migrations.
- -805 SQL error on the common DSNTIAD package, but further research found that the plan PKLIST was specified incorrectly, and rebinding the plan corrected the issue.
- -805 SQL error on package DSNUTIL.DSNUGSQL due to remote SQL access to a DB2 10 subsystem. We remotely bound this new DB2 11 package version to each DB2 10 subsystem to ensure the error did not occur for any other applications.
- Failures with an S04E level ID error. We noticed the error when running DB2 utilities, but the failure occurred whenever DB2 attempted to access these particular tablespaces at all. We opened a Service Request with IBM, and they identified that the issue was caused by old SYSLGRNX entries on some tablespaces, basically fallout from the Y2K bug. We applied a fix that allowed us to run MODIFY to clean up the old rows from SYSLGRNX, and then we worked to identify and clean up the rest of our DB2 subsystems to ensure this issue did not occur on future migrations. I strongly suggest researching this potential issue on all your DB2 subsystems before doing migrations. Working with IBM, we had to run some special MODIFY utilities with DIAGNOSE to get the entries cleaned up.
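For reference, a standard MODIFY RECOVERY statement of the kind used to prune old SYSCOPY and SYSLGRNX rows looks like the line below (the database and tablespace names are placeholders; the special DIAGNOSE variants IBM had us run are not shown):
MODIFY RECOVERY TABLESPACE MYDB.MYTS DELETE AGE(365)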
We continued our migrations with 5 more on September 7th and the final 5 on September 21st, completing our test subsystem migrations (17 total).
Additional problem identified after test subsystem migrations were complete.
- An application reported SQL failures; upon research we found they involved UNION SQL, and we eventually determined that during migration some of our UNION_COLNAME_7 zparm values had been set to the default NO instead of YES, causing the problem.
At this time our migrations were stable after addressing the above issues, and we were ready to start our first production subsystem migrations on September 24th. My next blog post will describe that effort and the related issues that caused us to postpone future migrations.
Using triggers to track user activity
I had a request to identify which user was updating the rows in a table. I created the two triggers below to track the insert and update activity back to a user.
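The trigger definitions themselves are behind the link, but a minimal sketch of one common pattern, written here in DB2 SQL (the table, column, and trigger names are placeholders, and the original post's triggers may take a different approach), could be:
CREATE TRIGGER MYSCHEMA.TRK_INS
  NO CASCADE BEFORE INSERT ON MYSCHEMA.MYTABLE
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  SET N.LAST_CHANGED_BY = SESSION_USER,
      N.LAST_CHANGED_TS = CURRENT TIMESTAMP;

CREATE TRIGGER MYSCHEMA.TRK_UPD
  NO CASCADE BEFORE UPDATE ON MYSCHEMA.MYTABLE
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  SET N.LAST_CHANGED_BY = SESSION_USER,
      N.LAST_CHANGED_TS = CURRENT TIMESTAMP;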
DB2 10 to DB2 11 migration initial status
We started migrating the DB2 subsystems that our organization supports prior to the creation of this blog site, but I will post a chronology of our migrations and the issues we have experienced up to now, and continue with updates as our migration efforts continue. We are migrating from DB2 10 to DB2 11, and we currently support applications across 40 DB2 subsystems, 17 test and 23 production. We have completed migration of all 17 test subsystems and 4 of the 23 production subsystems, but we are currently awaiting PTF fixes for two APARs that were opened for issues we identified. Our next migrations are not planned until February while we wait for the APARs to be finalized and PTFs to be created for our testing. We did test the APAR fixes for both issues, and they corrected the SQL performance problems we experienced.