Michael Dinh
Michael T. Dinh, Oracle DBA

Skip Goldengate Replicat Transaction

Tue, 2018-04-17 19:22
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.15 17640173 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131101.0605.2_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Nov 19 2013 03:18:45

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
====================================================================================================
ORA-02292: integrity constraint (OWNER.MARY_JOE_FK) violated - child record found (status = 2292). DELETE FROM "OWNER"."T_JOE"  WHERE "JOENUMMER" = :b0.
====================================================================================================

+++ SKIPTRANSACTION
GGSCI> start replicat REP1 SKIPTRANSACTION

+++ REVIEW PRM
[gguser]$ grep -i discard rep1.prm
--REPERROR (DEFAULT, DISCARD)
REPERROR (-1, DISCARD)
REPERROR (2291, DISCARD) 
DISCARDFILE ./discard/rep1.discard append, MEGABYTES 1024
DISCARDROLLOVER AT 00:01
[gguser]$ 

+++ REVIEW SKIPPING FROM DISCARD
[gguser]$ grep -c "Skipping delete from OWNER.T_JOE" rep1.discard
15276

[gguser]$ grep -A2 "Skipping delete from OWNER.T_JOE" ./discard/rep1.discard|head
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87850906
*
JOENUMMER = 1
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851339
*
JOENUMMER = 2
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 87851735
*
[gguser@viz-cp-dc1-p11 oracle]$ grep -A2 "Skipping delete from OWNER.T_JOE" ./discard/rep1.discard|tail
*
JOENUMMER = 50093291
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033367
*
JOENUMMER = 50094681
--
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033767
*
JOENUMMER = 50094741

+++ REVIEW RBA FROM DISCARD
[gguser]$ grep rba rep1.discard|head -1
Aborting transaction on ./dirdat/nd beginning at seqno 4475 rba 87850906

[gguser]$ grep rba rep1.discard|tail -1
Skipping delete from OWNER.T_JOE at seqno 4475 rba 94033767
[gguser]$ 

+++ NOTICE MATCH WITH LOGDUMP
Logdump 23 >scanforendtrans
End of Transaction found at RBA 94033767 

====================================================================================================
GATHER DATA
====================================================================================================

GGATE@SQL> r
1 select count(*) from
2 (
3 (select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink)
4 union all
5 (select JOENUMMER from OWNER.T_JOE@dblink minus select JOENUMMER from OWNER.T_JOE)
6 )
7*

COUNT(*)
----------
15273

GGATE@SQL>

+++ CREATE TEMPORARY TABLE

GGATE@SQL> create table T_JOE_DEL as select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink;

+++ REVIEW DATA FROM TEMPORARY TABLE TO COMPARE WITH DISCARD

GGATE@SQL> r
1 select * from (
2 select JOENUMMER from T_JOE_DEL order by 1 asc
3* ) where rownum <11

JOE
----------
1
2
3
21
23
24
25
26
27
28

10 rows selected.

GGATE@SQL>

GGATE@SQL> r
1 select * from (
2 select JOENUMMER from T_JOE_DEL order by 1 desc
3* ) where rownum <11

JOE
----------
50094741
50094681
50093291
50093221
50093191
50093101
50092851
50092791
50092781
50092741

10 rows selected.

GGATE@SQL>

====================================================================================================
CORRECT DATA
====================================================================================================

GGATE@SQL> delete from OWNER.T_JOE where JOENUMMER in (select JOENUMMER from T_JOE_DEL);

15273 rows deleted.

GGATE@SQL> commit;

Commit complete.

====================================================================================================
VERIFY ROW COUNT
====================================================================================================

+++ USING COUNT MAY NOT BE THE BEST OPTION.
GGATE@SQL> select count(*) from OWNER.T_JOE;

  COUNT(*)
----------
      9939

GGATE@SQL> select count(*) from OWNER.T_JOE@dblink;

  COUNT(*)
----------
      9939

GGATE@SQL> 
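Since matching row counts do not prove that both sides contain the same rows, a better check is to re-run the symmetric-difference query from the GATHER DATA step above and expect a count of 0 (a sketch):

select count(*) from
(
  (select JOENUMMER from OWNER.T_JOE minus select JOENUMMER from OWNER.T_JOE@dblink)
  union all
  (select JOENUMMER from OWNER.T_JOE@dblink minus select JOENUMMER from OWNER.T_JOE)
);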

====================================================================================================
REVIEW REPORT FILE
====================================================================================================

[gguser]$ grep SKIPTRANSACTION REP1*.rpt
rep1.rpt:2018-04-17 12:15:15  INFO    OGG-01370  User requested START SKIPTRANSACTION. The current transaction will be skipped. Transaction ID 22.30.1923599, position Seqno 4475, RBA 87850906.

[gguser]$ grep -i skip ggserr.log
2018-04-17 12:15:14  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (gguser): start replicat rep1 SKIPTRANSACTION.
2018-04-17 12:15:14  INFO    OGG-00963  Oracle GoldenGate Manager for Oracle, mgr.prm:  Command received from GGSCI on host 10.232.135.44:33310 (START REPLICAT rep1 SKIPTRANSACTION).
2018-04-17 12:15:15  INFO    OGG-01370  Oracle GoldenGate Delivery for Oracle, rep1.prm:  User requested START SKIPTRANSACTION. The current transaction will be skipped. Transaction ID 22.30.1923599, position Seqno 4475, RBA 87850906.
[gguser]$ 

====================================================================================================
LOGDUMP TO FIND END OF TRANSACTIONS
====================================================================================================

Logdump 15 >open ./dirdat/nd004475
Current LogTrail is ./dirdat/nd004475 
Logdump 16 >detail on
Logdump 17 >fileheader detail
Logdump 18 >ghdr on
Logdump 19 >detail data
Logdump 20 >ggstoken detail
Logdump 21 >pos 87850906
Reading forward from RBA 87850906 
Logdump 22 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   310  (x0136)   IO Time    : 2018/04/17 10:47:16.475.512   
IOType     :     3  (x03)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 779280 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:16.475.512 Delete               Len   310 RBA 87850906 
Name: OWNER.T_JOE 
Before Image:                                             Partition 4   G  b   

GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6b55 4141 5441 4141 6264 7141 4159 0001 | AAAHkUAATAAAbdqAAY..  
TokenID x4c 'L' LOGCSN           Info x00  Length   10 
 3732 3833 3730 3834 3135                          | 7283708415  
TokenID x36 '6' TRANID           Info x00  Length   13 
 3232 2e33 302e 3139 3233 3539 39                  | 22.30.1923599  
   
Logdump 23 >scanforendtrans
End of Transaction found at RBA 94033767 
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   331  (x014b)   IO Time    : 2018/04/17 10:47:16.429.234   
IOType     :     3  (x03)     OrigNode   :   255  (xff) 
TransInd   :     .  (x02)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 13903264 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:16.429.234 Delete               Len   331 RBA 94033767 
Name: OWNER.T_JOE 
Before Image:                                             Partition 4   G  e   
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6b55 4141 5741 4141 4e6c 6c41 4177 0001 | AAAHkUAAWAAANllAAw..  
   
Logdump 24 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :   174  (x00ae)   IO Time    : 2018/04/17 10:47:24.429.491   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :     167409       AuditPos   : 13947088 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2018/04/17 10:47:24.429.491 FieldComp            Len   174 RBA 94034190 
Name: OWNER.NEW_DATA 
Before Image:                                             Partition 4   G  b   
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4148 6a59 4141 5441 4142 794b 4541 412f 0001 | AAAHjYAATAAByKEAA/..  
TokenID x4c 'L' LOGCSN           Info x00  Length   10 
 3732 3833 3730 3834 3538                          | 7283708458  
TokenID x36 '6' TRANID           Info x00  Length   13 
 3132 2e31 362e 3330 3031 3139 37                  | 12.16.3001197  
   
Logdump 25 >open ./dirdat/nd004475
Current LogTrail is ./dirdat/nd004475 
Logdump 26 >count
LogTrail ./dirdat/nd004475 has 92822 records 
Total Data Bytes          92730182 
  Avg Bytes/Record             999 
Delete                       20937 
Insert                        5405 
FieldComp                      724 
LargeObject                  65755 
Others                           1 
Before Images                21163 
After Images                 71658 

Average of 1589 Transactions 
    Bytes/Trans .....      61161 
    Records/Trans ...         58 
    Files/Trans .....          5 
 
Logdump 27 >detail on
Logdump 28 >filter inc filename OWNER.T_JOE
Logdump 29 >count
Scanned     10000 records, RBA   12734577, 2018/04/17 07:25:42.524.558 
Scanned     20000 records, RBA   25670230, 2018/04/17 08:00:11.480.213 
Scanned     30000 records, RBA   38698934, 2018/04/17 08:30:24.488.669 
Scanned     40000 records, RBA   51436567, 2018/04/17 08:59:11.452.549 
Scanned     50000 records, RBA   63868041, 2018/04/17 09:43:10.477.605 
Scanned     60000 records, RBA   76010927, 2018/04/17 10:14:59.472.122 
Scanned     70000 records, RBA   94264594, 2018/04/17 10:47:31.447.436 
LogTrail ./dirdat/nd004475 has 15296 records 
Total Data Bytes           4757365 
  Avg Bytes/Record             311 
Delete                       15296 
Before Images                15296 
Filtering matched        15296 records 
          suppressed     77526 records 

Average of 2 Transactions 
    Bytes/Trans .....    2745786 
    Records/Trans ...       7648 
    Files/Trans .....        110 
 

OWNER.T_JOE                                      Partition 4 
Total Data Bytes           4757365 
  Avg Bytes/Record             311 
Delete                       15296 
Before Images                15296 
Logdump 30 >

 

Framework To Run SQL For All Active DB Instances

Sat, 2018-04-14 06:52

The requirement is to configure HugePages for multiple RAC database instances.

pmon processes

grid     12692     1  0 09:39 ?        00:00:00 asm_pmon_+ASM1
grid     13296     1  0 09:39 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   13849     1  0 09:40 ?        00:00:00 ora_pmon_DEV1
oracle   13851     1  0 09:40 ?        00:00:00 ora_pmon_QA1
oracle   13854     1  0 09:40 ?        00:00:00 ora_pmon_PERF1
oracle   13855     1  0 09:40 ?        00:00:00 ora_pmon_TEST1
oracle   14998     1  0 09:40 ?        00:00:00 ora_pmon_INT1

Create parameter.sh, which will run parameter.sql.
You might be thinking, WTH is this person thinking!
I wanted the SQL script to be reusable.

Run parameter.sql

oracle@racnode-dc1-1:hawk1:/home/oracle
$ sqlplus / as sysdba @ parameter.sql

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 14 13:25:56 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


NAME                           CDB
------------------------------ ---
HAWK                           NO


NAME                           DISPLAY_VALUE INST_ID     CON_ID DEFAULT_VALUE ISDEFAULT
------------------------------ ------------- ------- ---------- ------------- ---------
cluster_database               TRUE                1          0 FALSE         FALSE
                               TRUE                2          0 FALSE         FALSE
cluster_database_instances     2                   1          0 4294967295    TRUE
                               2                   2          0 4294967295    TRUE
db_file_name_convert                               1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
db_name                        hawk                1          0 NULL          FALSE
                               hawk                2          0 NULL          FALSE
db_unique_name                 hawk                1          0 NONE          TRUE
                               hawk                2          0 NONE          TRUE
instance_groups                                    1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
instance_name                  hawk1               1          0 NULL          TRUE
                               hawk2               2          0 NULL          TRUE
instance_number                1                   1          0 0             FALSE
                               2                   2          0 0             FALSE
instance_type                  RDBMS               1          0 NONE          TRUE
                               RDBMS               2          0 NONE          TRUE
memory_max_target              0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
memory_target                  0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
pdb_file_name_convert                              1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
pga_aggregate_limit            2G                  1          0 1             TRUE
                               2G                  2          0 1             TRUE
pga_aggregate_target           256M                1          0 0             FALSE
                               256M                2          0 0             FALSE
sga_max_size                   768M                1          0 1000          TRUE
                               768M                2          0 1000          TRUE
sga_target                     768M                1          0 0             FALSE
                               768M                2          0 0             FALSE
use_large_pages                TRUE                1          0 NULL          FALSE
                               TRUE                2          0 NULL          FALSE

34 rows selected.

13:25:56 SYS @ hawk1:>

Run parameter.sh

oracle@racnode-dc1-1:hawk1:/u01/app/oracle/12.1.0.2/db1
$ ~/parameter.sh

******** Current ora_pmon:
----------------------------------------
ora_pmon_hawk1
----------------------------------------

******** SQL Script: /home/oracle/parameter.sql

The Oracle base remains unchanged with value /u01/app/oracle
Oracle Instance alive for sid "hawk1"

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 14 13:26:55 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

13:26:55 SYS @ hawk1:>13:26:55 SYS @ hawk1:>13:26:55 SYS @ hawk1:>
NAME                           CDB
------------------------------ ---
HAWK                           NO


NAME                           DISPLAY_VALUE INST_ID     CON_ID DEFAULT_VALUE ISDEFAULT
------------------------------ ------------- ------- ---------- ------------- ---------
cluster_database               TRUE                1          0 FALSE         FALSE
                               TRUE                2          0 FALSE         FALSE
cluster_database_instances     2                   1          0 4294967295    TRUE
                               2                   2          0 4294967295    TRUE
db_file_name_convert                               1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
db_name                        hawk                1          0 NULL          FALSE
                               hawk                2          0 NULL          FALSE
db_unique_name                 hawk                1          0 NONE          TRUE
                               hawk                2          0 NONE          TRUE
instance_groups                                    1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
instance_name                  hawk1               1          0 NULL          TRUE
                               hawk2               2          0 NULL          TRUE
instance_number                1                   1          0 0             FALSE
                               2                   2          0 0             FALSE
instance_type                  RDBMS               1          0 NONE          TRUE
                               RDBMS               2          0 NONE          TRUE
memory_max_target              0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
memory_target                  0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
pdb_file_name_convert                              1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
pga_aggregate_limit            2G                  1          0 1             TRUE
                               2G                  2          0 1             TRUE
pga_aggregate_target           256M                1          0 0             FALSE
                               256M                2          0 0             FALSE
sga_max_size                   768M                1          0 1000          TRUE
                               768M                2          0 1000          TRUE
sga_target                     768M                1          0 0             FALSE
                               768M                2          0 0             FALSE
use_large_pages                TRUE                1          0 NULL          FALSE
                               TRUE                2          0 NULL          FALSE

34 rows selected.

13:26:55 SYS @ hawk1:>Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@racnode-dc1-1:hawk1:/u01/app/oracle/12.1.0.2/db1
$

Requirements: the .sh and .sql must have the same base name and reside in the same location.
The .sh can be called from any location.

parameter.sh


#!/bin/sh
# --------------------------------------------------------------------------------
# parameter.sh
# MDinh April 12, 2018
#
# Shell script will run SQL script having the same base name
# for all active database instances.
# --------------------------------------------------------------------------------
DN=`dirname $0`
BN=`basename $0`
SQL_SCRIPT_DIR=$DN
SQL=`echo $BN|cut -d'.' -f1`.sql
echo
echo "******** Current ora_pmon:"
echo "----------------------------------------"
ps -eo cmd|grep ora_pmon|grep -v grep
echo "----------------------------------------"
echo
echo "******** SQL Script: "$SQL_SCRIPT_DIR/$SQL
echo
for x in `ps -eo cmd|grep ora_pmon|grep -v grep|awk -F "_" '{print $NF}'`
do
  ORAENV_ASK=NO
  set -a
  ORACLE_SID=$x
  . oraenv
  set +a
  sysresv|tail -1

sqlplus -L "/ as sysdba" << EOF
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
start $SQL_SCRIPT_DIR/$SQL
exit
EOF
if [ "$?" != "0" ]; then
  echo "$ORACLE_SID ERROR: Running $SQL_SCRIPT_DIR/$SQL"
  exit 1
fi
done
exit

parameter.sql


col name for a30
col value for a30
col default_value for a13
col display_value for a13
col inst_id for 99
break on name
set lines 200 pages 1000 trimsp on tab off
select name,CDB from v$database
;
select name,display_value,inst_id,con_id,default_value,isdefault
from gv$parameter
where regexp_like (name,'^sga|^pga|^memory|^cluster.*database|^instance|use_large_pages|db.*name','i')
order by name,value,inst_id
;

The framework for the shell script is the same. Just make a copy and update any comments, as sketched below.
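For example (a sketch; per the diff below, only the header comment differs between the .sh scripts, and the .sql body is then edited for the new task):

$ cp parameter.sh set_db_use_large_pages_true.sh
$ cp parameter.sql set_db_use_large_pages_true.sql
$ sed -i 's/^# parameter.sh/# set_db_use_large_pages_true.sh/' set_db_use_large_pages_true.sh
$ vi set_db_use_large_pages_true.sql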

oracle@racnode-dc1-1:hawk1:/home/oracle
$ ll
total 36
-rwxr-xr-x 1 oracle oinstall   19 Feb 10 20:44 db.env
-rwxr-xr-x 1 oracle oinstall   49 Feb 10 20:45 gi.env
-rwxr-xr-x 1 oracle oinstall 1020 Apr 14 13:24 parameter.sh
-rw-r--r-- 1 oracle oinstall  414 Apr 14 13:24 parameter.sql
-rwxr-xr-x 1 oracle oinstall 1038 Apr 14 13:23 set_db_use_large_pages_only.sh
-rw-r--r-- 1 oracle oinstall  430 Apr 12 17:50 set_db_use_large_pages_only.sql
-rwxr-xr-x 1 oracle oinstall 1038 Apr 14 13:23 set_db_use_large_pages_true.sh
-rw-r--r-- 1 oracle oinstall  430 Apr 12 17:53 set_db_use_large_pages_true.sql
-rw-r--r-- 1 oracle oinstall 1909 Jan 29 02:39 wc.sql

oracle@racnode-dc1-1:hawk1:/home/oracle
$ diff parameter.sh set_db_use_large_pages_true.sh
3c3
< # parameter.sh
---
> # set_db_use_large_pages_true.sh

oracle@racnode-dc1-1:hawk1:/home/oracle
$ ./set_db_use_large_pages_true.sh

******** Current ora_pmon:
----------------------------------------
ora_pmon_hawk1
----------------------------------------

******** SQL Script: ./set_db_use_large_pages_true.sql

The Oracle base remains unchanged with value /u01/app/oracle
Oracle Instance alive for sid "hawk1"

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 14 13:48:17 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

13:48:17 SYS @ hawk1:>13:48:17 SYS @ hawk1:>13:48:17 SYS @ hawk1:>
NAME                           CDB
------------------------------ ---
HAWK                           NO


NAME                           DISPLAY_VALUE INST_ID     CON_ID DEFAULT_VALUE ISDEFAULT
------------------------------ ------------- ------- ---------- ------------- ---------
use_large_pages                TRUE                1          0 NULL          FALSE
                               TRUE                2          0 NULL          FALSE

13:48:17 SYS @ hawk1:>alter system set USE_LARGE_PAGES=TRUE scope=spfile sid='*'
13:48:17   2  ;

System altered.

13:48:17 SYS @ hawk1:>Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@racnode-dc1-1:hawk1:/home/oracle
$

Check 12.1.0.2 Alert Log For HugePages Usage

Fri, 2018-04-13 18:23

What! Another post on hugepages – seriously?

+ grep 'Dump of system resources acquired for SHARED GLOBAL AREA' -B1 -A22 <database alert log>
+ tail -25
2018-04-13T09:40:23.908633-07:00
Dump of system resources acquired for SHARED GLOBAL AREA (SGA) 

2018-04-13T09:40:23.916573-07:00
 Per process system memlock (soft) limit = UNLIMITED
2018-04-13T09:40:23.920591-07:00
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 2996M
2018-04-13T09:40:23.928517-07:00
 Available system pagesizes:
  4K, 2048K 
2018-04-13T09:40:23.936717-07:00
 Supported system pagesize(s):
 
2018-04-13T09:40:23.943044-07:00
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
2018-04-13T09:40:23.947112-07:00
     2048K             2303            1498            1498        NONE
2018-04-13T09:40:23.951899-07:00
 Reason for not supporting certain system pagesizes: 
2018-04-13T09:40:23.960107-07:00
  4K - Large pagesizes only
2018-04-13T09:40:23.965247-07:00

====================================================================================================

Tue Apr 10 12:29:13 2018
Dump of system resources acquired for SHARED GLOBAL AREA (SGA) 

Tue Apr 10 12:29:13 2018
 Per process system memlock (soft) limit = 128G
Tue Apr 10 12:29:13 2018
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 4002M
Tue Apr 10 12:29:13 2018
 Available system pagesizes:
  4K, 2048K 
Tue Apr 10 12:29:13 2018
 Supported system pagesize(s):
Tue Apr 10 12:29:13 2018
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
Tue Apr 10 12:29:13 2018
        4K       Configured               5         1024005        NONE
Tue Apr 10 12:29:13 2018
     2048K                0            2001               0        NONE
Tue Apr 10 12:29:13 2018
RECOMMENDATION:
Tue Apr 10 12:29:13 2018
 1. For optimal performance, configure system with expected number 
 of pages for every supported system pagesize prior to the next 

Oracle Different Levels of Hell

Sat, 2018-03-31 09:50

I did not know there exist many levels of hell, and Oracle certainly has them.

Would it be bad if someone searching for hell found this blog listed in the top 10? :=(

Here’s the effort one needs to go through to determine which patch to apply to the Goldengate software (binary) as part of the quarterly Critical Patch Updates.

Hint: Goldengate does not participate in quarterly patch updates.

Find the latest patch available and patch using OPatch, or install a newer database-compatible version using runInstaller.

Oracle GoldenGate — Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)
Latest RDBMS version Oracle Server 12.1.0.2

Let’s create a new doc for the new version but not provide a reference to it. Make the user find it.

Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1)
Latest OGG Release 12.3
Latest RDBMS Release 12.2.0.1

OGG Release/OGG Patch
12.3/Recommended 12.3.0.1.2 or higher *** This is not a patch
12.2/Recommended 12.2.0.2.0 or higher *** This is not a patch
12.2/Minimum 12.2.0.1.170221 *** This is a patch

Master Note for Oracle GoldenGate Core Product Patch Sets (Doc ID 1645495.1)
There are no patch sets available for 12.3.0.1 at this time.
Latest OGG v12.2.0.1 Patch Set Availability Notes

Good But Not Good Enough Coding Practice

Sat, 2018-03-17 13:36

Good: Alert is from localhost, but what script is this? If there were only one cron entry and it were intuitive, then I would not be blogging about it.

ALERT … Goldengate process “EXTRACT(PU)” has a lag of 02 hour 22 min on localhost

Better: Alert is from localhost for monitoring_gg.sh

monitoring_gg.sh ALERT … Goldengate process “EXTRACT(PU)” has a lag of 00 hour 00 min on localhost
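A minimal sketch of how a monitoring script can do this (hypothetical variables and mail recipient, not the actual monitoring_gg.sh):

#!/bin/bash
# Prefix every alert with the script's own name so the source is obvious.
SCRIPT=$(basename "$0")
HOST=$(hostname)
# Placeholders; in the real script the lag values are computed earlier.
LAG_HH=${LAG_HH:-00}
LAG_MM=${LAG_MM:-00}
MSG="$SCRIPT ALERT ... Goldengate process \"EXTRACT(PU)\" has a lag of $LAG_HH hour $LAG_MM min on $HOST"
echo "$MSG" | mailx -s "$MSG" dba@example.com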

That is all.

 

Inconsistencies – Are they good or bad?

Fri, 2018-03-09 07:40

Different companies will have different implementations of the same things, e.g. backups and naming conventions, to name a few.

However, should there be different implementations of the same thing within the company itself?

There are pros and cons to everything, and there are no wrongs or rights, just objectives and requirements.

Example: if you want to have the same experience with coffee no matter where you are, you go to Starbucks.

Don’t let it frustrate you. Think of it as freedom and the opportunity to do things the way you want to.

Is the glass half empty or half full? For me, it’s half empty because I can fill it up with what I want.

Upgrade 12.2 Journey – DataGuard

Sat, 2018-03-03 12:07

Oracle Data Guard Broker Changes in Oracle Database 12c Release 2 (12.2.0.1)

How to resolve MRP stuck issues on a physical standby database? (Doc ID 1221163.1)
Starting from 12.2, use the V$DATAGUARD_PROCESS view instead of v$managed_standby, as sketched below.
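A quick sketch of the old habit versus the new view (column lists are my assumption from the documentation, not from the post):

-- Pre-12.2 habit:
select process, status, thread#, sequence#, block# from v$managed_standby;

-- 12.2 onwards:
select name, role, action from v$dataguard_process order by name;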

How Do You Create Data Guard Configuration?

Mon, 2018-02-19 17:30

I have taken for granted creating Data Guard configurations the same way most of the time, so I don’t know what goes wrong when it is done differently.

oracle@racnode-dc1-1:hawk1:/home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Feb 19 23:41:49 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

23:41:49 SYS @ hawk1:>show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      hawk
db_unique_name                       string      hawk
pdb_file_name_convert                string

23:42:04 SYS @ hawk1:>alter system set dg_broker_start=true sid='*' scope=memory;

System altered.

23:42:40 SYS @ hawk1:>


+++ CREATE CONFIGURATION USING UPPER CASE WITHOUT QUOTES

oracle@racnode-dc1-1:hawk1:/home/oracle
$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.

--- NO QUOTES USED AND ALL UPPERCASE - EASIEST METHOD
--- Broker converts the database name to match db_unique_name

DGMGRL> CREATE CONFIGURATION DG_CONFIG AS PRIMARY DATABASE IS HAWK CONNECT IDENTIFIER IS HAWK;
Configuration "dg_config" created with primary database "hawk"

DGMGRL> show configuration

Configuration - dg_config

  Protection Mode: MaxPerformance
  Members:
  hawk - Primary database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

DGMGRL> show database hawk

Database - hawk

  Role:               PRIMARY
  Intended State:     OFFLINE
  Instance(s):
    hawk1
    hawk2

Database Status:
DISABLED

DGMGRL> remove configuration
Removed configuration

--- CONFIGURATION IS UPPERCASE 
--- Does it look better in uppercase?
DGMGRL> CREATE CONFIGURATION 'DG_CONFIG' AS PRIMARY DATABASE IS 'hawk' CONNECT IDENTIFIER IS HAWK;
Configuration "DG_CONFIG" created with primary database "hawk"
DGMGRL> show configuration

Configuration - DG_CONFIG

  Protection Mode: MaxPerformance
  Members:
  hawk - Primary database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

DGMGRL> show database hawk

Database - hawk

  Role:               PRIMARY
  Intended State:     OFFLINE
  Instance(s):
    hawk1
    hawk2

Database Status:
DISABLED

DGMGRL> remove configuration
Removed configuration

+++ ISSUES OCCUR WHEN USING UPPERCASE WITH QUOTES FOR DATABASE
+++ Not sure if this will work as I have not tested end to end. Why create it this way to begin with?
DGMGRL> CREATE CONFIGURATION 'DG_CONFIG' AS PRIMARY DATABASE IS 'HAWK' CONNECT IDENTIFIER IS HAWK;
Configuration "DG_CONFIG" created with primary database "HAWK"
DGMGRL> show configuration

Configuration - DG_CONFIG

  Protection Mode: MaxPerformance
  Members:
  HAWK - Primary database

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED

DGMGRL> show database hawk
Object "hawk" was not found

DGMGRL> show database HAWK
Object "hawk" was not found

DGMGRL> show database 'HAWK';

Database - HAWK

  Role:               PRIMARY
  Intended State:     OFFLINE
  Instance(s):
    hawk1
    hawk2

Database Status:
DISABLED

DGMGRL>

REFERENCE:

CREATE CONFIGURATION

CREATE CONFIGURATION configuration_name AS PRIMARY DATABASE IS database-name CONNECT IDENTIFIER IS connect-identifier;

database-name
The name that will be used by the broker to refer to the primary database. 
It must match (case-insensitive) the value of the primary database DB_UNIQUE_NAME initialization parameter.

12c DataGuard / Broker Pitfalls

Thu, 2018-02-15 07:01

In a broker configuration, you use the DGConnectIdentifier property to specify a connect identifier for each database.

The connect identifier for a database must:
Allow all other databases in the configuration to reach it.
Allow all instances of an Oracle RAC database to be reached.
Specify a service that all instances dynamically register with the listeners so that connect-time failover on an Oracle RAC database is possible.

The service should NOT be one that is defined and managed by Oracle Clusterware.

A static service needs to be defined and registered only if Oracle Clusterware or Oracle Restart is not being used.

Otherwise, by default, the broker assumes a static service name of db_unique_name_DGMGRL.db_domain and expects that the listener has been started with the following content in the listener.ora file:

LISTENER =
(DESCRIPTION_LIST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host_name)(PORT = port_num))
  )
)

SID_LIST_LISTENER=
(SID_LIST=
  (SID_DESC=
    (GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
    (ORACLE_HOME=oracle_home)
    (SID_NAME=sid_name)
    (ENVS="TNS_ADMIN=oracle_home/network/admin")
  )
)  

As of Oracle Database 12c Release 1 (12.1), for all databases to be added to a broker configuration, any LOG_ARCHIVE_DEST_n parameters that have the SERVICE attribute set, but not the NOREGISTER attribute, must be cleared.
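For example (a sketch, assuming log_archive_dest_2 is the destination that has SERVICE set without NOREGISTER):

-- Clear the destination and let the broker manage redo transport itself.
alter system set log_archive_dest_2='' scope=both sid='*';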

Create Configuration Failing with ORA-16698 (Doc ID 1582179.1)

Oracle Data Guard Installation

 

Goldengate REPORTING

Sun, 2018-02-11 08:52

There were performance issues due to a new table being introduced to replication, and I was asked to gather the number of DMLs on the table for one day.

Using DBA_TAB_MODIFICATIONS did not meet the requirement since statistics had been gathered about a week ago and the counts were inflated.
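For reference, this is roughly the query that was ruled out (a sketch; flushing monitoring info first makes the view current):

exec dbms_stats.flush_database_monitoring_info

select table_owner, table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_owner = 'SCOTT'
and table_name = 'RATETEST';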

The next thought was: why not use Goldengate, since it captures all the changes, and report from that.

This may or may not provide the required data if reporting is not properly configured.

Some will tell you never to roll over the report, while others will roll it over on a weekly basis.

From most recent experience, I would roll over the report on a daily basis because the data can always be aggregated, as sketched below.
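With daily rollover plus RESETREPORTSTATS, the per-day numbers can be summed back up later, e.g. (a sketch against the report layout shown further down):

$ grep -A5 "From Table SCOTT.RATETEST" dirrpt/E_HAWK*.rpt | \
  awk '/inserts:|updates:|deletes:/ { total[$(NF-1)] += $NF } END { for (op in total) print op, total[op] }'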

Here is an example of what I had to deal with.

$ grep -i report dirprm/e_hawk.prm

REPORTCOUNT EVERY 10 MINUTES, RATE

$ grep "activity since" dirrpt/E*.rpt|sort

E_HAWK0.rpt:Report at 2018-02-09 12:28:49 (activity since 2018-02-08 22:01:15)
--- How are you going to get daily activities accurately from an aggregation over a 2-month time frame?
E_HAWK1.rpt:Report at 2018-02-08 21:48:37 (activity since 2017-11-27 18:24:26)

Corresponding data from the report.
$ grep -A5 "From Table SCOTT.RATETEST" dirrpt/E_*rpt

E_HAWK0.rpt:Report at 2018-02-09 12:28:49 (activity since 2018-02-08 22:01:15)
dirrpt/E_HAWK0.rpt:From Table SCOTT.RATETEST:
dirrpt/E_HAWK0.rpt-       #                   inserts:       977
dirrpt/E_HAWK0.rpt-       #                   updates:  10439912
dirrpt/E_HAWK0.rpt-       #                   befores:  10439912
dirrpt/E_HAWK0.rpt-       #                   deletes:         0
dirrpt/E_HAWK0.rpt-       #                  discards:         0
--
E_HAWK1.rpt:Report at 2018-02-08 21:48:37 (activity since 2017-11-27 18:24:26)
dirrpt/E_HAWK1.rpt:From Table SCOTT.RATETEST:
dirrpt/E_HAWK1.rpt-       #                   inserts:     87063
dirrpt/E_HAWK1.rpt-       #                   updates: 821912582
dirrpt/E_HAWK1.rpt-       #                   befores: 821912582
dirrpt/E_HAWK1.rpt-       #                   deletes:         0
dirrpt/E_HAWK1.rpt-       #                  discards:         0

How did I end up getting the data?

From the Goldengate report above, which is better as it provides the exact data processed by Goldengate.

I was lucky! Extract was restarted when table was added and removed.

Example of Better Configuration:

global_ggenv.inc

STATOPTIONS RESETREPORTSTATS
REPORT AT 00:01
REPORTROLLOVER AT 00:01
REPORTCOUNT EVERY 60 MINUTES, RATE
DISCARDROLLOVER AT 00:01

e_hawk.prm

EXTRACT e_hawk
EXTTRAIL ./dirdat/bb
-- CHECKPARAMS
INCLUDE ./dirprm/global_macro.inc
INCLUDE ./dirprm/global_dbenv.inc
INCLUDE ./dirprm/global_ggenv.inc

Reference:

REPORTROLLOVER

Use the REPORTROLLOVER parameter to force report files to age on a regular schedule, instead of when a process starts. 
For long or continuous runs, setting an aging schedule controls the size of the active report file and provides a more predictable set of archives that can be included in your archiving routine.
Report statistics are carried over from one report to the other. To reset the statistics in the new report, use the STATOPTIONS parameter with the RESETREPORTSTATS option.

DBA_TAB_MODIFICATIONS

INSERTS/UPDATES/DELETES - Approximate number since the last time statistics were gathered.

PURGEOLDEXTRACTS Not Purging Trail Files Part2

Sat, 2018-02-03 10:55
If you read the post PURGEOLDEXTRACTS Not Purging Trail Files, you will find the solution is to replace the syntax in mgr.prm as shown below:
Replace PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 1
With    PURGEOLDEXTRACTS /ggs/dirdat/*, USECHECKPOINTS, MINKEEPDAYS 2 

I found the solution by luck vs correct analysis; hence, the adage “Better to be lucky than good.”

Recently, the same issue occurred again for another environment and the solution was just the opposite.

PURGEOLDEXTRACTS dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24, FREQUENCYMINUTES 30
-- PURGEOLDEXTRACTS /DBFS/ggs/dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24, FREQUENCYMINUTES 30

Why is that!

No convention and inconsistency.

Here are the details.

--- How is manager configured?
GGSCI> send manager GETPURGEOLDEXTRACTS

Sending GETPURGEOLDEXTRACTS request to MANAGER ...

--- Manager is configured with trail pointing to /DBFS
PurgeOldExtracts Rules
Fileset                              MinHours MaxHours MinFiles MaxFiles UseCP
/DBFS/ggs/dirdat/*                   24       0        1        0        Y
OK	
--- Extract trail showing from $GG_HOME/dirdat
Extract Trails
Filename                        Oldest_Chkpt_Seqno  IsTable  IsVamTwoPhaseCommit
/u01/app/gg/12.2.0/dirdat/aa    16285

--- How was the extract created?
GGSCI> send e* status

Sending STATUS request to EXTRACT E_HAWK ...


EXTRACT E_HAWK (PID 40932)
  Current status: Recovery complete: At EOF

  Current read position:
  Sequence #: 16285
  RBA: 27233729
  Timestamp: 2018-01-25 21:01:35.000450
  Extract Trail: dirdat/aa --- This is how the trail was defined when the extract was created

GGSCI> info e*

EXTRACT    E_HAWK    Last Started 2018-01-25 21:22   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Process ID           40932
Log Read Checkpoint  File dirdat/aa000016286
                     2018-02-01 14:42:29.000124  RBA 29233729
GGSCI> exit

--- From $GG_HOME, dirdat is using symbolic link to /DBFS
$ ls -ld dir*
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirchk -> /DBFS/ggs/dirchk
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dircrd -> /DBFS/ggs/dircrd
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirdat -> /DBFS/ggs/dirdat
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirdef -> /DBFS/ggs/dirdef
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirdmp -> /DBFS/ggs/dirdmp
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirout -> /DBFS/ggs/dirout
drwxr-xr-x 2 ggsuser oinstall 4096 Jan 26 13:49 dirpcs
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirprm -> /DBFS/ggs/dirprm
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirrpt -> /DBFS/ggs/dirrpt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirsql -> /DBFS/ggs/dirsql
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirtmp -> /DBFS/ggs/dirtmp
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirwlt -> /DBFS/ggs/dirwlt
lrwxrwxrwx 1 ggsuser oinstall   23 Mar 18  2017 dirwww -> /DBFS/ggs/dirwww

In conclusion, the PURGEOLDEXTRACTS location should be defined the same way as the extract trail, as sketched below.
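For example, since this extract was created with the relative trail dirdat/aa, mgr.prm should reference the same relative path (a sketch):

-- e_hawk.prm
EXTTRAIL dirdat/aa

-- mgr.prm
PURGEOLDEXTRACTS dirdat/*, USECHECKPOINTS, MINKEEPHOURS 24, FREQUENCYMINUTES 30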

Isn’t that intuitive?

Oracle should make this a MOS Doc ;=)

Goldengate Tracing Network Ports

Tue, 2018-01-30 22:30
--- Find mgr process
$ ps -ef|grep ./mgr
ggs  11823     1  0  2017 ?        00:29:13 ./mgr PARAMFILE /u01/app/ggs/dirprm/mgr.prm REPORTFILE /u01/app/ggs/dirrpt/MGR.rpt PROCESSID MGR
ggs  45054 30127  0 14:15 pts/0    00:00:00 grep --color=auto ./mgr

--- Find extract process
$ ps -ef|grep ./extract
ggs  17604 30127  0 14:15 pts/0    00:00:00 grep --color=auto ./extract
ggs  44306 11823  0 03:33 ?        00:01:28 /u01/app/ggs/extract PARAMFILE /u01/app/ggs/dirprm/e_hawk.prm REPORTFILE /u01/app/ggs/dirrpt/e_hawk.rpt PROCESSID e_hawk
ggs  44354 11823  0 03:33 ?        00:00:48 /u01/app/ggs/extract PARAMFILE /u01/app/ggs/dirprm/p_hawk.prm REPORTFILE /u01/app/ggs/dirrpt/p_hawk.rpt PROCESSID p_hawk

--- Find mgr port
$ lsof -i -P |grep 11823
mgr     11823 ggs    6u  IPv4 3492119103      0t0  TCP *:7809 (LISTEN)

--- Find extract port
$ lsof -i -P |grep 44306
extract 44306 ggs    6u  IPv4 2827659844      0t0  TCP *:7840 (LISTEN)

$ lsof -i -P |grep 44354
extract 44354 ggs    6u  IPv4 2827649786      0t0  TCP *:7841 (LISTEN)
extract 44354 ggs   10u  IPv4 2827610899      0t0  TCP hawk.local:21252->eagle.local:7822 (ESTABLISHED)

--- Use Goldengate to find the same info. However, it only provides the local port.
$ ./ggsci 

Oracle GoldenGate Command Interpreter for Oracle
Version 12.3.0.1.0 OGGCORE_12.3.0.1.0_PLATFORMS_170721.0154_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Jul 21 2017 23:31:13
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.

GGSCI> send manager childstatus debug

Sending CHILDSTATUS request to MANAGER ...

Child Process Status - 2 Entries

ID      Group   Process Retry Retry Time          Start Time          Port
---- -------- --------- ----- ------------------- ------------------- ----
   0   e_hawk     44306     0 None                2017/11/20 19:49:14 7840
   1   p_hawk     44354     0 None                2017/11/20 19:55:43 7841

GGSCI>

Be Friend With awk/sed | ASM Mapping

Sun, 2018-01-21 11:10

I had a request to add disks to an ASM Disk Group without any further details on which new disks had been added.

I needed to figure out which disks are on ASM now and which disks should be used as the new ones.

I got lazy and created scripts for this for future use.

[root@racnode-dc1-1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racnode-dc1-1 ~]#

[oracle@racnode-dc1-1 ~]$ /etc/init.d/oracleasm listdisks
CRS01
DATA01
FRA01

--- [8,49] is major,minor for device
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

--- Extract major,minor for device
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'
[8,49]

--- Remove [] brackets
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'
[8,49
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'
8,49

--- Alternative option to remove []
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed 's/[][]//g'
8,49

--- Create patterns for grep
[oracle@racnode-dc1-1 ~]$ oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'
8,.*49

--- Test grep using pattern
[oracle@racnode-dc1-1 ~]$ ls -l /dev/* | grep -E '8,.*49'
brw-rw---- 1 root    disk      8,  49 Jan 21 16:42 /dev/sdd1
[oracle@racnode-dc1-1 ~]$

--- Test grep with command line syntax
[oracle@racnode-dc1-1 ~]$ ls -l /dev/*|grep -E `oracleasm querydisk -d DATA01|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
brw-rw---- 1 root    disk      8,  49 Jan 21 16:42 /dev/sdd1
[oracle@racnode-dc1-1 ~]$

--- Run script
[oracle@racnode-dc1-1 ~]$ /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Jan 21 21:42 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Jan 21 21:42 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Jan 21 21:42 /dev/sde1

[oracle@racnode-dc1-1 ~]$

--- ASM Lib version
[oracle@racnode-dc1-1 ~]$ rpm -qa|grep asm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-19.0.1.el7.x86_64
[oracle@racnode-dc1-1 ~]$

--- Script
[oracle@racnode-dc1-1 ~]$ cat /sf_working/scripts/asm_mapping.sh

#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done
[root@racnode-dc1-1 ~]# fdisk -l /dev/sdd1

Disk /dev/sdd1: 8587 MB, 8587837440 bytes, 16773120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@racnode-dc1-1 ~]#

More Fun With sed

Thu, 2018-01-04 20:33

The objective is to convert what looks to be a Windows Samba share path to the Linux current directory path.

Basically, the core of the code uses sed.


sed -i.bak -e 's|C\:\\\\scripts|'"$PWD"'|g' -e 's|\\\\|\/|g' "$f"

[vagrant@db-asm-1 ~]$ pwd
/home/vagrant
[vagrant@db-asm-1 ~]$

[vagrant@db-asm-1 ~]$ ll
total 8
-rw-rw-r-- 1 vagrant vagrant 56 Jan  5 03:22 t.1
-rw-rw-r-- 1 vagrant vagrant 55 Jan  5 03:22 t.2
drwxrwxr-x 3 vagrant vagrant 18 Jan  5 03:21 working

[vagrant@db-asm-1 ~]$ cat t.1
set file_path "C:\\scripts"
source "$file_path\\t1.sql"

[vagrant@db-asm-1 ~]$ cat t.2
set file_path "C:\\scripts"
source "$file_path\\t2.sql


[vagrant@db-asm-1 ~]$ cat ~/working/dinh/test.sh
#!/bin/bash -ex
# Make sure you always put $f in double quotes to avoid any nasty surprises i.e. "$f"
for f in t.*
do
  echo "Processing $f file..."
  sed -i.bak -e 's|C\:\\\\scripts|'"$PWD"'|g' -e 's|\\\\|\/|g' "$f"
done


[vagrant@db-asm-1 ~]$ ~/working/dinh/test.sh
+ for f in 't.*'
+ echo 'Processing t.1 file...'
Processing t.1 file...
+ sed -i.bak -e 's|C\:\\\\scripts|/home/vagrant|g' -e 's|\\\\|\/|g' t.1
+ for f in 't.*'
+ echo 'Processing t.2 file...'
Processing t.2 file...
+ sed -i.bak -e 's|C\:\\\\scripts|/home/vagrant|g' -e 's|\\\\|\/|g' t.2


[vagrant@db-asm-1 ~]$ cat t.1
set file_path "/home/vagrant"
source "$file_path/t1.sql"

[vagrant@db-asm-1 ~]$ cat t.1.bak
set file_path "C:\\scripts"
source "$file_path\\t1.sql"


[vagrant@db-asm-1 ~]$ cat t.2
set file_path "/home/vagrant"
source "$file_path/t2.sql

[vagrant@db-asm-1 ~]$ cat t.2.bak
set file_path "C:\\scripts"
source "$file_path\\t2.sql

[vagrant@db-asm-1 ~]$ ll
total 16
-rw-rw-r-- 1 vagrant vagrant 57 Jan  5 03:26 t.1
-rw-rw-r-- 1 vagrant vagrant 56 Jan  5 03:22 t.1.bak
-rw-rw-r-- 1 vagrant vagrant 56 Jan  5 03:26 t.2
-rw-rw-r-- 1 vagrant vagrant 55 Jan  5 03:22 t.2.bak
drwxrwxr-x 3 vagrant vagrant 18 Jan  5 03:21 working
[vagrant@db-asm-1 ~]$

 


UNIFORM_LOG_TIMESTAMP_FORMAT CANNOT BE SET ON ASM INSTANCE (Doc ID 2308274.1)

Tue, 2017-12-26 09:18

Alert.log: New timestamp format in Oracle 12.2

What's new with the timestamp format in Oracle 12c Release 2?

UNIFORM_LOG_TIMESTAMP_FORMAT CANNOT BE SET ON ASM INSTANCE (Doc ID 2308274.1)

It's expected behavior. The UNIFORM_LOG_TIMESTAMP_FORMAT parameter can't be set on an ASM instance in 12.2.0.1.

It will be implemented in a future release, 18.1.

Not Another adrci Examples

Sat, 2017-12-16 22:30

I know there are a lot of blog posts on adrci, etc…

However, none of them solved what I was looking for: “How to show all specific problems?”


--- When there are more than 50 incidents
50 rows fetched (*** more available ***)
show incident -all

--- Show specific problem for ORA errors
show problem -all -p "problem_key='ORA 1578'"

---Generates the package for the problem id 100 in /tmp
ips pack problem 100 in /tmp

---Generates the package for the incident id 6439 in /tmp
ips pack incident 6439 in /tmp

---Generates the package for the problem with the problem_key 'ORA 1578'
ips pack problemkey "ORA 1578"

---Generates the package with the incidents occurred in last 8 seconds.
ips pack seconds 8

---Generates the package with the incidents occurred
---between the times '2007-05-01 10:00:00.00' and '2007-05-01 23:00:00.00'
ips pack time '2007-05-01 10:00:00.00' to '2007-05-01 23:00:00.00'

Using sed to backup file and remove lines

Wed, 2017-12-13 19:13
[oracle@racnode-dc1-2 ~]$ cd /u01/app/oracle/12.1.0.2/db1/rdbms/log/

--- DDL will fail since datafile is hard coded!
[oracle@racnode-dc1-2 log]$ cat tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

--- Remove ALTER and RESIZE from sql file.
--- Most likely the incorrect way to do this since TBS may be undersized.

12.2 Datapump Improvements actually does this the right way.

[oracle@racnode-dc1-2 log]$ sed -i.bak '/ALTER DATABASE DATAFILE\|RESIZE/ d' tablespaces_ddl.sql

[oracle@racnode-dc1-2 log]$ ls -l tablespace*
-rw-r--r-- 1 oracle dba 1214 Dec 14 02:03 tablespaces_ddl.sql
-rw-r--r-- 1 oracle dba 1488 Dec 14 01:45 tablespaces_ddl.sql.bak

[oracle@racnode-dc1-2 log]$ diff tablespaces_ddl.sql tablespaces_ddl.sql.bak
14a15,16
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
24a27,28
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
32a37,38
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

[oracle@racnode-dc1-2 log]$ cat tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

[oracle@racnode-dc1-2 log]$

12.2 Datapump Improvements

Wed, 2017-12-13 19:00

The datafile for tablespace USERS was resized to 5242880.

12.2.0.1.0
The 5242880 size is part of the CREATE TABLESPACE statement.

CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 5242880
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

12.1.0.2.0
The 5242880 size is applied separately via ALTER DATABASE DATAFILE ... RESIZE.

Why is this important?
Manual intervention is no longer required to end up with the correct datafile sizes.

 
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

Test Case:

01:01:05 SYS @ owl:>select bytes,tablespace_name,autoextensible,maxbytes from dba_data_files where tablespace_name='USERS';

     BYTES TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
   5242880 USERS                          YES 3.4360E+10

01:01:57 SYS @ owl:>alter tablespace users add datafile size 4m;

Tablespace altered.

01:02:43 SYS @ owl:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         7    5242880
         5    4194304

01:04:09 SYS @ owl:>alter database datafile 5 resize 5242880;

Database altered.

01:05:08 SYS @ owl:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         7    5242880
         5    5242880

01:05:15 SYS @ owl:>

+++++++++++

[oracle@db-asm-1 ~]$ expdp parfile=expdp_tbs.par

Export: Release 12.2.0.1.0 - Production on Thu Dec 14 01:31:12 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_FULL_01":  /******** AS SYSDBA parfile=expdp_tbs.par
W-1 Startup took 1 seconds
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
W-1      Completed 3 TABLESPACE objects in 0 seconds
W-1 Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/admin/owl/dpdump/tbs.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Dec 14 01:31:19 2017 elapsed 0 00:00:06

+++++++++++

[oracle@db-asm-1 ~]$ impdp parfile=impdp_tbs.par

Import: Release 12.2.0.1.0 - Production on Thu Dec 14 01:32:51 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
W-1 Startup took 0 seconds
W-1 Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_tbs.par
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
W-1      Completed 3 TABLESPACE objects in 0 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 14 01:32:54 2017 elapsed 0 00:00:02

+++++++++++

[oracle@db-asm-1 ~]$ cat /u01/app/oracle/admin/owl/dpdump/tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 73400320
  AUTOEXTEND ON NEXT 73400320 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;


CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 33554432
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 5242880
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;


[oracle@db-asm-1 ~]$

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

01:40:59 SYS @ hawk2:>select bytes,tablespace_name,autoextensible,maxbytes from dba_data_files where tablespace_name='USERS';

     BYTES TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
   5242880 USERS                          YES 3.4360E+10

01:41:17 SYS @ hawk2:>alter tablespace users add datafile size 4m;

Tablespace altered.

01:41:24 SYS @ hawk2:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         6    5242880
         2    4194304


01:41:34 SYS @ hawk2:>alter database datafile 2 resize 5242880;

Database altered.

01:41:56 SYS @ hawk2:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         6    5242880
         2    5242880

01:42:02 SYS @ hawk2:>

++++++++++

[oracle@racnode-dc1-2 ~]$ expdp parfile=expdp_tbs.par

Export: Release 12.1.0.2.0 - Production on Thu Dec 14 01:43:19 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Starting "SYS"."SYS_EXPORT_FULL_01":  /******** AS SYSDBA parfile=expdp_tbs.par
Startup took 12 seconds
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 2 seconds
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/12.1.0.2/db1/rdbms/log/tbs.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Dec 14 01:44:34 2017 elapsed 0 00:00:43

++++++++++

[oracle@racnode-dc1-2 ~]$ impdp parfile=impdp_tbs.par

Import: Release 12.1.0.2.0 - Production on Thu Dec 14 01:45:48 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Startup took 1 seconds
Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_tbs.par
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 1 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 14 01:45:57 2017 elapsed 0 00:00:05

[oracle@racnode-dc1-2 ~]$

++++++++++

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oracle/12.1.0.2/db1/rdbms/log/tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
  
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;
[oracle@racnode-dc1-2 ~]$

Goldengate 12.3 Automatic CDR

Sat, 2017-12-02 17:51

Automatic Conflict Detection and Resolution

Requirements: GoldenGate 12c (12.3.0.1) and Oracle Database 12c Release 2 (12.2) and later.

Automatic conflict detection and resolution does not require application changes for the following reasons:

  • Oracle Database automatically creates and maintains invisible timestamp columns.
  • Inserts, updates, and deletes use the delete tombstone log table to determine if a row was deleted.
  • LOB column conflicts can be detected.
  • Oracle Database automatically configures supplemental logging on required columns.

I have not had a chance to play with this yet and only just noticed that the documentation has been updated with the details; a sketch of how a table might be registered for automatic CDR is below.
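
Untested on my side, but per the 12.2 database documentation the entry point is the DBMS_GOLDENGATE_ADM package. A minimal sketch (the HR.EMPLOYEES table is just an example, and the dictionary view name is taken from the docs rather than verified here):

-- Register a table for automatic conflict detection and resolution.
-- The database adds the invisible timestamp column and the required supplemental logging.
BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR(
    schema_name => 'HR',
    table_name  => 'EMPLOYEES');
END;
/

-- Check which tables have been configured.
SELECT * FROM dba_gg_auto_cdr_tables;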

RMAN Backup To FRA Repercussions

Sun, 2017-11-26 09:50

Common advice is to back up to the FRA.
Before following that advice, evaluate whether it fits your environment and understand the repercussions.
Doesn't this potentially create a SPOF and possibly force an unnecessary restore from tape?

HINT:

Make sure the following commands are part of the backup routine when backing up to the FRA.

CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
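
One way to guarantee that is to put the two CLEAR commands at the top of the backup script itself. A minimal sketch (the script layout and the PLUS ARCHIVELOG choice are illustrative, not a prescribed standard):

CONFIGURE CHANNEL DEVICE TYPE DISK CLEAR;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
# No FORMAT clause anywhere, so backup pieces and autobackups stay in the FRA.
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;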

DEMO:
Recovery Manager: Release 12.1.0.2.0 - Production on Sun Nov 26 16:02:17 2017

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default

RMAN> backup datafile 1;

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=+FRA/HAWK/BACKUPSET/2017_11_26/nnndf0_tag20171126t160327_0.274.961085007 tag=TAG20171126T160327 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

--- Control File and SPFILE Autobackup to FRA
Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=+FRA/HAWK/AUTOBACKUP/2017_11_26/s_961085014.275.961085015 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
new RMAN configuration parameters are successfully stored

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
--- CONTROLFILE AUTOBACKUP FORMAT is the same value but ***NOT*** flagged as default, so the autobackup no longer goes to the FRA

RMAN> backup datafile 1;

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=+FRA/HAWK/BACKUPSET/2017_11_26/nnndf0_tag20171126t160655_0.276.961085215 tag=TAG20171126T160655 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

--- Control File and SPFILE Autobackup to ***DISK***
Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=/u01/app/oracle/12.1.0.2/db1/dbs/c-3219666184-20171126-01 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;

old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F';
RMAN configuration parameters are successfully reset to default value

RMAN> show controlfile autobackup format;

RMAN configuration parameters for database with db_unique_name HAWK are:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default <-- 

RMAN> backup datafile 1 FORMAT '%d_%I_%T_%U';

Starting backup at 26-NOV-17
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA/HAWK/DATAFILE/system.258.960967651
channel ORA_DISK_1: starting piece 1 at 26-NOV-17
channel ORA_DISK_1: finished piece 1 at 26-NOV-17
piece handle=/u01/app/oracle/12.1.0.2/db1/dbs/HAWK_3219666184_20171126_0oski093_1_1 tag=TAG20171126T161531 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 26-NOV-17

Starting Control File and SPFILE Autobackup at 26-NOV-17
piece handle=+FRA/HAWK/AUTOBACKUP/2017_11_26/s_961085738.277.961085739 comment=NONE
Finished Control File and SPFILE Autobackup at 26-NOV-17

RMAN>
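
To double-check where each piece landed without scrolling back through the log, a quick query against V$BACKUP_PIECE works (the one-hour window is arbitrary):

select piece#, handle, completion_time
from   v$backup_piece
where  completion_time > sysdate - 1/24
order  by completion_time;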

REFERENCE:

How to KEEP a backup created in the Flash Recovery Area (FRA)? (Doc ID 401163.1)
A backup that needs to be KEPT must be created outside the flash recovery area.

Why are backups going to $ORACLE_HOME/dbs rather than Flash recovery area via Rman or EM Grid control /FRA not considering Archivelog part of it (Doc ID 404854.1)
1. Do not use a FORMAT clause on backup commands.

RMAN Uses Flash Recovery Area for Autobackup When Using Format '%F' (Doc ID 338483.1)