Michael Dinh

Michael T. Dinh, Oracle DBA

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Tue, 2019-04-16 16:53

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the two posts above first.

Commands for gridSetup.sh

+ /u01/18.3.0.0/grid/gridSetup.sh -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/18.3.0.0/grid/install/response/grid_2019-04-16_06-19-12AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/gridSetupActions2019-04-16_06-19-12AM.log

As a root user, execute the following script(s):
        1. /u01/18.3.0.0/grid/rootupgrade.sh

Execute /u01/18.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]


+ exit
oracle@racnode-dc1-1::/home/oracle
$

Basically, the error provided (shown below) is utterly useless.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
oracle@racnode-dc1-1::/home/oracle

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

================================================================================
 # Upgrading RHP Repository failed. # 12668 # 1555412503112
================================================================================

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

Check gridSetupActions2019-04-16_12-59-56PM.log

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

================================================================================
WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
================================================================================

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

================================================================================
INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver
================================================================================
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].

Why is the RHP Repository being upgraded when oracle_install_crs_ConfigureRHPS=false?

oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112
oracle@racnode-dc1-1:+ASM1:/u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM
$

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE 

#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

oracle@racnode-dc1-1::/sf_OracleSoftware/18cLinux
$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

ora.cvu does not report any errors.

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
oracle@racnode-dc1-1:+ASM1:/home/oracle
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

Run rhprepos upgradeSchema -fromversion 12.1.0.2.0 – FAILED.

oracle@racnode-dc1-1::/home/oracle
$ /u01/18.3.0.0/grid/bin/rhprepos upgradeSchema -fromversion 12.1.0.2.0
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

oracle@racnode-dc1-1::/home/oracle
$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
oracle@racnode-dc1-1::/home/oracle
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

----------------------------------------
Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab
IP:   10.0.2.15

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64

----------------------------------------

oracle@racnode-dc1-2::/home/oracle
$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle@racnode-dc1-2::/home/oracle
$

In conclusion, the silent upgrade process is poorly documented at best.

I am starting to wonder if the following parameters contributed to the issue, since they disable the Management Database (GIMR) that the RHP repository is stored in, which would also explain why rhprepos failed when trying to run mgmtca:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

Sun, 2019-04-14 19:54

There have been a lot of discussions about Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]
and how cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade could have changed the cluster upgrade state to [NORMAL].

When gridSetup.sh -executeConfigTools is run in silent mode, the next step, cluvfy, is not run.

[oracle@racnode-dc1-1 ~]$ /u01/18.3.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-04-15_01-02-06AM

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.
[oracle@racnode-dc1-1 ~]$

When gridSetup.sh -executeConfigTools is run from the GUI, there is an option to ignore the failed 'Upgrading RHP Repository' step and continue to the next step, which runs cluvfy.

I don't think cluvfy modified the state of the cluster; rather, ora.cvu did, due to the existence of the following files.

[root@racnode-dc1-1 install]# pwd
/u01/app/oracle/crsdata/@global/cvu/baseline/install
[root@racnode-dc1-1 install]# ll
total 36000
-rw-r--r-- 1 oracle oinstall 35958465 Apr 14 06:05 grid_install_12.1.0.2.0.xml
-rw-r--r-- 1 oracle oinstall   901803 Apr 15 01:42 grid_install_18.0.0.0.0.zip
[root@racnode-dc1-1 install]# 

When checking RESULTS from ora.cvu, there are no errors.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=
[oracle@racnode-dc1-1 ~]$ 

Hell! What do I know? I am just a RAC novice, and I'm happy the cluster state is what it should be.

Update Override OPatch

Sun, 2019-04-14 08:47

A framework to source the GI/DB RAC environment, stored on a shared volume.

[oracle@racnode-dc1-2 patch]$ df -h |grep patch
media_patch              3.7T  442G  3.3T  12% /media/patch

[oracle@racnode-dc1-2 patch]$ ps -ef|grep pmon
oracle    3268  2216  0 15:37 pts/0    00:00:00 grep --color=auto pmon
oracle   11254     1  0 06:33 ?        00:00:02 ora_pmon_hawk2
oracle   19995     1  0 05:52 ?        00:00:02 asm_pmon_+ASM2

[oracle@racnode-dc1-2 patch]$ cat /etc/oratab
+ASM2:/u01/app/12.1.0.1/grid:N
hawk2:/u01/app/oracle/12.1.0.1/db1:N

[oracle@racnode-dc1-2 patch]$ cat gi.env
### Michael Dinh : Mar 26, 2019
### Source RAC GI environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset ORACLE_UNQNAME
ORAENV_ASK=NO
h=$(hostname -s)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=+ASM${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
export GRID_HOME=$ORACLE_HOME
env|egrep 'ORA|GRID'
sysresv|tail -1

[oracle@racnode-dc1-2 patch]$ cat hawk.env
### Michael Dinh : Mar 26, 2019
### Source RAC DB environment
### Prerequisites for hostname: last char from hostname must be digit
### Allow: prodhost01, racnode-dc1-1
### DisAllow: prod01host
set +x
unset GRID_HOME
h=$(hostname -s)
### Extract filename without extension (.env)
ORAENV_ASK=NO
export ORACLE_UNQNAME=$(basename $BASH_SOURCE .env)
### Extract last character from hostname to create ORACLE_SID
export ORACLE_SID=$ORACLE_UNQNAME${h:${#h} - 1}
. oraenv <<< $ORACLE_SID
env|egrep 'ORA|GRID'
sysresv|tail -1
[oracle@racnode-dc1-2 patch]$
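
Both scripts derive ORACLE_SID from the last character of the short hostname. A minimal illustration of that substring expansion (a sketch using a hypothetical hostname, not part of the framework itself):

h=racnode-dc1-2                 # short hostname
echo "${h:${#h} - 1}"           # last character -> 2
echo "+ASM${h:${#h} - 1}"       # gi.env builds ORACLE_SID=+ASM2
echo "hawk${h:${#h} - 1}"       # hawk.env builds ORACLE_SID=hawk2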

update_opatch.sh

#!/bin/sh -x
update_opatch()
{
set -ex
cd $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip ; echo $?
$ORACLE_HOME/OPatch/opatch version
}
ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
. /media/patch/gi.env
update_opatch
. /media/patch/hawk.env
update_opatch
exit

Run update_opatch.sh

[oracle@racnode-dc1-1 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been changed from hawk1 to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 patch]$


[oracle@racnode-dc1-2 patch]$ ./update_opatch.sh
+ ls -lh /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
-rwxrwxrwx 1 vagrant vagrant 107M Feb  1 22:08 /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
+ cd /u01/app/12.1.0.1/grid
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/12.1.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ cd /u01/app/oracle/12.1.0.1/db1
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.1.0.1.3

OPatch succeeded.
+ unzip -qod . /media/patch/Jan2019/p6880880_122010_Linux-x86-64.zip
+ echo 0
0
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.16

OPatch succeeded.
+ exit
[oracle@racnode-dc1-2 patch]$

Create Linux Swap File

Sun, 2019-04-14 08:26

Currently I am using the oravirt (Mikael Sandström) Vagrant boxes from GitHub (https://github.com/oravirt).

The swap is too small; I wanted to increase it for the 18c upgrade test, and I was tired of doing this manually, so here's a script for that.

#!/bin/sh -x
swapon --show
free -h
rm -fv /swapfile1
dd if=/dev/zero of=/swapfile1 bs=1G count=16
ls -lh /swapfile?
chmod 0600 /swapfile1
mkswap /swapfile1
swapon /swapfile1
swapon --show
free -h
echo "/swapfile1         swap                    swap    defaults        0 0" >> /etc/fstab
cat /etc/fstab
exit

Script in action:

[root@racnode-dc1-2 patch]# ./mkswap.sh
+ swapon --show
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   2G  33M   -1
+ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.6G        4.0G        114M        654M        1.4G        779M
Swap:          2.0G         33M        2.0G
+ rm -fv /swapfile1
+ dd if=/dev/zero of=/swapfile1 bs=1G count=16
16+0 records in
16+0 records out
17179869184 bytes (17 GB) copied, 42.7352 s, 402 MB/s
+ ls -lh /swapfile1
-rw-r--r-- 1 root root 16G Apr 14 15:18 /swapfile1
+ chmod 0600 /swapfile1
+ mkswap /swapfile1
Setting up swapspace version 1, size = 16777212 KiB
no label, UUID=b084bd5d-e32e-4c15-974f-09f505a0cedc
+ swapon /swapfile1
+ swapon --show
NAME       TYPE      SIZE   USED PRIO
/dev/dm-1  partition   2G 173.8M   -1
/swapfile1 file       16G     0B   -2
+ free -h
              total        used        free      shared  buff/cache   available
Mem:           5.6G        3.9G        1.0G        189M        657M        1.3G
Swap:           17G        173M         17G
+ echo '/root/swapfile1         swap                    swap    defaults        0 0'
+ cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Apr 18 08:50:14 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/ol-root     /                       xfs     defaults        0 0
UUID=ed2996e5-e077-4e23-83a5-10418226a725 /boot                   xfs     defaults        0 0
/dev/mapper/ol-swap     swap                    swap    defaults        0 0
/dev/vgora/lvora /u01 ext4 defaults 1 2
/root/swapfile1         swap                    swap    defaults        0 0
+ exit
[root@racnode-dc1-2 patch]#

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

Sat, 2019-04-13 18:13

After upgrading Grid Infrastructure to 18c and applying the RU, the cluster upgrade state was not NORMAL.

The cluster upgrade state is [UPGRADE FINAL], which I have never seen before.

Searching Oracle Support was useless as I was only able to find the following states:

The cluster upgrade state is [NORMAL]
The cluster upgrade state is [FORCED]
The cluster upgrade state is [ROLLING PATCH]

The following checks were performed after upgrade:

[oracle@racnode-dc1-1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [18.0.0.0.0]

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [2532936542].

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[oracle@racnode-dc1-1 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#


[oracle@racnode-dc1-2 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]

[oracle@racnode-dc1-2 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2532936542].

[oracle@racnode-dc1-2 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2532936542] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 ] have been applied on the local node. The release patch string is [18.5.0.0.0].

[oracle@racnode-dc1-2 ~]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [UPGRADE FINAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Check the OCR, per Grid Infrastructure Upgrade : The cluster upgrade state is [FORCED] (Doc ID 2482606.1).
I was desperate; the OCR was fine.

[root@racnode-dc1-1 ~]# olsnodes -c
vbox-rac-dc1

[root@racnode-dc1-1 ~]# olsnodes -t -a -s -n
racnode-dc1-1   1       Active  Hub     Unpinned
racnode-dc1-2   2       Active  Hub     Unpinned

[root@racnode-dc1-1 ~]# $GRID_HOME/bin/ocrdump /tmp/ocrdump.txt

[root@racnode-dc1-1 ~]# grep SYSTEM.version.hostnames /tmp/ocrdump.txt
[SYSTEM.version.hostnames]
[SYSTEM.version.hostnames.racnode-dc1-1]
[SYSTEM.version.hostnames.racnode-dc1-1.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-1.site]
[SYSTEM.version.hostnames.racnode-dc1-2]
[SYSTEM.version.hostnames.racnode-dc1-2.patchlevel]
[SYSTEM.version.hostnames.racnode-dc1-2.site]
[root@racnode-dc1-1 ~]#

Thanks to my friend Vlatko J. https://twitter.com/jvlatko

Run cluvfy:

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ which cluvfy
/u01/18.3.0.0/grid/bin/cluvfy

oracle@racnode-dc1-1:+ASM1:/home/oracle
$ cluvfy stage -post crsinst -allnodes -collect cluster -gi_upgrade

Baseline collected.
Collection report for this execution is saved in file "/u01/app/oracle/crsdata/@global/cvu/baseline/install/grid_install_18.0.0.0.0.zip".

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 13, 2019 11:05:58 PM
CVU home:                     /u01/18.3.0.0/grid/
User:                         oracle
oracle@racnode-dc1-1:+ASM1:/home/oracle
$

After running cluvfy, the cluster upgrade state is [NORMAL].

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-1 ~]#

[root@racnode-dc1-2 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2532936542].
[root@racnode-dc1-2 ~]#

Playing with ACFS

Sun, 2019-03-24 11:50

The running kernel is 4.1.12-94.3.9.el7uek.x86_64, while ACFS-9325 reports Driver OS kernel version = 4.1.12-32.el7uek.x86_64, because the kernel was upgraded and ACFS was not reconfigured after the kernel upgrade.
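
The usual remedy is to reinstall the ADVM/ACFS drivers for the running kernel. A minimal sketch (run as root on each node, assuming the GI home ships drivers for the new kernel; acfsroot version_check, shown below, can confirm that):

crsctl stop crs
acfsroot install           # rebuild/install the ADVM/ACFS drivers for the running kernel
crsctl start crs
acfsdriverstate version    # should now report the running kernel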

[root@racnode-dc1-1 ~]# uname -r
4.1.12-94.3.9.el7uek.x86_64

[root@racnode-dc1-1 ~]# lsmod | grep oracle
oracleacfs           3719168  2
oracleadvm            606208  7
oracleoks             516096  2 oracleacfs,oracleadvm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# modinfo oracleoks
filename:       /lib/modules/4.1.12-94.3.9.el7uek.x86_64/weak-updates/usm/oracleoks.ko
author:         Oracle Corporation
license:        Proprietary
srcversion:     3B8116031A3907D0FFFC8E1
depends:
vermagic:       4.1.12-32.el7uek.x86_64 SMP mod_unload modversions
signer:         Oracle Linux Kernel Module Signing Key
sig_key:        2B:B3:52:41:29:69:A3:65:3F:0E:B6:02:17:63:40:8E:BB:9B:B5:AB
sig_hashalgo:   sha512

[root@racnode-dc1-1 ~]# acfsdriverstate version
ACFS-9325:     Driver OS kernel version = 4.1.12-32.el7uek.x86_64(x86_64).
ACFS-9326:     Driver Oracle version = 181010.

[root@racnode-dc1-1 ~]# acfsdriverstate installed
ACFS-9203: true

[root@racnode-dc1-1 ~]# acfsdriverstate supported
ACFS-9200: Supported

[root@racnode-dc1-1 ~]# acfsroot version_check
ACFS-9316: Valid ADVM/ACFS distribution media detected at: '/u01/app/12.1.0.1/grid/usm/install/Oracle/EL7UEK/x86_64/4.1.12/4.1.12-x86_64/bin'

[root@racnode-dc1-1 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# acfsutil registry
Mount Object:
  Device: /dev/asm/acfs_vol-256
  Mount Point: /ggdata02
  Disk Group: DATA
  Volume: ACFS_VOL
  Options: none
  Nodes: all
[root@racnode-dc1-1 ~]#
Gather ACFS Volume Info:

[oracle@racnode-dc1-1 ~]$ asmcmd volinfo --all

Diskgroup Name: DATA

         Volume Name: ACFS_VOL
         Volume Device: /dev/asm/acfs_vol-256
         State: ENABLED
         Size (MB): 10240
         Resize Unit (MB): 512
         Redundancy: UNPROT
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage: ACFS
         Mountpath: /ggdata02
Gather ACFS info using resource name:

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.drivers.acfs -init

NAME=ora.drivers.acfs
TYPE=ora.drivers.acfs.type
TARGET=ONLINE
STATE=ONLINE on racnode-dc1-1

From (asmcmd volinfo --all): Diskgroup Name: DATA

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.dg -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

From (asmcmd volinfo --all): Diskgroup Name: DATA and Volume Name: ACFS_VOL

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.advm -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
                                                             is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res ora.DATA.ACFS_VOL.acfs -t

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------
Gather ACFS info using resource type:

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.volume.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.ACFS_VOL.advm
               ONLINE  ONLINE       racnode-dc1-1            Volume device /dev/asm/acfs_vol-256 
			                                                 is online,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Volume device /dev/asm/acfs_vol-256 
			                                                 is online,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.acfs.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.data.acfs_vol.acfs
               ONLINE  ONLINE       racnode-dc1-1            mounted on /ggdata02,STABLE
               ONLINE  ONLINE       racnode-dc1-2            mounted on /ggdata02,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-1 ~]$ crsctl stat res -t -w 'TYPE = ora.diskgroup.type'

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------

Playing with oracleasm and ASMLib

Fri, 2019-03-15 19:02

I forgot about a script I wrote some time ago: Be Friend With awk/sed | ASM Mapping

[root@racnode-dc1-1 ~]# cat /sf_working/scripts/asm_mapping.sh
#!/bin/sh -e
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done

[root@racnode-dc1-1 ~]# /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Mar 16 10:25 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Mar 16 10:25 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Mar 16 10:25 /dev/sde1

[root@racnode-dc1-1 ~]#
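
A simpler per-disk alternative is oracleasm querydisk -p, which prints the matching /dev entry directly (a minimal sketch; querydisk -p output is shown further down in this post):

# For each ASMLib label, print only the /dev line from querydisk -p.
for disk in $(oracleasm listdisks); do
    oracleasm querydisk -p "$disk" | grep '^/dev/'
done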

HOWTO: Which Disks Are Handled by ASMLib Kernel Driver? (Doc ID 313387.1)

[root@racnode-dc1-1 ~]# oracleasm listdisks
CRS01
DATA01
FRA01

[root@racnode-dc1-1 dev]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8, 33 Mar 15 10:46 CRS01
brw-rw---- 1 oracle dba 8, 49 Mar 15 10:46 DATA01
brw-rw---- 1 oracle dba 8, 65 Mar 15 10:46 FRA01

[root@racnode-dc1-1 dev]# ls -l /dev | grep -E '33|49|65'|grep -E '8'
brw-rw---- 1 root    disk      8,  33 Mar 15 23:47 sdc1
brw-rw---- 1 root    disk      8,  49 Mar 15 23:47 sdd1
brw-rw---- 1 root    disk      8,  65 Mar 15 23:47 sde1

[root@racnode-dc1-1 dev]# /sbin/blkid | grep oracleasm
/dev/sde1: LABEL="FRA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="205115d9-730d-4f64-aedd-d3886e73d123"
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"
/dev/sdc1: LABEL="CRS01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="232e214d-07bb-4f36-aba8-fb215437fb7e"
[root@racnode-dc1-1 dev]#

Various commands to retrieve oracleasm info and more. Note that the UEK kernel ships the oracleasm driver built in, which is why rpm -q oracleasm-`uname -r` below reports the package as not installed even though lsmod shows the oracleasm module loaded.

[root@racnode-dc1-1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# uname -r
4.1.12-61.1.18.el7uek.x86_64

[root@racnode-dc1-1 ~]# rpm -q oracleasm-`uname -r`
package oracleasm-4.1.12-61.1.18.el7uek.x86_64 is not installed

[root@racnode-dc1-1 ~]# rpm -qa |grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# oracleasm -V
oracleasm version 2.1.9

[root@racnode-dc1-1 ~]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver

[root@racnode-dc1-1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@racnode-dc1-1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

[root@racnode-dc1-1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@racnode-dc1-1 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@racnode-dc1-1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@racnode-dc1-1 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

[root@racnode-dc1-1 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"

[root@racnode-dc1-1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRS01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:DATA01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:FRA01 [104853504 blocks (53684994048 bytes), maxio 1024]

[root@racnode-dc1-1 ~]# lsmod | grep oracleasm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Mar  5 20:21 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@racnode-dc1-1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# rpm -qi oracleasmlib-2.0.4-1.el6.x86_64
Name        : oracleasmlib
Version     : 2.0.4
Release     : 1.el6
Architecture: x86_64
Install Date: Tue 18 Apr 2017 10:56:40 AM CEST
Group       : System Environment/Kernel
Size        : 27192
License     : Oracle Corporation
Signature   : RSA/SHA256, Mon 26 Mar 2012 10:22:51 PM CEST, Key ID 72f97b74ec551f03
Source RPM  : oracleasmlib-2.0.4-1.el6.src.rpm
Build Date  : Mon 26 Mar 2012 10:22:44 PM CEST
Build Host  : ca-build44.us.oracle.com
Relocations : (not relocatable)
Packager    : Joel Becker <joel.becker@oracle.com>
Vendor      : Oracle Corporation
URL         : http://oss.oracle.com/
Summary     : The Oracle Automatic Storage Management library userspace code.
Description :
The Oracle userspace library for Oracle Automatic Storage Management
[root@racnode-dc1-1 ~]#

Thank You ALL

Mon, 2019-03-04 07:43

Oracle is like a box of chocolates; you never know what you are going to get. (Reference: Forrest Gump movie)

After spending countless hours over the weekend, I am reminded of the quote, “Curiosity killed the cat, but satisfaction brought it back.”

Basically, I have been unsuccessful in rebuilding the 12.1.0.1 RAC VM to test and validate another upgrade bug.

The finding looks to match: root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3.

Thank you to all who share their experiences !!!

==============================================================================================================
+++ FAILED STEP: TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******
==============================================================================================================

Line 771: failed: [racnode-dc1-2] /u01/app/12.1.0.1/grid/root.sh", ["Check /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log for the output of root script"]
TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******


[oracle@racnode-dc1-2 ~]$ cat /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.1/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2019/03/04 05:17:39 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:06 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:07 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2019/03/04 05:18:44 CLSRSC-507: The root script cannot proceed on this node racnode-dc1-2 because either the first-node operations have not completed on node racnode-dc1-1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.1/grid/crs/install/crsutils.pm line 3681.
The command '/u01/app/12.1.0.1/grid/perl/bin/perl -I/u01/app/12.1.0.1/grid/perl/lib -I/u01/app/12.1.0.1/grid/crs/install /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl ' execution failed
[oracle@racnode-dc1-2 ~]$

[oracle@racnode-dc1-2 ~]$ tail /etc/oracle-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-2 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
[root@racnode-dc1-1 ~]#

==============================================================================================================
+++ CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.
==============================================================================================================

https://community.oracle.com/docs/DOC-1011444

Created by 3389670 on Feb 24, 2017 6:41 PM. Last modified by 3389670 on Feb 24, 2017 6:41 PM.
Visibility: Open to anyone

Problem Summary 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

Problem Description 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3 
OLR initialization - successful 
2017/02/23 05:28:25 CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.2/grid/crs/install/crsutils.pm line 3681. 
The command '/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/roo

Error Codes 
--------------------------------------------------- 
CLSRSC-507

Problem Category/Subcategory 
--------------------------------------------------- 
Database RAC / Grid Infrastructure / Clusterware/Install / Upgrade / Patching issues

Solution 
---------------------------------------------------

1. Download latest JAN 2017 PSU 12.1.0.2.170117 (Jan 2017) Grid Infrastructure Patch Set Update (GI PSU) - 24917825

https://updates.oracle.com/download/24917825.html 

Platform or Language Linux86-64

2. Unzip downloaded patch as GRID user to directory

unzip p24917825_121020_Linux-x86-64.zip -d 

3. Run deconfig on both nodes

In the 2nd node as root user, 

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

In the 1st node as root user, 
/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force -lastnode

4. Once deconfig is completed then move forward on applying patching on both nodes in GRID Home

5. Move to unzip patch directory and apply patch using opatch manual

In 1st node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

In 2nd node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

6. Once Patch apply is completed on both node then move forward on invoking config.sh  

/u01/app/12.1.0.2/grid/crs/config/config.sh

or run root.sh directly on node1 and node2

/u01/app/12.1.0.2/grid/root.sh

Learned Something New: Code Formatting

Sat, 2019-03-02 08:18

Which is better? Using code in [ ] or Using code in < >

Using code in [ ]

[oracle@racnode-dc1-1 18cLinux]$ egrep -v "^#" gridsetup_upgrade.rsp | awk -F'=' '$2'
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.ignoreDownNodes=false

[oracle@racnode-dc1-1 18cLinux]$ sdiff -iEZbWBst -w 120 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                        |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                     |  oracle.install.option=UPGRADE
ORACLE_BASE=                                               |  ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=                                  |  oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=                                 |  oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=                                  |  oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=                        |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=            |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=      |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=               |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.ignoreDownNodes=                 |  oracle.install.crs.config.ignoreDownNodes=false
[oracle@racnode-dc1-1 18cLinux]$

-i, --ignore-case
-E, --ignore-tab-expansion
-Z, --ignore-trailing-space
-b, --ignore-space-change
-W, --ignore-all-space
-B, --ignore-blank-lines
-s, --suppress-common-lines
-t, --expand-tabs (expand tabs to spaces in output)

-w, --width=NUM

Using code in < >

[oracle@racnode-dc1-1 18cLinux]$ egrep -v "^#" gridsetup_upgrade.rsp | awk -F'=' '$2'
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.ignoreDownNodes=false

[oracle@racnode-dc1-1 18cLinux]$ sdiff -iEZbWBst -w 120 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                        |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                     |  oracle.install.option=UPGRADE
ORACLE_BASE=                                               |  ORACLE_BASE=/u01/app/oracle
oracle.install.asm.OSDBA=                                  |  oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=                                 |  oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=                                  |  oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=                        |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=            |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=      |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=               |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.ignoreDownNodes=                 |  oracle.install.crs.config.ignoreDownNodes=false
[oracle@racnode-dc1-1 18cLinux]$

-i, --ignore-case
-E, --ignore-tab-expansion
-Z, --ignore-trailing-space
-b, --ignore-space-change
-W, --ignore-all-space
-B, --ignore-blank-lines
-s, --suppress-common-lines
-t, --expand-tabs (expand tabs to spaces in output)

-w, --width=NUM

And sdiff

ORA-17503: ksfdopn:10 Failed to open spfile

Thu, 2019-02-28 13:21
[oracle@racnode-dc1-2 dbs]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/HAWK/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/HAWK/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/HAWK/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/HAWK/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy

Starting from sqlplus did not help either, and why should it even matter?

[oracle@racnode-dc1-2 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Feb 28 18:11:58 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

18:11:59 SYS @ hawk2:>startup;
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/HAWK/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/HAWK/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
18:12:01 SYS @ hawk2:>exit
Disconnected
[oracle@racnode-dc1-2 dbs]$

SRVCTL Fails to Start Instance with ORA-17503 ORA-27303 But sqlplus Startup is Fine (Doc ID 1322959.1)

Even though the situation did not exactly match the support note, the solution provided did work.

[oracle@racnode-dc1-2 dbs]$ id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[oracle@racnode-dc1-2 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwxrwxr-x 1 oracle oinstall 292020952 Feb 27 22:57 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-2 dbs]$ chmod 6751 $GRID_HOME/bin/oracle
[oracle@racnode-dc1-2 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 292020952 Feb 27 22:57 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-2 dbs]$ ls -l $ORACLE_HOME/bin/oracle
-rwxrwsr-x 1 oracle dba 324409192 Feb 27 22:51 /u01/app/oracle/12.1.0.1/db1/bin/oracle
[oracle@racnode-dc1-2 dbs]$

==========================================================================================

[oracle@racnode-dc1-1 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwxrwxr-x 1 oracle oinstall 292020952 Feb 27 21:41 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ chmod 6751 $GRID_HOME/bin/oracle
[oracle@racnode-dc1-1 dbs]$ ls -l $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 292020952 Feb 27 21:41 /u01/app/12.1.0.1/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -l $ORACLE_HOME/bin/oracle
-rwxrwsr-x 1 oracle dba 324409192 Feb 27 21:35 /u01/app/oracle/12.1.0.1/db1/bin/oracle
[oracle@racnode-dc1-1 dbs]$

[oracle@racnode-dc1-1 dbs]$ srvctl start database -d hawk
[oracle@racnode-dc1-1 dbs]$ srvctl status database -d hawk -v
Instance hawk1 is running on node racnode-dc1-1. Instance status: Open.
Instance hawk2 is running on node racnode-dc1-2. Instance status: Open.
[oracle@racnode-dc1-1 dbs]$

[oracle@racnode-dc1-1 dbs]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-1            Started,STABLE
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-1            STABLE
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-1            169.254.203.248 172.
                                                             16.9.10,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-1            Open,STABLE
ora.oc4j
      1        OFFLINE OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  ONLINE       racnode-dc1-1            STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-1 dbs]$
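For the next patching round, here is a minimal check, assuming the same home paths as above and GNU stat, to spot a missing setuid/setgid bit on either oracle binary before srvctl gets a chance to fail:

for f in /u01/app/12.1.0.1/grid/bin/oracle /u01/app/oracle/12.1.0.1/db1/bin/oracle; do
    # 6751 (-rwsr-s--x) is what the chmod above set on the grid home binary
    stat -c '%a %U %G %n' $f
done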

Oracle BAD 18c Grid Image

Tue, 2019-02-26 08:25

I am screwed before I even start.

Why doesn’t Oracle update the bad image and release a new one!

Downloaded LINUX.X64_180000_grid_home.zip (Oracle Database 18c Grid Infrastructure (18.3) for Linux x86-64)
from Oracle Database 18c (18.3)

Unzip software:

unzip -qod /u01/app/18.0.0/grid /sf_OracleSoftware/LINUX.X64_180000_grid_home.zip

Run: runcluvfy.sh stage -pre crsinst FAILED

/u01/app/18.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.1.0.1/grid -dest_crshome /u01/app/18.0.0/grid -dest_version 18.0.0.0.0 -fixup -verbose

Pre-check for cluster services setup was unsuccessful.

Checks did not pass for the following nodes:
        racnode-dc1-2,racnode-dc1-1


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Physical Memory ...FAILED
racnode-dc1-2: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-2" [Required physical memory = 8GB (8388608.0KB)]

racnode-dc1-1: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Swap Size ...FAILED
racnode-dc1-2: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

racnode-dc1-1: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

Verifying Check incorrectly sized ASM Disks ...FAILED
PRCT-1065 : Failed to verify the size consistency of ASM disks on node
"racnode-dc1-1". kfod execution failed at location "/u01/app/18.0.0/grid//bin".
Detailed error:
/u01/app/18.0.0/grid//bin/kfod.bin: error while loading shared libraries:
libasmclntsh18.so: cannot open shared object file: No such file or directory


CVU operation performed:      stage -pre crsinst
Date:                         Feb 25, 2019 10:16:18 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle

Software was not properly tested.

Perhaps it was a mistake on my part, and the software should have been downloaded from https://edelivery.oracle.com instead.

NOPE: Same results.

Unzip software:

unzip -qod /u01/app/18.0.0/grid /sf_OracleSoftware/18cLinux/V978971-01.zip

Pre-check for cluster services setup was unsuccessful.

Checks did not pass for the following nodes:
        racnode-dc1-2,racnode-dc1-1


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Physical Memory ...FAILED
racnode-dc1-2: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-2" [Required physical memory = 8GB (8388608.0KB)]

racnode-dc1-1: PRVF-7530 : Sufficient physical memory is not available on node
               "racnode-dc1-1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Swap Size ...FAILED
racnode-dc1-2: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

racnode-dc1-1: PRVF-7573 : Sufficient swap size is not available on node
               "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB
               (2097148.0KB)]

Verifying Check incorrectly sized ASM Disks ...FAILED
PRCT-1065 : Failed to verify the size consistency of ASM disks on node
"racnode-dc1-1". kfod execution failed at location "/u01/app/18.0.0/grid//bin".
Detailed error:
/u01/app/18.0.0/grid//bin/kfod.bin: error while loading shared libraries:
libasmclntsh18.so: cannot open shared object file: No such file or directory


CVU operation performed:      stage -pre crsinst
Date:                         Feb 26, 2019 2:37:48 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle
[oracle@racnode-dc1-1 ~]$ ll /u01/app/18.0.0/grid/bin/k*
-rw-r--r-- 1 oracle oinstall      0 Jul 18  2018 /u01/app/18.0.0/grid/bin/kfed
-rwxr-xr-x 1 oracle oinstall    472 Jul 18  2018 /u01/app/18.0.0/grid/bin/kfod
-rwxr-x--x 1 oracle oinstall 144740 Jul 18  2018 /u01/app/18.0.0/grid/bin/kfod.bin
-rw-r--r-- 1 oracle oinstall      0 Jul 18  2018 /u01/app/18.0.0/grid/bin/kgmgr
[oracle@racnode-dc1-1 ~]$

PRCT-1065 Failures During cluvfy Upgrade Verification ( Doc ID 2279848.1 )

Oracle Support suggests applying Oracle® Database Patch 28828717 – GI Release Update 18.5.0.0.190115 by following

How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed? ( Doc ID 1410202.1 )

UPDATE:
 
[oracle@racnode-dc1-1 ~]$ unset ORACLE_BASE ORACLE_HOME ORACLE_SID ORA_CRS_HOME GRID_HOME
[oracle@racnode-dc1-1 ~]$ env|egrep -i 'oracle|home'
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin:/u01/app/12.1.0.1/grid/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle
[oracle@racnode-dc1-1 ~]$ export PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
[oracle@racnode-dc1-1 ~]$ env|egrep -i 'oracle|home'
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle
[oracle@racnode-dc1-1 ~]$ ll /media/patch/Jan2019/28828717/
total 128
drwxrwxrwx 1 vagrant vagrant 0 Feb 26 19:04 28435192
drwxrwxrwx 1 vagrant vagrant 0 Feb 26 19:04 28547619
drwxrwxrwx 1 vagrant vagrant 0 Feb 26 19:05 28822489
drwxrwxrwx 1 vagrant vagrant 0 Feb 26 19:06 28864593
drwxrwxrwx 1 vagrant vagrant 0 Feb 26 19:08 28864607
drwxrwxrwx 1 vagrant vagrant 4096 Feb 26 19:05 automation
-rwxrwxrwx 1 vagrant vagrant 5828 Jan 9 16:02 bundle.xml
-rwxrwxrwx 1 vagrant vagrant 117023 Jan 9 15:43 README.html
-rwxrwxrwx 1 vagrant vagrant 0 Jan 9 23:37 README.txt
[oracle@racnode-dc1-1 ~]$ /u01/app/18.0.0/grid/gridSetup.sh -silent -applyRU /media/patch/Jan2019/28828717
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-02-26_09-29-43PM/installerPatchActions_2019-02-26_09-29-43PM.log
Launching Oracle Grid Infrastructure Setup Wizard...

[FATAL] [INS-40426] Grid installation option has not been specified.
ACTION: Specify the valid installation option.
[oracle@racnode-dc1-1 ~]$
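[INS-40426] simply means gridSetup.sh was not told which installation option to perform. A likely next step, and only an assumption at this point, is to hand it the upgrade response file shown earlier (the one with oracle.install.option=UPGRADE; the path below is assumed from the directory used earlier) together with -applyRU, along these lines:

/u01/app/18.0.0/grid/gridSetup.sh -silent -applyRU /media/patch/Jan2019/28828717 \
-responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp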

Sed’ing Through ora.cvu Hell

Sat, 2019-02-23 06:02

Don’t know why I always look for trouble.

The trouble found was that CHECK_RESULTS from ora.cvu.type had many issues, which look to be related to bugs.

Here is the RAC environment from VM.

[oracle@racnode-dc1-1 ~]$ cat /etc/system-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [0] and no patches have been applied on the local node.

[oracle@racnode-dc1-1 ~]$ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-1 is [0].

[oracle@racnode-dc1-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'
CHECK_RESULTS=PRVF-5507 : NTP daemon or service is not running on any node but NTP configuration file exists on the following node(s):
racnode-dc1-2
racnode-dc1-1
PRVF-5415 : Check to see if NTP daemon or service is running failed
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-2" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
PRVF-7573 : Sufficient swap size is not available on node "racnode-dc1-1" [Required = 2.7844GB (2919680.0KB) ; Found = 2GB (2097148.0KB)]
PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm
PRCW-1015 : Wallet hawk does not exist.
CLSW-9: The cluster wallet to be operated on does not exist. :[1015]
[oracle@racnode-dc1-1 ~]$
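The sed expression simply turns the comma-separated CHECK_RESULTS value into one message per line. A minimal illustration with made-up values, assuming GNU sed:

$ echo "PRVF-5507,PRVF-5415,PRVF-7573" | sed 's/,/\n/g'
PRVF-5507
PRVF-5415
PRVF-7573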
BUGS?

Linux OL7/RHEL7: PRVE-0421 : No entry exists in /etc/fstab for mounting /dev/shm (Doc ID 2065603.1)

Bug 24696235 – cvu check results shows errors PRCW-1015 and CLSW-9 (Doc ID 24696235.8)

[root@racnode-dc1-1 ~]# ocrdump
[root@racnode-dc1-1 ~]# cat OCRDUMPFILE |grep -i SYSTEM.WALLET
[SYSTEM.WALLET]
[SYSTEM.WALLET.APPQOSADMIN]
[SYSTEM.WALLET.MGMTDB]
[root@racnode-dc1-1 ~]#

There is indeed no wallet for database hawk. But if the wallet is created, will it only result in another bug?

cluvfy:PRCQ-1000 : An error occurred while establishing connection to database with user name “DBSNMP” (Doc ID 2288958.1)

PRCQ-1000 : An error occurred while establishing connection to database with user name "DBSNMP" and connect descriptor:
ORA-01017: invalid username/password; logon denied

Cluster Verification Utility (CVU) Check Fails With NTP Configuration (Doc ID 2162408.1)

Some Good References:

Slimming Down Oracle RAC 12c’s Resource Footprint

Oracle Grid Infrastructure: change the interval for the Cluster Verification Utility (ora.cvu)

Small Notes on Clusterware resource ora.cvu

Simplest Automation: Use Environment Variables

Thu, 2019-02-21 07:25

Copy the last five Goldengate trail files from source to destination.

Here are high level steps:

Copy trail files with prefix (aa*) to the new destination:
1. export OLD_DIRDAT=/media/patch/dirdat
2. export NEW_DIRDAT=/media/swrepo/dirdat
3. export TRAIL_PREFIX=aa*
4. ls -l $NEW_DIRDAT
5. ls $OLD_DIRDAT/$TRAIL_PREFIX | head -5
6. ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5
7. cp -fv $(ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5) $NEW_DIRDAT
8. ls -l $NEW_DIRDAT/*

Copy trail files with prefix (ab*) to the new destination:
export TRAIL_PREFIX=ab*
Repeat steps 4-8 (a combined sketch follows below).
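Under the same assumptions (the directories above, and trail file names that sort so the last five listed are the newest), the whole procedure boils down to something like this sketch:

export OLD_DIRDAT=/media/patch/dirdat
export NEW_DIRDAT=/media/swrepo/dirdat
for TRAIL_PREFIX in 'aa*' 'ab*'; do
    # copy the last five trail files (by name order) for each prefix
    cp -fv $(ls $OLD_DIRDAT/$TRAIL_PREFIX | tail -5) $NEW_DIRDAT
done
ls -l $NEW_DIRDAT/*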

opatchauto is not that dumb

Wed, 2019-02-20 17:41

I find it ironic that we want to automate yet fear automation.

Per the documentation ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1),
the following patch is required to implement ACFS:

p22810422_12102160419forACFS_Linux-x86-64.zip
Patch for Bug# 22810422
UEKR4 SUPPORT FOR ACFS(Patch 22810422)

I had inquired why opatchauto is not used to patch the entire system versus manual patching for GI ONLY.

To patch GI home and all Oracle RAC database homes of the same version:
# opatchauto apply _UNZIPPED_PATCH_LOCATION_/22810422 -ocmrf _ocm response file_

OCM is not included in the OPatch binaries since OPatch version 12.2.0.1.5; therefore, -ocmrf is not needed.
The reason for GI-only patching is simply that the patch is only needed to enable ACFS support.

The rationale makes sense, and typically ACFS patches are only applied to the GI home.

Being curious, shouldn't opatchauto know which homes the patch applies to?
Wouldn't it be easier to execute opatchauto versus performing the manual steps?

What do you think and which approach would you use?

Here are the results from applying patch 22810422.


[oracle@racnode-dc1-2 22810422]$ pwd
/sf_OracleSoftware/22810422

[oracle@racnode-dc1-2 22810422]$ sudo su -
Last login: Wed Feb 20 23:13:52 CET 2019 on pts/0

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:21:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-21-53PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-22-12PM.log
The id for this session is YBG6

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-22-24PM_1.log



OPatchauto session completed at Wed Feb 20 23:24:53 2019
Time taken to complete the session 3 minutes, 7 seconds


[root@racnode-dc1-2 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:25:12 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-25-19PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-25-38PM.log
The id for this session is 3BYS

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-28-00PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-2_2019-02-20_11-36-59PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-2
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-2
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-30-39PM_1.log



OPatchauto session completed at Wed Feb 20 23:40:10 2019
Time taken to complete the session 14 minutes, 58 seconds
[root@racnode-dc1-2 ~]#

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-2 ~]$

====================================================================================================

### Checking resources while patching racnode-dc1-1
[oracle@racnode-dc1-2 ~]$ crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
--------------------------------------------------------------------------------

[oracle@racnode-dc1-2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.DATA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.FRA.dg
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.asm
               ONLINE  ONLINE       racnode-dc1-2            Started,STABLE
ora.net1.network
               ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.ons
               ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       racnode-dc1-2            169.254.178.60 172.1
                                                             6.9.11,STABLE
ora.cvu
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.hawk.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       racnode-dc1-2            Open,STABLE
ora.oc4j
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.racnode-dc1-1.vip
      1        ONLINE  INTERMEDIATE racnode-dc1-2            FAILED OVER,STABLE
ora.racnode-dc1-2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
--------------------------------------------------------------------------------
[oracle@racnode-dc1-2 ~]$


====================================================================================================

[oracle@racnode-dc1-1 ~]$ sudo su -
Last login: Wed Feb 20 23:02:19 CET 2019 on pts/0

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
ORACLE_SID = [root] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"

[root@racnode-dc1-1 ~]# export PATH=$PATH:$GRID_HOME/OPatch

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422 -analyze

OPatchauto session is initiated at Wed Feb 20 23:43:46 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-43-54PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-44-12PM.log
The id for this session is M9KF

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

OPatchAuto successful.

--------------------------------Summary--------------------------------

Analysis for applying patches has completed successfully:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1


==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid


==Following patches were SUCCESSFULLY analyzed to be applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-44-26PM_1.log



OPatchauto session completed at Wed Feb 20 23:46:31 2019
Time taken to complete the session 2 minutes, 45 seconds

[root@racnode-dc1-1 ~]# opatchauto apply /sf_OracleSoftware/22810422

OPatchauto session is initiated at Wed Feb 20 23:47:13 2019

System initialization log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-02-20_11-47-20PM.log.

Session log file is /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-02-20_11-47-38PM.log
The id for this session is RHMR

Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0.1/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/12.1.0.1/db1
Patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1

Patch applicability verified successfully on home /u01/app/12.1.0.1/grid


Verifying SQL patch applicability on home /u01/app/oracle/12.1.0.1/db1
SQL patch applicability verified successfully on home /u01/app/oracle/12.1.0.1/db1


Preparing to bring down database service on home /u01/app/oracle/12.1.0.1/db1
Successfully prepared home /u01/app/oracle/12.1.0.1/db1 to bring down database service


Bringing down CRS service on home /u01/app/12.1.0.1/grid
Prepatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-50-01PM.log
CRS service brought down successfully on home /u01/app/12.1.0.1/grid


Start applying binary patch on home /u01/app/12.1.0.1/grid
Binary patch applied successfully on home /u01/app/12.1.0.1/grid


Starting CRS service on home /u01/app/12.1.0.1/grid
Postpatch operation log file location: /u01/app/12.1.0.1/grid/cfgtoollogs/crsconfig/crspatch_racnode-dc1-1_2019-02-20_11-58-57PM.log
CRS service started successfully on home /u01/app/12.1.0.1/grid


Preparing home /u01/app/oracle/12.1.0.1/db1 after database service restarted
No step execution required.........
Prepared home /u01/app/oracle/12.1.0.1/db1 successfully after database service restarted

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
RAC Home:/u01/app/oracle/12.1.0.1/db1
Summary:

==Following patches were SKIPPED:

Patch: /sf_OracleSoftware/22810422/22810422
Reason: This patch is not applicable to this specified target type - "rac_database"


Host:racnode-dc1-1
CRS Home:/u01/app/12.1.0.1/grid
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /sf_OracleSoftware/22810422/22810422
Log: /u01/app/12.1.0.1/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-02-20_23-52-37PM_1.log



OPatchauto session completed at Thu Feb 21 00:01:15 2019
Time taken to complete the session 14 minutes, 3 seconds

[root@racnode-dc1-1 ~]# logout

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [hawk1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.1/grid
ORACLE_HOME=/u01/app/12.1.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
22810422;ACFS Interim patch for 22810422

OPatch succeeded.

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home "/u01/app/oracle/12.1.0.1/db1".

OPatch succeeded.
[oracle@racnode-dc1-1 ~]$

Using awk to remove attributes without values

Mon, 2019-02-18 20:41

Attributes without values are displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
CV_DESTLOC=
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
HOSTING_MEMBERS=
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
NEXT_CHECK_TIME=
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESOURCE_GROUP=
RESTART_ATTEMPTS=5
RESTART_DELAY=0
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
START_TIMEOUT=0
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
STOP_TIMEOUT=0
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
USE_STICKINESS=0
USR_ORA_ENV=
WORKLOAD_CPU=0
WORKLOAD_CPU_CAP=0
WORKLOAD_MEMORY_MAX=0
WORKLOAD_MEMORY_TARGET=0

[oracle@racnode-dc1-1 ~]$

Attributes without values are NOT displayed.

[oracle@racnode-dc1-1 ~]$ crsctl stat res -w "TYPE = ora.cvu.type" -p|awk -F'=' '$2'
NAME=ora.cvu
TYPE=ora.cvu.type
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
ACTION_TIMEOUT=60
AGENT_FILENAME=%CRS_HOME%/bin/orajagent
AUTO_START=restore
CARDINALITY=1
CHECK_INTERVAL=60
CHECK_RESULTS=1122099754
CHECK_TIMEOUT=600
CLEAN_TIMEOUT=60
CRSHOME_SPACE_ALERT_STATE=OFF
CSS_CRITICAL=no
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle CVU resource
ENABLED=1
GEN_NEXT_CHECK_TIME=1550563672
GEN_RUNNING_NODE=racnode-dc1-2
IGNORE_TARGET_ON_FAILURE=no
INSTANCE_FAILOVER=1
LOAD=1
LOGGING_LEVEL=1
MODIFY_TIMEOUT=60
PLACEMENT=restricted
RELOCATE_BY_DEPENDENCY=1
RELOCATE_KIND=offline
RESTART_ATTEMPTS=5
RUN_INTERVAL=21600
SCRIPT_TIMEOUT=30
SERVER_CATEGORY=ora.hub.category
SERVER_POOLS=*
START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network)
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network)
TARGET_DEFAULT=default
TYPE_VERSION=1.1
UPTIME_THRESHOLD=1h
USER_WORKLOAD=no
[oracle@racnode-dc1-1 ~]$
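The one-liner works because awk splits each line on '=' and uses the second field as the pattern: an empty $2 is false, so the line is dropped. A minimal illustration with made-up attribute names:

$ printf 'FOO=bar\nEMPTY=\nZERO=0\n' | awk -F'=' '$2'
FOO=bar

One side effect, visible in the output above, is that attributes whose value is 0 (ACTIVE_PLACEMENT, STOP_TIMEOUT, and friends) are filtered out as well, since awk treats the numeric string 0 as false.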

You might ask why I am doing this.

I am reviewing the configuration before implementation and will compare it with the configuration after implementation.

The fewer items I need to look at, the better.

GoldenGate XAG APP VIP Revisited

Sat, 2019-02-16 08:17

For unknown reasons, XAG integration for GoldenGate target was eradicated without any trace (I was not able to find any).

When running crsctl at target, no resources were available.

crsctl stat res -t -w 'TYPE = xag.goldengate.type'
crsctl stat res -t|egrep -A2 'dbfs|xag'

Here is an example from source:

$ crsctl stat res -t -w 'TYPE = xag.goldengate.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02           STABLE
--------------------------------------------------------------------------------

$ crsctl stat res -t|egrep -A2 'dbfs|xag'
dbfs_mount
               ONLINE  ONLINE       host_source01            STABLE
               ONLINE  ONLINE       host_source02            STABLE
--
ora.dbfs.db
      1        ONLINE  ONLINE       host_source01            Open,STABLE
      2        ONLINE  ONLINE       host_source02            Open,STABLE
--
xag.gg_ue-vip.vip
      1        ONLINE  ONLINE       host_source01            STABLE
xag.gg_ue.goldengate
      1        ONLINE  ONLINE       host_source02            STABLE

Now, I need to setup XAG for target RAC Cluster.

FYI: the XAG bundled agent was not downloaded separately; instead, the version already available from the GRID_HOME was used.

$ agctl query releaseversion
The Oracle Grid Infrastructure Agents release version is 3.1.0

$ agctl query deployment
The Oracle Grid Infrastructure Agents deployment is bundled

Creating the XAG resource using 2 commands produced different metadata than creating it with 1 command.

The difference in FILESYSTEMS is expected due to the change from DBFS to ACFS.

Currently, the change is being implemented at the target.

Here is an example using 2 commands:

As Root:
appvipcfg create -network=1 -ip=10.30.91.158 -vipname=xag.gg_ue-vip.vip -user=ggsuser -group=oinstall

As Oracle:
agctl add goldengate gg_ue \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target04,target02 \
--vip_name xag.gg_target-vip.vip \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2

Create source_xag_goldengate.txt and target_xag_goldengate.txt using:
crsctl stat res -w "TYPE = xag.goldengate.type" -p
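For example, a quick sketch (the redirects are implied by the file names used in the diff below):

crsctl stat res -w "TYPE = xag.goldengate.type" -p > source_xag_goldengate.txt   # run on the source cluster
crsctl stat res -w "TYPE = xag.goldengate.type" -p > target_xag_goldengate.txt   # run on the target cluster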
$ diff source_xag_goldengate.txt target_xag_goldengate.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
---
< AUTO_START=restore
> AUTO_START=never
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=
---
< VIP_CREATED=1
> VIP_CREATED=0

Here is an example using 1 command:

As Root:
agctl add goldengate gg_target \
--gg_home /u03/gg/12.2.0 \
--instance_type target \
--nodes target01,target02 \
--filesystems ora.acfs_data.acfs_vol.acfs \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--network 1 --ip 10.30.91.158 \
--user ggsuser \
--group dba

$ diff source_xag_goldengate.txt target_xag_goldengate2.txt
< ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
> ACL=owner:ggsuser:rwx,pgrp:dba:r-x,other::r--
---
< FILESYSTEMS=dbfs_mount
< GG_HOME=/u03/app/gg/12.2.0
---
> FILESYSTEMS=ora.acfs_data.acfs_vol.acfs
> GG_HOME=/u03/gg/12.2.0
---
< HOSTING_MEMBERS=source01 source02
> HOSTING_MEMBERS=target01 target02
---
< ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
> ORACLE_HOME=/u01/app/oracle/product/12.1.0/client_2
---
< START_DEPENDENCIES=hard(xag.gg_target-vip.vip,dbfs_mount) pullup(xag.gg_target-vip.vip,dbfs_mount)
> START_DEPENDENCIES=hard(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs) pullup(xag.gg_target-vip.vip,ora.acfs_data.acfs_vol.acfs)
---
< STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:dbfs_mount)
> STOP_DEPENDENCIES=hard(xag.gg_target-vip.vip,intermediate:ora.acfs_data.acfs_vol.acfs)

In conclusion, I will be creating XAG using 1 command from now on to provide more metadata info.

Error (CLSD|CLSU-00100|CLSU-00103: error location: sclsdgcwd2|CLSD00183) Running ggsci

Fri, 2019-02-15 18:25

Rant: Any application that requires strace to find the root cause of a simple problem is poorly written.

Oracle blog – Amardeep Sidhu January 12, 2019 Error while running ggsci

The blog above was a great help.

$ ./ggsci 

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.170919 OGGCORE_12.2.0.1.0OGGBP_PLATFORMS_171030.0908_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Oct 30 2017 20:49:22
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.


2019-02-15 18:04:14.827 
CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process
Additional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1
CLSU-00103: error location: sclsdgcwd2
(:CLSD00183:)

Results from strace.

strace ./ggsci 
mkdir("/u01/app/oracle/product/12.1.0/client_2/log", 01777) = -1 EACCES (Permission denied)

I did take a different path to resolution.

The gid is different for ggsuser and oracle:

uid=1521(ggsuser) gid=1500(dba)      groups=1500(dba),1501(oinstall)
uid=1500(oracle)  gid=1501(oinstall) groups=1501(oinstall),1500(dba)

As root, chmod 775 -R /u01/app resolved the issue.

# cd /u01/
# chmod 775 -R app/

However, this does not explain why it was working before adding GoldenGate to CRS.

# agctl add goldengate GoldenGate_instance \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--nodes node1,node2 \
--network 1 --ip 10.30.91.158 \
--user ggsuser \
--group dba \
--filesystems ora.acfs_data.acfs_vol.acfs \
--gg_home /u03/gg/12.2.0

Source Oracle Environment Easily

Tue, 2019-02-12 21:50

I have been patching a lot lately and wanted a fast and easy method to source the Oracle environment.

The objective is to copy and paste from the action plan versus having to selectively copy, edit, and paste.

Example: . /media/patch/gi.env vs . oraenv (and then typing +ASM[n])

I started by creating gi.env, which will be used to source the GI environment on all RAC hosts.

You are probably thinking: isn't it a PITA to have to edit and maintain a separate gi.env per host, e.g. for a 6-node RAC cluster?

Rightfully so; it would be a PITA unless it's dynamic.

There is one requirement: host# = instance#

Hence, +ASM1 running on host05 will not work.

Next step would probably be to script the tasks.

DEMO1:

[oracle@racnode-dc1-1 ~]$ ps -ef|grep [p]mon
oracle 10818 1 0 03:58 ? 00:00:00 asm_pmon_+ASM1
oracle 11456 1 0 03:58 ? 00:00:00 ora_pmon_hawk1
oracle 11763 1 0 03:58 ? 00:00:00 mdb_pmon_-MGMTDB

[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/11.2.0.4/db1
Oracle Instance alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ srvctl status database -d $ORACLE_UNQNAME
Instance hawk1 is running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2

[oracle@racnode-dc1-1 ~]$ df -h /media/patch/
Filesystem Size Used Avail Use% Mounted on
media_patch 3.7T 413G 3.3T 12% /media/patch

[oracle@racnode-dc1-1 ~]$

[oracle@racnode-dc1-2 ~]$ ps -ef|grep [p]mon
oracle 7339 1 0 03:56 ? 00:00:00 asm_pmon_+ASM2
oracle 8904 1 0 03:57 ? 00:00:00 ora_pmon_hawk2

[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
ORACLE_SID = [oracle] ? The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM2"

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM2] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/11.2.0.4/db1
Oracle Instance alive for sid "hawk2"

[oracle@racnode-dc1-2 ~]$ srvctl status database -d $ORACLE_UNQNAME
Instance hawk1 is running on node racnode-dc1-1
Instance hawk2 is running on node racnode-dc1-2

[oracle@racnode-dc1-2 ~]$ cat /media/patch/gi.env
set +x
unset ORACLE_UNQNAME
h=$(hostname -s)
n=1
. oraenv <<< +ASM${h:${#h} - $n}
export GRID_HOME=$ORACLE_HOME
env|egrep 'ORACLE|GRID'
sysresv|tail -1

[oracle@racnode-dc1-2 ~]$

[oracle@racnode-dc1-2 ~]$ cat /media/patch/hawk.env
set +x
h=$(hostname -s)
n=1
export ORACLE_UNQNAME=hawk
. oraenv <<< $ORACLE_UNQNAME${h:${#h} - $n}
env|grep ORACLE
sysresv|tail -1
[oracle@racnode-dc1-2 ~]$
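The dynamic piece in both files is the bash substring expansion ${h:${#h} - $n}, which takes the last $n characters of the hostname and appends them to the SID prefix; this is why the host number must match the instance number. A minimal illustration:

h=racnode-dc1-2
n=1
echo "+ASM${h:${#h} - $n}"    # prints +ASM2
echo "hawk${h:${#h} - $n}"    # prints hawk2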

DEMO2:

[oracle@racnode-dc1-1 ~]$ export PATCH_TOP_DIR=/u01/stage/patch/Jan2019
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ . /media/patch/gi.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ export PREPATCH_LOG=$PATCH_TOP_DIR/`echo $ORACLE_HOME | awk -F/ '{print $NF}'`_prepatch_"$(hostname -s)"_lsinv.log
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -detail > $PREPATCH_LOG; echo $?
0
[oracle@racnode-dc1-1 ~]$ ls -l $PREPATCH_LOG
-rw-r--r-- 1 oracle oinstall 205889 Feb 13 04:41 /u01/stage/patch/Jan2019/grid_prepatch_racnode-dc1-1_lsinv.log
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
ORACLE_SID = [+ASM1] ? The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/11.2.0.4/db1
Oracle Instance alive for sid "hawk1"
[oracle@racnode-dc1-1 ~]$
[oracle@racnode-dc1-1 ~]$ export PREPATCH_LOG=$PATCH_TOP_DIR/`echo $ORACLE_HOME | awk -F/ '{print $NF}'`_prepatch_"$(hostname -s)"_lsinv.log
[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory -detail > $PREPATCH_LOG; echo $?
0
[oracle@racnode-dc1-1 ~]$ ls -l $PREPATCH_LOG
-rw-r--r-- 1 oracle oinstall 118180 Feb 13 04:41 /u01/stage/patch/Jan2019/db1_prepatch_racnode-dc1-1_lsinv.log
[oracle@racnode-dc1-1 ~]$

Find Database Growth Using OEM Repository

Fri, 2018-12-21 08:20

Typically, what has been done is to schedule a job for each database to collect database growth.

This may be problematic, as the job can be forgotten when new databases are created, whereas a database is less likely to be forgotten when it is added to OEM monitoring.

EM12c, EM13c : Querying the Repository Database for Building Reports using Metric Information (Doc ID 2347253.1)

Those raw data are inserted in various tables like EM_METRIC_VALUES for example. 
EM aggregates those management data by hour and by day. 
Those raw data are kept 7 days; the one hour aggregated data are kept 31 days, while one day aggregated data are kept one year.

How to obtain the Historical Database Total Used and Allocated Size from OEM Repository

The above blog post provided a good starting point.

This post uses a query that collects database size (metric_name='DATABASE_SIZE') rather than tablespace size (metric_name='tbspAllocation'), to avoid having to sum all tablespaces to determine database size.

OMS: 13.2.0 and EMREP DB: 12.2.0

Comparison for METRIC_COLUMN between DATABASE_SIZE and tbspAllocation.

For tbspAllocation, the unit was not clear and I did not research it further, but it does appear to be GB.

SQL> select distinct metric_name, METRIC_COLUMN from sysman.mgmt$metric_daily where metric_name='tbspAllocation' order by 1;

METRIC_NAME                                                      METRIC_COLUMN
---------------------------------------------------------------- ----------------------------------------------------------------
tbspAllocation                                                   spaceUsed
tbspAllocation                                                   spaceAllocated

SQL> select distinct METRIC_COLUMN from sysman.mgmt$metric_daily WHERE metric_name='DATABASE_SIZE';

METRIC_COLUMN
----------------------------------------------------------------
ALLOCATED_GB
USED_GB

TARGET_TYPE used (not all results presented):

SQL> select distinct target_type from sysman.mgmt$metric_daily order by 1;

TARGET_TYPE
----------------------------------------------------------------
oracle_database
oracle_pdb
rac_database

METRIC_NAME used (not all results presented):

SQL> select distinct metric_name from sysman.mgmt$metric_daily order by 1;

METRIC_NAME
----------------------------------------------------------------
DATABASE_SIZE
tbspAllocation

DEMO:

SQL> @dbsize.sql
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off
Enter value for 1: perf

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
xxxxperf                                           rac_database    01-MAR-18  2698.6       3526.8
                                                   rac_database    01-APR-18  2709.9       3526.8         2698.6        11.31
                                                   rac_database    01-MAY-18  2728.8       3526.8         2709.9        18.86
                                                   rac_database    01-JUN-18  2735.4       3548.8         2728.8         6.61
                                                   rac_database    01-JUL-18  2746.4       3548.8         2735.4        11.01
                                                   rac_database    01-AUG-18  2758.7       3548.8         2746.4        12.27
                                                   rac_database    01-SEP-18  2772.5       3548.8         2758.7        13.82
                                                   rac_database    01-OCT-18  4888.8       6207.8         2772.5       2116.3
                                                   rac_database    01-NOV-18  4647.8       6207.8         4888.8         -241
                                                   rac_database    01-DEC-18  3383.2       6207.8         4647.8        -1265
yyyyperf                                           oracle_database 01-MAR-18   63.07       395.58
                                                   oracle_database 01-APR-18   63.19       395.58          63.07          .12
                                                   oracle_database 01-MAY-18   64.33       395.58          63.19         1.14
                                                   oracle_database 01-JUN-18   64.81       395.58          64.33          .48
                                                   oracle_database 01-JUL-18    65.1       395.58          64.81          .29
                                                   oracle_database 01-AUG-18   65.22       395.58           65.1          .12
                                                   oracle_database 01-SEP-18   65.79       395.58          65.22          .57
                                                   oracle_database 01-OCT-18   68.18       395.58          65.79         2.39
                                                   oracle_database 01-NOV-18   75.79       395.72          68.18         7.61
                                                   oracle_database 01-DEC-18    80.4       395.72          75.79         4.61

29 rows selected.

SQL> @dbsize
SQL> -- Michael Dinh : Dec 20, 2018
SQL> set echo off
Enter value for 1: *

TARGET_NAME                                        TARGET_TYPE     MONTH_DT  USED_GB ALLOCATED_GB PREVIOUS_MONTH DIFF_USED_GB
-------------------------------------------------- --------------- --------- ------- ------------ -------------- ------------
CDByyyy_xxxxxxxxxxxxxxxxxxxxxxxxxx_CDBROOT         oracle_pdb      01-MAR-18    7.96        94.73
                                                   oracle_pdb      01-APR-18    3.44        94.73           7.96        -4.52
                                                   oracle_pdb      01-MAY-18   12.26        95.07           3.44         8.82
                                                   oracle_pdb      01-JUN-18   76.18        95.12          12.26        63.92
                                                   oracle_pdb      01-JUL-18   70.87        95.15          76.18        -5.31
                                                   oracle_pdb      01-AUG-18   77.63        95.15          70.87         6.76
                                                   oracle_pdb      01-SEP-18     4.9        95.15          77.63       -72.73
                                                   oracle_pdb      01-OCT-18       4        95.15            4.9          -.9
                                                   oracle_pdb      01-NOV-18   41.34        95.15              4        37.34
                                                   oracle_pdb      01-DEC-18   33.52        95.15          41.34        -7.82
CDByyyy_xxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxPDB        oracle_pdb      01-MAR-18  1610.6         2571
                                                   oracle_pdb      01-APR-18  1644.9         2571         1610.6        34.27
                                                   oracle_pdb      01-MAY-18  1659.3       2571.3         1644.9        14.43
                                                   oracle_pdb      01-JUN-18  1694.7       2571.4         1659.3        35.32
                                                   oracle_pdb      01-JUL-18  1753.8       2571.4         1694.7        59.18
                                                   oracle_pdb      01-AUG-18  1827.9       2571.4         1753.8        74.06
                                                   oracle_pdb      01-SEP-18  1900.8       2571.4         1827.9        72.91
                                                   oracle_pdb      01-OCT-18  1977.2       2571.4         1900.8        76.43
                                                   oracle_pdb      01-NOV-18  2044.8       2571.4         1977.2         67.6
                                                   oracle_pdb      01-DEC-18  2144.5       2571.4         2044.8        99.64

Script:

set line 200 verify off trimspool off tab off pages 1000 numw 6 echo on
-- Michael Dinh : Dec 20, 2018
set echo off
/*
How to obtain the Historical Database Total Used and Allocated Size from OEM Repository
*/
col target_name for a50
col target_type for a15
undefine 1
break on target_name
WITH dbsz AS (
SELECT
target_name, target_type, month_dt,
SUM(DECODE(metric_column, 'USED_GB', maximum)) used_gb,
SUM(DECODE(metric_column, 'ALLOCATED_GB', maximum)) allocated_gb
FROM (
SELECT target_name, target_type, trunc(rollup_timestamp,'MONTH') month_dt, metric_column, MAX(maximum) maximum
FROM sysman.mgmt$metric_daily
WHERE target_type IN ('rac_database','oracle_database','oracle_pdb')
AND metric_name = 'DATABASE_SIZE'
AND metric_column IN ('ALLOCATED_GB','USED_GB')
AND REGEXP_LIKE(target_name,'&&1','i')
GROUP BY target_name, target_type, trunc(rollup_timestamp,'MONTH'), metric_column
)
GROUP BY target_name, target_type, month_dt
ORDER BY target_name, month_dt
)
SELECT target_name, target_type, month_dt, used_gb, allocated_gb,
LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY month_dt) previous_month,
used_gb-LAG(used_gb,1) OVER (PARTITION BY target_name ORDER BY month_dt) diff_used_gb
FROM dbsz
ORDER BY target_name, month_dt
;
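As the demos show, the script prompts for a single substitution variable (&&1), which REGEXP_LIKE applies as a case-insensitive regular expression against target_name; entering a pattern such as perf limits the report to matching targets, while * (as in the second demo) was used to report on all targets.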
