Michael Dinh
Port Forwarding Using SSH Config File
Here is a good reference SSH config file for the OpenSSH client.
From a secured server, I am able to connect to 2 different environments, which seems counterintuitive, but I digress.
Since there are 2 different environments, the same ~/.ssh/config cannot be used because the IP addresses may overlap.
One environment will use ~/.ssh/config and ssh emhost
Other environment will use ~/.ssh/cbconfig and ssh -F ~/.ssh/cbconfig emhost
The default EM port for both hosts is 7803.
Using Firefox with https://localhost:7803/em to access EM does not work well for saving the username and password, as the saved credentials for one environment will be overwritten by the other.
One solution for saving sysman's password is to use a URL with a different port for each environment.
Hence, config forwards EM port 7803 to local port 7803, while cbconfig forwards EM port 7803 to local port 7804.
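The essence of the trick reduces to one LocalForward line per file (a sketch; the IPs are placeholders): config maps local 7803 to the cloud EM, cbconfig maps local 7804 to the on-premise EM, so each environment gets a distinct URL (https://localhost:7803/em vs https://localhost:7804/em) and Firefox can save a separate password for each.

```
# ~/.ssh/config (cloud): browse https://localhost:7803/em
Host emhost
    LocalForward 7803 <cloud-em-ip>:7803

# ~/.ssh/cbconfig (on-premise): browse https://localhost:7804/em
Host emhost
    LocalForward 7804 <onprem-em-ip>:7803
```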

========================================
This is on cloud and looks complicated.
========================================
I did not create the configuration and don't know how many hours it took.
~/.ssh/config
Host emhost
HostName 10.157.38.66
LocalForward 7001 10.157.38.66:7001
LocalForward 7102 10.157.38.66:7102
LocalForward 7803 10.157.38.66:7803
LocalForward 9803 10.157.38.66:9803
LocalForward 9851 10.157.38.66:9851
# DEFAULTS:
Host *
User dinh
========================================
This is on premise and looks simpler.
========================================
ssh -F ~/.ssh/cbconfig emhost
Host emhost
HostName 10.10.72.254
# Port forwarding needs to use the IP address.
# Equivalent to ssh -L 7804:10.10.72.254:7803 mdinh@10.10.72.254
LocalForward 7804 10.131.28.227:7803
# DEFAULTS:
Host *
User mdinh
Who Can Access
I had a request to list the users who have read access (or greater) to the APP schema.
Based on the results below:
User DINH has SELECT on the APP.INTERVAL table
Role APP_ROLE has SELECT and UPDATE on the APP.INTERVAL table
Users DINH, APP, and SYS have been granted APP_ROLE
SQL> show con_name
CON_NAME
------------------------------
ORCLPDB1
SQL> show user
USER is "SYS"
SQL> @priv.sql
SQL> select username from dba_users where created > (select created from v$database) order by 1;
USERNAME
------------------------------
APP
AUDIT_TEST
DINH
DINH099PD
PDBADMIN
WMS099PD
6 rows selected.
SQL> select * from DBA_TAB_PRIVS where owner='APP';
GRANTEE OWNER TABLE_NAME GRANTOR PRIVILEGE GRANTABLE HIERARCHY COM TYPE INH
-------------------- -------------------- ------------------------- -------------------- -------------------- --------- --------- --- ------------------------ ---
DINH APP INTERVAL APP SELECT NO NO NO TABLE NO
APP_ROLE APP INTERVAL APP SELECT NO NO NO TABLE NO
APP_ROLE APP INTERVAL APP UPDATE NO NO NO TABLE NO
SQL> --- ROLE_TAB_PRIVS describes table privileges granted to roles.
SQL> --- Information is provided only about roles to which the user has access.
SQL> select * from ROLE_TAB_PRIVS where OWNER='APP';
ROLE OWNER TABLE_NAME COLUMN_NAME PRIVILEGE GRANTABLE COM INH
------------------------------ -------------------- ------------------------- ------------------------- -------------------- --------- --- ---
APP_ROLE APP INTERVAL UPDATE NO NO NO
APP_ROLE APP INTERVAL SELECT NO NO NO
SQL> select * from DBA_ROLE_PRIVS where GRANTED_ROLE='APP_ROLE' order by 1;
GRANTEE GRANTED_ ADMIN DEL DEFAULT COM INH
-------------------- -------- ----- --- ------- --- ---
APP APP_ROLE YES NO YES NO NO
DINH APP_ROLE NO NO YES NO NO
SYS APP_ROLE NO NO YES NO NO
SQL>
I also used Pete Finnigan’s who_can_access.sql for comparison.
Note that who_can_access.sql works per object rather than per schema.
If there were hundreds or thousands of tables, I am not sure how well this would scale.
who_can_access: Release 1.0.3.0.0 - Production on Wed Apr 07 19:00:04 2021
Copyright (c) 2004 PeteFinnigan.com Limited. All rights reserved.
NAME OF OBJECT TO CHECK [USER_OBJECTS]: INTERVAL
OWNER OF THE OBJECT TO CHECK [USER]: APP
OUTPUT METHOD Screen/File [S]:
FILE NAME FOR OUTPUT [priv.lst]:
OUTPUT DIRECTORY [DIRECTORY or file (/tmp)]:
EXCLUDE CERTAIN USERS [N]:
USER TO SKIP [TEST%]:
Checking object => APP.INTERVAL
====================================================================
Object type is => TABLE (TAB)
Privilege => SELECT is granted to =>
Role => APP_ROLE (ADM = NO) which is granted to =>
User => DINH (ADM = NO)
User => SYS (ADM = NO)
User => APP (ADM = YES)
User => DINH (ADM = NO)
Privilege => UPDATE is granted to =>
Role => APP_ROLE (ADM = NO) which is granted to =>
User => DINH (ADM = NO)
User => SYS (ADM = NO)
User => APP (ADM = YES)
PL/SQL procedure successfully completed.
For updates please visit http://www.petefinnigan.com/tools.htm
SQL>
Did I do this right?
Detect Linux Host Restart
Some time ago I blogged about Monitor Linux Host Restart.
The simple solution: How to email admins automatically after a Linux server starts?
Here is the example from root’s cron:
# crontab -l
@reboot su oracle -c '/home/oracle/scripts/host_restart_alert.sh' > /tmp/host_restart_alert.out 2>&1
A shell script is used because mail cannot be sent from the local host; it must be sent from a remote host.
#!/bin/bash -x
MAILFROM=
MAILTO=
SUBJECT="Node reboot detected for $(hostname)"
EMAILMESSAGE="$(hostname) was restarted $(uptime -p | awk -F'up' '{print $2}') ago at $(uptime -s)"
# uptime reports in whole minutes, so sleep for at least 60s after the host restarts
sleep 63
ssh oracle@remotehost /bin/bash <<EOF
/home/oracle/scripts/send_email.sh "$EMAILMESSAGE" "$SUBJECT" "$MAILFROM" "$MAILTO"
EOF
exit
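The message construction in the script can be exercised on its own; a minimal sketch, assuming GNU uptime with the -p and -s options (no mail is sent here):

```shell
# Rebuild the alert pieces the script composes
host=$(hostname)
boot_ts=$(uptime -s)                             # boot timestamp, e.g. 2021-04-07 08:15:03
pretty=$(uptime -p | awk -F'up' '{print $2}')    # e.g. " 2 hours, 5 minutes"
echo "Node reboot detected for ${host}"
echo "${host} was restarted${pretty} ago at ${boot_ts}"
```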
Why is there a need to detect host restarts, and isn't there already monitoring for the hosts?
This is an Oracle Exadata Cloud@Customer (ExaCC) environment.
When Oracle support performs patching, they do not provide any communication or status, and monitoring is disabled for all hosts beforehand.
From OPatchAuto to Patch a GI/RAC Environment:
"After the patching is complete and your servers are restarted, you should check your product software to verify that the issue has been resolved."
This is why there is a need to detect and be notified of server restarts.
Linux Find Week# Of Month
Unfortunately, Linux date does not have a format specifier for the week number of the month.
I found the solution at https://serverfault.com/questions/383666/how-to-determine-number-of-week-of-the-month
echo $((($(date +%-d)-1)/7+1))
Here is how I tested it.
[oracle@oracle-12201-vagrant ~]$ date -d '20210404'
Sun Apr 4 00:00:00 -05 2021
[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210404' +%-d)-1)/7+1))
1
[oracle@oracle-12201-vagrant ~]$ date -d '20210411'
Sun Apr 11 00:00:00 -05 2021
[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210411' +%-d)-1)/7+1))
2
[oracle@oracle-12201-vagrant ~]$ date -d '20210418'
Sun Apr 18 00:00:00 -05 2021
[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210418' +%-d)-1)/7+1))
3
[oracle@oracle-12201-vagrant ~]$ date -d '20210425'
Sun Apr 25 00:00:00 -05 2021
[oracle@oracle-12201-vagrant ~]$ echo $((($(date -d '20210425' +%-d)-1)/7+1))
4
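The arithmetic generalizes to a small helper function (a sketch; GNU date and its %-d specifier assumed):

```shell
# Week-of-month: days 1-7 -> 1, 8-14 -> 2, 15-21 -> 3, 22-28 -> 4, 29-31 -> 5
week_of_month() {
  local dom
  dom=$(date -d "$1" +%-d)            # day of month without leading zero
  echo $(( (dom - 1) / 7 + 1 ))
}
week_of_month 20210404    # 1
week_of_month 20210430    # 5
```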
Why is this even useful?
It can be a continuation of Simplify Log Management For Backup
Simplify Log Management For Backup
Currently, there is a cronjob to delete backup logs older than 7 days.
30 23 * * * find /home/oracle/scripts/logs -name "*.log" -mtime +7 -exec rm {} \;
Typically, it's not a big deal; however, having to update a crontab with 50 entries can be cumbersome when it is not necessary.
$ crontab -l|wc -l
50
Furthermore, there are 1,044 logs accumulated in the directory.
$ ls -l /home/oracle/scripts/logs/*.log|wc -l
1044
Here is an example of level0 / level1 backup logs; it seems redundant to have a timestamp as part of the log name.
$ ls -lt backup_$ORACLE_SID_level0*.log
-rw-r--r-- 1 oracle oinstall 1004854 Apr 3 07:04 backup_$ORACLE_SID_level0_sbt_202104030700_Sat.log
-rw-r--r-- 1 oracle oinstall 839713 Mar 28 05:08 backup_$ORACLE_SID_level0_202103280500_Sun.log
-rw-r--r-- 1 oracle oinstall 1292709 Mar 27 07:04 backup_$ORACLE_SID_level0_sbt_202103270700_Sat.log
$ ls -lt backup_$ORACLE_SID_level1*.log
-rw-r--r-- 1 oracle oinstall 31694 Apr 3 05:11 backup_$ORACLE_SID_level1_202104030510_Sat.log
-rw-r--r-- 1 oracle oinstall 801491 Apr 2 07:33 backup_$ORACLE_SID_level1_sbt_202104020730_Fri.log
-rw-r--r-- 1 oracle oinstall 31711 Apr 2 05:11 backup_$ORACLE_SID_level1_202104020510_Fri.log
-rw-r--r-- 1 oracle oinstall 767509 Apr 1 07:33 backup_$ORACLE_SID_level1_sbt_202104010730_Thu.log
-rw-r--r-- 1 oracle oinstall 31587 Apr 1 05:11 backup_$ORACLE_SID_level1_202104010510_Thu.log
-rw-r--r-- 1 oracle oinstall 733961 Mar 31 07:32 backup_$ORACLE_SID_level1_sbt_202103310730_Wed.log
-rw-r--r-- 1 oracle oinstall 32797 Mar 31 05:11 backup_$ORACLE_SID_level1_202103310510_Wed.log
-rw-r--r-- 1 oracle oinstall 700145 Mar 30 07:32 backup_$ORACLE_SID_level1_sbt_202103300730_Tue.log
-rw-r--r-- 1 oracle oinstall 31591 Mar 30 05:11 backup_$ORACLE_SID_level1_202103300510_Tue.log
-rw-r--r-- 1 oracle oinstall 666291 Mar 29 07:32 backup_$ORACLE_SID_level1_sbt_202103290730_Mon.log
-rw-r--r-- 1 oracle oinstall 31731 Mar 29 05:11 backup_$ORACLE_SID_level1_202103290510_Mon.log
-rw-r--r-- 1 oracle oinstall 631891 Mar 28 07:32 backup_$ORACLE_SID_level1_sbt_202103280730_Sun.log
-rw-r--r-- 1 oracle oinstall 32925 Mar 27 05:11 backup_$ORACLE_SID_level1_202103270510_Sat.log
-rw-r--r-- 1 oracle oinstall 1091718 Mar 26 07:33 backup_$ORACLE_SID_level1_sbt_202103260730_Fri.log
-rw-r--r-- 1 oracle oinstall 31523 Mar 26 05:11 backup_$ORACLE_SID_level1_202103260510_Fri.log
Here’s how to improve logging for level0 and level1 backups using the date command.
This will keep backup level0 and level1 logs for 7 days.
%a - locale's abbreviated weekday name (e.g., Sun)
$ echo "backup_${ORACLE_SID}_level0_$(date +%a).log"
backup_ORCLCDB_level0_Sat.log
$ echo "backup_${ORACLE_SID}_level0_sbt_$(date +%a).log"
backup_ORCLCDB_level0_sbt_Sat.log
$ echo "backup_${ORACLE_SID}_level1_$(date +%a).log"
backup_ORCLCDB_level1_Sat.log
$ echo "backup_${ORACLE_SID}_level1_sbt_$(date +%a).log"
backup_ORCLCDB_level1_sbt_Sat.log
Basically, the log will be overwritten on a weekly basis.
What will happen if a level0 backup fails on the weekend and is performed on Monday instead?
It’s a one-off; will you lose sleep over it?
Next, there are 204 logs for archived log backup.
$ ls -lt backup_$ORACLE_SID_arch*.log|wc -l
204
$ ls -lt backup_$ORACLE_SID_arch*.log|tail
-rw-r--r-- 1 oracle oinstall 14104 Mar 26 08:51 backup_$ORACLE_SID_arch_202103260850_Fri.log
-rw-r--r-- 1 oracle oinstall 14103 Mar 26 07:51 backup_$ORACLE_SID_arch_202103260750_Fri.log
-rw-r--r-- 1 oracle oinstall 14108 Mar 26 06:51 backup_$ORACLE_SID_arch_202103260650_Fri.log
-rw-r--r-- 1 oracle oinstall 13081 Mar 26 05:51 backup_$ORACLE_SID_arch_202103260550_Fri.log
-rw-r--r-- 1 oracle oinstall 14109 Mar 26 04:51 backup_$ORACLE_SID_arch_202103260450_Fri.log
-rw-r--r-- 1 oracle oinstall 14104 Mar 26 03:51 backup_$ORACLE_SID_arch_202103260350_Fri.log
-rw-r--r-- 1 oracle oinstall 14108 Mar 26 02:51 backup_$ORACLE_SID_arch_202103260250_Fri.log
-rw-r--r-- 1 oracle oinstall 14104 Mar 26 01:51 backup_$ORACLE_SID_arch_202103260150_Fri.log
-rw-r--r-- 1 oracle oinstall 14108 Mar 26 00:51 backup_$ORACLE_SID_arch_202103260050_Fri.log
-rw-r--r-- 1 oracle oinstall 14104 Mar 25 23:51 backup_$ORACLE_SID_arch_202103252350_Thu.log
$ ls -lt backup_$ORACLE_SID_arch*.log|head
-rw-r--r-- 1 oracle oinstall 14094 Apr 3 10:51 backup_$ORACLE_SID_arch_202104031050_Sat.log
-rw-r--r-- 1 oracle oinstall 14091 Apr 3 09:50 backup_$ORACLE_SID_arch_202104030950_Sat.log
-rw-r--r-- 1 oracle oinstall 14092 Apr 3 08:51 backup_$ORACLE_SID_arch_202104030850_Sat.log
-rw-r--r-- 1 oracle oinstall 14096 Apr 3 07:51 backup_$ORACLE_SID_arch_202104030750_Sat.log
-rw-r--r-- 1 oracle oinstall 14100 Apr 3 06:51 backup_$ORACLE_SID_arch_202104030650_Sat.log
-rw-r--r-- 1 oracle oinstall 13631 Apr 3 05:51 backup_$ORACLE_SID_arch_202104030550_Sat.log
-rw-r--r-- 1 oracle oinstall 14082 Apr 3 04:51 backup_$ORACLE_SID_arch_202104030450_Sat.log
-rw-r--r-- 1 oracle oinstall 14084 Apr 3 03:51 backup_$ORACLE_SID_arch_202104030350_Sat.log
-rw-r--r-- 1 oracle oinstall 14104 Apr 3 02:51 backup_$ORACLE_SID_arch_202104030250_Sat.log
-rw-r--r-- 1 oracle oinstall 14093 Apr 3 01:51 backup_$ORACLE_SID_arch_202104030150_Sat.log
Here’s how to improve logging for archived log backups using the date command.
This will keep backup archived logs for 7 days.
%H - hour (00..23)
$ echo "backup_${ORACLE_SID}_arch_$(date +%a"_H"%H).log"
backup_ORCLCDB_arch_Sat_H11.log
What if 2 versions of the log are required? Rename the current log file and append .1 before creating a new one.
What if 3 versions of the log are required? Hmm?
How can there be 3 versions of level0 and level1 backup logs when backup logs older than 7 days are deleted?
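For the two-version case, the rotation can be sketched as follows (ORACLE_SID and the echoed content are placeholders for the demo):

```shell
# Keep 2 generations of a weekday-named log: rotate the current one to .1 first
ORACLE_SID=ORCLCDB                               # assumption for the demo
LOG="backup_${ORACLE_SID}_level1_$(date +%a).log"
if [ -f "$LOG" ]; then mv -f "$LOG" "$LOG.1"; fi # last week's run becomes .1
echo "rman output ..." > "$LOG"                  # placeholder for the real backup log
ls -1 "$LOG"*
```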
Cleanup Trace Files For Multiple Oracle Homes
I know what you are probably thinking: what's the big deal, and how many homes can there be?
For Exadata Cloud, I recall seeing as many as 18 database homes.
As shown below, there are 5 database homes with version 12.2 and 1 database home with version 19.0.
# dbaascli dbhome info
DBAAS CLI version 21.1.1.0.1
Executing command dbhome info
Enter a homename or just press enter if you want details of all homes
1.HOME_NAME=OraHome101
HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_4
VERSION=19.8.0.0
PATCH_LEVEL=19.8.0.0.200714
DBs installed=
OH Backup=NOT Configured
2.HOME_NAME=OraHome100
HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_7
VERSION=19.8.0.0
PATCH_LEVEL=19.8.0.0.200714
DBs installed=*****
Agent DB IDs=d21b07df-20f2-439e-bc40-78a9597af362
OH Backup=NOT Configured
3.HOME_NAME=OraHome105_12201_dbru200714_0
HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_6
VERSION=19.8.0.0
PATCH_LEVEL=19.8.0.0.200714
DBs installed=******
Agent DB IDs=f7d46615-a223-4002-9270-fa69465a7f2a
OH Backup=NOT Configured
4.HOME_NAME=OraHome102_12201_dbru200714_0
HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_3
VERSION=19.8.0.0
PATCH_LEVEL=19.8.0.0.200714
DBs installed=*****
Agent DB IDs=dceed071-9655-4c84-bef4-74b20180c99b
OH Backup=NOT Configured
5.HOME_NAME=OraHome101_12201_dbru200714_0
HOME_LOC=/u02/app/oracle/product/12.2.0/dbhome_2
VERSION=19.8.0.0
PATCH_LEVEL=19.8.0.0.200714
DBs installed=*******
Agent DB IDs=b2a5220d-844b-49b6-9351-7c72cf3c9d9b
OH Backup=NOT Configured
6.HOME_NAME=OraHome100_19800_dbru200714_0
HOME_LOC=/u02/app/oracle/product/19.0.0.0/dbhome_2
VERSION=19.8.0.0
PATCH_LEVEL=19.8.0.0
DBs installed=********
Agent DB IDs=feedb0e0-2d10-4db7-997a-a78e4ab083ef
Checking oratab for Oracle Homes
$ sort -u -t : -k 2,2 /etc/oratab | grep -v "^#" | awk -F ":" '{print $2}'
/u01/app/19.0.0.0/grid
/u02/app/oracle/product/12.2.0/dbhome_2
/u02/app/oracle/product/12.2.0/dbhome_3
/u02/app/oracle/product/12.2.0/dbhome_4
/u02/app/oracle/product/12.2.0/dbhome_6
/u02/app/oracle/product/12.2.0/dbhome_7
/u02/app/oracle/product/19.0.0.0/dbhome_2
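The oratab parsing above can be sanity-checked against a scratch file (hypothetical entries; the real /etc/oratab format is SID:ORACLE_HOME:startup_flag):

```shell
# Build a scratch oratab and extract each distinct Oracle Home (field 2) once
cat > /tmp/oratab.demo <<'EOF'
# comment lines are ignored
db1:/u02/app/oracle/product/12.2.0/dbhome_2:N
db2:/u02/app/oracle/product/12.2.0/dbhome_2:N
db3:/u02/app/oracle/product/19.0.0.0/dbhome_2:N
EOF
sort -u -t : -k 2,2 /tmp/oratab.demo | grep -v "^#" | awk -F ":" '{print $2}'
```

Note that sort -u with a key of -k 2,2 keeps only one line per distinct second field, which is why the two entries sharing dbhome_2 under 12.2.0 collapse into one.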
Here is the crontab schedule:
00 01 * * * find /u01/app/grid/diag/crs/*/crs/trace -name "*.tr?" -mtime +30 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/audit -name "*.aud" -mtime +366 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "cdmp*" -mtime +200 -exec rm -rf {} \;
00 04 * * * find /u02/app/oracle/diag/rdbms/*/*/cdump -name "core*" -mtime +200 -exec rm -rf {} \;
Here is the explanation of what the asterisk (*) represents in each path, with examples:
00 01 * * * find /u01/app/grid/diag/crs/*/crs/trace -name "*.tr?" -mtime +30 -exec rm -f {} \;
ls -ld /u01/app/grid/diag/crs/*/crs/trace
* = hostname
Example:
$ ls -ld /u01/app/grid/diag/crs/*/crs/trace
drwxrwxr-x 2 grid oinstall 135168 Mar 26 18:40 /u01/app/grid/diag/crs/hostname/crs/trace
==============================
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/audit -name "*.aud" -mtime +366 -exec rm -f {} \;
ls -ld /u02/app/oracle/product/*/*/rdbms/audit
*/* = version/dbhome
Example:
$ ls -ld /u02/app/oracle/product/*/*/rdbms/audit
drwxr-xr-x 9 oracle oinstall 614400 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/audit
drwxr-xr-x 2 oracle oinstall 253952 Mar 26 18:40 /u02/app/oracle/product/12.2.0/dbhome_3/rdbms/audit
drwxr-xr-x 2 oracle oinstall 294912 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/audit
drwxr-xr-x 4 oracle oinstall 94208 Mar 26 18:32 /u02/app/oracle/product/12.2.0/dbhome_6/rdbms/audit
drwxr-xr-x 2 oracle oinstall 4096 Mar 1 02:31 /u02/app/oracle/product/12.2.0/dbhome_7/rdbms/audit
drwxr-xr-x 3 oracle oinstall 5783552 Mar 26 18:32 /u02/app/oracle/product/19.0.0.0/dbhome_2/rdbms/audit
==============================
00 01 * * * find /u02/app/oracle/product/*/*/rdbms/log -name "*.tr?" -mtime +200 -exec rm -f {} \;
ls -l /u02/app/oracle/product/*/*/rdbms/log/*.tr?
*/* = version/dbhome
Example:
$ ls -l /u02/app/oracle/product/*/*/rdbms/log/*.tr?
-rw-r----- 1 oracle asmadmin 868 Feb 19 17:41 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/log/*******2_ora_57506.trc
-rw-r----- 1 oracle asmadmin 868 Dec 4 18:06 /u02/app/oracle/product/12.2.0/dbhome_2/rdbms/log/*******2_ora_66404.trc
-rw-r----- 1 oracle asmadmin 862 Mar 24 19:38 /u02/app/oracle/product/12.2.0/dbhome_3/rdbms/log/*****2_ora_217755.trc
-rw-r----- 1 oracle asmadmin 869 Feb 18 21:51 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/log/*****2_ora_351349.trc
-rw-r----- 1 oracle asmadmin 867 Feb 19 17:41 /u02/app/oracle/product/12.2.0/dbhome_4/rdbms/log/*****2_ora_57519.trc
-rw-r----- 1 oracle asmadmin 866 Mar 1 20:01 /u02/app/oracle/product/12.2.0/dbhome_6/rdbms/log/******2_ora_167170.trc
-rw-r----- 1 oracle asmadmin 831 Mar 1 02:31 /u02/app/oracle/product/12.2.0/dbhome_7/rdbms/log/*****2_ora_314160.trc
==============================
00 04 * * * find /u02/app/oracle/diag/rdbms/*/*/cdump -name "core*" -mtime +200 -exec rm -rf {} \;
ls -ld /u02/app/oracle/diag/rdbms/*/*/cdump
*/* = db_unique_name/db_name
Example:
$ ls -ld /u02/app/oracle/diag/rdbms/*/*/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 3 2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 2 2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 21 2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Feb 17 02:35 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 21 2020 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Feb 18 21:51 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
drwxr-xr-x 2 oracle asmadmin 4096 Sep 25 07:13 /u02/app/oracle/diag/rdbms/db_unique_name/db_name/cdump
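Before trusting the -mtime filters in cron, each find can be dry-run with -print against a scratch directory (a sketch; GNU touch -d assumed):

```shell
# Verify that -mtime +200 matches only sufficiently old files
tmp=$(mktemp -d)
touch -d '300 days ago' "$tmp/old_ora_1.trc"     # old enough: should match
touch "$tmp/new_ora_2.trc"                       # fresh: should not match
find "$tmp" -name "*.tr?" -mtime +200 -print     # lists only old_ora_1.trc
rm -rf "$tmp"
```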
It’s also possible to use adrci to configure SHORTP_POLICY and LONGP_POLICY.
If new homes are created, then would SHORTP_POLICY and LONGP_POLICY need to be updated for the new homes?
Alternatively, one can download and use purgeLogs: Cleanup traces, logs in one command (Doc ID 2081655.1)
Oracle9i Release 2 Data Guard Broker NF
Hopefully, you did not judge this blog post by its title, as it may be a gem.
From 12.2, Broker Controlled Database Initialization Parameters and SQL Statements
The following database initialization parameters are controlled by broker configurable properties. Therefore, you should not set these parameters manually:
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT
What’s missing is information from prior releases that was not carried forward to later releases. For example, the Oracle9i Release 2 new features guide notes:
"Removed the FAL_SERVER and FAL_CLIENT properties; these are managed automatically by Data Guard broker."
https://docs.oracle.com/cd/B10501_01/server.920/a96629/whatsnew.htm
Unfortunately for me, I started with DG in 10g and totally missed 9i NF for DG.
Best To Rename Both Table And Index Partitions
I have been working on compressing table and index partitions.
Currently, the system-generated partition names are the same for the table and its index.
Find partition size for TABLE (SYS_P2321):
SQL> @ partition_size_tab.sql
SQL> select s.owner, s.segment_name, s.segment_type,
2 s.partition_name, s.tablespace_name tbs,
3 -- t.partitioning_type type, t.interval,
4 s.bytes/1024/1024/1024 size_gb
5 from dba_segments s, dba_part_tables t
6 where s.segment_type in ('TABLE PARTITION')
7 and s.owner=UPPER('&&owner')
8 and t.table_name=UPPER('&&table')
9 and s.partition_name=UPPER('&&partition')
10 and s.owner=t.owner
11 and s.segment_name=t.table_name
12 order by s.partition_name asc
13 ;
Enter value for owner: app
Enter value for table: interval
Enter value for partition: SYS_P2321
OWNER SEGMENT_NAME SEGMENT_TYPE PARTITION TBS SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP INTERVAL TABLE PARTITION SYS_P2321 USERS .01
SQL> set echo off
Find partition size for INDEX (SYS_P2321):
-- There is no need to enter variables.
SQL> @ partition_size_idx.sql
SQL> select s.owner, s.segment_name, s.segment_type,
2 s.partition_name, s.tablespace_name tbs,
3 -- i.partitioning_type type, i.interval,
4 s.bytes/1024/1024/1024 size_gb
5 from dba_segments s, dba_part_indexes i
6 where s.segment_type in ('INDEX PARTITION')
7 and s.owner=UPPER('&&owner')
8 and i.table_name=UPPER('&&table')
9 and s.partition_name=UPPER('&&partition')
10 and s.owner=i.owner
11 and s.segment_name=i.index_name
12 order by s.partition_name asc
13 ;
OWNER SEGMENT_NAME SEGMENT_TYPE PARTITION TBS SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP INTERVAL_IDX INDEX PARTITION SYS_P2321 USERS .00
SQL> set echo off
When renaming system-generated partitions, it is best to give the table partition and its index partition the same name, in order to simplify finding the partition size for both the table and the index.
Here is an example where table partitions were renamed while index partitions were not renamed.
TABLE partitions were renamed.
SQL> @ partition_interval_delta_tab.sql
SQL> set echo off
SQL> declare
2 l_date date;
3 begin
4 for x in (
5 select p.partition_name, p.high_value
6 from dba_tab_partitions p
7 where table_owner=UPPER('&&owner') and table_name=UPPER('&&table')
8 and p.compression='DISABLED' and p.interval='YES' order by 1
9 )
10 loop
11 execute immediate
12 'begin :h := ' || x.high_value || '; end;' using OUT l_date;
13 if months_between(sysdate, l_date) > 36 then
14 dbms_output.put_line(x.partition_name||' : '||to_char(l_date, 'DD-MON-YYYY'));
15 end if;
16 end loop;
17 end;
18 /
P201612 : 01-JAN-2017
P201701 : 01-FEB-2017
P201702 : 01-MAR-2017
P201703 : 01-APR-2017
P201704 : 01-MAY-2017
P201705 : 01-JUN-2017
P201706 : 01-JUL-2017
P201707 : 01-AUG-2017
P201708 : 01-SEP-2017
P201709 : 01-OCT-2017
P201710 : 01-NOV-2017
P201711 : 01-DEC-2017
P201712 : 01-JAN-2018
P201801 : 01-FEB-2018
P201802 : 01-MAR-2018
SQL> set echo off
INDEX partitions were NOT renamed.
SQL> @ partition_interval_delta_idx.sql
SQL> declare
2 l_date date;
3 begin
4 for x in (
5 select p.partition_name, p.high_value
6 from dba_ind_partitions p, dba_part_indexes i
7 where i.owner=UPPER('&&owner') and i.table_name=UPPER('&&table')
8 and p.index_owner=i.owner and p.index_name=i.index_name
9 and p.compression='DISABLED' and p.interval='YES' order by 1
10 )
11 loop
12 execute immediate
13 'begin :h := ' || x.high_value || '; end;' using OUT l_date;
14 if months_between(sysdate, l_date) > 36 then
15 dbms_output.put_line(x.partition_name||' : '||to_char(l_date, 'DD-MON-YYYY'));
16 end if;
17 end loop;
18 end;
19 /
SYS_P2321 : 01-JAN-2017
SYS_P2322 : 01-FEB-2017
SYS_P2323 : 01-MAR-2017
SYS_P2324 : 01-APR-2017
SYS_P2325 : 01-MAY-2017
SYS_P2326 : 01-JUN-2017
SYS_P2327 : 01-JUL-2017
SYS_P2328 : 01-AUG-2017
SYS_P2329 : 01-SEP-2017
SYS_P2330 : 01-OCT-2017
SYS_P2331 : 01-NOV-2017
SYS_P2332 : 01-DEC-2017
SYS_P2333 : 01-JAN-2018
SYS_P2334 : 01-FEB-2018
SYS_P2335 : 01-MAR-2018
SQL> set echo off
Let’s find the size for partitioned table and index.
Find partition size for TABLE.
SQL> @ partition_size_tab.sql
SQL> select s.owner, s.segment_name, s.segment_type,
2 s.partition_name, s.tablespace_name tbs,
3 -- t.partitioning_type type, t.interval,
4 s.bytes/1024/1024/1024 size_gb
5 from dba_segments s, dba_part_tables t
6 where s.segment_type in ('TABLE PARTITION')
7 and s.owner=UPPER('&&owner')
8 and t.table_name=UPPER('&&table')
9 and s.partition_name=UPPER('&&partition')
10 and s.owner=t.owner
11 and s.segment_name=t.table_name
12 order by s.partition_name asc
13 ;
Enter value for partition: P201612
OWNER SEGMENT_NAME SEGMENT_TYPE PARTITION TBS SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP INTERVAL TABLE PARTITION P201612 USERS .01
SQL> set echo off
Find partition size for INDEX.
A different partition name needs to be entered for the INDEX (SYS_P2321).
SQL> @ partition_size_idx.sql
SQL> select s.owner, s.segment_name, s.segment_type,
2 s.partition_name, s.tablespace_name tbs,
3 -- i.partitioning_type type, i.interval,
4 s.bytes/1024/1024/1024 size_gb
5 from dba_segments s, dba_part_indexes i
6 where s.segment_type in ('INDEX PARTITION')
7 and s.owner=UPPER('&&owner')
8 and i.table_name=UPPER('&&table')
9 and s.partition_name=UPPER('&&partition')
10 and s.owner=i.owner
11 and s.segment_name=i.index_name
12 order by s.partition_name asc
13 ;
SQL> set echo off
SQL> undefine partition
-- How did I know to use SYS_P2321 for index partitions?
-- Both table and index partitions have the same high value date (01-JAN-2017).
TABLE : P201612 : 01-JAN-2017
INDEX : SYS_P2321 : 01-JAN-2017
SQL> @ partition_size_idx.sql
SQL> select s.owner, s.segment_name, s.segment_type,
2 s.partition_name, s.tablespace_name tbs,
3 -- i.partitioning_type type, i.interval,
4 s.bytes/1024/1024/1024 size_gb
5 from dba_segments s, dba_part_indexes i
6 where s.segment_type in ('INDEX PARTITION')
7 and s.owner=UPPER('&&owner')
8 and i.table_name=UPPER('&&table')
9 and s.partition_name=UPPER('&&partition')
10 and s.owner=i.owner
11 and s.segment_name=i.index_name
12 order by s.partition_name asc
13 ;
Enter value for partition: SYS_P2321
OWNER SEGMENT_NAME SEGMENT_TYPE PARTITION TBS SIZE_GB
--------------- --------------- ------------------ --------- --------------- -----------
APP INTERVAL_IDX INDEX PARTITION SYS_P2321 USERS .00
SQL> set echo off
Compress Historical Interval Partitions
There is a requirement to compress monthly interval partitions older than 36 months.
First, the interval partitions were renamed to more intuitive names using Renaming Interval Partitions.
Even though the rename is not necessary, it does provide more clarity.
Here is a demo to find and compress partitions older than 36 months.
SQL> @ partition_interval_delta.sql
SQL> select
2 partition_name, compression, high_value,
3 TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) months_delta
4 from dba_tab_partitions
5 where table_owner = UPPER('&owner')
6 and table_name = UPPER('&table')
7 and compression='DISABLED'
8 and TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) >= &months_delta
9 order by 1
10 ;
Enter value for owner: app
Enter value for table: interval
Enter value for months_delta: 36
PARTITION_NAME COMPRESS HIGH_VALUE MONTHS_DELTA
--------------------- -------- -------------------------------------------------------- ------------
P201702 DISABLED TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 49
P201705 DISABLED TO_DATE(' 2017-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 46
P201706 DISABLED TO_DATE(' 2017-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 45
P201707 DISABLED TO_DATE(' 2017-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 44
P201708 DISABLED TO_DATE(' 2017-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 43
P201709 DISABLED TO_DATE(' 2017-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 42
P201710 DISABLED TO_DATE(' 2017-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 41
P201711 DISABLED TO_DATE(' 2017-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 40
P201712 DISABLED TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 39
P201801 DISABLED TO_DATE(' 2018-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 38
P201802 DISABLED TO_DATE(' 2018-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 37
P201803 DISABLED TO_DATE(' 2018-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 36
SQL>
SQL> set echo off
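As a quick spot check of the months_delta arithmetic, the same whole-month difference can be computed in shell (a hypothetical helper; it takes a fixed YYYYMM reference instead of sysdate, and unlike MONTHS_BETWEEN it ignores the day of month, so values can differ by one near month boundaries):

```shell
# Whole months between a partition name's embedded YYYYMM and a reference YYYYMM
months_delta() {
  local ym=${1//[!0-9]/}     # P201702 -> 201702
  local ref=$2               # e.g. 202104
  echo $(( (10#${ref:0:4} - 10#${ym:0:4}) * 12 + (10#${ref:4:2} - 10#${ym:4:2}) ))
}
months_delta P201702 202104    # 50
```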
SQL> set verify on
SQL> define partition_name = P201702
SQL> alter table APP.INTERVAL modify partition &partition_name compress for OLTP;
old 1: alter table APP.INTERVAL modify partition &partition_name compress for OLTP
new 1: alter table APP.INTERVAL modify partition P201702 compress for OLTP
SQL> alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8;
old 1: alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8
new 1: alter table APP.INTERVAL move partition P201702 ONLINE UPDATE INDEXES PARALLEL 8
SQL> define partition_name = P201705
SQL> alter table APP.INTERVAL modify partition &partition_name compress for OLTP;
old 1: alter table APP.INTERVAL modify partition &partition_name compress for OLTP
new 1: alter table APP.INTERVAL modify partition P201705 compress for OLTP
SQL> alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8;
old 1: alter table APP.INTERVAL move partition &partition_name ONLINE UPDATE INDEXES PARALLEL 8
new 1: alter table APP.INTERVAL move partition P201705 ONLINE UPDATE INDEXES PARALLEL 8
SQL> @ partition_interval_delta.sql
SQL> select
2 partition_name, compression, high_value,
3 TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) months_delta
4 from dba_tab_partitions
5 where table_owner = UPPER('&owner')
6 and table_name = UPPER('&table')
7 and compression='DISABLED'
8 and TRUNC(MONTHS_BETWEEN(SYSDATE,(TO_DATE(regexp_substr(partition_name, '[[:digit:]]+'),'YYYYMM')))) >= &months_delta
9 order by 1
10 ;
Enter value for owner: app
Enter value for table: interval
Enter value for months_delta: 36
PARTITION_NAME COMPRESS HIGH_VALUE MONTHS_DELTA
--------------------- -------- -------------------------------------------------------- ------------
P201706 DISABLED TO_DATE(' 2017-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 45
P201707 DISABLED TO_DATE(' 2017-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 44
P201708 DISABLED TO_DATE(' 2017-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 43
P201709 DISABLED TO_DATE(' 2017-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 42
P201710 DISABLED TO_DATE(' 2017-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 41
P201711 DISABLED TO_DATE(' 2017-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 40
P201712 DISABLED TO_DATE(' 2018-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 39
P201801 DISABLED TO_DATE(' 2018-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 38
P201802 DISABLED TO_DATE(' 2018-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 37
P201803 DISABLED TO_DATE(' 2018-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS' 36
SQL> set echo off
Purge Database Audit Trail Table
Segments for the AUD$/FGA_LOG$ tables reside in the SYSAUX tablespace and will be moved to AUDIT_TBS before configuring the purge.
The requirement is to purge audit records older than 7 years (366*7=2562 days) [l_days NUMBER := 2562].
DEMO:
$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Wed Mar 3 18:22:35 2021
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
18:22:35 SYS@DB01 AS SYSDBA> select file_name from dba_data_files where tablespace_name='SYSAUX';
FILE_NAME
--------------------------------------------------------------------------------
/oradata/db01/datafile/sysaux_01.dbf
Elapsed: 00:00:00.01
18:22:44 SYS@DB01 AS SYSDBA> create tablespace AUDIT_TBS datafile '&location/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g;
Enter value for location: /oradata/db01/datafile
old 1: create tablespace AUDIT_TBS datafile '&location/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g
new 1: create tablespace AUDIT_TBS datafile '/oradata/db01/datafile/audit_tbs_01.dbf' size 5g autoextend on maxsize 30g
Tablespace created.
Elapsed: 00:00:24.68
18:24:29 SYS@DB01 AS SYSDBA> @ audit.sql
18:24:37 SYS@DB01 AS SYSDBA>
18:24:37 SYS@DB01 AS SYSDBA> select owner, segment_name, segment_type, tablespace_name, round(bytes/1024/1024,2) USED_MB
18:24:37 2 from dba_segments where segment_name IN ('AUD$','FGA_LOG$') order by 2
18:24:37 3 ;
OWNER SEGMENT_NAME SEGMENT_TYPE TABLESPACE_NAME USED_MB
-------------------- -------------------- ------------------ -------------------- ----------
SYS AUD$ TABLE SYSTEM 2946
SYS FGA_LOG$ TABLE SYSTEM .06
Elapsed: 00:00:00.09
18:24:37 SYS@DB01 AS SYSDBA> select MIN(TIMESTAMP), MAX(TIMESTAMP), TRUNC(systimestamp)-MIN(TIMESTAMP) diff_day, COUNT(*)
18:24:37 2 from dba_audit_trail
18:24:37 3 ;
MIN(TIMES MAX(TIMES DIFF_DAY COUNT(*)
--------- --------- ---------- ----------
23-SEP-14 03-MAR-21 2352.58642 20801590
Elapsed: 00:00:53.32
18:25:30 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:25:30 2 ;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
DB AUDIT TABLESPACE SYSAUX FGA AUDIT TRAIL
DB AUDIT TABLESPACE SYSAUX STANDARD AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
10 rows selected.
Elapsed: 00:00:00.01
18:25:30 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_last_arch_ts
18:25:30 2 ;
no rows selected
Elapsed: 00:00:00.00
18:25:30 SYS@DB01 AS SYSDBA> exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
PL/SQL procedure successfully completed.
Elapsed: 00:09:17.79
18:34:48 SYS@DB01 AS SYSDBA> ;
1 select * from dba_audit_mgmt_last_arch_ts
2*
18:34:48 SYS@DB01 AS SYSDBA> exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
PL/SQL procedure successfully completed.
Elapsed: 00:00:07.83
18:34:56 SYS@DB01 AS SYSDBA> ;
1 select * from dba_audit_mgmt_last_arch_ts
2*
18:34:56 SYS@DB01 AS SYSDBA>
18:34:56 SYS@DB01 AS SYSDBA> begin
18:34:56 2 dbms_audit_mgmt.INIT_CLEANUP (
18:34:56 3 audit_trail_type => dbms_audit_mgmt.audit_trail_all,
18:34:56 4 default_cleanup_interval => 24);
18:34:56 5 end;
18:34:56 6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:05.21
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> begin
18:35:01 2 if
18:35:01 3 dbms_audit_mgmt.IS_CLEANUP_INITIALIZED(dbms_audit_mgmt.AUDIT_TRAIL_ALL) then
18:35:01 4 dbms_output.put_line('******* YES *******');
18:35:01 5 else
18:35:01 6 dbms_output.put_line('******* NO *******');
18:35:01 7 end if;
18:35:01 8 end;
18:35:01 9 /
******* YES *******
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:35:01 2 ;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
14 rows selected.
Elapsed: 00:00:00.01
18:35:01 SYS@DB01 AS SYSDBA>
18:35:01 SYS@DB01 AS SYSDBA> begin
18:35:01 2 dbms_audit_mgmt.CREATE_PURGE_JOB(
18:35:01 3 audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
18:35:01 4 audit_trail_purge_interval => 24,
18:35:01 5 audit_trail_purge_name => 'PURGE_ALL_AUDIT_TRAILS',
18:35:01 6 use_last_arch_timestamp => TRUE);
18:35:01 7 end;
18:35:01 8 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.77
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> begin
18:35:02 2 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(
18:35:02 3 audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
18:35:02 4 last_archive_time => SYSTIMESTAMP-2562);
18:35:02 5 end;
18:35:02 6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.05
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> begin
18:35:02 2 DBMS_SCHEDULER.create_job (
18:35:02 3 job_name => 'AUDIT_LAST_ARCHIVE_TIME',
18:35:02 4 job_type => 'PLSQL_BLOCK',
18:35:02 5 job_action => 'DECLARE
18:35:02 6 l_days NUMBER := 2562;
18:35:02 7 BEGIN
18:35:02 8 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 9 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 10 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 11 dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML, TRUNC(SYSTIMESTAMP)-l_days);
18:35:02 12 END;',
18:35:02 13 start_date => SYSTIMESTAMP,
18:35:02 14 repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
18:35:02 15 end_date => NULL,
18:35:02 16 enabled => TRUE,
18:35:02 17 comments => 'Automatically set audit last archive time.');
18:35:02 18 end;
18:35:02 19 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.07
18:35:02 SYS@DB01 AS SYSDBA>
18:35:02 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_config_params order by 1
18:35:02 2 ;
PARAMETER_NAME PARAMETER_VALUE AUDIT_TRAIL
------------------------------ -------------------- --------------------
AUDIT FILE MAX AGE 5 XML AUDIT TRAIL
AUDIT FILE MAX AGE 5 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 OS AUDIT TRAIL
AUDIT FILE MAX SIZE 10000 XML AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 STANDARD AUDIT TRAIL
DB AUDIT CLEAN BATCH SIZE 10000 FGA AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS STANDARD AUDIT TRAIL
DB AUDIT TABLESPACE AUDIT_TBS FGA AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 XML AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 OS AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 STANDARD AUDIT TRAIL
DEFAULT CLEAN UP INTERVAL 24 FGA AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 OS AUDIT TRAIL
OS FILE CLEAN BATCH SIZE 1000 XML AUDIT TRAIL
14 rows selected.
Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA> select * from dba_audit_mgmt_last_arch_ts
18:35:02 2 ;
AUDIT_TRAIL RAC_INSTANCE LAST_ARCHIVE_TS
-------------------- ------------ ---------------------------------------------
STANDARD AUDIT TRAIL 0 26-FEB-14 06.35.02.000000 PM +00:00
Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA> select job_action from dba_scheduler_jobs where job_name='PURGE_ALL_AUDIT_TRAILS'
18:35:02 2 ;
JOB_ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
BEGIN DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(15, TRUE); END;
Elapsed: 00:00:00.17
18:35:02 SYS@DB01 AS SYSDBA> select job_action from dba_scheduler_jobs where job_name='AUDIT_LAST_ARCHIVE_TIME'
18:35:02 2 ;
JOB_ACTION
------------------------------------------------------------------------------------------------------------------------------------------------------
DECLARE
l_days NUMBER := 2562;
BEGIN
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML, TRUNC(SYSTIMESTAMP)-l_days);
END;
Elapsed: 00:00:00.14
18:35:02 SYS@DB01 AS SYSDBA> commit;
Commit complete.
Elapsed: 00:00:00.00
18:35:02 SYS@DB01 AS SYSDBA>
--- audit.sql
col parameter_name for a30
col parameter_value for a20
col audit_trail for a20
col owner for a20
col segment_name for a20
col tablespace_name for a20
col last_archive_ts for a45
col job_action for a150
set lines 200 pages 100 serverout on echo on
select owner, segment_name, segment_type, tablespace_name, round(bytes/1024/1024,2) USED_MB
from dba_segments where segment_name IN ('AUD$','FGA_LOG$') order by 2
;
select MIN(TIMESTAMP), MAX(TIMESTAMP), TRUNC(systimestamp)-MIN(TIMESTAMP) diff_day, COUNT(*)
from dba_audit_trail
;
select * from dba_audit_mgmt_config_params order by 1
;
select * from dba_audit_mgmt_last_arch_ts
;
exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
;
exec dbms_audit_mgmt.SET_AUDIT_TRAIL_LOCATION(audit_trail_type => dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, AUDIT_TRAIL_LOCATION_VALUE => 'AUDIT_TBS')
;
begin
dbms_audit_mgmt.INIT_CLEANUP (
audit_trail_type => dbms_audit_mgmt.audit_trail_all,
default_cleanup_interval => 24);
end;
/
begin
if dbms_audit_mgmt.IS_CLEANUP_INITIALIZED(dbms_audit_mgmt.AUDIT_TRAIL_ALL) then
dbms_output.put_line('******* YES *******');
else
dbms_output.put_line('******* NO *******');
end if;
end;
/
select * from dba_audit_mgmt_config_params order by 1
;
begin
dbms_audit_mgmt.CREATE_PURGE_JOB(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_ALL,
audit_trail_purge_interval => 24,
audit_trail_purge_name => 'PURGE_ALL_AUDIT_TRAILS',
use_last_arch_timestamp => TRUE);
end;
/
begin
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
last_archive_time => SYSTIMESTAMP-2562);
end;
/
begin
DBMS_SCHEDULER.create_job (
job_name => 'AUDIT_LAST_ARCHIVE_TIME',
job_type => 'PLSQL_BLOCK',
job_action => 'DECLARE
l_days NUMBER := 2562;
BEGIN
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_AUD_STD, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_FGA_STD, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_OS, TRUNC(SYSTIMESTAMP)-l_days);
dbms_audit_mgmt.SET_LAST_ARCHIVE_TIMESTAMP(dbms_audit_mgmt.AUDIT_TRAIL_XML, TRUNC(SYSTIMESTAMP)-l_days);
END;',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=daily; byhour=0; byminute=0; bysecond=0;',
end_date => NULL,
enabled => TRUE,
comments => 'Automatically set audit last archive time.');
end;
/
select * from dba_audit_mgmt_config_params order by 1
;
select * from dba_audit_mgmt_last_arch_ts
;
select job_action from dba_scheduler_jobs where job_name='PURGE_ALL_AUDIT_TRAILS'
;
select job_action from dba_scheduler_jobs where job_name='AUDIT_LAST_ARCHIVE_TIME'
;
commit;
Migrating Windows PuTTY private key to Linux
A long, long time ago, I blogged about Migrating Windows PuTTY registry to Linux.
Since then, I have migrated away from PuTTY in preference for the CLI.
There are posts on migrating a Windows PuTTY private key to Linux; however, they did not work for me.
This is what worked for me: it is much simpler and avoids having to transfer the private key from source to target.
-- At SOURCE:
Open PuTTYgen
File - Load private key
Select *.ppk
Conversions - Export OpenSSH key to id_rsa
Open text editor and copy content from id_rsa
-- At TARGET:
qadesusiwevo@pc-33d1fa:~$ cd ~/.ssh/
qadesusiwevo@pc-33d1fa:~/.ssh$ vi id_rsa
-- Copy content from source id_rsa to target
qadesusiwevo@pc-33d1fa:~/.ssh$ vi config
qadesusiwevo@pc-33d1fa:~/.ssh$ cat config
SendEnv LANG TERM LOGNAME
StrictHostKeyChecking=no
GSSAPIAuthentication=no
ForwardAgent=yes
TCPKeepAlive=yes
ServerAliveCountMax=10
ServerAliveInterval=30
qadesusiwevo@pc-33d1fa:~/.ssh$ cd
qadesusiwevo@pc-33d1fa:~$ vi setup.sh
qadesusiwevo@pc-33d1fa:~$ cat setup.sh
cd ~/.ssh
eval `ssh-agent -s`
ssh-add
ssh-add -l
exit
qadesusiwevo@pc-33d1fa:~$ chmod 600 ~/.ssh/id_rsa
qadesusiwevo@pc-33d1fa:~$ ./setup.sh
Agent pid 8268
Identity added: /home/qadesusiwevo@264-1-100-0-0-xxxxxxxxxx-us-common.vela.com/.ssh/id_rsa (/home/qadesusiwevo@264-1-100-0-0-xxxxxxxxxx-us-common.vela.com/.ssh/id_rsa)
2048 SHA256:D7OPt2dPVtZ8byfpZuZTCLYpUo54z/e6ouCdmDOtsuc /home/qadesusiwevo@264-1-100-0-0-xxxxxxxxxx-us-common.vela.com/.ssh/id_rsa (RSA)
qadesusiwevo@pc-33d1fa:~$ ssh username@10.26.31.203
Warning: Permanently added '10.26.31.203' (ECDSA) to the list of known hosts.
Last login: Wed Feb 24 04:28:19 2021 from ip-10-26-31-204.us-west-2.compute.internal
_| __| )
_| ( / Amazon Linux AMI
__|_|__|
https://aws.amazon.com/amazon-linux-ami/2016.09-release-notes/
82 package(s) needed for security, out of 149 available
Run "sudo yum update" to apply all updates.
Amazon Linux version 2018.03 is available.
-bash-4.2$ hostname
ip-10-26-31-203
-bash-4.2$ exit
NOTE: Failed voting file relocation on diskgroup
After Oracle support performs maintenance on Exadata Cloud, there are many of these errors in the ASM alert log.
There have been many discussions about whether it's a BUG or whether the alert can be ignored.
Unfortunately, the BUG explanation does not fit, and the alert cannot simply be ignored.
The simple solution is to check for a SUCCESS after each Failed, as shown below.
egrep -n 'NOTE: Failed voting file relocation on diskgroup|SUCCESS: refreshed membership' /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log|grep DATAC1
1937:SUCCESS: refreshed membership for 1/0xacd2391b (DATAC1)
1972:SUCCESS: refreshed membership for 1/0xacd2391b (DATAC1)
4244:NOTE: Failed voting file relocation on diskgroup DATAC1
4250:SUCCESS: refreshed membership for 1/0x8f4036a8 (DATAC1)
9876:NOTE: Failed voting file relocation on diskgroup DATAC1
9881:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
10130:NOTE: Failed voting file relocation on diskgroup DATAC1
10135:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
11112:NOTE: Failed voting file relocation on diskgroup DATAC1
11425:NOTE: Failed voting file relocation on diskgroup DATAC1
11441:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
12410:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
13318:NOTE: Failed voting file relocation on diskgroup DATAC1
13717:NOTE: Failed voting file relocation on diskgroup DATAC1
13733:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
14703:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
15566:NOTE: Failed voting file relocation on diskgroup DATAC1
15865:NOTE: Failed voting file relocation on diskgroup DATAC1
15881:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
16836:SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
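Rather than eyeballing the egrep output, the pairing can be verified mechanically. A minimal sketch, assuming the same two message patterns; the sample log below stands in for the real alert_+ASM1.log:

```shell
#!/bin/bash
# Count 'Failed voting file relocation' lines that are eventually
# followed by a 'SUCCESS: refreshed membership' line in the same log.
check_votefile_log() {
  awk '
    /NOTE: Failed voting file relocation on diskgroup/ { pending++ }
    /SUCCESS: refreshed membership/ { if (pending > 0) pending-- }
    END {
      if (pending == 0) print "OK: every failure followed by success"
      else print "WARN: " pending " failure(s) without success"
    }
  ' "$1"
}

# Sample log standing in for alert_+ASM1.log
cat > /tmp/asm_sample.log <<'EOF'
NOTE: Failed voting file relocation on diskgroup DATAC1
SUCCESS: refreshed membership for 1/0x8f4036a8 (DATAC1)
NOTE: Failed voting file relocation on diskgroup DATAC1
NOTE: Failed voting file relocation on diskgroup DATAC1
SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
SUCCESS: refreshed membership for 1/0x8f4038d9 (DATAC1)
EOF

check_votefile_log /tmp/asm_sample.log
# prints: OK: every failure followed by success
```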
Want more comfort?
$ crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 1b45f107ff974f1abf34ff7b005eaff1 (/dev/exadata_quorum/QD_DATAC1_QM828NAKVNXA1) [DATAC1]
2. ONLINE b3607b245b984f7ebfe5188c0775c44e (/dev/exadata_quorum/QD_DATAC1_QM738NAKVNXA2) [DATAC1]
3. ONLINE 5d26163d434a4fb6bf2cb173bae3cae1 (o/192.168.136.14;192.168.136.15/DATAC1_CD_05_phx302307exdcl07) [DATAC1]
4. ONLINE 3e1661086fed4f8bbf080db321282b23 (o/192.168.136.16;192.168.136.17/DATAC1_CD_04_phx302307exdcl08) [DATAC1]
5. ONLINE edac9a7822624fe4bfd59eb357d55d95 (o/192.168.136.18;192.168.136.19/DATAC1_CD_04_phx302307exdcl09) [DATAC1]
Located 5 voting disk(s).
# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 491684
Used space (kbytes) : 90748
Available space (kbytes) : 400936
ID : 1880339001
Device/File Name : +DATAC1
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
Monitor Linux Host Restart
The application is not RAC-aware and cannot handle ORA-3113, ORA-25402, or ORA-25409 properly.
Hence, there is a requirement to notify the application team to restart the application when the database server is restarted.
The initial implementation to monitor reboots was a cronjob, run by oracle every 5 minutes, to detect a server restart.
While the implementation is effective, it's not efficient. This was my first attempt.
The script detects whether the server was restarted X seconds ago by checking /proc/uptime.
If uptime is less than X seconds, it sends a notification that the server was restarted.
Here is high level example:
### Script accepts a parameter with a value in seconds
$ /home/oracle/scripts/last_reboot.sh
/home/oracle/scripts/last_reboot.sh: line 10: 1: ---> USAGE: /home/oracle/scripts/last_reboot.sh [in seconds]
### The heart of the script is to check /proc/uptime in seconds
$ egrep -o '^[0-9]+' /proc/uptime
2132607
### Scheduled crontab to run every 5 minutes to determine if server uptime is less than 540 seconds and send notification.
$ crontab -l|grep reboot
##### monitor node reboot #####
*/5 * * * * /home/oracle/scripts/last_reboot.sh 540 > /tmp/last_reboot.cron 2>&1
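The script itself is not reproduced in the post; a minimal sketch of the /proc/uptime check it describes might look like this (the 540-second default matches the crontab entry above, and the notification step is reduced to an echo):

```shell
#!/bin/bash
# Usage: last_reboot.sh [threshold in seconds]
# Report a restart when the host's uptime is below the threshold.
check_uptime() {
  local uptime_sec=$1 threshold=$2
  if [ "${uptime_sec}" -lt "${threshold}" ]; then
    echo "RESTARTED: host came up ${uptime_sec}s ago"
  else
    echo "OK: up for ${uptime_sec}s"
  fi
}

# Whole seconds of uptime, as in the egrep shown above
if [ -r /proc/uptime ]; then
  UPTIME_SEC=$(egrep -o '^[0-9]+' /proc/uptime)
  check_uptime "${UPTIME_SEC}" "${1:-540}"
fi
```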
A more efficient implementation is to have cron run the script automatically when the server restarts, using @reboot.
Here is high level example:
### When server is restarted, host_restart_alert.sh will be executed
[root@oracle-12201-vagrant ~]# crontab -l
@reboot su oracle -c '/home/oracle/host_restart_alert.sh' > /tmp/host_restart_alert.out 2>&1
### Here is host_restart_alert.sh
[oracle@oracle-12201-vagrant ~]$ cat host_restart_alert.sh
#!/bin/bash -x
# Script is being called from root crontab
# uptime reports minutely and need to sleep for at least 60s after host restart
sleep 63
EMAILMESSAGE="$(hostname) was restarted `uptime -p| awk -F'up' '{print $2}'` ago at `uptime -s`"
echo $EMAILMESSAGE > /tmp/restart_$HOSTNAME.log
exit
### Comment from colleague:
### From a bash syntax perspective, it's not wrong. It's not great style (don't use backticks)
printf -v EMAILMESSAGE '%s was restarted %s ago at %s' \
  "$(hostname)" \
  "$(uptime -p| awk -F'up' '{print $2}')" \
  "$(uptime -s)"
echo $EMAILMESSAGE > /tmp/restart_$HOSTNAME.log
### Deconstructing uptime commands:
[oracle@oracle-12201-vagrant ~]$ uptime -p
up 17 hours, 28 minutes
[oracle@oracle-12201-vagrant ~]$ uptime -s
2021-02-15 18:00:51
### Deconstructing message sent:
[oracle@oracle-12201-vagrant ~]$ echo "$HOSTNAME was restarted `uptime -p| awk -F'up' '{print $2}'` ago at `uptime -s`"
oracle-12201-vagrant was restarted 17 hours, 28 minutes ago at 2021-02-15 18:00:51
### Demo:
[root@oracle-12201-vagrant ~]# date
Tue Feb 16 14:51:18 -05 2021
[root@oracle-12201-vagrant ~]# uptime
14:51:22 up 1 min, 1 user, load average: 0.58, 0.23, 0.08
[root@oracle-12201-vagrant ~]# ls -l /tmp/restart
-rw-r--r--. 1 root root 271 Feb 16 14:51 /tmp/host_restart_alert.out
-rw-r--r--. 1 oracle oinstall 71 Feb 16 14:51 /tmp/restart_oracle-12201-vagrant.log
[root@oracle-12201-vagrant ~]# cat /tmp/host_restart_alert.out
sleep 63
++ hostname
++ uptime -p
++ awk -Fup '{print $2}'
++ uptime -s
printf -v EMAILMESSAGE '%s was restarted %s ago at %s' oracle-12201-vagrant ' 1 minute' '2021-02-16 14:50:02'
echo oracle-12201-vagrant was restarted 1 minute ago at 2021-02-16 14:50:02
exit
[root@oracle-12201-vagrant ~]# cat /tmp/restart_oracle-12201-vagrant.log
oracle-12201-vagrant was restarted 1 minute ago at 2021-02-16 14:50:02
[root@oracle-12201-vagrant ~]#
Scripts were tested on Oracle Linux Server release 7.8 and 7.9.
Search And Replace Email From Shell Scripts
The goal is to replace dinh@gmail.com with dba@gmail.com for all shell scripts.
Fortunately, all shell scripts are located in one directory; otherwise, we would need to find all their locations.
Check crontab to find possible directory location for shell scripts.
[vagrant@oracle-12201-vagrant ~]$ crontab -l
5 4 * * * /home/vagrant/scripts/test.sh something > /tmp/test.out 2>&1
[vagrant@oracle-12201-vagrant ~]$
[vagrant@oracle-12201-vagrant ~]$ crontab -l|grep -v '#'|grep sh|awk '{print $6}'|sort -u
/home/vagrant/scripts/test.sh
[vagrant@oracle-12201-vagrant ~]$
Check directory for shell scripts.
[vagrant@oracle-12201-vagrant scripts]$ ls -l
total 12
-rwxrwxr-x. 1 vagrant vagrant 25 Feb 4 21:15 dt.sh
-rwxrwxr-x. 1 vagrant vagrant 20 Feb 4 21:14 test.sh
[vagrant@oracle-12201-vagrant scripts]$
Check shell scripts containing emails to modify.
[vagrant@oracle-12201-vagrant scripts]$ grep 'dinh@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|sort -u|grep -v edit_email.sh
dt.sh
test.sh
[vagrant@oracle-12201-vagrant scripts]$
Create edit_email.sh to modify email.
[vagrant@oracle-12201-vagrant scripts]$ cat edit_email.sh
for infile in $(grep 'dinh@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|sort -u|grep -v `basename $0`)
do
echo $infile
sed 's/dinh@gmail.com/dba@gmail.com/g' $infile > tmp.$$
mv tmp.$$ $infile
chmod 755 $infile
grep 'gmail.com' $infile
done
[vagrant@oracle-12201-vagrant scripts]$
Run edit_email.sh and verify results.
[vagrant@oracle-12201-vagrant scripts]$ ./edit_email.sh
dt.sh
echo dba@gmail.com
test.sh
export PAGER_EMAIL="dba@gmail.com"
[vagrant@oracle-12201-vagrant scripts]$
[vagrant@oracle-12201-vagrant scripts]$ grep 'dinh@gmail.com' *.sh|grep sh|awk -F':' '{print $1}'|sort -u|grep -v edit_email.sh
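As an aside, GNU sed's -i option can do the same edit in place and avoid the temp-file shuffle. A sketch against throwaway copies of the scripts (assuming GNU sed):

```shell
# Demo files standing in for the real scripts directory
mkdir -p /tmp/email_demo
echo 'echo dinh@gmail.com' > /tmp/email_demo/dt.sh
echo 'export PAGER_EMAIL="dinh@gmail.com"' > /tmp/email_demo/test.sh

# One pass: list matching scripts, then edit them in place
grep -l 'dinh@gmail.com' /tmp/email_demo/*.sh \
  | xargs sed -i 's/dinh@gmail.com/dba@gmail.com/g'

# Verify the replacement
grep 'gmail.com' /tmp/email_demo/*.sh
```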
Remove Characters From String Using regexp_replace
It has probably been half a century since I last coded PL/SQL.
So there I was, reviewing PL/SQL code, and it looked rather redundant.
Did you know there is a predefined USER function in PL/SQL, so using sys_context is not necessary?
Also, regexp_replace can be used to remove all alphabetic characters (and underscores) from a string.
Here is an example:
Basically, one line of code replaces all the redundant code.
I have no idea why there is a need to extract these characters in the first place.
DINH099PD@ORCLPDB1 > show user
USER is "DINH099PD"
DINH099PD@ORCLPDB1 > @dinh.sql
DINH099PD@ORCLPDB1 > DECLARE
2 -- Remove character from l_user
3 l_user VARCHAR2(30) := regexp_replace(USER, '[[:alpha:]]|_');
4 l_user2 varchar2(20);
5 l_output varchar2(100);
6 BEGIN
7 -- Remove character from l_user2
8 l_user2 := sys_context('USERENV', 'CURRENT_USER');
9 l_output := replace(replace(l_user2, 'DINH', ''),'PD', '');
10 dbms_output.put_line (l_user);
11 dbms_output.put_line (l_output);
12 END;
13 /
099
099
PL/SQL procedure successfully completed.
DINH099PD@ORCLPDB1 >
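The same stripping can be tried outside the database with sed, whose POSIX character class mirrors the regexp_replace pattern above:

```shell
# Remove all alphabetic characters and underscores, as
# regexp_replace(USER, '[[:alpha:]]|_') does in the PL/SQL above
echo 'DINH099PD' | sed 's/[[:alpha:]_]//g'
# prints: 099
```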
DBSAT Error: Data collection was not successful.
This was my first time running DBSAT against a 19.8 RAC CDB, and it failed miserably.
Creating a TNS entry for the PDB and connecting as system@PDB_service failed.
Many have tried and failed; thanks to Roy Salazar, who came up with a workaround.
Here is a demo of the error and the workaround.
### Error: Data collection was not successful.
$ export TNS_ADMIN=/u01/app/grid/19.0/network/admin
$ ./dbsat collect "/ as sysdba" dbsat_$ORACLE_SID
Setup complete.
SQL queries complete.
OS commands complete.
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0
Error: Data collection was not successful.
### There should have been dbsat_$ORACLE_SID.json; however, nowrap.json exists due to the error.
$ ls -l *.json
-rw-------. 1 oracle oracle 486835 Dec 5 04:34 nowrap.json
### Create the report using nowrap, which will create nowrap_report.zip; rename nowrap_report.zip accordingly. Unfortunately, the report contents are still named nowrap.
$ ./dbsat report nowrap
Database Security Assessment Tool version 2.2.1 (May 2020)
This tool is intended to assist you in securing your Oracle database
system. You are solely responsible for your system and the effect and
results of the execution of this tool (including, without limitation,
any damage or data loss). Further, the output generated by this tool may
include potentially sensitive system configuration data and information
that could be used by a skilled attacker to penetrate your system. You
are solely responsible for ensuring that the output of this tool,
including any generated reports, is handled in accordance with your
company's policies.
DBSAT Reporter ran successfully.
Calling /usr/bin/zip to encrypt the generated reports…
Enter password:
Verify password:
zip warning: nowrap_report.zip not found or empty
adding: nowrap_report.txt (deflated 77%)
adding: nowrap_report.html (deflated 83%)
adding: nowrap_report.xlsx (deflated 3%)
adding: nowrap_report.json (deflated 81%)
zip completed successfully.
$ ls -ltrh
total 6.1M
-r-xr-xr-x. 1 oracle oracle 14K May 6 2020 dbsat
-rw-rw-r--. 1 oracle oracle 290K May 6 2020 sat_reporter.py
-rw-rw-r--. 1 oracle oracle 61K May 6 2020 sat_collector.sql
-rw-rw-r--. 1 oracle oracle 26K May 6 2020 sat_analysis.py
-r-xr-xr-x. 1 oracle oracle 14K May 6 2020 dbsat.bat
-rw-r--r--. 1 oracle oracle 4.5M Dec 3 04:33 dbsat.zip
drwxr-xr-x. 5 oracle oracle 40 Dec 3 04:33 Discover
-rw-------. 1 oracle oracle 26K Dec 4 09:40 sat_analysis.pyc
drwxr-xr-x. 2 oracle oracle 4.0K Dec 4 09:40 xlsxwriter
-rw-------. 1 oracle oracle 476K Dec 5 04:34 nowrap.json
-rw-------. 1 oracle oracle 105K Dec 5 04:36 nowrap_report.zip
Upgrade DB from 11.2 to 19.8 Using dbua silent
There was a debate as to whether the dbua parameter -useGRP UPGRADE19C is necessary, where UPGRADE19C is the name of the restore point created prior to upgrading the database.
Although it's not necessary, it is beneficial, as it allows dbua to automate the restore process.
When -useGRP UPGRADE19C is used, restore.sh is created to restore the database using the guaranteed restore point specified.
If -useGRP is not used, then dbua will not create the restore.sh script. While I have not personally tested this, I did check for a restore.sh script from a recent upgrade and did not find one.
Why not use dbua to its full potential?
DEMO:
--- 11.2 database:
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
There are no Interim patches installed in this Oracle Home.
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lsinventory|grep Database
Oracle Database 11g 11.2.0.4.0
[oracle@ol7-112-dg1 ~]$
--- 19c database:
[oracle@ol7-112-dg1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
OPatch succeeded.
[oracle@ol7-112-dg1 ~]$
--- Copy emremove.sql from 19c to 11.2 DB home:
cp -fv /u01/app/oracle/product/19.3.0.0/db_1/rdbms/admin/emremove.sql /u01/app/oracle/product/11.2.0.4/dbhome_1/rdbms/admin
--- Remove EM and OLAP:
set echo on serveroutput on
@?/rdbms/admin/emremove.sql
@?/olap/admin/catnoamd.sql
@?/rdbms/admin/utlrp.sql
--- Create guarantee restore point UPGRADE19C:
[oracle@ol7-112-dg1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 00:23:56 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @/sf_working/sql/restore_point_upgrade19c.sql
SQL> drop restore point UPGRADE19C;
drop restore point UPGRADE19C
*
ERROR at line 1:
ORA-38780: Restore point 'UPGRADE19C' does not exist.
SQL> alter system set db_recovery_file_dest_size=1m scope=both sid='*';
System altered.
SQL> alter system set db_recovery_file_dest_size=9000m scope=both sid='*';
System altered.
SQL> select sum(flashback_size)/1024/1024/1024 gb from v$flashback_database_log;
GB
"----------"
SQL> select flashback_on from v$database;
FLASHBACK_ON
"----------"
NO
SQL> create restore point UPGRADE19C guarantee flashback database;
Restore point created.
SQL> select flashback_on from v$database;
FLASHBACK_ON
"------------------"
RESTORE POINT ONLY
SQL> select name, time, guarantee_flashback_database from v$restore_point order by 1,2;
NAME TIME GUA
"------------------------------ ---------------------------------------- ---"
UPGRADE19C 21-NOV-20 12.24.19.000000000 AM YES
SQL> select sum(flashback_size)/1024/1024/1024 gb from v$flashback_database_log;
GB
.048828125
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@ol7-112-dg1 ~]$
--- Upgrade DB using dbua silent: -useGRP UPGRADE19C
[oracle@ol7-112-dg1 ~]$ echo $ORACLE_SID $ORACLE_HOME
testdb /u01/app/oracle/product/11.2.0.4/dbhome_1
[oracle@ol7-112-dg1 ~]$ ./run_dbua.sh
/u01/app/oracle/product/19.3.0.0/db_1/bin/dbua -silent \
-sid testdb \
-oracleHome /u01/app/oracle/product/11.2.0.4/dbhome_1 \
-useGRP UPGRADE19C \
-recompile_invalid_objects TRUE \
-upgradeTimezone TRUE \
-emConfiguration NONE \
-skipListenersMigration \
-createListener FALSE \
-upgrade_parallelism 8
Logs directory: /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM
Performing Pre-Upgrade Checks…
PRE- and POST- FIXUP ACTIONS
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/upgrade.xml
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/preupgrade_fixups.sql
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/postupgrade_fixups.sql
[WARNING] [DBT-20060] One or more of the pre-upgrade checks on the database have resulted into warning conditions that require manual intervention. It is recommended that you address these warnings as suggested before proceeding.
ACTION: Refer to the pre-upgrade results location for details: /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb
12% complete
15% complete
25% complete
77% complete
87% complete
Database upgrade has been completed successfully, and the database is ready to use.
100% complete
[oracle@ol7-112-dg1 ~]$
--- DBUA Logs:
[oracle@ol7-112-dg1 ~]$ ls -l /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/
total 76340
-rw-r-----. 1 oracle oinstall 0 Nov 21 00:27 Backup.log
-rw-r-----. 1 oracle oinstall 48860899 Nov 21 01:03 catupgrd0.log
-rw-r-----. 1 oracle oinstall 6740107 Nov 21 01:03 catupgrd1.log
-rw-r-----. 1 oracle oinstall 3759694 Nov 21 01:03 catupgrd2.log
-rw-r-----. 1 oracle oinstall 5391694 Nov 21 01:03 catupgrd3.log
-rw-r-----. 1 oracle oinstall 2974948 Nov 21 01:03 catupgrd4.log
-rw-r-----. 1 oracle oinstall 2127696 Nov 21 01:03 catupgrd5.log
-rw-r-----. 1 oracle oinstall 3975631 Nov 21 01:03 catupgrd6.log
-rw-r-----. 1 oracle oinstall 3411705 Nov 21 01:03 catupgrd7.log
-rw-------. 1 oracle oinstall 528 Nov 21 00:28 catupgrd_catcon_7841.lst
-rw-r-----. 1 oracle oinstall 0 Nov 21 00:54 catupgrd_datapatch_upgrade.err
-rw-r-----. 1 oracle oinstall 1306 Nov 21 01:01 catupgrd_datapatch_upgrade.log
-rw-r-----. 1 oracle oinstall 38676 Nov 21 01:03 catupgrd_stderr.log
-rw-r-----. 1 oracle oinstall 1 Nov 21 00:27 checksBuffer.tmp
-rw-r-----. 1 oracle oinstall 41134 Nov 21 00:27 components.properties
-rwxr-xr-x. 1 oracle oinstall 320 Nov 21 00:27 createSPFile_testdb.sql
-rw-r-----. 1 oracle oinstall 15085 Nov 21 00:27 dbms_registry_extended.sql
-rwxr-xr-x. 1 oracle oinstall 120 Nov 21 00:27 grpOpen_testdb.sql
-rw-r-----. 1 oracle oinstall 942 Nov 21 00:27 init.ora
-rw-r-----. 1 oracle oinstall 69 Nov 21 00:28 Migrate_Sid.log
drwxr-x---. 3 oracle oinstall 21 Nov 21 00:27 oracle
-rw-r-----. 1 oracle oinstall 10409 Nov 21 01:04 Oracle_Server.log
-rw-r-----. 1 oracle oinstall 14051 Nov 21 00:27 parameters.properties
-rw-r-----. 1 oracle oinstall 8580 Nov 21 00:27 postupgrade_fixups.sql
-rw-r-----. 1 oracle oinstall 301 Nov 21 01:10 PostUpgrade.log
-rw-r-----. 1 oracle oinstall 7884 Nov 21 00:27 preupgrade_driver.sql
-rw-r-----. 1 oracle oinstall 8514 Nov 21 00:27 preupgrade_fixups.sql
-rw-r-----. 1 oracle oinstall 443 Nov 21 00:28 PreUpgrade.log
-rw-r-----. 1 oracle oinstall 99316 Nov 21 00:27 preupgrade_messages.properties
-rw-r-----. 1 oracle oinstall 457732 Nov 21 00:27 preupgrade_package.sql
-rw-r-----. 1 oracle oinstall 1464 Nov 21 00:27 PreUpgradeResults.html
-rwxr-xr-x. 1 oracle oinstall 42 Nov 21 00:27 shutdown_testdb.sql
-rw-r-----. 1 oracle oinstall 94342 Nov 21 01:10 sqls.log
-rwxr-xr-x. 1 oracle oinstall 35 Nov 21 00:27 startup_testdb.sql
-rwxr-xr-x. 1 oracle oinstall 2070 Nov 21 00:27 testdb_restore.sh
drwxr-x---. 3 oracle oinstall 24 Nov 21 00:27 upgrade
-rw-r-----. 1 oracle oinstall 5287 Nov 21 01:10 UpgradeResults.html
-rw-r-----. 1 oracle oinstall 2920 Nov 21 01:09 UpgradeTimezone.log
-rw-r-----. 1 oracle oinstall 11264 Nov 21 00:27 upgrade.xml
-rw-r-----. 1 oracle oinstall 1583 Nov 21 01:04 upg_summary_CDB_Root.log
-rw-r-----. 1 oracle oinstall 115 Nov 21 01:07 Utlprp.log
[oracle@ol7-112-dg1 ~]$
--- Script testdb_restore.sh:
[oracle@ol7-112-dg1 sql]$ cat /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/testdb_restore.sh
#!/bin/sh
# -- Run this Script to Restore Oracle Database Instance testdb
echo -- Bringing up the database from the source oracle home
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1; export ORACLE_HOME
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4/dbhome_1/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
ORACLE_SID=testdb; export ORACLE_SID
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/shutdown_testdb.sql
echo -- Bringing down the database from the new oracle home
ORACLE_HOME=/u01/app/oracle/product/19.3.0.0/db_1; export ORACLE_HOME
LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0.0/db_1/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
ORACLE_SID=testdb; export ORACLE_SID
/u01/app/oracle/product/19.3.0.0/db_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/shutdown_testdb.sql
echo -- Removing database instance from new oracle home …
echo You should Remove this entry from the /etc/oratab: testdb:/u01/app/oracle/product/19.3.0.0/db_1:N
echo -- Bringing up the database from the source oracle home
unset LD_LIBRARY_PATH; unset LD_LIBRARY_PATH_64; unset SHLIB_PATH; unset LIB_PATH
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0.4/dbhome_1/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1; export ORACLE_HOME
ORACLE_SID=testdb; export ORACLE_SID
rm /u01/app/oracle/product/19.3.0.0/db_1/dbs/spfiletestdb.ora
echo You should Add this entry in the /etc/oratab: testdb:/u01/app/oracle/product/11.2.0.4/dbhome_1:Y
cd /u01/app/oracle/product/11.2.0.4/dbhome_1
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/createSPFile_testdb.sql
/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/sqlplus /nolog @/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/grpOpen_testdb.sql
RESTORE_RESULT=$?
echo -- Execution of restore script for the database TESTDB completed.
exit $(($RESTORE_RESULT|$?))
[oracle@ol7-112-dg1 sql]$
--- grpOpen_testdb.sql: flashback database to restore point UPGRADE19C;
[oracle@ol7-112-dg1 ~]$ grep -i upgrade19c /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/*.sql
/u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/grpOpen_testdb.sql:flashback database to restore point UPGRADE19C;
[oracle@ol7-112-dg1 ~]$
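For reference, a guaranteed restore point like the UPGRADE19C one that grpOpen_testdb.sql flashes back to can be created manually before an upgrade. A minimal sketch, assuming the database is open, the fast recovery area is configured, and sqlplus is on the PATH (the restore point name and SID are taken from this example):

```shell
#!/bin/sh
# Sketch: create a guaranteed restore point before an upgrade, matching the
# UPGRADE19C restore point that DBUA's grpOpen_testdb.sql flashes back to.
# Exits quietly when Oracle client tools are not installed.
command -v sqlplus >/dev/null 2>&1 || { echo "sqlplus not found; skipping"; exit 0; }
ORACLE_SID=testdb; export ORACLE_SID
sqlplus -S / as sysdba <<'EOF'
create restore point UPGRADE19C guarantee flashback database;
select name, guarantee_flashback_database from v$restore_point;
EOF
```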
--- Restore database back to 11.2:
[oracle@ol7-112-dg1 ~]$ /u01/app/oracle/cfgtoollogs/dbua/upgrade2020-11-21_12-27-10AM/testdb/testdb_restore.sh
-- Bringing up the database from the source oracle home
SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 04:28:33 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
ORACLE instance shut down.
Disconnected
-- Bringing down the database from the new oracle home
SQL*Plus: Release 19.0.0.0.0 - Production on Sat Nov 21 04:28:33 2020
Version 19.8.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected.
ORACLE instance shut down.
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0
-- Removing database instance from new oracle home …
You should Remove this entry from the /etc/oratab: testdb:/u01/app/oracle/product/19.3.0.0/db_1:N
-- Bringing up the database from the source oracle home
You should Add this entry in the /etc/oratab: testdb:/u01/app/oracle/product/11.2.0.4/dbhome_1:Y
SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 04:28:38 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
ORACLE instance started.
Total System Global Area 1603411968 bytes
Fixed Size 2253664 bytes
Variable Size 520096928 bytes
Database Buffers 1073741824 bytes
Redo Buffers 7319552 bytes
File created.
ORA-01507: database not mounted
ORACLE instance shut down.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL*Plus: Release 11.2.0.4.0 Production on Sat Nov 21 04:28:46 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to an idle instance.
ORACLE instance started.
Total System Global Area 1603411968 bytes
Fixed Size 2253664 bytes
Variable Size 520096928 bytes
Database Buffers 1073741824 bytes
Redo Buffers 7319552 bytes
Database mounted.
Flashback complete.
Database altered.
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
-- Execution of restore script for the database TESTDB completed.
[oracle@ol7-112-dg1 ~]$
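After the restore script completes, it is worth confirming the database is back on the 11.2 home and open; a quick sketch, assuming the 11.2 environment set by the restore script is still in effect:

```shell
#!/bin/sh
# Sketch: verify the restored database reports the 11.2 version and is OPEN.
# Exits quietly when Oracle client tools are not installed.
command -v sqlplus >/dev/null 2>&1 || { echo "sqlplus not found; skipping"; exit 0; }
ORACLE_SID=testdb; export ORACLE_SID
sqlplus -S / as sysdba <<'EOF'
select version, status from v$instance;
EOF
```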
Create 19c Database In Archive Mode Using dbca silent
There were discussions on Twitter about a bug preventing creation of a database in archive mode using dbca silent, which piqued my interest.
Here is a quick and dirty test case to demonstrate it was successful in my environment.
Per the Twitter thread, using a response file with dbca does not work, but running dbca from the CLI does.
--- DB patch level:
[oracle@ol7-19-lax1 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
OPatch succeeded.
--- Create database in archivelog mode:
[oracle@ol7-19-lax1 ~]$ dbca -silent \
-createDatabase \
-responseFile NO_VALUE \
-templateName General_Purpose.dbc \
-sid testdb \
-gdbname TESTDB \
-characterSet AL32UTF8 \
-sysPassword Oracle_4U \
-systemPassword Oracle_4U \
-createAsContainerDatabase FALSE \
-databaseType MULTIPURPOSE \
-automaticMemoryManagement FALSE \
-totalMemory 2048 \
-datafileDestination +DATA \
-recoveryAreaDestination +RECO \
-redoLogFileSize 50 \
-emConfiguration NONE \
-sampleSchema FALSE \
-enableArchive TRUE \
-ignorePreReqs
Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
/u01/app/oracle/cfgtoollogs/dbca/TESTDB.
Database Information:
Global Database Name:TESTDB
System Identifier(SID):testdb
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/TESTDB/TESTDB0.log" for further details.
--- Logs for dbca:
[oracle@ol7-19-lax1 ~]$ ls -l /u01/app/oracle/cfgtoollogs/dbca/TESTDB/
total 21508
-rw-r-----. 1 oracle oinstall 12131 Nov 20 13:20 cloneDBCreation.log
-rw-r-----. 1 oracle oinstall 784 Nov 20 13:07 CloneRmanRestore.log
-rw-r-----. 1 oracle oinstall 1820 Nov 20 13:21 lockAccount.log
-rw-r-----. 1 oracle oinstall 3578 Nov 20 13:24 postDBCreation.log
-rw-r-----. 1 oracle oinstall 1436 Nov 20 13:21 postScripts.log
-rw-r-----. 1 oracle oinstall 0 Nov 20 13:06 rmanUtil
-rw-r-----. 1 oracle oinstall 18726912 Nov 20 13:07 tempControl.ctl
-rw-r-----. 1 oracle oinstall 843 Nov 20 13:24 TESTDB0.log
-rw-r-----. 1 oracle oinstall 843 Nov 6 05:14 TESTDB.log
-rw-r-----. 1 oracle oinstall 1635418 Nov 6 05:14 trace.log_2020-11-06_04-58-10AM
-rw-r-----. 1 oracle oinstall 1619098 Nov 20 13:24 trace.log_2020-11-20_01-05-35PM
--- /etc/oratab is automatically updated:
[oracle@ol7-19-lax1 ~]$ tail /etc/oratab
Backup file is /u01/app/oracle/product/19.0.0/dbhome_1/srvm/admin/oratab.bak.ol7-19-lax1.oracle line added by Agent
+ASM1:/u01/app/19.0.0/grid:N
hawk1:/u01/app/oracle/product/19.0.0/dbhome_1:N
testdb:/u01/app/oracle/product/19.0.0/dbhome_1:N
--- Confirm Archive Mode:
[oracle@ol7-19-lax1 ~]$ . oraenv <<< testdb
ORACLE_SID = [hawk1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ol7-19-lax1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 20 13:25:09 2020
Version 19.8.0.0.0
Copyright (c) 1982, 2020, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0
OL7-19-LAX1:(SYS@TESTDB:PRIMARY> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 31
Next log sequence to archive 33
Current log sequence 33
OL7-19-LAX1:(SYS@TESTDB:PRIMARY> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.8.0.0.0
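The same confirmation can be done with a query against v$database instead of archive log list; a minimal sketch, assuming the environment is already set for testdb:

```shell
#!/bin/sh
# Sketch: confirm ARCHIVELOG mode via v$database (equivalent to
# "archive log list" above). Exits quietly without Oracle client tools.
command -v sqlplus >/dev/null 2>&1 || { echo "sqlplus not found; skipping"; exit 0; }
sqlplus -S / as sysdba <<'EOF'
select log_mode from v$database;
EOF
```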
--- Database automatically registered with cluster:
[oracle@ol7-19-lax1 ~]$ srvctl config database -d testdb
Database unique name: TESTDB
Database name: TESTDB
Oracle home: /u01/app/oracle/product/19.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/TESTDB/PARAMETERFILE/spfile.315.1056983141
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: DATA,RECO
Mount point paths:
Services:
Type: SINGLE
OSDBA group: dba
OSOPER group: oper
Database instance: testdb
Configured nodes: ol7-19-lax1
CSS critical: no
CPU count: 0
Memory target: 0
Maximum memory: 0
Default network number for database services:
Database is administrator managed
[oracle@ol7-19-lax1 ~]$
--- Delete database:
[oracle@ol7-19-lax1 ~]$ dbca -silent -deleteDatabase -sourceDB testdb
Enter SYS user password:
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/TESTDB/TESTDB1.log" for further details.
[oracle@ol7-19-lax1 ~]$
Using emcli to create blackout for rac_database
It’s not possible to create a blackout for RAC databases using emctl.
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
Blackout start Error : Command-line blackouts on targets spanning multiple agents are not supported.
12c Cloud Control Blackouts: Steps to Create Blackouts from Console UI / emctl / emcli (Doc ID 1386134.1)
It is also not possible to use emctl to create blackouts against composite targets such as Cluster, Cluster Database, Fusion Middleware (FMW) Domains, E-Business Suite, etc., or against multi-agent targets such as PDBs, since these targets span multiple hosts and the blackout details cannot be propagated to the agents on the other nodes.
For composite targets, the blackout has to be created via the Console UI or emcli.
Example: creating blackout for RAC database (primary and standby)
db_unique_name - MBANZINP_XXXXRAC - Primary database
db_unique_name - MBANZINP_XXXXRACDR - Physical standby database
$ $OMS_HOME/bin/emcli login -username=sysman
$ $OMS_HOME/bin/emcli get_targets -targets=rac_database | grep MBANZINP
1 Up rac_database MBANZINP_XXXXRAC.domain.com
1 Up rac_database MBANZINP_XXXXRACDR.domain.com
$OMS_HOME/bin/emcli create_blackout -name="WHATEVER" -reason="switchover" \
-add_targets="MBANZINP_XXXXRAC.domain.com:rac_database" -propagate_targets -schedule="duration:1:00"
$OMS_HOME/bin/emcli create_blackout -name="WHATEVER" -reason="switchover" \
-add_targets="MBANZINP_XXXXRACDR.domain.com:rac_database" -propagate_targets -schedule="duration:1:00"
-propagate_targets
When you specify this option, a blackout for a target of type "host" applies the blackout to all targets on the host, including the Agent.
This is equivalent to nodelevel in the emctl command.
Regardless of whether you specify this option, a blackout for a target that is a composite or a group applies the blackout to all members of the composite or group.
Other useful commands:
$OMS_HOME/bin/emcli get_blackouts
$OMS_HOME/bin/emcli get_blackout_targets -name="WHATEVER"
$OMS_HOME/bin/emcli get_blackout_details -name="WHATEVER"
$OMS_HOME/bin/emcli stop_blackout -name="WHATEVER"
$OMS_HOME/bin/emcli delete_blackout -name="WHATEVER"
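The two create_blackout calls above can be wrapped in a loop; a sketch, assuming emcli is already logged in and $OMS_HOME is set. The target-name suffix on the blackout name is my addition, to keep the two blackout names distinct:

```shell
#!/bin/sh
# Sketch: blackout both primary and standby rac_database targets in one pass.
# Target names, reason, and duration come from the example above.
# Exits quietly when emcli is not available.
[ -n "$OMS_HOME" ] && [ -x "$OMS_HOME/bin/emcli" ] || { echo "emcli not found; skipping"; exit 0; }
for target in MBANZINP_XXXXRAC.domain.com MBANZINP_XXXXRACDR.domain.com
do
  "$OMS_HOME/bin/emcli" create_blackout -name="WHATEVER_${target}" -reason="switchover" \
    -add_targets="${target}:rac_database" -propagate_targets -schedule="duration:1:00"
done
```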
Troubleshooting ORA-01017 from DGMGRL validate
I was reviewing new 19c Data Guard implementation.
DGMGRL validate is my favorite command to detect any issues.
Running validate shows an issue for one of the databases in the Data Guard configuration.
Apologies for the funky format; since WordPress changed its layout, it’s become more difficult to use, but I digress.
--- Here are the errors:
DGMGRL> validate network configuration for all;
ORA-01017: invalid username/password; logon denied
DGMGRL> validate static connect identifier for all;
ORA-01017: invalid username/password; logon denied
--- Check TNS from $DB_HOME:
$ cat tnsnames.ora
FALCON =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = 10.1.71.242)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = FALCON)
)
)
--- Connect using TNS from above SUCCEEDS.
$ sqlplus sys@FALCON as sysdba
--- Connect using static connect identifier FAILS.
$ sqlplus sys@'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.71.242)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=FALCON_DGMGRL)(INSTANCE_NAME=FALCON)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))' as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 10 14:00:29 2020
Version 19.8.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
ORA-01017: invalid username/password; logon denied
--- Check password file from DB_HOME and find the file size is different compared to the other host.
--- Create a copy of the password file, then copy the file from the other host to this host.
--- What's strange is, if the password file was the issue, then why did sqlplus sys@FALCON as sysdba succeed?
--- Regardless, I like to keep them the same.
$ ls -l orapw*
-rw-r-----. 1 oracle oinstall 4608 Nov 10 14:11 orapwFALCON
-rw-r-----. 1 oracle oinstall 1536 Nov 9 16:45 orapwFALCON.bak
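Beyond comparing file sizes, orapwd (12c and later) can report the format and entries of a password file on each host; a sketch, with an illustrative path:

```shell
#!/bin/sh
# Sketch: inspect a password file's format rather than relying on file size.
# The file path is illustrative. Exits quietly without Oracle tools.
command -v orapwd >/dev/null 2>&1 || { echo "orapwd not found; skipping"; exit 0; }
orapwd describe file="$ORACLE_HOME/dbs/orapwFALCON"
```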
--- Check listener.ora from $GI_HOME and there's the problem:
ORACLE_HOME was set to the GI home instead of the DB home.
$ diff listener.ora listener.ora.bak
16c16
< (ORACLE_HOME = /u01/product/19c)
---
> (ORACLE_HOME = /u01/grid/19c)
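For reference, the static registration in listener.ora should point ORACLE_HOME at the database home. A minimal sketch of the corrected entry; the ORACLE_HOME path comes from the diff above and the GLOBAL_DBNAME matches the FALCON_DGMGRL service in the static connect identifier, while the SID name is an assumption:

```
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = FALCON_DGMGRL)
      (ORACLE_HOME = /u01/product/19c)
      (SID_NAME = FALCON)
    )
  )
```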
Thinking out loud here.
Oracle Data Guard Broker and Static Service Registration (Doc ID 1387859.1)
Note: Static “_DGMGRL” entries are no longer needed as of Oracle Database 12.1.0.2 in Oracle Data Guard Broker configurations.
Since the database was upgraded from 11.2 to 19c, the existing static connect identifier was carried forward.
It might be better to remove the existing configuration and create a new configuration without _DGMGRL; otherwise, the same issue may come up on the next upgrade to 20c.