Feed aggregator

Next/Previous Record: how APEX does it

Jeff Kemp - Sun, 2020-03-29 22:14

It’s not a common requirement in my experience, but it does come up every now and then: a customer is happy with a simple Report + Form but when they open a record, they want to be able to make their changes and go to the “next” record in one button click, instead of having to go back to the report and select the other record.

In the Sample Database Application, page 29 (Order Details) implements “Next” and “Previous” buttons which allow the user to save and open another record in one action.

These buttons are implemented using a legacy Form Pagination process. This process sets some hidden items (P29_ORDER_ID_NEXT, P29_ORDER_ID_PREV, P29_ORDER_ID_COUNT) based on a query on a specified table (DEMO_ORDERS) with a specified unique identifier (ORDER_ID) associated with a primary key item (P29_ORDER_ID). The process requires a navigation order, specified by one or two columns (ORDER_ID, in this instance), so that it knows what the “next” and “previous” records would be. If no next or previous record is found, the corresponding button is hidden.

When the page is submitted, the Next and Previous buttons submit a request (GET_NEXT_ORDER_ID or GET_PREVIOUS_ORDER_ID, respectively). After the ApplyMRU process has run, one of the relevant Branches will redirect the client back to the same page (p29) and set P29_ORDER_ID to either &P29_ORDER_ID_NEXT. or &P29_ORDER_ID_PREV. which causes the page to load the relevant record.

Some things to note with this approach:

  1. The legacy Form Pagination process is limited to a maximum 2 columns for the uniqueness constraint, and 2 columns for the navigation order.
  2. The navigation order of records will not match any custom sort order or filtering the user might have used on the report; so after the user opens the “first record” in the report, the form will not necessarily navigate to the “next record” that they might expect.
  3. The Next / Previous record IDs are queried when the page is initially loaded, so if anything has changed prior to the user clicking “Next” or “Previous”, it’s possible the user will inadvertently be directed to a record that is not actually “next” or “previous” to the record as it is now.
    In the worst case, if someone had deleted a record, the page would show “record not found”.
    In a perhaps less problematic case, if someone else has just inserted a new record with a unique identifier that happens to fall between the user’s previous record and the record they are navigating to, the user will effectively “skip over” the newly inserted record and might be led to believe it doesn’t exist.
  4. For large datasets there may be a performance penalty each time a user loads the form, since the page must issue additional queries to find the IDs of the “previous” and “next” records, as well as to get the total number of records and the position of the current record within the overall dataset (so it can set the “count” item to something like “8 of 10”). This appears to involve three separate queries (in addition to the original query that gathered the data for the record being viewed).

These are not necessarily insurmountable or showstopping issues but should be kept in mind for forms using this approach.

With the new APEX Form feature, the above approach can still be used in much the same way – the attribute settings are a little different.

The page has a Before Header process of type Form – Initialization. This process has the following optional settings: Next Primary Key Item(s), Previous Primary Key Item(s), and Current Row/Total Item. To use this feature you must first create the items (usually hidden) and then set these attributes to the item names.

Note that the Next / Previous Primary Key Item(s) attributes accept a comma-delimited list of items which allows them to support a compound key; I haven’t tested it but I expect this means it can support more than 2 columns.

The form will automatically populate these hidden items with the Order ID of the next and previous record, and will set the Current Row/Total Item to something like “8 of 10”. You can then use these items how you wish, e.g. as per the legacy pagination scheme, add the “Next” and “Previous” buttons, and create the navigation Branches to open the form with the relevant records.

To control the navigation order, you would set the Order By attribute on the form region.

Note that the comments above about concurrent record inserts and deletes by other users, and about report filters and sorting, also apply to the new form process.

In the past I built a system where it was important that the “Next” / “Previous” buttons should allow the user to navigate up and down the records exactly as shown in the report, respecting user-entered filters and sort order. The approach I took was to gather the IDs into a collection and pass this to the form when the user opened a record. I described the implementation and limitations of this approach in an older blog post which I expect still works today: Next/Previous buttons from Interactive Report results

Silent Install 11.2.0.4 DB Software With GI 18c On OEL 7.7

Michael Dinh - Sat, 2020-03-28 15:45

Just some notes:

One good thing about a GUI install is that it lets you fix any issues and retry; not so much with a silent install.
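
Since a silent runInstaller simply aborts when a prerequisite check fails, one way to surface issues up front is to run the Cluster Verification Utility first. The following is only a hedged sketch (not part of the original run), reusing the 18c GI home and node names from this environment:

# Pre-check the database install prerequisites from the existing GI home
# on both RAC nodes before attempting the silent runInstaller.
export GRID_HOME=/u01/app/18.0.0/grid
$GRID_HOME/bin/cluvfy stage -pre dbinst -n ol7-183-rac1,ol7-183-rac2 -verbose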

================================================================================
Requirements for Installing Oracle 11.2.0.4 RDBMS on OL7 or RHEL7 64-bit (x86-64) (Doc ID 1962100.1)	

PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)	

Installation of Oracle 11.2.0.4 Database Software on OL7 fails with 'Error in invoking target 'agent nmhs' of makefile ' & 
"undefined reference to symbol 'B_DestroyKeyObject'" error (Doc ID 1965691.1)	
================================================================================


================================================================================
### First install attempt without -ignorePrereq
================================================================================

$ ./runInstaller -ignorePrereq

Note that the above command does not perform any pre-requisite checks. 
Hence, ensure that all the software requirements documented in the install guide are fulfilled before executing the installer using the above option.

================================================================================

[oracle@ol7-183-rac1 ~]$ ./install_db_software.sh

+ /u01/app/oracle/software/database/runInstaller -force -silent -waitforcompletion
-responseFile /u01/app/oracle/software/database/response/db_install.rsp 
oracle.install.option=INSTALL_DB_SWONLY 
ORACLE_HOSTNAME=ol7-183-rac1.localdomain 
UNIX_GROUP_NAME=oinstall 
INVENTORY_LOCATION=/u01/app/oraInventory 
SELECTED_LANGUAGES=en ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 
ORACLE_BASE=/u01/app/oracle 
oracle.install.db.InstallEdition=EE 
oracle.install.db.EEOptionsSelection=false 
oracle.install.db.DBA_GROUP=dba 
oracle.install.db.OPER_GROUP=oper 
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 
oracle.installer.autoupdates.option=SKIP_UPDATES 
oracle.install.db.isRACOneInstall=false 
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false 
DECLINE_SECURITY_UPDATES=true

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 25005 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 17391 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-03-26_04-15-06PM. Please wait ...

[FATAL] [INS-13013] Target environment do not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log. 
   Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
[oracle@ol7-183-rac1 ~]$


================================================================================
### Review types of errors
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log |cut -d ":" -f1 |sort -u
   ACTION
   CAUSE
INFO
SEVERE
WARNING
[oracle@ol7-183-rac1 ~]$


================================================================================
### Review List of failed Tasks
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -A100 "List of failed Tasks" /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
INFO: ------------------List of failed Tasks------------------
INFO: *********************************************
INFO: Package: pdksh-5.2.14: This is a prerequisite condition to test whether the package "pdksh-5.2.14" is available on the system.
INFO: Severity:IGNORABLE
INFO: OverallStatus:VERIFICATION_FAILED
INFO: *********************************************
INFO: CRS Integrity: This test checks the integrity of Oracle Clusterware stack across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Cluster Manager Integrity: This test checks the integrity of cluster manager across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Node Application Existence: This test checks the existence of Node Applications on the system.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Clock Synchronization: This test checks the Oracle Cluster Time Synchronization Services across the cluster nodes.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: *********************************************
INFO: Database Clusterware Version Compatibility: This test ensures that the Database version is compatible with the CRS version.
INFO: Severity:CRITICAL
INFO: OverallStatus:OPERATION_FAILED
INFO: -----------------End of failed Tasks List----------------
INFO: Adding ExitStatus PREREQUISITES_NOT_MET to the exit status set
SEVERE: [FATAL] [INS-13013] Target environment do not meet some mandatory requirements.
   CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
INFO: Advice is ABORT
INFO: Adding ExitStatus INVALID_USER_INPUT to the exit status set
INFO: Completed validating state {performChecks}
INFO: Terminating all background operations
INFO: Terminated all background operations
INFO: Finding the most appropriate exit status for the current application
INFO: Exit Status is -3
INFO: Shutdown Oracle Database 11g Release 2 Installer
[oracle@ol7-183-rac1 ~]$


================================================================================
### Search for "Error Message"
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -i 'error message' /u01/app/oraInventory/logs/installActions2020-03-26_04-15-06PM.log
INFO: Error Message:PRVF-7532 : Package "pdksh" is missing on node "ol7-183-rac2"
INFO: Error Message:PRVF-7532 : Package "pdksh" is missing on node "ol7-183-rac1"
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
INFO: Error Message:PRVF-4037 : CRS is not installed on any of the nodes
[oracle@ol7-183-rac1 ~]$


================================================================================
PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)	
The bug is fixed in 11.2.0.3, the workaround is to update GI home with CRS="true" flag.
================================================================================


================================================================================
### Check inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2020, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>


================================================================================
### UPDATE inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ . oraenv <<< +ASM1
ORACLE_SID = [cdbrac1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-183-rac1 ContentsXML]$ export GRID_HOME=$ORACLE_HOME

[oracle@ol7-183-rac1 ContentsXML]$ $GRID_HOME/oui/bin/runInstaller -silent -ignoreSysPrereqs -updateNodeList ORACLE_HOME=$GRID_HOME "CLUSTER_NODES={ol7-183-rac1,ol7-183-rac2}" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 17391 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.


================================================================================
### VERIFY inventory for GI RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2020, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@ol7-183-rac1 ContentsXML]$


================================================================================
### Retry Install
================================================================================

[oracle@ol7-183-rac1 ~]$ cat install_db_software.sh
#!/bin/sh -x
/u01/app/oracle/software/database/runInstaller -force \
-silent -waitforcompletion -ignorePrereq \
-responseFile /u01/app/oracle/software/database/response/db_install.rsp \
oracle.install.option=INSTALL_DB_SWONLY \
ORACLE_HOSTNAME=ol7-183-rac1.localdomain \
UNIX_GROUP_NAME=oinstall \
INVENTORY_LOCATION=/u01/app/oraInventory \
SELECTED_LANGUAGES=en \
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 \
ORACLE_BASE=/u01/app/oracle \
oracle.install.db.InstallEdition=EE \
oracle.install.db.EEOptionsSelection=false \
oracle.install.db.DBA_GROUP=dba \
oracle.install.db.OPER_GROUP=oper \
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 \
oracle.installer.autoupdates.option=SKIP_UPDATES \
oracle.install.db.isRACOneInstall=false \
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false \
DECLINE_SECURITY_UPDATES=true
[oracle@ol7-183-rac1 ~]$


[oracle@ol7-183-rac1 ~]$ ./install_db_software.sh
+ /u01/app/oracle/software/database/runInstaller -force -silent -waitforcompletion -ignorePrereq 
-responseFile /u01/app/oracle/software/database/response/db_install.rsp 
oracle.install.option=INSTALL_DB_SWONLY 
ORACLE_HOSTNAME=ol7-183-rac1.localdomain 
UNIX_GROUP_NAME=oinstall 
INVENTORY_LOCATION=/u01/app/oraInventory 
SELECTED_LANGUAGES=en 
ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 
ORACLE_BASE=/u01/app/oracle 
oracle.install.db.InstallEdition=EE 
oracle.install.db.EEOptionsSelection=false 
oracle.install.db.DBA_GROUP=dba 
oracle.install.db.OPER_GROUP=oper 
oracle.install.db.CLUSTER_NODES=ol7-183-rac1,ol7-183-rac2 
oracle.installer.autoupdates.option=SKIP_UPDATES 
oracle.install.db.isRACOneInstall=false 
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false 
DECLINE_SECURITY_UPDATES=true

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 24578 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 17391 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-03-26_05-17-28PM. Please wait ...

You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log

The installation of Oracle Database 11g was successful.
Please check '/u01/app/oraInventory/logs/silentInstall2020-03-26_05-17-28PM.log' for more details.

As a root user, execute the following script(s):
        1. /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh

Execute /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh on the following nodes:
[ol7-183-rac1, ol7-183-rac2]

Successfully Setup Software.
[oracle@ol7-183-rac1 ~]$


[root@ol7-183-rac1 ~]# /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0.4/dbhome_1/install/root_ol7-183-rac1.localdomain_2020-03-26_17-44-13.log for the output of root script
[root@ol7-183-rac1 ~]#


[root@ol7-183-rac2 ~]# /u01/app/oracle/product/11.2.0.4/dbhome_1/root.sh
Check /u01/app/oracle/product/11.2.0.4/dbhome_1/install/root_ol7-183-rac2.localdomain_2020-03-26_17-44-55.log for the output of root script
[root@ol7-183-rac2 ~]#


================================================================================
### FROM silentInstall*.log - Known Issues - (Doc ID 1965691.1)	
================================================================================

[oracle@ol7-183-rac1 ~]$ cat /u01/app/oraInventory/logs/silentInstall2020-03-26_05-17-28PM.log
silentInstall2020-03-26_05-17-28PM.log
sNativeVolName:/u01/app/oracle/product/11.2.0.4/dbhome_1/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
sNativeVolName:/tmp/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
Error in invoking target 'agent nmhs' of makefile '/u01/app/oracle/product/11.2.0.4/dbhome_1/sysman/lib/ins_emagent.mk'. See '/u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log' for details.
sNativeVolName:/u01/app/oracle/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
sNativeVolName:/u01/app/oraInventory/
m_asNodeArray:ol7-183-rac1,ol7-183-rac2
m_sLocalNode:ol7-183-rac1
The installation of Oracle Database 11g was successful.
[oracle@ol7-183-rac1 ~]$


================================================================================
### Check installActions*.log
================================================================================

[oracle@ol7-183-rac1 ~]$ grep -e '[[:upper:]]: ' /u01/app/oraInventory/logs/installActions2020-03-26_05-17-28PM.log |cut -d ":" -f1 |sort -u
INFO
WARNING
[oracle@ol7-183-rac1 ~]$


================================================================================
### Check inventory for DB RAC install
================================================================================

[oracle@ol7-183-rac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>11.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI18Home1" LOC="/u01/app/18.0.0/grid" TYPE="O" IDX="1" CRS="true">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0.4/dbhome_1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="ol7-183-rac1"/>
      <NODE NAME="ol7-183-rac2"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@ol7-183-rac1 ContentsXML]$


================================================================================
### cluvfy comp healthcheck
================================================================================

[oracle@ol7-183-rac1 cvu]$ . oraenv <<< +ASM1
ORACLE_SID = [cdbrac1] ? The Oracle base remains unchanged with value /u01/app/oracle

[oracle@ol7-183-rac1 ~]$ cluvfy comp software

Verification of Health Check was unsuccessful.
Checks did not pass for the following nodes:
        ol7-183-rac2,ol7-183-rac1


Failures were encountered during execution of CVU verification request "Health Check".

Verifying Physical Memory ...FAILED
ol7-183-rac2: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-183-rac2" [Required physical memory = 8GB (8388608.0KB)]

ol7-183-rac1: PRVF-7530 : Sufficient physical memory is not available on node
              "ol7-183-rac1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Ethernet Jumbo Frames ...FAILED
ol7-183-rac2: PRVE-0293 : Jumbo Frames are not configured for interconnects
              "eth2" on node "ol7-183-rac2.localdomain". [Expected="eth2=9000";
              Found="eth2=1500"]

ol7-183-rac1: PRVE-0293 : Jumbo Frames are not configured for interconnects
              "eth2" on node "ol7-183-rac1.localdomain". [Expected="eth2=9000";
              Found="eth2=1500"]


CVU operation performed:      Health Check
Date:                         Mar 26, 2020 6:07:08 PM
CVU home:                     /u01/app/18.0.0/grid/
User:                         oracle
[oracle@ol7-183-rac1 cvu]$

ORA-28050 - Can I drop the SYSTEM User?

Pete Finnigan - Sat, 2020-03-28 02:46
Two things most annoy me with the Oracle database in terms of securing it: the abundance of default users in most Oracle databases that I perform security audits on, and the massive number of PUBLIC grants....[Read More]

Posted by Pete On 27/03/20 At 06:11 PM

Categories: Security Blogs

Tuning query with function calls in select

Tom Kyte - Sat, 2020-03-28 02:46
I have 7M rows from legacy system. insert /*+ append */ into TGT_DATA ( LST_NAM, FRST_NAM, MDL_NAM ) SELECT /*+ parallel(10)*/ func_text_clean(LST_NAM), func_text_clean( FRST_NAM), func_text_clean( MDL_NAM) FROM P_ONST; ...
Categories: DBA Blogs

Pass string values to stored procedure

Tom Kyte - Sat, 2020-03-28 02:46
Hello, Ask Tom Team. I have several DELETE statements and I want them all within a stored procedure to simplify execution. All these statements would use different string values in the where condition. DELETE FROM user1.table4 t4 WHERE t4.id...
Categories: DBA Blogs

Basic DynamoDB Data Modelling

Pakistan's First Oracle Blog - Sat, 2020-03-28 01:58



The key differentiator between a NoSQL database like AWS DynamoDB and a relational database like Oracle is the way data is modelled. In a relational model, data is normalized and de-duplicated, and relationships are established between entities. In a NoSQL database like DynamoDB, data can be duplicated and entities can be fused together. This is done to facilitate scalability, which is a hallmark of today's hyper-scale cloud based applications.


One of the most common relationships in data modelling is one-to-many. For example, we have two entities: Museum and Painting. A painting can only reside in one museum, whereas a museum can have multiple paintings. This is a one-to-many relationship. In the relational modelling world, we would have two tables, Museum and Painting, such as:


Museum

MuseumId   MuseumName
M1         Sofía
M2         NSW Gallery
M3         Louvre

Painting

PaintingId   PaintingName            MuseumId
P1           Guernica                M1
P2           The Great Masturbator   M1
P3           Sofala                  M2
P4           Mona Lisa               M3

In the above one-to-many relationship, we have joined Museum and Painting through a primary key / foreign key relationship. So, for example, if we want to list all paintings in museum M1, the query would be:

select p.PaintingName, m.MuseumName from Painting p, Museum m where p.MuseumId = m.MuseumId and m.MuseumId = 'M1';

Now, joins are expensive, and they become more expensive as the data grows. In huge cloud-scale databases this hampers scalability, and at times it becomes impractical to have these joins at all, hence NoSQL databases like DynamoDB.

So how do you model One-to-Many relationship in a Dynamodb table when we cannot have joins and aggregations?

We know what the entities are, and we have an access pattern, so now let’s design our table.


MuseumPaintingTable
(PK and SK together form the composite primary key; the remaining columns are attributes.)

PK          SK            MuseumName    Title                   Artist
Museum#M1   Painting#P1   Sofía         Guernica                Picasso
Museum#M1   Painting#P2   Sofía         The Great Masturbator   Salvador Dali
Museum#M2   Painting#P3   NSW Gallery   Sofala                  Russel Drysdale
Museum#M3   Painting#P4   Louvre        Mona Lisa               Leonardo

If you are coming from a relational world, the above table may seem like blasphemy, even though, to keep it simple, I haven’t yet used different attributes for each item. This is where you need to appreciate the flexibility that NoSQL databases offer to facilitate scalability.

In the above table, we have defined a composite primary key consisting of a partition key and a sort key. A primary key in a DynamoDB table identifies a unique item, where an item corresponds to a row. Remember that all access patterns are driven through the primary key, which means that the entry point for your queries must be the primary key. Every museum and painting has its own unique id, so we have combined them to generate a unique item.
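
To make the table definition concrete, here is a hedged sketch (not from the original post) of how such a table could be created with the AWS CLI; the table name matches the example above, while the billing mode is just an assumption:

# Hypothetical create-table call: PK is the partition (HASH) key and SK is the
# sort (RANGE) key; all other attributes are schemaless and need no definition.
aws dynamodb create-table \
    --table-name MuseumPaintingTable \
    --attribute-definitions AttributeName=PK,AttributeType=S AttributeName=SK,AttributeType=S \
    --key-schema AttributeName=PK,KeyType=HASH AttributeName=SK,KeyType=RANGE \
    --billing-mode PAY_PER_REQUEST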

Note how we have defined the one-to-many relationship in the above table using the partition key and sort key within the primary key. For one museum, Museum#M1, we have two paintings, Painting#P1 and Painting#P2, and each of these paintings has its own attributes, which can even differ in structure. We have fused the Painting and Museum entities together.
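
As an illustration (again just a sketch, not from the original post), loading one of those items with the AWS CLI would look like this; nothing forces the two painting items under Museum#M1 to carry the same set of attributes:

# Hypothetical put-item call for the Painting#P1 item of Museum#M1.
aws dynamodb put-item \
    --table-name MuseumPaintingTable \
    --item '{
        "PK":         {"S": "Museum#M1"},
        "SK":         {"S": "Painting#P1"},
        "MuseumName": {"S": "Sofía"},
        "Title":      {"S": "Guernica"},
        "Artist":     {"S": "Picasso"}
    }'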

Now how would you write the previous relational query in DynamoDB, where we wanted to check all paintings in the Museum M1? It would be like this:

aws dynamodb query \
    --table-name MuseumPaintingTable \
    --key-condition-expression "PK = :MID" \
    --expression-attribute-values '{":MID":{"S":"Museum#M1"}}' \
    --projection-expression 'Title,MuseumName'

Under the hood, the above command issues a Query API call (a RESTful request) against the DynamoDB table.
Categories: DBA Blogs

Band Joins

Tom Kyte - Fri, 2020-03-27 08:26
Team, Was reading about Band join from the below links: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/tgsql/joins.html#GUID-24F34188-110F-4245-9DE7-43954092AFE0 https://jonathanlewis.wordpress.com/2017/02/13/band-j...
Categories: DBA Blogs

Proxy user info

Tom Kyte - Fri, 2020-03-27 08:26
Where in v$ views is hiding info about PROXY_USERNAME ? I'am aware of little piece of information in v$session_connect_info and dbproxy_username column in unified_audit_trail. Most appropriate place will be in v$session, but i didn't find it. R...
Categories: DBA Blogs

Oracle disables your multitenant option when you run on EC2

Yann Neuhaus - Thu, 2020-03-26 17:56

I have installed Oracle 19.6 on an EC2 instance for our Multitenant Workshop training. And of course, during the workshop we create a lot of PDBs. If you haven’t paid for the Enterprise Edition plus the Multitenant option you can create at most 3 pluggable databases, but with this option you can create up to 252 pluggable databases. Is it worth the price, which according to the public price list is USD 47,500 + 17,500 per processor (which means per core, because Oracle doesn’t apply the core factor when your Intel processors are in the AWS cloud, according to the Authorized Cloud Environments paper)? Probably not, because Oracle detects where you run and bridles some features depending on whether you are on the Dark or the Light Side of the public cloud (according to their criteria, of course).

At one point I have 3 pluggable databases in my CDB:


SQL> show pdbs
   CON_ID     CON_NAME    OPEN MODE    RESTRICTED
_________ ____________ ____________ _____________
        2 PDB$SEED     READ ONLY    NO
        3 CDB1PDB01    MOUNTED
        4 CDB1PDB03    MOUNTED
        5 CDB1PDB02    MOUNTED

I want to create a 4th one:


SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

It fails. The maximum number of pluggable databases is defined by MAX_PDBS, but I defined nothing in my SPFILE:


SQL> show spparameter max_pdbs
SID NAME     TYPE    VALUE
--- -------- ------- -----
*   max_pdbs integer

I thought that the default was 4098 (which is incorrect anyway as you cannot create more than 4096) but it is actually 5 here:


SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 5

Ok… this parameter is supposed to count the number of user pluggable databases (the ones with CON_ID>2) and I have 3 of them here. The limit is 5 and I get an error saying that I’ve reached it. That’s not the first time I’ve seen wrong maths with this parameter. But there’s worse: I cannot change it:


SQL> alter system set max_pdbs=6;

alter system set max_pdbs=6
 *
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-65334: invalid number of PDBs specified

I can change it in the SPFILE but it doesn’t help me to create more pluggable databases:


SQL> alter system set max_pdbs=200 scope=spfile;

System altered.

SQL> startup force;

Total System Global Area   2147482744 bytes
Fixed Size                    9137272 bytes
Variable Size               587202560 bytes
Database Buffers           1543503872 bytes
Redo Buffers                  7639040 bytes
Database mounted.
Database opened.

SQL> show parameter max_pdbs
NAME     TYPE    VALUE
-------- ------- -----
max_pdbs integer 200

SQL> create pluggable database CDB1PDB04 from CDB1PDB03;

create pluggable database CDB1PDB04 from CDB1PDB03
                          *
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created

Something bridles me. There’s a MOS Note, ORA-65010 When Oracle Database Hosted on AWS Cloud (Doc ID 2328600.1), about the same problem, but that’s for 12.1.0.2 (before MAX_PDBS was introduced) and it is supposed to be fixed in the AUG 2017 PSU. But here I am, 3 years later, in 19.6 (the January 2020 Release Update for the latest version available on-premises).

So, Oracle limits the number of pluggable databases when we are on a public cloud provider which is not the Oracle Public Cloud. This limitation is not mentioned in the licensing documentation, which gives 252 as the Enterprise Edition limit, and I see nothing about "Authorized Cloud Environments" limitations for this item. This, and the fact that it can come and go with Release Updates, puts customers at risk when running on AWS EC2: financial risk and availability risk. I think there are only two choices, in the long term, when you want to run your database in a cloud: go to the Oracle Cloud or leave for another database.

How does the Oracle instance know on which public cloud you run? All cloud platforms provide some metadata through HTTP api. I have straced all sendto() and recvfrom() system calls when starting the instance:


strace -k -e trace=recvfrom,sendto -yy -s 1000 -f -o trace.trc sqlplus / as sysdba <<<'startup force'

And I searched for Amazon and AWS here:

This is clear: the instance has a function to detect the cloud provider (kgcs_clouddb_provider_detect) when initializing the SGA in a multitenant architecture (kpdbInitSga) with the purpose of detecting non-oracle clouds (kscs_is_non_oracle_cloud). This queries the AWS metadata (documented on Retrieving Instance Metadata):


[oracle@ora-cdb-1 ~]$ curl http://169.254.169.254/latest/meta-data/services/domain
amazonaws.com/

When Oracle software sees the name of the enemy in the domain name amazonaws.com, it sets an internal limit for the number of pluggable databases that overrides the MAX_PDBS setting. Ok, I don’t need this metadata and I’m root on EC2 so my simple workaround is to block this metadata API:


[root@ora-cdb-1 ~]# iptables -A OUTPUT -d 169.254.169.254  -j REJECT
[root@ora-cdb-1 ~]# iptables -L
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
REJECT     udp  --  anywhere             10.0.0.2             udp dpt:domain reject-with icmp-port-unreachable
REJECT     all  --  anywhere             10.0.0.2             reject-with icmp-port-unreachable

Then restart the instance and it works: I can set or reset MAX_PDBS and create more pluggable databases.
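
For instance, a minimal sketch (not the author’s actual session) of the kind of check you could run afterwards:

sqlplus -s / as sysdba <<'SQL'
-- user PDBs are the containers with CON_ID > 2 (1 = CDB$ROOT, 2 = PDB$SEED)
show parameter max_pdbs
select count(*) as user_pdbs from v$pdbs where con_id > 2;
SQL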

If, for whatever reason, I want to revert back, I can remove the rule:


[root@ora-cdb-1 ~]# iptables -D OUTPUT -d 169.254.169.254  -j REJECT


Finally, because there were many bugs with the MAX_PDBS soft limit, there’s a parameter to disable it, and this also disables the hard limit:


SQL> alter system set "_cdb_disable_pdb_limit"=true scope=spfile;
System altered.

Thanks to Mauricio Melnik for the heads-up on that:

MOS 2538172.1 _cdb_disable_pdb_limit=true …. maybe?

— Mauricio Melnik (@maurimelnik) March 26, 2020

However, with this parameter you can no longer control the maximum number of PDBs, so don’t forget to monitor AUX_COUNT in DBA_FEATURE_USAGE_STATISTICS.
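
A hedged sketch of such a check, assuming the feature entry is named “Oracle Multitenant” in your version:

sqlplus -s / as sysdba <<'SQL'
-- AUX_COUNT records the number of user PDBs seen for the multitenant feature
select name, version, detected_usages, aux_count, last_usage_date
from   dba_feature_usage_statistics
where  name = 'Oracle Multitenant'
order  by last_sample_date;
SQL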

This was my discovery while preparing the multitenant workshop lab environment. Note that, given the current situation where everybody works from home when possible, we are ready to give this training, full of hands-on exercises, through Microsoft Teams and AWS EC2 virtual machines. Two days to get comfortable with moving to the CDB architecture, which is what you should plan for this year if you intend to stay with Oracle Database for future versions.

Update 27-MAR-2020

In order not to sound too negative here, this limit on AWS platforms has been removed in the past and this may be a bug re-introduced with the change from 1 to 3 PDBs in Standard Edition.

Cet article Oracle disables your multitenant option when you run on EC2 est apparu en premier sur Blog dbi services.

Setting Users Impossible Passwords BY VALUES and Schema Only Accounts

Pete Finnigan - Thu, 2020-03-26 14:06
I plan to try and write some Oracle security based blog posts whilst working from home. These promises when I have made them in the past usually end up not coming true due to other work and things getting more....[Read More]

Posted by Pete On 26/03/20 At 02:38 PM

Categories: Security Blogs

Guarantee a limit for total amount of resources used by all the sessions of users in a particular user group

Tom Kyte - Thu, 2020-03-26 14:06
Can we guarantee a limit for total amount of resources used by all the sessions of users in a particular user group so their total usage does not exceed 10% of CPU, 10% of IO, 10% of Memory? Can we ensure that a user group follows different resou...
Categories: DBA Blogs

Gaining low level and in depth understanding on switching through data base resource manager

Tom Kyte - Thu, 2020-03-26 14:06
I read that one way to switch the load plan is through scheduler. I guess scheduler will switch to a plan by taking switching decision based on time. It will not take switching decision based on need and availability of resources among various consum...
Categories: DBA Blogs

ora-12954 on oracle xe 18c with less than 12gb user data

Tom Kyte - Thu, 2020-03-26 14:06
Dear Tom, about two weeks ago we moved a test database from 12.2.0.1 SE2 to 18c XE via data pump. We used schema import into a pdb with compression enabled (from the parameter file: TRANSFORM=TABLE_COMPRESSION_CLAUSE:"ROW STORE COMPRESS ADVANCED")...
Categories: DBA Blogs

parallel database systems store

Tom Kyte - Thu, 2020-03-26 14:06
I am recently studying database and face such a question: Large-scale parallel database systems store an extra copy of each data item on disks attached to a different processor, to avoid loss of data if one of the processors fails. a. Instead of ...
Categories: DBA Blogs

v_x$kglob query takes longer time

Tom Kyte - Thu, 2020-03-26 14:06
Following query was taking longer time. 1) We try to make markhot with following SQL: SYS.DBMS_SHARED_POOL.MARKHOT(hash=>e130457a4520f54c18acb0131777d76d,namespace=>0); ERROR at line 1: ORA-06550: line 1, column 42: ...
Categories: DBA Blogs

Submitting PL/SQL procedure execution request via REST

Tom Kyte - Thu, 2020-03-26 14:06
I need to click a URL on GUI page. Once clicked, respective PL/SQL procedure execution request should be submitted to respective database in form of query API request- REST API. Once procedure is executed in database, status message (success or faile...
Categories: DBA Blogs

Grouping same value in different groups

Tom Kyte - Thu, 2020-03-26 14:06
Database: Oracle Database 12c Release 12.2.0.1.0 Following is my test case script: create table test ( id number(1), sdate date, tdate date, prnt_id number(1) ); i...
Categories: DBA Blogs

Data Pump API to refresh top 10 partitions

Tom Kyte - Thu, 2020-03-26 14:06
I have a requirement where I have to refresh tables from PROD to DEV. But refresh needs to be done in such a manner that partitioned table should be refreshed with its latest top 10 partitions and non partitioned table should be refreshed full I a...
Categories: DBA Blogs

Using FLASHBACK DATABASE for [destructive] D.R. Testing

Hemant K Chitale - Thu, 2020-03-26 11:45
Testing your Disaster Recovery strategy with an Oracle Standby Database can be at different "levels" for the database :
1. Graceful Switchover to the D.R. site and reversing roles between the two databases, but only *querying* data at the D.R. site
2. Shutdown of the Production site and Failover to the D.R. site, and only *querying* data at the D.R. site
3. Shutdown of the Production site and Failover to the D.R. site, with *destructive* testing at the D.R. site followed by restore (or flashback) of the D.R. site database to "throw away" all changes
4. Either Switchover or Failover with role reversal and *destructive* testing at the D.R. site, validation that data changes flow back to the Production site and, finally, restore (or flashback) of the database at both sites.

Restoring a large database at one or both sites can take time.
You may have taken a Snapshot of the database(s) and just restore the snapshot.
Or you may FLASHBACK the database(s).

{for details on how I created this Standby database configuration in 19c, see my previous posts here and here}

I will try to use FLASHBACK DATABASE here.

I start with the Primary running at the Production site :

oracle19c>sqlplus hemant/hemant@orclpdb1

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:22:26 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Last Successful login time: Thu Mar 26 2020 23:22:02 +08:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> drop table my_transactions purge;

Table dropped.

SQL> create table my_transactions (txn_id number, txn_data varchar2(50));

Table created.

SQL> insert into my_transactions values (1,'First at ProductionDC:Primary');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>


I then verify the state of both databases (the "oracle19c" prompt is at the Production site, the  "STDBYDB" prompt is at the D.R. site)

oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:23:48 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
CURRENT NOT ALLOWED PRIMARY NO 4796230

SQL>



STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:25:02 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY REQUIRED PHYSICAL STANDBY NO
0 4796205


SQL>


So, currently, the Standby is slightly behind (SCN#4796205) the Primary (SCN#4796230). Note that FLASHBACK is *not* enabled in the databases.

I first create my RESTORE POINT on the Standby and then on the Primary.

{at the current Standby at the D.R. site}
SQL> alter database recover managed standby database cancel;

Database altered.

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /opt/oracle/FRA/STDBYDB
db_recovery_file_dest_size big integer 10G
SQL> create restore point dr_before_switch guarantee flashback database;

Restore point created.

SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
DR_BEFORE_SWITCH

2 4796590 YES


SQL>
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>




{at the current Primary at the Production site}
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
CURRENT NOT ALLOWED PRIMARY NO 4796230

SQL> alter system switch logfile;

System altered.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
CURRENT NOT ALLOWED PRIMARY NO 4796968

SQL> create restore point production_before_switch guarantee flashback database;

Restore point created.

SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
PRODUCTION_BEFORE_SWITCH

2 4797182 YES


SQL>


At each site, I have created a Restore Point (with Guarantee Flashback Database). I have ensured that the Restore Point for the current Standby Database at the D.R. site is at a *lower* SCN (4796590) than that for the current Primary (4797182) at the Production site. To further ensure this, I did a log switch and verified the CURRENT_SCN at the Primary before creating the Restore Point.

(Note that both sites have a DB_RECOVERY_FILE_DEST configured for the GUARANTEEd Restore Point).

(A small note: I have to disable Recovery at the Standby database before I can create a Restore Point, and then re-enable Recovery after that. A Restore Point cannot be created while a database is in Recovery mode.)


I now put in another transaction at the Primary (Production site database) and then Switchover to the D.R. site.

SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into my_transactions values (2,'Second, after R.P. at ProductionDC:Primary');

1 row created.

SQL> commit;

Commit complete.

SQL> connect / as sysdba
Connected.
SQL> alter database switchover to stdbydb;

Database altered.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:41:57 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$databasse
3
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON CURRENT_SCN
------- ----------- ---------------- ------------------ -----------
STANDBY ALLOWED PHYSICAL STANDBY RESTORE POINT ONLY 4899284

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>


So, now the database at the Production site is a Standby database.

I now connect to the database at the D.R. site that is now a Primary

STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:45:02 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
CURRENT NOT ALLOWED PRIMARY RESTORE POINT ONLY
4899284 0


SQL> shutdown ;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
Database opened.
SQL>
SQL> alter pluggable database orclpdb1 open;

Pluggable database altered.

SQL> connect hemant/hemant@STDBYPDB1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
2 Second, after R.P. at ProductionDC:Primary

SQL>
SQL> insert into my_transactions values (3,'Destructive change at DRDC');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
STDBYDB>


{Note that "STDBYDPDB1" is my tnsnames entry for the PDB which still has the name "orclpdb1" at the D.R. site.}

I have created a "destructive" change with the third row which should not be in production. However, I will switch back to the Production data centre and verify that the row has replicated back.

{at the D.R. site}
STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:50:29 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database switchover to orclcdb;

Database altered.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
STDBYDB>



{at the Production site}
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:52:21 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
Database opened.
SQL> alter pluggable database orclpdb1 open;
alter pluggable database orclpdb1 open
*
ERROR at line 1:
ORA-65019: pluggable database ORCLPDB1 already open


SQL>
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
2 Second, after R.P. at ProductionDC:Primary
3 Destructive change at DRDC

SQL>


So, I have been able to
1. SWITCHOVER from the Production site to the D.R. site
2. Create a new row when the database is Primary at the D.R. site
3. SWITCHOVER back to the Production site
4. Verify that the destructive row is now at the Production site.

I now need to reset both databases to the state they were in before I began the test.

{at the Production site}
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Thu Mar 26 23:56:16 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
CURRENT NOT ALLOWED PRIMARY RESTORE POINT ONLY
5000964 0


SQL>
SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
PRODUCTION_BEFORE_SWITCH

2 4797182 YES


SQL>
SQL> FLASHBACK DATABASE TO RESTORE POINT PRODUCTION_BEFORE_SWITCH;

Flashback complete.

SQL> alter database open resetlogs ;

Database altered.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
CURRENT NOT ALLOWED PRIMARY RESTORE POINT ONLY
5000964 4798237


SQL>
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary

SQL>


So, now, the database at the Production site has reverted to the Restore Point and all changes after the Restore Point have been discarded.

This includes TXN_ID=2, which I had added to demonstrate propagation of a change from the Production site to the D.R. site ---- in your testing, you must ensure that you do not make any changes after the Restore Point is created. Typically, you'd create your Production Restore Point with the applications disconnected and the database shut down and re-mounted just before the switchover, as sketched below. Remember, this is for D.R. testing, when you do have control over applications and database shutdown and startup.
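
A minimal sketch of that sequence on the Primary (the restore point name is the one used in this post; adapt the rest to your environment):

sqlplus -s / as sysdba <<'SQL'
-- stop the applications first, then quiesce the Primary and take the
-- guaranteed restore point while mounted, just before the switchover
shutdown immediate
startup mount
create restore point production_before_switch guarantee flashback database;
alter database open;
SQL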


What about the database at the D.R. site? Can I flash it back and resume its role as a Standby?
Remember that the Restore Point I created on the D.R. site was at a *lower* SCN than that for the Production site.

STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 27 00:08:25 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY ALLOWED PHYSICAL STANDBY RESTORE POINT ONLY
0 5000964


SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

NAME
--------------------------------------------------------------------------------
RESTORE_POINT_TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# SCN GUA
--------------------- ---------- ---
DR_BEFORE_SWITCH

2 4796590 YES

PRODUCTION_BEFORE_SWITCH_PRIMARY

2 4797182 NO


SQL> FLASHBACK DATABASE TO RESTORE POINT DR_BEFORE_SWITCH;

Flashback complete.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY ALLOWED PHYSICAL STANDBY RESTORE POINT ONLY
0 4796590


SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>


Now the database at the Production site has resumed as a Primary database, at SCN#4798237 and the database at the D.R. site has resumed as a Standby database at SCN#4796590  (lower than the Primary).

If you noticed the second entry in v$restore_point at the D.R. site -- Restore Point name "PRODUCTION_BEFORE_SWITCH_PRIMARY" -- this is a 19c enhancement where a Restore Point created on the Primary automatically propagates to the Standby, with the suffix "_PRIMARY"  (to indicate that it came from a database in PRIMARY role) attached to the Restore Point name.

Can I really, really be sure that I have reverted both databases to their intended roles?

I can verify this again:

{at the Production site}
SQL> connect hemant/hemant@orclpdb1
Connected.
SQL> insert into my_transactions values (1001,'After DR Testing, back to normal life');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
1001 After DR Testing, back to normal life

SQL>



{at the D.R site}
SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> alter pluggable database orclpdb1 open;

Pluggable database altered.

SQL> connect hemant/hemant@stdbypdb1
Connected.
SQL> select * from my_transactions order by 1;

TXN_ID TXN_DATA
---------- --------------------------------------------------
1 First at ProductionDC:Primary
1001 After DR Testing, back to normal life

SQL>
SQL> connect / as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select controlfile_type, open_resetlogs, database_role, flashback_on, standby_became_primary_scn, current_scn
2 from v$database
3 /

CONTROL OPEN_RESETL DATABASE_ROLE FLASHBACK_ON
------- ----------- ---------------- ------------------
STANDBY_BECAME_PRIMARY_SCN CURRENT_SCN
-------------------------- -----------
STANDBY REQUIRED PHYSICAL STANDBY RESTORE POINT ONLY
0 4802358


SQL>


To verify the behaviour, I added a new row (TXN_ID=1001) in the Primary database at the Production site and then did an OPEN READ ONLY of the Standby database at the D.R. site to check the table.
Note: so as not to require an Active Data Guard licence, I stopped Recovery on the Standby before I did the OPEN READ ONLY.
Of course, after the verification, I resumed the Standby database in Recovery mode.

This whole exercise also did NOT need the databases to be "permanently" in FLASHBACK ON mode. I used the Guaranteed Restore Point feature, with a configured db_recovery_file_dest (Fast Recovery Area), to generate only the minimal flashback logs needed. At the end of the exercise, I can DROP the Restore Points.
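
If you are curious how much space those flashback logs actually consumed while the Guaranteed Restore Points were in place, a query along these lines (not part of the original demo) against the Fast Recovery Area usage view can be run on either database:

select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from   v$recovery_area_usage
where  file_type = 'FLASHBACK LOG';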

{at the Production site}
oracle19c>sqlplus

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 27 00:37:47 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> drop restore point PRODUCTION_BEFORE_SWITCH;

Restore point dropped.

SQL> alter database open;

Database altered.

SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

no rows selected

SQL>


{at the D.R. site}
STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Mar 27 00:40:47 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
DR_BEFORE_SWITCH
PRODUCTION_BEFORE_SWITCH_PRIMARY

SQL>
SQL> drop restore point PRODUCTION_BEFORE_SWITCH_PRIMARY;

Restore point dropped.

SQL> drop restore point DR_BEFORE_SWITCH;

Restore point dropped.

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL>
SQL> select name, restore_point_time, database_incarnation#, scn, guarantee_flashback_database
2 from v$restore_point
3 /

no rows selected

SQL>


The only "catch" is that I had to bring up the Production site (Primary) database in MOUNT mode before I could drop the Restore Point. So, you need to factor this into your D.R. testing.


Categories: DBA Blogs

Sending PostgreSQL data to AWS S3 by using AWS Kinesis data streams

Yann Neuhaus - Thu, 2020-03-26 09:22

Before we really start with this post: this is just an experiment and you should not implement it like this in real life. The goal of this post is just to show what is possible; I am not saying that you should do it (the way it is implemented here will be catastrophic for your database performance and it is not really secure). As I am currently exploring a lot of AWS services, I wanted to check if there is an easy way to send data from PostgreSQL into an AWS Kinesis data stream for testing purposes, and it turned out that this is actually quite easy if you have the AWS Command Line Interface installed and configured on the database server.

Creating a new Kinesis stream in AWS is actually a matter of a few clicks (of course you can do that with the command line utilities as well):

What I want is a simple data stream that I can put data into:

Obviously the new stream needs a name, and as I will not do any performance or stress testing, one shard is absolutely fine:

That’s all that needs to be done, the new stream is ready:

An AWS Kinesis stream is not persistent by default. That means that if you want to permanently store the output of a stream, you need to connect the stream to a consumer that processes, optionally transforms, and finally stores the data somewhere. For this you can use AWS Kinesis Firehose, and this is what I’ll be doing here:

As I want to use AWS S3 as the target for my data I need to use a delivery stream:

The delivery stream needs a name as well and we will use the stream just created above as the source:


We could go ahead and transform the data with an AWS Lambda function but we’re going to keep it simple for now and skip this option:


The next screen is about the target for the data. This could be AWS Redshift, AWS Elasticsearch, Splunk or AWS S3, which is what we’ll be using here:

Finally specifying the target S3 bucket:

The settings for buffering are not really important for this test but will matter for real systems, as these settings determine how fast your data is delivered to S3 (we also do not care about encryption and compression for now):

Error logging should of course be enabled and we need an IAM role with appropriate permissions:

Final review:


… and the stream and delivery stream are ready to use:

That’s it for the setup on the AWS side and we can continue with configuring PostgreSQL to call the AWS command line utility to write data to the stream. Calling system commands from inside PostgreSQL can be done in various ways; we’ll be using PL/Perl for that, and the untrusted version at that, so only superusers will be able to do this:

postgres=# create extension plperlu;
CREATE EXTENSION
postgres=# \dx
                      List of installed extensions
  Name   | Version |   Schema   |              Description               
---------+---------+------------+----------------------------------------
 plperlu | 1.0     | pg_catalog | PL/PerlU untrusted procedural language
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(2 rows)

Next we need a table that will contain the data we want to send to the stream:

postgres=# create table stream_data ( id serial primary key
                                    , stream text );

A trigger will fire each time a row is inserted or updated, and the trigger function will call the AWS command line interface:

create or replace function f_send_to_kinesis()
returns trigger
language plperlu
AS $$
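     # shell out to the AWS CLI for every affected row; the new value of the
     # "stream" column becomes the record payload (passed unquoted, so this is
     # only suitable for the simple test strings used here)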
     system('aws kinesis put-record --stream-name postgres-to-kinesis --partition-key 1 --data '.$_TD->{new}{stream});
     return;
$$;

create trigger tr_test
after insert or update
on stream_data
for each row
execute procedure f_send_to_kinesis();

This is all we need. Let’s insert a row into the table and check if it arrives in AWS S3 (remember that it will take up to 300 seconds or 5MB of data):

postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# insert into stream_data (stream) values ('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa');
INSERT 0 1
postgres=# select * from stream_data;
 id |              stream              
----+----------------------------------
  1 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  2 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  3 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  4 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  5 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  6 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  7 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  8 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  9 | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
(9 rows)

You will also notice that the insert takes quite some time because calling the AWS command line utility and waiting for the result takes ages compared to a normal insert.

While waiting for the data to arrive you can check the monitoring section of both the stream and the delivery stream:

After a while the data appears in S3 and it is organized in [YEAR]/[MONTH]/[DAY]/[HOUR]:




Looking at the file itself, all our data is there:

So, it is actually quite easy to send data to an AWS Kinesis stream. If you really need to send data out of PostgreSQL, I would probably go for LISTEN/NOTIFY so that the calls to the AWS command line utility do not block the inserts or updates to the table that holds the data for the stream. Anyway, currently I am not aware of a good use case for sending streams of data out of PostgreSQL directly to AWS Kinesis. Maybe you are doing something like this; if so, how?
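
For reference, here is a minimal sketch (not from the original post; function, trigger and channel names are made up) of what that LISTEN/NOTIFY variant could look like: the trigger only queues a notification, and a separate worker session that has issued LISTEN on the channel picks up the payload after commit and calls the AWS command line utility, so the insert itself no longer waits for the CLI:

create or replace function f_notify_kinesis()
returns trigger
language plpgsql
as $$
begin
  -- pg_notify() only queues the payload; it is delivered to listeners on commit,
  -- so the insert or update does not wait for the AWS CLI call
  perform pg_notify('kinesis_stream', NEW.stream);
  return NEW;
end;
$$;

create trigger tr_notify_kinesis
after insert or update
on stream_data
for each row
execute procedure f_notify_kinesis();

-- the external worker would simply run LISTEN kinesis_stream;
-- and issue "aws kinesis put-record ..." for every notification it receives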

The article Sending PostgreSQL data to AWS S3 by using AWS Kinesis data streams appeared first on Blog dbi services.
