Feed aggregator

Rittman Mead at Kscope 2018

Rittman Mead Consulting - Thu, 2018-05-31 02:20

Kscope 2018 is just a week away! A magnificent location (Walt Disney World Swan and Dolphin Resort) for one of the best tech conferences of the year! The agenda is impressive (look here), spanning over ten different tracks from the traditional EPM, BI Analytics and Data Visualization to the newly added Blockchain! Plenty of great content and networking opportunities!

I'll be representing Rittman Mead with two talks. The first, Visualizing Streams (Wednesday at 10:15, Northern Hemisphere A2, Fifth Level), is about how to build a modern analytical platform including Apache Kafka, Confluent's KSQL, Apache Drill and Oracle's Data Visualization (Cloud or Desktop).


During the second talk, titled DevOps and OBIEE: Do it Before it's Too Late! (Monday at 10:45, Northern Hemisphere A1, Fifth Level), I'll be sharing details, based on our experience, of how OBIEE can be fully included in a DevOps framework, what the cost of "avoiding" DevOps and automation in general is, and how Rittman Mead's toolkits, partially described here, can be used to accelerate the adoption of DevOps practices in any situation.


If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.

Categories: BI & Warehousing

OpenShift on my Windows 10 laptop with MiniShift

Yann Neuhaus - Thu, 2018-05-31 01:23

If you want to play with OpenShift on your laptop, you can, in a virtual machine. I have VirtualBox installed on my laptop, so I'll install Minishift here, which will create the VM to run OpenShift with only a few simple commands. On Linux you can refer to Daniel's post; here is the Windows version. Oh, and Daniel did that to run Postgres, but my goal is to run an Oracle container of course. Or maybe MySQL.

I’ve downloaded minishift-1.18.0-windows-amd64.zip and unzipped it in D:\Downloads\minishift-1.18.0-windows-amd64 where I have minishift.exe

Minishift

I configure it to use VirtualBox:

minishift config set vm-driver virtualbox

It is installed in my Windows profile:

C:\Users\fpa\.minishift

Be careful: minishift does not know that we have multiple drives in Windows. When I was running minishift.exe from the D: drive, it was not able to find the virtual machine's files that were on C:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Downloading OpenShift binary 'oc' version 'v3.9.0'
40.81 MiB / 40.81 MiB [===================================================================================] 100.00% 0s-- Downloading OpenShift v3.9.0 checksums ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM ..... FAIL E0529 17:08:31.056327 1448 start.go:391] Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.. Retrying.
Error starting the VM: Error creating the VM. Error creating machine: Error in driver during machine creation: open /Users/fpa/.minishift/cache/iso/b2d/v1.3.0/minishift-b2d.iso: The system cannot find the path specified.

Then, I changed to the C: drive

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>C:
 
C:\Users\fpa>dir \Users\fpa\.minishift\cache\iso\b2d\v1.3.0\
Volume in drive C is OS
Volume Serial Number is 26AE-33F7
 
Directory of C:\Users\fpa\.minishift\cache\iso\b2d\v1.3.0
 
29-May-18 15:22 .
29-May-18 15:22 ..
29-May-18 15:22 41,943,040 minishift-b2d.iso
1 File(s) 41,943,040 bytes
2 Dir(s) 30,652,370,944 bytes free

And I run minishift from there:

C:\Users\fpa>D:minishift start
-- Starting profile 'minishift'
-- Checking if https://github.com is reachable ... OK
-- Checking if requested OpenShift version 'v3.9.0' is valid ... OK
-- Checking if requested OpenShift version 'v3.9.0' is supported ... OK
-- Checking if requested hypervisor 'virtualbox' is supported on this platform ... OK
-- Checking if VirtualBox is installed ... OK
-- Checking the ISO URL ... OK
-- Checking if provided oc flags are supported ... OK
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
Memory: 2 GB
vCPUs : 2
Disk size: 20 GB
-- Starting Minishift VM .............................. OK
-- Checking for IP address ... OK
-- Checking for nameservers ... OK
-- Checking if external host is reachable from the Minishift VM ...
Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% used OK
Importing 'openshift/origin:v3.9.0' . CACHE MISS
Importing 'openshift/origin-docker-registry:v3.9.0' . CACHE MISS
Importing 'openshift/origin-haproxy-router:v3.9.0' . CACHE MISS
-- OpenShift cluster will be configured with ...
Version: v3.9.0
-- Copying oc binary from the OpenShift container image to VM ...................................... OK
-- Starting OpenShift cluster ........................
Using Docker shared volumes for OpenShift volumes
Using public hostname IP 192.168.99.102 as the host IP
Using 192.168.99.102 as the server IP
Starting OpenShift using openshift/origin:v3.9.0 ...
OpenShift server started.
 
The server is accessible via web console at:
  https://192.168.99.102:8443
 
You are logged in as:
User: developer
Password:
 
To login as administrator:
oc login -u system:admin

Docker

That's all. I have a new VM in VirtualBox with its main files in C:/Users/fpa/.minishift

C:/Users/fpa/.minishift/machines/minishift/boot2docker.iso
C:/Users/fpa/.minishift/machines/minishift/disk.vmdk

The VM boots on the Boot2Docker ISO, which is the way to run Docker on Windows without enabling Hyper-V. The first network interface is NAT, for internet access. The second one gets a DHCP IP from 192.168.99.1.

You can control the VM with minishift (start, stop, configure, ssh,…):

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift ssh
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.12.6, build HEAD : 5ab2289 - Wed Jan 11 03:20:40 UTC 2017
Docker version 1.12.6, build 78d1802

We have everything running in containers here:

docker@minishift:/mnt/sda1$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
77c0ef5a80d7 50c7bffa0653 "/usr/bin/openshift-r" 8 minutes ago Up 8 minutes k8s_router_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
10fb6a2a6b70 9b472363b07a "/bin/sh -c '/usr/bin" 8 minutes ago Up 8 minutes k8s_registry_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
2f6b90fb0bb4 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_router-1-tsmw7_default_7a1be0e2-635b-11e8-843d-f2c6a11ee2db_1
3c720d166989 fae77002371b "/usr/bin/origin-web-" 8 minutes ago Up 8 minutes k8s_webconsole_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
bb5870fc0b7e openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_docker-registry-1-zfxm5_default_7865ac33-635b-11e8-843d-f2c6a11ee2db_1
95663bf29487 openshift/origin-pod:v3.9.0 "/usr/bin/pod" 8 minutes ago Up 8 minutes k8s_POD_webconsole-7dfbffd44d-48b9h_openshift-web-console_62b66c66-635b-11e8-843d-f2c6a11ee2db_1
5fc022dbe112 openshift/origin:v3.9.0 "/usr/bin/openshift s" 8 minutes ago Up 8 minutes origin

But we should not have to connect to this machine.

The minishift executable can be used to control everything. As I have the Docker client installed on my laptop (the Docker Toolbox), I can get the environment variables:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>d:minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.102:2376
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

and see, from Windows, the docker images that are in the VM:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>where docker
C:\Program Files\Docker Toolbox\docker.exe
 
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.102:2376
SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
 
D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
openshift/origin-web-console v3.9.0 fae77002371b 12 days ago 495.1 MB
openshift/origin-docker-registry v3.9.0 9b472363b07a 12 days ago 464.9 MB
openshift/origin-haproxy-router v3.9.0 50c7bffa0653 12 days ago 1.283 GB
openshift/origin-deployer v3.9.0 e4de3cb64af9 12 days ago 1.261 GB
openshift/origin v3.9.0 83ec0170e887 12 days ago 1.261 GB
openshift/origin-pod v3.9.0 b6d2be1df9c0 12 days ago 220.1 MB

While I’m there, I can run whatever I want as a docker container. Let’s try with Oracle.

I need to log in to the Docker Store (where I have accepted the license conditions):

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: pachot
Password:
Login Succeeded

Let’s pull the Oracle ‘slim’ image:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker pull store/oracle/database-enterprise:12.2.0.1-slim
12.2.0.1: Pulling from store/oracle/database-enterprise
4ce27fe12c04: Downloading [> ] 1.076 MB/83.31 MB
9d3556e8e792: Downloading [> ] 535.3 kB/151 MB
fc60a1a28025: Download complete
0c32e4ed872e: Download complete
b465d9b6e399: Waiting

And run it:

D:\Downloads\minishift-1.18.0-windows-amd64\minishift-1.18.0-windows-amd64>docker run -it --rm --name orcl store/oracle/database-enterprise:12.2.0.1-slim
Setup Oracle Database
Oracle Database 12.2.0.1 Setup
Wed May 30 20:16:56 UTC 2018
 
Check parameters ......
log file is : /home/oracle/setup/log/paramChk.log
paramChk.sh is done at 0 sec
 
untar DB bits ......
log file is : /home/oracle/setup/log/untarDB.log
untarDB.sh is done at 38 sec
...

You may find that funny, but the Oracle images in the Docker Store contain only a tarball of the Oracle Home and a pre-created database. It just takes the time to untar those and start the instance, and after 2 minutes I have my database ready. Everything is untarred to the volume, including the software.

Here is where the container's ports are redirected to:

C:\Users\fpa>SET DOCKER_TLS_VERIFY=1
C:\Users\fpa>SET DOCKER_HOST=tcp://192.168.99.102:2376
C:\Users\fpa>SET DOCKER_CERT_PATH=C:\Users\fpa\.minishift\certs
C:\Users\fpa>docker port orcl
1521/tcp -> 0.0.0.0:32771
5500/tcp -> 0.0.0.0:32770

Then it's easy to connect with SQL*Net using the credentials provided (see the setup instructions):

C:\Users\fpa>sqlcl sys/Oradoc_db1@//192.168.99.102:32771/orclpdb1.localdomain as sysdba
SQLcl: Release 18.1.1 Production on Wed May 30 22:41:10 2018
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> select instance_name,host_name from v$instance;
 
INSTANCE_NAME HOST_NAME
------------- ------------
ORCLCDB 254f96463883

OpenShift

So, that's an easy way to run Oracle on Docker when you have VirtualBox. One download and 5 commands and I'm ready to connect. But that's not the goal: we have OpenShift here to manage multiple Docker containers.

According to the 'minishift start' output, I have a web console on https://192.168.99.102:8443 (user: system, password: admin).

It already contains a lot of databases.

They are really easy to use: in two clicks I've run a MySQL container.

If you don't like the GUI, there's the command line interface of OpenShift within the minishift cache:

C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe login https://192.168.99.102:8443
Authentication required for https://192.168.99.102:8443 (openshift)
Username: system
Password:
Login successful.
 
You have one project on this server: "test"
 
Using project "test".
 
C:\Users\fpa>C:\Users\fpa\.minishift\cache\oc\v3.9.0\windows\oc.exe status
In project test on server https://192.168.99.102:8443
 
svc/mysql - 172.30.224.246:3306
dc/mysql deploys openshift/mysql:5.7
deployment #1 deployed 7 minutes ago - 1 pod

Now that I have OpenShift installed and running, I'll be able to run Oracle and manage containers from there. That's for the next post on this subject.

 

The article OpenShift on my Windows 10 laptop with MiniShift appeared first on Blog dbi services.

How to generate some big test tables and rapidly export their SQL data

Tom Kyte - Wed, 2018-05-30 10:26
Hello teams, Our Developer manager asks me to export our Oracle production database's three big tables as CSV or in this format "insert ... into ..." for giving to another App team (their database is MySQL 5.7) once again, I r...
Categories: DBA Blogs

Database Link between Oracle 10g and Oracle 12c

Tom Kyte - Wed, 2018-05-30 10:26
Dears, I am trying to create a dblink between Oracle 10g and Oracle 12c. The dblink is created successfully. My case is as below: when I select data from a table directly over the dblink it gives the result without problems, but calling a package or stored procedur...
Categories: DBA Blogs

expdp between two scn numbers

Tom Kyte - Wed, 2018-05-30 10:26
Hi Team, Can we run the expdp command between two SCN numbers? Thanks, Krishna
Categories: DBA Blogs

I just want to know the difference between the below two queries.

Tom Kyte - Wed, 2018-05-30 10:26
Q: List the Emps who are senior to their own MGRS. select * from emp e,emp m where e.mgr = m.empno and e.hiredate < m.hiredate; select * from emp e,emp m where e.empno= m.mgr and e.hiredate > m.hiredate; the above two queries are getting s...
Categories: DBA Blogs

Fortune 500 and the Art of Execution

Abhinav Agarwal - Wed, 2018-05-30 07:53
The Fortune 500 Companies 2018 rankings came out last week, and as I browsed the list, the following random thoughts struck me about it and the technology industry:

  • Walmart - you can be in a very, very traditional brick-and-mortar business (yes, they have been making inroads into e-commerce, but for the most part, Walmart is a traditional retailer), but as long as you keep doing things well, you can be in the top 10. Not only that, you can be the top-ranked company by revenues for a sixth year in a row. In this case, you can be numero uno, with annual revenues that top five-hundred billion dollars - $500 billion, be more than twice the size of the second-ranked company (Exxon Mobil is ranked second, with annual revenues of $244B), and also employ the second-most employees (2.3 million).
  • Apple - you can be a mass-market luxury brand (yes, that is a contradiction in terms), sell only a handful of products (its Mac, iPhone, and iPad product lines bring in 79% of its revenues) and be in the top 10 - ranked fourth. You also get to make the most profits of any company - $48 billion. You also get to be the most highly valued company - at $922 billion.
  • Amazon - you can sell almost everything under the sun, sell it almost all online (its foray into physical stores and its acquisition of Whole Foods notwithstanding), employ the most employees of any company in America, be a $100 billion plus company yet grow revenues by more than thirty percent (to $177 billion), and crack the top 10 - ranked eighth. You also get to be the second-most highly valued company on earth, at $765 billion.
  • Netflix - you can do only one thing (in this case, streaming video content on-demand and producing your own content), almost triple your profits (a 199% jump year-on-year), not be in the top 200, and yet deliver the best 10-year returns to shareholders (48%, annualized!).
  • The top five most valuable companies on the list are all technology companies - Apple, Amazon, Alphabet (the parent company of Google), Microsoft, and Facebook.
Bottom line? What is common across all these companies is a relentless focus on execution. Execution - a simple lesson to learn, yet incredibly difficult to practice. Flipkart, the Indian e-commerce giant in which Walmart bought a 77% stake for $16 billion (press release), valuing the company at $22 billion, learned that the hard way when it lost focus in its fight against Amazon.

This is an expanded version of my LinkedIn post.

© 2018, Abhinav Agarwal. All rights reserved.

Index Bouncy Scan 3

Jonathan Lewis - Wed, 2018-05-30 07:15

This is a follow-up to a problem I had with yesterday’s example of using recursive CTEs to “bounce” along a multi-column index to pick out the unique set of combinations of the first two columns. Part of the resulting query used a pair of aggregate scalar subqueries in a select list – and Andrew Sayer improved on my query by introducing a “cross apply” (which I simply hadn’t thought of) which the optimizer transformed into a lateral view (which I had thought of, but couldn’t get to work).

After seeing what Andrew and the optimizer had done I looked a little more closely at my lateral view experiment and modified it so that it worked. Here are the three critical versions of the relevant code fragment; first is my original code, then Andrew's cross apply, then my working lateral view version:

select
        (select min(t1.val1) val1 from t1 where t1.val1 > bounce1.val1) val1,
        (select min(t1.val2) val2 from t1 where t1.val1 > bounce1.val1 and rownum = 1) val2
from    bounce1
where   bounce1.val1 is not null
 
 
select
        ca.val1 ,ca.val2
from    bounce1
cross  apply (select val1, val2
              from  (select /*+ index(t1) no_index_ffs(t1) */
                             val1, val2
                     from    t1
                     where   t1.val1 > bounce1.val1
                     and     rownum = 1
                    )
             ) ca
where  bounce1.val1 is not null
 
----

select
        ca.val1 ,ca.val2
from    bounce1, 
        lateral(select val1, val2
              from  (select /*+ index(t1) no_index_ffs(t1) */
                             val1, val2
                     from    t1
                     where   t1.val1 > bounce1.val1
                     and     rownum = 1
                    )
             ) ca
where  bounce1.val1 is not null

All I’ve done to modify Andrew’s code is put a comma after the table (actually CTE) bounce1, then change “cross apply” to “lateral”. Compare the resulting text with the following lateral version that doesn’t work:


select
        ca.val1 ,ca.val2
from    bounce1, 
        lateral (
                   select /*+ index(t1) no_index_ffs(t1) */
                             val1, val2
                     from    t1
                     where   t1.val1 > bounce1.val1
                     and     rownum = 1
             ) ca
where  bounce1.val1 is not null

To get from not working to working all I’ve done is wrap the text in my lateral() subquery inside one more (apparently redundant) layer of “select * from ()”!

In fact my assumption that my code wasn’t working was incorrect – what was really going on was that the code I had written was producing the wrong results but I thought that I had made a mistake in the way I was writing it and couldn’t figure out what I had done wrong.

Problem Solving:

To get a better idea of what’s going on, I took a closer look at the execution plans. Here are the plans (main body only) for the two variants of using the lateral() view – the first from the SQL with the “redundant” select, the second as I originally wrote it. Notice that the number of rows (A-Rows) returned in the first case is the 30 expected while in the second case it’s only 10.


---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                 |      1 |        |   125 (100)|     30 |00:00:00.01 |      40 |     28 |       |       |          |
|   1 |  SORT ORDER BY                               |                 |      1 |      4 |   125   (3)|     30 |00:00:00.01 |      40 |     28 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |                 |      1 |      4 |   124   (2)|     30 |00:00:00.01 |      40 |     28 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |     33 |00:00:00.01 |      40 |     28 |  1024 |  1024 |          |
|*  4 |     VIEW                                     |                 |      1 |      2 |    61   (2)|      3 |00:00:00.01 |       8 |      4 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |      3 |00:00:00.01 |       8 |      4 |  1024 |  1024 |          |
|*  6 |       VIEW                                   |                 |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |      1 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |                 |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 |      1 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK           |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |      1 |       |       |          |
|   9 |       NESTED LOOPS                           |                 |      3 |      1 |    31   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|  10 |        RECURSIVE WITH PUMP                   |                 |      3 |        |            |      3 |00:00:00.01 |       0 |      0 |       |       |          |
|  11 |        VIEW                                  | VW_LAT_1BBF5C63 |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|  12 |         VIEW                                 |                 |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|* 13 |          COUNT STOPKEY                       |                 |      3 |        |            |      2 |00:00:00.01 |       6 |      3 |       |       |          |
|* 14 |           INDEX RANGE SCAN                   | T1_PK           |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       6 |      3 |       |       |          |
|  15 |     SORT AGGREGATE                           |                 |     30 |      1 |            |     30 |00:00:00.01 |      32 |     24 |       |       |          |
|  16 |      FIRST ROW                               |                 |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|* 17 |       INDEX RANGE SCAN (MIN/MAX)             | T1_PK           |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|  18 |     RECURSIVE WITH PUMP                      |                 |     11 |        |            |     30 |00:00:00.01 |       0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------


------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                 |      1 |        |   125 (100)|     10 |00:00:00.01 |      16 |       |       |          |
|   1 |  SORT ORDER BY                               |                 |      1 |      4 |   125   (3)|     10 |00:00:00.01 |      16 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |                 |      1 |      4 |   124   (2)|     10 |00:00:00.01 |      16 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |     11 |00:00:00.01 |      16 |  1024 |  1024 |          |
|*  4 |     VIEW                                     |                 |      1 |      2 |    61   (2)|      1 |00:00:00.01 |       4 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |      1 |00:00:00.01 |       4 |  1024 |  1024 |          |
|*  6 |       VIEW                                   |                 |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |                 |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK           |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |       |       |          |
|   9 |       NESTED LOOPS                           |                 |      1 |      1 |    31   (0)|      0 |00:00:00.01 |       2 |       |       |          |
|  10 |        RECURSIVE WITH PUMP                   |                 |      1 |        |            |      1 |00:00:00.01 |       0 |       |       |          |
|* 11 |        VIEW                                  | VW_DCL_1BBF5C63 |      1 |      1 |     2   (0)|      0 |00:00:00.01 |       2 |       |       |          |
|* 12 |         COUNT STOPKEY                        |                 |      1 |        |            |      1 |00:00:00.01 |       2 |       |       |          |
|  13 |          INDEX FULL SCAN                     | T1_PK           |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|  14 |     SORT AGGREGATE                           |                 |     10 |      1 |            |     10 |00:00:00.01 |      12 |       |       |          |
|  15 |      FIRST ROW                               |                 |     10 |      1 |     2   (0)|      9 |00:00:00.01 |      12 |       |       |          |
|* 16 |       INDEX RANGE SCAN (MIN/MAX)             | T1_PK           |     10 |      1 |     2   (0)|      9 |00:00:00.01 |      12 |       |       |          |
|  17 |     RECURSIVE WITH PUMP                      |                 |     11 |        |            |     10 |00:00:00.01 |       0 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------------------------------------------

Most importantly we can see that the optimizer has used two different transformations. For the working query we see the view name VW_LAT_xxxxxxxx at operation 11, this is Oracle implementing a lateral view; for the problem query we see the view name VW_DCL_xxxxxxxx at operation 11, which is Oracle implementing a transformation to a “decorrelated lateral view”.

My first test after noting this difference was to see what would happen if I added the hint /*+ no_query_transformation */ to the query: it resulted in the VW_DCL_xxxxxxxx view name changing to VW_LAT_xxxxxxxx and the query producing the right result. Andrew Sayer, on the ODC thread, then pointed out that he'd done a couple more experiments and used the /*+ no_decorrelate() */ hint, so I tried that with my query, adding it (with no parameters) to the subquery inside the lateral() clause - again the plan changed from using VW_DCL to VW_LAT and the results were correct.
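
Based on that description, the hint placement was roughly as follows - a sketch reconstructed from the narrative above, not the exact text that was run:

select
        ca.val1, ca.val2
from    bounce1,
        lateral (
                select  /*+ no_decorrelate index(t1) no_index_ffs(t1) */
                        val1, val2
                from    t1
                where   t1.val1 > bounce1.val1
                and     rownum = 1
        ) ca
where   bounce1.val1 is not null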

Test Case

Bottom line on this – it looks like the optimizer is decorrelating a subquery when it shouldn’t, leading to wrong results. To make it easier to see this anomaly I stripped the original sample down to a basic test case starting with the table that I used in the previous posting:

rem
rem     Script:         decorralate.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem
rem     Last tested 
rem             18.1.0.0  -- via liveSQL
rem             12.2.0.1
rem             12.1.0.2
rem

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum-1,3)                 val1,
        mod(rownum-1,10)                val2,
        lpad('x',100,'x')               padding
from
        generator       v1
order by
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(val1, val2, id);

Now two versions of a simplified piece of code that should select the distinct values of val1 greater than the lowest value (each row in the UNION ALL of dual is emulating the way in which yesterday’s recursive CTE was effectively saying “this is a current known value, find the next higher”):


prompt  =============
prompt  Right results
prompt  =============

select
        v1.val1, v1.val2
from    (
        select  0 val1, 0 val2 from dual
        union all
        select 1,0 from dual
        union all
        select 2,0 from dual
        ) bounce1,
        lateral (
            select val1, val2 from (
              select  /*+ index(t1) no_index_ffs(t1) */
                      t1.val1, t1.val2
              from    t1
              where   t1.val1 > bounce1.val1
              and     rownum = 1
            )
        ) v1
;

prompt  ===========================================
prompt  Wrong results -- "redundant" select removed
prompt  ===========================================

select
        v1.val1, v1.val2
from    (
        select  0 val1, 0 val2 from dual
        union all
        select 1,0 from dual
        union all
        select 2,0 from dual
        ) bounce1,
        lateral (
            -- select val1, val2 from (
              select  /*+ index(t1) no_index_ffs(t1) */
                      t1.val1, t1.val2
              from    t1
              where   t1.val1 > bounce1.val1
              and     rownum = 1
            -- )
        ) v1
;

Here’s a cut-n-paste from running the two queries:


=============
Right results
=============

      VAL1       VAL2
---------- ----------
         1          0
         2          0

2 rows selected.

============================================
Wrong results  -- "redundant" select removed
============================================

no rows selected

Finally, to get an idea of what’s gone wrong – and to show that the optimizer has done something wrong when attempting to decorrelate – we can take a look at the optimizer trace file to see the final transformed SQL that the optimizer has produced a plan for. (I enabled the trace with the command “alter session set events ‘trace [rdbms.SQL_Transform.*]’;” to limit the trace to just the information about optimizer transformations.) This – cosmetically altered – is the final “unparsed” query:

select 
        vw_dcl_a18161ff.val1 val1,
        vw_dcl_a18161ff.val2 val2 
from    ( 
                (select 0 val1 from sys.dual dual) 
                union all  
                (select 1 1 from sys.dual dual) 
                union all  
                (select 2 2 from sys.dual dual)
        ) bounce1, 
        (
        select
                 /*+ no_index_ffs (t1) index (t1) */ 
                t1.val1 val1_0,
                t1.val2 val2_1 
        from
                test_user.t1 t1
        where 
                rownum = 1
        ) vw_dcl_a18161ff 
where 
        vw_dcl_a18161ff.val1 > bounce1.val1

As you can see, the lateral view has turned into a non-mergeable inline view which selects the first row available from t1 by following the supplied hints, and joins that single row result set to bounce1. I have a suspicion that lateral views which include rownum predicates should not be decorrelated. I have looked on MoS to see if I can find any bugs related to decorrelating lateral views, but either there are none or my search terms weren’t good enough.

 

utlrp weird behavior with INVALID objects in an Oracle EBS database

Syed Jaffar - Wed, 2018-05-30 07:05
In one of my recent database migration & upgrade activities, I migrated an Oracle EBS database from 11.2.0.4 to a new DB host and upgraded it to 12.1.0.2. After the migration and post-upgrade steps, utlrp.sql was run to validate the 100k invalid objects in the database.

Weirdly, during the utlrp execution, when the INVALID object count got down to about 200, it started to increase again, ultimately reaching the 100k number again. utlrp was stuck in a kind of loop and never exited.

As a workaround, I manually compiled all invalid objects and then ran utlrp, which completed successfully. I have yet to work out what caused the situation.

Stay tuned for more updates on this post.

Traditional Export/Import Fail With ORA-01031 in DB Vault Environment Starting Release 11.2.0.3

Syed Jaffar - Wed, 2018-05-30 06:27
Oracle Database Vault was configured on a RAC database (v11.2.0.3) and the appropriate permissions were given to allow the regular duties, such as Data Pump export/import etc.

However, when a classical/traditional import (of a dump file from a v9 database) was performed, the import on the 11.2.0.3 database failed with the following errors:


Import: Release 11.2.0.3.0 - Production on Tue May 29 16:56:23 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.


IMP-00058: ORACLE error 1031 encountered
ORA-01031: insufficient privileges
Username:

Data Pump jobs were executing successfully without any issues; the only problem is with traditional export/import. According to the Oracle notes below, this is expected behavior in 11.2.0.3 with DB Vault in place.


Classic Export/Import Fail With ORA-01031 in Database Vault Environment Starting Release 11.2.0.3 (Doc ID 1419687.1)
Bug 13615338 - EXP/IMP disabled in Database Vault environment (Doc ID 13615338.8)

According to the notes,

Export and Import tools are no longer recommended to be used, especially in Database Vault environments.
When it is not possible to use data pump (for instance when the export has been created from a pre-10g database),
the patch 13615338 must be installed to allow the usage of classic export/import utilities.



The other option would be to upgrade the database to a release higher than 11.2.0.3.


I will apply the patch and update this blog on whether the issue is fixed.

I’m Charlie Sinks and this is how I work

Duncan Davies - Wed, 2018-05-30 06:00

This week we're featuring another of the mainstays from the PSAdmin.io slack community - Charlie Sinks. He is a regular in the community and a frequent guest on their podcast, giving UMRUG recaps and discussing automation. He also presented at the inaugural PSAdmin.conf on Implementing Rundeck with PeopleSoft (the recording is available here and is well worth an hour of your time). I'm thrilled that he has added his profile to our 'How I Work' series.


Name: Charlie Sinks

Occupation: Sr. Software Engineer and PeopleSoft Admin at Andersen Windows
Location: Oak Park Heights, MN
Current computer:
At work: Dell Latitude E7470 laptop running Windows 10.
At home: a PC that I built from bits and pieces currently running Windows 7 and Ubuntu Linux
Current mobile devices: Motorola Moto G4 Plus and a cheap Timex wristwatch
I work: Fervently

What apps/software/tools can’t you live without?

  • PeopleSoft Deployment Packages (DPKs) and Puppet – I can't even imagine deploying PeopleSoft environments and applying maintenance the old way anymore. It's easy to get hung up on the things we wish the DPK did better, but it really opened the door to automation in the PS Admin world for those of us who weren't already using Puppet, Ansible, etc.
  • Rundeck – the ability to execute tasks – from simple commands to complex orchestration – on any number of remote servers has helped us go a long way towards automation. For example: single-click CPU patching on all of our non-Production servers at the same time. Even for simple service restarts, it’s so much easier to do it from a dashboard in Rundeck.
  • BeyondCompare – It’s crazy how many file formats this thing can compare. Best compare utility I’ve come across.
  • Password Safe – Way better than remembering a hundred passwords, and way WAY better than keeping them all in a spreadsheet. Also, using a password manager encourages me to use very long, complex passwords since I’ll never have to remember or retype them. Password Safe is great at storing passwords for all kinds of accounts, but doesn’t have good web browser integration for website accounts. So for that I use…
  • Dashlane – password manager with great browser integration for website accounts. Not as great for any other type of account. For that, see above.
  • Notepad++ / Visual Studio Code – For years I swore by Notepad++ because of its great plugins, macros, and language support. I still think it’s great, but I just discovered Visual Studio Code. That has everything I like about NP++ plus more. Much more streamlined extension and language support, themes, and its integration with Git is phenomenal. And it lets me turn on Vim keymapping with a single keystroke! I have a new favorite now.
  • MobaXterm – a nice upgrade from using a terminal like PuTTY for administering Linux servers. I haven’t begun to use all of the features this tool has, but I was quick to discover that it comes with a suite of simple games. I’m not really sure why… but it’s best not to question some things.

Besides your phone and computer, what gadget can’t you live without?
My coffee-maker. I use an Aeropress at home, and a French press at work. I always have a coffee mug in my hand. I may have a problem.

What’s your workspace like?
If I’m being honest, a bit messy with Post-It notes all over the place and the occasional coffee-mug ring décor. I write Post-It notes to remind myself of all sorts of things and try to put them in obnoxious places where I can’t ignore them. Yet I still manage to ignore most of them… I guess I’m so good at tuning things out that I can’t even trick myself.


What do you listen to while you work?
Blues, jazz, electronica, old-school hip-hop, skiffle – it really depends on my mood and what type of work I’m doing. A lot of Tom Waits lately.

What PeopleSoft-related productivity apps do you use?
Change Assistant (for our internal releases in addition to the rest of it), DPKs, and puppet. I also like the Chrome extensions PS Utilities and PSChrome.

Do you have a 2-line tip that some others might not know?
When you’re developing or testing, instead of clearing your cache hit Ctrl+Shift+N in Chrome to open Incognito mode.

That was one line, can I do another one? If you’re newly upgrading to PeopleSoft 9.2, take some time to build out some navigation collections and homepage tiles. It’s really easy, will make your life easier, and will impress your end-users. It’ll also limit how often you have to use the new Nav menu… ick.

What SQL/Code do you find yourself writing most often?
Effective Date joins. Amiright?

What would be the one item you’d add to PeopleSoft if you could?
Open-source development! At least for the DPKs. Come on, Oracle. Let us fix the things we complain about so we can stop bothering you about it.

What everyday thing are you better at than anyone else?
No matter how good you are at something, there’s always somebody who’s better at it… so this is hard to answer. I do have a really high tolerance for eating extremely spicy food.

How do you keep yourself healthy and happy?
Spending time with my wife and three kids, reading, and getting out in nature.

What’s the best advice you’ve ever received?
Treat others as you would like to be treated, and always consider the other person’s point of view. Along those lines is my favorite Jack Handey quote: “Before you criticize someone, walk a mile in their shoes. That way, you’ll be a mile from them, and you’ll have their shoes.”

Safely Upgrading to Oracle APEX 18.1

Dimitri Gielis - Wed, 2018-05-30 05:37
Oracle Application Express (APEX) 18.1 has been out now for a couple of days.

I typically don't wait long before doing the upgrade, as every new release brings many new features I want to use. Also, if you want to stay on top of the game, you just want to move as fast as you can. I typically start testing the Early Adopter releases, and then when apex.oracle.com gets updated I do more testing, but having it on your own system, with applications that are used day-in day-out, is a different level.

So I thought I'd share how we update our environment in a safe way.

The first thing we do is put our maintenance pages on. We use an Apache reverse proxy in front of Apache Tomcat with ORDS, which is connected to the database. By specifying some ErrorDocuments, the maintenance pages are served the moment there's an error.

For example, you can add this to your httpd.conf:

ErrorDocument 404 https://s3.amazonaws.com/apexRnD/website/maintenance.html
ErrorDocument 500 https://s3.amazonaws.com/apexRnD/website/maintenance.html
ErrorDocument 503 https://s3.amazonaws.com/apexRnD/website/maintenance.html


When you update APEX you don't want any incoming connections, so we stop Apache Tomcat with ORDS. At that moment the reverse proxy gets an error, the ErrorDocument kicks in, and the maintenance page is served. This way, if people want to use the system, they know we are working on it.
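
Stopping Tomcat (and with it ORDS) is a single script call - a minimal sketch, assuming a standard Tomcat layout (the directory name is an illustration):

# stop Tomcat, and therefore ORDS; adjust the path to your install directory
cd /apache-tomcat-version/bin
./shutdown.sh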

We use an Oracle Database 12c container database with pluggable databases. We want to run different versions of APEX next to each other because we have to test APEX Office Print against all APEX releases. Our customers use different releases of Oracle APEX too, and when we do custom development we have to stick to their version, so we really need all supported APEX versions somewhere.

Our setup was like this before the APEX 18.1 upgrade:
- CDB: cdb
- PDB with APEX 4.2: apex42_pdb
- PDB with APEX 5.0: apex50_pdb
- PDB with APEX 5.1 (main - our most used one): apex_pdb

With every new major release of APEX we clone our main PDB and name the clone after the APEX release it contains, so we keep a copy of the release we were on.

The steps to clone a pluggable database in Oracle DB 12.1 (SQL*Plus or SQLcl):

alter pluggable database apex_pdb close immediate; 
alter pluggable database apex_pdb open read only; 
create pluggable database APEX51_PDB from APEX_PDB file_name_convert=('/u01/app/oracle/oradata/cdb/APEX_PDB/','/u01/app/oracle/oradata/cdb/APEX51_PDB/') PATH_PREFIX='/u01/app/oracle/oradata/cdb/APEX51_PDB'; 
alter pluggable database apex51_pdb open; 
alter pluggable database apex_pdb close immediate; 
alter pluggable database apex_pdb open;


After the above we have a situation like this:
- CDB: cdb
- PDB with APEX 4.2: apex42_pdb
- PDB with APEX 5.0: apex50_pdb
- PDB with APEX 5.1: apex51_pdb
- PDB with APEX 5.1: apex_pdb  - will be upgraded to APEX 18.1 (main - our most used one)

Note: if you use Transparent Data Encryption (TDE) you have to perform some additional steps.

The installation of APEX 18.1 on the database side is basically 5 steps:
1) download the software from OTN
2) unzip in /tmp folder and cd into the /tmp/apex directory
3) run SQLcl or SQL*Plus as sys as sysdba and connect to the apex_pdb container:
alter session set container=APEX_PDB;
4) run the apexins command
@apexins SYSAUX SYSAUX TEMP /i/

In my environment the script took about 23 minutes to complete.


Note: the APEX 18.1 scripts run in 3 phases and the wizard shows information and timings for each phase, and at the end a global timing for the whole run. If you want less downtime you can run the phases separately - see the doc Maximizing Uptime During an Application Express Upgrade.

5) run the apex_rest_config command
@apex_rest_config.sql

The pluggable database is ready now and contains APEX 18.1.

During the APEX upgrade, as we already have downtime, we typically use that time to upgrade the other components in a typical Oracle APEX stack, namely the web server (e.g. Apache Tomcat) and ORDS (Oracle REST Data Services). Another advantage of going with a new version of your middleware is that your working Apache Tomcat and ORDS stay untouched, so in case you have to roll back there's nothing to do. Note that you can prepare most of the following commands beforehand.


Upgrading the Application (web) Server:

Download Apache Tomcat and unzip it in your folder of choice.
That is basically all you have to do (on Linux) :)


Upgrading ORDS (Oracle REST Data Services):

Download ORDS, unzip it in your folder of choice and cd into it.
Run: java -jar ords.war install advanced
and follow the wizard to install ORDS in APEX_PDB
* make sure you use different config dirs for ORDS in order to run multiple versions of ORDS and APEX


Once done, copy the ords.war into /apache-tomcat-version/webapps
Next copy the images folder of the apex directory to /apache-tomcat-version/webapps:
cp -R /tmp/apex/images /apache-tomcat-version/webapps/i

Start Apache Tomcat:
cd bin 
./startup.sh

Restart your Apache Reverse Proxy (and optionally take out the ErrorDocuments)
/sbin/service httpd graceful

It sometimes happens to me that APEX isn't working the first time I run it.
Then I debug the connection and check the logs of the web server.

Another thing that often helps is running ORDS in standalone mode, as it will give me clear messages, e.g.:

WARNING: *** jdbc.MaxLimit in configuration |apex|| is using a value of 10, this setting may not be sized adequately for a production environment ***
WARNING: *** jdbc.InitialLimit in configuration |apex|| is using a value of 3, this setting may not be sized adequately for a production environment ***
WARNING: The pool named: |apex|al| is invalid and will be ignored: The username or password for the connection pool named apex_al, are invalid, expired, or the account is locked
WARNING: The pool named: |apex|rt| is invalid and will be ignored: The username or password for the connection pool named apex_rt, are invalid, expired, or the account is locked

The above warnings remind me to change some parameters of ORDS. Or I could look up my previous configuration and copy those parameters. The warnings also indicate our APEX_LISTENER user can't connect (apex_al), so we need to fix that by specifying the correct password. For example, for apex_rt I forgot which user it was, but it's easy to find by navigating to the ords config folder and viewing the apex_rt file; the user is named in the file.

Now we should have APEX 18.1 up-and-running :)

We also want to access the previous versions of APEX, so I copy the older ords.war files to the new web server, naming them ords51.war, ords50.war, etc., so the URLs for the different APEX versions become https://www.apexrnd.be/ords50/ or https://www.apexrnd.be/ords51/.
https://www.apexrnd.be/ords/ is always the latest version of APEX.
The images folder of the older APEX version (5.1) we map to /i51/ (instead of /i/, which now belongs to APEX 18.1). In order to have a different image prefix you need to run the following SQL in apex51_pdb:
SQL> @\utilities\reset_image_prefix.sql


We upgraded our systems this weekend, the second day after 18.1 was released. We followed more or less the above procedure and things went fine. Make sure to test your own apps before doing the upgrade. Most of our apps ran just fine, but for some we had to replace some older plugins with new versions, or remove the plugins and replace them with built-in functionality.

Note: there are many different ways of updating your system. It comes down to seeing what works for you. What I share works for us, but, for example, if you can't afford downtime you probably want to work with standby databases and load balancers. Or if you work with virtual machines or Docker, it might be useful to clone the machine and test things on the entire machine first.

Categories: Development

Upgrades

Jonathan Lewis - Wed, 2018-05-30 04:08

One of my maxims for Oracle performance is: “Don’t try to be too clever”. Apart from the obvious reason that no-one else may be able to understand how to modify your code if the requirements change at a future date, there’s always the possibility that an Oracle upgrade will mean some clever trick you implemented will simply stop working.

While searching for information about a possible Oracle bug recently I noticed the following fix control (v$system_fix_control) in 12.2.0.1:
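
It can be pulled up with a query along these lines - my own sketch, with the column list inferred from the output below:

select  bugno, optimizer_feature_enable, sql_feature, description, value
from    v$system_fix_control
where   bugno = 18385778;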


     BUGNO OPTIMIZE SQL_FEATURE                        DESCRIPTION                                                             VALUE
---------- -------- ---------------------------------- ---------------------------------------------------------------- ------------
  18385778          QKSFM_CARDINALITY_18385778         avoid virtual col usage if FI is unusable or invisible 

Maybe that’s just invalidated an idea I published 12 years ago.

I haven't researched the bug or any underlying SR, but I can think of valid arguments both for and against the fix as described.

 

 

Virtual Developer Workshop - Container Native Application Development

Want to take advantage of cloud computing frameworks, which are composed of loosely-coupled cloud services? Oracle would like to invite you to The Oracle Cloud...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Python and cx_Oracle RPMs are available from yum.oracle.com

Christopher Jones - Tue, 2018-05-29 21:04


This is worth cross-posting: Getting Started with Python Development on Oracle Linux

Our Oracle Linux group has made Python and cx_Oracle RPMs available for a while. They recently launched a new landing page with nice, clear instructions on how to install various versions of Python, and how to install the cx_Oracle interface for Oracle Database. Check the link above.

Oracle Utilities and the Oracle Database In-Memory Option

Anthony Shorten - Tue, 2018-05-29 20:09

A few years ago, Oracle introduced an In-Memory option for the database to optimize analytical-style applications. In Oracle Database 12c and above, the In-Memory option has been enhanced to support other types of workloads. All Oracle Utilities products are now certified to use the Oracle In-Memory option, on Oracle Database 12c and above, allowing customers to optimize both the operational and analytical aspects of the products.

The Oracle In-Memory option is a memory-based column store that co-exists with the existing caching schemes used within Oracle, delivering faster access for complex queries across the products. It is transparent to the product code and can be easily implemented with a few simple changes to the database to specify the objects to store in memory. Once configured, the Oracle Cost Based Optimizer becomes aware of the data loaded into memory and adjusts the execution plan accordingly, delivering much better performance in almost all cases.

There are just a few configuration changes that need to be done:

  • Enable the In-Memory Option. The In-Memory capability is already in the database software (no relinking necessary) but it is disabled by default. After licensing the option, you can enable it by setting the amount of the SGA you want to use for the In-Memory store. Remember to ensure that the SGA is large enough to cover the existing memory areas as well as the In-Memory store. This amounts to setting a few database initialization parameters.
  • Enable Adaptive Plans. To tell the optimizer that you now want it to take the In-Memory option into account, you need to enable adaptive plans. This is flexible: you can turn In-Memory support off again without changing the In-Memory settings.
  • Decide the Objects to Load into Memory. Now that the In-Memory Option is enabled the next step is to decide what is actually loaded into memory. Oracle provides an In-Memory Advisor that analyzes workloads to make suggestions.
  • Alter Objects to Load into Memory. Create the SQL DDL statements that instruct the loading of objects into memory, including priority and compression options to maximize the flexibility of the option. The In-Memory Advisor can be configured to generate these statements from its analysis (a sketch of such statements follows this list).
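
As an illustration, the statements involved look roughly like this - a sketch only: the size, parameter choice and table name (my_fact_table) are hypothetical examples, not product recommendations:

-- size the In-Memory column store (requires a restart; ensure the SGA can accommodate it)
ALTER SYSTEM SET inmemory_size = 4G SCOPE=SPFILE;

-- let the optimizer adapt its plans (parameter name as of Oracle Database 12.2)
ALTER SYSTEM SET optimizer_adaptive_plans = TRUE;

-- load a chosen object into memory with a priority and compression level
ALTER TABLE my_fact_table INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;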

No changes to the code are necessary to use the option to speed up common queries in the products, as well as analytical queries.

A new whitepaper, Implementing Oracle In-Memory Option (Doc Id: 2404696.1), has been published on My Oracle Support; it outlines the details of this process as well as specific guidelines for implementing this option.

PS. The Oracle In-Memory Option has been significantly enhanced in Oracle Database 18c.

 

Effective Way to Get Changed Rows in ADF BC API

Andrejus Baranovski - Tue, 2018-05-29 14:32
Did you ever wonder how to get all changed rows in a transaction, without iterating through the entire row set? It turns out to be pretty simple with the ADF BC API method getAllEntityInstancesIterator, which is invoked on the Entity Definition attached to the current VO.

The method works well - it returns changed rows from different row set pages, not only from the current one. In my experiment, I changed a couple of rows on the first page:


And a couple of rows on the 5th page. I also removed a row and created one:


The method returns information about all changed rows, as well as the deleted and new ones:


Example of getAllEntityInstancesIterator method usage in a VO Impl class. This method helps to get all changed rows in the current transaction - very handy:
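
A minimal sketch of what such a method could look like in a ViewObjectImpl subclass - the class name, method name and logging here are my own illustration; the ADF BC calls are the actual API:

import java.util.Iterator;
import oracle.jbo.server.Entity;
import oracle.jbo.server.EntityImpl;
import oracle.jbo.server.ViewObjectImpl;

public class EmployeesViewImpl extends ViewObjectImpl {

    // Iterate over the entity rows brought into the current transaction's
    // entity cache, without walking every page of the view object's row set.
    public void processChangedRows() {
        Iterator it =
            getEntityDef(0).getAllEntityInstancesIterator(getDBTransaction());
        while (it.hasNext()) {
            EntityImpl entity = (EntityImpl) it.next();
            byte state = entity.getEntityState();
            if (state == Entity.STATUS_MODIFIED
                    || state == Entity.STATUS_NEW
                    || state == Entity.STATUS_DELETED) {
                System.out.println("Changed row, state = " + state
                        + ", key = " + entity.getPrimaryKey());
            }
        }
    }
}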


Sample application source code is available on GitHub.

EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers

Yann Neuhaus - Tue, 2018-05-29 13:42

In the last post we had a look at the basic configuration of EDB EFM and confirmed that we can do manual switchovers for maintenance operations. Being able to do a switchover and switchback is fine, but what really matters are automatic failovers. Without automatic failover this setup would be somewhat useless, as a) EDB EFM is supposed to do exactly that and b) without it we would need to implement something on our own. So let's have a look at whether that works.

For this little test I scaled the PostgreSQL instances to three (a single oc command, sketched after the pod listing below), meaning one master and two replica instances. Three is the minimum required, as EFM needs at least three agents so that at least two can decide what to do. In a traditional setup this is usually done by making a non-database host the EDB EFM witness host. In this OpenShift/MiniShift setup we do not have a witness node, so the minimum number of database hosts (where EFM runs) is three. This is how it currently looks:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-dzcj6   0/1       Running   2          3h        172.17.0.9    localhost   standbydb
edb-as10-0-1-jmfjk   0/1       Running   2          4h        172.17.0.3    localhost   masterdb
edb-as10-0-1-rlzdb   0/1       Running   2          3h        172.17.0.6    localhost   standbydb
edb-pgpool-1-jtwg2   0/1       Running   2          4h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   0/1       Running   2          4h        172.17.0.5    localhost   queryrouter

What I will do is kill the master pod (edb-as10-0-1-jmfjk) to see what happens to the current setup. Before I do that, let's open a second session into one of the standby pods and check the current EFM status:

dwe@dwe:~$ oc rsh edb-as10-0-1-dzcj6
sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Master      172.17.0.3           UP     UP        
	Standby     172.17.0.6           UP     UP        
	Standby     172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.3

Standby priority host list:
	172.17.0.9 172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.3           0/F000060        
	Standby     172.17.0.9           0/F000060        
	Standby     172.17.0.6           0/F000060        

	Standby database(s) in sync with master. It is safe to promote.

Looks fine. So, from the first session, let's kill the master pod using the oc command line utility. What I expect is that one of the standby instances gets promoted and the remaining standby is reconfigured to connect to the new master.

dwe@dwe:~$ oc delete pod edb-as10-0-1-jmfjk
pod "edb-as10-0-1-jmfjk" deleted
dwe@dwe:~$ oc get pods
NAME                 READY     STATUS        RESTARTS   AGE
edb-as10-0-1-dzcj6   1/1       Running       2          3h
edb-as10-0-1-jmfjk   1/1       Terminating   2          4h
edb-as10-0-1-rlzdb   1/1       Running       2          4h
edb-as10-0-1-snnxc   0/1       Running       0          2s
edb-bart-1-s2fgj     1/1       Running       2          4h
edb-pgpool-1-jtwg2   1/1       Running       2          4h
edb-pgpool-1-pjxzs   1/1       Running       2          4h

The master pod is terminating. What does the second session in the standby pod tell us when we ask EFM for the cluster status?

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.6           UP     UP        
	Promoting   172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000250        
	Standby     172.17.0.6           0/F000140        

	One or more standby databases are not in sync with the master database.

The master is gone and one of the standby instances is being promoted. Let's wait a few seconds and check again:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Idle        172.17.0.11          UP     UNKNOWN   
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000288        
	Standby     172.17.0.6           0/F000288        

	Standby database(s) in sync with master. It is safe to promote.

Idle Node Status (idle nodes ignored in XLog location comparisons):

	Address              XLog Loc         Info
	--------------------------------------------------------------
	172.17.0.11          0/F000288        DB is in recovery.

The new master is ready and the remaining standby has been reconfigured to connect to the new master. Even better, a few seconds later the old master is back as a new standby:

sh-4.2$ /usr/edb/efm-3.0/bin/efm cluster-status edb
Cluster Status: edb
VIP: 

	Agent Type  Address              Agent  DB       Info
	--------------------------------------------------------------
	Standby     172.17.0.11          UP     UP        
	Standby     172.17.0.6           UP     UP        
	Master      172.17.0.9           UP     UP        

Allowed node host list:
	172.17.0.3

Membership coordinator: 172.17.0.6

Standby priority host list:
	172.17.0.6 172.17.0.11

Promote Status:

	DB Type     Address              XLog Loc         Info
	--------------------------------------------------------------
	Master      172.17.0.9           0/F000288        
	Standby     172.17.0.6           0/F000288        
	Standby     172.17.0.11          0/F000288        

	Standby database(s) in sync with master. It is safe to promote.

This works fine and is what you must have in a good setup. When we kill one of the standby pods, the expected result is that the pod restarts and the instance comes back as a standby. Let's check:

20:32:47 dwe@dwe:~$ oc delete pod edb-as10-0-1-hf27d
pod "edb-as10-0-1-hf27d" deleted
dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS        RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-8p5rd   0/1       Running       0          3s        172.17.0.6    localhost   
edb-as10-0-1-dzcj6   1/1       Running       2          4h        172.17.0.9    localhost   masterdb
edb-as10-0-1-hf27d   0/1       Terminating   1          12m       172.17.0.3    localhost   
edb-as10-0-1-snnxc   1/1       Running       0          20m       172.17.0.11   localhost   standbydb
edb-pgpool-1-jtwg2   1/1       Running       2          5h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   1/1       Running       2          5h        172.17.0.5    localhost   queryrouter

A few moments later we have the previous status:

dwe@dwe:~$ oc get pods -o wide -L role
NAME                 READY     STATUS    RESTARTS   AGE       IP            NODE        ROLE
edb-as10-0-1-8p5rd   1/1       Running   0          4m        172.17.0.6    localhost   standbydb
edb-as10-0-1-dzcj6   1/1       Running   2          4h        172.17.0.9    localhost   masterdb
edb-as10-0-1-snnxc   1/1       Running   0          24m       172.17.0.11   localhost   standbydb
edb-pgpool-1-jtwg2   1/1       Running   2          5h        172.17.0.4    localhost   queryrouter
edb-pgpool-1-pjxzs   1/1       Running   2          5h        172.17.0.5    localhost   queryrouter

Fine as well. To complete this little series about EDB containers in MiniShift/OpenShift we will have a look at how we can add an EDB BART container to the setup, because backup and recovery are still missing :)

 

Cet article EDB Failover Manager in EDB containers in Minishift/OpenShift – Failovers est apparu en premier sur Blog dbi services.

Oracle and PwC Team to Support Finance Transformation for Insurers and Provide IFRS 17 Compliance Expertise

Oracle Press Releases - Tue, 2018-05-29 07:00
Press Release
Oracle and PwC Team to Support Finance Transformation for Insurers and Provide IFRS 17 Compliance Expertise
Insurers to More Easily Navigate Adoption of New Accounting Standard, Modernize Decision Making and Deploy Timely and Insightful Reporting

Redwood Shores, Calif.—May 29, 2018

Oracle Financial Services Analytical Applications (OFSAA) and PwC, an Oracle PartnerNetwork (OPN) Platinum and Cloud Elite partner, announced today a global collaboration to provide insurance clients with clear visibility into near-term and ongoing compliance with the International Accounting Standards Board’s (IASB) new IFRS 17 standard, which insurers must meet by 2021, and the pending GAAP Long Duration Contract Standard. Together, Oracle and PwC are facilitating risk and finance transformation and insurers’ adoption of IFRS 17.

Oracle provides the data management platform to enable finance transformation and analytical applications for IFRS 17 with pre-built business rules, valuations, contractual service margin calculations and disclosures. The data management platform and its associated tools provide prebuilt integration with the accounting and financial close process, facilitate integration with actuarial systems, and create integrated results for internal and external reporting, with full auditability and lineage.

PwC acts as Oracle's strategic advisor, providing consulting and implementation services and accelerator tools that deliver tailored enhancements. Insurers will benefit from improved decision making, modernized actuarial analytics, and more timely and insightful reporting related to compliance needs.

“Now more than ever, insurers have to meet an array of regulatory and compliance requirements including new financial standards, such as IFRS 17,” said Sonny Singh, senior vice president and general manager, Financial Services Global Business Unit, Oracle. “To do so, they need a comprehensive technological solution to properly comply with these increasingly complex requirements and reduce any adverse impact to their business. Oracle’s award winning risk and compliance platform, which are purpose built for financial services institutions, combine with PwC’s experience in advisory services to help provide insurers the necessary capabilities to address these stringent requirements.”

“PwC is a leading advisor to the insurance industry and plays a major role in helping insurers transform and modernize their businesses while meeting new IFRS 17 requirements,” said Alex Bertolotti, PwC IFRS 17 lead partner. “Together with Oracle, which has a strong technology position in the financial services industry, our extensive market experience enables us to provide advisory and software solutions for the insurance industry, and bring strong market experience in helping our insurance clients meet their specific business challenges.”

Contact Info
Judi Palmer
Oracle
+1 650 784 7901
Judi.palmer@oracle.com
Dario Cziráky
PwC - UK
+44(0)7702699200
Dario.Cziraky@pwc.com
Yeon Lee
PwC - US
+1 646-471-9082
Yeon.Lee@pwc.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle's partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates.

About PwC

At PwC, our purpose is to build trust in society and solve important problems. We’re a network of firms in 158 countries with more than 236,000 people who are committed to delivering quality in assurance, advisory and tax services. Find out more and tell us what matters to you by visiting us at www.pwc.com. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details.


Index Bouncy Scan 2

Jonathan Lewis - Tue, 2018-05-29 06:27

I wrote a note some time last year about taking advantage of the "index range scan (min/max)" operation in a PL/SQL loop to find the small number of distinct values in a large single-column index efficiently (for example an index that was not very efficient but existed to avoid the "foreign key locking" problem). The resulting comments included pointers to other articles that showed pure SQL solutions to the same problem using recursive CTEs ("with" subqueries) from Markus Winand and Sayan Malakshinov: both writers also show examples of extending the technique to cover more cases than the simple list of distinct values.
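
For context, the idea from that earlier note looks roughly like this (a minimal sketch written against the t1/val1 names defined below, not the code from the original article):

set serveroutput on

declare
        m_val   t1.val1%type;
begin
        select min(val1) into m_val from t1;
        while m_val is not null loop
                dbms_output.put_line(m_val);
                -- each probe can use an "index range scan (min/max)" on an index starting with val1
                select min(val1) into m_val from t1 where val1 > m_val;
        end loop;
end;
/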

The topic came up again on the ODC (OTN) database forum a couple of days ago; one of the replies linked back to my original posting, another gave the recursive solution for a single column index - so I ended up seeing the following question twice, once as a comment on my blog, once in the forum: "Can you extend this method to a two column index? What about an N column index?"

Here’s a walk-through of working out one possible solution for the two-column requirement – how to find all the distinct combinations for the first two columns of a very large index without having to scan and aggregate the whole index. We start with a suitable table and index.


rem
rem     Script:         bouncy_index.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

create table t1
segment creation immediate
nologging
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum-1,3)                 val1,
        mod(rownum-1,10)                val2,
        lpad('x',100,'x')               padding
from
        generator       v1
order by
        dbms_random.value
;

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

alter table t1 add constraint t1_pk primary key(val1, val2, id);

I’ve created a table with 3 values for val1, 10 values for val2, with a total of 30 combinations. The addition of the primary key starting with (val1, val2) is just a lazy way to ensure that I have a suitable index AND val1 and val2 are both declared not null.

With this data my first step will be to demonstrate the recursive CTE ("with" subquery) used by Andrew Sayer in the ODC posting to get the distinct values for val1 using three "index range scan (min/max)" probes. I've included the in-memory execution plan with rowsource execution stats to show that this does a minimal amount of work.

The results in this note come from 12.2.0.1:


set serveroutput off
alter session set statistics_level = all;

with bouncy (val1)
as (
        select  min(val1) val1
        from    t1
        union all
        select  (select min(t1.val1) val1 from t1 where t1.val1 > bouncy.val1) val1
        from    bouncy
        where   bouncy.val1 is not null
    )
select  *
from    bouncy
where   bouncy.val1 is not null
order by
        val1
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |      1 |        |    19 (100)|      3 |00:00:00.01 |       7 |      4 |       |       |          |
|   1 |  SORT ORDER BY                             |       |      1 |      2 |    19   (6)|      3 |00:00:00.01 |       7 |      4 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                     |       |      1 |      2 |    18   (0)|      3 |00:00:00.01 |       7 |      4 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|       |      1 |        |            |      4 |00:00:00.01 |       7 |      4 |  1024 |  1024 |          |
|   4 |     SORT AGGREGATE                         |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |      1 |       |       |          |
|   5 |      INDEX FULL SCAN (MIN/MAX)             | T1_PK |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       2 |      1 |       |       |          |
|   6 |     SORT AGGREGATE                         |       |      3 |      1 |            |      3 |00:00:00.01 |       5 |      3 |       |       |          |
|   7 |      FIRST ROW                             |       |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |      3 |       |       |          |
|*  8 |       INDEX RANGE SCAN (MIN/MAX)           | T1_PK |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |      3 |       |       |          |
|   9 |     RECURSIVE WITH PUMP                    |       |      4 |        |            |      3 |00:00:00.01 |       0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("BOUNCY"."VAL1" IS NOT NULL)
   8 - access("T1"."VAL1">:B1)

As you can see I've done an "index full scan (min/max)" as the first step of the recursive query, visiting just two buffered blocks (the index leaf-block count is 27 - roughly 9 per value of val1 - so Oracle is clearly doing an efficient access for that value; it's not really a "full" scan). We then see 3 "index range scan (min/max)" operations at roughly 2 buffer visits each to collect the remaining values. (There's probably a small saving in buffer gets due to the pinning that takes place.)

So we can get the val1 values very easily and efficiently with this recursive CTE technique. Let's write some code that uses the same technique to find the val2 values for each possible val1 value in turn:

with bounce2 (val1, val2)
as (
        select val1, val2 from (
                select  0 val1, 0 val2 from dual
                union all
                select 1,0 from dual
                union all
                select 2,0 from dual
        )
        union all
        select  bounce2.val1, (select min(t1.val2) val2 from t1 where t1.val1 = bounce2.val1 and t1.val2 > bounce2.val2) val2
        from    bounce2
        where   bounce2.val2 is not null
--      and     bounce2.val1 is not null
)
select * from bounce2
where
        bounce2.val2 is not null
and     bounce2.val1 is not null        -- > redundant predicate
order by
        val1, val2
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |      1 |        |    27 (100)|     30 |00:00:00.01 |      32 |     24 |       |       |          |
|   1 |  SORT ORDER BY                             |       |      1 |      6 |    27   (4)|     30 |00:00:00.01 |      32 |     24 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                     |       |      1 |      6 |    26   (0)|     30 |00:00:00.01 |      32 |     24 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|       |      1 |        |            |     33 |00:00:00.01 |      32 |     24 |  1024 |  1024 |          |
|   4 |     VIEW                                   |       |      1 |      3 |     6   (0)|      3 |00:00:00.01 |       0 |      0 |       |       |          |
|   5 |      UNION-ALL                             |       |      1 |        |            |      3 |00:00:00.01 |       0 |      0 |       |       |          |
|   6 |       FAST DUAL                            |       |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       0 |      0 |       |       |          |
|   7 |       FAST DUAL                            |       |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       0 |      0 |       |       |          |
|   8 |       FAST DUAL                            |       |      1 |      1 |     2   (0)|      1 |00:00:00.01 |       0 |      0 |       |       |          |
|   9 |     SORT AGGREGATE                         |       |     30 |      1 |            |     30 |00:00:00.01 |      32 |     24 |       |       |          |
|  10 |      FIRST ROW                             |       |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|* 11 |       INDEX RANGE SCAN (MIN/MAX)           | T1_PK |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |     24 |       |       |          |
|  12 |     RECURSIVE WITH PUMP                    |       |     11 |        |            |     30 |00:00:00.01 |       0 |      0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("BOUNCE2"."VAL2" IS NOT NULL AND "BOUNCE2"."VAL1" IS NOT NULL))
  11 - access("T1"."VAL1"=:B1 AND "T1"."VAL2">:B2)


In this example of the code the second half of the CTE looks remarkably similar to the previous statement – except I now have a two-column CTE and I’ve included an equality predicate against val1 based on the first of the two columns. In the first half of the code I’ve cheated (as a temporary measure) and supplied three rows of data which list the three distinct values of val1 with their associated minimum values for val2.

The execution plan shows that I’ve done 30 “index range scan (min/max)” of the index with 32 buffer visits. And that’s exactly the right number of probes to return my result set. So if I can manage to generate the starting values efficiently I can execute the whole query efficiently. So let’s find a way of changing that “union all on dual” fudge into a generic statement. Let’s replace it with a recursive CTE:


with bounce1(val1, val2) as (
        select val1, val2 
        from    (
                select
                        /*+ index(t1) */
                        val1, val2,
                        row_number() over(order by val1, val2) rn
                from    t1
        )
        where
                rn = 1
        union all
        select
                (select min(t1.val1) val1 from t1 where t1.val1 > bounce1.val1) val1,
                (select min(t1.val2) val2 from t1 where t1.val1 > bounce1.val1 and rownum = 1) val2
        from    bounce1
        where   bounce1.val1 is not null
),
bounce2 (val1, val2)
as (
        select  val1, val2 
        from    bounce1
--      where   bounce1.val1 is not null
        union all
        select  bounce2.val1, (select min(t1.val2) val2 from t1 where t1.val1 = bounce2.val1 and t1.val2 > bounce2.val2) val2
        from    bounce2
        where   bounce2.val2 is not null
--      and     bounce2.val1 is not null
)
select * from bounce2
where
        bounce2.val2 is not null
and     bounce2.val1 is not null        -- > redundant predicate
order by
        val1, val2
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last'));

--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |       |      1 |        |   189 (100)|     30 |00:00:00.01 |      45 |       |       |          |
|   1 |  SORT ORDER BY                               |       |      1 |      4 |   189   (2)|     30 |00:00:00.01 |      45 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |       |      1 |      4 |   188   (2)|     30 |00:00:00.01 |      45 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |       |      1 |        |            |     34 |00:00:00.01 |      45 |  1024 |  1024 |          |
|   4 |     VIEW                                     |       |      1 |      2 |    87   (2)|      4 |00:00:00.01 |      13 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|       |      1 |        |            |      4 |00:00:00.01 |      13 |  1024 |  1024 |          |
|*  6 |       VIEW                                   |       |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |       |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |       |       |          |
|   9 |       SORT AGGREGATE                         |       |      3 |      1 |            |      3 |00:00:00.01 |       5 |       |       |          |
|  10 |        FIRST ROW                             |       |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |       |       |          |
|* 11 |         INDEX RANGE SCAN (MIN/MAX)           | T1_PK |      3 |      1 |     2   (0)|      2 |00:00:00.01 |       5 |       |       |          |
|  12 |       SORT AGGREGATE                         |       |      3 |      1 |            |      3 |00:00:00.01 |       6 |       |       |          |
|* 13 |        COUNT STOPKEY                         |       |      3 |        |            |      2 |00:00:00.01 |       6 |       |       |          |
|* 14 |         INDEX RANGE SCAN                     | T1_PK |      3 |    500 |     2   (0)|      2 |00:00:00.01 |       6 |       |       |          |
|  15 |       RECURSIVE WITH PUMP                    |       |      4 |        |            |      3 |00:00:00.01 |       0 |       |       |          |
|  16 |     SORT AGGREGATE                           |       |     30 |      1 |            |     30 |00:00:00.01 |      32 |       |       |          |
|  17 |      FIRST ROW                               |       |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |       |       |          |
|* 18 |       INDEX RANGE SCAN (MIN/MAX)             | T1_PK |     30 |      1 |     2   (0)|     27 |00:00:00.01 |      32 |       |       |          |
|  19 |     RECURSIVE WITH PUMP                      |       |     11 |        |            |     30 |00:00:00.01 |       0 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("BOUNCE2"."VAL2" IS NOT NULL AND "BOUNCE2"."VAL1" IS NOT NULL))
   6 - filter("RN"=1)
   7 - filter(ROW_NUMBER() OVER ( ORDER BY "VAL1","VAL2")<=1)
  11 - access("T1"."VAL1">:B1)
  13 - filter(ROWNUM=1)
  14 - access("T1"."VAL1">:B1)
  18 - access("T1"."VAL1"=:B1 AND "T1"."VAL2">:B2)


Again we see 30 probes using “index range scan (min/max)” with 32 buffer gets to get 30 rows; plus a further 13 buffer gets to generate the three driving rows. The 13 buffer gets break down to: 2 to get the minimum (val1, val2) combination using an “index full scan (min/max)”, then 5 for the probes to get the three minimum values for val1, and 6 for the probes to get the three corresponding minimum values of val2.

You'll notice that I've got various "is not null" predicates scattered throughout the code. In some cases this is to stop Oracle from running into an infinite loop and reporting "ORA-32044: cycle detected while executing recursive WITH query". This would occur because the "(select min()...)" inline scalar subqueries return a null if there is no data found, which would lead to the next cycle of the recursive descent taking that null as an input - hence the infinite recursion. In some cases the "is not null" predicates are just part of my default pattern for recursive CTEs and some of them could probably be removed with no change in meaning (or workload).

The /*+ index() */ hint in the starting point for bounce1 was necessary to avoid an “index fast full scan” in 12.2; but that was purely a case of the statistics – number of distinct values, leaf_block count, etc – making the optimizer pick an option that was appropriate for this tiny data set, but not appropriate for the demonstration.  In fact this looks like the side effect of two defects in the 12.1 optimizer code, of which only one has been fixed in 12.2.

Optimizer Limitations

Here’s an extract from the execution plan for the final query with an /*+ index(t1) */ hint in place. The extract is identical for 12.1.0.2 and 12.2.0.1:

--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
...
|*  6 |       VIEW                                   |       |      1 |      1 |    29   (0)|      1 |00:00:00.01 |       2 |       |       |          |
|*  7 |        WINDOW NOSORT STOPKEY                 |       |      1 |  10000 |    29   (0)|      1 |00:00:00.01 |       2 | 73728 | 73728 |          |
|   8 |         INDEX FULL SCAN                      | T1_PK |      1 |  10000 |    29   (0)|      2 |00:00:00.01 |       2 |       |       |          |

You'll notice the Cost at operation 8 is appropriate for a real (i.e. all leaf blocks) full scan of the index. (The leaf_block value was 27, as I mentioned earlier.) You'll also see that the OMem (PGA requirement for optimum workarea operation) figure is consistent with Oracle processing 10,000 rows in the index. Since the optimizer managed to work out that it could do a full scan with nosort and stopkey, it looks a little surprising that the algorithms didn't manage to make some allowance for the limited access that would occur. (I'd view this as a current limitation, rather than a bug, though.)

Now compare the equivalent extracts when we hint an index fast full scan, 12.1.0.2 first, then 12.2.0.1:

12.1.0.2
--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
...
|*  6 |       VIEW                                   |       |      1 |      1 |    39   (8)|      1 |00:00:00.03 |      32 |       |       |          |
|*  7 |        WINDOW SORT PUSHED RANK               |       |      1 |  10000 |    39   (8)|      1 |00:00:00.03 |      32 |  2048 |  2048 | 2048  (0)|
|   8 |         INDEX FAST FULL SCAN                 | T1_PK |      1 |  10000 |     5   (0)|  10000 |00:00:00.01 |      32 |       |       |          |

12.2.0.1
--------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------
...
|*  6 |       VIEW                                   |       |      1 |      1 |     7  (29)|      1 |00:00:00.01 |      34 |       |       |          |
|*  7 |        WINDOW SORT PUSHED RANK               |       |      1 |  10000 |     7  (29)|      1 |00:00:00.01 |      34 |  2048 |  2048 | 2048  (0)|
|   8 |         INDEX FAST FULL SCAN                 | T1_PK |      1 |  10000 |     5   (0)|  10000 |00:00:00.01 |      34 |       |       |          |

In both cases the cost of the index fast full scan is the same - and much cheaper; but in 12.1.0.2 the cost of the query looks as if it is allowing for sorting (and spilling) the entire 10,000 rows returned from the index fast full scan (even though the OMem figure indicates otherwise), while the cost in 12.2.0.1 looks as if it recognises that it just has to do a running comparison through the data set as it is returned, keeping only the current minimum in memory at any one moment. This clearly matches our expectations of how Oracle ought to behave, which is why I'd call this a bug in 12.1, fixed by 12.2.

The dramatic change in the cost of operation 7 on the upgrade explains the change in plan and the necessity for the /*+ index(t1) */ hint - but if the "first row" predicate were also reflected in the costing then the cost of the "stopkey" index full scan would drop to 2 (probably) and the original 12.1 path would re-appear.

Footnote

I don’t think there’s a lot of scope for improving the efficiency of this query for getting the (relatively) small number of distinct combinations from the first two columns of a very large index – but there are some very clever SQL bunnies on the ODC forum, so I won’t be surprised if someone comes up with a better solution.
