Feed aggregator

Get the Real-world Advice You Need—Add Oracle Support Experts to Your Schedule

Chris Warticki - Tue, 2018-10-02 19:01
Oracle Support has the expertise and resources to help you optimize and innovate within your business, whether you're an executive leader, enterprise resource manager, database admin, IT service engineer, or anything in between.
Visit us at Oracle OpenWorld 2018 in San Francisco, October 22 to 25, and discover tools, knowledge, and best practices for getting the most from your Oracle solutions—on-premises, in the cloud, or a hybrid of both.
Come and engage with Oracle Support and Services leaders and engineers at Oracle OpenWorld.
We can help you discover available support resources and see what's coming next for your Oracle Cloud and on-premises solutions.
See sessions and demos from:
  • Oracle Support
  • Oracle Advanced Customer Services
  • Oracle University
  • And more!
Connect with support experts across Oracle’s product portfolio and gain valuable insights at the Oracle Support Stars Bar, located on the third floor of Moscone West.
Support personnel are available for walk-in consultations, or you can schedule a one-on-one engagement ahead of time to fit your event schedule.
Attend a 15-minute Mini-Briefing session on key topics at our Mini-Briefing Center (adjacent to the Stars Bar) and enter a drawing for a chance to win a prize.
Learn about resources to help drive productivity and proactively avoid business obstacles.
Oracle Support offers dozens of dedicated conference sessions for your specific products and solutions, with support tricks and tips for Oracle Database, Oracle E-Business Suite, Oracle Fusion Applications, and more.
Many Oracle Support sessions have historically been some of the best-attended at OpenWorld, with a focus on practical, real-world expertise that you can take home and benefit from immediately.
Oracle Support's proactive program helps organizations adopt and integrate the latest product capabilities, streamline support resolution, and optimize systems and software performance. Oracle Support annually recognizes customers and partners who have excelled in driving business value by adopting Oracle's proactive support offerings.
The Proactive Support Individual Champion awards are presented at the Stars Bar on Monday, October 22, at 1:00 p.m. Join us for the award presentation and find out how real-world peers have had success using Proactive tools and resources.
Join us for the annual My Oracle Support Monday Mix, our unique customer appreciation event that complements your Oracle OpenWorld experience and offers another dimension to the sessions and demos you'll attend all week.
Connect and engage with Oracle Support executives and engineers on Monday, October 22 from 6:15 to 8:30 p.m. at Fang Restaurant, 660 Howard Street in San Francisco (just a three-minute walk from Moscone).
Admission is free for Oracle Support customers with an Oracle OpenWorld badge and photo ID.
Oracle Support has provided award-winning service for more than 40 years, and continues to help our more than 430,000 customers find new possibilities to fuel their success and protect their technology investments. We look forward to seeing you at Oracle OpenWorld 2018.
Oracle Support
Trusted | Secure | Comprehensive

Migrating spatial data from SQL Server to Oracle

Tom Kyte - Tue, 2018-10-02 15:26
Hi, I want to migrate spatial columns of data from SQL Server to an Oracle database server. Can you please explain how to begin? The SQL Developer tool does not support migration of spatial data. Could you provide the appropriate tool/steps necessa...
Categories: DBA Blogs

Compressing LOB data increases table size

Tom Kyte - Tue, 2018-10-02 15:26
Hello, I think this is a more conceptual question: in our core production database system, we have a pretty big table that weighs 15.5 TB. Of this, about 14.4 TB is XML data stored in a LOB column. We decided to reduce the size of the database...
Categories: DBA Blogs

Delete on billions record table

Tom Kyte - Tue, 2018-10-02 15:26
Hi Chris, Hi Connor, Apologies for putting my query in a theoretical manner, but I would request you to please suggest something I can try in the scenario below. There is a requirement where I need to clean up a table (e.g. orders) having approx. 25 b...
Categories: DBA Blogs

Database Password Change

Tom Kyte - Tue, 2018-10-02 15:26
Hello Experts, I am just curious about changing the password for the SYS user via the SQL command line. I issued the below command: alter user SYS identified by new_password; commit; Then I restarted the instance and the OS. When I tried to use the new password, it ...
Categories: DBA Blogs

Regarding Undo tablespace

Tom Kyte - Tue, 2018-10-02 15:26
Which background process writes the copy of the "old image" from the "db buffer cache" to an undo segment of the undo tablespace?
Categories: DBA Blogs

Oracle Offline Persistence Toolkit - Before Request Sync Listener

Andrejus Baranovski - Tue, 2018-10-02 15:09
One more post from me related to the Oracle Offline Persistence Toolkit. I already described how an after request listener can be useful for reading response data after sync in Oracle Offline Persistence Toolkit - After Request Sync Listener. Today I will explain when a before request listener can be useful. Like the after request listener, it is defined during persistence manager registration:


A before request listener must return a promise, and we control the resolved action. For example, if there is no need to update the request, we simply resolve with continue. We do need to update the request if the same row is updated multiple times during sync: the change indicator value must be updated in the request payload. We read the latest change indicator value from the array initialised in the after request listener, convert the request payload to JSON, update the value, and then construct a new request and resolve with replay. The API allows a new request to be provided, replacing the original:


Here is the use case. While offline, update a value:


While remaining offline, update the same value again:


We can trace the executed requests during sync, when going back online. The first request, initiated by the first change, uses change indicator value 292:


The second request uses the updated change indicator value 293:


Without the before and after request listener logic, the second request would execute with the same change indicator value as the first one, which would lead to a data conflict on the backend.
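
Putting the pieces above together, here is a minimal sketch of what such a before request sync listener can look like. This is my own illustration rather than the code from the screenshots: the module path, the 'beforeSyncRequest' event name, the '/rest/employees' scope, the changeIndicator payload field and the latestChangeIndicators map are assumptions to adapt to your application.

define(['persist/persistenceManager'], function (persistenceManager) {

  // Assumed to be populated by the after request sync listener with the
  // newest change indicator value returned by the backend, keyed by row id.
  var latestChangeIndicators = {};

  function beforeRequestListener(event) {
    // Depending on the toolkit version the listener may receive the request
    // directly or wrapped in an event object - handle both here.
    var request = event.request || event;

    // Nothing to rewrite for GET requests - let the sync engine continue.
    if (request.method === 'GET') {
      return Promise.resolve({action: 'continue'});
    }

    // Read the queued payload, patch in the latest change indicator value,
    // build a new request and ask the sync engine to replay it.
    return request.clone().text().then(function (payload) {
      var data = JSON.parse(payload);
      data.changeIndicator = latestChangeIndicators[data.id] || data.changeIndicator;
      var newRequest = new Request(request.url, {
        method: request.method,
        headers: request.headers,
        body: JSON.stringify(data)
      });
      return {action: 'replay', request: newRequest};
    });
  }

  persistenceManager.getSyncManager()
    .addEventListener('beforeSyncRequest', beforeRequestListener, '/rest/employees');
});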

Sample application code is available on GitHub.

PDB Snapshot Carousel Oracle 18.3

Yann Neuhaus - Tue, 2018-10-02 09:09

A new feature with Oracle 18c is the PDB snapshot carousel. As indicated by its name, a PDB snapshot is a copy of a PDB at a specific point in time. You can create up to eight snapshots; when you reach the maximum number of snapshots, the oldest snapshot is overwritten. The snapshot carousel is simply the name for the set of all your PDB snapshots.

You can create automatic snapshots using the “snapshot mode every” clause when you create or alter a PDB. For example, you can change the snapshot mode of a PDB to every 3 hours:

SQL> alter session set container=pdb;

Session altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
MANUAL

SQL> alter pluggable database snapshot mode every 3 hours;

Pluggable database altered.

SQL> select snapshot_mode,snapshot_interval/60 from dba_pdbs;

SNAPSH SNAPSHOT_INTERVAL/60
------ --------------------
AUTO			  3

To return to manual mode, just type:

SQL> alter pluggable database snapshot mode manual;

Pluggable database altered.

We can also create PDB snapshots manually, with or without a specific name:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

SQL> alter pluggable database snapshot;

Pluggable database altered.

We can query the dba_pdb_snapshots view to display the PDB snapshots location:

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS ORDER BY SNAP_SCN;

CON_ID CON_NAME SNAPSHOT_NAME SNAP_SCN

FULL_SNAPSHOT_PATH

3        PDB	  PDB_SNAP    1155557
/home/oracle/oradata/DB18/pdb/snap_2263384607_1155557.pdb

3        PDB	  SNAP_2263384607_987432172  1155823
/home/oracle/oradata/DB18/pdb/snap_2263384607_1155823.pdb

If you want to drop a snapshot, you have two methods:

You can delete the snapshot with the following alter pluggable database statement:

SQL> alter pluggable database drop snapshot SNAP_2263384607_987432172;

Pluggable database altered.

Or you set the MAX_PDB_SNAPSHOTS property to zero in the PDB.

You can query the CDB_PROPERTIES and CDB_PDBS to display the parameter value:

SELECT r.CON_ID, p.PDB_NAME, PROPERTY_NAME,
  	PROPERTY_VALUE AS value, DESCRIPTION
  	FROM   CDB_PROPERTIES r, CDB_PDBS p
  	WHERE  r.CON_ID = p.CON_ID
  	AND    PROPERTY_NAME LIKE 'MAX_PDB%'
  	AND    description like 'maximum%'
  	ORDER BY PROPERTY_NAME

CON_ID	PDB_NAME	PROPERTY_NAME	VALUE	           DESCRIPTION
  3		  PDB     MAX_PDB_SNAPSHOTS    8    maximum number of snapshots for a given PDB

And if you set it to zero all your PDB snapshots will be dropped:

SQL> alter session set container=pdb;

Session altered.

SQL> alter pluggable database set max_pdb_snapshots = 0;

Pluggable database altered.

SQL> SELECT CON_ID, CON_NAME, SNAPSHOT_NAME, 
SNAPSHOT_SCN AS snap_scn, FULL_SNAPSHOT_PATH 
FROM   DBA_PDB_SNAPSHOTS
ORDER BY SNAP_SCN;

no rows selected

But the main interest of PDB snapshots is to create new PDBs from a production environment, based on a point in time of the production PDB.

So we create a PDB snapshot named PDB_SNAP:

SQL> alter pluggable database snapshot pdb_snap;

Pluggable database altered.

And now we create a PDB from the PDB_SNAP snapshot:

SQL> create pluggable database PDB2 from PDB using snapshot PDB_SNAP create_file_dest='/home/oracle/oradata/DB18/pdb2';

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB		                  READ WRITE NO
	 4 APPPSI			  READ WRITE NO
	 5 PDB2 			  READ WRITE NO

We also have the possibility to change the snapshot mode:

SQL> alter session set container=pdb;

Session altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
MANUAL


SQL> ALTER PLUGGABLE DATABASE SNAPSHOT MODE EVERY 1 HOURS;

Pluggable database altered.

SQL> SELECT SNAPSHOT_MODE "S_MODE", SNAPSHOT_INTERVAL/60 "SNAP_INT_HRS" 
     FROM DBA_PDBS;

S_MODE SNAP_INT_HRS
------ ------------
AUTO		  1

We have the possibility to create a PDB that creates snapshots every 15 minutes:

SQL> create pluggable database pdb_new from pdb
  2  file_name_convert=('pdb','pdb_new')
  3  snapshot mode every 15 minutes;

Pluggable database created.

There is a prerequisite for configuring automatic PDB snapshots: the CDB must be in local undo mode.

Finally, the snapshots are correctly created in my environment every 15 minutes:

oracle@localhost:/home/oracle/oradata/DB183/pdb/ [DB183] ls -lrt snap*
-rw-r--r--. 1 oracle dba 65690276 Oct  1 15:04 snap_3893567541_798493.pdb
-rw-r--r--. 1 oracle dba 65740202 Oct  1 15:19 snap_3893567541_801189.pdb
-rw-r--r--. 1 oracle dba 65823279 Oct  1 15:34 snap_3893567541_803706.pdb

And to verify that it is correct, I created a location table in the psi schema of my pdb_new environment, with two records inserted at 15:20:

SQL> create table psi.location (name varchar2(10));

Table created.

SQL> insert into psi.location values ('London');

1 row created.

SQL> insert into psi.location values('Paris');

1 row created.

SQL> commit;

And we create a new PDB from the snapshot to verify that the data is correct:

SQL> create pluggable database pdb_psi from pdb_new 
     using snapshot SNAP_45745043_988386045 
     create_file_dest='/home/oracle/oradata/DB183/pdb_psi';

Pluggable database created.

We open pdb_psi and check:

SQL> alter session set container=pdb_psi;

Session altered.

SQL> select * from psi.location;

NAME
----------
London
Paris

This feature can be very useful for testing purposes: imagine you have a production PDB; you only have to create a refreshable clone named PDB_MASTER and configure it to create daily snapshots. If you need a PDB for testing, you only have to create a clone from any snapshot, as sketched below.
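
A minimal SQL sketch of that workflow, assuming a database link prod_link to the production CDB; the PDB names, paths and snapshot name are placeholders, and the exact clause combination should be checked against your release:

-- Refreshable clone of the production PDB that refreshes every hour
-- and takes a snapshot once a day.
create pluggable database pdb_master from prod_pdb@prod_link
  refresh mode every 60 minutes
  snapshot mode every 24 hours
  create_file_dest='/home/oracle/oradata/DB183/pdb_master';

-- When a test PDB is needed, clone it from any snapshot listed in
-- DBA_PDB_SNAPSHOTS (the snapshot name below is a placeholder).
create pluggable database pdb_test from pdb_master
  using snapshot SNAP_PDB_MASTER_20181002
  create_file_dest='/home/oracle/oradata/DB183/pdb_test';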

Conclusion

All these tests were performed on a Linux x86-64 server, with Oracle 18.3 Enterprise Edition. My DB183 database has been initialized with the “_exadata_feature_on” hidden parameter to avoid the “ORA-12754 Feature PDB Snapshot Carousel is disabled due to missing capability” error message.

If you have a look at the Database Licensing User Manual:

https://docs.oracle.com/en/database/oracle/oracle-database/18/dblic/Licensing-Information.html#GUID-B6113390-9586-46D7-9008-DCC9EDA45AB4

Feature / Option / Pack    SE2   EE   EE-ES   DBCS SE   DBCS EE   DBCS EE-HP   DBCS EE-EP   ExaCS   Notes
PDB Snapshot Carousel      N     N    Y       N         Y         Y            Y            Y

You will see that PDB Snapshot Carousel (and a lot of interesting new features in Oracle 18.3) is only available for Engineered Systems or in the Cloud, and not for Enterprise Edition on third-party hardware. I really hope Oracle will change this in future releases.

 

The article PDB Snapshot Carousel Oracle 18.3 appeared first on Blog dbi services.

Oracle Rolls Out Autonomous NoSQL Database Service

Oracle Press Releases - Tue, 2018-10-02 07:00
Press Release
Oracle Rolls Out Autonomous NoSQL Database Service
Oracle’s self-driving NoSQL database empowers developers to easily build modern applications with high availability and lower cost than Amazon DynamoDB

Redwood Shores, Calif.—Oct 2, 2018

Oracle today announced the availability of Oracle Autonomous NoSQL Database, the newest addition to the Oracle Autonomous Database portfolio. Designed for workloads needing fast and predictable responses to simple operations, the self-driving database service enables developers to easily deploy massive-scale applications, including UI personalization, shopping carts, online fraud detection, gaming, and advertising. Using innovative machine learning and automation capabilities, Oracle Autonomous NoSQL Database delivers a significantly more reliable NoSQL database with 99.95 percent availability and is up to 70 percent lower cost than Amazon DynamoDB.[1]

The fully managed Oracle Autonomous NoSQL Database handles NoSQL applications that require low latency, data model flexibility, and elastic scaling. Using simple APIs, developers can focus on application development without having to worry about managing servers, storage expansion, cluster deployments, software installation, or backup. Developers can simply specify the throughput and capacity that they wish to provision, and resources are allocated and scaled accordingly to meet dynamic workload requirements.

“We continue to leverage our revolutionary autonomous capabilities to transform the database market," said Andrew Mendelsohn, executive vice president, Oracle Database. "Our latest self-driving database cloud service, Oracle Autonomous NoSQL Database, provides extreme reliability and performance at very low costs to achieve a highly flexible application development framework.”

Oracle Autonomous Database offers exceptional performance while helping reduce risk and costs for enterprises. Running on Oracle Cloud Infrastructure, the autonomous database portfolio is self-driving, self-securing, and self-repairing to automate key management processes including patching, tuning and upgrading to keep critical infrastructure automatically running. Oracle Autonomous NoSQL joins Oracle Autonomous Transaction Processing and Oracle Autonomous Data Warehouse, which became available earlier this year. Each database cloud service is tuned to a specific workload.

Oracle Autonomous NoSQL Database provides a wealth of features to meet the needs of today’s developers:

  • Modern: A developer-oriented solution, Oracle Autonomous NoSQL Database is designed for flexibility. The database supports key-value APIs, including a simple declarative SQL API and command-line interfaces, along with flexible data models for data representation, including relational and ad-hoc JSON.

  • Open: The service provides a non-proprietary SQL language, delivering innovative interoperability between standard relational and standard JSON data models. Users also have deployment options to run the same application in the cloud or on-premises with no platform lock-in.

  • Easy: With an available SDK and support for popular languages including Python, Node.js, and Java, Oracle offers a no-hassle application development solution to easily connect to Oracle Autonomous NoSQL Database.

[1] Cost reference is for read-only workloads.

Contact Info
Dan Muñoz
Oracle
+1.650.506.2904
dan.munoz@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Dan Muñoz

  • +1.650.506.2904

Nicole Maloney

  • +1.650.506.0806

Find the min date from a list of date without using a sub select query

Tom Kyte - Mon, 2018-10-01 21:06
Hi, I am working on a project and I need some help with the query. I have the following data in my existing table: ----------------- S_ID S_DATE ----------------- A 01-FEB-12 A 14-MAR-12 A 28-APR-14 A 28-MAR...
Categories: DBA Blogs

Live Three-day Fluid Training Event in Seattle Dec 4

Jim Marion - Mon, 2018-10-01 18:55

Are you interested in learning PeopleTools Fluid? Have you already taken a Fluid training course, but still don't feel comfortable with Fluid? Please join us in beautiful downtown Seattle from December 4th through the 6th to learn all about PeopleTools Fluid. Our curriculum starts with Fluid navigation, works its way into Fluid page construction, and finishes with advanced topics such as site-specific CSS3, JavaScript, and event mapping. This course is packed with best practices and tips.

Through the material in this course you will become comfortable with Fluid and proficient with Fluid development. You will learn the skills necessary to apply PeopleSoft-specific CSS and how to write your own custom CSS. You will learn several shortcuts for converting existing custom Classic pages to Fluid.

With most of HCM Employee Self Service Classic set to retire on December 31st of this year (My Oracle Support document 1348959.1), there is no better time to learn Fluid. Space is limited and the early bird discount expires soon, so register now to ensure a seat in the best Fluid class available!

node-oracledb 3.0 Introduces SODA Document Storage

Christopher Jones - Mon, 2018-10-01 17:11


Release announcement: Node-oracledb 3.0.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: Simple Oracle Document Access (SODA) preview, Connection Pool draining, Call timeouts.

 

Node-oracledb 3 has been released, which is very exciting. This release adds support for some Oracle Database and Oracle Client 18c features.

As with earlier versions, node-oracledb 3 is usable with Oracle Client libraries 11.2 onwards. This allows it to connect to Oracle Database 9.2 or later, depending on the client version. But only when using Oracle Client 18.3 libraries and connected to Oracle Database 18.3 will you get the latest and greatest set of Oracle features, which we are pleased to bring you.

Here are the highlights of this node-oracledb 3.0 release:

  • Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See the section lower down for examples.

  • A new drainTime argument to pool.close() allows pools to be force-closed after a specified number of seconds. This feature was a contribution from Danilo Silva. Thanks Danilo!

    When a pool is closed with a given drainTime, any subsequent getConnection() calls will fail, but connections currently in use will continue working. This allows code to complete and outstanding transactions to be committed. When no connections are in use, or at the end of the drain time (whichever occurs first), the pool and all its connections will be forcibly closed.

    Although the words 'forcibly closed' seem harsh, the drain time actually allows a more graceful shutdown of applications: users get clear messages that the pool is closing (or has closed), and database-side sessions are cleanly freed without waiting for them to time out. The drainTime can also be zero, forcing the immediate close of all sessions - this is handy when you want to kill an app but be nice to the database. (A combined usage sketch for drainTime and callTimeout follows this list.)

  • Installation of the pre-built node-oracledb binaries got a bit easier with basic proxy authentication support. Also the 'npm config' proxy value is now used when installing if there are no proxy environment variables set. These changes were a contribution from Cemre Mengu. Thank you Cemre!

  • Added a connection.callTimeout property to interrupt long running database calls. This is available when node-oracledb 3 is using Oracle Client libraries version 18.1, or later, regardless of Oracle Database version.

    The call timeout is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution. For background, the main code layer beneath node-oracledb's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by node-oracledb, zero or more 'round-trips' to the database can occur - calling the database and getting a response back.

    The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in node-oracledb before or after the completion of each round-trip is not counted.

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and an error is returned.

    • In the case where a node-oracledb operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout.

    • If no round-trip is required, the operation will never be interrupted.

    When callTimeout is exceeded, node-oracledb attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds.

    If the cleanup was successful, a "DPI-1067: call timeout of N ms exceeded with ORA-XXX" error will be returned and the application can continue to use the connection.

    For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed.

  • On Windows, node-oracledb will now attempt to load the Oracle Client libraries from the 'node_modules\oracledb\build\Release' directory before doing the standard Windows library directory search i.e. of the PATH directories.

    This new feature could be useful if you are bundling up applications on Windows and want to include the Oracle Instant Client. By putting the client libraries in the 'node_modules\oracledb\build\Release' directory there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle Client libraries in PATH.

  • poolPingInterval functionality has been 're-enabled' when using the connection pool with Oracle Client libraries 12.2, or later. Previously it was deliberately not enabled with these versions, since an internal check in those Oracle clients is very efficient for seeing if the network has dropped out. However users were unhappy that the internal check does not identify connections that are unusable because they have exceeded database session resource limits (e.g. return ORA-02396 when used), or have been explicitly closed by DBAs (e.g. return ORA-00028). This is where poolPingInterval helps.

    This change can make your apps seem more highly available, but there is a drawback: your apps may be silently reconnecting more than is optimal, and you might not be aware of connection storms if a large pool needs to be re-established. You should monitor AWR reports to see if connections are being created too frequently, and then work with your network administrators and DBAs to prevent idle sessions being killed.
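
As referenced in the drainTime item above, here is a minimal usage sketch combining pool draining and call timeouts. It is illustrative only: the credentials, connect string, table name and timeout values are placeholders rather than anything from this announcement.

const oracledb = require('oracledb');

async function run() {
  // Create a connection pool (user, password and connect string are placeholders).
  const pool = await oracledb.createPool({
    user: 'hr',
    password: process.env.HR_PASSWORD,
    connectString: 'localhost/orclpdb1'
  });

  const connection = await pool.getConnection();

  // Interrupt any single round-trip that takes longer than 5 seconds.
  // Requires Oracle Client 18.1 or later libraries.
  connection.callTimeout = 5000;

  try {
    const result = await connection.execute('select count(*) from employees');
    console.log(result.rows);
  } finally {
    await connection.close();
  }

  // Drain the pool: new getConnection() calls fail immediately, connections
  // currently in use get up to 10 seconds to finish, then the pool is closed.
  await pool.close(10);
}

run().catch(err => console.error(err));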

These are just the highlights. For other changes and improvements see the CHANGELOG. But read on to hear more about SODA . . . .

Simple Oracle Document Access (SODA) in node-oracledb

Oracle Simple Document Access (SODA) is a set of NoSQL-style APIs that let you create and store collections of documents (in particular JSON) in Oracle Database, retrieve them, and query them, without needing to know SQL. SODA support is now available in node-oracledb 3 when using Oracle Database 18.3 and Oracle Client 18.3, or higher. SODA APIs are also available for Python, C, Java, PL/SQL and via REST, so it is widely accessible and bound to be a useful tool in your data toolkit. We are currently labelling node-oracledb 3 SODA APIs as a 'preview' but, with a future version of the Oracle Client libraries, this will change.

The class diagram of node-oracledb shows the separation of the relational and SODA worlds:

node-oracledb class overview diagram

In reality, SODA is backed by Oracle Database tables, providing a well known, secure, and efficient storage solution. You could access those tables via SQL but this would rarely be needed, perhaps for some advanced Oracle Database functionality such as analytics for reporting.

Instead you will almost certainly just use the new classes and methods. Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents (e.g. JSON) in them. Some basic examples are:

// Create the parent object for SODA.
soda = await connection.getSodaDatabase();

// Create a new SODA collection, if it doesn't exist.
// This will open an existing collection, if the name is already in use.
collection = await soda.createCollection("mycollection");

// Insert a document.
// A system generated key is created by default.
content = {name: "Matilda", address: {city: "Melbourne"}};
doc = await collection.insertOneAndGet(content);
key = doc.key;
console.log("The key of the new SODA document is: ", key);

You can then get documents back via a key look up, or by a search. A key lookup is straightforward:

// Fetch the document back
doc = await collection.find().key(key).getOne();  // A SodaDocument
content = doc.getContent();                       // A JavaScript object
console.log('Retrieved SODA document as an object:');
console.log(content);

For documents that can be converted to JSON you can alternatively get them as a string:

content = doc.getContentAsString();  // A JSON string
console.log('Retrieved SODA document as a string:');
console.log(content);

The find() method is an operation builder, with methods that allow progressively limiting criteria to be set, reducing the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() or count(), amongst others.

With JSON documents, a complete filtering specification language can be used for query-by-example (QBE) to find documents. A brief example is:

// Find all documents with city names starting with 'S'
documents = await collection.find()
              .filter({"address.city": {"$like": "S%"}})
              .getDocuments();
for (let i = 0; i < documents.length; i++) {
  content = documents[i].getContent();
  console.log(' city is: ', content.address.city);
}

A runnable example is in soda1.js.

Check out the node-oracledb SODA manual section and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity.

You don't have Oracle Database 18.3 yet? Get it from here. Or you may be interested in using JSON with older versions of Oracle Database.

Summary

We are continuing to introduce important features to node-oracledb to make your development experience better. We have a long wishlist and will continue our work. Contributions from the community are always welcome, and we thank the people who have contributed to this and previous releases of node-oracledb for helping to make it better.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub or Slack (link to join Slack).

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Follow us on Twitter or Facebook.

Using the Query Cache for good performance in #Exasol

The Oracle Instructor - Mon, 2018-10-01 08:52

The result of a query can be cached in Exasol, to the effect that repeated identical queries complete in almost no time. This feature was introduced in version 5 and is enabled by default.

SQL_EXA> select session_value,system_value  from exa_parameters where parameter_name='QUERY_CACHE';
EXA: select session_value,system_value  from exa_parameters where parameter...

SESSION_VALUE        SYSTEM_VALUE
-------------------- --------------------
ON                   ON

1 row in resultset.

The Query Cache can be (de-)activated on the session level as well as on the system level.

SQL_EXA> alter session set query_cache='off';
EXA: alter session set query_cache='off';

Rows affected: 0
SQL_EXA> select object_name,mem_object_size/1024/1024 as mb from exa_user_object_sizes where object_name='T';
EXA: select object_name,mem_object_size/1024/1024 as mb from exa_user_objec...

OBJECT_NAME          MB
-------------------- ----------------------------------
T                             1537.49641990661621093750

1 row in resultset.

SQL_EXA> select count(*) from t;
EXA: select count(*) from t;

COUNT(*)
---------------------
            320000000

1 row in resultset.
SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 4
Elapsed: 00:00:03.022

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 5
Elapsed: 00:00:02.620

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 6
Elapsed: 00:00:02.724

Without using the Query Cache the repeated query takes roughly 3 seconds.

SQL_EXA> alter session set query_cache='on';
EXA: alter session set query_cache='on';

Rows affected: 0


Timing element: 7
Elapsed: 00:00:00.008

SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 8
Elapsed: 00:00:00.009

Lightning fast! If statement profiling is enabled, QUERY CACHE RESULT shows as PART_NAME in tables like EXA_USER_PROFILE_LAST_DAY.
Also EXECUTION_MODE from EXA_SQL_LAST_DAY shows the usage of the Query Cache:

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
                   35

1 row in resultset.

SQL_EXA> show autocommit;
AUTOCOMMIT = "ON"
SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=33 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         0.005                     1 CACHED

1 row in resultset.
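
To actually see the QUERY CACHE RESULT part mentioned above, profiling has to be switched on for the session. The following is a rough sketch rather than a continuation of the output above; the profile session parameter and the EXA_USER_PROFILE_LAST_DAY columns used here are assumptions to check against your Exasol version:

alter session set profile='on';

select count(*) from t where numcol2=42;

flush statistics;
commit;

-- The cached execution should show a QUERY CACHE RESULT part.
select stmt_id, part_id, part_name, object_name, duration
from   exa_user_profile_last_day
where  session_id = current_session
order  by stmt_id, part_id;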

If DML changes the table, the result in the Query Cache is invalidated automatically:

SQL_EXA> update t set numcol2=1 where rowid in (select rowid from t limit 1);
EXA: update t set numcol2=1 where rowid in (select rowid from t limit 1);
Rows affected: 1

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
888896

1 row in resultset.

Timing element: 10
Elapsed: 00:00:02.870

SQL_EXA> set timing off;
SQL_EXA> select current_statement;
EXA: select current_statement;

CURRENT_STATEMENT
---------------------
51

1 row in resultset.

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------

0 rows in resultset.

There’s a 1-minute interval for syncing the statistics tables. It can be triggered manually, though:

SQL_EXA> flush statistics;
EXA: flush statistics;

Rows affected: 0

SQL_EXA> commit;
EXA: commit;

Rows affected: 0

SQL_EXA> select command_name,duration,row_count,execution_mode from exa_sql_last_day where stmt_id=49 and session_id=current_session;
EXA: select command_name,duration,row_count,execution_mode from exa_sql_las...

COMMAND_NAME                             DURATION    ROW_COUNT             EXECUTION_MODE
---------------------------------------- ----------- --------------------- --------------------
SELECT                                         2.862                     1 EXECUTE

1 row in resultset.

The runtime and the EXECUTION_MODE value EXECUTE confirm that the Query Cache entry was invalidated by the UPDATE above. Now it’s automatically refreshed:

SQL_EXA> set timing on;
SQL_EXA> select count(*) from t where numcol2=42;
EXA: select count(*) from t where numcol2=42;

COUNT(*)
---------------------
               888896

1 row in resultset.


Timing element: 11
Elapsed: 00:00:00.010
Categories: DBA Blogs

Oracle Food and Beverage Enables Continued Innovation with Omnivore Integration

Oracle Press Releases - Mon, 2018-10-01 07:00
Press Release
Oracle Food and Beverage Enables Continued Innovation with Omnivore Integration
Easy Integrations Allow Restaurant Operators to Extend Value of Point-of-Sale Investment with Third Party Solutions

Redwood Shores, Calif.—Oct 1, 2018

Oracle Food and Beverage has collaborated with Omnivore, a universal point of sale (POS) connection for restaurant app development and a Gold level member of Oracle PartnerNetwork (OPN), to encourage restaurant operators to develop differentiated customer experiences and further streamline operations with their Oracle Food and Beverage investment. With this Omnivore connection, restaurant operators can easily take advantage of an ecosystem of third party solutions through a single point of integration into the Oracle Food and Beverage Simphony restaurant management platform.   

“Guests are demanding engaging and personalized experiences from dining establishments and we believe continuous implementation of technology is essential to helping restaurant operators keep pace,” said Chris Adams, vice president of strategy, Oracle Food and Beverage. “Through our collaboration with Omnivore, Oracle Food and Beverage is accelerating innovation in the restaurant community by making it easier than ever for operators to take advantage of the latest third party solutions.”

Omnivore connects restaurant point of sale to over 150 restaurant technologies, aligning brands with consumer, staff and support center needs in order to grow transactions while minimizing expenses.

"Together, Omnivore and Oracle Food and Beverage bring merchants a true open, best-of-breed POS technology platform, giving restaurants access to the best available technologies to meet even the most creative brands' digital needs," said Mike Wior, CEO, Omnivore.  “We’re proud to help Oracle customers navigate innovative third party solutions within a rapidly evolving digital landscape.”

Together Oracle and Omnivore are enabling operators to create a unique technology footprint that will enable restaurants to establish a competitive differentiation that reflects their guest experience. By creating a single source of integration into Oracle Food and Beverage restaurant management software and POS hardware, operators can leverage Omnivore to further streamline operations and quickly deploy new technologies faster than ever.

Contact Info
Matt Torres
Oracle
415.595.1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

About Omnivore

Revolutionizing the worlds of hospitality and retail, Omnivore connects a retailer’s point-of-sale (POS) with new technologies, driving engagement through the full lifecycle of the consumer experience. Omnivore’s cloud-based platform enables a single integration through their API, seamlessly facilitating the connections to POS systems now and into the future. The platform helps restaurants discover apps for payment, reservations, delivery, loyalty, analytics and more to connect with millions of consumers around the world. Omnivore enables access to real-time, quality consumer level point of purchase intelligence. Located in the San Francisco Bay area, Omnivore is a privately held company. Visit omnivore.io.

Talk to a Press Contact

Matt Torres

  • 415.595.1584

Convert Columns to Rows Dynamically

Tom Kyte - Mon, 2018-10-01 02:46
Hi, I have a situation where I am trying to convert rows into columns (dynamically). Here is the scenario - 1. I have a user-defined function which takes a SQL query as input and returns the result as a table type. 2. The result is ...
Categories: DBA Blogs

Insufficient privilege to access object SYS.DBMS_LOCK

Tom Kyte - Mon, 2018-10-01 02:46
Dear Tom, if we use the dbms_lock package in an anonymous block it works fine, but it is not working inside a procedure. We have gone through the following link asktom.oracle.com/~tkyte/Misc/RolesAndProcedures.html and even tried with '...
Categories: DBA Blogs

Alter Table

Tom Kyte - Mon, 2018-10-01 02:46
Hi Tom, I want to ALTER TABLE by adding columns to it; if the columns already exist in the table then it doesn't add them, else it adds the columns. How can I do that? Regards, Adil
Categories: DBA Blogs

Lizok's Bookshelf

Greg Pavlik - Sun, 2018-09-30 17:34
The first of Eugene Vodolazkin's novels translated to English was, of course, Laurus, which ranks as one of the significant literary works of the current century. I was impressed by the translator's ability to convey not just a feel for what I presume the original has, but a kind of "other-time-yet-our-timeness" that seems an essential part of the author's objective. I recently picked up Vodolazkin's Aviator and thought to look up the translator as well. I was delighted to find her blog on modern Russian literature, which can be found here:

http://lizoksbooks.blogspot.com/2018/09/the-2018-nose-award-longlist.html

Case Study

Jonathan Lewis - Sun, 2018-09-30 13:59

A question about reading execution plans and optimising queries arrived on the ODC database forum a little while ago; the owner says the following statement is taking 14 minutes to return 30,000 rows and wants some help understanding why.

If you look at the original posting you’ll see that we’ve been given the text of the query and the execution plan including rowsource execution stats. There’s an inconsistency between the supplied information and the question asked, and I’ll get back to that shortly, but to keep this note fairly short I’ve excluded the 2nd half of the query (which is a UNION ALL) because the plan says the first part of the query took 13 minutes and 20 seconds and the user is worried about a total of 14 minutes.

SELECT /*+ gather_plan_statistics*/ DISTINCT
                rct.org_id,
                hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1 order_number,
                rct.customer_trx_id,
                rct.trx_number,
                rct.trx_date,
                rctd.gl_date,
                rct.creation_date,
                rctl.line_number,
                rct.invoice_currency_code inv_currency,
                (
                       SELECT SUM (rct_1.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_1
                       WHERE  rct_1.customer_trx_id = rct.customer_trx_id
                       AND    rct_1.line_type = 'LINE') inv_net_amount,
                (
                       SELECT SUM (rct_2.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_2
                       WHERE  rct_2.customer_trx_id = rct.customer_trx_id
                       AND    rct_2.line_type = 'TAX') inv_tax_amount,
                (
                       SELECT SUM (rct_3.extended_amount)
                       FROM   apps.ra_customer_trx_lines_all rct_3
                       WHERE  rct_3.customer_trx_id = rct.customer_trx_id) inv_gross_amount,
                gll.currency_code                                    func_currency,
                Round((
                        (
                        SELECT SUM (rct_4.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_4
                        WHERE  rct_4.customer_trx_id = rct.customer_trx_id
                        AND    rct_4.line_type = 'LINE')*gdr.conversion_rate),2) func_net_amount,
                Round((
                        (
                        SELECT SUM (rct_5.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_5
                        WHERE  rct_5.customer_trx_id = rct.customer_trx_id
                        AND    rct_5.line_type = 'TAX')*gdr.conversion_rate),2) func_tax_amount,
                Round((
                        (
                        SELECT SUM (rct_6.extended_amount)
                        FROM   apps.ra_customer_trx_lines_all rct_6
                        WHERE  rct_6.customer_trx_id = rct.customer_trx_id)*gdr.conversion_rate),2) func_gross_amount,
                glcc.segment1                                                                 company,
                glcc.segment2                                                                 account,
                hg.geography_name                                                             billing_country,
                gdr.conversion_rate
FROM            apps.hz_parties hzp,
                apps.hz_cust_accounts hca,
                apps.ra_customer_trx_all rct,
                apps.ra_customer_trx_lines_all rctl,
                apps.ra_cust_trx_line_gl_dist_all rctd,
                apps.gl_code_combinations_kfv glcc,
                apps.hz_cust_site_uses_all hcsua,
                apps.hz_cust_acct_sites_all hcasa,
                apps.hz_party_sites hps,
                apps.hz_locations hl,
                apps.hz_geographies hg,
                apps.gl_ledgers gll,
                apps.gl_daily_rates gdr
WHERE           hzp.party_id = hca.party_id
AND             hca.cust_account_id = rct.bill_to_customer_id
AND             hca.cust_account_id = hcasa.cust_account_id
AND             rct.customer_trx_id = rctl.customer_trx_id
AND             rctl.customer_trx_line_id = rctd.customer_trx_line_id
AND             glcc.code_combination_id = rctd.code_combination_id
AND             rct.bill_to_site_use_id = hcsua.site_use_id
AND             hcsua.cust_acct_site_id = hcasa.cust_acct_site_id
AND             hcasa.party_site_id = hps.party_site_id
AND             hps.location_id = hl.location_id
AND             hl.country = hg.country_code
AND             hg.geography_type = 'COUNTRY'
AND             rctl.line_type = 'TAX'
AND             gll.ledger_id = rct.set_of_books_id
AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             gdr.conversion_type = 'Corporate'
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')
AND             glcc.segment1 = '2600'
AND             glcc.segment2 = '206911'
GROUP BY        hzp.party_name,
                hca.account_number,
                rct.interface_header_attribute1,
                rct.trx_number,
                rct.trx_date,
                rct.creation_date,
                rctl.line_number,
                rctl.unit_selling_price,
                rct.org_id,
                rctd.gl_date,
                rct.customer_trx_id,
                glcc.segment1,
                glcc.segment2,
                hg.geography_name,
                rct.invoice_currency_code,
                gll.currency_code,
                gdr.conversion_rate 

We note that there are six scalar subqueries in the text I’ve reported – and they form two groups of three, and the difference between the two groups is that one group is multiplied by a conversion rate while the other isn’t; moreover in each group the three subqueries are simply querying subsets of the same correlated data set. So it looks as if all 6 scalar subqueries could be eliminated and replaced by the inclusion of an aggregate view in the from clause and the projection of 6 columns from that view.
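
To make that concrete, here is a sketch of the aggregation such a view would perform; this is my illustration of the idea rather than anything supplied by the OP, and the column aliases are invented:

SELECT   customer_trx_id,
         SUM(CASE WHEN line_type = 'LINE' THEN extended_amount END) AS inv_net_amount,
         SUM(CASE WHEN line_type = 'TAX'  THEN extended_amount END) AS inv_tax_amount,
         SUM(extended_amount)                                       AS inv_gross_amount
FROM     apps.ra_customer_trx_lines_all
GROUP BY customer_trx_id

Joined once to rct.customer_trx_id in the from clause, this view supplies the three invoice amounts directly, and the three functional-currency amounts are simply these values multiplied by gdr.conversion_rate and rounded - six correlated look-ups replaced by a single aggregation.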

However, before pursuing that option, take a look at the plan with the rowsource execution stats – where is the time going ?


-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                                           |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   1 |  UNION-ALL                                                 |                              |      1 |        |    501 |00:13:20.17 |    3579K|  
|   2 |   HASH UNIQUE                                              |                              |      1 |      1 |    501 |00:13:20.17 |    3579K|  
|   3 |    HASH GROUP BY                                           |                              |      1 |      1 |  19827 |00:13:20.15 |    3579K|  
|   4 |     NESTED LOOPS                                           |                              |      1 |        |  21808 |00:13:10.26 |    3579K|  
|   5 |      NESTED LOOPS                                          |                              |      1 |      1 |  21808 |00:13:10.11 |    3578K|  
|   6 |       NESTED LOOPS OUTER                                   |                              |      1 |      1 |  21808 |00:13:09.90 |    3576K|  
|   7 |        NESTED LOOPS OUTER                                  |                              |      1 |      1 |  21808 |00:13:09.25 |    3501K|  
|   8 |         NESTED LOOPS OUTER                                 |                              |      1 |      1 |  21808 |00:13:08.48 |    3426K|  
|   9 |          NESTED LOOPS OUTER                                |                              |      1 |      1 |  21808 |00:13:07.66 |    3333K|  
|  10 |           NESTED LOOPS OUTER                               |                              |      1 |      1 |  21808 |00:13:06.92 |    3258K|  
|  11 |            NESTED LOOPS OUTER                              |                              |      1 |      1 |  21808 |00:13:06.08 |    3183K|  
|  12 |             NESTED LOOPS                                   |                              |      1 |      1 |  21808 |00:13:04.69 |    3090K|  
|  13 |              NESTED LOOPS                                  |                              |      1 |      1 |  21808 |00:13:05.75 |    3026K|  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  16 |                 NESTED LOOPS                               |                              |      1 |    351 |  33459 |00:00:03.67 |    1025K|  
|  17 |                  NESTED LOOPS                              |                              |      1 |    351 |  33459 |00:00:03.06 |     926K|  
|  18 |                   NESTED LOOPS                             |                              |      1 |    351 |  33459 |00:00:02.47 |     827K|  
|* 19 |                    HASH JOIN                               |                              |      1 |    351 |  33459 |00:00:01.90 |     730K|  
|  20 |                     TABLE ACCESS FULL                      | GL_LEDGERS                   |      1 |     38 |     39 |00:00:00.01 |      15 |  
|  21 |                     NESTED LOOPS                           |                              |      1 |        |  33459 |00:00:01.75 |     730K|  
|  22 |                      NESTED LOOPS                          |                              |      1 |    351 |  33459 |00:00:01.44 |     696K|  
|  23 |                       NESTED LOOPS                         |                              |      1 |    351 |  33459 |00:00:01.11 |     646K|  
|* 24 |                        HASH JOIN                           |                              |      1 |    385 |  33459 |00:00:00.40 |     526K|  
|* 25 |                         TABLE ACCESS BY INDEX ROWID BATCHED| GL_CODE_COMBINATIONS         |      1 |     35 |      1 |00:00:00.01 |     108 |  
|* 26 |                          INDEX RANGE SCAN                  | GL_CODE_COMBINATIONS_N2      |      1 |    499 |     77 |00:00:00.01 |       3 |  
|* 27 |                         TABLE ACCESS BY INDEX ROWID BATCHED| RA_CUST_TRX_LINE_GL_DIST_ALL |      1 |    651K|   1458K|00:00:02.22 |     526K|  
|* 28 |                          INDEX RANGE SCAN                  | RA_CUST_TRX_LINE_GL_DIST_N2  |      1 |    728K|   1820K|00:00:01.60 |   11147 |  
|* 29 |                        TABLE ACCESS BY INDEX ROWID         | RA_CUSTOMER_TRX_LINES_ALL    |  33459 |      1 |  33459 |00:00:00.53 |     119K|  
|* 30 |                         INDEX UNIQUE SCAN                  | RA_CUSTOMER_TRX_LINES_U1     |  33459 |      1 |  33459 |00:00:00.31 |   86364 |  
|* 31 |                       INDEX UNIQUE SCAN                    | RA_CUSTOMER_TRX_U1           |  33459 |      1 |  33459 |00:00:00.21 |   49850 |  
|  32 |                      TABLE ACCESS BY INDEX ROWID           | RA_CUSTOMER_TRX_ALL          |  33459 |      1 |  33459 |00:00:00.20 |   33459 |  
|  33 |                    TABLE ACCESS BY INDEX ROWID             | HZ_CUST_ACCOUNTS             |  33459 |      1 |  33459 |00:00:00.42 |   97887 |  
|* 34 |                     INDEX UNIQUE SCAN                      | HZ_CUST_ACCOUNTS_U1          |  33459 |      1 |  33459 |00:00:00.24 |   64428 |  
|  35 |                   TABLE ACCESS BY INDEX ROWID              | HZ_PARTIES                   |  33459 |      1 |  33459 |00:00:00.44 |   98783 |  
|* 36 |                    INDEX UNIQUE SCAN                       | HZ_PARTIES_U1                |  33459 |      1 |  33459 |00:00:00.26 |   65175 |  
|  37 |                  TABLE ACCESS BY INDEX ROWID               | HZ_CUST_SITE_USES_ALL        |  33459 |      1 |  33459 |00:00:00.46 |   98374 |  
|* 38 |                   INDEX UNIQUE SCAN                        | HZ_CUST_SITE_USES_U1         |  33459 |      1 |  33459 |00:00:00.28 |   64915 |  
|* 39 |                 TABLE ACCESS BY INDEX ROWID                | HZ_CUST_ACCT_SITES_ALL       |  33459 |      1 |  33459 |00:00:00.45 |   98195 |  
|* 40 |                  INDEX UNIQUE SCAN                         | HZ_CUST_ACCT_SITES_U1        |  33459 |      1 |  33459 |00:00:00.26 |   64736 |  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
|  43 |               TABLE ACCESS BY INDEX ROWID                  | HZ_PARTY_SITES               |  21808 |      1 |  21808 |00:00:00.35 |   64339 |  
|* 44 |                INDEX UNIQUE SCAN                           | HZ_PARTY_SITES_U1            |  21808 |      1 |  21808 |00:00:00.23 |   42531 |  
|  45 |              TABLE ACCESS BY INDEX ROWID                   | HZ_LOCATIONS                 |  21808 |      1 |  21808 |00:00:00.33 |   64353 |  
|* 46 |               INDEX UNIQUE SCAN                            | HZ_LOCATIONS_U1              |  21808 |      1 |  21808 |00:00:00.18 |   42545 |  
|  47 |             VIEW PUSHED PREDICATE                          | VW_SSQ_1                     |  21808 |      1 |  21808 |00:00:01.17 |   93476 |  
|  48 |              SORT GROUP BY                                 |                              |  21808 |      1 |  21808 |00:00:01.06 |   93476 |  
|  49 |               TABLE ACCESS BY INDEX ROWID BATCHED          | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.84 |   93476 |  
|* 50 |                INDEX RANGE SCAN                            | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.36 |   59938 |  
|  51 |            VIEW PUSHED PREDICATE                           | VW_SSQ_2                     |  21808 |      1 |  21808 |00:00:00.69 |   74433 |  
|  52 |             SORT GROUP BY                                  |                              |  21808 |      1 |  21808 |00:00:00.59 |   74433 |  
|  53 |              TABLE ACCESS BY INDEX ROWID BATCHED           | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.49 |   74433 |  
|* 54 |               INDEX RANGE SCAN                             | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.24 |   59903 |  
|  55 |           VIEW PUSHED PREDICATE                            | VW_SSQ_3                     |  21808 |      1 |  21808 |00:00:00.61 |   74852 |  
|  56 |            SORT GROUP BY                                   |                              |  21808 |      1 |  21808 |00:00:00.51 |   74852 |  
|  57 |             TABLE ACCESS BY INDEX ROWID BATCHED            | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.38 |   74852 |  
|* 58 |              INDEX RANGE SCAN                              | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.19 |   59148 |  
|  59 |          VIEW PUSHED PREDICATE                             | VW_SSQ_4                     |  21808 |      1 |  21808 |00:00:00.70 |   93490 |  
|  60 |           SORT GROUP BY                                    |                              |  21808 |      1 |  21808 |00:00:00.61 |   93490 |  
|  61 |            TABLE ACCESS BY INDEX ROWID BATCHED             | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |     16 |    145K|00:00:00.63 |   93490 |  
|* 62 |             INDEX RANGE SCAN                               | XXC_CUSTOMER_GETPAID         |  21808 |     16 |    145K|00:00:00.25 |   59950 |  
|  63 |         VIEW PUSHED PREDICATE                              | VW_SSQ_5                     |  21808 |      1 |  21808 |00:00:00.63 |   74427 |  
|  64 |          SORT GROUP BY                                     |                              |  21808 |      1 |  21808 |00:00:00.54 |   74427 |  
|  65 |           TABLE ACCESS BY INDEX ROWID BATCHED              | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  92201 |00:00:00.44 |   74427 |  
|* 66 |            INDEX RANGE SCAN                                | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  92201 |00:00:00.21 |   59900 |  
|  67 |        VIEW PUSHED PREDICATE                               | VW_SSQ_6                     |  21808 |      1 |  21808 |00:00:00.59 |   74846 |  
|  68 |         SORT GROUP BY                                      |                              |  21808 |      1 |  21808 |00:00:00.50 |   74846 |  
|  69 |          TABLE ACCESS BY INDEX ROWID BATCHED               | RA_CUSTOMER_TRX_LINES_ALL    |  21808 |      8 |  53060 |00:00:00.35 |   74846 |  
|* 70 |           INDEX RANGE SCAN                                 | XXC_CUSTOMER_GETPAID         |  21808 |     12 |  53060 |00:00:00.17 |   59144 |  
|* 71 |       INDEX RANGE SCAN                                     | HZ_GEOGRAPHIES_N11           |  21808 |   5812 |  21808 |00:00:00.13 |    2684 |  
|  72 |      TABLE ACCESS BY INDEX ROWID                           | HZ_GEOGRAPHIES               |  21808 |    168 |  21808 |00:00:00.07 |     620 |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

Let’s start by raising some concerns about the quality of information available.

First, the OP says it takes 14 minutes to return 30,000 rows: but the top line of the plan says it has taken 13 minutes and 20 seconds to return the first 501 rows, and if we look a little further down the plan operation 3 (Hash Group By) reports 00:13:20.15 to aggregate down to 19,827 rows. So this half of the plan cannot return more than 19,827 rows, and the half I have discarded (for the moment) must be returning the other 10,000+ rows. The information we have is incomplete.

Of course you may think that whatever the rest of the plan does is fairly irrelevant – it’s only going to be responsible for at most another 40 seconds of processing – except my previous experience of rowsource execution statistics tells me that when you do a large number of small operations the times reported can be subject to fairly large rounding errors, and that enabling the measurement can increase the execution time by a factor of three or four. It’s perfectly feasible that this half of the query is actually the faster half under normal run-time circumstances but runs much more slowly (with a much higher level of CPU utilisation) when rowsource execution statistics are enabled. So let’s not get too confident.
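(As an aside, for anyone who wants to reproduce this kind of report: here’s a minimal sketch of how rowsource execution statistics are usually captured – the trivial count against all_objects is just a stand-in for the statement you actually want to examine.)

alter session set statistics_level = all;

select  /*+ gather_plan_statistics */ count(*)
from    all_objects;

-- Report the last execution of the last statement run by this session,
-- including the Starts, A-Rows, A-Time and Buffers columns:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));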

With that warning in mind, what can we see in this half of the plan?

Big picture: the inline scalar subqueries have disappeared. In 12c the optimizer can unnest scalar subqueries in the select list and turn them into outer joins, and we can see that there are 6 “Nested Loop Outer” operations, corresponding to 6 “View Pushed Predicate” operations against views labelled VW_SSQ_1 through to VW_SSQ_6 (SSQ = Scalar Sub Query ?). This goes back to my earlier comment – a person could probably rewrite the 6 scalar subqueries as a single aggregate view in the from clause: the optimizer isn’t quite clever enough to manage that in this case, but in simpler cases it might be able to do exactly that.
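To make the idea of the manual rewrite concrete – and this is only a schematic sketch with simplified column lists, not the OP’s actual query – you replace N scalar subqueries against the same table with one aggregate inline view that is outer joined once:

-- Before (schematic): one scalar subquery per required aggregate
select  rct.trx_number,
        (select sum(l.extended_amount)
           from ra_customer_trx_lines_all l
          where l.customer_trx_id = rct.customer_trx_id
            and l.line_type       = 'LINE')  line_amount,
        (select sum(l.extended_amount)
           from ra_customer_trx_lines_all l
          where l.customer_trx_id = rct.customer_trx_id
            and l.line_type       = 'TAX')   tax_amount
from    ra_customer_trx_all rct;

-- After (schematic): one aggregate view, scanned once and outer joined once
select  rct.trx_number,
        v.line_amount,
        v.tax_amount
from    ra_customer_trx_all rct
left outer join (
        select  customer_trx_id,
                sum(case when line_type = 'LINE' then extended_amount end) line_amount,
                sum(case when line_type = 'TAX'  then extended_amount end) tax_amount
        from    ra_customer_trx_lines_all
        group by customer_trx_id
        ) v
on      v.customer_trx_id = rct.customer_trx_id;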

Big picture 2: most of the 13 minutes 20 seconds appears at operation 14 as it processes the 33,459 rows supplied to it from the 4.33 seconds of work done by operation 15 and its descendants. Reducing this part of the execution plan to the smallest relevant section we get the following:

-----------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                                                  | Name                         | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  
-----------------------------------------------------------------------------------------------------------------------------------------------------  
|  14 |               NESTED LOOPS                                 |                              |      1 |      1 |  21808 |00:13:03.30 |    2961K|  
|  15 |                NESTED LOOPS                                |                              |      1 |      1 |  33459 |00:00:04.33 |    1123K|  
|  41 |                TABLE ACCESS BY INDEX ROWID BATCHED         | GL_DAILY_RATES               |  33459 |      1 |  21808 |00:12:44.59 |    1838K|  
|* 42 |                 INDEX RANGE SCAN                           | GL_DAILY_RATES_U1            |  33459 |      1 |  21808 |00:13:08.16 |    1837K|  
-----------------------------------------------------------------------------------------------------------------------------------------------------  

For each row supplied by operation 15 Oracle calls operation 41, which calls operation 42 to do an index range scan to supply a set of rowids so that operation 41 can access a table and return rows. Apparently the total time spent by operation 41 waiting for operation 42 to return rowids and then doing its own work is 12 minutes 44 seconds, while the range scans alone (all 33,459 of them) take 13 minutes and 8 seconds. Remember, though, that “lots of small operations = scope for rounding errors” when you look at these timings. Despite the inconsistency between the timings for operations 41 and 42, it’s reasonable to conclude that, between them, that’s where most of the execution time went.

Two questions – (a) can we refine our analysis of how the time is split between the two operations, and (b) why do these lines take so much time?

Check the Starts and the A-Rows (reminder: for comparison, we expect A-Rows to be approximately E-Rows * Starts): for both operations we see 33,459 starts and 21,808 rows. The index range scans return (on average) a single rowid about two-thirds of the time, and every time a range scan returns a rowid the corresponding row is returned from the table. (If you check the Id column there’s no asterisk against operation 41, so no extra predicate is applied as Oracle accesses the table row – but even if there were an extra predicate we’d still be happy to infer that if 21,808 rowids returned from operation 42 turn into 21,808 rows returned from the table then there are no wasted accesses to the table.)

Now look at the Buffers for the index range scan – 1.837M: that’s roughly 56 buffers per range scan – that’s a lot of index to range through to find one rowid, which is a good clue that perhaps we do a lot of work with each Start and really do use up a lot of CPU on this operation. Let’s see what the Predicate Section of the plan tells us about this range scan:


Predicate Information (identified by operation id):  
---------------------------------------------------  
  42 - access("GDR"."FROM_CURRENCY"="RCT"."INVOICE_CURRENCY_CODE" AND "GDR"."TO_CURRENCY"="GLL"."CURRENCY_CODE" AND   
              "GDR"."CONVERSION_TYPE"='Corporate')  
       filter(("GDR"."CONVERSION_TYPE"='Corporate' AND TO_DATE(INTERNAL_FUNCTION("GDR"."CONVERSION_DATE"))=TO_DATE(INTERNAL_FUNCTION("RCTD"."  
              GL_DATE"))))  

We have access predicates (things which narrow down the number of index leaf blocks that we walk through) and filter predicates (tests we apply to every key entry we access). Notably, gdr.conversion_type appears as a filter predicate as well as an access predicate – and that suggests that our set of predicates has “skipped over” a column in the index: from_currency and to_currency might be the first two columns in the index, but conversion_type is then NOT the third.
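(If you want to check that guess, the column order of the index is visible in the data dictionary – the index name comes straight from the plan.)

select  column_position, column_name
from    all_ind_columns
where   index_name = 'GL_DAILY_RATES_U1'
order by column_position;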

More significantly, though, there’s a column called conversion_date in the index (maybe that’s column 3 in the index – it feels like it ought to be); but for every index entry we’ve selected from the 56 blocks we walk through we do some sort of internal conversion (or un-translated transformation) to the column then convert the result to a date to compare it with another date (similarly processed from an earlier operation). What is that “internal function” – let’s check the query:


AND             gdr.from_currency = rct.invoice_currency_code
AND             gdr.to_currency = gll.currency_code
AND             gdr.conversion_type = 'Corporate'
AND             to_date(gdr.conversion_date) = to_date(rctd.gl_date)
AND             rctd.gl_date BETWEEN To_date ('01-JAN-2018', 'DD-MON-YYYY') AND  To_date ('31-JAN-2018', 'DD-MON-YYYY')

(I’ve swapped the order of a couple of lines to highlight a detail).

The filter predicate is comparing gdr.conversion_date with rctd.gl_date – and we can probably assume that both columns really are dates because (a) the word “date” is in their names and (b) the rctd.gl_date is being compared with genuine date values in the next predicate down (and – though I haven’t shown it – the way the plan reports the next predicate proves that the column really is a date datatype).

So the predicate in the SQL applies the to_date() function to two columns that are dates – which means Oracle has to convert each date column to some default character format and then convert the result back to a date. The “internal function” is a to_char() call. Conversions between date and character formats are CPU-intensive, and we’re doing a double conversion (to_date(to_char(column_value))) on every data value in roughly 56 blocks of an index each time we call that line of the plan. It’s not surprising we spend a lot of time in that line.

Initial strategy:

Check the column types for those two columns; if they are both DATE types, decide whether or not the predicate could be modified to a simple gdr.conversion_date = rctd.gl_date (though it’s possible that something slightly more sophisticated should be used); but whatever you do, avoid the redundant conversion through character format.
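As a sketch of what that check and rewrite might look like (the dictionary query is standard; the candidate rewrites are shown as comments because the right choice depends on whether either column can carry a time component):

select  column_name, data_type
from    all_tab_columns
where   table_name  = 'GL_DAILY_RATES'
and     column_name = 'CONVERSION_DATE';

-- If both columns are DATEs always stored at midnight, the simple form is enough:
--      gdr.conversion_date = rctd.gl_date
-- If gl_date can carry a time component but conversion_date is always midnight,
-- truncate the one column rather than double-converting both:
--      gdr.conversion_date = trunc(rctd.gl_date)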

Warning

Simply eliminating the to_date() calls may change the results. Here’s a demonstration of how nasty things happen when you apply to_date() to a date:


SQL> desc t1
 Name                          Null?    Type
 ----------------------------- -------- --------------------
 D1                                     DATE
 D2                                     DATE

SQL> insert into t1 values(sysdate, sysdate + 10/86400);

1 row created.

SQL> select * from t1 where d1 = d2;

no rows selected

SQL> select * from t1 where to_date(d1) = to_date(d2);

D1        D2
--------- ---------
30-SEP-18 30-SEP-18

1 row selected.

SQL> alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss';

Session altered.

SQL> select * from t1 where to_date(d1) = to_date(d2);

no rows selected

Different users could get different results because they have different settings for their nls_date_format.

Reminder

I started my analysis with two comments about the quality of information – first, we don’t really know whether or not this half of the union all would be responsible for most of the time if rowsource execution statistics were not enabled; secondly, a large number of small operations can lead to a lot of rounding errors in the timing. There are six occurrences of unnested scalar subqueries which are all called 21,808 times – and the recorded time for all 6 of them is remarkably small given the number of executions, even allowing for the precision with which the figures are reported; it’s possible that these subqueries take a larger fraction of the total time than the plan indicates, in which case it might become necessary (rather than just nice) to do a manual unnesting and reduce the number of inline views to 3 (one for each line_type), 2 (one with, one without, conversion_rate) or just one.

Footnote

Once again I’ve spent a couple of hours writing notes to explain the thoughts that went through my mind in roughly 10 minutes of reading the original posting. It’s a measure really of how many bits of information you can pull together, and possibly discard, very quickly once you understand how many things the optimizer is capable of doing and how the execution plan tries to show you how a statement was (or will be) handled.

 

Partitioning -- 6 : Hash Partitioning

Hemant K Chitale - Sun, 2018-09-30 06:25
Unlike Range or List Partitioning, where you define the rule that identifies which Partition a row will be inserted into (based on the value in the Partition Key Column(s)), Hash Partitioning relies on Oracle applying a "hashing formula (algorithm)" to "randomly" distribute incoming rows across the available Partitions in the table.
This would be useful when you want to break up a table into smaller physical segments (maybe even into separate Tablespaces on different disks) without considering any grouping of the data.  In Date based Range Partitioning, data is grouped into different Partitions -- i.e. physical segments on disk --  based on the Date value (e.g. by Month or Year).  In List Partitioning, data is grouped based on the value in the Partition Key Column.

Here is a small example of Hash Partitioning :

SQL> create table iot_incoming_data
2 (data_item_number number,
3 data_item_key varchar2(32),
4 data_item_value varchar2(64),
5 data_item_timestamp timestamp)
6 partition by hash (data_item_number)
7 (partition p1 tablespace hash_ptn_1,
8 partition p2 tablespace hash_ptn_2,
9 partition p3 tablespace hash_ptn_3,
10 partition p4 tablespace hash_ptn_4)
11 /

Table created.

SQL>


In this definition of the table, I have "randomly" distributed incoming rows across 4 Partitions in 4 different Tablespaces.  Given the incoming "data_item_number" values (either machine generated or from a sequence), each of the 4 Partitions should end up approximately equally loaded.
(In contrast, in Date based Range Partitioning of, say, a SALES table, you might have fewer rows in older Partitions and an increasing number of rows in new Partitions as your business and Sales Volume grow over time !).

Unlike Range Partitioning, Hash Partitioning will not perform well for a "range based query"  (e.g. a range of sales dates or a range of data item numbers).  It is suitable for "equality" or "in-list" predicates.  If you do need a range based query, you would need a Global Index.
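For example, a non-partitioned index on a partitioned table is a Global index by definition – something like this (the index name is purely illustrative) :

create index iot_incoming_data_ts_ndx
on iot_incoming_data (data_item_timestamp)
/

A range query on data_item_timestamp could then use this index instead of having to probe all four Partitions.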

Note that it is advisable to use a Power of 2 for the number of Hash Partitions, so that the hashing algorithm can distribute the rows evenly across the Partitions.
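(If you don't need to name each Partition individually, the DDL also has a shorter form where you specify just the number of Partitions and, optionally, a list of Tablespaces to spread them across.  The table name below is only for illustration.)

create table iot_incoming_data_2
(data_item_number     number,
 data_item_key        varchar2(32),
 data_item_value      varchar2(64),
 data_item_timestamp  timestamp)
partition by hash (data_item_number)
partitions 4
store in (hash_ptn_1, hash_ptn_2, hash_ptn_3, hash_ptn_4)
/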

Let me demonstrate insertion of data :

SQL> insert into iot_incoming_data
2 select rownum,
3 dbms_random.string('X',16),
4 dbms_random.string('X',32),
5 systimestamp
6 from dual
7 connect by level < 10001;

10000 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> exec dbms_stats.gather_table_stats('HEMANT','IOT_INCOMING_DATA');

PL/SQL procedure successfully completed.

SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'IOT_INCOMING_DATA'
4 order by partition_position
5 /

PARTITION_NAME NUM_ROWS
------------------------------ ----------
P1 2471
P2 2527
P3 2521
P4 2481

SQL>


Note that I have inserted the 10,000 rows from a single session.  In the real world, you would have multiple sessions concurrently inserting rows into the table.
Based on the Hashing algorithm that Oracle used (note : this is internal to Oracle and we cannot use any custom algorithm), Oracle has more or less evenly distributed the incoming rows across the 4 Partitions.
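One way to confirm the distribution without relying on the gathered statistics is to map each row's rowid back to the Partition segment it sits in, using the standard dbms_rowid package and user_objects (the counts should match the NUM_ROWS figures above) :

select  uo.subobject_name  partition_name,
        count(*)           row_count
from    iot_incoming_data t,
        user_objects      uo
where   uo.object_name    = 'IOT_INCOMING_DATA'
and     uo.data_object_id = dbms_rowid.rowid_object(t.rowid)
group by uo.subobject_name
order by uo.subobject_name
/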

Let me select some random rows from the Partitions :

SQL> select data_item_number  
2 from iot_incoming_data partition (P1)
3 where rownum < 6
4 order by 1;

DATA_ITEM_NUMBER
----------------
8361
8362
8369
8379
8380

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P2)
3 where rownum < 6
4 order by 1
5 /

DATA_ITEM_NUMBER
----------------
8087
8099
8101
8105
8109

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P3)
3 where rownum < 6
4 and data_item_number < 100
5 order by 1
6 /

DATA_ITEM_NUMBER
----------------
2
5
8
18
20

SQL>
SQL> select data_item_number
2 from iot_incoming_data partition (P4)
3 where rownum < 6
4 and data_item_number between 1000 and 1100
5 order by 1
6 /

DATA_ITEM_NUMBER
----------------
1001
1002
1005
1008
1009

SQL>


(The first two queries returned rows with values greater than 8000 simply because I didn't specify a range of values as a filter and those rows came from the first few blocks that Oracle read from the buffer cache).
Note how the DATA_ITEM_NUMBER values indicate "near-random" distribution of rows across the Partitions.  It is likely that if I had created multiple sessions concurrently running inserts into the table, distribution of the rows would have been even more "random".



Categories: DBA Blogs
