Feed aggregator

SQL Monitor

Jonathan Lewis - Fri, 2018-04-06 01:50

I’ve mentioned the SQL Monitor report from time to time as a very useful way of reviewing execution plans – the feature is enabled automatically for parallel execution and for queries that are expected to take more than a few seconds to complete, and the inherent overheads of monitoring are less than the impact of enabling the rowsource execution statistics that allow you to use the ‘allstats’ format of dbms_xplan.display_cursor() to get detailed execution information for a query. The drawback to the SQL Monitor feature is that it doesn’t report predicate information. It’s also important to note that it falls under the performance and diagnostic licences: some of the available performance information comes from v$active_session_history, and the report is generated by a call to the dbms_sqltune package.

There are two basic calls: report_sql_monitor_list(), which appeared in 11.2 and produces a summary of the statements and their individual executions (from the information that is still in memory, of course), and report_sql_monitor(), which shows detailed execution plans. Here’s a simple bit of SQL*Plus code showing basic use – it lists a summary of all the statements monitored in the last half hour, then (as it stands at present) the full monitoring details of the most recently completed monitored statement:


set long 250000
set longchunksize 65536

set linesize 254
set pagesize 100
set trimspool on

set heading off

column text_line format a254

spool report_sql_monitor

select 
        dbms_sqltune.report_sql_monitor_list(
                active_since_date       => sysdate - 30 / (24*60),
                type                    => 'TEXT'
        ) text_line 
from    dual
;

select 
        dbms_sqltune.report_sql_monitor(
--              sql_id                  => '&m_sql_id',
--              start_time_filter       => sysdate - 30/(24 * 60),
--              sql_exec_id             => &m_exec_id,
                type                    =>'TEXT'
        ) text_line 
from    dual
;

spool off




Here’s a variation that reports the details of the most recently completed execution of a query with the specified SQL_ID:

set linesize 255
set pagesize 200
set trimspool on
set long 200000

column text_line format a254
set heading off

define m_sql_id = 'fssk2xabr717j'

spool rep_mon

SELECT  dbms_sqltune.report_sql_monitor(
                sql_id=> v.sql_id,
                sql_exec_id => v.max_sql_exec_id
        ) text_line
from     (
        select
                sql_id,
                max(sql_exec_id)        max_sql_exec_id
        from
                v$sql_monitor
        where
                sql_id = '&m_sql_id'
        and     status like 'DONE%'
        group by
                sql_id
        )       v
;

spool off

set heading on
set linesize 132
set pagesize 60

And a sample of the text output, which is the result of monitoring the query “select * from dba_objects” (with an arraysize of 1,000 set in SQL*Plus):


SQL Monitoring Report

SQL Text
------------------------------
select /*+ monitor */ * from dba_objects

Global Information
------------------------------
 Status              :  DONE (ALL ROWS)
 Instance ID         :  1
 Session             :  SYS (262:54671)
 SQL ID              :  7nqa1nnbav642
 SQL Execution ID    :  16777216
 Execution Started   :  04/05/2018 19:43:42
 First Refresh Time  :  04/05/2018 19:43:42
 Last Refresh Time   :  04/05/2018 19:45:04
 Duration            :  82s
 Module/Action       :  sqlplus@linux12 (TNS V1-V3)/-
 Service             :  SYS$USERS
 Program             :  sqlplus@linux12 (TNS V1-V3)
 Fetch Calls         :  93

Global Stats
===========================================================================
| Elapsed |   Cpu   |    IO    |  Other   | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
===========================================================================
|    0.31 |    0.29 |     0.00 |     0.02 |    93 |   6802 |   18 |   9MB |
===========================================================================

SQL Plan Monitoring Details (Plan Hash Value=2733869014)
=================================================================================================================================================================================
| Id |                Operation                 |       Name       |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Mem  | Activity | Activity Detail |
|    |                                          |                  | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | (Max) |   (%)    |   (# samples)   |
=================================================================================================================================================================================
|  0 | SELECT STATEMENT                         |                  |         |      |        83 |     +0 |     1 |    91314 |      |       |       |          |                 |
|  1 |   VIEW                                   | DBA_OBJECTS      |   91084 | 2743 |        83 |     +0 |     1 |    91314 |      |       |       |          |                 |
|  2 |    UNION-ALL                             |                  |         |      |        83 |     +0 |     1 |    91314 |      |       |       |          |                 |
|  3 |     TABLE ACCESS BY INDEX ROWID          | SUM$             |       1 |      |           |        |       |          |      |       |       |          |                 |
|  4 |      INDEX UNIQUE SCAN                   | I_SUM$_1         |       1 |      |           |        |       |          |      |       |       |          |                 |
|  5 |     TABLE ACCESS FULL                    | USER_EDITIONING$ |       1 |    2 |         1 |     +0 |   872 |        1 |      |       |       |          |                 |
|  6 |      TABLE ACCESS BY INDEX ROWID BATCHED | OBJ$             |       1 |    3 |           |        |       |          |      |       |       |          |                 |
|  7 |       INDEX RANGE SCAN                   | I_OBJ1           |       1 |    2 |           |        |       |          |      |       |       |          |                 |
|  8 |     FILTER                               |                  |         |      |        83 |     +0 |     1 |    91312 |      |       |       |          |                 |
|  9 |      HASH JOIN                           |                  |   91394 |  211 |        83 |     +0 |     1 |    91312 |      |       |    2M |          |                 |
| 10 |       TABLE ACCESS FULL                  | USER$            |     125 |    2 |         1 |     +0 |     1 |      125 |      |       |       |          |                 |
| 11 |       HASH JOIN                          |                  |   91394 |  207 |        83 |     +0 |     1 |    91312 |      |       |    1M |   100.00 | Cpu (1)         |
| 12 |        INDEX FULL SCAN                   | I_USER2          |     125 |    1 |         1 |     +0 |     1 |      125 |      |       |       |          |                 |
| 13 |        TABLE ACCESS FULL                 | OBJ$             |   91394 |  204 |        83 |     +0 |     1 |    91312 |   13 |   9MB |       |          |                 |
| 14 |      TABLE ACCESS FULL                   | USER_EDITIONING$ |       1 |    2 |         1 |     +0 |   872 |        1 |    2 | 16384 |       |          |                 |
| 15 |      NESTED LOOPS SEMI                   |                  |       1 |    2 |           |        |       |          |      |       |       |          |                 |
| 16 |       INDEX SKIP SCAN                    | I_USER2          |       1 |    1 |           |        |       |          |      |       |       |          |                 |
| 17 |       INDEX RANGE SCAN                   | I_OBJ4           |       1 |    1 |           |        |       |          |      |       |       |          |                 |
| 18 |      TABLE ACCESS FULL                   | USER_EDITIONING$ |       1 |    2 |           |        |       |          |      |       |       |          |                 |
| 19 |     HASH JOIN                            |                  |       2 |    4 |         1 |    +82 |     1 |        1 |      |       |       |          |                 |
| 20 |      NESTED LOOPS                        |                  |       2 |    4 |         1 |    +82 |     1 |        2 |      |       |       |          |                 |
| 21 |       STATISTICS COLLECTOR               |                  |         |      |         1 |    +82 |     1 |        2 |      |       |       |          |                 |
| 22 |        TABLE ACCESS FULL                 | LINK$            |       2 |    2 |         1 |    +82 |     1 |        2 |    2 | 16384 |       |          |                 |
| 23 |       TABLE ACCESS CLUSTER               | USER$            |       1 |    1 |         1 |    +82 |     2 |        2 |      |       |       |          |                 |
| 24 |        INDEX UNIQUE SCAN                 | I_USER#          |       1 |      |         1 |    +82 |     2 |        2 |    1 |  8192 |       |          |                 |
| 25 |      TABLE ACCESS FULL                   | USER$            |       1 |    1 |           |        |       |          |      |       |       |          |                 |
=================================================================================================================================================================================


1 row selected.


In a future note I’ll show an example of using one of these reports to identify the critical performance issue with an SQL statement that was raised recently on the ODC (OTN) database forum, but I’ll just point out one detail from this report. The “Time Active(s)” column says the query ran for about 83 seconds, but the Global Stats section tells us the elapsed time was 0.31 seconds. In this case the difference between the two is the time spent passing the data to the client.
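As a side note, the 93 fetch calls reported in Global Information are consistent with the arraysize of 1,000: SQL*Plus issues a first fetch of a single row followed by fetches of arraysize rows. A quick sanity check of that arithmetic (a sketch in Python; the single-row first fetch is a long-standing SQL*Plus quirk, so treat the model as an assumption):

```python
import math

def sqlplus_fetch_calls(rows, arraysize):
    """Model of SQL*Plus fetch behaviour: the first fetch returns a
    single row, each later fetch returns up to 'arraysize' rows."""
    if rows <= 0:
        return 0
    return 1 + math.ceil((rows - 1) / arraysize)

print(sqlplus_fetch_calls(91314, 1000))  # 93, matching the report
```

For the 91,314 rows returned by the plan, that gives exactly the 93 fetch calls shown in Global Stats.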

Footnote

It is possible to force monitoring for an SQL statement with the /*+ monitor */ hint. Do be careful with this in production systems: each time the statement is executed the session will try to get the “Real-time descriptor latch”, which is a latch with no latch children, so if you monitor a lightweight statement that is called many times from many sessions you may find you lose a lot of time to latch contention and the attendant CPU spinning.

 

New Whitepaper: Solutions to EBS 12.2 Upgrade Performance Issues

Steven Chan - Thu, 2018-04-05 11:51

Our Performance Group has just released a new whitepaper:

This white paper lists performance issues that may slow down your upgrade to E-Business Suite 12.2, including:

  • Upgrade performance issues fixed in EBS 12.2.7, 12.2.0 Cumulative Update Patch 8 (CUP8), or associated patches
  • Standalone performance upgrade issues with fixes
  • Database fixes for EBS upgrade performance issues
  • Workarounds for known upgrade performance bugs

This is a companion document to a previously-published white paper:

Both whitepapers are updated regularly with the latest information available, so they would be worth bookmarking and flagging for updates.


Categories: APPS Blogs

Collaborate Preview #1: How a Chatbot Army Could Help Your Business

No matter your profession, having an assistant to provide some help would be awesome. Someone to compile data, run reports, enter information into systems, look up key details, and even order your new business cards after your promotion. Unfortunately, most professionals don’t have an actual human assistant to perform tasks like these specifically for them. Most of these tasks we all do ourselves, and while they all may be important and necessary, they take time away from more value-add tasks and those parts of your job you really enjoy.

Sales professionals, for example, all have a multitude of tasks they need to perform that actually gets them to their desired result: making the sale. They need to update their customer relationship management (CRM) system daily with new or edited opportunities, new contacts, notes from customer calls, and what their activities or tasks will be for the day. Not only are these updates important for each and every sales representative, but they are also critical for sales managers who need to review pipeline and forecast information and share it with executive or leadership teams. And while performing these updates or accessing sales data may only take about 15 minutes, if they are performed every business day over 1 year, sales reps stand to lose about 2 selling days. Multiply that number by how many sales reps your company employs, and we’re talking 15, 20 or even 30 selling days lost in a year.

Another example is the questions employees have regarding company policies and procedures, as well as the small issues they might encounter every day. Even the best employee onboarding and training programs are not going to help all employees remember vacation policies, or how to change insurance beneficiaries, or what to do if they need to reset a password. When faced with these unknowns, most employees are going to call the company help desk. They will probably get the answers they need, but it will cost them their time, and it will perpetuate the high costs to staff and maintain the company help desk.

So, is it possible within an organization to have an assistant for every employee? With chatbots or virtual assistants, the answer is yes. Purpose-built chatbots can be created for human resources to answer employee FAQs (frequently asked questions), and in sales to help update CRM systems and get sales collateral and data quickly through a conversational user interface. Fishbowl Solutions is leveraging Oracle Mobile Cloud Service, Enterprise, and its intelligent chatbot feature, to create chatbots for these use cases and more. We will be discussing how these chatbots get created, and why chatbots need to be built to satisfy specific use cases at Collaborate 2018 during this session: Rise of the Bot Army with Oracle Mobile Cloud Enterprise, which takes place on Tuesday, April 24th from 1:15 to 2:15 PM. Come hear how chatbots can help sales, marketing, customer service, human resources, and other departments cut costs, automate routine or manual tasks, and provide 24 x 7 customer and/or employee engagement. And yes, if you get that promotion, a chatbot could help you get your new business cards: https://www.youtube.com/watch?v=rOkbMiV-j0s

You can also stop by Fishbowl’s booth #848 to see a demo of Fishbowl’s Atlas intelligent chatbot. For more information on Fishbowl’s activities at Collaborate 2018, please visit this page: https://www.fishbowlsolutions.com/about/news/collaborate/

The post Collaborate Preview #1: How a Chatbot Army Could Help Your Business appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Password Validation in MySQL

Yann Neuhaus - Thu, 2018-04-05 05:05
Introduction on validate_password plugin

Since version 5.6.6 MySQL provides a security plugin named validate_password (the Password Validation Plugin); in MySQL 8.0 it was reimplemented as the validate_password component, whose dotted validate_password.* variable names are used in the examples below. The plugin tests password strength in order to improve security. The goal of this blog is to give a short overview of the functionality this plugin provides and to illustrate it with concrete examples.

As explained in the documentation, the validate_password plugin implements two capabilities:

1. The plugin checks the password against the current password policy and rejects the password if it is weak
2. The VALIDATE_PASSWORD_STRENGTH() SQL function assesses the strength of potential passwords. The function takes a password argument and returns an integer from 0 (weak) to 100 (strong).

The validate_password plugin implements three levels of password checking, described below:

  • LOW – policy tests password length only.
  • MEDIUM (Default) – policy adds the conditions that passwords must contain at least 1 numeric character, 1 lowercase character, 1 uppercase character, and 1 special (nonalphanumeric) character
  • STRONG – policy adds the condition that password substrings of length 4 or longer must not match words in the dictionary file, if one has been specified.
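To make those conditions concrete, here is a rough sketch of the LOW and MEDIUM checks (Python, purely illustrative; the real checks live inside the server, and the default thresholds mirror the variable values shown below):

```python
def passes_medium(password, length=8, mixed_case=1, numbers=1, special=1):
    """Illustrative approximation of the MEDIUM policy: the LOW length
    check plus minimum counts of digits, lower-case, upper-case and
    special (non-alphanumeric) characters."""
    return (len(password) >= length                                  # LOW check
            and sum(c.isdigit() for c in password) >= numbers
            and sum(c.islower() for c in password) >= mixed_case
            and sum(c.isupper() for c in password) >= mixed_case
            and sum(not c.isalnum() for c in password) >= special)

print(passes_medium('1234567L'))   # False: no lower-case or special char
print(passes_medium('12345!zL'))   # True
```

These two test passwords are exactly the ones used in the MEDIUM examples later in the post.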

The validate_password plugin exposes several configuration variables that can be inspected with the SHOW VARIABLES command:

SHOW VARIABLES LIKE 'validate_password.%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password.check_user_name    | ON     |
| validate_password.dictionary_file    |        |
| validate_password.length             | 8      |
| validate_password.mixed_case_count   | 1      |
| validate_password.number_count       | 1      |
| validate_password.policy             | MEDIUM |
| validate_password.special_char_count | 1      |
+--------------------------------------+--------+
7 rows in set (0,01 sec)

Tests with validate_password.policy=LOW

First, let’s set validate_password.policy to LOW to check which tests the plugin performs. At this level it should only check password length.

SET GLOBAL validate_password.policy=LOW;

SHOW VARIABLES LIKE 'validate_password.%';
+--------------------------------------+-------+
| Variable_name                        | Value |
+--------------------------------------+-------+
| validate_password.check_user_name    | ON    |
| validate_password.dictionary_file    |       |
| validate_password.length             | 8     |
| validate_password.mixed_case_count   | 1     |
| validate_password.number_count       | 1     |
| validate_password.policy             | LOW   |
| validate_password.special_char_count | 1     |
+--------------------------------------+-------+
create user 'steulet'@'localhost' identified by '1234567';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

create user 'steulet'@'localhost' identified by '12345678';
Query OK, 0 rows affected (0,01 sec)

 

Tests with validate_password.policy=MEDIUM

The MEDIUM policy adds the conditions that passwords must contain at least 1 numeric character, 1 lowercase character, 1 uppercase character, and 1 special (nonalphanumeric) character:

SET GLOBAL validate_password.policy=MEDIUM;
Query OK, 0 rows affected (0,00 sec)

SHOW VARIABLES LIKE 'validate_password.%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password.check_user_name    | ON     |
| validate_password.dictionary_file    |        |
| validate_password.length             | 8      |
| validate_password.mixed_case_count   | 1      |
| validate_password.number_count       | 1      |
| validate_password.policy             | MEDIUM |
| validate_password.special_char_count | 1      |
+--------------------------------------+--------+
7 rows in set (0.00 sec)
create user 'hueber'@'localhost' identified by '12345678';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

create user 'hueber'@'localhost' identified by '1234567L';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

create user 'hueber'@'localhost' identified by '123456zL';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

create user 'hueber'@'localhost' identified by '12345!zL';
Query OK, 0 rows affected (0.01 sec)

 

Tests with validate_password.policy=STRONG

In order to test validate_password.policy=STRONG I uploaded a password list commonly used for brute-force attacks. You can download this file from: https://github.com/danielmiessler/SecLists/blob/master/Passwords/Most-Popular-Letter-Passes.txt

SET GLOBAL validate_password.dictionary_file='/u01/mysqldata/mysqld2/PasswordList';
Query OK, 0 rows affected (0,00 sec)
SET GLOBAL validate_password.policy=strong;
Query OK, 0 rows affected (0,00 sec)
SHOW VARIABLES LIKE 'validate_password.%';
+--------------------------------------+-------------------------------------+
| Variable_name                        | Value                               |
+--------------------------------------+-------------------------------------+
| validate_password.check_user_name    | ON                                  |
| validate_password.dictionary_file    | /u01/mysqldata/mysqld2/PasswordList |
| validate_password.length             | 8                                   |
| validate_password.mixed_case_count   | 1                                   |
| validate_password.number_count       | 1                                   |
| validate_password.policy             | STRONG                              |
| validate_password.special_char_count | 1                                   |
+--------------------------------------+-------------------------------------+

7 rows in set (0.00 sec)

create user 'neuhaus'@'localhost' identified by 'Manager1;';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

create user 'neuhaus'@'localhost' identified by 'Password1;';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

If I decrease the validate_password.policy to medium, the plugin doesn’t check the dictionary file anymore:

SET GLOBAL validate_password.policy=medium;
Query OK, 0 rows affected (0,00 sec)

create user 'neuhaus'@'localhost' identified by 'Password1;';
Query OK, 0 rows affected (0,00 sec)

Function VALIDATE_PASSWORD_STRENGTH()

As explained above, VALIDATE_PASSWORD_STRENGTH() tests a password and returns an integer from 0 (weak) to 100 (strong) representing the password’s strength.
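The returned value is banded rather than continuous. A rough sketch of the banding (illustrative only; the real logic is server-side, and the dictionary handling here is a simplified assumption):

```python
def strength(password, required_length=8, dictionary=frozenset()):
    """Approximate banding of VALIDATE_PASSWORD_STRENGTH():
    0 = under 4 chars, 25 = under validate_password.length,
    50 = meets LOW, 75 = meets MEDIUM, 100 = meets STRONG."""
    if len(password) < 4:
        return 0
    if len(password) < required_length:
        return 25
    medium = (any(c.isdigit() for c in password)
              and any(c.islower() for c in password)
              and any(c.isupper() for c in password)
              and any(not c.isalnum() for c in password))
    if not medium:
        return 50
    # STRONG: no substring of length 4+ may appear in the dictionary
    lowered = password.lower()
    weak = any(lowered[i:j] in dictionary
               for i in range(len(lowered))
               for j in range(i + 4, len(lowered) + 1))
    return 75 if weak else 100

print(strength('abcd'))        # 25
print(strength('password'))    # 50
print(strength('aZbq!1)m8N'))  # 100
```

With an empty dictionary this sketch reproduces the scores shown in the examples below.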

select VALIDATE_PASSWORD_STRENGTH('abcd');
+------------------------------------+
| VALIDATE_PASSWORD_STRENGTH('abcd') |
+------------------------------------+
|                                 25 |
+------------------------------------+
1 row in set (0.00 sec)

select VALIDATE_PASSWORD_STRENGTH('password');
+----------------------------------------+
| VALIDATE_PASSWORD_STRENGTH('password') |
+----------------------------------------+
|                                     50 |
+----------------------------------------+
1 row in set (0.00 sec)

select VALIDATE_PASSWORD_STRENGTH('Manager1!');
+-----------------------------------------+
| VALIDATE_PASSWORD_STRENGTH('Manager1!') |
+-----------------------------------------+
|                                      75 |
+-----------------------------------------+
1 row in set (0.00 sec)

select VALIDATE_PASSWORD_STRENGTH('aZbq!1)m8N');
+------------------------------------------+
| VALIDATE_PASSWORD_STRENGTH('aZbq!1)m8N') |
+------------------------------------------+
|                                      100 |
+------------------------------------------+
1 row in set (0.00 sec)

 

 

The post Password Validation in MySQL appeared first on Blog dbi services.

Question: Anything Wrong With Query Performance? (Straight To You)

Richard Foote - Thu, 2018-04-05 00:21
I have a query that runs pretty darn efficiently, here’s the setup: So the query basically returns 1000 rows based on the CODE column and it does so using an index on CODE. The CBO has got the costings for this just about spot on. For 1000 rows returned, it does so with just 1006 […]
Categories: DBA Blogs

My Oracle Support Blog

Joshua Solomin - Wed, 2018-04-04 14:42
Tips for MOS. 

Virtual Workshop: Developing Microservices Using DevOps

Get Hands On with Your Oracle Cloud Trial: the Oracle Cloud Developing Microservices Using DevOps workshop will walk you through the Software Development Lifecycle (SDLC) for Cloud Native...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Logging APEX Report Downloads

Scott Spendolini - Tue, 2018-04-03 20:42
A customer recently asked how APEX could track who clicked “download” from an Interactive Grid.  After some quick searching of the logs, I realized that APEX simply does not record this type of activity, aside from a simple page view type of “AJAX” entry.  This was not specific enough, and of course, led to the next question - can we prevent users from downloading data from a grid entirely?

I knew that any Javascript-based solution would fall short of their security requirements, since it is trivial to reconstruct the URL pattern required to initiate a download, even if the Javascript had removed the option from the menu.  Thus, I had to consider a PL/SQL-based approach - one that could not be bypassed by a malicious end user.

To solve this problem, I turned to APEX’s Initialization PL/SQL Code attribute.  Any PL/SQL code entered there will be executed before any other APEX-related process.  Thus, it is literally the first place that a developer can interact with an APEX page - be it a full page view or an Ajax-based process.

Both IRs and Classic Reports leave enough data in the REQUEST parameter of the URL in the logs to decode which report was downloaded and what format was selected.  However, if you don’t know what the specific URL patterns look like, or don’t have the Request column selected, you’d never know it.  For my solution, I chose to incorporate all three types of reports - Classic, IG and IR - which also centralized it into a single place.
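For illustration, that pattern matching can be sketched as follows (Python; the region IDs in the examples are made up, and the patterns are simply the ones tested for in the procedure later in this post):

```python
def classify_request(request):
    """Rough classification of an APEX download REQUEST value into
    (report_type, format), mirroring the CASE tests in dl_audit."""
    if request.startswith('FLOW_EXCEL_OUTPUT'):
        return ('CLASSIC', 'CSV')
    if request.endswith('CSV'):
        return ('IR', 'CSV')
    if request.endswith('HTMLD'):
        return ('IR', 'HTML')
    if request.endswith('PDF'):
        return ('IR', 'PDF')
    return (None, None)          # not a report download request

print(classify_request('FLOW_EXCEL_OUTPUT_R1234567_en'))  # ('CLASSIC', 'CSV')
print(classify_request('IRR[1234567]_HTMLD'))             # ('IR', 'HTML')
```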

The solution is relatively simple, and requires two components: a table to track the downloads and a procedure to populate the table.  It should also work with both Oracle 11g & 12c.  I have tested it on APEX 5.1.4.  The IG portion will not work on APEX 5.0, since there is no IG component in that release.

First, create the table to store the logs:

CREATE TABLE dl_audit_log
(
  app_user       VARCHAR2(255),
  app_id         NUMBER,
  app_page_id    NUMBER,
  request        VARCHAR2(255),
  downloaded_on  DATE,
  report_id      NUMBER,
  report_name    VARCHAR2(255),
  report_type    VARCHAR2(255),
  report_format  VARCHAR2(255)
)
/

Next, create the procedure that will capture any download.

CREATE OR REPLACE PROCEDURE dl_audit
 (
 p_request     IN VARCHAR2 DEFAULT v('REQUEST'),
 p_app_user    IN VARCHAR2 DEFAULT v('APP_USER'),
 p_app_page_id IN NUMBER   DEFAULT v('APP_PAGE_ID'),
 p_app_id      IN NUMBER   DEFAULT v('APP_ID'),
 p_app_session IN NUMBER   DEFAULT v('APP_SESSION')
 )
AS
 l_count NUMBER;
 l_id             NUMBER;
 l_report_name    VARCHAR2(255);
 l_report_format  VARCHAR2(255);
 l_json           VARCHAR2(10000);
BEGIN
-------------------------------------------------------------------------------------------------------------------------------
-- Capture Classic Report
-- Region ID will be embedded in the request
--------------------------------------------------------------------------------------------------------------------------------
CASE
WHEN p_request LIKE 'FLOW_EXCEL_OUTPUT%' THEN

-- Uncomment the two lines below to prevent the download
--htp.prn('Download Prohibited');
--apex_application.g_unrecoverable_error := TRUE;

-- Get the ID
 SELECT SUBSTR(p_request, 20, INSTR(p_request,'_',20)-20) INTO l_id FROM dual;
 SELECT region_name INTO l_report_name FROM apex_application_page_regions WHERE region_id = l_id;

-- Log the download
 INSERT INTO dl_audit_log (app_user, app_id, app_page_id, request, downloaded_on, report_id, report_name, report_type, report_format)
   VALUES (p_app_user, p_app_id, p_app_page_id, p_request, SYSDATE, l_id, l_report_name, 'CLASSIC', 'CSV');
-------------------------------------------------------------------------------------------------------------------------------
-- Capture IR download
-- Region ID embedded in request only when there is more than 1 IR on the page
-------------------------------------------------------------------------------------------------------------------------------
WHEN p_request LIKE '%CSV' OR p_request LIKE '%HTMLD' OR p_request LIKE '%PDF' THEN

-- Uncomment the two lines below to prevent the download
--htp.prn('Download Prohibited');
--apex_application.g_unrecoverable_error := TRUE;

-- Determine how many IRs are on the page
 SELECT COUNT(*) INTO l_count FROM apex_application_page_ir where page_id = p_app_page_id AND application_id = p_app_id;

-- If there is 1, then get the ID from the view
 IF l_count = 1 THEN
   SELECT interactive_report_id, region_name INTO l_id, l_report_name
     FROM apex_application_page_ir WHERE page_id = p_app_page_id AND application_id = p_app_id;
 ELSE

   -- Otherwise, get the ID from the REQUEST
   SELECT SUBSTR(p_request,5, INSTR(p_request,']')-5) INTO l_id FROM dual;
   SELECT region_name INTO l_report_name FROM apex_application_page_ir where region_id = TRIM(l_id);

 END IF;

-- Log the download
 INSERT INTO dl_audit_log (app_user, app_id, app_page_id, request, downloaded_on, report_id, report_name, report_type, report_format)
   VALUES (p_app_user, p_app_id, p_app_page_id, p_request, SYSDATE, l_id, l_report_name, 'IR',
   CASE WHEN p_request LIKE '%CSV' THEN 'CSV' WHEN p_request LIKE '%HTMLD' THEN 'HTML' WHEN p_request LIKE '%PDF' THEN 'PDF' ELSE 'OTHER' END);

-------------------------------------------------------------------------------------------------------------------------------
-- Capture IG download
--------------------------------------------------------------------------------------------------------------------------------
WHEN LOWER(owa_util.get_cgi_env('QUERY_STRING')) LIKE 'p_flow_id=' || p_app_id || '&p_flow_step_id='
 || p_app_page_id || '&p_instance=' || p_app_session || '%&p_json%download%' THEN

-- Uncomment the two lines below to prevent the download
 --htp.prn('Download Prohibited');
 --apex_application.g_unrecoverable_error := TRUE;

-- Extract the JSON
 SELECT utl_url.unescape(substr(owa_util.get_cgi_env('QUERY_STRING'),
 INSTR(owa_util.get_cgi_env('QUERY_STRING'), 'p_json=') + 7)) INTO l_json FROM dual;
 apex_json.parse(l_json);

-- Get the report ID
 l_id := apex_json.get_varchar2(p_path => 'regions[%d].id', p0 => 1);
 l_report_format := apex_json.get_varchar2(p_path => 'regions[%d].download.format', p0 => 1);

 -- Lookup the name
 SELECT region_name INTO l_report_name FROM apex_application_page_regions where region_id = l_id;

-- Log the download
 INSERT INTO dl_audit_log (app_user, app_id, app_page_id, request, downloaded_on, report_id, report_name, report_type, report_format)
   VALUES (p_app_user, p_app_id, p_app_page_id, p_request, SYSDATE, l_id, l_report_name, 'GRID', l_report_format);

-- No auditing needed, as the user did not download a report
ELSE NULL;

END CASE;

END;
/

Lastly, add a call to the procedure in the Initialization PL/SQL Code attribute.  This can be found under Shared Components > Security Attributes.


Once these three steps are completed, any download of any report will be logged automatically.  There’s no need to adjust any specific report or add any parameters - it will just work as reports are downloaded.

Uncommenting the referenced lines in each section will also prevent that type of report from being downloaded entirely.  The message could be changed as needed, or the user could be redirected to an error page instead.
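Once the log starts populating, a simple query over the audit table (using only the columns created above) summarizes download activity, for example:

```sql
-- Downloads per user, report and format over the last 30 days
SELECT app_user, report_name, report_type, report_format,
       COUNT(*) AS downloads
FROM   dl_audit_log
WHERE  downloaded_on > SYSDATE - 30
GROUP BY app_user, report_name, report_type, report_format
ORDER BY downloads DESC;
```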

Running VirtualBox inside a VM instance in Oracle Cloud Infrastructure

Wim Coekaerts - Tue, 2018-04-03 16:15

OK - So don't ask "Why?"... Because... I can! :) would be the answer for the most part.

Oracle Cloud Infrastructure supports nested virtualization. When you create a VM instance in OCI running Oracle Linux 7 with our kernel, you can create KVM or (soon you'll see how...) VirtualBox VMs inside it. If you create a BM instance, you can install VirtualBox or use KVM as you normally would on a local server. Since, well, it's a bare metal server - full access to the hardware and its features.

VirtualBox has some very interesting built-in features which might make it useful to run remote (even when virtualized). One example would be the embedded vRDP server. It can do great remote audio and video (enable/tune videochannel), it makes it easy to take your local VirtualBox images and run them unmodified remotely, it lets you create smaller VMs that you constantly start/stop... you can use vagrant boxes, and it opens up the whole vagrant VirtualBox environment to a remote cloud. So aside from "Because I can"... there are actual good use cases for this!

How do you go about doing this? For the most part it's pretty trivial: installing VirtualBox in a VM in OCI is no different from installing it on your local desktop or server. Configuring a guest VM in VirtualBox should be done using the command line (vboxmanage) instead of installing a full remote desktop and running VNC and such - it's a lot faster from the command line. And if you want to run VirtualBox in bridged mode, so that you have full access to the OCI native cloud network facilities (VCN/subnet/IP addresses, even public IPs - without NAT), there are a few minor things you need to do.

Here are some of the steps to get going: I'm not a big screenshot guy so bear with me in text for the most part.

Step 1: Create an OCI VM and create/assign an extra VNIC to pass through to your VirtualBox VM.

If you don't already have an OCI account, you can go sign up and get a $300 credit trial account here. That should give you enough to get started.

Set up your account, create a Virtual Cloud Network (VCN) with its subnets and create a VM instance in one of the availability domains/regions. To test this out I created a VM.Standard2.2 shape instance with Oracle Linux 7. Once this instance is created, you can log in with user opc and get going.

When you log into your VM instance, you will see from the OCI web console that you have a primary VNIC attached. This might show up as ens3 or so inside your VM. In the OCI web console the VNIC has a name (typically the primary VNIC's name is the same as your instance name), it has a private IP and, if you decided to have it on a public network, a public IP address as well. All this stuff is configured out of the box for you as part of your instance creation.

Since I want to show how to use a bridged network in VirtualBox, you will need a second VNIC. You can create that at this point, or you can come back and do it once you are ready to start your VirtualBox VM. Just go to Attached VNICs in the web console (or use the OCI CLI) and create a VNIC on a given VCN/subnet.

[Screenshot: creating a VNIC in the OCI web console]
The important information to jot down is the MAC address and the private IP address of this newly created VNIC - in the example, 10.0.0.2 and 00:00:17:02:EB:EA. This info is needed later.

Step 2: Install and configure VirtualBox

With Oracle Linux 7 this is a very easy process. Use yum to install VirtualBox and the dependencies for building the VirtualBox kernel modules, then download and install the Extension Pack, and you're done:

# yum install -y kernel-uek-devel-`uname -r` gcc
# yum install -y VirtualBox-5.2
# wget https://download.virtualbox.org/virtualbox/5.2.8/Oracle_VM_VirtualBox_Extension_Pack-5.2.8.vbox-extpack
# vboxmanage extpack install Oracle_VM_VirtualBox_Extension_Pack-5.2.8.vbox-extpack

That's it - you now have a fully functioning VirtualBox hypervisor installed on top of Oracle Linux 7 in an OCI VM instance.

Step 3: Create your first VirtualBox guest VM

The following instructions show you how to create a VM from the command line. The nice thing with using the command line is that you can clearly see what it takes for a VM to be configured and you can easily tweak the values (memory, disk,...).

First, you likely want to create a new VM from an install ISO. So upload your installation media to your OCI VM. I uploaded my Oracle Linux 7.5 preview image which you can get here.

Create your VirtualBox VM

# vboxmanage createvm --name oci-test --ostype oracle_64 --register
# vboxmanage modifyvm oci-test --memory 4096 --vram 128 --ioapic on
# vboxmanage modifyvm oci-test --boot1 dvd --boot2 disk --boot3 none --boot4 none
# vboxmanage modifyvm oci-test --vrde on

Configure the Virtual Disk and Storage controllers (Feel free to attach an OCI Block Volume to your VM and put the VirtualBox virtual disks on that volume, of course). The example below creates a 40G virtual disk image and attaches the OL7.5 ISO as a DVD image.

# vboxmanage createhd --filename oci-test.vdi --size 40960
# vboxmanage storagectl oci-test --name "SATA Controller" --add sata --controller IntelAHCI
# vboxmanage storageattach oci-test --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium oci-test.vdi
# vboxmanage storagectl oci-test --name "IDE Controller" --add ide
# vboxmanage storageattach oci-test --storagectl "IDE Controller" --port 0 --device 0 --type dvddrive --medium /home/opc/OracleLinux-R7-U5-BETA-Server-x86_64-dvd.iso
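Note that the --size argument to createhd is in megabytes, which is why 40 GB comes out as 40960. A quick check of the arithmetic:

```shell
# --size is in MB: 40 GB * 1024 MB/GB
echo $((40 * 1024))   # → 40960
```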

Configure the Bridged Network Adapter to directly connect to the OCI VNIC

This is a little more involved. You have to find out which network device was created on the VM host for this secondary VNIC.

# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens3: mtu 9000 qdisc mq state UP qlen 1000
    link/ether 00:00:17:02:3a:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.8/24 brd 192.168.1.255 scope global dynamic ens3
       valid_lft 73962sec preferred_lft 73962sec
3: ens4: mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:00:17:02:eb:ea brd ff:ff:ff:ff:ff:ff
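With several VNICs attached it is easy to mix up device names. A small sketch (my own helper, not part of the post) maps the MAC address you jotted down to the host interface name; it is shown here against canned `ip`-style sample output so it runs anywhere - on the instance you would pipe in `ip -o link` instead:

```shell
# Print the interface name whose line contains the given MAC (case-insensitive).
# find_iface is a hypothetical helper; feed it `ip -o link` output on a real host.
find_iface() {
    awk -v mac="$1" 'tolower($0) ~ tolower(mac) { gsub(":", "", $2); print $2 }'
}

printf '2: ens3: <UP> link/ether 00:00:17:02:3a:29\n3: ens4: <DOWN> link/ether 00:00:17:02:eb:ea\n' \
    | find_iface "00:00:17:02:EB:EA"    # → ens4
```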

Bring up this network adapter without an IP address and configure the MTU to 9000 (default mtu settings for VNICs in OCI)

# ip link set dev ens4 up
# ip link set ens4 mtu 9000

Almost there... Now just create the NIC in VirtualBox and assign the MAC address you recorded earlier to this NIC. It is very important that you use that MAC address, otherwise the network will not allow traffic from the VM. Note: don't use : for the mac address on the command line.

# vboxmanage modifyvm oci-test --nic1 bridged --bridgeadapter1 ens4 --macaddress1 00001702ebea
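If you don't want to strip the colons and lower-case the MAC by hand, a one-liner does it (my own helper, not from the post):

```shell
# Turn the OCI console form of the MAC into the bare form vboxmanage expects
echo "00:00:17:02:EB:EA" | tr -d ':' | tr 'A-Z' 'a-z'   # → 00001702ebea
```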

That's it. You now have a VirtualBox VM that can be started, will boot from the install media, and is directly connected to the host's network in OCI. There is no DHCP running on this network, so when you install the OS in your VirtualBox VM, you have to assign a static IP - use the one that was assigned as the private IP address of the VNIC (10.0.0.2 in the example above).

Before you start your VM, open up the firewall on the host for remote RDP connections, and do the same in the OCI console: modify the security list for your host's primary VNIC to allow ingress traffic on port 3389 (RDP).

# firewall-cmd --permanent --add-port=3389/tcp
# firewall-cmd --reload

Start your VM in headless mode and use your favorite RDP client on your desktop or laptop to connect to the remote VirtualBox console.

# vboxmanage startvm oci-test --type headless

If you want to experiment with remote video/audio (for instance, play a YouTube video or a movie file inside your VM), enable the vrde video channel. Use the quality parameter to modify the compression/lossy ratio (improving performance) of the MJPEG stream.

# vboxmanage modifyvm oci-test --vrdevideochannel on
# vboxmanage modifyvm oci-test --vrdevideochannelquality 70

Comparing Intent Classification in TensorFlow and Oracle Chatbot

Andrejus Baranovski - Tue, 2018-04-03 10:46
I created a sample set of intents with phrases (five phrases per intent, ten intents) and used this data to train and build classification models with TensorFlow and Oracle Chatbot machine learning. Once the models were trained, I classified identical sample phrases with both TensorFlow and Oracle Chatbot to compare results. Oracle Chatbot was used with both its Linguistic and Machine Learning models.

Summary:

1. Overall, the TensorFlow model performs better. The main reason: I trained the TensorFlow model multiple times, until good learning output (minimized training loss) was produced.

2. Oracle Chatbot doesn't return information about training loss after training, which makes it hard to decide whether training was efficient. The consequence - worse classification results - can be related to slightly less efficient training, simply because you get no feedback about training efficiency.

3. Classification scores: 93% TensorFlow, 87% Oracle Chatbot Linguistic model, 67% Oracle Chatbot Machine Learning. TensorFlow is better, but the Oracle Chatbot Linguistic model comes very close. The Oracle Chatbot Machine Learning model could be improved; see point 2.

Results table (click on it to see it maximized):


TensorFlow

The list of intents for TensorFlow is provided in a JSON file. The same intents are used to train the model in Oracle Chatbot:


The TensorFlow classification model is created by training a two-layer neural network. Once training completes, it prints the total loss for the run. This allows you to repeat training until a model with optimal loss (as close as possible to 0) is produced - 0.00924 in this case:


The TensorFlow classification result is good; it failed to classify only one sentence - "How you work?" This sentence is not directly related to any intent, although I should mention that the Oracle Chatbot Linguistic model is able to classify it. TensorFlow offers the correct classification intent as its second option, coming very close to the correct answer:



Oracle Chatbot

Oracle Chatbot provides a UI to enter intents and sample phrases - the same set of intents and phrases is used as for TensorFlow:


Oracle Chatbot offers two training models - linguistic and machine learning based.


Once the model is trained, there is no feedback about training loss. We can enter a phrase and check the intent classification result. Below is a sample Linguistic model classification failure - it fails to classify one of the intents, where the sentence topic is not perfectly clear; however, the same intent is classified well by the Oracle Chatbot Machine Learning model:


The Oracle Chatbot Machine Learning model fails on another intent, where we want to check for a hospital (hospital search) to monitor blood pressure. I'm sure that if it were possible to review training loss (maybe in the next release?), we could decide to re-train the model and get results close to TensorFlow. Classification with the Oracle Chatbot Machine Learning model:

Raspberry Pi 3 B Oracle Linux 7.4 ARM64 with UEK5 preview image available for download

Wim Coekaerts - Tue, 2018-04-03 10:07

A few weeks ago we released an Oracle Linux 7 Update 4 for ARM64 preview update on OTN. This updated ISO installs on Ampere X-Gene 3 (emag) and Cavium ThunderX / ThunderX2 -based systems (and it's also known to work on Qualcomm Centriq 2400-based servers).

Today we added the RPI3 (Raspberry Pi 3 Model B) disk image as well. The previous RPI3 image was still using Oracle Linux 7.3 as a base along with a 4.9 Linux kernel. The newly released image makes it current. It is the same Oracle Linux 7.4 package set as we released on the ISO and it uses the same UEK5 preview kernel (based on 4.14.30 right now).

The current image uses uboot and boots the kernel directly. We will do another update in the near future where we switch to uboot+efi and grub2, so that updating kernels will work the same way as we can do on the regular ARM server installs (where we boot with EFI -> grub2).

A few things to point out:

- OL7/ARM64 is a 64-bit only build. That makes binaries pretty large and the RPI3 only has 1GB of RAM so it's a bit of a stretch.

- X/gnome-shell doesn't work in this release; this is a known issue. It will be resolved when we move to 7.5, but our focus is mostly server and, per the above, running a heavy GUI stack is hard on a 1GB system.

- We do not yet support the latest RPI3 Model B+, only the RPI3 Model B. We don't have a device tree/dtb file yet for the RPI3 Model B+.

Since it has all the same packages as the server one, you can run docker on the RPI3:

# cat /etc/oracle-release
Oracle Linux Server release 7.4
# uname -a
Linux rpi3 4.14.30-1.el7uek.aarch64 #1 SMP Mon Mar 26 23:11:30 PDT 2018 aarch64 aarch64 aarch64 GNU/Linux
# yum install docker-engine
# systemctl enable docker
# systemctl start docker
# docker pull oraclelinux:7-slim

And there you go: a small Oracle Linux 7 for ARM image right on your RPI3 - directly from Docker Hub.

# docker pull oraclelinux:7-slim
7-slim: Pulling from library/oraclelinux
eefac02db809: Pull complete
Digest: sha256:fc684f5bbd1e46cfa28f56a0340026bca640d6188ee79ef36ab2d58d41636131
Status: Downloaded newer image for oraclelinux:7-slim

Oracle VM Server: supported guest systems

Dietrich Schroff - Tue, 2018-04-03 04:36
After the installation of


and

the next step is to install a guest. But which operating systems are supported on Oracle VM Server?
Let's look into the official Oracle documentation:
The list of supported operating systems can be found here.


Table 5.1 HVM-Supported Linux Guest Operating Systems

Guest Operating System                       | HVM 32-bit | HVM 64-bit
Oracle Linux Release 7.x                     | N/A        | Yes
Oracle Linux Release 6.x                     | Yes        | Yes
Oracle Linux Release 5.x                     | Yes        | Yes
Oracle Linux Release 4.x                     | Yes        | Yes
Red Hat Enterprise Linux 7.x                 | N/A        | Yes
Red Hat Enterprise Linux 6.x                 | Yes        | Yes
Red Hat Enterprise Linux 5.x                 | Yes        | Yes
Red Hat Enterprise Linux 4.x                 | Yes        | Yes
CentOS 7.x                                   | N/A        | Yes
CentOS 6.x                                   | Yes        | Yes
CentOS 5.x                                   | Yes        | Yes
CentOS 4.x                                   | Yes        | Yes
SUSE Linux Enterprise Server 11.x            | No         | Yes
SUSE Linux Enterprise Server 12 SP2 or later | No         | Yes

Table 5.2 PVHVM-Supported Linux Guest Operating Systems

Guest Operating System                       | PVHVM 32-bit | PVHVM 64-bit
Oracle Linux Release 7.x                     | N/A          | Yes
Oracle Linux Release 6.x                     | Yes          | Yes
Oracle Linux Release 5.x                     | Yes          | Yes
Oracle Linux Release 4.x                     | Yes          | Yes
Red Hat Enterprise Linux 7.x                 | N/A          | Yes
Red Hat Enterprise Linux 6.x                 | Yes          | Yes
Red Hat Enterprise Linux 5.x                 | Yes          | Yes
Red Hat Enterprise Linux 4.x                 | Yes          | Yes
CentOS 7.x                                   | N/A          | Yes
CentOS 6.x                                   | Yes          | Yes
CentOS 5.x                                   | Yes          | Yes
CentOS 4.x                                   | Yes          | Yes
SUSE Linux Enterprise Server 11.x            | No           | Yes
SUSE Linux Enterprise Server 12 SP2 or later | No           | Yes

Table 5.4 CPU Paravirtualized Supported Guest Operating Systems

Guest Operating System                       | Paravirtualized 32-bit | Paravirtualized 64-bit
Oracle Linux Release 7.x                     | No                     | No
Oracle Linux Release 6.x                     | Yes                    | Yes
Oracle Linux Release 5.x                     | Yes                    | Yes
Oracle Linux Release 4.x                     | Yes                    | Yes
Red Hat Enterprise Linux 7.x                 | No                     | No
Red Hat Enterprise Linux 6.x                 | Yes                    | Yes
Red Hat Enterprise Linux 5.x                 | Yes                    | Yes
Red Hat Enterprise Linux 4.x                 | Yes                    | Yes
CentOS 7.x                                   | No                     | No
CentOS 6.x                                   | Yes                    | Yes
CentOS 5.x                                   | Yes                    | Yes
CentOS 4.x                                   | Yes                    | Yes
SUSE Linux Enterprise Server 11.x            | No                     | Yes
SUSE Linux Enterprise Server 12 SP2 or later | No                     | No

Table 5.5 Microsoft Windows Supported Guest Operating Systems

Guest Operating System               | 64-bit | 32-bit | HVM       | HVM with Oracle VM Paravirtual Drivers for Microsoft Windows
Microsoft Windows Server 2016        | Yes    | N/A    | Supported | Supported
Microsoft Windows Server 2012 R2     | Yes    | N/A    | Supported | Supported
Microsoft Windows Server 2012        | Yes    | N/A    | Supported | Supported
Microsoft Windows Server 2008 R2 SP1 | Yes    | N/A    | Supported | Supported
Microsoft Windows Server 2008 SP2    | Yes    | Yes    | Supported | Supported
Microsoft Windows Server 2003 R2 SP2 | Yes    | Yes    | Supported | Supported
Microsoft Windows 10                 | Yes    | Yes    | Supported | Supported
Microsoft Windows 8.1                | Yes    | Yes    | Supported | Supported
Microsoft Windows 8                  | Yes    | Yes    | Supported | Supported
Microsoft Windows 7 SP1              | Yes    | Yes    | Supported | Supported
Microsoft Windows Vista SP2          | Yes    | Yes    | Supported | Supported

These tables are valid for version 3.4 - that means you have to check the support matrix for each version separately.

ODPI-C 2.3 is now on GitHub

Christopher Jones - Mon, 2018-04-02 21:33

Release 2.3 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top features: Improved Batch Statement Execution

 

ODPI-C 2.3 improves support for batch statement execution with dpiStmt_executeMany(). To support DML RETURNING producing multiple rows for each iteration, a new function dpiVar_getReturnedData() was added, replacing dpiVar_getData(), which will be deprecated in a future release. A fix for binding LONG data in dpiStmt_executeMany() also landed.

If you haven't heard of Batch Statement Execution (sometimes referred to as Array DML), check out this Python cx_Oracle example or this Node.js node-oracledb example.

A number of other issues were addressed in ODPI-C 2.3. See the release notes for more information.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: https://oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Node-oracledb 2.2 with Batch Statement Execution (and more) is out on npm

Christopher Jones - Mon, 2018-04-02 17:16

Release announcement: Node-oracledb 2.2, the Node.js module for accessing Oracle Database, is on npm.

Top features: Batch Statement Execution

In the six-or-so weeks since 2.1 was released, a bunch of new functionality landed in node-oracledb 2.2. This shows how much engineering went into the refactored lower abstraction layer we introduced in 2.0, just to make it easy to expose Oracle features to languages like Node.js.

The top features in node-oracledb 2.2 are:

  • Added oracledb.edition to support Edition-Based Redefinition (EBR). The EBR feature of Oracle Database allows multiple versions of views, synonyms, PL/SQL objects and SQL Translation profiles to be used concurrently. This lets database logic be updated and tested while production users are still accessing the original version.

    The new edition property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an ALTER SESSION command or ORA_EDITION environment variable.

  • Added oracledb.events to allow the Oracle client library to receive Oracle Database service events, such as for Fast Application Notification (FAN) and Runtime Load Balancing (RLB).

    The new events property can be set at the global level, when creating a pool, or when creating a standalone connection. This removes the need to use an oraaccess.xml file to enable event handling, making it easier to use Oracle high availability features, and makes event handling available for the first time to users who are linking node-oracledb with version 11.2 Oracle client libraries.

  • Added connection.changePassword() for changing passwords. Passwords can also be changed when calling oracledb.getConnection(), which is the only way to connect when a password has expired.

  • Added connection.executeMany() for efficient batch execution of DML (e.g. INSERT, UPDATE and DELETE) and PL/SQL execution with multiple records. See the example below.

  • Added connection.getStatementInfo() to find information about a SQL statement without executing it. This is most useful for finding column types of queries and for finding bind variables names. It does require a 'round-trip' to the database, so don't use it without reason. Also there are one or two quirks because the library underneath that provides the implementation has some 'historic' behavior. Check the manual for details.

  • Added connection.ping() to support system health checks. This verifies that a connection is usable and that the database service or network have not gone down. This requires a round-trip to the database so you wouldn't use it without reason. Although it doesn't replace error handling in execute(), sometimes you don't want to be running a SQL statement just to check the connection status, so it is useful in the arsenal of features for keeping systems running reliably.

See the CHANGELOG for all changes.

One infrastructure change we recently made was to move the canonical home for documentation to GitHub 'pages'. This will be kept in sync with the current production version of node-oracledb. If you update your bookmarks to the new locations, it will allow us to update the source code repository documentation mid-release without confusing anyone about available functionality.

Batch Statement Execution

The new connection.executeMany() method allows many sets of data values to be bound to one DML or PL/SQL statement for execution. It is like calling connection.execute() multiple times for one statement, but requires fewer round-trips overall. This is an efficient way to handle batch changes, for example when inserting or updating multiple rows, because the reduced cost of round-trips has a significant effect on performance and scalability. Depending on the number of records, their sizes, and the network speed to the database, the performance of executeMany() can be significantly faster than the equivalent use of execute().

In one little test I did between Node.js on my laptop and a database running on my adjacent desktop, I saw that executeMany() took 16 milliseconds whereas execute() took 2.3 seconds to insert 1000 rows, each consisting of a number and a very short string. With larger data sizes and slower (or faster!) networks the performance characteristics will vary, but the overall benefit is widespread.
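Taking the quoted numbers at face value, a back-of-the-envelope check (mine, not the author's) of the implied speedup:

```shell
# 2.3 s = 2300 ms for 1000 single execute() calls vs 16 ms for one executeMany()
echo $((2300 / 16))   # → 143 (roughly a 140x speedup in this particular test)
```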

The executeMany() method supports IN, IN OUT and OUT variables. Binds from RETURNING INTO clauses are supported, making it easy to insert a number of rows and find, for example, the ROWIDs of each.

With an optional batchErrors mode, you can insert 'noisy' data easily. Batch Errors allows valid rows to be inserted and invalid rows to be rejected. A transaction will be started but not committed, even if autocommit mode is enabled. The application can examine the errors, find the bad data, take action, and explicitly commit or rollback as desired.

To give one example, let's look at the use of batchErrors when inserting data:

var sql = "INSERT INTO childtab VALUES (:1, :2, :3)";

// There are three values in each nested array since there are
// three bind variables in the SQL statement.
// Each nested array will be inserted as a new row.
var binds = [
  [1016, 10, "apples"],
  [1017, 10, "bananas"],
  [1018, 20, "cherries"],
  [1018, 20, "damson plums"],      // duplicate key
  [1019, 30, "elderberry"],
  [1020, 40, "fig"],
  [1021, 75, "golden kiwifruit"],  // parent does not exist
  [1022, 40, "honeydew melon"]
];

var options = {
  autoCommit: true,   // autocommit if there are no batch errors
  batchErrors: true,  // identify invalid records; start a transaction for valid ones
  bindDefs: [         // describes the data in 'binds'
    { type: oracledb.NUMBER },
    { type: oracledb.NUMBER },
    { type: oracledb.STRING, maxSize: 16 }  // size of the largest string, or as close as possible
  ]
};

connection.executeMany(sql, binds, options, function (err, result) {
  if (err)
    console.error(err);
  else
    console.log("Result is:", result);
});

Assuming appropriate data exists in the parent table, the output might be like:

Result is: {
  rowsAffected: 6,
  batchErrors: [
    { Error: ORA-00001: unique constraint (CJ.CHILDTAB_PK) violated
      errorNum: 1, offset: 3 },
    { Error: ORA-02291: integrity constraint (CJ.CHILDTAB_FK) violated - parent key not found
      errorNum: 2291, offset: 6 }
  ]
}

This shows that 6 records were inserted but the records at offset 3 and 6 (using a 0-based index into the 'binds' array) were problematic. Because of these batch errors, the valid records were not committed, despite autoCommit being true. However, they were inserted, and the open transaction could then be explicitly committed or rolled back.

We know some users are inserting very large data sets, so executeMany() will be very welcome. At the very large end of the data spectrum you may want to call executeMany() with batches of data to avoid size limitations in various layers of the Oracle and operating system stack. Your own testing will determine the best approach.

See Batch Execution in the manual for more information about the modes of executeMany() and how to use it in various cases. There are runnable examples in the GitHub examples directory. Look for the files prefixed 'em_'. There are two variants of each sample: one uses call-back style, and the other uses the Async/Await interface available with Node.js 8.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

New OA Framework 12.2.6 Update 11 Now Available

Steven Chan - Mon, 2018-04-02 17:03

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Oracle Application Framework (FWK) Release 12.2.6 Bundle 11 (Patch 27529582:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

In addition, this latest bundle patch includes fixes for the following issues:

  • TO BE SUPPLIED

Related Articles

Categories: APPS Blogs

To generate list of unique Random Numbers based on the number count required

Tom Kyte - Mon, 2018-04-02 09:46
Hi, my requirement is to generate a list of random numbers based on the total count provided. For instance, if the total count is 100, I have to generate 100 unique random numbers. Below is the sample code I used. Could you please check and let me ...
Categories: DBA Blogs

Group by is failing on remote database query

Tom Kyte - Mon, 2018-04-02 09:46
Hello, My issue is the good old ORA02070 and I've found different workarounds but none of them fits my particular problem. It happens when I try to do a group by on remote database query. I need to compare data between the local and remote dat...
Categories: DBA Blogs

DB stuck at check point not complete.

Tom Kyte - Mon, 2018-04-02 09:46
Our DB is stuck at "checkpoint not complete". As per the logs, the DBA says a SELECT FOR UPDATE running on the DB caused the high number of block changes and caused the issue. After this issue we are unable to generate ASH or AWR reports for that time. How to iden...
Categories: DBA Blogs

Not all standby Redo logs are being used in Physical standby database

Tom Kyte - Mon, 2018-04-02 09:46
Hi, I wanted to know how an Oracle physical standby database decides how many standby redo logs to use. We have a standby which has multiple standby redo logs, but it is only using 2 of them for each thread. We tried to switch the logs on both...
Categories: DBA Blogs

character set us7ascii missing from dbca

Tom Kyte - Mon, 2018-04-02 09:46
I am trying to create a database in Oracle 12.1.0.2 using DBCA. The character set US7ASCII is not in the list of character sets.
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator