Feed aggregator

More archives during impdp, what is the exact reason and what will happen internally

Tom Kyte - Fri, 2018-10-05 16:26
Why are more archives generated during import (impdp) activity? I would like to know what happens internally that causes more archives to be generated. Is it only for import activity, or will it generate more redo for export as well? Many...
Categories: DBA Blogs

Query on dba_hist_active_sess_history query taking too long

Tom Kyte - Fri, 2018-10-05 16:26
Hi, I'm using the below query to fetch details from dba_hist_active_sess_history which matches a specific wait event occurring at a specific hour of the day within the last 90 days: select USER_ID, PROGRAM, MACHINE from dba_hist_active_sess_his...
Categories: DBA Blogs

Excessive archive log generation during data load

Tom Kyte - Fri, 2018-10-05 16:26
Hi Tom, I am encountering a situation related to data loading and excessive archive log generation. I am using Oracle 8.1.6 under Solaris 7. I insert about 1 million rows twice a week into a table whose columns are of number datatype. The loa...
Categories: DBA Blogs

Working with REST POST and Other Operations in Visual Builder

Shay Shmeltzer - Fri, 2018-10-05 12:37

One of the strong features of Visual Builder Cloud Service is the ability to consume any REST service very easily. I have a video that shows you how to work with REST services in a completely declarative way, but that video doesn't show what happens behind the scenes when you work with the quick starts. In addition, that video only uses the GET method, and several threads on our community's discussion forum have asked for help working with other REST operations.

The demo video aims to give you better insight into working with REST operations, showing how to:

  • Add service endpoints for various REST operations
  • Create a GET form manually for retrieving single records
  • Create a POST form manually
    • Create types for the request and response parameters
    • Create variables based on the types
    • Call the POST operation passing a variable as body
  • Get the returned values from the POST to show in a page or notifications

A couple of notes:

In the video I use the free REST testing platform at https://jsonplaceholder.typicode.com
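
For orientation, a "create" call against that test service is just a plain JSON POST. The following is only a sketch based on the public jsonplaceholder examples (your own service will define its own fields, and jsonplaceholder fakes the insert rather than persisting it):

POST https://jsonplaceholder.typicode.com/posts
Content-Type: application/json

{ "title": "foo", "body": "bar", "userId": 1 }

The service responds with the record echoed back plus a generated id, for example { "id": 101, "title": "foo", "body": "bar", "userId": 1 }, and that returned value is what you map into a page or notification as described above.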

While I do everything here manually, you should be able to use the quick starts to create a "create" form and map it to the POST operation, as long as you marked the specific entry as a "create" entry, like I did in the demo.

If concepts such as types, variables, and action chains are new to you, I would highly recommend watching this video on the VBCS Architecture and Building Blocks; it will help you better understand what VBCS is all about.


Categories: Development

Canada Alliance 2018

Jim Marion - Fri, 2018-10-05 10:02

Calling all Canadian Higher Education and Government customers! Canada Alliance is next month and boasts a great lineup of speakers and sessions. JSMPROS will host two pre-conference workshops Monday prior to the main conference. Please bring a laptop if you wish to participate. Please note: space is limited.

  • Configure, Don't Customize! PeopleSoft Page and Field Configurator Monday, November 12 from 10:00 AM–12:30 PM in Coast Hotel: Acadia Room
  • Advanced Query Monday, November 12 from 1:30 PM–4:00 PM in Coast Hotel: Acadia Room

For further details, please visit the Canada Alliance 2018 Workshop page. I look forward to seeing you soon!

Walking through the Zürich ZOUG Event – September the 18th

Yann Neuhaus - Fri, 2018-10-05 09:38

What a nice and interesting new experience… my first ZOUG event! It was a great opportunity to meet some great people and hear some great sessions. I had the chance to attend Markus Michalewicz's sessions. Markus is Senior Director of Database HA and Scalability Product Management at Oracle, and was the special guest of this event.

https://soug.ch/events/soug-day-september-2018/

The introduction session was given by Markus. He presented the different HA solutions as a lead-in to MAA. Oracle Maximum Availability Architecture (MAA) is, from my understanding, more of a service delivered by Oracle to help customers find the solution that best fits their constraints at the lowest cost and complexity.

I was really looking forward to the next session, given by Robert Bialek from Trivadis, about Oracle database service high availability with Data Guard. Bialek gave a nice presentation of Data Guard, explaining how it works and providing some good tips on how it should be configured.

The best session was certainly the next one, given by my colleague Clemens Bleile, Oracle Technology Leader at dbi. He shared great experience from his past years as one of the managers of the Oracle Support Performance team EMEA. Clemens talked about SQLTXPLAIN, the performance troubleshooting tool, covering its history and its future, and also presented the SQLT tool.

The last session I attended was chaired by Markus. The subject was the autonomous database and all the automatic features that have arrived with the latest Oracle releases. Will this make databases able to manage themselves? The future will tell us. :-)

Thanks to dbi management for giving me the opportunity to join this ZOUG event!


The post Walking through the Zürich ZOUG Event – September the 18th appeared first on Blog dbi services.

Join Cardinality – 2

Jonathan Lewis - Fri, 2018-10-05 09:37

In the previous note I posted about Join Cardinality I described a method for calculating the figure that the optimizer would give for the special case where you had a query that:

  • joined two tables
  • used a single-column to join on equality
  • had no nulls in the join columns
  • had a perfect frequency histogram on the columns at the two ends of the join
  • had no filter predicates associated with either table

The method simply said: “Match up rows from the two frequency histograms, multiply the corresponding frequencies” and I supplied a simple SQL statement that would read and report the two sets of histogram data, doing the arithmetic and reporting the final cardinality for you. In an update I also added an adjustment needed in 11g (or, you might say, removed in 12c) where gaps in the histograms were replaced by “ghost rows” with a frequency that was half the lowest frequency in the histogram.
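
If you don't have the previous note to hand, a minimal sketch of that kind of query, assuming both tables are in the current schema and both columns have simple frequency histograms (and ignoring the 11g "ghost row" adjustment), might look like this; summing the product column gives the join cardinality:

select
        t1f.value,
        t1f.frequency                   t1_frequency,
        t2f.frequency                   t2_frequency,
        t1f.frequency * t2f.frequency   product
from
        (
        select
                endpoint_value          value,
                endpoint_number -
                        lag(endpoint_number,1,0) over (order by endpoint_number)  frequency
        from    user_tab_histograms
        where   table_name  = 'T1'
        and     column_name = 'N1'
        )       t1f,
        (
        select
                endpoint_value          value,
                endpoint_number -
                        lag(endpoint_number,1,0) over (order by endpoint_number)  frequency
        from    user_tab_histograms
        where   table_name  = 'T2'
        and     column_name = 'N1'
        )       t2f
where
        t2f.value = t1f.value
order by
        t1f.value
;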

This is a nice place to start as the idea is very simple, and it’s likely that extensions of the basic idea will be used in all the other cases we have to consider. There are 25 possibilities that could need separate testing – though only 16 of them ought to be relevant from 12c onwards. Oracle allows for four kinds of histograms – in order of how precisely they describe the data they are:

  • Frequency – with a perfect description of the data
  • Top-N (a.k.a. Top-Frequency) – which describes all but a tiny fraction (ca. one bucket’s worth) of data perfectly
  • Hybrid – which can (but doesn’t usually, by default) describe up to 2,048 popular values perfectly and gives an approximate distribution for the rest
  • Height-balanced – which can (but doesn’t usually, by default) describe at most 1,024 popular values with some scope for misinformation.

Finally, of course, we have the general case of no histogram, using only 4 numbers (low value, high value, number of rows, number of distinct values) to give a rough picture of the data – and the need for histograms appears, of course, when the data doesn’t look anything like an even distribution of values between the low and high with close to “number of rows”/”number of distinct values” for each value.

So there are 5 possible statistical descriptions for the data in a column – which means there are 5 * 5 = 25 possible options to consider when we join two columns, or 4 * 4 = 16 if we label height-balanced histograms as obsolete and ignore them (which would be a pity because Chinar has done some very nice work explaining them).

Of course, once we’ve worked out a single-column equijoin between two tables there are plenty more options to consider:  multi-column joins, joins involving range-based predicates, joins involving more than 2 tables, and queries which (as so often happens) have predicates which aren’t involved in the joins.

For the moment I’m going to stick to the simplest case – two tables, one column, equality – and comment on the effects of filter predicates. It seems to be very straightforward, as I’ll demonstrate with a new model:

rem
rem     Script:         freq_hist_join_03.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Oct 2018
rem

execute dbms_random.seed(0)

create table t1(
        id      number(8,0),
        n0040   number(4,0),
        n0090   number(4,0),
        n0190   number(4,0),
        n0990   number(4,0),
        n1      number(4,0)
)
;

create table t2(
        id      number(8,0),
        n0050   number(4,0),
        n0110   number(4,0),
        n0230   number(4,0),
        n1150   number(4,0),
        n1      number(4,0)
)
;

insert into t1
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   40) + 1                   n0040,
        mod(rownum,   90) + 1                   n0090,
        mod(rownum,  190) + 1                   n0190,
        mod(rownum,  990) + 1                   n0990,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

insert into t2
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                                  id,
        mod(rownum,   50) + 1                   n0050,
        mod(rownum,  110) + 1                   n0110,
        mod(rownum,  230) + 1                   n0230,
        mod(rownum, 1150) + 1                   n1150,
        trunc(30 * abs(dbms_random.normal))     n1
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > comment to avoid WordPress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T2',
                method_opt  => 'for all columns size 1 for columns n1 size 254'
        );
end;
/

You’ll notice that in this script I’ve created empty tables and then populated them. This is because of an anomaly that appeared in 18.3 when I used “create as select”, and it should allow the results from 18.3 to be an exact match for 12c. You don’t need to pay much attention to the Nxxx columns; they were there so I could experiment with a few variations in the selectivity of filter predicates.

Given the purpose of the demonstration I’ve gathered histograms on the column I’m going to use to join the tables (called n1 in this case), and here are the summary results:


TABLE_NAME           COLUMN_NAME          HISTOGRAM       NUM_DISTINCT NUM_BUCKETS
-------------------- -------------------- --------------- ------------ -----------
T1                   N1                   FREQUENCY                119         119
T2                   N1                   FREQUENCY                124         124

     VALUE  FREQUENCY  FREQUENCY      PRODUCT
---------- ---------- ---------- ------------
         0       2488       2619    6,516,072
         1       2693       2599    6,999,107
         2       2635       2685    7,074,975
         3       2636       2654    6,995,944
...
       113          1          3            3
       115          1          2            2
       116          4          3           12
       117          1          1            1
       120          1          2            2
                                 ------------
sum                               188,114,543

We’ve got frequency histograms, and we can see that they don’t have a perfect overlap. I haven’t printed every single line from the cardinality query, just enough to show you the extreme skew, a few gaps, and the total. So here are three queries with execution plans:


set serveroutput off

alter session set statistics_level = all;
alter session set events '10053 trace name context forever';

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));


select
        count(*)
from
        t1, t2
where
        t1.n1 = t2.n1
and     t1.n0990 = 20
and     t2.n1150 = 25
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

I’ve queried the pure join – the count was exactly the 188,114,543 predicted by the cardinality query, of course – then I’ve applied a filter to one table, then to both tables. The first filter n0990 = 20 will (given the mod(,990) definition) identify one row in 990 from the original 100,000 in t1; the second filter n1150 = 25 will identify one row in 1,150 from t2. That’s filtering down to 101 rows and 87 rows respectively from the two tables. So what do we see in the plans:


-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:23.47 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:23.47 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    188M|    188M|00:00:23.36 |     748 |  6556K|  3619K| 8839K (0)|
|   3 |    TABLE ACCESS FULL| T1   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.02 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.02 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    190K|    200K|00:00:00.02 |     748 |  2715K|  2715K| 1647K (0)|
|*  3 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
|   4 |    TABLE ACCESS FULL| T2   |      1 |    100K|    100K|00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T1"."N0990"=20)



-----------------------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |00:00:00.01 |     748 |       |       |          |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |00:00:00.01 |     748 |       |       |          |
|*  2 |   HASH JOIN         |      |      1 |    165 |    165 |00:00:00.01 |     748 |  2715K|  2715K| 1678K (0)|
|*  3 |    TABLE ACCESS FULL| T2   |      1 |     87 |     87 |00:00:00.01 |     374 |       |       |          |
|*  4 |    TABLE ACCESS FULL| T1   |      1 |    101 |    101 |00:00:00.01 |     374 |       |       |          |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
   3 - filter("T2"."N1150"=25)
   4 - filter("T1"."N0990"=20)


The first execution plan shows an estimate of 188M rows – but we’ll have to check the trace file to confirm whether that’s only an approximate match to our calculation, or whether it’s an exact match. So here’s the relevant pair of lines:


Join Card:  188114543.000000 = outer (100000.000000) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 188114543 Computed: 188114543.000000

Yes, the cardinality calculation and the execution plan estimates match perfectly. But there are a couple of interesting things to note. First, Oracle seems to be deriving the cardinality by multiplying the individual cardinalities of the two tables with a figure it calls “sel” – the thing that Chinar Aliyev has labelled Jsel, the “Join Selectivity”. Secondly, Oracle can’t do arithmetic (or, removing tongue from cheek, the value it reports for the join selectivity is shown to only 6 decimal places but stored to far greater precision). What is the Join Selectivity, though? It’s the figure we derive from the cardinality SQL divided by the cardinality of the cartesian join of the two tables – i.e. 188,114,543 / (100,000 * 100,000).
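
Just to confirm the arithmetic behind that selectivity figure (a trivial check, not part of the original test script):

select 188114543 / (100000 * 100000)    join_selectivity
from   dual
;

which evaluates to 0.0188114543, matching the (truncated) sel value in the trace line.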

With the clue from the first trace file, can we work out why the second and third plans show 190K and 165 rows respectively? How about this – multiply the filtered cardinalities of the two separate tables, then multiply the result by the join selectivity:

  • 1a)   n0990 = 20: gives us 1 row in every 990.    100,000 / 990 = 101.010101…    (echoing the rounded execution plan estimate).
  • 1b)   100,000 * (100,000/990) * 0.0188114543 = 190,014.69898989…    (which is in the ballpark of the plan and needs confirmation from the trace file).

 

  • 2a)   n1150 = 25: gives us 1 row in every 1,150.    100,000 / 1,150 = 86.9565217…    (echoing the rounded execution plan estimate)
  • 2b)   (100,000/990) * (100,000/1,150) * 0.0188114543 = 165.2301651..    (echoing the rounded execution plan estimate).
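
If you want a quick sanity check, all three estimates can be reproduced in a single statement; this is just the arithmetic above re-typed, not anything the optimizer runs:

select
        100000 * 100000                 * 0.0188114543          no_filter,
        (100000/990)  * 100000          * 0.0188114543          filter_on_t1,
        (100000/990)  * (100000/1150)   * 0.0188114543          filter_on_both
from
        dual
;

giving approximately 188,114,543, 190,014.69 and 165.23 respectively.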

Cross-checking against extracts from the 10053 trace files:


Join Card:  190014.689899 = outer (101.010101) * inner (100000.000000) * sel (0.018811)
Join Card - Rounded: 190015 Computed: 190014.689899

Join Card:  165.230165 = outer (86.956522) * inner (101.010101) * sel (0.018811)
Join Card - Rounded: 165 Computed: 165.230165

Conclusion.

Remembering that we’re still looking at very simple examples with perfect frequency histograms: it looks as if we can work out a “Join Selectivity” (Jsel) – the selectivity of a “pure” unfiltered join of the two tables – by querying the histogram data, and then use the resulting value to calculate cardinalities for simple two-table equi-joins: multiply together the individual (filtered) table cardinality estimates and scale by the Join Selectivity.
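
Putting that into an informal formula (my shorthand rather than anything Oracle publishes):

Join Cardinality = card(t1 after filters) * card(t2 after filters) * Jsel

Jsel = ( sum over matching values v of freq_t1(v) * freq_t2(v) ) / ( num_rows(t1) * num_rows(t2) )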

Acknowledgements

Most of this work is based on a document written by Chinar Aliyev in 2016 and presented at the Hotsos Symposium the same year. I am most grateful to him for responding to a recent post of mine and getting me interested in spending some time to get re-acquainted with the topic. His original document is a 35 page pdf file, so there’s plenty more material to work through, experiment with, and write about.


Alliance Down Under 2018 Workshops

Jim Marion - Fri, 2018-10-05 08:15

Today marks the 30-day countdown to Alliance Down Under, an incredible opportunity for Oracle customers to network, share experiences, and learn more about Oracle products. On Monday and Tuesday, November 5 - 6, I am partnering with Presence of IT to deliver several pre-conference workshops at Alliance Down Under. For more details and to register, please visit the Alliance Down Under pre-conference workshop page. Workshops available:

  • Building better-than-breadcrumbs navigation
  • Configure, Don’t Customize! Event Mapping and Page and Field Configurator
  • Chatbot Workshop
  • Data Migration Framework: Deep Dive
  • App Designer for Functional Business Analysts (including building CIs for Excel to CI)
  • Advanced PeopleTools Tips & Techniques
  • Fluid Design/Configuration for Functional Business Analysts

We look forward to seeing you there!

[BLOG] Oracle EBS (R12) Financial: Production Based Depreciation in Fixed Assets

Online Apps DBA - Fri, 2018-10-05 04:23

✔What is Depreciation ✔What are the fixed assets ✔Production Based Depreciation in Fixed Assets & much more… Check at https://k21academy.com/financial13

The post [BLOG] Oracle EBS (R12) Financial: Production Based Depreciation in Fixed Assets appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Cloud Jump Start With Oracle Cloud Infrastructure

Oracle Cloud Infrastructure is the cloud for your most demanding workloads. It combines the elasticity and utility of public cloud with the granular control, security, and predictability of...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The identity column jumps its value if using merge into statement

Tom Kyte - Thu, 2018-10-04 22:06
Hi, I have one table defined as below; one of the columns is defined as an identity column <code> create table TEST ( col1 VARCHAR2(10), col2 NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVALUE 999999999999999...
Categories: DBA Blogs

dbms_lob.compare and length

Tom Kyte - Thu, 2018-10-04 22:06
Hello, I'm trying within a trigger to compare two CLOBs to see if there is any change. I am trying to prevent any unnecessary writes. Prior to writing to the audit trail I compare the two values. <code> v_clob_compare := dbms_lob.compare( :old.clob_tex...
Categories: DBA Blogs

Using function in conjunction with WITH query clause

Tom Kyte - Thu, 2018-10-04 22:06
Bit of a newbie, and hoping I can get pointed in the right direction. I've simplified things to demonstrate the issue I'm experiencing (and I'm really struggling to get a clear answer on other posts). When running the following: <code>with f...
Categories: DBA Blogs

SQL Query to Convert Ten Rows with One Column to Five Rows With One Column

Tom Kyte - Thu, 2018-10-04 22:06
I have a table with a column named VALUE. There are 10 rows in the table. The desired output is to display this as two columns: the first 5 rows as column A and rows 6 to 10 as column B, next to each other, as 5 rows of data like this <code>A B...
Categories: DBA Blogs

Calculate a variable date value and use it in a where clause to return all rows after that date

Tom Kyte - Thu, 2018-10-04 22:06
Long time SQL user of many flavors but brand new to PL/SQL and struggling to learn the "Oracle way". I've seen MANY examples of using variables in queries online and in documentation, but I've been unsuccessful finding a sample of what I want to do ...
Categories: DBA Blogs

SP execution plan should depend on input parameter

Tom Kyte - Thu, 2018-10-04 22:06
Hi guys, I have an SP with input parameters, and the execution plan should depend on the parameters provided to the procedure. Ex: PROCEDURE GetData( DataType int, DataValue int ) I want this procedure to search DataValue in column1 if DataType =...
Categories: DBA Blogs

Understanding SQL Profiles

Tom Kyte - Thu, 2018-10-04 22:06
Hi Tom, My understanding of using SQL Profiles has always been that they would prevent (frequent) changes in the access paths of a SQL statement. This morning I noticed that, despite the fact that an SQL profile was connected to a statement and statias...
Categories: DBA Blogs

Oracle Utilities Technical Best Practices whitepaper updated

Anthony Shorten - Thu, 2018-10-04 18:21

With the release of Oracle Utilities Application Framework V4.3.0.6.0 the Technical Best Practices whitepaper has been updated with the latest advice and latest information.

The following changes have been made:

  • Overview of the Health Check capability
  • Preparing your Implementation for the Oracle Cloud - An overview of the objects that need to be changed to prepare for the migration from on-premise to the Oracle Cloud
  • Optimization techniques for minimizing costs.

The latest version is located in Technical Best Practices (Doc Id: 560367.1) available from My Oracle Support.

Oracle’s AI-driven Risk Management Makes Corporate Finances More Secure

Oracle Press Releases - Thu, 2018-10-04 11:00
Press Release
Oracle’s AI-driven Risk Management Makes Corporate Finances More Secure
Advanced Access Controls parlay AI to help finance teams bolster security and risk analysis

Redwood Shores, Calif.—Oct 4, 2018

To help protect customers from ever-increasing fraud and security threats, Oracle today unveiled the enterprise software industry’s first AI-driven security and risk management solution. Designed specifically for Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Risk Management Cloud’s new Advanced Access Controls enable organizations to continuously monitor for segregation of duties (SOD), financial compliance (SOX), privacy risks, proprietary information and payment risks.

The new controls embed self-learning, artificial intelligence (AI) techniques to constantly examine all users, roles and privileges against a library of active security rules. The offering includes more than 100 best practices (configurable rules) across general ledger, payables, receivables and fixed assets.

“As the pace of business accelerates, organizations can no longer rely on time-consuming manual processes, which leave them vulnerable to fraud and human error,” said Laeeq Ahmed, managing director at KPMG. “With adaptive capabilities and AI, products such as Oracle Risk Management Cloud can help organizations manage access controls and monitor activity at scale to protect valuable data and reduce exposure to risk.”

Key benefits of AI-driven Security & Risk Management include:

  • Continuous protection: Constant monitoring of user and application activity

  • Instant best practices: More than 100 proven ERP security rules

  • Self-learning: Embedded AI and self-learning for precise results

  • Augmented incident response: Ensures that issues are directed to analysts for tracking, investigation and closure

“On-going disruption in the marketplace and regulatory landscape presents continually evolving operational and financial risks,” said Bill Behen, principal at Grant Thornton. “Oracle’s unique approach to risk management and expertise applying AI technology enables organizations to securely move to the cloud and continuously protect their business from a host of external and internal threats.”

The pre-packaged audit-approved security rules automate access analysis during the role design phase to significantly accelerate ERP implementations. In addition, the intuitive workbench, visualization and simulation features within Advanced Access Controls make it easy to add new rules and further optimize user access. Once live, the solution continuously monitors and automatically routes incidents to security analysts.

To help customers analyze complex, recursive and dynamic security data across all users, roles and privileges, Advanced Access Controls uses graph-based analysis and self-learning algorithms. This enables organizations to accurately and reliably review and visualize the entire path by which any user is able to access and execute sensitive functions.

“Advanced Access Controls automate the time-consuming analysis needed to protect business data from insider threats, fraud, misuse and human error,” said Sid Sinha, vice president of Risk Management Cloud Product Strategy at Oracle. “This service is part of an integrated, practical solution to effectively protect information in business applications using the latest data analysis and exception management techniques.”

For more information on Oracle Advanced Access Controls and Oracle Risk Management Cloud, go to cloud.oracle.com/risk-management-cloud.

To learn more about Risk Management Cloud at Oracle OpenWorld, please visit the sessions catalog.

Contact Info
Bill Rundle
Oracle PR
+1 650 506 1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


New OA Framework 12.2.6 Update 14 Now Available

Steven Chan - Thu, 2018-10-04 10:31

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Oracle Application Framework (FWK) Release 12.2.6 Bundle 14 (Patch 28183913:R12.FWK.C)

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.6 bundle patches.

In addition, this latest bundle patch includes fixes for the following issues:

  • The details of an expanded row are not visible in the viewport area when a table has more than 30 rows.
  • On the iSupport Service Request creation page, the length of the Problem Summary data input field is inadequate.
  • Messages filtered for viruses are not added to the confirmation message in a specific product flow.
  • Validation of required fields is not triggered when a new row is added to a table.
  • With the application session language set to Arabic, in the IE 11 browser a popup does not appear when the Delete button is clicked.

Categories: APPS Blogs
