Feed aggregator

Quick History of Me

Mark Vakoc - Wed, 2007-09-19 22:24
Here's a quick background of who I am, why I'm blogging, and what I hope to accomplish here.

I have been with JD Edwards/PeopleSoft/Oracle for almost ten years now. The first three or so years were spent on the support organization's SWAT team, flying to customer sites to address specific customer issues and provide technical assistance. Life on the road was great; I traveled to more than twelve countries and countless client sites and really enjoyed the work. It came time to stop traveling so much, so I settled down in the Denver area and created the Support Assistant diagnostic product. I'm still very proud of SA, and am glad to say that the product is alive and kicking. More on that later.

After working primarily on SA for five or so years I decided to throw my hat into something new. I wanted to apply my experience from the field and support organizations to build something new. The result is Server Manager. Specifically, I wanted to address:
  • The difficulty in installing and upgrading our server based products
  • The difficulties around configuration management; understanding all the configuration parameters and managing the configuration of servers across an E1 installation
  • The difficulty in monitoring the activity of E1 servers
  • The difficulty associated with troubleshooting servers when something goes wrong
Hopefully you'll see that Server Manager addresses these and many other administration activities, truly resulting in a lower cost of ownership.

As someone unbelievably addicted to blogs, generally of a geeky nature, I've been "inspired" by many others to use this forum to publish tips, tricks, knowledge, and general musings on the product of which I'm so proud. I'm particularly inspired by a blog by a co-worker entitled 'The Buttso Blathers' with very insightful posts on all things OAS (Oracle Application Server).

There is one thing this blog is not: a support forum or a place to seek help. So tune in and learn more about the exciting Server Manager product.


Mark Vakoc - Wed, 2007-09-19 22:19
Before digging into the product itself I want to express some thanks to the team that worked on Server Manager. At this point I'll let the involved parties remain anonymous. This was a team effort, and I want to express my personal thanks to the development team and quality assurance team that helped create the product.

So, for all those involved in Server Manager, I say thanks.

Making Sense of Fusion Middleware

Solution Beacon - Wed, 2007-09-19 18:22
The Fusion Middleware area has become more and more crowded with all of the products that are now included. I speak from personal experience on this topic since I'm often stumbling through the various acronyms used for these products. Most of us think that Fusion Middleware refers to Service-Oriented Architecture, Identity Management and Business Process Management since these are usually the

Redwood Shores

Fairlie Rego - Mon, 2007-09-17 15:02

I was in Redwood last week and got a chance to meet some of the best minds in Server Technologies including a few ex-colleagues and a couple of guys on the revered
Oak Table

And although a significant part of the week was spent in the wine and lounge bars of San Francisco, it was quite an enriching experience.

Inside the Middle

Solution Beacon - Mon, 2007-09-17 12:44
I've written generally about Oracle Fusion Middleware but let's move down to the next level of detail. An easy way to do this is to focus on functionality. There are seven functional categories of products in Fusion Middleware, per Oracle. The Unified Workplace group provides collaboration tools (Groupware, Instant Messaging), portals, mobile/desktop presentation and secure search. The

Starting in the Middle

Solution Beacon - Mon, 2007-09-17 12:40
During the Cold War, US and Russian analysts spent millions of hours analyzing one another’s public pronouncements trying to understand what was really going on with the other side. In the same manner today, we often find ourselves trying to understand what Oracle is trying to communicate (or not communicate) through their many announcements, white papers and presentations. One area where this

Payroll Reversal

RameshKumar Shanmugam - Sun, 2007-09-16 16:38
After running the payroll, if post-run processing has already occurred we cannot correct the payroll with Retry, and even a rollback requires rolling back the complete payroll run just to correct an entry for one or two assignments.
In these cases we can use Payroll Reversal. In short, Payroll Reversal helps to correct the run result for a single assignment.

To reverse the payroll run for an assignment:
(R) US HRMS Manager
(N) Fast Path > Reverse Payroll Run
Select the assignment that you need to reverse from the LOV, tick the Reverse check box for the payroll run that needs to be reversed, and save your work. The consolidation set for the run will be displayed automatically for the reversal.
The run result is now reversed to its value before the payroll run. You can now modify the entry and start processing your payroll again.

Try this out!!!
Categories: APPS Blogs

Query to determine the clustering factor before you create the index

Mihajlo Tekic - Sun, 2007-09-16 10:33
I found the following query useful for estimating the clustering factor of an index that has not been created yet (replace table_name and the <list of the indexed columns> placeholders):

select count(1) clustering_factor
from
(
  select dbms_rowid.rowid_block_number(rowid) block_no,
         <list of the indexed columns>,
         count(1) num_rows,
         LAG(dbms_rowid.rowid_block_number(rowid))
           over (order by <list of the indexed columns>) prev_block
  from table_name
  group by dbms_rowid.rowid_block_number(rowid),
           <list of the indexed columns>
  order by <list of the indexed columns>
)
where block_no<>prev_block or prev_block is null

Let's take a look at the following example:

SQL> create table test
2 tablespace example
3 as select
4 mod(rownum,8) col1,
5 lpad('x',1000) col2
6 from all_objects
7 where rownum<=1000;

Using the following query, we can see that there are about 7 rows per block (block size 8192), which makes records with the same COL1 value scatter across different blocks:

SQL> select num_rows_in_block, count(1) blocks
2 from
3 (
4 select block_no, count(1) num_rows_in_block
5 from
6 (
7 select dbms_rowid.rowid_block_number(rowid) block_no
8 from test
9 )
10 group by block_no
11 )
12* group by num_rows_in_block
SQL> /

NUM_ROWS_IN_BLOCK     BLOCKS
----------------- ----------
                6          1
                7        142

Let's see how big the clustering factor of an index built on TEST(COL1) would be.

SQL> select count(1) clustering_factor
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col1,
6 count(1) num_rows,
7 LAG(dbms_rowid.rowid_block_number(rowid))
8 over (order by col1) prev_block
9 from test
10 group by dbms_rowid.rowid_block_number(rowid), col1
11 order by col1
12 )
13* where block_no<>prev_block or prev_block is null
SQL> /


The result was pretty obvious.
Now let's add another column, COL3, to the table TEST and set its value to mod(col1, 4).
This will make each block contain at most two records with the same value of COL3.

SQL> alter table test
2 add col3 number;

Table altered.

SQL> update test set col3=mod(col1,4);

1000 rows updated.

SQL> commit;

Commit complete.


Now let's see what the clustering factor of an index created on the COL3 column would be:

SQL> select count(1) clustering_factor
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col3,
6 count(1) num_rows,
7 LAG(dbms_rowid.rowid_block_number(rowid))
8 over (order by col3) prev_block
9 from test
10 group by dbms_rowid.rowid_block_number(rowid), col3
11 order by col3
12 )
13* where block_no<>prev_block or prev_block is null
SQL> /


Now let's create the actual indexes and see what their clustering factor would be:

SQL> create index test_col1_idx on test(col1);

Index created.

SQL> create index test_col3_idx on test(col3);

Index created.

SQL> select index_name, clustering_factor
2 from user_indexes
3* where table_name='TEST'
SQL> /

INDEX_NAME                     CLUSTERING_FACTOR
------------------------------ -----------------

I found excellent insights into the importance of the clustering factor for cost calculation in Jonathan Lewis's book Cost-Based Oracle Fundamentals.

Let's go back to the very first query in this post. Why do I think this query is useful?
Because sometimes, even when some columns look like a perfect combination to index, the optimizer may not use the index because the clustering factor is too big.
The query can give you an initial clue as to how effective an index would be.
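The idea behind the query can also be sketched outside the database. Here is a minimal Python illustration (an approximation of the same counting logic, ignoring rowid tie-breaks within equal key values): walk the rows in index (key) order and count how often the block number changes.

```python
def clustering_factor(rows):
    """Approximate the clustering factor an index would have.

    rows: iterable of (block_no, key) pairs, one per table row.
    Sorting by key mimics index order; each block switch while
    walking the index entries costs one extra block visit.
    """
    factor = 0
    prev_block = None
    for block, _key in sorted(rows, key=lambda r: r[1]):
        if block != prev_block:  # block changed: count it (first row too)
            factor += 1
        prev_block = block
    return factor

# Mimic the TEST table: 1000 rows, 7 rows per block.
scattered = [(r // 7, r % 8) for r in range(1000)]   # key like COL1, scattered
clustered = [(r // 7, r // 7) for r in range(1000)]  # key follows block order

print(clustering_factor(scattered))  # 1000: every index entry hits a new block
print(clustering_factor(clustered))  # 143: one visit per block
```

The scattered key gives a clustering factor near the row count (bad for index range scans), while a key stored in block order gives one near the block count.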

Oracle 11g Documentation

Renaps' Blog - Fri, 2007-09-14 09:02

Since reading the documentation is always a good idea when a new Oracle release is out, I decided to read Oracle® Database Upgrade Guide and the Oracle® Database New Features Guide before getting into the 11g installation guide.

It is funny that I haven't yet downloaded the new Oracle 11g database and I am already using one of its new features!

Oracle 11g now enables users to send in comments on the documentation, making it faster and easier for everyone to enhance it.
This new feature has also been discussed here:
IT-eye Weblog
OTN TechBlog

While reading through the two guides, I used this new feature to report a couple of errors.
Oracle responded to my user comments pretty fast:

Thank you for sending us this correction to the Oracle Database Upgrade Guide. The problem has been fixed and the correction should appear in the next revision of the book (probably in part number B28300-02).

Submitter: my_mail
Book title: Oracle Database Upgrade Guide
Part number: b28300
Release: 11g Release 1 (11.1)
Topic title: Compatibility and Interoperability
URL: http://download.oracle.com/docs/cd/B28359_01/server.111/b28300/compat.htm
Status: Forwarded
Submitted on: 28-AUG-07

In the Automatic Maintenance Tasks Management section, the “See Also” link was pointing to the wrong URL: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/repair.htm#ADMIN022

This page should have been pointing to a subsection of the Automatic Maintenance Tasks Management feature…

Categories: DBA Blogs

Oracle 11g Database New Features: Data Guard Enhancements

Virag Sharma - Wed, 2007-09-12 07:59

Why do we need Data Guard? For:

  • Data protection
  • Data availability

11g increases the ROI from standby systems and enhances manageability; new features like Active Data Guard and Snapshot Standby deliver a better ROI.

Here are the main Data Guard categories and their enhancements:

1) Data Protection
  • Advanced compression
  • Lost-write protection
  • Fast-Start Failover
2) Increased ROI
  • Active Data Guard
  • Snapshot Standby
3) High Availability
  • Faster Redo Apply
  • Faster failover and switchover
  • Automatic failover using ASYNC
4) Manageability
  • Mixed Windows/Linux configurations

Active Data Guard

Oracle Active Data Guard 11g, a new database option, enables read-only access to a physical standby database while Redo Apply is active. This new capability increases your return on investment in Data Guard technology because a physical standby database can now be used to offload queries from the primary database in addition to providing data protection.
(We can do the same with a logical standby; the difference is that the redo apply method used by a physical standby is faster.)

  • Stop redo apply:
Alter database recover managed standby database cancel;
  • Open the database for read-only access:
alter database open;
  • Once the database is open, restart redo apply:
alter database recover managed standby database
using current logfile disconnect from session;

Snapshot Standby

This enhancement is a good example of ROI: it allows better utilization of the standby and truly leverages DR hardware for multiple purposes. For example, convert the physical standby to a snapshot standby, do testing or application patching, and after testing convert the snapshot standby back to a physical standby.

  • -- Convert physical database to snapshot
  • --
  • -- Do testing on database
  • --
  • -- Convert snapshot Database to physical database
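The actual statements were elided above; presumably they are the new 11g conversion commands. A sketch based on the documented syntax (run on a mounted standby; verify against your release before use):

```sql
-- Convert the physical standby to a snapshot standby (database must be mounted)
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- ... do testing / application patching ...

-- Discard the test changes and convert back to a physical standby
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```

While the database is a snapshot standby it still receives redo from the primary, but the redo is not applied until it is converted back.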


This is similar to a storage snapshot, but it uses the same space; no additional space is needed. (This could possibly be achieved in 10g R2 too, using the flashback feature, but it is certainly not as easy as in 11g.)

Redo compression

To transport redo data to a destination in compressed form, use the COMPRESSION attribute.
Compression is disabled by default.

  • Alter system set log_archive_dest_1 = 'SERVICE=standby1 compression=ENABLE';

  • select dest_name , compression from v$archive_dest;

    --------------------------------- -------

    10 rows selected.

Data Protection

A new initialization parameter, DB_LOST_WRITE_PROTECT, has been introduced. It compares the versions of blocks on the standby with those in the incoming redo stream; a version discrepancy implies a lost write on either the primary or the standby database.


Manageability Enhancements

The SYS user and password files are no longer required for redo transmission authentication. A non-SYS user can be specified through a parameter; this user must have the SYSOPER privilege, and its password must be the same at the primary and all standbys.
Upon SYSDBA/SYSOPER changes, the password file must be copied from the primary to all physical standby databases.


Data Guard support for heterogeneous primary and standby systems in the same Data Guard configuration. (Since only the Linux installer is available for 11g so far, I was not able to test this feature.)

Fast-Start Failover

Immediate automatic failover for user-configurable health conditions


Condition examples:
  • Datafile Offline
  • Corrupted Controlfile
  • Corrupted Dictionary
  • Inaccessible Logfile
  • Stuck Archiver
  • Any explicit ORA-xyz error

Applications can also request a failover using the DBMS_DG package.
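A sketch of how an application might do that. DBMS_DG.INITIATE_FS_FAILOVER takes a condition string and returns a status code; treat the exact signature here as something to verify against the 11g PL/SQL Packages and Types Reference:

```sql
VARIABLE status NUMBER;
-- Ask the broker to perform an immediate fast-start failover
EXEC :status := DBMS_DG.INITIATE_FS_FAILOVER('Failover Requested');
PRINT status
```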


Categories: DBA Blogs

My new laptop

Siva Doe - Mon, 2007-09-10 15:15

Last week our team got a couple of new laptops: Sony Vaio VGN-FZ17G.
Looks shiny and sleek, and it came loaded: Core2 Duo @ 2.00 GHz, 2GB RAM,
160G disk, Vista Home Premium. The battery life sucks though; with the
default power plan of Vaio Optimized, I get just about 40 mins of battery
backup. Maybe it will give more with the power saver plans.

Coming to the main story: with the help of GParted, I resized the Vista
partition, then started installing Nevada build 72. I was all worked up,
eager to see the Caiman installer at work. Alas, no such luck. It looks
like the Nvidia GeForce 8400M GT card was not supported by the driver
bundled in nv_72, so I had to endure one more session of the familiar
blue/green console installer.

The first thing I did was create a ZFS pool and put /opt on ZFS (is
this safe?). Next, I downloaded and installed the latest Nvidia driver,
which worked beautifully. Now Xorg is running at the native 1280x800
resolution. Strangely, my WiFi is working but not my ethernet port.

These will be my next steps: get the ethernet working (Marvell Yukon);
install Compiz Fusion (thanks Erwann), Second Life for Solaris (thanks
Dana), and Sun Studio 12 (I have to get back to work, don't I?); then
Coolstack. The others are TBD. Sound is not working, but that will be
my lowest priority.


Latest supercomputer!

Peter Khos - Sun, 2007-09-09 01:38
I found this news very interesting indeed. The latest supercomputer is here and it is not controlled by government or private agencies. Instead, it is under the control of (most likely) criminals, and the question being asked is "what do they plan to do with it?" It's scary, but just imagine the possibilities; none of them are good at all. I think it might be prudent for the white hats to

Some Good Links / Blogs / site to know more about New Oracle Database 11g features

Virag Sharma - Sat, 2007-09-08 21:56

I have rated the sites I like most with 5 stars (*****).

(These links were collected from Google Alerts.)

  1. Oracle Database Online Documentation 11g Release 1 (11.1) *****
  2. ORACLE-BASE - Articles on Oracle 11g new features *****
  3. Oracle Database 11g on OTN
  4. Oracle Database 11g: The Top Features for DBAs and Developers
  5. Pythian Group Blog » Tuning Pack 11g : Real-Time SQL Monitoring
  6. PSOUG - New in 11gR1*****
  7. Robert G. Freeman’s Blog: 11g Security New Feature… A short one…
  8. Oracle 11g new Features Summary by Burleson
  9. My top 10 Oracle 11g New Features Part 3 - 5 nice optimizer statistics
  10. Oracle 11g Top New Features for DBA Virag Sharma
  11. Oracle Magazine: PL/SQL Practices: On the PL/SQL Function Result Cache
  12. Changes in Oracle 11.1 - Julian Dyke *****
  13. Blogging about 11g - Part 7 - Function Result Cache
  14. Oracle 11g New Features SQL plan management (SPM)
  15. Oracle 11g New feaures : Case Sensitive Password
  16. SQL Performance Analyzer (SPA) Part - 2
  17. Oracle 11g ADR Automatic Diagnostic Repository
  18. New Parameters in 11g (Part 2) | Dizwell Informatics
  19. Oracle 11g DRCP: Database Resident Connection Pooling - second attempt
  20. Oracle 11g internals part 1: Automatic Memory Management
  21. My top 10 Oracle 11g New Features Part 4 - the SQL Query Result Cache
  22. Oracle 11g Tips by Burleson Consulting

You can also have a look at the following forum. A good thing about this forum is that it is updated from time to time with the latest 11g links. Thanks to TongucY.


- Vi
Categories: DBA Blogs

Concurrent Usage for E-Business

Peter Khos - Sat, 2007-09-08 16:52
Blogger seems to have lost the last entry that I posted earlier this week. I didn't check, and now I can't even remember what it was that I posted about. Anyhow, there were a number of things to post, but I thought I would focus on a current technology issue that we are facing with our Oracle E-Business project. Our test lab has been running some performance scripts on our E-Business test

Nulls: Nulls and Aggregate Functions (New SQL Snippets Tutorial)

Joe Fuda - Sat, 2007-09-08 07:00
A new tutorial has been added to SQL Snippets which demonstrates how aggregate functions deal with null values. Techniques for generating results that ignore nulls and results that include nulls are highlighted.
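As a quick taste of the behaviour the tutorial covers (shown here with SQLite through Python for portability; Oracle's aggregate functions skip NULLs in the same way):

```python
import sqlite3

# COUNT(*) counts rows; COUNT(col), AVG and SUM ignore NULLs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(10,), (20,), (None,)])

count_star, count_n, avg_n = conn.execute(
    "SELECT COUNT(*), COUNT(n), AVG(n) FROM t").fetchone()
print(count_star, count_n, avg_n)  # 3 2 15.0  (AVG over the two non-NULLs)

# To include NULLs in an aggregate, substitute a value explicitly:
avg_all, = conn.execute("SELECT AVG(COALESCE(n, 0)) FROM t").fetchone()
print(avg_all)  # 10.0  (30 / 3 rows)
```

The COALESCE trick is the standard way to force an aggregate to "see" NULL rows.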

Options for Generating XML for BI Publisher

Solution Beacon - Fri, 2007-09-07 10:42
Oracle's BI Publisher is a powerful tool for producing richly formatted documents. It takes any well-formed XML data and refines it into custom invoices, checks, bar coded labels, web pages – the possibilities are limitless. The source for the XML data can be any process that can generate well-formed XML data ("well-formed" simply means that it complies with XML standards). In the Oracle
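As a tiny illustration of "well-formed" (using hypothetical invoice data, not a real BI Publisher schema): any XML where every element is properly nested and closed parses cleanly, and that is all a data source needs to satisfy.

```python
import xml.etree.ElementTree as ET

# Hypothetical invoice fragment: every open tag has a matching close,
# properly nested, which is what "well-formed" means.
data = """<invoice>
  <number>1001</number>
  <lines>
    <line><item>Widget</item><qty>3</qty></line>
  </lines>
</invoice>"""

root = ET.fromstring(data)  # raises ParseError if not well-formed
print(root.findtext("number"))           # 1001
print(root.find("lines/line/qty").text)  # 3
```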

Oracle 11g New Features SQL plan management (SPM)

Virag Sharma - Fri, 2007-09-07 07:54
You might have noticed that execution plans change in the CBO, especially in the following cases:

  • Database upgrade
  • Database / schema stats collection
  • Change in environment (e.g. Unix to Linux migration)
  • Change in data

Suppose you want to upgrade a database from 10g to 11g: this change can cause regressions in SQL performance, and fixing them manually can be difficult and time consuming. SQL tuning can be used, but it is a reactive mechanism and cannot guarantee stable performance when drastic changes happen to the system. SQL tuning can only resolve performance issues after they have occurred and been identified. For example, a SQL statement may become a high-load statement due to a plan change, but this cannot be resolved by SQL tuning until after the plan change occurs.

Solution: the Oracle 11g new feature SQL plan management (SPM) records and evaluates the execution plans of SQL statements over time, and builds SQL plan baselines composed of a set of existing plans known to be efficient. The SQL plan baselines are then used to preserve the performance of the corresponding SQL statements, regardless of changes occurring in the system.

Note: before upgrading the database to 11g, I collected a SQL tuning set (STS) for 7 days using "capture_cursor_cache_sqlset", assuming that in 7 days stats for all SQL statements would be collected into the STS. After upgrading the database from 10g to 11g, I used this STS to compare execution plans on 11g. Please check "SQL Performance Analyzer Part - 2"
for the steps to transfer an STS from one database to another.

1) Capture SQL plan baselines
   a) Automatic plan capture (11g only)
   b) Manual plan capture (10g and 11g)

2) Make changes: upgrade the database / collect stats / migrate the database to Linux

3) Upload the SQL plan baselines

4) Enable the use of SQL plan baselines

5) Evolve the SQL plan baselines

1) Capturing SQL Plan Baselines in oracle 10g before upgrading to 11g

a) Automatic plan capture (11g only)

When OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to TRUE, the database automatically creates and maintains the plan history for SQL statements using information provided by the optimizer.

In the init.ora file:

optimizer_capture_sql_plan_baselines=TRUE

Note: the automatic plan capture above only works on 11g; if you want to capture plans in 10g, check the section "Manual plan capture".

b) Manual plan capture (10g and 11g)

The following script will capture SQL plans into a SQL tuning set over 7 days (the script runs for 7 days; change the time limit according to your needs):

exec dbms_sqltune.drop_sqlset('STS_RAG_CMUL');
exec dbms_sqltune.create_sqlset('STS_RAG_CMUL');

DECLARE
  sts_cmul VARCHAR2(30) := 'STS_RAG_CMUL';
BEGIN
  dbms_sqltune.capture_cursor_cache_sqlset(
    sqlset_name     => sts_cmul,
    time_limit      => 604800,  -- 7 days
    repeat_interval => 1,
    capture_option  => 'MERGE');
END;
/
capture_cursor_cache_sqlset : The procedure captures a workload from the cursor cache into a SQL tuning set, polling the cache multiple times over a time period and updating the workload data stored there. It can execute over as long a period as required to capture an entire system workload.

  • time_limit: 604800. The total amount of time, in seconds, to execute (7 days = 7 * 24 * 60 * 60 = 604800).
  • repeat_interval: 1. The amount of time, in seconds, to pause between samplings.
  • capture_option: during capture, either insert new statements, update existing ones, or both:
    • 'INSERT'
    • 'UPDATE'
    • 'MERGE'
  • capture_mode: capture mode (for the UPDATE and MERGE capture options):
    • MODE_REPLACE_OLD_STATS - replace statistics when the number of executions seen is greater than that stored in the SQL tuning set.
    • MODE_ACCUMULATE_STATS - add new values to the current values for SQL we already store. Note that this mode detects if a statement has been aged out, so the final value for a statistic will be the sum of the statistics of all cursors that the statement existed under.

The CAPTURE_CURSOR_CACHE_SQLSET function enables capture of the full system workload by repeatedly polling the cursor cache over a specified interval. It is a lot more efficient than repeatedly using the SELECT_CURSOR_CACHE and LOAD_SQLSET procedures to capture the cursor cache over an extended period of time. This function effectively captures the entire workload, as opposed to the AWR, which only captures the workload of high-load SQL statements, or the LOAD_SQLSET procedure, which accesses the data source only once.

2) Make changes, e.g. upgrade the database / migrate the database to Linux, and collect stats:

exec dbms_stats.set_param('ESTIMATE_PERCENT','100');

exec dbms_stats.set_param('method_opt','FOR ALL COLUMNS SIZE 254');

exec dbms_stats.Gather_Database_Stats;

3) Upload SQL Plan Baseline

If you have collected the plans in a SQL tuning set (STS), then you need to
upload the baselines from the STS.

-- Upload plan manually using dbms_spm

variable pls number;

exec :pls := dbms_spm.load_plans_from_sqlset(sqlset_name=>'STS_RAG_CMUL',-
basic_filter=>'parsing_schema_name like ''APPS'' and plan_hash_value!=0');

4) Enable the use of SQL plan baselines

To enable the use of SQL plan baselines, set the OPTIMIZER_USE_SQL_PLAN_BASELINES
initialization parameter to TRUE. By default, this parameter is set to TRUE.

--In the init.ora file:

optimizer_use_sql_plan_baselines=TRUE

5) Evolving SQL Plan Baselines

I captured 10g plans in the STS and, after upgrading the database to 11g, uploaded those plans using dbms_spm.load_plans_from_sqlset. After uploading the plans manually, I set


--Check not accepted Plans

SQL> select sql_handle, plan_name, enabled, accepted, fixed
from dba_sql_plan_baselines
Where ACCEPTED='NO' and

SQL_HANDLE            PLAN_NAME                 ENABLED ACCEPTED  FIXED
--------------------- ------------------------- ------- --------- -------
SYS_SQL_9295397103ae5ebe SYS_SQL_PLAN_74720f16b0fcefa7 YES NO NO
SYS_SQL_9295397103ad3eba SYS_SQL_PLAN_03ad3ebaa086802f YES NO NO
SYS_SQL_434ef30d9da6a29b SYS_SQL_PLAN_9da6a29b1f6e321d YES NO NO
SYS_SQL_13836d6b3da62bbb SYS_SQL_PLAN_3da62bbbe5990995 YES NO NO
.......................................... ....................................

40 rows selected.

-- Displaying SQL Plan Baselines

SQL> select * from table(dbms_xplan.display_sql_plan_baseline(sql_handle=>'SYS_SQL_13836d6b3da62bbb',format=>'basic'));

SQL handle: SYS_SQL_13836d6b3da62bbb
SQL text: ( I have removed SQL text and changed table name in below execution plan)
Plan name: SYS_SQL_PLAN_3da62bbb91266099
Enabled: YES Fixed: NO Accepted: YES Origin: MANUAL-LOAD

Plan hash value: 2877140902
Id Operation Name

Plan name: SYS_SQL_PLAN_3da62bbbe5990995
Enabled: YES Fixed: NO Accepted: NO Origin: AUTO-CAPTURE

Plan hash value: 3245492848

Id Operation Name

42 rows selected.


Evolving plans with the PL/SQL function DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE


DECLARE
  report clob;
BEGIN
  report := dbms_spm.evolve_sql_plan_baseline(
    sql_handle => 'SYS_SQL_13836d6b3da62bbb');
  dbms_output.put_line(report);
END;
/

Evolve SQL Plan Baseline Report
SQL_HANDLE =SYS_SQL_13836d6b3da62bbb

Plan: SYS_SQL_PLAN_3da62bbbe5990995
Plan was verified: Time used .06 seconds.
Failed performance criterion: Compound improvement ratio <= .3.

Baseline Plan Test Plan Improv. Ratio

------------- --------- -------------

Rows Processed: 1460 1460
Elapsed Time(ms): 10 10 1
CPU Time(ms): 9 9 1
Buffer Gets: 84 286 .29
Disk Reads: 0 0
Direct Writes: 0 0
Fetches: 0 0
Executions: 1 1
Report Summary

Number of SQL plan baselines verified: 1.
Number of SQL plan baselines evolved: 0.

PL/SQL procedure successfully completed.


Just for demo purposes, let's accept plan 2 ('SYS_SQL_PLAN_3da62bbbe5990995').

-- Accept plan 2 ('SYS_SQL_PLAN_3da62bbbe5990995') and fix it

variable cnt number;
SQL> exec :cnt := dbms_spm.alter_sql_plan_baseline(sql_handle =>
'SYS_SQL_13836d6b3da62bbb', -
plan_name => 'SYS_SQL_PLAN_3da62bbbe5990995', -
attribute_name => 'ACCEPTED', attribute_value => 'YES');

PL/SQL procedure successfully completed.

-- The same PL/SQL function is used to fix plans
-- The optimizer always picks FIXED plans

SQL> exec :cnt := dbms_spm.alter_sql_plan_baseline(sql_handle =>
'SYS_SQL_13836d6b3da62bbb', -
plan_name => 'SYS_SQL_PLAN_3da62bbbe5990995', -
attribute_name => 'FIXED', attribute_value => 'YES');

PL/SQL procedure successfully completed.


-- Check status of plans

SQL> select sql_handle, plan_name, enabled, accepted, fixed from dba_sql_plan_baselines
2 Where FIXED='YES' and PARSING_SCHEMA_NAME like '';

----------------------------- ---------------------------- -------------- --------
SYS_SQL_13836d6b3da62bbb SYS_SQL_PLAN_3da62bbbe5990995 YES YES YES




-- Displaying SQL Plan Baselines after changes

SQL> select * from table(dbms_xplan.display_sql_plan_baseline(sql_handle=>'SYS_SQL_13836d6b3da62bbb',format=>'basic'));

SQL handle: SYS_SQL_13836d6b3da62bbb
SQL text: ( I have removed SQL text and changed table name in below execution plan)
Plan name: SYS_SQL_PLAN_3da62bbb91266099
Enabled: YES Fixed: NO Accepted: YES Origin: MANUAL-LOAD

Plan hash value: 2877140902

Id Operation Name
Plan name: SYS_SQL_PLAN_3da62bbbe5990995
Enabled: YES Fixed: YES Accepted: YES Origin: AUTO-CAPTURE

Plan hash value: 3245492848

Id Operation Name
42 rows selected.

Reference: SQL Plan Management, Chapter 15
Categories: DBA Blogs

Payroll Rollback

RameshKumar Shanmugam - Wed, 2007-09-05 16:19
Once the Payroll/QuickPay process is completed, if there is some issue or a correction needs to be made, we can either:
  • Retry
  • Reverse
  • Rollback

In this blog I will try to explain how to roll back a payroll process.

What does Rollback do?

In simple words, the Rollback process resets the run results and the latest balances back to their values before the payroll process was run.

The rollback processes have to be run in the reverse sequence of the payroll run. For example, for processing the payroll we run the following processes in sequence: Payroll Run, Prepayment, Costing, Transfer to GL. So for the rollback we need to run them in reverse: Rollback Transfer to GL, Rollback Costing, Rollback Prepayment, Rollback Payroll Run.

Note: if the costing details have already been imported into GL, we cannot roll back the process. So it is always advisable to run the Transfer to GL process only after all validation and checking are done and we feel no more modifications are needed.

List of process types that can be rolled back in PF K RUP1:

  • Advance Pay Element
  • Balance Adjustment
  • Bank or Post office Payment
  • Batch element entry
  • cheque writer
  • Costing
  • Costing of Payment
  • Estimate costing
  • ext/Manual Payment
  • Magnetic report
  • Magnetic Transfer
  • Pre-Payments
  • Purge
  • QuickPay Pre-Payment
  • Quick Pay run
  • Report Generator
  • Retro Costing
  • Retro Pay
  • Retro Pay by Action
  • Retro Pay by element
  • Reversal
  • Run
  • Transfer to GL
  • Void Payment

Categories: APPS Blogs

UKOUG 2007 agenda packed with APEX

Anthony Rayner - Wed, 2007-09-05 10:43
The agenda for UKOUG 2007 has just been released and there are currently a total of 13 presentations on or relating to APEX (and a keynote delivered by Tom Kyte!). They are...

Loads to learn, so come along. I'll be there presenting 'Building The Rich User Interface with Oracle Application Express and AJAX', so I'm looking forward to doing that (scary!) and to listening to many more! Hope you can make it!

Categories: Development


Subscribe to Oracle FAQ aggregator