Feed aggregator

Webinar: Are you Ready for Fusion?

Solution Beacon - Thu, 2007-07-26 14:43
This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. This one-hour webinar will be presented on August 15th at 10:30am CDT, and registration is available here.
Title: Are you Ready for Fusion?
Abstract: A Practical Guide to What You Should Know. For those of you concerned about Fusion's place in your...

Webinar: Release 12 Subledger Accounting Engine

Solution Beacon - Thu, 2007-07-26 14:42
This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. This one-hour webinar will be presented on August 15th at 1:30pm CDT, and registration is available here.
Title: Release 12 Subledger Accounting Engine
Abstract: What it is, What it does, and How to use it. Be in the know by attending this...

Choice of desktop OS

Peter Khos - Wed, 2007-07-25 20:10
Back in April of this year, I bought a new Dell AMD desktop which comes with Vista installed. I gave that a twirl for a month and thought maybe I should go with Linux so that I can install Oracle on a "proper" OS, and also because Vista applications are tough to come by. I am finding that Linux (I chose Ubuntu) has similar problems, in that I was not able to use the software that comes with my various...

Webinar: Release 12 Multi-Org Access Control (MOAC) – An Inside Look

Solution Beacon - Tue, 2007-07-24 15:55
This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. The webinar will be presented on 7/25 at 1:30pm CDT, and registration is available here.
Title: Release 12 Multi-Org Access Control (MOAC) – An Inside Look
Abstract: Join us as our Solution Architect focuses on Multi-Org Access Control (MOAC),...

Webinar: Release 12 Procurement Part I – The Professional Buyer's Work Center

Solution Beacon - Tue, 2007-07-24 15:55
This is another in our Release 12 webinar series, and will be presented live, with the recorded replay available for registered attendees in the near future. The webinar will be presented on 7/25 at 10:30am CDT, and registration is available here.
Title: Webinar: Release 12 Procurement Part I – The Professional Buyer's Work Center
Abstract: Exciting things are happening to the Procurement Suite in...

Simple Tutorial for Publishing FSG Reports Using XML Publisher

Solution Beacon - Tue, 2007-07-24 15:50
This simple tutorial will show you how to create a custom FSG report using XML Publisher.
1. Log in to Oracle Applications and select the "XML Publisher Administrator" responsibility (your Applications Administrator will have to grant access to this responsibility).
2. Navigate to the Templates page.
3. Type FSG% in the Name field and click "Go to query" for the standard FSG template...

Remote debugging Code Tester

Jornica - Tue, 2007-07-24 12:04

Quest Code Tester for Oracle (Code Tester) helps you with defining test cases, generating test harnesses, and presenting the test results in a structured way. Code Tester does not provide any features to debug your code. If you run into a red-light situation when a test case fails, you have to discover where the error is located. This means checking inputs and outcomes in order to exclude an incorrect setup and incorrect initialization code. And, of course, checking your code, recompiling your (test) code, and logging in again.

If this does not result in a green light, it is time to debug your code with a development IDE, which means you have to transfer your test code into that IDE. Wouldn't it be nice if you could enter debug mode seamlessly: while executing your test case, execution stops at a breakpoint and you can debug your code? The answer is: yes, you can. With SQL Developer you can remote debug your code within a test run.

The linking pin between Code Tester and SQL Developer is the package sys.dbms_debug_jdwp, where JDWP stands for Java Debug Wire Protocol. This protocol needs a debugger process (SQL Developer) and a debuggee process (Code Tester). The debugger listens for requests from the debuggee, i.e. the PL/SQL package procedure calls connect_tcp and disconnect.

Setting up the debuggee
In Code Tester you have to modify the initialization section of your test case. Add the following code:
dbms_debug_jdwp.connect_tcp(host => '127.0.0.1', port => 4000);
The first parameter is the IP address of the client where the Code Tester IDE runs (as seen from the database server you're connected to). Because I'm running Code Tester and Oracle XE on the same machine, I use the localhost address 127.0.0.1. The second parameter is the default port. An alternative to the hard-coded IP address is SYS_CONTEXT('USERENV', 'IP_ADDRESS').
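If you'd rather not hard-code the address at all, the same call can derive it at runtime (a minimal sketch of that SYS_CONTEXT variant; it still assumes SQL Developer is listening on the Code Tester client at the default port 4000):

-- Resolve the client address as seen from the database server, rather
-- than hard coding it; 4000 remains the default JDWP port.
dbms_debug_jdwp.connect_tcp(host => SYS_CONTEXT('USERENV', 'IP_ADDRESS'), port => 4000);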

After the test case is executed, switch off remote debugging. Add the following code to the cleanup section:
dbms_debug_jdwp.disconnect;

Setting up the debugger
In SQL Developer, log in as the same user as in Code Tester and right-click on your connection; a context menu appears, where you select the 'Remote Debug' option. A small window with the title 'Debugger - Listen for JPDA' (Java Platform Debugger Architecture) appears; enter the address or host name on which SQL Developer should listen for connections. Use the same IP address as in dbms_debug_jdwp.connect_tcp, and also check that the port is the same.



Before switching to Code Tester again, set a breakpoint in your code (and compile your code for debug), otherwise the debugger will not stop at your breakpoint. Last but not least, make sure the user has the debug connect session privilege, plus debug any procedure when debugging other users' objects.
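For example, a DBA could grant these privileges like this (a sketch; code_tester is a hypothetical user name):

-- Required for any JDWP debugging session:
GRANT DEBUG CONNECT SESSION TO code_tester;
-- Only needed when debugging objects owned by other users:
GRANT DEBUG ANY PROCEDURE TO code_tester;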

Debugger meets debuggee
It is time to run your test with Code Tester. SQL Developer will stop on your breakpoint. Note: while you are debugging your code, Code Tester will not respond. After stepping through your code, press the resume button in SQL Developer to return to Code Tester. As an example I modified the code of the normal usage test case of the function qctod#betwnstr (see How Quest Code Tester for Oracle can help you get rid of bugs in your PL/SQL procedures for an explanation and the source code). In the picture below, SQL Developer hits the breakpoint at line 13. In the data tab you can see all variables and their values.

This example shows how to use remote debugging with Code Tester. Instead of SQL Developer as the debugger, you can also use Toad for Oracle or JDeveloper. Likewise, any front end can be used as the debuggee for remote debugging, as long as the calls to dbms_debug_jdwp can be implemented. Let's start debugging!

Links
I found the following links useful while writing this blog entry:

An apology to Kevin Closson

Peter Khos - Tue, 2007-07-24 00:00
In my last post, I mentioned that I was going to blog about the spat between Burleson and Lewis but decided not to. BUT I left a comment on Kevin Closson's blog referring to the spat. This was definitely the wrong thing to do: if I was not willing to comment on my own blog, why should I use someone else's blog to comment? Kevin, my sincere apologies (please remove all comments I left pertaining to...

11g whitepapers @ OTN

Pankaj Chandiramani - Mon, 2007-07-23 02:09

I have seen a couple of good technical whitepapers at OTN; below is the link to them.
They cover the complete series: new features, security, HA, etc.

http://www.oracle.com/technology/products/database/oracle11g/index.html

Categories: DBA Blogs

How we solved an ORA-02049 (Timeout: Distributed Transaction Waiting for Lock) on our customized Apps module

Fadi Hasweh - Sun, 2007-07-22 05:00
We have a customized Point of Sale module that is integrated with our standard Apps CRM and financial modules. We faced a serious issue on this customized module: when users try to make a sale through it, they receive ORA-02049 Timeout: Distributed Transaction Waiting for Lock, and have to keep retrying until the sale goes through. The error used to show up on a daily basis, during peak hours only, but we could not tell what was causing it. A simple search for the error on Metalink returned note 1018919.102, which advises increasing the distributed_lock_timeout value in the INIT.ORA file. The default value was 60 seconds, so we increased it to 300 seconds, even though we don't have any distributed transactions on the system; all the transactions were local. We restarted the database, and the problem became worse, because now the end users had to wait five minutes (300 seconds) before receiving the ORA-02049 error. Because of that we had to set the value back to 60 seconds.
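For reference, the change the note suggests amounts to a single parameter (a sketch of the init.ora line as we set it before reverting):

# Default is 60 seconds; Metalink note 1018919.102 advises raising it.
distributed_lock_timeout = 300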

After that we tried to trace the error using different event trace levels, but with no luck: we were not able to determine what was causing the error.

We thought it was a database bug, and Oracle advised us to upgrade the database from 9.2.0.5 to 9.2.0.7. We did that, but the issue was still there.

After a month of investigation, tracing, and taking snapshots when the problem was happening, we managed to find out what was causing it: a bitmap index built on the table we were trying to insert data into.

When one end user made a sale without committing his transaction for some reason, and at the same time another end user tried to make a sale, the table was locked and the second user received the error message.

We solved the issue by dropping the bitmap index and creating a normal B-tree index, even though the column has only three distinct values.
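The contention is easy to reproduce with two sessions (a sketch with hypothetical names; pos_sales and payment_type stand in for our real table and column):

-- Setup: a bitmap index on a low-cardinality column.
CREATE BITMAP INDEX pos_sales_ptype_bmx ON pos_sales (payment_type);

-- Session 1: inserts a row but does not commit. A bitmap index entry
-- covers a range of rowids, so the lock spans far more than one row.
INSERT INTO pos_sales (sale_id, payment_type) VALUES (1, 'CASH');

-- Session 2: a concurrent insert of the same key value blocks behind
-- session 1; in our system this surfaced as ORA-02049.
INSERT INTO pos_sales (sale_id, payment_type) VALUES (2, 'CASH');

-- The fix: replace the bitmap index with a B-tree, which locks per row.
DROP INDEX pos_sales_ptype_bmx;
CREATE INDEX pos_sales_ptype_ix ON pos_sales (payment_type);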

The "Golden Rule" of People Management

Peter Khos - Sat, 2007-07-21 13:53
I was going to blog about the current spat between Jonathan Lewis and Don Burleson on the OTN forums over LGWR and LGWR_IO_SLAVES but then decided that it wasn't worth the web space that it occupies. So, I will blog about a non-technical subject, managing people.People management is a complex subject and there are numerous books published by folks smarter than I on the subject. Here's my take Peter Khttp://www.blogger.com/profile/14068944101291927006noreply@blogger.com3

Problems with CVS removes?

Rob Baillie - Fri, 2007-07-20 10:58
Accidentally removed a file in CVS that you want to keep?

Sounds like a stupid question, because when you know the answer to this problem it just seems blindingly obvious.
But what if you've issued a 'remove' against a file in CVS, and before you commit the remove you decide that you
made a mistake and still want to keep it?

I.e. you issued (for example):

> cvs remove -f sheep.php

But have not issued:

> cvs commit -m removed sheep.php

I've heard workarounds such as:
  • Edit the "entries" file in the relevant CVS directory in your workspace, removing the reference to the file.
    This makes the file appear unknown to CVS.
  • Perform an update in that directory. This gets the repository version of the file and updates the "entries"
    file correctly


All you actually need to do is re-add the file:

> cvs add sheep.php

U sheep.php
cvs server: sheep.php, version 1.6, resurrected

When used in this way, the add command will issue an update against the file and retrieve the repository version of the file.

A word of warning though, if you had uncommitted changes in that file before you issued a remove, CVS isn't going to recover that for you...

How about if you've removed a file, but your version of the file is out of date and so you can't commit it?

So you've issued the following:

> cvs remove -f sheep.txt

cvs server: scheduling 'sheep.txt' for removal
cvs server: use 'cvs commit' to remove this file permanently

> cvs commit -m removed sheep.txt

cvs server: Up-to-date check failed for 'sheep.txt'
cvs server: correct above errors first!

You can't issue an update because you get the following:

> cvs update sheep.txt

cvs server: conflict: removed sheep.txt was modified by second party
C sheep.txt

Again, add the file.

> cvs add sheep.txt

U sheep.txt
cvs server: sheep.txt, version 1.6, resurrected

This gets you the most up-to-date version from the repository, which you can then check for changes (you wouldn't want to just remove it now that someone's added new content, would you?)

Once you've convinced yourself that it's still a good idea to delete it, just issue the remove and commit.

Simple when you know how!

EnterpriseDB, Oracle-compatible open-source RDBMS

Peter Khos - Thu, 2007-07-19 23:35
I recently received an e-mail from Ziff-Davis about a free webinar taking a look at EnterpriseDB, which is billed as the "World's Leading Oracle-compatible" database. It went on to describe how FTD (the flower company) saved 83% in Oracle licensing costs and got a 400% improvement in performance by moving their reporting database from Oracle to EnterpriseDB Advanced Server. That got my interest but...

Introducing the Solution Beacon Release 12 Webinar Series

Solution Beacon - Thu, 2007-07-19 17:42
We're pleased to announce our first Release 12 Webinar Series! These live webinars range from 30 to 60 minutes and are intended to get people informed about the new Oracle Release 12 E-Business Suite. Topics include a Technical Introduction for Newcomers, Security Recommendations, and reviews of the new features in the apps modules, so whether your interest is functional or technical you're...

Top 10 areas to address before taking Oracle BPEL Process Manager 10.1.3 to a Production Implementation

Arvind Jain - Mon, 2007-07-16 17:33
Here is a summary of the article I am writing on how to adopt BPEL PM in a production environment. It is based on the 10.1.3 release of BPEL PM. If you need specific details, please drop me a line.

Top 10 areas to address before taking Oracle BPEL Process Manager 10.1.3 to a Production Implementation
Arvind Jain
5th July 2007

1) Version Management (Design Time)
When choosing a source safe or version control system for business processes, the considerations are quite different from choosing one for Java or C++ code components. The average user or designer of business processes is not code savvy and cannot be expected to manually merge code (*.bpel or *.wsdl files, for example). BPEL PM lacks design-time version management of business processes in the JDeveloper IDE. What is needed is a process-based development and merge environment, with visibility into the process repository. The requirements are therefore different from those of a component-based repository. Consider using a good BPMN / BPA tool.

2) Version Governance (Run Time)
While BPEL PM can maintain version numbers for deployed BPEL processes, it is still left to an administrator or a business analyst to decide which process version will be active at a given point in time and what the naming and versioning standard will be. Since every deployed BPEL process is a service, it becomes critical to apply an SOA governance methodology to control the various deployed and running BPEL processes.

3) SOAP over JMS (over SSL)
Most big corporations and multinationals have policies that restrict HTTP traffic from the outside world into the intranet. Moreover, they have policies that require the use of a messaging layer or an ESB as a service intermediary for persistence, logging, security, and compliance reasons. BPEL PM support for bidirectional SSL-enabled JMS communication is not out of the box; it needs to be tried and tested within your organization, and workarounds need to be implemented.

4) Authentication & Authorization - Integration with LDAP / Active Directory
SOA governance requires authentication and authorization for service access based on a corporate repository and the roles defined within it. This is also critical for BPEL Human Workflow (HWF). Make sure to do a small pilot / POC of the integration with your corporate identity repository before taking BPEL PM to production.

5) Integration with Rules Engine
BPEL should be used for orchestration only, not for coding programming logic or hard-coded rules; hence it is important to have a separate rules engine. Many rules engines available on the market support Java facts, and the BPEL engine, being a Java engine, should integrate with these out of the box. But some rules engines have the limitation that they can take only XML facts, so there is an overhead in going from Java to XML to use XML facts and then marshalling back to Java. Make sure you have sorted out the integration with your rules engine prior to a BPEL production implementation.

6) Implementation Architecture
BPEL processes and projects can and will expand to occupy all available resources within your organization. These business processes are highly visible within a company and have strict SLAs to meet. Make sure you have a proven and tested reference architecture for clustering, high availability, and disaster recovery. There have to be a provisioning process, a deployment process, and a process lifecycle governance methodology in place before you can fire up all engines in a production environment.

7) Throughput Consideration
BPEL PM is by nature an interpretation engine, so there is a performance hit when running long-running processes and doing heavy transformations. Plan on doing some stress and load testing on the servers running your business processes to get a ballpark estimate of the end-to-end processing time and of how much load the BPEL server can take. Specifically, do capacity planning based on the results of these pilot load and stress tests.

8) Design of BPEL Process (Payload Size, BPEL Variables - Pass by Reference or by Value)
Designing a business process is more of an art than a science, and the same holds for BPEL business processes. It is important to understand what the best practices will be in your organization in terms of payload size, the length of the various processes, and how they are orchestrated. Are you passing around big XML payloads that could be avoided by changing the process and using a technique called passing by reference? Would that also make your process more efficient and create true business services from these processes? Give these some consideration and spend some whiteboarding sessions with business and IT analysts before creating a BPEL process.

9) Schema Consideration - Canonical Data Model & Minimal Transformations
The most cost- and resource-intensive step in any integration or process orchestration scenario is transformations. Especially in an orchestration engine like BPEL PM, the XML payload goes through multiple massaging steps. If you can design your process flow with a minimum of these steps, it will improve the performance of the business process end to end. It is also a best practice to have an enterprise-wide canonical data model derived from some industry-wide standard like OASIS, RosettaNet, ebXML, etc.

10) Administration - Multiple BPEL Consoles, Central HWF Server, Customized UI or use existing UI?
BPEL PM is easy to use and makes process orchestration almost a zero-coding activity. It is also pretty easy to learn, so once the floodgates are opened there is suddenly a bunch of BPEL processes deployed and a bunch of BPEL developers in the enterprise.

It is very critical for an enterprise-scale deployment to figure out ways to provision BPEL Server instances and to give selective access to the BPEL Console to the relevant developers. The BPEL Console is a powerful tool, and there is not much role-based security functionality in it apart from the concept of domains. The options are to create your own administration / console UI using the BPEL Server APIs, or to have a BPEL administrator take care of such requests.
BPEL PM comes with a built-in Human Workflow (HWF) server, but in an enterprise you might want a centralized HWF server. All of these need to be given thought before putting BPEL PM into a production environment.

10 @ Sun

Siva Doe - Mon, 2007-07-16 15:31

The title should say '10 @ Sun; 15 w. Sun'. When I joined Larsen and Toubro (L&T) in 1992, little did I know that those pizza boxes named SparcStation 1 and Sun 3/xx (Motorola CPUs??) were made by a company that I was going to work for in about four years' time. It was fun playing with the SS1, writing PostScript programs that drew directly on the root window. The 3 series was running SunView (I am sure quite a few would remember this GUI). My impression is that it was as fast and responsive as my Ultra 20 running GNOME is now ;)
It has been a roller coaster ride with Sun. I have had moments of extreme happiness (probably the news that Sun stock was doing $120+) and also the complete opposite. I have been with Sun IT doing application development, later doing system administration with ITOps, and now I am with the engineering teams.
I greatly admire Sun as a company and can't think of working for anyone else. I am afraid I would be too biased to work anywhere else. The freedom that you get here is awesome; one has to work at Sun to believe and feel it. I am proud to be part of Sun's efforts, with open source in particular.
I hope I will be around to write '15 @ Sun' and '20 @ Sun'. Thanks to all my colleagues who have been making my life at Sun a great one. Thank you Sun.

ATG Rollup 4 and my Custom schema

Fadi Hasweh - Mon, 2007-07-16 01:38
After successfully applying the ATG Rollup 4 patch (4676589) on our HP-UX server, we started to receive the following error, but only on our customized schema, not on the standard schemas.
The error showed whenever we tried to run any procedure from this customized schema, even though everything used to work fine before the patch:
"
ORA-00942: table or view does not exist
ORA-06512: at "APPS.FND_CORE_LOG", line 23
ORA-06512: at "APPS.FND_CORE_LOG", line 158
ORA-06512: at "APPS.FND_PROFILE", line 2468
ORA-06512: at "APPS.XX_PACKAGE_PA", line 682
ORA-06512: at line 4
"

After checking on Metalink we got a hint from note 370000.1. The note does not apply to exactly the same case, but it did give us the hint, and the solution was as follows:

connect as APPLSYS
GRANT SELECT ON FND_PROFILE_OPTIONS TO SUPPORT;
GRANT SELECT ON FND_PROFILE_OPTION_VALUES TO SUPPORT;


SUPPORT is my customized schema.
Have an error free day ;-)

fadi

Blogging away!

Menon - Sat, 2007-07-14 18:00
For a long time, I wanted to create a web site with some articles that reflected my thoughts on databases and J2EE. During the 15-odd years of my experience in the software industry, I have realized that there is a huge gap between the middle-tier folks in Java and the database folks (or the backend folks). In fact my book - Expert Oracle JDBC Programming - was largely inspired by my desire to fill this gap for Java developers who develop Oracle-based applications. Although most of my industry experience has been in developing Oracle-based applications, during the last two years or so I have had the opportunity to work with MySQL and SQL Server databases as well. This has given me a somewhat unique perspective on developing Java applications that use a database (a pretty large spectrum of applications).

This blog will contain my opinions on this largely controversial subject (think database independence, for example) and on good practices related to Java/J2EE and database programming (Oracle, MySQL and SQL Server). From time to time it will also include any other personal ramblings I may choose to add.

Feel free to give comments on any of my posts here.

Enjoy!

Using dbx collector

Fairlie Rego - Sat, 2007-07-14 08:45
It is quite possible to have a single piece of SQL which consumes more and more CPU over time, without an increase in logical I/O for the statement and without an increased amount of hard parsing.

The reason could be extra CPU burned, over time, in an Oracle source code function which has not been instrumented as a wait in the RDBMS kernel. One way to find out which function in the Oracle source code is the culprit is via the dbx collector feature available in Sun Studio 11. I guess DTrace would also help, but I haven't played with it. This tool can also be used to diagnose increased CPU usage by Oracle tools across different RDBMS versions.

Let us take a simple example of how to run this tool against a simple insert statement.

SQL> create table foo ( a number);

Table created.

> sqlplus

SQL*Plus: Release 10.2.0.3.0 - Production on Sat Jul 14 23:46:03 2007

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Enter user-name: /

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> set sqlp sess1>>
sess1>>

Session 2
Find the server process servicing the previously spawned sqlplus session and attach to it via the debugger.

> ps -ef | grep sqlplus
oracle 20296 5857 0 23:47:38 pts/1 0:00 grep sqlplus
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus
> ps -ef | grep 17205
oracle 20615 5857 0 23:47:48 pts/1 0:00 grep 17205
oracle 17237 17205 0 23:46:04 ? 0:00 oracleTEST1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle 17205 23919 0 23:46:03 pts/4 0:00 sqlplus

> /opt/SUNWspro/bin/dbx $ORACLE_HOME/bin/oracle 17237

Reading oracle
==> Output trimmed for brevity.

dbx: warning: thread related commands will not be available
dbx: warning: see `help lwp', `help lwps' and `help where'
Attached to process 17237 with 2 LWPs
(l@1) stopped in _read at 0xffffffff7bfa8724
0xffffffff7bfa8724: _read+0x0008: ta 64
(dbx) collector enable


Session 1
==================================================================
begin
for i in 1..1000
loop
insert into foo values(i);
end loop;
end;
/

Session 2
==================================================================

(dbx) cont
Creating experiment database test.3.er ...
Reading libcollector.so

Session 1
==================================================================
PL/SQL procedure successfully completed.

sess1>>exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

Session 2
=========

execution completed, exit code is 0
(dbx) quit

The debugger creates a directory called test.3.er.
You can analyse the collected data using the Analyzer, a GUI tool.

> export DISPLAY=10.59.49.9:0.0
> /opt/SUNWspro/bin/analyzer test.3.er



You can also generate a callers-callees report using the following syntax:

/opt/SUNWspro/bin/er_print test.3.er
test.3.er: Experiment has warnings, see header for details
(/opt/SUNWspro/bin/er_print) callers-callees

Before and after images of the performance problem would help in diagnosing which function in the code consumes more CPU over time.
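If you want a quick look without the GUI, er_print can also list the functions sorted by CPU time (a sketch; compare the output of the before and after experiments):

> /opt/SUNWspro/bin/er_print -functions test.3.er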
