Feed aggregator

The EMP mindmap

Jornica - Mon, 2009-02-02 14:58

By accident, I stumbled upon a JavaScript library for generating mind maps. But instead of being a static diagram, this JavaScript-enabled one is dynamic: you can move nodes, and you can zoom in or out to another level of nodes. It is a very nice graphical way of presenting hierarchical data, for example who manages whom in Oracle's EMP table, as shown below. Come on, grab your mouse and start moving and clicking: it's alive!

Here's the recipe to build such an application.

  • Start by downloading the source of the Google Code project js-mindmap. In the sources you will find several JavaScript libraries, and an example file js-mindmap.html.

  • In APEX, upload the JavaScript libraries and add references to these libraries from the header section.

  • Add a PL/SQL region, which generates the hierarchy that will be converted into a diagram:

    <canvas id="cv"></canvas>
    <div id="js-mindmap">

    For example, the function show_emp_tree returns the HTML code for the <div> element.

Note that when you replace the EMP table with the OEHR_EMPLOYEES table, the animation becomes very slow due to the increased number of nodes and links. Also, the js-mindmap project is still in development. But anyway, it's fun!
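As a sketch of what a function like show_emp_tree might emit (the nested <ul>/<li> format and the sample rows are my assumptions; the js-mindmap example file shows the exact markup it expects), here is the EMP hierarchy rendered as nested lists, modelled in Python:

```python
# Rows mimic Oracle's EMP table: (empno, ename, mgr). Hypothetical sample data.
rows = [
    (7839, "KING", None),
    (7566, "JONES", 7839),
    (7698, "BLAKE", 7839),
    (7788, "SCOTT", 7566),
    (7499, "ALLEN", 7698),
]

def render(mgr, rows):
    """Recursively render the employees reporting to mgr as a nested <ul> list."""
    children = [r for r in rows if r[2] == mgr]
    if not children:
        return ""
    items = "".join(
        f"<li>{ename}{render(empno, rows)}</li>" for empno, ename, _ in children
    )
    return f"<ul>{items}</ul>"

# The div id matches the one referenced in the PL/SQL region above.
html = f'<div id="js-mindmap">{render(None, rows)}</div>'
print(html)
```

In APEX the same nesting would be produced server-side with a CONNECT BY query over EMP; the recursion here stands in for that hierarchy walk.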

Analytics as a Service

Nigel Thomas - Mon, 2009-02-02 04:23
With all this talk of SQLstream's recent v2.0 launch, I was interested to read Tim Bass's CEP blog posting on Analytics-as-a-Service. He calls it A3S - and rightly avoids calling it AaaS; apart from the fact that X-as-a-Service is almost as clichéd as XYZ 2.0 (and equally meaningless), careless use of that sequence of As and Ss could cause spam filters round the world to get over-excited.

If we must have the as-a-service tag, I'd like to trademark BI-as-a-service: BIAS. Apart from being a proper acronym, it also gets across that BI often gives you the answers you think you want - not necessarily the ones you really need.

This Application Has Unique Business Rule Needs

Kenneth Downs - Sun, 2009-02-01 15:29

No it does not. If it did, then your customer/employer would be doing something no other human being has ever done, which is unlikely in the extreme. The application may be unique in its particulars, but it is almost certainly extremely common in its patterns. This week we will see how "unique" needs are in fact nothing but common ordinary development projects.

Beginning With the Conclusion

I have had this conversation with many programmers over the past few years, and it always follows the same pattern. The easy part of the argument is showing the programmer that what he thinks is special or unique is in fact common. The much harder part, because it involves the delicate human ego, is showing the programmer that he has not seen this because he is ignorant. This is not fun to do, and I myself usually skip it; it's usually not worth the trouble.

Path 1: Details

Occasionally I speak to a programmer who thinks he has a unique situation. His claim begins with the mountain of details he must handle, details which appear to be contradictory, subtle, and overall perplexing. He wonders if some new approach is required to handle them.

In answering this claim, we begin with the easy part: showing that the situation is itself not unique. In short, all worthwhile projects involve mountains of detail, so there is nothing special there. As for the subtleties and the maze of exceptions and special cases, these are common in mature businesses that have evolved their complexity in response to business needs over the years. So again there is nothing unique here; the programmer's situation is again common.

At this point we have to ask how the programmer will deal with this perplexing mountain of detail. If he knows what he is doing, he will give the general answer that he is going to break it down as much as possible into independent smaller problems that can be solved on their own. Since this is nothing more than how all programmers solve complex problems, the entire "uniqueness" claim has completely collapsed. His project is utterly common.

The much harder part of the conversation comes if the programmer does not know how to break down the problem. For instance, if the problem is all about a fiendishly complex pricing system with lots of discounts and pricing levels, and the programmer does not know that he needs to begin with the database, and he further does not want to hear that, well, there is not much I can do for him. He will end up working a lot harder than he needs to, and will probably remain convinced he is dealing with something "unique".

But let's go a little deeper into that example of the complicated pricing system. Why do I claim that he must start with the tables, and that he is wasting time if he does not? Well, a complete answer is much more than will fit here, and in fact I hit that theme over and over in these essays, but it comes down to:

  • He must have an accurate and precise description of the details that govern the pricing scheme. That is what tables are for.
  • In working out the mechanics of the tables, particularly their primary and foreign keys, he will come to his most complete understanding of the mechanisms involved.
  • When the tables completely reflect the details he must work with, the code will just about write itself.
  • Lastly, but probably most importantly, the customer will expect to control the pricing system by adjusting the parameters at all levels. Again, that is what tables are for. The user is in control of the pricing system if he can edit the tables (because of course he cannot edit the code).
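To make those points concrete, here is a minimal sketch of table-driven pricing (table and column names are invented for illustration), using SQLite so it runs anywhere: the discount rules live in a table the customer can edit, and the code is reduced to a join.

```python
import sqlite3

# The pricing rules are data, not code: the customer controls discounts by
# editing price_levels, never by touching the program. Names are illustrative.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE price_levels (
    level_code   TEXT PRIMARY KEY,   -- e.g. RETAIL, WHOLESALE
    discount_pct REAL NOT NULL       -- percentage off list price
);
CREATE TABLE items (
    sku        TEXT PRIMARY KEY,
    list_price REAL NOT NULL
);
INSERT INTO price_levels VALUES ('RETAIL', 0.0), ('WHOLESALE', 20.0);
INSERT INTO items VALUES ('WIDGET', 100.0);
""")

def price(sku, level):
    # The "pricing engine" is a single join; all the business detail is data.
    row = db.execute(
        """SELECT i.list_price * (1 - pl.discount_pct / 100.0)
           FROM items i JOIN price_levels pl ON pl.level_code = ?
           WHERE i.sku = ?""",
        (level, sku),
    ).fetchone()
    return row[0]

print(price("WIDGET", "WHOLESALE"))  # 80.0
```

Adding a new discount tier is an INSERT, not a code change, which is exactly the point of the last bullet above.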

Path 2: Combinations

Once upon a time we had simple desktop business applications, games, and then this weird new thing, "the web". Now they are all mixed together, as we play games on the internet that are tied into huge databases. Modern applications often combine technologies that used to be comfortably separate. On any particular project, some of the requirements look like they can be met with an RDBMS, some require management and delivery of media such as MP3 or video, and the programmer is told as well that he must provide RSS feeds and import data coming in XML format. Perhaps as well there will be stone tablets and papyrus scrolls.

This programmer may believe he is in a unique situation because of this combination of needs. Because no single toolset out there can meet the entire project, perhaps this is something never before seen? But this does not hold up. Just like the argument about complexity, he must break the problem up correctly, and when he has done so he will have a perfectly ordinary project. Though I might add it will also be a very interesting project and probably a lot of fun.

In The End It Is All About Patterns

I have given two examples above taken from my own experience where programmers have claimed to me that they faced some unique situation. There are many other cases, and they always make perfect sense to the person who thinks he has discovered something new. The biggest flaw in the programmer's thinking is failing to distinguish between particulars and patterns.

My claim in this essay is that the patterns of all problems are the same. Somebody has seen it before, somebody has done it before, the answer is out there. The process of analysis and programming is about slotting your particulars into the patterns that have already been established.

In the broadest sense all programs process data, and particular programs break down into broad patterns of data access and manipulation. Sometimes you have a broad range of users putting in data with very little massaging (think Twitter), sometimes you have one group controlling much of the data while others make use of it (think Amazon), sometimes your data is mostly relational and table-based (think any e-commerce or business app), and sometimes it's mostly media (think YouTube).

Once you have these broad patterns identified, you can then proceed to make use of established practices within each particular area. What is the best way to provide sensitive data on the web and protect it from unauthorized eyes? Somebody has done it before. What is the best way to track large amounts of media? Somebody has done it before. What is the best way to set up a complex pricing system with lots of discounts and pricing levels? Somebody has done it before. In all cases, your particulars may be different, but the patterns will be the same.

Conclusion: Find the Patterns

Whenever I find myself looking at a situation that appears to be new, I try to tell myself that it may be new to me, but it is not likely to be new to the human race. If it does not appear to follow a well-known pattern then I proceed as if I have not yet recognized the pattern and continue to analyze and break it apart until the pattern emerges. So far it always has.

Categories: Development

Shared Feeds

Nigel Thomas - Sun, 2009-02-01 10:07
My posting rate has been quite low recently, but I have been enjoying using Google Reader to keep up with what everyone else is saying.

I've been sharing items with some friends / colleagues, but it seems harmless to open this up to the world. The topics are eclectic, but mainly around product development, event stream processing, RDBMS and data warehousing. Maybe the very occasional Dilbert or xkcd. You're all very welcome to see what I've shared - the links are now on the right hand side of this blog. And if you too are using a reader, here is a direct link to my shared items and here is the atom feed for them. If Google could just pack up my daily shares as a single blog post, wouldn't that be great?

I'm still successfully avoiding Twitter... I've no idea how I would find the time, and having seen Philip Schofield mentioned as a user (on a low-rent BBC Sunday afternoon show) - apparently he is now London's #5 tw*t(terer) - it must be hopelessly uncool anyway.

SQLstream launches v2.0 of its Event Stream Processing engine

Nigel Thomas - Sun, 2009-02-01 08:55
As well as working as a freelance Oracle consultant, I have spent most of the last year working with SQLstream Inc on their Event Stream Processing engine, version 2.0 of which has now been launched.

My initial point of contact was Julian Hyde, with whom I worked in the Oracle CASE / Oracle Designer team in the early 90s. He went out to Oracle HQ and worked on bit-mapped indexes, then spent time at Broadbase before becoming best known as the founder-architect for the Mondrian OLAP server.

Event Stream Processing (ESP) is a development of what we might have called Active Database a few years ago. Rather than running queries against stored data as in an RDBMS, we register active queries which can be used to filter data arriving from any kind of stream. These active queries behave just like topic subscriptions in a message bus - except that unlike typical JMS implementations, the SQLstream query engine can do more than just filter. SQL queries against the data-in-flight can also:
  • join data between multiple streams, or between streams and tables
  • aggregate data within a stream to give rolling or periodic aggregates based on time or row counts
  • apply built-in and user-defined functions to columns within the stream's rows
  • build a network of streams and views to support complex transformation and routing requirements
Queries are defined using SQL:2003 with minimal extensions (there are some SQL examples on the web site); so developers used to working with an RDBMS will find it easy to grok the relational messaging approach. Unlike a typical message bus, most transformation and routing takes place entirely inside the SQLstream environment, rather than being delegated to external applications.
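As an illustration only (this query is invented for this post, not taken from SQLstream's documentation; the Trades stream and its columns are hypothetical), a rolling aggregate over a stream looks much like ordinary windowed SQL:

```sql
-- Hypothetical: a rolling one-minute average price per ticker over an
-- incoming stream of trades. "SELECT STREAM" marks the query as continuous:
-- rather than returning a finite result set, it emits rows as they arrive.
SELECT STREAM
       ROWTIME,
       ticker,
       AVG(price) OVER (
         PARTITION BY ticker
         ORDER BY ROWTIME
         RANGE INTERVAL '1' MINUTE PRECEDING
       ) AS avg_price_1min
FROM Trades;
```

The windowing clause is standard SQL:2003; only the notion that the query runs forever against data-in-flight is the streaming extension.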

My experience working at JMS vendor SpiritSoft convinced me of the value of asynchronous message-based techniques, which in the last 10 years have spread out from high-end financial systems to ESB implementations all over the place. Since the early 80s we have seen how the RDBMS swept all other forms of structured data storage away. Now SQLstream's relational messaging approach removes the impedance mismatch between how we store data, and how we process it on the wire. In principle, this architecture can subsume both message-oriented (ESB style) and data-oriented (ETL style) integration architectures.

It should be said that "other ESP engines are available". Oracle itself has two projects on the go: its own CQL which I believe is still internal, and Oracle CEP (the rebranded BEA WebLogic Event Server - which itself is (or was) based on the open source Esper project). These two development threads will no doubt combine at some point (perhaps they already have?). IBM also has two or three independent CEP (complex event processing) projects on the go.

I think the same thing will happen to ESP / CEP as happened to ETL/EAI tools in the last ten or fifteen years. For sure, the database/application server vendors (especially Oracle and IBM) will sell plenty of this software within their respective client bases. An Oracle CEP system that was (or could be) tightly integrated with the RDBMS itself - maybe executing PL/SQL as well as Java functions - would be an easy sell. However multi-vendor sites will be interested in an agnostic / vendor-independent tool as a basis for their integration efforts. Just as Informatica has carved out a place for itself in competition with Oracle's ODI and OWB and IBM's DataStage, so SQLstream and other ESP vendors can fight for the common ground. It will be very interesting to see how it all turns out.

See the SQLstream product page for background, plus posts from Julian Hyde, CTO of SQLstream and Nicholas Goodman, Director of BI Solutions at Pentaho.

PS: here's another post from David Raab.

Java Embedding in BPEL Process

Peeyush Tugnawat - Fri, 2009-01-30 14:34

If you have a Java embedding in your BPEL process and have ever wondered where (if not domain.log) the output of a System.out.println statement from the code within the Java embedding is written to:

The output gets written to the file under <soa_suite / oas_install_home>/opmn/logs/default*.log
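For orientation, a Java embedding in a BPEL process is a bpelx:exec activity along these lines (a sketch; the activity name and message text are made up):

```xml
<!-- The println below ends up in opmn/logs/default*.log, not in domain.log -->
<bpelx:exec name="LogMarker" language="java" version="1.5">
  <![CDATA[
    System.out.println("BPEL Java embedding reached");
  ]]>
</bpelx:exec>
```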

How to enable trace for a CRM session

Aviad Elbaz - Thu, 2009-01-29 09:30

I was asked to examine a performance issue in one of our CRM application screens, after some users complained about a specific long-running action.

First, I tried to enable trace for the CRM session, but it turned out that it's definitely not simple to identify a CRM session, especially in my case, where a single CRM session opens two (sometimes more) database sessions. It's quite impossible, actually.

So how is it possible to trace those CRM sessions anyway?

Oracle has provided an option to execute custom code for every session opened in the database, through a system profile. This profile is called “Initialization SQL Statement - Custom” (the short name is 'FND_INIT_SQL') and allows custom SQL/PL*SQL code.

Once this profile is set at user level, each session opened for that user will first execute the code within the profile. No matter which type of activity the user performs – Forms, CRM, a concurrent request, or anything else that opens a database session – the content of this profile will be executed.

So, clearly we can use this capability to enable Trace for users sessions.

Steps to enable trace for specific user:

  1. Login with “Application Developer” responsibility
  2. Open the “Create Profile” form –> Query the profile “FND_INIT_SQL”
  3. Make sure that “visible” and “updateable” are checked at user level.

  4. Switch responsibility to “System Administrator”
  5. Navigate to Profile –> System –> Query the profile “Initialization SQL Statement - Custom” at user level for the user we would like to enable trace for.

  6. Update the profile option value at user level to the following:

    ** Just replace AVIADE with the user you enable trace for.

  7. Now, after the user logs out of the application (the user you enabled trace for), he can log back in and reproduce the issue.
  8. When finished reproducing the issue, you should disable the trace by clearing the profile option value back to NULL (the “Initialization SQL Statement – Custom” profile, of course..).
  9. The trace file(s) will be waiting for you in your udump directory (the user_dump_dest init parameter).

Since I enabled and disabled the trace quite a few times while investigating my performance issue, I wrote these handy simple programs which enable and disable the trace for a user in a quick and easy manner.

Execute this program to enable trace for a specific user: (substitute step 6 above)

  DECLARE
    l_ret     boolean;
    l_user_id number;
  BEGIN
    select user_id
      into l_user_id
      from fnd_user
     where user_name = '&&USER_NAME';

    l_ret := fnd_profile.SAVE(X_NAME        => 'FND_INIT_SQL',
                              X_VALUE       => 'BEGIN FND_CTL.FND_SESS_CTL('''','''','''', ''TRUE'','''',''ALTER SESSION SET TRACEFILE_IDENTIFIER=''||''''''''||''&&USER_NAME'' ||''''''''||'' EVENTS =''||''''''''||'' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 12 ''||''''''''); END;',
                              X_LEVEL_NAME  => 'USER',
                              X_LEVEL_VALUE => l_user_id);
    commit;

    dbms_output.put_line('Profile has been updated successfully');
  EXCEPTION
    when others then
      dbms_output.put_line('Failed to update the profile: '||sqlerrm);
  END;


Execute this program to disable trace for a specific user: (substitute step 8 above)

  DECLARE
    l_ret     boolean;
    l_user_id number;
  BEGIN
    select user_id
      into l_user_id
      from fnd_user
     where user_name = '&USER_NAME';

    l_ret := fnd_profile.DELETE(X_NAME        => 'FND_INIT_SQL',
                                X_LEVEL_NAME  => 'USER',
                                X_LEVEL_VALUE => l_user_id);
    commit;

    dbms_output.put_line('Profile has been erased successfully');
  EXCEPTION
    when others then
      dbms_output.put_line('Failed to erase the profile: '||sqlerrm);
  END;

Hope you find it helpful…
Feel free to leave a comment or share your thought about this issue.


Categories: APPS Blogs

Drizzle: A Pretty Cool Project

Padraig O'Sullivan - Wed, 2009-01-28 20:38
Drizzle is a pretty cool project whose progress I've started following in the last few weeks. I'm trying to contribute in a tiny way if I can by confirming bug reports. If I had more time, I'd like to try resolving some bugs. Hopefully, I'll find some spare time to do that in the future. I think it's definitely a project worth keeping an eye on though. Check it out if you have the time.

ORA-27123: unable to attach to shared memory segment

Sabdar Syed - Wed, 2009-01-28 06:27

Of late, in one of our test Unix (Sun Solaris) boxes, a database user trying to connect to the database has been getting the error “ORA-27123: unable to attach to shared memory segment” with a permission denied issue.

Initially, we thought there might be an issue with the SGA memory area, or perhaps with the shared memory segments and semaphores allocated to the instance. But later we found that the permissions under the ORACLE_HOME directory had accidentally been changed to full permissions.

Here are our findings:

$ sqlplus testusr/password@testdb

SQL*Plus: Release - Production on Tue Jan 27 20:53:44 2009

Copyright (c) 1982, 2005, Oracle. All Rights Reserved.

ORA-01034: ORACLE not available
ORA-27123: unable to attach to shared memory segment
SVR4 Error: 13: Permission denied

As per Oracle Error Messages:

Error: ORA-27123: unable to attach to shared memory segment
Cause: shmat() call failed
Action: check permissions on segment, contact Oracle support

Check the oracle executable file's permissions.

$ cd $ORACLE_HOME/bin
$ ls -l oracle
-rwxrwxr-x 1 oracle dba 119582976 Feb 3 2008 oracle
Here the oracle file has permissions -rwxrwxr-x, i.e. 775, but this file must have the permissions -rwsr-s--x, i.e. 6751 (with the setuid and setgid bits set).

Change the permissions for oracle file.

$ cd $ORACLE_HOME/bin
$ chmod 6751 oracle
$ ls -l oracle
-rwsr-s--x 1 oracle dba 119582976 Feb 3 2008 oracle

After changing the permissions on the oracle executable file, all users are now able to connect to the database without any errors.

Note: For further information, refer to Oracle Metalink Note ID 1011995.6, “COMMON ORACLE PERMISSION PROBLEMS ON UNIX”.

Sabdar Syed


Fairlie Rego - Tue, 2009-01-27 17:34
OUTBOUND_CONNECT_TIMEOUT comes into play when nodes in a cluster are down and we cannot wait for the OS timeout, as that causes long delays in connect time. For example, on Solaris the value of tcp_ip_abort_interval = 180000 ms, which is 180 seconds, i.e. 3 minutes.

In this post I will demonstrate how OUTBOUND_CONNECT_TIMEOUT (OCT) can effectively avoid the timeouts experienced by clients connecting to RAC nodes.

If we take an example of the following connect string

(ADDRESS = (PROTOCOL = TCP)(HOST = sdby1-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = sdby2-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = sdby3-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = sdby4-vip)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = prim1-vip)(PORT = 1521))

In the above alias the first 4 addresses are non-existent nodes, and the service TEST runs on the node prim1, which is listed last in the alias.

I have deliberately set load_balance to OFF so that the client has to traverse through all nodes serially.

If we set an OUTBOUND_CONNECT_TIMEOUT of 3 seconds in the client's sqlnet.ora (the client is Solaris), then the time to establish the connection is around 12 seconds (four failed addresses at 3 seconds each).
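For reference, the client-side setting is a single sqlnet.ora parameter (shown as a sketch; the value is in seconds):

```
# Client-side sqlnet.ora: give up on each address after 3 seconds instead
# of waiting for the OS TCP timeout (about 3 minutes on this Solaris client)
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 3
```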

If we run a SQL*Net trace on the connection, we see that the connection starts at

[27-JAN-2009 22:52:33:741] New trace stream is /tmp/cli_262.trc

and the first address which is tried is

[27-JAN-2009 22:52:33:757] nsc2addr: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=sdb1-vip)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=TEST)(CID=(PROGRAM=sqlplus@bart)(HOST=bart)(USER=oracle))))

Moving on, we can see that OCT is enabled, from the line:

[27-JAN-2009 22:52:33:759] nstoSetupTimeout: entry
[27-JAN-2009 22:52:33:759] nstoSetupTimeout: ATO enabled for ctx=0x1001c9280, val=3000(millisecs)

After this, the following pattern is seen continuously:

[27-JAN-2009 22:52:33:776] nsevwtsg: entry
[27-JAN-2009 22:52:33:776] nsevwtsg: cid=0
[27-JAN-2009 22:52:33:776] nsevwait: entry
[27-JAN-2009 22:52:33:776] nsevwait: 1 registered connection(s)
[27-JAN-2009 22:52:33:776] nsevwait: 0 pre-posted event(s)
[27-JAN-2009 22:52:33:776] nsevwait: waiting for transport event (0 thru 0)...
[27-JAN-2009 22:52:33:776] nsevwait: 0 newly-posted event(s)
[27-JAN-2009 22:52:33:776] nsevwait: 0 posted event(s)
[27-JAN-2009 22:52:33:776] nsevwait: exit (0)
[27-JAN-2009 22:52:33:776] nstoToqCheckSingle: entry
[27-JAN-2009 22:52:33:776] nstoToqCheckSingle: normal exit

and the timeout occurs after 3 seconds:

[27-JAN-2009 22:52:36:771] nstoHandleEventTO: ATO occurred for ctx=0x1001c9280

Ergo, OCT seems to be working perfectly.

The client then tries to establish a connection with sdby2-vip and experiences the same timeout.

The connection is finally established at

[27-JAN-2009 22:52:45:915] nscon: no connect data
[27-JAN-2009 22:52:45:915] nscon: connect handshake is complete
[27-JAN-2009 22:52:45:915] nscon: nsctxinf[0]=0x41, [1]=0x41
[27-JAN-2009 22:52:45:915] nscon: normal exit

which translates to around 12 seconds.

From my testing with clients on Windows:

  • OUTBOUND_CONNECT_TIMEOUT does not work with the base Windows client (clients may get errors).
  • It works with patch 21 on top of
  • OUTBOUND_CONNECT_TIMEOUT does work with the 11g client.

Without OCT, if we trace the connection again, we see that we wait for more than 3 minutes:

[28-JAN-2009 14:18:35:299] nttcni: entry
[28-JAN-2009 14:18:35:299] nttcni: trying to connect to socket 10.
[28-JAN-2009 14:22:19:915] ntt2err: entry

[28-JAN-2009 14:22:19:915] ntt2err: soc 10 error - operation=1, ntresnt[0]=505, ntresnt[1]=145, ntresnt[2]=0
[28-JAN-2009 14:22:19:915] ntt2err: exit
[28-JAN-2009 14:22:19:915] nttcni: exit
[28-JAN-2009 14:22:19:915] nttcon: exit
[28-JAN-2009 14:22:19:921] nserror: entry
[28-JAN-2009 14:22:19:921] nserror: nsres: id=0, op=65, ns=12535, ns2=12560; nt[0]=505, nt[1]=145, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
[28-JAN-2009 14:22:19:921] nsopen: unable to open transport
[28-JAN-2009 14:22:19:921] nsiocancel: entry


Subscribe to Oracle FAQ aggregator