Feed aggregator

storing password history

Tom Kyte - Fri, 2018-06-01 17:26
I have a table create table user_password_history (user_cd varchar2(3), user_pass varchar2(30), created_date date); i want to keep last 5 records of each user in this table. kindly guide me how to achieve it. Thanks for your suppor...
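One possible approach (a sketch using the table from the question, not taken from the original AskTOM thread): after recording a new password, delete everything older than the five most recent rows per user.

-- Keep only the 5 most recent rows per user_cd; run after each insert,
-- e.g. from the same procedure that records the new password.
delete from user_password_history
where  rowid in (
         select rid
         from   (
                   select rowid as rid,
                          row_number() over (partition by user_cd
                                             order by created_date desc) rn
                   from   user_password_history
                 )
         where  rn > 5
       );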
Categories: DBA Blogs

The one case where we need an index rebuild - how to avoid it?

Tom Kyte - Fri, 2018-06-01 17:26
Having read Tom's books I know that in general there's no reason to rebuild indexes, and so we don't. However, we have one particular index which does benefit from rebuilding. Since it's a locally-partitioned index on a date-partitioned table we ...
Categories: DBA Blogs

CHAR Index (vs) NUMERIC Index

Tom Kyte - Fri, 2018-06-01 17:26
Hi Tom, In comparison to an Index on CHAR field and Index on NUMERIC field pls let me know which Index (whether CHAR or NUMERIC) would be faster. Thanx & Regards, Madhusudhana Rao.P
Categories: DBA Blogs

TNS-12541 : TNS: No Listener

Tom Kyte - Fri, 2018-06-01 17:26
Hi, In order to configure Standby database I have created one listener "Prod" & TNS Service "To_Stand" on Primary Side. Listener created successfully & its running. But the problem is TNS is not establishing connection with the database. Its saying "No...
Categories: DBA Blogs

Creating Dependent/Cascading Select Lists with Visual Builder

Shay Shmeltzer - Fri, 2018-06-01 17:13

A common requirement in applications is to have dependent lists (also known as cascading lists) - meaning the value selected in one place influences the values that can be selected in another place. For example, when you select a state, we'll only show you cities in that state in the city list.
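Conceptually, the second list is just the first list's selection applied as a filter - the same thing you would express in SQL roughly like this (a sketch with hypothetical STATES/CITIES tables; the actual Visual Builder implementation uses a filterCriterion on the list's ServiceDataProvider, not SQL):

-- hypothetical tables: the city list only shows rows matching the
-- state chosen in the first list
select city_name
from   cities
where  state_code = :selected_state
order  by city_name;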

In the short demo video below, I'm showing you how to implement this cascading lists solution with the new Visual Builder Cloud Service.

The solution is quite simple:

You catch the event of a value change in the first list, and in the action chain that is invoked you set a filterCriterion on the second list. (See this entry for a quick introduction to filterCriterion).

Since the list is connected to a ServiceDataProvider, there is no further action you need to take - the change to the SDP will be reflected in the UI component automatically.

Quick tips - make sure you reference the id of the column and that your operators are properly defined and enclosed in double quotes.

 

Categories: Development

SQLcl connect target depends on previous connection

Yann Neuhaus - Fri, 2018-06-01 05:36

I thought it was a bug for two reasons: first, because I don’t like that my connect destination depends on the context, and second, because it is different behavior from SQL*Plus. But finally, it is the correct behavior, and the 100% compatibility with SQL*Plus is expected only when SET CLASSIC=ON. As I was surprised, and I think it can be dangerous, here is a blog post about it. Basically, be very careful if you are connected with a service name and you want to connect locally.

If you attended my demo about ‘From Transportable Tablespaces to Pluggable Databases’, where I switch between different instances, you may have seen that I had to run ‘connect / as sysdba’ two times because the first one failed with an invalid username/password error:

SQL> connect scott/tiger@//localhost/PDB1
Connected.
...
SQL> connect / as sysdba
USER =
URL = jdbc:oracle:oci8:@//localhost/PDB1
Error Message = ORA-01017: invalid username/password; logon denied
 
Warning: You are no longer connected to ORACLE.
 
SQL> connect / as sysdba
Connected.

This is not a big deal, but it means that SQLcl tries to connect to //localhost/PDB1 when I wanted to connect locally to the instance named by my ORACLE_SID environment variable. Here, expecting a bequeath connection, I didn’t provide a password, so I could not connect to the PDB. But imagine that I use a password, and the password is the same in the two databases… I would have been connected to the wrong database. Just imagine this:

SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- checking something on PROD
SQL> connect sys/password@PROD as sysdba
Connected.
...
-- back to TEST (or at least I think so)
SQL> host echo $ORACLE_SID
TEST
SQL> connect sys/password as sysdba
Connected.
-- do terrible things
SQL> drop table VERY_IMPORTANT_TABLE;
Table VERY_IMPORTANT_TABLE dropped.
-- now look where I am:
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@PROD AS SYSDBA
CONNECTION_IDENTIFIER:
PROD

Actually, what happens is that when SQLcl is already connected with a connection string (i.e. not locally using bequeath), the next connect command will reuse the same connection string. This means that:

connect user/password

is actually equivalent to

connect user/password@&_CONNECT_IDENTIFIER
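So, before issuing a plain connect, you can check which connect identifier would be silently reused – _CONNECT_IDENTIFIER is a predefined substitution variable in SQLcl just as in SQL*Plus (a quick sketch; the exact output formatting may differ):

SQL> connect scott/tiger@//localhost/PDB1
Connected.
SQL> define _CONNECT_IDENTIFIER
DEFINE _CONNECT_IDENTIFIER = "//localhost/PDB1" (CHAR)

A following ‘connect user/password’ would then implicitly target //localhost/PDB1, not the local ORACLE_SID.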

SQL*Plus

This behavior has been introduced in SQLcl but this is not how SQL*Plus works:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect / as sysdba
Connected.
SQL> define _CONNECT_IDENTIFIER
DEFINE _CONNECT_IDENTIFIER = "CDB1" (CHAR)

Disconnect

The first solution to avoid this in SQLcl is to always disconnect before you want to connect to a different service:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> disc
 
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.2.0.0.0
SQL> connect sys/oracle as sysdba
Connected.
SQL> show connection
CONNECTION:
SYS@jdbc:oracle:oci8:@ AS SYSDBA
CONNECTION_IDENTIFIER:
CDB$ROOT

This is why the second attempt was ok in my case: the first one failed with an invalid password and left me disconnected.

TWO_TASK

The second solution is to set an impossible TWO_TASK (or LOCAL in Windows) so that local connections are impossible:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> connect sys/oracle as sysdba
USER = sys
URL = jdbc:oracle:oci8:@NoWhere
Error Message = ORA-12154: TNS:could not resolve the connect identifier specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere
Error Message = IO Error: Unknown host specified
USER = sys
URL = jdbc:oracle:thin:@NoWhere:1521/NoWhere
Error Message = IO Error: Unknown host specified

CLASSIC=ON

The third solution is to run SQLcl in SQL*Plus 100% compatible mode:

SQL> connect sys/oracle@//localhost/PDB1 as sysdba
Connected.
SQL> set classic on
SQL> show classic
SQL*Plus mode: ON
SQL> connect / as sysdba
Connected.

Here we have the same behavior as SQL*Plus: no use of current connection string.

The SET CLASSIC ON setting is usually about the output (error messages, autotrace statistics, and a few enhancements made in SQLcl). And the online help still says that it is about output:

SQL> help set classic
SET CLASSIC
SET CLASSIC [ ON | OFF ] Set classic SQL*Plus Settings on
This will allow scripts which expect traditional output to be honored.

However, it seems that this CLASSIC mode is also very important for connection.

Test and show _CONNECT_IDENTIFIER

If you show the connect identifier in the prompt, this may prevent errors:

SQL> set sqlprompt "_connect_identifier> "
//localhost/CDB2> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
//localhost/CDB1> connect sys/oracle as sysdba
Connected.

Always check which database

By the way, when I prepare a script that can do some damage when not run in the correct place, I usually add a test on the DBID at the top of it:

CDB1> whenever sqlerror exit failure;
CDB1> select 0/decode(dbid,'944121612',1,0) from v$database;
 
Error starting at line : 1 in command -
select 0/decode(dbid,'944121612',1,0) from v$database
Error report -
ORA-01476: divisor is equal to zero
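The trick here: decode() returns 1 when the DBID matches the expected value, so the expression is 0/1 = 0 and the script carries on; on any other database it returns 0 and the ORA-01476 aborts the script thanks to the WHENEVER SQLERROR setting. The same guard can be keyed on the database name rather than the DBID (a sketch with a placeholder name):

-- abort unless we are on the expected database (placeholder name TEST01)
whenever sqlerror exit failure;
select 0/decode(db_unique_name,'TEST01',1,0) right_database from v$database;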

Different passwords

Of course, you should have different passwords on prod and test databases. However, I prefer to have the passwords in a wallet (external password file), and then you always connect with the correct credentials, as they are recorded for each service name.
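With such a wallet in place (created with mkstore and referenced from sqlnet.ora – the setup itself is out of scope here), the connect string carries no password at all, so the credentials cannot accidentally match on the wrong database. A sketch of what the connection then looks like, assuming an entry was stored for the PROD alias:

SQL> connect /@PROD
Connected.

The credential is tied to the service name in the wallet, so there is nothing to mistype and nothing to reuse against the wrong instance.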

 

The post SQLcl connect target depends on previous connection appeared first on Blog dbi services.

Index Bouncy Scan 4

Jonathan Lewis - Fri, 2018-06-01 03:19

There’s always another hurdle to overcome. After I’d finished writing up the “index bouncy scan” as an efficient probing mechanism to find the combinations of the first two columns (both declared not null) of a very large index a follow-up question appeared almost immediately: “what if it’s a partitioned index”.

The problem with “typical” partitioned indexes is that the smallest value of the leading column might appear in any of the partitions, and the combination of that value and the smallest value for the second column might not appear in all the partitions where the smallest value appears. Consider a table of 10 partitions and a locally partitioned index on (val1, val2) where neither column is the partition key. The smallest value of val1 – call it k1 – may appear only in partitions 4, 7, 8, 9, 10; the lowest combination of (val1, val2) – call it (k1, k2) – may appear only in partitions 8 and 10. In a global (or globally partitioned) index the pair (k1, k2) would be at the low (leftmost) end of the index, but to find the pair in a locally partitioned index we have to probe the leftmost end of 10 separate index partitions – and once we’ve done that each “bounce” requires us to probe 10 index partitions for the first (val1, val2) pair where val1 = k1 and val2 is just greater than k2, or val1 is just greater than k1 and val2 is the minimum for that value of val1. The more partitions we have the greater the number of index partitions we have to probe at each step and the more likely it is that we ought to switch to a brute force index fast full scan with aggregate.

Here’s the starting point for solving the problem (maybe) – I’ll create a simple partitioned table, and use the “bouncy scan” code from the earlier posting with the table and column names adjusted accordingly:


rem
rem     Script:         bouncy_index_3.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1
rem

create table pt1 (
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
)
nologging
partition by hash (object_id) partitions 4
as
select
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
from
        (select * from all_objects),
        (select rownum n1 from dual connect by level <= 10)
;

alter table pt1 modify(status not null);

execute dbms_stats.gather_table_stats(null,'pt1',granularity=>'ALL',method_opt=>'for all columns size 1')

create index pt1_i1 on pt1(status, namespace) nologging local;

prompt  ==================================================
prompt  Make some rows in the last partition have a status
prompt  that won't be found in the first partition.
prompt  ==================================================

column namespace format 99999999
column partition_name new_value m_part

select  partition_name
from    user_tab_partitions
where   table_name = 'PT1'
order by
        partition_position
;

update pt1 partition (&m_part) set status = 'MISSING' where rownum <= 10;

select
        dbms_mview.pmarker(rowid), status, namespace
from    pt1
where   status = 'MISSING'
;

I’ve created a hash partitioned copy of view all_objects, duplicating it 10 times and created a local index on the columns (status, namespace). My data has two values for status, ‘VALID’ and ‘INVALID’, and there are about 10 values for the namespace. I’ve then updated a few rows in the last partition, giving them a status value that is between the two current values – this is just one little test case to help me check that my code is going to catch all values even if they don’t appear in the first table partition.
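A quick sanity check at this point (not part of the original script, just a sketch against the pt1 table created above) confirms that there are now three status values and that ‘MISSING’ sorts between the other two:

-- expect INVALID and VALID spread across all partitions, and the ten
-- MISSING rows confined to the partition we updated
select status, count(*)
from   pt1
group  by status
order  by status;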

Here’s the query from the earlier posting – and it does get the right results – followed by the execution plan:


alter session set statistics_level = all;

set serveroutput off
set linesize 180
set pagesize 60

prompt  =============================================================
prompt  Original Query, showing expensive access for driving minimums
prompt  =============================================================

with bounce1(status, namespace) as (
        select status, namespace
        from    (
                select
                        /*+ index(pt1) no_index_ffs(pt1) */
                        status, namespace,
                        row_number() over(order by status, namespace) rn
                from    pt1
        )
        where
                rn = 1
        union all
        select
                v1.status, v1.namespace
        from    bounce1,
                lateral (
                              select  /*+ index(pt1) no_index_ffs(pt1) no_decorrelate */
                                      pt1.status, pt1.namespace
                              from    pt1
                              where   pt1.status > bounce1.status
                              and     rownum = 1
                ) v1
        where   bounce1.status is not null
        and     bounce1.namespace is not null
),
bounce2 (status, namespace)
as (
        select  status, namespace
        from    bounce1
        where   bounce1.status is not null
        union all
        select  bounce2.status, (select min(pt1.namespace) namespace from pt1 where pt1.status = bounce2.status and pt1.namespace > bounce2.namespace) namespace
        from    bounce2
        where   bounce2.namespace is not null
        and     bounce2.status is not null
)
select * from bounce2
where
        bounce2.namespace is not null
and     bounce2.status is not null      -- > redundant predicate
order by
        status, namespace
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last outline'));


----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                    | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                             |                 |      1 |        | 16378 (100)|       |       |     10 |00:00:00.58 |    1869 |       |       |          |
|   1 |  SORT ORDER BY                               |                 |      1 |      4 | 16378   (4)|       |       |     10 |00:00:00.58 |    1869 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                       |                 |      1 |      4 | 16377   (4)|       |       |     10 |00:00:00.58 |    1869 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |       |       |     12 |00:00:00.58 |    1869 |  1024 |  1024 |          |
|*  4 |     VIEW                                     |                 |      1 |      2 |  8157   (4)|       |       |      2 |00:00:00.58 |    1747 |       |       |          |
|   5 |      UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |       |       |      2 |00:00:00.58 |    1747 |  1024 |  1024 | 2048  (0)|
|*  6 |       VIEW                                   |                 |      1 |      1 |  4047   (4)|       |       |      1 |00:00:00.58 |    1732 |       |       |          |
|*  7 |        WINDOW SORT PUSHED RANK               |                 |      1 |    617K|  4047   (4)|       |       |      1 |00:00:00.58 |    1732 |  2048 |  2048 | 2048  (0)|
|   8 |         PARTITION HASH ALL                   |                 |      1 |    617K|  1759   (2)|     1 |     4 |    617K|00:00:00.34 |    1732 |       |       |          |
|   9 |          INDEX FULL SCAN                     | PT1_I1          |      4 |    617K|  1759   (2)|     1 |     4 |    617K|00:00:00.15 |    1732 |       |       |          |
|  10 |       NESTED LOOPS                           |                 |      2 |      1 |  4110   (4)|       |       |      1 |00:00:00.01 |      15 |       |       |          |
|  11 |        RECURSIVE WITH PUMP                   |                 |      2 |        |            |       |       |      2 |00:00:00.01 |       0 |       |       |          |
|  12 |        VIEW                                  | VW_LAT_1BBF5C63 |      2 |      1 |     9   (0)|       |       |      1 |00:00:00.01 |      15 |       |       |          |
|* 13 |         COUNT STOPKEY                        |                 |      2 |        |            |       |       |      1 |00:00:00.01 |      15 |       |       |          |
|  14 |          PARTITION HASH ALL                  |                 |      2 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      15 |       |       |          |
|* 15 |           INDEX RANGE SCAN                   | PT1_I1          |      5 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      15 |       |       |          |
|  16 |     SORT AGGREGATE                           |                 |     10 |      1 |            |       |       |     10 |00:00:00.01 |     122 |       |       |          |
|  17 |      PARTITION HASH ALL                      |                 |     10 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     122 |       |       |          |
|  18 |       FIRST ROW                              |                 |     40 |      1 |     9   (0)|       |       |     27 |00:00:00.01 |     122 |       |       |          |
|* 19 |        INDEX RANGE SCAN (MIN/MAX)            | PT1_I1          |     40 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     122 |       |       |          |
|  20 |     RECURSIVE WITH PUMP                      |                 |     10 |        |            |       |       |     10 |00:00:00.01 |       0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("BOUNCE2"."NAMESPACE" IS NOT NULL AND "BOUNCE2"."STATUS" IS NOT NULL))
   4 - filter("BOUNCE1"."STATUS" IS NOT NULL)
   6 - filter("RN"=1)
   7 - filter(ROW_NUMBER() OVER ( ORDER BY "STATUS","NAMESPACE")<=1)
  13 - filter(ROWNUM=1)
  15 - access("PT1"."STATUS">"BOUNCE1"."STATUS")
  19 - access("PT1"."STATUS"=:B1 AND "PT1"."NAMESPACE">:B2)

In terms of time the query doesn’t seem to have done too badly – but I’m only using a small data set and we can see from the numbers that we haven’t produced an efficient plan. Operations 8 and 9 tell us that we’ve done an index full scan on every single partition before passing the data up for a window sort operation. That’s clearly a bad thing, but we did have an index() hint on that bit of code that worked very well for the simple (global) index, so maybe we should have taken it out before testing (except it doesn’t help much to do so, since Oracle still scans all 617K rows, just switching to an index fast full scan).

Apart from that massive load the rest of the query looks quite efficient. We keep seeing “partition hash all” of course – whatever we do we tend to do it to 4 separate partitions one after the other – but everything else we do looks rather efficient. But there is another problem – and this is where the importance of inserting the rows with status = ‘MISSING’ shows up: this query didn’t find them! We have a predicate “rownum = 1” in the second half of the bounce1 recursive subquery and because we’re using a partitioned index we’ve managed to find a row that looks appropriate in an early partition when the row we really needed doesn’t appear until the last partition.

Let’s return to this problem later – first we want to check if the rest of the query will run efficiently and give us the right answer if we can find some way of getting the starting values; so let’s use a strategy we’ve used before – replace the bounce1 subquery with a union all select from dual:


with bounce1(status, namespace) as (
        select status, namespace
        from    (
                select 'INVALID' status, 1 namespace from dual
                union all
                select 'MISSING', 4 from dual
                union all
                select 'VALID', 1 from dual
        )
),
bounce2 (status, namespace)
as (
        select  status, namespace
        from    bounce1
        where   bounce1.status is not null
        union all
        select  bounce2.status, (select min(pt1.namespace) namespace from pt1 where pt1.status = bounce2.status and pt1.namespace > bounce2.namespace) namespace
        from    bounce2
        where   bounce2.namespace is not null
        and     bounce2.status is not null
)
select * from bounce2
where
        bounce2.namespace is not null
and     bounce2.status is not null      -- > redundant predicate
order by
        status, namespace
;

select * from table(dbms_xplan.display_cursor(null,null,'cost allstats last partition outline'));

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |        |      1 |        |    76 (100)|       |       |     11 |00:00:00.01 |     132 |       |       |          |
|   1 |  SORT ORDER BY                             |        |      1 |      6 |    76   (2)|       |       |     11 |00:00:00.01 |     132 |  2048 |  2048 | 2048  (0)|
|*  2 |   VIEW                                     |        |      1 |      6 |    75   (0)|       |       |     11 |00:00:00.01 |     132 |       |       |          |
|   3 |    UNION ALL (RECURSIVE WITH) BREADTH FIRST|        |      1 |        |            |       |       |     14 |00:00:00.01 |     132 |  1024 |  1024 |          |
|   4 |     VIEW                                   |        |      1 |      3 |     6   (0)|       |       |      3 |00:00:00.01 |       0 |       |       |          |
|   5 |      UNION-ALL                             |        |      1 |        |            |       |       |      3 |00:00:00.01 |       0 |       |       |          |
|   6 |       FAST DUAL                            |        |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |       |       |          |
|   7 |       FAST DUAL                            |        |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |       |       |          |
|   8 |       FAST DUAL                            |        |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |       |       |          |
|   9 |     SORT AGGREGATE                         |        |     11 |      1 |            |       |       |     11 |00:00:00.01 |     132 |       |       |          |
|  10 |      PARTITION HASH ALL                    |        |     11 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     132 |       |       |          |
|  11 |       FIRST ROW                            |        |     44 |      1 |     9   (0)|       |       |     27 |00:00:00.01 |     132 |       |       |          |
|* 12 |        INDEX RANGE SCAN (MIN/MAX)          | PT1_I1 |     44 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     132 |       |       |          |
|  13 |     RECURSIVE WITH PUMP                    |        |     10 |        |            |       |       |     11 |00:00:00.01 |       0 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("BOUNCE2"."NAMESPACE" IS NOT NULL AND "BOUNCE2"."STATUS" IS NOT NULL))
  12 - access("PT1"."STATUS"=:B1 AND "PT1"."NAMESPACE">:B2)

This gets us the right answer, very efficiently. There are only 11 rows in the result set and we average 12 buffer visits per row – which is reasonable given that we (probably) have to probe 4 index partitions for every row. So that’s 11 rows * 4 partitions * 3 buffer visits per probe = 132 buffer gets in total, matching the Buffers figure in the plan – which seems just about optimal.

The next step is to figure out a way of getting the (three in our case) starting points while using a partitioned index. Here’s a query we can use for bounce1:


with bounce1(status, namespace) as (
        select
                (select min(status) from pt1) status,
                (select /*+ index(pt1) */ min(namespace) from pt1 where status = (select min(status) from pt1)) namespace
        from
                dual
        union all
        select
                v1.status, v2.namespace
        from    bounce1,
                lateral(
                        (select /*+ index(pt1) */ min(pt1.status) status from pt1 where pt1.status > bounce1.status)
                )       v1,
                lateral(
                        select /*+ index(pt1) */ min(pt1.namespace) namespace
                        from pt1
                        where pt1.status =  (select min(pt2.status) from pt1 pt2 where pt2.status > bounce1.status)
                )       v2
        where
                bounce1.status is not null
        and     bounce1.namespace is not null
)
select * from bounce1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition outline'));

It looks a little convoluted with all the inline select statements, but they all do very small amounts of work and they’re only reading the index leaf blocks that you have to read. We know from yesterday’s post that Oracle can execute the scalar subqueries at lines 3 and 4 very efficiently; we can hope (and check) that the lateral() subqueries driven by the single values from the recursive row in bounce1 will operate just as efficiently – and here’s the plan:


----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |   166 (100)|       |       |      4 |00:00:00.01 |     132 |
|   1 |  VIEW                                     |                 |      1 |      2 |   166   (0)|       |       |      4 |00:00:00.01 |     132 |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |       |       |      4 |00:00:00.01 |     132 |
|   3 |    SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   4 |     PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   5 |      INDEX FULL SCAN (MIN/MAX)            | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   6 |    SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      24 |
|   7 |     PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|   8 |      FIRST ROW                            |                 |      4 |      1 |     9   (0)|       |       |      4 |00:00:00.01 |      24 |
|*  9 |       INDEX RANGE SCAN (MIN/MAX)          | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|  10 |        SORT AGGREGATE                     |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|  11 |         PARTITION HASH ALL                |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  12 |          INDEX FULL SCAN (MIN/MAX)        | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  13 |    FAST DUAL                              |                 |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |
|  14 |    NESTED LOOPS                           |                 |      4 |      1 |   146   (0)|       |       |      3 |00:00:00.01 |      96 |
|  15 |     NESTED LOOPS                          |                 |      4 |      1 |   137   (0)|       |       |      3 |00:00:00.01 |      36 |
|  16 |      RECURSIVE WITH PUMP                  |                 |      4 |        |            |       |       |      3 |00:00:00.01 |       0 |
|  17 |      VIEW                                 | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      36 |
|  18 |       SORT AGGREGATE                      |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  19 |        PARTITION HASH ALL                 |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  20 |         FIRST ROW                         |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 21 |          INDEX RANGE SCAN (MIN/MAX)       | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  22 |     VIEW                                  | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      60 |
|  23 |      SORT AGGREGATE                       |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      60 |
|  24 |       PARTITION HASH ALL                  |                 |      3 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  25 |        FIRST ROW                          |                 |     12 |      1 |     9   (0)|       |       |      5 |00:00:00.01 |      60 |
|* 26 |         INDEX RANGE SCAN (MIN/MAX)        | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  27 |          SORT AGGREGATE                   |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  28 |           PARTITION HASH ALL              |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  29 |            FIRST ROW                      |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 30 |             INDEX RANGE SCAN (MIN/MAX)    | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - access("STATUS"=)
  21 - access("PT1"."STATUS">"BOUNCE1"."STATUS")
  26 - access("PT1"."STATUS"=)
  30 - access("PT2"."STATUS">:B1)

Although we have done lots of individual probes into the index they have all been very efficient using a min/max access and an average of about 3 buffer visits per probe. So we can now insert this new bounce1 subquery into the previous query in place of the union all of dual and check that the two pieces of the query cooperate.


with bounce1(status, namespace) as (
        select
                (select min(status) from pt1) status,
                (select /*+ index(pt1) */ min(namespace) from pt1 where status = (select min(status) from pt1)) namespace
        from
                dual
        union all
        select
                v1.status, v2.namespace
        from    bounce1,
                lateral(
                        (select /*+ index(pt1) */ min(pt1.status) status from pt1 where pt1.status > bounce1.status)
                )       v1,
                lateral(
                        select /*+ index(pt1) */ min(pt1.namespace) namespace
                        from pt1
                        where pt1.status =  (select min(pt2.status) from pt1 pt2 where pt2.status > bounce1.status)
                )       v2
        where
                bounce1.status is not null
        and     bounce1.namespace is not null
),
bounce2 (status, namespace)
as (
        select  status, namespace from bounce1
        union all
        select  bounce2.status, (select min(t.namespace) namespace from pt1 t where t.namespace > bounce2.namespace and status=bounce2.status) namespace
        from    bounce2
        where   bounce2.status is not null
        and     bounce2.namespace is not null
)
select  *
from    bounce2
where   namespace is not null
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition outline'));

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |                 |      1 |        |   396 (100)|       |       |     11 |00:00:00.01 |     266 |
|*  1 |  VIEW                                       |                 |      1 |      4 |   396   (1)|       |       |     11 |00:00:00.01 |     266 |
|   2 |   UNION ALL (RECURSIVE WITH) BREADTH FIRST  |                 |      1 |        |            |       |       |     15 |00:00:00.01 |     266 |
|   3 |    VIEW                                     |                 |      1 |      2 |   166   (0)|       |       |      4 |00:00:00.01 |     132 |
|   4 |     UNION ALL (RECURSIVE WITH) BREADTH FIRST|                 |      1 |        |            |       |       |      4 |00:00:00.01 |     132 |
|   5 |      SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   6 |       PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   7 |        INDEX FULL SCAN (MIN/MAX)            | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   8 |      SORT AGGREGATE                         |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      24 |
|   9 |       PARTITION HASH ALL                    |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|  10 |        FIRST ROW                            |                 |      4 |      1 |     9   (0)|       |       |      4 |00:00:00.01 |      24 |
|* 11 |         INDEX RANGE SCAN (MIN/MAX)          | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|  12 |          SORT AGGREGATE                     |                 |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|  13 |           PARTITION HASH ALL                |                 |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  14 |            INDEX FULL SCAN (MIN/MAX)        | PT1_I1          |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|  15 |      FAST DUAL                              |                 |      1 |      1 |     2   (0)|       |       |      1 |00:00:00.01 |       0 |
|  16 |      NESTED LOOPS                           |                 |      4 |      1 |   146   (0)|       |       |      3 |00:00:00.01 |      96 |
|  17 |       NESTED LOOPS                          |                 |      4 |      1 |   137   (0)|       |       |      3 |00:00:00.01 |      36 |
|  18 |        RECURSIVE WITH PUMP                  |                 |      4 |        |            |       |       |      3 |00:00:00.01 |       0 |
|  19 |        VIEW                                 | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      36 |
|  20 |         SORT AGGREGATE                      |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  21 |          PARTITION HASH ALL                 |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  22 |           FIRST ROW                         |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 23 |            INDEX RANGE SCAN (MIN/MAX)       | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  24 |       VIEW                                  | VW_LAT_C2D92EFA |      3 |      1 |     9   (0)|       |       |      3 |00:00:00.01 |      60 |
|  25 |        SORT AGGREGATE                       |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      60 |
|  26 |         PARTITION HASH ALL                  |                 |      3 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  27 |          FIRST ROW                          |                 |     12 |      1 |     9   (0)|       |       |      5 |00:00:00.01 |      60 |
|* 28 |           INDEX RANGE SCAN (MIN/MAX)        | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      5 |00:00:00.01 |      60 |
|  29 |            SORT AGGREGATE                   |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  30 |             PARTITION HASH ALL              |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  31 |              FIRST ROW                      |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 32 |               INDEX RANGE SCAN (MIN/MAX)    | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  33 |    SORT AGGREGATE                           |                 |     11 |      1 |            |       |       |     11 |00:00:00.01 |     134 |
|  34 |     PARTITION HASH ALL                      |                 |     11 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     134 |
|  35 |      FIRST ROW                              |                 |     44 |      1 |     9   (0)|       |       |     27 |00:00:00.01 |     134 |
|* 36 |       INDEX RANGE SCAN (MIN/MAX)            | PT1_I1          |     44 |      1 |     9   (0)|     1 |     4 |     27 |00:00:00.01 |     134 |
|  37 |    RECURSIVE WITH PUMP                      |                 |     10 |        |            |       |       |     11 |00:00:00.01 |       0 |
------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("NAMESPACE" IS NOT NULL)
  11 - access("STATUS"=)
  23 - access("PT1"."STATUS">"BOUNCE1"."STATUS")
  28 - access("PT1"."STATUS"=)
  32 - access("PT2"."STATUS">:B1)
  36 - access("STATUS"=:B1 AND "T"."NAMESPACE">:B2)

Job done. We’ve found the distinct set of pairs without having to scan the entire index. We’ve found 11 pairs at a total cost of 266 buffer gets. For comparative purposes the query totalled 56 buffer visits when I recreated the table as a non-partitioned table (again updating a few rows to status = ‘MISSING’).

It’s important to note that this query can only work this efficiently in 12.2 (and possibly in a suitably patched 11.2.0.4) because of the optimizer’s ability to use the min/max operation for queries like: “select max(col2) where col1 = (select max() …)”. When I ran the final query on 12.1.0.2 the execution plan changed around operations 11 and 28: where 12.2.0.1 could use the aggregate subquery to drive the min/max scan, 12.1.0.2 did a real range scan with aggregate (which was extremely expensive at one point).

------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name            | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------------------------------------------
|  23 |       VIEW                                  | VW_LAT_C2D92EFA |      3 |      1 |  1206   (2)|       |       |      3 |00:00:05.43 |    2414 |
|  24 |        SORT AGGREGATE                       |                 |      3 |      1 |            |       |       |      3 |00:00:05.43 |    2414 |
|  25 |         PARTITION HASH ALL                  |                 |      3 |    422K|  1206   (2)|     1 |     4 |    845K|00:00:05.84 |    2414 |
|* 26 |          INDEX RANGE SCAN                   | PT1_I1          |     12 |    422K|  1206   (2)|     1 |     4 |    845K|00:00:02.03 |    2414 |
|  27 |           SORT AGGREGATE                    |                 |      3 |      1 |            |       |       |      3 |00:00:00.01 |      36 |
|  28 |            PARTITION HASH ALL               |                 |      3 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
|  29 |             FIRST ROW                       |                 |     12 |      1 |     9   (0)|       |       |      8 |00:00:00.01 |      36 |
|* 30 |              INDEX RANGE SCAN (MIN/MAX)     | PT1_I1          |     12 |      1 |     9   (0)|     1 |     4 |      8 |00:00:00.01 |      36 |
------------------------------------------------------------------------------------------------------------------------------------------------------

As you can see, this makes a dramatic difference to the work Oracle has to do – in this case 2,414 buffer gets and 845K rows examined. As I said in yesterday’s post – there’s a patch for 11.2.0.4, so there could be a patch for 12.1.0.2 if you ask for it, but it looks like no-one has done so yet.
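For reference, the shape of query that 12.2 can satisfy with a couple of min/max probes – and that 12.1.0.2 turns into the expensive range scan shown above – is essentially this (a sketch lifted from the inline subqueries used earlier in the article):

-- in 12.2 both the outer aggregate and the driving subquery can be
-- satisfied by INDEX SCAN (MIN/MAX) probes of pt1_i1
select  /*+ index(pt1) */ min(namespace)
from    pt1
where   status = (select min(status) from pt1);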

Footnote:

I could have used a lateral() view in the first half of bounce1 to reduce the reported number of probes of pt1_i1 in the plan – but it made the code extremely messy, I had to include a /*+ no_decorrelate */ hint in it, and it increased the number of buffer visits slightly because the optimizer seemed to lose the option for a min/max scan in this particular lateral join.

 

How To Prototype In Time and Budget Constraint Projects

Nilesh Jethwa - Thu, 2018-05-31 23:07

In the software development industry, you can expect that you'll encounter clients who are too cheap, who want more than they pay for, who want things done impossibly quickly, and who are just plain difficult to … Continue reading →

By: MockupTiger Wireframes

Initial release of PeopleTools 8.57 will be in the Oracle Cloud

PeopleSoft Technology Blog - Thu, 2018-05-31 16:44

Oracle recently announced in a Tech Update that PeopleTools 8.57 will be generally available to all PeopleSoft customers for use first in the Oracle Cloud.  Shortly after, most likely with the third PeopleTools patch (8.57.03), it will be available for on-premises environments. This, understandably, has generated some questions, so I thought I’d take a few minutes to clear things up.

When we talk about running PeopleSoft in the cloud, we are talking about taking one or more of your existing PeopleSoft environments and, instead of running them in your data center, running them on hardware that you subscribe to and manage in the cloud.  In this case, the cloud is Oracle’s Infrastructure as a Service, also called Oracle Cloud Infrastructure (OCI).  When you move one of your environments to the cloud, it’s backed up to files, copied to a cloud repository and provisioned from the repository.  We call that process ‘Lift and Shift’.  When it’s done, your application with your data, your customizations, and your configurations is running in the cloud.  Even though it’s in the cloud, you are responsible for maintaining it.  What has changed is the infrastructure – servers, storage, network – that the application is running on.

Just to be clear, there is no Software-as-a-Service (SaaS) version of PeopleSoft or PeopleTools, nor does Oracle have any plans to release one.  So, thinking it through, the 8.57 release of PeopleTools that we make available in the cloud is exactly the same as the version you will install in your on-premises environment.  Why then are we releasing it first in the cloud?

There are several reasons, but the most significant is to build awareness of the benefits of running PeopleSoft applications in the Oracle Cloud, and what that can mean to you.  We believe that in the long term, the best way to run PeopleSoft applications is to do so in the cloud.  To make it even better, one of the major initiatives over the past couple of years has been the release of PeopleSoft Cloud Manager.  With Cloud Manager, many of the processes that are time consuming or difficult, particularly around lifecycle management, have been improved or automated.  The PeopleTools upgrade, for instance, is automated.  Just choose one of your application images from your cloud repository, start it with the latest Tools version, and the app will be upgraded as part of the provisioning process.  It’s that easy.  And that’s just one example.

It’s pretty easy to take advantage of the initial release of PeopleTools in the cloud.  In fact, as I write this there is a 30-day Trial Program that gives you free credits to try it out.  Be sure to follow the correct OBE when installing.  For more information go to this link or talk to your Oracle Account team.  There is also a PeopleSoft Key Concept page about running PeopleSoft on the Oracle Cloud and PeopleSoft Cloud Manager where you can get more information.  It only takes a small investment to try this out, and it could lead to major improvements in how you manage your applications.

The Single Responsibility principle

Andrew Clarke - Thu, 2018-05-31 16:14
The Single Responsibility principle is the foundation of modular programming, and is probably the most important principle in the SOLID set. Many of the other principles flow from it.

It is quite simple: a program unit should do only one thing. A procedure should implement a single task; a package should gather together procedures which solve a set of related tasks. Consider the Oracle library package UTL_FILE. Its responsibility is quite clear: it is for working with external files. It implements all the operations necessary to work with OS files: opening them, closing them, reading and writing, etc. It defines a bespoke suite of exceptions too.

Each procedure in the package has a clear responsibility too. For instance, fclose() closes a single referenced file whereas fclose_all() closes all open files. Now, the package designers could have implemented that functionality as a single procedure, with different behaviours depending on whether the file parameter was populated or unpopulated. This might seem a simpler implementation, because it would be one fewer procedure. But the interface would actually have become more complicated: essentially we would have a flag parameter, which means we would need to know a little bit more about the internal processing of fclose(). It would have made the package just a little bit harder to work with without saving any actual code.
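A minimal sketch of the two calls side by side (the directory object name is a placeholder - any directory object you can write to will do):

declare
    l_file utl_file.file_type;
begin
    -- 'APP_DIR' is a hypothetical directory object, used only for illustration
    l_file := utl_file.fopen('APP_DIR', 'srp_demo.txt', 'w');
    utl_file.put_line(l_file, 'single responsibility in action');
    utl_file.fclose(l_file);   -- close just this one file...
    utl_file.fclose_all;       -- ...or close every file the session has open
end;
/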

Of course, it's pretty easy to define the Single Responsibility of a low level feature like file handling. We might think there are some superficial similarities with displaying information to the screen, but it's fairly obvious that these are unrelated and so we need two packages, UTL_FILE and DBMS_OUTPUT. When it comes to our own code, especially higher level packages, it can be harder to define the boundaries. At the broadest level we can define domains - SALES, HR, etc. But we need more than one package per domain: how do we decide the responsibilities of individual packages?

Robert C Martin defines the Single Responsibility principle as: "A class should have only one reason to change." Reasons for change can be many and various. In database applications dependence on tables is a primary one. So procedures which work on a common set of tables may well belong together. But there are at least two sets of privileges for data: reading and manipulating. So it's likely we will need a package which gathers together reporting-type queries, which can be granted to read-only users, and a package which executes DML statements, which can be granted to more privileged users. Maybe our domain requires special processing, such as handling sensitive data; procedures for implementing that business logic will belong in separate packages.
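A sketch of that read/write split (the package and role names are invented for illustration, not taken from any real schema):

-- reporting code: safe to grant to read-only users
create or replace package sales_reporting as
    function order_total (p_order_id in number) return number;
end sales_reporting;
/

-- manipulation code: granted only to more privileged users
create or replace package sales_maintenance as
    procedure cancel_order (p_order_id in number);
end sales_maintenance;
/

grant execute on sales_reporting   to sales_reader_role;
grant execute on sales_maintenance to sales_admin_role;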

Single responsibility becomes a matrix, with dependencies along one axis and the audience of users along the other.

The advantages of Single Responsibility should be obvious. It allows us to define a cohesive package, collecting together all the related functionality, which makes it easy for others to reuse it. It also allows us to define private routines in a package body, which reduces the amount of code we have to maintain while giving us a mechanism for preventing other developers from using it. Restricting the features to a single responsibility means unrelated functions are not coupled together. This gives a better granularity for granting the least privileges necessary to users of our code.

Part of the Designing PL/SQL Programs series

Oracle 18c Autonomous Health Framework (AHF) - Part 1

Syed Jaffar - Thu, 2018-05-31 10:03
Recently I had to present at the eProseed Annual Tech conference in Luxembourg and I was asked to present a topic about Oracle 18c.

Obviously I didn't want to repeat the same things about Autonomous Database; many experts have already said much on this. I then decided to pick a topic which really helps DBAs, administrators and, ultimately, the organization. I was really fascinated by the Oracle 18c Autonomous Health Framework concepts and decided to do a presentation on this topic.

Working in a complex and huge Oracle environment, I know where most of our energy and time is spent as a DBA or system administrator. We always focus on avoiding run-time availability and performance issues. In a complex and critical environment you will face a new challenge every other day, and you must be on your toes as a DBA during business hours.

Most importantly, as DBAs we need to ensure database availability and, at the same time, ensure it delivers the same performance 24x7. Imagine you get stuck with latches, an instance crash, a node crash, or someone changing the binaries' permissions/ownership: you will spend hours and hours finding the root cause and fixing the issues.

With the 18c Autonomous Health Framework, it's easy to avoid and auto-fix run-time availability and performance issues. There are 8 components that make up this framework. Though some of them are already present in 12.2, in 18c all of them are configured automatically and run 24x7 in daemon mode. Also, 3 of the components have machine learning capabilities to fix issues automatically.

I will start discussing the 8 components in the next posts of this series. Stay tuned for more on this topic.



Patching ODA lite to 12.2.1.3.0

Yann Neuhaus - Thu, 2018-05-31 09:57

Here is the latest patch for your ODA, and it seems that the Oracle documentation for this patch is not exactly the procedure you’ll have to follow to successfully patch your appliance. I recently updated an X6-2M and an X6-2L to this latest release and here is how to do it. In this example I was patching from 12.1.2.12.0; no intermediate patch was needed.

1) Download the patch

It seems odd but sometimes finding the corresponding patch is not so easy! With the patch number, it’s more convenient. For 12.2.1.3.0 with the dcs stack the number is 27648057. This patch will update all the components: dcs (odacli), operating system, BIOS/firmware, ILOM, GI and dbhomes.

Copy the patch in a temporary folder on the server, for example /opt/patch. You’ll need to be root to apply the patch.

2) Check the actual versions and free up space on disk

I recommend checking the current versions first. It’s easy:

odacli describe-component

System Version
---------------
12.1.2.12.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.1.2.12.0           up-to-date
GI                                        12.1.0.2.170814       up-to-date
DB {
[ OraDB12102_home1 ]                      12.1.0.2.170814       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.170814       up-to-date
}
ILOM                                      3.2.7.26.a.r112632    up-to-date
BIOS                                      39050100              up-to-date
OS                                        6.8                   up-to-date
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5                       KPYABR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

Don’t worry about the Available Version column: it’s only meaningful after you register a new patch. Your ODA doesn’t check for the latest patch online.

For free space, check that the /, /u01 and /opt filesystems have enough free GB for the process – at least 10GB, I think.

3) Prepare the patch files

It’s slightly different from previous versions. Only the first zipfile has to be uncompressed before being registered. The two other files can be registered directly, without unzipping them.

cd /opt/patch
unzip p27648057_122130_Linux-x86-64_1of3.zip

odacli update-repository -f /opt/patch/oda-sm-12.2.1.3.0-180504-server1of3.zip
odacli update-repository -f /opt/patch/p27648057_122130_Linux-x86-64_2of3.zip
odacli update-repository -f /opt/patch/p27648057_122130_Linux-x86-64_3of3.zip

4) Update the dcs-agent

First you’ll have to update the dcs-agent:

odacli update-dcsagent -v 12.2.1.3.0
{
"jobId" : "150b3486-cfb2-4b32-b751-0ed89ce3d7be",
"status" : "Created",
"message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
"reports" : [ ],
"createTimestamp" : "May 22, 2018 10:31:50 AM CEST",
"resourceList" : [ ],
"description" : "DcsAgent patching",
"updatedTime" : "May 22, 2018 10:31:50 AM CEST"
}

As for every kind of operation you do with odacli, you receive a jobId you can monitor:

odacli describe-job -i "150b3486-cfb2-4b32-b751-0ed89ce3d7be"

Job details
----------------------------------------------------------------
                     ID:  150b3486-cfb2-4b32-b751-0ed89ce3d7be
            Description:  DcsAgent patching
                 Status:  Success
                Created:  May 22, 2018 10:31:50 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
dcsagent rpm verification                May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:50 AM CEST       Success
Patch location validation                May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:50 AM CEST       Success
dcs-agent upgrade                        May 22, 2018 10:31:50 AM CEST       May 22, 2018 10:31:51 AM CEST       Success

This update takes only a minute.

Check the versions again and you will see that the dcs-agent now appears as a component in version 18c, which looks quite odd:

odacli describe-component
System Version
---------------
12.2.1.3.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.1.2.12.0           12.2.1.3.0
GI                                        12.1.0.2.170814       12.2.0.1.180116
DB {
[ OraDB12102_home1 ]                      12.1.0.2.170814       12.1.0.2.180116
[ OraDB11204_home1 ]                      11.2.0.4.170814       11.2.0.4.180116
}
DCSAGENT                                  18.1.3.0.0            up-to-date
ILOM                                      3.2.7.26.a.r112632    up-to-date
BIOS                                      39050100              39090000
OS                                        6.8                   6.9
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYABR3Q              kpyagr3q
}
FIRMWAREDISK                              0R3Q                  up-to-date

5) Update the server

Updating the server will actually update not only the operating system but also the ILOM, the BIOS, the firmware of the internal disks and the GI. For the OS, Oracle Linux will go from 6.8 to 6.9 and the update process assumes that no additional packages have been installed. If you installed additional packages, please remove them as they can prevent the patch from applying correctly. Even devel packages should be removed: the upgrade of the normal packages they are linked to will not work because of the dependencies. You can easily remove the installed packages with rpm -e, for example:

rpm -e openssl-devel-1.0.1e-48.el6_8.4.x86_64
rpm -e krb5-devel-1.10.3-57.el6.x86_64
rpm -e zlib-devel-1.2.3-29.el6.x86_64
rpm -e keyutils-libs-devel-1.4-5.0.1.el6.x86_64
rpm -e libcom_err-devel-1.42.8-1.0.2.el6.x86_64
rpm -e libselinux-devel-2.0.94-7.el6.x86_64
rpm -e libsepol-devel-2.0.41-4.el6.x86_64
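
If you are not sure which devel packages are present before removing them, a quick inventory helps (standard rpm usage, the grep pattern is just an example):

rpm -qa | grep -i devel | sort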

Now you can safely run the patching:

odacli update-server -v 12.2.1.3.0

This update is the longest one (between 30 minutes and 1 hour), during which you should see your server reboot. As usual, check the status of the job with describe-job:

odacli describe-job -i "27d2195f-f16b-44d8-84e0-6af6e48ccad7"
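
If you don't want to re-run the command manually, a simple polling loop does the trick (a sketch based on the job id above; adjust the sleep interval to your taste):

# press Ctrl-C once the Status switches to Success or Failure
while true; do
  odacli describe-job -i "27d2195f-f16b-44d8-84e0-6af6e48ccad7" | grep "Status:"
  sleep 60
done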

At the end of the process, describe-job will look like that:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
dcs-controller upgrade                   May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
Patch location validation                May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
dcs-cli upgrade                          May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:09 AM CEST       Success
Creating repositories using yum          May 23, 2018 11:07:09 AM CEST       May 23, 2018 11:07:10 AM CEST       Success
Applying HMP Patches                     May 23, 2018 11:07:10 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
Patch location validation                May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
oda-hw-mgmt upgrade                      May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:04 AM CEST       Success
Creating repositories using yum          May 23, 2018 11:09:04 AM CEST       May 23, 2018 11:09:09 AM CEST       Success
Applying OS Patches                      May 23, 2018 11:09:09 AM CEST       May 23, 2018 11:11:04 AM CEST       Success
OSS Patching                             May 23, 2018 11:11:04 AM CEST       May 23, 2018 11:11:04 AM CEST       Success
Applying Firmware Disk Patches           May 23, 2018 11:11:04 AM CEST       May 23, 2018 11:11:08 AM CEST       Success
Applying Firmware Expander Patches       May 23, 2018 11:11:08 AM CEST       May 23, 2018 11:11:13 AM CEST       Success
Applying Firmware Controller Patches     May 23, 2018 11:11:13 AM CEST       May 23, 2018 11:11:16 AM CEST       Success
Checking Ilom patch Version              May 23, 2018 11:11:17 AM CEST       May 23, 2018 11:11:19 AM CEST       Success
Patch location validation                May 23, 2018 11:11:19 AM CEST       May 23, 2018 11:11:20 AM CEST       Success
Apply Ilom patch                         May 23, 2018 11:11:21 AM CEST       May 23, 2018 11:11:22 AM CEST       Success
Copying Flash Bios to Temp location      May 23, 2018 11:11:22 AM CEST       May 23, 2018 11:11:22 AM CEST       Success
Starting the clusterware                 May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:11:23 AM CEST       Success
Creating GI home directories             May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:11:23 AM CEST       Success
Cloning Gi home                          May 23, 2018 11:11:23 AM CEST       May 23, 2018 11:13:53 AM CEST       Success
Configuring GI                           May 23, 2018 11:13:53 AM CEST       May 23, 2018 11:14:06 AM CEST       Success
Running GI upgrade root scripts          May 23, 2018 11:14:06 AM CEST       May 23, 2018 11:29:04 AM CEST       Success
Resetting DG compatibility               May 23, 2018 11:29:04 AM CEST       May 23, 2018 11:29:09 AM CEST       Success
Running GI config assistants             May 23, 2018 11:29:09 AM CEST       May 23, 2018 11:30:10 AM CEST       Success
restart oakd                             May 23, 2018 11:30:13 AM CEST       May 23, 2018 11:30:23 AM CEST       Success
Updating GiHome version                  May 23, 2018 11:30:23 AM CEST       May 23, 2018 11:30:25 AM CEST       Success
preRebootNode Actions                    May 23, 2018 11:30:39 AM CEST       May 23, 2018 11:31:25 AM CEST       Success
Reboot Ilom                              May 23, 2018 11:31:25 AM CEST       May 23, 2018 11:31:25 AM CEST       Success

Don't forget that if there is a problem during the patching process (you forgot to remove an additional rpm for example), you can relaunch the patching and it will skip the already patched components. But you will lose a lot of time! Please check the component versions after the reboot with describe-component.

6) Patch the dbhomes

You now need to patch the dbhomes separately. First of all list them:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
86b6a068-55a8-4171-9f94-48b86f135065     OraDB12102_home1     12.1.0.2.170814 (26680878, 26609798)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
c6e7d556-5785-41d8-a120-ed3ef756848a     OraDB11204_home1     11.2.0.4.170814 (26609929, 26609445)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

Then patch them one by one, starting for example with the 11.2.0.4 home:

odacli update-dbhome -v 12.2.1.3 -i c6e7d556-5785-41d8-a120-ed3ef756848a

It will also run datapatch on all the primary databases connected to this home. If you don't have any database linked to the dbhome, patching is fast:

odacli describe-job -i "9defa52d-2997-42ea-9bd3-aafdbf0a4dc5"
Job details
----------------------------------------------------------------
                     ID:  9defa52d-2997-42ea-9bd3-aafdbf0a4dc5
            Description:  DB Home Patching: Home Id is c6e7d556-5785-41d8-a120-ed3ef756848a
                 Status:  Success
                Created:  May 23, 2018 11:43:55 AM CEST
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           May 23, 2018 11:43:56 AM CEST       May 23, 2018 11:43:58 AM CEST       Success
Patch location validation                May 23, 2018 11:43:58 AM CEST       May 23, 2018 11:43:59 AM CEST       Success
Opatch updation                          May 23, 2018 11:44:19 AM CEST       May 23, 2018 11:44:20 AM CEST       Success
Patch conflict check                     May 23, 2018 11:44:20 AM CEST       May 23, 2018 11:44:27 AM CEST       Success
db upgrade                               May 23, 2018 11:44:27 AM CEST       May 23, 2018 11:46:01 AM CEST       Success

About 2 minutes for an empty dbhome.

And for a dbhome linked to databases:

odacli update-dbhome -v 12.2.1.3 -i 86b6a068-55a8-4171-9f94-48b86f135065
{
"jobId" : "8083dc43-61fe-49da-8081-43b4e5257e95",
"status" : "Created",
"message" : null,
"reports" : [ ],
"createTimestamp" : "May 23, 2018 12:46:06 PM CEST",
"resourceList" : [ ],
"description" : "DB Home Patching: Home Id is 86b6a068-55a8-4171-9f94-48b86f135065",
"updatedTime" : "May 23, 2018 12:46:06 PM CEST"
}

odacli describe-job -i "e871f741-7dd6-49c6-9b4a-af0d68e647e7"

Job details
----------------------------------------------------------------
ID:  e871f741-7dd6-49c6-9b4a-af0d68e647e7
Description:  DB Home Patching: Home Id is 86b6a068-55a8-4171-9f94-48b86f135065
Status:  Success
Created:  May 23, 2018 1:37:00 PM CEST
Message:  WARNING::Failed to run datapatch on db DB01TST##WARNING::Failed to run datapatch on db DB03TST

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           May 23, 2018 1:37:21 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Patch location validation                May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Opatch updation                          May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
Patch conflict check                     May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
db upgrade                               May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:23 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:37:23 PM CEST        May 23, 2018 1:37:57 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:37:57 PM CEST        May 23, 2018 1:38:26 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:38:26 PM CEST        May 23, 2018 1:38:51 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:38:51 PM CEST        May 23, 2018 1:39:49 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:39:49 PM CEST        May 23, 2018 1:40:28 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:40:28 PM CEST        May 23, 2018 1:40:53 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:40:53 PM CEST        May 23, 2018 1:41:19 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:41:19 PM CEST        May 23, 2018 1:41:44 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:41:44 PM CEST        May 23, 2018 1:42:14 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:42:14 PM CEST        May 23, 2018 1:42:48 PM CEST        Success
SqlPatch upgrade                         May 23, 2018 1:42:48 PM CEST        May 23, 2018 1:43:22 PM CEST        Success

It's longer, but still less than 10 minutes in this example. You can see the number of databases here (one SqlPatch upgrade line per database). The job is reported as successful, but not entirely so because 2 warnings are raised. And be careful: the Message field in the job details is limited in length, so check /opt/oracle/dcs/log/dcs-agent.log for the full warning messages.

Always check the DB Version after updating all the dbhomes; it now reflects the new version of the binaries:

odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
86b6a068-55a8-4171-9f94-48b86f135065     OraDB12102_home1     12.1.0.2.180116 (26925218, 26925263)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
c6e7d556-5785-41d8-a120-ed3ef756848a     OraDB11204_home1     11.2.0.4.180116 (26609929, 26925576)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured

If the update job completed without any warning, jump directly to step 8.

7) Update the databases where datapatch failed

For all the databases where datapatch could not be applied, you'll have to do it manually. I discovered that if your database is not using the AMERICAN_AMERICA language and territory (FRENCH_FRANCE in my case), odacli is not able to apply the datapatch on the database. Let's check the datapatch status on the database, set the NLS_LANG environment variable to AMERICAN_AMERICA and apply the patch on these databases:

su - oracle
. oraenv <<< DB01TST
cd $ORACLE_HOME/OPatch
sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 26609798
Action : APPLY
Action Time : 24-SEP-2017 23:50:14
Description : DATABASE BUNDLE PATCH 12.1.0.2.170814
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_G100316_CD
BROOT_2017Sep24_23_48_22.log
Status : SUCCESS

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:40:27
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_40_13.log
Status : FAILED

PL/SQL procedure successfully completed.

exit
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
./datapatch -verbose
sqlplus / as sysdba
SQL> set serveroutput on
SQL> exec dbms_qopatch.get_sqlpatch_status;

Patch Id : 26609798
Action : APPLY
Action Time : 24-SEP-2017 23:50:14
Description : DATABASE BUNDLE PATCH 12.1.0.2.170814
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26609798/21481992/26609798_apply_G100316_CD
BROOT_2017Sep24_23_48_22.log
Status : SUCCESS

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:40:27
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_40_13.log
Status : FAILED

Patch Id : 26925263
Action : APPLY
Action Time : 29-MAY-2018 15:53:43
Description : DATABASE BUNDLE PATCH 12.1.0.2.180116
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/26925263/21857460/26925263_apply_DB01TST_20
18May29_15_53_01.log
Status : SUCCESS

exit

Repeat this operation on all the faulty databases.
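
If several databases are concerned, the same steps can be scripted; here is a minimal sketch to run as the oracle user, assuming DB01TST and DB03TST are the databases reported in the warnings above:

# loop over the databases where datapatch failed
for db in DB01TST DB03TST; do
  . oraenv <<< $db
  export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
  cd $ORACLE_HOME/OPatch
  ./datapatch -verbose
done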

8) Patching the storage

Patching the storage actually means patching the NVMe disks. This is not always mandatory as your disks may already have the latest firmware. So, if needed, apply the patch:

odacli update-storage -v 12.2.1.3.0
{
"jobId" : "4a221df4-8c85-4f34-aa7f-e014cdb751f7",
"status" : "Created",
"message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
"reports" : [ ],
"createTimestamp" : "May 29, 2018 13:27:41 PM CEST",
"resourceList" : [ ],
"description" : "Storage Firmware Patching",
"updatedTime" : "May 29, 2018 13:27:41 PM CEST"
}

odacli describe-job -i "4a221df4-8c85-4f34-aa7f-e014cdb751f7"

Job details
----------------------------------------------------------------
ID:  4a221df4-8c85-4f34-aa7f-e014cdb751f7
Description:  Storage Firmware Patching
Status:  Success
Created:  May 29, 2018 1:27:41 PM CEST
Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           May 29, 2018 1:27:41 PM CEST        May 29, 2018 1:27:45 PM CEST        Success
Applying Firmware Expander Patches       May 29, 2018 1:27:45 PM CEST        May 29, 2018 1:27:52 PM CEST        Success
Applying Firmware Controller Patches     May 29, 2018 1:27:52 PM CEST        May 29, 2018 1:28:51 PM CEST        Success
preRebootNode Actions                    May 29, 2018 1:28:51 PM CEST        May 29, 2018 1:28:51 PM CEST        Success
Reboot Ilom                              May 29, 2018 1:28:51 PM CEST        May 29, 2018 1:28:51 PM CEST        Success

Beware: the server reboots a few minutes after the patching is finished! And check the component versions after the reboot.

9) Optional: deploy the latest db clone files

If you'll never deploy a new dbhome, this step is not necessary. If you will, or if you simply don't know whether you'll have to do that later, download and register the new db clone files in the repository to be able to create a new dbhome at the same patch level, for example:

cd /opt/patch
unzip p23494992_122130_Linux-x86-64.zip
Archive:  p23494992_122130_Linux-x86-64.zip
 extracting: odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip
  inflating: README.txt

update-image --image-files odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip
Unpacking will take some time,  Please wait...
Unzipping odacli-dcs-12.2.1.3.0-180320-DB-12.1.0.2.zip

10) Control the version of the components

Now the patching is done. It took about 2 hours, not so bad for all these updates. Finally, do a last check of the components to see if everything is fine:

odacli describe-component
System Version
---------------
12.2.1.3.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.3.0            up-to-date
GI                                        12.2.0.1.180116       up-to-date
DB {
[ OraDB12102_home1 ]                      12.1.0.2.180116       up-to-date
[ OraDB11204_home1 ]                      11.2.0.4.180116       up-to-date
}
DCSAGENT                                  18.1.3.0.0            up-to-date
ILOM                                      4.0.0.24.r121140      up-to-date
BIOS                                      39090000              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER {
[ c6 ]                                    4.650.00-7176         up-to-date
[ c0,c1,c2,c3,c4,c5 ]                     KPYAGR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

Everything should be up-to-date now, until the next patch :-)

 

The article Patching ODA lite to 12.2.1.3.0 appeared first on Blog dbi services.

Dynamic Process, Conditions and Scope

Jan Kettenis - Thu, 2018-05-31 08:56

In Oracle Integration Cloud's Dynamic Processes, activation/termination conditions can be based on case events. These events are scoped to the components they relate to, which implies some restrictions. The below explains how this works, and how to work around these restrictions.

A Dynamic Process or Case (as I will call it in this article) in the Oracle Integration Cloud consists of four component types: the Case itself, Stages (phases), Activities, and Milestones. An Activity or Milestone is either in a particular Stage (in the picture below Activities A to H are), or global (Activities X and Y). Cases, Processes, Stages, Activities and Milestones cannot be nested (but a Case can initiate a sub-Case via an Activity, which I will discuss another time).



Except for the case itself, all other components can explicitly be activated/enabled or terminated/completed based on conditions. For example in the dynamic process above Milestone 1 is activated once Activity A is completed, and Stage 2 is to be activated once Stage 1 is completed.

A Stage implicitly completes when all work in that stage is done (i.e. all Activities), and a Case implicitly completes when all work in the case is done. Currently the status of a Case cannot be explicitly set using conditions, but I would expect this to become possible in some future version. In the meantime there is a REST API that can be used to close or complete a case.

There are two types of conditions for explicit activation/termination:
  • (case) Events, for example completion of an activity
  • (case) Data Driven, for example "status" field gets value "started"
Events and Data Driven can also be used in combination, for example Activity B is only activated when Activity A is completed (event) AND some "status" data field has value "approve" (data driven).

The scope of an Event is its container, meaning:
  • A Stage can only be activated or terminated by a condition based on an Event concerning another Stage or a Global Activity.
  • An Activity can only be activated or terminated by a condition based on an Event concerning another Activity or Milestone in the same Stage.
  • A Milestone can only be completed by a condition based on an Event concerning an Activity or another Milestone in the same Stage.
  • A Global Activity can only be enabled or terminated by a condition based on an Event concerning a Stage, or Global Milestone or another Global Activity
  • A Global Milestone can only be enabled or terminated by a condition based on an Event concerning a Stage, a Global Activity, or another Global Milestone
I expect that in practice most conditions will be based on Events (so far that has been the case for me), and most of the time the scope of these events will not be a limitation. However, there are situations where you will need a work-around.

Let's assume that in the example Stage 2 is only to be activated when Milestone 1 is completed and otherwise Stage 2 is to be skipped and the case should directly go to Stage 3. Because of the way events are limited by their scope, you cannot create a condition for Stage 2 to be skipped based on the completion of Milestone 1 (which is in Stage 1 and therefore not visible outside).

The work-around is to use a Data Driven condition instead. You can for example have a "metaData.status" field that you set to something like "skip phase 2" and base the condition on that.

In general, it is probably always a good idea to give your case a complex data element, for example called "metaData", consisting of fields like "dateStarted" and "status", which you fill out via the activities and which, if needed, can be used in conditions everywhere.
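
Purely as an illustration (the field names are the ones mentioned above, the values are made up), the data carried by such a "metaData" element could look like this:

{
  "metaData": {
    "dateStarted": "2018-05-31T08:00:00Z",
    "status": "skip phase 2"
  }
}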

Min/Max upgrade

Jonathan Lewis - Thu, 2018-05-31 08:13

Here’s a nice little optimizer enhancement that appeared in 12.2 to make min/max range scans (and full scans) available in more circumstances. Rather than talk through it, here’s a little demonstration:

rem
rem     Script:         122_minmax.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1        Good path
rem             12.1.0.2        Bad path

create table pt1 (
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
)
nologging
partition by hash (object_id) partitions 4
as
select
        object_id,
        owner,
        object_type,
        object_name,
        status,
        namespace
from
        (select * from all_objects),
        (select rownum n1 from dual connect by level <= 10)  -- > comment to avoid format wordpress issue
;

alter table pt1 modify(status not null);

execute dbms_stats.gather_table_stats(null,'pt1',granularity=>'ALL',method_opt=>'for all columns size 1')

create index pt1_i1 on pt1(status, namespace) nologging local;

alter session set statistics_level = all;
set serveroutput off
set linesize 156
set pagesize 60
set trimspool on

select  min(status) from pt1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition'));

select  min(namespace) from pt1 where status = 'INVALID';
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition'));

select  min(namespace) from pt1 where status = (select min(status) from pt1);
select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost partition'));

The basic “min/max” optimisation allows Oracle to avoid a massive sort aggregate – Oracle doesn’t need to acquire a lot of data and sort it when it knows that the “left hand” end of an index is the low values and the “right hand” is the high values so, for example, in the first query above the optimizer could simply walk down the index branches to the left hand leaf and look at the single lowest entry in the leaf block to determine the lowest value for status … if the index had been a global index.

Things get a little messy, though, when the index is locally partitioned and your query isn’t about the partition key and there’s no suitable global index. Once upon a time (IIRC) Oracle would simply have to do an index fast full scan across all index partitions to handle such a query, but some time ago it got a lot cleverer and was enhanced to do a min/max scan on each partition in turn getting one value per partition very efficiently, then aggregating across those values to find the global minimum.

Here are the three execution plans (with rowsource execution stats pulled from memory) taken from 12.1.0.2 for the queries above:


-----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |      1 |        |     9 (100)|       |       |      1 |00:00:00.01 |      12 |
|   1 |  SORT AGGREGATE             |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   2 |   PARTITION HASH ALL        |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   3 |    INDEX FULL SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
-----------------------------------------------------------------------------------------------------------------------------


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |        |      1 |        |     9 (100)|       |       |      1 |00:00:00.01 |      12 |
|   1 |  SORT AGGREGATE               |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   2 |   PARTITION HASH ALL          |        |      1 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      12 |
|   3 |    FIRST ROW                  |        |      4 |      1 |     9   (0)|       |       |      1 |00:00:00.01 |      12 |
|*  4 |     INDEX RANGE SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      1 |00:00:00.01 |      12 |
-------------------------------------------------------------------------------------------------------------------------------


-----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |      1 |        |   337 (100)|       |       |      1 |00:00:00.07 |    2402 |   2242 |
|   1 |  SORT AGGREGATE                |        |      1 |      1 |            |       |       |      1 |00:00:00.07 |    2402 |   2242 |
|   2 |   PARTITION HASH ALL           |        |      1 |    422K|   328  (10)|     1 |     4 |     10 |00:00:00.07 |    2402 |   2242 |
|*  3 |    INDEX FAST FULL SCAN        | PT1_I1 |      4 |    422K|   328  (10)|     1 |     4 |     10 |00:00:00.07 |    2402 |   2242 |
|   4 |     SORT AGGREGATE             |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |      0 |
|   5 |      PARTITION HASH ALL        |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |      0 |
|   6 |       INDEX FULL SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |      0 |
-----------------------------------------------------------------------------------------------------------------------------------------

In the first plan Oracle has done an “index full scan (min/max)” across each of the four partitions in turn to return one row very cheaply from each, then aggregated to find the overall minimum.

In the second plan Oracle has done an “index range scan (min/max)” in exactly the same way, since it was able to find the start point in the index for the status ‘INVALID’ very efficiently.

In the third plan Oracle has been able to find the minimum value for the status (‘INVALID’) very efficiently in the subquery, and has passed that single value up to the main query, which has then used a brute force approach to search the whole of every partition of the index for every occurrence (all 10 of them) of the value ‘INVALID’ and then aggregated them to find the minimum namespace. Despite “knowing”, by the time the main query runs, that there will be a single value to probe for the status, the optimizer has not anticipated the fact that the final query will effectively become the same as the preceding one. As a result we’ve read 2,242 data blocks into the cache.

Turn, then, to the execution plan from 12.2.0.1 for this last query:


---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name   | Starts | E-Rows | Cost (%CPU)| Pstart| Pstop | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |        |      1 |        |     9 (100)|       |       |      1 |00:00:00.01 |      24 |
|   1 |  SORT AGGREGATE                 |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      24 |
|   2 |   PARTITION HASH ALL            |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|   3 |    FIRST ROW                    |        |      4 |      1 |     9   (0)|       |       |      4 |00:00:00.01 |      24 |
|*  4 |     INDEX RANGE SCAN (MIN/MAX)  | PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      24 |
|   5 |      SORT AGGREGATE             |        |      1 |      1 |            |       |       |      1 |00:00:00.01 |      12 |
|   6 |       PARTITION HASH ALL        |        |      1 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
|   7 |        INDEX FULL SCAN (MIN/MAX)| PT1_I1 |      4 |      1 |     9   (0)|     1 |     4 |      4 |00:00:00.01 |      12 |
---------------------------------------------------------------------------------------------------------------------------------

In 12.2 you can see that the main query is now doing an “index range scan (min/max)” on each index partition in turn, based on the incoming (though unknown at parse time) single value from the subquery. As a result the total work done is a mere 24 buffer visits.

There have been a couple of occasions in the past where I’ve had to write some PL/SQL to work around little details like this. It’s nice to know simple tables and partitioned tables with local indexes can now behave the same way. I also wonder whether there may be sites that could drop (or drop columns from, or make local) some indexes that they’ve previously created to handle queries of the “most recent occurrence” type.

If, for any reason, you need to disable this enhancement, it’s controlled by fix_control (v$system_fix_control) “18915345 Allow MIN/MAX optimization for pred having single row subquery” which can be set in the startup file, at the system level, or in the session.
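
For what it's worth, at the session level that would look something like the following sketch (setting the fix control to zero to disable the optimisation; check v$system_fix_control for the exact name and default on your version):

alter session set "_fix_control" = '18915345:0';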

Update

Checking MoS for the bug number I found that the limitation had been reported for 11.2.0.3, with “Fixed in product version” reported as 12.2; but there are patches for various releases of 11.2.0.4, though none yet for 12.1.0.2 – but if you think you need it you can always try raising an SR.

 

Introducing SQL Server on Kubernetes

Yann Neuhaus - Thu, 2018-05-31 06:50

After spending some time with Docker Swarm, let's introduce SQL Server on Kubernetes (aka K8s). Why another container orchestrator? Well, because Microsoft puts a strong focus on Kubernetes in their documentation and their events, and because K8s is probably one of the most popular orchestration tools in the IT industry. By the way, I like to refer to the Portworx Annual Container Adoption Survey to get a picture of the container trend over the years, and we may notice there is no clear winner yet between Swarm and K8s among orchestration tools. In the 2017 edition, another interesting point was the persistent storage challenge, which was number one in the list of adoption challenges. I'm looking forward to seeing the next report on this point because, as you probably guessed, database containers rely mainly on it.

Anyway, as an IT services company, it seems justified to add K8s to our to-do list of container orchestrators :)

blog 136 - 000 - K8s - banner

First of all, let's say this blog post doesn't aim to compare Docker Swarm and K8s. Each platform has its pros and cons and you can read a lot about them on the internet. I will rather share some thoughts about deploying our dbi services docker image on this platform. Indeed, since last year we have mainly worked on our SQL Server docker image on Docker and Docker Swarm architectures, and it is interesting to see if we can go the same way with K8s.

But before deploying our custom image, we need to install a K8s infrastructure. From an installation perspective, a K8s cluster is likely harder to set up than Docker Swarm. This time, rather than using my own lab environment, I will rely on both the Azure container service (AKS) and the Azure container registry service to provision an operational K8s service. Here I just want to focus on deploying my image and getting some experience feedback about interacting with K8s. The Microsoft procedure is well documented, so there is no real added value in duplicating the installation steps. Because we operate on Azure, I will use a lot of az cli and kubectl commands to deploy and manage my K8s service. Here is some important information concerning my infrastructure:

I first installed and configured a private registry through the Azure container registry service in order to push my custom docker image for SQL Server 2017 on Linux. Obviously, this step may be optional depending on your context. My custom image is named dbi_linux_sql2017.

[dab@DBI-LT-DAB:#]> az acr repository list --name dbik8registry --output table
Result
------------------
dbi_linux_sql2017
mssql-server-linux

 

Then I installed my K8s service with 2 nodes. This is likely not a recommended scenario for production, but it fits my needs for the moment. I will probably scale the architecture out for future tests.

[dab@DBI-LT-DAB:#]> kubectl cluster-info
Kubernetes master is running at https://dbik8sclus-k8s-rg-913528-...
Heapster is running at https://dbik8sclus-k8s-rg-913528-...
KubeDNS is running at https://dbik8sclus-k8s-rg-913528-...
kubernetes-dashboard is running at https://dbik8sclus-k8s-rg-913528-...
…
[dab@DBI-LT-DAB:#]> kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://dbik8sclus-k8s-rg-913528-3eb7146d.hcp.westeurope.azmk8s.io:443
  name: dbik8scluster
contexts:
- context:
    cluster: dbik8scluster
    user: clusterUser_k8s-rg_dbik8scluster
  name: dbik8scluster
current-context: dbik8scluster
…

[dab@DBI-LT-DAB:#]> kubectl get nodes
NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-78763348-0   Ready     agent     6h        v1.9.6
aks-nodepool1-78763348-1   Ready     agent     6h        v1.9.6

 

From an Azure perspective, my K8s cluster is composed of several resources in a dedicated resource group with virtual machines, disks, network interfaces, availability sets and a K8s load balancer reachable from a public IP address.

blog 136 - 00 - K8s - Azure config

Finally, I granted my K8s cluster sufficient permissions to access my private Docker registry (Reader role).

[dab@DBI-LT-DAB:#]> CLIENT_ID=$(az aks show --resource-group k8s-rg --name dbik8scluster --query "servicePrincipalProfile.clientId" --output tsv)
[dab@DBI-LT-DAB:#]> ACR_ID=$(az acr show --name dbik8registry --resource-group k8s-rg --query "id" --output tsv)

[dab@DBI-LT-DAB:#]>az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID

[dab@DBI-LT-DAB:#]> kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-s94vc   kubernetes.io/service-account-token   3         6h

 

As with Docker Swarm, we may rely on the secret capabilities to protect the SQL Server sa password. So, let's take advantage of them!

[dab@DBI-LT-DAB:#]> kubectl create secret generic mssql --from-literal=SA_PASSWORD="xxxxx"
[dab@DBI-LT-DAB:#]> kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-s94vc   kubernetes.io/service-account-token   3         6h
mssql                 Opaque                                1         6h

 

At this stage, before deploying my SQL Server application, let's introduce some important K8s concepts we should be familiar with as database administrators.

  • Pod

Referring to the K8s documentation, a pod is a logical concept that represents one or more application containers with some shared resources such as storage and networking (including a unique cluster IP address), as well as metadata about each container image such as the image version, exposed ports, etc.

Containers in the same pod are always co-located, co-scheduled and run in a shared context on the same node. Docker Swarm doesn't offer such a capability because, as far as I know, by default the tasks of a service are spread across the cluster and there is no really easy way to achieve the same concept as a K8s pod.

To simplify, a K8s pod is a group of containers that are deployed together on the same host. In my SQL Server deployment with only one container, "pod" could be replaced by "container" here, but in a real production scenario SQL Server will likely be only one part of a K8s pod.

blog 136 - 0 - K8s - POD

 

We may confirm what was said previously by using the K8s commands related to pods. Here is the status of the pod related to my SQL Server deployment.

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                                READY     STATUS    RESTARTS   AGE
mssql-deployment-5845f974c6-xx9jv   1/1       Running   0          3h

[dab@DBI-LT-DAB:#]> kubectl describe pod mssql-deployment-5845f974c6-xx9jv
Name:           mssql-deployment-5845f974c6-xx9jv
Namespace:      default
Node:           aks-nodepool1-78763348-0/10.240.0.4
Start Time:     Wed, 30 May 2018 19:16:46 +0200
Labels:         app=mssql
                pod-template-hash=1401953072
Annotations:    <none>
Status:         Running
IP:             10.244.1.13
Controlled By:  ReplicaSet/mssql-deployment-5845f974c6
Containers:
  mssql:
    Container ID:   docker://b71ba9ac3c9fa324d8ff9ffa8ec24015a676a940f4d2b64cbb85b9de8ce1e227
    Image:          dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
    Image ID:       docker-pullable://dbik8registry.azurecr.io/dbi_linux_sql2017@sha256:5b9035c51ae2fd4c665f957da2ab89432b255db0d60d5cf63d3405b22a36ebc1
    Port:           1433/TCP
    State:          Running
      Started:      Wed, 30 May 2018 19:17:22 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      ACCEPT_EULA:        Y
      MSSQL_SA_PASSWORD:  xxxxxx
      DMK:                Y
    Mounts:
      /var/opt/mssql from mssqldb (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-s94vc (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  mssqldb:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mssql-data
    ReadOnly:   false
….

 

  • Replica set

A replica set is a structure that enables you to easily create multiple pods and then make sure that this number of pods always exists. If a pod crashes, the replica set replaces it. It also provides scaling capabilities similar to those we get from Docker Swarm.

  • Service

From the K8s documentation, a service is also an abstraction which defines a logical set of Pods and a policy by which to access them – sometimes called a micro-service. The set of Pods targeted by a service is (usually) determined by a label selector. While there are some differences under the hood, we find the same concepts in Docker Swarm from a deployment perspective.

  •  Virtual IP and service proxies

Referring again to the K8s documentation, every node in a Kubernetes cluster runs a kube-proxy that is responsible for implementing a form of virtual IP for Services. This is comparable to the ingress network that is also part of the Docker Swarm architecture, with its overlay networks and routing mesh capabilities.

In my case, as described previously, I used an external load balancer with an EXTERNAL-IP configured to access my SQL Server container from the internet (xx.xxx.xxx.xx is my masked public IP, as you already guessed).

[dab@DBI-LT-DAB:#]> kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes         ClusterIP      10.0.0.1       <none>          443/TCP          7h
mssql-deployment   LoadBalancer   10.0.200.170   xx.xxx.xxx.xx   1433:30980/TCP   4h

 

From an Azure perspective the above output corresponds to what we may identify as my Kubernetes load balancer and public IP address resources as well.

blog 136 - 1 - K8s - Load Balancer

blog 136 - 12 - K8s - Public IP

Once again, my intention was not to compare Docker Swarm and K8s but just to highlight the fact that if you're already comfortable with Docker Swarm, the move to K8s is not as brutal as one may suppose from a high-level point of view.

Ok, let's now start the deployment phase. As said previously, my private container registry already contains my custom SQL Server image. I just had to tag my image on my local machine and push it to this registry, as I would do with any other remote Docker registry.

[dab@DBI-LT-DAB:#]> docker images
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
dbik8registry.azurecr.io/dbi_linux_sql2017    CU4                 3c6bafb33a5c        17 hours ago        1.42GB
dbi/dbi_linux_sql2017                         CU4                 3c6bafb33a5c        17 hours ago        1.42GB

[dab@DBI-LT-DAB:#]> az acr repository list --name dbik8registry --output table
Result
------------------
dbi_linux_sql2017
mssql-server-linux

[dab@DBI-LT-DAB:#]> az acr repository show-tags --name dbik8registry --repository mssql-server-linux --output table
Result
--------
2017-CU4

 

In addition, I used persistent storage based on an Azure managed disk in order to guarantee the persistence of my SQL Server database files.

[dab@DBI-LT-DAB:#]> kubectl describe pvc mssql-data
Name:          mssql-data
Namespace:     default
StorageClass:  azure-disk
Status:        Bound
Volume:        pvc-32a42393-6402-11e8-885d-f2170a386bd7
…

 

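For reference, the storage class and persistent volume claim above were created beforehand from a manifest similar to this sketch (the 8Gi size and Standard_LRS account type are assumptions; only the azure-disk class and mssql-data claim names come from the output above):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: azure-disk
  resources:
    requests:
      storage: 8Gi
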
Concerning the image itself, we use some custom parameters both to create a dedicated user for the applications that will run on top of the SQL Server instance and to enable the installation of the DMK maintenance module for SQL Server at container startup. We have other customizations, but for this blog post this will be sufficient to check what we want to test.

Here is my deployment file. Compared to a Docker Swarm deployment file, I would say the manifest is more complex with K8s (that's at least my feeling).

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mssql
        image: dbik8registry.azurecr.io/dbi_linux_sql2017:CU4
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD 
        - name: DMK
          value: "Y"
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

 

Let’s deploy and let’s spin up our SQL Server application

[dab@DBI-LT-DAB:#]> kubectl apply -f T:\dbi_dbaas_azure\sqlk8sazuredeployment.yaml
deployment "mssql-deployment" created
service "mssql-deployment" created

 

Pod and service are created. Let's take a look at some information about them. Deployment and pod are ok. The last commands show the internal IP used to connect to the SQL Server pod, as well as an external / public IP address that corresponds to the load balancer used to connect from outside the Azure internal network. We also get a picture of the exposed ports.

[dab@DBI-LT-DAB:#]> kubectl get deployments
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
mssql-deployment   1         1         1            1           7m

[dab@DBI-LT-DAB:#]> kubectl get pods -o wide
NAME                               READY     STATUS    RESTARTS   AGE       IP            NODE
mssql-deployment-8c67fdccc-pbg6d   1/1       Running   0          12h       10.244.1.16   aks-nodepool1-78763348-0

[dab@DBI-LT-DAB:#]> kubectl get replicasets
NAME                         DESIRED   CURRENT   READY     AGE
mssql-deployment-8c67fdccc   1         1         1         12h

[dab@DBI-LT-DAB:#]> kubectl get services
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)          AGE
kubernetes         ClusterIP      10.0.0.1       <none>          443/TCP          9h
mssql-deployment   LoadBalancer   10.0.134.101   xxx.xxx.xxx.xxx   1433:31569/TCP   7m

[dab@DBI-LT-DAB:#]> kubectl describe service mssql-deployment
Name:                     mssql-deployment
Namespace:                default
Labels:                   <none>
…
Selector:                 app=mssql
Type:                     LoadBalancer
IP:                       10.0.134.101
LoadBalancer Ingress:     xxx.xxx.xxx.xxx
Port:                     <unset>  1433/TCP
TargetPort:               1433/TCP
NodePort:                 <unset>  31569/TCP
Endpoints:                10.244.1.16:1433
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

 

Let's now try to connect to the fresh new SQL Server instance from my remote laptop:

blog 136 - 2 - K8s - Container

Great job! My container includes all my custom stuff, such as the dbi_tools database and the dedicated maintenance jobs related to our DMK maintenance tool. We may also notice the dbi user created during the container startup.
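
For the record, the connection from the laptop can be made with any SQL Server client; with the sqlcmd command-line tool it would look something like this (hypothetical password, the IP is the EXTERNAL-IP of the mssql-deployment service):

sqlcmd -S xxx.xxx.xxx.xxx,1433 -U dbi -P '<MyPassword>' -Q "SELECT @@SERVERNAME;"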

Just out of curiosity, let's have a look at the pod log (or container log, since there is only one container in the pod in my case). The log includes the SQL Server startup log; I only kept some interesting samples here that show the custom actions we implemented during the container startup.

[dab@DBI-LT-DAB:#]> kubectl get po -a
NAME                               READY     STATUS    RESTARTS   AGE
mssql-deployment-8c67fdccc-pk6sm   1/1       Running   0          21m

[dab@DBI-LT-DAB:#]> kubectl logs mssql-deployment-8c67fdccc-pk6sm
…
======= 2018-05-30 21:04:59 Creating /u00 folder hierarchy ========
cat: /config.log: No such file or directory
======= 2018-05-30 21:04:59 Creating /u01 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u02 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u03 folder hierarchy ========
======= 2018-05-30 21:04:59 Creating /u98 folder hierarchy ========
======= 2018-05-30 21:04:59 Linking binaries and configuration files to new FHS ========
======= 2018-05-30 21:04:59 Creating MSFA OK =======
….
2018-05-30 21:05:13.85 spid22s     The default language (LCID 1033) has been set for engine and full-text services.
======= 2018-05-30 21:05:29 MSSQL SERVER STARTED ========
======= 2018-05-30 21:05:29 Configuring tempdb database files placement =======
…
2018-05-30 21:06:05.16 spid51      Configuration option 'max server memory (MB)' changed from 2147483647 to 1500. Run the RECONFIGURE statement to install.
Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
Configuration option 'max server memory (MB)' changed from 2147483647 to 1500. Run the RECONFIGURE statement to install.
======= 2018-05-30 21:06:05 Configuring max server memory OK =======
======= 2018-05-30 21:06:05 Creating login dbi =======
======= 2018-05-30 21:06:05 Creating login dbi OK =======
======= 2018-05-30 21:06:05 Installing DMK =======
Changed database context to 'master'.
Creating dbi_tools...
2018-05-30 21:06:12.47 spid51      Setting database option MULTI_USER to ON for database 'dbi_tools'.
Update complete.
Changed database context to 'dbi_tools'.

(1 rows affected)

(1 rows affected)
======= 2018-05-30 21:06:12 Installing DMK OK =======
======= MSSQL CONFIG COMPLETED =======
2018-05-30 21:10:09.20 spid51      Using 'dbghelp.dll' version '4.0.5'
2018-05-30 21:10:19.76 spid51      Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-05-30 21:10:19.87 spid51      Using 'xplog70.dll' version '2017.140.3022' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.

 

To finish this blog post up properly, let’s simulate a pod failure to check the K8s behavior with our SQL Server container.

[dab@DBI-LT-DAB:#]> kubectl delete pod mssql-deployment-8c67fdccc-pk6sm
pod "mssql-deployment-8c67fdccc-pk6sm" deleted

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                               READY     STATUS        RESTARTS   AGE
mssql-deployment-8c67fdccc-jrdgg   1/1       Running       0          5s
mssql-deployment-8c67fdccc-pk6sm   1/1       Terminating   0          26m

[dab@DBI-LT-DAB:#]> kubectl get pods
NAME                               READY     STATUS    RESTARTS   AGE
mssql-deployment-8c67fdccc-jrdgg   1/1       Running   0          2m

 

As expected, the replica set is doing its job by re-creating the pod to recover my SQL Server instance and by re-attaching the persistent storage. We can check that we can still connect to the instance through the load balancer IP address without running into any corruption issue.

blog 136 - 3 - K8s - Container 2
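
By the way, the desired number of pods is controlled by the replica set; purely as an illustration of the mechanism (and not something you would do with a stateful SQL Server container owning a single ReadWriteOnce disk), it could be changed like this:

kubectl scale deployment mssql-deployment --replicas=2
kubectl get pods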

To conclude, I would say that moving our custom SQL Server docker image was not as hard as I expected. Obviously, there are some differences between the two orchestrators, but from an application point of view it doesn't make a big difference. From an administration perspective, I agree the story is probably not the same :)

What about K8s from a development perspective? You may say that you don't own such an Azure environment, but the good news is that you can use Minikube, the single-node version of Kubernetes mainly designed for local development. I will probably blog about it in the future. Stay tuned!
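
Getting a local playground is a matter of minutes; a minimal sketch, assuming Minikube and kubectl are already installed on your workstation:

minikube start
kubectl get nodes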

 

 

The article Introducing SQL Server on Kubernetes appeared first on Blog dbi services.

An awk filter to truncate or wrap around tabular output

Yann Neuhaus - Thu, 2018-05-31 04:47

In my previous blog "idql and its column output" (see link here), I provided a small awk filter to reflow the idql output's columns by wrapping their content around. Later, I came to think that it could be useful to be able to truncate the columns instead, in order to get an even more compact output. A shorter, denser table can certainly help in some cases, such as when a quick overview of a query's result is enough to get an idea of its correctness or its performance.
Of course, in case of truncation, a user-definable trailing string, defaulting to the ellipsis, is appended to show that a truncation has been applied; this is to avoid presenting incomplete, and therefore incorrect, data to an unsuspecting or distracted reader.
So, here is the script:

# Usage:
#    gawk [-v maxw=nn] [-v truncate=0|1] [-v ellipsis=...] [-v wrap=1|0] -f compact_wwa.awk file
# or:
#    cmd | gawk [-v maxw=nn] [-v truncate=0|1] [-v ellipsis=...] [-v wrap=1|0] -f compact_wwa.awk
# where:
#     missing numeric parameters assume the value 0;
#     maxw is the maximum column width and defaults to 32;
#         characters outside this limit are either wrapped around in their own column or truncated, depending on the parameters truncate and wrap;
#     wrapping is the default and has priority over truncating;
#     truncate=0 wrap=0 --> wrapping around;
#     truncate=0 wrap=1 --> wrapping around;
#     truncate=1 wrap=0 --> truncate;
#     truncate=1 wrap=1 --> wrapping around;
#     truncating is therefore only done when explicitly and unambiguously requested; this is to preserve data integrity whenever possible;
#     ellipsis string is only considered when truncating and defaults to '...'; thus, there is always one to warn of truncation;
# example:
#     gawk -v maxw=50 -f compact_wwa.awk tmp_file  | less -S
#     C. Cervini, dbi-services;
BEGIN {
   if (!maxw) maxw = 32
   if (!truncate)
      if (!wrap)
         wrap = 1
      else;
   else if (!wrap);
   else
      truncate = 0
   if (!ellipsis)
      ellipsis = "..."
   while (getline && !match($0, /^([0-9]+> )+/));
   header = substr($0, RLENGTH + 1)
   getline
   nbFields = NF
   fs[0] = 0; fw[0] = -1 # just so that fs[1] = 1, see below;
   headerLine = ""; sepLine = ""
   for (i = 1; i <= NF; i++) {
      fs[i] = fs[i - 1] + fw[i - 1] + 2
      fw[i] = length($i)
      Min = min(fw[i], maxw)
      if (1 == truncate)
         Min = max(Min, length(ellipsis))
      sepLine = sepLine sprintf("%s  ", substr($0, fs[i], Min))
   }
   printWithWA(header)
   printf("%s\n", sepLine)
}
{
   if (match($0, /^\([0-9]+ rows? affected\)/)) {
      print
      exit
   }
   printWithWA($0)
}
function printWithWA(S) {
   do {
      left_over = ""
      for (i = 1; i <= nbFields; i++) {
         Min = min(fw[i], maxw)
         if (1 == truncate)
            Min = max(Min, length(ellipsis))
         columnS = substr(S, fs[i], Min)
         if (1 == truncate) {
            restColumn = substr(S, fs[i] + Min, fw[i] - Min); gsub(/ +$/, "", restColumn)
            if (restColumn)
               columnS = substr(columnS, 1, length(columnS) - length(ellipsis)) ellipsis
         }
         printf("%s  ", columnS)
         restS = substr(S, fs[i] + Min, fw[i] - Min)
         if (length(restS) > 0) {
            left_over = left_over sprintf("%-*s  ", fw[i], restS)
         }
         else
            left_over = left_over sprintf("%*s  ", fw[i], "")
      }
      printf "\n"
      gsub(/ +$/, "", left_over)
      S = left_over
   } while (left_over && wrap)
}
function min(x, y) {
   return(x <= y ? x : y)
}
function max(x, y) {
   return(x >= y ? x : y)
}

Example of execution

idql dmtest -Udmadmin -Pdmadmin -w500 <<EoQ | gawk -v maxw=25 -v truncate=1 -v ellipsis="(...)" -f compact_wwa.awk | less -S
select r_object_id, user_name, user_os_name, user_address, user_group_name, user_privileges, owner_def_permit, world_def_permit, group_def_permit, default_folder, user_db_name, description,
acl_domain, acl_name, user_os_domain, home_docbase, user_state, client_capability, globally_managed, user_delegation, workflow_disabled, alias_set_id, user_source, user_ldap_dn, user_xprivileges,
failed_auth_attempt, user_admin, user_global_unique_id, user_login_name, user_login_domain, user_initials, USER_PASSWORD, user_web_page, first_failed_auth_utc_time, last_login_utc_time,
deactivated_utc_time, deactivated_ip_addr, root_log_dir
from
dm_user
go
exit
EoQ

Result:
(screenshots of the compacted idql output)
Here the ellipsis string has been set to “(...)”.
One could wonder why there are 2 apparently contradictory options, truncate and wrap, to control the same behavior. Wouldn’t one be enough, since truncate = not wrap? It would also remove the 7 lines of validation and prioritization at the beginning. One reason is comfort, as it is often easier to think in positive rather than in negative logic. The other is that the two options are not exact opposites: the opposite of truncate can be “do nothing” as well as “wrap around”, and vice versa. It is better to be explicit rather than implicit, so as to avoid any ambiguity.
What if a do-nothing filter is needed while still calling compact_wwa.awk (have I mentioned that wwa stands for “with wrap-around”?), e.g. from another script that passes the command-line options through to it? Just specify some large value for maxw and that’s it. As its name suggests, compact_wwa.awk only attempts to shrink columns wider than maxw; narrower columns are left untouched.
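A possible do-nothing invocation would then be (cmd stands for whatever command produces the tabular output, as in the usage header at the top of the script):

$ cmd | gawk -v maxw=100000 -f compact_wwa.awk | less -S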

Possible improvements

There are still lots of ways to enhance this script. For instance, one of them is to provide a maxw per column. Here, the parameter maxw is applied to all the columns, but it would be nice to compress (whether by truncating or by wrapping around) only the uninteresting columns, so as to leave more space for the relevant ones; a possible sketch follows.
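Purely as an illustration (this is a sketch, not part of compact_wwa.awk), the fragment below shows how a comma-separated list of per-column widths could be accepted; the names colMaxw, colw_tab and default_maxw are invented here, and colMaxw(i) would replace the bare maxw in the min()/max() calls:

# sketch only: accept e.g. -v maxw="16,60,10"; the first entry also serves as the default
# for columns without an explicit (or with a zero) width;
BEGIN {
   nb_colw = split(maxw, colw_tab, ",")
   default_maxw = (colw_tab[1] + 0 > 0) ? colw_tab[1] + 0 : 32
}
function colMaxw(i) {
# maximum width for column i, falling back to default_maxw;
   return (i <= nb_colw && colw_tab[i] + 0 > 0) ? colw_tab[i] + 0 : default_maxw
}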
Another way would be to make the script work in both directions, enlarging columns as well as shrinking them, which could sometimes simplify their parsing since they would all have the same width. Maybe next time…
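In the meantime, the gist of it is simply a printf format that both pads and truncates to a fixed width; the width 15 and the sample line below are arbitrary:

$ echo "short muchlongervalue x" | gawk -v w=15 '{for (i = 1; i <= NF; i++) printf("%-*.*s  ", w, w, $i); print ""}'
short            muchlongervalue  x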

Wrapping up

I think this script can prove useful to those who need to work with idql and deal with its limits. Countless times in the past, I had to open a DQL extraction’s output in UltraEdit to manually shrink all those columns. Too bad I didn’t have this little utility before; it could have spared me tons of time on all that tedious work.

 

This article An awk filter to truncate or wrap around tabular output appeared first on Blog dbi services.

locking explain plan table

Tom Kyte - Thu, 2018-05-31 04:46
Hi Tom, while creating explain plan for query, explain plan table is getting locked until commit or rollback. If we don't want to commit / rollback, is it a problem that explain plan table stays locked? also while it is locked we don't have any prob...
Categories: DBA Blogs

crsctl and background processes of rac

Tom Kyte - Thu, 2018-05-31 04:46
Hi Tom, Which background process or any process works to retrieve the result of command "crsctl stat res -t" Regards, S.Alam
Categories: DBA Blogs

Adding a Documentum Extension to gawk, part II

Yann Neuhaus - Thu, 2018-05-31 04:41

This is part II of the article “Adding a Documentum Extension to gawk”. You can find Part I here.
Before we can test the extension, we need a test program and some helper functions for comfort. Let’s prepare them.
o  Move back to the dmgawk directory and edit DctmAPI.awk, the wrapper functions;

$ cd ../../..
$ pwd
/home/dmadmin/gawk
$ vi DctmAPI.awk

o  Cut and paste the lines below:

@load "dctm"

# provides high-level functions to do Documentum stuff;
# set the environment variable $DOCUMENTUM to where the dmcl.ini file is and $LD_LIBRARY_PATH to the location of libdmcl40.so;
# this library file is made available to gawk scripts by @include-ing it;
# the environment variable AWKPATH can be set to the path containing the included awk sources;
# Cesare Cervini
# dbi-services.com
# 5/2018

dmLogLevel = 1

function dmShow(mesg) {
# displays the message mesg if the current log level allows it;
   if (dmLogLevel > 0)
      print mesg
}

function dmConnect(docbase, user_name, password) {
# connects to given docbase as user_name/password;
# returns a session id if OK, an empty string otherwise
   dmShow("in connect(), docbase = " docbase ", user_name = " user_name ", password = " password)
   session = dmAPIGet("connect," docbase "," user_name "," password)
   if (!session) {
      print "unsuccessful connection to docbase " docbase " as user " user_name
      return "" 
   }
   else {
      dmShow("successful session " session)
      dmShow(dmAPIGet("getmessage," session))
   }
   dmShow("exiting connect()")
   return session
}

function dmExecute(session, dql_stmt) {
# execute non-SELECT DQL statements;
# returns 1 if OK, 0 otherwise;
   dmShow("in dmExecute(), dql_stmt=" dql_stmt)
   query_id = dmAPIGet("query," session "," dql_stmt)
   if (!query_id) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }
   if (!dmAPIExec("close," session "," query_id)) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }
   dmAPIGet("getmessage," session)
   return 1
}

function dmSelect(session, dql_stmt, attribute_names) {
# execute the DQL SELECT statement passed in dql_stmt and outputs the result to stdout;
# attribute_names is a space-separated list of attributes to extract from the result set;
# return 1 if OK, 0 otherwise;
   dmShow("in dmSelect(), dql_stmt=" dql_stmt)
   query_id = dmAPIGet("query," session "," dql_stmt)
   if (!query_id) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }

   s = ""
   nb_attrs = split(attribute_names, attributes_tab, " ")
   for (attr = 1; attr <= nb_attrs; attr++)
      s = s "[" attributes_tab[attr] "]\t"
   print s
   resp_cntr = 0
   while (dmAPIExec("next," session "," query_id) > 0) {
      s = ""
      for (attr = 1; attr <= nb_attrs; attr++) {
         value = dmAPIGet("get," session "," query_id "," attributes_tab[attr])
         if ("r_object_id" == attributes_tab[attr] && !value) {
            dmShow(dmAPIGet("getmessage," session))
            return 0
         }
         s= s "[" (value ? value : "NULL") "]\t"
      }
      resp_cntr += 1
      dmShow(sprintf("%d: %s", resp_cntr, s))
   }
   dmShow(sprintf("%d rows iterated", resp_cntr))

   if (!dmAPIExec("close," session "," query_id)) {
      dmShow(dmAPIGet("getmessage," session))
      return 0
   }

   return 1
}

function dmDisconnect(session) {
# closes the given session;
# returns 1 if no error, 0 otherwise;
   dmShow("in dmDisconnect()")
   status = dmAPIExec("disconnect," session)
   if (!status)
      dmShow("Exception in dmDisconnect():")
   else
      dmShow("exiting disconnect()")
   return status
}

o  Ditto for the test program tdctm.awk;
$ vi tdctm.awk

# test program for DctmAPI.awk and the interface dctm.c;
# Cesare Cervini
# dbi-services.com
# 5/2018

@include "DctmAPI.awk"

BEGIN {
   dmLogLevel = 1

   status = dmAPIInit()
   printf("dmAPIInit(): %d\n", status)
   if (status)
      print("dmAPIInit() was successful")
   else
      print("dmAPIInit() was not successful")
 
   printf "\n"
   session = dmConnect("dmtest", "dmadmin" , "dmadmin")
   printf("dmConnect: session=%s\n", session)
   if (!session) {
      print("no session opened, exiting ...")
      exit(1)
   }

   printf "\n"
   dump = dmAPIGet("dump," session ",0900c35080008107")
   print("object 0900c35080008107 dumped:\n" dump)

   printf "\n"
   stmt = "update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'"
   status = dmExecute(session, stmt)
   if (status)
      print("dmExecute [" stmt "] was successful")
   else
      print("dmExecute [" stmt "] was not successful")

   printf "\n"
   stmt = "select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document"
   status = dmSelect(session, stmt, "r_object_id object_name owner_name acl_domain acl_name")
   if (status)
      print("dmSelect [" stmt "] was successful")
   else
      print("dmSelect [" stmt "] was not successful")

   printf "\n"
   stmt = "select count(*) from dm_document"
   status = dmSelect(session, stmt,  "count(*)")
   if (status)
      print("dmSelect [" stmt "] was successful")
   else
      print("dmSelect [" stmt "] was not successful")

   printf "\n"
   status = dmDisconnect(session)
   if (status)
      print("successfully disconnected")
   else
      print("error while  disconnecting")

   printf "\n"
   status = dmAPIDeInit()
   if (status)
      print("successfully deInited")
   else
      print("error while  deInited")

   exit(0)
}

o  Let’s test now!
first, set the needed variables for the Documentum environment, if not already done;
$ export DOCUMENTUM=/home/dmadmin/documentum
$ export LD_LIBRARY_PATH=$DOCUMENTUM/product/7.3/bin

o  Then, we need to tell the new gawk where to find the dynamic extension and the awk wrapper;
the environment variables are AWKLIBPATH and AWKPATH respectively;
let’s define those variables on-the-fly for a change:

$ AWKLIBPATH=gawk-4.2.1/extension/.libs AWKPATH=./ gawk-4.2.1/gawk -f ./tdctm.awk

o  Produced output:

dmAPIInit(): 1
dmAPIInit() was successful
 
in connect(), docbase = dmtest, user_name = dmadmin, password = dmadmin
successful session s0
[DM_SESSION_I_SESSION_START]info: "Session 0100c3508000d6d6 started for user dmadmin."
exiting connect()
dmConnect: session=s0
 
object 0900c35080008107 dumped:
USER ATTRIBUTES
 
object_name : 4/10/2018 20:09:28 dm_DBWarning
title : Result of dm_method(dm_DBWarning) with status code (0)
subject : Result of dm_method(dm_DBWarning) with command line: ./dmbasic -f../install/admin/mthd4.ebs -eDBWarning -- -docbase_name dmtest.dmtest -user_name dmadmin -job_id 0800c3508000035f -method_....
authors []:
keywords []:
resolution_label :
owner_name : dmadmin
owner_permit : 7
group_name : docu
group_permit : 5
world_permit : 3
log_entry :
acl_domain : dmadmin
acl_name : dm_4500c35080000101
language_code : FR
 
SYSTEM ATTRIBUTES
 
r_object_type : dm_document
r_creation_date : 4/10/2018 20:09:41
r_modify_date : 5/13/2018 22:09:06
r_modifier : dmadmin
r_access_date : nulldate
r_composite_id []:
r_composite_label []:
r_component_label []:
r_order_no []:
...
 
i_is_replica : F
i_vstamp : 200
 
in dmExecute(), dql_stmt=update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'
dmExecute [update dm_document object set language_code = 'FR' where r_object_id = '0900c35080008107'] was successful
 
in dmSelect(), dql_stmt=select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document
[r_object_id] [object_name] [owner_name] [acl_domain] [acl_name]
1: [0900c350800001d0] [Default Signature Page Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
2: [6700c35080000100] [CSEC Plugin] [dmadmin] [dmadmin] [dm_4500c35080000101]
3: [6700c35080000101] [Snaplock Connector] [dmadmin] [dmadmin] [dm_4500c35080000101]
4: [0900c350800001ff] [Blank Word 2007 / 2010 Document] [dmadmin] [dmadmin] [dm_4500c35080000101]
5: [0900c35080000200] [Blank Word 2007 / 2010 Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
6: [0900c35080000201] [Blank Word 2007 / 2010 Macro-enabled Document] [dmadmin] [dmadmin] [dm_4500c35080000101]
7: [0900c35080000202] [Blank Word 2007 / 2010 Macro-enabled Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
8: [0900c35080000203] [Blank Excel 2007 / 2010 Workbook] [dmadmin] [dmadmin] [dm_4500c35080000101]
9: [0900c35080000204] [Blank Excel 2007 / 2010 Template] [dmadmin] [dmadmin] [dm_4500c35080000101]
10: [0900c350800001da] [11/21/2017 16:31:10 dm_PostUpgradeAction] [dmadmin] [dmadmin] [dm_4500c35080000101]
...
885: [0900c35080005509] [11/30/2017 20:11:05 dm_DataDictionaryPublisher] [dmadmin] [dmadmin] [dm_4500c35080000101]
886: [0900c3508000611d] [12/11/2017 19:40:05 dm_DataDictionaryPublisher] [dmadmin] [dmadmin] [dm_4500c35080000101]
887: [0900c35080006123] [12/11/2017 20:10:08 dm_DataDictionaryPublisher] [dmadmin] [dmadmin] [dm_4500c35080000101]
888: [0900c3508000612d] [12/11/2017 20:30:07 dm_UpdateStats] [dmadmin] [dmadmin] [dm_4500c35080000101]
888 rows iterated
dmSelect [select r_object_id, object_name, owner_name, acl_domain, acl_name from dm_document] was successful
 
in dmSelect(), dql_stmt=select count(*) from dm_document
[count(*)]
1: [888]
1 rows iterated
dmSelect [select count(*) from dm_document] was successful
 
in dmDisconnect()
exiting disconnect()
successfully disconnected
 
successfully deInited

That’s it, enjoy your new dmgawk!

Next steps

Now that the extension has proved usable, it could be interesting to deploy it system-wide. This can be done with the Makefile generated at the root directory of the package, as follows:

$ sudo make install

The loadable extensions are stored in /usr/local/lib/gawk:

$ ll /usr/local/lib/gawk
total 504
-rwxr-xr-x 1 root root 124064 May 13 16:38 filefuncs.so
...
-rwxr-xr-x 1 root root 26920 May 13 16:38 time.so
-rwxr-xr-x 1 root root 31072 May 13 16:38 dctm.so

As this is the standard place to look for them, there is no need to specify AWKLIBPATH anymore.
There is also a standard place for awk utility scripts, /usr/local/share/awk. If DctmAPI.awk is installed there, it becomes available to anybody from anywhere on the system and setting AWKPATH is no longer needed. In addition, if /usr/local/bin comes before the system directories in $PATH, the newly extended gawk is accessible directly, along with its extensions and its ancillary scripts, e.g.:
$ gawk -f dmgawk/tdctm.awk
You may also want to define an alias dmawk pointing to the extended gawk:
$ alias dmawk=/usr/local/bin/gawk
or you may want to name it dmgawk so you still can access the original dmawk:
$ alias dmgawk=/usr/local/bin/gawk

I’m not finished yet!

I started this mini-project on a 32-bit Ubuntu VM with the Documentum v5.3 libdmcl40.so library. Once it proved to work, I replayed all the steps on a 64-bit Ubuntu VM with the Documentum v7.3 binaries. There, as already noticed in the past, the library libdmcl40.so kept crashing the executable that loads it at run-time, here gawk, so I resorted to libdmcl.so, the one that calls Java code behind the scenes through JNI. It is a bit slower at start-up but at least works as expected. It is very likely that the ancient libdmcl40.so will be retired in some future release of the content server, so be prepared to switch.
You may remember that I edited the automake Makefile.am to have the extension compiled, and that I then had to relink the extension manually with the libdmcl.so library. A better way would be to give the extension its own makefile to automate those steps, as sketched below; this way, no original gawk source file would be altered.
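Purely as an illustration, such a standalone makefile could look like the sketch below; all the paths (gawk source tree, Documentum library, target directory) are assumptions to adapt to your system, and the recipe lines must be indented with tabs:

# sketch of a standalone makefile for the dctm extension; paths are examples only;
GAWK_SRC = /home/dmadmin/dmgawk/gawk-4.2.1
DM_LIB   = /home/dmadmin/documentum/product/7.3/bin/libdmcl.so
CFLAGS   = -fPIC -O2 -I$(GAWK_SRC) -I$(GAWK_SRC)/extension

all: dctm.so

dctm.o: dctm.c
	gcc $(CFLAGS) -c dctm.c

dctm.so: dctm.o
	gcc -shared -o dctm.so dctm.o $(DM_LIB)

install: dctm.so
	cp dctm.so /usr/local/lib/gawk

clean:
	rm -f dctm.o dctm.so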

Conclusion, finally

This test program and wrapper are actually the gawk counterparts of the Documentum extension for python I wrote earlier for this blog (check it here). With these 2 extensions, an administrator no longer has to be stuck with a limited and, even worse, stagnant tool. The awk language is easy and well suited to the kind of work administrators do with Documentum. Its gawk implementation is powerful, open, very well maintained and, as shown, easily extensible, so there is no reason not to use it.
For licensing reasons, the interface cannot be distributed by Documentum but anybody can install it privately without restriction.

 

This article Adding a Documentum Extension to gawk, part II appeared first on Blog dbi services.

Adding a Documentum Extension to gawk, part I

Yann Neuhaus - Thu, 2018-05-31 04:41

Recently, I was searching my NAS for some files which, to end this intolerable suspense, I did not find, but on the other hand I did stumble across a couple of interesting dmawk scripts I wrote for a customer more than 20 years ago. Some statements looked a bit odd, e.g. access to elements of multi-dimensional arrays such as “a[i1][i2]”, or “delete A” to empty an array (instead of the well-known awk idiom split("", A)). After a while, I understood what was going on: the dmawk that came with content server v3.x was formerly based on the GNU dialect of awk named gawk. It was already a more powerful interpreter than the standard AT&T awk available on the Unix I was using, namely HP-UX, and nowadays it has become even better, as you can see for yourself by checking the official manual here.

At that time, gawk could already be extended, which Documentum took advantage of by turning it into a DMCL client, dmawk. However, it was quite a tedious task because it required hacking deeply into the source code. Years later, when I was playing with this and trying to add Oracle connectivity to gawk (to turn it into an Oracle OCI-based client, oragawk ;-), I remember for instance one step that required the symbol table to be edited in order to add the new functions, and possibly their implementation code inserted into the bison file; finally, the whole gawk source had to be recompiled and relinked. Tedious yet bearable, as it didn’t prevent passionate people from forking custom versions with useful new functionality such as xml processing in xgawk.

Over the years, starting with v4.1.1, gawk has evolved a new, much easier mechanism for adding extensions (aka plugins). It is named dynamic extension (see the documentation here); it lets one load shared libraries at run-time and invoke their functions from within a gawk script through a minimal interface to be provided; the other way around, i.e. callbacks, is also possible for the brave, or really motivated, ones. Several powerful extensions have been developed through this system, such as: json serialization from/to an associative array (useful e.g. to prepare parameters to pass to javascript functions such as the ones in the HighChart or flot libraries), arbitrary-precision integer and floating-point support, postgresql database access, etc. (for a list of current extensions, see here). If you have a recent gawk compiled with the MPFR extension, try for example the command below:
gawk -M -v prec=16000 'BEGIN{PREC = prec; printf("a big fp number:\n%f\n\n", 3.0**10**4); n = 3**10**4; print "a large integer number:\n" n; print "it has " length(n) " digits"; exit}'
Impressive, isn’t it?
To know whether your local gawk has this option, ask it:
gawk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
Copyright (C) 1989, 1991-2015 Free Software Foundation.

This one has it.

So why hasn’t Documentum kept up with gawk, and gone instead for the standard, lesser awk? Maybe because of a licensing issue? Gawk is protected by the GNU GPL (see its full text in the gawk manual above), which mandates that not only its modified source code but also all extension code statically or dynamically linked with it be provided in source code form and included in the delivered package, and maybe this was disturbing at the time. Now that open source is omnipresent, from O/Ses to RDBMS, from network software to WEB development frameworks, from ERP to CRM software, it is so common that even a restrictive license such as the GPL does not shock anyone today. Linux itself, or GNU/Linux as some like to call it, contains a massive amount of GPL-ed software, linux per se being limited to the kernel, and is under the GPL itself. Thus, in the new extension mechanism, the GPL requires publishing the source code not only of the interface to be linked with gawk but also of the proprietary shared library libdmcl40.so (see the definition of Corresponding Source) that is loaded at run-time; but wait: since libdmcl40.so tightly uses other proprietary Documentum libraries, their source code would also need to be published, and so on, transitively. Obviously, this licensing is by design not favorable to closed source, which is exactly the raison d’être of the FSF (aka the Free Software Foundation). If Documentum chose not to go the GPL way at that time with the simpler EDMS, it is very unlikely that it will in the future, unless it drastically changes its business model and becomes, say, the Red Hat of document management!

Fortunately, there is no reason to hold one’s breath until that day, for I propose here a simple, straightforward interface between gawk and the Documentum libdmcl40.so/libdmcl.so run-time libraries which does not violate the GPL, as nothing closed source is distributed. Its usage is exactly the same as documented in the API reference manual. I’ll show you hereafter how to prepare and use it. Since this article is quite long, I’ll split it into two parts. Part I, which you’re reading, presents the interface and explains how to compile it as a gawk extension. Part II here deals with a wrapper around that interface providing a few helper functions, and a test program that shows how to use the extension.

Installation

o  Check the currently installed version of gawk;

$ gawk --version
GNU Awk 4.1.3, API: 1.1 (GNU MPFR 3.1.4, GNU MP 6.1.0)
Copyright (C) 1989, 1991-2015 Free Software Foundation.

o  This is an old release; let’s take this opportunity to upgrade to the current latest, the 4.2.1;
    prepare a working directory; the name “dmgawk” fits it well;
$ mkdir ~/dmgawk; cd ~/dmgawk

o  Let’s get the latest source code of gawk, it’s the release 4.2.1;
$ wget http://ftp.gnu.org/gnu/gawk/gawk-4.2.1.tar.gz
$ tar -xpvzf gawk-4.2.1.tar.gz
$ cd gawk-4.2.1

o  compile it;
$ ./configure
$ make

o  Check the new version;
$ ./gawk --version
GNU Awk 4.2.1, API: 2.0
Copyright (C) 1989, 1991-2018 Free Software Foundation.

o  Note that the MPFR and GMP extensions were not linked in; that’s because those libraries were not present on my system;
    to have them, just download, compile and install them, and then run gawk’s ./configure and make again;
    fine so far; we’ll extend this local version; whether it will or can be installed system-wide is up to you;
    move to the extensions directory;
    edit the automake Makefile.am and add the dctm references shown below for the new interface, let's call it dctm.c;
$ cd extension
$ vi Makefile.am
pkgextension_LTLIBRARIES = \
filefuncs.la \
...
time.la \
dctm.la
...
time_la_SOURCES = time.c
time_la_LDFLAGS = $(MY_MODULE_FLAGS)
time_la_LIBADD = $(MY_LIBS)
 
dctm_la_SOURCES = dctm.c
dctm_la_LDFLAGS = $(MY_MODULE_FLAGS)
dctm_la_LIBADD = $(MY_LIBS)


o  save and quit; that’s all for the make file;
    let’s edit the interface dctm.c now and insert the code below;
$ vi dctm.c

/*
 * dctm.c - Builtin functions that provide an interface to Documentum dmapp.h;
 * see dmapp.h for description of functions; 
 *
 * Cesare Cervini
 * dbi-services.com
 * 5/2018
 * go to .libs and link dctm.o with the Documentum library with: gcc -o dctm.so -shared dctm.o path-to-the-shared-library/libdmcl40.so;
 */
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#include <sys/types.h>
#include <sys/stat.h>

#include "gawkapi.h"

#include "gettext.h"
#define _(msgid)  gettext(msgid)
#define N_(msgid) msgid

/* make it point to the Documentum dmapp.h on your system */
#include "/home/dmadmin/documentum/share/sdk/include/dmapp.h"

static const gawk_api_t *api;	/* for convenience macros to work */
static awk_ext_id_t ext_id;
static const char *ext_version = "dctm extension: version 1.0";
static awk_bool_t (*init_func)(void) = NULL;

int plugin_is_GPL_compatible;

/*  do_dmAPIInit */
static awk_value_t *
do_dmAPIInit(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   unsigned int ret = 0;

   assert(result != NULL);

   ret = dmAPIInit();
   ret &= 0xff;

   return make_number(ret, result);
}

/*  do_dmAPIDeInit */
static awk_value_t *
do_dmAPIDeInit(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   unsigned int ret = 0;

   assert(result != NULL);

   ret = dmAPIDeInit();
   ret &= 0xff;

   return make_number(ret, result);
}

/*  do_dmAPIExec */
static awk_value_t *
do_dmAPIExec(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str;
   unsigned int ret = 0;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str)) {
      ret = dmAPIExec(str.str_value.str);
      ret &= 0xff;
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPIExec: called with inappropriate argument(s)"));

   return make_number(ret, result);
}

/*  do_dmAPIGet */
static awk_value_t *
do_dmAPIGet(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str;
   char *got_value = NULL;	/* stays NULL if the argument is missing or invalid */

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str)) {
      got_value = dmAPIGet(str.str_value.str);
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPIGet: called with inappropriate argument(s)"));

   make_const_string(got_value == NULL ? "" : got_value, got_value == NULL ? 0 : strlen(got_value), result);
   return result;
}

/*  do_dmAPISet */
static awk_value_t *
do_dmAPISet(int nargs, awk_value_t *result, struct awk_ext_func *unused) {
   awk_value_t str1;
   awk_value_t str2;
   unsigned int ret = 0;

   assert(result != NULL);

   if (get_argument(0, AWK_STRING, & str1) && get_argument(1, AWK_STRING, & str2)) {
      ret = dmAPISet(str1.str_value.str, str2.str_value.str);
      ret &= 0xff;
   } else if (do_lint)
      lintwarn(ext_id, _("dmAPISet: called with inappropriate argument(s)"));

   return make_number(ret, result);
}

/* these are the exported functions along with their min and max arities; */
static awk_ext_func_t func_table[] = {
	{ "dmAPIInit",   do_dmAPIInit, 0, 0, awk_false, NULL },
	{ "dmAPIDeInit", do_dmAPIDeInit, 0, 0, awk_false, NULL },
	{ "dmAPIExec",   do_dmAPIExec, 1, 1, awk_false, NULL },
	{ "dmAPIGet",    do_dmAPIGet, 1, 1, awk_false, NULL },
	{ "dmAPISet",    do_dmAPISet, 2, 2, awk_false, NULL },
};

/* define the dl_load function using the boilerplate macro */

dl_load_func(func_table, dctm, "")

o  Again, run configure and build the extensions;
$ pwd
/home/dmadmin/dmgawk/gawk-4.2.1/extension
$ ./configure
$ make

o  The extensions’ object files and shared libraries are actually contained in the hidden .libs directory below extension; so move there and check the generated dctm.so library;

$ cd .libs
$ ldd dctm.so
linux-vdso.so.1 => (0x00007ffc1d7e5000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3452930000)
/lib64/ld-linux-x86-64.so.2 (0x00007f3452efd000)

o  The library still has no reference to libdmcl.so; let's link it with the Documentum library and check again;
    find that Documentum library on your system; on mine, it’s in /home/dmadmin/documentum/product/7.3/bin;
$ gcc -o dctm.so -shared dctm.o /home/dmadmin/documentum/product/7.3/bin/libdmcl.so
$ ldd dctm.so
linux-vdso.so.1 => (0x00007fff55dbf000)
/home/dmadmin/documentum/product/7.3/bin/libdmcl.so (0x00007fece91bb000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fece8df1000)
libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fece8bb9000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fece899c000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fece861a000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fece8311000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fece80fb000)
/lib64/ld-linux-x86-64.so.2 (0x00007fece95ca000)

Good, so far dctm.so has been linked with libdmcl.so and is thus able to access the dmAPI*() functions. gawk lets us load dynamic extensions in two ways:
1. the statement @load "name_of_extension" inserted in the client awk script;
2. the -l | --load "name_of_extension" command-line options.
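For instance, assuming dctm.so is reachable through AWKLIBPATH (or has been copied into the default extension directory) and that $DOCUMENTUM and $LD_LIBRARY_PATH are set as shown in part II, the one-liner below (given only as an illustration) loads the extension from the command line and should print 1 if the API initializes successfully:

$ gawk -l dctm 'BEGIN {print dmAPIInit()}'

The equivalent in-script form is the @load directive used by the wrapper in part II:

@load "dctm"
BEGIN {print dmAPIInit()}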
Please, turn now to part II here for the rest of this article.

 

This article Adding a Documentum Extension to gawk, part I appeared first on Blog dbi services.
