Feed aggregator

Oracle Industry Connect 2018 Draws Top Business Leaders to Share Insights and Strategies for Digital Transformation

Oracle Press Releases - Mon, 2018-04-02 07:00
Press Release
Oracle Industry Connect 2018 Draws Top Business Leaders to Share Insights and Strategies for Digital Transformation

Redwood Shores, Calif.—Apr 2, 2018

Oracle today announced its fifth annual Oracle Industry Connect conference. The event brings together business leaders and experts from across industries to share deep sector expertise and strategies for innovation and organizational transformation. In addition to leading customer executives, the conference will feature keynotes from Oracle CEO Mark Hurd and Bob Weiler, executive vice president of Oracle’s Global Business Units. The event takes place April 10-11, 2018 at the Hilton Midtown Manhattan Hotel in New York City.

“Oracle delivers the widest range of industry-specific applications in the cloud,” said Oracle CEO Mark Hurd. “From creating smarter construction sites with IoT, to empowering retailers to deliver better customer experiences with inventory management, our unmatched industry expertise and solutions are driving our customers’ success.”

At Oracle Industry Connect, industry peers come together to share deep domain expertise and best practices on the mission-critical applications that are core to their businesses and their success. Ninety percent of the breakout session content consists of Oracle customers providing insights on identifying and overcoming complex industry challenges.

The keynotes will be followed by seven industry-specific tracks for business leaders in communications, construction and engineering, energy and utilities, financial services and insurance, hospitality, life sciences and healthcare, and retail. Speakers include:

  • Amy Brady, Chief Information Officer, Executive Vice President, KeyBank
  • Connie Santilli, Vice President Enterprise Systems and Strategy, GAP
  • David Sipes, Chief Operations Officer, RingCentral
  • David Wilson, Chief Innovation Officer, Bechtel Corporation
  • Michael Webber, Deputy Director of the Energy Institute, Co-Director of the Clean Energy Incubator
  • Ray Bennett, Chief Global Officer, Global Operations, Marriott International
  • Sandy Tremps, Vice President, MRL IT, Merck
  • William Pratt, Chief Information Officer, SiriusXM Radio Inc.

“Cloud computing is the foundation of how businesses can truly transform themselves,” said Bob Weiler, executive vice president, Oracle’s Global Business Units. “Smart companies will leverage powerful industry-specific solutions to innovate, delight their customers and drive new revenue streams.”

For more information about how Oracle is committed to empowering organizations through best-in-class, industry-specific business solutions, visit oracle.com/industries. To learn more about Oracle Industry Connect 2018, go to oracle.com/oracleindustryconnect.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Oracle Named a Leader in the 2018 Gartner Magic Quadrant for Identity Governance and Administration

Oracle Press Releases - Mon, 2018-04-02 07:00
Press Release
Oracle Named a Leader in the 2018 Gartner Magic Quadrant for Identity Governance and Administration
Oracle positioned as a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Apr 2, 2018

Oracle today announced that it has been named a Leader in Gartner’s 2018 “Magic Quadrant for Identity Governance and Administration” report for the fifth consecutive time. Oracle believes this recognition further validates the strength and innovation of cloud security services Oracle has introduced over the past year and its ability to help enterprises better integrate security solutions to manage their business.

“Security and Identity has quickly become one of the most critical areas businesses must address in order to be successful and maintain regular operations, and identity governance is a critical foundational step any enterprise should take to strengthen its security posture,” said Eric Olden, senior vice president and general manager, security and identity, Oracle. “Over the last year, Oracle has significantly enhanced its solutions’ capabilities to help enterprises manage, analyze and remediate security incidents with Oracle’s autonomous security capabilities. We are continuing our commitment to offering a trusted identity fabric and portfolio to help enterprises secure their businesses.”

According to Gartner, “IGA Leaders deliver a comprehensive toolset for governance and administration of identity and access. These vendors have successfully built a significant installed customer base and revenue stream, and have high viability ratings and robust revenue growth. Leaders also show evidence of superior vision and execution for anticipated requirements related to technology, methodology or means of delivery. Leaders typically demonstrate customer satisfaction with IGA capabilities and/or related service and support.”

In December, Oracle announced the first cloud-native identity governance service for hybrid cloud environments, which will be fully integrated and native to Oracle’s SaaS applications, Oracle Identity Security Operations Center (Identity SOC) portfolio (including Oracle Identity Cloud Service and Oracle CASB Cloud Service), as well as Oracle Management Cloud. In addition, Oracle expanded its consumer identity management capabilities in Oracle Identity Cloud Service through integrations with Oracle Marketing Cloud and Oracle Data Cloud.

Oracle’s integrated security suite of the Oracle Identity SOC portfolio and Oracle Management Cloud are designed to help enterprises forecast, reduce, detect, and resolve security threats and assist in efforts to remediate application and infrastructure performance issues.  Leveraging artificial intelligence to analyze a unified data set consisting of the full breadth of security and operational telemetry, as well as provide automated remediation, Oracle’s integrated suite is designed to enable customers to quickly adapt their security and operational posture as their risk landscape changes. This application of machine learning can potentially help thwart attacks, reduce the detection window from months to minutes, and more quickly address security breaches and performance outages.

Download a complimentary copy of Gartner’s 2018 “Magic Quadrant for Identity Governance and Administration” here.

Source: Gartner, “Magic Quadrant for Identity Governance and Administration,” Felix Gaehtgens, Kevin Kampman, Brian Iverson, 21 February 2018.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Contact Info
Jesse Caputo
Oracle
+1.650.506.5967
jesse.caputo@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Jesse Caputo

  • +1.650.506.5967

Kristin Reeves

  • +1.415.856.5145

Adding Native Pivot Charts and Tables to your Excel Reports!!

Tim Dexter - Sun, 2018-04-01 23:59

A report in Excel format is a very common requirement, and BI Publisher can generate Excel output using an RTF, XSL, or Excel template. The Excel template is recommended when the requirement is to create pixel-perfect column widths, use built-in Excel functions, create multi-sheet output, handle leading zeroes in data, maintain data formatting, or manage a high number of data columns.

How about adding native charts and pivot tables to the Excel report? Well, Excel templates can handle that too.

There is no wizard in the Excel Template Builder to create charts or pivot tables, but you can certainly include Excel pivot charts and pivot tables in your report using native MS Excel features. Here is a step-by-step guide:

 

Step 1: Create Excel Template to build data for Pivot Chart & Pivot Table

Use the Excel Template Builder to create the Excel template.

Load sample XML data (an illustrative snippet is shown below) and add data column header names.
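
For illustration only, the sample XML data for this example could look like the snippet below. The element names are hypothetical and simply mirror the fields used later in this post; use whatever structure your own data model actually produces.

<ROWSET>
 <ROW>
  <PRODUCT_TYPE>Hardware</PRODUCT_TYPE>
  <PRODUCT>Keyboard</PRODUCT>
  <LOB>Accessories</LOB>
  <BRAND>ExampleBrand</BRAND>
  <REVENUE>1250.50</REVENUE>
 </ROW>
 <ROW>
  <PRODUCT_TYPE>Software</PRODUCT_TYPE>
  <PRODUCT>Reporting Suite</PRODUCT>
  <LOB>Analytics</LOB>
  <BRAND>ExampleBrand</BRAND>
  <REVENUE>4800.00</REVENUE>
 </ROW>
</ROWSET>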

Use the "Insert Field" option from the BI Publisher ribbon menu and create data placeholders as shown below.

You will see an interim dialog box from the Template Builder informing you that a metadata sheet will be created. Click OK.

 

Add looping over the data using Insert Repeating Group. Select the "For Each" entry at the repeating node level.

 

Preview the output. This will bring all records into the Excel sheet in a separate .xls output file.

 

Step 2: Create Pivot Chart & Pivot Table

You can close the output .xls file and stay in the Excel template. Now select all the data columns to be used in the pivot chart and table. You can click on the column headers and select the entire columns to be included, or you can just select the table with the column headers and the single row of data placeholders. From the Excel Insert menu, select the PivotChart & PivotTable option.

 

In the "Create PivotTable" dialog box, keep the option "Select a table or range" selected and leave the Table/Range that appears by default based on your selection.

You can choose to create the pivot chart and pivot table in a new worksheet (recommended). Click OK.

This will add a new sheet to the Excel file and insert a pivot table and chart placeholder, with the PivotTable fields on the right panel.

Here you can select the fields for the pivot table and chart, to be used as Axis, Legend and Values. In this example we have included Product Type, Product, LOB and Brand as Axis and Revenue as Values.

Please note that by default the function selected under Values is Count. Select the drop-down next to the Count function and choose Value Field Settings, where you can change this to the Sum function.

 

 

One more thing to note is the presence of field buttons in the chart. You can hide them: with the pivot chart selected, go to the Analyze menu in the ribbon and, under the Show/Hide section, choose "Hide all Field Buttons".

Finally, the template will look like this:

 

Step 3: Include dynamic data generated by BI Publisher for Pivot Chart & Pivot Table

Right-click on the pivot chart, select PivotChart Options, then select the Data tab. Here, select the option "Refresh data when opening the file". This will bring the data dynamically into the pivot chart and pivot table.

 

 

You can preview the Excel output and you will see the pivot table and chart displaying the dynamic data.

You will notice blank data appearing in the pivot table and chart. This is due to the way the looping works against the dynamic data. You can hide this blank data by filtering it out on the parent field in the pivot table of the output Excel file. In this example, we will remove the blank data from the Product field, and the complete blank section will be removed without affecting the rest of the data. To do this, just hover over Product in the right-side pane under PivotChart Fields and click on the down arrow. This opens the filter options for the Product field; uncheck the blank value in the filter list.

 

This completes the template design, and the final output will look as shown below:

You can further include Excel functions and formulas within these pivot tables and charts as necessary for your requirements. You can even change the chart type, style, etc. to create the most appropriate visual representation of the data. You can upload the Excel template to the BI Publisher server and run it against live data, and you can include as many sheets with different pivot charts and tables as required for your report.

Also note that an Excel template can be run against any data source type in the BI Publisher data model. Therefore, you can use a BI Analysis or even run a BI JDBC SQL query against the RPD layer and bring complex calculations and aggregations in as part of your data.

 

Hope this was helpful. If you want to check the sample template and data, download it from here.

Have a great day !!

Categories: BI & Warehousing

New functions not yet documented - TO_DOG_YEAR

Tom Kyte - Sun, 2018-04-01 15:26
Over the past few months I've been working on a project using oracle advanced analytics to detect IBS in Dogs. I've written a short blog post about this here (http://www.oralytics.com/2018/04/predicting-ibs-in-dogs-using-oracle-18c.html) I've noti...
Categories: DBA Blogs

PgBackRest : Dedicated Backup Host

Yann Neuhaus - Sun, 2018-04-01 11:59

In previous blogs (blog1 and blog2) we saw a configuration of PgBackRest where the tool is installed on the same server that hosts the PostgreSQL cluster. This configuration is fine if we have a single database server, but if we have many database servers it is more suitable to have a dedicated backup server. This also separates the backups and WAL archives from the database servers.
In this article we will see how to configure PgBackRest in an environment with two database servers.
We present below the configuration we will use. We suppose that PgBackRest is already installed on all three servers.
pgservertools: dedicated backup host
pgserver1: database server
pgserver2: database server
[Diagram: pgservertools as dedicated backup host for pgserver1 and pgserver2]
As we can see, we have two clusters running on each database server (pgserver1 and pgserver2). Note that we can have as many clusters as we want on each server.
pgserver1 : cluster1

postgres=# show data_directory ;
  data_directory
-------------------
 /u90/pgdata/pg101
(1 row)

postgres=# show port;
 port
------
 5432
(1 row)

postgres=#

pgserver1 : cluster2

postgres=# show data_directory ;
   data_directory
--------------------
 /u90/pgdata2/pg101
(1 row)

postgres=# show port;
 port
------
 5433
(1 row)

postgres=#

pgserver2 : cluster1

postgres=# show data_directory ;
  data_directory
-------------------
 /u90/pgdata/pg101
(1 row)

postgres=# show port;
 port
------
 5432
(1 row)

postgres=#

pgserver2 : cluster2

postgres=# show data_directory ;
   data_directory
--------------------
 /u90/pgdata2/pg101
(1 row)

postgres=# show port;
 port
------
 5433
(1 row)

postgres=#

Now let’s create a dedicated user on pgservertools who will own the repository. It is not recommended to use the postgres user; we will use backupowner.

[root@pgservertools ~]# useradd -m backupowner

As PgBackRest requires passwordless communication between the hosts, we have to configure SSH keys between the backupowner user (on pgservertools) and the postgres users (on pgserver1 and pgserver2).
On pgservertools, let’s generate keys for the backupowner user:

[backupowner@pgservertools ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/backupowner/.ssh/id_rsa): y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in y.
Your public key has been saved in y.pub.
The key fingerprint is:
SHA256:drIaCe1aMZSPOmQkNdYfn5WLm/iE4sJS2YKGj6EF38M backupowner@pgservertools.localdomain
The key's randomart image is:
+---[RSA 2048]----+
|    +.       .   |
|   o ....   o    |
|  . . o. o + .   |
|.  o o o. + .    |
| + ++o= So.o     |
|o =oE+o=o++      |
|.* oo+=..o       |
|o o o+.o  .      |
|   ....          |
+----[SHA256]-----+
[backupowner@pgservertools ~]$

Do the same on pgserver1 for the postgres user:

[postgres@pgserver1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/postgres/.ssh/id_rsa):
Created directory '/home/postgres/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/postgres/.ssh/id_rsa.
Your public key has been saved in /home/postgres/.ssh/id_rsa.pub.
The key fingerprint is:
15:cf:78:47:ef:e5:ab:29:b7:25:59:03:de:de:88:be postgres@pgserver1.localdomain
The key's randomart image is:
+--[ RSA 2048]----+
|          .   .  |
|           = . . |
|          o +.. o|
|         . ...oo.|
|        S    . +o|
|             .+.+|
|            .o.+.|
|           o .=  |
|            E=.  |
+-----------------+
[postgres@pgserver1 ~]$

And on pgserver2 for the postgres user:

[postgres@pgserver2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/postgres/.ssh/id_rsa):
Created directory '/home/postgres/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/postgres/.ssh/id_rsa.
Your public key has been saved in /home/postgres/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ZKov+TdohBLvaO/pSYIl+Tk6iWfUMx1Lqixb8ByaZzI postgres@pgserver2.localdomain
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|        o        |
| ..   o+         |
|+ o+ =.oS        |
| @ooB.+          |
|E+@=o= .         |
|=BB+++o o        |
|oB +*+o. .       |
+----[SHA256]-----+
[postgres@pgserver2 ~]$

Now let’s exchange keys between servers.

[backupowner@pgservertools .ssh]$ ssh-copy-id postgres@pgserver1
[backupowner@pgservertools .ssh]$ ssh-copy-id postgres@pgserver2
[postgres@pgserver1 .ssh]$ ssh-copy-id backupowner@pgservertools
[postgres@pgserver2 .ssh]$ ssh-copy-id backupowner@pgservertools

And then let’s test the connections:

[backupowner@pgservertools ~]$ ssh postgres@pgserver1 date
Tue Feb 20 11:42:06 CET 2018
[backupowner@pgservertools ~]$ ssh postgres@pgserver2 date
Tue Feb 20 11:42:10 CET 2018
 [postgres@pgserver1 .ssh]$ ssh backupowner@pgservertools date
Tue Feb 20 11:42:54 CET 2018
[postgres@pgserver2 .ssh]$ ssh backupowner@pgservertools date
Tue Feb 20 11:43:23 CET 2018

OK, now that everything is fine for trusted connections, let’s configure the pgbackrest.conf files. We present below the contents of our files on the three servers. Notice that encryption is used (see previous blogs).

pgservertools

[root@pgservertools ~]# cat /etc/pgbackrest.conf
[pgserver1pgdata]
db1-host=pgserver1
db1-path=/u90/pgdata/pg101
db1-user=postgres

[pgserver1pgdata2]
db1-host=pgserver1
db1-path=/u90/pgdata2/pg101
db1-user=postgres
db1-port=5433
[pgserver2pgdata]
db1-host=pgserver2
db1-path=/u90/pgdata/pg101
db1-user=postgres

[pgserver2pgdata2]
db1-host=pgserver2
db1-path=/u90/pgdata2/pg101
db1-user=postgres
db1-port=5433

[global]
repo-path=/u90/backup
retention-full=2
repo-cipher-pass=dkN28km/CltmsbzkDdKahmwXctr0GJd/9F8tegBXBWASULhVatNXauMMKWUslax1
repo-cipher-type=aes-256-cbc
[root@pgservertools etc]#

pgserver1

[postgres@pgserver1 ~]$ cat /etc/pgbackrest.conf
[pgserver1pgdata]
db1-path=/u90/pgdata/pg101
db1-socket-path=/tmp

[pgserver1pgdata2]
db1-path=/u90/pgdata2/pg101
db1-port=5433
db1-socket-path=/tmp

[global]
backup-host=pgservertools
backup-user=backupowner
log-level-file=detail
[postgres@pgserver1 ~]$

pgserver2

[root@pgserver2 postgres]# cat /etc/pgbackrest.conf
[pgserver2pgdata]
db1-path=/u90/pgdata/pg101
db1-socket-path=/tmp

[pgserver2pgdata2]
db1-path=/u90/pgdata2/pg101
db1-port=5433
db1-socket-path=/tmp

[global]
backup-host=pgservertools
backup-user=backupowner
log-level-file=detail
[root@pgserver2 postgres]#
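
Note that, as in the previous blogs, each cluster also has to ship its WAL to the repository through its own stanza. A minimal sketch of the corresponding postgresql.conf settings (to adapt per cluster; the stanza name must match the one defined on the backup host) could be:

# postgresql.conf of cluster1 on pgserver1 (adapt the stanza name for the other clusters)
archive_mode = on
archive_command = 'pgbackrest --stanza=pgserver1pgdata archive-push %p'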

The next step is to create the stanzas
pgserver1: 2 stanzas pgserver1pgdata and pgserver1pgdata2

[backupowner@pgservertools ~]$  pgbackrest --stanza=pgserver1pgdata --log-level-console=detail --db-port=5432 --db-host=pgserver1 stanza-create
2018-02-21 15:21:42.815 P00   INFO: stanza-create command begin 1.28: --db1-host=pgserver1 --db1-path=/u90/pgdata/pg101 --db1-port=5432 --db1-user=postgres --log-level-console=detail --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/u90/backup --stanza=pgserver1pgdata
2018-02-21 15:21:46.881 P00   INFO: stanza-create command end: completed successfully

[backupowner@pgservertools ~]$  pgbackrest --stanza=pgserver1pgdata2 --log-level-console=detail --db-port=5433 --db-host=pgserver1 stanza-create
2018-02-21 15:23:39.116 P00   INFO: stanza-create command begin 1.28: --db1-host=pgserver1 --db1-path=/u90/pgdata2/pg101 --db1-port=5433 --db1-user=postgres --log-level-console=detail --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/u90/backup --stanza=pgserver1pgdata2
2018-02-21 15:23:41.360 P00   INFO: stanza-create command end: completed successfully
[backupowner@pgservertools ~]$

pgserver2: 2 stanzas pgserver2pgdata and pgserver2pgdata2

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver2pgdata --log-level-console=detail --db-port=5432 --db-host=pgserver2 stanza-create
2018-02-27 13:22:47.710 P00   INFO: stanza-create command begin 1.28: --db1-host=pgserver2 --db1-path=/u90/pgdata/pg101 --db1-port=5432 --db1-user=postgres --log-level-console=detail --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/u90/backup --stanza=pgserver2pgdata
2018-02-27 13:22:49.624 P00   INFO: stanza-create command end: completed successfully

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver2pgdata2 --log-level-console=detail --db-port=5433 --db-host=pgserver2 stanza-create
2018-02-27 13:23:01.323 P00   INFO: stanza-create command begin 1.28: --db1-host=pgserver2 --db1-path=/u90/pgdata2/pg101 --db1-port=5433 --db1-user=postgres --log-level-console=detail --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/u90/backup --stanza=pgserver2pgdata2
2018-02-27 13:23:03.233 P00   INFO: stanza-create command end: completed successfully
[backupowner@pgservertools pgserver1pgdata]$

And now we can do a backup of any of our clusters using the corresponding stanza:

pgserver1pgdata

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver1pgdata --log-level-console=detail --db-port=5432 --db-host=pgserver1 backup

pgserver1pgdata2

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver1pgdata2 --log-level-console=detail --db-port=5433 --db-host=pgserver1 backup

pgserver2pgdata

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver2pgdata --log-level-console=detail --db-port=5432 --db-host=pgserver2 backup

pgserver2pgdata2

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver2pgdata2 --log-level-console=detail --db-port=5433 --db-host=pgserver2 backup
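
In practice, these backups would be scheduled from the backup host, for example with cron under the backupowner user. The schedule and backup types below are only an assumed example to adapt to your retention policy:

# crontab of backupowner on pgservertools (example for the pgserver1pgdata stanza)
# full backup on Sunday, incremental backups the other days
30 01 * * 0 pgbackrest --stanza=pgserver1pgdata --type=full --db-host=pgserver1 --db-port=5432 backup
30 01 * * 1-6 pgbackrest --stanza=pgserver1pgdata --type=incr --db-host=pgserver1 --db-port=5432 backup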

An example of getting information about the backups:

[backupowner@pgservertools pgserver1pgdata]$ pgbackrest --stanza=pgserver1pgdata --log-level-console=detail  info
stanza: pgserver1pgdata
    status: ok

    db (current)
        wal archive min/max (10-1): 000000010000000000000018 / 000000010000000000000018

        full backup: 20180221-155755F
            timestamp start/stop: 2018-02-27 11:45:51 / 2018-02-27 11:46:18
            wal start/stop: 000000010000000000000018 / 000000010000000000000018
            database size: 30MB, backup size: 30MB
            repository size: 3.5MB, repository backup size: 3.5MB
[backupowner@pgservertools pgserver1pgdata]$
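
For completeness, a restore is not run from the backup host but on the database server itself, with the cluster stopped and an empty data directory (or with the --delta option). A minimal sketch for the first cluster of pgserver1 could be:

[postgres@pgserver1 ~]$ pgbackrest --stanza=pgserver1pgdata --log-level-console=detail restore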

Conclusion
In this blog we have seen how PgBackRest can be used in an environment with multiple database servers.

 

The post PgBackRest : Dedicated Backup Host appeared first on Blog dbi services.

After IoT, IoP makes its way to the database

Yann Neuhaus - Sat, 2018-03-31 21:00

At each new Oracle version, I like to check what’s new, not only from the documentation, but also from the exposed internals. I look at (and sometimes diff) catalog view definitions, undocumented parameters, and even the new C functions in the libraries. At the last Oak Table World, I was intrigued by this V$SQLFN_METADATA view explained by Vit Spinka when digging into the internals of how execution plans are stored. This view has entries for all SQL functions, and a VERSION column going from ‘V6 Oracle’ to ‘V11R1 Oracle’. The latest functions have an ‘INVALID’ entry, and we can also see some functions with ‘SQL/DS’. Well, now that we have Oracle 18c on the Oracle Cloud, I came back to this view to see if anything is new, listing the highest FUNC_ID at the top, and the first row caught my attention:


SQL> select * from V$SQLFN_METADATA order by 1 desc fetch first 10 rows only;
 
FUNC_ID NAME MINARGS MAXARGS DATATYPE VERSION ANALYTIC AGGREGATE OFFLOADABLE DISP_TYPE USAGE DESCR CON_ID
------- ---- ------- ------- -------- ---------- -------- --------- ----------- --------- ----- ----- ------
1148 TO_DOG_YEAR 1 4 NUMERIC V13 Oracle NO NO YES NORMAL TO_DOG_YEAR 0
1147 JSON_MERGEPATCH 4 0 UNKNOWN INVALID NO NO NO NORMAL JSON_MERGEPATCH 0
1146 JSON_PATCH 4 0 UNKNOWN INVALID NO NO NO NORMAL JSON_PATCH 0
1145 ROUND_TIES_TO_EVEN 1 2 NUMERIC INVALID NO NO YES NORMAL ROUND_TIES_TO_EVEN 0
1144 CON_ID_TO_CON_NAME 1 0 UNKNOWN INVALID NO NO NO NORMAL CON_ID_TO_CON_NAME 0
1143 TIMESTAMP_TO_NUMBER 1 1 UNKNOWN INVALID NO NO YES NORMAL TIMESTAMP_TO_NUMBER 0
1142 TO_UTC_TIMESTAMP_TZ 1 0 UNKNOWN INVALID NO NO YES NORMAL TO_UTC_TIMESTAMP_TZ 0
1141 OPTSYSAPPROXRANK 1 0 UNKNOWN INVALID NO NO NO NORMAL Internal evaluation function for multiple approx_rank's 0
1140 APPROX_RANK 1 1 NUMERIC INVALID NO YES NO NORMAL APPROX_RANK 0
1139 APPROX_SUM 1 2 NUMERIC INVALID NO YES NO NORMAL APPROX_SUM 0

Because those functions are SQL functions, I searched for this ‘TO_DOG_YEAR’ on Google to see whether a new ANSI SQL function had been implemented. But I finally came upon something I didn’t expect: a Dog Years Calculator. The trends in databases are really going crazy these days. All focus is on developers. XML, JSON, Docker… and now a function to calculate your age in dog years.
But after all, it makes sense. IoT (not ‘Index Organized Table’ but ‘Internet of Things’) is coming, with sensors everywhere. And it is not only about ‘things’; it extends to living beings. I have recently read about the ‘Internet of Pets’, where collars equipped with sensors detect where your pet goes and when it is hungry.

Let’s test it. Tomorrow, my elder kid has his 13th birthday. Now Oracle can tell me that he will be 65 in dog years:

SQL> select to_dog_year(date'2005-04-02') from dual;
 
TO_DOG_YEAR(DATE'2005-04-02')
-----------------------------
65

Yes, here I learn that the calculation is a bit more complex than just multiplying by 7. Of course, adding a standard SQL function would not make sense if it were just a multiplication.

But it seems to be even more complex. I searched for the C functions behind this one:

[oracle@CLOUD18C ~]$ nm /u01/app/oracle/product/18.0.0/dbhome_1/bin/oracle | grep -iE "dog.*year"
000000001452e073 r KNCLG_TODOGYEAR
0000000003ffcf40 T LdiJDaysDogYear
000000000f3170c0 T LdiJulianDogYear
000000000f316fc0 T LdiJulianDogYeararr
000000000f3170f0 t LdiJulianDogYeari
000000000f606e10 T OCIPConvertDateDogYearTime
000000000ebf2380 t qerxjConvertDogYearTime
0000000010de19e0 t qjsngConvStructDogYear
0000000010de0320 T qjsngNumberDogYearDty
0000000010de06f0 T sageStringDogYearDty
0000000010de7110 T sageplsDogYear
000000000bc5cd80 t sagerwAddDogYearTime
0000000010bad3c0 T qmxtgrConvSaxDogYear
0000000010bad400 T qmxtgrConvSaxDogYear_internal
00000000025ae090 T qosDateTimeDogYear
0000000004f22b60 T xsCHDogYeartime
000000000438c230 T nlsBreedDogYear
000000000438bb50 t nlsBreedDogYearCmn
000000000438c060 T nlsBreedDogYearTime
000000000438bc50 T nlsBreedDogYear
00000000044d1da0 T xvopAddDTDurDogYear
00000000044d1ac0 T xvopAddYMDurDogYear

Those ‘nlsBreed’ functions ring a bell, so I checked whether there are new values in V$NLS_VALID_VALUES:

SQL> select distinct parameter from V$NLS_VALID_VALUES;
 
PARAMETER
----------------------------------------------------------------
TERRITORY
CHARACTERSET
BREED
LANGUAGE
SORT

That ‘BREED’ is a new one, with a lot of interesting values:

[Screenshot: the BREED values listed in V$NLS_VALID_VALUES]

And here is my example using this new NLS parameter.

SQL> select to_dog_year(date'2005-04-02','','NLS_BREED=Saint Bernard') from dual;
 
TO_DOG_YEAR(DATE'2005-04-02',
----------------------------
96

Note that I’ve no idea about the second parameter; I had to put a null for it to be able to mention the NLS one, or I got ‘ORA-00909: invalid number of arguments’.

I have to say that, for a DBA focused on the core database functions, it is hard to understand that new features go to things like this TO_DOG_YEAR function. But realistically, it is clear that the budget for new features goes in a new direction: all for developers, big data, IoT… Of course we can write such functions in PL/SQL, or maybe one day in JavaScript thanks to the Multi-Lingual Engine currently in beta. But IoT is also about performance, and a built-in function avoids context switches.

 

The post After IoT, IoP makes its way to the database appeared first on Blog dbi services.

sqlplus and its column output

Yann Neuhaus - Sat, 2018-03-31 20:12

During tuning or normal DBA activities, one of the most annoying things is sqlplus with its default column output when running queries. That is, even after setting a linesize of 1000 and a pagesize of 1000, the output may look as follows in your preferred terminal emulator (PuTTY here):

[Screenshot: sqlplus output wrapping in a PuTTY window]

Most people address this by using other tools like SQL Developer (or sqlcl with sqlformat set to ansiconsole). However, a lot of people still use sqlplus.

How can you address the output format issue with sqlplus?
Actually, you can of course format columns and specify only the columns you would like to see (instead of “select *”):


SQL> select schemaname, osuser from v$session where sid=(select sid from v$mystat where rownum=1);
 
SCHEMANAME OSUSER
-------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------
CBLEILE oracle
 
SQL> column schemaname format a32
SQL> column osuser format a32
SQL> select schemaname, osuser from v$session where sid=(select sid from v$mystat where rownum=1);
 
SCHEMANAME OSUSER
-------------------------------- --------------------------------
CBLEILE oracle

But that’s annoying if you have to do it for a couple of columns for every SQL statement you interactively type in.
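
A small improvement is to put the settings you always need into a login.sql (or glogin.sql) script that sqlplus runs at startup. A minimal sketch (the column list is just an example to extend with your own favourites):

-- login.sql, picked up by sqlplus from the current directory or SQLPATH
set linesize 200
set pagesize 100
set trimspool on
column schemaname format a32
column osuser format a32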

A better alternative is to use a terminal emulator that allows horizontal scrolling. Tanel Poder describes it in his “AOT Preparation Session” on YouTube: https://www.youtube.com/watch?v=tC81PMO7ODw.
Interestingly, horizontal scrolling has been available on Windows for ages: using cmd.exe or PowerShell you can define a “Screen Buffer Size” for the “Width” of the window (in Properties -> Layout). It is best to set it under “Defaults” so that it is available every time you open a cmd.exe or PowerShell window.

But what can you use on, e.g., Linux? Most terminal emulation software does not allow horizontal scrolling; actually, there are only very few terminal emulation products available that allow it. One of the freely available products is Terminator (https://github.com/software-jessies-org/jessies/wiki/Terminator). After downloading the rpm on my Oracle Enterprise Linux server, I installed it as follows:


[root@localhost terminator]# rpm -i org.jessies.terminator.x86_64.rpm
error: Failed dependencies:
/usr/bin/ruby is needed by org.jessies.terminator-27.171.7083-2.x86_64
[root@localhost terminator]# yum install ruby
[root@localhost terminator]# rpm -i org.jessies.terminator.x86_64.rpm
[root@localhost terminator]# terminator

The prerequisites for Terminator are an installed Java Runtime Environment (JRE; you may use the one provided within your ORACLE_HOME) and Ruby (as you can see above).
I started Terminator once as root to install the necessary global terminfo. After that I can use it as, e.g., the oracle user:


[oracle@localhost ~]$ terminator &

There are no wrapped lines due to the terminal size anymore:

[Screenshot: Terminator showing the sqlplus output without wrapped lines]

And you can scroll to the right:

[Screenshot: the same Terminator window scrolled to the right]

Running sqlcl with sqlformat set to ansiconsole, the Terminator terminal emulator also helps for queries with many columns that do not fit on the screen:

[Screenshot: sqlcl with sqlformat ansiconsole in Terminator]

Scrolled to the right:

[Screenshot: the sqlcl output scrolled to the right]

Besides the formatting issue, it is of course recommended to use the utility rlwrap (readline wrapper) with sqlplus on Linux for command line history (recall previous command lines with the up and down keys) and easy command line editing. Actually, rlwrap is much better than the history available in sqlplus 12.2. I do recommend watching the YouTube video from Tanel Poder mentioned above, because he also shows that rlwrap can be configured to provide command completion (for Oracle PL/SQL objects and reserved words) as well.
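
For example, a simple alias in the oracle user's profile is enough to get history and line editing; the keyword file in the second variant is a hypothetical example you would build yourself:

# ~/.bashrc of the oracle user: wrap sqlplus with rlwrap for history and line editing
alias sqlplus='rlwrap -i sqlplus'
# optional: case-insensitive tab completion from a word list (PL/SQL objects, reserved words, ...)
# alias sqlplus='rlwrap -i -f ~/sqlplus_keywords.txt sqlplus'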

When you install the dbi services DMK Management Kit (http://www.dbi-services.com/dmk) on Linux, rlwrap is of course included.

So sqlcl is cool, but sqlplus also still has its right to exist ;-)

 

The post sqlplus and its column output appeared first on Blog dbi services.

BIG NEWS: I’ve Been Selected On “Journey To Mars” Program !!

Richard Foote - Sat, 2018-03-31 17:56
After many months of nervous waiting, knowing that my chances were extremely slim at best, I have just received my confirmation letter from NASA saying that I’ve been one of the lucky ones selected to participate in the “Journey To Mars” program !! I’m soooo damn excited !!! Planning is now under way for what […]
Categories: DBA Blogs

Oracle 12c SOA Suite: Administration via Enterprise Manager

Dietrich Schroff - Sat, 2018-03-31 14:20
After the installation (see my previous posts), I want to present some screenshots of the web console of the SOA Suite:
http://localhost:7001/em
But first the servers have to be started:
cd /home/oracle/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain/bin
nohup ./startNodeManager.sh &
nohup ./startWebLogic.sh &
Now the login is available:

 But there the soa_server1 is still down:


 So let's start this one:
nohup ./startManagedWebLogic.sh soa_server1 http://192.168.178.49:7001
The icon to the left of "base_domain" (beneath the Oracle logo at the top) is the menu:



If you navigate to "soa-infra (soa_server1)" you get this frame:






    Docker: efficiently building images for large software

    Yann Neuhaus - Sat, 2018-03-31 11:30

    I see increasing demand to build a Docker image for the Oracle Database. But the installation process for Oracle does not really fit the Docker way of installing by layers: you need to unzip the distribution, install from it to the Oracle Home, remove the things that are not needed, strip the binaries,… Before addressing those specific issues, here are the little tests I’ve done to show how the build layers increase the size of the image.

    I’m starting with an empty docker repository on XFS filesystem:

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 33M 80G 1% /var/lib/docker

    add, copy, rename and append

    For the example, I create a 100MB file in the context:

    [root@VM121 docker]# mkdir -p /var/tmp/demo
    [root@VM121 docker]# dd if=/dev/urandom of=/var/tmp/demo/file0.100M count=100 bs=1M

    Here is my Dockerfile:

    FROM alpine:latest as staging
    WORKDIR /var/tmp
    ADD file0.100M .
    RUN cp file0.100M file1.100M
    RUN rm file0.100M
    RUN mv file1.100M file2.100M
    RUN dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M

    The 1st step starts with an alpine image
    The 2nd step sets the working directory
    The 3rd step adds a 100M file from the context
    The 4th step copies the file, so that we have 200M in two files
    The 5th step removes the previous file, so that we have 100M in one file
    The 6th step renames the file, staying with only one 100M file
    The 7th step appends 100M to the file, leaving 200M in one file

    Here is the build with the default options:

    [root@VM121 docker]# docker image build -t franck/demo /var/tmp/demo

    The context, my 100MB file, is sent first:

    Sending build context to Docker daemon 104.9MB

    And here are my 7 steps:

    Step 1/7 : FROM alpine:latest as staging
    latest: Pulling from library/alpine
    ff3a5c916c92: Pull complete
    Digest: sha256:7df6db5aa61ae9480f52f0b3a06a140ab98d427f86d8d5de0bedab9b8df6b1c0
    Status: Downloaded newer image for alpine:latest
    ---> 3fd9065eaf02
    Step 2/7 : WORKDIR /var/tmp
    Removing intermediate container 93d1b5f21bb9
    ---> 131b3e6f34e7
    Step 3/7 : ADD file0.100M .
    ---> 22ca0b2f6424
    Step 4/7 : RUN cp file0.100M file1.100M
    ---> Running in b4b1b9c7e29b
    Removing intermediate container b4b1b9c7e29b
    ---> 8c7290a5c87e
    Step 5/7 : RUN rm file0.100M
    ---> Running in 606e2c73d456
    Removing intermediate container 606e2c73d456
    ---> 5287e66b019c
    Step 6/7 : RUN mv file1.100M file2.100M
    ---> Running in 10a9b379150e
    Removing intermediate container 10a9b379150e
    ---> f508f426f70e
    Step 7/7 : RUN dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M
    ---> Running in 9dcf6d80642c
    100+0 records in
    100+0 records out
    Removing intermediate container 9dcf6d80642c
    ---> f98304641c54
    Successfully built f98304641c54
    Successfully tagged franck/demo:latest

    So, what’s the size of my docker repository after building my image with this 200MB file?

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 538M 80G 1% /var/lib/docker

    I have more than 500MB here.

    Actually, besides the downloaded alpine image, which is only 4MB, the image I have built is 528MB:

    [root@VM121 docker]# docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest f98304641c54 Less than a second ago 528MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    We can better understand this size by looking at intermediate images:

    [root@VM121 docker]# docker image ls -a
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest f98304641c54 1 second ago 528MB
    <none> <none> f508f426f70e 27 seconds ago 319MB
    <none> <none> 5287e66b019c 36 seconds ago 214MB
    <none> <none> 8c7290a5c87e 37 seconds ago 214MB
    <none> <none> 22ca0b2f6424 42 seconds ago 109MB
    <none> <none> 131b3e6f34e7 47 seconds ago 4.15MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    The first one, ‘22ca0b2f6424’, is from step 3, which added the 100MB file
    The second one, ‘8c7290a5c87e’, is from the 4th step which copied the file, bringing the image to 200MB
    The third one, ‘5287e66b019c’, is from the 5th step which removed the file. It didn’t increase the size, but it didn’t reclaim anything either.
    The fourth one, ‘f508f426f70e’, is from the 6th step which renamed the file. But this, for Docker, is like copying to a new layer, and that adds 100MB
    Finally, the 7th step appended only 100MB, but this resulted in copying the full 200MB file to the new layer

    We can see all those operations, and the size added at each step, in the image history:

    [root@VM121 docker]# docker image history franck/demo
    IMAGE CREATED CREATED BY SIZE COMMENT
    f98304641c54 1 second ago /bin/sh -c dd if=/dev/urandom of=file2.100M … 210MB
    f508f426f70e 27 seconds ago /bin/sh -c mv file1.100M file2.100M 105MB
    5287e66b019c 36 seconds ago /bin/sh -c rm file0.100M 0B
    8c7290a5c87e 37 seconds ago /bin/sh -c cp file0.100M file1.100M 105MB
    22ca0b2f6424 42 seconds ago /bin/sh -c #(nop) ADD file:339435a18aeeb1b69… 105MB
    131b3e6f34e7 47 seconds ago /bin/sh -c #(nop) WORKDIR /var/tmp 0B
    3fd9065eaf02 2 months ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
    <missing> 2 months ago /bin/sh -c #(nop) ADD file:093f0723fa46f6cdb… 4.15MB

    All in one RUN

    One workaround is to run everything in the same layer. Personally, I don’t like it because I don’t see the point of using a Dockerfile for just running one script.
    So, here is the Dockerfile with only one RUN command:

    FROM alpine:latest as staging
    WORKDIR /var/tmp
    ADD file0.100M .
    RUN cp file0.100M file1.100M \
    && rm file0.100M \
    && mv file1.100M file2.100M \
    && dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M

    The build is similar except that there are fewer steps:

    [root@VM121 docker]# docker image build -t franck/demo /var/tmp/demo
    Sending build context to Docker daemon 104.9MB
    Step 1/4 : FROM alpine:latest as staging
    latest: Pulling from library/alpine
    ff3a5c916c92: Pull complete
    Digest: sha256:7df6db5aa61ae9480f52f0b3a06a140ab98d427f86d8d5de0bedab9b8df6b1c0
    Status: Downloaded newer image for alpine:latest
    ---> 3fd9065eaf02
    Step 2/4 : WORKDIR /var/tmp
    Removing intermediate container 707644c15547
    ---> d4528b28c85e
    Step 3/4 : ADD file0.100M .
    ---> e26215766e75
    Step 4/4 : RUN cp file0.100M file1.100M && rm file0.100M && mv file1.100M file2.100M && dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M
    ---> Running in 49c2774851f4
    100+0 records in
    100+0 records out
    Removing intermediate container 49c2774851f4
    ---> df614ac1b6b3
    Successfully built df614ac1b6b3
    Successfully tagged franck/demo:latest

    This leaves us with smaller space usage:

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 340M 80G 1% /var/lib/docker

    The image is smaller, but still larger than the final state (a 300MB image for only one 200MB file):

    [root@VM121 docker]# docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest df614ac1b6b3 Less than a second ago 319MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    This is because we have grouped the RUN steps, but the ADD has its own layer, adding a file that is removed later:

    [root@VM121 docker]# docker image ls -a
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest df614ac1b6b3 Less than a second ago 319MB
    <none> <none> e26215766e75 20 seconds ago 109MB
    <none> <none> d4528b28c85e 22 seconds ago 4.15MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB
     
    [root@VM121 docker]# docker image history franck/demo
    IMAGE CREATED CREATED BY SIZE COMMENT
    df614ac1b6b3 Less than a second ago /bin/sh -c cp file0.100M file1.100M … 210MB
    e26215766e75 20 seconds ago /bin/sh -c #(nop) ADD file:fe0262a4b800bf66d… 105MB
    d4528b28c85e 22 seconds ago /bin/sh -c #(nop) WORKDIR /var/tmp 0B
    3fd9065eaf02 2 months ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
    <missing> 2 months ago /bin/sh -c #(nop) ADD file:093f0723fa46f6cdb… 4.15MB

    This is the kind of issue we have when building an Oracle Database image. We need to ADD the zip file for the database distribution, and the latest bundle patch. It is removed later but still takes space in the image. Note that one workaround to avoid the ADD layer can be to get the files from an NFS or HTTP server with wget or curl in a RUN layer rather than an ADD one, as sketched below. There’s an example in Stefan Oehrli's blog post.
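
    As a sketch of that workaround (the URL is hypothetical), the download and the cleanup happen in a single RUN layer, so the downloaded file never persists in an earlier layer:

    FROM alpine:latest as staging
    WORKDIR /var/tmp
    # fetch from an HTTP server instead of ADD, then transform and clean up in the same layer
    RUN wget -q http://fileserver.example.com/file0.100M \
    && cp file0.100M file1.100M \
    && rm file0.100M \
    && mv file1.100M file2.100M \
    && dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M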

    --squash

    With the latest versions of Docker, there’s an easy way to flatten all those intermediate images at the end.
    Here I have 18.03 and have enabled the experimental features:

    [root@VM121 docker]# docker info
    Containers: 0
    Running: 0
    Paused: 0
    Stopped: 0
    Images: 8
    Server Version: 18.03.0-ce
    Storage Driver: overlay2
    Backing Filesystem: xfs
    ...
     
    [root@VM121 docker]# cat /etc/docker/daemon.json
    {
    "experimental": true
    }

    I start with the same as before but just add --squash to the build command:

    [root@VM121 docker]# docker image build --squash -t franck/demo /var/tmp/demo

    The output is similar but the image is an additional one, reduced down to the size of my final state (with one 200MB file):

    [root@VM121 docker]# docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest 2ab439a723c4 Less than a second ago 214MB
    <none> <none> c3058e598b0a 3 seconds ago 528MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    The intermediate image list shows that everything was done as without ‘--squash’, but with an additional step which reduced the size:

    [root@VM121 docker]# docker image ls -a
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest 2ab439a723c4 Less than a second ago 214MB
    <none> <none> c3058e598b0a 3 seconds ago 528MB
    <none> <none> 1f14d93a592e 23 seconds ago 319MB
    <none> <none> 7563d40b650b 27 seconds ago 214MB
    <none> <none> 8ed15a5059bd 28 seconds ago 214MB
    <none> <none> 24b11b9026ce 31 seconds ago 109MB
    <none> <none> 382bb71a6a4a 33 seconds ago 4.15MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    This step is visible in the image history as a ‘merge’ step:

    [root@VM121 docker]# docker image history franck/demo
    IMAGE CREATED CREATED BY SIZE COMMENT
    2ab439a723c4 Less than a second ago 210MB merge sha256:c3058e598b0a30c606c1bfae7114957bbc62fca85d6a70c2aff4473726431394 to sha256:3fd9065eaf02feaf94d68376da52541925650b81698c53c6824d92ff63f98353
    <missing> 3 seconds ago /bin/sh -c dd if=/dev/urandom of=file2.100M … 0B
    <missing> 23 seconds ago /bin/sh -c mv file1.100M file2.100M 0B
    <missing> 27 seconds ago /bin/sh -c rm file0.100M 0B
    <missing> 28 seconds ago /bin/sh -c cp file0.100M file1.100M 0B
    <missing> 31 seconds ago /bin/sh -c #(nop) ADD file:14cef588b48ffbbf1… 0B
    <missing> 33 seconds ago /bin/sh -c #(nop) WORKDIR /var/tmp 0B
    <missing> 2 months ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
    <missing> 2 months ago /bin/sh -c #(nop) ADD file:093f0723fa46f6cdb… 4.15MB

    However, even if I have a smaller final image, my filesystem usage is even larger with this additional 210MB:

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 739M 80G 1% /var/lib/docker

    Let’s prune it to get rid of those intermediate images:

    [root@VM121 docker]# docker image prune -f
    Deleted Images:
    deleted: sha256:c3058e598b0a30c606c1bfae7114957bbc62fca85d6a70c2aff4473726431394
    deleted: sha256:37ed4826d70def1978f9dc0ddf42618d951f65a79ce30767ac3a5037d514f8af
    deleted: sha256:1f14d93a592eb49a210ed73bf65e6886fcec332786d54b55d6b0e16fb8a8beda
    deleted: sha256:c65cf4c70aed04e9b57e7a2a4fa454d3c63f43c32af251d8c86f6f85f44b1757
    deleted: sha256:7563d40b650b2126866e8072b8df92d5d7516d86b25a2f6f99aa101bb47835ba
    deleted: sha256:31ee5456431e903cfd384b1cd7ccb7918d203dc73a131d4ff0b9e6517f0d51cd
    deleted: sha256:8ed15a5059bd4c0c4ecb78ad77ed75da143b06923d8a9a9a67268c62257b6534
    deleted: sha256:6be91d85dec6e1bda6f1c0d565e98dbf928b4ea139bf9cb666455e77a2d8f0d9
    deleted: sha256:24b11b9026ce738a78ce3f7b8b5d86ba3fdeb15523a30a7c22fa1e3712ae679a
    deleted: sha256:c0984945970276621780a7888adfde9c6e6ca475c42af6b7c54f664ad86f9c9f
    deleted: sha256:382bb71a6a4a7ddec86faa76bb86ea0c1a764e5326ad5ef68ce1a6110ae45754
     
    Total reclaimed space: 524.3MB

    Now having only the squashed image:

    [root@VM121 docker]# docker image ls -a
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest 2ab439a723c4 32 minutes ago 214MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB
     
    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 237M 80G 1% /var/lib/docker

    multi-stage build

    Finally, you can do something similar to an intermediate squash using a multi-stage build.

    Here is my Dockerfile:

    FROM alpine:latest as staging
    WORKDIR /var/tmp
    ADD file0.100M .
    RUN cp file0.100M file1.100M
    RUN rm file0.100M
    RUN mv file1.100M file2.100M
    RUN dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M
     
    FROM alpine:latest
    WORKDIR /var/tmp
    COPY --from=staging /var/tmp .

    With a multi-stage build, we can start the second stage from a different image and add more steps, but here I just start with the same alpine image and copy the final layer of the previous build.

    We see something very similar to the --squash one:

    [root@VM121 docker]# docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest 55f329385f8c Less than a second ago 214MB
    <none> <none> fd26a00db784 8 seconds ago 528MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB
     
    [root@VM121 docker]# docker image ls -a
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest 55f329385f8c 1 second ago 214MB
    <none> <none> fd26a00db784 9 seconds ago 528MB
    <none> <none> 9bf5be367b63 32 seconds ago 319MB
    <none> <none> 531d78833ba8 35 seconds ago 214MB
    <none> <none> 05dd68114743 36 seconds ago 214MB
    <none> <none> b9e5215a9fc8 39 seconds ago 109MB
    <none> <none> ab332f486793 41 seconds ago 4.15MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    The history of the last stage shows the copy of 210MB from the previous one:

    [root@VM121 docker]# docker image history franck/demo
    IMAGE CREATED CREATED BY SIZE COMMENT
    55f329385f8c 1 second ago /bin/sh -c #(nop) COPY dir:2b66b5c36eff5b51f… 210MB
    ab332f486793 41 seconds ago /bin/sh -c #(nop) WORKDIR /var/tmp 0B
    3fd9065eaf02 2 months ago /bin/sh -c #(nop) CMD ["/bin/sh"] 0B
    <missing> 2 months ago /bin/sh -c #(nop) ADD file:093f0723fa46f6cdb… 4.15MB

    The filesystem usage is similar to the --squash one. Even though we reduced the final image, all the intermediate states had to be stored:

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 737M 80G 1% /var/lib/docker

    That looks good if you accept using a large intermediate space while building the image, which gives you the possibility to debug without re-running from the beginning, thanks to the cached layers. However, you still have the inefficiency that each time you try the build, the context is sent again even when not needed. And that takes long with a 3GB .zip in the case of an Oracle Database installation. Unfortunately, if you add the file to .dockerignore once you know the ADD steps are in cache, the next build attempt will not use the cache anymore. I would love to see a per-stage .dockerignore file for multi-stage builds, or simply have Docker realize that some files in the context will not be needed by the COPY or ADD steps that are not in cache yet.

    Sending the whole context at each build attempt, when debugging your Dockerfile, is not efficient at all and looks like punch-card-era compilation, where people sent the cards to be compiled during the night. One syntax error on the first line and you lose another day.

    One solution is to have all the required files on an NFS or HTTP server and get them with ADD from the URL, as mentioned earlier.

    Multi-stage with multi-contexts

    Another solution is to put all the COPY or ADD steps from the context in one Dockerfile to build an image containing all required files, and then build your image from it (and squash it at the end).

    Here is my first Dockerfile, just adding the files from the context:

    [root@VM121 docker]# ls /var/tmp/demo
    Dockerfile file0.100M nocontext
    [root@VM121 docker]# cat /var/tmp/demo/Dockerfile
    FROM alpine:latest as staging
    WORKDIR /var/tmp
    ADD file0.100M .

    I build this ‘staging’ image:

    [root@VM121 docker]# docker image build -t franck/stage0 /var/tmp/demo
    Sending build context to Docker daemon 104.9MB
    Step 1/3 : FROM alpine:latest as staging
    latest: Pulling from library/alpine
    ff3a5c916c92: Pull complete
    Digest: sha256:7df6db5aa61ae9480f52f0b3a06a140ab98d427f86d8d5de0bedab9b8df6b1c0
    Status: Downloaded newer image for alpine:latest
    ---> 3fd9065eaf02
    Step 2/3 : WORKDIR /var/tmp
    Removing intermediate container 0eeed8e0cfd2
    ---> a5db3b29c8e1
    Step 3/3 : ADD file0.100M .
    ---> 2a34e1e981be
    Successfully built 2a34e1e981be
    Successfully tagged franck/stage0:latest

    This one is the minimal one:

    [root@VM121 docker]# docker image ls
    + docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/stage0 latest 2a34e1e981be Less than a second ago 109MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB
     
    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 139M 80G 1% /var/lib/docker

    Now, I don’t need to send this context anymore during further development of my Dockerfile.
    I’ve added the following steps to a Dockerfile in another directory:

    [root@VM121 docker]# ls /var/tmp/demo/nocontext/
    Dockerfile
    [root@VM121 docker]# cat /var/tmp/demo/nocontext/Dockerfile
    FROM franck/stage0 as stage1
    WORKDIR /var/tmp
    RUN cp file0.100M file1.100M
    RUN rm file0.100M
    RUN mv file1.100M file2.100M
    RUN dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M
    FROM alpine:latest
    WORKDIR /var/tmp

    Here is the build, using multi-stage to get a squashed final image (you can also use --squash):

    [root@VM121 docker]# docker image build -t franck/demo /var/tmp/demo/nocontext
     
    Sending build context to Docker daemon 2.048kB
    Step 1/9 : FROM franck/stage0 as stage1
    ---> 2a34e1e981be
    Step 2/9 : WORKDIR /var/tmp
    Removing intermediate container eabf57a8de05
    ...
    Successfully built 82478bfa260d
    Successfully tagged franck/demo:latest

    At that point, there is no advantage in space used, as I keep all layers for easy Dockerfile development:

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 738M 80G 1% /var/lib/docker
     
    [root@VM121 docker]# docker image ls
    REPOSITORY TAG IMAGE ID CREATED SIZE
    franck/demo latest 82478bfa260d About a minute ago 214MB
    <none> <none> 5772ad68d208 About a minute ago 528MB
    franck/stage0 latest 2a34e1e981be About a minute ago 109MB
    alpine latest 3fd9065eaf02 2 months ago 4.15MB

    But now, if I want to add an additional step:

    [root@VM121 docker]# cat >> /var/tmp/demo/nocontext/Dockerfile <<< 'RUN chmod a+x /var/tmp'

    I can re-build quickly, using cached layers, and without the need to send the context again:

    [root@VM121 docker]# docker image build -t franck/demo /var/tmp/demo/nocontext
    Sending build context to Docker daemon 2.048kB
    Step 1/10 : FROM franck/stage0 as stage1
    ---> 2a34e1e981be
    Step 2/10 : WORKDIR /var/tmp
    ---> Using cache
    ---> fa562926cc2b
    Step 3/10 : RUN cp file0.100M file1.100M
    ---> Using cache
    ---> 31ac716f4d61
    Step 4/10 : RUN rm file0.100M
    ---> Using cache
    ---> d7392cf51ad9
    Step 5/10 : RUN mv file1.100M file2.100M
    ---> Using cache
    ---> 4854e503885b
    Step 6/10 : RUN dd if=/dev/urandom of=file2.100M seek=100 count=100 bs=1M
    ---> Using cache
    ---> 5772ad68d208
    Step 7/10 : FROM alpine:latest
    ---> 3fd9065eaf02
    Step 8/10 : WORKDIR /var/tmp
    ---> Using cache
    ---> a5db3b29c8e1
    Step 9/10 : COPY --from=stage1 /var/tmp .
    ---> Using cache
    ---> 82478bfa260d
    Step 10/10 : RUN chmod a+x /var/tmp
    ---> 4a69ee40a938
    Successfully built 4a69ee40a938
    Successfully tagged franck/demo:latest

    Once I’m ok with my final image, I can remove the intermediate ones:

    [root@VM121 docker]# docker image prune -f
    Deleted Images:
    deleted: sha256:5772ad68d20841197d1424f7c64edd21704e4c7b470acb2193de51ae8741385d
    deleted: sha256:bab572d749684d126625a74be4f01cc738742f9c112a940391e3533e61dd55b9
    deleted: sha256:4854e503885b4057809fe2867a743ae7898e3e06b329229519fdb5c9d8b10ac1
    deleted: sha256:de4acb90433c30d6a21cc3b4483adbd403d8051f3c7c31e6bc095a304606355a
    deleted: sha256:d7392cf51ad99d5d0b7a1a18d8136905c87bc738a5bc94dec03e92f5385bf9c8
    deleted: sha256:f037e7f973f4265099402534cd7ba409f35272701166d59a1be8e5e39508b07c
    deleted: sha256:31ac716f4d61f0048a75b8de6f18757970cf0670a0a3d711e4386bf098b32041
    deleted: sha256:2dccb363c5beb4daf45586383df6454b198f824d52676f70318444c346c0fe9a
    deleted: sha256:fa562926cc2b3cb56400e1068984bb4048f56713a3cf6dcfa3cf6d945023ebc4
     
    Total reclaimed space: 419.4MB

    And the staging one:

    [root@VM121 docker]# docker image rm franck/stage0
    Untagged: franck/stage0:latest
    Deleted: sha256:2a34e1e981be9154c31c5ee7eb942cc121267f4416b6fe502ab93f2dceafd98c
    Deleted: sha256:b996a1bdc829167f16dcbe58b717284764470661c3116a6352f15012e1dff07c

Finally, I optimized the development of the Dockerfile and finished with the minimal size.

    [root@VM121 docker]# df -hT /var/lib/docker
    Filesystem Type Size Used Avail Use% Mounted on
    /dev/sdc xfs 80G 237M 80G 1% /var/lib/docker

    So what?

I’m always surprised by the lack of efficiency when building an image with a Dockerfile. Any serious application deployment involves several intermediate files, and the way docker build is layered inflates the size and the time required. Efficient layering and snapshotting work at block level. Here, at file level, any byte of data modified in a file, even metadata such as the file name, results in a whole file copy. But for common applications, the installation steps are not as simple as adding new files. You may have files appended, object files added to libraries, then compiled, then stripped…

In this post, I tested some recent features, such as multi-stage build and the experimental --squash, as well as a simple manual multi-stage build. Of course, you can do everything in the same layers, and even not use Dockerfiles at all, but then why use Docker? There’s also the Packer approach, which I’ve not tested yet. However, I like the Docker approach, but only when used correctly. Deploying an application like Oracle Database should use the layered build in the following way: additional steps for new options or new updates. This means that the files must be built elsewhere, in a staging container, and added in one step. And to be efficient, the context should be sent only when needed: when a non-cached ADD or COPY requires it.

     

    Cet article Docker: efficiently building images for large software est apparu en premier sur Blog dbi services.

    ORA-12545 TNS: Host or Object Does not Exist , while using SCAN

    Syed Jaffar - Sat, 2018-03-31 11:30
As part of the SSL configuration between the WebLogic tier and the DB tier, after applying the sqlnet.ora settings and restarting the SCAN listener, we encountered an ORA-12545 error on a 3-node RAC database when using the SCAN listener.

We verified the listener status and found all the services in READY status. Connecting through the VIP worked perfectly, so the only issue was connecting through the SCAN. We were fortunate to find a solution after a very quick search over the net: MOS Doc 364855.1 explained the issue and provided the solution.

As part of the configuration, we had set LOCAL_LISTENER to a VIP hostname. After modifying the LOCAL_LISTENER string to point to the VIP IP address, we managed to connect through the SCAN.

Logically the VIP hostname should have worked, but we found many references reporting the same issue.



    References:
    Client Connection to RAC Intermittently Fails-ORA-12545 TNS: Host or Object Does not Exist (Doc ID 364855.1)
    https://ardentperf.com/2007/04/02/local_listener-and-ora-12545/
    http://spotdba.blogspot.com/2012/08/error-ora-12545-while-trying-to-connect.html

    First steps with REST services on ADF Business Components

    Amis Blog - Sat, 2018-03-31 10:20

    Recently we had a challenge at a customer for which ADF REST resources on Business Components were the perfect solution.

    Our application is built in Oracle JET and of course we wanted nice REST services to communicate with. Because our data is stored in an Oracle database we needed an implementation to easily access the data from JET. We decided on using ADF and Business Components to achieve this. Of course there are alternative solutions available but because our application runs as a portal in Webcenter Portal, ADF was already in our technology stack. I would like to share some of my first experiences with this ADF feature. We will be using ADF 12.2.1.1.

In this introduction we will create a simple application, the minimal required set of business components and a simple REST service. There are no prerequisites for using the REST functionality in ADF. If you create a custom application you can choose to add the feature for REST Services, but it is not necessary. Start by creating a simple EO and VO.


    Before you can create any REST services, you need to define your first release version. The versions of REST resources are managed in the adf-config.xml. Go to this file, open the Release Versions tab and create version 1. The internal name is automatically configured based on your input:


    Your application is now ready for your first service. Go to the Web Service tab of the Application Module and then the REST tab. Click the green plus icon to add a resource. Your latest version will automatically be selected. Choose an appropriate name and press OK.


    ADF will create a config file for your resource (based on the chosen ViewObject), a resourceRegistry that will manage all resources in your application and a new RESTWebService project that you can use to start the services. The config file automatically opens and you can now further configure your resource.


In the wizard Create Business Components from Tables, there is a REST Resources step in which you can immediately define some resources on View Objects. Using this option always gives me an addPageDefinitionUsage error, even when creating the simplest service.


    After ignoring this error, several things go wrong (what a surprise). The REST resource is created in a separate folder (not underneath the Application Module), it is not listed as a REST resource in the Application Module and finally it doesn’t work. All in all not ideal. I haven’t been able to figure out what happens but I would recommend avoiding this option (at least in this version of JDeveloper).

    There are two important choices to make before starting your service. You have to decide which REST actions will be allowed, and what attributes will be exposed.

Setting the actions is simple. On the first screen of your config file there are several checkboxes for the actions: Create, Delete and Update. By default they are all allowed on your service. Make sure to uncheck all actions that you don’t want to allow on your service. This provides for better security.


    Limiting the exposed attributes can be done in two ways. You can hide attributes on the ViewObject for all REST services on that VO. This is a secure and convenient way if you know an attribute should never be open to a user.


    Another way of configuring attributes for your REST services is creating REST shapes. This is a powerful feature that can be accessed from the ViewObject screen. You can make shapes independent of specific resources and apply them whenever you want. To create a shape, go to the ViewObject and to the tab Service Shaping. Here you can add a shape with the green plus-icon. Keep in mind that the name you choose for your shape will be a suffix to the name of your ViewObject. After creating the shape, you can use the shuttle to remove attributes.


    The newly created shape will have its own configuration file in a different location but you can only change it in the ViewObject configuration.


    After the shape is created, it can now be added to your REST service. To do this, use the Attributes tab in your Resource file, select the shape and you see the attribute shuttle is updated automatically.


    You are now ready to start your service. Right-click on the RESTWebService project and run. If you have done everything right, JDeveloper will show you the url where your services are running. Now you can REST easily.

    The post First steps with REST services on ADF Business Components appeared first on AMIS Oracle and Java Blog.

    Oracle Different Levels of Hell

    Michael Dinh - Sat, 2018-03-31 09:50

I did not know there exist many levels of hell, and Oracle certainly has them.

Would it be bad if someone searching for hell found this blog listed in the top 10? :=(

    Here’s the effort one needs to go through to determine what patch to apply for Goldengate software (binary) as part of quarterly Critical Patch Updates.

    Hint: Goldengate does not participate in quarterly patch updates.

    Find the latest patch available and patch using Opatch or install newer database compatible version using runInstaller.

    Oracle GoldenGate — Oracle RDBMS Server Recommended Patches (Doc ID 1557031.1)
    Latest RDBMS version Oracle Server 12.1.0.2

    Let’s create a new doc for new version but don’t provide reference to it. Make the user find it.

    Latest GoldenGate/Database (OGG/RDBMS) Patch recommendations (Doc ID 2193391.1)
    Latest OGG Release 12.3
    Latest RDBMS Release 12.2.0.1

    OGG Release/OGG Patch
    12.3/Recommended 12.3.0.1.2 or higher *** This is not a patch
    12.2/Recommended 12.2.0.2.0 or higher *** This is not a patch
    12.2/Minimum 12.2.0.1.170221 *** This is a patch

    Master Note for Oracle GoldenGate Core Product Patch Sets (Doc ID 1645495.1)
    There are no patch sets available for 12.3.0.1 at this time.
    Latest OGG v12.2.0.1 Patch Set Availability Notes

    ORDS: Installation and Configuration

    Amis Blog - Fri, 2018-03-30 09:57

In my job as system administrator/DBA/integrator I was challenged to implement smoketesting using REST calls. Implementing REST in combination with WebLogic is pretty easy. But then we wanted to extend smoketesting to the database. For example, we wanted to know whether the database version and patch level were at the required level used throughout the complete DTAP environment. Another example is the existence of required database services. As it turns out, Oracle has a feature called ORDS, Oracle REST Data Services, to accomplish this.

ORDS can be installed in 2 different scenarios: in standalone mode on the database server, or in combination with an application server such as WebLogic Server, Glassfish Server, or Tomcat.

    This article will give a short introduction to ORDS. It then shows you how to install ORDS feasible for a production environment using WebLogic Server 12c and an Oracle 12c database as we have done for our smoketesting application.

We’ve chosen WebLogic Server to deploy the ORDS application because we already used WebLogic’s REST feature for smoketesting the application and WebLogic resources, and for high-availability reasons because we use an Oracle RAC database. Also, running in stand-alone mode would lead to additional security issues around port configurations.

    Terminology

    REST: Representational State Transfer. It provides interoperability on the Internet between computer systems.

    ORDS: Oracle REST Data Services. Oracle’s implementation of RESTful services against the database.

    RESTful service: an http web service that follows the REST architecture principles. Access to and/or manipulation of web resources is done using a uniform and predefined set of stateless operators.

    ORDS Overview

    ORDS makes it easy to develop a REST interface/service for relational data. This relational data can be stored in either an Oracle database, an Oracle 12c JSON Document Store, or an Oracle NoSQL database.

    A mid-tier Java application called ORDS, maps HTTP(S) requests (GET, PUT, POST, DELETE, …) to database transactions and returns results in a JSON format.

    ORDS Request Response Flow

    Installation Process

    The overall process of installing and configuring ORDS is very simple.

    1. Download the ORDS software
    2. Install the ORDS software
3. Make some setup configuration changes
    4. Run the ORDS setup
    5. Make a mapping between the URL and the ORDS application
    6. Deploy the ORDS Java application

    Download the ORDS software

    Downloading the ORDS software can be done from the Oracle Technology Network. I used version ords.3.0.12.263.15.32.zip. I downloaded it from Oracle Technet:
    http://www.oracle.com/technetwork/developer-tools/rest-data-services/downloads/index.html

    Install the ORDS software

    The ORDS software is installed on the WebLogic server running the Administration console. Create an ORDS home directory and unzip the software.

    Here are the steps on Linux

    $ mkdir -p /u01/app/oracle/product/ords
    $ cp -p ords.3.0.12.263.15.32.zip /u01/app/oracle/product/ords
    $ cd /u01/app/oracle/product/ords
    $ unzip ords.3.0.12.263.15.32.zip

Make some setup configuration changes

ords_params.properties File

Under the ORDS home directory a couple of subdirectories are created. One subdirectory is called params and holds a file called ords_params.properties. This file contains the default parameters used during the installation and is also used for silent installation. In case any parameters aren’t specified in this file, ORDS asks you for the values interactively.

    In this article I go for a silent installation. Here are the default parameters and the ones I set for installing

Parameter                  Default Value       Configured Value
db.hostname                                    dbserver01.localdomain
db.port                    1521                1521
db.servicename                                 ords_requests
db.username                APEX_PUBLIC_USER    APEX_PUBLIC_USER
migrate.apex.rest          false               false
plsql.gateway.add          false               false
rest.services.apex.add                         false
rest.services.ords.add     true                true
schema.tablespace.default  SYSAUX              ORDS
schema.tablespace.temp     TEMP                TEMP
standalone.http.port       8080                8080
user.public.password                           Ords4Ever!
user.tablespace.default    USERS               ORDS
user.tablespace.temp       TEMP                TEMP
sys.user                                       SYS
sys.password                                   Oracle123

    NOTE

    As you see, I refer to a tablespace ORDS for the installation of the metadata objects. Don’t forget to create this tablespace before continuing.

    NOTE

    The parameters sys.user and sys.password are removed from the ords_params.properties file after running the setup (see later on in this article)

    NOTE

    The password for parameter user.public.password is obscured after running the setup (see later on in this article)

    NOTE

    As you can see there are many parameters that refer to APEX. APEX is a great tool for rapidly developing very sophisticated applications nowadays. Although you can run ORDS together with APEX, you don’t have to. ORDS runs perfectly without an APEX installation.

    Configuration Directory

I create an extra directory called config, directly under the ORDS home directory, to hold all configuration data used during the setup.

    $ mkdir config
    $ java -jar ords.war configdir /u01/app/oracle/product/ords/config
    $ # Check what value of configdir has been set!
    $ java -jar ords.war configdir

    Run the ORDS setup

    After all configuration is done, you can run the setup, which installs the Oracle metadata objects necessary for running ORDS in the database. The setup creates 2 schemas called:

    • ORDS_METADATA
    • ORDS_PUBLIC_USER

    The setup is run in silent mode, which uses the parameter values previously set in the ords_params.properties file.

    $ mkdir -p /u01/app/oracle/logs/ORDS
$ java -jar ords.war setup --database ords --logDir /u01/app/oracle/logs/ORDS --silent

    Make a mapping between the URL and the ORDS application

    After running the setup, ORDS required objects are created inside the database. Now it’s time to make a mapping from the request URL to the ORDS interface in the database.

$ java -jar ords.war map-url --type base-path /ords ords

    Here a mapping is made between the request URL from the client to the ORDS interface in the database. The /ords part after the base URL is used to map to a database connection resource called ords.

    So the request URL will look something like this:

    http://webserver01.localdomain:7001/ords/

    Where http://webserver01.localdomain:7001 is the base path.

    Deploy the ORDS Java application

Right now all changes and configurations are done. It’s time to deploy the ORDS Java application to WebLogic Server. Here I use WLST to deploy the ORDS Java application, but you can do it via the Administration Console as well, whichever you prefer.

    $ wlst.sh
$ connect('weblogic','welcome01','t3://webserver01.localdomain:7001')
$ progress = deploy('ords','/u01/app/oracle/product/ords/ords.war','AdminServer')
    $ disconnect()
    $ exit()

And your ORDS installation is ready for creating REST services!

    NOTE

After deployment of the ORDS Java application, its state should be Active and its health OK. You might need to restart the Managed Server!

    Deinstallation of ORDS

As the installation of ORDS is pretty simple, deinstallation is even simpler. The installation involves the creation of 2 schemas in the database and a deployment of ORDS on the application server. The deinstall process is the reverse.

    1. Undeploy ORDS from WebLogic Server
    2. Deinstall the database schemas using

  $ java -jar ords.war uninstall

      In effect this removes the 2 schemas from the database

    3. Optionally remove the ORDS installation directories
    4. Optionally remove the ORDS tablespace from the database


    Summary

The installation of ORDS is pretty simple. You don’t need any extra licenses to use ORDS, and it can be installed without installing APEX. You can run ORDS stand-alone, or use a J2EE webserver like WebLogic Server, Glassfish Server, or Apache Tomcat, although you may need additional licenses for the use of these webservers.

    Hope this helps!

    The post ORDS: Installation and Configuration appeared first on AMIS Oracle and Java Blog.

    CPU utilization

    Tom Kyte - Fri, 2018-03-30 08:46
    How can I reduce CPU utilization of Oracle database?
    Categories: DBA Blogs

    Oracle DBAs and Meltdown & Spectre (M&S) vulnerability Patch

    Pakistan's First Oracle Blog - Thu, 2018-03-29 21:01
So what do Oracle DBAs need to do regarding the Meltdown & Spectre (M&S) vulnerability patches?



Well, they should ask the sysadmins to install the patch on the affected versions. They need to get a maintenance window for that, take a full backup of the Oracle infrastructure and databases before patching, and capture a baseline of OS metrics to compare with the post-patch status of the system.

There is not much for Oracle DBAs to do in this regard, as this vulnerability is in the hardware and is mainly a sysadmin concern. Nonetheless, Oracle DBAs should take this opportunity to install the latest CPU (Critical Patch Update).

The vulnerability is in the chipset itself, unrelated to the OS. These vulnerabilities exist at the hardware layer and provide attackers with a way to essentially read the memory used by other processes. Because of the nature of this exploit, the database itself is not currently thought to be a vector of the risk; in fact the real "fix" for this issue relies on fixing the architecture at the chipset level.

To mitigate the risk currently without replacing your chips, OS vendors are releasing patches that fundamentally change how processes interact with memory structures. This is why, in "Addendum to the January 2018 CPU Advisory for Spectre and Meltdown (Doc ID 2347948.1)", Oracle is releasing patches for Oracle VM (virtual machines are particularly susceptible to this exploit, as one "VM" can read the memory processes of another, making this particularly deadly for cloud computing) and Oracle Enterprise Linux. We understand that Oracle is exploring the possibility that additional patches may be needed for Oracle Enterprise and Standard Edition databases themselves.


For Exadata only, you need to apply the latest Exadata 12.2.1.1.6 software bundle (the full version number is 12.2.1.1.6.180125.1); the Spectre/Meltdown patches are included in it.


The best course of action regarding this would be to get word from Oracle Support about any database-related patches.

    Categories: DBA Blogs

    Move a Datafile from one ASM Diskgroup to Another Diskgroup

    Pakistan's First Oracle Blog - Thu, 2018-03-29 20:49
    Following are steps to move a datafile from one ASM diskgroup to another diskgroup in the same ASM instance:




    For this example, let's suppose the full path of datafile to be moved is +DATA/test/datafile/test.22.121357823 and datafile number is 11.

    Step 1: From RMAN, put datafile 11 offline:

    SQL "ALTER DATABASE DATAFILE ''+DATA/test/datafile/test.22.121357823'' OFFLINE";

    Step 2: Backup Datafile 11 to Copy using RMAN:

    $ rman target /
    BACKUP AS COPY DATAFILE 11 FORMAT '+DATA_NEW';

    --- Make note the path and name of the generated datafile copy.

    Step 3: From RMAN, switch datafile 11 to copy:

    SWITCH DATAFILE "+DATA/test/datafile/test.22.121357823" TO COPY;

    Step 4: From RMAN, Recover Datafile 11:

    RECOVER DATAFILE 11;

Step 5: From RMAN, bring datafile 11 back online (after the switch, the copy is the current datafile, so it can be referenced by file number):

SQL "ALTER DATABASE DATAFILE 11 ONLINE";

Step 6: From SQL*Plus, verify that datafile 11 was correctly switched and is online:

    sqlplus / as sysdba
    SQL> select file_id,file_name,online_status from dba_data_files where file_id in (11);
    Categories: DBA Blogs

    Docker-CE on Oracle Enterprise Linux 7

    Yann Neuhaus - Thu, 2018-03-29 15:32

Here is how I install the latest Docker version on Oracle Linux 7. You will find several blog posts about it which all install ‘docker-engine’. But things move fast in this agile world and the Docker package name has changed. The Community Edition is now ‘docker-ce’ and this is the one you want in order to run the latest version.

I’m on OEL 7.4 but this should also work on RHEL 7:
    [root@VM188 yum]# cat /etc/oracle-release
    Oracle Linux Server release 7.4

    docker-engine

If you enable [ol7_addons] you can install ‘docker-engine’:

    # yum-config-manager --enable ol7_addons
    # yum info docker-engine
    Loaded plugins: ulninfo
    Available Packages
    Name : docker-engine
    Arch : x86_64
    Version : 17.06.2.ol
    Release : 1.0.1.el7
    Size : 21 M
    Repo : ol7_addons/x86_64
    Summary : The open-source application container engine
    URL : https://dockerproject.org
    License : ASL 2.0
    Description : Docker is an open source project to build, ship and run any application as a
    : lightweight container.

But forget it. That’s 17.06, which is 6 months old. You should consider Docker as a puppy: 6 months in puppy years is like 3 human years. So many changes have happened.

    You can remove all those old things:

    # yum remove docker docker-common docker-selinux docker-engine

    docker-ce

I’ve not found ‘docker-ce’ in the OL7 repositories, as only the Enterprise Edition is there. So I added the CentOS repo (with yum-config-manager, which you can get from yum-utils if you don’t have it already):

    yum -y install yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    Once done, I’ve just installed Docker Community Edition with:

    yum -y install docker-ce

    And, at that time I got the version 17.12 easily:

    [root@VM188 ~]# yum info docker-ce
    Loaded plugins: ulninfo
    Installed Packages
    Name : docker-ce
    Arch : x86_64
    Version : 17.12.0.ce
    Release : 1.el7.centos
    Size : 123 M
    Repo : installed
    From repo : docker-ce-stable
    Summary : The open-source application container engine
    URL : https://www.docker.com
    License : ASL 2.0
    Description : Docker is an open source project to build, ship and run any application as a
    : lightweight container.
    ...

    But now there’s a new version available:

    Available Packages
    Name : docker-ce
    Arch : x86_64
    Version : 18.03.0.ce
    Release : 1.el7.centos
    Size : 35 M
    Repo : docker-ce-stable/x86_64
    Summary : The open-source application container engine
    URL : https://www.docker.com
    License : ASL 2.0
    Description : Docker is an open source project to build, ship and run any application as a
    : lightweight container.

    The problem is that if you want to install docker-ce in this latest version, you will now get:

    Resolving Dependencies
    --> Running transaction check
    ---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
    --> Processing Dependency: pigz for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
    --> Finished Dependency Resolution
    Error: Package: docker-ce-18.03.0.ce-1.el7.centos.x86_64 (docker-ce-stable)
    Requires: pigz
    You could try using --skip-broken to work around the problem
    ** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
    2:microcode_ctl-2.1-22.5.0.3.el7_4.x86_64 has missing requires of kernel

    (Ok Google, this is what you need to index…)

    pigz

    Starting from version 18.02 there’s a new dependency on ‘pigz’ for parallel gzip.

    To get this ‘pigz’ package from the OL7 repository you need to enable EPEL in /etc/yum.repos.d/public-yum-ol7.repo

[ol7_developer_EPEL]
name=Oracle Linux $releasever Developement Packages ($basearch)
    baseurl=http://yum.oracle.com/repo/OracleLinux/OL7/developer_EPEL/$basearch/
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
    gpgcheck=1
    enabled=1

    Now, I’m able to install the latest docker-ce:

    [root@VM188 yum.repos.d]# yum install docker-ce
    Loaded plugins: ulninfo
    Resolving Dependencies
    --> Running transaction check
    ---> Package docker-ce.x86_64 0:17.12.0.ce-1.el7.centos will be updated
    ---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be an update
    --> Processing Dependency: pigz for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
    --> Running transaction check
    ---> Package pigz.x86_64 0:2.3.4-1.el7 will be installed
    --> Finished Dependency Resolution
     
    Dependencies Resolved
     
    ================================================================================================================================
    Package Arch Version Repository Size
    ================================================================================================================================
    Updating:
    docker-ce x86_64 18.03.0.ce-1.el7.centos docker-ce-stable 35 M
    Installing for dependencies:
    pigz x86_64 2.3.4-1.el7 ol7_developer_EPEL 80 k
     
    Transaction Summary
    ================================================================================================================================
    Install ( 1 Dependent package)
    Upgrade 1 Package
     
    Total download size: 35 M
    Is this ok [y/d/N]: y

    Oracle Database on Docker

    You may wonder why I install Docker on Oracle Linux rather than CentOS. The MOS Doc ID 2216342.1 mentions that Oracle will support customers running Oracle Database (single instance) in Docker containers running on Oracle Linux 7 with UEK4 or Red Hat Enterprise Linux 7.

    If you want to validate your Docker install for running Oracle Database, the easiest is to use the image build script provided by Oracle:

    git clone https://github.com/oracle/docker-images.git
    cd ./docker-images/OracleDatabase/SingleInstance/dockerfiles/
# download linuxx64_12201_database.zip and move it into the 12.2.0.1 subdirectory
    sh buildDockerImage.sh -v 12.2.0.1 -e

    Those are maintained by Gerald Venzl, Oracle product manager for database development, so they are obviously the best way to run Oracle Database on Docker. You can read all related best practices from the same author. Once you have that running, you have validated your environment and you can customize further if you want.

     

    Cet article Docker-CE on Oracle Enterprise Linux 7 est apparu en premier sur Blog dbi services.

    Optimizing the Performance & Scalability of Java Applications that use an RDBMS

    Kuassi Mensah - Thu, 2018-03-29 12:40
Preamble

There is an abundant literature on Java performance (books, articles, blogs, websites, and so on); a Google search returns more than 5 million hits. To name a few: the Effective Java programming language guide, Java Performance: The Definitive Guide, the Java performance tuning newsletter and the associated http://www.javaperformancetuning.com website. That is not the purpose of this post.

    The goal of this post is to revisit the known best practices for speeding up and scaling database operations for Java applications then discuss database proxy and the upcoming standard Java API for asynchronous database access (ADBA).

    Even those familiar with Java optimization techniques will learn new tips!
    Speeding up Java applications that use an RDBMS
    Optimizing database operations for Java applications includes: speeding up database connectivity, speeding up SQL statements processing, optimizing network traffic, and in-place processing. 
    Speeding up Database Connectivity
Connection establishment is the most expensive database operation; the obvious optimization that Java developers have been using for ages is connection pooling, which avoids creating connections at runtime.

    Client-side Connection Pools

Java connection pools such as Apache Commons DBCP, C3P0, the Oracle Universal Connection Pool (UCP) and many others run alongside the JDBC libraries, either stand-alone within the JDK/JRE or as part of Java EE container datasources (e.g., Tomcat, WebLogic, WebSphere and others). Java EE containers usually furnish their own connection pools, but they also allow replacing theirs with 3rd-party pools (see using UCP with Tomcat, UCP with WebLogic).
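
As a minimal sketch of such a client-side pool (the host, service name, credentials and pool sizes below are placeholders), a UCP datasource can be set up like this:

import java.sql.Connection;
import java.sql.SQLException;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpPoolExample {
    public static void main(String[] args) throws SQLException {
        // Create a pool-enabled data source backed by the Oracle JDBC driver
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/myservice"); // placeholder URL
        pds.setUser("hr");                                       // placeholder credentials
        pds.setPassword("hr");
        pds.setInitialPoolSize(5);
        pds.setMinPoolSize(5);
        pds.setMaxPoolSize(20);

        // Connections are borrowed from the pool, not created per request
        try (Connection conn = pds.getConnection()) {
            System.out.println("Connected via UCP: "
                + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}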

Most Java developers use these client-side or mid-tier connection pools for sustaining small and medium workloads. However, these connection pools are confined to the JRE/JDK instance (i.e., they can't be shared beyond the boundary of the JRE/JDK) and are impractical when deploying tens of thousands of mid-tiers or Web servers. Even with a very small pool size each, the RDBMS server is overwhelmed by tens of thousands of pre-allocated connections that are predominantly idle (more than 90%).



    Proxy Connection Pools

    Proxy connection pools such as MySQL Router, Oracle Database Connection Manager in Traffic
     Director Mode (CMAN-TDM), and others, are part of proxy servers that sit between the database
     clients (i.e., Java apps) and the RDBMS. These allow thousands of database clients to share a 
    common connection pool. I will discuss this a bit more, near the end of this post.

    The Oracle database also furnishes database-side connection pools such as  the Shared Servers
    and the Database Resident Connection Pool (DRCP). We will not discuss those in this post.

    Other connection optimization features include: deferring connection health check and the 
    de-prioritization of failed nodes.
    Deferring Connection Health Check

The ability of a connection pool such as Oracle's Universal Connection Pool (UCP) to avoid checking the health of connections for a defined period of time improves the latency of connection check-out (i.e., getConnection() returns faster).
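
With UCP, for instance, a sketch of this (reusing the pds datasource from the earlier example; the 30-second value is arbitrary) could be:

// Validate connections when they are borrowed from the pool...
pds.setValidateConnectionOnBorrow(true);
// ...but trust any connection used successfully within the last 30 seconds,
// skipping the health check for that period.
pds.setSecondsToTrustIdleConnection(30);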

    De-prioritization of Failed Nodes
In a multi-instance clustered database environment such as Oracle RAC, this JDBC feature assigns a low priority to a failed instance for a user-defined period of time, thereby reducing the connection establishment latency (in other words, it avoids attempting to get connections from the failed instance).
    Optimizing Statements Processing
The default COMMIT mode with JDBC is Auto-COMMIT; unless this is what you want, you should explicitly disable Auto-COMMIT on the connection object:

    conn.setAutoCommit(false);


    Processing a SQL statement requires several steps including: parsing, binding variables, executing,
    fetching resultSets (if a query), and COMMITting or ROLLBACKing the transaction (if a DML
    i.e., Insert, Update, or Delete). 

    Java developers have several options for optimizing SQL statements processing including: 
    Prepared Statements, Statements Caching, ResultSets caching with change notification.


    Prepared Statements

Parsing (i.e., hard parsing) is the most expensive operation during the processing of a SQL statement. The best practice consists in avoiding parsing by using Prepared Statements, which are parsed only once then reused on subsequent invocations, after binding variables. A security byproduct of Prepared Statements is preventing SQL injection.

    https://docs.oracle.com/javase/tutorial/jdbc/basics/prepared.html
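
A minimal sketch (the employees table, columns and bind values are illustrative, and conn is a connection obtained from the datasource): the statement is prepared once, then re-executed with different bind values without re-parsing.

String sql = "SELECT first_name, last_name FROM employees WHERE department_id = ?";
try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
    for (int deptId : new int[] {10, 20, 30}) {
        pstmt.setInt(1, deptId);                // bind a new value, no re-parse
        try (ResultSet rs = pstmt.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2));
            }
        }
    }
}
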
    Statements Caching

Statement caching significantly improves performance. The JDBC driver caches SQL statements (PreparedStatements and CallableStatements) on close, using an LRU algorithm, then refers the RDBMS to the parsed form in its library cache (i.e., "use statement #2") during subsequent invocations of the same statement. It is enabled by setting implicit statement caching to true and allocating a statement cache in the driver memory (i.e., an array per physical connection).

      
    OracleDataSource ods = new OracleDataSource(); 
    ... 
    ods.setImplicitCachingEnabled( true ); 

    ods.setStmtCacheSize(nn);

    ...

    ResultSets Caching with Change Notification - the Hard Way (JDBC-OCI)

Caching JDBC result sets avoids re-executing the corresponding SQL query, resulting in dramatic Java application performance gains. RDBMSes allow caching ResultSets at the server side, but the application still needs a roundtrip to the database to get them. Optimizing further, these result sets can be pushed to the drivers (JDBC, C/C++, PHP, C#, and so on) and grabbed by the applications without database roundtrips.
What if the ResultSets become stale, out of sync with the actual RDBMS data? RDBMSes furnish mechanisms to keep the ResultSets up to date. For example, the Oracle database Query Change Notification feature allows registering a SQL query with the RDBMS and receiving notifications when committed DMLs from other threads render the ResultSets out of sync.

    Java applications may explicitly implement ResultSet caching with change notification through the 
    following steps:

    Prerequisite: grant CHANGE NOTIFICATION to the schema (i.e., database user); 
     grant change notification to HR;  // might need your DBA's help.

    1) Create a registration


    OracleConnection conn = ods.getConnection();
    Properties prop = new Properties();
    prop.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
    prop.setProperty(OracleConnection.DCN_QUERY_CHANGE_NOTIFICATION,"true");
       ...
    DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(prop);
        ...

    2) Associate a query with the registration
      
      Statement stmt = conn.createStatement(); 
      // associating the query with the registration
      ((OracleStatement)stmt).setDatabaseChangeRegistration(dcr); 
      // any query that will be executed with the 'stmt' object will be associated with
      // the registration 'dcr' until 'stmt' is closed or 
      // '((OracleStatement)stmt).setDatabaseChangeRegistration(null);' is executed.
     ...



    3) Listen to the notification
     ...
     // Attach the listener to the registration. 
     // Note: DCNListener is a custom listener and not a predefined or standard 
     // listener
     DCNListener list = new DCNListener(); dcr.addListener(list); 
     ...

     catch(SQLException ex) { 
     // if an exception occurs, we need to close the registration in order 
     // to interrupt the thread otherwise it will be hanging around. 
      if(conn != null) 
         conn.unregisterDatabaseChangeNotification(dcr); 
      throw ex; 
     }



    ResultSets Caching with Change Notification - the Easy Way (JDBC-Thin with DB 18c)

    You may also enable ResultSet caching with invalidation, in a much easier way, using the following
     steps (once JDBC-Thin in Oracle database 18c is available on-premise).

    1) Set the following database parameters in the database configuration file also known as INIT.ORA

    CLIENT_RESULT_CACHE_SIZE=100M // e.g., maximum cache size, in bytes
    CLIENT_RESULT_CACHE_LAG=1000 // maximum delay for refreshing the cache (msec) 



    2) Set the JDBC connection property oracle.jdbc.enableQueryResultCache to true (the default).



    3) add the following hint to the SQL query string  "/*+ RESULT_CACHE */"

    Example "SELECT /*+ RESULT_CACHE */ product_name, unit_price 
                 FROM PRODUCTS WHERE unit_price > 100"

    If changing the Java/JDBC source code to add the SQL hint is not an option, you can instruct the
     RDBMS to cache the ResultSets of all queries related to a specific table, either at table creation 
    (default mode) or later (force mode); this is known as Table annotation.

    Examples              

    CREATE TABLE products (...) RESULT_CACHE (MODE DEFAULT);
       ALTER TABLE products RESULT_CACHE (MODE FORCE);

The RDBMS furnishes views such as V$RESULT_CACHE_STATISTICS and the CLIENT_RESULT_CACHE_STATS$ table for monitoring the effectiveness of ResultSet caching. See section 15 of the performance tuning guide for more details on configuring the server-side result set cache.
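
Putting steps 2 and 3 together on the client side could look like the following sketch (ods is the OracleDataSource from the earlier examples; the products query is the illustrative one from above):

Properties prop = new Properties();
prop.setProperty("oracle.jdbc.enableQueryResultCache", "true"); // true is the default
ods.setConnectionProperties(prop);

try (Connection conn = ods.getConnection();
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
         "SELECT /*+ RESULT_CACHE */ product_name, unit_price "
       + "FROM products WHERE unit_price > 100")) {
    while (rs.next()) {
        System.out.println(rs.getString(1) + ": " + rs.getDouble(2));
    }
}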

    Array Fetch

    Array fetching is an absolute necessity when retrieving a large number of rows from a ResultSet.
    The fetch size can be specified on Statement, PreparedStatement, CallableStatement, and 
    ResultSet objects.
    Example: pstmt.setFetchSize(20);



    When using the Oracle database, this array size is capped by the RDBMS's internal buffer known as
    Session Data Unit (SDU). The SDU buffer is used  for transferring data from the tables to the client, 
    over the network. The size of this buffer, in bytes, can be specified in JDBC URL

          jdbc:oracle:thin:@(DESCRIPTION=(SDU=10240)
                         (ADDRESS=(PROTOCOL=tcp)(HOST=myhost-vip)(PORT=1521))
                         (CONNECT_DATA=(SERVICE_NAME=myorcldbservicename)))



    or at the service level in Net Services configuration files sqlnet.ora and tnsnames.ora.
    There is a hard limit depending on the RDBMS release: 2MB with DB 12c, 64K with DB 11.2,
     and 32K with DB pre-11.2.
    In summary, even if you set the array fetch to a large number, it cannot retrieve more data than the
    SDU permits.

    Array DML (Update Batch)

    The JDBC specification defines array operations as sending a batch of the same DML operations
    (i.e.,  array INSERTs, array UPDATEs, array DELETE) for sequential execution at the server, thereby
     reducing network round-trips.

Update Batching consists in explicitly invoking the addBatch method, which adds a statement to an array operation, then explicitly calling the executeBatch method.

    ...
     PreparedStatement pstmt =
      conn.prepareStatement("INSERT INTO employees VALUES(?, ?)");
    pstmt.setInt(1, 2000);
    pstmt.setString(2, "Milo Mumford");
    pstmt.addBatch();
    pstmt.setInt(1, 3000);
    pstmt.setString(2, "Sulu Simpson");
    pstmt.addBatch();
    int[] updateCounts = pstmt.executeBatch();

    ...

Optimizing Network Traffic

Network Data Compression

The ability to compress data transmitted between the Java applications and the RDBMS over LAN or WAN reduces the volume of data, the transfer time and the number of roundtrips.

    // Enabling Network Compression 
    prop.setProperty("oracle.net.networkCompression","on"); 
    // Optional configuration for setting the client compression threshold.
    prop.setProperty("oracle.net.networkCompressionThreshold","1024"); ds.setConnectionProperties(prop); 
    ds.setURL(url); 
    Connection conn = ds.getConnection(); 
    ...

    Sessions Multiplexing 

    The Oracle database Connection Manager a.k.a. CMAN, furnishes the ability to funnel multiple database connections over a single network connection thereby saving OS resources.
In-Place Processing

As we have seen earlier, SQL statement execution involves a number of roundtrips between a database client (i.e., the Java mid-tier/web server) and the RDBMS; this is the rationale for using stored procedures. Even modern data processing frameworks such as Hadoop or Spark collocate the processing and the data for low latency.
All RDBMSes furnish stored procedures in various languages, including proprietary procedural languages such as Oracle's PL/SQL, but also Java, JavaScript, and even PHP, Perl, Python, and TCL.
    I discussed the pros and cons of stored procedures in chapter 1 of my book.
    I'd add that in a modern Micro-services based architecture, stored procedures are perfect for designing data-bound services.

The Oracle database furnishes Java and PL/SQL stored procedures. Java in the database is one of the best Oracle database gems; see some code samples on GitHub.
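
For completeness, here is how a stored procedure would typically be invoked from JDBC; the raise_salary procedure and its parameters are purely hypothetical:

// Hypothetical PL/SQL procedure:
//   PROCEDURE raise_salary(p_emp_id IN NUMBER, p_percent IN NUMBER, p_new_salary OUT NUMBER)
try (CallableStatement cstmt = conn.prepareCall("{call raise_salary(?, ?, ?)}")) {
    cstmt.setInt(1, 2000);                                 // employee id
    cstmt.setInt(2, 10);                                   // raise percentage
    cstmt.registerOutParameter(3, java.sql.Types.NUMERIC); // new salary (OUT)
    cstmt.execute();
    System.out.println("New salary: " + cstmt.getBigDecimal(3));
}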

    Scaling Out Java Applications that use an RDBMS
    In this section, I will discuss scaling Java applications using Sharded databases, Multitenant databases, database proxy and the upcoming asynchronous Java database access API.  
Horizontal Scaling of Java Applications with Sharded Databases

Sharded databases have been around for a while; think of shards as horizontal partitioning of tables across several databases (in other words, partitions on steroids!).
The main impact for developers is that Java applications must be shard-aware; in other words, they must (i) define which fields serve as the sharding key, and (ii) set the bind values and build the sharding key (and, optionally, the super sharding key) before requesting a connection from the datasource. RDBMS vendors are actively working on a routing capability which will remove the need for shard-awareness (see database proxy, later in this post).

    Java SE 9 furnishes the standard APIs for building the sharding and supersharding keys.

    DataSource ds = new MyDataSource();
    ShardingKey shardingKey = ds.createShardingKeyBuilder()
    .subkey("abc", JDBCType.VARCHAR)
    .subkey(94002, JDBCType.INTEGER)
    .build();
     ...
     Connection con = ds.createConnectionBuilder()
    .shardingKey(shardingKey)
    .build();

Depending on the RDBMS implementation, the map of shard keys across databases, also known as the shard topology, is maintained by an external mechanism known as the "Shard Director" (in the Oracle database implementation). Without further optimization, all connection requests (with a mandatory sharding key) go to the Shard Director, which finds the corresponding shard; a connection is then established with that shard.

    A Shared Pool for Sharded DBs

    The Oracle Universal Connection Pool (UCP) furnishes a shared single pool for all shards.
UCP has been enhanced to transparently fetch the shard map (i.e., all the keys that map to a specific shard) from the Shard Director during the first connection to a specific shard. Once UCP gets the key range, it no longer needs to go to the Shard Director for subsequent connection requests related to that shard. After a little while, assuming your Java application randomly accesses all shards, UCP will have obtained the entire shard topology from the Shard Director. A high-availability byproduct of UCP caching the shard topology is that shard-aware Java applications can keep working even if the Shard Director is down.
Scaling Java Applications with Multi-Tenant Databases

Multi-tenancy is a key business requirement for enterprise Java applications. It could be simulated at the application level, but true multi-tenancy requires a multi-tenant RDBMS where each tenant has its own database.
Multi-tenant RDBMSes scale by managing thousands of databases with one or very few database instances (an instance being the set of processes and memory structures necessary for managing a database), thereby drastically reducing the required computing resources.

    How would Java applications scale with Multi-Tenant RDBMS?

A non-multi-tenant-aware connection pool would allocate a pool per database, defeating the purpose. UCP has been enhanced to use a single shared pool for all pluggable databases, a.k.a. PDBs (a PDB is the tenant-specific database in Oracle's Multi-tenant architecture).
Upon a connection request for a specific PDB, if there is no free/available connection attached to that tenant database, UCP transparently repurposes an idle connection in the pool that was attached to another PDB, re-attaching it to the requested one. This allows a small set of pooled connections to service all tenants, while avoiding new connection creation (remember, this is very expensive!) and preserving system resources.
    See the UCP doc for more details on using one datasource per tenant or a single datasource for all tenants.
    Database proxy 
    Proxies are man-in-the-middle software running between the database and its clients e.g., Java applications. There are several proxy offerings on the market; to name a few: MySQL Router, the Oracle Database Connection Manager in Traffic Director Mode (CMAN-TDM), ProxySQL, and so on.
The Oracle CMAN-TDM is new in Oracle database 18c; it is an extension of the existing Oracle Connection Manager, a.k.a. CMAN, and furnishes the following new capabilities:
    • Fully transparent to applications
    • Routes database traffic to right instance (planned)
    • Hides database planned and unplanned outages to support zero application downtime
    • Optimizes database session usage and application performance 
    • Enhances database security
CMAN-TDM is client agnostic; in other words, it supports all database client applications including Java, C, C++, .NET, Node.js, Python, Ruby, and R.
Java applications would connect to CMAN-TDM which, in its turn, connects to the database using the latest driver and libraries, then transparently furnishes the quality of service that the application would only get if it were itself using the latest driver and APIs.

See more details on the CMAN landing page and in the Net Services documentation linked from the landing page.
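
Since CMAN-TDM is transparent to the application, the only client-side change is usually the connect descriptor, which points at the proxy instead of the database listener. A sketch (the proxy host, port and service name are placeholders):

OracleDataSource ds = new OracleDataSource();
// Point the connect descriptor at the CMAN-TDM proxy instead of the database listener
ds.setURL("jdbc:oracle:thin:@(DESCRIPTION="
        + "(ADDRESS=(PROTOCOL=tcp)(HOST=cman-proxy-host)(PORT=1522))"
        + "(CONNECT_DATA=(SERVICE_NAME=myorcldbservicename)))");
ds.setUser("hr");      // placeholder credentials
ds.setPassword("hr");
Connection conn = ds.getConnection();
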
    Asynchronous Java Database Access API (ADBA)
The existing JDBC API leads to blocked threads, thread scheduling, and contention; it is not suitable for reactive applications or high-throughput, large-scale deployments. There exist non-standard asynchronous Java database access APIs, but the Java community needs a standard one where user threads never block. User threads submit database operations and return; the API implementation takes care of executing the operations, independently of user threads.
    This new API proposal is not intended to be an extension to, or a replacement for, JDBC but, rather, an entirely separate API that provides completely nonblocking access to the same databases as JDBC.


    The new API proposal relies on the java.util.concurrent.CompletionStage interface; it is available for download from the OpenJDK sandbox @  http://tinyurl.com/java-async-db.
You can see some examples in the latest JavaOne presentation @ http://bit.ly/2wi948k.

There was a suggestion on the mailing list http://mail.openjdk.java.net/pipermail/jdbc-spec-discuss/ to rather base the API on the Reactive Streams class java.util.concurrent.Flow; you can follow that discussion on the mailing list.

    I would encourage all the readers of this blog to review the API and get involved in the discussion.
    In order to help the community get a feel of ADBA, an alpha version of it that runs over the vanilla/synchronous JDBC -- that we are calling AoJ for ADBA over JDBC -- will be posted soon, along with a couple of working examples.
    I will announce it, when available, on my social media streams including @kmensah, http://db360.blogspot.com/, https://www.linkedin.com/in/kmensah.

    Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415

    Amis Blog - Thu, 2018-03-29 10:26

    We are in the process of upgrading our Oracle Clusters and SIHA/Restart systems to Oracle 12.2.0.1

The upgrade of the Grid-Infra home on an Oracle SIHA/Restart system from 11.2.0.4 to 12.2.0.1 fails when running rootupgrade.sh with the error message:

CRS-2415: Resource 'ora.asm' cannot be registered because its owner 'root' is not the same as the Oracle Restart user 'oracle'

    We start the upgrade to 12.2.0.1 (with Jan2018 RU patch) as:
    $ ./gridSetup.sh -applyPSU /app/software/27100009

    The installation and relink of the software looks correct.
    However, when running the rootupgrade.sh as root user, as part of the post-installation,
    the script ends with :

    2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/crsctl query has softwareversion
    2018-03-28 11:20:27: Command output:
    > Oracle High Availability Services version on the local node is [12.2.0.1.0]
    >End Command output
    2018-03-28 11:20:27: Version String passed is: [Oracle High Availability Services version on the local node is [12.2.0.1.0]]
    2018-03-28 11:20:27: Version Info returned is : [12.2.0.1.0]
    2018-03-28 11:20:27: Got CRS softwareversion for su025p074: 12.2.0.1.0
    2018-03-28 11:20:27: The software version on su025p074 is 12.2.0.1.0
    2018-03-28 11:20:27: leftVersion=11.2.0.4.0; rightVersion=12.2.0.0.0
    2018-03-28 11:20:27: [11.2.0.4.0] is lower than [12.2.0.0.0]
    2018-03-28 11:20:27: Disable the SRVM_NATIVE_TRACE for srvctl command on pre-12.2.
2018-03-28 11:20:27: Invoking "/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first"
    2018-03-28 11:20:27: trace file=/app/oracle/crsdata/su025p074/crsconfig/srvmcfg1.log
    2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first
    2018-03-28 11:21:02: Command output:
    > PRCA-1003 : Failed to create ASM asm resource ora.asm
    > PRCR-1071 : Failed to register or update resource ora.asm
> CRS-2415: Resource 'ora.asm' cannot be registered because its owner 'root' is not the same as the Oracle Restart user 'oracle'.
    >End Command output
2018-03-28 11:21:02: "upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first" failed with status 1.
2018-03-28 11:21:02: Executing cmd: /app/gi/12201_grid/bin/clsecho -p has -f clsrsc -m 180 "/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first"
    2018-03-28 11:21:02: Command

    The rootupgrade.sh script is run as the root user as prescribed,  but root cannot add the ASM resource.
    This leaves the installation unfinished.

There is no description in the Oracle Knowledge Base; however, according to Oracle Support, this problem is caused by the unpublished Bug 25183818 : SIHA 11204 UPGRADE TO MAIN IS FAILING.

As of March 2018, no workaround or software patch is yet available.

    The post Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415 appeared first on AMIS Oracle and Java Blog.
