Feed aggregator

Making our way into Dremio

Rittman Mead Consulting - Wed, 2018-07-25 04:07

In an analytics system, we typically have an Operational Data Store (ODS) or staging layer; a performance layer or some data marts; and on top, there would be an exploration or reporting tool such as Tableau or Oracle's OBIEE. This architecture can lead to latency in decision making, creating a gap between analysis and action. Data preparation tools like Dremio can address this.

Dremio is a Data-as-a-Service platform allowing users to quickly query data, directly from the source or in any layer, regardless of its size or structure. The product makes use of Apache Arrow, allowing it to virtualise data through an in-memory layer, creating what is called a Data Reflection.

The intent of this post is an introduction to Dremio; it provides a step by step guide on how to query data from Amazon's S3 platform.

I wrote this post on my MacBook Pro; Dremio is supported on macOS. To install it, I needed to make some configuration changes because of my Java version: the latest version of Dremio uses Java 1.8, so if you have a more recent Java version installed you’ll need to adjust the Dremio configuration files.

Let's start by downloading and installing Dremio. Builds are available for multiple platforms and can be downloaded from here.

Dremio uses Java 1.8, so if you have a different version please make sure you install Java 1.8 and edit /Applications/Dremio.app/Contents/Java/dremio/conf/dremio-env to point to the directory where the Java 1.8 home is located.
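The edit might look like the following. This is only a sketch: the exact variable name honored by dremio-env and the JDK path below are assumptions, so check your own Java 1.8 location first.

```shell
# /Applications/Dremio.app/Contents/Java/dremio/conf/dremio-env
# Assumed JDK path -- replace with the output of:
#   /usr/libexec/java_home -v 1.8
JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_181.jdk/Contents/Home
```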

After that you should be able to start Dremio like any other macOS application and access http://localhost:9047.

Configuring S3 Source

Dremio can connect to relational databases (both commercial and open source), NoSQL stores, Hadoop, cloud storage, Elasticsearch, among others. However, the scope of this post is to use a well-known cloud storage service, an Amazon S3 bucket (more details can be found here), and show the query capabilities of Dremio against raw, unstructured files.

For this demo we're using Garmin CSV activity data that can be easily downloaded from Garmin activity page.

Here is an example of a Garmin CSV activity. If you don't have a Garmin account you can always replicate the data below.

act,runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories
1,NMG,1,00:06:08.258,00:06:06.00,1,36,--,0:06:08  ,0:06:06  ,0:04:13  ,175.390625,193.0,92.89507499768523,--,--,--,65
1,NMG,2,00:10:26.907,00:10:09.00,1,129,--,0:10:26  ,0:10:08  ,0:06:02  ,150.140625,236.0,63.74555754497759,--,--,--,55
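Since the export marks missing values with '--', it's worth checking how these sample rows parse. Here is a minimal sketch in Python's csv module; the helper below is my own illustration, not part of Dremio:

```python
import csv
import io

# Garmin exports use '--' as a placeholder for missing values.
def to_float(value):
    return None if value.strip() == "--" else float(value)

sample = """act,runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories
1,NMG,1,00:06:08.258,00:06:06.00,1,36,--,0:06:08  ,0:06:06  ,0:04:13  ,175.390625,193.0,92.89507499768523,--,--,--,65
1,NMG,2,00:10:26.907,00:10:09.00,1,129,--,0:10:26  ,0:10:08  ,0:06:02  ,150.140625,236.0,63.74555754497759,--,--,--,55"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Elevation Gain parses cleanly; Elev Loss is '--' (missing) in both rows.
total_gain = sum(to_float(r["Elevation Gain"]) or 0 for r in rows)
```

This mirrors the cleanup Dremio has to do for us when we map these text columns to numbers later on.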

For user information data we have used the following dataset

runner,dob,name
JM,01-01-1900,Jon Mead
NMG,01-01-1900,Nelio Guimaraes

Add your S3 credentials to access

After configuring your S3 account, all buckets associated with it will appear under the new source area.

For this post I’ve created two buckets, nmgbuckettest and nmgdremiouser, containing data that could be interpreted as a data mart.

nmgbuckettest - contains Garmin activity data, in CSV format, that could be seen as a fact table:

Act,Runner,Split,Time,Moving Time,Distance,Elevation Gain,Elev Loss,Avg Pace,Avg Moving Paces,Best Pace,Avg Run Cadence,Max Run Cadence,Avg Stride Length,Avg HR,Max HR,Avg Temperature,Calories

nmgdremiouser - contains user data, in CSV format, that could be seen as a user dimension:

runner,dob,name

Creating datasets

After we add the S3 buckets we need to set up the CSV format. Dremio does most of the work for us; however, we may still need to adjust some fields, for example date formats, or map a field as an integer.

By clicking on the gear icon we access a configuration panel where we can set the format options. Our CSVs were pretty clean, so I just changed the line delimiter to \n and checked the Extract Field Name option.

Let's do the same for the second set of CSVs (the nmgdremiouser bucket).

Clicking Save takes us to a new panel where we can start performing some queries.

However, as mentioned before, at this stage we might want to adjust some fields. Here I'll adapt the dob field from the nmgdremiouser bucket to use the dd-mm-yyyy format.
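The dob values in this dataset ("01-01-1900") are in day-month-year order, which is why the dd-mm-yyyy format matters. In Python terms the adjustment corresponds to the following; this is an illustration only, not Dremio's internals:

```python
from datetime import datetime

# Parse the dob column using day-month-year order, mirroring the
# dd-mm-yyyy format configured for the Dremio dataset.
def parse_dob(value):
    return datetime.strptime(value, "%d-%m-%Y").date()
```

Parsing the same string as mm-dd-yyyy would silently swap day and month for ambiguous dates, so getting this mapping right in the dataset definition matters.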

Apply the changes and save the new dataset under the desired space.

Feel free to do the same for the nmgbuckettest CSVs. Following a data mart naming convention, I'll call the dataset coming from the nmgdremiouser bucket D_USER and the one coming from nmgbuckettest F_ACTIVITY.

Querying datasets

Now that we have D_USER and F_ACTIVITY datasets created we can start querying them and do some analysis.

This first analysis will tell us which runner climbs the most during their activities:

SELECT round(nested_0.avg_elev_gain) AS avg_elev_gain, round(nested_0.max_elev_gain) AS max_elev_gain, round(nested_0.sum_elev_gain) as sum_elev_gain, join_D_USER.name AS name
FROM (
  SELECT avg_elev_gain, max_elev_gain, sum_elev_gain, runner
  FROM (
    SELECT AVG(to_number("Elevation Gain",'###')) as avg_elev_gain,
    MAX(to_number("Elevation Gain",'###')) as max_elev_gain,
    SUM(to_number("Elevation Gain",'###')) as sum_elev_gain,
    runner
    FROM dremioblogpost.F_ACTIVITY
    where "Elevation Gain" != '--'
    group by runner
  ) nested_0
) nested_0
 INNER JOIN dremioblogpost.D_USER AS join_D_USER ON nested_0.runner = join_D_USER.runner    

To enrich the example, let's find out who the fastest runner is, together with their total climbing:

 SELECT round(nested_0.km_per_hour) AS avg_speed_km_per_hour, nested_0.total_climbing AS total_climbing_in_meters, join_D_USER.name AS name
FROM ( 
  SELECT km_per_hour, total_climbing, runner
  FROM (
    select avg(cast(3600.0/((cast(substr("Avg Moving Paces",3,2) as integer)*60)+cast(substr("Avg Moving Paces",6,2) as integer)) as float)) as km_per_hour,
        sum(cast("Elevation Gain" as integer)) total_climbing,
        runner
        from dremioblogpost.F_ACTIVITY
        where "Avg Moving Paces" != '--'
        and "Elevation Gain" != '--'
        group by runner
  ) nested_0
) nested_0
 INNER JOIN dremioblogpost.D_USER AS join_D_USER ON nested_0.runner = join_D_USER.runner
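The 3600.0/((minutes*60)+seconds) expression in the query above converts a min/km pace into km/h. Here is the same arithmetic as a standalone Python sketch (my own helper, not Dremio code), which also shows why the substr() offsets pick out the minutes and seconds:

```python
def pace_to_kmh(pace):
    """Convert a pace string like '0:06:06' (h:mm:ss per km) to km/h."""
    _, minutes, seconds = pace.strip().split(":")
    seconds_per_km = int(minutes) * 60 + int(seconds)
    # 3600 seconds per hour divided by seconds per km gives km per hour.
    return 3600.0 / seconds_per_km
```

A 6:06 min/km pace is 366 seconds per km, i.e. roughly 9.8 km/h, which is the kind of value the avg_speed_km_per_hour column returns.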

Conclusions

Dremio is an interesting tool, capable of unifying existing repositories of unstructured data. Is Dremio capable of working with any volume of data and complex relationships? I believe that right now it isn't: even with the simple and small data sets used in this example, the performance was not great.

Dremio does successfully provide self-service access to most platforms, meaning that users don't have to move data around before being able to perform any analysis. This is probably the most exciting part of Dremio: it may well represent a "good enough" way to access data across multiple sources, allowing data scientists to do analysis before the data is formally structured.

Categories: BI & Warehousing

Is my problem solvable by use of a hierarchical query?

Tom Kyte - Wed, 2018-07-25 03:46
Hello, I have this problem I haven't been able to solve so far, though I suspect it should be using a hierarchical query. I included sample/test data in the LiveSQL script I saved and shared, it includes the desired result the hypothetical query w...
Categories: DBA Blogs

Getting error as ORA-29283: invalid file operation

Tom Kyte - Tue, 2018-07-24 09:26
Hi, we have created this anonymous block to capture the data using DB link and write into CSV file. The details are as below, <code>--Table create table emp (empno number) / --insert insert into emp vaules (10) / insert into emp vaul...
Categories: DBA Blogs

External table: only one rejected record is loaded into bad file

Tom Kyte - Tue, 2018-07-24 09:26
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production I am using following code to create external table. <code> V_SQL := 'CREATE TABLE ' || L_EXT_TABLE || ' (' || L_CNAMES || ') ORGANIZATION EXTERNAL ( TYPE...
Categories: DBA Blogs

performance tuning ==> excessive child cursors

Tom Kyte - Tue, 2018-07-24 09:26
hi Team , my cursor_sharing parameter is set to exact , amount of parsing i have checked is less but some query have very high number of child cursor . One of update statement have 506 child cursor . can you help me out how to deal with this...
Categories: DBA Blogs

Need solution for, No of columns in the update statement are dynamic

Tom Kyte - Tue, 2018-07-24 09:26
Hi Tom, I want to write an UPDATE statement in which the number of columns may vary for each run based on the user choice. Eg: DDL <code>CREATE TABLE "XX_MASK_REF" ( "TABLE_NAME" VARCHAR2(150 BYTE), "COLUMN_NAME" VARCHAR2(150 BYTE), ...
Categories: DBA Blogs

Application vs Middle Tier Connection Pooling

Tom Kyte - Tue, 2018-07-24 09:26
Do you have a recommendation for use of connection pooling in the application vs a middle tier pool? Would you suggest WLS to manage a conn pool at the middle tier? any issue for .net or python applications to access the pool for connections?
Categories: DBA Blogs

Create Age Group Dynamically Based on Lower and Upper Age limit and age band

Tom Kyte - Tue, 2018-07-24 09:26
Hi, I have following requirement and seeking help to achieve this via SQL only (no PL/SQL block) - Table A contains following columns - 1. Employee_ID 2. Employee_Age 3. Employee_Dept The end user needs to provide following 3 prompt val...
Categories: DBA Blogs

SQL Server on Linux – I/O internal thoughts

Yann Neuhaus - Tue, 2018-07-24 07:55

Let’s start the story from the beginning, with some fun testing I tried to perform with SQL Server on Linux a couple of months ago. At that time, I wanted to get a picture of syscalls from SQL Server, as I had already done in the past on the Windows side with Sysinternals tools such as Procmon and stack traces. On Linux, strace is probably one of the best counterparts.

Firstly, please note this blog post comes from my own research; it does not in any way constitute official documentation from Microsoft and may well contain some inaccuracies. This kind of write-up is definitely not easy, especially when you’re not a developer of the product, and because things change quickly nowadays, potentially making your blog post outdated :) Anyway, I was just curious to figure out how SQL Server deals with I/O on the Linux side, and the main point here is certainly to show how you may investigate it on Linux. So let’s start with what we already know on the Windows operating system: the SQL Server engine goes through the Win32 API and functions like CreateFile(), ReadFile() and WriteFile() to deal with I/O, but let’s focus specifically on the CreateFile() function here. CreateFile() is used to create or open an existing file or an I/O device with some specific flags. Some of them, such as FILE_FLAG_WRITE_THROUGH, are used to meet the Write-Ahead Logging (WAL) protocol by bypassing all system / disk caches (cf. the Microsoft article).

On April 10, 2018 I did my first tests on the Linux side with SQL Server 2017 CU5, and here was my first strace output after creating the dbi_db database:

It was an expected output for me, because on Linux SQL Server uses the low-level open() system call, the counterpart of CreateFile() on Windows, but the surprising thing was seeing the O_DIRECT flag only. I’m not a system developer, and from my position I can understand the benefits of using O_DIRECT with database systems, because it pairs well with AIO (asynchronous I/O) and because we may completely bypass any kernel-space buffers (by default on Linux). I take the opportunity to thank @dbaffaleuf for our interesting discussions on this topic.

But referring to the documentation we may read the following sample as well about O_DIRECT:

File I/O is done directly to/from user-space buffers. The O_DIRECT flag on its own makes an effort to transfer data synchronously, but does not give the guarantees of the O_SYNC flag that data and necessary metadata are transferred.

From my understanding, using O_DIRECT implies that durable writes to block devices are not guaranteed, and from my trace I noticed the transaction log seemed to be opened with O_DIRECT in this case …

2837  20:44:32 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 187
2837  10:13:09 fstat(187, {st_mode=S_IFREG|0660, st_size=0, ...}) = 0

… From a WAL protocol perspective, using such a flag might not meet the requirements, because we might experience data loss in case of a system / storage outage, unless we either implement another low-level mechanism like fsync() for the transaction log and checkpoints, or are sure the storage guarantees that writes are O_DIRECT-safe. At this stage I expected to find more fsync()-related entries in my trace, but no luck, as shown below (I include only a summary, but in fact there were no other relevant syscalls in the entire trace that might indicate forced synchronization).

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 72.59   12.993418        2089      6220      2503 futex
 10.78    1.929811        7509       257           epoll_wait
  9.08    1.625657       47813        34        28 restart_syscall
  3.28    0.587733        8643        68           io_getevents
  1.97    0.351806       70361         5           nanosleep
  1.92    0.344254       34425        10        10 rt_sigtimedwait
  0.32    0.056943           3     22116           clock_gettime
  0.02    0.003530         101        35           munmap
  0.01    0.002149          32        67           io_submit
  0.01    0.001706           6       273           epoll_ctl
  0.01    0.000897           6       154           read
  0.00    0.000765          11        68           writev
  0.00    0.000605           4       136        68 readv
  0.00    0.000591           4       137           write
  0.00    0.000381          10        40        23 open
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.900633                 29827      2638 total

A couple of weeks ago, I wanted to update my test environment with SQL Server 2017 CU8 (on Linux) I noticed the following messages (in the red rectangle):

Hmm … that was pretty new and interesting. In fact, the aforementioned messages relate to this Microsoft article (thanks @Bobwardms for pointing it out to me). This new behavior is available since SQL Server 2017 CU6, and the article describes how Microsoft has introduced a new “forced flush” mechanism for SQL Server on Linux systems.

In a nutshell: in all scenarios, a new flush mechanism guarantees data is safely written to stable media for transaction logs and during checkpoints. Let’s dig further into both methods.

First, note that I applied the same protocol for all the tests that follow: one dbo.t1 table with a single column (id int). For each test I inserted a bunch of rows (67 rows, to be precise, corresponding to 67 distinct implicit transactions) without any other concurrent user activity. Some SQL Server internal activity remains, but I guess we may consider it negligible compared to my tests.

insert dbo.t1 values (1)
go 67

  • Default forced flush mechanism behavior

In this scenario, referring to my strace output, database files are still opened with O_DIRECT only, as shown below (my database’s name is toto this time):

4745  10:54:34 open("/u01/sqlserverdata/mssqlserver/toto.mdf", O_RDWR|O_CREAT|O_EXCL|O_DIRECT, 0660) = 188

I also used the following command to get a summary of the number of calls per syscall directly:

$ sudo strace -f -c -o sql_strace.txt $(pidof sqlservr |sed 's/\([0-9]*\)/\-p \1/g')

Here is a sample of the output I got:

$ cat sql_strace.txt
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 71.36   24.078231        2685      8969      3502 futex
 12.13    4.093680      120402        34        30 restart_syscall
 10.03    3.384817       12583       269           epoll_wait
  2.67    0.901541       13258        68           io_getevents
  1.77    0.598830       46064        13        13 rt_sigtimedwait
  1.66    0.560893       93482         6           nanosleep
  0.23    0.077873           2     31422           clock_gettime
  0.09    0.030924         462        67           fsync
  0.01    0.003212         321        10           madvise
  0.01    0.003026          22       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   33.742080                 42195      3651 total

This time I noticed additional calls to fsync() to guarantee writes to the blocks: 67 calls that seem to map to my 67 transactions, right? I double-checked in my strace output and it seems that is the case, but I may be wrong on this point, so please feel free to comment. Another interesting point is that I continued to notice asynchronous I/O, through the io_getevents function that appeared in my trace. That makes sense to me: writes of data are asynchronous, while those to the transaction log are synchronous by design. In this mode, fsync() is triggered by transaction commits and checkpoints.

  • Enabling trace flag 3979

Enabling trace flag 3979 disables the forced flush behavior, replacing it with the writethrough and alternatewritethrough options. Referring to the Microsoft article, the former translates the well-known FILE_FLAG_WRITE_THROUGH flag requests into O_DSYNC opens, but with a performance optimization: using fdatasync() rather than fsync(). Indeed, fdatasync() is supposed to generate less I/O activity because it doesn’t require synchronizing file metadata (only the data portion of the file is concerned here).
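Linux exposes both flush primitives directly, and Python's os module wraps them, so the distinction the article makes can be reproduced in a few lines. This is a minimal standalone sketch on a temporary file (not SQL Server's code): fdatasync() flushes the file's data but may skip non-essential metadata such as the modification time, while fsync() flushes both. Note that os.fdatasync() is only available on Unix-like systems.

```python
import os
import tempfile

# Write a record, then force it to stable media the two ways the
# article contrasts: fdatasync() (data only) vs fsync() (data + metadata).
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"commit record\n")
    os.fdatasync(fd)   # cheaper flush: what alternatewritethrough optimizes for
    os.write(fd, b"checkpoint\n")
    os.fsync(fd)       # full flush, including file metadata
finally:
    os.close(fd)

with open(path, "rb") as f:
    content = f.read()
os.remove(path)
```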

Anyway, my strace sample output below confirmed that both the data file and the transaction log were opened with both O_DIRECT and O_DSYNC, meaning we are bypassing the kernel buffer space and also forcing synchronous I/O for both files. Does it matter? Well, from my understanding, writing dirty data pages is still an asynchronous process when checkpoints occur. How much better does it perform in this case? I don’t know, and answering that would likely require another round of thorough tests on different use cases.

5279  11:07:36 open("/u01/sqlserverdata/mssqlserver/dbi_test.mdf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185
…
5279  11:07:38 open("/u02/sqlserverlog/mssqlserver/dbi_test_log.ldf", O_RDWR|O_CREAT|O_EXCL|O_DSYNC|O_DIRECT, 0660) = 185

As previously, I noticed 67 calls, this time to fdatasync() (no longer fsync()), likely related to my 67 implicit transactions, and I still continued to notice asynchronous I/O driven by the io_getevents() and io_submit() syscalls.

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 73.90   12.678046        2360      5371      1927 futex
 10.80    1.853571        7788       238           epoll_wait
  9.05    1.551962       45646        34        30 restart_syscall
  1.93    0.331275       47325         7         7 rt_sigtimedwait
  1.84    0.316524      105508         3           nanosleep
  1.61    0.276806        4131        67           io_getevents
  0.71    0.121815           7     18259           clock_gettime
  0.06    0.010184         152        67           fdatasync
  0.02    0.003851          26       146           read
  0.02    0.003217          12       272           epoll_ctl
  0.01    0.002139          32        67           io_submit
  0.01    0.002070          15       136           write
  …
------ ----------- ----------- --------- --------- ----------------
100.00   17.156630                 25170      2054 total

Finally, at the moment of this write-up, Microsoft recommends enabling trace flag 3979, as well as changing the default values of the writethrough and alternatewritethrough options to 0, to revert to the pre-CU6 behavior illustrated in the first section of this blog post, but only if your storage guarantees your writes will be “O_DIRECT”-safe. I think you can understand why now, if my understanding of Linux I/O is correct :)

See you!

Cet article SQL Server on Linux – I/O internal thoughts est apparu en premier sur Blog dbi services.

Banks to Offer Greater Insight into Accounts for Global Corporate Customers with Oracle

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Banks to Offer Greater Insight into Accounts for Global Corporate Customers with Oracle New Oracle Banking Virtual Account Management simplifies transaction banking, enables operational efficiencies

Redwood Shores, Calif.—Jul 24, 2018

Oracle Financial Services’ new platform enables banks’ corporate customers to more efficiently manage and monitor their banking accounts. As corporate businesses expand globally, their operations become exponentially more complex. They often have many separate accounts used to process accounts payable and receivable, often in multiple currencies, for example. They also must contend with dynamic market conditions and evolving regulatory requirements. Oracle Banking Virtual Account Management helps corporate businesses rationalize their complex accounting structures and simplify account management processes, such as centralizing their accounts using virtual account management (VAM). VAMs are non-physical accounts that can be used by corporate businesses to optimize their working capital processes. With Oracle’s VAM solution, banks can leverage a Virtual Account Identifier and gain the flexibility to maintain different rules for different corporate customers.

“Oracle is committed to building industry-first corporate banking capabilities that are digitally- enabled and exclusively designed for corporate banks,” said Chet Kamat, Senior Vice President, Banking, Oracle Financial Services. “To strengthen the relationship with their corporate customers, banks need to offer smarter transaction banking alternatives. Virtual accounts are an ideal way to give corporate customers access to real-time information and enable faster decision-making.”

Oracle Banking Virtual Account Management solution enables banks to centralize cash and liquidity and better manage transaction flows for corporate customers. This reduction in account complexity provides greater visibility and control of assets. Oracle’s VAM solution can facilitate easy segregation of inflow and outflow of funds and enables seamless and efficient reconciliation, which allows corporate customers to optimize working capital in a cost-effective manner. With Virtual Account Identifiers banks can enable corporate customers to set up their own rules for payment routing that best suits their business needs. Corporate businesses can now easily identify the paying customer and the purpose of payments.

The solution also enables banks to provide their customers the ability to define and manage their banking account structure at their convenience, in real-time. Customers can take charge of account structures across business units, geographies, and enjoy a friction-free banking experience. The solution also offers RESTFul APIs enabling banks to provide user experiences specific to customer needs.

Oracle Banking Virtual Account Management is pre-integrated with Oracle Banking Liquidity Management, Oracle Banking Payments and Oracle FLEXCUBE, which helps banks offer comprehensive support for virtual accounts. The solution is also capable of working with any core banking system, and banks can easily interface with corporate ERP systems and configure their customers’ banking requirements based on distinct needs.

Contact Info
Judi Palmer
Oracle Corporation
+1 650 607 6598
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 607 6598

Brian Pitts

  • +1 312 475 5921

Retail Consult Becomes Oracle PartnerNetwork Platinum Level Partner

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Retail Consult Becomes Oracle PartnerNetwork Platinum Level Partner Oracle Recognizes Retail Consult for Their Track Record of Success to Deliver Global Retail Transformation

Redwood Shores, Calif.—Jul 24, 2018

Retail Consult, a highly specialized group focused on technology solutions for retail, has achieved Platinum partner status in Oracle PartnerNetwork (OPN). With this Platinum level membership, Oracle recognizes Retail Consult for its in-depth expertise and excellence in delivering the Oracle Retail Solution Portfolio.

Retailers are under pressure to transform their business with solutions that will help them stay ahead of the competition. Retail Consult has demonstrated an extremely high success rate in helping clients overcome the risks inherent in digital transformation projects. Retail Consult provides solutions strategy, implementation, deployment, training and support services. These services, combined with the depth and breadth of the Oracle Retail solutions, have been a recipe for client success around the world.

Retail Consult has established the depth and breadth of its expertise across the Oracle Retail portfolio, including Oracle Retail Merchandise Operations Management, Oracle Retail Planning and Optimization, Oracle Retail Supply Chain Management, Oracle Retail Omnichannel, Oracle Commerce and Oracle Retail Insights. Many of Retail Consult’s resources are trained and certified to implement Oracle’s state-of-the-art solutions that uniquely address the challenges of retailers today.

“The breadth and depth of the Oracle Retail solution, combined with our retail business and technology expertise, deliver an unbeatable combination for client success in their retail transformation projects,” said Silvia Gomes, Partner, Retail Consult.

“Retail Consult is a valuable partner to Oracle Retail. They have a proven track record of successful delivery to our customer base,” said Jeff Warren, Vice President of Solution Management, Oracle Retail. “Retail Consult consistently contributes to the growth and education of our community.”

Examples of Exceptional Delivery of the Oracle Retail solutions by Retail Consult include:

 

With its Platinum status, Retail Consult receives the highest level of engagement, commitment and resources available to OPN partners. Platinum members receive dedicated virtual account management support to build joint development plans and help broaden Specialization areas and revenue opportunities. Additional benefits include priority placement in the OPN Solutions Catalog, one free application integration validated by Oracle, joint marketing and sales opportunities, discounted training and more. For more information about the benefits of becoming an OPN Platinum level partner, please visit: http://www.oracle.com/us/partnerships/index.htm

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Retail Consult

Retail Consult is a group of professionals who specialize in technology solutions for retail, offering clients global perspective and experience with operations in Europe, North, South and Central America. Retail Consult provides solutions strategy, implementation, deployment, training and support services. They serve clients across a range of retail segments including fashion, grocery, pharmacy, do it yourself, telecommunications, and electronics. In size, their clients range from regional, privately held companies to global retailers deploying multiple brands. Retail Consult is unique because they have an extremely high success rate for implementing Oracle Retail Solutions. Their values and mission drive them to be a strategic partner to all clients. http://www.retail-consult.com

About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle PartnerNetwork

Oracle PartnerNetwork (OPN) is Oracle's partner program that provides partners with a differentiated advantage to develop, sell and implement Oracle solutions. OPN offers resources to train and support specialized knowledge of Oracle’s products and solutions and has evolved to recognize Oracle’s growing product portfolio, partner base and business opportunity. Key to the latest enhancements to OPN is the ability for partners to be recognized and rewarded for their investment in Oracle Cloud. Partners engaging with Oracle will be able to differentiate their Oracle Cloud expertise and success with customers through the OPN Cloud program – an innovative program that complements existing OPN program levels with tiers of recognition and progressive benefits for partners working with Oracle Cloud. To find out more visit: http://www.oracle.com/partners.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Foodation Selects Oracle Food and Beverage to Drive International Growth and Innovation

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Foodation Selects Oracle Food and Beverage to Drive International Growth and Innovation Oracle Hospitality Simphony Cloud Helps Streamline Business Operations, Accelerate Speed of Service, and Enhance Customer Experience

Redwood Shores, Calif.—Jul 24, 2018

Foodation, an Italian company that specializes in designing and developing fast-casual restaurants across Europe, has chosen Oracle Hospitality Simphony Cloud for its restaurants, including the popular Briscola Pizza Society, Polpa Burger Trattoria and Mariù Italian Kebab. With 10 restaurants and plans for international expansion, Foodation needed a technology platform that could support the strategic growth of its business and offer a single view of operations across all of its restaurants. Additionally, the company wanted to increase operational efficiency through centralization, improve functionality and modernize the customer experience.

“Over the next five years, we not only plan to expand our business internationally, but we also want to position our brands as the go-to restaurants across Europe, and Oracle Hospitality Simphony Cloud is helping us to achieve this goal,” said Riccardo Cortese, CEO of Foodation. “Oracle Simphony Cloud provides us with a reliable and trusted foundation for growth and enables us to streamline our operations with one single view of business. Before the implementation, we only had insight into business operations data on a weekly basis, but now, using the simple, easy-to-use Oracle Micros InMotion app, we have a single view of all of our restaurants that we can view as frequently as we like.”

Oracle Simphony Cloud provides Foodation with one centralized system to manage operations across all six brands, allowing centralized reporting and auditing for the management team, together with drill-down capabilities to real-time, order-specific information. This minute-by-minute detail helps maximize revenue and profitability. In addition, Oracle Simphony Cloud integrates all digital channels, such as mobile payment, mobile ordering and mobile applications, for one centralized view of the customer, helping to accelerate speed of service and enhance the overall customer experience.

“In today’s competitive market, restaurants are challenged to operate not only faster, but smarter, in order to meet customers’ rising expectations,” said Chris Adams, vice president strategy, Oracle Food and Beverage. “Oracle Simphony Cloud enables business operations to flow more cohesively, efficiently and profitably while greatly enhancing the overall customer experience.”

For more information, learn how Foodation Serves Up A Growing Pizza Enterprise with Oracle Simphony, One Slice at a Time.

Contact Info
Matt Torres
Oracle
4155951584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

For more information about Oracle Food and Beverage, please visit www.Oracle.com/Food-Beverage

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 4155951584

Corporate Banks Can Drive Greater Digital Customer Engagement with New Oracle Solutions

Oracle Press Releases - Tue, 2018-07-24 07:00
Press Release
Corporate Banks Can Drive Greater Digital Customer Engagement with New Oracle Solutions New Corporate Banking Solutions Enable Smarter Operations for Credit Facilities, Corporate Lending and Trade Finance

Redwood Shores, Calif.—Jul 24, 2018

Oracle Financial Services launched three new corporate banking engagement solutions designed to meet the needs of banks looking to digitize their processes. Oracle Banking Credit Facilities Process Management, Oracle Banking Corporate Lending Process Management and Oracle Banking Trade Finance Process Management can enhance banks’ customer relationships, improve staff productivity and reduce costs.

“Oracle has long been committed to providing the banking sector with innovative financial solutions,” said Chet Kamat, Senior Vice President, Banking for Oracle Financial Services. “Harnessing the power of digitization for corporate banking operations is a step towards enabling banks to deliver exceptional value to their corporate customers. We have equipped them with purposeful insight for better decision-making, enabled standardization of processes to enhance distinct operational capabilities, and empowered them with real-time collaboration and intelligent automation to achieve higher levels of efficiency.”

The traditional approach in corporate banking has largely concentrated on product processor-oriented work, backed by reams of paperwork. The new solutions digitize these labor-intensive processes.

Accelerate Digitization

With the launch of Oracle Banking Credit Facilities Process Management banks can accelerate credit origination and servicing, pre-qualify credit lines, track exposures to customers in real-time, and mitigate business risks.

Oracle Banking Corporate Lending Process Management enables banks to meet customers’ on-demand and custom financing needs by accelerating the process of loan origination, servicing and closure.

Oracle Banking Trade Finance Process Management helps banks manage the end-to-end trade finance lifecycle for a wide range of trade services such as guarantees, documentary credit and collections.

Oracle’s corporate banking engagement solutions offer greater productivity for banks with many key features:

  • Host-agnosticism
  • Configurable workflow automation
  • Role-based dashboards
  • Intuitive UI
  • Multi-device support
  • Extensibility
  • Real-time alerting
  • Reporting
 

In addition, widgets provide intelligent operational and analytical data addressing the needs of sales, operations, risk and legal teams. These solutions are also capable of exposing APIs and providing a front-to-back integration, which enables banks to build an extensible banking platform and enhance their service offerings.

Contact Info
Judi Palmer
Oracle Corporation
+1 650 784 7901
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 784 7901

Brian Pitts

  • +1 312 475 5921

"ORA-22902: CURSOR expression not allowed" in ORDS and APEX and how to fix them

Dimitri Gielis - Tue, 2018-07-24 06:18
When you want to define different blocks of data, some hierarchical, some not, you can do that by using cursor expressions in SQL. An example of such a query looks like this:

  select
    c.cust_first_name,
    c.cust_last_name,
    c.cust_city,
    cursor(select o.order_total, order_name,
              cursor(select p.product_name, i.quantity, i.unit_price
                       from demo_order_items i, demo_product_info p
                      where o.order_id = i.order_id
                        and i.product_id = p.product_id
                    ) product                
             from demo_orders o
            where c.customer_id = o.customer_id
          ) orders
  from demo_customers c

In the above query you can see that cursor expressions can be nested. But you can also define cursors next to each other. We use this technique a lot when defining where the data comes from in the APEX Office Print (AOP) APEX plugin.
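As a minimal illustrative sketch (reusing the demo_customers and demo_orders tables from the query above; the order_total > 100 filter is just an arbitrary example), two sibling, non-nested cursor expressions can sit next to each other in the same select list:

  select
    c.cust_first_name,
    c.cust_last_name,
    cursor(select o.order_total
             from demo_orders o
            where o.customer_id = c.customer_id) all_orders,
    cursor(select o.order_total
             from demo_orders o
            where o.customer_id = c.customer_id
              and o.order_total > 100) large_orders
  from demo_customers c

Each cursor opens independently for every customer row, so each one can be treated as a separate block of data.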


In Oracle Application Express 18.1 there's a small bug (Bug 28298260 - REGRESSION: SQL QUERY CONTAINING CURSOR EXPRESSION CAN'T BE PARSED) that causes "ORA-22902: CURSOR expression not allowed" when you validate the query.

The APEX Dev team has already fixed it - you can download the bundle PSE patch #28128115 from Oracle Support. Once applied, everything validates correctly again.


AOP also supports REST web services, and some people define those in ORDS (Oracle REST Data Services). Depending on the version of ORDS you might get the same error: "Error during evaluation of resource template: GET test/cursor/, SQL Error Code: 22,902, SQL Error Message: ORA-22902: CURSOR expression not allowed"



This doesn't mean your environment cannot use the cursor syntax: just set the pagination size to 0 and the error goes away.

In the latest version of ORDS (18.2) you get a 500 error by default, without the error number.


The fix is the same: set pagination to 0 and you are good to go.
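If you define your REST services through the ORDS PL/SQL API rather than a UI, the pagination size can be set to 0 at definition time. The following is only a sketch, assuming the standard ORDS.DEFINE_MODULE/DEFINE_TEMPLATE/DEFINE_HANDLER calls; the module and pattern names simply mirror the test/cursor/ endpoint from the error message above, and you should check the parameter names against your ORDS version:

  BEGIN
    ORDS.DEFINE_MODULE(
      p_module_name    => 'test',
      p_base_path      => '/test/',
      p_items_per_page => 0);  -- default pagination size for the module

    ORDS.DEFINE_TEMPLATE(
      p_module_name => 'test',
      p_pattern     => 'cursor/');

    ORDS.DEFINE_HANDLER(
      p_module_name    => 'test',
      p_pattern        => 'cursor/',
      p_method         => 'GET',
      p_source_type    => ORDS.source_type_collection_feed,
      p_items_per_page => 0,  -- 0 disables pagination for this handler
      p_source         => 'select c.cust_first_name,
                                  cursor(select o.order_total
                                           from demo_orders o
                                          where o.customer_id = c.customer_id) orders
                             from demo_customers c');
    COMMIT;
  END;
  /

With pagination disabled, ORDS does not wrap the query in its paginating outer select, which is what triggers the ORA-22902.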

Categories: Development

Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries

Yann Neuhaus - Tue, 2018-07-24 05:57

Good news: the latest patchset for Oracle 12cR2 (which is not called a patchset anymore, but release 18c, numbered 18.0.0.0.0) is available for download on OTN. This is great because an OTN download does not require access to Support and Software Updates. It is available to anybody under the Free Developer License Terms (basically development, testing, prototyping, and demonstrating for an application that is not in production and for non-commercial use). We all complained about the ‘Cloud First’ strategy because we were eager to download the latest version. But the positive aspect of it is that we now have on OTN a release that has been stabilized after a few release updates. In the past, only the first version of the latest release was available there. Now we have one with many bugs fixed.

Of course, I didn’t wait, and I tested 18c as soon as it was available on the Oracle Cloud, thanks to the ACE Director program that provided me with some Cloud Credits. In this post, I’ll update my Cloud database to run it with the on-premises binaries. Because that’s the big strength of Oracle: we can run the same software, 100% compatible, on the Cloud and on our own servers. There are some limitations in the features available, but technically it is the same software.

Oracle Cloud First

Here is my Cloud version of Oracle 18c, installed in February (18.1) and updated in April (18.2) and July (18.3):

SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 
SQL> select banner from v$version;
 
BANNER
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> select banner_full from v$version;
 
BANNER_FULL
----------------------------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> select banner_legacy from v$version;
 
BANNER_LEGACY
--------------------------------------------------------------------------------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
 
SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

LINUX.X64_180000_db_home.zip

I have installed the on-premises 18c available on OTN. The good things about the new releases are:

  • No need to extract installer files. Just unzip the Oracle Home and link the executable
  • This Oracle Home image already includes the latest Release Updates


SQLcl: Release 18.2 Production on Tue Jul 24 11:02:56 2018
 
Copyright (c) 1982, 2018, Oracle. All rights reserved.
 
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
 
SQL> host $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)
OPatch succeeded.
 

We have 4 updates from July here for the following components:

  • The Database (28090523)
  • The Java in the Oracle Home, aka JDK (27908644)
  • The Java in the database, aka OJVM (27923415)
  • The clusterware component for the database to match the CRS, aka OCW (28090553)

So, now we have an image of the Oracle Home which already contains all the latest updates… except one:

$ cat $ORACLE_HOME/sqldeveloper/sqldeveloper/bin/version.properties
 
COMPANY=Oracle
PRODUCT=SQL Developer
VERSION=17.30003410937f
VER=17.3.2
VER_FULL=17.3.2.341.0937
BUILD_LABEL=341.0937
BUILD_NUM=341.0937
EDITION=

Unfortunately, that’s an old version of SQL Developer, and SQLcl is not included. Just download these separately and unzip them into the Oracle Home.

DataPatch

So, what happens when I open the database that I created on 18.1 and patched with the 18.2 and 18.3 RUs on the Oracle Cloud? There are two updates for the database (DBRU and OJVM). The DBRU is already there, so DataPatch only has to apply the OJVM:

[oracle@VM183x dbhome_1]$ $ORACLE_HOME/OPatch/datapatch
SQL Patching tool version 18.0.0.0.0 Production on Tue Jul 24 10:57:55 2018
Copyright (c) 2012, 2018, Oracle. All rights reserved.
 
Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_11104_2018_07_24_10_57_55/sqlpatch_invocation.log
 
Connecting to database...OK
Gathering database info...done
 
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
 
Bootstrapping registry and package to current versions...done
Determining current state...done
 
Current state of interim SQL patches:
Interim patch 27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)):
Binary registry: Installed
PDB CDB$ROOT: Not installed
PDB PDB$SEED: Not installed
PDB PDB1: Not installed
 
Current state of release update SQL patches:
Binary registry:
18.3.0.0.0 Release_Update 1806280943: Installed
PDB CDB$ROOT:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.20.321353 AM
PDB PDB$SEED:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.21.802495 AM
PDB PDB1:
Applied 18.3.0.0.0 Release_Update 1806280943 successfully on 18-JUL-18 11.38.23.230513 AM
 
Adding patches to installation queue and performing prereq checks...done
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PDB1
No interim patches need to be rolled back
No release update patches need to be installed
The following interim patches will be applied:
27923415 (OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415))
 
Installing patches...
Patch installation complete. Total patches installed: 3
 
Validating logfiles...done
Patch 27923415 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log (no errors)
Patch 27923415 apply (pdb PDB$SEED): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDBSEED_2018Jul24_10_58_56.log (no errors)
Patch 27923415 apply (pdb PDB1): SUCCESS
logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_PDB1_2018Jul24_10_58_56.log (no errors)
SQL Patching tool complete on Tue Jul 24 10:59:21 2018

Now here is the history of patches:

SQL> exec dbms_qopatch.get_sqlpatch_status;
 
Patch Id : 27676517
Action : APPLY
Action Time : 18-APR-2018 20:44:50
Description : Database Release Update : 18.2.0.0.180417 (27676517)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27676517/22097537/27676517_apply_CDB1_CDBROOT_2018Apr18_20_43_27.log
Status : SUCCESS
 
Patch Id : 28090523
Action : APPLY
Action Time : 18-JUL-2018 11:38:20
Description : Database Release Update : 18.3.0.0.180717 (28090523)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/28090523/22329768/28090523_apply_CDB1_CDBROOT_2018Jul18_11_36_38.log
Status : SUCCESS
 
Patch Id : 27923415
Action : APPLY
Action Time : 24-JUL-2018 10:59:19
Description : OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
Logfile : /u01/app/oracle/cfgtoollogs/sqlpatch/27923415/22239273/27923415_apply_CDB1_CDBROOT_2018Jul24_10_58_08.log
Status : SUCCESS
 
PL/SQL procedure successfully completed.

This is all good. Despite the different release schedules, the level of software is exactly the same. And on-premises we can start on a release with low regression risk (18c is like a patchset) but with many fixes (several release updates). For the moment, only the Linux port is there. The other platforms should come this summer.

 

The article Release 18.0.0.0.0 Version 18.3.0.0.0 On-Premises binaries appeared first on Blog dbi services.

Oracle Database 18c silent installation and database creation on Oracle Linux

Pierre Forstmann Oracle Database blog - Mon, 2018-07-23 16:59

Today, 23 July 2018, Oracle Corp. released Oracle 18c for Linux on OTN.

Silent Oracle Database installation

This is my first 18c installation on a new Oracle Linux 7.5 virtual machine (VM) using a minimal Linux installation:

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)
# cat /etc/os-release 
NAME="Oracle Linux Server"
VERSION="7.5"
ID="ol"
VERSION_ID="7.5"
PRETTY_NAME="Oracle Linux Server 7.5"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:5:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.5
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.5
# 

This VM has 4 GB of RAM (and 4 GB of swap space), a single 40 GB disk, and is connected to the internet to be able to access the Oracle Linux public yum repository.

I have enabled the following repository in /etc/yum.repos.d/public-yum-ol7.repo:

[ol7_u5_base]
name=Oracle Linux $releasever Update 5 installation media copy ($basearch)
baseurl=https://yum.oracle.com/repo/OracleLinux/OL7/5/base/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=1
enabled=1

I have checked YUM repositories with:

# yum repolist                     
Loaded plugins: ulninfo
ol7_UEKR4/x86_64/primary                                       |  40 MB  00:00:51     
ol7_UEKR4                                                                     732/732
ol7_latest                                                                  8173/8173
ol7_u5_base                                                                 7278/7278
repo id            repo name                                                    status
ol7_UEKR4/x86_64   Latest Unbreakable Enterprise Kernel Release 4 for Oracle Li   732
ol7_latest/x86_64  Oracle Linux 7Server Latest (x86_64)                         8 173
ol7_u5_base/x86_64 Oracle Linux 7Server Update 5 installation media copy (x86_6 7 278
repolist: 16 183

I have checked that Oracle 18c preinstallation RPM is available:

# yum info *preinstall* | grep Name 
Name        : oracle-database-preinstall-18c
Name        : oracle-database-server-12cR2-preinstall
Name        : oracle-rdbms-server-11gR2-preinstall
Name        : oracle-rdbms-server-12cR1-preinstall

I have installed the 18c preinstallation RPM to ease the Oracle installation, using the internet connection. I have connected as root and run:

# yum -y install  oracle-database-preinstall-18c

Note that the preinstallation RPM has also created the oracle account:

# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba)

I have downloaded the Oracle 18c installation media from OTN:

$ ls -rtl
total 4457668
-rw-r--r--. 1 oracle oinstall 4564649047 Jul 23 21:54 LINUX.X64_180000_db_home.zip

I have created a target Oracle Home directory with the root account:

# mkdir -p /u01/db18c
# chown oracle:dba /u01/db18c

I have unzipped the installation media into this directory:

$ cd /u01/db18c
$ unzip /stage/LINUX.X64_180000_db_home.zip 

I have created the Oracle Base and Oracle Inventory directories with the root account:

# mkdir /u01/base
# chown oracle:dba /u01/base
# mkdir /u01/orainv
# chown oracle:dba /u01/orainv

I have run the following script from /u01/db18c/ with the oracle account:

./runInstaller \
-silent \
-responseFile /u01/db18c/install/response/db_install.rsp \
   oracle.install.option=INSTALL_DB_SWONLY             \
   UNIX_GROUP_NAME=oinstall                            \
   INVENTORY_LOCATION=/u01/orainv                      \
   SELECTED_LANGUAGES=en                               \
   ORACLE_HOME=/u01/db18c                              \
   ORACLE_BASE=/u01/base                               \
   oracle.install.db.InstallEdition=EE                 \
   oracle.install.db.isCustomInstall=false             \
   oracle.install.db.OSDBA_GROUP=dba                   \
   oracle.install.db.OSBACKUPDBA_GROUP=dba             \
   oracle.install.db.OSDGDBA_GROUP=dba                 \
   oracle.install.db.OSKMDBA_GROUP=dba                 \
   oracle.install.db.OSRACDBA_GROUP=dba                \
   SECURITY_UPDATES_VIA_MYORACLESUPPORT=false          \
   DECLINE_SECURITY_UPDATES=true

Running the script has generated the following output:

Launching Oracle Database Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. installActions2018-07-23_10-22-54PM.log
   ACTION: Identify the list of failed prerequisite checks from the log: installActions2018-07-23_10-22-54PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/db18c/install/response/db_2018-07-23_10-22-54PM.rsp

You can find the log of this install session at:
 /tmp/InstallActions2018-07-23_10-22-54PM/installActions2018-07-23_10-22-54PM.log

As a root user, execute the following script(s):
	1. /u01/orainv/orainstRoot.sh
	2. /u01/db18c/root.sh

Execute /u01/orainv/orainstRoot.sh on the following nodes: 
[ol7defs0]
Execute /u01/db18c/root.sh on the following nodes: 
[ol7defs0]


Successfully Setup Software with warning(s).
Moved the install session logs to:
 /u01/orainv/logs/InstallActions2018-07-23_10-22-54PM

I have ignored the following warning:

INFO:  [Jul 23, 2018 10:23:12 PM] ------------------List of failed Tasks------------------
INFO:  [Jul 23, 2018 10:23:12 PM] *********************************************
INFO:  [Jul 23, 2018 10:23:12 PM] Physical Memory: This is a prerequisite condition to test whether the system has at least 8GB (8388608.0KB) of total physical memory.
INFO:  [Jul 23, 2018 10:23:12 PM] Severity:IGNORABLE
INFO:  [Jul 23, 2018 10:23:12 PM] OverallStatus:VERIFICATION_FAILED
INFO:  [Jul 23, 2018 10:23:12 PM] -----------------End of failed Tasks List----------------

I have run the generated scripts with the root user:

# /u01/orainv/orainstRoot.sh
Changing permissions of /u01/orainv.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/orainv to oinstall.
The execution of the script is complete.
# /u01/db18c/root.sh
Check /u01/db18c/install/root_ol7defs0_2018-07-23_22-31-41-412574476.log for the output of root script
# cat /u01/db18c/install/root_ol7defs0_2018-07-23_22-31-41-412574476.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/db18c
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Oracle Trace File Analyzer (TFA) is available at : /u01/db18c/suptools/tfa/release/tfa_home/bin/tfactl 

I have checked the detailed Oracle Database version:

$ export ORACLE_HOME=/u01/db18c
$ export PATH=$ORACLE_HOME/bin:$PATH
$ $ORACLE_HOME/OPatch/opatch lsinv
Oracle Interim Patch Installer version 12.2.0.1.14
Copyright (c) 2018, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/db18c
Central Inventory : /u01/orainv
   from           : /u01/db18c/oraInst.loc
OPatch version    : 12.2.0.1.14
OUI version       : 12.2.0.4.0
Log file location : /u01/db18c/cfgtoollogs/opatch/opatch2018-07-23_22-35-16PM_1.log

Lsinventory Output file location : /u01/db18c/cfgtoollogs/opatch/lsinv/lsinventory2018-07-23_22-35-16PM.txt

--------------------------------------------------------------------------------
Local Machine Information::
Hostname: ol7defs0
ARU platform id: 226
ARU platform description:: Linux x86-64

Installed Top-level Products (1): 

Oracle Database 18c                                                  18.0.0.0.0
There are 1 products installed in this Oracle Home.


Interim patches (4) :

Patch  27908644     : applied on Wed Jul 18 19:44:11 CEST 2018
Unique Patch ID:  22153180
Patch description:  "UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171"
   Created on 4 May 2018, 01:21:02 hrs PST8PDT
   Bugs fixed:
     27908644

Patch  27923415     : applied on Wed Jul 18 19:41:38 CEST 2018
Unique Patch ID:  22239273
Patch description:  "OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)"
   Created on 15 Jul 2018, 10:33:22 hrs PST8PDT
   Bugs fixed:
     27304131, 27539876, 27952586, 27642235, 27636900, 27461740

Patch  28090553     : applied on Wed Jul 18 19:40:01 CEST 2018
Unique Patch ID:  22256940
Patch description:  "OCW RELEASE UPDATE 18.3.0.0.0 (28090553)"
   Created on 11 Jul 2018, 19:20:31 hrs PST8PDT
   Bugs fixed:
     12816839, 18701017, 22734786, 23698980, 23840305, 25709124, 25724089
     26299684, 26313403, 26433972, 26527054, 26586174, 26587652, 26647619
     26827699, 26860285, 26882126, 26882316, 26943660, 26996813, 27012915
     27018734, 27032726, 27034318, 27040560, 27080748, 27086406, 27092991
     27098733, 27106915, 27114112, 27121566, 27133637, 27144533, 27153755
     27166715, 27174938, 27174948, 27177551, 27177852, 27182006, 27182064
     27184253, 27204476, 27212837, 27213140, 27220610, 27222423, 27222938
     27238077, 27238258, 27249544, 27252023, 27257509, 27263677, 27265816
     27267992, 27271876, 27274143, 27285557, 27299455, 27300007, 27302415
     27309182, 27314512, 27315159, 27320985, 27334353, 27338838, 27346984
     27358232, 27362190, 27370933, 27377219, 27378959, 27379846, 27379956
     27393421, 27398223, 27399499, 27399762, 27399985, 27401618, 27403244
     27404599, 27426277, 27428790, 27430219, 27430254, 27433163, 27452897
     27458829, 27465480, 27475272, 27481406, 27481765, 27492916, 27496806
     27503318, 27503413, 27508936, 27508984, 27513114, 27519708, 27526362
     27528204, 27532009, 27534289, 27560562, 27560735, 27573154, 27573408
     27574335, 27577122, 27579969, 27581484, 27593587, 27595801, 27600706
     27609819, 27625010, 27625050, 27627992, 27654039, 27657467, 27657920
     27668379, 27682288, 27691717, 27702244, 27703242, 27708711, 27714373
     27725967, 27731346, 27734470, 27735534, 27739957, 27740854, 27747407
     27748321, 27757979, 27766679, 27768034, 27778433, 27782464, 27783059
     27786669, 27786699, 27801774, 27811439, 27839732, 27850736, 27862636
     27864737, 27865439, 27889841, 27896388, 27897639, 27906509, 27931506
     27935826, 27941514, 27957892, 27978668, 27984314, 27993298, 28023410
     28025398, 28032758, 28039471, 28039953, 28045209, 28099592, 28109698
     28174926, 28182503, 28204423, 28240153

Patch  28090523     : applied on Wed Jul 18 19:39:24 CEST 2018
Unique Patch ID:  22329768
Patch description:  "Database Release Update : 18.3.0.0.180717 (28090523)"
   Created on 14 Jul 2018, 00:03:50 hrs PST8PDT
   Bugs fixed:
     9062315, 13554903, 21547051, 21766220, 21806121, 23003564, 23310101
     24489904, 24689376, 24737581, 24925863, 25035594, 25035599, 25287072
     25348956, 25634405, 25726981, 25743479, 25824236, 25943740, 26226953
     26336101, 26423085, 26427905, 26450454, 26476244, 26598422, 26615291
     26646549, 26654411, 26731697, 26785169, 26792891, 26818960, 26822620
     26843558, 26843664, 26846077, 26894737, 26898279, 26928317, 26933599
     26956033, 26961415, 26966120, 26986173, 26992964, 27005278, 27026401
     27028251, 27030974, 27036408, 27038986, 27041253, 27044575, 27047831
     27053044, 27058530, 27060167, 27060859, 27061736, 27066451, 27066519
     27073066, 27086821, 27090765, 27101527, 27101652, 27110878, 27112686
     27119621, 27126666, 27128580, 27135647, 27143756, 27143882, 27147979
     27153641, 27155549, 27156355, 27163928, 27169796, 27181521, 27181537
     27189611, 27190851, 27193810, 27199245, 27208953, 27210038, 27210872
     27214085, 27215007, 27216224, 27221900, 27222121, 27222626, 27224987
     27226913, 27232983, 27233563, 27236052, 27236110, 27240246, 27240570
     27241221, 27241247, 27244337, 27244785, 27249215, 27250547, 27254851
     27258578, 27259386, 27259983, 27262650, 27262945, 27263276, 27263996
     27270197, 27274456, 27274536, 27275136, 27275776, 27282707, 27283029
     27283960, 27284499, 27285244, 27288230, 27292213, 27294480, 27301308
     27301568, 27302594, 27302681, 27302695, 27302711, 27302730, 27302777
     27302800, 27302960, 27304410, 27304936, 27305318, 27307868, 27310092
     27313687, 27314206, 27314390, 27318869, 27321179, 27321834, 27326204
     27329812, 27330158, 27330161, 27333658, 27333664, 27333693, 27334316
     27334648, 27335682, 27338912, 27338946, 27339115, 27339396, 27339483
     27339495, 27341036, 27345190, 27345231, 27345450, 27345498, 27346329
     27346644, 27346709, 27346949, 27347126, 27348081, 27348707, 27349393
     27352600, 27354783, 27356373, 27357773, 27358241, 27359178, 27359368
     27360126, 27364891, 27364916, 27364947, 27365139, 27365702, 27365993
     27367194, 27368850, 27372756, 27375260, 27375542, 27376871, 27378103
     27379233, 27381383, 27381656, 27384222, 27389352, 27392187, 27395404
     27395416, 27395794, 27396357, 27396365, 27396377, 27396624, 27396666
     27396672, 27396813, 27398080, 27398660, 27401637, 27405242, 27405696
     27410300, 27410595, 27412805, 27417186, 27420715, 27421101, 27422874
     27423251, 27425507, 27425622, 27426363, 27427805, 27430802, 27432338
     27432355, 27433870, 27434050, 27434193, 27434486, 27434974, 27435537
     27439835, 27441326, 27442041, 27444727, 27445330, 27445462, 27447452
     27447687, 27448162, 27450355, 27450400, 27450783, 27451049, 27451182
     27451187, 27451531, 27452760, 27453225, 27457666, 27457891, 27458164
     27459909, 27460675, 27467543, 27469329, 27471876, 27472969, 27473800
     27479358, 27483974, 27484556, 27486253, 27487795, 27489719, 27496224
     27496308, 27497950, 27498477, 27501327, 27501413, 27501465, 27502420
     27504190, 27505603, 27506774, 27508985, 27511196, 27512439, 27517818
     27518227, 27518310, 27520070, 27520900, 27522245, 27523368, 27523800
     27525909, 27532375, 27533819, 27534509, 27537472, 27544030, 27545630
     27547732, 27550341, 27551855, 27558557, 27558559, 27558861, 27560702
     27563629, 27563767, 27570318, 27577758, 27579353, 27580996, 27585755
     27585800, 27586810, 27586895, 27587672, 27591842, 27592466, 27593389
     27595973, 27599689, 27602091, 27602488, 27603841, 27604293, 27607805
     27608669, 27610269, 27613080, 27613247, 27615608, 27616657, 27617522
     27625274, 27625620, 27631506, 27634676, 27635508, 27644757, 27649707
     27652302, 27663370, 27664702, 27679488, 27679664, 27679806, 27679961
     27680162, 27680509, 27682151, 27688099, 27688692, 27690578, 27691809
     27692215, 27693713, 27697092, 27701795, 27705761, 27707544, 27709046
     27718914, 27719187, 27723002, 27726269, 27726780, 27732323, 27739006
     27740844, 27744211, 27745220, 27747869, 27748954, 27751006, 27753336
     27757567, 27772815, 27773602, 27774320, 27774539, 27779886, 27780562
     27782339, 27783289, 27786772, 27791223, 27797290, 27803665, 27807441
     27812560, 27812593, 27813267, 27815347, 27818871, 27832643, 27833369
     27834984, 27840386, 27847259, 27851757, 27861909, 27869339, 27873643
     27882176, 27892488, 27924147, 27926113, 27930478, 27934468, 27941896
     27945870, 27950708, 27952762, 27961746, 27964051, 27970265, 27971575
     27984028, 27989849, 27993289, 27994333, 27997875, 27999597, 28021205
     28022847, 28033429, 28057267, 28059199, 28072130, 28098865, 28106402
     28132287, 28169711, 28174827, 28184554, 28188330, 25929650, 28264172



--------------------------------------------------------------------------------

OPatch succeeded.
$ $ORACLE_HOME/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.

So this 18c version already includes some of the July 2018 Release Updates (180717 = 17-JUL-2018).
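The `lspatches` output above is easy to work with programmatically: one `patch_id;description` pair per line, with the Release Update date encoded as a YYMMDD suffix in the description. A small illustrative sketch (the parser is a hypothetical helper, not part of any Oracle tooling):

```python
from datetime import datetime

def parse_lspatches(output):
    """Parse 'patch_id;description' lines as printed by opatch lspatches."""
    patches = []
    for line in output.strip().splitlines():
        patch_id, _, description = line.partition(";")
        patches.append((int(patch_id), description))
    return patches

def ru_date(suffix):
    """Decode a Release Update suffix such as '180717' into a date label."""
    return datetime.strptime(suffix, "%y%m%d").strftime("%d-%b-%Y").upper()

output = """27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)"""

print(parse_lspatches(output)[0])
print(ru_date("180717"))  # 17-JUL-2018
```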

I have also checked the SQL*Plus banner:

$ sqlplus -v

SQL*Plus: Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
$ 
Silent database creation

I have created directories for the databases with the root account:

# mkdir /u01/oradata
# chown oracle:dba /u01/oradata
# mkdir /u01/fra
# chown oracle:dba /u01/fra

I have used the following script to create a container database named CDB with one pluggable database:

dbca -silent \
-createDatabase \
-templateName General_Purpose.dbc \
-gdbName CDB \
-sid CDB \
-createAsContainerDatabase true \
-numberOfPdbs 1 \
-pdbName pdb \
-pdbadminUsername pdba \
-pdbadminPassword oracle \
-SysPassword oracle \
-SystemPassword oracle \
-emConfiguration NONE \
-storageType FS \
-datafileDestination /u01/oradata \
-recoveryAreaDestination /u01/fra \
-recoveryAreaSize 3200  \
-characterSet AL32UTF8 \
-memoryPercentage 40 \
-enableArchive true \
-redoLogFileSize 100

Output is:

[WARNING] [DBT-06801] Specified Fast Recovery Area size (3,200 MB) is less than the recommended value.
   CAUSE: Fast Recovery Area size should at least be three times the database size (2,446 MB).
   ACTION: Specify Fast Recovery Area Size to be at least three times the database size.
[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'PDBADMIN' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06801] Specified Fast Recovery Area size (3,200 MB) is less than the recommended value.
   CAUSE: Fast Recovery Area size should at least be three times the database size (3,309 MB).
   ACTION: Specify Fast Recovery Area Size to be at least three times the database size.
Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
36% complete
40% complete
43% complete
46% complete
Completing Database Creation
51% complete
53% complete
54% complete
Creating Pluggable Databases
58% complete
77% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/base/cfgtoollogs/dbca/CDB.
Database Information:
Global Database Name:CDB
System Identifier(SID):CDB
Look at the log file "/u01/base/cfgtoollogs/dbca/CDB/CDB.log" for further details.
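The DBT-06801 warnings in the output above follow a simple rule: dbca recommends a Fast Recovery Area of at least three times the database size. A minimal sketch of that check, using the values from this run (illustrative only, not the actual dbca logic):

```python
def fra_warning(fra_size_mb, db_size_mb):
    """Return a DBT-06801-style warning string, or None if the size is OK.

    dbca recommends an FRA of at least three times the database size.
    """
    recommended = 3 * db_size_mb
    if fra_size_mb < recommended:
        return (f"[WARNING] [DBT-06801] Specified Fast Recovery Area size "
                f"({fra_size_mb:,} MB) is less than the recommended value "
                f"({recommended:,} MB).")
    return None

# Values from the dbca run above: -recoveryAreaSize 3200, database size 2,446 MB.
print(fra_warning(3200, 2446))
```

With `-recoveryAreaSize 3200` against a 2,446 MB database, the recommended minimum is 7,338 MB, hence the warning.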

I have checked the created database with:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
	 0

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------

SQL> select patch_id, status, description, action_time from dba_registry_sqlpatch;

  PATCH_ID STATUS		     DESCRIPTION								      ACTION_TIME
---------- ------------------------- -------------------------------------------------------------------------------- ------------------------------
  28090523 SUCCESS		     Database Release Update : 18.3.0.0.180717 (28090523)			      23-JUL-18 10.57.56.127734 PM
  27923415 SUCCESS		     OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)				      23-JUL-18 10.57.56.142065 PM

SQL> select name, cdb, log_mode from v$database;

NAME	  CDB LOG_MODE
--------- --- ------------
CDB	  YES ARCHIVELOG

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB				  READ WRITE NO

I have used the following script to create a non-container database named NCDB:

dbca -silent \
-createDatabase \
-templateName General_Purpose.dbc \
-gdbName NCDB \
-sid NCDB \
-createAsContainerDatabase false \
-SysPassword oracle \
-SystemPassword oracle \
-emConfiguration NONE \
-storageType FS \
-datafileDestination /u01/oradata \
-recoveryAreaDestination /u01/fra \
-recoveryAreaSize 3200  \
-characterSet AL32UTF8 \
-memoryPercentage 40 \
-enableArchive true \
-redoLogFileSize 100 

Output is:

[WARNING] [DBT-06208] The 'SYS' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06208] The 'SYSTEM' password entered does not conform to the Oracle recommended standards.
   CAUSE: 
a. Oracle recommends that the password entered should be at least 8 characters in length, contain at least 1 uppercase character, 1 lower case character and 1 digit [0-9].
b.The password entered is a keyword that Oracle does not recommend to be used as password
   ACTION: Specify a strong password. If required refer Oracle documentation for guidelines.
[WARNING] [DBT-06801] Specified Fast Recovery Area size (3,200 MB) is less than the recommended value.
   CAUSE: Fast Recovery Area size should at least be three times the database size (2,402 MB).
   ACTION: Specify Fast Recovery Area Size to be at least three times the database size.
Prepare for db operation
10% complete
Copying database files
40% complete
Creating and starting Oracle instance
42% complete
46% complete
50% complete
54% complete
60% complete
Completing Database Creation
66% complete
69% complete
70% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/base/cfgtoollogs/dbca/NCDB.
Database Information:
Global Database Name:NCDB
System Identifier(SID):NCDB
Look at the log file "/u01/base/cfgtoollogs/dbca/NCDB/NCDB.log" for further details.
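The recurring DBT-06208 password warnings state their own rules: at least 8 characters, at least one uppercase letter, one lowercase letter and one digit. A quick sketch of rule (a) as stated in the output (illustrative only; it omits rule (b), the keyword blacklist, and is not the actual dbca check):

```python
import re

def password_conforms(password):
    """Check dbca's DBT-06208 length/character rules (rule (a) only)."""
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[0-9]", password) is not None)

print(password_conforms("oracle"))     # False -> triggers the warning
print(password_conforms("Oracle18c"))  # True
```

This is why the `oracle` password used for SYS, SYSTEM and PDBADMIN throughout these runs produces a warning but not an error: dbca only recommends the standard, it does not enforce it.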

I have checked the created database with:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
	 0

BANNER
--------------------------------------------------------------------------------
BANNER_FULL
--------------------------------------------------------------------------------
BANNER_LEGACY
--------------------------------------------------------------------------------
    CON_ID
----------

SQL> select patch_id, status, description, action_time from dba_registry_sqlpatch;

  PATCH_ID STATUS		     DESCRIPTION								      ACTION_TIME
---------- ------------------------- -------------------------------------------------------------------------------- ------------------------------
  28090523 SUCCESS		     Database Release Update : 18.3.0.0.180717 (28090523)			      23-JUL-18 11.33.28.278762 PM
  27923415 SUCCESS		     OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)				      23-JUL-18 11.33.28.325217 PM

SQL> select name, cdb, log_mode from v$database;

NAME	  CDB LOG_MODE
--------- --- ------------
NCDB	  NO  ARCHIVELOG

SQL> show pdbs
SQL> 

Note that show pdbs displays nothing here because NCDB is a non-container database.
Conclusion

As documented, Oracle Database 18c installation is now image-based, like Grid Infrastructure 12.2.0.1: the response file and the parameters for software installation and database creation are very similar to those of Oracle 12.2.0.1.

Oracle Corp. has decided to release a patched version on OTN: this is something new for a new release (patched versions were previously available, but only in preinstalled VMs such as the Hands-On Lab for Upgrade).

However, at the time of writing, I have not found the announced RPM for installing Oracle 18c on Linux.

Categories: DBA Blogs

Using BULK COLLECT/FORALL

Tom Kyte - Mon, 2018-07-23 15:06
I have a procedure that was previously using a "slow-by-slow" type procedure. I've converted it to BULK COLLECT and FORALL (test_cur, see livesql). The procedure is obtaining data from 2 tables then inserts this data into 2 corresponding tables. ...
Categories: DBA Blogs

API for context

Tom Kyte - Mon, 2018-07-23 15:06
Hi! What is the best way to get context value in Oracle? We have sys_context, which allows get value from context, but it forces to hardcode constants for namespaces and keys. Is there more graceful solution for it? For myself I've made a sepa...
Categories: DBA Blogs
