Feed aggregator

Error!?! What's going on in APEX? The easiest way to Debug and Trace an Oracle APEX session

Dimitri Gielis - Wed, 2018-06-20 13:55
There are some days you just can't explain the behaviour of the APEX Builder or your own APEX application. Or do you recognize this sentence from your end-user: "Hey, it doesn't work..."

In Oracle APEX 5.1 and 18.1, here's how you start to see in the land of the blind :)

Logged in as a developer in APEX, go to Monitor Activity:


 From there go to Active Sessions:



You will see all active sessions at that moment. Looking at the Session Id or Owner (User) you can identify the session easily:


Clicking on the session id shows the details: which page views have been done, which calls were made, the session state information and the browser being used.

But even more interesting, you can set the Debug Level for that session :)


When the user requests a new page or action, you see a Debug ID of that request.


Clicking on the Debug ID, you see straight away all the debug info and hopefully it gives you more insight why something is not behaving as expected.



A real use case: custom APEX app

I had a really strange issue which I couldn't explain at first... an app that had been running for several years suddenly didn't show info in a classic report; it got "no data found". After logging out and back in, the report would show the data just fine. The user said it was not consistent: sometimes it worked, sometimes not... even worse, I couldn't reproduce the issue. So I told her to call me whenever it happened again.
One day she called, so I followed the steps above to turn debug on for her session, and then I saw it... the issue was due to pagination. On a previous record she had paginated to the "second page", but the current record had no "second page". With the debug information I could see exactly why it behaved like that... APEX rewrote the query with rows > :first_row, which was set to 16, but that specific record had no more than 16 rows, so the report showed "no data found".
Once I figured that out, I could quickly fix the issue by Resetting Pagination on opening of the page.

Debug Levels

You can set different Debug Levels. Level 9 (= APEX Trace) gives you the most info, whereas level 1 only shows the errors and not much else. I typically go with APEX Trace (level 9).

The different debug levels with the description:
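For reference, the debug levels as documented for the APEX_DEBUG API (the screenshot above shows the same list; level names here follow the package constants):

```
1  ERROR         critical errors
2  WARN          less critical, potential errors
4  INFO          default level when debugging is enabled
5  APP_ENTER     application: entering procedures/functions
6  APP_TRACE     application: other messages within procedures/functions
8  ENGINE_ENTER  APEX engine: entering procedures/functions
9  ENGINE_TRACE  APEX engine: other messages (= APEX Trace)
```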


Trace Mode

In case you want to go a step further, you can also set Trace Mode to SQL Trace.


Behind the scenes this runs: alter session set events '10046 trace name context forever, level 12';
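For reference, the commonly documented levels of the 10046 event are:

```
level 1    basic SQL trace (equivalent to sql_trace = true)
level 4    level 1 + bind variable values
level 8    level 1 + wait events
level 12   level 1 + bind variables + wait events
```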
To find out where the trace file is stored, go to SQL Workshop > SQL Scripts and run

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

It will return the path of the trace files. In that directory, search for the filename containing the APEX session id (2644211946422) and the time you ran the trace.


In Oracle SQL Developer you can then look at those trace files a bit more easily. You can also use TKPROF or other tools.
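For example, a TKPROF run might look like this (the trace file name is hypothetical; adapt it to the file you found above):

```shell
# sys=no excludes recursive SYS statements;
# sort orders statements by elapsed parse/execute/fetch time
tkprof XE_ora_27419.trc trace_report.txt sys=no sort=prsela,exeela,fchela
```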


When I really have performance issues and I need to investigate further, I like to use Method R Workbench. The Profiler interprets the trace file(s) and explains what's going on.


And with the different tools on the left, you can drill down in the files.


I'm definitely not a specialist in reading those trace files, but the above tools really help me understand them. When I'm really stuck I contact Cary Millsap - or, as I call him, Mr Trace - he's the father of those tools and knows trace files inside out :)

A second use case: APEX Builder

I was testing our APEX Office Print plugin in APEX 18.1 and for some reason APEX was behaving differently than in earlier versions, but I didn't understand why. I followed the same method to turn debug and trace on for my own session - so even when you are in the APEX Builder you can see what APEX is doing behind the scenes.


Debugging and Tracing made easy

I hope this post helps you see the light when you're in the dark. May the force be with you :)

Categories: Development

Dealing with automatic restart and SQL Docker containers

Yann Neuhaus - Wed, 2018-06-20 12:57

A couple of weeks ago, a customer asked me how to restart containers automatically after a reboot of the underlying host. In his context it was no small question, because some of the containers host SQL Server databases and he wanted to stay relaxed as long as possible, even after maintenance of the Linux host by the sysadmins. The (DEV) environment concerned doesn't include container orchestration like Swarm or Kubernetes.


The interesting point is that there are several ways to do the job, depending on the context. In my case, some services outside Docker depend on the containerized database environment.

The first method is a pure sysadmin solution based on systemd, the Linux service manager, which can automatically restart failed services using Restart= policy values such as no, on-success, on-failure, on-abnormal, on-watchdog, on-abort, or always. The latter fits my customer's scenario well.

Is there an advantage to this approach? Well, in my customer's context some services outside Docker depend on the SQL container, and systemd is a good way to control such dependencies.

Below is the service unit file I used during the mission. I have to give credit to the SQL Server Customer Advisory Team, who published an example of this file as part of their monitoring solution based on InfluxDB, Grafana and collectd. The template includes unit specifiers that make it generic; I just had to name the unit file according to the container I wanted to control.

[Unit]
Description=Docker Container %I
Requires=docker.service
After=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStart=/usr/bin/docker start -a %i
ExecStop=/usr/bin/docker stop -t 2 %i

[Install]
WantedBy=default.target
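The unit specifiers are what make the template generic. For docker-container@sql.service, both expand to "sql":

```
%i   the instance name: the string between "@" and ".service" (escaped)
%I   the unescaped instance name (used here in the Description)
```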

 

Let's say I have one SQL Server container named sql. The next step consists of copying the service template to /etc/systemd/system and naming the service after the SQL container. We can then use the usual systemctl command capabilities:

$ sudo cp ./service-template /etc/systemd/system/docker-container@sql.service
$ systemctl daemon-reload
$ sudo systemctl enable docker-container@sql

 

That's it. I can get the status of my new service as follows:

$ sudo systemctl status docker-container@sql

 


 

I can also stop and start my SQL docker container like this:

[clustadmin@docker3 ~]$ sudo systemctl stop docker-container@sql
[clustadmin@docker3 ~]$ docker ps -a
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS                     PORTS               NAMES
9a8cad6f21f5        microsoft/mssql-server-linux:2017-CU7   "/opt/mssql/bin/sqls…"   About an hour ago   Exited (0) 7 seconds ago                       sql

[clustadmin@docker3 ~]$ sudo systemctl start docker-container@sql
[clustadmin@docker3 ~]$ docker ps
CONTAINER ID        IMAGE                                   COMMAND                  CREATED             STATUS              PORTS                    NAMES
9a8cad6f21f5        microsoft/mssql-server-linux:2017-CU7   "/opt/mssql/bin/sqls…"   About an hour ago   Up 5 seconds        0.0.0.0:1433->1433/tcp   sql

 

This method met my customer's requirement, but I found one drawback in a specific case: if I stop the container with the systemctl command and then restart it with docker start, the service status is not reported correctly (Active: dead) and I have to run systemctl restart against my container to get back to normal. I will probably update this post (or write another one) after getting more information on this topic. Feel free to comment: I'd like to hear from you!

 

The second method, which I proposed for the other SQL containers without external dependencies, was to rely on Docker's container restart policy. This is a powerful feature and very simple to implement with the docker run command (or in a compose file), as follows:

docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=P@$$w0rd1' -p 1433:1433 --restart=unless-stopped -d microsoft/mssql-server-linux:2017-CU7

 

Restart-policy values such as always and unless-stopped fit my customer's scenario well, though I prefer the latter because it provides another level of control when you deliberately stop the container for any reason.
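For reference, the four restart policies Docker supports:

```
no              never restart automatically (the default)
on-failure[:N]  restart only when the container exits non-zero, at most N times
always          always restart; a manually stopped container comes back when
                the daemon restarts
unless-stopped  like always, except a manually stopped container stays stopped,
                even after a daemon restart
```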

I will deliberately omit the third method, installing systemd directly inside the container, because it is not recommended by Docker itself and not suitable for my customer's case either.

See you!


Cet article Dealing with automatic restart and SQL Docker containers est apparu en premier sur Blog dbi services.

Migrating from ASMLIB to ASMFD

Yann Neuhaus - Wed, 2018-06-20 12:33

Before Oracle 12.1 the methods used to configure ASM were
• udev
• asmlib
Oracle 12.1 comes with a new method called Oracle ASM Filter Driver (Oracle ASMFD).
In the Oracle documentation we can find the following:
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks.
The Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.

In this blog I am going to migrate from ASMLIB to ASMFD on a 12.1 cluster with 2 nodes.

Below is our current configuration.

[root@rac12a ~]# crsctl check cluster -all
**************************************************************
rac12a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac12b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@rac12a ~]#


[root@rac12a ~]# crsctl get cluster mode status
Cluster is running in "flex" mode
[root@rac12a ~]#

[root@rac12a ~]# ps -ef | grep pmon
grid      7217     1  0 11:20 ?        00:00:00 asm_pmon_+ASM1
grid      8070     1  0 11:21 ?        00:00:00 apx_pmon_+APX1
oracle    8721     1  0 11:22 ?        00:00:00 ora_pmon_mydb_1
root     14395  2404  0 11:32 pts/0    00:00:00 grep --color=auto pmon
[root@rac12a ~]#

First let's get information about our ASM disks. We will use this output later when migrating the disks to ASMFD.

[root@rac12a ~]# oracleasm listdisks | xargs oracleasm querydisk -p             
Disk "ASM_DATA" is a valid ASM disk
/dev/sdc1: LABEL="ASM_DATA" TYPE="oracleasm"
Disk "ASM_DIVERS" is a valid ASM disk
/dev/sdd1: LABEL="ASM_DIVERS" TYPE="oracleasm"
Disk "ASM_OCR1" is a valid ASM disk
/dev/sdg1: LABEL="ASM_OCR1" TYPE="oracleasm"
Disk "ASM_OCR2" is a valid ASM disk
/dev/sdi1: LABEL="ASM_OCR2" TYPE="oracleasm"
Disk "ASM_VOT1" is a valid ASM disk
/dev/sde1: LABEL="ASM_VOT1" TYPE="oracleasm"
Disk "ASM_VOT2" is a valid ASM disk
/dev/sdh1: LABEL="ASM_VOT2" TYPE="oracleasm"
Disk "ASM_VOT3" is a valid ASM disk
/dev/sdf1: LABEL="ASM_VOT3" TYPE="oracleasm"
[root@rac12a ~]#

To migrate to ASMFD, we first have to change the value of the diskstring parameter of the ASM instance. The current value can be obtained with:

[grid@rac12a trace]$ asmcmd dsget
parameter:ORCL:*
profile:ORCL:*
[grid@rac12a trace]$

Let’s set the new value on both nodes

[grid@rac12a trace]$ asmcmd dsset 'ORCL:*','AFD:*'

We can then verify

[grid@rac12a trace]$ asmcmd dsget
parameter:ORCL:*, AFD:*
profile:ORCL:*,AFD:*
[grid@rac12a trace]$

Once the new diskstring value is set, let's stop the cluster on both nodes:

[root@rac12a ~]# crsctl stop cluster
[root@rac12b ~]# crsctl stop cluster

Once the cluster is stopped, we have to disable and stop ASMLIB on both nodes:

[root@rac12a ~]# systemctl disable oracleasm
Removed symlink /etc/systemd/system/multi-user.target.wants/oracleasm.service.

[root@rac12a ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@rac12a ~]# oracleasm exit
Unmounting ASMlib driver filesystem: /dev/oracleasm
Unloading module "oracleasm": oracleasm
[root@rac12a ~]#

[root@rac12a ~]# ls -ltr /dev/oracleasm/
total 0
[root@rac12a ~]#

Now let's remove all packages related to ASMLIB on both nodes:

[root@rac12a oracle]# rpm -e oracleasm-support-2.1.11-2.el7.x86_64 oracleasmlib-2.0.12-1.el7.x86_64
warning: /etc/sysconfig/oracleasm saved as /etc/sysconfig/oracleasm.rpmsave
[root@rac12a oracle]#

The next step is to stop acfsload on both nodes

[root@rac12a ~]# lsmod | grep acfs
oracleacfs           3343483  0
oracleoks             500109  2 oracleacfs,oracleadvm
[root@rac12a ~]#

[root@rac12a ~]# acfsload stop
[root@rac12a ~]# lsmod | grep acfs
[root@rac12a ~]#

As root, we can now configure Oracle ASMFD to filter at the node level. In my case the steps were done on both nodes:

[root@rac12a oracle]# asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
[root@rac12a oracle]#

Once the configuration is done, we can check the AFD state on all nodes:

[root@rac12a oracle]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DISABLED' on host 'rac12a.localdomain'
[root@rac12a oracle]#

We can see that the AFD module is loaded but filtering is disabled. The oracleafd.conf file contains the AFD disk string:

[root@rac12a etc]# cat oracleafd.conf
afd_diskstring='/dev/sd*1'

To enable filtering, we then have to run on both nodes:

[root@rac12a etc]# asmcmd afd_filter -e
Connected to an idle instance.
[root@rac12a etc]#

[root@rac12b ~]#  asmcmd afd_filter -e
Connected to an idle instance.
[root@rac12b ~]#

Running again the afd_state command, we can confirm that the filtering is now enabled.

[root@rac12a etc]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac12a.localdomain'
[root@rac12a etc]#

Now we can migrate all asm disks.

[root@rac12a etc]# asmcmd afd_label ASM_DATA /dev/sdc1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_DIVERS /dev/sdd1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_OCR1 /dev/sdg1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_OCR2 /dev/sdi1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_VOT1 /dev/sde1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_VOT2 /dev/sdh1 --migrate
Connected to an idle instance.
[root@rac12a etc]# asmcmd afd_label ASM_VOT3 /dev/sdf1 --migrate
Connected to an idle instance.
[root@rac12a etc]#

We can verify the ASMFD disks using the command

[root@rac12b ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASM_DATA                    ENABLED   /dev/sdc1
ASM_DIVERS                  ENABLED   /dev/sdd1
ASM_OCR1                    ENABLED   /dev/sdg1
ASM_OCR2                    ENABLED   /dev/sdi1
ASM_VOT1                    ENABLED   /dev/sde1
ASM_VOT2                    ENABLED   /dev/sdh1
ASM_VOT3                    ENABLED   /dev/sdf1
[root@rac12b ~]#

Let's check the afd.conf, which ASMFD uses to discover and mount the ASMFD disks:

[root@rac12a etc]# cat afd.conf
afd_diskstring='/dev/sd*'
afd_filtering=enable

When the ASMFD disks are visible on both nodes, we can start acfsload on both nodes:

[root@rac12a etc]# acfsload start
ACFS-9391: Checking for existing ADVM/ACFS installation.
ACFS-9392: Validating ADVM/ACFS installation files for operating system.
ACFS-9393: Verifying ASM Administrator setup.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9154: Loading 'oracleoks.ko' driver.
ACFS-9154: Loading 'oracleadvm.ko' driver.
ACFS-9154: Loading 'oracleacfs.ko' driver.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9156: Detecting control device '/dev/asm/.asm_ctl_spec'.
ACFS-9156: Detecting control device '/dev/ofsctl'.
ACFS-9322: completed
[root@rac12a etc]#

Now the conversion is done and we can start CRS on both nodes:

[root@rac12a ~]# crsctl start crs

[root@rac12b ~]# crsctl start crs

We can now remove all ASMLIB references from the diskstring parameter:

[grid@rac12a trace]$ asmcmd dsget
parameter:ORCL:*, AFD:*
profile:ORCL:*,AFD:*

[grid@rac12a trace]$ asmcmd dsset 'AFD:*'

[grid@rac12a trace]$ asmcmd dsget
parameter:AFD:*
profile:AFD:*
[grid@rac12a trace]$

Once the cluster is started, we can verify the disk names:

[grid@rac12a trace]$ asmcmd lsdsk
Path
AFD:ASM_DATA
AFD:ASM_DIVERS
AFD:ASM_OCR1
AFD:ASM_OCR2
AFD:ASM_VOT1
AFD:ASM_VOT2
AFD:ASM_VOT3
[grid@rac12a trace]$

We can also use the following query to confirm that ASMFD is now being used:

set linesize 300
col PATH for a20
set pages 20
col LIBRARY for a45
col NAME for a15
select inst_id,group_number grp_num,name,state,header_status header,mount_status mount,path, library
from gv$asm_disk order by inst_id,group_number,name;


   INST_ID    GRP_NUM NAME            STATE    HEADER       MOUNT   PATH                 LIBRARY
---------- ---------- --------------- -------- ------------ ------- -------------------- ---------------------------------------------
         1          1 ASM_DIVERS      NORMAL   MEMBER       CACHED  AFD:ASM_DIVERS       AFD Library - Generic , version 3 (KABI_V3)
         1          2 ASM_OCR1        NORMAL   MEMBER       CACHED  AFD:ASM_OCR1         AFD Library - Generic , version 3 (KABI_V3)
         1          2 ASM_OCR2        NORMAL   MEMBER       CACHED  AFD:ASM_OCR2         AFD Library - Generic , version 3 (KABI_V3)
         1          3 ASM_DATA        NORMAL   MEMBER       CACHED  AFD:ASM_DATA         AFD Library - Generic , version 3 (KABI_V3)
         1          4 ASM_VOT1        NORMAL   MEMBER       CACHED  AFD:ASM_VOT1         AFD Library - Generic , version 3 (KABI_V3)
         1          4 ASM_VOT2        NORMAL   MEMBER       CACHED  AFD:ASM_VOT2         AFD Library - Generic , version 3 (KABI_V3)
         1          4 ASM_VOT3        NORMAL   MEMBER       CACHED  AFD:ASM_VOT3         AFD Library - Generic , version 3 (KABI_V3)
         2          1 ASM_DIVERS      NORMAL   MEMBER       CACHED  AFD:ASM_DIVERS       AFD Library - Generic , version 3 (KABI_V3)
         2          2 ASM_OCR1        NORMAL   MEMBER       CACHED  AFD:ASM_OCR1         AFD Library - Generic , version 3 (KABI_V3)
         2          2 ASM_OCR2        NORMAL   MEMBER       CACHED  AFD:ASM_OCR2         AFD Library - Generic , version 3 (KABI_V3)
         2          3 ASM_DATA        NORMAL   MEMBER       CACHED  AFD:ASM_DATA         AFD Library - Generic , version 3 (KABI_V3)
         2          4 ASM_VOT1        NORMAL   MEMBER       CACHED  AFD:ASM_VOT1         AFD Library - Generic , version 3 (KABI_V3)
         2          4 ASM_VOT2        NORMAL   MEMBER       CACHED  AFD:ASM_VOT2         AFD Library - Generic , version 3 (KABI_V3)
         2          4 ASM_VOT3        NORMAL   MEMBER       CACHED  AFD:ASM_VOT3         AFD Library - Generic , version 3 (KABI_V3)

14 rows selected.
 

Cet article Migrating from ASMLIB to ASMFD est apparu en premier sur Blog dbi services.

Remote syslog from Linux and Solaris

Yann Neuhaus - Wed, 2018-06-20 10:47

Auditing operations with Oracle Database is very easy. The default configuration, where SYSDBA operations go to ‘audit_file_dest’ (the ‘adump’ directory) and other operations go to the database may be sufficient to log what is done but is definitely not a correct security audit method as both destinations can have their audit trail deleted by the DBA. If you want to secure your environment by auditing the most privileged accounts, you need to send the audit trail to another server.

This is easy as well, and here is a short demo involving Linux and Solaris as the audited environments. I've created these 3 compute services in the Oracle Cloud:

So, I have an Ubuntu service where I'll run the Oracle Database (XE 11g); the hostname is 'ubuntu':

root@ubuntu:~# grep PRETTY /etc/os-release
PRETTY_NAME="Ubuntu 16.04.4 LTS"

I have a Solaris service which will also run Oracle; the hostname is 'd17872':

root@d17872:~# cat /etc/release
Oracle Solaris 11.3 X86
Copyright (c) 1983, 2016, Oracle and/or its affiliates. All rights reserved.
Assembled 03 August 2016

I have an Oracle Enterprise Linux service which will be my audit server, collecting syslog messages from remote hosts; the hostname is 'b5e501' and its IP address in the PaaS network is 10.29.235.150:

[root@b5e501 ~]# grep PRETTY /etc/os-release
PRETTY_NAME="Oracle Linux Server 7.5"

Testing local syslog

I start by making sure that syslog works correctly on my audit server:

[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &
[root@b5e501 ~]#
[root@b5e501 ~]# logger -p local1.info "hello from $HOSTNAME"
[root@b5e501 ~]# Jun 20 08:28:35 b5e501 bitnami: hello from b5e501

Remote setting

On the audit server, I uncomment the lines about receiving syslog over TCP and UDP on port 514:

[root@b5e501 ~]# grep -iE "TCP|UDP" /etc/rsyslog.conf
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
# Remote Logging (we use TCP for reliable delivery)

I restart the syslog service:

[root@b5e501 ~]# systemctl restart rsyslog
Jun 20 08:36:47 b5e501 systemd: Stopping System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2769" x-info="http://www.rsyslog.com"] exiting on signal 15.
Jun 20 08:36:47 b5e501 systemd: Starting System Logging Service...
Jun 20 08:36:47 b5e501 rsyslogd: [origin software="rsyslogd" swVersion="8.24.0" x-pid="2786" x-info="http://www.rsyslog.com"] start
Jun 20 08:36:47 b5e501 systemd: Started System Logging Service.

I tail the /var/log/messages (which is my default destination for “*.info;mail.none;authpriv.none;cron.none”)

[root@b5e501 ~]# tail -f /var/log/messages &
[root@b5e501 ~]# jobs
[1]+ Running tail -f /var/log/messages &

I test with local1.info and check that the message is tailed even when logger sends it through the network:

[root@b5e501 ~]# logger -n localhost -P 514 -p local1.info "hello from $HOSTNAME"
Jun 20 09:18:07 localhost bitnami: hello from b5e501

That’s perfect.

Now I can test the same from my Ubuntu host, to ensure that the firewall settings allow TCP and UDP on port 514:


root@ubuntu:/tmp/Disk1# logger --udp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in UDP"
root@ubuntu:/tmp/Disk1# logger --tcp -n 10.29.235.150 -P 514 -p local1.info "hello from $HOSTNAME in TCP"

Here are the correct messages received:

Jun 20 09:24:46 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:24:54 ubuntu bitnami hello from ubuntu in TCP

Destination setting for the audit

As I don't want all messages in /var/log/messages, I now configure, on the audit server, a dedicated file for the "local1" facility and "info" level, which I'll use as my Oracle Database audit destination:

[root@b5e501 ~]# touch "/var/log/audit.log"
[root@b5e501 ~]# echo "local1.info /var/log/audit.log" >> /etc/rsyslog.conf
[root@b5e501 ~]# systemctl restart rsyslog

After running the same two 'logger' commands from the remote host, I check the entries in my new file:

[root@b5e501 ~]# cat /var/log/audit.log
Jun 20 09:55:09 ubuntu bitnami hello from ubuntu in UDP
Jun 20 09:55:16 ubuntu bitnami hello from ubuntu in TCP

Remote logging

Now that I've validated that remote syslog works, I set up automatic forwarding of syslog messages on my Ubuntu box to send everything for 'local1.info' to the audit server:

root@ubuntu:/tmp/Disk1# echo "local1.info @10.29.235.150:514" >> /etc/rsyslog.conf
root@ubuntu:/tmp/Disk1# systemctl restart rsyslog

With a single '@' this forwards over UDP; double the '@' to forward over TCP.
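So, in the same rsyslog.conf syntax as above, the two variants look like this:

```
# forward over UDP (single @)
local1.info @10.29.235.150:514
# forward over TCP (double @)
local1.info @@10.29.235.150:514
```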

Here I check with logger in local (no mention of the syslog host here):

root@ubuntu:/tmp/Disk1# logger -p local1.info "hello from $HOSTNAME with forwarding"

and I verify that the message is logged in the audit server into /var/log/audit.log

[root@b5e501 ~]# tail -1 /var/log/audit.log
Jun 20 12:00:25 ubuntu bitnami: hello from ubuntu with forwarding

Repeated messages

Note that when testing, you may want to add "$(date)" to your message in order to see it immediately, because syslog holds back repeated messages to avoid flooding. This:

root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Always the same message"
root@ubuntu:/tmp/Disk1# logger -p local1.info "Then another one"

is logged as this:

Jun 20 12:43:12 ubuntu bitnami: message repeated 5 times: [ Always the same message]
Jun 20 12:43:29 ubuntu bitnami: Then another one

I hope that one day Oracle will implement this idea for flooding messages in the alert.log ;)

Oracle Instance

The last step is to get my Oracle instance to send audit messages to the local syslog with facility.level 'local1.info', so that they are automatically forwarded to my audit server. I have to set audit_syslog_level to 'local1.info' and audit_trail to 'OS':

oracle@ubuntu:~$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.2.0 Production on Wed Jun 20 11:48:00 2018
 
Copyright (c) 1982, 2011, Oracle. All rights reserved.
 
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
 
SQL> alter system set audit_syslog_level='local1.info' scope=spfile;
 
System altered.
 
SQL> alter system set audit_trail='OS' scope=spfile;
 
System altered.
 
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size 2233344 bytes
Variable Size 616565760 bytes
Database Buffers 444596224 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.

It is very easy to check that it works, as the SYSDBA connection and the STARTUP are automatically audited. Here is what I see in /var/log/audit.log on my audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 11:55:47 ubuntu Oracle Audit[27066]: LENGTH : '155' ACTION :[7] 'STARTUP' DATABASE USER:[1] '/' PRIVILEGE :[4] 'NONE' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[13] 'Not Available' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:47 ubuntu Oracle Audit[27239]: LENGTH : '148' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[0] ''
Jun 20 11:55:51 ubuntu Oracle Audit[27419]: LENGTH : '159' ACTION :[7] 'CONNECT' DATABASE USER:[1] '/' PRIVILEGE :[6] 'SYSDBA' CLIENT USER:[6] 'oracle' CLIENT TERMINAL:[5] 'pts/0' STATUS:[1] '0' DBID:[10] '2860420539'

On the database server, no new files have appeared in adump since this startup:

oracle@ubuntu:~/admin/XE/adump$ /bin/ls -alrt
total 84
drwxr-x--- 6 oracle dba 4096 Jun 20 11:42 ..
-rw-r----- 1 oracle dba 699 Jun 20 11:44 xe_ora_26487_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26515_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26519_1.aud
-rw-r----- 1 oracle dba 694 Jun 20 11:44 xe_ora_26523_1.aud
drwxr-x--- 2 oracle dba 4096 Jun 20 11:48 .
-rw-r----- 1 oracle dba 896 Jun 20 11:48 xe_ora_26574_1.aud

Solaris

I have also started a Solaris service:

opc@d17872:~$ pfexec su -
Password: solaris_opc
su: Password for user 'root' has expired
New Password: Cl0udP01nts
Re-enter new Password: Cl0udP01nts
su: password successfully changed for root
Oracle Corporation SunOS 5.11 11.3 June 2017
You have new mail.
root@d17872:~#

Here, I add the forwarding to /etc/syslog.conf (a tab is required as the separator and cannot be replaced with spaces) and restart the syslog service:

root@d17872:~# echo "local1.info\t@10.29.235.150" >> /etc/syslog.conf
root@d17872:~# svcadm restart system-log

Then I log a message locally:

root@d17872:~# logger -p local1.info "hello from $HOSTNAME with forwarding"

Here is the message received on the audit server:

[root@b5e501 ~]# tail -f /var/log/audit.log
Jun 20 05:27:51 d17872.compute-a511644.oraclecloud.internal opc: [ID 702911 local1.info] hello from d17872 with forwarding

Here on Solaris I have the old 'syslog', with no syntax to change the UDP port. The default port is defined in /etc/services, which matches the port my audit server is listening on:

root@d17872:~# grep 514 /etc/services
shell 514/tcp cmd # no passwords used
syslog 514/udp

If you want more features, you can install syslog-ng or rsyslog on Solaris.

 

Cet article Remote syslog from Linux and Solaris est apparu en premier sur Blog dbi services.

Bourbon Ibirapuera Streamlines Property Operations, Creates New Guest Experiences with Oracle Cloud

Oracle Press Releases - Wed, 2018-06-20 08:00
Press Release
Bourbon Ibirapuera Streamlines Property Operations, Creates New Guest Experiences with Oracle Cloud Brazilian Hotel Implements Integrated Suite of Hospitality, ERP and CX

Redwood Shores, Calif.—Jun 20, 2018

Oracle today announced that Bourbon Ibirapuera has selected a suite of Oracle solutions including OPERA and Simphony Cloud, Fusion ERP, OPERA Loyalty, OPERA OWS, Oracle Sales Cloud, Eloqua and Hyperion as part of an initiative to streamline operations across properties and arm hotel associates with new insights that inform personalized guest experiences. Bourbon Ibirapuera initially invested in OPERA Cloud, Simphony Cloud and ERP Cloud solutions before adding Oracle Sales Cloud and Eloqua products for a full suite of operations, back-office and customer facing technologies. Bourbon Ibirapuera will be the first hotel in Brazil to install these solutions and arm staff with cloud tools that enable deeper guest interaction and loyalty.

“Bourbon Ibirapuera’s choice of Oracle is a testament to the value that Oracle horizontal and vertical products bring to all segments of the hospitality market,” said Bernard Jammet, senior vice president, Oracle Hospitality. “With these new tools Bourbon Ibirapuera will be able to augment their guest experience and compete with larger chains and properties in region.”

“The hospitality industry as a whole spends too much time and effort managing multiple vendors and building integrations across solutions to maximize the value from IT investments,” said John Chen, CEO, Bourbon Ibirapuera. “After an initial investment in OPERA and Simphony Cloud we quickly realized the value of investing in a suite of solutions with existing integrations. With our new suite of solutions we are arming our business with deeper insights to empower informed management decisions, streamlining the reservation process and optimizing hotel operations in an integrated way.”

As a longtime customer of Oracle Hospitality, Bourbon Ibirapuera’s experience with OPERA delivering value for hospitality operations established Oracle as a clear front runner for the upgrade project. A phased integration approach, first focusing on the back end operational infrastructure before adding new marketing tools, allowed Bourbon Ibirapuera to effectively manage digital transformation and prepare staff for a cloud transition. Bourbon Ibirapuera’s implementation will also bring several new customer-facing features to the region including web and mobile check-in and more targeted incentives for new or repeat guests including personalized rates and promotions.

Bourbon Ibirapuera will bring these point-of-sale, online reservation, financial and budget planning, and CRM and marketing tools online in June 2018 after a four-month implementation process.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators and hoteliers. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility.

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Matt Torres

  • 415-595-1584

St. HOPE Accelerates Mission to Support Sacramento Youth

Oracle Press Releases - Wed, 2018-06-20 08:00
Press Release
St. HOPE Accelerates Mission to Support Sacramento Youth Nonprofit Founded by Former NBA All-Star and Sacramento Mayor Turns to NetSuite to Support Growth Beyond the Classroom

SAN MATEO, Calif.—Jun 20, 2018

 

St. HOPE, a nonprofit community development corporation, has leveraged Oracle NetSuite to support its mission to create one of the finest urban pre-kindergarten through 12th grade public school systems in the United States. With NetSuite SuiteSuccess for Nonprofits, St. HOPE has been able to streamline critical business functions in order to focus its time and resources on providing high quality public education and creating living-wage sustainable jobs.

Founded by former NBA All-Star and Sacramento mayor Kevin Johnson in 1989, St. HOPE began as a single, portable classroom that served as an afterschool program for Sacramento High School students. Today, it serves 1,800 students through five charter schools and manages residential properties as well as an Art and Cultural Center that includes a cafe, bookstore, barbershop, art gallery and 200-seat theater. The charter school network focuses on students from urban communities and aims to graduate self-motivated, industrious and critical-thinking leaders who are committed to serving others, passionate about lifelong learning and prepared to earn a degree from a four-year college.

“As we found success with the schools, the organization realized that we had an opportunity to do more in the community,” said Julian Love, chief financial officer, St. HOPE Community Development. “That meant we needed a system that could better track and manage finances. SuiteSuccess fit exactly what we needed.”

With the preconfigured roles, dashboards and nonprofit industry best practices within SuiteSuccess, St. HOPE has been able to shorten payroll processes by 87 percent, digitize and gain greater control over purchasing processes, and achieve real-time visibility into its financial performance. As a result, St. HOPE has been able to manage the increasing business complexity presented by its growth and expanding scope. St. HOPE selected NetSuite SuiteSuccess for Nonprofits in March 2017 and went live with a full-fledged Enterprise Resource Planning (ERP) system in less than three months.

“NetSuite has a proud history of helping organizations in the nonprofit sector,” said David Geilhufe, Senior Director, Social Impact & Nonprofit, Oracle NetSuite. “With SuiteSuccess, we’re able to help thriving organizations like St. HOPE to quickly and easily manage critical business functions so they can focus on their mission and on helping the community.”

Contact Info
Danielle Tarp
Oracle NetSuite Corporate Communications
650-506-2904
danielle.tarp@oracle.com
About St. HOPE

St. HOPE began in 1989 in a portable classroom at Sacramento High School as an after-school program named St. HOPE Academy. Founded by NBA All-Star and Oak Park native Kevin Johnson, St. HOPE is a nonprofit community development corporation whose mission is to revitalize the Oak Park community through public education and economic development. Learn more at www.sthope.org.

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials/Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 40,000 organizations and subsidiaries in 199 countries and territories.

For more information, please visit http://www.netsuite.com.

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Oracle Utilities Ranks No. 1 in Home Energy Management

Oracle Press Releases - Wed, 2018-06-20 07:00
Press Release
Oracle Utilities Ranks No. 1 in Home Energy Management Navigant names Oracle the leader in customer engagement and demand side management technology

Redwood Shores, Calif.—Jun 20, 2018

Oracle, the largest provider of cloud technology for the utility industry, once again earned the top spot in a Navigant Research Leaderboard report that ranks companies in the home energy management (HEM) space. Oracle Utilities, which acquired Opower in 2016, received the highest ranking in this 2018 report due to significant market penetration of its home energy management solution across 100 utilities and its ability to offer utility companies a comprehensive, end-to-end utility software solution at scale around the world.

“This is significant validation of our continued leadership and support of our utility customers,” said Rodger Smith, SVP and general manager of Oracle Utilities. “Since acquiring Opower we have continued to innovate in the rapidly evolving home energy management market to deliver the strongest results in the category. Our investment in scalable solutions that connect every customer enables tighter customer-to-grid integration for the utility of the future.”

Prior to joining Oracle, Opower had been consistently ranked as the top provider since Navigant introduced the HEM Leaderboard, due to its leading capabilities in this category including home energy reports, behavioral demand response, smart meter and rates engagement, billing insights and alerts, and embeddable online tools.

“Home energy management (HEM) is a broad market of technologies and services that consumers use to better manage and control their home energy consumption and production. With the development of the smart home and connected devices, energy management has become a critical part of the digitization of the home. Oracle Utilities’ Opower solutions are at the forefront of monitoring energy usage, demand side management programs and increasing customer engagement to increase energy efficiency,” said Paige Leuschner, Research Analyst at Navigant.

The Navigant Research Leaderboard Report examines the strategy and execution of 14 companies that offer HEM software solutions and rates them on 10 criteria: vision, go-to-market strategy, partners, technology, geographic reach, sales and marketing, product performance, product portfolio and integrations, pricing, and staying power. Using Navigant Research’s proprietary Leaderboard methodology, vendors are profiled, rated, and ranked to provide an objective assessment of each company’s relative strengths and weaknesses in the global HEM market.

Contact Info
Valerie Beaudett
Oracle Corporation
+1 650.400.7833
valerie.beaudett@oracle.com
Wendy Wang
H&K Strategies
+1 979 216 8157
wendy.wang@hkstrategies.com
About Oracle Utilities

Oracle Utilities delivers business critical applications that help electric, gas and water utilities worldwide enhance customer experience, increase operational efficiency and achieve performance excellence. Our customer care and billing, network management, work and asset, field services, meter data management and analytics solutions integrate with Oracle’s leading enterprise applications, BI tools, middleware, database technologies, servers and storage. We are the largest provider of cloud services in the industry today, serving the entire utility value chain from the grid to the meter to end customers. Our software enables customers to adapt more nimbly to market deregulation, meet ever-evolving customer demands, and deliver on energy efficiency commitments. Find out how we can become your trusted advisor—visit www.oracle.com/utilities.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


how to reset a sequence

Tom Kyte - Wed, 2018-06-20 01:26
A sequence was created with no options, and its current value is 10. Specify the statements needed to reset the sequence to 8, so that the next value generated after 8 is 11. Explain the logic.
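A common technique (shown here as a sketch only, assuming a sequence named s whose last generated value is 10) is to alter the increment temporarily, pull one value, and then jump forward:

```sql
-- Sketch: stepping a sequence back without dropping it (sequence name s is assumed).
ALTER SEQUENCE s INCREMENT BY -2 NOCACHE;  -- step backwards from 10
SELECT s.NEXTVAL FROM dual;                -- returns 8: the sequence now stands at 8
ALTER SEQUENCE s INCREMENT BY 3 NOCACHE;   -- jump forward past 9 and 10
SELECT s.NEXTVAL FROM dual;                -- returns 11, as the question requires
ALTER SEQUENCE s INCREMENT BY 1;           -- restore the normal increment
```

The logic: a negative increment moves the sequence backwards, and a one-off increment of 3 makes the very next value 11. Recent releases (Oracle Database 18c onward) also document ALTER SEQUENCE ... RESTART START WITH as a direct alternative.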
Categories: DBA Blogs

Log of switchover/failover/open

Tom Kyte - Wed, 2018-06-20 01:26
What data dictionary view can be used to determine the number of times that a switchover/failover/open has occurred for a standby database?
Categories: DBA Blogs

connection pooling

Tom Kyte - Wed, 2018-06-20 01:26
Tom, what is connection pooling? Please, can you give an example that shows thorough understanding of the subject matter as related to either ODBC or JDBC application connections to the Oracle database? Your site is more important and most val...
Categories: DBA Blogs

API Monetization: What Developers Need to Know

OTN TechBlog - Tue, 2018-06-19 23:15

You’ve no doubt heard the term “API monetization,” but do you really understand what it means? More importantly, do you understand what API monetization means for developers?

“The general availability of information and services has really influenced the way APIs behave and the way APIs are built,” says Oracle ACE and Developer Champion Arturo Viveros, principal architect at Sysco AS in Norway. “The hyper-distributed nature of the systems we work with, with cloud computing and with blockchain, and all of these technologies, makes it very important. Everyone wants to have information in real time now, as opposed to before when we could afford to create APIs that could give you a snapshot of what happened a few hours ago, or a day ago.”

These days the baseline consumer expectation is 24/7/365 service. “So, as a developer, when you’re designing APIs that are going to be exposed as business assets or as products, you need to take into account characteristics like high availability, performance resiliency, and flexibility,” says Viveros. “That’s why all of these new technologies go into supporting APIs, like microservices and containers and serverless. It's so critical to learn to use them because they allow you to be flexible to deploy new versions or improved versions of APIs. They allow your APIs to have an improved life cycle and to move away from the whole monolithic paradigm, reduce time to market, and move forward at the speed that the organization and your user base and consumer base require.”

So yeah, there’s a bit of a learning curve. But hasn’t that always been the developer’s reality? And hasn’t there always been some kind of reward at the end of the learning curve?

“It’s an exciting time for developers,” says Luis Weir. He’s an Oracle ACE Director, a Developer Champion, and the CTO of the Oracle Delivery Unit with Capgemini in the UK. “API monetization is an opportunity to add direct tangible value to the business. APIs have become a source of revenue on their own,” says Weir. “This is quite exciting. I don't think this is something that we’ve seen before in the IT industry. Whatever APIs we had in the past were in support of a business product, they were not the business product. That's different, and I think developers have the opportunity now to be completely, directly involved in the creation and maintenance of these products.”

While developing APIs is certainly important, it’s no less important to take advantage of what is already out there. “Developers within an organization need to be thinking about what APIs might be available to complete functions that are not within their core competency,” says Robert Wunderlich, product strategy director for Cloud, API, and Integration at Oracle. “There are a lot of publicly available APIs that can be used for low or no cost or a reasonable cost.”

[For example, check out the API Showcase on the NYC Developer Portal.]

Luis Weir sees another important aspect of API monetization. “As a developer it's always exciting to see how your product is received. For example, when you create an open source GitHub project and then all of a sudden you see a lot of people forking your project and trying to trace pull requests to contribute to it, that's exciting because that means that you did something that added to your organization or to the community. That's rewarding as a developer. It’s far more rewarding to see an IT asset that's directly influencing the direction of the business.” API monetization provides that visibility.

Arturo Viveros, Luis Weir, and Robert Wunderlich explore API monetization in depth from a developer perspective in this month’s Oracle Developer Community Podcast. Check it out!

The Panelists

In alphabetical order

Arturo Viveros
Oracle ACE, Oracle Developer Champion
Principal Architect, Sysco AS

Luis Weir
Oracle ACE Director, Oracle Developer Champion
CTO, Oracle Delivery Unit, Capgemini UK

Robert Wunderlich
Product Strategy Director for Cloud, API, and Integration, Oracle

Additional Resources

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

Q4 FY18 GAAP EPS UP 8% TO $0.82 and NON-GAAP EPS UP 11% TO $0.99

Oracle Press Releases - Tue, 2018-06-19 14:21
Press Release
Q4 FY18 GAAP EPS UP 8% TO $0.82 and NON-GAAP EPS UP 11% TO $0.99 Q4FY18 Total Revenue Up 3% to $11.3 Billion and FY18 Total Revenue Up 6% to $39.8 Billion

Redwood Shores, Calif.—Jun 19, 2018

Oracle Corporation (NYSE: ORCL) today announced fiscal 2018 Q4 results and fiscal 2018 full year results. In Q4, Total Revenues were up 3% to $11.3 billion compared to Q4 last year. Q4 Cloud Services and License Support revenues were up 8% to $6.8 billion. Q4 Cloud License and On-Premise License revenues were down 5% to $2.5 billion.

Q4 GAAP Operating Income was up 8% to $4.4 billion, and GAAP Operating Margin was 39%. Q4 Non-GAAP Operating Income was up 6% to $5.3 billion, and non-GAAP Operating Margin was 47%. Q4 GAAP Net Income was $3.4 billion, and non-GAAP Net Income was $4.1 billion. Q4 GAAP Earnings Per Share was up 8% to $0.82, while non-GAAP Earnings Per Share was up 11% to $0.99.

At the end of Q4, short-term deferred revenues were up 2% to $8.4 billion, while Operating Cash Flow on a trailing twelve-month basis was up 9%, or $1.3 billion, to a record $15.4 billion.

For the full fiscal year 2018, Total Revenues were up 6% to $39.8 billion compared to fiscal 2017. FY18 Cloud Services and License Support revenues were up 10% to $26.3 billion. FY18 Cloud License and On-Premise License revenues were down 4% to $6.2 billion.

FY18 GAAP Operating Income was up 8% to $13.7 billion, and GAAP Operating Margin was 34%. FY18 Non-GAAP Operating Income was up 9% to $17.6 billion, and non-GAAP Operating Margin was 44%. FY18 GAAP Net Income was $3.8 billion, and non-GAAP Net Income was $13.2 billion. FY18 GAAP Earnings Per Share was $0.90, while Non-GAAP Earnings Per Share was $3.12.

“Last year, I forecast double-digit non-GAAP earnings per share growth for FY18 and we delivered 14% growth this year, largely driven by strong growth in our cloud businesses,” said Oracle CEO, Safra Catz. “Looking ahead to FY19, I expect revenue growth will enable us to deliver double-digit non-GAAP earnings per share growth once again.”

“We had a great fourth quarter with total revenues more than $200 million above our constant currency forecast,” said Oracle CEO, Mark Hurd. “Our strategic Fusion ERP and HCM SaaS cloud applications suite revenues grew over 50% in the fourth quarter, and we expect continued strong growth from our Fusion SaaS suites throughout FY19.”

“Some of our largest customers have now begun the process of moving their on-premise Oracle databases to the Oracle Cloud,” said Oracle Chairman and CTO, Larry Ellison. “For example, AT&T is moving thousands of databases and tens of thousands of terabytes of data into the Oracle Cloud. We think that these large scale migrations of Oracle database to the cloud will drive our PaaS and IaaS businesses throughout FY19.”

The Board of Directors also declared a quarterly cash dividend of $0.19 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 17, 2018, with a payment date of July 31, 2018.

Q4 Fiscal 2018 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast of this event, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q4 results and fiscal 2018 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Pass Code: 6866209.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com/ or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding our expectations for future growth in revenues, non-GAAP earnings per share and our Fusion SaaS suite of products, and our beliefs that large scale customer cloud migrations will drive our PaaS and IaaS businesses in FY19, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud computing strategy, including our Oracle Cloud SaaS, PaaS, IaaS and data as a service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, enhance and improve our products and support services in a timely manner or position and price our products and services to meet demand, customers may not buy cloud licenses and on-premise licenses, cloud services or hardware products or purchase or renew support contracts. (3) If the security measures for our products and services are compromised or if our products and services contain significant coding, manufacturing or configuration errors, we may experience reputational harm, legal claims and reduced sales. (4) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (5) Our international sales and operations subject us to additional risks that can adversely affect our operating results, including risks relating to foreign currency gains and losses. 
(6) We have an active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. (7) We may fail to achieve our financial forecasts due to such factors as delays or size reductions in transactions, fewer large transactions in a particular quarter, fluctuations in currency exchange rates, delays in delivery of new products or releases or a decline in our renewal rates for support contracts. A detailed discussion of these factors and other risks that affect our business is contained in our U.S. Securities and Exchange Commission (SEC) filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading “Risk Factors.” Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of June 19, 2018. Oracle undertakes no duty to update any statement in light of new information or future events. 


APIs to the Rescue in the Aftermath of 2017 Mexican Earthquake

OTN TechBlog - Tue, 2018-06-19 13:38

After three weeks Hawaii's Kilauea volcano is still busy eating an island. Early in June Guatemala's Volcan De Fuego erupted and is still literally shaking the earth. And just this past weekend a 5.3 magnitude quake struck Osaka, Japan. Mother Earth knows how to get our attention. But in doing so she also triggers an impulse in some human beings to jump in and help in any way they can.

One great example of that kind of techie humanitarianism is the group of Mexican developers and DBAs who, in the immediate aftermath of the earthquake that hit Mexico in 2017, banded together in a collaborative effort to rapidly build a system to coordinate rescue and relief efforts.

Oracle ACE Rene Antunez was one of the volunteers in that effort. He shares the organizational and technical details in this video interview recorded at last week's ODTUG Kscope 2018 event in Orlando.

Given that natural disasters will continue to happen, the open source project is ongoing and is available on GitHub:

https://github.com/CodeandoMexico/terremoto-cdmx

Why not lend your skills to this worthwhile effort?

Have you been involved in similar humanitarian software development efforts? Post a comment below.

 

Oracle Hospitality OPERA Reporting and Analytics Cloud Service Arms Hoteliers with Critical Insights to Improve Operational Efficiencies and Create Rewarding Guest Experiences

Oracle Press Releases - Tue, 2018-06-19 08:00
Press Release
Oracle Hospitality OPERA Reporting and Analytics Cloud Service Arms Hoteliers with Critical Insights to Improve Operational Efficiencies and Create Rewarding Guest Experiences New Analytical Tools Empower Corporate Revenue Managers, Property General Managers and Front Desk Managers to Make More Strategic Decisions

Redwood Shores, Calif.—Jun 19, 2018

Oracle Hospitality today unveiled OPERA Reporting and Analytics Cloud Service, a new offering providing hotel management with access to business data and performance metrics, intuitive data visualization and customized reporting. OPERA Reporting and Analytics Cloud Service is powered by Oracle Business Intelligence (OBI), an industrial-strength analytics engine known for its capability to develop, design and deploy reports, and used by Fortune 100 organizations to leapfrog their competitors. With Oracle Business Intelligence at the core, OPERA Reporting and Analytics Cloud Service empowers corporate hotel executives and front-desk staff to make more strategic decisions that optimize operational efficiency, enhance guest experience, and drive continued revenue.

“OPERA Reporting and Analytics was developed with the goal of simplifying reporting, creating a common reporting platform for both our restaurant and hotel customers and integrating with the OPERA platform to provide actionable insights to the hospitality industry faster than ever,” said Laura Calin, vice president strategy and solutions, Oracle Hospitality. “With these new tools hotel staff at every level can make more accurate and strategic decisions that align to corporate growth objectives while enabling meaningful guest interactions that enhance guest loyalty.”

OPERA Reporting and Analytics enables smarter decisions and better forecasts by allowing management to easily analyze and visualize data on property financial performance, guest profiles, reservations, room rates and revenue metrics, restaurant sales, catering events, and blocks. The solution can be fully customized to reflect performance indicators and metrics unique to individual properties and multiple tiers of hotel staff.

Empowering Staff at Every Level to Execute Better

OPERA Reporting and Analytics enables hotel staff at all levels to drive revenue and deliver better guest experiences, with a variety of use cases including:

  • Allowing corporate and area revenue managers to analyze performance across multiple properties in a region and understand the factors that cause revenue to fluctuate year to year.
  • Providing general managers with deep analysis of daily operations and measurement against room revenue, food and beverage revenue and occupancy which can be aligned with monthly, quarterly and annual performance goals.
  • Empowering front desk management to offer a better guest experience by accelerating guest check-in and prioritizing room availability for loyal or VIP guests with a near real-time perspective on departures and room inventory.

Contact Info
Matt Torres
Oracle
415-595-1584
matt.torres@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

About Oracle Hospitality

Oracle Hospitality brings 35 years of experience in providing technology solutions to food and beverage operators and hoteliers. We provide hardware, software, and services that allow our customers to deliver exceptional guest experiences while maximizing profitability. Our solutions include integrated point-of-sale, loyalty, reporting and analytics, inventory and labor management, all delivered from the cloud to lower IT cost and maximize business agility.

For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The above is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


How to detect if insert transactions in oracle db are really slow?

Tom Kyte - Tue, 2018-06-19 07:06
At work, I have an Oracle DB (11g) in which I want to detect if there's slow performance while inserting data. Here's the situation: some production devices send data results from tests to Server A; this server is an important server and it replica...
Categories: DBA Blogs

How can I optimize this DELETE operation if the values are hard-coded?

Tom Kyte - Tue, 2018-06-19 07:06
Hi, I'm a bit new to PL/SQL development. I would like to know how I could optimize the delete operation with a FORALL if my query is the following: DELETE FROM SCH.TA_DELETE WHERE FIACUM < 1 AND FIPAIS = 1 AND F...
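For a single static predicate like the one above, one plain DELETE is already the most efficient form; FORALL pays off when each element of a collection drives its own delete. A hypothetical sketch (collection values invented; table and column names taken from the question):

```sql
-- Hypothetical sketch: bulk delete driven by a collection with FORALL.
DECLARE
  TYPE t_pais IS TABLE OF sch.ta_delete.fipais%TYPE;
  l_pais t_pais := t_pais(1, 2, 3);  -- hard-coded values, as in the question
BEGIN
  FORALL i IN 1 .. l_pais.COUNT
    DELETE FROM sch.ta_delete
     WHERE fiacum < 1
       AND fipais = l_pais(i);
  COMMIT;
END;
/
```

FORALL sends all the deletes to the SQL engine in a single round trip, avoiding a row-by-row loop.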
Categories: DBA Blogs

problem of inserting a long string of characters

Tom Kyte - Tue, 2018-06-19 07:06
Hello team, I'm trying to insert into a table "TEST COM" the result of selecting rows from another table. I used the wm_concat function. /**********/ insert into COMMENTAIRE_TEST (SELECT wm_concat((DBMS_LOB.SUBSTR(COM_TEXTE,400...
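Note that wm_concat is undocumented and was removed in Oracle 12c; LISTAGG (available since 11gR2) is the supported replacement. A sketch with hypothetical source table, ordering and grouping columns:

```sql
-- Sketch: string aggregation with LISTAGG instead of the undocumented wm_concat.
-- source_comments, com_id and dossier_id are hypothetical names.
INSERT INTO commentaire_test (com_texte)
SELECT LISTAGG(DBMS_LOB.SUBSTR(com_texte, 400, 1), ' ')
         WITHIN GROUP (ORDER BY com_id)
FROM   source_comments
GROUP  BY dossier_id;
```

LISTAGG raises ORA-01489 once the concatenated result exceeds the VARCHAR2 limit, so very long comment strings may still need a CLOB-based approach.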
Categories: DBA Blogs

Use RESULT_CACHE in subqueries

Tom Kyte - Tue, 2018-06-19 07:06
Dear Tom, I am thinking to use the new feature "RESULT_CACHE" to optimize some search queries for my paginated pages. So far, for a search page I have : 1.) a count query and 2.) the query that returns a page from the result set Both 1 an...
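As a sketch (table, column and bind names are all hypothetical), both the count query and the page query can carry the hint:

```sql
-- Sketch: caching both queries of a paginated search (all names hypothetical).
SELECT /*+ RESULT_CACHE */ COUNT(*)
FROM   orders
WHERE  status = :p_status;

SELECT /*+ RESULT_CACHE */ *
FROM  (SELECT o.*,
              ROW_NUMBER() OVER (ORDER BY o.order_date DESC) AS rn
       FROM   orders o
       WHERE  o.status = :p_status)
WHERE  rn BETWEEN :p_first AND :p_last;
```

Results are cached per distinct set of bind values and invalidated automatically when a dependent table changes, so the result cache suits searches over relatively static data.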
Categories: DBA Blogs

database links

Tom Kyte - Tue, 2018-06-19 07:06
How can I create database links to access remote databases? Please tell me the procedure for creating database links.
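As a minimal sketch (link name, credentials and TNS alias are all placeholders):

```sql
-- Sketch: a private database link (every name below is a placeholder).
CREATE DATABASE LINK remote_db
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING 'REMOTE_TNS_ALIAS';  -- tnsnames.ora entry or a full connect descriptor

-- Query a remote table through the link:
SELECT * FROM employees@remote_db;
```

Creating the link requires the CREATE DATABASE LINK privilege; a PUBLIC link, visible to all users, requires CREATE PUBLIC DATABASE LINK.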
Categories: DBA Blogs

How to avoid repeated function call for multiple columns' values.

Tom Kyte - Tue, 2018-06-19 07:06
Hi I'm refactoring an old procedure that calls a function for determining whether passed in values consist of only characters allowed in the front end app on top of the database. The procedure has a cursor that gathers all records it needs to ...
Categories: DBA Blogs
