Feed aggregator

APEX Licensing

Tom Kyte - Mon, 2019-02-11 14:46
Why are you talking about the license? Is there a license required just to have the software?
Categories: DBA Blogs

Getting Started with APEX

Tom Kyte - Mon, 2019-02-11 14:46
Where do I start my career in APEX? I mean, how do I get started with APEX?
Categories: DBA Blogs

JSON Simple Dot-Notation Access Returning Null

Tom Kyte - Mon, 2019-02-11 14:46
Hi All, I am having an issue retrieving data from a JSON column based on a key. Find the JSON below; I already validated it and didn't find any issue with it. "{ "Test": "123.40.4", "allowedtables": [{ "name": "t", "attri...
Categories: DBA Blogs

SQL to find indexed and unindexed queries

Tom Kyte - Mon, 2019-02-11 14:46
Hi Guys, Is there a way to determine the indexed and unindexed queries running on the database? Thanks.
Categories: DBA Blogs

Getting results based on dates

Tom Kyte - Mon, 2019-02-11 14:46
Hello, Ask Tom Team. I want to make a procedure (if that's the best way) to be executed every day based on two possible dates. I think I have to fill two variables first. -- Getting dates: If today's date is <= the 15th day of the current mont...
Categories: DBA Blogs

Database administration of log buffer

Tom Kyte - Mon, 2019-02-11 14:46
2. The airline database is an active database. The admin of the database has configured the redo log buffer to 16M. Assume that 1M of redo log entries is created every 1/2 second. LGWR copies those entries from buffer to file when 1M full in 1 s...
Categories: DBA Blogs

Microsoft Azure: Pricing Calculator

Dietrich Schroff - Mon, 2019-02-11 14:18
If you are thinking about moving your servers/services/apps into Microsoft Azure, the pricing calculator is very helpful for estimating the cost:
https://azure.microsoft.com/de-de/pricing/calculator/


The usage is very simple. Just pick the kind of service you want to use from Microsoft Azure:
And modify the defaults to your needs:

The problem is that it is not so easy to calculate the number of I/O transactions for your application, but for a first estimate these numbers should be sufficient.



Jupyter Notebook — Forget CSV, fetch data from DB with Python

Andrejus Baranovski - Mon, 2019-02-11 14:11
If you read a book, article or blog about Machine Learning, chances are high it will use training data from a CSV file. Nothing wrong with CSV, but let's think about whether it is really practical. Wouldn't it be better to read data directly from the DB? Often you can't feed business data directly into ML training; it needs pre-processing: changing categorical data, calculating new data features, etc. The data preparation/transformation step can be done quite easily with SQL while fetching the original business data. Another advantage of reading data directly from the DB: when data changes, it is easier to automate the ML model re-training process.

In this post I describe how to call Oracle DB from Jupyter notebook Python code.

Step 1 

Install cx_Oracle Python module:

python -m pip install cx_Oracle

This module helps to connect to Oracle DB from Python.
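Once Steps 2 and 3 below are done, a minimal smoke test looks like this; note that the credentials and connect string are placeholders, not values from the original post:

# Minimal cx_Oracle connectivity check (user, password and DSN are placeholders)
import cx_Oracle

connection = cx_Oracle.connect("hr", "hr_password", "dbhost:1521/orclpdb1")
cursor = connection.cursor()
cursor.execute("select sysdate from dual")
print(cursor.fetchone()[0])
cursor.close()
connection.close()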

Step 2

cx_Oracle lets you execute SQL calls from Python code. But to be able to call a remote DB from a Python script, we need to install and configure Oracle Instant Client on the machine where Python runs.

If you are using Ubuntu, install alien:

sudo apt-get update 
sudo apt-get install alien 

Download RPM files for Oracle Instant Client and install with alien:

alien -i oracle-instantclient18.3-basiclite-18.3.0.0.0-1.x86_64.rpm 
alien -i oracle-instantclient18.3-sqlplus-18.3.0.0.0-1.x86_64.rpm 
alien -i oracle-instantclient18.3-devel-18.3.0.0.0-1.x86_64.rpm 

Add environment variables:

export ORACLE_HOME=/usr/lib/oracle/18.3/client64 
export PATH=$PATH:$ORACLE_HOME/bin 
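Depending on your distribution, the loader may also need to find the Instant Client shared libraries; a hedged addition (the path matches the alien install above):

# Make the Instant Client shared libraries visible to the dynamic loader
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH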

Read more here.

Step 3 

Install Magic SQL Python modules:

pip install jupyter-sql 
pip install ipython-sql 

Installation and configuration complete.

For today's sample I'm using the Pima Indians Diabetes Database. The CSV data can be downloaded from here. I uploaded the CSV data into a database table and will be fetching it through SQL directly in the Jupyter notebook.

First of all, a connection is established to the DB, and then the SQL query is executed. The query result set is stored in a variable called result. Notice the %%sql line: this is the SQL magic.
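In case the screenshot doesn't come through, a minimal sketch of those cells could look like this; the connection string (SQLAlchemy's cx_Oracle dialect) and the table name pima_diabetes are assumptions for illustration:

# Cell 1: load the SQL magic and connect (credentials are placeholders)
%load_ext sql
%sql oracle+cx_oracle://hr:hr_password@dbhost:1521/?service_name=orclpdb1

# Cell 2: run the query and store the result set in a variable
%%sql result <<
select * from pima_diabetes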


A username and password must be specified while establishing a connection. To avoid sharing a password, make sure to read the password value from an external source (it could be a simple JSON file, as in this example, or a more advanced encoded token from a keyring).

The beauty of this approach: data fetched through a SQL query is available out of the box in a Data Frame. A Machine Learning engineer can work with the data in the same way as if it were loaded through CSV:
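A hedged continuation of the example above: the ipython-sql result object exposes a DataFrame() helper, so the hand-off to pandas is a one-liner:

# Convert the %%sql result set into a pandas DataFrame
df = result.DataFrame()
df.head()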

Sample Jupyter notebook available on GitHub. Sample credentials JSON file.

Reintroducing Rekha Ayothi

Steven Chan - Mon, 2019-02-11 10:27

Continuing our series of posts reacquainting you with the Oracle E-Business Suite Applications Technology Group blogging team, I'd like to reintroduce Rekha Ayothi. Rekha is a Senior Principal Product Manager handling all things related to integrations with Oracle E-Business Suite. She is an expert on Oracle's E-Business Suite integration product set and holds a US patent for her design work: US Patent 9860298, "Providing access via hypertext transfer protocol (HTTP) request methods to services implemented by stateless object."

Rekha has been a regular contributor to this blog, and will continue to share her expertise by bringing you the latest news in the integration space. Here are a few of Rekha's recent blog posts:

Categories: APPS Blogs

TaylorMade Golf Tees Off with Oracle Autonomous Database

Oracle Press Releases - Mon, 2019-02-11 07:00
Press Release
TaylorMade Golf Tees Off with Oracle Autonomous Database
Leading golf equipment manufacturer turns over key data management tasks to Oracle, boosting performance and scalability

Redwood Shores, Calif.—Feb 11, 2019

TaylorMade Golf, in its quest for growth after divesting from Adidas, is turning to Oracle Autonomous Database to underpin its cloud modernization strategy and drive innovation. A key benefit of the new Oracle Database is that it automates the day-to-day database operations and tuning, freeing TaylorMade to focus on making the world’s best golf equipment.

When TaylorMade separated from Adidas, the company needed to build out a platform in the cloud and chose Oracle Autonomous Database for its unprecedented availability, performance, and security. The industry’s first and only autonomous database, Oracle Autonomous Data Warehouse provides TaylorMade with the ability to scale as needed for planned business initiatives, particularly around seasonal workloads, in a simple and cost-effective manner. It also works with TaylorMade’s existing business analytic tools to drive faster analysis to rapidly adjust sales strategies as needed. The database also gives TaylorMade the tools they need to pinpoint opportunities for business diversification.

“As our business needs continued to evolve, we required a more efficient way to seamlessly manage and scale our data management system,” said Tom Collard, vice president of IT, TaylorMade. “With Oracle Autonomous Data Warehouse, we now have a scalable, low-cost cloud platform to power our business. This will help sustain growth and free up valuable employee time so they can focus on more mission-critical initiatives.”

Taking advantage of Oracle Autonomous Data Warehouse’s self-driving, self-securing, and self-repairing capabilities, TaylorMade will ensure its critical infrastructure is running efficiently with performance that is 40x faster than its previous on-premise database solution. This will enable its IT staff to focus on activities that drive growth and better meet customer demands while keeping costs low.

“TaylorMade has a tradition of building the best golf equipment for professionals and consumers,” said Andrew Mendelsohn, executive vice president of Database Server Technologies, Oracle. “With Oracle Autonomous Database, Oracle handles all the database operations and tuning so TaylorMade’s IT staff can devote more time to deriving business value from their data.”

Contact Info
Dan Muñoz
Oracle
+1.650.506.2904
dan.munoz@oracle.com
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
About TaylorMade Golf Company

Headquartered in Carlsbad, California, TaylorMade Golf is a leading manufacturer of high performance golf equipment with industry-leading innovative products like M5/M6 metalwoods, M5/M6 irons and TP5/TP5X golf balls. TaylorMade is the #1 Driver in Golf and also a major force on the PGA TOUR with one of the strongest athlete portfolios in golf, which includes Dustin Johnson, Rory McIlroy, Jason Day, Jon Rahm and Tiger Woods.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Use Of Oracle Coherence in Oracle Utilities Application Framework

Anthony Shorten - Sun, 2019-02-10 17:03

In the batch architecture for the Oracle Utilities Application Framework, a Restricted Use License of Oracle Coherence is included in the product. The Distributed and Named Cache functionality of Oracle Coherence are used by the batch runtime to implement clustering of threadpools and submitters to help support the simple and complex architectures necessary for batch.

Partners ask about the libraries and their potential use in their implementations. There are a few things to understand:

  • Restricted Use License conditions. The license is exclusively for managing executing members (i.e. submitters and threadpools) across hardware licensed for use with Oracle Utilities Application Framework based products. It cannot be used in any code outside of that restriction. Partners cannot use the libraries directly in their extensions; it is all embedded in the Oracle Utilities Application Framework.
  • Limited Libraries. The Oracle Coherence libraries are restricted to the subset needed by the license. It is not a full implementation of Oracle Coherence. As it is a subset, Oracle does not recommend using the Oracle Coherence Plug-In available for Oracle Enterprise Manager with the Oracle Utilities Application Framework implementation of the Oracle Coherence cluster. Using this plug-in against the batch cluster will result in missing and incomplete information being presented, causing inconsistent results in that plug-in.
  • Patching. The Oracle Coherence libraries are shipped with the Oracle Utilities Application Framework and are therefore managed by patches for the Oracle Utilities Application Framework, not Coherence directly. Unless otherwise directed by Oracle Support, do not manually manipulate the Oracle Coherence libraries.

The Oracle Coherence implementation with the Oracle Utilities Application Framework has been optimized for use with the batch architecture with a combination of prebuilt Oracle Coherence and Oracle Utilities Application Framework based configuration files.

Note: If you need to find out the version of the Oracle Coherence libraries used in the product at any time, they are listed in the file $SPLEBASE/etc/ouaf_jar_versions.txt.

The following command can be used to see the version:

cat $SPLEBASE/etc/ouaf_jar_versions.txt | grep coh

For example, in the latest version of the Oracle Utilities Application Framework (4.4.0.0.0):

cat /u01/umbk/etc/ouaf_jar_versions.txt | grep coh

coherence-ouaf                   12.2.1.3.0
coherence-work                   12.2.1.3.0

Region only shown in development mode in Oracle APEX

Dimitri Gielis - Sun, 2019-02-10 11:51
In the last few months, I had to look up several times in different projects how to show a region only when I was logged in to the App Builder in Oracle Application Express (APEX). So I thought I'd write a quick post on it.

In one project we had to log in with different users to test the behavior of the authorization schemes, so people saw the things they should see and could do the things they were allowed to do. As the logins were not straightforward, we created a region with those test credentials. Other people were testing too with their own credentials, so we really wanted to keep the original login page, and decided to just add a region on the page that we would only see when we were logged in to APEX itself.

Today I added some new pages to an app and wanted to make sure the navigation to those pages was only visible to me. I know, you should do this in DEV, and then when all is fine, propagate the app to TEST and PROD. The truth is, I have some applications that only exist in one environment, so I update straight in "production". Those apps are still backed up automatically every night, so worst case I can always take the version from the previous day. But just to be clear, this is not good practice ;)

So how do you show a region only when you are in development mode in Oracle APEX?

You go to the Conditions section of your region, list entry or any component in APEX really, and add a PL/SQL Expression: apex_application.g_edit_cookie_session_id is not null
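If you need the same check in several components, one option is to wrap it in a function and reference it from there (a sketch; the function name is illustrative and not from the original post):

create or replace function is_apex_dev_session
  return boolean
is
begin
  -- true only when an App Builder session cookie is present,
  -- i.e. the viewer is logged in to the APEX development environment
  return apex_application.g_edit_cookie_session_id is not null;
end is_apex_dev_session;

Such a function could also back an authorization scheme of type "PL/SQL Function Returning Boolean".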


It would be cool if there was a condition type "Development Mode", but maybe I'm the only one needing this...

Typically you would use Build Options (see Shared Components) to include or exclude certain functionality in your APEX app, but in the above two use cases, it won't really work.


Another enhancement would be for the Status of the Build Option to include "Dev Only" next to Include and Exclude.
Categories: Development

ERR_TOO_MANY_REDIRECTS in Apex login for http and not for https

Geert De Paep - Sun, 2019-02-10 07:02

I have had this mysterious error before, and I never knew where it came from or how to solve it. Sometimes it was there, sometimes not. And I was very scared, because if you don't know where it comes from and it suddenly occurs in production, what then…?

Now I have spent a lot of time investigating this, and I think I found the explanation.

I was setting up a new apex environment, and the behaviour was that accessing the login page via https was ok, while accessing it via http gave the error ERR_TOO_MANY_REDIRECTS.

Of course, you can argue that it is not wise to mix http and https, but there may be reasons to do so and the topic of this discussion is to give insight in the error above.

When accessing my site via https, these are the requests, obtained from the developer tools in Chrome (F12), showing only the relevant parts of the requests sent to the server and the responses received:

Request URL (https): https://myserver.mydomain.com/myalias/f?p=450

Response:

Location: https://myserver.mydomain.com/myalias/f?p=450:1::::::

-> So we get a redirect to page 1, which is the login page.

The browser automatically continues with:

Request URL: https://myserver.mydomain.com/myalias/f?p=450:1::::::

Response:

Location: https://myserver.mydomain.com/myalias/f?p=450:LOGIN_DESKTOP:14782492558009:::::

Set-Cookie: ORA_WWV_APP_450=ORA_WWV-ca0VJbR5JTZgDZ16HZYWS7Hu; secure; HttpOnly

-> A session id is generated by Apex (14782492558009)

-> We get a redirect to the LOGIN_DESKTOP page again, but with a url containing our session id

-> But note that a cookie with the name "ORA_WWV_APP_450" is sent to the browser (Set-Cookie).

-> Also note that the cookie has the "secure" attribute set, so it will only be sent when the url uses https, which is ok because we are using https.

The browser automatically continues with:

Request URL: https://myserver.mydomain.com/myalias/f?p=450:LOGIN_DESKTOP:14782492558009:::::

Cookie sent: ORA_WWV_APP_450=ORA_WWV-ca0VJbR5JTZgDZ16HZYWS7Hu

-> We go to the login page again with a url containing our session id

-> Note that the cookie that was set in the browser in the previous request is now sent to the server, and Apex is happy, i.e. it shows the login page.

In other words, the fact that the url contains a valid session id and the cookie matches means this is a valid session.

Response: the apex login page. All ok.

 

Now in the same browser, we repeat this process using http instead of https and we get:

Request URL (no https): http://myserver.mydomain.com/myalias/f?p=450

Location: http://myserver.mydomain.com/myalias/f?p=450:1::::::

-> Same as above

The browser automatically continues with:

Request URL: http://myserver.mydomain.com/myalias/f?p=450:1::::::

Location: http://myserver.mydomain.com/myalias/f?p=450:LOGIN_DESKTOP:14897208542039:::::

Set-Cookie: ORA_WWV_APP_450=ORA_WWV-PO71knA50OiuT9n5MY3zEayQ; HttpOnly

-> Basically the same as above

-> Note, however, that a cookie is also set here, but it does not contain the "secure" attribute because this is not an https connection

Now the fun begins, the browser continues and the log shows:

Request URL: http://myserver.mydomain.com/myalias/f?p=450:LOGIN_DESKTOP:14897208542039:::::

Response:

Location: http://myserver.mydomain.com/myalias/f?p=450:1:9970034434813:::::

Set-Cookie: ORA_WWV_APP_450=ORA_WWV-JP6Aq5b-MH74eD-FiXO4fBBr; HttpOnly

 

Immediately followed by:

 

Request URL: http://myserver.mydomain.com/myalias/f?p=450:1:9970034434813:::::

Response:

Location: http://myserver.mydomain.com/myalias/f?p=450:LOGIN_DESKTOP:5362634758337:::::

Set-Cookie: ORA_WWV_APP_450=ORA_WWV-qVBvmxuGxWUgGXqFZgVYb1b2; HttpOnly

 

And these two requests keep repeating, one after the other, very fast, until the browser complains about too many redirects.

 

What is happening? Why are we stuck in a redirect loop until we get a browser error?

Did you notice that in the above two requests, no cookie is sent to the server anymore (there is no Cookie request header)? As a consequence, Apex does not trust this as a valid session, because the cookie is not received. The session id in the url and the cookie must match, otherwise someone could hijack your session just by copying your url. So Apex creates new sessions over and over again, and as the cookie is never sent, none of these sessions becomes a valid one. This is in fact intended behaviour.

The question is, of course: why is the cookie not sent? If you look at the Set-Cookie header, there is no reason NOT to send it back:

  • No domain is given, so it is sent for all domains
  • No path is given, so it is sent for all paths
  • HttpOnly has nothing to do with this; it is there to prevent things like evil javascript reading your cookie

It took me a while to figure this out, but when looking at the cookies stored in the browser and their attributes, it became clear (via Chrome -> settings -> advanced -> content settings -> cookies). There is only one cookie in my browser with the name ORA_WWV_APP_450. And surprise surprise, it contains the "secure" attribute, which means it is only sent on https connections. And hence not on my http session, resulting in the ERR_TOO_MANY_REDIRECTS.

So this is what happens according to me:

  • The initial apex-login using https has set the cookie in the browser with the secure attribute
  • The second apex-login using http (no https) sets the same cookie to a different value. But it does not 'clear' the secure flag (I don't know if that is possible anyway). The only thing the Set-Cookie header says is: set the value to ORA_WWV-qVBvmxusomethingKhdshkh and set HttpOnly on. It does not say "clear the secure flag" or "remove the old cookie and create a new one without the secure flag".
  • So the secure flag is not cleared and the cookie is never sent over http, only over https

Summary:

  • Once an https (secure) cookie has been set, it is no longer possible to use a non-https session
  • The only solution is to manually remove the cookie from the browser. Then it works again, until you use https again followed by http.
  • That also explains why this error may seem to occur randomly. As long as the secure cookie is not present, everything works; then suddenly, after one https request, the problem is there.

 

Is this a bug? You can debate this, but according to me: yes. An application should never get into such a redirect loop, no matter how you use it. And yes, you can say that http and https should not be mixed, but there can be reasons to do so (fallback if you have issues with your ssl config, or an internal-only connection that will always work, even if ssl is temporarily unavailable).

Using a different cookie name for a https session vs an http session might solve this problem, but I don’t know if the apex developers will do so, because this is in the core of the authentication logic of Apex.
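If you can live without the http fallback, a server-side workaround is to force the APEX alias onto https so the scheme always matches the secure cookie. A hedged sketch for Apache with mod_rewrite (the /myalias path is from the examples above; adapt to your own config):

# Redirect any plain-http request for the APEX alias to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^/myalias/(.*)$ https://%{HTTP_HOST}/myalias/$1 [R=301,L]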

Questions, if someone can help:

  • Do you know of a (supported) way to avoid this, I mean any setting in Apex or so that I overlooked?
  • What would be the easiest way to clear the secure flag in the cookie, if you would need to do so? Manually removing the cookie from the browser is somewhat complex and I think many people (end users) would not know how to do this.
  • Do you know of a way to better handle this error? E.g. do you know if you can have Apache redirect to some error page in case of ERR_TOO_MANY_REDIRECTS (because this is not an http error code like 404 – Not Found), or even better, to a page where you have a button to clear the cookie?

I hope you have better insight now in what happens when you encounter this error.

Updating Oracle Opatch with AWS Systems Manager Run Command at Scale

Pakistan's First Oracle Blog - Sun, 2019-02-10 02:16
AWS Systems Manager (SSM) is a managed AWS service used to configure and manage EC2 instances, other AWS resources, and on-premise VMs/servers at scale. SSM frees you from needing ssh or bastion host access to the remote resources.


Pre-requisites of SSM:

  • The managed instances need to have the SSM agent running.
  • The managed instances need to be assigned an IAM role with the policy AmazonEC2RoleforSSM.
  • The managed instances need to have a meaningful tag assigned to them to make it possible to manage them in bulk.

Example:

This example assumes that the above pre-requisites are already in place. For step-by-step instructions on how to do that, check this resource (https://aws.amazon.com/getting-started/tutorials/remotely-run-commands-ec2-instance-systems-manager/). It also assumes that all the instances have been assigned tags like Env=Prod or Env=Dev.

Following is the script update_opatch.sh, which was already bootstrapped to the EC2 instance at creation time through userdata, so it is already present at /u01/scripts/update_opatch.sh:

#!/usr/bin/env bash
# Derive the database SID from the pmon background process,
# excluding ASM instances (SIDs starting with +)
ORACLE_SID=`ps -ef | grep pmon | grep -v asm | awk '{print $NF}' | sed s'/pmon_//' | egrep -v "^[+]"`
export ORAENV_ASK=NO
. oraenv > /dev/null 2>&1
# Keep a timestamped backup of the current OPatch directory
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch.$(date +%Y%m%d%H%M%S)
# Fetch the OPatch zip from the staging location (command truncated in the original post)
curl -T /tmp/ -u test@test.com ftps://
mv /tmp/p6880880_101000_linux64.zip $ORACLE_HOME
cd $ORACLE_HOME
unzip p6880880_101000_linux64.zip


Now, just running the following command in Systems Manager will update OPatch on all the managed instances tagged Env=Prod.

aws ssm send-command --document-name "AWS-RunShellScript" --comment "update_opatch.sh" --parameters commands=/u01/scripts/update_opatch.sh --targets Key=tag:Env,Values=Prod
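If you prefer Python over the CLI, a hedged boto3 equivalent of the call above looks like this (the region and script path are assumptions for this example):

# Run the OPatch update script on all instances tagged Env=Prod via SSM
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")
response = ssm.send_command(
    Targets=[{"Key": "tag:Env", "Values": ["Prod"]}],
    DocumentName="AWS-RunShellScript",
    Comment="update_opatch.sh",
    Parameters={"commands": ["/u01/scripts/update_opatch.sh"]},
)
print(response["Command"]["CommandId"])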


Categories: DBA Blogs

OBUG 2019 – First event from Oracle Users Group in Belgium!

Yann Neuhaus - Sat, 2019-02-09 09:00
Introduction

It’s the first edition of the Techdays, the Oracle community event in Belgium. This event happened in Antwerp these past 2 days, and a lot of speakers came from around the world to talk about their experience on focused subjects. Really amazing to heard such high-level conferences.
And it was a great pleasure for me because I’ve been working in Belgium for several years before.

I’ll will try to give you a glimpse of what I found the most interesting among the sessions I chose.

Cloud: you cannot ignore it anymore!

Until now, I did not have much interest in the cloud, because my job is actually helping customers build on-premise (that means not in the cloud) environments. Even if you can live without the cloud, you cannot ignore it anymore, because it touches budget, infrastructure optimization, strategy, flexibility and scalability.
The cloud brings a modern pay-what-you-need-now model, compared to monolithic and costly infrastructures you'll have to pay off over years. The cloud brings a service for a problem.

Cloud providers have understood that now or later, customers will move at least parts of their infrastructure into the cloud.

Going into the cloud is not a yes-or-no question. It's a real project that you'll have to study, as it requires rethinking nearly everything. Migrating your current infrastructure to the cloud without any changes would probably be a mistake. Don't consider the cloud as just putting your servers elsewhere.

I learned that Oracle datacenters are actually not dedicated datacenters: most of the time, their cloud machines are located in existing datacenters from different providers, sometimes making your connection to the cloud only meters away from your actual servers!

And for those who still don't want to move their critical data somewhere outside, Oracle brings another solution, named Cloud at Customer. It's basically the same as the pure cloud in terms of management, but Oracle delivers the servers into your datacenter, keeping your data secure. At least for those who are thinking that way :-)

EXADATA

You probably know that EXADATA is the best database machine you can buy, and it's true. But EXADATA is not the machine every customer needs (actually, the ODA is much more affordable and popular); only very demanding databases can benefit from this platform.

Gurmeet Goindi, the EXADATA product manager at Oracle, told us that EXADATA will keep increasing the gap over classic technologies.

For example, I heard that EXADATA's maximum number of PDBs will increase to 4,000 in one CDB. Even if you'll probably never reach this limit, it's quite an amazing number.

If I didn’t heard about new hardware coming shortly, major Exadata enhancements will come with 19c software stack release.

19c is coming

Maria Colgan from Oracle introduced the 19c new features and enhancements.

We were quite used to previous Oracle releases, R1 and R2: R1 being the very first release with low customer adoption, and R2 being the mature release with longer-term support. But after the big gap between 12cR1 and 12cR2, Oracle changed the rules to a more Microsoft-like versioning: the version number is now the year of product delivery. Is there still a longer-term release like 11gR2 was? For sure, and 19c will be the one you were waiting for. You may know that 18c was some kind of 12.2.0.2; 19c will be the latest version of the 12.2 kernel, 12.2.0.3. If you plan to migrate your databases from previous releases this year, you should know that 19c will be available shortly, and it could be worth the wait.

19c will bring a bunch of new features, like automatic indexing, which could become part of a new option. PDBs can now have separate encryption keys, instead of only one for the whole CDB.

The InMemory option will receive enhancements and now supports storing objects in both column and row format. InMemory content can now differ between the primary and the active standby, making the distribution of read-only statements more efficient. If your memory is not big enough to store all your InMemory data, it will soon be possible (on Exadata only) to put the columnar data into flash and keep the benefit of columnar scans outside memory. It makes sense.

A brief overview of the new "runaway queries" feature: there will be a kind of quarantine for statements that reach resource plan limits. The goal is to avoid the DBA having to connect and kill the session to free up system resources.

The Autonomous Database will also be there in 19c, but at first limited to Cloud at Customer Exadatas. It will take some years for all databases to become autonomous :-)

Zero Downtime Migration tool

What you’ve been dreaming of for years is now nearly there. A new automatic migration tool without downtime. But don’t dream too much because it seems to be limited to migration to the cloud and source and target should be in the same version (11g, 12c or 18c). Don’t know actually if it will support migrations from on-premise to on-premise.

Ricardo Gonzalez told us that with this new tool, you will be able to migrate to the cloud very easily. It's a one-button approach, with a lot of intelligence inside the tool to maximize the safety of the operation. As described, it looks great, and everything is considered: pre-checks, preparation, migration, post-migration, post-checks and so on. You'll still have to do the final switchover yourself, and yes, it's based on Data Guard, so you can trust the tool, as it relies on something reliable. If something goes wrong, you can still move back to the on-premise database.

Autoupgrade tool

Another great tool, presented by Mike Dietrich, is coming with 19c. It's a Java-based tool able to plan and manage database upgrades with a single input file describing the environment. It seems very useful if you have a lot of databases to upgrade. Refer to MOS Note 2485457.1 if you're interested.

Conclusion

So many interesting things to learn in these 2 days! Special thanks to Philippe Fierens and Pieter Van Puymbroeck for the organization.

Cet article OBUG 2019 – First event from Oracle Users Group in Belgium! est apparu en premier sur Blog dbi services.

[1Z0-932] Step By Step Activity Guides You Must Perform for Oracle Cloud Architect Certification

Online Apps DBA - Sat, 2019-02-09 04:32

You learn by doing, hence I've created a list of tasks that you must perform in the Cloud to learn Infrastructure (Compute, Storage, Database, Network, IAM, HA & DR) on Oracle's Gen 2 Cloud (OCI). Visit: https://k21academy.com/oci05 & check the Step-by-Step Activity Guides to plan for your Oracle Cloud Infrastructure (OCI) Architect (1Z0-932) Certification in 8 Weeks. Get Going […]

The post [1Z0-932] Step By Step Activity Guides You Must Perform for Oracle Cloud Architect Certification appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Cloud on a Roll: From One ‘Next Big Things’ Session to Another…

OTN TechBlog - Sat, 2019-02-09 02:00

The Oracle Open World Showcase in London this January

We wrapped up an exciting Open World in London last month with a spotlight on all things Oracle Cloud. Hands-on Labs and demos took center stage to showcase the hottest use cases in apps and converged infrastructure (IaaS + PaaS). 

We ran a series of use cases: from autonomous databases and analytics, to platform solutions for SaaS (like a digital assistant/chatbot, app and data integration, and API gateways for any SaaS play across verticals), to cloud-native application development on OCI. Several customers joined us on stage for various keynote streams to share their experience and to demonstrate the richness of Oracle's offering.

Macty’s (an Oracle Scaleup partner) Move from AWS to the Oracle Cloud

Macty is one such customer, which transitioned from AWS to Oracle Cloud to build its fashion e-commerce platform, with a focus on AI/ML to power visual search. Navigating AWS was hard for Macty: expensive support, complex pricing choices, lack of automated backups for select devices, and delays in reaching the support workforce were some of the reasons why Macty moved to Oracle Cloud Infrastructure.

Macty used Oracle’s bare metal GPU to train deep learning models. They used the compartments to isolate and use the correct billing for customers and the DevCS platform (Terraform and Ansible) to update and check the environment from a deployment and configuration perspective.

Macty’s CEO @Susana Zoghbi presented the Macty success story with the VP of Oracle Cloud, Ashish Mohindroo. She demonstrated the power of the Macty chatbot (through Facebook Messenger) that was built on Oracle’s platform to enable e-commerce vendors to engage with their customers better. 

The other solutions that Macty brings with its AI/API-powered platform are: a recommendation engine to complete the look in real time, finding similar items, customizing the fashion look, and customer analytics to connect e-commerce with the in-store experience. Any of these features can be used by e-commerce stores to delight their customers and up their game against big retailers.

And now, Oracle Open World is Going to Dubai!

Ashish Mohindroo, VP of Oracle Cloud, will be keynoting the Next Big Things session again, this time at Oracle Open World in Dubai next week. He will be accompanied by Asser Smidt, founder of BotSupply (an Oracle Scaleup partner). BotSupply assists companies with conversational bots, has an award-winning multilingual NLP, and is also a leader in conversational design. While Ashish and Asser are going to explore conversational AI and design via bots powered by Oracle Cloud, Ashish is also going to elaborate on how Oracle Blockchain and Oracle IoT are becoming building blocks for extending modern applications in his 'Bringing Enterprises to Blockchain' session. He will be accompanied by Ghassan Sarsak from ICS Financial Services and Thrasos Thrasyvoulou from the Oracle Cloud Platform App Dev team. Last but not least, Ashish will explain how companies can build compelling user interfaces with augmented and virtual reality (AR/VR) and show how content is at the core of this capability. Oracle Content Cloud makes it easy for customers to build these compelling experiences on any channel: mobile, web, and other devices. If you're in Dubai next week, swing by Open World to catch the action.

 

OSvC BUI Extension - How to create a library extension

OTN TechBlog - Fri, 2019-02-08 13:45

Library is an existing extension type in the BUI Extensibility Framework. If you are not familiar with the library concept, it is a collection of non-volatile resources or implementations of behavior that can be invoked from other programs (in our case, across extensions that share the same resources or behaviors). For example, suppose your extension project requires common behavior such as a method for authentication, a global variable, or a method for trace/log. In this case, a library is a useful approach because it wraps all the common methods in a single extension that can be invoked from others, preventing inconsistently repeated methods across different extensions. The following benefits can be observed when this approach is used:

  1. centralized maintenance of core methods;
  2. reduced size of other extensions, which might improve the time of download content;
  3. standardized methods;
  4. and others...
 

Before we go further, let's see what this sample code delivers.

  • Library
    • myLibrary.js: This file includes a common implementation of behavior such as a method to trace, to return authentication credentials and to execute ROQL queries.
  • myGlobalHeaderMenu
    • init.html: This file initializes the required js files. (If you have experience with require.js, you are probably wondering why we don't use it here. We can cover require.js in another post; the library concept is still needed regardless.)
    • js/myGlobalHeaderMenu.js: This file creates our user interface extension. We want to see a header menu with a thumbs-up icon, as we did before. As this is sample code, we want something simple that triggers the methods implemented in the library so we can see it in action.
 

The global header menu invokes a trace log and a ROQL query function implemented as part of the library sample code. When the thumbs-up icon is clicked, a ROQL query statement ("select count(*) from accounts") is passed as a parameter to a function implemented in the library. The result is presented by another library behavior, defined to trace any customization. In order to turn the trace log function on, we've implemented a local storage item (localStorage.setItem('debugMyExtension',true);) as you can see in the animated gif below.
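For convenience, toggling the trace from the browser console is just the standard Web Storage API:

// Run in the browser console of the Agent Browser UI
localStorage.setItem('debugMyExtension', true);   // turn tracing on
localStorage.removeItem('debugMyExtension');      // turn tracing off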

 

It will make more sense in the next section of this post, where you can read the code with comments to understand the logic under the hood. For now, let's see what you should expect when this sample code is uploaded to your site.

 

Demo Sample Code

 

Here are the sample codes to create a 'Global Header Menu' and a 'Library.' Please download them from the attachment and add each add-in as an Agent Browser UI Extension: select Console for the myGlobalHeaderMenu extension and init.html as the init file. Lastly, upload myLibrary.js as a new extension (the extension name should be Utilities), then select Library as the extension type.

 

Library  

Here is the code implemented in myLibrary.js. Read the code comments for a better understanding of the logic.

/* As mentioned in other posts, we want to keep the app name and app version
   consistent for each extension. Later, it will help us to better troubleshoot
   and read the logs provided by BUI Extension Log Viewer. */
var appName = "UtilityExtension";
var appVersion = "1.0";

/* We have created this function in order to troubleshoot our extensions.
   You don't want to have your extension tracing for all agents, so in this
   sample code we are using local storage to check whether the trace mode is
   on or off. In order to turn the trace on, with the console object open,
   set a local item as follows: localStorage.setItem('debugMyExtension',true); */
let myExtensions2log = function(e){
    if (localStorage.getItem('debugMyExtension') == 'true')
        window.console.log("[My Extension Log]: " + e);
}

/* Authentication is required to connect to Oracle Service Cloud APIs.
   This function returns the current session token and the REST API end-point;
   you don't want to have this information hard-coded. */
let myAuthentication = new Promise(function(resolve, reject){
    ORACLE_SERVICE_CLOUD.extension_loader.load(appName, appVersion).then(function(extensionProvider){
        extensionProvider.getGlobalContext().then(function(globalContext){
            _urlrest = globalContext.getInterfaceServiceUrl("REST");
            _accountId = globalContext.getAccountId();
            globalContext.getSessionToken().then(
                function(sessionToken){
                    resolve({'sessionToken': sessionToken, 'restEndPoint': _urlrest, 'accountId': _accountId});
                });
        });
    });
});

/* This function will receive a ROQL statement and will return the result object.
   With this function, other extensions can send a ROQL statement and receive
   a JSON object as result. */
let myROQLQuery = function(param){
    return new Promise(function(resolve, reject){
        var xhr = new XMLHttpRequest();
        myAuthentication.then(function(result){
            xhr.open("GET", result['restEndPoint'] + "/connect/latest/queryResults/?query=" + param, true);
            xhr.setRequestHeader("Authorization", "Session " + result['sessionToken']);
            xhr.setRequestHeader("OSvC-CREST-Application-Context", "UtilitiesExtension");
            xhr.onload = function(e) {
                if (xhr.readyState === 4) {
                    if (xhr.status === 200) {
                        var obj = JSON.parse(xhr.responseText);
                        resolve(obj);
                    } else {
                        reject('myROQLQuery from Utilities Library has failed');
                    }
                }
            }
            xhr.onerror = function (e) {
                console.error(xhr.statusText);
            };
            xhr.send();
        });
    });
}

myGlobalHeaderMenu

 

init.html

This is the init.html file. The important part here is to understand the src path. If you are not familiar with src paths, here is a quick explanation. Notice that each extension resides in a directory, and the idea is to work with directory paths.

 

/   = Root directory

.   = This location

..  = Up a directory

./  = Current directory

../ = Parent of current directory

../../ = Two directories backwards

 

In our case, it is ./../[Library Extension Name]/[Library File name]  -> "./../Utilities/myLibrary.js"

<!-- This HTML file was created to load the required files to run this extension -->
<!-- myLibrary is the first extension to be called. This file has the common resources that are needed by the second .js file -->
<script src="./../Utilities/myLibrary.js"></script>
<!-- myGlobalHeaderMenu is the main extension, which will create the Global Header Menu and call myLibrary for dependent resources -->
<script src="./js/myGlobalHeaderMenu.js"></script>

 

js/myGlobalHeaderMenu.js

let myHeaderMenu = function(){
    ORACLE_SERVICE_CLOUD.extension_loader.load("GlobalHeaderMenuItem", "1.0").then(function (sdk) {
        sdk.registerUserInterfaceExtension(function (IUserInterfaceContext) {
            IUserInterfaceContext.getGlobalHeaderContext().then(function (IGlobalHeaderContext) {
                IGlobalHeaderContext.getMenu('').then(function (IGlobalHeaderMenu) {
                    var icon = IGlobalHeaderMenu.createIcon("font awesome");
                    icon.setIconClass("fas fa-thumbs-up");
                    IGlobalHeaderMenu.addIcon(icon);
                    IGlobalHeaderMenu.setHandler(function (IGlobalHeaderMenu) {
                        myROQLQuery("select count(*) from accounts").then(function(result){
                            result["items"].forEach(function(rows){
                                rows["rows"].forEach(function(value){
                                    myExtensions2log(value);
                                })
                            });
                        });
                    });
                    IGlobalHeaderMenu.render();
                });
            });
        });
    });
}

myHeaderMenu();

We hope that you find this post useful. We encourage you to try the sample code from this post and let us know what modifications you have made to enhance it. What other topics would you like to see next? Let us know in the comments below.

Oracle Functions: Serverless On Oracle Cloud - Developer's Guide To Getting Started (Quickly!)

OTN TechBlog - Fri, 2019-02-08 13:44

Back in December, as part of our larger announcement about several cloud native services, we announced a new service offering called Oracle Functions. Oracle Functions can be thought of as Functions as a Service (FaaS), or hosted serverless that utilizes Docker containers for execution.  The offering is built upon the open source Fn Project, which itself isn't new, but the ability to quickly deploy your serverless functions and invoke them via Oracle's Cloud makes implementation much easier than it was previously.  This service is currently in Limited Availability (register here if you'd like to give it a try), but recently I have been digging in to the offering and wanted to put together some resources to make things easier for developers looking to get started with serverless on Oracle Cloud. This post will go through the necessary steps to get your tenancy configured and create, deploy and invoke your first application and function with Oracle Functions.

Before getting started you'll need to configure your Oracle Cloud tenancy.  If you're in the Limited Availability trial, make sure your tenancy is subscribed to the Phoenix region because that's currently the only region where Oracle Functions is available.  To check and/or subscribe to this region, see the following GIF:

Before moving on, if you haven't yet installed the OCI CLI, do so now.  And if you haven't, what are you waiting for?  It's really helpful for doing pretty much anything with your tenancy without having to log in to the console.

The rest of the configuration is a multi-step process that can take some time, and since no one likes to waste time on configuration when they could be writing code and deploying functions, I've thrown together a shell script to perform all the necessary configuration steps for you and get your tenancy completely configured to use Oracle Functions. 

Before we get to the script, please do not simply run this script without reading it over and fully understanding what it does.  The script makes the following changes/additions to your cloud tenancy:

  1. Creates a dedicated compartment for FaaS
  2. Creates a IAM group for FaaS users
  3. Creates a FaaS user
  4. Creates a user auth token that can be later used for Docker login
  5. Adds the FaaS user to the FaaS group
  6. Creates a group IAM policy
  7. Creates a VCN
  8. Creates 3 Subnets within the VCN
  9. Creates an internet gateway for the VCN
  10. Updates the VCN route table to allow internet traffic to hit the internet gateway
  11. Updates the VCN default security list to allow traffic on port 80
  12. Prints a summary of all credentials that it creates

That's quite a lot of configuration that you'd normally have to manually perform via the console UI.  Using the OCI CLI via this script will get all that done for you in about 30 seconds.  Before I link to the script, let me reiterate, please read through the script and understand what it does.  You'll first need to modify (or at least verify) some environment variables on lines 1-20 that contain the names and values for the objects you are creating.

So with all the necessary warnings and disclaimers out of the way, here's the script.  Download it and make sure it's executable and then run it.  You'll probably see some failures when it attempts to create the VCN because compartment creation takes a bit of time before it's available for use with other objects.  That's expected and OK, which is why I've put in some auto-retry logic around that point.  Other than that, the script will configure your tenancy for Oracle Functions and you'll be ready to move on to the next step.  Here's an example of the output you might see after running the script:

Next, create a signing key.  I'll borrow from the quick start guide here:

If you'd rather skip heading to the console UI in the final step, you can use the OCI CLI to upload your key like so:

oci iam user api-key upload --user-id ocid1.user.oc1..[redacted]ra --key-file <path-to-key-file> --region <home-region>

Next, open your OCI CLI config file (~/.oci/config) in a text editor, paste the profile section that was generated in the script above and populate it with the values from your new signing key.  

At this point we need to make sure you've got Docker installed locally.  I'm sure you do, but if not, head over to the Docker docs and install it for your particular platform.  Verify your installation with:

docker version

While we're here, let's login to Docker using the credentials we generated with the script above:

docker login phx.ocir.io

For the username, copy the username from the script output (format <tenancy>/<username>); the generated auth token will be used as your Docker login password.

Now let's get the Fn CLI installed. Jump to the Fn project on GitHub where you'll find platform specific instructions on how to do that. To be sure all's good, run:

fn version

To see all the available commands with the Fn CLI, refer to the command reference docs. Good idea to bookmark that one!

Cool, now we're ready to finalize your Fn config.  Again, I'll borrow from the Fn quick start for that step:

Log in to your development environment as a functions developer and:

Create the new Fn Project CLI context by entering:

fn create context <my-context> --provider oracle

Specify that the Fn Project CLI is to use the new context by entering:

fn use context <my-context>

Configure the new context with the OCID of the compartment you want to own deployed functions:

fn update context oracle.compartment-id <compartment-ocid>

Configure the new context with the api-url endpoint to use when calling the Fn Project API by entering:

fn update context api-url <api-endpoint>

For example:

fn update context api-url https://functions.us-phoenix-1.oraclecloud.com

Configure the new context with the address of the Docker registry and repository that you want to use with Oracle Functions by entering:

fn update context registry <region-code>.ocir.io/<tenancy-name>/<repo-name>

For example:

fn update context registry phx.ocir.io/acme-dev/acme-repo

Configure the new context with the name of the profile you've created for use with Oracle Functions by entering:

fn update context oracle.profile <profile-name>

And now we're ready to create an application.  In Oracle Functions, an application is a logical grouping of serverless functions that share a common context of config variables that are available to all functions within the application.  The quick start shows how you use the console UI to create an application, but let's stick to the command line here to keep things moving quickly.  To create an application, run the following:

fn create app faas-demo --annotation oracle.com/oci/subnetIds='["ocid1.subnet.oc1.phx.[redacted]ma"]'

You'll need to pass at least one of your newly created subnet IDs in the JSON array to this call above. For high availability, pass additional subnets. To see your app, run:

fn list apps

To create your first function, run the following:

fn init --runtime node faas-demo-func-1

Note, I've used NodeJS in this example, but the runtime support is pretty diverse. You can currently choose from go, java8, java9, java, node, python, python3.6, python3.7, ruby, and kotlin as your runtime. Once your function is generated, you'll see output similar to this:

Creating function at: /faas-demo-func-1
Function boilerplate generated.
func.yaml created.
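For reference, the generated func.yaml for a Node function looks roughly like this (a hedged sketch; exact fields vary by Fn CLI version):

schema_version: 20180708
name: faas-demo-func-1
version: 0.0.1
runtime: node
entrypoint: node func.js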

Go ahead and navigate into the new directory and take a look at the generated files. Specifically, note the func.yaml file (sketched above), which is a metadata definition file used by Fn to describe your project, its triggers, etc. Leave the YAML file for now and open up func.js in a text editor. It ought to look something like so:

const fdk=require('@fnproject/fdk');

fdk.handle(function(input){
  let name = 'World';
  if (input.name) {
    name = input.name;
  }
  return {'message': 'Hello ' + name}
})

Just a simple Hello World, but your function can be as powerful as you need it to be. It can interact with a DB within the same subnet on Oracle Cloud, or utilize object storage, etc. Let's deploy this function and invoke it. To deploy, run this command from the root directory of the function (the place where the YAML file lives). You'll see some similar output:

fn deploy --app faas-demo

Deploying faas-demo-func-1 to app: faas-demo
Bumped to version 0.0.2
Building image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2 .
Parts: [phx.ocir.io [redacted] faas-repo faas-demo-func-1:0.0.2]
Pushing phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2 to docker registry...
The push refers to repository [phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1]
1bf689553076: Pushed
9703c7ab5d87: Pushed
0adc398bfc34: Pushed
0b3e54ee2e85: Pushed
ad77849d4540: Pushed
5bef08742407: Pushed
0.0.2: digest: sha256:94d9590065a319a4bda68e7389b8bab2e8d2eba72bfcbc572baa7ab4bbd858ae size: 1571
Updating function faas-demo-func-1 using image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2...
Successfully created function: faas-demo-func-1 with phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.2

Fn has compiled our function into a Docker container, pushed the Docker container to the Oracle Docker registry, and at this point our function is ready to invoke. Do that with the following command (where the first argument is the application name and the second is the function name):

fn invoke faas-demo faas-demo-func-1

{"message":"Hello World"}

The first invocation will take a bit of time since Fn has to pull the Docker container and spin it up, but subsequent runs will be quick. This isn't the only way to invoke your function; you can also use HTTP endpoints via a signed request, but that's a topic for another blog post.

Now let's add some config vars to the application:

fn update app faas-demo --config defaultName=Person

As mentioned above, config is shared amongst all functions in an application. To access a config var from a function, grab it from the environment variables. Let's update our Node function to grab the config var, deploy it and invoke it:

const fdk=require('@fnproject/fdk');

fdk.handle(function(input){
  let name = process.env.defaultName || 'World';
  if (input.name) {
    name = input.name;
  }
  return {'message': 'Hello ' + name}
})

$ fn deploy --app faas-demo
Deploying faas-demo-func-1 to app: faas-demo
Bumped to version 0.0.3
Building image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.3 .
Parts: [phx.ocir.io [redacted] faas-repo faas-demo-func-1:0.0.3]
Pushing phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.3 to docker registry...
The push refers to repository [phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1]
7762ea1ed77f: Pushed
1b0d385392d8: Pushed
0adc398bfc34: Layer already exists
0b3e54ee2e85: Layer already exists
ad77849d4540: Layer already exists
5bef08742407: Layer already exists
0.0.3: digest: sha256:c6537183b5b9a7bc2df8a0898fd18e5f73914be115984ea8e102474ccb4126da size: 1571
Updating function faas-demo-func-1 using image phx.ocir.io/[redacted]/faas-repo/faas-demo-func-1:0.0.3...

$ fn invoke faas-demo faas-demo-func-1
{"message":"Hello Person"}

So that's the basics on getting started quickly developing serverless functions with Oracle Functions. The Fn project has much more to offer and I encourage you to read more about it. If you're interested in taking a deeper look, make sure to sign up for access to the Limited Availability program.

Concurrent Processing RUP1 Patchset for EBS 12.2 Now Available

Steven Chan - Fri, 2019-02-08 13:43

We are pleased to announce the release of Concurrent Processing RUP1 patch 25452805:R12.FND.C for Oracle E-Business Suite Release 12.2. This patch is recommended for all EBS customers.

Concurrent Processing RUP1 includes the following new features:

  • Recalculation of Dynamic Default Parameters in Standard Request Submission
  • Setting Environment Values and Parameters on a Per-Service Basis
  • Concurrent Processing Command-Line Utility
  • Storage Strategies for Log and Output File Locations
  • Output File Naming Strategies
  • Timed Shutdown
  • 64-bit Java Support for the Output Post Processor Service
Categories: APPS Blogs
