Feed aggregator

API for Amazon SageMaker ML Sentiment Analysis

Andrejus Baranovski - Thu, 2018-12-06 13:50
Assume you manage a support department and want to automate some of the workload that comes from users requesting support through Twitter. You are probably already using a chatbot to send replies to users. But this is not enough - some support requests must be handled with special care by humans. How do you know when a tweet should be escalated and when not? The Machine Learning for Business book has an answer. I recommend reading this book; today's post is based on Chapter 4.

You can download the source code for Chapter 4 from the book website. The model is trained on a sample dataset from Kaggle - Customer Support on Twitter. It is trained on a subset of the available data, around 500 000 Twitter messages. The book authors converted and prepared the dataset to be suitable to feed into Amazon SageMaker (the dataset can be downloaded together with the source code).

The model is trained in such a way that it doesn't simply check whether a tweet is positive or negative. The sentiment analysis is based on whether the tweet should be escalated or not - even a positive tweet may need escalation.

I followed the instructions from the book and was able to train and host the model. I created an AWS Lambda function and an API Gateway to be able to call the model from the outside (this part is not described in the book, but you can check my previous post for more info about it - Amazon SageMaker Model Endpoint Access from Oracle JET).

To test the trained model, I took two random tweets addressed to the Lufthansa account and passed them to the predict function. I exposed the model through an AWS Lambda function and created an API Gateway, which allows a REST request to be initiated from a tool such as Postman. A response with __label__1 means the tweet needs escalation, and __label__0 means it doesn't. The second tweet is more direct and asks for immediate feedback; it was labeled for escalation by our sentiment analysis model. The first tweet is a bit more abstract, so no escalation for that one:


This is the AWS Lambda function; it gets data from the request, calls the model endpoint and returns the prediction:
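The function itself only appears as a screenshot in the original post. As an illustration, a minimal Python handler along these lines could do the same job; the endpoint name, the request body shape and the field names are my own assumptions, not taken from the book or the original code:

import json
import boto3

# Client for the SageMaker runtime, used to call the hosted model endpoint
runtime = boto3.client('sagemaker-runtime')

ENDPOINT_NAME = 'customer-support-blazingtext'  # assumed endpoint name


def lambda_handler(event, context):
    # API Gateway passes the POST body as a JSON string in event['body']
    body = json.loads(event.get('body') or '{}')
    payload = {"instances": [body.get('tweet', '')]}

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType='application/json',
        Body=json.dumps(payload)
    )

    # BlazingText returns a list of {"label": [...], "prob": [...]} entries
    prediction = json.loads(response['Body'].read().decode('utf-8'))
    return {
        'statusCode': 200,
        'body': json.dumps(prediction)
    }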

Let's have a quick look at the training dataset. Around 20% of the tweets are marked for escalation. This shows there is no need for a 50/50 split in the training dataset. In real life the number of escalations is probably less than half of all requests, and this realistic scenario is represented in the dataset:
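As a quick sanity check of that claim, a few lines of Python can count the label split in the prepared training file. This assumes the file is in the fastText-style format that BlazingText expects (a __label__ prefix followed by the tweet text on each line); the file name is a placeholder:

# Count how many training lines are marked for escalation (__label__1)
escalated = 0
total = 0
with open("train.csv") as f:  # placeholder path to the prepared dataset
    for line in f:
        total += 1
        if line.startswith("__label__1"):
            escalated += 1

print(f"{escalated}/{total} tweets marked for escalation "
      f"({100.0 * escalated / total:.1f}%)")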


The ML model is built using the Amazon SageMaker BlazingText algorithm:
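The training cell appears only as a screenshot in the original post, so here is a rough sketch of what it could look like with the SageMaker Python SDK. The bucket, paths, role and instance type are placeholders, and the exact calls depend on the SDK version installed:

import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker notebook execution role

# Resolve the BlazingText container image for the current region
container = image_uris.retrieve("blazingtext", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.c5.xlarge",
    output_path="s3://my-bucket/customer-support/output",  # placeholder bucket
    sagemaker_session=session,
)

# Supervised mode trains a text classifier on "__label__X <text>" lines
estimator.set_hyperparameters(mode="supervised", epochs=10)

estimator.fit({"train": "s3://my-bucket/customer-support/train.csv"})  # placeholder path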


Once the ML model is built, we deploy it to an endpoint. The predict function is invoked through the endpoint:
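Continuing the sketch above, deployment and a test prediction could look roughly like this; the instance type and the sample tweet are only illustrative, and the serializer handling differs slightly between SDK versions:

import json

# Deploy the trained model behind a real-time HTTPS endpoint
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# BlazingText expects a JSON document with an "instances" list of sentences
payload = {"instances": ["my flight was cancelled and nobody is answering, please help now"]}

raw = predictor.predict(
    json.dumps(payload),
    initial_args={"ContentType": "application/json"},
)
print(raw)  # e.g. b'[{"label": ["__label__1"], "prob": [0.97]}]'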

Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems

Most organizations have silos of content spread out amongst databases, file shares, and one or more document management systems. Without a unified search system to tap into this information, knowledge often remains hidden and the assets employees create cannot be used to support design, manufacturing, or research objectives.

An enterprise search system that can connect these disparate content stores and provide a single search experience for users can help organizations increase operational efficiencies, enhance knowledge sharing, and ensure compliance. PTC Windchill provides a primary source for the digital product thread, but organizations often have other key systems storing valuable information. That is why it is critical to provide workers with access to associated information regardless of where it is stored.

This past August, Fishbowl released its PTC Windchill Connector for Google Cloud Search. Fishbowl developed the connector for companies needing a search solution that allows them to spend less time searching for existing information and more time developing new products and ideas. These companies need a centralized way to search their key engineering information stores, like PLM (in this case Windchill), ERP, quality database, and other legacy data systems. Google Cloud Search is Google’s next generation, cloud-based enterprise search platform from which customers can search large data sets both on-premise and in the cloud while taking advantage of Google’s world-class relevancy algorithms and search experience capabilities.

Connecting PTC Windchill and Google Cloud Search

Through Google Cloud Search, Google provides the power and reach of Google search to the enterprise. Fishbowl’s PTC Windchill Connector for Google Cloud Search provides customers with the ability to leverage Google’s industry-leading technology to search PTC Windchill for Documents, CAD files, Enterprise Parts, Promotion Requests, Change Requests, and Change Notices. The PTC Windchill Connector for Google Cloud Search assigns security to all items indexed through the connector based on the default ACL configuration specified in the connector configuration. The connector allows customers to take full advantage of additional search features provided by Google Cloud Search including Facets and Spelling Suggestions just as you would expect from a Google solution.

To read the rest of this blog post and see an architecture diagram showing how Fishbowl connects Google Cloud Search with PTC Windchill, please visit the PTC LiveWorx 2019 blog.

The post Leveraging Google Cloud Search to Provide a 360 Degree View to Product Information Existing in PTC® Windchill® and other Data Systems appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

[Solved] Oracle EBS (R12) Installation Issue: Could not find the main class: oracle.apps.ad.rapidwiz.RIWizard

Online Apps DBA - Thu, 2018-12-06 06:52

Troubled with this issue while running rapidwiz on an Exadata database machine to install Oracle EBS R12.2? If yes, visit: https://k21academy.com/appsdba39 and consider our new blog covering: ✔What is rapidwiz ✔Issues, Causes and Solution for Installing Through Rapidwiz […]

The post [Solved] Oracle EBS (R12) Installation Issue: Could not find the main class: oracle.apps.ad.rapidwiz.RIWizard appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DockerCon18 Barcelona – Day 3

Yann Neuhaus - Thu, 2018-12-06 06:37

Wednesday was the last day of DockerCon18 Europe. Like the previous day, we started the journey with a two-hour keynote, this time more oriented towards the Docker community. The core message of the keynote was that the community is one of the pillars of open source technologies and Docker Inc wants to push the community aspect more and more. The community is growing very fast and is very competitive.

They took the opportunity to award the community leader of the year and to name a new Docker Captain, Bret Fisher.


Then we attended an interesting session: Docker Storage with Swarm and Kubernetes.

The presenter started the session with a funny part: Container Storage Fake News!! For a few minutes, he listed one by one all the fake news related to storage in the container world. The best piece of fake news for us:

RDBMS and databases cannot be run in containers: NO! Official images are available from the vendors. The best example is SQL Server, which provides a very competitive Docker image for its users.

The core message of the session is that database containers are coming and will increasingly be used and deployed. The very interesting part is the collaboration between Docker and storage providers, who are developing APIs for Docker compatibility, so in the future each storage provider will have its own API to communicate with Docker containers.
 

The last but not least session of the day for me was about Provisioning and Managing Storage for Docker Containers.

The goal of the session was to explain how we can easily manage storage operations for containers.

The Docker EE Platform with Kubernetes (with PVs and PVCs) will help us manage storage operations for containers in the future.

PV = Persistent Volume
PVC = Persistent Volume Claim

They also presented the difference between static and dynamic provisioning in Kubernetes and the future of storage management in Docker using CSI.

Core message: Docker is making storage a priority.


Cet article DockerCon18 Barcelona – Day 3 est apparu en premier sur Blog dbi services.

DockerCon18 Barcelona – Day 2

Yann Neuhaus - Thu, 2018-12-06 04:59

Tuesday was the second day in Barcelona for DockerCon18. We attended the first general session in the morning. It was a mix of presentations, live demos and the participation of big Docker customers in EMEA, such as Société Générale in France, who presented the impact of Docker on their daily business. The main message of the first part of the keynote was: “How Docker can help you to make the digital transformation of your business”.

In the second part, new features were presented during the live demos:

  • docker stack deployment using Docker EE
  • docker-assemble command: builds a Docker image without configuration, starting from a git repository containing the application source code.
  • docker stack command: deploys a Docker image using a compose file.
  • docker-app command: a utility to help make Compose files more reusable and shareable.

Then they presented an introduction to Kubernetes support on the Docker EE platform.


Finally, they presented how to deploy an application with Docker Desktop Application Designer.

The keynote video is also available here, for those interested.

After the keynote, we attended a very interesting workshop on storage in the Docker EE platform 2.1, given by Don Stewart, a Solution Architect at Docker.


In the lab, we discovered the types of storage options that are available and how to implement them within a container environment.
Lab link for those interested: https://github.com/donmstewart/docker-storage-workshop

The first session of the afternoon was about the Docker Enterprise platform 2.1: Architecture Overview and Use Cases.

 

The presentation was split into 3 main parts:

  • Docker Enterprise overview and architecture
  • Docker Enterprise 2.1 – What’s new with demos
  • Next steps

 

The first part of the presentation was more marketing oriented, introducing the Docker Enterprise platform.

Then the following new features were presented including small demos:

  • Extended Windows Support
  • Extended Kubernetes Support: Windows Server 2016, 1709, 1803, 2019
  • Improve Operational Insights: node metrics, data retention overview, more metrics, and charts…
  • Image management and storage optimizations
  • Security improvements


We finished the conference day with another workshop, yes… because during this conference the level and quality of the workshops were very good and interesting. The workshop was about Swarm Orchestration – Features and Workflows.
This was definitely one of the best workshops I attended.

Slides: https://container.training/swarm-selfpaced.yml.html#1
Github repository: https://github.com/jpetazzo/container.training

During this workshop, we created a complete Docker cluster using Swarm and took a deep dive into Swarm orchestration.

A very interesting day, with a lot of new things around Docker.

Cet article DockerCon18 Barcelona – Day 2 est apparu en premier sur Blog dbi services.

Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations

Fishbowl has helped numerous customers migrate Solidworks data into PTC Windchill. We have proven processes and proprietary applications to migrate from SolidWorks Enterprise PDM (EPDM) and PDMWorks, as well as to handle WTPart migrations including structure and linking. This extensive experience, combined with our bulk loading software, has made us one of the world's premier PTC data migration specialists.

Over the years, we’ve created various resources for Windchill customers to help them understand their options to migrate Solidworks data into Windchill, as well as some best practices when doing so. After all, we’ve seen firsthand how moving CAD files manually wastes valuable engineering resources that can be better utilized on more important work.

We’ve categorized those resources below. Please explore them and learn how Fishbowl Solution can help you realize the automation gains you are looking for.

  • Blog Posts
  • Infographic
  • Webinar
  • Brochures
  • LinkLoader Web Page

The post Fishbowl Resource Guide: Solidworks to PTC Windchill Data Migrations appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

Podcast: Inspiring Innovation and Entrepreneurism in Young People

OTN TechBlog - Wed, 2018-12-05 07:37

A common thread connecting the small army of IT professionals I've met over the last 20 years is that their interest in technology developed when they were very young, and that youthful interest grew into a full-fledged career. That's truly wonderful. But what happens if a young person never has a chance to develop that interest? And what can be done to draw those young people to careers in technology? In this Oracle Groundbreakers Podcast extra you will meet someone who is dedicated to solving that very problem.

Karla Readshaw is director of development for Iridescent, a non-profit organization focused on bringing quality STEM education (science, technology, engineering, and mathematics) to young people -- particularly girls -- around the globe.

"Our end goal is to ensure that every child, with a specific focus on underrepresented groups -- women and minorities -- has the opportunity to learn, and develop curiosity, creativity and perseverance, what real leaders are made of," Karla explains in her presentation.

Iridescent, through its Technovation program, provides middle- and high-school girls with the resources to develop solutions to real problems in their local communities, "leveraging technology and engineering for social good," as Karla explains.

Over a three-month period, the girls involved in the Technovation program identify a problem within their community, design and develop a mobile app to address the issue, and then build a business around that app, all under the guidance of an industry mentor.

The results are impressive. In one example, a team of hearing-impaired girls in Brazil developed an app that teaches American Sign Language, and then developed a business around it. In another example, a group of high-school girls in Guadalajara, Mexico drew on personal experience to develop an app that strengthens the relationship between Alzheimer's patients and their caregivers. And a group of San Francisco Bay area girls created a mobile app that will help those with autism to improve social skills and reduce anxiety.

Want to learn more about the Technovation program, and about how you can get involved? Just listen to this podcast. 

This program was recorded during Karla's presentation at the Women In Technology Breakfast held on October 22, 2018 as part of Oracle Code One.

Additional Resources Coming Soon
  • Baruch Sadogursky, Leonid Igolnik, and Viktor Gamov discuss DevOps, streaming, liquid software, and observability in this podcast captured during Oracle Code One 2018.
  • GraphQL and REST: An Objective Comparison: a panel of experts weighs the pros and cons of each of these approaches in working with APIs. 
  • Database: Breaking the Golden Rules: There comes a time to question, and even break, long-established rules. This program presents a discussion of the database rules that may no longer be advantageous.
Subscribe

Never miss an episode! The Oracle Groundbreakers Podcast is available via:

OEM Cloud Control 13c – Agent Gold Image

Yann Neuhaus - Wed, 2018-12-05 06:52
Introduction

I am currently setting up a new “Base Image” virtual machine (Red Hat Enterprise Linux 7.6) which will be used to create 6 brand new Oracle database servers requested by a customer. Besides installing and configuring the OS, I also have to install 3 Oracle Homes and one Cloud Control Agent 13c.

An OMS13c server already exists including an Agent patched with the EM-AGENT Bundle Patch 13.2.0.0.181031 (28680866) :
oracle@oms13c:/home/oracle/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Tue Nov 13 17:32:48 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@oms13c:/home/oracle/ [agent13c]

However, when I wanted to deploy the CC13c Agent on my Master VM from the Cloud Control 13c web interface (Setup > Add Target > Add Targets Manually > Install Agent on Host), the Agent was successfully installed but… without the patch 28680866 :( . That means I would have to install the patch manually. Considering that the goal of creating a “Base Image” VM for this project is to quickly and easily deliver 6 database servers, having to install AND patch the Agent on each server is not very efficient and doesn’t fit with what I want.
So I had to find a better way to deploy a patched Agent, and the solution was to use an Agent Gold Image. It allowed me to do exactly what I wanted.

In this post I will show how I have set this up.

Deploying the Agent

Here is how we can deploy the Agent on the Base Image VM. From Cloud Control 13c, we click on Setup > Add Target > Add Targets Manually > Install Agent on Host :

Then we insert the name of the target VM and select the appropriate platform…

…and we specify the directory in which we want to install the Agent (Agent Home) :

Everything is now ready to start the deployment. We can click on Next to see the review of the deployment configuration and on Deploy Agent to start.
Once the Agent is correctly deployed, the status should be like that :

As explained above we can see that the Agent is not patched with the Bundle Patch of October 2018 :
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c] ./opatch lsinventory | grep 28680866
oracle@basevm:/u01/app/oracle/agent13c/agent_13.2.0.0.0/OPatch/ [agent13c]

We must patch it manually…

Updating OPatch

Before installing a patch it is highly recommended to update the OPatch utility first. All versions of the tool are available here. The current one on my VM is 13.8.0.0.0 :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] opatch version
OPatch Version: 13.8.0.0.0


OPatch succeeded.

We must use the following command to update OPatch :
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] unzip -q p6880880_139000_Generic.zip
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/ [agent13c] cd 6880880/
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] $ORACLE_HOME/oracle_common/jdk/bin/java -jar ./opatch_generic.jar -silent oracle_home=$ORACLE_HOME
Launcher log file is /tmp/OraInstall2018-11-23_02-58-11PM/launcher2018-11-23_02-58-11PM.log.
Extracting the installer . . . . Done
Checking if CPU speed is above 300 MHz. Actual 2099.998 MHz Passed
Checking swap space: must be greater than 512 MB. Actual 4095 MB Passed
Checking if this platform requires a 64-bit JVM. Actual 64 Passed (64-bit not required)
Checking temp space: must be greater than 300 MB. Actual 27268 MB Passed
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2018-11-23_02-58-11PM
Installation Summary
[...] [...] Logs successfully copied to /u01/app/oraInventory/logs.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/OPatch/oms13cAgent/6880880/ [agent13c]

You probably noticed that since OEM 13cR2 the way to update OPatch has changed : no more simple unzip, we have to run a Java installer instead (I don't really understand why…).

Patching the Agent

As OPatch is now up to date we can proceed with the installation of the patch 28680866 :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] unzip -q p28680866_132000_Generic.zip
oracle@basevm:/u01/app/oracle/software/agent13c/patch/ [agent13c] cd 28680866/28680866/
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl stop agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Stopping agent ... stopped.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch apply
Oracle Interim Patch Installer version 13.9.3.3.0
Copyright (c) 2018, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/agent13c/agent_13.2.0.0.0
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/agent13c/agent_13.2.0.0.0/oraInst.loc
OPatch version : 13.9.3.3.0
OUI version : 13.9.1.0.0
Log file location : /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/agent13c"


Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 28680866


Do you want to proceed? [y|n] y
User Responded with: Y
All checks passed.
Backing up files...
Applying interim patch '28680866' to OH '/u01/app/oracle/agent13c/agent_13.2.0.0.0'


Patching component oracle.sysman.top.agent, 13.2.0.0.0...
Patch 28680866 successfully applied.
Log file location: /u01/app/oracle/agent13c/agent_13.2.0.0.0/cfgtoollogs/opatch/opatch2018-11-23_15-33-14PM_1.log


OPatch succeeded.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Let’s restart the Agent and check that the patch has been applied :
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] emctl start agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Starting agent ................... started.
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@basevm:/u01/app/oracle/software/agent13c/patch/28680866/28680866/ [agent13c]

Perfect. The Agent is now patched but…

Installing the DB plugin

…what about its plugins ? We can see from the OMS13c server that the Agent doesn’t have the database plugin installed :
oracle@oms13c:/home/oracle/ [oms13c] emcli login -username=sysman
Enter password :


Login successful
oracle@oms13c:/home/oracle/ [oms13c] emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]

Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0

This is normal. As no Oracle databases are currently running on the VM, the DB plugin was not installed automatically during the Agent deployment. We have to install it manually using the following command :
oracle@oms13c:/home/oracle/ [oms13c] emcli deploy_plugin_on_agent -agent_names="basevm.xx.yyyy.com:3872" -plugin=oracle.sysman.db
Agent side plug-in deployment is in progress
Use "emcli get_plugin_deployment_status -plugin=oracle.sysman.db" to track the plug-in deployment status.
oracle@oms13c:/home/oracle/ [oms13c]

To check the status of the plugin installation :
oracle@oms13c:/home/oracle/ [oms13c] emcli get_plugin_deployment_status -plugin=oracle.sysman.db
Plug-in Deployment/Undeployment Status


Destination : Management Agent - basevm.xx.yyyy.com:3872
Plug-in Name : Oracle Database
Version : 13.2.2.0.0
ID : oracle.sysman.db
Content : Plug-in
Action : Deployment
Status : Success
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step Start Time End Time Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment 11/23/18 4:06:29 PM CET 11/23/18 4:06:30 PM CET Success


Initialize 11/23/18 4:06:32 PM CET 11/23/18 4:06:43 PM CET Success


Validate Environment 11/23/18 4:06:44 PM CET 11/23/18 4:06:44 PM CET Success


Install software 11/23/18 4:06:44 PM CET 11/23/18 4:06:45 PM CET Success


Attach Oracle Home to Inventory 11/23/18 4:06:46 PM CET 11/23/18 4:07:04 PM CET Success


Configure plug-in on Management Agent 11/23/18 4:07:05 PM CET 11/23/18 4:07:28 PM CET Success


Update inventory 11/23/18 4:07:23 PM CET 11/23/18 4:07:28 PM CET Success


---------------------------------------- ------------------------- ------------------------- ----------
oracle@oms13c:/home/oracle/ [oms13c]

Quick check :
oracle@oms13c:/home/oracle/ emcli list_plugins_on_agent -agent_names="basevm.xx.yyyy.com:3872"
The Agent URL is https://basevm.xx.yyyy.com:3872/emd/main/ -
Plug-in Name Plugin-id Version [revision]

Oracle Database oracle.sysman.db 13.2.2.0.0
Oracle Home oracle.sysman.oh 13.2.0.0.0
Systems Infrastructure oracle.sysman.si 13.2.2.0.0


oracle@oms13c:/home/oracle/ [oms13c]

The Agent is now exactly in the state in which we want to deploy it on all 6 servers (OPatch up to date, Agent patched, DB plugin installed).
It’s now time to move forward with the creation of an Agent Gold Image.

Creating the Agent Gold image

Going back to Cloud Control we can navigate to Setup > Manage Cloud Control > Gold Agent Images :
We click on Manage All Images

…then on Create and we give a name to our Image :

Once the Image is created, we must create its 1st version. We click on the Image name and then on Action > Create. From here we can select the Agent configured earlier on the VM. It will be the source of the Gold Image :

The creation of the Gold Agent Image and its 1st version can also be done from the command line with the following emcli command :
oracle@oms13c:/home/oracle/ [oms13c] emcli create_gold_agent_image -image_name="agent13c_gold_image" -version_name="gold_image_v1" -source_agent="basevm.xx.yyyy.com:3872"
A gold agent image create operation with name "GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042" has been submitted.
You can track the progress of this session using the command "emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042"


oracle@oms13c:/home/oracle/ [oms13c] emcli get_gold_agent_image_activity_status -operation_name=GOLD_AGENT_IMAGE_CREATE_2018_12_03_22_04_20_042
Inputs
------
Gold Image Version Name : gold_image_v1
Gold Image Name : agent13c_gold_image
Source Agent : basevm.xx.yyyy.com:3872
Working Directory : %agentStateDir%/install


Status
-------
Step Name Status Error Cause Recommendation
Create Gold Agent Image IN_PROGRESS


oracle@oms13c:/home/oracle/

The Gold Agent Image is now created. We can start to deploy it on the other servers in the same way we did for the first deployment, but this time selecting With Gold Image :

Once the Agent is deployed on the server we can see that OPatch is up to date :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch version
OPatch Version: 13.9.3.3.0


OPatch succeeded.
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

The Agent Bundle Patch is installed :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] opatch lsinventory | grep 28680866
Patch 28680866 : applied on Mon Dec 03 17:17:25 CET 2018
28680866, 28744209, 28298159, 25141245, 28533438, 28651962, 28635152
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

And the DB plugin is ready :
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] ll
total 24
drwxr-xr-x. 31 oracle oinstall 4096 Dec 3 22:59 agent_13.2.0.0.0
-rw-r--r--. 1 oracle oinstall 209 Dec 3 22:32 agentimage.properties
drwxr-xr-x. 8 oracle oinstall 98 Dec 3 22:58 agent_inst
-rw-r--r--. 1 oracle oinstall 565 Dec 3 22:56 agentInstall.rsp
-rw-r--r--. 1 oracle oinstall 19 Dec 3 22:56 emctlcfg.rsp
-rw-r-----. 1 oracle oinstall 350 Dec 3 22:32 plugins.txt
-rw-r--r--. 1 oracle oinstall 470 Dec 3 22:57 plugins.txt.status
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c] cat plugins.txt.status
oracle.sysman.oh|13.2.0.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.oh|13.2.0.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.db|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.xa|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.emas|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||agentPlugin|STATUS_SUCCESS
oracle.sysman.si|13.2.2.0.0||discoveryPlugin|STATUS_SUCCESS
oracle@srvora01:/u01/app/oracle/agent13c/ [agent13c]

Conclusion

Using a Gold Image drastically eases the management of OMS Agents in Oracle environments. In addition to allowing massive deployment on targets, it is also possible to manage several Gold Images with different patch levels. The hosts are simply subscribed to a specific Image and follow its life cycle (new patches, new plugins, and so on…).

Think about it during your next Oracle monitoring project !

Cet article OEM Cloud Control 13c – Agent Gold Image est apparu en premier sur Blog dbi services.

[Blog] Automatic Workload Repository (AWR): Database Statistics

Online Apps DBA - Wed, 2018-12-05 05:21

Automatic Workload Repository report or AWR report collects, processes, and maintains performance statistics for problem detection and self-tuning purposes. Want to Know more…. Visit: https://k21academy.com/tuning14 and Consider our New Blog on AWR Covering Important Topics like: ✔ The different features of AWR ✔ Snapshots & Baselines in AWR ✔ How to Read AWR Reports and […]

The post [Blog] Automatic Workload Repository (AWR): Database Statistics appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Adding Calculated Fields to Your Visual Builder UI

Shay Shmeltzer - Tue, 2018-12-04 17:16

This is a quick blog to show two techniques for adding calculated fields to an Oracle Visual Builder application.

Both techniques do the calculation on the client side (in the browser). Keep in mind that you might want to consider doing the calculation on the back-end of your application and get the calculated value delivered directly to your client - in some cases this results in better performance. But sometimes you don't have access to modify the backend, or you can't do calculations there, so here we go:

1. For simple calculation you can just use the value property of a field to do the calculation for you.

For example if you need to know the yearly salary you can take the value in a field and just add *12 to it.

You can also use this to calculate values from multiple fields for example [[$current.data.firstName + " " +$current.data.lastName]] - will get you a field with the full name.

2. For more complex calculation you might need to write some logic to arrive at your calculated value, for example if you have multiple if/then conditions. To do that you can create a client side JavaScript function in your page's JS section. Then you refer to the function from your UI component's value attribute using something like {{$functions.myFunc($current.data.salary)}}

As you'll see in the demo, if you switch to the code view of your application the code editor in Oracle VB will give you code insight into the functions you have for your page, helping you eliminate coding errors.

Categories: Development

odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError

Yann Neuhaus - Tue, 2018-12-04 16:06

Today I was onsite at a customer and he told me: I can no longer create databases on my ODA X7-2HA, every time I try to use odacli create-database it fails, please help.

Ok, let’s check what happens, the customer shares the Oracle Homes, he wants to create a 11.2.0.4 database:

[root@robucnoroda020 ~]# odacli list-dbhomes

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
157bfdf4-4430-4fb1-878e-2fb803ee54bd     OraDB11204_home1     11.2.0.4.180417 (27441052, 27338049)     /u01/app/oracle/product/11.2.0.4/dbhome_1     Configured
2aaba0e6-4482-4c9f-8d98-4a9d72fdb96e     OraDB12102_home1     12.1.0.2.180417 (27338020, 27338029)     /u01/app/oracle/product/12.1.0.2/dbhome_1     Configured
ad2b0d0a-11c1-4a15-b22a-f698496cd606     OraDB12201_home1     12.2.0.1.180417 (27464465, 27674384)     /u01/app/oracle/product/12.2.0.1/dbhome_1     Configured

[root@robucnoroda020 ~]#

Ok, we try to create an 11.2.0.4 database:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Created
 Created: December 4, 2018 11:11:26 PM EET
 Message:

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

[root@robucnoroda020 log]#

The job was created successfully; we check what's going on:

[root@robucnoroda020 log]# odacli describe-job -i 1959838e-34a6-419e-94da-08b931a039cc

Job details
----------------------------------------------------------------
 ID: 1959838e-34a6-419e-94da-08b931a039cc
 Description: Database service creation with db name: FOO
 Status: Failure
 Created: December 4, 2018 11:11:26 PM EET
 Message: DCS-10001:Internal error encountered: Failed to create the database FOO.

Task Name Start Time End Time Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Service creation December 4, 2018 11:11:26 PM EET December 4, 2018 11:13:01 PM EET Failure
Setting up ssh equivalance December 4, 2018 11:11:26 PM EET December 4, 2018 11:11:46 PM EET Success
Creating volume dclFOO December 4, 2018 11:11:47 PM EET December 4, 2018 11:12:04 PM EET Success
Creating volume datFOO December 4, 2018 11:12:04 PM EET December 4, 2018 11:12:21 PM EET Success
Creating ACFS filesystem for DATA December 4, 2018 11:12:21 PM EET December 4, 2018 11:12:34 PM EET Success
Database Service creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:01 PM EET Failure
Database Creation December 4, 2018 11:12:34 PM EET December 4, 2018 11:13:00 PM EET Failure

[root@robucnoroda020 log]#

Indeed, the job has failed, next we check the DCS log, there we can see the database creation failure:

2018-12-04 23:13:01,209 DEBUG [Database Service creation] [] c.o.d.c.t.r.TaskReportRecorder:  Compile task plan for ServiceJobReport
'{
  "updatedTime" : null,
  "jobId" : "1959838e-34a6-419e-94da-08b931a039cc",
  "status" : "Failure",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : 1543957886185,
  "resourceList" : [ ],
  "description" : "Database service creation with db name: FOO"
}'...

2018-12-04 23:13:01,219 DEBUG [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] call() completed.
2018-12-04 23:13:01,219 INFO [Database Service creation] [] c.o.d.a.t.TaskServiceRequest: Task[id: 1959838e-34a6-419e-94da-08b931a039cc, jobid: 1959838e-34a6-419e-94da-08b931a039cc, TaskName: Database Service creation] completed: Failure

Ok, the log don’t tell us what’s going wrong, but behind the scene odacli create-database uses dbca in the requested ORACLE Home. In the next step we check the the dbca logs:

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] ll
total 276
-rw-r----- 1 oracle oinstall 275412 Dec  4 23:13 trace.log
oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204]

oracle@robucnoroda020:/u01/app/oracle/cfgtoollogs/dbca/FOO/ [rdbms11204] cat trace.log

......

[main] [ 2018-12-04 23:12:57.546 EET ] [InventoryUtil.getHomeName:111]  homeName = OraDB11204_home1
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at oracle.xml.parser.v2.XMLDocument.createNodeFromType(XMLDocument.java:4132)
	at oracle.xml.parser.v2.XMLDocument.createElement(XMLDocument.java:2801)
	at oracle.xml.parser.v2.DocumentBuilder.startElement(DocumentBuilder.java:488)
	at oracle.xml.parser.v2.NonValidatingParser.parseElement(NonValidatingParser.java:1616)
	at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:456)
	at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:402)
	at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:244)
	at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:155)
	at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:172)
	at oracle.sysman.oix.oixd.OixdDOMReader.getDocument(OixdDOMReader.java:42)
	at oracle.sysman.oic.oics.OicsCheckPointReader.buildCheckpoint(OicsCheckPointReader.java:75)
	at oracle.sysman.oic.oics.OicsCheckPointSession.<init>(OicsCheckPointSession.java:101)
	at oracle.sysman.oic.oics.OicsCheckPointIndexSession.<init>(OicsCheckPointIndexSession.java:123)
	at oracle.sysman.oic.oics.OicsCheckPointFactory.getIndexSession(OicsCheckPointFactory.java:69)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPointSession(CheckpointContext.java:256)
	at oracle.sysman.assistants.util.CheckpointContext.getCheckPoint(CheckpointContext.java:245)
	at oracle.sysman.assistants.dbca.backend.Host.cleanup(Host.java:3710)
	at oracle.sysman.assistants.dbca.backend.SilentHost.cleanup(SilentHost.java:585)
	at oracle.sysman.assistants.dbca.Dbca.execute(Dbca.java:145)
	at oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:189)
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:44]  Cleanup started
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [OracleHome.cleanupDBOptionsIntance:1482]  DB Options dummy instance sid=null
[Thread-5] [ 2018-12-04 23:13:00.631 EET ] [DbcaCleanupHook.run:49]  Cleanup ended

Ah, there is a Java OutOfMemory exception. We know this from older times: we have to change the heap space for dbca's Java engine. Let's change to the ORACLE_HOME and check dbca:

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx128m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS

In the dbca script, we see that the Java heap space is 128 MB; we change it to 512 MB (and yes, create a backup first):

oracle@robucnoroda020:/u01/app/oracle/product/11.2.0.4/dbhome_1/bin/ [rdbms11204] grep JRE_OPT dbca
JRE_OPTIONS="${JRE_OPTIONS} -DSET_LAF=${SET_LAF} -Dsun.java2d.font.DisableAlgorithmicStyles=true -Dice.pilots.html4.ignoreNonGenericFonts=true  -DDISPLAY=${DISPLAY} -DJDBC_PROTOCOL=thin -mx512m"
exec $JRE_DIR/bin/java  $JRE_OPTIONS  $DEBUG_STRING -classpath $CLASSPATH oracle.sysman.assistants.dbca.Dbca $ARGUMENTS

After deleting the failed database we try again to create our database FOO:

[root@robucnoroda020 log]# odacli create-database -n FOO -dh 157bfdf4-4430-4fb1-878e-2fb803ee54bd -cs AL32UTF8 -y RAC -r ACFS -m
Password for SYS,SYSTEM and PDB Admin:

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Created
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------

Let’s check what’s going on:

[root@robucnoroda020 log]# odacli describe-job -i b289ed58-c29f-4ea8-8aa8-46a5af8ca529

Job details
----------------------------------------------------------------
                     ID:  b289ed58-c29f-4ea8-8aa8-46a5af8ca529
            Description:  Database service creation with db name: FOO
                 Status:  Success
                Created:  December 4, 2018 11:45:04 PM EET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Setting up ssh equivalance               December 4, 2018 11:45:05 PM EET    December 4, 2018 11:45:25 PM EET    Success
Creating volume dclFOO                   December 4, 2018 11:45:25 PM EET    December 4, 2018 11:45:42 PM EET    Success
Creating volume datFOO                   December 4, 2018 11:45:42 PM EET    December 4, 2018 11:45:59 PM EET    Success
Creating ACFS filesystem for DATA        December 4, 2018 11:45:59 PM EET    December 4, 2018 11:46:13 PM EET    Success
Database Service creation                December 4, 2018 11:46:13 PM EET    December 4, 2018 11:51:23 PM EET    Success
Database Creation                        December 4, 2018 11:46:13 PM EET    December 4, 2018 11:49:56 PM EET    Success
updating the Database version            December 4, 2018 11:51:21 PM EET    December 4, 2018 11:51:23 PM EET    Success
create Users tablespace                  December 4, 2018 11:51:23 PM EET    December 4, 2018 11:51:25 PM EET    Success

[root@robucnoroda020 log]#

Super, the database was successfully created. It seems that sometimes odacli create-database fails due to dbca memory usage. So also on ODA, check your dbca logs if your database creation wasn't successful. If you see these heap space exceptions, don't be afraid to change dbca's heap memory allocation.

Cet article odacli create-database fails on ODA X7-2HA with java.lang.OutOfMemoryError est apparu en premier sur Blog dbi services.

Oracle VM Server x86: How to get a redundant network for the heartbeat (part 2)

Dietrich Schroff - Tue, 2018-12-04 15:55
A while ago I played around with Oracle VM Manager.
I was wondering if I could set up a redundant network for the heartbeat on my VirtualBox playground. My question was: can I add an additional network and stripe the heartbeat over both networks, or do I have to configure two network interfaces and use bonding?

A few days ago I tried to stripe the "Heartbeat Network" over two networks, but this failed: Oracle VM Server x86: How to get a redundant network for the heartbeat

Now I tried to configure bonding for the "Heartbeat Network":
The first step is to navigate to "Server and VMs" and change to the perspective "Bond Ports":

Select the bond0 port and add eth1:

Then click OK and after that verify via the perspective "Ethernet ports":

That was easy.

Conclusion: The heartbeat inside OVM is implemented such that it can only work on the same subnet. It is not possible to use two different subnets for the heartbeat.

Deploy containers on Oracle Container Engine for Kubernetes using Developer Cloud

OTN TechBlog - Tue, 2018-12-04 08:25

In my previous blog, I described how to use Oracle Developer Cloud to build the Node.js microservice Docker image and push it to DockerHub. This blog will help you understand how to use Oracle Developer Cloud to deploy the Docker image pushed to DockerHub on Container Engine for Kubernetes.

Container Engine for Kubernetes

Container Engine for Kubernetes is a developer-friendly, container-native, enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle Cloud Infrastructure. Visit the following link to learn about Oracle’s Container Engine for Kubernetes:

https://cloud.oracle.com/containers/kubernetes-engine

Prerequisites for Kubernetes Deployment

  1. Access to an Oracle Cloud Infrastructure (OCI) account
  2. A Kubernetes cluster set up on OCI
    This tutorial explains how to set up a Kubernetes cluster on OCI. 

Set Up the Environment:

Create and Configure Build VM Templates and Build VMs

You’ll need to create and configure the Build VM template and Build VM with the required software, which will be used to execute the build job.

 

Click the user avatar, then select Organization from the menu. 

 

Click VM Templates then New Template. In the dialog that pops up, enter a template name, such as Kubernetes Template, select “Oracle Linux 7” for the platform, then click the Create button.  

 

After the template has been created, click Configure Software.

 

Select Kubectl and OCIcli (you’ll be asked to add Python3 3.6, as well) from the list of software bundles available for configuration, then click + to add these software bundles to the template. 

Click the Done button to complete the software configuration for that Build VM template.

           

From the Virtual Machines page, click +New VM and, in the dialog that pops up, enter the number of VMs you want to create and select the VM Template you just created (Kubernetes Template).

 

Click the Add button to add the VM.

 

Kubernetes deployment scripts

From the Project page, click the + New Repository button to add a new repository.

 

After creating the repository, Developer Cloud will bring you to the Code page, with the  NodejsKubernetes repository showing. Click the +File button to create a new file in the repository. (The README file in the repository was created when the project was created.) 

 

Copy the following script into a text editor and save the file as nodejs_micro.yaml.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nodejsmicro-k8s-deployment
spec:
  selector:
    matchLabels:
      app: nodejsmicro
  replicas: 1 # deployment runs 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nodejsmicro
    spec:
      containers:
      - name: nodejsmicro
        image: abhinavshroff/nodejsmicro4kube:latest
        ports:
        - containerPort: 80 # endpoint is at port 80 in the container
---
apiVersion: v1
kind: Service
metadata:
  name: nodejsmicro-k8s-service
spec:
  type: NodePort # exposes the service as a node port
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nodejsmicro

 

Click the Commit button to create the file and commit the code changes.

 

Click the Commit button in the Commit changes dialog that displays.

You should see the nodejs_micro.yaml file in the list of files for the NodejsKubernetes.git repository, as shown in the screenshot below.

 

Configuring the Build Job

Click Build on the navigation bar to display the Build page. Click the +New Job button to create a new build job. In the New Job dialog box, enter NodejsKubernetesDeploymentBuild for the Job name and, from the Software Template drop-down list, select Kubernetes Template as the Software Template. Then click the Create Job button to create the build job.

 

After the build job has been created, you’ll be brought to the configure screen. Click the Source Control tab and select NodejsKubernetes.git from the repository drop-down list. This is the same repository where you created the nodejs_micro.yaml file. Select master from the Branch drop-down list.

 

In the Builders tab, click the Add Builder drop-down and select OCIcli Builder from the drop-down list. 

To see what you need to fill in for each of the input fields in the OCIcli Builder form and to find out where to retrieve these values, you can either read my “Oracle Cloud Infrastructure CLI on Developer Cloud” blog or the documentation link to the “Access Oracle Cloud Infrastructure Services Using OCIcli” section in Using Oracle Developer Cloud Service.

Note: The values in the screenshot below have been obfuscated for security reasons. 

 

Click the Add Builder drop-down list again and select Unix Shell Builder.

 

In the text area of the Unix Shell Builder, add the following script that downloads the Kubernetes config file and deploys the container on Oracle Kubernetes Engine, which you created by following the instructions in my previous blog. Click the Save button to save the build job. 

 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.iad.aaaaaaaaafrgkzrwhtimldhaytgnjqhazdgmbuhc2gemrvmq2w --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl config view
kubectl get nodes
kubectl create -f nodejs_micro.yaml
sleep 120
kubectl get services nodejsmicro-k8s-service
kubectl get pods
kubectl describe pods

This script creates the kube directory, uses the OCIcli command oci ce cluster to download the Kubernetes cluster config file, then sets the KUBECONFIG environment variable.

The kubectl config and get nodes commands just let you view the cluster configuration and see the node details of the cluster. The create command actually deploys the Docker container on the Kubernetes cluster. We run the get services and get pods commands to retrieve the IP address and the port of the deployed container. Note that the nodejsmicro-k8s-service name was previously configured in the nodejs_micro.yaml file.

Note: The OCID for the cluster, mentioned in the script above, needs to be replaced by the one which you have. 

 

Click the Build Now button to start executing the Kubernetes deployment build. You can click the Build Log icon to view the build execution logs.

 

After the build job executes successfully, you can examine the build log to retrieve the IP address and the port for the deployed service on Kubernetes cluster. You’ll need to look for the IP address and the port under the deployment name you configured in the YAML file.

 

Use the IP address and the port that you retrieved in the format shown below and see the output in your browser.

http://<IP Address>:port/message

Note: The message output you see may differ from what is shown here, based on what you coded in the Node.js REST application that was containerized.
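If you prefer to script the check instead of opening a browser, a short Python snippet can call the same endpoint; the IP address and NodePort below are placeholders for the values you retrieved from the build log:

import requests  # assumes the requests package is available

node_ip = "129.146.0.10"   # placeholder: worker node public IP from the build log
node_port = 31000          # placeholder: NodePort reported by `kubectl get services`

response = requests.get(f"http://{node_ip}:{node_port}/message", timeout=10)
print(response.status_code, response.text)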

 

So, now you’ve seen how Oracle Developer Cloud streamlines and simplifies the process for managing the automation for building and deploying Docker containers on Oracle Kubernetes Engine.

Happy Coding!

 

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

What should a PL/SQL Developer learn in 2019

Tom Kyte - Tue, 2018-12-04 04:46
Hi Tom, I have worked for around 14 years as an Oracle developer, with SQL, PL/SQL, Forms, Reports and APEX. Since 2018 we have been observing major changes, like people using REST APIs rather than direct backend coding. So what should a traditional Oracle ...
Categories: DBA Blogs

[Video] Role of Oracle Apps DBA (EBS) R12 On Cloud

Online Apps DBA - Tue, 2018-12-04 04:41

Does Cloud mean No Job for Apps DBAs? To find the answer and to know about, How the role of Apps DBAs managing Oracle E-Business Suite (R12) will change in Cloud, Visit: https://k21academy.com/ebscloud12 & find answers for: ✔ What is the Future of Apps DBAs in Cloud ✔ Advanced/Expert Cloud Apps DBAs Skills ✔ 3 […]

The post [Video] Role of Oracle Apps DBA (EBS) R12 On Cloud appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Blog] Resource Management: Oracle Database Cloud Certification (1Z0-160) Performance Management

Online Apps DBA - Tue, 2018-12-04 03:59

Get Ready for Being an Oracle Database Cloud Service Certified [1Z0160] Visit: https://k21academy.com/1Z016021 and Know More About ✔ Multi-Tenant Architecture (CDB & PDB) ✔ Performance issues: Multiple PDBs in CDB ✔ Issues Related to Workload Management […]

The post [Blog] Resource Management: Oracle Database Cloud Certification (1Z0-160) Performance Management appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DockerCon2018 Barcelona – Day 1

Yann Neuhaus - Mon, 2018-12-03 17:00

As a football fan, traveling to Barcelona without watching a football game is inconceivable, so I started my trip by attending the game against Villarreal at Camp Nou  8-)


 

DockerCon Day 1

Today, with David Barbarin, was our first day at DockerCon2018 in Barcelona. The conference is hosted in a huge conference center and split between different types of sessions, including workshops, keynotes, hands-on labs, and the hallway track.

The first day focused only on workshops, hands-on labs and the hallway track, where you can meet Docker experts and exchange on multiple topics.

Interesting workshops were proposed today, but to follow a workshop you had to register for it beforehand. Fortunately for latecomers, there was a waiting list based on first come, first served.

We started by following a very interesting workshop: Migrating .NET application to Docker Containers.


The instructor showed us how to migrate a monolithic application to Docker containers. The starting point was a simple .NET application running in a single container, and step by step the instructor explained, through effective demos, how to split the different services of our application into a microservices architecture. We easily had access to a lab environment hosted in Azure through an RDP connection.

The two-hour workshop was split into the following parts:

  1. Building a single .NET application in one Docker Container.
  • Split the home page from the rest of the website by using two containers.
  3. Add an SQL Server database for the data with the persistent storage.
  4. Add an NGINX proxy to redirect requests for the homepage and other pages of the website
  5. Create a container for the API with a connection to the database.
  6. Add a Message queue for the website.

 

After lunch, we planned to follow another workshop concerning storage in Docker but the session and the waiting list were full. So we decided to get started with the Hands-On Labs. After signing up for the Hands-On Labs, you get access to your own hosted environment and can explore all the features and capabilities of Docker through different labs:

  • Docker for Beginners – Linux
  • Docker for Beginners – Windows
  • Docker EE – an Introduction
  • Docker Security
  • Modernizing Traditional Java Applications
  • Docker EE with Kubernetes

 

We finally ended the day by attending one of the last workshops of the day: Kubernetes Security, by Dimitris Kapadinis. During this workshop, the instructor showed us the different methods to secure a Kubernetes cluster.


The workshop was composed of the following parts:

  1. Create a Kubernetes cluster with Play-with-Kubernetes (or minikube locally).
  2. Create and deploy a simple web application using Kubernetes.
  3. Hack the web application by entering inside the web-app pod.
  4. Protect the application by creating a security context for Kubernetes – Securing Kubernetes components.

 

It was a very intense and interesting first day. I learned a lot through the different workshops and labs I did, so see you tomorrow  ;-)

 

Cet article DockerCon2018 Barcelona – Day 1 est apparu en premier sur Blog dbi services.

Row Migration

Jonathan Lewis - Mon, 2018-12-03 10:27

There’s a little detail of row migration that’s been bugging me for a long time – and I’ve finally found a comment on MoS explaining why it happens. Before saying anything, though, else I’m going to give you a little script (that I’ve run on 12.2.0.1 with an 8KB block size in a tablespace using ASSM and system allocated extents) to demonstrate the anomaly.


rem
rem     Script:         migration_itl.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1

create table t1 (v1 varchar2(4000))
segment creation immediate
tablespace test_8k
pctfree 0
;

insert into t1
select  null from dual connect by level <= 734 -- > comment to avoid wordpress format issue
;

commit;

spool migration_itl.lst

column rel_file_no new_value m_file
column block_no    new_value m_block

select 
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no, 
        dbms_rowid.rowid_block_number(rowid)    block_no,
        count(*)                                rows_starting_in_block
from 
        t1
group by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid) 
order by 
        dbms_rowid.rowid_relative_fno(rowid), 
        dbms_rowid.rowid_block_number(rowid)
;

update t1 set v1 = rpad('x',10);
commit;

alter system flush buffer_cache;

alter system dump datafile &m_file block &m_block;

column tracefile new_value m_tracefile

select
        tracefile 
from 
        v$process where addr = (
                select paddr from v$session where sid = (
                        select sid from v$mystat where rownum = 1
                )
        )
;

-- host grep nrid &m_tracefile

spool off

The script creates a single column table with pctfree set to zero, then populates it with 734 rows where every row has a null for its single column. The query using the calls to the dbms_rowid package will show you that all 734 rows are in the same block. In fact the block will be full (leaving a handful of bytes of free space) because even though each row will require only 5 bytes (2 bytes row directory entry, 3 bytes row overhead, no bytes for data) Oracle’s arithmetic will allow for the 11 bytes that is the minimum needed for a row that has migrated – the extra 6 bytes being the pointer to where the migrated row now lives. So 734 rows * 11 bytes = 8074, which leaves only a handful of bytes of free space once the roughly 110 bytes of block and transaction layer overhead are accounted for.

After populating and reporting the table the script then updates every row to grow it by a few bytes, and since there’s no free space every row will migrate to a new location. By dumping the block (flushing the buffer cache first) I can check where each row has migrated to. (If you’re running a UNIX flavour and have access to the trace directory then the commented grep command will give you what you want to see.) Here’s a small extract from the dump on a recent run:

nrid:  0x05c00082.0
nrid:  0x05c00082.1
nrid:  0x05c00082.2
nrid:  0x05c00082.3
...
nrid:  0x05c00082.a4
nrid:  0x05c00082.a5
nrid:  0x05c00082.a6
nrid:  0x05c00083.0
nrid:  0x05c00083.1
nrid:  0x05c00083.2
nrid:  0x05c00083.3
...
nrid:  0x05c00085.a4
nrid:  0x05c00085.a5
nrid:  0x05c00085.a6
nrid:  0x05c00086.0
nrid:  0x05c00086.1
nrid:  0x05c00086.2
...
nrid:  0x05c00086.3e
nrid:  0x05c00086.3f
nrid:  0x05c00086.40
nrid:  0x05c00086.41

My 734 rows have migrated to fill the next four blocks (23,130) to (23,133) of the table and taken up some of the space in the one after that (23,134). The first four blocks have used up row directory entries 0x00 to 0xa6 (0 to 166), and the last block has used up row directory entries 0x00 to 0x41 (0 to 65) – giving us the expected total: 167 * 4 + 66 = 734 rows. Let’s dump one of the full blocks – and extract the interesting bits:

alter system dump datafile 23 block 130;
Block header dump:  0x05c00082
 Object id on Block? Y
 seg/obj: 0x1ba1e  csc:  0x0000000001e0aff3  itc: 169  flg: -  typ: 1 - DATA
     fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0006.00f.000042c9  0x0240242d.08f3.14  --U-  167  fsc 0x0000.01e0affb
0x02   0x0000.000.00000000  0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x03   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0x04   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0x05   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0x06   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
...
0xa6   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0xa7   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0xa8   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000
0xa9   0x0000.000.00000000  0x00000000.0000.00  C---    0  scn  0x0000000000000000

nrow=167
frre=-1
fsbo=0x160
fseo=0x2ec
avsp=0x18c
tosp=0x18c

tab 0, row 0, @0xfe4
tl: 20 fb: ----FL-- lb: 0x1  cc: 1
hrid: 0x05c00081.0
col  0: [10]  78 20 20 20 20 20 20 20 20 20
tab 0, row 1, @0xfd0
tl: 20 fb: ----FL-- lb: 0x1  cc: 1
hrid: 0x05c00081.1

This block has 169 (0xa9) ITL entries – that’s one for each row migrated into the block (nrow = 167) plus a couple spare. The block still has some free space (avsp = tosp = 0x18c: available space = total space = 396 bytes), but it can’t be used for any more incoming migration because Oracle is unable to create any more ITL entries – it has reached the ITL limit for 8KB blocks (each ITL entry takes 24 bytes, and the ITL area is capped at roughly half the block, which for an 8KB block works out at 169 entries).
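As an aside (this isn’t part of the original test), if you just want to confirm and count the migrated rows without dumping blocks, ANALYZE … LIST CHAINED ROWS gives a quick cross-check, since it records migrated rows as well as genuinely chained ones. The sketch below assumes the CHAINED_ROWS table has already been created by $ORACLE_HOME/rdbms/admin/utlchain.sql:

rem     Not part of the original script - a quick cross-check of the
rem     number of migrated rows, assuming the chained_rows table exists
rem     (created by $ORACLE_HOME/rdbms/admin/utlchain.sql).

analyze table t1 list chained rows into chained_rows;

select  count(*)
from    chained_rows
where   table_name = 'T1'
;

rem     Alternatively, old-style statistics populate chain_cnt:
rem             analyze table t1 compute statistics;
rem             select chain_cnt from user_tables where table_name = 'T1';
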

So finally we come to the question that’s been bugging me for years – why does Oracle want an extra ITL slot for every row that has migrated into a block? The answer appeared in this sentence from MoS Doc ID 2420831.1: “Errors Noted in 12.2 and Above During DML on Compressed Tables”:

“It is a requirement during processing of parallel transactions that each data block row that does not have a header have a block ITL available.”

Rows that have migrated into a block do not have a row header – check the flag byte (fb) for the two rows I’ve listed, it’s “----FL--”: there is no ‘H’ for header. We have the First and Last row pieces of the row in this block and that’s it. So my original “why” question now becomes “What’s the significance of parallel DML?”

Imagine the general case where we have multiple processes updating rows at random from multiple blocks, and many different processes have forced rows to migrate at the same time into the same block. The next parallel DML statement would dispatch multiple parallel execution slaves, which would all be locking rows in their own separate block ranges – but multiple slaves could find that they wanted to lock rows which had all migrated into the same block – so every slave MUST be able to get an ITL entry in that block at the same time. For example, if 8 rows had migrated into a specific block from 8 different places, and 8 parallel execution slaves each followed a pointer from the region they were scanning to update a row that had migrated into this particular block, then all 8 slaves would need an ITL entry in the block (and if there were a ninth slave scanning this region of the table we’d need a 9th ITL entry). If we didn’t have enough ITL entries in the block for every single migrated row to be locked by a different process at the same time then (in principle, at least) parallel execution slaves could deadlock each other because they were visiting blocks in a different order to lock the migrated rows. For example:

  1. PQ00 visits and locks a row that migrated to block (23,131)
  2. PQ01 visits and locks a row that migrated to block (23,132)
  3. PQ00 visits and tries to lock a row that migrated to block (23,132) — but if there were no “extra” ITL slots available, it would wait
  4. PQ01 visits and tries to lock a row that migrated to block (23,131) — but there were no “extra” ITL slots available so it would wait, and we’d be in a deadlock.

Oracle’s solution to this threat: when migrating a row into a block, add a new ITL so that the number of ITL slots stays at “number of migrated rows + 2” (the “+2” is a working hypothesis, it might be “+ initrans of the table”).
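A rough way to test that hypothesis (my sketch, not anything from the MoS note) would be to repeat the experiment against a copy of the table created with a larger initrans, then compare the itc value with nrow in the dump of a destination block – if the ITL count tracks “nrow + initrans” rather than “nrow + 2” the alternative hypothesis wins:

rem     Sketch only - repeat the migration_itl.sql steps against a table
rem     with a larger initrans and compare "itc:" with "nrow" in the dump.

create table t2 (v1 varchar2(4000))
segment creation immediate
tablespace test_8k
pctfree  0
initrans 5
;

insert into t2
select  null from dual connect by level <= 734
;

commit;

update t2 set v1 = rpad('x',10);
commit;

rem     Then flush the buffer cache and dump a block that received
rem     migrated rows, as in the original script, and check whether
rem     the ITL count is nrow + 2 or nrow + 5.
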

Footnote 1

The note was about problems with compression for OLTP, but the underlying message was about 4 Oracle errors of type ORA-00600 and ORA-00700, which report the discovery and potential threat of blocks where the number of ITL entries isn’t large enough compared to the number of inward migrated rows. Specifically:

  • ORA-00600 [PITL1]
  • ORA-00600 [kdt_bseg_srch_cbk PITL1]
  • ORA-00700: soft internal error, arguments: [PITL6]
  • ORA-00700: soft internal error, arguments: [kdt_bseg_srch_cbk PITL5]

 

Footnote 2

While drafting the SQL script above, I decided to check how many other scripts I had already written about migrated rows and ITL slots: there were 12 of the former and 10 of the latter, and reading through the notes I found that one of the scripts (itl_chain.sql), dated December 2002, included the following note:

According to a comment that came from Oracle support via Steve Adams, the reason for the extra ITLs is to avoid a risk of parallel DML causing an internal deadlock.

So it looks like I knew what the ITLs were for about 16 years ago, but managed to forget sometime since then.

 

 

bulk collect through dynamic sql

Tom Kyte - Mon, 2018-12-03 10:26
I have written a procedure which extracts data from tables into a csv file. The tables and columns to be extracted are stored in two tables. My code picks up the table name and column name, builds a dynamic query and then writes it to a csv file. I have...
Categories: DBA Blogs
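
The question is truncated in the feed, but the general pattern it describes – build the SELECT at run time, open it as a ref cursor, and fetch it in batches with BULK COLLECT … LIMIT – looks something like the sketch below. It is illustrative only: the table name, column name and batch size are assumptions, not the poster’s code, and a real extract procedure would write to the csv file (e.g. via utl_file) rather than dbms_output.

declare
    -- Illustrative sketch only: fetch one column of one table through
    -- dynamic SQL with BULK COLLECT, batching the fetches with LIMIT.
    type t_values is table of varchar2(4000);
    l_values  t_values;
    l_cursor  sys_refcursor;
    l_table   varchar2(30) := 'USER_OBJECTS';   -- would come from the driving table
    l_column  varchar2(30) := 'OBJECT_NAME';    -- would come from the driving table
begin
    open l_cursor for
        'select ' || l_column || ' from ' || l_table;
    loop
        fetch l_cursor bulk collect into l_values limit 100;
        for i in 1 .. l_values.count loop
            dbms_output.put_line(l_values(i));  -- real code would write the csv line here
        end loop;
        exit when l_cursor%notfound;
    end loop;
    close l_cursor;
end;
/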
