Feed aggregator

Getting IP Address of DB Server

Tom Kyte - Sun, 2018-04-15 09:06
Hi, I would like to know the IP address of the DB server, so I executed the query below, found on Google: SELECT A.HOST_NAME, A.INSTANCE_NAME, UTL_INADDR.GET_HOST_ADDRESS FROM V$INSTANCE A; I got one IP address but it is not matching the IP address in...
Categories: DBA Blogs

Oracle SOA Suite 12c: Invoking a BPEL process via Enterprise Manager (EM)

Dietrich Schroff - Sun, 2018-04-15 09:05
After logging in via http://yourhost:7001/em, navigate to "soa-infra":
 Then choose "Deployed Composites":
 There you have to click on your composite (here: "Project1"):
 Choose "Test":
 Go to the bottom of the page and expand the payload:
 There you have to fill in your test string:
 
 And after a click on "Test Web Service",
the response is shown, including the response time.

To list the audit trails, go to "Home -> Flow Instances".

And choose "Search" on the right side:


UKOUG Northern Technology Summit 2018

Andrew Clarke - Sun, 2018-04-15 02:55
The UKOUG has run something called the Northern Server Day for several years. Northern because they were held in a northern part of England (but south of Scotland) and Server because the focus was the database server. Over the last couple of years the day has had several streams, covering Database, High Availability and Engineered Systems. So primarily a day for DBAs and their ilk.

This year the event has expanded to let in the developers. Yay!

The Northern Technology Summit 2018 is effectively a mini-conference: in total there are five streams - Database, RAC Cloud Infrastructure & Availability, Systems, APEX and Development. But for registration it counts as a SIG, so it's free for UKOUG members to attend. What astonishingly good value!1 And it doesn't affect your entitlement to attend the annual conference in December.

The Development stream

The Development stream covers a broad range of topics. Application development in 2018 is a tangled hedge, with new technologies like Cloud, AI and NoSQL expanding the ecosystem but not displacing the traditional database and practices. The Development stream presents a mix of sessions from the new-fangled and Old Skool ends of the spectrum.

  • The New Frontier: Mobile and AI Powered Conversational Apps. Oracle are doing interesting work with AI and Grant Ronald is king of the chatbots. This is an opportunity to find out what modern day apps can do.
  • Building a Real-Time Streaming Platform with Oracle, Apache Kafka, and KSQL. No single technology is a good fit for all enterprise problems. Robin Moffat of Confluent will explain how we can use Apache Kafka to handle event-based data processing.
  • Modernising Oracle Forms Applications with Oracle Jet. Oracle Forms was, and still is, a highly productive tool for building OLTP front-ends. There are countless organisations still running Forms applications. But perhaps the UX looks a little jaded in 2018. So here's Mark Waite from Griffiths Waite to show how we can use Oracle's JET JavaScript library to write new UIs without having to re-code the whole Forms application.
  • 18(ish) Things Developers Will Love about Oracle Database 18c. Oracle's jump to year-based release numbers doesn't make life easier for presenters: ten things about 10c was hard enough. But game for a challenge, Oracle's Chris Saxon attempts to squeeze as many new features as possible into his talk.
  • Modernize Your Development Experience With Oracle Cloud. Cloud isn't just something for the sysadmins; there's a cloud for developers too. Sai Janakiram Penumuru from DXC Technology will explain how Oracle Developer Cloud might revolutionise your development practices.
  • Designing for Deployment. As Joel Spolsky says, shipping is a feature. But it's a feature which is hard to retrofit. In this talk I will discuss some design principles which make it easier to build, deploy and ship database applications.
Everything else

So I hope the Development stream offers a day of varied and useful ideas. There are things you might be able to use right now or in the next couple of months, and things which might shape what you'll be doing next year. But it doesn't matter if not everything floats your boat. The cool thing about the day is that delegates can attend any of the streams.2

So you can listen to Nigel Bayliss talking about optimisation in the Database stream, Vipul Sharma talking about DevOps in the Availability stream, Anthony talking about Kubernetes in the Systems stream and John Scott talking about using Docker with Oracle in the APEX stream. There are sessions on infrastructure as code, upgrading Oracle 12cR1 to 12cR2, GDPR (the new EU data protection law), the APEX Interactive Grid, Apache Impala, and Cloud, lots of Cloud. Oh my!

The full agenda is here (pdf).

Register now

So if you're working with Oracle technology and you want to attend, this is what you need to know:
  • Date: 16th May 2018
  • Location: Park Plaza Hotel, Leeds
  • Cost: Free for UKOUG members. There is a fee for non-members but frankly you might as well buy a bronze membership package and get a whole year's worth of access to UKOUG events (including the annual conference). It's a bargain.
1. The exact number of SIG passes depends on the membership package you have
2. The registration process requires you to pick a stream but that is just for administrative purposes. It's not a lock-in.

Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies)

Amis Blog - Sat, 2018-04-14 13:15

At the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August, I attended the API Platform Cloud Service & Integration Cloud Service bootcamp.

In a series of articles I will give a high-level overview of what you can do with Oracle API Platform Cloud Service.

At the Summer Camp a pre-built Oracle VM VirtualBox APIPCS appliance (APIPCS_17_3_3.ova) was provided to us, to be used in VirtualBox. Everything needed to run a complete demo of API Platform Cloud Service is contained within Docker containers that are staged in that appliance. The version of Oracle API Platform CS, used within the appliance, is Release 17.3.3 — August 2017.

See https://docs.oracle.com/en/cloud/paas/api-platform-cloud/whats-new/index.html to learn about the new and changed features of Oracle API Platform CS in the latest release.

In this article in the series about Oracle API Platform CS, the focus will be on the Management Portal and creating an API (including some policies).

Be aware that the screenshots in this article, and the examples provided, are based on a demo environment of Oracle API Platform CS and were created by using the Oracle VM VirtualBox APIPCS appliance mentioned above.

This article only covers part of the functionality of Oracle API Platform CS. For more detail I refer you to the documentation: https://cloud.oracle.com/en_US/api-platform.

Short overview of Oracle API Platform Cloud Service

Oracle API Platform Cloud Service enables companies to thrive in the digital economy by comprehensively managing the full API lifecycle from design and standardization to documenting, publishing, testing and managing APIs. These tools provide API developers, managers, and users an end-to-end platform for designing, prototyping, documenting and testing APIs. Through the platform, users gain the agility needed to support changing business demands and opportunities, while having clear visibility into who is using APIs for better control, security and monetization of digital assets.
[https://cloud.oracle.com/en_US/api-platform/datasheets]

Architecture

Management Portal:
APIs are managed, secured, and published using the Management Portal.
The Management Portal is hosted on the Oracle Cloud, managed by Oracle, and users granted
API Manager privileges have access.

Gateways:
API Gateways are the runtime components that enforce all policies, but also help in
collecting data for analytics. The gateways can be deployed anywhere – on premise, on Oracle
Cloud or to any third party cloud providers.

Developer Portal:
After an API is published, Application Developers use the Developer Portal to discover, register, and consume APIs. The Developer Portal can be customized to run either on the Oracle Cloud or directly in the customer environment on premises.
[https://cloud.oracle.com/opc/paas/datasheets/APIPCSDataSheet_Jan2018.pdf]

Oracle Apiary:
In my article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about using Oracle Apiary and interacting with its Mock Server for the “HumanResourceService” API, I created earlier.

The Mock Server for the “HumanResourceService” API is listening at:
http://private-b4874b1-humanresourceservice.apiary-mock.com
[https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

Roles

Oracle API Platform CS uses roles to control access to the platform.

Roles determine which interfaces a user is authorized to access and the grants they are eligible to receive.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/api-platform-cloud-service-roles-resources-actions-and-grants.html]

  • Administrator
    System Administrators responsible for managing the platform settings. Administrators possess the rights of all other roles and are eligible to receive grants for all objects in the system.
  • API Manager
    People responsible for managing the API lifecycle, which includes designing, implementing, and versioning APIs. Also responsible for managing grants and applications, providing API documentation, and monitoring API performance.
  • Application Developer
    API consumers granted self-service access rights to discover and register APIs, view API documentation, and manage applications using the Developer Portal.
  • Gateway Manager
    Operations team members responsible for deploying, registering, and managing gateways. May also manage API deployments to their gateways when issued the Deploy API grant by an API Manager.
  • Gateway Runtime
    This role indicates a service account used to communicate from the gateway to the portal. This role is used exclusively for gateway nodes to communicate with the management service; users assigned this role can’t sign into the Management Portal or the Developer Portal.
  • Service Manager
    People responsible for managing resources that define backend services. This includes managing service accounts and services.
  • Plan Manager
    People responsible for managing plans.

Within the Oracle VM VirtualBox APIPCS appliance the following users (all with password welcome1) are present and used by me in this article:

  • User api-manager-user, role: APIManager
  • User api-gateway-user, role: GatewayManager

Design-First approach

Design is critical as a first step for great APIs. Collaboration ensures that we are creating the correct design. In my previous article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about the Design-First approach and using Oracle Apiary. I designed a “HumanResourceService” API.
[https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

So with a design in place, an application developer could begin working on the front-end, while service developers work on the back-end implementation and others can work on the API implementation, all in parallel.

Create an API, via the Management Portal (api-manager-user)

Start the Oracle API Platform Cloud – Management Portal as user api-manager-user.

After a successful sign in, the “APIs” screen is visible.

Create a new API via a click on button “Create API”. Enter the following values:

  • Name: HumanResourceService
  • Version: 1
  • Description: Human Resource Service is an API to manage Human Resources.

Next, click on button “Create”.

After a click on the “HumanResourceService” API, the next screen appears (with tab “APIs” selected):

Here you can see on the left, that the tab “API Implementation” is selected.

First I will give you a short overview of screenshots of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “API Implementation” of the “HumanResourceService” API

Tab “Deployments” of the “HumanResourceService” API

Tab “Publication” of the “HumanResourceService” API

Tab “Grants” of the “HumanResourceService” API

API grants are issued per API.

The following tabs are visible and can be chosen:

  • Manage API
    Users issued this grant are allowed to modify the definition of and issue grants for this API.
  • View all details
    Users issued this grant are allowed to view all information about this API in the Management Portal.
  • Deploy API
    Users issued this grant are allowed to deploy or undeploy this API to a gateway for which they have deploy rights. This allows users to deploy this API without first receiving a request from an API Manager.
  • View public details
    Users issued this grant are allowed to view the publicly available details of this API on the Developer Portal.
  • Register
    Users issued this grant are allowed to register applications for this plan.
  • Request registration
    Users issued this grant are allowed to request to register applications for this plan.

Users and groups issued grants for a specific API have the privileges to perform the associated actions on that API. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-api-grants.html.

Tab “Registrations” of the “HumanResourceService” API

Tab “Analytics” of the “HumanResourceService” API

Tab “API Implementation” of the “HumanResourceService” API

After you create an API, you can apply policies to configure the Request and Response flows. Policies in the Request flow secure, throttle, route, manipulate, or log requests before they reach the backend service. Polices in the Response flow manipulate and log responses before they reach the requesting client.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html]

Request flow, configuring the API Request URL

The API Request URL is the endpoint to which users or applications send requests for your API. You configure part of this URL. This endpoint resides on the gateway on which the API is deployed. The API will be deployed later.

The full address to which requests are sent consists of the protocol used, the gateway hostname, the API Request endpoint, and any private resource paths available for your service.

<protocol>://<hostname and port of the gateway node instance>/<API Request endpoint>/<private resource path of the API>

Anything beyond the API Request endpoint is passed to the backend service.

Hover over the “API Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

  • Your Policy Name: API Request
  • Comments:
  • Configuration | Protocol: HTTP ://MyGatewayIP/
  • Configuration | API Endpoint URL: HumanResourceService/1

Next, click on button “Apply”.

In the pop-up, click on button “Save Changes”.

Request flow, configuring the Service Request URL

The Service Request is the URL at which your backend service receives requests.

When a request meets all policy conditions, the gateway routes the request to this URL and calls your service. Note that the Service Request URL can point to any of your service’s resources, not just its base URL. This way you can restrict users to access only a subset of your API’s resources.

Hover over the “Service Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

  • Configure Headers – Service | Enter a URL: <Enter the Apiary Mock Service URL>

For example:
http://private-b4874b1-humanresourceservice.apiary-mock.com

Remark:
Remove the “/employees” from the Mock Service URL so the API can be designed to call multiple end-points such as “/departments”.

  • Use Gateway Node Proxy: unchecked
  • Service Account: None

Next, click on button “Apply”.

In the pop-up, click on button “Save Changes”.
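To illustrate why the “/employees” part is removed, here is a small Python sketch (illustrative only, not the gateway's actual routing code) of how the backend URL is composed from the Service Request URL and the private resource path of an incoming request:

```python
# Illustrative sketch (not actual gateway code): the gateway appends the
# private resource path (everything after the API Request endpoint) to the
# Service Request URL. Keeping the Service Request at the base mock URL,
# without "/employees", lets the one API reach multiple resources.
SERVICE_REQUEST_URL = "http://private-b4874b1-humanresourceservice.apiary-mock.com"

def backend_url(private_resource_path):
    """Compose the URL the gateway would call on the backend service."""
    return SERVICE_REQUEST_URL + private_resource_path

print(backend_url("/employees"))
print(backend_url("/departments/30"))
```

Had “/employees” been left in the Service Request URL, every request would have been routed to that one resource.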

Oftentimes, there are multiple teams participating in the development process. There may be front-end developers creating a new mobile app or chatbot, a backend services and integration team, and of course the API team.

If the backend service is not yet ready, you can still start creating the API. Perhaps you may want to begin with a basic implementation (for example an Apiary Mock Service URL) so your front-end developers are already pointing to the API, even before it is fully operational.

Response Flow

Click the Response tab to view a top-down visual representation of the response flow. The Service and API Response entries can’t be edited.
The Service Response happens first. The response from the backend service is always the first entry in the outbound flow. You can place additional policies in this flow. Policies are run in order, with the uppermost policy run first, followed by the next policy, and so on, until the response is sent back to the client.
The API Response entry is a visual representation of the point in the outbound flow when the response is returned to the requesting client.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html]

Deploy an API to the Gateway, via the Management Portal (api-manager-user)

On the left, click on tab “Deployments”.

Next, click on button “Deploy API”.

In the pop-up “Deploy API” there are no gateways, or they are not visible to the current user. To find out what is going on with the gateways, we have to sign in to the Oracle API Platform Cloud – Management Portal as a Gateway Manager. There we can also grant the privileges needed to deploy the API. How to do this is described later in this article.

For now we continue as if the correct privileges were already in place.

So in the pop-up “Deploy API”, select the “Production Gateway” gateway and click on button “Deploy”.

For a short while a pop-up “Deployment request submitted” appears.

Next, click on tab “Requesting”, where we can see the request (for an API deployment to a gateway) that the user api-manager-user sent to the Gateway Manager. The “Deployment State” is REQUESTING. So now we have to wait for the approval of the Gateway Manager.

Sign in to the Oracle API Platform Cloud – Management Portal as user api-gateway-user

In the top right of the Oracle API Platform Cloud – Management Portal click on the api-manager-user and select ”Sign Out”. Next, Sign in as user api-gateway-user.

After a successful sign in, the “Gateways” screen is visible.

Because this user is only a Gateway Manager, only the tab “Gateways” is visible.

At the moment (in this demo environment) there is one gateway available, being the “Production Gateway”. After a click on the “Production Gateway” gateway, the next screen appears:

Here you can see on the left, that the tab “Settings” is selected.

First I will give you a short overview of screenshots of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “Settings” of the “Production Gateway” gateway

Have a look at the “Load Balancer URL” (http://apics.oracle.com:8001), which we will be using later on in this article.

Tab “Nodes” of the “Production Gateway” gateway

Tab “Deployments” of the “Production Gateway” gateway

Tab “Grants” of the “Production Gateway” gateway

Tab “Analytics” of the “Production Gateway” gateway

Tab “Grants” of the “Production Gateway” gateway

On the left, click on tab “Grants”.

Grants are issued per gateway.

The following tabs are visible and can be chosen:

  • Manage Gateway
    Users issued this grant are allowed to manage API deployments to this gateway and manage the gateway itself.

    Remark:
    The api-gateway-user (with role GatewayManager) is granted the “Manage Gateway” privilege.

  • View all details
    Users issued this grant are allowed to view all information about this gateway.
  • Deploy to Gateway
    Users issued this grant are allowed to deploy or undeploy APIs to this gateway.
  • Request Deployment to Gateway
    Users issued this grant are allowed to request API deployments to this gateway.
  • Node service account
    Gateway Runtime service accounts are issued this grant to allow them to download configuration and upload statistics.

Users issued grants for a specific gateway have the privileges to perform the associated actions on that gateway. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-gateway-grants.html.

Click on tab “Request Deployment to Gateway”.

Next, click on button “Add Grantee”.

Select “api-manager-user” and click on button “Add”.

So now, the user api-manager-user (with Role APIManager) is granted the “Request Deployment to Gateway” privilege.

Remark:
In practice you would probably grant to a group instead of to a single user.

Be aware that you could also grant the “Deploy to Gateway” privilege, so that approval by the Gateway Manager (for deploying an API to a gateway) is no longer needed. This makes sense if it concerns a development environment, for example. Since the Oracle VM VirtualBox APIPCS appliance is using a “Production Gateway” gateway, in this article I chose the request-and-approve mechanism.

Approve a request for an API deployment to a gateway, via the Management Portal (api-gateway-user)

On the left, click on tab “Deployments” and then click on tab “Requesting”.

Hover over the “HumanResourceService” API, then click on button “Approve”.

In the pop-up, click on button “Yes”.

Then you can see that on the tab “Waiting”, the deployment is waiting.

Remark:
The deployment enters a Waiting state and the logical gateway definition is updated. The endpoint is deployed the next time gateway node(s) poll the management server for the updated gateway definition.

So after a short while, you can see on the tab “Deployed”, that the deployment is done.

After a click on the top right icon “Expand”, more details are shown:

So now the “HumanResourceService” API is deployed on the “Production Gateway” gateway (Node 1). We can also see the active policies in the Request and Response flow of the API Implementation.

It is time to invoke the API.

Invoke method “GetAllEmployees” of the “HumanResourceService” API, via Postman

For invoking the “HumanResourceService” API I used Postman (https://www.getpostman.com) as a REST Client tool.

In Postman, I created a collection named “HumanResourceServiceCollection”(in order to bundle several requests) and created a request named “GetAllEmployeesRequest”, providing method “GET” and request URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”.

Remember the “API Request URL”, which I partly configured in the “API Request” policy, and the “Load Balancer URL” of the “Production Gateway” gateway? Together they make up the full address to which requests have to be sent.
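As a sketch (illustrative Python, using the values from this demo environment), the full request URL is assembled like this:

```python
# Values from this demo environment:
LOAD_BALANCER_URL = "http://apics.oracle.com:8001"   # gateway's Settings tab
API_REQUEST_ENDPOINT = "HumanResourceService/1"      # from the "API Request" policy

def request_url(private_resource_path):
    # <protocol>://<gateway host and port>/<API Request endpoint>/<private resource path>
    return f"{LOAD_BALANCER_URL}/{API_REQUEST_ENDPOINT}{private_resource_path}"

print(request_url("/employees"))
# http://apics.oracle.com:8001/HumanResourceService/1/employees
```

Everything after the API Request endpoint ("/employees" here) is passed on to the backend service.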

After clicking on button Send, a response with “Status 200 OK” is shown:

Because I have not applied any extra policies, the request is passed to the backend service without further validation. This is simply the “proxy pattern”.

Later on in this article, I will add some policies and send additional requests to validate each one of them.

Tab “Analytics” of the “Production Gateway” gateway

Go back to the Management Portal (api-gateway-user) and in the tab “Analytics” the request I sent, is visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the request is also visible.

Policies

Policies in API Platform CS serve a number of purposes. You can apply any number of policies to an API definition to secure, throttle, limit traffic, route, or log requests sent to your API. Depending on the policies applied, requests can be rejected if they do not meet criteria you specify when configuring each policy. Policies are run in the order they appear on the Request and Response tabs. A policy can be placed only in certain locations in the execution flow.
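Conceptually, a flow runs its policies top-down and any policy may reject the request before it reaches the backend. The sketch below is a simplified illustration of that ordering, not API Platform CS code; the only_get policy is a hypothetical example:

```python
class PolicyReject(Exception):
    """Raised by a policy to stop the flow with an HTTP status code."""
    def __init__(self, status):
        self.status = status

def run_request_flow(policies, request):
    """Run policies in listed order; return (status, request), 200 meaning 'pass on'."""
    try:
        for policy in policies:
            request = policy(request)   # a policy may also transform the request
        return 200, request
    except PolicyReject as rejection:
        return rejection.status, None

# Hypothetical policy: reject anything that is not a GET.
def only_get(request):
    if request["method"] != "GET":
        raise PolicyReject(405)
    return request

status, _ = run_request_flow([only_get], {"method": "POST", "resource": "/employees"})
```

The order of policies therefore matters: a request rejected by an early policy never reaches the policies (or the backend service) behind it.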

The available policies are:

Security:

  • OAuth 2.0 | 1.0
  • Key Validation | 1.0
  • Basic Auth | 1.0
  • Service Level Auth | 1.0 Deprecated
  • IP Filter Validation | 1.0
  • CORS | 1.0

Traffic Management:

  • API Throttling – Delay | 1.0
  • Application Rate Limiting | 1.0
  • API Rate Limiting | 1.0

Interface Management:

  • Interface Filtering | 1.0
  • Redaction | 1.0
  • Header Validation | 1.0
  • Method Mapping | 1.0

Routing:

  • Header Based Routing | 1.0
  • Application Based Routing | 1.0
  • Gateway Based Routing | 1.0
  • Resource Based Routing | 1.0

Other:

  • Service Callout | 2.0
  • Service Callout | 1.0
  • Logging | 1.0
  • Groovy Script | 1.0

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management).

Add a Key Validation Policy, via the Management Portal (api-manager-user)

Use a key validation policy when you want to reject requests from unregistered (anonymous) applications.

Keys are distributed to clients when they register to use an API on the Developer Portal. At runtime, if the key is not present in the given header or query parameter, or if the application is not registered, the request is rejected; the client receives a 400 Bad Request error if no key validation header or query parameter is passed, or a 403 Forbidden error if an invalid key is passed.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html#GUID-5CBFE528-A74E-4700-896E-154378818E3A]
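The documented behavior can be sketched in Python as follows; note that the registered key set and the key value are hypothetical placeholders, since real keys are issued when an application registers on the Developer Portal:

```python
# Hypothetical registry of application keys; real keys come from the
# Developer Portal when an application registers to use the API.
REGISTERED_KEYS = {"demo-key"}   # placeholder value, not a real key

def key_validation(headers):
    """Return the HTTP status the policy would produce for these request headers."""
    key = headers.get("application-key")   # the key header configured in the policy
    if key is None:
        return 400   # Bad Request: no key header or query parameter passed
    if key not in REGISTERED_KEYS:
        return 403   # Forbidden: invalid / unregistered key
    return 200       # key accepted; request continues through the flow
```

For example, key_validation({}) yields 400, while a request carrying an unregistered key yields 403.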

This policy requires that you create and register an application, which is described in my next article.

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-manager-user.

Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Security”. Hover over the “Key Validation” policy and then, on the right, click the icon “Apply”. Enter the following values:

  • Your Policy Name: Key Validation
  • Comments:
  • Place after the following policy: API Request

Then, click on icon “Next”. Enter the following values:

  • Key Delivery Approach: Header
  • Key Header: application-key

Click on button “Apply as Draft”.

Next, click on button “Save Changes”.

I applied this as a draft policy, represented as a dashed line around the policy. Draft policies let you “think through” what you want before you have the complete implementation details. This enables you to complete the bigger picture in one sitting and to leave reminders of what is missing to complete the API later.
When you deploy an API, draft policies are not deployed.

Add an Interface Filtering Policy, via the Management Portal (api-manager-user)

Use an interface filtering policy to filter requests based on the resources and methods specified in the request.
[https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html#GUID-69B7BC21-416B-4262-9CE2-9896DEDF2144]

Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Interface Management”. Hover over the “Interface Filtering” policy and then, on the right, click the icon “Apply”. Enter the following values:

  • Your Policy Name: Interface Filtering
  • Comments:
  • Place after the following policy: Key Validation

Then, click on icon “Next”.

In the table below I summarized the requests that I created in the Oracle Apiary Mock Server for the “HumanResourceService” API:

  • GetAllEmployeesRequest: GET http://private-b4874b1-humanresourceservice.apiary-mock.com/employees
  • CreateEmployeeRequest: POST http://private-b4874b1-humanresourceservice.apiary-mock.com/employees
  • GetEmployeeRequest: GET http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/100
  • UpdateEmployeeRequest: PUT http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/219
  • GetDepartmentRequest: GET http://private-b4874b1-humanresourceservice.apiary-mock.com/departments/30
  • GetDepartmentEmployeeRequest: GET http://private-b4874b1-humanresourceservice.apiary-mock.com/departments/30/employees/119

I want to use an interface filtering policy to filter requests. As an example, I want to pass only requests (to the backend service) with the method GET and a resource starting with employees followed by an identifier, or starting with departments followed by employees and an identifier.

Select “Pass” from the list.

At “Filtering Conditions”, “Condition 1” enter the following values:

  • Resources: /employees/* ; /departments/*/employees/*
  • Methods: GET

Click on button “Apply”.

Next, click on button “Save Changes”.

I applied this policy as an active policy, represented as a solid line around the policy.
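The filtering conditions above can be approximated with glob-style matching, as in this Python sketch (illustrative; the gateway's exact wildcard semantics may differ):

```python
from fnmatch import fnmatch

# Filtering conditions as configured in the "Interface Filtering" policy.
ALLOWED_RESOURCES = ["/employees/*", "/departments/*/employees/*"]
ALLOWED_METHODS = {"GET"}

def passes_filter(method, resource):
    """True when the request may be passed on to the backend service."""
    return (method in ALLOWED_METHODS
            and any(fnmatch(resource, pattern) for pattern in ALLOWED_RESOURCES))

print(passes_filter("GET", "/employees/100"))                  # matches /employees/*
print(passes_filter("GET", "/employees"))                      # no trailing segment: rejected
print(passes_filter("POST", "/employees"))                     # wrong method: rejected
print(passes_filter("GET", "/departments/30"))                 # no /employees/<id>: rejected
print(passes_filter("GET", "/departments/30/employees/119"))   # matches second pattern
```

These outcomes line up with the 200 and 405 responses shown when the requests are replayed from Postman later in this article.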

Redeploy the API, via the Management Portal (api-manager-user)

Navigate to tab “Deployments” of the “HumanResourceService” API, and then hover over the “Production Gateway” gateway and then, on the right, hover over the icon “Redeploy”.

Next, click on icon “Latest Iteration”.

In the pop-up, click on button “Yes”. For a short while a pop-up “Redeploy request submitted” appears.

Then repeat the steps described before in this article, to approve the request, by switching to a Gateway Manager.

Remark:
Click on “Latest Iteration” to deploy the most recently saved iteration of the API.
Click on “Current Iteration” to redeploy the currently deployed iteration of the API.

After that, it is time to try out the effect of adding the “Interface Filtering” policy.

Validating the “Interface Filtering” policy, via Postman

In Postman for each request mentioned earlier (in the table), I created that request within the collection named “HumanResourceServiceCollection”.

Then again I invoked each request, to validate it against the “Interface Filtering” policy.

Invoke method “GetAllEmployees” of the “HumanResourceService” API

From Postman I invoked the request named “GetAllEmployeesRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “CreateEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “CreateEmployeeRequest” (with method “POST” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/100”) and a response with “Status 200 OK” is shown:

Invoke method “UpdateEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “UpdateEmployeeRequest” (with method “PUT” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/219”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetDepartment” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetDepartmentEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30/employees/119”) and a response with “Status 200 OK” is shown:

Tab “Analytics” of the “Production Gateway” gateway

Sign in to the Oracle API Platform Cloud – Management Portal as user api-gateway-user, click on the “Production Gateway” gateway and navigate to the tab “Analytics”.

In this tab, the requests I sent are visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the requests are also visible.

Next, click on the icon “Error and Rejections (4 Total)” and if we look, for example, at “Rejection Distribution”, we can see that there were 4 request rejections, because of the “Interface Filtering” policy.

So the “Interface Filtering” policy is working correctly.

Summary

As a follow-up to my previous articles about Oracle API Platform Cloud Service, in this article the focus is on using the Management Portal and creating the “HumanResourceService” API (including some policies).

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management). The latter policy I deployed to a gateway and validated that it worked correctly, using requests created in Postman.

While using the Management Portal in this article, I focused on the roles “API Manager” and “Gateway Manager”. For example, the user api-gateway-user had to approve a request from the api-manager-user to deploy an API to a gateway.

In a next article the focus will be on validating the “Key Validation” policy and using the “Development Portal”.

The post Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies) appeared first on AMIS Oracle and Java Blog.

How (Not) to Change Passwords in PostgreSQL

Don Seiler - Sat, 2018-04-14 11:53
A few months ago I was doing some testing on a development PostgreSQL database and watching the PostgreSQL server log (akin to Oracle's alert log). I was skimming for lines relevant to my testing when I noticed something chilling:

LOG:  statement: alter user john password 'IloveDBAs';

The username and password have obviously been changed but the point is plain: PostgreSQL printed the password value in plain text to the log file. A few tests of my own confirmed that whenever the PASSWORD specification is used in a CREATE USER or ALTER USER command, it will be printed to the server log:

LOG:  statement: create user john password 'badidea';
LOG:  statement: alter user john password 'alsobad';

The way around this in psql is to use the \password command. When creating a new user, do not specify the password at creation time; just use \password later:

# create user susan;
CREATE ROLE
# \password susan
Enter new password:
Enter it again:
# \password john
Enter new password:
Enter it again:

Now, like most password-change processes, you're asked to enter it twice and the input is not echoed to the screen. The log will show this:

LOG:  statement: create user susan;
LOG:  statement: ALTER USER susan PASSWORD 'md5d311d09782068f6a2391358ae512e113'
LOG:  statement: ALTER USER john PASSWORD 'md5a7e4187c1c977ed2e700e880dac11f15'

You can see the passwords are still printed, but they are md5-hashed now.

However, this only works when I'm using psql. If a developer is using a GUI tool that offers a feature to change a password, it could very well be issuing the same bad command under the hood.
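If your client tool can't prompt the way psql does, one hedged workaround is to compute the md5 hash yourself and only ever send the hashed form, so the plaintext never reaches the server log. This sketch assumes the documented md5 role-password format: the literal prefix 'md5' followed by md5 of the password concatenated with the username:

```shell
# Build an ALTER USER statement carrying only the md5 hash, never the
# plaintext. Format: 'md5' || md5(password || username).
user="john"         # example role from the post
pass="IloveDBAs"    # example password from the post
hash="md5$(printf '%s%s' "$pass" "$user" | md5sum | awk '{print $1}')"
printf "ALTER USER %s PASSWORD '%s';\n" "$user" "$hash"
```

Feed the generated statement to psql (or paste it into the GUI's SQL window); the server log then shows only the hash, just as with \password.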

Note: I originally observed this behavior in PostgreSQL 9.2, but it's still the case in 9.6 and 10.2. From a discussion with the pgsql-admin community, it's not likely to be changed any time soon.
Categories: DBA Blogs

Framework To Run SQL For All Active DB Instances

Michael Dinh - Sat, 2018-04-14 06:52

Requirement is to configure hugepages for multiple RAC database instances.

pmon processes

grid     12692     1  0 09:39 ?        00:00:00 asm_pmon_+ASM1
grid     13296     1  0 09:39 ?        00:00:00 mdb_pmon_-MGMTDB
oracle   13849     1  0 09:40 ?        00:00:00 ora_pmon_DEV1
oracle   13851     1  0 09:40 ?        00:00:00 ora_pmon_QA1
oracle   13854     1  0 09:40 ?        00:00:00 ora_pmon_PERF1
oracle   13855     1  0 09:40 ?        00:00:00 ora_pmon_TEST1
oracle   14998     1  0 09:40 ?        00:00:00 ora_pmon_INT1

Create parameter.sh, which will run parameter.sql.
You might be thinking, WTH is this person thinking!
I wanted the SQL script to be reusable.

Run parameter.sql

oracle@racnode-dc1-1:hawk1:/home/oracle
$ sqlplus / as sysdba @ parameter.sql

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 14 13:25:56 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options


NAME                           CDB
------------------------------ ---
HAWK                           NO


NAME                           DISPLAY_VALUE INST_ID     CON_ID DEFAULT_VALUE ISDEFAULT
------------------------------ ------------- ------- ---------- ------------- ---------
cluster_database               TRUE                1          0 FALSE         FALSE
                               TRUE                2          0 FALSE         FALSE
cluster_database_instances     2                   1          0 4294967295    TRUE
                               2                   2          0 4294967295    TRUE
db_file_name_convert                               1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
db_name                        hawk                1          0 NULL          FALSE
                               hawk                2          0 NULL          FALSE
db_unique_name                 hawk                1          0 NONE          TRUE
                               hawk                2          0 NONE          TRUE
instance_groups                                    1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
instance_name                  hawk1               1          0 NULL          TRUE
                               hawk2               2          0 NULL          TRUE
instance_number                1                   1          0 0             FALSE
                               2                   2          0 0             FALSE
instance_type                  RDBMS               1          0 NONE          TRUE
                               RDBMS               2          0 NONE          TRUE
memory_max_target              0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
memory_target                  0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
pdb_file_name_convert                              1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
pga_aggregate_limit            2G                  1          0 1             TRUE
                               2G                  2          0 1             TRUE
pga_aggregate_target           256M                1          0 0             FALSE
                               256M                2          0 0             FALSE
sga_max_size                   768M                1          0 1000          TRUE
                               768M                2          0 1000          TRUE
sga_target                     768M                1          0 0             FALSE
                               768M                2          0 0             FALSE
use_large_pages                TRUE                1          0 NULL          FALSE
                               TRUE                2          0 NULL          FALSE

34 rows selected.

13:25:56 SYS @ hawk1:>

Run parameter.sh

oracle@racnode-dc1-1:hawk1:/u01/app/oracle/12.1.0.2/db1
$ ~/parameter.sh

******** Current ora_pmon:
----------------------------------------
ora_pmon_hawk1
----------------------------------------

******** SQL Script: /home/oracle/parameter.sql

The Oracle base remains unchanged with value /u01/app/oracle
Oracle Instance alive for sid "hawk1"

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 14 13:26:55 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

13:26:55 SYS @ hawk1:>13:26:55 SYS @ hawk1:>13:26:55 SYS @ hawk1:>
NAME                           CDB
------------------------------ ---
HAWK                           NO


NAME                           DISPLAY_VALUE INST_ID     CON_ID DEFAULT_VALUE ISDEFAULT
------------------------------ ------------- ------- ---------- ------------- ---------
cluster_database               TRUE                1          0 FALSE         FALSE
                               TRUE                2          0 FALSE         FALSE
cluster_database_instances     2                   1          0 4294967295    TRUE
                               2                   2          0 4294967295    TRUE
db_file_name_convert                               1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
db_name                        hawk                1          0 NULL          FALSE
                               hawk                2          0 NULL          FALSE
db_unique_name                 hawk                1          0 NONE          TRUE
                               hawk                2          0 NONE          TRUE
instance_groups                                    1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
instance_name                  hawk1               1          0 NULL          TRUE
                               hawk2               2          0 NULL          TRUE
instance_number                1                   1          0 0             FALSE
                               2                   2          0 0             FALSE
instance_type                  RDBMS               1          0 NONE          TRUE
                               RDBMS               2          0 NONE          TRUE
memory_max_target              0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
memory_target                  0                   1          0 0             TRUE
                               0                   2          0 0             TRUE
pdb_file_name_convert                              1          0 NULL          TRUE
                                                   2          0 NULL          TRUE
pga_aggregate_limit            2G                  1          0 1             TRUE
                               2G                  2          0 1             TRUE
pga_aggregate_target           256M                1          0 0             FALSE
                               256M                2          0 0             FALSE
sga_max_size                   768M                1          0 1000          TRUE
                               768M                2          0 1000          TRUE
sga_target                     768M                1          0 0             FALSE
                               768M                2          0 0             FALSE
use_large_pages                TRUE                1          0 NULL          FALSE
                               TRUE                2          0 NULL          FALSE

34 rows selected.

13:26:55 SYS @ hawk1:>Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@racnode-dc1-1:hawk1:/u01/app/oracle/12.1.0.2/db1
$

The requirements are that the .sh and .sql files must have the same base name and reside in the same location.
The .sh can be called from any location.
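The derivation of the companion .sql name boils down to a dirname/basename pair on the script's own path. A minimal sketch, with a hypothetical path standing in for $0:

```shell
# How parameter.sh finds its companion .sql file (path is illustrative;
# the real script uses $0 here).
script=/home/oracle/parameter.sh
DN=$(dirname "$script")                      # /home/oracle
BN=$(basename "$script")                     # parameter.sh
SQL="$(echo "$BN" | cut -d'.' -f1).sql"      # parameter.sql
echo "$DN/$SQL"                              # -> /home/oracle/parameter.sql
```

Because the path comes from $0, the script resolves the .sql correctly no matter which directory you call it from.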

parameter.sh


#!/bin/sh
# --------------------------------------------------------------------------------
# parameter.sh
# MDinh April 12, 2018
#
# Shell script will run SQL script having the same base name
# for all active database instances.
# --------------------------------------------------------------------------------
DN=`dirname $0`
BN=`basename $0`
SQL_SCRIPT_DIR=$DN
SQL=`echo $BN|cut -d'.' -f1`.sql
echo
echo "******** Current ora_pmon:"
echo "----------------------------------------"
ps -eo cmd|grep ora_pmon|grep -v grep
echo "----------------------------------------"
echo
echo "******** SQL Script: "$SQL_SCRIPT_DIR/$SQL
echo
for x in `ps -eo cmd|grep ora_pmon|grep -v grep|awk -F "_" '{print $NF}'`
do
  ORAENV_ASK=NO
  set -a
  ORACLE_SID=$x
  . oraenv
  set +a
  sysresv|tail -1

sqlplus -L "/ as sysdba" << EOF
whenever sqlerror exit sql.sqlcode
whenever oserror exit 1
start $SQL_SCRIPT_DIR/$SQL
exit
EOF
if [ "$?" != "0" ]; then
  echo "$ORACLE_SID ERROR: Running $SQL_SCRIPT_DIR/$SQL"
  exit 1
fi
done
exit

parameter.sql


col name for a30
col value for a30
col default_value for a13
col display_value for a13
col inst_id for 99
break on name
set lines 200 pages 1000 trimsp on tab off
select name,CDB from v$database
;
select name,display_value,inst_id,con_id,default_value,isdefault
from gv$parameter
where regexp_like (name,'^sga|^pga|^memory|^cluster.*database|^instance|use_large_pages|db.*name','i')
order by name,value,inst_id
;

The framework for each shell script is the same; just make a copy and update any comments.

oracle@racnode-dc1-1:hawk1:/home/oracle
$ ll
total 36
-rwxr-xr-x 1 oracle oinstall   19 Feb 10 20:44 db.env
-rwxr-xr-x 1 oracle oinstall   49 Feb 10 20:45 gi.env
-rwxr-xr-x 1 oracle oinstall 1020 Apr 14 13:24 parameter.sh
-rw-r--r-- 1 oracle oinstall  414 Apr 14 13:24 parameter.sql
-rwxr-xr-x 1 oracle oinstall 1038 Apr 14 13:23 set_db_use_large_pages_only.sh
-rw-r--r-- 1 oracle oinstall  430 Apr 12 17:50 set_db_use_large_pages_only.sql
-rwxr-xr-x 1 oracle oinstall 1038 Apr 14 13:23 set_db_use_large_pages_true.sh
-rw-r--r-- 1 oracle oinstall  430 Apr 12 17:53 set_db_use_large_pages_true.sql
-rw-r--r-- 1 oracle oinstall 1909 Jan 29 02:39 wc.sql

oracle@racnode-dc1-1:hawk1:/home/oracle
$ diff parameter.sh set_db_use_large_pages_true.sh
3c3
< # parameter.sh
---
> # set_db_use_large_pages_true.sh

oracle@racnode-dc1-1:hawk1:/home/oracle
$ ./set_db_use_large_pages_true.sh

******** Current ora_pmon:
----------------------------------------
ora_pmon_hawk1
----------------------------------------

******** SQL Script: ./set_db_use_large_pages_true.sql

The Oracle base remains unchanged with value /u01/app/oracle
Oracle Instance alive for sid "hawk1"

SQL*Plus: Release 12.1.0.2.0 Production on Sat Apr 14 13:48:17 2018

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

13:48:17 SYS @ hawk1:>13:48:17 SYS @ hawk1:>13:48:17 SYS @ hawk1:>
NAME                           CDB
------------------------------ ---
HAWK                           NO


NAME                           DISPLAY_VALUE INST_ID     CON_ID DEFAULT_VALUE ISDEFAULT
------------------------------ ------------- ------- ---------- ------------- ---------
use_large_pages                TRUE                1          0 NULL          FALSE
                               TRUE                2          0 NULL          FALSE

13:48:17 SYS @ hawk1:>alter system set USE_LARGE_PAGES=TRUE scope=spfile sid='*'
13:48:17   2  ;

System altered.

13:48:17 SYS @ hawk1:>Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@racnode-dc1-1:hawk1:/home/oracle
$

Application Engine in Process Scheduler: PSAESRV Server Process -v- Standalone PSAE executable

David Kurtz - Sat, 2018-04-14 06:05
Whether to use the Application Engine server process (PSAESRV) in the Process Scheduler Tuxedo domain or the standalone PSAE executable is a frequently discussed point amongst PeopleSoft administrators.  Over the years, I have written various things on the subject.  I am going to draw them together in this blog, and restate Oracle’s now clear advice about when to use which option.

In PeopleTools 8.4, the Process Scheduler became a fully fledged Tuxedo domain.  The PSAESRV process was also introduced at this time.  It is a persistent process that handles both Application Engine and Optimization Engine requests.  Each server process creates persistent database connections.  A number of these server processes are started with the domain.   The PSAESRV process does not spawn like other Tuxedo server processes.  Instead, you must configure the number of server processes to match the maximum number of concurrent Application Engine process requests and concurrent Optimization Engine requests that the process scheduler can run.  The server was introduced to handle very short-lived Application Engine programs thus avoiding the overhead of instantiating a new process and new database sessions for each process request.  CRM typically uses Application Engine in this fashion, but generally, you do not see this in other PeopleSoft products.

Oracle has not always been clear what they mean by a short-lived process.  It has been suggested that if Application Engine processes typically take less than 10-30 seconds, or if you run more than 1000 Application Engine process requests per hour (note 651970.1), you should use PSAESRVs.
PeopleBooks advises you should use PSAESRV because it delivers improved system performance.  However, PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) contradicts this somewhat.  Ultimately, if you have any doubts, you should test it each way and determine whether one way gives a clear advantage over the other.

Oracle Support Note "What will be the impact of disabling PSAESRV on the Process Scheduler (Doc ID 651970.1)" explains that if PSAESRV is disabled in the Tuxedo domain configuration, the Process Scheduler goes back to the legacy behaviour and spawns a stand-alone PSAE process for each Application Engine request.  “The Application Engine will take a bit longer to start, [the] time delay may be range from millisecond to seconds” depending on hardware and configuration.

The stand-alone process has several advantages.
  • At the end of the Application Engine program, it disconnects from the database and terminates.  Thus releasing resources from the process and the database session.  Whereas the persistent Application Engine process has been reported to accumulate allocated memory over time.
  • If you are using Oracle database Global Temporary Tables in an Application Engine program, then you should not use PSAESRV because the tables are always created ON COMMIT PRESERVE ROWS and so are only released when the database session terminates.
  • If you set any session parameters within an Application Engine program run via PSAESRV, or enable database trace, then these settings will carry forward from one Application Program to the next unless you reset the parameter at the end of the program, or the start of the next.  This is not a concern with standalone PSAE processes.
However, there is at least one case where you must use the server process:
  • If you are using Oracle Active Data Guard and wish to mark some Application Engine programs as read-only then they must be run via the PSAESRV process
Conclusion
  • PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) sums it up very nicely: “PSAE is as good as PSAESRV for most practical purposes.  If you have an application engine job that runs longer than 10 seconds, PSAE is equivalent to PSAESRV.  PSAE has the added advantage of being recycled at the end of each application engine job, cleaning up any outstanding SQL cursors to the database that may have been left behind.  Because PSAE recycles after each use, PSAE does not have any possible memory leakage problem that may occupy the precious system memory.  In short, PSAE is a cleaner workhorse.”
  • I think it is reasonable to use PSAESRV in CRM.  For all other products, I recommend that PSAESRV should be disabled from all Process Schedulers.
    • If you do have some Application Engine processes that are both short-lived (i.e. less than 10 seconds) and run frequently, then consider creating other process schedulers with PSAESRV processes that are dedicated to running only these processes.  You can then move these processes to a new Process Scheduler category that only runs on these new Process Schedulers.
  • PSAESRV is configured by default, so if you don’t want to use it, and mostly you won’t, then you have to remember to disable it.

AWS CLI is Boon for DBAs

Pakistan's First Oracle Blog - Sat, 2018-04-14 01:45
For most production RDS databases, we normally have a related EC2 server to access that RDS database through tools like datapump, sqlplus etc.



RDS is great for point and click, but if you want to run your own monitoring or other administration-related scripts, you need an EC2 instance with the AWS CLI installed. For example, if you want to check the status of RDS instances, or verify that today's RDS snapshots actually happened and notify the DBA by page or email if they didn't, you can do that with the AWS CLI.

I will be writing and sharing some RDS-related shell scripts using the AWS CLI in coming days, so stay tuned.
Categories: DBA Blogs

New Oracle Security Public Training Dates Available

Pete Finnigan - Fri, 2018-04-13 19:46
Due to some very critical close family health issues in the last few months I have delayed advertising any public training dates this year for my Oracle Security classes as I have had to be available for family support during....[Read More]

Posted by Pete On 13/04/18 At 10:10 AM

Categories: Security Blogs

Check 12.1.0.2 Alert Log For HugePages Usage

Michael Dinh - Fri, 2018-04-13 18:23

What! Another post on hugepages – seriously?

+ grep 'Dump of system resources acquired for SHARED GLOBAL AREA' -B1 -A22 database alert log
+ tail -25
2018-04-13T09:40:23.908633-07:00
Dump of system resources acquired for SHARED GLOBAL AREA (SGA) 

2018-04-13T09:40:23.916573-07:00
 Per process system memlock (soft) limit = UNLIMITED
2018-04-13T09:40:23.920591-07:00
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 2996M
2018-04-13T09:40:23.928517-07:00
 Available system pagesizes:
  4K, 2048K 
2018-04-13T09:40:23.936717-07:00
 Supported system pagesize(s):
 
2018-04-13T09:40:23.943044-07:00
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
2018-04-13T09:40:23.947112-07:00
     2048K             2303            1498            1498        NONE
2018-04-13T09:40:23.951899-07:00
 Reason for not supporting certain system pagesizes: 
2018-04-13T09:40:23.960107-07:00
  4K - Large pagesizes only
2018-04-13T09:40:23.965247-07:00

====================================================================================================

Tue Apr 10 12:29:13 2018
Dump of system resources acquired for SHARED GLOBAL AREA (SGA) 

Tue Apr 10 12:29:13 2018
 Per process system memlock (soft) limit = 128G
Tue Apr 10 12:29:13 2018
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 4002M
Tue Apr 10 12:29:13 2018
 Available system pagesizes:
  4K, 2048K 
Tue Apr 10 12:29:13 2018
 Supported system pagesize(s):
Tue Apr 10 12:29:13 2018
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
Tue Apr 10 12:29:13 2018
        4K       Configured               5         1024005        NONE
Tue Apr 10 12:29:13 2018
     2048K                0            2001               0        NONE
Tue Apr 10 12:29:13 2018
RECOMMENDATION:
Tue Apr 10 12:29:13 2018
 1. For optimal performance, configure system with expected number 
 of pages for every supported system pagesize prior to the next 
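The grep shown at the top of this post can be wrapped into a small reusable check. A minimal sketch, assuming a typical diag destination — adjust ALERT_LOG for your own environment:

```shell
# Show the most recent SGA hugepages dump from a 12.1.0.2 alert log.
ALERT_LOG=${ALERT_LOG:-/u01/app/oracle/diag/rdbms/hawk/hawk1/trace/alert_hawk1.log}
if [ -f "$ALERT_LOG" ]; then
  grep 'Dump of system resources acquired for SHARED GLOBAL AREA' -B1 -A22 "$ALERT_LOG" | tail -25
else
  echo "alert log not found: $ALERT_LOG" >&2
fi
```

The tail -25 keeps only the latest dump, so ALLOCATED_PAGES reflects the most recent instance startup rather than an old one.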

Covering indexes in Oracle, and branch size

Yann Neuhaus - Fri, 2018-04-13 16:01

A covering index is an index that contains all the columns required by your query, so that you don’t have to do a TABLE ACCESS BY INDEX ROWID, which is the major cost of an index range scan. You don’t need any special feature to do that in Oracle. Just add the required columns at the end of the index. In the execution plan you will see the columns used as index keys for the range scan displayed in ‘access’ predicates, and the further filtering done on the remaining columns with ‘filter’ predicates. The ‘projection’ shows the columns that are returned in the rowset result.
However you may have seen that SQL Server has a special ‘INCLUDING’ keyword to separate those non-key columns added only for filtering or projection but not for access. What does it bring that Oracle doesn’t have?

An index entry is composed of a key and data associated to the key. The index is sorted on the key. The data for each key have no special order, like in a heap table. The idea of the SQL Server INCLUDING keyword is to separate the columns belonging to the key and the columns belonging to the data. It is not mandatory. You can add all columns to the key but depending on the implementation, the benefit can be:

  • some data types may not be allowed in the key but allowed as data
  • sorting the data when not required may be a performance overhead
  • there can be limitations on the size of the key
  • having a larger key may require more space in the branches
  • adding sorted columns may change the clustering factor

In Oracle, there are very few data types that cannot be indexed (like LONG). The limitation on the size of the key may come into play for large 12c Extended Datatypes. You can substring them, but that defeats the goal of covering indexes. I see two reasons why ‘INCLUDING’ indexes can be useful. The first reason is about the clustering factor. The second about sorting the whole index entry and referencing it from the branches. I’ll detail those reasons later, but first here is an example.


SQL> create table DEMO (UNIQU ,RANGE ,RANDOM_TEXT ,CONSTANT_TEXT ) as select rownum UNIQU , mod(rownum,4) RANGE , dbms_random.string('u',80) RANDOM_TEXT , lpad('x',80,'x') CONSTANT_TEXT from xmltable('1 to 100000');
Table DEMO created.
SQL> commit;
Commit complete.

This table has an all-distinct-values column (UNIQU), a few-distinct-values column (RANGE), and I’ll use them as keys. And I’ve two columns I’ll add as additional columns for covering queries: one with lots of distinct values (RANDOM_TEXT) and the other with few distinct values (CONSTANT_TEXT).
The first rows look like this:

SQL> select * from DEMO order by ROWID fetch first 5 rows only;
UNIQU RANGE RANDOM_TEXT CONSTANT_TEXT
----- ----- -------------------------------------------------------------------------------- --------------------------------------------------------------------------------
1 1 XCFNWCRCFBEPJPSHREUVVVTBUCCXLZMRPJPNQDTHWYRZRUORBPDOBCIRFHLICETULTCZTMPOCMUNQITV xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
2 2 XUSPNDOMPQKOIRCVDDTVYAGKRDGIXOSVUNMRAQLSRQGYKOFEXRQMCPXPYZYKRHHKDXGIINOUUAUJOLOO xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3 3 ZBCVFTDSRUFIUTSIWOOOBWIRMEFUXNWLADAPUPFNPVYDLPQTOUZVXJKMGIPCGZESXFXOIYVMKNSMMZKB xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
4 0 VOIRCXFVSRVZQRZDRLQRHZWNGQJAAWJXWXJKRCJVPWYDJSZLJIOEWAMCFSRCUPSPPEKITJYHHOUQSVYQ xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
5 1 UUSAMEVRWNLPGCUVMJWVVPDAENRYKIWWMIHTUJSZRQASMTYOVQNCGZGZIJZWNSOJVSIBMMUEAXOHJCOA xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

I’m adding indexes for access on RANGE as the index key, with only the key, or covering the random or constant text:

SQL> create index DEMO_RANGE on DEMO(RANGE) pctfree 50;
Index DEMO_RANGE created.
SQL> create index DEMO_RANGE_COVERING_RANDOM on DEMO(RANGE,RANDOM_TEXT) pctfree 50;
Index DEMO_RANGE_COVERING_RANDOM created.
SQL> create index DEMO_RANGE_COVERING_CONSTANT on DEMO(RANGE,CONSTANT_TEXT) pctfree 50;
Index DEMO_RANGE_COVERING_CONSTANT created.

An additional one adding the unique column in-between:

SQL> create index DEMO_RANGE_COVERING_WITH_PK on DEMO(RANGE,UNIQU,CONSTANT_TEXT) pctfree 50;
Index DEMO_RANGE_COVERING_WITH_PK created.

And now for access with the unique column as a key:

SQL> create index DEMO_UNIQU_COVERING_RANDOM on DEMO(UNIQU,RANDOM_TEXT) pctfree 50;
Index DEMO_UNIQU_COVERING_RANDOM created.
SQL> create index DEMO_UNIQU_COVERING_CONSTANT on DEMO(UNIQU,CONSTANT_TEXT) pctfree 50;
Index DEMO_UNIQU_COVERING_CONSTANT created.

Here are some interesting stats:

SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
SQL> select index_name,blevel,leaf_blocks,num_rows,clustering_factor from user_indexes where table_name='DEMO' order by 2,3;
INDEX_NAME BLEVEL LEAF_BLOCKS NUM_ROWS CLUSTERING_FACTOR
-------------------------------- ------ ----------- -------- -----------------
DEMO_RANGE 1 353 100000 9757
DEMO_RANGE_COVERING_RANDOM 2 2440 100000 99967
DEMO_RANGE_COVERING_CONSTANT 2 2440 100000 9757
DEMO_UNIQU_COVERING_RANDOM 2 2500 100000 2440
DEMO_UNIQU_COVERING_CONSTANT 2 2500 100000 2440
DEMO_RANGE_COVERING_WITH_PK 2 2565 100000 9757
6 rows selected.

Leaf size

About the size, the covering indexes have approximately the same number of leaf blocks because the included column (RANDOM_TEXT or CONSTANT_TEXT) has the same size (80 bytes). Of course, the non-covering index is smaller (but will need table access to query additional column). The key on UNIQU is slightly larger than the one on RANGE because the numbers go higher. The index with 3 columns is the largest.

Clustering factor

About the clustering factor, there’s one outlier here which deserves an explanation. But before that, you must understand that this higher clustering factor is not important for a query using the covering index, such as a SELECT RANDOM_TEXT WHERE RANGE=0, because in that case you don’t read the table. However for some queries you may cover only the filter predicates and go to the table for projection.
But the big problem is that when you add a column to an index to address a specific query, you don’t want to risk a side effect on another query, and changing the clustering factor is a risk here. One solution is to keep the old non-covering index (DEMO_RANGE) but then the side effect is on DML overhead.

To understand the change in clustering factor we must go deeper into the Oracle index key and data implementation. The ‘data’ part exists in Oracle indexes even when not specified explicitly with an INCLUDING clause. The ROWID is the data part. An index entry associates a key (the indexed columns) with a pointer to the table row (the ROWID). At least, this is for UNIQUE indexes where each key is unique.

Non-unique indexes are a special case. Actually, Oracle implements only unique key indexes. When the indexed columns are not unique, the ROWID is stored in the key part of the index entry, and there is no data part. You should read Richard Foote, Differences between Unique and Non-Unique Indexes, for a detailed explanation.

Branch size

The previous statistics displayed only the branch level (BLEVEL), but we can get more detail about the branch size with an ANALYZE INDEX.

The non-covering index has only one branch block, the root, which references all 353 leaf blocks containing the 100000 entries, with an average of 5479/352=15 bytes per branch entry:

SQL> analyze index DEMO_RANGE validate structure offline;
Index DEMO_RANGE analyzed.
SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
2 384 353 1375000 352 1 5479 25000 2830616 1380479 49 25000 12502.5 19 1375000 353

The covering index with many distinct values in the non-key column has more branch blocks, with an average of 34623/2439=14 bytes per branch entry:

SQL> analyze index DEMO_RANGE_COVERING_RANDOM validate structure offline;
Index DEMO_RANGE_COVERING_RANDOM analyzed.
SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
3 2560 2440 9475000 2439 6 34623 1 19558408 9509623 49 1 4 2 9475000 2440

Here the number of branch entries is higher only because there are more leaf blocks (as we have more columns), not because the branch entries are larger — they are actually smaller. They are smaller because a branch does not have to store the full value of all columns in order to identify one leaf block: only the first few bytes are needed, not the full 80 bytes.

The covering index with few distinct values in the non-key column has many more branch blocks, with an average of 234755/2439=96 bytes per branch entry:

SQL> analyze index DEMO_RANGE_COVERING_CONSTANT validate structure offline;
Index DEMO_RANGE_COVERING_CONSTANT analyzed.
SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
 
HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
3 2560 2440 9475000 2439 31 234755 25000 19759108 9709755 50 25000 12503.5 86 9475000 2440

Here the branch blocks are larger because multiple leaf blocks contain the same value of CONSTANT_TEXT: the second column is not sufficient to identify a single leaf block. The full 80 bytes must be stored, plus the ROWID in addition.

When the indexed column has only unique values, there is no need to store anything more in the branches (neither the additional columns nor the ROWID), and only 12 bytes are needed here on average:

SQL> analyze index DEMO_UNIQU_COVERING_RANDOM validate structure offline;
Index DEMO_UNIQU_COVERING_RANDOM analyzed.
SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
 
HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
3 2560 2500 9688892 2499 5 29737 1 20030140 9718629 49 1 4 0 9688892 2500

As the second column is not needed, the branch size is the same whether we use RANDOM_TEXT or CONSTANT_TEXT:

SQL> analyze index DEMO_UNIQU_COVERING_CONSTANT validate structure offline;
Index DEMO_UNIQU_COVERING_CONSTANT analyzed.
SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
 
HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
3 2560 2500 9688892 2499 5 29737 1 20030140 9718629 49 1 4 0 9688892 2500

The last index is my workaround for the larger branches you get when adding a column that does not have many distinct values: simply add a more selective column before it. Here I use the UNIQU one, but you probably have a column that is also useful for your queries.

SQL> analyze index DEMO_RANGE_COVERING_WITH_PK validate structure offline;
Index DEMO_RANGE_COVERING_WITH_PK analyzed.
SQL> select height,blocks,lf_blks,lf_rows_len,br_rows,br_blks,br_rows_len,most_repeated_key,btree_space,used_space,pct_used,rows_per_key,blks_gets_per_access,opt_cmpr_pctsave,lf_uncmp_rows_len,lf_uncmp_blks from index_stats
 
HEIGHT BLOCKS LF_BLKS LF_ROWS_LEN BR_ROWS BR_BLKS BR_ROWS_LEN MOST_REPEATED_KEY BTREE_SPACE USED_SPACE PCT_USED ROWS_PER_KEY BLKS_GETS_PER_ACCESS OPT_CMPR_PCTSAVE LF_UNCMP_ROWS_LEN LF_UNCMP_BLKS
------ ------ ------- ----------- ------- ------- ----------- ----------------- ----------- ---------- -------- ------------ -------------------- ---------------- ----------------- -------------
3 2688 2565 9963892 2564 6 37456 1 20557908 10001348 49 1 4 2 9963892 2565

Now you get the idea. When creating an index, or adding columns for a covering index, and you have a choice of column order, try to make the first bytes selective enough so that a branch needs only a small substring to identify each leaf block (or lower-level branch).
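The effect of leading-byte selectivity can be illustrated with a small Python sketch. This is a simplification of the branch-prefix idea, not Oracle code: for two adjacent sorted keys, a branch entry only needs the shortest prefix that separates them.

```python
# Sketch: how many leading bytes separate two adjacent sorted keys.
# Random values diverge after a few bytes, so the branch prefix is short;
# identical values need the full length, and a tiebreaker (the ROWID)
# must be stored on top of it.

def separating_prefix_len(low: bytes, high: bytes) -> int:
    """Length of the shortest prefix of `high` that differs from `low`."""
    for i, (a, b) in enumerate(zip(low, high)):
        if a != b:
            return i + 1
    # The keys share their full common prefix: the whole value does not
    # separate them, which is where the ROWID tiebreaker comes in.
    return len(high)

random_pair = (b"SQQxxxxxxxx", b"SQRyyyyyyyy")   # made-up random text
constant_pair = (b"x" * 80, b"x" * 80)           # the repeated CONSTANT_TEXT

print(separating_prefix_len(*random_pair))    # only a few bytes suffice
print(separating_prefix_len(*constant_pair))  # full 80 bytes, ROWID still needed
```

This matches the block dumps below: 3 bytes of RANDOM_TEXT identify a leaf block, while CONSTANT_TEXT needs all 80 bytes plus the ROWID.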

Block dumps

If you want to see the details about the branch entry length, here is some information from block dumps. I obtained them with the following:

SQL> column value new_value tracefile
SQL> select value from v$diag_info where name='Default Trace File';
VALUE
/u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_6799.trc
SQL> exec for i in (select header_file, header_block from dba_segments where owner='DEMO' and segment_name='DEMO_RANGE') loop execute immediate 'alter system dump datafile '||i.header_file||' block '||(i.header_block+1); end loop;
PL/SQL procedure successfully completed.
SQL> host tail -20 &tracefile

Here is the last branch entry of the root block of DEMO_RANGE, where the first column is not very selective and the ROWID is therefore required in the branch:

row#351[3279] dba: 113261807=0x6c03cef
col 0; len 2; (2): c1 04
col 1; len 6; (6): 07 00 05 7b 00 25

Here is the last branch entry of the root block of DEMO_RANGE_COVERING_RANDOM, where the first 3 bytes of the RANDOM_TEXT column are sufficient instead of the ROWID:

row#3[8006] dba: 113263037=0x6c041bd
col 0; len 2; (2): c1 04
col 1; len 3; (3): 53 51 52
col 2; TERM

Here is the last branch entry of the root block of DEMO_RANGE_COVERING_CONSTANT, where even the full 80 bytes of CONSTANT_TEXT are not sufficient and the ROWID is needed as a third column:

row#28[5316] dba: 117444566=0x7000fd6
col 0; len 2; (2): c1 04
col 1; len 80; (80):
78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78 78
78 78 78 78 78
col 2; len 6; (6): 07 00 05 43 00 25

Here is the last branch entry of the root block of DEMO_UNIQU_COVERING_CONSTANT, where the first column alone is sufficient:

row#2[8026] dba: 117447160=0x70019f8
col 0; len 4; (4): c3 09 0d 04
col 1; TERM

So what?

We probably don’t need a feature like SQL Server INCLUDING indexes in most of the cases. However, this may require thinking about the order of columns, mainly:

  • ensure that selective columns appear as early as possible (without compromising the efficiency of index access, of course), in order to lower the number of bytes required to address branches and leaves
  • when adding columns, first try to add a column that preserves the clustering factor you had with the ROWID, such as a date of insert
 

Cet article Covering indexes in Oracle, and branch size est apparu en premier sur Blog dbi services.

Collaborate Preview #2: Consider your Options for Moving Oracle WebCenter to the Cloud

By now, most people have heard about the benefits of cloud computing. To summarize, the cloud promises more agility and scalability with less cost and administration. However, for customers using legacy on-premise software, getting to the cloud isn't always simple and straightforward. In fact, confusion about deployment options, pricing, customer-managed versus vendor-managed offerings, and security often delays cloud strategies. This is definitely the case for Oracle WebCenter Content customers, who have a myriad of options for moving their documents, images, and other enterprise content to the cloud.

Fortunately for Oracle WebCenter customers, Oracle offers the most complete set of cloud services spanning Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). With this set of cloud services, Oracle WebCenter Content customers have industry-leading options to deploy their WebCenter instances to the cloud. Here is a summary of those options:

Oracle Bare Metal Cloud Service (IaaS)
  • Public cloud with granular control of security
  • Computing, block storage, networking services
  • Customer administered
  • Bring WebCenter licenses
  • Administration: High, user-owned
Oracle Compute Cloud Service (IaaS)
  • Computing, block storage, networking services
  • Bring WebCenter licenses
  • Administration: High, user-owned
Oracle Java Cloud Service (PaaS)
  • Full operating environment including WebLogic
  • Complete control and customization
  • Bring WebCenter licenses
  • Administration: Moderate to low
Oracle WebCenter Portal Cloud Service (PaaS)
  • WebCenter Portal in the Cloud
  • Metered or non-metered licenses
  • Administration: Moderate to low

You might be surprised that Oracle WebCenter Portal Cloud Service is listed above as one of the options for Oracle WebCenter Content, but it does present a viable solution. The user experience has always been one of the biggest complaints about WebCenter Content. Moving your content to the cloud and using WebCenter Portal Cloud to create intranets, extranets, composite applications, and self-service portals for accessing that content could improve the user experience overall and drive more adoption going forward. It provides users a more secure and efficient means to consume information while also interacting with applications, processes, and other users. The added benefit is that it comes with Oracle WebCenter Content.

We will be discussing more about the options WebCenter Content and Portal customers have to move their on-premise instances to the Oracle Cloud at Collaborate 2018 during this session: Options and Considerations for Moving Oracle WebCenter Content & Portal to the Cloud, which takes place on Monday, April 23rd from 11:00 AM to 12:00 PM. In this session, Fishbowl’s Director of Solutions, Jerry Aber, will go into more detail about the Oracle Cloud options listed above, as well as what to expect from a pricing perspective. Come hear about considerations for hybrid cloud environments as well, and what that means from an Oracle Cloud architecture perspective.

For more information on all of Fishbowl’s activities at Collaborate 2018, please visit this page: https://www.fishbowlsolutions.com/about/news/collaborate/

The post Collaborate Preview #2: Consider your Options for Moving Oracle WebCenter to the Cloud appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

MySQL – Time mismatch between system and log files

Yann Neuhaus - Fri, 2018-04-13 03:26

I was working on MySQL upgrades to version 5.7 at a customer when I noticed that, for my instance, the time in the error log file didn't correspond to the local system time:

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] mysql -uroot -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] ls -ltr mysqld1.err
-rw-r-----. 1 mysql mysql 13323 Apr 12 10:54 mysqld1.err

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] tail -1 mysqld1.err
2018-04-12T08:54:55.300633Z 67 [Note] Access denied for user 'root'@'localhost' (using password: YES)

I performed the operation at 10:54, but the error log file showed 08:54.
I checked whether there was a mismatch between the system time and the instance time, but that was not the case:

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] date
Thu Apr 12 10:56:02 CEST 2018

mysqld1-(root@localhost) [(none)]> select now();
+---------------------+
| now()               |
+---------------------+
| 2018-04-12 10:56:35 |
+---------------------+
1 row in set (0.00 sec)

Actually, starting from version 5.7.2, there is a variable called log_timestamps which is set to UTC by default:

mysqld1-(root@localhost) [(none)]> select @@global.log_timestamps;
+-------------------------+
| @@global.log_timestamps |
+-------------------------+
| UTC                     |
+-------------------------+
1 row in set (0.00 sec)
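The two-hour gap between the log and the system clock is exactly the CEST offset from UTC, which a quick Python sanity check confirms (nothing MySQL-specific here, just the timezone arithmetic):

```python
from datetime import datetime, timezone, timedelta

# Central European Summer Time is UTC+2.
cest = timezone(timedelta(hours=2))

# Local wall-clock time when the login attempt was made.
local = datetime(2018, 4, 12, 10, 54, 55, tzinfo=cest)

# Converting to UTC gives the timestamp that appeared in the error log.
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2018-04-12T08:54:55+00:00
```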

I modified this variable from UTC to SYSTEM:

mysqld1-(root@localhost) [(none)]> set global log_timestamps='SYSTEM';
Query OK, 0 rows affected (0.00 sec)

mysqld1-(root@localhost) [(none)]> select @@global.log_timestamps;
+-------------------------+
| @@global.log_timestamps |
+-------------------------+
| SYSTEM                  |
+-------------------------+
1 row in set (0.00 sec)

and after that everything was OK: the timestamp time zone in the MySQL log files was now aligned with the local system time:

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] mysql -uroot -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] ls -ltr mysqld1.err
-rw-r-----. 1 mysql mysql 13323 Apr 12 10:58 mysqld1.err

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] tail -1 mysqld1.err
2018-04-12T10:58:08.467615+01:00 69 [Note] Access denied for user 'root'@'localhost' (using password: YES)

Remember to also set this variable in the configuration file, to make the change persistent across restarts:

mysql@test1:/u01/app/mysql/admin/mysqld1/log/ [mysqld1] vi /etc/my.cnf

# 12042018 - dbi services - Timestamp time zone of log files from UTC (default) to SYSTEM
log_timestamps = SYSTEM
 

Cet article MySQL – Time mismatch between system and log files est apparu en premier sur Blog dbi services.

ACL created but unable to fetch the result

Tom Kyte - Fri, 2018-04-13 02:06
Hi TEam, I have created function to return the different language by refrencing http://www.orafaq.com/forum/t/170320 also i have created ACL and after running the function we have below error. select eospb_translation ('cats', 'fr','en') fr...
Categories: DBA Blogs

How to improve performance of select query

Tom Kyte - Fri, 2018-04-13 02:06
Hi Tom, I have table(coupon_c ) where millions of records in which there is one column coupon which have hardly 100 distinct values but total count are in millions so data distribution is like for one coupon there is around 1 millions records. <...
Categories: DBA Blogs

Move subpartition to another partition

Tom Kyte - Fri, 2018-04-13 02:06
Hi. Today I've discovered another issue on the DB I've inherited. I've found this range subpartitions in wrong partitions. Let me try to explain. The scenario is: Partitions with month values subpartitioned by weeks, so I expect to have a ...
Categories: DBA Blogs

PL/SQL Performance Tuning

Tom Kyte - Fri, 2018-04-13 02:06
My stored PROCEDURE searches for potential duplicate PeopleSoft Vouchers (PS_VOUCHER, PS_VOUCHER_LINE, PS_DISTRIB_LINE, PS_VENDOR, PS_PYMNT_VCHR_XREF). The first step is to get the set of unique concatenated/delimited values for records that are cur...
Categories: DBA Blogs

Avoid full-table scan when filtered by ora_rowscn

Tom Kyte - Fri, 2018-04-13 02:06
Hello, We have a need to extract data using "sqoop" on a bunch of tables. However, there is no column defined in those tables that store information like "update date or last modified date". We are thinking of using ora_rowscn as a filter. Howe...
Categories: DBA Blogs

Column deleted by mistake

Tom Kyte - Fri, 2018-04-13 02:06
Hi Tom, Thanks for wonderful support always. I want to know is there any way to know who has deletes the entire column from the table. There is no manually created trigger in my database for auditing on column delete. One entire column ha...
Categories: DBA Blogs

Conditional execution one of two parts of union all query

Tom Kyte - Fri, 2018-04-13 02:06
I have have two complex, expensive queries that I have performed a union all to bring together for use in a report tool I have a bind variable that, if set to one value, the union query should execute the first part of the union. If the bind ...
Categories: DBA Blogs
