Amis Blog

Friends of Oracle and Java

Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode

Thu, 2018-04-19 02:23

In previous articles, I have talked about using Docker containers in smart testing strategies, by creating a container image that contains the baseline of the application and the required test setup (test data, for example). For each test, instead of performing complex setup actions and finishing off with elaborate tear-down steps, we simply spin up a container at the beginning and toss it away at the end.

I have shown how that can be done through the command line – but that of course is not a workable procedure for automation. In this article I will provide a brief introduction to programmatic manipulation of containers. By providing access to the Docker Daemon API from remote clients (step 1) and by leveraging the npm package Dockerode (step 2), it becomes quite simple to create, start and stop containers from a straightforward Node application – as well as build, configure, inspect and pause them, and manipulate them in other ways. This opens up the way for build jobs to programmatically run tests by starting the container, running the tests against that container, and killing and removing the container after the test. Combinations of containers that work together can be managed just as easily.

As I said, this article is just a very lightweight introduction.

Expose Docker Daemon API to remote HTTP clients

The step that took me the longest was exposing the Docker Daemon API. Subsequent versions of Docker have used different configurations for this, and apparently different Linux distributions also take different approaches. I was happy to find an article (referenced at the end of this post) that describes, for Ubuntu 16.x as Docker host, how to enable access to the API.

Edit the file /lib/systemd/system/docker.service – add a -H tcp:// entry to the line that describes how to start the Docker Daemon, in order to have it listen to incoming requests at port 4243 (note: other ports can be used just as well).
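As an illustration, the edited entry could look like the fragment below. The exact tcp address was elided in the original text; binding to port 4243 on all interfaces is shown here as an assumption of a typical value.

```ini
# /lib/systemd/system/docker.service (fragment, illustrative)
[Service]
# -H fd:// keeps the default local socket;
# the tcp entry (assumed value) makes the daemon listen on port 4243 on all interfaces
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://
```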

Reload systemd (systemctl daemon-reload) to apply the changed file configuration.

Restart the Docker Service: service docker restart

And we are in business.

A simple check to see if HTTP requests on port 4243 are indeed received and handled: execute this command on the Docker host itself:

curl http://localhost:4243/version


The next step is the actual remote access. From a browser running on a machine that can successfully ping the Docker host – in my case the VirtualBox VM spun up by Vagrant, at the IP defined in the Vagrantfile – open the same /version URL. The result should be similar to this:
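The URL to open can be assembled like this (the IP shown is a placeholder – substitute the IP from your own Vagrantfile):

```shell
# Hypothetical values: replace DOCKER_HOST_IP with the IP from your Vagrantfile
DOCKER_HOST_IP=192.168.56.10
DOCKER_API_PORT=4243
VERSION_URL="http://${DOCKER_HOST_IP}:${DOCKER_API_PORT}/version"
echo "$VERSION_URL"
```

Open this URL in the browser, or pass it to curl from the remote machine.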


Get going with Dockerode

Getting started with the npm package Dockerode is really no different from getting started with any other npm package. So the steps to create a simple Node application that can list, start, inspect and stop containers in the remote Docker host are as simple as:

Use npm init to create the skeleton for a new Node application


npm install dockerode --save

to retrieve Dockerode and create the dependency in package.json.
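After the install, package.json will contain a dependency entry similar to the fragment below (the version shown is illustrative, not taken from the article):

```json
{
  "dependencies": {
    "dockerode": "^2.5.0"
  }
}
```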

Create file index.js. Define the Docker Host IP address ( in my case) and the Docker Daemon Port (4243 in my case) and write the code to interact with the Docker Host. This code will list all containers. Then it will inspect, start and stop a specific container (with identifier starting with db8). This container happens to run an Oracle Database – although that is not relevant in the scope of this article.

var Docker = require('dockerode');
var dockerHostIP = ""  // the IP address of the Docker host
var dockerHostPort = 4243

var docker = new Docker({ host: dockerHostIP, port: dockerHostPort });

// list all containers - including the ones that are not running
docker.listContainers({ all: true }, function (err, containers) {
    console.log('Total number of containers: ' + containers.length);
    containers.forEach(function (container) {
        console.log(`Container ${container.Names} - current status ${container.Status} - based on image ${container.Image}`)
    })
})

// create a container entity; getContainer does not query the API
async function startStop(containerId) {
    var container = docker.getContainer(containerId)
    try {
        var data = await container.inspect()
        console.log("Inspected container " + JSON.stringify(data))
        var started = await container.start();
        console.log("Started " + started)
        var stopped = await container.stop();
        console.log("Stopped " + stopped)
    } catch (err) {
        console.log(err)
    }
}
// invoke the function for the container whose identifier starts with db8
startStop('db8')

The output in Visual Studio Code looks like this:


And the action can be tracked on the Docker host like this (to prove it is real…)


Article by Ivan Krizsan on configuring the Docker Daemon on Ubuntu 16.x – my life saver:

GitHub Repo for Dockerode – with examples and more:

Presentation at DockerCon 2016 that gave me the inspiration to use Dockerode: 

Docker docs on Configuring the Daemon –

The post Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode appeared first on AMIS Oracle and Java Blog.

Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests

Wed, 2018-04-18 07:00

Here is a procedure for running an Oracle Database, preparing a baseline of objects (tables, stored procedures) and data, creating an image of that baseline, and subsequently running containers based on that baseline image. Each container starts with a fresh setup. For running automated tests that require test data to be available in a known state, this is a nice way of working.

The initial Docker container was created using an Oracle Database 11gR2 XE image:

Execute this statement on the Docker host:

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe  wnameless/oracle-xe-11g

This will spin up a container called oracle-xe. After 5-20 seconds, the database is created and started and can be accessed from an external database client.

From the database client, prepare the database baseline, for example:

create user newuser identified by newuser;

create table my_data (data varchar2(200));

insert into my_data values ('Some new data '||to_char(sysdate,'DD-MM HH24:MI:SS'));



These actions represent the complete database installation of your application – one that may consist of hundreds or thousands of objects and MBs of data. The steps and the principles remain exactly the same.

At this point, create an image of the baseline – that consists of the vanilla database with the current application release’s DDL and DML applied to it:

docker commit --pause=true oracle-xe

This command returns an id, the identifier of the Docker image that is now created for the current state of the container – our base line. The original container can now be stopped and even removed.

docker stop oracle-xe


Spinning up a container from the base line image is now done with:

docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true  --name oracle-xe-testbed  <image identifier>

After a few seconds, the database has started up and remote database clients can start interacting with the database. They will find the database objects and data that was part of the baseline image. To perform a test, no additional set up nor any tear down is required.

Perform the tests that require performing. The tear down after the test consists of killing and removing the testbed container:

docker kill oracle-xe-testbed && docker rm oracle-xe-testbed

Now return to the step “Spinning up a container”
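The run / test / kill-and-remove cycle described above can be sketched as a pair of small shell functions. This is a dry-run sketch: the functions only echo the docker commands rather than executing them, and IMAGE_ID is a placeholder for the identifier that `docker commit` returned.

```shell
# Placeholder for the image identifier returned by `docker commit`
IMAGE_ID=abc123

spin_up_testbed() {
  echo "docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe-testbed $IMAGE_ID"
}

tear_down_testbed() {
  echo "docker kill oracle-xe-testbed && docker rm oracle-xe-testbed"
}

spin_up_testbed
# ... run the automated tests against the testbed here ...
tear_down_testbed
```

Dropping the `echo`s turns this into an actual test harness wrapper.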

Spinning up the container takes a few seconds – 5 to 10. The time is mainly taken up by the database processes that have to be started from scratch.

It should be possible to create a snapshot of a running container (using Docker checkpoints) and restore the testbed container from that snapshot. This create/start-from-checkpoint/kill/rm cycle should be even faster than the run/kill/rm cycle that we have now got going. A challenge is the fact that opening the database does not just start processes and manipulate memory, but also handles files. That means that we need to commit the running container and associate the restored checkpoint with that image. I have been working on this at length, but I have not been successful yet – running into various issues (ORA-21561 OID generation failed, ORA-27101 shared memory realm does not exist, redo log file not found, …). I continue to look into this.

Use Oracle Database 12c Image

Note: instead of the Oracle Database XE image used before, we can go through the same steps based on, for example, the image sath89/oracle-12c.

The commands and steps are now:

docker pull sath89/oracle-12c

docker run -d -p 8080:8080 -p 1521:1521 --name oracle-db-12c sath89/oracle-12c

connect from a client – create baseline.

When the baseline database and its contents have been set up, create the container image of that state:

docker commit --pause=true oracle-db-12c

This command returns an image identifier.

docker stop oracle-db-12c

Now to run a test iteration, run a container from the base line image:

docker run -d -p 1521:1521  --name oracle-db-12c-testbed  <image identifier>

Connect to the database at port 1521 or have the web application or API that is being tested make the connection.



The Docker Create Command:

Nifty Docker commands in Everyday hacks for Docker:

Circle CI Blog – Checkpoint and restore Docker container with CRIU –

The post Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests appeared first on AMIS Oracle and Java Blog.

How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3)

Mon, 2018-04-16 02:00
Recapitulation on how to install the Oracle Integration Cloud on premises connectivity agent

Recently (April 2018) I gained access to the new Oracle Integration Cloud (OIC) and wanted to make an integration connection to an on-premises database. For this purpose, an on-premises connectivity agent needs to be installed, as is thoroughly explained by my colleague Robert van Mölken in his blog prepraring-to-use-the-ics-on-premises-connectivity-agent.

With the (new) Oracle Integration Cloud environment, the installation of the connectivity agent has changed slightly, as shown below. It took me some effort to get the new connectivity agent working, so I decided to recapture the necessary steps in this blog. Hopefully this will give you a head start in getting the connectivity agent up and running.


Prerequisites

Access to an Oracle Integration Cloud Service instance.

Rights to do some installation on a local / on-premises environment, Linux-based (e.g. the SOA VirtualBox appliance).


Agent groups

For connection purposes you need to have an agent group defined in the Oracle Integration Cloud.

To define an agent group, you need to select the agents option in the left menu pane.  You can find any already existing agent groups here as well.

Select the ‘create agent group’ button to define a new agent group and fill in this tiny web form.


Downloading and extracting the connectivity agent

For downloading the connectivity agent software you also need to select the agents option in the left menu pane, followed by the download option in the upper menu bar.

After downloading you have a file called ‘’, which takes 145.903.548 bytes

This has a much smaller memory footprint than the former connectivity agent software (, which takes 1.867.789.797 bytes).

For installation of the connectivity agent, you need to copy and extract the file to an installation folder of your choice on the on-premise host.

After extraction you see several files, amongst which ‘InstallerProfile.cfg’.


Setting configuration properties

Before starting the installation you need to edit the content of the file InstallerProfile.cfg.

Set the value for the property OIC_URL to the right hostname and sslPort *.

Also set the value for the property agent_GROUP_IDENTIFIER to the name of the agent group  you want the agent to belong to.

After filling in these properties save the file.
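After editing, InstallerProfile.cfg could look something like the fragment below. The hostname, port and group name are placeholders, not values from the article; take the real hostname and sslPort from your instance details page.

```ini
# InstallerProfile.cfg (illustrative values)
OIC_URL=https://myoic-host.example.com:443
agent_GROUP_IDENTIFIER=MY_AGENT_GROUP
```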


* On the instance details page you can see the right values for the hostname and sslPort. This is the page which shows you the weblogic instances that host your OIC and it looks something like this:

Certificates

For my trial purpose I didn’t need a certificate to communicate between the OIC and the on-premise environment.

But if you do, you can follow the next 2 steps:


a. Go to the agenthome/agent/cert/ directory.

b. Run the following command: keytool -importcert -keystore keystore.jks -storepass changeit -keypass password -alias alias_name  -noprompt -file certificate_file


Java JDK

Before starting the installation of the connectivity agent, make sure your JAVA JDK is at least version 8, with the JAVA_HOME and PATH set.

To check this, open a terminal window and type: ‘java -version’ (without the quotes)

You should see the version of the installed Java, e.g. java version “1.8.0_131”.

To add JAVA_HOME to the PATH setting, type ‘export PATH=$JAVA_HOME/bin:$PATH’ (in bash; without the quotes)
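For example, in bash (the JDK path shown is a placeholder – adjust it to your own installation):

```shell
# Point JAVA_HOME at the installed JDK (placeholder path) and put its bin
# directory at the front of the PATH
JAVA_HOME=/usr/java/jdk1.8.0_131
export JAVA_HOME
export PATH="$JAVA_HOME/bin:$PATH"
# show the first PATH entry to verify
echo "$PATH" | cut -d: -f1
```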

Running the installer

You can start the connectivity agent installer with the command: ‘java -jar connectivityagent.jar’ (again, without the quotes).

During the installation you are asked for your OIC username and the corresponding password.

The installation finishes with a message that the agent was installed successfully and is now up and running.

Check the installed agent

You can check that the agent is communicating within the agent group you specified.

Behind the name of the agent group, the number of agents communicating within it is shown.


The post How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3) appeared first on AMIS Oracle and Java Blog.

Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies)

Sat, 2018-04-14 13:15

At the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August, I attended the API Cloud Platform Service & Integration Cloud Service bootcamp.

In a series of articles I will give a high-level overview of what you can do with Oracle API Platform Cloud Service.

At the Summer Camp a pre-built Oracle VM VirtualBox APIPCS appliance (APIPCS_17_3_3.ova) was provided to us, to be used in VirtualBox. Everything needed to run a complete demo of API Platform Cloud Service is contained within Docker containers that are staged in that appliance. The version of Oracle API Platform CS, used within the appliance, is Release 17.3.3 — August 2017.

See the product documentation to learn about the new and changed features of Oracle API Platform CS in the latest release.

In this article in the series about Oracle API Platform CS, the focus will be on the Management Portal and creating an API (including some policies) .

Be aware that the screenshots in this article, and the examples provided, are based on a demo environment of Oracle API Platform CS and were created by using the Oracle VM VirtualBox APIPCS appliance mentioned above.

This article only covers part of the functionality of Oracle API Platform CS. For more detail I refer you to the documentation:

Short overview of Oracle API Platform Cloud Service

Oracle API Platform Cloud Service enables companies to thrive in the digital economy by comprehensively managing the full API lifecycle, from design and standardization to documenting, publishing, testing and managing APIs. These tools provide API developers, managers, and users an end-to-end platform for designing, prototyping, and managing APIs. Through the platform, users gain the agility needed to support changing business demands and opportunities, while having clear visibility into who is using APIs for better control, security and monetization of digital assets.


Management Portal:
APIs are managed, secured, and published using the Management Portal. The Management Portal is hosted on the Oracle Cloud, managed by Oracle, and users granted API Manager privileges have access.

API Gateways:
API Gateways are the runtime components that enforce all policies, but also help in collecting data for analytics. The gateways can be deployed anywhere – on premises, on Oracle Cloud or to any third party cloud providers.

Developer Portal:
After an API is published, Application Developers use the Developer Portal to discover, register, and consume APIs. The Developer Portal can be customized to run either on the Oracle Cloud or directly in the customer environment on premises.

Oracle Apiary:
In my article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about using Oracle Apiary and interacting with its Mock Server for the “HumanResourceService” API, I created earlier.

The Mock Server for the “HumanResourceService” API is listening at:


Within Oracle API Platform CS roles are used.

Roles determine which interfaces a user is authorized to access and the grants they are eligible to receive.

  • Administrator
    System Administrators responsible for managing the platform settings. Administrators possess the rights of all other roles and are eligible to receive grants for all objects in the system.
  • API Manager
    People responsible for managing the API lifecycle, which includes designing, implementing, and versioning APIs. Also responsible for managing grants and applications, providing API documentation, and monitoring API performance.
  • Application Developer
    API consumers granted self-service access rights to discover and register APIs, view API documentation, and manage applications using the Developer Portal.
  • Gateway Manager
    Operations team members responsible for deploying, registering, and managing gateways. May also manage API deployments to their gateways when issued the Deploy API grant by an API Manager.
  • Gateway Runtime
    This role indicates a service account used to communicate from the gateway to the portal. This role is used exclusively for gateway nodes to communicate with the management service; users assigned this role can’t sign into the Management Portal or the Developer Portal.
  • Service Manager
    People responsible for managing resources that define backend services. This includes managing service accounts and services.
  • Plan Manager
    People responsible for managing plans.

Within the Oracle VM VirtualBox APIPCS appliance the following users (all with password welcome1) are present and used by me in this article:

  • User api-manager-user – role APIManager
  • User api-gateway-user – role GatewayManager

Design-First approach

Design is critical as a first step for great APIs. Collaboration ensures that we are creating the correct design. In my previous article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about the Design-First approach and using Oracle Apiary. I designed a “HumanResourceService” API.

So with a design in place, an application developer could begin working on the front-end, while service developers work on the back-end implementation and others can work on the API implementation, all in parallel.

Create an API, via the Management Portal (api-manager-user)

Start the Oracle API Platform Cloud – Management Portal as user api-manager-user.

After a successful sign in, the “APIs” screen is visible.

Create a new API via a click on button “Create API”. Enter the following values:

Name: HumanResourceService
Version: 1
Description: Human Resource Service is an API to manage Human Resources.

Next, click on button “Create”.

After a click on the “HumanResourceService” API, the next screen appears (with tab “APIs” selected):

Here you can see on the left, that the tab “API Implementation” is selected.

First I will give you a short overview with screenshots of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “API Implementation” of the “HumanResourceService” API

Tab “Deployments” of the “HumanResourceService” API

Tab “Publication” of the “HumanResourceService” API

Tab “Grants” of the “HumanResourceService” API

API grants are issued per API.

The following tabs are visible and can be chosen:

  • Manage API
    Users issued this grant are allowed to modify the definition of and issue grants for this API.
  • View all details
    Users issued this grant are allowed to view all information about this API in the Management Portal.
  • Deploy API
    Users issued this grant are allowed to deploy or undeploy this API to a gateway for which they have deploy rights. This allows users to deploy this API without first receiving a request from an API Manager.
  • View public details
    Users issued this grant are allowed to view the publicly available details of this API on the Developer Portal.
  • Register
    Users issued this grant are allowed to register applications for this plan.
  • Request registration
    Users issued this grant are allowed to request to register applications for this plan.

Users and groups issued grants for a specific API have the privileges to perform the associated actions on that API. See for more information:

Tab “Registrations” of the “HumanResourceService” API

Tab “Analytics” of the “HumanResourceService” API

Tab “API Implementation” of the “HumanResourceService” API

After you create an API, you can apply policies to configure the Request and Response flows. Policies in the Request flow secure, throttle, route, manipulate, or log requests before they reach the backend service. Policies in the Response flow manipulate and log responses before they reach the requesting client.

Request flow, configuring the API Request URL

The API Request URL is the endpoint to which users or applications send requests for your API. You configure part of this URL. This endpoint resides on the gateway on which the API is deployed. The API will be deployed later.

The full address to which requests are sent consists of the protocol used, the gateway hostname, the API Request endpoint, and any private resource paths available for your service.

<protocol>://<hostname and port of the gateway node instance>/<API Request endpoint>/<private resource path of the API>

Anything beyond the API Request endpoint is passed to the backend service.
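As a sketch, the full request URL from the template above can be assembled from its parts like this (the gateway host is a placeholder; the endpoint and resource are the example values used in this article):

```shell
# Parts of the API Request URL
PROTOCOL=http
GATEWAY_HOST=MyGatewayIP          # placeholder for the gateway node host
API_ENDPOINT=HumanResourceService/1
RESOURCE=employees                # a private resource path of the service
REQUEST_URL="${PROTOCOL}://${GATEWAY_HOST}/${API_ENDPOINT}/${RESOURCE}"
echo "$REQUEST_URL"
```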

Hover over the “API Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

Your Policy Name: API Request
Comments: (left empty)
Configuration | Protocol: HTTP ://MyGatewayIP/
Configuration | API Endpoint URL: HumanResourceService/1

Next, click on button “Apply”.

In the pop-up, click on button “Save Changes”.

Request flow, configuring the Service Request URL

The Service Request is the URL at which your backend service receives requests.

When a request meets all policy conditions, the gateway routes the request to this URL and calls your service. Note that the Service Request URL can point to any of your service’s resources, not just its base URL. This way you can restrict users to access only a subset of your API’s resources.

Hover over the “Service Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

Configure Headers – Service | Enter a URL: <Enter the Apiary Mock Service URL>

For example:

Remove the “/employees” from the Mock Service URL, so the API can be designed to call multiple end-points such as “/departments”.
Use Gateway Node Proxy: unchecked
Service Account: None

Next, click on button “Apply”.

In the pop-up, click on button “Save Changes”.

Oftentimes, there are multiple teams participating in the development process. There may be front-end developers creating a new mobile app or chatbot, there can be a backend services and integration team and of course the API team.

If the backend service is not yet ready, you can still start creating the API. Perhaps you may want to begin with a basic implementation (for example an Apiary Mock Service URL) so your front-end developers are already pointing to the API, even before it is fully operational.

Response Flow

Click the Response tab to view a top-down visual representation of the response flow. The Service and API Response entries can’t be edited.
The Service Response happens first. The response from the backend service is always the first entry in the outbound flow. You can place additional policies in this flow. Policies are run in order, with the uppermost policy run first, followed by the next policy, and so on, until the response is sent back to the client.
The API Response entry is a visual representation of the point in the outbound flow when the response is returned to the requesting client.

Deploy an API to the Gateway, via the Management Portal (api-manager-user)

On the left, click on tab “Deployments”.

Next, click on button “Deploy API”.

In the pop-up “Deploy API” there are no gateways, or they are not visible for the current user. So in order to find out what the situation is about the gateways, we have to sign in, in the Oracle API Platform Cloud – Management Portal as a Gateway Manager. There we also can grant the privileges needed to deploy the API. How you do this is described later on in this article.

For now we continue as if the correct privileges were already in place.

So in the pop-up “Deploy API”, select the “Production Gateway” gateway and click on button “Deploy”.

For a short while a pop-up “Deployment request submitted” appears.

Next, click on tab “Requesting” where we can see the request (for an API deployment to a gateway), the user api-manager-user sent to the Gateway Manager. The “Deployment State” is REQUESTING. So now we have to wait for the approval of the Gateway Manager.

Sign in to the Oracle API Platform Cloud – Management Portal as user api-gateway-user

In the top right of the Oracle API Platform Cloud – Management Portal click on the api-manager-user and select ”Sign Out”. Next, Sign in as user api-gateway-user.

After a successful sign in, the “Gateways” screen is visible.

Because this user is only a Gateway Manager, only the tab “Gateways” is visible.

At the moment (in this demo environment) there is one gateway available, being the “Production Gateway”. After a click on the “Production Gateway” gateway, the next screen appears:

Here you can see on the left, that the tab “Settings” is selected.

First I will give you a short overview with screenshots of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

Tab “Settings” of the “Production Gateway” gateway

Have a look at the “Load Balancer URL”, which we will be using later on in this article.

Tab “Nodes” of the “Production Gateway” gateway

Tab “Deployments” of the “Production Gateway” gateway

Tab “Grants” of the “Production Gateway” gateway

Tab “Analytics” of the “Production Gateway” gateway

Tab “Grants” of the “Production Gateway” gateway

On the left, click on tab “Grants”.

Grants are issued per gateway.

The following tabs are visible and can be chosen:

  • Manage Gateway
    Users issued this grant are allowed to manage API deployments to this gateway and manage the gateway itself.

    The api-gateway-user (with role GatewayManager) is granted the “Manage Gateway” privilege.

  • View all details
    Users issued this grant are allowed to view all information about this gateway.
  • Deploy to Gateway
    Users issued this grant are allowed to deploy or undeploy APIs to this gateway.
  • Request Deployment to Gateway
    Users issued this grant are allowed to request API deployments to this gateway.
  • Node service account
    Gateway Runtime service accounts are issued this grant to allow them to download configuration and upload statistics.

Users issued grants for a specific gateway have the privileges to perform the associated actions on that gateway. See for more information:

Click on tab “Request Deployment to Gateway”.

Next, click on button “Add Grantee”.

Select “api-manager-user” and click on button “Add”.

So now, the user api-manager-user (with Role APIManager) is granted the “Request Deployment to Gateway” privilege.

In practice you would probably grant to a group instead of to a single user.

Be aware that you could also grant the “Deploy to Gateway” privilege, so that approval by the Gateway Manager (for deploying an API to a gateway) is no longer needed. This makes sense if it concerns a development environment, for example. Since the Oracle VM VirtualBox APIPCS appliance is using a “Production Gateway” gateway, in this article I chose the request-and-approve mechanism.

Approve a request for an API deployment to a gateway, via the Management Portal (api-gateway-user)

On the left, click on tab “Deployments” and then click on tab “Requesting”.

Hover over the “HumanResourceService” API, then click on button “Approve”.

In the pop-up, click on button “Yes”.

Then you can see that on the tab “Waiting”, the deployment is waiting.

The deployment enters a Waiting state and the logical gateway definition is updated. The endpoint is deployed the next time gateway node(s) poll the management server for the updated gateway definition.

So after a short while, you can see on the tab “Deployed”, that the deployment is done.

After a click on the top right icon “Expand”, more details are shown:

So now the “HumanResourceService” API is deployed on the “Production Gateway” gateway (Node 1). We can also see the active policies in the Request and Response flow of the API Implementation.

It is time to invoke the API.

Invoke method “GetAllEmployees” of the “HumanResourceService” API, via Postman

For invoking the “HumanResourceService” API I used Postman as a REST client tool.

In Postman, I created a collection named “HumanResourceServiceCollection”(in order to bundle several requests) and created a request named “GetAllEmployeesRequest”, providing method “GET” and request URL “”.

Remember the “API Request URL”, I configured partly in the “API Request” policy and the “Load Balancer URL” of the “Production Gateway” gateway? They make up the full address to which requests have to be sent.

After clicking on button Send, a response with “Status 200 OK” is shown:

Because I have not applied any extra policies, the request is passed to the backend service without further validation. This is simply the “proxy pattern”.

Later on in this article, I will add some policies and send additional requests to validate each one of them.

Tab “Analytics” of the “Production Gateway” gateway

Go back to the Management Portal (api-gateway-user); in the tab “Analytics” the request I sent is visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the request is also visible.


Policies in API Platform CS serve a number of purposes. You can apply any number of policies to an API definition to secure, throttle, limit traffic, route, or log requests sent to your API. Depending on the policies applied, requests can be rejected if they do not meet criteria you specify when configuring each policy. Policies are run in the order they appear on the Request and Response tabs. A policy can be placed only in certain locations in the execution flow.

The available policies are:

Security:

  • OAuth 2.0 | 1.0
  • Key Validation | 1.0
  • Basic Auth | 1.0
  • Service Level Auth | 1.0 Deprecated
  • IP Filter Validation | 1.0
  • CORS | 1.0

Traffic Management:

  • API Throttling – Delay | 1.0
  • Application Rate Limiting | 1.0
  • API Rate Limiting | 1.0

Interface Management:

  • Interface Filtering | 1.0
  • Redaction | 1.0
  • Header Validation | 1.0
  • Method Mapping | 1.0

Routing:

  • Header Based Routing | 1.0
  • Application Based Routing | 1.0
  • Gateway Based Routing | 1.0
  • Resource Based Routing | 1.0

Other:

  • Service Callout | 2.0
  • Service Callout | 1.0
  • Logging | 1.0
  • Groovy Script | 1.0

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management).

Add a Key Validation Policy, via the Management Portal (api-manager-user)

Use a key validation policy when you want to reject requests from unregistered (anonymous) applications.

Keys are distributed to clients when they register to use an API on the Developer Portal. At runtime, if the key is not present in the given header or query parameter, or if the application is not registered, the request is rejected; the client receives a 400 Bad Request error if no key validation header or query parameter is passed, or a 403 Forbidden error if an invalid key is passed.
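To make the two rejection outcomes concrete, here is a small shell simulation of the decision logic described above; the registered keys are made-up values and this is only an illustration, not the gateway's actual implementation:

```shell
# Simulation of the key validation outcomes described above (illustrative only):
# no key passed -> 400 Bad Request, unregistered key -> 403 Forbidden,
# registered key -> request passes (200).
REGISTERED_KEYS="abc123 def456"   # stand-in for keys issued on the Developer Portal

validate_key() {
  key="$1"
  if [ -z "$key" ]; then
    echo 400   # no key validation header or query parameter passed
  elif echo "$REGISTERED_KEYS" | grep -qw "$key"; then
    echo 200   # key belongs to a registered application
  else
    echo 403   # a key was passed, but it is not a registered one
  fi
}

validate_key ""         # -> 400
validate_key "abc123"   # -> 200
validate_key "wrong"    # -> 403
```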

This policy requires that you create and register an application, which is described in my next article.

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-manager-user.

Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Security”. Hover over the “Key Validation” policy and then, on the right, click the icon “Apply”. Enter the following values:

  • Your Policy Name: Key Validation
  • Comments: (left empty)
  • Place after the following policy: API Request

Then, click on icon “Next”. Enter the following values:

  • Key Delivery Approach: Header
  • Key Header: application-key

Click on button “Apply as Draft”.

Next, click on button “Save Changes”.

I applied this as a draft policy, represented as a dashed line around the policy. Draft policies let you “think through” what you want before you have the complete implementation details. This enables you to complete the bigger picture in one sitting and to leave reminders of what is missing to complete the API later.
When you deploy an API, draft policies are not deployed.

Add an Interface Filtering Policy, via the Management Portal (api-manager-user)

Use an interface filtering policy to filter requests based on the resources and methods specified in the request.

Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Interface Management”. Hover over the “Interface Filtering” policy and then, on the right, click the icon “Apply”. Enter the following values:

  • Your Policy Name: Interface Filtering
  • Comments: (left empty)
  • Place after the following policy: Key Validation

Then, click on icon “Next”.

In the table below I summarized the requests that I created in the Oracle Apiary Mock Server for the “HumanResourceService” API:

  • GetAllEmployeesRequest: GET
  • CreateEmployeeRequest: POST
  • GetEmployeeRequest: GET
  • UpdateEmployeeRequest: PUT
  • GetDepartmentRequest: GET
  • GetDepartmentEmployeeRequest: GET

I want to use an interface filtering policy to filter requests. As an example, I want to pass (to the backend service) only GET requests whose resource starts with /employees followed by an identifier, or with /departments followed by an identifier, then /employees and an identifier.

Select “Pass” from the list.

At “Filtering Conditions”, “Condition 1” enter the following values:

  • Resources: /employees/* ; /departments/*/employees/*
  • Methods: GET

Click on button “Apply”.

Next, click on button “Save Changes”.

I applied this policy as an active policy, represented as a solid line around the policy.
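Shell glob patterns behave much like the resource wildcards configured above, which makes for an easy illustration. The sketch below is my own simulation (not gateway code) of which request shapes are passed and which are rejected:

```shell
# Illustrative simulation of the filtering conditions configured above:
# only GET requests for /employees/* or /departments/*/employees/* pass.
filter() {
  method="$1"; resource="$2"
  if [ "$method" = "GET" ]; then
    case "$resource" in
      /employees/*|/departments/*/employees/*) echo "PASS"; return ;;
    esac
  fi
  echo "REJECT"   # the gateway answers such requests with 405 Method Not Allowed
}

filter GET  /employees/100                  # -> PASS
filter GET  /departments/30/employees/100   # -> PASS
filter POST /employees                      # -> REJECT
filter GET  /departments/30                 # -> REJECT
```

The two rejected shapes above match the 405 responses shown for the Postman requests later in this article.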

Redeploy the API, via the Management Portal (api-manager-user)

Navigate to tab “Deployments” of the “HumanResourceService” API, and then hover over the “Production Gateway” gateway and then, on the right, hover over the icon “Redeploy”.

Next, click on icon “Latest Iteration”.

In the pop-up, click on button “Yes”. For a short while a pop-up “Redeploy request submitted” appears.

Then repeat the steps described before in this article, to approve the request, by switching to a Gateway Manager.

Click on “Latest Iteration” to deploy the most recently saved iteration of the API.
Click on “Current Iteration” to redeploy the currently deployed iteration of the API.

After that, it is time to try out the effect of adding the “Interface Filtering” policy.

Validating the “Interface Filtering” policy, via Postman

In Postman I created each request mentioned earlier (in the table) within the collection named “HumanResourceServiceCollection”.

Then again I invoked each request, to validate it against the “Interface Filtering” policy.

Invoke method “GetAllEmployees” of the “HumanResourceService” API

From Postman I invoked the request named “GetAllEmployeesRequest” (with method “GET” and URL “”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “CreateEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “CreateEmployeeRequest” (with method “POST” and URL “”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetEmployeeRequest” (with method “GET” and URL “”) and a response with “Status 200 OK” is shown:

Invoke method “UpdateEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “UpdateEmployeeRequest” (with method “PUT” and URL “”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetDepartment” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentRequest” (with method “GET” and URL “”) and a response with “Status 405 Method Not Allowed” is shown:

Invoke method “GetDepartmentEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetDepartmentEmployeeRequest” (with method “GET” and URL “”) and a response with “Status 200 OK” is shown:

Tab “Analytics” of the “Production Gateway” gateway

In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-gateway-user and click on the “Production Gateway” gateway and navigate to the tab “Analytics”.

In this tab the requests I sent are visible at “Total Requests”.

If we look, for example, at “Requests By Resource”, the requests are also visible.

Next, click on icon “Error and Rejections (4 Total)” and if we look, for example, at “Rejection Distribution”, we can see that there were 4 request rejections, because of policy “Interface Filtering”.

So the “Interface Filtering” policy is working correctly.


As a follow up from my previous articles about Oracle API Platform Cloud Service, in this article the focus is on using the Management Portal and Creating the “HumanResourceService” API (including some policies).

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management). The latter policy I deployed to a gateway and validated that it worked correctly, using requests which I created in Postman.

While using the Management Portal in this article, I focused on the roles “API Manager” and “Gateway Manager”. For example, the user api-gateway-user had to approve a request from the api-manager-user to deploy an API to a gateway.

In a next article the focus will be on validating the “Key Validation” policy and using the “Development Portal”.

The post Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies) appeared first on AMIS Oracle and Java Blog.

A DBA’s first steps in Jenkins

Thu, 2018-04-12 02:56

My customer wanted an automated way to refresh an application database to a known state, to be performed by non-technical personnel. As a DBA I know a lot of scripting and can build some small web interfaces, but why bother when ready-made tools like Jenkins are available? Jenkins is mostly a CI/CD developer tool that, for a classical DBA, is a bit of magic. I decided to try this tool to script the refreshing of my application.



Getting started

First, fetch the Jenkins distribution (jenkins.war); I used the latest version. Place the jenkins.war file in a desired location and you are almost set to go. Set the environment variable JENKINS_HOME to a sane value, or else your Jenkins settings, data and work directory will end up in $HOME/.jenkins/

Start Jenkins using the following command line:

java -jar jenkins.war --httpPort=8024

You may want to make a start script to automate this step. Please note the --httpPort argument: choose an available port number (and make sure the firewall is opened for this port).
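Such a start script can be as small as the sketch below. The JENKINS_HOME location is an assumption for illustration, and the script is shown in dry-run form: it only prints the command, so replace the echo with exec to actually start Jenkins:

```shell
#!/bin/sh
# Minimal Jenkins start script sketch; the JENKINS_HOME value is an assumption.
JENKINS_HOME=${JENKINS_HOME:-/u01/app/jenkins}
export JENKINS_HOME
CMD="java -jar jenkins.war --httpPort=8024"
echo "$CMD"   # dry run; use: exec $CMD
```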

When starting Jenkins for the first time it creates a password that it will show in the standard output. When you open the webinterface for Jenkins for the first time you need this password. After logging in, install the recommended plugins. In this set there should be at least the Pipeline plugin. The next step will create your admin user account.

Creating a Pipeline build job.

Navigate to “New Item” to start creating your first pipeline. Type a descriptive name and choose “Pipeline” as the item type.


After creating the job, you can start building the pipeline: In my case I needed about four steps: stopping the Weblogic servers,
clearing the schemas, importing the schemas and fixing stuff, and finally starting Weblogic again.

The Pipeline scripting language is quite extensive; I used only the bare minimum of the possibilities, but at least it gets my job done. The actual code can be entered in the configuration of the job, in the pipeline script field. A more advanced option could be to retrieve your Pipeline code (plus additional scripts) from an SCM like Git or Bitbucket.



The code below is my actual code to allow the refresh of the application:

pipeline {
    agent any
    stages {
        stage('Stop Weblogic') {
            steps {
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/ /home/oracle/scripts/'
            }
        }
        stage('Drop OWNER') {
            steps {
                echo "Dropping the Owner"
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/drop_tables.sql"'
            }
        }
        stage('Import OWNER') {
            steps {
                echo 'Importing OWNER'
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; impdp /@theSID directory=thedirforyourdump \
                            dumpfile=Youknowwhichfiletoimport.dmp \
                            logfile=import-`date +%F-%h%m`.log \
                            schemas=ONLY_OWNER,THE_OTHER_OWNER,SOME_OTHER_REQUIRED_SCHEMA"'
                echo 'Fixing invalid objects'
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus / as sysdba @?/rdbms/admin/utlrp"'
                echo 'Gathering statistics in the background'
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/refresh_stats.sql"'
            }
        }
        stage('Start Weblogic') {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/ /home/oracle/scripts/'
            }
        }
    }
}

In this script you can see the four global steps, though some steps are more involved. In this situation I decided not to completely drop the schemas associated with the application, because the dump file could come from a different environment with different passwords. Additionally, I import only the known schemas: if the supplied dump file accidentally contains additional schemas, the log would fill with errors because the user accounts for those schemas are not created during the import.

When the job is saved, you can try a build; this will run your job, and you can monitor the console output to see how it is going.

SQL*Plus with wallet authentication

The observant types among you may have noticed that I used a wallet for authentication with SQL*Plus and impdp. As this tool will be used by people who should not get DBA passwords, putting a password on the command line is not recommended: all the commands above and their output would be logged in plain text. So I decided to make use of a wallet for the account information. Most steps are well documented, but I found that making the wallet auto-login capable (so you do not need to type a wallet password all the time) was documented using the GUI tool, but not the command-line tool. Luckily there are ways of doing that on the command line.

mkdir -p $ORACLE_HOME/network/admin/wallet
mkstore -wrl $ORACLE_HOME/network/admin/wallet/ -create
mkstore -wrl $ORACLE_HOME/network/admin/wallet -createCredential theSID_system system 'YourSuperSekritPassword'
orapki wallet create -wallet $ORACLE_HOME/network/admin/wallet -auto_login

sqlnet.ora needs to contain some information so the wallet can be found:

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = <<ORACLE_HOME>>/network/admin/wallet)
    )
  )
SQLNET.WALLET_OVERRIDE = TRUE

Also make sure a tnsnames.ora entry is added for your wallet credential name (above: theSID_system). Now sqlplus /@theSID_system should connect you to the database as the configured user.
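For completeness, a tnsnames.ora entry for the credential alias could look like the fragment below; the host, port and service name are placeholder assumptions and must match your environment:

```
theSID_system =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = theSID))
  )
```

With this entry in place, SQL*Plus resolves the alias, finds the matching credential in the wallet and logs in without prompting for a password.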

Asking Questions

The first job was quite static: always the same dump, or I need to edit the pipeline code to change the named dumpfile… not as flexible as I would like… Can Jenkins help me here? Luckily, YES:

    def dumpfile
    def dbhost = 'theHost'
    def dumpdir = '/u01/oracle/admin/THESID/dpdump'

    pipeline {
        agent any
        stages {
            stage('Choose Dumpfile') {
                steps {
                    script {
                        def file_collection
                        file_collection = sh script: "ssh $dbhost 'cd $dumpdir; ls *X*.dmp *x*.dmp 2>/dev/null'", returnStdout: true
                        dumpfile = input message: 'Choose the right dump', ok: 'This One!', parameters: [choice(name: 'dump file', choices: "${file_collection}", description: '')]
                    }
                }
            }
            stage('Stop Weblogic') {
                steps {
                    echo 'Stopping Weblogic'
                    sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/ /home/oracle/scripts/'
                }
            }
            stage('Drop OWNER') {
                steps {
                    echo "Dropping Owner"
                    // double quotes are needed here so Groovy interpolates $dbhost
                    sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                                source oraenv; sqlplus /@theSID @ scripts/drop_tables.sql'"
                }
            }
            stage('Import OWNER') {
                steps {
                    echo 'Import OWNER'
                    sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                                source oraenv; impdp /@theSID directory=dump \
                                dumpfile=$dumpfile \
                                logfile=import-`date +%F@%H%M%S`.log \
                                schemas=MYFAVOURITE_SCHEMA,SECONDOWNER'"
                    sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                                source oraenv; sqlplus / as sysdba @?/rdbms/admin/utlrp'"
                    sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                                source oraenv; sqlplus /@theSID @ scripts/refresh_stats.sql'"
                }
            }
            stage('Start Weblogic') {
                steps {
                    echo 'Starting Weblogic'
                    sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/ /home/oracle/scripts/'
                }
            }
        }
    }

The first stage actually looks at the place where all the dump files are to be found and does an ls on it. This listing is then stored in a variable that will be split into choices. The running job will wait for input, so no harm is done until the choice is made.
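The splitting into choices can be mimicked in plain shell; the dump file names below are made up, and in the real pipeline the newline-separated string is handed to the choice parameter of the input step:

```shell
# Stand-in for the captured output of: ssh $dbhost 'cd $dumpdir; ls *X*.dmp *x*.dmp'
file_collection="EXPORT_X_20180401.dmp
export_x_20180405.dmp"

# Jenkins splits this string on newlines into the presented choices;
# taking the first line mimics the user picking the first option.
dumpfile=$(printf '%s\n' "$file_collection" | head -n 1)
echo "$dumpfile"   # -> EXPORT_X_20180401.dmp
```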

Starting a build like this will pause; you can see that when looking at the latest running build in the build queue.

When clicking the link the choice can be made (or the build can be aborted)







The post A DBA’s first steps in Jenkins appeared first on AMIS Oracle and Java Blog.

First steps with Docker Checkpoint – to create and restore snapshots of running containers

Sun, 2018-04-08 01:31

Docker containers can be stopped and started again. Changes made to the file system in a running container will survive this deliberate stop and start cycle; data in memory and running processes obviously do not. A container that crashes cannot simply be restarted, and if it can be, its file system will be in an undetermined state. When you start a container after it was stopped, it goes through its full startup routine. If heavy-duty processes need to be started – such as a database server process – this startup time can be substantial, as in many seconds or dozens of seconds.

Linux has a mechanism called CRIU, or Checkpoint/Restore In Userspace. Using this tool, you can freeze a running application (or part of it) and checkpoint it as a collection of files on disk. You can then use the files to restore the application and run it exactly as it was during the time of the freeze. See for details. Docker CE has (experimental) support for CRIU. This means that using straightforward docker commands we can take a snapshot of a running container (docker checkpoint create <container name> <checkpointname>). At a later moment, we can start this snapshot as the same container (docker start --checkpoint <checkpointname> <container name>) or as a different container.

The container that is started from a checkpoint is in the same state – memory and processes – as the container was when the checkpoint was created. Additionally, the startup time of the container from the snapshot is very short (subsecond); for containers with fairly long startup times – this rapid startup can be a huge boon.

In this article, I will tell about my initial steps with CRIU and Docker. I got it to work. I did run into an issue with recent versions of Docker CE (17.12 and 18.x), so I fell back to version 17.04 of Docker CE. I also ran into an issue with an older version of CRIU, so I built the currently latest version of CRIU (3.8.1) instead of the one shipped in the Ubuntu Xenial 64 distribution (2.6).

I will demonstrate how I start a container that clones a GitHub repository and starts a simple REST API as a Node application; this takes 10 or more seconds. This application counts the number of GET requests it handles (by keeping some memory state). After handling a number of requests, I create a checkpoint for this container. Next, I make a few more requests, all the while watching the counter increase. Then I stop the container and start a fresh container from the checkpoint. The container is running lightning fast – within 700 ms – so it clearly leverages the container state captured at the time of the snapshot. It continues counting requests at the point where the snapshot was created, apparently inheriting its memory state. Just as expected and desired.

Note: a checkpoint does not capture changes in the file system made in a container. Only the memory state is part of the snapshot.

Note 2: Kubernetes does not yet provide support for checkpoints. That means that a pod cannot start a container from a checkpoint.

In a future article I will describe a use case for these snapshots – in automated test scenarios and complex data sets.

The steps I went through (on my Windows 10 laptop using Vagrant 2.0.3 and VirtualBox 5.2.8):

  • use Vagrant to a create an Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x
  • downgrade Docker from 18.x to 17.04
  • configure Docker for experimental options
  • install CRIU package
  • try out simple scenario with Docker checkpoint
  • build CRIU latest version
  • try out somewhat more complex scenario with Docker checkpoint (that failed with the older CRIU version)


Create Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x

My Windows 10 laptop already has Vagrant 2.0.3 and Virtual Box 5.2.8. Using the following vagrantfile, I create the VM that is my Docker host for this experiment:


After creating (and starting) the VM with

vagrant up

I connect into the VM with

vagrant ssh

ending up at the command prompt, ready for action.

And just to make sure we are pretty much up to date, I run

sudo apt-get upgrade


Downgrade Docker CE to Release 17.04

At the time of writing there is an issue with recent Docker versions (at least 17.09 and higher), and for that reason I downgrade to version 17.04 (as described here: ).

First remove the version of Docker installed by the vagrant provider:

sudo apt-get autoremove -y docker-ce \
&& sudo apt-get purge docker-ce -y \
&& sudo rm -rf /etc/docker/ \
&& sudo rm -f /etc/systemd/system/ \
&& sudo rm -rf /var/lib/docker \
&&  sudo systemctl daemon-reload

then install the desired version:

sudo apt-cache policy docker-ce

sudo apt-get install -y docker-ce=17.04.0~ce-0~ubuntu-xenial


    Configure Docker for experimental options

    Support for checkpoints leveraging CRIU is an experimental feature in Docker. In order to make use of it, the experimental options have to be enabled. This is done as follows:


    sudo nano /etc/docker/daemon.json


    "experimental": true

    Press CTRL+X, select Y and press Enter to save the new file.

    restart the docker service:

    sudo service docker restart

    Check with

    docker version

    if experimental is indeed enabled.


    Install CRIU package

    The simple approach with CRIU – how it should work – is by simply installing the CRIU package:

    sudo apt-get install criu

    (see for example the CRIU installation instructions).

    This installation results for me in version 2.6 of the CRIU package. For some actions that proves sufficient, and for others it turns out to be not enough.



    Try out simple scenario with Docker checkpoint on CRIU

    At this point we have Docker 17.04, Ubuntu 16.04 with CRIU 2.6. And that combination can give us a first feel for what the Docker Checkpoint mechanism entails.

    Run a simple container that writes a counter value to the console once every second (and then increases the counter)

    docker run --security-opt=seccomp:unconfined --name cr -d busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'

    check on the values:

    docker logs cr

    create a checkpoint for the container:

    docker checkpoint create  --leave-running=true cr checkpoint0


    leave the container running for a while and check the logs again

    docker logs cr


    now stop the container:

    docker stop cr

    and restart/recreate the container from the checkpoint:

    docker start --checkpoint checkpoint0 cr

    Check the logs:

    docker logs cr

    You will find that the log is resumed at the value (19) where the checkpoint was created:
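The resumed-at-19 behaviour can be pictured with a plain-shell analogy; no Docker is involved here, a temporary file simply plays the role of the checkpoint:

```shell
# Analogy for the checkpoint: run the counter loop, freeze its state to a
# file ("docker checkpoint create"), then resume from it
# ("docker start --checkpoint") instead of starting over at 0.
ckpt=$(mktemp)
i=0
while [ "$i" -lt 19 ]; do i=$(expr "$i" + 1); done
echo "$i" > "$ckpt"   # freeze the counter state at 19
i=0                   # the stopped container loses its in-memory state
i=$(cat "$ckpt")      # restoring from the checkpoint brings the state back
echo "resumed at $i"  # -> resumed at 19
```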



    Build CRIU latest version

    When I tried a more complex scenario (see next section) I ran into this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker host. Here are the steps I went through to accomplish that – following these instructions:

    First, remove the currently installed CRIU package:

    sudo apt-get autoremove -y criu \
    && sudo apt-get purge criu -y

    Then, prepare the build environment:

    sudo apt-get install build-essential \
    && sudo apt-get install gcc   \
    && sudo apt-get install libprotobuf-dev libprotobuf-c0-dev protobuf-c-compiler protobuf-compiler python-protobuf \
    && sudo apt-get install pkg-config python-ipaddr iproute2 libcap-dev  libnl-3-dev libnet-dev --no-install-recommends

    Next, clone the GitHub repository for CRIU:

    git clone

    Navigate into to the criu directory that contains the code base

    cd criu

    and build the criu package:

    make
    When make is done, I can run CRIU :

    sudo ./criu/criu check

    to see if the installation is successful. The final message printed should be: Looks Good (despite perhaps one or more warnings).


    sudo ./criu/criu -V

    to learn about the version of CRIU that is currently installed.

    Note: the CRIU instructions describe the following steps to install criu system wide. This does not seem to be needed in order for Docker to leverage CRIU from the docker checkpoint commands.

    sudo apt-get install asciidoc  xmlto
    sudo make install
    criu check

    Now we are ready to take on the more complex scenario that failed before with an issue in the older CRIU version.

    A More complex scenario with Docker Checkpoint

    This scenario failed with the older CRIU version – probably because of this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker Host.

      In this case, I run a container based on a Docker Container image for running any Node application that is downloaded from a GitHub Repository. The Node application that the container will download and run handles simple HTTP GET requests: it counts requests and returns the value of the counter as the response to the request. This container image and this application were introduced in an earlier article:

      Here you see the command to run the container – to be called reqctr2:

      docker run --name reqctr2 -e "GIT_URL=" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1"  -e "APP_STARTUP=requestCounter.js"   lucasjellema/node-app-runner


      It takes about 15 seconds for the application to start up and handle requests.

      Once the container is running, requests can be sent from outside the VM – from a browser running on my laptop for example – to be handled  by the container, at

      After a number of requests, the counter is at 21:


      At this point, I create a checkpoint for the container:

      docker checkpoint create  --leave-running=true reqctr2 checkpoint1


      I now make a few additional requests in the browser, bringing the counter to a higher value:

      At this point, I stop the container – and subsequently start it again from the checkpoint:

      docker stop reqctr2
      docker start --checkpoint checkpoint1 reqctr2


      It takes less than a second for the container to continue running.

      When I make a new request, I do not get 1 as a value (as would be the result from a fresh container) nor is it 43 (the result I would get if the previous container would still be running). Instead, I get

      This is the next value, continuing from the state of the container that was captured in the snapshot. Note: because I make the GET request from the browser and the browser also tries to retrieve the favicon, the counter is increased by two every time I press refresh in the browser.

      Note: I can get a list of all checkpoints that have been created for a container. Clearly, I should put some more effort in a naming convention for those checkpoints:

      docker checkpoint ls reqctr2


      The flow I went through in this scenario can be visualized like this:


      The starting point: Windows laptop with Vagrant and Virtual Box. A VM has been created by Vagrant with Docker inside. The correct version of Docker and of the CRIU package have been set up.

      Then these steps are run through:

      1. Start Docker container based on an image with Node JS runtime
      2. Clone GitHub Repository containing a Node JS application
      3. Run the Node JS application – ready for HTTP Requests
      4. Handle HTTP Requests from a browser on the Windows Host machine
      5. Create a Docker Checkpoint for the container – a snapshot of the container state
      6. The checkpoint is saved on the Docker Host – ready for later use
      7. Start a container from the checkpoint. This container starts instantaneously, no GitHub clone and application startup are required; it resumes from the state at the time of creating the checkpoint
      8. The container handles HTTP requests – just like its checkpointed predecessor



      Sources are in this GitHub repo:

      Article on CRIU:

      Also: on CRIU and Docker:

      Docs on Checkpoint and Restore in Docker:


      Home of CRIU:   and page on Docker support:; install CRIU package on Ubuntu:

      Install and Build CRIU Sources:


      Docs on Vagrant’s Docker provisioning:

      Article on downgrading Docker :

      Configure Docker for experimental options:

      Issue with Docker and Checkpoints (at least in 17.09-18.03):

      The post First steps with Docker Checkpoint – to create and restore snapshots of running containers appeared first on AMIS Oracle and Java Blog.

      Regenerate Oracle VM Manager repository database

      Fri, 2018-04-06 02:01

      Some quick notes to regenerate a corrupted Oracle VM manager repository database.

      How did we discover the corruption?
      The MySQL repository database was increasing in size; the file “OVM_STATISTIC.ibd” was 62G. We also found the following error messages in the “AdminServer.log” logfile:

      ####<2018-02-13T07:52:17.339+0100> <Error> <> <ovmm003.gemeente.local> <AdminServer> <Scheduled Tasks-12> <<anonymous>> <> <e000c2cc-e7fe-4225-949d-25d2cdf0b472-00000004> <1518504737339> <BEA-000000> <Archive task exception: No such object (level 1), cluster is null: <9208>

      Regenerate steps
      – Stop the OVM services

      – Delete the Oracle VM Manager repository database

      "/u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ --deletedb --dbuser=ovs --dbpass=<PASSWORD> --dbhost=localhost --dbport=49500 --dbsid=ovs"


      – Generate replacement certificate

      – Start the OVM services and generate new certificates

      – Restart OVM services

      – Repopulate the database by discovering the Oracle VM Servers

      – Restore simple names
      Copy the restoreSimpleName script to /tmp, see Oracle Support note: 2129616.1

      [OVM] Issues with huge OVM_STATISTIC.ibd used as OVM_STATISTIC Table. (Doc ID 2216441.1)
      Oracle VM: How To Regenerate The OVM 3.3.x/3.4.x DB (Doc ID 2038168.1)
      Restore OVM Manager “Simple Names” After a Rebuild/Reinstall (Doc ID 2129616.1)

      The post Regenerate Oracle VM Manager repository database appeared first on AMIS Oracle and Java Blog.

      First steps with REST services on ADF Business Components

      Sat, 2018-03-31 10:20

      Recently we had a challenge at a customer for which ADF REST resources on Business Components were the perfect solution.

      Our application is built in Oracle JET and of course we wanted nice REST services to communicate with. Because our data is stored in an Oracle database, we needed an implementation to easily access the data from JET. We decided on using ADF and Business Components to achieve this. There are alternative solutions available, but because our application runs as a portal in WebCenter Portal, ADF was already in our technology stack. I would like to share some of my first experiences with this ADF feature.

      In this introduction we will create a simple application, the minimal required set of Business Components, and a simple REST service. There are no prerequisites for using the REST functionality in ADF: when you create a custom application you can choose to add the feature for REST Services, but it is not necessary. Start by making a simple Entity Object (EO) and View Object (VO):


      Before you can create any REST services, you need to define your first release version. The versions of REST resources are managed in the adf-config.xml. Go to this file, open the Release Versions tab and create version 1. The internal name is automatically configured based on your input:


      Your application is now ready for your first service. Go to the Web Service tab of the Application Module and then the REST tab. Click the green plus icon to add a resource. Your latest version will automatically be selected. Choose an appropriate name and press OK.


      ADF will create a config file for your resource (based on the chosen ViewObject), a resourceRegistry that will manage all resources in your application and a new RESTWebService project that you can use to start the services. The config file automatically opens and you can now further configure your resource.


      In the wizard Create Business Components from Tables, there is a REST Resources step in which you can immediately define some resources on View Objects. Using this option always gives me an addPageDefinitionUsage error, even by creating the simplest service:


      After ignoring this error, several things go wrong (what a surprise). The REST resource is created in a separate folder (not underneath the Application Module), it is not listed as a REST resource in the Application Module, and finally it doesn’t work. All in all not ideal. I haven’t been able to figure out what happens, but I would recommend avoiding this option (at least in this version of JDeveloper).

      There are two important choices to make before starting your service. You have to decide which REST actions will be allowed, and what attributes will be exposed.

      Setting the actions is simple. On the first screen of your config file there are several checkboxes for the actions Create, Delete and Update. By default they are all allowed on your service. Make sure to uncheck any action that you don’t want to allow on your service; this makes for better security.


      Limiting the exposed attributes can be done in two ways. You can hide attributes on the ViewObject for all REST services on that VO. This is a secure and convenient way if you know an attribute should never be open to a user.


      Another way of configuring attributes for your REST services is creating REST shapes. This is a powerful feature that can be accessed from the ViewObject screen. You can make shapes independent of specific resources and apply them whenever you want. To create a shape, go to the ViewObject and to the tab Service Shaping. Here you can add a shape with the green plus-icon. Keep in mind that the name you choose for your shape will be a suffix to the name of your ViewObject. After creating the shape, you can use the shuttle to remove attributes.


      The newly created shape will have its own configuration file in a different location but you can only change it in the ViewObject configuration.


      After the shape is created, it can now be added to your REST service. To do this, use the Attributes tab in your Resource file, select the shape and you see the attribute shuttle is updated automatically.


      You are now ready to start your service. Right-click on the RESTWebService project and run. If you have done everything right, JDeveloper will show you the url where your services are running. Now you can REST easily.
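      Once the RESTWebService project is running, the resource can be exercised with any HTTP client. A sketch of what such a call might look like — the port, context root, credentials and resource name below are illustrative, not taken from this post; ADF BC REST resources return a JSON payload with an items array plus paging fields:

```
$ curl -u user:password \
    "http://localhost:7101/MyApp/rest/1/Employees?limit=2"

{
  "items" : [ { "EmployeeId" : 101, ... }, { "EmployeeId" : 102, ... } ],
  "count" : 2,
  "hasMore" : true,
  "links" : [ ... ]
}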

      The post First steps with REST services on ADF Business Components appeared first on AMIS Oracle and Java Blog.

      ORDS: Installation and Configuration

      Fri, 2018-03-30 09:57

      In my job as system administrator/DBA/integrator I was challenged to implement smoketesting using REST calls. Implementing REST in combination with WebLogic is pretty easy, but then we wanted to extend smoketesting to the database. For example, we wanted to know whether the database version and patch level were at the required level throughout the complete DTAP environment. Another example is checking the existence of required database services. As it turns out, Oracle has a feature called ORDS – Oracle REST Data Services – to accomplish this.

      ORDS can be installed in two different scenarios: in standalone mode on the database server, or in combination with an application server such as WebLogic Server, GlassFish Server, or Tomcat.

      This article will give a short introduction to ORDS. It then shows you how to install ORDS feasible for a production environment using WebLogic Server 12c and an Oracle 12c database as we have done for our smoketesting application.

      We chose WebLogic Server to deploy the ORDS application because we already used WebLogic’s REST feature for smoketesting the application and WebLogic resources, and for high-availability reasons, because we use an Oracle RAC database. Running in standalone mode would also have led to additional security issues around port configuration.


      REST: Representational State Transfer. It provides interoperability on the Internet between computer systems.

      ORDS: Oracle REST Data Services. Oracle’s implementation of RESTful services against the database.

      RESTful service: an http web service that follows the REST architecture principles. Access to and/or manipulation of web resources is done using a uniform and predefined set of stateless operators.
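      As an illustration of that uniform, predefined verb set, the operations against an ORDS-style base path typically look like this (the hr schema alias and employees resource are hypothetical):

```
GET    /ords/hr/employees/      -- list the collection
GET    /ords/hr/employees/42    -- fetch one item
POST   /ords/hr/employees/      -- create an item
PUT    /ords/hr/employees/42    -- replace an item
DELETE /ords/hr/employees/42    -- delete an item
```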

      ORDS Overview

      ORDS makes it easy to develop a REST interface/service for relational data. This relational data can be stored in either an Oracle database, an Oracle 12c JSON Document Store, or an Oracle NoSQL database.

      A mid-tier Java application called ORDS maps HTTP(S) requests (GET, PUT, POST, DELETE, …) to database transactions and returns results in JSON format.

      ORDS Request Response Flow

      Installation Process

      The overall process of installing and configuring ORDS is very simple.

      1. Download the ORDS software
      2. Install the ORDS software
      3. Make some setup configuration changes
      4. Run the ORDS setup
      5. Make a mapping between the URL and the ORDS application
      6. Deploy the ORDS Java application

      Download the ORDS software

      Downloading the ORDS software can be done from the Oracle Technology Network, which is where I downloaded the version I used:

      Install the ORDS software

      The ORDS software is installed on the WebLogic server running the Administration console. Create an ORDS home directory and unzip the software.

      Here are the steps on Linux

      $ mkdir -p /u01/app/oracle/product/ords
      $ cp -p /u01/app/oracle/product/ords
      $ cd /u01/app/oracle/product/ords
      $ unzip

      Make some setup configuration changes: the ords_params.properties file

      Under the ORDS home directory a couple of subdirectories are created. One of them is called params. This directory holds a file called ords_params.properties, which contains the default parameters used during a silent installation. If any parameters aren’t specified in this file, ORDS interactively asks you for their values.

      In this article I go for a silent installation. Here are the default parameters and the values I set:


      (Table: parameter, default value, configured value — the table contents did not survive.)
      As you see, I refer to a tablespace ORDS for the installation of the metadata objects. Don’t forget to create this tablespace before continuing.


      The parameters sys.user and sys.password are removed from the file after running the setup (see later on in this article)


      The password for parameter user.public.password is obscured after running the setup (see later on in this article)


      As you can see there are many parameters that refer to APEX. APEX is a great tool for rapidly developing very sophisticated applications nowadays. Although you can run ORDS together with APEX, you don’t have to. ORDS runs perfectly without an APEX installation.
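      As an illustration, a minimal ords_params.properties for a silent, APEX-less installation might look like the following. The parameter names follow the ORDS documentation; the hostname, service name and passwords are placeholders, and the tablespace values match the ORDS tablespace created above:

```
db.hostname=dbserver01.localdomain
db.port=1521
db.servicename=ords_svc
rest.services.apex.enabled=false
rest.services.ords.enabled=true
schema.tablespace.default=ORDS
schema.tablespace.temp=TEMP
user.tablespace.default=ORDS
user.tablespace.temp=TEMP
user.public.password=<password>
sys.user=SYS
sys.password=<password>
```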

      Configuration Directory

      I create an extra directory called config, directly under the ORDS home directory, to hold all configuration data used during setup.

      $ mkdir config
      $ java -jar ords.war configdir /u01/app/oracle/product/ords/config
      $ # Check what value of configdir has been set!
      $ java -jar ords.war configdir

      Run the ORDS setup

      After all configuration is done, you can run the setup, which installs the metadata objects necessary for running ORDS in the database. The setup creates 2 schemas called:

      • ORDS_METADATA
      • ORDS_PUBLIC_USER

      The setup is run in silent mode, which uses the parameter values previously set in the ords_params.properties file.

      $ mkdir -p /u01/app/oracle/logs/ORDS
      $ java -jar ords.war setup --database ords --logDir /u01/app/oracle/logs/ORDS --silent

      Make a mapping between the URL and the ORDS application

      After running the setup, ORDS required objects are created inside the database. Now it’s time to make a mapping from the request URL to the ORDS interface in the database.

      $ java -jar ords.war map-url --type base-path /ords ords

      Here a mapping is made between the request URL from the client to the ORDS interface in the database. The /ords part after the base URL is used to map to a database connection resource called ords.

      So the request URL will look something like this:

      http://webserver01.localdomain:7001/ords/

      Where http://webserver01.localdomain:7001 is the base path.

      Deploy the ORDS Java application

      Right now all changes and configurations are done. It’s time to deploy the ORDS Java application to the WebLogic Server. Here I use WLST to deploy the ORDS Java application, but you can do it via the Administration Console as well, whatever you like.

      $ connect('weblogic','welcome01','t3://webserver01.localdomain:7001')
      $ progress = deploy('ords','/u01/app/oracle/product/ords/ords.war','AdminServer')
      $ disconnect()
      $ exit()

      And your ORDS installation is ready for creating REST services!
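      To verify the installation with an actual service, a schema can be REST-enabled and given a simple GET handler through the ORDS PL/SQL API. The following is a sketch run from SQL*Plus — the SMOKETEST schema and the module/template names are hypothetical, chosen to match the smoketesting use case:

```shell
sqlplus smoketest/<password>@ords_svc <<'EOF'
BEGIN
  -- expose this schema under .../ords/smoketest/
  ORDS.ENABLE_SCHEMA(p_enabled             => TRUE,
                     p_schema              => 'SMOKETEST',
                     p_url_mapping_type    => 'BASE_PATH',
                     p_url_mapping_pattern => 'smoketest');
  -- GET .../ords/smoketest/info/version returns the database version banner
  ORDS.DEFINE_SERVICE(p_module_name => 'info',
                      p_base_path   => '/info/',
                      p_pattern     => 'version',
                      p_source      => 'SELECT banner FROM v$version');
  COMMIT;
END;
/
EOF
```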


      After deployment of the ORDS Java application, its state should be Active and its health OK. You might need to restart the Managed Server!

      Deinstallation of ORDS

      As the installation of ORDS is pretty simple, deinstallation is even simpler. The installation involves the creation of 2 schemas in the database and a deployment of ORDS on the application server. The deinstall process is the reverse.

      1. Undeploy ORDS from WebLogic Server
      2. Deinstall the database schemas using

        $ java -jar ords.war uninstall

        In effect this removes the 2 schemas from the database

      3. Optionally remove the ORDS installation directories
      4. Optionally remove the ORDS tablespace from the database
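      Steps 1 and 2 can be scripted analogously to the installation — a sketch, reusing the WLST connection and paths from the deployment example above:

```
$ connect('weblogic','welcome01','t3://webserver01.localdomain:7001')
$ undeploy('ords')
$ disconnect()
$ exit()

$ cd /u01/app/oracle/product/ords
$ java -jar ords.war uninstall
```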



      The installation of ORDS is pretty simple. You don’t need any extra licenses to use ORDS, and ORDS can be installed without installing APEX. You can run ORDS standalone, or use a Java EE web server like WebLogic Server, GlassFish Server, or Apache Tomcat, although you will need additional licenses for the use of these web servers.

      Hope this helps!

      The post ORDS: Installation and Configuration appeared first on AMIS Oracle and Java Blog.

      Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415

      Thu, 2018-03-29 10:26

      We are in the process of upgrading our Oracle Clusters and SIHA/Restart systems to Oracle 12.2.

      The upgrade of the Grid-Infra home on an Oracle SIHA/Restart system from 11.2.0.4 to 12.2.0.1 fails when
      running the rootupgrade.sh script, with error message:

      CRS-2415: Resource ‘ora.asm’ cannot be registered because its owner ‘root’ is not the same as the Oracle Restart user ‘oracle’

      We start the upgrade to 12.2.0.1 (with the Jan 2018 RU patch) as:
      $ ./gridSetup.sh -applyPSU /app/software/27100009

      The installation and relink of the software look correct.
      However, when running the rootupgrade.sh script as the root user, as part of the post-installation,
      the script ends with:

      2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/crsctl query has softwareversion
      2018-03-28 11:20:27: Command output:
      > Oracle High Availability Services version on the local node is []
      >End Command output
      2018-03-28 11:20:27: Version String passed is: [Oracle High Availability Services version on the local node is []]
      2018-03-28 11:20:27: Version Info returned is : []
      2018-03-28 11:20:27: Got CRS softwareversion for su025p074:
      2018-03-28 11:20:27: The software version on su025p074 is
      2018-03-28 11:20:27: leftVersion=; rightVersion=
      2018-03-28 11:20:27: [] is lower than []
      2018-03-28 11:20:27: Disable the SRVM_NATIVE_TRACE for srvctl command on pre-12.2.
      2018-03-28 11:20:27: Invoking “/app/gi/12201_grid/bin/srvctl upgrade model -s -d -p first”
      2018-03-28 11:20:27: trace file=/app/oracle/crsdata/su025p074/crsconfig/srvmcfg1.log
      2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/srvctl upgrade model -s -d -p first
      2018-03-28 11:21:02: Command output:
      > PRCA-1003 : Failed to create ASM asm resource ora.asm
      > PRCR-1071 : Failed to register or update resource ora.asm
      > CRS-2415: Resource ‘ora.asm’ cannot be registered because its owner ‘root’ is not the same as the Oracle Restart user ‘oracle’.
      >End Command output
      2018-03-28 11:21:02: “upgrade model -s -d -p first” failed with status 1.
      2018-03-28 11:21:02: Executing cmd: /app/gi/12201_grid/bin/clsecho -p has -f clsrsc -m 180 “/app/gi/12201_grid/bin/srvctl upgrade model -s -d -p first”
      2018-03-28 11:21:02: Command

      The script is run as the root user as prescribed, but root cannot add the ASM resource.
      This leaves the installation unfinished.

      There is no description of this problem in the Oracle Knowledge Base; however, according to Oracle Support it is
      caused by the unpublished Bug 25183818: SIHA 11204 UPGRADE TO MAIN IS FAILING.

      As per March 2018, no workaround or software patch is yet available.

      The post Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415 appeared first on AMIS Oracle and Java Blog.

      Dbvisit Standby upgrade

      Wed, 2018-03-28 10:00
      Upgrading to Dbvisit Standby 8.0.x

      Dbvisit provides upgrade documentation which is detailed and in principle correct, but it only describes the upgrade process from the viewpoint of an installation on a single host.
      I upgraded Dbvisit Standby at a customer’s site, where Dbvisit Standby was in a running configuration with several hosts and several primary and standby databases. I found, by trial and error and with the help of Dbvisit support, some additional steps and points of advice that I think may be of help to others.
      This document describes the upgrade process for a working environment and provides information and advice in addition to the upgrade documentation. Those additions will be clearly marked in red throughout the blog. Also, the steps of the upgrade process have been rearranged in a more logical order.
      It is assumed that the reader is familiar with basic Dbvisit concepts and processes.


      The customer’s configuration that was upgraded is as follows:

      • Dbvisit 8.0.14
      • 4 Linux OEL 6 hosts running Dbvisit Standby
      • 6 databases in Dbvisit Standby configuration distributed among the hosts
      • 1 Linux OEL 7 host running Dbvisit Console
      • DBVISIT_BASE: /usr/dbvisit
      • Dbvctl running in Daemon mode

      Dbvisit upgrade overview

      The basic steps that are outlined in the Dbvisit upgrade documentation are as follows:

      1. Stop your Dbvisit Schedules if you have any running.
      2. Stop or wait for any Dbvisit processes that might still be executing.
      3. Backup the Dbvisit Base location where your software is installed.
      4. Download the latest version from
      5. Extract the install files into a temporary folder, example /home/oracle/8.0.
      6. Start the Installer and select to install the required components.
      7. Once the update is complete, you can remove the temporary install folder where the installer was extracted.
      8. It is recommended to run a manual send/apply of logs once an upgrade is complete.
      9. Re-enable any schedules.

      During the actual upgrade we deviated significantly from this: steps were rearranged, added and changed slightly.

      1. Download the latest available version of Dbvisit and make it available on all servers.
      2. Make a note of the primary host for each Dbvisit Standby configuration.
      3. Stop Dbvisit processes.
      4. Backup the Dbvisit Base location where your software is installed.
      5. Upgrade the software.
      6. Start dbvagent and dbvnet.
      7. Upgrade the DDC configuration files.
      8. Restart dbvserver.
      9. Update DDCs in Dbvisit Console.
      10. Run a manual send/apply of logs.
      11. Restart Dbvisit Standby processes.

      In the following sections these steps will be explained in more detail.

      Dbvisit Standby upgrade

      Here follow the steps in detail that in my view should be taken for a Dbvisit upgrade, based on the experience and steps taken during the actual Dbvisit upgrade.

      1. Download the latest available version of Dbvisit and make it available on all servers.
        In our case I put it in /home/oracle/upgrade on all hosts. The versions used were 8.0.18 for Oracle Enterprise Linux 6 and 7:
      2. Make a note of the primary hosts for each Dbvisit standby configuration.
        You will need this information later in step 7. It is possible to get the information from the DDC .env files, but in our case it is easier to get it from the Dbvisit console.
        If you need to get them from the DDC .env files look for the SOURCE parameter. Say we have a database db1:

        [root@dbvhost04 conf]# cd /usr/dbvisit/standby/conf/
        [root@dbvhost04 conf]# grep "^SOURCE" dbv_db1.env
        SOURCE = dbvhost04
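        With several configurations per host, that lookup can be scripted. A small sketch — the helper function and its directory argument are mine, not part of Dbvisit — that prints the primary (SOURCE) host for every DDC file in a conf directory:

```shell
# Print "<database>: <primary host>" for every Dbvisit DDC file in a directory.
ddc_primaries() {
  dir=$1
  for f in "$dir"/dbv_*.env; do
    [ -e "$f" ] || continue                      # no DDC files present
    db=$(basename "$f" .env); db=${db#dbv_}      # dbv_db1.env -> db1
    host=$(sed -n 's/^SOURCE[[:space:]]*=[[:space:]]*//p' "$f")
    printf '%s: %s\n' "$db" "$host"
  done
}

# On a Dbvisit host you would call:
#   ddc_primaries /usr/dbvisit/standby/conf
```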
      3. Stop Dbvisit processes.
        The Dbvisit upgrade manual assumes you schedule dbvctl from the cron. In our situation the dbvctl processes were running in Daemon mode. It was therefore easiest to stop them from the Dbvisit Console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose stop.
        Dbvagent, dbvnet and, on the Dbvisit console host, dbvserver can be stopped as follows:

        cd /usr/dbvisit/dbvagent
        ./dbvagent -d stop
        cd /usr/dbvisit/dbvnet
        ./dbvnet -d stop
        cd /usr/dbvisit/dbvserver
         ./dbvserver -d stop

        Do this on all hosts. Dbvisit support advises that all hosts in a configuration be upgraded at the same time; there is no rolling upgrade or anything similar.
        Check before proceeding if all processes are down.

      4. Backup the Dbvisit Base location where your software is installed.
        The Dbvisit upgrade manual marks this step as optional – but recommended. In my view it is not optional.
        You can simply tar everything under DBVISIT_BASE for later use.
      5. Upgrade the software.
        Extract the downloaded software and run the included installer. It will show you which version you already have and which version is available in the downloaded software. Choose the correct install option to upgrade. Below you can see the upgrade of one of the OEL 6 database hosts running Dbvisit Standby:

        cd /home/oracle/upgrade
        <unzip and untar the correct version from /home/oracle/upgrade>
        cd dbvisit/installer/
            Welcome to the Dbvisit software installer.
            It is recommended to make a backup of our current Dbvisit software
            location (Dbvisit Base location) for rollback purposes.
            Installer Directory /home/oracle/upgrade/dbvisit
        >>> Please specify the Dbvisit installation directory (Dbvisit Base).
            The various Dbvisit products and components - such as Dbvisit Standby, 
            Dbvisit Dbvnet will be installed in the appropriate subdirectories of 
            this path.
            Enter a custom value or press ENTER to accept default [/usr/dbvisit]: 
             >     DBVISIT_BASE = /usr/dbvisit 
            Component      Installer Version   Installed Version
            standby        8.0.18_0_gc6a0b0a8                                      
            dbvnet         8.0.18_0_gc6a0b0a8                                      
            dbvagent       8.0.18_0_gc6a0b0a8                                      
            dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
            What action would you like to perform?
               1 - Install component(s)
               2 - Uninstall component(s)
               3 - Exit
            Your choice: 1
            Choose component(s):
               1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
               2 - Dbvisit Standby Core (Command Line Interface)
               3 - Dbvnet (Dbvisit Network Communication) 
               4 - Dbvagent (Dbvisit Agent)
               5 - Dbvserver (Dbvisit Central Console) - Not available on Solaris/AIX
               6 - Exit Installer
            Your choice: 1
            Summary of the Dbvisit STANDBY configuration
            DBVISIT_BASE /usr/dbvisit 
            Press ENTER to continue 
            About to install Dbvisit STANDBY
            Component standby installed. 
            Press ENTER to continue 
            About to install Dbvisit DBVNET
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/cert.pem to /usr/dbvisit/dbvnet/conf/cert.pem
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/ca.pem to /usr/dbvisit/dbvnet/conf/ca.pem
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/prikey.pem to /usr/dbvisit/dbvnet/conf/prikey.pem
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/dbvnet to /usr/dbvisit/dbvnet/dbvnet
        Copied file /usr/dbvisit/dbvnet/conf/dbvnetd.conf to /usr/dbvisit/dbvnet/conf/dbvnetd.conf.201802201235
            DBVNET config file updated 
            Press ENTER to continue 
            About to install Dbvisit DBVAGENT
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/cert.pem to /usr/dbvisit/dbvagent/conf/cert.pem
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/ca.pem to /usr/dbvisit/dbvagent/conf/ca.pem
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/prikey.pem to /usr/dbvisit/dbvagent/conf/prikey.pem
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/dbvagent to /usr/dbvisit/dbvagent/dbvagent
        Copied file /usr/dbvisit/dbvagent/conf/dbvagent.conf to /usr/dbvisit/dbvagent/conf/dbvagent.conf.201802201235
            DBVAGENT config file updated 
            Press ENTER to continue 
            Component      Installer Version   Installed Version
            standby        8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvnet         8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvagent       8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
            What action would you like to perform?
               1 - Install component(s)
               2 - Uninstall component(s)
               3 - Exit
            Your choice: 3
      6. Start dbvagent and dbvnet.
        For the next step dbvagent and dbvnet need to be running.  In our case we had an init script which started both:

        cd /etc/init.d
        ./dbvisit start

        Otherwise do something like:

        cd /usr/dbvisit/dbvnet
        ./dbvnet -d start
        cd /usr/dbvisit/dbvagent
        ./dbvagent -d start

        The upgrade documentation at this point refers to section 5 of the Dbvisit Standby Networking chapter from the Dbvisit 8.0 user guide: Testing Dbvnet Communication, which describes some tests to see if dbvnet is working. It is important, as the upgrade documentation rightly points out, to test this before proceeding.
        Do on all database hosts:

        [oracle@dbvhost04 init.d]$ cd /usr/dbvisit/dbvnet/
        [oracle@dbvhost04 dbvnet]$ ./dbvnet -e "uname -n"
        [oracle@dbvhost04 dbvnet]$ ./dbvnet -f /tmp/dbclone_extract.out.err -o /tmp/testfile
        [oracle@dbvhost04 dbvnet]$ cd /usr/dbvisit/standby
        [oracle@dbvhost04 standby]$ ./dbvctl -f system_readiness
        Please supply the following information to complete the test.
        Default values are in [].
        Enter Dbvisit Standby location on local server: [/usr/dbvisit]:
        Your input: /usr/dbvisit
        Is this correct? <Yes/No> [Yes]:
        Enter the name of the remote server: []: dbvhost01
        Your input: dbvhost01
        Is this correct? <Yes/No> [Yes]:
        Enter Dbvisit Standby location on remote server: [/usr/dbvisit]:
        Your input: /usr/dbvisit
        Is this correct? <Yes/No> [Yes]:
        Enter the name of a file to transfer relative to local install directory
        /usr/dbvisit: [standby/doc/README.txt]:
        Your input: standby/doc/README.txt
        Is this correct? <Yes/No> [Yes]:
        Choose copy method:
        1)   /usr/dbvisit/dbvnet/dbvnet
        2)   /usr/bin/scp
        Please enter choice [1] :
        Is this correct? <Yes/No> [Yes]:
        Enter port for method /usr/dbvisit/dbvnet/dbvnet: [7890]:
        Your input: 7890
        Is this correct? <Yes/No> [Yes]:
        Testing the network connection between local server and remote server dbvhost01.
        Remote server                                          =dbvhost01
        Dbvisit Standby location on local server               =/usr/dbvisit
        Dbvisit Standby location on remote server              =/usr/dbvisit
        Test file to copy                                      =/usr/dbvisit/standby/doc/README.txt
        Transfer method                                        =/usr/dbvisit/dbvnet/dbvnet
        port                                                   =7890
        Checking network connection by copying file to remote server dbvhost01...
        Trace file /usr/dbvisit/standby/trace/58867_dbvctl_f_system_readiness_201803201304.trc
        File copied successfully. Network connection between local and dbvhost01
        correctly configured.
      7. Upgrade the DDC configuration files.
        Having upgraded the software, the Dbvisit Standby Configuration (DDC) files, which are located in DBVISIT_BASE/standby/conf on the database hosts, now need to be upgraded.
        Do this once for each standby configuration, and only on the primary host. If you do it on the secondary host you will get an error and all DDC configuration files will be deleted!
        So if we have a database db1 in a Dbvisit Standby configuration with database host dbvhost1 running the primary database (source in Dbvisit terminology) and database host dbvhost2 running the standby database (destination in Dbvisit terminology), we do the following on dbvhost1 only:

        cd /usr/dbvisit/standby
        ./dbvctl -d db1 -o upgrade
      8. Restart dbvserver.
        In our configuration the next step is to restart dbvserver to re-enable the Dbvisit Console.

        cd /usr/dbvisit/dbvserver
        ./dbvserver -d start
      9. Update DDCs in Dbvisit Console.
        After the upgrade, the configurations need to be updated in the Dbvisit Console. Go to Manage Configurations: the status field will show an error and the edit configuration button is replaced with an update button.
        Update the DDC for each configuration on that screen.
      10. Run a manual send/apply of logs.
        In our case this was easiest done from the Dbvisit Console again: Main Menu -> Database Actions -> send logs button, followed by the apply logs button.
        Do this for each configuration and check for errors before continuing.
      11. Restart Dbvisit Standby processes.
        In our case we restarted the dbvctl processes in daemon mode from the Dbvisit Console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose start.

      Linux – Upgrade from Dbvisit Standby version 8.0.x
      Dbvisit Standby Networking – Dbvnet – 5. Testing Dbvnet Communication

      The post Dbvisit Standby upgrade appeared first on AMIS Oracle and Java Blog.

      Getting started with git behind a company proxy

      Sun, 2018-03-25 11:50

      For the past few months I’ve been working with git to store our Infrastructure as Code in GitHub. I don’t want to have to type in my password every time, and I don’t like passwords saved in clear text, so I prefer ssh over https. But when working behind a proxy that doesn’t allow traffic over port 22 (ssh), I had to spend some time to get it working. Without a proxy there is nothing to it.

      First some background information. We connect to a “stepping stone” server that has some version of Windows as the O.S. and then use Putty to connect to our Linux host where we work on our code.


      Network background

      Our connection to Internet is via the proxy, but the proxy doesn’t allow traffic over port 22 (ssh/git). It does however allow traffic over port 80 (http) or 443 (https).

      So the goal here is to:
      1. use a public/private key pair to authenticate myself at github.com
      2. route traffic to github.com via the proxy
      3. reroute port 22 to port 443
      Generate a public/private key pair.

      This can be done on the Linux prompt but then you either need to type your passphrase every time you use git (or have it cached in Linux), or use a key pair without a passphrase. I wanted to take this one step further and use Putty Authentication Agent (Pageant.exe) to cache my private key and forward authentication requests over Putty to Pageant.

      With Putty Key Generator (puttygen.exe) you generate a public/private key pair. Just start the program and press the generate button.

      2018-03-25 16_35_08-keygen

      You then need to generate some entropy by moving the mouse around:

      2018-03-25 16_39_08-PuTTY Key Generator

      And in the end you get something like this:

      2018-03-25 16_41_25-PuTTY Key Generator

      Ad 1) you should use a descriptive name like “github <accountname>”

      Ad 2) you should use a sentence to protect your private key. Mind you: If you do not use a caching mechanism you need to type it in frequently

      Ad 3) you should save your private key somewhere you consider safe. (It should not be accessible for other people)

      Ad 4) you copy this whole text field (starting with ssh-rsa in this case up to and including the Key comment “rsa-key-20180325” which is repeated in that text field)

      Once you have copied the public key you need to add it to your account at github.com.

      Adding the public key in GitHub

      Log in to github.com and click on your icon:

      2018-03-25 17_03_03-github

      Choose “Settings” and go to “SSH and GPG keys”:

      2018-03-25 17_03_14-Your Profile

      There you press the “Add SSH key” button and you get to the next screen:

      2018-03-25 17_08_16-Add new SSH keys

      Give the Title a descriptive name so you can recognize/remember where you generated this key for, and in the Key field you paste the copied public key in. Then you press Add SSH key which results in something like this:

      2018-03-25 17_11_43-SSH and GPG keys

      In your case the picture of the key will not be green but black, as you haven’t used it yet. If you no longer want this public/private key pair to have access to your GitHub account you can delete it here as well.

      So now you can authenticate yourself with a private key that gets checked against the public key you uploaded to GitHub.

      You can test that on a machine that has direct access to Internet and is able to use port 22 (For example a VirtualBox VM on your own laptop at home).

      Route git traffic to github.com via the proxy and change the port.

      On the Linux server behind the company firewall, when logged on with your own account, you need to go to the “.ssh” directory. If it isn’t there yet you haven’t used ssh on that machine yet (running ssh <you>@<linuxserver> and cancelling the login is enough to create it). So change directory to .ssh in your home dir. Create a file called “config” with the following contents:

      Host github.com
          Hostname ssh.github.com
          ProxyCommand nc -X connect -x 192.168.x.y:8080 %h %p
          Port 443
          ServerAliveInterval 20
          User git
      #And if you use gitlab as well the entry should be like:
      Host gitlab.com
          Hostname altssh.gitlab.com
          Port    443
          ProxyCommand    /usr/bin/nc -X connect -x 192.168.x.y:8080 %h %p
          ServerAliveInterval 20
          User  git

      This is the part where you define that ssh calls to the server github.com should be rerouted to the proxy server 192.168.x.y on port 8080 (change that to your proxy details), and that the target server should not be github.com but ssh.github.com. That is the server where GitHub allows you to use the git or ssh protocol over https (port 443). I’ve added the example for gitlab as well. There the hostname should be changed to altssh.gitlab.com, as is done in the config above.

      “nc” or “/usr/bin/nc” is the utility Netcat that does the work of changing hostname and port number for us. On our RedHat Linux 6 server it is installed by default.

      The ServerAliveInterval 20 makes sure that the connection is kept alive by sending a packet every 20 seconds to prevent a “broken pipe”. And the User git makes sure you will not connect as your local Linux user to but as user git.

      But two things still need to be done:

      1. Add your private key to Putty Authentication Agent
      2. Allow the Putty session to your Linux host to use Putty Authentication Agent
      Add your private key to Putty Authentication Agent

      On your “stepping stone” server start the Putty Authentication Agent (Pageant.exe) and right click on its icon (usually somewhere at the bottom right of your screen):

      2018-03-25 17_49_49-

      Select View Keys to see the keys already loaded or press Add Key to add your newly created private key. You get asked to type your passphrase. Via View Keys you can check if the key was loaded:

      2018-03-25 17_56_06-Pageant Key List

      The obfuscated part shows the key fingerprint and the text to the right of that is the Key Comment you used. If the comment is long, not all of the text is visible, so make sure the Key Comment is distinguishable in its first part.

      If you want to use the same key for authentication on the Linux host, then put the Public key part in a file called “authorized_keys”. This file should be located in the “.ssh” directory and have rw permissions for your local user only (chmod 0600 authorized_keys) and nobody else. If you need or want a different key pair for that make sure you load the corresponding private key as well.

      Allow the Putty session to your Linux host to use Putty Authentication Agent

      The Putty session that you use to connect to the Linux host needs to have the following checked:

      2018-03-25 18_08_03-PuTTY Configuration

      Thus for the session go to “Connection” –> “SSH” –> “Auth” and check “Allow agent forwarding” to allow your terminal session on the Linux host to forward the authentication request with GitHub (or gitlab) to your Pageant process on the stepping stone server. For that last part you also need to have checked the box “Attempt authentication using Pageant”.

      Now you are all set to clone a GitHub repository on your Linux host and use key authentication.

      Clone a git repository using the git/ssh protocol

      Browse to github.com, select the repository you have access to with your GitHub account (if it is a private repo), press the “Clone or download” button and make sure you select “Clone with SSH”. See the picture below.

      2018-03-25 18_18_41-git

      Press the clipboard icon to copy the line starting with “git@github.com:” and ending with “.git”.

      That should work now (like it did for me).

      HTH Patrick

      P.S. If you need to authenticate your connection with the proxy service you probably need to have a look at the manual pages of “nc”. Or google it. I didn’t have to authenticate with the proxy service so I didn’t dive into that.

      The post Getting started with git behind a company proxy appeared first on AMIS Oracle and Java Blog.

      How to fix Dataguard FAL server issues

      Wed, 2018-03-21 06:23

      One of my clients had an issue with their Dataguard setup, after having to move tables and rebuild indexes the transport to their standby databases failed. The standby databases complained about not being able to fetch archivelogs from the primary database. In this short blog I will explain what happened and how I diagnosed the issue and fixed it.


      The situation

      Below you can see a diagram of the setup: a primary site with both a primary database and a standby database, and a remote site with two standby databases that both get their redo stream from the primary database.


      This setup was working well for the company, but having two redo streams going to the remote site over limited bandwidth can give issues when doing massive data manipulation. When the need arose for massive table movements and rebuilding of indexes, the generation of redo was too much for the WAN link and even for the local standby database. After they had tried to fix the standby databases for several days my help was requested, because the standby databases were not able to fix the gaps in the redo stream.


      The issues

      While analyzing the issues I found that the standby databases failed to fetch archived logs from the primary database. Usually you can fix this by using RMAN to supply the primary database with the archived logs needed for the standby, because in most cases the issue is that the archived logs have been deleted on the primary database. The client’s own DBA had already supplied the required archived logs, so the message was kind of misleading: the archived logs are there, but the primary doesn’t seem to be able to supply them.

      When checking the alert log for the primary database there was no obvious sign that anything was going wrong. While searching for more information I discovered that the default setting for the parameter log_archive_max_processes is 4. This setting controls the number of processes available for archiving, redo transport and FAL servers. Now take a quick look at the drawing above and start counting with me: at least one for local archiving, and three for the redo transport to the three standby databases. So when one of the standby databases wants to fetch archived logs to fill in a gap, it may not be able to request this from the primary database. So time to fix it:


      ALTER SYSTEM SET log_archive_max_processes=30 scope=both;

      Now the fetching started working better, but I discovered some strange behaviour: the standby database closest to the primary database was still not able to fetch archived logs from the primary. The two remote standby databases were actually fetching some archived logs, so that’s an improvement… but still, the alert log for the primary database was quite silent. Fortunately Oracle provides us with another server parameter: log_archive_trace. This setting enables extra logging for certain subprocesses. Add the values from the documentation to get the desired logging: in this case 2048 and 128, for FAL server logging and redo transport logging respectively.

      ALTER SYSTEM SET log_archive_trace=2176 scope=both;

      With this setting I was able to see that all 26 other archiver processes were busy supplying one of the standby databases with archived logs. It seems that the database that’s furthest behind gets the first go at the primary database. Anyway, my first instinct was to have the local standby database fixed first so it would be available for failover, so by stopping the remote standby databases the local standby database was able to fetch archived logs from the primary database again. The next step was to start the other standby databases; to speed things up I started the first one and only after this database had resolved its archive log gap did I start the second database.


      In conclusion, it’s important that you tune your settings for your environment: set log_archive_max_processes as appropriate and set your log level so you see what’s going on.

      Please mind that both of these settings are also managed by the Dataguard Broker. To prevent warnings from the Dataguard Broker make sure you set these parameters via dgmgrl:

      edit database <<primary>> set property LogArchiveTrace=2176;
      edit database <<primary>> set property LogArchiveMaxProcesses=30;

      The post How to fix Dataguard FAL server issues appeared first on AMIS Oracle and Java Blog.

      Handle a GitHub Push Event from a Web Hook Trigger in a Node application

      Tue, 2018-03-20 11:57

      My requirement in this case: a push of one or more commits to a GitHub repository needs to trigger a Node application that inspects the commits and, when specific conditions are met, downloads the contents of the commit.


      I have implemented this functionality using a Node application – primarily because it offers me an easy way to create a REST end point that I can configure as a WebHook in GitHub.

      Implementing the Node application

      The requirements for a REST endpoint that can be configured as a webhook endpoint are quite simple: handle a POST request; no response body is required. I can do that!

      In my implementation, I inspect the push event, extract some details about the commits it contains and write a summary to the console. The code is quite straightforward and self explanatory; it can easily be extended to support additional functionality:

      app.post('/github/push', function (req, res) {
        var githubEvent = req.body
        // - githubEvent.head_commit is the last (and frequently the only) commit
        // - githubEvent.pusher identifies the user who pushed
        // - timestamp of final commit: githubEvent.head_commit.timestamp
        // - branch: githubEvent.ref (refs/heads/master)
        var commits = {}
        if (githubEvent.commits)
          commits = githubEvent.commits.reduce(
            function (agg, commit) {
              agg.messages = agg.messages + commit.message + ";"
              agg.filesTouched = agg.filesTouched.concat(commit.added).concat(commit.modified).concat(commit.removed)
                .filter(file => file.indexOf("src/js/jet-composites/input-country") > -1)
              return agg
            }
            , { "messages": "", "filesTouched": [] })
        var push = {
          "finalCommitIdentifier": githubEvent.after,
          "pusher": githubEvent.pusher,
          "timestamp": githubEvent.head_commit.timestamp,
          "branch": githubEvent.ref,
          "finalComment": githubEvent.head_commit.message,
          "commits": commits
        }
        console.log("WebHook Push Event: " + JSON.stringify(push))
        if (push.commits.filesTouched.length > 0) {
          console.log("This commit involves changes to the input-country component, so let's update the composite component for it ")
          var compositeName = "input-country"
        }
        var response = push
        res.json(response)
      })
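The aggregation over githubEvent.commits can be exercised stand-alone with a mocked payload; the commit messages and file names below are made up for illustration:

```javascript
// Mocked push payload (hypothetical commits) to exercise the aggregation logic
const githubEvent = {
  commits: [
    { message: "fix input-country loader", added: ["src/js/jet-composites/input-country/loader.js"], modified: [], removed: [] },
    { message: "update docs", added: [], modified: ["README.md"], removed: [] }
  ]
}

// Same reduce as in the request handler: concatenate messages, collect touched files
// and keep only those under the input-country component folder
const commits = githubEvent.commits.reduce(function (agg, commit) {
  agg.messages = agg.messages + commit.message + ";"
  agg.filesTouched = agg.filesTouched
    .concat(commit.added).concat(commit.modified).concat(commit.removed)
    .filter(file => file.indexOf("src/js/jet-composites/input-country") > -1)
  return agg
}, { "messages": "", "filesTouched": [] })

console.log(commits.messages)     // fix input-country loader;update docs;
console.log(commits.filesTouched) // [ 'src/js/jet-composites/input-country/loader.js' ]
```

Note that the filter runs on every iteration, so files from earlier commits that do not match are dropped as soon as they are seen.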

      Configuring the WebHook in GitHub

      A web hook can be configured in GitHub for any of your repositories. You indicate the endpoint URL, the type of event that should trigger the web hook and optionally a secret. See my configuration:


      Trying out the WebHook and receiving Node application

      In this particular case, the Node application is running locally on my laptop. I have used ngrok to expose the local application on a public internet address:


      (note: this is the address you saw in the web hook configuration)

      I have committed and pushed a small change in a file in the repository on which the webhook is configured:


      The ngrok agent has received the WebHook request:


      The Node application has received the push event and has done its processing:


      The post Handle a GitHub Push Event from a Web Hook Trigger in a Node application appeared first on AMIS Oracle and Java Blog.

      Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller

      Mon, 2018-03-19 00:58

      The requirement is simple: a Node JS application that receives HTTP requests, forwards (some of) them to other hosts and subsequently returns the responses it receives to the original caller.


      This can be used in many situations – to ensure all resources loaded in a web application come from the same host (one way to handle CORS), to have content in IFRAMEs loaded from the same host as the surrounding application or to allow connection between systems that cannot directly reach each other. Of course, the proxy component does not have to be the dumb and mute intermediary – it can add headers, handle faults, perform validation and keep track of the traffic. Before you know it, it becomes an API Gateway…

      In this article a very simple example of a proxy that I want to use for the following purpose: I create a Rich Web Client application (Angular, React, Oracle JET) – and some of the components used are owned and maintained by an external party. Instead of adding the sources to the server that serves the static sources of the web application, I use the proxy to retrieve these specific sources from their real origin (either a live application, a web server or even a Git repository). This allows me to have the latest sources of these components at any time, without redeploying my own application.

      The proxy component is of course very simple and straightforward. And I am sure it can be much improved upon. For my current purposes, it is good enough.

      The Node application consists of the file www that is started with npm start through package.json. This file does some generic initialization of Express (such as defining the port on which to listen). Then it defers to app.js for all request handling. In app.js, a static file server is configured to serve files from the local /public subdirectory (using express.static).


      var app = require('../app');
      var debug = require('debug')('jet-on-node:server');
      var http = require('http');

      var port = normalizePort(process.env.PORT || '3000');
      app.set('port', port);
      var server = http.createServer(app);
      server.listen(port);
      server.on('error', onError);
      server.on('listening', onListening);

      function normalizePort(val) {
        var port = parseInt(val, 10);

        if (isNaN(port)) {
          // named pipe
          return val;
        }

        if (port >= 0) {
          // port number
          return port;
        }

        return false;
      }

      function onError(error) {
        if (error.syscall !== 'listen') {
          throw error;
        }

        var bind = typeof port === 'string'
          ? 'Pipe ' + port
          : 'Port ' + port;

        // handle specific listen errors with friendly messages
        switch (error.code) {
          case 'EACCES':
            console.error(bind + ' requires elevated privileges');
            process.exit(1);
            break;
          case 'EADDRINUSE':
            console.error(bind + ' is already in use');
            process.exit(1);
            break;
          default:
            throw error;
        }
      }

      function onListening() {
        var addr = server.address();
        var bind = typeof addr === 'string'
          ? 'pipe ' + addr
          : 'port ' + addr.port;
        debug('Listening on ' + bind);
      }


      "name": "jet-on-node",
      "version": "0.0.0",
      "private": true,
      "scripts": {
      "start": "node ./bin/www"
      "dependencies": {
      "body-parser": "~1.18.2",
      "cookie-parser": "~1.4.3",
      "debug": "~2.6.9",
      "express": "~4.15.5",
      "morgan": "~1.9.0",
      "pug": "2.0.0-beta11",
      "request": "^2.85.0",
      "serve-favicon": "~2.4.5"


      var express = require('express');
      var path = require('path');
      var favicon = require('serve-favicon');
      var logger = require('morgan');
      var cookieParser = require('cookie-parser');
      var bodyParser = require('body-parser');

      const http = require('http');
      const url = require('url');
      const fs = require('fs');
      const request = require('request');

      var app = express();
      // uncomment after placing your favicon in /public
      //app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
      app.use(bodyParser.urlencoded({ extended: false }));

      // define static resource server from local directory public (for any request not otherwise handled)
      app.use(express.static(path.join(__dirname, 'public')));

      app.use(function (req, res, next) {
        res.header("Access-Control-Allow-Origin", "*");
        res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
        next();
      });

      // catch 404 and forward to error handler
      app.use(function (req, res, next) {
        var err = new Error('Not Found');
        err.status = 404;
        next(err);
      });

      // error handler
      app.use(function (err, req, res, next) {
        // set locals, only providing error in development
        res.locals.message = err.message;
        res.locals.error = req.app.get('env') === 'development' ? err : {};

        // render the error page
        res.status(err.status || 500);
        res.render('error', {
          message: err.message,
          error: err
        });
      });

      module.exports = app;

      Then the interesting bit: requests for URL /js/jet-composites/* are intercepted: instead of having those requests also handled by serving local resources (from directory public/js/jet-composites/*), the requests are interpreted and routed to an external host. The responses from that host are returned to the requester. To the requesting browser, there is no distinction between resources served locally as static artifacts from the local file system and resources retrieved through these redirected requests.

      // any request at /js/jet-composites (for resources in that folder)
      // should be intercepted and redirected
      var compositeBasePath = '/js/jet-composites/'
      app.get(compositeBasePath + '*', function (req, res) {
        var requestedResource = req.url.substr(compositeBasePath.length)
        // parse URL
        const parsedUrl = url.parse(requestedResource);
        // extract URL path
        let pathname = `${parsedUrl.pathname}`;
        // maps file extension to MIME types
        const mimeType = {
          '.ico': 'image/x-icon',
          '.html': 'text/html',
          '.js': 'text/javascript',
          '.json': 'application/json',
          '.css': 'text/css',
          '.png': 'image/png',
          '.jpg': 'image/jpeg',
          '.wav': 'audio/wav',
          '.mp3': 'audio/mpeg',
          '.svg': 'image/svg+xml',
          '.pdf': 'application/pdf',
          '.doc': 'application/msword',
          '.eot': 'application/vnd.ms-fontobject',
          '.ttf': 'application/font-sfnt'
        };

        handleResourceFromCompositesServer(res, mimeType, pathname)
      })

      async function handleResourceFromCompositesServer(res, mimeType, requestedResource) {
        var reqUrl = "http://yourhost:theport/applicationURL/" + requestedResource
        // fetch resource and return
        var options = url.parse(reqUrl);
        options.method = "GET";
        options.agent = false;

        http.get(reqUrl, function (serverResponse) {
          console.log('<== Received res for', serverResponse.statusCode, reqUrl);
          console.log('\t-> Request Headers: ', options);
          console.log(' ');
          console.log('\t-> Response Headers: ', serverResponse.headers);

          serverResponse.headers['access-control-allow-origin'] = '*';

          switch (serverResponse.statusCode) {
            // pass through. we're not too smart here...
            case 200: case 201: case 202: case 203: case 204: case 205: case 206:
            case 304:
            case 400: case 401: case 402: case 403: case 404: case 405:
            case 406: case 407: case 408: case 409: case 410: case 411:
            case 412: case 413: case 414: case 415: case 416: case 417: case 418:
              res.writeHeader(serverResponse.statusCode, serverResponse.headers);
              serverResponse.pipe(res, { end: true });
              break;
            // fix host and pass through.
            case 301:
            case 302:
            case 303:
              serverResponse.statusCode = 303;
              serverResponse.headers['location'] = 'http://localhost:' + PORT + '/' + serverResponse.headers['location'];
              console.log('\t-> Redirecting to ', serverResponse.headers['location']);
              res.writeHeader(serverResponse.statusCode, serverResponse.headers);
              serverResponse.pipe(res, { end: true });
              break;
            // error everything else
            default:
              var stringifiedHeaders = JSON.stringify(serverResponse.headers, null, 4);
              res.writeHeader(500, {
                'content-type': 'text/plain'
              });
              res.end(process.argv.join(' ') + ':\n\nError ' + serverResponse.statusCode + '\n' + stringifiedHeaders);
          }
        });
      }


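The switch statement in the handler groups upstream status codes into three behaviors: pass the response through, rewrite the redirect, or report an error. That decision can be sketched as a small predicate (the function name is made up for illustration):

```javascript
// Mirrors the proxy's switch: success codes, 304 and most client errors pass through,
// 301/302/303 get their Location header rewritten, everything else becomes a 500
function proxyBehavior(statusCode) {
  if ((statusCode >= 200 && statusCode <= 206) || statusCode === 304 ||
      (statusCode >= 400 && statusCode <= 418)) return 'pass-through'
  if (statusCode === 301 || statusCode === 302 || statusCode === 303) return 'rewrite-redirect'
  return 'error'
}

console.log(proxyBehavior(200)) // pass-through
console.log(proxyBehavior(302)) // rewrite-redirect
console.log(proxyBehavior(503)) // error
```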

      Express Tutorial Part 2: Creating a skeleton website

      Building a Node.js static file server (files over HTTP) using ES6+

      How To Combine REST API calls with JavaScript Promises in node.js or OpenWhisk

      Node script to forward all http requests to another server and return the response with an access-control-allow-origin header. Follows redirects.

      5 Ways to Make HTTP Requests in Node.js

      The post Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller appeared first on AMIS Oracle and Java Blog.

      Create a Node JS application for Downloading sources from GitHub

      Sun, 2018-03-18 16:26

      My objective: create a Node application to download sources from a repository on GitHub. I want to use this application to read a simple package.json-like file (that describes which reusable components (from which GitHub repositories) the application has dependencies on) and download all required resources from GitHub and store them in the local file system. This by itself may not seem very useful. However, it is a stepping stone on the road to a facility that triggers run time updates of application components from GitHub WebHook triggers.

      I am making use of the Octokit Node JS library to interact with the REST APIs of GitHub. The code I have created will:

      • fetch the meta-data for all items in the root folder of a GitHub Repo (at the tip of a specific branch, or at a specific tag or commit identifier)
      • iterate over all items:
        • download the contents of the item if it is a file and create a local file with the content (and cater for large files and for binary files)
        • create a local directory for each item in the GitHub repo that is a directory, then recursively process the contents of that directory on GitHub

      An example of the code in action:

      A randomly selected GitHub repo:


      The local target directory is empty at the beginning of the action:


      Run the code:


      And the content is downloaded and written locally:


      Note: the code could easily provide an execution report with details such as file size, download status, last change date etc. It is currently very straightforward. Note: the gitToken is something you need to get hold of yourself in the GitHub dashboard. Without a token, the code will still work, but you will be bound to the GitHub rate limit (of about 60 requests per hour).

      const octokit = require('@octokit/rest')()
      const fs = require('fs');

      var gitToken = "YourToken"

      octokit.authenticate({
          type: 'token',
          token: gitToken
      })

      var targetProjectRoot = "C:/data/target/"
      var github = { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" }

      downloadGitHubRepo(github, targetProjectRoot)

      async function downloadGitHubRepo(github, targetDirectory) {
          console.log(`Installing GitHub Repo ${github.owner}\\${github.repo}`)
          var repo = github.repo;
          var path = ''
          var owner = github.owner
          var ref = github.commit ? github.commit : (github.tag ? github.tag : (github.branch ? github.branch : 'master'))
          processGithubDirectory(owner, repo, ref, path, path, targetDirectory)
      }

      // let's assume that if the name ends with one of these extensions, we are dealing with a binary file
      // (note: the slice(-3) check below only matches three-character extensions)
      const binaryExtensions = ['png', 'jpg', 'tiff', 'wav', 'mp3', 'doc', 'pdf']
      var maxSize = 1000000;

      function processGithubDirectory(owner, repo, ref, path, sourceRoot, targetRoot) {
          octokit.repos.getContent({ "owner": owner, "repo": repo, "path": path, "ref": ref })
              .then(result => {
                  var targetDir = targetRoot + path
                  // check if targetDir exists and create it if not
                  checkDirectorySync(targetDir)
                  result.data.map(item => {
                      if (item.type == "dir") {
                          processGithubDirectory(owner, repo, ref, item.path, sourceRoot, targetRoot)
                      } // if directory
                      if (item.type == "file") {
                          if (item.size > maxSize) {
                              // large files cannot be fetched through getContent; use the blob API instead
                              octokit.gitdata.getBlob({ "owner": owner, "repo": repo, "sha": item.sha })
                                  .then(result => {
                                      var target = `${targetRoot + item.path}`
                                      fs.writeFile(target
                                          , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { reportFile(item, target) })
                                  })
                                  .catch((error) => { console.log("ERROR BIGGA" + error) })
                          }// if bigga
                          else
                              octokit.repos.getContent({ "owner": owner, "repo": repo, "path": item.path, "ref": ref })
                                  .then(result => {
                                      var target = `${targetRoot + item.path}`
                                      if (binaryExtensions.includes(item.path.slice(-3))) {
                                          fs.writeFile(target
                                              , Buffer.from(result.data.content, 'base64'), function (err, data) { reportFile(item, target) })
                                      } else
                                          fs.writeFile(target
                                              , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { if (!err) reportFile(item, target); else console.log('Error: ' + err) })
                                  })
                                  .catch((error) => { console.log("ERROR " + error) })
                      }// if file
                  })
              }).catch((error) => { console.log("ERROR XXX" + error) })
      }

      function reportFile(item, target) {
          console.log(`- installed ${item.path} (${item.size} bytes) in ${target}`)
      }

      function checkDirectorySync(directory) {
          try {
              fs.statSync(directory);
          } catch (e) {
              fs.mkdirSync(directory);
              console.log("Created directory: " + directory)
          }
      }


      Octokit REST API Node JS library

      API Documentation for Octokit

      The post Create a Node JS application for Downloading sources from GitHub appeared first on AMIS Oracle and Java Blog.

      Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu

      Sun, 2018-03-18 08:53

      Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’. A Spring Boot application has an embedded servlet engine making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command-line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples on how to get started with different OSs and different JDKs in Docker. I’ll finish with an example on how to build a Docker image with a Spring Boot application in it.

      Getting started with Docker Installing Docker

      Of course you need a Docker installation. I’ll not get into the details here, but:

      Oracle Linux 7

      yum-config-manager --enable ol7_addons
      yum-config-manager --enable ol7_optional_latest
      yum install docker-engine
      systemctl start docker
      systemctl enable docker

      Ubuntu

      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      apt-get update
      apt-get install docker-ce

      You can add a user to the docker group or give it sudo docker rights. Mind that both effectively allow the user to become root on the host OS.

      Running a Docker container

      See below for commands you can execute to start containers in the foreground or background and access them. For ‘mycontainer’ in the below examples, you can fill in a name you like. The name of the image can be found in the descriptions further below: for example an Oracle Linux 7 image when using the Oracle Container Registry, or store/oracle/serverjre:8 for a Server JRE image from the Docker Store.

      If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

      • go to container-registry.oracle.com and enable your OTN account to be used
      • go to the product you want to use and accept the license agreement
      • do docker login -u username -p password container-registry.oracle.com

      If you are using the Docker Store, you first need to

• go to the Docker Store website and create an account
      • find the image you want to use. Click Get Content and accept the license agreement
      • do docker login -u username -p password

      To start a container in the foreground

      docker run --name mycontainer -it imagename /bin/sh

      To start a container in the background

      docker run --name mycontainer -d imagename tail -f /dev/null

      To ‘enter’ a running container:

      docker exec -it mycontainer /bin/sh

      /bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to start a ‘bare OS’ container with no other running processes to keep it running. A suggestion from here.

      Cleaning up

      Good to know is how to clean up your images/containers after having played around with them. See here.

      # Delete all containers
      docker rm $(docker ps -a -q)
      # Delete all images
      docker rmi $(docker images -q)
      Options for JDK

Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used ones.

      Oracle JDK on Oracle Linux

      When you’re running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Good to know is that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK since the Server JRE has a smaller attack surface. Read more here. For questions about support and roadmap, read the following blog.

The steps to obtain Docker images for Oracle JDK / Oracle Linux from the Docker Store are as follows:

Create an account on the Docker Store. Go to the Server JRE page and click Get Content. Accept the agreement and you’re ready to log in, pull and run.

      #use the username and password
      docker login -u yourusername -p yourpassword
      docker pull store/oracle/serverjre:8

      To start in the foreground:

      docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash

You can also use the image from the Oracle Container Registry. First, same as for just running the OS, enable your OTN account and login.

      #use your OTN username and password
      docker login -u yourusername -p yourpassword
      docker pull
      #To start in the foreground:
      docker run --name jre8 -it /bin/bash
      OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is used quite often. There can be some thread-related challenges with Alpine Linux though. See for example here and here.

Running OpenJDK in Alpine Linux in a Docker container is easier than you might think. You don’t need a specific account for this, nor a login.

      When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

      docker pull openjdk:8-jdk-alpine

      Next you can do

      docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh
      Zulu on Ubuntu Linux

You can also consider OpenJDK-based JDKs like Azul’s Zulu. This works mostly the same; only the image name differs, something like ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu based.

      Do it yourself

Of course you can create your own image with a JDK. See for example here. This requires you to download the JDK and build the image yourself. This is quite easy though.

      Spring Boot in a Docker container

      Creating a container with a Spring Boot application based on an image which already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-jar","/app.jar"]

      The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

And add a dependency to com.spotify.dockerfile-maven-plugin and some configuration to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See here for a complete example pom.xml and Dockerfile. The relevant part of the pom.xml file is below.
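As a hedged sketch of what that plugin configuration could look like (the plugin version and JAR file expression are illustrative; check the linked example for the exact settings):

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <version>1.3.6</version>
  <configuration>
    <repository>maartensmeets/accs-cache-sample</repository>
    <tag>latest</tag>
    <buildArgs>
      <JAR_FILE>target/${project.artifactId}-${project.version}.jar</JAR_FILE>
    </buildArgs>
  </configuration>
</plugin>
```

The repository element determines the name the image is built (and later pushed) under; the JAR_FILE build argument is passed to the Dockerfile shown earlier.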


      To actually build the Docker image, which allows using it locally, you can do:

      mvn install dockerfile:build

      If you want to distribute it (allow others to easily pull and run it), you can push it with

      mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets and only for Docker Hub (for this example). The below screenshot is after having pushed the image to Docker Hub. You can find it there since it is public.

      You can then do something like

      docker run -t maartensmeets/accs-cache-sample:latest

      The post Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu appeared first on AMIS Oracle and Java Blog.

      Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application

      Wed, 2018-03-14 10:24

      Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I’ll give an example on how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

Using the Application Cache Java SDK

Create an Application Cache

You can use a web-interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches, but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache must be deployed in the same region in order to allow connectivity. Also, do not use the ‘-’ character in your cache name, since the LBaaS configuration will fail.

      Use the Java SDK

Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services. These services provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJOs which are exchanged/persisted and exposed, for example as REST, in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database, or the application cache repository) can be handled by the service, and this can differ per operation. Get operations, for example, might directly use the cache repository (which could fall back to other sources if it can’t find its data), while you might want to do Put operations in both the persistent backend and the cache. See here for an example.
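A minimal sketch of this layering in plain Java – HashMaps stand in for the real cache and persistent repositories, and all names are illustrative, not part of the ACCS SDK:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service that reads through the cache repository and
// writes to both the cache and the persistent repository, as described above.
public class ProductService {
    private final Map<String, String> cacheRepository = new HashMap<>();
    private final Map<String, String> persistentRepository = new HashMap<>();

    // Get: try the cache first; on a miss, fall back to the persistent store
    // and warm the cache with the result
    public String get(String key) {
        return cacheRepository.computeIfAbsent(key, persistentRepository::get);
    }

    // Put: write to the persistent backend as well as to the cache
    public void put(String key, String value) {
        persistentRepository.put(key, value);
        cacheRepository.put(key, value);
    }

    public static void main(String[] args) {
        ProductService svc = new ProductService();
        svc.put("sku-1", "laptop");
        System.out.println(svc.get("sku-1")); // prints: laptop
    }
}
```

In a real application the two maps would be replaced by a cache repository backed by the Application Cache SDK and a JPA (or similar) repository, wired in by Spring.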

In order to gain access to the cache, first a session needs to be established. The session can be obtained from a session provider. The session provider can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing / testing, you might try setting this to ‘never expires’, since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it. Nor do I know if this is only an issue with the local session provider. See here or here for sample code.

      When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose from GRPC and REST. GRPC might be more challenging to implement without an SDK in for example Node.js code, but I have not tried this. I have not compared the performance of the 2 protocols. Another difference is that the application uses different ports and URLs to connect to the cache. You can see how to determine the correct URL / protocol from ACCS environment variables here.
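The URL derivation mentioned above can be sketched in plain Java. Note that the environment variable name CACHING_INTERNAL_CACHE_URL and the port/context-root conventions below are assumptions based on the Application Cache documentation, not confirmed by this post; verify them against your ACCS service binding:

```java
// Hedged sketch: derive the cache REST base URL from an ACCS environment variable.
public class CacheUrlResolver {

    // REST endpoint convention: port 8080 and /ccs context root (assumption)
    static String restUrl(String cacheHost) {
        return "http://" + cacheHost + ":8080/ccs";
    }

    // CACHING_INTERNAL_CACHE_URL is assumed to be set by the ACCS service binding;
    // fall back to localhost for local development
    static String resolveHost() {
        String host = System.getenv("CACHING_INTERNAL_CACHE_URL");
        return (host != null) ? host : "localhost";
    }

    public static void main(String[] args) {
        System.out.println(restUrl(resolveHost()));
    }
}
```

A GRPC session provider would use a different port on the same host; the selection between the two protocols then reduces to choosing which URL to hand to the session provider.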

      The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. You might do something like below.


      Mind that when using Spring Boot you do not want to create instances of objects by directly doing something like: Class bla = new Class(). You want to let Spring handle this by using the @Autowired annotation.

Do mind though that the @Autowired annotation assigns instances to variables after the constructor of the instance is executed. If you want to use the @Autowired variables after your constructor but before executing other methods, you should put that code in a @PostConstruct annotated method. See also here. See here for a concrete implemented sample.
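This ordering can be demonstrated without Spring by simulating what the container does (all class names here are hypothetical; in a real application the marked field and method would carry @Autowired and @PostConstruct):

```java
public class InjectionOrderDemo {
    static class CacheClient { String name = "cache"; }

    static class MyService {
        CacheClient cache;              // would carry @Autowired in Spring
        String seenInConstructor;
        String seenInPostConstruct;

        MyService() {
            // runs before injection: the dependency is still null here
            seenInConstructor = (cache == null) ? "null" :;
        }

        void init() {                   // would carry @PostConstruct in Spring
            // runs after injection: the dependency is available here
            seenInPostConstruct =;
        }
    }

    public static void main(String[] args) {
        MyService svc = new MyService();   // 1. container calls the constructor
        svc.cache = new CacheClient();     // 2. container injects the dependency
        svc.init();                        // 3. container calls the @PostConstruct method
        System.out.println(svc.seenInConstructor + " -> " + svc.seenInPostConstruct);
        // prints: null -> cache
    }
}
```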


      The Application cache can be restarted at certain times (e.g. maintenance like patching, scaling) and there can be connectivity issues due to other reasons. In order to deal with that it is a good practice to make the connection handling more robust by implementing retries. See for example here.
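A minimal sketch of such a retry wrapper in plain Java (the names are illustrative, not the SDK’s API; a real implementation would likely add exponential backoff and only retry on connectivity-related exceptions):

```java
public class RetryDemo {
    interface Op<T> { T run() throws Exception; }

    // Run the operation, retrying up to maxAttempts times with a fixed wait in between
    static <T> T withRetries(Op<T> op, int maxAttempts, long waitMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return;
            } catch (Exception e) {
                last = e; // e.g. a connectivity error while the cache restarts
                if (attempt < maxAttempts) Thread.sleep(waitMillis);
            }
        }
        throw last; // all attempts failed: surface the last error
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // simulated cache call that fails twice, then succeeds
        String result = withRetries(() -> {
            calls[0]++;
            if (calls[0] < 3) throw new RuntimeException("cache not reachable yet");
            return "OK";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints: OK after 3 attempts
    }
}
```

A cache Get or Put would then be wrapped in withRetries so a restarting cache does not immediately fail the request.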

Deploy a Spring Boot application to ACCS

Create a deployable

      In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. In this ZIP file there should at least be a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.
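A minimal manifest.json for a Spring Boot fat JAR might look like the following sketch (the values are illustrative; check the ACCS documentation for the exact schema):

```json
{
  "runtime": {
    "majorVersion": "8"
  },
  "command": "java -jar accs-cache-sample.jar",
  "release": {
    "version": "1.0"
  },
  "notes": "Spring Boot application using the Application Cache"
}
```

The command entry must match the name of the JAR file that the spring-boot-maven-plugin produces and that is packaged into the ZIP.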

      Deploy to ACCS

There are 2 major ways (next to directly using the APIs with, for example, cURL) in which you can deploy to ACCS: manually, or using the Developer Cloud Service. The process to do this from Developer Cloud Service is described here. This is quicker (it allows redeployment on Git commit, for example) and more flexible. The steps below describe the manual procedure in broad strokes. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced with a new version. In this case you can do 2 things. Deploy under a different name every time; the name of the application however is reflected in the URL, and this could cause issues with users of the application. Another way is to remove files from the Storage Cloud Service before redeployment, so you are sure the deployable which ends up in ACCS is the most recent version.


      Create a new Java SE application.


      Upload the previously created ZIP file


      Introducing Application Cache Client Java SDK for Oracle Cloud

      Caching with Oracle Application Container Cloud

      Complete working sample Spring Boot on ACCS with Application cache (as soon as a SR is resolved)

      A sample of using the Application Cache Java SDK. Application is Jersey based

      The post Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application appeared first on AMIS Oracle and Java Blog.

      ADF Performance Tuning: Avoid a Long Browser Load Time

      Wed, 2018-03-07 04:10

It is not always easy to troubleshoot ADF performance problems – it is often complicated. Many parts need to be measured, analyzed and considered. While looking for performance problems at the usual suspects (ADF application, database, network), the real problem can also be found in the often overlooked browser load time. The browser load time is just as important a part of the HTTP request and response handling as the time spent in the application server, database and network. The browser load time can add a few seconds on top of the server and network process time before the end-user receives the HTTP response and can continue with his work – especially if the browser needs to build a very ‘rich’ ADF page, where it has to build and process a very large DOM tree. The end-user then needs to wait for seconds, even in modern browsers such as Google Chrome, Firefox and Microsoft Edge. Often this is caused by a ‘bad’ page design where too many ADF components are rendered and displayed at the same time: too many table columns and rows, but also too many other components can cause a slow browser load time. This blog shows an example, analyzes the browser load time in the ADF Performance Monitor, and suggests simple page design considerations to prevent a long browser load time.

      Read more on – our new website on the ADF Performance Monitor.

      The post ADF Performance Tuning: Avoid a Long Browser Load Time appeared first on AMIS Oracle and Java Blog.

      Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl

      Sun, 2018-03-04 14:08


      The challenge I describe in this article is quite specific. I have a Windows laptop. I have access to a remote Kubernetes cluster (on Oracle Cloud Infrastructure). I want to create Fn functions and deploy them to an Fn server running on that Kubernetes (k8s from now on) environment and I want to be able to execute functions running on k8s from my laptop. That’s it.

      In this article I will take you on a quick tour of what I did to get this to work:

• Use Vagrant to spin up a VirtualBox VM based on a Debian Linux image, set up with the Docker server installed. Use SSH to enter the virtual machine and install Helm (a Kubernetes package installer) – both the client (in the VM) and the server (called Tiller, on the k8s cluster). Also install kubectl in the VM.
      • Then install Project Fn in the VM. Also install Fn to the Kubernetes cluster, using the Helm chart for Fn (this will create a series of Pods and Services that make up and run the Fn platform).
      • Still inside the VM, create a new Fn function. Then, deploy this function to the Fn server on the Kubernetes cluster. Run the function from within the VM – using kubectl to set up port forwarding for local calls to requests into the Kubernetes cluster.
      • On the Windows host (the laptop, outside the VM) we can also run kubectl with port forwarding and invoke the Fn function on the Kubernetes cluster.
• Finally, I show how to expose the fn-api service from the Kubernetes cluster on an external IP address. Note: the latter is nice for demos, but compromises security in a major way.

      All in all, you will see how to create, deploy and invoke an Fn function – using a Windows laptop and a remote Kubernetes cluster as the runtime environment for the function.

      The starting point:


a laptop running Windows, with VirtualBox and Vagrant installed, and a remote Kubernetes cluster (could be in some cloud, such as the Oracle Container Engine Cloud that I am using, or could be minikube).

      Step One: Prepare Virtual Machine

      Create a Vagrantfile – for example this one:

Vagrant.configure("2") do |config|
  config.vm.provision "docker"
  config.vm.define "debiandockerhostvm" = "debian/jessie64" "private_network", ip: "" # fill in the private network IP to expose the VM on
  config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
         owner: "vagrant",
         group: "www-data",
         mount_options: ["dmode=775,fmode=664"],
         type: ""
  config.vm.provider :virtualbox do |vb| = "debiananddockerhostvm"
    vb.memory = 4096
    vb.cpus = 2
    vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
    vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
  end
end
This Vagrantfile will create a VM with VirtualBox called debiandockerhostvm – based on the VirtualBox image debian/jessie64. It exposes the VM to the host laptop at the private network IP address configured in the file (you can safely change this). It maps the local directory that contains the Vagrantfile into the VM, at /vagrant. This allows us to easily exchange files between the Windows host and the Debian Linux VM. The instruction “config.vm.provision “docker”” ensures that Docker is installed into the Virtual Machine.

      To actually create the VM, open a command line and navigate to the directory that contains the Vagrant file. Then type “vagrant up”. Vagrant starts running and creates the VM, interacting with the VirtualBox APIs. When the VM is created, it is started.

      From the same command line, using “vagrant ssh”, you can now open a terminal window in the VM.

      To further prepare the VM, we need to install Helm and kubectl. Helm is installed in the VM (client) as well as in the Kubernetes cluster (the Tiller server component).

Here are the steps to perform inside the VM (see step 1):

      ######## kubectl
      # download and extract the kubectl binary 
      curl -LO$(curl -s
      # set the executable flag for kubectl
      chmod +x ./kubectl
      # move the kubectl executable to the bin directory
      sudo mv ./kubectl /usr/local/bin/kubectl
# assuming that the kubeconfig file with details for the Kubernetes cluster is available on the Windows host:
      # Copy the kubeconfig file to the directory that contains the Vagrantfile and from which vagrant up and vagrant ssh were performed
      # note: this directory is mapped into the VM to directory /vagrant
      #Then in VM - set the proper Kubernetes configuration context: 
      export KUBECONFIG=/vagrant/kubeconfig
      #now inspect the succesful installation of kubectl and the correct connection to the Kubernetes cluster 
      kubectl cluster-info
      ########  HELM
      #download the Helm installer
      curl -LO
      #extract the Helm executable from the archive
      tar -xzf helm-v2.8.1-linux-amd64.tar.gz
      #set the executable flag on the Helm executable
      sudo chmod +x  ./linux-amd64/helm
      #move the Helm executable to the bin directory - as helm
      sudo mv ./linux-amd64/helm /usr/local/bin/helm
#test the successful installation of helm
      helm version
      ###### Tiller
      #Helm has a server side companion, called Tiller, that should be installed into the Kubernetes cluster
      # this is easily done by executing:
      helm init
      # an easy test of the Helm/Tiller set up can be run (as described in the quickstart guide)
      helm repo update              
      helm install stable/mysql
      helm list
      # now inspect in the Kubernetes Dashboard the Pod that should have been created for the MySQL Helm chart
      # clean up after yourself:
      helm delete <name of the release of MySQL>

      When this step is complete, the environment looks like this:


      Step Two: Install Project Fn – in VM and on Kubernetes

Now that we have prepared our Virtual Machine, we can proceed with adding the Project Fn command line utility to the VM and the Fn platform to the Kubernetes cluster. The former is a simple local installation of a binary file. The latter is an even simpler installation of a Helm chart. Here are the steps that you should go through inside the VM (also see step 2):

      # 1A. download and install Fn locally inside the VM
      curl -LSs | sh
      #note: this previous statement failed for me; I went through the following steps as a workaround
      # 1B. create install script
      curl -LSs > inst
      # make script executable
      chmod u+x
      # execute script - as sudo
      sudo ./
      # 1C. and if that fails, you can manually manipulate the downloaded executable:
      sudo mv /tmp/fn_linux /usr/local/bin/fn
      sudo chmod +x /usr/local/bin/fn
      # 2. when the installation was done through one of the  methods listed, test the success by running  
      fn --version
      # 3. Server side installation of Fn to the Kubernetes Cluster
      # details in
      # Clone the GitHub repo with the Helm chart for fn; sources are downloaded into the fn-helm directory
      git clone && cd fn-helm
      # Install chart dependencies from requirements.yaml in the fn-helm directory:
      helm dep build fn
      #To install the Helm chart with the release name my-release into Kubernetes:
      helm install --name my-release fn
      # to verify the cluster server side installation you could run the following statements:
      export KUBECONFIG=/vagrant/kubeconfig
      #list all pods for app my-release-fn
      kubectl get pods --namespace default -l "app=my-release-fn"

      When the installation of Fn has been done, the environment can be visualized as shown below:


      You can check in the Kubernetes Dashboard to see what has been created from the Helm chart:


      Or on the command line:


      Step Three: Create, Deploy and Run Fn Functions

      We now have a ready to run environment – client side VM and server side Kubernetes cluster – for creating Fn functions – and subsequently deploying and invoking them.

      Let’s now go through these three steps, starting with the creation of a new function called shipping-costs, created in Node.

      docker login
      export FN_REGISTRY=lucasjellema
      mkdir shipping-costs
      cd shipping-costs
      fn init --name shipping-costs --runtime  node
      # this creates the starting point of the Node application (package.json and func.js) as well as the Fn meta data file (func.yaml) 
      # now edit the func.js file (and add dependencies to package.json if necessary)
      #The extremely simple implementation of func.js looks like this:
var fdk=require('@fnproject/fdk');
fdk.handle(function(input){
  var name = 'World';
  if ( {
    name =;
  }
  var response = {'message': 'Hello ' + name, 'input': input};
  return response;
})
      #This function receives an input parameter (from a POST request this would be the body contents, typically a JSON document)
      # the function returns a result, a JSON document with the message and the input document returned in its entirety

      After this step, the function exists in the VM – not anywhere else yet. Some other functions could already have been deployed to the Fn platform on Kubernetes.


      This function shipping-costs should now be deployed to the K8S cluster, as that was one of our major objectives.

      export KUBECONFIG=/vagrant/kubeconfig
      # retrieve the name of the Pod running the Fn API
      kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0]}"
      # retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
      export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0]}")
      echo $POD_NAME    
      # set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
      # this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
      kubectl port-forward --namespace default $POD_NAME 8080:80 &
      #now we inform Fn that deployment activities can be directed at port 8080 of the local host, effectively to the pod $POD_NAME on the K8S cluster
      export FN_API_URL=
      export FN_REGISTRY=lucasjellema
      docker login
      #perform the deployment of the function from the directory that contains the func.yaml file
      #functions are organized in applications; here the name of the application is set to soaring-clouds-app
      fn deploy --app soaring-clouds-app

Here is what the deployment looks like in the terminal window in the VM (I have left out the steps: docker login, set FN_API_URL and set FN_REGISTRY).


After deploying function shipping-costs, it now exists on the Kubernetes cluster – inside the fn-api Pod (where a Docker container is running for each of the functions):

To invoke the function, several options are available. It can be invoked from within the VM, using cURL against the function’s endpoint – leveraging kubectl port forwarding as before. We can also apply kubectl port forwarding on the laptop – and use any tool that can invoke HTTP endpoints, such as Postman, to call the function.

      If we want clients without kubectl port forwarding – and even completely without knowledge of the Kubernetes cluster – to invoke the function, that can be done as well, by exposing an external IP for the service on K8S for fn-api.

First, let’s invoke the function from within the VM.

      export KUBECONFIG=/vagrant/kubeconfig
      # retrieve the name of the Pod running the Fn API
      kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0]}"
      # retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
      export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0]}")
      echo $POD_NAME    
      # set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
      # this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
      kubectl port-forward --namespace default $POD_NAME 8080:80 &
      curl -X POST \ \
        -H 'Cache-Control: no-cache' \
        -H 'Content-Type: application/json' \
        -H 'Postman-Token: bb753f9f-9f63-46b8-85c1-8a1428a2bdca' \
        -d '{"X":"Y"}'
      # on the Windows laptop host
      set KUBECONFIG=c:\data\2018-soaring-keys\kubeconfig
      kubectl port-forward --namespace default <name of pod> 8080:80 &
      kubectl port-forward --namespace default my-release-fn-api-frsl5 8085:80 &


      Now, try to call the function from the laptop host. This assumes that on the host we have both kubectl and the kubeconfig file that we also use in the VM.

      First we have to set the KUBECONFIG environment variable to refer to the kubeconfig file. Then we set up kubectl port forwarding just like in the VM, in this case forwarding port 8085 to the Kubernetes Pod for the Fn API.


When this is done, we can make calls to the shipping-costs function on the localhost, port 8085 endpoint.

This still requires the client to be aware of Kubernetes: have the kubeconfig file and the kubectl client. We can make it possible to directly invoke Fn functions from anywhere without using kubectl. We do this by exposing an external IP directly on the service for the Fn API on Kubernetes.

      The simplest way of making this happen is through the Kubernetes dashboard.

      Run the dashboard:


and open it in a local browser.

      Edit the configuration of the service for fn-api:


      Change type ClusterIP to LoadBalancer. This instructs Kubernetes to externally expose this Service – and assign an external IP address to it. Click on Update to make the change real.
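In YAML terms, the edit amounts to changing the Service spec roughly as sketched below (the service name and ports are illustrative; the labels are based on the my-release-fn labels used earlier):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-fn-api
spec:
  type: LoadBalancer   # was: ClusterIP
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: my-release-fn
    role: fn-service
```

With type LoadBalancer, the cloud provider provisions a load balancer and attaches an external IP address to the Service.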


After a little while, the change will have been processed and we can find an external endpoint for the service.


      Now we (and anyone who has this IP address) can invoke the Fn function shipping-costs directly using this external IP address:



This article showed how to start with a standard Windows laptop – with only VirtualBox and Vagrant as special components. Through a few simple, largely automated steps, we created a VM that allows us to create Fn functions and to deploy those functions to a Kubernetes cluster, onto which we have also deployed the Fn server platform. The article provides all sources and scripts and demonstrates how to create, deploy and invoke a specific function.


      Sources for this article in GitHub:

      Vagrant home page:

      VirtualBox home page: 

      Quickstart for Helm:

      Fn Project Helm Chart for Kubernetes –

      Installation instruction for kubectl –

      Project Fn – Quickstart –

      Tutorial for Fn with Node:

      Kubernetes – expose external IP address for a Service –

      Use Port Forwarding to Access Applications in a Cluster –

      AMIS Technology Blog – Rapid first few steps with Fn – open source project for serverless functions –

      AMIS Technology Blog – Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions –

      The post Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl appeared first on AMIS Oracle and Java Blog.