Amis Blog

Friends of Oracle and Java

Quickly set up a persistent React application

Tue, 2018-06-12 03:20

After having recently picked up the React framework, I figured I’d share how I quickly set up my projects to implement persistence. This way you spend minimal time setting up the skeleton of your application, letting you focus on adding functionality. This is done by combining a few tools. Let’s take a closer look.

Should you be interested in the code itself, go ahead and get it here.

Tools

NPM/Yarn

Be sure to install a package manager like npm or Yarn. We will use it to install and run some of the other tools.

Create-react-app
If you have ever worked with React, you undoubtedly came across this CLI tool. It is useful for providing the boilerplate code you need for the front-end of your application. Take a look at the documentation to get a sense of how to use it.

FeathersJS
This is a framework built on top of Express. It provides another CLI tool we can use to quickly set up the boilerplate code for building a REST API. What makes it powerful is the concept of hooks (see this link for more) as well as the possibility to model our entities in a JavaScript file. After starting this service we can manipulate data in our database using REST calls.
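
To get a feel for hooks: a hook is just a function that receives a context object and can inspect or modify it before or after a service method runs. The sketch below is a hypothetical example (the function requireName and its validation rule are mine, not part of the generated code) of a “before” hook that could guard the create call of the employees service we will generate later on.

```javascript
// Hypothetical "before" hook: hooks are plain functions that receive a
// context object and can inspect or change it before the service method
// runs. Returning the context passes control to the next hook.
const requireName = context => {
  if (!context.data || typeof context.data.name !== 'string' ||
      context.data.name.trim() === '') {
    throw new Error('An employee needs a name');
  }
  // Normalize the input before it reaches the database.
  context.data.name = context.data.name.trim();
  return context;
};
```

In the generated backend, such a function would be registered in employees.hooks.js, for example under before.create.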

MongoDB
This is the database we will use – be sure to download, install and start it. It’s a NoSQL database and uses a JSON document structure.
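
A quick, hypothetical illustration of that document structure: once an employee is stored later in this article, the record MongoDB holds looks roughly like this (the _id is generated by MongoDB; __v is Mongoose’s internal version key; the values below are made up).

```javascript
// Hypothetical example of a stored employee document; the _id value
// shown here is invented for illustration.
const storedEmployee = {
  _id: '5b1f8e2e4b0c3a1d2c3d4e5f', // ObjectId, rendered as a hex string
  name: 'Nathan',
  __v: 0
};
```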

Steps
Create a folder for your application named <application name>. I like to create two separate folders in here: one ‘frontend’, the other ‘backend’. Usually I will add the source folder to a version control tool like Git. If you’re dealing with a larger application, however, you might want to initiate two separate repositories for your front- and backend. Using Visual Studio Code, my file structure looks like this:

[Screenshot: project structure in Visual Studio Code]

Fire up your terminal. Install create-react-app globally.

npm install -g create-react-app

Navigate to your frontend folder. Initialize your first application.

cd frontend
create-react-app <application name>

Note how the CLI will add another folder with the application name into your frontend folder. Since we already created a root folder, copy the content of this folder and move it one level higher. Now the structure looks like this:

[Screenshot: folder structure after running create-react-app]

Install FeathersJS CLI globally.

npm install -g @feathersjs/cli

Navigate to the backend folder of your project and generate a Feathers application:

cd ../backend
feathers generate app

The CLI will prompt for a project name. Follow along with these settings:

[Screenshot: feathers generate app prompt settings]

Alright, now it’s time to generate a service so we can start making REST calls. In true Oracle fashion, let’s make a service for the entity employees. While in the backend folder, run:

feathers generate service

Follow along:

[Screenshot: feathers generate service prompt settings]

Navigate to the employees.model.js file. The file structure of the backend folder should look like this:

[Screenshot: backend folder structure]

In this file we can specify what employees look like. For now, let’s just give them a property “name” of type String and make it required. Also, let’s delete the timestamps section to keep things clean. It should look like this:

//employees.model.js

module.exports = function (app) {
  const mongooseClient = app.get('mongooseClient');
  const { Schema } = mongooseClient;
  const employees = new Schema({
    name: { type: String, required: true }
  });

  return mongooseClient.model('employees', employees);
};

Great. We are almost good to go. First, fire up MongoDB. Windows users: see here. Unix users:

sudo service mongod start

Next up, navigate to the backend folder. Using the terminal, run the following command:

npm start

You should see a message saying: Feathers application started on http://localhost:3030

The service we created is available on http://localhost:3030/employees. Navigate there. You should see JSON data – though right now all you’ll probably see is metadata. By using this address we can make REST calls to see and manipulate the data. You could use curl commands in the terminal, simply use the browser, or install a tool like Postman.

Next up we need a way to access this service in our frontend section of the application. For this purpose, let’s create a folder in the location: frontend/src/api. In here, create the file Client.js.

This file should contain the following code:

//Client.js

import io from 'socket.io-client';
import feathers from '@feathersjs/client';

const socket = io('http://localhost:3030');
const client = feathers();

client.configure(feathers.socketio(socket));
client.configure(feathers.authentication({
  storage: window.localStorage
}));

export default client;

Make sure the libraries we refer to are included in the package.json of our frontend. The package.json should look like this:

{
  "name": "react-persistent-start",
  "version": "0.1.0",
  "private": true,
  "dependencies": {
    "@feathersjs/client": "^3.4.4",
    "react": "^16.4.0",
    "react-dom": "^16.4.0",
    "react-scripts": "1.1.4",
    "socket.io-client": "^1.7.3"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test --env=jsdom",
    "eject": "react-scripts eject"
  }
}

Be sure to run npm install after updating this file.

That’s basically all we need to perform CRUD operations from the front-end section of the application. To make the example a little more vivid, let’s add a button to the App.js page which was automatically generated by the create-react-app CLI.

First, I’ll get rid of the boilerplate code in there and import the client we just created. Next, create a button that calls the method handleClick() when the React onClick event is fired. In it we will call the client, let it call the service and perform a create operation for us. The code will look like this:

//App.js

import React, { Component } from 'react';

import client from './api/Client';
import './App.css';

class App extends Component {
  handleClick() {
    client.service('employees').create({
      name: "Nathan"
    });
  }

  render() {
    return (
      <div className="App">
        <button onClick={this.handleClick}>Add employee</button>
      </div>
    );
  }
}

export default App;

Navigate to the frontend section in the terminal and make sure the node modules are installed correctly by running npm install. Now run npm start. Navigate to http://localhost:3030/employees and verify there is no employee here. On http://localhost:3000 our page should be available and show a single button. Click the button and refresh the http://localhost:3030/employees page. You can see we have added an employee with the same name as yours truly.

[Screenshot: JSON result in the browser]

(I use a Chrome plugin called JSON Viewer to get the layout as shown in the picture.)

Using the code provided in the handleClick() method, you can expand upon this. See this link for all the calls you can make in order to provide CRUD functionality using FeathersJS.
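
The Feathers service interface uses a fixed set of method names. As a rough, self-contained sketch (an in-memory stand-in written for illustration, not the real Feathers client), the CRUD surface of client.service('employees') looks like this:

```javascript
// In-memory stand-in for a Feathers service, for illustration only.
// The real client.service('employees') exposes the same async method
// names (create, find, get, patch, remove), but backed by REST or
// websocket calls to the FeathersJS backend.
class InMemoryEmployees {
  constructor() {
    this.records = new Map();
    this.nextId = 1;
  }
  async create(data) {
    const record = { _id: String(this.nextId++), ...data };
    this.records.set(record._id, record);
    return record;
  }
  async find() {
    return Array.from(this.records.values());
  }
  async get(id) {
    return this.records.get(id);
  }
  async patch(id, changes) {
    const updated = { ...this.records.get(id), ...changes };
    this.records.set(id, updated);
    return updated;
  }
  async remove(id) {
    const removed = this.records.get(id);
    this.records.delete(id);
    return removed;
  }
}
```

Swapping such a stand-in for the real service keeps the calling code identical, which is what makes the fixed method names so convenient.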

That’s basically all you need to start adding functionality to your persistent React application.

Happy developing! The code is available here.

The post Quickly set up a persistent React application appeared first on AMIS Oracle and Java Blog.

Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX

Tue, 2018-06-05 13:00

In a previous article I talked about an OSBServiceExplorer tool to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. The code mentioned in that article was based on Oracle Service Bus 11.1.1.7 (11g).

In the meantime the OSB world has changed (for example now we can use pipelines) and it was time for me to pick up the old code and get it working within Oracle Service Bus 12.2.1.1.0 (12c).

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX in a 12c environment.

Unfortunately, getting the Java code to work in 12c wasn’t as straightforward as I had hoped.

For more details on the OSB, WebLogic Server MBeans and JMX subject, I kindly refer you to my previous article. In this article I will refer to it as my previous MBeans 11g article.
[https://technology.amis.nl/2017/03/09/oracle-service-bus-service-exploring-via-weblogic-server-mbeans-with-jmx/]

Before using the OSBServiceExplorer tool in a 12c environment, I first created two OSB Projects (MusicService and TrackService) with pipelines, proxy and business services. I used Oracle JDeveloper 12c (12.2.1.1.0) for this (from within a VirtualBox appliance).

For the latest version of Oracle Service Bus see:
http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html

If you want to use a VirtualBox appliance, have a look at for example: Pre-built Virtual Machine for SOA Suite 12.2.1.3.0
[http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa122130-4122735.html]

After deploying the OSB projects that were created in JDeveloper to the WebLogic Server, the Oracle Service Bus Console 12c (in my case: http://localhost:7101/servicebus) looks like:

Before we dive into the OSBServiceExplorer tool, I will first give you some detailed information about the “TrackService” (from JDeveloper) that will be used as an example in this article.

The “TrackService” sboverview looks like:

As you can see, several proxy services, a pipeline and a business service are present.

The Message Flow of pipeline “TrackServicePipeline” looks like:

The OSB Project structure of service “TrackService” looks like:

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain are written to file.

Example content fragment of the text file:

Found server runtimes:
– Server name: DefaultServer. Server state: RUNNING

For more info and the responsible code fragment see my previous MBeans 11g article.

List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 45 refs, including the following pipelines, proxy and business services:
– ProxyService: TrackService/proxy/TrackServiceRest
– BusinessService: MusicService/business/db_InsertCD
– BusinessService: TrackService/business/CDService
– Pipeline: TrackService/pipeline/TrackServicePipeline
– ProxyService: TrackService/proxy/TrackService
– Pipeline: MusicService/pipeline/MusicServicePipeline
– ProxyService: MusicService/proxy/MusicService
– ProxyService: TrackService/proxy/TrackServiceRestJSON

See the code fragment below (I highlighted the changes I made to the code from the 11g version):

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

fileWriter.write("Found total of " + refs.size() +
                 " refs, including the following pipelines, proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {
        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("Pipeline")) {
        fileWriter.write("- Pipeline: " +
                         ref.getFullName() + "\n");                    
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " +
                         ref.getFullName() + "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

For more info see my previous MBeans 11g article.

ResourceConfigurationMBean

In the Oracle Enterprise Manager FMC 12c (in my case: http://localhost:7101/em) I navigated to SOA / service-bus and opened the System MBean Browser:

Here the ResourceConfigurationMBeans can be found under com.oracle.osb.


[Via MBean Browser]

If we navigate to a particular ResourceConfigurationMBean for a proxy service (for example …$proxy$TrackService), the information on the right is as follows:


[Via MBean Browser]

As in the 11g version the attributes Configuration, Metadata and Name are available.

If we navigate to a particular ResourceConfigurationMBean for a pipeline (for example …$pipeline$TrackServicePipeline), the information on the right is as follows:


[Via MBean Browser]

As you can see the value for attribute “Configuration” for this pipeline is “Unavailable”.

Remember the following java code in OSBServiceExplorer.java (see my previous MBeans 11g article):

for (ObjectName osbResourceConfiguration :
    osbResourceConfigurations) {
 
    CompositeDataSupport configuration =
        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                      "Configuration");

So now, apparently, getting the configuration can result in a NullPointerException. This has to be dealt with in the new 12c version of OSBServiceExplorer.java, besides the fact that a pipeline is now a new resource type.

But of course, for our OSB service explorer we are particularly interested in the elements (nodes) of the pipeline. In order to get this information available in the System MBean Browser, something has to be done.

Via the Oracle Enterprise Manager FMC 12c I navigated to SOA / service-bus / Home / Projects / TrackService and clicked on tab “Operations”:

Here you can see the Operations settings of this particular service.

Next I clicked on the pipeline “TrackServicePipeline”, where I enabled “Monitoring”.

If we then navigate back to the ResourceConfigurationMBean for pipeline “TrackServicePipeline”, the information on the right is as follows:


[Via MBean Browser]

So now the desired configuration information is available.

Remark:
For the pipeline “MusicServicePipeline” the monitoring is still disabled, so the configuration is still unavailable.

Diving into attribute Configuration of the ResourceConfigurationMBean

For each found pipeline, proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

Proxy service configuration:
Please see my previous MBeans 11g article.

Business service configuration:
Please see my previous MBeans 11g article.

Pipeline configuration:
Below is an example of a pipeline configuration (content fragment of the text file):

Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

If the pipeline configuration is unavailable, the following is shown:

Resource is a Pipeline (without available Configuration)

The pipelines can be recognized by the Pipeline$ prefix.

Pipeline, element hierarchy

In the 11g version of OSBServiceExplorer.java, for a proxy service the elements (nodes) of the pipeline were investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");

In 12c, however, this doesn’t work for a proxy service. The same code can be used for a pipeline, though.

For pipeline “TrackServicePipeline”, the configuration (including nodes) looks like:


[Via MBean Browser]

Based on the node information (with node-id) in the MBean Browser and the content of pipeline “TrackServicePipeline.pipeline”, the following structure can be put together:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .pipeline file for the _ActionId- identification, mentioned as the value for the name key.

Example of the details of node with node-id = 4 and name = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc:


[Via MBean Browser]

Content of pipeline “TrackServicePipeline.pipeline”:

<?xml version="1.0" encoding="UTF-8"?>
<con:pipelineEntry xmlns:con="http://www.bea.com/wli/sb/pipeline/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:con1="http://www.bea.com/wli/sb/stages/config" xmlns:con2="http://www.bea.com/wli/sb/stages/routing/config" xmlns:con3="http://www.bea.com/wli/sb/stages/transform/config">
    <con:coreEntry>
        <con:binding type="SOAP" isSoap12="false" xsi:type="con:SoapBindingType">
            <con:wsdl ref="TrackService/proxy/TrackService"/>
            <con:binding>
                <con:name>TrackServiceBinding</con:name>
                <con:namespace>http://trackservice.services.soatraining.amis/</con:namespace>
            </con:binding>
        </con:binding>
        <con:xqConfiguration>
            <con:snippetVersion>1.0</con:snippetVersion>
        </con:xqConfiguration>
    </con:coreEntry>
    <con:router>
        <con:flow>
            <con:route-node name="RouteNode1">
                <con:context>
                    <con1:userNsDecl prefix="trac" namespace="http://trackservice.services.soatraining.amis/"/>
                </con:context>
                <con:actions>
                    <con2:route>
                        <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc</con1:id>
                        <con2:service ref="TrackService/business/CDService" xsi:type="ref:BusinessServiceRef" xmlns:ref="http://www.bea.com/wli/sb/reference"/>
                        <con2:operation>getTracksForCD</con2:operation>
                        <con2:outboundTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff9</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDRequest"/>
                                        <con1:param name="getTracksForCDRequest">
                                            <con1:path>$body/trac:getTracksForCDRequest</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:outboundTransform>
                        <con2:responseTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff6</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDResponse"/>
                                        <con1:param name="getTracksForCDResponse">
                                            <con1:path>$body/*[1]</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:responseTransform>
                    </con2:route>
                </con:actions>
            </con:route-node>
        </con:flow>
    </con:router>
</con:pipelineEntry>

It’s obvious that the nodes in the pipeline form a hierarchy: a node can have children, which in turn can also have children, etc. Because we are only interested in certain kinds of nodes (Route, Java Callout, Service Callout, etc.), some kind of filtering is needed. For more info about this, see my previous MBeans 11g article.
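
The filtering idea itself is simple. The sketch below is a simplified illustration in JavaScript (the actual tool is written in Java, and the node shape used here, with kind, label and children, is invented for the example): walk the hierarchy recursively and collect only the node kinds you care about.

```javascript
// Recursively collect the labels of nodes whose kind is in wantedKinds,
// while still descending into the children of every node.
const filterNodes = (node, wantedKinds, result = []) => {
  if (wantedKinds.includes(node.kind)) {
    result.push(node.label);
  }
  for (const child of node.children || []) {
    filterNodes(child, wantedKinds, result);
  }
  return result;
};
```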

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found pipeline the metadata information (dependencies and dependents) is written to file.

Example content fragment of the text file:

Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
dependencies:
– BusinessService$TrackService$business$CDService
– WSDL$TrackService$proxy$TrackService

dependents:
– ProxyService$TrackService$proxy$TrackService
– ProxyService$TrackService$proxy$TrackServiceRest
– ProxyService$TrackService$proxy$TrackServiceRestJSON

As can be seen in the MBean Browser, the metadata for a particular pipeline shows the dependencies on other resources (like business services and WSDLs) and other services that are dependent on the pipeline.

For more info and the responsible code fragment see my previous MBeans 11g article.

Remark:
In the Java code, dependencies on XQueries are filtered out and not written to the text file.

MBeans with regard to version 11.1.1.7

In the sample java code shown at the end of my previous MBeans 11g article, the use of the following MBeans can be seen:

The MBeans and other classes used, and the jar files that contain them:

– weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
– weblogic.management.runtime.ServerRuntimeMBean.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
– com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class: <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-api.jar
– com.bea.wli.config.Ref.class: <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar
– weblogic.management.jmx.MBeanServerInvocationHandler.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
– com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class: <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar

Therefore, in JDeveloper 11g, the following Project Libraries and Classpath settings were made:

– com.bea.common.configfwk_1.6.0.0.jar: /oracle/fmwhome/Oracle_OSB1/modules/com.bea.common.configfwk_1.6.0.0.jar
– sb-kernel-api.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-api.jar
– sb-kernel-impl.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-impl.jar
– wlfullclient.jar: /oracle/fmwhome/wlserver_10.3/server/lib/wlfullclient.jar

For more info about these MBeans, see my previous MBeans 11g article.

In order to connect to a WebLogic MBean Server, in my previous MBeans 11g article I used the thick client wlfullclient.jar.

This library is not provided by default in a WebLogic install and must be built. How to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

So I built wlfullclient.jar as follows:

cd <Middleware Home Directory>/wlserver_10.3/server/lib
java -jar wljarbuilder.jar

In the sample Java code shown at the end of this article, the use of the same MBeans can be seen. However, in JDeveloper 12c, changes in the Project Libraries and Classpath settings were necessary, due to changes in the jar files used in the 12c environment. Also, the wlfullclient.jar is deprecated as of WebLogic Server 12.1.3!

Overview of WebLogic client jar files:

WebLogic Full Client (T3 protocol):
– weblogic.jar (6 KB): via the manifest file MANIFEST.MF, classes in other JAR files are referenced
– wlfullclient.jar (111.131 KB): deprecated as of WebLogic Server 12.1.3

WebLogic Thin Client (IIOP protocol):
– wlclient.jar (2.128 KB)
– wljmxclient.jar (238 KB)

WebLogic Thin T3 Client (T3 protocol):
– wlthint3client.jar (7.287 KB)

Remark with regard to version 12.2.1:

Due to changes in the JDK, WLS no longer supports JMX with just the wlclient.jar. To use JMX, you must use either the ”full client” (weblogic.jar) or wljmxclient.jar.
[https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Full Client

The WebLogic full client, wlfullclient.jar, is deprecated as of WebLogic Server 12.1.3 and may be removed in a future release. Oracle recommends using the WebLogic Thin T3 client or other appropriate client depending on your environment.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT130]

For WebLogic Server 10.0 and later releases, client applications need to use the wlfullclient.jar file instead of the weblogic.jar. A WebLogic full client is a Java RMI client that uses Oracle’s proprietary T3 protocol to communicate with WebLogic Server, thereby leveraging the Java-to-Java model of distributed computing.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

Not all functionality available with weblogic.jar is available with the wlfullclient.jar. For example, wlfullclient.jar does not support Web Services, which requires the wseeclient.jar. Nor does wlfullclient.jar support operations necessary for development purposes, such as ejbc, or support administrative operations, such as deployment, which still require using the weblogic.jar.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

WebLogic Thin Client

In order to connect to a WebLogic MBean Server, it is also possible to use a thin client wljmxclient.jar (in combination with wlclient.jar). This JAR contains Oracle’s implementation of the HTTP and IIOP protocols.

Remark:
wlclient.jar is included in wljmxclient.jar‘s MANIFEST ClassPath entry, so wlclient.jar and wljmxclient.jar need to be in the same directory, or both jars need to be specified on the classpath.

Ensure that weblogic.jar or wlfullclient.jar is not included in the classpath if wljmxclient.jar is included. Only the thin client wljmxclient.jar/wlclient.jar or the thick client wlfullclient.jar should be used, but not a combination of both. [https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Thin T3 Client

The WebLogic Thin T3 Client jar (wlthint3client.jar) is a light-weight, high performing alternative to the wlfullclient.jar and wlclient.jar (IIOP) remote client jars. The Thin T3 client has a minimal footprint while providing access to a rich set of APIs that are appropriate for client usage. As its name implies, the Thin T3 Client uses the WebLogic T3 protocol, which provides significant performance improvements over the wlclient.jar, which uses the IIOP protocol.

The Thin T3 Client is the recommended option for most remote client use cases. There are some limitations in the Thin T3 client, as outlined below. For those few use cases, you may need to use the full client or the IIOP thin client.

Limitations and Considerations:

This release does not support the following:

  • Mbean-based utilities (such as JMS Helper, JMS Module Helper), and JMS multicast are not supported. You can use JMX calls as an alternative to “mbean-based helpers.”
  • JDBC resources, including WebLogic JDBC extensions.
  • Running a WebLogic RMI server in the client.

The Thin T3 client uses JDK classes to connect to the host, including when connecting to dual-stacked machines. If multiple addresses are available on the host, the connection may attempt to go to the wrong address and fail if the host is not properly configured.
[https://docs.oracle.com/middleware/12212/wls/SACLT/wlthint3client.htm#SACLT387]

MBeans with regard to version 12.2.1

As I mentioned earlier in this article, in order to get the Java code working in a 12.2.1 environment, I had to make some changes.

The MBeans and other classes used, and the jar files that contain them in 12c:

– weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class: <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
– weblogic.management.runtime.ServerRuntimeMBean.class: <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
– com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class: <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-api.jar
– com.bea.wli.config.Ref.class: <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.configfwk.jar
– weblogic.management.jmx.MBeanServerInvocationHandler.class: <Middleware Home Directory>/wlserver/modules/com.bea.core.management.jmx.jar
– com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class: <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-wls.jar

In JDeveloper 12c, the following Project Libraries and Classpath settings were made (at first):

– com.bea.core.management.jmx.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
– oracle.servicebus.configfwk.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
– oracle.servicebus.kernel-api.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
– oracle.servicebus.kernel-wls.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
– wlfullclient.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlfullclient.jar

Using wlfullclient.jar:
At first I still used the thick client wlfullclient.jar (despite the fact that it’s deprecated), which I built as follows:

cd <Middleware Home Directory>/wlserver/server/lib
java -jar wljarbuilder.jar
Creating new jar file: wlfullclient.jar

wlfullclient.jar and jarbuilder are deprecated starting from the WebLogic 12.1.3 release.
Please use one of the equivalent stand-alone clients instead. Consult Oracle WebLogic public documents for details.

Compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using weblogic.jar:
When I changed wlfullclient.jar into weblogic.jar, the OSBServiceExplorer tool also worked.

Using wlclient.jar:
When I changed wlfullclient.jar into wlclient.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlclient.jar and wljmxclient.jar:
Also adding wljmxclient.jar did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Adding wls-api.jar:
So, in order to try to resolve the errors shown above, I also added wls-api.jar. But then I got an error on:

String name = serverRuntimeMBean.getName();

I then decided to go for the WebLogic Thin T3 client wlthint3client.jar, as recommended by Oracle.

Using wlthint3client.jar:
When I changed wlfullclient.jar into wlthint3client.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlthint3client.jar and wls-api.jar:
So, in order to try to resolve the errors shown above, I also added wls-api.jar. But then again I got an error on:

String name = serverRuntimeMBean.getName();

However, I could run the OSBServiceExplorer tool in JDeveloper, but then I got the error:

Error(160,49): cannot access weblogic.security.ServerRuntimeSecurityAccess; class file for weblogic.security.ServerRuntimeSecurityAccess not found

I found that the following jar files could solve this error:

For the time being I extracted the needed class file (weblogic.security.ServerRuntimeSecurityAccess.class) from the smallest jar file to a lib directory on the file system, and in JDeveloper added that lib directory as a Classpath entry to the Project.

As it turned out, I had to repeat these steps for the following errors, which I still got after extending the Classpath:

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/utils/collections/WeakConcurrentHashMap

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/runtime/TimeServiceRuntimeMBean

Exception in thread "main" java.lang.NoClassDefFoundError: weblogic/management/partition/admin/ResourceGroupLifecycleOperations$RGState

After that, compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using a lib directory with extracted class files was not what I wanted; adding the jar files mentioned above seemed a better idea. So I picked the smallest jar files that got the job done and discarded the lib directory.

So in the end, in JDeveloper 12c, the following Project Libraries and Classpath settings were made:

Description and Class Path of each library:

- com.bea.core.management.jmx.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
- com.oracle.weblogic.management.base.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.management.base.jar
- com.oracle.weblogic.security.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.security.jar
- com.oracle.webservices.wls.jaxrpc-client.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/clients/com.oracle.webservices.wls.jaxrpc-client.jar
- oracle.servicebus.configfwk.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
- oracle.servicebus.kernel-api.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
- oracle.servicebus.kernel-wls.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
- wlthint3client.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlthint3client.jar
- wls-api.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wls-api.jar

Shell script

For ease of use, a shell script file was created to explore pipeline, proxy and business services via MBeans. WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

The content of the shell script file “OSBServiceExplorer” is:

#!/bin/bash

# Script to call OSBServiceExplorer

echo "Start calling OSBServiceExplorer"

java -classpath "OSBServiceExplorer.jar:oracle.servicebus.configfwk.jar:com.bea.core.management.jmx.jar:oracle.servicebus.kernel-api.jar:oracle.servicebus.kernel-wls.jar:wlthint3client.jar:wls-api.jar:com.oracle.weblogic.security.jar:com.oracle.webservices.wls.jaxrpc-client.jar:com.oracle.weblogic.management.base.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

echo "End calling OSBServiceExplorer"

The shell script uses the java executable to call a class named OSBServiceExplorer. The main method of this class expects the following parameters:

Parameter name and Description:

- HOSTNAME: Host name of the AdminServer
- PORT: Port of the AdminServer
- USERNAME: Username
- PASSWORD: Password

Example content of the generated text file (OSBServiceExplorer.txt):

Found server runtimes:
- Server name: DefaultServer. Server state: RUNNING

Found total of 45 refs, including the following pipelines, proxy and business services:
- ProxyService: TrackService/proxy/TrackServiceRest
- BusinessService: MusicService/business/db_InsertCD
- BusinessService: TrackService/business/CDService
- Pipeline: TrackService/pipeline/TrackServicePipeline
- ProxyService: TrackService/proxy/TrackService
- Pipeline: MusicService/pipeline/MusicServicePipeline
- ProxyService: MusicService/proxy/MusicService
- ProxyService: TrackService/proxy/TrackServiceRestJSON

ResourceConfiguration list of pipelines, proxy and business services:
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/MusicService
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

    Index#4:
       level    = 1
       label    = route
       name     = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc
       node-id  = 4
       type     = Action
       children = [1,3]
    Index#6:
       level    = 1
       label    = route-node
       name     = RouteNode1
       node-id  = 6
       type     = RouteNode
       children = [5]

  Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$TrackService$business$CDService
      - WSDL$TrackService$proxy$TrackService

    dependents:
      - ProxyService$TrackService$proxy$TrackService
      - ProxyService$TrackService$proxy$TrackServiceRest
      - ProxyService$TrackService$proxy$TrackServiceRestJSON

- Resource: com.oracle.osb:Location=DefaultServer,Name=Operations$System$Operator Settings$GlobalOperationalSettings,Type=ResourceConfigurationMBean
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$MusicService$pipeline$MusicServicePipeline,Type=ResourceConfigurationMBean
  Resource is a Pipeline (without available Configuration)
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=jca, url=jca://eis/DB/MUSIC
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://127.0.0.1:7101/cd_services/CDService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRest
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/TrackService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRestJSON

The Java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] children = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

            if ((level == 1 && type.equals("OperationalBranchNode")) ||
                level > 1) {
                level++;

                int size = children.length;

                for (int i = 0; i < size; i++) {
                    key[0] = children[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];
                
                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);            

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following pipelines, proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {
                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("Pipeline")) {
                    fileWriter.write("- Pipeline: " +
                                     ref.getFullName() + "\n");                    
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);
            
            fileWriter.write("ResourceConfiguration list of pipelines, proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                              
                try {
                    CompositeDataSupport configuration =
                        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                      "Configuration");
                      
                    if (canonicalName.contains("ProxyService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        String url = (String)transportconfiguration.get("url");
                        
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("BusinessService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        CompositeData[] urlconfiguration =
                            (CompositeData[])transportconfiguration.get("url-configuration");
                        String url = (String)urlconfiguration[0].get("url");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("Pipeline")) {
                        String servicetype =
                            (String)configuration.get("service-type");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype + "\n");
                    }
                    
                    if (canonicalName.contains("Pipeline")) {
                        fileWriter.write("" + "\n");
    
                        CompositeDataSupport pipeline =
                            (CompositeDataSupport)configuration.get("pipeline");
                        TabularDataSupport nodes =
                            (TabularDataSupport)pipeline.get("nodes");
    
                        TabularType tabularType = nodes.getTabularType();
                        CompositeType rowType = tabularType.getRowType();
    
                        Iterator keyIter = nodes.keySet().iterator();
    
                        for (int j = 0; keyIter.hasNext(); ++j) {
    
                            Object[] key = ((Collection)keyIter.next()).toArray();
    
                            CompositeData compositeData = nodes.get(key);
    
                            String label = (String)compositeData.get("label");
                            String type = (String)compositeData.get("type");
                            if (type.equals("Action") &&
                                (label.contains("wsCallout") ||
                                 label.contains("javaCallout") ||
                                 label.contains("route"))) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            } else if (type.equals("OperationalBranchNode") ||
                                       type.equals("RouteNode")) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            }
                        }
                        
                        fileWriter.write("" + "\n");
                        
                        CompositeDataSupport metadata =
                            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                          "Metadata");
                        
                        fileWriter.write("  Metadata of " + canonicalName + "\n");
    
                        String[] dependencies =
                            (String[])metadata.get("dependencies");
                        fileWriter.write("    dependencies:\n");
                        int size;
                        size = dependencies.length;
                        for (int i = 0; i < size; i++) {
                            String dependency = dependencies[i];
                            if (!dependency.contains("Xquery")) {
                                fileWriter.write("      - " + dependency + "\n");
                            }
                        }
                        fileWriter.write("" + "\n");
    
                        String[] dependents = (String[])metadata.get("dependents");
                        fileWriter.write("    dependents:\n");
                        size = dependents.length;
                        for (int i = 0; i < size; i++) {
                            String dependent = dependents[i];
                            fileWriter.write("      - " + dependent + "\n");
                        }
                        fileWriter.write("" + "\n");                
                    }
                }
                catch(Exception e) {
                    if (canonicalName.contains("Pipeline")) {
                      fileWriter.write("  Resource is a Pipeline (without available Configuration)" + "\n");
                    } else {
                      e.printStackTrace();
                    }
                }
            }
            fileWriter.close();

            System.out.println("Successfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        int port = Integer.parseInt(portString);
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length < 4) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

Running Istio on Oracle Kubernetes Engine–the managed Kubernetes Cloud Service

Sat, 2018-05-26 08:30

In a recent post, I introduced the managed Oracle Cloud Service for Kubernetes, the Oracle Kubernetes Engine (OKE): https://technology.amis.nl/2018/05/25/first-steps-with-oracle-kubernetes-engine-the-managed-kubernetes-cloud-service/. A logical next step when working with Kubernetes in somewhat challenging situations, for example with microservice style architectures and deployments, is the use of Istio – to configure, monitor and manage the so-called service mesh. Istio – https://istio.io – is brand new – not even Beta yet, although a first production release is foreseen for Q3 2018. It offers very attractive features, including:

  • intelligent routing of requests, including load balancing, A/B testing, content/condition based routing, blue/green release, canary release
  • resilience – for example through circuit breaking and throttling
  • policy enforcement and access control
  • telemetry, monitoring, reporting

In this article, I will describe how I got started with Istio on the OKE cluster that I provisioned in the previous article. Note: there is really nothing very special about OKE for Istio: it is just another Kubernetes cluster, and Istio will do its thing. More interesting perhaps is the fact that I work on a Windows laptop and use a Vagrant/VirtualBox powered Ubuntu VM to do some of the OKE interaction, especially when commands and scripts are Linux only.

The steps I will describe:

  • install Istio client in the Linux VM
  • deploy Istio to the OKE Kubernetes Cluster
  • deploy the Bookinfo sample application with Sidecar Injection (the Envoy Sidecar is the proxy that is added to every Pod to handle all traffic into and out of the Pod; this is the magic that makes Istio work)
  • try out some typical Istio things – like traffic management and monitoring

The conclusion is that leveraging Istio on OKE is quite straightforward.

 

Install Istio Client in Linux VM

The first step with Istio, prior to deploying Istio to the K8S cluster, is the installation on your client machine of the istioctl client application and associated sources, including the Kubernetes yaml files required for the actual deployment. Note: I tried deployment of Istio using a Helm chart, but that did not work; it seems that Istio 0.7.x is not suitable for Helm (release 0.8 is supposed to be ready for Helm).

Following the instructions in the quick start guide: https://istio.io/docs/setup/kubernetes/quick-start.html

and working in the Ubuntu VM that I have spun up with Vagrant and Virtual Box, I go through these steps:

Ensure that the current OCI and OKE user kubie is allowed to do cluster administration tasks:

kubectl create clusterrolebinding k8s_clst_adm --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaavorp3sgemd6bh5wjr3krnssvcvlzlgcxcnbrkwyodplvkbnea2dq


Download and install istioctl:

curl -L https://git.io/getLatestIstio | sh -

Then add the bin directory of the Istio release directory structure to the PATH variable, to make istioctl accessible from anywhere.
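For example (the release directory name depends on the version that getLatestIstio downloaded; istio-0.7.1 here is illustrative):

```shell
# Hypothetical: the getLatestIstio script unpacks into a versioned directory;
# prepend its bin directory to PATH so istioctl resolves from anywhere.
export PATH="$PWD/istio-0.7.1/bin:$PATH"
```

Adding the same line to ~/.profile makes the setting survive new shell sessions.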


Deploy Istio to the OKE Kubernetes Cluster

The resources that were created during the installation of the Istio client include the yaml files that can be used to deploy Istio to the Kubernetes cluster. The command to perform that installation is very straightforward:

kubectl apply -f install/kubernetes/istio.yaml

The namespace istio-system is created, the logical container for all Istio related resources.

The last two commands:

kubectl get svc -n istio-system

and

kubectl get pods -n istio-system

are used to verify what has been installed and is now running successfully in the Kubernetes cluster. The Dashboard provides a similar overview.

Deploy supporting facilities

Istio is prepared for interaction with a number of facilities that help with monitoring and tracing, such as Zipkin, Prometheus, Jaeger and Grafana. The core installation of Istio does not include these tools. Using the following kubectl commands, we can extend the istio-system namespace with these tools:

kubectl apply -f install/kubernetes/addons/prometheus.yaml

kubectl apply -f install/kubernetes/addons/zipkin.yaml

kubectl apply -n istio-system -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml

kubectl apply -f install/kubernetes/addons/grafana.yaml

kubectl apply -f install/kubernetes/addons/servicegraph.yaml

Istio-enabled applications can be configured to collect trace spans using Zipkin or Jaeger. On Grafana (https://grafana.com/): the Grafana add-on is a pre-configured instance of Grafana. The base image (grafana/grafana:4.1.2) has been modified to start with both a Prometheus data source and the Istio Dashboard installed. The base install files for Istio, and Mixer in particular, ship with a default configuration of global (used for every service) metrics. The Istio Dashboard is built to be used in conjunction with the default Istio metrics configuration and a Prometheus backend. More details on Prometheus: https://prometheus.io/.

To view a graphical representation of your service mesh, use the Service Graph add-on: https://istio.io/docs/tasks/telemetry/servicegraph.html.

For log gathering with fluentd and writing the logs to the Elastic Stack, see: https://istio.io/docs/tasks/telemetry/fluentd.html.

    Deploy the Bookinfo sample application with Sidecar Injection

    (the Envoy Sidecar is the proxy that is added to every Pod to handle all traffic into and out of the Pod; this is the magic that makes Istio work)

    The Bookinfo sample application (https://istio.io/docs/guides/bookinfo.html) is shipped as part of the Istio client installation. This application is composed of several (versions of) microservices that interact. These services and their interactions can be used to investigate the functionality of Istio.

    image

    To install the Bookinfo application, all you need to do:

    kubectl apply -f <(istioctl kube-inject –debug -f samples/bookinfo/kube/bookinfo.yaml)

The istioctl kube-inject instruction (see https://istio.io/docs/reference/commands/istioctl.html) preprocesses the bookinfo.yaml file – injecting the specs for the Envoy Sidecar. Note: automatic injection of the sidecar into all Pods that get deployed is supported in Kubernetes 1.9 and higher. I did not yet get that to work, so I am using manual or explicit injection.
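Conceptually, the injection rewrites each Pod template along these lines (a heavily abbreviated sketch – the image tags and the exact generated spec depend on the Istio version):

```yaml
# Before injection: a single application container
spec:
  containers:
  - name: productpage
    image: istio/examples-bookinfo-productpage-v1
---
# After injection (abbreviated): an init container configures iptables
# so that all traffic in and out of the Pod flows through the added
# Envoy sidecar container
spec:
  initContainers:
  - name: istio-init
    image: docker.io/istio/proxy_init
  containers:
  - name: productpage
    image: istio/examples-bookinfo-productpage-v1
  - name: istio-proxy
    image: docker.io/istio/proxy
```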

    image

    We can list the pods and inspect one of them:

    image

    The product page pod was defined with a single container – with a Python web application. However, because of the injection that Istio performed prior to creation of the Pod on the cluster, the Pod actually contains more than a single container: the istio-proxy was added to the pod. The same thing happened in the other pods in this bookinfo application.

image

     

    This is what the Bookinfo application looks like:

    image

    (note: using kubectl port-forward I can make the application accessible from my laptop, without having to expose the service on the Kubernetes cluster)

    Try out some typical Istio things – like traffic management and monitoring

      Just by accessing the application, metrics will be gathered by the sidecar and shared with Prometheus. The Prometheus dashboard visualizes these metrics:

      image

      Zipkin helps to visualize the end to end trace of requests through the service mesh. Here is the request to the productpage dissected:

      image

      A drilldown reveals:

      image

Reviews apparently is called sequentially, after the call to Details is complete. This may be correct, but perhaps we can improve performance by performing these calls in parallel. The call to Reviews takes much longer than the one to Details. Both are still quite fast – no more than 35 ms.

      The Grafana dashboard plugin for Istio provides an out of the box dashboard on the HTTP requests that happen inside the service mesh. We can see the number of requests and the success rate (percentage of 200 and 300 response codes vs 400 and 500 responses)

      image

      Here are some more details presented in the dashboard:

      image

       

      At this point I am ready to start using Istio in anger – for my own microservices.

      Resources

      Istio.io – https://istio.io/

      Istio on Kubernetes – Quickstart Guide – https://istio.io/docs/setup/kubernetes/quick-start.html

      Working with the Istio Sample Application Bookinfo – https://istio.io/docs/guides/bookinfo.html

      YouTube: Module 1: Istio – Kubernetes – Getting Started – Installation and Sample Application Review by Bruno Terkaly – https://www.youtube.com/watch?v=ThEsWl3sYtM

      Istioctl reference: https://istio.io/docs/reference/commands/istioctl.html

      The post Running Istio on Oracle Kubernetes Engine–the managed Kubernetes Cloud Service appeared first on AMIS Oracle and Java Blog.

      First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service

      Fri, 2018-05-25 14:59

Oracle recently (May 2018) launched its Managed Kubernetes Cloud Service (OKE – Oracle Kubernetes Engine) – see for example this announcement: https://blogs.oracle.com/developers/kubecon-europe-2018-oracle-open-serverless-standards-fn-project-and-kubernetes. Yesterday I got myself a new free cloud trial on the Oracle Public Cloud (https://cloud.oracle.com/tryit). Subsequently, I created a Kubernetes cluster and deployed my first pod on that cluster. In this article, I will describe the steps that I went through:

      • create Oracle Cloud Trial account
      • configure OCI (Oracle Cloud Infrastructure) tenancy
        • create service policy
        • create OCI user
        • create virtual network
        • create security lists
        • create compute instance
      • configure Kubernetes Cluster & Node Pool; have the cluster deployed
      • install and configure OCI CLI tool
      • generate kubeconfig file
      • connect to Kubernetes cluster using Kubectl – inspect and roll out a Pod

      The resources section at the end of this article references all relevant documentation.

      Configure OCI (Oracle Cloud Infrastructure) tenancy

      Within your tenancy, a suitably pre-configured compartment must already exist in each region in which you want to create and deploy clusters. The compartment must contain the necessary network resources already configured (VCN, subnets, internet gateway, route table, security lists). For example, to create a highly available cluster spanning three availability domains, the VCN must include three subnets in different availability domains for node pools, and two further subnets for load balancers.

      Within the root compartment of your tenancy, a policy statement (allow service OKE to manage all-resources in tenancy) must be defined to give Container Engine for Kubernetes access to resources in the tenancy.

      Create policy

      You have to define a policy to enable Container Engine for Kubernetes to perform operations on the compartment.

      Click on Identity | Policies:

      image

      image

      Ensure you are in the Root Compartment. Click on Create Policy. Define a new policy. The statement must be:

      allow service OKE to manage all-resources in tenancy

      image

      Click on Create. The new policy is added to the list.

      image

Precreate the required network resources

      See for instructions: https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengnetworkconfig.htm 

      Create a Virtual Cloud Network.

      The VCN in which you want to create and deploy clusters must be configured as follows:

      • The VCN must have a CIDR block defined that is large enough for at least five subnets, in order to support the number of hosts and load balancers a cluster will have. A /16 CIDR block would be large enough for almost all use cases (10.0.0.0/16 for example). The CIDR block you specify for the VCN must not overlap with the CIDR block you specify for pods and for the Kubernetes services (see CIDR Blocks and Container Engine for Kubernetes).
      • The VCN must have an internet gateway defined.
      • The VCN must have a route table defined that has a route rule specifying the internet gateway as the target for the destination CIDR block.
      • The VCN must have five subnets defined:

        • Three subnets in which to deploy worker nodes. Each worker node subnet must be in a different availability domain. The worker node subnets must have different security lists to the load balancer subnets.
        • Two subnets to host load balancers. Each load balancer subnet must be in a different availability domain. The load balancer subnets must have different security lists to the worker node subnets.
      • The VCN must have security lists defined for the worker node subnets and the load balancer subnets. The security list for the worker node subnets must have:

        • Stateless ingress and egress rules that allow all traffic between the different worker node subnets.
        • Stateless ingress and egress rules that allow all traffic between worker node subnets and load balancer subnets.

        Optionally, you can include ingress rules for worker node subnets to:

      image

      image

      image

      Create Internet Gateway

      image

      image

      Create Route Table

      The VCN in which you want to create and deploy clusters must have a route table. The route table must have a route rule that specifies an internet gateway as the target for the destination CIDR block 0.0.0.0/0.

      image

      image

      Set DHCP options

      The VCN in which you want to create and deploy clusters must have DHCP Options configured. The default value for DNS Type of Internet and VCN Resolver is acceptable.

      image

Create Security Lists

      The VCN in which you want to create and deploy clusters must have security lists defined for the worker node subnets and the load balancer subnets. The security lists for the worker node subnets and the load balancer subnets must be different.

      Create list called workers

      image

      image

      Worker Node Seclist Configuration


      image

Create Security List loadbalancers

      image

      image

      Create Subnet in VCN

      The VCN in which you want to create and deploy clusters must usually have (five) subnets defined as follows:

      • (Three) subnets in which to deploy worker nodes. Each worker node subnet must be in a different availability domain. The worker node subnets must have different security lists to the load balancer subnets.
      • (Two) subnets to host load balancers. Each load balancer subnet must be in a different availability domain. The load balancer subnets must have different security lists to the worker node subnets.

      In addition, all the subnets must have the following properties set as shown:

      • Route Table: The name of the route table that has a route rule specifying an internet gateway as the target for the destination CIDR block 0.0.0.0/0
      • Subnet access: Public
      • DHCP options: Default
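For illustration, a subnet layout that satisfies these requirements could look as follows, assuming the VCN uses the 10.0.0.0/16 block (the names, CIDR ranges and availability-domain assignments here are just an example):

```
VCN              10.0.0.0/16
workers-1        10.0.10.0/24   AD-1   security list: workers
workers-2        10.0.11.0/24   AD-2   security list: workers
workers-3        10.0.12.0/24   AD-3   security list: workers
loadbalancers-1  10.0.20.0/24   AD-1   security list: loadbalancers
loadbalancers-2  10.0.21.0/24   AD-2   security list: loadbalancers
```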

      image

      Subnet called workers-1

      image

      Associated Security List workers with this subnet:

      image

      image

      Create a second subnet called loadbalancers1:

      image

      Press the Create button. Now we have all subnets we need.

      image

      Create Compute Instance

      Click on Home. Then click on Create Compute Instance.

      image

      Set the attributes of the VM as shown in the two figures. I am sure other settings are fine too – but at least these work for me.

      image

image

      Click on Create Instance. The VM will now be spun up.

      image

      Create a non-federated identity – new user kubie in the OCI tenancy

      Note: initially I tried to just create the cluster as the initial user that was created when I initiated the OCI tenancy in my new trial account. However, the Create Cluster button was not enabled.

      image

Oracle Product Management suggested that my account probably was a Federated Identity, which OKE does not support at this time. In order to use OKE in one of these accounts, you need to create a native OCI Identity User.

      image

      image

      Click on Create/Reset password. You will be presented with a generated password. Copy it to the clipboard or in some text file. You will not see it again.

      Add user kubie to group Administrators:

      image

      After creating the OCI user kubie we can now login as that user, using the generated password that you had saved in the clipboard or in a text file:

      image

      Configure Kubernetes Cluster & Node Pool and have the cluster deployed

After preparing the OCI tenancy to make it comply with the prerequisites for creating the K8S cluster, we can now proceed and provision the cluster.

Click on Containers in the main menu and select Clusters as the submenu option. Click on Create Cluster.

      image

      Provide details for the Kubernetes cluster; see instructions here: https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm . Set the name to k8s-1, select 1.9.7 as the version. Select the VCN that was created earlier on. Select the two subnets that were defined for the load-balancers.

      Set the Kubernetes Service CIDR Block: for example to 10.96.0.0/16.

      Set the Pods CIDR Block: for example to 10.244.0.0/16.

      image

      Enable Dashboard and Tiller:

      image

image

      Click on Add Node Pool.

      The details for the Node Pool:

      image

      Then press create to start the creation of the cluster.

      The k8s-1 cluster is added to the list:

      image

      At this point, the cluster creation request is being processed:

      image

      After a little while, the cluster has been created:

      image

      Install and configure OCI CLI tool

      To interact with the Oracle Cloud Infrastructure, we can make use of the OCI Command Line Interface – a Python based tool. We need to use this tool to generate the kubeconfig file that we need to interact with the Kubernetes cluster.

      My laptop is a Windows machine. I have used vagrant to spin up a VM with Ubuntu Xenial (16.04 LTS), to have an isolated and Linux based environment to work with the OCI CLI.

      In that environment, I download and install the OCI CLI:

bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"

      image

      Some more output from the installation process:

      image

      And the final steps:

      image

      Next, I need to configure the OCI for my specific environment:

      oci setup config

image

The setup action generates a public and private key pair – the former in a file called oci_api_key_public.pem. The contents of this file should be added as a new public key to the OCI user – in my case the user called kubie.

image

      At this point, OCI CLI is installed and configured for the right OCI Tenancy. The public key is added to the user account. We should now be able to use OCI CLI to access the OCI Tenancy.

      Try out the OCI CLI with simple calls like:

      oci compute instance list

      and

      oci compute instance list -c ocid1.tenancy.oc1..aaa

      Note: the compartment identifier parameter takes the value of the Tenancy OCID.

      image

      Generate kubeconfig file

      After installing and configuring the OCI CLI tool, we can continue to generate the kubeconfig file. The OCI Console contains the page with details on the k8s-1 cluster. Press the Access Kubeconfig button. A popup opens, with the instructions to generate the kubeconfig file – unfortunately not yet to simply download the kubeconfig file.

      Download the get-kubeconfig.sh script to the Ubuntu VM.

      image

Make this script executable and execute it using the instructions copied from the popup shown above.

      Using the commands provided from the OCI Console, I can generate the kubeconfig file:

      image


      Connect to Kubernetes cluster using Kubectl – inspect and roll out a Pod

        After generating the kubeconfig file, I have downloaded and installed kubectl to my Ubuntu VM, using the Git Gist:
        https://gist.githubusercontent.com/fabriciosanchez/2f193f76dfb1af3a3895661fce620b1a/raw/a28a70aca282a28d690ae240679e99125a3fd763/InstallingKubectl.sh

To download:

        curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

        To make executable:


        chmod +x ./kubectl
        sudo mv ./kubectl /usr/local/bin/kubectl

        And try out if kubectl can be used:

        kubectl get nodes

        image

On a Windows client that has kubectl installed and has access to the kubeconfig file created in the previous section, set the environment variable KUBECONFIG to reference the kubeconfig file that was generated using the OCI CLI. Using that kubeconfig file, create a deployment from a test deployment.yaml through kubectl:
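The contents of the test deployment.yaml are not reproduced in the post; a minimal nginx deployment along these lines would do (the name and replica count are just an example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
```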

        image

Expose the nginx deployment. Get the list of services. Then change the type of the service from ClusterIP to NodePort. Then get the list of services again to retrieve the port at which the service is exposed on the cluster node (31907).
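The same type change can also be expressed declaratively. Assuming the exposed service is named nginx-deployment (an assumption – the actual name depends on the expose command used), the edited Service would look roughly like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort          # changed from ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    # nodePort is assigned by the cluster if omitted (31907 in the screenshots)
```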

        image

        The Kubernetes dashboard is now made available on the client using:

kubectl proxy --kubeconfig="C:\data\lucasjellemag-oci\kubeconfig"

        image

        Now we can check the deployment in the K8S Dashboard:

        image

        Here the new Type of Service in the Kubernetes Dashboard:

        image

        Access NGINX from any client anywhere:

        image

        Resources

        Documentation on Preparing for an OKE Cluster and installing the Cluster – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Concepts/contengprerequisites.htm 

        Docs on how to get hold of kubeconfig file – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm 

        Installing and Configuring the OCI Command Line Interface – https://docs.us-phoenix-1.oraclecloud.com/Content/API/SDKDocs/cliinstall.htm 

        Kubectl Reference – https://kubernetes-v1-4.github.io/docs/user-guide/kubectl-overview/ 

        Git Gist for installing kubectl on Ubuntu –
        https://gist.githubusercontent.com/fabriciosanchez/2f193f76dfb1af3a3895661fce620b1a/raw/a28a70aca282a28d690ae240679e99125a3fd763/InstallingKubectl.sh

        Deploying a Sample App to the K8S Cluster – https://docs.us-phoenix-1.oraclecloud.com/Content/ContEng/Tasks/contengdeployingsamplenginx.htm 

        Articles on the availability of the Oracle Kubernetes Engine cloud service:

        http://www.devopsonline.co.uk/oracles-new-container-engine-for-kubernetes/

        https://containerjournal.com/2018/05/03/oracle-extends-kubernetes-and-serverless-frameworks-support/

        https://www.forbes.com/sites/oracle/2018/05/10/cozying-up-to-kubernetes-in-copenhagen-developers-celebrate-open-serverless-apps/

        https://www.itopstimes.com/contain/oracle-brings-new-kubernetes-features-to-its-container-engine/

        https://blogs.oracle.com/developers/kubecon-europe-2018-oracle-open-serverless-standards-fn-project-and-kubernetes

        The post First steps with Oracle Kubernetes Engine–the managed Kubernetes Cloud Service appeared first on AMIS Oracle and Java Blog.

        5 main building blocks of the new Visual Builder Cloud Service

        Fri, 2018-05-25 06:16

In May 2018 Oracle introduced the new version of Visual Builder Cloud Service. This version is not just aimed at the Citizen Developer; in the end an experienced JavaScript developer can do nice things with it.

In this blog I will have a look at 5 of the 6 main building blocks you build a VBCS application with:

        1. REST service connections
        2. Flows and Pages
        3. Variables
        4. Action Chains
        5. UI Components

        Putting all of this in one blog is a lot, so this is a lengthy one. The final result can be found here.

With VBCS you can create a lot using drag and drop! But in the end you have to be aware that it is all JavaScript, HTML5 and CSS you are creating. And it is all built on JET!

        Before we can start with these concepts, I create a New Application.

        Rest Service Connection

I start with creating some endpoints for a publicly available REST API, https://docs.openaq.org/ – an API with air quality measurements.

        This API contains several endpoints, the graph I am going to create uses the measurements API. As I am from the Netherlands, I use data from the Netherlands in this blog.

        First I create the Service Connection based on the Cities endpoint.

        1. Create Service Connection by Service Endpoint

2. Specify "URL": https://api.openaq.org/v1/cities

        3. Service Base URL is populated with https://api.openaq.org/v1/, give Service proper name

        4. For Request > URL Parameters Add the Static Parameter “country” with value “NL”

        5. Test the endpoint and Copy to Response Body

        6. Create the endpoint
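The cities endpoint returns a JSON document of roughly this shape (the city names and counts below are made up for illustration; see the openaq documentation for the exact fields):

```json
{
  "meta": { "name": "openaq-api", "page": 1 },
  "results": [
    { "city": "Amsterdam", "country": "NL", "count": 21301, "locations": 7 },
    { "city": "Utrecht",   "country": "NL", "count": 12345, "locations": 4 }
  ]
}
```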

        Create flow and first Page

        I will create a Web Application that contains the main Flow and main-start Page.

        On the main-start Page I drop a Table UI Component.

        I have marked yellow the Table UI component, the Collapse/Expand Property Inspector and Quick start button.

        For this Table we Add Data, which is available on the Quick start Tab. Select the cities Service Endpoint.

        As the country is hardcoded, I won’t display it in the table. I reorder the columns with drag and drop. City I select as Primary Key.

        In the layout editor the Data from the Service endpoint is displayed. In the code you will see that an Oracle JET oj-table component is used.

        You can also run the Application:

        Next we add a Page for the Line-graph and drag an Oracle JET Line Chart on it.

        Variables and Types

        The responses from a Rest endpoints are stored in Variables, UI components and Action Chains use Variables.

When you have a look at the Line Chart code, it contains two arrays: Groups and Series. The Groups array is an array of Strings ( ['Group A','Group B'] ), the Series array is an array of Objects ( [{name:'Series 1',items:[41,34]},{name:'Series 2',items:[55,30]},…] ). The Series object consists of a String (name) and a numeric array (items).
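Spelled out as plain JavaScript, the two shapes look like this (the sample values are made up, matching the snippet above):

```javascript
// Shapes the JET line chart consumes; sample values for illustration only
const groups = ['Group A', 'Group B'];            // x-axis labels
const series = [                                   // one entry per line
  { name: 'Series 1', items: [41, 34] },
  { name: 'Series 2', items: [55, 30] }
];

// each series must supply one numeric item per group (per x-axis label)
const consistent = series.every(s => s.items.length === groups.length);
console.log(consistent); // true
```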

        For the line Graph I create two Types in the main Flow.

        1. a Type with a structure that can hold the result data of a REST call
        2. a Type with a structure that can be mapped to a JET line graph

        A getCitiesResponse type was already created by VBCS for the response of the REST call. This is the final result I want:

        Action Chain

        I create an Action chain that will do these steps:

        • Show notification with City selected
        • Call measurement REST endpoint and map to linegraphDatatype
        • Map linegraphDatatype to linegraphType using JavaScript
        • Navigate to page line-graph page

When I open the Actions tab for the main-start Page, I see that an Action Chain was already created. This Action Chain saves the Primary Key of the row selected in my city Table.

        I now create the mentioned Action Chain. In this ActionChain I create a variable and assign the page variable with the selected City as Default.

         

Next I drop a Fire Notification Action on the +-sign below Start.

I set the Display Mode to transient and specify the Message as

{{ "AirQuality data for " + $chain.variables.selectedCity + " is being retrieved." }}

        The measurement REST endpoint is called with a Call Rest Endpoint Action. selectedCity from the Action Chain is mapped to the city parameter of this Action.

The result of this Action has to be mapped to the linegraphData variable using an Assign Variables Action.

This linegraphData array I need to convert to my linegraph object. For this I will call a piece of JavaScript. First I create a function in the main Flow JavaScript.

The complete piece of JavaScript can be found in the Application Export that is attached to this blog.
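The gist of the conversion can be sketched roughly as follows. Note that transformMeasurements is a hypothetical name, the field names (parameter, date.utc, value) follow the openaq /v1/measurements response, and the real function in the application export may differ:

```javascript
// Hypothetical sketch of the Flow function: turn an array of openaq
// measurement records into the { groups, series } structure the JET
// line chart expects. Simplification: assumes every parameter has a
// value for every timestamp.
function transformMeasurements(chartData) {
  const groups = [];            // x-axis: the measurement timestamps
  const itemsByParameter = {};  // one series per parameter (pm25, no2, ...)
  chartData.forEach(function (m) {
    if (groups.indexOf(m.date.utc) === -1) groups.push(m.date.utc);
    (itemsByParameter[m.parameter] = itemsByParameter[m.parameter] || []).push(m.value);
  });
  const series = Object.keys(itemsByParameter).map(function (name) {
    return { name: name, items: itemsByParameter[name] };
  });
  return { groups: groups, series: series };
}

const sample = [
  { parameter: 'pm25', date: { utc: '2018-05-01T00:00:00Z' }, value: 12 },
  { parameter: 'pm25', date: { utc: '2018-05-02T00:00:00Z' }, value: 9 }
];
console.log(transformMeasurements(sample).series[0].name); // "pm25"
```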

This JavaScript function can be called with a Call Module Function Action; VBCS recognizes the function added in the JavaScript. The linegraphData variable needs to be mapped to the chartData parameter.

        The result from the javascript function needs to be mapped to the linegraph variable using an Assign Variables Action.

        Finally I navigate to the main-graph Page using a Navigate Action.

A quick way to call this Action Chain (and get the line graph) is by calling it from the already existing Action Chain that handles the selection of a row in the Cities table. I add a Call Action Chain Action.

        UI Components

        The linegraph variable is now ready to be used by our graph. In the Data Tab of the Chart we set the Groups and Series.

        To get a readable layout for the date-axis, I enable the x-axis as time-axis-type:

        Everything together

        The final graph for Amsterdam:

        Amsterdam pm25 graph

        The VBCS export can be downloaded here

        The post 5 main building blocks of the new Visual Builder Cloud Service appeared first on AMIS Oracle and Java Blog.

        Running Kafka, KSQL and the Confluent Open Source Platform 4.x using Docker Compose on a Windows machine

        Wed, 2018-05-23 06:39

        image

        For conducting some experiments and preparing several demonstrations I needed a locally running Kafka Cluster (of a recent release) in combination with a KSQL server instance. Additional components from the Core Kafka Project and the Confluent Open Source Platform (release 4.1) would be convenient to have. I needed everything to run on my Windows laptop.

        This article describes how I could get what I needed using Vagrant and VirtualBox, Docker and Docker Compose and two declarative files. One is the vagrant file that defines the Ubuntu Virtual Machine that Vagrant spins up in collaboration with VirtualBox and that will contain Docker and Docker Compose. This file is discussed in more detail in this article: https://technology.amis.nl/2018/05/21/rapidly-spinning-up-a-vm-with-ubuntu-and-docker-on-my-windows-machine-using-vagrant-and-virtualbox/. The file itself can be found here, as GitHub Gist: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2.

        The second file is the Docker Compose file – which can be found on GitHub as well: https://gist.github.com/lucasjellema/c06f8a790114396f11eadd10434d9b7e . Note: I received great help from Guido Schmutz from Trivadis for this file!

        The Docker Compose file is shared into the VM when vagrant boots up the VM

        image

        and is executed automatically by the Vagrant docker-compose provisioner.

image

        Alternatively, you can ssh into the VM and execute it manually using these commands:

        cd /vagrant

docker-compose up -d

        image

        Docker Compose will start all Docker Containers configured in this file, the order determined by the dependencies between the containers. Note: the IP address in this file (192.168.188.102) should correspond with the IP address defined in the vagrantfile. The two gists currently do not correspond because the Vagrantfile defined 192.168.188.110 as the IP address for the VM.

Once Docker Compose has done its thing, all containers configured in the docker-compose.yml file will be running. The Kafka Broker is accessible at 192.168.188.102:9092, ZooKeeper at 192.168.188.102:2181 and the REST API at port 8084; the Kafka Connect UI at 8001, the Schema Registry UI at 8002 and the KSQL Server at port 8088. The Kafka Manager listens at port 9000.

        image

        image

        image

        image

        To run the KSQL Command Line, use this command to execute the shell in the Docker container called ksql-server:

        docker exec -it vagrant_ksql-server_1 /bin/bash

        Then, inside that container, simply type

        ksql

        And for example list all topics:

        list topics;

        image

        Here follows the complete contents of the docker-compose.yml file (largely credited to Guido Schmutz):

        
        version: '2'
        services:
          zookeeper:
            image: "confluentinc/cp-zookeeper:4.1.0"
            hostname: zookeeper
            ports:
              - "2181:2181"
            environment:
              ZOOKEEPER_CLIENT_PORT: 2181
              ZOOKEEPER_TICK_TIME: 2000
        
          broker-1:
            image: "confluentinc/cp-enterprise-kafka:4.1.0"
            hostname: broker-1
            depends_on:
              - zookeeper
            ports:
              - "9092:9092"
            environment:
              KAFKA_BROKER_ID: 1
              KAFKA_BROKER_RACK: rack-a
              KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
              KAFKA_ADVERTISED_HOST_NAME: 192.168.188.102
              KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://192.168.188.102:9092'
              KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
              KAFKA_DELETE_TOPIC_ENABLE: "true"
              KAFKA_JMX_PORT: 9999
              KAFKA_JMX_HOSTNAME: 'broker-1'
              KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
              CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: broker-1:9092
              CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
              CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
              CONFLUENT_METRICS_ENABLE: 'true'
              CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
        
          schema_registry:
            image: "confluentinc/cp-schema-registry:4.1.0"
            hostname: schema_registry
            container_name: schema_registry
            depends_on:
              - zookeeper
              - broker-1
            ports:
              - "8081:8081"
            environment:
              SCHEMA_REGISTRY_HOST_NAME: schema_registry
              SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
              SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_ORIGIN: '*'
              SCHEMA_REGISTRY_ACCESS_CONTROL_ALLOW_METHODS: 'GET,POST,PUT,OPTIONS'
        
          connect:
            image: confluentinc/cp-kafka-connect:3.3.0
            hostname: connect
            container_name: connect
            depends_on:
              - zookeeper
              - broker-1
              - schema_registry
            ports:
              - "8083:8083"
            environment:
              CONNECT_BOOTSTRAP_SERVERS: 'broker-1:9092'
              CONNECT_REST_ADVERTISED_HOST_NAME: connect
              CONNECT_REST_PORT: 8083
              CONNECT_GROUP_ID: compose-connect-group
              CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
              CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
              CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
              CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
              CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
              CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
              CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter
              CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
              CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
              CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
              CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
              CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
              CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
            volumes:
              - ./kafka-connect:/etc/kafka-connect/jars
        
          rest-proxy:
            image: confluentinc/cp-kafka-rest
            hostname: rest-proxy
            depends_on:
              - broker-1
              - schema_registry
            ports:
              - "8084:8084"
            environment:
              KAFKA_REST_ZOOKEEPER_CONNECT: '192.168.188.102:2181'
              KAFKA_REST_LISTENERS: 'http://0.0.0.0:8084'
              KAFKA_REST_SCHEMA_REGISTRY_URL: 'http://schema_registry:8081'
              KAFKA_REST_HOST_NAME: 'rest-proxy'
        
          adminer:
            image: adminer
            ports:
              - 8080:8080
        
          db:
            image: mujz/pagila
            environment:
              - POSTGRES_PASSWORD=sample
              - POSTGRES_USER=sample
              - POSTGRES_DB=sample
        
          kafka-manager:
            image: trivadisbds/kafka-manager
            hostname: kafka-manager
            depends_on:
              - zookeeper
            ports:
              - "9000:9000"
            environment:
              ZK_HOSTS: 'zookeeper:2181'
              APPLICATION_SECRET: 'letmein'   
        
          connect-ui:
            image: landoop/kafka-connect-ui
            container_name: connect-ui
            depends_on:
              - connect
            ports:
              - "8001:8000"
            environment:
              - "CONNECT_URL=http://connect:8083"
        
          schema-registry-ui:
            image: landoop/schema-registry-ui
            hostname: schema-registry-ui
            depends_on:
              - broker-1
              - schema_registry
            ports:
              - "8002:8000"
            environment:
              SCHEMAREGISTRY_URL: 'http://192.168.188.102:8081'
        
          ksql-server:
            image: "confluentinc/ksql-cli:4.1.0"
            hostname: ksql-server
            ports:
              - '8088:8088'
            depends_on:
              - broker-1
              - schema_registry
            # Note: The container's `run` script will perform the same readiness checks
            # for Kafka and Confluent Schema Registry, but that's ok because they complete fast.
            # The reason we check for readiness here is that we can insert a sleep time
            # for topic creation before we start the application.
            command: "bash -c 'echo Waiting for Kafka to be ready... && \
                               cub kafka-ready -b 192.168.188.102:9092 1 20 && \
                               echo Waiting for Confluent Schema Registry to be ready... && \
                               cub sr-ready schema_registry 8081 20 && \
                               echo Waiting a few seconds for topic creation to finish... && \
                               sleep 2 && \
                               /usr/bin/ksql-server-start /etc/ksql/ksql-server.properties'"
            environment:
              KSQL_CONFIG_DIR: "/etc/ksql"
              KSQL_OPTS: "-Dbootstrap.servers=192.168.188.102:9092 -Dksql.schema.registry.url=http://schema_registry:8081 -Dlisteners=http://0.0.0.0:8088"
              KSQL_LOG4J_OPTS: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
        
            extra_hosts:
              - "moby:127.0.0.1"
              
        

        Resources

        Vagrant File: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2

        Docker Compose file: https://gist.github.com/lucasjellema/c06f8a790114396f11eadd10434d9b7e

        The post Running Kafka, KSQL and the Confluent Open Source Platform 4.x using Docker Compose on a Windows machine appeared first on AMIS Oracle and Java Blog.

        ADF Performance Tuning: Manage Your Fetched Data

        Wed, 2018-05-23 01:41

In this blog I want to stress how important it is to manage the data that you fetch and load into your ADF application. I blogged on this subject earlier. It is still underestimated in my opinion. Recently I was involved in troubleshooting the performance in two different ADF projects. They had one thing in common: their servers frequently became unavailable, and they fetched far too many rows from the database. This will likely lead to memory over-consumption, ‘stop the world’ garbage collections that can run far too long, a much slower application, or in the worst case even servers that run into an OutOfMemoryError and become unavailable.

        Developing a plan to manage and monitor fetched data during the whole lifetime of your ADF application is an absolute must. Keeping your sessions small is indispensable to your performance success. This blog shows a few examples of what can happen if you do not do that.

        Normal JVM Heap and Garbage Collection

First, just for our reference, let’s have a look at what a normal, ‘healthy’ JVM heap and garbage collection should look like (left bottom). The ADF Performance Monitor shows real-time or historic heap usage and garbage collection times. The heap space (purple) over time is like a saw-tooth shaped line – showing a healthy JVM. There are many small and just a few big garbage collections (pink). This is because there are basically two types of garbage collectors. The big garbage collections do not run longer than 5 seconds:

        Read more on adfpm.com.

        The post ADF Performance Tuning: Manage Your Fetched Data appeared first on AMIS Oracle and Java Blog.

        Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code

        Tue, 2018-05-22 07:29

        Put simply – CQRS (Command Query Responsibility Segregation) is an architecture pattern that recognizes the fact that it may be wise to separate the database that processes data manipulations from the engines that handle queries. When data retrieval requires special formats, scale, availability, TCO, location, search options and response times, it is worth considering introducing additional databases to handle those specific needs. These databases can provide data in a way that caters for the special needs to special consumers – by offering data in filtered, preprocessed format or shape or aggregation, with higher availability, at closer physical distance, with support for special search patterns and with better performance and scalability.

        A note of caution: you only introduce CQRS in a system if there is a clear need for it. Not because you feel obliged to implement such a shiny, much talked about pattern or you feel as if everyone should have it. CQRS is not a simple thing – especially in existing systems, packaged applications and legacy databases. Detecting changes and extracting data from the source, transporting and converting the data and applying the data in a reliable, fast enough way with the required level of consistency is not trivial.
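Before diving into the demo, the separation can be made concrete with a minimal, in-memory sketch (plain Node; the names and data shapes below are made up for illustration and do not appear in the demo code – in the real demo the hop between the command side and the query side is a Kafka topic, and the read index is Elastic Search):

```javascript
// Minimal in-memory sketch of the command/query split: commands go to the
// write store, a projector keeps a read-optimized index up to date, and
// queries only ever touch that index - never the write store.
const writeStore = [];        // system of record (command side)
const readIndex = new Map();  // denormalized view (query side)

function handleCommand(tweet) {
  writeStore.push(tweet);     // command side: append the raw event
  project(tweet);             // in the demo, this hop is a Kafka topic
}

function project(tweet) {     // projector: shape the data for querying
  const key = tweet.hashtag.toLowerCase();
  const texts = readIndex.get(key) || [];
  texts.push(tweet.text);
  readIndex.set(key, texts);
}

function queryByHashtag(hashtag) {  // query side: never reads writeStore
  return readIndex.get(hashtag.toLowerCase()) || [];
}

handleCommand({ hashtag: 'jeeconf', text: 'hello Kyiv' });
handleCommand({ hashtag: 'JEEConf', text: 'CQRS demo' });
console.log(queryByHashtag('jeeconf')); // [ 'hello Kyiv', 'CQRS demo' ]
```

The essence is that the query side can use a completely different shape (here a lowercased hashtag index) than the system of record, which is exactly what the Elastic Search index does for the tweets in the demo below.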

In many of my conference presentations, I show demonstrations with running software – to better clarify what I am talking about, to allow the audience to try things out for themselves and because doing demos usually is fun. A frequent element in these demos is Twitter, because it is well known and because it allows the audience to participate in the demo: I can invite an audience to tweet using an agreed hashtag and their tweets trigger the demo or at least make an appearance. In this article, I discuss one of these demos – showing an example of CQRS. The picture shows the outline: tweets are consumed by a Node application. Each tweet is converted to an event on a Kafka Topic. This event is consumed by a second Node application (potentially one of multiple instances in a Kafka Consumer Group, to allow for more scalability). This Node application creates a new record in an Elastic Search index – the Query destination in this little CQRS spiel. The out of the box dashboard tool Kibana allows us to quickly inspect and analyse the tweet records. Additionally, we can create an advanced query service on top of Elastic Search.

This article shows the code behind this demo. This code was prepared for the JEEConf 2018 conference in Kyiv, Ukraine – and can be found in GitHub: https://github.com/lucasjellema/50-shades-of-data-jeeconf2018-kyiv/tree/master/twitter-kafka-demo.


        The main elements in the demo:

        1. Kafka Topic tweets-topic (in my demo, this topic is created in Oracle Cloud Event Hub Service, a managed Kafka cloud service)

        2. Node application that consumes from Twitter – and publishes to the Kafka topic

3. (Postman Collection to create) Elastic Search Index plus custom mapping (primarily to extract a proper creation date time value from a date string) (in my demo, this Elastic Search Index is created in an Elastic Search instance running in a Docker container on Oracle Container Cloud)

        4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the index. In this demo, the Node application is also running on Oracle Cloud (Application Container Cloud), but that does not have to be the case

        5. Kibana dashboard on top of the Tweets Index. In my demo, Kibana is also running in a Docker container in Oracle Container Cloud

        1. Kafka Tweets Topic on Oracle Event Hub Cloud Service


After completing the wizard, the topic is created and can be accessed by producers and consumers.

        2. Node application that consumes from Twitter – and publishes to the Kafka topic

The Node application consists of an index.js file that handles HTTP requests – for health checking – and consumes from Twitter and publishes to a Kafka Topic. It uses a file twitterconfig.js (not included) that contains the secret details of a Twitter client. The contents of this file should look like this – and should contain your own Twitter client details:

        // CHANGE THIS **************************************************************
        // go to https://apps.twitter.com/ to register your app
        var twitterconfig = {
          consumer_key: 'mykey',
          consumer_secret: 'mysecret',
          access_token_key: 'at-key',
          access_token_secret: 'at-secret'
        };

        module.exports = {twitterconfig};
        

The index.js file requires the npm libraries kafka-node and twit, as well as express and http for handling HTTP requests.

The code can be divided into three parts:

        • Initialization, create HTTP server and handle HTTP requests
        • Consume from Twitter
        • Publish to Kafka

        Here are the three code sections:

        const express = require('express');
        var http = require('http')
        const app = express();
        var PORT = process.env.PORT || 8144;
        const server = http.createServer(app);
        var APP_VERSION = "0.0.3"
        
        const startTime = new Date()
        const bodyParser = require('body-parser');
        app.use(bodyParser.json());
        var tweetCount = 0;
        app.get('/about', function (req, res) {
          var about = {
            "about": "Twitter Consumer and Producer to " + TOPIC_NAME,
            "PORT": process.env.PORT,
            "APP_VERSION ": APP_VERSION,
            "Running Since": startTime,
            "Total number of tweets processed": tweetCount
        
          }
          res.json(about);
        })
        server.listen(PORT, function listening() {
          console.log('Listening on %d', server.address().port);
        });
        

Code for consuming from Twitter – in this case for the hashtags #jeeconf, #java and #oraclecode:

        var Twit = require('twit');
        const { twitterconfig } = require('./twitterconfig');
        
        var T = new Twit({
          consumer_key: twitterconfig.consumer_key,
          consumer_secret: twitterconfig.consumer_secret,
          access_token: twitterconfig.access_token_key,
          access_token_secret: twitterconfig.access_token_secret,
          timeout_ms: 60 * 1000,
        });
        
        
        var twiterHashTags = process.env.TWITTER_HASHTAGS || '#oraclecode,#java,#jeeconf';
        var tracks = { track: twiterHashTags.split(',') };
        
        let tweetStream = T.stream('statuses/filter', tracks)
        tweetstream(tracks, tweetStream);
        
        function tweetstream(hashtags, tweetStream) {
          console.log("Started tweet stream for hashtag #" + JSON.stringify(hashtags));
        
          tweetStream.on('connected', function (response) {
            console.log("Stream connected to twitter for #" + JSON.stringify(hashtags));
          })
          tweetStream.on('error', function (error) {
            console.log("Error in Stream for #" + JSON.stringify(hashtags) + " " + error);
          })
          tweetStream.on('tweet', function (tweet) {
            produceTweetEvent(tweet);
          });
        }
        
        

        Code for publishing to the Kafka Topic a516817-tweetstopic:

        const kafka = require('kafka-node');
        const APP_NAME = "TwitterConsumer"
        
        var EVENT_HUB_PUBLIC_IP = process.env.KAFKA_HOST || '129.1.1.116';
        var TOPIC_NAME = process.env.KAFKA_TOPIC || 'a516817-tweetstopic';
        
        var Producer = kafka.Producer;
        var client = new kafka.Client(EVENT_HUB_PUBLIC_IP);
        var producer = new Producer(client);
        var KeyedMessage = kafka.KeyedMessage;
        
        producer.on('ready', function () {
          console.log("Producer is ready in " + APP_NAME);
        });
        producer.on('error', function (err) {
          console.log("failed to create the client or the producer " + JSON.stringify(err));
        })
        
        
        let payloads = [
          { topic: TOPIC_NAME, messages: '*', partition: 0 }
        ];
        
        function produceTweetEvent(tweet) {
          var hashtagFound = false;
          try {
            // find out which of the original hashtags { track: ['oraclecode', 'java', 'jeeconf'] } occurs
            // in the hashtags for this tweet; that is the one for the tagFilter property;
            // select one other hashtag from tweet.entities.hashtags to set in property hashtag
            var tagFilter = "#jeeconf";
            var extraHashTag = "liveForCode";
            for (var i = 0; i < tweet.entities.hashtags.length; i++) {
              var tag = '#' + tweet.entities.hashtags[i].text.toLowerCase();
              console.log("inspect hashtag " + tag);
              var idx = tracks.track.indexOf(tag);
              if (idx > -1) {
                tagFilter = tag;
                hashtagFound = true;
              } else {
                extraHashTag = tag
              }
            }//for
        
            if (hashtagFound) {
              var tweetEvent = {
                "eventType": "tweetEvent"
                , "text": tweet.text
                , "isARetweet": tweet.retweeted_status ? "y" : "n"
                , "author": tweet.user.name
                , "hashtag": extraHashTag
                , "createdAt": tweet.created_at
                , "language": tweet.lang
                , "tweetId": tweet.id
                , "tagFilter": tagFilter
                , "originalTweetId": tweet.retweeted_status ? tweet.retweeted_status.id : null
              };
              eventPublisher.publishEvent(tweet.id, tweetEvent)
              tweetCount++
            }// if hashtag found
          } catch (e) {
            console.log("Exception in publishing Tweet Event " + JSON.stringify(e))
          }
        }
        
        var eventPublisher = module.exports;
        
        eventPublisher.publishEvent = function (eventKey, event) {
          var km = new KeyedMessage(eventKey, JSON.stringify(event));
          payloads = [
            { topic: TOPIC_NAME, messages: [km], partition: 0 }
          ];
          producer.send(payloads, function (err, data) {
            if (err) {
              console.error("Failed to publish event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(err));
            }
            console.log("Published event with key " + eventKey + " to topic " + TOPIC_NAME + " :" + JSON.stringify(data));
          });
        }//publishEvent
        

        3. (Postman Collection to create) Elastic Search Index plus custom mapping

Preparation of an Elastic Search environment is done through REST API calls. These can be made from code, from the command line (using curl) or from a tool such as Postman. In this case, I have created a Postman collection with a number of calls to prepare the Elastic Search index tweets.


        The following requests are relevant:

        • Check if the Elastic Search server is healthy: GET {{ELASTIC_HOME}}:9200/_cat/health
        • Create the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets
        • Create the mapping for the tweets index: PUT {{ELASTIC_HOME}}:9200/tweets/_mapping/doc

        The body for the last request is relevant:

        {
                        "properties": {
                            "author": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "createdAt": {
                                "type": "date",
                  "format": "EEE MMM dd HH:mm:ss ZZ yyyy"
          
                            },
                            "eventType": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "hashtag": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "isARetweet": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "language": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "tagFilter": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "text": {
                                "type": "text",
                                "fields": {
                                    "keyword": {
                                        "type": "keyword",
                                        "ignore_above": 256
                                    }
                                }
                            },
                            "tweetId": {
                                "type": "long"
                            }
                        }
                    }
        

The custom aspect of the mapping is primarily to extract a proper creation date time value from a date string.
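A quick way to see why this is needed: the created_at value Twitter sends is not ISO 8601, so Elastic Search needs the custom "format" from the mapping above to index it as a real date. A small sketch (the literal date value is a made-up example in Twitter's layout; relying on JavaScript's Date parser for non-ISO strings is engine-dependent, but V8/Node happens to accept this layout, which is handy for local inspection):

```javascript
// Twitter's created_at field looks like this - weekday, month name, day,
// time, numeric timezone offset, year - matching the Elastic mapping
// format "EEE MMM dd HH:mm:ss ZZ yyyy" but not ISO 8601:
const createdAt = 'Tue May 22 07:29:00 +0000 2018'; // made-up example value

// Node's V8 engine parses this layout; without the custom mapping,
// Elastic Search could not treat the raw string as a date.
const parsed = new Date(createdAt);
console.log(parsed.toISOString()); // 2018-05-22T07:29:00.000Z
```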

        4. Node application that consumes the events from the Kafka tweets-topic and turns each event into a new record in the elastic search index

The tweetListener.js file contains the code for two main purposes: handle HTTP requests (primarily for health checks) and consume events from the Kafka Topic for tweets. It requires the npm modules express, http and kafka-node for this. It also imports the local module model from the file model.js. This module writes Tweet records to the Elastic Search index; it uses the npm module elasticsearch for this.

        The code in tweetListener.js is best read in two sections:

        First section for handling HTTP requests:

        const express = require('express');
        var https = require('https')
          , http = require('http')
        const app = express();
        var PORT = process.env.PORT || 8145;
        const server = http.createServer(app);
        var APP_VERSION = "0.0.3"
        const startTime = new Date()  // referenced in the /about handler below


        const bodyParser = require('body-parser');
        app.use(bodyParser.json());
        var tweetCount = 0;
        app.get('/about', function (req, res) {
          var about = {
            "about": "Twitter Consumer from  " +SOURCE_TOPIC_NAME,
            "PORT": process.env.PORT,
            "APP_VERSION ": APP_VERSION,
            "Running Since": startTime,
            "Total number of tweets processed": tweetCount
        
          }
          res.json(about);
        })
        server.listen(PORT, function listening() {
          console.log('Listening on %d', server.address().port);
        });
        
        

        Second section for consuming Kafka events from tweets topic – and invoking the model module for each event:

        var kafka = require('kafka-node');
        var async = require('async'); // used by the SIGINT handler below
        var model = require("./model");
        
        var tweetListener = module.exports;
        
        var subscribers = [];
        tweetListener.subscribeToTweets = function (callback) {
          subscribers.push(callback);
        }
        
        // var kafkaHost = process.env.KAFKA_HOST || "192.168.188.102";
        // var zookeeperPort = process.env.ZOOKEEPER_PORT || 2181;
        // var TOPIC_NAME = process.env.KAFKA_TOPIC ||'tweets-topic';
        
        var KAFKA_ZK_SERVER_PORT = 2181;
        
        var SOURCE_KAFKA_HOST = '129.1.1.116';
        var SOURCE_TOPIC_NAME = 'a516817-tweetstopic';
        
        var consumerOptions = {
          host: SOURCE_KAFKA_HOST + ':' + KAFKA_ZK_SERVER_PORT,
          groupId: 'consume-tweets-for-elastic-index',
          sessionTimeout: 15000,
          protocol: ['roundrobin'],
          fromOffset: 'latest' // equivalent of auto.offset.reset; valid values are 'none', 'latest', 'earliest'
        };
        
        var topics = [SOURCE_TOPIC_NAME];
        var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumer1' }, consumerOptions), topics);
        consumerGroup.on('error', onError);
        consumerGroup.on('message', onMessage);
        
        function onMessage(message) {
          console.log('%s read msg Topic="%s" Partition=%s Offset=%d', this.client.clientId, message.topic, message.partition, message.offset);
          console.log("Message Value " + message.value)
        
          subscribers.forEach((subscriber) => {
            subscriber(message.value);
        
          })
        }
        
        function onError(error) {
          console.error(error);
          console.error(error.stack);
        }
        
        process.once('SIGINT', function () {
          async.each([consumerGroup], function (consumer, callback) {
            consumer.close(true, callback);
          });
        });
        
        
        tweetListener.subscribeToTweets((message) => {
          var tweetEvent = JSON.parse(message);
          tweetCount++;
          // ready to elastify tweetEvent
          console.log("Ready to put on Elastic "+JSON.stringify(tweetEvent));
          model.saveTweet(tweetEvent).then((result, error) => {
            console.log("Saved to Elastic "+JSON.stringify(result)+'Error?'+JSON.stringify(error));
          })
        })
        

The file model.js connects to the Elastic Search server and saves tweets to the tweets index when so requested. It is very straightforward – without any real exception handling, for example in case the Elastic Search server does not accept a record or is simply unavailable. Remember: this is just the code for a demo.

        var tweetsModel = module.exports;
        var elasticsearch = require('elasticsearch');
        
        var ELASTIC_SEARCH_HOST = process.env.ELASTIC_CONNECTOR || 'http://129.150.114.134:9200';
        
        var client = new elasticsearch.Client({
            host: ELASTIC_SEARCH_HOST,
        });
        
        client.ping({
            requestTimeout: 30000,
        }, function (error) {
            if (error) {
                console.error('elasticsearch cluster is down!');
            } else {
                console.log('Connection to Elastic Search is established');
            }
        });
        
        tweetsModel.saveTweet = async function (tweet) {
            try {
                var response = await client.index({
                    index: 'tweets',
                    id: tweet.tweetId,
                    type: 'doc',
                    body: tweet
                }
                );
        
                console.log("Response: " + JSON.stringify(response));
                return tweet;
            }
            catch (e) {
                console.error("Error in Elastic Search - index document " + tweet.tweetId + ":" + JSON.stringify(e))
            }
        
        }
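Since the demo deliberately leaves out resilience, here is a hedged sketch of what a minimal safeguard could look like. The withRetry helper below is hypothetical and not part of the demo code; it could wrap the saveTweet call so a temporarily unreachable Elastic Search server does not immediately lose a tweet:

```javascript
// Hypothetical helper: run an async operation, retrying up to `attempts`
// times with a fixed pause between tries; rethrows the last error if all
// attempts fail.
async function withRetry(operation, attempts = 3, delayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (e) {
      lastError = e;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}

// usage sketch: withRetry(() => tweetsModel.saveTweet(tweetEvent))
//   .catch((e) => console.error('giving up on tweet', e));
```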
        

        5. Kibana dashboard on top of the Tweets Index.

Kibana is an out of the box application, preconfigured in my case for the colocated Elastic Search server. Once I provide the name of the index I am interested in – tweets – Kibana immediately shows an overview of (selected time ranges in) this index. The peaks in the screenshot indicate May 19th and 20th, when JEEConf was taking place in Kyiv, where I presented this demo.


The same results are visible in the Twitter UI.


        The post Simple CQRS – Tweets to Apache Kafka to Elastic Search Index using a little Node code appeared first on AMIS Oracle and Java Blog.

        Rapidly spinning up a VM with Ubuntu and Docker–on my Windows machine using Vagrant and VirtualBox

        Mon, 2018-05-21 08:22

I have a Windows laptop. And of course I want to work with Docker containers. Using the Docker Quickstart Terminal is one way of doing so, and to some extent that works fine. But whenever I want to have more control over the Linux environment that runs the Docker host, or I want to run multiple such environments in parallel, I like to just run VMs under my own control and use them to run Docker inside.

The easiest way for me to create and run Docker enabled VMs is using the combination of Vagrant and VirtualBox. VirtualBox runs the VM and takes care of networking from and to the VM, as well as mapping local directories on the Windows host machine into the VM. Vagrant runs on the Windows machine as a command line tool. It interacts with the VirtualBox APIs to create, start, pause, resume and stop the VMs. Based on simple declarative definitions – text files – it will configure the VM and take care of it.

        In this article, I share the very simple Vagrant script that I am using to spin up and manage VMs in which I run Docker containers. Vagrant takes care of installing Docker into the VM, of configuring the Network, for mapping a local host directory into the VM and for creating a larger-than-normal disk for the Ubuntu VM. I will briefly show how to create/start the VM, SSH into it to create a terminal session, run a Docker file from the Windows host to run a container and to halt and restart.

        The prerequisites for following along: have a recent version of Vagrant and VirtualBox installed.

To create and run the VM, write the following Vagrantfile to a directory: https://gist.github.com/lucasjellema/7593677f6d03285236c8f0391f1a78c2, for example using

        git clone https://gist.github.com/7593677f6d03285236c8f0391f1a78c2.git

Open a Windows command line and cd to that directory.

        Then type

        vagrant up

This will run Vagrant and have it process the local vagrantfile. The base VM image is downloaded – if it does not already exist on your Windows host – and subsequently Vagrant engages with VirtualBox to create the VM according to the configuration settings.


        When the VM is running, Vagrant will do the next processing steps: provisioning Docker and Docker Compose in the VM.

Finally, if there is a docker-compose.yml file in the current directory, it will be run by Docker Compose inside the VM; if there is none, an ugly error message is shown – but the VM will still be created and end up running.

         

        When vagrant up is complete, the VM is running, Docker is running and if any containers were created and started by Docker Compose, then they will be running as well.

        Using

        vagrant ssh

        (from the command line interface and still from the same directory) we can create a terminal session into the VM.


        Using

        docker ps

        we can check if any containers were started. And we can start a(nother) container if we feel like it, such as:

docker run busybox echo "hello from busybox"


The directory on the Windows host from which we ran vagrant up and vagrant ssh is mapped into the VM, at /vagrant. Using

        ls /vagrant

        we can check on the files in that directory that are available from within the VM.

         

We can, for example, build a Docker image from a Dockerfile in that directory.

        Using

        exit

        we can leave the Vagrant SSH session. The VM keeps on running. We can return into the VM using vagrant ssh again. We can have multiple sessions into the same VM – by just starting multiple command line sessions in Windows, navigating to the same directory and running vagrant ssh in each session.

        Using

        vagrant halt

        we stop the VM. Its state is saved and we can continue from that state at a later point, simply by running vagrant up again.

        With vagrant pause and vagrant resume we can create a snapshot of the VM in mid flight and at a later moment (which can be after a restart of the host system) continue where we left off.

        Using

        vagrant destroy

        you can completely remove the VM, releasing the host (disk) resources that were consumed by it.


        Resources

        Vagrant Documentation: https://www.vagrantup.com/docs/

        Download Vagrant: https://www.vagrantup.com/downloads.html

        Download Oracle VirtualBox: https://www.virtualbox.org/wiki/Downloads

        The post Rapidly spinning up a VM with Ubuntu and Docker–on my Windows machine using Vagrant and VirtualBox appeared first on AMIS Oracle and Java Blog.

        SOA Suite 12c in Docker containers. Only a couple of commands, no installers, no third party scripts

        Thu, 2018-05-17 10:58

For developers, installing a full blown local SOA Suite environment has never been a favorite (except for a select few). It is time consuming and requires you to download and run various installers one after the other. If you want to start clean (and you haven’t taken precautions), you may have to start all over again.

There is a new and easy way to get a SOA Suite environment up and running, without downloading any installers, in only a couple of commands and without depending on scripts provided by any party other than Oracle. The resulting environment consists of an Oracle Enterprise Edition database, an Admin Server and a Managed Server, all running in separate Docker containers with ports exposed to the host. The 3 containers can run together within an 8 GB RAM VM.

The documentation Oracle provides in its Container Registry for the SOA Suite images should be used as a base, but since you will encounter some errors if you follow it, you can use this blog post to help you solve them quickly.

A short history: QuickStart and different installers

In the 11g days, a developer who wanted to run a local environment needed to install a database (usually XE), WebLogic Server and the SOA Infrastructure, run the Repository Creation Utility (RCU), and install one or more of SOA, BPM and OSB. In 12c, the SOA Suite QuickStart was introduced. The QuickStart uses an Apache Derby database instead of the Oracle database and lacks features like ESS, a split Admin Server / Managed Server, NodeManager and several other features, making this environment not really comparable to customer environments. If you wanted to install a standalone version, you still needed to go through all the manual steps or automate them yourself (with response files for the installers and WLST files for domain creation). As an alternative, Oracle has been so kind as to provide VirtualBox images (like this one or this one) with everything pre-installed. For more complex set-ups, Edwin Biemond and Lucas Jellema have provided Vagrant files and blog posts to quickly create a 12c environment.

        Docker

        One of the benefits of running SOA Suite in Docker containers is that the software is isolated in the container. You can quickly remove and recreate domains. Also, in general, Docker is more resource efficient compared to, for example, VMware, VirtualBox or Oracle VM, and the containers are easily shippable to other environments/machines.

        Dockerfiles

        Docker has become very popular and there have been several efforts to run SOA Suite in Docker containers. At first these efforts were by people who created their own Dockerfiles and used the installers and response files to create images. Later Oracle provided its own Dockerfiles, but you still needed to download the installers from edelivery.oracle.com and build the images yourself. The official Oracle-provided Docker files can be found on GitHub here.

        Container Registry

        Oracle has introduced its Container Registry recently (at the start of 2017). The Container Registry is a Docker Registry which contains prebuilt images, not just Dockerfiles. First the Oracle Database appeared, then WebLogic and the SOA Infrastructure, and now (May 2018) the complete SOA Suite.

        How do you use this? You link your OTN account to the Container Registry. This needs to be done only once. Next you can accept the license agreement for the images you would like to use. The Container Registry contains a useful description with every image on how to use it and what can be configured. Keep in mind that since the Container Registry has recently been restructured, names of images have changed and not all manuals have been updated yet. That is also why you want to tag images so you can access them locally in a consistent way.

        Download and run!

        For SOA Suite, you need to accept the agreement for the Enterprise Edition database and SOA Suite. You don’t need the SOA Infrastructure; it is part of the SOA Suite image.

        Login
        docker login -u OTNusername -p OTNpassword container-registry.oracle.com
        Pull, tag, create env files

        Pulling the images can take a while (it can take hours on Wi-Fi). The commands for pulling differ slightly from the examples given in the image documentation in the Container Registry because image names have recently changed. For consistent access, tag the images after pulling.

        Database
        docker pull container-registry.oracle.com/database/enterprise:12.2.0.1
        docker tag container-registry.oracle.com/database/enterprise:12.2.0.1 oracle/database:12.2.0.1-ee

        The database requires a configuration file. However, the settings in this file are not correctly applied by the installation that is executed when a container is created from the image. I’ve updated the configuration file to reflect what is actually created:

        db.env.list
        ORACLE_SID=orclcdb
        ORACLE_PDB=orclpdb1
        ORACLE_PWD=Oradoc_db1
        SOA Suite
        docker pull container-registry.oracle.com/middleware/soasuite:12.2.1.3
        docker tag container-registry.oracle.com/middleware/soasuite:12.2.1.3 oracle/soa:12.2.1.3

        The Admin Server also requires a configuration file:

        adminserver.env.list
        CONNECTION_STRING=soadb:1521/ORCLPDB1.localdomain
        RCUPREFIX=SOA1
        DB_PASSWORD=Oradoc_db1
        DB_SCHEMA_PASSWORD=Welcome01
        ADMIN_PASSWORD=Welcome01
        MANAGED_SERVER=soa_server1
        DOMAIN_TYPE=soa

        As you can see, you can use the same database for multiple SOA schemas, since the RCU prefix is configurable.
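For instance, a second SOA domain could reuse the same database container under a different RCU prefix. A hypothetical env file for such a second Admin Server could look like the sketch below (the file name and the SOA2 prefix are my own assumptions, not part of the original setup):

```shell
# adminserver2.env.list (hypothetical second domain against the same database)
CONNECTION_STRING=soadb:1521/ORCLPDB1.localdomain
RCUPREFIX=SOA2
DB_PASSWORD=Oradoc_db1
DB_SCHEMA_PASSWORD=Welcome01
ADMIN_PASSWORD=Welcome01
MANAGED_SERVER=soa_server1
DOMAIN_TYPE=soa
```

Only the RCU prefix differs; the RCU run for the second domain then creates a separate SOA2_* set of schemas in the same database.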

        The Managed Server also requires a configuration file:

        soaserver.env.list
        MANAGED_SERVER=soa_server1
        DOMAIN_TYPE=soa
        ADMIN_HOST=soaas
        ADMIN_PORT=7001

        Make sure the Managed Server mentioned in the Admin Server configuration file matches the Managed Server in the Managed Server configuration file. The Admin Server installation creates a boot.properties for the Managed Server. If the server name does not match, the Managed Server will not boot.

        Create local folders and network

        Since you might not want to lose your domain or database files when you remove your container and start it again, you can create a location on your host machine where the domain will be created and the database can store its files. Make sure the user running the containers has userid/groupid 1000 for the below commands to allow the user access to the directories. Run the below commands as root. They differ slightly from the manual since errors will occur if SOAVolume/SOA does not exist.

        mkdir -p /scratch/DockerVolume/SOAVolume/SOA
        chown 1000:1000 /scratch/DockerVolume/SOAVolume/
        chmod -R 700 /scratch/DockerVolume/SOAVolume/

        Create a network for the database and SOA servers:

        docker network create -d bridge SOANet
        Run

        Start the database

        You’ll first need the database. You can run it by:

        #Start the database
        docker run --name soadb --network=SOANet -p 1521:1521 -p 5500:5500 -v /scratch/DockerVolume/SOAVolume/DB:/opt/oracle/oradata --env-file /software/db.env.list oracle/database:12.2.0.1-ee

        This installs and starts the database. db.env.list, which is described above, should be in /software in this case.
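The database takes a while to initialize. A minimal way to wait for it is to watch the container log for the ready banner the Oracle database image prints; this is a sketch, assuming the container name soadb from the run command above:

```shell
# ready_in_log: succeeds once the Oracle database image's ready banner
# ("DATABASE IS READY TO USE!") appears on stdin
ready_in_log() {
  grep -q "DATABASE IS READY TO USE"
}

# Poll the container log until the banner shows up (container name soadb
# comes from the docker run command above):
#   until docker logs soadb 2>/dev/null | ready_in_log; do sleep 30; done
```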

        SOA Suite

        In the examples documented, it is indicated you can run the Admin Server and the Managed Server in separate containers. You can, and they will start up. However, the Admin Server cannot manage the Managed Server, and the WebLogic Console / EM don’t show the Managed Server status. The configuration in the Docker container uses a single machine with a single host-name and indicates that both the Managed Server and the Admin Server run there. To fix this, I suggest two easy workarounds.

        Port forwarding: Admin Server and Managed Server in separate containers

        You can create a port-forward from the Admin Server to the Managed Server. This allows the WebLogic Console / EM and Admin Server to access the Managed Server at ‘localhost’ within the Docker container on port 8001.

        #This command starts an interactive shell which runs the Admin Server. Wait until it is up before continuing!
        docker run -i -t --name soaas --network=SOANet -p 7001:7001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list oracle/soa:12.2.1.3

        #This command starts an interactive shell which runs the Managed Server.
        docker run -i -t --name soams --network=SOANet -p 8001:8001 --volumes-from soaas --env-file /software/soaserver.env.list oracle/soa:12.2.1.3 "/u01/oracle/dockertools/startMS.sh"

        #The below commands install socat and use it to forward Admin Server container port 8001 to Managed Server port 8001
        docker exec -u root soaas yum -y install socat
        docker exec -d -u root soaas /usr/bin/socat TCP4-LISTEN:8001,fork TCP4:soams:8001

        The container is very limited. It does not contain executables for ping, netstat, wget, ifconfig, iptables and several other common tools. socat seemed an easy solution (easier than iptables or SSH tunnels) to do port forwarding and it worked nicely.
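Given how bare the container is, bash’s built-in /dev/tcp pseudo-device is a handy substitute for netstat when checking whether the forward actually listens. A sketch (the host and port are whatever you want to probe):

```shell
# port_open HOST PORT: exit 0 if a TCP connection to HOST:PORT succeeds
# within one second, using bash's /dev/tcp (no netstat or ping needed)
port_open() {
  timeout 1 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# e.g. inside the soaas container after starting socat:
#   port_open localhost 8001 && echo "Managed Server port reachable"
```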

        Admin Server and Managed Server in a single container

        An alternative is to run both the Managed Server and the Admin Server in the same container. Here you start the Admin Server with both configuration files, so all environment variables are available. Once the Admin Server is started, the Managed Server can be started in a separate shell with docker exec.

        #Start Admin Server
        docker run -i -t --name soaas --network=SOANet -p 7001:7001 -p 8001:8001 -v /scratch/DockerVolume/SOAVolume/SOA:/u01/oracle/user_projects --env-file /software/adminserver.env.list --env-file /software/soaserver.env.list oracle/soa:12.2.1.3
        #Start Managed Server
        docker exec -it soaas "/u01/oracle/dockertools/startMS.sh"
        Start the NodeManager

        If you like (but you don’t have to), you can start the NodeManager in both set-ups like this:

        docker exec -d soaas "/u01/oracle/user_projects/domains/InfraDomain/bin/startNodeManager.sh"

        The NodeManager runs on port 5658.

        How does it look?

        A normal SOA Suite environment.

        The post SOA Suite 12c in Docker containers. Only a couple of commands, no installers, no third party scripts appeared first on AMIS Oracle and Java Blog.

        Java SE licensing

        Tue, 2018-05-01 07:35

        There are several good blogs about this subject (like this one), and I never paid much attention to Java and licensing. That changed when one of our customers became a bit frightened after Oracle approached them with the question whether they were really license compliant in their use of Java. For many, the words Java and licensing don’t belong in one sentence. So I was obliged to give our customer a proper answer and looked into it by writing this blog. What is this about licensing Java?

        Let’s start with the license terms, the Oracle Binary Code License Agreement. The Java SE license consists of two parts: the standard Java SE is still free to use, the commercial components are not:

        Oracle grants you a non-exclusive, non-transferable, limited license without license fees to reproduce and use internally the Software complete and unmodified for the sole purpose of running Programs. THE LICENSE SET FORTH IN THIS SECTION 2 DOES NOT EXTEND TO THE COMMERCIAL FEATURES… restricted to Java-enabled General Purpose Desktop Computers and Servers

        What ‘General Purpose Desktop Computers’ exactly are is explained here, in a blog by B-Lay. An excerpt:

        Personal computers, including desktops, notebooks, smartphones and tablets, are all examples of general-purpose computers. The term “general purpose computing” is used to differentiate general-purpose computers from other types, in particular the specialized embedded computers used in intelligent systems.

        The Licensing Information User Manual (about 247 pages needed to explain Java licensing…) tries to give more information about what is free and what’s not, but doesn’t give much clarity.

        The Java command line option page is quite clear about it. The following flag unlocks commercial features:

        -XX:+UnlockCommercialFeatures

        Use this flag to actively unlock the use of commercial features. Commercial features are the products Oracle Java SE Advanced or Oracle Java SE Suite, as defined at the Java SE Products web page.

        If this flag is not specified, then the default is to run the Java Virtual Machine without the commercial features being available. After they are enabled, it is not possible to disable their use at runtime.

        As an example

        java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder

        Another way to see what is commercial and what is not, is to look at the price list.

        Three paid packages are mentioned: Advanced Desktop, Advanced, and Suite.

        But how do you know what you’re hitting: which commercial product is in which version, unlocked by the parameter ‘-XX:+UnlockCommercialFeatures’? This can be derived from the Oracle technology price list supplement (for partners) and from page 5 of the Licensing Information User Manual:

         

        What is free to use:

        • JDK (Java Development Kit)
        • JRE (Java Runtime Environment)
        • SDK (JavaFX Software Development Kit)
        • JavaFX Runtime
        • JRockit JDK

         

        Regards

         

        Resources

        Oracle binary code license agreement: http://www.oracle.com/technetwork/java/javase/terms/license/index.html

        Official Oracle licensing doc: http://www.oracle.com/technetwork/java/javase/documentation/java-se-lium-2017-09-18-3900135.pdf

        Aspera blog : https://www.aspera.com/en/blog/understanding-oracle-java-se-licensing/

        Java command line option page: https://docs.oracle.com/javase/7/docs/technotes/tools/windows/java.html

        Oracle pricelist: http://www.oracle.com/us/corporate/pricing/technology-price-list-070617.pdf

        B-Lay java licensing: https://www.linkedin.com/pulse/oracle-java-licensing-free-charge-vs-commercial-use-richard-spithoven/


        The post Java SE licensing appeared first on AMIS Oracle and Java Blog.

        Get going with KSQL on Kubernetes

        Tue, 2018-05-01 03:59

        This article describes how to quickly get going with KSQL on Kubernetes. KSQL is Confluent’s ‘continuous streaming query language’. It allows us to write SQL-like queries that operate on Kafka Topics. Queries that join, filter and aggregate – for each event that gets produced and over time windows. Results are also published to Kafka Topics. A KSQL query can be created as a Table or Stream: as a background process running in the KSQL Server – a piece of [Java] logic that consumes, processes and produces events, until the query is explicitly stopped – i.e. the Table or Stream is dropped.

        At the time of writing, there is not yet an official Docker Container image for KSQL on its own. It is included in the full-blown Confluent Platform Docker Compose set – configured to run against the Kafka Cluster that is also in that constellation: https://docs.confluent.io/current/installation/docker/docs/quickstart.html.

        What I am looking for is a way to run KSQL on my Kubernetes cluster, and preferably not the full platform. And more importantly: I want that KSQL instance to interact with my own Kafka Cluster, that is also running on that same Kubernetes instance. In a previous article – 15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application  – I described how to get going with Apache Kafka on Kubernetes. The final situation in that article is my starting point in this piece.

        The Docker Compose yaml file for the full-blown Confluent Platform provides the necessary clue: it uses the Docker Container image confluentinc/ksql-cli:4.1.0 as part of its setup. And I can work with that.

        The steps:

        1. Create Kubernetes YAML for KSQL [server]Pod
        2. Apply YAML file with Kubectl – this creates the Pod with the running container
        3. Open bash shell on the pod and run ksql [command line utility]
        4. Do the KSQL thing (create Streams and Tables, execute queries, …) against the local Kafka cluster

        The pièce de résistance of this article undoubtedly is the Pod YAML file. The key element in that file is the reference to the local Kafka Cluster, which is passed through the environment variable KSQL_OPTS; it contains the bootstrap.servers parameter, set to broker.kafka:9092, the endpoint within the Kubernetes Cluster for the Kafka broker. This setting ties the KSQL server to the local Kafka Cluster.

        The full Pod YAML file is listed below:

        apiVersion: v1
        kind: Pod
        metadata:
          name: ksql-server
          labels:
            app: ksql-server
        spec:
          nodeName: minikube
          containers:
          - name: ksql-server
            # pin to version 4.1.0 of the ksql-cli image
            image: confluentinc/ksql-cli:4.1.0
            imagePullPolicy: IfNotPresent
            command: ["/bin/bash"]
            args: ["-c","echo Waiting for Kafka to be ready... && \
                               cub kafka-ready -b broker.kafka:9092 1 20 && \
                               echo Waiting a few seconds for topic creation to finish... && \
                               sleep 2 && \
                               /usr/bin/ksql-server-start /etc/ksql/ksql-server.properties"]
            env:
            - name: KSQL_CONFIG_DIR
              value: "/etc/ksql"
            - name: KSQL_OPTS
              value: "-Dbootstrap.servers=broker.kafka:9092"
            - name: KSQL_LOG4J_OPTS
              value: "-Dlog4j.configuration=file:/etc/ksql/log4j-rolling.properties"
            ports:
            # containerPort is the port exposed by the container (where ksql server is listening)
            - containerPort: 8088
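
The Pod exposes port 8088, but nothing inside the cluster routes to it yet. If you want other pods to reach the KSQL REST endpoint, a minimal Service could look like the sketch below (the Service is my own addition, not part of the original setup; it selects the Pod via its app label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ksql-server
spec:
  selector:
    app: ksql-server
  ports:
  - port: 8088
    targetPort: 8088
```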
        

        With this Pod definition available, the next steps are very straightforward. Create the Pod using kubectl:

        kubectl apply -f ksqlServerPod.yaml

        this will create a Pod that runs the KSQL server against the Kafka Broker in Kubernetes (on broker.kafka:9092).
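Pulling the image and starting the server takes a moment. A small retry helper (my own sketch, not from the original post) lets you wait until the Pod reports Running:

```shell
# retry MAX CMD...: run CMD up to MAX times, one second apart,
# until it succeeds; fail if it never does
retry() {
  local n=0 max=$1
  shift
  until "$@"; do
    n=$((n+1))
    [ "$n" -ge "$max" ] && return 1
    sleep 1
  done
}

# e.g. wait for the ksql-server pod (assumes kubectl points at minikube):
#   retry 60 sh -c "kubectl get pod ksql-server -o jsonpath='{.status.phase}' | grep -q Running"
```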

        To start the KSQL command line, either use the Kubernetes Dashboard (select the ksql-server pod and use the EXEC option), or from the command line on your laptop (the minikube host) use:

        kubectl exec -it ksql-server /bin/bash
        

        In both cases, you will land on the command line of the KSQL server. Simply type:

        ksql

        The KSQL command line utility will start – and we can start doing KSQL things.

        For example:

        list topics;

        (this will display a list of all Kafka Topics)

        To create a stream on topic workflowEvents:

        create stream wf_s (workflowConversationIdentifier VARCHAR, payload map<VARCHAR,VARCHAR>, updateTimeStamp VARCHAR, lastUpdater VARCHAR, audit array<map<VARCHAR,VARCHAR>>) WITH (kafka_topic='workflowEvents', value_format='JSON', key='workflowConversationIdentifier');
        

        And to query from that stream:

        select workflowConversationIdentifier, audit, lastUpdater, updateTimeStamp, payload, payload['author'], payload['workflowType'], payload['lastUpdater'] from wf_s;

        Note: messages published to topic workflowEvents have a JSON payload with the following structure:

        {
            "workflowType": "oracle-code-tweet-processor",
            "workflowConversationIdentifier": "OracleCodeTweetProcessor1525151206872",
            "creationTimeStamp": 1525148332841,
            "creator": "WorkflowLauncher",
            "audit": [
                {
                    "when": 1525148332841,
                    "who": "WorkflowLauncher",
                    "what": "creation",
                    "comment": "initial creation of workflow"
                },
                {
                    "when": 1525151212318,
                    "who": "TweetBoard",
                    "what": "update",
                    "comment": "Tweet Board Capture done"
                }
            ],
            "payload": {
                "text": "#556 Today is a microservice workshop at Fontys Hogeschool in Eindhoven",
                "author": "lucasjellema",
                "authorImageUrl": "http://pbs.twimg.com/profile_images/427673149144977408/7JoCiz-5_normal.png",
                "createdTime": "May 9th, 2018 at 08:39AM",
                "tweetURL": "http://twitter.com/SaibotAirport/status/853935915714138112",
                "firstLinkFromTweet": "https://t.co/cBZNgqKk0U",
                "enrichment": "Lots of Money",
                "translations": [
                    "# 556 Heute ist ein Microservice-Workshop in der Fontys Hogeschool in Eindhoven",
                    "# 556 Vandaag is een microservice-workshop aan de Fontys Hogeschool in Eindhoven"
                ]
            },
            "updateTimeStamp": 1525151212318,
            "lastUpdater": "TweetBoard"
        }
        

        Note: I ran into the following error message when I tried to select from a stream:

        Input record ConsumerRecord(topic = logTopic, partition = 0, offset = 268, CreateTime = -1, serialized key size = 8, serialized value size = 1217, headers = RecordHeaders(headers = [], isReadOnly = false), key = logEntry, value = [ null | null | null | null ]) has invalid (negative) timestamp. Possibly because a pre-0.10 producer client was used to write this record to Kafka without embedding a timestamp, or because the input topic was created before upgrading the Kafka cluster to 0.10+. Use a different TimestampExtractor to process this data.

        The workaround for this – until I can fix the producer of these events – is to instruct KSQL to use a different TimestampExtractor. This is done with the following command:

        SET 'timestamp.extractor'='org.apache.kafka.streams.processor.WallclockTimestampExtractor';

        You can list all property values using:

        list properties;
        

        Resources

        KSQL Syntax Reference – https://github.com/confluentinc/ksql/blob/0.1.x/docs/syntax-reference.md#syntax-reference 

        KSQL Examples – https://github.com/confluentinc/ksql/blob/0.1.x/docs/examples.md

        WallclockTimeExtractor – https://kafka.apache.org/10/javadoc/org/apache/kafka/streams/processor/WallclockTimestampExtractor.html 

        Confluent Docker Quick Start – https://docs.confluent.io/current/installation/docker/docs/quickstart.html 

        By Naveen on Medium: Tools that make my life easier to work with kubernetes  https://medium.com/google-cloud/tools-that-make-my-life-easier-to-work-with-kubernetes-fce3801086c0

        The post Get going with KSQL on Kubernetes appeared first on AMIS Oracle and Java Blog.

        ADF Performance Monitor – Major New Version 7.0

        Wed, 2018-04-25 03:52

        We are very happy to announce that a major new version 7.0 of the ADF Performance Monitor will be available from May 2018. There are many improvements and major new features. This blog describes one of them: usage statistics and performance metrics of end-user click actions.

        A click action is the trigger event of an HTTP request by the browser: an action that a user takes within the UI. These are most often physical clicks of end-users on UI elements such as buttons, links, icons, charts, and tabs, but they can also be scrolling and selection events on tables, rendering of charts, polling events, auto-submits of input fields and much more. With monitoring by click action you get insight into the click actions that have the worst performance, cause the most errors, or are used most frequently. You can see in which layer (database, webservice, application server, network, browser) the total execution time has been spent. You can monitor SLAs of the business functions behind the click actions – from the perspective of the end-user.

        To get a glance at the layer in which processing time of click actions has been spent, the monitor shows a status-meter gauge: database (yellow), webservice (pink), application server (blue), network (purple), and browser load time (grey). It can also show the exact processing time spent in these layers.

        Read more on adfpm.com – our new website on the ADF Performance Monitor.

        The post ADF Performance Monitor – Major New Version 7.0 appeared first on AMIS Oracle and Java Blog.

        Using Java Management Extensions within Oracle Enterprise Manager Cloud Control 13c

        Mon, 2018-04-23 12:00

        The answer to using Java Management Extensions (JMX) within Oracle Enterprise Manager Cloud Control is Metric Extensions. Metric extensions provide you with the ability to extend Oracle’s monitoring capabilities. I will show you how to create an extension to monitor a WebLogic Messaging Bridge. You can also create metric extensions via other methods, for example the EMCLI; this will not be covered in this blog.

        First of all, you have to have sufficient permissions within Cloud Control to be able to create metric extensions. Second, if you want to execute or test the given example, you will need a JMS server.

        Navigate to Cloud Control and login.


        Creating a draft.

        Navigate via your Enterprise Summary to “Metric Extensions”.


        Navigate within the “Actions” pane from “Create” to “Metric Extension”


        Because we are going to monitor a messaging bridge, the target type will be set to “Oracle WebLogic Server”. We will use the Java Management Extensions (JMX) adapter to retrieve the attributes and results from the server the extension will be deployed on.


        Below the “General Properties” section there is a section regarding the collection schedule to enable data upload and to configure the upload interval.

        Selecting columns and thresholds.


        In the “Select Target” field, you will have to select a server on which a WebLogic Messaging Bridge is active. This will be the JMS server on which you are planning to deploy your metric extension.

        We want to retrieve the status of the bridge.

        In the “Enter MBean Pattern” field type: “com.bea:Type=MessagingBridgeRuntime,*”

        Click on “List MBeans”.

        You will see a list of bridges that are deployed on you JMS server. If this is not the case verify you have active bridges in your JMS server configuration.


        Scroll down and you will be able to select the attributes you want to retrieve.
        Click on the “Select” button and click on “Next” to go to the next page.
        Click on the “State” column and click on “Edit”.


        Set “Comparison Operator” to “Matches” and type “Inactive” in the warning or critical field.
        An alert will be triggered based on these values.

        The “Number of Occurrences Before Alert” field can be used to set the number of times the threshold can be breached before an alert is created. By default this value is set to “1”.

        You can also use the “Compute Expression” section to calculate a value for this column based on the values in the other columns. If you would like to use this function, I would recommend creating another column to populate with the calculated values.

        Example Compute Expression operation:

        (DestinationInfo_Name __contains 'error')

        This expression checks whether the “DestinationInfo_Name” column contains the word “error” for a row. If it does, it returns “1” for that row; otherwise it returns “0”.

        Testing your metric extension.

        Press “Next” to test your metric on the target.


        Add the JMS server to run the test.


        As you can see, the column “Bridge state” has the value “Active”.
        After testing the metric successfully, navigate to the last page.

        The last page is the “Review” page.
        Click “Finish” to create the extension.

        Deploying and publishing your metric extension.

        Navigate to your metric extension, open the “Actions” pane and click on “Save as Deployable Draft” to be able to deploy your metric.
        If you do not want to change your metric extension any further, you can publish it by clicking on “Publish Metric Extension”.


        After publishing, the metric and its collected data are visible on the JMS server target.

        The post Using Java Management Extensions within Oracle Enterprise Manager Cloud Control 13c appeared first on AMIS Oracle and Java Blog.

        Oracle API Platform Cloud Service: using the Developer Portal for discovering APIs via the API Catalog and subscribing applications to APIs

        Sun, 2018-04-22 14:22

        At the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August, I attended the API Platform Cloud Service & Integration Cloud Service bootcamp.

        In a series of articles I will give a high-level overview of what you can do with Oracle API Platform Cloud Service.

        At the Summer Camp a pre-built Oracle VM VirtualBox APIPCS appliance (APIPCS_17_3_3.ova) was provided to us, to be used in VirtualBox. Everything needed to run a complete demo of API Platform Cloud Service is contained within Docker containers that are staged in that appliance. The version of Oracle API Platform CS, used within the appliance, is Release 17.3.3 — August 2017.

        See https://docs.oracle.com/en/cloud/paas/api-platform-cloud/whats-new/index.html to learn about the new and changed features of Oracle API Platform CS in the latest release.

        In this article in the series about Oracle API Platform CS, the focus will be on the Developer Portal, discovering APIs via the API Catalog and subscribing applications to APIs. As a follow-up from my previous article, at the end the focus is on validating the “Key Validation” policy of the “HumanResourceService” API.
        [https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

        Be aware that the screenshot’s in this article and the examples provided, are based on a demo environment of Oracle API Platform CS and were created by using the Oracle VM VirtualBox APIPCS appliance mentioned above.

        This article only covers part of the functionality of Oracle API Platform CS. For more detail I refer you to the documentation: https://cloud.oracle.com/en_US/api-platform.

        Short overview of Oracle API Platform Cloud Service

        Oracle API Platform Cloud Service enables companies to thrive in the digital economy by comprehensively managing the full API lifecycle from design and standardization to documenting, publishing, testing and managing APIs. These tools provide API developers, managers, and users an end-to-end platform for designing, prototyping, documenting, testing, and managing APIs. Through the platform, users gain the agility needed to support changing business demands and opportunities, while having clear visibility into who is using APIs for better control, security and monetization of digital assets.
        [https://cloud.oracle.com/en_US/api-platform/datasheets]

        Architecture

        Management Portal:
        APIs are managed, secured, and published using the Management Portal.
        The Management Portal is hosted on the Oracle Cloud, managed by Oracle, and users granted
        API Manager privileges have access.

        Gateways:
        API Gateways are the runtime components that enforce all policies, but also help in
        collecting data for analytics. The gateways can be deployed anywhere – on premise, on Oracle
        Cloud or to any third party cloud providers.

        Developer Portal:
        After an API is published, Application Developers use the Developer Portal to discover, register, and consume APIs. The Developer Portal can be customized to run either on the Oracle Cloud or directly in the customer environment on premises.
        [https://cloud.oracle.com/opc/paas/datasheets/APIPCSDataSheet_Jan2018.pdf]

        Oracle Apiary:
        In my article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about using Oracle Apiary and interacting with its Mock Server for the “HumanResourceService” API, I created earlier.

        The Mock Server for the “HumanResourceService” API is listening at:
        http://private-b4874b1-humanresourceservice.apiary-mock.com
        [https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

        Roles

        Within Oracle API Platform CS roles are used.

        Roles determine which interfaces a user is authorized to access and the grants they are eligible to receive.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/api-platform-cloud-service-roles-resources-actions-and-grants.html]

        • Administrator
          System Administrators responsible for managing the platform settings. Administrators possess the rights of all other roles and are eligible to receive grants for all objects in the system.
        • API Manager
          People responsible for managing the API lifecycle, which includes designing, implementing, and versioning APIs. Also responsible for managing grants and applications, providing API documentation, and monitoring API performance.
        • Application Developer
          API consumers granted self-service access rights to discover and register APIs, view API documentation, and manage applications using the Developer Portal.
        • Gateway Manager
          Operations team members responsible for deploying, registering, and managing gateways. May also manage API deployments to their gateways when issued the Deploy API grant by an API Manager.
        • Gateway Runtime
          This role indicates a service account used to communicate from the gateway to the portal. This role is used exclusively for gateway nodes to communicate with the management service; users assigned this role can’t sign into the Management Portal or the Developer Portal.
        • Service Manager
          People responsible for managing resources that define backend services. This includes managing service accounts and services.
        • Plan Manager
          People responsible for managing plans.

        Within the Oracle VM VirtualBox APIPCS appliance the following users (all with password welcome1) are present and used by me in this article:

        • api-manager-user – Role: APIManager
        • api-gateway-user – Role: GatewayManager
        • app-dev-user – Role: ApplicationDeveloper

        Publish an API, via the Management Portal (api-manager-user)

        Start the Oracle API Platform Cloud – Management Portal as user api-manager-user.

        Navigate to tab “Publication” of the “HumanResourceService” API (which I created earlier).
        [https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

        Publish an API to the Developer Portal when you want application developers to discover and consume it.

        Each published API has a details page on the Developer Portal. This page displays basic information about the API, an overview describing the purpose of the API, and documentation for using the API. This page is not visible on the Developer Portal until you publish it.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-145F0AAE-872B-4577-ACA6-994616A779F1]

        The tab “Publication” contains the following parts:

        • API Portal URL
        • Developer Portal API Overview
        • Documentation

        Next I will explain these parts in more detail, in reverse order.

        As you can see, for some of the parts we can use HTML, Markdown or Apiary.

        Remark:
        Markdown is a lightweight markup language with plain text formatting syntax.
        [https://en.wikipedia.org/wiki/Markdown]

        Part “Documentation” of the tab “Publication”

        You can provide HTML or Markdown documentation by uploading a file, manually entering text, or providing a URL to the documentation resource. After you have added the documentation, it appears on the Documentation tab of the API detail page in the Developer Portal.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-9FD22DC2-18A9-4338-91E7-70726C906B91]

        It is also possible to add documentation from Oracle Apiary to an API.

        Adding documentation to the API can help users understand its purpose and how it was configured.

        Note:
        Swagger or API Blueprint documentation can only be added to an Oracle Apiary Pro account. To add documentation, the team must have ownership of the API in Oracle Apiary. API definitions owned by personal accounts cannot be used. To transfer ownership of an API from a personal account to a team account, see the Oracle Apiary documentation.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-A7E68AA0-396D-400C-933C-1C4CD3DDD832]

        So let’s see how I tried using documentation from Oracle Apiary.

        Using Oracle Apiary for documentation

        I clicked on button “Apiary”. In my case the following screen appeared:

        Next, I clicked on button “Go To Apiary”.

        Then I clicked on button “Sign in”.

        After a successful sign in (for example by using Email/Password), the following screen appeared (with the “HumanResourceService” API visible):

        Next, I clicked on button “Create a team”. The following pop-up appeared:

        Because I use a Free (personal) Account for Oracle Apiary, I am not able to create a team.

        Remember the note (see above) saying: “Swagger or API Blueprint documentation can only be added to an Oracle Apiary Pro account. To add documentation, the team must have ownership of the API in Oracle Apiary. API definitions owned by personal accounts cannot be used.”.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-A7E68AA0-396D-400C-933C-1C4CD3DDD832]

        So, for me, the path of using documentation from Oracle Apiary came to an end.

        As an alternative, in this article, I used Markdown for the documentation. But before explaining that in more detail, I want to give you an impression, via screenshots, of what happens when you click on button “Apiary” and have an Apiary account with the right privileges to add documentation to an API.

        Remark:
        The screenshots that follow are taken from the API Platform Cloud Service bootcamp I attended at the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year.

        So, when you click on button “Apiary”, the following screen appears:

        A list of APIs is visible from which you can choose one to connect. For example: the “TicketService27” API.

        After a click on button “Connect”, the “Documentation” part of tab “Publication” looks like:

        Using Markdown for documentation

        For reasons mentioned above, as an alternative for using Oracle Apiary, in this article, I used Markdown for documentation. Markdown is new to me, so (in this article) I will only demonstrate it with a simplified version of the documentation (available in Apiary).

        Click on button “Markdown”.

        Next, click on tab “Text” and enter the following text:

        # HumanResourceService

        Human Resource Service is an API to manage Human Resources.

        ## Employees Collection [/employees]

        ### Get all employees [GET /employees]

        Get all employees.

        ### Get an employee [GET /employees/{id}]

        Get a particular employee by providing an identifier.

        ### Create an employee [POST /employees]

        Create an employee, by using post with the complete payload.

        ### Update an employee [PUT /employees/{id}]

        Update an employee, by using put with a payload containing: last_name, job_id, salary and department_id.

        ## Departments Collection [/departments]

        ### Get a department [GET /departments/{id}]

        Get a particular department by providing an identifier.

        ### Get a department and employee [GET /departments/{department_id}/employees/{employee_id}]

        Get a particular department by providing a department identifier and a particular employee within that department by providing an employee identifier.

        After a click on button “OK”, the “Documentation” part of tab “Publication” looks like:

        In the pop-up, click on button “Save Changes”.

        Part “Developer Portal API Overview” of the tab “Publication”

        You can provide overview text for an API, describing its features and other information a developer should know about its use, in either HTML or Markdown.

        You can upload a file, enter text manually, or provide a link to HTML or Markdown to use as overview text. This text appears on the API’s detail page in the Developer Portal.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-D1BF7E3E-03C9-42AE-9808-EC9BC77D3E61]

        For the “Developer Portal API Overview” part, I chose to use HTML (because in this article, up to now, examples of using Markdown and Apiary were already provided).

        Once again I will only demonstrate it with a simplified version of the documentation (available in Apiary).

        Click on button “HTML”.

        Next, click on tab “Text” and enter the following text:

        <h1>HumanResourceService</h1>

        Human Resource Service is an API to manage Human Resources.

        It provides CRUD methods for the resources <b>Employees</b> and <b>Departments</b>.

        After a click on button “OK”, the “Developer Portal API Overview” part of tab “Publication” looks like:

        In the pop-up, click on button “Save Changes”.

        Part “API Portal URL” of the tab “Publication”

        Before publishing to the Developer Portal, each API has to be configured with its own unique Vanity Name. A vanity name is the URI path of an API’s details page when it is published to the Developer Portal.

        On the Publication tab, enter the path at which this API will be discoverable in the Developer Portal in the API Portal URL field. This is also called the API’s vanity name.

        Note:
        An API’s vanity name must be unique, regardless of case. You can’t have APIs with vanity names of Creditcheck and creditcheck. You must enter the vanity name exactly (matching case) in the URL to navigate to an API’s details page in the Developer Portal. For example, navigating to https://<host>:<port>/developers/apis/Creditcheck opens the page for an API with a vanity name of Creditcheck; https://<host>:<port>/developers/apis/creditcheck doesn’t open this page and returns a 404 because the segment in the URL does not match the vanity name exactly.

        Only valid URI simple path names are supported. Characters such as “?”, “/”, and “&” are not supported in vanity names. Test_2 is a supported vanity name, but Test/2 is not.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/publishing-apis.html#GUID-C9034B10-72EA-4046-A8B8-B5B1AE087180]
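The vanity-name rules quoted above can be captured in a small helper. The sketch below is a hypothetical illustration of the documented constraints (the product performs these checks itself); the function name and the regular expression are my own:

```javascript
// Hypothetical helper, not part of the product: checks a vanity name against
// the documented rules (only simple URI path characters; uniqueness is
// enforced regardless of case).
function isValidVanityName(name, existingNames) {
  existingNames = existingNames || [];
  // Simple path segment: letters, digits, underscore, dot, tilde, hyphen.
  // Characters such as "?", "/" and "&" are not supported.
  if (!/^[\w.~-]+$/.test(name)) return false;
  // An API's vanity name must be unique, regardless of case.
  var lower = name.toLowerCase();
  return !existingNames.some(function (n) {
    return n.toLowerCase() === lower;
  });
}

console.log(isValidVanityName('Test_2'));                       // true
console.log(isValidVanityName('Test/2'));                       // false
console.log(isValidVanityName('creditcheck', ['Creditcheck'])); // false
```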

        The API’s default vanity name is derived from the API name:

        <not published>/HumanResourceService

        Publish the “HumanResourceService” API to the Developer Portal

        So now that we have all the documentation in place, notice that the button “Preview” has appeared.

        Clicking on button “Preview” generates an error:

        Remember that I am using a demo environment of Oracle API Platform CS by using the Oracle VM VirtualBox APIPCS appliance. This seems to be a bug in that environment. So what should have been visible was something like:

        Here you can see on the left that the tab “Overview” is selected. There is also a tab “Documentation”.

        Remark:
        Please see the screenshots later on in this article of the “HumanResourceService” API in the “Developer Portal” (tab APIs), with regard to the tabs “Overview” and “Documentation”. These show the same information as in the Preview context.

        Next, click on button “Publish to Portal”.

        Notice that the icon “Launch Developer Portal in another browser window” appeared and also that the API Portal URL changed to:

        http://apics.oracle.com:7201/developers/apis/HumanResourceService

        In the top part of the screen you can see that the “HumanResourceService” API is “Published”.

        It’s time to launch the Developer Portal.

        Click on the icon “Launch Developer Portal in another browser window”.

        Sign in to the Oracle API Platform Cloud – Developer Portal as user app-dev-user

        After a successful sign in as user app-dev-user, the next screen appears (with tab “APIs” selected):

        The “Developer Portal” is the web page where you discover APIs, subscribe to APIs and get the necessary information to invoke them. When you access the “Developer Portal”, the API Catalog page appears. All the APIs that have been published to the “Developer Portal” are listed. Use the API Catalog page to find APIs published to the “Developer Portal”.

        In the “Developer Portal” screen above there are no APIs, or they are not visible for the current user. So we have to go back to the Oracle API Platform Cloud – Management Portal (as an API Manager). There we can grant the privileges needed for an Application Developer to see the API. How you do this is described later on in this article.

        For now we continue as if the correct privileges were already in place. Therefore, the “HumanResourceService” API is visible.

        Click on the “HumanResourceService” API.

        Here you can see on the left, that the tab “Overview” is selected.

        For now I will give you a short overview of screenshots of each of the tabs on the left.

        Tab “Overview” of the “HumanResourceService” API

        Remember that we used HTML code for the “Developer Portal API Overview” part of the tab “Publication”?
        So here you can see the result.

        Tab “Documentation” of the “HumanResourceService” API

        Remember that we used Markdown code for the “Documentation” part of the tab “Publication”?
        So here you can see the result.

        Remark:
        If I had an Apiary account with the right privileges to add documentation to an API and used Apiary for documentation, the tab “Documentation” would have looked like:

        Discover APIs

        In the API Catalog page, you can search for an API by entering keywords in the field at the top of the catalog. The list is narrowed to the APIs that have that word in the name or the description. If you enter multiple words, the list contains all APIs with either of the words; APIs with both words appear at the top of the list. If a keyword or keywords have been applied to the list, they appear in a bar at the top of the page. Filters can also be applied to the list. You can also sort the list for example in alphabetical order or by newest to oldest API.
        [Oracle Partner PaaS Summer Camps VII 2017, APIPCS bootcamp, Lab_APIPCS_Design_and_Implement.pdf]

        Subscribe an application to the “HumanResourceService” API

        In the “Developer Portal” screen if we navigate, in the API Catalog page, to the “HumanResourceService” API, and if the user has the correct privileges, a button “Subscribe” is visible. In the Oracle API Platform Cloud – Management Portal (as an API Manager) we can grant the privileges needed for an Application Developer to register an application to the API. How you do this is described later on in this article.

        For now we continue as if the correct privileges were already in place.

        Click on button “Subscribe”.

        Next, click on button “Create New Application”. Enter the following values:

        • Application Name: HumanResourceWebApplication
        • Description: Web Application to manage Human Resources.
        • Application Types: Web Application
        • Contact information: First Name: FirstName, Last Name: LastName, Email: Email@company.com, Phone: 123456789, Company: Company

        Click on button “Save”.

        For a short while a pop-up “The application ‘HumanResourceWebApplication’ was created.” appears.

        So now we have an application, we can subscribe it, to the “HumanResourceService” API.

        Notice that an Application Key was generated, with as value:

        fb3138d1-0636-456e-96c4-4e21b684f45e

        Remark:
        You can reissue a key for an application in case it has been compromised. Application keys are established at the application level. If you reissue an application’s key, the old key is invalidated. This affects all APIs (that have the key validation policy applied) to which an application is registered. Every request to these APIs must use the new key to succeed. Requests using the old key are rejected. APIs without the key validation policy are not affected, as these do not require a valid application key to pass requests.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/reissuing-application-key.html#GUID-4E570C15-C289-4B6D-870C-F7ADACC1F6DD]

        Next, click on button “Subscribe API”.

        For a short while a pop-up “API ‘HumanResourceService’ was subscribed to application ‘HumanResourceWebApplication’.” appears.

        A request to register the application to the API is sent to the API Manager. So now we have to wait for the approval of the API Manager. How you do this is described later on in this article.

        In the API Catalog page, when viewing an API you can see which applications are subscribed to it.

        In the My Applications page, when viewing an application you can see which APIs it subscribed to.

        After a click on the “HumanResourceWebApplication” application, the next screen appears (with tab “Overview” selected):

        First I will give you a short overview of screenshots of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

        Tab “Overview” of the “HumanResourceWebApplication” application

        Tab “Subscribed APIs” of the “HumanResourceWebApplication” application

        Tab “Grants” of the “HumanResourceWebApplication” application

        Application grants are issued per application.

        The following tabs are visible and can be chosen:

        • Manage Application
          People issued this grant can view, modify and delete this application. API Manager users issued this grant can also issue grants for this application to others.
        • View all details
          People issued this grant can see all details about this application in the Developer Portal.

        See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-application-grants.html

        Tab “Analytics” of the “HumanResourceWebApplication” application

        Create an Application in the “My Applications” page

        Click on button “New Application”.

        In the same way as described before I created several applications (one at a time) with minimum information (Application Name, Application Types, First Name, Last Name and Email).

        In the My Applications page, the list of applications then looks like:

        In the table below I summarized the applications that I created:

        • DesktopApp_A_Application (Desktop App) – Application Key: e194833d-d5ac-4c9d-8143-4cf3a3e81fea
        • DesktopApp_B_Application (Desktop App) – Application Key: fd06c3b5-ab76-4e89-8c5a-e4b8326c360b
        • HumanResourceWebApplication (Web Application) – Application Key: fb3138d1-0636-456e-96c4-4e21b684f45e
        • MobileAndroid_A_Application (Mobile – Android) – Application Key: fa2ed56f-da3f-49ea-8044-b16d9ca75087
        • MobileAndroid_B_Application (Mobile – Android) – Application Key: 385871a2-7bb8-4281-9a54-c0319029e691
        • Mobile_iOS_A_Application (Mobile – iOS) – Application Key: 7ebb4cf8-5a3f-4df5-82ad-fe09850f0e50

        In the API Catalog page, navigate to the “HumanResourceService” API. Here you can see that there is already one subscribed application.

        Click on button “Subscribe”.

        Next, select the “MobileAndroid_B_Application” application.

        For a short while a pop-up “API ‘HumanResourceService’ was subscribed to application ‘MobileAndroid_B_Application’.” appears.

        In the API Catalog page, when viewing an API you can see which applications are subscribed to it.

        Here we can see the status “Pending”. A request to register the “MobileAndroid_B_Application” application to the “HumanResourceService” API is sent to the API Manager. So now we have to wait for the approval of the API Manager. Repeat the steps described in this article, to approve the request, by switching to an API Manager.

        In the screen below, we can see the end result:

        Edit the Key Validation Policy, via the Management Portal (api-manager-user)

        In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-manager-user.

        Navigate to tab “API Implementation” of the “HumanResourceService” API.

        Hover over the “Key Validation” policy and then, on the right, click on icon “Edit policy details”.

        Click on button “Apply”.

        Next, click on button “Save Changes”.

        I applied this policy as an active policy, represented as a solid line around the policy.

        Redeploy the API, via the Management Portal (api-manager-user)

        Navigate to tab “Deployments” of the “HumanResourceService” API, and then hover over the “Production Gateway” gateway and then, on the right, hover over the icon “Redeploy”.

        Next, click on icon “Latest Iteration”. Also approve the request, by switching to a Gateway Manager.
        How you do this, is described in my previous article “Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies)”.
        [https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

        So now the “HumanResourceService” API is redeployed on the “Production Gateway” gateway (Node 1).

        It is time to invoke the API.

        Validating the “Key Validation” policy, via Postman

        As described in my previous article, in Postman, I created requests within the collection named “HumanResourceServiceCollection”.
        [https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

        Then again I invoked two requests, to validate them against the “Key Validation” policy.

        Invoke method “GetEmployee” of the “HumanResourceService” API

        From Postman I invoked the request named “GetEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/100”) and a response with “Status 401 Unauthorized” is shown:

        After providing the Value fb3138d1-0636-456e-96c4-4e21b684f45e (being the Application Key of the “HumanResourceWebApplication” application) for the Header Key “application-key”, a response with “Status 200 OK” is shown:

        After providing the Value e194833d-d5ac-4c9d-8143-4cf3a3e81fea (being the Application Key of the “DesktopApp_A_Application” application) for the Header Key “application-key”, a response with “Status 401 Unauthorized” is shown:

        Invoke method “GetDepartmentEmployee” of the “HumanResourceService” API

        From Postman I invoked the request named “GetDepartmentEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30/employees/119”) and a response with “Status 401 Unauthorized” is shown:

        After providing the Value 385871a2-7bb8-4281-9a54-c0319029e691 (being the Application Key of the “MobileAndroid_B_Application” application) for the Header Key “application-key”, a response with “Status 200 OK” is shown:

        Tab “Analytics” of the “Production Gateway” gateway

        In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-gateway-user and click on the “Production Gateway” gateway and navigate to the tab “Analytics”.

        In this tab the requests I sent are visible at “Total Requests”.

        If we look, for example, at “Requests By Resource”, the requests are also visible.

        Next, click on icon “Applications (4 Active)” and if we look, for example, at “Active Applications”, we can see that there were in total 3 request rejections (because of policy “Key Validation”).

        If we look, for example, at “Requests By API”, the requests are also visible.

        There were 2 requests that had no Header Key “application-key” at all. As you can see in the graph above, these were rejected and were recorded under “Unknown Application (No Key)”.

        There was 1 request that had the Value e194833d-d5ac-4c9d-8143-4cf3a3e81fea for the Header Key “application-key”. As you can see in the graph above, this request was rejected and was recorded under the “DesktopApp_A_Application” application. Remember that this application was not registered to the “HumanResourceService” API.

        The other 2 requests were accepted, because they had a valid Value for the Header Key and the corresponding applications were registered to the “HumanResourceService” API.

        So the “Key Validation” policy is working correctly.
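Conceptually, the rejection buckets in the analytics above follow from a simple lookup of the “application-key” header against the applications registered to the API. The sketch below is my own illustration, not the gateway’s actual implementation; the keys are the Application Keys listed earlier in this article:

```javascript
// Conceptual sketch of the Key Validation outcome, not the gateway's real code.
// Only applications registered (subscribed) to the API pass validation.
var registeredApplications = {
  'fb3138d1-0636-456e-96c4-4e21b684f45e': 'HumanResourceWebApplication',
  '385871a2-7bb8-4281-9a54-c0319029e691': 'MobileAndroid_B_Application'
};

function validateKey(headers) {
  var key = headers['application-key'];
  if (!key) {
    // No header at all: counted under "Unknown Application (No Key)".
    return { status: 401, bucket: 'Unknown Application (No Key)' };
  }
  var app = registeredApplications[key];
  if (!app) {
    // A real key belonging to an application that is not registered
    // to this API is rejected as well.
    return { status: 401, bucket: 'Rejected (not registered)' };
  }
  return { status: 200, bucket: app };
}

console.log(validateKey({}).status); // 401
console.log(validateKey({ 'application-key': 'fb3138d1-0636-456e-96c4-4e21b684f45e' }).status); // 200
```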

        Sign in to the Oracle API Platform Cloud – Management Portal as user api-manager-user

        Go back to the Oracle API Platform Cloud – Management Portal and, if not already done, sign in as user api-manager-user. Navigate to tab “Grants” of the “HumanResourceService” API.

        API grants are issued per API.

        The following tabs are visible and can be chosen:

        • Manage API
          Users issued this grant are allowed to modify the definition of and issue grants for this API.
        • View all details
          Users issued this grant are allowed to view all information about this API in the Management Portal.
        • Deploy API
          Users issued this grant are allowed to deploy or undeploy this API to a gateway for which they have deploy rights. This allows users to deploy this API without first receiving a request from an API Manager.
        • View public details
          Users issued this grant are allowed to view the publicly available details of this API on the Developer Portal.
        • Register
          Users issued this grant are allowed to register applications for this plan.
        • Request registration
          Users issued this grant are allowed to request to register applications for this plan.

        Users and groups issued grants for a specific API have the privileges to perform the associated actions on that API. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-api-grants.html.

        “View public details” grant

        To view an API, the Application Developer must have the “View public details” grant or another grant that implies these privileges.

        Click on tab “View public details”.

        Next, click on button “Add Grantee”.

        Select “app-dev-user” and click on button “Add”.

        So now, the user app-dev-user (with Role ApplicationDeveloper) is granted the “View public details” privilege.

        Remark:
        In practice you would probably grant to a group instead of to a single user.

        “Request registration” grant

        To register an API, the Application Developer must have the “Request registration” grant or another grant that implies these privileges.

        Click on tab “Request registration”.

        Next, click on button “Add Grantee”.

        Select “app-dev-user” and click on button “Add”.

        So now, the user app-dev-user (with Role ApplicationDeveloper) is granted the “Request registration” privilege.

        Remark:
        In practice you would probably grant to a group instead of to a single user.

        Be aware that you could also grant the “Register” privilege, in which case approval by the API Manager (for registering an application to an API) is no longer needed. This makes sense for a development environment, for example. Since the Oracle VM VirtualBox APIPCS appliance uses a “Production Gateway” gateway, in this article I chose the request-and-approve mechanism.

        Approve a request for registering an application to an API, via the Management Portal (api-manager-user)

        On the left, click on tab “Registrations” and then click on tab “Requesting”.

        Hover over the “HumanResourceWebApplication” application, then click on button “Approve”.

        In the pop-up, click on button “Yes”.

        Then you can see on the tab “Registered”, that the registration is done.

        After a click on the top right icon “Expand”, more details are shown:

        So now the “HumanResourceWebApplication” application is registered to the “HumanResourceService” API.

        Summary

        As a follow up from my previous articles about Oracle API Platform Cloud Service, in this article the focus is on using the Developer Portal, discovering APIs via the API Catalog and subscribing applications to APIs.

        I activated the Key Validation (Security) policy, which I created in my previous article, redeployed the API to a gateway and validated that this policy worked correctly, using requests which I created in Postman.
        [https://technology.amis.nl/2018/04/14/oracle-api-platform-cloud-service-using-the-management-portal-and-creating-an-api-including-some-policies/]

        While using the Management Portal and Developer Portal in this article, I focused on the roles “API Manager” and “Application Developer”. For example, the user api-manager-user had to approve a request from the app-dev-user to register an application to an API.

        At the API Platform Cloud Service bootcamp (at the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of August), I (and many others) got hands-on experience with the API Platform Cloud Service. There we created an API with more policies than described in this article.

        It became obvious that the API Platform Cloud Service is a great API Management solution and that, with the help of policies, you are able to secure, throttle, route, manipulate, or log requests before they reach the backend service.

        The post Oracle API Platform Cloud Service: using the Developer Portal for discovering APIs via the API Catalog and subscribing applications to APIs appeared first on AMIS Oracle and Java Blog.

        15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application

        Thu, 2018-04-19 11:07

        For a workshop I will present on microservices and communication patterns, I need attendees to have their own local Kafka cluster. I have found a way to have one up and running in virtually no time at all. Thanks to the combination of:

        • Kubernetes
        • Minikube
        • The Yolean/kubernetes-kafka GitHub Repo with Kubernetes yaml files that creates all we need (including Kafka Manager)

        Prerequisites:

        • Minikube and Kubectl are installed
        • The Minikube cluster is running (minikube start)

        In my case the versions are:

        Minikube: v0.22.3, Kubectl Client 1.9 and (Kubernetes) Server 1.7:


        The steps I went through:

        Git Clone the GitHub Repository: https://github.com/Yolean/kubernetes-kafka 

        From the root directory of the cloned repository, run the following kubectl commands:

        (note: I did not know until today that kubectl apply -f can be used with a directory reference and will then apply all yaml files in that directory. That is incredibly useful!)

        kubectl apply -f ./configure/minikube-storageclass-broker.yml
        kubectl apply -f ./configure/minikube-storageclass-zookeeper.yml

        (note: I had to comment out the reclaimPolicy attribute in both files – probably because I am running a fairly old version of Kubernetes)

        kubectl apply -f ./zookeeper

        kubectl apply -f ./kafka

        (note: I had to change API version in 50pzoo and 51zoo as well as in 50kafka.yaml from apiVersion: apps/v1beta2 to apiVersion: apps/v1beta1 – see https://github.com/kubernetes/kubernetes/issues/55894 for details; again, I should upgrade my Kubernetes version)

        To make Kafka accessible from the minikube host (outside the K8S cluster itself):

        kubectl apply -f ./outside-services

        This exposes Services as type NodePort instead of ClusterIP, making them available for client applications that can access the Kubernetes host.
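With the NodePort services in place, the broker address used by the Node clients later in this article is simply the Minikube host IP plus the NodePort of the exposed service. A trivial sketch, with the values assumed from this article (on your machine, take them from the output of minikube ip and kubectl get svc respectively):

```javascript
// Sketch: compose the Kafka bootstrap address from the Minikube host IP and
// the NodePort. Both values are assumptions taken from this article's setup.
var MINIKUBE_IP = '192.168.99.100'; // output of `minikube ip`
var KAFKA_NODEPORT = 32400;         // NodePort of the outside Kafka service
var KAFKA_BROKER_IP = MINIKUBE_IP + ':' + KAFKA_NODEPORT;

console.log(KAFKA_BROKER_IP); // 192.168.99.100:32400
```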

        I also installed (Yahoo) Kafka Manager:

        kubectl apply -f ./yahoo-kafka-manager

        (I had to change API version in kafka-manager from apiVersion: apps/v1beta2 to apiVersion: apps/v1beta1 )

        At this point, the Kafka Cluster is running. I can check the pods and services in the Kubernetes Dashboard as well as through kubectl on the command line. I can get the Port at which I can access the Kafka Brokers:


        And I can access the Kafka Manager at the indicated Port.


        Initially, no cluster is visible in Kafka Manager. By providing the Zookeeper information highlighted in the figure (zookeeper.kafka:2181) I can make the cluster visible in this user interface tool.

        Finally, the proof of the pudding: programmatic production and consumption of messages to and from the cluster. Using the world’s simplest Node Kafka clients, it is easy to see that the stuff is working. I am impressed.

        I have created the Node application and its package.json file. Then I added the kafka-node dependency (npm install kafka-node --save). Next I created the producer:

        // before running, either globally install kafka-node  (npm install kafka-node)
        // or add kafka-node to the dependencies of the local application
        
        var kafka = require('kafka-node')
        var Producer = kafka.Producer
        var KeyedMessage = kafka.KeyedMessage;
        
        var client;
        var producer;
        
        var APP_VERSION = "0.8.5"
        var APP_NAME = "KafkaProducer"
        
        var topicName = "a516817-kentekens";
        var KAFKA_BROKER_IP = '192.168.99.100:32400';
        
        // from the Oracle Event Hub - Platform Cluster Connect Descriptor
        var kafkaConnectDescriptor = KAFKA_BROKER_IP;
        
        console.log("Running Module " + APP_NAME + " version " + APP_VERSION);
        
        function initializeKafkaProducer(attempt) {
          try {
            console.log(`Try to initialize Kafka Client at ${kafkaConnectDescriptor} and Producer, attempt ${attempt}`);
            client = new kafka.KafkaClient({ kafkaHost: kafkaConnectDescriptor });
            console.log("created client");
            producer = new Producer(client);
            console.log("submitted async producer creation request");
            producer.on('ready', function () {
              console.log("Producer is ready in " + APP_NAME);
            });
            producer.on('error', function (err) {
              console.log("failed to create the client or the producer " + JSON.stringify(err));
            })
          }
          catch (e) {
            console.log("Exception in initializeKafkaProducer" + JSON.stringify(e));
            console.log("Try again in 5 seconds");
            setTimeout(initializeKafkaProducer, 5000, ++attempt);
          }
        }//initializeKafkaProducer
        initializeKafkaProducer(1);
        
        var eventPublisher = module.exports;
        
        eventPublisher.publishEvent = function (eventKey, event) {
          km = new KeyedMessage(eventKey, JSON.stringify(event));
          payloads = [
            { topic: topicName, messages: [km], partition: 0 }
          ];
          producer.send(payloads, function (err, data) {
            if (err) {
              console.error("Failed to publish event with key " + eventKey + " to topic " + topicName + " :" + JSON.stringify(err));
            }
            console.log("Published event with key " + eventKey + " to topic " + topicName + " :" + JSON.stringify(data));
          });
        
        }
        
        //example calls: (after waiting for three seconds to give the producer time to initialize)
        setTimeout(function () {
          eventPublisher.publishEvent("mykey", { "kenteken": "56-TAG-2", "country": "nl" })
        }
          , 3000)
        

        and ran the producer:


        Then I created the consumer:

        var kafka = require('kafka-node');
        var async = require('async');
        
        var APP_VERSION = "0.8.5"
        var APP_NAME = "KafkaConsumer"
        
        var eventListenerAPI = module.exports;
        
        // from the Oracle Event Hub - Platform Cluster Connect Descriptor
        
        var topicName = "a516817-kentekens";
        
        console.log("Running Module " + APP_NAME + " version " + APP_VERSION);
        console.log("Event Hub Topic " + topicName);
        
        var KAFKA_BROKER_IP = '192.168.99.100:32400';
        
        var consumerOptions = {
            kafkaHost: KAFKA_BROKER_IP,
            groupId: 'local-consume-events-from-event-hub-for-kenteken-applicatie',
            sessionTimeout: 15000,
            protocol: ['roundrobin'],
            fromOffset: 'earliest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
        };
        
        var topics = [topicName];
        var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumerLocal' }, consumerOptions), topics);
        consumerGroup.on('error', onError);
        consumerGroup.on('message', onMessage);
        
        consumerGroup.on('connect', function () {
            console.log('connected to ' + topicName + " at " + consumerOptions.kafkaHost);
        })
        
        function onMessage(message) {
            console.log('%s read msg Topic="%s" Partition=%s Offset=%d'
            , this.client.clientId, message.topic, message.partition, message.offset);
        }
        
        function onError(error) {
            console.error(error);
            console.error(error.stack);
        }
        
        process.once('SIGINT', function () {
            async.each([consumerGroup], function (consumer, callback) {
                consumer.close(true, callback);
            });
        });
        
        

        and ran the consumer – which duly consumed the event published by the publisher. It is wonderful.
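The onMessage handler above only logs the topic, partition and offset. Because publishEvent serialized the event with JSON.stringify, the consumer can also recover the original object from message.value. A small sketch (the helper name parseEvent is mine, not part of kafka-node):

```javascript
// A kafka-node message object carries the payload in its `value` property.
// publishEvent stringified the event, so JSON.parse gives the object back.
function parseEvent(message) {
  return JSON.parse(message.value);
}

// Example with the event published earlier in this post:
var sample = { topic: 'a516817-kentekens', partition: 0, offset: 0,
               value: '{"kenteken":"56-TAG-2","country":"nl"}' };
console.log(parseEvent(sample).kenteken); // 56-TAG-2
```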


        Resources

        The main resource is the GitHub repo: https://github.com/Yolean/kubernetes-kafka . Absolutely great stuff.

        Also useful: npm package kafka-node – https://www.npmjs.com/package/kafka-node

        Documentation on Kubernetes: https://kubernetes.io/docs/user-journeys/users/application-developer/foundational/#section-2 – with references to Kubectl and Minikube – and the Katakoda playground: https://www.katacoda.com/courses/kubernetes/playground

        The post 15 Minutes to get a Kafka Cluster running on Kubernetes – and start producing and consuming from a Node application appeared first on AMIS Oracle and Java Blog.

        Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode

        Thu, 2018-04-19 02:23

        In previous articles, I have talked about using Docker containers in smart testing strategies by creating a container image that contains the baseline of the application and the required test setup (test data, for example). Instead of performing complex setup actions and finishing off with elaborate tear-down steps for each test, you simply spin up a container at the beginning and toss it away at the end.

        I have shown how that can be done through the command line – but that of course is not a workable procedure. In this article I will provide a brief introduction of programmatic manipulation of containers. By providing access to the Docker Daemon API from remote clients (step 1) and by leveraging the npm package Dockerode (step 2) it becomes quite simple from a straightforward Node application to create, start and stop containers – as well as build, configure, inspect, pause them and manipulate in other ways. This opens up the way for build jobs to programmatically run tests by starting the container, running the tests against that container and killing and removing the container after the test. Combinations of containers that work together can be managed just as easily.
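The run-test-tear-down cycle described here can be captured in one helper. This is a sketch, assuming a Dockerode-style client object is passed in; the method names createContainer, start, stop and remove match Dockerode's promise API, but treat this as an outline rather than a finished implementation:

```javascript
// Run a test function against a fresh container and always clean up afterwards.
// `docker` is assumed to be a Dockerode-compatible client (promise style).
async function withContainer(docker, imageName, testFn) {
  const container = await docker.createContainer({ Image: imageName });
  await container.start();
  try {
    // the actual test runs against the live container
    return await testFn(container);
  } finally {
    // tear down, regardless of test success or failure
    await container.stop();
    await container.remove();
  }
}
```

A build job could then call withContainer(docker, 'my-baseline-image', runTests) once per test run, getting a pristine container every time.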

        As I said, this article is just a very lightweight introduction.

        Expose Docker Daemon API to remote HTTP clients

        The step that took me the longest was exposing the Docker Daemon API. Subsequent versions of Docker used different configurations for this and apparently different Linux distributions also have different approaches. I was happy to find this article: https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04 that describes for Ubuntu 16.x as Docker Host how to enable access to the API.

        Edit file /lib/systemd/system/docker.service – add -H tcp://0.0.0.0:4243 to the entry that describes how to start the Docker Daemon in order to have it listen to incoming requests at port 4243 (note: other ports can be used just as well).

        Reload (systemctl daemon-reload) to apply the changed file configuration

        Restart the Docker Service: service docker restart

        And we are in business.

        A simple check to see if HTTP requests on port 4243 are indeed received and handled: execute this command on the Docker host itself:

        curl http://localhost:4243/version


        The next step is the actual remote access. From a browser running on a machine that can ping successfully to the Docker Host – in my case that is the Virtual Box VM spun up by Vagrant, at IP 192.168.188.108 as defined in the Vagrantfile – open this URL: http://192.168.188.108:4243/version. The result should be similar to this:


        Get going with Dockerode

        To get started with npm package Dockerode is not any different really from any other npm package. So the steps to create a simple Node application that can list, start, inspect and stop containers in the remote Docker host are as simple as:

        Use npm init to create the skeleton for a new Node application

        Use

        npm install dockerode --save

        to retrieve Dockerode and create the dependency in package.json.

        Create file index.js. Define the Docker Host IP address (192.168.188.108 in my case) and the Docker Daemon Port (4243 in my case) and write the code to interact with the Docker Host. This code will list all containers. Then it will inspect, start and stop a specific container (with identifier starting with db8). This container happens to run an Oracle Database – although that is not relevant in the scope of this article.

        var Docker = require('dockerode');
        var dockerHostIP = "192.168.188.108"
        var dockerHostPort = 4243
        
        var docker = new Docker({ host: dockerHostIP, port: dockerHostPort });
        
        docker.listContainers({ all: true }, function (err, containers) {
            if (err) { console.error(err); return; }
            console.log('Total number of containers: ' + containers.length);
            containers.forEach(function (container) {
                console.log(`Container ${container.Names} - current status ${container.Status} - based on image ${container.Image}`)
            })
        });
        
        // create a container entity. does not query API
        async function startStop(containerId) {
            var container = await docker.getContainer(containerId)
            try {
                var data = await container.inspect()
                console.log("Inspected container " + JSON.stringify(data))
                var started = await container.start();
                console.log("Started "+started)
                var stopped = await container.stop();
                console.log("Stopped "+stopped)
            } catch (err) {
                console.log(err);
            };
        }
        //invoke function
        startStop('db8')
        

        The output in Visual Studio Code looks like this:


        And the action can be tracked on the Docker host like this (to prove it is real…)

        Resources

        Article by Ivan Krizsan on configuring the Docker Daemon on Ubuntu 16.x – my life saver: https://www.ivankrizsan.se/2016/05/18/enabling-docker-remote-api-on-ubuntu-16-04

        GitHub Repo for Dockerode – with examples and more: https://github.com/apocas/dockerode

        Presentation at DockerCon 2016 that gave me the inspiration to use Dockerode: https://www.youtube.com/watch?v=1lCiWaLHwxo 

        Docker docs on Configuring the Daemon – https://docs.docker.com/install/linux/linux-postinstall/#configure-where-the-docker-daemon-listens-for-connections


        The post Remote and Programmatic Manipulation of Docker Containers from a Node application using Dockerode appeared first on AMIS Oracle and Java Blog.

        Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests

        Wed, 2018-04-18 07:00

        Here is a procedure for running an Oracle Database, preparing a baseline of database objects (tables, stored procedures) and data, creating an image of that baseline and subsequently running containers based on that baseline image. Each container starts with a fresh setup. For running automated tests that require test data to be available in a known state, this is a nice way of working.

        The initial Docker container was created using an Oracle Database 11gR2 XE image: https://github.com/wnameless/docker-oracle-xe-11g.

        Execute this statement on the Docker host:

        docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe  wnameless/oracle-xe-11g
        

        This will spin up a container called oracle-xe. After 5-20 seconds, the database is created and started and can be accessed from an external database client.

        From the database client, prepare the database baseline, for example:

        create user newuser identified by newuser;
        
        create table my_data (data varchar2(200));
        
        insert into my_data values ('Some new data '||to_char(sysdate,'DD-MM HH24:MI:SS'));
        
        commit;
        

         

        These actions represent the complete database installation of your application – which may consist of hundreds or thousands of objects and MBs of data. The steps and the principles remain exactly the same.

        At this point, create an image of the baseline – that consists of the vanilla database with the current application release’s DDL and DML applied to it:

        docker commit --pause=true oracle-xe
        

        This command returns an id, the identifier of the Docker image that is now created for the current state of the container – our base line. The original container can now be stopped and even removed.

        docker stop oracle-xe
        

         

        Spinning up a container from the base line image is now done with:

        docker run -d -p 49160:22 -p 49161:1521 -e ORACLE_ALLOW_REMOTE=true --name oracle-xe-testbed <image identifier>
        

        After a few seconds, the database has started up and remote database clients can start interacting with the database. They will find the database objects and data that was part of the baseline image. To perform a test, no additional set up nor any tear down is required.

        Perform the tests that require performing. The tear down after the test consists of killing and removing the testbed container:

        docker kill oracle-xe-testbed && docker rm oracle-xe-testbed
        

        Now return to the step “Spinning up a container”.

        Spinning up the container takes a few seconds – 5 to 10. The time is mainly taken up by the database processes that have to be started from scratch.

        It should be possible to create a snapshot of a running container (using Docker checkpoints) and restore the testbed container from that snapshot. This create/start-from-checkpoint/kill/rm cycle should be even faster than the run/kill/rm cycle we have got going now. A challenge is the fact that opening the database does not just start processes and manipulate memory, but also handles files. That means we need to commit the running container and associate the restored checkpoint with that image. I have been working on this at length, but I have not been successful yet, running into various issues (ORA-21561 OID generation failed, ORA-27101 shared memory realm does not exist, redo log file not found, …). I continue to look into this.

        Use Oracle Database 12c Image

        Note: instead of the Oracle Database XE image used before, we can go through the same steps based on, for example, the image sath89/oracle-12c (see https://hub.docker.com/r/sath89/oracle-12c/ ).

        The commands and steps are now:

        docker pull sath89/oracle-12c
        
        docker run -d -p 8080:8080 -p 1521:1521 --name oracle-db-12c sath89/oracle-12c
        

        connect from a client – create baseline.

        When the baseline database and database contents has been set up, create the container image of that state:

        docker commit --pause=true oracle-db-12c
        

        Returns an image identifier.

        docker stop oracle-db-12c
        

        Now to run a test iteration, run a container from the base line image:

        docker run -d -p 1521:1521  --name oracle-db-12c-testbed  <image identifier>
        

        Connect to the database at port 1521 or have the web application or API that is being tested make the connection.

         

        Resources

        The Docker Create Command: https://docs.docker.com/engine/reference/commandline/create/#parent-command

        Nifty Docker commands in Everyday hacks for Docker:  https://codefresh.io/docker-tutorial/everyday-hacks-docker/

        Circle CI Blog – Checkpoint and restore Docker container with CRIU – https://circleci.com/blog/checkpoint-and-restore-docker-container-with-criu/

        The post Quickly spinning up Docker Containers with baseline Oracle Database Setup – for performing automated tests appeared first on AMIS Oracle and Java Blog.

        How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3)

        Mon, 2018-04-16 02:00
        Recapitulation on how to install the Oracle Integration Cloud on premises connectivity agent

        Recently (April 2018) I gained access to the new Oracle Integration Cloud (OIC), version 18.1.3.180112.1616-762, and wanted to make an integration connection to an on-premise database. For this purpose, an on-premise connectivity agent needs to be installed, as is thoroughly explained by my colleague Robert van Mölken in his blog prepraring-to-use-the-ics-on-premises-connectivity-agent.

        With the (new) Oracle Integration Cloud environment, the installation of the connectivity agent has changed slightly though, as shown below. It took me some effort to get the new connectivity agent working. Therefore I decided to recapture the necessary steps in this blog. Hopefully, this will give you a head start to get the connectivity agent up and running.


        Prerequisites

        Access to an Oracle Integration Cloud Service instance.

        Rights to do some installation on a local / on-premise environment, Linux based (eg. SOA virtual box appliance).

         

        Agent groups

        For connection purposes you need to have an agent group defined in the Oracle Integration Cloud.

        To define an agent group, you need to select the agents option in the left menu pane.  You can find any already existing agent groups here as well.

        Select the ‘create agent group’ button to define a new agent group and fill in this tiny web form.


        Downloading and extracting the connectivity agent

        For downloading the connectivity agent software you also need to select the agents option in the left menu pane, followed by the download option in the upper menu bar.

        After downloading you have a file called ‘oic_connectivity_agent.zip’, which takes 145.903.548 bytes

        This has a much smaller memory footprint than the former connectivity agent software (ics_conn_agent_installer_180111.0000.1050.zip, which takes 1.867.789.797 bytes).

        For installation of the connectivity agent, you need to copy and extract the file to an installation folder of your choice on the on-premise host.

        After extraction you see several files, amongst which ‘InstallerProfile.cfg’.


        Setting configuration properties

        Before starting the installation you need to edit the content of the file InstallerProfile.cfg.

        Set the value for the property OIC_URL to the right hostname and sslPort *.

        Also set the value for the property agent_GROUP_IDENTIFIER to the name of the agent group  you want the agent to belong to.

        After filling in these properties save the file.
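As an illustration, the edited InstallerProfile.cfg could contain entries along these lines; the hostname, port and group name below are placeholders, so use the values from your own instance details page and agent group:

```
OIC_URL=https://my-oic-instance.example.com:443
agent_GROUP_IDENTIFIER=MY_AGENT_GROUP
```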


        * On the instance details page you can see the right values for the hostname and sslPort. This is the page which shows you the weblogic instances that host your OIC and it looks something like this:

        Certificates

        For my trial purpose I didn’t need a certificate to communicate between the OIC and the on-premise environment.

        But if you do, you can follow the next 2 steps:


        a. Go to the agenthome/agent/cert/ directory.

        b. Run the following command: keytool -importcert -keystore keystore.jks -storepass changeit -keypass password -alias alias_name  -noprompt -file certificate_file

         

        Java JDK

        Before starting the installation of the connectivity agent, make sure your JAVA JDK is at least version 8, with the JAVA_HOME and PATH set.

        To check this, open a terminal window and type: ‘java -version’ (without the quotes)

        You should see the version of the installed java version, eg. java version “1.8.0_131”.

        To add $JAVA_HOME/bin to the PATH setting, type ‘export PATH=$JAVA_HOME/bin:$PATH’ (without the quotes)

        Running the installer

        You can start the connectivity agent installer with the command: ‘java -jar connectivityagent.jar’ (again, without the quotes).

        During the installation you are asked for your OIC username and corresponding password.

        The installation finishes with a message that the agent was installed successfully and is now up and running.

        Check the installed agent

        You can check that the agent is communicating to/under/with the agent group you specified.

        Behind the name of the agent group, the number of agents communicating within it is shown.


        The post How to install the Oracle Integration Cloud on premises connectivity agent (18.1.3) appeared first on AMIS Oracle and Java Blog.

        Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies)

        Sat, 2018-04-14 13:15

        At the Oracle Partner PaaS Summer Camps VII 2017 in Lisbon last year, at the end of august, I attended the API Cloud Platform Service & Integration Cloud Service bootcamp.

        In a series of articles I will give a high-level overview of what you can do with Oracle API Platform Cloud Service.

        At the Summer Camp a pre-built Oracle VM VirtualBox APIPCS appliance (APIPCS_17_3_3.ova) was provided to us, to be used in VirtualBox. Everything needed to run a complete demo of API Platform Cloud Service is contained within Docker containers that are staged in that appliance. The version of Oracle API Platform CS, used within the appliance, is Release 17.3.3 — August 2017.

        See https://docs.oracle.com/en/cloud/paas/api-platform-cloud/whats-new/index.html to learn about the new and changed features of Oracle API Platform CS in the latest release.

        In this article in the series about Oracle API Platform CS, the focus will be on the Management Portal and creating an API (including some policies) .

        Be aware that the screenshot’s in this article and the examples provided, are based on a demo environment of Oracle API Platform CS and were created by using the Oracle VM VirtualBox APIPCS appliance mentioned above.

        This article only covers part of the functionality of Oracle API Platform CS. For more detail I refer you to the documentation: https://cloud.oracle.com/en_US/api-platform.

        Short overview of Oracle API Platform Cloud Service

        Oracle API Platform Cloud Service enables companies to thrive in the digital economy by comprehensively managing the full API lifecycle from design and standardization to documenting, publishing, testing and managing APIs. These tools provide API developers, managers, and users an end-to-end platform for designing and prototyping APIs. Through the platform, users gain the agility needed to support changing business demands and opportunities, while having clear visibility into who is using APIs for better control, security and monetization of digital assets.
        [https://cloud.oracle.com/en_US/api-platform/datasheets]

        Architecture

        Management Portal:
        APIs are managed, secured, and published using the Management Portal.
        The Management Portal is hosted on the Oracle Cloud, managed by Oracle, and users granted
        API Manager privileges have access.

        Gateways:
        API Gateways are the runtime components that enforce all policies, but also help in
        collecting data for analytics. The gateways can be deployed anywhere – on premise, on Oracle
        Cloud or to any third party cloud providers.

        Developer Portal:
        After an API is published, Application Developers use the Developer Portal to discover, register, and consume APIs. The Developer Portal can be customized to run either on the Oracle Cloud or directly in the customer environment on premises.
        [https://cloud.oracle.com/opc/paas/datasheets/APIPCSDataSheet_Jan2018.pdf]

        Oracle Apiary:
        In my article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about using Oracle Apiary and interacting with its Mock Server for the “HumanResourceService” API, I created earlier.

        The Mock Server for the “HumanResourceService” API is listening at:
        http://private-b4874b1-humanresourceservice.apiary-mock.com
        [https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

        Roles

        Within Oracle API Platform CS roles are used.

        Roles determine which interfaces a user is authorized to access and the grants they are eligible to receive.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/api-platform-cloud-service-roles-resources-actions-and-grants.html]

        • Administrator
          System Administrators responsible for managing the platform settings. Administrators possess the rights of all other roles and are eligible to receive grants for all objects in the system.
        • API Manager
          People responsible for managing the API lifecycle, which includes designing, implementing, and versioning APIs. Also responsible for managing grants and applications, providing API documentation, and monitoring API performance.
        • Application Developer
          API consumers granted self-service access rights to discover and register APIs, view API documentation, and manage applications using the Developer Portal.
        • Gateway Manager
          Operations team members responsible for deploying, registering, and managing gateways. May also manage API deployments to their gateways when issued the Deploy API grant by an API Manager.
        • Gateway Runtime
          This role indicates a service account used to communicate from the gateway to the portal. This role is used exclusively for gateway nodes to communicate with the management service; users assigned this role can’t sign into the Management Portal or the Developer Portal.
        • Service Manager
          People responsible for managing resources that define backend services. This includes managing service accounts and services.
        • Plan Manager
          People responsible for managing plans.

        Within the Oracle VM VirtualBox APIPCS appliance the following users (all with password welcome1) are present and used by me in this article:

        • User: api-manager-user – Role: API Manager
        • User: api-gateway-user – Role: Gateway Manager

        Design-First approach

        Design is critical as a first step for great APIs. Collaboration ensures that we are creating the correct design. In my previous article “Oracle API Platform Cloud Service: Design-First approach and using Oracle Apiary”, I talked about the Design-First approach and using Oracle Apiary. I designed a “HumanResourceService” API.
        [https://technology.amis.nl/2018/01/31/oracle-api-platform-cloud-service-design-first-approach-using-oracle-apiary/]

        So with a design in place, an application developer could begin working on the front-end, while service developers work on the back-end implementation and others can work on the API implementation, all in parallel.

        Create an API, via the Management Portal (api-manager-user)

        Start the Oracle API Platform Cloud – Management Portal as user api-manager-user.

        After a successful sign in, the “APIs” screen is visible.

        Create a new API via a click on button “Create API”. Enter the following values:

        • Name: HumanResourceService
        • Version: 1
        • Description: Human Resource Service is an API to manage Human Resources.

        Next, click on button “Create”.

        After a click on the “HumanResourceService” API, the next screen appears (with tab “APIs” selected):

        Here you can see on the left, that the tab “API Implementation” is selected.

        First l will give you a short overview of screenshot’s of each of the tabs on the left. Some of these I will explain in more detail as I will walk you through some of the functionality of Oracle API Platform CS.

        Tab “API Implementation” of the “HumanResourceService” API

        Tab “Deployments” of the “HumanResourceService” API

        Tab “Publication” of the “HumanResourceService” API

        Tab “Grants” of the “HumanResourceService” API

        API grants are issued per API.

        The following tabs are visible and can be chosen:

        • Manage API
          Users issued this grant are allowed to modify the definition of and issue grants for this API.
        • View all details
          Users issued this grant are allowed to view all information about this API in the Management Portal.
        • Deploy API
          Users issued this grant are allowed to deploy or undeploy this API to a gateway for which they have deploy rights. This allows users to deploy this API without first receiving a request from an API Manager.
        • View public details
          Users issued this grant are allowed to view the publicly available details of this API on the Developer Portal.
        • Register
          Users issued this grant are allowed to register applications for this plan.
        • Request registration
          Users issued this grant are allowed to request to register applications for this plan.

        Users and groups issued grants for a specific API have the privileges to perform the associated actions on that API. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-api-grants.html.

        Tab “Registrations” of the “HumanResourceService” API

        Tab “Analytics” of the “HumanResourceService” API

        Tab “API Implementation” of the “HumanResourceService” API

        After you create an API, you can apply policies to configure the Request and Response flows. Policies in the Request flow secure, throttle, route, manipulate, or log requests before they reach the backend service. Polices in the Response flow manipulate and log responses before they reach the requesting client.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html]

        Request flow, configuring the API Request URL

        The API Request URL is the endpoint to which users or applications send requests for your API. You configure part of this URL. This endpoint resides on the gateway on which the API is deployed. The API will be deployed later.

        The full address to which requests are sent consists of the protocol used, the gateway hostname, the API Request endpoint, and any private resource paths available for your service.

        <protocol>://<hostname and port of the gateway node instance>/<API Request endpoint>/<private resource path of the API>

        Anything beyond the API Request endpoint is passed to the backend service.
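As a sketch, the composition of the full request URL can be illustrated as follows. The gateway host and port are taken from the Load Balancer URL of the demo environment used in this article; the variable names are illustrative assumptions:

```python
# Compose the full API Request URL from its parts, following the pattern
# <protocol>://<gateway host:port>/<API Request endpoint>/<private resource path>.
protocol = "http"
gateway = "apics.oracle.com:8001"          # host:port of the gateway (Load Balancer URL)
api_endpoint = "HumanResourceService/1"    # configured in the "API Request" policy
resource = "employees"                     # private resource path, passed to the backend service

url = f"{protocol}://{gateway}/{api_endpoint}/{resource}"
print(url)  # http://apics.oracle.com:8001/HumanResourceService/1/employees
```

This is the same URL that is later used in Postman to invoke the deployed API.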

        Hover over the “API Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

• Your Policy Name: API Request
• Comments: (empty)
• Configuration | Protocol: HTTP ://MyGatewayIP/
• Configuration | API Endpoint URL: HumanResourceService/1

        Next, click on button “Apply”.

        In the pop-up, click on button “Save Changes”.

        Request flow, configuring the Service Request URL

        The Service Request is the URL at which your backend service receives requests.

        When a request meets all policy conditions, the gateway routes the request to this URL and calls your service. Note that the Service Request URL can point to any of your service’s resources, not just its base URL. This way you can restrict users to access only a subset of your API’s resources.

        Hover over the “Service Request” policy and then, on the right, click the icon “Edit policy details”. Enter the following values:

• Configure Headers – Service | Enter a URL: <Enter the Apiary Mock Service URL>

        For example:
        http://private-b4874b1-humanresourceservice.apiary-mock.com

Remark:
Remove the “/employees” from the Mock Service URL, so the API can be designed to call multiple end-points such as “/departments”.

• Use Gateway Node Proxy: uncheck
• Service Account: None

        Next, click on button “Apply”.

        In the pop-up, click on button “Save Changes”.

        Oftentimes, there are multiple teams participating in the development process. There may be front-end developers creating a new mobile app or chatbot, there can be a backend services and integration team and of course the API team.

        If the backend service is not yet ready, you can still start creating the API. Perhaps you may want to begin with a basic implementation (for example an Apiary Mock Service URL) so your front-end developers are already pointing to the API, even before it is fully operational.

        Response Flow

        Click the Response tab to view a top-down visual representation of the response flow. The Service and API Response entries can’t be edited.
        The Service Response happens first. The response from the backend service is always the first entry in the outbound flow. You can place additional policies in this flow. Policies are run in order, with the uppermost policy run first, followed by the next policy, and so on, until the response is sent back to the client.
        The API Response entry is a visual representation of the point in the outbound flow when the response is returned to the requesting client.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html]

        Deploy an API to the Gateway, via the Management Portal (api-manager-user)

        On the left, click on tab “Deployments”.

        Next, click on button “Deploy API”.

In the pop-up “Deploy API” there are no gateways, or they are not visible to the current user. To find out what the situation is with the gateways, we have to sign in to the Oracle API Platform Cloud – Management Portal as a Gateway Manager. There we can also grant the privileges needed to deploy the API. How to do this is described later in this article.

        For now we continue as if the correct privileges were already in place.

So in the pop-up “Deploy API”, select the “Production Gateway” gateway and click on button “Deploy”.

        For a short while a pop-up “Deployment request submitted” appears.

Next, click on tab “Requesting”, where we can see the request (for an API deployment to a gateway) that the user api-manager-user sent to the Gateway Manager. The “Deployment State” is REQUESTING. So now we have to wait for the approval of the Gateway Manager.

        Sign in to the Oracle API Platform Cloud – Management Portal as user api-gateway-user

In the top right of the Oracle API Platform Cloud – Management Portal, click on the api-manager-user and select “Sign Out”. Next, sign in as user api-gateway-user.

        After a successful sign in, the “Gateways” screen is visible.

        Because this user is only a Gateway Manager, only the tab “Gateways” is visible.

        At the moment (in this demo environment) there is one gateway available, being the “Production Gateway”. After a click on the “Production Gateway” gateway, the next screen appears:

        Here you can see on the left, that the tab “Settings” is selected.

First I will give you a short overview of screenshots of each of the tabs on the left. Some of these I will explain in more detail as I walk you through some of the functionality of Oracle API Platform CS.

        Tab “Settings” of the “Production Gateway” gateway

        Have a look at the “Load Balancer URL” (http://apics.oracle.com:8001), which we will be using later on in this article.

        Tab “Nodes” of the “Production Gateway” gateway

        Tab “Deployments” of the “Production Gateway” gateway

        Tab “Grants” of the “Production Gateway” gateway

        Tab “Analytics” of the “Production Gateway” gateway

        Tab “Grants” of the “Production Gateway” gateway

        On the left, click on tab “Grants”.

        Grants are issued per gateway.

        The following tabs are visible and can be chosen:

        • Manage Gateway
          Users issued this grant are allowed to manage API deployments to this gateway and manage the gateway itself.

          Remark:
          The api-gateway-user (with role GatewayManager) is granted the “Manage Gateway” privilege.

        • View all details
          Users issued this grant are allowed to view all information about this gateway.
        • Deploy to Gateway
          Users issued this grant are allowed to deploy or undeploy APIs to this gateway.
        • Request Deployment to Gateway
          Users issued this grant are allowed to request API deployments to this gateway.
        • Node service account
          Gateway Runtime service accounts are issued this grant to allow them to download configuration and upload statistics.

        Users issued grants for a specific gateway have the privileges to perform the associated actions on that gateway. See for more information: https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/managing-gateway-grants.html.

        Click on tab “Request Deployment to Gateway”.

        Next, click on button “Add Grantee”.

        Select “api-manager-user” and click on button “Add”.

        So now, the user api-manager-user (with Role APIManager) is granted the “Request Deployment to Gateway” privilege.

        Remark:
        In practice you would probably grant to a group instead of to a single user.

Be aware that you could also grant the “Deploy to Gateway” privilege, so that approval of the Gateway Manager (for deploying an API to a gateway) is no longer needed. This makes sense for a development environment, for example. Since the Oracle VM VirtualBox APIPCS appliance is using a “Production Gateway” gateway, in this article I chose the request-and-approve mechanism.

        Approve a request for an API deployment to a gateway, via the Management Portal (api-gateway-user)

        On the left, click on tab “Deployments” and then click on tab “Requesting”.

        Hover over the “HumanResourceService” API, then click on button “Approve”.

        In the pop-up, click on button “Yes”.

        Then you can see that on the tab “Waiting”, the deployment is waiting.

        Remark:
        The deployment enters a Waiting state and the logical gateway definition is updated. The endpoint is deployed the next time gateway node(s) poll the management server for the updated gateway definition.

        So after a short while, you can see on the tab “Deployed”, that the deployment is done.

        After a click on the top right icon “Expand”, more details are shown:

        So now the “HumanResourceService” API is deployed on the “Production Gateway” gateway (Node 1). We can also see the active policies in the Request and Response flow of the API Implementation.

        It is time to invoke the API.

        Invoke method “GetAllEmployees” of the “HumanResourceService” API, via Postman

        For invoking the “HumanResourceService” API I used Postman (https://www.getpostman.com) as a REST Client tool.

        In Postman, I created a collection named “HumanResourceServiceCollection”(in order to bundle several requests) and created a request named “GetAllEmployeesRequest”, providing method “GET” and request URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”.

Remember the “API Request URL” that I partly configured in the “API Request” policy, and the “Load Balancer URL” of the “Production Gateway” gateway? Together they make up the full address to which requests have to be sent.

        After clicking on button Send, a response with “Status 200 OK” is shown:

        Because I have not applied any extra policies, the request is passed to the backend service without further validation. This is simply the “proxy pattern”.

        Later on in this article, I will add some policies and send additional requests to validate each one of them.

        Tab “Analytics” of the “Production Gateway” gateway

Go back to the Management Portal (api-gateway-user). In the tab “Analytics”, the request I sent is visible at “Total Requests”.

        If we look, for example, at “Requests By Resource”, the request is also visible.

        Policies

        Policies in API Platform CS serve a number of purposes. You can apply any number of policies to an API definition to secure, throttle, limit traffic, route, or log requests sent to your API. Depending on the policies applied, requests can be rejected if they do not meet criteria you specify when configuring each policy. Policies are run in the order they appear on the Request and Response tabs. A policy can be placed only in certain locations in the execution flow.

        The available policies are:

        Security:

        • OAuth 2.0 | 1.0
        • Key Validation | 1.0
        • Basic Auth | 1.0
        • Service Level Auth | 1.0 Deprecated
        • IP Filter Validation | 1.0
        • CORS | 1.0

        Traffic Management:

        • API Throttling – Delay | 1.0
        • Application Rate Limiting | 1.0
        • API Rate Limiting | 1.0

        Interface Management:

        • Interface Filtering | 1.0
        • Redaction | 1.0
        • Header Validation | 1.0
        • Method Mapping | 1.0

        Routing:

        • Header Based Routing | 1.0
        • Application Based Routing | 1.0
        • Gateway Based Routing | 1.0
        • Resource Based Routing | 1.0

        Other:

        • Service Callout | 2.0
        • Service Callout | 1.0
        • Logging | 1.0
        • Groovy Script | 1.0

        As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management).

        Add a Key Validation Policy, via the Management Portal (api-manager-user)

        Use a key validation policy when you want to reject requests from unregistered (anonymous) applications.

Keys are distributed to clients when they register to use an API on the Developer Portal. At runtime, if the key is not present in the given header or query parameter, or if the application is not registered, the request is rejected; the client receives a 400 Bad Request error if no key validation header or query parameter is passed, or a 403 Forbidden error if an invalid key is passed.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html#GUID-5CBFE528-A74E-4700-896E-154378818E3A]
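The rejection behavior described above can be sketched with a minimal simulation. The header name matches the policy configuration used later in this section; the set of registered keys and the function name are illustrative assumptions:

```python
# Simulate the Key Validation policy: 400 if no key header is passed,
# 403 if the key is not registered, otherwise pass the request on.
REGISTERED_KEYS = {"abc123"}   # hypothetical keys issued to registered applications
KEY_HEADER = "application-key"

def validate_key(headers: dict) -> int:
    key = headers.get(KEY_HEADER)
    if key is None:
        return 400  # Bad Request: no key validation header passed
    if key not in REGISTERED_KEYS:
        return 403  # Forbidden: invalid key
    return 200      # key is valid; request continues to the next policy

print(validate_key({}))                              # 400
print(validate_key({"application-key": "nope"}))     # 403
print(validate_key({"application-key": "abc123"}))   # 200
```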

        This policy requires that you create and register an application, which is described in my next article.

        In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-manager-user.

        Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Security”. Hover over the “Key Validation” policy and then, on the right, click the icon “Apply”. Enter the following values:

• Your Policy Name: Key Validation
• Comments: (empty)
• Place after the following policy: API Request

        Then, click on icon “Next”. Enter the following values:

• Key Delivery Approach: Header
• Key Header: application-key

        Click on button “Apply as Draft”.

        Next, click on button “Save Changes”.

        I applied this as a draft policy, represented as a dashed line around the policy. Draft policies let you “think through” what you want before you have the complete implementation details. This enables you to complete the bigger picture in one sitting and to leave reminders of what is missing to complete the API later.
        When you deploy an API, draft policies are not deployed.

        Add an Interface Filtering Policy, via the Management Portal (api-manager-user)

        Use an interface filtering policy to filter requests based on the resources and methods specified in the request.
        [https://docs.oracle.com/en/cloud/paas/api-platform-cloud/apfad/implementing-apis.html#GUID-69B7BC21-416B-4262-9CE2-9896DEDF2144]

        Navigate to tab “API Implementation” of the “HumanResourceService” API, and then in the “Available Policies” region, expand “Interface Management”. Hover over the “Interface Filtering” policy and then, on the right, click the icon “Apply”. Enter the following values:

• Your Policy Name: Interface Filtering
• Comments: (empty)
• Place after the following policy: Key Validation

        Then, click on icon “Next”.

        In the table below I summarized the requests that I created in the Oracle Apiary Mock Server for the “HumanResourceService” API:

• GetAllEmployeesRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees
• CreateEmployeeRequest (POST): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees
• GetEmployeeRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/100
• UpdateEmployeeRequest (PUT): http://private-b4874b1-humanresourceservice.apiary-mock.com/employees/219
• GetDepartmentRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/departments/30
• GetDepartmentEmployeeRequest (GET): http://private-b4874b1-humanresourceservice.apiary-mock.com/departments/30/employees/119

I want to use an interface filtering policy to filter requests. As an example, I want to pass requests (to the backend service) with the method GET and a resource path starting with “employees” followed by an identifier, or starting with “departments” followed by an identifier, then “employees” and an identifier.

        Select “Pass” from the list.

        At “Filtering Conditions”, “Condition 1” enter the following values:

• Resources: /employees/* ; /departments/*/employees/*
• Methods: GET

Click on button “Apply”.

        Next, click on button “Save Changes”.

        I applied this policy as an active policy, represented as a solid line around the policy.
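The “Pass” behavior of the filtering conditions configured above can be simulated with a small sketch. The function name is hypothetical, and note that Python's fnmatch lets “*” match across “/”, which approximates the gateway's pattern matching closely enough for these examples:

```python
from fnmatch import fnmatch

# Simulate the Interface Filtering policy with action "Pass": only GET
# requests whose resource path matches one of the configured patterns
# are passed to the backend service; everything else is rejected.
PATTERNS = ["/employees/*", "/departments/*/employees/*"]
METHODS = {"GET"}

def passes_filter(method: str, resource: str) -> bool:
    return method in METHODS and any(fnmatch(resource, p) for p in PATTERNS)

print(passes_filter("GET", "/employees/100"))                 # True
print(passes_filter("GET", "/employees"))                     # False (no identifier)
print(passes_filter("POST", "/employees"))                    # False (wrong method)
print(passes_filter("GET", "/departments/30"))                # False (no /employees part)
print(passes_filter("GET", "/departments/30/employees/119"))  # True
```

These outcomes match the 200 and 405 responses observed later in the Postman validation.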

        Redeploy the API, via the Management Portal (api-manager-user)

        Navigate to tab “Deployments” of the “HumanResourceService” API, and then hover over the “Production Gateway” gateway and then, on the right, hover over the icon “Redeploy”.

        Next, click on icon “Latest Iteration”.

        In the pop-up, click on button “Yes”. For a short while a pop-up “Redeploy request submitted” appears.

        Then repeat the steps described before in this article, to approve the request, by switching to a Gateway Manager.

        Remark:
        Click on “Latest Iteration” to deploy the most recently saved iteration of the API.
        Click on “Current Iteration” to redeploy the currently deployed iteration of the API.

        After that, it is time to try out the effect of adding the “Interface Filtering” policy.

        Validating the “Interface Filtering” policy, via Postman

In Postman, I created each request mentioned earlier (in the table) within the collection named “HumanResourceServiceCollection”.

        Then again I invoked each request, to validate it against the “Interface Filtering” policy.

        Invoke method “GetAllEmployees” of the “HumanResourceService” API

        From Postman I invoked the request named “GetAllEmployeesRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”) and a response with “Status 405 Method Not Allowed” is shown:

        Invoke method “CreateEmployee” of the “HumanResourceService” API

        From Postman I invoked the request named “CreateEmployeeRequest” (with method “POST” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees”) and a response with “Status 405 Method Not Allowed” is shown:

        Invoke method “GetEmployee” of the “HumanResourceService” API

From Postman I invoked the request named “GetEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/100”) and a response with “Status 200 OK” is shown:

        Invoke method “UpdateEmployee” of the “HumanResourceService” API

        From Postman I invoked the request named “UpdateEmployeeRequest” (with method “PUT” and URL “http://apics.oracle.com:8001/HumanResourceService/1/employees/219”) and a response with “Status 405 Method Not Allowed” is shown:

        Invoke method “GetDepartment” of the “HumanResourceService” API

        From Postman I invoked the request named “GetDepartmentRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30”) and a response with “Status 405 Method Not Allowed” is shown:

        Invoke method “GetDepartmentEmployee” of the “HumanResourceService” API

        From Postman I invoked the request named “GetDepartmentEmployeeRequest” (with method “GET” and URL “http://apics.oracle.com:8001/HumanResourceService/1/departments/30/employees/119”) and a response with “Status 200 OK” is shown:

        Tab “Analytics” of the “Production Gateway” gateway

        In the top right of the Oracle API Platform Cloud – Management Portal sign in as user api-gateway-user and click on the “Production Gateway” gateway and navigate to the tab “Analytics”.

In this tab, the requests I sent are visible at “Total Requests”.

        If we look, for example, at “Requests By Resource”, the requests are also visible.

        Next, click on icon “Error and Rejections (4 Total)” and if we look, for example, at “Rejection Distribution”, we can see that there were 4 request rejections, because of policy “Interface Filtering”.

So the “Interface Filtering” policy is working correctly.

        Summary

As a follow-up to my previous articles about Oracle API Platform Cloud Service, in this article the focus is on using the Management Portal and creating the “HumanResourceService” API (including some policies).

As an example I have created two policies: Key Validation (Security) and Interface Filtering (Interface Management). The latter policy I deployed to a gateway and validated that it worked correctly, using requests which I created in Postman.

While using the Management Portal in this article, I focused on the roles “API Manager” and “Gateway Manager”. For example, the user api-gateway-user had to approve a request from the api-manager-user to deploy an API to a gateway.

        In a next article the focus will be on validating the “Key Validation” policy and using the “Development Portal”.

        The post Oracle API Platform Cloud Service: using the Management Portal and creating an API (including some policies) appeared first on AMIS Oracle and Java Blog.
