Feed aggregator

How to compile PostgreSQL 11 with support for JIT compilation on RHEL/CentOS 7

Yann Neuhaus - Wed, 2018-06-06 01:30

As you might already know, PostgreSQL 11 will bring support for just-in-time (JIT) compilation. If you want to compile PostgreSQL 11 with JIT support on Red Hat/CentOS 7, a little hack is required (more on the reason below). In this post we’ll look at how you can do it, at least for testing. For production it is of course not recommended, as hacking the Makefile is nothing you want to do; at least I would not do it. Let’s go.

As mentioned I am on CentOS 7:

postgres@pgbox:$ cat /etc/centos-release
CentOS Linux release 7.5.1804 (Core) 

What you need to get support for JIT is LLVM. When you check the CentOS repository, llvm is there:

postgres@pgbox:$ yum search llvm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.spreitzer.ch
 * extras: mirror.spreitzer.ch
 * updates: mirror.spreitzer.ch
===================================================== N/S matched: llvm =====================================================
llvm-devel.x86_64 : Libraries and header files for LLVM
llvm-doc.noarch : Documentation for LLVM
llvm-libs.x86_64 : LLVM shared libraries
llvm-ocaml.x86_64 : OCaml binding for LLVM
llvm-ocaml-devel.x86_64 : Development files for llvm-ocaml
llvm-ocaml-doc.noarch : Documentation for LLVM's OCaml binding
llvm-private.i686 : llvm engine for Mesa
llvm-private.x86_64 : llvm engine for Mesa
llvm-private-devel.i686 : Libraries and header files for LLVM
llvm-private-devel.x86_64 : Libraries and header files for LLVM
llvm-static.x86_64 : LLVM static libraries
mesa-private-llvm.i686 : llvm engine for Mesa
mesa-private-llvm.x86_64 : llvm engine for Mesa
mesa-private-llvm-devel.i686 : Libraries and header files for LLVM
mesa-private-llvm-devel.x86_64 : Libraries and header files for LLVM
clang.x86_64 : A C language family front-end for LLVM
llvm.x86_64 : The Low Level Virtual Machine

  Name and summary matches only, use "search all" for everything.

The issue is that the PostgreSQL documentation clearly states that LLVM needs to be at least version 3.9, while in the CentOS repository you’ll find this:

postgres@pgbox:$ yum info llvm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror1.hs-esslingen.de
 * extras: mirror.fra10.de.leaseweb.net
 * updates: mirror.netcologne.de
Available Packages
Name        : llvm
Arch        : x86_64
Version     : 3.4.2
Release     : 8.el7
Size        : 1.3 M
Repo        : extras/7/x86_64
Summary     : The Low Level Virtual Machine
URL         : http://llvm.org/
License     : NCSA
Description : LLVM is a compiler infrastructure designed for compile-time,
            : link-time, runtime, and idle-time optimization of programs from
            : arbitrary programming languages.  The compiler infrastructure includes
            : mirror sets of programming tools as well as libraries with equivalent
            : functionality.

What to do? You need to add the EPEL repository, where llvm 3.9 and 5.0 are available:

postgres@pgbox:$ wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
postgres@pgbox:$ sudo yum localinstall epel-release-latest-7.noarch.rpm
postgres@pgbox:$ sudo yum install llvm5.0 llvm5.0-devel clang

Having that we should be ready for configuration:

postgres@pgbox:$ PGHOME=/u01/app/postgres/product/11/db_0
postgres@pgbox:$ SEGSIZE=1
postgres@pgbox:$ BLOCKSIZE=8
postgres@pgbox:$ WALSEGSIZE=16
postgres@pgbox:$ ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-llvm LLVM_CONFIG='/usr/lib64/llvm3.9/bin/llvm-config' \
            --with-systemd 

That succeeds, so let’s compile:

postgres@pgbox:$ make -j 4 all

… and you will run into this issue:

/usr/bin/clang -Wno-ignored-attributes -fno-strict-aliasing -fwrapv -O2  -I../../src/include  -D_GNU_SOURCE  -flto=thin -emit-llvm -c -o localtime.bc localtime.c
clang: error: unknown argument: '-flto=thin'
make[2]: *** [localtime.bc] Error 1
make[2]: Leaving directory `/home/postgres/postgresql/src/timezone'
make[1]: *** [all-timezone-recurse] Error 2
make[1]: Leaving directory `/home/postgres/postgresql/src'
make: *** [all-src-recurse] Error 2

There is a mail thread on the pgsql-hackers mailing list which describes the issue. Apparently the installed clang compiler is too old to understand this argument. What you could do, for testing only, is adjust the corresponding line in src/Makefile.global.in:

postgres@pgbox:$ vi src/Makefile.global.in
# original line:
COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c
# adjusted line (the -flto=thin flag removed):
COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -emit-llvm -c

Running configure and make again afterwards:

postgres@pgbox:$ make clean
postgres@pgbox:$ ./configure --prefix=${PGHOME} \
            --exec-prefix=${PGHOME} \
            --bindir=${PGHOME}/bin \
            --libdir=${PGHOME}/lib \
            --sysconfdir=${PGHOME}/etc \
            --includedir=${PGHOME}/include \
            --datarootdir=${PGHOME}/share \
            --datadir=${PGHOME}/share \
            --with-pgport=5432 \
            --with-perl \
            --with-python \
            --with-openssl \
            --with-pam \
            --with-ldap \
            --with-libxml \
            --with-libxslt \
            --with-segsize=${SEGSIZE} \
            --with-blocksize=${BLOCKSIZE} \
            --with-wal-segsize=${WALSEGSIZE}  \
            --with-llvm LLVM_CONFIG='/usr/lib64/llvm3.9/bin/llvm-config' \
            --with-systemd 
postgres@pgbox:$ make -j 4 all

… led to the following issue (at least for me):

make[2]: g++: Command not found
make[2]: *** [llvmjit_error.o] Error 127
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory `/home/postgres/postgresql/src/backend/jit/llvm'
make[1]: *** [all-backend/jit/llvm-recurse] Error 2
make[1]: Leaving directory `/home/postgres/postgresql/src'
make: *** [all-src-recurse] Error 2

This should be easy to fix:

postgres@pgbox:$ sudo yum install -y gcc-c++
postgres@pgbox:$ which g++
/bin/g++

Again:

postgres@pgbox:$ make -j 4 install
postgres@pgbox:$ cd contrib
postgres@pgbox:$ make -j 4 install

… and this time it succeeds (note that I did not run the regression tests, so maybe something will still go wrong there).

JIT is enabled by default and controlled by these parameters:

postgres=# select name,setting from pg_settings where name like 'jit%';
          name           | setting 
-------------------------+---------
 jit                     | on
 jit_above_cost          | 100000
 jit_debugging_support   | off
 jit_dump_bitcode        | off
 jit_expressions         | on
 jit_inline_above_cost   | 500000
 jit_optimize_above_cost | 500000
 jit_profiling_support   | off
 jit_provider            | llvmjit
 jit_tuple_deforming     | on
(10 rows)

To test that it really kicks in you can do something like this:

postgres=# create table ttt (a int, b text, c date );
postgres=# insert into ttt (a,b,c)
           select aa.*, md5(aa::text), now()
             from generate_series(1,1000000) aa;
postgres=# set jit_above_cost=5;
postgres=# set jit_optimize_above_cost=5;
postgres=# set jit_inline_above_cost=5;
postgres=# explain select sum(a) from ttt;

… which should lead to a plan like this:

                                      QUERY PLAN                                       
---------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=15554.55..15554.56 rows=1 width=8)
   ->  Gather  (cost=15554.33..15554.54 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=14554.33..14554.34 rows=1 width=8)
               ->  Parallel Seq Scan on ttt  (cost=0.00..13512.67 rows=416667 width=4)
 JIT:
   Functions: 8
   Inlining: true
   Optimization: true
(9 rows)

Hope that helps.

 

The article How to compile PostgreSQL 11 with support for JIT compilation on RHEL/CentOS 7 appeared first on Blog dbi services.

Composite Range-Hash interval partitioning with LOB

Tom Kyte - Tue, 2018-06-05 13:06
Hi, I would like to partition a table with a LOB column using a Range-Hash interval partitioning scheme. But I am not sure how the exact partition gets distributed in this scenario, and I also noticed the following differences based on how I specify my p...
Categories: DBA Blogs

Larry Ellison Debuts Automated Oracle Offering that Dramatically Cuts Cloud Upgrade Costs

Oracle Press Releases - Tue, 2018-06-05 13:00
Press Release
Larry Ellison Debuts Automated Oracle Offering that Dramatically Cuts Cloud Upgrade Costs
Oracle’s automated cloud migration solution enables customers to reduce upgrade costs, accelerate time to value and improve delivery predictability

Redwood Shores, Calif.—Jun 5, 2018

Oracle (NYSE: ORCL) CTO and Chairman, Larry Ellison, today unveiled the world’s first automated enterprise cloud application upgrade product that will enable Oracle customers to reduce the time and cost of cloud migration by up to 30 percent. By providing a complete set of automated tools and proven cloud transition methodologies, the new “Soar to the Cloud” solution enables customers with applications running on premises to upgrade to Oracle Cloud Applications in as little as 20 weeks.

“It’s now easier to move from Oracle E-Business Suite to Oracle Fusion ERP in the cloud, than it is to upgrade from one version of E-Business Suite to another,” said Ellison. “A lot of tedious transitions that people once did manually are now automated. If you choose Oracle Soar, it will be the last upgrade you’ll ever do.”

Oracle Soar includes a discovery assessment, process analyzer, automated data and configuration migration utilities, and rapid integration tools. The automated process is powered by the True Cloud Method, Oracle’s proprietary approach to support customers throughout the journey to the cloud. It is guided by a dedicated Oracle concierge service to help ensure a rapid and predictable upgrade that aligns with modern, industry best practices. Customers can keep the upgrade on-track by monitoring the status of their cloud transition via an intuitive mobile application, which features a step-by-step implementation guide indicating exactly what needs to be done each day.

“We know the power of automation in solving business problems for our customers—it’s baked into all of our applications,” said Beth Boettcher, senior vice president, North American applications consulting, Oracle. “We’ve applied the same thinking to the cloud upgrade process to create an end-to-end solution that will enable our customers to experience a rapid, predictable, and cost-effective journey to the cloud.”

Oracle Soar to the Cloud

Leading organizations are choosing Oracle Cloud:

 

The Oracle Soar offering is available today for Oracle E-Business Suite, Oracle PeopleSoft and Oracle Hyperion Planning customers who are moving to Oracle ERP Cloud, Oracle SCM Cloud and Oracle EPM Cloud. Oracle will continue to invest in the development of the product, extending the solution to Oracle PeopleSoft and Oracle E-Business Suite customers moving to Oracle HCM Cloud, and Oracle Siebel customers moving to Oracle CX Cloud in the future.

Additional Information

For additional information on Oracle Soar, visit oracle.com/soar

Contact Info
Bill Rundle
Oracle
650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The above is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Talk to a Press Contact

Bill Rundle

  • 650.506.1891

Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX

Amis Blog - Tue, 2018-06-05 13:00

In a previous article I talked about an OSBServiceExplorer tool to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. The code mentioned in that article was based on Oracle Service Bus 11.1.1.7 (11g).

In the meantime the OSB world has changed (for example now we can use pipelines) and it was time for me to pick up the old code and get it working within Oracle Service Bus 12.2.1.1.0 (12c).

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX in a 12c environment.

Unfortunately, getting the Java code to work in 12c wasn’t as straightforward as I had hoped.

For more details on the OSB, WebLogic Server MBeans and JMX subject, I kindly refer you to my previous article. In this article I will refer to it as my previous MBeans 11g article.
[https://technology.amis.nl/2017/03/09/oracle-service-bus-service-exploring-via-weblogic-server-mbeans-with-jmx/]

Before using the OSBServiceExplorer tool in a 12c environment, I first created two OSB projects (MusicService and TrackService) with pipelines, proxy services and business services. I used Oracle JDeveloper 12c (12.2.1.1.0) for this (from within a VirtualBox appliance).

For the latest version of Oracle Service Bus see:
http://www.oracle.com/technetwork/middleware/service-bus/downloads/index.html

If you want to use a VirtualBox appliance, have a look at for example: Pre-built Virtual Machine for SOA Suite 12.2.1.3.0
[http://www.oracle.com/technetwork/middleware/soasuite/learnmore/vmsoa122130-4122735.html]

After deploying the OSB projects that were created in JDeveloper to the WebLogic Server, the Oracle Service Bus Console 12c (in my case: http://localhost:7101/servicebus) looks like:

Before we dive into the OSBServiceExplorer tool, I first give you some details about the “TrackService” (from JDeveloper) that will be used as an example in this article.

The “TrackService” sboverview looks like:

As you can see, several proxy services, a pipeline and a business service are present.

The Message Flow of pipeline “TrackServicePipeline” looks like:

The OSB Project structure of service “TrackService” looks like:

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain are written to file.

Example content fragment of the text file:

Found server runtimes:
– Server name: DefaultServer. Server state: RUNNING

For more info and the responsible code fragment see my previous MBeans 11g article.
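
For convenience, here is a condensed fragment (based on the full listing at the end of this article) that retrieves the server runtimes via the DomainRuntimeServiceMBean and writes their name and state to file:

ServerRuntimeMBean[] serverRuntimes =
    domainRuntimeServiceMBean.getServerRuntimes();

fileWriter.write("Found server runtimes:\n");
for (ServerRuntimeMBean serverRuntimeMBean : serverRuntimes) {
    String name = serverRuntimeMBean.getName();
    String state = serverRuntimeMBean.getState();
    fileWriter.write("- Server name: " + name +
                     ". Server state: " + state + "\n");
}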

List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 45 refs, including the following pipelines, proxy and business services:
– ProxyService: TrackService/proxy/TrackServiceRest
– BusinessService: MusicService/business/db_InsertCD
– BusinessService: TrackService/business/CDService
– Pipeline: TrackService/pipeline/TrackServicePipeline
– ProxyService: TrackService/proxy/TrackService
– Pipeline: MusicService/pipeline/MusicServicePipeline
– ProxyService: MusicService/proxy/MusicService
– ProxyService: TrackService/proxy/TrackServiceRestJSON

See the code fragment below; compared to the 11g version of the code, the main change is the handling of the new “Pipeline” type:

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

fileWriter.write("Found total of " + refs.size() +
                 " refs, including the following pipelines, proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {
        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("Pipeline")) {
        fileWriter.write("- Pipeline: " +
                         ref.getFullName() + "\n");                    
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " +
                         ref.getFullName() + "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

For more info see my previous MBeans 11g article.

ResourceConfigurationMBean

In the Oracle Enterprise Manager Fusion Middleware Control 12c (in my case: http://localhost:7101/em) I navigated to SOA / service-bus and opened the System MBean Browser:

Here the ResourceConfigurationMBeans can be found under com.oracle.osb.


[Via MBean Browser]
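
In the tool, these MBeans are looked up by querying the com.oracle.osb domain for all ObjectNames of type ResourceConfigurationMBean (fragment taken from the full listing at the end of this article):

String domain = "com.oracle.osb";
String objectNamePattern =
    domain + ":" + "Type=ResourceConfigurationMBean,*";

Set<ObjectName> osbResourceConfigurations =
    connection.queryNames(new ObjectName(objectNamePattern), null);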

If we navigate to a particular ResourceConfigurationMBean for a proxy service (for example …$proxy$TrackService), the information on the right is as follows:


[Via MBean Browser]

As in the 11g version the attributes Configuration, Metadata and Name are available.

If we navigate to a particular ResourceConfigurationMBean for a pipeline (for example …$pipeline$TrackServicePipeline), the information on the right is as follows:


[Via MBean Browser]

As you can see the value for attribute “Configuration” for this pipeline is “Unavailable”.

Remember the following Java code in OSBServiceExplorer.java (see my previous MBeans 11g article):

for (ObjectName osbResourceConfiguration :
    osbResourceConfigurations) {
 
    CompositeDataSupport configuration =
        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                      "Configuration");

So now, apparently, getting the configuration can result in a NullPointerException. This has to be dealt with in the new 12c version of OSBServiceExplorer.java, besides the fact that a pipeline is now a new resource type.
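
In the 12c version of the tool this is handled by wrapping the attribute retrieval in a try/catch block; when the Configuration attribute of a pipeline cannot be read, only a remark is written to file (condensed fragment, based on the full listing at the end of this article):

for (ObjectName osbResourceConfiguration : osbResourceConfigurations) {

    String canonicalName = osbResourceConfiguration.getCanonicalName();

    try {
        CompositeDataSupport configuration =
            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                          "Configuration");
        // ... process the Configuration attribute (see the full listing below) ...
    } catch (Exception e) {
        if (canonicalName.contains("Pipeline")) {
            fileWriter.write("  Resource is a Pipeline (without available Configuration)" + "\n");
        } else {
            e.printStackTrace();
        }
    }
}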

But of course, for our OSB service explorer we are in particular interested in the elements (nodes) of the pipeline. In order to get this information available in the System MBean Browser, something has to be done.

Via the Oracle Enterprise Manager Fusion Middleware Control 12c I navigated to SOA / service-bus / Home / Projects / TrackService and clicked on the tab “Operations”:

Here you can see the Operations settings of this particular service.

Next I clicked on the pipeline “TrackServicePipeline”, where I enabled “Monitoring”.

If we then navigate back to the ResourceConfigurationMBean for pipeline “TrackServicePipeline”, the information on the right is as follows:


[Via MBean Browser]

So now the wanted configuration information is available.

Remark:
For the pipeline “MusicServicePipeline” the monitoring is still disabled, so the configuration is still unavailable.

Diving into attribute Configuration of the ResourceConfigurationMBean

For each found pipeline, proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

Proxy service configuration:
Please see my previous MBeans 11g article.

Business service configuration:
Please see my previous MBeans 11g article.

Pipeline configuration:
Below is an example of a pipeline configuration (content fragment of the text file):

Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

If the pipeline configuration is unavailable, the following is shown:

Resource is a Pipeline (without available Configuration)

The pipelines can be recognized by the Pipeline$ prefix.
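
In the code this is used to recognize a pipeline from the canonical name of its ResourceConfigurationMBean; for a pipeline only the service-type is written to file (condensed fragment, based on the full listing at the end of this article):

if (canonicalName.contains("Pipeline")) {
    String servicetype =
        (String)configuration.get("service-type");

    fileWriter.write("  Configuration of " + canonicalName +
                     ":" + " service-type=" + servicetype + "\n");
}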

Pipeline, element hierarchy

In the 11g version of OSBServiceExplorer.java, for a proxy service the elements (nodes) of the pipeline were investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");

In 12c, however, this doesn’t work for a proxy service; the same code can still be used for a pipeline.
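
In the 12c version of the tool the nodes are therefore only retrieved when the resource is a pipeline (condensed fragment, based on the full listing at the end of this article):

if (canonicalName.contains("Pipeline")) {
    CompositeDataSupport pipeline =
        (CompositeDataSupport)configuration.get("pipeline");
    TabularDataSupport nodes =
        (TabularDataSupport)pipeline.get("nodes");
    // ... iterate over the nodes (see the full listing below) ...
}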

For pipeline “TrackServicePipeline”, the configuration (including nodes) looks like:


[Via MBean Browser]

Based on the nodes information (with node-id) in the MBean Browser and the content of pipeline “TrackServicePipeline.pipeline” the following structure can be put together:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .pipeline file for the _ActionId- identification, mentioned as the value for the name key.

Example of the details of node with node-id = 4 and name = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc:


[Via MBean Browser]

Content of pipeline “TrackServicePipeline.pipeline”:

<?xml version="1.0" encoding="UTF-8"?>
<con:pipelineEntry xmlns:con="http://www.bea.com/wli/sb/pipeline/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:con1="http://www.bea.com/wli/sb/stages/config" xmlns:con2="http://www.bea.com/wli/sb/stages/routing/config" xmlns:con3="http://www.bea.com/wli/sb/stages/transform/config">
    <con:coreEntry>
        <con:binding type="SOAP" isSoap12="false" xsi:type="con:SoapBindingType">
            <con:wsdl ref="TrackService/proxy/TrackService"/>
            <con:binding>
                <con:name>TrackServiceBinding</con:name>
                <con:namespace>http://trackservice.services.soatraining.amis/</con:namespace>
            </con:binding>
        </con:binding>
        <con:xqConfiguration>
            <con:snippetVersion>1.0</con:snippetVersion>
        </con:xqConfiguration>
    </con:coreEntry>
    <con:router>
        <con:flow>
            <con:route-node name="RouteNode1">
                <con:context>
                    <con1:userNsDecl prefix="trac" namespace="http://trackservice.services.soatraining.amis/"/>
                </con:context>
                <con:actions>
                    <con2:route>
                        <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc</con1:id>
                        <con2:service ref="TrackService/business/CDService" xsi:type="ref:BusinessServiceRef" xmlns:ref="http://www.bea.com/wli/sb/reference"/>
                        <con2:operation>getTracksForCD</con2:operation>
                        <con2:outboundTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff9</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDRequest"/>
                                        <con1:param name="getTracksForCDRequest">
                                            <con1:path>$body/trac:getTracksForCDRequest</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:outboundTransform>
                        <con2:responseTransform>
                            <con3:replace varName="body" contents-only="true">
                                <con1:id>_ActionId-7f000001.N38d9a220.0.163b507de28.N7ff6</con1:id>
                                <con3:location>
                                    <con1:xpathText>.</con1:xpathText>
                                </con3:location>
                                <con3:expr>
                                    <con1:xqueryTransform>
                                        <con1:resource ref="TrackService/Resources/xquery/CDService_getTracksForCDResponse"/>
                                        <con1:param name="getTracksForCDResponse">
                                            <con1:path>$body/*[1]</con1:path>
                                        </con1:param>
                                    </con1:xqueryTransform>
                                </con3:expr>
                            </con3:replace>
                        </con2:responseTransform>
                    </con2:route>
                </con:actions>
            </con:route-node>
        </con:flow>
    </con:router>
</con:pipelineEntry>

It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Because we are only interested in certain kinds of nodes (Route, Java Callout, Service Callout, etc.), some filtering is needed. For more info about this, see my previous MBeans 11g article.
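
In the tool this filtering is done on the type and label of each node; only certain actions (route, wsCallout, javaCallout) and the route/branch nodes are printed (condensed fragment, based on the full listing at the end of this article):

String label = (String)compositeData.get("label");
String type = (String)compositeData.get("type");

if (type.equals("Action") &&
    (label.contains("wsCallout") ||
     label.contains("javaCallout") ||
     label.contains("route"))) {
    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
} else if (type.equals("OperationalBranchNode") ||
           type.equals("RouteNode")) {
    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
}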

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found pipeline the metadata information (dependencies and dependents) is written to file.

Example content fragment of the text file:

Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
dependencies:
– BusinessService$TrackService$business$CDService
– WSDL$TrackService$proxy$TrackService

dependents:
– ProxyService$TrackService$proxy$TrackService
– ProxyService$TrackService$proxy$TrackServiceRest
– ProxyService$TrackService$proxy$TrackServiceRestJSON

As can be seen in the MBean Browser, the metadata for a particular pipeline shows the dependencies on other resources (like business services and WSDLs) and other services that are dependent on the pipeline.

For more info and the responsible code fragment see my previous MBeans 11g article.

Remark:
In the Java code, dependencies on XQuery resources are filtered out and not written to the text file.
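
The responsible fragment reads the Metadata attribute and skips dependencies whose name contains “Xquery” (condensed fragment, based on the full listing at the end of this article):

CompositeDataSupport metadata =
    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                  "Metadata");

String[] dependencies = (String[])metadata.get("dependencies");
for (String dependency : dependencies) {
    if (!dependency.contains("Xquery")) {
        fileWriter.write("      - " + dependency + "\n");
    }
}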

MBeans with regard to version 11.1.1.7

In the sample java code shown at the end of my previous MBeans 11g article, the use of the following MBeans can be seen:

MBean and other classes, and the jar file they are located in:
- weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
- weblogic.management.runtime.ServerRuntimeMBean.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
- com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class: <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-api.jar
- com.bea.wli.config.Ref.class: <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar
- weblogic.management.jmx.MBeanServerInvocationHandler.class: <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar
- com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class: <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar

Therefore, in JDeveloper 11g, the following Project Libraries and Classpath settings were made:

Description and class path:
- Com.bea.common.configfwk_1.6.0.0.jar: /oracle/fmwhome/Oracle_OSB1/modules/com.bea.common.configfwk_1.6.0.0.jar
- Sb-kernel-api.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-api.jar
- Sb-kernel-impl.jar: /oracle/fmwhome/Oracle_OSB1/lib/sb-kernel-impl.jar
- Wlfullclient.jar: /oracle/fmwhome/wlserver_10.3/server/lib/wlfullclient.jar

For more info about these MBeans, see my previous MBeans 11g article.

In order to connect to a WebLogic MBean Server in my previous MBeans 11g article I used the thick client wlfullclient.jar.

This library is not provided by default in a WebLogic installation and must be built. How to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via this URL: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

So I built wlfullclient.jar as follows:

cd <Middleware Home Directory>/wlserver_10.3/server/lib
java -jar wljarbuilder.jar

In the sample Java code shown at the end of this article, the use of the same MBeans can be seen. However, in JDeveloper 12c, changes to the Project Libraries and Classpath settings were necessary, due to changes in the jar files used in the 12c environment. Also, wlfullclient.jar is deprecated as of WebLogic Server 12.1.3!

Overview of WebLogic Client jar files (client / jar file / protocol):
- WebLogic Full Client: weblogic.jar (6 KB; via the manifest file MANIFEST.MF, classes in other JAR files are referenced), protocol T3
- WebLogic Full Client: wlfullclient.jar (111.131 KB; deprecated as of WebLogic Server 12.1.3), protocol T3
- WebLogic Thin Client: wlclient.jar (2.128 KB), protocol IIOP
- WebLogic Thin Client: wljmxclient.jar (238 KB), protocol IIOP
- WebLogic Thin T3 Client: wlthint3client.jar (7.287 KB), protocol T3

Remark with regard to version 12.2.1:

Due to changes in the JDK, WLS no longer supports JMX with just the wlclient.jar. To use JMX, you must use either the ”full client” (weblogic.jar) or wljmxclient.jar.
[https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]
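
The method that sets up this JMX connection (initConnection, of which only the beginning is visible at the end of this article) is not listed here in full; a minimal sketch of such a method, assuming the standard t3 connection to the Domain Runtime MBean Server and the weblogic.management.remote protocol provider package, could look like this:

public static void initConnection(String hostname, String portString,
                                  String username,
                                  String password) throws IOException,
                                                          MalformedURLException {
    // Sketch: connect to the Domain Runtime MBean Server over t3.
    String protocol = "t3";
    int port = Integer.valueOf(portString).intValue();
    String jndiroot = "/jndi/";
    String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

    JMXServiceURL serviceURL =
        new JMXServiceURL(protocol, hostname, port, jndiroot + mbeanserver);

    Hashtable<String, String> h = new Hashtable<String, String>();
    h.put(Context.SECURITY_PRINCIPAL, username);
    h.put(Context.SECURITY_CREDENTIALS, password);
    h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
          "weblogic.management.remote");
    connector = JMXConnectorFactory.connect(serviceURL, h);
    connection = connector.getMBeanServerConnection();
}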

WebLogic Full Client

The WebLogic full client, wlfullclient.jar, is deprecated as of WebLogic Server 12.1.3 and may be removed in a future release. Oracle recommends using the WebLogic Thin T3 client or other appropriate client depending on your environment.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT130]

For WebLogic Server 10.0 and later releases, client applications need to use the wlfullclient.jar file instead of the weblogic.jar. A WebLogic full client is a Java RMI client that uses Oracle’s proprietary T3 protocol to communicate with WebLogic Server, thereby leveraging the Java-to-Java model of distributed computing.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

Not all functionality available with weblogic.jar is available with the wlfullclient.jar. For example, wlfullclient.jar does not support Web Services, which requires the wseeclient.jar. Nor does wlfullclient.jar support operations necessary for development purposes, such as ejbc, or support administrative operations, such as deployment, which still require using the weblogic.jar.
[https://docs.oracle.com/middleware/1213/wls/SACLT/t3.htm#SACLT376]

WebLogic Thin Client

In order to connect to a WebLogic MBean Server, it is also possible to use a thin client wljmxclient.jar (in combination with wlclient.jar). This JAR contains Oracle’s implementation of the HTTP and IIOP protocols.

Remark:
wlclient.jar is included in wljmxclient.jar‘s MANIFEST ClassPath entry, so wlclient.jar and wljmxclient.jar need to be in the same directory, or both jars need to be specified on the classpath.

Ensure that weblogic.jar or wlfullclient.jar is not included in the classpath if wljmxclient.jar is included. Only the thin client wljmxclient.jar/wlclient.jar or the thick client wlfullclient.jar should be used, but not a combination of both. [https://docs.oracle.com/middleware/1221/wls/JMXCU/accesswls.htm#JMXCU144]

WebLogic Thin T3 Client

The WebLogic Thin T3 Client jar (wlthint3client.jar) is a light-weight, high performing alternative to the wlfullclient.jar and wlclient.jar (IIOP) remote client jars. The Thin T3 client has a minimal footprint while providing access to a rich set of APIs that are appropriate for client usage. As its name implies, the Thin T3 Client uses the WebLogic T3 protocol, which provides significant performance improvements over the wlclient.jar, which uses the IIOP protocol.

The Thin T3 Client is the recommended option for most remote client use cases. There are some limitations in the Thin T3 client, as outlined below. For those few use cases, you may need to use the full client or the IIOP thin client.

Limitations and Considerations:

This release does not support the following:

  • Mbean-based utilities (such as JMS Helper, JMS Module Helper), and JMS multicast are not supported. You can use JMX calls as an alternative to “mbean-based helpers.”
  • JDBC resources, including WebLogic JDBC extensions.
  • Running a WebLogic RMI server in the client.

The Thin T3 client uses JDK classes to connect to the host, including when connecting to dual-stacked machines. If multiple addresses are available on the host, the connection may attempt to go to the wrong address and fail if the host is not properly configured.
[https://docs.oracle.com/middleware/12212/wls/SACLT/wlthint3client.htm#SACLT387]

MBeans with regard to version 12.2.1

As I mentioned earlier in this article, in order to get the Java code working in a 12.2.1 environment, I had to make some changes.

MBean and other classes, and the jar file they are located in:
- weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean.class: <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
- weblogic.management.runtime.ServerRuntimeMBean.class: <Middleware Home Directory>/wlserver/server/lib/wlfullclient.jar
- com.bea.wli.sb.management.configuration.ALSBConfigurationMBean.class: <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-api.jar
- com.bea.wli.config.Ref.class: <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.configfwk.jar
- weblogic.management.jmx.MBeanServerInvocationHandler.class: <Middleware Home Directory>/wlserver/modules/com.bea.core.management.jmx.jar
- com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class: <Middleware Home Directory>/osb/lib/modules/oracle.servicebus.kernel-wls.jar

In JDeveloper 12c, the following Project Libraries and Classpath settings were made (at first):

Description and class path:
- Com.bea.core.management.jmx.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
- Oracle.servicebus.configfwk.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
- Oracle.servicebus.kernel-api.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
- Oracle.servicebus.kernel-wls.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
- Wlfullclient.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlfullclient.jar

Using wlfullclient.jar:
At first I still used the thick client wlfullclient.jar (despite the fact that it’s deprecated), which I built as follows:

cd <Middleware Home Directory>/wlserver/server/lib
java -jar wljarbuilder.jar
Creating new jar file: wlfullclient.jar

wlfullclient.jar and jarbuilder are deprecated starting from the WebLogic 12.1.3 release.
Please use one of the equivalent stand-alone clients instead. Consult Oracle WebLogic public documents for details.

Compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using weblogic.jar:
When I replaced wlfullclient.jar with weblogic.jar, the OSBServiceExplorer tool also worked.

Using wlclient.jar:
When I replaced wlfullclient.jar with wlclient.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlclient.jar and wljmxclient.jar:
Also adding wljmxclient.jar did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Adding wls-api.jar:
So in order to try resolving the errors shown above, I also added wls-api.jar. But then I got an error on:

String name = serverRuntimeMBean.getName();

I then decided to go for the WebLogic Thin T3 client wlthint3client.jar, which is the one recommended by Oracle.

Using wlthint3client.jar:
When I replaced wlfullclient.jar with wlthint3client.jar, the OSBServiceExplorer tool did not work, because of errors on:

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;

Using wlthint3client.jar and wls-api.jar:
So in order to try resolving the errors shown above, I also added wls-api.jar. But then again I got an error on:

String name = serverRuntimeMBean.getName();

However, I could run the OSBServiceExplorer tool in JDeveloper, but then I got the error:

Error(160,49): cannot access weblogic.security.ServerRuntimeSecurityAccess; class file for weblogic.security.ServerRuntimeSecurityAccess not found

I found that the following jar files could solve this error:

For the time being I extracted the needed class file (weblogic.security.ServerRuntimeSecurityAccess.class) from the smallest jar file to a lib directory on the file system, and in JDeveloper added that lib directory to the Classpath of the Project.

As it turned out I had to repeat these steps for the following errors I still got after I extended the Classpath:

Exception in thread “main” java.lang.NoClassDefFoundError: weblogic/utils/collections/WeakConcurrentHashMap

Exception in thread “main” java.lang.NoClassDefFoundError: weblogic/management/runtime/TimeServiceRuntimeMBean

Exception in thread “main” java.lang.NoClassDefFoundError: weblogic/management/partition/admin/ResourceGroupLifecycleOperations$RGState

After that, compiling and running the OSBServiceExplorer tool in JDeveloper worked.

Using the lib directory with the extracted class files was not what I wanted; adding the jar files mentioned above seemed a better idea. So I picked the jar files with the smallest size to get the job done, and discarded the lib directory.

So in the end, in JDeveloper 12c, the following Project Libraries and Classpath settings were made:

Description and class path:
- Com.bea.core.management.jmx.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.bea.core.management.jmx.jar
- Com.oracle.weblogic.management.base.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.management.base.jar
- Com.oracle.weblogic.security.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/com.oracle.weblogic.security.jar
- Com.oracle.webservices.wls.jaxrpc-client.jar: /u01/app/oracle/fmw/12.2/wlserver/modules/clients/com.oracle.webservices.wls.jaxrpc-client.jar
- Oracle.servicebus.configfwk.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.configfwk.jar
- Oracle.servicebus.kernel-api.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-api.jar
- Oracle.servicebus.kernel-wls.jar: /u01/app/oracle/fmw/12.2/osb/lib/modules/oracle.servicebus.kernel-wls.jar
- Wlthint3client.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wlthint3client.jar
- Wls-api.jar: /u01/app/oracle/fmw/12.2/wlserver/server/lib/wls-api.jar

Shell script

For ease of use, a shell script file was created to call the OSBServiceExplorer tool, which uses MBeans to explore pipeline, proxy and business services. The WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

The content of the shell script file “OSBServiceExplorer” is:

#!/bin/bash

# Script to call OSBServiceExplorer

echo "Start calling OSBServiceExplorer"

java -classpath "OSBServiceExplorer.jar:oracle.servicebus.configfwk.jar:com.bea.core.management.jmx.jar:oracle.servicebus.kernel-api.jar:oracle.servicebus.kernel-wls.jar:wlthint3client.jar:wls-api.jar:com.oracle.weblogic.security.jar:com.oracle.webservices.wls.jaxrpc-client.jar:com.oracle.weblogic.management.base.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

echo "End calling OSBServiceExplorer"

In the shell script file, a class named OSBServiceExplorer is called via the java executable. The main method of this class expects the following parameters:

Parameter name and description:
- HOSTNAME: host name of the AdminServer
- PORT: port of the AdminServer
- USERNAME: username
- PASSWORD: password
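
The main method itself is not included in the listing at the end of this article; a minimal sketch of how these parameters might be put into the HashMap that is passed to the constructor (using the property names HOSTNAME, PORT, USERNAME and PASSWORD, as read in the constructor) is:

public static void main(String[] args) {
    // Sketch: map the command line arguments to the properties the constructor expects.
    HashMap<String, String> props = new HashMap<String, String>();
    props.put("HOSTNAME", args[0]);
    props.put("PORT", args[1]);
    props.put("USERNAME", args[2]);
    props.put("PASSWORD", args[3]);

    new OSBServiceExplorer(props);
}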

Example content of the text file:

Found server runtimes:
- Server name: DefaultServer. Server state: RUNNING

Found total of 45 refs, including the following pipelines, proxy and business services:
- ProxyService: TrackService/proxy/TrackServiceRest
- BusinessService: MusicService/business/db_InsertCD
- BusinessService: TrackService/business/CDService
- Pipeline: TrackService/pipeline/TrackServicePipeline
- ProxyService: TrackService/proxy/TrackService
- Pipeline: MusicService/pipeline/MusicServicePipeline
- ProxyService: MusicService/proxy/MusicService
- ProxyService: TrackService/proxy/TrackServiceRestJSON

ResourceConfiguration list of pipelines, proxy and business services:
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$MusicService$proxy$MusicService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/MusicService
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean: service-type=SOAP

    Index#4:
       level    = 1
       label    = route
       name     = _ActionId-7f000001.N38d9a220.0.163b507de28.N7ffc
       node-id  = 4
       type     = Action
       children = [1,3]
    Index#6:
       level    = 1
       label    = route-node
       name     = RouteNode1
       node-id  = 6
       type     = RouteNode
       children = [5]

  Metadata of com.oracle.osb:Location=DefaultServer,Name=Pipeline$TrackService$pipeline$TrackServicePipeline,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$TrackService$business$CDService
      - WSDL$TrackService$proxy$TrackService

    dependents:
      - ProxyService$TrackService$proxy$TrackService
      - ProxyService$TrackService$proxy$TrackServiceRest
      - ProxyService$TrackService$proxy$TrackServiceRestJSON

- Resource: com.oracle.osb:Location=DefaultServer,Name=Operations$System$Operator Settings$GlobalOperationalSettings,Type=ResourceConfigurationMBean
- Resource: com.oracle.osb:Location=DefaultServer,Name=Pipeline$MusicService$pipeline$MusicServicePipeline,Type=ResourceConfigurationMBean
  Resource is a Pipeline (without available Configuration)
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$MusicService$business$db_InsertCD,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=jca, url=jca://eis/DB/MUSIC
- Resource: com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=BusinessService$TrackService$business$CDService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://127.0.0.1:7101/cd_services/CDService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRest,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRest
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackService,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=/music/TrackService
- Resource: com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=DefaultServer,Name=ProxyService$TrackService$proxy$TrackServiceRestJSON,Type=ResourceConfigurationMBean: service-type=REST, transport-type=http, url=/music/TrackServiceRestJSON

The Java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] childeren = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

            if ((level == 1 && type.equals("OperationalBranchNode")) ||
                level > 1) {
                level++;

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    key[0] = childeren[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = (int)serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];
                
                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);            

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following pipelines, proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {
                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("Pipeline")) {
                    fileWriter.write("- Pipeline: " +
                                     ref.getFullName() + "\n");                    
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);
            
            fileWriter.write("ResourceConfiguration list of pipelines, proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                              
                try {
                    CompositeDataSupport configuration =
                        (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                      "Configuration");
                      
                    if (canonicalName.contains("ProxyService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        String url = (String)transportconfiguration.get("url");
                        
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("BusinessService")) {
                        String servicetype =
                            (String)configuration.get("service-type");
                        CompositeDataSupport transportconfiguration =
                            (CompositeDataSupport)configuration.get("transport-configuration");
                        String transporttype =
                            (String)transportconfiguration.get("transport-type");
                        CompositeData[] urlconfiguration =
                            (CompositeData[])transportconfiguration.get("url-configuration");
                        String url = (String)urlconfiguration[0].get("url");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype +
                                         ", transport-type=" + transporttype +
                                         ", url=" + url + "\n");
                    } else if (canonicalName.contains("Pipeline")) {
                        String servicetype =
                            (String)configuration.get("service-type");
    
                        fileWriter.write("  Configuration of " + canonicalName +
                                         ":" + " service-type=" + servicetype + "\n");
                    }
                    
                    if (canonicalName.contains("Pipeline")) {
                        fileWriter.write("" + "\n");
    
                        CompositeDataSupport pipeline =
                            (CompositeDataSupport)configuration.get("pipeline");
                        TabularDataSupport nodes =
                            (TabularDataSupport)pipeline.get("nodes");
    
                        TabularType tabularType = nodes.getTabularType();
                        CompositeType rowType = tabularType.getRowType();
    
                        Iterator keyIter = nodes.keySet().iterator();
    
                        for (int j = 0; keyIter.hasNext(); ++j) {
    
                            Object[] key = ((Collection)keyIter.next()).toArray();
    
                            CompositeData compositeData = nodes.get(key);
    
                            String label = (String)compositeData.get("label");
                            String type = (String)compositeData.get("type");
                            if (type.equals("Action") &&
                                (label.contains("wsCallout") ||
                                 label.contains("javaCallout") ||
                                 label.contains("route"))) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            } else if (type.equals("OperationalBranchNode") ||
                                       type.equals("RouteNode")) {
    
                                fileWriter.write("    Index#" + j + ":\n");
                                printCompositeData(nodes, key, 1);
                            }
                        }
                        
                        fileWriter.write("" + "\n");
                        
                        CompositeDataSupport metadata =
                            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                          "Metadata");
                        
                        fileWriter.write("  Metadata of " + canonicalName + "\n");
    
                        String[] dependencies =
                            (String[])metadata.get("dependencies");
                        fileWriter.write("    dependencies:\n");
                        int size;
                        size = dependencies.length;
                        for (int i = 0; i < size; i++) {
                            String dependency = dependencies[i];
                            if (!dependency.contains("Xquery")) {
                                fileWriter.write("      - " + dependency + "\n");
                            }
                        }
                        fileWriter.write("" + "\n");
    
                        String[] dependents = (String[])metadata.get("dependents");
                        fileWriter.write("    dependents:\n");
                        size = dependents.length;
                        for (int i = 0; i < size; i++) {
                            String dependent = dependents[i];
                            fileWriter.write("      - " + dependent + "\n");
                        }
                        fileWriter.write("" + "\n");                
                    }
                }
                catch(Exception e) {
                    if (canonicalName.contains("Pipeline")) {
                      fileWriter.write("  Resource is a Pipeline (without available Configuration)" + "\n");
                    } else {
                      e.printStackTrace();
                    }
                }
            }
            fileWriter.close();

            System.out.println("Succesfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length < 4) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus 12.2.1.1.0: Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

The World Cup 2018 Challenge is live... An app created 12 years ago to showcase the awesome Oracle APEX

Dimitri Gielis - Tue, 2018-06-05 10:39

Since 2006 it's a tradition... every two years we launch a site where you can bet on the games of the World Cup (or Euro Cup). This year you find the app at https://www.wc2018challenge.com

You can read more about the history and see how things have looked over time, or you can look at other posts on this blog from the different years.

The initial goal of the app was to showcase what you can do with Oracle Application Express (APEX). Many companies have Excel sheets where they keep the scores of the games and some kind of ranking for their employees. When I saw that Excel sheet in 2006, I thought: oh well, I can do this in APEX, and it would give us way more benefits: results straight away, no sending of Excel sheets or merging data, a much more attractive design with APEX, etc. From then on this app has lived its own life.

Every two years I updated the app with the latest and greatest of Oracle APEX at that time.

Today the site is built in Oracle APEX 18.1 and it showcases some of the new features.
The look and feel is completely upgraded. Instead of a custom theme, the site is now using Universal Theme. You might think it doesn't look like a typical APEX app, but it is! Just some minimal changes in CSS and a background image make the difference.

The other big change is Social Authentication, which now uses the built-in capabilities of APEX 18.1 instead of the custom authentication scheme I used in previous years. You can authenticate with Google, Facebook and with your own email (custom).

Some other changes came with JET charts and some smaller enhancements that came with APEX 5.1 and 18.1.

Some people asked me how certain features were done, so I'll do some separate blog posts about how Universal Theme was adapted on the landing page, how Social Authentication was included, and which issues we ran into along the way. If you wonder how anything else was done, I'm happy to do some more posts to explain.

Finally, I would like to thank a few people who helped to make the site ready for this year: Erik, Eduardo, Miguel, Diego, Galan, and Theo, thanks so much!
Categories: Development

PostgreSQL 11: Instant add column with a non null default value

Yann Neuhaus - Tue, 2018-06-05 06:53

As I am currently preparing my session for the Swiss PGDay, which is about some of the new features of PostgreSQL 11, I thought this one is worth a blog post as well. Up to PostgreSQL 10, when you add a column with a non null default value to a table, the whole table needs to be rewritten. With PostgreSQL 11 this is no longer the case and adding a column in such a way is almost instant. Let's check.

We start by creating a test table in PostgreSQL 10:

postgres=# select version();
                                       version                                                           
--------------------------------------------------------------------------------------------------------
 PostgreSQL 10.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)

postgres=# create table test ( a int, b text, c timestamp );
CREATE TABLE
postgres=# insert into test (a,b,c) 
           select aa.*, md5(aa::text), now() 
             from generate_series ( 1, 1000000 ) aa;
INSERT 0 1000000

This gave us 1’000’000 rows and what I want to do is to check the number of sequential scans against the table before and after the alter table.

postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        0
postgres=# alter table test add column d text default 'a';
ALTER TABLE
Time: 1252.188 ms (00:01.252)
postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        1

As you can see, a sequential scan happened when the alter table was performed and it took more than a second for the alter table to complete. Let's do the same in PostgreSQL 11.

Creating the table:

postgres=# select version();
                                                            version                                                            
-------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11devel dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-16), 64-bit
(1 row)
postgres=# create table test ( a int, b text, c timestamp );
CREATE TABLE
postgres=# insert into test (a,b,c) select aa.*, md5(aa::text), now() from generate_series ( 1, 1000000 ) aa ;
INSERT 0 1000000

Doing the same test again:

postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        0
(1 row)

postgres=# alter table test add column d text default 'a';
ALTER TABLE
Time: 5.064 ms
postgres=# select seq_scan from pg_stat_user_tables where relid = 'test'::regclass;
 seq_scan 
----------
        0
(1 row)

No sequential scan at all and it only took 5 ms for the alter table to complete. This is quite a huge improvement. The question is how does that work in the background? Actually the idea is quite simple. The catalog table pg_attribute got two new columns called “attmissingval” and “atthasmissing”:

postgres=# \d pg_attribute
              Table "pg_catalog.pg_attribute"
    Column     |   Type    | Collation | Nullable | Default 
---------------+-----------+-----------+----------+---------
 attrelid      | oid       |           | not null | 
 attname       | name      |           | not null | 
 atttypid      | oid       |           | not null | 
 attstattarget | integer   |           | not null | 
 attlen        | smallint  |           | not null | 
 attnum        | smallint  |           | not null | 
 attndims      | integer   |           | not null | 
 attcacheoff   | integer   |           | not null | 
 atttypmod     | integer   |           | not null | 
 attbyval      | boolean   |           | not null | 
 attstorage    | "char"    |           | not null | 
 attalign      | "char"    |           | not null | 
 attnotnull    | boolean   |           | not null | 
 atthasdef     | boolean   |           | not null | 
 atthasmissing | boolean   |           | not null | 
 attidentity   | "char"    |           | not null | 
 attisdropped  | boolean   |           | not null | 
 attislocal    | boolean   |           | not null | 
 attinhcount   | integer   |           | not null | 
 attcollation  | oid       |           | not null | 
 attacl        | aclitem[] |           |          | 
 attoptions    | text[]    |           |          | 
 attfdwoptions | text[]    |           |          | 
 attmissingval | anyarray  |           |          | 
Indexes:
    "pg_attribute_relid_attnam_index" UNIQUE, btree (attrelid, attname)
    "pg_attribute_relid_attnum_index" UNIQUE, btree (attrelid, attnum)

As soon as a new column with a non null default value is added to a table, these columns get populated. We can see that when we check our current table. The column we added has them set in pg_attribute:

postgres=# select attmissingval
                , atthasmissing 
             from pg_attribute 
            where attrelid = 'test'::regclass 
              and attname = 'd';

 attmissingval | atthasmissing 
---------------+---------------
 {a}           | t

(1 row)

We know that all the rows in that table should have the new default value, but we also know that the table was not rewritten. So whenever you select from the table and a row is missing the default, it will be populated from pg_attribute:

postgres=# select d from test where a = 1;
 d 
---
 a

For new rows the default will be there anyway and as soon as the table is rewritten the flag is cleared:

postgres=# vacuum full test;
VACUUM
postgres=# select attmissingval
                , atthasmissing 
             from pg_attribute 
            where attrelid = 'test'::regclass 
              and attname = 'd';

 attmissingval | atthasmissing 
---------------+---------------
               | f
(1 row)

Nice feature.

 

The article PostgreSQL 11: Instant add column with a non null default value appeared first on Blog dbi services.

How to flashback databases in dataguard with broker

Yann Neuhaus - Tue, 2018-06-05 06:15

Last week I had to do some tests with dataguard. To make restores easier, restore points were required.

Given is the following configuration:

show configuration;
Configuration - ila
Protection Mode: MaxAvailability
 Databases:
 ila - Primary database
 ilal - Logical standby database
 ILAP - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
 SUCCESS

How to set restore points here?

Two things to consider:
Redo apply to a physical standby is incompatible with setting a restore point, so it must be switched off for a while.
The SCN of the restore points on the standby databases must be lower than on the primary database, otherwise redo apply does not continue after the flashback. A quick way to verify this is shown below.
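A minimal check, as a sketch (assuming the restore point is named RZ_C as in this example), is to compare the restore point SCN reported on the primary and on both standby databases:

select name, scn, guarantee_flashback_database
  from v$restore_point
 where name = 'RZ_C';

Run it on all three databases once the restore points exist; the SCN reported on the primary should be the highest of the three.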

Setting of restore points

First, redo apply has to be stopped on both standby databases:

edit database "ilal" set state=APPLY-OFF;
edit database "ILAP" set state=APPLY-OFF;

Second, restore points on both standby databases are set:

create restore point RZ_C GUARANTEE FLASHBACK DATABASE;

 

Third, the restore point on the primary database is set:

create restore point RZ_C GUARANTEE FLASHBACK DATABASE;

Last, redo apply is switched on again:

edit database "ILAP" set state=APPLY-ON;
edit database "ilal" set state=APPLY-ON;

Now dataguard is running as before.

Let’s flashback the databases:

First, switch off redo apply:

edit database "ILAP" set state=APPLY-OFF;
edit database "ilal" set state=APPLY-OFF;

Second, start all three databases in mount mode:

shutdown immediate;
startup mount;

Third, flashback the primary database:

flashback database to restore point RZ_C;
alter database open read only;

Check whether the flashbacked database looks fine.
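A simple sanity check while the database is open read only could look like this (a sketch; the application table name is made up, and you would verify your own data):

select open_mode, database_role, current_scn from v$database;
select count(*) from app_schema.important_table;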

If yes, open it read write:

shutdown immediate;
startup mount;
alter database open resetlogs;

Do the same on logical standby database:

flashback database to restore point RZ_C;
alter database open read only;

Check whether flashbacked database looks fine.

If yes, open it:

shutdown immediate;
startup mount;
alter database open resetlogs;

Last, do the flashback on the physical standby:

flashback database to restore point RZ_C;

Now start redo apply again:

edit database "ILAP" set state=APPLY-ON;
edit database "ilal" set state=APPLY-ON;

After a few minutes, dataguard will run as before:

show configuration;
Configuration - ila
Protection Mode: MaxAvailability
 Databases:
 ila - Primary database
 ilal - Logical standby database
 ILAP - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
 SUCCESS
 

The article How to flashback databases in dataguard with broker appeared first on Blog dbi services.

This Week in PostgreSQL – May 31

Jeremy Schneider - Tue, 2018-06-05 01:19

Since last October I’ve been periodically writing up summaries of interesting content I see on the internet related to PostgreSQL (generally blog posts). My original motivation was just to learn more about PostgreSQL – but I’ve started sharing them with a few colleagues and received positive feedback.  Thought I’d try posting one of these digests here on the Ardent blog – who knows, maybe a few old readers will find it interesting? Here’s the update that I put together last week – let me know what you think!

Hello from California!

Part of my team is here in Palo Alto and I’m visiting for a few days this week. You know… for all the remote work I’ve done over the years, I still really value this in-person, face-to-face time. These little trips from Seattle to other locations where my teammates physically sit are important to me.

This is also part of the reason I enjoy user groups and conferences so much. They’re opportunities to meet with other PostgreSQL users in real life. In fact – at this very moment – one of the most important PostgreSQL conferences in the world is happening: PgCon! Having attended a few other conferences over the past year, I’m holding down the fort in the office this week in order to send a bunch of other teammates… but you can be sure I’m keeping an eye on twitter. :)

https://www.pgcon.org/2018/
https://twitter.com/search?l=&q=pgcon%20OR%20pgcon_org

=====

In the meantime, lets get busy with the latest updates from the postgresql virtual world. First of all, I think the biggest headline is that (just in time for pgcon) we have the first official beta version of PostgreSQL 11! The release announcement headlines with major partitioning enhancements, more parallelism, a feature to speed up SQL execution by compiling certain operations on-demand into native machine code (JIT/Just-In-Time compilation), and numerous SQL enhancements. You can also read the first draft of the release notes. This is the time to start testing and give feedback to the development community!

https://www.postgresql.org/about/news/1855/
https://www.postgresql.org/docs/devel/static/release-11.html

Closely related to this, there’s one other really big headline that I’m excited about: the new AWS RDS Preview Environment. You can now try out the new pg11 features ahead of time with a single click! In part because the development community is so awesome, the first database available in the RDS Preview Environment is PostgreSQL. And the official PostgreSQL 11 beta release is _already_ available on RDS!! Personally I’m hoping that this benefits the community by getting more people to test and give feedback on new features being built for PostgreSQL 11. I hope it will make a great database even better.

https://aws.amazon.com/rds/databasepreview/
https://forums.aws.amazon.com/ann.jspa?annID=5788 (pg11 beta announcement)

Outside of the RDS and PG11-related stuff, I saw one other headline that I thought might be worth mentioning. On May 29, IBM published a blog post that caught my attention, featuring EnterpriseDB as an IBM partner on their Private Cloud deployments. You might not realize just how much PostgreSQL is being used and sold by IBM… but there’s Compose, ElephantSQL, and now EDB in the mix.

https://www.ibm.com/blogs/think/2018/05/next-enterprise-platform/

Part of the reason I took note of this was that I remember just last November when HPE ran a similar announcement, partnering with EDB on their on-premise subscription-based GreenLake platform.

https://news.hpe.com/hpe-expands-pay-per-use-it-to-key-business-workloads/

So it seems to me that EDB is doing some nice work at building up the PostgreSQL presence in the enterprise world – which I’m very happy to see. To be clear, this isn’t necessarily new… does anyone remember vPostgres?

https://blogs.vmware.com/vsphere/2016/03/getting-comfortable-with-vpostgres-and-the-vcenter-server-appliance-part-1.html

Nonetheless, it feels to me like the ball is moving forward. It feels like PostgreSQL maturity and adoption are continually progressing at a very healthy pace.

=====

Moving on from headlines, lets get to the real stuff – the meaty technical articles. :)

First up, who likes testing and benchmarking? One of my favorite activities, truth be told! So I can’t quite convey just how excited I am about the preview release of Kevin Closson’s pgio testing kit. For those unfamiliar, Kevin has spent years refining his approach for testing storage through database I/O paths. Much work was done in the past with Oracle databases, and he calls his method SLOB. I’m excited to start using this kit for exploring the limits of storage through PostgreSQL I/O paths too.

https://kevinclosson.net/2018/05/23/sneak-preview-of-pgio-the-slob-method-for-postgresql-part-iv-how-to-reduce-the-amount-of-memory-in-the-linux-page-cache-for-testing-purposes/

Right after Kevin published that post, Franck Pachot followed up with a short article using pgio to look at the impact of the ext4 “sync” filesystem option (made relevant by the recently disclosed flaws in how PostgreSQL has been interacting with Linux’s implementation of fsync).

https://blog.dbi-services.com/postgres-the-fsync-issue-and-pgio-the-slob-method-for-postgresql/

In addition to Kevin’s release of PGIO, I also saw three other generally fun technical articles. First, Kaarel Moppel from Cybertec published an article showing much lower-than-expected impact of pl/pgsql triggers on a simple pgbench execution. Admittedly, I want to poke around at this myself, having seen a few situations where the impact seemed higher. Great article – and it certainly got some circulation on twitter.

https://www.cybertec-postgresql.com/en/are-triggers-really-that-slow-in-postgres/

Next, Sebastian Insausti has published an article explaining PostgreSQL streaming replication. What I appreciate the most about this article is how Sebastian walks through the history of how streaming replication was developed. That context is so important and helpful!

https://severalnines.com/blog/postgresql-streaming-replication-deep-dive

Finally, the requisite Vacuum post.  :)  This month we’ve got a nice technical article from Sourabh Ghorpade on the Gojek engineering team. Great high-level introduction to vacuuming in general, and a good story about how their team narrowly averted an “xid wraparound” crisis.

https://blog.gojekengineering.com/postgres-autovacuum-tuning-394bb99fe2c0

=====

We’ve been following Dimitri Fontaine’s series on PostgreSQL data types. Well sadly (but inevitably) he has brought the series to a close. On May 24, Dimitri published a great summary of the whole data type series – this definitely deserves to be on every developer’s short list of PostgreSQL bookmarks!

https://tapoueh.org/blog/2018/05/postgresql-data-types/

But while we’re talking about data types, there were two more related articles worth pointing out this time around. First, Berend Tober from SeveralNines published a nice article back on the 9th about the serial data type in PostgreSQL. This is an integer type that automatically comes with not-null constraints and auto-assignment from a sequence.

https://severalnines.com/blog/overview-serial-pseudo-datatype-postgresql
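As a quick illustration (a sketch with made-up table names), a serial column is essentially shorthand for an integer column backed by a sequence:

create table tickets (id serial primary key, subject text);

-- roughly what the serial shorthand expands to:
create sequence tickets2_id_seq;
create table tickets2 (
  id integer not null default nextval('tickets2_id_seq') primary key,
  subject text
);
alter sequence tickets2_id_seq owned by tickets2.id;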

Secondly, Hans-Jürgen Schönig from Cybertec gives a nice overview of mapping data types from Oracle to PostgreSQL. He has a little paragraph in there about mapping Oracle numbers to PostgreSQL integers and numerics. That little paragraph probably deserves triple-bold emphasis. Automatic mapping of every number column to PostgreSQL numeric has been the cause of many, many performance woes in PostgreSQL databases!

https://www.cybertec-postgresql.com/en/mapping-oracle-datatypes-to-postgresql/

=====

For something that might be relevant to both developers and DBAs, I have a couple articles about SQL tuning. First, Brian Fehrle has written a great tuning introduction for the severalnines blog. His starting point is pg_stat_activity and the explain SQL command; exactly the same as my starting point. :)

https://severalnines.com/blog/why-postgresql-running-slow-tips-tricks-get-source

Next up, Louise Grandjonc from France has published a series of articles called “understanding EXPLAIN“. Did you ever wonder why there are _two_ numbers reported for execution time of each step? Louise answers this question and many more in these four articles!

http://www.louisemeta.com/blog/explain/
http://www.louisemeta.com/blog/explain-2/
http://www.louisemeta.com/blog/explain-3/
http://www.louisemeta.com/blog/explain-4/

=====

Moving down the stack a little more, there were two articles about monitoring that seem worth passing along. Datadog has put out a lot of articles about monitoring recently. I hadn’t mentioned it before, but Emily Chang gave us yet another one back on April 12. As usual, I’m impressed with the level of quality in this thorough article which is specifically focused on PostgreSQL on RDS. I especially appreciated the key metrics, things I’ve used myself to characterize workloads.

https://www.datadoghq.com/blog/aws-rds-postgresql-monitoring/

Earlier I mentioned the severalnines blog post about replication – and the pgDash team published a nice article on May 2 about monitoring replication. They give another nice general architectural overview of replication as well.

https://pgdash.io/blog/monitoring-postgres-replication.html

=====

In my last update, I closed with a few articles about pgpool on the severalnines blog. It seemed worth mentioning that they have published a third, final article for their series.

https://severalnines.com/blog/guide-pgpool-part-three-hints-observations

Also, I spotted an article about pgpool on the AWS database blog too. While focused on Aurora PostgreSQL, there’s plenty to be learned about using pgpool with regular PostgreSQL here.

https://aws.amazon.com/blogs/database/a-single-pgpool-endpoint-for-reads-and-writes-with-amazon-aurora-postgresql/

Along those lines, most people know about the other popular PostgreSQL connection pooler, pgbouncer. This is the connection pooler which is used by Gulcin Yildirim’s near-zero-downtime ansible-based upgrade tool, pglupgrade. He’s written a few articles about his process recently, and being a big ansible fan I’ve been following along.

https://blog.2ndquadrant.com/near-zero-downtime-automated-upgrades-postgresql-clusters-cloud/
https://blog.2ndquadrant.com/near-zero-downtime-automated-upgrades-postgresql-clusters-cloud-part-ii/

But I wonder if the landscape is about to change? Yesterday the Yandex team announced that they have built and released a new load balancer to address limitations in pgpool and pgbouncer. I’ll be very interested to see what happens with odyssey!

https://www.postgresql.org/message-id/C9D1137E-F2A7-4307-B758-E5ED5559FFCA@simply.name (announcement)
https://github.com/yandex/odyssey

And that wraps things up for this edition.

Have a great week and keep learning!

oci-utils-0.6-34.el7

Wim Coekaerts - Mon, 2018-06-04 20:20

I will write up some examples on this later but for now... here's the changelog:

The oci-utils package is used to manage block volumes and VNICs and is available for use with Oracle Linux 7 images in Oracle Cloud (excludes support for OCI-C). The latest release (oci-utils-0.6-34.el7) is available in the Oracle Linux 7 developer channel on YUM. The following changes/additions have been made in this release (0.6):

- Support added for API access through Instance Principals
- Support added for root using a designated user's OCI config files and keys
- oci_utils API automatically detects authentication method to be used
- ocid can discover secondary IP addresses and CHAP user/password using OCI API calls, if the Python SDK is configured or if Instance Principals is used
- network proxy support for making SDK calls
- configuration files for ocid: /etc/oci-utils.d/*
- support configuring the various functions of ocid individually, including refresh frequency or turning them off completely
- ocid saves state and restores all volumes and VNIC configuration after reboot
- oci-network-config: new option: --detach-vnic
- oci-iscsi-config: new option: --destroy-volume
- oci-utils APIs are now thread safe
- NEW tool: oci-image-cleanup - a script that runs a set of cleanup steps to prepare the instance for a custom image
- oci-kvm utility rejects attempts to create guests if the required virtualization support is not enabled in the image it is being executed on

GDPR for the Oracle DBA

Pete Finnigan - Mon, 2018-06-04 18:26
I did a talk at the recent UKOUG Northern Technology Summit in Leeds, UK on May 16th. This talk was an enhanced version of the one I did at the UKOUG tech conference in Birmingham in December 2017 to a....[Read More]

Posted by Pete On 04/06/18 At 08:40 PM

Categories: Security Blogs

Battle in India for e-commerce market leadership is no longer between just Amazon and Flipkart

Abhinav Agarwal - Mon, 2018-06-04 12:30
Amazon Launches Prime Music in India. What It Means for the Indian e-commerce Market


On a day when it was reported that the online streaming music app Gaana was raising $115 million (about ₹750 crores) from Chinese Internet investment company Tencent Holdings Ltd and Times Internet Ltd (Gaana to raise $115 million from Tencent, Times Internet – Livemint), came the news that online retailer Amazon had launched its PrimeMusic streaming music service in India.

According to Amazon, “Prime Music provides unlimited, ad-free access to on-demand streaming of curated playlists and stations, plus millions of songs and albums at no additional cost for eligible Amazon Prime members.”

The Amazon Prime service in India costs ₹999 annually and provides “free One-Day, Two-Day and Standard Delivery on eligible items”, PrimeVideo – Amazon’s video streaming service, and now PrimeMusic. According to Midis Research, Amazon had become the third-largest music subscription service globally, behind Spotify (40%) and Apple Music (19%).


I gave the PrimeMusic service a spin, and after two days of trying it out, I came away reasonably impressed. If you are already a subscriber to Amazon’s Prime service, you don’t need to do anything more than downloading the PrimeMusic app (available on most mobile operating systems, including iOS and Android). The service is also available on web browsers. The selection is impressive, and the curated lists are fairly comprehensive. For someone like me who has a preference for older Hindi soundtracks and likes listening to Mohd Rafi, I was pleasantly surprised to find a broad array of choices at my disposal. I can create my own playlists, listen to curated playlists, or listen all day long to stations. I spent all Sunday streaming songs from the service to my speakers. The service restricts streaming on only one device at a time, however. For morning listening, there is an acceptable, though not comprehensive, selection of devotional songs. The price too is right. Where that leaves other services like Gaana, Saavn, Wynk, Hungama, and others remains to be seen. The future does not look very promising, to be frank. But more on this later.

From a first look, the Amazon PrimeMusic service seems to have hit the ground running.
Flashback - Flipkart First and Flyte

Flipkart First graphic

It is worth noting that it was in May 2014 that Flipkart had launched Flipkart First, a membership program patterned on Amazon Prime, which cost ₹500 a year, and which promised “Free shipping on your orders*”, “in a day” delivery for most products, and “Discounted same day delivery*”. In the more than three years since its launch, Flipkart has not added any new benefits to its First program. It was more than two years after Flipkart First’s launch that Amazon launched its Prime service in India (see my article). At the time, I had remarked on the lack of focus on Flipkart’s part to promote its Flipkart First service.
“But for reasons best known to Flipkart, after an initial flurry of promotion and advertising, including a three-month giveaway to 75000 customers, Flipkart did not seem to pursue it with any sort of vigor. Customers complained of many products being excluded from Flipkart First, and in the absence of any sustained campaign to make customers aware of the programme, it has slowly faded from memory. … Worse, there was a news story on July 20th about Flipkart planning to launch a programme called “F-Assured”, as a replacement to Flipkart Advantage. The story suggested that the launch of F-Assured was also meant to “preempts the launch of Amazon Prime” — something that did not come to pass.”

It, therefore, came as no surprise when there were news reports in late 2017 of Flipkart looking to “relaunch” its Flipkart First program in collaboration with other e-commerce vendors and startups like MakeMyTrip, Ola, and BookMyShow (article in Mint and Economic Times). The article also mentioned that Flipkart “lost focus in making it work. Customers didn’t take to it and the service fizzled out.”

Contrast this with Amazon’s focus on making its Prime program a success. So much so that this is what Amazon founder and CEO Jeff Bezos had to say in his annual letter to shareholders in 2015:
“AWS, Marketplace and Prime are all examples of bold bets at Amazon that worked, and we’re fortunate to have those three big pillars.”

It is not as if Flipkart had not ventured into the music business. In Feb 2012, it had launched its online music store, Flyte, and in June 2013 shuttered it. I wrote about it in my article in DNA in 2015. According to one website, the reasons for Flyte being shut down were several, from poor marketing support, digital piracy, low revenues, and more.
“Flyte, with low revenues and low growth, remained a low priority business, and it never got the marketing support needed to push for an increase in sales. A couple of people we spoke with repeatedly emphasized the lack of marketing support as a key reason for Flyte’s lack of success – Flyte built traction almost entirely on word of mouth, while the physical goods business got all the marketing spends.”

Second, the financial reasons for which Flipkart shut down Flyte, even if true, defy belief. For a company that had raised thousands of crores of rupees (billions of dollars) in funding, persisting with a new and promising line of business that would burn a hole of a few million dollars a year should have been a no-brainer. Except, that it wasn’t.
"Flyte Music had struck deals for India based music downloads on web and app by paying music labels an aggregate minimum guarantee (MG) of around $1 million (Rs 5.5-6 crores) for the year, multiple sources told MediaNama. … Revenues from song downloads were fairly low – not even 50% of the minimum guarantee amount (only around Rs 2-3 crore is what we heard), and the ARPU was around Rs 9-12 per user, which made it difficult to justify the minimum guarantee, and any significant customer acquisition costs." [Medianama article]One can only speculate what Flipkart’s competitive response would have been had it not prematurely abandoned its foray into digital music.
The Battle-lines and the Battleground

More pertinently, what seems to be clear is that Amazon is betting heavily on India. Having lost the race in China, it has pulled out all stops to turn its India operations into the market leader in the country. According to this article, Alibaba had a 51% market share, while Amazon had less than 1%. Clearly, India is a market Amazon can ill-afford to lose.


The battle for space on the consumer’s smartphone screen is also one of numbers. On this dimension, Flipkart has but one app in its arsenal – its shopping app. The second app is the “Flipkart Seller Hub”.
Flipkart apps on Android

On the other hand, Amazon has a formidable presence that allows it to land-and-expand: its shopping app, Music, Prime Video, Now (for hyperlocal grocery shopping), Kindle, Drive, Assistant, Photos, Fire TV, Go, Alexa, and more. In all but the shopping space, Flipkart is more of an out-of-sight-out-of-mind case.

Amazon apps on Android

People may remember that it was in November 2015 that Flipkart had launched, on an experimental basis, its hyperlocal grocery delivery app, Flipkart Nearby. It was a smart and timely competitive, pre-emptive move against Amazon. Unfortunately, even that was shut down a short few months later in 2016 as Flipkart doubled down on clamping down on its bleeding bottom-line.

But this is only one half of the picture. To understand the other half, it is important to look at the entities that have been funding Flipkart in recent times. According to website Crunchbase, Flipkart saw investments from eBay, Microsoft, Tencent, and Softbank to the tune of $2.5 billion in its last funding round. According to online news site Recode, Flipkart is the largest e-commerce investment Softbank has made from its $100 billion fund. Furthermore, there have been persistent rumours that brick-and-mortar behemoth Walmart has been looking to invest in Flipkart. As recently as January 2018, there were several news stories that talked about the possibility of Walmart acquiring a 20% stake in Flipkart at valuations that could touch $20 billion. To put that in perspective, Walmart’s biggest acquisition to date was its purchase of online e-commerce site Jet.com in August 2016 for $3 billion in cash and $300 million in Walmart shares. Jet.com had been founded by Marc Lore, who had earlier been a co-founder of Quidsi, the company behind the website Diapers.com, and which had a very bruising battle with Amazon a decade back. This is another fascinating story that I mentioned in my article here.
Conclusion

All this points to one inescapable conclusion – the battle in India for e-commerce market leadership is no longer between just Amazon and Flipkart. On one side you have Amazon with its relentless focus on execution and remarkable track record of having won many more battles than it has lost. On the other side, you have Flipkart that is now backed not only by billions of dollars in fresh funding, but also a diverse array of interests like Tencent, Alibaba, Microsoft, eBay, Softbank, and possibly even Walmart. These entities seem to be bound by their unanimous need to put all their wood behind the arrow that is Flipkart. This is no longer about investing in the Indian e-commerce market to get a hefty multiple on their investments. It is now looking more and more like the OK Corral where the final shootout will take place. Place your bets on who will come out of this scrap as Wyatt Earp.

This article first appeared in OpIndia on March 7th, 2018.
This was also cross-posted on my personal blog.



© 2018, Abhinav Agarwal. All rights reserved.

Index Column Order – Impact On Index Branch Blocks Part I (Day-In Day-Out)

Richard Foote - Mon, 2018-06-04 07:16
I recently replied on Twitter to some comments regarding an excellent blog post by Franck Pachot – Covering indexes in Oracle, and branch size, where I disagreed somewhat with one of the conclusions stated in the post: “ensure that selective columns appear as early as possible (without compromising the index access efficiency of course) in order to […]
Categories: DBA Blogs

PeteFinnigan.com Limited Printed Oracle Security Training Manuals for Sale

Pete Finnigan - Mon, 2018-06-04 00:06
Over the last year or so we have offered for sale left over printed manuals from some of our training courses. Normally we only print the manuals for classes that we organise for in person training such as the classes....[Read More]

Posted by Pete On 03/06/18 At 01:47 PM

Categories: Security Blogs

Wireframes vs Prototypes : An In-Depth Look

Nilesh Jethwa - Sun, 2018-06-03 23:07

Regardless of experience, designers can still easily mix up the terms “wireframe” and “prototype.” Misunderstanding the two – even among seasoned and veteran designers – still happens even though the differences between the two are clear-cut. If you are one … Continue reading →

Origin: MockupTiger Wireframes

SID YES SERVICE_NAME NO in tnsnames.ora

Tom Kyte - Sun, 2018-06-03 06:06
Hello Tom, We use tnsnames.ora . 1) What is the difference between SID entry and SERVICE_NAME in the tnsnames.ora 2) When I put SID the connection is successful, when I put SERVICE_NAME it says tns could not resolve service name So This w...
Categories: DBA Blogs

Oracle Security Training In York, UK, 2018

Pete Finnigan - Sun, 2018-06-03 05:46
I have just updated our public training dates page to add two new dates for Oracle Security training classes that I will be running here in York, UK. We now have 4 dates covering three available classes. These are as....[Read More]

Posted by Pete On 02/06/18 At 06:54 PM

Categories: Security Blogs

Demo: Oracle Database Continuous Query Notification in Node.js

Christopher Jones - Sat, 2018-06-02 08:32

Native Oracle Database Continuous Query Notification (CQN) code has landed in the node-oracledb master branch on GitHub. If you want to play with it, but don't want to wait for the next binary node-oracledb release, you can compile node-oracledb yourself and play with this demo.

 

 

Some of you may already be using CQN via its PL/SQL APIs. The new, native support in node-oracledb makes it all so much nicer. Check out the development documentation for connection.subscribe() and the 'user manual'. There are a couple of examples cqn1.js and cqn2.js available, too.

CQN allows JavaScript methods to be called when database changes are committed by any transaction. You enable it in your node-oracledb app by registering a SQL query. CQN has two main modes: object-level and query-level. The former sends notifications (i.e. calls your nominated JavaScript method) when changes are made to database objects used in your registered query. The query-level mode only sends notifications when database changes are made that would impact the result set of the query, e.g. the WHERE clause is respected.

If you're not using CQN, then you might wonder when you would. For infrequently updated tables you can get CQN to generate notifications on any data or table change. I can see how query-level mode might be useful for proactive auditing to send alerts when an unexpected, but valid, value is inserted or deleted from a table. For tables with medium levels of updates, CQN allows grouping of notifications by time, which is a way of reducing load by preventing too many notifications being generated in too short a time span. But, as my colleague Dan McGhan points out, if you know the table is subject to a lot of change, then your apps will be better off simply polling the table and avoiding any CQN overhead. Note that CQN was designed to be used for relatively infrequently updated tables.

DEMO APP

I've thrown together a little app that uses CQN and Socket.IO to refresh a message notification area on a web page. It's really just a simple smush of the Socket.IO intro example and the node-oracledb CQN examples.

There is a link to all the code in the next section of this post; I'll just show snippets inline here. I'm sure Dan will update his polished 'Real-time Data' example soon, but until then here is my hack code. It uses Node.js 8's async/await style - you can rewrite it if you have an older Node.js version.

One thing about CQN is that the node-oracledb computer must be resolvable by the Database computer; typically this means having a fixed IP address which may be an issue with laptops and DHCP. Luckily plenty of other cases work too. For example, I replaced my Docker web service app with a CQN example and didn't need to do anything with ports or identifying IP addresses. I'll leave you to decide how to run it in your environment. There are CQN options to set the IP address and port to listen on, which may be handy.

The demo premise is a web page with a message notification area that always shows the five most recent messages in a database table. The messages are being inserted into that table by other apps (I'll just use SQL*Plus to do these inserts) and the web page needs to be updated with them only when there is a change. I'm just using dummy data and random strings:

To see how it fits together, look at this no-expense-spared character graphic showing the four components: SQL*Plus, the database, the browser and the Node.js app:

SQL*PLUS:                      DATABASE:

  insert into msgtable >------->  msgtable  >------CQN-notification---------+
  commit                                                                    |
                                                                            |
BROWSER:                       NODE.JS APP:                                 |
                                                                            |
  +-----------+                  URL '/' serves index.html                  |
  | 5 Message |                                                             |
  | 4 Message |                  CQN:                                       |
  | 3 Message |                    subscribe to msgtable                    |
  | 2 Message |                    with callback myCallback                 |
  | 1 Message |                                                             |
  +-----------+                  myCallback:  <------------------------------+
        ^                          query msgtable
        |                          send rows to browser to update the DOM
        +-----------<--------------+

The app (bottom right) serves the index page to the browser. It connects to the DB and uses CQN to register interest in msgtable. Any data change in the table from SQL*Plus (top left) triggers a CQN notification from the database to the application, and the callback is invoked. This callback queries the table and uses Socket.IO to send the latest records to the browser, which updates the index.html DOM.

The first thing is to get your DBA (i.e. log in as the SYSTEM user) to give you permission to get notifications:

GRANT CHANGE NOTIFICATION TO cj;

We then need a table that our app will get notifications about, and then query to get the latest messages:

CREATE TABLE cj.msgtable (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  message VARCHAR(100)
);

The column K is an Oracle Database 12c identity column that will automatically get a unique number inserted whenever a new message is inserted. In older database versions you would create a sequence and trigger to do the same.
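For reference, a rough pre-12c equivalent would be a sequence plus a BEFORE INSERT trigger, along these lines (a sketch; the sequence and trigger names are made up):

CREATE SEQUENCE msgtable_seq;

CREATE OR REPLACE TRIGGER msgtable_bi
BEFORE INSERT ON msgtable
FOR EACH ROW
WHEN (new.k IS NULL)
BEGIN
  -- assign the next sequence value to the key column
  SELECT msgtable_seq.NEXTVAL INTO :new.k FROM dual;
END;
/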

The little SQL script I use to insert data (and trigger notifications) is:

INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
COMMIT;

The Node.js app code is more interesting, but not complex. Here is the code that registers the query:

conn = await oracledb.getConnection();

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable"
});

console.log("CQN subscription created");

Although CQN has various options to control its behavior, here I keep it simple - I just want to get notifications when any data change to msgtable is committed.

When the database sends a notifications, the method 'myCallback' will get a message, the contents of which will vary depending on the subscription options. Since I know the callback is invoked when any table data has changed, I ignore the message contents and go ahead and query the table. The rows are then stringified and, by the magic of Socket.IO, sent to the web page:

async function myCallback(message) {
  let rows = await getData();               // query the msgtable
  io.emit('message', JSON.stringify(rows)); // update the web page
}

The helper function to query the table is obvious:

async function getData() {
  let sql = `SELECT k, message
             FROM msgtable
             ORDER BY k DESC
             FETCH NEXT :rowcount ROWS ONLY`;
  let binds = [5];                          // get 5 most recent messages
  let options = { outFormat: oracledb.OBJECT };
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds, options);
  await conn.close();
  return result.rows;
}

At the front end, the HTML for the web page contains a 'messages' element that is populated by JQuery code when a message is received by Socket.IO:

<ul id="messages"></ul> <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script> <script src="https://code.jquery.com/jquery-3.3.1.js"></script> <script> $(function () { var socket = io(); socket.on('message', function(msg){ $('#messages').empty(); $.each(JSON.parse(msg), function(idx, obj) { $('#messages').append($('<li>').text(obj.K + ' ' + obj.MESSAGE)); }); }); }); </script>

You can see that the JSON string received from the app server is parsed and the K and MESSAGE fields of each row object (corresponding to the table columns of the same names) are inserted into the DOM in an unordered list.

That's it.

DEMO IN ACTION

To see it in action, extract the code and install the dependencies:

cjones@mdt:~/n/cqn-sockets$ npm install
npm WARN CQN-Socket-Demo@0.0.1 No repository field.
npm WARN CQN-Socket-Demo@0.0.1 No license field.

added 86 packages in 2.065s

I cheated a bit there and didn't show node-oracledb compiling. Once a production release of node-oracledb is made, you should edit the package.json dependency to use its pre-built binaries. Until then, node-oracledb code will be downloaded and compiled - check the instructions for compiling.

Edit server.js and set your database credentials - or set the referenced environment variables:

let dbConfig = {
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING,
  events: true  // CQN needs events mode
}

Then start the app server:

cjones@mdt:~/n/cqn-sockets$ npm start

> CQN-Socket-Demo@0.0.1 start /home/cjones/n/cqn-sockets
> node server.js

CQN subscription created
Listening on http://localhost:3000

Then load http://localhost:3000/ in a browser. Initially the message pane is blank - I left bootstrapping it as an exercise for the reader.

Start SQL*Plus in a terminal window and create a message:

SQL> INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
SQL> COMMIT;

Every time data is committed to msgtable, the message list on the web page is automatically updated:

If you don't see messages, review Troubleshooting CQN Registrations. The common problems will be network related: the node-oracledb machine must be resolvable, the port must be open etc.

Try it out and let us know how you go. Remember you are using development code that just landed, so there may be a few rough edges.

PostgreSQL 11 : Procedures are coming

Yann Neuhaus - Sat, 2018-06-02 05:38

Reading about the new features of the upcoming PostgreSQL 11, I see that procedures will be implemented. Why is this so important? Until now, only functions have been available in PostgreSQL. Of course, one can say that a function is just a procedure that returns a value; that's true, but you cannot manage transactions in a function. To better understand, let's do a quick demonstration.
Note that I am using the snapshot developer version of PostgreSQL 11.

[postgres@pg_essentials_p1 bin]$ ./psql
Null display is "NULL".
psql (11devel)
Type "help" for help.

(postgres@[local]:5432) [postgres] > select version();
                                                  version
--------------------------------------------------------------------------------
---------------------------
 PostgreSQL 11devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623
 (Red Hat 4.8.5-4), 64-bit
(1 row)

(postgres@[local]:5432) [postgres] >

For the demonstration I have a table emp

(postgres@[local]:5432) [test] > table emp;
 id | name
----+------
  1 | toto
  2 | Edge
(2 rows)

(postgres@[local]:5432) [test] >

And let's say I want to insert data in my table using the following function

CREATE or replace FUNCTION fun_insert_emp(id_emp int,  emp_name varchar(20))
 RETURNS  void AS $$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
commit;
END;
$$
LANGUAGE PLPGSQL;

We can describe the function like this

(postgres@[local]:5432) [test] >  \sf  fun_insert_emp                           
CREATE OR REPLACE FUNCTION public.fun_insert_emp(id_emp integer, emp_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
commit;
END;
$function$
(postgres@[local]:5432) [test] >

To call a function we use a select like any built-in function. So let’s try to insert a new employee with the function

(postgres@[local]:5432) [test] > select fun_insert_emp(3,'New Emp');
ERROR:  invalid transaction termination
CONTEXT:  PL/pgSQL function fun_insert_emp(integer,character varying) line 4 at COMMIT
(postgres@[local]:5432) [test] >

Seems that the word COMMIT is causing trouble. OK let’s recreate the function without the COMMIT

(postgres@[local]:5432) [test] >  \sf  fun_insert_emp                           
CREATE OR REPLACE FUNCTION public.fun_insert_emp(id_emp integer, emp_name character varying)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
BEGIN
 insert into emp (id,name) values (id_emp,emp_name);
END;
$function$
(postgres@[local]:5432) [test] >

And let's call the function again. We can see that the row was inserted

(postgres@[local]:5432) [test] > select fun_insert_emp(3,'New Emp');             
fun_insert_emp
----------------

(1 row)

(postgres@[local]:5432) [test] > table emp;
 id |  name
----+---------
  1 | toto
  2 | Edge
  3 | New Emp
(3 rows)

(postgres@[local]:5432) [test] >

But the main problem we saw is that a function in PostgreSQL does not support any transaction instruction (BEGIN…COMMIT, BEGIN…ROLLBACK). This means that in a function I cannot use instructions that allow me to roll back or commit based on some condition.
But PostgreSQL 11 will support procedures. Let's do the demonstration again with a procedure.
Let's first create the procedure
Let’s first create the procedure

(postgres@[local]:5432) [test] > create or replace procedure proc_insert_emp (id_emp int,  emp_name varchar(20))
test-# as $$
test$# Begin
test$# insert into emp (id,name) values (id_emp,emp_name);
test$# commit;
test$# end ;
test$# $$
test-# LANGUAGE PLPGSQL;
CREATE PROCEDURE
(postgres@[local]:5432) [test] >

And let’s insert a new row in table emp using proc_insert_emp

(postgres@[local]:5432) [test] > call  proc_insert_emp(4,'Brice');
CALL

(postgres@[local]:5432) [test] > table emp;
 id |  name
----+---------
  1 | toto
  2 | Edge
  3 | New Emp
  4 | Brice
(4 rows)

(postgres@[local]:5432) [test] >

We can see that the row was inserted. But the main difference is the support for transaction control (COMMIT/ROLLBACK) inside the procedure, and this will definitely change life for developers.
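As an illustration of why this matters, here is a minimal sketch (the procedure name and the rule it enforces are made up) of a procedure that commits or rolls back depending on a condition, which is exactly what a function cannot do:

create or replace procedure proc_insert_emp_checked (id_emp int, emp_name varchar(20))
as $$
begin
  insert into emp (id,name) values (id_emp,emp_name);
  if emp_name is null or emp_name = '' then
    rollback;   -- reject empty names and undo the insert
  else
    commit;
  end if;
end;
$$
LANGUAGE PLPGSQL;

Called with an empty name, the insert is undone; called with a real name, it is committed.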

 

The article PostgreSQL 11 : Procedures are coming appeared first on Blog dbi services.

Application Development Services Update

Application Container Cloud Service: Secure Environment Variables

Environment variables can now be secured by using the user interface or the deployment.json file. The value of the secure...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Installing MAMP to play with PHP, MySQL and OpenFlights

Yann Neuhaus - Sat, 2018-06-02 02:30

You may wonder what I'm doing with those technologies that are completely different from what I usually do. I'll give the details in a future blog post, but the idea is to give a short introduction to databases to students at https://powercoders.org/, a coding academy for refugees in Switzerland. They install MAMP (My Apache – MySQL – PHP) during their curriculum for website development, and then I'll use this environment to teach data modeling and SQL basics. Thus, I have to look at MAMP, PHP and MySQL for the first time… And I decided to load the OpenFlights open data to play with.
That explains the title.

So MAMP is like LAMP (Linux+Apache+PHP+MySQL) but with an M for MacOS, but also Windows (W being an upside-down M after all). Let's install that on my laptop. I download it from https://www.mamp.info/en/downloads/, run the .exe, all is straightforward and the installer notifies me that the installation will be completed after a reboot.

What? Reboot? Seriously, we are in 2018, that’s Windows 10, I refuse to reboot to install a simple Apache server!

This bundle is very easy to use: a simple window to start and stop the servers (Apache and MySQL), a setup menu to configure them (I keep the defaults), and a link to the start page. All of that is installed under C:\MAMP (you can change it, I just kept the default settings). The first time you start the servers, the Windows Firewall configuration prompt is raised and you have to accept it.


With all defaults (Apache on port 80), my web server pages are at http://localhost (serving the files in C:\MAMP\htdocs) and the administration page is at http://localhost/MAMP/
The MySQL administration page (phpMyAdmin) is at http://localhost/MAMP/index.php?page=phpmyadmin. It seems that, at least by default, I don’t need a password to access the admin pages.

display_errors

I’ll write some PHP, and because it’s the first time in my life, I will have some errors. With the default configuration, Apache just raises an Error 500, which does not help me a lot with debugging. This configuration is fine for production, because displaying errors may give clues to hackers, but I’m here to play and I want to see the error messages and line numbers.

I have to set display_errors=on for that. The current setting is displayed at http://localhost/MAMP/index.php?language=English&page=phpinfo#module_core and I can change it in C:\MAMP\conf\php7.2.1\php.ini. After a restart of the Apache server I can see full error messages:

Warning: mysqli_real_connect(): (28000/1045): Access denied for user 'root'@'localhost' (using password: YES) in C:\MAMP\htdocs\index.php on line 123

Little fix

But now that I display the errors, I get this annoying message each time I try to do something in phpMyAdmin (which runs as PHP in the same Apache server):

MAMP "count(): Parameter must be an array or an object that implements Countable"


So this product, which is free but also has a ‘PRO’ version probably running the same code, is delivered with bad code, raising errors that were ignored. Don’t tell me that it is just a warning: you will see below that a parenthesis is misplaced, and raising only a warning for that is quite bad.
My common sense tells me that one should set display_errors=on and test a few screens before releasing software, but that step has probably been skipped. Fortunately, the message is clear: line 615 of C:\MAMP\bin\phpMyAdmin\libraries\sql.lib.php

The message is about count() not receiving the correct parameter. Line 615 shows count($analyzed_sql_results['select_expr'] == 1 ), which is probably not correct because it counts a boolean expression. I’ve changed it to (count($analyzed_sql_results['select_expr']) == 1 ), as I suppose they want to count and compare the result to one.

Well, I’ve never written one line of PHP and I already hate it for its error handling weakness.

Load some data

I want to initialize the database with some data and I’ll use the OpenFlights database. I’ve downloaded and unzipped https://github.com/jpatokal/openflights/blob/master/sql/master.zip
I go to the unzipped directory and run MySQL:

cd /d D:\Downloads\openflights-master

Another little thing to fix here: the sql\create.sql and sql\load-data.sql files contain some lines starting with “\! echo”, but this \! command (which runs a system command) exists on Linux and not on the Windows port of MySQL. We have to remove those lines before running the SQL scripts. I’m used to Oracle, where I can port my code and scripts from one platform to the other, and was a bit surprised by this.

Ready to connect:

C:\MAMP\bin\mysql\bin\mysql test --port=3306 --host=localhost --user root --password
Enter password:

The MySQL connection parameters are displayed on http://localhost/MAMP/, including the password (root).


mysql> source sql\create.sql
Query OK, 0 rows affected (0.00 sec)
 
Connection id: 314
Current database: flightdb2
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.02 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.01 sec)
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.02 sec)
Records: 0 Duplicates: 0 Warnings: 0
 
Query OK, 0 rows affected (0.00 sec)
...

This has created the flightdb2 database, with the openflights user and 15 tables.
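
If you want to double-check what the script created, a few commands from the same mysql prompt should be enough (just a quick sketch; the names come from sql\create.sql):

-- verify the database, its tables and the dedicated user created by create.sql
SHOW DATABASES LIKE 'flightdb2';
USE flightdb2;
SHOW TABLES;
SELECT user, host FROM mysql.user WHERE user = 'openflights';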

Now, if you are still in the unzipped directory, you can load the data with the source sql\load-data.sql script, which loads from the data\*.dat files:

mysql> source sql\load-data.sql
Query OK, 6162 rows affected, 4 warnings (0.04 sec)
Records: 6162 Deleted: 0 Skipped: 0 Warnings: 4
 
Query OK, 7184 rows affected, 7184 warnings (0.12 sec)
Records: 7184 Deleted: 0 Skipped: 0 Warnings: 7184
 
Query OK, 67663 rows affected (0.53 sec)
Records: 67663 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 260 rows affected (0.01 sec)
Records: 260 Deleted: 0 Skipped: 0 Warnings: 0
 
Query OK, 12 rows affected (0.01 sec)
Records: 12 Deleted: 0 Skipped: 0 Warnings: 0
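
Just to get a feeling of what was loaded, a couple of quick queries can be run from the same session (a sketch; the airports table and its country column are the same ones used in the PHP example below):

-- how many airports were loaded, and which countries have the most of them
SELECT COUNT(*) FROM airports;
SELECT country, COUNT(*) AS nb_airports
FROM airports
GROUP BY country
ORDER BY nb_airports DESC
LIMIT 5;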

Query from PHP

So, for my first lines of PHP I’ve added the following to C:\MAMP\htdocs\index.php:

<?php
// connection parameters (they are displayed on http://localhost/MAMP/)
$user = 'openflights'; $password = '';
$db = 'flightdb2'; $host = 'localhost'; $port = 3306;

// get a connection handle and connect to the flightdb2 database
$conn = mysqli_init();
if (!$conn) {
    die("mysqli_init failed");
}
if (!$success = mysqli_real_connect($conn, $host, $user, $password, $db, $port)) {
    die("😠 Connection Error: " . mysqli_connect_error());
}
echo "😎 Connected to database <b>$db</b> as user <b>$user</b>.";
?>

<p>
Here are the Airports:
<table border=1>
<tr><th>IATA</th><th>Name</th></tr>

<?php
// run the query and display each row as an HTML table row
$result = $conn->query("select iata,name from airports where country='Greenland' order by 2");
if ($result->num_rows > 0) {
    while ($row = $result->fetch_assoc()) {
        echo "<tr><td>" . $row["iata"] . "</td><td>" . $row["name"] . "</td></tr>";
    }
} else {
    echo "0 results";
}
mysqli_close($conn);
?>
</table>


Here, I call mysqli_init(), set the credentials and call mysqli_real_connect() to get the connection handle. Then I run my query and display the result as an HTML table. Nothing difficult here. The main challenge is probably to keep the code maintainable.

In my opinion, and despite the small issues encountered, MAMP is a simple way to set up a development environment on Windows. Everything is there to introduce SQL and databases to developers, and to show how to call them from a programming language.

 

This article Installing MAMP to play with PHP, MySQL and OpenFlights appeared first on Blog dbi services.
