Feed aggregator

Error: ORA-06533: Subscript beyond count

Tom Kyte - Mon, 2018-04-09 06:26
Please help me in editing the below Pl/SQL code: <code>create or replace Function user_score_scurve (scurve_in varchar2, inpt IN number) RETURN number IS params_number number; TYPE type_params IS table OF number; Params ...
Categories: DBA Blogs

Clarification of XAI, MPL and IWS

Anthony Shorten - Mon, 2018-04-09 00:36

A few years ago, we announced that XML Application Integration (XAI) and the Multipurpose Listener (MPL) were being retired from the product and replaced with Inbound Web Services (IWS) and Oracle Service Bus (OSB) Adapters.

In the next service pack of the Oracle Utilities Application Framework, XAI and MPL will finally be removed from the product.

The following applies to this:

  • The MPL software and XAI Servlet will be removed from the code. This is the final step in the retirement process. The tables associated with XAI and MPL will remain in the product for backward compatibility with the newer adapters. Maintenance functions that are retained will be prefixed with Message rather than XAI. Menu items that are not retained will be disabled by default. Refer to the release notes of service packs (latest and past) for details of the menu item changes.
  • Customers using XAI should migrate to Inbound Web Services using the following guidelines:
    • XAI Services using the legacy Base and CorDaptix adapters will be automatically migrated to Inbound Web Services. These services will be auto-deployed using the Inbound Web Services Deployment online screen or iwsdeploy utility.
    • XAI Services using the Business adapter (sic) can either migrate their definitions manually to Inbound Web Services or use a technique similar to the technique outlined in Converting your XAI Services to IWS using scripting. Partners should take the opportunity to rationalize their number of web services using the multi-operation capability in Inbound Web Services.
    • XAI Services using any adapter other than those listed above cannot be migrated, as they are typically internal services for use with the MPL.
  • Customers using the Multi-purpose Listener should migrate to Oracle Service Bus with the relevant adapters installed.

There are a number of key whitepapers that can assist in this process:

Congestion Control algorithms in UEK5 preview - try out BBR

Wim Coekaerts - Sun, 2018-04-08 18:47

One of the new features in UEK5 is a new TCP congestion control management algorithm called BBR (bottleneck bandwidth and round-trip propagation time). You can find very good papers here and here.

Linux supports a large variety of congestion control algorithms: bic, cubic, westwood, hybla, vegas, h-tcp, veno, etc.

Wikipedia has some good information on them : https://en.wikipedia.org/wiki/TCP_congestion_control

Here is a good overview of the important ones, including BBR : https://blog.apnic.net/2017/05/09/bbr-new-kid-tcp-block/

The default algorithm used, for quite some time now, is cubic (and this will remain the default also in UEK5). But we now also include support for BBR. BBR was added in the mainline Linux kernel version 4.9. UEK5 picked it up because we based the UEK5 tree on mainline 4.14. Remember we have our kernels on github for easy access and reading. We don't do tar files, you get the whole thing with changelog - standard upstream kernel git with backports, fixes, etc...

We have seen very promising performance improvements using bbr when downloading or uploading large files over the WAN. So for cloud computing usage and moving data from on-premises to cloud or the other way around, this might (in some situations) provide a bit of a performance boost. I've measured 10% in some tests. Your mileage may vary. It certainly should help when you have packet loss.

One advantage is that you don't need to have both source and target systems run this kernel. So to test out BBR you can run OL7 on either side and install uek5 on it (see here) and just enable it on that system. Try ssh or netperf or wget of a large(ish) file.

All you have to do is:

- use an Oracle Linux 7 install on one of the 2 servers.

- install the UEK5 preview kernel and boot into that one

- use sysctl (as root) to modify the settings / enable BBR. You can do this online. No reboot required.

You should also set the queue discipline to fq instead of pfifo_fast(default).

# sysctl -w net.ipv4.tcp_congestion_control=bbr
# sysctl -w net.core.default_qdisc=fq

if you want to go back to the defaults:

# sysctl -w net.ipv4.tcp_congestion_control=cubic
# sysctl -w net.core.default_qdisc=pfifo_fast

(feel free to experiment with switching pfifo_fast vs fq as well).
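
If you want to double-check what is active and what the kernel has available, reading the corresponding sysctls should do (standard keys, nothing UEK-specific):

# show the algorithm currently in use
sysctl net.ipv4.tcp_congestion_control

# list the algorithms available on this kernel
sysctl net.ipv4.tcp_available_congestion_control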

If need be, this can be set on an individual socket level in Linux. If you have a specific application (like a webserver or a data transfer program), use setsockopt(). Something like:

sock = socket(AF_INET, SOCK_STREAM, 0);
sockfd = accept(sock, ...);
/* set BBR as the congestion control algorithm on this connection */
strcpy(optval, "bbr");
optlen = strlen(optval);
if (setsockopt(sockfd, IPPROTO_TCP, TCP_CONGESTION, optval, optlen) < 0)
    error("setsockopt(TCP_CONGESTION) failed");

Or you should be able to do the same in Python, starting with Python 3.6:

sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION,...)

Have fun playing with it. Let me know if/when you see advantages as well.

Split 1 row into 2 rows based on column values without UNION

Tom Kyte - Sun, 2018-04-08 12:06
Hi, I will be glad if you could help me to know if the below can be achieved without using UNION I want to split a row into 2 based on a column value create table xx_test_split ( id number, amount number, discount_amount number, currency ...
Categories: DBA Blogs

Oracle Query CPU Utilization

Tom Kyte - Sun, 2018-04-08 12:06
I am looking at instance sizing and have some questions on how Oracle uses CPU cores. Assume this is a Standard Edition database so no queries will be running in parallel mode. Assuming system background processes are negligible: *) When a single ...
Categories: DBA Blogs

DBA career advice

Tom Kyte - Sun, 2018-04-08 12:06
I work in mid-size company in India. I'm Oracle Developer last 1 year. I want to become DBA. How to become professional Oracle DBA like Core DBA or Oracle Application DBA in INDIA? Which book should be read for me? What should be work do by me ...
Categories: DBA Blogs

How can i change my sys password

Tom Kyte - Sun, 2018-04-08 12:06
How can i change my sys password
Categories: DBA Blogs

Speaking At DOAG 2018 Exa & Middleware Days In Frankfurt

Randolf Geist - Sun, 2018-04-08 07:45
I will be speaking at the DOAG 2018 Exa & Middleware Days in Frankfurt on June 18th and 19th. My talk will be "Exadata & InMemory Real World Performance" where I discuss the different performance improvements you can expect from the super fast scans delivered by those technologies depending on the actual work profile of the SQL and data used.

Hope to see you there!

First steps with Docker Checkpoint – to create and restore snapshots of running containers

Amis Blog - Sun, 2018-04-08 01:31

Docker Containers can be stopped and started again. Changes made to the file system in a running container will survive this deliberate stop and start cycle. Data in memory and running processes obviously do not. A container that crashes cannot just be restarted and, if it can be restarted, will have a file system in an undetermined state. When you start a container after it was stopped, it will go through its full startup routine. If heavy-duty processes need to be started – such as a database server process – this startup time can be substantial, as in many seconds or dozens of seconds.

Linux has a mechanism called CRIU or Checkpoint/Restore In Userspace. Using this tool, you can freeze a running application (or part of it) and checkpoint it as a collection of files on disk. You can then use the files to restore the application and run it exactly as it was during the time of the freeze. See https://criu.org/Main_Page for details. Docker CE has (experimental) support for CRIU. This means that using straightforward docker commands we can take a snapshot of a running container (docker checkpoint create <container name> <checkpointname>). At a later moment, we can start this snapshot as the same container (docker start --checkpoint <checkpointname> <container name>) or as a different container.

The container that is started from a checkpoint is in the same state – memory and processes – as the container was when the checkpoint was created. Additionally, the startup time of the container from the snapshot is very short (subsecond); for containers with fairly long startup times – this rapid startup can be a huge boon.
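
In plain shell terms, the basic cycle from the paragraphs above looks like this (container and checkpoint names are just placeholders):

# snapshot a running container (add --leave-running=true to keep it running afterwards)
docker checkpoint create mycontainer checkpoint0

# later: start the stopped container again from that checkpoint
docker start --checkpoint checkpoint0 mycontainer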

In this article, I will tell about my initial steps with CRIU and Docker. I got it to work. I did run into an issue with recent versions of Docker CE (17.12 and 18.x), so I went back to Docker CE 17.04. I also ran into an issue with an older version of CRIU, so I built the currently latest version of CRIU (3.8.1) instead of the one shipped in the Ubuntu Xenial 64 distribution (2.6).

I will demonstrate how I start a container that clones a GitHub repository and starts a simple REST API as a Node application; this takes 10 or more seconds. This application counts the number of GET requests it handles (by keeping some memory state). After handling a number of requests, I create a checkpoint for this container. Next, I make a few more requests, all the while watching the counter increase. Then I stop the container and start a fresh container from the checkpoint. The container is running lightning fast – within 700ms – so it clearly leverages the container state at the time of creating the snapshot. It continues counting requests at the point where the snapshot was created, apparently inheriting its memory state. Just as expected and desired.

Note: a checkpoint does not capture changes in the file system made in a container. Only the memory state is part of the snapshot.

Note 2: Kubernetes does not yet provide support for checkpoints. That means that a pod cannot start a container from a checkpoint.

In a future article I will describe a use case for these snapshots – in automated test scenarios and complex data sets.

The steps I went through (on my Windows 10 laptop using Vagrant 2.0.3 and VirtualBox 5.2.8):

  • use Vagrant to create an Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x
  • downgrade Docker from 18.x to 17.04
  • configure Docker for experimental options
  • install CRIU package
  • try out simple scenario with Docker checkpoint
  • build CRIU latest version
  • try out somewhat more complex scenario with Docker checkpoint (that failed with the older CRIU version)

 

Create Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x

My Windows 10 laptop already has Vagrant 2.0.3 and Virtual Box 5.2.8. Using the following vagrantfile, I create the VM that is my Docker host for this experiment:

 

After creating (and starting) the VM with

vagrant up

I connect into the VM with

vagrant ssh

ending up at the command prompt, ready for action.

And just to make sure we are pretty much up to date, I run

sudo apt-get upgrade

image

Downgrade Docker CE to Release 17.04

At the time of writing there is an issue with recent Docker versions (at least 17.09 and higher – see https://github.com/moby/moby/issues/35691) and for that reason I downgrade to version 17.04 (as described here: https://forums.docker.com/t/how-to-downgrade-docker-to-a-specific-version/29523/4).

First remove the version of Docker installed by the vagrant provider:

sudo apt-get autoremove -y docker-ce \
&& sudo apt-get purge docker-ce -y \
&& sudo rm -rf /etc/docker/ \
&& sudo rm -f /etc/systemd/system/multi-user.target.wants/docker.service \
&& sudo rm -rf /var/lib/docker \
&&  sudo systemctl daemon-reload

then install the desired version:

sudo apt-cache policy docker-ce

sudo apt-get install -y docker-ce=17.04.0~ce-0~ubuntu-xenial
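
Since a later apt-get upgrade may pull a newer docker-ce back in, you could optionally pin the package; this is an extra precaution of mine, not part of the downgrade instructions:

# prevent apt from upgrading docker-ce past the pinned version
sudo apt-mark hold docker-ce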

 

    Configure Docker for experimental options

    Support for checkpoints leveraging CRIU is an experimental feature in Docker. In order to make use of it, the experimental options have to be enabled. This is done as described in https://stackoverflow.com/questions/44346322/how-to-run-docker-with-experimental-functions-on-ubuntu-16-04:

     

    sudo nano /etc/docker/daemon.json
    

    add

    {
    "experimental": true
    }
    

    Press CTRL+X, select Y and press Enter to save the new file.

    restart the docker service:

    sudo service docker restart
    

    Check with

    docker version
    

    if experimental is indeed enabled.
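
    A quick way to check this from the command line, assuming your Docker CLI supports the --format flag, is:

    # prints "true" when the daemon runs with experimental features enabled
    docker version --format '{{.Server.Experimental}}'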

     

    Install CRIU package

    The simple approach with CRIU – how it should work – is by simply installing the CRIU package:

    sudo apt-get install criu
    

    (see for example in https://yipee.io/2017/06/saving-and-restoring-container-state-with-criu/)

    This installation results for me in version 2.6 of the CRIU package. For some actions that proves sufficient, and for others it turns out to be not enough.

    image

     

    Try out simple scenario with Docker checkpoint on CRIU

    At this point we have Docker 17.04, Ubuntu 16.04 with CRIU 2.6. And that combination can give us a first feel for what the Docker Checkpoint mechanism entails.

    Run a simple container that writes a counter value to the console once every second (and then increases the counter)

    docker run --security-opt=seccomp:unconfined --name cr -d busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
    

    check on the values:

    docker logs cr
    

    create a checkpoint for the container:

    docker checkpoint create  --leave-running=true cr checkpoint0
    

    image

    leave the container running for a while and check the logs again

    docker logs cr
    

    SNAGHTML19a5da6

    now stop the container:

    docker stop cr
    

    and restart/recreate the container from the checkpoint:

    docker start --checkpoint checkpoint0 cr
    

    Check the logs:

    docker logs cr
    

    You will find that the log is resumed at the value (19) where the checkpoint was created:

    SNAGHTML197d66e

     

    Build CRIU latest version

    When I tried a more complex scenario (see next section) I ran into this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker Host. Here are the steps I went through to accomplish that – following these instructions: https://criu.org/Installation.

    First, remove the currently installed CRIU package:

    sudo apt-get autoremove -y criu \
    && sudo apt-get purge criu -y \
    

    Then, prepare the build environment:

    sudo apt-get install build-essential \
    && sudo apt-get install gcc   \
    && sudo apt-get install libprotobuf-dev libprotobuf-c0-dev protobuf-c-compiler protobuf-compiler python-protobuf \
    && sudo apt-get install pkg-config python-ipaddr iproute2 libcap-dev  libnl-3-dev libnet-dev --no-install-recommends
    

    Next, clone the GitHub repository for CRIU:

    git clone https://github.com/checkpoint-restore/criu
    

    Navigate into the criu directory that contains the code base

    cd criu
    

    and build the criu package:

    make
    

    When make is done, I can run CRIU :

    sudo ./criu/criu check
    

    to see if the installation is successful. The final message printed should be: Looks Good (despite perhaps one or more warnings).

    Use

    sudo ./criu/criu -V
    

    to learn about the version of CRIU that is currently installed.

    Note: the CRIU instructions describe the following steps to install criu system wide. This does not seem to be needed in order for Docker to leverage CRIU from the docker checkpoint commands.

    sudo apt-get install asciidoc  xmlto
    sudo make install
    criu check
    

    Now we are ready to take on the more complex scenario that failed before with an issue in the older CRIU version.

    A More complex scenario with Docker Checkpoint

    This scenario failed with the older CRIU version – probably because of this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker Host.

      In this case, I run a container based on a Docker Container image for running any Node application that is downloaded from a GitHub Repository. The Node application that the container will download and run handles simple HTTP GET requests: it counts requests and returns the value of the counter as the response to the request. This container image and this application were introduced in an earlier article: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

      Here you see the command to run the container – to be called reqctr2:

      docker run --name reqctr2 -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1"  -e "APP_STARTUP=requestCounter.js"   lucasjellema/node-app-runner
      

      image

      It takes about 15 seconds for the application to start up and handle requests.

      Once the container is running, requests can be sent from outside the VM – from a browser running on my laptop for example – to be handled  by the container, at http://192.168.188.106:8005/.
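
      From within the VM you should also be able to hit the same endpoint with curl (same IP and port as used in this demo):

      # each request returns the current value of the in-memory counter
      curl http://192.168.188.106:8005/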

      After a number of requests, the counter is at 21:

      image

      At this point, I create a checkpoint for the container:

      docker checkpoint create  --leave-running=true reqctr2 checkpoint1
      

      image

      I now make a few additional requests in the browser, bringing the counter to a higher value:

      image

      At this point, I stop the container and subsequently start it again from the checkpoint:

      docker stop reqctr2
      docker start --checkpoint checkpoint1 reqctr2
      

      image

      It takes less than a second for the container to continue running.

      When I make a new request, I do not get 1 as a value (as would be the result from a fresh container), nor do I get 43 (the result I would get if the previous container were still running). Instead, I get

      image

      This is the next value, starting from the state of the container that was captured in the snapshot. Note: because I make the GET request from the browser and the browser also tries to retrieve the favicon, the counter is increased by two for every single time I press refresh in the browser.

      Note: I can get a list of all checkpoints that have been created for a container. Clearly, I should put some more effort in a naming convention for those checkpoints:

      docker checkpoint ls reqctr2
      

      image

      The flow I went through in this scenario can be visualized like this:

      image

      The starting point: Windows laptop with Vagrant and Virtual Box. A VM has been created by Vagrant with Docker inside. The correct version of Docker and of the CRIU package have been set up.

      Then these steps are run through:

      1. Start Docker container based on an image with Node JS runtime
      2. Clone GitHub Repository containing a Node JS application
      3. Run the Node JS application – ready for HTTP Requests
      4. Handle HTTP Requests from a browser on the Windows Host machine
      5. Create a Docker Checkpoint for the container – a snapshot of the container state
      6. The checkpoint is saved on the Docker Host – ready for later use
      7. Start a container from the checkpoint. This container starts instantaneously, no GitHub clone and application startup are required; it resumes from the state at the time of creating the checkpoint
      8. The container handles HTTP requests – just like its checkpointed predecessor

       

      Resources

      Sources are in this GitHub repo: https://github.com/lucasjellema/docker-checkpoint-first-steps

      Article on CRIU: http://www.admin-magazine.com/Archive/2014/22/Save-and-Restore-Linux-Processes-with-CRIU

      Also: on CRIU and Docker: https://yipee.io/2017/06/saving-and-restoring-container-state-with-criu/.

      Docs on Checkpoint and Restore in Docker: https://github.com/docker/cli/blob/master/experimental/checkpoint-restore.md

       

      Home of CRIU:   and page on Docker support: https://criu.org/Docker; install CRIU package on Ubuntu: https://criu.org/Packages#Ubuntu

      Install and Build CRIU Sources: https://criu.org/Installation

       

      Docs on Vagrant’s Docker provisioning: https://www.vagrantup.com/docs/provisioning/docker.html

      Article on downgrading Docker : https://forums.docker.com/t/how-to-downgrade-docker-to-a-specific-version/29523/4

      Configure Docker for experimental options: https://stackoverflow.com/questions/44346322/how-to-run-docker-with-experimental-functions-on-ubuntu-16-04

      Issue with Docker and Checkpoints (at least in 17.09-18.03): https://github.com/moby/moby/issues/35691

      The post First steps with Docker Checkpoint – to create and restore snapshots of running containers appeared first on AMIS Oracle and Java Blog.

      Ubuntu: unmet dependencies gparted (libparted-fs-resize0 (>= 3.1))

      Dietrich Schroff - Sat, 2018-04-07 09:34
      After a new Ubuntu installation I got the following error:
      root@pc:/etc/apt/sources.list.d# apt install   gparted
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or, if you are using the unstable
      distribution, that some required packages have not yet been created
      or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
       gparted : Depends: libparted-fs-resize0 (>= 3.1) but it is not going to be installed
      E: Unable to correct problems, you have held broken packages.

      or
      The following packages have unmet dependencies:
       gparted :
      Depends: libglibmm-2.4-1c2a (>= 2.42.0) but 2.39.93-0ubuntu1 is to be installed
       Depends: libparted-fs-resize0 (>= 3.1) but it is not installable
       Depends: libparted2 (>= 3.1) but it is not installable
       Depends: libstdc++6 (>= 4.9) but 4.8.2-19ubuntu1 is to be installed
      E: Unable to correct problems, you have held broken packages.


      I tried some solutions like apt clean or removing many repositories from /etc/apt/sources.list and /etc/apt/sources.list.d, but the error still remained.

      After some attempts I found the solution via these commands:

      The installation of libparted-fs-resize0 fails with the following error:
      root@pc:/etc/apt# apt install libparted-fs-resize0
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have
      requested an impossible situation or, if you are using the unstable
      distribution, that some required packages have not yet been created
      or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
       libparted-fs-resize0 : Depends: libparted2 (= 3.2-15) but 3.2-15ubuntu0.1 is to be installed
      E: Unable to correct problems, you have held broken packages.

      So I checked this package:

      root@pc:/etc/apt# apt-cache policy libparted2
      libparted2:
        Installed:  3.2-15ubuntu0.1
        Candidate:  3.2-15ubuntu0.1
        Version table:
       *** 3.2-15ubuntu0.1 100
              100 /var/lib/dpkg/status
           3.2-15 500
              500 http://de.archive.ubuntu.com/ubuntu xenial/main amd64 Packages

      Hmmm, two versions inside my version table. So I removed the local status file:
      root@pc:/etc/apt# cd /var/lib/dpkg/
      root@pc:/var/lib/dpkg# mv status status.180227
      root@pc:/var/lib/dpkg# touch status

      And after that the problem was gone:
      root@pc:/var/lib/dpkg# apt update
      root@pc:/var/lib/dpkg# apt install gparted
      Here we go:


      How to pass a bind variable inside string inside execute immediate string?

      Tom Kyte - Fri, 2018-04-06 23:26
      The benefits of using bind variables inside dynamic SQL statements have been discussed many times here. I have dynamic SQl statement like this where "id" is a pls_integer: <code>---- ENSURE THAT BULK PROCESSING IS DISABLED ---- b...
      Categories: DBA Blogs

      Custom pivot with count and sum summaries and horizontal sorting

      Tom Kyte - Fri, 2018-04-06 23:26
      Hello Team, Good Day! I have linked livesql script for data creation. Data basically looks like this. <code> 1 symnum NUMBER 22 2 symname VARCHAR2 100 3 remnum NUMBER 22 4 remname VARCHAR2 32 5 grade NUMBER 22 </code> ...
      Categories: DBA Blogs

      Move data within partitions

      Tom Kyte - Fri, 2018-04-06 23:26
      Hi Tom. I'm working on a database with a partitioned table (by date). There are weekly partitions plus a edge partition where values higher than a certain value are stored. The last partition has never been used and is empty. Counting recor...
      Categories: DBA Blogs

      Oracle Comments on Terix Criminal Sentences

      Oracle Press Releases - Fri, 2018-04-06 11:11
      Press Release
      Oracle Comments on Terix Criminal Sentences

      Redwood Shores, Calif.—Apr 6, 2018

      “Oracle is pleased that the United States District Court for the Southern District of Ohio accepted the guilty pleas of James Olding and Bernd Appleby, the principals of Terix, for their roles in misappropriating Oracle’s intellectual property and sentenced them both to prison for their criminal acts,” says Oracle spokesperson Deborah Hellinger.  “Oracle takes violations of its intellectual property rights very seriously and, as demonstrated by Oracle’s lawsuits against Terix, Rimini Street and other IP violators, Oracle will not hesitate to go after those who do so.  Oracle appreciates the fine work of the law enforcement officials whose efforts led to the criminal penalties assessed against Terix’s principals.”

      In June 2015, Oracle obtained a judgment against Terix for copyright infringement based on Terix’s theft of patches and updates to Oracle’s Solaris operating system.

      For more information on the U.S. Department of Justice’s ruling on the Terix case, please visit here.

      Contact Info
      Deborah Hellinger
      Oracle Corporate Communications
      +1.212.508.7935
      deborah.hellinger@oracle.com
      About Oracle

      Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

      Trademarks

      Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

      Talk to a Press Contact

      Deborah Hellinger

      • +1.212.508.7935

      Automating Oracle Linux Installation with Kickstart

      Yann Neuhaus - Fri, 2018-04-06 08:10
      Automating Oracle Linux Installation with Kickstart
      Kickstart?

      If you need to set up several Oracle Linux systems from scratch for your Oracle databases, it can be boring to repeat the install tasks again and again on each server.
      Automation and standardization are the keys.
      Kickstart can provide an easy way to accomplish mass deployment.

      Kickstart configuration files

      Kickstart will use a Kickstart configuration file to perform the deployment.
      Maintaining ready-to-go Kickstart configurations is easy.
      In our demo we will use an FTP server to store and access our configuration file.

      Direct access to the video:
      51

      Let's go! First install an FTP server

      On an Oracle Linux 7.2 server, just type the following command to install an FTP server and an FTP client:

      yum install vsftpd ftp lftp
      

      53

      Then adapt the timeout parameters to avoid disconnection when deploying your server.
      Be sure anonymous access is enabled.

      [root@localhost ~]# sed '/^#/d' /etc/vsftpd/vsftpd.conf 
      
      anonymous_enable=YES
      local_enable=YES
      write_enable=YES
      local_umask=022
      dirmessage_enable=YES
      xferlog_enable=YES
      connect_from_port_20=YES
      xferlog_std_format=YES
      idle_session_timeout=6000
      data_connection_timeout=1200
      listen=NO
      listen_ipv6=YES
      pam_service_name=vsftpd
      userlist_enable=YES
      tcp_wrappers=YES
      

      and start your ftpd server.

      systemctl start vsftpd
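
      If the FTP host will be reused for later deployments, you may also want to enable the service at boot; this is optional and not part of this demo:

      systemctl enable vsftpd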
      

      Then put your Kickstart configuration file in it:

      vi /var/ftp/pub/myksfile.ks
      

      And copy/paste the whole content. I will explain the file later:

      ########################################################################
      ########################################################################
      ##                                                                    ##
      ##               Kickstart for OEL7 :  olg.dbi-services.com           ##
      ##                                                                    ##
      ########################################################################
      ########################################################################
      
      # install through HTTP
      ########################################################################
      install
      cdrom
      
      
      # locale settings
      ########################################################################
      lang en_US.UTF-8
      keyboard --vckeymap=ch --xlayouts='ch'
      timezone --utc Europe/Zurich
      
      
      # X is not configured on the installed system. 
      ########################################################################
      skipx
      
      
      # installation mode
      ########################################################################
      text
      reboot --eject
      
      
      # Partition table initialization
      ########################################################################
      zerombr
      
      
      # Network configuration
      # Oracle Linux 7: How to modify Network Interface names (Doc ID 2080965.1)
      ########################################################################
      ### network --device eth0 --bootproto static --ip 192.168.56.102 --netmask 255.255.255.0 --gateway 192.168.56.1 --nameserver it.dbi-services.com --hostname olg.dbi-services.com net.ifnames=0
      
      
      # security settings
      ########################################################################
      rootpw      toor
      firewall    --enabled --ssh
      selinux   --enforcing
      authconfig  --enableshadow --passalgo=sha512
      
      
      # Partitioning and bootloader
      ########################################################################
      # only 1 disk presented to the O.S during installation time
      # net.ifnames=0 to use eth name for network devices
      bootloader      --location=mbr  --append="nofb quiet splash=quiet crashkernel=auto net.ifnames=0"
      firstboot       --disable
      clearpart       --all          --initlabel
      part /boot      --fstype xfs   --ondisk=/dev/sda --size=512
      part swap       --size=2048   --ondisk=/dev/sda
      part pv.01      --size=100     --ondisk=/dev/sda --grow
      volgroup RHELVG pv.01
      logvol /        --fstype xfs   --name=RootLV   --vgname=RHELVG --size=8196
      logvol /usr     --fstype xfs   --name=UsrLV    --vgname=RHELVG --size=2048
      logvol /tmp     --fstype xfs   --name=TmpLV    --vgname=RHELVG --size=2048
      logvol /var     --fstype xfs   --name=VarLV    --vgname=RHELVG --size=4096
      logvol /var/log/audit     --fstype xfs   --name=AuditLV    --vgname=RHELVG --size=2048
      logvol /opt     --fstype xfs   --name=OptLV    --vgname=RHELVG --size=2048
      logvol /home    --fstype xfs   --name=HomeLV   --vgname=RHELVG --size=2048
      logvol /u01     --fstype xfs   --name=u01LV    --vgname=RHELVG --size=2048
      
      
      
      # packages + RPMs
      ########################################################################
      %packages
      @base
      
      # system components
      device-mapper-multipath
      kexec-tools
      lvm2
      e4fsprogs
      sg3_utils
      lsscsi
      dstat
      ntp
      perl
      postfix
      bc
      
      # VI
      vim-common
      vim-enhanced
      
      # SELINUX
      setroubleshoot
      setroubleshoot-server
      setroubleshoot-plugins
      
      %end
      
      
      # POST installations tasks
      ########################################################################
      %post
      
      modprobe --first-time bonding
      # VLAN kernel module
      # modprobe --first-time 8021q
      
      # configure bond
      ################
      echo "DEVICE=bond0
      TYPE=Bond
      BONDING_MASTER=yes
      BOOTPROTO=static
      IPADDR=192.168.56.149
      NETMASK=255.255.255.0
      GATEWAY=192.168.56.1
      BONDING_OPTS=\"mode=active-backup miimon=100\"
      ONPARENT=yes
      ONBOOT=yes" > /etc/sysconfig/network-scripts/ifcfg-bond0
      
      echo "DEVICE=eth0
      ONBOOT=yes
      MASTER=bond0
      BOOTPROTO=none
      NM_CONTROLLED=no
      SLAVE=yes" > /etc/sysconfig/network-scripts/ifcfg-eth0
      
      echo "DEVICE=eth1
      ONBOOT=yes
      MASTER=bond0
      BOOTPROTO=none
      NM_CONTROLLED=no
      SLAVE=yes" > /etc/sysconfig/network-scripts/ifcfg-eth1
      
      echo "DEVICE=eth2
      ONBOOT=yes
      BOOTPROTO=dhcp
      NM_CONTROLLED=no
      " > /etc/sysconfig/network-scripts/ifcfg-eth2
      
      rm -f /etc/sysconfig/network-scripts/ifcfg-en*
      
      systemctl restart network
      systemctl stop NetworkManager.service
      systemctl disable NetworkManager.service
      
      
      # Switch to Postfix
      ###################
      alternatives --set mta  /usr/sbin/sendmail.postfix
      
      
      # HOSTS FILE
      ############
      cat >> /etc/hosts <> /etc/ntp.conf
      
      # DNS config
      #############
      cat > /etc/resolv.conf < /etc/postfix/main.cf < /etc/postfix/master.cf <> /etc/postfix/generic
      postmap /etc/postfix/generic
      
      
      
      # user management + SUDO privilege delegation
      ########################################################################
      adduser admora
      echo toor | passwd admora --stdin
      
      echo "admora    ALL=NOPASSWD: ALL
      #admora  ALL = NOPASSWD: /bin/su - oracle , /bin/su -" >> /etc/sudoers 
      
      
      # Enable services
      ########################################################################
      systemctl enable ntpd.service
      systemctl start ntpd.service
      systemctl enable ntpdate.service
      
      
      # Oracle +Nagios prereqs
      ########################################################################
      yum -y install oracle-rdbms-server-11gR2-preinstall oracle-rdbms-server-12cR1-preinstall oracle-database-server-12cR2-preinstall
      yum -y install openssl openssl-devel
      yum -y install net-tools
      # as of ALUA RHEL7.4 incompatibilities (stay on 7.2 and lock repo. later)
      #yum -y update
      
      
      # Oracle tuned configuration
      ########################################################################
      mkdir -p /usr/lib/tuned/dbiOracle
      cat > /usr/lib/tuned/dbiOracle/tuned.conf < /sys/class/fc_host/host1/issue_lip
      echo 1 > /sys/class/fc_host/host2/issue_lip
      
      echo "# Format:
      # alias wwid
      #
      LUN_ORAFRA 360030d90466abf0660191bde985bba15
      LUN_ORADBF 360030d906382c2065827918ddb6506da" >> /etc/multipath/bindings
      
      cat > /etc/multipath.conf <<EOF
      
      defaults {
         polling_interval 60
               }
      
      blacklist {
       devnode "^sd[a]"
              devnode "^(zram|ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
              devnode "^hd[a-z]"
              devnode "^cciss!c[0-9]d[0-9]*"
      }
      blacklist_exceptions {
       wwid "360030d90466abf0660191bde985bba15"
       wwid "360030d906382c2065827918ddb6506da"
       #vendor   "DataCore"
       #product   "Virtual Disk"
                    }
      devices {
       device {
         vendor    "DataCore"
         product   "Virtual Disk"
         path_checker   tur
         prio     alua
         failback   10
         no_path_retry   fail
      
         dev_loss_tmo   infinity
         fast_io_fail_tmo  5
      
         rr_min_io_rq    100
         # Alternative option – See notes below
         # rr_min_io  100
      
         path_grouping_policy  group_by_prio
         # Alternative policy - See notes below
         # path_grouping_policy failover
      
          # optional - See notes below
         user_friendly_names yes
                       }
               }
      EOF
      
      systemctl reload multipathd
      
      # final post steps (Bugs, security)
      ####################################
      systemctl disable rdma.service
      touch /.autorelabel
      dracut -f 
      
      %end
      
      

      Test that you can access your file anonymously through FTP with your browser
      ftp://192.168.56.101/pub/myksfile.ks
      52
      Or via an ftp client

      $ lftp ftp://192.168.56.101
      
      lftp 192.168.56.101:~> cat /pub/myksfile.ks
      
      You can now deploy your Oracle Linux server for a new database:

      When you arrive on the installation screen,
      22

      Booting from the DVD, press ESC to get the boot prompt.
      For the demo, I'm using a VirtualBox VM with one DVD drive for the ISO file I downloaded from the Oracle site: V100082-01.iso (Oracle Linux 7.2). At the boot prompt, type:

      linux ks=ftp://192.168.56.101/pub/myksfile.ks
      

      Then press ENTER as shown in this demo:
      51

      Here, if you don't get "RTNETLINK answers: File exists", something is wrong in your network configuration.
      07

      At this step, if you see the green line, it means you have entered Anaconda and your installation process is ongoing.
      55

      If you receive some Pane errors, once again, something is wrong in the network configuration. This is the hard part. Depending on the customer infrastructure, you may need to set up the IP manually.
      Below are 2 examples: one using a static IP configuration and the other a VLAN configuration.

      static IP configuration
      
      linux ip=192.168.56.102 netmask=255.255.255.0 gateway=192.168.56.1 servername=it.dbi-services.com ks=ftp://192.168.56.101/pub/myksfile.ks net.ifnames=0
      
      static IP configuration with use of VLAN (VLANID=27 in this example)
      
      linux ip=192.168.56.102 netmask=255.255.255.128 gateway=192.168.56.1 servername=it.dbi-services.com ks=ftp://192.168.56.1/myksfile.ks net.ifnames=0 vlan=VLAN27.27:eth0
      

      Anaconda will now perform the partitioning part:
      04

      For the demo, I'm using a 40G disk. If you don't give enough space, or if you have made some errors in your configuration, you will be prompted to fix the configuration issues. It is better to restart the installation from the beginning.

      # Partitioning and bootloader
      ########################################################################
      # only 1 disk presented to the O.S during installation time
      # net.ifnames=0 to use eth name for network devices
      bootloader      --location=mbr  --append="nofb quiet splash=quiet crashkernel=auto net.ifnames=0"
      firstboot       --disable
      clearpart       --all          --initlabel
      part /boot      --fstype xfs   --ondisk=/dev/sda --size=512
      part swap       --size=2048   --ondisk=/dev/sda
      part pv.01      --size=100     --ondisk=/dev/sda --grow
      volgroup RHELVG pv.01
      logvol /        --fstype xfs   --name=RootLV   --vgname=RHELVG --size=8196
      logvol /usr     --fstype xfs   --name=UsrLV    --vgname=RHELVG --size=2048
      logvol /tmp     --fstype xfs   --name=TmpLV    --vgname=RHELVG --size=2048
      logvol /var     --fstype xfs   --name=VarLV    --vgname=RHELVG --size=4096
      logvol /var/log/audit     --fstype xfs   --name=AuditLV    --vgname=RHELVG --size=2048
      logvol /opt     --fstype xfs   --name=OptLV    --vgname=RHELVG --size=2048
      logvol /home    --fstype xfs   --name=HomeLV   --vgname=RHELVG --size=2048
      logvol /u01     --fstype xfs   --name=u01LV    --vgname=RHELVG --size=2048
      

      When the partitioning part is finished, the package installation process will begin.
      25

      You can personalize the packages you want to install from the DVD.

      # packages + RPMs
      ########################################################################
      %packages
      @base
      
      # system components
      device-mapper-multipath
      kexec-tools
      lvm2
      e4fsprogs
      sg3_utils
      lsscsi
      dstat
      ntp
      perl
      postfix
      bc
      

      During the installation, you can TAB between consoles to get more information on what's going on.
      Console 2 permits you to type shell commands:

      For the demo, I’m using 3 Ethernet cards: 2 for the bonding, 1 NAT for internet connection.
      With the ip a command, I can see which interface names and IPs I'm currently using during the installation process:
      54
      Because I set net.ifnames=0, the eth naming will be used for my network interfaces after reboot. I will configure them in the post-installation tasks.

       bootloader      --location=mbr  --append="nofb quiet splash=quiet crashkernel=auto net.ifnames=0"
      

      Switching between Console 1 / Console 3 / Console 5 permits you to see what Anaconda is doing. The interesting part is the %post message.
      It means you are in the post-installation tasks.
      21

      Configuration files of your system can be modified.
      In my demo, I will configure bonding, postfix, and multipathing, and yum install the oracle-database-server-12cR2-preinstall package with dependencies!
      21

      The script generated from the Kickstart configuration file is stored in the /tmp folder; in this demo it is called /tmp/ks-script-JeYnWI.log.
      After reboot, you can inspect it if you like.
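
      For example (the file name below is the one from this demo and will differ on your system):

      ls -l /tmp/ks-script-*
      less /tmp/ks-script-JeYnWI.log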

       

      The post Automating Oracle Linux Installation with Kickstart appeared first on Blog dbi services.

      Oracle Integration Cloud: New! The Data Mapper Activity

      Jan Kettenis - Fri, 2018-04-06 05:22
      In a previous blog I discussed a work-around for not having a Script activity in Oracle Integration Cloud's Process Builder. In this blog I will discuss another work-around which is actually not a work-around, but the real thing: the Data Mapper!

      As you can read in a previous blog about the matter, not having the equivalent of the Script activity of the on-premise BPM Suite, was an omission that we often had to find a work-around for. The one I used was the Business Rule activity. However, some weeks ago the Business Rule activity got deprecated (you could clearly see that).



      With the latest release of OIC (which may not yet be publicly available when you read this) the Business Rule activity has vanished. At the same time the Data Mapper activity has been added.



      The Data Mapper activity has no properties other than that you can put it in draft mode.


      The implementation is as simple as you might expect: there is only an Output tab on which you can map data from Data Objects, Predefined Variables and Business Parameters on one hand, to Data Objects and Predefined Variables on the other.



      Next to simple mappings, you can also create and use (reusable) transformations to map Data Objects (or attributes) of which the types don't match.


      I hope I never have to write this again, but if you used my work-around I may have gotten you into trouble if you want to export and import an application, because importing an application that contains a Business Rule activity is not supported! Sorry :-D

      Soundex

      Tom Kyte - Fri, 2018-04-06 05:06
      Good day Connor, I hope you are well. I am doing an exercise and some research on chatbots. In my algorithm I am looking at two methods for parsing or processing the words, to do the NLP and give them meaning. I know there is the function soun...
      Categories: DBA Blogs

      How to start a job on two conditions with DBMS_SCHEDULER : another job has finished and we are on monday ?

      Tom Kyte - Fri, 2018-04-06 05:06
      Hi, I have a job (JOB1) which runs every night at 0:30. I have another job (JOB2) which must be run after JOB1 completes but only once a week, on monday. For the moment, I manage this by starting JOB2 on monday at 4:00 because most of the time J...
      Categories: DBA Blogs

      Oracle EMEA Identity & Security Partner Forum 2018 #OPNCloudSecurity

                ...

      We share our skills to maximize your revenue!
      Categories: DBA Blogs

      Regenerate Oracle VM Manager repository database

      Amis Blog - Fri, 2018-04-06 02:01

      Some quick notes to regenerate a corrupted Oracle VM manager repository database.

      How did we discover the corruption?
      The MySQL repository database was increasing in size; the file "OVM_STATISTIC.ibd" was 62G. We also found the following error messages in the "AdminServer.log" logfile:

      ####<2018-02-13T07:52:17.339+0100> <Error> <com.oracle.ovm.mgr.task.ArchiveTask> <ovmm003.gemeente.local> <AdminServer> <Scheduled Tasks-12> <<anonymous>> <> <e000c2cc-e7fe-4225-949d-25d2cdf0b472-00000004> <1518504737339> <BEA-000000> <Archive task exception:
      com.oracle.odof.exception.ObjectNotFoundException: No such object (level 1), cluster is null: <9208>
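
      To check the size of that file yourself, a find by name avoids having to know where the MySQL datadir lives on your OVM Manager installation; this is just a generic sketch:

      # locate the statistics tablespace file and show its size
      find / -name OVM_STATISTIC.ibd -exec ls -lh {} \; 2>/dev/null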

      Regenerate steps
      – Stop the OVM services
      restore_db_1

      – Delete the Oracle VM Manager repository database
      restore_db_2

      "/u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovm_upgrade.sh --deletedb --dbuser=ovs --dbpass=<PASSWORD> --dbhost=localhost --dbport=49500 --dbsid=ovs"

      restore_db_3

      – Generate replacement certificate
      restore_db_4

      – Start the OVM services and generate new certificates
      restore_db_5

      – Restart OVM services
      restore_db_6

      – Repopulate the database by discovering the Oracle VM Servers
      restore_db_7.png

      – Restore simple names
      Copy the restoreSimpleName script to /tmp, see Oracle Support note: 2129616.1
      restore_db_8

      Resources
      [OVM] Issues with huge OVM_STATISTIC.ibd used as OVM_STATISTIC Table. (Doc ID 2216441.1)
      Oracle VM: How To Regenerate The OVM 3.3.x/3.4.x DB (Doc ID 2038168.1)
      Restore OVM Manager “Simple Names” After a Rebuild/Reinstall (Doc ID 2129616.1)

      The post Regenerate Oracle VM Manager repository database appeared first on AMIS Oracle and Java Blog.

      Pages

      Subscribe to Oracle FAQ aggregator