Feed aggregator

Oracle E-Business Suite 12.2.8 Now Available

Steven Chan - Mon, 2018-10-08 15:21

I am pleased to announce that Oracle E-Business Suite 12.2.8 is now available.

This update includes several significant functional innovations around customer requests and voting, Enterprise Command Centers, user experience, data privacy standards, accounting standards changes, and automation to help customers move EBS environments to Oracle Cloud Infrastructure.

For details about the 2018 innovations, see:

Cliff Godwin talks through the latest innovations and what they mean for customers in this video:

Instructions for downloading and applying this latest release update pack (RUP) for the EBS 12.2 codeline can be found here:

What Does Release 12.2.8 Include?

As a consolidated suite-wide patchset, this RUP includes new features, statutory and regulatory updates, and enhancements for stability, performance, and security.

Release 12.2.8 is cumulative. That means that as well as providing new updates for this release, it also includes updates that were originally made available as one-off patches for earlier 12.2 releases.

For a complete list of new features, refer to:

Common Questions and Answers About Upgrading

  • Q: Is there a direct upgrade path from Release 11i to Release 12.2.8?
    A: No. Release 11i customers must first upgrade to Release 12.2 before applying 12.2.8.
  • Q: Is there a direct upgrade path from EBS 12.0 to 12.2.8?
    A: No. Release 12.0 customers must first upgrade to Release 12.2 before applying 12.2.8.
  • Q: Is there a direct upgrade path from EBS 12.2 to 12.2.8?
    A: Yes. Release 12.2 customers can apply 12.2.8 directly to their environments. EBS 12.2.8 is an online patch, so it can be applied while an existing Release 12.2 system is running.
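After applying the RUP, a quick way to confirm the release level is to query the standard EBS product group table (shown here as a convenience; it assumes the usual APPS schema):

-- Confirms the EBS release level after the RUP is applied (standard APPS schema assumed).
SELECT release_name FROM apps.fnd_product_groups;

On a patched system this should return 12.2.8.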
Categories: APPS Blogs

From Oracle to Postgres with the EDB Postgres Migration Portal

Yann Neuhaus - Mon, 2018-10-08 10:28

EnterpriseDB is a valuable player in the PostgreSQL world. In addition to providing support, they also deliver very useful tools to manage your Postgres environments easily. Among these we can mention EDB Enterprise Manager, EDB Backup & Recovery Tool, EDB Failover Manager, and so on.
In this post I will present one of the latest additions to the family: the EDB Postgres Migration Portal, a helpful tool to migrate from Oracle to Postgres.

To access the Portal, use your EDB account or create one if you don't have any. By the way, with your account you can also connect to PostgresRocks, a very interesting community platform. Go take a look :).

Once connected, click on “Create project”:

Fill in the fields and click on “Create”. Currently it is only possible to migrate from Oracle 11 or 12 to EDB Postgres Advanced Server 10:

All your projects are displayed at the bottom of the page. Click on the “Assess” link to continue:

The migration steps consist of the following:

  1. Extracting the DDL metadata from the Oracle database using EDB's DDL Extractor script
  2. Running assessment
  3. Correcting conflicts
  4. Downloading and running the new DDL statements adapted to your EDB Postgres database
  5. Migrating data
1. Extracting the DDL metadata from the Oracle database

The DDL Extractor script is easy to use. You just need to specify the schema name whose DDL should be extracted and the path where the DDL file will be stored. As you can guess, the script runs Oracle's dbms_metadata.get_ddl package to extract the object definitions:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select object_type, count(*) from dba_objects where owner='HR' and status='VALID' group by object_type order by 1;

OBJECT_TYPE               COUNT(*)
----------------------- ----------
INDEX                           19
PROCEDURE                        2
SEQUENCE                         3
TABLE                            7
TRIGGER                          2

SQL>

SQL> @edb_ddl_extractor.sql
# -- EDB DDL Extractor Version 1.2 for Oracle Database -- #
# ------------------------------------------------------- #
Enter SCHEMA NAME to extract DDLs : HR
Enter PATH to store DDL file : /home/oracle/migration


Writing HR DDLs to /home/oracle/migration_gen_hr_ddls.sql
####################################################################################################################
## DDL EXTRACT FOR EDB POSTGRES MIGRATION PORTAL CREATED ON 03-10-2018 21:41:27 BY DDL EXTRACTION SCRIPT VERSION 1.2
##
## SOURCE DATABASE VERSION: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
####################################################################################################################
Extracting SYNONYMS...
Extracting DATABASE LINKS...
Extracting TYPE/TYPE BODY...
Extracting SEQUENCES...
Extracting TABLEs...
Extracting PARTITION Tables...
Extracting CACHE Tables...
Extracting CLUSTER Tables...
Extracting KEEP Tables...
Extracting INDEX ORGANIZED Tables...
Extracting COMPRESSED Tables...
Extracting NESTED Tables...
Extracting EXTERNAL Tables..
Extracting INDEXES...
Extracting CONSTRAINTS...
Extracting VIEWs..
Extracting MATERIALIZED VIEWs...
Extracting TRIGGERs..
Extracting FUNCTIONS...
Extracting PROCEDURE...
Extracting PACKAGE/PACKAGE BODY...


DDLs for Schema HR have been stored in /home/oracle/migration_gen_hr_ddls.sql
Upload this file to the EDB Migration Portal to assess this schema for EDB Advanced Server Compatibility.


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
oracle@vmrefdba01:/home/oracle/migration/ [DB1]
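
Under the hood, the extractor wraps calls to dbms_metadata; here is a minimal sketch of the kind of statement it issues (illustrative only - the real script also sets session transforms and loops over all the object types listed above):

-- Illustrative only: roughly what the DDL Extractor does for the TABLE object type.
set long 100000
set pagesize 0

SELECT dbms_metadata.get_ddl('TABLE', table_name, 'HR')
FROM   dba_tables
WHERE  owner = 'HR';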

2. Assessment

Go back to your browser. It's time to check whether the Oracle schema can be imported to Postgres. Upload the output file and click on “Run assessment” to start the check.
The result is presented as follows:

3. Correcting conflicts

We can notice an issue in the report above… the bfile type is not supported by EDB PPAS. You can click on the concerned table to get more details about the issue. Tip: when you want to manage bfile columns in Postgres, you can use the external_file extension.
Of course several other conversion issues can happen. A very good point with the Portal is that it provides a knowledge base to solve conflicts. You will find all the necessary information and workarounds by navigating to the “Repair handler” and “Knowledge base” tabs. Moreover, you can make the corrections directly from the Portal.
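
As an illustration of the kind of repair involved (a hypothetical example, not taken from the Portal's knowledge base): a bfile column can be replaced by a plain text column holding the file path, which the application - or the external_file extension - then resolves.

-- Hypothetical Oracle source table that would flag a conflict:
--   CREATE TABLE hr.doc_store (doc_id NUMBER PRIMARY KEY, doc BFILE);
-- One possible manual repair for EDB Postgres: keep the file reference as text.
CREATE TABLE hr.doc_store (
    doc_id   numeric PRIMARY KEY,
    doc_path text
);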

4. Creating the objects in the Postgres database

Once you have corrected the conflicts and the assessment report shows a 100% success ratio, click on the top-right “Export DDL” button to download the new creation script adapted for EDB Postgres.
Then connect to your Postgres instance and run the script:
postgres=# \i Demo_HR.sql
CREATE SCHEMA
SET
CREATE SEQUENCE
CREATE SEQUENCE
CREATE SEQUENCE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
CREATE PROCEDURE
CREATE PROCEDURE
CREATE TRIGGER
CREATE TRIGGER
postgres=#

Quick check:
postgres=# select object_type, count(*) from dba_objects where schema_name='HR' and status='VALID' group by object_type order by 1;
 object_type | count
-------------+-------
 INDEX       |    19
 PROCEDURE   |     2
 SEQUENCE    |     3
 TABLE       |     7
 TRIGGER     |     2
(5 rows)

Sounds good! All objects have been created successfully.

5. Migrating data

The Migration Portal doesn’t provide an embedded solution to import the data, so for that you can use the EDB Migration Toolkit (MTK).
Let’s see how it works…
You will find MTK in the edbmtk directory of {PPAS_HOME}. Inside etc, the toolkit.properties file stores the connection parameters for the source and target databases:
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb] cat toolkit.properties
SRC_DB_URL=jdbc:oracle:thin:@192.168.22.101:1521:DB1
SRC_DB_USER=system
SRC_DB_PASSWORD=manager

TARGET_DB_URL=jdbc:edb://localhost:5444/postgres
TARGET_DB_USER=postgres
TARGET_DB_PASSWORD=admin123
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/etc/ [PG10edb]

MTK uses JDBC to connect to the Oracle database. You need to download the Oracle JDBC driver (ojdbc7.jar) and store it in the following location:
postgres@ppas01:/home/postgres/ [PG10edb] ll /etc/alternatives/jre/lib/ext/
total 11424
-rw-r--r--. 1 root root 4003800 Oct 20 2017 cldrdata.jar
-rw-r--r--. 1 root root 9445 Oct 20 2017 dnsns.jar
-rw-r--r--. 1 root root 48733 Oct 20 2017 jaccess.jar
-rw-r--r--. 1 root root 1204766 Oct 20 2017 localedata.jar
-rw-r--r--. 1 root root 617 Oct 20 2017 meta-index
-rw-r--r--. 1 root root 2032243 Oct 20 2017 nashorn.jar
-rw-r--r--. 1 root root 3699265 Jun 17 2016 ojdbc7.jar
-rw-r--r--. 1 root root 30711 Oct 20 2017 sunec.jar
-rw-r--r--. 1 root root 293981 Oct 20 2017 sunjce_provider.jar
-rw-r--r--. 1 root root 267326 Oct 20 2017 sunpkcs11.jar
-rw-r--r--. 1 root root 77962 Oct 20 2017 zipfs.jar
postgres@ppas01:/home/postgres/ [PG10edb]

As the HR objects already exist, let’s start the data migration with the -dataOnly option:
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb] ./runMTK.sh -dataOnly -truncLoad -logBadSQL HR
Running EnterpriseDB Migration Toolkit (Build 51.0.1) ...
Source database connectivity info...
conn =jdbc:oracle:thin:@192.168.22.101:1521:DB1
user =system
password=******
Target database connectivity info...
conn =jdbc:edb://localhost:5444/postgres
user =postgres
password=******
Connecting with source Oracle database server...
Connected to Oracle, version 'Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options'
Connecting with target EDB Postgres database server...
Connected to EnterpriseDB, version '10.5.12'
Importing redwood schema HR...
Loading Table Data in 8 MB batches...
Disabling FK constraints & triggers on hr.countries before truncate...
Truncating table COUNTRIES before data load...
Disabling indexes on hr.countries before data load...
Loading Table: COUNTRIES ...
[COUNTRIES] Migrated 25 rows.
[COUNTRIES] Table Data Load Summary: Total Time(s): 0.054 Total Rows: 25
Disabling FK constraints & triggers on hr.departments before truncate...
Truncating table DEPARTMENTS before data load...
Disabling indexes on hr.departments before data load...
Loading Table: DEPARTMENTS ...
[DEPARTMENTS] Migrated 27 rows.
[DEPARTMENTS] Table Data Load Summary: Total Time(s): 0.046 Total Rows: 27
Disabling FK constraints & triggers on hr.employees before truncate...
Truncating table EMPLOYEES before data load...
Disabling indexes on hr.employees before data load...
Loading Table: EMPLOYEES ...
[EMPLOYEES] Migrated 107 rows.
[EMPLOYEES] Table Data Load Summary: Total Time(s): 0.168 Total Rows: 107 Total Size(MB): 0.0087890625
Disabling FK constraints & triggers on hr.jobs before truncate...
Truncating table JOBS before data load...
Disabling indexes on hr.jobs before data load...
Loading Table: JOBS ...
[JOBS] Migrated 19 rows.
[JOBS] Table Data Load Summary: Total Time(s): 0.01 Total Rows: 19
Disabling FK constraints & triggers on hr.job_history before truncate...
Truncating table JOB_HISTORY before data load...
Disabling indexes on hr.job_history before data load...
Loading Table: JOB_HISTORY ...
[JOB_HISTORY] Migrated 10 rows.
[JOB_HISTORY] Table Data Load Summary: Total Time(s): 0.035 Total Rows: 10
Disabling FK constraints & triggers on hr.locations before truncate...
Truncating table LOCATIONS before data load...
Disabling indexes on hr.locations before data load...
Loading Table: LOCATIONS ...
[LOCATIONS] Migrated 23 rows.
[LOCATIONS] Table Data Load Summary: Total Time(s): 0.053 Total Rows: 23 Total Size(MB): 9.765625E-4
Disabling FK constraints & triggers on hr.regions before truncate...
Truncating table REGIONS before data load...
Disabling indexes on hr.regions before data load...
Loading Table: REGIONS ...
[REGIONS] Migrated 4 rows.
[REGIONS] Table Data Load Summary: Total Time(s): 0.025 Total Rows: 4
Enabling FK constraints & triggers on hr.countries...
Enabling indexes on hr.countries after data load...
Enabling FK constraints & triggers on hr.departments...
Enabling indexes on hr.departments after data load...
Enabling FK constraints & triggers on hr.employees...
Enabling indexes on hr.employees after data load...
Enabling FK constraints & triggers on hr.jobs...
Enabling indexes on hr.jobs after data load...
Enabling FK constraints & triggers on hr.job_history...
Enabling indexes on hr.job_history after data load...
Enabling FK constraints & triggers on hr.locations...
Enabling indexes on hr.locations after data load...
Enabling FK constraints & triggers on hr.regions...
Enabling indexes on hr.regions after data load...
Data Load Summary: Total Time (sec): 0.785 Total Rows: 215 Total Size(MB): 0.01

Schema HR imported successfully.
Migration process completed successfully.

Migration logs have been saved to /home/postgres/.enterprisedb/migration-toolkit/logs

******************** Migration Summary ********************
Tables: 7 out of 7

Total objects: 7
Successful count: 7
Failed count: 0
Invalid count: 0

*************************************************************
postgres@ppas01:/u01/app/postgres/product/10edb/edbmtk/bin/ [PG10edb]

Quick check:
postgres=# select * from hr.regions;
 region_id | region_name
-----------+------------------------
         1 | Europe
         2 | Americas
         3 | Asia
         4 | Middle East and Africa
(4 rows)

Conclusion

Easy, isn’t it?
Once again, EnterpriseDB delivers a very practical, user-friendly and quick-to-learn tool. In my demo the HR schema is pretty simple; migrating a more complex schema can be more challenging. Currently only migrations from Oracle are available, but SQL Server and other legacy databases should be supported in future versions. In the meantime, you can use the EDB Migration Toolkit for those.

That’s it. Have fun and… be ready to say goodbye to Oracle :-)

 

The article From Oracle to Postgres with the EDB Postgres Migration Portal appeared first on Blog dbi services.

Random Upgrade

Jonathan Lewis - Mon, 2018-10-08 07:36

Here’s a problem that (probably) won’t affect the day to day running of most systems – but it could be a pain in the backside for people who write programs to generate repeatable test data. I’m not going to say much about the problem, just leave you with a test script.


rem
rem	Script	random_upgrade.sql
rem	Author:	Jonathan Lewis
rem	Dated:	Oct 2018
rem
rem	Last tested
rem		18.3.0.0
rem		12.2.0.1
rem	Notes
rem	In the upgrade from 12.2.0.1 something
rem	changed that meant
rem		create as select dbms_random
rem	gets different data from
rem		select dbms_random
rem

drop table t4 purge;
drop table t3 purge;
drop table t2 purge;
drop table t1 purge;
drop table t0 purge;

set feedback off

create table t0 as
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
;


execute dbms_random.seed(0);

create table t1
as
select dbms_random.normal
from
	t0
;

execute dbms_random.seed(0);

create table t2
as
with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	dbms_random.normal
from
	g1
;

prompt	=================
prompt	Diff the two CTAS
prompt	=================

select count(*)
from (
select * from t1
minus
select * from t2
union all
select * from t2
minus
select * from t1
)
;


create table t3 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

create table t4 
as 
select * from t2 
where rownum < 1 -- > comment to avoid WordPress format issue
;

execute dbms_random.seed(0)

insert into t3
select dbms_random.normal
from
	t0
;

execute dbms_random.seed(0)

insert into t4
with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select
	dbms_random.normal
from
	g1
;


prompt	===================
prompt	Diff the two Insert
prompt	===================

select count(*)
from (
select * from t3
minus
select * from t4
union all
select * from t4
minus
select * from t3
)
;


prompt	===========
prompt	Sum of CTAS
prompt	===========

select sum(normal) from t1;

prompt	=============
prompt	Sum of Insert
prompt	=============

select sum(normal) from t3;


execute dbms_random.seed(0)

prompt	=============
prompt	Sum of select
prompt	=============

with g1 as (
	select rownum id
	from dual
	connect by
		level <= 1e4 -- > comment to avoid WordPress format issue
)
select sum(n) from (
select
	dbms_random.normal n
from
	g1
)
;


I’m repeatedly using dbms_random.seed(0) to reset the random number generator and trying to generate 10,000 normally distributed numbers. (I’ve chosen the normal distribution because that happened to be the function in a script I sent someone with the comment that “this will recreate the data for the demonstration” – and they wrote back to say that it didn’t.)

I’ve got two “create as select”, and two “insert as select”. One of each pair selects from a real existing table to get 10,000 rows, the other uses the “select dual connect by” trick to generate rows. I’ve written SQL that shows whether or not the two pairs of tables end up with the same data (they do, pairwise), then I’ve summed one table from each pair to see if the different mechanisms produce the same data – and that depends on the version of Oracle you’re using. Finally I’ve reset the random number generator and summed across a pure select to see what that produces.

If you run this code on 12.2.0.1 or earlier you’ll see that the “diffs” report zeros and the “sums” report -160.39249. If you upgrade to 18.3 the diffs will still report zeros and some of the sums will still report -160.39249 but the sum of the CTAS will report -91.352172.

Bottom Line

If you’ve got code that you wrote to create reproducible test cases and the code uses: “create table … as select … dbms_random …” then it won’t produce the same data when you upgrade to 18.3. You’ll have to modify the code to do “create table (); insert as select …”.
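
For example, the t1 CTAS from the script above would be rewritten like this (a sketch reusing the script's own tables and seed):

rem	Workaround sketch: pre-create the table, then populate it
rem	with a separate insert as select.

create table t1 (normal number);

execute dbms_random.seed(0)

insert into t1
select dbms_random.normal
from	t0
;
commit;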

As of this afternoon I have 1,209 test scripts on my laptop that use the dbms_random package to model data distribution patterns. It is almost certain that I will end up modifying every single one of them eventually.

There are words to express how I feel about this – but not ones that I would consider publishing.

(EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache

Syed Jaffar - Mon, 2018-10-08 07:19
While reviewing the latest Exachk report on an X5-2 machine, the following critical alarm was observed: (EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache. The alarm details point to MOS Note 1270094.1 (Exadata Critical Issues) for background.

According to MOS Doc 2356460.1, the said behavior is due to a bug (27372426) which applies to Exadata versions 12.2.1.1.0 through 12.2.1.1.5 and 18.1.0.0.0 through 18.1.3.0.0.

Impact:

If you are running GI 11.2.0.4 or 12.1 on one of the Exadata versions above, with the Flash Cache configured in WriteBack mode, the following ORA errors may be encountered during ASM rebalancing operations, disk group mounts, and disk group consistency checks (review the ASM alert.log):

ORA-00600: internal error code, arguments: [kfdAuDealloc2]

WARNING: cache read a corrupt block: group=1(DATA) fn=381 indblk=27 disk=110 (DATA_CD_04_DM01CEL01)
ORA-15196: invalid ASM block header [kfc.c:26411] [endian_kfbh]

ORA-00600: internal error code, arguments: [kfrValAcd30]

ORA-00600: internal error code, arguments: [kfdAuPivotVec2], [kfCheckDG]

ERROR: file +DATADG1.3341.962251267: F3341 PX38530 => D55 A765853 => F1677
PX1647463: fnum mismatch
ERROR: file +DATADG1.3341.962251267: F3341 PX38531 => D15 A205431 => F3341
PX56068: xnum mismatch



Workaround:
To fix the bug, the following action plan needs to be applied:

1) Update the storage server to >=12.2.1.1.6 or >=18.1.4.0.0
2) Apply patch 27510959 and scan ASM metadata
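
Scanning the ASM metadata can then be done with a disk group check from the ASM instance - a minimal sketch (DATA is an example disk group name; run as SYSASM):

-- Reports ASM metadata inconsistencies in the ASM alert.log without repairing them.
ALTER DISKGROUP DATA CHECK NOREPAIR;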


Note:

The issue doesn't impact GI 12.2, or Exadata software versions higher than those mentioned in the bug.
The bug also doesn't apply if the Flash Cache mode is WriteThrough.

References:

Exadata Critical Issues (Doc ID 1270094.1)


(EX42) Flash disk failure may lead to ASM metadata corruption when using write-back flash cache (Doc ID 2356460.1)

Deploy a MySQL Server in Docker containers

Yann Neuhaus - Mon, 2018-10-08 06:26

We hear about Docker every day. Working on MySQL Server, I was curious to test this platform, which makes it possible to create OS-independent containers for deploying virtualized applications.
So let's try to deploy a MySQL Server with Docker!


Here is the architecture we will put in place:
MySQL on Docker
So we will run a Docker container for MySQL within a VM.

I’m working on a CentOS 7 installed on a VirtualBox Machine:

[root@node4 ~]# cat /etc/*release*
CentOS Linux release 7.5.1804 (Core)
Derived from Red Hat Enterprise Linux 7.5 (Source)

I install Docker on my VM and enable the Docker service:

[root@node4 ~]# yum install docker
[root@node4 ~]# systemctl enable docker.service

I start the Docker service:

[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com
[root@node4 ~]# systemctl stop docker.service
[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: http://docs.docker.com
[root@node4 ~]# systemctl start docker.service
[root@node4 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-10-05 16:42:33 CEST; 2s ago
     Docs: http://docs.docker.com
 Main PID: 1514 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─1514 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt nati...
           └─1518 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2...
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.561072533+02:00" level=warning msg="Docker could not enable SELinux on the...t system"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.597927636+02:00" level=info msg="Graph migration to content-addressability... seconds"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.598407196+02:00" level=info msg="Loading containers: start."
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.642465451+02:00" level=info msg="Firewalld running: false"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.710685631+02:00" level=info msg="Default bridge (docker0) is assigned with... address"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.762876995+02:00" level=info msg="Loading containers: done."
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.780275247+02:00" level=info msg="Daemon has completed initialization"
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.780294728+02:00" level=info msg="Docker daemon" commit="8633870/1.13.1" gr...on=1.13.1
Oct 05 16:42:33 node4 systemd[1]: Started Docker Application Container Engine.
Oct 05 16:42:33 node4 dockerd-current[1514]: time="2018-10-05T16:42:33.799371435+02:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.

I check my network:

[root@node4 ~]# ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:f3:9e:fa brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic enp0s3
       valid_lft 85959sec preferred_lft 85959sec
    inet6 fe80::a00:27ff:fef3:9efa/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:45:62:a7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.204/24 brd 192.168.56.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe45:62a7/64 scope link
       valid_lft forever preferred_lft forever
4: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b0:bf:02:d6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
[root@node4 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
b32241ce8931        bridge              bridge              local
9dd4a24a4e61        host                host                local
f1490ec17c17        none                null                local

So I have a network bridge named docker0 to which an IP address is assigned.

To obtain some information about the system, I can run the following command:

[root@node4 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 1.13.1
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: false
 Native Overlay Diff: true
Logging Driver: journald
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: docker-runc runc
Default Runtime: docker-runc
Init Binary: /usr/libexec/docker/docker-init-current
containerd version:  (expected: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1)
runc version: 5eda6f6fd0c2884c2c8e78a6e7119e8d0ecedb77 (expected: 9df8b306d01f59d3a8029be411de015b7304dd8f)
init version: fec3683b971d9c3ef73f284f176672c44b448662 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
 seccomp
  WARNING: You're not using the default seccomp profile
  Profile: /etc/docker/seccomp.json
Kernel Version: 3.10.0-862.3.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
Number of Docker Hooks: 3
CPUs: 1
Total Memory: 867.7 MiB
Name: node4
ID: 6FFJ:Z33K:PYG3:2N4B:MZDO:7OUF:R6HW:ES3D:H7EK:MFLA:CAJ3:GF67
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Registries: docker.io (secure)

For the moment I have no containers:

[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

Now I can search the Docker Hub for MySQL images, and I pull the first one in my example (I normally choose an official image with the highest number of stars):

[root@node4 ~]# docker search --filter "is-official=true" --filter "stars=3" mysql
INDEX       NAME                DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
docker.io   docker.io/mysql     MySQL is a widely used, open-source relati...   7075      [OK]
docker.io   docker.io/mariadb   MariaDB is a community-developed fork of M...   2267      [OK]
docker.io   docker.io/percona   Percona Server is a fork of the MySQL rela...   376       [OK]
[root@node4 ~]# docker pull docker.io/mysql
Using default tag: latest
Trying to pull repository docker.io/library/mysql ...
latest: Pulling from docker.io/library/mysql
802b00ed6f79: Pull complete
30f19a05b898: Pull complete
3e43303be5e9: Pull complete
94b281824ae2: Pull complete
51eb397095b1: Pull complete
54567da6fdf0: Pull complete
bc57ddb85cce: Pull complete
d6cd3c7302aa: Pull complete
d8263dad8dbb: Pull complete
780f2f86056d: Pull complete
8e0761cb58cd: Pull complete
7588cfc269e5: Pull complete
Digest: sha256:038f5f6ea8c8f63cfce1bce9c057ab3691cad867e18da8ad4ba6c90874d0537a
Status: Downloaded newer image for docker.io/mysql:latest

I create my container for MySQL named mysqld1:

[root@node4 ~]# docker run -d --name mysqld1 docker.io/mysql
b058fba64c7e585caddfc75f5d96076edb3e80b31773f135d9e44a3487724914

But if I list it, I see that I have a problem: it has exited with an error:

[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
b058fba64c7e        docker.io/mysql     "docker-entrypoint..."   55 seconds ago      Exited (1) 54 seconds ago                       mysqld1
[root@node4 ~]# docker logs mysqld1
error: database is uninitialized and password option is not specified
  You need to specify one of MYSQL_ROOT_PASSWORD, MYSQL_ALLOW_EMPTY_PASSWORD and MYSQL_RANDOM_ROOT_PASSWORD

That means I forgot to assign a password for the ‘root’ user account of MySQL Server. So I stop and then remove the container, and create it again with some additional options:

[root@node4 ~]# docker stop b058fba64c7e
b058fba64c7e
[root@node4 ~]# docker rm b058fba64c7e
b058fba64c7e
[root@node4 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@node4 ~]# docker run --name mysqld1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=manager -d docker.io/mysql
46a2020f58740d5a87288073ab6292447fe600f961428307d2e2727454655504

Now my container is up and running:

[root@node4 ~]#  docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                               NAMES
46a2020f5874        docker.io/mysql     "docker-entrypoint..."   5 seconds ago       Up 5 seconds        0.0.0.0:3306->3306/tcp, 33060/tcp   mysqld1

I can execute a bash shell on the container in an interactive mode to open a session on it:

[root@node4 ~]# docker exec -it mysqld1 bash
root@46a2020f5874:/#

And try to connect to MySQL Server:

root@46a2020f5874:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.12 MySQL Community Server - GPL
Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.02 sec)

Great news, everything works well! In a few minutes I have a MySQL Server at its latest version up and running in a Docker container.
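
As a quick smoke test from that same session (the database and table names are made up for the demo):

-- Illustrative only: create a throwaway database and table, then query it.
CREATE DATABASE demo;
USE demo;
CREATE TABLE t1 (id INT PRIMARY KEY, label VARCHAR(50));
INSERT INTO t1 VALUES (1, 'hello from Docker');
SELECT * FROM t1;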

 

The article Deploy a MySQL Server in Docker containers appeared first on Blog dbi services.

Generate Link Dynamically

Tom Kyte - Sun, 2018-10-07 23:26
I have to call an Oracle Report (rdf/rep) from APEX. I have some fields like DATE_FROM and DATE_TO in an APEX form. I want to generate the URL dynamically, based on the data from the date_from and date_to fields, by pressing a button. I have used window.open('') in dynami...
Categories: DBA Blogs

wait event - PGA memory operation

Tom Kyte - Sun, 2018-10-07 23:26
Trans 1) One temporary table holding approx. 45,000 rows is filled using a cursor. Trans 2) That temporary table is then updated with a query which is also called using a cursor. So when Trans 1 is called, the following wait event fires: "PGA memory operation...
Categories: DBA Blogs

Tablespace sizing for datawarehouse

Tom Kyte - Sun, 2018-10-07 23:26
Hello Team, my question is a bit similar to https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:228413960506 but I will need further clarification. I am loading data on monthly partitioned tables from flat files using sqlloader ev...
Categories: DBA Blogs

Locally partitioned index rebuild issues

Tom Kyte - Sun, 2018-10-07 23:26
Hi, I have a huge partitioned table with 1 billion rows. For some reason we dropped the index and were re-creating it when, due to a support issue, we found out that we had duplicate rows. So we created the index in disabled mode and then ...
Categories: DBA Blogs

Insert data in a new table

Tom Kyte - Sun, 2018-10-07 23:26
I have a question table having 1 record. create table q_text_t (q_id number,q_text varchar2(100)); insert into q_text_t values(1,'What is the capital of India?'); I have another answer table having 4 answers to the corresponding question. creat...
Categories: DBA Blogs

Minimum of a Date column

Tom Kyte - Sun, 2018-10-07 05:06
Hello Tom, I am working on a current project wherein the requirement to calculate a certain column in a certain table is as under. The base table is this: <code> create table main_data (from_value varchar2(10), to_value varchar2(10), act...
Categories: DBA Blogs

Compound Trigger and Global Variables

Tom Kyte - Sun, 2018-10-07 05:06
Hi! To avoid a mutating table error, I'm using a compound trigger, filling an array, and then I intend to loop through this array and do my thing. The problem is that the global variable loses its contents when I enter the "after statement" if (and only if)...
Categories: DBA Blogs

Oracle Offline Persistence Toolkit - Submitting Client Changes

Andrejus Baranovski - Sat, 2018-10-06 20:30
One of the key topics related to the Oracle Offline Persistence Toolkit is submitting client changes to the backend when a data conflict exists. If data was updated on the backend while the client was offline and the client wants to submit his changes, we inform him about the conflict and ask what he really wants to do. If the client chooses to submit his changes, we should push them to the backend with the latest change indicator.

There is a special case: the client updates the same data multiple times while offline. During online sync we need to make sure the change indicator is retrieved in the after-sync listener and applied in the before-sync listener, so that subsequent requests execute correctly. Check my previous post about the before-request sync listener - Oracle Offline Persistence Toolkit - Before Request Sync Listener.

Example - let's update a record and submit the change to the backend:


Assume another user is offline and updates the same record:


The user updates the same record again before going online. Now there are two requests in the sync queue:


Once the user goes online, sync is executed and we get a conflict for the first request (the same row was already updated by another user). At this moment the after-sync listener gets information about the conflict and caches the latest change indicator value returned from the backend. If the user decides to apply his changes, the failed request is removed, a new request is constructed with the latest change indicator value received from the backend, and this request is inserted into the sync queue:


If the same record was updated multiple times, the second request will fail too, because it hasn't yet been updated with the latest change indicator:


Assuming the user decides to apply the changes from the second request too, we update that request with the latest change indicator and submit it for sync. In the after-sync listener, the change indicator value stored in the local cache is updated.

Successful sync with change indicator = 296:


The new change indicator value is retrieved in the after-sync listener and applied in the before-sync listener for the second request, which updates the same data row:


Here is the code which allows the user to apply his changes to the backend. We remove the failed request, update it, and create a new request in the sync queue, resuming the sync process:


Download sample code for the described use case from my GitHub repository.

Java: Slow java with server.policy enabled - how to fix this issue

Dietrich Schroff - Sat, 2018-10-06 14:35
If you use the Java security manager for hardening your Java processes, you have to add the following JVM options:
-Djava.security.manager
-Djava.security.policy=server.policy
Create a server.policy file (you can use jdkXXX/jre/lib/security/java.policy as a template) and add the following line:
permission java.net.SocketPermission "localhost:*", "listen, accept, connect, resolve";
Now create a small Java program which listens on a port (like this example).

If you send a message with netcat:
nc -u localhost 9876
everything is fine.
Now send a message from a remote host. This does not work - as expected.

Try it again with the following network trace running (capturing all DNS packets):
tcpdump -i any port 53
Cool. For each connect a DNS lookup is done.
This could be a problem for high-performance systems or for systems which have no running/reachable DNS servers. In the latter case all requests will be sent to localhost:53 and of course localhost will not give any answer. (Not quite true - there will be an "ICMP port unreachable", but no DNS answer.)
If you now add a line with "*:*" to allow the connection, the server.policy file contains the following lines:
permission java.net.SocketPermission "*:*", "listen, accept, connect, resolve";
permission java.net.SocketPermission "localhost:*", "listen, accept, connect, resolve";
Hmmm. The connection is allowed, but there are still DNS requests happening. The problem is that "*:*" comes after "localhost:*" - Java reads this file from bottom to top - so if you write it this way:
permission java.net.SocketPermission "localhost:*", "listen, accept, connect, resolve";
permission java.net.SocketPermission "*:*", "listen, accept, connect, resolve";
there are no DNS requests happening anymore.

If you still see DNS requests, take a look at this file:
YourJDK/jre/lib/security/java.policy
It contains some entries with java.net.SocketPermission, like:
permission java.net.SocketPermission "localhost:0", "listen";
Because Java checks this file first, you have to remove such lines to get rid of the DNS requests.

If you do not need DNS at all, you can remove dns from the hosts line in /etc/nsswitch.conf. But then no domain lookup will succeed on this machine anymore...

Inserting with WITH FUNCTION Select is giving error

Tom Kyte - Sat, 2018-10-06 10:46
Hi Ask Tom Team, The select query below works fine for me, but I am not able to insert the result using an insert statement. Please give me some suggestions. <code>WITH FUNCTION T11 (P_A1 VARCHAR2) RETURN NUMBER IS BEGIN IF P_A1 = ...
Categories: DBA Blogs

Is the Oracle regular expression supporting this character extraction?

Tom Kyte - Sat, 2018-10-06 10:46
Hi Oracle SQL experts, I am using Oracle regular expressions to deal with some characters. My db is 12c. I have a string on every line, with input like this: 1234adhefd#123 345bheufs15# ... the output will be from the first alpha lette...
Categories: DBA Blogs

SGA Management

Tom Kyte - Sat, 2018-10-06 10:46
Hi Tom, I would like to know how Oracle internally manages things when an end user tries to extract more data than fits in the SGA. For example, if our SGA is 7 GB and the user query touches about 20 GB of data, how is that internally managed? As far as I know, ser...
Categories: DBA Blogs

Moving datafile

Tom Kyte - Sat, 2018-10-06 10:46
Hi there, in our production database we have a tablespace called TESTDB, and this tablespace has 2 datafiles. The locations of these 2 datafiles are D:\ORADATA\TESTDB\TESTDB01.DBF and D:\ORADATA\TESTDB\TESTDB02.DBF. Recently I added a new datafi...
Categories: DBA Blogs

How to move contents data from one datafile to another?

Tom Kyte - Sat, 2018-10-06 10:46
Hi Tom, thanks for your asktom website. A tablespace consists of several data files. My purpose is to cleanly move the contents of one data file to another and then drop the empty file from the tablespace, so I can reduce the data file nu...
Categories: DBA Blogs

Lock wait timeout

Tom Kyte - Fri, 2018-10-05 16:26
How do I set a lock wait timeout in Oracle? We are executing insert/update/delete from Java applications. Sometimes, due to long-running transactions or slowness, a lock acquired by one transaction on a particular row gets hit by another transaction and blocki...
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator