Feed aggregator

Create/Replace procedure command

Tom Kyte - Mon, 2018-12-03 10:26
Hi Tom, Are there any issues with issuing a "create or replace procedure" command when the application that uses the database is up and running in a production environment ? What happens when you fire this script when this same procedure i...
Categories: DBA Blogs

Oooooops or how to undelete a file on an ext4 filesystem

Yann Neuhaus - Mon, 2018-12-03 07:06

It happens in the blink of an eye.
A delete command was executed, and half a second after you hit the enter key you knew it: that was a mistake.
This is the scenario which led to this blog entry, in which I show you how you can get your files back if you are lucky…

Short summary for the desperate

If you land here you are probably in the same situation I was in, so here is a short summary.
extundelete did not work for me, but ext4magic did, and I had to compile it from the sources.

  • Remount the filesystem read-only or umount it as soon as possible after the incident
  • Back up the filesystem journal (it contains the old inode information), you will need it for the restore
    • debugfs -R "dump <8> /tmp/VMSSD01.journal" /dev/mapper/VMSSD01-VMSSD01
  • Check at which time your files were still there
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
  • List the files as of that point in time
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
  • Restore the file to a different disc/mountpoint
    • ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /tmp/recover
  • Be happy and promise never to do it again
And now the whole story

So it happened that I deleted two VM images by accident. I was cleaning up my environment and there were two files, centos75_base_clone-1.qcow2 and centos75_base_clone-2.qcow2. As you can see, I was using a clean and descriptive naming convention which makes it immediately obvious that these are the OS image files for my “nomachine” and my “networkmaster” machines… Especially the second one, with my dhcp, dns, nfs and iscsi configuration, would take some time to configure again.
At first nothing seemed to be wrong, all VMs were running normally, until I tried to restart one of them and went from :cool: to :shock: and in the end to :oops:

I could remember that it was very important to unmount the filesystem as quickly as possible and to stop changing anything on it:
umount /VMSSD01
So a solution had to be found. A short Google search brought me to a tool with the promising name “extundelete”, which can be found in the CentOS repository, at version 0.2.4 from 2012….
So a yum install -y extundelete and a man extundelete later I tried the command
extundelete --restore-all --after $(date -d "-2 hours" +%s) /dev/mapper/VMSSD01-VMSSD01
And… it did not work.
A cryptic core dump and no solution on Google, so I went from :shock: to :cry: .
[Screenshot: extundelete core dump]
But it was not the time to give up. With the courage of the desperate I searched around and found the tool ext4magic. Magic never sounded better than in this moment. The tool is newer than extundelete, even though it builds on it. So I downloaded and compiled the newest version 0.3.2 (from 2014). Before you can compile the source you need some dependencies:

yum install -y libblkid \
libblkid-devel \
zerofree e2fsp* \
zlib-devel \
libbz2-devel \
bzip2-devel \
file-devel

and to add some more “Magic” you also need yum install -y perl-File-LibMagic

A short ./configure && make later I had a binary, and to say it with Star Wars: “A New Hope” started growing in me.

I listed all the files deleted in the last 3 hours, and there they were. At least I thought these had to be my image files:
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -H -a $(date -d "-3hours" +%s)
[Screenshot: ext4magic listing the deleted inodes]

I listed the contents at the different timestamps and found at least one of my files. The timestamp 1542797503 showed some files, so I tried to list all files from an earlier timestamp, and one of my missing image files showed up.
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -l
[Screenshot: ext4magic listing the file to restore]
My mood started getting better and better and switched from :cry: to :???:.
I tried to restore my file
./src/ext4magic /dev/mapper/VMSSD01-VMSSD01 -a 1542796423 -f / -j /tmp/VMSSD01.journal -r -d /VMSSD02/recovery/
[Screenshot: ext4magic restore in progress]
My first file was back :grin: . But the tool did not stop: it recovered more and more files, and my hope was growing that I would get both files back. The first file was back with its original name. For the second one it was not that clear what had happened. The tool was still running, recovering file after file after file and putting everything into the MAGIC-2 subdirectories.

I canceled the recovery job and gave it a shot with the recovered files.
[Screenshot: the recovered *.unknown files]
After renaming the *.unknown files I tried to boot up the VM. To my surprise the first try was successful and all my VMs were back online.

Summary
  • Do not delete your files (obviously).
  • Use a clear naming convention for all your files.
  • An lsof before deleting a supposedly unused file is always a good idea.
  • ext4magic worked for me and did as promised. My files are back, the VMs are up and running again. I am happy and :cool: .


[Blog] Oracle Cloud Database Certification (1Z0-160) Day 1 Session Review & Q/A’s

Online Apps DBA - Mon, 2018-12-03 06:38

Want to start your preparations for Oracle Cloud Database Certification (1Z0-160)? If yes, then visit: https://k21academy.com/1z016014 and go through our Blog that covers: ✔Cloud Service Model: SaaS, PaaS & IaaS ✔Cloud Deployment Model: Private, Public, Hybrid ✔What is user OPC (Oracle Public Cloud) & OCPU’s and much more…. Want to start your preparations for Oracle […]

The post [Blog] Oracle Cloud Database Certification (1Z0-160) Day 1 Session Review & Q/A’s appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

OEM 13c Repository view gc$metric_values_daily

Yann Neuhaus - Mon, 2018-12-03 03:50

Oracle OEM 13c offers the possibility to generate reports about Oracle Database Tablespace Monthly Space Usage. The report is quite useful and displays a lot of information:

[Screenshots: Tablespace Monthly Space Usage report]

The report displays the details for each month and for each tablespace, but the GUI is sometimes painful: you have a nice report but many tablespaces and many numbers :=(

At a client’s site I had the following situation: a production database had its data files distributed across three file systems, /u01, /u02 and /u03, and I wanted to know how the space available for the data files evolved on each file system.

My first idea was to write SQL queries against the tablespace history view dba_hist_tbspc_space_usage, but this view does not contain any information about the file systems, and as a consequence it is not possible to know how much space each file system will need in the future.

OEM 13c offers the capability of viewing graphs for most of the metrics within OEM, but forecasting and trending capabilities are not present, and most metric categories are set up for real-time alerting, not for historical trending.

And I did not find any views like dba_hist_datafiles_space_usage :=)

So I checked in EM13c to verify that the file system space usage is correctly monitored. Naturally each file system is monitored and triggers a warning when the available space is below 20%, or a critical alert when it is below 5%.

As a consequence I had the idea to query the OEM 13c repository views, and this turned out to be the right solution.

First we can query mgmt$alert_current to display the alerts generated for the metric_name Filesystems on targets of type host:

SQL> select target_name , target_type, metric_name, metric_column, column_label,alert_state, violation_level
  2  FROM MGMT$ALERT_CURRENT
  3  where metric_name ='Filesystems'
  4  and target_guid in (SELECT target_guid
  5  FROM mgmt$target
  6* WHERE target_type ='host');

em13c.localdomain
host
Filesystems
pctAvailable
Filesystem Space Available (%)
Warning 			20

There are a lot of management repository views providing access to target, metric and monitoring information stored in the Management Repository. Those views offer the possibility to create your own scripts to query historical data.

In my case I used the gc$metric_values_daily view corresponding to the data in the EM_METRIC_VALUES_DAILY table:

SQL> desc gc$metric_values_daily
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 ENTITY_TYPE				   NOT NULL VARCHAR2(64)
 ENTITY_NAME				   NOT NULL VARCHAR2(256)
 ENTITY_GUID				   NOT NULL RAW(16)
 PARENT_ME_TYPE 				    VARCHAR2(64)
 PARENT_ME_NAME 				    VARCHAR2(256)
 PARENT_ME_GUID 			   NOT NULL RAW(16)
 TYPE_META_VER				   NOT NULL VARCHAR2(8)
 METRIC_GROUP_NAME			   NOT NULL VARCHAR2(64)
 METRIC_COLUMN_NAME			   NOT NULL VARCHAR2(64)
 COLUMN_TYPE				   NOT NULL NUMBER(1)
 COLUMN_INDEX				   NOT NULL NUMBER(3)
 DATA_COLUMN_TYPE			   NOT NULL NUMBER(2)
 METRIC_GROUP_ID			   NOT NULL NUMBER(38)
 METRIC_GROUP_LABEL				    VARCHAR2(64)
 METRIC_GROUP_LABEL_NLSID			    VARCHAR2(64)
 METRIC_COLUMN_ID			   NOT NULL NUMBER(38)
 METRIC_COLUMN_LABEL				    VARCHAR2(64)
 METRIC_COLUMN_LABEL_NLSID			    VARCHAR2(64)
 DESCRIPTION					    VARCHAR2(1024)
 SHORT_NAME					    VARCHAR2(40)
 UNIT						    VARCHAR2(64)
 IS_FOR_SUMMARY 				    NUMBER
 IS_STATEFUL					    NUMBER
 NON_THRESHOLDED_ALERTS 			    NUMBER
 METRIC_KEY_ID				   NOT NULL NUMBER(38)
 KEY_PART_1				   NOT NULL VARCHAR2(256)
 KEY_PART_2				   NOT NULL VARCHAR2(256)
 KEY_PART_3				   NOT NULL VARCHAR2(256)
 KEY_PART_4				   NOT NULL VARCHAR2(256)
 KEY_PART_5				   NOT NULL VARCHAR2(256)
 KEY_PART_6				   NOT NULL VARCHAR2(256)
 KEY_PART_7				   NOT NULL VARCHAR2(256)
 COLLECTION_TIME			   NOT NULL DATE
 COLLECTION_TIME_UTC				    DATE
 COUNT_OF_COLLECTIONS			   NOT NULL NUMBER(38)
 AVG_VALUE					    NUMBER
 MIN_VALUE					    NUMBER
 MAX_VALUE					    NUMBER
 STDDEV_VALUE					    NUMBER

The main columns are:

  • ENTITY_TYPE: the type of the target, for example host or oracle_database
  • ENTITY_NAME: the target or component name, for example the hostname
  • METRIC_GROUP_NAME: name of the metric group, for example Filesystems
  • METRIC_COLUMN_NAME: name of the metric column, for example available or pct
  • KEY_PART_1: key part 1 of the composite key, for example the name of the file system
  • KEY_PART_2: key part 2 of the composite key
  • COLLECTION_TIME: collection time in the target time zone
  • AVG_VALUE: average value of the metric for the day
  • MIN_VALUE: minimum value of the metric for the day
  • MAX_VALUE: maximum value of the metric for the day

So in my case I only had to execute the following SQL query:

select entity_name, 
key_part_1,
collection_time, 
avg_value as avg, 
avg_value/1000000 as percent  from gc$metric_values_daily
where metric_group_name='Filesystems' 
and metric_column_name= 'available' 
and key_part_1='/u01' 
and entity_name ='oraprod.com'
order by 1,2
/

oraprod.com     /u01          04-OCT-16  169545.44  	.380928435
oraprod.com     /u01          05-OCT-16  169572.44	    .380928435
oraprod.com     /u01          06-OCT-16  169583.44	    .380928435
…
oraprod.com     /u01           17-NOV-18  169538.44 	.172295163
oraprod.com     /u01           18-NOV-18  169538.44 	.172295163
oraprod.com     /u01           19-NOV-18  169525.94 	.172282459
oraprod.com     /u01           20-NOV-18  169498.44 	.172254512

With this result I can easily see how much space each file system consumes over time, and we can use Excel to determine the trend of the available file system space.
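If you prefer to stay on the command line instead of Excel, a small Python sketch can compute the same trend from the spooled query output. This is only a sketch: the CSV file name, the column names and the date format below are assumptions, adapt them to your spool file.

import csv
from datetime import datetime

import numpy as np

days, values = [], []
with open("u01_available.csv") as f:          # assumed CSV export of the query above, with a header line
    for row in csv.DictReader(f):
        day = datetime.strptime(row["collection_time"], "%d-%b-%y")
        days.append(day.toordinal())
        values.append(float(row["avg_value"]))  # daily average of the 'available' metric

# Linear fit: the slope is the average change of available space per day
slope, intercept = np.polyfit(days, values, 1)
print(f"trend: {slope:.3f} per day")

if slope < 0:
    days_left = values[-1] / -slope
    print(f"roughly {days_left:.0f} days of space left at the current trend")

numpy.polyfit is of course only one option; the point is that once the data is out of the repository view, any tool can do the forecasting.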

Conclusion:

We often forget that OEM is not only a graphical administration tool: the really interesting part of OEM is that it stores data at regular intervals for important metrics in the management repository. If you understand the structure of the management repository views, you have extraordinary possibilities to extract historical data and build graphs in order to analyze your infrastructure more precisely.


Partitioning -- 11 : Composite Partitioning

Hemant K Chitale - Sun, 2018-12-02 09:53
Oracle allows Composite Partitioning where a Partition can, itself, be Sub-Partitioned.  Each SubPartition is a distinct segment (allocation of physical blocks) while the Partition itself remains a logical definition without a segment.

Composite Partitioning can be any of the following combinations:


  • Range-Hash  
  • Range-List  
  • Range-Range
  • List-Range
  • List-Hash
  • List-List
  • Interval-Hash
  • Interval-List
  • Interval-Range

Here is one example of Range-List Partitioning :

SQL> drop table my_sales_table;

Table dropped.

SQL>
SQL> alter session set deferred_segment_creation=FALSE;

Session altered.

SQL>
SQL> l
1 create table my_sales_table
2 (invoice_id number(16) primary key,
3 invoice_date date,
4 region_code varchar2(5),
5 invoice_amount number)
6 partition by range (invoice_date)
7 subpartition by list (region_code)
8 subpartition template
9 (
10 subpartition US values ('US') tablespace tbs_US,
11 subpartition EMEA values ('EMEA') tablespace tbs_EMEA,
12 subpartition ASIA values ('ASIA') tablespace tbs_ASIA,
13 subpartition OTHERS values (DEFAULT) tablespace tbs_OTHERS)
14 (
15 partition p_2018 values less than (to_date('01-JAN-2019','DD-MON-YYYY')),
16 partition p_2019 values less than (to_date('01-JAN-2020','DD-MON-YYYY'))
17* )
SQL> /

Table created.

SQL>
SQL> select table_name, partition_name, subpartition_name
2 from user_tab_subpartitions
3 where table_name = 'MY_SALES_TABLE'
4 /

TABLE_NAME PARTITION_NAME SUBPARTITION_NAME
------------------------------ --------------- ------------------------------
MY_SALES_TABLE P_2018 P_2018_US
MY_SALES_TABLE P_2018 P_2018_EMEA
MY_SALES_TABLE P_2018 P_2018_ASIA
MY_SALES_TABLE P_2018 P_2018_OTHERS
MY_SALES_TABLE P_2019 P_2019_US
MY_SALES_TABLE P_2019 P_2019_EMEA
MY_SALES_TABLE P_2019 P_2019_ASIA
MY_SALES_TABLE P_2019 P_2019_OTHERS

8 rows selected.

SQL>
SQL> l
1 select segment_name, segment_type, partition_name, tablespace_name
2 from user_segments
3 where segment_name = 'MY_SALES_TABLE'
4* order by 1,2,3
SQL> /

SEGMENT_NAME SEGMENT_TYPE PARTITION_NAME TABLESPACE_N
--------------- ------------------ --------------- ------------
MY_SALES_TABLE TABLE SUBPARTITION P_2018_ASIA TBS_ASIA
MY_SALES_TABLE TABLE SUBPARTITION P_2018_EMEA TBS_EMEA
MY_SALES_TABLE TABLE SUBPARTITION P_2018_OTHERS TBS_OTHERS
MY_SALES_TABLE TABLE SUBPARTITION P_2018_US TBS_US
MY_SALES_TABLE TABLE SUBPARTITION P_2019_ASIA TBS_ASIA
MY_SALES_TABLE TABLE SUBPARTITION P_2019_EMEA TBS_EMEA
MY_SALES_TABLE TABLE SUBPARTITION P_2019_OTHERS TBS_OTHERS
MY_SALES_TABLE TABLE SUBPARTITION P_2019_US TBS_US

8 rows selected.

SQL>


Note how the actual SubPartition Names are auto-created by Oracle using the composite of the Partition Name  (P_2018, P_2019) and the SubPartition Name (from the SubPartition Template).

In this case, the names that appear as SUBPARTITION_NAME in USER_TAB_SUBPARTITIONS show up as PARTITION_NAME in USER_SEGMENTS, because the two logical Partitions (P_2018, P_2019) don't actually have their own Segments.


Note :  I set "deferred_segment_creation" to FALSE so that all the Segments would be created upfront even if they are not populated. "deferred_segment_creation" is an 11g feature.



Categories: DBA Blogs

Integration Cloud Who Are We - Using Headers to Identify IP Addresses

Antony Reynolds - Sat, 2018-12-01 23:55
Identifying Integration Cloud IP Addresses

In this blog post I will show how to identify IP addresses associated with Integration Cloud.

The Challenge

When whitelisting services we need to know their IP address.  This is easy if we need to identify the inbound address to Integration Cloud: we can just resolve the Integration Cloud hostname to get the IP address of the Integration Cloud load balancer, which is then whitelisted for inbound traffic to Integration Cloud.  This is useful if we need to whitelist outbound calls through a firewall to Integration Cloud.  However, for outbound traffic from Integration Cloud we need to do a little more.  This is needed when the targets of Integration Cloud invokes need to whitelist their caller.

The Solution Part #1 Using an External Service to Look Up Source IP

We will create an integration that calls a service that returns the calling IP address.  For outbound traffic from Integration Cloud this will be different from the inbound address.  There is a service called ipify that returns the calling IP address.  It has a REST API that can be invoked to obtain the IP address.
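As a side note, you can check the shape of the ipify response before wiring it into an integration. The minimal Python sketch below can be run from any machine, so it will show that machine's outbound address, not Integration Cloud's; format=json is the documented ipify query parameter.

import json
from urllib.request import urlopen

# Ask ipify which public IP address it sees for this caller
with urlopen("https://api.ipify.org?format=json") as response:
    payload = json.load(response)

print(payload["ip"])  # the response body looks like {"ip":"..."}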

To create it we create a new integration called Get Outbound IP.

The integration provides a simple REST interface to retrieve the outbound IP address.

REST Interface: Resource /outboundip : Method GET

Response Media Type: application/json

Response sample: {"ip":"129.157.69.37"}

We call the Ipify REST API as an invoke so that we can get the outbound IP address of Integration Cloud.

The IPIFY Connection has the following properties:

Connection Type: REST API Base URL

Connection URL: https://api.ipify.org

Security Policy: No Security Policy

The invoke has the following properties:

REST Service URL: /

Method: GET

Query Params: format

JSON Response Sample: {"ip":"129.157.69.37"}

The overall flow looks as below:

Calling this integration returns the IP address used by Integration Cloud for Outbound Calls.  This can then be used to whitelist calls to an FTP server for example.

The Solution Part #2 Implementing the Equivalent of Ipify

In Part #1 we used an external service to get the IP address of the caller of the service.  In this section we will implement an equivalent service in Integration Cloud.

In order to obtain the IP address of the client of Integration Cloud we need to access the headers set by the load balancer.  The Load Balancer as a Service used by Integration Cloud defines a special header X-Real-IP to provide target systems with the actual IP address of the calling system.

When defining the REST endpoint of our integration we need to define a custom inbound header.  This is done on the trigger.

Note we declare the use of custom headers.

We provide the basic endpoint information.

Then we are prompted to provide the name of custom header fields - X-Real-IP.

This is all that is needed to obtain the IP address of the caller of the integration.  The whole flow does not require anything other than mapping the header field back onto the response JSON as shown in the flow below.

Summary

This post has shown how using custom headers we can determine the IP address of the caller of an integration, providing the same functionality as the Ipify service.

The IP address of the caller may be useful for logging or other purposes, so being able to access it via the custom headers is a valuable tool in our arsenal.

AWS re:invent 2018 – Day 5 – Final day

Yann Neuhaus - Sat, 2018-12-01 12:34

Friday was the last day of the AWS re:Invent convention in Las Vegas. I didn't know in advance what to expect because, I have to admit, even though I have started to use AWS there is still a lot to learn. And that's why I personally think working with technology is interesting: it's always moving and there is always something new to investigate or learn.

This morning I attended a very interesting session about the life of a Cloud Ops Engineer, with 2 people from AWS speaking about scenarios that may happen (or already happened) in the daily life of system engineers. I was quite surprised by one scenario where the starting point for analyzing a failure was the billing console. But imagine you get a call from a customer because the application is down and you don't have much information about it. In AWS you pay for the services you use, so you can find some resources directly on the bill. Then you can pull the string and follow the different resources to get a better view. CloudTrail can then help to see what was changed on those resources, because it keeps track of API calls.

Knowing some key services and enabling them can help operations/DevOps teams a lot when supporting applications in an AWS environment: CloudTrail, CloudWatch and GuardDuty, to name a few. Using CloudFormation or similar tools to provision the infrastructure can also help to detect drift when there is an issue. It was definitely worth waking up this morning.

I ended the conference with a new-launch session about Amazon RDS on VMware in a local datacenter. It allows customers to offload the management part to AWS while keeping the data on site. AWS will provide a set of control machines built by AWS and VMware, which is an AWS partner. These machines connect over a private VPN to control machines in the AWS Cloud infrastructure. It currently supports PostgreSQL and MySQL, with SQL Server and Oracle support planned. It will be a BYOL (Bring Your Own Licence) model, so it will be interesting to see how to license that environment. Aurora is not supported, and with what I learned this week about its storage architecture I think it would be a difficult challenge.

With Amazon RDS on VMware and AWS Outposts, which was announced in the keynote, AWS is giving solutions to customers who want to use the Cloud but keep their data in a local datacenter. Oracle proposed Cloud at Customer to achieve a similar goal. Oracle's offering is built on hardware (engineered systems) provided and maintained by Oracle, while AWS's solution is built on VMware. But many companies are already running VMware successfully on their own hardware, so the “fight” will be interesting.


AWS: What services are free of charge? How to control your costs...(part 3)

Dietrich Schroff - Sat, 2018-12-01 12:00
After looking into my bill (see post 1 and post 2) I was keen to see how fine-grained the cost statistics are inside the AWS web console. So I moved to the billing dashboard and to the cost explorer:
This provides you with the following dashboard:
You can play around with the settings to get for example something like this:




Machine Learning - Date Feature Transformation Explained

Andrejus Baranovski - Sat, 2018-12-01 09:46
Machine Learning is all about data. The way you transform and feed data into the ML algorithm greatly determines training success. I will give you an example based on date type data. I will be using the scenario described in my previous post - Machine Learning - Getting Data Into Right Shape. This scenario is focused on invoice risk: the ML model is trained to recognize when an invoice payment is at risk.

Some of the key attributes in invoice data are dates - invoice date, payment due date and payment date. An ML algorithm expects numbers as training features; it can't operate on literals or dates. This is where data transformation comes in - out of the original data we need to prepare data which can be understood by ML.

How can we transform dates into numbers? One of the ways is to split the date value into multiple columns with numbers describing the original date (year, quarter, month, week, day of year, day of month, day of week). Might this work? To be sure, we need to run training and validate the training success.

Resources:

1. Sample Jupyter notebooks and datasets are available on my GitHub repo
2. I would recommend reading this book - Machine Learning for Business

Two approaches:

1. Date feature transformation into multiple attributes

Example where date is split into multiple columns:
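The notebook screenshot is not reproduced here; a minimal pandas sketch of this kind of split could look like the following (the column names are assumptions, not the exact code from the repo).

import pandas as pd

# A tiny frame standing in for the invoice data (assumption)
df = pd.DataFrame({"invoice_date": pd.to_datetime(["2018-01-15", "2018-06-30", "2018-12-01"])})

df["invoice_year"] = df["invoice_date"].dt.year
df["invoice_quarter"] = df["invoice_date"].dt.quarter
df["invoice_month"] = df["invoice_date"].dt.month
df["invoice_week"] = df["invoice_date"].dt.isocalendar().week.astype(int)  # pandas >= 1.1
df["invoice_dayofyear"] = df["invoice_date"].dt.dayofyear
df["invoice_dayofmonth"] = df["invoice_date"].dt.day
df["invoice_dayofweek"] = df["invoice_date"].dt.dayofweek

print(df)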


The correlation between the decision column and the features shows many dependencies, but it doesn't pick up all columns for the payment date feature. This is an early sign that training might not work well:


We need to create test (1/3 of the remaining data), validation (2/3 of the remaining data) and training (70% of all data) datasets to be able to train, validate and test the ML model. Splitting the original dataset into three parts:


Running training using XGBoost (gradient boosting is currently one of the most popular techniques for efficient modeling of tabular datasets of all sizes). Read more about XGBoost parameters. We have a validation dataset and this allows us to use the XGBoost early stopping functionality: if training quality does not improve in N (10 in our case) rounds, it will stop and pick the best iteration as the training result:
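A sketch of what such a training call can look like with the xgboost Python API is shown below. The toy data and parameter values are assumptions for illustration, not the notebook from the GitHub repo; the parts that matter for early stopping are the evals list and early_stopping_rounds.

import numpy as np
import xgboost as xgb

# Toy data standing in for the prepared invoice features (assumption)
rng = np.random.default_rng(0)
X_train, y_train = rng.random((200, 5)), rng.integers(0, 2, 200)
X_val, y_val = rng.random((60, 5)), rng.integers(0, 2, 60)

dtrain = xgb.DMatrix(X_train, label=y_train)
dval = xgb.DMatrix(X_val, label=y_val)

params = {"objective": "binary:logistic", "max_depth": 5, "eta": 0.2, "eval_metric": "error"}

model = xgb.train(
    params,
    dtrain,
    num_boost_round=200,
    evals=[(dtrain, "train"), (dval, "validation")],
    early_stopping_rounds=10,  # stop when validation error has not improved for 10 rounds
)
print("best iteration:", model.best_iteration)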


Result: training accuracy 93% and validation accuracy 74%. Validation accuracy is too low; this means training wasn't successful and we should try to transform the dates in another way:


2. Date feature transformation into difference between dates

Instead of splitting the date into multiple attributes, we can reduce the number of attributes to two. We can use date differences as such:

- Day difference between Payment Due Date and Invoice Date
- Day difference between Payment Date and Invoice Date

This should bring a clear pattern: when a payment is delayed, the difference between payment date and invoice date will be bigger than the difference between payment due date and invoice date. Sample data with the date features transformed into date differences:
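The sample data screenshot is not reproduced here, but a minimal pandas sketch of this transformation could look like the following; the column names and the derived 'late' flag are assumptions.

import pandas as pd

df = pd.DataFrame({
    "invoice_date": pd.to_datetime(["2018-01-05", "2018-02-10"]),
    "payment_due_date": pd.to_datetime(["2018-02-04", "2018-03-12"]),
    "payment_date": pd.to_datetime(["2018-02-01", "2018-04-02"]),
})

# Two numeric features instead of many split-out date columns
df["due_days"] = (df["payment_due_date"] - df["invoice_date"]).dt.days
df["paid_days"] = (df["payment_date"] - df["invoice_date"]).dt.days

# A payment is late when it took longer than the allowed number of days
df["late"] = (df["paid_days"] > df["due_days"]).astype(int)
print(df[["due_days", "paid_days", "late"]])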


Correlation is much better this time. Decision correlates well with date differences and total:


The test, validation and training datasets are prepared in the same proportions as in the previous test, but this time we use the stratify option. This option shuffles the data and creates test, validation and training datasets in which the decision attribute is well represented:


Training, validation and test datasets are prepared:


Using same XGBoost training parameters:


Result: This time we get 99% training accuracy and 97% validation accuracy. A great result. You can see how important the data preparation step is for ML - it directly relates to ML training quality:

PostgreSQL 12: csv output format for psql

Yann Neuhaus - Sat, 2018-12-01 07:25

Getting data out of PostgreSQL in csv format is not a big issue. Using copy you can do that easily. PostgreSQL 12 will enhance psql so that you can directly return csv formatted output from a query.

Using “copy” you would do something like this:

postgres=# copy pg_database to '/var/tmp/aa.txt' with (format csv);
COPY 3
postgres=# \! cat /var/tmp/aa.txt
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
postgres=# 

What you cannot do directly with copy is dumping a view:

postgres=# copy pg_settings to '/var/tmp/aa.txt' with (format csv);
ERROR:  cannot copy from view "pg_settings"
HINT:  Try the COPY (SELECT ...) TO variant.

Of course you can work around that by doing it like this:

postgres=# copy (select * from pg_settings) to '/var/tmp/aa.txt' with (format csv);
COPY 309
postgres=# \! head -2 /var/tmp/aa.txt
allow_system_table_mods,off,,Developer Options,Allows modifications of the structure of system tables.,,postmaster,bool,default,,,,off,off,,,f
application_name,psql,,Reporting and Logging / What to Log,Sets the application name to be reported in statistics and logs.,,user,string,client,,,,"",psql,,,f

With PostgreSQL 12 there will be an easier way of doing that:

postgres=# \pset format csv
Output format is csv.
postgres=# select * from pg_database;
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"

… or directly when executing psql:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv  postgres
oid,datname,datdba,encoding,datcollate,datctype,datistemplate,datallowconn,datconnlimit,datlastsysoid,datfrozenxid,datminmxid,dattablespace,datacl
11788,postgres,10,6,en_US.utf8,en_US.utf8,f,t,-1,11787,598,1,1663,NULL
1,template1,10,6,en_US.utf8,en_US.utf8,t,t,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
11787,template0,10,6,en_US.utf8,en_US.utf8,t,f,-1,11787,598,1,1663,"{=c/postgres,postgres=CTc/postgres}"
Time: 0.794 ms

This makes it very easy to pass the data to other programs for further processing (a small Python example is shown after the next listing). By switching to unaligned output mode you can even specify the field separator:

postgres@pgbox:/home/postgres/ [PGDEV] psql -c "select * from pg_database" --csv --field-separator='##' -A postgres
oid##datname##datdba##encoding##datcollate##datctype##datistemplate##datallowconn##datconnlimit##datlastsysoid##datfrozenxid##datminmxid##dattablespace##datacl
11788##postgres##10##6##en_US.utf8##en_US.utf8##f##t##-1##11787##598##1##1663##NULL
1##template1##10##6##en_US.utf8##en_US.utf8##t##t##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
11787##template0##10##6##en_US.utf8##en_US.utf8##t##f##-1##11787##598##1##1663##{=c/postgres,postgres=CTc/postgres}
(3 rows)
Time: 2.105 ms
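For example, a small Python sketch could call a version 12 psql and feed the csv output straight into the standard csv module. The connection defaults and database name below are assumptions; -X just skips the psqlrc.

import csv
import io
import subprocess

# Run psql with the new --csv switch and capture its output
result = subprocess.run(
    ["psql", "-X", "--csv", "-c", "select datname, encoding from pg_database", "postgres"],
    capture_output=True, text=True, check=True,
)

# The first csv line holds the column names, so DictReader works directly
for row in csv.DictReader(io.StringIO(result.stdout)):
    print(row["datname"], row["encoding"])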


[Vlog] Oracle WebLogic Administration: Security Realm

Online Apps DBA - Sat, 2018-12-01 04:29

Security Realm in WebLogic includes Users, Groups, Authentication Schemes, Connecting to LDAP like OID, MS-AD, OUD etc Unravel our Blog at https://k21academy.com/weblogic22 and Go through Important Coverings like: ✔ What is Weblogic Security Realm? ✔ Security Roles Security Realm in WebLogic includes Users, Groups, Authentication Schemes, Connecting to LDAP like OID, MS-AD, OUD etc Unravel […]

The post [Vlog] Oracle WebLogic Administration: Security Realm appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Joining date and timestamp partitioned tables but partition pruning seems to not to work

Tom Kyte - Sat, 2018-12-01 03:26
Hi We have date and timestamp partitioned tables. While joining them partition pruning seems not to work. Is there any way to do this? create table T_A( ORDERED DATE not null, C1 number(20) not null ) partition ...
Categories: DBA Blogs

PostgreSQL 12: log_statement_sample_rate

Yann Neuhaus - Fri, 2018-11-30 12:41

A common way to identify long running queries in PostgreSQL is to set log_min_duration_statement to a value that is known to cause trouble. In other words: if you know most of your statements usually execute in under a second, you could set log_min_duration_statement to “2s” (which means two seconds) so that whenever a statement takes longer than two seconds it will be reported in the log file. The issue with this, however, is that it can cause a lot of activity in the server log file, which of course is not good for performance either. PostgreSQL 12 will probably come with a solution to that.

The default value of log_min_duration_statement is "-1", which means disabled:

postgres=# select version();
                                                  version                                                   
------------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 -1
(1 row)

Let's set this to 5ms and then generate some queries:

postgres=# alter system set log_min_duration_statement = '5ms';
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_min_duration_statement;
 log_min_duration_statement 
----------------------------
 5ms
(1 row)

When we execute some queries that take longer than 5ms they should get reported in the log file:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

Looking at the log file there are exactly 10 entries:

2018-11-28 03:13:32.240 CET - 20 - 28978 - [local] - postgres@postgres LOG:  duration: 529.825 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:32.770 CET - 21 - 28978 - [local] - postgres@postgres LOG:  duration: 529.904 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.273 CET - 22 - 28978 - [local] - postgres@postgres LOG:  duration: 501.729 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:33.783 CET - 23 - 28978 - [local] - postgres@postgres LOG:  duration: 509.532 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.305 CET - 24 - 28978 - [local] - postgres@postgres LOG:  duration: 520.946 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:34.809 CET - 25 - 28978 - [local] - postgres@postgres LOG:  duration: 502.624 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.315 CET - 26 - 28978 - [local] - postgres@postgres LOG:  duration: 505.043 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:35.817 CET - 27 - 28978 - [local] - postgres@postgres LOG:  duration: 502.034 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.321 CET - 28 - 28978 - [local] - postgres@postgres LOG:  duration: 503.417 ms  statement: select pg_sleep(0.5)
2018-11-28 03:13:36.824 CET - 29 - 28978 - [local] - postgres@postgres LOG:  duration: 501.790 ms  statement: select pg_sleep(0.5)

Imagine there are thousands of statements that exceed log_min_duration_statement: This will make the log file grow quite fast and performance will suffer because writing the log file takes resources as well. PostgreSQL 12 introduces a new parameter “log_statement_sample_rate” which helps with this:

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 1
(1 row)

The default is 1 which means log all statements that exceed the log_min_duration_statement threshold. When that is too much we can now go and say: We want only 10% of these statements getting logged:

postgres=# alter system set log_statement_sample_rate=0.1;
ALTER SYSTEM
postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

postgres=# show log_statement_sample_rate;
 log_statement_sample_rate 
---------------------------
 0.1
(1 row)

Doing the same test as above again:

postgres=# select 'select pg_sleep(0.5)' from generate_series(1,10); \gexec
       ?column?       
----------------------
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
 select pg_sleep(0.5)
(10 rows)

 pg_sleep 
----------
 
(1 row)

...

… we only get one entry in the log file, instead of 10:

2018-11-28 03:20:36.218 CET - 34 - 28978 - [local] - postgres@postgres LOG:  duration: 504.845 ms  statement: select pg_sleep(0.5)

Nice. Btw: The commit is here.


New OA Framework 12.2.4 Update 20 Now Available

Steven Chan - Fri, 2018-11-30 12:09

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.4 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.4 users should apply this patch. Future OAF patches for EBS Release 12.2.4 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.4 bundle patches.

This latest bundle patch includes a fix for the following issues:

  • When Rich Table Interaction is set to off, data binding for user-customizable properties does not take effect for classic tables.


Categories: APPS Blogs

AWS re:invent 2018 – Day 4

Yann Neuhaus - Fri, 2018-11-30 11:52

Thursday morning it was time for the keynote from Dr Werner Vogels.

A big part of the keynote was about databases again. Maybe AWS people know that there are still lots of DBAs out there! There was again a focus on Amazon Aurora (relational), DynamoDB (NoSQL) and AWS Redshift, which are replacing the Oracle databases Amazon was using. There was also a focus on how S3 works to maintain the best durability for the data. I've seen several sessions this week mentioning S3 as the object storage for building a data lake.

I didn't mention it for yesterday's keynote, but there are guests coming on stage to speak about how they are using AWS in their businesses. Today it was Ethan Kaplan from Fender. He spoke about the serverless movement that is used to power the new guitar-learning application from Fender. This application was born because, even if there are still lots of people buying new guitars, most of them quit playing after 6 months. On the other side, people who continue playing buy 8-10 guitars, so it was important to develop a solution that keeps people playing.

The important guest for me was Yuri Misnik, a representative of National Australia Bank, speaking about the Cloud-first strategy within the bank. Banking is traditionally associated with legacy technologies and regulation that would speak against moving to the Cloud. But they are targeting 1/3 of their applications running in AWS by the year 2020, and they are moving fast.

There were fewer new launches than in Andy's keynote, but there are a few to mention:

  • Redshift concurrency scaling: allows Redshift to automatically increase capacity to avoid waits when the number of queries increases. The very nice part is the burst credit: for every 24 hours your cluster is running you get 1 hour of credit for this new concurrent usage, making it almost transparent and free for most customers, according to Werner
  • Support for new languages in AWS Lambda and, most importantly, support for custom runtimes, allowing customers to run almost any language they like
  • AWS Lambda Layers: you can now share a library between several Lambda functions without having to copy the code again
  • AWS Well-Architected Tool, based on the AWS Well-Architected Framework, to allow reviewing an architecture without needing a meeting with AWS or an AWS partner

Again, the keynote consumed a big part of the day, but I was able to attend 2 more sessions. The first was Kyle Hailey presenting Performance Insights. It's a performance tool for Amazon RDS working with Aurora, PostgreSQL, MySQL and Oracle. SQL Server is planned and will be next on the list. It's very promising, even if it's currently missing SQL execution plans. The strength of the tool is that it provides a consistent GUI across the different engines.

[Screenshot: AWS Performance Insights]

As an Oracle DBA, I have the feeling that the tool is inspired a lot by what was done quite some time ago when ASH was implemented directly in the database. The same AAS metric drives the analysis, and it's based on a 1-second sampling interval. That's good news: most DBAs who already work this way with Oracle will be able to quickly understand their load when moving to Amazon RDS. The fun part was a demo from the company Slalom, which developed an Alexa connector for Performance Insights: now you can talk to your database.

I finished the day with a workshop about CI/CD: Continuous Integration / Continuous Deployment or Delivery, depending on the level you reach. Most often we prefer, or we have to (in validated environments), keep an approval step before moving to Production, so it's only continuous delivery. I can't give all the details, but the workshop was well organized, with people able to answer questions and infrastructure running without issues. It was an occasion to work a little bit with the OpsWorks service for a managed Puppet server and to discover the CodeBuild / CodePipeline services.

On Thursday night it's the re:Play party, where nobody tells people that they can't play anymore because they are too old :-) There will be a lot of activities besides 2 different stages: 1 for live music and 1 with DJs. But on Friday morning there will be the last sessions, and they will still be interesting, so stay tuned.


showing PL/SQL code and SQL statement relationship

Tom Kyte - Fri, 2018-11-30 09:06
Hi Oracle Gurus, I'm banging my head against walls for some time trying to set up queries monitoring DB activity that would show the relationship between a PL/SQL code (anonymous block, function or stored procedure) and the statements issued inside. F...
Categories: DBA Blogs

Usage of type records and assigning values from single varchar2

Tom Kyte - Fri, 2018-11-30 09:06
I have a record type defined. Using the UTL_FILE package I want to read a line of data from a file and put the data into the record type. The record type has got multiple variables declared under it.
Categories: DBA Blogs

Conditional complex constraint

Tom Kyte - Fri, 2018-11-30 09:06
I would like to implement a constraint for the following table: CREATE TABLE GLOBAL_LOCKS ( RESOURCE_NAME VARCHAR2(50) NOT NULL, IS_EXCLUSIVE CHAR(1) DEFAULT 'N' NOT NULL ) as follows: 1. RESOURCE_NAME must be unique if...
Categories: DBA Blogs

Dynamic Values list in PIVOT IN CLAUSE

Tom Kyte - Fri, 2018-11-30 09:06
Dear Developers, I want to write a PIVOT/SQL Statement to get the values in PIVOT IN-Clause Dynamically like below: Select * FROM (Select StaffID, OSSID FROM Staff) PIVOT(Count(StaffID) FOR OSSID IN(Selec...
Categories: DBA Blogs

Explain SCN and ORA_ROWSCN

Tom Kyte - Fri, 2018-11-30 09:06
How can I find a row inserted on a specific date using an SCN? And what is the major difference between SCN and ORA_ROWSCN?
Categories: DBA Blogs
