Feed aggregator

Listagg returning multiple values

Tom Kyte - Mon, 2018-06-11 15:46
Hello, I am new to writing this kind of SQL and I am almost there with this statement, but not quite. I'm trying to write a query using listagg and I am getting repeating values in the requirement column when I have 2 passengers. This is because...
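The usual cause of repeating values here is joining the requirements to a child table (the passengers), so every requirement row is duplicated before LISTAGG sees it. A minimal sketch of one fix, with hypothetical table and column names, is to de-duplicate in an inline view first:

-- hypothetical names: feed LISTAGG from a DISTINCT inline view so each
-- requirement appears only once per booking, however many passengers there are
select booking_id,
       listagg(requirement, ', ') within group (order by requirement) as requirements
from   (select distinct booking_id, requirement
        from   booking_requirements)
group by booking_id;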
Categories: DBA Blogs

Difference between Procedure and function (at least 5, if there are)

Tom Kyte - Mon, 2018-06-11 15:46
Difference between Procedure and function (at least 5, if there are). Seems like a basic question, but it's a very tricky one. Some of the differences I encountered on the internet later seemed incorrect; I will list some of them below...
Categories: DBA Blogs

How to use conditional case with aggregate function

Tom Kyte - Mon, 2018-06-11 15:46
I have a query like the one below: SELECT CH.BLNG_NATIONAL_PRVDR_IDNTFR CASE WHEN CH.TCN_DATE BETWEEN TO_DATE('01-JAN-2016','DD/MM/YYYY') AND TO_DATE('30-JUN-2016','DD/MM/YYYY') THEN ROUND(SUM(CH.PAID_AMOUNT)/COUNT(DISTINCT CH.MBR_IDENT...
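As an aside, the format masks above ('DD/MM/YYYY') do not match the literals ('01-JAN-2016'); ANSI date literals avoid that. A hedged sketch of the usual pattern, with hypothetical table and column names, is to put the CASE inside the aggregates (conditional aggregation) rather than wrapping the aggregates in a CASE:

-- hypothetical names: the CASE sits inside SUM/COUNT so the date filter applies per row
select provider_id,
       round( sum(case when claim_date between date '2016-01-01' and date '2016-06-30'
                       then paid_amount end)
            / nullif(count(distinct case when claim_date between date '2016-01-01' and date '2016-06-30'
                                          then member_id end), 0)
            , 2) as paid_per_member_h1_2016
from   claims
group by provider_id;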
Categories: DBA Blogs

Conversion of Date format

Tom Kyte - Mon, 2018-06-11 15:46
I have a column value in 'YYYYDDMM' format and need to convert it into 'DD/MM/YYYY' and store the data.
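A minimal sketch, assuming the value is stored as text (hypothetical table and column names); converting through DATE gives the target format, though storing it in a real DATE column is usually the better long-term fix:

-- hypothetical names: source text in YYYYDDMM, e.g. '20161503' = 15/03/2016
select to_char(to_date(date_text, 'YYYYDDMM'), 'DD/MM/YYYY') as date_ddmmyyyy
from   t;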
Categories: DBA Blogs

extract only two digits after the fraction

Tom Kyte - Mon, 2018-06-11 15:46
For example: my input data is in number format, 123.456, and I want to see 123.45. No rounding off is needed.
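TRUNC with a scale argument does exactly this: it cuts the number at two decimal places without rounding.

select trunc(123.456, 2) as truncated_value
from   dual;
-- returns 123.45 (ROUND(123.456, 2) would return 123.46)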
Categories: DBA Blogs

alter database noarchivelog versus flashback

Tom Kyte - Mon, 2018-06-11 15:46
SQL> alter database noarchivelog;
alter database noarchivelog
*
ERROR at line 1:
ORA-38774: cannot disable media recovery - flashback database is enabled
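A hedged sketch of the usual way past this error: Flashback Database has to be switched off (and any guaranteed restore points dropped) before media recovery can be disabled, and switching to NOARCHIVELOG itself requires the database to be mounted rather than open:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database flashback off;
SQL> alter database noarchivelog;
SQL> alter database open;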
Categories: DBA Blogs

Can you Override USER built-in function ?

Tom Kyte - Mon, 2018-06-11 15:46
Hi Tom, good morning. My question may be naive, I'm not an Oracle expert. Can the USER built-in function be overridden? Context: my current team has used Oracle for decades (along with PowerBuilder). We are now transitioning to a n...
Categories: DBA Blogs

Add Weblogic12c Vagrant project

Darwin IT - Mon, 2018-06-11 07:53
Last week I proudly presented a talk on how to create, provision and maintain VMs with Vagrant, including installing Oracle software (Database, Fusion Middleware, etc.). It was during the nlOUG Tech Experience 18. I've written about it in the last few posts.

Today I added my WLS12c Vagrant project to Github. Based on further insights I refactored my database12c installation a bit and pushed an updated Database12c Vagrant project to Github as well.

The main updates: I disliked the 'Zipped' subfolder in the stage folder paths for the installations, and I wanted all the installations extracted into the same main folder, so I can clean that up more easily.

You should check the references to the stage folder: download the appropriate Oracle installer zip files and place them in the appropriate folder.



Coming up: a SOA Suite 12c Vagrant project (including SOA, BPM and OSB), complete with creation of the domain (based on my scripts from 2016 and my Tech Experience talk of last year) and configuration of the node manager, ready to start up after a 'vagrant up'.

I also uploaded the slides of my talk to SlideShare.
 

Change Data Capture from Oracle with StreamSets Data Collector

Yann Neuhaus - Mon, 2018-06-11 07:21

With this trend of CQRS architectures, where the transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become an important component, already at the architecture design phase. This is good for existing product vendors such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. But it is also good for Open Source projects: there are some ideas running (Debezium, VOODOO) but not yet released.

Today I tested the Oracle CDC Data Collector for StreamSets. StreamSets Data Collector is an open-source project started by former people from Cloudera and Informatica to define pipelines streaming data from data collectors. It is easy, simple, and has a bunch of possible destinations. The Oracle CDC is based on LogMiner, which means that it is easy but may have some limitations (mainly datatypes, DDL replication and performance).

Install

The installation guide is at streamsets.com. I chose the easiest way for testing, as they provide a Docker container (https://github.com/streamsets).

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest
ef707344c8bd393f8e9d838dfdb475ec9d5f6f587a2a253ebaaa43845b1b516d

And that’s all. I am ready to connect with http on port 18630.

The default user/password is admin/admin

The GUI looks simple and efficient. There’s a home page where you define the ‘pipelines’ and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be automatically installed. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn’t allow it to be embedded for legal reasons. I’ll do something simple here just to check the mining from Oracle.

In ‘Package Manager’ (the little gift icon at the top), go to JDBC and check ‘Install’ for the streamsets-datacollector-jdbc-lib library.
Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle jdbc driver (ojdbc8.jar).
I’ve also installed the MySQL one for future tests:

File Name                           Library ID
ojdbc8.jar                          streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar     streamsets-datacollector-jdbc-lib

Oracle CDC pipeline

I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In the JDBC tab I’ve set only the JDBC Connection String, to jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1, which is my PDB (I’m on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I’ve set ‘sys as sysdba’ as the username, and its password. The configuration can also be displayed as JSON; here is the corresponding entry:

"configuration": [
{
"name": "hikariConf.connectionString",
"value": "jdbc:oracle:thin:scott/tiger@//192.168.56.188:1521/pdb1"
},
{
"name": "hikariConf.useCredentials",
"value": true
},
{
"name": "hikariConf.username",
"value": "sys as sysdba"
},
{
"name": "hikariConf.password",
"value": "oracle"
},
...

I’ve provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source – but of course, it will use CPU and memory resources from the source database host in this case.
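If you would rather not run the pipeline as SYS, a dedicated mining user is an option. The sketch below is an assumption on my side (the exact privilege list depends on the database version and on the StreamSets documentation), but these are the typical LogMiner-related grants:

-- hedged sketch only, 12c+ style privileges; check the StreamSets docs for the exact list
create user cdc_stream identified by "a_strong_password";
grant create session, logmining, select any dictionary to cdc_stream;
grant execute on sys.dbms_logmnr to cdc_stream;
grant select on sys.v_$logmnr_contents to cdc_stream;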

Then I’ve defined the replication in the Oracle CDC tab: Schema set to ‘SCOTT’ and Table Name Pattern to ‘%’. Initial Change set to ‘From Latest Change’, as I just want to see the changes and not actually replicate anything for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. ‘Dictionary Source’ set to ‘Online Catalog’ – this is what LogMiner will use to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be managed correctly with this option.

{
  "name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
  "value": [
    {
      "schema": "SCOTT",
      "table": "%"
    }
  ]
},
{
  "name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
  "value": [
    "INSERT",
    "UPDATE",
    "DELETE",
    "SELECT_FOR_UPDATE"
  ]
},
{
  "name": "oracleCDCConfigBean.dictionary",
  "value": "DICT_FROM_ONLINE_CATALOG"
},

I’ve left the defaults. I can’t think yet about a reason for capturing the ‘select for update’, but it is there.

Named Pipe destination

I know that the destination part is easy. I just want to see the captured changes here, so I took the easiest destination: Named Pipe, where I configured only the named pipe path (/tmp/scott) and the Data Format (JSON):

{
  "instanceName": "NamedPipe_01",
  "library": "streamsets-datacollector-basic-lib",
  "stageName": "com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
  "stageVersion": "1",
  "configuration": [
    {
      "name": "namedPipe",
      "value": "/tmp/scott"
    },
    {
      "name": "dataFormat",
      "value": "JSON"
    },
    ...

Supplemental logging

The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay the transactions in a different database, we need supplemental logging for the database, and for the objects involved:

SQL> alter database add supplemental log data;
Database altered.
SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;
PL/SQL procedure successfully completed.
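A quick way to verify this before starting the pipeline (standard dictionary views, nothing StreamSets-specific):

SQL> select supplemental_log_data_min from v$database;
SQL> select count(*) from dba_log_groups where owner='SCOTT';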

Run

And that’s all. Just run the pipeline and look at the logs:

[Screenshot: StreamSets pipeline run log]

StreamSets Oracle CDC pulls continuously from LogMiner to get the changes. Here are the queries that it uses for that:

BEGIN DBMS_LOGMNR.START_LOGMNR( STARTTIME => :1 , ENDTIME => :2 , OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.NO_SQL_DELIMITER); END;

This starts mining between two timestamps. I suppose that it reads the SCNs to get a finer grain and avoid overlapping information.

And here is the main one:

SELECT SCN, USERNAME, OPERATION_CODE, TIMESTAMP, SQL_REDO, TABLE_NAME, COMMIT_SCN, SEQUENCE#, CSF, XIDUSN, XIDSLT, XIDSQN, RS_ID, SSN, SEG_OWNER FROM V$LOGMNR_CONTENTS WHERE ((( (SEG_OWNER='SCOTT' AND TABLE_NAME IN ('BONUS','DEPT','EMP','SALGRADE')) ) AND (OPERATION_CODE IN (1,3,2,25))) OR (OPERATION_CODE = 7 OR OPERATION_CODE = 36))

This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operation codes 1, 3, 2 and 25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern ‘%’ for the SCOTT schema has been expanded to the table names. As far as I know, there’s no DDL mining here to automatically capture new tables.

Capture

Then I’ve run this simple insert (I’ve added a primary key on this table as it is not there in utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction):

SQL> commit;

and here I got the message from the pipe:

/ $ cat /tmp/scott
 
{"LOC":"Cloud","DEPTNO":50,"DNAME":"IT"}

The graphical interface shows how the pipeline is going:
[Screenshot: pipeline monitoring view]

I’ve tested some bulk loads (direct-path inserts) and they seem to be managed correctly. Actually, this Oracle CDC is based on LogMiner, so it is fully supported (no mining of the proprietary redo stream format) and the limitations are clearly documented.

Monitoring

Remember that the main work is done by LogMiner, so don’t forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner:
[Screenshot: monitoring of the LogMiner query]
You will see a different SQL_ID because the filter predicates use literals instead of bind variables (which is not a problem here).
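Without the Tuning Pack, a rough alternative is to look for the mining statements directly in V$SQL, for example:

select sql_id, executions, round(elapsed_time/1e6) as elapsed_seconds,
       substr(sql_text, 1, 80) as sql_text
from   v$sql
where  upper(sql_text) like '%V$LOGMNR_CONTENTS%'
order  by elapsed_time desc;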

Conclusion

This product is very easy to test, so you can do a Proof of Concept within a few hours and test it for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, a very friendly and responsive graphical interface, very clear error messages…

 

This article Change Data Capture from Oracle with StreamSets Data Collector appeared first on Blog dbi services.

Real-time Sailing Yacht Performance - stepping back a bit (Part 1.1)

Rittman Mead Consulting - Mon, 2018-06-11 07:20

Slight change to the planned article. At the end of my analysis in Part 1 I discovered I was missing a number of key messages. It turns out that not all the SeaTalk messages from the integrated instruments were being translated to an NMEA format and therefore were not being sent wirelessly from the AIS hub. I didn't really want to introduce another source of data directly from the instruments, as it would involve hard-wiring the instruments to the laptop and then translating a different message format (SeaTalk). I decided to spend some money on hardware instead (any excuse for new toys). I purchased a SeaTalk to NMEA converter from DigitalYachts (discounted at the London boat show, I'm glad to say).

This article is about the installation of that hardware and the result (hence Part 1.1), not our usual type of blog. You never know, it may be of interest to somebody out there, and this is a real-life data issue! Don't worry, it will be short and more of an insight into yacht wiring than anything.

The next blog will be very much back on track, looking at Kafka in the architecture.

The existing wiring

The following image shows the existing setup, what's behind the panels and how it links to the instrument architecture documented in Part 1. No laughing at the wiring spaghetti - I stripped out half a tonne of cable last year so this is an improvement. Most of the technology lives near the chart table and we have access to the navigation lights, cabin lighting, battery sensors and DSC VHF. The top left image also shows a spare GPS (Garmin) and far left an EPIRB.

Approach

I wanted to make sure I wasn't breaking anything by adding the new hardware, so I followed the approach we use as software engineers: check before, during and after any changes, enabling us to narrow down the point at which errors are introduced. To help with this I created a little bit of Python that reads the messages and tells me the unique message types, the total number of messages and the number of messages in error.

 
import json
import sys

# Function to test whether a single NMEA message is valid
def is_message_valid(orig_line):

........ [Function same code described in Part 1]

# main body
f = open("/Development/step_1.log", "r")

valid_messages = 0
invalid_messages = 0
total_messages = 0
my_list = []

# process the log file line by line
for line in f:

  orig_line = line

  if is_message_valid(orig_line):
    valid_messages = valid_messages + 1

    # record the talker/sentence identifier (e.g. $GPRMC) of every valid message
    if orig_line[0:1] == "$":
      my_list.append(orig_line[0:6])

  else:
    invalid_messages = invalid_messages + 1

  total_messages = total_messages + 1

# reduce to the unique message types seen in the log
new_list = list(set(my_list))

i = 0
while i < len(new_list):
    print(new_list[i])
    i += 1

# High tech report
print("Summary")
print("#######")
print("valid messages -> ", valid_messages)
print("invalid messages -> ", invalid_messages)
print("total messages -> ", total_messages)

f.close()

For each of the steps, I used nc to write the output to a log file and then used the Python to analyse the log. I logged about ten minutes of messages at each step, although I have to confess to shortening the last test as I was getting very cold.

nc -l 192.168.1.1 2000 > step_x.log

While spooling the messages I artificially generate some speed data by spinning the wheel of the speedo. The image below shows the speed sensor and where it normally lives (far right image). The water comes in when you take out the sensor, as it temporarily leaves a rather large hole in the bottom of the boat, so don't be alarmed by the little puddle you can see.

Step 1;

I spool and analyse about ten minutes of data without making any changes to the existing setup.

The existing setup takes data directly from the back of a Raymarine instrument seen below and gets linked into the AIS hub.

Results;
 
$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (from AIS hub)
$GPGGA
$GPGLL
$GPGBS

$IIDBT -> Depth sensor
$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 

Summary
#######
valid messages ->  2129
invalid messages ->  298
total messages ->  2427
12% error

Step 2;

I disconnect the NMEA interface between the AIS hub and the integrated instruments. So in the diagram above I disconnect all four NMEA wires from the back of the instrument.

I observe the Navigation display of the integrated instruments no longer displays any GPS information (this is expected as the only GPS messages I have are coming from the AIS hub).

Results;

$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (from AIS hub)
$GPGGA
$GPGLL
$GPGBS

No $II messages as expected 

Summary
#######
valid messages ->  3639
invalid messages ->  232
total messages ->  3871
6% error

Step 3;

I wire in the new hardware both NMEA in and out then directly into the course computer.

Results;

$AITXT -> AIS (from AIS hub)

$GPGBS -> GPS messages
$GPGGA
$GPGLL
$GPRMC

$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 
$IIVHW -> Heading & Speed
$IIRSA -> Rudder Angle
$IIHDG -> Heading
$IIVLW -> Distance travelled

Summary
#######
valid messages ->  1661
invalid messages ->  121
total messages ->  1782
6.7% error

Conclusion;

I get all the messages I am after (for now); the hardware seems to be working.

Now to put all the panels back in place!

In the next article, I will get back to technology and the use of Kafka in the architecture.

Categories: BI & Warehousing

dbms_random

Jonathan Lewis - Mon, 2018-06-11 02:31

In a recent ODC thread someone had a piece of SQL that was calling dbms_random.string(‘U’,20) to generate random values for a table of 100,000,000 rows. The thread was about how to handle the ORA-30009 error (not enough memory for operation) that is almost inevitable when you use the “select from dual connect by level <= n” strategy for generating very large numbers of rows, but this example of calling dbms_random.string() so frequently prompted me to point out an important CPU saving, and then publicise through this blog a little-known fact (or deduction) about the dbms_random.string() function.

If you generate a random string of length 6 using only upper-case letters there are 308,915,776 different combinations (26^6); so if you’re after “nearly unique” values for 100 million rows then a six character string is probably good enough – it might give you a small percentage of values which appear in a handful of rows but most of the values are likely to be unique or appear in just two rows. If you want to get closer to uniqueness then 7 characters will do it, and 8 will make it almost certain that you will get a unique value in every row.

So if you want “nearly unique” and “random 20 character strings” it’s probably sufficient to generate random strings of 6 to 8 characters and then rpad() them up to 20 characters with spaces – the saving in CPU will be significant: roughly a factor of 3 (which is going to matter when you’re trying to generate 100 million rows). As a little demo I supplied the OP with a script to create a table of just one million random strings – first of 20 random characters, then of 6 random characters with 14 spaces appended. The run time (mostly CPU) dropped from 1 minute 55 seconds to 41 seconds.
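For illustration only (the table name and row count here are arbitrary, not the original script), the padded generation looks something like this:

create table t_random_strings as
select rpad(dbms_random.string('U', 6), 20, ' ') as random_string
from   dual
connect by level <= 1e6;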

Why is there such a difference? Because to generate a random string of 6 characters Oracle generates a random string of one character six times in a row and concatenates them. The difference between 6 calls and 20 calls per row gives you that factor of around 3. For a quick demo, try running the following anonymous PL/SQL block:

rem
rem     Script:         random_speed.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2010
rem

begin
        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',6));
        dbms_output.new_line;

        dbms_random.seed(0);
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
        dbms_output.put_line(dbms_random.string('U',1));
end;
/

Here are the results I got from instances of 12.1.0.2, 12.2.0.1, and 18.1.0.0 (from LiveSQL):


BVGFJB
B
V
G
F
J
B

I haven’t shown the tests for all the possible dbms_random.string() options but, unsurprisingly, changing the test to use the ‘L’ (lower case alpha) option produces the same effect (and the same 6 letters changed to lower case). The same effect, with different characters, also appeared using the ‘A’ (mixed case alpha), ‘X’ (uppercase alphanumeric) and ‘P’ (all printable characters) options.

I haven’t considered the effect of using a multi-byte character set – maybe Oracle calls its random number generator once per byte rather than once per character. The investigation is left as an exercise to the interested reader.

tl;dr

When generating a very large number of random strings – keep the “operational” part of the string as short as you can and leave the rest to be rpad()'ed.

Live Virtual Fluid Training in July

Jim Marion - Sun, 2018-06-10 21:46

In the Northern hemisphere, with days getting longer, and temperatures rising, many of us seek wet forms of recreation to keep cool. With the heat of summer upon us, I can't think of a better topic to study than the cool topic of PeopleSoft Fluid. That is why we are offering a remote live virtual Fluid training class during the hottest week of July. Additional details and registration links are available on our live virtual training schedule page. I look forward to having you in our virtual class!

New FY19 Cloud Platform Specializations Available

Showcase your expertise and distinguish yourself in the market by achieving one or more of our 120+ product and solution specializations. Specialized partners are recognized by Oracle and preferred...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Need Help with Oracle Security GDPR Training and Services

Pete Finnigan - Sun, 2018-06-10 02:46
I talked here a few days ago about GDPR in general and I also published my slides from my talk GDPR for the Oracle DBA. We have been helping clients secure data in their Oracle databases and training people....[Read More]

Posted by Pete On 09/06/18 At 04:33 PM

Categories: Security Blogs

Oracle Cloud Orchestration

Michael Dinh - Sat, 2018-06-09 08:57

Oracle Cloud has a pretty cool concept (Orchestration) to recreate an instance; however, it’s all hard coded.
The orchestration cannot be shared with someone else to create the same configuration under a different name or from a different account.
And what version is the Orchestration? Shouldn’t that be in the metadata?


{
"account" : "/Compute-601138841/default",
"description" : "",
"tags" : [ ],
"name" : "/Compute-601138841/me@yahoo.com/qs-classic",
"objects" : [ {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"managed" : true,
"description" : "qs-classic Storage Volume",
"bootable" : true,
"shared" : false,
"imagelist" : "/oracle/public/OL_7.2_UEKR4_x86_64",
"size" : "128G",
"properties" : [ "/oracle/public/storage/default" ],
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_storage"
},
"label" : "qs-classic_storage_1",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "StorageVolume",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/storage_1"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"description" : "qs-classic Security Rule (IP Network) Egress",
"tags" : [ "qs-classic" ],
"flowDirection" : "egress",
"acl" : "{{qs-classic_AccessControlList:name}}",
"enabledFlag" : true,
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_SecurityRule_Egress"
},
"label" : "qs-classic_SecurityRule_Egress",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "SecurityRule",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/f10f9f0d-9577-4f83-9b81-e1cc9d8bc9df"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"ipAddressPool" : "/oracle/public/public-ippool",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_IP_eth0_public"
},
"label" : "qs-classic_IP_eth0_public",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "IpAddressReservation",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/ac8dbfc5-d560-47e1-8d33-aeb54a0cc4c8"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"description" : "qs-classic Security Rule (IP Network)",
"tags" : [ "qs-classic" ],
"flowDirection" : "ingress",
"acl" : "{{qs-classic_AccessControlList:name}}",
"enabledFlag" : true,
"secProtocols" : [ "/oracle/public/ssh" ],
"dstVnicSet" : "{{qs-classic_VnicSet:name}}",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_SecurityRule"
},
"label" : "qs-classic_SecurityRule",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "SecurityRule",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/0ddfc441-7855-47a4-856f-15c400265975"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"appliedAcls" : [ "{{qs-classic_AccessControlList:name}}" ],
"description" : "qs-classic Virtual NIC Set",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_VnicSet",
"tags" : [ "qs-classic" ]
},
"label" : "qs-classic_VnicSet",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "VirtualNicSet",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/f7841713-54d1-4c9a-a6cb-32e3e84c753f"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"enabledFlag" : true,
"description" : "qs-classic Access Control List",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_AccessControlList",
"tags" : [ "qs-classic" ]
},
"label" : "qs-classic_AccessControlList",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "Acl",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/2f2915ca-5047-45bb-8875-0ae183a6425f"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : true,
"template" : {
"ipAddressPool" : "/oracle/public/cloud-ippool",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic_IP_eth0_cloud"
},
"label" : "qs-classic_IP_eth0_cloud",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "IpAddressReservation",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/e3f78f72-3eca-4dd1-a054-fdd3fee8a51b"
}, {
"account" : "/Compute-601138841/default",
"desired_state" : "inherit",
"description" : "",
"persistent" : false,
"template" : {
"networking" : {
"eth0" : {
"vnic" : "/Compute-601138841/me@yahoo.com/qs-classic_eth0",
"ipnetwork" : "/Compute-601138841/default",
"is_default_gateway" : true,
"nat" : [ "network/v1/ipreservation:{{qs-classic_IP_eth0_public:name}}", "network/v1/ipreservation:{{qs-classic_IP_eth0_cloud:name}}" ],
"vnicsets" : [ "{{qs-classic_VnicSet:name}}" ]
}
},
"name" : "/Compute-601138841/me@yahoo.com/qs-classic",
"boot_order" : [ 1 ],
"storage_attachments" : [ {
"volume" : "{{qs-classic_storage_1:name}}",
"index" : 1
} ],
"label" : "qs-classic",
"shape" : "oc3",
"imagelist" : "/oracle/public/OL_7.2_UEKR4_x86_64",
"sshkeys" : [ "/Compute-601138841/me@yahoo.com/qs-classic" ]
},
"label" : "qs-classic_instance",
"orchestration" : "/Compute-601138841/me@yahoo.com/qs-classic",
"type" : "Instance",
"name" : "/Compute-601138841/me@yahoo.com/qs-classic/instance"
} ],
"desired_state" : "active"
}

New Video : Online Relocation of a Pluggable Database

Hemant K Chitale - Fri, 2018-06-08 23:09
I have published a new YouTube Video: Online Relocation of a Pluggable Database.

.
.
.
Categories: DBA Blogs

Do you need the same column with the same check constraint twice? Create a domain!

Yann Neuhaus - Fri, 2018-06-08 07:29

Did you know that you can create domains in PostgreSQL? No, nothing to worry about. We’ll take Franck’s leave for a new opportunity as a chance to introduce the concept of domains. @Franck: Yes, although we all fully understand your decision and the reasons to move on to a new challenge, this post is dedicated to you and you need to be the example in the following little demo. Let’s go …

For the (not in any way serious) scope of this post, let’s assume that we do not want Franck to blog on our blog site anymore. We want to ban him. Of course we could simply delete his user account or disable the login. But, hey, we want to do that by using a domain, as that is much more fun to do. Let’s assume our blog software comes with two little tables that look like this:

postgres=# \d blogs
                            Table "public.blogs"
 Column |  Type   | Collation | Nullable |              Default              
--------+---------+-----------+----------+-----------------------------------
 id     | integer |           | not null | nextval('blogs_id_seq'::regclass)
 author | text    |           |          | 
 blog   | text    |           |          | 
Indexes:
    "blogs_pk" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "blog_comments" CONSTRAINT "comments_ref_blogs" FOREIGN KEY (blog_id) REFERENCES blogs(id)

postgres=# \d blog_comments
                             Table "public.blog_comments"
 Column  |  Type   | Collation | Nullable |                  Default                  
---------+---------+-----------+----------+-------------------------------------------
 id      | integer |           | not null | nextval('blog_comments_id_seq'::regclass)
 blog_id | integer |           |          | 
 author  | text    |           |          | 
 comment | text    |           |          | 
Indexes:
    "blog_comments__pk" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "comments_ref_blogs" FOREIGN KEY (blog_id) REFERENCES blogs(id)

If we want Franck to no longer be able to create blogs or to comment on blogs, we could do something like this:

postgres=# alter table blogs add constraint no_franck_blogs check ( author !~ '^Franck' );
ALTER TABLE
postgres=# alter table blog_comments add constraint no_franck_comments check ( author !~ '^Franck' );
ALTER TABLE

This will prevent Franck (actually it will prevent all people called Franck, but this is good in that case as we do not like people called Franck anymore) from inserting anything into these two tables:

postgres=# insert into blogs (author,blog) values ('Franck Pachot','another great blog');
ERROR:  new row for relation "blogs" violates check constraint "no_franck_blogs"
DETAIL:  Failing row contains (1, Franck Pachot, another great blog).

(Btw: Did you notice that you can use regular expressions in check constraints?)

This works and does what we want it to do. But there is an easier way of doing it. Currently we need to maintain two check constraints which are doing the same thing. By creating a domain we can centralize that:

postgres=# create domain no_franck_anymore as text check (value !~ '^Franck' );
CREATE DOMAIN

Once we have that we can use the domain in our tables:

postgres=# alter table blogs drop constraint no_franck_blogs;
ALTER TABLE
postgres=# alter table blog_comments drop constraint no_franck_comments;
ALTER TABLE
postgres=# alter table blogs alter column author type no_franck_anymore;
ALTER TABLE
postgres=# alter table blog_comments alter column author type no_franck_anymore;
ALTER TABLE
postgres=# \d blogs
                                 Table "public.blogs"
 Column |       Type        | Collation | Nullable |              Default              
--------+-------------------+-----------+----------+-----------------------------------
 id     | integer           |           | not null | nextval('blogs_id_seq'::regclass)
 author | no_franck_anymore |           |          | 
 blog   | text              |           |          | 
Indexes:
    "blogs_pk" PRIMARY KEY, btree (id)
Referenced by:
    TABLE "blog_comments" CONSTRAINT "comments_ref_blogs" FOREIGN KEY (blog_id) REFERENCES blogs(id)

postgres=# \d blog_comments
                                  Table "public.blog_comments"
 Column  |       Type        | Collation | Nullable |                  Default                  
---------+-------------------+-----------+----------+-------------------------------------------
 id      | integer           |           | not null | nextval('blog_comments_id_seq'::regclass)
 blog_id | integer           |           |          | 
 author  | no_franck_anymore |           |          | 
 comment | text              |           |          | 
Indexes:
    "blog_comments__pk" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "comments_ref_blogs" FOREIGN KEY (blog_id) REFERENCES blogs(id)

This still prevents Franck from blogging:

postgres=# insert into blogs (author,blog) values ('Franck Pachot','another great blog');
ERROR:  value for domain no_franck_anymore violates check constraint "no_franck_anymore_check"

… but we only need to maintain one domain and not two or more check constraints.
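And just to be sure the domain only blocks the Francks of this world, a quick sanity check (the author name here is purely illustrative):

postgres=# insert into blogs (author,blog) values ('Daniel Westermann','domains are handy');
INSERT 0 1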

 

This article Do you need the same column with the same check constraint twice? Create a domain! appeared first on Blog dbi services.

Massive Delete

Jonathan Lewis - Fri, 2018-06-08 03:14

The question of how to delete 25 million rows from a table of one billion came up on the ODC database forum recently. With changes in the numbers of rows involved it’s a question that keeps coming back, and I wrote a short series for AllthingsOracle a couple of years ago that discusses the issue. This note is just a catalogue of links to the articles:

There is an error in part 2 in the closing paragraphs – it says that the number of index entries deleted varies “from just one to 266”; it actually varies from 181 to 266.

 

Node-oracledb 2.3 with Continuous Query Notifications is on npm

Christopher Jones - Fri, 2018-06-08 01:48

Release announcement: Node-oracledb 2.3.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: Continuous Query Notifications. Heterogeneous Connection Pools.

 

 

Our 2.x release series continues with some interesting improvements: Node-oracledb 2.3 is now available for your pleasure. Binaries for the usual platforms are available for Node.js 6, 8, and 10; source code is available on GitHub. We are not planning on releasing binaries for Node.js 4 or 9 due to the end of life of Node.js 4, and the release of Node.js 10.

The main new features in node-oracledb 2.3 are:

  • Support for Oracle Database Continuous Query Notifications, allowing JavaScript methods to be called when database changes are committed. This is a cool feature useful when applications want to be notified that some data in the database has been changed by anyone.

    I recently posted a demo showing CQN and Socket.IO keeping a notification area of a web page updated. Check it out.

    The new node-oracledb connection.subscribe() method is used to register a Node.js callback method, and the SQL query that you want to monitor. It has two main modes: for object-level changes, and for query-level changes. These allow you to get notifications whenever an object changes, or when the result set from the registered query would be changed, respectively. There are also a bunch of configuration options for the quality-of-service and other behaviors.

    It's worth noting that CQN requires the database to establish a connection back to your node-oracledb machine. Commonly this means that your node-oracledb machine needs a fixed IP address, but it all depends on your network setup.

    Oracle Database CQN was designed for infrequently modified tables, so make sure you test your system scalability.

  • Support for heterogeneous connection pooling and for proxy support in connection pools. This allows each connection in the pool to use different database credentials.

    Some users migrating to node-oracledb had schema architectures that made use of this connection style for data encapsulation and auditing. Note that making use of the existing clientId feature may be a better fit for new code, or code that does mid-tier authentication.

  • A Pull Request from Danilo Silva landed, making it possible for Windows users to build binaries for self-hosting. Thanks Danilo! Previously this was only possible on Linux and macOS.

  • Support for 'fetchAsString' and 'fetchInfo' to allow fetching RAW columns as hex-encoded strings.

See the CHANGELOG for the bug fixes and other changes.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Grants WITH GRANT

Pete Finnigan - Thu, 2018-06-07 19:46
The ability to make grants on objects in the database, such as tables, views or procedures, with privileges such as SELECT, DELETE, EXECUTE and more, is the cornerstone of giving other users or schemas granular access to objects. I say granular....[Read More]
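For context, a minimal illustration of the feature in the title (user and table names made up): WITH GRANT OPTION lets the grantee pass the privilege on to others, which is exactly why such grants deserve scrutiny.

grant select on scott.emp to app_owner with grant option;
-- app_owner can now re-grant the privilege, even though the table belongs to SCOTT:
-- grant select on scott.emp to some_other_user;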

Posted by Pete On 07/06/18 At 06:58 PM

Categories: Security Blogs
