Feed aggregator

Converting rows to columns

Tom Kyte - Fri, 2018-06-15 11:06
Hi Tom, Hope you are good. I have a requirement where I need to display rows as columns. Suppose there are 2 rows with 4 columns each then the result should display 2*4 i.e, 8 columns. Is it possible just using SQL? Thanks
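
One way to do this in plain SQL is conditional aggregation; a minimal sketch, assuming a hypothetical table t(id, a, b, c, d) holding the two rows (id = 1 and 2):

select max(case when id = 1 then a end) as a1,
       max(case when id = 1 then b end) as b1,
       max(case when id = 1 then c end) as c1,
       max(case when id = 1 then d end) as d1,
       max(case when id = 2 then a end) as a2,
       max(case when id = 2 then b end) as b2,
       max(case when id = 2 then c end) as c2,
       max(case when id = 2 then d end) as d2
from   t;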
Categories: DBA Blogs

Oracle Goldengate

Tom Kyte - Fri, 2018-06-15 11:06
What is the advantage of Goldengate over Stream? Oracle Goldengate has high license cost compared to Streams. So, why an organization should use Goldengate for their data replication need and not Streams? Does Goldengate has advantage, which is wo...
Categories: DBA Blogs

What is the relationship of CPU, Memories against DB performances?

Tom Kyte - Fri, 2018-06-15 11:06
Hi Tom, Frequently I get asked quite a number of times when planning for a new server setup for the creation of databases. How much CPU cores should I get? How much Memories should I get. Normally I'll answer them, just get the highest cores & me...
Categories: DBA Blogs

DBMS_FILE_TRANSFER.PUT_FILE multiple "source_file_name"

Tom Kyte - Fri, 2018-06-15 11:06
Hi I am using Datapump to export dump file from a database and while exporting the dumpfile, I am splitting that dumpfile into multiple files. Now I want to transfer those files to another server using DBMS_FILE_TRANSFER.PUT_FILE. I know ...
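
DBMS_FILE_TRANSFER.PUT_FILE copies one file per call, so a hedged approach is simply to loop over the dump pieces; the directory objects DPUMP_DIR and DEST_DIR, the file-name pattern and the database link REMOTE_DB below are placeholders:

begin
  for i in 1 .. 4 loop
    dbms_file_transfer.put_file(
      source_directory_object      => 'DPUMP_DIR',
      source_file_name             => 'expdp_' || to_char(i, 'FM00') || '.dmp',
      destination_directory_object => 'DEST_DIR',
      destination_file_name        => 'expdp_' || to_char(i, 'FM00') || '.dmp',
      destination_database         => 'REMOTE_DB');  -- database link to the target server
  end loop;
end;
/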
Categories: DBA Blogs

UTL_FILE.FCOPY not working in FOR LOOP <file read error>

Tom Kyte - Fri, 2018-06-15 11:06
Hi There, I have a PIPELINED function which retrieves me filenames which I feed to UTL_FILE.FCOPY like below: <code>DECLARE PROCEDURE copy_var_templates (p_var_report_name st_string) IS lkv_template_dir CONSTANT st_string := 'T...
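
For reference, a minimal sketch of the shape such a loop usually takes; the directory names and the pipelined function get_report_files are placeholders, not the poster's actual code:

begin
  for r in (select column_value as filename from table(get_report_files)) loop
    utl_file.fcopy(src_location  => 'TEMPLATE_DIR',   -- directory object holding the templates
                   src_filename  => r.filename,
                   dest_location => 'REPORT_DIR',     -- directory object to copy into
                   dest_filename => r.filename);
  end loop;
end;
/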
Categories: DBA Blogs

Side-effects when working with associative array in pl/sql

Tom Kyte - Fri, 2018-06-15 11:06
I've noticed a strange side-effect when working with associative arrays in pl/sql. Basically, it appears that when an element of the array is passed to a procedure as "in out nocopy", then after the procedure finishes, Oracle copies the possibly updated element b...
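
A minimal sketch of the copy-back behaviour being described (my own illustration, not the poster's code): the element is passed "in out nocopy", yet on successful return the updated value is still written back into the array element.

-- run with: set serveroutput on
declare
  type t_tab is table of number index by pls_integer;
  l_tab t_tab;
  procedure bump(p in out nocopy number) is
  begin
    p := p + 1;
  end;
begin
  l_tab(1) := 10;
  bump(l_tab(1));
  dbms_output.put_line(l_tab(1));  -- prints 11: the value was copied back
end;
/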
Categories: DBA Blogs

Extracting attributes from JSON documents

Tom Kyte - Fri, 2018-06-15 11:06
Hi all, Have question in JSON array accessing along with normal columns like below, <code> (Reports : [( 'reportname': 'abc', 'Sort order':'abc', 'sortlabel':'name', 'columns' :[ ( 'component' : 'q_test1', ...
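
A hedged sketch of reading such a nested array alongside normal columns with JSON_TABLE (12.1.0.2 and later); the table docs(id, json_col) and the attribute names are assumptions based on the snippet:

select d.id, jt.reportname, jt.sortlabel, jt.component
from   docs d,
       json_table(d.json_col, '$.Reports[*]'
         columns (
           reportname varchar2(30) path '$.reportname',
           sortlabel  varchar2(30) path '$.sortlabel',
           nested path '$.columns[*]'
             columns (component varchar2(30) path '$.component')
         )) jt;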
Categories: DBA Blogs

Get a JSON from a SQL query

Tom Kyte - Fri, 2018-06-15 11:06
Hello! Just a question. Is it possible to write a query that returns a JSON code? If yes, could you give me a brief example? Thanks!
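
Yes; from 12.2 onwards the SQL/JSON generation functions can return JSON directly. A minimal sketch against a hypothetical emp table:

-- one JSON object per row
select json_object('name' value e.ename, 'salary' value e.sal) as emp_json
from   emp e;

-- or the whole result set as a single JSON array
select json_arrayagg(json_object('name' value e.ename, 'salary' value e.sal)) as emps_json
from   emp e;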
Categories: DBA Blogs

Index-Organized Materialized View with different primary key than the master table?

Tom Kyte - Fri, 2018-06-15 11:06
Dear Oracle-Team, we need a daily snapshot from the company's personal data for our software. For that reason we want to use an index-organized materialized view (with daily 'refesh complete'). Unfortunately there are two user id's for every empl...
Categories: DBA Blogs

Italian Core Banking Market Takes Major Leap Forward with Cabel and Oracle

Oracle Press Releases - Fri, 2018-06-15 07:00
Press Release
Italian Core Banking Market Takes Major Leap Forward with Cabel and Oracle
Invest Banca S.p.A. is the first Italian bank to implement Oracle FLEXCUBE, localized and integrated by banking outsourcer Cabel Industry S.p.A.

Redwood Shores, Calif.—Jun 15, 2018

Cabel, an IT service provider for the financial services market in Italy since 1985, and Oracle Financial Services Global Business Unit, announced the availability today of Oracle FLEXCUBE for the Italian market. Oracle FLEXCUBE is a core banking solution that has been adopted by more than 600 financial institutions around the world.

Cabel and Oracle have collaborated since 2016 to localize the Oracle FLEXCUBE solution to improve the process of marketing new products and services to the Italian market, where client requirements are evolving rapidly. In recent months, the Oracle FLEXCUBE solution has been adapted to the regulations governing the Italian banking market and is now fully able to support the typical activity of the Italian banking system.

“The attitude of the banking system towards innovation is changing and at the same time there is a growing interest in the world of fintech. Invest Banca, thanks to the Oracle FLEXCUBE solution, has taken a decisive step forward. We went live with this Open Banking Platform May 7th and Oracle FLEXCUBE now allows us to easily and efficiently integrate with a series of specialized solutions already in use by our retail and institutional clients, but moreover, it allows us to keep apace with ever more demanding banking regulations, such as MiFID, PSD2 and GDPR. It also facilitates our interaction and experimentation with the latest technology advances such as Robo-Advisor, artificial intelligence, data science, social trading and blockchain,” said Stefano Sardelli, Managing Director of Invest Banca.

Cabel implemented the Italian version of Oracle FLEXCUBE, making it possible to integrate it into an already live and running banking system covering other banking operations. The solution can be outsourced or used on premises.

"This is a radically innovative solution, because it is a technology that facilitates the creation of lean products and services that are independent and based on completely different and more modern logic than traditional core banking systems in Italy,” said Francesco Bosio, President of Cabel Holding S.p.A.

“Oracle’s strategy is to work with leading local partners, who bring local domain skills to our best-in-class global solutions," said Chet Kamat, Senior Vice President, Banking, Oracle Financial Services. "Cabel is an innovation-oriented company and we chose to work with Cabel knowing it could fully utilize our modern, flexible technology to respond to the changes imposed by the digital age. As a result, Italian banks will see a significant improvement in their own productivity and market offerings—and their customers will get the benefit of excellent customer experience.”

Contact Info
Stefano Cassola
Oracle
39 022 495 9032
stefano.cassola@oracle.com
Sara D’Agati
Cabel Industry S.p.A.
39 339 8610096
sara.dagati@hfilms.net
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

How to replace old Exadata storage cells with the new X7-2 storage cells, without downtime

Alejandro Vargas - Fri, 2018-06-15 03:32

Lately I have been helping various customers replace their old storage cells with the new X7-2 ones.

There are huge benefits in doing this: the X7 has 25.5 TB of flash, 12 x 10 TB disks and 192 GB of DDR4 memory.

X7-2 hardware

The question my customers asked the most was: Can we do the migration from our old storage to the new X7 without downtime and without risk?

The answer was: YES!

To do this I've prepared and implemented a procedure that covers, step by step, how to migrate critical data from production databases while those databases are online, without downtime and without risk.

So far I've done two of these migrations in 2 weeks, one in Haifa and one in Istanbul.

The Haifa migration was run on a machine without customers working on it. The Istanbul migration was implemented on a critical production database with full customer load.

The customer was very happy to see the data transferred quickly and safely without affecting the online business.

This is the power of both Exadata and ASM: a migration that only a few years ago would have required a tremendous planning effort and most probably downtime can now be run online, without affecting the performance of critical workloads!

In summary, the steps of the migration include the following:

1. Attach the new cells to the Exadata rack and set up the ILOM with its final IP on the customer network

2. Connect via ILOM to the new storage servers and set up the network with the customer values

3. Upgrade the new storage servers to the latest Exadata storage software version

4. Customize the new servers to reflect customer preferences: mail alerts, writeback, ASR server, etc.

5. Create celldisks and griddisks to match the existing disk groups

6. Extend the existing disk groups onto the new cells and wait for the first rebalance to complete

7. Once the first rebalance completes, drop the failgroups from the old cells and wait for the second rebalance to complete

8. Once the second rebalance completes, flush the flash cache on the old cells, drop their griddisks and celldisks, and shut down the cells

9. Check the free space available on the new cells and increase the size of the griddisks to use all of it as required

10. On the ASM instance, resize all griddisks in the disk groups where you increased the griddisk size, and wait for the third rebalance to complete (see the sketch below).
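
A minimal sketch of the ASM side of step 10, assuming a disk group named DATA whose griddisks have already been grown on the cells; the rebalance power is only an example value:

alter diskgroup DATA resize all rebalance power 32;

-- monitor the rebalance; it is finished once the view returns no rows
select group_number, operation, state, est_minutes from v$asm_operation;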

Categories: DBA Blogs

ChitChat for OBIEE - Now Available as Open Source!

Rittman Mead Consulting - Fri, 2018-06-15 03:20

ChitChat is the Rittman Mead commentary tool for OBIEE. ChitChat enhances the BI experience by bridging conversational capabilities into the BI dashboard, increasing ease-of-use and seamlessly joining current workflows. From tracking the history behind analytical results to commenting on specific reports, ChitChat provides a multi-tiered platform built into the BI dashboard that creates a more collaborative and dynamic environment for discussion.

Today we're pleased to announce the release into open-source of ChitChat! You can find the github repository here: https://github.com/RittmanMead/ChitChat

Highlights of the features that ChitChat provides include:

  • Annotate - ChitChat's multi-tiered annotation capabilities allow BI users to leave comments where they belong, at the source of the conversation inside the BI ecosystem.

  • Document - ChitChat introduces the ability to include documentation inside your BI environment for when you need more than a comment. Keeping key materials contained inside the dashboard gives the right people access to key information without searching.

  • Share - ChitChat allows you to bring attention to important information on the dashboard using the channel or workflow manager you prefer.

  • Verified Compatibility - ChitChat has been tested against popular browsers, operating systems, and database platforms for maximum compatibility.

Getting Started

In order to use ChitChat you will need OBIEE 11.1.1.7.x, 11.1.1.9.x or 12.2.1.x.

First, download the application and unzip it to a convenient access location in the OBIEE server, such as a home directory or the desktop.

See the Installation Guide for full detail on how to install ChitChat.

Database Setup

Build the required database tables using the installer:

cd /home/federico/ChitChatInstaller  
java -jar SocializeInstaller.jar -Method:BuildDatabase -DatabasePath:/app/oracle/oradata/ORCLDB/ORCLPDB1/ -JDBC:"jdbc:oracle:thin:@192.168.0.2:1521/ORCLPDB1" -DatabaseUser:"sys as sysdba" -DatabasePassword:password -NewDBUserPassword:password1  

The installer will create a new user (RMREP) and the tables required for the application to operate correctly. The -DatabasePath flag tells the installer where to place the datafiles for ChitChat on your database server. -JDBC indicates which JDBC driver to use, followed by a colon and the JDBC string to connect to your database. -DatabaseUser specifies the user to access the database with. -DatabasePassword specifies the password for that user. -NewDBUserPassword indicates the password for the new user (RMREP) being created.

WebLogic Data Source Setup

Add a Data Source object to WebLogic using WLST:

cd /home/federico/ChitChatInstaller/jndiInstaller  
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./create-ds.py

To use this script, modify the ds.properties file using the method of your choice. The following parameters must be updated to reflect your installation: domain.name, admin.url, admin.userName, admin.password, datasource.target, datasource.url and datasource.password.

Deploying the Application on WebLogic

Deploy the application to WebLogic using WLST:

cd /home/federico/ChitChatInstaller  
$ORACLE_HOME/oracle_common/common/bin/wlst.sh ./deploySocialize.py

To use this script, modify the deploySocialize.py file using the method of your choice. The first line must be updated with the username, password and URL to connect to your WebLogic Server instance. The second parameter of the deploy command must be updated to reflect your ChitChat access location.

Configuring the Application

ChitChat requires several configuration parameters to allow the application to operate successfully. To change the configuration, you must log in to the database schema as the RMREP user and update the values manually in the APPLICATION_CONSTANT table.

See the Installation Guide for full detail on the available configuration and integration options.

Enabling the Application

To use ChitChat, you must add a small block of code on any given dashboard (in a new column on the right-side of the dashboard) where you want to have the application enabled:

<rm id="socializePageParams"  
user="@{biServer.variables['NQ_SESSION.USER']}"  
tab="@{dashboard.currentPage.name}"  
page="@{dashboard.name}">  
</rm>  
<script src="/Socialize/js/dashboard.js"></script>  

Congratulations! You have successfully installed the Rittman Mead commentary tool. To use the application to its fullest capabilities, please refer to the User Guide.

Problems?

Please raise any issues on the github issue tracker. This is open source, so bear in mind that it's no-one's "job" to maintain the code - it's open to the community to use, benefit from, and maintain.

If you'd like specific help with an implementation, Rittman Mead would be delighted to assist - please do get in touch with Jon Mead or DM us on Twitter @rittmanmead to get access to our Slack channel for support about ChitChat.

Please contact us on the same channels to request a demo.

Categories: BI & Warehousing

Convert a WebLogic Cluster from configured to dynamic

Yann Neuhaus - Fri, 2018-06-15 00:14

Unless the servers in the cluster are asymmetric, which is not recommended anyway, dynamic clusters have many advantages over configured clusters:

  1. Ensure Cluster Member Uniformity
  2. Easily add new servers to manage more traffic
  3. Automatically adapt to load to add/remove managed servers
  4. Can still contain configured servers, even though this is not recommended (see point 1)
Server template

A server template defines a set of attributes. A change in a template will be propagated to all servers depending on it. A dynamic cluster can be based on a server template.

Here is an overview of the configured to dynamic change:

  1. Write down all customized parameters of the servers that are members of the cluster.
  2. Create new dynamic cluster
  3. Carry over all settings. There are two specificities of dynamic clusters:
    1. Listen port and SSL port, which can be either:
      1. Static, meaning all servers of the cluster will have the same port. This is the best option when you have a one-server-to-one-machine mapping
      2. Calculated, meaning each server will listen on a different port, in steps of 1. For example, if the first port is set to 7000, server 1 will listen on 7001, server 2 on 7002, …
    2. Machine binding: use of a specific (filtered) subset of machines from the cluster
  4. Create new server template
Procedure
  1. In left tree, go in Environment, Clusters, Server Templates
  2. Lock & Edit the configuration and click New
    1. Name the template:
      5 - Server Template Name
    2. In the server template list, select the newly created template to customize the parameters.
    3. In General tab, select the Cluster that needs to be converted:
      6 - Server Template Name
    4. Save
  3. In Environment, Clusters, Servers tab, set Server Template name:
    Cluster settings - servers tab
  4. Save and Activate the changes
  5. In the Environment, Servers folder, the dynamically created servers will be displayed (4 in our example):
    7 - Servers List
    Note that servers are distributed across the available machines in round robin and that the listen port and SSL port are incremented.
  6. Then start these new servers and check that the application is running correctly.
  7. Finally, stop the configured managed servers by selecting “When work completes”:
    8 - stop when complete

Cluster is now dynamic and you can easily add or remove managed servers from cluster.

What Next ?

This was a quick overview of how to convert a configured cluster to a dynamic one. As we saw, it still requires manual intervention to add or remove servers from the cluster.

Starting with 12.2.1, WebLogic introduces a new feature called “elasticity”. This feature automatically scales the number of managed servers in the cluster based on user-defined policies.

Thanks to WebLogic Diagnostic Framework (WLDF) policies, it is possible to monitor memory, CPU usage and threads, and then trigger a scale-up or scale-down action.

 

Cet article Convert a WebLogic Cluster from configured to dynamic est apparu en premier sur Blog dbi services.

Migrating Your Database to Oracle Cloud

Gerger Consulting - Thu, 2018-06-14 22:47
Oracle Database Cloud is increasingly becoming an attractive option for running databases. However, moving all our data to the cloud still represents an interesting problem. Attend the free webinar by Oracle ACE Director and OCM Kamran Agayev and learn the eight ways you can migrate your databases to the Oracle Cloud. Register at this link.


About the Webinar

Are you planning to move your on-premise database to Oracle Cloud? Are you looking for the best way to achieve it?
In this session, Oracle Certified Master and ACE Director Kamran Agayev will show you how you can migrate your production database to Oracle Public Cloud using various methods such as Data Pump, cross-platform transportable tablespaces with incremental backups, Data Guard, GoldenGate, etc.
During the presentation Kamran will present step-by-step guides for eight practical migration scenarios that will help you migrate your database to the Oracle Cloud easily.

Categories: Development

Real-time Sailing Yacht Performance - Kafka (Part 2)

Rittman Mead Consulting - Thu, 2018-06-14 08:18

In the last two blogs Getting Started (Part 1) and Stepping back a bit (Part 1.1) I looked at what data I could source from the boat's instrumentation and introduced some new hardware to the boat to support the analysis.

Just to recap, I am looking to create the yacht's polars with a view to improving our knowledge of her abilities (whether we can use this to improve our race performance is another matter).

Polars give us a plot of the boat's speed given a true wind speed and angle. This, in turn, informs us of the optimal speed the boat could achieve at any particular angle to wind and wind speed.

In the first blog I wrote a reader in Python that takes messages from a TCP/IP feed and writes the data to a file. The reader is able, using a hash key, to validate each message (see Getting Started (Part 1)). I'm also converting valid messages into a JSON format so that I can push meaningful structured data downstream. In this blog, I'll cover the architecture and considerations around the setup of Kafka for this use case. I will not cover the installation of each component; there has been a lot written in this area already. (We have some internal IP to help with configuration.) I then discuss the process I went through to get the data displayed in real time in a Grafana dashboard.

Introducing Kafka

I have introduced Kafka into the architecture as a next step.

Why Kafka?

I would like to be able to stream this data in real time and don't want to build my own batch mechanism or create a publish/subscribe model. With Kafka I don't need to check that messages have been successfully received, and if there is a failure while consuming messages the consumers will keep track of what has been consumed. If a consumer fails it can be restarted and it will pick up where it left off (the consumer offset is stored in Kafka as a topic). In the future, I could scale out the platform and introduce some resilience through clustering and replication (this shouldn't be required for a while). Kafka therefore is saving me a lot of manual engineering and will support future growth (should I come into money and be able to afford more sensors for the boat).

High level architecture

Let's look at the high-level components and how they fit together. Firstly I have the instruments transmitting on wireless TCP/IP, and these messages are read using the Python code I wrote earlier in the year.

I have enhanced that Python code to read and translate the messages and, instead of writing to a file, stream the JSON messages to a topic in Kafka.

Once the messages are in Kafka I use Kafka Connect to stream the data into InfluxDB. The messages are written to topic-specific measurements (tables in InfluxDB).

Grafana is used to display incoming messages in real time.

Kafka components

I am running the application on a MacBook Pro. Basically a single node instance with zookeeper, Kafka broker and a Kafka connect worker. This is the minimum setup with very little resilience.

In summary

ZooKeeper is an open-source server that enables distributed coordination of configuration information. In the Kafka architecture ZooKeeper stores metadata about brokers, topics, partitions and their locations. ZooKeeper is configured in zookeeper.properties.

Kafka broker is a single Kafka server.

"The broker receives messages from producers, assigns offsets to them, and commits the messages to storage on disk. It also services consumers, responding to fetch requests for partitions and responding with the messages that have been committed to disk." 1

The broker is configured in server.properties. In this setup I have set auto.create.topics.enable=false. Setting this to false gives me control over the environment: as the name suggests, it disables the automatic creation of topics, which could otherwise lead to confusion.

The Kafka Connect worker allows us to take advantage of predefined connectors that enable the writing of messages from Kafka to known external datastores. The worker is a wrapper around a Kafka consumer. A consumer is able to read messages from a topic partition using offsets. Offsets keep track of what has been read by a particular consumer or consumer group. (Kafka Connect workers can also write to Kafka from datastores, but I am not using this functionality in this instance.) The Connect worker is configured in connect-distributed.properties. I have defined the location of the plugins in this configuration file. Connector definitions are used to determine how to write to an external data source.

Producer to InfluxDB

I use kafka-python to stream the messages into kafka. Within kafka-python there is a KafkaProducer that is intended to work in a similar way to the official java client.

I have created a producer for each message type (parameterised code). Although each producer reads the entire stream from the TCP/IP port, it only processes its assigned message type (wind or speed), increasing parallelism and therefore throughput.

  import json
  from kafka import KafkaProducer
  producer = KafkaProducer(bootstrap_servers='localhost:9092', value_serializer=lambda v: json.dumps(v).encode('utf-8'))
  producer.send(topic, json_str)

I have created a topic per message type with a single partition. Using a single partition per topic guarantees I will consume messages in the order they arrive. There are other ways to increase the number of partitions and still maintain the read order but for this use case a topic per message type seemed to make sense. I basically have optimised throughput (well enough for the number of messages I am trying to process).

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic wind-json

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic speed-json

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic gps-json 

When defining a topic you specify the replication factor and the number of partitions.

The topic-level configuration is replication.factor. At the broker level, you control the default.replication.factor for automatically created topics. 1 (I have turned off the automatic creation of topics).

The messages are consumed using Stream Reactor, which has an InfluxDB sink mechanism and writes directly to the measurements within a performance database I have created. The following parameters, showing the topics and insert mechanism, are configured in performance.influxdb-sink.properties.

topics=wind-json,speed-json,gps-json

connect.influx.kcql=INSERT INTO wind SELECT * FROM wind-json WITHTIMESTAMP sys_time();INSERT INTO speed SELECT * FROM speed-json WITHTIMESTAMP sys_time();INSERT INTO gps SELECT * FROM gps-json WITHTIMESTAMP sys_time()

The following diagram shows the detail from producer to InfluxDB.

If we now run the producers we get data streaming through the platform.

Producer Python log showing JSON formatted messages:

The status of the consumers shows minor lag reading from two topics; the describe also shows the current offsets for each consumer task and the partitions being consumed (if we had a cluster it would show multiple hosts):

Inspecting the InfluxDB measurements:

When inserting into a measurement in InfluxDB, if the measurement does not exist it gets created automatically. The datatypes of the fields are determined from the JSON object being inserted. I needed to adjust the creation of the JSON message to cast the values to floats, otherwise I ended up with the wrong types, which caused reporting issues in Grafana. This would be a good case for using Avro and Schema Registry to handle these definitions.

The following gif shows Grafana displaying some of the wind and speed measurements using a D3 Gauge plugin with the producers running to the right of the dials.

Next Steps

I'm now ready to do some real-life testing on our next sailing passage.

In the next blog, I will look at making the setup more resilient to failure and at how to monitor and automatically recover from some of these failures. I will also introduce the WorldMap panel to Grafana so I can plot the locations where the readings were taken and overlay tidal data.

References
Categories: BI & Warehousing

Is adding a column to a typed table in PostgreSQL instant?

Yann Neuhaus - Thu, 2018-06-14 06:26

Today at the SOUG Day I did some little demos and one of them was about creating typed tables. In the demo the two tables did not contain any rows and one of the questions was: when these tables contain a million rows, would adding a column be instant as well? Let's do a quick test.

Same setup as in the post referenced above: Two schemas, one type, two tables based on the type:

postgres=# select version();
                                                          version                                                           
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 10.4 build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)
postgres=# create schema a;
CREATE SCHEMA
postgres=# create schema b;
CREATE SCHEMA
postgres=# create type typ1 as ( a int, b text );
CREATE TYPE
postgres=# create table a.t1 of typ1;
CREATE TABLE
postgres=# create table b.t1 of typ1;
CREATE TABLE
postgres=# insert into a.t1
postgres-# select a.*
postgres-#      , md5(a::text)
postgres-#   from generate_series(1,1000000) a;
INSERT 0 1000000
postgres=# insert into b.t1 select * from a.t1;
INSERT 0 1000000
postgres=# 

Both of the tables contain 1’000’000 rows so how much time would a modification of the type take?

postgres=# \timing
Timing is on.
postgres=# alter type typ1 add attribute c timestamp cascade;;
ALTER TYPE
Time: 9.338 ms
Time: 0.867 ms
postgres=# \d a.t1
                             Table "a.t1"
 Column |            Type             | Collation | Nullable | Default 
--------+-----------------------------+-----------+----------+---------
 a      | integer                     |           |          | 
 b      | text                        |           |          | 
 c      | timestamp without time zone |           |          | 
Typed table of type: typ1

postgres=# \d b.t1
                             Table "b.t1"
 Column |            Type             | Collation | Nullable | Default 
--------+-----------------------------+-----------+----------+---------
 a      | integer                     |           |          | 
 b      | text                        |           |          | 
 c      | timestamp without time zone |           |          | 
Typed table of type: typ1

Almost instant. Btw: Of course you can also remove an attribute from the type:

postgres=# alter type typ1 drop attribute c cascade;
ALTER TYPE
Time: 14.417 ms
postgres=# \d a.t1
                   Table "a.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | text    |           |          | 
Typed table of type: typ1

postgres=# \d b.t1
                   Table "b.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | text    |           |          | 
Typed table of type: typ1
 

Cet article Is adding a column to a typed table in PostgreSQL instant? est apparu en premier sur Blog dbi services.

Sizing clusters

Tom Kyte - Wed, 2018-06-13 22:46
My question is generally how to determine the size to set the SIZE value in a create cluster statement. And specifically for a parent table 400 bytes wide with 15 million rows, and a child 120 bytes wide with 40 million rows. There may be a...
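
A hedged back-of-the-envelope for the figures given: roughly 40M/15M ≈ 2.7 child rows per parent key, so each cluster key holds about 400 + 2.7 * 120 ≈ 720 bytes; the object names below are placeholders:

create cluster parent_child_cluster (parent_id number)
  size 800;   -- expected bytes of parent row + child rows per cluster key, rounded up from ~720

create index parent_child_cluster_idx on cluster parent_child_cluster;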
Categories: DBA Blogs

SQL with inline view Errors in 11g with ORA-00979 Not a Group BY expression, but runs in 12c

Tom Kyte - Wed, 2018-06-13 22:46
Hi Following SQL (correctly) errors with "ORA-00979 Not a Group BY Expression" when run on 11.2.0.4.0. But when run on 12.1.0.2.0, SQL runs without error and returns incorrect information for total_tablespace_size - All rows return 0.5 whereas i...
Categories: DBA Blogs

Leaving fake hints in queries

Tom Kyte - Wed, 2018-06-13 22:46
Hello, I'm creating a script to automatically generate plan reports using DBMS_XPLAN.DISPLAY_CURSOR, and to do so I want to put in a standard comment in the table and query it via dba_source and v$sql. e.g. <code>select /* xplan_my_test_pkg01 ...
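
A minimal sketch of that approach, assuming the tag style shown in the question; the tagged query itself is only an example:

select /* xplan_my_test_pkg01 */ count(*) from dual;

-- find the tagged cursor in v$sql and pass it to DBMS_XPLAN
select p.*
from   v$sql s,
       table(dbms_xplan.display_cursor(s.sql_id, s.child_number)) p
where  s.sql_text like 'select /* xplan_my_test_pkg01 */%';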
Categories: DBA Blogs

Oracle Can Generate 6 Password Hashes When a User is Added or Password Changed in 12.1.0.2 and Above

Pete Finnigan - Wed, 2018-06-13 22:46
In a 12.2.0.2 database it's possible that Oracle generates 6 different password hashes for one password for one user under certain circumstances when a password is changed or created (user is created). I will layout the 6 different ones first....[Read More]

Posted by Pete On 13/06/18 At 09:02 PM

Categories: Security Blogs
