Feed aggregator

List of “Dashboard” Startups and Products

Nilesh Jethwa - 7 hours 42 min ago

The idea of building a dashboard application has become synonymous with the idea of a developer deciding to build a "To-do" application.

This is evident from the sheer number of dashboard-related applications that developers and founders have submitted to Hacker News over the past several years.

The dashboards span a wide spectrum: from personal dashboards, project-management dashboards, and network dashboards to full-blown social media dashboards.

If you are looking to build the next dashboard application, or are just shopping, it is very interesting to look at all the "Show HN" dashboard submissions.

Read more at http://www.infocaptor.com/dashboard/list-of-dashboard-startups

Set autotrace on

Tom Kyte - 17 hours 50 min ago
Hi Tom, when I enter the statement set autotrace on I get the following error: SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled SP2-0611: Error enabling STATISTICS report. How can I solve it?
Categories: DBA Blogs

hash join

Tom Kyte - 17 hours 50 min ago
Hi Tom, I have your book and am still not able to understand the mechanism of a hash join. It just says something similar to this (I do not have the book at work so cannot reproduce it exactly): one table would be hashed into memory and then the other table...
Categories: DBA Blogs

SQL Loader - Load CSV file, double quoted record, dual double quoted field

Tom Kyte - 17 hours 50 min ago
Hello, please tell me how to write the control file to load the following data (without quotes) using sqlldr into three separate data fields. We get such a csv file from our client and they can't change the way it is generated. This file opens fine in M...
Categories: DBA Blogs

ORA-29471: DBMS_SQL access denied

Tom Kyte - 17 hours 50 min ago
I have a problem in procedure: procedure .. begin v_c := dbms_sql.open_cursor; v_c2 := dbms_sql.open_cursor; dbms_sql.parse(v_c, v_sql, dbms_sql.native); dbms_sql.parse(v_c2, v_query, dbms_sql.native); v_stmt := dbms_sql.execute(...
Categories: DBA Blogs

Error in Opening KeyStore in Oracle 12c R2

Tom Kyte - 17 hours 50 min ago
I am trying to create a TDE example. I was able to create a keystore; but when I open the keystore, I am getting ORA-28367. What am I missing here? Thanks very much in advance. <code> SQL> select * from v$version ; BANNER ...
Categories: DBA Blogs

Oracle 12c - SQL query with inline PL/SQL function

Tom Kyte - 17 hours 50 min ago
I'm playing around with Oracle 12c and trying out the new features. One of the new features is that the WITH clause in a SQL query now allows for a PL/SQL function. I created the following sample query: <code> WITH FUNCTION get_number RE...
Categories: DBA Blogs

How do I determine where the ora_home directory is using SQL or PL/SQL?

Tom Kyte - 17 hours 50 min ago
I tried the following: <code> DECLARE theresult varchar2(1000); begin dbms_system.get_env('ORACLE_HOME', theresult); dbms_output.put_line('result = ' || theresult); end; </code> but got the following error <code>line 2: ORA-0655...
Categories: DBA Blogs

nls_date_format difference between v$parameter and database_properties

Tom Kyte - 17 hours 50 min ago
When I query v$parameter, nls_date_format is YYYY-MM-DD When I query database_properties, nls_date_format is DD-MON-RR Why is it different? Could this cause problems?
Categories: DBA Blogs

Latest Updates + FREE Training This Week

Online Apps DBA - 18 hours 17 min ago

This week, you will find: 1. Oracle GoldenGate 12c Administration Training Review   1.1 GoldenGate Day 1 Architecture Overview & Installation: Lessons Learned & Key Takeaways   1.2 GoldenGate Day 2 Processes Configuration & Replication Setup: Lessons Learned & Key Takeaways 2. Oracle Apps DBA – Troubleshoot/Debug Long Running Concurrent Request in Oracle EBS (R12/11i) 3. Oracle SOA Suite […]

The post Latest Updates + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Node-oracledb v2 Query Methods and Fetch Tuning

Christopher Jones - 20 hours 46 min ago
Computer screen showing random javascript code (Photo by Markus Spiske on unsplash.com)

 

For node-oracledb v2, our Node.js add-on for Oracle Database, we revamped the lower data access layer and made it into a standalone project, ODPI-C, which is being reused by several other language APIs. ODPI-C gave us a reason and opportunity to simplify some of the internal query handling code in node-oracledb.

To recap, node-oracledb has four ways to execute queries against an Oracle Database. These are the same in version 1 and version 2:

  • Direct Fetches - these are non-ResultSet, non-queryStream() fetches. All rows are returned in one big array, limited to maxRows (v2 allows an unlimited array size).

  • ResultSet getRow() - return one row on each call until all rows are returned.

  • ResultSet getRows(numRows) - return batches of rows in each call until all rows are returned.

  • queryStream() - stream rows until all rows are returned.

The changes in node-oracledb v2 are:

  • Enhanced direct fetches to allow an unlimited number of rows to be fetched, and made this the default. This occurs when maxRows = 0.

  • Replaced prefetchRows (previously used for internal fetch buffering and tuning) with a new property fetchArraySize; the default size is 100.

    fetchArraySize affects direct fetches, ResultSet getRow() and queryStream().

  • getRows(numRows,...) internal fetch buffering is now only tuned by the numRows value. Previously prefetchRows could also affect internal buffering.

  • queryStream() now uses fetchArraySize for internal buffer sizing.

  • Implemented getRow() in JavaScript for better performance, and it now uses fetchArraySize to tune internal buffer sizes.

The change in v2 that I want to discuss is how a 'direct fetch' does its internal buffering.

To make fetching an unlimited number of rows feasible for direct fetches, data is now internally fetched from Oracle Database in batches of size fetchArraySize and then concatenated to form one big array of results that is returned in the callback. This lets you use fetchArraySize to tune fetch performance. In node-oracledb v1, one big array of size maxRows was allocated before the data was fetched from the database. (With node-oracledb v2 you can have the same behavior, if you really want it, by setting fetchArraySize = maxRows, where maxRows > 0).
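
To make this concrete, here is a minimal sketch (not from the original post) of a direct fetch tuned with fetchArraySize. The query, the bind value of 10000, and the dbconfig.js connection file are placeholders in the style of the full scripts at the end of this post; setting fetchArraySize equal to maxRows (with maxRows > 0) gives the v1-style single allocation mentioned above.

// Minimal sketch: tuning a direct fetch. dbconfig.js is assumed to export
// connection credentials, as in the scripts shown later in this post.
var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');

oracledb.getConnection(dbConfig, function (err, connection) {
  if (err) throw err;
  connection.execute(
    'select object_name from all_objects where rownum <= :mr',
    [ 10000 ],
    // fetchArraySize tunes the size of each internal fetch from the database;
    // setting fetchArraySize = maxRows (maxRows > 0) mimics the v1 behaviour
    // of one big up-front allocation.
    { maxRows: 10000, fetchArraySize: 10000 },
    function (err, result) {
      if (err) throw err;
      console.log('Fetched ' + result.rows.length + ' rows');
      connection.close(function (err) {
        if (err) console.error(err.message);
      });
    });
});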

Let's look at two different scenarios that fetch 50000 rows with fetchArraySize of 100 (the default) and 1000. The code is at the end of the post.

Direct fetch: rows: 50000, batch size: 100, seconds: 4.054
Direct fetch: rows: 50000, batch size: 1000, seconds: 1.643

You can see in this case (with a local database) that increasing fetchArraySize improved performance. There may be various factors for this, but the most common one is reduced network cost and reduced load on the database because there are fewer 'round trips' to get batches of records. Each query and environment is different and will require its own tuning.

The benefits of using fetchArraySize for direct fetches are:

  • Performance of batching and network transfer of data can be tuned.

  • Memory can grow incrementally when the number of query rows is unknown, or varies from execution to execution. A single large chunk of memory (based on maxRows in v1) doesn't need to be pre-allocated to handle the 'worst case' of a large number of rows.

There are two drawbacks with direct fetches:

  • One big array of results is needed. This is the same in v1 and v2.

  • The concatenation of batches of records can use more memory than the final array requires, and can cause fragmentation.

Let's look at timings for all query methods. This is one run; there was expected variability each time I ran the scripts. The 'batch size' number is numRows for getRows(numRows) calls, or fetchArraySize for the other fetch methods.

Direct fetch: rows: 50000, batch size: 100, seconds: 4.054
ResultSet getRow(): rows: 50000, batch size: 100, seconds: 1.625
ResultSet getRows(): rows: 50000, batch size: 100, seconds: 1.586
queryStream(): rows: 50000, batch size: 100, seconds: 1.691
Direct fetch: rows: 50000, batch size: 1000, seconds: 1.643
ResultSet getRow(): rows: 50000, batch size: 1000, seconds: 1.471
ResultSet getRows(): rows: 50000, batch size: 1000, seconds: 1.327
queryStream(): rows: 50000, batch size: 1000, seconds: 1.415

The ResultSet and queryStream() methods don't have to store all rows in memory at once, so there is less memory management involved. The obvious outlier is the first result: the overhead of concatenating many small chunks of memory into one big array is significant. We have some ideas on what we can do inside node-oracledb to improve this a bit, but that is a future project to be investigated, and it can't solve the first drawback: all rows eventually have to be held in memory at the same time.

The conclusion is to use ResultSets or streaming for large numbers of rows. This is the same recommendation we gave for v1.

For small numbers of rows, the various query methods perform pretty much the same. The timings are so short that you can treat the differences in the one run shown below as noise. Here each query only returned 1 row:

Direct fetch: rows: 1, batch size: 100, seconds: 0.011
ResultSet getRow(): rows: 1, batch size: 100, seconds: 0.012
ResultSet getRows(): rows: 1, batch size: 100, seconds: 0.013
queryStream(): rows: 1, batch size: 100, seconds: 0.013
Direct fetch: rows: 1, batch size: 1, seconds: 0.012
ResultSet getRow(): rows: 1, batch size: 1, seconds: 0.012
ResultSet getRows(): rows: 1, batch size: 1, seconds: 0.013
queryStream(): rows: 1, batch size: 1, seconds: 0.013

Although the timings are small, I suggest using a small fetchArraySize or numRows if you are querying a small handful of rows, particularly if the number of rows is known (such as 1). This reduces the amount of memory that needs to be allocated throughout node-oracledb, the Oracle client libraries, and also in the database.
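
For example, a minimal sketch (not from the original post) of a lookup known to return at most one row; the table, bind value and dbconfig.js connection file are placeholders, as in the scripts below:

// Minimal sketch: a single-row lookup, so the internal buffers are kept small.
var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');

oracledb.getConnection(dbConfig, function (err, connection) {
  if (err) throw err;
  connection.execute(
    'select owner, object_name from all_objects where object_id = :id',
    [ 123 ],
    { maxRows: 1, fetchArraySize: 1 },  // only one row is expected
    function (err, result) {
      if (err) throw err;
      console.log(result.rows[0]);
      connection.close(function (err) {
        if (err) console.error(err.message);
      });
    });
});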

References

Node-oracledb documentation is here.

If you are currently using node-oracledb v1, you may be interested in the documentation on Migrating from node-oracledb 1.13 to node-oracledb 2.0.

Code

Here are the rudimentary test scripts I used. The ResultSet code originated in the v1 examples from https://jsao.io/2015/07/an-overview-of-result-sets-in-the-nodejs-driver/

The config.js file is at the end. The dbconfig.js file is the same as in the examples.

The timings include statement execution in the DB, though this is not controlled by node-oracledb. With direct fetches there isn't a way in JavaScript to distinguish the query execute cost from the data fetch costs that fetchArraySize and numRows control.

Direct Fetch

// direct fetch

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;
    var rowsProcessed = 0;
    var startTime = Date.now();
    connection.execute(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { fetchArraySize: config.batchSize },
      function(err, results) {
        if (err) throw err;
        rowsProcessed = results.rows.length;
        // do work on the rows here
        var t = ((Date.now() - startTime)/1000);
        console.log('Direct fetch: rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t);
        connection.release(function(err) {
          if (err) console.error(err.message);
        });
      });
  });

ResultSet getRow()

// ResultSet getRow()

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;
    var rowsProcessed = 0;
    var startTime = Date.now();
    connection.execute(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { resultSet: true, fetchArraySize: config.batchSize },
      function(err, results) {
        if (err) throw err;
        function processResultSet() {
          results.resultSet.getRow(function(err, row) {
            if (err) throw err;
            if (row) {
              rowsProcessed++;
              // do work on the row here
              processResultSet();  // try to get another row from the result set
              return;              // exit recursive function prior to closing result set
            }
            var t = ((Date.now() - startTime)/1000);
            console.log('ResultSet getRow(): rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t);
            results.resultSet.close(function(err) {
              if (err) console.error(err.message);
              connection.release(function(err) {
                if (err) console.error(err.message);
              });
            });
          });
        }
        processResultSet();
      }
    );
  }
);

ResultSet getRows()

// ResultSet getRows()

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;
    var rowsProcessed = 0;
    var startTime = Date.now();
    oracledb.fetchArraySize = 1;
    connection.execute(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { resultSet: true },
      function(err, results) {
        var rowsProcessed = 0;
        if (err) throw err;
        function processResultSet() {
          results.resultSet.getRows(config.batchSize, function(err, rows) {
            if (err) throw err;
            if (rows.length) {
              rows.forEach(function(row) {
                rowsProcessed++;
                // do work on the row here
              });
              processResultSet();  // try to get more rows from the result set
              return;              // exit recursive function prior to closing result set
            }
            var t = ((Date.now() - startTime)/1000);
            console.log('ResultSet getRows(): rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t);
            results.resultSet.close(function(err) {
              if (err) console.error(err.message);
              connection.release(function(err) {
                if (err) console.error(err.message);
              });
            });
          });
        }
        processResultSet();
      });
  });

queryStream()

// queryStream()

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;
    var rowsProcessed = 0;
    var startTime = Date.now();
    var stream = connection.queryStream(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { fetchArraySize: config.batchSize }
    );
    stream.on('data', function (data) {
      // do work on the row here
      rowsProcessed++;
    });
    stream.on('end', function () {
      var t = ((Date.now() - startTime)/1000);
      console.log('queryStream(): rows: ' + rowsProcessed + ', batch size: ' + config.batchSize + ', seconds: ' + t);
      connection.close(function(err) {
        if (err) { console.error(err.message); }
      });
    });
  });

The Configuration File

// config.js

var maxRows;    // number of rows to query
var batchSize;  // batch size for fetching rows

maxRows = 50000;
batchSize = 1000;

exports.maxRows = maxRows;
exports.batchSize = batchSize;

node-oracledb 2.0 with pre-built binaries is on npm

Christopher Jones - 21 hours 2 min ago

Release announcement: Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use.

Top features: Pre-built binaries, Query fetch improvements

It's time to shake things up. Node-oracledb version 1 has been stable for a while. Our tight, hard working team is now proud to deliver node-oracledb 2 to you. With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure.

Review the CHANGELOG for all changes. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0.

node-oracledb v2 highlights
  • node-oracledb 2.0 is the first release to have pre-built binaries. These are provided for convenience and will make life a lot easier, particularly for Windows users.

    Binaries for Node 4, 6, 8 and 9 are available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6).

    Simply add oracledb to your package.json dependencies (a sketch appears after this list) or manually install with:

    npm install oracledb

    (Yes, Oracle Client libraries are still required separately - these do all the heavy lifting.)

    We've got great error messages when a binary isn't available, and we've improved the messages when require('oracledb') fails. However, Linux users with older glibc libraries may get runtime errors; in that case you should continue building node-oracledb from source, see below.

    There is support for hosting binary packages on your own internal web server, which will be great for users with a large number of deployments. See package/README.

    This is the first release to use the ODPI-C abstraction layer, which is also used by Python's cx_Oracle 6.x API as well as third-party APIs for other languages. Using ODPI-C is the main change that allowed node-oracledb 2.0 binaries to be distributed. As another consequence of ODPI-C, any node-oracledb 2 binary will run with any of the Oracle client 11.2, 12.1 or 12.2 libraries without needing recompilation. This improves portability when node-oracledb builds are copied between machines. Since the available Oracle functionality will vary with different Oracle Client (and Oracle Database!) versions, it's important to test your applications using your intended deployment environment.

  • The driver can still be built from source code if you need, or want, to do so. Compiling is now simpler than in version 1, since you no longer need Oracle header files, and no longer need OCI_*_DIR environment variables.

    To build from source you need to pull the code from a GitHub branch or tag - generally the most recent release tag is what you want. Make sure you have Python 2.7, the 'git' utility, and a compiler, and add oracle/node-oracledb.git#v2.0.15 to your package.json dependencies. Or manually run the install:

    npm install oracle/node-oracledb.git#v2.0.15

    Users without 'git', or with older versions of Node that won't pull the ODPI submodule code, can use the source package:

    npm install https://github.com/oracle/node-oracledb/releases/download/v2.0.15/oracledb-src-2.0.15.tgz

    I've noticed GitHub can be a little slow to download the source before compilation can begin, so bear with it.

  • Improved query handling:

    • Enhanced direct fetches to allow an unlimited number of rows to be fetched, and made unlimited the default for this method. The existing ResultSet and streaming methods are still recommended for large numbers of rows.

    • Since ODPI-C internally uses 'array fetches' instead of 'prefetching' (both are underlying methods for buffering/tuning, differing only in where the buffering takes place - both are transparent to applications), we've replaced prefetchRows with a new, almost equivalent property fetchArraySize.

    • We've moved the buffering of rows for getRow() into JavaScript for better performance. It no longer needs to call down into lower layers as often.

  • We tightened up some resource handling to make sure applications don't leak resources. If you inadvertently try to close a connection when a LOB or ResultSet is open, you will see a new error DPI-1054.

  • The node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds, have been significantly increased. In node-oracledb 1 these limits were particularly low when Oracle 11gR2 client libraries were used, so this should be welcome for people who have not updated the Oracle client. Node.js and V8 will still limit sizes, so continue to use the Stream interface for large LOBs.

  • Added support for ROWID and UROWID. Data is fetched as a String.

  • Added support for fetching column types LONG (as String) and LONG RAW (as Buffer).

  • Added support for the TIMESTAMP WITH TIME ZONE datatype. These values are mapped to a Date object in node-oracledb using LOCAL TIME ZONE; the TIME ZONE component is not available in the Date object.

  • Added query support for NCHAR, NVARCHAR2 and NCLOB columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.
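
As a hedged sketch of the package.json route mentioned in the list above (the package name "myapp" and its version are placeholder values; pin whichever node-oracledb release you actually want), the dependency entry might look like this:

{
  "name": "myapp",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "oracledb": "^2.0.15"
  }
}

To build from source instead, the same dependency can point at a GitHub tag, for example "oracledb": "oracle/node-oracledb#v2.0.15", matching the npm install commands shown above.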

Plans for Version 1

Our stated plan was to cease formal support for version 1 when Node 4 LTS maintenance ended in April 2018. We're pleased 1.13.1 has been stable for some time, and we are not anticipating needing any further node-oracledb 1 releases, unless exceptional circumstances occur.

Plans for Version 2

We are now testing with Node 4, 6, 8 and 9. This list will, of course, change as new Node versions are released. The pre-built binaries will also change and availability is not guaranteed.

ODPI-C forms a solid base to extend node-oracledb. Users of Python cx_Oracle 6, which is also based on ODPI-C, are appreciating all the advanced Oracle features that are available. Many of these features have also been requested by node-oracledb users. As with any open source project, there are no hard and fast guarantees for node-oracledb, but you can see the general direction we'd like to head in. Pull Requests are welcome.

One little thing you might be unaware of is that along the way we have been testing (and helping create) the next major release of Oracle Database, so sometimes we've had to take detours from direct node-oracledb work in order to move the whole of Oracle along. Whether we work on the "client" or the "server", we look forward to bringing you great things in future releases.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Using sed to backup file and remove lines

Michael Dinh - Wed, 2017-12-13 19:13
[oracle@racnode-dc1-2 ~]$ cd /u01/app/oracle/12.1.0.2/db1/rdbms/log/

--- DDL will fail since datafile is hard coded!
[oracle@racnode-dc1-2 log]$ cat tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

--- Remove ALTER and RESIZE from sql file.
--- Most likely the incorrect way to do this since TBS may be undersized.

12.2 Datapump Improvements actually does this the right way.

[oracle@racnode-dc1-2 log]$ sed -i.bak '/ALTER DATABASE DATAFILE\|RESIZE/ d' tablespaces_ddl.sql

[oracle@racnode-dc1-2 log]$ ls -l tablespace*
-rw-r--r-- 1 oracle dba 1214 Dec 14 02:03 tablespaces_ddl.sql
-rw-r--r-- 1 oracle dba 1488 Dec 14 01:45 tablespaces_ddl.sql.bak

[oracle@racnode-dc1-2 log]$ diff tablespaces_ddl.sql tablespaces_ddl.sql.bak
14a15,16
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
24a27,28
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
32a37,38
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

[oracle@racnode-dc1-2 log]$ cat tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

[oracle@racnode-dc1-2 log]$

12.2 Datapump Improvements

Michael Dinh - Wed, 2017-12-13 19:00

The datafile for tablespace USERS was resized to 5242880.

12.2.0.1.0
The 5242880 size is part of the CREATE TABLESPACE statement.

CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 5242880
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

12.1.0.2.0
The 5242880 size comes from a separate ALTER DATABASE DATAFILE ... RESIZE statement after the CREATE TABLESPACE.

Why is this important?
Manual intervention is no longer required to get the correct datafile sizes.

 
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

Test Case:

01:01:05 SYS @ owl:>select bytes,tablespace_name,autoextensible,maxbytes from dba_data_files where tablespace_name='USERS';

     BYTES TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
   5242880 USERS                          YES 3.4360E+10

01:01:57 SYS @ owl:>alter tablespace users add datafile size 4m;

Tablespace altered.

01:02:43 SYS @ owl:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         7    5242880
         5    4194304

01:04:09 SYS @ owl:>alter database datafile 5 resize 5242880;

Database altered.

01:05:08 SYS @ owl:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         7    5242880
         5    5242880

01:05:15 SYS @ owl:>

+++++++++++

[oracle@db-asm-1 ~]$ expdp parfile=expdp_tbs.par

Export: Release 12.2.0.1.0 - Production on Thu Dec 14 01:31:12 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_FULL_01":  /******** AS SYSDBA parfile=expdp_tbs.par
W-1 Startup took 1 seconds
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
W-1      Completed 3 TABLESPACE objects in 0 seconds
W-1 Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/admin/owl/dpdump/tbs.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Dec 14 01:31:19 2017 elapsed 0 00:00:06

+++++++++++

[oracle@db-asm-1 ~]$ impdp parfile=impdp_tbs.par

Import: Release 12.2.0.1.0 - Production on Thu Dec 14 01:32:51 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
W-1 Startup took 0 seconds
W-1 Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_tbs.par
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
W-1      Completed 3 TABLESPACE objects in 0 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 14 01:32:54 2017 elapsed 0 00:00:02

+++++++++++

[oracle@db-asm-1 ~]$ cat /u01/app/oracle/admin/owl/dpdump/tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 73400320
  AUTOEXTEND ON NEXT 73400320 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;


CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 33554432
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 5242880
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;


[oracle@db-asm-1 ~]$

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

01:40:59 SYS @ hawk2:>select bytes,tablespace_name,autoextensible,maxbytes from dba_data_files where tablespace_name='USERS';

     BYTES TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
   5242880 USERS                          YES 3.4360E+10

01:41:17 SYS @ hawk2:>alter tablespace users add datafile size 4m;

Tablespace altered.

01:41:24 SYS @ hawk2:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         6    5242880
         2    4194304


01:41:34 SYS @ hawk2:>alter database datafile 2 resize 5242880;

Database altered.

01:41:56 SYS @ hawk2:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         6    5242880
         2    5242880

01:42:02 SYS @ hawk2:>

++++++++++

[oracle@racnode-dc1-2 ~]$ expdp parfile=expdp_tbs.par

Export: Release 12.1.0.2.0 - Production on Thu Dec 14 01:43:19 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Starting "SYS"."SYS_EXPORT_FULL_01":  /******** AS SYSDBA parfile=expdp_tbs.par
Startup took 12 seconds
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 2 seconds
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/12.1.0.2/db1/rdbms/log/tbs.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Dec 14 01:44:34 2017 elapsed 0 00:00:43

++++++++++

[oracle@racnode-dc1-2 ~]$ impdp parfile=impdp_tbs.par

Import: Release 12.1.0.2.0 - Production on Thu Dec 14 01:45:48 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Startup took 1 seconds
Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_tbs.par
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 1 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 14 01:45:57 2017 elapsed 0 00:00:05

[oracle@racnode-dc1-2 ~]$

++++++++++

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oracle/12.1.0.2/db1/rdbms/log/tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
  
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;
[oracle@racnode-dc1-2 ~]$

Google dashboard for lazy businessman

Nilesh Jethwa - Wed, 2017-12-13 15:20

Once you start your own business or startup, the very first thing that comes to mind is "How many visitors did my website receive?"

Google Analytics provides tons of metrics, and it becomes quite a chore to keep up with all the dashboard pages and filter options. As a small-business owner myself, I went through a phase where I ended up spending a significant amount of time checking out Google Analytics.

To save time and mental energy on a daily routine task, I asked "What are the most basic metrics I need to measure from Google Analytics?"

The answer pretty much came down to needing a "one-page dashboard that displays various metrics".

Read more at http://www.infocaptor.com/dashboard/what-are-the-bare-minimum-traffic-metrics-that-i-can-track-easily

Docker-CE: Setting up a tomcat in less than a minute and running your JSP...

Dietrich Schroff - Wed, 2017-12-13 15:14
Last time I wrote about the processes and files of a Docker container hosting the Docker example myapp.py.
The next step was to run a Tomcat with a small application inside.

This can be done with these commands:
  1. Get tomcat from the docker library:
    # docker pull tomcat
    Using default tag: latest
    latest: Pulling from library/tomcat
    3e17c6eae66c: Pull complete
    fdfb54153de7: Pull complete
    a4ca6e73242a: Pull complete
    5161d2a139e2: Pull complete
    7659b327f9ec: Pull complete
    ce47e69f11ad: Pull complete
    7d946df3a3d8: Pull complete
    a57cba73d797: Pull complete
    7e6f56cdb523: Pull complete
    06e4787b3ca5: Pull complete
    c760cb7e43cb: Pull complete
    ad6d0815df5c: Pull complete
    d7e1da09fc22: Pull complete
    Digest: sha256:a069d49c414bad0d98f5a4d7f9b7fdd318ccc451dc535084480c8aead68272d2
    Status: Downloaded newer image for tomcat:latest
  2. Test the tomcat:
    # docker run -p 4000:8080 tomcat
    20-Nov-2017 20:38:11.754 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.23
    20-Nov-2017 20:38:11.762 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Sep 28 2017 10:30:11 UTC
    20-Nov-2017 20:38:11.762 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.5.23.0
    ....
    ....
    org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8080"]
    20-Nov-2017 20:41:59.928 INFO [Thread-5] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["ajp-nio-8009"]
     

This was easy.
Now create your JSP and run it:
  1. create a directory
    mkdir tomcatstatus
  2. create a JSP inside this directory
    vi tomcatstatus/index.jsp
    and insert the following content:
    <%@ page language="java" import="java.util.*" %>

    Host name : <%= java.net.InetAddress.getLocalHost().getHostName() %>
    Server Version: <%= application.getServerInfo() %>
    Servlet Version: <%= application.getMajorVersion() %>.<%= application.getMinorVersion() %>
    JSP Version: <%= JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() %>
  3. Run docker
    docker run -v /home/schroff/tomcatstatus:/usr/local/tomcat/webapps/status -p 4000:8080 tomcat
  4. Connect to port 4000:

Wow - I am really stunned by how fast the Tomcat was set up and the JSP was launched. No installation of Java (OK, this is only an apt install) and no setup procedure for Apache Tomcat (OK, this is just a tar -zxvf). But if I want to run more than one installation, Docker is faster than repeating the installation or copying files. Really cool!


(One thing I forgot: installation of Docker onto your server.)

Both talks accepted for Collaborate 18

Bobby Durrett's DBA Blog - Wed, 2017-12-13 10:35

IOUG accepted both my Toastmasters and Python talks for Collaborate 18. RMOUG also accepted them both, so I will be doing these two talks in both February and April. I am a little surprised because I have had talks rejected by IOUG in the past, and there are a lot of great speakers competing for speaking slots. This is my first time for RMOUG, so I did not know how hard it would be to get a talk accepted. I put both talks in for both conferences not knowing if either would be accepted, and both were accepted at both conferences!

So, 2018 will be a busier year than normal for me in terms of speaking at conferences. My last conference was two years ago at Oracle OpenWorld, where I spoke about Delphix, a tool that I use with the Oracle database. Next year I'm talking about two things that I feel passionate about. The talks are not about the Oracle database, but they are about things I have learned that have helped me in my Oracle database work: how Toastmasters has helped me improve my speaking and leadership skills, and why the Python programming language has become my favorite general-purpose scripting tool.

I am looking forward to giving the talks. If you are able to attend one of the conferences, maybe you could check out one of my talks. Fun.

Bobby

Categories: DBA Blogs

RMAN Parallelism question

Tom Kyte - Wed, 2017-12-13 09:26
Tom, I am reading the Oracle® Database Backup and Recovery Advanced User's Guide, 10g Release 2 (10.2), Part Number B14191-02, http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconfg003.htm#sthref521 The document says: 1) "As a...
Categories: DBA Blogs

partitioned index

Tom Kyte - Wed, 2017-12-13 09:26
Hi Tom, please explain the differences between global partitioned index and local partitioned index. Thanks!
Categories: DBA Blogs

Oracle MOOC: Developing Chatbots with Oracle Intelligent Bots

Oracle Intelligent Bots with Oracle Mobile Cloud Services gives you the ability to create an artificially intelligent bot that can converse with your users, to determine their intent and perform...

We share our skills to maximize your revenue!
Categories: DBA Blogs
