Tom Kyte

These are the most recently asked questions on Ask Tom

Convert database server sysdate to a GMT date, daylight-saving aware

Thu, 2021-04-15 10:26
We need to code a function to convert the database server sysdate to a GMT date. So we use: alter database set time_zone='EST' to set dbtimezone, and then code something like: CREATE OR REPLACE FUNCTION FN_GET_GMT_DATE RETURN DATE IS v_dbtimezone varchar2(10); BEGIN select dbtimezone into v_dbtimezone from dual; RETURN NEW_TIME(SYSDATE, ltrim(rtrim(v_dbtimezone)), 'GMT'); END FN_GET_GMT_DATE; / But this is not daylight-saving aware. What's the best way to convert the database sysdate to a GMT date in a way that is also daylight-saving aware?
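One approach worth considering (a sketch, not necessarily the only answer): skip the fixed-offset abbreviation entirely and let the database do the UTC arithmetic, which is DST-aware by construction:

```sql
-- A minimal sketch: SYS_EXTRACT_UTC converts SYSTIMESTAMP (which carries
-- the server's time zone, DST rules included) to UTC, so no NEW_TIME or
-- dbtimezone lookup is needed.
CREATE OR REPLACE FUNCTION fn_get_gmt_date RETURN DATE IS
BEGIN
  RETURN CAST(SYS_EXTRACT_UTC(SYSTIMESTAMP) AS DATE);
END fn_get_gmt_date;
/
```

Caveat: this is only as DST-aware as the server's OS time zone setting; if that is itself a fixed offset, the result will not track DST either.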
Categories: DBA Blogs

How to exclude rows from outer join calculation but still include them in results

Thu, 2021-04-15 10:26
Hello Chris, Hello Connor. I have a view with a schedule of some kind of launches: <code>create or replace view v_schedule as select trunc(sysdate) + interval'1'minute * level as plan_date from dual connect by level <= 100000</code> And a log of actual launch dates: <code>create table t_launches as select trunc(sysdate) + interval'1'minute * level + mod(abs(dbms_random.random), 60) * interval'1'second as launch_date from dual connect by level <= 1000;</code> I want to join them like this: <code>create table t1 as select plan_date, launch_date from ( select plan_date, lead(plan_date) over (order by plan_date) as next_plan_date from v_schedule ) left join t_launches on (launch_date >= plan_date and launch_date < next_plan_date);</code> This runs for about 10 seconds. But I have an extra piece of information: launch_date cannot be larger than sysdate (<i>trunc(sysdate) + interval'1'minute * 1000 in this example</i>). So for 99% of rows in v_schedule there is no point in even trying to join them with anything in t_launches. So a query like this runs in about half a second: <code>create table t2 as select plan_date, launch_date from ( select plan_date, lead(plan_date) over (order by plan_date) as next_plan_date from v_schedule where plan_date <= trunc(sysdate) + interval'1'minute * 1000 ) left join t_launches on (launch_date >= plan_date and launch_date < next_plan_date) union all select plan_date, null from v_schedule where plan_date > trunc(sysdate) + interval'1'minute * 1000;</code> My question is: Is there any way to tell Oracle to filter rows from an outer join calculation but still include them in the results? 
I thought this should work, but it doesn't: <code>create table t3 as select plan_date, launch_date from ( select plan_date, lead(plan_date) over (order by plan_date) as next_plan_date from v_schedule ) left join t_launches on (plan_date <= trunc(sysdate) + interval'1'minute * 1000 and launch_date >= plan_date and launch_date < next_plan_date);</code> I would very much like to avoid union all because my actual query is much larger and more complicated than this, and union all would almost double its size. Besides, it does an extra pass over v_schedule.
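One possibility worth testing (a sketch, assuming Oracle 12c or later for OUTER APPLY; whether the optimizer actually short-circuits the probe for out-of-range rows is something to verify against the real execution plan): move the cut-off into a laterally correlated subquery, so schedule rows past the cut-off keep their NULL launch_date without a UNION ALL:

```sql
-- Hypothetical variant t4: OUTER APPLY preserves every row of s, and the
-- cut-off predicate is evaluated per schedule row before probing t_launches.
create table t4 as
select s.plan_date, l.launch_date
from (
  select plan_date,
         lead(plan_date) over (order by plan_date) as next_plan_date
  from v_schedule
) s
outer apply (
  select t.launch_date
  from t_launches t
  where s.plan_date <= trunc(sysdate) + interval'1'minute * 1000
    and t.launch_date >= s.plan_date
    and t.launch_date <  s.next_plan_date
) l;
```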
Categories: DBA Blogs

Radio button prompt shifted after upgrade to 20.2

Thu, 2021-04-15 10:26
We are upgrading APEX from 5.2 to 20.2. Our applications use theme 26. (Of course we need to move the applications to theme 42, but for now that step is too big.) After the upgrade, the prompt of the region buttons is shifted to the left (behind the buttons). See https://apex.oracle.com/pls/apex/f?p=18605:1. Can we easily solve this with a CSS adjustment?
Categories: DBA Blogs

ORA-65040 when loading a jar file into a pluggable database with sys.dbms_java.loadjava or the command-line tool loadjava

Thu, 2021-04-15 10:26
I want to load a jar file into the database. First I loaded it into the root container: loadjava -o -f -noverify -synonym -stdout -verbose -recursivejars -resolve -grant PUBLIC -user sys/password@CDB javax.mail.jar >\mail.log This works, but I don't have access to it in the PDB. Trying to load the jar file into the PDB fails with the error ORA-65040: operation not allowed from within a pluggable database. The same message appears when I use dbms_java.loadjava('-v -r -g PUBLIC /path/java/javax.mail.jar'); after switching into the PDB. How can I load jar files into a PDB or make them available there? Thanks for the help, Sigrid
Categories: DBA Blogs

Password Security During Logins in 2021

Thu, 2021-04-15 10:26
Greetings, I have a few questions regarding how Oracle by default encrypts the password when sent to the Oracle databases. (https://asktom.oracle.com/pls/apex/asktom.search?tag=password-security-during-logins&p_session=117601880297172) First, please confirm that Oracle by default also encrypts the password when the client uses a JDBC thin client to connect to the databases. Second, what kind of encryption algorithm (AES128, DES, etc.) does it use? Third, why doesn't the default encryption of passwords require the complexity of setting up SSL encryption for data in motion? The complexity I am referring to is the need to create various server and client certificates (CA certificate, self-signed, passed through a SHA-2 algorithm), create wallets, and set up several sqlnet.ora parameters on both client and server. Fourth, do you think it is a good idea for Oracle to provide a simple on/off option to turn on the same default password encryption but for all data in motion? Lastly, if Oracle encrypts the password automatically, why does Oracle provide this document: "Step by Step Guide To Configure SSL Authentication (Doc ID 736510.1)?" "This article describes how to configure and use the Secure Sockets Layer (SSL) for Oracle Database User Authentication." If Oracle encrypts the password by default, then providing that document seems unnecessary. Thanks for your valuable insights. John
Categories: DBA Blogs

Deploying Oracle RAC on Docker

Thu, 2021-04-15 10:26
Hello Ask-Tom-Team, I came across this Oracle documentation, "Deploying Oracle RAC on Docker": https://www.oracle.com/technetwork/database/options/clustering/rac-ondocker-bp-wp-5458685.pdf Specifically the sentence: Oracle RAC on Docker supports the <b>non-privilege</b> mode feature. This allows Oracle RAC to safely and securely run on a single host or <b>multiple hosts without interference</b> from the other Docker Containers. Does it mean I could install two Oracle RAC clusters (e.g. 2 nodes each, 4 total) in multiple containers on the same physical host without interference between them? Oracle RAC clusterware uses the Linux root account, taking over the entire physical host. With the Oracle RAC Docker non-privilege mode feature, I guess that wouldn't be the case. Where do I find more information on this Oracle RAC Docker non-privilege mode feature? Does Oracle support / certify "RAC on Docker" for production systems? Thank you. Elio
Categories: DBA Blogs

Log User Activities with a trigger

Wed, 2021-04-14 16:06
I am in the process of implementing a process in APEX. This process includes some roles. The process is initiated by a person who fills in forms in the first step. The data entered is stored in collections and written to the corresponding tables after submission. This also initiates the next step of the process. Now to my question: I would like to display a kind of history of the completed steps after the completed forms have been submitted (i.e. after the first process step has been completed). I was thinking of a kind of log table. This summarises which action was taken, who took it, and what other important information was entered in the table (e.g. which responsible person was selected in the form, the ID of the table, etc.). <code>
-- create log table
create table form_submitted (
  log_id          number not null,
  responsible_id1 number,
  responsible_id2 number,
  responsible_id3 number,
  responsible_id4 number,
  created_time    varchar2(20),
  added_date      date,
  action          varchar2(100)
);

alter table form_submitted add constraint pk_log_id primary key (log_id);

-- create sequence to auto-increment the key
create sequence seq_log_id start with 1 increment by 1 nocache nocycle;

-- trigger to assign the auto-incremented value
create or replace trigger bi_form_submitted
before insert on form_submitted
for each row
begin
  :new.log_id := seq_log_id.nextval;
end;
</code> To achieve this, I thought of creating a trigger that would track the user's activities and enter the necessary information into the log table. 
<code>
-- trigger to track user activity and write to the log table
create or replace trigger trg_user_activity
after insert or update or delete on tbl1
for each row
declare
  v_action form_submitted.action%type :=
    case
      when inserting then 'form_submitted'
      when updating  then 'form_updated'
      when deleting  then 'form_deleted'
    end;
begin
  insert into form_submitted
    (responsible_id1, responsible_id2, responsible_id3, responsible_id4,
     created_time, added_date, action)
  values
    (coalesce(:new.responsible_id1, :old.responsible_id1),
     coalesce(:new.responsible_id2, :old.responsible_id2),
     coalesce(:new.responsible_id3, :old.responsible_id3),
     coalesce(:new.responsible_id4, :old.responsible_id4),
     to_char(sysdate, 'HH:MI:SS PM'), sysdate, v_action);
end;
</code> I then want to display this table as a classic report in the history tab. Each time a user makes a change it will be logged and displayed, but each time in a different log table. This means that in the second step another user has to take an action; this should also be logged and displayed in the history tab. The aim is to log and display every activity of each user in the process. I would like some help with the following three things: 1. How can I get the information I need for the log table (like responsible_id1) from the other table (here tbl1)? 2. Different users edit the same table (which means that the next user adds data to the same table), but a different entry has to be made in a different log table. Can I bind the trigger to a user or to this specific process step? 
3. After the first process step has been successfully completed, how can I display the current status (i.e. the steps should not have to be carried out again the next time the user logs in; instead the user should see which step of the process they are currently in)? ...
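For question 1, the trigger already sees the source row's values through :new/:old; the piece that usually needs extra work is identifying the acting user. A hedged sketch (it assumes the DML runs from inside an APEX session — the APEX$SESSION context is empty otherwise):

```sql
-- Hypothetical fragment for the logging trigger: record the APEX end user,
-- falling back to the database user when the DML runs outside APEX.
declare
  v_user varchar2(255) :=
    coalesce(sys_context('APEX$SESSION', 'APP_USER'), user);
begin
  -- v_user could then be written to an extra column of the log table
  null;
end;
/
```

For question 2: triggers cannot be bound to a user, so a common alternative is a single log table carrying a user/step column, with each classic report filtering on it.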
Categories: DBA Blogs

sdo_geometry varrays and basic to securefile migration

Wed, 2021-04-14 16:06
Hi Tom, we're making plans to migrate from basicfile to securefile lob storage (Oracle Database 19c). The database also contains some spatial layers, in partitioned tables with a local spatial index. These tables/partitions were defined in Oracle 11 without explicitly specifying a storage clause for the sdo_geometry varray elements (sdo_elem_info and sdo_ordinates). As I understand from the Oracle docs (https://docs.oracle.com/en/database/oracle/oracle-database/19/adlob/introduction-to-large-objects.html#GUID-9B66BC56-A427-4C1B-B185-8E004CF4B2F7), these varrays are only stored as basicfile LOBs in case their size exceeds approximately 4kB. The varray elements of smaller geometries are stored as raw values inline. As we want to migrate to securefile lobs (using table redefinition), we have two options for the definition of the sdo_geometry varrays: 1. WITH an explicit storage clause: <code> VARRAY "SPATIAL_COLUM"."SDO_ELEM_INFO" STORE AS SECUREFILE LOB (CACHE ENABLE STORAGE IN ROW) VARRAY "PERC_GEOMETRIE_PERCEEL"."SDO_ORDINATES" STORE AS SECUREFILE LOB (CACHE ENABLE STORAGE IN ROW) ENABLE ROW MOVEMENT </code> 2. WITHOUT an explicit storage clause. In this case only the larger geometry varrays will be stored as LOBs using securefiles. The other varrays will be stored inline as raw values. My question is which option is advisable in terms of spatial query performance? Thank you. best regards
Categories: DBA Blogs

Synchronize PK values

Wed, 2021-04-14 16:06
Hi! We would appreciate your advice about a problem we have. We have two tables of the same structure in different DBs (DB1 and DB2). Rows are inserted into the table in DB1, and then some of them (based on a simple condition) are propagated to the table in DB2 via a trigger using a dblink. When rows are inserted into those tables, different sequences are used to generate the primary key values. So all the data in the rows being pushed between DBs is identical except for the PK values. Now we need to synchronize the PK values as well, and we would like to drop the sequence in DB2 and insert rows with the incoming PK values. Is it possible to actualize the PK values for already-inserted rows in the table in DB2, or is the only way to drop and recreate the table? Thank you!
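Recreating the table should not be necessary. A sketch of one direction (it assumes the rows can still be matched on some business key — bus_key below is a hypothetical stand-in for whatever non-PK columns uniquely identify a row in both databases — and that no foreign keys reference the old PK values):

```sql
-- Hypothetical sketch, run from DB1 over the dblink: rewrite DB2's PK
-- column in place so it matches DB1's values for the same business row.
merge into target_table@db2 d
using source_table s
on (d.bus_key = s.bus_key)
when matched then
  update set d.id = s.id;
```

If child tables reference the PK, those FK columns would need the same rewrite (or deferrable constraints) in the same transaction.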
Categories: DBA Blogs

Sorting on columns with many data points having same value

Tue, 2021-04-13 21:46
We are trying to confirm the expected behavior of Oracle for how it handles sorting on a column where many of the data points have the same value. For example, if I queried the following table and sorted on the Data column, I might get a changing result order on each query, like the two iterations below. (First query) <code>| ID | Data | ----------------- | 1 | James | | 2 | James | | 3 | James |</code> (Second query) <code>| ID | Data | ----------------- | 3 | James | | 1 | James | | 2 | James |</code> We believe it is expected that this can happen and that Oracle does not guarantee the order of rows for this type of query. Regardless, we know we can update the application to add a second sort on a unique column, but that is a huge development effort. So we wanted to know if perhaps there is something we can do on the database side to avoid making large application changes?
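Your reading matches the documented behavior: ORDER BY guarantees only the keys you list, and tied rows can come back in any order (and the order often changes with the execution plan, parallelism, or physical storage). As far as I know there is no server-side setting that makes ties deterministic; the reliable fix is a unique tie-breaker in the sort itself:

```sql
-- Deterministic ordering sketch: appending a unique column (here ID) as the
-- last sort key means rows sharing the same DATA value always come back in
-- the same relative order.
select id, data
from   some_table          -- hypothetical table name
order  by data, id;
```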
Categories: DBA Blogs

Loading a double-quote-enclosed csv file into a table where the csv file has newline characters inside the enclosed data

Tue, 2021-04-13 21:46
Hi Sir, I have a csv (say Dummy.csv) file like the below: <i> "D_ID","D_DATA_A","D_DATA_B" "1","Normal Data","This is a Dummy Data" "2","Data with Comma","This is a Dummy Data, with Comma" "3","Data with New Line","This is a Dummy Data Along with New Line" "4","Data with Double Quote","This is a ""Dummy Data"" with Double Quote" "5","Data with all of the above","This is a ""Dummy Data "" with Double Quote, and Comma With Newline" "6","Test Data with Null","" "7","Data with really big dummy data","ABCD , EFGH "" IJKL MNOP"" QRST , UVWX"" ""YZ AB, ""CDEF"",""GHIJ KL . . MNOP, QRST "" UVWX" </i> Now I need to load this into a table, say Dummy_Load, having the definition: D_ID NUMBER(2) D_DATA_A VARCHAR2(100) D_DATA_B VARCHAR2(1000) While using sqlldr for this with the control file as follows: <code> LOAD DATA INFILE '<root_path>/Dummy.csv' INSERT INTO TABLE Dummy_Load FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS ( d_id, d_data_a, d_data_b ) </code> not all of the rows are getting loaded; the newline characters cause the errors below in the log: <i> Record 5: Rejected - Error on table DUMMY_LOAD, column D_DATA_B. second enclosure string not present Record 8: Rejected - Error on table DUMMY_LOAD, column D_DATA_B. second enclosure string not present Record 9: Rejected - Error on table DUMMY_LOAD, column D_ID. no terminator found after TERMINATED and ENCLOSED field Record 12: Rejected - Error on table DUMMY_LOAD, column D_DATA_B. second enclosure string not present Record 14: Rejected - Error on table DUMMY_LOAD, column D_ID. no terminator found after TERMINATED and ENCLOSED field Record 16: Discarded - all columns null. Record 21: Discarded - all columns null. Record 22: Discarded - all columns null. Record 23: Discarded - all columns null. </i> What am I really missing here? Is it not possible to import a csv into a table where a newline is present in the data itself? Thanks in advance. Regards, Soumya
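One approach (a sketch; it assumes you can regenerate the file with a distinct end-of-record marker): SQL*Loader's stream-record format, declared with the "str" option on INFILE, tells sqlldr what ends a record, so newlines inside quoted fields become ordinary data:

```
LOAD DATA
INFILE '<root_path>/Dummy.csv' "str '|\n'"  -- hypothetical marker: each record ends with |<newline>
INSERT INTO TABLE Dummy_Load
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
( d_id, d_data_a, d_data_b )
```

An external table with an analogous RECORDS DELIMITED BY clause is another route if the file can carry such a marker.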
Categories: DBA Blogs

UTL_FILE folders and directories

Mon, 2021-04-12 09:06
Hello. I have a query that produces a result. We need to automate writing the result to a text file with pipe (|) delimited fields that will be used in another process. The problem is that creating this file with the UTL_FILE tool stores it in folders on the database server side. I don't have access to the database server, and the tool won't allow me to create the file in a local folder on my computer or on any other server. Our Oracle database runs on a Unix Solaris server, and all clients are on Windows platforms (server and PCs). I have researched everywhere for ways to create any type of file from the query, like SPOOL and 3rd-party development (Java, .NET), but it is either complicated or the output is not formatted appropriately. The best tool so far has been UTL_FILE, but without the capability to create and store the file on a server other than the database server, it seems very difficult to automate the process, especially when there is a UNIX/Windows separation. So the question: what other ways can I create the pipe (|) delimited text file and store it locally? The development tool we use is PL/SQL Developer.
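One client-side route (a sketch; it assumes a 12.2+ SQL*Plus or SQLcl installed on your Windows PC, since SET MARKUP CSV appeared in that release): SPOOL writes where the client runs, so the file lands on your machine rather than on the Solaris server:

```
-- run in SQL*Plus on the PC; the spool file is created locally
set markup csv on delimiter | quote off
set feedback off
spool C:\temp\result.txt
@your_query.sql      -- hypothetical script file holding the query
spool off
```

The script can then be scheduled from Windows Task Scheduler to automate the extract.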
Categories: DBA Blogs

Issue with shared pool and query performance

Mon, 2021-04-12 09:06
Hello, Ask TOM Team. I have an application (like a reporting tool) that reads data from an 18c Oracle Database. One day the results return quickly, but a couple of days later it gets slow. When I execute the "ALTER SYSTEM FLUSH SHARED_POOL" command, the results return quickly again. What could be the problem? Thanks in advance. Regards,
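A classic cause of this pattern (fast right after a shared-pool flush, degrading over days) is literal-heavy SQL bloating the shared pool with one-off cursors, but that is only a hypothesis without evidence. A diagnostic sketch to test it:

```sql
-- Statements that differ only in literal values share a
-- FORCE_MATCHING_SIGNATURE; large groups suggest missing bind variables.
select force_matching_signature, count(*) as copies
from   v$sql
where  force_matching_signature <> 0
group  by force_matching_signature
having count(*) > 100
order  by copies desc;
```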
Categories: DBA Blogs

Advantages and/or Disadvantages of Using Autoextend

Mon, 2021-04-12 09:06
Tom, I have a few questions related to autoextend: 1. Can you tell me the advantages and/or disadvantages of having "autoextend on" for every datafile I create in the database? 2. If I have a tablespace with all datafiles created with autoextend on, and I then turn it off after all the datafiles have been filled, would this have any impact on the datafiles or the tablespace? Thanks!
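For reference, toggling the setting on an existing file looks like this (a sketch with a hypothetical file path; NEXT and MAXSIZE cap how far the file can grow):

```sql
-- enable, with a growth increment and a hard ceiling
alter database datafile '/u01/oradata/orcl/users01.dbf'
  autoextend on next 64m maxsize 8g;

-- disable again; the file keeps its current size
alter database datafile '/u01/oradata/orcl/users01.dbf'
  autoextend off;
```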
Categories: DBA Blogs

Multiple PDF merge

Mon, 2021-04-12 09:06
Hi, Can we merge multiple PDF documents into one PDF using a PL/SQL function? If so, please suggest a solution. Thank You, Vidya K, Bangalore
Categories: DBA Blogs

EDITION and cross edition triggers

Mon, 2021-04-12 09:06
Hi Tom, Can you explain what an 'EDITION' is, and what is meant by cross-edition triggers? How can they be useful for high availability? Thanks for your replies. It is a big learning experience.
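As a quick taste of the mechanics (a sketch only; edition-based redefinition is a large topic): editions are named versions of your editionable objects (PL/SQL, views, synonyms), letting an old and a new application release coexist in one database, while cross-edition triggers keep a table's data consistent between the column layouts the two releases expect. Creating and switching to an edition looks like:

```sql
-- create a new edition (child of the current leaf edition) and switch to
-- it; objects recompiled here are invisible to sessions on the old edition
create edition release_2;
alter session set edition = release_2;
```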
Categories: DBA Blogs

Limiting query execution based on sql id

Mon, 2021-04-12 09:06
Hi, Thank you for reviewing my question. Two-node RAC database, version 11.2.0.4. 1. The query is generated at the application server. 2. Persistent connections to the database use the same account. 3. The query uses a lot of IO. When there are more than 40 executions of the same query, the database slows down dramatically. 4. Is there a method to limit/queue/kill query executions based on sql_id? 5. Resource Manager limiting based on IO does not work for us, since the application uses the same database account and there are other queries running with high IO usage.
Categories: DBA Blogs

Extract multiple hostnames from a long string by using regexp_substr

Wed, 2021-04-07 19:06
Hi Tom, I am trying to extract multiple hostnames from a long string (in fact, I will use this method for the DBLink host column later). However, my query is only able to extract the first hostname. I did try to put {1,} after the match pattern '((HOST|HOSTNAME)[[:space:]]*(\=)[[:space:]]*(\w+|\-|\_)*(\.\w+)*|(\w+|\-|\_)*(\.\w+)+|((\w+|\-|\_)+$)){1,}', but it still does not extract the remaining matched occurrences at all. Can you please help me out? Thanks. == The query output === HOSTNAME -------------------- LXC01-VIP.TEST.COM Expected Output -------------------- LXC01-VIP.TEST.COM LXC02-VIP.TEST.COM -------------------------------- My current query ---------------------------------- <code>SELECT REGEXP_REPLACE ( REGEXP_REPLACE ( REGEXP_REPLACE ( REGEXP_SUBSTR ( UPPER ( '(DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = lxc01-vip.test.com)(PORT = 1521)) (ADDRESS = (PROTOCOL = TCP)(HOST = lxc02-vip.test.com)(PORT = 1521)) (LOAD_BALANCE = ON) (FAILOVER = ON) ) (CONNECT_DATA = (SERVICE_NAME = Test) ) )'), '((HOST|HOSTNAME)[[:space:]]*(\=)[[:space:]]*(\w+|\-|\_)*(\.\w+)*|(\w+|\-|\_)*(\.\w+)+|((\w+|\-|\_)+$))'), ' ', ''), 'HOST=', ''), 'HOSTNAME=', '') HOSTNAME FROM DUAL;</code> Regards, Anders
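The usual fix (a sketch; assumes 11g+ for REGEXP_COUNT and REGEXP_SUBSTR's subexpression argument): a single REGEXP_SUBSTR call always returns one match, however you quantify the pattern. Its 4th argument is the occurrence number, so generating one row per occurrence returns every host:

```sql
-- One row per HOST= entry in the connect descriptor; the 6th argument (1)
-- returns only the captured hostname, not the 'HOST =' prefix.
with t as (
  select '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=lxc01-vip.test.com)(PORT=1521))' ||
         '(ADDRESS=(PROTOCOL=TCP)(HOST=lxc02-vip.test.com)(PORT=1521)))' as s
  from dual
)
select regexp_substr(upper(s), 'HOST\s*=\s*([^)]+)', 1, level, null, 1) as hostname
from   t
connect by level <= regexp_count(s, 'HOST\s*=\s*[^)]+', 1, 'i');
```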
Categories: DBA Blogs

How to do point in time recovery

Wed, 2021-04-07 19:06
Hello Tom, Suppose my data file crashed at 9 PM IST and we need to do point-in-time recovery. We have a full db backup which started at 8 AM IST and finished at 7 PM IST; in that case, do we need to apply the redo log archive backups? Can you tell me how we will know which archived log backups are needed to perform the point-in-time recovery? Regards, Nishant
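Yes: after restoring the datafiles you need redo from every archived log generated between the start of the backup and the target time, and RMAN works the list out for you from the controlfile or catalog. A sketch (the time is hypothetical, derived from the 9 PM failure in the question):

```
RMAN> run {
  set until time "to_date('2021-04-07 20:59:00', 'yyyy-mm-dd hh24:mi:ss')";
  restore database;
  recover database;            -- RMAN selects the archived logs it needs
  alter database open resetlogs;
}
```

To see the required backups in advance without restoring anything, RESTORE DATABASE PREVIEW lists the backup sets (including archived-log backups) a given restore would use.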
Categories: DBA Blogs

Want to understand the use of checkpoints & SCNs

Wed, 2021-04-07 00:46
Hello Tom, I just wanted to understand the use of checkpoints & SCNs. I know the basic definitions, but I want to know in what situations they occur and how they are used in Oracle recovery. Can you please explain with an example? Thanks, Nitin
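To make the concepts concrete, you can watch both numbers move (a sketch; it assumes SELECT privileges on the v$ views): the gap between the database's current SCN and the checkpoint SCN stamped in each datafile header is roughly the redo that crash recovery would have to replay.

```sql
-- the SCN the database is at right now
select current_scn from v$database;

-- the SCN each datafile was last checkpointed to
select file#, checkpoint_change#
from   v$datafile_header;

-- force a checkpoint, then re-run the second query:
-- the header SCNs catch up to the current SCN
alter system checkpoint;
```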
Categories: DBA Blogs
