DBA Blogs

Radio button prompt shifted after upgrade to 20.2

Tom Kyte - 13 hours 25 min ago
We are upgrading APEX from 5.2 to 20.2. Our applications use theme 26. (Of course we need to migrate the applications to theme 42, but for now that step is too big.) After the upgrade, the prompt of the region buttons is shifted to the left (behind the buttons). See https://apex.oracle.com/pls/apex/f?p=18605:1. Can we easily solve this with a CSS adjustment?
Categories: DBA Blogs

Convert database server sysdate to a GMT date, daylight saving aware

Tom Kyte - 13 hours 25 min ago
We need to code a function to convert the database server sysdate to a GMT date. So we use alter database set time_zone='EST' to set dbtimezone, and then code something like: CREATE OR REPLACE FUNCTION FN_GET_GMT_DATE RETURN DATE IS v_dbtimezone varchar2(10); BEGIN select dbtimezone into v_dbtimezone from dual; RETURN NEW_TIME(SYSDATE, ltrim(rtrim(v_dbtimezone)), 'GMT'); END FN_GET_GMT_DATE; / But this is not daylight saving aware. What's the best way to convert the database sysdate to a GMT date in a daylight-saving-aware way?
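For what it's worth, a daylight-saving-aware sketch (untested here) is to let Oracle itself do the conversion with SYS_EXTRACT_UTC, which honours DST provided the database or session time zone is a named region (e.g. 'America/New_York') rather than a fixed offset like 'EST':

```sql
CREATE OR REPLACE FUNCTION fn_get_gmt_date RETURN DATE IS
BEGIN
  -- SYSTIMESTAMP carries the server's current UTC offset, so
  -- SYS_EXTRACT_UTC yields UTC with daylight saving already applied
  RETURN CAST(SYS_EXTRACT_UTC(SYSTIMESTAMP) AS DATE);
END fn_get_gmt_date;
/
```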
Categories: DBA Blogs

Deploying Oracle RAC on Docker

Tom Kyte - 13 hours 25 min ago
Hello Ask-Tom-Team, I came across this Oracle documentation, "Deploying Oracle RAC on Docker": https://www.oracle.com/technetwork/database/options/clustering/rac-ondocker-bp-wp-5458685.pdf Specifically the sentence: Oracle RAC on Docker supports the <b>non-privileged</b> mode feature. This allows Oracle RAC to safely and securely run on a single host or <b>multiple hosts without interference</b> from the other Docker containers. Does it mean I could install two Oracle RAC clusters (e.g. 2 nodes each, 4 total) in multiple containers on the same physical host without interference between them? Oracle RAC clusterware uses the Linux root account, taking over the entire physical host. With the Oracle RAC Docker non-privileged mode feature, I guess that wouldn't be the case. Where do I find more information on this Oracle RAC Docker non-privileged mode feature? Does Oracle support/certify "RAC on Docker" for production systems? Thank you. Elio
Categories: DBA Blogs

Password Security During Logins in 2021

Tom Kyte - 13 hours 25 min ago
Greetings, I have a few questions regarding how Oracle by default encrypts the password when it is sent to the database. (https://asktom.oracle.com/pls/apex/asktom.search?tag=password-security-during-logins&p_session=117601880297172) First, please confirm that Oracle by default also encrypts the password when the client uses a JDBC thin client to connect to the database. Second, what kind of encryption algorithm (AES128, DES, etc.) does it use? Third, why doesn't the default encryption of passwords require the complexity of setting up SSL encryption for data in motion? The complexity I am referring to is the need to create various server and client certificates (CA certificate, self-signed, passed through a SHA-2 algorithm), create wallets, and set up several sqlnet.ora parameters on both client and server. Fourth, do you think it is a good idea for Oracle to provide a simple on/off option that turns on the same default password encryption but for all data in motion? Lastly, if Oracle encrypts the password automatically, why does Oracle provide this document: "Step by Step Guide To Configure SSL Authentication (Doc ID 736510.1)"? "This article describes how to configure and use the Secure Sockets Layer (SSL) for Oracle Database User Authentication." If Oracle encrypts the password by default, then providing that document seems unnecessary. Thanks for your valuable insights. John
Categories: DBA Blogs

ORA-65040 when loading jar file into pluggable database with sys.dbms_java.loadjava or commandline tool loadjava

Tom Kyte - 13 hours 25 min ago
I want to load a jar file into the database. First I loaded it into the root container: loadjava -o -f -noverify -synonym -stdout -verbose -recursivejars -resolve -grant PUBLIC -user sys/password@CDB javax.mail.jar >\mail.log This works, but I don't have access to it in the PDB. Trying to load the jar file into the PDB fails with the error ORA-65040: operation not allowed from within a pluggable database. The same message comes when I use dbms_java.loadjava('-v -r -g PUBLIC /path/java/javax.mail.jar'); after switching into the PDB. How can I load jar files into a PDB or make them available there? Thanks for the help, Sigrid
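A commonly suggested workaround (sketched here; host, port, and service name are placeholders) is to run loadjava connected directly to the PDB's own service rather than from inside a session that switched containers:

```shell
# connect to the PDB via its own service name, not the CDB root
loadjava -o -f -noverify -synonym -stdout -verbose -resolve \
  -grant PUBLIC -user sys/password@//dbhost:1521/pdb_service javax.mail.jar
```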
Categories: DBA Blogs

How to exclude rows from outer join calculation but still include them in results

Tom Kyte - 13 hours 25 min ago
Hello Chris, Hello Connor. I have a view with a schedule of some kind of launches: <code>create or replace view v_schedule as select trunc(sysdate) + interval'1'minute * level as plan_date from dual connect by level <= 100000</code> And a log of actual launch dates: <code>create table t_launches as select trunc(sysdate) + interval'1'minute * level + mod(abs(dbms_random.random), 60) * interval'1'second as launch_date from dual connect by level <= 1000;</code> I want to join them like this: <code>create table t1 as select plan_date, launch_date from ( select plan_date, lead(plan_date) over (order by plan_date) as next_plan_date from v_schedule ) left join t_launches on (launch_date >= plan_date and launch_date < next_plan_date);</code> This runs in about 10 seconds. But I have an extra piece of information: launch_date cannot be larger than sysdate (<i>trunc(sysdate) + interval'1'minute * 1000 in this example</i>). So for 99% of rows in v_schedule there is no point in even trying to join them with anything in t_launches. So a query like this runs in about half a second: <code>create table t2 as select plan_date, launch_date from ( select plan_date, lead(plan_date) over (order by plan_date) as next_plan_date from v_schedule where plan_date <= trunc(sysdate) + interval'1'minute * 1000 ) left join t_launches on (launch_date >= plan_date and launch_date < next_plan_date) union all select plan_date, null from v_schedule where plan_date > trunc(sysdate) + interval'1'minute * 1000;</code> My question is: Is there any way to tell Oracle to filter rows from an outer join calculation but still include them in the results? 
I thought this should work, but it doesn't: <code>create table t3 as select plan_date, launch_date from ( select plan_date, lead(plan_date) over (order by plan_date) as next_plan_date from v_schedule ) left join t_launches on (plan_date <= trunc(sysdate) + interval'1'minute * 1000 and launch_date >= plan_date and launch_date < next_plan_date);</code> I would very much like to avoid union all because my actual query is much larger and more complicated than this, and union all would almost double its size. Besides, it does an extra pass over v_schedule.
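One construct worth benchmarking (12c and later; an untested sketch built from the example's own objects) is OUTER APPLY, which behaves like a left join but accepts correlated predicates, so the sysdate cut-off can gate the probe into t_launches while every v_schedule row is still returned:

```sql
create table t4 as
select s.plan_date, l.launch_date
from (
  select plan_date,
         lead(plan_date) over (order by plan_date) as next_plan_date
  from v_schedule
) s
outer apply (
  -- rows past the cut-off never probe t_launches, but still appear
  -- in the result with a null launch_date
  select t.launch_date
  from t_launches t
  where s.plan_date <= trunc(sysdate) + interval '1' minute * 1000
    and t.launch_date >= s.plan_date
    and t.launch_date <  s.next_plan_date
) l;
```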
Categories: DBA Blogs

Synchronize PK values

Tom Kyte - Wed, 2021-04-14 16:06
Hi! We would appreciate your advice about a problem we have. We have two tables of the same structure in different DBs (DB1 and DB2). Rows are inserted into the table in DB1, and then some of them (based on a simple condition) are propagated to the table in DB2 via a trigger using a dblink. When rows are inserted into those tables, different sequences are used to generate the primary key values. So, all the data in the rows being pushed between the DBs is identical except for the PK values. Now we need to synchronize the PK values as well, and we would like to drop the sequence in DB2 and insert rows with the incoming PK values. Is it possible to update the PK values of the rows already inserted into the table in DB2, or is the only way to drop and recreate the table? Thank you!
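If the existing rows can be matched on some business key, one hedged sketch is a MERGE over the dblink (biz_key, the table names, and the dblink name are all placeholders; it also assumes no child tables reference the PK and the incoming values don't collide with existing ones):

```sql
merge into target_table d
using (select biz_key, id from source_table@db1_link) s
on (d.biz_key = s.biz_key)
when matched then
  update set d.id = s.id;  -- rewrite the PK column to the DB1 value
```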
Categories: DBA Blogs

sdo_geometry varrays and basic to securefile migration

Tom Kyte - Wed, 2021-04-14 16:06
Hi Tom, we're making plans to migrate from basicfile to securefile LOB storage (Oracle Database 19c). The database also contains some spatial layers, in partitioned tables with a local spatial index. These tables/partitions were defined in Oracle 11 without explicitly specifying a storage clause for the sdo_geometry varray elements (sdo_elem_info and sdo_ordinates). As I understand from the Oracle docs (https://docs.oracle.com/en/database/oracle/oracle-database/19/adlob/introduction-to-large-objects.html#GUID-9B66BC56-A427-4C1B-B185-8E004CF4B2F7), these varrays are only stored as basicfile LOBs when their size exceeds approximately 4 kB. The varray elements of smaller geometries are stored as raw values inline. As we want to migrate to securefile LOBs (using table redefinition), we have two options for the definition of the sdo_geometry varrays: 1. WITH an explicit storage clause: <code> VARRAY "SPATIAL_COLUM"."SDO_ELEM_INFO" STORE AS SECUREFILE LOB (CACHE ENABLE STORAGE IN ROW) VARRAY "PERC_GEOMETRIE_PERCEEL"."SDO_ORDINATES" STORE AS SECUREFILE LOB (CACHE ENABLE STORAGE IN ROW) ENABLE ROW MOVEMENT </code> 2. WITHOUT an explicit storage clause. In this case only the larger geometry varrays will be stored as LOBs using securefiles. The other varrays will be stored inline as raw values. My question is which option is advisable in terms of spatial query performance? Thank you. Best regards
Categories: DBA Blogs

Log User Activities with a trigger

Tom Kyte - Wed, 2021-04-14 16:06
I am in the process of implementing a process in APEX. This process includes some roles. The process is initiated by a person who fills in forms in the first step. The data entered is stored in collections and entered into the corresponding tables after submission. This also initiates the next step of the process. Now to my question: I would like to display a kind of history of the completed steps after the completed forms have been submitted (i.e. after the first process step has been completed). I was thinking of a kind of log table. This summarises which action has been taken, who has taken the action, and what other important information has been entered in the table (e.g. which person responsible was selected in the form, the ID of the table, etc.). <code> -- create log table create table form_submitted ( log_id number not null, Responsible_id1 number, Responsible_id2 number, Responsible_id3 number, Responsible_id4 number, added_date date, action varchar2(100) ); alter table form_submitted add constraint pk_log_id PRIMARY KEY (log_id); -- create sequence to auto-increment values create sequence seq_log_id start with 1 increment by 1 nocache nocycle; -- create trigger to insert the auto-incremented value create or replace trigger bi_form_submitted before insert on form_submitted for each row begin :new.log_id := seq_log_id.nextval; end; </code> To achieve this, I thought of creating a trigger that would track the user's activities and enter the necessary information into the log table. 
<code> -- create trigger to track user activity and insert into the log table create or replace trigger trg_user_activity after insert or update or delete on tbl1 for each row begin if deleting then -- deleted rows only have :old values insert into form_submitted (Responsible_id1, Responsible_id2, Responsible_id3, Responsible_id4, added_date, action) values (:old.Responsible_id1, :old.Responsible_id2, :old.Responsible_id3, :old.Responsible_id4, SYSDATE, 'form_deleted'); elsif updating then insert into form_submitted (Responsible_id1, Responsible_id2, Responsible_id3, Responsible_id4, added_date, action) values (:new.Responsible_id1, :new.Responsible_id2, :new.Responsible_id3, :new.Responsible_id4, SYSDATE, 'form_updated'); elsif inserting then insert into form_submitted (Responsible_id1, Responsible_id2, Responsible_id3, Responsible_id4, added_date, action) values (:new.Responsible_id1, :new.Responsible_id2, :new.Responsible_id3, :new.Responsible_id4, SYSDATE, 'form_submitted'); end if; end; </code> I then want to display this table as a classic report in the history tab. Each time a user makes a change, it will be logged and displayed, but each time in a different log table. This means that in the second step another user has to take an action; this should also be logged and displayed in the history tab. The aim is to log and display every activity of each user in the process. I would like some help with the following three things: 1. How can I get the information I need for the log table (like Responsible_id1) from the other table (here tbl1)? 2. Different users edit the same table (which means that the next user adds data to the same table), but a different entry has to be made in a different log table. Can I bind the trigger to a user or to this specific process step? 
3. After the first process step has been successfully completed, how can I display the current status? (This means that the steps should not have to be carried out again the next time the user logs in; instead he should see which step of the process he is currently at.) ...
Categories: DBA Blogs

Loading a Double Quote enclosed csv file into Table where csv file has Newline character itself in the enclosed data

Tom Kyte - Tue, 2021-04-13 21:46
Hi Sir, I have a csv (say Dummy.csv) file like the below: <i> "D_ID","D_DATA_A","D_DATA_B" "1","Normal Data","This is a Dummy Data" "2","Data with Comma","This is a Dummy Data, with Comma" "3","Data with New Line","This is a Dummy Data Along with New Line" "4","Data with Double Quote","This is a ""Dummy Data"" with Double Quote" "5","Data with all of the above","This is a ""Dummy Data "" with Double Quote, and Comma With Newline" "6","Test Data with Null","" "7","Data with really big dummy data","ABCD , EFGH "" IJKL MNOP"" QRST , UVWX"" ""YZ AB, ""CDEF"",""GHIJ KL . . MNOP, QRST "" UVWX" </i> Now I need to load this in a table, say Dummy_Load, having the definition: D_ID NUMBER(2) D_DATA_A VARCHAR2(100) D_DATA_B VARCHAR2(1000) While using sqlldr for this with the control file as follows: <code> LOAD DATA INFILE '<root_path>/Dummy.csv' INSERT INTO TABLE Dummy_Load FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS ( d_id, d_data_a, d_data_b ) </code> not all the rows get loaded; the newline characters cause the errors below in the log: <i> Record 5: Rejected - Error on table DUMMY_LOAD, column D_DATA_B. second enclosure string not present Record 8: Rejected - Error on table DUMMY_LOAD, column D_DATA_B. second enclosure string not present Record 9: Rejected - Error on table DUMMY_LOAD, column D_ID. no terminator found after TERMINATED and ENCLOSED field Record 12: Rejected - Error on table DUMMY_LOAD, column D_DATA_B. second enclosure string not present Record 14: Rejected - Error on table DUMMY_LOAD, column D_ID. no terminator found after TERMINATED and ENCLOSED field Record 16: Discarded - all columns null. Record 21: Discarded - all columns null. Record 22: Discarded - all columns null. Record 23: Discarded - all columns null. </i> What am I really missing here? Is it not possible to load a csv into a table when a newline is present in the data itself? Thanks in advance. Regards, Soumya
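One avenue worth testing: if the physical record terminator differs from the embedded newlines (e.g. records end in Windows CRLF while the embedded breaks are bare LF, or vice versa), SQL*Loader's "str" clause in the INFILE line tells it to split records only on the full terminator string. A sketch to adapt to the file's actual line endings:

```sql
LOAD DATA
INFILE '<root_path>/Dummy.csv' "str '\r\n'"
INSERT INTO TABLE Dummy_Load
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
  d_id,
  d_data_a,
  d_data_b
)
```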
Categories: DBA Blogs

Sorting on columns with many data points having same value

Tom Kyte - Tue, 2021-04-13 21:46
We are trying to confirm the expected behavior of Oracle for how it handles sorting on a column where many of the data points have the same value. For example, if I queried the following table and sorted on the Data column, I might get a changing result order on each query, like the two iterations below. (First query) <code>| ID | Data | ----------------- | 1 | James | | 2 | James | | 3 | James |</code> (Second query) <code>| ID | Data | ----------------- | 3 | James | | 1 | James | | 2 | James |</code> We believe it is expected that this can happen and that Oracle does not guarantee the order of rows for this type of query. Regardless, we know we can update the application to add a second sort on a unique column, but that is a huge development effort. So we wanted to know if perhaps there is something we can do on the database side to avoid making large application changes?
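For reference, SQL indeed guarantees no particular order among rows with equal sort keys; the textbook fix is a unique tie-breaker in the ORDER BY itself (a sketch using the columns from the example):

```sql
select id, data
from t
order by data, id;  -- id is unique, so ties on data now resolve deterministically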
Categories: DBA Blogs

Presenting At The “Oracle Horizon Breakfast Series” Throughout May in Canberra

Richard Foote - Mon, 2021-04-12 20:38
  If you’re lucky enough to be based close to sunny Canberra, Australia, please join me in cooperation with Oracle Corporation for the “Oracle Horizons Breakfast Series” I’m running throughout May 2021 at the Canberra Hyatt Hotel. Enjoy breakfast while I discuss some exciting Oracle Database related topics: 5 May  : Innovate with Oracle Database […]
Categories: DBA Blogs

Limiting query execution based on sql id

Tom Kyte - Mon, 2021-04-12 09:06
Hi, Thank you for reviewing my question. Two-node RAC database, version 11.2.0.4. 1. The query is generated at the application server. 2. Persistent connection to the database using the same account. 3. The query is using a lot of IO. When there are more than 40 executions of the same query, the database slows down dramatically. 4. Is there a method to limit/queue/kill query execution based on sql_id? 5. Resource Manager limiting based on IO does not work for us since the application uses the same database account and there are other queries running with high IO usage.
Categories: DBA Blogs

EDITION and cross edition triggers

Tom Kyte - Mon, 2021-04-12 09:06
Hi Tom, Can you explain what an 'EDITION' is and what is meant by cross-edition triggers? How can it be useful for high availability? Thanks for your replies. It is a big learning experience.
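As a minimal illustration of the feature (a sketch; the edition name is made up), an edition is created as a child of ora$base and selected per session, which is what lets old and new versions of PL/SQL coexist during an online upgrade:

```sql
CREATE EDITION release_2 AS CHILD OF ora$base;
ALTER SESSION SET EDITION = release_2;
-- recompile editioned PL/SQL here; sessions still on ora$base
-- keep running the previous version until they switch editions
```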
Categories: DBA Blogs

Multiple Pdf merge

Tom Kyte - Mon, 2021-04-12 09:06
Hi, Can we merge multiple PDF documents into one PDF using a PL/SQL function? If so, please suggest a solution. Thank You, Vidya K Bangalore
Categories: DBA Blogs

Advantage and/or Disadvantage of using Autoextend

Tom Kyte - Mon, 2021-04-12 09:06
Tom, I have a few questions related to autoextend: 1. Can you tell me the advantages and/or disadvantages of having "autoextend on" for every datafile I create in the database? 2. If I have a tablespace with all datafiles created with autoextend on, and then I turn it off after all the datafiles have filled, would this have any impact on the datafile or tablespace? Thanks!
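For reference, autoextend is set per datafile, and adding a MAXSIZE cap is the usual compromise between convenience and a runaway file (the path below is a placeholder):

```sql
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 10G;

-- turning it off later is equally per-datafile
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/users01.dbf'
  AUTOEXTEND OFF;
```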
Categories: DBA Blogs

Issue with shared pool and query performance

Tom Kyte - Mon, 2021-04-12 09:06
Hello, Ask TOM Team. I have an application (like a reporting tool) that reads data from an 18c Oracle Database. One day the results return quickly, but a couple of days later they get slow. When I execute the "ALTER SYSTEM FLUSH SHARED_POOL" command, the results return quickly again. What could be the problem? Thanks in advance. Regards,
Categories: DBA Blogs

UTL_FILE folders and directories

Tom Kyte - Mon, 2021-04-12 09:06
Hello. I have a query that produces a result. We need to automate turning the result into a text file with pipe (|) delimited characters that will be used in another process. The problem is that creating this file using the UTL_FILE tool stores it in folders on the database server side. I don't have access to the database server, and the tool won't allow me to create the file in a local folder on my computer or any other server. Our Oracle database runs on a Unix Solaris server, and all clients are on Windows platforms (server and PCs). I have researched everywhere for a way to create any type of file from the query, like SPOOL and 3rd-party development (Java, .NET), but it is either complicated or the output is not formatted appropriately. The best tool so far has been UTL_FILE, but without the ability to create and store the file on a server other than the database server, it seems very difficult to automate the process, especially with the UNIX/Windows separation. So the question: what other ways can I create the pipe (|) delimited text file and store it locally? The development tool we use is PL/SQL Developer.
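One client-side option: SQL*Plus runs on your PC, so SPOOL writes its file locally, sidestepping the server-side restriction of UTL_FILE entirely. A sketch (the query, columns, and path are placeholders):

```sql
set heading off feedback off pagesize 0 linesize 32767 trimspool on
spool C:\exports\result.txt
select col1 || '|' || col2 || '|' || col3 from my_table;
spool off
```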
Categories: DBA Blogs

Python with Oracle using sqlalchemy and cx_oracle

Hemant K Chitale - Sun, 2021-04-11 09:46

Here is a simple demo of using Python with the sqlalchemy and cx_Oracle libraries.


This is the code:

#import required libraries
import pandas as pd
import cx_Oracle
import sqlalchemy
from sqlalchemy.exc import SQLAlchemyError

#setup connection
try:
    oracle_pdb = sqlalchemy.create_engine(
        "oracle+cx_oracle://hemant:hemant@localhost/?service_name=orclpdb1",
        arraysize=100)
except SQLAlchemyError as e:
    print(e)

#setup query and pandas dataframe for results
try:
    employees_query = """SELECT * FROM hr.employees order by employee_id"""
    df_employees = pd.read_sql(employees_query, oracle_pdb)
except SQLAlchemyError as e:
    print(e)

#Info on the dataframe
print(df_employees.info())
#the first five rows
print(df_employees.head())

#create a new dataframe with a subset of columns
#(copy() so that the in-place drop() does not also alter df_employees)
df_emp_selected_cols = df_employees.copy()
df_emp_selected_cols.drop(['email','phone_number','salary','commission_pct','manager_id','department_id'],
                          axis=1, inplace=True)
print(df_emp_selected_cols.head())


And here is the output from my database:

Info on the dataframe

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 108 entries, 0 to 107
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 employee_id 108 non-null int64
1 first_name 108 non-null object
2 last_name 108 non-null object
3 email 108 non-null object
4 phone_number 107 non-null object
5 hire_date 108 non-null datetime64[ns]
6 job_id 108 non-null object
7 salary 107 non-null float64
8 commission_pct 35 non-null float64
9 manager_id 106 non-null float64
10 department_id 107 non-null float64
dtypes: datetime64[ns](1), float64(4), int64(1), object(5)
memory usage: 9.4+ KB
None
The first 5 rows
employee_id first_name last_name ... commission_pct manager_id department_id
0 100 Steven King ... NaN NaN 90.0
1 101 Neena Kochhar ... NaN 100.0 90.0
2 102 Lex De Haan ... NaN 100.0 90.0
3 103 Alexander Hunold ... NaN 102.0 60.0
4 104 Bruce Ernst ... NaN 103.0 60.0

[5 rows x 11 columns]
With selected columns only
employee_id first_name last_name hire_date job_id
0 100 Steven King 2003-06-17 AD_PRES
1 101 Neena Kochhar 2005-09-21 AD_VP
2 102 Lex De Haan 2001-01-13 AD_VP
3 103 Alexander Hunold 2006-01-03 IT_PROG
4 104 Bruce Ernst 2007-05-21 IT_PROG


Once you are familiar with this method, you can use numpy, matplotlib and a host of other python libraries with the dataset.

This article by Yuli Vasiliev is a good starter.


Categories: DBA Blogs

SymPy Tutorial Repository

Bobby Durrett's DBA Blog - Fri, 2021-04-09 11:02

I have been playing with the Python SymPy package and created a repository with my test scripts and notes:

https://github.com/bobbydurrett/sympytutorial

Might be helpful to someone. I just got started.

I had used Maxima before. SymPy and Maxima are both what Wikipedia calls “Computer Algebra Systems.” They have a nice list here:

https://en.wikipedia.org/wiki/List_of_computer_algebra_systems

I got a lot of use out of Maxima, but I think it makes sense to switch to SymPy because it is written in Python and works well with other mainstream Python packages that I use, like Matplotlib. They both fall under the SciPy umbrella of related tools, so for me, if I need some computer algebra, I probably should stick with SymPy.
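For a taste of what these systems do, here is a minimal SymPy example (assumes the sympy package is installed; the expression is just an illustration):

```python
import sympy as sp

# declare a symbolic variable and build an expression
x = sp.symbols('x')
expr = (x + 1)**2

expanded = sp.expand(expr)            # x**2 + 2*x + 1
derivative = sp.diff(expr, x)         # equivalent to 2*x + 2
roots = sp.solve(sp.Eq(expr, 0), x)   # [-1]

print(expanded)
print(derivative)
print(roots)
```

The same objects can then be plotted with Matplotlib or lambdified into NumPy functions.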

Maxima and SymPy are both free.

Bobby

Categories: DBA Blogs

Subscribe to Oracle FAQ aggregator - DBA Blogs