DBA Blogs

SQL Trace and TKPROF

Tom Kyte - 4 hours 10 min ago
Hi Tom, I have seen a lot of answers from you regarding SQL trace and TKPROF for SQL performance tuning, but I am still not clear on a few things. I hope you can explain them clearly:

1. How do SQL trace and TKPROF differ from SET AUTOTRACE ON?

2. If we use SET AUTOTRACE ON and EXPLAIN PLAN, how do we analyze a particular query and tell whether it is tuned or not?

3. For SQL trace and TKPROF: if I do ALTER SESSION SET SQL_TRACE = TRUE, where does the trace file get created (my Oracle server is running on Unix)? And after formatting the trace file with TKPROF, how do I analyze the report to tell whether the query is tuned?

4. Does the order of tables in a query matter for performance? For this question I got an answer that it matters for the RBO but not for the CBO. But who decides whether the RBO or the CBO is used? Can we tell Oracle to use the CBO or the RBO, and if we can decide, when should we use each?

Sorry for the flow of questions, but I am confused on the above issues. Please answer the above. Thanks a lot. nk
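A minimal sketch for point 3, assuming a dedicated server session (directory and file-name details vary by version and platform):

<code>
alter session set tracefile_identifier = 'nk_test';
alter session set sql_trace = true;
-- ... run the query under test ...
alter session set sql_trace = false;

-- on 11g and later, the session can find its own trace file:
select value from v$diag_info where name = 'Default Trace File';

-- then, at the OS prompt on the database server:
-- tkprof <trace_file>.trc report.txt sys=no sort=exeela
</code>

In the TKPROF report, compare elapsed time, fetch-call counts, and the "disk"/"query" (physical/logical read) columns per statement; a query reading far more blocks than the rows it returns is the usual tuning candidate.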
Categories: DBA Blogs

Oracle APEX 20.2 IE 11 Deprecated

Tom Kyte - Fri, 2020-07-10 17:26
Hi, I saw the following deprecated features in the Oracle APEX 20.1 release notes: <b>6.1.4 Support for Internet Explorer 11 Deprecated: Support for Internet Explorer (IE) 11 is deprecated. Starting with release 20.2, only the current and prior major release of Microsoft Edge along with Google Chrome, Mozilla Firefox, and Apple Safari will be supported.</b> Does this mean that Oracle APEX 20.2 will not work correctly in Microsoft IE 11? Does this statement apply only to the APEX Builder, or does it also apply to all applications in the workspace? We have a lot of PCs that are still running IE 11. Thanks!
Categories: DBA Blogs

Oracle JET Paging Control in Oracle APEX

Tom Kyte - Fri, 2020-07-10 17:26
Hi, the APEX Statement of Direction (SOD) says there will be additional Oracle JET-based components. May I know whether these will include the following paging control from Oracle JET? https://www.oracle.com/webfolder/technetwork/jet/jetCookbook.html?component=pagingControl&demo=basicPagingTable I am particularly interested in replacing the pagination in classic reports with the Oracle JET paging control. I haven't seen any information about how to implement these Oracle JET components in Oracle APEX. Thank you!
Categories: DBA Blogs

How can I audit the individual functions/procedures inside package?

Tom Kyte - Fri, 2020-07-10 17:26
The Oracle audit statement <code>audit execute on [package] by access;</code> works at the package level. How can I do something like this: <code>audit execute on [package.function] by access;</code>? I need to audit the execution of individual functions inside packages with the Oracle audit options. Is it possible to audit the arguments sent to these functions too? And is it possible to select this information from SYS.AUD$ or other views? Best regards, Jery
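For reference, a sketch of what standard object auditing does support: auditing at the whole-package level, with the records visible through the audit views (this mechanism records neither the individual sub-program nor its arguments; hr.my_pkg is a hypothetical name):

<code>
audit execute on hr.my_pkg by access;

-- review the records (assumes audit_trail=db)
select username, timestamp, action_name, obj_name
from dba_audit_trail
where owner = 'HR' and obj_name = 'MY_PKG'
order by timestamp;
</code>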
Categories: DBA Blogs

ORA-06531: Reference to uninitialized collection

Tom Kyte - Fri, 2020-07-10 17:26
Hi Tom, I am getting "ORA-06531: Reference to uninitialized collection" even after initializing the collection to assign a value. Could you please help? Please find the code below.

<code>
create or replace TYPE TYP_GRP_ID is object (grp_id number);

create or replace TYPE TAB_GROUP is table of TYP_GRP_ID;

create or replace PACKAGE TEST AS
  PROCEDURE TEST;
END;

create or replace PACKAGE BODY TEST AS
  PROCEDURE TEST IS
    acc_arr TAB_GROUP;
  BEGIN
    acc_arr.EXTEND;
    acc_arr := new TAB_GROUP(null);
    acc_arr(acc_arr.LAST).grp_id := 1000;
    dbms_output.put_line(acc_arr(acc_arr.LAST).grp_id);
  END;
END;
</code>

Thanks in advance, Mathew
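The EXTEND call runs before the collection is initialized, which is what raises ORA-06531. A working sketch of the intended procedure body (each element must also be constructed before its attribute is assigned):

<code>
create or replace PACKAGE BODY TEST AS
  PROCEDURE TEST IS
    acc_arr TAB_GROUP := TAB_GROUP();           -- initialize the collection first
  BEGIN
    acc_arr.EXTEND;                             -- now EXTEND is legal
    acc_arr(acc_arr.LAST) := TYP_GRP_ID(1000);  -- construct the element itself
    dbms_output.put_line(acc_arr(acc_arr.LAST).grp_id);
  END;
END;
/
</code>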
Categories: DBA Blogs

An instance named dvil?

Tom Kyte - Fri, 2020-07-10 17:26
Hello Masters, I hope you are OK; it was impossible to ask Tom for many days... Two weeks ago I was testing, with RMAN, the "RECOVER TABLE" command. It helped me restore a dropped table from RMAN backups: a very powerful and useful feature, thanks to Oracle for that tool. I dropped the table HR.ZZRMAN01 of the PDB ORCL and, to restore it, my command was: <code>RMAN> RECOVER TABLE HR.ZZRMAN01 OF PLUGGABLE DATABASE ORCL UNTIL SCN 2331673 AUXILIARY DESTINATION '/u01/app/oracle/RMAN_TEMP' REMAP TABLE 'HR'.'ZZRMAN01':'ZZRMAN01_PREV';</code> And, surprise, on the screen I saw that Oracle created an instance named dvil...

<code>
Starting recover at 20-JUN-20
using channel ORA_DISK_1
RMAN-05026: warning: presuming following set of tablespaces applies to specified point-in-time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace ORCL:SYSTEM
Tablespace ORCL:UNDOTBS1
Tablespace UNDOTBS2
</code>

Here!

<code>
<u><b>Creating automatic instance, with SID='dvil'</b></u>

initialization parameters used for automatic instance:
db_name=ORCL12C
db_unique_name=dvil_pitr_ORCL_ORCL12C
compatible=12.2.0
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=800M
processes=200
db_create_file_dest=/u01/app/oracle/RMAN_TEMP
log_archive_dest_1='location=/u01/app/oracle/RMAN_TEMP'
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance ORCL12C

Oracle instance started

Total System Global Area     838860800 bytes
Fixed Size                     8798312 bytes
Variable Size                239079320 bytes
Database Buffers             583008256 bytes
Redo Buffers                   7974912 bytes

Automatic instance created
...
</code>

Well, my little question is just: does "dvil" mean "devil"? Do you agree with me, or am I wrong? I was so amused that I wanted to talk about it with Oracle experts :-) Have a nice day, David D.
Categories: DBA Blogs

Session wise rank ,change in value should lead to new rank

Tom Kyte - Fri, 2020-07-10 17:26
Hi Tom, I need help getting a sequence number or rank that changes whenever there is a break in a continuous run of values. Below is an example.

Input (seq, id, value):

1,1,200
2,1,200
3,1,300
4,1,200
5,1,200
6,1,500
7,1,500
8,1,700

Expected output: each run of the same (id, value) combination, in sequence order, should be assigned one value. For id 1 and value 200 the rank should be 1; at the change in the 3rd row, rnk = 2; the 4th row should then get a new rank of 3 (not 1 again), and so on.

seq,id,value,rnk
1,1,200,1
2,1,200,1
3,1,300,2
4,1,200,3
5,1,200,3
6,1,500,4
7,1,500,4
8,1,700,5
9,1,800,6

I have tried the LEAD, LAG, FIRST_VALUE, LAST_VALUE, RANK, DENSE_RANK and ROW_NUMBER analytic functions, but I could not achieve the requirement.
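A sketch of one common approach (assuming a table named t with columns seq, id and value): flag each row whose value differs from the previous row's, then take a running sum of the flags.

<code>
select seq, id, value,
       sum(chg) over (partition by id order by seq) as rnk
from (select seq, id, value,
             case
               when value = lag(value) over (partition by id order by seq)
               then 0
               else 1
             end as chg
      from t)
order by seq;
</code>

The first row of each id gets chg = 1 (LAG returns NULL there, so the equality is not satisfied), and every change in value starts a new rank.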
Categories: DBA Blogs

View raise ORA-28094 based on "dbms_redact" base-table

Tom Kyte - Fri, 2020-07-10 17:26
Dear Ask-Tom-Team, I have to apologize up front because I think I have missed something in the documentation regarding the dbms_redact setup, or the restrictions that come with it. If so, I am sorry for wasting your time just to point me in the right direction. I have a simple test case on Live SQL (but you cannot run it there, because you have no execute right on dbms_redact); please have a look at: https://livesql.oracle.com/apex/livesql/s/kcq634fgexodc6m6a8n4esb0l My question is about the ORA-28094 raised by the "tom_redact_view_28094" view (at statement 13). The restriction I could not find, and to be honest do not understand, is this: the above-mentioned view just selects all columns from the base table (which has one redacted column, ssn) and additionally adds two columns together (nr1 + nr2). This addition seems to be enough to raise the ORA-28094. If you do not select the "sum" column from this view, it works just fine. I have tested this on AIX with Oracle 19.5, and on Oracle 12.2 and 18.3 on Oracle Linux. All of the above show the same behaviour, which is another reason why I think I have just missed a restriction in the documentation. I know I could redact the column in the view, but I think this would defeat the purpose, because I would like to redact the base table and not 10 views based on this table. In the end my tests would go a step further, because my initial setup started with dbms_tsdp_* using dbms_redact, but I think that dbms_redact is the restricting part, so I have simplified this test case. I hope the test case and my explanation are sufficient for you to reproduce the behaviour. And, as stated above, if I have missed something in the documentation and you can point me in the right direction, I am grateful and sorry! Thank you in advance for your time and help! best regards, Tom
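For readers without access to the Live SQL link, a hypothetical reconstruction of the scenario (table, column and policy names are illustrative, not the original test case):

<code>
create table tom_redact_tab (ssn varchar2(11), nr1 number, nr2 number);

begin
  dbms_redact.add_policy(
    object_schema => user,
    object_name   => 'TOM_REDACT_TAB',
    policy_name   => 'REDACT_SSN',
    column_name   => 'SSN',
    function_type => dbms_redact.full,
    expression    => '1=1');
end;
/

create or replace view tom_redact_view_28094 as
select t.*, nr1 + nr2 as sum_col
from tom_redact_tab t;

-- selecting sum_col from the view raises ORA-28094;
-- selecting only the base columns works fine
</code>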
Categories: DBA Blogs

Inserting data into destination table from source if data is not present in destination table, taking more time due to large volume of data

Tom Kyte - Fri, 2020-07-10 17:26
sample example: /* This procedure takes approx. 20-25 minutes to complete because it selects a large volume of data. Is there any way to reduce the execution time? */

<code>
procedure sample (a_in in varchar2)
is
  v_row  number;
  v1_row number;
  v2_row number;

  cursor c1 is
    select a_value, b_value, ..
    from source_table <where condition>;
  /* cursor c1 selects 46 million records, but only a few records are
     inserted into the two destination tables below, based on conditions;
     source_table is a FORCE view */
begin
  for i in c1 loop
    v_row := 0;
    select count(1) into v_row
    from table_item
    where item = i.a_value || '_' || a_in;

    if v_row > 0 then
      select count(1) into v1_row
      from destination_table1
      where item1 = (i.b_value || '_' || a_in);

      if v1_row = 0 then
        insert into destination_table1 (a_value, b_value)
        values (i.a_value, i.b_value);
        commit;
      end if;

      if i.b_value is not null then
        v2_row := 0;
        select count(1) into v2_row
        from destination_table2
        where item2 = (i.a_value || '_' || a_in)
          and item3 = (i.b_value || '_' || a_in);

        if v2_row = 0 then
          insert into destination_table2 (item2, item3)
          values (i.a_value || '_' || a_in, i.b_value || '_' || a_in);
          commit;
        end if;
      end if;
    end if;
  end loop;
end sample;
</code>

/* this procedure takes approx. 20 minutes to complete */
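A set-based sketch of the same logic, assuming the table and column names from the question (with :a_in standing in for the parameter): one INSERT ... SELECT per destination table turns the per-row existence checks into anti-joins, which usually beats looping over 46 million rows.

<code>
insert into destination_table1 (a_value, b_value)
select distinct s.a_value, s.b_value
from source_table s            -- plus the cursor's original WHERE condition
where exists (select null from table_item ti
              where ti.item = s.a_value || '_' || :a_in)
and not exists (select null from destination_table1 d1
                where d1.item1 = s.b_value || '_' || :a_in);

insert into destination_table2 (item2, item3)
select distinct s.a_value || '_' || :a_in, s.b_value || '_' || :a_in
from source_table s            -- plus the cursor's original WHERE condition
where s.b_value is not null
and exists (select null from table_item ti
            where ti.item = s.a_value || '_' || :a_in)
and not exists (select null from destination_table2 d2
                where d2.item2 = s.a_value || '_' || :a_in
                and d2.item3 = s.b_value || '_' || :a_in);

commit;
</code>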
Categories: DBA Blogs

Adding a Function-Based Index

Hemant K Chitale - Thu, 2020-07-09 10:51
What happens when you create a Function Based Index ?

{This demo is in 12.2}

I start with my "large" table.

SQL> create table my_data_table 
2 as select * from dba_objects
3 union all select * from dba_objects;

Table created.

SQL> select count(*) from my_data_table;

COUNT(*)
----------
157408

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL> select num_rows, avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

NUM_ROWS AVG_ROW_LEN
---------- -----------
157408 132

SQL>


I then add a Function Based Index on it.

SQL> create index my_data_fbi   
2 on my_data_table(lower(OBJECT_NAME))
3 /

Index created.

SQL>


I now look at Table and Column statistics

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> desc my_data_table
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(23)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NUMBER
EDITION_NAME VARCHAR2(128)
SHARING VARCHAR2(18)
EDITIONABLE VARCHAR2(1)
ORACLE_MAINTAINED VARCHAR2(1)
APPLICATION VARCHAR2(1)
DEFAULT_COLLATION VARCHAR2(100)
DUPLICATED VARCHAR2(1)
SHARDED VARCHAR2(1)
CREATED_APPID NUMBER
CREATED_VSNID NUMBER
MODIFIED_APPID NUMBER
MODIFIED_VSNID NUMBER

SQL>
SQL> l
1 select column_name, num_nulls,to_char(last_analyzed,'DD-MON HH24:MI')
2 from user_tab_columns
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME NUM_NULLS TO_CHAR(LAST_ANALYZED
------------------------------ ---------- ---------------------
OWNER 0 09-JUL 22:43
OBJECT_NAME 0 09-JUL 22:43
SUBOBJECT_NAME 155954 09-JUL 22:43
OBJECT_ID 2 09-JUL 22:43
DATA_OBJECT_ID 139120 09-JUL 22:43
OBJECT_TYPE 0 09-JUL 22:43
CREATED 0 09-JUL 22:43
LAST_DDL_TIME 2 09-JUL 22:43
TIMESTAMP 2 09-JUL 22:43
STATUS 0 09-JUL 22:43
TEMPORARY 0 09-JUL 22:43
GENERATED 0 09-JUL 22:43
SECONDARY 0 09-JUL 22:43
NAMESPACE 2 09-JUL 22:43
EDITION_NAME 157408 09-JUL 22:43
SHARING 0 09-JUL 22:43
EDITIONABLE 101970 09-JUL 22:43
ORACLE_MAINTAINED 0 09-JUL 22:43
APPLICATION 0 09-JUL 22:43
DEFAULT_COLLATION 125494 09-JUL 22:43
DUPLICATED 0 09-JUL 22:43
SHARDED 0 09-JUL 22:43
CREATED_APPID 157408 09-JUL 22:43
CREATED_VSNID 157408 09-JUL 22:43
MODIFIED_APPID 157408 09-JUL 22:43
MODIFIED_VSNID 157408 09-JUL 22:43

26 rows selected.

SQL>
SQL> l
1 select column_name, num_nulls,to_char(last_analyzed,'DD-MON HH24:MI')
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME NUM_NULLS TO_CHAR(LAST_ANALYZED
------------------------------ ---------- ---------------------
OWNER 0 09-JUL 22:43
OBJECT_NAME 0 09-JUL 22:43
SUBOBJECT_NAME 155954 09-JUL 22:43
OBJECT_ID 2 09-JUL 22:43
DATA_OBJECT_ID 139120 09-JUL 22:43
OBJECT_TYPE 0 09-JUL 22:43
CREATED 0 09-JUL 22:43
LAST_DDL_TIME 2 09-JUL 22:43
TIMESTAMP 2 09-JUL 22:43
STATUS 0 09-JUL 22:43
TEMPORARY 0 09-JUL 22:43
GENERATED 0 09-JUL 22:43
SECONDARY 0 09-JUL 22:43
NAMESPACE 2 09-JUL 22:43
EDITION_NAME 157408 09-JUL 22:43
SHARING 0 09-JUL 22:43
EDITIONABLE 101970 09-JUL 22:43
ORACLE_MAINTAINED 0 09-JUL 22:43
APPLICATION 0 09-JUL 22:43
DEFAULT_COLLATION 125494 09-JUL 22:43
DUPLICATED 0 09-JUL 22:43
SHARDED 0 09-JUL 22:43
CREATED_APPID 157408 09-JUL 22:43
CREATED_VSNID 157408 09-JUL 22:43
MODIFIED_APPID 157408 09-JUL 22:43
MODIFIED_VSNID 157408 09-JUL 22:43
SYS_NC00027$ 0 09-JUL 22:43

27 rows selected.

SQL>

SQL> l
1 select column_name, avg_col_len
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME AVG_COL_LEN
------------------------------ -----------
OWNER 6
OBJECT_NAME 34
SUBOBJECT_NAME 2
OBJECT_ID 5
DATA_OBJECT_ID 2
OBJECT_TYPE 10
CREATED 8
LAST_DDL_TIME 8
TIMESTAMP 20
STATUS 7
TEMPORARY 2
GENERATED 2
SECONDARY 2
NAMESPACE 3
EDITION_NAME 0
SHARING 13
EDITIONABLE 2
ORACLE_MAINTAINED 2
APPLICATION 2
DEFAULT_COLLATION 4
DUPLICATED 2
SHARDED 2
CREATED_APPID 0
CREATED_VSNID 0
MODIFIED_APPID 0
MODIFIED_VSNID 0
SYS_NC00027$ 34

27 rows selected.

SQL>
SQL> select num_rows, avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

NUM_ROWS AVG_ROW_LEN
---------- -----------
157408 166

SQL>


A new column "SYS_NC00027$" appears in USER_TAB_COLS but not in USER_TAB_COLUMNS.  The new column is also not visible when I run a "DESCRIBE" command.

Also, the AVG_ROW_LEN has increased by 34  (seeing as I do not have any objects with long names) to reflect the addition of the new virtual column.
But did all the blocks actually get re-written ?  Are the "lower(OBJECT_NAME)" values written into each table block as well ?

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> l
1 select obj#, savtime, rowcnt, blkcnt, avgrln,samplesize,to_char(analyzetime,'DD-MON-RR HH24:MI:SS')
2 from sys.WRI$_OPTSTAT_TAB_HISTORY
3 where obj# =
4 (select object_id
5 from dba_objects
6 where owner = 'HEMANT'
7 and object_type = 'TABLE'
8 and object_name = 'MY_DATA_TABLE')
9* order by 1,2
SQL> /

OBJ# SAVTIME ROWCNT BLKCNT AVGRLN SAMPLESIZE TO_CHAR(ANALYZETIME,'DD-MON
---------- ---------------------------- ---------- ---------- ---------- ---------- ---------------------------
79843 09-JUL-20 10.39.03.789392 PM 157408 3106 132 157408 09-JUL-20 22:37:53
79843 09-JUL-20 10.43.59.424420 PM 157408 3106 132 157408 09-JUL-20 22:39:03
79843 09-JUL-20 11.02.35.088733 PM 157408 3106 166 157408 09-JUL-20 22:43:59

SQL>


Apparently, the actual number of blocks did not increase.

SQL> l
1 select segment_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_DATA%'
4* order by 1,2
SQL> /

SEGMENT_NAME SEGMENT_TYPE BYTES/1024
------------------------------ ------------------ ----------
MY_DATA_FBI INDEX 8192
MY_DATA_TABLE TABLE 25600

SQL>
SQL>
SQL> l
1 select index_name, column_name
2 from user_ind_columns
3* where index_name = 'MY_DATA_FBI'
SQL> /

INDEX_NAME COLUMN_NAME
------------------------------ ------------------------------
MY_DATA_FBI SYS_NC00027$

SQL>


So, the Index does take up space as an Index segment but the Virtual Column ("lower(OBJECT_NAME)") created on the Table does not grow the table.  What seems to happen is that the "computed" value ("lower(OBJECT_NAME)") is stored in the Index segment (leaf blocks) but not in the Table segment (blocks).
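(A quick dictionary check, not part of the run above, can confirm which expression the hidden column represents; for this index, COLUMN_EXPRESSION should show LOWER("OBJECT_NAME") :)

select index_name, table_name, column_expression
from user_ind_expressions
where index_name = 'MY_DATA_FBI';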


What happens if I drop the Function Based Index ?

SQL> drop index my_data_fbi;

Index dropped.

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
2 from user_tab_columns
3 where table_name = 'MY_DATA_TABLE'
4 /

COUNT(*)
----------
26

SQL> select count(*)
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4 /

COUNT(*)
----------
26

SQL> select avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

AVG_ROW_LEN
-----------
132

SQL>


The additional column and the 34 bytes it represents have disappeared from the Table and Column statistics.



Categories: DBA Blogs

PL/SQL to get DDL for all user accounts except system accounts using a cursor and loop

Tom Kyte - Tue, 2020-07-07 16:06
I am not sure what I am doing wrong with this statement:

<code>
Declare
  cursor cur_dba_users IS
    select username from dba_users
    where username not in ('ANONYMOUS','APEX_040200','APEX_PUBLIC_USER','APPQOSSYS','AUDSYS','BI',
      'CTXSYS','DBSNMP','DIP','DVF','DVSYS','EXFSYS','FLOWS_FILES','GSMADMIN_INTERNAL','GSMCATUSER',
      'GSMUSER','HR','IX','LBACSYS','MDDATA','MDSYS','OE','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS',
      'OUTLN','PM','SCOTT','SH','SI_INFORMTN_SCHEMA','SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR',
      'SYS','SYSBACKUP','SYSDG','SYSKM','SYSTEM','WMSYS','XDB','SYSMAN','RMAN','RMAN_BACKUP','OWBSYS',
      'OWBSYS_AUDIT','APEX_030200','MGMT_VIEW','OJVMSYS');
Begin
  For cur_username in cur_dba_users Loop
    SELECT dbms_metadata.get_ddl('USER',cur_username) FROM dual;
    SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT',cur_username) from dual;
    SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',cur_username) from dual;
    SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT',cur_username) from dual;
  END LOOP;
  CLOSE cur_dba_users;
End;

Error report -
ORA-06550: line 6, column 39:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 6, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 7, column 55:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 7, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 53:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 8, column 1:
PL/SQL: SQL Statement ignored
ORA-06550: line 9, column 55:
PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 9, column 1:
PL/SQL: SQL Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
</code>
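A sketch of a working version: the loop variable is a record (so the column must be referenced as cur_username.username), and a SELECT inside PL/SQL needs an INTO clause; here the DDL is fetched into a CLOB and printed.

<code>
declare
  v_ddl clob;
begin
  for cur_username in (select username
                       from dba_users
                       where oracle_maintained = 'N')  -- 12c+ shortcut for the long NOT IN list
  loop
    v_ddl := dbms_metadata.get_ddl('USER', cur_username.username);
    dbms_output.put_line(v_ddl);  -- fine for short DDL; very long DDL needs chunking or a file
    -- GET_GRANTED_DDL raises an error when a user has no grants of the
    -- requested type, so guard each call
    begin
      v_ddl := dbms_metadata.get_granted_ddl('SYSTEM_GRANT', cur_username.username);
      dbms_output.put_line(v_ddl);
    exception
      when others then null;  -- no system grants for this user
    end;
    -- repeat the same guarded pattern for 'ROLE_GRANT' and 'OBJECT_GRANT'
  end loop;
end;
/
</code>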
Categories: DBA Blogs

Redo Log query

Tom Kyte - Tue, 2020-07-07 16:06
I understand that instead of the log generation rate, we should consider the amount of redo generated. My question is: how can we identify a database bottleneck using the amount of redo? Is there any way to compare the amount of redo generated between two days, or months, and see whether there is a need for tuning or re-sizing?
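A sketch of a day-by-day comparison using AWR history (assumes a Diagnostics Pack license; the 'redo size' statistic is cumulative since instance startup, so an instance restart inside a day will distort that day's figure):

<code>
select trunc(sn.begin_interval_time) as day,
       round((max(st.value) - min(st.value)) / 1024 / 1024) as redo_mb
from dba_hist_sysstat st
join dba_hist_snapshot sn
     on sn.snap_id = st.snap_id
     and sn.dbid = st.dbid
     and sn.instance_number = st.instance_number
where st.stat_name = 'redo size'
group by trunc(sn.begin_interval_time)
order by 1;
</code>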
Categories: DBA Blogs

Need information on tuning a join

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, I need to know how the WHERE clause is affecting my query. I have a table ABC which has a batch_id column and about 1 lakh (100,000) rows. It is truncated daily, so it holds only one batch_id at a time. table1 = ABC, table2 = XYZ.

<code>select * from ABC t1, XYZ t2 where batch_id = 111 and t1.col1 = t2.col2</code>

The above query takes 20+ minutes to retrieve the data, while the query below takes hardly 6 seconds:

<code>select * from ABC t1, XYZ t2 where t1.col1 = t2.col2;</code>

I just want to understand why it takes so long with only batch_id added to the WHERE clause.
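A useful first step for a question like this is to compare the actual plans and row counts of the fast and slow variants, for example (a sketch; the ALLSTATS LAST format needs statistics_level=all or a /*+ gather_plan_statistics */ hint in the statement):

<code>
-- run each variant, then immediately:
select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

-- or look a statement up later by its sql_id:
select * from table(dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));
</code>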
Categories: DBA Blogs

COLLECTION ITERATOR PICKLER FETCH

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, I am using DBMS_XPLAN.DISPLAY to display the execution plan of a query. After using EXPLAIN PLAN SET STATEMENT_ID ..., I am running the following query:

<code>
SET PAGESIZE 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','SAM','ALL'));

Execution Plan
----------------------------------------------------------
Plan hash value: 3013799171

---------------------------------------------------------------------------------------------
| Id  | Operation                         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |         | 32744 | 65488 |    49   (3)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| DISPLAY |       |       |            |          |
---------------------------------------------------------------------------------------------
</code>

Can you please help me understand how to avoid this "COLLECTION ITERATOR PICKLER FETCH" and get the proper explain plan? Thank you very much, Sarayu K.S.
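A likely explanation (a guess from the output shown, not a confirmed diagnosis): the "Execution Plan" section looks like SQL*Plus AUTOTRACE output, which traces the DBMS_XPLAN.DISPLAY query itself rather than showing the plan stored for the original statement. Turning AUTOTRACE off before querying the plan table avoids this:

<code>
SET AUTOTRACE OFF

EXPLAIN PLAN SET STATEMENT_ID = 'SAM' FOR
select /* the statement under investigation */ ... ;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','SAM','ALL'));
</code>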
Categories: DBA Blogs

SQL TO_CHAR returns differently between 2 Servers

Tom Kyte - Tue, 2020-07-07 16:06
Hello, I have two servers with version 12.2.0.1.0. When I run the same SQL on each, I do not get the same result for dates when I use TO_CHAR. In SQL*Plus:

<code>SELECT CONTRACTDATE, TO_CHAR(CONTRACTDATE,'YYYY/MM/DD') FROM MYTABLES WHERE CONTRACTNO=101010;</code>

The column CONTRACTDATE is a DATE.

SERVER1
CONTRACTD TO_CHAR(CO
--------- ----------
05-DEC-19 2019/12/05

SERVER2
CONTRACTD TO_CHAR(CO
--------- ----------
05-DEC-19 <b>00</b>19/12/05

As you can see, the second server shows <b>0019</b> for the year instead of the expected <b>2019</b>. Any idea why I have such behaviour? Thank you.

PS: I have checked the NLS parameters and they are identical:

<code>SELECT DB.PARAMETER, DB.VALUE "DATABASE", I.VALUE "INSTANCE", S.VALUE "SESSION"
FROM NLS_DATABASE_PARAMETERS DB, NLS_INSTANCE_PARAMETERS I, NLS_SESSION_PARAMETERS S
WHERE DB.PARAMETER=I.PARAMETER(+)
AND DB.PARAMETER=S.PARAMETER(+)
ORDER BY 1;</code>
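One thing worth ruling out (a suggestion, not a confirmed diagnosis): the row on the second server may genuinely contain year 19 rather than 2019, for example inserted with a YY mask, and the default DD-MON-RR display format hides the century. DUMP shows the stored century byte directly:

<code>
SELECT CONTRACTDATE,
       TO_CHAR(CONTRACTDATE, 'SYYYY/MM/DD') AS full_year,
       DUMP(CONTRACTDATE)                   AS raw_bytes
FROM MYTABLES
WHERE CONTRACTNO = 101010;
-- a DATE stores century+100 and year+100 in its first two bytes:
-- 120,119 => year 2019 ; 100,119 => year 0019
</code>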
Categories: DBA Blogs

Hidden paramater: No_Invalidate

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, for a couple of months some queries in the production environment have intermittently been producing two plan hash values. Obviously one is better and the other is bad. We run a stats-gathering job daily in the morning. I have checked many other platforms and understood that there are many reasons for this behaviour. The reasons I have already checked are:

1. Parsing a SQL statement with a bind variable defined as the wrong type.
2. Not running the stats-gathering job daily.

These two reasons do not apply in my case. Beyond these, there are some more possibilities I wanted to ask you about.

a. Invalidation of cursors. I understand that this is handled by the NO_INVALIDATE parameter, which is set to AUTO_INVALIDATE for me. So even though my stats-gathering job has produced a bad plan hash value, it will not take effect immediately.

b. Adaptive cursor sharing. How can I confirm whether this feature is the reason my SQL produces multiple plan hash values?

Could you please help me with the above understanding and advise on points a and b? Thanks a lot.
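Two quick checks, sketched below: the NO_INVALIDATE preference can be read (and changed) via DBMS_STATS, and V$SQL flags whether adaptive cursor sharing is involved for a given SQL_ID.

<code>
-- current stats-invalidation behaviour
select dbms_stats.get_prefs('NO_INVALIDATE') from dual;
-- to invalidate dependent cursors immediately at the next gather:
-- exec dbms_stats.set_global_prefs('NO_INVALIDATE','FALSE');

-- is adaptive cursor sharing producing the extra child cursors?
select sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware, executions
from v$sql
where sql_id = '&sql_id';
</code>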
Categories: DBA Blogs

Special character constant package after deployment

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, I have a constants package where I define a constant whose value is '<b>#</b>' (pound). Running the package script in the dev environment, the package is created with no errors and does not show any special character in the constant. But running the same package script in the UT environment, the package is created with no errors but the constant value shows as '<b>A#</b>'. How can I resolve this issue? I came to know it is a character-set difference. Thanks in advance.

Code:

<code>create or replace package pk_constants is
  lv_pound constant char(1) := '#'; -- for Germany and other European countries
  ......
  .....
end pk_constants;
/</code>
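A sketch of one way to sidestep client character-set conversion (this assumes the intended character is the pound-sterling sign, U+00A3; the 'A#' seen in UT looks like the two UTF-8 bytes of that character displayed in a single-byte character set): define the constant by code point instead of embedding the literal character in the script file.

<code>
create or replace package pk_constants is
  -- U+00A3 POUND SIGN; independent of the encoding the script file is
  -- saved in and of the client NLS_LANG used to compile it (the database
  -- character set must still be able to represent the character)
  lv_pound constant varchar2(1 char) := unistr('\00A3');
end pk_constants;
/
</code>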
Categories: DBA Blogs

Simple Performance Tuning Example

Bobby Durrett's DBA Blog - Tue, 2020-07-07 10:41

I helped resolve an Oracle database performance problem yesterday and I thought I would capture what I did in this blog post. It is a “simple” problem in that the solution was easy once I understood the issue and because this sort of problem happens often. But the process to get to the solution is not completely simple. I think documenting the process may have more value than writing down the details of this specific problem.

Today is Tuesday. Thursday last week our development team was doing some testing and performance of the system was terrible. I was busy working on another project that is my primary assignment right now, so I did not look at it. It was also Thursday before the July 4th holiday weekend and we are all mentally and physically checking out a bit early. Some key people were already off on vacation. So, we shut it all down until Monday (yesterday). Also, late Thursday there was some thought that there was a system issue maybe with our virtual machines’ host or networking. So, we let it go. Monday our dev team said they thought the slowness was caused by bad data including duplicates that were throwing errors on a target system (non-Oracle) in the cloud. So, they cleaned up the data and started a new run. But it was slow as molasses and the application front end was also super slow.

In retrospect, looking at the paragraph above, I bet a lot of DBAs with performance tuning experience would jump to the solution. I can think off the top of my head of a few types of things that might cause these symptoms. But I did not immediately think of it so I went through a process to get there. No leap of intuition was required except maybe at the very end.

Since the problem query was currently running the first step was to look at the session monitor in Toad. I may have looked at HP Unix’s Glance tool, which is like Top, to see the CPU usage and top processes at an OS level. What I saw was a parallel query running with 8 parallel child processes and the query had been running about 30 seconds.

Step two was to take an AWR report. First, I ran a command like this to get a snapshot after the current activity:

exec dbms_workload_repository.create_snapshot;

Then I ran an AWR report for the last two snapshots.

The “SQL ordered by Elapsed Time” part of the report looked like this:

SQL ordered by Elapsed Time

The top query azrtxpx1g6mxy was the one I saw in Toad. The second query is a subselect of the top one. Averaging 51.23 seconds and spending 46.77% of its time on I/O, it sent me where I usually go with AWR reports: the "Segments by Logical Reads" section. Here is part of that section:

Segments by Logical Reads

I hid the table names, but the point is that the top segment was doing 99.67% of the logical I/Os. So, the issue had to do with I/O. But looking at the “Top 5 Timed Foreground Events” section I/O seems very efficient.

Foreground Events

The top wait was the I/O event "direct path read", which can be deceptively low on systems that use asynchronous I/O. But this system uses HP Unix file system I/O, which does not support asynchronous I/O, so the direct path read number is correct. One of the possibilities on my list before I started was some sort of I/O contention, but this report pointed away from the I/O hardware.

So, the AWR report and my look at the Toad session monitor point to a SQL statement doing parallel full scans of a table and running for tens of seconds at a time. So, I looked at the execution plan next. I like to use my getplans.sql SQL*Plus script to extract a query's plan from the AWR using its SQL_ID. I ran this in SQL*Plus:

@getplans azrtxpx1g6mxy

Sure enough, the same top table was running a parallel full table scan in the plan.

Somewhere in this process I did some counts. I think the simplicity of select count(*) queries may get in the way of people using them, but count(*) queries are one of the biggest helps to me when tuning a SQL statement. I did two types of counts for this case. I did a select count(*) on the top table and found out that it had 160 million rows. The count took 45 seconds even though it was doing a fast full scan of an index and not the full table. Then I ran a query grouping by the column that joins the table to its parent table. This is a typical header-detail join where the child table joins the parent on the parent's unique key. Then I got the idea that the query might run faster if there were an index on the joining column.
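The two counts from that step look roughly like this (table and column names here are placeholders, not the real ones):

select count(*) from problem_detail_table;

select join_column, count(*)
from problem_detail_table
group by join_column
order by count(*) desc;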

I ran my optimizer statistics scripts to check the index columns and to look at the column statistics to see how selective the joining column is. I found that there was no index on the column that the header table used to join to the problem detail table. So, I recommended that the dev team add an index on that column and including the other columns in the where clause for good measure. They might have to try two or three different column choices and orders to get the best index.

But the dev team was skeptical. This same query runs fine in production. It is not new, so what changed? I tried running my sqlstat.sql script to get a history of the execution of SQL_ID azrtxpx1g6mxy, but there was no history on our test system going back before July 2nd. I ran it on production and there was no history at all. I found out that this SQL had not run for two months on production, and since we only keep 6 weeks of history in our AWR, any execution history of azrtxpx1g6mxy had fallen off the list.

Finally, I had a moment of insight. If I could have seen a faster plan on production, no insight would have been required. It would have been an almost mechanical process of comparing the good and bad plans. But I finally got the idea of running my same optimizer statistics scripts against production and looking at the indexes. Sure enough, production had an index on the joining column. Duh! I probably should have checked for differences between the tables and indexes in test and prod first, but I had no reason to suspect a difference. So, a coworker put the indexes back in test and everything was fine.

So, it is a simple thing. Indexes were dropped on a test system. Happens all the time. Not profound. But maybe the process of driving from problem statement through the different kinds of evidence and tools to the solution is more profound. It gets results. What does the AWR report point to? The top segments pointed to a table. The counts and column statistics suggested the need for an index. Finally, a compare of prod and test showed that an index was missing.

Bobby

Categories: DBA Blogs

A Correlated SubQuery

Hemant K Chitale - Tue, 2020-07-07 10:16
I take this simple Correlated Subquery that identifies all employees whose Salary is greater than the department's average Salary and is also greater than $9,000

SQL> l
1 select emp.department_id, emp.employee_id, emp.salary
2 from hr.employees emp
3 where emp.salary >
4 (select avg(salary)
5 from hr.employees emp_inner
6 where emp.department_id = emp_inner.department_id)
7 and emp.salary > 9000
8* order by 1,2
SQL> /

DEPARTMENT_ID EMPLOYEE_ID SALARY
------------- ----------- ----------
20 201 13000
30 114 11000
80 145 14000
80 146 13500
80 147 12000
80 148 11000
80 149 10500
80 150 10000
80 151 9500
80 156 10000
80 157 9500
80 162 10500
80 163 9500
80 168 11500
80 169 10000
80 170 9600
80 174 11000
90 100 24000
100 108 12008
110 205 12008

20 rows selected.

SQL>


The Row Source Statistics for the actual query execution are :

select emp.department_id, emp.employee_id, emp.salary
from hr.employees emp
where emp.salary >
(select avg(salary)
from hr.employees emp_inner
where emp.department_id = emp_inner.department_id)
and emp.salary > 9000
order by 1,2

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 12 0 20
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.00 0.00 0 12 0 20

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
20 20 20 FILTER (cr=12 pr=0 pw=0 time=1027 us starts=1)
23 23 23 SORT GROUP BY (cr=12 pr=0 pw=0 time=1020 us starts=1 cost=7 size=3552 card=111)
536 536 536 HASH JOIN (cr=12 pr=0 pw=0 time=917 us starts=1 cost=6 size=71648 card=2239)
23 23 23 TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=30 us starts=1 cost=3 size=1752 card=73)
108 108 108 TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=7 us starts=1 cost=3 size=864 card=108)

********************************************************************************


The Execution Plan is

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 111 | 3552 | 7 (15)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | SORT GROUP BY | | 111 | 3552 | 7 (15)| 00:00:01 |
|* 3 | HASH JOIN | | 2239 | 71648 | 6 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| EMPLOYEES | 73 | 1752 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| EMPLOYEES | 108 | 864 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("EMP"."SALARY">SUM("SALARY")/COUNT("SALARY"))
3 - access("EMP"."DEPARTMENT_ID"="EMP_INNER"."DEPARTMENT_ID")
4 - filter("EMP"."SALARY">9000)


(Since I had only 1 execution of the query while tracing was enabled, "Rows (1st)", "Rows (avg)" and "Rows (max)" all report on the same, single, execution)

The EMPLOYEES table underwent two Full Table Scans, each consisting of 6 "consistent reads"  ("cr=6"). 

The execution at line Id=4 expected to return 73 rows ("card=73") but actually returned 23 rows.  This query filtered for SALARY greater than 9000.

The execution at line Id=5 expected to return 108 rows ("card=108") and did actually return 108 rows.  There is no predicate information, meaning that all 108 rows of the table were returned, none filtered.

The join of the outer query and the inner query on DEPARTMENT_ID was the Hash Join at operation Id=3.

Do you note how the FILTER at operation ID=1 appears ?  It is filtering for SALARY greater than AVERAGE SALARY (where AVERAGE is computed as SUM of SALARY values divided by the number of not-null SALARY values).
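(For comparison, and not part of the original demo: a manual rewrite that computes the same result and makes the join-plus-aggregate shape easier to relate to the SQL. Whether the optimizer produces exactly this transformation is version-dependent.)

select emp.department_id, emp.employee_id, emp.salary
from hr.employees emp,
     (select department_id, avg(salary) avg_salary
      from hr.employees
      group by department_id) dept_avg
where emp.department_id = dept_avg.department_id
and emp.salary > dept_avg.avg_salary
and emp.salary > 9000
order by 1,2;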



Categories: DBA Blogs

ORA-1652

Pakistan's First Oracle Blog - Mon, 2020-07-06 22:01
If you are wondering what might be the reasons behind ORA-1652, you can use the following query to see which SQL statements were allocating the most temp space in the window of interest:

select sql_id, sum(temp_space_allocated)/1024/1024 as temp_mb
from dba_hist_active_sess_history
where sample_time between timestamp '2020-06-25 19:30:00'
                      and timestamp '2020-06-25 20:00:00'
group by sql_id
order by 2 desc;

Also check real-time monitoring of the sessions and the V$TEMPSEG_USAGE view, which describes temporary segment usage; some other query may be using temp and rapidly filling it.
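A sketch of such a real-time check (assumes the querying user can see V$SESSION and DBA_TABLESPACES; the join is on the session address):

select s.sid, s.serial#, u.sql_id, u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) as temp_mb
from v$tempseg_usage u
join v$session s on s.saddr = u.session_addr
join dba_tablespaces t on t.tablespace_name = u.tablespace
order by u.blocks desc;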
Categories: DBA Blogs
