Just married :)

I think and hope I will not be around for a while :)

For details of this huge change in my life, please check my personal blog here.

And thanks to everybody for their good wishes; warm regards from İstanbul to all.

OracleTurk Hints: Running Huge SQL Files from SQL*Plus and Best Practices for Deleting Huge Amounts of Data

Last week there were several good hints on the local user group’s mailing list. I chose to mention two of them here for public interest. Let’s assume a vendor provides you with a SQL script of ~500 MB; with unix FIFOs you can reduce the memory needed to run this kind of script.


-- Posted by Ogün Heper; Using FIFO to Run Huge SQL Files from SQL*Plus 

oracle@camel # cat s.sql 
select * from dual;
exit
oracle@camel # mkfifo f.sql
oracle@camel # cat s.sql >f.sql &
[1] 1224
oracle@camel # sqlplus / as sysdba @f.sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 24 14:38:07 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


D
-
X

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[1]+  Done                    cat s.sql >f.sql
oracle@camel # 

Huge INSERTs can be done via direct path with the APPEND hint, taking the related segments to NOLOGGING during the load, but huge DELETEs and UPDATEs are problematic with Oracle since they produce lots of redo and undo I/O. So a very common question is how to handle these tasks. If you have licensed the Partitioning option, dropping old date-ranged partitions is a good workaround for DELETEs. For both huge DELETEs and UPDATEs the very common advice is a CTAS (Create Table As Select), since it can be done NOLOGGING. Instead of deleting 50 million rows out of 250 million, creating a new table with the remaining 200 million rows with the NOLOGGING option is usually preferred; a similar path is always an option for UPDATEs. But the table holding the data may have lots of grants, triggers etc., and if you do not want to mess with migrating this metadata to a new table, the actions below can be a workaround.


-- Posted by Fatih Özerol
-- let's assume we want to delete 41 mil. out of 50 mil. rows
-- PQ may decrease the response time if the system is not heavy-loaded already
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL DDL;
-- at this point you have to make the table read-only so as not to lose any transactions
-- initially we create a temp table with other 9 mil.
create table tab_tmp nologging parallel ... as
select /*+ parallel(t, some_dop) */ .... from ori_tab t where ... ;
-- if the table is an OLTP table and you are running on ARCHIVELOG mode 
-- better to take it to NOLOGGING and PARALLEL
alter table ori_tab nologging parallel some_dop ;
-- at this point setting only the table to nologging is not enough if you have indexes,
-- you may prefer to set each of them unusable and rebuild them after the load
-- truncate and insert the other 9 mil. rows into the target table
truncate table ori_tab reuse storage ;
-- note that NOLOGGING is a segment attribute (already set above), not a valid INSERT clause
insert /*+ append parallel(a, some_dop) */ into ori_tab a
select /*+ parallel(b, some_dop) */ * from tab_tmp b ;
commit;
-- if the table is an OLTP table revert back the settings
alter table ori_tab NOPARALLEL LOGGING ;
-- at this point you can alter the table to read-write again
-- if any indexes were set unusable you must rebuild them now
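As a sketch of the index handling mentioned in the comments above (the index name idx_ori_tab_1 is just a placeholder, and some_dop is the same placeholder degree used in the listing), the unusable/rebuild steps could look like this:

-- before the direct path insert, mark the indexes unusable to skip index maintenance during the load
alter index idx_ori_tab_1 unusable;
-- allow DML on the table while its indexes are unusable (the default is already TRUE in 10g and above)
alter session set skip_unusable_indexes = true;

-- ... run the truncate and the insert /*+ append */ shown above ...

-- after the load, rebuild without logging and in parallel, then revert the attributes
alter index idx_ori_tab_1 rebuild nologging parallel some_dop;
alter index idx_ori_tab_1 logging noparallel;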

Some additional reading I can recommend:

Direct path inserts, nologging option and index cost (setting indexes to unusable during load) – https://tonguc.wordpress.com/2007/01/20/direct-path-inserts-nologging-option-and-index-cost/

Preventing DDL operations on a table (Read-Only Table prior to 11gR1) – https://tonguc.wordpress.com/2007/09/25/preventing-ddl-operations-on-a-table/

Interval Partitioning New Feature and Logging Option Problem

One of my old friends, who has been an Oracle DBA for at least 15 years, always argued with me that “every new feature of Oracle is an evil!” :) He advised, for example, that it is not safe to upgrade a production environment to 10gR2 until 11gR2 is released on your platform. Personally I have always loved new features and tried to test them as soon as possible. I accept, and have also experienced, that there are always bugs around new features, especially when they are combined with some other feature, but this is why we have Oracle Support.

Recently on our 11.1 ODS production database we saw that the automatically created partitions inherit the LOGGING attribute of neither the table nor the tablespace they belong to; they are always created with LOGGING enabled. If you are trying to benefit from direct path inserts while your database is running in ARCHIVELOG mode and the target segments are LOGGING, you will fail to get that benefit and will produce lots of redo, depending on the size of the data you are inserting. So an SR was opened and the bug escalated; a typical process.
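A minimal sketch to observe the behaviour (the table, column and partition names below are just illustrative, not from our production system):

-- create an interval partitioned table with NOLOGGING at the table level
create table sales_iv (
  sale_date date,
  amount    number
)
nologging
partition by range (sale_date) interval (numtoyminterval(1, 'MONTH'))
( partition p_first values less than (date '2009-01-01') );

-- this insert triggers the automatic creation of a new partition
insert into sales_iv values (date '2009-06-15', 100);
commit;

-- compare the table level default with the automatically created partition's attribute
select def_logging from user_part_tables where table_name = 'SALES_IV';
select partition_name, logging from user_tab_partitions where table_name = 'SALES_IV';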

Until 11.1 we had our own interval partitioning maintenance scripts, and we still stick to them, especially because of the automatically created partitions’ naming. This is something I think will evolve in future releases, since anybody who uses the partitioning feature loves to name their partitions and is not happy with anonymous partition names. But there is no evil in this; it is the life cycle of any software I think: a new feature comes out, some people test it/consume it and find bugs or report additional feature requests, and the producer handles these within future releases or patches.

ps: for exchanging anonymously named partitions you may use the EXCHANGE PARTITION FOR clause as a workaround, still with several limitations; you may check this post for more details.
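A hedged sketch of that workaround, assuming the sales_iv table from the example above and a staging table with a matching structure (both names are illustrative):

-- exchange the automatically created (anonymous) partition containing the given date
alter table sales_iv
  exchange partition for (date '2009-06-15')
  with table sales_iv_stage;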

Instrumented Code is Better: An Example of How to Generate Session-Level ASH Reports

After 10g, if you have its license, ASH-based performance monitoring is an alternative to traditional SQL tracing in my opinion. You may either use EM to generate ASH reports or query the ASH dictionary directly. The handy additional hint here is that you can set filters on ASH data in order to focus on a specific session; the filter options on the ASH report screen in EM can be used for this purpose, or below is an example of monitoring the waits from the v$active_session_history view.


-- at session 1
begin
	dbms_session.set_identifier('your_identifier');
	/* other useful options may be */
	dbms_application_info.set_action('your_action');
	dbms_application_info.set_client_info('your_client_info');
	dbms_application_info.set_module('your_module', 'your_action');
	/* */
end;
/

select ... ;

-- while the query at session 1 continues to execute, switch to session 2
-- note that no commit is needed to see the client_identifier set from the first session (these are session attributes, not transactional data)
SELECT sid,
       client_identifier,
       module,
       action,
       client_info
  FROM v$session
 WHERE client_identifier = 'your_identifier';
 
       SID CLIENT_IDENTIFIER                                                MODULE                                           ACTION                           CLIENT_INFO
---------- ---------------------------------------------------------------- ------------------------------------------------ -------------------------------- ----------------------------------------------------------------
       273 your_identifier                                                  your_module                                      your_action                      your_client_info

-- filtered ASH information
--
-- Now query the v$active_session_history columns you are interested in, with client_id = 'your_identifier' and any additional filters you like
--
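For example, a minimal query along those lines might look like the one below (the one-hour time window and the selected columns are just illustrative choices):

SELECT sample_time,
       session_state,
       event,
       sql_id
  FROM v$active_session_history
 WHERE client_id = 'your_identifier'
   AND sample_time > SYSDATE - 1/24
 ORDER BY sample_time;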

The dbms_session and dbms_application_info supplied packages’ set options can be very handy, as in the above example; credit goes to Mark Rittman for this hint.

[Update on July 2, 2009]
After Doug Burns’s comments (check out the pingback below) on this post, I updated the header and the example.

With this short post I wanted to share the idea that setting a client identifier in an application may help, especially during monitoring/troubleshooting, and to give an example on the ASH dictionary. Using database services, even if you are running on a single-node environment, also helps in a similar way, as a filter for instance activity reports. Additionally I wanted to highlight how much less effort it takes a database developer to get ASH information compared to 10046 SQL tracing: with 10046 tracing a typical developer will need a DBA’s or unix admin’s assistance to ftp the produced trace files, whereas simple SQL queries against the ASH dictionary will guide you to understand the waits specific to, for example, a session.
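As a sketch of that service-based filtering idea (the service name below is hypothetical), ASH samples can be narrowed down to a single service like this:

SELECT ash.sample_time, ash.session_state, ash.event, ash.sql_id
  FROM v$active_session_history ash,
       v$services s
 WHERE ash.service_hash = s.name_hash
   AND s.name = 'YOUR_SERVICE_NAME';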

So anyway, thanks to Doug Burns for taking the time to correct the ASH example in this post and for providing details on his blog.