Introduction to Oracle Trace Utility and Understanding the Fundamental Performance Equation

As developers, we look for ways to write applications that use only the database resources necessary to return the result(s) as quickly as possible. The method to achieve this goal is to access the least number of blocks and to reduce or eliminate waits in Oracle. Since waits may be a new domain for many, I believe this Fundamental Performance Equation can help with understanding:

Response Time = Service Time + Wait Time

Simply put, "Service Time" is the time spent on the CPU, and "Wait Time" is the non-idle time spent away from the CPU, waiting for an event to complete or a resource to become available.
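In 10g you can check this equation for your own session with the time model views; below is a minimal sketch (assuming SELECT access to the v$ views), where 'DB time' roughly corresponds to response time and 'DB CPU' to service time:

```sql
-- Time model statistics are in microseconds
select stat_name, round(value / 1e6, 2) as seconds
from   v$sess_time_model
where  sid = sys_context('userenv', 'sid')
and    stat_name in ('DB time', 'DB CPU');
-- wait time is approximately 'DB time' minus 'DB CPU'
```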

The Oracle trace facility has been with us for a long time; I remember doing my first SQL Trace for Oracle Support without knowing how valuable the information I generated was. Over the years SQL Trace has become very dominant in the proactive tuning process during the development and test phases. With 9i, the wait option was also added to TKPROF. Below is a simple template you may use for session-based tuning needs.

-- it is better to make a new connection before you start a trace each time; otherwise the new
-- information will be appended to a trace file that has already been started

conn hr/hr

-- Set these initialization parameters for your trace session to guarantee the integrity of the trace file

alter session set max_dump_file_size = unlimited;
alter session set timed_statistics = true;
alter session set statistics_level = all;
alter session set "_rowsource_execution_statistics" = true;

-- in order to separate your produced trace file easily from the others in the user_dump_dest folder of Oracle
alter session set tracefile_identifier = 'SQL_Trace';

-- also if you are after 10g you can produce session level ASH reports by the help of CLIENT_ID filter
-- call dbms_session.set_identifier('an_identifier') and use this identifier name as a filter

-- start tracing from this session
alter session set sql_trace = true;

-- also these can be used for tracing
-- execute DBMS_SESSION.SET_SQL_TRACE ({true|false});
-- execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (session_id, serial_id, {true|false});
-- execute DBMS_SYSTEM.SET_EV(session_id, serial_id, 10046, level, '');

-- Run the application that you want to trace, any SQL(s) or any PL/SQL block(s)
select sysdate, user from dual;

-- stop tracing
alter session set sql_trace = false;

-- Then go to USER_DUMP_DEST to pick up the trace file. A query that gives the name of the trace file
-- generated from your session, on Windows and without the tracefile_identifier, would be as follows
-- select c.value || '\' || d.instance_name || '_ora_' || a.spid || '.trc' trace_file_is_here
-- from v$process a, v$session b, v$parameter c, v$instance d
-- where a.addr = b.paddr
-- and b.audsid = userenv('sessionid')
-- and c.name = 'user_dump_dest' ;
-- Format the trace file with TKPROF and interpret the output.
-- $ tkprof tracefile.trc output.txt [set the options as you like]
-- tkprof D:\oraclexee\app\oracle\admin\XE\udump\xe_ora_2220_bind.trc D:\temp\xe_ora_2220_bind.txt explain=hr/hr sys=yes waits=yes sort=prscpu
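For reference, a tkprof-formatted statement summary looks like the table below (the numbers here are made up for illustration); cpu is the service time, and the gap between elapsed and cpu hints at wait time:

```
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.02          0          0          0           1
```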

I want to discuss SQL_TRACE, the 10046 event and the 10053 event as the most popular trace types. By setting a different tracefile_identifier for each and uncommenting the related start and stop tracing lines in the template, you will produce these three different trace files.
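If you prefer setting the events directly instead of SQL_TRACE, the start and stop lines for the template would be as follows (for 10046, level 8 includes waits and level 12 additionally includes binds):

```sql
-- 10046: extended SQL trace
alter session set events '10046 trace name context forever, level 8';
-- ... run the application ...
alter session set events '10046 trace name context off';

-- 10053: optimizer debug trace, dumped when a statement is hard parsed
alter session set events '10053 trace name context forever, level 1';
-- ... run the application ...
alter session set events '10053 trace name context off';
```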

Queries are different from other types of SQL statements because, if successful, they return data as results. Whereas other statements simply return success or failure, a query can return one row or thousands of rows. There are three main stages in the processing of a query:
1- Parse
2- Execute
3- Fetch

During the parse stage, the SQL statement is passed from the user process to the server process, and a parsed representation of the SQL statement is loaded into a shared SQL area. During the parse, the server process performs the following functions:
– Searches for an existing copy of the SQL statement in the shared pool
– Validates the SQL statement by checking its syntax
– Performs data dictionary lookups to validate table and column definitions
The execute stage executes the statement using the plan the optimizer chose, and the fetch stage retrieves the rows back to the user.
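The three stages can be made explicit with the DBMS_SQL API; a minimal sketch:

```sql
declare
  c  integer := dbms_sql.open_cursor;
  n  integer;
  d  date;
begin
  dbms_sql.parse(c, 'select sysdate from dual', dbms_sql.native);  -- 1- Parse
  dbms_sql.define_column(c, 1, d);
  n := dbms_sql.execute(c);                                        -- 2- Execute
  if dbms_sql.fetch_rows(c) > 0 then                               -- 3- Fetch
    dbms_sql.column_value(c, 1, d);
  end if;
  dbms_sql.close_cursor(c);
end;
/
```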

There are two types of parse operations:
1- Hard parsing: A SQL statement is submitted for the first time, and no shareable match is found in the shared pool. Hard parses are the most resource-intensive and unscalable, because they perform all the operations involved in a parse.
2- Soft parsing: A SQL statement is submitted, and a match is found in the shared pool. The match can be the result of a previous execution by another user. The SQL statement is shared, which is good for performance. However, soft parses still require syntax and security checking, which consume system resources.
When bind variables are used properly, more soft parses are possible, thereby reducing hard parses and keeping parsed statements in the library cache for a longer period. This is very important for an Oracle application to be scalable.
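In SQL*Plus the difference looks like this; the first form produces a new parent cursor for each distinct literal, while the second is shared across values (the employees table here is just a sample, e.g. from the HR schema):

```sql
-- literals: each distinct value creates a separate cursor in the shared pool
select count(*) from employees where department_id = 10;
select count(*) from employees where department_id = 20;

-- bind variable: one shared cursor, soft parses after the first execution
variable dept number
exec :dept := 10
select count(*) from employees where department_id = :dept;
```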

The optimizer is the part of the Oracle Database that creates the execution plan for a SQL statement. The determination of the execution plan is an important step in the processing of any SQL statement and can greatly affect execution time. The execution plan is a series of operations that are performed in sequence to execute the statement. The optimizer considers many factors related to the objects referenced and the conditions specified in the query. The information necessary to the optimizer includes:
– Statistics gathered for the system (I/O, CPU, and so on) as well as schema objects (number of rows, index, and so on)
– Information in the dictionary
– WHERE clause qualifiers
– Hints supplied by the developer
When we use diagnostic tools such as Enterprise Manager, EXPLAIN PLAN, and SQL*Plus AUTOTRACE, we can see the execution plan that the optimizer chooses.
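For example, from SQL*Plus the plan the optimizer chooses can be displayed either way:

```sql
-- EXPLAIN PLAN with DBMS_XPLAN
explain plan for select sysdate, user from dual;
select * from table(dbms_xplan.display);

-- or AUTOTRACE (the statistics part requires the PLUSTRACE role)
set autotrace traceonly explain
select sysdate, user from dual;
set autotrace off
```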

The Oracle query optimizer (Cost-Based Optimizer, CBO) determines the most efficient execution plan, and this determination is the most important step in the processing of any SQL statement. The optimizer:
– Evaluates expressions and conditions
– Uses object and system statistics
– Decides how to access the data
– Decides how to join tables
– Decides which path is most efficient

A good understanding of SQL processing is essential for writing optimal SQL statements. These tracing features and TKPROF are free; when used effectively they help you understand what the performance problems are, and they point directly at the bottlenecks. No need for guessing or trial-and-error methods anymore :) There are also alternative analysis tools for tkprof, like ubtools' itrprof, which is web based and produces reports with charts that improve understanding.

When you open these .trc files with any text editor you like, such as Notepad, you will immediately see that the only difference between the SQL_TRACE and 10046 level 8 trace files is the WAIT lines in the 10046 trace file;

Event 10046 Raw Output
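For illustration, a 10046 level 8 trace of the template's query contains entries like the following (all timings and ids below are made-up placeholders; a plain SQL_TRACE file would contain the same lines minus the WAIT lines):

```
PARSING IN CURSOR #1 len=30 dep=0 uid=33 oct=3 lid=33 tim=1156124 hv=12345 ad='...'
select sysdate, user from dual
END OF STMT
PARSE #1:c=0,e=120,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=1,tim=1156124
EXEC #1:c=0,e=45,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1156300
WAIT #1: nam='SQL*Net message to client' ela= 3 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=61,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,tim=1156400
WAIT #1: nam='SQL*Net message from client' ela= 1800 p1=1650815232 p2=1 p3=0
```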

The 10053 event, as of the 10gR2 version, provides very detailed information on how and why the Cost-Based Optimizer chose the execution plan for a query.

Continue reading with Part 2 Optimizer Debug Trace Event – 10053 Trace File.

Testing Information: the scripts mentioned were tested on Oracle Database 10g Express Edition Release.

References Used :
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2) – Understanding SQL Trace and TKPROF
Note:39817.1 – Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output


  1. Some more important notes:

    The important cursor statistics, like the row source operation (a.k.a. the runtime execution plan), are dumped into the trace file when the cursor is closed. Therefore it is a "must" that the session disconnects, if possible, to obtain all relevant cursor information. Otherwise one may expect that some SQL statements will lack runtime execution plans.

    Prior to starting tracing, one should check and set the initialization parameters below to the values listed:


    STATISTICS_LEVEL = ALL: This setting expands the amount of gathered statistics and ensures that the row source operation will also contain the timed statistics information, for example:

    UPDATE (cr=18951 r=5045 w=0 time=49370534 us)
    HASH JOIN SEMI (cr=18680 r=5041 w=0 time=18018099 us)
    TABLE ACCESS FULL GL_BATCH_XXX (cr=8986 r=0 w=0 time=5188060 us) (object id 43647)
    INDEX FAST FULL SCAN GAC_BRANCH_UK2 (cr=9694 r=5041 w=0 time=5553558 us) (object id 42544)

    Setting this statistics parameter to ALL has a big performance impact because it consumes additional CPU resources; therefore it shouldn't be set system-wide.

    TIMED_STATISTICS = TRUE: Enables or disables the collection of timed statistics for CPU and elapsed times. As this generates only a very small overhead, it can be permanently switched on.

    MAX_DUMP_FILE_SIZE = UNLIMITED: By default the size of output trace files is limited.

    USER_DUMP_DEST: Defines the destination (directory) of trace files and can be set to a different location to ensure enough free space for trace files.

    TRACEFILE_IDENTIFIER: Appends your custom identifier to the name of the trace file (in 8.1.7 and 9i).

    _TRACE_FILES_PUBLIC = TRUE: Makes trace files readable by all users. This is convenient for tuning work, but note that it can be a security issue on shared systems!

    _ROWSOURCE_EXECUTION_STATISTICS = TRUE: You can set this parameter in your session by executing:
    SQL> alter session set "_rowsource_execution_statistics" = true;

    Reference : Optimizing for Performance education by Joze Senegacnik

  2. tkprof is good for basic analysis, but be careful about these tkprof traps:

    * sql statement hierarchy
    * exclusive timings for recursive sql statements
    * report for SQLs which were not parsed
    * report for cursor #0

    Two questions I didn’t have time to investigate yet are;

    * trcsess, available after 10g, may have introduced some new pitfalls during merging,
    * what about 11g, is there any progress with the tkprof pitfalls?

    I hope Oracle will invest more in tkprof, since 10046 response-time-based analysis is very important for root cause analysis in application performance tuning.

  3. How To Perform a Session Trace on Mappings In OWB 11g:

    To perform the Session Trace on a Mapping, OWB 11g requires a connection to the Workspace where the Mapping has been deployed;

    SQL> connect /
    SQL> call owbsys.wb_workspace_management.set_workspace('','');
    SQL> alter session set sql_trace=true;
    SQL> var p_status varchar2(30)
    SQL> set serveroutput on
    SQL> exec .main(p_status => :p_status)
    SQL> print p_status
    SQL> alter session set sql_trace=false;
    SQL> call owbsys.wb_workspace_management.unset_workspace();

  4. This is a very useful link. A beginner can easily understand what will happen when user writes a select statement.

  5. great post, appreciate the sweet script and i learned a few things to boot! thanks again ! — jp
