On Extended Transaction Monitoring Statistics After 10g

After 10g, when a long-running transaction rolls back and the process takes more than six seconds, the event is recorded in the view V$SESSION_LONGOPS.

-- to estimate when the monitored rollback process will finish
SELECT SOFAR, TOTALWORK, TIME_REMAINING
  FROM V$SESSION_LONGOPS
 WHERE SID = :sid
   AND OPNAME = 'Transaction Rollback';

Also after 10g, in addition to user-issued rollback statements, transactions recovered by SMON and parallel instance recovery sessions are monitored. You can view historical information about transaction recovery and transaction rollback, and you can calculate the average rollback duration to better estimate transaction recovery time and to set the FAST_START_PARALLEL_ROLLBACK initialization parameter more appropriately for system performance. Historical information is kept in V$FAST_START_TRANSACTIONS until the next instance shutdown.
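As a rough sketch of such a calculation (this query is illustrative and not from the original post; treat CPUTIME as a relative measure, since its units can vary by release), the historical RECOVERED rows of the view can be averaged:

```sql
-- Illustrative sketch: average undo blocks applied per unit of CPU time
-- for transactions that have already been recovered
SELECT AVG(undoblockstotal / NULLIF(cputime, 0)) AS avg_blocks_per_cputime
  FROM v$fast_start_transactions
 WHERE state = 'RECOVERED';
```

Dividing the undo block count of a pending transaction by this rate gives a crude recovery-time estimate.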

V$FAST_START_TRANSACTIONS contains information both for transactions that the Oracle server is recovering (the STATE is RECOVERING) and for transactions that the Oracle server has recovered (the STATE is RECOVERED). The following columns were added to this view:

• XID: Transaction ID of this transaction
• PXID: Transaction ID of the parent transaction
• RCVSERVERS: Number of servers working on this transaction, including the coordinator server (it is 1 if only SMON is doing the recovery).

For example, this statement can be used to track transaction recovery after instance startup:

SELECT state, undoblocksdone, undoblockstotal, cputime
  FROM v$fast_start_transactions;

STATE      UNDOBLOCKSDONE UNDOBLOCKSTOTAL   CPUTIME
---------- -------------- --------------- ---------
RECOVERING            574            1945        16

SQL> /

STATE      UNDOBLOCKSDONE UNDOBLOCKSTOTAL   CPUTIME
---------- -------------- --------------- ---------
RECOVERING           1300            1945        34

SQL> /

STATE      UNDOBLOCKSDONE UNDOBLOCKSTOTAL   CPUTIME
---------- -------------- --------------- ---------
RECOVERED            1945            1945        65

SQL> SELECT message FROM v$session_longops;

Transaction Rollback: xid:0x0001.00a.00000812 : 1945 out of 1945 Blocks done

For more reading on this topic, please see the article Oracle Database 10g: The Top 20 Features for DBAs by Arup Nanda on OTN; Week 2 – How Much Longer?: Rollback Monitoring

May Dynamic Sampling be an Answer for OLAP Systems’ Statistics Needs?

It is very important to determine when and how often to gather new statistics for the Cost Based Optimizer, especially after 10g, since the Rule Based Optimizer is now obsolete. As a result, the Automatic Statistics Gathering mechanism keeps all statistics current by default after a fresh installation, via a scheduled job.

-- Verifying Automatic Statistics Gathering Job
SELECT owner, job_name, enabled
  FROM dba_scheduler_jobs
 WHERE job_name = 'GATHER_STATS_JOB';

The default gathering interval is nightly, but you can customize it for your needs. The collection frequency should balance the need for accurate statistics against the processing overhead incurred by the collection process. Of course, you can also disable this job and follow your own strategy.
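As a sketch of such a customization, the maintenance window that triggers the job can be re-scheduled with DBMS_SCHEDULER (WEEKNIGHT_WINDOW is the default 10g window name; verify the window name and calendar string on your release):

```sql
-- Sketch: shift the weeknight maintenance window to start at 2am
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'WEEKNIGHT_WINDOW',
    attribute => 'repeat_interval',
    value     => 'freq=daily;byday=MON,TUE,WED,THU,FRI;byhour=2;byminute=0;bysecond=0');
END;
/
```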

-- Disabling Automatic Statistics Gathering Job
BEGIN
  DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
END;
/

Or, as another option, you can manually gather and lock statistics at the table or schema level (LOCK_TABLE_STATS/LOCK_SCHEMA_STATS).

-- Manual statistics strategy example (HR is just an example schema)
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');
  DBMS_STATS.LOCK_SCHEMA_STATS(ownname => 'HR');
END;
/

You can query the STATTYPE_LOCKED column of the DBA_TAB_STATISTICS view to determine whether the statistics on a table are locked, and you may use the UNLOCK_TABLE_STATS/UNLOCK_SCHEMA_STATS procedures to unlock the statistics on a specified table or schema. An important note here: when you lock the statistics on a table, all of the dependent statistics are considered locked, including table statistics, column statistics, histograms and statistics on dependent indexes.
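A minimal sketch of such a check follows (the HR schema and EMPLOYEES table are examples only):

```sql
-- Which tables in the schema have locked statistics?
SELECT table_name, stattype_locked
  FROM dba_tab_statistics
 WHERE owner = 'HR'
   AND stattype_locked IS NOT NULL;

-- Unlock a single table, or the whole schema
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('HR', 'EMPLOYEES');
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('HR');
```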

In an OLAP environment, for tables which are substantially modified by batch operations such as bulk loads, a DBMS_STATS procedure is usually called as soon as the load operation completes, as part of that ETL step. Statistics also need to be gathered manually after new objects are created, until the configured custom job or Oracle's Automatic Statistics Gathering job takes action. In certain cases, the statistics on these highly volatile tables can be set to NULL; when Oracle encounters a table with no statistics, the optimizer dynamically gathers the necessary statistics as part of query optimization. This 10g new feature is called dynamic sampling and is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which should be set to a value of 2 or higher (the default value is 2).

-- Setting statistics to NULL by deleting and then locking the statistics for Dynamic Sampling
-- (HR and STAGE_TAB are example names)
BEGIN
  DBMS_STATS.DELETE_TABLE_STATS(ownname => 'HR', tabname => 'STAGE_TAB');
  DBMS_STATS.LOCK_TABLE_STATS(ownname => 'HR', tabname => 'STAGE_TAB');
END;
/
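The sampling level can also be raised per session or per statement; a sketch (the table name and alias are hypothetical):

```sql
-- Session level
ALTER SESSION SET optimizer_dynamic_sampling = 4;

-- Statement level, for a single table aliased t
SELECT /*+ dynamic_sampling(t 4) */ COUNT(*)
  FROM big_volatile_table t;
```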

Dynamic sampling offers the opportunity to collect statistics for segments with either missing or stale statistics. However, this approach adds overhead at query optimization time, and the action is repeated for the same objects unless statistics are gathered. This is usually not acceptable for OLTP applications, but for OLAP the parsing time can usually be disregarded compared to the execution time, and avoiding the statistics collection overhead on huge tables can be a strong motivation most of the time.

A 100% dynamic sampling-based OLAP system topic was raised and discussed before by David Aldridge here, so any further comments are welcomed.

Hundreds of reasons why you might want (or have) to use R-Man!

When I was a kid I loved to watch He-Man, The Most Powerful Man in the Universe. Yes, R-Man is the most powerful database backup and recovery tool in the universe today. But still there are some DBAs who do not want to use RMAN. They are the same crowd who also do not use Database or Grid Control, Automatic Storage Management (ASM), or Automatic PGA Management, and who resist migrating to 10g Release 2. Sometimes this resistance drives me nuts, because the resistance needs to create lots of myths to smear everything they are afraid of.

Two commonly used alternatives to RMAN are file system backups and storage vendors' snapshot capabilities, which usually cost a fortune. For me, they cannot be alternatives to RMAN, but in this post I won't question these methods versus RMAN on points like how RMAN can do Block Level Recovery, or Duplicate or Validate a backup with a simple command. Instead, I will provide two examples, since the resistance believes these two are myths of RMAN :)

1. The resistance cries out: "RMAN backing up only used blocks is a myth!" and R-Man answers them ;)

RMAN does not back up empty blocks that are above the high watermark and not formatted; this feature is called NULL COMPRESSION. After 10g Release 2, RMAN also does not back up empty blocks below the high watermark; this new feature is called UNUSED BLOCK COMPRESSION.


The resistance used 10g XE for a similar demonstration, and since this feature does not function on XE, that was enough for them to build their own "scientific" myth. I love XE, but IMHO XE is not the place to do this kind of testing.


Just a complementary note on Automatic Segment Space Management (ASSM) and the high watermark concept: ASSM was introduced with Oracle 9i Release 2 and is the default segment management when you create a new tablespace after 10g Release 2. ASSM introduced two new high watermarks:
– High High Watermark (HHWM); all blocks above the HHWM are unformatted,
– Low High Watermark (LHWM); all blocks below the LHWM are formatted.
As a result, we can now have unformatted blocks in the middle of a segment.

2. The resistance cries out: "During an RMAN backup, redo generation is NOT less compared to file system based backups!" and R-Man answers them ;)

If a tablespace is in backup mode, Oracle stops updating its file headers but of course continues to write data to the database files. In backup mode, Oracle writes the complete changed block to the redo log files with the first DML on that block, whereas normally only the changes to a block are logged. This is done to enable the reconstruction of a block if only half of it was backed up, so the result is increased log activity and archiving during online backups.

Code Listing 207c – Redo Generation Cost of Backup Mode Demo
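Since Code Listing 207c lives outside this text, here is a minimal sketch of the idea, assuming a USERS tablespace and a hypothetical test table in it; the session's own 'redo size' statistic is compared before and after DML done in backup mode:

```sql
-- Redo generated by the current session so far
SELECT ms.value AS redo_size
  FROM v$mystat ms, v$statname sn
 WHERE ms.statistic# = sn.statistic#
   AND sn.name = 'redo size';

ALTER TABLESPACE users BEGIN BACKUP;

-- First DML on each block now logs the whole block (test_table is hypothetical)
UPDATE test_table SET col1 = col1;
COMMIT;

-- Re-run the redo size query above and note the difference
ALTER TABLESPACE users END BACKUP;
```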

RMAN does not put tablespaces in backup mode, so there is no such redo generation overhead. Instead, RMAN re-reads database blocks until it gets a consistent image of each one.

For much more on R-Man's strengths after 10g Release 2, like the incremental merge of an ASM-based FRA (disk) backup, faster incremental backups with Block Change Tracking, and Backup Compression, please check these two resources:
Faster Backup, Faster Recovery By Arup Nanda

Oracle Database 10g Release 2 New Features in Backup and Recovery

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release

References Used :
RMAN does (not?) backup empty blocks | what is NULL COMPRESSION? | what is BINARY COMPRESSION? | what is UNUSED BLOCK COMPRESSION? « lutz hartmann as sysdba

Oracle 10g New Features for Administrators by Ahmed Baraka

People who are preparing for the Oracle 10g certification exams will especially like this quick note, I guess. One of my colleagues shared this work by Mr. Ahmed Baraka on 10g New Features for Administrators last weekend, and I liked it very much, so I wanted to share it here as well. For more of Baraka's work you may visit http://ahmedbaraka.com/computer/index.htm

And below are quick reminders for 10g NFs –

Oracle provides SQL and PL/SQL features so that we can do a lot more with fewer lines of code

Recently I started a discussion on the oraclecommunity.net database forum with the subject "Why Oracle?". Hans Forbrich and Billy Verreynne gave important feedback. As a group, we are preparing a free half-day seminar for the computer science students in Istanbul.

When I started to study 10g, I tried to blog my experiences with the new PL/SQL packages in a series called Expanded Supplied Packages with 10g. If you do not want to waste your time and money developing features which are already available in your database, it is better to be aware of them. As Tom Kyte says: no need to re-invent a dumber wheel :) For 11g, here is a starting point for the new and updated ones:

11g Release 1 New Features in the SQL Language
11g Release 1 New and Updated PL/SQL Packages
11g Release 1 All New Features Grouped by Guides

Recently I was playing with the DBMS_ADVANCED_REWRITE package, which was also a new feature in 10g. You may want to use this package to rewrite a bad query on your production database until it is tuned, or even to drive your friends mad by changing the results of their queries on your development database :)

Code Listing 198 : DBMS_ADVANCED_REWRITE demo
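Since Code Listing 198 is an external link, a minimal hedged sketch of the package usage follows; the equivalence name, tables and statements are made up for illustration, and the session needs QUERY_REWRITE_INTEGRITY set to TRUSTED for the rewrite to fire:

```sql
BEGIN
  SYS.DBMS_ADVANCED_REWRITE.DECLARE_REWRITE_EQUIVALENCE(
    name             => 'demo_rewrite',                          -- made-up name
    source_stmt      => 'SELECT COUNT(*) FROM big_table',        -- the "bad" query
    destination_stmt => 'SELECT row_count FROM big_table_stats', -- tuned replacement
    validate         => FALSE);
END;
/

ALTER SESSION SET query_rewrite_integrity = TRUSTED;
```

From then on, any session running the source statement silently gets the destination statement's results instead.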

Testing Information : the scripts mentioned are tested on Oracle Database 11g Enterprise Edition Release


New partition pruning features of 10g

As I mentioned earlier I started working with our data warehouse team on their migration project; Planning and Testing the 9iR2 to 10gR2 migration of our XXX Tb DWH

I will be testing Oracle's 10gR2 features for performance. Before the detailed tests, I also wanted to mention some important very large Oracle database init.ora 10g parameters, in addition to the ASM and CBO configurations I already mentioned; Listing 177a – Some important very large Oracle database init.ora 10g parameters

During some initial tests I experienced new partition pruning features of 10g; “PARTITION RANGE OR”-“KEY(OR)” and “PARTITION RANGE MULTI-COLUMN”-“KEY(MC)” access paths; Code Listing 177b – New partition pruning features of 10g

These tests will be really cool :)

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release

References Used :
Subject: Deployment of very large databases (10TB to PB range) with Automatic Storage Management (ASM)

Oracle Database 10g (10.2 & 10.1) New Features in Data Warehousing

Oracle10g-Features für Warehouses by Sven Vetter

Top Ten New Data Warehousing Features In Oracle Database 10g by Mark Rittman

Oracle Database 10g: Top Features for DBAs Release 2 Features Addendum by Arup Nanda
Part 4: Data Warehousing and Integration Features

Alejandro Vargas Oracle Maximum Availability Architecture Seminar Notes

Oracle Maximum Availability Architecture (MAA) is a set of high availability best practices which aims to minimize the complexity of designing optimal high availability for critical production databases. I have not worked as an infrastructure (operational) DBA for years. In recent years, especially with 10g Release 2, Oracle's availability options have evolved a lot. I try to catch up with these on my laptop, thanks to VMware, and I really admire how far everything has come with Oracle.

Thinking about recent years, I might easily give my job description like this: database developer, part-time fire-fighter and part-time myth-buster. I have always loved developing database applications, and I have been working with application developers as an old Oracle DBA for the last 4 years. Database modeling, physical design, Oracle's optimizer and performance testing are where I think I am most useful. Also, if there is an Apex project I never miss the chance; developing Apex applications is always fun for me.

I used myth-buster on purpose, because much of my time is dedicated to exorcising evil beliefs which mostly depend on rumors or on wrong and/or undocumented test results from n years ago; "never use Oracle xyz feature, it is buggy" or "this optimizer you speak of is in fact an abuser, I never get what I need without hints" are very common attitudes. Usually it feels like being Van Helsing: read the documentation, which is available to anyone and easy to understand (not rocket science in the end), and kill the beast with a small test case, most probably adapted from AskTom or a resource like the Oracle Users' Co-operative FAQ. In the beginning this is a delicious rant, but after years it becomes boring, disappointing and dissatisfying. What is the rationale behind not reading and trying to understand the technology that you are using to make money? Please advise me. These days everybody is after quick and dirty solutions.

I used fire-fighter on purpose, because I am called in when something goes really crazy or unexpected in production after a long development and testing period. These cases are the worst ones since, in my experience, it is usually too late, and the changes needed will cause delays and angry customers. Still, it is a fire and I do whatever I can; in most cases, interestingly, even that is enough. But this is Oracle's success of course: there are always options that guide you directly to the root cause of the problem and then to a related workaround. Like Apex, I am also a fan of 10g Grid Control; I never miss the chance to use it, especially when there is a performance problem.

During Mr. Alejandro Vargas's three-day seminar I strongly felt that I have to leave this mystical world of mine and return to the world where checklists, Metalink notes and documents rule. If you do not follow them, you do not even get to the point where you can ruin everything. How unbelievable it is that anybody can develop some piece of code; Google and copy&paste are the major enabling technologies here. When it comes to the Oracle database, how ironic it is that it works very satisfactorily with its defaults and lets people think they are "developers" or "dbas". I believe that even after 10g's self-management capabilities and the Grid Control heaven, today's database market needs experienced people more than ever. Do you think anybody can implement the RAC or Data Guard options without reading and testing carefully? And I seriously bet that if, as a prerequisite, all Oracle users around the world only read the Concepts Guide, then considering Oracle's dominant leadership in the market we could easily expect a technological shift; my share of it, I hope, would be fewer OTN-forums-type questions, fires and myths.

When I contacted Alejandro with the content of this MAA seminar, my primary focus was to motivate our senior DBA staff to use these features. We have an unlimited kind of Oracle licensing, so avoiding these options we have already purchased is both a threat to and a source of inefficiency for the company. Alejandro was the right person, and luckily he was very interested in the content; as a result, the last three days were a success. But this was only the beginning; there is a big challenge waiting. Any expert motivated to implement these new MAA features with 10g Release 2 will face another very common attitude: "if it works, never take a risk and change it". Of course managers need to avoid risks, but if this is the only priority you kill the creativity in your organization; how can a DBA take a risk for the sake of a system's performance if you, as his manager, do not support him? The answer is simple: no DBA would do that, and you live with a release xx years old forever. So I plan the next step: to implement and document an MAA solution's success, which I hope will become a reference point. This is my new priority and motivation for the coming months, so I really suspect I will end up being an operational Oracle DBA again; but if something unexpected happens, my plan B is already pending :)

I hope you enjoy my notes from the three-day seminar below; much more is already published on Alejandro's blog, and I guess he may also publish a post related to this seminar with the answers to our questions and his İstanbul experiences.

10gR2 Automatic Storage Management(ASM)

10gR2 Real Application Clusters(RAC)

10gR2 Data Guard

10gR2 Recovery Manager(RMAN)

10gR2 Flashback

10gR2 Enterprise Manager Grid Control

Automatic SQL Tuning with DBMS_SQLTUNE package example and SQL Profile Contents

I mentioned 10g Automatic SQL Tuning feature before –
Expanded Supplied Packages with 10g – Part 3 – Section 5- Automatic SQL Tuning

I am a traditional manual-tuning guy, but when I have the chance to work on 10g instances, I never turn down Oracle's help :) This post is related to this Oracle forums post – http://forums.oracle.com/forums/thread.jspa?threadID=552342&tstart=0&messageID=2078280#2078280

Here the problem is that Oracle recommends some profiles, inside OEM or via the DBMS_SQLTUNE package, to improve your problematic SQL queries, but what exactly Oracle recommends is something you have to dig into a little more – Code Listing 142 – Automatic SQL Tuning with DBMS_SQLTUNE package example and SQL Profile Contents
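For reference, the typical DBMS_SQLTUNE flow is sketched below; the SQL_ID is of course a placeholder you would take from V$SQL:

```sql
DECLARE
  l_task VARCHAR2(30);
BEGIN
  -- Create and run a tuning task for one problematic statement
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh5');  -- placeholder
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Review the findings, including any recommended SQL Profile
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK(task_name => :task_name) FROM dual;

-- Accept the profile only if you agree with it
EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => :task_name);
```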

SQL Profiles are like hints and outlines: they make your execution plan static, so I believe they too should be preferred only as a last resort.

A special thanks goes to Nicolas Gasparotto for sharing this Firefox add-on that pastes an automatic signature into a forum post; I share it since you may also like to use it for other needs of yours –

I finished another Turkcell Academy class yesterday and at last I am going on holiday; the destination is my family and the beautiful land of Fethiye. So it will be silent around here for a while, and I hope to meet you 10 days from today with new ideas to share :)

Life after 10g Enterprise Manager Grid Control and its advisors

In my opinion, history will show that the life of an Oracle DBA is divided into two stages: before and after 10g Enterprise Manager Grid Control :) I still remember the old days when, after paying Oracle every possible extra license fee, everybody was also paying lots of extra money for a bunch of other companies' tools just to manage Oracle effectively. But somehow, someday, Oracle decided to end its customers' pain; Grid Control, Automatic Storage Management (ASM) and SQL Developer are examples of this strategy, so those were the "good" old days just for some companies, I guess.

Everybody, even presidents, needs and loves advisors. It is the same with Oracle; after 10g and with 11g, Oracle continues to invest in its advisors. The Memory, SQL Access, SQL Tuning and Segment advisors have important parts in our lives now. But with this post I planned to discuss some (I believe) less popular advisors, like the Undo, Redo and Automatic Checkpoint Tuning advisors in 10g, and also some 11g delights like the Partitioning and Data Recovery advisors.

The Undo Advisor assists in correctly sizing the undo tablespace. It can also be used to set the low threshold value of the undo retention period for any Oracle Flashback requirements. To access the Undo Advisor in Database Control, you may follow: Administration > Undo Management > Undo Advisor

Also after 10g, rollback monitoring improved: when a transaction rolls back and the process takes more than six seconds, the event is recorded in the view V$SESSION_LONGOPS. This view enables you to estimate when the monitored rollback process will finish.

SELECT SOFAR, TOTALWORK, TIME_REMAINING
  FROM V$SESSION_LONGOPS
 WHERE SID = :sid
   AND OPNAME = 'Transaction Rollback';

The Redo Logfile Sizing Advisor and Automatic Checkpoint Tuning are also new features introduced with 10g. The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; however, this must be balanced against the expected recovery time. As a rule of thumb, switch logs at most once every fifteen to twenty minutes. Undersized log files increase checkpoint activity and CPU usage.

Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the log files should be large enough to avoid additional checkpointing caused by undersized log files.

The redo logfile sizing advice is exposed in the OPTIMAL_LOGFILE_SIZE column of V$INSTANCE_RECOVERY. This feature requires setting the FAST_START_MTTR_TARGET parameter for the advisory to take effect and populate the column.
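Querying the advice is a one-liner; the column is documented to report megabytes:

```sql
-- Populated only when FAST_START_MTTR_TARGET is set; value is in megabytes
SELECT optimal_logfile_size
  FROM v$instance_recovery;
```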

You can try this with different settings of FAST_START_MTTR_TARGET, but:
* If FAST_START_MTTR_TARGET is set to such a small value that it is impossible to do a recovery within its time frame, then the TARGET_MTTR field of V$INSTANCE_RECOVERY contains the effective MTTR target, which is larger than FAST_START_MTTR_TARGET.
* If FAST_START_MTTR_TARGET is set to such a high value that even in the worst case (the entire buffer cache is dirty) recovery would not take that long, then the TARGET_MTTR field contains the estimated MTTR for the worst-case scenario.

Code Listing 117 – Redo Log Tuning Advisory Demo

And to access the Redo Logfile Size Advisor you may follow; Administration > Storage > Redo Log Groups > Sizing Advice

10g also supports automatic checkpoint tuning, but by default this feature is not enabled, because FAST_START_MTTR_TARGET has a default value of 0. It is an advancement over the MTTR-related parameters introduced in earlier versions. The idea is to use periods of low I/O usage to advance checkpoints and therefore improve availability.

To enable automatic checkpoint tuning, unset FAST_START_MTTR_TARGET or set it to a nonzero value (measured in seconds). If you set this parameter to zero, the feature is disabled. When you enable fast-start checkpointing, remove or disable (set to 0) the following initialization parameters: LOG_CHECKPOINT_INTERVAL, LOG_CHECKPOINT_TIMEOUT and FAST_START_IO_TARGET.
Enabling fast-start checkpointing can be done statically using the initialization files or dynamically using –

SQL> alter system set FAST_START_MTTR_TARGET=10;

Enabling fast-start checkpointing increases the average number of writes per transaction that DBWn issues for a given workload. However, if the system is not already near or at its maximum I/O capacity, fast-start checkpointing has a negligible impact on performance. The V$MTTR_TARGET_ADVICE view gives information on the additional I/Os for various values of FAST_START_MTTR_TARGET. In general:
– If FAST_START_MTTR_TARGET is set to a low value, fast-start checkpointing is more aggressive: the average number of writes per transaction that DBWn issues is higher, in order to keep the thread checkpoint sufficiently advanced to meet the requested MTTR.
– If FAST_START_MTTR_TARGET is set to a high value, fast-start checkpointing is less aggressive, and the average number of writes per transaction that DBWn issues is lower.
– If FAST_START_MTTR_TARGET is unset, automatic checkpoint tuning is in effect: the average number of writes per transaction is reduced, but at the same time the MTTR is at its highest.
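A quick way to see the estimated extra I/O for a range of candidate targets:

```sql
-- Estimated total I/Os for candidate MTTR targets (in seconds)
SELECT mttr_target_for_estimate, estd_total_ios, advice_status
  FROM v$mttr_target_advice;
```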

So what about future releases and Oracle's advisory strategy? I advise :) you to check the fifth slide of the 11g Manageability presentation for some hints. With 11g, a new advisor called the Partition Advisor, which analyzes data access patterns and suggests a partitioning scheme and partitioning column(s), will be assisting us, especially during the development and testing stages. By the way, before 10g, if you dropped a partitioned table Oracle removed all the partitions at once, a time- and resource-consuming process. After 10g Release 2, when you drop a partitioned table the partitions are dropped one by one; maybe you have already noticed the change :)

The Data Recovery Advisor will be another friend, one which automatically diagnoses data failures and recommends repairs. You can repair failures manually or request that they be repaired automatically. Of course, Enterprise Manager includes interfaces for these new advisors with 11g.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release

References Used :
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2)
Note:265831.1 – Automatic Checkpoint Tuning in 10g
Note 180894.1 – “V$INSTANCE_RECOVERY”
Note 151062.1 – Init.ora Parameter “FAST_START_MTTR_TARGET” Reference Note
Note 30754.1 – Init.ora Parameter “LOG_CHECKPOINT_INTERVAL” Reference Note
Note 30755.1 – Init.ora Parameter “LOG_CHECKPOINT_TIMEOUT” Reference Note
Note 68931.1 – Init.ora Parameter “FAST_START_IO_TARGET” Reference Note

Using Automatic Database Diagnostic Monitor Manually

For Oracle after 10g, the statistical data needed for accurate diagnosis of a problem is saved in the Automatic Workload Repository (AWR). The Automatic Database Diagnostic Monitor (ADDM) analyzes the AWR data on a regular basis, locates the root causes of performance problems, provides recommendations for correcting them and identifies non-problem areas of the system. Because AWR is a repository of historical performance data, ADDM can be used to analyze performance issues after the event.

The decision tree Oracle has developed over the years within ADDM improves root-cause analysis, and after this amount of research and development you pay an additional cost for this feature, even if you are using Enterprise Edition.

The goal of the analysis is to reduce a single throughput metric called DB time. DB time is the cumulative time spent by the database server in processing user requests. It includes wait time and CPU time of all non-idle user sessions. DB time is displayed in the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
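For example, the instance-wide DB time can be read directly:

```sql
-- Cumulative DB time for the instance, in microseconds
SELECT value
  FROM v$sys_time_model
 WHERE stat_name = 'DB time';
```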

The background Manageability Monitor process (MMON) schedules the automatic running of ADDM. You only need to make sure that the STATISTICS_LEVEL initialization parameter is set to TYPICAL or ALL, in order for the AWR to gather its cache of performance statistics. MMON schedules ADDM to run every time the AWR collects its most recent snapshot. To view ADDM's findings:
* Use the OEM Database Control; the primary interface for diagnostic monitoring is Oracle Enterprise Manager Database Control.
* Run the Oracle-provided script addmrpt.sql, which is very similar to the statspack or AWR reports.
Note: The DBMS_ADVISOR package requires the ADVISOR privilege.

Code Listing 112 – Manual ADDM Demo
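Since Code Listing 112 is an external link, a minimal sketch of such a manual run follows; the task name is made up, and the snapshot IDs are placeholders you would take from DBA_HIST_SNAPSHOT:

```sql
DECLARE
  l_task VARCHAR2(30) := 'my_addm_task';  -- made-up task name
BEGIN
  DBMS_ADVISOR.CREATE_TASK(advisor_name => 'ADDM', task_name => l_task);
  DBMS_ADVISOR.SET_TASK_PARAMETER(l_task, 'START_SNAPSHOT', 100);  -- placeholder
  DBMS_ADVISOR.SET_TASK_PARAMETER(l_task, 'END_SNAPSHOT', 101);    -- placeholder
  DBMS_ADVISOR.EXECUTE_TASK(l_task);
END;
/

-- Read the text report for the completed task
SELECT DBMS_ADVISOR.GET_TASK_REPORT('my_addm_task') FROM dual;
```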

An ADDM analysis finding consists of the following four components:
* The definition of the problem itself
* The root cause of the performance problem
* Recommendation(s) to fix the problem
* The rationale for the proposed recommendations

Example ADDM Report

FINDING 1: 31% impact (7798 seconds)
SQL statements were not shared due to the usage of literals. This resulted in additional hard parses which were consuming significant database time.

RECOMMENDATION 1: Application Analysis, 31% benefit (7798 seconds)
ACTION: Investigate application logic for possible use of bind variables
instead of literals. Alternatively, you may set the parameter “cursor_sharing” to “force”.
RATIONALE: SQL statements with PLAN_HASH_VALUE 3106087033 were found to be using literals. Look in V$SQL for examples of such SQL statements.

In order to determine optimal I/O performance, ADDM uses the default value of the DBIO_EXPECTED parameter, which is 10 milliseconds.


Today's hardware can differ significantly, so for better advice you may set the parameter value once for all subsequent ADDM executions. Since this is not an init.ora parameter, it is changed with the DBMS_ADVISOR supplied package;

SQL> EXEC DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 8000);
There are two important views you will be checking for the results of the ADDM analysis:

1. DBA_ADVISOR_RECOMMENDATIONS: This view displays the results of completed diagnostic tasks, with recommendations for the problems identified in each run. The recommendations should be reviewed in the order of the RANK column, as it conveys the magnitude of the problem behind the recommendation. The BENEFIT column gives the benefit to the system you can expect after the recommendation is carried out.

2. DBA_ADVISOR_FINDINGS: This view displays all the findings and symptoms that the diagnostic monitor encountered along with the specific recommendation.
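A minimal sketch of querying both views for a given task (the bind variable holds the task name used when the ADDM task was created or scheduled):

```sql
-- Findings and symptoms for one ADDM task
SELECT f.type, f.message
  FROM dba_advisor_findings f
 WHERE f.task_name = :task_name;

-- Recommendations, reviewed in RANK order, with their expected benefit
SELECT r.rank, r.benefit, r.type
  FROM dba_advisor_recommendations r
 WHERE r.task_name = :task_name
 ORDER BY r.rank;
```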

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release

References Used :
Automatic Database Diagnostic Monitor (ADDM) in Oracle Database 10g at oracle-base.com
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2) – Chapter 6 Automatic Performance Diagnostics