Oracle Partitioning and Parallel Execution for Performance seminar by Tanel Poder

Tanel was our visitor last week, as I mentioned earlier; he did a one-day seminar on partitioning and parallel execution. There were lots of interesting discussions, and I wanted to share some of my notes to give an idea of how Tanel's experience may affect your daily Oracle database administration or development routine.

Other ways of detecting partition pruning besides the Pstart and Pstop info in DBMS_XPLAN output:
– Event 10128, semi-documented in Note 209070.1; create the partition pruning tracing table KPAP_PRUNING under your schema
– Sampling ROW_WAIT_OBJ# on V$SESSION with Snapper or ASH; you get DBA_OBJECTS.DATA_OBJECT_ID values
– Taking snapshots of V$SEGMENT_STATISTICS logical IO for each partition segment

Partitioning for OLTP DML Concurrency
– Partitioning can be effectively used for reducing concurrency on right-hand index blocks in OLTP (ever-increasing sequence or timestamp based columns)
– Hash sub-partitioning for meaningless primary key columns; the kgghash() function used for row placement by hash is optimized for a power-of-2 number of buckets
– Partitioning for data load concurrency; the option below takes a TM lock only on the partition and allows multiple load operations on the same table
INSERT /*+ APPEND */ INTO target_table PARTITION(target_partition)
SELECT * FROM source_table;
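For the hash sub-partitioning point above, a minimal sketch (the table and column names are illustrative); the key detail is choosing a power-of-2 partition count so kgghash() spreads rows evenly:

```sql
-- surrogate-key table hash partitioned into 16 (a power of 2) partitions
CREATE TABLE orders_demo (
  order_id   NUMBER        NOT NULL,
  order_date DATE,
  payload    VARCHAR2(100)
)
PARTITION BY HASH (order_id) PARTITIONS 16;
```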

Parallel query execution changes with 10g
– In 9i the QC unparses the transformed query text and sends a copy to each slave for separate execution. In 10g+ each slave executes the same cursor

Parallel execution needs more resources
– Especially because of the PX communication overhead
– Additionally, a PX operation on a table requires a segment-level checkpoint
– With 11.2, in-memory parallel execution can avoid this if PARALLEL_DEGREE_POLICY is set to AUTO
– So not everything should use parallelism and not every system is a candidate for parallelism, even in DWs (like ETL metadata updates etc.)

For parallel DML, always remember to enable parallel DML at the session level
– alter session enable parallel DML;
insert /*+ parallel(4) pq_distribute(eo none) */ into etl_orders eo
select * from tab;
– Query the PDML_ENABLED, PDML_STATUS, PDDL_STATUS and PQ_STATUS columns of V$SESSION to check whether parallel DML is enabled for a session
– If the data read is highly skewed, different slaves produce very different amounts of rows; you may want to redistribute rows for loading with the pq_distribute(eo random) hint
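Putting the bullets above together, a sketch of a session preparing a parallel load (table names are illustrative):

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- confirm the session state before running the load
SELECT pdml_enabled, pdml_status, pddl_status, pq_status
  FROM v$session
 WHERE sid = SYS_CONTEXT('USERENV', 'SID');

INSERT /*+ parallel(4) pq_distribute(eo random) */ INTO etl_orders eo
SELECT * FROM source_orders;
COMMIT;
```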

Major Parallel Execution bottlenecks
– Serially executed steps in PQ plans; check for P->S and S->P operations in the middle of the execution plans. The "IN-OUT" column is empty for serial operations in DBMS_XPLAN output
– Too much communication between slaves; if possible replace a BROADCAST distribution with HASH or PART
– Skewed row distribution between producers and consumers; check V$PQ_TQSTAT to see if some slaves produced/consumed significantly more rows than others in the slave set, and switch from HASH to BROADCAST or PARTITION distribution
– Unbalanced hardware configuration and not enough IO bandwidth, the most common reason; query ASH or run Snapper on the PX slaves to measure time waited for IO
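To spot the skew mentioned above, V$PQ_TQSTAT can be queried right after the parallel statement finishes, from the same session:

```sql
-- rows per slave per table queue; large NUM_ROWS differences between
-- processes of the same server_type indicate skewed distribution
SELECT dfo_number, tq_id, server_type, process, num_rows, bytes
  FROM v$pq_tqstat
 ORDER BY dfo_number, tq_id, server_type, process;
```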

Parallel execution wait events and SQL trace
– Metalink Note 191103.1 for PQ wait events
– In 10g+, use the trcsess utility to combine the multiple trace files produced (one per slave) before aggregating them with tkprof

Parallel Execution and RAC
– In 10g set the INSTANCE_GROUPS parameter in each instance to assign the instances to groups
ALTER SYSTEM SET instance_groups = 'rac01','all','batch' ;
ALTER SYSTEM SET instance_groups = 'rac02','all','batch' ;
ALTER SYSTEM SET instance_groups = 'rac03','all','oltp' ;
ALTER SYSTEM SET instance_groups = 'rac04','all','oltp' ;
— Before running the PQ statement, set PARALLEL_INSTANCE_GROUP
ALTER SESSION SET parallel_instance_group = 'batch' ;
— The slaves will run only on the instances associated with the 'batch' group
– In 11g PARALLEL_INSTANCE_GROUP still controls the allowed groups where the QC can schedule inter-instance PX slaves, but this time slaves can only run on the instances which run the same "service" the QC connected through.
— Alternatively
ALTER SESSION SET parallel_instance_group = 'service1','service3' ;

Parallel Execution Overview
– PX COORDINATOR: the Query Coordinator (QC) distributes the work to slaves and returns results back to the client. Additionally it performs work itself if something has to be done serially.
– PX SEND QC: sends produced rows to the QC via a table queue
– PX BLOCK ITERATOR: the operation which hands out different ranges of PX granules for different slaves to read
– Table Queues: in-memory communication channels between PX slaves; if SGA_TARGET is in use or PARALLEL_AUTOMATIC_TUNING is set to TRUE, a memory buffer inside the large pool
– QC(RAND): the row source is sending its rows back to the QC
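A quick way to see how much PX work the current session has done is V$PQ_SESSTAT, queried right after a parallel statement:

```sql
-- LAST_QUERY covers the most recent statement, SESSION_TOTAL the whole session
SELECT statistic, last_query, session_total
  FROM v$pq_sesstat;
```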

Things to remember
– Watch out for accidentally enabled parallelism, especially on OLTP; after parallel index builds and rebuilds remember to ALTER INDEX ... NOPARALLEL
– The Cost we see in DBMS_XPLAN does not include the PX distribution cost; however, the CBO does choose the PX distribution based on cost (a 10053 CBO trace file shows that the costing is done)
– Use the PARALLEL_MIN_PERCENT parameter to set the minimum % of the requested Degree of Parallelism (DOP) a query must be able to allocate in order to run
– PARALLEL_EXECUTION_MESSAGE_SIZE defaults to 16KB starting with 11.2
– Use ALTER SESSION FORCE PARALLEL QUERY PARALLEL n; to force session-level parallelism
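A sketch combining the last two points (the 50% threshold and the DOP of 4 are illustrative):

```sql
-- raise ORA-12827 instead of silently downgrading when less than
-- half of the requested DOP can be allocated
ALTER SESSION SET parallel_min_percent = 50;

-- force a DOP of 4 for all queries in this session
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;
```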


ps: Check out the Oracle Sweden User Group (ORCAN) Event presentations of Hüsnü

On some 11.1 security features and enhancements

Last week security was one of the hot topics at my company; I found myself writing and talking with one of my DBA friends about Oracle's security options and their history. We agreed that security is one of the areas where Oracle has invested a lot over time, but there have still been important failures all around the software. Then we started discussing 11.1 security features and enhancements, and I remembered reading some cool options in the related chapter of Lutz Hartmann's 11g New Features book, like;

– Against brute force attacks, sec_max_failed_login_attempts specifies the number of unsuccessful authentication attempts before the client is dropped,
– Against denial of service attacks, sec_protocol_error_further_action and sec_protocol_error_trace_action specify how the database should react; sec_protocol_error_further_action with the DELAY,n option specifies in seconds how long the server waits before it accepts further requests, and sec_protocol_error_trace_action with the LOG option creates a short audit log instead of the huge trace files the TRACE option produces,
– 11g's password verification function (utlpwdmg.sql) is enhanced,
– With 11g auditing is enabled by default and audit_trail is set to DB,
– the sqlnet.allowed_logon_version parameter defines the minimum client version that is allowed to connect.
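A sketch of setting the parameters above (the values are illustrative, not recommendations):

```sql
-- static parameter, takes effect after the next restart
ALTER SYSTEM SET sec_max_failed_login_attempts = 5 SCOPE = SPFILE;

-- delay 3 seconds before accepting further requests after a protocol error
ALTER SYSTEM SET sec_protocol_error_further_action = 'DELAY,3';

-- short audit log entry instead of a full trace file
ALTER SYSTEM SET sec_protocol_error_trace_action = 'LOG';
```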

And there are many more of them in the chapter, but the book does not mention who wrote which chapters. Whereas it is so obvious which chapters belong to Lutz, since he put a lot of effort into chapters 8 (security), 10 (change management), 11 (SQL management), 12 (Flashback) and 13 (ASM), with examples that beautifully help in understanding the new enhancements of 11.1 in these areas. Also I noticed that using your name as the username at the SQL*Plus prompt is a good technique to claim your examples later on (Kyte and Foote use this technique a lot) :)

I must of course mention that the other authors' chapters in this book were a big disappointment for me. I wondered: if Lutz hadn't written for the book, would they still have published it, since there is nearly no value added over the text already available in Oracle's documentation? Who knows. There is an important lesson for me here: if one day I decide to write a book, I will either write alone or choose the co-authors and the publisher I work with very carefully. Since I love reading Lutz, and he is not blogging like in the old days because he was angry at some people copying his blog posts and making money out of them, I hope he chooses to write alone for his future book projects so I can consume more of his quality Oracle reading.

Improvements with 11.1 to the datatypes and table structures supported for the asynchronous change data capture option

I was studying the Asynchronous AutoLog CDC Cookbook while working on the S_AUDIT_READ Siebel source table, but the AUDIT_LOG column with the CLOB datatype failed during the setup with an "ORA-31513: unsupported column type specified in change table" error on 10.2.

Since I found out that this limitation for LOBs is no longer a problem after 11.1, I wanted to share this information.

Oracle® Database Data Warehousing Guide 10g Release 2 (10.2)
Chapter 16 Change Data Capture: Datatypes and Table Structures Supported for Asynchronous Change Data Capture

Oracle® Database Data Warehousing Guide 11g Release 1 (11.1)
Chapter 16 Change Data Capture: Datatypes and Table Structures Supported for Asynchronous Change Data Capture

ps: if you are interested in CDC option of Oracle please check this previous post of mine for an introduction and a Hotlog option implementation example.

DBMS_COMPARISON supplied package after 11g Release 1

I discussed 10g's ORA_HASH SQL function for determining whether two Oracle tables' data are equal in this post before; this method is very effective, especially if you are working with huge tables in different locations.

In the same post I also mentioned some favorite schema and data comparison methods used within the community, so this is a follow-up to that post, covering the new 11g supplied package: DBMS_COMPARISON.

After a quick startup demonstration based on psoug's library, I saw that this new feature also uses the ORA_HASH SQL function. :) Also, for the schema comparison part, the dictionary checks that are made can be followed inside a 10046 SQL trace, as demonstrated below;

conn hr/hr



CREATE UNIQUE INDEX ui_depts ON depts(department_id) NOLOGGING ;

DELETE FROM depts WHERE department_id = 10;

1 row deleted.


ALTER SESSION SET max_dump_file_size=unlimited;
ALTER SESSION SET timed_statistics = true;
ALTER SESSION SET statistics_level = ALL ;

ALTER SESSION SET tracefile_identifier = dbms_comparison ;


exec dbms_comparison.create_comparison(comparison_name=>'compare_test', schema_name=>'HR', object_name=>'DEPTS', dblink_name=>NULL,  remote_schema_name=>'HR', remote_object_name=>'DEPARTMENTS',  scan_percent=>90);

set serveroutput on

DECLARE
  retval dbms_comparison.comparison_type;
BEGIN
  IF dbms_comparison.compare(comparison_name => 'compare_test',
                             scan_info       => retval,
                             perform_row_dif => TRUE) THEN
    dbms_output.put_line('No Differences');
  ELSE
    dbms_output.put_line('Differences Found');
  END IF;
END;
/
Differences Found

PL/SQL procedure successfully completed.


select value from v$diag_info where name = 'Default Trace File' ;


CMD > tkprof /u01/app/oracle/diag/rdbms/tong/tong/trace/tong_ora_4402_DBMS_COMPARISON.trc tong_ora_4402_DBMS_COMPARISON.txt sys=yes waits=yes sort=execpu

set null @
set colsep |
set linesize 2500

select *  FROM  user_comparison_row_dif ;




COMPARE_TEST                  |         4|@                 |AAARADAAFAAAAA3AAA
DIF|04-OCT-08 PM

exec dbms_comparison.drop_comparison(comparison_name=>'compare_test') ;

There are several constraints to using this method: the local database that runs the subprograms in the DBMS_COMPARISON package must be at least 11.1, and the remote database must be at least 10.1. Non-Oracle databases are not supported, and the database character sets of the databases being compared must be the same. But I am pretty sure this package will be enhanced in future releases and used by other database features.

ps: During my unconference session at OOW08 – APEX Test Drive – I mentioned that APEX 3 on 11.1 also has GUI support for this new supplied package, under the Utilities tab.

11g’s Metadata Information for SQL Built-In Operators and Functions

11g introduced two new views, V$SQLFN_METADATA and V$SQLFN_ARG_METADATA, which provide metadata for all Oracle SQL built-in operators and functions.

SQL> set linesize 2500
SQL> SELECT * FROM v$sqlfn_metadata where name = 'REGEXP_SUBSTR' ;

   FUNC_ID NAME                              MINARGS    MAXARGS DATATYPE VERSION      ANA AGG DISP_TYPE     USAGE                          DESCR
---------- ------------------------------ ---------- ---------- -------- ------------ --- --- ------------- ------------------------------ ------------------------------
       526 REGEXP_SUBSTR                           2          5 STRING   V10 Oracle   NO  NO  NORMAL

SQL> SELECT * FROM v$sqlfn_arg_metadata WHERE func_id = 526 ;

   FUNC_ID     ARGNUM DATATYPE DESCR
---------- ---------- -------- ------------------------------
       526          1 STRING
       526          2 STRING
       526          3 NUMERIC
       526          4 NUMERIC
       526          5 STRING

This new feature may especially help third-party tools that maintain function usage metadata redundantly in the application layer. For more information you may check the documentation and this article.

11g Enhancement for ALTER TABLE .. ADD COLUMN Functionality

Before Oracle 11g, adding a new column with a DEFAULT value and a NOT NULL constraint required both an exclusive lock on the table and the default value to be stored in all existing records.

Now in Oracle 11g the database can optimize the resource usage and storage requirements for this operation: the default value is stored in the data dictionary instead of being updated into every row of the table. As a result, especially for large tables, the execution time is reduced and space is saved.

In addition, the following ADD COLUMN operations can now run concurrently with DML operations:
* Add a NOT NULL column with a default value
* Add a nullable column without a default value
* Add a virtual column
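The kind of statement this optimization applies to, and which the timing demos below compare across releases, can be sketched as follows (the column name, type and default value are illustrative):

```sql
-- on 10.2 this updates every existing row with the default value;
-- on 11.1 only the data dictionary is changed, so it returns almost instantly
ALTER TABLE tst_source ADD (name VARCHAR2(30) DEFAULT 'N/A' NOT NULL);
```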

release 1002000300 –

drop table tst_source purge ;
create table tst_source nologging as
select rownum id, text
  from dba_source;

set timing on


Elapsed: 00:00:30.43

set timing off

exec dbms_stats.gather_table_stats(user, 'tst_source');

select count(*) from tst_source where name is NULL ;


release 1101000600 –

drop table tst_source purge ;
create table tst_source nologging as
select rownum id, text
  from dba_source;

set timing on


Elapsed: 00:00:00.10

set timing off

exec dbms_stats.gather_table_stats(user, 'tst_source');

select count(*) from tst_source where name is NULL ;


On SAGE and Oracle’s new 11g SQL Tuning Workshop Education Content

Most probably you think SAGE (this was its project name before Larry's announcement; much more compact naming, so I prefer it :) –Oracle Exadata Storage Server and the HP Oracle Database Machine– will not make a change in your daily life, and you may be unsatisfied with the announcement, but with this post I promise to give you something cool whether you are a massive OLAP or an XE developer. :)

But first let me share the first response to SAGE from one of my friends, a warehouse developer: "Oh, I knew Oracle should have worked much faster for years; the fault was not ours, you see!" :) So do these new features promise to reduce the importance of warehouse development best practices? I really don't think so..

So on my 22-hour way back home I had a chance to look at this new Oracle University education, and I strongly think this is the best content you may get from a tuning course. First, with this post, let me briefly go over the table of contents; then I think I will be doing a series of blog posts parallel to this content.

If you took 10g's SQL Tuning Workshop, there are not only new features but whole new chapters in the 11g content, like a chapter dedicated to star transformation and a very long chapter dedicated to optimizer operations such as the access path options of the Oracle database. I have tried to motivate developers around me not to do any after-the-fact SQL tuning but to tune as they develop, and I am very happy to see this chapter, as it is a part I always dreamed of. :)

Let me mention another interesting observation of mine: if you attended Jonathan Lewis's optimizer seminars, I think you will also find that the content and even some pictures in this education are very similar to that material. To my understanding this is a kind of approval for people like Mr. Lewis who are trying to improve the community's understanding of complex technologies like the optimizer. No pain no gain, right? So I do not think this education will be as easily consumed by starters as it should be.

By the way, as we own one of the largest warehouses in the world, there is a very real possibility that we may be testing and using Exadata, so I may be blogging something more than "reading billions of rows in seconds". At the end of the day, ETL (I mean lots of writes), sort operations, parallel scans and hash joins are in total what will make the difference, since lots of reports (I mean reads) are already tuned to be fast enough by logical design and access path options like partitioning, bitmap indexes and materialized views in today's optimized warehouses.

Oracle 11gR1 enhancements to initialization parameter management

Oracle Database 11.1 now writes the values of initialization parameters to the alert log upon startup, in such a way that when you lose your SPFILE or PFILE it is easy to copy and paste them to create a new PFILE.

Also, the SPFILE or PFILE that was used to open the instance is now mentioned in an additional line.

Starting up ORACLE RDBMS Version:
Using parameter settings in server-side spfile /u01/app/oracle/product/11.1.0/db_1/dbs/spfiletong.ora
System parameters with non-default values:
  processes                = 150
  shared_pool_size         = 144M
  streams_pool_size        = 16M
  control_files            = "/u01/app/oracle/oradata/tong/control01.ctl"
  control_files            = "/u01/app/oracle/oradata/tong/control02.ctl"
  control_files            = "/u01/app/oracle/oradata/tong/control03.ctl"
  db_block_size            = 8192
  db_cache_size            = 220M
  compatible               = ""
  db_recovery_file_dest    = "/u01/app/oracle/flash_recovery_area"
  db_recovery_file_dest_size= 8G
  undo_tablespace          = "UNDOTBS1"
  remote_login_passwordfile= "EXCLUSIVE"
  db_domain                = ""
  dispatchers              = "(PROTOCOL=TCP) (SERVICE=tongXDB)"
  audit_file_dest          = "/u01/app/oracle/admin/tong/adump"
  audit_trail              = "DB"
  db_name                  = "tong"
  open_cursors             = 300
  star_transformation_enabled= "TRUE"
  pga_aggregate_target     = 192M
  diagnostic_dest          = "/u01/app/oracle"
  _trace_files_public      = TRUE
Sat Sep 13 11:13:50 2008
PMON started with pid=2, OS id=3199 

Starting up ORACLE RDBMS Version:
System parameters with non-default values:
  processes                = 250
  sessions                 = 280
  __shared_pool_size       = 222298112
  __large_pool_size        = 4194304
  __java_pool_size         = 4194304
  __streams_pool_size      = 0
  nls_language             = TURKISH
  nls_territory            = TURKEY
  sga_target               = 285212672
  control_files            = /oracle/oradata/tong/control01.ctl, /oracle/oradata/tong/control02.ctl, /oracle/oradata/tong/control03.ctl
  db_block_size            = 8192
  __db_cache_size          = 50331648
  compatible               =
  db_file_multiblock_read_count= 32
  db_recovery_file_dest    = /home/oracle/flash_recovery_area
  db_recovery_file_dest_size= 8589934592
  fast_start_mttr_target   = 180
  undo_management          = AUTO
  undo_tablespace          = UNDOTBS1
  remote_login_passwordfile= EXCLUSIVE
  db_domain                =
  dispatchers              = (PROTOCOL=TCP) (SERVICE=tongXDB)
  job_queue_processes      = 10
  background_dump_dest     = /oracle/admin/tong/bdump
  user_dump_dest           = /oracle/admin/tong/udump
  core_dump_dest           = /oracle/admin/tong/cdump
  audit_file_dest          = /oracle/admin/tong/adump
  db_name                  = tong
  open_cursors             = 300
  star_transformation_enabled= TRUE
  pga_aggregate_target     = 94371840
PMON started with pid=2, OS id=5423

Also, you can now create a PFILE or SPFILE from the active instance's current values of the initialization parameters.
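In 11g this is a single statement; a sketch (the target path is illustrative):

```sql
-- dump the current in-memory parameter values to a text PFILE
CREATE PFILE = '/tmp/init_from_memory.ora' FROM MEMORY;

-- or rebuild the default SPFILE directly from memory
-- CREATE SPFILE FROM MEMORY;
```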



This feature gives an opportunity to compare the parameters in use at a given moment with the ones from the last startup.

Stored outlines are deprecated in Oracle Database 11g

I hope one day I can write a similar post for hints: an optimizer world which always guarantees the *best* path for each of your queries without any additional intervention..

[from metalink’s 11g Upgrade Companion]
Oracle highly recommends the use of SQL plan baselines instead of the stored outlines after 11gR1. With Oracle Database 11g using the SQL Plan Management (SPM) the optimizer automatically manages plans and ensures that only verified or known plans are used.

SQL Plan Management allows controlled plan evolution by only using a new plan after it has been verified to perform better than the current plan. You should also use SQL Plan Management as part of your upgrade strategy. Please see the best practices section for more information.
[/from metalink’s 11g Upgrade Companion]
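For example, plans already in the cursor cache can be loaded as accepted baselines with DBMS_SPM (the SQL_ID below is illustrative):

```sql
-- returns the number of plans loaded as baselines
DECLARE
  n PLS_INTEGER;
BEGIN
  n := dbms_spm.load_plans_from_cursor_cache(sql_id => '9babjv8yq8ru3');
  dbms_output.put_line(n || ' plan(s) loaded');
END;
/
```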

For more details on SQL Plan Management

Oracle Database Performance Tuning Guide – Using SQL Plan Management

SQL Plan Management in Oracle Database 11g

Some Useful Oracle Database Utilities and Errorlogging Feature of 11g’s SQL*Plus

Some Oracle utilities are well documented in the Oracle Database Utilities Guide, but some are not. Some are very familiar to us: exp, imp, expdp, impdp, sqlldr, dbverify, csscan, tnsping, lsnrctl, emctl, tkprof etc. But some may not be: trcroute, oradebug, dbshut, dbstart, oerr, orakill etc. Here are some simple examples I use frequently on Linux.

[oracle@tonghost bin]$ oerr ora 600
00600, 00000, "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
// *Cause:  This is the generic internal error number for Oracle program
//         exceptions.  This indicated that a process encountered an
//         exceptional condition.
// *Action: Report as a bug - the first argument is the internal error number.

[oracle@tonghost bin]$ dbhome

[oracle@tonghost bin]$ dbfsize /u01/app/oracle/oradata/tong/example01.dbf

Database file: /u01/app/oracle/oradata/tong/example01.dbf
Database file type: file system
Database file size: 12800 8192 byte blocks

[oracle@tonghost bin]$ dbshut /u01/app/oracle/product/11.1.0/db_1
Processing Database instance "tong": log file /u01/app/oracle/product/11.1.0/db_1/shutdown.log

[oracle@tonghost bin]$ dbstart /u01/app/oracle/product/11.1.0/db_1
Processing Database instance "tong": log file /u01/app/oracle/product/11.1.0/db_1/startup.log

With 11g SQL*Plus's errorlogging feature we can store any SQL, PL/SQL or SQL*Plus errors, by default in the table SPERRORLOG.

conn / as sysdba
create user tong identified by tong;
grant create session to tong;
grant unlimited tablespace to tong;
create table tong.tong_sperrorlog(username   varchar(256),                  
				timestamp  TIMESTAMP,                     
				script     varchar(1024),                 
				identifier varchar(256),                  
				message    CLOB,                          
				statement  CLOB);                         
connect tong/tong
set linesize 1200
show errorlogging
set errorlogging on table tong_sperrorlog   

create table tong ( c1 number ) ;

set errorlogging off

select timestamp, statement, message from tong_sperrorlog;

TIMESTAMP       22-JUN-08 PM                                                                                                                                     
STATEMENT		create table tong ( c1 number )
MESSAGE			ORA-01031: insufficient privileges

This feature can be also used with 11g client against 9i and 10g servers. Check $ORACLE_HOME/sqlplus/doc/elgsetup.txt for details.