Two new page announcements: “Introduction to Oracle for Newbies” and “Oracle Quality and Performance for FREE”

I study Oracle Database and its development tools each year with young engineers during my company’s internship program.

All attendees needed a starting point and some introductory guidance, so I consolidated what I have been writing for some time and added two new links to the “Highlights” menu at the top right of my blog, as below;

A. Introduction to Oracle for Newbies

For more information please also check

B. Oracle Quality and Performance for FREE –

If you’re also new to Oracle Database, or even new to administration or development responsibilities and technologies, these resources may guide and assist you through the learning process.

Preventing DDL operations on a table

One of my friends from OracleTURK asked me how it is possible to lock tables against DDL statements (i.e., prevent DDL on them). He was after a simple implementation, similar to how we lock object statistics against re-gathering with the DBMS_STATS package, for example.

Oracle permits DDL operations on a table only if the table can be locked during the operation. Such table locks are not required during DML operations. One exception is that table locks are never acquired on temporary tables.

I mentioned this option before, in my notes from Mr. Jonathan Lewis’s two-day seminar, for another need –
“the alter table fk_table disable table lock; command will produce an error for a DML on the child that needs a lock; there will not be any lock problem even if the foreign-key column is not indexed”

There can be several different ways to handle this need, and this option may not be handy for your own environment. For example, if you are not the only one who knows the password of the schema user that owns the table, or if there are accounts with “alter any” or other wide privileges granted (such as the DBA or IMP_FULL_DATABASE roles), someone can quickly alter the table to enable its lock again and get rid of the error :)

“alter table disable table lock” may be one of the simplest ways to do this – Code Listing 143 – How to prevent DDL on a table
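As a rough sketch of how this option behaves (the table name MY_TABLE is hypothetical here, not from the code listing), disabling the table lock makes subsequent DDL fail:

```sql
-- hypothetical demo table, just for illustration
create table my_table (id number);

-- prevent DDL by disabling table locks
alter table my_table disable table lock;

-- any DDL that needs a table lock should now fail with
-- ORA-00069: cannot acquire lock -- table locks disabled for MY_TABLE
drop table my_table;

-- revert when DDL is needed again
alter table my_table enable table lock;
drop table my_table;
```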

Automatic SQL Tuning with DBMS_SQLTUNE package example and SQL Profile Contents

I mentioned 10g Automatic SQL Tuning feature before –
Expanded Supplied Packages with 10g – Part 3 – Section 5- Automatic SQL Tuning

I am a traditional manual-tuning guy, but when I have the chance to work on 10g instances, I never refuse Oracle’s help :) This post is related to this Oracle forums post –

Here the problem is that Oracle recommends some profiles, via OEM or the DBMS_SQLTUNE package, to improve your problematic SQL queries, but to see what Oracle actually recommends you have to dig a little deeper – Code Listing 142 – Automatic SQL Tuning with DBMS_SQLTUNE package example and SQL Profile Contents

SQL Profiles are like hints and outlines: they make your execution plan static, so I believe they, too, may be best kept as a last resort.
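For reference, the basic DBMS_SQLTUNE flow looks roughly like the sketch below; the SQL_ID, task name, and the 300-second time limit are made-up placeholders, so see the code listing above for a real session:

```sql
DECLARE
  l_task_name VARCHAR2(64);
BEGIN
  -- create a tuning task for a cursor already in the shared pool
  -- ('0abc123def456' is a hypothetical SQL_ID)
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                   sql_id     => '0abc123def456',
                   time_limit => 300,
                   task_name  => 'DEMO_TUNING_TASK');

  -- run the Automatic SQL Tuning analysis
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'DEMO_TUNING_TASK');
END;
/

-- read the findings, including any recommended SQL Profile
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('DEMO_TUNING_TASK') FROM dual;

-- accept the recommended profile only after digging into its contents
EXEC DBMS_SQLTUNE.ACCEPT_SQL_PROFILE(task_name => 'DEMO_TUNING_TASK')
```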

A special thanks goes to Nicolas Gasparotto for sharing this Firefox add-on for pasting an automatic signature into a forum post; I share it here since you may also like to use it for other needs of yours –

I finished another Turkcell Academy class yesterday, and at last I am going on holiday; the destination is my family and the beautiful land of Fethiye. So it will be silent around here for a while, and I hope to meet you ten days from now with new ideas to share :)

Column retrieval cost in CBO calculation

Same database, same table, same column, same data, same statistics, so where does this difference come from?
Code Listing 141 – Column retrieval cost in CBO calculation demo

Summary –
for column c1 : elapsed time – 00.07 / cost – 5104 / consistent gets – 18575
for column c2 : elapsed time – 00.10 / cost – 5105 / consistent gets – 26811

Joze Senegacnik, in his Optimizing for Performance 2 Day Seminar at Istanbul, Turkcell Academy, 23-23/08/2007, mentioned that each subsequently parsed column introduces a cost of 20 CPU cycles, regardless of whether it will be extracted or not;

“The answer is the cost of LIO. A CBO trace (event 10053) shows a difference in CPU cost – additional 20 cycles for the second column. In my presentation “Access Path Optimization” presented at UKOUG, Collaborate06 I have a demo case with a table with 250 columns where the difference between selecting from the first and last column is almost 50% increased elapsed time with no PIO, only LIO.

It is important to know that the CBO, when using the new cost model which includes also the CPU time, is aware of the fact of increased CPU consumption for each column retrieved. In reality the kernel has to parse the row data due to the variable length of columns (either VARCHAR2 or NUMBER) and it adds 20 cycles per row retrieved. For instance: retrieving first column costs 20, retrieving tenth column costs 200 – and kernel has to walk through the row data across all columns to finally reach the tenth column – and this is the difference.”
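A quick way to see this effect yourself is a sketch like the one below (table and column names are made up for illustration); the kernel has to walk past the filler column’s variable-length data on every row before it can reach C2:

```sql
-- C1 is the first column in the row; C2 comes after a wide filler column
create table t_colpos as
select rownum            c1,
       rpad('x',200,'x') filler,
       rownum            c2
  from dual connect by level <= 100000;

exec dbms_stats.gather_table_stats(user, 'T_COLPOS')

set autotrace traceonly statistics
select sum(c1) from t_colpos;  -- row parsing stops at the first column
select sum(c2) from t_colpos;  -- row parsing walks across FILLER on every row
```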

Additional readings :
Optimizer debug trace event – 10053 trace file
How to migrate to system statistics with DBMS_STATS

ubTools Support IssueNavigator and RSS

Recently I posted the announcement of ubSQL from ubTools. And here are the ubTools Support IssueNavigator and the ubTools Support RSS. I guess you will especially like the internals category :)

Also, I recently posted my feedback on the Oracle Technology Network Community Discussion Forums, in the Community Feedback and Suggestions category; you may also want to share yours, especially if you are using tools like Jira for similar needs.

Again testing, testing, testing..

Testing is believing, but the quality of your tests must serve your need. From my past experience I can guarantee that:
– isolated-environment tests will cheat you,
– response-time-based tests will cheat you

Once upon a time I told one of my friends that if she sees less logical I/O in a performance comparison, she shouldn’t believe anyone who tells her the opposite, even if it is Mr. Lewis, whom we perceive as a god of Oracle :) Still, that statement is a little incomplete; during an important performance comparison I always prefer to check these outputs for each scenario:
– logical I/O amounts and the access paths,
– latching amounts and locking activity (critical for scalability in OLTP applications),
– waits in the response time

So in my opinion the most accurate path is:
– to create a sufficient and appropriate test case matching your need,
– to load it on your test environment in a way that parallels your production load (for load generation you may use DBMS_SCHEDULER, or DBMS_JOB prior to 10g, for example),
– and to do a 10046 level 8 + tkprof analysis of each session for comparison (but be careful about tkprof traps).
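For the last step, the classic per-session recipe is roughly the following (the tracefile identifier is an arbitrary label I made up):

```sql
-- tag the trace file so it is easy to find under user_dump_dest
alter session set tracefile_identifier = 'loadtest_sess1';

-- 10046 level 8 = SQL trace plus wait events
alter session set events '10046 trace name context forever, level 8';

-- ... run the workload you want to compare ...

alter session set events '10046 trace name context off';
```

Then format the raw trace with tkprof, for example `tkprof <tracefile> report.txt sys=no sort=prsela,exeela,fchela`, and compare logical I/O, latch and lock activity, and wait events between scenarios rather than elapsed time alone.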

Here is an example in which, at the end, even Mr. Lewis got involved :) The article Myths on bitmap indexes by Patrick Sinke attracted a lot of attention some time ago. There were two interesting claims about bitmap indexes;
– Myth 2 – Bitmap indexes are only suitable for data warehouses
– Myth 3 – Bitmap indexes have slow performance on DML actions

But these hypotheses were not tested under load, just in a single-connection, isolated-environment SQL*Plus session; so if your production system is similar to that, then no problem :) There were lots of comments warning the author, so he re-published the article.

Here is another example, sharing comparison results for an index-organized table (IOT) versus a heap-organized table under load. After these results, do you still believe that IOTs, or the hash clusters Oracle uses in its world-record-breaking TPC benchmarks, perform worse than heap-organized tables, as is usually advised with reference to idiot looping test cases? My answer is that you again have to implement a test case for your own need, load it, and analyze the 10046 outputs before you reach any conclusion.
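For completeness, the structural difference being compared is just the ORGANIZATION INDEX clause; the table names below are illustrative:

```sql
-- heap-organized table: rows stored in no particular order,
-- with the primary key enforced by a separate index
create table t_heap (
  id  number primary key,
  val varchar2(30)
);

-- index-organized table: the whole row is stored inside
-- the primary key index structure itself
create table t_iot (
  id  number primary key,
  val varchar2(30)
) organization index;
```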

After all these years my conclusion is that you should never easily believe everything you read or hear. Everybody shares something they believe is useful or “TRUE” for others. But just because something is printed, or someone is more experienced, doesn’t mean it is “TRUE”, at least for you. OS and Oracle versions matter; parameters and statistics change everything.

And I advise that when you are using a search engine nowadays, you add “” at the end of your search words; this way at least you will be searching the official information first. There are always some hunters waiting for their sheep..

How do you automate change and release management for PL/SQL Language?

This post on “How to design a release management strategy for Oracle pl/sql?” was a question I asked recently.

Are there any tools you may be using within your big projects for this need (maybe some open-source tools), and would you share your strategies?

These are perceived as the weak side of this great development environment, and yet it is known that there are big PL/SQL-based projects inside Oracle, like Advanced Replication, the Applications module, Workflow, and APEX. But where are the PL/SQL equivalents of the tools that can easily be found and used in other development environments?

11g Express Edition Wishlists @ XE forum

On the Oracle Database 10g Express Edition forum (you will need an extra registration with your OTN account for this forum – please check for details) I started some threads recently, each related to an extension for the coming 11g XE.

So you may support the ones already opened, or you may create new ones similar to these. Of course there is no guarantee that these will have an effect on Oracle’s decisions, but why not try :)

11g Express Edition Wishlist – Compression
11g Express Edition Wishlist – JVM as a separate download and install
11g Express Edition Wishlist – maximum storage to 20GB
11g Express Edition Wishlist – Point-in-time tablespace recovery
11g Express Edition Wishlist – More globalization support
11g Express Edition Wishlist – Maximum memory 2GB
11g Express Edition Wishlist – Database Control(Oracle Enterprise Manager)
11g Express Edition Wishlist – Flashback Table

References used :
Express Edition Licensing Information Guide

Linux way of Flashback; how to restore back a dropped datafile

Using the PROC file system available on Unix and Linux, we can retrieve accidentally dropped datafiles. Below is a demonstration of this method on OEL4 and 10gR2 –

SQL> create tablespace my_test datafile '/tmp/my_test_01.dbf' size 200k; 

SQL> alter system check datafiles ;

SQL> alter system checkpoint global ;

SQL> host rm -rf /tmp/my_test_01.dbf

-- since it is removed, we cannot resize it
SQL> alter database datafile '/tmp/my_test_01.dbf' resize 250k; 

alter database datafile '/tmp/my_test_01.dbf' resize 250k
ERROR at line 1:
ORA-01565: error in identifying file '/tmp/my_test_01.dbf'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3

-- first get the process id of dbwr (could be pmon, smon, etc.)
SQL> host ps -ef |grep tcell|grep dbw|grep -v grep
oracle    5033     1  0 09:54 ?        00:00:00 ora_dbw0_tcell

-- find open files for this process id
SQL> host lsof -p 5033 |grep /tmp/my_test_01.dbf
oracle  5033 oracle   28uW  REG  253,0    212992  180316 /tmp/my_test_01.dbf (deleted)

-- go to the file descriptors directory and make a copy of file using file descriptor(28uW for this example)
SQL> host cat /proc/5033/fd/28 > /tmp/my_test_01.dbf
SQL> host ls -alt /tmp/my_test_01.dbf
-rw-r--r--  1 oracle oinstall 212992 Sep  4 14:25 /tmp/my_test_01.dbf

SQL> alter database datafile '/tmp/my_test_01.dbf' resize 250k;

This method can also be used to retrieve a deleted current redo logfile, provided that –
a. the database has not been restarted,
b. the server has not been restarted,
c. the file was not offline before deletion.

References Used :
Note:444749.1 – Retrieve deleted files on Unix / Linux using File Descriptors