I read Rob, Laurent, and Chen on this topic. As far as I can follow, the OTN SQL and PL/SQL forum is the place for SQL madness; any kind of question there is discussed in a race to produce the most performant single-SQL solution. Oracle provides very rich SQL options for this kind of race, such as analytic, XML, and data mining functions, or the MODEL clause.
Additionally, we know by experience that any data-intensive need will have several different solutions, and *usually* a set-based SQL solution will be faster than its alternatives, especially row-based (cursor-looping) PL/SQL ones. Still, depending on the need there can be exceptions; you may check Nicolas Gasparotto‘s approach on this example with the pipelined PL/SQL function option.
IMHO, if you are not solving a puzzle but developing an application in a team, PL/SQL-based solutions are easier for all team members to understand, and as a result the application is easier to maintain in the long run. After all, performance is not the only metric of an application’s success; PL/SQL provides exception handling and is also rich in options such as bulk processing and pipelined functions for related performance needs.
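As a quick illustration of the pipelined PL/SQL option mentioned above, here is a minimal sketch (the type and function names are my own, not from the referenced thread):

```sql
-- A collection type is needed as the function's return type.
CREATE TYPE num_tab AS TABLE OF NUMBER;
/
-- A pipelined function streams rows to the caller as they are produced,
-- instead of materializing the whole result set first.
CREATE OR REPLACE FUNCTION first_n (p_n IN NUMBER)
  RETURN num_tab PIPELINED
IS
BEGIN
  FOR i IN 1 .. p_n LOOP
    PIPE ROW (i);  -- each row is returned to the consumer immediately
  END LOOP;
  RETURN;
END;
/
-- Usable from plain SQL via the TABLE operator:
SELECT * FROM TABLE(first_n(5));
```

Because rows are piped as they are produced, the caller can start consuming results before the function finishes, which is what can make this a competitive alternative to a single SQL statement in some scenarios.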
Here is another interesting one you might want to check.
ps: you may use this link for the older support case studies.
Especially in data warehouse environments, constraints are perceived as evil because of their ETL load costs, whereas their reporting performance gains may motivate you to use constraints even with these known costs.
One of the most common examples is the NOT NULL constraint with ANTI JOINs: when joining massive amounts of data, a HASH JOIN is what you are always after, but if you do not let the CBO know that there are no NULL values in the columns you are joining, you may end up with a NESTED LOOP join method, which usually lasts for an eternity. Please check this paper for the details of this case: Speeding Up Queries with Semi-Joins and Anti-Joins: How Oracle Evaluates EXISTS, NOT EXISTS, IN and NOT IN
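To see this effect for yourself, a minimal sketch along these lines can be used (the table names are hypothetical):

```sql
-- With NOT NULL declared on both join columns, the CBO can transform
-- NOT IN into an anti-join and use a hash join for it.
CREATE TABLE big_t    (id NUMBER NOT NULL, payload VARCHAR2(100));
CREATE TABLE lookup_t (id NUMBER NOT NULL);

EXPLAIN PLAN FOR
  SELECT * FROM big_t WHERE id NOT IN (SELECT id FROM lookup_t);

-- Look for a HASH JOIN ANTI operation in the plan; drop the NOT NULL
-- constraints and repeat to compare.
SELECT * FROM TABLE(dbms_xplan.display);
```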
So, as always, we need to test, calculate, and decide on the alternative cost of using or not using any option available in Oracle for our own scenarios; after investigating the examples I shared below, you may choose to go with VALIDATED constraints or not.
Not Null constraints and the optimizer after 10g
Check constraints and the optimizer after 10g
By telling the database, with a foreign key, that a one-to-one mandatory relationship exists on the join columns, you can eliminate the unnecessary table’s join cost over a view after 10g Release 2
Having a NOVALIDATE constraint hides valuable information from the CBO
SQL*Loader direct path load disables the foreign key constraint and ignores the trigger, whereas direct path insert just ignores the foreign key constraint
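For the warehouse trade-off discussed above, a sketch of the two choices might look like this (the table and constraint names are hypothetical):

```sql
-- Cheap to add: existing rows are NOT checked, so the ETL load cost is low,
-- but the optimizer will only trust a RELY NOVALIDATE constraint for query
-- rewrite when QUERY_REWRITE_INTEGRITY is relaxed to TRUSTED.
ALTER TABLE sales_fact
  ADD CONSTRAINT sales_fact_fk FOREIGN KEY (prod_id)
  REFERENCES products (prod_id)
  RELY ENABLE NOVALIDATE;

-- Full check of existing rows: more expensive, but the CBO gets a hard
-- guarantee it can always use.
ALTER TABLE sales_fact MODIFY CONSTRAINT sales_fact_fk VALIDATE;
```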
Last week security was one of the hot topics at my company; I found myself writing and talking with one of my DBA friends about Oracle’s security options and their history, and we agreed that security is one of the areas where Oracle has invested a lot over time, yet there have still been important failures all around the software. Then we started discussing 11.1 security features and enhancements, and I remembered reading about some cool options in the related chapter of Lutz Hartmann‘s 11g New Features book, like:
– Against brute force attacks, sec_max_failed_login_attempts specifies the number of unsuccessful authentication attempts before the client is dropped,
– Against denial of service attacks, sec_protocol_error_further_action and sec_protocol_error_trace_action specify how the database should react: sec_protocol_error_further_action with the DELAY,n option specifies, in seconds, how long the server waits before it accepts further requests, and sec_protocol_error_trace_action with the LOG option creates a short audit log instead of the huge trace files created with the TRACE option,
– 11g’s password verification function (utlpwdmg.sql) is enhanced,
– After 11g, auditing is enabled by default and audit_trail is set to DB,
– The sqlnet.allowed_logon_version parameter defines the minimum client version that is allowed to connect.
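A sketch of how the settings above could be put in place (the values here are only illustrative, not recommendations):

```sql
-- Drop the client after 3 failed authentication attempts.
ALTER SYSTEM SET sec_max_failed_login_attempts = 3 SCOPE = SPFILE;
-- Wait 3 seconds before accepting further requests after a protocol error.
ALTER SYSTEM SET sec_protocol_error_further_action = 'DELAY,3' SCOPE = SPFILE;
-- Write a short audit log entry instead of a full trace file.
ALTER SYSTEM SET sec_protocol_error_trace_action = LOG SCOPE = SPFILE;
-- And in sqlnet.ora on the server side:
-- SQLNET.ALLOWED_LOGON_VERSION = 10
```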
And there are many more of them in the chapter, but the book does not mention who wrote which chapters. Yet it is obvious which chapters belong to Lutz, since he put a lot of effort into chapters 8 (security), 10 (change management), 11 (SQL management), 12 (Flashback), and 13 (ASM), with examples that beautifully help in understanding the new enhancements in these areas after 11.1. I also caught that using your name as the username at the SQL*Plus prompt is a good technique to claim your examples later on (Kyte and Foote use this technique a lot) :)
I must of course mention that the other authors’ chapters in this book were a big disappointment for me. I wondered: if Lutz hadn’t written for the book, would they still have published it, since there is nearly no value added over the text already available in Oracle’s documentation? Who knows. There is an important lesson for me here: if one day I decide to write a book, I will either write alone or choose my co-authors and the press I work with very carefully. Since I love reading Lutz, and he is no longer blogging like in the old days because he was angry at people copying his blog posts and making money out of them, I hope he chooses to write alone for his future book projects so I can consume more of his quality Oracle writing.
Today I was looking at the recent TPC-H benchmark results and saw a vendor named EXASOL which dominated them. What kind of performance is this? They have their own operating system and they are listed as clustered; who are these guys? :)
Once I wrote on Oracle vs. SQL Server – a never-ending story and advised checking the other open source options as well. Today the global economic crisis is upon all of us, and I observe that every kind of financial decision is stressed as never before. EXASOL seems to be an interesting case study, maybe for our reporting database needs; of course it needs careful testing against each individual’s needs.
Congratulations, really, to EXASOL for their engineering and creativity against giants like Oracle and Microsoft.
I want to recommend Christian Antognini‘s book Troubleshooting Oracle Performance to newbies, and especially to Oracle database application developers. For years I recommended Efficient Oracle by Design by Thomas Kyte to beginners for similar reasons; this book also has lots of practical hints, and additionally lots of 11g new features are covered.
I especially loved Chapter 9, Optimizing Data Access; Chapter 11, Beyond Data Access - Join Optimization; and Chapter 12, Optimizing the Physical Design. The last two chapters have important performance tips for Oracle database application developers; for example, the column retrieval cost in the CBO calculation and the following example on best practices in datatype selection are, in my opinion, some of the simplest and most often discarded ones. The example below on the NUMBER datatype shows how the same value may be stored with different scales, and since NUMBER is the main datatype used to store floating-point numbers and integers in Oracle database applications, these kinds of tips are really important to be aware of.
set numwidth 18
CREATE TABLE t (n1 NUMBER, n2 NUMBER(*,2));
INSERT INTO t VALUES (1/3, 1/3);
SELECT * FROM t;
SELECT vsize(n1), vsize(n2) FROM t;
Additionally, related to the NUMBER datatype, the book mentions that it is not efficient for number-crunching loads; as of 10g two new datatypes are available, BINARY_FLOAT and BINARY_DOUBLE, which implement the IEEE 754 standard, so a CPU can process them directly, and they are fixed-length. Please check the Binary_double vs. Number in compute intensive processes post for an example of how this decision may affect performance.
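Extending the example above, here is a small sketch of my own (not from the book) comparing the storage of the three datatypes:

```sql
-- The f and d suffixes produce BINARY_FLOAT and BINARY_DOUBLE arithmetic.
CREATE TABLE t2 (n NUMBER, bf BINARY_FLOAT, bd BINARY_DOUBLE);
INSERT INTO t2 VALUES (1/3, 1/3f, 1/3d);

-- NUMBER is variable-length depending on precision and scale, while
-- BINARY_FLOAT is always 4 bytes and BINARY_DOUBLE always 8 bytes.
SELECT vsize(n), vsize(bf), vsize(bd) FROM t2;
```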
So if you also find yourself asked to tune applications on production databases again and again like me, I strongly believe it will be a very wise long-term action to let the Oracle database application developers around you know about this book, and to make sure they read and understand the last two chapters, so that they can be aware of the different datatype, table type, and index type options in Oracle during the design and implementation of their database applications.
I saw that Dion Cho from the OTN forums has started blogging in English: http://dioncho.wordpress.com/ and wanted to share it.
This was the question of one of my colleague DBAs yesterday. Since he was on 10.2 and had AWR licensed as a cost option, we worked on DBA_HIST_ACTIVE_SESS_HISTORY first. After a while I decided to get the opinions of the experts at oracle-l, where Deepak Sharma advised a query based on DBA_HIST_SQLSTAT and Niall Litchfield advised DBMS_XPLAN.
So there is a guy who ran his query and insists that it took too long because he couldn’t get the DOP he requested that morning, and he thinks this is the DBA’s fault for not configuring the Resource Manager appropriately. As a result, my DBA colleague wants to show that the query really got the DOP it requested, and he needs an accurate method to prove this on 10.2.
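One sketch of the ASH-based direction we tried first (the &sql_id substitution variable is a placeholder for the statement in question; this is my simplification, not the exact query from the thread):

```sql
-- Count the distinct parallel slave sessions sampled per query coordinator
-- for the given statement; the count approximates the DOP actually obtained.
SELECT qc_session_id,
       COUNT(DISTINCT session_id) AS sampled_slaves
FROM   dba_hist_active_sess_history
WHERE  sql_id = '&sql_id'
AND    qc_session_id IS NOT NULL   -- only rows belonging to parallel slaves
GROUP  BY qc_session_id;
```

Since ASH is sampled, short-lived slaves may be missed, which is exactly why an accurate method was being debated.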
The discussion is still ongoing here; if you have any comments, please let me know :)
Today I saw this warning when I tried to log in to my workspace at apex.oracle.com: This site will be unavailable on Wednesday, 11-FEB-2009 from 0900 – 1300 EST (1400 – 1800 GMT). During this time, this site will be upgraded to Application Express 3.2.
Happy to hear about 3.2 and wanted to share :)
Today Mr. Mughees A. Minhas, director of product development/database management, was our guest at my company, and he gave a presentation on the “DBA 2.0” idea. The presentation had three parts:
– Subjects related to how to sustain optimal performance with Oracle database,
– Subjects related to preserve order amid change with Oracle database,
– Subjects related to managing more with less with Oracle database
Lots of nice PowerPoint slides related to extra-cost options were discussed, as usual :) One of my comments after the presentation was that Oracle DBAs’ lives are getting much more complicated day by day, while what the Oracle marketing staff is forcing IT managers to believe is completely the opposite; this is not fair just to sell more.
Change is of course inevitable, and we all somehow have to accept the changes managed by the guys at the San Francisco headquarters, but in my opinion it is not accurate to force management to believe that if they migrate their DBA staff to version 2.0, everything will somehow be easier and the staff will be doing absolutely less. For example, with RAT, DBAs are now also testers; with ASM, DBAs are now storage admins, right? Until now Oracle has been able to automate the easier parts, like Memory, Undo, and Segment Space management, and they aggressively continue to come up with new features which need manual attention. But calling the older DBAs who use Statspack instead of Grid Control, or file system backup instead of RMAN, version 1.0: I believe this is not right.
ps: if you haven’t read it yet, here is the paper that started these kinds of arguments in my organization: Performing a DBA 1.0 to DBA 2.0 Upgrade