Some Best Practices for Upgrading an Oracle Database

I will soon be upgrading my guinea-pig 10g Release 2 database (an emotional parting for me :) to 11g Release 1, so I thought recalling, consolidating and sharing some of my past experiences on upgrading an Oracle database could be useful. You may comment on these based on your own experiences.

If an upgrade is again at my doorstep (usually within 3 years at most :) –

a. I first read the new features, concepts and upgrade guides of the new release. I also check my platform-specific release notes, especially for the special cases.

b. I test the upgrade process carefully, load and test my applications on the upgraded environment and of course test the restore/recovery plan for unexpected situations. Do not skip testing any third-party tools you use, like the data extraction (unloader) or performance monitoring tools you purchased separately. Also test your backup and disaster recovery strategies, your cron jobs and, if there is an active-passive cluster, your switchover scripts.

c. If you also want control over, and believe in the performance of, the upgrade steps, you may plan to do it manually like me. For example I plan to:
– take the database temporarily to NOARCHIVELOG mode,
– close the other instances and increase the SGA and PGA for the database to be upgraded,
– create large redo log files,
– spool each step to a log file during the migration, and
– carefully monitor OS CPU and memory usage, for example with the top utility and tail -f alert_SID.log, and plan and guarantee sufficient space in the SYSTEM, UNDO and TEMP tablespaces.
Also, there is always a well-detailed Metalink note for this manual upgrade path :)
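
A minimal sketch of some of these preparation steps, assuming a SYSDBA connection on a test system (the undo tablespace name is illustrative):

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- check free space in the critical tablespaces before starting
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) free_mb
FROM dba_free_space
WHERE tablespace_name IN ('SYSTEM', 'SYSAUX', 'UNDOTBS1')
GROUP BY tablespace_name;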

Of course, when I have the tolerance for downtime, a cold (cleanly closed) backup prior to the migration and after a successful migration always feels good. During the migration, restricting database connections and stopping the listener, unix cron and Oracle Scheduler jobs against unwanted changes or interrupts is very important.
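
A sketch of fencing off the database during the window, again assuming a SYSDBA connection:

-- only users with the RESTRICTED SESSION privilege can connect from now on
ALTER SYSTEM ENABLE RESTRICTED SESSION;
-- suspend old-style job queue processing during the window
ALTER SYSTEM SET job_queue_processes = 0;

The listener and the cron jobs are of course stopped at the operating system level.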

Since I believe and trust in the cost-based optimizer, CBO (hints are always a last resort, as Mr. Lewis advises), I re-collect both object and system statistics as a post-migration step (staying in NOARCHIVELOG may still be useful at this point). Also, setting the COMPATIBLE and OPTIMIZER_FEATURES_ENABLE init.ora parameters to enable the new optimizer features is important for the quality of your tests. One of the last steps is always checking and compiling the invalid objects after upgrading; a report of the invalid objects taken prior to the migration will be useful for comparison here. In fact, I create lots of temporary tables with CREATE TABLE AS SELECT as copies of important dictionary views like dba_tables/indexes/tab_partitions/constraints/triggers etc.
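
A minimal post-migration sketch of the statistics and invalid-object steps; the snapshot table name is illustrative and the default statistics settings are assumed:

EXEC DBMS_STATS.GATHER_DATABASE_STATS;
EXEC DBMS_STATS.GATHER_SYSTEM_STATS;

-- snapshot the invalid objects for before/after comparison
CREATE TABLE invalid_objs_after AS
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID';

-- recompile whatever can be recompiled
@?/rdbms/admin/utlrp.sql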

d. For a meaningful test (the best test strategy, let me say), I clone my production database onto a machine with similar resources, which of course means budget (I linked a reference guide for cloning a database below). I run my applications under a load similar to production and monitor the database with OEM, V$ views, events like 10046-10053 and AWR reports. Here the CBO enhancements need the primary attention, and you will need some reports, prior execution plans and reference metrics from before the upgrade for detailed comparisons, in order to catch any problematic change in CBO behavior.
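
For example, tracing one of the test sessions with event 10046 for later execution plan comparison can be as simple as this sketch:

ALTER SESSION SET tracefile_identifier = 'upgrade_test';
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- run the representative workload here, then
ALTER SESSION SET events '10046 trace name context off';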

e. Of course, no matter what the deadline is, I do not agree to upgrade the real production database until I have successfully upgraded and finished my planned tests on the cloned test environment.

I will also share some notes I saw in the 11g new features guide which I think you may also be interested in –

i. 11g Release 1 deprecated features;
– Oracle Ultra Search,
– Java Development Kit(JDK) 1.4,
– CTXXPATH index

ii. Important initialization parameter changes;
– USER_DUMP_DEST/BACKGROUND_DUMP_DEST/CORE_DUMP_DEST are replaced with the DIAGNOSTIC_DEST parameter, which defaults to $ORACLE_BASE/diag.
– If the UNDO_MANAGEMENT initialization parameter has a null value, meaning UNDO_MANAGEMENT is not set, after 11g this implies AUTO mode.
– So, to migrate to automatic undo management if you are not already using it somehow :)
1. Set UNDO_MANAGEMENT=MANUAL
2. create a reasonable load compared to your production
3. and execute the DBMS_UNDO_ADV.RBU_MIGRATION function to get the advised size for the undo tablespace;

SET SERVEROUTPUT ON
DECLARE
   undo_tablespace_MB NUMBER;
BEGIN
   undo_tablespace_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
   DBMS_OUTPUT.PUT_LINE('Advised undo tablespace size: ' || undo_tablespace_MB || ' MB');
END;
/

4. create a new undo tablespace with the advised size
5. the last step is setting UNDO_MANAGEMENT=AUTO (a sketch of these two steps follows)
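
The tablespace name and datafile path below are illustrative, and since UNDO_MANAGEMENT is a static parameter the change takes effect after a restart:

CREATE UNDO TABLESPACE undotbs_new
DATAFILE '/u01/oradata/ORCL/undotbs_new01.dbf' SIZE 1024M;
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS_NEW';
ALTER SYSTEM SET undo_management = AUTO SCOPE = SPFILE;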

And some extra related readings –

Some migration experiences to share
Defensive upgrade methods, but still no pain no gain
Take the risk and migrate to 10gR2
Cloning a Database by Howard Rogers

Oracle Express Edition(XE) and Application Express(ApEx) Part 2

For quick wins and a brief introduction, please first read part 1 of this series. Express Edition and Application Express, I think because of this common word "Express", are usually perceived to be the same by beginners. So it may be important to point out that XE is a limited but free edition of the Oracle 10g Release 2 database, whereas ApEx is a rapid, web-based, wizard-driven development environment bundled with 10g XE that can also be installed and used with any edition of Oracle. Since this part is an overview, I want to share some stories of mine, please hold on :)

Application Express enabled me to challenge the J2EE developers in my company once upon a time. After reading its user's guide and the 2 Day Plus Application Express Developer's Guide, and playing with the sample application on the 1.6 release, I immediately fell in love with this development environment. The primary reason was that, at last, one of my dreams was becoming real: I was able to build web-based database applications with only my SQL and PL/SQL knowledge. Okay, some HTML and JavaScript basics also helped me a lot, I do confess. In order to introduce this tool to the developer community in my company, I challenged both time and head-count constraints; I proposed to do the project in half of whatever time the others would be given, and to be the only developer for every layer (database, middle tier and front end, we may say). When you fall in love, you may harm yourself :)

Since I am one of the old wolves in my company, the project leader believed in me and the project was an important success. I will go through the best practices I adopted and used, and some of the tricks behind the project's success, during this series. But first of all, let me mention that as a starter I targeted a project similar to the one described in chapter 10 of the 2 Day Plus Application Express Developer's Guide. Before starting this series I really want to urge all DBAs, Forms & Reports and PL/SQL developers to check part 1 for the introduction and to warm up with ApEx. ApEx is now installed as part of the 11g installation, and Oracle will continue to invest in this beauty with its new 4.0 version.

Express Edition also enabled me to challenge MySQL in a project once upon a time. Express Edition has limitations, yes, but be open-minded and please see the full side of the glass: this free edition is completely Oracle. Lots of unique features are included and the kernel is the same;
a. with its undo mechanism, which enables you to create highly concurrent applications (readers never wait for writers, writers only wait for writers updating the same row, and this is done automatically; phantom reads are not permitted at the serializable isolation level, and of course there is multi-versioning),
b. with its redo mechanism, which enables you to have highly available applications (ARCHIVELOG mode, instance and media recovery, RMAN etc.),
c. with its supplied SQL and PL/SQL features, which shorten your time to market (the CBO, SQL Model and analytic functions, Oracle Text and Locator, CONNECT BY PRIOR, the UTL_% and DBMS_% packages; who has the resources to re-invent a dumber wheel, as Mr. Kyte mentions).
After reading the licensing guide available under the XE documentation, you will also reconsider using XE for some of your projects.

11g Express Edition is expected to be available on OTN with the second release, and until then requesting some add-ons to 11g XE is also free :) You may post your requests in the XE forum. Since XE and ApEx are free, support may be a question for you; if so, just monitor their forums a little and you will see how many valuable Oracle experts are interested in and answering the questions there.

This will be the draft content of this series –
Part I – Quick Wins and Introduction to XE and ApEx
Part II – Overview of XE and ApEx
Part III – Installing, Upgrading and Configuring XE and ApEx on OEL4
Part IV – Sample IMEI and Terminal Management Application Implementation

The last words are for an important change in my life; just one day after I read this post from one of my important idols, I learned that I became an Oracle ACE. Bad timing, let's say; I completely understand Mr. Rogers' points. So at least I may say that I will keep doing what I have been doing for some time, which I believe was important in earning this award. The Turkish Oracle user community already has a 10g OCM, Aykut Çelik, and this is another first for the community. Since I know the potential and proficiency of this community, I can guarantee that soon there will be others at both the OCM and ACE levels.

Materialized Views – Summary Management with Oracle

In this blog's Oracle Business Intelligence category I have previously introduced both Parallel Execution and Partitioning, two epic and maybe the most used options of Oracle in BI environments.

Using materialized views is a critical performance option for data warehousing, decision support and reporting needs; they are also widely used to replicate and distribute data. By pre-computing the answers to the really hard questions, and with the query rewrite feature, we greatly reduce the load on the system, and these pre-computed summaries are typically very small compared to the original source data. Query rewrite is a mechanism where Oracle, transparently to the end user or application, improves query response time by automatically rewriting a SQL query to use the materialized view instead of accessing the original tables –

Code Listing 121 – Materialized View Basics Demo
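
A minimal sketch of such a demo over an illustrative sales table; the materialized view log and the extra COUNT columns are what make the fast refresh of the SUM possible:

CREATE MATERIALIZED VIEW LOG ON sales
WITH SEQUENCE, ROWID (prod_id, amount_sold) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_sum_mv
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
ENABLE QUERY REWRITE
AS
SELECT prod_id,
       COUNT(*) cnt,
       COUNT(amount_sold) cnt_sold,
       SUM(amount_sold) total_sold
FROM sales
GROUP BY prod_id;

With query rewrite enabled, a query like SELECT prod_id, SUM(amount_sold) FROM sales GROUP BY prod_id can then be transparently answered from sales_sum_mv.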

A materialized view can be partitioned and indexed, large materialized views can be created in parallel, and you may use materialized view logs for incremental refreshes. They are also very similar to indexes in several ways –
* they consume storage space,
* they must be refreshed when the data in their master tables changes,
* they improve the performance of SQL execution when they are used for query rewrites,
* their existence is transparent to SQL applications and users,
* they have a performance impact on write operations.
Unlike indexes, materialized views can be accessed directly using a SELECT statement. Depending on the types of refresh that are required, they can also be accessed directly in an INSERT, UPDATE, or DELETE statement.

Managing materialized views after 10g is much easier; if you are interested in the dbms_advisor.tune_mview, dbms_advisor.create_file and select /*+ REWRITE_OR_ERROR */ .. stuff, please check Arup Nanda's article: Oracle Database 10g: The Top 20 Features for DBAs – Week 12 – Materialized Views

Also this week in the oracle-l group, which I mentioned earlier in one of my posts, "Materialized View refresh method has been changed from 9i to 10g" was a hot topic. As a summary: up to and including 9i, for complete refreshes Oracle did a TRUNCATE of the mview followed by an INSERT /*+ APPEND */, whereas starting with 10g it does a DELETE and a conventional INSERT, causing performance overhead. This was also discussed in this AskTom thread. As Jared Still mentioned in one of his follow-ups, by setting atomic_refresh => false the truncate/append behavior can be restored;

begin
   -- dbms_mview.refresh('MVTEST_MV', method => 'C');
   -- use this with 10g to return to the truncate/append behavior
   dbms_mview.refresh('MVTEST_MV', method => 'C', atomic_refresh => false);
end;
/

PS: for those who may be interested, as I promised earlier, I published my notes from Joze Senegacnik's two-day Optimizing for Performance seminar.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

References Used :
http://www.akadia.com/services/ora_materialized_views.html
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2) – DBMS_ADVISOR.TUNE_MVIEW Procedure
http://psoug.org/reference/dbms_advisor.html

Two-day tuning seminar with Joze Senegacnik

After Mr. Jonathan Lewis, Mr. Julian Dyke and Mr. Lutz Hartmann, tomorrow and Friday Mr. Joze Senegacnik will be our guest in İstanbul, the world's most beautiful city :)

I hope we will have a nice time with Joze, and since Joze is a very knowledgeable Oracle expert, the seminar will be very useful for me. Of course, I will again be sharing my notes' highlights as I did for this year's previous seminars (Lewis / Hartmann).

Joze requested that we each prepare a homework assignment before the seminar, a 10046 case. For me this was easy homework, since for the past several years I have been tracing anything around me called a bug or a performance problem :) So I found this recent forum thread interesting for showing the power of 10046 analysis, and I hope the class likes it too.

Optimizing for Performance by Joze Senegacnik 2 Day Seminar Notes – Istanbul, Turkcell Academy 23-23/08/2007

Note: check out Joze's hidden PGA parameter tuning paper, which I recently mentioned and used during a critical migration, if you haven't experienced it yet.

Debugging RMAN

When an RMAN operation fails, RMAN log and debug information may need to be collected and diagnosed. RMAN log and debug files are NOT generated by default and need to be enabled explicitly;

Listing 118 – RMAN log and debug demo
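
A minimal sketch of enabling both from the OS prompt; the file names are illustrative:

$ rman target / log=/tmp/rman.log trace=/tmp/rman.trc debug

Debugging can also be toggled inside a session:

RMAN> debug on;
RMAN> debug off;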

If you have ever used 10053 or 10046 trace files, you may like what you find inside this big trace file. This time I will say it from the beginning: some like playing with this stuff and some do not need it; both are acceptable, and we may live together happily ever after :)

For more information see Note:132941.1 – RMAN: Quick Debugging Guide

Life after 10g Enterprise Manager Grid Control and its advisors

In my opinion history will show that the life of an Oracle DBA divides into two stages: before and after 10g Enterprise Manager Grid Control :) I still remember the old days when, after paying Oracle for every possible extra license, everybody also paid lots of extra money for a bunch of other companies' tools just to manage Oracle effectively. But somehow, someday, Oracle decided to end its customers' pain; Grid Control, Automatic Storage Management (ASM) and SQL Developer are examples of this strategy, so those were the "good" old days just for some companies, I guess.

Everybody, even presidents, needs and loves advisors. It is the same with Oracle: after 10g, and continuing with 11g, Oracle keeps investing in its advisors. The Memory, SQL Access, SQL Tuning and Segment advisors have important parts in our lives now. But in this post I plan to discuss some of the less popular advisors (I believe), like the Undo, Redo and Automatic Checkpoint Tuning advisors in 10g, plus some 11g delights like the Partitioning and Data Recovery advisors.

The Undo Advisor assists in correctly sizing the undo tablespace. It can also be used to set the low threshold value of the undo retention period for any Oracle Flashback requirements. To access the Undo Advisor in Database Control you may follow: Administration > Undo Management > Undo Advisor

Rollback monitoring also improved after 10g; when a transaction rolls back, the event is recorded in the view V$SESSION_LONGOPS if the process takes more than six seconds. This view enables you to estimate when the monitored rollback process will finish.

SELECT TIME_REMAINING, SOFAR/TOTALWORK*100 PCT
FROM V$SESSION_LONGOPS
WHERE SID = 9
AND OPNAME = 'Transaction Rollback';

The Redo Log Tuning Advisory and Automatic Checkpoint Tuning are also new features introduced with 10g. The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; however, this must be balanced against the expected recovery time. As a rule of thumb, aim to switch logs at most once every fifteen to twenty minutes. Undersized log files increase checkpoint activity and CPU usage.

Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the log files should be large enough to avoid the additional checkpointing caused by undersized log files.

The redo logfile sizing advice is exposed in the OPTIMAL_LOGFILE_SIZE column of V$INSTANCE_RECOVERY. This feature requires the FAST_START_MTTR_TARGET parameter to be set for the advisory to take effect and populate the column.

You can try this with different settings of FAST_START_MTTR_TARGET, but –
* if FAST_START_MTTR_TARGET is set to such a small value that it is impossible to recover within its time frame, then the TARGET_MTTR field of V$INSTANCE_RECOVERY contains the effective MTTR target, which is larger than FAST_START_MTTR_TARGET;
* if FAST_START_MTTR_TARGET is set to such a high value that even in the worst case (the entire buffer cache is dirty) recovery would not take that long, then the TARGET_MTTR field contains the estimated MTTR for the worst-case scenario.

Code Listing 117 – Redo Log Tuning Advisory Demo
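
That demo boils down to something like the following sketch; FAST_START_MTTR_TARGET must be nonzero for OPTIMAL_LOGFILE_SIZE to be populated, and the target value here is illustrative:

ALTER SYSTEM SET fast_start_mttr_target = 300;

SELECT actual_redo_blks, target_redo_blks,
       target_mttr, estimated_mttr,
       optimal_logfile_size
FROM v$instance_recovery;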

And to access the Redo Logfile Size Advisor you may follow: Administration > Storage > Redo Log Groups > Sizing Advice

10g also supports automatic checkpoint tuning, but this feature is not enabled by default, because FAST_START_MTTR_TARGET has a default value of 0. It is an advancement over the MTTR-related parameters introduced in earlier versions. The idea is to use the periods of low I/O usage to advance checkpoints and therefore improve availability.

To enable automatic checkpoint tuning, unset FAST_START_MTTR_TARGET or set it to a nonzero value (it is measured in seconds); setting this parameter to zero disables the feature. When you enable fast-start checkpointing, remove or disable (set to 0) the following initialization parameters:
– LOG_CHECKPOINT_INTERVAL
– LOG_CHECKPOINT_TIMEOUT
– FAST_START_IO_TARGET
Enabling fast-start checkpointing can be done statically in the initialization files or dynamically with –

SQL> alter system set FAST_START_MTTR_TARGET=10;

Enabling fast-start checkpointing increases the average number of writes per transaction that DBWn issues for a given workload. However, if the system is not already near or at its maximum I/O capacity, then fast-start checkpointing has a negligible impact on performance. The view V$MTTR_TARGET_ADVICE gives information on the additional I/Os for various values of FAST_START_MTTR_TARGET (see the sketch after this list). But if –
– FAST_START_MTTR_TARGET is set to a low value: fast-start checkpointing is more aggressive. The average number of writes per transaction that DBWn issues is higher, in order to keep the thread checkpoint sufficiently advanced to meet the requested MTTR.
– FAST_START_MTTR_TARGET is set to a high value: fast-start checkpointing is less aggressive, and the average number of writes per transaction that DBWn issues is lower.
– FAST_START_MTTR_TARGET is unset: automatic checkpoint tuning is in effect. The average number of writes per transaction is reduced, but at the same time the MTTR is at its highest.
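
A sketch of querying the advice view, selecting a typical subset of its columns:

SELECT mttr_target_for_estimate,
       estd_total_writes,
       estd_total_ios
FROM v$mttr_target_advice;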

So what about future releases and Oracle's advisory strategy? I advise :) you to check the fifth slide of the 11g Manageability presentation for some hints. With 11g, a new advisor called the Partition Advisor, which analyzes data access patterns and suggests a partitioning scheme and the partitioning column(s), will assist us especially during the development and testing stages. By the way, before 10g Release 2, if you dropped a partitioned table Oracle removed all the partitions at once, a time- and resource-consuming process. Since 10g Release 2, when you drop a partitioned table the partitions are dropped one by one; maybe you have already felt the change :)

The Data Recovery Advisor will be another friend; it automatically diagnoses data failures and recommends repairs. You can repair failures manually or request that they be repaired automatically. Of course, Enterprise Manager includes interfaces for these new advisors with 11g.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

References Used :
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2)
Note 265831.1 – Automatic Checkpoint Tuning in 10g
Note 274264.1 – REDO LOGS SIZING ADVISORY
Note 180894.1 – V$INSTANCE_RECOVERY
Note 151062.1 – Init.ora Parameter "FAST_START_MTTR_TARGET" Reference Note
Note 30754.1 – Init.ora Parameter "LOG_CHECKPOINT_INTERVAL" Reference Note
Note 30755.1 – Init.ora Parameter "LOG_CHECKPOINT_TIMEOUT" Reference Note
Note 68931.1 – Init.ora Parameter "FAST_START_IO_TARGET" Reference Note
http://psoug.org/reference/dbms_advisor.html

Memory Access Mode of Oracle Enterprise Manager

Many people, after some years with Oracle software, may come across Kyle Hailey's Direct SGA Access project and presentation, which discusses how to do direct SGA access in C. The following material by Miladin Modrakovic also helps with understanding it.

Why might we need this may be the first question that comes to mind; let's imagine a bad work day: as the DBA of a production system, you find out that you cannot access your database from any Oracle utility even though it is open, and a restart doesn't change anything. Let's say Oracle Support, after some dump analysis, also advises you to use your backups, since there is nothing else to do for this hang situation. But your most recent backup media is broken or lost somehow, so what now..

After 10g, Oracle Enterprise Manager gets its data about system hangs using the same method. When you enable memory access mode, Oracle uses a single SQL collector per instance, and this collector starts automatically along with Enterprise Manager.

This technique may also be useful in tuning, to access Oracle statistics without impacting the Oracle kernel. It could also be a technique for accessing data from a running database for security-related purposes, or just for doing some reverse engineering to understand the internals.

References Used :
Memory Access Mode of EM
Undocumented Oracle
Oracle Insights: Tales of the Oak Table – Chapter 6: Direct Memory Access, by Kyle Hailey
Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning (Osborne ORACLE Press Series) (Paperback) – Appendix D – Direct SGA Access
Another similar example, on security
Direct SGA Access for Hung/Slow Systems by Arup Nanda
Hung But Not Paralyzed: Memory-Attached SGA Query by Arup Nanda

Oracle How to Libraries

Recently I used, and wanted to share, the how-to items below –

1. how to spool all oracle messages to a text file
2. how to change a session’s parameter value from another session
3. how to write messages into an oracle trace file or alert.log
4. how to get help on undocumented utility oradebug
5. how to find the trace file name within sql*plus session

Code Listing 115-Some Oracle How To Sharing
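
As a minimal sketch of items 3, 4 and 5, run as SYSDBA in SQL*Plus; the message text is illustrative, and since DBMS_SYSTEM and oradebug are undocumented, try them on a test system first:

oradebug setmypid
oradebug help

-- item 3: write a message into the alert.log (first parameter 2 = alert.log)
exec dbms_system.ksdwrt(2, 'test message written to alert.log')

-- item 5: derive the current session's trace file name on 10g
SELECT c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace_file
FROM v$process a, v$session b, v$parameter c, v$instance d
WHERE a.addr = b.paddr
AND b.audsid = USERENV('SESSIONID')
AND c.name = 'user_dump_dest';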

If you are interested in more, you may check Eddie Awad's OraQA – Oracle Question and Answer. Also, Metalink has a large How To library.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

Oracle body building – oracle-l –

I have been a member of this list for a long time, but I have never asked a question nor answered one yet; just a read-only follower :)

I filter the feeds by sender, since there are lots of messages going around. I may advise two efficient ways of following this list –

1. digest mode membership : http://www.freelists.org/webpage/oracle-l
2. following over an rss reader : http://www.freelists.org/archives/oracle-l/feed.rss

At most I can read maybe 10% of the content daily, but it is still very beneficial; there are really hidden RDBMS and OS experts out there, believe me.

Note: we are having difficulty reaching the wordpress domain from Türkiye, so many thanks to http://www.trproxy.net for making this post possible.

Direct Path Load Operations versus Integrity Constraints, Triggers and Instance Recovery

Recently two different forum threads triggered me to write this post –
1. http://forums.oracle.com/forums/message.jspa?messageID=2012276
2. http://www.ceturk.com/forum/forum_posts.asp?TID=11131

The direct-path operation terminology in Oracle is used when a session reads buffers from disk directly into the PGA (as opposed to the buffer cache in the SGA). During direct-path INSERT operations, the database appends the inserted data after the existing data in the table. Data is written directly into the datafiles, bypassing the buffer cache; free space within the existing data is not reused, and referential integrity constraints are ignored. You may prefer a direct path load when you have a large amount of data to load quickly and you want to load it in parallel for maximum performance, but there are costs to be aware of.

The picture and definitions below are the related wait events from the Performance Tuning Guide –
Listing 113 – Scattered Read-Sequential Read-Direct Path Read
* db file sequential read (single block read into one SGA buffer)
* db file scattered read (multiblock read into many discontinuous SGA buffers)
* direct path read (single or multiblock read into the PGA, bypassing the SGA)

With the conventional path load method, arrays of rows are inserted with standard SQL INSERT statements, and integrity constraints and insert triggers are automatically applied. But when you load data with the direct path, SQL*Loader disables some integrity constraints and all database triggers. The constraints that remain in force are:
* NOT NULL
* UNIQUE
* PRIMARY KEY (a unique constraint on not-null columns)
and the following constraints are automatically disabled by default:
* CHECK constraints
* Referential constraints (FOREIGN KEY)

To start SQL*Loader in direct path load mode, set the DIRECT parameter to true on the command line or in the parameter file. The SQL*Loader log file describes the constraints that were disabled, the ones that were re-enabled, and what error, if any, prevented re-enabling or validating each constraint. It also contains the name of the exceptions table specified for each loaded table.
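
A hypothetical invocation, with illustrative credentials and control file name:

$ sqlldr userid=scott/tiger control=emp.ctl log=emp.log direct=true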

In the demo below you may see that a SQL*Loader direct path load
– disables the foreign key constraint,
– ignores the foreign key constraint,
– ignores the trigger,
whereas a direct path insert just ignores the foreign key constraint;
Code Listing 113 – Direct Path Load Demo
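
For comparison, a direct path insert is the INSERT /*+ APPEND */ form; a minimal sketch with illustrative table names:

INSERT /*+ APPEND */ INTO sales_hist
SELECT * FROM sales_stage;
COMMIT;

Note that until the COMMIT, the same session cannot even query the table it just loaded (ORA-12838).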

Because SQL*Loader writes directly to the database files, all rows inserted up to the last data save will automatically be present in the database files if the instance is restarted. Changes do not need to be recorded in the redo log file to make instance recovery possible. But if an instance failure occurs, the indexes being built may be left in an unusable state, and unusable indexes must be rebuilt before you can use the table or partition. If redo log archiving is not enabled (you are operating in NOARCHIVELOG mode), then media recovery is of course not possible. Also, if the database or tablespace is in FORCE LOGGING mode, then direct path operations always produce redo, regardless of the NOLOGGING settings.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

References Used :
Oracle® Database Utilities 10g Release 2 (10.2) – Chapter 11 Conventional and Direct Path Loads