Instrumented Code is Better: An Example on How to Generate Session Level ASH Reports

Starting with 10g, if you have the required Diagnostics Pack license, ASH-based performance monitoring is, in my opinion, an alternative to traditional SQL tracing. You may either use Enterprise Manager to generate ASH reports or query the ASH views directly. The handy additional hint here is that you can apply filters to ASH data in order to focus on a specific session; the filter options on the ASH report screen in EM can be used for this purpose, or below is an example of monitoring waits through the v$active_session_history view.

-- at session 1
begin
	dbms_session.set_identifier('your_identifier');
	/* other useful options may be */
	dbms_application_info.set_module('your_module', 'your_action');
	dbms_application_info.set_client_info('your_client_info');
end;
/

select ... ;

-- while the query at session 1 continues to execute, switch to session 2
-- note that no commit is needed to see the client_identifier set from the first session;
-- these calls update the session's attributes directly in memory, so they are immediately visible in v$session
SELECT sid, client_identifier, module, action, client_info
  FROM v$session
 WHERE client_identifier = 'your_identifier';
       SID CLIENT_IDENTIFIER                                                MODULE                                           ACTION                           CLIENT_INFO
---------- ---------------------------------------------------------------- ------------------------------------------------ -------------------------------- ----------------------------------------------------------------
       273 your_identifier                                                  your_module                                      your_action                      your_client_info

-- filtered ASH information
-- now query the v$active_session_history columns you are interested in, with client_id = 'your_identifier' plus any additional filters you like
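A minimal sketch of such a filtered ASH query; the column choices here are illustrative, pick whichever v$active_session_history columns interest you:

```
SELECT sample_time, session_state, event, sql_id
  FROM v$active_session_history
 WHERE client_id = 'your_identifier'
 ORDER BY sample_time;
```

Remember that ASH samples active sessions once per second, so this shows what the session was waiting on (or that it was on CPU) at each sample, not every individual wait.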

The set options of the dbms_session and dbms_application_info supplied packages can be very handy, as in the above example; credit goes to Mark Rittman for this hint.

[Update on July 2 2009 ]
After Doug Burns’s comments on this post (check out the pingback below), I updated the header and the example.

With this short post I wanted to share the idea that setting a client identifier in an application can help especially during monitoring and troubleshooting, and to give an example on the ASH views. Using database services, even when you are running on a single-node environment, also helps by serving as a filter in instance activity reports. Additionally, I wanted to highlight how much less effort getting ASH information takes compared to 10046 SQL tracing for a database developer: with 10046 tracing a typical developer will need a DBA or unix admin to ftp the produced trace files, whereas simple SQL queries against the ASH views will guide you to understand, for example, the waits specific to a session.

So anyway, thanks to Doug Burns for taking the time to correct the ASH example in this post and for providing details on his blog.

Google pages will be migrating to Google sites :(

Google Pages is migrating to Google Sites, and Google Sites is forbidden in Türkiye. Since I have maintained all the SQL scripts mentioned here on my Google Pages domain for years, and these files are not a part of the planned migration, this migration will unfortunately cause lots of trouble for me and of course for the readers of this blog.

To my taste blogs must be short and to the point, so I always preferred to link the long SQL scripts inside my posts to text files uploaded to my Google Pages; this way the text file is also opened inside the browser rather than downloaded (you may check out this behavior on this example post here).

I have hundreds of files linked from hundreds of blog posts, so this file maintenance will take lots of time. I will first try to fix the ten all-time most-read posts according to the WordPress stats, and as a workaround I will upload the text SQL files as Word documents to the WordPress media library (you may check out an example migrated post here). I hope someone may advise a better alternative soon :)

I am busy with lots of reporting projects lately, most of them related to the 3G introduction in my country and the CRM application migration at my company. This week I will be attending the Oracle 11g Data Warehousing Masterclass with Mark Rittman in Istanbul, and at last I will have two days to get away from work and enjoy this seminar. And in four weeks I will be a family database guy: I am getting married at the end of this July, and I am very excited that for the first time in my 13-year career I will be having my first two consecutive weeks of holiday, as my honeymoon :)

ps: I was thinking, what if WordPress or similar free services also migrate with some constraints? Too boring even to imagine. Free services and their risks are like roses and their thorns.

Event-Driven Applications: Introduction to Oracle EDA Suite

Last week I was reading IBM’s Real-Time Data Analysis announcement, and this triggered me to ask the local Oracle office if they have a similar application. I am pretty aware of the features on the database side, thanks to the Concepts Guide and tahiti search. But especially with Oracle’s purchases over the last years, what Oracle now has outside the database is an important question mark for me.

And quickly the answer came: Oracle Event Driven Architecture, and within it the Complex Event Processing feature (a.k.a. BEA WebLogic Event Server).

Even though I am a database guy, I have really liked the application so far; maybe it is because the Event Processing Language (EPL) is very similar to SQL, it is Eclipse based (at least for now), the JRockit JVM can promise deterministic garbage collection, it has hot-deployment capabilities, and it includes a monitor->analyze->act business activity monitoring application (business-user-oriented dashboards and alerts without writing a single line of code). We have planned some PoC work, so maybe I will be sharing some of my experiences with this product later on. To make you more interested, here are some concepts of this product:
• Extend the relational model to support “continuous” query
• Construct “windows” over event streams
– Bounded by time or count
– Partitioned based on values
– Processed incrementally or in batches
– Defines a “working set” of events to then apply queries to
• Use relational operators to characterize the event streams
– Filtering, aggregation, and correlation of events
• Add pattern matching
– Supports track-and-trace scenarios (e.g. detecting missing events)

Pattern Matching Example (defined as regular expressions, in-memory continuous queries): Stock Trading “W” Pattern

	SELECT FIRST(x.time), LAST(z.time)
	  PATTERN (X+ Y+ W+ Z+)
	  DEFINE X AS (price < PREV(price)),
	         Y AS (price > PREV(price)),
	         W AS (price < PREV(price)),
	         Z AS (price > PREV(price))

So here is an Introduction to Oracle EDA Suite presentation you may want to check for more details.

Conclusions: being aware of a feature you have already purchased is very important in order not to re-invent a dumber wheel, and in order to be fast to market against the competition. So I hope Oracle develops a Concepts Guide for its Fusion Middleware applications soon, for people like me :)

“RAC Aware Software Development” Discussion

If you ask this question (is there a concept like RAC Aware Software Development?) to Oracle employees, they will not accept the concept, since RAC is positioned as completely application transparent. So this is another chance for me to enjoy the comfort of not being an Oracle employee :)

First of all, you pay extra for the RAC option, so it is of course rational behaviour to want to get the most out of it. Also, RAC is only an Oracle database-tier HA solution (though it provides features to help application HA); HA on the other layers should also be analyzed and solved. So an important question to answer is: “How long can you tolerate a service interruption? How much money do you lose when your service is down for n minutes?” The resulting architecture can change dramatically according to the answer to this question; each HA solution can provide only a certain amount of availability, and they all come with a certain cost. This costing is primarily a business decision rather than a technical one; the technical side is easy once the business decision is made and the budget is accepted.

Another important question to answer is “Why do we need RAC?”. The most likely answers are High Availability and/or horizontal scalability needs, and these are really strong motivations if you genuinely have them, so you may become willing to make this major change/migration in your environment. But what if all your migration efforts end up in failure, meaning your applications still fail when a node fails and/or your applications cannot scale across multiple nodes? This kind of result may harm your position in your organization if you are the trusted Oracle guy around there, so it is important to communicate the positioning of this Oracle option for your environment with your managers.

It is always best to experience problems or unexpected behaviour during the testing phase, not in production where you have your ongoing business. But the hardest part of testing is creating a workload similar to the production load; your test environment and scenarios must be as close as possible to the real workload. To achieve this, the Real Application Testing (RAT) option (an 11g new feature) provides a way to capture and replay the real workload on a test system, but depending on your application (for example, lots of distributed Oracle database calls over dblinks) the product may not be mature enough yet.

Oracle believes that a single-node environment cannot be more highly available than its RAC alternative even without any application change, but for me this claim is something to consume carefully. The internal mechanisms that play a role in a RAC environment are more complex than in a single-instance environment; as a result, if the DBA group responsible for managing this new environment is not experienced, or is afraid of it, this will cause additional downtime (I have experienced this scenario several times with ASM and RMAN). Also, Oracle believes that most of the problems related to RAC migrations are caused by misconfiguration or wrong sizing of hardware and/or software, but in a recent 11g five-node Linux consolidation project we experienced that even when the sizing is more than enough and all best practices are followed, you may still hit bugs. When these problems popped up our project was already in production; we contacted the Oracle developers immediately and got fixes one day after the diagnosis (we were not able to catch these bugs during testing, since RAT itself also had several bugs). We all need to accept that there can always be bugs or limitations in any kind of software, even the Oracle database :)

Also, and not specific to RAC migrations, any big change in a system (like changing the OS version, adding new hardware, changing a driver version, etc.) can magnify existing problems. So one of the initial steps of a RAC migration project may be checking tuning possibilities for the single-node environment (like SQL query tuning, avoiding parsing overhead, partitioning, indexing and clustering for high-load SQLs). All these strategies, and as a result some additional development and testing, will decrease the interconnect traffic dramatically.

In order to increase your applications' (not the database's) HA, you need to develop special connection methods and exception handling so that you can detect a node failure and, for example, refresh connection pools at the mid-tier. But the exceptions you need to manage may not be surfaced by your technology stack if you are not using the Oracle JDBC, OCI or ODP.NET drivers.

Conclusions: if you follow shared best practices (I listed my favorite links below), architect your systems according to your needs, and do careful load testing, you will minimize the risk of problems. Having a close relationship with Oracle is also an important success factor here. As a result, from my perspective, in order to get the most out of the RAC option you need to develop RAC-aware software! :)

Oracle Real Application Clusters Sample Code

Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide
Chapter 6 Introduction to Workload Management

OC4J Data Sources: Implicit Connection Caching and Fast Connection Failover by Frances Zhao

Oracle RAC Tuning Tips by Joel Goodman

Understanding RAC Internals by Barb Lundhild

Oracle Database Concepts Introduction Seminar

Attendance at this seminar will be free; it will be in the İstanbul Yıldız Teknik Üniversitesi Oditoryum room this Sunday, and the targeted attendees are university students. Hüsnü Şensoy will also be our guest for a session on Oracle Data Mining and Data Warehousing concepts. At the end of the seminar, Hüsnü and I will be answering questions about our summer internship program at Turkcell.

You can find more details of the seminar here.

Embedded Software License For Oracle Products

Let’s say you are a product development company and you want to develop your products on Oracle technologies, but of course for a competitive product pricing strategy Oracle’s list prices may frighten you :)

Here is a solution you may be interested in if you are a member of the Oracle PartnerNetwork: the Embedded Software License (ESL). This is a very restrictive license type, available since 10g, for Independent Software Vendors (ISVs) who embed Oracle technology into their products. With this option, the end user you sell your product to may not even be aware of the Oracle technology behind it, and should not be able to access it directly as a developer or an administrator.

You choose the Oracle edition and options you need for your product, and then you receive an 80% discount over their list price if your application gets accepted by Oracle. Also, you only pay 19% for support over this discounted 20% amount; for example, against a $100,000 list price you would pay $20,000 for the license and roughly $3,800 per year for support. But is pricing the only main benefit here? No, I don’t think so; embedding an Oracle Database and/or Oracle Application Server directly into an application can make you very fast to market if you already have a team with Oracle development experience, and it can lower your operational costs in the long run.

Here are some more details on this very attractive Oracle licensing option, but for much more information I advise you to contact your local Oracle sales representative.

ps: this may be helpful additional reading: Oracle Product Family and Pricing Highlights

Getting Started and Extending your Knowledge with Oracle Warehouse Builder 11g

I already mentioned here and here that I love Apex and OWB since they are Oracle-database-integrated development environments; this is simply because I am an old-school SQL and PL/SQL minded person, and these two development environments make you feel more powerful, which is IMHO a really cool feeling when chatting with the Java/SOA guys :)

I have been using OWB 11g for some time now, and I want to prepare a getting-started material to assist my colleagues. Apex has a two-day developer guide for this purpose, and OWB has its fantastic OTN OBEs.

If you are already aware of BI fundamentals and Oracle database fundamentals, you can go through these 16 OWB 11g OBEs within a day or two, and then I am pretty sure you will get the feeling I mentioned. As you may know, with 11gR1 OWB and Apex are now part of the standard database installation, so if you have an 11g installation you already have an OWB practice environment; following the first OBE will be enough to initialize OWB inside your 11g installation.

These OBEs include not only typical practices but also information on Warehouse Builder 11g architecture and components, and BI concepts like star schemas, snowflake schemas and slowly changing dimensions. Also, here is an additional viewlet which may give you an idea about OWB. Remember, whatever actions you design inside your mappings, they all deploy as standard PL/SQL packages; these old friends are the ETL executables for each of your BI needs. And OWB is a cost-free (database-license-included) option, and it is a highly optimized ETL tool which uses Oracle’s SQL capabilities.

And some readers may be thinking that since Oracle bought ODI, OWB will be dead soon: no, this is not true, and you may read about the future strategy on OWB’s corporate blog here. To my understanding, OWB and ODI will be parts of a combined BI solution strategy, like coffee and milk together, so this is not a coffee-or-tea choice :)

Force RMAN to ignore corrupted blocks with COPY command

To tell RMAN to permit corrupt blocks to be backed up, you must use the SET MAXCORRUPT command (note that this goes inside a RUN block);
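A minimal sketch of such a RUN block; the datafile number is a placeholder to adapt to your case, and 'n' is explained below:

```
RUN {
  SET MAXCORRUPT FOR DATAFILE 1 TO n;
  BACKUP DATAFILE 1;
}
```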


where ‘n’ is the number of corrupt blocks which will be allowed in the backup. However, the SET MAXCORRUPT command only applies to the RMAN BACKUP command, not the RMAN COPY command. To work around this problem for the RMAN COPY command, you must modify “recover.bsq”.

Find the line in “$ORACLE_HOME/rdbms/admin/recover.bsq” which looks like this:

sys.dbms_backup_restore.copyDataFile(full_name => full_name,

and add the following line immediately after that line:

max_corrupt => 1,

For more information please see metalink note 1069093.6.