For the fifth consecutive year my company made the 2007 list. Oracle is 22nd and Turkcell is 25th, above technology giants like HP, LG, and Intel.
I am very lucky to be a part of Turkcell, one of the largest Oracle customers in the world :)
After seeing Pawel’s post I also registered for the event. This three-hour briefing was mostly about Oracle Database 11g, and it gave me an opportunity to meet Oracle Server Technologies vice president Mark Townsend, find out how we can be involved in the 11g Release 2 beta program, and learn how we can participate as a partner in the marketing of our products and services at Oracle OpenWorld.
In order to attend you should be in one of the categories below;
– Independent Software Vendors
– Systems Integrators
– Consultants, and Pre-sales Consultants
And below are the scheduled next stops;
May 19, 2008: Czech Republic, Prague
May 20, 2008: Poland, Kraków
May 21, 2008: Sweden, Stockholm
May 22, 2008: Spain, Madrid
My impression is that Pawel was right about his expectations, since it was mostly just another colorful Oracle PowerPoint day. I tried to ask several questions and took advantage of speaking to Mark; those were the moments where we got somewhat technical. Since I believe it may not be appropriate to talk about 11g R2 in detail, I will skip those parts of the event and share my remaining notes as usual;
For 11g R2, let me briefly mention that besides the impressive Fusion Middleware new features, we database people will be living in the new Grid 2.0 era: ASM everywhere, a single RAC to rule them all, and edition-based redefinition for minimum change impact on applications ;)
Knowledge comes with experience and experimentation, but of course your own experiences will always be limited. Other than the documentation, I love reading support case studies. They are real-life experiences, and you can also find some “don’t do”s and bugs in them.
So I want to share a recent Ubtools support case study I liked very much;
And two complementary reminders;
This topic was discussed today on our forum group OracleTURK. First things first: what may be the reasons you need to find unused or read-only segments in your Oracle database?
– of course, if we know they are not needed we can easily get rid of them, right :)
– if you can identify your read-only data, you can move it into separate read-only tablespaces and compress those segments, gaining both backup duration and space.
If your database applications proactively handle this data lifecycle management need, or you can easily gather this information from them, then life will be much easier for you. Otherwise your database management software, for us Oracle of course :), is your friend again as usual; all you have to do is choose some combination of the features already supplied with your Oracle release, depending on your needs and their advantages and disadvantages.
After 10g, if you have the extra-cost option, Automatic Workload Repository (AWR, the DBA_HIST% views) helps a lot. But what if you are still on 9i, or you do not have that cost option? Auditing or Statspack data (the PERFSTAT schema’s tables) will help similarly if you configure them. Before getting into the details of the options I tried, be warned about the retention of the data you will be analyzing. For example, if the last 15 days of information is relevant for you to decide whether a segment is unused or can be made read-only, and your AWR retention is 7 days, then you will reach the wrong conclusion. So as an initial step, checking the AWR or Statspack retention settings is important.
SELECT retention FROM dba_hist_wr_control;
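If the retention turns out to be too short for your analysis window, it can be extended with the documented DBMS_WORKLOAD_REPOSITORY API; a minimal sketch, where the 21-day retention and 60-minute interval are just illustrative values (the parameters are in minutes):

```sql
-- extend AWR retention to 21 days; both values are in minutes
-- (21 days * 24 hours * 60 minutes = 30240)
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    retention => 30240,
    interval  => 60);
END;
/
```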
Or, if you will be depending on V$SQL-like dynamic performance view data, you must ensure that the last instance startup time is far enough in the past for your analysis needs.
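The instance start time is easy to check; a simple sketch:

```sql
-- how long has this instance been up?
SELECT startup_time,
       ROUND(SYSDATE - startup_time) days_up
  FROM v$instance;
```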
The options mentioned below may have their own performance costs, no pain no gain in the real world; so if you cannot pay for an AWR or Statspack kind of repository activity, you may prefer some kind of network sniffer tool capturing any SQL requested from the instance.
1- Database auditing based options
a) Standard auditing;
AUDIT SELECT ON BY ACCESS;
WHERE .. ;
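As a sketch of how this standard auditing option might be put together (the hr.employees object and the 15-day window are illustrative assumptions, and AUDIT_TRAIL must already be set to DB):

```sql
-- enable SELECT auditing on a sample table (hypothetical object)
AUDIT SELECT ON hr.employees BY ACCESS;

-- later, check which audited objects were actually read
SELECT obj_name, COUNT(*) read_count
  FROM dba_audit_trail
 WHERE action_name = 'SELECT'
   AND timestamp > SYSDATE - 15
 GROUP BY obj_name;
```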
b) Fine grained access control;
dbms_fga.add_policy (object_schema=>'..', object_name=>'..', policy_name=>'..');
FROM dba_fga_audit_trail a, dba_objects b
WHERE a.object_name = b.object_name
AND a.object_schema = b.owner
AND .. ;
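A complete, illustrative version of this fine grained auditing approach might look like the sketch below; the schema, object, and policy names are assumptions:

```sql
-- create a fine grained auditing policy on a sample table
BEGIN
  dbms_fga.add_policy(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'EMP_READ_MONITOR',
    statement_types => 'SELECT');
END;
/

-- which audited objects were read, and how often
SELECT a.object_schema, a.object_name, COUNT(*) read_count
  FROM dba_fga_audit_trail a
 GROUP BY a.object_schema, a.object_name;
```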
2- 10g AWR based options
a) Historical SQL execution plan statistics;
FROM dba_hist_sql_plan p, dba_hist_sqlstat s
WHERE p.sql_id = s.sql_id
AND .. ;
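As a sketch of this option, segments that never appear in any AWR-captured execution plan over the retention window are unused-segment candidates; the HR owner filter is an assumption:

```sql
-- tables of a given schema that never show up in AWR-captured plans
SELECT t.table_name
  FROM dba_tables t
 WHERE t.owner = 'HR'
   AND NOT EXISTS (
         SELECT NULL
           FROM dba_hist_sql_plan p, dba_hist_sqlstat s
          WHERE p.sql_id       = s.sql_id
            AND p.object_owner = t.owner
            AND p.object_name  = t.table_name);
```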
b) Historical segment statistics;
FROM dba_hist_seg_stat ss, dba_hist_seg_stat_obj o
WHERE ss.obj# = o.obj#
AND ss.dataobj# = o.dataobj#
AND ss.dbid = o.dbid
AND .. ;
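For example, a sketch that lists segments with no block changes recorded in the AWR window, which makes them read-only candidates (the zero-change threshold is an assumption; adjust it to your tolerance):

```sql
-- segments with zero recorded block changes in AWR history
SELECT o.owner, o.object_name,
       SUM(ss.logical_reads_delta)    total_reads,
       SUM(ss.db_block_changes_delta) total_changes
  FROM dba_hist_seg_stat ss, dba_hist_seg_stat_obj o
 WHERE ss.obj#     = o.obj#
   AND ss.dataobj# = o.dataobj#
   AND ss.dbid     = o.dbid
 GROUP BY o.owner, o.object_name
HAVING SUM(ss.db_block_changes_delta) = 0;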
3- Statspack based options
a) Historical SQL statistics;
COUNT(DECODE(s.command_type, 2, 1, 6, 1, 7, 1, 189, 1)) sql_change,
COUNT(DECODE(s.command_type, 3, 1)) sql_read
FROM perfstat.stats$sqltext s
WHERE s.sql_text LIKE '% '||t.table_name||' %'
AND .. ;
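Putting the fragments above together, a sketch of the full Statspack text-matching query might be as follows; the HR owner filter is an assumption, and since STATS$SQLTEXT stores the text in pieces, this crude LIKE match should be treated as a hint only:

```sql
-- count modifying vs reading statements per table, by crude text match
-- command_type: 2=INSERT, 3=SELECT, 6=UPDATE, 7=DELETE, 189=MERGE
SELECT t.table_name,
       COUNT(DECODE(s.command_type, 2, 1, 6, 1, 7, 1, 189, 1)) sql_change,
       COUNT(DECODE(s.command_type, 3, 1)) sql_read
  FROM dba_tables t, perfstat.stats$sqltext s
 WHERE t.owner = 'HR'
   AND s.sql_text LIKE '% '||t.table_name||' %'
 GROUP BY t.table_name;
```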
b) Historical segment statistics;
FROM perfstat.stats$seg_stat s, perfstat.stats$seg_stat_obj o
WHERE s.obj# = o.obj#
AND s.dataobj# = o.dataobj#
AND s.dbid = o.dbid
AND .. ;
4- Other options worth mentioning
DBA_TAB_MODIFICATIONS information is reset after statistics gathering, so you can easily capture the modification amounts for your segments since the last stats gathering time.
WHERE .. ;
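A minimal sketch of this option (remember to flush the in-memory monitoring info first, otherwise recent DML may be missing; the HR filter is an assumption):

```sql
-- flush in-memory DML monitoring info to DBA_TAB_MODIFICATIONS
BEGIN
  dbms_stats.flush_database_monitoring_info;
END;
/

-- modification amounts since the last statistics gathering
SELECT table_owner, table_name, inserts, updates, deletes, timestamp
  FROM dba_tab_modifications
 WHERE table_owner = 'HR';
```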
V$ACCESS can show the objects being accessed at the moment you are querying;
WHERE .. ;
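Since V$ACCESS is only a point-in-time view, it needs to be sampled repeatedly over your analysis window; a single-sample sketch:

```sql
-- objects currently being accessed by sessions, sampled right now
SELECT owner, object, type, COUNT(*) session_count
  FROM v$access
 WHERE owner NOT IN ('SYS', 'SYSTEM', 'PUBLIC')
 GROUP BY owner, object, type;
```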
And let’s meet Oracle’s new answer to this kind of need: Information Lifecycle Management (ILM).
This morning I was asked to check a 10gR2 physical standby instance which was slow to apply redo, with a rapidly growing lag. My initial check showed a duration of 9 to 12 minutes for each 250 MB redo log file to apply;
Mon May 5 06:03:17 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902209.arc
Mon May 5 06:13:20 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902210.arc
Mon May 5 06:23:07 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902211.arc
Mon May 5 06:32:51 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902212.arc
Mon May 5 06:42:40 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902213.arc
Mon May 5 06:52:01 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902214.arc
Mon May 5 07:00:52 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902215.arc
Mon May 5 07:09:58 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902216.arc
..
After a quick look from Unix SQL*Plus as sysdba, the Top 5 waits were;
PX Deq: Test for msg (4.5x amount)
free buffer waits
log file sequential read
PX Deq Credit: send blkd
checkpoint completed
And after going through Note:387343.1 and the Data Guard Redo Apply and Media Recovery Best Practices 10gR2 paper, I recommended increasing the buffer cache from 512 MB to 6 GB, reducing the recovery parallelism from 20 to 8, and also setting some additional parameters like;
*.db_cache_size=6G
*.log_buffer=78643200
*.large_pool_size=2G
*.pga_aggregate_target=10G
*.workarea_size_policy='AUTO'
*.parallel_min_servers=360
*.parallel_max_servers=520
*.parallel_threads_per_cpu=2
*.optimizer_mode=ALL_ROWS
*.optimizer_dynamic_sampling=4
*.cursor_sharing=EXACT

RECOVER MANAGED STANDBY DATABASE PARALLEL 8;
After the restart, as I monitored the apply process from the alert.log again, the database was now able to apply a 250 MB redo log file within 30 seconds to 1 minute;
Mon May 5 09:01:07 2008
ALTER DATABASE RECOVER automatic from '/assos_arcdir' standby database PARALLEL 8
Mon May 5 09:01:07 2008
Media Recovery Start
Managed Standby Recovery not using Real Time Apply
parallel recovery started with 8 processes
Mon May 5 09:01:52 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902217.arc
Mon May 5 09:02:29 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902218.arc
Mon May 5 09:03:41 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902219.arc
Mon May 5 09:04:39 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902220.arc
Mon May 5 09:05:33 2008
Media Recovery Log /assos_arcdir/arch_rtx_1_445653769_0000902221.arc
..
And the Top 5 waits were now;
PX Deq: Test for msg (x amount)
log file sequential read
checkpoint completed
db file parallel write
latch: checkpoint queue latch
So as my lessons learned, I really want to highlight two things; yes, these two again and again;
1- If you have enough physical memory, benefit from it to the fullest; do not run your production databases with a small SGA (critical for OLTP) or PGA (critical for OLAP), especially after 9iR2 (with large SGAs there is nearly no latching problem like in the old days). Come on, even on laptops we use SGAs sized in gigabytes nowadays.
2- Parallelism does not mean faster. I benefit from reducing the degree of parallelism more often than from increasing it; 4 or 8 is nearly always enough for my needs. I hope Oracle invests more in the automatic parallelism tuning feature in the near future, which is FALSE by default on 10gR2, so that, like automatic undo or automatic PGA management, automatic parallel tuning takes its place among my favorite automatic tuning Oracle features :)
Here are some of the SQL scripts I used during the above troubleshooting from SQL*Plus; like in the above standby case, sometimes we only have SQL*Plus and some simple V$ queries to gather the required troubleshooting information.
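As a sketch of the kind of simple V$ queries meant here, usable from a bare SQL*Plus session (the column choices are my assumptions, not the original scripts):

```sql
-- top 5 non-idle system waits since instance startup
SELECT *
  FROM (SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE wait_class <> 'Idle'
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 5;

-- what the managed recovery processes are doing on the standby
SELECT process, status, thread#, sequence#, block#, blocks
  FROM v$managed_standby;
```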
Each year I somehow find myself in a project which involves a huge data migration or database upgrade. So up to now I have blogged several times on my previous experiences;
For a quick summary of migration best practices, I advise you to pay close attention to your SGA and PGA memory pool tuning, keep the target database in NOARCHIVELOG mode with big (I prefer at least 10 GB), single-membered redo groups (10 or more) during data transfer, and test your migration setup several times until you are satisfied with the results. When you are within the expected time window, leave it there; tuning is an endless road, do not lose control :)
This week we needed a strategy to easily switch between migration (much more OLAP-like) and normal (much more OLTP-like) parameters. So I chose to use a pfile over an spfile for this kind of need and created a special OLAP-tuned parameter file plus an OLTP pfile from the current spfile. As a result, a normal startup will open the instance with the original OLTP-based parameters, which come from the pfile under $ORACLE_HOME/dbs
— clean shutdown and open with OLTP like default parameter file
SQL> shutdown immediate
SQL> startup
And whenever we need OLAP like migration parameters we will close the related instance and open with the special tuned OLAP pfile.
— clean shutdown and open with OLAP like parameter file
SQL> shutdown immediate
SQL> startup pfile=/../../FOR_MIGRATION_USE_ONLY_PFILE_BSCS.ora
In this pfile I used the IFILE option of the parameter file, so I make sure I first take all mandatory parameters, like the destinations, from the original OLTP-like pfile and then override only the ones needed for the OLAP-like settings. Below is a sample I share with you, which was designed for a 10.2.0.3 IBM AIX instance with 84 dual-core CPUs and ~200 GB of physical memory.
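The IFILE technique itself can be sketched like this; the path and every value below are illustrative assumptions, not the original settings:

```
# OLAP-tuned pfile sketch: inherit everything from the original
# OLTP pfile first, then override only the OLAP-like settings
IFILE=/oracle/product/10.2.0/dbs/initBSCS.ora   # hypothetical path

# OLAP-like overrides for the migration window (illustrative values)
*.sga_target=40G
*.pga_aggregate_target=60G
*.parallel_max_servers=256
*.db_writer_processes=8
```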
So of course, any time you need a similar OLAP-like parameter file, you need to customize and test the SGA, PGA, and parallelism parameters depending on the instance’s hardware and the Oracle release you will be working with.
The Reference Guide of your release and this query for hidden parameters may assist you during customization.
— how to query hidden parameters’ information based on x$ views
select a.ksppinm  "Parameter",
       b.ksppstvl "Session Value",
       c.ksppstvl "Instance Value",
       ksppdesc   "Desc"
  from x$ksppi a, x$ksppcv b, x$ksppsv c
 where a.indx = b.indx
   and a.indx = c.indx
   -- and ksppinm in ('_smm_px_max_size', '_smm_max_size', '_pga_max_size')
 order by a.ksppinm;