LIKE Escape Characters and Alternative Quoting Mechanism for String Literals

LIKE Escape Characters

The characters % and _ have special meaning in SQL LIKE clauses. You use % to match zero or more characters and _ to match exactly one character. If you want to interpret these characters literally in strings, you precede them with an escape character of your choice and declare it with the ESCAPE clause. For example, without escaping, the pattern '%_io_%' matches any name that simply contains "io" with at least one character on each side:

SELECT NAME, VALUE FROM v$parameter WHERE NAME LIKE '%_io_%';

sessions 49
license_max_sessions 0
license_sessions_warning 0
java_soft_sessionspace_limit 0
java_max_sessionspace_size 0
fileio_network_adapters
filesystemio_options
dbwr_io_slaves 0

With backslash (\) declared as the escape character via the ESCAPE clause, the same pattern matches only the names that contain the literal string "_io_":

SELECT NAME, VALUE FROM v$parameter WHERE NAME LIKE '%\_io\_%' ESCAPE '\';

dbwr_io_slaves 0
backup_tape_io_slaves FALSE
fast_start_io_target 0

Alternative Quoting Mechanism for String Literals in 10g

SQL statements can use character literals in expressions or conditions. If the literal itself contains a single quotation mark, we need to double it, as in 'Tonguç''s dog'. The second quotation mark acts as an escape character, and these additional quotation marks inside a character literal are both cumbersome and error prone.

With 10g we have an alternative: the quote operator, q. This new quote operator allows you to choose your own quotation mark delimiter and supports both CHAR and NCHAR literals. You can use any convenient delimiter, single-byte or multibyte, or any of the [], {}, (), <> character pairs. This increases readability and usability for applications;

set serveroutput on
declare
str varchar2(128);
begin
-- in English: "'Yes,' said Osman, 'is there any Galatasaray supporter left among you? I told you, one day everyone will be a Fenerbahçe supporter..'"
str := q'['Evet,' dedi Osman, 'Galatasaray'lı kaldı mı aranızda? Demiştim, bir gün herkes Fenerbahçe'li olacak..']';
dbms_output.put_line(str);
end;
/

'Evet,' dedi Osman, 'Galatasaray'lı kaldı mı aranızda? Demiştim, bir gün herkes Fenerbahçe'li olacak..'

PL/SQL procedure successfully completed

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Oracle® Database JDBC Developer’s Guide and Reference 10g Release 2 (10.2) LIKE Escape Characters
Metalink Note:311971.1 Subject: New Features For Oracle10g PL/SQL 10.x

Great news from Oracle Türkiye today :)

I received an email with the subject "Oracle University Türkiye Ocak-Nisan 2007 Eğitim Programı" (Oracle University Turkey January-April 2007 Training Program), and the mail said that Mr.Jonathan Lewis will be in İstanbul for a seminar;

"..Jonathan Lewis, an Oracle guru on the Oracle database, will be in İstanbul on February 27-28 with his training seminar titled "Cost Based Optimization, Indexing Strategies and Explain Plan". You can reach the seminar content by clicking this link.

Seminar Name: Cost Based Optimization, Indexing Strategies and Explain Plan
Seminar Date: 27-28 February 2007
..
For more information, please reach us by phone at 0212 329 68 48 or by e-mail at educations_tr@oracle.com.."

I learned these details immediately;
– ~40 attendees are planned for this event,
– the venue will be announced later, most probably a hotel's meeting lounge,
– the price will be around 750 Euro per person

As a group we were still working on his new book, so this was really good news for me, lots of questions to ask :) I also posted a comment on Mr.Lewis's blog and tried to show how excited we are :)

So who is Mr.Jonathan Lewis and why is he so important to us?

These lines are from the Oracle ACE list at OTN;
“Jonathan Lewis is a freelance consultant who specializes in design, troubleshooting, and advanced training in best use of the Oracle database engine. He is a frequent speaker on the international circuit, with presentations lasting from 45 minutes to three days. His first book, Practical Oracle8i, is still worth reading, despite being two generations behind the latest version, and his most recent book, Cost Based Oracle Fundamentals (Apress), is currently receiving enthusiastic reviews. Jonathan is currently a director of the UK Oracle User Group.”

More than this, Mr.Lewis is one of the three people I have admired most, followed, and accepted as my masters through the last eight years of my Oracle database administration and development career. The other two are Mr.Thomas Kyte and Mr.Steven Feuerstein. I trust these experts because they talk with numbers, with proofs. I try to read every single line they write; more than that, I try to understand what they are trying to explain, and I work through their books and blog posts. The effort is worth it, as I experienced in the development of my own career, and I advise all starters to read and follow them.

Mr.Jonathan Lewis was chosen as "Oracle Author of the Year 2006" for his new book, and Mr.Steven Feuerstein was chosen as "Oracle PL/SQL Developer of the Year 2006" by the readers of Oracle Magazine.

They are the owners of all these life-saving mottos, and of course more;
Universal mantra on data intensive processing
Beware of “Question Authorities”
The moral of the story

I hope to meet Mr.Lewis during this seminar, but since he is no different than Eric Clapton or Clint Eastwood for me, I am really not sure if I will be able to speak any words in front of him :) But I am sure that this time not reading but listening to him will be something very special in my life!

Mr.Lewis’s official site
Mr.Lewis’s blog
Practical Oracle 8i: Building Efficient Databases
Cost-Based Oracle Fundamentals

Oracle Best Practices Part 1

1- Put your Database into ARCHIVELOG Mode

Although backup and recovery operations can vary from one business to another, the basic principles follow these simple steps:

1. Run the database in ARCHIVELOG mode and archive redo logs to multiple locations,
2. Multiplex the online redo logs,
3. Maintain multiple concurrent backups of the control file,
4. Take frequent backups of physical datafiles and store them in a safe place, making multiple copies if possible,
5. Periodically do a complete restore to test your procedures.

As long as you have backups of the database and archived redo logs in safe storage, the original database can be recovered.

Oracle can be run in either of two modes:
* ARCHIVELOG – Oracle archives the filled online redo log files before reusing them in the cycle.
* NOARCHIVELOG – Oracle does not archive the filled online redo log files before reusing them in the cycle.

Running the database in ARCHIVELOG mode has the following benefits:
* The database can be completely recovered from both instance and media failure.
* The user can perform online backups, i.e., back up tablespaces while the database is open and available for use.
* Archived redo logs can be transmitted and applied to the standby database
* Oracle supports multiplexed archive logs to avoid any possible single point of failure on the archive logs.
* The user has more recovery options, such as the ability to perform tablespace-point-in-time recovery (TSPITR)

Running the database in NOARCHIVELOG mode has the following consequences:
* The user can only back up the database while it is completely closed after a clean shutdown.
* Typically, the only media recovery option is to restore the whole database, which causes the loss of all transactions issued since the last backup.

Most of the High Availability features of Oracle require you to enable ARCHIVELOG mode for your database. When you enable this mode, redo logs are archived instead of overwritten, and the archived logs are utilized by RMAN, Data Guard, LogMiner, Flashback and many others. I personally run even my laptop XE database in ARCHIVELOG mode, since NOARCHIVELOG mode allows no point-in-time recovery and no online backups; a database does not need to be a mission-critical system to deserve those capabilities.

Oracle databases are created in NOARCHIVELOG mode by default, but enabling ARCHIVELOG mode is simple;


conn / as sysdba
-- Let's start by checking the current archive log mode
archive log list

Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     24
Current log sequence           25

-- close the database
shutdown immediate;
-- now start the database in mount mode, switch to ARCHIVELOG and open it
startup mount exclusive;
alter database archivelog;
alter database open;

alter system switch logfile;
archive log list

Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     25
Next log sequence to archive   25
Current log sequence           26

Also, Oracle provides data dictionary views for the archived redo logs, such as:
v$parameter – Shows the location of the flash recovery area or archive destination where archived redo logs are created.
v$archived_log – Information about archived redo logs.
v$log_history – Contains information on previous redo logs
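
As a quick illustration, here is a minimal sketch of checking the most recent archives through v$archived_log;

-- list the latest archived redo logs and where they were written
SELECT sequence#, name, first_time, completion_time
FROM v$archived_log
ORDER BY sequence# DESC;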

2- Comment on Tables and Columns similar to your PL/SQL or SQL Source Codes

The COMMENT statement adds a comment about a table, view, materialized view, or column into the Oracle data dictionary. To drop a comment from the database, set it to the empty string ''.

This feature is what I call documentation in the database; it is always very helpful, especially when you cannot remember why you created a column after some years of development;


-- DROP TABLE tbl PURGE;
CREATE TABLE tbl (ncol number, vcol varchar2(16), dcol date);

-- Comment a table : COMMENT ON TABLE <table_name> IS '<comment>';
COMMENT ON TABLE tbl IS 'Speed xxx Responsible yyy Description zzz ...';
SELECT table_name, comments FROM user_tab_comments WHERE comments LIKE 'Speed xxx%';

TABLE_NAME                     COMMENTS

TBL                            Speed xxx Responsible yyy Description zzz ...

-- Comment a column :      COMMENT ON COLUMN <table_name.column_name> IS '<comment>';
COMMENT ON COLUMN tbl.ncol IS 'n Digit Customer Code, Generated From m Sequence ...';
SELECT table_name, column_name, comments
FROM user_col_comments
WHERE comments LIKE '%Customer%';

TABLE_NAME                     COLUMN_NAME                    COMMENTS

TBL                            NCOL                           n Digit Customer Code, Generated From m Sequence ...
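
Dropping a comment is simply setting it back to the empty string, as mentioned above;

COMMENT ON TABLE tbl IS '';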

3- Use DBMS_STATS for statistic collection

The collection of data that describes the database and the objects in it is very important for the performance of your queries.

The query optimizer uses the statistics collected by the DBMS_STATS package to estimate and choose:
– Selectivity of predicates
– Cost of each execution plan
– Access method and join method
– CPU and I/O costs

The types of optimizer statistics are;
Object statistics
– Table statistics
– Column statistics
– Index statistics
System statistics
– I/O performance and utilization
– CPU performance and utilization

So how can statistics be gathered;
– Automatic statistics gathering; introduced with 10g, GATHER_STATS_JOB
– Manual statistics gathering; DBMS_STATS package or old ANALYZE statement
– Dynamic sampling; introduced with 10g, used to automatically collect statistics when:
* The cost of collecting the statistics is minimal compared to the execution time
* The query is executed many times

The DBMS_STATS package has been available since Oracle8i and is better than ANALYZE because;
– it can speed up the stats collection process;
– tables can be analyzed in parallel,
– you can select only the STALE (10% changed) tables for analyzing, instead of all of them every time,
– it can copy stats stored in one database to another,
– you can lock the statistics,
– you can manipulate the statistics,
– you can revert back to previous stats,
– you can migrate to system statistics which enable the CBO to use CPU and I/O characteristics
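
As a quick illustration, a minimal manual gathering call might look like this (HR.EMPLOYEES is simply the sample schema table used elsewhere in this post);

-- gather table, column and index statistics in parallel, letting Oracle pick the sample size
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, degree => 2, cascade => TRUE);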

How to verify that statistics are collected and valid;


-- Table based
SELECT last_analyzed analyzed, sample_size, monitoring
FROM dba_tables WHERE table_name ='EMPLOYEES';

-- Column based
SELECT column_name, num_distinct, histogram, num_buckets, density, last_analyzed
FROM dba_tab_col_statistics WHERE table_name  ='EMPLOYEES'
ORDER BY column_name;

-- Index based
SELECT index_name , num_rows , last_analyzed , distinct_keys , leaf_blocks ,
avg_leaf_blocks_per_key, join_index
FROM dba_indexes WHERE table_name = 'EMPLOYEES' ORDER BY index_name;

How to copy statistics with DBMS_STATS

You can copy statistics from a production to a test database to facilitate tuning. For example, to copy a schema’s statistics:

1. Use the DBMS_STATS.CREATE_STAT_TABLE procedure in the production database to create a user-defined statistics table.
2. Use the DBMS_STATS.EXPORT_SCHEMA_STATS procedure in the production database to copy statistics from the data dictionary to the user-defined statistics table from step 1.
3. Use the Export and Import utilities to transfer the statistics to a corresponding user-defined statistics table in the test database.
4. Use the DBMS_STATS.IMPORT_SCHEMA_STATS procedure to import the statistics into the data dictionary in the test database.

The DBMS_STATS package can also be used to back up statistics prior to analyzing objects. The backup can be used to:
– Restore old statistics and
– Study changes in data characteristics over time


-- Step 1. Create the table to hold the statistics:
BEGIN
DBMS_STATS.CREATE_STAT_TABLE
('HR'      /* schema name */
,'STATS'   /* statistics table name */
,'SAMPLE'  /* tablespace */
);
END;
/

-- Step 2. Copy the statistics into the table:
BEGIN
DBMS_STATS.EXPORT_TABLE_STATS
('HR'        /* schema name */
,'EMP'       /* table name */
, NULL       /* no partitions */
,'STATS'     /* statistics table name */
,'CRS990601' /* id for statistics */
, TRUE       /* index statistics */
);
END;
/

-- Step 3. Export the STATS table, and then import it into the second database.

-- Step 4. Copy the statistics into the data dictionary:
BEGIN
DBMS_STATS.IMPORT_TABLE_STATS
('HR'        /* schema name */
,'EMP'       /* table name */
, NULL       /* no partitions */
,'STATS'     /* statistics table name */
,'CRS990601' /* id for statistics */
, TRUE       /* index statistics */
);
END;
/

How to lock statistics with DBMS_STATS

With Oracle Database 10g, you can also lock statistics on a specified table with the new LOCK_TABLE_STATS procedure of the DBMS_STATS package to prevent automatic statistics gathering on that table.

If a table’s data is so volatile that the statistics become stale between statistics gathering intervals, you may choose to lock it when the table is empty. The optimizer then performs dynamic sampling on the table whenever required to get current statistics. You can also lock statistics on a volatile table at a point when it is fully populated so that the table statistics are more representative of the table population.

You can lock statistics at the schema level using the LOCK_SCHEMA_STATS procedure.
You can query the STATTYPE_LOCKED column in the {USER | ALL | DBA}_TAB_STATISTICS view to determine if the statistics on the table are locked.

You can use the UNLOCK_TABLE_STATS procedure to unlock the statistics on a specified table. When you lock the statistics on a table, all of the dependent statistics are considered locked. This includes table statistics, column statistics, histograms, and dependent index statistics. Dynamic sampling involves an overhead and is repeated for the same objects unless statistics are gathered.
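
A minimal sketch of the lock, check and unlock cycle, again using the HR sample schema;

EXEC DBMS_STATS.LOCK_TABLE_STATS('HR', 'EMPLOYEES');
SELECT stattype_locked FROM dba_tab_statistics WHERE owner = 'HR' AND table_name = 'EMPLOYEES';
EXEC DBMS_STATS.UNLOCK_TABLE_STATS('HR', 'EMPLOYEES');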

How to migrate to system statistics with DBMS_STATS


-- Step 1. Check for system statistics if they are gathered before:

SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

PNAME                               PVAL1
------------------------------ ----------
CPUSPEED
CPUSPEEDNW                        495,977
IOSEEKTIM                              10
IOTFRSPEED                           4096
MAXTHR
MBRC
MREADTIM
SLAVETHR
SREADTIM

9 rows selected.

-- Step 2. Start gathering system statistics for a period:

exec dbms_stats.gather_system_stats('start');

-- During this period load on the database will determine the accuracy of the statistics representation.
-- Sample load script - http://tonguc.oracleturk.org/index.php/2006/12/13/migrating-to-system-statistics/

-- Step 3. Stop gathering system statistics:

exec dbms_stats.gather_system_stats('stop');

-- Step 4. Check for new system statistics gathered:

SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

PNAME                               PVAL1
------------------------------ ----------
CPUSPEED                              496
CPUSPEEDNW                        495,977
IOSEEKTIM                              10
IOTFRSPEED                           4096
MAXTHR                            7029760
MBRC                                   88
MREADTIM                           26,065
SLAVETHR
SREADTIM                              4,8

9 rows selected.

4- Using Recovery Manager(RMAN) for Backup and Recovery needs

RMAN is a utility that backs up, restores, and recovers Oracle databases. You can use it with or without the central information repository called a recovery catalog. If you do not use a recovery catalog, RMAN uses the database’s control file to store information necessary for backup and recovery operations. You can use RMAN in conjunction with a media manager to back up files to tertiary storage.

How to start RMAN without a Catalog


C:\>rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Sal Oca 9 15:37:35 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

connected to target database: XE (DBID=2488207843)

RMAN>

How to configure Backup Parameters


CONFIGURE BACKUP OPTIMIZATION ON;                   # do not back up unchanged data files
CONFIGURE MAXSETSIZE TO 2 G;                        # make filesize <= 2GB
CONFIGURE ENCRYPTION FOR DATABASE ON;               # encrypt backups
CONFIGURE ENCRYPTION ALGORITHM 'AES256';            # use 256 bit encryption
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;       # default
# Configure the number of server processes (channels) that write backups to DISK.
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
# Set the retention policy to a recovery window of 30 days. This ensures that RMAN retains
# all backups needed to recover the database to any point in time in the last 30 days.
# You can use the DELETE OBSOLETE command to delete backups that are no longer required by
# the retention policy. To exclude a backup from consideration by the policy, you can use
# the KEEP option with the BACKUP command.
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
# Enable automatic controlfile backup after each database or archivelog backup.
CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> show all;

RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 30 DAYS;
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO 2 G;
CONFIGURE ENCRYPTION FOR DATABASE ON;
CONFIGURE ENCRYPTION ALGORITHM 'AES256';
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'D:\ORACLEXE\APP\ORACLE\PRODUCT\10.2.0\SERVER\DATABASE\SNCFXE.ORA'; # default

Run Full Backup


RMAN> report need backup;

RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 30 days
Report of files that must be backed up to satisfy 30 days recovery window
File Days  Name
---- ----- -----------------------------------------------------
1    336   D:\ORACLEXE\ORADATA\XE\SYSTEM.DBF
2    336   D:\ORACLEXE\ORADATA\XE\UNDO.DBF
3    336   D:\ORACLEXE\ORADATA\XE\SYSAUX.DBF
4    336   D:\ORACLEXE\ORADATA\XE\USERS.DBF

# mkdir c:\backup_xe1
# mkdir h:\backup_xe2

# RMAN-06770: backup encryption requires Enterprise Edition
CONFIGURE ENCRYPTION FOR DATABASE OFF;
# RMAN-06908: WARNING: operation will not run in parallel on the allocated channels
# RMAN-06909: WARNING: parallelism requires Enterprise Edition
RUN
{
ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT 'c:\backup_xe1\%U';
ALLOCATE CHANNEL d2 DEVICE TYPE DISK FORMAT 'h:\backup_xe2\%U';
# AS BACKUPSET is the configured default when backing up to disk (see the SHOW ALL output above)
BACKUP DATABASE PLUS ARCHIVELOG;
}

RMAN> report need backup;

RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 30 days
Report of files that must be backed up to satisfy 30 days recovery window
File Days  Name
---- ----- -----------------------------------------------------

RMAN> list backup summary;

List of Backups
===============
Key     TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
1       B  A  A DISK        09/01/2007      1       1       NO         TAG20070109T154753
2       B  A  A DISK        09/01/2007      1       1       NO         TAG20070109T154753
3       B  F  A DISK        09/01/2007      1       1       NO         TAG20070109T154809
4       B  A  A DISK        09/01/2007      1       1       NO         TAG20070109T155150
5       B  F  A DISK        09/01/2007      1       1       NO         TAG20070109T155157

RMAN> validate backupset 1 ;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=23 devtype=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: sid=36 devtype=DISK
channel ORA_DISK_1: starting validation of archive log backupset
channel ORA_DISK_1: reading from backup piece C:\BACKUP_XE12I7351A_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=C:\BACKUP_XE12I7351A_1_1 tag=TAG20070109T154753
channel ORA_DISK_1: validation complete, elapsed time: 00:00:05

RMAN> BACKUP VALIDATE DATABASE;

Starting backup at 09/01/2007
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00004 name=D:\ORACLEXE\ORADATA\XE\USERS.DBF
channel ORA_DISK_2: starting full datafile backupset
channel ORA_DISK_2: specifying datafile(s) in backupset
input datafile fno=00003 name=D:\ORACLEXE\ORADATA\XE\SYSAUX.DBF
input datafile fno=00002 name=D:\ORACLEXE\ORADATA\XE\UNDO.DBF
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:24
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=D:\ORACLEXE\ORADATA\XE\SYSTEM.DBF
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:41
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:36
Finished backup at 09/01/2007

Creating and Updating Duplicate Databases with RMAN


# Duplicate database to TARGET's current state.
DUPLICATE TARGET DATABASE TO CLONE;

# Duplicate database to TARGET's state 4 days ago.
DUPLICATE TARGET DATABASE TO CLONE UNTIL TIME 'SYSDATE-4';

Block Recovery Solution Example with RMAN


SQL> select * from emp;
select * from emp
*
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 4, block # 24165)
ORA-01110: data file 4: 'D:\ORACLE\ORADATA\USERS01.DBF'

SQL> select header_file, header_block from dba_segments where segment_name = 'EMP';

HEADER_FILE HEADER_BLOCK
----------- ------------
          4        24163

SQL> select * from v$backup_corruption;

RECID  STAMP      SET_STAMP  SET_COUNT  PIECE#  FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  MAR  CORRUPTION_TYPE
-----  ---------  ---------  ---------  ------  -----  ------  ------  ------------------  ---  ---------------
    1  550688483  550688389         46       1      4   24165       1                   0  YES  CORRUPT

The above output confirms that block 24165 in file 4 is indeed corrupt. We can recover it using the following command.


RMAN> run {blockrecover datafile 4 block 24165;}

Starting blockrecover at 25-OCT-06
using channel ORA_DISK_1

channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00004
channel ORA_DISK_1: restored block(s) from backup piece 1
piece handle=D:\ORACLE\FLASH_RECOVERY_AREA\BACKUPSET\2005_02_19\O1_MF_NNNDF_TAG20050219T164615_11FNO9BQ_.BKP tag=TAG20050219T164615
channel ORA_DISK_1: block restore complete

starting media recovery
media recovery complete

Finished blockrecover at 25-OCT-06

SQL> select * from emp;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
----- ------ ----- ---- --------- ---- ---- ------
7369 SMITH CLERK 7902 17-DEC-80 800 20
…
14 rows selected.

Continue reading with Part 2

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Oracle® Database Administrator’s Guide 10g Release 2 (10.2) Chapter 7 Managing Archived Redo Logs Choosing Between NOARCHIVELOG and ARCHIVELOG Mode
Archive Log Mode Blog post by Mr.Thomas Kyte
BASICS OF ORACLE BACKUP AND RECOVERY
Understanding System Statistics by Jonathan Lewis
“CPU costing, mreadtim”
http://www.psoug.org/reference/dbms_stats.html
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2) Chapter 14 Managing Optimizer Statistics Gathering Statistics with DBMS_STATS Procedures
http://www.psoug.org/reference/rman_demos.html
http://www.psoug.org/reference/rman.html
Oracle® Database Backup and Recovery Advanced User’s Guide 10g Release 2 (10.2) Creating and Updating Duplicate Databases with RMAN
Backup Validation with RMAN
When to Use Block Media Recovery

Getting Rows N through M of a Result Set From Database

You would like to fetch data and sort it based on some field. As this query returns approximately 100 records, you would like to cut the result set into four pages of 25 records each, and you would like to give a sequence number to each record.

SELECT *
FROM (SELECT a.*, ROWNUM rnum
      FROM (your query, including the ORDER BY) a
      WHERE ROWNUM <= MAX_ROWS)
WHERE rnum >= MIN_ROWS;
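
As a concrete sketch, assuming the HR sample schema, fetching rows 26 through 50 of the employees sorted by last name would look like this;

SELECT *
FROM (SELECT a.*, ROWNUM rnum
      FROM (SELECT employee_id, last_name FROM employees ORDER BY last_name) a
      WHERE ROWNUM <= 50)
WHERE rnum >= 26;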

A simple test done on SQL*Plus is as follows;

Code listing 50 : Getting Rows N through M of a Result Set From Database Example

Here is also a brief explanation of the SQL*Plus autotrace statistics used above;

db block gets : Number of buffer cache blocks read in current mode
consistent gets : Number of buffer cache blocks read in consistent-read mode
physical reads : Number of blocks read from disk
redo size : Amount of redo generated (for DML statements)
sorts (memory) : Number of sorts performed in memory
sorts (disk) : Number of sorts performed using temporary disk storage
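
These counters come from the SQL*Plus autotrace facility, which you can switch on before running the query;

-- show only the execution statistics, suppressing the query output
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM employees;
SET AUTOTRACE OFF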

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
“getting rows N through M of a result set” thread on Asktom

Project Lockdown by Arup Nanda

A great article by Mr.Arup Nanda, as usual, and one of the best series on Oracle I have ever read. Mr.Arup Nanda was named "DBA of the Year" by Oracle Magazine in 2003, and he is an Oracle ACE.

This article discusses a phased approach to securing your database infrastructure, here are the highlights;

Phase 1 – Duration: One Day
1.1 Remove Default Passwords
1.2 Configure Oracle Binary Permissions
1.3 Secure Other Executables
1.4 Use umask
1.5 Limit SYSDBA Login
1.6 Create a Listener Password
1.7 Protect the Listener
1.8 Trim Sweeping Privileges
1.9 Change DBSNMP Password

Phase 2 – Duration: One Week
2.1 Remove utl_file_dir
2.2 Limit OS Authentication
2.3 Disable Remote OS Authentication
2.4 Secure SQL*Plus Using Product Profile
2.5 Rein In SQL*Plus
2.6 Wrap Sensitive Code
2.7 Convert Derived Grants to Direct Grants
2.8 Limit Tablespace Quotas
2.9 Monitor Listener Logs for Attempted Break-Ins
2.10 Audit and Analyze User Access

Phase 3 – Duration: One Month
3.1 Remove Passwords from Scripts
3.2 Remove Password from RMAN
3.3 Move DBA Scripts to Scheduler
3.4 Lock Down Objects
3.5 Create Profiles of Database Users
3.6 Create and Analyze Object Access Profiles
3.7 Enable Auditing for Future Objects
3.8 Restrict Access from Specific Nodes Only

Phase 4 – Duration: One Quarter
4.1 Enable Fine Grained Auditing
4.2 Activate a Virtual Private Database
4.3 Mask Sensitive Columns
4.4 Encrypt Sensitive Data
4.5 Secure Backups
4.6 Mine History from Archived Logs
4.7 Conclusion

Continue reading the whole article.

Also, you may want to check the two articles below by Mr.Arup Nanda, which are among the all-time most read articles published on Oracle Technology Network;

Oracle Database 10g: The Top 20 Features for DBAs

Oracle Database 10g: Top Features for DBAs Release 2 Features Addendum

Oracle Business Intelligence and Extraction, Transformation, Loading(ETL) Part 1

1- Introduction to Parallel Execution

Oracle 7.1 introduced the parallel query option in 1994. Usually a full scan is a good candidate for parallel query if you are not on an I/O-bound system.

Also, contrary to common perception, you do NOT need more than one CPU in order to use this option; in other words, a degree of parallelism greater than the number of CPUs on the system can still pay off.

Consider an index build: you read, sort, and write. While you are reading, you are not using the CPU, so if another process were sorting at the same time, that would be great. Likewise, while you are sorting, you are not using the disk, so if someone else could be reading or writing at that moment, that would be great.

Parallel execution dramatically reduces response time for data-intensive operations on large databases that are typically associated with decision support systems (DSS) and data warehouses. You can also implement parallel execution on certain types of online transaction processing (OLTP) and hybrid systems. Parallel execution improves processing for:
• Queries requiring large table scans, joins, or partitioned index scans
• Creation of large indexes
• Creation of large tables (including materialized views)
• Bulk inserts, updates, merges, and deletes
• Sorting large volumes of data
You can also use parallel execution to access object types within an Oracle database. For example, you can use parallel execution to access large objects (LOBs).

You can use parallel execution for any of the following:
• Access methods: Some examples are table scans, index full scans, and partitioned index range scans.
• Join methods: Some examples are nested loop, sort merge, hash, and star transformation.
• DDL statements: Some examples are CREATE TABLE AS SELECT, CREATE INDEX, REBUILD INDEX, REBUILD INDEX PARTITION, and MOVE/SPLIT/COALESCE PARTITION. You can normally use parallel data definition language (DDL) where you use regular DDL. There are, however, some additional details to consider when designing your database. One important restriction is that parallel DDL cannot be used on tables with object or LOB columns. All of these DDL operations can be performed in NOLOGGING mode for either parallel or serial execution. The CREATE TABLE statement for an index-organized table can be parallelized either with or without an AS SELECT clause. Different parallelism is used for different operations. The Parallel CREATE (PARTITIONED) TABLE… AS SELECT and parallel CREATE (PARTITIONED) INDEX statements run with a degree of parallelism equal to the number of partitions. Parallel operations require accurate statistics to perform optimally.
• DML statements: Some examples are INSERT AS SELECT, updates, deletes, and MERGE operations. Parallel DML (parallel insert, update, merge, and delete) uses parallel execution mechanisms to speed up or scale up large DML operations against large database tables and indexes. You can also use INSERT … SELECT statements to insert rows into multiple tables as part of a single DML statement. You can normally use parallel DML where you use regular DML. Although DML normally includes queries, the term parallel DML refers only to inserts, updates, merges, and deletes done in parallel.
• Miscellaneous SQL operations: Some examples are GROUP BY, NOT IN, SELECT DISTINCT, UNION, UNION ALL, CUBE, and ROLLUP, as well as aggregate and table functions.
• Parallel query: You can parallelize queries and subqueries in SELECT statements, as well as the query portions of DDL statements and DML statements (INSERT, UPDATE, DELETE, and MERGE).
• SQL*Loader: You can parallelize the use of SQL*Loader where large amounts of data are routinely encountered.

When to Implement and When NOT to Implement Parallel Execution

The benefits of parallel execution can be seen in DSS and data warehousing environments. OLTP systems can also benefit from parallel execution during batch processing and during schema maintenance operations such as creation of indexes. The average simple data manipulation language (DML) or SELECT statements that characterize OLTP applications would not see any benefit from being executed in parallel.

Parallel execution is NOT normally useful for:
• Environments in which the typical query or transaction is very short (a few seconds or less). This includes most online transaction systems. Parallel execution is not useful in these environments because there is a cost associated with coordinating the parallel execution servers; for short transactions, the cost of this coordination may outweigh the benefits of parallelism.
• Environments in which the CPU, memory, or I/O resources are already heavily utilized. Parallel execution is designed to exploit additional available hardware resources; if no such resources are available, then parallel execution does not yield any benefits and indeed may be detrimental to performance.

How Parallel Execution Works

Parallel execution divides the task of executing a SQL statement into multiple small units, each of which is executed by a separate process. The incoming data can be divided into parts (called granules). The user shadow process that is going to execute a query in parallel takes on the role as parallel execution coordinator or query coordinator. The query coordinator does the following:
• Parses the query and determines the degree of parallelism
• Allocates one or two sets of slaves (threads or processes)
• Controls the query and sends instructions to the PQ slaves
• Determines which tables or indexes need to be scanned by the PQ slaves
• Produces the final output to the user
At execution time, the coordinator also performs the parts of the plan that execute serially (such as accessing tables in serial if they are small or have no hint or degree of parallelism set). Ranging is also done serially to determine the ranges of keys to be distributed from producer slaves to consumer slaves who are sorting or otherwise must consume specific ranges of rows.

When using EXPLAIN PLAN with parallel queries, one parallel plan is compiled and executed. This plan is derived from the serial plan by allocating row sources specific to the parallel support in the Query Coordinator (QC) plan. The table queue row sources (PX Send and PX Receive), the granule iterator, and buffer sorts, required by the two slave set PQ model, are directly inserted into the parallel plan. This plan is the exact same plan for all the slaves if executed in parallel or for the QC if executed in serial.


CREATE TABLE emp2 AS SELECT * FROM employees;
ALTER TABLE emp2 PARALLEL 2;

EXPLAIN PLAN FOR
SELECT SUM(salary) FROM emp2 GROUP BY department_id;
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());

--------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU) |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |          |   107 |  2782 |     3 (34)  |        |      |            |
|   1 |  PX COORDINATOR          |          |       |       |             |        |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10001 |   107 |  2782 |     3 (34)  |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY         |          |   107 |  2782 |     3 (34)  |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE           |          |   107 |  2782 |     3 (34)  |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH        | :TQ10000 |   107 |  2782 |     3 (34)  |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY      |          |   107 |  2782 |     3 (34)  |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR |          |   107 |  2782 |     2 (0)   |  Q1,00 | PCWP |            |
|   8 |         TABLE ACCESS FULL| EMP2     |   107 |  2782 |     2 (0)   |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------

The table EMP2 is scanned in parallel by one set of slaves while the aggregation for the GROUP BY is done by the second set. The PX BLOCK ITERATOR row source represents the splitting up of the table EMP2 into pieces so as to divide the scan workload between the parallel scan slaves. The PX SEND and PX RECEIVE row sources represent the pipe that connects the two slave sets as rows flow up from the parallel scan, get repartitioned through the HASH table queue, and then read by and aggregated on the top slave set. The PX SEND QC row source represents the aggregated values being sent to the QC (Query Coordinator) in random (RAND) order. The PX COORDINATOR row source represents the QC or Query Coordinator which controls and schedules the parallel plan appearing below it in the plan tree.

Degree of Parallelism (DOP)

The parallel execution coordinator may enlist two or more of the instance's parallel execution servers to process a SQL statement. The number of parallel execution servers associated with a single operation is known as the degree of parallelism. A single operation is a part of a SQL statement, such as an ORDER BY or a full table scan. The degree of parallelism applies directly only to intra-operation parallelism. If inter-operation parallelism is possible, the total number of parallel execution servers for a statement can be twice the specified degree of parallelism. No more than two sets of parallel execution servers can run simultaneously. Each set of parallel execution servers may process multiple operations. Only two sets of parallel execution servers need to be active to guarantee optimal inter-operation parallelism.

Parallel execution is designed to effectively use multiple CPUs and disks to answer queries quickly. When multiple users use parallel execution at the same time, it is easy to quickly exhaust available CPU, memory, and disk resources. The default DOP is used when you ask to parallelize an operation but you do not specify a DOP in a hint or in the definition of a table or index. The default DOP is appropriate for most applications.

If no parallel hints are used and there is no default degree of parallelism for the table in the dictionary:
• Execution for that table is serial
• When parallelism is enforced with the ALTER SESSION FORCE PARALLEL … command, the DOP for a SQL statement is determined by the product of CPU_COUNT (which is, by default, the number of CPUs on the system) and the PARALLEL_THREADS_PER_CPU parameter.
However, the actual number of processes that are used is limited by their availability on the requested instances during run time. The PARALLEL_MAX_SERVERS initialization parameter sets an upper limit on the total number of parallel execution servers that an instance can have.
If a minimum fraction of the desired parallel execution servers is not available (specified by the PARALLEL_MIN_PERCENT initialization parameter), a user error is produced. You can retry the query when the system is less busy.

Parallelization Rules for SQL Statements

A SQL statement can be parallelized if it includes a parallel hint or if the table or index being operated on has been declared PARALLEL with a CREATE or ALTER statement. In addition, a DDL statement can be parallelized with the PARALLEL clause. However, not all of these methods apply to all types of SQL statements. Parallelization has two components: the decision to parallelize and the DOP. These components are determined differently for queries, DDL operations, and DML operations.

To determine the DOP, the Oracle Server looks at the reference objects:
• Parallel query looks at each table and index in the portion of the query that is being parallelized to determine which is the reference table. The basic rule is to pick the table or index with the largest DOP.
• For parallel DML (INSERT, UPDATE, MERGE, and DELETE), the reference object that determines the DOP is the table being modified by an insert, update, or delete operation. Parallel DML also adds some limits to the DOP to prevent deadlock. If the parallel DML statement includes a subquery, the subquery’s DOP is the same as the DML operation.
• For parallel DDL, the reference object that determines the DOP is the table, index, or partition being created, rebuilt, split, or moved. If the parallel DDL statement includes a subquery, the subquery’s DOP is the same as the DDL operation.

Disabling Parallel Execution

You disable parallel SQL execution with an ALTER SESSION DISABLE PARALLEL DML|DDL|QUERY statement. All subsequent DML (INSERT, UPDATE, DELETE), DDL (CREATE, ALTER), or query (SELECT) operations are executed serially after such a statement is issued. They are executed serially regardless of any PARALLEL clause that is associated with the statement or any parallel attribute that is associated with the table or indexes involved.

The following statement disables parallel DDL operations:
ALTER SESSION DISABLE PARALLEL DDL;

You can also use the ALTER statement to change the PARALLEL state of tables and indexes to NOPARALLEL.
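
For example, a minimal sketch using the emp2 demo table created earlier;

ALTER SESSION DISABLE PARALLEL QUERY;
ALTER TABLE emp2 NOPARALLEL;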

Hints for Parallel Execution

PARALLEL: The PARALLEL hint lets you specify the desired number of concurrent servers that can be used for a parallel operation. The hint applies to the SELECT, INSERT, UPDATE, and DELETE portions of a statement as well as to the table scan portion.
NO_PARALLEL: The NO_PARALLEL hint overrides a PARALLEL specification in the table clause.
PQ_DISTRIBUTE: The PQ_DISTRIBUTE hint improves the performance of parallel join operations. The optimizer ignores the distribution hint if both tables are serial.
PARALLEL_INDEX: The PARALLEL_INDEX hint specifies the desired number of concurrent servers that can be used to parallelize index range scans for partitioned indexes.
NO_PARALLEL_INDEX: The NO_PARALLEL_INDEX hint overrides a PARALLEL attribute setting on an index to avoid a parallel index scan operation.
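
As a small sketch of using these hints against the emp2 demo table from the EXPLAIN PLAN example above;

-- request four parallel execution servers for the scan of emp2
SELECT /*+ PARALLEL(e, 4) */ COUNT(*) FROM emp2 e;

-- force a serial scan even though emp2 carries a PARALLEL attribute
SELECT /*+ NO_PARALLEL(e) */ COUNT(*) FROM emp2 e;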

So what is the best technique to determine the degree of parallelism (DOP) to use?

Mr.Kyte’s answer to this question is; “don’t, let the database do it. automatic tuning (in 9i, the default in 10g) and adaptive multiuser. Just make things “parallel” or use the “parallel hint” without a degree and let the database do it. If you have the resources, it’ll use them. If you have too many concurrent users to do a parallel 8 query, it’ll downgrade it to parallel 4 or maybe just serial.”

Another bad habit: with hints you may grab lots of parallel servers, but this will not necessarily reduce response time; usually the synchronization between the threads results in more waits and causes response time to grow. Before starting a parallel query in the database you can check the available parallel servers with these queries;


-- how many parallel servers can be opened
SELECT NAME, VALUE
FROM v$parameter
WHERE NAME LIKE '%paral%max%';

-- how many parallel servers are being used by whom
SELECT a.qcsid, a.qcserial#, y.osuser, COUNT(*)
FROM v$px_session a, v$session y
WHERE y.sid = a.qcsid
AND y.serial# = a.qcserial#
GROUP BY a.qcsid, a.qcserial#, y.osuser ;

With Oracle Database versions 8.1.5 and later we can also parallelize PL/SQL functions. The PARALLEL_ENABLE keyword is the preferred way to mark your code as safe for parallel execution. This keyword is syntactically similar to DETERMINISTIC, and it is placed after the return value type in the declaration of the function:


CREATE FUNCTION F1 (P1 NUMBER) RETURN NUMBER PARALLEL_ENABLE IS
BEGIN
RETURN P1 * 2;
END;
/
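
Once created, the function can be referenced from parallelized SQL like any other function; a quick hypothetical usage against the emp2 demo table;

SELECT /*+ PARALLEL(e, 2) */ SUM(f1(salary)) FROM emp2 e;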

As a conclusion, there are some important considerations to keep in mind before reaching for the parallelism option: "Parallel <> fast=true, never has been, never will." Thomas Kyte

Continue reading with Part 2

References Used :
Oracle® Database Application Developer’s Guide – Fundamentals 10g Release 2 (10.2) Chapter 7 Coding PL/SQL Procedures and Packages Parallel Query and Parallel DML
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 16 Business Intelligence Overview of Parallel Execution
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2) Chapter 19 Using EXPLAIN PLAN Viewing Parallel Queries with EXPLAIN PLAN
“Performance in Parallel query” thread on Asktom
“Parallel Query concerns” thread on Asktom

Prerequisites for using Oracle’s anti-join access paths

This time the subject is how Oracle evaluates NOT EXISTS and NOT IN clauses, and the hints that influence anti-join query optimization.

Below query had no respond for hours before I started to work on it, two table have some 100 millions of rows in this example;


-- original query
SELECT /*+ parallel (q1,16) */ COUNT(*)
FROM party PARTITION(party_01) q1
WHERE party_id NOT IN (SELECT /*+ hash_aj */ party_id
                       FROM party_new PARTITION(party_01));

I started by checking some critical database parameters like;

– how many parallel servers are configured to open at maximum,

– what the PGA management configuration is


-- critical database parameter values
SELECT NAME, VALUE
FROM v$parameter
WHERE NAME LIKE '%paral%max%'
UNION ALL
SELECT NAME, VALUE
FROM v$parameter
WHERE NAME LIKE '%pga%tar%'
UNION ALL
SELECT NAME, VALUE
FROM v$parameter
WHERE NAME LIKE '%work%pol%';

parallel_max_servers    100
pga_aggregate_target    314572800
workarea_size_policy    AUTO

-- how many parallel servers are being used by whom
SELECT a.qcsid, a.qcserial#, y.osuser, COUNT(*)
FROM v$px_session a, v$session y
WHERE y.sid = a.qcsid
AND y.serial# = a.qcserial#
GROUP BY a.qcsid, a.qcserial#, y.osuser

31  1034   TCYYUCEL     8
112 473    TCYYUCEL     32
123 1005   TCYYUCEL     16

So I was sure that 100 – (8+32+16) = 44 more parallel servers could be initiated at that time.

The key prerequisite for the anti-join access paths is that the subquery of a NOT IN clause must not be capable of returning a null value. If the subquery of a NOT IN clause returns even one null value, the NOT IN predicate can never evaluate to true, so the query returns no rows. The NOT EXISTS construct, meanwhile, concerns itself only with whether a row is returned or not, and thus does not behave differently if a null value is returned.
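
A two-line demonstration of this null trap, runnable from any schema;

-- a single NULL in the list prevents NOT IN from ever being true
SELECT 'no nulls' FROM dual WHERE 1 NOT IN (2, 3);     -- returns one row
SELECT 'with null' FROM dual WHERE 1 NOT IN (2, NULL); -- returns no rows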

For a specific query we place the MERGE_AJ, HASH_AJ, or NL_AJ hint into the NOT IN subquery. MERGE_AJ uses a sort-merge anti-join, HASH_AJ uses a hash anti-join, and NL_AJ uses a nested loop anti-join.

So I added the hash_aj hint and a party_id IS NOT NULL condition to the subquery, and the query returned in two and a half minutes;


-- suggested method 1 - via SQL*Plus
set timing on
set linesize 300
set pagesize 0
set serveroutput on size 1000000
alter session set max_dump_file_size=unlimited;
ALTER session SET timed_statistics = true;
exec DBMS_OUTPUT.ENABLE(1000000) ;
ALTER SESSION ENABLE PARALLEL DDL ;
ALTER SESSION ENABLE PARALLEL DML ;
alter session set STATISTICS_LEVEL = ALL ;
ALTER SESSION ENABLE RESUMABLE TIMEOUT 10800 ;
WHENEVER SQLERROR CONTINUE NONE

SELECT /*+ parallel (a 8) */ COUNT(*)
FROM party PARTITION(party_01) a
WHERE party_id NOT IN (SELECT /*+ hash_aj parallel(b 8) */ party_id
FROM party_new PARTITION(party_01) b
WHERE party_id IS NOT NULL) ;

Elapsed: 00:02:29.00

With this second method, I tried to reconfigure the storage parameters and refresh the statistics by creating two new tables before the anti-join;


-- suggested method 2 - via SQL*Plus
set timing on
set linesize 300
set pagesize 0
set serveroutput on size 1000000
alter session set max_dump_file_size=unlimited;
ALTER session SET timed_statistics = true;
exec DBMS_OUTPUT.ENABLE(1000000) ;
ALTER SESSION ENABLE PARALLEL DDL ;
ALTER SESSION ENABLE PARALLEL DML ;
alter session set STATISTICS_LEVEL = ALL ;
ALTER SESSION ENABLE RESUMABLE TIMEOUT 10800 ;
WHENEVER SQLERROR CONTINUE NONE

create table tong1
PCTFREE 0 PCTUSED 99 INITRANS 3 MAXTRANS 255
STORAGE ( INITIAL 5M NEXT 5M MINEXTENTS 5 MAXEXTENTS
512 PCTINCREASE 0 FREELISTS 3 FREELIST GROUPS 1)
nologging parallel 4 PARTITION BY HASH (party_id) partitions 4 AS
select /*+ parallel(aa 4) */ * from party partition(party_01) aa
/

alter table tong1
add constraint NN#tong1#party_id
check (party_id IS NOT NULL) nologging parallel 4;

EXEC DBMS_STATS.gather_table_stats(USER, 'tong1', estimate_percent => 10, block_sample => FALSE, degree => 4, granularity => 'ALL', cascade => TRUE) ;
commit ;

create table tong2
PCTFREE 0 PCTUSED 99 INITRANS 3 MAXTRANS 255
STORAGE ( INITIAL 5M NEXT 5M MINEXTENTS 5 MAXEXTENTS
512 PCTINCREASE 0 FREELISTS 3 FREELIST GROUPS 1)
nologging parallel 4 PARTITION BY HASH (party_id) partitions 4 AS
select /*+ parallel(aa 4) */ party_id FROM ANT.party_new partition(party_01) aa
/

alter table tong2
add constraint NN#tong2#party_id
check (party_id IS NOT NULL) nologging parallel 4;

EXEC DBMS_STATS.gather_table_stats(USER, 'tong2', estimate_percent => 10, block_sample => FALSE, degree => 4, granularity => 'ALL', cascade => TRUE) ;
commit ;

SELECT /*+ parallel(a 4) */ COUNT(*)
FROM tong1 a
WHERE party_id NOT IN (SELECT /*+ hash_aj parallel(b 4) */ party_id
FROM tong2 b
WHERE party_id IS NOT NULL);

Elapsed: 00:01:48.00

Also another ~38% improvement.

Since in an anti-join access path the subquery of a NOT IN clause cannot be capable of returning a null value, either the columns being selected must have NOT NULL constraints or there must be predicates in the WHERE clause of the subquery to ensure there are no nulls. Even if every row in the table has a non-null value in a nullable column, you must still add the extra predicate to the WHERE clause, or else Oracle will refuse to use the merge and hash anti-join access paths.

For more information, please look at the examples of how anti-join access paths can make some queries more efficient in the article by Mr.Roger Schrag listed in the references.

Testing Information : the scripts mentioned are tested on Oracle Database 9i Enterprise Edition Release 9.2.0.7.0

References Used :
“NOT IN in SQL” thread on AskTom
Metalink Note:28934.1 “Use of indexes with NOT IN subquery”
“Speeding Up Queries with Semi-Joins and Anti-Joins: How Oracle Evaluates EXISTS, NOT EXISTS, IN, and NOT IN,” by Roger Schrag (December 2004)
Jonathan Lewis’s NOT IN Blog Entry

Without Bind Variables Your Code Is Also Less Secure

So here are some security issue highlights;

1- Greatest Risk is from

– External hack
– Rogue employee; Is the DBA a threat?
– Social Engineering; Single sign on is dangerous (why?)
– Hackers choose the easiest method of getting in, not the one that is the most challenging.
– If there is a deadbolt on the front door, they just go in via the fly screen left open on the back door.

2- Methods of Theft

– Steal the data; Hack in
– Steal the database; Backups, Copy the files
– Ask for the data; Prevent social engineering

3- Common Hacks

– Social Engineering; “Can I have your password?”, “Where can I plug in my laptop?”
– Theoretical Hacks; Trojan, Java, Wrapped PL/SQL

4- Code Injection

– Add a command to the URL to correctly validate or view data one shouldn’t

http://www.site/code/myproc?pw=SMITH
http://www.site/code/myproc?pw=X''+or+1=1+or+passw=''

The application builds: 'select username from table where passw = ''' || input || ''''
With the injected input this becomes: select username from table where passw = 'X' or 1=1 or passw=''

– Web App prompts for Product Code and Returns Product description;

sql_stmt = "select prod_desc from products where prod_code = '" & input_str & "'"
select prod_desc from products where prod_code = '123'

The user enters the product code as: 123' UNION select username, password from dba_users where '1'='1

The resulting SQL statement is now:

select prod_desc from products where prod_code = '123' UNION select username, password from dba_users where '1'='1'

To understand how "code injection" works, let's work through a simple example. Suppose we have an application that asks for a code and returns the bank account information related to that code;

create table my_top_secrets (
PK_SECRET    VARCHAR2(30) PRIMARY KEY,
BANK_ACCOUNT VARCHAR2(30),
AMOUNT       NUMBER(10)
);

insert into my_top_secrets values('1', 'Malta Bank - 123456',  123000000);
insert into my_top_secrets values('2', 'Isviçre Bank - 7890', 4567000000);
commit;

create or replace function get_bankaccount(p_secret varchar2) return varchar2 is
TYPE rc IS REF CURSOR;
l_rc rc;
l_dummy varchar2(64) := 'Fail';
BEGIN
OPEN l_rc FOR 'select BANK_ACCOUNT
from my_top_secrets
where PK_SECRET = ' || p_secret;
FETCH l_rc INTO l_dummy;
CLOSE l_rc;
return l_dummy ;
END;
/

set serveroutput on
declare
l_dummy1 varchar2(64) := '-1'; -- since I don't know a primary key value in the table
l_dummy2 varchar2(64);
begin
l_dummy2 := get_bankaccount(l_dummy1);
dbms_output.put_line(l_dummy2);
end;
/
Fail

PL/SQL procedure successfully completed.

-- SQL modification involves deliberately altering a dynamic SQL statement so that it executes in a way unintended by the application developer.
declare
l_dummy1 varchar2(64) := '-1' || ' or 1=1';
l_dummy2 varchar2(64);
begin
l_dummy2 := get_bankaccount(l_dummy1);
dbms_output.put_line(l_dummy2);
end;
/
Malta Bank - 123456

PL/SQL procedure successfully completed.

-- Statement injection occurs when a user appends one or more new SQL statements to a dynamically generated SQL statement.
declare
l_dummy1 varchar2(64) := '-1' || ' union select user from dual';
l_dummy2 varchar2(64);
begin
l_dummy2 := get_bankaccount(l_dummy1);
dbms_output.put_line(l_dummy2);
end;
/
HR

PL/SQL procedure successfully completed.

What if we had used bind variables instead of concatenation;


create or replace function get_bankaccount(p_secret varchar2) return varchar2 is
TYPE rc IS REF CURSOR;
l_rc rc;
l_dummy varchar2(64) := 'Fail';
BEGIN
OPEN l_rc FOR 'select BANK_ACCOUNT
from my_top_secrets
where PK_SECRET = :x' USING p_secret;
FETCH l_rc INTO l_dummy;
CLOSE l_rc;
return l_dummy ;
END;
/

declare
l_dummy1 varchar2(64) := '-1'; -- since I don't know a primary key value in the table
l_dummy2 varchar2(64);
begin
l_dummy2 := get_bankaccount(l_dummy1);
dbms_output.put_line(l_dummy2);
end;
/
Fail

PL/SQL procedure successfully completed.

declare
l_dummy1 varchar2(64) := '-1' || ' or 1=1';
l_dummy2 varchar2(64);
begin
l_dummy2 := get_bankaccount(l_dummy1);
dbms_output.put_line(l_dummy2);
end;
/
Fail

PL/SQL procedure successfully completed.

declare
l_dummy1 varchar2(64) := '-1' || ' union select user from dual';
l_dummy2 varchar2(64);
begin
l_dummy2 := get_bankaccount(l_dummy1);
dbms_output.put_line(l_dummy2);
end;
/
Fail

PL/SQL procedure successfully completed.

If you develop your application using string concatenation instead of bind variables, the side effects on your system will include not only poor scalability, as I mentioned in one of my previous posts, but also many hidden vulnerabilities, especially for applications open to the Internet. Be careful, and research the "SQL Injection" topic through Google..

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :

Oracle® Database Application Developer’s Guide – Fundamentals 10g Release 2 (10.2) Chapter 8 Coding Dynamic SQL Avoiding SQL Injection in PL/SQL

“Defending and detecting SQL injection” thread on Asktom
SQL Injection article by Hakkı Oktay

Oracle SQL Extensions Part 1

The universal mantra on data-intensive processing advises that "You should do it in a single SQL statement if at all possible."

So learning all there is to learn about SQL is our primary goal for developing successful database applications. This series will be my favorite, I know it from the start :)

1- Using SQL to Generate SQL

SQL can be used to generate SQL scripts. The data dictionary is a collection of tables and views that contain database information, created and maintained by the Oracle server.


SELECT 'CREATE TABLE ' || table_name || '_STRUCTURE_CLONE ' || 'AS SELECT * FROM ' || table_name ||' WHERE 1=2;'
AS "Create Table Script"
FROM   user_tables;

Another example in SQL*Plus is dumping the contents of a table to a file;


SET HEADING OFF ECHO OFF FEEDBACK OFF
SET PAGESIZE 0

SELECT
'INSERT INTO departments_test VALUES
(' || department_id || ', ''' || department_name ||
''', ''' || location_id || ''');'
AS "Insert Statements Script"
FROM   departments
/

SET PAGESIZE 24
SET HEADING ON ECHO ON FEEDBACK ON

This script's output can be spooled to a file, and the file can then be executed once the dynamic content is prepared, for example as a daily backup procedure for parameter tables.
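
A minimal SQL*Plus sketch of that spool-and-run cycle, using the insert-generation query above (the file name dept_backup.sql is only an assumption for this example);

SET HEADING OFF ECHO OFF FEEDBACK OFF PAGESIZE 0
SPOOL dept_backup.sql
SELECT 'INSERT INTO departments_test VALUES (' || department_id || ', ''' || department_name || ''', ''' || location_id || ''');'
FROM departments;
SPOOL OFF
-- now run the generated script
@dept_backup.sql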

2- Examples on SQL efficiency

a) To improve SQL efficiency, use equijoins whenever possible. Statements that perform equijoins on untransformed column values are the easiest to tune.

Use untransformed column values. For example, use:
WHERE a.order_no = b.order_no

rather than:
WHERE TO_NUMBER (SUBSTR(a.order_no, INSTR(b.order_no, '.') - 1)) = TO_NUMBER (SUBSTR(b.order_no, INSTR(b.order_no, '.') - 1))

b) SQL is not a procedural language. Using one piece of SQL to do many different things usually results in a less-than-optimal result for each task. If you want SQL to accomplish different things, then write various statements, rather than writing one statement to do different things depending on the parameters you give it.


SELECT info
FROM tables
WHERE ...

AND somecolumn BETWEEN DECODE(:loval, 'ALL', somecolumn, :loval)
AND DECODE(:hival, 'ALL', somecolumn, :hival);

It is always better to write separate SQL statements for different tasks, but if you must use one SQL statement, then you can make a very complex statement slightly less complex by using the UNION ALL operator.

Optimization (determining the execution plan) takes place before the database knows what values will be substituted into the query. An execution plan cannot, therefore, depend on what those values are.


SELECT /* change this half of UNION ALL if other half changes */ info
FROM tables
WHERE ...

AND somecolumn BETWEEN :loval AND :hival
AND (:hival != 'ALL' AND :loval != 'ALL')

UNION ALL
SELECT /* Change this half of UNION ALL if other half changes. */ info
FROM tables
WHERE ...
AND (:hival = 'ALL' OR :loval = 'ALL');

c) Write your WHERE clause in order to use a possible index;

Don’t Use
WHERE SUBSTR(ACCOUNT_NAME,1,7) = 'CAPITAL';
WHERE AMOUNT != 0;
WHERE TRUNC(TRANS_DATE) = TRUNC(SYSDATE);
WHERE ACCOUNT_NAME || ACCOUNT_TYPE = 'AMEXA';
WHERE AMOUNT + 3000 < 5000;
WHERE ACCOUNT_NAME = NVL(:ACC_NAME, ACCOUNT_NAME);

Use
WHERE ACCOUNT_NAME LIKE 'CAPITAL%';
WHERE AMOUNT > 0;
WHERE TRANS_DATE BETWEEN TRUNC(SYSDATE) AND TRUNC(SYSDATE) + .99999;
WHERE ACCOUNT_NAME = 'AMEX' AND ACCOUNT_TYPE = 'A';
WHERE AMOUNT < 2000;
WHERE ACCOUNT_NAME LIKE NVL(:ACC_NAME, '%');
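If a transformed predicate really cannot be rewritten, a function-based index is worth considering; a minimal sketch, assuming a hypothetical TRANSACTIONS table;

-- the index stores the already-transformed value
CREATE INDEX trans_trunc_date_idx ON transactions (TRUNC(trans_date));

-- so this predicate can now use an index despite the function
SELECT * FROM transactions WHERE TRUNC(trans_date) = TRUNC(SYSDATE);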

d) Data Casting Problems;

If EMP_TYPE is a VARCHAR2 column, the following statement will not use an index.

SELECT . . . FROM EMP WHERE EMP_TYPE = 123

This statement will be processed as:

SELECT . . . FROM EMP WHERE TO_NUMBER(EMP_TYPE) = 123;
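The simple fix is to keep the indexed column untransformed and make the literal match its type;

SELECT . . . FROM EMP WHERE EMP_TYPE = '123';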

e) NOT, != and <> disable index use;

WHERE city not in ('DALLAS', 'HOUSTON')
WHERE city != 'DALLAS'

3- Using Explicit Default Values


create table dept_tong as select * from departments ;

alter table dept_tong modify manager_id default 5 ;

-- DEFAULT with INSERT
INSERT INTO dept_tong
(department_id, department_name, manager_id)
VALUES (300, 'Engineering', DEFAULT);

-- DEFAULT with UPDATE
UPDATE dept_tong
SET manager_id = DEFAULT WHERE department_id = 10;
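A quick way to verify the effect; both the inserted and the updated row should now show the default manager_id of 5;

SELECT department_id, manager_id
FROM dept_tong
WHERE department_id IN (10, 300);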

4- Referencing the Same Subquery Multiple Times with the WITH clause

Using the WITH clause, you can reuse the same query block in a SELECT statement when it occurs more than once within a complex query. The WITH clause retrieves the results of a query block and stores them in the user’s temporary tablespace.

In complex queries that process the same subquery multiple times, you might be tempted to store the subquery results in a temporary table and perform additional queries against the temporary table. The WITH clause lets you factor out the subquery, give it a name, then reference that name multiple times within the original complex query.

This technique lets the optimizer choose how to deal with the subquery results, whether to create a temporary table or inline it as a view.

For example, the following query joins two tables and computes the aggregate SUM(SAL) more than once;


drop table emp purge ;
drop table dept purge ;

create table emp nologging as
select rownum empno, name ename, line sal, rownum deptno from all_source ;
create table dept nologging as
select rownum deptno, name dname from all_source ;

SELECT dname, SUM(sal) AS dept_total
FROM emp, dept
WHERE emp.deptno = dept.deptno
GROUP BY dname
HAVING SUM(sal)  >
(  SELECT SUM(sal) * 1/3
FROM emp, dept
WHERE emp.deptno = dept.deptno )
ORDER BY SUM(sal) DESC;

You may improve the query by doing the subquery once and referencing it at the appropriate points in the main query; note the common parts of the subquery and the places where it is referenced.


set serveroutput on
exec runstats_pkg.rs_start;

SELECT dname, SUM(sal) AS dept_total
FROM emp, dept
WHERE emp.deptno = dept.deptno
GROUP BY dname
HAVING SUM(sal)  >
(  SELECT SUM(sal) * 1/3
FROM emp, dept
WHERE emp.deptno = dept.deptno )
ORDER BY SUM(sal) DESC;

exec runstats_pkg.rs_middle;

WITH summary AS (
SELECT dname, SUM(sal) AS dept_total
FROM emp, dept
WHERE emp.deptno = dept.deptno
GROUP BY dname )
SELECT dname, dept_total
FROM summary
WHERE dept_total >
( SELECT SUM(dept_total) * 1/3
FROM summary      )
ORDER BY dept_total DESC;

exec runstats_pkg.rs_stop(100000);

Run1 ran in 53 hsecs
Run2 ran in 40 hsecs
run 1 ran in 132,5% of the time

Name                                  Run1        Run2        Diff
STAT...physical read total byt           0     122,880     122,880
STAT...table scan rows gotten      618,444     313,900    -304,544
STAT...session pga memory                0     327,680     327,680

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
5,922       3,475      -2,447    170.42%

PL/SQL procedure successfully completed.
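If you want to influence that choice yourself, the widely known but undocumented MATERIALIZE and INLINE hints can nudge the optimizer either way. Treat the following only as a sketch, since undocumented hints may change between releases;

WITH summary AS (
SELECT /*+ MATERIALIZE */ dname, SUM(sal) AS dept_total
FROM emp, dept
WHERE emp.deptno = dept.deptno
GROUP BY dname )
SELECT dname, dept_total
FROM summary
WHERE dept_total >
( SELECT SUM(dept_total) * 1/3
FROM summary )
ORDER BY dept_total DESC;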

5- Difference Between UNION and UNION ALL

The problem is that in a UNION, Oracle finds all the qualifying rows and then “deduplicates” them;


select * from dual union
select * from dual;

D
-
X

select * from dual  union ALL
select * from dual;

D
-
X
X

If you can use UNION ALL, by all means use it over UNION to avoid a costly deduplication step, a step that is probably not even necessary most of the time.

6- The WITH CHECK OPTION Keyword

A subquery is used to identify the table and columns of the DML statement. The WITH CHECK OPTION keyword prohibits you from changing rows that are not in the subquery.

Specify WITH CHECK OPTION to indicate that, if the subquery is used in place of a table in an INSERT, UPDATE, or DELETE statement, no changes that produce rows that are not included in the subquery are permitted to that table.
In the example shown, the WITH CHECK OPTION keyword is used. The subquery identifies rows that are in department 50, but the department ID is not in the SELECT list and no value is provided for it in the VALUES list. Inserting this row would result in a department ID of NULL, which is not in the subquery.


INSERT INTO  (SELECT employee_id, last_name, email,
hire_date, job_id, salary
FROM   empl3
WHERE  department_id = 50 WITH CHECK OPTION)
VALUES (99998, 'Smith', 'JSMITH',
TO_DATE('07-JUN-99', 'DD-MON-RR'),
'ST_CLERK', 5000);

ERROR at line 1:
 ORA-01402: view WITH CHECK OPTION where-clause violation
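For comparison, a sketch of an insert that satisfies the check: department_id is added to the subquery’s SELECT list and given the value 50, so the new row stays visible to the subquery (the employee values here are hypothetical);

INSERT INTO  (SELECT employee_id, last_name, email,
hire_date, job_id, salary, department_id
FROM   empl3
WHERE  department_id = 50 WITH CHECK OPTION)
VALUES (99999, 'Jones', 'BJONES',
TO_DATE('07-JUN-99', 'DD-MON-RR'),
'ST_CLERK', 5000, 50);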

7- Updating and Inserting a Row Using a Record

Although you can enumerate each field of a PL/SQL record when inserting or updating rows in a table, the resulting code is not especially readable or maintainable. Instead, you can use PL/SQL records directly in these statements. The most convenient technique is to declare the record using a %ROWTYPE attribute, so that it has exactly the same fields as the SQL table.


SELECT * FROM emp WHERE empno = 110;

EMPNO ENAME                                              DEPTNO
---------- ---------------------------------------------- ----------
110 John Chen                                             100

SELECT * FROM emp WHERE empno = 1500;

no rows selected

DECLARE
emp_rec emp%ROWTYPE;
BEGIN
emp_rec.empno   := 1500;
emp_rec.ename := 'Steven Hill';
emp_rec.deptno   := 10;
--
-- A %ROWTYPE value can fill in all the row fields.
--
INSERT INTO emp VALUES emp_rec;
--
-- The fields of a %ROWTYPE can completely replace the table columns.
--
UPDATE emp SET ROW = emp_rec WHERE empno = 110;
END;
/

SELECT * FROM emp WHERE empno = 110;

no rows selected

SELECT * FROM emp WHERE empno = 1500;

EMPNO ENAME                                              DEPTNO
---------- ---------------------------------------------- ----------
1500 Steven Hill                                            10
1500 Steven Hill                                            10

8- MERGE Statement

Provides the ability to conditionally update or insert data into a database table, performs an UPDATE if the row exists, and an INSERT if it is a new row:
– Avoids separate updates,
– Increases performance and ease of use,
– Is useful in data warehousing applications


MERGE INTO empl3  c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET
c.first_name     = e.first_name,
c.last_name      = e.last_name,
...
c.department_id  = e.department_id
WHEN NOT MATCHED THEN
INSERT VALUES(e.employee_id, e.first_name, e.last_name,
e.email, e.phone_number, e.hire_date, e.job_id,
e.salary, e.commission_pct, e.manager_id,
e.department_id);

Another example, which could be used in the Extract-Transform-Load (ETL) process of a data warehouse, may be;


alter session enable parallel dml;
MERGE /*+ parallel(d,10) append */
INTO contracts_dim d
USING (select * from table(transform_pkg.go(CURSOR(select /*+ parallel(contracts_file,10) full(contracts_file) */ * from contracts_file)))) f
ON (d.contract_id = f.contract_id)
WHEN MATCHED THEN
update set descrip = f.descrip,
init_val_loc_curr = f.init_val_loc_curr,
init_val_adj_amt = f.init_val_adj_amt
WHEN NOT MATCHED THEN
insert values ( f.contract_id,
f.descrip,
f.init_val_loc_curr,
f.init_val_adj_amt) ;
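10g also added an optional DELETE clause to the WHEN MATCHED branch, which removes rows that satisfy a condition after they are updated. A minimal sketch against the same empl3/employees tables, with a hypothetical condition;

MERGE INTO empl3 c
USING employees e
ON (c.employee_id = e.employee_id)
WHEN MATCHED THEN
UPDATE SET c.salary = e.salary
-- rows whose updated values satisfy this condition are deleted from empl3
DELETE WHERE (c.salary < 2000)
WHEN NOT MATCHED THEN
INSERT VALUES(e.employee_id, e.first_name, e.last_name,
e.email, e.phone_number, e.hire_date, e.job_id,
e.salary, e.commission_pct, e.manager_id,
e.department_id);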

9- Multitable Inserts

Multitable inserts allow a single INSERT INTO .. SELECT statement to conditionally, or non-conditionally, insert into multiple tables. Compared to previous versions, this statement reduces the table scans and PL/SQL code necessary for performing multiple conditional inserts.

Its main use is in the ETL process of data warehouses, where it can be parallelized and/or convert non-relational data into a relational format:


set pagesize 0
drop table emp_40 purge;
drop table emp_50 purge;
drop table emp_left_over purge;

create table emp_40 nologging parallel
as SELECT EMPLOYEE_ID,FIRST_NAME,LAST_NAME,DEPARTMENT_ID FROM employees where 1 = 2;

create table emp_50 nologging parallel
as SELECT EMPLOYEE_ID,EMAIL,PHONE_NUMBER,HIRE_DATE,DEPARTMENT_ID FROM employees where 1 = 2;

create table emp_left_over nologging parallel
as SELECT EMPLOYEE_ID,FIRST_NAME,LAST_NAME,EMAIL,PHONE_NUMBER,HIRE_DATE,JOB_ID
,SALARY,COMMISSION_PCT,MANAGER_ID,DEPARTMENT_ID FROM employees where 1 = 2;

SELECT DEPARTMENT_ID,count(*) FROM employees
group by DEPARTMENT_ID order by DEPARTMENT_ID;

ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ append parallel(2) */ ALL
WHEN (DEPARTMENT_ID=40) THEN
INTO emp_40 (EMPLOYEE_ID,FIRST_NAME,LAST_NAME,DEPARTMENT_ID)
VALUES (EMPLOYEE_ID,FIRST_NAME,LAST_NAME,DEPARTMENT_ID)
WHEN (DEPARTMENT_ID=50) THEN
INTO emp_50 (EMPLOYEE_ID,EMAIL,PHONE_NUMBER,HIRE_DATE,DEPARTMENT_ID)
VALUES (EMPLOYEE_ID,EMAIL,PHONE_NUMBER,HIRE_DATE,DEPARTMENT_ID)
ELSE
INTO emp_left_over (EMPLOYEE_ID,FIRST_NAME,LAST_NAME,EMAIL,PHONE_NUMBER,HIRE_DATE,JOB_ID,SALARY,COMMISSION_PCT,MANAGER_ID,DEPARTMENT_ID)
VALUES (EMPLOYEE_ID,FIRST_NAME,LAST_NAME,EMAIL,PHONE_NUMBER,HIRE_DATE,JOB_ID,SALARY,COMMISSION_PCT,MANAGER_ID,DEPARTMENT_ID)
SELECT /*+ PARALLEL(b 2) */ * FROM employees b ;
commit;

SELECT DEPARTMENT_ID,count(*) FROM emp_40
group by DEPARTMENT_ID order by DEPARTMENT_ID;

40          1

1 row selected.

SELECT DEPARTMENT_ID,count(*) FROM emp_50
group by DEPARTMENT_ID order by DEPARTMENT_ID;

50         45

1 row selected.

SELECT DEPARTMENT_ID,count(*) FROM emp_left_over
group by DEPARTMENT_ID order by DEPARTMENT_ID;

10          1
20          2
30          6
60          5
70          1
80         34
90          3
100          6
110          2
1

10 rows selected.

The different types of multitable INSERT statements are:
– Unconditional INSERT
– Conditional ALL INSERT
– Conditional FIRST INSERT
– Pivoting INSERT

Error logging can also be used with a multitable insert;


INSERT /*+ append parallel(2) */ ALL
WHEN (DEPARTMENT_ID=40) THEN
INTO emp_40 (EMPLOYEE_ID,FIRST_NAME,LAST_NAME,DEPARTMENT_ID)
VALUES (EMPLOYEE_ID,FIRST_NAME,LAST_NAME,DEPARTMENT_ID)
 LOG ERRORS REJECT LIMIT UNLIMITED
WHEN (DEPARTMENT_ID=50) THEN
INTO emp_50 (EMPLOYEE_ID,EMAIL,PHONE_NUMBER,HIRE_DATE,DEPARTMENT_ID)
VALUES (EMPLOYEE_ID,EMAIL,PHONE_NUMBER,HIRE_DATE,DEPARTMENT_ID)
 LOG ERRORS REJECT LIMIT UNLIMITED
ELSE
...
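Note that LOG ERRORS requires an error logging table for each target; a sketch using DBMS_ERRLOG, which creates them with the default ERR$_ prefix;

-- create the error logging tables before running the INSERT ALL above
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('EMP_40');
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('EMP_50');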

Pivoting INSERT example;


INSERT ALL
INTO sales_info VALUES (employee_id,week_id,sales_MON)
INTO sales_info VALUES (employee_id,week_id,sales_TUE)
INTO sales_info VALUES (employee_id,week_id,sales_WED)
INTO sales_info VALUES (employee_id,week_id,sales_THUR)
INTO sales_info VALUES (employee_id,week_id, sales_FRI)
SELECT EMPLOYEE_ID, week_id, sales_MON, sales_TUE,sales_WED, sales_THUR,sales_FRI FROM sales_source_data;
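For completeness, one possible shape for the two tables in this example; these definitions are my assumption, not part of the sample schema;

-- one row per employee per week, with a column per weekday
CREATE TABLE sales_source_data (
employee_id NUMBER, week_id NUMBER,
sales_MON NUMBER, sales_TUE NUMBER, sales_WED NUMBER,
sales_THUR NUMBER, sales_FRI NUMBER );

-- pivoted target: one row per employee per week per day
CREATE TABLE sales_info (
employee_id NUMBER, week_id NUMBER, sales NUMBER );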

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Oracle® Database SQL Reference 10g Release 2 (10.2)
http://www.psoug.org/library.html
On Partitioning and Pipelining Oracle Magazine Article By Tom Kyte
Akadia Publications
http://asktom.oracle.com/tkyte/runstats.html

Database Performance and SQL Tuning Checklist

Database Performance Checklist
• Set the minimal number of initialization parameters. Ideally, most initialization parameters should be left at default. If there is more tuning to perform, this shows up when the system is under load.
• Set storage options for tables and indexes in appropriate tablespaces.
• Verify that all SQL statements are optimal and understand their resource usage.
• Validate that middleware and programs that connect to the database are efficient in their connection management and do not log on and log off repeatedly.
• Validate that the SQL statements use cursors efficiently. Each SQL statement should be parsed once and then executed multiple times. The most common reason this does not happen is because bind variables are not used properly and WHERE clause predicates are sent as string literals.
• Validate that all schema objects have been correctly migrated from the development environment to the production database. This includes tables, indexes, sequences, triggers, packages, procedures, functions, Java objects, synonyms, grants, and views. Ensure that any modifications made in testing are made to the production system.
• As soon as the system is rolled out, establish a baseline set of statistics from the database and operating system. This first set of statistics validates or corrects any assumptions made in the design and rollout process.

SQL Tuning Checklist
1. Identify Statements to tune using:
• ADDM
• AWR
• EM Top SQL
• V$SQL_AREA
• V$SQL_TEXT
• Statspack

2. View execution statistics using:
• ADDM
• SQL*trace
• TKPROF
• TRCSESS
• DBMS_MONITOR (see the sketch after this step)
• Make note of
• CPU time
• Elapsed time
• Disk reads
• Disk sorts
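A minimal DBMS_MONITOR sketch, as referenced in the list above; the SID and serial# are hypothetical and would come from v$session;

-- enable SQL trace with wait events and bind values for one session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 131, serial_num => 29, waits => TRUE, binds => TRUE);

-- let the workload run, then turn tracing off
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 131, serial_num => 29);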

3. Tune SQL automatically using SQL Tuning Advisor.
• Optimizer stats analysis
• SQL Profiling
• Index analysis
• SQL restructure

4. Tune SQL manually:
a. Gather information about the underlying objects used in the SQL statements
• Obtain table, index, and column definitions
• Obtain view definitions
• Understand column data distribution
o Uniqueness
o Nulls
o Skew
• Identify if data from more than one table is required by a statement resulting in joins
• Verify the join predicates to avoid cartesian joins
• Verify presence of indexes
• Verify presence of Materialized views
• Verify type of indexes
b. Verify execution plans by using:
• Explain plan
• SQL*Plus Autotrace (see the sketch after this step)
c. Verify statistics in:
• User_tab_columns
• User_indexes
• User_tables
• Look for :
• Last analyzed
• Existence of histograms where appropriate
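A minimal sketch of both verification methods from item b, against a hypothetical emp query;

-- explain the plan, then display it with DBMS_XPLAN
EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- or let SQL*Plus show the plan without fetching the rows
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM emp WHERE deptno = 10;
SET AUTOTRACE OFF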

5. Verify that statistics are current:
• Use Automatic statistics gathering set to appropriate intervals
• Backup existing statistics before gathering new ones
• Use the DBMS_STATS package to gather statistics where statistics are stale or absent (see the sketch after this step)
• Use Dynamic sampling on volatile objects if needed
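A minimal DBMS_STATS sketch for the item above; the table name is hypothetical;

-- gather fresh statistics for one table, including its indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'EMP', cascade => TRUE, method_opt => 'FOR ALL COLUMNS SIZE AUTO');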

6. Change access paths.
• Use SQL Access Advisor
• Use SQL Tuning Advisor
• Create B*tree indexes on highly selective data.
• Create bitmap indexes on low cardinality columns.
• Bitmap indexes help in queries using OR or aggregates.
• Create bitmap join indexes to facilitate joins.
• Create concatenated indexes to facilitate full index scans.
• Create histograms on skewed data.
• Create materialized views on queries involving joins and aggregates.
• Keep in mind that :
• Full table scans on small tables or queries retrieving a large percentage of rows are OK
• A full index scan may be faster than a full table scan
• An index skip scan may be faster than a full index scan
• An index access by rowid may be faster than an index range scan
• Look for DISTINCT or GROUP BY as this may indicate a missing predicate

7. Restructure queries keeping the following in mind:
• Use SQL Tuning advisor.
• Inequality conditions cannot use indexes.
• Distinct causes sorts.
• Group by causes sorts.
• Aggregates can use indexes.
• Applying functions on indexed columns prevents the index from being used.
• Low selectivity queries do not use indexes.
• Use UNION ALL instead of UNION (wherever possible).
• Nesting queries too deeply causes poor performance.
• Use EXISTS instead of IN for subqueries to check for TRUE or FALSE values (wherever possible), as in the sketch after this list.
• Use NOT EXISTS instead of NOT IN whenever possible.
• Implicit or explicit conversions may cause an index not to be used.
• OR and IN-list conditions are not performance efficient.
• If possible, = or AND conditions are preferable.
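A small sketch of the EXISTS rewrite mentioned in this list, using hypothetical emp and dept tables;

-- instead of:
SELECT dname FROM dept d
WHERE d.deptno IN (SELECT e.deptno FROM emp e);

-- prefer:
SELECT dname FROM dept d
WHERE EXISTS (SELECT NULL FROM emp e WHERE e.deptno = d.deptno);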

8. Use hints to influence the optimizer in choosing:
• Query transformation
• Join orders
• Join methods
• Access paths

9. Verify the new code improves performance
• From a user perspective, such as response time, time taken to run a report, etc.
• Check that the execution statistics (step 2) reflect the performance gain from the changes you have made in CPU time, elapsed time, etc., from a resource usage perspective.

References Used :
Oracle® Database Performance Tuning Guide 10g Release 2 (10.2)