Oracle Concepts and Architecture – Part 2

In part 1 I started to discuss the following subjects:

1- Flashback Versions Query, Drop Table and Recyclebin Management
2- How shared pool works in Oracle and binding example
3- The Storage Hierarchy Summary in an Oracle Database
4- Concurrency and Isolation Levels Concepts
5- Automated Checkpoint Tuning and Mean Time to Recover (MTTR) Concepts
6- Concurrency and Consistency Concepts
7- Latch and Lock Concepts

I will continue with an Oracle Database architecture overview:

8- Oracle Database Architecture: Overview

a) Oracle database and an Oracle instance

The database consists of physical structures such as:
Control files: These files contain data about the database itself. These files are critical to the database. Without them you cannot open the database. Check view v$controlfile for your controlfile settings.
Data files: These files contain the data of the database. Check view v$datafile for your data file settings.
Online redo log files: These files allow for instance recovery of the database. If the database were to crash, it can be recovered with the information in these files. Check views v$logfile and v$log for your redo log file settings.

There are other files that are not officially part of the database but are important to the successful running of the database:
Parameter file: The parameter file is used to define how the instance is configured on startup. Check view v$parameter for your parameter settings.
Password file: This file enables super users to connect remotely to the database and perform administrative tasks.
Archive log files: These files ensure database recovery and are copies of the online redo log files. Using these files and a backup of the database, you can recover a lost data file. Check view v$archived_log for your archived log settings.
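All of the views mentioned above can be queried from SQL*Plus when connected with sufficient privileges; a quick sketch against the 10g data dictionary:

```sql
-- Control files of the database
SELECT name FROM v$controlfile;

-- Data files and their sizes
SELECT file#, name, bytes/1024/1024 AS mbytes FROM v$datafile;

-- Redo log groups (v$log) joined to their physical members (v$logfile)
SELECT l.group#, l.status, f.member
  FROM v$log l, v$logfile f
 WHERE l.group# = f.group#;

-- Archived logs produced in the last day
SELECT name, completion_time FROM v$archived_log
 WHERE completion_time > SYSDATE - 1;
```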

The instance consists of memory structures, such as the System Global Area (SGA) and the Program Global Area (PGA), and the background processes that perform tasks within the database, as well as the server processes that are initiated for each user session.

The size and structure of an Oracle database and instance impact performance. The physical structure of the database impacts the I/O to hard disks. It is therefore important to both size and place the physical files in such a way that I/O across disks is distributed evenly and waits are minimized. The size of the various memory areas of the instance directly impacts the speed of SQL processing.

The default Oracle database, created by the Oracle Universal Installer (OUI), is preconfigured with initial settings for the memory parameters. The performance of the database depends on the sizing of these memory parameters, so you should fine-tune them to meet the requirements of your growing database.

Two memory parameters, PGA_AGGREGATE_TARGET and SGA_TARGET, allow the database to automatically resize the memory structures within the SGA and PGA. These parameters can be set based on the recommendations of the Automatic Database Diagnostic Monitor (ADDM), which is available with the Enterprise Edition of Oracle Database 10g, or you can manually run several advisors and use their combined recommendations to set the sizes appropriately.

The basic memory structures associated with an Oracle instance include:
System Global Area (SGA): Shared by all server and background processes
Program Global Area (PGA): Exclusive to each server and background process. There is one PGA for each server process.

The System Global Area (SGA) consists of the following data structures:
Database buffer cache: Caches blocks of data retrieved from the data files
Redo log buffer: Caches redo information (used for instance recovery) until it can be written to the physical redo log files stored on disk
Shared pool: Caches various constructs that can be shared among users
Large pool: Optional area in the SGA that provides large memory allocations for Oracle backup and restore operations, I/O server processes, and session memory for the shared server
Java pool: Used for all session-specific Java code and data within the Java Virtual Machine (JVM)
Streams pool: Used by Oracle Streams
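You can see the current sizes of these SGA components through the v$sgainfo view introduced with 10g; a minimal sketch:

```sql
-- Current sizes of the SGA components and whether they can be resized online
SELECT name, bytes/1024/1024 AS mbytes, resizeable
  FROM v$sgainfo
 ORDER BY bytes DESC;
```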

The Program Global Area (PGA) is a memory region that contains data and control information for each server process. A server process is a process that services a client's requests. Each server process has its own private PGA, which is created when the server process is started. Only a server process can access its own PGA. Generally, the PGA contains the following:
Private SQL area: Contains data such as bind information and run-time memory structures. Each session that issues a SQL statement has a private SQL area.
Session memory: Memory allocated to hold session variables and other information related to the session
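Similarly, v$pgastat summarizes how much PGA memory the instance is using against its configured target:

```sql
-- Overall PGA memory usage statistics for the instance
SELECT name, value, unit FROM v$pgastat;

-- The configured automatic PGA memory target
SHOW PARAMETER pga_aggregate_target
```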

b) Connecting to an Instance

When a user starts a tool such as SQL*Plus or connects to the database using an application, the application or tool is executed in a user process. When a user actually logs on to the Oracle database, a process is created on the computer running the Oracle database. The listener on the Oracle database actually establishes the connection and directs the request to an available server process. The server process communicates with the Oracle instance on behalf of the user process that runs on the client. The server process executes SQL statements on behalf of the user.

A connection is a communication pathway between a user process and an Oracle database. A database user can connect to an Oracle database in one of three ways:
• The user logs on to the machine running the Oracle instance and starts an application or tool that accesses the database on that system. The communication pathway is established using the interprocess communication mechanisms available on the host operating system.
• The user starts the application or tool on a local computer and connects over a network to the computer running the Oracle instance. In this configuration, called client/server, network software is used to communicate between the user and the Oracle database.
• In a three-tiered connection, the user’s computer communicates over the network to an application or a network server, which is connected through a network to the machine running the Oracle instance. For example, the user runs a browser on a network computer to use an application residing on an NT server that retrieves data from an Oracle database running on a UNIX host.

A session is a specific connection of a user to an Oracle database. The session starts when the user is validated by the Oracle database, and it ends when the user logs out or when there is an abnormal termination. For a given database user, many concurrent sessions are possible if the user logs on from many tools, applications, or terminals at the same time. Except for some specialized database administration tools, starting a database session requires that the Oracle database be available for use.

Good database connection management minimizes the number of connections and thereby increases scalability.
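Connections and sessions are visible in the v$session view; for example, a quick sketch to see how many concurrent sessions each database user holds (BACKGROUND rows belong to the instance's own processes):

```sql
-- One row per session: who is connected, from which program, and in what state
SELECT username, program, status, type FROM v$session;

-- Concurrent session count per database user
SELECT username, COUNT(*) AS session_count
  FROM v$session
 WHERE username IS NOT NULL
 GROUP BY username;
```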

c) Fast COMMIT and System Change Number

The Oracle Database uses a Fast COMMIT mechanism that guarantees committed changes can be recovered in case of instance failure. Whenever a transaction commits, the Oracle Database assigns a commit system change number (SCN) to the transaction. The SCN is monotonically incremented and is unique within the database. It is used by the Oracle Database as an internal time stamp to synchronize data and to provide read consistency when data is retrieved from the data files. Using the SCN enables the Oracle Database to perform consistency checks without depending on the date and time of the operating system.

When a COMMIT is issued, the following steps are performed:
• The server process places a commit record, along with the SCN, in the redo log buffer.
• The background Log Writer process (LGWR) performs a contiguous write of all the redo log buffer entries up to and including the commit record to the redo log files. After this point, the Oracle Database can guarantee that the changes will not be lost even if there is an instance failure.
• The server process provides feedback to the user process about the completion of the transaction.

The Database Writer process (DBWn) eventually writes the actual changes back to disk based on its own internal timing mechanism.
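With 10g the current SCN is easy to observe directly; assuming you have the needed privileges (execute on DBMS_FLASHBACK for the second query), both of these return the same monotonically increasing value:

```sql
-- Current SCN of the database
SELECT current_scn FROM v$database;

-- The same value through the supplied package
SELECT dbms_flashback.get_system_change_number FROM dual;
```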

d) Factors to be Managed and Top Oracle Performance Issues on this Architecture

Performance management can be divided into the following areas. Although the areas are separate, they are also interdependent and require different skill sets:

Schema tuning deals with the physical structure of the data. If an application has inadequate or inappropriate data design, then tuning the physical allocation, providing indexes, or rewriting programs will not overcome the problem.

Application tuning deals with such business requirements as 24/7 availability, OLAP, OLTP, and so on, as well as the program modules or applications that implement those functions. Tuning the procedural code for the type of application and tuning the embedded SQL statements are also included in this factor. Even a well-designed application may still perform badly; a common reason is badly written SQL.

Instance tuning deals with the configuration of the Oracle server for memory utilization.

Database tuning deals with managing the physical arrangement of data on the disk.

User expectations: Users usually expect consistent performance across all applications. However, they may accept that certain applications (such as OLAP operations) are slower if the project team builds realistic user expectations. An application may include messages to warn operators that they are requesting resource-intensive operations. The best time to set expectations is before the design and build phases and as part of the transition phase.

Hardware and network tuning deals with performance issues arising from the CPU and from network traffic on all machines supporting the application. The main hardware components are:
CPU: There can be one or more CPUs, and they can vary in processing power from simple CPUs found in hand-held devices to high-powered server CPUs. Sizing of other hardware components is usually a multiple of the CPUs on the system.
Memory: Databases require considerable amounts of memory to cache data and avoid time-consuming disk access.
I/O subsystem: The I/O subsystem can vary between the hard disk on a client PC and high-performance disk arrays. Disk arrays can perform thousands of I/Os each second and provide availability through redundancy in terms of multiple I/O paths and hot pluggable mirrored disks.
Network: The primary concerns with network specifications are bandwidth (volume) and latency (speed).

Top Oracle performance issues reported on Metalink, the Oracle global support site, are:

Bad connection management: The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. Simultaneous connections from the same client are also a waste of system and network resources.

Poor use of cursors and the shared pool: Not reusing cursors results in repeated parses. If bind variables are not used, then all SQL statements are hard parsed. This has an order-of-magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables: open the cursor once and execute it many times. Be suspicious of applications generating dynamic SQL.
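Part 1 of this series walked through a binding example; as a minimal reminder, the first loop below produces a distinct SQL text for every value and therefore hard parses each execution, while the second shares one cursor through a bind variable (table t with an object_id column is hypothetical here):

```sql
DECLARE
  l_cnt NUMBER;
BEGIN
  -- Bad: literal concatenated into the SQL text, so every execution hard parses
  FOR i IN 1 .. 1000 LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM t WHERE object_id = ' || i INTO l_cnt;
  END LOOP;

  -- Good: one shared cursor, parsed once and executed 1000 times
  FOR i IN 1 .. 1000 LOOP
    EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM t WHERE object_id = :b1' INTO l_cnt USING i;
  END LOOP;
END;
/
```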

Bad SQL: Bad SQL is SQL that uses more resources than appropriate for the application requirement. This can be a decision support systems (DSS) query that runs for more than 24 hours or a query from an online application that takes more than a minute. SQL that consumes significant system resources should be investigated for potential improvement. ADDM identifies high-load SQL, and the SQL Tuning Advisor can be used to provide recommendations for improvement.

Use of nonstandard initialization parameters: These might have been implemented based on poor advice or incorrect assumptions. Most systems give acceptable performance using only the set of basic parameters. In particular, parameters associated with SPIN_COUNT on latches and undocumented optimizer features can cause a great many problems that require considerable investigation.

I/O issues: If you configure your database to use multiple disks by disk space rather than by I/O bandwidth, then there will be excessive I/O to some disks and little I/O to others. Frequently and simultaneously accessed objects (such as a table and its index) should be stored on different disks.

Long full-table scans: Long full-table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization.

In-disk sorting: In-disk sorts for online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Disk sorts, by nature, are I/O-intensive and unscalable.
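Whether your instance is sorting in memory or on disk shows up in v$sysstat; a quick check:

```sql
-- Disk sorts for online operations are a red flag; compare the two counters
SELECT name, value FROM v$sysstat
 WHERE name IN ('sorts (memory)', 'sorts (disk)');
```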

High amounts of recursive SQL: Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.

Schema errors and optimizer problems: In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples are missing indexes or incorrect statistics. These errors can lead to suboptimal execution plans and poor interactive user performance. When migrating applications of known performance, you should export the schema statistics with the DBMS_STATS package to maintain plan stability.
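A sketch of that statistics migration with DBMS_STATS; the HR schema and the MY_STATS statistics table name are illustrative:

```sql
-- Create a user statistics table in the source database, then export into it
exec dbms_stats.create_stat_table(ownname => 'HR', stattab => 'MY_STATS');
exec dbms_stats.export_schema_stats(ownname => 'HR', stattab => 'MY_STATS');

-- After moving MY_STATS to the target database (e.g. with export/import),
-- load the statistics into the target data dictionary
exec dbms_stats.import_schema_stats(ownname => 'HR', stattab => 'MY_STATS');
```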

Continue reading with part 3

References Used:
Oracle® Database Concepts 10g Release 2 (10.2)

Expanded Supplied Packages with 10g – Part 3

In Part 1 I mentioned how PL/SQL is empowered by the new supplied packages and the importance of using them.

In Part 2 I mentioned;

1- Monitoring batch processes “kindly” – DBMS_APPLICATION_INFO package
2- Lempel-Ziv compression of RAW and BLOB data – UTL_COMPRESS package
3- Which Oracle version I am developing on – DBMS_DB_VERSION package and PL/SQL Conditional Compilation

So we continue with some more new packages;

4- Handling Data Errors with an Error Logging Table

DML error logging extends existing DML functionality by enabling you to specify the name of an error logging table into which Oracle should record errors encountered during DML operations. This enables you to complete the DML operation in spite of any errors, and to take corrective action on the erroneous rows at a later time.

Let's look at this simple example first. Here I want to load the data of all_tables into table T, but there is a check constraint, blocks < 10. Because of the atomicity rule, the insert statement below fails and the destination table T has no rows:


DROP TABLE t PURGE ;
CREATE TABLE t AS SELECT *
FROM all_tables WHERE 1=2;

ALTER TABLE t ADD CONSTRAINT pk_t
PRIMARY KEY (owner, table_name);

ALTER TABLE t ADD CONSTRAINT cc_t
CHECK (blocks < 10);

INSERT /*+ APPEND */ INTO t SELECT * FROM all_tables;

ORA-02290: check constraint (CLON.CC_T) violated

SELECT COUNT(*) FROM t;

COUNT(*)
----------
0

Prior to 10g we could handle this with PL/SQL exceptions, but that has a performance cost, which we will compare below. With 10g we can create an error log table on table T, and the rejected rows are automatically inserted into this table:


exec dbms_errlog.create_error_log('T');

desc err$_t

Name                      Type           Nullable Default Comments
------------------------- -------------- -------- ------- --------
ORA_ERR_NUMBER$           NUMBER         Y
ORA_ERR_MESG$             VARCHAR2(2000) Y
ORA_ERR_ROWID$            UROWID(4000)   Y
ORA_ERR_OPTYP$            VARCHAR2(2)    Y
ORA_ERR_TAG$              VARCHAR2(2000) Y
OWNER                     VARCHAR2(4000) Y
TABLE_NAME                VARCHAR2(4000) Y
TABLESPACE_NAME           VARCHAR2(4000) Y
CLUSTER_NAME              VARCHAR2(4000) Y
...

INSERT /*+ APPEND */ INTO t
SELECT * FROM all_tables
 LOG ERRORS REJECT LIMIT UNLIMITED;

COMMIT;

SELECT COUNT(*) FROM t;

COUNT(*)
----------
2386

SELECT COUNT(*) FROM err$_t;

COUNT(*)
----------
3238

The mandatory columns of the error log table are described as follows:


Column Name        Data Type        Description
-----------------  ---------------  ---------------------------------------------------
ORA_ERR_NUMBER$    NUMBER           Oracle error number
ORA_ERR_MESG$      VARCHAR2(2000)   Oracle error message text
ORA_ERR_ROWID$     UROWID(4000)     Rowid of the row in error (update and delete only)
ORA_ERR_OPTYP$     VARCHAR2(2)      Type of operation: I = insert, U = update, D = delete
ORA_ERR_TAG$       VARCHAR2(2000)   User-supplied tag

Since we always try to do the work in a single SQL statement if possible, let's compare this scenario with PL/SQL bulk operations:


drop table sales_src purge ;
drop table sales_target purge ;
drop table err$_sales_target purge ;
create table sales_src nologging parallel 2 as select * from all_source ;
create table sales_target nologging parallel 2 as select * from all_source where 1 = 2 ;

ALTER TABLE sales_target ADD CONSTRAINT cc_sales_target CHECK (line < 5000);

set serveroutput on SIZE UNLIMITED
set timing on

exec runstats_pkg.rs_start;

-- step 1 - pl/sql bulk operations and exception handling
DECLARE
  TYPE ARRAY IS TABLE OF sales_target%ROWTYPE INDEX BY BINARY_INTEGER;
  sales_src_arr ARRAY;
  errors        NUMBER;
  l_cnt         NUMBER := 0;
  l_bulk_limit  NUMBER := 1000;
  bulk_error    EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_error, -24381);
  CURSOR c IS SELECT * FROM sales_src;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO sales_src_arr LIMIT l_bulk_limit;
    BEGIN
      FORALL i IN 1 .. sales_src_arr.COUNT SAVE EXCEPTIONS
        INSERT /*+ APPEND */ INTO sales_target VALUES sales_src_arr (i);
    EXCEPTION
      WHEN bulk_error THEN
        errors := SQL%BULK_EXCEPTIONS.COUNT;
        l_cnt  := l_cnt + errors;
        FOR i IN 1 .. errors LOOP
          dbms_output.put_line('error iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX ||
                               ' oracle error IS ' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
        END LOOP;
    END;
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
  dbms_output.put_line(l_cnt || ' total errors');
END;
/
..
error iteration 877 oracle error IS 2290
error iteration 878 oracle error IS 2290
error iteration 879 oracle error IS 2290
1441 total errors

PL/SQL procedure successfully completed.

Elapsed: 00:00:18.39

ROLLBACK;

Rollback complete.

Elapsed: 00:00:13.40

exec runstats_pkg.rs_middle;

-- step 2 - Single Insert and Error Log Table Solution with 10g
exec dbms_errlog.create_error_log(upper('sales_target'));
INSERT /*+ APPEND */ INTO sales_target
SELECT * FROM sales_src
LOG ERRORS REJECT LIMIT UNLIMITED
/
153136 rows created.

Elapsed: 00:00:13.81

ROLLBACK;

Rollback complete.

Elapsed: 00:00:00.00

exec runstats_pkg.rs_stop(1000);

Run1 ran in 3183 hsecs
Run2 ran in 1437 hsecs
run 1 ran in 221,5% of the time

select count(*) from err$_sales_target ;

COUNT(*)
----------
1441

SELECT * FROM err$_sales_target WHERE rownum < 2;

ORA_ERR_NUMBER$ : 2290
ORA_ERR_MESG$   : ORA-02290: check constraint (HR.CC_SALES_TARGET) violated
ORA_ERR_ROWID$  :
ORA_ERR_OPTYP$  : I
ORA_ERR_TAG$    :
OWNER           : SYS
NAME            : DBMS_BACKUP_RESTORE
TYPE            : PACKAGE
LINE            : 5190
TEXT            : --     fname - Datafile copy name

5- Automatic SQL Tuning

Automatic SQL Tuning is a new capability of the query optimizer that automates the entire SQL tuning process. Using the newly enhanced query optimizer to tune SQL statements, the automatic process replaces manual SQL tuning, which is a complex, repetitive, and time-consuming function. The Automatic SQL Tuning features are exposed to the user with the SQL Tuning Advisor.

While the recommended interface for the SQL Tuning Advisor is the Oracle Enterprise Manager Database Control, the advisor can be administered with procedures in the DBMS_SQLTUNE package.

Running the SQL Tuning Advisor through the DBMS_SQLTUNE package is a multi-step process:

1. Create a SQL Tuning Set (if tuning multiple SQL statements)
2. Create a SQL tuning task
3. Execute a SQL tuning task
4. Display the results of a SQL tuning task
5. Implement recommendations as appropriate


set serveroutput on

-- Creating a SQL Tuning Task
DECLARE
  my_task_name VARCHAR2(30);
  my_sqltext   CLOB;
BEGIN
  my_sqltext := 'SELECT /*+ ORDERED */ * '                      ||
                'FROM employees e, locations l, departments d ' ||
                'WHERE e.department_id = d.department_id AND '  ||
                'l.location_id = d.location_id AND '            ||
                'e.employee_id < :bnd';

  -- uncomment to re-run
  -- DBMS_SQLTUNE.drop_tuning_task(task_name => 'my_sql_tuning_task');

  my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
    sql_text    => my_sqltext,
    bind_list   => sql_binds(anydata.ConvertNumber(100)),
    user_name   => 'HR',
    scope       => 'COMPREHENSIVE',
    time_limit  => 60,
    task_name   => 'my_sql_tuning_task',
    description => 'Task to tune a query on a specified employee');
END;
/

-- Executing a SQL Tuning Task
BEGIN
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'my_sql_tuning_task');
END;
/

-- Checking the Progress of the SQL Tuning Advisor
SELECT sofar, totalwork FROM V$ADVISOR_PROGRESS WHERE username = 'HR' AND task_id =
(SELECT task_id FROM USER_ADVISOR_TASKS WHERE task_name = 'my_sql_tuning_task');

-- Checking the Status of a SQL Tuning Task
SELECT task_id, status FROM USER_ADVISOR_TASKS WHERE task_name = 'my_sql_tuning_task';

-- Displaying the Results of a SQL Tuning Task
SET LONG 10000
SET LONGCHUNKSIZE 10000
SET LINESIZE 100
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('my_sql_tuning_task')
FROM DUAL;

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
--------------------------------------------------------------------------------
--------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name                  : my_sql_tuning_task
Tuning Task Owner                 : HR
Scope                             : COMPREHENSIVE
Time Limit(seconds)               : 60
Completion Status                 : COMPLETED
Started at                        : 01/05/2007 14:33:38
Completed at                      : 01/05/2007 14:33:43
Number of SQL Profile Findings    : 1
Number of SQL Restructure Findings: 1

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
--------------------------------------------------------------------------------
--------------------

-------------------------------------------------------------------------------
Schema Name: HR
SQL ID     : dg7nfaj0bdcvk
SQL Text   : SELECT /*+ ORDERED */ * FROM employees e, locations l,
departments d WHERE e.department_id = d.department_id AND
l.location_id = d.location_id AND e.employee_id < :bnd

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
----------------------------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.

Recommendation (estimated benefit: 33,55%)
------------------------------------------
- Consider accepting the recommended SQL profile.
execute dbms_sqltune.accept_sql_profile(task_name =>
'my_sql_tuning_task', replace => TRUE);

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
----------------------------------------------------------------------------------------------------
2- Restructure SQL finding (see plan 1 in explain plans section)
----------------------------------------------------------------
An expensive cartesian product operation was found at line ID 3 of the
execution plan.

Recommendation
--------------
- Consider removing the "ORDERED" hint.

Rationale
---------

DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_SQL_TUNING_TASK')
----------------------------------------------------------------------------------------------------
The "ORDERED" hint might force the optimizer to generate a cartesian
product. A cartesian product should be avoided whenever possible because
it is an expensive operation and might produce a large amount of data.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------
...

6- Hashing and Encryption Capabilities

DBMS_CRYPTO provides an interface to encrypt and decrypt stored data, and can be used in conjunction with PL/SQL programs running network communications. It provides support for several industry-standard encryption and hashing algorithms, including the Advanced Encryption Standard (AES) encryption algorithm. AES has been approved by the National Institute of Standards and Technology (NIST) to replace the Data Encryption Standard (DES).

While encryption is not the ideal solution for addressing a number of security threats, it is clear that selectively encrypting sensitive data before storage in the database does improve security. Examples of such data could include:

* Credit card numbers
* National identity numbers


conn / as sysdba
grant execute on dbms_crypto to hr;

conn hr/hr
set serveroutput on
DECLARE
  l_credit_card_no VARCHAR2(19) := '1234-5678-9012-3456';
  l_ccn_raw        RAW(128) := utl_raw.cast_to_raw(l_credit_card_no);
  l_key            RAW(128) := utl_raw.cast_to_raw('abcdefgh');
  l_encrypted_raw  RAW(2048);
  l_decrypted_raw  RAW(2048);
BEGIN
  dbms_output.put_line('Original : ' || l_credit_card_no);
  l_encrypted_raw := dbms_crypto.encrypt(l_ccn_raw,
                                         dbms_crypto.des_cbc_pkcs5,
                                         l_key);
  dbms_output.put_line('Encrypted : ' ||
                       RAWTOHEX(utl_raw.cast_to_raw(l_encrypted_raw)));
  l_decrypted_raw := dbms_crypto.decrypt(src => l_encrypted_raw,
                                         typ => dbms_crypto.des_cbc_pkcs5,
                                         key => l_key);
  dbms_output.put_line('Decrypted : ' ||
                       utl_raw.cast_to_varchar2(l_decrypted_raw));
END;
/

Original : 1234-5678-9012-3456
Encrypted :
38303843354144343732323245303145433338393341423337343543464345393641364335414434
4245454345424136
Decrypted : 1234-5678-9012-3456

PL/SQL procedure successfully completed.

DECLARE
  enc_val   RAW(2000);
  l_key     RAW(2000);
  l_key_len NUMBER := 128 / 8; -- convert bits to bytes
  l_mod     NUMBER := dbms_crypto.ENCRYPT_AES128 + dbms_crypto.CHAIN_CBC +
                      dbms_crypto.PAD_PKCS5;
BEGIN
  l_key   := dbms_crypto.randombytes(l_key_len);
  enc_val := dbms_crypto.encrypt(utl_i18n.string_to_raw('1234-5678-9012-3456',
                                                        'AL32UTF8'),
                                 l_mod,
                                 l_key);
  dbms_output.put_line(enc_val);
END;
/

D2743700936E23C1CA7CC802022BA7C7954ED0B2DAE274958B56ED6FF4A0DAE5

PL/SQL procedure successfully completed.

Another simple example, to demonstrate hashing and storing the hash in the database:


CREATE TABLE security (data RAW(2000));
/* Hash calculating function.
* Takes a string of data as its argument.
* Returns a hash in raw format.
*/
CREATE OR REPLACE FUNCTION return_hash(data IN VARCHAR) RETURN RAW IS
BEGIN
  /* Call the HASH() function, which takes data in RAW format.
   * Must use STRING_TO_RAW() to convert the data.
   * HASH_SH1 and HASH_MD5 are constants representing the types of hashes to calculate.
   */
  RETURN DBMS_CRYPTO.HASH(UTL_I18N.STRING_TO_RAW(data, 'AL32UTF8'), DBMS_CRYPTO.HASH_SH1);
END;
/
INSERT INTO security (data) VALUES (return_hash('TONGUC'));
SELECT * FROM security ;

DATA
--------------------------------------------------------------------------------
D8A3068DCFBC27341808B206F6CFFB674F7200EE

7- PL/SQL Compile-Time Warnings

To make your programs more robust and avoid problems at run time, you can turn on checking for certain warning conditions. These conditions are not serious enough to produce an error and keep you from compiling a subprogram. They might point out something in the subprogram that produces an undefined result or might create a performance problem.

To work with PL/SQL warning messages, you use the PLSQL_WARNINGS initialization parameter, the DBMS_WARNING package, and the USER/DBA/ALL_PLSQL_OBJECT_SETTINGS views.


set linesize 120
col name format a40
col value format a40

-- current warning state
-- PLSQL_WARNINGS enables or disables the reporting of warning messages by the PL/SQL compiler, and specifies which warning messages to show as errors.
-- Default value     'DISABLE:ALL'
SELECT name, value FROM gv$parameter
WHERE name LIKE 'plsql%warn%';

NAME                                     VALUE
---------------------------------------- ----------------------------------------
plsql_warnings                           DISABLE:ALL

-- current optimizer level for the instance
-- OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer.
-- Default value     If OPTIMIZER_FEATURES_ENABLE is set to 10.0.0 or higher, then 2
SELECT name, value FROM gv$parameter
WHERE name = 'optimizer_dynamic_sampling';

NAME                                     VALUE
---------------------------------------- ----------------------------------------
optimizer_dynamic_sampling               2

col plsql_code_type format a30
col plsql_debug format a30
col plsql_warnings format a50
col nls_length_semantics format a30

-- USER_PLSQL_OBJECT_SETTINGS displays compiler settings for the stored objects owned by the current user.
SELECT DISTINCT TYPE,
plsql_optimize_level,
plsql_code_type,
plsql_debug,
plsql_warnings,
nls_length_semantics
FROM user_plsql_object_settings;

TYPE         PLSQL_OPTIMIZE_LEVEL PLSQL_CODE_TYPE                PLSQL_DEBUG                    PLSQL_WARNINGS                                     NLS_LENGTH_SEMANTICS
------------ -------------------- ------------------------------ ------------------------------ -------------------------------------------------- ------------------------------
PACKAGE BODY                    2 INTERPRETED                    FALSE                          DISABLE:ALL                                        BYTE
PROCEDURE                       2 INTERPRETED                    FALSE                          ENABLE:INFORMATIONAL,DISABLE:PERFORMANCE,DISABLE:S BYTE
...

First, an example with the DBMS_WARNING package:


-- Check the current warning setting
SELECT DBMS_WARNING.get_warning_setting_string() FROM DUAL;

DBMS_WARNING.GET_WARNING_SETTING_STRING()
-----------------------------------------
DISABLE:ALL

-- When warnings disabled, the following procedure compiles with no warnings
CREATE OR REPLACE PROCEDURE unreachable_code AS
  l_dummy VARCHAR2(10) := '1';
BEGIN
  IF 1 = 1 THEN
    SELECT '2'
      INTO l_dummy
      FROM dual;
  ELSE
    RAISE_APPLICATION_ERROR(-20000, 'l_dummy != 1!');
  END IF;
END unreachable_code;
/

Procedure created.

-- enable all warning messages for this session
CALL DBMS_WARNING.set_warning_setting_string('ENABLE:ALL' ,'SESSION');

Call completed.

-- Recompile the procedure and a warning about unreachable code displays
ALTER PROCEDURE unreachable_code COMPILE;

SP2-0805: Procedure altered with compilation warnings

SHOW ERRORS;

Errors for PROCEDURE UNREACHABLE_CODE:

LINE/COL ERROR
-------- -----------------------------
9/9      PLW-06002: Unreachable code

Another example, this time with "alter session";


ALTER SESSION SET PLSQL_WARNINGS='DISABLE:ALL';

CREATE OR REPLACE PROCEDURE nocopy_warn(sa IN OUT dbms_sql.varchar2a) IS
BEGIN
dbms_output.put_line('This is a test');
END nocopy_warn;
/

Procedure created.

ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';

CREATE OR REPLACE PROCEDURE nocopy_warn(sa IN OUT dbms_sql.varchar2a) IS
BEGIN
dbms_output.put_line('This is a test');
END nocopy_warn;
/

SP2-0804: Procedure created with compilation warnings

SQL> sho err
Errors for PROCEDURE NOCOPY_WARN:

LINE/COL ERROR
-------- ------------------------------------------------------------
1/23     PLW-07203: parameter 'SA' may benefit from use of the NOCOPY
compiler hint

CREATE OR REPLACE PROCEDURE nocopy_warn(sa IN OUT NOCOPY dbms_sql.varchar2a) IS
BEGIN
dbms_output.put_line('This is a test');
END nocopy_warn;
/

Procedure created.

-- Instance or session level.
ALTER SYSTEM SET PLSQL_WARNINGS='ENABLE:ALL';
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';
ALTER SESSION SET PLSQL_WARNINGS='DISABLE:PERFORMANCE';

-- Recompile with extra checking.
ALTER PROCEDURE hello COMPILE PLSQL_WARNINGS='ENABLE:PERFORMANCE';

-- Set multiple values.
ALTER SESSION SET PLSQL_WARNINGS='ENABLE:SEVERE','DISABLE:PERFORMANCE','DISABLE:INFORMATIONAL';
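The enable/disable-by-category model above has rough analogues in other languages' warning systems. As a loose illustration only (this is Python's warnings filtering, not Oracle behavior), suppressing and re-enabling warning categories works in much the same spirit as DISABLE:ALL and ENABLE:ALL:

```python
import warnings

def deprecated_call():
    # Emit a warning, as a compiler or runtime would for questionable code
    warnings.warn("old API", DeprecationWarning)

# Analogue of "DISABLE:ALL" -- suppress every warning category
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore")
    deprecated_call()
print(len(caught))  # nothing recorded

# Analogue of "ENABLE:ALL" -- report every warning category
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecated_call()
print(len(caught))  # one warning recorded
```

As with PLSQL_WARNINGS, the filter is scoped: here to the `catch_warnings` context, in Oracle to the session, the system, or a single recompilation.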
Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
http://www.psoug.org/reference/dbms_errlog.html
Oracle® Database Performance Tuning Guide
10g Release 2 (10.2) Chapter 12 Automatic SQL Tuning

http://www.psoug.org/reference/dbms_sqltune.html
Oracle® Database Security Guide
10g Release 2 (10.2) Chapter 17 Developing Applications Using Data Encryption

http://www.psoug.org/reference/dbms_crypto.html
How To Encrypt Data in Oracle Using PHP by Larry Ullman
Oracle® Database PL/SQL User’s Guide and Reference 10g Release 2 (10.2) Chapter 10 Handling PL/SQL Errors
http://www.psoug.org/reference/dbms_warning.html

Logical I/O (consistent gets) and ARRAYSIZE relation with SQL*Plus

"SET ARRAYSIZE" sets the number of rows that SQL*Plus will fetch from the database at one time. Valid values are 1 to 5000. The effectiveness of setting ARRAYSIZE depends on how well Oracle Database fills network packets, and on your network latency and throughput. Here is a small test case;

conn hr/hr

drop table test_arraysize purge ;
create table test_arraysize nologging as select * from all_source ;
set serveroutput on
exec show_space(upper('test_arraysize'));

Unformatted Blocks ………………… 0
FS1 Blocks (0-25) ………………… 0
FS2 Blocks (25-50) ………………… 0
FS3 Blocks (50-75) ………………… 0
FS4 Blocks (75-100)………………… 0
Full Blocks ………………… 9,976
Total Blocks………………………. 10,240
Total Bytes……………………….. 83,886,080
Total MBytes………………………. 80
Unused Blocks……………………… 121
Unused Bytes………………………. 991,232
Last Used Ext FileId……………….. 4
Last Used Ext BlockId………………. 11,529
Last Used Block……………………. 903

PL/SQL procedure successfully completed.

set timing on
set autotrace traceonly statistics
show arraysize

arraysize 15 -- default value

select text from test_arraysize where owner = 'SYS' ;

set arraysize 5
select text from test_arraysize where owner = 'SYS' ;

set arraysize 50
select text from test_arraysize where owner = 'SYS' ;

set arraysize 100
select text from test_arraysize where owner = 'SYS' ;

Here is the information we have;

1- The query "select text from test_arraysize where owner = 'SYS'" returns 93963 rows in my case,
2- show_space reports 9976 Full Blocks and 121 Unused Blocks for the test_arraysize table,
3- The formula is : Consistent Gets ~= (number of rows / arraysize) + (Full Blocks - Unused Blocks)

Let's test the formula with this information and the results of the test scripts above;

** arraysize 15(default)
16155 consistent gets
Elapsed: 00:00:04.51

93963 rows / 15 arraysize = 6264,2
9976 Full Blocks – 121 Unused Blocks = 9855
6264,2 + 9855 = 16119,2

** arraysize 5
28233 consistent gets
Elapsed: 00:00:05.34

93963 rows / 5 arraysize = 18792,6
9976 Full Blocks – 121 Unused Blocks = 9855
18792,6 + 9855 = 28647,6

** arraysize 50
11819 consistent gets
Elapsed: 00:00:03.71

93963 rows / 50 arraysize = 1879,26
9976 Full Blocks – 121 Unused Blocks = 9855
1879,26 + 9855 = 11734,26

** arraysize 100
10904 consistent gets
Elapsed: 00:00:03.84

93963 rows / 100 arraysize = 939,63
9976 Full Blocks – 121 Unused Blocks = 9855
939,63 + 9855 = 10794,63
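The arithmetic above can be replayed in a few lines. This sketch only re-applies the rule-of-thumb formula to the measured values from the runs above; the 5% tolerance is my own choice for "close enough":

```python
# Rule of thumb: consistent gets ~= rows/arraysize + (full blocks - unused blocks)
ROWS = 93963          # rows returned by the test query
BLOCKS = 9976 - 121   # full blocks minus unused blocks = 9855

def estimated_consistent_gets(arraysize):
    return ROWS / arraysize + BLOCKS

# (arraysize, consistent gets measured by autotrace) pairs from the runs above
for arraysize, measured in [(15, 16155), (5, 28233), (50, 11819), (100, 10904)]:
    est = estimated_consistent_gets(arraysize)
    # every estimate lands within a few percent of the measured value
    print(f"arraysize {arraysize:>3}: estimated {est:8.1f}, measured {measured}")
```

The block-visit term dominates once arraysize is large, which is why the gains flatten out beyond a certain point.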

Response time changes with the arraysize, since the amount of logical I/O changes. Ok, so what is the impact of increasing arraysize further, say to 500;

set arraysize 15
select count(*) from test_arraysize where owner = 'SYS' ;

Elapsed: 00:00:02.82

set arraysize 500
select count(*) from test_arraysize where owner = 'SYS' ;

Elapsed: 00:00:02.96

Response time got worse; oversized arrays take more SQL*Plus memory, which may decrease overall performance. So the optimum arraysize for your query is something to test, no silver bullet again :)

Also in recent versions of SQL*Plus and Oracle Database, ARRAYSIZE may have little effect.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.2.0

References Used : "consistent gets — Very puzzling" askTom thread
show_space procedure on asktom

New Dictionary Enhancements with 10g – Part 3

In Part 1 I mentioned three dictionary enhancements;

1- OS Statistics and Finding System CPU Utilization with 10g
2- BLOCKING_SESSION_STATUS and BLOCKING_SESSION columns on V$SESSION
3- Identify Tracing Enabled Sessions

And in Part 2 I mentioned four more dictionary enhancements;

4- Tuning and monitoring enhancements; latch free and enqueue examples
5- What are the valid values for Oracle parameters?
6- Oracle Active Session History(ASH) samples
7- New views for usage pattern of the database

Now I will continue with some DBA stuff :)

8- Rman Views V$RMAN_BACKUP_JOB_DETAILS, V$RMAN_BACKUP_TYPE, V$RMAN_OUTPUT


col STATUS format a9
col hrs format 999.99
select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key ;

SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME           HRS
----------- ------------- --------- -------------- -------------- -------
          1 DATAFILE FULL COMPLETED 03/25/05 00:48 03/25/05 00:48     .00
          4 DB FULL       COMPLETED 03/27/05 02:09 03/27/05 02:11     .04
          7 DB FULL       FAILED    03/27/05 02:18 03/27/05 02:24     .10

col ins format a10
col outs format a10
select SESSION_KEY,
OPTIMIZED,
COMPRESSION_RATIO,
INPUT_BYTES_PER_SEC_DISPLAY ins,
OUTPUT_BYTES_PER_SEC_DISPLAY outs,
TIME_TAKEN_DISPLAY
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;

SESSION_KEY OPT COMPRESSION_RATIO        INS       OUTS TIME_TAKEN
----------- --- ----------------- ---------- ---------- ----------
          1 NO         2.23776224      3.33M      1.49M   00:00:06
          4 NO         1.31065794      6.92M      5.28M   00:02:16
          7 NO         1.32363058      3.68M      2.78M   00:06:00

select * from V$RMAN_BACKUP_TYPE;

    WEIGHT INPUT_TYPE
---------- -------------
         1 BACKUPSET
         2 SPFILE
         3 CONTROLFILE
         4 ARCHIVELOG
         5 DATAFILE INCR
         6 DATAFILE FULL
         7 DB INCR
         8 RECVR AREA
         9 DB FULL

select output
from v$rman_output
where session_key = 4
order by recid;

OUTPUT
----------------------------------------------------------------------
connected to target database: TEST (DBID=1849323268)
Starting backup at 27-MAR-05
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=201 devtype=DISK
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=/u01/app/oracle/oradata/TEST/system01.dbf
input datafile fno=00003 name=/u01/app/oracle/oradata/TEST/sysaux01.dbf
...

9- Using V$RECOVERY_FILE_DEST and V$FLASH_RECOVERY_AREA_USAGE

The V$RECOVERY_FILE_DEST and V$FLASH_RECOVERY_AREA_USAGE views can help you determine whether you have allocated enough space for your flash recovery area.


SELECT * FROM V$RECOVERY_FILE_DEST;

NAME            SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
--------------  ----------- ---------- ----------------- ---------------
/mydisk/rcva     5368709120 109240320             256000              28

select * from V$FLASH_RECOVERY_AREA_USAGE;

FILE_TYPE    PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE                   0                         0               0
ONLINELOG                   ,98                         0               2
ARCHIVELOG                    0                         0               0
BACKUPPIECE                   0                         0               0
IMAGECOPY                     0                         0               0
FLASHBACKLOG                  0                         0               0

6 rows selected.

10- Tablespace Usage Statistics in dba_tablespace_usage_metrics


select * from dba_tablespace_usage_metrics;

TABLESPACE_NAME                USED_SPACE TABLESPACE_SIZE USED_PERCENT
------------------------------ ---------- --------------- ------------
SYSAUX                              60672         4194302   1,44653389
SYSTEM                              53872           76800   70,1458333
TEMP                                    0         4194302            0
UNDO                                  160           64000          ,25
USERS                               16456          655360   2,51098633

5 rows selected.

Also, for historical information the DBA_HIST_TBSPC_SPACE_USAGE view can be used.

11- DB Startup time log


col instance_name format a15
col startup_time format a25
SELECT instance_name,startup_time
FROM dba_hist_database_instance;

INSTANCE_NAME   STARTUP_TIME
--------------- -------------------------
xe              21/11/2006 11:18:58,000
xe              28/11/2006 08:00:29,000
xe              27/11/2006 15:34:06,000
xe              30/11/2006 08:14:06,000
xe              05/12/2006 08:53:35,000
xe              06/12/2006 16:00:00,000
xe              07/12/2006 09:21:17,000
...

12- Redo log file size advice

The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depends on the redo log sizes. Generally, larger redo log files provide better performance; undersized log files increase checkpoint activity and reduce performance. V$INSTANCE_RECOVERY monitors the mechanisms available to limit recovery I/O.
The OPTIMAL_LOGFILE_SIZE column shows the redo log file size (in megabytes) that is considered optimal based on the current setting of FAST_START_MTTR_TARGET. It is recommended to configure all online redo logs to be at least this size.


select optimal_logfile_size from v$instance_recovery;

OPTIMAL_LOGFILE_SIZE
--------------------
43 -- size in MB

13- Supported Platforms Dictionary Views

V$TRANSPORTABLE_PLATFORM displays all platforms that support cross-platform tablespace transport. Specifically, it lists all platforms supported by the RMAN CONVERT TABLESPACE command, along with the endianness of each platform.

V$DB_TRANSPORTABLE_PLATFORM displays all platforms to which the database can be transported using the RMAN CONVERT DATABASE command. The transportable database feature only supports transports of the same endian platform. Therefore, V$DB_TRANSPORTABLE_PLATFORM displays fewer rows than V$TRANSPORTABLE_PLATFORM.


col PLATFORM_NAME format a35
col ENDIAN_FORMAT format a15
select PLATFORM_NAME , ENDIAN_FORMAT from v$transportable_platform ;

PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- ---------------
Solaris[tm] OE (32-bit)             Big
Solaris[tm] OE (64-bit)             Big
Microsoft Windows IA (32-bit)       Little
Linux IA (32-bit)                   Little
AIX-Based Systems (64-bit)          Big
HP-UX (64-bit)                      Big
HP Tru64 UNIX                       Little
HP-UX IA (64-bit)                   Big
Linux IA (64-bit)                   Little
HP Open VMS                         Little
Microsoft Windows IA (64-bit)       Little
IBM zSeries Based Linux             Big
Linux 64-bit for AMD                Little
Apple Mac OS                        Big
Microsoft Windows 64-bit for AMD    Little
Solaris Operating System (x86)      Little
IBM Power Based Linux               Big

17 rows selected.

col PLATFORM_NAME format a35
col ENDIAN_FORMAT format a15
select PLATFORM_NAME , ENDIAN_FORMAT from V$DB_TRANSPORTABLE_PLATFORM ;

PLATFORM_NAME                       ENDIAN_FORMAT
----------------------------------- -------------
Microsoft Windows IA (32-bit)       Little
Linux IA (32-bit)                   Little
HP Tru64 UNIX                       Little
Linux IA (64-bit)                   Little
HP Open VMS                         Little
Microsoft Windows IA (64-bit)       Little
Linux 64-bit for AMD                Little
Microsoft Windows 64-bit for AMD    Little
Solaris Operating System (x86)      Little

9 rows selected.

The v$database data dictionary view also gained two columns, PLATFORM_ID and PLATFORM_NAME:


select name, platform_name from   v$database;

NAME      PLATFORM_NAME
--------- ------------------------------
XE        Microsoft Windows IA (32-bit)

1 row selected.

To transport a tablespace from one platform to another, datafiles on different platforms must be in the same endian format (byte ordering).
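Endianness is simply byte ordering. As a quick standalone illustration (unrelated to Oracle's actual file formats), here is how the same 32-bit integer is laid out on big-endian versus little-endian platforms:

```python
import struct

value = 0x01020304

big    = struct.pack(">I", value)  # big-endian: most significant byte first
little = struct.pack("<I", value)  # little-endian: least significant byte first

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# The two layouts are byte-for-byte reversals of each other, which is why a
# datafile moving between platforms of opposite endianness needs a conversion
# step (RMAN CONVERT) before the target can read it.
assert big == bytes(reversed(little))
```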

To use RMAN TRANSPORT TABLESPACE, start the RMAN client, connecting to the source database and, if used, the recovery catalog. Then enter the TRANSPORT TABLESPACE command, specifying the required arguments. For example, to transport the tablespaces tbs_2 and tbs_3, use the TRANSPORT TABLESPACE command as follows:


transport tablespace tbs_2, tbs_3
tablespace destination '/disk1/transportdest'
auxiliary destination '/disk1/auxdest'
;

The following query checks whether the current platform supports cross-platform tablespace transport; if it returns a row, the platform is supported.


SELECT d.PLATFORM_NAME, ENDIAN_FORMAT
FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

-- The following is the query result from the source platform

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Solaris[tm] OE (32-bit)   Big

-- The following is the result from the target platform

PLATFORM_NAME             ENDIAN_FORMAT
------------------------- --------------
Microsoft Windows NT      Little

You can see that the endian formats are different and thus a conversion is necessary for transporting the tablespace set.

14- List of all Oracle background processes


SELECT NAME, description FROM v$bgprocess ;

NAME  DESCRIPTION
----- -------------------------------------------
PMON  process cleanup
DIAG  diagnosibility process
FMON  File Mapping Monitor Process
PSP0  process spawner 0
LMON  global enqueue service monitor
LMD0  global enqueue service daemon 0
LMS0  global cache service process 0
LMS1  global cache service process 1
LMS2  global cache service process 2
LMS3  global cache service process 3
LMS4  global cache service process 4
LMS5  global cache service process 5
LMS6  global cache service process 6
LMS7  global cache service process 7
LMS8  global cache service process 8
LMS9  global cache service process 9
LMSa  global cache service process 10
LMSb  global cache service process 11
LMSc  global cache service process 12
LMSd  global cache service process 13
LMSe  global cache service process 14
..

15- Naming Restore Points

In Oracle 10.2 and above restore points can be named; previously an SCN or a timestamp was required. The feature requires archiving and flashback to be enabled.

To enable flashback set the following parameters:


ALTER SYSTEM SET db_recovery_file_dest_size = 1000M;
ALTER SYSTEM SET db_recovery_file_dest = '';
ALTER SYSTEM SET db_flashback_retention_target = 1440;

-- Enable flashback mode
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;

Restore points can be;

Guaranteed restore points
– FLASHBACK must be enabled before creation
– Must be manually deleted
– Can restore beyond the time specified by the DB_FLASHBACK_RETENTION_TARGET parameter

Normal restore points
– Can be created before FLASHBACK is enabled
– Can be automatically deleted
– Cannot restore beyond the time specified by the DB_FLASHBACK_RETENTION_TARGET parameter


-- To create a restore point
CREATE RESTORE POINT restore_point1;
CREATE RESTORE POINT restore_point2 GUARANTEE FLASHBACK DATABASE;

-- To drop a restore point
DROP RESTORE POINT restore_point1;

-- To flashback database to a restore point
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT restore_point1;
ALTER DATABASE OPEN RESETLOGS;
-- To flashback a table to a restore point (a table name is required; my_table is a placeholder)
FLASHBACK TABLE my_table TO RESTORE POINT restore_point3;

New dynamic performance view; V$RESTORE_POINT

Name                         Type
---------------------------- -------------
SCN                          NUMBER
DATABASE_INCARNATION#        NUMBER
GUARANTEE_FLASHBACK_DATABASE VARCHAR2(3)
STORAGE_SIZE                 NUMBER
TIME                         TIMESTAMP(9)
NAME                         VARCHAR2(128)

Additional Note 1 – OEM Warning

The Enterprise Manager for Oracle 10g comes with a number of packages which need an additional license if you want to use them. These packages are as follows:
– Database Diagnostics Pack
– Database Tuning Pack
– Configuration Management Pack

Additional Note 2 – Some useful synonyms

CAT Synonym for USER_CATALOG
CLU Synonym for USER_CLUSTERS
COLS Synonym for USER_TAB_COLUMNS
DICT Synonym for DICTIONARY


column table_name format a25
column column_name format a25
column comments format a35
SELECT table_name, column_name, comments
FROM   dict_columns
WHERE  table_name like '%QUEUE%' and comments is not null
ORDER BY table_name, column_name, comments ;

TABLE_NAME                COLUMN_NAME               COMMENTS
------------------------- ------------------------- -----------------------------------
ALL_APPLY_ENQUEUE         DESTINATION_QUEUE_NAME    Name of the queue where events sati
sfying the rule will be enqueued
ALL_APPLY_ENQUEUE         RULE_NAME                 Name of the rule
ALL_APPLY_ENQUEUE         RULE_OWNER                Owner of the rule…
...

Additional Note 3 – {ORACLE_HOME}/rdbms/admin

catalog.sql Builds the data dictionary views
catblock.sql Creates views that dynamically display lock dependency graphs
caths.sql Installs packages for administering heterogeneous services
catio.sql Allows I/O to be traced on a table-by-table basis

Check this list; http://www.psoug.org/reference/files.html

Additional Note 4 – Data dictionary flash file :)
Data dictionary for Oracle 10g

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.2.0

Refences Used : Oracle® Database Reference 10g Release 2 (10.2)
Oracle® Database New Features Guide 10g Release 2 (10.2)
How Oracle 10g makes life easier in a day to day administration.
Oracle 10g Enterprise Manager Packages usage and licensing
New Cross-Platform Transportable Tablespaces
Julian Dyke Presentations

There are more than one INDEX Type Part 1

Oracle uses B-trees to store indexes to speed up data access. With no indexes, you have to do a sequential scan on the data to find a value. For n rows, the average number of rows searched is n/2. This does not scale very well as data volumes increase.
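The scaling argument can be seen in miniature. This sketch compares the average comparisons of a sequential scan (n/2) with a binary search over sorted keys (about log2 n), which is the idea a B-tree generalizes to disk blocks:

```python
import math
from bisect import bisect_left

n = 1_000_000
keys = list(range(n))  # keys kept in sorted order, as an index maintains them

# Sequential scan: on average n/2 rows examined to find a random key
avg_sequential = n / 2

# Binary search over sorted keys: at most about log2(n) comparisons
max_binary = math.ceil(math.log2(n))

print(avg_sequential)  # 500000.0
print(max_binary)      # 20

# Sanity check: bisect locates a key using the same sorted-order principle
assert keys[bisect_left(keys, 424242)] == 424242
```

For a million keys that is 500,000 examined rows on average versus about 20 comparisons, which is why indexed access scales while sequential scanning does not.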

When you create an index, Oracle fetches and sorts the columns to be indexed and stores the rowid along with the index value for each row. Then Oracle loads the index from the bottom up. For example, consider the statement:

CREATE INDEX employees_last_name ON employees(last_name);

Oracle sorts the employees table on the last_name column. It then loads the index with the last_name and corresponding rowid values in this sorted order. When it uses the index, Oracle does a quick search through the sorted last_name values and then uses the associated rowid values to locate the rows having the sought last_name value.

B*Tree indexes are the default index type in Oracle. They are;

– Based on modified B*Tree algorithm,
– Contain branch blocks and leaf blocks,
– Blocks contain keys and data,
– Keys maintained in sorted order within blocks,
– All leaf blocks are at the same depth,
– All blocks are on average 75% full,
– Maximum number of B*tree levels is 24
– Maximum number of columns is 16 in 7.3 and below; 32 in 8.0 and above

This figure describes the Internal Structure of a B-tree Index.

There are several recent variants of B*tree indexes including;

Type                   Introduced
---------------------- ----------
Bitmap Indexes         7.3.2
Index Organised Table  8.0
Partitioned Indexes    8.0
Reverse Key            8.0
LOB Index              8.0
Compressed             8.1.5
Function-Based Indexes 8.1.5
Descending             8.1.5
Virtual Indexes        8.1.5
Bitmap Join Indexes    9.0.1

1- Virtual / NoSegment Indexes

An undocumented "fake" index type. A virtual index is a non-physical (no-segment) index, useful for evaluating whether the optimizer would benefit from an index before creating a physical one;


DROP TABLE tbl_virtual PURGE ;
CREATE TABLE tbl_virtual AS
SELECT table_name, tablespace_name FROM all_tables;

CREATE INDEX vix_tbl_virtual
ON tbl_virtual(table_name) NOSEGMENT;

SELECT segment_name FROM user_segments
WHERE segment_name like '%VIRTUAL%' ;

SEGMENT_NAME
-----------------------------------------------------------------------
TBL_VIRTUAL -- only the table has a segment

SELECT index_name, index_type FROM user_indexes
WHERE index_name = 'VIX_TBL_VIRTUAL'; -- Virtual indexes have a data dictionary definition, but no associated segment

no rows selected

SELECT column_name FROM user_ind_columns
WHERE index_name = 'VIX_TBL_VIRTUAL';

COLUMN_NAME
-----------------------------------------------------------------------
TABLE_NAME

SELECT object_name FROM user_objects
WHERE object_name = 'VIX_TBL_VIRTUAL' ;

OBJECT_NAME
-----------------------------------------------------------------------
VIX_TBL_VIRTUAL

exec dbms_stats.gather_table_stats('HR', 'TBL_VIRTUAL', CASCADE=>TRUE);
commit;

alter session set "_use_nosegment_indexes" = TRUE;

SET AUTOTRACE TRACEONLY
SELECT * FROM tbl_virtual
WHERE table_name = 'T1';
SET AUTOTRACE OFF

.. INDEX RANGE SCAN| VIX_TBL_VIRTUAL ..

ALTER INDEX VIX_TBL_VIRTUAL REBUILD
*
ERROR at line 1:
ORA-08114: can not alter a fake index :)

Continue reading with Part 2

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Index Internals presentation by Mr.Julian Dyke

Oracle Native Types Part 1

1- ANYTYPE, ANYDATA and ANYDATASET

The Any types provide highly flexible modeling of procedure parameters and table columns where the actual type is not known. These datatypes let you dynamically encapsulate and access type descriptions, data instances, and sets of data instances of any other SQL type. These types have OCI and PL/SQL interfaces for construction and access.

ANYTYPE : This type can contain a type description of any named SQL type or unnamed transient type.

ANYDATA : This type contains an instance of a given type, with data, plus a description of the type. ANYDATA can be used as a table column datatype and lets you store heterogeneous values in a single column. The values can be of SQL built-in types as well as user-defined types.

ANYDATASET : This type contains a description of a given type plus a set of data instances of that type. ANYDATASET can be used as a procedure parameter datatype where such flexibility is needed. The values of the data instances can be of SQL built-in types as well as user-defined types.

Also in $ORACLE_HOME/rdbms/admin/dbmsany.sql you can find the package DBMS_TYPES and types SYS.AnyData etc.


-- DROP TABLE tst_tbl purge ;
-- DROP TYPE tst_typ ;
CREATE OR REPLACE TYPE tst_typ AS OBJECT (mycol sys.anyData ) ;
/

CREATE TABLE tst_tbl OF tst_typ;

INSERT INTO tst_tbl
VALUES (sys.anyData.convertNumber(5));

INSERT INTO tst_tbl
VALUES (sys.anyData.convertDate(SYSDATE));

INSERT INTO tst_tbl
VALUES (sys.anyData.convertVarchar2('hello world'));

COMMIT;

col typename format a20
SELECT mytab.mycol.gettypeName() typeName
FROM tst_tbl mytab;

TYPENAME
--------------------
SYS.NUMBER
SYS.DATE
SYS.VARCHAR2

CREATE OR REPLACE FUNCTION getData(p_x IN sys.anyData)
  RETURN VARCHAR2 IS
  l_num      NUMBER;
  l_date     DATE;
  l_varchar2 VARCHAR2(4000);
BEGIN
  CASE p_x.gettypeName
    WHEN 'SYS.NUMBER' THEN
      IF (p_x.getNumber(l_num) = dbms_types.success) THEN
        l_varchar2 := l_num;
      END IF;
    WHEN 'SYS.DATE' THEN
      IF (p_x.getDate(l_date) = dbms_types.success) THEN
        l_varchar2 := l_date;
      END IF;
    WHEN 'SYS.VARCHAR2' THEN
      IF (p_x.getVarchar2(l_varchar2) = dbms_types.success) THEN
        NULL;
      END IF;
    ELSE
      l_varchar2 := '** unknown **';
  END CASE;
  RETURN l_varchar2;
END getData;
/

col getdata format a20
SELECT getData(mycol) GETDATA
FROM tst_tbl ;

GETDATA
--------------------
5
05/12/2006
hello world
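Conceptually, an ANYDATA value is a self-describing container: a type tag plus the data itself, with accessors dispatched on the tag. As a loose analogy only (Python's runtime typing, not Oracle's API), the getData dispatch above resembles this kind of tag-based rendering:

```python
import datetime

def get_data(value):
    """Render a heterogeneous value as text by dispatching on its runtime
    type -- a rough analogue of getData's CASE on gettypeName."""
    if isinstance(value, bool):        # bool is an int subtype; treat it as unknown
        return "** unknown **"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, datetime.date):
        return value.strftime("%d/%m/%Y")
    if isinstance(value, str):
        return value
    return "** unknown **"

# Mirror the heterogeneous tst_tbl column: a number, a date, a string
column = [5, datetime.date(2006, 12, 5), "hello world"]
for v in column:
    print(get_data(v))
```

The key point in both cases: the column (or parameter) holds values of different types, and the consumer asks each value what it is before deciding how to handle it.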

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Oracle® Database PL/SQL Packages and Types Reference 10g Release 2 (10.2)
http://www.psoug.org/reference/anydata.html
http://mennan.kagitkalem.com/ANYDATATypeInOracle.aspx

2- Binary_double vs. Number in compute intensive processes

Oracle provides two numeric datatypes exclusively for floating-point numbers: BINARY_FLOAT and BINARY_DOUBLE. They support all of the basic functionality provided by the NUMBER datatype. However, while NUMBER uses decimal precision, BINARY_FLOAT and BINARY_DOUBLE use binary precision. This enables faster arithmetic calculations and usually reduces storage requirements.

BINARY_FLOAT and BINARY_DOUBLE are approximate numeric datatypes. They store approximate representations of decimal values, rather than exact representations. For example, the value 0.1 cannot be exactly represented by either BINARY_DOUBLE or BINARY_FLOAT. They are frequently used for scientific computations. Their behavior is similar to the datatypes FLOAT and DOUBLE in Java and XMLSchema.

BINARY_FLOAT is a 32-bit, single-precision floating-point number datatype. Each BINARY_FLOAT value requires 5 bytes, including a length byte. BINARY_DOUBLE is a 64-bit, double-precision floating-point number datatype. Each BINARY_DOUBLE value requires 9 bytes, including a length byte. BINARY_DOUBLE and BINARY_FLOAT implement most of the Institute of Electrical and Electronics Engineers (IEEE) Standard for Binary Floating-Point Arithmetic, IEEE Standard 754-1985 (IEEE754).
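The 0.1 example is easy to demonstrate outside the database. Python's float is the same IEEE 754 binary double format as BINARY_DOUBLE, while the decimal module gives NUMBER-style exact decimal behavior (an analogy for the precision model, not Oracle's implementation):

```python
from decimal import Decimal

# Binary double: 0.1 has no finite base-2 representation
print(repr(0.1 + 0.1 + 0.1))   # 0.30000000000000004
print(0.1 + 0.1 + 0.1 == 0.3)  # False

# Decimal precision (NUMBER-style behavior): exact
print(Decimal("0.1") * 3 == Decimal("0.3"))  # True

# The nearest binary double to 0.1 is actually slightly above it
print(Decimal(0.1))
```

This is the trade-off the document describes: binary formats compute faster (hardware arithmetic), decimal formats represent decimal fractions exactly.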


set serveroutput on
declare

  function pi_with_number return number as
    subtype my_number is NUMBER;
    last_pi my_number := 0;
    delta   my_number := 0.000001;
    pi      my_number := 1;
    denom   my_number := 3;
    oper    my_number := -1;
    negone  my_number := -1;
    two     my_number := 2;
  begin
    loop
      last_pi := pi;
      pi := pi + oper * 1/denom;
      exit when (abs(last_pi-pi) <= delta);
      denom := denom + two;
      oper := oper * negone;
    end loop;
    return pi * 4;
  end;

  function pi_with_binary_double return number as
    subtype my_number is BINARY_DOUBLE;
    last_pi my_number := 0;
    delta   my_number := 0.000001;
    pi      my_number := 1;
    denom   my_number := 3;
    oper    my_number := -1;
    negone  my_number := -1;
    two     my_number := 2;
  begin
    loop
      last_pi := pi;
      pi := pi + oper * 1/denom;
      exit when (abs(last_pi-pi) <= delta);
      denom := denom + two;
      oper := oper * negone;
    end loop;
    return pi * 4;
  end;

begin
  runStats_pkg.rs_start;
  dbms_output.put_line( pi_with_number );
  runStats_pkg.rs_middle;
  dbms_output.put_line( pi_with_binary_double );
  runStats_pkg.rs_stop(1000);
end;
/

3,14159465358579324446263938327350288021
3,1415946535856922

Run1 ran in 103 hsecs
Run2 ran in 40 hsecs
run 1 ran in 257,5% of the time

Name                                  Run1        Run2        Diff
LATCH.shared pool simulator          1,167           1      -1,166
STAT...session uga memory           65,464           0     -65,464
STAT...session pga memory           65,536           0     -65,536
STAT...session uga memory max      261,964      65,464    -196,500

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
3,012         901      -2,111    334.30%

PL/SQL procedure successfully completed.

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Oracle® Database PL/SQL User’s Guide and Reference 10g Release 2 (10.2)
http://asktom.oracle.com/tkyte/runstats.html

3- XMLType and XDB

With Oracle9iR2, the XML and SQL worlds were integrated in Oracle. This Oracle-supplied native type (XMLType) can be used to store and query XML data in the database. XMLType has member functions you can use to access, extract, and query the XML data using XPath expressions. XPath is another standard developed by the W3C committee to traverse XML documents. Oracle XMLType functions support many W3C XPath expressions. Oracle also provides a set of SQL functions and PL/SQL packages to create XMLType values from existing relational or object-relational data.

XMLType is a system-defined type, so you can use it as an argument of a function or as the datatype of a table or view column. You can also create tables and views of XMLType. When you create an XMLType column in a table, you can choose to store the XML data in a CLOB column or object relationally.

You can also register the schema (using the DBMS_XMLSCHEMA package) and create a table or column conforming to the registered schema. In this case Oracle stores the XML data in underlying object-relational columns by default, but you can specify storage in a CLOB column even for schema-based data.

createXML: creates an XMLtype instance from a string or CLOB
existsNode: returns 1 if the given XPath expression returns any result nodes
extract: applies an XPath expression over the XML data to return a XMLType instance containing the resultant fragment
isFragment: returns 1 if the XMLtype contains a fragment
getCLOBval, getStringval, getNumberval: returns an XML document or fragment as CLOB, string or number
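The XPath idea itself is independent of Oracle. As a small illustration using Python's standard library (not Oracle's XMLType API), extracting and testing nodes in a Warehouse document like the one stored later in this section looks like:

```python
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0"?>
<Warehouse>
  <WarehouseId>1</WarehouseId>
  <WarehouseName>Southlake, Texas</WarehouseName>
  <Docks>2</Docks>
</Warehouse>"""

root = ET.fromstring(doc)

# extract-style: pull a node's text out with a path expression
print(root.findtext("WarehouseName"))    # Southlake, Texas

# existsNode-style: does the path match any node?
print(root.find("Docks") is not None)    # True
print(root.find("Parking") is not None)  # False
```

ElementTree supports only a subset of XPath, but the pattern of "apply a path expression, get back nodes or text" is the same one the XMLType member functions expose in SQL.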

You can verify that XML DB has been installed by simply running the SQL below:


select comp_name from dba_registry where comp_name like '%XML%';

COMP_NAME
--------------------
Oracle XML Database

The XML DB schema and its objects can also be checked to see that Oracle XML DB has been installed.

Here are some XML data store and query examples;


DROP TABLE xwarehouses PURGE ;
CREATE TABLE xwarehouses OF XMLTYPE;

INSERT INTO xwarehouses VALUES
(xmltype('<?xml version="1.0"?>
<Warehouse>
<WarehouseId>1</WarehouseId>
<WarehouseName>Southlake, Texas</WarehouseName>
<Building>Owned</Building>
<Area>25000</Area>
<Docks>2</Docks>
<DockType>Rear load</DockType>
<WaterAccess>true</WaterAccess>
<RailAccess>N</RailAccess>
<Parking>Street</Parking>
<VClearance>10</VClearance>
</Warehouse>'));

set pagesize 0
select e.getstringval() from xwarehouses e;

<?xml version="1.0"?>
<Warehouse>
<WarehouseId>1</WarehouseId>
<WarehouseName>Southlake, Texas</WarehouseName>
<Building>Owned</Building>
<Area>25000</Area>
<Docks>2</Docks>
<DockType>Rear load</DockType>
<WaterAccess>true</WaterAccess>
<RailAccess>N</RailAccess>
<Parking>Street</Parking>
<VClearance>10</VClearance>
</Warehouse>

SELECT e.getClobVal() FROM xwarehouses e;

E.GETCLOBVAL()
---------------------------------

<?xml version="1.0"?>
<Warehouse>
<WarehouseId>1</WarehouseId>
<Wareho..

-- Creating an XMLType View: In some cases you may have an object-relational table upon which you would like to build an XMLType view.
CREATE TABLE warehouse_table
(
WarehouseID       NUMBER,
Area              NUMBER,
Docks             NUMBER,
DockType          VARCHAR2(100),
WaterAccess       VARCHAR2(10),
RailAccess        VARCHAR2(10),
Parking           VARCHAR2(20),
VClearance        NUMBER
);

INSERT INTO warehouse_table
VALUES(5, 103000,3,'Side Load','false','true','Lot',15);

CREATE VIEW warehouse_view OF XMLTYPE
-- XMLSCHEMA "http://www.oracle.com/xwarehouses.xsd"
--    ELEMENT "Warehouse"
WITH OBJECT ID
(extract(OBJECT_VALUE, '/Warehouse/Area/text()').getnumberval())
AS SELECT XMLELEMENT("Warehouse",
XMLFOREST(WarehouseID as "Building",
area as "Area",
docks as "Docks",
docktype as "DockType",
wateraccess as "WaterAccess",
railaccess as "RailAccess",
parking as "Parking",
VClearance as "VClearance"))
FROM warehouse_table;

SELECT VALUE(e) FROM warehouse_view e;

VALUE(E)
--------------------------------------------------------------------------------

<Warehouse><Building>5</Building><Area>103000</Area><Docks>3</Docks><DockType>Si..

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 26 Native Datatypes
Oracle® XML DB Developer's Guide 10g Release 2 (10.2)
Steven Feuerstein presentation - NINE NEW and NIFTY NUANCES OF ORACLE 9i PL-SQL
"Getting into SQL/XML" OTN article by Tim Quinlan

Related Note : How to give datatypes to columns with CREATE TABLE AS SELECT?


drop table test purge;
create table test as
select sysdate dcol, user vcol1, ' ' vcol2, ' ' vcol3, 0 ncol1, 5.5 ncol2
from dual ;

describe test

Name                                      Null?    Type
----------------------------------------- -------- ------------

DCOL                                               DATE
VCOL1                                              VARCHAR2(30)
VCOL2                                              CHAR(1)
VCOL3                                              CHAR(1)
NCOL1                                              NUMBER
NCOL2                                              NUMBER

One solution is to use the CAST function; CAST converts one built-in datatype or collection-typed value into another built-in datatype or collection-typed value:


drop table test purge;
create table test as
select cast( null as varchar2(10) ) a,
cast( null as date ) b,
cast( null as number(5,2) ) c
from dual;

describe test

Name                                      Null?    Type
----------------------------------------- -------- ------------

A                                                  VARCHAR2(10)
B                                                  DATE
C                                                  NUMBER(5,2)

Continue reading with Part 2

Oracle Concepts and Architecture – Part 1

Recently I posted three entries in this category;

1- Oracle 10g Flashback Versions Query, Drop Table and Recyclebin Management
2- How shared pool works in Oracle and binding example
3- The Storage Hierarchy Summary in an Oracle Database

4- Concurrency and Isolation Levels Concepts

Databases use locking to ensure transactional integrity and database consistency. Locking prevents users from reading data being changed by other users, and prevents multiple users from changing the same data at the same time. Without locking, data within the database may become logically incorrect, and queries executed against that data may produce unexpected results.

If locking is not available and several users access a database concurrently, problems may occur if their transactions use the same data at the same time.

* Dirty Reads; A dirty read occurs when a transaction can see uncommitted changes to a row. If another transaction changes a value, your transaction can read that changed value, but the other transaction might then roll back, making the value you read invalid, or dirty.

* Non-repeatable Reads; A non-repeatable read occurs when a row not updated during the transaction is read twice within that transaction, and different results are seen from the two reads. If your transaction reads a value, and another transaction commits a change to that value (or deletes the row), then your transaction could read the changed value (or find the row missing) even though your transaction has not committed or rolled back.

* Phantom Reads; A phantom read occurs when a transaction re-executes a query and sees rows that were inserted and committed by another transaction in the meantime. If another transaction inserts and commits a new row, your transaction's second execution of the same query can return that new row even though the first execution did not.

The ANSI/ISO SQL standard SQL92 defines three possible kinds of transaction interaction, and four levels of isolation that provide increasing protection against these interactions. These interactions and isolation levels are:

Isolation Level    Dirty Read    Non-Repeatable Read  Phantom Read
READ UNCOMMITTED   Possible      Possible             Possible
READ COMMITTED     Not possible  Possible             Possible
REPEATABLE READ    Not possible  Not possible         Possible
SERIALIZABLE       Not possible  Not possible         Not possible

The behavior of Oracle is:

READ UNCOMMITTED; Oracle never permits “dirty reads.” Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle.

READ COMMITTED; Oracle meets the READ COMMITTED isolation standard. This is the default mode for all Oracle applications. Because an Oracle query only sees data that was committed at the beginning of the query (the snapshot time), Oracle actually offers more consistency than is required by the ANSI/ISO SQL92 standards for READ COMMITTED isolation.

REPEATABLE READ; Oracle does not normally support this isolation level, except as provided by SERIALIZABLE.

SERIALIZABLE; You can set this isolation level using the SET TRANSACTION command or the ALTER SESSION command.
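As a minimal sketch, serializable isolation can be requested either per transaction or for the whole session:

```sql
-- per transaction: must be the first statement of the transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- or for every transaction in the session
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;
```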

References Used :
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 13 Data Concurrency and Consistency
Akadia Information Technology Publications

5- Automated Checkpoint Tuning and Mean time to recover(MTTR) Concepts

Checkpointing is an important Oracle activity that records the highest system change number (SCN) such that all data blocks with changes at or below that SCN are known to be written out to the data files. (An SCN is a stamp that defines a committed version of the database at a point in time; Oracle assigns every committed transaction a unique SCN.) If there is a failure and then subsequent cache recovery, only the redo records containing changes at SCNs higher than the checkpoint need to be applied during recovery.

Instance and crash recovery happens in two steps – cache recovery followed by transaction recovery. During the cache recovery phase, also known as the rolling forward stage, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database and the time between checkpoints.

Fast-start recovery can greatly reduce the mean time to recover (MTTR): the desired time required to perform instance or media recovery on the database, with minimal effect on online application performance. Oracle continuously estimates the recovery time and automatically adjusts the checkpointing rate to meet the target recovery time.

With 10g, the Oracle database can now self-tune check-pointing to achieve good recovery times with low impact on normal throughput. You no longer have to set any checkpoint-related parameters. This method reduces the time required for cache recovery and makes the recovery bounded and predictable by limiting the number of dirty buffers and the number of redo records generated between the most recent redo record and the last checkpoint. Administrators specify a target (bounded) time to complete the cache recovery phase of recovery with the FAST_START_MTTR_TARGET initialization parameter, and Oracle automatically varies the incremental checkpoint writes to meet that target.

The TARGET_MTTR field of V$INSTANCE_RECOVERY contains the MTTR target in effect. The ESTIMATED_MTTR field of V$INSTANCE_RECOVERY contains the estimated MTTR should a crash happen right away.


SELECT TARGET_MTTR,
ESTIMATED_MTTR,
CKPT_BLOCK_WRITES
FROM V$INSTANCE_RECOVERY;

TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES
----------- -------------- -----------------
          0             16           1611201

Whenever you set FAST_START_MTTR_TARGET to a nonzero value, set the following parameters to 0.

LOG_CHECKPOINT_TIMEOUT = 0
LOG_CHECKPOINT_INTERVAL = 0
FAST_START_IO_TARGET = 0

Because these initialization parameters either override FAST_START_MTTR_TARGET or potentially drive checkpoints more aggressively than FAST_START_MTTR_TARGET does, they can interfere with its self-tuning.
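A sketch of the corresponding settings (the 60-second target here is illustrative, not a recommendation):

```sql
-- zero out the parameters that would override the MTTR target
ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 0;
ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL = 0;
ALTER SYSTEM SET FAST_START_IO_TARGET = 0;

-- let Oracle self-tune incremental checkpointing toward a 60-second cache recovery target
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 60;
```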

References Used :
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 14 Manageability
Akadia Information Technology Publications

6- Concurrency and Consistency Concepts

Performance is concerned with avoiding unnecessary work on limited system resources; scalability is concerned with avoiding blocking others doing the same operations (serialization). These need different kinds of attention, and the second is especially important for heavily loaded OLTP systems. Sometimes you may sacrifice one to gain in the other.

In a single-user database, the user can modify data in the database without concern for other users modifying the same data at the same time. However, in a multiuser database, the statements within multiple simultaneous transactions can update the same data. Transactions executing at the same time need to produce meaningful and consistent results. Therefore, control of data concurrency and data consistency is vital in a multiuser database.

  • Data concurrency means that many users can access data at the same time.
  • Data consistency means that each user sees a consistent view of the data, including visible changes made by the user’s own transactions and transactions of other users.

In Oracle, locking is row level by default, which makes Oracle a highly concurrent database system. With its undo mechanism, readers do not block writers and writers do not block readers. Writers wait only for other writers that attempt to update identical rows in concurrent transactions. A simple update query in Oracle does a lot of work behind the scenes to guarantee all this, and in such a system your primary performance target is I/O.

Oracle also gives you a high degree of read consistency: if you start a long-running query, you do not see the changes of other users until your query ends, and the system protects you from seeing uncommitted data. This, too, is handled with the undo mechanism.

In Oracle you learn that concurrency and consistency can be achieved quickly and correctly, every time.

References Used :
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 13 Data Concurrency and Consistency

7- Latch and Lock Concepts

Recently I bought this book;
Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning
Table of Contents

Really a nice handbook that makes the life of Oracle performance tuning people easier. In Chapter 6: Interpreting Locks-Related Wait Events, the topic is Concurrency vs. Integrity. In DBMS terms this is the “I” of the ACID properties (Atomicity, Consistency, Isolation and Durability). In real-world applications, some concurrency must be sacrificed for the sake of integrity. The Oracle SGA architecture could be a lot simpler if concurrent access were the only goal :)

A latch is a form of lock. Although both are locking mechanisms, latches differ from locks (enqueues) in several ways; Table 6-1: Latches vs. Locks.xls

Purpose
  Latches: serve a single purpose: to provide exclusive access to memory structures.
  Locks: serve two purposes: to allow multiple processes to share the same resource when lock modes are compatible, and to enforce exclusive access to the resource when the lock modes are incompatible.

Jurisdiction
  Latches: apply only to data structures in the SGA. Protect memory objects, which are temporary. Control access to a memory structure for a single operation. Not transactional.
  Locks: protect database objects such as tables, data blocks, and state objects. Application driven; control access to data or metadata in the database. Transactional.

Acquisition
  Latches: can be requested in two modes: willing-to-wait or no-wait.
  Locks: can be requested in six different modes: null, row share, row exclusive, share, share row exclusive, or exclusive.

Scope
  Latches: information is kept in memory and is visible only to the local instance; latches operate at the instance level.
  Locks: information is kept in the database and is visible to all instances accessing the database; locks operate at the database level.

Complexity
  Latches: implemented using simple instructions, typically test-and-set or compare-and-swap. The implementation is port specific because the CPU instructions are machine dependent. Lightweight.
  Locks: implemented using a series of instructions with context switches. Heavyweight.

Duration
  Latches: held briefly (in microseconds).
  Locks: normally held for an extended period of time (transactional duration).

Queue
  Latches: when a process goes to sleep after failing to acquire a latch, its request is not queued and serviced in order; latches are fair game and up for grabs.
  Locks: when a process fails to get a lock, its request is queued and serviced in order, unless the NOWAIT option is specified.

Deadlock
  Latches: implemented in such a way that they are not subject to deadlocks.
  Locks: support queueing and are subject to deadlocks; a trace file is generated each time a deadlock occurs.

Continue reading with Part 2

Which one perform better for the same job : Oracle Function or Oracle View

OK, let's try and see, as usual :)

Code listing 35 : Oracle View or Function performs better for the same job

Conclusions; the same amount of I/O to do the same job, but the view option used less latching and ran faster (older releases will show a larger disparity between the run times). Again, time to remember the universal mantra of data-intensive processing: don't use PL/SQL when plain old SQL will do it for you ;)

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
http://asktom.oracle.com/tkyte/runstats.html
Related AskTom Thread

Oracle 10g Flashback Versions Query, Drop Table and Recyclebin Management

Flashback technology provides a set of features to view and rewind data back and forth in time. The flashback features offer the capability to query past versions of schema objects, query historical data, perform change analysis, and perform self-service repair to recover from logical corruption while the database is online.

Flashback technology provides a SQL interface to quickly analyze and repair human errors. Flashback provides fine-grained analysis and repair for localized damage such as deleting the wrong customer order. Flashback technology also enables correction of more widespread damage, yet does it quickly to avoid long downtime. Flashback technology is unique to the Oracle Database and supports recovery at all levels including row, transaction, table, tablespace, and database.

Oracle Flashback Versions Query is an extension to SQL that can be used to retrieve the versions of rows in a given table that existed in a specific time interval. Oracle Flashback Versions Query returns a row for each version of the row that existed in the specified time interval. For any given table, a new row version is created each time the COMMIT statement is executed.

Oracle Flashback Table enables users to recover a table to a previous point in time. It provides a fast, online solution for recovering a table or set of tables that has been erroneously modified by a user or application. In most cases, Flashback Table alleviates the need for administrators to perform more complicated point-in-time recovery operations. Even after a flashback, the data in the original table is not lost; it can later be reverted back to the original state.

With 9i this feature was marketed as “Oracle found the time machine” :) But with 9i flashback was not so easy; it needed DBA privileges, etc. Since 10g, with SELECT statements we are able to get not just a picture but a movie of our data over time. OK, let's work on an example; seeing is necessary for believing, and we are not doing marketing here :)


create table flashback_test ( c1 number, c2 date ) nologging ;

insert into flashback_test values ( 1, sysdate ) ;
commit ;
exec dbms_lock.sleep(15);

update  flashback_test set c1 = c1 * 2 ;
commit ;
exec dbms_lock.sleep(15);

update  flashback_test set c1 = c1 * 2 ;
commit ;
exec dbms_lock.sleep(15);

delete flashback_test ;
commit ;
exec dbms_lock.sleep(15);

select versions_starttime, versions_endtime, versions_xid, versions_operation, c1
from  flashback_test versions between timestamp minvalue and maxvalue
order by VERSIONS_STARTTIME ;

VERSIONS_STARTTIME                                VERSIONS_ENDTIME                                  VERSIONS_XID     VERSIONS_OPERATION         C1
------------------------------------------------- ------------------------------------------------- ---------------- ------------------ ----------
08/10/2007 08:18:29                               08/10/2007 08:18:44                               08002200A6040000 U                           2
08/10/2007 08:18:44                               08/10/2007 08:18:59                               03002300C7040000 U                           4
08/10/2007 08:18:59                                                                                 06002900AC040000 D                           4
                                                  08/10/2007 08:18:29                                                                            1

SELECT UNDO_SQL FROM FLASHBACK_TRANSACTION_QUERY WHERE XID = hextoraw('06002900AC040000') ; -- delete

UNDO_SQL
--------------------------------------------------------------------------------
insert into "HR"."FLASHBACK_TEST"("C1","C2") values ('4',TO_DATE('08/10/2007', '

SELECT UNDO_SQL FROM FLASHBACK_TRANSACTION_QUERY WHERE XID = hextoraw('03002300C7040000') ; -- update 4

UNDO_SQL
--------------------------------------------------------------------------------
update "HR"."FLASHBACK_TEST" set "C1" = '2' where ROWID = 'AAAONoAAEAAAAGWAAA';

The maximum number of versions we can get depends on the UNDO_RETENTION parameter of the database. The TIMESTAMP_TO_SCN and SCN_TO_TIMESTAMP flashback functions can be used in SQL or PL/SQL as needed. In this example we are looking for the sum of the salaries of the employees at a point in time;


set serveroutput on
DECLARE
   l_scn NUMBER;
   l_timestamp TIMESTAMP;
BEGIN
   l_scn := TIMESTAMP_TO_SCN(SYSTIMESTAMP - 1/48);
   dbms_output.put_line('l_scn '||l_scn);
   l_timestamp := SCN_TO_TIMESTAMP(l_scn);
   dbms_output.put_line('l_timestamp '||l_timestamp);
END;
/

SELECT sum(salary) FROM employees
AS OF SCN TIMESTAMP_TO_SCN(SYSTIMESTAMP - 1/24);

SELECT sum(salary) FROM employees -- TIMESTAMP_TO_SCN(SYSTIMESTAMP - 1/48)
AS OF TIMESTAMP SCN_TO_TIMESTAMP(1531264);

SQL>

l_scn 1531264
l_timestamp 04/01/2007 08:01:29,000000

PL/SQL procedure successfully completed

SUM(SALARY)
-----------
691400

SUM(SALARY)
-----------
691400

Another related subject is the PURGE command and the RECYCLEBIN introduced with 10g: when you drop a table, by default it is kept in the recycle bin, much like in Windows, and you can easily restore it with the FLASHBACK command;


drop table RECYCLETEST purge ;
create table RECYCLETEST (c1 number) ;
select * from tab where tname like '%REC%';

-- lets drop the table, this must be seen as an accident :)
drop table recycletest;
select * from tab where tname like '%BIN%';
show recyclebin

-- ok lets restore back
FLASHBACK TABLE RECYCLETEST TO BEFORE DROP;
select * from tab where tname like '%REC%';

SQL>

Table dropped

Table created

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
RECYCLETEST                    TABLE

Table dropped

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
BIN$6V3qqZH/Q1ascMRsn5uImg==$0 TABLE

Done

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
RECYCLETEST                    TABLE

As in Windows, you may want to empty the recycle bin from time to time.

* For an index, table or tablespace you can use;

PURGE TABLE TEST; or PURGE TABLE "BIN$04LhcpndanfgMAAAAAANPw==$0";
purge index in_test1_01;
PURGE TABLESPACE USERS; or PURGE TABLESPACE USERS USER SCOTT;

* For the all objects belonging to the user ;

PURGE RECYCLEBIN;

* As a DBA all objects;

PURGE DBA_RECYCLEBIN;

Some important notes related to this subject are;

* When you flashback a table the triggers and indexes are not returned with their original names. This can be fixed by running an extra alter index or alter trigger statement.
* You may want to flash back not only a dropped table; as with Versions Query, a whole table can be flashed back in time;

FLASHBACK TABLE RECYCLETEST TO SCN 2202666520;

* If there are tables with the same name in the recycle bin, they can be restored with different names;

FLASHBACK TABLE TEST TO BEFORE DROP RENAME TO TEST2;
FLASHBACK TABLE TEST TO BEFORE DROP RENAME TO TEST1;

* Flashback technology includes the following features:
Oracle Flashback Query
Oracle Flashback Versions Query
Oracle Flashback Transaction Query
Oracle Flashback Table
Oracle Flashback Drop
Oracle Flashback Database — This provides a more efficient alternative to database point-in-time recovery, Check V$RECOVERY_FILE_DEST and V$FLASH_RECOVERY_AREA_USAGE views also.
Oracle Flashback Restore Points
* You may want to check the Flashback General and Performance Tips
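On the Flashback Database note above, the two mentioned views can be queried along these lines (a minimal sketch; tested column lists may vary slightly by release):

```sql
-- flash recovery area configuration and total usage
SELECT name, space_limit, space_used, space_reclaimable
  FROM v$recovery_file_dest;

-- usage broken down by file type
SELECT file_type, percent_space_used, percent_space_reclaimable
  FROM v$flash_recovery_area_usage;
```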

Oracle Flashback Technologies

Testing Information : the scripts mentioned are tested on Oracle Database 10g Enterprise Edition Release 10.2.0.2.0

References Used :
Oracle® Database Concepts 10g Release 2 (10.2)

How shared pool works in Oracle and binding example

In my previous post on Introduction to Oracle Trace Utility and Understanding The Fundamental Performance Equation I mentioned how a query is executed in Oracle and the importance of using bind variables for the scalability of your applications. With this post I want to show how queries are shared in a memory area called the shared pool; for this purpose I will use the V$SQLAREA view;


-- reset the shared pool for the demonstration; do not run this on a production system, only maybe on your local XE
alter system flush shared_pool ;

-- lets create a bind variable at sql*plus and bind 100 to it
variable x number;
exec :x := 100 ;

-- lets get first name from employees table for employee_id 100
select first_name from employees where employee_id = :x ;

FIRST_NAME
----------
Steven

-- and check the shared pool for our first query
SELECT a.sql_text, parse_calls, executions FROM v$sqlarea a
WHERE upper(a.sql_text) LIKE upper('%select%emp%employee_id%') and upper(a.sql_text) NOT LIKE upper('%sqlarea%') ;

SQL_TEXT
-------------------------------------------------------

PARSE_CALLS EXECUTIONS
----------- ----------
select first_name from employees where employee_id = :x
          1          1

-- lets bind 110
exec :x := 110 ;

-- and get first name from employees table for employee_id 110
select first_name from employees where employee_id = :x ;

FIRST_NAME
----------
John

-- and check the shared pool for our first query
SELECT a.sql_text, parse_calls, executions FROM v$sqlarea a
WHERE upper(a.sql_text) LIKE upper('%select%emp%employee_id%') and upper(a.sql_text) NOT LIKE upper('%sqlarea%') ;

SQL_TEXT
-------------------------------------------------------

PARSE_CALLS EXECUTIONS
----------- ----------
select first_name from employees where employee_id = :x
          2          2

-- lets bind 120
exec :x := 120 ;

-- careful, this time I changed the query: SELECT, FROM and WHERE are in uppercase
SELECT first_name FROM employees WHERE employee_id = :x ;

FIRST_NAME
-----------
Matthew

SELECT a.sql_text, parse_calls, executions FROM v$sqlarea a
WHERE upper(a.sql_text) LIKE upper('%select%emp%employee_id%') and upper(a.sql_text) NOT LIKE upper('%sqlarea%') ;

SQL_TEXT
-------------------------------------------------------

PARSE_CALLS EXECUTIONS
----------- ----------
SELECT first_name FROM employees WHERE employee_id = :x
1          1

select first_name from employees where employee_id = :x
2          2

Two important things I wanted to show here;
1- The first query experienced a hard parse; the second execution used a bind variable with exactly the same statement text, so it experienced a soft parse.
2- Using bind variables alone is not enough to limit hard-parse overhead; you also need to share your application code. The third query differed from the first two only in that SELECT, FROM and WHERE were written in uppercase, yet it was parsed as a new statement. To overcome this, setting development standards and favoring package usage is needed: one shared function, such as fnc_get_first_name_by_employee_id, for everyone.
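A minimal sketch of such a shared accessor (assuming the HR sample schema's EMPLOYEES table used above) might look like:

```sql
CREATE OR REPLACE FUNCTION fnc_get_first_name_by_employee_id(
  p_employee_id IN employees.employee_id%TYPE)
  RETURN employees.first_name%TYPE
IS
  l_first_name employees.first_name%TYPE;
BEGIN
  -- every caller shares this one statement text, and PL/SQL binds
  -- p_employee_id automatically, so it is soft parsed after the first use
  SELECT first_name
    INTO l_first_name
    FROM employees
   WHERE employee_id = p_employee_id;
  RETURN l_first_name;
END;
/
```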

Also, with 10g the V$SQL_BIND_CAPTURE view was introduced to show bind variable values;


SELECT DATATYPE_STRING, VALUE_STRING
FROM v$sqlarea a, V$SQL_BIND_CAPTURE b
WHERE upper(a.sql_text) LIKE upper('%select%emp%employee_id%') and upper(a.sql_text) NOT LIKE upper('%sqlarea%') and
a.HASH_VALUE = b.HASH_VALUE and b.ADDRESS = a.ADDRESS;

DATATYPE_STRING
----------------
VALUE_STRING
----------------

NUMBER
120

NUMBER
100

To show the cost of not binding I will use two examples, first with JDBC and second with PL/SQL. Both will have a loop to simulate a heavily loaded On-line Transaction Processing (OLTP) system, like 1000 people executing the same query.

a) JDBC example;


-- noBind JDBC
..
// record the start time
long start_time = System.currentTimeMillis();

// insert the rows
for (int count = 0; count < TOTAL_NUM_ROWS; count++) {
stmt.executeUpdate("insert into perf_test ( id, code, descr, insert_user, insert_date ) " +
"values ( " + Integer.toString( count ) + ", '123456789012345678901234567890', " +
"'12345678901234567890123456789012345678901234567890123456789012345678901234567890', " +
"USER, to_date('" + sdf.format(new java.util.Date(System.currentTimeMillis())) + "', 'YYYYMMDDHH24MISS'))");
}

// record the end time
long end_time = System.currentTimeMillis();

// display the total time taken to insert the rows
System.out.println("Total time for inserting " + TOTAL_NUM_ROWS +
" rows was " + (end_time - start_time) + " milliseconds");

..
-- bind JDBC
..
// create a PreparedStatement object
PreparedStatement myPrepStatement =
myConnection.prepareStatement(
"INSERT INTO perf_test " +
"(id, code, descr, insert_user, insert_date) VALUES "+
"(?, ?, ?, ?, ?)"
);

//SimpleDateFormat sdf = new SimpleDateFormat( "yyyyMMddHHmmss" );
// record the start time
long start_time = System.currentTimeMillis();

// insert the rows
for (int count = 0; count < TOTAL_NUM_ROWS; count++) {
// bind by position: 1=id, 2=code, 3=descr, 4=insert_user, 5=insert_date
myPrepStatement.setInt(1, count);
myPrepStatement.setString(2, "123456789012345678901234567890");
myPrepStatement.setString(3, "12345678901234567890123456789012345678901234567890123456789012345678901234567890");
myPrepStatement.setString(4, "TONGUC");
myPrepStatement.setDate(5, new java.sql.Date(System.currentTimeMillis()));
myPrepStatement.executeUpdate();
}

// record the end time
long end_time = System.currentTimeMillis();

// display the total time taken to insert the rows
System.out.println("Total time for inserting " + TOTAL_NUM_ROWS +
" rows was " + (end_time - start_time) + " milliseconds");
..

With Prepared Statement…
Total time for inserting 2000 rows was 2703 milliseconds

WithOUT Prepared Statement…
Total time for inserting 2000 rows was 4281 milliseconds

b) PL/SQL Example;


set serveroutput on
declare

-- noBind PL/SQL
procedure step_1 is
TYPE rc IS REF CURSOR;
l_rc    rc;
l_dummy all_objects.object_name%TYPE;
l_start NUMBER DEFAULT dbms_utility.get_time;
BEGIN
FOR i IN 1 .. 1000 LOOP
OPEN l_rc FOR 'select object_name
from all_objects
where object_id = ' || i;
FETCH l_rc INTO l_dummy;
CLOSE l_rc;
END LOOP;
dbms_output.put_line(round((dbms_utility.get_time - l_start) / 100,
2) || 'seconds...');
END;

-- bind PL/SQL
procedure step_2 is
TYPE rc IS REF CURSOR;
l_rc    rc;
l_dummy all_objects.object_name%TYPE;
l_start NUMBER DEFAULT dbms_utility.get_time;
BEGIN
FOR i IN 1 .. 1000 LOOP
OPEN l_rc FOR 'select object_name
from all_objects
where object_id = :x' USING i;
FETCH l_rc INTO l_dummy;
CLOSE l_rc;
END LOOP;
dbms_output.put_line(round((dbms_utility.get_time - l_start) / 100,
2) || 'seconds...');
end;

begin
runstats_pkg.rs_start;

step_1;

runstats_pkg.rs_middle;

step_2;

runstats_pkg.rs_stop(100000);
end;
/

43,88seconds...
,63seconds...
Run1 ran in 4388 hsecs
Run2 ran in 63 hsecs
run 1 ran in 6965,08% of the time

Name                                  Run1        Run2        Diff
LATCH.library cache                187,190       4,273    -182,917
LATCH.row cache objects            513,635      50,094    -463,541
STAT...session pga memory        2,228,224           0  -2,228,224

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
973,738      69,950    -903,788  1,392.05%

PL/SQL procedure successfully completed.

Heavy loaded Oracle OLTP applications must be optimized for two important topics;

1- SQL Sharing,
2- Fast and reliable transaction handling.

Here I have given examples for JDBC and PL/SQL; please check the documentation for others like .NET or ODBC. Before closing, I recommend the AskTom thread given in the references for deeper information, and I want to mention three more important related topics;
1- Peeking of User-Defined Bind Variables
2- Caching Session Cursors
3- When to use CURSOR_SHARING
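On the last point, when an application that concatenates literals cannot be changed, cursor sharing can be forced at the session level as a workaround. A sketch, not a recommendation; test carefully before relying on it:

```sql
-- rewrite literals into system-generated binds for this session only
ALTER SESSION SET CURSOR_SHARING = FORCE;
```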

Testing Information : the scripts mentioned are tested on Oracle Database 10g Express Edition Release 10.2.0.1.0

References Used :
Using the Shared Pool Effectively Oracle® Database Performance Tuning Guide 10g Release 2 (10.2) Chapter 7 Memory Configuration and Use
http://asktom.oracle.com/tkyte/runstats.html
Let’s go deeper with Mr. Thomas Kyte