Force RMAN to ignore corrupted blocks with COPY command

To tell RMAN to permit corrupt blocks to be backed up, you must use the SET MAXCORRUPT command (note that this goes inside a RUN block):

SET MAXCORRUPT FOR DATAFILE 'foo' TO n;

where 'n' is the number of corrupt blocks that will be allowed in the backup. However, SET MAXCORRUPT only applies to the RMAN BACKUP command, not the RMAN COPY command. To work around this limitation for the RMAN COPY command you must modify “recover.bsq”.
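
For the BACKUP command this looks roughly like the RUN block below; the datafile name, the channel and the limit of 10 are placeholder values only:

RUN {
  # allow up to 10 corrupt blocks in the backup of this datafile
  SET MAXCORRUPT FOR DATAFILE '/u01/oradata/ORCL/users01.dbf' TO 10;
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  BACKUP DATAFILE '/u01/oradata/ORCL/users01.dbf';
  RELEASE CHANNEL c1;
}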

For the COPY workaround, find the line in “$ORACLE_HOME/rdbms/admin/recover.bsq” that looks like this:

sys.dbms_backup_restore.copyDataFile(full_name => full_name,

and add the following line immediately after that line:

max_corrupt => 1,
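
Put together, the start of the edited call would look roughly like this (only a sketch; the remaining parameters of copyDataFile vary between versions and are not shown):

sys.dbms_backup_restore.copyDataFile(full_name => full_name,
                                     max_corrupt => 1,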

For more information please see Metalink note 1069093.6.

Hot discussion: SQL or PL/SQL

I read Rob, Laurent and Chen on this topic. As far as I follow, the OTN SQL and PL/SQL forum is the place for SQL madness; any kind of question there is discussed in a race to find a more performant single SQL solution. Oracle provides very rich SQL options for this kind of race, like analytic, XML and data mining functions or the MODEL clause.

Additionally, we know by experience that any data intensive need will have several different solutions, and *usually* a set based SQL solution will be faster than its alternatives, especially row based (cursor looping) PL/SQL ones. Still, depending on the need there can be exceptions; you may check Nicolas Gasparotto’s approach on this example with a pipelined PL/SQL function option.
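
As a toy illustration of the difference (the SALES table and its columns here are made up), a single set based UPDATE versus a row by row cursor loop doing the same work:

-- set based: one SQL statement, the optimizer is free to use full scans,
-- hash joins and parallelism
UPDATE sales
   SET status = 'PROCESSED'
 WHERE sale_date < DATE '2007-01-01';

-- row based: a cursor loop, one context switch and one UPDATE per row
BEGIN
  FOR r IN (SELECT rowid AS rid
              FROM sales
             WHERE sale_date < DATE '2007-01-01') LOOP
    UPDATE sales
       SET status = 'PROCESSED'
     WHERE rowid = r.rid;
  END LOOP;
END;
/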

IMHO, if you are not solving a puzzle but developing an application in a team, PL/SQL based solutions are easier to understand for all team members, and as a result it is easier to maintain the application in the long run. After all, performance is not the only metric for an application’s success; PL/SQL provides exception handling and is also rich with options like bulk processing and pipelined functions for the related performance needs.
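
When PL/SQL is the choice, bulk processing narrows the performance gap; here is a minimal sketch (again on the made up SALES table) using BULK COLLECT and FORALL so that the per-row context switches are replaced with one switch per batch:

DECLARE
  TYPE t_rowids IS TABLE OF ROWID;
  l_rowids t_rowids;
BEGIN
  SELECT rowid BULK COLLECT INTO l_rowids
    FROM sales
   WHERE sale_date < DATE '2007-01-01';

  -- a single context switch executes the UPDATE for the whole collection
  FORALL i IN 1 .. l_rowids.COUNT
    UPDATE sales
       SET status = 'PROCESSED'
     WHERE rowid = l_rowids(i);

  COMMIT;
END;
/

For a really large driving set you would fetch the rowids in chunks with the LIMIT clause instead of collecting them all at once.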

How constraints may affect the cost based optimizer’s choices

Especially in data warehouse environments constraints are perceived as evil because of their ETL load costs, whereas their reporting performance gains may motivate you to use constraints even with these known costs.

One of the most common examples is the NOT NULL constraint with ANTI JOINs. When joining massive amounts of data a HASH JOIN is something you are always after, but if you do not let the CBO know that there are no NULL values in the columns you are joining, you may end up with a NESTED LOOP join method which usually lasts for an eternity. Please check this paper for the details of this case: Speeding Up Queries with Semi-Joins and Anti-Joins: How Oracle Evaluates EXISTS, NOT EXISTS, IN and NOT IN
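
A small sketch of this case with made up CUSTOMERS and ORDERS tables; CUSTOMERS.CUSTOMER_ID is assumed to be the primary key, and declaring ORDERS.CUSTOMER_ID NOT NULL is what lets the NOT IN below be rewritten as an anti-join:

ALTER TABLE orders MODIFY (customer_id NOT NULL);

EXPLAIN PLAN FOR
SELECT c.customer_id
  FROM customers c
 WHERE c.customer_id NOT IN (SELECT o.customer_id FROM orders o);

SELECT * FROM TABLE(dbms_xplan.display);

-- with the NOT NULL constraints in place the plan can show HASH JOIN ANTI;
-- without them the optimizer must fall back to a row by row FILTER operation,
-- because NOT IN has to return no rows if the subquery produces any NULL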

So as always, we need to test, calculate and decide on the cost of using or not using any option available with Oracle for our own scenarios; after investigating the examples I shared below, you may also choose whether or not to go with VALIDATED constraints.

Not Null constraints and the optimizer after 10g

Check constraints and the optimizer after 10g

By telling the database with a foreign key that a mandatory one-to-one relationship exists on the join columns, you can eliminate the unnecessary table’s join cost over a view after 10g Release 2 (see the sketch at the end of this list)

Having a Novalidate constraint hides valuable information from the CBO

SQL*Loader direct path load disables the foreign key constraint and ignores the trigger, whereas direct path insert just ignores the foreign key constraint
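
As a sketch of the join elimination item above (same made up CUSTOMERS and ORDERS tables, with a validated foreign key on ORDERS.CUSTOMER_ID and the column declared NOT NULL):

CREATE OR REPLACE VIEW order_details AS
SELECT o.order_id, o.order_date, o.customer_id, c.customer_name
  FROM orders o
  JOIN customers c ON c.customer_id = o.customer_id;

-- this query asks only for columns coming from ORDERS; with the validated
-- foreign key in place the optimizer can eliminate the join to CUSTOMERS entirely
SELECT order_id, customer_id
  FROM order_details
 WHERE order_date >= DATE '2007-01-01';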