Wednesday, May 28, 2014

Database reporting a lot of ORA-12012

ORA-12012: error on auto execute of job 4002
ORA-08102: index key not found, obj# 289, file 1, block 2033 (2)
Errors in file /u01/app/oracle/diag/rdbms/oblprod/oblprod/trace/oblprod_j001_28452.trc:

select * from dba_objects where object_id=289;
SYS    I_JOB_NEXT        289    289    INDEX

Now the index I_JOB_NEXT seems to be corrupted at block 2033. It belongs to the JOB$ table, so let's run an ANALYZE on the table.

SQL> ANALYZE TABLE JOB$ VALIDATE STRUCTURE CASCADE;
ANALYZE TABLE JOB$ VALIDATE STRUCTURE CASCADE
*
ERROR at line 1:
ORA-01499: table/index cross reference failure - see trace file
By now we know that the index I_JOB_NEXT is built on the NEXT_DATE column of the JOB$ table.
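
We can double-check which column the index covers with a quick dictionary lookup (a generic sanity check, not part of the original diagnosis):

SELECT table_name, column_name
FROM dba_ind_columns
WHERE index_owner = 'SYS' AND index_name = 'I_JOB_NEXT';

Now let's force an explicit full table scan on job$ and compare it with an index-driven scan of the same column: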

SELECT /*+ FULL(j1) */ next_date
FROM job$ j1
MINUS
SELECT /*+ index(j i_job_next) */ next_date
FROM job$ j;

Surprisingly, running the above full table scan followed by the ANALYZE command did the trick for me, and the index is usable again.
SQL> ANALYZE TABLE JOB$ VALIDATE STRUCTURE CASCADE;

Table analyzed.

Thursday, April 24, 2014

Automatic Repair of Corrupt Data Blocks on Standby Database


If a corrupt data block is encountered when a primary database is accessed, it is automatically replaced with an uncorrupted copy of that block from a physical standby database. This requires the following conditions:
  • The physical standby database must be operating in real-time query mode, which requires an Active Data Guard license.
  • The physical standby database must be running real-time apply.
Also keep the following in mind:
  • Automatic repair is supported with any Data Guard protection mode. However, the effectiveness of repairing a corrupt block at the primary using the noncorrupt version of the block from the standby depends on how closely the standby apply is synchronized with the redo generated by the primary.
  • When an automatic block repair has been performed, a message is written to the database alert log.
  • If automatic block repair is not possible, an ORA-1578 error is returned.
If a corrupt data block is discovered on a physical standby database, the server attempts to automatically repair the corruption by obtaining a copy of the block from the primary database if the following database initialization parameters are configured on the standby database:
  • The LOG_ARCHIVE_CONFIG parameter is configured with a DG_CONFIG list and a LOG_ARCHIVE_DEST_n parameter is configured for the primary database.
    or
  • The FAL_SERVER parameter is configured and its value contains an Oracle Net service name for the primary database.
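
As a minimal sketch, the standby-side settings that satisfy these conditions might look like the following (the service and DB_UNIQUE_NAME values prod and prodsb are hypothetical):

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG = 'DG_CONFIG=(prod,prodsb)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=prod ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod';
-- or, alternatively, just point FAL_SERVER at the primary:
ALTER SYSTEM SET FAL_SERVER = 'prod';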

Wednesday, April 16, 2014

How to avoid row migration?

Row migration occurs when we update a row and it no longer fits entirely within its original database block because of the growth in size. The row is moved to another block, and in its original place a pointer to the new location of the row is stored.
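
A quick, instance-wide indicator of how often migrated or chained rows are being followed is the 'table fetch continued row' statistic (a generic check, not specific to this recipe):

SELECT name, value
FROM v$sysstat
WHERE name = 'table fetch continued row';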

1.  Create the table BIG_ROWS:

CREATE TABLE HR.BIG_ROWS (
id number NOT NULL,
field1 char(2000) DEFAULT 'A' NOT NULL,
field2 char(2000), 
field3 char(2000),
field4 char(1000),
constraint PK_BIG_ROWS primary key (ID)) 
TABLESPACE USERS PCTFREE 10;

2.  Populate the table:

INSERT INTO HR.BIG_ROWS (id) select rownum from all_objects where rownum < 101;

3.  Analyze the table to refresh the statistics (we use ANALYZE here because DBMS_STATS does not populate CHAIN_CNT):

ANALYZE TABLE HR.BIG_ROWS COMPUTE STATISTICS;

4.  Verify if there are migrated rows:

SELECT CHAIN_CNT FROM ALL_TABLES 
WHERE OWNER = 'HR' AND TABLE_NAME = 'BIG_ROWS';


5.  At this point CHAIN_CNT should be 0: only id and field1 are populated, so each row is roughly 2 KB and fits comfortably in an 8 KB block.
6.  Update some data in the table:

UPDATE HR.BIG_ROWS SET field2 = 'B', field3 = 'C', field4 = 'D' WHERE MOD(id, 2) = 1;

7.  Analyze the table to refresh the statistics:

ANALYZE TABLE HR.BIG_ROWS COMPUTE STATISTICS;

8.  Verify if there are migrated rows:

SELECT CHAIN_CNT FROM ALL_TABLES 
WHERE OWNER = 'HR' AND TABLE_NAME = 'BIG_ROWS';

9. Create the table CHAINED_ROWS (the same structure is created by the standard script $ORACLE_HOME/rdbms/admin/utlchain.sql):

create table HR.CHAINED_ROWS (
owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid, 
analyze_timestamp date);

10. Analyze the table to list the migrated rows:

ANALYZE TABLE HR.BIG_ROWS LIST CHAINED ROWS INTO HR.CHAINED_ROWS;

11. Count (or list) the migrated rows:

SELECT COUNT(*) FROM HR.CHAINED_ROWS;

12. The count should be 50: the 50 updated rows (those with an odd id) grew from about 2 KB to about 7 KB, no longer fit in the free space left in their original blocks (PCTFREE 10), and were therefore migrated.
13. Create a temporary table, which is an empty copy of BIG_ROWS (we add ON COMMIT PRESERVE ROWS so that the copied rows survive any intermediate commit):

CREATE GLOBAL TEMPORARY TABLE HR.TEMP_BIG_ROWS
ON COMMIT PRESERVE ROWS
AS SELECT * FROM HR.BIG_ROWS WHERE 1=0;

14. Copy the migrated rows to the temporary table:

INSERT INTO HR.TEMP_BIG_ROWS
SELECT B.* FROM HR.BIG_ROWS B, HR.CHAINED_ROWS T
WHERE T.OWNER_NAME = 'HR' AND T.TABLE_NAME = 'BIG_ROWS'
AND T.HEAD_ROWID = B.ROWID;

15. Delete the migrated rows from the BIG_ROWS table:

DELETE FROM HR.BIG_ROWS B
WHERE EXISTS (SELECT T.ROWID FROM HR.CHAINED_ROWS T
WHERE T.OWNER_NAME = 'HR' AND T.TABLE_NAME = 'BIG_ROWS'
AND T.HEAD_ROWID = B.ROWID);

16. Copy the migrated rows from the temporary table back to the BIG_ROWS table:

INSERT INTO HR.BIG_ROWS SELECT * FROM HR.TEMP_BIG_ROWS;

17. Analyze the table to refresh the statistics:

ANALYZE TABLE HR.BIG_ROWS COMPUTE STATISTICS;

18. Verify if there are migrated rows:

SELECT CHAIN_CNT FROM ALL_TABLES 
WHERE OWNER = 'HR' AND TABLE_NAME = 'BIG_ROWS';

19. CHAIN_CNT should be 0 again: deleting and re-inserting the migrated rows stored each of them entirely in a single block.

20. Drop the tables used for testing:

DROP TABLE HR.TEMP_BIG_ROWS;
DROP TABLE HR.CHAINED_ROWS;
DROP TABLE HR.BIG_ROWS;
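
As a side note, a quicker alternative to the delete-and-reinsert procedure in steps 9 to 16 is to rebuild the table in place; this is a generic sketch of the technique, not part of the recipe above:

ALTER TABLE HR.BIG_ROWS MOVE;        -- rewrites all rows, eliminating migration
ALTER INDEX HR.PK_BIG_ROWS REBUILD;  -- the move marks indexes UNUSABLE

The downside is that the table is locked for the duration of the move, and all of its indexes must be rebuilt afterwards.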

How to avoid row chaining?

We encounter row chaining when the row data is larger than the database block used to store it. In this situation, the row is split across more than one database block, so reading it requires access to more than one block, resulting in greater I/O.

Before we can start, we have to alter an initialization parameter of the test database (assuming the default block size is 8KB in the test database):

ALTER SYSTEM SET db_16k_cache_size = 16m scope=both;


We need to set this parameter to allocate a memory buffer dedicated to storing database blocks of a non-default size; in this recipe we will create a tablespace that uses a 16 KB block size, so the corresponding buffer must be allocated before we can use it.
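
To verify that the buffer was actually allocated, we can check the parameter (a generic check):

SHOW PARAMETER db_16k_cache_size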


We will examine how to detect row chaining issues, and how to avoid chaining in our tables. Follow these steps:
1.  Create the table BIG_ROWS:

CREATE TABLE HR.BIG_ROWS (
id number NOT NULL,
field1 char(2000) DEFAULT 'A' NOT NULL,
field2 char(2000) DEFAULT 'B' NOT NULL,
field3 char(2000) DEFAULT 'C' NOT NULL,
field4 char(2000) DEFAULT 'D' NOT NULL,
field5 char(2000) DEFAULT 'E' NOT NULL,
constraint PK_BIG_ROWS primary key (ID)) 
TABLESPACE USERS;

2.  Populate the table:

INSERT INTO HR.BIG_ROWS (id) select rownum from all_objects where rownum < 101;

3.  Analyze the table to refresh the statistics:

ANALYZE TABLE HR.BIG_ROWS COMPUTE STATISTICS;

4.  Verify if there are chained rows:

SELECT CHAIN_CNT FROM ALL_TABLES 
WHERE OWNER = 'HR' AND TABLE_NAME = 'BIG_ROWS';

5.  CHAIN_CNT is 100 at this point: every row is about 10 KB, larger than the 8 KB block size of the USERS tablespace, so all 100 rows are chained.

6.  Create a tablespace with a different block size:

CREATE TABLESPACE TS_16K BLOCKSIZE 16K DATAFILE '/u01/app/oracle/oradata/orcl/TS_16K.DBF' SIZE 10M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

7.  Move the table BIG_ROWS to the tablespace just created:

ALTER TABLE HR.BIG_ROWS MOVE TABLESPACE TS_16K;

8.  Rebuild the indexes, as they are unusable after the move:

ALTER INDEX HR.PK_BIG_ROWS REBUILD;

9. Analyze the table to refresh the statistics:

ANALYZE TABLE HR.BIG_ROWS COMPUTE STATISTICS;

10. Verify if there are chained rows:

SELECT CHAIN_CNT FROM ALL_TABLES 
WHERE OWNER = 'HR' AND TABLE_NAME = 'BIG_ROWS';

11. CHAIN_CNT is now 0: each 10 KB row fits entirely within a single 16 KB block.

12. Drop the tablespace (INCLUDING CONTENTS also drops the table):

DROP TABLESPACE TS_16K INCLUDING CONTENTS AND DATAFILES;

We have created the table BIG_ROWS, in which the row length is greater than 8 KB, the DB block size of the tablespace USERS in the ORCL database.

We have populated this table with 100 rows, and after analyzing the table we know that there are 100 chained rows. A chained row is a row which cannot fit into one DB block, due to its size (when inserted or updated). So the database engine stores the initial part of the row in a DB block, which is chained to another DB block where the remaining part of the row content is stored.

To avoid chained rows, we can move the table to a tablespace with a greater DB block size; we have created the tablespace TS_16K with a block size of 16 KB, greater than the average row length of the BIG_ROWS table.

We have moved the BIG_ROWS table to the newly created tablespace and rebuilt the primary key index—which is marked unusable after the move. We then analyzed the table again to refresh the statistics.
After the move, the chained rows have disappeared from the BIG_ROWS table.

We can use different block sizes in the Oracle database, but every tablespace can have only one block size. Before adding a tablespace with a different DB block size, we have to make room in the database buffer cache to store DB blocks of that size, as we did in the Getting ready section of this recipe.

As stated earlier, row chaining occurs when the database block size isn't big enough to store a row entirely. In such cases, the only solution to avoid row chaining is to move the table to a tablespace with a bigger DB block size, as we have done.

After moving the table, we had to rebuild the index. Why?

The answer is simple. An index contains the ROWIDs of the table rows, and a ROWID identifies the position of a row, made up of the object, the datafile, the block number, and the slot (row) number. When we move a table, the datafile and the block number change, so the indexes have to be rebuilt.
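
To see the pieces a ROWID is made of, we can decode one with the DBMS_ROWID package (an illustrative query, assuming the BIG_ROWS table still exists):

SELECT ROWID,
       DBMS_ROWID.ROWID_OBJECT(ROWID)       AS object_id,
       DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS file_no,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS block_no,
       DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   AS row_no
FROM HR.BIG_ROWS
WHERE ROWNUM = 1;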

Thursday, April 3, 2014

What is a Flash Recovery Area and how to configure it

What is a Flash Recovery Area and how to configure it? (Doc ID 305648.1)
--------------------------------------------------------------------
In this Document
Purpose
Scope
Details

1. What is a Flash Recovery Area?
2. Why should we use a Flash Recovery Area?
3. What should be the location of the Flash Recovery Area?
4. What should be the size of the Flash Recovery Area?
5. Can I have the same Flash Recovery Area for multiple databases?
6. ASM (Automatic Storage Management) support for Flash Recovery Area
7. OMF (Oracle Managed Files) support for Flash Recovery Area
8. Flash Recovery Area for RAC
9. What type of files can be stored in the Flash Recovery Area?
10. Initialization Parameters required for Flash Recovery Area
11. Restrictions on Initialization Parameters for Flash Recovery Area


References

APPLIES TO:
-------------------
Oracle Server - Enterprise Edition - Version 10.1.0.2 to 11.2.0.2 [Release 10.1 to 11.2]
Information in this document applies to any platform.



PURPOSE
-------------------------
This document explains the basic design and configuration of a Flash Recovery Area.

SCOPE
----------------

DBA, Support.

DETAILS
---------------
1. What is a Flash Recovery Area?

The flash recovery area is an Oracle-managed directory, file system, or Automatic Storage Management (ASM) disk group that provides a centralized disk location for backup and recovery files. All the files you need to completely recover a database from a media failure are part of the Flash Recovery Area. Oracle creates archived logs and flashback logs in the flash recovery area. RMAN can store its backup sets and image copies in the flash recovery area, and it uses them when restoring files during media recovery. The flash recovery area also acts as a disk cache for tape.

Flash recovery extends the functionality of Oracle Managed Files to all recovery-related files (backup sets, image copies, and archived logs). Oracle Database automatically manages this storage, deleting files that are no longer needed. The user specifies only the location of the Flash Recovery Area and the amount of disk space that Oracle is allowed to use for recovery-related files. This feature is integrated with the 10g MMON feature, so out-of-space conditions can be handled through the standard Oracle monitoring framework.

Periodically copying backups to tape frees space in the flash recovery area for other files. Oracle recommends that you enable a recovery area to simplify backup management.

The following recovery-related files are stored in the flash recovery area:
-- Current control file
-- Online redo logs
-- Archived redo logs
-- Flashback logs
-- Control file autobackups
-- Datafile and control file copies
-- Backup pieces
-- Foreign archived redo log (An archived redo log received by a logical standby database for a LogMiner session.)
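
Once a recovery area is configured, its location, quota, and current usage can be checked with a standard dynamic view (a generic query, not part of the original note):

SELECT name, space_limit, space_used, space_reclaimable, number_of_files
FROM v$recovery_file_dest;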

2. Why should we use a Flash Recovery Area?

As disk storage media is now cost-competitive with tape, disk-based backup is an optimal and preferable storage mechanism. By using disks for storage purposes, you gain significant benefits in terms of mass storage, and you can randomly access your data in milliseconds rather than hours. The previously expensive downtime of traditional recovery is exchanged for rapid data access and recovery times using cheap disk space.
The advantage over tape is that tape is a sequential-access device while disk is a random-access device; hence the time needed for restoring from tape is eliminated or reduced.

3. What should be the location of the Flash Recovery Area?

The flash recovery area should be placed on a separate disk from the working set of database files. Otherwise, the disk becomes a single point of failure for your database.

4. What should be the size of the Flash Recovery Area?

The larger the flash recovery area is, the more useful it becomes. Ideally, the flash recovery area should be large enough to contain the required files. The recovery area should be able to contain a copy of all datafiles in the database and the incremental backups used by your chosen backup strategy.
If providing this much space is impractical, then it is best to create an area large enough to keep a backup of the most important tablespaces and all the archived logs not yet on tape. At an absolute minimum, the flash recovery area must be large enough to contain the archived redo logs not yet on tape. If the recovery area has insufficient space to store flashback logs and meet other backup retention requirements, then the recovery area may delete flashback logs to make room.

Formulas for estimating a useful flash recovery area size depend on whether:
-- Your database has a small or large number of data blocks that change frequently.
-- You store backups only on disk, or on disk and tape.
-- You use a redundancy-based backup retention policy, or a recovery window-based retention policy.
-- You plan to use Flashback Database or a guaranteed restore point as alternatives to point-in-time recovery.

If you plan to enable flashback logging, then note that the volume of flashback log generation is approximately the same order of magnitude as redo log generation.

For example, if you intend to set DB_FLASHBACK_RETENTION_TARGET to 24 hours, and the database generates 20 GB of redo in a day, then a rule of thumb is to allow 20 GB to 30 GB of disk space for the flashback logs. The same rule applies to guaranteed restore points when flashback logging is enabled: if the database generates 20 GB of redo every day, and the guaranteed restore point will be kept for a day, then plan to allocate 20 to 30 GB.

For example, suppose that you want to determine the size of a flash recovery area when the backup retention policy is set to REDUNDANCY 1 and you intend to follow the Oracle-suggested strategy of using an incrementally updated backup. You can use the following formula to estimate the disk quota:

Disk Quota = size of a copy of the database
           + size of an incremental backup
           + size of (n+1) days of archived redo logs
           + size of (y+1) days of foreign archived redo logs (for a logical standby)
           + size of the control file
           + size of an online redo log member * number of log groups
           + size of the flashback logs (based on the DB_FLASHBACK_RETENTION_TARGET value)

where n is the interval in days between incremental updates and y is the delay in applying the foreign archived redo logs on a logical standby database.

For the Oracle-suggested backup strategy, refer to Note 303861.1, Incrementally Updated Backup In 10G.

5. Can I have the same Flash Recovery Area for multiple databases?

Multiple databases can have the same value for DB_RECOVERY_FILE_DEST, but one of the following must be true:
-- No two databases for which the DB_UNIQUE_NAME initialization parameter is specified have the same value for DB_UNIQUE_NAME.
-- For those databases where no DB_UNIQUE_NAME is provided, no two databases have the same value for DB_NAME.
When databases share a single recovery area in this way, the location should be large enough to hold the files for all databases. Add the values of DB_RECOVERY_FILE_DEST_SIZE for the databases, then allow for overhead such as mirroring or compression.

6. ASM (Automatic Storage Management) support for Flash Recovery Area:

The Flash Recovery Area can be an ASM disk group. RMAN can back up and restore files that are stored in ASM disk groups. In fact, RMAN is the only way to back up and restore ASM files. Backup sets and image copies can be created in ASM, either by automatically creating them in the Flash Recovery Area, or by explicitly specifying an ASM disk group for their location.

7. OMF (Oracle Managed Files) support for Flash Recovery Area:

Backup/Restore of OMF files is easier with RMAN as RMAN does not use the long OMF file names in any commands. Backup sets and image copies can be created as OMF files by creating them in the Flash Recovery Area. RMAN can be used to migrate existing files to OMF.

8. Flash Recovery Area for RAC:

For a RAC database, the location and disk quota must be the same on all instances. To accomplish this, Oracle recommends that you place the flash recovery area on the shared ASM disks. In addition, you must set the DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE parameters to the same values on all instances. To use the flash recovery feature, you must first configure the flash recovery area for each instance in your Oracle RAC cluster.

9. What type of files can be stored in the Flash Recovery Area?

The files in Flash Recovery Area are classified as permanent or transient.

-- Permanent Files

The permanent files (assuming these are configured to be stored in the recovery area) are multiplexed copies of the current control file and online redo logs. These cannot be deleted without causing the instance to fail.

-- Transient Files

Transient files include archived redo logs, datafile copies, control file copies, control file autobackups, backup pieces, and flashback logs. Oracle manages these files automatically, deleting them whenever space is required in the Flash Recovery Area. They are deleted once they become obsolete under the retention policy or have been backed up to tape. Once a transient file has been backed up to tape, it is internally placed on a "can be deleted" list, even if it has not yet been removed from disk. Until a file on disk has been backed up to a tertiary storage device, it cannot become obsolete.
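
A per-file-type breakdown of recovery area usage, including how much space is reclaimable, is available from a standard view (a generic query, not part of the original note):

SELECT file_type, percent_space_used, percent_space_reclaimable, number_of_files
FROM v$flash_recovery_area_usage;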

10. Initialization Parameters required for Flash Recovery Area

To enable the Flash Recovery Area, you must set the two initialization parameters:

-- DB_RECOVERY_FILE_DEST_SIZE :
It is the disk limit, which is the amount of space the flash recovery area is permitted to use. The minimum size of the Flash Recovery Area should be at least large enough to contain archive logs that have not been copied to tape.

Note: This value does not include certain kinds of disk overhead:
- Block 0 or the OS block header of each Oracle file is not included in this size, so make sure to allow an extra 10% for this data when computing the actual disk usage required for the Flash Recovery Area.
- DB_RECOVERY_FILE_DEST_SIZE does not indicate the real size occupied on disk when the underlying filesystem is mirrored, compressed, or in some other way affected by overhead not known to Oracle.

-- DB_RECOVERY_FILE_DEST:
This initialization parameter specifies the destination for the Flash Recovery Area. The destination can be defined as a directory, file system, or ASM disk group.

Note 1: The Flash Recovery Area cannot be stored on a raw file system.
Note 2: DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST.
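
Putting the two parameters together, a minimal sketch of enabling a recovery area looks like this (the 50G quota and the path are illustrative values, not from the note):

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 50G SCOPE=BOTH;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/app/oracle/fra' SCOPE=BOTH;

The size parameter is set first, as Note 2 requires.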

11. Restrictions on Initialization Parameters for Flash Recovery Area

- You cannot use the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters to specify redo log archive destinations; you must always use the LOG_ARCHIVE_DEST_n parameters when a flash recovery area is configured.

- LOG_ARCHIVE_DEST_10 is implicitly set to USE_DB_RECOVERY_FILE_DEST if you create a recovery area and do not set any other local archiving destinations.

- Multiple databases can have the same DB_RECOVERY_FILE_DEST only if their DB_NAME values are different; if DB_NAME is the same (for example, a primary and its standby database), then the DB_UNIQUE_NAME parameter must be different for each database.

- For RAC, the location of the Flash Recovery Area must be on a cluster file system, ASM, or a shared directory configured through NFS. The location and disk quota must be the same on all instances.

REFERENCES
-----------------
NOTE:305817.1 - FAQ - Flash Recovery Area feature
NOTE:829755.1 - Space issue in Flash Recovery Area (FRA)
NOTE:303861.1 - Incrementally Updated Backup In 10G and higher
NOTE:305812.1 - Flash Recovery Area - Space management Warning & Alerts