
Thursday, December 30, 2010

Analyze Statement and Transferring Statistics.

Analyze Statement

The ANALYZE statement can be used to gather statistics for a specific table, index or cluster. The statistics can be computed exactly, or estimated from a sample of a specified number of rows or a percentage of rows:

ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE INDEX employees_pk COMPUTE STATISTICS;

ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;



DBMS_STATS

The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering object statistics. Oracle lists a number of benefits to using it, including parallel execution, long-term storage of statistics and transfer of statistics between servers. Once again, it follows a similar format to the other methods:

EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);

EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);
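
The parallel-execution benefit mentioned above is exposed through the DEGREE parameter of the gathering procedures; a minimal sketch (the degree value of 4 is purely illustrative):

EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15, degree => 4);
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15, degree => 4);
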
This package also gives you the ability to delete statistics:

EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMPLOYEES_PK');






Transferring Stats
=================

First, the statistics must be collected into a statistics table. In the following examples the statistics for the APPS user are collected into a new table, STATS_TABLE, which is owned by SCOTT:


SQL> EXEC DBMS_STATS.create_stat_table('SCOTT','STATS_TABLE');
SQL> EXEC DBMS_STATS.export_schema_stats('APPS','STATS_TABLE',NULL,'SCOTT');

This table can then be transferred to another server using your preferred method (Export/Import) and the stats imported into the data dictionary as follows:

SQL> EXEC DBMS_STATS.import_schema_stats('APPS','STATS_TABLE',NULL,'SCOTT');
SQL> EXEC DBMS_STATS.drop_stat_table('SCOTT','STATS_TABLE');
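
If moving the table with exp/imp is inconvenient, one SQL-only alternative (a sketch only, not the method described above) is to copy the statistics table across a database link before running the import. The link name SOURCE_DB and the use of the SCOTT schema on both sides are assumptions for illustration:

-- Run on the DESTINATION database. SOURCE_DB is a hypothetical database link
-- pointing back at the server where the statistics were exported; the import
-- and drop steps shown above then run unchanged.
SQL> CREATE TABLE scott.stats_table AS SELECT * FROM scott.stats_table@source_db;
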

TSM for ORACLE Installation


On TSM server :-

1. Create a policy domain and backup copy group (VERExists=1, VERDeleted=0, RETExtra=0, RETOnly=0)
2. Register TDP for Oracle node

On TSM client :-

1. Install TDP for oracle fileset

2. Configure dsm.opt file in /usr/tivoli/tsm/client/api/bin64/dsm.opt
example :-
SErvername tsmserv (provide tsm server name)

3. Configure dsm.sys file in /usr/tivoli/tsm/client/api/bin64/dsm.sys
example :-
SErvername tsmserv (provide tsm server name)
COMMmethod TCPip
TCPPort 1500
TCPServeraddress 192.168.13.104 (provide tsm server IP)
compression no
errorlogname /tmp/dsmerror_erpdev_ora.log (provide error log path)
nodename erpdev_ora (provide node name)

4. Configure tdpo.opt file in /usr/tivoli/tsm/client/oracle/bin64/tdpo.opt
example :-
DSMI_ORC_CONFIG /usr/tivoli/tsm/client/api/bin64/dsm.opt (provide option file path)
DSMI_LOG /tmp/ (provide error log directory)

TDPO_FS orc9_db
TDPO_NODE erpdev_ora (provide node name)
*TDPO_OWNER
*TDPO_PSWDPATH /usr/tivoli/tsm/client/oracle/bin64

*TDPO_DATE_FMT 1
*TDPO_NUM_FMT 1
*TDPO_TIME_FMT 1

*TDPO_MGMT_CLASS_2 mgmtclass2
*TDPO_MGMT_CLASS_3 mgmtclass3
*TDPO_MGMT_CLASS_4 mgmtclass4

5. Go to /usr/tivoli/tsm/client/oracle/bin64 and run
./tdpoconf password

6. Enter the node password and confirm it with the same password. You can then verify the TDP for Oracle environment by running ./tdpoconf showenv from the same path.

7. Then give write permission on the error log file dsmerror_erpdev_ora.log in /tmp (the error log path configured above)

8. Link the "TSM for DB" library file within oracle
-Shutdown oracle instance
-Change directory to oracle's lib directory
# cd /
# cd lib
-Check presence of original libobk.a file
# ls -l libobk.a
-If found rename it
# ln libobk.a libobk.a.org
-Link to TSM library
# ln –s /usr/lib/libobk64.a libobk.a
-Start oracle instance

9. The Oracle administrator's role is to create the RMAN backup script and to set the environment variable for the tdpo.opt file path to "/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt"

ORA-01555 Snapshot Too Old Error

Now that we understand why the ORA-1555 occurs and some of the ways it can occur, we need to examine the following:

How can we logically determine and resolve what has occurred to cause the ORA-1555?

1) Determine if UNDO_MANAGEMENT is MANUAL or AUTO

If set to MANUAL, it is best to move to AUM (Automatic Undo Management). If it is not feasible to switch to AUM, see

Note 69464.1 Rollback Segment Configuration & Tips

to attempt to tune around the ORA-1555 using V$ROLLSTAT.

If set to AUTO, proceed to #2
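
A quick way to check which mode is in effect from SQL*Plus (a minimal example; it simply displays the initialization parameter):

show parameter undo_management
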

2) Gather the basic data

a) Acquire both the error message from the user / client ... and the message in the alert log

User / Client session example:

ORA-01555: snapshot too old: rollback segment number 9 with name "_SYSSMU1$" too small

Alert log example

ORA-01555 caused by SQL statement below (Query Duration=9999 sec, SCN:0x000.008a7c2d)

b) Determine the QUERY DURATION from the message in the alert log

From our example above ... this would be 9999

c) Determine the undo segment name from the user / client message

From our example above ... this would be _SYSSMU1$

d) Determine the UNDO_RETENTION of the undo tablespace

show parameter undo_retention
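
It can also help to confirm which undo tablespace is active and whether its retention is guaranteed; a small sketch (the RETENTION column of DBA_TABLESPACES is available from 10g onward):

show parameter undo_tablespace

-- GUARANTEE means unexpired undo will not be stolen even under space pressure
SELECT tablespace_name, retention
FROM   dba_tablespaces
WHERE  contents = 'UNDO';
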

3) Determine if the ORA-1555 is occurring with an UNDO or a LOB segment

If the undo segment name is null ...

ORA-01555: snapshot too old: rollback segment number with name "" too small

or the undo segment is unknown

ORA-01555: snapshot too old: rollback segment number # with name "???" too small

then this is a read-consistency failure on a LOB segment.

If the undo segment name is known, the error is occurring on an UNDO segment.
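
When the segment name is known, it can be mapped to its undo tablespace through the data dictionary; a small sketch (substitute the segment name reported in your own error message for the example name):

SELECT segment_name, tablespace_name, status
FROM   dba_rollback_segs
WHERE  segment_name = '_SYSSMU1$';
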

==============================================================================================

What to do if an ORA-1555 is occurring with an UNDO segment
-------------------------------------------------------------------------------------------------

1) QUERY DURATION > UNDO_RETENTION

There are no guarantees that read consistency can be maintained after the transaction slot for the
committed row has expired (exceeded UNDO_RETENTION)

Why would one think that the transaction slot's time has exceeded UNDO_RETENTION?

Let's answer this with an example:

If UNDO_RETENTION = 900 seconds ... but our QUERY DURATION is 2000 seconds ...

This says that our query has most likely encountered a row that was committed more than 900
seconds ago ... and has been overwritten as we KNOW that the transaction slot being examined
no longer matches the row we are looking for

The reason we say "most likely" is that it is possible that an unexpired committed transaction slot was
overwritten, due either to space pressure on the undo segment or to a bug.

SOLUTION:

The best solution is to tune the query to reduce its duration. If that cannot be done, then increase
UNDO_RETENTION based on the QUERY DURATION to allow it to protect the committed
transaction slots for a longer period of time.

NOTE: Increasing UNDO_RETENTION requires additional space in the UNDO tablespace. Make
sure to accommodate this space. One method of doing this is to set AUTOEXTEND on one or
more of the UNDO tablespace datafiles for a period of time to allow for the increased space. Once the
size has stabilized, AUTOEXTEND can be removed.
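
A sketch of both adjustments (the retention value, datafile name and sizes below are illustrative placeholders, not recommendations):

-- Raise UNDO_RETENTION above the observed QUERY DURATION (value is illustrative)
ALTER SYSTEM SET undo_retention = 10800;

-- Temporarily allow an undo datafile to grow (file name and sizes are hypothetical)
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 8G;
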

See the solution for #2 below for more options

2) QUERY DURATION <= UNDO_RETENTION

This case is most often due to the UNDO tablespace becoming full at some point while the query
was running.

How do we tell if the UNDO tablespace has become full during the query?

Examine V$UNDOSTAT.UNXPSTEALCNT for the period while the query that generated the
ORA-1555 occurred.

This column shows committed transaction slots that have not exceeded UNDO_RETENTION but
were overwritten due to space pressure on the undo tablespace (IE became full).

If UNXPSTEALCNT > 0 for the time period during which the query was running, then this shows
that the undo tablespace was too small to be able to maintain UNDO_RETENTION. Unexpired
blocks were overwritten, thus ending read consistency for those blocks for that time period.

The following query reports these counters for a given window (the start and end times of the
period of interest are left as placeholders):

set pagesize 25
set linesize 120

select inst_id,
       to_char(begin_time,'MM/DD/YYYY HH24:MI') begin_time,
       UNXPSTEALCNT "# Unexpired|Stolen",
       EXPSTEALCNT "# Expired|Reused",
       SSOLDERRCNT "ORA-1555|Error",
       NOSPACEERRCNT "Out-Of-space|Error",
       MAXQUERYLEN "Max Query|Length"
from gv$undostat
where begin_time between
      to_date('','MM/DD/YYYY HH24:MI:SS')   -- start of period of interest
      and
      to_date('','MM/DD/YYYY HH24:MI:SS')   -- end of period of interest
order by inst_id, begin_time;