Thursday, December 30, 2010

Analyze Statement and Transferring Statistics.

Analyze Statement

The ANALYZE statement can be used to gather statistics for a specific table, index or cluster. The statistics can be computed exactly, or estimated based on a specific number of rows, or a percentage of rows:

ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE INDEX employees_pk COMPUTE STATISTICS;

ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;



DBMS_STATS

The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering object statistics. Oracle lists a number of benefits to using it, including parallel execution, long-term storage of statistics, and transfer of statistics between servers. Once again, it follows a similar format to the other methods:

EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);

EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);
This package also gives you the ability to delete statistics:

EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMPLOYEES_PK');
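To confirm that statistics have actually been gathered (or removed), the LAST_ANALYZED columns of the dictionary views can be checked. A quick sketch using the SCOTT objects from the examples above:

SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT' AND table_name = 'EMPLOYEES';

SELECT index_name, distinct_keys, last_analyzed
FROM   dba_indexes
WHERE  owner = 'SCOTT' AND index_name = 'EMPLOYEES_PK';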






Transferring Stats
=================

First the statistics must be collected into a statistics table. In the following examples the statistics for the APPS user are collected into a new table, STATS_TABLE, which is owned by SCOTT:


SQL> EXEC DBMS_STATS.create_stat_table('SCOTT','STATS_TABLE');
SQL> EXEC DBMS_STATS.export_schema_stats('APPS','STATS_TABLE',NULL,'SCOTT');

This table can then be transferred to another server using your preferred method (Export/Import) and the stats imported into the data dictionary as follows:

SQL> EXEC DBMS_STATS.import_schema_stats('APPS','STATS_TABLE',NULL,'SCOTT');
SQL> EXEC DBMS_STATS.drop_stat_table('SCOTT','STATS_TABLE');
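The transfer itself, done before running the import above, can use the classic export/import utilities. A minimal sketch (the password and dump file name are placeholders):

exp scott/password tables=STATS_TABLE file=stats_table.dmp
(copy stats_table.dmp to the target server, then)
imp scott/password file=stats_table.dmp tables=STATS_TABLE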

TSM for ORACLE Installation

TSM for ORACLE Installation :-

On TSM server :-

1. Create Policy domain and backup copy group ( vere=1, verd=0, retain extra=0, retain only=0)
2. Register TDP for Oracle node

On TSM client :-

1. Install TDP for oracle fileset

2. Configure dsm.opt file in /usr/tivoli/tsm/client/api/bin64/dsm.opt
example :-
SErvername tsmserv (provide tsm server name)

3. Configure dsm.sys file in /usr/tivoli/tsm/client/api/bin64/dsm.sys
example :-
SErvername tsmserv (provide tsm server name)
COMMmethod TCPip
TCPPort 1500
TCPServeraddress 192.168.13.104 (provide tsm server IP)
compression no
errorlogname /tmp/dsmerror_erpdev_ora.log (provide error log path)
nodename erpdev_ora (provide node name)

4. Configure tdpo.opt file in /usr/tivoli/tsm/client/oracle/bin64/tdpo.opt
example :-
DSMI_ORC_CONFIG /usr/tivoli/tsm/client/api/bin64/dsm.opt (provide option file path)
DSMI_LOG /tmp/ (provide error log directory)

TDPO_FS orc9_db
TDPO_NODE erpdev_ora (provide node name)
*TDPO_OWNER
*TDPO_PSWDPATH /usr/tivoli/tsm/client/oracle/bin64

*TDPO_DATE_FMT 1
*TDPO_NUM_FMT 1
*TDPO_TIME_FMT 1

*TDPO_MGMT_CLASS_2 mgmtclass2
*TDPO_MGMT_CLASS_3 mgmtclass3
*TDPO_MGMT_CLASS_4 mgmtclass4

5. Go to /usr/tivoli/tsm/client/oracle/bin64 and run
./tdpoconf password

6. Enter the node password and confirm with the same password; you can then check the Oracle environment by running ./tdpoconf showenv in the same path

7. Then give the Oracle user write permission on
dsmerror_erpprod_ora.log in /tmp (path of the error log file)

8. Link the "TSM for DB" library file within Oracle
-Shut down the Oracle instance
-Change directory to Oracle's lib directory
# cd $ORACLE_HOME/lib
-Check for the presence of an original libobk.a file
# ls -l libobk.a
-If found, rename it
# mv libobk.a libobk.a.org
-Link to the TSM library
# ln -s /usr/lib/libobk64.a libobk.a
-Start the Oracle instance

9. The Oracle administrator's role is to create the RMAN script for backup and to set the environment variable path for the tdpo.opt file to "/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt", as sketched below.
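A minimal sketch of such an RMAN script (the channel name and backup scope are examples only; the TDPO_OPTFILE path follows the value above):

run {
  allocate channel t1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=/usr/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
  backup database;
  release channel t1;
}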

ORA-01555 snapshot too old error.

Now that we understand why the ORA-1555 occurs and some of the ways it can occur, we need to examine the following:

How can we logically determine and resolve what has occurred to cause the ORA-1555?

1) Determine if UNDO_MANAGEMENT is MANUAL or AUTO

If set to MANUAL, it is best to move to AUM. If it is not feasible to switch to AUM see

Note 69464.1 Rollback Segment Configuration & Tips

to attempt to tune around the ORA-1555 using V$ROLLSTAT

If set to AUTO, proceed to #2

2) Gather the basic data

a) Acquire both the error message from the user / client ... and the message in the alert log

User / Client session example:

ORA-01555: snapshot too old: rollback segment number 9 with name "_SYSSMU1$" too small

Alert log example

ORA-01555 caused by SQL statement below (Query Duration=9999 sec, SCN:0x000.008a7c2d)

b) Determine the QUERY DURATION from the message in the alert log

From our example above ... this would be 9999

c) Determine the undo segment name from the user / client message

From our example above ... this would be _SYSSMU1$

d) Determine the UNDO_RETENTION of the undo tablespace

show parameter undo_retention

3) Determine if the ORA-1555 is occurring with an UNDO or a LOB segment

If the undo segment name is null ...

ORA-01555: snapshot too old: rollback segment number with name "" too small

or the undo segment is unknown

ORA-01555: snapshot too old: rollback segment number # with name "???" too small

then this means it is a read consistency failure on a LOB segment

If the segment name or the undo segment is known, the error is occurring with an UNDO segment.

==============================================================================================

What to do if an ORA-1555 is occurring with an UNDO segment
-------------------------------------------------------------------------------------------------

1) QUERY DURATION > UNDO_RETENTION

There are no guarantees that read consistency can be maintained after the transaction slot for the
committed row has expired (exceeded UNDO_RETENTION)

Why would one think that the transaction slot's time has exceeded UNDO_RETENTION?

Let's answer this with an example:

If UNDO_RETENTION = 900 seconds ... but our QUERY DURATION is 2000 seconds ...

This says that our query has most likely encountered a row that was committed more than 900
seconds ago ... and has been overwritten as we KNOW that the transaction slot being examined
no longer matches the row we are looking for

The reason we say "most likely" is that it is possible an unexpired committed transaction slot was
overwritten, either due to space pressure on the undo segment or because of a bug

SOLUTION:

The best solution is to tune the query to reduce its duration. If that cannot be done, then increase
UNDO_RETENTION based on the QUERY DURATION to allow it to protect the committed
transaction slots for a longer period of time

NOTE: Increasing UNDO_RETENTION requires additional space in the UNDO tablespace. Make
sure to accommodate for this space. One method of doing this is to set AUTOEXTEND on one or
more of the UNDO tablespace datafiles for a period of time to allow for the increased space. Once the
size has stabilized, AUTOEXTEND can be removed.
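For example (the retention value and datafile path are illustrative only):

ALTER SYSTEM SET UNDO_RETENTION = 3600 SCOPE=BOTH;
ALTER DATABASE DATAFILE '/u01/oradata/ORCL/undotbs01.dbf' AUTOEXTEND ON NEXT 100M MAXSIZE 10G;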

See the solution for #2 below for more options

2) QUERY DURATION <= UNDO_RETENTION

This case is most often due to the UNDO tablespace becoming full sometime during the time when the
query was running

How do we tell if the UNDO tablespace has become full during the query?

Examine V$UNDOSTAT.UNXPSTEALCNT for the period while the query that generated the
ORA-1555 occurred.

This column shows committed transaction slots that have not exceeded UNDO_RETENTION but
were overwritten due to space pressure on the undo tablespace (i.e. it became full).

If UNXPSTEALCNT > 0 for the time period during which the query was running, then this shows
that the undo tablespace was too small to be able to maintain UNDO_RETENTION. Unexpired
blocks were overwritten, thus ending read consistency for those blocks for that time period.
set pagesize 25
set linesize 120

select inst_id,
       to_char(begin_time,'MM/DD/YYYY HH24:MI') begin_time,
       UNXPSTEALCNT "# Unexpired|Stolen",
       EXPSTEALCNT "# Expired|Reused",
       SSOLDERRCNT "ORA-1555|Error",
       NOSPACEERRCNT "Out-Of-space|Error",
       MAXQUERYLEN "Max Query|Length"
from   gv$undostat
where  begin_time between
       to_date('','MM/DD/YYYY HH24:MI:SS')
       and
       to_date('','MM/DD/YYYY HH24:MI:SS');

Tuesday, September 14, 2010

Points_2_Resolve the ORA-12705 error

Below are the points to fix the issue on your oracle client machine.

Cause: There are two possible causes: Either an attempt was made to issue an ALTER SESSION statement with an invalid NLS parameter or value; or the NLS_LANG environment variable contains an invalid language, territory, or character set.

Action: Check the syntax of the ALTER SESSION command and the NLS parameter, correct the syntax and retry the statement, or specify correct values in the NLS_LANG environment variable.
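To see which values are actually valid before setting NLS_LANG or issuing ALTER SESSION, the server-side view V$NLS_VALID_VALUES can be queried, for example:

SELECT parameter, value
FROM   v$nls_valid_values
WHERE  parameter IN ('LANGUAGE', 'TERRITORY', 'CHARACTERSET');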

Points to fix the issue:


1. Simply set it with the command below:
C:\> set NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252 (if this does not fix it, try the next procedure)


2. From Start --> Run --> type regedit --> HKEY_LOCAL_MACHINE --> SOFTWARE --> ORACLE --> the HOME folder has a key
NLS_LANG set to AMERICAN_AMERICA.WE8MSWIN1252.
Changing it to AMERICAN_AMERICA.WE8ISO8859P15 may fix your connectivity issue. (This is a permanent fix; if it still does not work, try the next procedure.)


3. Try this:
Check whether ORACLE_HOME exists in the Windows registry (regedit).
Example:
[HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE]
ORACLE_HOME= (for example: D:\Appls\Oracle)
If it does not exist, create it,
and add the bin folder to the Windows PATH variable,
for example D:\Appls\Oracle\bin, via
Control Panel -> System -> Advanced Options -> Environment Variables -> System variables -> Path


4. Still not resolved?
Go to regedit --> Oracle Home and delete the NLS entries from there. You should be fine.


5. Finally, if the issue is not solved by the points above, uninstall and reinstall the Oracle client software.

Sunday, August 29, 2010

WHEN TO REORGANIZE TABLES AND REBUILD INDEXES.

When to reorganize tables and rebuild indexes:

Table fragmentation: When rows are not stored contiguously, or when rows are split across more than one block, performance decreases because those rows require additional block accesses.
Note that table fragmentation is different from file fragmentation. When a lot of DML operations are applied to a table, the table becomes fragmented because DML does not release free space from the table below the HWM.
The HWM is an indicator of used blocks in the database. Blocks below the high water mark (used blocks) have at least once contained data; that data might since have been deleted. Since Oracle knows that blocks above the high water mark have never contained data, it only reads blocks up to the high water mark when doing a full table scan.
*Certain DDL statements (for example TRUNCATE or ALTER TABLE ... MOVE) reset the HWM.

Table size (with fragmentation, assuming an 8 KB block size)
Sql>exec dbms_stats.gather_table_stats('SCOTT','BIG1');
Sql>select table_name,round((blocks*8),2)||'kb' "size" from user_tables where table_name='BIG1';
output of the command:
table_name size
BIG1 72952 kb

Actual data in table
Sql>select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size" from user_tables where table_name='BIG1';
output
table_name size
BIG1 30604.2 kb

72952 - 30604 = 42348 kb is wasted space in the table.
The difference between the two values is about 60%, and PCTFREE is 10% (the default), so the table has roughly 50% extra space which is wasted because it holds no data.
How to reset the HWM / remove fragmentation?
There are 4 options:
1) alter table ... move to another tablespace (see the sketch below)
2) export, truncate or drop, then import
3) CTAS method (create table ... as select)
4) dbms_redefinition
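A minimal sketch of option 1, using the BIG1 table from the example above (the tablespace name USERS is only an illustration; MOVE can also be run without a tablespace clause to rebuild the table in place):

alter table big1 move tablespace users;
-- or, within the same tablespace:
alter table big1 move;

MOVE changes rowids, so every index on the table becomes UNUSABLE and must be rebuilt (checked below). After the move, gather statistics again with dbms_stats so the size figures reflect the reorganized table.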

To check indexes
sql>select status,index_name from user_indexes where table_name='BIG1';
output
status index_name
UNUSABLE BIGIDX
sql>alter index bigidx rebuild;
select del_lf_rows*100/decode(lf_rows,0,1,lf_rows) from index_stats where name='';
You may decide that an index should be rebuilt if more than 20% of its rows are deleted (see the sketch below).
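Note that INDEX_STATS is populated only for the current session after a VALIDATE STRUCTURE, so a minimal sketch of the full check (the index name BIGIDX comes from the example above) is:

analyze index bigidx validate structure;

select name, del_lf_rows*100/decode(lf_rows,0,1,lf_rows) "pct deleted"
from index_stats
where name = 'BIGIDX';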

How can you determine if an index needs to be dropped and rebuilt?
Level: Intermediate
Expected answer: Run the ANALYZE INDEX command on the index to validate its structure and then calculate the ratio LF_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN); if it isn't near 1.0 (i.e. greater than 0.7 or so) then the index should be rebuilt. Or rebuild if the ratio
BR_BLK_LEN/(LF_BLK_LEN+BR_BLK_LEN) is nearing 0.3.

To check whether the table is in compressed or uncompressed mode:
Sql>select compression from dba_tables where table_name='<tablename>';
If the computed actual data size sometimes shows greater than the table size, it can be because the table is in compressed mode.
If the table is compressed, move it to nocompress mode:
Sql>alter table <table_name> move nocompress;

MYSQL_DBA_BEGINNER.

MYSQL DOCUMENTATION

To install Mysql
#rpm -ivh mysql-server-community_x86-64.rpm

After installing, immediately set a password for root
#/usr/bin/mysqladmin -u root -p password "root123"

To start and stop mysql server

#/etc/init.d/mysql start
#/etc/init.d/mysql stop

To connect mysql
#mysql -u root -p

To check databases and use a database
Mysql>show databases;
Mysql>use <database_name>;

To check tables, their structure and their contents
Mysql>show tables;
Mysql>desc <table_name>;
Mysql>select * from <table_name>;

To create user
Mysql>create user 'khan'@'localhost' identified by 'khan';

To give Privileges
Mysql>grant select,insert,update,delete on *.* to 'khan';

To drop user
Mysql>drop user 'khan'@'localhost';

To check which database you are connected to
Mysql>select database();

To check which user is connected
Mysql>select user();

To check how many users exist
Mysql>select distinct grantee from information_schema.user_privileges;

Global privileges are administrative or apply to all databases on a given server. To assign global privileges, use ON *.* syntax:

GRANT ALL ON *.* TO 'someuser'@'somehost';
GRANT SELECT, INSERT ON *.* TO 'someuser'@'somehost';
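To verify what a user has actually been granted (the user and host here match the earlier example):

SHOW GRANTS FOR 'khan'@'localhost';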

If you need only two rows of information, add LIMIT 2 to your query (see the example below).
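For example (t1 is just an illustrative table name):

Mysql>select * from t1 limit 2;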

In MySQL, databases are created in /var/lib/mysql by default.

To create table
Mysql>create table oolaala(name varchar(20));

To check mysql version
#rpm -qa|grep -i mysql

If you get errors after relocating files (SELinux related), try this
#setsebool -P mysqld_disable_trans=1

Note: If the my.cnf file is not available, or has been removed or lost, we can copy the sample file available in /usr/share/mysql (my-large.cnf).

MYISAM and INNODB (MySQL storage engines)

By default, tables are created with the MYISAM engine.

To create tables in INNODB,
set it in my.cnf (uncomment the #innodb parameters)

Mysql>create table pepsi(name varchar(10)) engine=innodb;

MYISAM is for OLAP and INNODB is for OLTP

To check connection errors or any other errors, look in
#/var/log/messages, /var/log/syslog, or /var/lib/mysql/mysqld.log

How to rename and relocate the datafiles (MySQL data directory); see the sketch after these steps:
1. Stop the service
2. Copy the data directory to another mount point
3. vi /etc/my.cnf
#The MySQL server
[mysqld]
datadir=/home/mysql/data (give a different mount point here)
user=mysql
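A minimal sketch of the whole procedure on a typical install (the paths follow the example above; adjust ownership and SELinux settings for your system):

#/etc/init.d/mysql stop
#mkdir -p /home/mysql/data
#cp -rp /var/lib/mysql/* /home/mysql/data/
#chown -R mysql:mysql /home/mysql/data
(edit /etc/my.cnf and set datadir=/home/mysql/data under [mysqld])
#/etc/init.d/mysql start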

To migrate from Oracle to MySQL:
1. Use the phpMyAdmin tool, or
2. $mysql -u <user> -p <database> < abc.sql
3. mysql>source abc.sql

These methods help to migrate from Oracle to MySQL.

To create tables in oracle
set echo off
set heading off
spool t1_insert.txt
create table t1 (c1 number, c2 varchar2(10), c3 date);
insert into t1 values (1,'one', sysdate);
insert into t1 values (2,'two', sysdate);
commit;
select 'insert into t1 (c1, c2, c3) values (' || c1 || ', ''' || c2 || ''', ''' || to_char(c3,'yyyy-mm-dd hh24:mi:ss') || ''');' from t1;
spool off

To create the table in Mysql
create table t1 (c1 int, c2 varchar(10), c3 datetime);
(the following select is run on the Oracle side to generate the quoted value list)
select '' || c1 || ', ''' || c2 || ''', ''' || to_char(c3,'yyyy-mm-dd hh24:mi:ss') || '''' from t1;

Rename t1_insert.txt to t1_insert.sql

to migrate
mysql>source t1_insert.sql

To take a logical backup dump of mysql

#mysqldump -uroot -padmin --all-databases >testmysql.sql
It generates a file that we can read for import into mysql or oracle. For oracle, change the data format and import by running the sql file.
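To restore such a dump on another MySQL server (the credentials are placeholders), for example:

#mysql -u root -p < testmysql.sql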

site to download mysql software
http://dev.mysql.com/

Sunday, June 6, 2010

ORACLE_11G_NEW_FEATURES

ORACLE 11G NEW FEATURES:

Installation, Database Upgrades and Change Management
Installation New Features Support
Role and Privilege Changes
Deprecated Components
New Initialization Parameters Affecting Database Creation
DBCA Enhancements
Upgrading to Oracle Database 11g
Database Replay
The SQL Performance Analyzer
Patching in Oracle Database Control
Database Diagnosis and Repair
Introducing Automatic Diagnostic Repository (ADR)
Configuring the ADR
Using adrci Tool
Using The Support Workbench in the OEM
Database Health Monitor
Data Recovery Advisor
SQL Test Case Builder
Data Block Corruption Parameters
Database Administration
Automatic Memory Management
Automatic Maintenance Tasks
Oracle Flashback-Related New Features
LogMiner Interface in Oracle Enterprise Manager
Oracle Flashback Transaction Backout
Flashback Data Archive
Virtual Columns
New Data Partitioning Schemes
DDL Lock Timeout
Explicit Locking of Tables
Invisible Indexes
Read-Only Tables
Shrinking Temporary Tablespaces and Tempfiles
Creating an Initialization Parameter File from Memory
Restore Point Enhancements
Database Resident Connection Pooling
Comparing and Synchronizing Database Objects
SQL*Plus New Features

Oracle 11g New Features for Administrators:-

Online Application Maintenance
Oracle Advanced Compression Option
Oracle Scheduler New Features
Lightweight Jobs
Remote External Jobs
Finer-grained Dependency Management
Enhancements in Oracle Database Resource Manager
Performance Tuning
PL/SQL Native Compilation
Server Result Cache
Client Side Result Cache
Enhanced Oracle Process Monitoring
Subprogram Inlining
SQL Tuning Automation
SQL Access Advisor Enhancements
Changing Statistics Preferences
Enhanced Statistics Maintenance
SQL Plan Management
ADDM New Features
AWR New Features
Setting Metric Thresholds for Baselines
Performance-Related Changes in Database Control
Miscellaneous New Performance Tuning Features
Real-Time SQL Monitoring
Adaptive Cursor Sharing
Database Security
Stronger Password Hash Algorithm
Security Out of the Box
Anti Network Attacks Parameters
Tablespace Encryption
Fine-Grained Access Control for UTL_* Packages
Further Security New Features
Backup and Recovery New Features
Enhanced Block Media Recovery
RMAN Substitution Variables
New RMAN Configuration Parameters
The Multisection Backups
Creating Archival Backups
VALIDATE Command
Configuring an Archived Redo Log Deletion Policy

Oracle 11g New Features for Administrators:-

Active Database Duplication
Importing and Moving Recovery Catalogs
Virtual Private Catalogs
Miscellaneous New Features in RMAN
Data Pump Utilities
Compression Enhancement
Encryption Enhancements
Reusing a Dump File
Remapping Data
Renaming Tables During Export or Import
Data Pump and Partitioned Tables
Ignoring Nondeferred Constraints
External Tables Based on Data Pump Driver
Enhancement in the Transportable Parameter
Automatic Storage Management (ASM)
SYSASM Privilege and OSASM Group
Upgrading ASM using DBUA
Upgrading ASM Manually
ASM Restricted Mode
Diskgroup Attributes
Checking Diskgroup
asmcmd Utility Commands
Fast Rebalance
The FORCE option with Drop Diskgroup Command
Miscellaneous ASM New Features
PL/SQL New Features
Data Warehousing
SecureFiles
Accessing a LOB Using SQL and PL/SQL
Online Redefinition
Partition Change Tracking (PCT)
Generating SQL Crosstab Report using PIVOT Operator
Partitioning Improvements

The topics below are not covered:-

Oracle Streams
Data Guard
Oracle RAC
Oracle XML DB