First & Test post on new site

April 7, 2013

Hello All,

This is the first post and a test post for this new site. If you encounter any issues with the site, please drop me an email describing the issue and I will fix it.

For those who enrolled in the post-to-email subscription: you should receive this post via email. If not, please spare some time to let me know so that I can add you.

Thanks for your support and time.



Categories: Uncategorized

Moving to a new site

April 4, 2013

Dear Readers & Friends,

I am in the process of moving to our new site, where all of this blog's content has been copied. If you land here, you will be redirected to the new site very soon.

Thank you so much to each and every one of you for your support and the overwhelming response to this blog.

All my future posts will appear there, and the users and email subscriptions have also been migrated accordingly.

You will continue to receive the posts via email, but from the new site.

Your suggestions and inputs are most welcome to make the new site more user friendly and helpful.

If you experience any broken links or pages not displaying, kindly send me a mail.

See you there….


Categories: Administration

SQL Tuning: Handling nulls in indexes and get them used

April 3, 2013


Thanks to my friend and colleague Naga Satish; I received the following from his email and am sharing it here.

"Edited Explain plan"
| Id  | Operation            | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT     |           |  3626K|   390M|       |  1217K  (1)| 04:03:28 |
|*  1 |  HASH JOIN RIGHT SEMI|           |  3626K|   390M|   964M|  1217K  (1)| 04:03:28 |
|*  2 |   TABLE ACCESS FULL  | TEST_ISID |    19M|   746M|       |   452K  (2)| 01:30:33 |
|*  3 |   HASH JOIN          |           |    46M|  3160M|   749M|   499K  (1)| 01:39:49 |
|*  4 |    TABLE ACCESS FULL | TEST_MKIS |    16M|   558M|       |   120K  (2)| 00:24:03 |
|*  5 |    TABLE ACCESS FULL | TEST_MIXR |    46M|  1624M|       |   213K  (1)| 00:42:37 |

Predicate Information (identified by operation id):
   1 - access("G"."INSTR_ID"="A"."INSTR_ID" AND "G"."ISID_OID"="B"."ISID_OID")
   2 - filter("G"."BEND_TMS" IS NULL AND ("G"."ID_CTXT_TYP"=' ' OR
              "G"."ID_CTXT_TYP"='RIC' OR "G"."ID_CTXT_TYP"='' OR ...))
   3 - access("A"."MKT_ISS_OID"="B"."MKT_ISS_OID")
   4 - filter("A"."BEND_TMS" IS NULL)
   5 - filter("B"."BEND_TMS" IS NULL)

Index Information:-

TABLE_NAME                     INDEX_NAME                     COLUMN_NAME
------------------------------ ------------------------------ ------------- 
TEST_ISID                      TEST_ISID_U001                 BEND_TMS
TEST_MIXR                      TEST_MIXR_IND2                 BEND_TMS
TEST_MIXR                      TEST_IND                       BEND_TMS

Though indexes were in place, why did the optimizer choose a full table scan (FTS) instead of an index scan?

The issue is that the filter predicates at steps 2, 4, and 5 (the WHERE conditions) are trying to pull NULL
values from a column. In general, an Oracle B-tree index does not store index entries for rows in which all
indexed columns are NULL (this keeps the index structure smaller), whereas bitmap indexes always store NULLs.

Possible Solutions

To fix this, the possible solutions are:

1. Define a composite index with at least one other column
   that has a NOT NULL constraint; ideally, the column in which the NULL values
   might appear would be the leading column in the composite index.
2. Define a composite index with a numeric constant (such as 1)
   as the second column in the composite index.
3. Bitmap indexes always store NULL values;
   if appropriate (the column experiences few inserts, updates, and deletes,
   and the database is Enterprise Edition),
   create a bitmap index on the column.
4. If the number of NULL values in a column is relatively small
   (compared to the number of rows in the table),
   and the original SQL statement may be modified,
   create a function-based index that converts NULL values to 1
   and non-NULL values to NULL.
5. Another solution would be to add a constant single space (' ')
   as a second column in the index.
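As a sketch of solution 4 (the table and column names here are assumed from the plan above, not taken from the original SQL): NVL2 returns its third argument when the first is NULL, so it maps a NULL BEND_TMS to 1 and a non-NULL one to NULL, and the index stores entries only for the NULL rows.

```sql
-- Hypothetical function-based index: only rows with BEND_TMS IS NULL
-- produce a non-NULL key, so only those rows are stored in the index.
CREATE INDEX test_isid_fbi ON test_isid (NVL2(bend_tms, NULL, 1));

-- The query must use the same expression for the index to be eligible:
SELECT *
  FROM test_isid
 WHERE NVL2(bend_tms, NULL, 1) = 1;
```

Gather statistics on the table afterwards so that the optimizer can cost the new index.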

After implementing the fix, the FULL TABLE SCAN disappeared and index range scans were used.

"Edited explain plan"
| Id  | Operation            | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT     |           |  3626K|   390M|       |  1217K  (1)| 00:14:00 |
|*  1 |  HASH JOIN RIGHT SEMI|           |  3626K|   390M|   964M|  1217K  (1)| 00:14:00 |
|*  2 |   INDEX RANGE SCAN   | INDEX3    |     9M|   346M|       |   452K  (2)| 00:05:00 |
|*  3 |   HASH JOIN          |           |    11M|   482M|   749M|   499K  (1)| 00:09:49 |
|*  4 |    INDEX RANGE SCAN  | INDEX1    |     5M|    58M|       |   120K  (2)| 00:05:03 |
|*  5 |    INDEX RANGE SCAN  | INDEX2    |     6M|   424M|       |   213K  (1)| 00:04:37 |

Further reading: Charles Hooper's post and Richard Foote's post.


SQL Plan Management:- For how many days can SQL plan baselines or management data be retained?

April 2, 2013

If you want to understand more about SQL Plan Management, read this post before reading further,

Debate- SPA vs. SPM vs. ACS, SPM vs. sql_profiles etc

As you see in the figure, each SQL plan baseline has multiple plans associated with it, stored in the SQL plan management base. Some of the plans are verified, some are not verified, and some are repeatable; that is, both used and unused plans reside in the SQL management base.

Source: Optimizer blog.

So now back to the post, the question here is,

1) Where are they stored?

The SQL management base (SMB) is a part of the data dictionary that resides in the SYSAUX tablespace. It stores statement logs, plan histories, SQL plan baselines, and SQL profiles. To allow weekly purging of unused plans and logs, the SMB uses automatic space management.

2) For how many days are the plan baselines stored or retained?

1) Disk Space Usage:-

Disk space used by the SMB is regularly checked against a limit based on the size of the SYSAUX tablespace. By default, the limit for the SMB is no more than 10% of the size of SYSAUX. The allowable range for this limit is between 1% and 50%.

A weekly background process measures the total space occupied by the SMB. When the defined limit is exceeded, the process writes a warning to the alert log. The database generates alerts weekly until one of the following conditions is met:

  • The SMB space limit is increased
  • The size of the SYSAUX tablespace is increased
  • The disk space used by the SMB is decreased by purging SQL management objects (SQL plan baselines or SQL profiles)

To change the percentage limit, use the CONFIGURE procedure of the DBMS_SPM package. The following example changes the space limit to 30%:

    SQL> EXEC DBMS_SPM.CONFIGURE('space_budget_percent', 30);

2) Purge Policy:-

A weekly scheduled purging task manages the disk space used by SQL plan management. The task runs as an automated task in the maintenance window.

The database purges plans not used for more than 53 weeks, as identified by the LAST_EXECUTED timestamp stored in the SMB for that plan. The 53-week period ensures plan information is available during any yearly SQL processing. The unused plan retention period can range between 5 and 523 weeks (a little more than 10 years).

To configure the retention period, use the CONFIGURE procedure of the DBMS_SPM PL/SQL package. The following example changes the retention period to 105 weeks:

    SQL> EXEC DBMS_SPM.CONFIGURE('plan_retention_weeks', 105);

3) Are they part of the AWR collection & AWR retention?

No, they are not part of the AWR collection and do not follow the AWR retention policy.

4) Views

The current configuration can be checked in the DBA_SQL_MANAGEMENT_CONFIG view:

PARAMETER_NAME                 PARAMETER_VALUE
------------------------------ ---------------
SPACE_BUDGET_PERCENT                        30
PLAN_RETENTION_WEEKS                       105

5) Which tables do the SQL plan baselines belong to?

According to my understanding, the DBA_SQL_PLAN_BASELINES view is based on SQLOBJ$, SQLOBJ$DATA, and SQLOBJ$AUXDATA, so the data for SQL profiles and baselines is stored in these tables.
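To see what is currently stored, the documented DBA_SQL_PLAN_BASELINES view can be queried directly; a minimal sketch:

```sql
-- List stored baselines, their state, and when each plan last ran
SELECT sql_handle,
       plan_name,
       enabled,
       accepted,
       last_executed
  FROM dba_sql_plan_baselines
 ORDER BY last_executed DESC NULLS LAST;
```

The LAST_EXECUTED column here is the same timestamp the weekly purge task uses to decide which plans have gone unused past the retention period.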




Quick Question: When was the table last modified

March 26, 2013

Hello All,

In continuation of my previous post,

I have another question: when was my table last modified, and in what ways can I view it?

Here you go,

1) Enabling audit on the segment

	1) Enable auditing on the table
	2) Use DBA_AUDIT_TRAIL to see the INSERT and UPDATE statements

2) Use DBA_TAB_MODIFICATIONS to view the same, with statistics_level set to ALL or TYPICAL.
But here are the caveats:

  1) If you are on 9i, the data is pushed from memory to this view
     every 15 mins
  2) Until 10gR1, the data is pushed from memory to this view
     every 3 hours
  3) From 10gR2 onwards, this data is pushed only when you
     gather statistics on the segment,
  4) or when you manually flush the monitoring info, i.e. using
     DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO

Background:- Table/segment modifications used to be monitored by switching MONITORING
on/off on segments in 9i; this switch was removed, monitoring is on by
default in 10g, and all object modifications are tracked in MON_MOD$_ALL.
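So, on 10gR2 onwards, to get up-to-date figures without gathering statistics, you can flush the monitoring info first (a minimal sketch; the table name TEST is assumed):

```sql
-- Push the in-memory modification counters down to MON_MOD$_ALL
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- The view now reflects the latest counters
SELECT table_name, inserts, updates, deletes, truncated, timestamp
  FROM dba_tab_modifications
 WHERE table_name = 'TEST';
```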

Another important aspect: consider the example below,

	SQL> Select * from dba_tab_modifications where table_name='TEST';

	TABLE_NAME      INSERTS    UPDATES    DELETES TRUNC TIMESTAMP
	------------ ---------- ---------- ---------- ----- -----------------
	TEST                320          0          0 NO    26-Mar-2012 10:21

The date is when the record was entered in MON_MOD$_ALL, not the exact date
when the original table was modified.

3) If you have the AWR license, use DBA_HIST_SQLSTAT etc. rather than relying on a statistics gather or waiting for the flush.

select to_char(sn.begin_interval_time,'yy-mm-dd hh24'),
       p.object_owner owners, p.object_name obj_name,
       p.operation operation,
       p.options options, count(1) idx_usg_cnt
  from dba_hist_sql_plan p, dba_hist_sqlstat s,
       dba_hist_snapshot sn
 where p.object_owner = '&USERNAME'
   and p.sql_id = s.sql_id and p.object_name = '&OBJNAME'
   and p.operation like '%UPDATE%'
   and sn.snap_id = s.snap_id
 group by sn.begin_interval_time, p.object_owner,
          p.object_name, p.operation, p.options
 order by 1, 2, 3;

Note:- I am filtering on the operation UPDATE from the sqlstat/sqlplan views. Sample output:
 ----------- ------------ ------------------------
 13-02-26 17 TEST     TEST           UPDATE       
 13-02-26 17 TEST     TEST           UPDATE       
 13-02-27 16 TEST     TEST           UPDATE       
 13-02-27 16 TEST     TEST           UPDATE       
 13-03-02 16 TEST     TEST           UPDATE       
 13-03-02 16 TEST     TEST           UPDATE       
 13-03-02 20 TEST     TEST           UPDATE       
 13-03-02 20 TEST     TEST           UPDATE       
 13-03-07 17 TEST     TEST           UPDATE       
 13-03-07 17 TEST     TEST           UPDATE       
 13-03-08 16 TEST     TEST           UPDATE       
 13-03-08 16 TEST     TEST           UPDATE       
 13-03-08 17 TEST     TEST           UPDATE       
 13-03-08 17 TEST     TEST           UPDATE       
 13-03-09 17 TEST     TEST           UPDATE       
 13-03-09 17 TEST     TEST           UPDATE       
 13-03-12 16 TEST     TEST           UPDATE       
 13-03-12 16 TEST     TEST           UPDATE       
 13-03-14 16 TEST     TEST           UPDATE       
 13-03-14 16 TEST     TEST           UPDATE       
 13-03-14 17 TEST     TEST           UPDATE       
 13-03-14 17 TEST     TEST           UPDATE       
 13-03-16 18 TEST     TEST           UPDATE       
 13-03-16 18 TEST     TEST           UPDATE       
 13-03-19 16 TEST     TEST           UPDATE       
 13-03-19 16 TEST     TEST           UPDATE       
 13-03-20 17 TEST     TEST           UPDATE       
 13-03-20 17 TEST     TEST           UPDATE       
 13-03-21 16 TEST     TEST           UPDATE       

Hope this helps

Another good blog: Iordan Iotzov's

March 20, 2013

Thanks, Iordan Iotzov, for showing up here and following this blog.

To my blog readers,

Iordan has many good works and posts on his blog, and he recently presented at HOTSOS about confidence in cardinalities/statistics. You can read it here.


Categories: Uncategorized

RAC: Find when the CRS resource was last started or status changes

March 19, 2013

Finding when a CRS resource was last modified or started can be very clumsy if you dig through crsd.log or other logs.

Instead, use the -v flag of the crsctl status resource command, which shows important details like last server, last state change, last restart, internal state, etc.

For example:-

[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -l 
STATE=ONLINE on node1 
[grid@node1 ~]$ crsctl status resource ora.mydb.test_srv.svc -v 
ID=ora.mydb.test_srv.svc 1 1 
LAST_RESTART=08/10/2011 16:32:53 


Script: Shell Script to connect to all databases in a server and change a parameter

March 19, 2013


When you need to run something like a parameter check against all databases on a server, and you have many databases, doing it manually is cumbersome.

Here is a script that connects to every database on a server by reading /etc/oratab (Linux), changes the streams pool size to 50M in the parameter file, and then changes it in the database too.

for INSTANCE in `cat /etc/oratab | grep -v "^#" | cut -f1 -d: -s`
do
  export ORACLE_SID=$INSTANCE
  # Replace the streams pool size value in the parameter file
  # (my pfile location is the same for all databases except the SID)
  OLD=`grep STREAMS_POOL_SIZE /u01/$ORACLE_SID/pfile/init.ora | cut -d= -f2`
  sed -i "s/$OLD/50M/g" /u01/$ORACLE_SID/pfile/init.ora
  # Then connect to the database and change it in the spfile as well
  sqlplus '/ as sysdba' << eof
alter system set streams_pool_size = 50M scope=spfile;
exit
eof
done

Further, if you have more than one server, copying this script to the other servers and running it there is again a problem. Instead,
log in to one server and use ssh with the 'bash -s' option.

For example, log in to test1 and feed the script above to each remote server; this runs the script remotely from your local session and makes the required change:

test1:/home/oracle> ssh test2 'bash -s' <
test1:/home/oracle> ssh test3 'bash -s' <
test1:/home/oracle> ssh test4 'bash -s' <

Hope this helps.


Categories: Scripts

RAC: Restrict Parallel Query processing to local node only

March 19, 2013

When a query is issued against an Oracle Real Application Clusters (Oracle RAC) database with multiple nodes, the parallel processes may be spawned on different nodes. This approach is used to ensure that no one node becomes overloaded and that the processing power of all nodes is utilized as efficiently as possible.

However, under certain conditions, the interinstance traffic in the Oracle RAC database may already be significantly high. As the parallel processes on different nodes send their result sets via the interconnect, there is a strong possibility that this added traffic will introduce performance issues, especially related to global cache metrics. In such cases, you may want to restrict the parallel processes to the node where the parallel query coordinator runs. Because all the components of the query—the coordinator and the parallel processes—are in the same instance, there is no interinstance traffic and hence there are no global-cache-related issues.

The parallel_force_local parameter restricts parallel processes to a single instance. The default value is FALSE, meaning that the parallel processes can go to any available instance. To restrict the parallel processes to a single instance, set the parallel_force_local parameter value to TRUE.
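A minimal sketch of the change (SID='*' is an assumption that you want the restriction on every RAC instance; the parameter is dynamic, so SCOPE=BOTH applies it immediately and persists it in the spfile):

```sql
-- Keep parallel execution servers on the instance running the coordinator
ALTER SYSTEM SET parallel_force_local = TRUE SCOPE=BOTH SID='*';
```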


Is your SGA > 10GB? Then you probably have to look at huge pages

March 18, 2013

In Theory:-

HugePages is a feature integrated into the Linux kernel 2.6. It is a method to have larger page size that is useful for working with very large memory. HugePages is useful for both 32-bit and 64-bit configurations. HugePage sizes vary from 2MB to 256MB, depending on the kernel version and the hardware architecture. For Oracle Databases, using HugePages reduces the operating system maintenance of page states, and increases Translation Lookaside Buffer (TLB) hit ratio.

Without HugePages, the operating system keeps each 4KB of memory as a page, and when it is allocated to the SGA, then the lifecycle of that page (dirty, free, mapped to a process, and so on) is kept up to date by the operating system kernel.

With HugePages, the operating system page table (the virtual-to-physical memory mapping) is smaller, since each page table entry points to pages from 2MB to 256MB. Also, the kernel has fewer pages whose lifecycle must be monitored.

As the amount of memory available on systems grows and the amount of memory needed by the database grows the traditional 4k page size used in most Linux systems is becoming a bit too small. As the total memory allocated increases the number of pages that must be managed also increases – meaning more work for the kernel. With HugePages you can increase the typical 4KB page size to something like 2MB. This means that for the same amount of RAM being used your OS will have a multiple of 512 less pages to manage. In addition, with HugePages the pages are pinned in memory and can’t be swapped to disk, thus avoiding possible disk writes. Another key benefit I’ve read is that HugePages are managed via a global PageTable rather than every process having its own PageTable – this also reduces the amount of memory needed.

OK, enough about the HugePages documentation. Steps to implement HugePages:

The basic steps are as follows:-

* Set the memlock ulimit for the oracle user.
* Disable Automatic Memory Management (AMM) if necessary, as it is incompatible with HugePages.
* Run the Oracle-supplied script to calculate the recommended value for the vm.nr_hugepages kernel parameter.
* Edit /etc/sysctl.conf with the recommended vm.nr_hugepages setting.
* Reboot the server.
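As a sketch of where these settings live (the values here are taken from the sizing example below, not recommendations for your system):

```
# /etc/security/limits.conf -- memlock for the oracle user, in KB
oracle soft memlock 130000000
oracle hard memlock 130000000

# /etc/sysctl.conf -- kernel parameters
kernel.shmmax = 69793218560
kernel.shmall = 16777216
vm.nr_hugepages = 24580
```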

OS Level Settings Considerations:-


  • kernel.shmmax – set to the largest SGA on your server plus 1G
  • kernel.shmall – set to the sum of all SGAs on the server divided by the page size ('getconf PAGESIZE')


  • oracle soft memlock – set to slightly less than total RAM on server (in KB)
  • oracle hard memlock – set to slightly less than total RAM on server (in KB)

So, for my system for example,

  • RAM = 128GB = 132093152 kB
  • SGA = 48GB – however, to allow for possible growth and given I have 128GB total, I’m going to use 64G for my numbers
  • PGA = 16GB
  • shmmax = 64GB + 1GB = 65GB = 69793218560
  • shmall = 1 SGA @ 64GB = 64GB/4096 = 16,777,216
  • oracle soft memlock = slightly less than 132093152 = 130000000
  • oracle hard memlock = oracle soft memlock = 130000000

2) Oracle has a script (in MOS Note 401749.1) that will determine what they recommend for your HugePages configuration. Run this script:

->./ …

Recommended setting: vm.nr_hugepages = 24580

Next, add the recommended setting to /etc/sysctl.conf:

vm.nr_hugepages = 24580

Reboot the server.

3) Verify the HugePages settings:

->cat /proc/sys/vm/nr_hugepages

->grep Huge /proc/meminfo
HugePages_Total:   24580
HugePages_Free:    16212
HugePages_Rsvd:    16209
Hugepagesize:       2048 kB

Now your HugePages are configured; they will be used when your database instance is started.

But hold on: the problem is that if there are inadequate huge pages left in the OS when your instance starts up, Oracle will silently fall back to normal memory paging.

To overcome this, Oracle 11gR2 has a parameter called use_large_pages, which has two values, "true" and "only".

With use_large_pages=true, if there are fewer hugepages than the total allocated SGA, Oracle originally wrote a warning message to the alert log and went on with normal pages, i.e. did not use huge pages at all.

But the default behavior has since changed: now, with use_large_pages=true and fewer hugepages than the SGA, Oracle will allocate part of the SGA with huge pages and the rest with normal 4k pages. In the alert log it will look like:

Specified value of sga_max_size is too small, bumping to 94220845056

****************** Large Pages Information *****************

Total Shared Global Region in Large Pages = 84 GB (95%)

Large Pages used by this instance: 42881 (84 GB)

Large Pages unused system wide = 119 (238 MB) (alloc incr 256 MB)

Large Pages configured system wide = 43000 (84 GB) Large Page size = 2048 KB

RECOMMENDATION: Total Shared Global Region size is 88 GB. For optimal performance, prior to the next instance restart increase the number of unused Large Pages by atleast 1929 2048 KB Large Pages (3858 MB) system wide to get 100% of the Shared Global Region allocated with Large pages


Did you observe? Part of the SGA has been allocated from huge pages and the rest from normal OS pages.

With use_large_pages=only, Oracle will check during startup whether there are enough preallocated large pages, and if there are not, it will not proceed with starting up, giving a message like:

Specified value of sga_max_size is too small, bumping to 94220845056

****************** Large Pages Information *****************

Parameter use_large_pages = ONLY

Large Pages unused system wide = 43000 (84 GB) (alloc incr 256 MB)

Large Pages configured system wide = 43000 (84 GB) Large Page size = 2048 KB

ERROR: Failed to allocate shared global region with large pages, unix errno = 12.

Aborting Instance startup.

ORA-27137: unable to allocate Large Pages to create a shared memory segment ACTION: Total Shared Global Region size is 88 GB. Increase the number of unused large pages to atleast 44932 (88 GB) to allocate 100% Shared Global Region with Large Pages.



References:-

  • MOS 361323.1 – HugePages on Linux
  • MOS 361468.1 – HugePages on 64-bit Linux
  • MOS 401749.1 – Script to calculate the recommended number of huge pages
  • MOS 361468.1 – Troubleshooting HugePages