Tuesday, April 30, 2013

High 'direct path read' waits in 11g

Symptoms:

High 'direct path read' waits are seen when querying a large table.

Cause:

In 10g, serial scans of "large" tables went through the buffer cache by default. That is no longer the case in 11g: the decision to read via direct path or through the cache is based on the size of the table, the size of the buffer cache, and various other statistics.
Direct path reads are faster than scattered reads and have less impact on other processes because they avoid latches.
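
To confirm that a system is affected, you can check the cumulative waits for this event, for example:

SQL> select event, total_waits, time_waited
     from v$system_event
     where event = 'direct path read';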

Solution:

When using Automatic Shared Memory Management (ASMM) with the buffer cache low limit set well below the normal workload requirement (typically right after startup), 11g might choose serial direct path read scans for large tables that do not fit in the buffer cache. Once ASMM grows the buffer cache in response to demand, 11g may stop doing serial direct path reads for those same tables. To avoid this behavior, note the buffer cache and shared pool requirements of a normal workload and set the corresponding low limits in the spfile/pfile close to those values:
db_cache_size
shared_pool_size
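
For example (the sizes below are illustrative; use values observed under your normal workload):

SQL> alter system set db_cache_size=2g scope=spfile;
SQL> alter system set shared_pool_size=1g scope=spfile;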


From MOS Note: [ID 793845.1]

Friday, April 26, 2013

DBUA failed in bringing up the database. Either the database is not running from Oracle Home or the correct pfile was not found

Symptoms:

When using DBUA to upgrade a database, you might get a pop-up asking for the location of the init.ora, with a message like this:
"Either the database is not running from Oracle Home or the correct pfile was not found" 

Cause:
An earlier upgrade attempt failed and left a stale DBUA state file (Welcome_<dbname>.txt) behind in the DBUA logs directory.

Solution:
cd /u01/app/oracle/cfgtoollogs/dbua/logs

ls
PreUpgradeResults.html  sqls.log  trace.log  Welcome_mydb.txt

# remove the leftover state file, then rerun DBUA
rm Welcome_mydb.txt

From MOS Note: [ID 1303325.1]

Wednesday, April 24, 2013

Set up Active Data Guard using DGMGRL

1. Set apply off
DGMGRL> edit database 'DBAPLC' set state=apply-off;
Succeeded.
DGMGRL> show database 'DBAPLC'

Database - DBAPLC

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-OFF
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    DBAPLC

Database Status:
SUCCESS

2. Open Standby database
SQL> alter database open;

Database altered.

3. Set apply on
DGMGRL> edit database 'DBAPLC' set state=apply-on;
Succeeded.
DGMGRL> show database 'DBAPLC'

Database - DBAPLC

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: ON
  Instance(s):
    DBAPLC

Database Status:
SUCCESS
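
To confirm Active Data Guard is in effect, you can also check the open mode on the standby from SQL*Plus; it should report READ ONLY WITH APPLY:

SQL> select open_mode from v$database;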

Determining CPU/core counts on Linux Server

Here are the commands that can be used to determine the physical CPU/core and logical CPU counts on a Linux server.

1. Determining the number of logical processors seen by the OS:

cat /proc/cpuinfo | grep ^processor | wc -l

2. Determining the number of physical CPUs (sockets) on a machine (sort -u rather than uniq, since identical "physical id" lines are not necessarily adjacent):

cat /proc/cpuinfo | grep "physical id" | sort -u | wc -l

3. Determining the number of cores per CPU:

cat /proc/cpuinfo | grep "cores" | head -1

4. Determining if Hyperthreading is turned on:

a.  First determine the number of siblings per CPU.  Siblings are the number of logical processors sharing the same physical processor:

cat /proc/cpuinfo | grep "siblings" | head -1

b. If the number of cores per CPU (from #3 above) equals the number of siblings, Hyperthreading is off. If the number of siblings is twice the number of cores, Hyperthreading is on.
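
The checks above can be combined into a small script; a sketch that assumes a standard /proc/cpuinfo layout:

#!/bin/bash
# Summarize CPU topology from /proc/cpuinfo
logical=$(grep -c '^processor' /proc/cpuinfo)
sockets=$(grep 'physical id' /proc/cpuinfo | sort -u | wc -l)
cores=$(grep 'cpu cores' /proc/cpuinfo | head -1 | awk '{print $4}')
siblings=$(grep 'siblings' /proc/cpuinfo | head -1 | awk '{print $3}')
echo "Logical processors: $logical"
echo "Physical CPUs:      $sockets"
echo "Cores per CPU:      $cores"
echo "Siblings per CPU:   $siblings"
if [ "$siblings" -gt "$cores" ]; then
  echo "Hyperthreading: ON"
else
  echo "Hyperthreading: OFF"
fi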

Tuesday, April 23, 2013

ORA-39095: Dump file space has been exhausted

Problem Description:

A Data Pump export failed with the following error:
ORA-39095: Dump file space has been exhausted: Unable to allocate 4096 bytes


Cause of the Error:

The %U specification in the dump file name can expand to at most 99 files.
If all 99 files have been generated before the export completes, the job raises ORA-39095.
 

Solution of the Error:

Use a bigger FILESIZE value so that the export completes within 99 dump files.
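
For example (the directory and dump file names below are illustrative):

expdp system DIRECTORY=dpump_dir DUMPFILE=expfull_%U.dmp FILESIZE=20G FULL=Y LOGFILE=expfull.log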

Add a 3rd node to Oracle RAC 11gR2

1 Verify the prerequisites with cluvfy
 

Log in as the grid user on one of the existing cluster nodes, for example rac1, and run the following commands.
 

cluvfy stage -post hwos -n rac3

cluvfy comp peer -refnode rac1 -n rac3

cluvfy stage -pre nodeadd -n rac3

cluvfy stage -pre crsinst -n rac3


2 Ignore error due to Bug #10310848 
If your shared storage is ASM using ASMLib, you may get an error similar to the following:
 
ERROR:
PRVF-5449 : Check of Voting Disk location "ORCL:CRS1(ORCL:CRS1)" failed on the following nodes:

        rac3:No such file or directory

PRVF-5431 : Oracle Cluster Voting Disk configuration check failed

To ignore it:

IGNORE_PREADDNODE_CHECKS=Y 
export IGNORE_PREADDNODE_CHECKS

3 Extend Clusterware
As the grid user on an existing node, run addNode.sh from the Grid Infrastructure home:

$GRID_HOME/oui/bin/addNode.sh "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"

Execute the following as root on the new node:
/u01/app/oraInventory/orainstRoot.sh
/u01/app/crs/11.2.0.2/root.sh

Verify the new node:
cluvfy stage -post crsinst -n rac3
cluvfy stage -post nodeadd -n rac3

4 Extend Oracle Database Software

$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"
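
When the database-home addNode.sh finishes, it prompts you to run root.sh from the database home on the new node (the path below assumes a typical 11.2 layout):

/u01/app/oracle/product/11.2.0/db_1/root.sh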

5 Use DBCA to add a database instance on the new node
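
DBCA can also be run silently for this step; a minimal sketch, assuming the database name and instance name below:

dbca -silent -addInstance -nodeList rac3 -gdbName mydb -instanceName mydb3 -sysDBAUserName sys -sysDBAPassword password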