Tuesday, September 26, 2017

Convert snapshot standby database to Physical standby database: Dataguard 11gR2


Step 1
SQL> shutdown immediate;

Step 2
SQL> startup nomount

Step 3
SQL> alter database mount;

Step 4
SQL>  ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Step 5
SQL> shutdown immediate;

Step 6
SQL> startup nomount;
SQL> alter database mount standby database;

Step 7
SQL> alter database open;
SQL> alter database recover managed standby database disconnect;

Note: opening the standby read-only before starting managed recovery puts it in real-time query mode (Active Data Guard, separately licensed); if you do not use Active Data Guard, leave the database mounted and start managed recovery directly.

Step 8
set pagesize 1000
set linesize 1000
col HOST_NAME for a20

select INSTANCE_NAME,OPEN_MODE,DATABASE_ROLE from v$instance,v$database;
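To confirm the conversion and that redo apply is running, a quick check using standard 11gR2 views:

SQL> select database_role, open_mode from v$database;
SQL> select process, status, sequence# from v$managed_standby;

DATABASE_ROLE should now show PHYSICAL STANDBY, and an MRP0 process should be applying logs.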

Monday, May 1, 2017

OCR and Voting disk recovery

su - grid
. oraenv
+ASM1 or +ASM2


Check the VD Location

crsctl query css votedisk

Check the OCR Location
cat /etc/oracle/ocr.loc
ocrcheck -config

asmcmd spget
asmcmd lsdg
asmcmd lsdsk -kp -G CRSDG78

or
asmcmd -p
lsdsk -kp -G CRSDG78

Log in as root:
#. oraenv
+ASM1 or +ASM2

To view the backup files
ocrconfig -showbackup

To take an OCR/VD backup (the backup contains both the OCR and the voting disk):
ocrconfig -manualbackup

To view the backup files
ocrconfig -showbackup manual
or
ocrconfig -showbackup (this will display both auto and manual backup)

Check the below logfiles when a cluster issue occurs
---------------------------------------------------
1. Cluster Alertlog File Path/Filename: (locate alertnodename.log)
/u01/app/11.2.0/grid/log/racnode7/alertracnode7.log
2. CSSD Logfile Path/Filename: (locate ocssd.log)
/u01/app/11.2.0/grid/log/racnode7/cssd/ocssd.log
3. CRSD Logfile Path/Filename: (locate crsd.log)
/u01/app/11.2.0/grid/log/racnode7/crsd/crsd.log
4. ASM Instance Alert Logfile Path/Filename: (locate alert_+ASM)
/u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
5. OS Logfile.
ls -lrt /var/log/messages*


Do not use in production (the below wipes the CRS disk header, only for simulating OCR/voting disk loss in a lab):
dd if=/dev/zero of=/dev/sdc2 count=100 bs=1024

/etc/init.d/oracleasm scandisks

crsctl stat res -t

crsctl stop crs -f
*********************************************************************************************

1. Locate the latest automatic OCR backup
#. oraenv
+ASM1/+ASM2
#ocrconfig -showbackup

2. Stop CRS on All Nodes (As a root User)
#. oraenv
+ASM1/+ASM2
#crsctl stop crs -f

3. Start the CRS stack in exclusive mode (Only on ONE Node) (As a root User)

On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without a voting disk and without the CRS daemon process (crsd.bin) running.

# crsctl start crs -excl -nocrs (Only on ONE Node)

4. Label the CRS disk for ASMLIB use (Only on ONE Node)
/etc/init.d/oracleasm createdisk DISKNEW2 /dev/sdc2 (/dev/sdc2 would be different in your case)

5. Create the CRS diskgroup via sqlplus (Only on ONE Node)(As a grid User)
su - grid
. oraenv
+ASM1 or +ASM2

$ sqlplus / as sysasm
SQL> show parameter asm_diskstring
SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*';
SQL> create diskgroup CRSDG78 external redundancy disk '/dev/oracleasm/disks/DISKNEW2' attribute 'COMPATIBLE.ASM' = '11.2';
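Before moving on, it is worth confirming that the new diskgroup is mounted (standard ASM view):

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;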

6. Restore the latest OCR backup (As a root User)
#ocrconfig -restore <backuppath/filename>

7. Recreate the Voting file (As a grid/root User)
crsctl replace votedisk +CRSDG78
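The voting file can then be verified with the same command used at the start:

crsctl query css votedisk

It should list one voting file located on the recreated diskgroup.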

8. Recreate the SPFILE for ASM (optional)(Only on ONE Node) (As a grid User)
su - grid
. oraenv
+ASM1 or +ASM2

vi /home/grid/asm_pfile.ora

*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'


sqlplus / as sysasm

SQL> create spfile='+CRSDG78' from pfile='/home/grid/asm_pfile.ora';
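To confirm that ASM will pick up the new SPFILE on the next restart, check the registered location (as the grid user):

$ asmcmd spget

It should return a path inside +CRSDG78.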

9. Shutdown CRS (Only on ONE Node) (As a root User)

crsctl stop crs -f

10. Rescan ASM disks (If using ASMLIB rescan all ASM disks on each node As the root user)

/etc/init.d/oracleasm scandisks

11. Start CRS (On All Nodes) (As a root User)

crsctl start crs

12. Verify CRS (Only on ONE Node) (As a root/grid User)

crsctl stat res -t



Tuesday, April 25, 2017

Rename the Database with and without NID

Check the below parameters and values.

show parameter db_name    (shows the value set in the spfile/pfile)
or
select name from v$database;

alter database backup controlfile to trace as '/oradata/dbname/c1text.sql';
shut immediate
mv /oradata/dbname/control01.ctl /oradata/dbname/control01.ctl_old

mv /oradata/dbname/control02.ctl /oradata/dbname/control02.ctl_old

startup nomount
alter system set db_name='test' scope=spfile;

select name,value from v$spparameter where name='db_name';

cp -p /oradata/dbname/c1text.sql /oradata/dbname/c1text.sql_bkp

vi /oradata/dbname/c1text.sql

There will be two sets of controlfile creation scripts in the SQL file.
Delete the complete 1st set, then make the changes below in the first line of the remaining backup controlfile SQL (an example of the edited line follows this list):

reuse ==>set
olddb ==>newdb
noresetlogs==>resetlogs
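For illustration, with the old name OLDDB and the new name TEST used above, the edited first line would look like the below (NOARCHIVELOG or ARCHIVELOG, whichever your trace shows):

CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS NOARCHIVELOG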


SQL>@ /oradata/dbname/c1text.sql
SQL>alter database open RESETLOGS;

select file#,name from v$datafile;
select file#,name from v$tempfile;

Querying v$database will now show the renamed database.

Rename DB With NID 

select tablespace_name,status from dba_tablespaces;
*********************************************
shut immediate
startup mount
select name,open_mode,log_mode,dbid from v$database;
##nid target=/ DBNAME=NEWTEST    (without SETNAME=Y, NID would also generate a new DBID)
nid target=/ DBNAME=NEWTEST SETNAME=Y
startup nomount
alter system set db_name='NEWTEST' scope=spfile;
shut immediate
startup
*********************************************
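After the rename, a quick verification:

SQL> select name, dbid, open_mode from v$database;

With SETNAME=Y the DBID stays the same; only NAME changes.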


Friday, April 14, 2017

10g database with 11g Grid infrastructure

11gR2 Grid Infrastructure with lower version DB

I had a situation where I needed to run 11gR2 Grid Infrastructure with a lower-version (10gR2) database, and I faced issues starting the database after the 11gR2 Grid install.

Steps I have done before starting the DB

1. Install Grid Infrastructure 11.2.0.3
2. Install 10.2.0.1
3. Apply the patch set to take 10.2.0.1 to 10.2.0.4


DB startup was giving the below error:

[oracle@xd3cfp001 dbs]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Nov 17 14:28:08 2011

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup nomount
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit

This is because the nodes were not pinned in the Grid Infrastructure (pinning is required to run a pre-11.2 database).

atsdb1.ea.com[ATS4A]$ /opt/oracle/grid/11.2.0/bin/olsnodes -t -n
atsdb1        1       Unpinned
atsdb2        2       Unpinned


To pin the nodes, I had to run the below command as root:


/opt/oracle/grid/11.2.0/bin/crsctl pin css -n atsdb1  atsdb2


atsdb2.ea.com[APF11G]$  /opt/oracle/grid/11.2.0/bin/olsnodes -t -n
atsdb1        1       Pinned
atsdb2        2       Pinned

create database AP10G
    USER SYS IDENTIFIED BY dvstmq4
    USER SYSTEM IDENTIFIED BY dvstmq4
    MAXLOGFILES 10
    MAXLOGMEMBERS 2
    MAXDATAFILES 200
    MAXINSTANCES 1
    MAXLOGHISTORY 1
logfile
        group 1 ('/oradata_dbupgrade/APF11G/redo/redo1a.log','/oradata_dbupgrade/APF11G/redo/redo1b.log') size 400M,
        group 2 ('/oradata_dbupgrade/APF11G/redo/redo2a.log','/oradata_dbupgrade/APF11G/redo/redo2b.log') size 400M,
        group 3 ('/oradata_dbupgrade/APF11G/redo/redo3a.log','/oradata_dbupgrade/APF11G/redo/redo3b.log') size 400M,
        group 4 ('/oradata_dbupgrade/APF11G/redo/redo4a.log','/oradata_dbupgrade/APF11G/redo/redo4b.log') size 400M
datafile  '/oradata_dbupgrade/APF11G/data/system01.dbf' size 1000M
extent management local
sysaux datafile '/oradata_dbupgrade/APF11G/data/sysaux01.dbf' size 1000M
undo tablespace UNDOTBS1
datafile '/oradata_dbupgrade/APF11G/data/undotbs01.dbf' size 5000M
default temporary tablespace temp tempfile '/oradata_dbupgrade/APF11G/data/temp01.dbf' size 5000M
character set  AL32UTF8
national character set AL16UTF16;

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapwAP10GA password=oracle entries=5

alter database add logfile thread 2
group 5 ('/oradata_dbupgrade/APF11G/redo/redo5a.log','/oradata_dbupgrade/APF11G/redo/redo5b.log') size 400M,
group 6 ('/oradata_dbupgrade/APF11G/redo/redo6a.log','/oradata_dbupgrade/APF11G/redo/redo6b.log') size 400M,
group 7 ('/oradata_dbupgrade/APF11G/redo/redo7a.log','/oradata_dbupgrade/APF11G/redo/redo7b.log') size 400M,
group 8 ('/oradata_dbupgrade/APF11G/redo/redo8a.log','/oradata_dbupgrade/APF11G/redo/redo8b.log') size 400M;

create undo tablespace UNDOTBS2 datafile '/oradata_dbupgrade/APF11G/data/undotbs02.dbf' size 1000M;

From 10g DB home

srvctl add database -d AP10G -o $ORACLE_HOME

srvctl add instance -d AP10G -i AP10GA -n atsdb1
srvctl add instance -d AP10G -i AP10GB -n atsdb2
srvctl enable database -d AP10G
srvctl enable instance -d AP10G  -i AP10GA
srvctl enable instance -d AP10G  -i AP10GB

atsdb1.ea.com[APF11G]$ /opt/oracle/product/1020/bin/srvctl start instance -d AP10G -i AP10GA
atsdb1.ea.com[APF11G]$ /opt/oracle/product/1020/bin/srvctl start instance -d AP10G -i AP10GB
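As a final check from the 10g home, the database status can be queried; both instances should report as running:

srvctl status database -d AP10G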

Change the SCAN Name and Subnet

========================================================================

Change the SCAN Name and Subnet

========================================================================

Check the current configuration first (the crsctl/srvctl modify commands further below must be run as root):
---------------------------------------------------------
mydrdb5.ea.com[MYPD3A]$ srvctl config scan
SCAN name: mydrdb-scan, Network: 1/10.30.206.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /mydrdb-scan/10.30.206.52

mydrdb5.ea.com[MYPD3A]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1523
---------------------------------------------------------


mydrdb5.ea.com[MYPD3A]$ srvctl stop scan_listener

mydrdb5.ea.com[MYPD3A]$ srvctl stop scan

mydrdb5.ea.com[MYPD3A]$
mydrdb5.ea.com[MYPD3A]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running

mydrdb5.ea.com[MYPD3A]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
----------------------------------------------------------
$GRID_HOME/bin/crsctl modify type ora.scan_vip.type -attr "ATTRIBUTE=SCAN_NAME,DEFAULT_VALUE=mytestdb-scan"
$GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_SUBNET=10.30.207.0"
$GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_NETMASK=255.255.255.0"
$GRID_HOME/bin/srvctl modify scan -n mytestdb-scan
$GRID_HOME/bin/srvctl modify scan_listener -u
$GRID_HOME/bin/srvctl start scan
$GRID_HOME/bin/srvctl start scan_listener

Before running these commands, make sure every SCAN entry has been removed from /etc/hosts and that the SCAN name resolves from DNS.
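A quick way to confirm the DNS resolution (SCAN name from this example):

nslookup mytestdb-scan

It should return the SCAN VIP addresses (three in this example).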
----------------------------------------------------------
mydrdb5.ea.com[MYPD3A]$ cd $GRID_HOME
mydrdb5.ea.com[MYPD3A]$ pwd
/opt/oracle/grid/11.2.0

mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/crsctl modify type ora.scan_vip.type -attr "ATTRIBUTE=SCAN_NAME,DEFAULT_VALUE=mytestdb-scan"
mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_SUBNET=10.30.207.0"
mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_NETMASK=255.255.255.0"
mydrdb5.ea.com[MYPD3A]$
mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/srvctl modify scan -n mytestdb-scan


mydrdb5.ea.com[MYPD3A]$ srvctl config scan
SCAN name: mytestdb-scan, Network: 1/10.30.207.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /mytestdb-scan/10.30.207.157
SCAN VIP name: scan2, IP: /mytestdb-scan/10.30.207.158
SCAN VIP name: scan3, IP: /mytestdb-scan/10.30.207.156

mydrdb5.ea.com[MYPD3A]$ srvctl modify scan_listener -u

mydrdb5.ea.com[MYPD3A]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1523
mydrdb5.ea.com[MYPD3A]$

mydrdb5.ea.com[MYPD3A]$ srvctl start scan
mydrdb5.ea.com[MYPD3A]$ srvctl start scan_listener
mydrdb5.ea.com[MYPD3A]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node mydrdb6
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node mydrdb5
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node mydrdb5
mydrdb5.ea.com[MYPD3A]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node mydrdb6
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node mydrdb5
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node mydrdb5

========================================================================

But if only the SCAN name changes and the subnet stays the same, then:
========================================================================
$GRID_HOME/bin/srvctl modify scan -n mydrdb-scan
$GRID_HOME/bin/srvctl modify scan_listener -u

This achieves the same result as above, but only when the subnet is unchanged.

mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/srvctl modify scan -n mydrdb-scan

mydrdb5.ea.com[MYPD3A]$ srvctl config scan
SCAN name: mydrdb-scan, Network: 1/10.30.206.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /mydrdb-scan/10.30.206.157
SCAN VIP name: scan2, IP: /mydrdb-scan/10.30.206.158
SCAN VIP name: scan3, IP: /mydrdb-scan/10.30.206.156

mydrdb5.ea.com[MYPD3A]$ srvctl modify scan_listener -u

mydrdb5.ea.com[MYPD3A]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1523


========================================================================

Node Addition in Oracle 11g RAC

========================================================================

Node Addition in 11G RAC

========================================================================
Before adding a node or running any cluster command, we need to make sure the server-level configuration is the same on all nodes.
For example:
1) The public, private, and VIP IPs should be in the same subnets as on the existing nodes.
2) All required packages should be installed, and the kernel version should be the same on both servers.
3) All required kernel and system settings need to be completed.
(A quick parity check is sketched right after this list.)
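A minimal sketch of such a parity check, assuming passwordless ssh between the nodes (which addNode.sh needs anyway; hostnames are from this example):

for host in mydrdb5 mydrdb6
do
  echo "== $host =="
  ssh $host "uname -r; grep MemTotal /proc/meminfo"
done

Any mismatch reported here is worth fixing before running cluvfy.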

Now:

1) Start with a comparison of an existing node and the new node. This shows whether the setups are similar enough to continue.


mydrdb5.ea.com[MYPD3A]$ cluvfy comp peer -n mydrdb6 -refnode mydrdb5 -r 11gR2

If any of the comparisons fail, this command will not pass.

For example:

Compatibility check: User existence for "oracle" [reference node: mydrdb5]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  mydrdb6      oracle(1304)              oracle(110)               mismatched
User existence for "oracle" check failed



Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6


Check the system and rectify the error, then run the same command again; it should now complete with "Verification of peer compatibility was successful."


2) Validate whether the node can be added; if any error comes up, the -fixup option will suggest possible fixes.

mydrdb5.ea.com[]$ cluvfy stage -pre nodeadd -n mydrdb6 -fixup -verbose

It should end with the below:

Pre-check for node addition was successful.


3) Run addNode.sh from the Grid home first


mydrdb5.ea.com[MYPD3A]$ /opt/oracle/grid/11.2.0/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={mydrdb6}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={mydrdb6-v}"

Error that might come up in some cases:

Copying to remote nodes (Tuesday, November 20, 2012 12:14:05 AM PST)
........................................WARNING:Error while copying directory /opt/oracle/grid/11.2.0 with exclude file list '/tmp/OraInstall2012-11-20_12-13llExcludeFile.lst' to nodes 'mydrdb6'. [PRKC-PRCF-2015 : One or more commands were not executed successfully on one or more nodes : <null>]
----------------------------------------------------------------------------------
mydrdb6:
    PRCF-2023 : The following contents are not transferred as they are non-readable.
Directories:
  1) /opt/oracle/grid/11.2.0/gns
Files:
   1) /opt/oracle/grid/11.2.0/bin/orarootagent.bin
   2) /opt/oracle/grid/11.2.0/bin/crsd
   3) /opt/oracle/grid/11.2.0/bin/cssdagent.bin
   4) /opt/oracle/grid/11.2.0/bin/crfsetenv
and so on.




Fixing this issue:

Check the OS user oracle and put it into the same group on both servers; the copy can skip root-owned files when the oracle group differs between the nodes, or when files have root:root ownership.
Changing the ownership to root:dba (or root:oracle) resolves the issue.


Run the same addNode.sh again; it will end with the below:


Instantiating scripts for add node (Tuesday, November 20, 2012 12:01:37 PM PST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, November 20, 2012 12:01:39 PM PST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, November 20, 2012 12:03:24 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/opt/oracle/oraInventory/orainstRoot.sh' with root privileges on nodes 'mydrdb6'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/oraInventory/orainstRoot.sh #On nodes mydrdb6
/opt/oracle/grid/11.2.0/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/grid/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.
========================================================================

On the other node, run the below:
=======================================================================
[root@mydrdb6 oracle]# /opt/oracle/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /opt/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete.

------------------------------------------------------------------------

[root@mydrdb6 oracle]# /opt/oracle/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/oracle/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node mydrdb5, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@mydrdb6 oracle]#
[root@mydrdb6 oracle]#
========================================================================

Check the cluster resources; the new node's entries should show up as well.
========================================================================

mydrdb5.ea.com[MYPD3A]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora....N1.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.cvu        ora.cvu.type   ONLINE    ONLINE    mydrdb5
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora.mypd3.db ora....se.type OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    mydrdb5
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    mydrdb5
ora.ons        ora.ons.type   ONLINE    ONLINE    mydrdb5
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    mydrdb5
ora....SM1.asm application    OFFLINE   OFFLINE
ora....B5.lsnr application    ONLINE    ONLINE    mydrdb5
ora....db5.gsd application    OFFLINE   OFFLINE
ora....db5.ons application    ONLINE    ONLINE    mydrdb5
ora....db5.vip ora....t1.type ONLINE    ONLINE    mydrdb5
ora....SM2.asm application    OFFLINE   OFFLINE
ora....B6.lsnr application    ONLINE    ONLINE    mydrdb6
ora....db6.gsd application    OFFLINE   OFFLINE
ora....db6.ons application    ONLINE    ONLINE    mydrdb6
ora....db6.vip ora....t1.type ONLINE    ONLINE    mydrdb6


Check the post-node-addition validation.
mydrdb5.ea.com[MYPD3A]$ cluvfy stage -post nodeadd -n mydrdb6
It should end with the below:

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.
========================================================================

Pre database node addition validation
========================================================================

mydrdb5.ea.com[MYPD3A]$ cluvfy stage -pre dbinst -n mydrdb6 -r 11gR2
It failed for me with the below:

---------------------------------------------------
Membership check for user "oracle" in group "oracle" [as Primary] failed
Check failed on nodes:
        mydrdb6

--------------------------------------------------

ASM and CRS versions are compatible
Database Clusterware version compatibility passed

Pre-check for database installation was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6
mydrdb5.ea.com[MYPD3A]$

We can ignore this and proceed to add the database node.
========================================================================
mydrdb5.ea.com[MYPD3A]$ /opt/oracle/product/11.2.0/db_1/oui/bin/addNode.sh -silent CLUSTER_NEW_NODES={mydrdb6}

Saving inventory on nodes (Tuesday, November 20, 2012 12:31:55 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/product/11.2.0/db_1/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
mydrdb5.ea.com[MYPD3A]$

-----------------------------------------------------------
on mydrdb6:
-----------------------------------------------------------
[root@mydrdb6 oracle]# /opt/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

------------------------------------------------------
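Once the database home is copied, the instance for the new node still has to be registered and started. A sketch with hypothetical names for this cluster (database MYPD3, new instance MYPD3B on mydrdb6), assuming the redo thread and undo tablespace for the new instance already exist:

srvctl add instance -d MYPD3 -i MYPD3B -n mydrdb6
srvctl start instance -d MYPD3 -i MYPD3B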

Wednesday, April 12, 2017

Adding ASM Disk in Oracle 10g and 11g

Add ASM Disk

RACPD disk addition detailed work plan:

1) Run the below commands on all database nodes:

SQL> select name,log_mode,open_mode from v$Database;

NAME      LOG_MODE     OPEN_MODE
--------- ------------ ----------
RACPD     ARCHIVELOG READ WRITE

SQL> select inst_id, instance_name,instance_number,status,STARTUP_TIME from gv$instance;

   INST_ID INSTANCE_NAME    INSTANCE_NUMBER STATUS       STARTUP_T
---------- ---------------- --------------- ------------ ---------
         1 RACPD                       1 OPEN         28-JUL-11

SQL> select count(*) from v$recover_file;

  COUNT(*)
----------
         0

SQL> select distinct status,count(*) from v$datafile group by status;

STATUS    COUNT(*)
------- ----------
ONLINE         262
SYSTEM           3

2) Stop the database on all nodes
srvctl stop instance -d RACPD -i RACPD1
srvctl stop instance -d RACPD -i RACPD2
srvctl stop instance -d RACPD -i RACPD3
srvctl stop instance -d RACPD -i RACPD4



3) Verify that the ASM disks and their paths are visible from ASM

1) select name,GROUP_NUMBER,STATE,TOTAL_MB,FREE_MB from  v$asm_diskgroup;

2) /etc/init.d/oracleasm listdisks
3) /etc/init.d/oracleasm querydisk /dev/mpath/*p1
4) /etc/init.d/oracleasm querydisk /dev/oracleasm/disks/*
5) select name,GROUP_NUMBER,STATE,TOTAL_MB,FREE_MB from  v$asm_diskgroup;

4) Check the ASM disks

SQL> select GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,TOTAL_MB,FREE_MB,PATH from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU   TOTAL_MB    FREE_MB PATH
------------ ----------- ------- ------------ ---------- ---------- --------------------------------------------------
           0           3 CLOSED  PROVISIONED      512071          0 /dev/oracleasm/disks/DATA1DISK13
           0          11 CLOSED  PROVISIONED      512071          0 /dev/oracleasm/disks/DATA1DISK12

The two disks above should be visible on all the nodes.


5) Add the disks in ASM with rebalance power 0 and 1

SQL> ALTER DISKGROUP DATA1 ADD DISK '/dev/oracleasm/disks/DATA1DISK12'  name DATA1DISK12 REBALANCE POWER 0;

Diskgroup altered.

SQL>
SQL> ALTER DISKGROUP DATA1 ADD DISK '/dev/oracleasm/disks/DATA1DISK13' name DATA1DISK13  REBALANCE POWER 1;

Diskgroup altered.


6) Increase the rebalance power after some time, once the performance impact has been analyzed.
SQL> select GROUP_NUMBER,OPERATION,STATE,POWER from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER
------------ ----- ---- ----------
           1 REBAL RUN           1


After 20 min

SQL> ALTER DISKGROUP DATA1 REBALANCE power 5;

Diskgroup altered.

SQL> select * from  v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES
-----------
           1 REBAL RUN           7          7     536925     998761       2047
        225


SQL> select group_number, name, TOTAL_MB, FREE_MB from V$asm_disk_stat;

GROUP_NUMBER NAME                             TOTAL_MB    FREE_MB
------------ ------------------------------ ---------- ----------
           1 DATA1DISK1                         512071      82414
           1 DATA1DISK10                        512071      82419
           1 DATA1DISK3                         512071      82415
           1 DATA1DISK13                        512071     106868
           2 REDODISK1                           24575      14653
           1 DATA1DISK5                         512071      82416
           3 TEMP_0000                          409618       9367
           1 DATA1DISK9                         512071      82417
           1 DATA1DISK4                         512071      82417
           1 DATA1DISK6                         512071      82418
           1 DATA1DISK11                        512071      82417
           1 DATA1DISK12                        512071     106870
           1 DATA1DISK2                         512071      82417
           1 DATA1DISK8                         512071      82416

14 rows selected.


7) Start the database and release it to the application team.

srvctl start instance -d RACPD -i RACPD1
srvctl start instance -d RACPD -i RACPD2
srvctl start instance -d RACPD -i RACPD3
srvctl start instance -d RACPD -i RACPD4
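A final check before handing the database back:

srvctl status database -d RACPD

All four instances should report as running, and v$asm_operation should eventually return no rows once the rebalance completes.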

Wednesday, February 10, 2016

PuTTy color setting for different environments.

PURPOSE 

This document covers the detailed steps for configuring the PuTTY color settings for different environments.

SCOPE

This document is meant for everyone who works with PuTTY.


DETAILS


Here are the steps to configure the PuTTY colors for different environments.


Setting up the color coding in PuTTY:

To set up the color coding, open PuTTY, enter the name of the setting you want to create in the "Saved Sessions" text box (in this example we created three plus a default: Production, Non-prod, SMO), and then click "Save".

Thursday, June 25, 2015

Files that are useful in Oracle database administration

Useful Files In Linux

Path                          Description
----------------------------  ----------------------------------------------
/etc/passwd                   User settings
/etc/group                    Group settings for users
/etc/hosts                    Host name lookup information
/etc/sysctl.conf              Kernel parameters for Linux
/var/log/messages             System and error logs and messages
/etc/oratab                   Oracle-registered instances (DBCA)
/etc/fstab                    File system mount entries
/home/oracle/.bash_profile    Oracle user profile settings in Linux
/proc/meminfo                 To determine the RAM size
/etc/redhat-release           OS release information
/etc/security/limits.conf     Process and open-file limits
/etc/selinux/config           Enable or disable the SELinux security feature

Deinstall and cleanup Oracle 11g RAC instance

Deinstall and clean up an 11g RAC installation:


1) Shut down all the databases and cluster services on all the RAC nodes.
From root:
crsctl stop cluster -all
and
crsctl stop crs   (on both nodes)
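Before continuing, confirm on each node that the stack is really down (standard checks):

# crsctl check crs
# ps -ef | grep -E 'crsd|ocssd|evmd'

crsctl check crs should fail to contact the services, and no clusterware daemon processes should remain.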



2) As the Oracle user:

Run the below:

soatsdb1.com[GSOATS1]$ cd /opt/oracle/grid/11.2.0/deinstall

soatsdb1.com[GSOATS1]$ ls
bootstrap.pl  deinstall  deinstall.pl  deinstall.xml  jlib  readme.txt  response  sshUserSetup.sh

soatsdb1.com[GSOATS1]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/oracle/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/oracle/grid/11.2.0
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/oracle/product
Checking for existence of central inventory location /opt/oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/oracle/grid/11.2.0
The following nodes are part of this cluster: soatsdb1,soatsdb2
Checking for sufficient temp space availability on node(s) : 'soatsdb1,soatsdb2'

## [END] Install check configuration ##

Traces log file: /opt/oracle/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "soatsdb1"[null]
 >
soatsdb1-v.com
The following information can be collected by running "/sbin/ifconfig -a" on node "soatsdb1"
Enter the IP netmask of Virtual IP "10.30.207.150" on node "soatsdb1"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "10.30.207.150" is active
 >

Enter an address or the name of the virtual IP used on node "soatsdb2"[10.30.207.150]
 >

The following information can be collected by running "/sbin/ifconfig -a" on node "soatsdb2"
Enter the IP netmask of Virtual IP "soatsdb2-vip" on node "soatsdb2"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "soatsdb2-vip" is active
 >

Enter an address or the name of the virtual IP[]
 >


Network Configuration check config START

Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_check2012-02-02_11-59-38-PM.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /opt/oracle/oraInventory/logs/asmcadc_check2012-02-02_11-59-40-PM.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
ASM was not detected in the Oracle Home

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/oracle/grid/11.2.0
The cluster node(s) on which the Oracle home deinstallation will be performed are:soatsdb1,soatsdb2
Oracle Home selected for deinstall is: /opt/oracle/grid/11.2.0
Inventory Location where the Oracle home registered is: /opt/oracle/oraInventory
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2012-02-02_11-56-20-PM.out'
Any error messages from this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2012-02-02_11-56-20-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/oracle/oraInventory/logs/asmcadc_clean2012-02-02_11-59-55-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_clean2012-02-02_11-59-55-PM.log

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "soatsdb2".

/tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "soatsdb1".

/tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Press Enter after you finish running the above commands

<----------------------------------------


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Press Enter after you finish running the above commands

<----------------------------------------

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the local node : Done

Delete directory '/opt/oracle/grid/11.2.0' on the local node : Done

The Oracle Base directory '/opt/oracle/product' will not be removed on local node. The directory is in use by Oracle Home '/opt/oracle/product/11.2.0'.

Detach Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the remote nodes 'soatsdb2' : Done

Delete directory '/opt/oracle/grid/11.2.0' on the remote nodes 'soatsdb2' : Done

The Oracle Base directory '/opt/oracle/product' will not be removed on node 'soatsdb2'. The directory is in use by Oracle Home '/opt/oracle/product/11.2.0'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2012-02-02_11-56-11PM' on node 'soatsdb1'
Clean install operation removing temporary directory '/tmp/deinstall2012-02-02_11-56-11PM' on node 'soatsdb2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Oracle Clusterware is stopped and successfully de-configured on node "soatsdb2"
Oracle Clusterware is stopped and successfully de-configured on node "soatsdb1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/opt/oracle/grid/11.2.0' on the local node.
Successfully detached Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the remote nodes 'soatsdb2'.
Successfully deleted directory '/opt/oracle/grid/11.2.0' on the remote nodes 'soatsdb2'.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

soatsdb1.com[GSOATS1]$


3) Run the given command on

Node 2:

[root@soatsdb2 ~]# /tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'soatsdb2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'soatsdb2'
CRS-2673: Attempting to stop 'ora.crf' on 'soatsdb2'
CRS-2677: Stop of 'ora.mdnsd' on 'soatsdb2' succeeded
CRS-2677: Stop of 'ora.crf' on 'soatsdb2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'soatsdb2'
CRS-2677: Stop of 'ora.gipcd' on 'soatsdb2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'soatsdb2'
CRS-2677: Stop of 'ora.gpnpd' on 'soatsdb2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'soatsdb2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@soatsdb2 ~]#


4) Run the given command on

Node 1:


[root@soatsdb1 deinstall]# /tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp

CRS-5702: Resource 'ora.cssd' is already running on 'soatsdb1'
CRS-4000: Command Start failed, or completed with errors.
CSS startup failed with return code 1
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'soatsdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'soatsdb1'
CRS-2677: Stop of 'ora.crsd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'soatsdb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'soatsdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'soatsdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'soatsdb1'
CRS-2677: Stop of 'ora.cssd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'soatsdb1'
CRS-2677: Stop of 'ora.gipcd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'soatsdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'soatsdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'soatsdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@soatsdb1 deinstall]#


5) Run the below commands on all the nodes.

[root@soatsdb1 stage]# rm /etc/oracle/*
rm: cannot lstat `/etc/oracle/*': No such file or directory
[root@soatsdb1 stage]#
[root@soatsdb1 stage]# rm -f /etc/init.d/init.cssd
[root@soatsdb1 stage]# rm -f /etc/init.d/init.crs
[root@soatsdb1 stage]# rm -f /etc/init.d/init.crsd
[root@soatsdb1 stage]# rm -f /etc/init.d/init.evmd
[root@soatsdb1 stage]# rm -f /etc/rc2.d/K96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc2.d/S96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc3.d/K96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc3.d/S96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc5.d/K96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc5.d/S96init.crs
[root@soatsdb1 stage]# rm -Rf /etc/oracle/scls_scr
[root@soatsdb1 stage]# rm -f /etc/inittab.crs
[root@soatsdb1 stage]# ps -ef | grep crs
root      7139 34131  0 10:54 pts/5    00:00:00 grep crs
[root@soatsdb1 stage]# ps -ef | grep evm
root      7143 34131  0 10:54 pts/5    00:00:00 grep evm
[root@soatsdb1 stage]# ps -ef | grep css
root      7146 34131  0 10:54 pts/5    00:00:00 grep css
[root@soatsdb1 stage]# rm -f /var/tmp/.oracle
[root@soatsdb1 stage]# rm -f /tmp/.oracle
[root@soatsdb1 stage]#


6) No files should remain if the deinstall completed without any failure.

7) The deinstall also deletes all the contents of the Grid home.

8) Only the database ORACLE_HOME then needs to be removed with the rm command.

Data guard related issues and its fixes

If the error is "Warning: ORA-16789: standby redo logs not configured":

DGMGRL> show configuration

Configuration - abcprd

  Protection Mode: MaxPerformance
  Databases:
    fc_abcprd - Primary database
      Warning: ORA-16789: standby redo logs not configured

    dr_abcprd - Physical standby database
      Error: ORA-16525: the Data Guard broker is not yet available

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

DGMGRL>

Add standby redo log groups on the primary db as well as the DR site (they should be the same size as the online redo logs):

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo04.log' size 50M;
SQL>  ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo05.log' size 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo06.log' size 50M;

And on the DR site:

SQL> alter database recover managed standby database cancel;

SQL>  ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo04.log' size 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo05.log' size 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo06.log' size 50M;

After the above steps completion start DR in managed recovery:

SQL> alter database recover managed standby database disconnect from session;

Now run dgmgrl on the primary:

DGMGRL>  show configuration;

Configuration - abcprd

  Protection Mode: MaxPerformance
  Databases:
    fc_abcprd - Primary database
    dr_abcprd - Physical standby database (disabled)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

If the below error appears after the above steps, then on the primary site:

Warning: ORA-16826: apply service state is inconsistent with the DelayMins property

DGMGRL> remove database DR_ABCPRD;
Removed database "dr_abcprd" from the configuration
DGMGRL> add database DR_ABCPRD as connect identifier is  DR_ABCPRD maintained as physical;
Database "dr_abcprd" added

Then run:

DGMGRL> show configuration

It should not show any error message after that. If you are still facing any issue, please leave me a message and I will try to help you.


Regards,
Amaresh

Wednesday, June 24, 2015

Script to list missing and INVALID Objects in the database

REM Script to list missing and INVALID Objects in the database
REM
REM      MISSING.SQL                                                  
REM
REM      This script lists all objects that are invalid or missing a package body
REM
REM      It should be run as SYS or SYSTEM
REM

set pagesize 0
set linesize 120
set feedback off
set trimspool on
set termout on

spool missing.txt

select A.Owner Oown,
       A.Object_Name Oname,
       A.Object_Type Otype,
       'Miss Pkg Body' Prob
  from DBA_OBJECTS A
 where A.Object_Type = 'PACKAGE'
   and A.Owner not in ('SYS','SYSTEM')
   and not exists
        (select 'x'
           from DBA_OBJECTS B
          where B.Object_Name = A.Object_Name
            and B.Owner = A.Owner
            and B.Object_Type = 'PACKAGE BODY')
union
select Owner Oown,
       Object_Name Oname,
       Object_Type Otype,
       'Invalid Obj' Prob
  from DBA_OBJECTS
 where Owner not in ('SYS','SYSTEM')
   and Status != 'VALID'
 order by 1,4,3,2
/
spool off
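The script only reports the problems; to actually recompile the invalid objects, the standard Oracle-supplied utility can be run afterwards as SYS:

SQL> @?/rdbms/admin/utlrp.sql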

Friday, September 26, 2014

ERROR :: ORA-04030: out of process memory when trying to allocate 16328 bytes

Issue: ORA-04030: out of process memory when trying to allocate 16328 bytes (koh-kghu call ,pl/sql vc2)

Following ORA-04030 error is encountered every time when the PGA allocation reaches 15GB:


The incident trace shows 15G used by pl/sql:
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
---------------------------------------
100%   15 GB, 1008569 chunks: "pl/sql vc2                "  PL/SQL
        koh-kghu call   ds=fffffc7ffc6f51f8  dsprt=c715710
0%   15 MB, 15763 chunks: "free memory               "
        pga heap        ds=c715710  dsprt=0


This is due to bug 14119856, where the real-free allocator is used even though pga_aggregate_target is set to more than 16GB.
Use the below query to check whether the real-free allocator is in use:

SQL> col name format a30
SQL> col cur_val format a20
SQL> select i.ksppinm name, v.ksppstvl cur_val, v.ksppstdf default_val, v.ksppstvf
       from x$ksppi i, x$ksppcv v
      where i.indx = v.indx
        and i.ksppinm in ('_realfree_heap_pagesize_hint', '_use_realfree_heap');

NAME                           CUR_VAL              DEFAULT_V   KSPPSTVF
------------------------------ -------------------- --------- ----------
_realfree_heap_pagesize_hint   65536                TRUE               0
_use_realfree_heap             TRUE                 TRUE               0


Technique 1:


Step 1:
Restart the database and the server to clear the issue,
or
change the upper limit at either the OS level or the database level:


Change the vm.max_map_count at the OS level:

as the root user,
# cat /proc/sys/vm/max_map_count
# sysctl -w vm.max_map_count=200000   (for example)

OR at database level,
Adjust the realfree heap pagesize within the database by setting the following parameters in the init/spfile and restarting the database.

_use_realfree_heap=TRUE
_realfree_heap_pagesize_hint = 262144
- OR -
Use Workaround:

 Set "_use_realfree_heap=false" and restart database instance.

Or

Apply patch 14119856 if available for your platform and Oracle version, or request a one-off patch.

Reference: MOS note Doc ID 1506315.1. Thanks for taking the time to read this post.

Thursday, September 25, 2014

ORA-00119: invalid specification for system parameter LOCAL_LISTENER


ERROR  ::   ORA-00119: invalid specification for system parameter LOCAL_LISTENER

Issue: ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_DATABASE NAME'

Cause: The LOCAL_LISTENER parameter is set in the spfile or pfile, but the corresponding entry has not been made in the TNSNAMES.ora file. To avoid the above error we have two options, as per my understanding (if you have another, please comment and share your solution).

Technique 1:

 Step 1: Make an entry in the tnsnames.ora as mentioned below
LISTENER_<NAME/DATABASE NAME> =
  (ADDRESS = (PROTOCOL = TCP)(HOST = my.host.com)(PORT = 1521))

Where LISTENER_<NAME/DATABASE NAME>  is the parameter value for LOCAL_LISTENER which is mentioned in pfile/spfile
PORT is the listener port you are using.
Also there is no need to modify the pfile/spfile.

Technique 2:

a) If you are using an spfile to start your database, create a pfile from it, remove the LOCAL_LISTENER parameter, and recreate the spfile from the pfile.
b) Start the database:
        sqlplus / as sysdba
        SQL> startup
c) Start the listener:
        lsnrctl start
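A third option that should also work (not in the original post, so treat it as an assumption; host and port below are placeholders): set the full listener address directly in the parameter, which avoids the tnsnames.ora lookup entirely.

SQL> alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=my.host.com)(PORT=1521))' scope=both;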


Friday, September 19, 2014

ORA-00845: MEMORY_TARGET not supported on this system and ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0

Database not coming up due to below errors (ORA-00845: MEMORY_TARGET not supported on this system and ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0)

Scene 1)
##########################################
SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system
SQL> shut abort
ORACLE instance shut down.
SQL> exit
Disconnected

Scene 2)
#########################################
SQL> startup nomount;
ORA-01078: failure in processing system parameters
ORA-00093: _shared_pool_reserved_min_alloc must be between 4000 and 0
SQL> shut abort;
ORACLE instance shut down.
SQL> exit
Disconnected

Scene 3)
#########################################
SQL> startup nomount
ORA-01078: failure in processing system parameters
ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 3072M
SQL>
SQL> exit
Disconnected
#########################################

These are all due to memory-related parameters not being specified as they should be.

Solution: 
Scene 1) This error comes when the /dev/shm filesystem is smaller than the MEMORY_TARGET parameter.
To get past it, either grow /dev/shm to be larger than MEMORY_TARGET, or set MEMORY_TARGET below the size of /dev/shm (a remount sketch follows below).
Another way is to use SGA_MAX_SIZE instead of MEMORY_TARGET.
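A minimal sketch of growing /dev/shm on the fly (the 4G value is only an example; pick something larger than MEMORY_TARGET, and persist the size in /etc/fstab):

# as root
mount -o remount,size=4G /dev/shm
df -h /dev/shm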

Scene 2) This error comes when SGA_MAX_SIZE is set but the aggregate of the individual memory parameters (db_cache_size, shared_pool_size, java_pool_size, streams_pool_size, etc.) is more than SGA_MAX_SIZE.
To get past it, either increase SGA_MAX_SIZE or decrease the aggregate value of the individual memory parameters.

Scene 3) This happens when both SGA_MAX_SIZE and MEMORY_TARGET have been defined by mistake and SGA_MAX_SIZE is greater than MEMORY_TARGET.
To get past it, remove (or set to 0) either the SGA_MAX_SIZE or the MEMORY_TARGET parameter.

[oracle@abcdbhost01 dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Sep 10 20:48:05 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>  startup nomount
ORACLE instance started.

Total System Global Area 3340451840 bytes
Fixed Size                  2232960 bytes
Variable Size             771755392 bytes
Database Buffers         2550136832 bytes
Redo Buffers               16326656 bytes
SQL>
