Tuesday, September 26, 2017

Convert snapshot standby database to Physical standby database: Dataguard 11gR2


Step 1
SQL> shutdown immediate;

Step 2
SQL> startup nomount

Step 3
SQL> alter database mount;

Step 4
SQL>  ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

Step 5
SQL> shutdown immediate;

Step 6
SQL> startup nomount;
SQL> alter database mount standby database;

Step 7
SQL> alter database open;
SQL> alter database recover managed standby database disconnect;

Step 8
set pagesize 1000
set linesize 1000
col HOST_NAME for a20

select INSTANCE_NAME,HOST_NAME,OPEN_MODE,DATABASE_ROLE from v$instance,v$database;
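
As an optional check (a suggestion, not part of the original steps), confirm that the managed recovery process is running and applying redo:

SQL> select process,status,sequence# from v$managed_standby where process like 'MRP%';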

Monday, May 1, 2017

OCR and Voting disk recovery

su - grid
. oraenv
+ASM1 or +ASM2


Check the VD Location

crsctl query css votedisk

Check the OCR Location
cat /etc/oracle/ocr.loc
ocrcheck -config

asmcmd spget
asmcmd lsdg
asmcmd lsdsk -kp -G CRSDG78

or
asmcmd -p
lsdsk -kp -G CRSDG78

Log in as root:
#. oraenv
+ASM1 or +ASM2

To view the backup files
ocrconfig -showbackup

To take an OCR/VD backup (the backup contains both the OCR and the Voting Disk)
ocrconfig -manualbackup

To view the backup files
ocrconfig -showbackup manual
or
ocrconfig -showbackup (this will display both automatic and manual backups)

Check the below logfiles when a cluster issue occurs
---------------------------------------------------
1. Cluster Alertlog File Path/Filename: (locate alertnodename.log)
/u01/app/11.2.0/grid/log/racnode7/alertracnode7.log
2. CSSD Logfile Path/Filename: (locate ocssd.log)
/u01/app/11.2.0/grid/log/racnode7/cssd/ocssd.log
3. CRSD Logfile Path/Filename: (locate crsd.log)
/u01/app/11.2.0/grid/log/racnode7/crsd/crsd.log
4. ASM Instance Alert Logfile Path/Filename: (locate alert_+ASM)
/u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
5. OS Logfile.
ls -lrt /var/log/messages*


Do not use in production (real-time) systems: the dd command below deliberately wipes the header of the ASM disk holding the OCR/Voting disk, to simulate the failure that the procedure below recovers from.
dd if=/dev/zero of=/dev/sdc2 count=100 bs=1024

/etc/init.d/oracleasm scandisks

crsctl stat res -t

crsctl stop crs -f
*********************************************************************************************

1. Locate the latest automatic OCR backup
#. oraenv
+ASM1/+ASM2
#ocrconfig -showbackup

2. Stop CRS on All Nodes (As a root User)
#. oraenv
+ASM1/+ASM2
#crsctl stop crs -f

3. Start the CRS stack in exclusive mode (Only on ONE Node) (As a root User)

On the node that has the most recent OCR backup, log on as root and start CRS in exclusive mode. This mode allows ASM to start and stay up without the presence of a voting disk and without the CRS daemon process (crsd.bin) running.

# crsctl start crs -excl -nocrs (Only on ONE Node)
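
To confirm that the exclusive-mode stack (including ASM) came up, a quick check you can run (not in the original notes) is:

# crsctl stat res -t -init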

4. Label the CRS disk for ASMLIB use (Only on ONE Node)
/etc/init.d/oracleasm createdisk DISKNEW2 /dev/sdc2 (/dev/sdc2 would be different in your case)

5. Create the CRS diskgroup via sqlplus (Only on ONE Node)(As a grid User)
su - grid
. oraenv
+ASM1 or +ASM2

$ sqlplus / as sysasm
SQL> show parameter asm_diskstring
SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*';
SQL> create diskgroup CRSDG78 external redundancy disk '/dev/oracleasm/disks/DISKNEW2' attribute 'COMPATIBLE.ASM' = '11.2';

6. Restore the latest OCR backup (As a root User)
#ocrconfig -restore <backuppath/filename>

7. Recreate the Voting file (As a grid/root User)
crsctl replace votedisk +CRSDG78
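
To verify that the voting file was recreated in the new diskgroup, re-run the same query used at the start of this note:

crsctl query css votedisk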

8. Recreate the SPFILE for ASM (optional)(Only on ONE Node) (As a grid User)
su - grid
. oraenv
+ASM1 or +ASM2

vi /home/grid/asm_pfile.ora

*.asm_power_limit=1
*.diagnostic_dest='/u01/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'


sqlplus / as sysasm

SQL> create spfile='+CRSDG78' from pfile='/home/grid/asm_pfile.ora';
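
To confirm that ASM now points at the new SPFILE location (assuming the CREATE SPFILE updated the GPnP profile, as it does in 11.2), the earlier asmcmd check can be repeated:

asmcmd spget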

9. Shutdown CRS (Only on ONE Node) (As a root User)

crsctl stop crs -f

10. Rescan ASM disks (If using ASMLIB, rescan all ASM disks on each node as the root user)

/etc/init.d/oracleasm scandisks

11. Start CRS (On All Nodes) (As a root User)

crsctl start crs

12. Verify CRS (Only on ONE Node) (As a root/grid User)

crsctl stat res -t



Tuesday, April 25, 2017

Rename the Database with and without NID

Check the below parameters and values.

show parameter db_name   (the value comes from the spfile)
or
select name from v$database;

alter database backup controlfile to trace as '/oradata/dbname/c1text.sql';
shut immediate
mv /oradata/dbname/control01.ctl /oradata/dbname/control01.ctl_old

mv /oradata/dbname/control02.ctl /oradata/dbname/control02.ctl_old

startup nomount
alter system set db_name='test' scope=spfile;

select name,value from v$spparameter where name='db_name';

cp -p /oradata/dbname/c1text.sql /oradata/dbname/c1text.sql_bkp

vi /oradata/dbname/c1text.sql

The trace file contains two complete sets of CREATE CONTROLFILE statements (a NORESETLOGS set and a RESETLOGS set). Delete one complete set and, in the first line of the remaining CREATE CONTROLFILE statement, make the below changes:

reuse ==> set
olddb ==> newdb
noresetlogs ==> resetlogs (needed only if you kept the NORESETLOGS set)
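
For illustration only (the database names are placeholders; 'test' matches the db_name set above), the first line changes from something like:

CREATE CONTROLFILE REUSE DATABASE "OLDDB" NORESETLOGS ARCHIVELOG

to:

CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS ARCHIVELOG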


SQL>@ /oradata/dbname/c1text.sql
SQL>alter database open RESETLOGS;

select file#,name from v$datafile;
select file#,name from v$tempfile;

You will now see the renamed database when you query v$database.

Rename DB With NID 

select tablespace_name,status from dba_tablespaces;
*********************************************
shut immediate
startup mount
select name,open_mode,log_mode,dbid from v$database;
## nid target=/ DBNAME=NEWTEST            (without SETNAME=Y this also changes the DBID)
nid target=/ DBNAME=NEWTEST SETNAME=Y     (changes only the database name; the DBID is kept)
startup nomount
alter system set db_name='NEWTEST' scope=spfile;
shut immediate
startup
*********************************************
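
If you run NID without SETNAME=Y (so the DBID also changes), the database cannot simply be started as above; after setting db_name it must be opened with RESETLOGS. A minimal sketch:

SQL> startup mount
SQL> alter database open resetlogs;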


Friday, April 14, 2017

10g database with 11g Grid infrastructure

11gR2 Grid Infrastructure with lower version DB

I had a situation where I needed 11gR2 Grid Infrastructure with a lower-version database (10gR2), and I faced issues starting the database after the 11gR2 Grid install.

Steps I have done before starting the DB

1. Install Grid Infrastructure 11.2.0.3
2. Install 10.2.0.1
3. Apply patch set on 10.2.0.1 to 10.2.0.4


DB startup was giving the below error:

[oracle@xd3cfp001 dbs]$ sqlplus '/as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Thu Nov 17 14:28:08 2011

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup nomount
ORA-29702: error occurred in Cluster Group Service operation
SQL> exit

This is because the nodes were not pinned in the Grid Infrastructure.

atsdb1.ea.com[ATS4A]$ /opt/oracle/grid/11.2.0/bin/olsnodes -t -n
atsdb1        1       Unpinned
atsdb2        2       Unpinned


To pin the nodes, I had to run the below command:


/opt/oracle/grid/11.2.0/bin/crsctl pin css -n atsdb1  atsdb2


atsdb2.ea.com[APF11G]$  /opt/oracle/grid/11.2.0/bin/olsnodes -t -n
atsdb1        1       Pinned
atsdb2        2       Pinned

create database AP10G
    USER SYS IDENTIFIED BY dvstmq4
    USER SYSTEM IDENTIFIED BY dvstmq4
    MAXLOGFILES 10
    MAXLOGMEMBERS 2
    MAXDATAFILES 200
    MAXINSTANCES 1
    MAXLOGHISTORY 1
logfile
        group 1 ('/oradata_dbupgrade/APF11G/redo/redo1a.log','/oradata_dbupgrade/APF11G/redo/redo1b.log') size 400M,
        group 2 ('/oradata_dbupgrade/APF11G/redo/redo2a.log','/oradata_dbupgrade/APF11G/redo/redo2b.log') size 400M,
        group 3 ('/oradata_dbupgrade/APF11G/redo/redo3a.log','/oradata_dbupgrade/APF11G/redo/redo3b.log') size 400M,
        group 4 ('/oradata_dbupgrade/APF11G/redo/redo4a.log','/oradata_dbupgrade/APF11G/redo/redo4b.log') size 400M
datafile  '/oradata_dbupgrade/APF11G/data/system01.dbf' size 1000M
extent management local
sysaux datafile '/oradata_dbupgrade/APF11G/data/sysaux01.dbf' size 1000M
undo tablespace UNDOTBS1
datafile '/oradata_dbupgrade/APF11G/data/undotbs01.dbf' size 5000M
default temporary tablespace temp tempfile '/oradata_dbupgrade/APF11G/data/temp01.dbf' size 5000M
character set  AL32UTF8
national character set AL16UTF16;
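
After a manual CREATE DATABASE, the data dictionary scripts still need to be run from the 10g home (a standard step not shown in these notes; catclust.sql is the RAC-specific one):

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/catclust.sql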

$ORACLE_HOME/bin/orapwd file=$ORACLE_HOME/dbs/orapwAP10GA password=oracle entries=5

alter database add logfile thread 2
group 5 ('/oradata_dbupgrade/APF11G/redo/redo5a.log','/oradata_dbupgrade/APF11G/redo/redo5b.log') size 400M,
group 6 ('/oradata_dbupgrade/APF11G/redo/redo6a.log','/oradata_dbupgrade/APF11G/redo/redo6b.log') size 400M,
group 7 ('/oradata_dbupgrade/APF11G/redo/redo7a.log','/oradata_dbupgrade/APF11G/redo/redo7b.log') size 400M,
group 8 ('/oradata_dbupgrade/APF11G/redo/redo8a.log','/oradata_dbupgrade/APF11G/redo/redo8b.log') size 400M;

create undo tablespace UNDOTBS2 datafile '/oradata_dbupgrade/APF11G/data/undotbs02.dbf' size 1000M;
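
Before the second instance can be started, redo thread 2 also has to be enabled (assuming a two-node RAC with the thread 2 log groups added above; note the CREATE DATABASE above specified MAXINSTANCES 1, which may need to be larger to support a second thread):

alter database enable public thread 2;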

From 10g DB home

srvctl add database -d AP10G -o $ORACLE_HOME

srvctl add instance -d AP10G -i AP10GA -n atsdb1
srvctl add instance -d AP10G -i AP10GB -n atsdb2
srvctl enable database -d AP10G
srvctl enable instance -d AP10G  -i AP10GA
srvctl enable instance -d AP10G  -i AP10GB

atsdb1.ea.com[APF11G]$ /opt/oracle/product/1020/bin/srvctl start instance -d AP10G -i AP10GA
atsdb1.ea.com[APF11G]$ /opt/oracle/product/1020/bin/srvctl start instance -d AP10G -i AP10GB
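
A quick check that both instances are registered and running (not in the original notes):

srvctl status database -d AP10G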

Change the SCAN Name and Subnet

========================================================================

Change the SCAN Name and Subnet

========================================================================

As the root user, run the below:
---------------------------------------------------------
mydrdb5.ea.com[MYPD3A]$ srvctl config scan
SCAN name: mydrdb-scan, Network: 1/10.30.206.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /mydrdb-scan/10.30.206.52

mydrdb5.ea.com[MYPD3A]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1523
---------------------------------------------------------


mydrdb5.ea.com[MYPD3A]$ srvctl stop scan_listener

mydrdb5.ea.com[MYPD3A]$ srvctl stop scan

mydrdb5.ea.com[MYPD3A]$
mydrdb5.ea.com[MYPD3A]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is not running

mydrdb5.ea.com[MYPD3A]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is not running
----------------------------------------------------------
$GRID_HOME/bin/crsctl modify type ora.scan_vip.type -attr "ATTRIBUTE=SCAN_NAME,DEFAULT_VALUE=mytestdb-scan"
$GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_SUBNET=10.30.207.0"
$GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_NETMASK=255.255.255.0"
$GRID_HOME/bin/srvctl modify scan_listener -u
$GRID_HOME/bin/srvctl start scan_listener

Before running this, make sure all SCAN entries are removed from /etc/hosts and that the SCAN name resolves correctly from DNS.
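
A quick way to confirm the DNS resolution (the hostname here is just the example SCAN name used above) is:

nslookup mytestdb-scan

It should return the SCAN IP addresses from DNS, typically three of them.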
----------------------------------------------------------
mydrdb5.ea.com[MYPD3A]$ cd $GRID_HOME
mydrdb5.ea.com[MYPD3A]$ pwd
/opt/oracle/grid/11.2.0

mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/crsctl modify type ora.scan_vip.type -attr "ATTRIBUTE=SCAN_NAME,DEFAULT_VALUE=mytestdb-scan"
mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_SUBNET=10.30.207.0"
mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/crsctl modify resource ora.net1.network -attr "USR_ORA_NETMASK=255.255.255.0"
mydrdb5.ea.com[MYPD3A]$
mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/srvctl modify scan -n mytestdb-scan


mydrdb5.ea.com[MYPD3A]$ srvctl config scan
SCAN name: mytestdb-scan, Network: 1/10.30.207.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /mytestdb-scan/10.30.207.157
SCAN VIP name: scan2, IP: /mytestdb-scan/10.30.207.158
SCAN VIP name: scan3, IP: /mytestdb-scan/10.30.207.156

mydrdb5.ea.com[MYPD3A]$ srvctl modify scan_listener -u

mydrdb5.ea.com[MYPD3A]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1523
mydrdb5.ea.com[MYPD3A]$

mydrdb5.ea.com[MYPD3A]$ srvctl start scan
mydrdb5.ea.com[MYPD3A]$ srvctl start scan_listener
mydrdb5.ea.com[MYPD3A]$ srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node mydrdb6
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node mydrdb5
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node mydrdb5
mydrdb5.ea.com[MYPD3A]$ srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node mydrdb6
SCAN Listener LISTENER_SCAN2 is enabled
SCAN listener LISTENER_SCAN2 is running on node mydrdb5
SCAN Listener LISTENER_SCAN3 is enabled
SCAN listener LISTENER_SCAN3 is running on node mydrdb5

========================================================================

But if the subnet is not changing, then only the following is needed:
========================================================================
$GRID_HOME/bin/srvctl modify scan -n mydrdb-scan
$GRID_HOME/bin/srvctl modify scan_listener -u

This does the same as above, but only when the subnet stays the same.

mydrdb5.ea.com[MYPD3A]$ $GRID_HOME/bin/srvctl modify scan -n mydrdb-scan

mydrdb5.ea.com[MYPD3A]$ srvctl config scan
SCAN name: mydrdb-scan, Network: 1/10.30.206.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /mydrdb-scan/10.30.206.157
SCAN VIP name: scan2, IP: /mydrdb-scan/10.30.206.158
SCAN VIP name: scan3, IP: /mydrdb-scan/10.30.206.156

mydrdb5.ea.com[MYPD3A]$ srvctl modify scan_listener -u

mydrdb5.ea.com[MYPD3A]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1523
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1523


========================================================================

Node Addition in Oracle 11g RAC

========================================================================

Node Addition in 11G RAC

========================================================================
Before adding a node or running any cluster command, we need to make sure that all server-level configuration is the same.
EX:
1) Public, Private, and VIP IPs should be in the same subnets.
2) All required packages should be installed, and the kernel version should be the same on both servers.
3) All required kernel and system settings need to be completed.
(A couple of quick checks are sketched below.)
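
A minimal sketch of such checks (run on both nodes; the package filter and node name are only examples):

uname -r                                     # kernel release must match on both servers
rpm -qa | grep -i oracleasm                  # example package comparison
cluvfy comp sys -n mydrdb6 -p crs -verbose   # OS, kernel, and package prerequisites for CRS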

Now:

1) Start with a comparison of an existing node and the new node. This will show whether the setups are similar and whether you can continue.


mydrdb5.ea.com[MYPD3A]$ cluvfy comp peer -n mydrdb6 -refnode mydrdb5 -r 11gR2

If some of the comparisons fail, this command will not pass.

An example is like the below:

Compatibility check: User existence for "oracle" [reference node: mydrdb5]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  mydrdb6      oracle(1304)              oracle(110)               mismatched
User existence for "oracle" check failed



Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6


Check the system, rectify the error, and run the same command again; it should now complete with verification successful.


2) Validate whether the node can be added; if any error comes up, the -fixup option will suggest fixes for possible issues.

mydrdb5.ea.com[]$ cluvfy stage -pre nodeadd -n mydrdb6 -fixup -verbose

It should end with the below:

Pre-check for node addition was successful.


3) Run addNode.sh from the Grid home first:


mydrdb5.ea.com[MYPD3A]$ /opt/oracle/grid/11.2.0/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={mydrdb6}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={mydrdb6-v}"

Error that might come in some cases:

Copying to remote nodes (Tuesday, November 20, 2012 12:14:05 AM PST)
........................................WARNING:Error while copying directory /opt/oracle/grid/11.2.0 with exclude file list '/tmp/OraInstall2012-11-20_12-13llExcludeFile.lst' to nodes 'mydrdb6'. [PRKC-PRCF-2015 : One or more commands were not executed successfully on one or more nodes : <null>]
----------------------------------------------------------------------------------
mydrdb6:
    PRCF-2023 : The following contents are not transferred as they are non-readable.
Directories:
  1) /opt/oracle/grid/11.2.0/gns
Files:
   1) /opt/oracle/grid/11.2.0/bin/orarootagent.bin
   2) /opt/oracle/grid/11.2.0/bin/crsd
   3) /opt/oracle/grid/11.2.0/bin/cssdagent.bin
   4) /opt/oracle/grid/11.2.0/bin/crfsetenv
and so on.




To fix this issue:

Check the OS user oracle and put it into the same group on both servers: the copy can skip root-owned files when the oracle group differs between the nodes, or when files have root:root ownership.
Change the ownership of those files to root:dba (or to the oracle user's group); only then is the issue resolved. (A minimal sketch is below.)
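
A minimal sketch of the check and fix (the group name and file are only examples, taken from the error listing above):

id oracle                                                     # compare uid, gid, and groups on both nodes
ls -l /opt/oracle/grid/11.2.0/bin/orarootagent.bin            # inspect ownership of a flagged file
chown root:dba /opt/oracle/grid/11.2.0/bin/orarootagent.bin   # example ownership change on the source node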


Run the same addNode.sh again; it will end with the below:


Instantiating scripts for add node (Tuesday, November 20, 2012 12:01:37 PM PST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, November 20, 2012 12:01:39 PM PST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, November 20, 2012 12:03:24 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/opt/oracle/oraInventory/orainstRoot.sh' with root privileges on nodes 'mydrdb6'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/oraInventory/orainstRoot.sh #On nodes mydrdb6
/opt/oracle/grid/11.2.0/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/grid/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.
========================================================================

On the other node, run the below:
=======================================================================
[root@mydrdb6 oracle]# /opt/oracle/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /opt/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete.

------------------------------------------------------------------------

[root@mydrdb6 oracle]# /opt/oracle/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/oracle/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node mydrdb5, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@mydrdb6 oracle]#
[root@mydrdb6 oracle]#
========================================================================

Check the cluster resource status; it should show the new node's entries as well.
========================================================================

mydrdb5.ea.com[MYPD3A]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora....N1.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.cvu        ora.cvu.type   ONLINE    ONLINE    mydrdb5
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora.mypd3.db ora....se.type OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    mydrdb5
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    mydrdb5
ora.ons        ora.ons.type   ONLINE    ONLINE    mydrdb5
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    mydrdb5
ora....SM1.asm application    OFFLINE   OFFLINE
ora....B5.lsnr application    ONLINE    ONLINE    mydrdb5
ora....db5.gsd application    OFFLINE   OFFLINE
ora....db5.ons application    ONLINE    ONLINE    mydrdb5
ora....db5.vip ora....t1.type ONLINE    ONLINE    mydrdb5
ora....SM2.asm application    OFFLINE   OFFLINE
ora....B6.lsnr application    ONLINE    ONLINE    mydrdb6
ora....db6.gsd application    OFFLINE   OFFLINE
ora....db6.ons application    ONLINE    ONLINE    mydrdb6
ora....db6.vip ora....t1.type ONLINE    ONLINE    mydrdb6


Check the post-node-addition validation:
mydrdb5.ea.com[MYPD3A]$ cluvfy stage -post nodeadd -n mydrdb6
It will end up with the below:

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.
========================================================================

Pre database installation validation for the new node
========================================================================

mydrdb5.ea.com[MYPD3A]$ cluvfy stage -pre dbinst -n mydrdb6 -r 11gR2
It failed for me with the below:

---------------------------------------------------
Membership check for user "oracle" in group "oracle" [as Primary] failed
Check failed on nodes:
        mydrdb6

--------------------------------------------------

ASM and CRS versions are compatible
Database Clusterware version compatibility passed

Pre-check for database installation was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6
mydrdb5.ea.com[MYPD3A]$

We can ignore this and proceed to add the database node.
========================================================================
mydrdb5.ea.com[MYPD3A]$ /opt/oracle/product/11.2.0/db_1/oui/bin/addNode.sh -silent CLUSTER_NEW_NODES={mydrdb6}

Saving inventory on nodes (Tuesday, November 20, 2012 12:31:55 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/product/11.2.0/db_1/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
mydrdb5.ea.com[MYPD3A]$

-----------------------------------------------------------
on mydrdb6:
-----------------------------------------------------------
[root@mydrdb6 oracle]# /opt/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

------------------------------------------------------

Wednesday, April 12, 2017

Adding ASM Disk in Oracle 10g and 11g

Add ASM Disk

RACPD disk addition detailed work plan:

1) Run the below commands on all database nodes:

SQL> select name,log_mode,open_mode from v$Database;

NAME      LOG_MODE     OPEN_MODE
--------- ------------ ----------
RACPD     ARCHIVELOG READ WRITE

SQL> select inst_id, instance_name,instance_number,status,STARTUP_TIME from gv$instance;

   INST_ID INSTANCE_NAME    INSTANCE_NUMBER STATUS       STARTUP_T
---------- ---------------- --------------- ------------ ---------
         1 RACPD                       1 OPEN         28-JUL-11

SQL> select count(*) from v$recover_file;

  COUNT(*)
----------
         0

SQL> select distinct status,count(*) from v$datafile group by status;

STATUS    COUNT(*)
------- ----------
ONLINE         262
SYSTEM           3

2) Stop the database on all nodes:
srvctl stop instance -d RACPD -i RACPD1
srvctl stop instance -d RACPD -i RACPD2
srvctl stop instance -d RACPD -i RACPD3
srvctl stop instance -d RACPD -i RACPD4



3) Verify that the ASM disk and its path are visible from ASM:

1) select name,GROUP_NUMBER,STATE,TOTAL_MB,FREE_MB from  v$asm_diskgroup;

2) /etc/init.d/oracleasm listdisks
3) /etc/init.d/oracleasm querydisk /dev/mpath/*p1
4) /etc/init.d/oracleasm querydisk /dev/oracleasm/disks/*
5) select name,GROUP_NUMBER,STATE,TOTAL_MB,FREE_MB from  v$asm_diskgroup;

4) Check the ASM disks:

SQL> select GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,TOTAL_MB,FREE_MB,PATH from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU   TOTAL_MB    FREE_MB PATH
------------ ----------- ------- ------------ ---------- ---------- --------------------------------------------------
           0           3 CLOSED  PROVISIONED      512071          0 /dev/oracleasm/disks/DATA1DISK13
           0          11 CLOSED  PROVISIONED      512071          0 /dev/oracleasm/disks/DATA1DISK12

The above 2 disks should be visible on all the nodes.


5) Add the disks in ASM with rebalance power 0 and 1:

SQL> ALTER DISKGROUP DATA1 ADD DISK '/dev/oracleasm/disks/DATA1DISK12'  name DATA1DISK12 REBALANCE POWER 0;

Diskgroup altered.

SQL>
SQL> ALTER DISKGROUP DATA1 ADD DISK '/dev/oracleasm/disks/DATA1DISK13' name DATA1DISK13  REBALANCE POWER 1;

Diskgroup altered.


6) Increase the rebalance power after some time, after analyzing the performance:
SQL> select GROUP_NUMBER,OPERATION,STATE,POWER from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER
------------ ----- ---- ----------
           1 REBAL RUN           1


After 20 min

SQL> ALTER DISKGROUP DATA1 REBALANCE power 5;

Diskgroup altered.

SQL> select * from  v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES
-----------
           1 REBAL RUN           7          7     536925     998761       2047
        225


SQL> select group_number, name, TOTAL_MB, FREE_MB from V$asm_disk_stat;

GROUP_NUMBER NAME                             TOTAL_MB    FREE_MB
------------ ------------------------------ ---------- ----------
           1 DATA1DISK1                         512071      82414
           1 DATA1DISK10                        512071      82419
           1 DATA1DISK3                         512071      82415
           1 DATA1DISK13                        512071     106868
           2 REDODISK1                           24575      14653
           1 DATA1DISK5                         512071      82416
           3 TEMP_0000                          409618       9367
           1 DATA1DISK9                         512071      82417
           1 DATA1DISK4                         512071      82417
           1 DATA1DISK6                         512071      82418
           1 DATA1DISK11                        512071      82417
           1 DATA1DISK12                        512071     106870
           1 DATA1DISK2                         512071      82417
           1 DATA1DISK8                         512071      82416

14 rows selected.


7) Start the database and release it to the application team:

srvctl start instance -d RACPD -i RACPD1
srvctl start instance -d RACPD -i RACPD2
srvctl start instance -d RACPD -i RACPD3
srvctl start instance -d RACPD -i RACPD4
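
A quick check that all instances came back (not part of the original plan):

srvctl status database -d RACPD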
