Friday, April 14, 2017

Node Addition in Oracle 11g RAC

========================================================================

Node Addition in 11g RAC

========================================================================
Before adding a node or running any cluster command, we need to make sure that the server-level configuration is the same on all nodes. For example:
1) The public and VIP addresses must be on the same subnet, and the private interconnect addresses must be on their own common subnet across both servers.
2) All required packages should be installed, and the kernel version should be the same on both servers.
3) All required kernel and system parameter settings need to be in place. A quick manual check of these is sketched below.
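A minimal sketch, assuming standard Linux tools, that the software owner is the oracle user, and that the host names below match your /etc/hosts entries:

# Run on each node (mydrdb5 and mydrdb6) and compare the output.
uname -r                                   # kernel version should match
id oracle                                  # same UID/GID and groups on both nodes
grep -i mydrdb /etc/hosts                  # public, private and VIP entries
rpm -q binutils gcc glibc libaio sysstat   # a sample of the required packages
/sbin/sysctl -a | grep -E 'shmmax|shmall|sem|file-max'   # kernel parameters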

Now:

1) Start with a comparison of an existing node and the new node. This shows whether the setup is similar and whether you can continue.


mydrdb5.ea.com[MYPD3A]$ cluvfy comp peer -n mydrdb6 -refnode mydrdb5 -r 11gR2

If any of the comparisons fail, this command will not pass.

For example:

Compatibility check: User existence for "oracle" [reference node: mydrdb5]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  mydrdb6      oracle(1304)              oracle(110)               mismatched
User existence for "oracle" check failed



Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6


Check the system, rectify the error, and run the same command again; it should now complete with the verification reported as successful.


2) Validate that the node can be added. If any errors come up, the -fixup option will suggest fixes for the possible issues.

mydrdb5.ea.com[]$ cluvfy stage -pre nodeadd -n mydrdb6 -fixup -verbose

It should end with the following:

Pre-check for node addition was successful.
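
If cluvfy reports fixable OS settings, it also generates a fixup script to be run as root on the affected node. The directory name under /tmp varies by version and user, so the path below is only illustrative:

# Run as root on the node that cluvfy flagged (path is an example).
/tmp/CVU_11.2.0.3.0_oracle/runfixup.sh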


3) Run addNode.sh from the Grid home first.


mydrdb5.ea.com[MYPD3A]$ /opt/oracle/grid/11.2.0/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={mydrdb6}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={mydrdb6-v}"

An error that might come up in some cases:

Copying to remote nodes (Tuesday, November 20, 2012 12:14:05 AM PST)
........................................WARNING:Error while copying directory /opt/oracle/grid/11.2.0 with exclude file list '/tmp/OraInstall2012-11-20_12-13llExcludeFile.lst' to nodes 'mydrdb6'. [PRKC-PRCF-2015 : One or more commands were not executed successfully on one or more nodes : <null>]
----------------------------------------------------------------------------------
mydrdb6:
    PRCF-2023 : The following contents are not transferred as they are non-readable.
Directories:
  1) /opt/oracle/grid/11.2.0/gns
Files:
   1) /opt/oracle/grid/11.2.0/bin/orarootagent.bin
   2) /opt/oracle/grid/11.2.0/bin/crsd
   3) /opt/oracle/grid/11.2.0/bin/cssdagent.bin
   4) /opt/oracle/grid/11.2.0/bin/crfsetenv
and so on.




To fix this issue:

Check the OS user oracle and make sure it belongs to the same group on both servers. The copy can fail on root-owned files when the oracle user's group differs between the nodes, or when the files are owned root:root; changing the ownership to root:dba (or root:oracle) resolves it, as sketched below.
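
A minimal sketch of the checks and fixes, assuming dba is the Grid software owner's group on your system and using the paths from the PRCF-2023 output above:

# Compare the oracle user's group membership on both nodes; it should match.
id oracle

# As root on the source node, re-group the files reported as non-readable.
chown root:dba /opt/oracle/grid/11.2.0/bin/orarootagent.bin
chown root:dba /opt/oracle/grid/11.2.0/bin/crsd
chown root:dba /opt/oracle/grid/11.2.0/bin/cssdagent.bin
chown root:dba /opt/oracle/grid/11.2.0/bin/crfsetenv
chown -R root:dba /opt/oracle/grid/11.2.0/gns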


Run the same addNode.sh again; it will end with the following:


Instantiating scripts for add node (Tuesday, November 20, 2012 12:01:37 PM PST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, November 20, 2012 12:01:39 PM PST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, November 20, 2012 12:03:24 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/opt/oracle/oraInventory/orainstRoot.sh' with root privileges on nodes 'mydrdb6'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/oraInventory/orainstRoot.sh #On nodes mydrdb6
/opt/oracle/grid/11.2.0/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/grid/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.
========================================================================

On the new node (mydrdb6), run the following as root:
=======================================================================
[root@mydrdb6 oracle]# /opt/oracle/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /opt/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete.

------------------------------------------------------------------------

[root@mydrdb6 oracle]# /opt/oracle/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/oracle/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node mydrdb5, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@mydrdb6 oracle]#
[root@mydrdb6 oracle]#
========================================================================

Check the cluster resource status; it should show entries for the new node as well.
========================================================================

mydrdb5.ea.com[MYPD3A]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora....N1.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.cvu        ora.cvu.type   ONLINE    ONLINE    mydrdb5
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora.mypd3.db ora....se.type OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    mydrdb5
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    mydrdb5
ora.ons        ora.ons.type   ONLINE    ONLINE    mydrdb5
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    mydrdb5
ora....SM1.asm application    OFFLINE   OFFLINE
ora....B5.lsnr application    ONLINE    ONLINE    mydrdb5
ora....db5.gsd application    OFFLINE   OFFLINE
ora....db5.ons application    ONLINE    ONLINE    mydrdb5
ora....db5.vip ora....t1.type ONLINE    ONLINE    mydrdb5
ora....SM2.asm application    OFFLINE   OFFLINE
ora....B6.lsnr application    ONLINE    ONLINE    mydrdb6
ora....db6.gsd application    OFFLINE   OFFLINE
ora....db6.ons application    ONLINE    ONLINE    mydrdb6
ora....db6.vip ora....t1.type ONLINE    ONLINE    mydrdb6
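
As an extra confirmation that the new node has joined, olsnodes and crsctl can be queried from any node; a quick sketch:

# List cluster members with node number and status; mydrdb6 should show as Active.
olsnodes -n -s
# The 11gR2 replacement for the deprecated crs_stat, grouped by resource.
crsctl stat res -t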


Check the post-node-addition validation:
mydrdb5.ea.com[MYPD3A]$ cluvfy stage -post nodeadd -n mydrdb6
It will end with the following:

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.
========================================================================

Pre-database-installation validation for the new node
========================================================================

mydrdb5.ea.com[MYPD3A]$ cluvfy stage -pre dbinst -n mydrdb6 -r 11gR2
It failed for me with the following:

---------------------------------------------------
Membership check for user "oracle" in group "oracle" [as Primary] failed
Check failed on nodes:
        mydrdb6

--------------------------------------------------

ASM and CRS versions are compatible
Database Clusterware version compatibility passed

Pre-check for database installation was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6
mydrdb5.ea.com[MYPD3A]$

We can ignore this check and proceed with adding the database node; if you would rather clear it, see the sketch below.
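If you prefer to satisfy the check instead of ignoring it, the oracle user's primary group can be aligned on the new node. A sketch, where the group names are assumptions; verify the current layout with id oracle first:

# As root on mydrdb6; adjust group names to your environment.
usermod -g oracle -G dba,oinstall oracle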
========================================================================
mydrdb5.ea.com[MYPD3A]$ /opt/oracle/product/11.2.0/db_1/oui/bin/addNode.sh -silent CLUSTER_NEW_NODES={mydrdb6}

Saving inventory on nodes (Tuesday, November 20, 2012 12:31:55 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/product/11.2.0/db_1/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
mydrdb5.ea.com[MYPD3A]$

-----------------------------------------------------------
on mydrdb6:
-----------------------------------------------------------
[root@mydrdb6 oracle]# /opt/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

------------------------------------------------------
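
Once root.sh completes, it is worth confirming that the new database home is registered in the central inventory on the new node; a minimal sketch, assuming the inventory location shown earlier in this post:

# On mydrdb6, the db_1 home should appear with both nodes listed.
grep -A 2 db_1 /opt/oracle/oraInventory/ContentsXML/inventory.xml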
