Friday, April 14, 2017

Node Addition in Oracle 11g RAC

========================================================================

Node Addition in 11G RAC

========================================================================
Before adding a node or running any cluster command, we need to make sure the server-level configuration is the same on all servers.
For example:
1) The public, private, and VIP IPs should be in the same subnets as on the existing nodes.
2) All required packages should be installed, and the kernel version should be the same on both servers.
3) All required kernel and system settings need to be completed (a quick verification sketch follows).
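
A minimal way to spot-check these prerequisites from an existing node (a sketch; the hostnames are from this walkthrough, and the package and parameter lists are illustrative, not the full requirement set):

for h in mydrdb5 mydrdb6; do
  echo "== $h =="
  ssh $h "uname -r"                             # kernel versions should match
  ssh $h "sysctl -n kernel.shmmax fs.file-max"  # sample kernel parameters
  ssh $h "rpm -q binutils glibc libaio"         # sample required packages
done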

Now:

1) Start with a comparison of an existing node and the new node. This will show whether the setups are similar and whether you can continue.


mydrdb5.ea.com[MYPD3A]$ cluvfy comp peer -n mydrdb6 -refnode mydrdb5 -r 11gR2

If any of the comparisons fail, this command will not pass. An example is below:

Compatibility check: User existence for "oracle" [reference node: mydrdb5]
  Node Name     Status                    Ref. node status          Comment
  ------------  ------------------------  ------------------------  ----------
  mydrdb6      oracle(1304)              oracle(110)               mismatched
User existence for "oracle" check failed



Verification of peer compatibility was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6


Check the system, rectify the error, and run the same command again; it should now complete with the verification successful.
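
For the UID mismatch shown above, a minimal fix sketch (assumes UID 110 is free on mydrdb6 and that the oracle user's files live under /opt/oracle; adjust to your layout):

# On mydrdb6, as root
usermod -u 110 oracle
# Re-own any files still carrying the old numeric UID
find /opt/oracle -user 1304 -exec chown oracle {} \;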


2) Validate whether the node can be added. If any error comes up, the -fixup option will suggest fixes for the possible issues.

mydrdb5.ea.com[]$ cluvfy stage -pre nodeadd -n mydrdb6 -fixup -verbose

It should finish with the following:

Pre-check for node addition was successful.


3) Run addNode.sh from the Grid home first.


mydrdb5.ea.com[MYPD3A]$ /opt/oracle/grid/11.2.0/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={mydrdb6}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={mydrdb6-v}"

An error that might come up in some cases:

Copying to remote nodes (Tuesday, November 20, 2012 12:14:05 AM PST)
........................................WARNING:Error while copying directory /opt/oracle/grid/11.2.0 with exclude file list '/tmp/OraInstall2012-11-20_12-13llExcludeFile.lst' to nodes 'mydrdb6'. [PRKC-PRCF-2015 : One or more commands were not executed successfully on one or more nodes : <null>]
----------------------------------------------------------------------------------
mydrdb6:
    PRCF-2023 : The following contents are not transferred as they are non-readable.
Directories:
  1) /opt/oracle/grid/11.2.0/gns
Files:
   1) /opt/oracle/grid/11.2.0/bin/orarootagent.bin
   2) /opt/oracle/grid/11.2.0/bin/crsd
   3) /opt/oracle/grid/11.2.0/bin/cssdagent.bin
   4) /opt/oracle/grid/11.2.0/bin/crfsetenv
and so on.




To fix this issue:

Check the OS user oracle and put it into the same groups on both servers. The copy can fail to transfer root-owned files if the oracle group differs between the nodes, or if files are owned root:root. Changing the ownership to root:dba (or the oracle user's group) resolves the issue.
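
A sketch of that check and fix (assumes dba is the install group on your system; confirm against your setup before changing ownership):

# Compare the oracle user's groups on both nodes
id oracle
# On the source node, give root:root files under the Grid home a group
# that the copy can read
find /opt/oracle/grid/11.2.0 -user root -group root -exec chgrp dba {} \;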


Run the same addNode.sh again; it will end with the following:


Instantiating scripts for add node (Tuesday, November 20, 2012 12:01:37 PM PST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Tuesday, November 20, 2012 12:01:39 PM PST)
...............................................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Tuesday, November 20, 2012 12:03:24 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/opt/oracle/oraInventory/orainstRoot.sh' with root privileges on nodes 'mydrdb6'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/oraInventory/orainstRoot.sh #On nodes mydrdb6
/opt/oracle/grid/11.2.0/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/grid/11.2.0 was successful.
Please check '/tmp/silentInstall.log' for more details.
========================================================================

On the other node, run the following:
=======================================================================
[root@mydrdb6 oracle]# /opt/oracle/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /opt/oracle/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete.

------------------------------------------------------------------------

[root@mydrdb6 oracle]# /opt/oracle/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /opt/oracle/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node mydrdb5, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@mydrdb6 oracle]#
[root@mydrdb6 oracle]#
========================================================================

Check the cluster resource status; it should show entries for the new node as well.
========================================================================

mydrdb5.ea.com[MYPD3A]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....ER.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora....N1.lsnr ora....er.type ONLINE    ONLINE    mydrdb5
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.cvu        ora.cvu.type   ONLINE    ONLINE    mydrdb5
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora.mypd3.db ora....se.type OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    mydrdb5
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    mydrdb5
ora.ons        ora.ons.type   ONLINE    ONLINE    mydrdb5
ora....ry.acfs ora....fs.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    mydrdb5
ora....SM1.asm application    OFFLINE   OFFLINE
ora....B5.lsnr application    ONLINE    ONLINE    mydrdb5
ora....db5.gsd application    OFFLINE   OFFLINE
ora....db5.ons application    ONLINE    ONLINE    mydrdb5
ora....db5.vip ora....t1.type ONLINE    ONLINE    mydrdb5
ora....SM2.asm application    OFFLINE   OFFLINE
ora....B6.lsnr application    ONLINE    ONLINE    mydrdb6
ora....db6.gsd application    OFFLINE   OFFLINE
ora....db6.ons application    ONLINE    ONLINE    mydrdb6
ora....db6.vip ora....t1.type ONLINE    ONLINE    mydrdb6
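
As an additional quick check, olsnodes from the Grid home lists each cluster node with its number and status; the new node should appear as Active:

mydrdb5.ea.com[MYPD3A]$ olsnodes -n -s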


Check the post-node-addition validation:
mydrdb5.ea.com[MYPD3A]$ cluvfy stage -post nodeadd -n mydrdb6
It will end with the following:

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.
========================================================================

Pre-database node addition validation
========================================================================

mydrdb5.ea.com[MYPD3A]$ cluvfy stage -pre dbinst -n mydrdb6 -r 11gR2
It failed for me with the following:

---------------------------------------------------
Membership check for user "oracle" in group "oracle" [as Primary] failed
Check failed on nodes:
        mydrdb6

--------------------------------------------------

ASM and CRS versions are compatible
Database Clusterware version compatibility passed

Pre-check for database installation was unsuccessful.
Checks did not pass for the following node(s):
        mydrdb6
mydrdb5.ea.com[MYPD3A]$

We can ignore this check and proceed with adding the database node; a hedged alternative is shown below.
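
If you prefer to clear the check instead of ignoring it, a one-liner sketch (assumes the oracle group exists on mydrdb6 and is the primary group of the oracle user on the other nodes):

# On mydrdb6, as root
usermod -g oracle oracle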
========================================================================
mydrdb5.ea.com[MYPD3A]$ /opt/oracle/product/11.2.0/db_1/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={mydrdb6}"

Saving inventory on nodes (Tuesday, November 20, 2012 12:31:55 PM PST)
.                                                               100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.
/opt/oracle/product/11.2.0/db_1/root.sh #On nodes mydrdb6
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /opt/oracle/product/11.2.0/db_1 was successful.
Please check '/tmp/silentInstall.log' for more details.
mydrdb5.ea.com[MYPD3A]$

-----------------------------------------------------------
on mydrdb6:
-----------------------------------------------------------
[root@mydrdb6 oracle]# /opt/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /opt/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.

------------------------------------------------------

Wednesday, April 12, 2017

Adding ASM Disk in Oracle 10g and 11g

Add ASM Disk

RACPD disk addition detailed work plan:

1) Run the below commands on all database nodes:

SQL> select name,log_mode,open_mode from v$Database;

NAME      LOG_MODE     OPEN_MODE
--------- ------------ ----------
RACPD     ARCHIVELOG READ WRITE

SQL> select inst_id, instance_name,instance_number,status,STARTUP_TIME from gv$instance;

   INST_ID INSTANCE_NAME    INSTANCE_NUMBER STATUS       STARTUP_T
---------- ---------------- --------------- ------------ ---------
         1 RACPD                       1 OPEN         28-JUL-11

SQL> select count(*) from v$recover_file;

  COUNT(*)
----------
         0

SQL> select distinct status,count(*) from v$datafile group by status;

STATUS    COUNT(*)
------- ----------
ONLINE         262
SYSTEM           3

2) Stop the database on all nodes:
srvctl stop instance -d RACPD -i RACPD1
srvctl stop instance -d RACPD -i RACPD2
srvctl stop instance -d RACPD -i RACPD3
srvctl stop instance -d RACPD -i RACPD4



3) Verify that the ASM disks and their paths are visible from ASM (a stamping sketch follows this list):

1) select name,GROUP_NUMBER,STATE,TOTAL_MB,FREE_MB from  v$asm_diskgroup;

2) /etc/init.d/oracleasm listdisks
3) /etc/init.d/oracleasm querydisk /dev/mpath/*p1
4) /etc/init.d/oracleasm querydisk /dev/oracleasm/disks/*
5) select name,GROUP_NUMBER,STATE,TOTAL_MB,FREE_MB from  v$asm_diskgroup;
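
If the new disks are not yet stamped for ASMLib, a minimal sketch as root (the device path mpath12p1 is hypothetical; substitute your new multipath partition):

/etc/init.d/oracleasm createdisk DATA1DISK12 /dev/mpath/mpath12p1   # on one node only
/etc/init.d/oracleasm scandisks                                     # on every other node
/etc/init.d/oracleasm listdisks                                     # confirm the disk appears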

4) Check the ASM disks:

SQL> select GROUP_NUMBER,DISK_NUMBER,MOUNT_STATUS,HEADER_STATUS,TOTAL_MB,FREE_MB,PATH from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU   TOTAL_MB    FREE_MB PATH
------------ ----------- ------- ------------ ---------- ---------- --------------------------------------------------
           0           3 CLOSED  PROVISIONED      512071          0 /dev/oracleasm/disks/DATA1DISK13
           0          11 CLOSED  PROVISIONED      512071          0 /dev/oracleasm/disks/DATA1DISK12

The above two disks should be visible on all the nodes.


5) Add the disks in ASM with rebalance power 0 and 1:

SQL> ALTER DISKGROUP DATA1 ADD DISK '/dev/oracleasm/disks/DATA1DISK12'  name DATA1DISK12 REBALANCE POWER 0;

Diskgroup altered.

SQL>
SQL> ALTER DISKGROUP DATA1 ADD DISK '/dev/oracleasm/disks/DATA1DISK13' name DATA1DISK13  REBALANCE POWER 1;

Diskgroup altered.


6) Increase the rebalance power after some time, once you have analyzed the performance impact.
SQL> select GROUP_NUMBER,OPERATION,STATE,POWER from v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER
------------ ----- ---- ----------
           1 REBAL RUN           1


After 20 minutes:

SQL> ALTER DISKGROUP DATA1 REBALANCE power 5;

Diskgroup altered.

SQL> select * from  v$asm_operation;

GROUP_NUMBER OPERA STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE
------------ ----- ---- ---------- ---------- ---------- ---------- ----------
EST_MINUTES
-----------
           1 REBAL RUN           7          7     536925     998761       2047
        225


SQL> select group_number, name, TOTAL_MB, FREE_MB from V$asm_disk_stat;

GROUP_NUMBER NAME                             TOTAL_MB    FREE_MB
------------ ------------------------------ ---------- ----------
           1 DATA1DISK1                         512071      82414
           1 DATA1DISK10                        512071      82419
           1 DATA1DISK3                         512071      82415
           1 DATA1DISK13                        512071     106868
           2 REDODISK1                           24575      14653
           1 DATA1DISK5                         512071      82416
           3 TEMP_0000                          409618       9367
           1 DATA1DISK9                         512071      82417
           1 DATA1DISK4                         512071      82417
           1 DATA1DISK6                         512071      82418
           1 DATA1DISK11                        512071      82417
           1 DATA1DISK12                        512071     106870
           1 DATA1DISK2                         512071      82417
           1 DATA1DISK8                         512071      82416

14 rows selected.


7) Start the database and release it for use to the application team.

srvctl start instance -d RACPD -i RACPD1
srvctl start instance -d RACPD -i RACPD2
srvctl start instance -d RACPD -i RACPD3
srvctl start instance -d RACPD -i RACPD4
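
Equivalently, all four instances can be stopped (in step 2) or started with a single command per database:

srvctl stop database -d RACPD
srvctl start database -d RACPD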

Wednesday, February 10, 2016

PuTTY color settings for different environments.

PURPOSE 

This document covers the detailed steps for configuring PuTTY color settings for different environments.

SCOPE

This document is meant for everyone who works with PuTTY.


DETAILS


Here are the steps to configure PuTTY colors for different environments.


Setting up the color coding in PuTTY:

To set up the color coding, open PuTTY and, in the "Saved Sessions" text box, enter the name of the setting you want to create. In the example below we created three (Production, Non-prod, SMO) plus one default, and then clicked "Save".

Thursday, June 25, 2015

Files that are useful in Oracle database administration

Useful Files In Linux

Path                           Description
/etc/passwd                    User settings
/etc/group                     Group settings for users
/etc/hosts                     Host name lookup information
/etc/sysctl.conf               Kernel parameters for Linux
/var/log/messages              System and error logs and messages
/etc/oratab                    Oracle registered instances (DBCA)
/etc/fstab                     File system entries
/home/oracle/.bash_profile     Oracle user profile settings in Linux
/proc/meminfo                  To determine the RAM size
/etc/redhat-release            OS release information
/etc/security/limits.conf      Process and open-file limits
/etc/selinux/config            Enable or disable the SELinux security feature
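
For example, a few of these files can be queried directly (illustrative commands):

grep MemTotal /proc/meminfo        # RAM size
cat /etc/redhat-release            # OS release
grep -i shmmax /etc/sysctl.conf    # one kernel parameter of interest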

Deinstall and cleanup Oracle 11g RAC instance

Deinstall and clean up an 11g RAC installation:


1) Shut down all the databases and cluster services on all the RAC nodes.
As root:
crsctl stop cluster -all
and
crsctl stop crs   (on both nodes)



2) As the oracle user, run the following:

soatsdb1.com[GSOATS1]$ cd /opt/oracle/grid/11.2.0/deinstall

soatsdb1.com[GSOATS1]$ ls
bootstrap.pl  deinstall  deinstall.pl  deinstall.xml  jlib  readme.txt  response  sshUserSetup.sh

soatsdb1.com[GSOATS1]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /opt/oracle/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /opt/oracle/grid/11.2.0
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /opt/oracle/product
Checking for existence of central inventory location /opt/oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /opt/oracle/grid/11.2.0
The following nodes are part of this cluster: soatsdb1,soatsdb2
Checking for sufficient temp space availability on node(s) : 'soatsdb1,soatsdb2'

## [END] Install check configuration ##

Traces log file: /opt/oracle/oraInventory/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "soatsdb1"[null]
 >
soatsdb1-v.com
The following information can be collected by running "/sbin/ifconfig -a" on node "soatsdb1"
Enter the IP netmask of Virtual IP "10.30.207.150" on node "soatsdb1"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "10.30.207.150" is active
 >

Enter an address or the name of the virtual IP used on node "soatsdb2"[10.30.207.150]
 >

The following information can be collected by running "/sbin/ifconfig -a" on node "soatsdb2"
Enter the IP netmask of Virtual IP "soatsdb2-vip" on node "soatsdb2"[255.255.255.0]
 >

Enter the network interface name on which the virtual IP address "soatsdb2-vip" is active
 >

Enter an address or the name of the virtual IP[]
 >


Network Configuration check config START

Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_check2012-02-02_11-59-38-PM.log

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /opt/oracle/oraInventory/logs/asmcadc_check2012-02-02_11-59-40-PM.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: n
ASM was not detected in the Oracle Home

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /opt/oracle/grid/11.2.0
The cluster node(s) on which the Oracle home deinstallation will be performed are:soatsdb1,soatsdb2
Oracle Home selected for deinstall is: /opt/oracle/grid/11.2.0
Inventory Location where the Oracle home registered is: /opt/oracle/oraInventory
ASM was not detected in the Oracle Home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2012-02-02_11-56-20-PM.out'
Any error messages from this session will be written to: '/opt/oracle/oraInventory/logs/deinstall_deconfig2012-02-02_11-56-20-PM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /opt/oracle/oraInventory/logs/asmcadc_clean2012-02-02_11-59-55-PM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /opt/oracle/oraInventory/logs/netdc_clean2012-02-02_11-59-55-PM.log

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "soatsdb2".

/tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Run the following command as the root user or the administrator on node "soatsdb1".

/tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode

Press Enter after you finish running the above commands

<----------------------------------------


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Press Enter after you finish running the above commands

<----------------------------------------

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the local node : Done

Delete directory '/opt/oracle/grid/11.2.0' on the local node : Done

The Oracle Base directory '/opt/oracle/product' will not be removed on local node. The directory is in use by Oracle Home '/opt/oracle/product/11.2.0'.

Detach Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the remote nodes 'soatsdb2' : Done

Delete directory '/opt/oracle/grid/11.2.0' on the remote nodes 'soatsdb2' : Done

The Oracle Base directory '/opt/oracle/product' will not be removed on node 'soatsdb2'. The directory is in use by Oracle Home '/opt/oracle/product/11.2.0'.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2012-02-02_11-56-11PM' on node 'soatsdb1'
Clean install operation removing temporary directory '/tmp/deinstall2012-02-02_11-56-11PM' on node 'soatsdb2'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Oracle Clusterware is stopped and successfully de-configured on node "soatsdb2"
Oracle Clusterware is stopped and successfully de-configured on node "soatsdb1"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the local node.
Successfully deleted directory '/opt/oracle/grid/11.2.0' on the local node.
Successfully detached Oracle home '/opt/oracle/grid/11.2.0' from the central inventory on the remote nodes 'soatsdb2'.
Successfully deleted directory '/opt/oracle/grid/11.2.0' on the remote nodes 'soatsdb2'.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

soatsdb1.com[GSOATS1]$


3) Run the given command on node 2:

[root@soatsdb2 ~]# /tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'soatsdb2'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'soatsdb2'
CRS-2673: Attempting to stop 'ora.crf' on 'soatsdb2'
CRS-2677: Stop of 'ora.mdnsd' on 'soatsdb2' succeeded
CRS-2677: Stop of 'ora.crf' on 'soatsdb2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'soatsdb2'
CRS-2677: Stop of 'ora.gipcd' on 'soatsdb2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'soatsdb2'
CRS-2677: Stop of 'ora.gpnpd' on 'soatsdb2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'soatsdb2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@soatsdb2 ~]#


4) Run the given command on node 1:


[root@soatsdb1 deinstall]# /tmp/deinstall2012-02-02_11-56-11PM/perl/bin/perl -I/tmp/deinstall2012-02-02_11-56-11PM/perl/lib -I/tmp/deinstall2012-02-02_11-56-11PM/crs/install /tmp/deinstall2012-02-02_11-56-11PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp" -lastnode
Using configuration parameter file: /tmp/deinstall2012-02-02_11-56-11PM/response/deinstall_Ora11g_gridinfrahome1.rsp

CRS-5702: Resource 'ora.cssd' is already running on 'soatsdb1'
CRS-4000: Command Start failed, or completed with errors.
CSS startup failed with return code 1
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd

CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'soatsdb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'soatsdb1'
CRS-2677: Stop of 'ora.crsd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'soatsdb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'soatsdb1'
CRS-2677: Stop of 'ora.mdnsd' on 'soatsdb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'soatsdb1'
CRS-2677: Stop of 'ora.cssd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'soatsdb1'
CRS-2677: Stop of 'ora.gipcd' on 'soatsdb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'soatsdb1'
CRS-2677: Stop of 'ora.gpnpd' on 'soatsdb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'soatsdb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@soatsdb1 deinstall]#


5) Run the below commands on all the nodes.

[root@soatsdb1 stage]# rm /etc/oracle/*
rm: cannot lstat `/etc/oracle/*': No such file or directory
[root@soatsdb1 stage]#
[root@soatsdb1 stage]# rm -f /etc/init.d/init.cssd
[root@soatsdb1 stage]# rm -f /etc/init.d/init.crs
[root@soatsdb1 stage]# rm -f /etc/init.d/init.crsd
[root@soatsdb1 stage]# rm -f /etc/init.d/init.evmd
[root@soatsdb1 stage]# rm -f /etc/rc2.d/K96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc2.d/S96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc3.d/K96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc3.d/S96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc5.d/K96init.crs
[root@soatsdb1 stage]# rm -f /etc/rc5.d/S96init.crs
[root@soatsdb1 stage]# rm -Rf /etc/oracle/scls_scr
[root@soatsdb1 stage]# rm -f /etc/inittab.crs
[root@soatsdb1 stage]# ps -ef | grep crs
root      7139 34131  0 10:54 pts/5    00:00:00 grep crs
[root@soatsdb1 stage]# ps -ef | grep evm
root      7143 34131  0 10:54 pts/5    00:00:00 grep evm
[root@soatsdb1 stage]# ps -ef | grep css
root      7146 34131  0 10:54 pts/5    00:00:00 grep css
[root@soatsdb1 stage]# rm -f /var/tmp/.oracle
[root@soatsdb1 stage]# rm -f /tmp/.oracle
[root@soatsdb1 stage]#


6) No files should remain if the deinstall completed without any failure.

7) The deinstall also deletes all the contents of the Grid home.

8) Only the database ORACLE_HOME needs to be removed manually with the rm command (a one-liner sketch follows).
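
A hedged one-liner for this walkthrough's database home (verify the path and that nothing else uses it before removing):

rm -rf /opt/oracle/product/11.2.0/db_1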

Data Guard related issues and their fixes

If the error is: Warning: ORA-16789: standby redo logs not configured

DGMGRL> show configuration

Configuration - abcprd

  Protection Mode: MaxPerformance
  Databases:
    fc_abcprd - Primary database
      Warning: ORA-16789: standby redo logs not configured

    dr_abcprd - Physical standby database
      Error: ORA-16525: the Data Guard broker is not yet available

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

DGMGRL>

Add standby redo log groups on the primary database as well as the DR site:

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo04.log' size 50M;
SQL>  ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo05.log' size 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo06.log' size 50M;

And on the DR site:

SQL> alter database recover managed standby database cancel;

SQL>  ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo04.log' size 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo05.log' size 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oradata/ABCPRD/stdby_redo06.log' size 50M;
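
A quick check on either site that the standby redo logs were created with the expected size (v$standby_log is available on both primary and standby):

SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$standby_log;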

After completing the above steps, start the DR database in managed recovery:

SQL> alter database recover managed standby database disconnect from session;

Now run dgmgrl on the primary:

DGMGRL>  show configuration;

Configuration - abcprd

  Protection Mode: MaxPerformance
  Databases:
    fc_abcprd - Primary database
    dr_abcprd - Physical standby database (disabled)

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

If the error below appears after the above steps, then on the primary site:

Warning: ORA-16826: apply service state is inconsistent with the DelayMins property

DGMGRL> remove database DR_ABCPRD;
Removed database "dr_abcprd" from the configuration
DGMGRL> add database DR_ABCPRD as connect identifier is DR_ABCPRD maintained as physical;
Database "dr_abcprd" added

Then run:

show configuration

It should not show any error message after that. If you are still facing any issues, please leave me a message and I will try to help you.


Regards,
Amaresh

Wednesday, June 24, 2015

Script to list missing and INVALID Objects in the database

REM Script to list missing and INVALID Objects in the database
REM
REM      MISSING.SQL                                                  
REM
REM      This script lists packages missing their body and objects that have become invalid
REM
REM      It should be run as SYS or SYSTEM
REM

set pagesize 0
set linesize 120
set feedback off
set trimspool on
set termout on

spool missing.txt

select A.Owner Oown,
       A.Object_Name Oname,
       A.Object_Type Otype,
       'Miss Pkg Body' Prob
  from DBA_OBJECTS A
 where A.Object_Type = 'PACKAGE'
   and A.Owner not in ('SYS','SYSTEM')
   and not exists
        (select 'x'
           from DBA_OBJECTS B
          where B.Object_Name = A.Object_Name
            and B.Owner = A.Owner
            and B.Object_Type = 'PACKAGE BODY')
union
select Owner Oown,
       Object_Name Oname,
       Object_Type Otype,
       'Invalid Obj' Prob
  from DBA_OBJECTS
 where Owner not in ('SYS','SYSTEM')
   and Status != 'VALID'
 order by 1,4,3,2
/
spool off
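
The script above only lists the problems; to recompile the invalid objects it finds, a common companion step is the stock utlrp.sql script, run as SYS:

SQL> @?/rdbms/admin/utlrp.sql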

Convert snapshot standby database to Physical standby database: Dataguard 11gR2

Step 1 SQL> shutdown immediate; Step 2 SQL> startup nomount Step 3 SQL> alter database mount; Step 4 SQL>  ALTER DATABASE CONV...