Saturday, December 21, 2013

How to apply a PSU patch (17272731) first to your standby database and then to your primary in a two-node RAC environment

My belief has always been that, as a DBA, the most important thing, beyond the performance of a database, is the availability of the data inside it. I believe that if the data is not available to the client, we have no job; retrieving it may take longer, but without the data our job would cease to exist.

With that short prologue in mind, what we are trying to do is keep a highly available environment. We know that a RAC environment by itself does not equal high availability; it is our Data Guard environments combined with our RAC environments that give us a higher degree of it. But when we have to apply a patch, part of that high availability is compromised.

Starting with 11.2.0.1, Oracle provides a way to apply certain patches to our standby first, without compromising the primary database, and then apply the patches to the primary's binaries so that the changes cascade down to our standby database.

The first thing you have to make sure of before applying a patch this way is that it is certified as "Standby-First" (the patch README states this, and MOS note 1265700.1 describes the Standby-First patch apply process).

I have the patch binaries in a directory shared by all the nodes, /u01/app/oracle/patches/11.2.0.3/PSUOct2013. The databases we are going to patch are the primary, TEST, with nodes oracleenespanol1 and oracleenespanol2, and the standby, TESTSTBY, with nodes oracleenespanol3 and oracleenespanol4. The PSU patch I am going to apply is 17272731, which is 11.2.0.3.8.
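
As a quick reference, this is roughly what the shared staging area looks like once the PSU zip is extracted (illustrative; your unzip layout may differ, and the ocm.rsp shown here is only created later in this post):

ls /u01/app/oracle/patches/11.2.0.3/PSUOct2013
## 16902043/  -> Database PSU 11.2.0.3.8
## 17076717/  -> Grid Infrastructure PSU 11.2.0.3.8 (HAS components)
## ocm.rsp    -> OCM response file (created further down in this post)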

We are going to start this long post in the standby environment, so be patient. The first thing I recommend is that you take a backup of your binaries, so that if something goes wrong we can return to them without any problem. I run this as the root user. I also recommend that before you start you have the necessary backups of your database.

[root@oracleenespanol3 ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export GI_HOME=/u01/app/grid/CRS
mkdir /orabackup/StgOraBackup/GI_RDBMS_OH_backup
cd $ORACLE_HOME ## Verify that this is the correct ORACLE_HOME for TESTSTBY
tar -cvf /orabackup/StgOraBackup/GI_RDBMS_OH_backup/rdbms_oh_bu_OCT2013.tar .

cd $GI_HOME ## Verify that this is the correct GI_HOME for TESTSTBY
tar -cvf /orabackup/StgOraBackup/GI_RDBMS_OH_backup/gi_oh_bu_OCT2013.tar .
cat /etc/oraInst.loc
inventory_loc=/u01/app/OraInventory
inst_group=oinstall

cd /u01/app/OraInventory

tar -cvf /orabackup/StgOraBackup/OraInventory.tar .
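
If you ever have to back the patch out at the binary level, restoring is simply the reverse of the tar commands above. A minimal sketch, assuming the clusterware stack and databases on the node are already stopped:

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export GI_HOME=/u01/app/grid/CRS
## Restore the RDBMS home from the backup taken above
cd $ORACLE_HOME
tar -xvf /orabackup/StgOraBackup/GI_RDBMS_OH_backup/rdbms_oh_bu_OCT2013.tar
## Restore the GI home from the backup taken above
cd $GI_HOME
tar -xvf /orabackup/StgOraBackup/GI_RDBMS_OH_backup/gi_oh_bu_OCT2013.tar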

Next, as the oracle user, we are going to make sure there is no conflict between the patch we are about to apply and the binaries we already have installed.

[oracle@oracleenespanol3 StgOraBackup]$ id
uid=400(oracle) gid=400(oinstall) groups=400(oinstall),401(dba),402(oper)
export GI_HOME=/u01/app/grid/CRS
export PATH=$GI_HOME/OPatch:$GI_HOME/bin:$PATH
cd /u01/app/grid/CRS/OPatch
./opatch version
 ## The version from the command above must meet the PSU README's minimum (the run shown later in this post used OPatch 11.2.0.3.4). If yours is older, download and install the latest OPatch from MOS patch 6880880
./opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/grid/CRS -phBaseDir /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717
./opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/grid/CRS -phBaseDir /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043
./opatch prereq CheckConflictAgainstOHWithDetail -oh /u01/app/oracle/product/11.2.0/db_1 -phBaseDir /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043
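
If the version check fails, refreshing OPatch is just a matter of unzipping patch 6880880 on top of each home. A sketch per node (the zip file name below follows the usual MOS naming convention and is an assumption; use the one you actually downloaded):

for home in /u01/app/grid/CRS /u01/app/oracle/product/11.2.0/db_1; do
  mv $home/OPatch $home/OPatch.pre_psu                      ## keep the old copy just in case
  unzip -q -d $home /tmp/p6880880_112000_Linux-x86-64.zip   ## extracts a fresh OPatch/ into the home
done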

To continue, we have to create a response file for OCM as the oracle user. This does not mean we are going to install OCM, but the file is required for a PSU. Remember that the directory holding the patch binaries is shared, so we only have to do this once.

[oracle@oracleenespanol3 StgOraBackup]$ id
uid=400(oracle) gid=400(oinstall) groups=400(oinstall),401(dba),402(oper)

$ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp

I like to check which services are up in my RAC environment before making any change; that way I know what has to be up when I am done. I use a script called crs_status.sh for this and run it as my grid user; the only thing you have to change is the value of CRS_HOME. I hope it is useful to you as well.
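
The script itself is not included in this post; if you do not have one handy, here is a minimal stand-in, assuming the tabular view from crsctl is enough for you (the author's version formats the NAME/TARGET/STATE/SERVER/STATE_DETAILS columns itself, so the layout will differ slightly):

#!/bin/bash
## crs_status.sh -- minimal stand-in; prints every Clusterware-managed
## resource with its target and state per server.
## Change CRS_HOME to match your environment.
CRS_HOME=${CRS_HOME:-/u01/app/grid/CRS}
"$CRS_HOME/bin/crsctl" status resource -t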

[grid@oracleenespanol3 StgOraBackup]$ id
uid=401(grid) gid=400(oinstall) groups=400(oinstall),401(dba),402(oper)

[grid@oracleenespanol3 antunez]$ ./crs_status.sh

NAME                        TARGET     STATE           SERVER            STATE_DETAILS
--------------------------- ---------- --------------- ----------------- -------------
ora.LISTENER.lsnr           ONLINE     ONLINE          oracleenespanol3  
ora.LISTENER.lsnr           ONLINE     ONLINE          oracleenespanol4  
ora.asm                     ONLINE     ONLINE          oracleenespanol3  
ora.asm                     ONLINE     ONLINE          oracleenespanol4 
ora.teststby.db             ONLINE     ONLINE          oracleenespanol3 Open,Readonly 
ora.teststby.db             ONLINE     ONLINE          oracleenespanol4 Open,Readonly  
...
ora.cvu                     ONLINE     ONLINE          oracleenespanol3  
ora.oc4j                    ONLINE     ONLINE          oracleenespanol4  
ora.scan1.vip               ONLINE     ONLINE          oracleenespanol4  
ora.scan2.vip               ONLINE     ONLINE          oracleenespanol3  
ora.scan3.vip               ONLINE     ONLINE          oracleenespanol3  

Now we really are going to apply the patch in our standby environment, as the root user. Make sure it completes on the first node before you move on to the second node.

[root@oracleenespanol3 ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@oracleenespanol3 ~]# export GI_HOME=/u01/app/grid/CRS
[root@oracleenespanol3 ~]# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[root@oracleenespanol3 ~]# export PATH=$GI_HOME/OPatch:$GI_HOME/bin:$ORACLE_HOME/bin:$PATH

[root@oracleenespanol3 ~]# which make
/usr/bin/make

[root@oracleenespanol3 ~]# which ar
/usr/bin/ar

[root@oracleenespanol3 ~]# which ld
/usr/bin/ld

[root@oracleenespanol3 ~]# which nm
/usr/bin/nm

[root@oracleenespanol3 ~]# cd $GI_HOME/OPatch

[root@oracleenespanol3 OPatch]# ./opatch auto /u01/app/oracle/patches/11.2.0.3/PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp

Executing /u01/app/grid/CRS/perl/bin/perl ./crs/patch11203.pl -patchdir /u01/app/oracle/patches/11.2.0.3 -patchn PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp -paramfile /u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/s_crsconfig_defs

This is the main log file: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.report.log

2013-12-17 17:09:10: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/grid/CRS/crs/install/crsconfig_params
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717/custom/server/17076717  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol3'
CRS-2673: Attempting to stop 'ora.crsd' on 'oracleenespanol3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'oracleenespanol3'
...
CRS-2673: Attempting to stop 'ora.cssd' on 'oracleenespanol3'
CRS-2677: Stop of 'ora.cssd' on 'oracleenespanol3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'oracleenespanol3'
CRS-2677: Stop of 'ora.gipcd' on 'oracleenespanol3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'oracleenespanol3'
CRS-2677: Stop of 'ora.gpnpd' on 'oracleenespanol3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully unlock /u01/app/grid/CRS
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717  apply successful for home  /u01/app/grid/CRS 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/grid/CRS 
CRS-4123: Oracle High Availability Services has been started.

### Once oracleenespanol3 is complete,
### as root on oracleenespanol4, with the same GI_HOME/ORACLE_HOME and PATH variables:

[root@oracleenespanol4 ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@oracleenespanol4 ~]# export GI_HOME=/u01/app/grid/CRS
[root@oracleenespanol4 ~]# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[root@oracleenespanol4 ~]# export PATH=$GI_HOME/OPatch:$GI_HOME/bin:$ORACLE_HOME/bin:$PATH

[root@oracleenespanol4 ~]# which make
/usr/bin/make

[root@oracleenespanol4 ~]# which ar
/usr/bin/ar

[root@oracleenespanol4 ~]# which ld
/usr/bin/ld

[root@oracleenespanol4 ~]# which nm
/usr/bin/nm

[root@oracleenespanol4 ~]# cd $GI_HOME/OPatch

[root@oracleenespanol4 OPatch]# ./opatch auto /u01/app/oracle/patches/11.2.0.3/PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp

Executing /u01/app/grid/CRS/perl/bin/perl ./crs/patch11203.pl -patchdir /u01/app/oracle/patches/11.2.0.3 -patchn PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp -paramfile /u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/s_crsconfig_defs

This is the main log file: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.report.log

2013-12-17 17:09:10: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/grid/CRS/crs/install/crsconfig_params
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717/custom/server/17076717  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol4'
CRS-2673: Attempting to stop 'ora.crsd' on 'oracleenespanol4'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'oracleenespanol4'
...
CRS-2673: Attempting to stop 'ora.cssd' on 'oracleenespanol4'
CRS-2677: Stop of 'ora.cssd' on 'oracleenespanol4' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'oracleenespanol4'
CRS-2677: Stop of 'ora.gipcd' on 'oracleenespanol4' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'oracleenespanol4'
CRS-2677: Stop of 'ora.gpnpd' on 'oracleenespanol4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully unlock /u01/app/grid/CRS
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717  apply successful for home  /u01/app/grid/CRS 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/grid/CRS 
CRS-4123: Oracle High Availability Services has been started.

Once the patching has finished, all that remains is to verify that our environment is patched correctly.

[oracle@oracleenespanol3 StgOraBackup]$ id
uid=400(oracle) gid=400(oinstall) groups=400(oinstall),401(dba),402(oper)
export GI_HOME=/u01/app/grid/CRS
export PATH=$GI_HOME/OPatch:$GI_HOME/bin:$PATH
cd $GI_HOME/OPatch
./opatch lsinventory

An expected result is below:

[oracle@oracleenespanol3 CRS]$ OPatch/opatch lsinventory
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/grid/CRS
Central Inventory : /u01/app/OraInventory
from           : /u01/app/grid/CRS/oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.3.0
Log file location : /u01/app/grid/CRS/cfgtoollogs/opatch/opatch2013-12-17_13-42-59PM_1.log

Lsinventory Output file location : /u01/app/grid/CRS/cfgtoollogs/opatch/lsinv/lsinventory2013-12-17_13-42-59PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1): 

Oracle Grid Infrastructure       11.2.0.3.0
There are 1 products installed in this Oracle Home.


Interim patches (2) :

Patch  16902043     : applied on Tue Dec 17 12:51:03 EST 2013
Unique Patch ID:  16676143
Patch description:  "Database Patch Set Update : 11.2.0.3.8 (16902043)"
Created on 24 Sep 2013, 23:20:58 hrs PST8PDT
Sub-patch  16619892; "Database Patch Set Update : 11.2.0.3.7 (16619892)"
Sub-patch  16056266; "Database Patch Set Update : 11.2.0.3.6 (16056266)"
Sub-patch  14727310; "Database Patch Set Update : 11.2.0.3.5 (14727310)"
Sub-patch  14275605; "Database Patch Set Update : 11.2.0.3.4 (14275605)"
Sub-patch  13923374; "Database Patch Set Update : 11.2.0.3.3 (13923374)"
Sub-patch  13696216; "Database Patch Set Update : 11.2.0.3.2 (13696216)"
Sub-patch  13343438; "Database Patch Set Update : 11.2.0.3.1 (13343438)"
Bugs fixed:
13593999, 13566938, 10350832, 14138130, 12919564, 14198511, 13561951
...
14063280, 12772404, 13011409

Patch  17076717     : applied on Tue Dec 17 12:50:01 EST 2013
Unique Patch ID:  16721032
Patch description:  "Grid Infrastructure Patch Set Update : 11.2.0.3.8 (HAS Components)"
 Created on 11 Oct 2013, 02:52:50 hrs PST8PDT
Bugs fixed:
17076717, 16619898, 16315641, 15876003, 14275572, 13919095, 13696251
...
15936101, 14009845, 12827493, 13637590, 13068077

Rac system comprising of multiple nodes
Local node = oracleenespanol3
Remote node = oracleenespanol4

--------------------------------------------------------------------------------

OPatch succeeded.

The only thing I would do here is run the crs_status.sh script again to verify which services are up. You should also know that there is no extra step, such as running catbundle, since that is applied on the primary database, not on the standby.

As you can see, this is a long process, and we have only just finished our standby environment; now we move on to our primary environment. Something highly recommended here is to let a few days pass on your standby, to see whether any error shows up that could impact your primary database.

I am not going to walk through the same steps we performed on oracleenespanol3/oracleenespanol4 (standby) again for oracleenespanol1/oracleenespanol2 (primary), but I do recommend that you run them on your primary nodes; they are (condensed in the sketch after this list):
  1. Take a backup of your GI_HOME/ORACLE_HOME binaries and the inventory
  2. Make sure you have a valid backup of your database
  3. Run opatch prereq CheckConflictAgainstOHWithDetail again against the binaries of your primary nodes
  4. Make sure the ocm.rsp file has been created
  5. Use the crs_status.sh script to record which services are up before you start
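
A condensed sketch of that checklist on one primary node (same paths as the standby section; the primary-side tar file names are assumptions, name yours however you like):

export GI_HOME=/u01/app/grid/CRS
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
PATCH_BASE=/u01/app/oracle/patches/11.2.0.3/PSUOct2013
## 1. Binary and inventory backups (as root)
tar -cvf /orabackup/StgOraBackup/GI_RDBMS_OH_backup/rdbms_oh_prim_OCT2013.tar -C $ORACLE_HOME .
tar -cvf /orabackup/StgOraBackup/GI_RDBMS_OH_backup/gi_oh_prim_OCT2013.tar -C $GI_HOME .
tar -cvf /orabackup/StgOraBackup/OraInventory_prim.tar -C /u01/app/OraInventory .
## 2. Take or verify an RMAN backup of the database as you normally would
## 3. Conflict checks against both homes (as oracle)
$GI_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $GI_HOME -phBaseDir $PATCH_BASE/17076717
$GI_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $GI_HOME -phBaseDir $PATCH_BASE/16902043
$GI_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -oh $ORACLE_HOME -phBaseDir $PATCH_BASE/16902043
## 4. ocm.rsp already exists in the shared $PATCH_BASE
## 5. Record which services are up (as grid)
./crs_status.sh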
Once you have completed these steps, this is where the application process changes.

On servers oracleenespanol3 and oracleenespanol4 (standby), we are going to put TESTSTBY in mount mode.

SYS@TESTSTBY1> select OPEN_MODE,GUARD_STATUS,DATABASE_ROLE from v$database;

OPEN_MODE      GUARD_S DATABASE_ROLE
-------------------- ------- ----------------
READ ONLY WITH APPLY  NONE    PHYSICAL STANDBY

1 row selected.

SYS@TESTSTBY1> select host_name,instance_name from v$instance;
HOST_NAME
----------------------------------------------------------------
INSTANCE_NAME
----------------
oracleenespanol3
TESTSTBY1

1 row selected.

SYS@TESTSTBY1> shutdown immediate
SYS@TESTSTBY1> startup mount
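
The post does this per instance in SQL*Plus; if you prefer, an equivalent way to mount the standby on all of its RAC nodes at once is srvctl:

## Mount the standby database cluster-wide in one shot
## (equivalent to the per-instance shutdown/startup above)
srvctl stop database -d TESTSTBY
srvctl start database -d TESTSTBY -o mount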

Now we go back to oracleenespanol1 and oracleenespanol2 (primary) and apply the patch as the root user. Make sure it completes on the first node before you move on to the second node.

[root@oracleenespanol1 ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@oracleenespanol1 ~]# export GI_HOME=/u01/app/grid/CRS
[root@oracleenespanol1 ~]# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[root@oracleenespanol1 ~]# export PATH=$GI_HOME/OPatch:$GI_HOME/bin:$ORACLE_HOME/bin:$PATH

[root@oracleenespanol1 ~]# which make
/usr/bin/make

[root@oracleenespanol1 ~]# which ar
/usr/bin/ar

[root@oracleenespanol1 ~]# which ld
/usr/bin/ld

[root@oracleenespanol1 ~]# which nm
/usr/bin/nm

[root@oracleenespanol1 ~]# cd $GI_HOME/OPatch

[root@oracleenespanol1 OPatch]# ./opatch auto /u01/app/oracle/patches/11.2.0.3/PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp

Executing /u01/app/grid/CRS/perl/bin/perl ./crs/patch11203.pl -patchdir /u01/app/oracle/patches/11.2.0.3 -patchn PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp -paramfile /u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/s_crsconfig_defs

This is the main log file: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.report.log

2013-12-17 17:09:10: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/grid/CRS/crs/install/crsconfig_params
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717/custom/server/17076717  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol1'
CRS-2673: Attempting to stop 'ora.crsd' on 'oracleenespanol1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'oracleenespanol1'
...
CRS-2673: Attempting to stop 'ora.cssd' on 'oracleenespanol1'
CRS-2677: Stop of 'ora.cssd' on 'oracleenespanol1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'oracleenespanol1'
CRS-2677: Stop of 'ora.gipcd' on 'oracleenespanol1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'oracleenespanol1'
CRS-2677: Stop of 'ora.gpnpd' on 'oracleenespanol1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully unlock /u01/app/grid/CRS
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717  apply successful for home  /u01/app/grid/CRS 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/grid/CRS 
CRS-4123: Oracle High Availability Services has been started.

### Once oracleenespanol1 is complete,
### as root on oracleenespanol2, with the same GI_HOME/ORACLE_HOME and PATH variables:

[root@oracleenespanol2 ~]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@oracleenespanol2 ~]# export GI_HOME=/u01/app/grid/CRS
[root@oracleenespanol2 ~]# export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
[root@oracleenespanol2 ~]# export PATH=$GI_HOME/OPatch:$GI_HOME/bin:$ORACLE_HOME/bin:$PATH

[root@oracleenespanol2 ~]# which make
/usr/bin/make

[root@oracleenespanol2 ~]# which ar
/usr/bin/ar

[root@oracleenespanol2 ~]# which ld
/usr/bin/ld

[root@oracleenespanol2 ~]# which nm
/usr/bin/nm

[root@oracleenespanol2 ~]# cd $GI_HOME/OPatch

[root@oracleenespanol2 OPatch]# ./opatch auto /u01/app/oracle/patches/11.2.0.3/PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp

Executing /u01/app/grid/CRS/perl/bin/perl ./crs/patch11203.pl -patchdir /u01/app/oracle/patches/11.2.0.3 -patchn PSUOct2013 -ocmrf /u01/app/oracle/patches/11.2.0.3/PSUOct2013/ocm.rsp -paramfile /u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/crsconfig_params
/u01/app/grid/CRS/crs/install/s_crsconfig_defs

This is the main log file: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system: /u01/app/grid/CRS/cfgtoollogs/opatchauto2013-12-17_17-09-10.report.log

2013-12-17 17:09:10: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/grid/CRS/crs/install/crsconfig_params
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717/custom/server/17076717  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/oracle/product/11.2.0/db_1 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol2'
CRS-2673: Attempting to stop 'ora.crsd' on 'oracleenespanol2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'oracleenespanol2'
...
CRS-2673: Attempting to stop 'ora.cssd' on 'oracleenespanol2'
CRS-2677: Stop of 'ora.cssd' on 'oracleenespanol2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'oracleenespanol2'
CRS-2677: Stop of 'ora.gipcd' on 'oracleenespanol2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'oracleenespanol2'
CRS-2677: Stop of 'ora.gpnpd' on 'oracleenespanol2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oracleenespanol2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully unlock /u01/app/grid/CRS
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/17076717  apply successful for home  /u01/app/grid/CRS 
patch /u01/app/oracle/patches/11.2.0.3/PSUOct2013/16902043  apply successful for home  /u01/app/grid/CRS 
CRS-4123: Oracle High Availability Services has been started.

We verify again, just as we did on the standby: with opatch lsinventory, that patch 11.2.0.3.8 is correctly applied to our binaries, and with the crs_status.sh script, that the services that are up match the ones you had before starting.

What we have to do next is run catbundle.sql as the oracle user from $ORACLE_HOME/rdbms/admin on our primary database. You run this on one node only, not on both.


SYS@TEST> set pagesize 9999
col ACTION_TIME for a30
col COMMENTS for a30
col ACTION  for a8
col NAMESPACE for a12
col VERSION for a10
col BUNDLE_SERIES for a30
set lines 200
select name from v$database;
select host_name,instance_name from v$instance;
select * from dba_registry_history;

NAME
---------
TEST

SYS@TEST> 
HOST_NAME        INSTANCE_NAME
---------------------------------------------------------------- ----------------
oracleenespanol1                         TEST1

SYS@TEST> 

ACTION_TIME                    ACTION   NAMESPACE    VERSION            ID BUNDLE_SERIES                  COMMENTS
------------------------------ -------- ------------ ---------- ---------- ------------------------------ ------------------------------
17-SEP-11 10.21.11.595816 AM   APPLY    SERVER       11.2.0.3            0 PSU                            Patchset 11.2.0.2.0
24-MAY-12 01.56.22.270056 PM   APPLY    SERVER       11.2.0.3            0 PSU                            Patchset 11.2.0.2.0
03-JUL-12 09.43.20.177762 PM   APPLY    SERVER       11.2.0.3            2 PSU                            PSU 11.2.0.3.2
20-MAR-13 05.26.24.599603 PM   APPLY    SERVER       11.2.0.3            2 PSU                            PSU 11.2.0.3.2
14-AUG-13 04.48.25.893605 PM   APPLY    SERVER       11.2.0.3            7 PSU                            PSU 11.2.0.3.7
 
SQL> @catbundle.sql psu apply

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

Generating apply and rollback scripts...
Check the following file for errors:
/u01/app/oracle/cfgtoollogs/catbundle/catbundle_PSU_TEST_GENERATE_2013Dec17_18_08_13.log
Apply script: /u01/app/oracle/product/11.2.0/db_1/rdbms/admin/catbundle_PSU_TEST_APPLY.sql
Rollback script: /u01/app/oracle/product/11.2.0/db_1/rdbms/admin/catbundle_PSU_TEST_ROLLBACK.sql
...
Updating registry...
SQL> INSERT INTO registry$history
  2    (action_time, action,
  3     namespace, version, id,
  4     bundle_series, comments)
  5  VALUES
  6    (SYSTIMESTAMP, 'APPLY',
  7     SYS_CONTEXT('REGISTRY$CTX','NAMESPACE'),
  8     '11.2.0.3',
  9     8,
 10     'PSU',
 11     'PSU 11.2.0.3.8');

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/u01/app/oracle/cfgtoollogs/catbundle/catbundle_PSU_TEST_APPLY_2013Dec17_18_08_13.log

SYS@TEST> set pagesize 9999
col ACTION_TIME for a30
col COMMENTS for a30
col ACTION  for a8
col NAMESPACE for a12
col VERSION for a10
col BUNDLE_SERIES for a30
set lines 200
select name from v$database;
select host_name,instance_name from v$instance;
select * from dba_registry_history;

NAME
---------
TEST

SYS@TEST>

HOST_NAME        INSTANCE_NAME
---------------------------------------------------------------- ----------------
oracleenespanol1                         TEST1

SYS@TEST> 
ACTION_TIME                    ACTION   NAMESPACE    VERSION            ID BUNDLE_SERIES                  COMMENTS
------------------------------ -------- ------------ ---------- ---------- ------------------------------ ------------------------------
17-SEP-11 10.21.11.595816 AM   APPLY    SERVER       11.2.0.3            0 PSU                            Patchset 11.2.0.2.0
24-MAY-12 01.56.22.270056 PM   APPLY    SERVER       11.2.0.3            0 PSU                            Patchset 11.2.0.2.0
03-JUL-12 09.43.20.177762 PM   APPLY    SERVER       11.2.0.3            2 PSU                            PSU 11.2.0.3.2
20-MAR-13 05.26.24.599603 PM   APPLY    SERVER       11.2.0.3            2 PSU                            PSU 11.2.0.3.2
14-AUG-13 04.48.25.893605 PM   APPLY    SERVER       11.2.0.3            7 PSU                            PSU 11.2.0.3.7
17-DEC-13 06.08.13.889118 PM   APPLY    SERVER       11.2.0.3            8 PSU                            PSU 11.2.0.3.8

The only thing left for us to do is verify that the ARCHIVED REDO LOGS are being applied correctly on the standby database.

So first we are going to archive a REDO LOG on the TEST database (primary); as you can see below, 82197 is the sequence that will be archived.

SYS@TEST>  ARCHIVE LOG LIST

Database log mode	       Archive Mode
Automatic archival	       Enabled
Archive destination	       USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     82190
Next log sequence to archive   82197
Current log sequence	       82197


SYS@TEST> ALTER SYSTEM ARCHIVE LOG CURRENT;
SYS@TEST> ALTER SYSTEM CHECKPOINT;

At this point we are done with the primary database and the primary nodes. On servers oracleenespanol3 and oracleenespanol4 (standby) we are going to put TESTSTBY back into recovery mode. In my case I am using Active Data Guard, so be careful here: if you are not using it, it is an extra-cost option for your standby database.
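
If you are not licensed for Active Data Guard, skip the "alter database open" shown in the transcript below and start redo apply straight from mount. A minimal sketch, run as the oracle user on one standby node:

## Without Active Data Guard: the standby stays MOUNTED and managed
## recovery is started directly in the background.
sqlplus -S "/ as sysdba" <<'EOF'
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
EOF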

SYS@TESTSTBY1> select OPEN_MODE,DATAGUARD_BROKER,GUARD_STATUS,DATABASE_ROLE from v$database;

OPEN_MODE	     DATAGUAR GUARD_S DATABASE_ROLE
-------------------- -------- ------- ----------------
MOUNTED 	     DISABLED NONE    PHYSICAL STANDBY

SYS@TESTSTBY1> alter database open;

Database altered.

SYS@TESTSTBY1> select OPEN_MODE,DATAGUARD_BROKER,GUARD_STATUS,DATABASE_ROLE from v$database;

OPEN_MODE            DATAGUAR GUARD_S DATABASE_ROLE
-------------------- -------- ------- ----------------
READ ONLY            DISABLED NONE    PHYSICAL STANDBY


SYS@TESTSTBY1> select process , status from v$managed_standby where process  like '%MRP%';

no rows selected

SYS@TESTSTBY1> select process ,status , thread# , sequence# from gv$managed_standby;

PROCESS   STATUS	  THREAD#  SEQUENCE#
--------- ------------ ---------- ----------
ARCH	  CLOSING		1      82064
ARCH	  CONNECTED		0	   0
ARCH	  CONNECTED		0	   0
ARCH	  CONNECTED		0	   0
ARCH	  CLOSING		2      81785
ARCH	  CLOSING		2      81797
ARCH	  CONNECTED		0	   0
ARCH	  CLOSING		1      82074
ARCH	  CLOSING		2      81798
ARCH	  CLOSING		1      82075
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			2      81799
RFS	  IDLE			0	   0
RFS	  IDLE			1      82076
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0


SYS@TESTSTBY1> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

SYS@TESTSTBY1> select process , status from v$managed_standby where process ='MRP0';

PROCESS   STATUS
--------- ------------
MRP0	  APPLYING_LOG

SYS@TESTSTBY1> select process ,status , thread# , sequence# from gv$managed_standby;

PROCESS   STATUS	  THREAD#  SEQUENCE#
--------- ------------ ---------- ----------
ARCH	  CLOSING		1      82064
ARCH	  CONNECTED		0	   0
ARCH	  CONNECTED		0	   0
ARCH	  CONNECTED		0	   0
ARCH	  CLOSING		2      81785
MRP0	  APPLYING_LOG		2      81798
ARCH	  CLOSING		2      81797
ARCH	  CONNECTED		0	   0
ARCH	  CLOSING		1      82074
ARCH	  CLOSING		2      81798
ARCH	  CLOSING		1      82075
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			2      81799
RFS	  IDLE			0	   0
RFS	  IDLE			1      82076
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0
RFS	  IDLE			0	   0

SYS@TESTSTBY1> set lines 200 pages 9999
SELECT NAME,OPEN_MODE,PROTECTION_MODE,
       PROTECTION_LEVEL,
       DATABASE_ROLE ROLE,
       SWITCHOVER_STATUS
FROM V$DATABASE;

NAME	  OPEN_MODE	       PROTECTION_MODE	    PROTECTION_LEVEL	 ROLE		  SWITCHOVER_STATUS
--------- -------------------- -------------------- -------------------- ---------------- --------------------
TEST	  READ ONLY WITH APPLY MAXIMUM PERFORMANCE  MAXIMUM PERFORMANCE  PHYSICAL STANDBY NOT ALLOWED

And now, just to verify that everything is fine, we are going to check that the last ARCHIVED REDO LOG, which was 82197, has been applied, and look at the standby database's registry, where you will now see version 11.2.0.3.8.


SYS@TESTSTBY1> SELECT 'Last Applied  : ' Logs,
  TO_CHAR(next_time,'DD-MON-YY:HH24:MI:SS') TIME,SEQUENCE#
FROM v$archived_log
WHERE sequence# =
  (SELECT MAX(sequence#) FROM v$archived_log WHERE applied='YES'
  )
UNION
SELECT 'Last Received : ' Logs,
  TO_CHAR(next_time,'DD-MON-YY:HH24:MI:SS') TIME,SEQUENCE#
FROM v$archived_log
WHERE sequence# =
  (SELECT MAX(sequence#) FROM v$archived_log
  ); 

 LOGS		 TIME			      SEQUENCE#
---------------- --------------------------- ----------
Last Applied  :  18-DEC-13:11:23:58		  82197
Last Received :  18-DEC-13:11:23:58		  82197

SYS@TESTSTBY1> set pagesize 9999
col ACTION_TIME for a30
col COMMENTS for a30
col ACTION  for a8
col NAMESPACE for a12
col VERSION for a10
col BUNDLE_SERIES for a30
set lines 200
select name from v$database;
select host_name,instance_name from v$instance;
select * from dba_registry_history;

NAME
---------
TEST

HOST_NAME							 INSTANCE_NAME
---------------------------------------------------------------- ----------------
oracleenespanol3				                  TESTSTBY1


ACTION_TIME                    ACTION   NAMESPACE    VERSION            ID BUNDLE_SERIES                  COMMENTS
------------------------------ -------- ------------ ---------- ---------- ------------------------------ ------------------------------
17-SEP-11 10.21.11.595816 AM   APPLY    SERVER       11.2.0.3            0 PSU                            Patchset 11.2.0.2.0
24-MAY-12 01.56.22.270056 PM   APPLY    SERVER       11.2.0.3            0 PSU                            Patchset 11.2.0.2.0
03-JUL-12 09.43.20.177762 PM   APPLY    SERVER       11.2.0.3            2 PSU                            PSU 11.2.0.3.2
20-MAR-13 05.26.24.599603 PM   APPLY    SERVER       11.2.0.3            2 PSU                            PSU 11.2.0.3.2
14-AUG-13 04.48.25.893605 PM   APPLY    SERVER       11.2.0.3            7 PSU                            PSU 11.2.0.3.7
17-DEC-13 06.08.13.889118 PM   APPLY    SERVER       11.2.0.3            8 PSU                            PSU 11.2.0.3.8

Conclusion

This is a very useful way to maintain high availability in your environment while allowing you to verify that the patch you are about to apply will not harm your primary database.