In this part of the series, we will upgrade the Oracle Grid Infrastructure on a 2-node RAC cluster. This is the first part of a 5-part series that includes:
Part 1. Upgrading the Oracle Grid Infrastructure on a 2-node RAC from 18c to 19c
Part 2. Upgrading the Oracle Grid Infrastructure on the Physical Standby from 18c to 19c
Part 3. Upgrade Oracle RAC Database from 18c to 19c
Part 4. Upgrade Oracle Physical Standby Database from 18c to 19c
Part 5. Installing the Latest Oracle 19c Update Patches – Opatch/OJVM/GI/DB/JDK on Oracle 19c RAC on Linux
For the purpose of this guide, note the following:
-To make following this guide easier, put all your downloaded software in /usr/software/oracle.
-GI stands for Grid Infrastructure (on a standalone server, GI is also known as Oracle Restart).
-DB stands for Database.
-DB Home owner is Linux OS user “oracle”.
-OS Version: Oracle Linux Server release 7.5
-Source GI/DB Version: 18.14.0.0
-Target GI/DB Version: 19.3.0.0
-Source 18c DB Home: /oracle/oraBase/oracle/18.0.0/database.
-Target 19c DB Home: /oracle/oraBase/oracle/19.0.0/database.
-GI Home owner is Linux OS user “grid”.
-Source 18c GI Home: /oracle/oraBase/18.0.0/grid.
-Target 19c GI Home: /oracle/oraBase/19.0.0/grid.
-DG stands for Data Guard.
-ALWAYS make a backup of the database and the Oracle software before you start (a minimal backup sketch follows the overview below).
Primary RAC DB Name: PROD
RAC Node 1: rac1
SID: PROD1
RAC Node 2: rac2
SID: PROD2
—
Standby DB Name: PRODDR
Physical Standby Node: stdby1
SID: PRODDR1
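Regarding the backup note above, here is a minimal sketch of what that could look like before you start. The /backup destination is just a placeholder for whatever backup location you use, and the RMAN step assumes the oracle user's environment already points at the running 18c database.
# Cold copies of the existing 18c GI and DB homes, as root on each node:
tar -czf /backup/grid_18c_home_$(hostname).tar.gz -C /oracle/oraBase/18.0.0 grid
tar -czf /backup/db_18c_home_$(hostname).tar.gz -C /oracle/oraBase/oracle/18.0.0 database
# Manual OCR backup, as root on rac1:
/oracle/oraBase/18.0.0/grid/bin/ocrconfig -manualbackup
# Full database backup with RMAN, as oracle on rac1:
rman target / <<'EOF'
BACKUP DATABASE PLUS ARCHIVELOG;
EOF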
1. Prerequisites and Preparations
1.1 Download the following software:
-oracle-database-preinstall-19c for Linux 7
-The latest OPatch from Oracle Support
-The latest AutoUpgrade Tool (Doc ID 2485457.1)
-The latest 19c Update Patches
-The latest JDK for GI and DB Homes
Note: The JDK is for GI and DB Homes, not to be confused with the JDK that is installed directly on the OS. That is not covered here.
1.2 Install oracle-database-preinstall-19c
[root@rac1 ~]# cd /usr/software/oracle/
[root@rac1 oracle]# yum install -y oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm
Loaded plugins: langpacks, ulninfo
Examining oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm: oracle-database-preinstall-19c-1.0-3.el7.x86_64
Marking oracle-database-preinstall-19c-1.0-3.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package oracle-database-preinstall-19c.x86_64 0:1.0-3.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=====================================================================================================================================================================================================================================
Package Arch Version Repository Size
=====================================================================================================================================================================================================================================
Installing:
oracle-database-preinstall-19c x86_64 1.0-3.el7 /oracle-database-preinstall-19c-1.0-3.el7.x86_64 76 k
Transaction Summary
=====================================================================================================================================================================================================================================
Install 1 Package
Total size: 76 k
Installed size: 76 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : oracle-database-preinstall-19c-1.0-3.el7.x86_64 1/1
Verifying : oracle-database-preinstall-19c-1.0-3.el7.x86_64 1/1
Installed:
oracle-database-preinstall-19c.x86_64 0:1.0-3.el7
Complete!
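Before moving on, a quick sanity check that the preinstall RPM did its job. The exact names of the sysctl and limits files dropped by the RPM can vary between RPM builds, so treat the paths below as a guide rather than gospel; run this on both RAC nodes.
rpm -q oracle-database-preinstall-19c
id oracle                                          # the RPM creates the oracle user and oinstall/dba groups if missing
sysctl fs.aio-max-nr fs.file-max kernel.shmmni     # spot-check a few kernel parameters it sets
cat /etc/security/limits.d/oracle-database-preinstall-19c.conf   # resource limits file dropped by the RPM (name may vary)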
1.3 Create the 19c GI Home Directories on both RAC nodes.
mkdir -p /oracle/oraBase/19.0.0/grid
chown -R grid.oinstall /oracle/oraBase/19.0.0
1.4 Extract the GI software into the 19c GI Home on node rac1 ONLY on the Primary. The installation will copy the Oracle binaries over to the second RAC node.
su - grid
cd /oracle/oraBase/19.0.0/grid
unzip -oq /usr/software/oracle/LINUX.X64_193000_grid_home.zip
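Before launching anything from the new home, it is worth confirming the extraction completed and that the filesystem still has room. A minimal check, using the paths defined at the top of this guide:
df -h /oracle                                   # free space on the filesystem holding the Oracle homes
ls -l /oracle/oraBase/19.0.0/grid/gridSetup.sh  # the installer should be present after the unzip
du -sh /oracle/oraBase/19.0.0/grid              # rough size of the extracted 19c GI home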
2. On Primary rac1, upgrade the Grid Infrastructure.
2.1 As the grid user, execute the following command to determine whether any additional preinstallation steps are needed. It will also generate a fixup script.
[grid@rac1 ~]$ cd /oracle/oraBase/19.0.0/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /oracle/oraBase/18.0.0/grid -dest_crshome /oracle/oraBase/19.0.0/grid -dest_version 19.3.0.0 -fixup -verbose
...
Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following nodes:
rac2,rac1
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
Verifying Network Time Protocol (NTP) ...FAILED
rac2: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on
nodes "rac2,rac1" on which NTP daemon or service was
not running
rac1: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on
nodes "rac2,rac1" on which NTP daemon or service was
not running
Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.
CVU operation performed: stage -pre crsinst
Date: Jun 17, 2024 2:48:01 PM
CVU home: /oracle/oraBase/19.0.0/grid/
User: grid
Fixing the errors.
The PRVG-1017 errors were caused by the NTP service not running. To resolve them, enable and start the ntpd service on both RAC nodes.
[root@rac1 oracle]# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
Active: inactive (dead)
[root@rac1 oracle]# systemctl is-enabled ntpd
disabled
[root@rac1 oracle]# systemctl enable --now ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[root@rac1 oracle]# systemctl is-enabled ntpd
enabled
[root@rac1 oracle]# systemctl status ntpd
● ntpd.service - Network Time Service
Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2024-06-17 15:02:10 PDT; 15min ago
Main PID: 29154 (ntpd)
CGroup: /system.slice/ntpd.service
└─29154 /usr/sbin/ntpd -u ntp:ntp -g
Jun 17 15:02:10 rac1 ntpd[29154]: Listen normally on 5 eth1:3 10.71.40.10 UDP 123
Jun 17 15:02:10 rac1 ntpd[29154]: Listen normally on 6 eth1:5 10.71.40.15 UDP 123
Jun 17 15:02:10 rac1 ntpd[29154]: Listen normally on 7 eth2 10.71.41.17 UDP 123
Jun 17 15:02:10 rac1 ntpd[29154]: Listen normally on 8 eth2:1 169.254.25.155 UDP 123
Jun 17 15:02:10 rac1 ntpd[29154]: Listen normally on 9 virbr0 192.168.122.1 UDP 123
Jun 17 15:02:10 rac1 ntpd[29154]: Listening on routing socket on fd #26 for interface updates
Jun 17 15:02:11 rac1 ntpd[29154]: 0.0.0.0 c016 06 restart
Jun 17 15:02:11 rac1 ntpd[29154]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Jun 17 15:02:11 rac1 ntpd[29154]: 0.0.0.0 c011 01 freq_not_set
Jun 17 15:02:17 rac1 ntpd[29154]: 0.0.0.0 c614 04 freq_mode
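The same fix is needed on rac2. If root ssh between the nodes is available (an assumption on my part), a small loop handles both nodes at once; otherwise simply log in to rac2 and repeat the systemctl commands above.
for h in rac1 rac2; do
  ssh root@${h} "systemctl enable --now ntpd && systemctl is-active ntpd"
done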
PRVG-11250 is raised because the check requires root privileges, and we can't run runcluvfy.sh as root. To address this, we pass the option "-method root", which makes CVU prompt for the root password and perform the privileged checks itself.
After fixing all the issues listed, we run it again. This time, it succeeds with no errors.
[root@rac1 grid]# su - grid
[grid@rac1 ~]$ cd /oracle/oraBase/19.0.0/grid
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /oracle/oraBase/18.0.0/grid -dest_crshome /oracle/oraBase/19.0.0/grid -dest_version 19.3.0.0 -fixup -verbose -method root
Enter "ROOT" password:
...
Pre-check for cluster services setup was successful.
CVU operation performed: stage -pre crsinst
Date: Jun 17, 2024 3:26:33 PM
CVU home: /oracle/oraBase/19.0.0/grid/
User: grid
2.2 Create the response file for the GI upgrade in /usr/software/oracle/19c_grid_crs_upgrade.rsp.
Below is my sample response file.
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/oracle/oraBase/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/oracle/oraBase/grid
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=bisdb-cluster
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=rac1,rac2
oracle.install.crs.configureGIMR=false
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
In the response file, change these values to match your system's configuration:
INVENTORY_LOCATION
ORACLE_BASE
oracle.install.crs.config.clusterName
oracle.install.crs.config.clusterNodes
oracle.install.asm.diskGroup.name <-- name of the diskgroup where your votedisk is located.
Run these commands to get some of the information you need. The first two commands are different ways to get the cluster name. The third gives you the name of the diskgroup (i.e., CRS) that contains the votedisk.
The inventory_loc variable in /etc/oraInst.loc provides the value for INVENTORY_LOCATION.
Because the 19c home is not yet configured, we run these commands from the 18c home.
[root@rac1 grid]# /oracle/oraBase/18.0.0/grid/bin/olsnodes -c
bisdb-cluster
[root@rac1 grid]# /oracle/oraBase/18.0.0/grid/bin/cemutlo -n
bisdb-cluster
[root@rac1 grid]# /oracle/oraBase/18.0.0/grid/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE f2aef2f98bed4ffdbf24ff4e6a5cbd5c (ORCL:CRS1) [CRS]
Located 1 voting disk(s).
[root@rac1 oracle]# cat /etc/oraInst.loc
inventory_loc=/oracle/oraBase/oraInventory
inst_group=oinstall
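If you prefer not to edit the response file by hand, a sed sketch like the one below plugs in the values gathered above. The file name and values are the ones used in this guide; adjust ORACLE_BASE and any other settings the same way for your environment.
cd /usr/software/oracle
sed -i \
  -e 's|^INVENTORY_LOCATION=.*|INVENTORY_LOCATION=/oracle/oraBase/oraInventory|' \
  -e 's|^oracle.install.crs.config.clusterName=.*|oracle.install.crs.config.clusterName=bisdb-cluster|' \
  -e 's|^oracle.install.crs.config.clusterNodes=.*|oracle.install.crs.config.clusterNodes=rac1,rac2|' \
  -e 's|^oracle.install.asm.diskGroup.name=.*|oracle.install.asm.diskGroup.name=CRS|' \
  19c_grid_crs_upgrade.rsp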
2.3 We will do a dry run, which will not actually upgrade the GI home but will copy the software to the other RAC node in the cluster. We will run the actual upgrade if the dry run is successful. Run these commands as the grid user on rac1 ONLY.
[grid@rac1 ~]$ unset ORACLE_HOME
[grid@rac1 ~]$ unset ORACLE_SID
[grid@rac1 ~]$ cd /oracle/oraBase/19.0.0/grid
[grid@rac1 grid]$ ./gridSetup.sh -silent -ignorePrereqFailure -dryRunForUpgrade -responseFile /usr/software/oracle/19c_grid_crs_upgrade.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
[FATAL] [INS-43052] The Oracle home location contains directories or files on following remote nodes:[rac2].
ACTION: Please provide valid Oracle home location which is empty.
The FATAL error indicates that the target 19c GI home on rac2 already contains files, most likely left over from an earlier attempt. After emptying /oracle/oraBase/19.0.0/grid on rac2, re-run the same command.
[grid@rac1 grid]$ ./gridSetup.sh -silent -ignorePrereqFailure -dryRunForUpgrade -responseFile /usr/software/oracle/19c_grid_crs_upgrade.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /oracle/oraBase/oraInventory/logs/GridSetupActions2024-06-17_05-05-14PM/gridSetupActions2024-06-17_05-05-14PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /oracle/oraBase/oraInventory/logs/GridSetupActions2024-06-17_05-05-14PM/gridSetupActions2024-06-17_05-05-14PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/oracle/oraBase/19.0.0/grid/install/response/grid_2024-06-17_05-05-14PM.rsp
You can find the log of this install session at:
/oracle/oraBase/oraInventory/logs/GridSetupActions2024-06-17_05-05-14PM/gridSetupActions2024-06-17_05-05-14PM.log
As a root user, execute the following script(s):
1. /oracle/oraBase/19.0.0/grid/rootupgrade.sh
Execute /oracle/oraBase/19.0.0/grid/rootupgrade.sh on the following nodes:
[rac1]
Run the script on the local node.
Successfully Setup Software with warning(s).
2.4 Run /oracle/oraBase/19.0.0/grid/rootupgrade.sh as root on rac1
[root@rac1 ContentsXML]# /oracle/oraBase/19.0.0/grid/rootupgrade.sh
Check /oracle/oraBase/19.0.0/grid/install/root_rac1_2024-06-17_17-19-20-997034230.log for the output of root script
2.5 Some directories and files get changed to root ownership. They need to be owned by the grid user. Run this command on both RAC nodes (rac1 and rac2).
# chown -R grid.oinstall /oracle/oraBase/19.0.0/grid
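If root ssh between the nodes is set up (an assumption), the ownership fix can be pushed to both nodes from rac1 in one go; otherwise run the chown locally on each node.
for h in rac1 rac2; do
  ssh root@${h} "chown -R grid:oinstall /oracle/oraBase/19.0.0/grid"
done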
2.6 Run gridSetup.sh on node rac1 ONLY to upgrade the grid homes on both RAC nodes.
[grid@rac1 ~]$ unset ORACLE_HOME
[grid@rac1 ~]$ unset ORACLE_SID
[grid@rac1 ~]$ cd /oracle/oraBase/19.0.0/grid
[grid@rac1 grid]$ ./gridSetup.sh -silent -ignorePrereqFailure -responseFile /usr/software/oracle/19c_grid_crs_upgrade.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /oracle/oraBase/oraInventory/logs/GridSetupActions2024-06-17_05-51-06PM/gridSetupActions2024-06-17_05-51-06PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /oracle/oraBase/oraInventory/logs/GridSetupActions2024-06-17_05-51-06PM/gridSetupActions2024-06-17_05-51-06PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/oracle/oraBase/19.0.0/grid/install/response/grid_2024-06-17_05-51-06PM.rsp
As a root user, execute the following script(s):
1. /oracle/oraBase/19.0.0/grid/rootupgrade.sh
Execute /oracle/oraBase/19.0.0/grid/rootupgrade.sh on the following nodes:
[rac1, rac2]
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.
Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
/oracle/oraBase/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /usr/software/oracle/19c_grid_crs_upgrade.rsp [-silent]
2.6.1 As root, execute /oracle/oraBase/19.0.0/grid/rootupgrade.sh on rac1. When it’s done, run it on rac2.
On rac1:
[root@rac1 ~]# /oracle/oraBase/19.0.0/grid/rootupgrade.sh
Check /oracle/oraBase/19.0.0/grid/install/root_rac1_2024-06-17_18-10-30-009989931.log for the output of root script
[root@rac1 ~]# cat /oracle/oraBase/19.0.0/grid/install/root_rac1_2024-06-17_18-10-30-009989931.log
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/oraBase/19.0.0/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /oracle/oraBase/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/oracle/oraBase/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2024-06-17_06-10-30PM.log
2024/06/17 18:10:39 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2024/06/17 18:10:39 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2024/06/17 18:10:39 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2024/06/17 18:10:40 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2024/06/17 18:10:40 CLSRSC-464: Starting retrieval of the cluster configuration data
2024/06/17 18:10:47 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2024/06/17 18:12:20 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2024/06/17 18:12:21 CLSRSC-693: CRS entities validation completed successfully.
2024/06/17 18:12:25 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2024/06/17 18:12:25 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2024/06/17 18:12:26 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2024/06/17 18:12:27 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2024/06/17 18:12:27 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2024/06/17 18:14:01 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2024/06/17 18:14:01 CLSRSC-482: Running command: '/oracle/oraBase/18.0.0/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2024/06/17 18:14:05 CLSRSC-482: Running command: '/oracle/oraBase/19.0.0/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /oracle/oraBase/18.0.0/grid -oldCRSVersion 18.0.0.0.0 -firstNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2024/06/17 18:14:08 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2024/06/17 18:14:13 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2024/06/17 18:14:51 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2024/06/17 18:14:53 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2024/06/17 18:14:56 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2024/06/17 18:15:05 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2024/06/17 18:15:36 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2024/06/17 18:15:43 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2024/06/17 18:15:49 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2024/06/17 18:15:50 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2024/06/17 18:16:48 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2024/06/17 18:17:52 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2024/06/17 18:17:58 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2024/06/17 18:20:05 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2024/06/17 18:20:27 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2024/06/17 18:20:30 CLSRSC-474: Initiating upgrade of resource types
2024/06/17 18:21:14 CLSRSC-475: Upgrade of resource types successfully initiated.
2024/06/17 18:21:25 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2024/06/17 18:21:32 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
On rac2:
[root@rac2 ~]# /oracle/oraBase/19.0.0/grid/rootupgrade.sh
Check /oracle/oraBase/19.0.0/grid/install/root_rac2_2024-06-17_18-24-30-463885556.log for the output of root script
[root@rac2 ~]# cat /oracle/oraBase/19.0.0/grid/install/root_rac2_2024-06-17_18-24-30-463885556.log
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/oraBase/19.0.0/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /oracle/oraBase/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/oracle/oraBase/grid/crsdata/rac2/crsconfig/rootcrs_rac2_2024-06-17_06-24-51PM.log
2024/06/17 18:25:00 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2024/06/17 18:25:00 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2024/06/17 18:25:00 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2024/06/17 18:25:01 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2024/06/17 18:25:01 CLSRSC-464: Starting retrieval of the cluster configuration data
2024/06/17 18:25:16 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2024/06/17 18:25:16 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2024/06/17 18:25:17 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2024/06/17 18:25:18 CLSRSC-363: User ignored prerequisites during installation
2024/06/17 18:25:20 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2024/06/17 18:25:20 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2024/06/17 18:25:50 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
ASM configuration upgraded in local node successfully.
2024/06/17 18:26:06 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2024/06/17 18:26:45 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2024/06/17 18:26:48 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2024/06/17 18:26:50 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2024/06/17 18:26:55 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2024/06/17 18:26:55 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2024/06/17 18:26:57 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2024/06/17 18:26:59 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2024/06/17 18:27:00 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2024/06/17 18:27:51 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2024/06/17 18:28:52 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2024/06/17 18:28:54 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2024/06/17 18:31:06 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 19 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2024/06/17 18:31:47 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
Start upgrade invoked..
2024/06/17 18:31:51 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2024/06/17 18:31:51 CLSRSC-482: Running command: '/oracle/oraBase/19.0.0/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Started to upgrade Oracle ACFS.
Oracle ACFS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 19.0.0.0.0.
2024/06/17 18:33:02 CLSRSC-479: Successfully set Oracle Clusterware active version
2024/06/17 18:33:05 CLSRSC-476: Finishing upgrade of resource types
2024/06/17 18:33:12 CLSRSC-477: Successfully completed upgrade of resource types
2024/06/17 18:33:36 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
Successfully updated XAG resources.
2024/06/17 18:34:00 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Once rootupgrade.sh has completed on both RAC nodes, run gridSetup.sh with the -executeConfigTools option on rac1 ONLY as the grid user.
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ /oracle/oraBase/19.0.0/grid/gridSetup.sh -executeConfigTools -responseFile /usr/software/oracle/19c_grid_crs_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...
You can find the logs of this session at:
/oracle/oraBase/oraInventory/logs/GridSetupActions2024-06-17_06-36-37PM
You can find the log of this install session at:
/oracle/oraBase/oraInventory/logs/UpdateNodeList2024-06-17_06-36-37PM.log
You can find the log of this install session at:
/oracle/oraBase/oraInventory/logs/UpdateNodeList2024-06-17_06-36-37PM.log
Successfully Configured Software.
On rac1, add this to /etc/oratab.
+ASM1:/oracle/oraBase/19.0.0/grid:N
On rac2, add this to /etc/oratab.
+ASM2:/oracle/oraBase/19.0.0/grid:N
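A minimal sketch for adding those entries from the command line as root; the grep guard just avoids creating a duplicate line if the entry already exists.
On rac1:
grep -q '^+ASM1:' /etc/oratab || echo "+ASM1:/oracle/oraBase/19.0.0/grid:N" >> /etc/oratab
On rac2:
grep -q '^+ASM2:' /etc/oratab || echo "+ASM2:/oracle/oraBase/19.0.0/grid:N" >> /etc/oratab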
2.6.2 As the grid user, detach the 18c GI home from the central inventory on both rac1 and rac2.
# su - grid
$ cd /oracle/oraBase/18.0.0/grid/oui/bin
$ ./runInstaller -detachhome ORACLE_HOME=/oracle/oraBase/18.0.0/grid ORACLE_HOME_NAME="OraGI18Home1"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 16379 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
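To confirm the detach took effect, check the central inventory. The path comes from /etc/oraInst.loc shown earlier; the ContentsXML/inventory.xml location assumes the standard inventory layout.
grep "18.0.0/grid" /oracle/oraBase/oraInventory/ContentsXML/inventory.xml || echo "18c GI home no longer registered"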
3. Check Status and Verify.
Check Cluster Status.
[grid@rac1 bin]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Check Cluster Software Version
[grid@rac1 bin]$ crsctl query crs softwareversion -all
Oracle Clusterware version on node [rac1] is [19.0.0.0.0]
Oracle Clusterware version on node [rac2] is [19.0.0.0.0]
[grid@rac1 bin]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [724960844].
[grid@rac1 bin]$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
As the grid user, stop the listener on both RAC nodes.
$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
$ srvctl stop listener
$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is not running
On both rac1 and rac2, update /home/grid/.bash_profile to point to the new 19c home by updating these 2 lines:
ORACLE_HOME=/oracle/oraBase/19.0.0/grid
GRID_HOME=/oracle/oraBase/19.0.0/grid
$ . /home/grid/.bash_profile
$ env|egrep "ORACLE_HOME|GRID_HOME"
GRID_HOME=/oracle/oraBase/19.0.0/grid
ORACLE_HOME=/oracle/oraBase/19.0.0/grid
On both rac1 and rac2, update listener.ora to point to the new 19c home.
$ cat /oracle/oraBase/19.0.0/grid/network/admin/listener.ora
ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM)))) # line added by Agent
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = PROD)
(ORACLE_HOME = /oracle/oraBase/oracle/19.0.0/database)
(SID_NAME = PROD1)
)
(SID_DESC =
(GLOBAL_DBNAME = PROD_DGMGRL)
(ORACLE_HOME = /oracle/oraBase/oracle/19.0.0/database)
(SID_NAME = PROD1)
)
)
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2)))) # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3)))) # line added by Agent
MGMTLSNR=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=MGMTLSNR)))) # line added by Agent
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))) # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))) # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF # line added by Agent - Disabled by Agent because REMOTE_REGISTRATION_ADDRESS is set
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_MGMTLSNR=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_MGMTLSNR=SUBNET # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN3=OFF # line added by Agent - Disabled by Agent because REMOTE_REGISTRATION_ADDRESS is set
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN2=OFF # line added by Agent - Disabled by Agent because REMOTE_REGISTRATION_ADDRESS is set
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET # line added by Agent
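One simple way to populate listener.ora in the 19c home (an assumption, not the only way) is to start from the 18c file and adjust the ORACLE_HOME entries so it ends up like the sample above.
cp /oracle/oraBase/18.0.0/grid/network/admin/listener.ora /oracle/oraBase/19.0.0/grid/network/admin/
vi /oracle/oraBase/19.0.0/grid/network/admin/listener.ora   # update the ORACLE_HOME lines under SID_LIST_LISTENER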
As the grid user, start the listener on both RAC nodes.
$ srvctl start listener
$ srvctl status listener
Listener LISTENER is enabled
Listener LISTENER is running on node(s): rac1,rac2
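A few optional final checks as the grid user, now that everything points at the 19c home (asmcmd showclusterstate is expected to report Normal once the upgrade is complete):
$ crsctl stat res -t            # all cluster resources and their state
$ srvctl status asm             # ASM should be running on rac1 and rac2
$ asmcmd showclusterstate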
Congratulations! Oracle Grid Infrastructure has been upgraded to 19c!