In this part of the series, we will upgrade the Oracle RAC Database from 18c to 19c. This is the third part of a series that includes:
Part 1. Upgrade Oracle Grid Infrastructure on the RAC Cluster from 18c to 19c
Part 2. Upgrade Oracle Grid Infrastructure on the Physical Standby from 18c to 19c
Part 3. Upgrade Oracle RAC Database from 18c to 19c
Part 4. Upgrade Oracle Physical Standby Database from 18c to 19c
Part 5. Install the Latest Oracle 19c Update Patches – OPatch/OJVM/GI/DB/JDK on Oracle 19c RAC on Linux
For the purpose of this guide, note the following:
-To make following this guide easier, put all your downloaded software in /usr/software/oracle.
-GI stands for Grid Infrastructure (known as Oracle Restart on standalone database servers).
-DB stands for Database.
-DG stands for Data Guard.
-OS Version: Oracle Linux Server release 7.5
-Source GI/DB Version: 18.14.0.0
-Target GI/DB Version: 19.3.0.0
-DB Home owner is Linux OS user “oracle”.
-Source 18c DB Home: /oracle/oraBase/oracle/18.0.0/database
-Target 19c DB Home: /oracle/oraBase/oracle/19.0.0/database
-GI Home owner is Linux OS user “grid”.
-Source 18c GI Home: /oracle/oraBase/18.0.0/grid
-Target 19c GI Home: /oracle/oraBase/19.0.0/grid
-ALWAYS make a backup of the database and the Oracle software before you start.
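As a hedged illustration only (your backup strategy may differ), a minimal RMAN full backup of the primary before starting might look like:

```
[oracle@rac1 ~]$ rman target /
RMAN> backup database plus archivelog;
RMAN> list backup summary;
```

The `list backup summary` at the end lets you confirm the backup pieces were created before proceeding.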
Primary RAC DB Name: PROD
RAC Node 1: rac1
SID: PROD1
RAC Node 2: rac2
SID: PROD2
—
Standby DB Name: PRODDR
Physical Standby Node: stdby1
SID: PRODDR1
1. Prerequisites and Preparations
1.1 Download the following software:
-oracle-database-preinstall-19c for Linux 7
-The latest OPatch from Oracle Support
-The latest AutoUpgrade Tool (Doc ID 2485457.1)
-The latest 19c Update Patches
-The latest JDK for GI and DB Homes
Note: This JDK is for the GI and DB Homes, not to be confused with the JDK installed directly on the OS, which is not covered here.
1.2 Enable ARCHIVELOG mode. Skip this section if your database is already in ARCHIVELOG mode.
Note: To enable ARCHIVELOG mode on a RAC database, you need to stop all the database instances, then start one in mount mode and run the commands below.
SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';
SQL> alter system set db_recovery_file_dest_size=85G scope=both sid='*';
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
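Because this is a RAC database, the shutdown and startup must cover all instances; a hedged sketch using srvctl (database name PROD as in this guide) is:

```
[oracle@rac1 ~]$ srvctl stop database -d PROD -o immediate
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> exit
[oracle@rac1 ~]$ srvctl start database -d PROD
```

Afterwards, "archive log list" in SQL*Plus should report the database log mode as Archive Mode.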
1.3 Verify Data Guard is Healthy.
DGMGRL> show configuration;
Configuration - BISDB
Protection Mode: MaxPerformance
Members:
PROD - Primary database
PRODDR1 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 45 seconds ago)
DGMGRL> show database 'PROD';
Database - PROD
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
PROD1
PROD2
Database Status:
SUCCESS
DGMGRL> show database 'PRODDR1';
Database - PRODDR1
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 7.00 KByte/s
Real Time Query: ON
Instance(s):
PRODDR1
Database Status:
SUCCESS
1.4 On Primary, determine the log_archive_dest_n used for log transport and set the corresponding log_archive_dest_state_n to defer.
SQL> set lines 200 pages 9999
SQL> col value_col_plus_show_param ON FOR a100 HEADING VALUE
SQL> show parameter log_archive_dest_3
NAME TYPE VALUE
------------------------------------ --------------------------------- ----------------------------------------------------------------------------------------------------
log_archive_dest_3 string service="proddr1", ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections
=1 reopen=300 db_unique_name="PRODDR1" net_timeout=30, valid_for=(online_logfile,all_roles)
SQL> show parameter log_archive_dest_state_3
NAME TYPE VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_state_3 string ENABLE
SQL> alter system set log_archive_dest_state_3='defer' scope=both sid='*';
System altered.
SQL> show parameter log_archive_dest_state_3
NAME TYPE VALUE
------------------------------------ --------------------------------- ------------------------------
log_archive_dest_state_3 string defer
The scope=both applies the setting to both memory and the spfile, and sid=’*’ applies it to all RAC instances.
1.5 On Standby, stop the managed recovery.
SQL> alter database recover managed standby database cancel;
Database altered.
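To confirm that media recovery has actually stopped, one hedged check is that no MRP process remains in v$managed_standby:

```
SQL> select process, status from v$managed_standby where process like 'MRP%';

no rows selected
```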
1.6 Create the 19c DB Home Directories on both RAC nodes.
mkdir -p /oracle/oraBase/oracle/19.0.0/database
chown -R oracle.oinstall /oracle/oraBase/oracle/19.0.0
1.7 On the Primary, extract the DB software into the 19c DB Home on node rac1 ONLY. The installation will copy the Oracle binaries over to the second RAC node.
su - oracle
cd /oracle/oraBase/oracle/19.0.0/database
unzip -oq /usr/software/oracle/LINUX.X64_193000_db_home.zip
2 On rac1, create a response file in /usr/software/oracle/19c_db_rac.rsp. Here’s my response file. Edit it to reflect your environment.
oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v19.0.0
oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/oracle/oraBase/oraInventory
ORACLE_BASE=/oracle/oraBase/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=oper
oracle.install.db.OSBACKUPDBA_GROUP=backupdba
oracle.install.db.OSDGDBA_GROUP=dgdba
oracle.install.db.OSKMDBA_GROUP=kmdba
oracle.install.db.OSRACDBA_GROUP=racdba
oracle.install.db.rootconfig.executeRootScript=false
oracle.install.db.CLUSTER_NODES=rac1,rac2
oracle.install.db.config.starterdb.type=GENERAL_PURPOSE
oracle.install.db.ConfigureAsContainerDB=false
oracle.install.db.config.starterdb.memoryOption=false
oracle.install.db.config.starterdb.installExampleSchemas=false
oracle.install.db.config.starterdb.managementOption=DEFAULT
oracle.install.db.config.starterdb.omsPort=0
oracle.install.db.config.starterdb.enableRecovery=false
3 Install the DB Software. Execute on rac1 as user oracle.
[oracle@rac1 ~]$ cd /oracle/oraBase/oracle/19.0.0/database
[oracle@rac1 database]$ ./runInstaller -ignorePrereq -waitforcompletion -silent -responseFile /usr/software/oracle/19c_db_rac.rsp
Launching Oracle Database Setup Wizard...
[WARNING] [INS-13013] Target environment does not meet some mandatory requirements.
CAUSE: Some of the mandatory prerequisites are not met. See logs for details. /oracle/oraBase/oraInventory/logs/InstallActions2024-06-18_12-06-47PM/installActions2024-06-18_12-06-47PM.log
ACTION: Identify the list of failed prerequisite checks from the log: /oracle/oraBase/oraInventory/logs/InstallActions2024-06-18_12-06-47PM/installActions2024-06-18_12-06-47PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/oracle/oraBase/oracle/19.0.0/database/install/response/db_2024-06-18_12-06-47PM.rsp
You can find the log of this install session at:
/oracle/oraBase/oraInventory/logs/InstallActions2024-06-18_12-06-47PM/installActions2024-06-18_12-06-47PM.log
As a root user, execute the following script(s):
1. /oracle/oraBase/oracle/19.0.0/database/root.sh
Execute /oracle/oraBase/oracle/19.0.0/database/root.sh on the following nodes:
[rac1, rac2]
Successfully Setup Software with warning(s).
3.1 As root user, run root.sh on rac1, then on rac2.
[root@rac1 ~]# /oracle/oraBase/oracle/19.0.0/database/root.sh
Check /oracle/oraBase/oracle/19.0.0/database/install/root_rac1_2024-06-18_12-15-51-281834027.log for the output of root script
[root@rac2 ~]# /oracle/oraBase/oracle/19.0.0/database/root.sh
Check /oracle/oraBase/oracle/19.0.0/database/install/root_rac2_2024-06-18_12-15-57-302525999.log for the output of root script
4 Create an AutoUpgrade config file: /usr/software/oracle/18c_19c_autoupgrade.cfg
global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.log_dir=/home/oracle/autoupgrade/log
upg1.sid=PROD
upg1.source_home=/oracle/oraBase/oracle/18.0.0/database
upg1.target_home=/oracle/oraBase/oracle/19.0.0/database
upg1.target_version=19.3.0.0
upg1.upgrade_node=rac1
upg1.run_utlrp=yes
upg1.timezone_upg=yes
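If you prefer to start from a template rather than writing the config by hand, AutoUpgrade can generate a sample config file (see Doc ID 2485457.1):

```
[oracle@rac1 ~]$ /oracle/oraBase/oracle/19.0.0/database/jdk/bin/java -jar /usr/software/oracle/autoupgrade.jar -create_sample_file config
```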
5 As user oracle on rac1, run AutoUpgrade in analyze mode to check for upgrade readiness.
Note: Analyze mode will not perform any actual upgrade.
[oracle@rac1 database]$ /oracle/oraBase/oracle/19.0.0/database/jdk/bin/java -jar /usr/software/oracle/autoupgrade.jar -config /usr/software/oracle/18c_19c_autoupgrade.cfg -mode analyze
AutoUpgrade 24.4.240426 launched with default internal options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 CDB(s) plus 2 PDB(s) will be analyzed
Type 'help' to list console commands
upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Please check the summary report at:
/home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html
/home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log
[oracle@rac1 database]$ cat /home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log
==========================================
Autoupgrade Summary Report
==========================================
[Date] Tue Jun 18 13:51:33 PDT 2024
[Number of Jobs] 1
==========================================
[Job ID] 100
==========================================
[DB Name] PROD
[Version Before Upgrade] 18.14.0.0.0
[Version After Upgrade] 19.3.0.0.0
------------------------------------------
[Stage Name] PRECHECKS
[Status] SUCCESS
[Start Time] 2024-06-18 13:50:11
[Duration]
[Log Directory] /home/oracle/autoupgrade/log/PROD1/100/prechecks
[Detail] /home/oracle/autoupgrade/log/PROD1/100/prechecks/prod_preupgrade.log
Check passed and no manual intervention needed
------------------------------------------
Additional logs are in /home/oracle/autoupgrade, which we specified in the AutoUpgrade config file.
6 Run AutoUpgrade in fixups mode.
[oracle@rac1 database]$ /oracle/oraBase/oracle/19.0.0/database/jdk/bin/java -jar /usr/software/oracle/autoupgrade.jar -config /usr/software/oracle/18c_19c_autoupgrade.cfg -mode fixups
AutoUpgrade 24.4.240426 launched with default internal options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 CDB(s) plus 2 PDB(s) will be processed
Type 'help' to list console commands
upg> Job 101 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Please check the summary report at:
/home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html
/home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log
7 Finally, run AutoUpgrade in deploy mode, which will create a guaranteed restore point (GRP), upgrade the DB, and apply the post-upgrade fixups.
[oracle@rac1 database]$ /oracle/oraBase/oracle/19.0.0/database/jdk/bin/java -jar /usr/software/oracle/autoupgrade.jar -config /usr/software/oracle/18c_19c_autoupgrade.cfg -mode deploy
AutoUpgrade 24.4.240426 launched with default internal options
Processing config file ...
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 CDB(s) plus 2 PDB(s) will be processed
Type 'help' to list console commands
upg>
7.1 Check the job status.
upg> lsj
+----+-------+--------+---------+-------+----------+-------+-------+
|Job#|DB_NAME| STAGE|OPERATION| STATUS|START_TIME|UPDATED|MESSAGE|
+----+-------+--------+---------+-------+----------+-------+-------+
| 102| PROD1|DISPATCH|EXECUTING|RUNNING| 16:33:17| 9s ago| |
+----+-------+--------+---------+-------+----------+-------+-------+
Total jobs 1
upg> status -job 102
Details
Job No 102
Oracle SID PROD1
Start Time 24/06/18 16:33:17
Elapsed (min): 107
End time: N/A
Logfiles
Logs Base: /home/oracle/autoupgrade/log/PROD1
Job logs: /home/oracle/autoupgrade/log/PROD1/102
Stage logs: /home/oracle/autoupgrade/log/PROD1/102/postfixups
TimeZone: /home/oracle/autoupgrade/log/PROD1/temp
Remote Dirs:
Stages
SETUP <1 min
GRP <1 min
PREUPGRADE <1 min
PRECHECKS 1 min
PREFIXUPS <1 min
DRAIN 4 min
DBUPGRADE 98 min
DISPATCH 1 min
POSTCHECKS <1 min
DISPATCH <1 min
POSTFIXUPS ~0 min (RUNNING)
POSTUPGRADE
SYSUPDATES
Stage-Progress Per Container
+--------+----------+
|Database|POSTFIXUPS|
+--------+----------+
|CDB$ROOT| 0 % |
|PDB$SEED| 0 % |
|PROD1PDB| 0 % |
+--------+----------+
upg> status -job 102
Details
Job No 102
Oracle SID PROD1
Start Time 24/06/18 16:33:17
Elapsed (min): 114
End time: N/A
Logfiles
Logs Base: /home/oracle/autoupgrade/log/PROD1
Job logs: /home/oracle/autoupgrade/log/PROD1/102
Stage logs: /home/oracle/autoupgrade/log/PROD1/102/postfixups
TimeZone: /home/oracle/autoupgrade/log/PROD1/temp
Remote Dirs:
Stages
SETUP <1 min
GRP <1 min
PREUPGRADE <1 min
PRECHECKS 1 min
PREFIXUPS <1 min
DRAIN 4 min
DBUPGRADE 98 min
DISPATCH 1 min
POSTCHECKS <1 min
DISPATCH <1 min
POSTFIXUPS ~7 min (RUNNING)
POSTUPGRADE
SYSUPDATES
Stage-Progress Per Container
+--------+----------+
|Database|POSTFIXUPS|
+--------+----------+
|CDB$ROOT| 75 % |
|PDB$SEED| 75 % |
|PROD1PDB| 80 % |
+--------+----------+
7.2 When the job completes, note the GRP it automatically created.
upg> Job 102 completed
------------------- Final Summary --------------------
Number of databases [ 1 ]
Jobs finished [1]
Jobs failed [0]
Jobs restored [0]
Jobs pending [0]
---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from PROD1: drop restore point AUTOUPGRADE_9212_PROD1814000
Please check the summary report at:
/home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.html
/home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log
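If the upgrade had to be abandoned, this GRP is what allows you to flash the database back to its pre-upgrade state. A hedged manual sketch (AutoUpgrade's own restore job can also do this for you) is:

```
[oracle@rac1 ~]$ srvctl stop database -d PROD
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL> startup mount;
SQL> flashback database to restore point AUTOUPGRADE_9212_PROD1814000;
SQL> alter database open resetlogs;
```

Note that a flashback to before the upgrade returns the database to 18c, so it would have to be started from the old 18c home.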
7.3 Verify the logs are clean.
[root@rac1 ~]# cat /home/oracle/autoupgrade/cfgtoollogs/upgrade/auto/status/status.log
==========================================
Autoupgrade Summary Report
==========================================
[Date] Tue Jun 18 18:40:04 PDT 2024
[Number of Jobs] 1
==========================================
[Job ID] 102
==========================================
[DB Name] PROD
[Version Before Upgrade] 18.14.0.0.0
[Version After Upgrade] 19.3.0.0.0
------------------------------------------
[Stage Name] GRP
[Status] SUCCESS
[Start Time] 2024-06-18 16:33:17
[Duration] 0:00:00
[Detail] Please drop the following GRPs after Autoupgrade completes:
AUTOUPGRADE_9212_PROD1814000
------------------------------------------
[Stage Name] PREUPGRADE
[Status] SUCCESS
[Start Time] 2024-06-18 16:33:18
[Duration] 0:00:00
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/preupgrade
------------------------------------------
[Stage Name] PRECHECKS
[Status] SUCCESS
[Start Time] 2024-06-18 16:33:18
[Duration] 0:01:35
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/prechecks
[Detail] /home/oracle/autoupgrade/log/PROD1/102/prechecks/prod_preupgrade.log
Check passed and no manual intervention needed
------------------------------------------
[Stage Name] PREFIXUPS
[Status] SUCCESS
[Start Time] 2024-06-18 16:34:53
[Duration] 0:00:40
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/prefixups
[Detail] /home/oracle/autoupgrade/log/PROD1/102/prefixups/prefixups.html
------------------------------------------
[Stage Name] DRAIN
[Status] SUCCESS
[Start Time] 2024-06-18 16:35:34
[Duration] 0:04:26
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/drain
------------------------------------------
[Stage Name] DBUPGRADE
[Status] SUCCESS
[Start Time] 2024-06-18 16:40:01
[Duration] 1:38:12
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/dbupgrade
------------------------------------------
[Stage Name] POSTCHECKS
[Status] SUCCESS
[Start Time] 2024-06-18 18:20:09
[Duration] 0:00:15
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/postchecks
[Detail] /home/oracle/autoupgrade/log/PROD1/102/postchecks/prod_postupgrade.log
Check passed and no manual intervention needed
------------------------------------------
[Stage Name] POSTFIXUPS
[Status] SUCCESS
[Start Time] 2024-06-18 18:20:37
[Duration] 0:15:15
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/postfixups
[Detail] /home/oracle/autoupgrade/log/PROD1/102/postfixups/postfixups.html
------------------------------------------
[Stage Name] POSTUPGRADE
[Status] SUCCESS
[Start Time] 2024-06-18 18:35:52
[Duration] 0:01:04
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/postupgrade
------------------------------------------
[Stage Name] SYSUPDATES
[Status] SUCCESS
[Start Time] 2024-06-18 18:36:56
[Duration]
[Log Directory] /home/oracle/autoupgrade/log/PROD1/102/sysupdates
------------------------------------------
Summary: /home/oracle/autoupgrade/log/PROD1/102/dbupgrade/upg_summary.log
8 Clean ups
8.1 Verify the database entry has been added to /etc/oratab on both rac1 and rac2 with their respective SIDs. If not, add them manually.
[oracle@rac1 ~]$ grep PROD /etc/oratab
PROD1:/oracle/oraBase/oracle/19.0.0/database:N # line added by Agent
[oracle@rac2 ~]$ grep PROD /etc/oratab
PROD2:/oracle/oraBase/oracle/19.0.0/database:N # line added by Agent
8.2 On both rac1 and rac2, update the /home/oracle/.bash_profile to have the correct settings.
ORACLE_HOME=$ORACLE_BASE/19.0.0/database; export ORACLE_HOME
[oracle@rac1 ~]$ . ~/.bash_profile
[oracle@rac1 ~]$ env|grep ORACLE_HOME
ORACLE_HOME=/oracle/oraBase/oracle/19.0.0/database
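For reference, here is a hedged sketch of the relevant .bash_profile lines (values per this guide; the rest of your profile may differ):

```shell
ORACLE_BASE=/oracle/oraBase/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/19.0.0/database; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH
```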
8.3 On both RAC nodes, check the sqlnet.ora files in the grid and oracle homes and change any references to the old 18c homes into the new 19c homes. If there are none, you are good to go. Here are the file locations on my system.
/oracle/oraBase/19.0.0/grid/network/admin/sqlnet.ora
/oracle/oraBase/oracle/19.0.0/database/network/admin/sqlnet.ora
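A quick hedged way to scan both files for leftover 18c references (paths from this guide; no output means nothing to fix):

```
[oracle@rac1 ~]$ grep -H '18.0.0' \
    /oracle/oraBase/19.0.0/grid/network/admin/sqlnet.ora \
    /oracle/oraBase/oracle/19.0.0/database/network/admin/sqlnet.ora
```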
8.4 Verify Database version and open mode.
[oracle@rac1 ~]$ sqlplus / as sysdba
SQL*Plus: Release 19.0.0.0.0 - Production on Thu Jun 20 15:42:00 2024
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
SQL> col open_mode for a25
SQL> col database_role for a25
SQL> select open_mode, database_role from v$database;
OPEN_MODE DATABASE_ROLE
------------------------- -------------------------
READ WRITE PRIMARY
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 PROD1PDB READ WRITE NO
8.5 Check the timezone file version.
SQL> set lines 200
SQL> select * from v$timezone_file;
FILENAME VERSION CON_ID
------------------------------------------------------------ ---------- ----------
timezlrg_32.dat 32 0
8.6 Verify the statuses are all VALID in the registry for all containers.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> set lines 180 pages 9999
SQL> col cid for a20
SQL> col cname for a45
SQL> col version for a20
SQL> col prv_version for a20
SQL> col status for a15
SQL> select r.cid, r.cname, r.prv_version, r.version, d.status from registry$ r, dba_registry d
2 where r.cid=d.comp_id;
CID CNAME PRV_VERSION VERSION STATUS
-------------------- --------------------------------------------- -------------------- -------------------- ---------------
CATALOG Oracle Database Catalog Views 18.0.0.0.0 19.0.0.0.0 VALID
CATPROC Oracle Database Packages and Types 18.0.0.0.0 19.0.0.0.0 VALID
RAC Oracle Real Application Clusters 18.0.0.0.0 19.0.0.0.0 VALID
JAVAVM JServer JAVA Virtual Machine 18.0.0.0.0 19.0.0.0.0 VALID
XML Oracle XDK 18.0.0.0.0 19.0.0.0.0 VALID
CATJAVA Oracle Database Java Packages 18.0.0.0.0 19.0.0.0.0 VALID
XDB Oracle XML Database 18.0.0.0.0 19.0.0.0.0 VALID
OWM Oracle Workspace Manager 18.0.0.0.0 19.0.0.0.0 VALID
CONTEXT Oracle Text 18.0.0.0.0 19.0.0.0.0 VALID
9 rows selected.
SQL> alter session set container=PDB$SEED;
Session altered.
SQL> select r.cid, r.cname, r.prv_version, r.version, d.status from registry$ r, dba_registry d
2 where r.cid=d.comp_id;
CID CNAME PRV_VERSION VERSION STATUS
-------------------- --------------------------------------------- -------------------- -------------------- ---------------
CATALOG Oracle Database Catalog Views 18.0.0.0.0 19.0.0.0.0 VALID
CATPROC Oracle Database Packages and Types 18.0.0.0.0 19.0.0.0.0 VALID
RAC Oracle Real Application Clusters 18.0.0.0.0 19.0.0.0.0 VALID
JAVAVM JServer JAVA Virtual Machine 18.0.0.0.0 19.0.0.0.0 VALID
XML Oracle XDK 18.0.0.0.0 19.0.0.0.0 VALID
CATJAVA Oracle Database Java Packages 18.0.0.0.0 19.0.0.0.0 VALID
XDB Oracle XML Database 18.0.0.0.0 19.0.0.0.0 VALID
OWM Oracle Workspace Manager 18.0.0.0.0 19.0.0.0.0 VALID
CONTEXT Oracle Text 18.0.0.0.0 19.0.0.0.0 VALID
9 rows selected.
SQL> alter session set container=PROD1PDB;
Session altered.
SQL> select r.cid, r.cname, r.prv_version, r.version, d.status from registry$ r, dba_registry d
2 where r.cid=d.comp_id;
CID CNAME PRV_VERSION VERSION STATUS
-------------------- --------------------------------------------- -------------------- -------------------- ---------------
CATALOG Oracle Database Catalog Views 18.0.0.0.0 19.0.0.0.0 VALID
CATPROC Oracle Database Packages and Types 18.0.0.0.0 19.0.0.0.0 VALID
RAC Oracle Real Application Clusters 18.0.0.0.0 19.0.0.0.0 VALID
JAVAVM JServer JAVA Virtual Machine 18.0.0.0.0 19.0.0.0.0 VALID
XML Oracle XDK 18.0.0.0.0 19.0.0.0.0 VALID
CATJAVA Oracle Database Java Packages 18.0.0.0.0 19.0.0.0.0 VALID
XDB Oracle XML Database 18.0.0.0.0 19.0.0.0.0 VALID
OWM Oracle Workspace Manager 18.0.0.0.0 19.0.0.0.0 VALID
CONTEXT Oracle Text 18.0.0.0.0 19.0.0.0.0 VALID
9 rows selected.
9 Drop the restore point.
Note: Only do Step 9 after you have upgraded both the Primary and the Standby, have verified that the database upgraded successfully, and are certain that you no longer need to flash the database back to a point prior to the upgrade.
SQL> col name for a55
SQL> select name from v$restore_point;
NAME
-------------------------------------------------------
AUTOUPGRADE_9212_PROD1814000
SQL> drop restore point AUTOUPGRADE_9212_PROD1814000;
Congratulations! The Oracle RAC Database has been upgraded to 19c!