# Oracle Real Application Clusters in Linux Containers
Learn how to deploy Oracle Real Application Clusters (Oracle RAC) Release 23.26ai in Linux container environments, including preparation, installation, and validation steps.
## Overview of Running Oracle RAC in Containers
Oracle Real Application Clusters (Oracle RAC) is an option to the award-winning Oracle Database Enterprise Edition. Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. Oracle RAC uses Oracle Clusterware as a portable cluster software that allows clustering of independent servers so that they cooperate as a single system. In the cluster, Oracle RAC uses Oracle Automatic Storage Management (Oracle ASM) to provide simplified storage management that is consistent across all servers and storage platforms. Oracle Clusterware and Oracle ASM are part of Oracle Grid Infrastructure, which bundles both solutions in a software package that is easy to deploy.
For more information about Oracle RAC Database 23.26ai, refer to the [Oracle Database documentation](http://docs.oracle.com/en/database/).
## Using this Documentation
The navigation links that follow enable you to jump to different portions of this document as required for your preparation and deployment.
- [Oracle Real Application Clusters in Linux Containers](#oracle-real-application-clusters-in-linux-containers)
- [Overview of Running Oracle RAC in Containers](#overview-of-running-oracle-rac-in-containers)
- [Using this Documentation](#using-this-documentation)
- [Preparation Steps for Running Oracle RAC Database in Containers](#preparation-steps-for-running-oracle-rac-database-in-containers)
- [Download Oracle RAC Database Container Images](#download-oracle-rac-database-container-images)
- [Configure Network Management](#configure-network-management)
- [Configure Password Management](#configure-password-management)
- [Deploying Two-Node Oracle RAC on Podman Using an Oracle RAC Container Image](#deploying-two-node-oracle-rac-on-podman-using-an-oracle-rac-container-image)
- [Attach the Network to Containers](#attach-the-network-to-containers)
- [Validate the Oracle RAC Container Environment](#validate-the-oracle-rac-container-environment)
- [Connecting to an Oracle RAC Database](#connecting-to-an-oracle-rac-database)
- [Oracle RAC Deployment Scenarios](#oracle-rac-deployment-scenarios)
- [Cleanup](#cleanup)
- [Support](#support)
- [License](#license)
- [Copyright](#copyright)
## Preparation Steps for Running Oracle RAC Database in Containers
Complete each step and prerequisite in this section before proceeding:
* To complete the preparation steps for Oracle RAC on container deployment, refer to the following sections in the publication [Oracle Real Application Clusters Installation Guide](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for Podman Oracle Linux x86-64:
* Overview of Oracle RAC on Podman
* Host Preparation for Oracle RAC on Podman
* Podman Host Server Configuration
* **Note**: Because we use command-line installation for Oracle RAC on containers, configuring X Window System is not required.
* Podman Containers and Oracle RAC Nodes
* Provisioning the Podman Host Server
* Podman Host Preparation
* Preparing for Podman Container Installation
* Installing Podman Engine
* Allocate Linux Resources for Oracle Grid Infrastructure Deployment
* How to Configure Podman for SELinux Mode
* If you plan to use NFS storage for ASM devices, then create an NFS Volume. For details, see [Configuring NFS for Storage for Oracle RAC on Podman](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf).
**Note:** You can skip this step if you are planning to use block devices for storage.
* If SELinux is enabled on the Podman host, then ensure that you create an SELinux policy for Oracle RAC on Podman.
For details about this procedure, see `How to Configure Podman for SELinux Mode` in the publication [Oracle Real Application Clusters Installation Guide for Podman Oracle Linux x86-64](https://docs.oracle.com/en/database/oracle/oracle-database/21/racpd/target-configuration-oracle-rac-podman.html#GUID-59138DF8-3781-4033-A38F-E0466884D008).
When you perform the installation using any files from a Podman host machine where SELinux is enabled, you must ensure that they are labeled correctly with a `container_file_t` context. You can use `ls -lZ` to see the security context set on files.
* To resolve VIPs and SCAN IPs in this guide, we use a preconfigured DNS server in our environment.
Replace the values of the `-e DNS_SERVERS=10.0.20.25`, `--dns=10.0.20.25`, `-e DOMAIN=example.info`, and `--dns-search=example.info` parameters in the examples in this guide based on your environment. If you want to run a DNS server in a container, then complete the steps in [Oracle DNS Server to resolve Oracle RAC IPs](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleDNSServer).
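Before creating containers, it can help to confirm that every name the later examples rely on actually resolves. The following is a minimal sketch, not part of the official images: the hostnames are this guide's examples, and the resolver is passed in as a parameter so the loop itself can be exercised even without a live DNS server.

```bash
#!/bin/bash
# Sketch: confirm that every hostname the RAC examples rely on resolves.
# "resolver" is any command or function that succeeds when a name
# resolves (for example, a wrapper around `getent hosts`).
check_names() {
  local resolver="$1"; shift
  local missing=0 name
  for name in "$@"; do
    if "$resolver" "$name" >/dev/null 2>&1; then
      echo "OK   $name"
    else
      echo "MISS $name"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
# Real use, once the DNS server at 10.0.20.25 is reachable:
#   resolve() { getent hosts "$1"; }
#   check_names resolve racnodep1 racnodep2 racnodep1-vip racnodep2-vip racnodepc1-scan
```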
## Download Oracle RAC Database Container Images
Download the Oracle RAC image from the Oracle Container Registry for the Oracle RAC release that you want to deploy.
Oracle RAC is supported for production use on Podman starting with Oracle Database 19c (19.16), Oracle Database 21c (21.7), and Oracle Database 23.26ai.
For example:
```bash
podman pull container-registry.oracle.com/database/rac:latest
```
* The sections that follow assume that you have completed all of the prerequisites in [Preparation Steps for Running Oracle RAC Database in Containers](#preparation-steps-for-running-oracle-rac-database-in-containers) and completed all the other steps required for your environment.
## Configure Network Management
Before you start the installation, you must plan your public and private Podman networks.
To set up your networks, review `Podman Host Preparation` in the publication [Oracle Real Application Clusters Installation Guide](https://docs.oracle.com/cd/F39414_01/racpd/oracle-real-application-clusters-installation-guide-podman-oracle-linux-x86-64.pdf) for Podman Oracle Linux x86-64.
You can create a [podman network](https://docs.podman.io/en/latest/markdown/podman-network-create.1.html) on every container host so that the containers running within that host can communicate with each other.
For example, create `rac_eth0pub1_nw` for the public network (`10.0.20.0/24`), and `rac_eth1priv1_nw` (`192.168.17.0/24`) and `rac_eth2priv2_nw` (`192.168.18.0/24`) for the private networks. You can use any network subnet based on your environment.
You can configure standard frames MTU networks or Jumbo Frames MTU networks.
### Configure Standard Frames MTU Networks
First, verify that each network interface uses the standard 1500 MTU:
```bash
ip link show|grep ens
3: ens5: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
4: ens6: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
5: ens7: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
```
Create Podman Bridge networks by using the following commands:
```bash
podman network create --driver=bridge --subnet=10.0.20.0/24 --gateway=10.0.20.1 rac_eth0pub1_nw
podman network create --driver=bridge --subnet=192.168.17.0/24 --disable-dns --internal rac_eth1priv1_nw
podman network create --driver=bridge --subnet=192.168.18.0/24 --disable-dns --internal rac_eth2priv2_nw
```
To run Oracle RAC using Oracle Container Runtime for Podman on multiple hosts, you must create either macvlan networks or ipvlan networks:
- Create Podman macvlan networks using the following commands:
```bash
podman network create -d macvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 -o parent=ens5 rac_eth0pub1_nw
podman network create -d macvlan --subnet=192.168.17.0/24 -o parent=ens6 --disable-dns --internal rac_eth1priv1_nw
podman network create -d macvlan --subnet=192.168.18.0/24 -o parent=ens7 --disable-dns --internal rac_eth2priv2_nw
```
- Create Podman `ipvlan` networks using the following commands:
```bash
podman network create -d ipvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 -o parent=ens5 rac_eth0pub1_nw
podman network create -d ipvlan --subnet=192.168.17.0/24 -o parent=ens6 --disable-dns --internal rac_eth1priv1_nw
podman network create -d ipvlan --subnet=192.168.18.0/24 -o parent=ens7 --disable-dns --internal rac_eth2priv2_nw
```
### Configure Jumbo Frames MTU Network Configuration
First, verify that each network interface is configured with a Jumbo Frames Maximum Transmission Unit (MTU) of 9000:
```bash
ip link show | egrep "ens"
3: ens5: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
4: ens6: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
5: ens7: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
```
If the MTU on each interface is set to 9000, then run the following commands on each Podman host to extend the maximum payload length for each network to use the entire MTU:
```bash
#Podman bridge networks
podman network create --driver=bridge --subnet=10.0.20.0/24 --gateway=10.0.20.1 --opt mtu=9000 rac_eth0pub1_nw
podman network create --driver=bridge --subnet=192.168.17.0/24 --opt mtu=9000 --disable-dns --internal rac_eth1priv1_nw
podman network create --driver=bridge --subnet=192.168.18.0/24 --opt mtu=9000 --disable-dns --internal rac_eth2priv2_nw
# Podman macvlan networks
podman network create -d macvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 --opt mtu=9000 -o parent=ens5 rac_eth0pub1_nw
podman network create -d macvlan --subnet=192.168.17.0/24 --opt mtu=9000 -o parent=ens6 --disable-dns --internal rac_eth1priv1_nw
podman network create -d macvlan --subnet=192.168.18.0/24 --opt mtu=9000 -o parent=ens7 --disable-dns --internal rac_eth2priv2_nw
#Podman ipvlan networks
podman network create -d ipvlan --subnet=10.0.20.0/24 --gateway=10.0.20.1 --opt mtu=9000 -o parent=ens5 rac_eth0pub1_nw
podman network create -d ipvlan --subnet=192.168.17.0/24 --opt mtu=9000 -o parent=ens6 --disable-dns --internal rac_eth1priv1_nw
podman network create -d ipvlan --subnet=192.168.18.0/24 --opt mtu=9000 -o parent=ens7 --disable-dns --internal rac_eth2priv2_nw
```
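The MTU verification step can also be scripted. The sketch below (with a hypothetical helper name, `check_mtu`) parses `ip link show`-style output and flags interfaces whose MTU is below the expected value; the parsing is kept separate from the `ip link` call so the logic can be checked anywhere.

```bash
#!/bin/bash
# Sketch: report interfaces whose MTU is below the expected value.
# Reads `ip link show`-style lines on stdin, e.g.
#   3: ens5: <...> mtu 9000 qdisc mq state UP ...
check_mtu() {
  local expected="$1"
  awk -v want="$expected" '
    /^[0-9]+: / {
      iface = $2; sub(/:$/, "", iface)
      for (i = 1; i <= NF; i++) if ($i == "mtu") mtu = $(i + 1)
      printf "%s mtu=%s %s\n", iface, mtu, (mtu + 0 >= want ? "OK" : "TOO_SMALL")
    }'
}
# Real use: ip link show | check_mtu 9000
```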
## Configure Password Management
- Specify the secret volume for resetting the grid, oracle, and database user passwords either during node creation or node addition. The volume can be a shared volume among all the containers. For example:
```bash
mkdir /opt/.secrets/
```
- Generate a password file. Create `/opt/.secrets/pwdfile.txt` and seed it with the password for the grid, oracle, and database users. For this deployment scenario, we use a common password for all of these users. To encrypt the password file and remove the cleartext copy, run the following commands:
```bash
cd /opt/.secrets
openssl genrsa -out key.pem 4096
openssl rsa -in key.pem -out key.pub -pubout
openssl pkeyutl -in pwdfile.txt -out pwdfile.enc -pubin -inkey key.pub -encrypt
rm -rf /opt/.secrets/pwdfile.txt
```
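For reference, the encryption steps above can be verified with a local round trip: the container side recovers the password by decrypting `pwdfile.enc` with the private key (`keysecret`). This is a self-contained sketch using a made-up password and a scratch directory, so it never touches `/opt/.secrets`:

```bash
#!/bin/bash
# Sketch: local round-trip of the password encryption used above.
set -e
workdir="$(mktemp -d)"
cd "$workdir"
echo "MyExamplePwd#1" > pwdfile.txt   # made-up example password

# Same key generation and encryption steps as the guide:
openssl genrsa -out key.pem 4096 2>/dev/null
openssl rsa -in key.pem -out key.pub -pubout 2>/dev/null
openssl pkeyutl -in pwdfile.txt -out pwdfile.enc -pubin -inkey key.pub -encrypt

# The container side reverses this with the private key (keysecret):
openssl pkeyutl -in pwdfile.enc -out pwdfile.dec -inkey key.pem -decrypt

result=$(cmp -s pwdfile.txt pwdfile.dec && echo "round-trip OK")
echo "$result"
```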
- Oracle recommends using Podman secrets inside the containers. To create Podman secrets, run the following command:
```bash
podman secret create pwdsecret /opt/.secrets/pwdfile.enc
podman secret create keysecret /opt/.secrets/key.pem
podman secret ls
ID NAME DRIVER CREATED UPDATED
7eb7f573905283c808bdabaff keysecret file 13 hours ago 13 hours ago
e3ac963fd736d8bc01dcd44dd pwdsecret file 13 hours ago 13 hours ago
podman secret inspect pwdsecret
```
Notes:
- In this example we use `pwdsecret` as the common password for SSH setup between containers for the oracle, grid, and Oracle RAC database users. Also, `keysecret` is used to extract secrets inside the Oracle RAC Containers. After setup is complete, you must change the Oracle RAC Database password and operating system (OS) user's password inside the container.
## Deploying Two-Node Oracle RAC on Podman Using an Oracle RAC Container Image
Use the instructions that follow to set up Oracle RAC on Podman using an Oracle RAC image.
You can set up Oracle RAC either on block devices or on NFS storage devices. However, in this guide, we show an **example** of deploying Oracle RAC on Containers with **block devices**. For information about other Oracle RAC image deployment scenarios, see [Oracle Real Application Clusters in Linux Containers on GitHub](https://github.com/oracle/docker-images/blob/main/OracleDatabase/RAC/OracleRealApplicationClusters/README.md). Other scenarios include deploying with user-defined response files, or deploying without user-defined response files.
### Prerequisites for setting up Oracle RAC with block devices
Ensure that you have created at least one block device with at least 50 GB of storage space that can be accessed by the two Oracle RAC nodes and shared between them. You can create more block devices as needed. Pass environment variables and devices to the `podman create` command and in the Oracle Grid Infrastructure (grid) response files.
Ensure that the ASM devices have no existing file system. To clear any other file system from the devices, use the following command:
```bash
dd if=/dev/zero of=/dev/oracleoci/oraclevdd bs=8k count=10000
```
Repeat this command on each shared block device. In this example command, `/dev/oracleoci/oraclevdd` is a shared block device.
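Repeating the `dd` command across several devices can be wrapped in a small loop. The helper below is a hypothetical sketch: because the real targets are destructive, the commented real-use line lists this guide's example device paths, and you should double-check them before running.

```bash
#!/bin/bash
# Sketch: wipe the first ~80 MB (8k x 10000 blocks) of each listed
# shared ASM device, mirroring the single dd command above.
wipe_devices() {
  local dev
  for dev in "$@"; do
    echo "wiping $dev"
    # conv=notrunc mimics writing to a block device when pointed at a file
    dd if=/dev/zero of="$dev" bs=8k count=10000 conv=notrunc 2>/dev/null
  done
}
# Real use (destructive -- double-check the paths first):
#   wipe_devices /dev/oracleoci/oraclevdd /dev/oracleoci/oraclevde
```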
### Create Oracle RAC Containers
Create the Oracle RAC containers using the Oracle RAC image. For details about environment variables, see [Environment Variables Explained](https://github.com/oracle/docker-images/blob/main/OracleDatabase/RAC/OracleRealApplicationClusters/docs/ENVIRONMENTVARIABLES.md).
**Note:**
- To use this example in your environment, adjust environment variables as needed. For more details, see [Environment Variables for Oracle RAC on Containers](#environment-variables-for-oracle-rac-on-containers) .
- This example uses a Podman bridge network with one public and two private networks. When using two private networks, include `--sysctl 'net.ipv4.conf.eth1.rp_filter=2' --sysctl 'net.ipv4.conf.eth2.rp_filter=2'`. If your use case does not involve two private networks, then you do not require this `sysctl` configuration for the Podman bridge network.
- If you plan to use different disk groups for datafiles and archivelogs, then you must pass these parameters: `DB_ASM_DEVICE_LIST`, `RECO_ASM_DEVICE_LIST`,`DB_DATA_FILE_DEST`, `DB_RECOVERY_FILE_DEST`. For more information, see [Section 8: Environment Variables for Oracle RAC on Containers](#environment-variables-for-oracle-rac-on-containers).
In this example, we create a container on host `racnodep1`:
```bash
podman create -t -i \
--hostname racnodep1 \
--dns-search "example.info" \
--dns 10.0.20.25 \
--shm-size 4G \
--cpuset-cpus 0-1 \
--memory 16G \
--memory-swap 32G \
--sysctl kernel.shmall=2097152 \
--sysctl "kernel.sem=250 32000 100 128" \
--sysctl kernel.shmmax=8589934592 \
--sysctl kernel.shmmni=4096 \
--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
--cap-add=SYS_RESOURCE \
--cap-add=NET_ADMIN \
--cap-add=SYS_NICE \
--cap-add=AUDIT_WRITE \
--cap-add=AUDIT_CONTROL \
--cap-add=NET_RAW \
--secret pwdsecret \
--secret keysecret \
--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
-e DNS_SERVERS="10.0.20.25" \
-e DB_SERVICE=service:soepdb \
-e CRS_PRIVATE_IP1=192.168.17.170 \
-e CRS_PRIVATE_IP2=192.168.18.170 \
-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \
-e SCAN_NAME=racnodepc1-scan \
-e INIT_SGA_SIZE=3G \
-e INIT_PGA_SIZE=2G \
-e INSTALL_NODE=racnodep1 \
-e DB_PWD_FILE=pwdsecret \
-e PWD_KEY=keysecret \
--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \
-e OP_TYPE=setuprac \
--restart=always \
--ulimit rtprio=99 \
--systemd=always \
--name racnodep1 \
container-registry.oracle.com/database/rac:latest
```
To create another container on host `racnodep2`, use the following command:
```bash
podman create -t -i \
--hostname racnodep2 \
--dns-search "example.info" \
--dns 10.0.20.25 \
--shm-size 4G \
--cpuset-cpus 0-1 \
--memory 16G \
--memory-swap 32G \
--sysctl kernel.shmall=2097152 \
--sysctl "kernel.sem=250 32000 100 128" \
--sysctl kernel.shmmax=8589934592 \
--sysctl kernel.shmmni=4096 \
--sysctl 'net.ipv4.conf.eth1.rp_filter=2' \
--sysctl 'net.ipv4.conf.eth2.rp_filter=2' \
--health-cmd "/bin/python3 /opt/scripts/startup/scripts/main.py --checkracstatus" \
--cap-add=SYS_RESOURCE \
--cap-add=NET_ADMIN \
--cap-add=SYS_NICE \
--cap-add=AUDIT_WRITE \
--cap-add=AUDIT_CONTROL \
--cap-add=NET_RAW \
--secret pwdsecret \
--secret keysecret \
-e DNS_SERVERS="10.0.20.25" \
-e DB_SERVICE=service:soepdb \
-e CRS_PRIVATE_IP1=192.168.17.171 \
-e CRS_PRIVATE_IP2=192.168.18.171 \
-e CRS_NODES="\"pubhost:racnodep1,viphost:racnodep1-vip;pubhost:racnodep2,viphost:racnodep2-vip\"" \
-e SCAN_NAME=racnodepc1-scan \
-e INIT_SGA_SIZE=3G \
-e INIT_PGA_SIZE=2G \
-e INSTALL_NODE=racnodep1 \
-e DB_PWD_FILE=pwdsecret \
-e PWD_KEY=keysecret \
--device=/dev/oracleoci/oraclevdd:/dev/asm-disk1 \
--device=/dev/oracleoci/oraclevde:/dev/asm-disk2 \
-e CRS_ASM_DEVICE_LIST=/dev/asm-disk1,/dev/asm-disk2 \
-e OP_TYPE=setuprac \
--restart=always \
--ulimit rtprio=99 \
--systemd=always \
--name racnodep2 \
container-registry.oracle.com/database/rac:latest
```
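The two `podman create` commands above differ only in the hostname, the two private IPs, and the container name. A small sketch (hypothetical helper `node_args`, not part of the official images) can generate the per-node values so the long shared flag list is written once:

```bash
#!/bin/bash
# Sketch: emit the per-node flags that differ between racnodep1 and
# racnodep2; all other flags in the podman create commands are identical.
node_args() {
  local node="$1" ip1="$2" ip2="$3"
  echo "--hostname ${node} -e CRS_PRIVATE_IP1=${ip1} -e CRS_PRIVATE_IP2=${ip2} --name ${node}"
}
node_args racnodep1 192.168.17.170 192.168.18.170
node_args racnodep2 192.168.17.171 192.168.18.171
# Splice into the shared command, e.g.:
#   podman create -t -i $(node_args racnodep1 192.168.17.170 192.168.18.170) \
#     <shared flags...> container-registry.oracle.com/database/rac:latest
```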
## Attach the Network to Containers
You must assign the Podman networks created in the preceding examples to each container. Complete the following tasks:
### Attach the network to racnodep1
```bash
podman network disconnect podman racnodep1
podman network connect rac_eth0pub1_nw --ip 10.0.20.170 racnodep1
podman network connect rac_eth1priv1_nw --ip 192.168.17.170 racnodep1
podman network connect rac_eth2priv2_nw --ip 192.168.18.170 racnodep1
```
### Attach the network to racnodep2
```bash
podman network disconnect podman racnodep2
podman network connect rac_eth0pub1_nw --ip 10.0.20.171 racnodep2
podman network connect rac_eth1priv1_nw --ip 192.168.17.171 racnodep2
podman network connect rac_eth2priv2_nw --ip 192.168.18.171 racnodep2
```
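Because both nodes follow the same IP pattern (the last octet is shared across the public and both private subnets), the attach steps can be generated instead of typed twice. This is a sketch with a hypothetical helper name; it prints the commands rather than executing them, so the mapping can be reviewed first.

```bash
#!/bin/bash
# Sketch: emit the disconnect/connect sequence for one node, given its
# shared last octet. Network names and subnets are this guide's examples.
print_net_cmds() {
  local node="$1" octet="$2"
  echo "podman network disconnect podman ${node}"
  echo "podman network connect rac_eth0pub1_nw --ip 10.0.20.${octet} ${node}"
  echo "podman network connect rac_eth1priv1_nw --ip 192.168.17.${octet} ${node}"
  echo "podman network connect rac_eth2priv2_nw --ip 192.168.18.${octet} ${node}"
}
print_net_cmds racnodep1 170
print_net_cmds racnodep2 171
# Once the output matches your plan:  print_net_cmds racnodep1 170 | sh
```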
## Start the containers
You must start the containers. Run the following commands:
```bash
podman start racnodep1
podman start racnodep2
```
It takes approximately 20 minutes or longer to create and set up the two-node Oracle RAC cluster. To check the logs, use the following command from another terminal session:
```bash
podman exec racnodep1 /bin/bash -c "tail -f /tmp/orod/oracle_db_setup.log"
```
When the database configuration is complete, you should see a message similar to the following:
```bash
####################################
ORACLE RAC DATABASE IS READY TO USE!
####################################
```
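Rather than watching the log by hand, a script can poll until the container reports healthy. With Podman the status command would typically be `podman inspect --format '{{.State.Health.Status}}' racnodep1`; in this sketch the checker is parameterized so the retry loop can be exercised with a stub.

```bash
#!/bin/bash
# Sketch: poll a status command until it reports "healthy" or a
# retry budget is exhausted.
wait_healthy() {
  local checker="$1" tries="$2" delay="$3"
  local i
  for ((i = 1; i <= tries; i++)); do
    if [ "$("$checker")" = "healthy" ]; then
      echo "healthy after $i check(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "timed out"
  return 1
}
# Real use (polls every 60s for up to 30 attempts):
#   status() { podman inspect --format '{{.State.Health.Status}}' racnodep1; }
#   wait_healthy status 30 60
```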
## Validate the Oracle RAC Container Environment
To validate that the environment is healthy, run the following command:
```bash
podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2f42e49758d1 container-registry.oracle.com/database/rac:latest 46 minutes ago Up 37 minutes (healthy) racnodep1
```
**Note:**
Look for `(healthy)` next to container names under the `STATUS` section. Also check for `racnodep2`.
## Connecting to an Oracle RAC Database
To connect to the container, run the following command:
```bash
podman exec -i -t racnodep1 /bin/bash
```
### Validating Oracle Grid Infrastructure
Verify that Oracle Grid Infrastructure is up and running from within the container:
```bash
su - grid
#Verify the status of Oracle Clusterware stack:
[grid@racnodep1 ~]$ crsctl check cluster -all
**************************************************************
racnodep1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnodep2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@racnodep1 u01]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
```
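When checking many clusters, the `crsctl` output above can be scanned programmatically. The following is a parsing sketch only (hypothetical helper name `check_crs`); feed it the real output from inside the container.

```bash
#!/bin/bash
# Sketch: scan `crsctl check cluster -all` output and report any node
# where a CRS service line does not say "is online".
check_crs() {
  awk '
    # Lines like "racnodep1:" name the node that follows
    /^[[:alnum:]._-]+:$/ { node = $1; sub(/:$/, "", node) }
    /CRS-/ {
      if ($0 !~ /is online/) { printf "%s: NOT ONLINE: %s\n", node, $0; bad = 1 }
    }
    END { if (!bad) print "all services online" }'
}
# Real use: crsctl check cluster -all | check_crs
```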
### Client Connection
* If you are using the Podman network created using a MACVLAN driver, and you have configured DNS appropriately, then you can connect using the public Single Client Access Name (SCAN) listener directly from any external client. To connect with the SCAN, use the following connection string, where `<scan_name>` is the SCAN name for the database, and `<oracle_sid>` is the database system identifier:
```bash
system/<password>@//<scan_name>:1521/<oracle_sid>
```
```
* If you are using a connection manager and have exposed port 1521 on the host, then connect from an external client using the following connection string, where `<podman_host>` is the container host, and `<oracle_sid>` is the database system identifier:
```bash
system/<password>@//<podman_host>:1521/<oracle_sid>
```
```
* If you are using a bridge driver without a connection manager, then you must connect applications to the same bridge network that you are using for Oracle RAC.
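The connection strings above all follow the same EZConnect shape, which a small sketch can build from its parts. The SCAN name below is this guide's example; `soepdb` is taken from the `DB_SERVICE=service:soepdb` variable in the container commands and may differ in your environment.

```bash
#!/bin/bash
# Sketch: assemble an EZConnect descriptor (user@//host:port/service).
ezconnect() {
  local user="$1" host="$2" port="$3" service="$4"
  echo "${user}@//${host}:${port}/${service}"
}
ezconnect system racnodepc1-scan 1521 soepdb
# Typical use with SQL*Plus (prompts for the password):
#   sqlplus "$(ezconnect system racnodepc1-scan 1521 soepdb)"
```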
## Oracle RAC Deployment Scenarios
To learn more about other Oracle RAC deployment options, refer to [Oracle Real Application Clusters in Linux Containers](https://github.com/oracle/docker-images/tree/main/OracleDatabase/RAC/OracleRealApplicationClusters). Use the instructions in that guide to implement other deployment options.
## Cleanup
For instructions on how to clean up an Oracle RAC Database container environment, refer to the [README](https://github.com/oracle/docker-images/blob/main/OracleDatabase/RAC/OracleRealApplicationClusters/docs/CLEANUP.md).
# Oracle Legal Notices
[Copyright © 2006, 2026, Oracle and/or its affiliates.](https://docs.oracle.com/cd/E23003_01/html/en/cpyr.htm)