OpenShift Content Bundle¶
This content pack provides comprehensive tooling and automation for deploying and managing OpenShift clusters through Digital Rebar Platform (DRP). It handles the complete lifecycle of OpenShift clusters including installation, configuration, node management, and advanced features like Advanced Cluster Management (ACM).
Design Philosophy¶
The content bundle is designed around several key principles:
- Pipeline-Driven Deployment: The main cluster deployment is handled through a specialized profile (pipeline) that orchestrates the entire process. This ensures consistency and reduces human error.
- Task-Based Management: Individual administrative tasks are packaged as blueprints, allowing for targeted operations to manage the cluster.
- Flexible Infrastructure: Support for both DRP-managed and external DNS, disconnected installations, and various infrastructure configurations.
- Automated Coordination: Tasks like node approval and cluster joining are automatically synchronized to ensure proper cluster formation.
Architecture¶
Node Types¶
The content bundle supports four distinct node types:
- Bootstrap Node
  - Temporary node that initializes the cluster
  - Minimum 2 vCPUs, 8GB RAM, 100GB disk
  - Converts to a worker node after cluster initialization
  - Provides initial control plane services
- Control Plane Nodes
  - Manage the cluster's core services (API server, scheduler, etcd)
  - Minimum 4 vCPUs, 16GB RAM, 100GB disk per node
  - Exactly three nodes required for production
  - Must have identical hardware specifications
- Worker Nodes
  - Run application workloads and containers
  - Minimum 2 vCPUs, 8GB RAM, 100GB disk
  - Scalable based on workload demands
  - Can have varying hardware specifications
- Load Balancer Nodes
  - HAProxy-based traffic distribution
  - Minimum 2 vCPUs, 4GB RAM, 20GB disk
  - Multiple nodes recommended for HA
  - Handle API and application ingress
Network Architecture¶
The cluster uses three distinct network segments that MUST NOT overlap:
- Machine Network (default: 172.21.0.0/20)
  - Used for node IP addresses
  - Must be routable within the infrastructure
  - Hosts API endpoints and load balancers
- Service Network (default: 172.30.0.0/16)
  - Used for Kubernetes services
  - Internal cluster communications
  - Not routable outside the cluster
- Cluster Network (default: 10.128.0.0/14)
  - Pod networking
  - Configurable host prefix (default: /23, which allocates a 512-address block per node)
  - Internal container communication
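For reference, these defaults map onto the networking stanza of the install-config.yaml that the pipeline generates during cluster prep. A minimal sketch, with field names following the upstream OpenShift installer schema (the networkType value shown is illustrative):
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 172.21.0.0/20
  serviceNetwork:
  - 172.30.0.0/16
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23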
Prerequisites¶
Infrastructure Requirements¶
- DNS configuration (two options):
- DRP-managed DNS (default): DRP automatically manages required DNS records
- External DNS: Must manually configure DNS records as detailed in the DNS configuration section
- Network connectivity between all nodes
- Internet access or configured disconnected registry
- Valid Red Hat OpenShift subscription
- Sufficient network capacity for cluster traffic
Required Parameters¶
broker/name
: Resource broker name (typically "pool-broker" for pool-based deployments)
openshift/pull-secret
: Red Hat registry authentication (obtain from the Red Hat OpenShift Cluster Manager)
openshift/cluster-domain
: Base domain for cluster DNS
Optional Parameters¶
openshift/workers/names
: Worker node hostnames
openshift/controlplanes/names
: Control plane node hostnames
openshift/bootstraps/names
: Bootstrap node hostname
openshift/load-balancers/names
: Load balancer hostnames
openshift/external-registry
: Disconnected registry configuration
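Parameters can be set on the cluster object before or after creation. A sketch using drpcli's generic set/params verbs, assuming clusters accept the same param syntax as machines (the cluster name "demo" and domain value are placeholders):
# Set the base DNS domain on an existing cluster object
drpcli clusters set Name:demo param openshift/cluster-domain to "k8s.local"
# Review the cluster's effective parameters
drpcli clusters params Name:demo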
Deployment Process¶
The deployment is orchestrated by the universal-application-openshift-cluster pipeline, which is implemented as a specialized DRP profile. The process can be initiated through either the DRP web interface or the CLI.
Web Interface Deployment¶
- Navigate to the cluster wizard
- Click "Add +" to create a new cluster
- Select "openshift-cluster" as the Cluster Pipeline
- Select "oc-cluster" as the context
- Select appropriate broker (typically "pool-broker")
- Paste your pull secret and click "Save"
CLI Deployment¶
First, prepare your pull secret (assuming it's saved as pull-secret.json):
# Ensure pull secret is properly JSON encoded
jq -c . pull-secret.json > encoded-pull-secret.json
# Create cluster configuration
cat > cluster-config.json <<EOF
{
"Name": "demo",
"Profiles": ["universal-application-openshift-cluster"],
"Workflow": "universal-start",
"Context": "oc-cluster",
"Meta": {
"BaseContext": "oc-cluster",
},
"Params": {
"broker/name": "pool-broker",
"openshift/pull-secret": $(cat encoded-pull-secret.json)
}
}
EOF
# Create the cluster
drpcli clusters create - < cluster-config.json
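Once created, progress can be followed from the CLI. A quick sketch (the cluster name matches the example above):
# Inspect the cluster object and its current pipeline state
drpcli clusters show Name:demo
# Follow recent job activity as tasks run across the cluster's machines
drpcli jobs list --limit 10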
Deployment Stages¶
The deployment process consists of three main phases:
- Pre-provisioning Tasks, run via universal/cluster-provision-pre-flexiflow:
  - openshift-cluster-tools # Install OpenShift CLI and required tools
  - openshift-cluster-external-registry-create # Set up a disconnected registry if configured
  - openshift-cluster-external-registry-update # Mirror required images if using a disconnected registry
  - openshift-cluster-prep # Generate cluster configuration and ignition files
- Resource Provisioning:
  - The resource broker (typically pool-broker) selects or creates the required machines
  - Machines are assigned appropriate roles (bootstrap, control plane, worker, load balancer)
  - The base operating system is installed and configured
  - Nodes wait at the approval stage for orchestrated deployment
- Post-provisioning Tasks
The pipeline ensures these phases execute in the correct order and handles all necessary synchronization between nodes.
Testing OpenShift¶
Deploy Test Application¶
# Create a new project
oc new-project hello-openshift
# Create the deployment
kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
# Expose the service
oc expose deployment hello-node --port=9376
oc expose service hello-node
# Test the deployment
curl hello-node-hello-openshift.apps.demo.k8s.local
# Scale the deployment
oc scale deployment hello-node --replicas=3
# Cleanup (removes all resources in the project)
oc delete project hello-openshift
Advanced Features¶
Advanced Cluster Management (ACM)¶
- Multi-cluster management capabilities
- Automated through the openshift-cluster-acm-install task
- Configurable via:
  - openshift/acm-namespace (default: open-cluster-management)
  - openshift/acm-channel (default: release-2.11)
  - openshift/acm-operator-name (default: advanced-cluster-management)
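After the task runs, the operator install can be inspected through the standard OLM objects. A quick check (the namespace assumes the default above):
# Confirm the ACM operator subscription and installed version
oc get subscription,csv -n open-cluster-management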
Installation Timeouts¶
The installation process includes configurable timeouts for different stages:
- openshift/acm-install-timeout (default: 1500 seconds)
  - Controls the timeout for ACM operator installation
  - Increase if operator deployment is timing out
- openshift/acm-install-mcm-timeout (default: 3600 seconds)
  - Controls the timeout for Multi-Cluster Hub installation
  - Increase if hub deployment is timing out
If installations are failing due to timeouts, these values can be increased to accommodate slower networks or resource-constrained environments.
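For example, to double the operator timeout on a cluster named "demo" (a sketch using drpcli's generic param syntax; the value is illustrative):
drpcli clusters set Name:demo param openshift/acm-install-timeout to 3000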
Disconnected Installations¶
Support for air-gapped environments through:
- External registry configuration
- Image mirroring capabilities
- Certificate management
- Custom catalog sources
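The registry tasks automate image mirroring; for reference, the equivalent manual step uses the standard oc adm release mirror command (the release version and registry host below are illustrative):
# Mirror an OpenShift release payload into a local registry
oc adm release mirror \
  --from=quay.io/openshift-release-dev/ocp-release:4.15.0-x86_64 \
  --to=registry.example.com:5000/ocp4/openshift4 \
  --to-release-image=registry.example.com:5000/ocp4/openshift4:4.15.0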
Load Balancer Configuration¶
By default, the content bundle configures HAProxy for cluster load balancing. However, production deployments often use external load balancers. Regardless of the implementation, the following ports must be configured:
- API server (port 6443)
- Machine config server (port 22623)
- HTTP ingress (port 80)
- HTTPS ingress (port 443)
The load balancer configuration works in conjunction with the DNS configuration to provide access to cluster services.
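As an illustration of the pattern, a minimal HAProxy stanza for the API port might look like the following (server names and addresses are placeholders; the bundle generates its own configuration):
# TCP pass-through for the Kubernetes API across the control plane
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend openshift-api
backend openshift-api
    mode tcp
    balance roundrobin
    option tcp-check
    server cp1 <cp1-ip>:6443 check
    server cp2 <cp2-ip>:6443 check
    server cp3 <cp3-ip>:6443 check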
Administrative Tasks¶
The content bundle includes several blueprints for common administrative tasks:
openshift-cluster-status
: Check cluster health and components
openshift-cluster-acm-cleanup
: Remove the ACM installation
openshift-cluster-dns-refresh
: Update DNS and load balancer configuration
openshift-cluster-remove-node
: Safely remove nodes from the cluster
Troubleshooting¶
Common Commands¶
# Check node status
oc get nodes
# View cluster operators
oc get clusteroperators
# Monitor pod status
oc get pods --all-namespaces
# Check events
oc get events --sort-by='.metadata.creationTimestamp'
# View cluster version
oc get clusterversion
# Check ACM status
oc get multiclusterhub -n open-cluster-management
# List available upgrade versions
oc adm upgrade
# Initiate upgrade
oc adm upgrade --to=<version-number>
# Example: oc adm upgrade --to=4.15.36
Resource Cleanup¶
Dedicated tasks for cleanup operations:
- openshift-cluster-cleanup: General cluster cleanup
- openshift-cluster-acm-cleanup: ACM removal
- openshift-cluster-remove-node: Node removal
Future Enhancements¶
Planned improvements:
- Status event monitoring
- Cron-triggered maintenance
- Enhanced metrics collection
- Automated backup solutions
Support¶
For issues or questions:
- Check the Digital Rebar documentation
- Review the OpenShift documentation
- Review the troubleshooting section
- Contact RackN support
DNS Configuration¶
When using external DNS, the following records must be configured (example for cluster "demo.k8s.local"). All records should use TTL of 0.
| Name | Type | Value |
|---|---|---|
| ns1 | A | <IP address> |
| smtp | A | <IP address> |
| helper | A | <IP address> |
| helper.demo | A | <IP address> |
| api.demo | A | <IP address> |
| api-int.demo | A | <IP address> |
| *.apps.demo | A | <IP address> |
| cp1.demo | A | <IP address> |
| cp2.demo | A | <IP address> |
| cp3.demo | A | <IP address> |
| worker1.demo | A | <IP address> |
| worker2.demo | A | <IP address> |
| worker3.demo | A | <IP address> |
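Record resolution can be verified with standard tooling before starting a deployment, for example (hostnames follow the demo.k8s.local example above):
# Verify the API and wildcard application records resolve
dig +short api.demo.k8s.local
dig +short test.apps.demo.k8s.local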
License¶
RackN License - See documentation for details.