
Custom Configurations

OpenShift clusters deployed through DRP support various advanced configuration options to meet specific operational requirements. This guide covers key customization capabilities and their implementation.

Machine Configurations

Machine configurations allow you to manage node-level settings across your OpenShift cluster. These configurations can modify system settings, add custom systemd units, or manage files on the nodes.

Machine Config Pools

OpenShift manages nodes through Machine Config Pools (MCPs). By default, there are two pools:

- master: for control plane nodes
- worker: for compute nodes

Create a custom MCP:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values: [worker, infra]
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
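After applying the pool definition, the Machine Config Operator begins managing any nodes carrying the infra label. A quick way to confirm (commands assume a logged-in `oc` session with cluster-admin access; the filename is illustrative):

```shell
# Apply the pool and check that it picks up the labeled nodes
oc apply -f infra-mcp.yaml
oc get mcp infra
oc get nodes -l node-role.kubernetes.io/infra=
```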

Custom Machine Configurations

Apply specific configurations to nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: infra
  name: 50-infra-chrony-configuration
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,<base64_encoded_chrony_conf>
        mode: 420  # 0644 in octal; Ignition expects a decimal integer
        path: /etc/chrony.conf
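The `source` field expects the file content as a base64 data URL. One way to produce the encoded string (the chrony.conf contents below are illustrative):

```shell
# Write a sample chrony.conf and emit the base64 string
# to paste after "base64," in the MachineConfig's source field.
cat > /tmp/chrony.conf <<'EOF'
server ntp.example.com iburst
driftfile /var/lib/chrony/drift
EOF
base64 -w0 /tmp/chrony.conf
```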

Node Configuration

Labels and Taints

Configure nodes for specific workloads using labels and taints:

# Add infrastructure role
oc label node worker1.demo.k8s.local node-role.kubernetes.io/infra=""

# Add GPU workload taint
oc adm taint nodes worker2.demo.k8s.local workload=gpu:NoSchedule

# Verify configuration
oc get nodes --show-labels
oc describe node worker2.demo.k8s.local | grep Taints
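A workload targeting the tainted GPU node then needs a matching toleration; a minimal sketch (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker2.demo.k8s.local
  tolerations:
    - key: workload           # matches the taint added above
      operator: Equal
      value: gpu
      effect: NoSchedule
  containers:
    - name: app
      image: registry.example.com/gpu-app:latest   # placeholder image
```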

Resource Overrides

Enforce cluster-wide ratios between container resource requests and limits with the ClusterResourceOverride Operator:

apiVersion: operator.autoscaling.openshift.io/v1
kind: ClusterResourceOverride
metadata:
  name: cluster
spec:
  podResourceOverride:
    spec:
      memoryRequestToLimitPercent: 50
      cpuRequestToLimitPercent: 25
      limitCPUToMemoryPercent: 200

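The override only takes effect in namespaces that opt in via a label (namespace name below is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo   # placeholder namespace
  labels:
    clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true"
```

With the percentages above, a container that declares only a limit of 1Gi memory and 1 CPU would have its requests rewritten to 512Mi (50%) and 250m (25%).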
Storage Configuration

Storage Classes

Create custom storage classes for different performance tiers:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: high-performance
# Example assumes the AWS EBS CSI driver; substitute your platform's
# provisioner. (gp3 volumes accept a provisioned "iops" parameter.)
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  iops: "3000"
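A claim then selects the class by name; because of `WaitForFirstConsumer`, the volume is not bound until a pod actually uses the claim (claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-data   # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: high-performance
  resources:
    requests:
      storage: 50Gi
```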

Local Storage

Configure local storage for specific workloads:

apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker1.demo.k8s.local
  storageClassDevices:
    - storageClassName: local-sc
      volumeMode: Filesystem
      devicePaths:
        - /dev/sdb
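Once the Local Storage Operator processes the LocalVolume, it creates a PersistentVolume per matching device. To verify (assumes a logged-in `oc` session):

```shell
# The discovered disk should appear as an Available PV in the local-sc class
oc get pv
oc get pods -n openshift-local-storage
```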

Network Customization

Network Policies

Implement network isolation:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}

Custom DNS

Configure custom DNS settings:

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: custom-dns
    zones: 
      - example.com
    forwardPlugin:
      upstreams: 
        - 192.168.1.53
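To check that the forwarder was accepted and the DNS pods are healthy (commands assume a logged-in `oc` session):

```shell
# Show the configured custom servers and the CoreDNS pods
oc get dns.operator/default -o jsonpath='{.spec.servers}'
oc get pods -n openshift-dns
```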

Monitoring Configuration

User Workload Monitoring

Enable and configure user workload monitoring:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
    prometheusK8s:
      retention: 15d
      volumeClaimTemplate:
        spec:
          storageClassName: fast
          resources:
            requests:
              storage: 100Gi
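Once `enableUserWorkload` is set, the user-workload Prometheus stack is tuned through a second ConfigMap in its own namespace (retention and storage values below are examples):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      retention: 7d
      volumeClaimTemplate:
        spec:
          storageClassName: fast   # same placeholder class as above
          resources:
            requests:
              storage: 20Gi
```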

Best Practices

When implementing custom configurations:

  1. Test configurations in a non-production environment first.

  2. Document all customizations thoroughly, including:
     - Purpose and requirements
     - Configuration details
     - Testing procedures
     - Rollback plans

  3. Use version control for all configuration files.

  4. Monitor the impact of customizations on cluster performance and stability.

  5. Keep configurations consistent across similar environments.