Wazuh on Kubernetes - Cluster Deployment Guide

Running Wazuh on Kubernetes provides automatic component recovery, horizontal scaling, and standardized upgrade workflows. Wazuh publishes Kubernetes manifests and Helm charts for deploying all central components within a Kubernetes cluster.

Deployment architecture

Wazuh components map to the following Kubernetes resource types:

Component               Resource type  Replicas  Rationale
Wazuh Indexer           StatefulSet    3         Stable network identities, persistent storage
Wazuh Manager (master)  StatefulSet    1         State preservation, stable pod identity
Wazuh Manager (worker)  StatefulSet    1+        State preservation, horizontal scaling
Wazuh Dashboard         Deployment     1         Stateless, horizontally scalable
Wazuh Agent             DaemonSet      Per node  Automatic placement on every cluster node

Prerequisites

Kubernetes cluster

  • Kubernetes 1.25 or later
  • kubectl configured for cluster access
  • Helm 3.x (when using Helm charts)
  • A StorageClass with dynamic provisioning support (for PVCs)
  • At least 3 worker nodes for production deployments
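The prerequisites above can be checked from any workstation with cluster access; the commands below are a quick sanity-check sketch, not part of the official procedure:

```shell
# Client and server versions (Kubernetes 1.25+ required)
kubectl version

# Helm 3.x, if deploying via the chart
helm version --short

# A StorageClass with dynamic provisioning must exist
kubectl get storageclass

# Count schedulable nodes (3+ recommended for production)
kubectl get nodes --no-headers | wc -l
```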

Resource requirements

Component              CPU (request/limit)  Memory (request/limit)
Indexer (per replica)  1000m / 2000m        2Gi / 4Gi
Manager master         500m / 1000m         1Gi / 2Gi
Manager worker         500m / 1000m         1Gi / 2Gi
Dashboard              250m / 500m          512Mi / 1Gi

Manifest-based deployment

Clone the repository

git clone https://github.com/wazuh/wazuh-kubernetes.git -b v4.14.3
cd wazuh-kubernetes

Create a namespace

kubectl create namespace wazuh

Generate certificates

Run the certificate generation script:

cd wazuh/certs
./generate_certs.sh

Create Secrets from the generated certificates:

kubectl -n wazuh create secret generic indexer-certs \
  --from-file=root-ca.pem \
  --from-file=node.pem \
  --from-file=node-key.pem \
  --from-file=admin.pem \
  --from-file=admin-key.pem

Create secrets for the manager and dashboard:

kubectl -n wazuh create secret generic manager-certs \
  --from-file=root-ca.pem \
  --from-file=manager.pem \
  --from-file=manager-key.pem

kubectl -n wazuh create secret generic dashboard-certs \
  --from-file=root-ca.pem \
  --from-file=dashboard.pem \
  --from-file=dashboard-key.pem
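Before moving on, it is worth confirming that all three Secrets exist and contain the expected keys; a quick check (not from the official guide):

```shell
# All three certificate Secrets should be listed
kubectl -n wazuh get secrets indexer-certs manager-certs dashboard-certs

# Show the key names stored in a Secret without printing the values
kubectl -n wazuh describe secret indexer-certs
```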

Deploy the indexer

kubectl apply -n wazuh -f wazuh/indexer/

Wait until all pods reach Ready state:

kubectl -n wazuh get pods -l app=wazuh-indexer -w

Verify the cluster health:

kubectl -n wazuh exec -it wazuh-indexer-0 -- \
  curl -sk -u admin:SecretPassword https://localhost:9200/_cluster/health?pretty

Deploy the manager

kubectl apply -n wazuh -f wazuh/manager/

Deploy the dashboard

kubectl apply -n wazuh -f wazuh/dashboard/

Verify the deployment

kubectl -n wazuh get all

All pods should be in the Running state, and each Service should have endpoints assigned.
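As an alternative to polling `get pods`, `kubectl wait` can block until the pods report Ready. A sketch, assuming the `app` labels used by the manifests above:

```shell
# Wait up to 5 minutes for each component to become Ready
kubectl -n wazuh wait --for=condition=Ready pod \
  -l app=wazuh-indexer --timeout=300s
kubectl -n wazuh wait --for=condition=Ready pod \
  -l app=wazuh-manager --timeout=300s
kubectl -n wazuh wait --for=condition=Ready pod \
  -l app=wazuh-dashboard --timeout=300s
```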

Helm-based deployment

Add the repository

helm repo add wazuh https://wazuh.github.io/wazuh-kubernetes
helm repo update

Review available parameters

helm show values wazuh/wazuh > values.yaml

Install the chart

helm install wazuh wazuh/wazuh \
  -n wazuh --create-namespace \
  -f values.yaml

Key values.yaml parameters

indexer:
  replicas: 3
  resources:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  persistence:
    size: 50Gi
    storageClass: gp3

manager:
  master:
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
  worker:
    replicas: 1
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
  persistence:
    size: 20Gi

dashboard:
  replicas: 1
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
  service:
    type: LoadBalancer

Persistent volumes

Storage requirements

StatefulSets create a PersistentVolumeClaim (PVC) per replica automatically. Ensure a StorageClass with dynamic provisioning is configured in the cluster.

Check PVC status

kubectl -n wazuh get pvc

StorageClass recommendations

Cloud platform  StorageClass      Type
AWS             gp3               EBS
GCP             standard-rwo      Persistent Disk
Azure           managed-premium   Managed Disk
On-premises     local-path / nfs  Local / NFS

For production, use SSD storage with at least 3000 IOPS for indexer nodes.
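On AWS, for example, a gp3 StorageClass can request IOPS explicitly. The manifest below is a sketch assuming the EBS CSI driver is installed; the class name and figures are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"        # matches the recommended minimum for indexer nodes
  throughput: "125"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```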

Backups

Use the OpenSearch snapshot API for indexer data backups:

kubectl -n wazuh exec -it wazuh-indexer-0 -- \
  curl -sk -u admin:SecretPassword \
  -X PUT "https://localhost:9200/_snapshot/backup" \
  -H "Content-Type: application/json" \
  -d '{"type":"fs","settings":{"location":"/mnt/snapshots"}}'

The /mnt/snapshots directory must be mounted as an additional volume accessible by all indexer nodes.
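Once the repository is registered, snapshots are created and listed through the same API. A sketch; the snapshot name and credentials are illustrative:

```shell
# Take a snapshot of all indices
kubectl -n wazuh exec -it wazuh-indexer-0 -- \
  curl -sk -u admin:SecretPassword \
  -X PUT "https://localhost:9200/_snapshot/backup/snapshot-1?wait_for_completion=true"

# List snapshots stored in the repository
kubectl -n wazuh exec -it wazuh-indexer-0 -- \
  curl -sk -u admin:SecretPassword \
  "https://localhost:9200/_snapshot/backup/_all?pretty"
```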

TLS configuration

Inter-component communication

All communication between Wazuh components is TLS-encrypted. Certificates are stored in Kubernetes Secrets and mounted into pods.

Using cert-manager

For automated certificate lifecycle management, integrate with cert-manager:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wazuh-indexer-cert
  namespace: wazuh
spec:
  secretName: wazuh-indexer-tls
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
  commonName: wazuh-indexer
  dnsNames:
    - wazuh-indexer
    - wazuh-indexer-0.wazuh-indexer
    - wazuh-indexer-1.wazuh-indexer
    - wazuh-indexer-2.wazuh-indexer
  usages:
    - digital signature
    - key encipherment
    - server auth
    - client auth

External access via Ingress

To expose the dashboard through an Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wazuh-dashboard
  namespace: wazuh
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: wazuh.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: wazuh-dashboard
                port:
                  number: 443
  tls:
    - hosts:
        - wazuh.example.com
      secretName: wazuh-dashboard-tls

Resource limits and autoscaling

Setting resource limits

All components should have explicit requests and limits:

resources:
  requests:
    cpu: 1000m
    memory: 2Gi
  limits:
    cpu: 2000m
    memory: 4Gi

HorizontalPodAutoscaler

The dashboard and manager workers support horizontal scaling:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wazuh-dashboard-hpa
  namespace: wazuh
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wazuh-dashboard
  minReplicas: 1
  maxReplicas: 3
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

The indexer does not autoscale - changing the OpenSearch cluster node count requires manual intervention for shard rebalancing.
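When scaling the indexer down, shards can be drained from a node first by excluding it from allocation via the OpenSearch cluster settings API. A sketch; the pod name is illustrative:

```shell
# Move shards off wazuh-indexer-2 before removing that replica
kubectl -n wazuh exec -it wazuh-indexer-0 -- \
  curl -sk -u admin:SecretPassword \
  -X PUT "https://localhost:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{"transient":{"cluster.routing.allocation.exclude._name":"wazuh-indexer-2"}}'
```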

DaemonSet for agents

To monitor Kubernetes nodes, deploy agents as a DaemonSet:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: wazuh-agent
  namespace: wazuh
spec:
  selector:
    matchLabels:
      app: wazuh-agent
  template:
    metadata:
      labels:
        app: wazuh-agent
    spec:
      containers:
        - name: wazuh-agent
          image: wazuh/wazuh-agent:4.14.3
          env:
            - name: WAZUH_MANAGER
              value: "wazuh-manager-master-0.wazuh-manager-master"
            - name: WAZUH_AGENT_GROUP
              value: "kubernetes"
          volumeMounts:
            - name: host-root
              mountPath: /host
              readOnly: true
            - name: host-var-log
              mountPath: /var/log
              readOnly: true
          securityContext:
            privileged: true
      volumes:
        - name: host-root
          hostPath:
            path: /
        - name: host-var-log
          hostPath:
            path: /var/log
      tolerations:
        - operator: Exists
      hostNetwork: true
      hostPID: true

Excluding nodes

To exclude specific nodes from monitoring, use nodeSelector or affinity rules:

spec:
  template:
    spec:
      nodeSelector:
        wazuh-agent: "enabled"
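With that selector in place, the agent runs only on nodes carrying the label, which can be applied per node; the node name below is illustrative:

```shell
# Opt a node in to agent monitoring
kubectl label node worker-1 wazuh-agent=enabled

# Remove the label to exclude the node again
kubectl label node worker-1 wazuh-agent-
```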

Scaling

Scaling the indexer

Increasing the indexer replica count requires updating the StatefulSet and rebalancing shards:

kubectl -n wazuh scale statefulset wazuh-indexer --replicas=5

After adding nodes, verify shard distribution:

kubectl -n wazuh exec -it wazuh-indexer-0 -- \
  curl -sk -u admin:SecretPassword \
  https://localhost:9200/_cat/allocation?v

Scaling managers

Adding manager worker nodes:

kubectl -n wazuh scale statefulset wazuh-manager-worker --replicas=3

The master node does not scale - a Wazuh cluster supports exactly one master.

Troubleshooting

Indexer pods in CrashLoopBackOff

Symptoms: indexer pods restart continuously.

Solution:

  1. Check pod logs:
kubectl -n wazuh logs wazuh-indexer-0 --previous
  2. If the error relates to vm.max_map_count, configure the parameter on all worker nodes:
sysctl -w vm.max_map_count=262144

Or use an init container:

initContainers:
  - name: sysctl
    image: busybox
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
  3. Check StorageClass availability and PVC status:
kubectl -n wazuh get pvc
kubectl -n wazuh describe pvc wazuh-indexer-data-wazuh-indexer-0

Manager cannot reach the indexer

Symptoms: the manager starts, but Filebeat cannot deliver data to the indexer.

Solution:

  1. Verify DNS resolution from within the manager pod:
kubectl -n wazuh exec -it wazuh-manager-master-0 -- \
  nslookup wazuh-indexer
  2. Test indexer reachability:
kubectl -n wazuh exec -it wazuh-manager-master-0 -- \
  curl -sk https://wazuh-indexer:9200
  3. Confirm that the manager and indexer certificates are signed by the same CA

PVCs stuck in Pending state

Symptoms: PVCs do not bind to PVs.

Solution:

  1. Verify that a StorageClass exists:
kubectl get storageclass
  2. Confirm that the provisioner is running:
kubectl get pods -n kube-system | grep provisioner
  3. Inspect PVC events:
kubectl -n wazuh describe pvc <pvc-name>

Agents fail to connect to the manager

Symptoms: DaemonSet agents are running but do not register with the manager.

Solution:

  1. Verify the WAZUH_MANAGER environment variable in the DaemonSet configuration - it should point to the headless service or a specific master pod

  2. Review network policies:

kubectl -n wazuh get networkpolicy
  3. Confirm that ports 1514 and 1515 are accessible between agent and manager pods
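If network policies restrict traffic in the namespace, a policy along these lines would admit agent connections to the manager; a sketch assuming the `app` labels shown earlier. Note that with hostNetwork: true, as in the DaemonSet above, some CNIs treat agent traffic as coming from the node IP, in which case an ipBlock rule covering the node CIDR may be needed instead of the podSelector:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-agents-to-manager
  namespace: wazuh
spec:
  podSelector:
    matchLabels:
      app: wazuh-manager
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: wazuh-agent
      ports:
        - protocol: TCP
          port: 1514   # agent event traffic
        - protocol: TCP
          port: 1515   # agent enrollment
```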
