Deploying RavenDB with Helm Chart

by Omer Ratsaby

Overview

When you’re working on Kubernetes, sometimes it takes more than a single Pod or Service to deploy your app – and it gets harder still when the app isn’t stateless. RavenDB is a database, so deploying it on Kubernetes (a very ephemeral and stateless world) isn’t so straightforward: a deployment usually consists of dozens of Kubernetes resources, with many moving parts working together.

Here comes Helm, a “package manager” for Kubernetes. It allows creating “charts” – bundles of multiple resources that are installable at once. This way, DevOps teams pack complex deployments into atomic, manageable packages.

The RavenDB Helm Chart follows this model by packaging all the building blocks required to run a RavenDB cluster – including StatefulSets, Services, Ingress rules, and configuration logic – into a single, customizable deployment unit.

Rather than manually designing the solution and crafting dozens of Kubernetes manifests, users can rely on the Helm chart to deploy a fully functional, production-ready, secured RavenDB cluster with minimal effort.

Long story short? Supply it with the generated “setup package”, and Helm will spin up your cluster automatically.

Let’s give it a try

(ground-zero guide)

Step 1. Define your RavenDB cluster & generate setup package

To generate Helm chart inputs, we’ll use the official `rvn` CLI tool.

Instead of manually generating certificates and configuring security settings, RavenDB does the heavy lifting through the `rvn create-setup-package` command.

This command bundles all required TLS certificates, topology configuration, and cluster secrets into a ready-to-use setup package.

Scaffold the Configuration File

Rather than writing the setup file from scratch, you can scaffold a template using the following command:

  docker run --rm \
    -v "/home/$USER:/ravendb" \
    ravendb/ravendb:latest \
    /bin/bash -c "/usr/lib/ravendb/server/rvn init-setup-params -m=lets-encrypt -o=/ravendb/setup.json"

This creates a setup.json file scaffolded with placeholder values:

  {
    "License": {
      "Name": "",
      "Keys": [
        ""
      ]
    },
    "Email": "",
    "Domain": "your-domain",
    "RootDomain": "development.run",
    "NodeSetupInfos": {
      "A": {
        "PublicServerUrl": "https://your-domain.development.run",
        "PublicTcpServerUrl": "tcp://your-domain.development.run:38888",
        "Port": 443,
        "TcpPort": 38888,
        "Addresses": [
          "0.0.0.0"
        ]
      }
    }
  }

The next step is to customize the scaffolded file to match your cluster topology.

In this guide, we’ll modify it to define a three-node setup.

    # setup.json
    {
      "License": { "Id": "", "Name": "","Keys": [] },
      "Email": "user@ravendb.net",
      "Domain": "my-domain",
      "RootDomain": "development.run",
      "NodeSetupInfos": {
        "A": {
          "PublicServerUrl": "https://a.my-domain.development.run:443",
          "PublicTcpServerUrl": "tcp://a.my-domain.development.run:443",
          "Port": 443,
          "TcpPort": 38888,
          "Addresses": ["0.0.0.0"]
        },
        "B": {
          "PublicServerUrl": "https://b.my-domain.development.run:443",
          "PublicTcpServerUrl": "tcp://b.my-domain.development.run:443",
          "Port": 443,
          "TcpPort": 38888,
          "Addresses": ["0.0.0.0"]
        },
        "C": {
          "PublicServerUrl": "https://c.my-domain.development.run:443",
          "PublicTcpServerUrl": "tcp://c.my-domain.development.run:443",
          "Port": 443,
          "TcpPort": 38888,
          "Addresses": ["0.0.0.0"]
        }
      }
    }

What Does This Configuration Define?

  1. License Information

The “License” section holds your RavenDB license details – paste your license there. If you don’t have one, you can obtain a free developer license from the RavenDB website.

  2. Let’s Encrypt Integration

The Email field and the RootDomain (development.run) are used for automated TLS certificate provisioning via Let’s Encrypt. RavenDB supports a built-in Let’s Encrypt mode, which handles certificate generation and renewal internally. By default, it uses the provided email for registration with Let’s Encrypt and issues publicly trusted certificates for your internal cluster nodes.

Let’s Encrypt is a free, automated, and open Certificate Authority that RavenDB can leverage to secure your cluster with real TLS certificates. This avoids the overhead of generating self-signed certs manually. The use of a valid email is required for recovery and notification purposes by Let’s Encrypt.

  3. Node Setup Formation

Each node (A, B, C) is assigned a unique public server URL (e.g., https://a.my-domain.development.run), along with a corresponding public TCP URL. All nodes use the same ports (443 for HTTPS and 38888 for TCP) to enable consistent SNI-based routing through a shared ingress. This simplifies the configuration and ensures seamless traffic forwarding based on hostname alone.
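
Once the cluster is running (after Steps 3–5), you can verify this SNI-based routing from the outside by asking the ingress which certificate it serves for a given hostname. A minimal check with openssl, assuming the /etc/hosts entries from Step 4 are in place:

  # Request node A's certificate by SNI; the subject should match your
  # RavenDB domain if the ingress routes the hostname correctly.
  $ openssl s_client -connect a.my-domain.development.run:443 \
        -servername a.my-domain.development.run </dev/null 2>/dev/null \
    | openssl x509 -noout -subject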

With this structured configuration, we are now ready to generate the setup package:

  $ docker run --rm \
      -v "/home/$USER:/ravendb" \
      ravendb/ravendb:latest \
      /bin/bash -c "cd /usr/lib/ravendb/server && ./rvn create-setup-package -m=lets-encrypt -s=/ravendb/setup.json -o=/ravendb/setup_package.zip --generate-helm-values=/ravendb/values.yaml"

What This Command Does:

⚙️ Runs a temporary RavenDB container to process the setup.

⚙️ Uses the setup.json file as input to define the cluster configuration.

⚙️ Handles Let’s Encrypt certs and DNS validation.

⚙️ Generates node-specific settings and the required TLS certificates.

✅ Outputs everything into a single setup package (setup_package.zip).

✅ Generates the matching values.yaml for the Helm chart.

During execution, you will see log messages indicating progress:

  [13:23:34 INFO] Setting up RavenDB in Let's Encrypt security mode.
  ...
  [13:24:07 INFO] Successfully updated DNS record(s) and challenge(s) in my-domain.development.run
  ...
  [13:24:12 INFO] Adding node 'A' to the cluster.
  [13:24:12 INFO] Adding node 'B' to the cluster.
  [13:24:12 INFO] Adding node 'C' to the cluster.
  [13:24:12 INFO] Generating the client certificate.
  ...
  [13:24:17 INFO] ZIP file was successfully added to this location: /ravendb/setup_package.zip

After the setup command completes, you’ll find the following files in your working directory:

  $ ls
  setup_package.zip  values.yaml
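
Before using the package, you can peek inside it without extracting anything. Based on the files referenced later in this guide, expect a folder per node tag plus an admin client certificate:

  # List the archive contents (per-node folders like A/, B/, C/ with server
  # certificates, plus admin.client.certificate.my-domain.pfx).
  $ unzip -l setup_package.zip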

The values.yaml file contains the Helm chart configuration derived from your original setup.json.

You can open and review values.yaml to verify how it maps to your input settings, and customize it further if needed:

  storageSize: 10Gi
  ravenImageTag: latest
  imagePullPolicy: IfNotPresent
  ingressClassName: nginx
  nodeTags:
    - A
    - B
    - C
  domain: my-domain.development.run
  email: user@ravendb.net
  setupMode: LetsEncrypt
  license: '{...}'
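
You don’t have to edit the file directly, either: any of these keys can be overridden when installing the chart in Step 5. A sketch, using the same install command as Step 5 with a hypothetical larger volume size:

  # --set takes precedence over values supplied with -f.
  $ helm install ravendb ravendb/ravendb-cluster \
      --set-file package=setup_package.zip \
      -f values.yaml \
      --set storageSize=20Gi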

Step 2. Initialize Your Local Kubernetes Environment

Refer to the official Minikube documentation to install the latest versions of Minikube, a compatible container/virtual machine manager, and the kubectl CLI tool.

Once your environment is ready, launch the Kubernetes control plane locally with:

  $ minikube start
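
Before moving on, it’s worth a quick sanity check that the node is up and kubectl is pointed at it:

  # Both commands should report a running, Ready single-node cluster.
  $ minikube status
  $ kubectl get nodes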

Step 3. Networking Setup to Expose RavenDB Outside the Kubernetes Internal Network

Kubernetes, by design, does not expose internal services to external clients in a secure or routable way. For applications like RavenDB, which require both HTTPS access for clients and TCP-level communication between nodes, a robust ingress setup is critical.

To enable hostname-based routing over HTTPS and TCP passthrough for RavenDB’s clustering protocol, you must deploy an ingress controller capable of handling both Layer 7 (HTTP/HTTPS) and Layer 4 (TCP) traffic.

We’ve prepared a preconfigured, dedicated NGINX Ingress controller manifest to meet RavenDB’s ingress and networking requirements.

Download and apply it:

  $ wget https://raw.githubusercontent.com/ravendb/helm-charts/master/charts/ravendb-cluster/misc/nginx-ingress-ravendb.yaml

  $ kubectl apply -f nginx-ingress-ravendb.yaml

NOTE: If you want to use a different ingress controller or a newer version of the NGINX deployment, follow the steps from the chart README. Our pre-configured manifest differs from the original in just a few ways, such as opening the ports responsible for RavenDB communication and ensuring TLS passthrough is enabled.
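
TLS passthrough in particular is easy to miss: for the NGINX Ingress controller it is gated behind the --enable-ssl-passthrough command-line argument. Assuming the deployment keeps the upstream name ingress-nginx-controller, you can confirm the flag is present:

  # Print the controller's arguments and look for --enable-ssl-passthrough.
  $ kubectl -n ingress-nginx get deployment ingress-nginx-controller \
      -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep ssl-passthrough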

Confirm the NGINX Ingress controller is deployed successfully:

  $ kubectl get svc -n ingress-nginx
  NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
  ingress-nginx-controller             LoadBalancer   10.98.131.184   <pending>     80:30234/TCP,443:31451/TCP   79s
  ingress-nginx-controller-admission   ClusterIP      10.99.52.54     <none>        443/TCP                      79s

• Notice the <pending> state in the EXTERNAL-IP field of the ingress-nginx-controller service. In self-hosted environments, services of type LoadBalancer do not automatically receive routable IP addresses, unlike in cloud environments such as GKE or EKS. Without native integration with a cloud provider, the cluster cannot advertise external IPs or handle traffic routing to services independently.
• A load balancer mechanism must be introduced manually to bridge this gap in self-hosted setups. In the context of Minikube, this is achieved using the minikube tunnel feature. When started, the tunnel process runs with elevated privileges and creates a network route on the host machine that allows Minikube to assign and expose external IPs for LoadBalancer services.

Run minikube tunnel in a separate terminal. This creates a network route that exposes LoadBalancer services with real IPs by bridging internal cluster traffic to your local network:

  $ minikube tunnel
  Status:
          machine: minikube
          pid: 286556
          route: 10.96.0.0/12 -> 192.168.49.2
          minikube: Running
          services: [ingress-nginx-controller]

This means cluster IPs are advertised through the host, making services externally reachable. After starting the tunnel, the ingress controller receives a proper external IP:

  $ kubectl get svc -n ingress-nginx
  NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                      AGE
  ingress-nginx-controller             LoadBalancer   10.98.131.184   10.98.131.184   80:30234/TCP,443:31451/TCP   14m
  ingress-nginx-controller-admission   ClusterIP      10.99.52.54     <none>          443/TCP                      14m
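
Before wiring up DNS, you can sanity-check that the controller actually answers on that IP. With no matching hostname configured yet, NGINX should respond from its default backend, so a 404 here is the expected (and sufficient) proof that traffic flows:

  # Hit the ingress IP directly; substitute your own EXTERNAL-IP value.
  # A "404 Not Found" from the default backend means the controller is reachable.
  $ curl -k https://10.98.131.184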

Step 4. Configure Domain-Based Resolution for RavenDB Nodes

RavenDB’s secured mode relies on domain names matching each node’s certificates. For the cluster to operate correctly, clients must be able to resolve these domain names to the correct IP address, typically that of the ingress controller.

In environments such as browsers, WSL, or any external client accessing the cluster, domains like a.my-domain.development.run must resolve to the external IP assigned to the ingress controller.

In a self-hosted Kubernetes setup, no built-in DNS provisioning or external DNS record management exists. This means the user must ensure that local machines can resolve these custom domains to the appropriate IP addresses.

One way to achieve this is by updating the /etc/hosts file on your local machine:

  $ INGRESS_IP=$(kubectl get svc ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  $ for node in a b c; do
      echo "$INGRESS_IP  $node.my-domain.development.run ${node}-tcp.my-domain.development.run" | sudo tee -a /etc/hosts > /dev/null
    done

What This Command Does:

• Retrieves the external IP assigned to the ingress controller.
• Appends host entries for each RavenDB node (a, b, and c) to /etc/hosts.
• Ensures that your machine’s tools and browsers can resolve HTTPS and TCP hostnames to the correct ingress IP, as the check below confirms.
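
One quick confirmation, assuming a Linux host where getent is available:

  # Each hostname should resolve to the ingress controller's external IP.
  $ getent hosts a.my-domain.development.run b.my-domain.development.run c.my-domain.development.run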

Step 5. Start RavenDB

With the networking foundation in place, the next step is deploying the RavenDB cluster using Helm.

Add the official RavenDB Helm repository:

  $ helm repo add ravendb https://ravendb.github.io/helm-charts/charts
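
Optionally, refresh the local chart index and check what the repository exposes before installing:

  # Update the cached index and list the charts served by the repo.
  $ helm repo update
  $ helm search repo ravendb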

Use the helm install command to deploy the cluster, referencing the chart, the generated setup package, and the customized values.yaml file:

  $ helm install ravendb ravendb/ravendb-cluster --set-file package=setup_package.zip -f values.yaml
  NAME: ravendb
  LAST DEPLOYED: Tue May 20 09:48:08 2025
  NAMESPACE: default
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None

Once the Helm chart has been installed, you can inspect the deployed resources to confirm that everything started as expected:

  $ kubectl get pods -n ravendb
  NAME                      READY   STATUS      RESTARTS   AGE
  ravendb-a-0               1/1     Running     0          30s
  ravendb-b-0               1/1     Running     0          30s
  ravendb-c-0               1/1     Running     0          30s
  ravendb-cluster-creator   0/1     Completed   0          30s

– The ravendb-[a|b|c]-0 pods are the three RavenDB nodes that make up the cluster. Each is deployed as a StatefulSet pod, preserving stable network identities and persistent volumes. Their names combine the node tag (a, b, c) with the StatefulSet ordinal (0).

– The ravendb-cluster-creator is a special helper pod. It executes once and uses the setup_package.zip contents to bootstrap the cluster – establishing the secure topology, certificates, and licensing as defined earlier in the setup JSON.
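
If anything looks off, the creator pod’s logs are the first place to check; since it runs to completion, they record the entire bootstrap sequence:

  # Inspect the one-shot bootstrap pod's output.
  $ kubectl logs ravendb-cluster-creator -n ravendb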

Step 6. Access RavenDB Studio and Interact via CLI

Once the cluster is up and running, you can access the RavenDB Studio through your browser using any of the domain names exposed via the ingress controller, for example:

  https://a.my-domain.development.run

Log in using the generated client certificate. In the Cluster View, you should see all three nodes (A, B, C) connected and healthy, reflecting the secure topology established during setup.

We can also verify our setup from the command line, using HTTPS and client authentication via certificates.

The setup process (Step 1) generated PKCS#12 (.pfx) certificates for each node, which we can convert into separate PEM-format files for use with tools like curl.

  $ unzip setup_package.zip -d setup_package

  $ sudo openssl pkcs12 -in ./setup_package/A/cluster.server.certificate.my-domain.pfx -clcerts -nokeys -out ./setup_package/A/cluster.server.certificate.pem -legacy -passin pass:

  $ sudo openssl pkcs12 -in ./setup_package/A/cluster.server.certificate.my-domain.pfx -nocerts -nodes -out ./setup_package/A/cluster.server.certificate.key -legacy -passin pass:

  $ sudo chmod 640 ./setup_package/admin.client.certificate.my-domain.pfx

⚠️ These commands assume the certificate was created with an empty password (-passin pass:). Adjust accordingly if a password was used.

Once certificates are in place, you can query the cluster directly using curl and inspect its topology:

  $ sudo curl --cert ./setup_package/A/cluster.server.certificate.pem --key ./setup_package/A/cluster.server.certificate.key https://a.my-domain.development.run/cluster/topology | jq '.Topology.AllNodes'
  {
    "A": "https://a.my-domain.development.run",
    "B": "https://b.my-domain.development.run",
    "C": "https://c.my-domain.development.run"
  }
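
The same certificate pair can be pointed at any node, which makes it easy to confirm that each hostname routes to a live server. For example, you can ask every node for its build information (RavenDB’s /build/version endpoint reports the server’s version):

  # Each node should answer with its build and product version.
  $ for node in a b c; do
      sudo curl --cert ./setup_package/A/cluster.server.certificate.pem \
           --key ./setup_package/A/cluster.server.certificate.key \
           "https://$node.my-domain.development.run/build/version"
      echo
    done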

Conclusion

We’ve defined our cluster, configured networking, and deployed it to Kubernetes – without the need to design and fine-tune Kubernetes resources like StatefulSets, thanks to the Helm chart. This way, you can focus on the actual deployment and maintenance, not the Kubernetes architecture. Customize this setup to your needs, and enjoy the atomicity of Helm packages with RavenDB.
