TLS certificate rotation

I have a question about how TLS certificate rotation is intended to work.

As far as I can tell, dragonflydb itself does not detect when a certificate file is updated. If this is correct, how should changed certificates be handled?

For other applications I have used stakater-reloader to trigger a rolling restart when a secret changes, but I don’t see how to do that when Dragonfly is deployed using the operator.

reloader requires an annotation on the StatefulSet to monitor resources and trigger a reload when a resource changes, but I don’t see an option to set an annotation like that. Also, the updateStrategy is hardcoded to OnDelete, so a rolling restart doesn’t work (not sure if reloader would handle that, though).
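
For reference, this is roughly what reloader expects on the StatefulSet (annotation keys as in the stakater/reloader docs; the secret name is just an example):

metadata:
  annotations:
    # reload whenever any mounted Secret/ConfigMap changes
    reloader.stakater.com/auto: "true"
    # or, scoped to a specific secret:
    # secret.reloader.stakater.com/reload: "my-tls-secret"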

Are you using the Operator? :thinking:

Yes, I installed it with the operator.

And how did you attach it? Through the relevant field, right?

I guess it makes sense to add an automatic reload in the operator when the configmap changes.

Is there a Kubernetes way of doing it? Seems common to put it in the operator.

I created the certificate as a Secret using cert-manager and configured it in the Dragonfly object for the operator:

apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: dragonfly
spec:
  replicas: 2
  image: docker.dragonflydb.io/dragonflydb/dragonfly:v1.12.1
  tlsSecretRef:
    name: "dragonfly-tls"
  authentication:
    clientCaCertSecret:
      name: "dragonfly-tls"
      key: "ca.crt"
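
The secret itself comes from a cert-manager Certificate along these lines (the issuer name and DNS names are placeholders for my setup):

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: dragonfly-tls
spec:
  # cert-manager writes the issued cert, key, and ca.crt into this secret
  secretName: dragonfly-tls
  dnsNames:
    - dragonfly.dragonfly.svc.cluster.local
  issuerRef:
    name: my-ca-issuer
    kind: Issuer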

It may make sense to have the operator monitor any objects it attaches (secrets or configmaps) and propagate changes, since it is in control of the application.

An alternative is to use a dedicated controller (e.g. stakater/reloader or wave-k8s/wave) that monitors resources and performs a rolling upgrade of affected workloads.

I don’t think using a dedicated controller makes sense! Yep, the operator should be responsible, and it should also be simple to fix: we need to add a listener on that secret and trigger a rollout when there is a change. I will raise an issue.

Btw, you can trigger a manual rollout in the meanwhile, right? :thinking:

kubectl rollout restart -n dragonfly sts dragonfly doesn’t do anything, since the operator hardcodes the updateStrategy to “OnDelete”.
I can manually delete a pod, wait for it to restart, and then delete the other. Not ideal when you need to remember that the certificate is about to expire…
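
Roughly this, each time the certificate rotates (pod names assume the default StatefulSet naming with replicas: 2; the timeout is arbitrary):

# delete the first pod and wait for its replacement to come back up
kubectl delete pod -n dragonfly dragonfly-0
kubectl wait --for=condition=Ready pod/dragonfly-0 -n dragonfly --timeout=120s
# then do the same for the second one
kubectl delete pod -n dragonfly dragonfly-1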

Not ideal then! Will raise an issue and get at least the manual trigger fixed.