Hi,
We got fed up with Redis not behaving well and constantly crashing, even though it was expected to be highly available. So, we moved to Dragonfly, and we’ve been quite happy since then.
However, we noticed an odd behavior, and perhaps it’s something obvious: when we set `podAntiAffinity` in the Dragonfly resource, it is not applied to the StatefulSet (STS) or its pods. I don’t think the STS inherits it directly, but does the operator do anything to propagate the affinity settings defined in the Dragonfly resource?
Below is the Dragonfly resource. The affinity section is not being propagated to the StatefulSet created by the operator:
```yaml
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  labels:
    app.kubernetes.io/managed-by: dragonfly-operator
    app.kubernetes.io/version: 0.0.1
    contact/alerts.pagerduty: Kubernetes_B.Hours
    contact/alerts.slack: engops-notifications
    contact/help.slack: engops-help
    contact/jira: ENGOPS
    contact/owner: EngOps
    helm.sh/chart: levelblue-dragonfly-0.0.4
  name: apm-redis
  namespace: apm
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app: apm-redis
            topologyKey: topology.kubernetes.io/zone
          weight: 100
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app: apm-redis
            topologyKey: kubernetes.io/hostname
          weight: 100
  args:
    - --cluster_mode=emulated
  image: nexus.aveng.me:5000/levelblue-dragonfly:v1.34.2
  imagePullPolicy: Always
  labels:
    app: apm-redis
    app.kubernetes.io/managed-by: dragonfly-operator
    app.kubernetes.io/version: 0.0.1
    contact/alerts.pagerduty: Kubernetes_B.Hours
    contact/alerts.slack: engops-notifications
    contact/help.slack: engops-help
    contact/jira: ENGOPS
    contact/owner: EngOps
    helm.sh/chart: levelblue-dragonfly-0.0.4
  replicas: 3
  resources:
    limits:
      cpu: 600m
      memory: 750Mi
    requests:
      cpu: 500m
      memory: 500Mi
```
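In case it helps anyone reproduce this, here is roughly how we compared the two objects (a sketch, assuming the operator names the StatefulSet `apm-redis` after the Dragonfly resource, which is what we see in our cluster):

```shell
# Inspect the pod template of the StatefulSet the operator generated.
# An empty result means no affinity was propagated from the Dragonfly spec.
kubectl -n apm get statefulset apm-redis \
  -o jsonpath='{.spec.template.spec.affinity}'

# For comparison, what the Dragonfly resource itself declares:
kubectl -n apm get dragonfly apm-redis \
  -o jsonpath='{.spec.affinity}'
```

For us, the first command prints nothing while the second shows the `podAntiAffinity` block above.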