HA Mode Failure

I’m deploying Dragonfly via the Helm chart, using the Argo CD Application below:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dragonfly-da
  namespace: argocd
spec:
  destination:
    name: ''
    namespace: dragonfly-da
    server: 'https://kubernetes.default.svc'
  source:
    chart: dragonfly/helm/dragonfly
    repoURL: ghcr.io/dragonflydb
    targetRevision: v1.15.0
    helm:
      values: |
        replicaCount: 2
        resources:
          requests:
            cpu: 1500m
            memory: 18000Mi
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
  project: default
```

Everything spins up, but HA mode never kicks in and the replicas aren’t recognized:

`kubectl exec service/dragonfly-da -n dragonfly-da -- redis-cli INFO`

```
redis_version:6.2.11
dragonfly_version:df-v1.15.0
redis_mode:standalone
arch_bits:64
os:Linux 5.15.133+ x86_64
multiplexing_api:iouring
tcp_port:6379
thread_count:8
uptime_in_seconds:1300
uptime_in_days:0

# Clients
connected_clients:20
client_read_buffer_bytes:5120
blocked_clients:0
dispatch_queue_entries:0

# Memory
used_memory:2458304
used_memory_human:2.34MiB
used_memory_peak:2458304
used_memory_peak_human:2.34MiB
used_memory_rss:19816448
used_memory_rss_human:18.90MiB
used_memory_peak_rss:19816448
comitted_memory:33095680
maxmemory:24684160614
maxmemory_human:22.99GiB
object_used_memory:0
table_used_memory:2255232
num_buckets:3840
num_entries:0
inline_keys:0
listpack_blobs:0
listpack_bytes:0
small_string_bytes:0
pipeline_cache_bytes:0
dispatch_queue_bytes:0
dispatch_queue_subscriber_bytes:0
dispatch_queue_peak_bytes:504
client_read_buffer_peak_bytes:5120
cache_mode:store
maxmemory_policy:noeviction

# Stats
total_connections_received:280
total_commands_processed:307
instantaneous_ops_per_sec:0
total_pipelined_commands:9
pipelined_latency_usec:1628
total_net_input_bytes:2432
connection_migrations:0
total_net_output_bytes:70385
instantaneous_input_kbps:-1
instantaneous_output_kbps:-1
rejected_connections:-1
expired_keys:0
evicted_keys:0
hard_evictions:0
garbage_checked:0
garbage_collected:0
bump_ups:0
stash_unloaded:0
oom_rejections:0
traverse_ttl_sec:0
delete_ttl_sec:0
keyspace_hits:0
keyspace_misses:0
keyspace_mutations:0
total_reads_processed:30
total_writes_processed:2210
defrag_attempt_total:0
defrag_realloc_total:0
defrag_task_invocation_total:0
reply_count:2210
reply_latency_usec:26883
blocked_on_interpreter:0
ram_hits:0
ram_misses:0

# Replication
role:master
connected_slaves:0
master_replid:39ff877c4227ff49980b31c9bf2043e04fd238b9

# Modules
module:name=ReJSON,ver=20000,api=1,filters=0,usedby=[search],using=[],options=[handle-io-errors]
module:name=search,ver=20000,api=1,filters=0,usedby=[],using=[ReJSON],options=[handle-io-errors]

# Keyspace
db0:keys=0,expires=0,avg_ttl=-1

# Cpu
used_cpu_sys:11.881669
used_cpu_user:21.416741
used_cpu_sys_children:0.1391
used_cpu_user_children:0.0
used_cpu_sys_main_thread:1.459071
used_cpu_user_main_thread:2.694399

# Cluster
cluster_enabled:0
```

`kubectl get pods -l role=master -n dragonfly-da`
```
No resources found in dragonfly-da namespace.
```
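
For completeness, a quick way to confirm each pod directly (a sketch; it loops over every pod in the namespace, so narrow the selection if anything else runs there):

```
# Print the replication section for every pod in the namespace;
# with the chart alone, each one reports role:master and connected_slaves:0.
for p in $(kubectl get pods -n dragonfly-da -o name); do
  echo "== $p"
  kubectl exec -n dragonfly-da "$p" -- redis-cli INFO replication
done
```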

Hey @deep-panther, the Helm chart has never had a way to mark pods as replicas; there is no component in the chart to do that coordination. That logic lives in the Operator. Any reason for choosing the Helm chart over the Operator?
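
For reference, the Operator-managed equivalent looks roughly like the manifest below (a sketch, assuming the dragonfly-operator and its CRDs are already installed; the name, namespace, and resource requests are carried over from your Application):

```
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  name: dragonfly-da
  namespace: dragonfly-da
spec:
  replicas: 2
  resources:
    requests:
      cpu: 1500m
      memory: 18000Mi
```

The operator elects one pod as master, configures the rest as replicas, and labels the pods with `role=master` / `role=replica`, so the `kubectl get pods -l role=master` check above should return the elected master.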

I had assumed that, since `replicaCount` was exposed in the Helm chart values without any context around it, HA support was baked in. Switched to the Operator and it’s working now.