I recently migrated from Redis to Dragonfly in my production environment, setting up an instance with `--cache_mode` and `--maxmemory=160G`. Everything worked as expected at first, but as memory usage approaches the limit, old keys are evicted, yet memory usage keeps increasing instead of stabilizing or decreasing.
Observed behavior (I'm watching these numbers roughly as sketched after this list):
- After restarting the instance, the `.dfs` dump is restored correctly and memory usage drops back to normal. But once new writes resume, memory usage starts growing again and eventually exceeds `maxmemory`.
- Running `MEMORY DEFRAGMENT` manually has no visible effect: memory stats don't change, even if I completely stop new writes.
- Key eviction occurs while writes are active, but stops immediately once I pause writes, even though memory usage remains above the limit.
- After hours of inactivity (no new keys), memory usage remains high and does not reduce.
- I tried increasing `--max_eviction_per_heartbeat` to 200000 (from the default of 100), but that didn't make a difference.
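For context, this is roughly the loop I use to watch memory and eviction while testing. It only reads the same INFO fields shown in the stats dump below; the host, port, and 10-second interval are arbitrary:

while true; do
  # used_memory / used_memory_rss / maxmemory from the Memory section
  redis-cli -p 6379 info memory | grep -E '^(used_memory|used_memory_rss|maxmemory):'
  # evicted_keys from the Stats section
  redis-cli -p 6379 info stats | grep -E '^evicted_keys:'
  echo '---'
  sleep 10
done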
Here’s how I’m running my Dragonfly instance:
docker run -d --name "app_cache" \
--network host \
--log-driver=gcplogs \
--log-opt gcp-log-cmd=true \
-v /data:/data \
-m "172g" \
docker.dragonflydb.io/dragonflydb/dragonfly \
--port "6379" \
--logtostderr \
--dir /data \
--cache_mode \
--maxmemory 160G
And these are the instance’s stats:
127.0.0.1:6379> info all
# Server
redis_version:7.4.0
dragonfly_version:df-v1.31.1
redis_mode:standalone
arch_bits:64
os:Linux 6.1.0-37-cloud-amd64 x86_64
thread_count:22
multiplexing_api:epoll
tcp_port:6379
uptime_in_seconds:3993
uptime_in_days:0
# Clients
connected_clients:1
max_clients:64000
client_read_buffer_bytes:256
blocked_clients:0
pipeline_queue_length:0
send_delay_ms:0
timeout_disconnects:0
# Memory
used_memory:154617454176
used_memory_human:144.00GiB
used_memory_peak:154619189920
used_memory_peak_human:144.00GiB
fibers_stack_vms:10256368
fibers_count:157
used_memory_rss:176138014720
used_memory_rss_human:164.04GiB
used_memory_peak_rss:176138014720
maxmemory:171798691840
maxmemory_human:160.00GiB
used_memory_lua:0
object_used_memory:96439049648
type_used_memory_string:96439049648
table_used_memory:56645575768
prime_capacity:852799080
expire_capacity:850650360
num_entries:474603534
inline_keys:0
small_string_bytes:48701660592
pipeline_cache_bytes:0
dispatch_queue_bytes:0
dispatch_queue_subscriber_bytes:0
dispatch_queue_peak_bytes:0
client_read_buffer_peak_bytes:65792
tls_bytes:5664
snapshot_serialization_bytes:0
commands_squashing_replies_bytes:0
lsn_buffer_size_sum:0
lsn_buffer_bytes_sum:0
cache_mode:cache
maxmemory_policy:eviction
replication_streaming_buffer_bytes:0
replication_full_sync_buffer_bytes:0
# Stats
total_connections_received:426235
total_handshakes_started:0
total_handshakes_completed:0
total_commands_processed:411575
instantaneous_ops_per_sec:0
total_pipelined_commands:0
pipeline_throttle_total:0
pipelined_latency_usec:0
total_net_input_bytes:55554981821
connection_migrations:0
connection_recv_provided_calls:0
total_net_output_bytes:7043752
rdb_save_usec:0
rdb_save_count:0
big_value_preemptions:0
compressed_blobs:0
instantaneous_input_kbps:-1
instantaneous_output_kbps:-1
rejected_connections:-1
expired_keys:44717
evicted_keys:90086370
total_heartbeat_expired_keys:44567
total_heartbeat_expired_bytes:4755136
total_heartbeat_expired_calls:8634627
hard_evictions:0
garbage_checked:45269533
garbage_collected:150
bump_ups:0
stash_unloaded:4
oom_rejections:0
traverse_ttl_sec:17887
delete_ttl_sec:21
keyspace_hits:0
keyspace_misses:0
keyspace_mutations:564734621
total_reads_processed:1724481
total_writes_processed:429089
huffenc_attempt_total:0
huffenc_success_total:0
defrag_attempt_total:507013963
defrag_realloc_total:8012608
defrag_task_invocation_total:1417605
reply_count:429089
reply_latency_usec:0
blocked_on_interpreter:0
lua_interpreter_cnt:0
lua_blocked_total:0
lua_interpreter_return:0
lua_force_gc_calls:0
lua_gc_freed_memory_total:0
lua_gc_duration_total_sec:0
# Tiered
tiered_entries:0
tiered_entries_bytes:0
tiered_total_stashes:0
tiered_total_fetches:0
tiered_total_cancels:0
tiered_total_deletes:0
tiered_total_uploads:0
tiered_total_stash_overflows:0
tiered_heap_buf_allocations:0
tiered_registered_buf_allocations:0
tiered_allocated_bytes:0
tiered_capacity_bytes:0
tiered_pending_read_cnt:0
tiered_pending_stash_cnt:0
tiered_small_bins_cnt:0
tiered_small_bins_entries_cnt:0
tiered_small_bins_filling_bytes:0
tiered_cold_storage_bytes:0
tiered_offloading_steps:0
tiered_offloading_stashes:0
tiered_ram_hits:0
tiered_ram_cool_hits:0
tiered_ram_misses:0
# Persistence
current_snapshot_perc:0
current_save_keys_processed:0
current_save_keys_total:0
last_success_save:1752515648
last_saved_file:
last_success_save_duration_sec:0
loading:0
saving:0
current_save_duration_sec:0
rdb_changes_since_last_success_save:564734621
rdb_bgsave_in_progress:0
rdb_last_bgsave_status:ok
last_failed_save:0
last_error:
last_failed_save_duration_sec:0
# Transaction
tx_shard_polls:0
tx_shard_optimistic_total:409898
tx_shard_ooo_total:0
tx_global_total:0
tx_normal_total:409898
tx_inline_runs_total:18655
tx_schedule_cancel_total:0
tx_batch_scheduled_items_total:391243
tx_batch_schedule_calls_total:391243
tx_with_freq:409898,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
tx_queue_len:0
eval_io_coordination_total:0
eval_shardlocal_coordination_total:0
eval_squashed_flushes:0
# Replication
role:master
connected_slaves:0
master_replid:cacbb1d0e0d11212117a0782f6810a744fcccf03
# Commandstats
cmdstat_command:calls=2,usec=12673,usec_per_call=6336.5
cmdstat_info:calls=1549,usec=1680434,usec_per_call=1084.85
cmdstat_memory:calls=3,usec=1196,usec_per_call=398.667
cmdstat_ping:calls=122,usec=3259,usec_per_call=26.7131
cmdstat_set:calls=409898,usec=43167388,usec_per_call=105.313
# Modules
module:name=ReJSON,ver=20000,api=1,filters=0,usedby=[search],using=,options=[handle-io-errors]
module:name=search,ver=20000,api=1,filters=0,usedby=,using=[ReJSON],options=[handle-io-errors]
# Search
search_memory:0
search_num_indices:0
search_num_entries:0
# Errorstats
COMMAND DOCS Not Implemented:1
-LOADING Dragonfly is loading the dataset in memory:14665
syntax_error:2
# Keyspace
db0:keys=474603534,expires=474187297,avg_ttl=-1
# Cpu
used_cpu_sys:405.383363
used_cpu_user:1968.673714
used_cpu_sys_children:0.1299
used_cpu_user_children:0.1633
used_cpu_sys_main_thread:16.125300
used_cpu_user_main_thread:88.914462
# Cluster
cluster_enabled:0
migration_errors_total:0
total_migrated_keys:0
And the result of `memory malloc-stats`:
127.0.0.1:6379> memory malloc-stats
___ Begin malloc stats ___
arena: 14913536, ordblks: 56, smblks: 0
hblks: 0, hblkhd: 0, usmblks: 0
fsmblks: 0, uordblks: 10634416, fordblks: 4279120, keepcost: 197632
___ End malloc stats ___
___ Begin mimalloc stats ___
heap stats: peak total freed current unit count
reserved: 168.0 GiB 168.0 GiB 0 168.0 GiB
committed: 163.3 GiB 168.0 GiB 4.8 GiB 163.1 GiB
reset: 0
purged: 337.5 MiB
touched: 947.7 KiB 15.1 MiB 163.6 MiB -148.5 MiB ok
segments: 241 243 2 241 not all freed
-abandoned: 0 0 0 0 ok
-cached: 0 0 0 0 ok
pages: 0 0 693 -693 ok
-abandoned: 0 0 0 0 ok
-extended: 0
-noretire: 0
arenas: 35
-crossover: 0
-rollback: 0
mmaps: 0
commits: 0
resets: 0
purges: 232
threads: 44 44 0 44 not all freed
searches: 0.0 avg
numa nodes: 1
elapsed: 5828.089 s
process: user: 2030.446 s, system: 417.763 s, faults: 17, rss: 164.0 GiB, commit: 163.3 GiB
___ End mimalloc stats ___
Any help or insight into what I might be doing wrong in my configuration, or how to sort this out, would be much appreciated.