Data is getting removed from Dragonfly without any error being thrown
Hi there,
Thanks for bringing this up, and we’re sorry to hear you’re facing this issue. To help us better understand what’s going on, could you please provide a bit more detail?
Specifically:
- What’s your hardware/environment setup?
- Which version of Dragonfly are you running?
- Is there any chance that a client (e.g., a local CLI/SDK or any connected application) could have issued a command like FLUSHALL?
This does sound unusual, so we’d need more context to investigate further. If this is your production instance, feel free to join our Discord server, and we may be able to connect you with an engineer for additional support. But we’ll definitely need more information to move forward.
For example, hypothetically, if you are running Dragonfly on a public cloud on the default port without authentication, that would be very dangerous: attackers can easily issue destructive commands.
Whatever the issue turns out to be, if you still have the Dragonfly instance, the output of the INFO ALL command would be a good starting point for us to help with the situation.
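As a side note, once you have that output, the individual fields can be pulled out programmatically for easier inspection or diffing. A minimal sketch (the `parse_info` helper is hypothetical, not part of any Dragonfly tooling; it assumes the raw `INFO ALL` text is available as a string):

```python
def parse_info(raw: str) -> dict:
    """Parse Redis/Dragonfly INFO output into a flat dict of field -> value."""
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        # Section headers start with '#'; blank lines carry no data.
        if not line or line.startswith("#"):
            continue
        # Split only on the first colon; values may contain more colons.
        key, _, value = line.partition(":")
        info[key] = value
    return info

sample = """# Server
dragonfly_version:df-v1.23.0

# Stats
expired_keys:87868"""

info = parse_info(sample)
print(info["dragonfly_version"])  # df-v1.23.0
```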
# Server
redis_version:6.2.11
dragonfly_version:df-v1.23.0
redis_mode:standalone
arch_bits:64
os:Linux 6.1.0-18-amd64 x86_64
multiplexing_api:iouring
tcp_port:6379
thread_count:96
uptime_in_seconds:9253
uptime_in_days:0
# Clients
connected_clients:390
max_clients:64000
client_read_buffer_bytes:1973760
blocked_clients:0
pipeline_queue_length:0
# Memory
used_memory:59202160
used_memory_human:56.46MiB
used_memory_peak:432599144
used_memory_peak_human:412.56MiB
fibers_stack_vms:32948512
fibers_count:1006
used_memory_rss:468480000
used_memory_rss_human:446.78MiB
used_memory_peak_rss:1680318464
maxmemory:42949672960
maxmemory_human:40.00GiB
used_memory_lua:0
object_used_memory:197552
type_used_memory_string:197552
table_used_memory:54125568
num_buckets:92160
num_entries:96
inline_keys:4
listpack_blobs:0
listpack_bytes:0
small_string_bytes:3952
pipeline_cache_bytes:9450
dispatch_queue_bytes:0
dispatch_queue_subscriber_bytes:0
dispatch_queue_peak_bytes:0
client_read_buffer_peak_bytes:1973760
tls_bytes:5664
snapshot_serialization_bytes:0
cache_mode:cache
maxmemory_policy:eviction
# Stats
total_connections_received:1266
total_commands_processed:8261180
instantaneous_ops_per_sec:14
total_pipelined_commands:1160139
total_pipelined_squashed_commands:898
pipelined_latency_usec:308824163
total_net_input_bytes:8018413100
connection_migrations:0
total_net_output_bytes:11018755978
rdb_save_usec:0
rdb_save_count:0
instantaneous_input_kbps:-1
instantaneous_output_kbps:-1
rejected_connections:-1
expired_keys:87868
evicted_keys:0
hard_evictions:0
garbage_checked:0
garbage_collected:0
bump_ups:729123
stash_unloaded:0
oom_rejections:0
traverse_ttl_sec:2933
delete_ttl_sec:0
keyspace_hits:2038937
keyspace_misses:2189621
keyspace_mutations:4030441
total_reads_processed:7858075
total_writes_processed:7531026
defrag_attempt_total:0
defrag_realloc_total:0
defrag_task_invocation_total:0
reply_count:7531026
reply_latency_usec:123327199
blocked_on_interpreter:0
lua_interpreter_cnt:0
lua_blocked:0
# Tiered
tiered_entries:0
tiered_entries_bytes:0
tiered_total_stashes:0
tiered_total_fetches:0
tiered_total_cancels:0
tiered_total_deletes:0
tiered_total_uploads:0
tiered_total_stash_overflows:0
tiered_heap_buf_allocations:0
tiered_registered_buf_allocations:0
tiered_allocated_bytes:0
tiered_capacity_bytes:0
tiered_pending_read_cnt:0
tiered_pending_stash_cnt:0
tiered_small_bins_cnt:0
tiered_small_bins_entries_cnt:0
tiered_small_bins_filling_bytes:0
tiered_cold_storage_bytes:0
tiered_offloading_steps:0
tiered_offloading_stashes:0
tiered_ram_hits:2038937
tiered_ram_cool_hits:0
tiered_ram_misses:0
# Persistence
current_snapshot_perc:0
current_save_keys_processed:0
current_save_keys_total:0
last_success_save:1729172879
last_saved_file:
last_success_save_duration_sec:0
loading:0
saving:0
current_save_duration_sec:0
rdb_changes_since_last_success_save:118
last_failed_save:1729182116
last_error:Invalid argument: Couldn’t open file for writing (is direct I/O supported by the file system?)
last_failed_save_duration_sec:0
# Transaction
tx_shard_polls:576
tx_shard_optimistic_total:8258999
tx_shard_ooo_total:0
tx_global_total:6
tx_normal_total:8258999
tx_inline_runs_total:84863
tx_schedule_cancel_total:0
tx_with_freq:8258999,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6
tx_queue_len:0
eval_io_coordination_total:0
eval_shardlocal_coordination_total:0
eval_squashed_flushes:0
multi_squash_execution_total:80
multi_squash_execution_hop_usec:542899
multi_squash_execution_reply_usec:5023641
# Replication
role:master
connected_slaves:0
master_replid:d86ad84777578de8ec147ae35d4881e38655cbde
# Commandstats
cmdstat_client:calls=2062,usec=41611,usec_per_call=20.1799
cmdstat_command:calls=2,usec=3651,usec_per_call=1825.5
cmdstat_config:calls=9,usec=430,usec_per_call=47.7778
cmdstat_del:calls=2192434,usec=180708453,usec_per_call=82.4237
cmdstat_flushall:calls=2,usec=98508,usec_per_call=49254
cmdstat_get:calls=4225036,usec=378292900,usec_per_call=89.536
cmdstat_info:calls=75,usec=1641861,usec_per_call=21891.5
cmdstat_save:calls=5,usec=21013,usec_per_call=4202.6
cmdstat_select:calls=25,usec=29202,usec_per_call=1168.08
cmdstat_set:calls=1450949,usec=137259015,usec_per_call=94.5995
cmdstat_zadd:calls=386912,usec=35689492,usec_per_call=92.2419
cmdstat_zrange:calls=146,usec=6575488,usec_per_call=45037.6
cmdstat_zrangebyscore:calls=3376,usec=504590,usec_per_call=149.464
cmdstat_zrem:calls=146,usec=227861,usec_per_call=1560.69
# Modules
module:name=ReJSON,ver=20000,api=1,filters=0,usedby=[search],using=,options=[handle-io-errors]
module:name=search,ver=20000,api=1,filters=0,usedby=,using=[ReJSON],options=[handle-io-errors]
# Search
search_memory:0
search_num_indices:0
search_num_entries:0
# Errorstats
DF snapshot format requires no filename extension. Got “.db”:1
config_error:5
Invalid argument: Couldn’t open file for writing (is direct I/O supported by the file system?):4
# Keyspace
db0:keys=63,expires=36,avg_ttl=-1
db5:keys=33,expires=33,avg_ttl=-1
# Cpu
used_cpu_sys:1312.634241
used_cpu_user:1532.743540
used_cpu_sys_children:0.0
used_cpu_user_children:0.0
used_cpu_sys_main_thread:12.602310
used_cpu_user_main_thread:15.231120
# Cluster
cluster_enabled:0
This is all the information from INFO ALL.
I'm using the command below to run it:
dragonfly --bind 0.0.0.0 --logtostderr --dir /dev/dragonflydump --dbfilename my-snapshot-file --maxmemory 40GB
From the Commandstats section, we can see that the FLUSHALL command was called twice.
Is there any possibility that an application service calls this command on restart? (For example, some code snippets perform a flush on startup to ensure a clean environment.)
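For future reference, this kind of check can be automated. A quick sketch (the `find_destructive_calls` helper is hypothetical) that scans a pasted Commandstats section for destructive commands and reports their call counts:

```python
DESTRUCTIVE = {"flushall", "flushdb"}

def find_destructive_calls(commandstats: str) -> dict:
    """Return {command: call_count} for destructive commands seen in Commandstats."""
    hits = {}
    for line in commandstats.splitlines():
        if not line.startswith("cmdstat_"):
            continue
        name, _, stats = line.partition(":")
        cmd = name.removeprefix("cmdstat_")
        if cmd in DESTRUCTIVE:
            # stats looks like "calls=2,usec=98508,usec_per_call=49254"
            calls = int(stats.split(",")[0].split("=")[1])
            hits[cmd] = calls
    return hits

stats = (
    "cmdstat_get:calls=4225036,usec=378292900,usec_per_call=89.536\n"
    "cmdstat_flushall:calls=2,usec=98508,usec_per_call=49254"
)
print(find_destructive_calls(stats))  # {'flushall': 2}
```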
Thanks @joezhou_df. It's working now; it was just an authentication issue.
Glad we figured it out! An easy fix, and please double-check those settings to avoid it happening again.
I’ll go ahead and update the post title to reflect the solution. If you have any other questions or run into anything else, feel free to ask!
As an additional side note, you may consider renaming or disabling commands like FLUSHALL in your production instance using configs like --rename_command or --restricted_commands.
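For illustration, a hardened invocation could look something like the sketch below. The flag values are examples only, and the exact --rename_command / --restricted_commands syntax is worth verifying against `dragonfly --help` for your version:

```shell
# Example hardening flags (values are placeholders, not a prescription):
# - require a password so unauthenticated clients cannot connect
# - rename FLUSHALL so stray clients cannot wipe the instance
# - restrict FLUSHDB to the admin interface
dragonfly --bind 0.0.0.0 \
  --requirepass "use-a-strong-password" \
  --rename_command "FLUSHALL=ADMIN_FLUSHALL" \
  --restricted_commands "FLUSHDB" \
  --logtostderr --dir /dev/dragonflydump --dbfilename my-snapshot-file --maxmemory 40GB
```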