understood. btw, you can also run dragonfly as a memcached server, a redis server, or both
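for example, something like this should serve both protocols side by side; a rough sketch, assuming the --memcached_port flag is available in your build (check the docs for your version):

dragonfly --port 6379 --memcached_port 11211

redis clients then connect on 6379 and memcached clients on 11211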
yep! I saw that, very cool feature
i hope you will find dragonfly useful. in any case we would love to hear your feedback once you check it out on your workload
(good or bad/constructive)
Definitely!
thank you!
no problem, thank you as well
Hello again, following up because I got memtier_benchmark up and running this morning… I’m seeing conflicting information between the Prometheus/Grafana dashboards and the results memtier_benchmark reports… Any insight would be appreciated. The image is straight out of Grafana from my last run.
Memtier output here (full distribution omitted):
admin@dragonfly-testing:~$ memtier_benchmark -h dragonfly.***.*** --ratio 0:1 -n 10000
Writing results to stdout
[RUN #1] Preparing benchmark client...
[RUN #1] Launching threads now...
[RUN #1 100%, 10 secs] 0 threads: 2000000 ops, 190865 (avg: 186829) ops/sec, 7.26MB/sec (avg: 7.11MB/sec), 1.05 (avg: 1.07) msec latency
4 Threads
50 Connections per thread
10000 Requests per client
ALL STATS
============================================================================================================================
Type           Ops/sec    Hits/sec   Misses/sec   Avg. Latency   p50 Latency   p99 Latency  p99.9 Latency      KB/sec
----------------------------------------------------------------------------------------------------------------------------
Sets              0.00         ---          ---            ---           ---           ---            ---        0.00
Gets         187496.60        0.00    187496.60        1.06995       1.05500       2.00700        5.59900     7303.41
Waits             0.00         ---          ---            ---           ---           ---            ---         ---
Totals       187496.60        0.00    187496.60        1.06995       1.05500       2.00700        5.59900     7303.41
From memtier I’m seeing 1-5 ms, but Grafana reports >200 ms… and I’ve got the panel units set to milliseconds. Thanks!
I can provide some info on the machine too if needed
Hi
200ms looks really strange
yeah, in testing with my own tool I saw Grafana report 400 ms in some cases (with the panel unit set to milliseconds, not seconds)
with our monitoring setup (the full LGTM stack from Grafana, running in Kubernetes) we’ve seen great reliability in other metrics… these Dragonfly metrics in particular are surfaced by the host machine, which is running an instance of Grafana Agent
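one thing I can try, to rule out the agent/dashboard layer, is scraping the metrics endpoint directly; a rough sketch, assuming Dragonfly exposes Prometheus metrics over HTTP on its main port (the exact metric names may differ on this build):

curl -s http://dragonfly.***.***:6379/metrics | grep -i latency

if the raw samples already show ~200 ms there, the issue is on the Dragonfly side; if not, it points at a unit or scaling problem in the agent or panel config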
why do you test misses?
that was just a memtier_benchmark configuration I found elsewhere in your GitHub… I can certainly modify it and test again
could you check the set operation?
certainly, would this make more sense as a command?
memtier_benchmark -h dragonfly.***.*** --ratio 1:1 -n 10000
where ratio is changed from 0:1 to 1:1
I’d prefer to see a test of only the set command
would that be 1:0?
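i.e., going by memtier_benchmark’s documented --ratio=SET:GET convention, something like:

memtier_benchmark -h dragonfly.***.*** --ratio 1:0 -n 10000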