@reliable-impala maybe you missed the division by the number of command calls, i.e. the graph shows the total duration of the command
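A minimal sketch of the division being described, with made-up numbers (the metric names and values here are for illustration only, not Dragonfly's actual export):

```python
# Hypothetical exported counters: a dashboard that plots the cumulative
# duration alone shows the total, not the per-call latency.
total_duration_seconds = 1.2   # cumulative time spent in the command
call_count = 60_000            # number of calls in the same window

# Dividing total by count gives the average per-call latency:
avg_latency_seconds = total_duration_seconds / call_count
avg_latency_ms = avg_latency_seconds * 1000
print(f"{avg_latency_ms:.3f} ms per call")  # → 0.020 ms per call
```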
Hi again, back in the office. I attempted to run a number of server-side latency commands, and unless I’m missing something, none of them are implemented yet: https://github.com/dragonflydb/dragonfly/blob/main/src/server/server_family.cc#L2204-L2214
that’s correct
that was something @spirited-jay suggested above, unless I misread
ah, i see. he made a mistake suggesting this.
no worries
but his general response was correct: memtier includes network latency, so it’s the correct one to use
the latency reported by /metrics shows values bigger by factor of 1000
now I see the numbers you got with memtier. Those are not great.
definitely, setting Grafana to milliseconds instead of seconds should correct for that. It still seems to me that the number is too high (when compared to memtier)… I don’t doubt that it’s been working for others, but I’m trying to track down why ours is a bit different
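The factor-of-1000 discrepancy is consistent with a unit mismatch. A hedged sketch with assumed numbers: if a value is exported in one unit but the dashboard interprets it as another, every reading is off by exactly 1000x:

```python
# Suspected mismatch: a 1.2 ms latency exported as a raw number, but
# read by the dashboard as if it were in the next unit up.
# (Illustrative numbers only; not the actual exporter behavior.)
true_latency_ms = 1.2
displayed = true_latency_ms * 1000  # misread by a factor of 1000
print(f"{displayed:.1f}")  # → 1200.0, i.e. "over a second" on the graph
```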
it’s a bug on our side. we only recently implemented the latency export, so it’s a relatively new feature
regarding the memtier latency. what’s the ping latency from your client machine to your server?
let me get that for you
you can also get it with “redis-cli --latency-history -i 2 -h <host>”
admin@dragonfly-testing:~$ redis-cli --latency-history -i 2 -h dragonfly.xxx.xxx
min: 0, max: 1, avg: 0.13 (196 samples) -- 2.00 seconds range
min: 0, max: 1, avg: 0.17 (195 samples) -- 2.00 seconds range
min: 0, max: 1, avg: 0.13 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.15 (196 samples) -- 2.01 seconds range
min: 0, max: 2, avg: 0.13 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.16 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.16 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.11 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.11 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.16 (196 samples) -- 2.01 seconds range
min: 0, max: 1, avg: 0.13 (196 samples) -- 2.01 seconds range
this is good latency
there’s also a process of ours currently running, doing 60k LPOPs/sec, and Grafana is reporting over a second in the “millisecond” mode vs 1.2ms in the “microsecond” mode
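One way to sanity-check whether 1.2ms is plausible at that throughput is Little's law (average in-flight requests = throughput × latency). The connection count is not stated in the thread, so this is only a rough consistency check with assumed numbers:

```python
# Little's law: L = lambda * W
# (throughput and latency taken from the discussion; everything else assumed)
throughput_ops = 60_000     # LPOPs per second
latency_s = 1.2e-3          # 1.2 ms average latency

in_flight = throughput_ops * latency_s
print(f"{in_flight:.0f} requests in flight on average")  # → 72
```

That is, 1.2ms at 60k ops/sec implies roughly 72 concurrent in-flight requests; if the client runs far fewer connections than that, the reported latency would be suspect.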
1.2ms is a lot. what does the CPU utilization look like on the dragonfly side?
getting that now
can you screenshot an htop screen there?