dragonfly_commands_processed_total metrics

I am testing pipelining on Dragonfly with the lettuce-core Java client, executing 20 SET commands per pipeline.
Load is generated with JMeter using 30 threads against a test service that runs the code below to issue the 20 SET commands in a single pipeline flush.
However, the test service RPS and the commands-processed metric don't match.

RPS on the test service: 35.9K (35.9K * 20 = 718K expected commands per second)
dragonfly_commands_processed_total per second: 178K, measured with rate(dragonfly_commands_processed_total{instance=~"$instance"}[1m])

Question: Why doesn’t the dragonfly_commands_processed_total per second align with either the Requests Per Second (RPS) on the test service or the flush command count per second?

In the code below, synchronized (flushDisabledConnections) is used to make sure that each flush carries only the 20 SET operations of a single request.

Sharing the relevant code below.

public class RedisStandaloneKVRepository implements IKVRepository<String, String> {

    private StatefulRedisConnection<String, String> statefulRedisConnection;
    private final StatefulRedisConnection<String, String> flushDisabledConnections;
    private RedisClient redisClient;

    RedisStandaloneKVRepository(RedisStandaloneProperties redisStandalone) {
        this.redisClient = ConnectionFactory.getRedisStandaloneClient(redisStandalone);
        this.statefulRedisConnection = ConnectionFactory.getRedisConnection(redisClient, redisStandalone);
        this.flushDisabledConnections = ConnectionFactory.getRedisConnection(redisClient, redisStandalone);
        this.flushDisabledConnections.setAutoFlushCommands(false);
    }

    // The map always contains 20 entries, with random keys and values
    public void msetPipeline(Map<String, String> keyToValueMap) {
        List<RedisFuture<String>> redisFutureList;

        synchronized (flushDisabledConnections) {
            RedisAsyncCommands<String, String> redisAsyncCommands = flushDisabledConnections.async();
            redisFutureList = keyToValueMap.keySet().stream()
                    .map(key -> redisAsyncCommands.set(key, keyToValueMap.get(key)))
                    .collect(Collectors.toList());

            redisAsyncCommands.flushCommands();
        }
        boolean result = LettuceFutures.awaitAll(100, TimeUnit.MILLISECONDS,
                redisFutureList.toArray(new RedisFuture[redisFutureList.size()]));
        if (!result) {
            log.error("Failed to set keys in redis");
            throw new RuntimeException("Failed to set keys in redis");
        }
    }
}
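
For context, here is a minimal sketch of how the test service drives this repository under the JMeter load. The class name PipelineLoadCaller and the use of UUIDs for the random keys and values are illustrative assumptions; only the 20-entry map per request and the call to msetPipeline come from the setup described above.

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class PipelineLoadCaller {

    private final RedisStandaloneKVRepository repository;

    PipelineLoadCaller(RedisStandaloneKVRepository repository) {
        this.repository = repository;
    }

    // Called once per test-service request (driven by the 30 JMeter threads),
    // so each request should translate into exactly one flush of 20 SETs.
    public void handleRequest() {
        Map<String, String> keyToValueMap = new HashMap<>();
        for (int i = 0; i < 20; i++) {
            keyToValueMap.put(UUID.randomUUID().toString(), UUID.randomUUID().toString());
        }
        repository.msetPipeline(keyToValueMap);
    }
}

Since every request results in one flush of 20 SETs, 35.9K RPS on the test service was expected to show up as roughly 718K commands per second on the server side.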

Hi! Thanks for reporting this issue. We implement a special optimization that internally speeds up and parallelizes execution of pipelines… and we don’t record every command there separately :sweat_smile: You can expect it to be fixed in the next version (be it 1.15.1 or 1.16)

@positive-camel can you please run dragonfly with --pipeline_squash=0 and see if it produces aligned metrics?