Blocking on SET until replication lag catches up?

I’m running Dragonfly using the provided Kubernetes operator with replicas set to 2, so I have one master and one replica.

Is it possible to know that a set key has been replicated? I hit an issue today: I set a key, then notified the client that the data was available; the client then connected to the kube deployment and couldn’t find the key on the replica.

I suppose I could spin in the client until the GET returns a value, but it’d be nice to know the data is available before I notify.
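In case it helps anyone else, here’s a minimal sketch of that polling fallback, assuming a Go client using go-redis (the client library, key handling, and timings here are my own assumptions, nothing Dragonfly-specific):

```go
package cache

import (
	"context"
	"errors"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// waitForKey polls GET until the key appears or the timeout expires.
// This is just the "spin until the GET returns a value" fallback,
// not a replication guarantee.
func waitForKey(ctx context.Context, rdb *redis.Client, key string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		val, err := rdb.Get(ctx, key).Result()
		if err == nil {
			return val, nil
		}
		if !errors.Is(err, redis.Nil) {
			return "", err // a real error, not just "key not found yet"
		}
		time.Sleep(50 * time.Millisecond)
	}
	return "", fmt.Errorf("key %q not visible after %s", key, timeout)
}
```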

Oh no! There is a Redis command that would help here, but it doesn’t seem to be supported by Dragonfly.
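For context, the command in question here is presumably Redis’s WAIT, which blocks the client until a given number of replicas have acknowledged the preceding writes or a timeout elapses. A rough sketch of how it would look from go-redis, assuming the command were actually accepted by the server:

```go
package cache

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// setAndWait writes a key and then issues WAIT 1 1000: block until at
// least one replica has acknowledged the write, or 1000 ms have passed.
// WAIT returns the number of replicas that acked the write. Whether
// Dragonfly supports this command is exactly what's in question here.
func setAndWait(ctx context.Context, rdb *redis.Client, key, val string) error {
	if err := rdb.Set(ctx, key, val, 0).Err(); err != nil {
		return err
	}
	acked, err := rdb.Do(ctx, "WAIT", 1, 1000).Int64()
	if err != nil {
		return err
	}
	if acked < 1 {
		return fmt.Errorf("write not acknowledged by any replica (got %d)", acked)
	}
	return nil
}
```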

Looking at the operator service definition, it looks like it targets the master? In that case everyone should have been talking to the same pod and replication lag shouldn’t even matter. Now I’m even more confused about why that key was missing.

Hi @praiseworthy-rhino. We use replication for HA only, so your clients should query the master node. The reasons a key would be missing that I can think of are:
1. the master failed and the replica took over without having received the key that was inserted;
2. the key has an expiry time;
3. you run in cache mode and the key was evicted.

Gotcha. These are all great hints. Thank you!

I’m starting to think it’s a logic bug on my side instead of some mysterious Dragonfly bug.

Sharing for a laugh. The issue was that I generated a cache key by base64-encoding some data and putting that into the URL. The encoded string had a plus sign, which was being interpreted as a space in the URL. The fix was to use the URL-safe base64 encoder. :sweat_smile:
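For anyone who trips over the same thing, a tiny Go illustration (the byte values are just picked to force a plus sign in the output): the standard base64 alphabet uses '+' and '/', and an unescaped '+' in a URL query string is decoded as a space, while the URL-safe alphabet substitutes '-' and '_'.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Bytes chosen so the standard alphabet emits a '+'.
	data := []byte{0xfb, 0xef}

	std := base64.StdEncoding.EncodeToString(data)  // "++8=": the '+' turns into a space when URL-decoded
	safe := base64.URLEncoding.EncodeToString(data) // "--8=": uses '-' and '_' instead of '+' and '/'

	fmt.Println(std, safe)
}
```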