What is the correct method for changing the default Dragonfly port when installing via the Kubernetes Operator?
I tried adding an arg to the spec as seen here:
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  labels:
    app.kubernetes.io/name: dragonfly
    app.kubernetes.io/instance: dragonfly
    app.kubernetes.io/part-of: dragonfly-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: dragonfly-operator
  name: dragonfly
  namespace: pa3077-qa-cache
spec:
  replicas: 2
  image: docker.dragonflydb.io/dragonflydb/dragonfly:v1.30.0
  args:
    - "--port=6425"
  authentication:
    passwordFromSecret:
      name: dragonfly-auth
      key: password
  resources:
    requests:
      cpu: 500m
      memory: 1024Mi
    limits:
      # cpu: 1000m
      memory: 1298Mi
but the port change doesn’t seem to take effect after the apply; kubectl get services still shows the service on the default port (6379).
Thanks for any pointers!
We are fairly new to Kubernetes in general, but are replacing AWS ElastiCache (Redis) with Dragonfly in a particular installation due to some incompatibilities between Redis and an SSO solution. So far, Dragonfly has been working great in this regard.
Hi @mkizer, let me double-check this for you. At first glance, though, I think that in a K8s environment the port of the Dragonfly process itself matters less than the service port.
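For instance, you can compare what the operator-created Service exposes with what the container actually received. A quick sketch, assuming the resource names from your manifest:
$> kubectl -n pa3077-qa-cache get svc dragonfly -o jsonpath='{.spec.ports}'
$> kubectl -n pa3077-qa-cache get pod dragonfly-0 -o jsonpath='{.spec.containers[0].args}'
If the container args include --port=6425 but the Service still lists 6379, the flag reached the pod but the operator-managed Service was not updated to match.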
In the meantime, do you mind sharing more information about your workload and ElastiCache setup? Since Dragonfly Cloud supports AWS, migrating over could cut your costs and greatly reduce your internal DevOps/operational effort.
I see you joined our Discord server; if you want to share privately, please DM me.
Thanks for checking. The port change is just a requirement by our security team. They always want the default ports changed on everything.
This solution is an AWS EKS-backed WordPress site. The Redis caching is for WordPress objects and sessions (to lessen the number of hits to the backend database, which is AWS RDS).
It’ll be a public-facing local government site with medium to high traffic volume. The EKS cluster is configured to scale from 2 to 30 EC2 instances. Our first load test was about 3,000 concurrent users, but our configuration at the time was capped at 16 EC2 instances, and we maxed it out. We’re going to run another test next week; this time the site has been reworked a bit, with Dragonfly replacing ElastiCache, etc.
We did look into Dragonfly Cloud, but it wasn’t approved. I’m not totally sure of the reason; I believe the cost was fine. It may have been a security concern and a desire to keep everything in our own VPC.
I did some experiments myself and looked through the operator code a bit, and I think the port for the pods and the headless service created by the operator is not configurable. I will forward this thread to our engineers so they can confirm.
However, another option is to create a custom service that uses a different port. Here’s an example:
apiVersion: dragonflydb.io/v1alpha1
kind: Dragonfly
metadata:
  labels:
    app.kubernetes.io/name: dragonfly
    app.kubernetes.io/instance: dragonfly-sample
    app.kubernetes.io/part-of: dragonfly-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: dragonfly-operator
  name: dragonfly-sample
spec:
  replicas: 2
  resources:
    requests:
      cpu: 500m
      memory: 500Mi
    limits:
      cpu: 600m
      memory: 750Mi
---
apiVersion: v1
kind: Service
metadata:
  name: dragonfly-sample-custom-svc # A custom service.
spec:
  type: ClusterIP
  selector:
    app: dragonfly-sample
    app.kubernetes.io/name: dragonfly
  ports:
    - port: 6900       # Port for the custom service.
      targetPort: 6379 # Port for the Dragonfly container.
      name: dragonfly
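Assuming you save the combined manifest above as dragonfly-with-svc.yaml, you can apply it and wait for the pods before testing (the operator creates a StatefulSet named after the Dragonfly resource):
$> kubectl apply -f dragonfly-with-svc.yaml
$> kubectl rollout status statefulset/dragonfly-sample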
Inspect the pods and services, then port-forward for local testing:
$> kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
dragonfly-sample-0   1/1     Running   0          20m
dragonfly-sample-1   1/1     Running   0          20m
$> kubectl get svc
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dragonfly-sample              ClusterIP   10.98.1.154     <none>        6379/TCP   20m
dragonfly-sample-custom-svc   ClusterIP   10.98.213.198   <none>        6900/TCP   20m
kubernetes                    ClusterIP   10.96.0.1       <none>        443/TCP    20m
$> kubectl port-forward service/dragonfly-sample-custom-svc 6900:6900
Forwarding from 127.0.0.1:6900 -> 6379
Forwarding from [::1]:6900 -> 6379
Handling connection for 6900
Test with Redis CLI locally:
$> redis-cli -h localhost -p 6900 PING
PONG
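From inside the cluster, your application can reach the same service through its DNS name; for example (assuming the resources live in the default namespace):
$> redis-cli -h dragonfly-sample-custom-svc.default.svc.cluster.local -p 6900 PING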
Thank you for sharing the details about your workload; it’s great to see projects like yours leveraging Dragonfly!
Regarding Dragonfly Cloud’s security features, we have a dedicated security portal that security teams can review. For production environments, we strongly recommend VPC peering, which keeps traffic between your VPC and Dragonfly Cloud off the public internet. That said, I respect that every organization has its own compliance needs and security frameworks to consider.
Let us know if you have more questions.
Checked with engineers. The answer in my previous post is correct. The port for the pod and the headless service is not configurable at the moment.
Great, thanks for checking so thoroughly on this one. I’ll test out your custom service solution above, and keep an eye on future updates.