Description
When deploying Redpanda and Console via the operator,
And enabling TLS for the Console,
Then the created Deployment does not expose the HTTPS port,
And therefore the Console cannot be accessed over HTTPS.
This is not strictly related to the Operator, I think, since the Redpanda Console Helm chart does not add the additional HTTPS port to the Deployment template.
Since there is no option to add an additional port via the config, the HTTPS feature can be enabled but never used.
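A quick way to confirm this on a running installation is to look at the ports the chart rendered into the Deployment (assuming the default name redpanda-console in the redpanda namespace, as in the manifests below); only the 8080/http port is listed:
# Show the container ports rendered into the Console Deployment
kubectl -n redpanda get deployment redpanda-console \
  -o jsonpath='{.spec.template.spec.containers[0].ports}'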
To replicate
Using a Redpanda cluster YAML that enables TLS for Kafka and the Console:
redpanda-cluster.yaml
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
  namespace: redpanda
spec:
  chartRef:
    useFlux: false
    chartVersion: 5.9.18
  clusterSpec:
    auth:
      sasl:
        enabled: true
        mechanism: SCRAM-SHA-512
        secretRef: "redpanda-superuser"
        # This user will be created and saved in the secretRef
        users:
          - name: superuser
            password: superuserpwd
            mechanism: SCRAM-SHA-512
    external:
      enabled: true
      domain: svc.cluster.local
      type: NodePort
    statefulset:
      replicas: 1
      initContainers:
        setDataDirOwnership:
          enabled: true
    console:
      enabled: true
      ingress:
        enabled: true
        className: traefik
        hosts:
          - host: svc.cluster.local
            paths:
              - path: /redpanda
                pathType: Prefix
      console:
        config:
          server:
            httpsListenPort: 443
            advertisedHttpsListenPort: 443
            listenPort: 8080
            tls:
              enabled: true
              certFilepath: /etc/tls/certs/default/tls.crt
              keyFilepath: /etc/tls/certs/default/tls.key
    tls:
      enabled: true
    config:
      cluster:
        admin_api_require_auth: true
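The operator renders the console.config section above into a ConfigMap (named redpanda-console, as referenced by the Pod's volumes below) that is mounted into the Console container at /etc/console/configs. To double-check that the TLS settings made it into the generated config.yaml, something like this can be used:
# Inspect the generated Console configuration
kubectl -n redpanda get configmap redpanda-console -o yaml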
The deployed Pod YAML only contains port 8080 and does not add a port for the configured httpsListenPort:
pod: redpanda-console-xxxxxxxx-xxxxx.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum-redpanda-chart/config: acd9ec5c50a895391d4ab5e226f26e84cbb657dea7258108e14c936bbaf363d7
    checksum/config: baf41f19b9acab235facc68c230da874b19152bc26f4f582685522f375cb7c3d
  creationTimestamp: "2025-02-27T12:38:22Z"
  generateName: redpanda-console-74d89cb95b-
  labels:
    app.kubernetes.io/instance: redpanda
    app.kubernetes.io/name: console
    pod-template-hash: 74d89cb95b
  name: redpanda-console-74d89cb95b-ck6cb
  namespace: redpanda
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: redpanda-console-74d89cb95b
    uid: 529c6be6-40d6-4f86-807c-df9e7e3e2074
  resourceVersion: "9051"
  uid: 4a2c9562-0b25-450d-af7b-b4ace2042e69
spec:
  affinity: {}
  automountServiceAccountToken: false
  containers:
  - args:
    - --config.filepath=/etc/console/configs/config.yaml
    command:
    - sh
    - -c
    - set -e; IFS=':' read -r KAFKA_SASL_USERNAME KAFKA_SASL_PASSWORD KAFKA_SASL_MECHANISM
      < <(grep "" $(find /mnt/users/* -print)); KAFKA_SASL_MECHANISM=${KAFKA_SASL_MECHANISM:-SCRAM-SHA-512};
      export KAFKA_SASL_USERNAME KAFKA_SASL_PASSWORD KAFKA_SASL_MECHANISM; export
      KAFKA_SCHEMAREGISTRY_USERNAME=$KAFKA_SASL_USERNAME; export KAFKA_SCHEMAREGISTRY_PASSWORD=$KAFKA_SASL_PASSWORD;
      export REDPANDA_ADMINAPI_USERNAME=$KAFKA_SASL_USERNAME; export REDPANDA_ADMINAPI_PASSWORD=$KAFKA_SASL_PASSWORD;
      /app/console $@
    - ' --'
    image: docker.redpanda.com/redpandadata/console:v2.8.0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /admin/health
        port: http
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: console
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /admin/health
        port: http
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      runAsNonRoot: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/console/configs
      name: configs
      readOnly: true
    - mountPath: /mnt/users
      name: redpanda-users
      readOnly: true
    - mountPath: /etc/tls/certs/default
      name: redpanda-default-cert
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k3d-redpanda-server-0
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 99
    runAsUser: 99
  serviceAccount: redpanda-console
  serviceAccountName: redpanda-console
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: redpanda-console
    name: configs
  - name: redpanda-users
    secret:
      defaultMode: 420
      secretName: redpanda-superuser
  - name: redpanda-default-cert
    secret:
      defaultMode: 272
      secretName: redpanda-default-cert
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-02-27T12:38:23Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-02-27T12:38:22Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-02-27T12:38:22Z"
    message: 'containers with unready status: [console]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-02-27T12:38:22Z"
    message: 'containers with unready status: [console]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-02-27T12:38:22Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://1cdd0e59e2444d079cd0d9017403f8185843a72394e0de1300b160dde35adf81
    image: docker.redpanda.com/redpandadata/console:v2.8.0
    imageID: docker.redpanda.com/redpandadata/console@sha256:69e209d2549cf84ff9a468a11c6da7d76d94d17ab77fd02cc6f5cd99ec23f43b
    lastState: {}
    name: console
    ready: false
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-02-27T12:38:22Z"
    volumeMounts:
    - mountPath: /etc/console/configs
      name: configs
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /mnt/users
      name: redpanda-users
      readOnly: true
      recursiveReadOnly: Disabled
    - mountPath: /etc/tls/certs/default
      name: redpanda-default-cert
  hostIP: 172.18.0.3
  hostIPs:
  - ip: 172.18.0.3
  phase: Running
  podIP: 10.42.0.29
  podIPs:
  - ip: 10.42.0.29
  qosClass: BestEffort
  startTime: "2025-02-27T12:38:22Z"
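For reference, the port-forward used for the test below was roughly the following (any equivalent forwarding of port 8080 to localhost works):
# Forward the Console's HTTP port to localhost
kubectl -n redpanda port-forward svc/redpanda-console 8080:8080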
When connecting to the Console on port 8080 (after port-forwarding it to localhost for testing), it returns
$ curl http://localhost:8080
<a href="https://localhost:443/">Moved Permanently</a>.
Port 443 can never be used since it is not exposed by the Pod.
Similarly, the Service does not expose the HTTPS port (a sketch of the missing entry follows the Service YAML below):
service: redpanda-console.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: redpanda
    meta.helm.sh/release-namespace: redpanda
  creationTimestamp: "2025-02-27T12:38:22Z"
  labels:
    app.kubernetes.io/instance: redpanda
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: console
    app.kubernetes.io/version: v2.8.0
    helm.sh/chart: console-0.7.31
    helm.toolkit.fluxcd.io/name: redpanda
    helm.toolkit.fluxcd.io/namespace: redpanda
  name: redpanda-console
  namespace: redpanda
  ownerReferences:
  - apiVersion: cluster.redpanda.com/v1alpha2
    controller: true
    kind: Redpanda
    name: redpanda
    uid: c754b253-18f5-4bb6-b985-4f8dd76e302d
  resourceVersion: "9028"
  uid: 9f5a4596-85a5-4e89-ae8e-ae5764a76294
spec:
  clusterIP: 10.43.149.209
  clusterIPs:
  - 10.43.149.209
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/instance: redpanda
    app.kubernetes.io/name: console
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
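For HTTPS to be reachable through the Service as well, its ports list would additionally need an https entry along these lines (a sketch of what is missing, not something the chart currently renders):
ports:
- name: http
  port: 8080
  protocol: TCP
  targetPort: 8080
- name: https
  port: 443
  protocol: TCP
  targetPort: 443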
Workaround
When updating the Deployment of the Console to add the additional container port
- containerPort: 443
  name: https
  protocol: TCP
then port 443 is exposed and the Console can be accessed over HTTPS.
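For a one-off test, the same change can be applied as a JSON patch (note that the operator or a Helm upgrade may revert manual edits to the Deployment on the next reconciliation):
# Append an https container port to the Console Deployment
kubectl -n redpanda patch deployment redpanda-console --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value": {"containerPort": 443, "name": "https", "protocol": "TCP"}}]'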
Test environment
This was tested on a local k3d cluster, set up with these steps:
# Create the cluster
k3d cluster create redpanda -p "80:80@loadbalancer" -p "443:443@loadbalancer" -p "31092:31092@server:0" -p "31644:31644@server:0" -p "30081:30081@server:0"
# Install cert-manager
helm install cert-manager jetstack/cert-manager --set crds.enabled=true --namespace cert-manager --create-namespace
# Add CRDs
kubectl kustomize "https://github.com/redpanda-data/redpanda-operator//operator/config/crd?ref=v2.3.6-24.3.3" \
| kubectl apply --server-side -f -
# Deploy Operator
helm upgrade --install redpanda-operator redpanda/operator --namespace redpanda --set image.tag=v2.3.6-24.3.3 --create-namespace
# Deploy Redpanda Cluster
kubectl apply -f redpanda-cluster.yaml -n redpanda
The content of redpanda-cluster.yaml is listed above.
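After applying the manifest it can take a while for the cluster and the Console to come up; a simple (optional) check is:
# Verify the Redpanda resource and the Console pod/service
kubectl -n redpanda get redpanda redpanda
kubectl -n redpanda get pods,svc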