Description
What happened?
On our production cluster we tried to enable audit logging by running a Kubespray upgrade, but the configuration change did not go through as expected. When deploying a new cluster with the same variables, the settings do come through.
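For reference, a quick way to check whether the change was applied (assuming a kubeadm-based control plane as deployed by Kubespray and the default manifest path) is to look for the audit flags in the kube-apiserver static pod manifest on a control-plane node:

# Illustrative check on a control-plane node; these flags should be present
# in the manifest once audit logging has actually been applied.
grep -E -- '--audit-log-path|--audit-policy-file' /etc/kubernetes/manifests/kube-apiserver.yaml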
What did you expect to happen?
The upgrade to apply the audit logging configuration change to the existing cluster.
How can we reproduce it (as minimally and precisely as possible)?
By upgrading an existing cluster with the following variables set (a minimal sketch of the referenced audit policy file follows below):
kubernetes_audit: true
audit_log_path: /var/log/audit/kubernetes/kube-apiserver-audit.log
audit_log_maxage: 30
audit_log_maxbackups: 1
audit_log_maxsize: 100
audit_policy_file: "{{ kube_config_dir }}/audit-policy/apiserver-audit-policy.yaml" #This policy is applied via
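For completeness, a minimal sketch of what a policy file such as apiserver-audit-policy.yaml could contain (an illustrative example that logs all requests at Metadata level, not necessarily the exact policy in use):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log request metadata (user, verb, resource) for all requests.
  - level: Metadata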
OS
RHEL 8
Version of Ansible
ansible [core 2.16.14]
config file = /REDACTED/clusters/REDACTED/ansible.cfg
configured module search path = ['/kubespray/library']
ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (/usr/bin/python3)
jinja version = 3.1.5
libyaml = True
Version of Python
python version = 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (/usr/bin/python3)
Version of Kubespray (commit)
latest
Network plugin used
cilium
Full inventory with variables
N/A.
Command used to invoke ansible
#!/bin/bash -e
cd /kubespray
ansible-playbook -b /mfi-k8s-kubespray/playbooks/pre-install.yml
# Upgrade cluster
ansible-playbook upgrade-cluster.yml -b
Output of ansible run
N/A; the run completed successfully with no errors in the output.
Anything else we need to know
As discussed at KubeCon London.