Description
Before you start, please confirm the following.
- Yes, I've searched similar issues on GitHub.
- Yes, I've checked whether this issue is covered in the Portainer documentation or knowledge base.
Problem Description
When I run a rollout restart on a deployment, Kubernetes preserves uptime: a new pod is created and all traffic is routed to it, while the pod it replaces is terminated.
kubectl is showing the correct status.
However, Portainer shows both pods in the running state.
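For illustration, this is the kind of mid-rollout status kubectl reports (pod names and ages below are placeholders, not from my cluster):

kubectl get pods
NAME                      READY   STATUS        RESTARTS   AGE
my-app-6d4b9c7b9f-x2k4p   1/1     Terminating   0          2d
my-app-5f7c8d9c4d-q8z7m   1/1     Running       0          15s

Portainer keeps listing both pods as "running" even at this point.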
Expected Behavior
The terminating pod should show "terminating".
Actual Behavior
It doesn't; it still shows "running".
Steps to Reproduce
Use a deployment whose pods take a while to start and to terminate (a minimal example manifest follows these steps).
In my case the image pull policy is set to "Always" and the image tag is "latest".
The replica count is 1.
kubectl rollout restart deployment/your-deployment
kubectl get all
Compare with the Portainer application view of the deployment, which still shows the pods as running.
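For reference, here is a minimal deployment manifest matching the conditions above; the name, image, and grace period are placeholders I made up for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: slow-app                          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: slow-app
  template:
    metadata:
      labels:
        app: slow-app
    spec:
      terminationGracePeriodSeconds: 60   # keeps the old pod in "terminating" long enough to observe
      containers:
        - name: slow-app
          image: example/slow-app:latest  # "latest" tag, as in my setup
          imagePullPolicy: Always         # pull policy "Always", as in my setup

With this manifest, kubectl rollout restart deployment/slow-app reproduces the window where kubectl reports "Terminating" but Portainer still shows "running".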
Portainer logs or screenshots
I have a screencast of it but cannot seem to upload it. Here is a screenshot instead; notice the state of the pods.
I refreshed the Portainer screen after kubectl was already showing "terminating". You can clearly see that the pod state still shows "running".

Portainer version
2.31.3
Portainer Edition
Community Edition (CE)
Platform and Version
MicroK8s
OS and Architecture
Ubuntu 22.04
Browser
Firefox
What command did you use to deploy Portainer?
standard
Additional Information
No response