How to force restarting all Pods in a Kubernetes Deployment
In contrast to classical process managers like systemd or pm2, Kubernetes does not provide a simple “restart my application” command.
However, there’s an easy workaround: if you change anything in the Pod template of your Deployment or StatefulSet, even an innocuous annotation that has no real effect, Kubernetes will restart your pods.
Consider configuring a rolling update strategy before doing this if you are updating a production application that should have minimal downtime.
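For an apps/v1 StatefulSet, RollingUpdate is already the default update strategy, but you can set it explicitly with a patch along these lines (a minimal sketch, reusing the placeholder name from the example below):
# Explicitly opt in to rolling updates (the apps/v1 default) instead of OnDelete
kubectl patch statefulset/elasticsearch-elasticsearch -p \
  '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
For a Deployment, the corresponding field is spec.strategy, where you can also tune rollingUpdate.maxUnavailable and rollingUpdate.maxSurge to control how many pods are replaced at a time.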
In this example we’ll assume you have a StatefulSet you want to update, named elasticsearch-elasticsearch. Be sure to substitute the actual name of your StatefulSet or Deployment.
kubectl patch statefulset/elasticsearch-elasticsearch -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"dummy-date\":\"`date +'%s'`\"}}}}}"This will just set a dummy-date annotation which does not have any effect.
You can monitor the progress of the update with:
kubectl rollout status statefulset/elasticsearch-elasticsearch
Credit for the original solution idea goes to pstadler on GitHub.