K8s oddities to keep in mind - changes to ConfigMaps aren't reflected in real time in the pods that consume them
Given the following config, would an update to the ConfigMap propagate to the pod?
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  data: this-is-an-initial-value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
Initially, it’s set to what you’d expect:
$ k apply -f demo-configmap.yml
configmap/special-config created
pod/pod-demo created
$ k exec -it pod-demo cat /etc/config/data && echo
this-is-an-initial-value
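As an aside, the reason mounted ConfigMaps can be updated at all is that the kubelet materializes the volume as symlinks into a hidden, timestamped ..data directory, which it can swap atomically when it re-syncs. You can see that layout from inside the pod; roughly:

$ k exec pod-demo -- ls -la /etc/config
# expect something like:
#   ..<timestamp>/
#   ..data -> ..<timestamp>
#   data -> ..data/data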
After modifying the ConfigMap, the change doesn't appear to have propagated:
$ k edit cm special-config
configmap/special-config edited
$ k get cm special-config -o yaml
apiVersion: v1
data:
  data: this-is-modified
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"data":"this-is-modified"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"special-config","namespace":"default"}}
  creationTimestamp: "2019-08-13T01:42:23Z"
  name: special-config
  namespace: default
  resourceVersion: "38232031"
  selfLink: /api/v1/namespaces/default/configmaps/special-config
  uid: bf334aaa-1e86-498c-9e7b-2079e13425c1
$ k exec -it pod-demo cat /etc/config/data && echo
this-is-an-initial-value
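This isn't permanent, though. For a plain volume mount (no subPath), the kubelet refreshes the projected contents on its periodic sync, so the new value does show up eventually (typically within a minute or two, depending on the kubelet sync period and ConfigMap cache settings). ConfigMaps consumed as environment variables or via subPath mounts never update at all. If you don't want to wait, recreating the pod picks up the current value immediately; a rough sketch:

$ k delete pod pod-demo
$ k apply -f demo-configmap.yml
$ k exec -it pod-demo cat /etc/config/data && echo
# the freshly created pod mounts the current ConfigMap contents,
# so this should now print this-is-modified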
It's especially infuriating that an update to a ConfigMap won't trigger a new rollout of the Deployments that reference it, which makes certain deployments a pain.
This is a well-known issue, tracked in https://github.com/kubernetes/kubernetes/issues/22368
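A common workaround in the meantime is to make the pod template change whenever the ConfigMap changes, so the Deployment rolls itself. With Helm the usual trick is a checksum annotation on the pod template (a sketch, assuming the ConfigMap is rendered from templates/configmap.yaml):

spec:
  template:
    metadata:
      annotations:
        # any change to the rendered ConfigMap changes this hash, which
        # changes the pod template and therefore triggers a rollout
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}

Outside of Helm, kubectl rollout restart deployment/<name> (kubectl 1.15+) forces a fresh rollout, and tools like stakater's Reloader or kustomize's configMapGenerator (which appends a content hash to the ConfigMap name) automate the same idea.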