Hi everyone! While managing a Kubernetes deployment, have you ever wanted a pod to be scheduled on a particular node, only to find that the scheduler places it on any available node instead? Don’t worry, there is a way to make this happen using the concept of ‘node affinity’.
Node affinity:
The main purpose of node affinity is to ensure that pods are hosted on particular nodes. A similar result can be achieved using ‘nodeSelector’, but it is too restrictive and doesn’t provide many features. Node affinity provides more expressive capabilities for placing pods on specific nodes.
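For comparison, here is a minimal nodeSelector sketch (the label value t3.small is illustrative); it only supports exact key/value matches, with no operators such as In or NotIn and no soft preferences:

```yaml
# Pod spec fragment: nodeSelector only matches labels exactly.
spec:
  nodeSelector:
    beta.kubernetes.io/instance-type: t3.small
```
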
There are two types of attributes available in node affinity.
requiredDuringSchedulingIgnoredDuringExecution
-With this type of node affinity, the pod is strictly scheduled on a matching node. If there is no node with a matching affinity rule, the pod doesn’t get scheduled at all. This is considered a hard requirement.
preferredDuringSchedulingIgnoredDuringExecution
-With this type of affinity, the scheduler tries its best to place the pod according to the affinity rule. But if there is no node with a matching rule, it ignores the affinity and the pod gets scheduled on any available node. This is considered a soft requirement.
For example, let’s consider a scenario where there are two or more nodes with mixed instance types. If you run a new pod, it can get scheduled on any of the nodes. But suppose, for some reason, you want your app to run only on a t3.small or t3.medium machine. This is possible with affinity rules.
1) Example of node affinity using ‘requiredDuringSchedulingIgnoredDuringExecution’

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/instance-type
                operator: In
                values:
                - t3.small
                - t3.medium
      containers:
      - name: nginx
        image: <docker image>
        imagePullPolicy: Always
        ports:
        - containerPort: 80
In the above example, look at the affinity section. The rules are defined so that the pod gets scheduled only on nodes whose instance type is t3.small or t3.medium. If no node has one of these instance types, the pod stays in the Pending state.
But in a few cases our requirement is soft, i.e. even if no such node is available, we still want the pod to be scheduled rather than stay in the Pending state. This can be achieved by using the affinity type ‘preferredDuringSchedulingIgnoredDuringExecution’.
2) Example of node affinity using ‘preferredDuringSchedulingIgnoredDuringExecution’

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          # Unlike the required form, the preferred form takes a list of
          # weighted preferences (weight 1 to 100), not nodeSelectorTerms.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: beta.kubernetes.io/instance-type
                operator: In
                values:
                - t3.small
                - t3.medium
      containers:
      - name: nginx
        image: <docker image>
        imagePullPolicy: Always
        ports:
        - containerPort: 80
In this example, the pod preferably gets scheduled on a node with instance type t3.small or t3.medium. But if the only node available has instance type ‘t3.large’, the scheduler ignores the affinity rule and the pod gets scheduled on whichever node is available. Similarly, you can set affinity rules on other keys such as kubernetes.io/hostname, beta.kubernetes.io/os, or failure-domain.beta.kubernetes.io/region (on newer clusters the beta labels are replaced by kubernetes.io/os and topology.kubernetes.io/region). All you have to do is check whether the label is present on the node; if it is, you can set affinity rules based on that key.
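To check which labels are actually present on your nodes before writing an affinity rule, a couple of kubectl commands are handy (these assume a reachable cluster; the instance-type value is illustrative):

```shell
# List every node together with all of its labels
kubectl get nodes --show-labels

# List only the nodes matching a given label, to verify an affinity key/value
kubectl get nodes -l beta.kubernetes.io/instance-type=t3.small
```
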
Conclusion:
In this blog, you have learned how to use node affinity to influence how pods are scheduled onto nodes. Node affinity is useful in many cases, such as spreading pods across multiple hosts or multiple availability zones and managing dedicated nodes.
Thanks for reading the article! I hope you liked it.