Developing Kubernetes network policies on AWS EKS to isolate namespaces, allow external traffic to the application, and deploy with Helm.
When you run a Kubernetes deployment, you may need to share the same cluster among several customers. A common way to implement this is to give each customer its own namespace and enforce namespace isolation with network policies.
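For example, each customer simply gets a namespace of their own; the namespace name "customer1" used throughout this post is just an example:
# One namespace per customer ("customer1" is only an example name)
kubectl create namespace customer1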
If you are new to Kubernetes network policies, I recommend going through the documentation [1] on network policies before continuing with this post.
In this post I will explain how to use network policies to isolate namespaces while still allowing external traffic into the application. I will be using AWS EKS throughout.
First you need to install a network plugin that enforces network policies. Since I am using EKS, I went with the AWS recommended solution, Calico [2].
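Once Calico is installed following the EKS documentation, a quick sanity check, assuming the default calico-node DaemonSet name in kube-system, is:
# The calico-node DaemonSet should show one ready pod per node
kubectl get daemonset calico-node --namespace kube-system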
Now you need to deploy a network policy to isolate the namespaces. You can use the following as a starting point, and you can use the tool [3] to validate your network policy configuration. Here we are restricting traffic from outside the namespace to the pods in the current namespace, which is "customer1" in this example. I haven't added any egress rules because I want my application to reach some system services I deployed in other namespaces, such as AWS X-Ray. If you want, you can add egress rules to stop or limit communication from the current namespace to other namespaces.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.networkPolicy.name }}
  labels:
{{ include "<chart name>.labels" . | indent 4 }}
spec:
  podSelector:
    matchLabels:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
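To verify the isolation, you can try to reach a pod in "customer1" from a pod in another namespace; with the policy applied the request should time out. This is only a sketch, assuming a hypothetical Service named my-app listening on port 80 in "customer1" and a second namespace called "customer2":
# From a throwaway pod in another namespace, the request should now time out
kubectl run test --rm -it --image=busybox -n customer2 --restart=Never -- \
  wget -qO- -T 5 http://my-app.customer1.svc.cluster.local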
With this applied, you will see that other namespaces can no longer reach the endpoints in your namespace (the quick test above should time out). The problem is that the policy also blocks traffic from the outside network. With network policies you can whitelist CIDRs, but be careful to whitelist the correct entities, otherwise your isolation might fail. In my case I am using an AWS ALB with the ALB ingress controller, so I wanted my application to accept requests from the ALB. After some playing around I found out that the policy sees the ALB's internal/private IP, not the public one, so I had to find that IP first. I wrote the script below to find those IPs.
#!/bin/bash
namespace="customer1"
ingress_name="customer1-ingress"
# Extract the ALB hostname from the ingress and keep only the first DNS label
name=$(kubectl get ingress $ingress_name -n $namespace -o=json | grep -wE \"hostname\" | xargs | cut -d : -f 2 | xargs | cut -d . -f 1)
# The ALB name is the first four dash-separated parts of that label
part1=$(echo $name | cut -d '-' -f1)
part2=$(echo $name | cut -d '-' -f2)
part3=$(echo $name | cut -d '-' -f3)
part4=$(echo $name | cut -d '-' -f4)
alb_name=$part1-$part2-$part3-$part4
# ALB network interfaces are described as "ELB app/<alb name>/<alb id>"; list their private IPs
ip_addresses=$(aws ec2 describe-network-interfaces --filters "Name=description,Values=ELB app/$alb_name/$(aws elbv2 describe-load-balancers --names $alb_name | grep -wE 'LoadBalancerArn' | xargs | cut -d / -f 4 | cut -d , -f 1)" | grep -wE 'PrivateIpAddress' | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' | uniq)
echo $ip_addresses
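If you prefer not to grep through raw JSON, the same lookup can be sketched with kubectl's jsonpath output and the AWS CLI's --query option. This keeps the same assumption that the ALB name is the first four dash-separated parts of the hostname:
#!/bin/bash
namespace="customer1"
ingress_name="customer1-ingress"
# Hostname of the ALB created by the ALB ingress controller
host=$(kubectl get ingress "$ingress_name" -n "$namespace" \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# Same assumption as above: the ALB name is the first four dash-separated parts
alb_name=$(echo "${host%%.*}" | cut -d '-' -f1-4)
# The ENI description of an ALB is "ELB app/<alb name>/<alb id>"
alb_arn=$(aws elbv2 describe-load-balancers --names "$alb_name" \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=ELB app/$alb_name/${alb_arn##*/}" \
  --query 'NetworkInterfaces[].PrivateIpAddress' --output text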
Either way, this gave me two IP addresses, which are the interfaces for the two availability zones I deployed in. The next step was to add them to my network policy. Here I used /32 CIDRs for the exact IP addresses, as below.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.networkPolicy.name }}
  labels:
{{ include "<chart name>.labels" . | indent 4 }}
spec:
  podSelector:
    matchLabels:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.xxx.xxx/32
    - ipBlock:
        cidr: 192.168.xxx.xxx/32
    - podSelector: {}
Now you will see that outside traffic is allowed.
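A quick way to confirm it, assuming your application answers HTTP requests through the ingress, is to call the ALB hostname from outside the cluster:
# An HTTP response (rather than a timeout) means the ALB can reach the pods again
curl -I "http://$(kubectl get ingress customer1-ingress -n customer1 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')/"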
The next step is to automate this. The AWS command above gives you all the IPs associated with the ALB, and now I need to feed them into a Helm chart. After parameterization, my policy YAML file looks like this.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: {{ .Release.Namespace }}
  name: {{ .Values.networkPolicy.name }}
  labels:
{{ include "<chart name>.labels" . | indent 4 }}
spec:
  podSelector:
    matchLabels:
    {{- range .Values.sourcePodSelector }}
      {{ . }}
    {{- end }}
  policyTypes:
  - Ingress
  ingress:
  - from:
    {{- range .Values.albPrivateIPs }}
    - ipBlock:
        cidr: {{ . }}/32
    {{- end }}
    - podSelector: {}
My values.yaml would be as below.
# Default values for <chart name>.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
namespace: default
sourcePodSelector: []
albPrivateIPs: []
networkPolicy:
  name: deny-from-other-namespaces
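For reference, a populated override for one customer could look like the following; the pod label is hypothetical, and each sourcePodSelector entry is a "key: value" string that the template above renders under matchLabels.
namespace: customer1
sourcePodSelector:
  - "app: customer1-app"
albPrivateIPs:
  - 192.168.xxx.xxx
  - 192.168.xxx.xxx
networkPolicy:
  name: deny-from-other-namespaces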
Now I add the following commands to my script to deploy the chart.
# Turn the space-separated IP list into helm's {ip1, ip2} array syntax
ipaddrs=$(echo $ip_addresses | xargs)
helmVar="{${ipaddrs// /, }}"
helm install --name networkpolicies <chart_location> --set albPrivateIPs="$helmVar" --namespace=$namespace
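Before installing, you can render the chart locally with helm template, which accepts the same --set flags, to inspect the generated NetworkPolicy without touching the cluster:
# Render the chart locally to check the generated policy
helm template <chart_location> --set albPrivateIPs="$helmVar" --namespace $namespace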
Update on 28/10/2019: Later I found out that AWS changes the ALB private IPs over time, so this script needs to run as a cron job. I modified the script to store the private IPs in an S3 bucket and to update the network policies only when they change. The complete script is below; a sample cron entry follows it. I take the namespace and the S3 bucket name as inputs to the script. Here my ingress name is configured as "$namespace"-ingress; if you are using a different format, please be mindful of that.
#!/bin/bash
namespace=$1
bucketName=$2
# Resolve the ALB name from the ingress hostname (first four dash-separated parts)
name=$(kubectl get ingress $namespace-ingress -n $namespace -o=json | grep -wE \"hostname\" | xargs | cut -d : -f 2 | xargs | cut -d . -f 1)
part1=$(echo $name | cut -d '-' -f1)
part2=$(echo $name | cut -d '-' -f2)
part3=$(echo $name | cut -d '-' -f3)
part4=$(echo $name | cut -d '-' -f4)
alb_name=$part1-$part2-$part3-$part4
# Collect the private IPs of the ALB's network interfaces
ip_addresses=$(aws ec2 describe-network-interfaces --filters "Name=description,Values=ELB app/$alb_name/$(aws elbv2 describe-load-balancers --names $alb_name | grep -wE 'LoadBalancerArn' | xargs | cut -d / -f 4 | cut -d , -f 1)" | grep -wE 'PrivateIpAddress' | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' | uniq)
ipaddrs=$(echo $ip_addresses | xargs)
helmVar="{${ipaddrs// /, }}"
# Compare against the IP list stored in S3 to detect changes
s3Resource=$(aws s3 ls s3://"$bucketName" | grep ips-"$namespace".txt)
if test -z "$s3Resource"
then
    echo "File does not exist in S3. Continuing..."
else
    aws s3 cp s3://"$bucketName"/ips-"$namespace".txt .
    iplist=$(cat ips-"$namespace".txt)
    if [ "$iplist" = "$helmVar" ]
    then
        echo "IP list is not changed. Hence exiting."
        exit 0
    else
        echo "IP list is updated. Will update network policies..."
    fi
fi
# Persist the new IP list and install or upgrade the network policy chart
echo "$helmVar" > ips-"$namespace".txt
aws s3 cp ips-"$namespace".txt s3://"$bucketName"/
res=$(helm ls networkpolicies)
if test -z "$res"
then
    helm install --name networkpolicies <chart_location> --set albPrivateIPs="$helmVar" --namespace=$namespace
else
    helm upgrade networkpolicies <chart_location> --set albPrivateIPs="$helmVar" --namespace=$namespace
fi
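To schedule it, a cron entry on a machine that has kubectl, helm, and AWS credentials configured will do; the script path, schedule, and log file below are only placeholders:
# Sync ALB IPs into the network policy every 10 minutes for the customer1 namespace
*/10 * * * * /opt/scripts/update-network-policies.sh customer1 <s3_bucket_name> >> /var/log/networkpolicy-sync.log 2>&1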
Hope this helps. Good luck!!!
References
[1] https://kubernetes.io/docs/concepts/services-networking/network-policies/