Overview of Kubernetes Networking and Service Mesh
No Benz stories today (referring to the last article, lol); we just hit the ground writing…
What is Kubernetes networking?
Kubernetes networking refers to the networking infrastructure and concepts used by Kubernetes to aid communication between different components of a Kubernetes cluster, such as Pods, Services, and Nodes.
In Kubernetes, every Pod is assigned a unique IP address, which allows it to communicate with other Pods in the same cluster. Kubernetes also provides the concept of Services, which allows multiple Pods to be grouped together and accessed via a single IP address and port number. Services can be configured to provide load balancing, failover, and other features.
Kubernetes networking also includes the concept of Network Policies, which allow administrators to define rules for controlling traffic between Pods in a cluster. Network Policies can be used to enforce security policies, restrict access to specific Pods, and more.
I will be exploring the following regarding Kubernetes networking:
- Pod Networking
- Service Networking
- Network Policies
- Ingress and Ingress Controllers
- Service Mesh
For each of the above, I will explain the concept and show how to set it up.
Pod networking
In Kubernetes, a pod is the smallest deployable unit that can be created and managed. Pods are typically used to run a single container, although it’s possible to run multiple containers in a pod if they need to share resources. Each pod in a Kubernetes cluster is assigned a unique IP address, which allows it to communicate with other pods within the cluster.
Pod networking enables the communication between pods within a cluster.
Pod networking in Kubernetes works by using a Container Network Interface (CNI) plugin to configure the network interfaces of containers in a pod. When a pod is created, the CNI plugin creates a network namespace for the pod, along with a virtual Ethernet interface and an IP address for the container. The CNI plugin then configures the pod’s network namespace to enable communication between containers in the pod, as well as between the pod and other pods in the cluster.
In basic setups, the simple “bridge” CNI plugin is often used to enable pod networking. This plugin creates a virtual bridge device on each node in the cluster and connects each pod to it through a Virtual Ethernet (veth) pair, giving each pod its unique IP address. When a pod communicates with another pod in the cluster, the traffic is routed through the bridge device on each node.
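If you are curious to see this wiring, here is a quick sketch, assuming shell access to a node that uses the bridge plugin (device names and plugin behavior vary by setup):
ip link show type bridge
ip link show type veth
Each running pod on the node typically corresponds to one veth pair, with one end inside the pod’s network namespace and the other attached to the bridge.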
Setup
We use the following steps for the Pod networking setup:
1. Choose and install a Container Network Interface (CNI) plugin: There are many CNI plugins available for Kubernetes, each with its own set of features and configuration options. Some popular CNI plugins include Calico, Flannel, and Weave Net. In this article, we will be using Calico, which we install with this command:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
2. Verify the CNI plugin installation using this command:
kubectl get pods -n kube-system
This should display the status of the Calico pods.
3. Create a Pod: Now that our Pod network is set up, we test it by creating the Pod below:
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: nginx
We save the YAML manifest file as ourtest-pod.yaml and apply it using the command:
kubectl apply -f ourtest-pod.yaml
We verify the Pod is running using:
kubectl get pods -o wide
This should display the status of the Pod, including its name, IP address, and status.
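To confirm pod-to-pod connectivity, here is a small sketch: we start a temporary pod (the name net-test and the busybox image are just for illustration) and fetch the nginx welcome page from test-pod, substituting the IP address reported by the previous command for <pod-ip>:
kubectl run net-test --rm -it --restart=Never --image=busybox -- wget -qO- http://<pod-ip>
A successful response returns the nginx welcome HTML, confirming that the CNI plugin is routing traffic between pods.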
Service networking
In Kubernetes, a service is an abstraction layer that provides a stable IP address and DNS name for a set of pods. Services enable reliable communication with pods from inside the cluster and, when exposed, from outside it. Services in Kubernetes work by creating a virtual IP address and DNS name that load-balance traffic across a set of pods.
When a service is created, Kubernetes assigns it a unique IP address from the cluster’s Service IP address range. The service’s IP address is virtual and does not correspond to a physical network interface on any node in the cluster. Instead, the IP address is managed by the kube-proxy component, which runs on each node in the cluster and routes traffic to the appropriate pod.
To route traffic to a service, Kubernetes uses a set of labels and selectors to identify the pods that belong to the service. Each pod that matches the selector is automatically added to the service’s endpoint list, and kube-proxy routes traffic to one of the available endpoints.
Services provide the following benefits:
- Load balancing: Services can distribute traffic across multiple pods, providing high availability and scalability.
- Service discovery: Services provide a stable IP address and DNS name, making it easy for other services to discover and communicate with them.
- External exposure: Services can be exposed to the internet or to clients outside the cluster, making it easy for services in different environments to communicate.
Setup
Here’s a step-by-step guide to setting up service networking in a Kubernetes cluster:
Define a service: To create a service, we define a YAML manifest file that describes the service’s properties, such as its name, selector, and ports. Below is a manifest file for a service that routes traffic to the pods labeled ‘app=ourapp’:
apiVersion: v1
kind: Service
metadata:
  name: ourapp-service
spec:
  selector:
    app: ourapp
  ports:
  - name: http
    port: 80
    targetPort: 8080
After saving the YAML file as ourapp-service.yaml, we create the service with the command below:
kubectl apply -f ourapp-service.yaml
We check the service status with this command:
kubectl get service ourapp-service
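To see the endpoint list mentioned earlier (the pod IPs that kube-proxy routes traffic to), we can inspect the service’s Endpoints object:
kubectl get endpoints ourapp-service
Every running pod that matches the app=ourapp selector appears here with its IP and target port; if the list is empty, the selector does not match any running pods.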
Access the service: To access the service from within the cluster, you can use its DNS name or IP address. For example, if the service’s IP address is ‘10.0.0.2’, you can access it using this command:
curl http://10.0.0.2
This will route traffic to one of the available pods and return the response.
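The service is also reachable by its DNS name from inside the cluster. As a quick sketch (assuming the service lives in the default namespace, and using a throwaway busybox pod):
kubectl run dns-test --rm -it --restart=Never --image=busybox -- wget -qO- http://ourapp-service.default.svc.cluster.local
The short name ourapp-service also resolves from pods in the same namespace.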
Network Policies
In Kubernetes, Network Policies provide a way to control traffic flow between pods and services in a cluster. A Network Policy is a specification that defines rules for traffic ingress and egress for a set of pods. With Network Policies, you can enforce security policies and limit traffic between pods and services.
A Network Policy specifies the following:
- The namespaces to which it applies.
- The pods to which it applies.
- The traffic flow that it allows or blocks.
To create a Network Policy, you define a set of rules that specify which traffic is allowed or denied based on pod and namespace selectors, IP blocks, ports, and protocols. Network Policies are enforced by the CNI plugin (such as Calico) running on each node in the cluster, not by kube-proxy; a plugin that does not support Network Policies will simply ignore them.
Some common use cases for Network Policies include:
- Enforcing application-level security policies
- Isolating sensitive workloads from the rest of the cluster
- Limiting traffic flow between different applications
Setup
Here’s a step-by-step guide to setting up Network Policies in a Kubernetes cluster:
- Install a Network Policy provider: Network Policies require a CNI plugin that supports them. We installed Calico earlier, which does.
- Define a Network Policy: To create a Network Policy, we define a YAML manifest file that specifies the policy’s properties, such as its name, namespace, and rules. Below is an example manifest file for a Network Policy that allows traffic only from pods labeled ‘app=ourapp’:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ourapp-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: ourapp
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: ourapp
    ports:
    - protocol: TCP
      port: 80
This policy allows ingress traffic only from pods with the same label selector and only on port 80.
We save the YAML file as ourapp-policy.yaml, then apply it with this command:
kubectl apply -f ourapp-policy.yaml
- Verify the Network Policy: Once the Network Policy is created, we can verify its properties using this command:
kubectl get networkpolicy ourapp-policy
This will display information about the Network Policy, including its name and rules.
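To see the policy in action, here is a hedged sketch (the pod names are just for illustration, and <protected-pod-ip> stands for the IP of a pod labeled app=ourapp): a pod without the label should now be blocked, while a labeled pod gets through:
kubectl run unlabeled --rm -it --restart=Never --image=busybox -- wget -qO- -T 5 http://<protected-pod-ip>
kubectl run labeled --rm -it --restart=Never --labels="app=ourapp" --image=busybox -- wget -qO- -T 5 http://<protected-pod-ip>
The first command should time out, while the second should return a response, confirming that only labeled pods are allowed in on port 80.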
Ingress and Ingress controller
In Kubernetes, an Ingress is a resource that exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. It provides a way to route external traffic to different services based on the URL path or hostname of the incoming request.
An Ingress Controller is the component responsible for implementing the rules specified in Ingress resources; it typically runs as one or more pods in the cluster. The Ingress Controller acts as a reverse proxy and directs traffic from the external network to the appropriate service within the cluster.
When an Ingress is created, it defines the routing rules that determine how traffic should be directed to different services within the cluster. The Ingress Controller then monitors the Ingress resource for changes and updates its configuration accordingly.
Some common use cases for Ingress include:
- Exposing multiple services over a single IP address and port
- Implementing SSL/TLS termination and encryption (see the sketch after this list)
- Load balancing traffic between different services based on URL paths or hostnames.
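For the TLS use case above, a minimal sketch: assuming a certificate stored in a Kubernetes Secret (the name ourapp-tls here is hypothetical), an Ingress terminates TLS by adding a tls section under spec, alongside rules:
spec:
  tls:
  - hosts:
    - ourapp.example.com
    secretName: ourapp-tls
The controller then serves HTTPS for that hostname using the certificate from the secret.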
Setup
Here’s a step-by-step guide to setting up Ingress and Ingress Controllers in a Kubernetes cluster:
1. Install an Ingress Controller: Before creating Ingress resources, we need to install an Ingress Controller in our Kubernetes cluster. Several controllers are available, including Nginx, Traefik, and Istio. We will be installing the Nginx Ingress Controller with the steps below:
- Create a namespace: This keeps things organized and isolated from other resources in your Kubernetes cluster. We do that with this command:
kubectl create namespace nginx-ingress
- Add the Helm repository: We do that with this command:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
- Update the Helm repository: To make sure that we have the latest version of the Nginx Ingress Controller, we update the Helm repo with this command:
helm repo update
- Install the Nginx Ingress Controller using Helm: We use the following command to install the controller:
helm install nginx-ingress ingress-nginx/ingress-nginx --namespace nginx-ingress --set controller.publishService.enabled=true
This will install the Nginx Ingress Controller and create a LoadBalancer service to expose the controller outside of the cluster. We verify the installation with this command:
kubectl get pods -n nginx-ingress
This should display one or more pods running the Nginx Ingress Controller.
2. Create an Ingress resource: Now that the Nginx Ingress Controller is installed and running, we can create an Ingress resource to define the routing rules for incoming traffic. Below is an example manifest file for an Ingress that routes traffic based on the incoming hostname and URL path:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: our-ingress
spec:
  rules:
  - host: ourapp.example.com
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service
            port:
              number: 80
We save the YAML file as our-ingress.yaml and apply it using this command:
kubectl apply -f our-ingress.yaml
To verify that the Ingress is configured correctly, we use the following command to get its status:
kubectl get ingress our-ingress
This should display information about the Ingress, including its name, rules, and backend services.
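To test routing end to end, a quick sketch, assuming the controller’s LoadBalancer has been assigned an external address (shown in the ADDRESS column of the output above):
curl -H "Host: ourapp.example.com" http://<ingress-address>/app1
Setting the Host header lets us exercise the hostname rule without configuring DNS for ourapp.example.com.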
Service Mesh
A Service Mesh is a dedicated infrastructure layer for managing service-to-service communication within a microservices architecture. It provides a set of features for traffic management, service discovery, security, and observability. Service Mesh is typically implemented as a set of sidecar containers that are deployed alongside each application container in a Kubernetes cluster.
Service Mesh Architecture
A typical Service Mesh architecture consists of a control plane and a data plane. The control plane is responsible for configuring and managing the Service Mesh, while the data plane is responsible for handling the actual traffic between services.
Control Plane
The control plane typically includes the following components:
- Service Mesh API: A set of APIs for configuring and managing the Service Mesh.
- Service Mesh Controller: A component that translates the Service Mesh API into the configuration for the data plane.
- Service Mesh Policy Engine: A component that enforces policy decisions, such as routing rules and security policies.
- Service Mesh Telemetry Collector: A component that collects telemetry data, such as metrics and traces, from the data plane.
Data Plane
The data plane typically includes the following components:
- Service Proxy: A sidecar container that is deployed alongside each application container. The service proxy intercepts all inbound and outbound traffic to the application container and routes it through the Service Mesh.
- Service Mesh Data Plane Agent: A component that manages the service proxy and reports telemetry data to the control plane.
Setup
Here’s a step-by-step guide to setting up a Service Mesh in Kubernetes using Istio:
- Download Istio: We download Istio to our local machine using the following command:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=<version> sh -
Replace <version> with the desired version of Istio. This will download the Istio installation files to the local machine; the istioctl binary used in the next step is in the bin directory of the extracted folder and should be added to your PATH.
- Deploy Istio: We deploy Istio to our Kubernetes cluster using the following command:
istioctl install --set profile=default
This will deploy Istio to our Kubernetes cluster using the default configuration profile.
- Verification: To verify that Istio is installed correctly, we use the following command to check the status of the Istio pods:
kubectl get pods -n istio-system
This should display the status of the Istio pods.
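Before deploying applications into the mesh, we enable automatic sidecar injection so that Istio adds its proxy to each new pod. For the default namespace:
kubectl label namespace default istio-injection=enabled
Pods created in the namespace after labeling get an istio-proxy sidecar; pods that already exist must be restarted to be injected.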
Deploy an application
Now that Istio is installed, we deploy an application to our Kubernetes cluster. Here’s an example manifest file for a simple application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: our-app
  labels:
    app: our-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: our-app
  template:
    metadata:
      labels:
        app: our-app
    spec:
      containers:
      - name: our-app
        image: docker.io/istio/examples-helloworld-v1:1.0.0
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: our-app
spec:
  selector:
    app: our-app
  ports:
  - name: http
    port: 80
    targetPort: 5000
  type: ClusterIP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: our-app
spec:
  hosts:
  - our-app.example.com
  http:
  - route:
    - destination:
        host: our-app
        port:
          number: 80
This YAML configuration deploys a simple application with Istio called our-app. The Deployment creates a replica set of 1 pod with a container running the istio/examples-helloworld-v1 image on port 5000. The Service selects the pods with the label app=our-app and exposes them as a Kubernetes Service named our-app, on port 80.
Finally, the VirtualService is used to configure Istio’s routing rules. The VirtualService matches traffic addressed to the host our-app.example.com and routes it to the our-app Service on port 80.
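One caveat: as written, the VirtualService only applies to traffic inside the mesh. To accept requests for our-app.example.com from outside the cluster, it would also need to be bound to an Istio Gateway; here is a minimal sketch (the name our-app-gateway is just for illustration):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: our-app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - our-app.example.com
The VirtualService would then list our-app-gateway under a gateways field to attach itself to this entry point.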
Conclusion
We have covered the essentials of Kubernetes networking: Pod networking, Services, Network Policies, and Ingress. We also looked at setting up a Service Mesh with Istio for managing service-to-service traffic.