Why do I need a service mesh
A service mesh, in this case Istio, is a layer that takes on everything required to manage and configure interservice communication: routing, authentication, authorization, tracing, access control, and much more. And although there are plenty of open-source libraries that implement these functions directly in service code, with Istio you get all of this without adding anything to the service itself.
Components
This article was written for Istio 1.6.
Istio is logically divided into a data plane and a control plane.
The data plane is a collection of proxy servers (Envoy) added to the pod in the form of sidecars. These proxies provide and control all network communication between microservices and are configured from the control plane.
The control plane (istiod) provides service discovery, configuration, and certificate management. It converts Istio objects into configurations that Envoy understands and distributes them across the data plane.
Istio service mesh components
You can add Envoy to an application pod either manually or automatically, using the mutating admission webhook that Istio adds during installation. For automatic injection, put the istio-injection=enabled label on the required namespace.
In addition to the Envoy proxy sidecar, Istio adds a special init container to the pod that redirects production traffic to the Envoy container. How is this achieved? There is no magic here: it is implemented by installing additional iptables rules in the pod's network namespace.
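To give an idea of what these rules look like, here is a simplified sketch, not the literal output of istio-init (the real istio-iptables script installs more chains and exclusions); in Istio 1.6 Envoy listens on port 15001 for outbound and 15006 for inbound traffic:

iptables -t nat -N ISTIO_REDIRECT
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001   # outbound -> envoy
iptables -t nat -A OUTPUT -p tcp -j ISTIO_REDIRECT
iptables -t nat -N ISTIO_IN_REDIRECT
iptables -t nat -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006   # inbound -> envoy
iptables -t nat -A PREROUTING -p tcp -j ISTIO_IN_REDIRECT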
About resource consumption
According to the documentation, at 100 requests per second Istio adds ~2-3 ms of latency, each envoy consumes about 40 MB of memory, and CPU consumption grows by 5%-7% per pod.
Let's take a practical look at how the sidecar intercepts a container's inbound and outbound traffic. To do this, let's examine the network namespace of a pod with an Istio-added sidecar in more detail.
Demo environment
We will need a Kubernetes cluster with Istio installed:
- Install Kubernetes using minikube;
- Install Istio with the demo profile;
- Deploy the test application: productpage and details, with Istio sidecar injection enabled.
Install Kubernetes using minikube:
Linux:
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/
minikube start --driver=<driver_name>   # --driver=none runs Kubernetes directly on the host
macOS:
brew install minikube
minikube start --driver=<driver_name>
Install Istio with the demo profile:
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.6.3
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo
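You can check that the control plane pods have come up:

kubectl -n istio-system get pods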
Deploy the test application: productpage and details, and enable Istio sidecar injection in the default namespace:
kubectl label namespace default istio-injection=enabled
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
Let's look at the list of containers in the productpage pod:
kubectl -n default get pods productpage-v1-7df7cb7f86-ntwzz -o jsonpath="{.spec['containers','initContainers'][*].name}"
productpage
istio-proxy
istio-init
In addition to productpage itself, the pod runs the istio-proxy sidecar (the Envoy proxy itself) and the istio-init init container.
You can inspect the iptables rules configured in the pod's network namespace using the nsenter utility. To do this, we need to find out the pid of the container process:
docker inspect k8s_productpage --format '{{ .State.Pid }}'
16286
Now we can see the iptables rules installed in this container.
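For example, like this (run on the minikube node, substituting the pid obtained above):

nsenter -t 16286 -n iptables -t nat -L -v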
You can see that almost all incoming and outgoing traffic is now intercepted and redirected to the ports where Envoy is already listening for it.
Turn on mutual traffic encryption
The Policy and MeshPolicy objects were removed in Istio 1.6. The PeerAuthentication object is suggested instead.
Istio can encrypt all traffic between containers, and the applications themselves will not even know that they are communicating over TLS. Istio does this out of the box with literally one manifest, since client certificates are already mounted into the proxy sidecar.
The algorithm is as follows:
- Client-side and server-side envoy proxies authenticate each other before sending requests;
- If the check is successful, the client proxy encrypts the traffic and sends it to the server proxy;
- The server-side proxy decrypts the traffic and forwards it locally to the actual destination service.
You can enable mTLS at different levels:
- At the level of the entire mesh;
- At the namespace level;
- At the level of a specific pod.
Operating modes:
- PERMISSIVE: both encrypted and plain text traffic are allowed;
- STRICT: only TLS allowed;
- DISABLE: only plain text allowed.
Let's access the details service from the productpage pod using curl, without TLS enabled, and see what tcpdump captures on the details side:
Request:
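For example, from the productpage pod we saw earlier (the pod name will differ in your cluster):

kubectl exec productpage-v1-7df7cb7f86-ntwzz -c productpage -- curl -s details:9080/details/0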
Traffic dump:
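One way to capture it is the same nsenter technique as above (a sketch; replace the pid placeholder with the value returned by docker inspect for the details container):

docker inspect k8s_details --format '{{ .State.Pid }}'
nsenter -t <details_pid> -n tcpdump -A port 9080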
All body and headers are perfectly readable in plain text.
Let's turn on TLS. To do this, create a PeerAuthentication object in the namespace with our pods:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT
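Apply the manifest (assuming it is saved as peer-authentication.yaml; the file name is arbitrary):

kubectl apply -f peer-authentication.yaml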
Run the request from productpage to details again and see what we get: the traffic is now encrypted.
Authorization
The ClusterRbacConfig, ServiceRole, and ServiceRoleBinding objects were removed with the introduction of the new authorization policy implementation. The AuthorizationPolicy object is suggested instead.
Istio uses authorization policies to configure access from one application to another. Moreover, unlike plain Kubernetes network policies, this works at L7: for HTTP traffic, for example, you can fine-tune the allowed request methods and paths.
As we saw in the previous example, by default, access is open to all pods in the entire cluster.
Now let's deny all traffic in the default namespace using this yaml file:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: default
spec: {}
And try to get to the details service:
curl details:9080
RBAC: access denied
Great, now our request fails.
Now let's configure access so that only GET requests to paths under /details go through, and all other requests are rejected. There are several options for this:
- By specific request headers (see the sketch after this list);
- By the application's service account;
- By source IP address;
- By source namespace;
- By claims in the JWT token.
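For example, a header-based rule might look like this rules fragment (a sketch only; the header name x-example-token and its value are invented for illustration):

  rules:
  - to:
    - operation:
        methods: ["GET"]
    when:
    - key: request.headers[x-example-token]
      values: ["some-value"]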
The easiest to maintain is access by the application's service account, since it requires no preliminary configuration: the bookinfo demo application already ships with the necessary service accounts created and attached to its deployments.
To use authorization policies based on service accounts, you must enable TLS mutual authentication.
Setting up a new access policy:
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "details-viewer"
namespace: default
spec:
selector:
matchLabels:
app: details
rules:
- from:
- source:
principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
to:
- operation:
methods: ["GET"]
paths: ["/details/*"]
And try to reach the service again:
root@productpage-v1-6b64c44d7c-2fpkc:/# curl details:9080/details/0
{"id":0,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}
Everything works. Let's try other methods and paths:
root@productpage-v1-6b64c44d7c-2fpkc:/# curl -XPOST details:9080/details/0
RBAC: access denied
root@productpage-v1-6b64c44d7c-2fpkc:/#
root@productpage-v1-6b64c44d7c-2fpkc:/# curl -XGET details:9080/ping
RBAC: access denied
Conclusion
In conclusion, I will note that the capabilities covered here are only a fraction of what Istio can do. Out of the box we got interservice traffic encryption and authorization configured, albeit at the cost of adding extra components and, therefore, extra resource consumption.
Thanks to all!