Pentesting Kubernetes

The main author of this page is Jorge (read his original post here)

Architecture & Basics

What does Kubernetes do?

  • Allows running containers in a container engine.
  • Schedules containers so machine resources are used efficiently.
  • Keeps containers alive (restarting them if they die).
  • Allows container communications.
  • Allows deployment techniques.
  • Handles storage volumes.

Architecture

  • Node: a machine (with its operating system) running one or more pods.
    • Pod: A wrapper around one or more containers. A pod should only contain one application, so usually a pod runs just 1 container. The pod is the way Kubernetes abstracts the container technology it runs.
      • Service: Each pod has 1 internal IP address from the internal range of the node. However, it can also be exposed via a service. A service also has an IP address, and its goal is to maintain the communication between pods, so if one dies the new replacement (with a different internal IP) will still be accessible at the same IP of the service. It can be configured as internal or external. The service also acts as a load balancer when 2 pods are connected to the same service. When a service is created you can find the endpoints of each service running kubectl get endpoints

  • Kubelet: Primary node agent. The component that establishes communication between the node and the API server, and can only run pods through the API server. The kubelet doesn't manage containers that were not created by Kubernetes.
  • Kube-proxy: the service in charge of the communication (services) between the apiserver and the node. It is based on iptables in the nodes. More experienced users could install other kube-proxies from other vendors.
  • Sidecar container: Sidecar containers are the containers that run along with the main container in the pod. This sidecar pattern extends and enhances the functionality of current containers without changing them. Nowadays, we use container technology to wrap all the dependencies for the application to run anywhere. A container does only one thing and does that thing very well.
  • Master processes:
    • API Server: the way the users and the pods use to communicate with the master processes. Only authenticated requests should be allowed.
    • Scheduler: Scheduling refers to making sure that Pods are matched to Nodes so that the Kubelet can run them. It has enough intelligence to decide which node has the most available resources and assigns the new pod to it. Note that the scheduler doesn't start new pods; it just communicates with the Kubelet process running inside the node, which will launch the new pod.
    • Kube Controller manager: It checks resources like replica sets or deployments to verify that, for example, the correct number of pods or nodes are running. In case a pod is missing, it will communicate with the scheduler to start a new one. It controls replication, tokens, and account services to the API.
    • etcd: Data storage, persistent, consistent, and distributed. It is Kubernetes' database: the key-value store where the complete state of the cluster is kept (each change is logged here). Components like the Scheduler or the Controller manager depend on this data to know which changes have occurred (available resources of the nodes, number of pods running...).
  • Cloud controller manager: the specific controller for flow controls and applications, i.e. if you have your cluster in AWS or OpenStack.

Note that as there might be several nodes (running several pods), there might also be several master processes, with their access to the API server load balanced and their etcd synchronized.

Volumes:

When a pod creates data that shouldn't be lost when the pod disappears, it should be stored in a physical volume. Kubernetes allows attaching a volume to a pod to persist the data. The volume can be in the local machine or in remote storage. If you are running pods in different physical nodes you should use remote storage so all the pods can access it.
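
For example, a minimal sketch of a pod persisting data through a hostPath volume (names and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: app
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /var/lib/volume-demo
      type: DirectoryOrCreate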

Other configurations:

  • ConfigMap: You can configure URLs to access services. The pod will obtain data from here to know how to communicate with the rest of the services (pods). Note that this is not the recommended place to save credentials!
  • Secret: This is the place to store secret data like passwords, API keys... encoded in B64. The pod will be able to access this data to use the required credentials.
  • Deployments: This is where the components to be run by Kubernetes are indicated. A user usually won't work directly with pods; pods are abstracted in ReplicaSets (a number of identical pods replicated), which are run via deployments. Note that deployments are for stateless applications. The minimum configuration for a deployment is the name and the image to run.
  • StatefulSet: This component is meant specifically for applications like databases which need to access the same storage.
  • Ingress: This is the configuration used to expose the application publicly with a URL. Note that this can also be done using external services, but this is the correct way to expose the application.
    • If you implement an Ingress you will need to create Ingress Controllers. The Ingress Controller is a pod that will be the endpoint that receives the requests, checks them, and load balances them to the services. The ingress controller will route each request based on the configured ingress rules. Note that the ingress rules can point different paths or even subdomains to different internal Kubernetes services.
      • A better security practice would be to use a cloud load balancer or a proxy server as the entry point so no part of the Kubernetes cluster is directly exposed.
      • When a request that doesn't match any ingress rule is received, the ingress controller will direct it to the "Default backend". You can describe the ingress controller to get the address of this parameter.
      • minikube addons enable ingress

PKI infrastructure - Certificate Authority CA:

  • CA is the trusted root for all certificates inside the cluster.
  • Allows components to validate to each other.
  • All cluster certificates are signed by the CA.
  • etcd has its own certificate.
  • Types:
    • apiserver cert.
    • kubelet cert.
    • scheduler cert.
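
You can inspect any of these certificates with openssl (paths assume a kubeadm-style cluster):

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
openssl x509 -in /etc/kubernetes/pki/ca.crt -text -noout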

Minikube

Minikube can be used to perform some quick tests on Kubernetes without needing to deploy a whole Kubernetes environment. It will run the master and node processes in one machine. Minikube will use VirtualBox to run the node. See here how to install it.

$ minikube start
😄  minikube v1.19.0 on Ubuntu 20.04
✨  Automatically selected the virtualbox driver. Other choices: none, ssh
💿  Downloading VM boot image ...
    > minikube-v1.19.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.19.0.iso: 244.49 MiB / 244.49 MiB  100.00% 1.78 MiB p/s 2m17.
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v10-v1...: 491.71 MiB / 491.71 MiB  100.00% 2.59 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=3900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.4 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

---- ONCE YOU HAVE A K8 SERVICE RUNNING WITH AN EXTERNAL SERVICE -----
$ minikube service mongo-express-service
(This will open your browser to access the service exposed port)

$ minikube delete
🔥  Deleting "minikube" in virtualbox ...
💀  Removed all traces of the "minikube" cluster

Kubectl Basics

Kubectl is the command line tool for Kubernetes clusters. It communicates with the API server of the master process to perform actions in Kubernetes or to ask for data.

kubectl version #Get client and server version
kubectl get pod
kubectl get services
kubectl get deployment
kubectl get replicaset
kubectl get secret
kubectl get all
kubectl get ingress
kubectl get endpoints

#kubectl create deployment <deployment-name> --image=<docker image>
kubectl create deployment nginx-deployment --image=nginx
#Access the configuration of the deployment and modify it
#kubectl edit deployment <deployment-name>
kubectl edit deployment nginx-deployment
#Get the logs of the pod for debugging (the output of the docker container running)
#kubectl logs <replicaset-id/pod-id>
kubectl logs nginx-deployment-84cd76b964
#kubectl describe pod <pod-id>
kubectl describe pod mongo-depl-5fd6b7d4b4-kkt9q
#kubectl exec -it <pod-id> -- bash
kubectl exec -it mongo-depl-5fd6b7d4b4-kkt9q -- bash
#kubectl describe service <service-name>
kubectl describe service mongodb-service
#kubectl delete deployment <deployment-name>
kubectl delete deployment mongo-depl
#Deploy from config file
kubectl apply -f deployment.yml

YAML configuration files examples

Each configuration file has 3 parts: metadata, specification (what needs to be launched), and status (desired state).
Inside the specification of the deployment configuration file you can find the template defined with a new configuration structure defining the image to run:

Example of Deployment + Service declared in the same configuration file from [here](https://gitlab.com/nanuchi/youtube-tutorial-series/-/blob/master/demo-kubernetes-components/mongo.yaml)

As a service usually is related to one deployment, it's possible to declare both in the same configuration file (the service declared in this config is only accessible internally):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom: 
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
    - protocol: TCP
      port: 27017
      targetPort: 27017

Example of external service config

This service will be accessible externally; check the `nodePort` and `type: LoadBalancer` attributes:

---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8081
      nodePort: 30000

{% hint style="info" %} This is useful for testing but for production you should have only internal services and an Ingress to expose the application. {% endhint %}

Example of Ingress config file

This will expose the application in http://dashboard.com.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80

Example of secrets config file

Note how the passwords are only encoded in B64, which isn't secure!

apiVersion: v1
kind: Secret
metadata:
    name: mongodb-secret
type: Opaque
data:
    mongo-root-username: dXNlcm5hbWU=
    mongo-root-password: cGFzc3dvcmQ=

Example of ConfigMap

A ConfigMap is the configuration that is given to the pods so they know how to locate and access other services. In this case, each pod will know that the name mongodb-service is the address of a service it can communicate with (and the pods behind that service will be executing a mongodb):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service

Then, inside a deployment config this address can be specified in the following way so it's loaded inside the env of the pod:

[...]
spec:
  [...]
  template:
    [...]
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom: 
            configMapKeyRef:
              name: mongodb-configmap
              key: database_url
[...]

Example of volume config

You can find different examples of storage configuration yaml files in https://gitlab.com/nanuchi/youtube-tutorial-series/-/tree/master/kubernetes-volumes.
Note that volumes aren't inside namespaces.
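
As a reference, a minimal sketch of a PersistentVolume plus the PersistentVolumeClaim that binds to it (names and sizes are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi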

Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. They are intended for use in environments with many users spread across multiple teams or projects. For clusters with a few to tens of users, you should not need to create or think about namespaces at all. You should only start using namespaces to have better control and organization of each part of the application deployed in Kubernetes.

Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another and each Kubernetes resource can only be in one namespace.

There are 4 namespaces by default if you are using minikube:

kubectl get namespace
NAME              STATUS   AGE
default           Active   1d
kube-node-lease   Active   1d
kube-public       Active   1d
kube-system       Active   1d
  • kube-system: It's not meant for user use and you shouldn't touch it. It's for master and kubectl processes.
  • kube-public: Publicly accessible data. Contains a configmap which contains cluster information
  • kube-node-lease: Determines the availability of a node
  • default: The namespace the user will use to create resources
#Create namespace
kubectl create namespace my-namespace

{% hint style="info" %} Note that most Kubernetes resources e.g. pods, services, replication controllers, and others are in some namespaces. However, other resources like namespace resources and low-level resources, such as nodes and persistenVolumes are not in a namespace. To see which Kubernetes resources are and arent in a namespace:

kubectl api-resources --namespaced=true #In a namespace
kubectl api-resources --namespaced=false #Not in a namespace

{% endhint %}

You can save the namespace for all subsequent kubectl commands in that context.

kubectl config set-context --current --namespace=<insert-namespace-name-here>

Helm

Helm is the package manager for Kubernetes. It allows packaging YAML files and distributing them in public and private repositories. These packages are called Helm Charts.

helm search <keyword>

Helm is also a template engine that allows generating config files with variables.
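
A minimal sketch of how a chart template consumes such variables (hypothetical chart, rendered with helm template):

# values.yaml
image:
  repository: nginx
  tag: "1.21"

# templates/deployment.yaml (excerpt)
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"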

Pentesting Kubernetes from the outside

{% page-ref page="pentesting-kubernetes-from-the-outside.md" %}

VULNERABILITIES and some fixes

Enumeration inside a Pod

{% page-ref page="enumeration-from-a-pod.md" %}

Vulnerabilities - kubernetes secrets

A Secret is an object that contains sensitive data such as a password, a token or a key. Such information might otherwise be put in a Pod specification or in an image. Users can create Secrets and the system also creates Secrets. The name of a Secret object must be a valid DNS subdomain name.

Secrets can be things like:

  • API and SSH keys.
  • OAuth tokens.
  • Credentials, passwords (plain text or b64 + encryption).
  • Information or comments.
  • Database connection code, strings… .

Secret types:

| Builtin Type | Usage |
|-|-|
| Opaque | arbitrary user-defined data |
| kubernetes.io/service-account-token | service account token |
| kubernetes.io/dockercfg | serialized ~/.dockercfg file |
| kubernetes.io/dockerconfigjson | serialized ~/.docker/config.json file |
| kubernetes.io/basic-auth | credentials for basic authentication |
| kubernetes.io/ssh-auth | credentials for SSH authentication |
| kubernetes.io/tls | data for a TLS client or server |
| bootstrap.kubernetes.io/token | bootstrap token data |

How secrets work:

https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod

Create a secret, commands:

kubectl create secret generic secret_01 --from-literal user=<user> --from-literal password=<password>
kubectl run pod --image=nginx -oyaml --dry-run=client
kubectl run pod --image=nginx -oyaml --dry-run=client > <podName.yaml>

This is the generated file, after editing it to mount the secret as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: <secret_01>
      mountPath: "/etc/<secret_01>"
      readOnly: true
  volumes:
  - name: <secret_01>
    secret:
      secretName: <secret_01>
      items:
      - key: username
        path: my-group/my-username
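
Once the pod is running, you can verify the mounted secret from inside it (paths follow the example above):

kubectl exec mypod -- ls /etc/<secret_01>/my-group
kubectl exec mypod -- cat /etc/<secret_01>/my-group/my-username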

Using Secrets as environment variables

If you want to use a secret in an environment variable to allow the rest of the pods to reference the same secret, you could use:

In the pod yaml, add the uncommented lines:

#apiVersion: v1
#kind: Pod
#metadata:
#  name: secret-env-pod
#spec:
#  containers:
#  - name: mycontainer
#    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
#     - name: SECRET_PASSWORD
#        valueFrom:
#          secretKeyRef:
#            name: mysecret
#            key: password
#  restartPolicy: Never

The result is:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    env:
      - name: PASSWORD
        valueFrom:
          secretKeyRef:
            name: <secret_02>
            key: <password>   
    volumeMounts:
    - name: <secret_01>
      mountPath: "/etc/<secret_01>"
      readOnly: true
  volumes:
  - name: <secret_01>
    secret:
      secretName: <secret_01>
      items:
      - key: username
        path: my-group/my-username

Save and:

kubectl delete -f <podName.yaml> --force
kubectl create -f <podName.yaml>

or:

kubectl replace -f <podName.yaml> --force

More info: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables

Discover secrets in docker:

To get the id of the container:

docker ps | grep <service>

Inspect:

docker inspect <docker_id>

Check the Env (environment variables) section of the output for secrets and you will see:

  • Passwords.
  • Ips.
  • Ports.
  • Paths.
  • Others… .
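
You can also dump just the environment variables with a Go template (illustrative):

docker inspect -f '{{json .Config.Env}}' <docker_id>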

If you want to copy:

docker cp <docker_id>:/etc/<secret_01> <secret_01>

Discover secrets in etcd:

Remember that etcd is a consistent and highly-available key-value store used as the Kubernetes backing store for all cluster data. Let's access the secrets stored in etcd:

cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd

You will see certs, keys and URLs located in the FS. Once you get them, you will be able to connect to etcd.

#ETCDCTL_API=3 etcdctl --cert <path to client.crt> --key <path to client.key> --cacert <path to CA.cert> --endpoints=<ip:port> endpoint health

ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=127.0.0.1:1234 endpoint health

Once you establish communication you will be able to get the secrets:

#ETCDCTL_API=3 etcdctl --cert <path to client.crt> --key <path to client.key> --cacert <path to CA.cert> --endpoints=<ip:port> get <path/to/secret>

ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=127.0.0.1:1234 get /registry/secrets/default/secret_02

Adding encryption to the ETCD

By default all the secrets are stored in plain text inside etcd unless you apply an encryption layer. If the provider configuration only contains the identity provider (the default, `identity: {}`), the secrets are stored in plain text. https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/

Encryption types

| Name | Encryption | Strength | Speed | Key Length | Other Considerations |
|-|-|-|-|-|-|
| identity | None | N/A | N/A | N/A | Resources written as-is without encryption. When set as the first provider, the resource will be decrypted as new values are written. |
| aescbc | AES-CBC with PKCS#7 padding | Strongest | Fast | 32-byte | The recommended choice for encryption at rest but may be slightly slower than secretbox. |
| secretbox | XSalsa20 and Poly1305 | Strong | Faster | 32-byte | A newer standard and may not be considered acceptable in environments that require high levels of review. |
| aesgcm | AES-GCM with random nonce | Must be rotated every 200k writes | Fastest | 16, 24, or 32-byte | Is not recommended for use except when an automated key rotation scheme is implemented. |
| kms | Uses envelope encryption scheme: Data is encrypted by data encryption keys (DEKs) using AES-CBC with PKCS#7 padding, DEKs are encrypted by key encryption keys (KEKs) according to configuration in Key Management Service (KMS) | Strongest | Fast | 32-bytes | The recommended choice for using a third party tool for key management. Simplifies key rotation, with a new DEK generated for each encryption, and KEK rotation controlled by the user. |

The secrets will be encrypted with the above algorithms and encoded in base64. To apply the encryption to all already-existing secrets:

kubectl get secrets --all-namespaces -o json | kubectl replace -f -

How to encrypt the ETCD

Create a directory in /etc/kubernetes; in this case name it etcd, so you have /etc/kubernetes/etcd.

Create a yaml file with the configuration:

vi <configFile.yaml>

You can copy the content of https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
    - secrets
    providers:
    - aescbc:
        keys:
        - name: key1
          secret: <your pass in b64>
    - identity: {}

Generate the pass in b64 (remember to use a password with a length of 16, 24 or 32 characters):

echo -n <password> | base64
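
Alternatively, you can generate a random 32-byte key directly (as the Kubernetes docs suggest):

head -c 32 /dev/urandom | base64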

At this point the encryption provider config is still not set in the kube-apiserver.

After that, you have to edit the file /etc/kubernetes/manifests/kube-apiserver.yaml and add the following lines under the spec section:

spec:
  containers:
  - command:
    - kube-apiserver
    - --encryption-provider-config=/etc/kubernetes/etcd/<configFile.yaml>

Scroll down to volumeMounts and add:

- mountPath: /etc/kubernetes/etcd
    name: etcd
    readOnly: true

Scroll down to volumes and add the hostPath:

- hostPath:
    path: /etc/kubernetes/etcd
    type: DirectoryOrCreate
  name: etcd

Get information about the secrets:

kubectl get secret
kubectl get secret <secret_name> -oyaml
ETCDCTL_API=3 etcdctl get /registry/secrets/default/secret1 [...] | hexdump -C
kubectl create secret generic test-secret --from-literal='username=my-app' --from-literal='password=45tRf$we34rR'

With root access:

# kubectl get secret
kubectl get secret test-secret -oyaml

Do not forget to delete the secrets and re-create them again in order to apply the encryption layer.

Final tips:

Vulnerabilities - Container runtime sandboxes

How an attack with lateral movement and privilege escalation could be done:

Getting inside the container:

kubectl get node
kubectl run pod --image=<image_name>
kubectl exec pod -it -- bash

Once inside the container:

uname -r

Once the attacker discovers the kernel version, they could use exploitation techniques to gather information or escalate privileges into the OS.

For secure sandboxes (a RuntimeClass sketch follows the links):

  • gVisor:

https://github.com/google/gvisor

  • Kata Containers:

https://katacontainers.io/
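
These sandboxed runtimes are typically wired into the cluster via a RuntimeClass; a minimal sketch assuming gVisor's runsc handler is installed on the nodes:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc

Pods then opt in by setting runtimeClassName: gvisor in their spec.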

Vulnerabilities - OS

It is mandatory to define privilege and access controls for containers/pods:

  • userIDs and groupIDs.
  • Privileged or unprivileged runs (whether privilege escalation is allowed).
  • Linux capabilities.

More info at: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

userID and groupID

kubectl run pod --image=busybox --command -oyaml --dry-run=client > <podName>.yaml -- sh -c 'sleep 1h'
vi <podName>.yaml

Add the uncommented lines:

#apiVersion: v1
#kind: Pod
#metadata:
#  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
#  volumes:
#  - name: sec-ctx-vol
#    emptyDir: {}
#  containers:
#  - name: sec-ctx-demo
#    image: busybox
#    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: true
#    volumeMounts:
#    - name: sec-ctx-vol
#      mountPath: /data/demo
#    securityContext:
#      allowPrivilegeEscalation: true

Save and:

kubectl delete -f <podName>.yaml --force
kubectl create -f <podName>.yaml

Check permissions:

kubectl exec -it <podName> -- sh
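
Inside the shell you can confirm that the effective IDs match the securityContext (output is illustrative):

id
# uid=1000 gid=3000 groups=2000,3000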

How to disable privilege escalation:

vi <podName>.yaml

Set this line to false

      allowPrivilegeEscalation: false

Save and:

kubectl delete -f <podName>.yaml --force
kubectl create -f <podName>.yaml

Modify PodSecurityPolicy

Pod security policies control the security policies about how a pod has to run. More info at: https://kubernetes.io/docs/concepts/policy/pod-security-policy/

Edit the kube-apiserver.yaml file:

vi /etc/kubernetes/manifests/kube-apiserver.yaml

Inside, add PodSecurityPolicy to the enabled admission plugins:

- --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
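
A minimal restrictive policy could look like this (a sketch; note that PodSecurityPolicy was deprecated in Kubernetes v1.21):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim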

Vulnerabilities - mTLS

Mutual authentication, two-way, pod to pod.

More info at: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

Create a sidecar proxy app

Create your .yaml

kubectl run app --image=bash --command -oyaml --dry-run=client > <appName.yaml> -- sh -c 'ping google.com'

Edit your .yaml and add the uncommented lines:

#apiVersion: v1
#kind: Pod
#metadata:
#  name: security-context-demo
#spec:
#  securityContext:
#    runAsUser: 1000
#    runAsGroup: 3000
#    fsGroup: 2000
#  volumes:
#  - name: sec-ctx-vol
#    emptyDir: {}
#  containers:
#  - name: sec-ctx-demo
#    image: busybox
    command: [ "sh", "-c", "apt update && apt install iptables -y && iptables -L && sleep 1h" ]
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]
 #   volumeMounts:
 #   - name: sec-ctx-vol
 #     mountPath: /data/demo
 #   securityContext:
 #     allowPrivilegeEscalation: true

See the logs of the proxy:

kubectl logs app -c proxy

More info at: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

CLUSTER HARDENING - RBAC

Kubernetes has an authorization module named Role-Based Access Control ([**RBAC**](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) that helps to set utilization permissions to the API server.
The RBAC table is constructed from "Roles" and "ClusterRoles". The difference between them is just where the role will be applied: a "Role" will grant access to only one specific namespace, while a "ClusterRole" can be used in all namespaces in the cluster. Moreover, ClusterRoles can also grant access to:

  • cluster-scoped resources like nodes.
  • non-resource endpoints like /healthz.
  • namespaced resources like Pods, across all namespaces.

Example of Role configuration:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default-green
  name: pod-and-pod-logs-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]

Example of ClusterRole configuration:

For example, you can use a ClusterRole to allow a particular user to read secrets in any namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

Role and ClusterRole Binding concept

A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted. A RoleBinding grants permissions within a specific namespace whereas a ClusterRoleBinding grants that access cluster-wide.

RoleBinding example:

apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role #this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io

ClusterRoleBinding example:

apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io

Permissions are additive, so if you have a ClusterRole with "list" and "delete" on secrets you can add to it a Role with "get". So be aware and always test your roles and permissions, and specify what is ALLOWED, because everything is DENIED by default.
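
You can check the effective permissions of a subject with kubectl auth can-i (subject and namespace are illustrative):

kubectl auth can-i list secrets --namespace default --as jane
kubectl auth can-i --list --namespace default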

RBAC Structure

RBAC permissions are built from three individual parts:

  1. Role\ClusterRole: the actual permission. It contains rules that represent a set of permissions. Each rule contains resources and verbs. The verb is the action that will be applied on the resource.
  2. Subject (User, Group or ServiceAccount): the object that will receive the permissions.
  3. RoleBinding\ClusterRoleBinding: the connection between Role\ClusterRole and the subject.

In a real cluster, fine-grained role bindings provide greater security, but require more effort to administrate.

From Kubernetes 1.6 onwards, RBAC policies are enabled by default. But to enable RBAC you can use something like:

kube-apiserver --authorization-mode=Example,RBAC --other-options --more-options

RBAC functions:

  • Restrict the access to the resources to users or ServiceAccounts.
  • An RBAC Role or ClusterRole contains rules that represent a set of permissions.
  • Permissions are purely additive (there are no "deny" rules).
  • RBAC works with Roles and Bindings.

{% hint style="info" %} When configuring roles and permissions it's highly important to always follow the principle of Least Privileges {% endhint %}

SERVICE ACCOUNTS HARDENING

To learn about Service Accounts Hardening read the page:

{% page-ref page="enumeration-from-a-pod.md" %}

KUBERNETES API HARDENING

It's very important to protect the access to the Kubernetes API Server as a malicious actor with enough privileges could be able to abuse it and damage the environment in a lot of ways.
It's important to secure both the access (**whitelist** origins to access the API Server and deny any other connection) and the authentication (following the principle of **least** **privilege**). And definitely never allow anonymous requests.

Common Request process:
User or K8s ServiceAccount > Authentication > Authorization > Admission Control.

Tips (some of them map to kube-apiserver flags, sketched after the list):

  • Close ports.
  • Avoid Anonymous access.
  • NodeRestriction; no access from specific nodes to the API.
  • Ensure secure workload isolation with labels.
  • Prevent specific pods from accessing the API.
  • Avoid exposing the ApiServer to the internet.
  • Avoid unauthorized access with RBAC.
  • ApiServer port with firewall and IP whitelisting.
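
Some of these tips map directly to kube-apiserver flags (a sketch; the exact flag set depends on your deployment):

kube-apiserver \
  --anonymous-auth=false \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction \
  --bind-address=<internal-ip> [...]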

KUBERNETES CLUSTER HARDENING

You should update your Kubernetes environment as frequently as necessary to have:

  • Dependencies up to date.
  • Bug and security patches.

Release cycles: each 3 months there is a new minor release -- 1.20.3 = 1 (Major).20 (Minor).3 (Patch)

The best way to update a Kubernetes Cluster is described [here](https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/) (a kubeadm sketch follows the list):

  • Upgrade the Master Node components following this sequence:
    • etcd (all instances).
    • kube-apiserver (all control plane hosts).
    • kube-controller-manager.
    • kube-scheduler.
    • cloud controller manager, if you use one.
  • Upgrade the Worker Node components such as kube-proxy and kubelet.
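
With kubeadm, that sequence typically looks like this (version is illustrative):

kubeadm upgrade plan
kubeadm upgrade apply v1.20.3
# on each worker node:
kubeadm upgrade node
apt-get install -y kubelet=1.20.3-00 && systemctl restart kubelet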

References

{% embed url="https://sickrov.github.io/" %}

{% embed url="https://www.youtube.com/watch?v=X48VuDVv0do" %}