I want to create a simple k8s CronJob to keep a GCP identity token fresh in a secret.
A relatively simple problem that I have not been able to solve.
Given
json=$(cat << END
[
  {
    "op": "replace",
    "path": "/data/token.jwt",
    "value": "$(gcloud auth print-identity-token | base64)"
  }
]
END
)
I want this to be applied with kubectl patch secret token-test --type json -p=$(printf "'%s'" "$json")
I have tried many variants; the weird thing is that if I paste in the result of the heredoc instead of the printf, it works. But all my efforts fail with the following (I also tried a JSON doc on a single line):
$ kubectl patch secret token-test --type json -p=$(printf "'%s'" "$json")
error: unable to parse "'[{": yaml: found unexpected end of stream
Whereas this actually works:
printf "'%s'" "$json"|pbcopy
kubectl patch secret sudo-token-test --type json -p '[{ "op": "replace","path": "/data/token.jwt","value": "ZX...Zwo="}]'
secret/token-test patched
I cannot understand what is different when it fails. I understand bash is a bit tricky when it comes to string handling, but I am not sure if this is a bash issue or an issue in kubectl.
It's a slightly different approach, but how about:
printf "foo" > test
kubectl create secret generic freddie \
--namespace=default \
--from-file=./test
kubectl get secret/freddie \
--namespace=default \
--output=jsonpath="{.data.test}" \
| base64 --decode
foo
X="$(printf "bar" | base64)"
kubectl patch secret/freddie \
--namespace=default \
--patch="{\"data\":{\"test\":\"${X}\"}}"
kubectl get secret/freddie \
--namespace=default \
--output=jsonpath="{.data.test}" \
| base64 --decode
bar
NOTE it's not a best practice to use your user credentials (gcloud auth print-identity-token) in this way. Service Accounts are preferred: they are intended for machine (rather than human) auth, and they can be more easily revoked.
User credentials grant the bearer all the powers of the user account (and this is likely extensive).
There's a portable alternative in which you create a Kubernetes secret from a Service Account key:
kubectl create secret generic your-key \
--from-file=your-key.json=/path/to/your-key.json
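A minimal sketch of consuming that key from inside the CronJob's pod. The mount path below is an assumption (set it via the pod spec's volumes/volumeMounts), not something from the original answer:
# Assumption: the "your-key" secret is mounted at /var/run/secrets/your-key
gcloud auth activate-service-account --key-file=/var/run/secrets/your-key/your-key.json
gcloud auth print-identity-token | base64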
For the cool kids who use GKE, there's an approach called Workload Identity.
$ json='[{ "op": "replace","path": "/data/token.jwt","value": "'"$(gcloud auth print-identity-token | base64 )"'"}]'
$ kubectl patch secret token-test --type json -p="$json"
secret/token-test patched
By appending the command substitution to the string instead of interpolating it in the heredoc, this was solved.
Still don't know why the other approach failed, though.
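For what it's worth, a minimal sketch of what likely went wrong: an unquoted $( ... ) is subject to word splitting, and the single quotes printf emits are literal data, not shell quoting, so kubectl receives only the first whitespace-delimited chunk as its patch, which matches the "'[{" in the error message:
# Demo: compare how many arguments each form produces
json='[{ "op": "replace","path": "/data/token.jwt","value": "abc"}]'
show() { printf '<%s>\n' "$@"; }   # prints each argument it receives on its own line
show -p=$(printf "'%s'" "$json")   # word-split into many arguments; the first is -p='[{
show -p="$json"                    # a single argument, which is what kubectl needs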
EDIT:
The end result for this was to drop kubectl and use curl instead, as it fit better inside the Docker image.
# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
SECRET_NAME=$(cat /etc/scripts/secret-name)
# Explore the API with TOKEN
curl --fail --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" --request PATCH ${APISERVER}/api/v1/namespaces/${NAMESPACE}/secrets/${SECRET_NAME} \
-H 'Accept: application/json' \
-H "Content-Type: application/json-patch+json" \
-d '[{ "op": "replace","path": "/data/token.jwt","value": "'"$(gcloud auth print-identity-token | base64 -w0 )"'"}]' \
--output /dev/null
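A quick way to verify the refresh worked, from a machine with kubectl access (a hedged one-liner, not part of the original post; note the escaped dot in the key name):
kubectl get secret ${SECRET_NAME} -o jsonpath='{.data.token\.jwt}' | base64 --decode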
As commented by me on DazWilkin's answer as well, the user that calls gcloud auth print-identity-token is actually a ServiceAccount on k8s, authenticated via Workload Identity on GKE to a GCP service account.
I need the token to be able to call AWS without storing the actual credentials, by using it to call AWS via Workload Identity Federation.
Related
I am using GitLab for a CI/CD process. I want to send messages to my channel in Slack. The following API call works from the terminal:
curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' https://hooks.slack.com/services/xxx/yyyy/zzzz
However, when I put this line into my .yml file, it gives me a "yaml invalid" error. The complete block is here:
slack_jar:
  stage: slack
  before_script:
    - echo "hi there"
  script:
    - curl -F file=@target/springApp-0.0.1.jar -F channels=#application_dev_backend -F token='xoxb-1111-2222-yyyyyy' https://slack.com/api/files.upload
  only:
    - dev

slack_message:
  stage: slack
  script:
    - echo "Send Slack Messages"
    - curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello, World!"}' https://hooks.slack.com/services/xxxx/yyyy/zzzz
  only:
    - dev
The first stage (sending file) is correct, but the second one is not working. This is the error message I get:
Status: syntax is incorrect
Error: jobs:slack_message:script config should be a string or an array of strings
Based on your error message, the curl command in slack_message is incorrect. Try wrapping the entire command in quotes and escaping the internal quotes. The way you have it, the YAML parser thinks the Content-type: application/json is a key:value pair of a dictionary.
Try this instead:
slack_message:
  stage: slack
  script:
    - echo "Send Slack Messages"
    - "curl -X POST -H 'Content-type: application/json' --data '{\"text\":\"Hello, World!\"}' https://hooks.slack.com/services/xxxx/yyyy/zzzz"
  only:
    - dev
Pro Tip
You can use the CI Lint tool to validate the contents of your .gitlab-ci.yml. You can access this in the CI/CD > Pipelines screen. See CI Lint.
There is also a useful website http://www.yamllint.com/ where you can input YAML, and it will (a) validate it, and (b) return a UTF-8 version. If you have string problems, the UTF-8 version will look mangled (which is what happens with your YAML).
Please give the bot user the required privilege to post in the channel.
Then follow the ci.yaml below:
# Please select an image that has the curl command
slack_notification:
  image: ubuntu:latest
  script:
    - echo "Get user id with curl from Slack"
    - curl -X GET -H 'Authorization:Bearer <bot token>' https://slack.com/api/users.lookupByEmail?email=$GITLAB_USER_EMAIL | jq -r '.user.id'
    # If you change the filter above to | jq -r '.user.name', you will get the user's name from Slack instead.
    - echo "Slack post request"
    - >
      curl -X POST -H 'Authorization:Bearer <bot token>' -H 'Content-type: application/json' --data '{"channel":"<channel id starting with CXXXX>","text":"Your job has been finished please validate Job Url '"$CI_PIPELINE_URL"'"}' https://slack.com/api/chat.postMessage
CI_PIPELINE_URL: this predefined GitLab variable gives the pipeline URL.
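Putting the two calls together, a hedged shell sketch that messages the looked-up user directly (the bot token stays a placeholder, and DMing the user ID instead of a channel is an assumption beyond the original answer):
USER_ID=$(curl -s -H 'Authorization: Bearer <bot token>' \
  "https://slack.com/api/users.lookupByEmail?email=${GITLAB_USER_EMAIL}" | jq -r '.user.id')
curl -s -X POST -H 'Authorization: Bearer <bot token>' \
  -H 'Content-type: application/json' \
  --data "{\"channel\":\"${USER_ID}\",\"text\":\"Your job has finished, please validate: ${CI_PIPELINE_URL}\"}" \
  https://slack.com/api/chat.postMessage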
Slack reference: https://api.slack.com/web
I want to set up a dev environment of Hasura on my local machine, that replicates my existing production (same tables, same schema, same data).
What are the required steps to achieve this task?
I've found this process to work well.
Create a clean, empty local PostgreSQL database and Hasura instance. To update an existing local database, drop it and recreate it.
Dump the schema and data from your existing Hasura server (as per the answer by @protob, but with clean_output set so that manual changes to the output do not have to be made; see pg_dump for details):
curl --location --request POST 'https://example.com/v1alpha1/pg_dump' \
--header 'Content-Type: application/json' \
--header 'X-Hasura-Role: admin' \
--header 'Content-Type: text/plain' \
--header 'x-hasura-admin-secret: {SECRET}' \
--data-raw '{ "opts": ["-O", "-x","--inserts", "--schema", "public"], "clean_output": true}' > hasura-db.sql
Import the schema and data locally:
psql -h localhost -U postgres < hasura-db.sql
The local database has all the migrations because we copied the latest schema, so just mark them as applied:
# A simple `hasura migrate apply --skip-execution` may work too!
for x in $(hasura migrate status | grep "Not Present" | awk '{ print $1 }'); do
  hasura migrate apply --version $x --skip-execution
done
# and confirm the updated status
hasura migrate status
Now finally apply the Hasura metadata using the hasura CLI:
hasura metadata apply
Enjoy your new instance!
Back up the database.
Run Hasura with the database.
Make sure the Hasura metadata is synced.
Hasura has a special endpoint for executing pg_dump on the Postgres instance.
Here is a sample curl request:
curl --location --request POST 'https://your-remote-hasura.com/v1alpha1/pg_dump' \
--header 'Content-Type: application/json' \
--header 'X-Hasura-Role: admin' \
--header 'Content-Type: text/plain' \
--data-raw '{
"opts": ["-O", "-x","--inserts", "--schema", "public"]
}'
It outputs the schema and data in psql format.
You can use a tool such as Postman for convenience to import, test and run the curl query.
Please follow the pg_dump documentation to adjust the needed opts;
e.g., the above query uses the "--inserts" opt, which produces "INSERT INTO" statements in the output.
The output can be copied, pasted and imported directly into the Hasura panel's SQL tab ("COPY FROM stdin" statements result in errors when inserted in the panel).
http://localhost:8080/console/data/sql
Before importing, comment out or delete the line CREATE SCHEMA public; from the query, because it already exists.
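One way to strip that line up front, assuming the dump was saved as hasura-db.sql (the filename is borrowed from the other answer on this page):
sed -i.bak '/^CREATE SCHEMA public;/d' hasura-db.sql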
You also have to select tables and relations to be tracked, during or after executing the query.
If the amount of data is larger, it might be better to use the CLI for the import.
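For example, a hedged sketch that pipes the pg_dump output straight into a local psql instead of pasting it into the console (the endpoint, headers and the clean_output flag are taken from the answers above; adjust to your setup):
curl -s --location --request POST 'https://your-remote-hasura.com/v1alpha1/pg_dump' \
--header 'Content-Type: application/json' \
--header 'X-Hasura-Role: admin' \
--data-raw '{ "opts": ["-O", "-x", "--inserts", "--schema", "public"], "clean_output": true }' \
| psql -h localhost -U postgres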
Environment:
AWX: 3.0.1
Ansible: 2.7.8
Greetings, fellows. Having a problem listing organizations in AWX via the REST API. This is a brand-new installation. What has been done so far:
Organization Created
Users created
Users added to Organization
Users assigned Permissions ('admin' here)
Now, I can obtain a token, no problem. Using this $token, I am trying to list organizations:
$ curl -H "Authorization:Token $token" -f -k -H "content-Type: application/json" -X GET http://192.168.2.37/api/v2/organizations | jq .
$
...and getting null. I don't understand what is going on. It is authenticating me.
Any feedback or direction is greatly appreciated.
Answering my own question: in prior versions of AWX, when it used authtoken instead of OAuth2, the cURL directive was "Authorization: Token <your token>". Now that AWX is using OAuth2, I must use "Bearer <token>" instead.
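For reference, the corrected call against the same host as the question; only the auth scheme changes (the trailing slash matches how the AWX API lists its endpoints):
curl -f -k -H "Authorization: Bearer $token" \
  -H "Content-Type: application/json" \
  -X GET http://192.168.2.37/api/v2/organizations/ | jq .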
I am trying to upload a zip file to my Google Drive account using curl.
The file is uploaded successfully, but the filename is not getting updated; it gets uploaded with the default filename, i.e. "Untitled".
I am using the command below.
curl -k -H "Authorization: Bearer `cat /tmp/token.txt`" -F "metadata={name : 'backup.zip'}" --data-binary "@backup.zip" https://www.googleapis.com/upload/drive/v2/files?uploadType=multipart
You can use Drive API v3 to upload the zip file. The modified curl code is as follows.
curl -X POST -L \
-H "Authorization: Bearer `cat /tmp/token.txt`" \
-F "metadata={name : 'backup.zip'};type=application/json;charset=UTF-8" \
-F "file=#backup.zip;type=application/zip" \
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart"
In order to use this, please include https://www.googleapis.com/auth/drive in the scope.
The answer above works fine and was the command I used in uploading my file to Google Drive using curl. However, I didn't understand what scope was, or all of the initial setup required to make this command work. Hence, for documentation purposes, I'll give a second answer.
Valid as of the time of writing...
Visit the Credentials page and create a new credential (this is assuming you have created a project). I created credentials for TVs and Limited devices, so the workflow was similar to:
Create credentials > OAuth client ID > Application Type > TVs and Limited Input devices > Named the client > Clicked Create.
After doing this, I was able to copy the Client ID and Client Secret when viewing the newly created credential.
NB: Only the variables with double asterisk from the Curl commands should be replaced.
Next step was to run the Curl command:
curl -d "client_id=**client_id**&scope=**scope**" https://oauth2.googleapis.com/device/code
Scope in this situation can be considered to be the kind of access you intend to have with the credential having the inputted client_id. More about scopes in the docs. For the use case in focus, which is to upload files, the scope chosen was https://www.googleapis.com/auth/drive.file.
On running the curl command above, you'll get a response similar to:
{ "device_code": "XXXXXXXXXXXXX", "user_code": "ABCD-EFGH",
"expires_in": 1800, "interval": 5, "verification_url":
"https://www.google.com/device" }
Next step is to visit the verification_url from the response in your browser, provide the user_code and accept the requests for permissions. You will be presented with a code once all prompts have been followed; this code wasn't required for the remaining steps (but there may be some reasons to use it for other use cases).
Next step is to use the Curl command:
curl -d client_id=**client_id** -d client_secret=**client_secret** -d device_code=**device_code** -d grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code https://accounts.google.com/o/oauth2/token
You will get a response similar to:
{ "access_token": "XXXXXXXXX", "expires_in": 3599,
"refresh_token": "XXXXXXXXX", "scope":
"https://www.googleapis.com/auth/drive.file", "token_type": "Bearer"
}
Now you can use the access token and follow the accepted answer with a Curl command similar to:
curl -X POST -L \
-H "Authorization: Bearer **access_token**" \
-F "metadata={name : 'backup.zip'};type=application/json;charset=UTF-8" \
-F "file=#backup.zip;type=application/zip" \
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart"
I have tried both key 1 and key 2 from the Azure Resource Management > Keys page with the following, where foo is a direct copy/paste:
curl -X POST "https://api.cognitive.microsoft.com/sts/v1.0/issueToken?Subscription-Key=foo" --data ""
curl -X POST "https://api.cognitive.microsoft.com/sts/v1.0/issueToken" -H "Ocp-Apim-Subscription-Key: foo" --data ""
In both cases I get:
{ "statusCode": 401, "message": "Access denied due to invalid subscription key. Make sure to provide a valid key for an active subscription." }
Is there something I need to configure so I can retrieve access tokens for my subscription? My ultimate goal is to use the access token to authenticate with a Custom Speech Service endpoint. Thanks!
For some reason this URL worked instead of the one in the documentation:
https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken
Here's the complete command:
curl -X POST --header "Ocp-Apim-Subscription-Key:foo" --data "" "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken"
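A small follow-up sketch, assuming the same key: capture the returned token, then pass it as a Bearer header on subsequent requests (these tokens are short-lived, roughly 10 minutes):
TOKEN=$(curl -s -X POST --header "Ocp-Apim-Subscription-Key:foo" --data "" \
  "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken")
# Then authenticate subsequent requests with: -H "Authorization: Bearer $TOKEN"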