Directly pass variable content to command input expecting a filename - bash

I'm trying to pass my environment secret to gcloud auth for a service account. I'm currently doing it by creating a JSON file that gcloud can load:
printf "%s" "$GCP_KEY" >> GCP_KEY.json
gcloud auth activate-service-account --key-file=GCP_KEY.json
I would like to avoid creating a file with this secret.
Ideally, I would like something similar to:
printf "%s" "$GCP_KEY" | gcloud auth activate-service-account --key-file=/dev/stdin
Unfortunately, gcloud uses the filename to determine whether the key is in json format or p12. Is there any way to make gcloud see it as a file with a filename ending in .json?

I took a look at the documentation for the gcloud auth activate-service-account command, and it says the --key-file parameter is required. So how about deleting the file once the command has run successfully? You can do this as follows:
gcloud auth activate-service-account --key-file=GCP_KEY.json ; rm GCP_KEY.json
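If the worry is the key lingering on disk after a failure, a slightly safer variant (a minimal sketch; --suffix is a GNU mktemp option) writes the key to a private temporary file and removes it with a trap, even when activation fails:
keyfile=$(mktemp --suffix=.json)      # mktemp creates the file with mode 0600; the suffix keeps the .json name gcloud checks
trap 'rm -f "$keyfile"' EXIT          # cleanup runs on any exit, not only on success
printf "%s" "$GCP_KEY" > "$keyfile"
gcloud auth activate-service-account --key-file="$keyfile"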

Related

create basic auth token from cloudformation userdata (which is already in base64)

I am trying to create a Basic auth token from a username & password in the user data of a CloudFormation template.
The script creates a token, but it is not the right one: an extra line is being added before the substitution. What is the possible problem/solution?
The code is something like this:
username=test
password=password
export AUTH=$(echo -ne "$username:$password" | base64)
echo $AUTH
If I run this script locally it works absolutely fine.
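One likely culprit (an assumption, since the failing environment isn't shown): user-data scripts often run under /bin/sh rather than bash, and some sh implementations' built-in echo does not understand -ne, so the literal flags or a trailing newline leak into the encoded string. printf behaves the same everywhere:
AUTH=$(printf "%s" "$username:$password" | base64)   # no trailing newline, no echo flag quirks
export AUTH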

How to create a secret docker secret?

I need to create a MariaDB Docker container and set the root password, but the password is passed as a command-line argument, which is dangerous because it gets stored in .bash_history.
I tried using secrets with print pass | docker secret create mysql-root -, but it has the same problem: the password is saved in .bash_history. The Docker secret is not very secret.
I tried an interactive command:
while read -e line; do printf $line | docker secret create mysql-root -; break; done;
But it is very ugly xD. What is a better way to create a Docker secret without saving it in the bash history, and without wiping the whole history?
The simplest way I have found is to use the following:
docker secret create private_thing -
Then enter the secret on the command line, followed by Ctrl-D twice (the first flushes the unterminated line, the second signals end-of-file).
You could try
printf '%s' "$line" | sudo docker secret create mysql-root -
and then
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
The information concerning the use of secrets with MariaDB can be found on the MariaDB page on Docker Hub:
"Docker Secrets
As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
Currently, this is only supported for MYSQL_ROOT_PASSWORD, MYSQL_ROOT_HOST, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD"
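One thing worth noting (an assumption about the setup, since swarm mode isn't mentioned in the question): values created with docker secret create are only mounted under /run/secrets/ for swarm services, not for plain docker run containers, so the pieces connect along these lines:
printf '%s' "$pass" | docker secret create mysql-root -
docker service create --name some-mysql \
  --secret mysql-root \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root \
  mariadb:tag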
You can use openssl rand to generate a random string and pipe it to the docker secret command, i.e.
openssl rand -base64 10 | docker secret create my_sec -
This generates 10 random bytes and base64-encodes them.
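Another way to keep the value out of the history entirely (a sketch; read -s is a bash builtin that suppresses echoing) is to type it at a prompt and pipe it in:
read -r -s -p "Secret: " pass        # prompt without echoing; nothing is written to .bash_history
printf '%s' "$pass" | docker secret create mysql-root -
unset pass                           # drop the value from the shell session as well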

How to generate Personal Access Token for github (enterprise/organisation account) from shell script or command line?

I am trying to write an automation script for configuring git, but since it's for an organisation account with two-factor authentication, I need to generate a personal access token for authorisation.
I have already tried the code below, but it doesn't return anything.
token=$(curl -u "$uname:$passwd" --silent -d '{"scopes":["user"]}' "https://api.github.organisation.com/authorizations" | grep -o '[0-9A-Fa-f]\{40\}')
I want to get the token value into a variable like above.
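One likely reason nothing comes back (an assumption, since no error output is shown): with two-factor authentication enabled, the legacy authorizations API rejects plain basic auth unless a current one-time code is sent in the X-GitHub-OTP header, so curl quietly receives a 401 and grep matches nothing. A sketch that supplies the header and reads the token field from the JSON response ($otp_code is a hypothetical variable holding the current 2FA code; jq is assumed to be installed):
token=$(curl -u "$uname:$passwd" --silent \
  -H "X-GitHub-OTP: $otp_code" \
  -d '{"scopes":["user"],"note":"automation token"}' \
  "https://api.github.organisation.com/authorizations" | jq -r '.token')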

Docker login to gcp using json credentials

I want to log into docker on google cloud from the command line in Windows using credentials in json format.
First, I generated the service account keys in Google Cloud IAM & Admin. Then I tried to log in as advised, using the following commands:
set /p PASS=<keyfile.json
docker login -u _json_key -p "%PASS%" https://[HOSTNAME]
The JSON that Google generates, though, contains newline characters, and the above set command couldn't read the whole file.
Then I edited the file down to a single line, but the set command is still not reading the whole file. Any advice on how to read a JSON file using the set command and pass it to the docker login command below?
The solution is to run the following command:
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json
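If the gcloud CLI is available, an alternative that avoids handling the key in the shell at all is gcloud's Docker credential helper (a sketch of the standard flow):
gcloud auth activate-service-account --key-file=keyfile.json
gcloud auth configure-docker
configure-docker registers gcloud as a credential helper in the Docker config, so docker push/pull against gcr.io authenticates without a manual docker login.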

How can I automate entering input for a command in a bash script that runs on AWS EC2 launch?

For example: upon launching my EC2 instance, I would like to automatically run
docker login
so I can pull a private image from Docker Hub and run it. To log in to Docker Hub I need to input a username and password, and this is what I would like to automate, but I haven't been able to figure out how.
I do know that you can pass in a script to be run on launch via User Data. The issue is that my script expects input, and I would like to automate entering that input.
Thanks in advance!
If just entering a password for docker login is your problem, then I would suggest searching for the docker login manual. 30 secs on Google gave me this link:
https://docs.docker.com/engine/reference/commandline/login/
It suggests something of the form
docker login --username foo --password-stdin < ~/my_password.txt
which reads the password from the file my_password.txt in the current user's home directory.
Seems like the easiest solution for you here is to modify your script to accept command line parameters, and pass those in with the UserData string.
Keep in mind that this will require you to change your launch configs every time your password changes.
The better solution here is to store your containers in ECS, and let AWS handle the authentication for you (as far as pulling the correct containers from a repo).
Your UserData then turns into something along these lines:
#!/bin/bash
mkdir -p /etc/ecs
rm -f /etc/ecs/ecs.config # cleans up any old files on this instance
echo ECS_LOGFILE=/log/ecs-agent.log >> /etc/ecs/ecs.config
echo ECS_LOGLEVEL=info >> /etc/ecs/ecs.config
echo ECS_DATADIR=/data >> /etc/ecs/ecs.config
echo ECS_CONTAINER_STOP_TIMEOUT=5m >> /etc/ecs/ecs.config
echo ECS_CLUSTER=<your-cluster-goes-here> >> /etc/ecs/ecs.config
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/log/ecs/:/log --volume=/var/lib/ecs/data:/data --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro --publish=127.0.0.1:51678:51678 --env-file=/etc/ecs/ecs.config amazon/amazon-ecs-agent:latest
You may or may not need all the volumes specified above.
This setup lets the AWS ecs-agent handle your container orchestration for you.
Below is what I could suggest at this moment:
Create an S3 bucket, e.g. mybucket.
Put a text file (doc_pass.txt) containing your password into that S3 bucket.
Create an IAM policy that has GET access to just that particular S3 bucket, and add this policy to the EC2 instance role.
Put the script below in your user data:
aws s3 cp s3://mybucket/doc_pass.txt doc_pass.txt
cat doc_pass.txt | docker login --username=YOUR_USERNAME --password-stdin
This way you just need to keep your S3 bucket secure; no secrets get displayed in the user data.
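A small refinement under the same setup (a sketch; aws s3 cp accepts - as the destination to stream the object to stdout) keeps the password off the instance's disk entirely:
aws s3 cp s3://mybucket/doc_pass.txt - | docker login --username=YOUR_USERNAME --password-stdin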
