Docker login to GCP using JSON credentials - Windows

I want to log in to Docker on Google Cloud from the command line in Windows, using credentials in JSON format.
First, I generated a service account key in Google Cloud IAM & Admin. Then I tried to log in as advised, using the following commands:
set /p PASS=<keyfile.json
docker login -u _json_key -p "%PASS%" https://[HOSTNAME]
The JSON that Google generates, however, contains newline characters, and the set command above couldn't read the whole file.
I then edited the file down to a single line, but the set command is still not reading the whole file. Any advice on how to read a JSON file using the set command and pass it to the docker login command below?

The solution is to run the following command:
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json
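The < redirection works from a regular cmd prompt, since it feeds the whole key file to stdin. In PowerShell, a hedged equivalent is to pipe the raw file contents instead (same keyfile.json as above):
Get-Content keyfile.json -Raw | docker login -u _json_key --password-stdin https://gcr.io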

Related

How can I get Logstash-Keystore to find its password?

For background: I'm attempting to automate the steps to provision and create a multitude of Logstash processes with Ansible, but want to make sure the steps and configuration work manually before automating the process.
I have installed Logstash as per Elastic's documentation (it's an RPM installation) and have it correctly shipping logs to my ES instance without issue. Elasticsearch and Logstash are both v7.12.0.
Following the keystore docs, I've created a /etc/sysconfig/logstash file and set its permissions to 0600. I've added the LOGSTASH_KEYSTORE_PASS key to the file, to be used as the environment variable sourced by the keystore command when creating and reading the keystore itself.
Upon running the sudo /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create command, the process spits back the following error:
WARNING: The keystore password is not set.
Please set the environment variable `LOGSTASH_KEYSTORE_PASS`.
Failure to do so will result in reduced security.
Continue without password protection on the keystore? [y/N]
This should not be the case, as the keystore process should be sourcing my password env var from the aforementioned file. Has anyone experienced a similar issue, and if so, how did you solve it?
This is expected; the file /etc/sysconfig/logstash is only read when you start Logstash as a service, not when you run it from the command line.
To create the keystore, you first need to export the variable with the password, as explained in the documentation:
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
sudo -E /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create
After that, when you start Logstash as a service, it will read the variable from the /etc/sysconfig/logstash file.
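For reference, a minimal /etc/sysconfig/logstash for this setup could be just the single line below (file owned by root with mode 0600, as described in the question):
LOGSTASH_KEYSTORE_PASS=mypassword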
1 - First, set the password for the keystore itself.
It goes in config/startup.options, e.g. LOGSTASH_KEYSTORE_PASS=mypassword (without export).
2 - Then use that keystore password to create your keystore file:
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
..logstash/bin/logstash-keystore --path.settings ../logstash create
Note: logstash-keystore (the command) and logstash.keystore (the file) are different things; the create step above produces the one with the dot. It lives in the config/ directory, next to your startup.options.
The set +o history / set -o history commands keep your password out of the shell history; otherwise anyone running history to list previously used commands could see it.
3 - Then you can add your first key to the keystore file. The keystore password must be given beforehand:
set +o history
export LOGSTASH_KEYSTORE_PASS=mypassword
set -o history
./bin/logstash-keystore add YOUR_KEY
It will then ask for your VALUE. If you do not give your keystore password, you get an error: Found a file at....but it's not a valid Logstash keystore
4 - Once your keystore password is in place, you can list the contents of your keystore file, or remove an entry by replacing list with remove:
./bin/logstash-keystore list
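Keys stored this way are then referenced from the Logstash settings or pipeline configuration with the ${KEY} syntax; for example, assuming a key named ES_PWD was added above (host and user are placeholders):
output {
  elasticsearch {
    hosts    => ["https://localhost:9200"]
    user     => "logstash_writer"
    password => "${ES_PWD}"
  }
}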

How can I automate entering input for a command in a bash script that runs on AWS EC2 launch?

For example: upon launching my EC2 instance, I would like to automatically run
docker login
so I can pull a private image from Docker Hub and run it. To log in to Docker Hub I need to enter a username and password, and that is what I would like to automate but haven't been able to figure out how.
I do know that you can pass in a script to be run on launch via User Data. The issue is that my script expects input, and I would like to automate entering that input.
Thanks in advance!
If entering a password for docker login is your only problem, then I would suggest looking at the manual for docker login. 30 secs on Google gave me this link:
https://docs.docker.com/engine/reference/commandline/login/
It suggests something of the form
docker login --username foo --password-stdin < ~/my_password.txt
which reads the password from a file my_password.txt in the current user's home directory.
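In the EC2 case that line can go straight into the User Data script, assuming the password file has already been placed on the instance (paths and image name below are hypothetical):
#!/bin/bash
docker login --username foo --password-stdin < /home/ec2-user/my_password.txt
docker pull foo/my-private-image
docker run -d foo/my-private-image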
Seems like the easiest solution for you here is to modify your script to accept command line parameters, and pass those in with the UserData string.
Keep in mind that this will require you to change your launch configs every time your password changes.
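A minimal sketch of that parameter approach, with the script path and argument order made up for illustration:
#!/bin/bash
# /opt/bootstrap.sh USERNAME PASSWORD -- invoked with both values from the UserData string
DOCKER_USER="$1"
DOCKER_PASS="$2"
echo "$DOCKER_PASS" | docker login --username "$DOCKER_USER" --password-stdin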
The better solution here is to store your containers in ECS, and let AWS handle the authentication for you (as far as pulling the correct containers from a repo).
Your UserData then turns into something along the lines of:
#!/bin/bash
mkdir -p /etc/ecs
rm -f /etc/ecs/ecs.config # cleans up any old files on this instance
echo ECS_LOGFILE=/log/ecs-agent.log >> /etc/ecs/ecs.config
echo ECS_LOGLEVEL=info >> /etc/ecs/ecs.config
echo ECS_DATADIR=/data >> /etc/ecs/ecs.config
echo ECS_CONTAINER_STOP_TIMEOUT=5m >> /etc/ecs/ecs.config
echo ECS_CLUSTER=<your-cluster-goes-here> >> /etc/ecs/ecs.config
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/log/ecs/:/log --volume=/var/lib/ecs/data:/data --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro --publish=127.0.0.1:51678:51678 --env-file=/etc/ecs/ecs.config amazon/amazon-ecs-agent:latest
You may or may not need all the volumes specified above.
This setup lets the AWS ecs-agent handle your container orchestration for you.
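If the agent also needs to pull images from a private Docker Hub repository, the same ecs.config file can carry the registry credentials via the agent's ECS_ENGINE_AUTH_TYPE / ECS_ENGINE_AUTH_DATA settings (values below are placeholders):
echo 'ECS_ENGINE_AUTH_TYPE=docker' >> /etc/ecs/ecs.config
echo 'ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"YOUR_USER","password":"YOUR_PASS","email":"you@example.com"}}' >> /etc/ecs/ecs.config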
Below is what I would suggest at the moment -
Create an S3 bucket, e.g. mybucket.
Put a text file (doc_pass.txt) containing your password into that S3 bucket.
Create an IAM policy that has GET access to just that particular S3 bucket and add this policy to the EC2 instance role.
Put the script below in your user data -
aws s3 cp s3://mybucket/doc_pass.txt doc_pass.txt
cat doc_pass.txt | docker login --username=YOUR_USERNAME --password-stdin
This way you just need to keep your S3 bucket secure; no secrets are displayed in the user data.
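Putting it together, the user data could look roughly like this (the image name is a placeholder; the shred just avoids leaving the plaintext password on disk):
#!/bin/bash
aws s3 cp s3://mybucket/doc_pass.txt /tmp/doc_pass.txt
docker login --username=YOUR_USERNAME --password-stdin < /tmp/doc_pass.txt
shred -u /tmp/doc_pass.txt
docker pull YOUR_USERNAME/your-private-image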

cURL to call a REST API

So I want to call a REST API from Bamboo after a deployment has completed.
This API needs a username and password, but they can't be stored in Bamboo, as it seems they can be viewed in the bash history of the build agent.
I intended to use a script task and execute something like
curl -f -v -k --user "${bamboo.user}":"${bamboo.password}" -X POST https://bamboo.url/builds/rest/api/latest/queue/project_name"/
This would make the REST call, but the username and password are a problem.
I do have the option, however, of using a PEM file. It can be provided, so does anyone know whether it can be used in conjunction with cURL?
--OR--
One other thought: could I encrypt a password within a file in my source control, somehow decrypt it on the build agent, and then have curl read the password from that file instead of from the command line? How would this look in cURL?
Any ideas how this could be achieved?
Your command seems to have an extra quote at the end.
Using a pem file to authenticate with curl:
curl -E /path/to/user-cert.pem -X POST https://bamboo.url/builds/rest/api/latest/queue/project_name
The file should contain both the private key and the client certificate.
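For the encrypted-file idea from the question, one hedged option is to decrypt the file on the agent and hand the credentials to curl through a config file, so they never appear on the command line or in the shell history (the file names and the DECRYPT_KEY variable are made up for illustration):
PASS=$(openssl enc -aes-256-cbc -d -in bamboo_pass.enc -pass env:DECRYPT_KEY)
printf 'user = "USERNAME:%s"\n' "$PASS" > curl.cfg
curl -f -v -k -K curl.cfg -X POST https://bamboo.url/builds/rest/api/latest/queue/project_name/
rm -f curl.cfg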

How to automate Quay.io login in a shell script?

I've got a shell script that I use to configure my Ubuntu instance upon instantiation. One of the things I need to do is log in to my Quay.io account so I can pull Docker images from my private registry. Kinda like so:
Instance-Config.sh
#!/bin/bash
docker login quay.io -u 'myUserName' -p 'myPassword' -e 'me@mydomain.com'
docker run quay.io/myUserName/myContainerName
The above script works just fine when logging in to Docker Hub, but when I try to use it to log in to Quay.io it prompts for the various arguments (-u, -p, -e), when it should fill them in automatically from the arguments provided in the command.
How do I go about automating login for Quay.io?
I should note that I've already tried logging in, copying the contents of the ~/.dockercfg file, and then echoing the resulting string into a new .dockercfg file in the Instance-Init.sh script, but there must be a machine ID or something in the auth token that's produced and placed in the .dockercfg file, so a login generated on one machine cannot be used on a new instance (which is probably a good thing).
Doi. You need to put the host argument at the end, as illustrated in their docs:
#!/bin/bash
docker login -u 'myUserName' -p 'myPassword' -e 'me@mydomain.com' quay.io
docker run quay.io/myUserName/myContainerName
Hopefully that'll help someone else save some time.
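Note that newer Docker clients dropped the -e/--email flag and warn when -p is given on the command line, so on a current client a hedged equivalent of the above would be:
echo 'myPassword' | docker login -u 'myUserName' --password-stdin quay.io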

Create GitHub repo from a bash file, NOT be prompted for password

I need to be able to create GitHub repositories via bash scripts that run from a PHP page, so I need to be able to pass either the password or the API key in the curl command.
However, I cannot seem to find the API key; I believe it may have become redundant with v3 of the GitHub API.
I followed Is it possible to create a remote repo on GitHub from the CLI without opening browser? and it got me as far as being prompted for the password.
The bash file looks like this:
#! /bin/bash
a=$1
curl="-u 'USERNAME' -p 'PASSWORD' https://api.github.com/user/repos -d '{\"name\":\""$a"\"}'"
curl $curl
This does not work; it doesn't seem to like the -p parameter. I tried -u 'USERNAME:PASSWORD' and it did not like that either, and I cannot seem to find the answer in the GitHub docs. Ideally I would use the API key, as that would not leave my repo password exposed in my bash file, correct?
Many thanks
curl -u 'dmalikov:my_password' https://api.github.com/user/repos -d '{"name":"HI"}' works fine for me; now I have this HI repo.
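If you would rather not put the account password in the script, a token-based variant along these lines should also work with the GitHub v3 API (the personal access token value is a placeholder):
curl -H 'Authorization: token YOUR_PERSONAL_ACCESS_TOKEN' https://api.github.com/user/repos -d '{"name":"'"$a"'"}'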
