Passing a private key through Fly CLI - bash

The company I work for has recently started using Concourse CI for all our CI needs. At the moment one of my jobs consists of a task with a script that scp's and ssh's into our AWS EC2 instances and configures those servers. The issue I am having, however, is getting the private key needed to ssh into those instances. One way discussed here (https://concourse-ci.org/fly-set-pipeline.html) is to pass the key in through a variable. In my script, I take that variable, echo it to a new .pem file, and set the permissions to 600. When I echo just the variable and later cat the new .pem file, they look exactly the same as the original .pem file. The container I am trying to ssh from is the standard ubuntu Docker image.
When I try to use this file to scp and ssh I am confronted with the prompt about entering the passphrase. If I try to ssh with the original file I don't get this prompt at all. Is there something I am missing? I would greatly appreciate some insight into this issue.
pipeline.yml
jobs:
- name: edge-priceconfig-deploy
  plan:
  - aggregate:
    - get: ci-git
    - get: pricing-config
      trigger: true
  - task: full-price-deploy
    file: ci-git/ci/edge/edge-price-config-task.yml
    params:
      USER_AND_SERVER: {{edge_user_and_server}}
      DEPLOY_KEY_PAIR: {{deploy_key_pair}}
task.yml
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: ubuntu}
inputs:
- name: ci-git
- name: pricing-config
run:
  path: ./ci-git/ci/edge/edge-priceconfig-deploy.sh
task.sh
#!/bin/bash
touch DeployKeyPair.pem
echo $DEPLOY_KEY_PAIR
echo $DEPLOY_KEY_PAIR > DeployKeyPair.pem
cat DeployKeyPair.pem
apt-get update && apt-get -y install sudo
sudo apt-get -y install openssh-client
sudo chmod 400 ci-git/key/DeployKeyPair.pem
sudo chmod 600 DeployKeyPair.pem
mkdir company-price-config-edge
mv pricing-config/fsconfig/conf/com.company.api.v1.pricing/*.xlsx company-price-config-edge/
commandstr="sudo rm -f /etc/company/edge/fsconfig/*xlsx; \
ls -l /etc/company/edge/fsconfig; \
sudo mv /home/ec2-user/company-price-config-edge/*xlsx /etc/company/edge/fsconfig/; \
sudo rm -rf /home/ec2-user/company-price-config-edge;"
scp_link="$USER_AND_SERVER:/home/ec2-user/"
scp_link="$(echo $scp_link | tr -d ' ')"
echo $scp_link
sudo echo -ne '\n' | scp -r -oStrictHostKeyChecking=no -i DeployKeyPair.pem company-price-config-edge $scp_link
sudo ssh -oStrictHostKeyChecking=no -i DeployKeyPair.pem $USER_AND_SERVER $commandstr
credentials.yml
username: |
  username
password: |
  password
access_token: |
  token
ci_scripts_github: |
  ci-script-link
edge_user_and_server: |
  server.com
staging_user_and_server: |
  staging
training_user_and_server: |
  training
production_user_and_server: |
  production
deploy_key_pair:
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----

Had this same problem today and took me a bit to figure out what was going on.
You're specifying the private key as a multiline YAML property, but when it's echoed out to a file the linebreaks are stripped and replaced with spaces, so your key ends up as one large single-line file.
I had to use sed to replace the spaces with newlines to get it to work:
echo $DEPLOY_KEY_PAIR | sed -e 's/\(KEY-----\)\s/\1\n/g; s/\s\(-----END\)/\n\1/g' | sed -e '2s/\s\+/\n/g' > DeployKeyPair.pem
The first sed command uses two substitutions, both with the same purpose.
First substitution - grab the header text and break it onto its own line:
s/\(KEY-----\)\s/\1\n/g
\(KEY-----\) puts the last part of the header block in a capture group
\s matches whitespace
\1\n puts capture group #1 back in, then adds a newline
/g unnecessary global flag that I forgot to remove
The second substitution does the same, but grabs the whitespace before the footer text so the footer goes onto a new line.
The second sed command (the third substitution) does a global replacement of whitespace with newlines on line 2 of the stream. This had to be a separate command so that the first command had finished and the input was already a three-line stream:
'2s/\s\+/\n/g'
2s/ perform the replacement only on line 2
\s\+ matches any run of whitespace
\n replaces it with a newline
/g global match to grab every instance
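A quick way to confirm the reconstructed file is a usable key (a small sanity-check sketch, assuming openssh-client is already installed in the task container): ssh-keygen prints the matching public key only if the private key parses, and it prompts for a passphrase only if the key actually has one.
# Prints the public key if DeployKeyPair.pem is a well-formed, passphrase-free private key
ssh-keygen -y -f DeployKeyPair.pem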

Two things.
One: You need to declare that these particular environment variables are being used by your task
task.yml:
---
platform: linux
image_resource:
  type: docker-image
  source: {repository: ubuntu}
inputs:
- name: ci-git
- name: pricing-config
params:
  USER_AND_SERVER:
  DEPLOY_KEY_PAIR:
run:
  path: ./ci-git/ci/edge/edge-priceconfig-deploy.sh
Two: I think you need to add a | to your credentials.yml file.
credentials.yml:
deploy_key_pair: |
  -----BEGIN RSA PRIVATE KEY-----
  ...
  -----END RSA PRIVATE KEY-----
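Once the key is a literal block scalar, the newlines survive into the environment variable, so the task script only needs to quote the expansion when writing the file (a minimal sketch of that part of task.sh under that assumption; the sed workaround from the other answer then becomes unnecessary):
# Quoting "$DEPLOY_KEY_PAIR" prevents word splitting, which is what collapses
# the key onto one space-separated line in the unquoted `echo $DEPLOY_KEY_PAIR`.
printf '%s\n' "$DEPLOY_KEY_PAIR" > DeployKeyPair.pem
chmod 600 DeployKeyPair.pem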

You can also use a keystore integration, e.g. CredHub, with Concourse.
See https://docs.pivotal.io/p-concourse/credential-management.html
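With a credential manager configured, the pipeline references secrets with Concourse's double-paren syntax instead of values interpolated from a local vars file (a sketch, assuming the secrets are stored under the same names):
params:
  USER_AND_SERVER: ((edge_user_and_server))
  DEPLOY_KEY_PAIR: ((deploy_key_pair))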

Related

Automating password change inside a Docker container

I need to use a bash script:
Launch the container
Generate a password
Enter the container
Run the 'cd /' command
Change the password using htpasswd to the generated one
I tried it like this:
docker restart c1
a = date +%s | sha256sum | base64 | head -c 32 ; echo
docker exec -u 0 -it c1 bash 'echo cd /'
htpasswd user.passwd webdav a
And so:
docker restart c1
docker exec -u 0 -it c1 bash
cd /
a = date +%s | sha256sum | base64 | head -c 32 ; echo
htpasswd user.passwd webdav a
With the first option, I get:
bash: echo cd /: No such file or directory
With the second one, it enters the container and does nothing
I will be grateful for any help
I tried many variations of the script, which did not help me
You do not need Docker or debugging tools like docker exec just to generate an htpasswd file.
htpasswd is part of the Apache distribution, and you should be able to install it on your host system using your OS package manager. Since it just manipulates a credential file it doesn't need the actual server.
# On the host system, without using Docker at all
sudo apt-get update && sudo apt-get install -y apache2-utils
# Make sure to wrap the password-generating command in `$()`
a=$(date +%s | sha256sum | base64 | head -c 32)
# Make sure to use a variable reference `$a`; -c creates the file and -b takes the password from the command line
htpasswd -c -b user.passwd webdav "$a"
This gives you a user.passwd file on your local system. Now when you launch your container, you can bind-mount the file into the container:
docker run -d -p 80:80 ... \
-v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd" \
httpd
The container will be immediately ready to use. If you delete and recreate this container, you do not need to repeat the manual setup step. If you need to launch multiple copies of the container, they can all have the same credentials file without doing manual steps.
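If you would rather not install apache2-utils on the host at all, a throwaway container can generate the entry instead (a sketch, assuming the stock httpd image; -n prints the result to stdout instead of updating a file):
# Generate the password, then write a single-user htpasswd file via a disposable container
a=$(date +%s | sha256sum | base64 | head -c 32)
docker run --rm httpd:2.4 htpasswd -nbB webdav "$a" > user.passwd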

sed fails with error "\1 not defined in the RE" when running in Gitlab CI

I am trying to update the contents of a file from a variable with sed during a GitLab CI job. The variable comes from the artifacts of the previous stage, version. Simplified, my job looks something like this:
build-android-dev:
  stage: build
  dependencies:
    - version
  only:
    - mybranch
  before_script:
    - PUBSPEC_VERSION="`cat app-pubspec-version`"
    - echo "Pubspec version - $PUBSPEC_VERSION"
  script:
    - >
      sed -i -E "s/^(version: )(.+)$/\1${PUBSPEC_VERSION}/g" pubspec.yaml
    - cat pubspec.yaml | grep version
  interruptible: true
  tags:
    - macmini-flutter
Unfortunately, the job fails with the following error message:
$ sed -i -E "s/^(version: )(.+)$/\1${PUBSPEC_VERSION}/g" pubspec.yaml
sed: 1: "s/^(version: )(.+)$/\14 ...": \1 not defined in the RE
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit status 1
PUBSPEC_VERSION coming from artifacts is the following:
$ echo "Pubspec version - $PUBSPEC_VERSION"
Pubspec version - 4.0.0+2
I am able to execute the command on my local Ubuntu (Linux) machine without any issues:
$ export PUBSPEC_VERSION=4.0.0+2
$ sed -i -E "s/^(version: )(.+)$/\1${PUBSPEC_VERSION}/g" pubspec.yaml
$ cat pubspec.yaml | grep version
version: 4.0.0+2
The remote machine where the GitLab Runner runs is macOS. Not sure whether it matters.
As you can see, I also use folding style in my CI configuration, as proposed here, in order to avoid improper colon interpretation.
I googled for solutions, but it seems I don't need to escape the group parentheses in my regular expression (though I tried that too) because I'm using an extended regular expression.
So I'm stuck on it...
P.S. I don't have access to the shell of remote MacOS.
The key detail is that the remote machine is macOS: BSD sed's -i takes a suffix argument, so your -E is being consumed as the backup suffix to create. You would want:
- sed -i '' -E 's/...'
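Applied to the job in the question, the script line would become something like this (a sketch for the BSD sed on that macOS runner; GNU sed on Linux does not expect the extra '' argument):
script:
  - >
    sed -i '' -E "s/^(version: )(.+)$/\1${PUBSPEC_VERSION}/g" pubspec.yaml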

xargs does not seem to separate parameters

I am trying to use xargs to set secrets in github using the gh CLI.
Given I have an .env file with the following entries
SECRET1=djfjgdfkjg
SECRET2=jbnfdgjn
SECRET3=A line of text
And the sed command sed -r 's/^([A-Za-z0-9_]*)=(.*)$/\1 -b "\2"/g' ./.env produces the following output:
SECRET1 -b "djfjgdfkjg"
SECRET2 -b "jbnfdgjn"
SECRET3 -b "A line of text"
I am unsure as to why the command:
sed -r 's/^([A-Za-z0-9_]*)=(.*)$/\1 -b "\2"/g' test.env | xargs -I {} gh secret set {}
fails for each secret with the message "secret name can only contain letters, numbers, and _".
Manually running gh secret set SECRET1 -b "djfjgdfkjg" works without an error.
I'm guessing that the issue is that the first argument (the secret name) is being passed the value SECRET1 -b "djfjgdfkjg" rather than just SECRET1, but I'm unsure how I can fix this.
After doing a bit more digging I discovered that the problem is the use of -I: with -I {}, xargs hands each whole line to gh as a single argument, so the entire string is treated as the secret name. Dropping -I and using -n instead:
sed -rn 's/^[[:space:]]*([[:alpha:]][[:alnum:]_]*)=(.*)$/\1 -b "\2"/p' .env | xargs -n 3 gh secret set
resolves the problem: -n 3 lets xargs split each line into three arguments, and its default quote handling keeps the double-quoted value together as one of them.
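An alternative that sidesteps xargs quoting entirely is to read the .env file in a shell loop and call gh once per line (a sketch, assuming simple NAME=value lines like the ones shown):
# Split each line on the first '=' and set the secret directly
while IFS='=' read -r name value; do
  [ -n "$name" ] && gh secret set "$name" -b "$value"
done < .env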

Multi-line bash script in yaml for EC2 Image Builder

I'm trying to create a custom component document. While I've tested the YAML file using various YAML linters, EC2 Image Builder is complaining with the error below:
Failed to create component. Fix the error(s) and try again:
The value supplied for parameter 'data' is not valid. Parsing step 'ConfigureMySQL' in phase 'build' failed. Error: line 4: cannot unmarshal map into string.
I'm unable to figure out what is wrong with my YAML file:
name: MyJavaAppTestDocument
description: This is JavaApp Document
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallSoftware
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum update -y
            - sudo yum install -y java-1.8.0
            - sudo amazon-linux-extras install -y tomcat8.5
            - sudo yum install -y https://dev.mysql.com/get/mysql57-community-release-el7-11.noarch.rpm
            - sudo yum install -y mysql-community-server
      - name: ConfigureTomcat
        action: ExecuteBash
        inputs:
          commands:
            - sudo sed -i 's/<\/tomcat-users>/\n<role rolename="manager-gui"\/>\n <role rolename="manager-script"\/>\n <role rolename="admin-gui"\/>\n <user username="admin" password="admin" roles="manager-gui,manager-script,admin-gui"\/>\n<\/tomcat-users>/' /etc/tomcat/tomcat-users.xml
            - sudo systemctl start tomcat
            - sudo systemctl enable tomcat
      - name: ConfigureMySQL
        action: ExecuteBash
        inputs:
          commands:
            - sudo systemctl start mysqld
            - sudo systemctl enable mysqld
            - mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost: //')
            - mysql -u root -p$mysqlpass --connect-expired-password -h localhost -e "ALTER USER 'root'@'localhost' IDENTIFIED BY 'whyDoTh1s#2020'"
            - |
              sudo cat <<EoF > /tmp/mysql-create-user.sql
              CREATE USER 'admin'@'%' IDENTIFIED BY 'whyDoTh1s#2020';
              GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' WITH GRANT OPTION;
              EoF
            - sudo mysql -u root -pwhyDoTh1s#2020 -h localhost < /tmp/mysql-create-user.sql
I'd appreciate it if someone could help me find the error. The objective is to build an AMI with pre-configured software and settings.
You're getting that error because of the : in this line:
- mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost: //')
The YAML parser is interpreting that line as creating a map instead of creating a string entry in an existing map. A workaround I was able to use in my own YAML was to surround the : with single quotes, so the problem line would become
- mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost':' //')
According to my own use and this online YAML parser I tested with that solution applied, that should do the trick.
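Another common way to keep the colon inside a single string is to double-quote the whole scalar (just a sketch of the YAML form; inner double quotes would then need escaping, which is why the block-scalar style shown in the next answer is often easier):
- "mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost: //')"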
Although @micah-l-c's answer can help in the OP's case,
I would suggest an alternative approach that also works and that I consider a more robust solution to this problem.
The erroneous line here is
- mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost: //')
This can be rewritten as
- |-
  mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost: //')
OR
- >-
  mysqlpass=$(sudo grep 'temporary password' /var/log/mysqld.log | sed 's/.*root@localhost: //')
Now the : is handled fine with the above approach.
I faced a similar issue and I will give an example where @micah-l-c's solution didn't work while the above approach worked fine.
Below is the line I was adding in the EC2 Image Builder component:
- echo "user1 ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
I put single quotes around the : and it made /etc/sudoers malformed, so it didn't work.
I replaced the above line like this:
- |-
  echo "user1 ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
References:
How to escape indicator characters (i.e. : or - ) in YAML

How to add multiple keys for elastic beanstalk instance?

There is a very good question on how to SSH into an Elastic Beanstalk instance, but one thing I noticed is that, through this method, it is only possible to add one SSH key.
How can I add multiple SSH keys to an instance? Is there a way to automatically add multiple keys to new instances?
Another way to do it is to create a file named .ebextensions/authorized_keys.config:
files:
  /home/ec2-user/.ssh/authorized_keys:
    mode: "000400"
    owner: ec2-user
    group: ec2-user
    content: |
      ssh-rsa AAAB3N...QcGskx keyname
      ssh-rsa BBRdt5...LguTtp another-key
The name of the file, authorized_keys.config, is arbitrary.
Combining rhunwicks's and rch850's answers, here's a clean way to add additional SSH keys, while preserving the one set through the AWS console:
files:
  /home/ec2-user/.ssh/extra_authorized_keys:
    mode: "000400"
    owner: ec2-user
    group: ec2-user
    content: |
      ssh-rsa AAAB3N...QcGskx keyname
      ssh-rsa BBRdt5...LguTtp another-key

commands:
  01_append_keys:
    cwd: /home/ec2-user/.ssh/
    command: sort -u extra_authorized_keys authorized_keys -o authorized_keys
  99_rm_extra_keys:
    cwd: /home/ec2-user/.ssh/
    command: rm extra_authorized_keys
Note that eb ssh will work only if the private key file has the same name as the private key defined in the AWS console.
Following on from Jim Flanagan's answer, you could get the keys added to every instance by creating .ebextensions/app.config in your application source directory with contents:
commands:
  copy_ssh_key_userA:
    command: echo "ssh-rsa AAAB3N...QcGskx userA" >> /home/ec2-user/.ssh/authorized_keys
  copy_ssh_key_userB:
    command: echo "ssh-rsa BBRdt5...LguTtp userB" >> /home/ec2-user/.ssh/authorized_keys
No, Elastic Beanstalk only supports a single key pair. You can manually add SSH keys to the authorized_keys file, but these will not be known to the Elastic Beanstalk tools.
One way you could accomplish this is to create a user data script which appends the public keys of the additional key-pairs you want to use to ~ec2-user/.ssh/authorized_keys, and launch the instance with that user data, for example:
#!
echo ssh-rsa AAAB3N...QcGskx keyname >> ~ec2-user/.ssh/authorized_keys
echo ssh-rsa BBRdt5...LguTtp another-key >> ~ec2-user/.ssh/authorized_keys
The most dynamic way to add multiple SSH keys to Elastic Beanstalk EC2 instances
Step 1
Create a group in IAM. Call it something like beanstalk-access. Add the users who need SSH access to that group in IAM. Also add their public ssh key(s) to their IAM Security credentials.
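For reference, the IAM side can be scripted as well (a sketch with a hypothetical user name alice; upload-ssh-public-key is the call that attaches a public key to a user's Security credentials):
# Create the group, add a user to it, and attach that user's public SSH key
aws iam create-group --group-name beanstalk-access
aws iam add-user-to-group --group-name beanstalk-access --user-name alice
aws iam upload-ssh-public-key --user-name alice --ssh-public-key-body "$(cat ~/.ssh/id_rsa.pub)"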
Step 2
The deployment script below will be parsing JSON data from AWS CLI using a handy Linux tool called jq (jq official tutorial), so we need to add it in .ebextensions:
packages:
  yum:
    jq: []
Step 3
Add the following BASH deployment script to .ebextensions:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/980_beanstalk_ssh.sh":
    mode: "000755"
    owner: ec2-user
    group: ec2-user
    content: |
      #!/bin/bash
      rm -f /home/ec2-user/.ssh/authorized_keys
      users=$(aws iam get-group --group-name beanstalk-access | jq '.["Users"] | [.[].UserName]')
      readarray -t users_array < <(jq -r '.[]' <<<"$users")
      declare -p users_array
      for i in "${users_array[@]}"
      do
      user_keys=$(aws iam list-ssh-public-keys --user-name $i)
      keys=$(echo $user_keys | jq '.["SSHPublicKeys"] | [.[].SSHPublicKeyId]')
      readarray -t keys_array < <(jq -r '.[]' <<<"$keys")
      declare -p keys_array
      for j in "${keys_array[@]}"
      do
      ssh_public_key=$(aws iam get-ssh-public-key --encoding SSH --user-name $i --ssh-public-key-id $j | jq '.["SSHPublicKey"] .SSHPublicKeyBody' | tr -d \")
      echo $ssh_public_key >> /home/ec2-user/.ssh/authorized_keys
      done
      done
      chmod 600 /home/ec2-user/.ssh/authorized_keys
      chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
Unfortunately, because this is YAML, you can't indent the code to make it more easily readable. But let's break down what's happening:
(In the code snippet directly below) We're removing the default SSH key file to give full control of that list to this deployment script.
rm -f /home/ec2-user/.ssh/authorized_keys
(In the code snippet directly below) Using AWS CLI, we're getting the list of users in the beanstalk-access group, and then we're piping that JSON into jq to extract just the list of user names into $users.
users=$(aws iam get-group --group-name beanstalk-access | jq '.["Users"] | [.[].UserName]')
(In the code snippet directly below) Here, we're converting that JSON $users list into a BASH array and calling it $users_array.
readarray -t users_array < <(jq -r '.[]' <<<"$users")
declare -p users_array
(In the code snippet directly below) We begin looping through the array of users.
for i in "${users_array[@]}"
do
(In the code snippet directly below) This can probably be done in one line, but it's grabbing the list of SSH keys associated to each user in the beanstalk-access group. It has not yet turned it into a BASH array, it's still a JSON list.
user_keys=$(aws iam list-ssh-public-keys --user-name $i)
keys=$(echo $user_keys | jq '.["SSHPublicKeys"] | [.[].SSHPublicKeyId]')
(In the code snippet directly below) Now it's converting that JSON list of each users' SSH keys into a BASH array.
readarray -t keys_array < <(jq -r '.[]' <<<"$keys")
declare -p keys_array
(In the code snippet directly below) Now we loop through each user's array of SSH keys.
for j in "${keys_array[@]}"
do
(In the code snippet directly below) We're adding each SSH key for each user to the authorized_keys file.
ssh_public_key=$(aws iam get-ssh-public-key --encoding SSH --user-name $i --ssh-public-key-id $j | jq '.["SSHPublicKey"] .SSHPublicKeyBody' | tr -d \")
echo $ssh_public_key >> /home/ec2-user/.ssh/authorized_keys
(In the code snippet directly below) Close out both the $keys_array loop and the $users_array loop.
done
done
(In the code snippet directly below) Give the authorized_keys file the same permissions it originally had.
chmod 600 /home/ec2-user/.ssh/authorized_keys
chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
Step 4
If your Elastic Beanstalk EC2 instance is in a public subnet, you can just ssh into it using:
ssh ec2-user@ip-address -i /path/to/private/key
If your Elastic Beanstalk EC2 instance is in a private subnet (as it should be for cloud security best practices), then you will need to have a "bastion server" EC2 instance which will act as the gateway for tunneling all SSH access to EC2 instances. Look up ssh agent forwarding or ssh proxy commands to get an idea of how to accomplish SSH tunneling.
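For example, with a reasonably recent OpenSSH client a single ProxyJump hop through the bastion looks like this (a sketch with placeholder addresses, assuming the key for the bastion itself is available via ssh-agent or ~/.ssh/config):
# -J tunnels the connection through the bastion before reaching the private instance
ssh -J ec2-user@bastion-public-ip -i /path/to/private/key ec2-user@instance-private-ip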
Adding new users
All you do is add them to your IAM beanstalk-access group and run a deployment, and that script will add them to your Elastic Beanstalk instances.
Instead of running echo and storing your keys in Git, you can upload your public keys to the IAM users on AWS and then do:
commands:
  copy_ssh_key_userA:
    command: rm -f /home/ec2-user/.ssh/authorized_keys;aws iam list-users --query "Users[].[UserName]" --output text | while read User; do aws iam list-ssh-public-keys --user-name "$User" --query "SSHPublicKeys[?Status == 'Active'].[SSHPublicKeyId]" --output text | while read KeyId; do aws iam get-ssh-public-key --user-name "$User" --ssh-public-key-id "$KeyId" --encoding SSH --query "SSHPublicKey.SSHPublicKeyBody" --output text >> /home/ec2-user/.ssh/authorized_keys; done; done;
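For readability, the same one-liner unrolled into a multi-line script (a functionally equivalent sketch, not a tested drop-in replacement for the command above):
rm -f /home/ec2-user/.ssh/authorized_keys
# For every IAM user, append each of their active public SSH keys to authorized_keys
aws iam list-users --query "Users[].[UserName]" --output text | while read User; do
  aws iam list-ssh-public-keys --user-name "$User" --query "SSHPublicKeys[?Status == 'Active'].[SSHPublicKeyId]" --output text | while read KeyId; do
    aws iam get-ssh-public-key --user-name "$User" --ssh-public-key-id "$KeyId" --encoding SSH --query "SSHPublicKey.SSHPublicKeyBody" --output text >> /home/ec2-user/.ssh/authorized_keys
  done
done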
