I'm starting an EC2 instance, and from the user data I need to clone a repo containing my Ansible playbooks, but the clone fails. See details below. Can anyone help me figure this out? When I SSH to the instance after bootstrap, the clone works, but it does not work while bootstrapping.
#!/usr/bin/env bash
set -x
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
cd /home/ec2-user
mkdir -p .ssh
ssh-keygen -b 2048 -t rsa -f /home/ec2-user/.ssh/codecommit -q -N ""
KEY_ID=`aws iam upload-ssh-public-key --user-name ${user_id} --ssh-public-key-body "$(cat /home/ec2-user/.ssh/codecommit.pub)" \
--query 'SSHPublicKey.SSHPublicKeyId' --output text`
echo -e "
Host git-codecommit.*.amazonaws.com
User $KEY_ID
IdentityFile /home/ec2-user/.ssh/codecommit
" >> /home/ec2-user/.ssh/config
ssh-keyscan -t rsa git-codecommit.us-east-2.amazonaws.com >> /home/ec2-user/.ssh/known_hosts
sudo chown -R ec2-user:ec2-user /home/ec2-user/.ssh
sudo chmod 700 /home/ec2-user/.ssh
sudo chmod 644 /home/ec2-user/.ssh/*
sudo chmod 600 /home/ec2-user/.ssh/codecommit*
eval "$(ssh-agent -s)"
export GIT_SSH_COMMAND="ssh -v -F /home/ec2-user/.ssh/config -o StrictHostKeyChecking=no"
export GIT_TRACE_PACKET=true
export GIT_TRACE=2
export GIT_CURL_VERBOSE=1
sleep 60s
git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible
Adding a sleep of 60 seconds before the git clone command did the trick. It seems like SSH Key uploads take a bit of time before becoming active.
sleep 60s
git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible
OR
for i in {1..30}; do
  git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible && break
  echo "keep trying ..."
  sleep 2s
done
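A variant I'd sketch (assuming the same repo URL and the GIT_SSH_COMMAND exported earlier in the user data): instead of a blind sleep, probe the repository with git ls-remote until the freshly uploaded key is accepted, then clone once.
REPO=ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible
for i in {1..30}; do
  # git ls-remote is a cheap connectivity check; it succeeds once IAM has activated the key
  if git ls-remote "$REPO" >/dev/null 2>&1; then
    git clone "$REPO"
    break
  fi
  echo "key not active yet, retrying ($i) ..."
  sleep 2
done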
I have the CI/CD script below (I removed the test steps, which work fine).
Our repo has two branches, ready_4_release and master (prod). Any feature branch gets merged into ready_4_release, and at the end of the week we push the changes to master.
Any feature branch that gets merged into ready_4_release gets deployed to the remote Airflow servers.
Problem:
When a new MR gets merged into the ready_4_release branch, the changes are not deployed to the remote servers, even though the changes from the feature branch do get merged into ready_4_release. I am not able to figure out why.
When we rerun the deploy step, the changes do appear on the remote server (Airflow).
Or, when a new MR is submitted, the previous changes get deployed to the remote servers.
Below is how the branches are set up in GitLab.
This is how the merge request settings are configured on the repo.
stages:
  - test
  - deploy

.ssh-connection: &ssh-connection
  - 'which git || ( apt-get update -y && apt-get install git -y )'
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  - eval $(ssh-agent -s)
  - echo "$LOTBOT_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  - ssh-keyscan ip-10-0-1-24.ec2.internal >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

deploy-dev:
  stage: deploy
  tags:
    - prd-runner
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/ubuntu
  before_script:
    - echo "Deploying repo dev-data-analytics to Airkube EFS"
    - *ssh-connection
  script:
    - ssh lotbot@10.0.1.24 "sudo rm -r /mnt/airflow_dags/airkube/external_repos/dev-data-analytics/"
    - echo "${PWD}"
    - cd ..
    - mkdir dev-lotlinx-data-analytics && cp -R "${PWD}"/data-analytics/* "${PWD}"/dev-data-analytics
    - cd dev-data-analytics
    # - echo "Git commit id"
    # - git show -s --format=%h
    - echo "${PWD}"
    - ssh lotbot@10.0.1.24 "sudo mkdir -p /mnt/airflow_dags/airkube/external_repos/dev-data-analytics"
    - ssh lotbot@10.0.1.24 "sudo chown -R lotbot:sudo /mnt/airflow_dags/airkube/external_repos/dev-lotlinx-data-analytics/ && sudo chmod -R g+w /mnt/airflow_dags/airkube/external_repos/dev-data-analytics/"
    - rsync -avu --delete --exclude "*__pycache__" "${PWD}"/ lotbot@10.0.1.24:/mnt/airflow_dags/airkube/external_repos/lotlinx-data-analytics;
    - ssh lotbot@10.0.1.24 "sudo sleep 3s"
    - ssh lotbot@10.0.1.24 "sudo chown -R lotbot:sudo /mnt/airflow_dags/airkube/external_repos/dev-lotlinx-data-analytics/ && sudo chmod -R g+w /mnt/airflow_dags/airkube/external_repos/lotlinx-data-analytics/"
  rules:
    - if: '$CI_COMMIT_REF_NAME == "ready_for_release" && $CI_PIPELINE_SOURCE == "push"'
When entering my container, I want to log in as user ryan in the directory /home/ryan/cas, with the command eval "$(ssh-agent -c)" already run. My Dockerfile follows:
FROM ubuntu:latest
ENV TZ=Australia/Sydney
RUN set -ex; \
    # NOTE(Ryan): Prevent docker build hanging on timezone confirmation
    ln -sf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone; \
    apt update; \
    apt install -y --no-install-recommends \
      sudo ca-certificates git gnupg openssh-client vim; \
    useradd -m ryan -g sudo; \
    printf "ryan ALL=(ALL:ALL) NOPASSWD:ALL" | sudo EDITOR="tee -a" visudo; \
    # NOTE(Ryan): Prevent sudo usage prompt appearing on startup
    touch /home/ryan/.sudo_as_admin_successful; \
    git clone https://github.com/ryan-mcclue/cas.git /home/ryan/cas; \
    chmod 777 -R /home/ryan/cas;
ENTRYPOINT ["/bin/bash", "-l", "-c"]
USER ryan
WORKDIR /home/ryan/cas
CMD eval "$(ssh-agent -s)"
However, when running ssh-add I still get Could not open a connection to your authentication agent, which indicates that the ssh-agent is not running. Manually typing eval "$(ssh-agent -c)" works.
I think you want to remove your ENTRYPOINT statement, and then you want:
USER ryan
WORKDIR /home/ryan/cas
CMD ["ssh-agent", "bash", "-l"]
This will get you a login shell, run under the control of ssh-agent (so you'll have the necessary SSH_* environment variables and an active socket available).
To understand what's happening with your container, try running from the command line:
bash -l -c 'eval $(ssh-agent -s)'
What happens? The shell exits immediately, because running ssh-agent -s causes the agent to background itself, which looks pretty much the same as "exiting". Since you passed the -c flag, and the command given to -c has exited, the parent bash shell exits as well.
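To sanity-check the suggested CMD, build and run the image interactively; the image tag below is just a placeholder:
docker build -t cas-dev .   # placeholder tag
docker run -it cas-dev
# Inside the container the agent environment should be present:
echo "$SSH_AUTH_SOCK"       # non-empty socket path
ssh-add -l                  # "The agent has no identities." rather than a connection error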
I'm trying to deploy my Flask application to an AWS EC2 instance using the GitLab CI runner.
.gitlab-ci.yml
stages:
  - test
  - deploy

test_app:
  image: python:latest
  stage: test
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv env
    - source env/bin/activate
    - pip install flask
  script:
    - cd flask-ci-cd
    - python test.py

prod-deploy:
  stage: deploy
  only:
    - master # Run this job only on changes to the master branch
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$RSA_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy-prod.sh
  environment:
    name: deploy
.gitlab-deploy-prod.sh
#!/bin/bash
# Get servers list
set -f
# access server terminal
shell="ssh -o StrictHostKeyChecking=no ${SERVER_URL}"
git_token=$DEPLOY_TOKEN
echo "Deploy project on server ${SERVER_URL}"
if [ ${shell} -d "/flask-ci-cd" ] # check if directory exists
then
eval "${shell} cd flask-ci-cd && git clone https://sbhusal123:${git_token}#gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd"
else
eval "${shell} git pull https://sbhusal123:${git_token}#gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd && cd flask-ci-cd"
fi
Error: .gitlab-deploy-prod.sh: line 7: -o: command not found
How can I check if the directory exists?
What I've tried:
#!/bin/bash
# Get servers list
set -f
# access server terminal
shell="ssh -o StrictHostKeyChecking=no ${SERVER_URL}"
git_token=$DEPLOY_TOKEN
eval "${shell}" # i thought gitlab would provide me with shell access
echo "Deploy project on server ${SERVER_URL}"
if [-d "/flask-ci-cd" ] # check if directory exists
then
eval "cd flask-ci-cd && git clone https://sbhusal123:${git_token}#gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd"
else
eval "git pull https://sbhusal123:${git_token}#gitlab.com/sbhusal123/flask-ci-cd.git master && cd flask-ci-cd && cd flask-ci-cd"
fi
I've tried logging into the SSH shell before executing the scripts inside the if/else, but it doesn't work the way I intended.
Your script has some errors.
Do not use eval. No, eval does not work that way; eval is evil.
When storing a command in a variable, do not use a plain string variable. Use a bash array instead to preserve the individual words.
Commands passed via ssh are escaped twice. I would advise using here documents instead; they make it simpler to get the quoting right. Note the difference in expansion depending on whether the here-document delimiter is quoted or not.
"i thought gitlab would provide me with shell access": no, it doesn't work that way. Without an open standard input, the remote shell will just terminate, because it reads EOF from its input.
Instead of making many remote connections, transfer the execution to the remote side once and do all the work there.
Take your time and research how quoting and word splitting work in the shell.
git_token=$DEPLOY_TOKEN: no, variables set locally are not exported to the remote shell. Either pass them manually or expand them before calling the remote side. (You could also use ssh -o SendEnv=git_token and configure the remote sshd with AcceptEnv git_token, I think; I have never tried it.)
Read the documentation for the utilities you use.
No, git clone doesn't take a branch name after the URL. You can specify the branch with the --branch or -b option; after the URL it takes a directory name. See git clone --help. The same applies to git pull.
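For example, to clone a specific branch into a target directory (using the repository URL from the question):
# the branch goes in -b/--branch; the positional argument after the URL is the directory
git clone -b master "https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git" flask-ci-cd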
How can I check if the directory exists?
Use bash arrays to store the command. Check if the directory exists just by executing the test command on the remote side.
shell=(ssh -o StrictHostKeyChecking=no "${SERVER_URL}")
if "${shell[#]}" [ -d "/flask-ci-cd" ]; then
...
In case of a directory name with spaces, I would go with:
if "${shell[#]}" sh <<'EOF'
[ -d "/directory with spaces" ]
EOF
then
Pass -x to sh (or add set -x inside the here document) to see what's happening on the remote side as well.
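For example, the same remote check with tracing enabled (reusing the shell array from above):
# sh -x makes the remote shell print each command before executing it
"${shell[@]}" sh -x <<'EOF'
[ -d "/flask-ci-cd" ] && echo "directory exists" || echo "directory missing"
EOF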
For your script, rather move the execution to the remote side; there is little point in making three separate connections. I would just do:
echo "Deploy project on server ${SERVER_URL}"
ssh -o StrictHostKeyChecking=no "${SERVER_URL}" bash <<EOF
if [ ! -d /flask-ci-cd ]; then
  # Note: git_token is expanded on host side
  git clone https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git /flask-ci-cd
fi
cd /flask-ci-cd
git pull
EOF
But instead of trying to get the quoting right in every case, use declare -p and declare -f to transfer properly quoted data to the remote side. That way you do not need to care about proper quoting; it will work naturally:
echo "Deploy project on server ${SERVER_URL}"
work() {
  if [ ! -d /flask-ci-cd ]; then
    # Note: git_token is expanded on host side
    git clone https://sbhusal123:"${git_token}"@gitlab.com/sbhusal123/flask-ci-cd.git /flask-ci-cd
  fi
  cd /flask-ci-cd
  git pull
}

ssh -o StrictHostKeyChecking=no "${SERVER_URL}" bash <<EOF
$(declare -p git_token) # transfer variables you need
$(declare -f work)      # transfer function you need
work                    # call the function.
EOF
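To see why this works, it helps to look at what declare -p actually emits; the token value below is just a placeholder:
git_token="abc123"   # placeholder value for illustration
declare -p git_token
# prints: declare -- git_token="abc123"
# declare -f work similarly prints the full, correctly quoted body of work(),
# so the here document delivers ready-to-run shell code to the remote bash.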
Updated answer for future readers.
.gitlab-ci.yml
stages:
  - test
  - deploy

test_app:
  image: python:latest
  stage: test
  before_script:
    - python -V
    - pip install virtualenv
    - virtualenv env
    - source env/bin/activate
    - pip install flask
  script:
    - cd flask-ci-cd
    - python test.py

prod-deploy:
  stage: deploy
  only:
    - master
  before_script:
    - mkdir -p ~/.ssh
    - echo -e "$RSA_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy-prod.sh
  environment:
    name: deploy
.gitlab-deploy-prod.sh
#!/bin/bash
# Get servers list
set -f
shell=(ssh -o StrictHostKeyChecking=no "${SERVER_URL}")
git_token=$DEPLOY_TOKEN
echo "Deploy project on server ${SERVER_URL}"
ssh -o StrictHostKeyChecking=no "${SERVER_URL}" bash <<EOF
if [ ! -d flask-ci-cd ]; then
  echo "\n Cloning into remote repo..."
  git clone https://sbhusal123:${git_token}@gitlab.com/sbhusal123/flask-ci-cd.git
  # Create and activate virtualenv
  echo "\n Creating virtual env"
  python3 -m venv env
else
  echo "Pulling remote repo origin..."
  cd flask-ci-cd
  git pull
  cd ..
fi

# Activate virtual env
echo "\n Activating virtual env..."
source env/bin/activate

# Install packages
cd flask-ci-cd/
echo "\n Installing dependencies..."
pip install -r requirements.txt
EOF
There is a test command which is explicit about checking files and directories:
test -d "/flask-ci-cd" && eval $then_commands || eval $else_commands
Depending on the AWS instance, I'd expect test to be available. I'd recommend putting the commands in variables (e.g. eval $then_commands).
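A sketch of running that check over ssh so the remote exit status drives the branch (using the same SERVER_URL variable as the rest of this thread):
# test runs on the remote host; its exit status comes back through ssh
if ssh -o StrictHostKeyChecking=no "${SERVER_URL}" test -d /flask-ci-cd; then
  echo "repository directory exists on the remote host"
else
  echo "repository directory is missing on the remote host"
fi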
I followed the GitLab Docs to enable my project's CI to clone other private dependencies. Once it was working, I extracted from .gitlab-ci.yml:
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- ssh-add <(echo "$SSH_PRIVATE_KEY")
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
into a separate shell script setup.sh as follows:
which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
eval $(ssh-agent -s)
ssh-add <(echo "$SSH_PRIVATE_KEY")
mkdir -p ~/.ssh
[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
leaving only:
before_script:
- chmod 700 ./setup.sh
- ./setup.sh
I then began getting:
Cloning into '/root/Repositories/DependentProject'...
Warning: Permanently added 'gitlab.com,52.167.219.168' (ECDSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
How do I replicate the original behavior in the extracted script?
Because the script starts ssh-agent and runs ssh-add, execute it with source or . so that it runs within the same shell. In your case it would be:
before_script:
- chmod 700 ./setup.sh
- . ./setup.sh
or
before_script:
- chmod 700 ./setup.sh
- source ./setup.sh
For a better explanation as to why this needs to run in the same shell as the rest, take a look at this answer to a related question here.
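A quick way to see the difference, assuming the setup.sh from above:
./setup.sh      # runs in a child shell; SSH_AUTH_SOCK and SSH_AGENT_PID vanish with it
ssh-add -l      # fails: "Could not open a connection to your authentication agent."
. ./setup.sh    # runs in the current shell; the agent variables stay set
ssh-add -l      # now talks to the agent started by setup.sh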
I'm writing a script whose purpose is to connect to a number of servers and create an account. The core is:
ssh user@ip
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
I have established a public/private key relationship between the servers so I can ssh without being prompted for a password. However, when I run the script, it does the ssh but then doesn't perform the next commands on the target machine. Instead, when I manually exit from the target server, I see that those commands were executed (or rather, attempted) on the local machine.
This assumes there is no password prompt when running either the ssh or the sudo command:
ssh user@ip bash -c "'
sudo su -
useradd -m -p 123 $1
if [ $? -eq 0 ]; then
echo "$1 successfully created on ip."
fi
chage -d 0 $1
chown -R $1 /home/$1
exit #exit root
exit #exit the server
'"
If you are planning to sudo, why don't you just ssh as root (root@ip)? Just do:
ssh root@ip 'command1; command2; command3'
In your case, if you want to be sure each command is successful before proceeding:
ssh root@ip 'USER=someUser; useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'
EDIT:
If root access is not allowed, I would do the following:
Create the script with the commands you want to execute on the remote machine, for instance script.sh:
#!/bin/bash
USER=someUser
useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER
Copy the script to the remote machine:
scp script.sh user@ip:/destination/dir
Invoke it remotely:
ssh user@ip 'sudo /destination/dir/script.sh'
EDIT2:
Another option, without creating any files:
ssh user#ip "sudo bash -c 'USER=someUser && useradd -m -p 123 $USER && chage -d 0 $USER && chown -R $USER /home/$USER'"
It won't work this way. You should do it like this:
ssh user@ip 'yourcommands ; listed ; etc.' or
copy the script you want to execute to the servers via scp /your/scriptname user@ip:/tmp/, then execute it with ssh user@ip 'sh /tmp/yourscriptname'
But note that you are starting another shell when you invoke sudo.
Now you have (at least) two options:
ssh user@ip 'sudo -s -- "yourcommands ; listed ; etc."' or
copy the part after the sudo to a different script, then:
ssh user@ip 'sudo -s -- "sh differentscript"'