GitLab CI/CD: how to solve "connection refused" / "no matching host key type found" - continuous-integration

GitLab CI/CD can't connect to my remote VPS.
I took https://gitlab.com/gitlab-examples/ssh-private-key as an example for my .gitlab-ci.yml file, with these contents:
image: ubuntu

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_KEY_VU2NW" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan (domain name here) >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

Test SSH:
  script:
    - ssh root@(IP address here)
The runner responds with
the connection is refused
The server auth log says
sshd[2222]: Unable to negotiate with XXXXX port 53068: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
sshd[2220]: Unable to negotiate with XXXXX port 53068: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Is there any way to solve this? I already tried connecting to another VPS, also without luck.
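When diagnosing this class of error, it can help to compare the host-key algorithms your OpenSSH client supports with what the server actually serves. A minimal sketch, assuming a reasonably recent OpenSSH client (the hostname is a placeholder):

```shell
# List every host-key algorithm the local OpenSSH client knows about:
ssh -Q key

# Ask the server which host keys it actually serves, per key type
# (network call, so it is left commented out here):
# ssh-keyscan -t rsa,ecdsa,ed25519 example.com
```

If the two lists share no algorithm, the negotiation fails exactly as in the auth log above.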

Finally got it to work, with these contents in the .gitlab-ci.yml file:
image: ubuntu

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - mkdir -p /root/.ssh
  - chmod 700 /root/.ssh
  - echo "$SSH_KEY_GITLAB" >> /root/.ssh/id_rsa
  - ssh-keyscan DOMAINNAME >> /root/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - chmod 400 ~/.ssh/id_rsa

Test SSH:
  script:
    - ssh root@DOMAINNAME
Where $SSH_KEY_GITLAB is set in GitLab's Settings > CI/CD section and is a private key, generated by PuTTY and converted in PuTTY to an OpenSSH-format key.
The public half of this key must be in the target host's ~/.ssh/authorized_keys,
...and DOMAINNAME must be a domain that resolves to the target host, or the DNS record should point there anyhow.
Running ssh -vvv produced debugging output showing the client checking ~/.ssh/id_rsa, so that's where I put the private key.
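A quick sanity check when a CI job writes a private key to disk is to ask ssh-keygen to derive the public key from it; if the file is truncated or still carries Windows line endings, this fails immediately. A sketch with a throwaway key (the file path is illustrative):

```shell
# Generate a throwaway key pair just for the demonstration:
ssh-keygen -t ed25519 -N "" -q -f /tmp/demo_key

# ssh-keygen -y prints the public key only if the private key file is
# valid and has sane permissions; a mangled or mis-copied key errors out.
chmod 600 /tmp/demo_key
ssh-keygen -y -f /tmp/demo_key
```

Running this against the file the job actually wrote tells you whether the CI variable survived the copy/paste intact.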

Related

Gitlab remote server push on the deploy step does not push new files

I have the below CI/CD script (I removed the test steps, which work fine).
Our repo has two branches, ready_4_release and master (prod). Any feature branch gets merged into ready_4_release, and at the end of the week we push the changes into master.
Any feature branch that gets merged into ready_4_release gets deployed to the remote Airflow servers.
Problem:
When a new MR gets merged into the ready_4_release branch, the changes are not deployed to the remote servers, although the feature branch's changes do land in ready_4_release; I am not able to figure out why.
When we rerun the deploy step, the changes do appear on the remote server (Airflow),
OR when a new MR is submitted, the previous changes get deployed to the remote servers.
Below is how the branches are set up in GitLab.
This is how the merge request settings are on the repo.
stages:
  - test
  - deploy

.ssh-connection: &ssh-connection
  - 'which git || ( apt-get update -y && apt-get install git -y )'
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - 'which rsync || ( apt-get update -y && apt-get install rsync -y )'
  - eval $(ssh-agent -s)
  - echo "$LOTBOT_SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  - ssh-keyscan ip-10-0-1-24.ec2.internal >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

deploy-dev:
  stage: deploy
  tags:
    - prd-runner
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/ubuntu
  before_script:
    - echo "Deploying repo dev-data-analytics to Airkube EFS"
    - *ssh-connection
  script:
    - ssh lotbot@10.0.1.24 "sudo rm -r /mnt/airflow_dags/airkube/external_repos/dev-data-analytics/"
    - echo "${PWD}"
    - cd ..
    - mkdir dev-lotlinx-data-analytics && cp -R "${PWD}"/data-analytics/* "${PWD}"/dev-data-analytics
    - cd dev-data-analytics
    # - echo "Git commit id"
    # - git show -s --format=%h
    - echo "${PWD}"
    - ssh lotbot@10.0.1.24 "sudo mkdir -p /mnt/airflow_dags/airkube/external_repos/dev-data-analytics"
    - ssh lotbot@10.0.1.24 "sudo chown -R lotbot:sudo /mnt/airflow_dags/airkube/external_repos/dev-lotlinx-data-analytics/ && sudo chmod -R g+w /mnt/airflow_dags/airkube/external_repos/dev-data-analytics/"
    - rsync -avu --delete --exclude "*__pycache__" "${PWD}"/ lotbot@10.0.1.24:/mnt/airflow_dags/airkube/external_repos/lotlinx-data-analytics
    - ssh lotbot@10.0.1.24 "sudo sleep 3s"
    - ssh lotbot@10.0.1.24 "sudo chown -R lotbot:sudo /mnt/airflow_dags/airkube/external_repos/dev-lotlinx-data-analytics/ && sudo chmod -R g+w /mnt/airflow_dags/airkube/external_repos/lotlinx-data-analytics/"
  rules:
    - if: '$CI_COMMIT_REF_NAME == "ready_for_release" && $CI_PIPELINE_SOURCE == "push"'
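One thing worth double-checking in the job above: the rule compares $CI_COMMIT_REF_NAME against "ready_for_release", while the branch described in the question is named ready_4_release, so a push pipeline on that branch would never match this rule. A sketch of a rule keyed to the branch name as described (adjust to your actual branch name):

```yaml
rules:
  - if: '$CI_COMMIT_BRANCH == "ready_4_release"'
```

$CI_COMMIT_BRANCH is only set for branch pipelines, which also makes the $CI_PIPELINE_SOURCE check redundant for this case.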

Odd ansible behaviour in CentOS container

I have some odd behaviour when using Ansible inside a CentOS 8 base container. All I am doing initially is testing basic functionality: essentially running a ping from another machine using Ansible from a GitLab runner. It should be super simple, but I'm having issues with basic auth.
I've set up authorized keys and checked that they work for the connection from the container host (CentOS 8 with podman) to the test machine (also CentOS 8); all working correctly with Ansible, see below:
[root@automation home]# ansible all -i lshyp01.lab, -u ansible -v --private-key=/home/ansible/.ssh/id_rsa -a "/usr/sbin/ping -c 3 8.8.8.8"
Using /etc/ansible/ansible.cfg as config file
lshyp01.lab | CHANGED | rc=0 >>
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=5.30 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=5.21 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=4.97 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.967/5.160/5.304/0.153 ms
[root@automation home]#
However, when I run the same command via the GitLab runner, I get:
$ useradd ansible
$ mkdir -p /home/ansible/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > /home/ansible/.ssh/id_rsa
$ chmod -R 744 /home/ansible/.ssh/id_rsa*
$ chown ansible:ansible -R /home/ansible/.ssh
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible all -i lshyp01.lab, -u ansible -v --private-key=/home/ansible/.ssh/id_rsa -a "/usr/sbin/ping -c 3 8.8.8.8"
Using /etc/ansible/ansible.cfg as config file
lshyp01.lab | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added 'lshyp01.lab,10.16.4.19' (ECDSA) to the list of known hosts.\r\nansible#lshyp01.lab: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).",
"unreachable": true
}
Cleaning up file based variables
00:00
ERROR: Job failed: exit status 1
And here is the .gitlab-ci.yml file:
# Use minimal CentOS image
image: centos:latest

# Set up variables
# TF_ROOT: ${CI_PROJECT_DIR}/
# TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/state/prod

stages:
  - prepare
  - validate
  - build
  - deploy

before_script:
  # Install tools - these should be baked into the image for prod
  - which ssh-agent || (dnf -y install openssh-clients)
  - eval $(ssh-agent -s)
  - dnf -y install which
  - which git || (dnf -y install git)
  - which terraform || (dnf install -y dnf-utils && dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo && dnf -y install terraform)
  - which ansible || (dnf -y install epel-release && dnf -y install ansible)
  - which nslookup || (dnf -y install bind-utils)
  - which sudo || (dnf -y install sudo)
  # Set up user
  - useradd ansible
  - mkdir -p /home/ansible/.ssh
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > /home/ansible/.ssh/id_rsa
  - chmod -R 744 /home/ansible/.ssh/id_rsa*
  - chown ansible:ansible -R /home/ansible/.ssh

# Pre testing
sshtest:
  stage: prepare
  script:
    - export ANSIBLE_HOST_KEY_CHECKING=False
    - ansible all -i lshyp01.lab, -u ansible -v --private-key=/home/ansible/.ssh/id_rsa -a "/usr/sbin/ping -c 3 8.8.8.8"
I have verified that the key is correct. Any help is greatly appreciated.
The answer turned out to be an issue with GitLab variables. In the end I had to encode the keys in base64 to store them, then decode them on use. The updated .gitlab-ci.yml section is below.
As pointed out, the above example also had the wrong permissions; I'd tried a few options and should have reverted the permission changes before posting, sorry for the confusion.
- mkdir -p /root/.ssh
- echo "$SSH_PRIVATE_KEY" | base64 -d > /root/.ssh/id_rsa
- echo "$SSH_PUBLIC_KEY" | base64 -d > /root/.ssh/id_rsa.pub
- chmod -R 600 /root/.ssh/id_rsa && chmod -R 664 /root/.ssh/id_rsa.pub
- export ANSIBLE_HOST_KEY_CHECKING=False
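The encode/decode pair is easy to verify locally before trusting it in CI. A minimal sketch assuming GNU coreutils base64 (the file name and key content are illustrative):

```shell
# Pretend this file is the private key (content is illustrative):
printf '%s\n' "-----BEGIN OPENSSH PRIVATE KEY-----" \
  "AAAAdemo" \
  "-----END OPENSSH PRIVATE KEY-----" > /tmp/demo_key_material

# Encode once, locally; paste the single-line output into the CI variable:
base64 -w0 < /tmp/demo_key_material > /tmp/demo_key_material.b64

# In the job, decode it back; the round trip must be byte-identical:
base64 -d < /tmp/demo_key_material.b64 > /tmp/demo_key_material.decoded
cmp /tmp/demo_key_material /tmp/demo_key_material.decoded && echo "round trip ok"
```

Storing the key as a single base64 line sidesteps the multi-line and trailing-newline pitfalls that GitLab variables sometimes introduce.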

The authenticity of host 'github.com (192.30.253.113)' can't be established: passing "yes" with a bash script

I'm working on a personal project that requires me to do some bash scripting. I am currently trying to write a script that pulls from my private git repo. At the moment I am able to spin up my instance and install all my packages through a script. But when it comes to pulling from my private repo I get The authenticity of host 'github.com (192.30.253.113)' can't be established
I am trying to figure out a way to pass "yes" with my script. I know this is very bad practice but for my current use case, I'm not too concerned about security.
Running the command ssh-keyscan github.com >> ~/.ssh/known_hosts manually works, but when I put it in my script it does not seem to work.
Any help would be greatly appreciated
My script:
echo "update install -start"
sudo yum -y update
sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
sudo yum install -y httpd mariadb-server
sudo yum install -y git
sudo systemctl start httpd
echo "end"
# file permissions
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
# pulling from my git repo
ssh-keyscan github.com >> ~/.ssh/known_hosts
cd ../../var/www/html/
git clone git@github.com:jackbourkemckenna/testrepo
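One common reason the ssh-keyscan line works interactively but not from a provisioning script is that ~/.ssh does not exist yet, or $HOME points somewhere unexpected when the script runs under sudo or cloud-init. A hedged sketch that creates the directory first:

```shell
# Make sure the directory exists with the permissions ssh expects
# before appending to known_hosts:
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
touch "$HOME/.ssh/known_hosts"
chmod 644 "$HOME/.ssh/known_hosts"

# Network call, unchanged from the script above (commented out here):
# ssh-keyscan github.com >> "$HOME/.ssh/known_hosts"
```

Echoing $HOME and id at the top of the script is a quick way to confirm which user and home directory the redirection actually targets.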

EC2 userdata codecommit clone fails

I'm starting an EC2 instance from user data, and I need to clone a repo with my Ansible playbooks, but it fails to clone. See details below. Can anyone help me figure this out? When I SSH to the instance after bootstrap, the clone works, but not while bootstrapping.
#!/usr/bin/env bash
set -x
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
cd /home/ec2-user
mkdir -p .ssh
ssh-keygen -b 2048 -t rsa -f /home/ec2-user/.ssh/codecommit -q -N ""
KEY_ID=`aws iam upload-ssh-public-key --user-name ${user_id} --ssh-public-key-body "$(cat /home/ec2-user/.ssh/codecommit.pub)" \
--query 'SSHPublicKey.SSHPublicKeyId' --output text`
echo -e "
Host git-codecommit.*.amazonaws.com
User $KEY_ID
IdentityFile /home/ec2-user/.ssh/codecommit
" >> /home/ec2-user/.ssh/config
ssh-keyscan -t rsa git-codecommit.us-east-2.amazonaws.com >> /home/ec2-user/.ssh/known_hosts
sudo chown -R ec2-user:ec2-user /home/ec2-user/.ssh
sudo chmod 700 /home/ec2-user/.ssh
sudo chmod 644 /home/ec2-user/.ssh/*
sudo chmod 600 /home/ec2-user/.ssh/codecommit*
eval "$(ssh-agent -s)"
export GIT_SSH_COMMAND="ssh -v -F /home/ec2-user/.ssh/config -o StrictHostKeyChecking=no"
export GIT_TRACE_PACKET=true
export GIT_TRACE=2
export GIT_CURL_VERBOSE=1
sleep 60s
git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible
Adding a sleep of 60 seconds before the git clone command did the trick; it seems SSH key uploads take a bit of time to become active.
sleep 60s
git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible
OR
for i in {1..30}; do
  git clone ssh://git-codecommit.us-east-2.amazonaws.com/v1/repos/ansible && break
  echo "keep trying ..."
  sleep 2s
done
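The retry-until-success pattern generalizes to a small helper. The sketch below uses a placeholder command instead of the real git clone so it can run anywhere; in the user-data script you would pass the clone command instead:

```shell
# retry N CMD...: run CMD up to N times, pausing between failed attempts.
retry() {
  local attempts=$1; shift
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0            # success: stop retrying
    echo "attempt $i failed; retrying ..." >&2
    sleep 1
  done
  return 1                      # all attempts exhausted
}

# Usage (placeholder command; swap in the git clone here):
retry 3 true && echo "succeeded"
```

Returning the command's own exit status lets callers chain the helper with && or use it in if statements.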

GitLab Pipeline: Works in YML, Fails in Extracted SH

I followed the GitLab Docs to enable my project's CI to clone other private dependencies. Once it was working, I extracted from .gitlab-ci.yml:
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
into a separate shell script setup.sh as follows:
which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
eval $(ssh-agent -s)
ssh-add <(echo "$SSH_PRIVATE_KEY")
mkdir -p ~/.ssh
[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
leaving only:
before_script:
  - chmod 700 ./setup.sh
  - ./setup.sh
I then began getting:
Cloning into '/root/Repositories/DependentProject'...
Warning: Permanently added 'gitlab.com,52.167.219.168' (ECDSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
How do I replicate the original behavior in the extracted script?
When running ssh-add, use source or . so that the script runs within the same shell. In your case that would be:

before_script:
  - chmod 700 ./setup.sh
  - . ./setup.sh

or

before_script:
  - chmod 700 ./setup.sh
  - source ./setup.sh
For a better explanation as to why this needs to run in the same shell as the rest take a look at this answer to a related question here.
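The difference is easy to demonstrate without ssh-agent at all: variables exported by a child process vanish when it exits, while source/. runs the script in the current shell, the same way eval $(ssh-agent -s) exports SSH_AUTH_SOCK and SSH_AGENT_PID into the calling shell. A small sketch (the variable name is illustrative):

```shell
# A stand-in for setup.sh that exports a variable:
cat > /tmp/setup_demo.sh <<'EOF'
#!/bin/sh
export DEMO_AGENT_PID=12345
EOF
chmod 700 /tmp/setup_demo.sh

/tmp/setup_demo.sh                           # child shell: export is lost
echo "after ./    : '${DEMO_AGENT_PID:-unset}'"

. /tmp/setup_demo.sh                         # current shell: export persists
echo "after source: '${DEMO_AGENT_PID:-unset}'"
```

The first echo prints 'unset' and the second prints '12345', which is exactly why ssh-add works in the inline before_script but not in the extracted, separately executed setup.sh.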
