Odd Ansible behaviour in CentOS container - ansible

I am seeing some odd behaviour when using Ansible inside a CentOS 8 base container. All I am doing initially is testing basic function: essentially running a ping on another machine using Ansible from a GitLab runner. It should be super simple, but I'm having issues with basic auth.
I've set up authorized keys and checked that they work for the connection from the container host (CentOS 8 with podman) to the test machine (also CentOS 8). Everything works correctly with Ansible, see below:
[root@automation home]# ansible all -i lshyp01.lab, -u ansible -v --private-key=/home/ansible/.ssh/id_rsa -a "/usr/sbin/ping -c 3 8.8.8.8"
Using /etc/ansible/ansible.cfg as config file
lshyp01.lab | CHANGED | rc=0 >>
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=5.30 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=5.21 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=4.97 ms
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 4.967/5.160/5.304/0.153 ms
[root@automation home]#
However, when I run the same command via the GitLab runner I get:
$ useradd ansible
$ mkdir -p /home/ansible/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > /home/ansible/.ssh/id_rsa
$ chmod -R 744 /home/ansible/.ssh/id_rsa*
$ chown ansible:ansible -R /home/ansible/.ssh
$ export ANSIBLE_HOST_KEY_CHECKING=False
$ ansible all -i lshyp01.lab, -u ansible -v --private-key=/home/ansible/.ssh/id_rsa -a "/usr/sbin/ping -c 3 8.8.8.8"
Using /etc/ansible/ansible.cfg as config file
lshyp01.lab | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added 'lshyp01.lab,10.16.4.19' (ECDSA) to the list of known hosts.\r\nansible@lshyp01.lab: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).",
"unreachable": true
}
Cleaning up file based variables
ERROR: Job failed: exit status 1
And here is the .gitlab-ci.yml file:
# Use minimal CentOS image
image: centos:latest

# Set up variables
# TF_ROOT: ${CI_PROJECT_DIR}/
# TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/state/prod

stages:
  - prepare
  - validate
  - build
  - deploy

before_script:
  # Install tools - these should be baked into the image for prod
  - which ssh-agent || (dnf -y install openssh-clients)
  - eval $(ssh-agent -s)
  - dnf -y install which
  - which git || (dnf -y install git)
  - which terraform || (dnf install -y dnf-utils && dnf config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo && dnf -y install terraform)
  - which ansible || (dnf -y install epel-release && dnf -y install ansible)
  - which nslookup || (dnf -y install bind-utils)
  - which sudo || (dnf -y install sudo)
  # Set up user
  - useradd ansible
  - mkdir -p /home/ansible/.ssh
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > /home/ansible/.ssh/id_rsa
  - chmod -R 744 /home/ansible/.ssh/id_rsa*
  - chown ansible:ansible -R /home/ansible/.ssh

# Pre testing
sshtest:
  stage: prepare
  script:
    - export ANSIBLE_HOST_KEY_CHECKING=False
    - ansible all -i lshyp01.lab, -u ansible -v --private-key=/home/ansible/.ssh/id_rsa -a "/usr/sbin/ping -c 3 8.8.8.8"
I have verified that the key is correct. Any help is greatly appreciated.

The answer turned out to be an issue with GitLab variables. In the end I had to Base64-encode the keys to store them, then decode them on use. The updated .gitlab-ci.yml section is below.
As pointed out, the example above also had the wrong permissions. I'd tried a few options and should have reverted the permission changes before posting; sorry for the confusion.
- mkdir -p /root/.ssh
- echo "$SSH_PRIVATE_KEY" | base64 -d > /root/.ssh/id_rsa
- echo "$SSH_PUBLIC_KEY" | base64 -d > /root/.ssh/id_rsa.pub
- chmod -R 600 /root/.ssh/id_rsa && chmod -R 664 /root/.ssh/id_rsa.pub
- export ANSIBLE_HOST_KEY_CHECKING=False
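To illustrate the fix, here is a minimal sketch of the Base64 round trip: encode the key once for storage in the CI/CD variable, decode it in the job. File paths and the key content are stand-ins for illustration only:

```shell
# Create a stand-in for the private key file (illustrative content only)
printf '%s\n' '-----BEGIN OPENSSH PRIVATE KEY-----' 'bWluaW1hbCBkZW1v' '-----END OPENSSH PRIVATE KEY-----' > /tmp/demo_key

# Encode for storage in a GitLab CI/CD variable; -w0 disables line wrapping,
# avoiding the stray newline/CR problems that can corrupt pasted keys
ENCODED=$(base64 -w0 /tmp/demo_key)

# Decode on use, exactly as the updated job does
echo "$ENCODED" | base64 -d > /tmp/demo_key_decoded

# The round trip must be byte-identical
cmp -s /tmp/demo_key /tmp/demo_key_decoded && echo "round trip OK"
```

Storing the encoded form sidesteps GitLab variable handling of multi-line values entirely, since the variable then contains a single line of safe characters.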


Testing GitLab CI/CD: how to solve "the connection is refused" / "no matching host key type found"

GitLab CI/CD can't connect to my remote VPS.
I took https://gitlab.com/gitlab-examples/ssh-private-key as an example to make a .gitlab-ci.yml file, with these contents:
image: ubuntu

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_KEY_VU2NW" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan (domain name here) >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts

Test SSH:
  script:
    - ssh root@(IP address here)
The runner responds with
the connection is refused
The server auth log says
sshd[2222]: Unable to negotiate with XXXXX port 53068: no matching
host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com
[preauth]
sshd[2220]: Unable to negotiate with XXXXX port 53068: no
matching host key type found. Their offer: sk-ssh-ed25519@openssh.com
[preauth]
Is there any way to solve this? I already tried connecting to another VPS, also without luck.
I finally got it to work, with these contents in the .gitlab-ci.yml file:
image: ubuntu

before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client git -y )'
  - eval $(ssh-agent -s)
  - mkdir -p /root/.ssh
  - chmod 700 /root/.ssh
  - echo "$SSH_KEY_GITLAB" >> /root/.ssh/id_rsa
  - ssh-keyscan DOMAINNAME >> /root/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - chmod 400 ~/.ssh/id_rsa

Test SSH:
  script:
    - ssh root@DOMAINNAME
Here $SSH_KEY_GITLAB is set in GitLab's Settings > CI/CD section, and is a private key generated by PuTTY and converted in PuTTY to an OpenSSH key.
The public half of this key must be in the target host's ~/.ssh/authorized_keys, and DOMAINNAME must be a domain that resides on the target host, or whose DNS record points there.
Running ssh -vvv produced debugging output that pointed to the checking of ~/.ssh/id_rsa, so that's where I put the private key.
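A quick way to verify that a PuTTY-converted key is in a format OpenSSH will actually accept is ssh-keygen -y, which prints the public key only if the private key parses. The throwaway key below is generated purely for illustration:

```shell
# Generate a throwaway key pair standing in for the converted PuTTY key
ssh-keygen -t ed25519 -N '' -f /tmp/demo_id -q

# Prints the derived public key if the private key is valid OpenSSH format;
# an unconverted .ppk file would fail here with "invalid format"
ssh-keygen -y -f /tmp/demo_id
```

Running this against the decoded CI variable before the first ssh attempt gives a much clearer error than "Permission denied (publickey)".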

Lambda gives "No such file or directory" (can't find the script file) error while running a bash script inside a container, but it succeeds locally

I am creating a Lambda function from a Docker image. The image runs a bash script inside the container, but when I test the function I get the following error, even though it succeeds locally. I tested with the entrypoint both commented and uncommented. Please help me figure it out.
The dockerfile -
FROM amazon/aws-cli
USER root
ENV AWS_ACCESS_KEY_ID XXXXXXXXXXXXX
ENV AWS_SECRET_ACCESS_KEY XXXXXXXXXXXXX
ENV AWS_DEFAULT_REGION ap-south-1
# RUN mkdir /tmp
COPY main.sh /tmp
WORKDIR /tmp
RUN chmod +x main.sh
RUN touch file_path_final.txt
RUN touch file_path_initial.txt
RUN touch output_final.json
RUN touch output_initial.json
RUN chmod 777 file_path_final.txt
RUN chmod 777 file_path_initial.txt
RUN chmod 777 output_final.json
RUN chmod 777 output_initial.json
RUN yum install jq -y
# ENTRYPOINT ./main.sh ; /bin/bash
ENTRYPOINT ["/bin/sh", "-c" , "ls && ./tmp/main.sh"]
The error -
START RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Version: $LATEST
/bin/sh: ./tmp/main.sh: No such file or directory
/bin/sh: ./tmp/main.sh: No such file or directory
END RequestId: 8d689260-e500-45d7-aac8-ae260834ed96
REPORT RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Duration: 58.29 ms Billed Duration: 59 ms Memory Size: 128 MB Max Memory Used: 3 MB
RequestId: 8d689260-e500-45d7-aac8-ae260834ed96 Error: Runtime exited with error: exit status 127
Runtime.ExitError
Here is how I did it to run a C++ program via a bash script:
# Pull the base image from the AWS ECR public registry.
FROM public.ecr.aws/lambda/provided:al2.2022.10.11.10
# Set the working directory to the Lambda runtime directory.
WORKDIR ${LAMBDA_RUNTIME_DIR}
# Copy the contents of the current directory to the working directory.
COPY . .
# Update installed packages.
RUN yum update -y
# Install sudo, wget and openssl, which are required for building CMake
RUN yum install sudo wget openssl-devel -y
# Install development tools
RUN sudo yum groupinstall "Development Tools" -y
# Download, build and install cmake
RUN yum install -y make
#RUN wget https://github.com/Kitware/CMake/releases/download/v3.22.3/cmake-3.22.3.tar.gz && tar -zxvf cmake-3.22.3.tar.gz && cd ./cmake-3.22.3 && ./bootstrap && make && sudo make install
RUN yum -y install gcc-c++ libcurl-devel cmake3 git
RUN ln -s /usr/bin/cmake3 /usr/bin/cmake
RUN ln -s /usr/bin/ctest3 /usr/bin/ctest
RUN ln -s /usr/bin/cpack3 /usr/bin/cpack
# Get the cmake version
RUN cmake --version
RUN echo $(cmake --version)
# Build the project.
RUN ./build.sh
RUN chmod 755 run.sh bootstrap
# Run the handler script.
CMD [ "run.sh" ]
You will need a bootstrap file in the root directory (from the AWS docs):
#!/bin/sh
set -euo pipefail

# Initialization - load function handler
source $LAMBDA_RUNTIME_DIR/"$(echo $_HANDLER | cut -d. -f1).sh"

# Processing
while true
do
  HEADERS="$(mktemp)"
  # Get an event. The HTTP request will block until one is received
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  # Extract request ID by scraping response headers received above
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  # Run the handler function from the script
  RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")
  # Send the response
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
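To see how the bootstrap's sourcing and dispatch steps fit together, here is a minimal stand-alone sketch: for a handler setting of function.handler, the loop sources function.sh and invokes the handler function with the event payload. File locations and names are illustrative (the real bootstrap sources from $LAMBDA_RUNTIME_DIR):

```shell
# Minimal handler file, as the bootstrap would source it
cat > /tmp/function.sh <<'EOF'
handler () {
  EVENT_DATA="$1"
  echo "Echoing request: $EVENT_DATA"
}
EOF

# What the bootstrap does for _HANDLER="function.handler":
_HANDLER="function.handler"
# part before the dot names the script to source...
. /tmp/"$(echo "$_HANDLER" | cut -d. -f1)".sh
# ...part after the dot names the shell function to call
RESPONSE=$("$(echo "$_HANDLER" | cut -d. -f2)" '{"key":"value"}')
echo "$RESPONSE"   # prints: Echoing request: {"key":"value"}
```

The function's stdout becomes the response POSTed back to the runtime API.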

Error in YAML code: could not find expected ':'

YAML code:
- hosts: all
tasks:
#Import Remi GPG key - see: http://rpms.famillecollet.com/RPM-GPG-KEY-remi
wget http://rpms.famillecollet.com/RPM-GPG-KEY-remi \ -O /etc/pki/rpm-gpg/RPM-GPG-KEY-remi
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-remi
#Install Remi repo
rpm -Uvh --quiet \
http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
Install EPEL repo.
yum install epel-release
Install Node.js (npm plus all its dependencies).
yum --enablerepo=epel install node
I am getting the following error when running it: ERROR! Syntax Error while loading YAML.
The error appears to have been in '/home/shahzad/playbook.yml': line
7, column 3, but may be elsewhere in the file depending on the exact
syntax problem.
The offending line appears to be:
wget http://rpms.famillecollet.com/RPM-GPG-KEY-remi \ -O /etc/pki/rpm-gpg/RPM-GPG-KEY-remi
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-remi
^ here
exception type: <class 'yaml.scanner.ScannerError'>
exception: while scanning a simple key
in "<unicode string>", line 6, column 3
could not find expected ':'
in "<unicode string>", line 7, column 3
I installed everything from the instructions above, but I used alien to convert and install the RPM packages on Ubuntu 18.04.
You will not be able to install them with yum, since some of the packages are not in its repositories.
Use alien:
# apt install alien # apt install -y
# cd /tmp
# wget http://rpms.famillecollet.com/RPM-GPG-KEY-remi \ -O /etc/pki/rpm-gpg/RPM-GPG-KEY-remi
# wget http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
# alien -kiv remi-release-6.rpm
# ls -l
# wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
# alien epel-release-latest-8.noarch.rpm
# ls -l
# alien -k epel-release-latest-8.noarch.rpm ; alien -i epel-release-latest-8.noarch.rpm
# cd /home/user
# apt install curl gcc g++ make # apt install -y
# curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
# apt install nodejs # apt install -y
# curl -sL https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
# echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
# apt update ; sudo apt install yarn # apt install -y
# apt install nodejs ; apt upgrade ; passwd -dl root ; reboot # apt install -y
I still get the same error, Invalid YAML: could not find expected ':'. But the networkctl command now shows me more clearly which interfaces are not configured correctly: it reports failed (it didn't mark them like that before installing node.js, remi-release and epel-release).
IDX LINK   TYPE     OPERATIONAL SETUP
  1 lo     loopback carrier     unmanaged
  2 ens11  ether    off         unmanaged
  3 enp2t1 ether    routable    configured
  4 br0    ether    off         failed
  5 vlan5  ether    off         configuring
These installed packages let you see the interface error in depth; this method works. Thank you, Shahzad Adil Shaikh!
I was getting the same error while running commands using a PowerShell task in YAML.
- task: PowerShell@1
  inputs:
    scriptType: inlineScript
    inlineScript: |
      Command1
      Command2
I fixed this error by indenting the commands in the script block: Command1 needs to be indented one level under inlineScript: |.
If you wish to use shell commands such as wget in your YAML playbook, you'll need to use the shell module:
- name: Import Remi GPG key
shell: wget ...
":" is a special character in YAML; please read the YAML Syntax page in the official Ansible documentation for quoting rules.
As for yum commands, you may use Ansible's yum module.
As a best practice, you can run your playbook through http://www.yamllint.com/ to debug the YAML syntax; it reports the exact line and column where the parser fails.
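Putting those suggestions together, a sketch of how the playbook above might look with proper tasks instead of bare shell lines (untested; module parameters as in the standard Ansible rpm_key and yum modules):

```yaml
- hosts: all
  tasks:
    # Import Remi GPG key - see: http://rpms.famillecollet.com/RPM-GPG-KEY-remi
    - name: Import Remi GPG key
      rpm_key:
        key: http://rpms.famillecollet.com/RPM-GPG-KEY-remi
        state: present

    - name: Install Remi repo
      yum:
        name: http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
        state: present

    - name: Install EPEL repo
      yum:
        name: epel-release
        state: present

    - name: Install Node.js (npm plus all its dependencies)
      yum:
        name: nodejs
        enablerepo: epel
        state: present
```

Each task is a list item with a name: and a module key, which is exactly the ':'-separated structure the parser was failing to find.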

GitLab Pipeline: Works in YML, Fails in Extracted SH

I followed the GitLab docs to enable my project's CI to clone other private dependencies. Once it was working, I extracted the following from .gitlab-ci.yml:
before_script:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- ssh-add <(echo "$SSH_PRIVATE_KEY")
- mkdir -p ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
into a separate shell script setup.sh as follows:
which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
eval $(ssh-agent -s)
ssh-add <(echo "$SSH_PRIVATE_KEY")
mkdir -p ~/.ssh
[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
leaving only:
before_script:
- chmod 700 ./setup.sh
- ./setup.sh
I then began getting:
Cloning into '/root/Repositories/DependentProject'...
Warning: Permanently added 'gitlab.com,52.167.219.168' (ECDSA) to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
How do I replicate the original behavior in the extracted script?
When running ssh-add, use source or . so that the script runs within the same shell. In your case that would be:
before_script:
- chmod 700 ./setup.sh
- . ./setup.sh
or
before_script:
- chmod 700 ./setup.sh
- source ./setup.sh
For a better explanation as to why this needs to run in the same shell as the rest take a look at this answer to a related question here.
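The underlying behaviour is easy to demonstrate: environment set up by a script run as a child process disappears when it exits, while a sourced script modifies the current shell. The variable below is a stand-in for what eval $(ssh-agent -s) and ssh-add set up:

```shell
# A stand-in for setup.sh that exports state, as eval $(ssh-agent -s) does
cat > /tmp/setup_demo.sh <<'EOF'
#!/bin/sh
export DEMO_AGENT_PID=12345
EOF
chmod 700 /tmp/setup_demo.sh

unset DEMO_AGENT_PID
/tmp/setup_demo.sh                          # runs in a child shell; export is lost on exit
echo "after ./ : ${DEMO_AGENT_PID:-unset}"  # prints: after ./ : unset

. /tmp/setup_demo.sh                        # runs in the current shell
echo "after .  : ${DEMO_AGENT_PID:-unset}"  # prints: after .  : 12345
```

This is why the extracted setup.sh "worked" without errors yet left the before_script shell with no agent and no key loaded.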

Build failed while appending a line to sources.list in a Docker container

I'm working on https://github.com/audip/rpi-haproxy and get this error message when building the Docker container:
Build failed: The command '/bin/sh -c echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list' returned a non-zero code: 1
This can be viewed at https://hub.docker.com/r/audip/rpi-haproxy/builds/brxdkayq3g45jjhppndcwnb/
I tried to find answers, but the problem seems to be something off on line 4 of the Dockerfile. I need help to stop this build from failing.
# Pull base image.
FROM resin/rpi-raspbian:latest
# Enable Jessie backports
RUN echo "deb http://httpredir.debian.org/debian jessie-backports main" >> /etc/apt/sources.list
# Setup GPG keys
RUN gpg --keyserver pgpkeys.mit.edu --recv-key 8B48AD6246925553 \
    && gpg -a --export 8B48AD6246925553 | sudo apt-key add - \
    && gpg --keyserver pgpkeys.mit.edu --recv-key 7638D0442B90D010 \
    && gpg -a --export 7638D0442B90D010 | sudo apt-key add -
# Install HAProxy
RUN apt-get update \
    && apt-get install haproxy -t jessie-backports
# Define working directory.
WORKDIR /usr/local/etc/haproxy/
# Copy config file to container
COPY haproxy.cfg .
COPY start.bash .
# Define mountable directories.
VOLUME ["/haproxy-override"]
# Run loadbalancer
# CMD ["haproxy", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]
# Define default command.
CMD ["bash", "start.bash"]
# Expose ports.
EXPOSE 80
EXPOSE 443
From your logs:
standard_init_linux.go:178: exec user process caused "exec format error"
It's complaining about an invalid binary format. The image you are using is a Raspberry Pi image, which is based on an ARM chipset, while your build is running on an AMD64 chipset. These are not binary compatible. I believe this image is designed to be built on a Pi itself.
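Alternatively, ARM images can often be cross-built on an AMD64 host with QEMU emulation. A sketch, assuming a reasonably recent Docker with buildx available (the binfmt image and the output tag are illustrative):

```shell
# Register QEMU binfmt handlers so the host can run ARM binaries during the build
docker run --privileged --rm tonistiigi/binfmt --install arm

# Cross-build the Raspberry Pi image for 32-bit ARM
docker buildx build --platform linux/arm/v7 -t rpi-haproxy:arm .
```

Emulated builds are slower than native ones, but they avoid needing a Pi in the CI pipeline.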
