./deploy.sh not working on gitlab ci - bash

The bash script I created fails on GitLab with the error "/bin/sh: eval: line 88: ./deploy.sh: not found". Below is my sample .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest
variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay
services:
  - docker:dind
stages:
  - deploy
before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true
production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And this is my deploy.sh.
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
# commands go here
ENDSSH
All I want is to execute deploy.sh after docker push, but I keep getting this /bin/sh error instead.
I would be thankful for any help with this GitLab CI error: "/bin/sh: eval: line 88: ./deploy.sh: not found".

This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh requests /bin/bash as the script interpreter, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error, because bash is not present in that image. So rewrite the first line of deploy.sh to:
#!/bin/sh
After fixing that you will run into the problem that the previous answer is addressing: ssh is not installed either. You will need to fix that too!

docker:latest is based on Alpine Linux, which is very minimalistic and does not have a lot installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
  - apk update && apk add openssh

Thanks. This worked for me after adding bash to the before_script:
before_script:
  - apk update && apk add bash
Let me know if that still doesn't work for you.
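Putting the two fixes together, a before_script for the Alpine-based docker image could look like this (a sketch that simply combines the package installs suggested above with the original login and pull steps):
before_script:
  - apk update && apk add bash openssh-client   # bash for deploy.sh, ssh client for the remote commands
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true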

Related

gitlab-ci predefined variable is defined in the script step of the deploy stage but undefined inside a bash script run via 'bash -s'

I am trying to deploy a branch other than the default (master) branch. For some reason the predefined variables are not visible inside the roll_out.sh script. But the echo statements before calling the script do print the variables correctly.
I have another script that rolls out the master branch. In this script it is able to run docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY with no problems.
I have tried with the branch both protected and not protected. The variables are still undefined for some reason.
What am I doing wrong here?
build:
  stage: build
  image: docker:20.10.12
  services:
    - docker:20.10-dind
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
      allow_failure: true
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker pull $CI_REGISTRY_IMAGE/client/base:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE/client/base:latest --cache-from $CI_REGISTRY_IMAGE/client:latest -t $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH .
    - docker push $CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH

deploy:
  variables:
    branch: $CI_COMMIT_BRANCH
  stage: deploy
  image: alpine
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH # not master branch
  before_script:
    - apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - echo $CI_COMMIT_REF_NAME
    - echo $CI_COMMIT_BRANCH
    - echo $branch
    - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN 'bash -s' < ./script/roll_out.sh
If you are sshing to another machine within your CI script, it will not have access to the variables you are echoing in the lines before, because the script runs in a new session on a different machine.
You could try a few different things to achieve what you want:
Try sending the variables as arguments (not ideal, since you are pushing information to the other machine over ssh); see the sketch after this list.
Install a GitLab runner on the host you are trying to ssh to, tag this runner so it only runs the specific deployment job, and then you'll have the variables available on the host.
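For the first option, a minimal sketch (assuming roll_out.sh is adjusted to read the values as positional parameters $1 and $2) could be:
  - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "bash -s -- '$CI_REGISTRY_IMAGE' '$CI_COMMIT_BRANCH'" < ./script/roll_out.sh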
The problem was solved by explicitly passing the environment variables to the script:
  - ssh -o StrictHostKeyChecking=no $SERVER_USER@$DOMAIN "CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE CI_COMMIT_BRANCH=$CI_COMMIT_BRANCH bash -s" < ./script/roll_out_tagged.sh
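With the variables assigned on the ssh command line like this, they are visible to the remote shell, so roll_out_tagged.sh (not shown in the question) can reference them directly, for example:
#!/bin/sh
# CI_REGISTRY_IMAGE and CI_COMMIT_BRANCH are injected by the ssh command above
docker pull "$CI_REGISTRY_IMAGE/client:$CI_COMMIT_BRANCH"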
I had another developer with lots of experience take a look at it.
He had no idea why docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY worked. That is what led me to believe that all of the environment variables would be available on the remote. But it seems those three variables are some sort of exception.

How to pull image from Docker hub to EC2 using bitbucket pipeline

I am trying to implement CI/CD using Bitbucket Pipelines.
So far I have been able to create the image and push it to Docker Hub. That part seems straightforward, and the internet is full of tutorials.
But I found nothing about pulling the image onto an EC2 instance and running it.
I have this bitbucket-pipelines.yml file:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - export IMAGE_NAME=juanibe/vinimayapi:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
And I have this script, but I don't know where to put it:
#!/bin/bash
sudo docker ps
echo 'Logging in to docker'
docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD # How can I set env variables here?
echo 'Fetching latest image'
sudo docker pull user/vinimayapi:latest
echo 'Stopping current container'
sudo docker stop cont_docker_app_test
echo 'Removing old container'
sudo docker rm cont_docker_app_test-old
echo 'Renaming stopped container'
sudo docker rename user/cont_docker_app_test user/cont_docker_app_test_old
echo 'Starting new container'
sudo docker run -d --name cont_docker_app_test -p 443:3333 -p 8001:8001 --link my-mongo-testing:my-mongo-testing user/vinimayapi:latest
Any help will be really appreciated, I've been trying to create a pipeline for days without success.
Add an additional step to your pipeline:
- pipe: "atlassian/ssh-run:0.2.4"
  variables:
    SSH_USER: user
    SERVER: ip_server
    SSH_KEY: sshkey
    MODE: script
    COMMAND: script.sh
It should look like the example below:
image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          - export IMAGE_NAME=juanibe/vinimayapi:$BITBUCKET_COMMIT
          - docker build -t $IMAGE_NAME .
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          - docker push $IMAGE_NAME
          - pipe: "atlassian/ssh-run:0.2.4"
            variables:
              SSH_USER: user
              SERVER: ip_server
              SSH_KEY: sshkey
              MODE: script
              COMMAND: script.sh
The script.sh file in that case is located in the same directory as bitbucket-pipelines.yml.
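For illustration, script.sh could then hold the pull-and-restart commands from the question (a sketch only; the image, container, and link names are the ones used above and may need adjusting):
#!/bin/bash
echo 'Fetching latest image'
sudo docker pull juanibe/vinimayapi:latest
echo 'Stopping and removing current container'
sudo docker stop cont_docker_app_test || true
sudo docker rm cont_docker_app_test || true
echo 'Starting new container'
sudo docker run -d --name cont_docker_app_test -p 443:3333 -p 8001:8001 --link my-mongo-testing:my-mongo-testing juanibe/vinimayapi:latest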

Bash script GitLab shared runner

I am attempting to use a shared runner to run a script which handles env vars necessary for deployment. The section of my YAML config that is failing is:
release:
  stage: release
  image: docker:latest
  only:
    - master
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay
  before_script:
    - docker version
    - docker info
    - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}
  script:
    - dckstart=$(cat dockerfile-start)
    - export > custom_vars
    - chmod +x scripts/format-variables.sh
    - bash scripts/format-variables.sh
    - dckenv=$(cat custom_vars)
    - dckfin=$(cat dockerfile-finish)
    - echo -e "$dckstart\n$dckenv\n$dckfin" >> Dockerfile
    - rm dockerfile-start dockerfile-finish custom_vars
    - docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull .
    - docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest
  after_script:
    - docker logout ${CI_REGISTRY}
This step fails & gives the error:
$ chmod +x scripts/format-variables.sh
$ bash scripts/format-variables.sh
/bin/sh: eval: line 101: bash: not found
I have attempted:
/bin/bash scripts/format-variables.sh
/bin/sh: eval: line 114: /bin/bash: not found
cd scripts && ./format-variables.sh
/bin/sh: eval: line 116: ./format-variables.sh: not found
--shell /bin/bash scripts/format-variables.sh
/bin/sh: eval: line 114: --shell: not found
The final attempt was an idea I grabbed from the docs. I have not specified which shared runner to use, but I assume the one being used is UNIX-based, since all other UNIX commands work.
Is it possible to do this via a shared runner or do I need to get a dedicated runner for this?
NOTE: I have to use bash for this script and not plain sh, because the script uses arrays. If I were to use sh, I would run into the error mentioned here.
The docker:latest image doesn't contain bash, to save space. You can either install it (see How to use bash with an Alpine based docker image?) or use a different base image (like CentOS or Ubuntu).
Use an image that has bash installed like CentOS or Ubuntu.
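If you prefer to stay on docker:latest, installing bash at the start of the job is enough, for example (a sketch; apk is Alpine's package manager, and the remaining lines are the original before_script):
before_script:
  - apk add --no-cache bash   # bash is not in the Alpine-based docker:latest image
  - docker version
  - docker info
  - docker login -u ${CI_REGISTRY_USER} -p ${CI_BUILD_TOKEN} ${CI_REGISTRY}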

sshpass not executing in bash script

I have a dockerfile (these are the relevant commands):
RUN apk app --update bash openssh sshpass
CMD ["bin/sh", "/home/build/build.sh"]
My dockerfile gets run with this command:
docker run --rm -it -v $(pwd):/home <image-name>
and all of the commands within my bash script in the mounted volume execute. These commands range from npm installs to using tar to zip up a file, and I want to SFTP that tar.gz file.
I am using sshpass to automate logging in, which I know isn't secure, but I'm not worried about that for this application.
sshpass -p <password> sftp -P <port> username@host << EOF
<command>
<command>
EOF
But the sshpass command is never executed. I've tested my docker run command by appending /bin/sh to it and trying it that way, and it also does not run. The SFTP command by itself does.
And when I say it's never executed, I don't receive an error or anything.
Two possible reasons:
Your apk command is wrong; it should be RUN apk add --update bash openssh sshpass, but I assume that is a typo.
It seems the known-hosts entry is missing; you should check the logs with docker logs -f <container>. You also need to add an entry to known_hosts; check the suggested build script below.
Here is a working example that you can try
Dockerfile
FROM alpine
RUN apk add --update bash openssh sshpass
COPY build.sh /home/build/build.sh
CMD ["bin/sh", "/home/build/build.sh"]
build script
#!/bin/bash
echo "adding host to known host"
mkdir -p ~/.ssh
touch ~/.ssh/known_hosts
ssh-keyscan sftp >> ~/.ssh/known_hosts
echo "run command on remote server"
sshpass -p pass sftp foo@sftp << EOF
ls
pwd
EOF
Now build the image: docker build -t ssh-pass .
And finally, here is the docker-compose file for testing the above:
version: '3'
services:
  sftp-client:
    image: ssh-pass
    depends_on:
      - sftp
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: foo:pass:1001
so you will be able to connect to the sftp container using docker-compose up.
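For example, from the directory containing both files (a sketch of the usual workflow):
docker build -t ssh-pass .   # build the client image referenced by the compose file
docker-compose up            # starts the sftp service, then runs build.sh in the sftp-client service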

GitLab is not deploying Laravel app to AWS EC2

I am trying to deploy my Laravel app to an AWS EC2 instance. I am using GitLab for code management and the pipeline process.
Here is my .gitlab-ci.yml file.
# Node docker image on which this would be run
image: node:8.9.0

# This command is run before actual stages start running
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - npm install

stages:
  - test
  - deploy
  - production

# Production stage
production:
  stage: production
  before_script:
    # generate ssh key
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
  script:
    - bash .gitlab-deploy.sh
  environment:
    name: production
    url: MY_HOST_IP_ADDRESS
  when: manual

# lint and test are two different jobs in the same stage.
# This allows us to run these two in parallel, making the build faster.
And here is my .gitlab-deploy.sh file
#!/bin/bash
# Get servers list
set -f
string=$DEPLOY_SERVER
array=(${string//,/ })
# Iterate servers for deploy and pull last commit
for i in "${!array[@]}"; do
  echo "Deploying information to EC2 and Gitlab"
  echo "Deploy project on server ${array[i]}"
  ssh ubuntu@${array[i]} "cd /var/www/html && git pull origin"
done
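For clarity, with a hypothetical value such as DEPLOY_SERVER="1.2.3.4,5.6.7.8" the loop above expands to:
ssh ubuntu@1.2.3.4 "cd /var/www/html && git pull origin"
ssh ubuntu@5.6.7.8 "cd /var/www/html && git pull origin"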
When I push my code, the pipeline runs and reports success.
But after a successful run, when I list the /var/www/html directory it is still empty. I am using Apache2 on Ubuntu. I want to deploy my code directly to the AWS EC2 instance.
Thanks in advance!
