Jenkins shell doesn't interpret $ variables - shell

I am trying to deploy a Node.js app inside a Docker container on a prod machine using Jenkins.
I have this shell script:
ssh -tt vagrant@10.2.3.129 <<EOF
cd ~/app/backend
git pull
cat <<EOM >./Dockerfile
FROM node:8
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
EOM
docker build -t vagrant/node-web-app .
docker kill $(docker ps -q)
docker rm $(docker ps -a -q)
docker run -p 3000:3000 -d vagrant/node-web-app
exit
EOF
This connects via ssh to the prod machine, creates a Dockerfile, then builds and runs the image. But it fails.
Here is part of the Jenkins log:
Successfully built 8e5796ea9846
vagrant@ubuntu-xenial:~$ docker kill
"docker kill" requires at least 1 argument.
See 'docker kill --help'.
Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
Kill one or more running containers
vagrant@ubuntu-xenial:~$ docker rm
"docker rm" requires at least 1 argument.
See 'docker rm --help'.
Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
Remove one or more containers
vagrant@ubuntu-xenial:~$ docker run -p 3000:3000 -d vagrant/node-web-app
0cc8b5b67f70065ace03e744500b5b66c79941b4cb36d53a3186845445435bb5
docker: Error response from daemon: driver failed programming external connectivity on endpoint stupefied_margulis (d0e4cdd5642c288a31537e1bb8feb7dde2d19c0f83fe5d8fdb003dcba13f53a0): Bind for 0.0.0.0:3000 failed: port is already allocated.
vagrant@ubuntu-xenial:~$ exit
logout
Connection to 10.2.1.129 closed.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
It seems like Jenkins doesn't execute "$(docker ps -q)"
and "$(docker ps -a -q)",
so docker kill and docker rm get 0 arguments.
But why does this happen?

I found the issue:
I just had to replace "$" with "\$".
This solves the problem. Without the escape, the local shell expands "$(docker ps -q)" before the heredoc text is even sent over ssh, so the remote machine receives the commands with empty argument lists.
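An alternative to escaping each "$" is to quote the heredoc delimiter (<<'EOF'), which disables all local expansion inside the heredoc body. A minimal sketch of the difference, using cat instead of ssh so it runs anywhere (the marker variable is only for illustration):

```shell
#!/bin/sh
marker="expanded-locally"

# Unquoted delimiter: $marker is substituted by the *local* shell first.
unquoted=$(cat <<EOF
$marker
EOF
)

# Quoted delimiter: the body is passed through verbatim, so a remote
# shell would see the literal text $marker (or $(docker ps -q)).
quoted=$(cat <<'EOF'
$marker
EOF
)

echo "$unquoted"   # expanded-locally
echo "$quoted"     # $marker
```

The same applies to the ssh heredoc in the question: with <<'EOF', every $(docker ...) substitution runs on the remote machine.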

Related

check if docker container is running before removing or deleting it via script

I have a bash script that deploys an application called enhanced-app. It is expected to clean up all running containers first before building a new image. My current code does that, but in cases where the container doesn't exist or isn't running, I get an error.
I want to run the cleanup commands only if enhanced-app is running. How can I achieve this?
#!/bin/bash
echo "Stopping App2..."
docker container stop enhanced-app
docker container rm enhanced-app
CURPATH=$(dirname "${BASH_SOURCE[0]}")
docker build . -t enhanced-app
docker run -d -p 5000:5000 --name enhanced-app enhanced-app
I believe you can use the output of docker ps for that:
#!/bin/bash
IS_RUNNING=$(docker ps --filter name=enhanced-app --format '{{.ID}}')
if [ -n "${IS_RUNNING}" ]; then
    echo "enhanced-app is running. Stopping App2 and removing container..."
    docker container stop enhanced-app
    docker container rm enhanced-app
else
    IS_STOPPED=$(docker ps -a --filter name=enhanced-app --format '{{.ID}}')
    if [ -n "${IS_STOPPED}" ]; then
        echo "enhanced-app is stopped. Removing container..."
        docker container rm enhanced-app
    fi
fi
CURPATH=$(dirname "${BASH_SOURCE[0]}")
docker build . -t enhanced-app
docker run -d -p 5000:5000 --name enhanced-app enhanced-app
You can use the exit status of docker container inspect:
if docker inspect -f 'Container exists and is {{.State.Status}}' enhanced-app; then
docker container stop enhanced-app
docker container rm enhanced-app
fi
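The if above branches on the exit status of docker inspect; the text it prints is incidental (it can be silenced with >/dev/null 2>&1). A docker-free sketch of the same exit-status pattern, with grep -q standing in for the inspect call:

```shell
#!/bin/sh
# `grep -q` exits 0 when the name is found and non-zero otherwise,
# much like `docker container inspect <name>` for an existing container.
containers="enhanced-app
other-app"

if printf '%s\n' "$containers" | grep -q '^enhanced-app$'; then
  action="cleanup"   # the real script would stop/rm enhanced-app here
else
  action="skip"
fi
echo "$action"       # cleanup
```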

The docker CI pipeline is failing during execution of bash script

I am facing a problem creating CI for docker containers. During the CI build I have to remove the previous docker container and image; the build fails when there is no image on the server.
How can I execute this statement without causing the build to fail?
docker rmi example/hello-world:latest
Unable to find image 'example/hello-world:latest' locally
docker: Error response from daemon:
The build is not failing in the docker stop and docker rm case:
docker stop zod || true && docker rm zod || true
How do I make sure the build doesn't fail if the image doesn't exist on the server?
This is my script for docker deployment:
docker build -t example/hello-world:latest .
docker stop zod || true && docker rm zod || true
docker rmi example/hello-world:latest
docker run --name zod -d -p 6000:6000 -dit example/hello-world:latest
First check if the image exists, then remove it:
exists=$(docker images example/hello-world:latest | tail -n +2)
if [ -n "$exists" ]
then
    docker rmi example/hello-world:latest
fi
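Note that quoting "$exists" matters here: the output of docker images has several whitespace-separated columns, and an unquoted $exists word-splits inside [ ... ]. A docker-free sketch with a simulated row:

```shell
#!/bin/sh
# Simulated row from `docker images ... | tail -n +2` (several columns):
exists="example/hello-world latest 8e5796ea9846"

# Unquoted, $exists splits into three words and [ fails with "too many arguments":
[ -n $exists ] 2>/dev/null || echo "unquoted test errored"

# Quoted, the whole row is a single non-empty operand, as intended:
if [ -n "$exists" ]; then
  echo "image present"
fi
```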

docker commands not working with latest version

The following commands used to work, but as of Docker version 19.03.8 - build afacb8b they no longer do.
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q) -f
docker rmi $(docker images -f "dangling=true" -q)
This is the error I'm getting for the first docker command to stop all containers:
unknown shorthand flag: 'a' in -a
See 'docker stop --help'.
On Windows, I faced a similar issue and got it working by executing the commands from Windows PowerShell, preferably with admin privileges. (Likely because cmd.exe does not perform $(...) substitution, so docker stop receives the literal tokens and chokes on the stray -a flag.)

Correct way to deploy a container from GitLab to EC2

I am trying to deploy my container from the GitLab registry to an EC2 instance. I managed to deploy it once, but when I change something and want to deploy again, I have to remove the old container and the old images first. For that I created this script to remove everything and deploy again.
...
deploy-job:
stage: deploy
only:
- master
script:
- mkdir -p ~/.ssh
- echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
- chmod 600 ~/.ssh/id_rsa
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ssh -i ~/.ssh/id_rsa ec2-user@$DEPLOY_SERVER "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com &&
docker stop $(docker ps -a -q) &&
docker rm $(docker ps -a -q) &&
docker pull registry.gitlab.com/doesntmatter/demo:latest &&
docker image tag registry.gitlab.com/doesntmatter/demo:latest doesntmatter/demo &&
docker run -d -p 80:8080 doesntmatter/demo"
When I try this script, I got this error:
"docker stop" requires at least 1 argument. <<-------------------- error
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Running after script
00:01
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 1
If you look closer, I use $(docker ps -a -q) after the docker stop.
Questions:
I know this is not the ideal way to do my deploys (I'm a developer); can you please suggest other ways, using just GitLab and EC2?
Is there any way to avoid this error, whether or not there are containers on my machine?
Probably no containers were running when the job was executed.
To avoid this behavior, you can change your commands a bit:
docker ps -a -q | xargs -r sudo docker stop
docker ps -a -q | xargs -r sudo docker rm
These will not produce errors if no containers are running.
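The -r flag (a GNU/BSD xargs extension, long form --no-run-if-empty) is what makes this safe: with empty input the command is never invoked at all. A quick docker-free check, using echo as a stand-in for sudo docker stop:

```shell
#!/bin/sh
# Empty input plus -r: echo never runs, so $empty stays empty.
empty=$(printf '' | xargs -r echo docker stop)

# Non-empty input: the IDs are appended as arguments to the command.
nonempty=$(printf 'abc123 def456\n' | xargs echo docker stop)

echo "empty input produced: '$empty'"        # ''
echo "non-empty input produced: '$nonempty'" # 'docker stop abc123 def456'
```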
Beyond that, there are indeed other ways to deploy a container on AWS, where services like ECS, EKS or Fargate handle containers very well. Also consider Terraform to deploy your infrastructure following the IaC principle (even for your EC2 instance).

get command executed result from docker container

Is there any way for me to know when a command has finished inside a docker container? I have created a docker container and am able to send commands from my local machine into it with docker exec.
So far in my bash script I use sleep to wait until the "cd root; npm install" command finishes inside the container. Without the sleep, "done" is printed right away after npm install is sent to the container. How can I remove the sleep so that "done" is printed only after npm install has finished?
docker exec -d <docker container name> bash -c "cd root;npm install"
sleep 100
echo "done"
Drop the -d (detach) flag so the command runs in the foreground and docker exec only returns when it finishes:
docker exec <docker container name> bash -c "cd root;npm install"
echo "done"
Or run it as a background process with & and then wait for it (note: no -d here, otherwise docker exec itself returns immediately and wait has nothing to wait for):
docker exec <docker container name> bash -c "cd root;npm install" &
wait
echo "done"
If you omit the -d (detach) flag, docker exec returns only after the command completes (not immediately), so no wait is needed in the first form.
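The & plus wait mechanics can be checked without docker; here a backgrounded sleep stands in for the long-running npm install:

```shell
#!/bin/sh
tmpfile=$(mktemp)

# Stand-in for `docker exec <name> bash -c "cd root;npm install"`:
(sleep 1; echo "install finished" > "$tmpfile") &

wait                       # blocks until the background job exits
result=$(cat "$tmpfile")
rm -f "$tmpfile"

echo "$result"             # install finished
echo "done"                # only printed after the job completed
```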
