I have a Gradle task that runs a bash script which in turn cleans up some Docker containers; it's part of a pipeline of tasks.
If I run this task from the command line, it works fine; if I run it from within IntelliJ IDEA, it fails because it cannot find the docker command.
This is the task:
task localDockerClean(type: Exec) {
    executable './cleanDocker.sh'
    ignoreExitValue true
}
This is the script:
#! /bin/bash
echo "Init - Clean Docker Containers"
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
docker system prune --force --volumes
echo "End - Clean Docker Containers"
And this is the output that only happens when run from IntelliJ IDEA:
> Task :scripts:migrations:localDockerClean
Init - Clean Docker Containers
./cleanDocker.sh: line 4: docker: command not found
./cleanDocker.sh: line 4: docker: command not found
./cleanDocker.sh: line 5: docker: command not found
./cleanDocker.sh: line 5: docker: command not found
./cleanDocker.sh: line 6: docker: command not found
End - Clean Docker Containers
It looks like the script has no access to the docker command, which is on my PATH in macOS, but only when run from the IDE; from the command line it works fine.
Any idea how to fix this issue in my IDE?
Thank you!
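For what it's worth, a quick way to see the mismatch is to compare what the terminal resolves against what a GUI-launched app inherits; the docker path below is an assumption (a typical Docker Desktop/Homebrew location), and launching the IDE from a terminal is one common workaround:
$ which docker
/usr/local/bin/docker
$ open -a "IntelliJ IDEA"   # an IDE launched this way inherits the terminal's PATH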
I'm trying to run a bash script from a Docker image on a Mac. Here is my Dockerfile:
FROM bash
ADD app.sh /
ENTRYPOINT ["/bin/bash", "/app.sh"]
Error
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
This is a simple exercise in creating Docker images where I need to execute app.sh when I run docker run.
Any idea what I'm doing wrong?
According to your error message, the file /bin/bash does not exist in your Docker image. Why is this?
The bash image puts the bash executable at /usr/local/bin/bash. Here's how I determined this:
$ docker run -it bash
bash-5.1# which bash
/usr/local/bin/bash
bash-5.1#
I ran the bash image with -it to make it interactive, then used the which command to give me the full path to bash, which is /usr/local/bin/bash.
For that reason, you need to change your Dockerfile like this:
FROM bash
ADD app.sh /
ENTRYPOINT ["/usr/local/bin/bash", "/app.sh"]
I have a container that runs a database migration (source):
FROM golang:1.12-alpine3.10 AS downloader
ARG VERSION
RUN apk add --no-cache git gcc musl-dev
WORKDIR /go/src/github.com/golang-migrate/migrate
COPY . ./
ENV GO111MODULE=on
ENV DATABASES="postgres mysql redshift cassandra spanner cockroachdb clickhouse mongodb sqlserver firebird"
ENV SOURCES="file go_bindata github github_ee aws_s3 google_cloud_storage godoc_vfs gitlab"
RUN go build -a -o build/migrate.linux-386 -ldflags="-s -w -X main.Version=${VERSION}" -tags "$DATABASES $SOURCES" ./cmd/migrate
FROM alpine:3.10
RUN apk add --no-cache ca-certificates
COPY --from=downloader /go/src/github.com/golang-migrate/migrate/build/migrate.linux-386 /migrate
ENTRYPOINT ["/migrate"]
CMD ["--help"]
I want to integrate it into a docker-compose setup and make it dependent on the Postgres database service. However, since I have to wait until the database is fully initialised, I have to wrap the migrate command in a script and thus replace the entrypoint of the migration container. I'm using the wait-for script to poll the database; it's a pure shell (not bash) script and should thus work in an Alpine container.
This is how the service is defined in the docker-compose:
services:
  database:
    # ...
  migration:
    depends_on:
      - database
    image: migrate/migrate:v4.7.0
    volumes:
      - ./scripts/migrations:/migrations
      - ./scripts/wait-for:/wait-for
    entrypoint: ["/bin/sh"]
    command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running docker-compose up on this fails with
migration_1 | /bin/sh: can't open './wait-for database:5432': No such file or directory
Running the migrate container by itself with
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
does work flawlessly; the script is there and can be run with /bin/sh ./wait-for.
So why does it fail as part of the docker-compose?
If you read the error message carefully, you will see that the file that cannot be found is not ./wait-for, it is ./wait-for database:5432. This is consistent with your compose file, where that whole thing is given as the first element of the command list:
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
It's unclear to me what you actually want instead, since the working alternatives presented do not seem to be fully analogous, but possibly it's
command: ["./wait-for", "database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
Running the migrate container by itself does work flawlessly
When you run it like:
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
only the entrypoint /bin/sh is executed.
When you run it using docker-compose:
the entrypoint (/bin/sh) plus the command (./wait-for database:5432) is executed.
./wait-for database:5432 as a whole is taken as the executable to run, and it can't be found; that's why you get the error No such file or directory.
Try to specify an absolute path to wait-for in command: and split ./wait-for database:5432 into "./wait-for", "database:5432".
It's possible that splitting alone will be enough.
As an alternative, you can follow the CMD syntax docs and use the string form of command without an array: command: ./wait-for database:5432 ...
ENTRYPOINT ["/bin/sh"] is not enough, you also need the -c argument.
Example (testing a docker-compose.yml with docker-compose run --rm MYSERVICENAMEFROMTHEDOCKERCOMPOSEFILE bash here):
entrypoint: ["/bin/sh"]
Throws:
/bin/sh: 0: cannot open bash: No such file
ERROR: 2
And some wrong syntax examples like
entrypoint: ["/bin/sh -c"]
(wrong!)
or
entrypoint: ["/bin/sh, -c"]
(wrong!)
throw errors:
starting container process caused: exec: "/bin/sh, -c": stat /bin/sh, -c: no such file or directory: unknown
ERROR: 1
starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
ERROR: 1
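The same class of failure can be reproduced in a plain shell, where the whole quoted string, space included, is looked up as a single executable path (illustrative; exact wording varies by shell):
$ '/bin/sh -c' 'echo hello'
sh: /bin/sh -c: No such file or directory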
In docker-compose or Dockerfile, for an entrypoint, you need the -c argument.
This is right:
entrypoint: "/bin/sh -c"
or:
entrypoint: ["/bin/sh", "-c"]
The -c makes clear that the shell should execute a command supplied as the next argument on the command line, rather than just starting /bin/sh on its own. You can read that between the lines at What is the difference between CMD and ENTRYPOINT in a Dockerfile?.
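In other words, -c makes the shell treat its next argument as the command string to run; a minimal illustration:
$ /bin/sh -c 'echo "hello from sh -c"'
hello from sh -c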
My problem is that the bash script I created gets the error "/bin/sh: eval: line 88: ./deploy.sh: not found" on GitLab. Below is my sample .gitlab-ci.yml.
I suspect that GitLab CI does not support bash scripts.
image: docker:latest
variables:
  IMAGE_NAME: registry.gitlab.com/$PROJECT_OWNER/$PROJECT_NAME
  DOCKER_DRIVER: overlay
services:
  - docker:dind
stages:
  - deploy
before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
  - docker pull $IMAGE_NAME:$CI_BUILD_REF_NAME || true
production-deploy:
  stage: deploy
  only:
    - master@$PROJECT_OWNER/$PROJECT_NAME
  script:
    - echo "$PRODUCTION_DOCKER_FILE" > Dockerfile
    - docker build --cache-from $IMAGE_NAME:$CI_BUILD_REF_NAME -t $IMAGE_NAME:$CI_BUILD_REF_NAME .
    - docker push $IMAGE_NAME:$CI_BUILD_REF_NAME
    - echo "$PEM_FILE" > deploy.pem
    - echo "$PRODUCTION_DEPLOY" > deploy.sh
    - chmod 600 deploy.pem
    - chmod 700 deploy.sh
    - ./deploy.sh
  environment:
    name: production
    url: https://www.example.com
And this is my deploy.sh:
#!/bin/bash
ssh -o StrictHostKeyChecking=no -i deploy.pem ec2-user@targetIPAddress << 'ENDSSH'
# commands go here
ENDSSH
All I want is to execute deploy.sh after docker push, but unfortunately I get this /bin/bash-related error.
I'd really appreciate any help solving the "/bin/sh: eval: line 88: ./deploy.sh: not found" error from this GitLab CI bash script.
This is probably related to the fact that you are using Docker-in-Docker (docker:dind). Your deploy.sh requests /bin/bash as the script executor, which is NOT present in that image.
You can test this locally on your computer with Docker:
docker run --rm -it docker:dind bash
It will report an error, so rewrite the first line of deploy.sh to:
#!/bin/sh
After fixing that, you will run into the problem that the other answer here addresses: ssh is not installed either. You will need to fix that too!
docker:latest is based on Alpine Linux, which is very minimal and does not have much installed by default. For example, ssh is not available out of the box, so if you want to use ssh commands you need to install it first. In your before_script, add:
- apk update && apk add openssh
Thanks. This worked for me after also adding bash:
before_script:
- apk update && apk add bash
Let me know if that still doesn't work for you.
According to the GitLab CI documentation, the bash shell is supported on Windows.
Supported systems by different shells:

            Bash    Windows Batch   PowerShell
Windows     ✓       ✓ (default)     ✓
In my config.toml, I have tried:
[[runners]]
  name = "myTestRunner"
  url = "xxxxxxxxxxxxxxxxxxx"
  token = "xxxxxxxxxxxxxxxxxx"
  executor = "shell"
  shell = "bash"
But if my .gitlab-ci.yml attempts to execute a bash script, for example:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - echo $PWD
  tags:
    - myTestRunner
And then, from the folder containing the GitLab multi-runner, I right-click, select 'Git Bash Here', and type:
gitlab-runner.exe exec shell testJob
It cannot resolve $PWD, proving it is not actually using a bash executor. (Git Bash can normally print $PWD correctly on Windows.)
Running with gitlab-runner 10.6.0 (a3543a27)
Using Shell executor...
Running on G0329...
Cloning repository...
Cloning into 'C:/GIT/CI_dev_project/builds/0/project-0'...
done.
Checking out 8cc3343d as bashFromBat...
Skipping Git submodules setup
$ echo $PWD
$PWD
Job succeeded
The same thing happens if I push a commit and the web-based GitLab CI terminal automatically runs the .gitlab-ci.yml script.
How do I correctly use the Bash terminal in GitLab CI on Windows?
Firstly, my guess is that it is not working as it should (see the comment below your question). I found a workaround; maybe it is not what you need, but it works. For some reason the command "echo $PWD" is concatenated after the bash command and the combined line is executed in a Windows cmd shell, which does not expand $PWD; that is why the result is the literal "$PWD". To replicate it, execute the following in a CMD console (bash opens, and once you exit it, cmd itself runs the echo):
bash && echo $PWD
The solution is to execute the command inside bash with the -c option (it is not the ideal solution, but it works). The .gitlab-ci.yml should be:
stages:
  - Stage1

testJob:
  stage: Stage1
  when: always
  script:
    - bash -c "echo $PWD"
  tags:
    - myTestRunner
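The quoting is what makes this work: cmd does not expand $PWD, so the string reaches bash untouched and bash expands it itself. An illustrative run from a Windows CMD prompt (the path shown is hypothetical):
C:\> bash -c "echo $PWD"
/c/GIT/CI_dev_project/builds/0/project-0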