How to run a bash file in a Dockerfile?

I'm trying to execute a bash file called start.bash with CMD ["bash", "start.bash"] in my Dockerfile. When I build the image and run the container, this command is not executed for some reason, even though the bash file is, of course, copied properly in the Dockerfile.
The point is that when I run the same command inside the container itself, it succeeds.
Here is my Dockerfile:
# build back end
FROM node:12.22.12 AS server_build
###ENV HOSTNAME myhost
WORKDIR /VideoServiceApp
COPY ./projConf.json /VideoServiceApp
COPY ./projVideoApp.json /VideoServiceApp
COPY ./front/UIVideo ./front/UIVideo
COPY ./front/videoService ./front/videoService
COPY --from=client_build /VideoServiceApp/front/video/dist/video /VideoServiceApp/front/video/dist/video
COPY ./start.bash /VideoServiceApp
COPY ./classes ./classes
WORKDIR /VideoServiceApp/front/UIVideo
RUN npm install
WORKDIR /VideoServiceApp/front/videoService
RUN npm install && npm install -g typescript@latest
RUN tsc
EXPOSE 7717 7708
WORKDIR /VideoServiceApp
CMD ["bash" , "start.bash"]

You need to specify the full path to the executable when you use the exec (JSON array) form of CMD:
CMD ["/bin/bash", "start.bash"]
You could also switch to the shell form of CMD:
CMD bash start.bash
For more information, read the CMD documentation.
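As a minimal sketch, assuming start.bash is copied into /VideoServiceApp exactly as in the Dockerfile above (keep only one of the two CMD lines):
WORKDIR /VideoServiceApp
COPY ./start.bash /VideoServiceApp
# exec (JSON array) form: no shell is involved, so give the interpreter's full path
CMD ["/bin/bash", "start.bash"]
# shell form: runs through /bin/sh -c, which resolves bash from PATH
#CMD bash start.bash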

Related

Docker entrypoint.sh not found

Following the instructions as outlined to deploy Duo CloudMapper to an AWS environment, I am getting an error.
Dockerfile:
FROM python:3.7-slim as cloudmapper
LABEL maintainer="https://github.com/0xdabbad00/"
LABEL Project="https://github.com/duo-labs/cloudmapper"
WORKDIR /opt/cloudmapper
ENV AWS_DEFAULT_REGION=us-east-1
RUN apt-get update -y
RUN apt-get install -y build-essential autoconf automake libtool python3.7-dev python3-tk jq awscli
COPY cloudmapper/. /opt/cloudmapper
COPY entrypoint.sh /opt/cloudmapper/entrypoint.sh
# Remove the demo data
RUN rm -rf /opt/cloudmapper/account-data/demo
# Install the python libraries needed for CloudMapper
RUN cd /opt/cloudmapper && pip install -r requirements.txt
ENTRYPOINT /opt/cloudmapper/entrypoint.sh
Now, building the Docker image:
C:\> docker build -t cloudmapper .
When I run the container using the command below, I get an error:
C:\> docker run -t cloudmapper
Error
/bin/sh: 1: /opt/cloudmapper/entrypoint.sh: not found
I verified that the file exists in the appropriate location.
I am using Docker on Windows 10.
The image in the Dockerfile is python:3.7-slim.
bash can return "file not found" when:
- the entrypoint shell script is not marked executable for the current user
- the hash-bang in the entrypoint shell script points to a binary that does not exist
- the shell script actually does not exist
You can fix the first problem by using the --chmod flag on COPY to ensure the executable bit is set. Even if the user is root, at least one executable bit must be set.
COPY --chmod=0755 *.sh /opt/cloudmapper/
ENTRYPOINT ["/opt/cloudmapper/entrypoint.sh"]
P.S. The integrated COPY --chmod only works with BuildKit-enabled builds, so you might need to force BuildKit, or split the chmod into a separate explicit RUN step, as sketched below.
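If BuildKit is not available (it can be forced by setting the DOCKER_BUILDKIT=1 environment variable before the build), a sketch of the split variant, reusing the paths from this question:
COPY entrypoint.sh /opt/cloudmapper/entrypoint.sh
# set the executable bit in a separate layer instead of relying on COPY --chmod
RUN chmod 0755 /opt/cloudmapper/entrypoint.sh
ENTRYPOINT ["/opt/cloudmapper/entrypoint.sh"]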
The second issue can be dealt with by ensuring the first line of entrypoint.sh uses sh rather than bash if you are using a lightweight base image, such as alpine, that does not ship bash:
#!/bin/sh
set -e
# etc
Also, especially on Windows, ensure ALL files, and in particular the entrypoint .sh file, are saved with UTF-8 encoding and LF-style line endings. Linux does not treat the CR as part of the line ending, so it will try to execute /bin/sh<cr> as the shell, which clearly doesn't exist.
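If fixing the editor or git settings is not practical, one workaround (a sketch only, reusing the path from this question) is to strip the CR characters during the build:
COPY entrypoint.sh /opt/cloudmapper/entrypoint.sh
# remove Windows CR characters so the shebang resolves to /bin/sh rather than /bin/sh<CR>
RUN sed -i 's/\r$//' /opt/cloudmapper/entrypoint.sh
Alternatively, a .gitattributes rule such as *.sh text eol=lf keeps the checkout LF-only in the first place.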
As for the file genuinely not existing, verify that entrypoint.sh is copied into a location that is on the PATH, or that the ENTRYPOINT directive uses a fully qualified path.

Bash commands are ignored in my ENTRYPOINT Docker bash script

I have a few libraries that I have compiled on my machine. I want to copy all the binaries into my Docker container, and at first I tried to use the COPY and ADD commands in my Dockerfile:
# Installing zeromq
WORKDIR /${home}/${user}/master-wheel
COPY ${PWD}/libzmq ./libzmq
COPY ${PWD}/cppzmq ./cppzmq
WORKDIR /${home}/${user}/master-wheel/libzmq/binaries
ADD * /
WORKDIR /${home}/${user}/master-wheel/cppzmq/binaries
ADD * /
Note that the directories and files do exist: upon entering the created container, I can see that the copied directories libzmq and cppzmq are there, and I can manually copy all the binaries to the root /. However, for some reason the Dockerfile doesn't copy them, and I can't figure out what the problem could be.
Then I decided to do it inside my ENTRYPOINT script, which looks like this:
#!/bin/bash
#set -e
#set -u
echo "==> Executing master image entrypoint ..."
echo "-> Setting up"
cp -r /home/ed/master-wheel/libzmq/binaries/* /
cp -r /home/ed/master-wheel/cppzmq/binaries/* /
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
ldconfig
echo "==> Container ready"
exec "$#"
Everything except the two cp commands executes. I tried the same cp commands from my container's bash terminal and they worked.
What could be the problem?
EDIT:
This part of the file works, and the binaries really do get copied to the root directory:
# Installing libfreespace
WORKDIR /${home}/${user}/master-wheel
COPY ${PWD}/libfreespace ./libfreespace
WORKDIR /${home}/${user}/master-wheel/libfreespace/binaries
COPY * /
EDIT 2:
It seems that if I do something like this:
WORKDIR /${home}/${user}/master-wheel
COPY ${PWD}/libzmq ./libzmq
COPY ${PWD}/cppzmq ./cppzmq
WORKDIR /${home}/${user}/master-wheel/libzmq/binaries/usr/
ADD * /usr/
WORKDIR /${home}/${user}/master-wheel/cppzmq/binaries/usr/
ADD * /usr/
It works.
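One way to narrow down the failing cp calls (a debugging sketch, not a fix) would be to make the ENTRYPOINT script trace itself and fail loudly, using the same paths as above:
#!/bin/bash
set -ex                                        # -e: stop at the first error, -x: echo each command
ls -la /home/ed/master-wheel/libzmq/binaries   # confirm the source path really exists at run time
cp -rv /home/ed/master-wheel/libzmq/binaries/* /
cp -rv /home/ed/master-wheel/cppzmq/binaries/* /
If the ls fails, or the verbose cp output lists nothing, the problem is with what actually ended up in the image rather than with cp itself.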

Docker Build Failed "chmod: cannot access '/main.sh': No such file or directory"

This is the error I'm getting after the build command:
Step 7/9 : RUN chmod +x /main.sh
---> Running in 6e880a009c7d
chmod: cannot access '/main.sh': No such file or directory
The command '/bin/sh -c chmod +x /main.sh' returned a non-zero code: 1
and here is my docker file
FROM centos:latest
MAINTAINER Aditya Gupta
#install git
RUN yum -y update
RUN yum -y install git
#make git repo folder, change GIT_LOCATION
RUN mkdir -p /home/centos/doimages/dockimg;cd /home/centos/doimages/dockimg;
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Run chmod +x ./main.sh
RUN echo " ./main.sh\n "
EXPOSE Portnumber
When you perform a RUN step in a Dockerfile, a temporary container is launched, often with a shell parsing your command. When that command finishes, the container exits, and docker packages the filesystem changes as an image layer. That process is repeated from the beginning for each RUN line.
The key piece there is the shell exits, losing environment variables you've set, background processes you've run, and in this case, the current working directory you tried to set here:
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Instead of a cd in a RUN command, you can update the value of WORKDIR:
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername)
WORKDIR foldername
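With the working directory set, the later steps from the question resolve their relative paths inside the cloned repository. A sketch, assuming main.sh sits at the repository root (the CMD line is only a guess at the intended final command):
RUN git clone https://(username):(password)@gitlab.com/abc/xyz.git (foldername)
WORKDIR (foldername)
# the script is now found relative to the new working directory
RUN chmod +x ./main.sh
CMD ["./main.sh"]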
You are trying to execute a shell file that does not exist in your Docker image. Use the ADD command to add your script to the image:
-- somewhere inside your Dockerfile, before the execution --
ADD ./PATH/ON/HOST/main.sh /PATH/YOU/LIKE/ON/DOCKER/MACHINE
Then try building your Docker image again.
The issue was resolved by using WORKDIR and by cloning manually outside the Dockerfile, then giving the path to main.sh in the Dockerfile.

Docker run fails with ruby

I have a little problem. When I run my container like this:
docker run -it emails_request cucumber -t @teste_inserindo_email
It's ok.
But when I run this:
docker run -it emails_request
where my @teste_inserindo_email tag is referenced in my Dockerfile:
WORKDIR /app
COPY Gemfile .
RUN bundle install && bundle clean
COPY . /app
EXPOSE 80
RUN cucumber -t @teste_inserindo_email
#CMD ["cucumber", "-t", "@teste_inserindo_email"]
the tag is not found, and it returns:
$ docker run -t emails_request
irb(main):001:0>
Or:
$ docker run emails_request
Switch to inspect mode.
What's your question exactly? You can just run it manually by firing up a container with an interactive terminal from your image and then running the commands you want, or have a script in the image (or mounted as a volume) and pass that script as the entry command instead.
docker run -it IMAGE_ID bash (for running manual commands)
If you want to use a script instead, add an ENTRYPOINT script to your Dockerfile, along these lines:
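A minimal sketch of that approach, assuming the image's WORKDIR is /app as in the question (entrypoint.sh here is a hypothetical file name):
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
and entrypoint.sh itself:
#!/bin/sh
set -e
# run the tagged cucumber scenario by default, passing along any extra arguments from docker run
exec cucumber -t @teste_inserindo_email "$@"
With this in place, docker run emails_request should run the tagged scenario instead of dropping into irb.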

Difference between "docker build" and "docker run" when the Dockerfile runs .sh files

This is my Dockerfile
# This Dockerfile describes the standard way to build
FROM centos:latest
MAINTAINER praveen
# Run as root to allow "rpm"
USER root
WORKDIR /root/
# Get the ACE-TAO rpm from seachange repo
COPY TAO-1.7.7-0.x86_64.rpm /root/TAO-1.7.7-0.x86_64.rpm
# Install the rpm
RUN rpm -ivh /root/TAO-1.7.7-0.x86_64.rpm
#Start the TAO service
#CMD /etc/init.d/tao start
COPY namingServiceConfig.sh /
RUN /namingServiceConfig.sh
EXPOSE 13021
EXPOSE 13022
EXPOSE 13023
ENV NS_PORTS=13021,13022,13023
#ENTRYPOINT /etc/init.d/tao start && bash
While doing the docker build:
Will it execute the shell script and reflect the changes as part of the image, or will the changes only show up at the container level when the image is run with docker run?
In my case, I suspect that it is executing at both docker build and docker run time.
I'm using the commands below for building and running, via a Vagrantfile:
d.build_image "/vagrant/tao", args: " -t tao/basic"
d.run "tao/basic:latest",
args: " -t -d"\
" --name tao-basic"\
" -p 13021:13021"\
" -e NS_PORT=13025,13026,13027"
Let me know if you need any more information.
The Dockerfile instructions (such as RUN) are actioned at build time (docker build -t something . and so on). Only the CMD and ENTRYPOINT instructions happen at run time (when the container is started).
In your example the shell script will get run as part of the build and whatever changes occur will be committed as a new layer in the image.
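A sketch using the names from this question (the CMD line is only illustrative, taken from the commented-out line in the Dockerfile):
COPY namingServiceConfig.sh /
# executed once, during docker build; any filesystem changes become an image layer
RUN /namingServiceConfig.sh
# executed each time a container starts with docker run; nothing here runs during the build
CMD ["/etc/init.d/tao", "start"]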
