I have a custom package with an install.sh script, which I want to run while building a Docker image (meaning: put ./install.sh inside the Dockerfile). I could have run it when the container starts, but I want the image itself to contain the required packages (which are listed in the install script).
What I tried:
RUN /bin/sh/ -c "./install.sh"
RUN ./install.sh
It errors out saying -
/bin/sh install.sh not found
or
/bin/sh ./install.sh not found
This might be a duplicate question, but I haven't found an answer anywhere. Any help would be appreciated.
You must copy your install.sh into the Docker image with this command in your Dockerfile:
COPY install.sh /tmp
Then use your RUN command to run it:
RUN /bin/sh -c "/tmp/install.sh"
or
RUN sh /tmp/install.sh
Don't forget to make install.sh executable before running it:
chmod +x /tmp/install.sh
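Putting it together, a minimal Dockerfile sketch (the base image here is only an example; use whatever your package needs):
FROM debian:stable-slim
# copy the script into the image, then make it executable and run it at build time
COPY install.sh /tmp/install.sh
RUN chmod +x /tmp/install.sh && /tmp/install.sh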
Related
Following the instructions as outlined to deploy Duo CloudMapper to an AWS environment, I am getting an error.
Dockerfile
FROM python:3.7-slim as cloudmapper
LABEL maintainer="https://github.com/0xdabbad00/"
LABEL Project="https://github.com/duo-labs/cloudmapper"
WORKDIR /opt/cloudmapper
ENV AWS_DEFAULT_REGION=us-east-1
RUN apt-get update -y
RUN apt-get install -y build-essential autoconf automake libtool python3.7-dev python3-tk jq awscli
COPY cloudmapper/. /opt/cloudmapper
COPY entrypoint.sh /opt/cloudmapper/entrypoint.sh
# Remove the demo data
RUN rm -rf /opt/cloudmapper/account-data/demo
# Install the python libraries needed for CloudMapper
RUN cd /opt/cloudmapper && pip install -r requirements.txt
ENTRYPOINT /opt/cloudmapper/entrypoint.sh
Now building the docker image
C:\> docker build -t cloudmapper .
When I run the docker using the below command I get an error
C:\> docker run -t cloudmapper
Error
/bin/sh: 1: /opt/cloudmapper/entrypoint.sh: not found
Verified that the file exists in the appropriate location
Using Docker on Windows 10
Image in the dockerfile is python:3.7-slim
bash can return "file not found" when
the entrypoint shell script is not marked executable for the current user
the hash bang in the entrypoint shell script points to a binary that does not exist
the shell script actually does not exist.
You can fix the first problem by using the --chmod flag on COPY to ensure the executable bit is set. Even if the user is root, at least one executable bit must be set.
COPY --chmod=0755 *.sh /opt/cloudmapper/
ENTRYPOINT ["/opt/cloudmapper/entrypoint.sh"]
P.S. This integrated COPY --chmod only works with BuildKit-enabled builds, so you might need to force BuildKit, or split the chmod into a separate explicit RUN step.
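For example, without BuildKit the same effect can be achieved with a separate RUN step (a sketch):
COPY *.sh /opt/cloudmapper/
# set the executable bit explicitly, since COPY alone keeps whatever mode the host file had
RUN chmod 0755 /opt/cloudmapper/*.sh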
The 2nd issue can be dealt with by ensuring the first line of entrypoint.sh uses sh rather than bash if you are using a lightweight base image like alpine:
#!/bin/sh
set -e
# etc
Also, especially on Windows, ensure ALL files, above all the entrypoint .sh file, are saved with UTF-8 encoding and LF-style line endings. Linux doesn't understand the CR, so it will try to execute /bin/sh<cr> as the shell, which clearly doesn't exist.
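If the file already has CRLF endings, one way to normalise it during the build (a sketch; dos2unix or your editor's line-ending setting works just as well):
# strip carriage returns so the kernel looks for /bin/sh, not /bin/sh<cr>
RUN sed -i 's/\r$//' /opt/cloudmapper/entrypoint.sh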
As for the file not existing, verify that entrypoint.sh is being copied into a location that is on the PATH, or that the ENTRYPOINT directive uses a fully qualified path.
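A quick way to check this during the build is a temporary debug step (a sketch; remove it once the path is confirmed):
# fails the build early if the file was not copied where the ENTRYPOINT expects it
RUN ls -l /opt/cloudmapper/entrypoint.sh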
This is the error I'm getting after the build command:
Step 7/9 : RUN chmod +x /main.sh
---> Running in 6e880a009c7d
chmod: cannot access '/main.sh': No such file or directory
The command '/bin/sh -c chmod +x /main.sh' returned a non-zero code: 1
And here is my Dockerfile:
FROM centos:latest
MAINTAINER Aditya Gupta
#install git
RUN yum -y update
RUN yum -y install git
#make git repo folder, change GIT_LOCATION
RUN mkdir -p /home/centos/doimages/dockimg;cd /home/centos/doimages/dockimg;
RUN git clone https://(username):(password)#gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Run chmod +x ./main.sh
RUN echo " ./main.sh\n "
EXPOSE Portnumber
When you perform a RUN step in a Dockerfile, a temporary container is launched, often with a shell parsing your command. When that command finishes, the container exits, and docker packages the filesystem changes as an image layer. That process is repeated from the beginning for each RUN line.
The key piece there is the shell exits, losing environment variables you've set, background processes you've run, and in this case, the current working directory you tried to set here:
RUN git clone https://(username):(password)#gitlab.com/abc/xyz.git (foldername);cd (foldername)/
Instead of a cd in a RUN command, you can update the value of WORKDIR:
RUN git clone https://(username):(password)#gitlab.com/abc/xyz.git (foldername)
WORKDIR foldername
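Every instruction after that then runs from the cloned directory, so for example (a sketch, reusing the script name from the Dockerfile above):
# relative paths like ./main.sh now resolve inside the cloned repository
RUN chmod +x ./main.sh
RUN ./main.sh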
You are trying to execute a shell file which does not exist in your Docker image. Use the ADD command to add your script to your Docker image:
-- somewhere inside your Dockerfile, before the execution --
ADD ./PATH/ON/HOST/main.sh /PATH/YOU/LIKE/ON/DOCKER/MACHINE
Then try to build your Docker image again.
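Once the script is inside the image, the chmod step from the original Dockerfile can find it, for example (paths are illustrative):
ADD ./main.sh /main.sh
# now /main.sh exists inside the image and can be made executable
RUN chmod +x /main.sh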
The issue was resolved by using WORKDIR, cloning the repository manually outside the Dockerfile, and then giving the path to main.sh in the Dockerfile.
It's been a few days now, but I really can't understand how to run a bash script correctly in ubuntu/xenial64 using Docker. Any clarification would be much appreciated.
I created a Dockerfile like this
FROM ubuntu:16.04
COPY setup.sh /setup.sh
RUN chmod +x /setup.sh
ENTRYPOINT [ "/setup.sh" ]
The error returned is: standard_init_linux.go:195: exec user process caused "no such file or directory"
But why? If I run ls, the file is correctly placed in the root. I also tried using CMD ["/setup.sh"]. My script file has a shebang like this: #!/bin/bash.
I have the following docker setup:
python27.Dockerfile
FROM python:2.7
COPY ./entrypoint.sh /entrypoint.sh
RUN mkdir /src
RUN apt-get update && apt-get install -y bash libmysqlclient-dev python-pip build-essential && pip install virtualenv
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
WORKDIR /src
CMD source /src/env/bin/activate && python /src/manage.py runserver
entrypoint.sh
#!/bin/bash
# some code here...
# some code here...
# some code here...
exec "$@"
Whenever I try to run my docker container I get python27 | /bin/sh: 1: source: not found.
I understand that the error comes from the fact that the command is run with sh instead of bash, but I can't understand why that is happening, given that I have the correct shebang at the top of my entrypoint.
Any ideas why is that happening and how can I fix it?
The problem is that for CMD you're using the shell form, which runs the command with /bin/sh, and the command uses source, which isn't available in POSIX /bin/sh (the equivalent builtin is just .).
You must use the exec form for CMD, using brackets:
CMD ["/bin/bash", "-c", "source /src/env/bin/activate && python /src/manage.py runserver"]
More details in:
https://docs.docker.com/engine/reference/builder/#run
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
I have a script that I need to run inside the container, and somehow it only runs if I run it inside a bash --login shell.
I normally build my image with docker build -t sometags . and I noticed it only runs bash without --login.
I know I can just use bash -l -c "some-command-here", but I'd say that's my final fallback if nothing else helps.
So, tl;dr: how can I achieve something like this in my Dockerfile?
#dockerfile
RUN bash --login
RUN some-script
and then, I'll just run it with: docker build -t x/y:z .
Updated:
The scripts I want to run do things like gem install bundler and bundle install.
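For reference, a minimal sketch of the bash -l -c fallback mentioned above, applied to those commands (each RUN step starts its own shell, so the login option has to wrap the command itself):
RUN bash -l -c "gem install bundler"
RUN bash -l -c "bundle install"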