How can I solve "crontab: your UID isn't in the passwd file. bailing out."? - ruby

Hi, I'm using Docker and the whenever gem to write cron schedule rules, but when I run whenever --update-crontab in my Docker container I get this error:
crontab: your UID isn't in the passwd file.
bailing out.
[fail] Couldn't write crontab; try running `whenever' with no options to ensure your schedule file is valid.
Dockerfile
FROM ruby:2.4.1-slim
RUN apt-get update && apt-get -y install cron
ENV RAILS_ENV production
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile Gemfile.lock ./
RUN bundle install --binstubs --jobs 20 --retry 5
COPY . .
RUN chown -R nobody:nogroup /app
USER nobody
# use docker run -it --entrypoint="" demo "ls -la" to skip
EXPOSE 3000
CMD puma -C config/puma.rb
Docker Version: Docker version 17.05.0-ce, build 89658be
My Docker compose file
chatbot_web:
  container_name: chatbot_web
  depends_on:
    - postgres
    - chatbot_redis
    - chatbot_lita
  user: "1000:1000"
  build: .
  image: dpe/chatbot
  ports:
    - '3000:3000'
  volumes:
    - '.:/app'
  restart: always
How can I solve this?
EDIT:
When I use:
host$ docker run -it dpe/chatbot bash
container $ whenever --update-cron
[write] crontab file updated
Works, but when I use:
host$ docker exec -it chatbot_web bash
I have no name!@352c6a7500d2:/app$ whenever --update-cron
crontab: your UID isn't in the passwd file.
bailing out.
[fail] Couldn't write crontab; try running `whenever' with no options to ensure your schedule file is valid.
Doesn't work =(

To fix this, I used the same user in the Dockerfile and in docker-compose:
Dockerfile
RUN chown -R nobody:nogroup /app
USER nobody
Docker Compose
chatbot_web:
  user: "nobody:nogroup"

Related

/bin/sh: No such file or directory when setting a docker-compose entrypoint

I have a container that runs a database migration (source):
FROM golang:1.12-alpine3.10 AS downloader
ARG VERSION
RUN apk add --no-cache git gcc musl-dev
WORKDIR /go/src/github.com/golang-migrate/migrate
COPY . ./
ENV GO111MODULE=on
ENV DATABASES="postgres mysql redshift cassandra spanner cockroachdb clickhouse mongodb sqlserver firebird"
ENV SOURCES="file go_bindata github github_ee aws_s3 google_cloud_storage godoc_vfs gitlab"
RUN go build -a -o build/migrate.linux-386 -ldflags="-s -w -X main.Version=${VERSION}" -tags "$DATABASES $SOURCES" ./cmd/migrate
FROM alpine:3.10
RUN apk add --no-cache ca-certificates
COPY --from=downloader /go/src/github.com/golang-migrate/migrate/build/migrate.linux-386 /migrate
ENTRYPOINT ["/migrate"]
CMD ["--help"]
I want to integrate it into a docker-compose setup and make it dependent on the Postgres database service. However, since I have to wait until the database is fully initialised, I have to wrap the migrate command in a script and thus replace the entrypoint of the migration container. I'm using the wait-for script to poll the database; it is a pure shell (not bash) script and should thus work in an Alpine container.
This is how the service is defined in the docker-compose:
services:
  database:
    # ...
  migration:
    depends_on:
      - database
    image: migrate/migrate:v4.7.0
    volumes:
      - ./scripts/migrations:/migrations
      - ./scripts/wait-for:/wait-for
    entrypoint: ["/bin/sh"]
    command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running docker-compose up on this fails with
migration_1 | /bin/sh: can't open './wait-for database:5432': No such file or directory
Running the migrate container by itself with
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
does work flawlessly; the script is there and can be run with /bin/sh ./wait-for.
So why does it fail as part of the docker-compose?
If you read the error message carefully, you will see that the file that cannot be found is not ./wait-for, it is ./wait-for database:5432. This is consistent with your input file, where that whole thing is given as the first element of the command list:
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
It's unclear to me what you actually want instead, since the working alternatives presented do not seem to be fully analogous, but possibly it's
command: ["./wait-for", "database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test#database:5432/test?sslmode=disable", "-verbose", "up"]
Running the migrate container by itself does work flawlessly
When you run it like:
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
the entrypoint /bin/sh is executed.
When you run it using docker-compose:
the entrypoint (/bin/sh) plus the command (./wait-for database:5432 ...) is executed.
./wait-for database:5432 as a whole is taken to be the executable to run, and it can't be found; that's why you get the error No such file or directory.
Try to specify an absolute path to wait-for in command: and split ./wait-for database:5432 into "./wait-for", "database:5432".
It's possible that splitting alone will be enough.
As an alternative, you can follow the CMD syntax docs and use the non-array command syntax: command: ./wait-for database:5432 ...
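For illustration, the corrected service could look roughly like this (a sketch only, reusing the image, volumes and connection string from the question; with the non-array form docker-compose splits the arguments on whitespace):
migration:
  depends_on:
    - database
  image: migrate/migrate:v4.7.0
  volumes:
    - ./scripts/migrations:/migrations
    - ./scripts/wait-for:/wait-for
  # /bin/sh runs the wait-for script, which execs the migrate command once the database answers
  entrypoint: ["/bin/sh"]
  command: ./wait-for database:5432 -- ./migrate -path /migrations -database postgres://test:test@database:5432/test?sslmode=disable -verbose up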
ENTRYPOINT ["/bin/sh"] is not enough, you also need the -c argument.
Example (tested here against a docker-compose.yml with docker-compose run --rm MYSERVICENAMEFROMTHEDOCKERCOMPOSEFILE bash):
entrypoint: ["/bin/sh"]
Throws:
/bin/sh: 0: cannot open bash: No such file
ERROR: 2
And some wrong syntax examples like
entrypoint: ["/bin/sh -c"]
(wrong!)
or
entrypoint: ["/bin/sh, -c"]
(wrong!)
throw errors:
starting container process caused: exec: "/bin/sh, -c": stat /bin/sh, -c: no such file or directory: unknown
ERROR: 1
starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
ERROR: 1
In docker-compose or Dockerfile, for an entrypoint, you need the -c argument.
This is right:
entrypoint: "/bin/sh -c"
or:
entrypoint: ["/bin/sh", "-c"]
The -c flag tells /bin/sh to read the command to run from its next argument, instead of just starting the shell on its own and waiting for input. You can read that between the lines at What is the difference between CMD and ENTRYPOINT in a Dockerfile?.
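Applied to the question above, that would look something like this (a hedged sketch; with -c the whole command string is handed to the shell as a single argument):
entrypoint: ["/bin/sh", "-c"]
command: ["./wait-for database:5432 -- ./migrate -path /migrations -database postgres://test:test@database:5432/test?sslmode=disable -verbose up"]
Note that with this form ./wait-for must itself be executable, because the shell execs it rather than receiving it as a script argument.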

Running multiple ROS processes in a Docker container

I want to create a bash script that installs all required software to run Docker, creates a new image, and then runs all required processes in a container. My bash script looks like this:
#! /bin/sh
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo gpasswd -a $USER docker
docker pull ros:indigo-robot
docker build -t myimage .
docker run --name myimage-cont -dit myimage
And the Dockerfile:
FROM ros:indigo-robot
RUN apt-get update && apt-get install -y \
git \
ros-indigo-ardrone-autonomy
I am new to Docker and do not know the best practices, but what I need to achieve is running 3 different processes at the same time:
- roscore
- rosrun ardrone_autonomy ardrone_driver
- rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once
I was able to achieve this 'manually' by opening 3 terminals and executing docker exec myimage-cont... commands. However, I need this to run automatically once I execute my bash script. What is the best way to do it?
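For reference, the manual docker exec approach can be scripted like this (a sketch only, assuming the container name myimage-cont from the script above and the standard ROS Indigo setup file location; docker exec does not run the image entrypoint, so each command sources the ROS environment itself):
# start the ROS master in the background
docker exec -d myimage-cont bash -c "source /opt/ros/indigo/setup.bash && roscore"
# give roscore a moment to come up (the delay is a rough guess)
sleep 5
# start the drone driver in the background
docker exec -d myimage-cont bash -c "source /opt/ros/indigo/setup.bash && rosrun ardrone_autonomy ardrone_driver"
# publish the takeoff message once
docker exec myimage-cont bash -c "source /opt/ros/indigo/setup.bash && rostopic pub ardrone/takeoff std_msgs/Empty '{}' --once"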

Run bash command before running container

I want to run a pre-existing Docker image like so:
docker run -d --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
So there is no Dockerfile that I control for this image. However, I would like to copy some files into this container.
If I did control the Dockerfile, I would like to run these commands:
RUN mkdir -p /root/cdt-tests/csv-data
COPY ./csv-data/* /root/cdt-tests/csv-data
Is there a way to run those commands in the same line as the Docker run command above?
I tried this:
docker run -d --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
docker exec cdt-selenium mkdir -p /root/cdt-tests/csv-data
docker cp cdt-selenium:/root/cdt-tests/csv-data ./csv-data
but I get a permissions error on the docker exec line
Every Dockerfile starts with a FROM line, and that can point to any other image. So you can make a Dockerfile with:
FROM selenium/standalone-firefox:3.4.0-chromium
USER root
RUN mkdir -p /root/cdt-tests/csv-data
COPY ./csv-data/* /root/cdt-tests/csv-data
USER seluser
That will build your own image with your commands already applied.
You'd build it and create your own tag:
docker build -t alexander/selenium:3.4.0-chromium .
And then run it:
docker run -d --name cdt-selenium alexander/selenium:3.4.0-chromium
Edit: the exec command you ran failed because docker runs this container as a different user. You can see that in their Dockerfile. To solve that, run the exec with the root user option (-u root):
docker exec -u root cdt-selenium mkdir -p /root/cdt-tests/csv-data
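Putting the pieces together, the exec/cp route could look roughly like this (a sketch, assuming the CSV files live in ./csv-data on the host; note that docker cp here copies from the host into the container, the opposite direction of the cp in the question):
docker run -d --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
# create the target directory as root, since the image runs as seluser by default
docker exec -u root cdt-selenium mkdir -p /root/cdt-tests/csv-data
# copy the host files into the running container
docker cp ./csv-data/. cdt-selenium:/root/cdt-tests/csv-data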

Docker: RUN touch doesn't create file

While trying to debug a RUN statements in my Dockerfile, I attempted to redirect output to a file in a bound volume (./mongo/log).
To my surprise, I was unable to create files via the RUN command, or to pipe the output of another command to a file using the redirection/appending operators (>, >>). I was, however, able to perform the task by logging into the running container via docker exec -ti mycontainer /bin/sh and issuing the commands from there.
Why is this happening? How can I touch a file in the Dockerfile, or redirect output to a file or to the console from which the Dockerfile is run?
Here is my Dockerfile:
FROM mongo:3.4
#Installing NodeJS
RUN apt-get update && \
apt-get install -y curl && \
curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y nodejs
#Setting Up Mongo
WORKDIR /var/www/smq
COPY ./mongo-setup.js mongo-setup.js
##for testing
RUN touch /var/log/node.log && \
node --help 2>&1 > /var/log/node.log
##this was the command to debug
#RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log
Here an excerpt from my docker-compose.yml:
mongodb:
  build:
    context: ./
    dockerfile: ./mongodb-dockerfile
  container_name: smqmongodb
  volumes:
    - /var/lib/mongodb/data
    - ./mongo/log/:/var/log/
    - ../.config:/var/www/.config
You are doing this during your build:
RUN touch /var/log/node.log && \
node --help 2>&1 > /var/log/node.log
The file /var/log/node.log is created and fixed immutably into the resulting image.
Then you run the container with this volume mount:
volumes:
  - ./mongo/log/:/var/log/
Whatever is in ./mongo/log/ is mounted as /var/log in the container, which hides whatever was there before (from the image). This is the thing that's making it look like your touch didn't work (even though it probably worked fine).
You're thinking about this backward - your volume mount doesn't expose the container's version of /var/log externally - it replaces whatever was there.
Nothing you do in Dockerfile (build) will ever show up in an external mount.
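A quick way to see the shadowing in action (a hypothetical sketch; the image tag is made up):
# build the image from the Dockerfile above under a throwaway tag
docker build -f mongodb-dockerfile -t mongo-setup-debug .
# without the bind mount, the file created at build time is there
docker run --rm mongo-setup-debug ls -l /var/log/node.log
# with ./mongo/log mounted over /var/log, the host directory hides it
docker run --rm -v "$(pwd)/mongo/log:/var/log" mongo-setup-debug ls -l /var/log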
Instead of RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log within the container, what if you just say RUN node mongo-setup.js?
Docker recommends using docker logs. Like so:
docker logs container-name
To accomplish what you're after (seeing the mongo setup logs), you can split the container's stdout and stderr and send them to separate files:
me@host~$ docker logs foo > stdout.log 2>stderr.log
me@host~$ cat stdout.log
me@host~$ cat stderr.log
Also, refer to the docker logs documentation

nginx not starting inside Docker [duplicate]

This question already has answers here:
Dockerized nginx is not starting
(5 answers)
Closed 6 years ago.
Here is my Dockerfile:
FROM ubuntu:14.04.4
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:nginx/stable
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nginx
ADD configurations/nginx.conf /etc/nginx/nginx.conf
ADD configurations/app.conf /etc/nginx/sites-available/default.conf
RUN ln -sf /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled/default.conf
RUN chown -Rf www-data.www-data /var/www/
ADD scripts/start.sh /start.sh
RUN chmod 755 /start.sh
EXPOSE 443
EXPOSE 80
CMD ["/bin/bash", "/start.sh"]
The start.sh script:
cat scripts/start.sh
service nginx start
echo "test" > /tmp/test
When I log in to the container:
docker exec --interactive --tty my_container bash
neither the test file exists nor is nginx running. There are no errors in the nginx log.
The best practice is to run the process in the foreground instead of as a service.
Remove the start.sh file and change the CMD to:
CMD ["nginx", "-g", "daemon off;"]
You can get a better idea by reading the official nginx Dockerfile: https://github.com/nginxinc/docker-nginx/blob/master/stable/jessie/Dockerfile
Try
RUN /etc/init.d/nginx start
