I have the following Dockerfile:
FROM ruby:latest
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
WORKDIR /usr/src/app/
COPY Gemfile Gemfile.lock ./
RUN bundle install
ADD . /usr/src/app/
EXPOSE 3333
CMD ["ruby", "/usr/src/app/helloworld.rb"]
When I run the image:
docker run -t -d hello-world-ruby
Sinatra throws an exception (which is expected), and the container exits.
How can I keep the container running so I can get a shell inside it and debug what's happening?
A trick you can use is to start the application with a script.
The following will work:
FROM ruby:latest
# throw errors if Gemfile has been modified since Gemfile.lock
RUN bundle config --global frozen 1
WORKDIR /usr/src/app/
COPY Gemfile Gemfile.lock ./
RUN bundle install
ADD . /usr/src/app/
EXPOSE 3333
CMD ["/usr/src/app/startup.sh"]
and in startup.sh, do:
#!/bin/bash
ruby /usr/src/app/helloworld.rb &
sleep infinity
# or, if your base image lacks bash or a sleep that accepts "infinity":
while true; do sleep 86400; done
Run chmod 755 on the script and you should be set.
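With that in place (image tag hello-world-ruby as in the question; the container name hello-ruby is chosen here just for illustration), the container stays up and you can attach a shell to it with docker exec:
chmod 755 startup.sh
docker build -t hello-world-ruby .
docker run -d --name hello-ruby hello-world-ruby
docker exec -it hello-ruby bash
From that shell you can run ruby /usr/src/app/helloworld.rb by hand and watch the Sinatra exception directly.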
For debugging purposes there is no need to keep a container with a failing command alive, with loops or otherwise.
To debug such issues, just spawn a new container from the same image with bash as the command/entrypoint.
docker run -it --name helloruby hello-world-ruby bash
OR
docker run -it --name helloruby --entrypoint bash hello-world-ruby
This will give you a shell inside the container, where you can run and debug the Ruby app:
ruby /usr/src/app/helloworld.rb
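If you also want to see what the original container printed before it died, the output of an exited container is still available through docker logs (the container ID below is a placeholder):
docker ps -a
docker logs <container-id>
The Sinatra backtrace usually shows up there, which is often enough to diagnose the crash without a shell at all.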
Related
My problem occurs when running cron and a rackup service for a Ruby Sinatra app in Docker.
File cronjobs:
* * * * * cd /app && rake parser >> cron.log 2>&1
File Dockerfile:
RUN apk update && apk upgrade
RUN apk add --update build-base \
mariadb-dev bash dcron
RUN gem install bundler
WORKDIR /app
COPY Gemfile .
RUN bundle install && bundle clean
COPY . /app
COPY cronjobs /etc/crontabs/root
EXPOSE 80
CMD crond -f && rackup --host 0.0.0.0 -p 80
When I run Docker, only one of the two services is functional.
A Docker container keeps running only as long as its main process is running. So if you want to run two services inside a Docker container, one of them has to run in the background.
The CMD instruction should therefore be the following:
CMD ( crond -f & ) && rackup --host 0.0.0.0 -p 80
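Equivalently, you can move this into a small wrapper script; a minimal sketch, assuming the script is copied into the image, made executable, and referenced from CMD:
#!/bin/sh
# crond daemonizes by default (no -f), so it moves to the background on its own
crond
# exec replaces the shell so rackup becomes the container's main process
exec rackup --host 0.0.0.0 -p 80
As long as rackup stays in the foreground, the container stays alive and cron keeps firing alongside it.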
I want to create a Docker container that will be responsible for running recurring rake tasks based on the Whenever gem's configuration. I have a plain Ruby project (without Rails/Sinatra) with the following structure:
Gemfile:
source 'https://rubygems.org'
gem 'rake', '~> 12.3', '>= 12.3.1'
gem 'whenever', '~> 0.9.7', require: false
group :development, :test do
  gem 'byebug', '~> 10.0', '>= 10.0.2'
end
group :test do
  gem 'rspec', '~> 3.5'
end
config/schedule.rb: (whenever's configuration)
ENV.each { |k, v| env(k, v) }
every 1.minutes do
  rake 'hello:start'
end
lib/tasks/hello.rb: (rake configuration)
namespace :hello do
  desc 'This is a sample'
  task :start do
    puts 'start something!'
  end
end
Dockerfile:
FROM ruby:2.5.3-alpine3.8
RUN echo "http://dl-cdn.alpinelinux.org/alpine/edge/community" >> /etc/apk/repositories && \
apk update && apk upgrade && \
apk add build-base bash dcron && \
apk upgrade --available && \
rm -rf /var/cache/apk/* && \
mkdir /usr/app
WORKDIR /usr/app
COPY Gemfile* /usr/app/
RUN bundle install
COPY . /usr/app
RUN bundle exec whenever --update-crontab
CMD ['sh', '-c', 'crond && gulp']
I've used the following resources to get to this point:
How to run a cron job inside a docker container
https://github.com/renskiy/cron-docker-image/blob/master/alpine/Dockerfile
https://stackoverflow.com/a/43622984/5171758 <- very close to what I want, but no success
If I call my rake task using command line, I get the result I want.
$ rake 'hello:start'
start something!
However, I can't figure out how to make it work using Docker. The container is built, but no log is written, no output is shown, nothing happens. Can someone help me see what I'm doing wrong?
Build commands:
docker build -t gsc:0.0.1 .
docker container run -a stdin -a stdout -i --net host -t gsc:0.0.1 /bin/bash
Thanks all. Cheers
This is the solution to the problem I listed above. I had some issues in the Dockerfile and in schedule.rb. This is what I had to change to make it work correctly.
Dockerfile
wrong echo call
wrong bundle command
change ENTRYPOINT instead of CMD
FROM ruby:2.5.3-alpine3.8
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/edge/main && \
apk update && apk upgrade && \
apk add build-base bash dcron && \
apk upgrade --available && \
rm -rf /var/cache/apk/* && \
mkdir /usr/app
WORKDIR /usr/app
COPY Gemfile* /usr/app/
RUN bundle install
COPY . /usr/app
RUN bundle exec whenever -c && bundle exec whenever --update-crontab && touch ./log/cron.log
ENTRYPOINT crond && tail -f ./log/cron.log
config/schedule.rb
no need to ENV.each
every 1.minutes do
  rake 'hello:start'
end
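To confirm the schedule actually made it into cron, you can preview the crontab that whenever generates and then check the running container (the container name below is a placeholder):
bundle exec whenever
docker exec -it <container> crontab -l
docker logs -f <container>
The first command prints the entries whenever would install, the second shows what actually landed in root's crontab, and the third follows the task output, since the ENTRYPOINT above tails cron.log.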
UPDATE
I've created a GitHub repository and a Docker Hub repository to share this progress with the community.
Update: I've narrowed the [or a?] problem down to the line - groupy-gemcache:/usr/local/bundle in my services: app: volumes dictionary. If I remove it, the container runs fine [I think], but presumably I lose my local gem caching.
tl;dr: After running docker-compose build, things seem OK, but I cannot run any gem or bundle command inside my running Docker container if I add something to my Gemfile. For example, after docker-compose build && docker-compose run app bash:
root@2ea58aff612e:/src# bundle check
The Gemfile's dependencies are satisfied
root@2ea58aff612e:/src# echo 'gem "hello-world"' >> Gemfile
root@2ea58aff612e:/src# bundle
Could not find gem 'hello-world' in any of the gem sources listed in your Gemfile.
Run `bundle install` to install missing gems.
root@2ea58aff612e:/src# bundle install
Could not find gem 'hello-world' in any of the gem sources listed in your Gemfile.
Run `bundle install` to install missing gems.
root@2ea58aff612e:/src# gem
Could not find gem 'hello-world' in any of the gem sources listed in your Gemfile.
Run `bundle install` to install missing gems.
root@2ea58aff612e:/src# gem env
Could not find gem 'hello-world' in any of the gem sources listed in your Gemfile.
Run `bundle install` to install missing gems.
I'm still pretty new to Docker, but I've been trying to configure a perfect Dockerfile that can build, cache gems, and update while still committing its Gemfile.lock to git [maybe this isn't actually perfect and I'm open to suggestions there]. In my use case, I'm using a docker-compose file with images for a Rails app and a Sidekiq worker, as well as Postgres and Redis images, very similar to the setups described here and here.
My Dockerfile [some things commented out that I cobbled from the tutorials above]:
FROM ruby:2.3
ENTRYPOINT ["bundle", "exec"]
ARG bundle_path
# throw errors if Gemfile has been modified since Gemfile.lock
# RUN bundle config --global frozen 1
ENV INSTALL_PATH /src
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
# Install dependencies:
# - build-essential: To ensure certain gems can be compiled
# - nodejs: Compile assets
# - npm: Install node modules
# - libpq-dev: Communicate with postgres through the postgres gem
# - postgresql-client-9.4: In case you want to talk directly to postgres
RUN apt-get update && apt-get install -qq -y build-essential nodejs npm libpq-dev postgresql-client-9.4 --fix-missing --no-install-recommends && \
rm -rf /var/lib/apt/lists/*
COPY app/Gemfile app/Gemfile.lock ./
# Bundle and then save the updated Gemfile.lock in our volume since it will be clobbered in the next step
RUN echo $bundle_path && bundle install --path=$bundle_path && \
cp Gemfile.lock $bundle_path
# ./app contains the rails app on host
COPY app .
# Unclobber the updated gemfile.lock
RUN mv $bundle_path/Gemfile.lock ./
CMD ["./script/start.sh"]
docker-compose.yml:
version: '2'
volumes:
  groupy-redis:
  groupy-postgres:
  groupy-gemcache:
services:
  app:
    build:
      args:
        bundle_path: /usr/local/bundle
      context: .
      dockerfile: Dockerfile
    command: "./script/start.sh"
    links:
      - postgres
      - redis
    volumes:
      - ./app:/src
      - groupy-gemcache:/usr/local/bundle
    ports:
      - '3000:3000'
    env_file:
      - .docker.env
    stdin_open: true
    tty: true
Wow, I figured it out. The problem was coming from my ENTRYPOINT instruction, as explained at the end of this article on CMD vs ENTRYPOINT.
I believe the result of ENTRYPOINT ["bundle", "exec"] combined with CMD ["./script/start.sh"] or docker-compose run app bash was to run a command like bundle exec 'bash'. I confirmed this by removing the entrypoint from the Dockerfile, entering the shell as above, and manually running bundle exec 'bash'; sure enough, I landed in a subshell where I couldn't run any bundle or gem commands, and I had to exit twice to leave.
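A sketch of the fix on the Dockerfile side, under the assumption that the start script can call bundle exec itself: drop the blanket ENTRYPOINT so that interactive commands like docker-compose run app bash are no longer wrapped in it.
# removed: ENTRYPOINT ["bundle", "exec"]
# use bundle exec inside script/start.sh where it is actually needed
CMD ["./script/start.sh"]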
I have created a Dockerfile in my file structure, built a Docker image, and tried to run it, but I keep getting the following error:
Error response from daemon: repository not found, does not exist, or no pull access.
I'm pretty new to Docker, so what could possibly be wrong here?
Commands I run:
docker build -t repoName .
docker run -d -p 8080:80 repoName
My Dockerfile:
FROM nginx:1.10.3
RUN mkdir /app
WORKDIR /app
# Set up environment
RUN apt-get update
RUN apt-get install -y curl tar
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
# Do some building and copying
EXPOSE 80
CMD ["/bin/sh", "/app/run-dockerized.sh"]
I would like to use the AWS CLI in a Heroku Ruby project (mainly to use it through a thin wrapper from the Ruby application).
Is there any standard way to install additional software like this into an existing application with a Gemfile?
Here are the steps that worked for me:
1) Use buildpack-multi to install buildpacks for both Ruby and Python:
heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
echo "https://github.com/heroku/heroku-buildpack-ruby" >> .buildpacks
echo "https://github.com/heroku/heroku-buildpack-python" >> .buildpacks
echo "web: bundle exec rails server -p $PORT" > Procfile
2) Add a requirements.txt file to the root of the project, containing the desired pip package:
echo "awscli" >> requirements.txt
3) Deploy to Heroku:
git add .buildpacks requirements.txt Procfile
git commit -a -m "use buildpacks for ruby and python, install aws cli"
git push heroku
This works just fine and allows me to use my aws scripts from my ruby app.
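For the thin Ruby wrapper itself, a minimal sketch (the method name and S3 paths are made up for illustration) that just shells out to the installed CLI:
# Shell out to the aws CLI installed by the Python buildpack.
# Passing arguments separately to system avoids shell-quoting issues.
def s3_sync(src, dest)
  system('aws', 's3', 'sync', src, dest) or raise "aws s3 sync failed: #{src} -> #{dest}"
end

s3_sync('tmp/exports', 's3://example-bucket/exports')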
As was pointed out to me, using fog is probably the better solution in the long term.
You can use Docker to vendor things for Heroku apps.
Add a Dockerfile
FROM ubuntu:14.04
COPY . /app
Then build an image and run a container:
$ docker build .
$ docker run -it $image_id bash
# apt-get update && apt-get install jq
Now you can copy the binary out from another terminal:
$ docker cp $container_id:/usr/bin/jq .
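You can then commit the copied binary into the app so it ships with the slug; the Ruby buildpack puts the app's bin/ directory on the PATH, so something like this (paths assumed) makes jq callable from a dyno:
mkdir -p bin
docker cp $container_id:/usr/bin/jq bin/jq
chmod +x bin/jq
git add bin/jq && git commit -m "vendor jq"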
The aws-cli tool is trickier because it needs a whole Python environment.
You should add the Heroku buildpack for the AWS CLI by running:
$ heroku buildpacks:add heroku-community/awscli
More details can be found on Heroku's page, or in the buildpack's git repo.
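After adding the buildpack, trigger a rebuild and the aws binary becomes available on the dyno (credentials still come from config vars such as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY):
git commit --allow-empty -m "add awscli buildpack"
git push heroku master
heroku run aws --version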