I'm trying to run my cucumber tests in a docker container using firefox. I'm getting this error when my code just tries to visit 'google.com'
Net::ReadTimeout (Net::ReadTimeout)
Here is my Dockerfile
FROM ruby:2.2.2
RUN mkdir /app
WORKDIR /app
RUN gem update
ADD Gemfile /app/Gemfile
ADD Gemfile.lock /app/Gemfile.lock
RUN bundle install
RUN apt-get update && apt-get install -y --fix-missing iceweasel xvfb unzip
ENV GECKODRIVER_VERSION v0.13.0
RUN echo $GECKODRIVER_VERSION
RUN mkdir -p /opt/geckodriver_folder
RUN wget -O /tmp/geckodriver_linux64.tar.gz https://github.com/mozilla/geckodriver/releases/download/$GECKODRIVER_VERSION/geckodriver-$GECKODRIVER_VERSION-linux64.tar.gz
RUN tar xf /tmp/geckodriver_linux64.tar.gz -C /opt/geckodriver_folder
RUN rm /tmp/geckodriver_linux64.tar.gz
RUN chmod +x /opt/geckodriver_folder/geckodriver
RUN ln -fs /opt/geckodriver_folder/geckodriver /usr/local/bin/geckodriver
ADD features /app/features
I've tried increasing my read_timeout to 120 with no effect.
When I bash into the container and run firefox, it says "no display specified".
Any suggestions?
I was having the same issue. This worked for me: https://github.com/juandelgado/docker-headless-firefox-cucumber/blob/master/Dockerfile
FROM ruby:2.4-slim
MAINTAINER Juan Delgado <juan@ustwo.com>
RUN mkdir /home/ustwo
WORKDIR /home/ustwo
RUN apt-get update && \
apt-get install -y xvfb build-essential libffi-dev wget firefox-esr && \
wget https://github.com/mozilla/geckodriver/releases/download/v0.17.0/geckodriver-v0.17.0-linux64.tar.gz && \
tar -zxvf geckodriver-v0.17.0-linux64.tar.gz && \
chmod +x geckodriver && \
mv geckodriver /usr/local/bin && \
rm geckodriver-v0.17.0-linux64.tar.gz
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN bundle
RUN ["cucumber", "--version"]
Related
I'm trying to build a docker image but it throws an error and I can't seem to figure out why.
It is stuck at RUN apt-get -y update with the following error messages:
4.436 E: Release file for http://security.debian.org/debian-security/dists/buster/updates/InRelease is not valid yet (invalid for another 2d 16h 26min 22s). Updates for this repository will not be applied.
4.436 E: Release file for http://deb.debian.org/debian/dists/buster-updates/InRelease is not valid yet (invalid for another 3d 10h 28min 24s). Updates for this repository will not be applied.
executor failed running [/bin/sh -c apt-get -y update]: exit code: 100
Here's my Dockerfile:
FROM python:3.7
# Adding trusting keys to apt for repositories
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
# Adding Google Chrome to the repositories
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
# Updating apt to see and install Google Chrome
RUN apt-get -y update
# Magic happens
RUN apt-get install -y google-chrome-stable
# Installing Unzip
RUN apt-get install -yqq unzip
# Download the Chrome Driver
RUN CHROMEDRIVER_RELEASE=$(curl http://chromedriver.storage.googleapis.com/LATEST_RELEASE) && \
echo "Chromedriver latest version: $CHROMEDRIVER_RELEASE" && \
wget --quiet "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_RELEASE/chromedriver_linux64.zip" && \
unzip chromedriver_linux64.zip && \
rm -rf chromedriver_linux64.zip && \
mv chromedriver /usr/local/bin/chromedriver && \
chmod +x /usr/local/bin/chromedriver && \
chromedriver --version
# Set display port as an environment variable
ENV DISPLAY=:99
WORKDIR /
COPY requirements.txt ./
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY . .
RUN pip install -e .
What is happening here?
In my case, docker was still using the cached RUN apt update && apt upgrade command, thus not updating the package sources.
The solution was to build the docker image once with the --no-cache flag:
docker build --no-cache .
If you are using Docker Desktop, check whether enough resources are set in Settings/Preferences, e.g. memory and disk space.
It's answered here: https://askubuntu.com/questions/1059217/getting-release-is-not-valid-yet-while-updating-ubuntu-docker-container
Correct your system clock. (In the comments I also suggested checking for a mismatch between the clock and your timezone.)
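A quick way to see whether the clock your builds use is off is to compare the host's clock with the one containers see (a sketch; any small image with a date binary will do):
date -u                           # clock on the host (UTC)
docker run --rm alpine date -u    # clock seen by containers and builds (UTC)
If the two differ by days, as in the error above, restarting the Docker VM usually brings them back in sync (on Docker Desktop with WSL2, wsl --shutdown and then start Docker Desktop again).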
I got this ERROR: executor failed running [...]: exit code: 100 message when I mistyped the name of a package.
This was in my Dockerfile:
RUN sudo apt-get update; \
sudo apt-get -y upgrade; \
sudo apt-get install -y gnupg2 wget lsb_release
instead of this:
RUN sudo apt-get update; \
sudo apt-get -y upgrade; \
sudo apt-get install -y gnupg2 wget lsb-release
(Note the difference between the underscore and the dash.)
Fixing the package name solved the problem.
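If you are ever unsure of the exact package name, searching inside the same base image is a quick sanity check (a sketch using the stock Debian image):
docker run --rm debian:buster sh -c 'apt-get update -qq && apt-cache search lsb | grep -i release'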
This can also be OS-specific. I had the same issue running MariaDB on Windows 10.
Check the Docker Engine settings:
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": false,
"experimental": false,
"features": {
"buildkit": true
},
"builder": {
"gc": {
"enabled": true,
"defaultKeepStorage": "20GB"
}
}
}
Remove the block below, and it should work:
"features": {
"buildkit": true
},
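If you would rather not change the settings permanently, you can get the same effect for a single build by disabling BuildKit just for that command:
DOCKER_BUILDKIT=0 docker build .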
I had this error and I think it was because I installed buildx but the version of the plugin didn't match my docker installation. Uninstalling buildx resolved the issue for me:
docker buildx uninstall
For me (on an Alpine-based image), adding this to the Dockerfile did the job:
RUN apk add --update linux-headers;
I'm creating a Dockerfile to run truffleruby. I'm getting an error when trying to install bundler and foreman. The error is /bin/sh: 1: gem: not found
Dockerfile
FROM debian:buster-slim
# Install packages for building ruby
RUN apt update -y && apt install -y git curl libssl-dev libpq-dev libreadline-dev zlib1g-dev \
autoconf bison build-essential libyaml-dev \
libreadline-dev libncurses5-dev libffi-dev libgdbm-dev
RUN apt clean
# Install rbenv and ruby-build
RUN git clone https://github.com/sstephenson/rbenv.git /root/.rbenv
RUN git clone https://github.com/sstephenson/ruby-build.git /root/.rbenv/plugins/ruby-build
RUN /root/.rbenv/plugins/ruby-build/install.sh
ENV PATH /root/.rbenv/bin:$PATH
RUN echo 'eval "$(rbenv init -)"' >> /etc/profile.d/rbenv.sh # or /etc/profile
RUN echo 'eval "$(rbenv init -)"' >> .bashrc
RUN . ~/.bashrc
RUN rbenv install truffleruby-20.3.0
RUN rbenv global truffleruby-20.3.0
RUN rbenv rehash
ENV BUNDLER_VERSION=2.2.4 NODE_ENV=production RAILS_ENV=production RAILS_SERVE_STATIC_FILES=true RAILS_LOG_TO_STDOUT=true PORT=3000
ENV CONFIGURE_OPTS --disable-install-doc
RUN apt-get install -y curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
apt-get update && apt-get install -y nodejs && \
apt-get clean
RUN rbenv versions
RUN gem install bundler:2.2.4 foreman
RUN mkdir /app
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set --local deployment 'true'
RUN bundle config set --local without 'development test'
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["foreman", "start"]
Tail of the build:
Removing intermediate container 1a445fde7fc0
---> 43c3d72b7eb6
Step 17/27 : RUN rbenv versions
---> Running in feb5bb9361cc
* truffleruby-20.3.0 (set by /root/.rbenv/version)
Removing intermediate container feb5bb9361cc
---> c7d1a5826af5
Step 18/27 : RUN gem install bundler:2.2.4 foreman
---> Running in 998461afc89c
/bin/sh: 1: gem: not found
The command '/bin/sh -c gem install bundler:2.2.4 foreman' returned a non-zero code: 127
You don't generally use version managers like rbenv in Docker. There are a couple of reasons for this. One is that an image usually only contains a single application and its single runtime, so you'd never have more than one Ruby in an image and therefore there's no need to switch. A second is that most common paths of running containers (including docker run and the Dockerfile RUN directive) don't look at shell dotfiles like .bashrc or /etc/profile, so the version manager setup will never get run.
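To make the second point concrete, here is a tiny illustration (not from the question's Dockerfile) of why the rbenv setup never takes effect:
# Each RUN line is executed by a fresh, non-login /bin/sh -c in a new layer,
# so nothing sourced or exported in one RUN carries over to the next:
RUN echo 'export FOO=bar' >> /root/.bashrc
RUN . /root/.bashrc && echo "FOO is $FOO"   # prints "FOO is bar", but only in this shell
RUN echo "FOO is $FOO"                      # prints just "FOO is ": the previous shell is gone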
TruffleRuby is distributed (among other ways) as a standalone tar file so you can just install that in your Dockerfile. I'd make the Dockerfile look roughly like:
FROM debian:buster-slim
# Install the specific dependency packages TruffleRuby recommends
# (build-essential is much larger but might actually be necessary)
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
curl \
gcc \
libssl-dev \
libz-dev \
make
# Download and unpack TruffleRuby
ARG TRUFFLERUBY_VERSION=20.3.0
ENV PATH /opt/truffleruby-$TRUFFLERUBY_VERSION-linux-amd64/bin:$PATH
RUN cd /opt \
&& curl -L https://github.com/oracle/truffleruby/releases/download/vm-$TRUFFLERUBY_VERSION/truffleruby-$TRUFFLERUBY_VERSION-linux-amd64.tar.gz | tar xz \
&& /opt/truffleruby-$TRUFFLERUBY_VERSION-linux-amd64/lib/truffle/post_install_hook.sh
# Now build and install your application
RUN gem install bundler:2.2.4
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set --local deployment 'true'
RUN bundle config set --local without 'development test'
RUN bundle install
COPY . .
ENTRYPOINT ["bundle", "exec"]
EXPOSE 3000
CMD ["rails", "start"]
You can reasonably split this into two separate Dockerfiles. End the first one just before the "build and install your application" comment and build it with docker build -t myname/truffleruby:20.3.0 -f Dockerfile.truffleruby . (note the trailing dot for the build context). The second one can then begin with FROM myname/truffleruby:20.3.0, in the same way as the standard Docker Hub ruby image.
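In that layout the second, application-side Dockerfile would start from the base image you just built; roughly (a sketch reusing the names from above):
# Dockerfile (application image), built on top of the TruffleRuby base image
FROM myname/truffleruby:20.3.0
RUN gem install bundler:2.2.4
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle config set --local deployment 'true'
RUN bundle config set --local without 'development test'
RUN bundle install
COPY . .
ENTRYPOINT ["bundle", "exec"]
EXPOSE 3000
CMD ["rails", "server"]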
Does the same work with CRuby?
I'd suspect RUN doesn't read shell files, so PATH needs to be modified.
RUN git clone https://github.com/sstephenson/rbenv.git /root/.rbenv
...
RUN /root/.rbenv/plugins/ruby-build/install.sh
ENV PATH /root/.rbenv/bin:$PATH
seems a bit weird, is rbenv cloned and installed in the same place?
I would skip rbenv in Docker, and instead just:
RUN ruby-build truffleruby ~/.rubies/truffleruby
ENV PATH $HOME/.rubies/truffleruby/bin:$PATH
So I have written my automated Robot Framework tests and they are in a GitLab repo. I want to run these automatically once a day.
Is this possible?
Do I need a .gitlab-ci.yml file for it? (if yes what do I put in it?)
Yes, you can totally run the Robot Framework tests in GitLab CI.
So, to answer: yes, it is very much possible; in fact, that is how pipeline tests are usually executed. You just need to build a Docker image that has everything needed to run the framework inside Docker. Here's a sample Dockerfile. I would suggest wrapping the robot invocation in a small bash script (something like robot --outputdir reports *.robot; a sketch of such a wrapper follows the Dockerfile).
FROM ubuntu:18.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
apt-get install -y python3-setuptools wget git bzip2 ca-certificates curl bash chromium-browser chromium-chromedriver firefox python3.8 python3-pip nano && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz
RUN tar xvf geckodriver*
RUN chmod +x geckodriver
RUN mv geckodriver /usr/bin
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
RUN pip3 install --upgrade pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN pip install rpaframework
COPY . /usr/src/
ADD robot.sh /usr/local/bin/robot.sh
RUN chmod +x /usr/local/bin/robot.sh
WORKDIR /usr/src
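The robot.sh wrapper copied into the image above is not shown; a minimal version might look like this (a sketch; it assumes the suites sit in the job's working directory and writes the reports where the CI job below collects its artifacts):
#!/usr/bin/env bash
set -euo pipefail
# Run every suite and write log/report/output files to the artifacts directory
robot --outputdir "${CI_PROJECT_DIR:-.}/reports" *.robot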
Now you need a .gitlab-ci.yml in your repository with content like this:
stages:
  - build
  - run

variables:
  ARTIFACT_REPORT_PATH: "${CI_PROJECT_DIR}/reports"

build_image:
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  script:
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout

robot_tests:
  stage: run
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - robot.sh
  artifacts:
    paths:
      - $ARTIFACT_REPORT_PATH
    when: always
That should be it. Once the job finishes, you will see the test output in the job log and the reports collected as job artifacts at that path.
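For the once-a-day part of the question: create a pipeline schedule for the project (CI/CD > Schedules in the GitLab UI) with a daily cron. If you want the test job to run only on that schedule, a rule like this should do it (a sketch):
robot_tests:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'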
New to ruby and bundler here.
I am installing them in a docker image with this docker file:
FROM alpine:3.5
# Install Ruby, Ruby Bundler and other ruby dependencies
RUN apk add --update \
ruby ruby-bigdecimal ruby-bundler \
ca-certificates libressl \
libressl-dev build-base ruby-dev \
ruby-rdoc ruby-io-console ruby-irb; \
\
&& bundle config build.nokogiri --use-system-libraries; \
&& bundle config git.allow_insecure true; \
\
&& gem install json foreman --no-rdoc --no-ri; \
&& gem cleanup; \
&& rm -rf /usr/lib/ruby/gems/*/cache/*; \
&& apk del libressl-dev build-base ruby-dev; \
&& rm -rf /var/cache/apk/* /tmp;
CMD ["bundle"]
When I do a docker run I get:
Don't run Bundler as root. Bundler can ask for sudo if it is needed,
and installing your bundle as root will break this application for all
non-root users on this machine.
Could not locate Gemfile or .bundle/ directory
How do I resolve this? I just want to install ruby and ruby-bundle and be done with this ...
There are pre-built ruby images (e.g. Alpine 3.11 Ruby 2.7) that include bundler. It's easier to start with them, as they generally follow the current "best practices" for building.
Notice that they set the BUNDLE_SILENCE_ROOT_WARNING environment variable with an ENV directive in the image build to remove that root warning.
You normally wouldn't run bundler as the CMD for a container either; you might run bundler during a RUN image build step, though.
Running containers as non-root users is not a bad idea in any case. Use the USER directive to change that.
FROM ruby:2.7-alpine
WORKDIR /app
ADD . /app/
RUN set -uex; \
bundle install; \
adduser -D rubyapp; \
mkdir -p /app/data; \
chown rubyapp /app/data
USER rubyapp
CMD [ "ruby", "whatever.rb" ]
Dockerizing a Rails application is taking ages to rebuild the container.
I tried to move the ADD as far toward the end as possible, but I don't think it can go any further.
Any suggestions on how to improve the rebuild speed of my Docker container?
Or general suggestions on how to improve the Dockerfile? It takes very long to rebuild every time.
Also, are there smart ways to check whether, for example, a directory already exists, without throwing an error and failing the build?
FROM ruby:2.2.0
EXPOSE 80
EXPOSE 22
ENV RAILS_ENV production
RUN apt-get update -qq && apt-get install -y build-essential
# --------------------------------------
# GEM PRE-REQ
# --------------------------------------
#RUN apt-get install -y libpq-dev
#RUN apt-get install -y libxml2-dev libxslt1-dev #nokigiri
#RUN apt-get install -y libqt4-webkit libqt4-dev xvfb
RUN cd /tmp && git clone https://github.com/maxmind/geoipupdate && cd geoipupdate && ./bootstrap
# --------------------------------------
# HOME FOLDER
# --------------------------------------
WORKDIR /srv/my
ADD . /srv/my
ADD ./Gemfile /srv/my/Gemfile
ADD ./Gemfile.lock /srv/my/Gemfile.lock
#RUN mkdir /srv/my
RUN bundle install --without development test
#RUN bundle install foreman
RUN bundle exec rake assets:precompile --trace
# --------------------------------------
# UNICORN AND NGINX
# --------------------------------------
ADD ./config/_server/unicorn_my /etc/init.d/unicorn_my
RUN chmod 755 /etc/init.d/unicorn_my
RUN update-rc.d unicorn_my defaults
ADD ./config/_server/nginx.conf /etc/nginx/sites-available/default
RUN apt-get install -y nginx
RUN echo "\ndaemon off;" >> /etc/nginx/nginx.conf
#RUN chown -R www-data:www-data /var/lib/nginx ??
ADD ./config/_server/nginx.conf /etc/nginx/my.conf
ADD ./config/_server/my.conf /etc/nginx/sites-enabled/my.conf
ADD ./config/_server/unicorn.rb /srv/my/config/unicorn.rb
ADD ./config/_server/Procfile /srv/my/Procfile
#RUN service unicorn_my start
#RUN foreman start -f ./Procfile
You can improve your build speed as follows:
Install all of your requirements as early as possible.
Combine all apt-get/yum commands into a single command and clean up the apt/yum cache afterwards; this also decreases your image size.
Sample:
RUN \
apt-get -y update && \
apt-get -y install curl build-essential nginx && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
Put ADD/COPY as late as possible, because changes to the added files invalidate the Docker image cache from that point on (see the sketch below).
Avoid putting long-running tasks (e.g. apt-get, downloading large files) after an ADD/COPY of a file or directory that changes often.
Docker takes a "snapshot" (layer) for each command, so when you build a new image from the same state (no Dockerfile/file/directory changes), it should be fast.
Commenting/uncommenting parts of the Dockerfile to reduce apt-get install time might not help, because doing so invalidates the Docker cache.
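Applied to the Dockerfile in the question, a cache-friendly ordering for the Ruby part would look roughly like this (a sketch; paths and commands taken from the question):
WORKDIR /srv/my
# Copy only the dependency manifests first, so the expensive bundle install
# layer is rebuilt only when Gemfile or Gemfile.lock actually change
COPY Gemfile Gemfile.lock ./
RUN bundle install --without development test
# Copy the rest of the application last; editing app code no longer
# invalidates the bundle install layer above
COPY . /srv/my
RUN bundle exec rake assets:precompile --trace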