I have created a Dockerfile in my project, built a Docker image from it, and tried to run it, but I keep getting the following error:
Error response from daemon: repository not found, does not exist, or no pull access.
I'm pretty new to docker, so what could possibly be wrong here?
Commands I run:
docker build -t repoName .
docker run -d -p 8080:80 repoName
My Dockerfile:
FROM nginx:1.10.3
RUN mkdir /app
WORKDIR /app
# Set up environment
RUN apt-get update
RUN apt-get install -y curl tar
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
# Do some building and copying
EXPOSE 80
CMD ["/bin/sh", "/app/run-dockerized.sh"]
I have written my automated Robot Framework tests and they are in a GitLab repo. I want to run them automatically once a day.
Is this possible?
Do I need a .gitlab-ci.yml file for it? (If yes, what do I put in it?)
Yes, you can absolutely run Robot Framework tests in GitLab CI; in fact, that is how pipeline tests are typically executed. You just need to build a Dockerfile that installs everything required to run the framework inside Docker. Here's a sample Dockerfile. I would suggest wrapping the robot invocation in a small bash script (something like robot -d reports *.robot); a sketch of such a wrapper follows the Dockerfile.
FROM ubuntu:18.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
apt-get install -y python3-setuptools wget git bzip2 ca-certificates curl bash chromium-browser chromium-chromedriver firefox python3.8 python3-pip nano && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz
RUN tar xvf geckodriver*
RUN chmod +x geckodriver
RUN mv geckodriver /usr/bin
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
RUN pip3 install --upgrade pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN pip install rpaframework
COPY . /usr/src/
ADD robot.sh /usr/local/bin/robot.sh
RUN chmod +x /usr/local/bin/robot.sh
WORKDIR /usr/src
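For reference, the robot.sh wrapper that the Dockerfile copies to /usr/local/bin could be as simple as the sketch below; the reports directory name is an assumption chosen to line up with ARTIFACT_REPORT_PATH in the CI config further down.
#!/bin/bash
# Minimal wrapper sketch: run all suites in the repo and write
# logs and reports to ./reports so CI can collect them as artifacts.
set -e
mkdir -p reports
robot -d reports *.robot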
Now you need a .gitlab-ci.yml in your repository with content along these lines:
stages:
  - build
  - run
variables:
  ARTIFACT_REPORT_PATH: "${CI_PROJECT_DIR}/reports"
build_image:
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  script:
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout
robot_tests:
  stage: run
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - robot.sh
  artifacts:
    paths:
      - $ARTIFACT_REPORT_PATH
    when: always
That should be it. Once the job finishes, the test reports will be available as artifacts at the configured path.
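As for running the tests once a day: GitLab can trigger this pipeline on a schedule you define under CI/CD > Schedules in the project. If you want the test job to run only for scheduled pipelines, a rule along these lines can be added to it (standard GitLab CI rules syntax; restricting it this way is optional):
robot_tests:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'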
Goal: to be able to run a Phoenix mix release on an EC2 instance (this machine: https://hub.docker.com/_/amazonlinux/)
Problem: Running my release produces the following error:
/my_app/erts-11.0.3/bin/beam.smp: /lib64/libtinfo.so.6: no version information available (required by /my_app/erts-11.0.3/bin/beam.smp)
2020-09-08 13:17:33.469328
args: [load_failed,"Failed to load NIF library /my_app/lib/crypto-4.7/priv/lib/crypto: 'libcrypto.so.1.1: cannot open shared object file: No such file or directory'","OpenSSL might not be installed on this system.\n"]
format: "Unable to load crypto library. Failed with error:~n\"~p, ~s\"~n~s"
label: {error_logger,error_msg}
but I have openssl installed in each scenario (OpenSSL 1.0.2k-fips 26 Jan 2017).
Setup:
I create a new phoenix project with:
yes | mix phx.new my_app --no-webpack --no-ecto --no-dashboard --no-gettext
cd my_app
and uncomment the config :my_app, MyAppWeb.Endpoint, server: true line in config/prod.secret.exs to start the server when running the app.
I create the following Dockerfile to build my project:
FROM debian:buster
# Install essential build packages
RUN apt-get update
RUN apt-get install -y wget git locales curl gnupg-agent
# Set locale
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV HOME=/opt/app
WORKDIR /opt/app
# Install erlang and elixir
RUN wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb
RUN dpkg -i erlang-solutions_2.0_all.deb
RUN apt-get update
RUN apt-get install -y esl-erlang
RUN apt-get install -y elixir
# Install hex and rebar
RUN mix do local.hex --force, local.rebar --force
# Install phoenix
RUN mix archive.install hex phx_new 1.5.4 --force
COPY mix.exs mix.lock ./
COPY config config
COPY priv priv
COPY lib lib
RUN mix deps.get
ENV SECRET_KEY_BASE='secretExampleQrzdplBPdbHHhr2bpELjiGVGVqmjvFl2JEXdkyla8l6+b2CCcvs'
ENV MIX_ENV=prod
RUN mix phx.digest
RUN mix compile
RUN mix release
CMD ["_build/prod/rel/my_app/bin/my_app", "start"]
and build the image with:
docker build . -t my_app
We can check that everything is running as expected with:
docker run -p 4000:4000 -i my_app:latest
and visiting localhost:4000.
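(From another shell, a quick curl check works too; this is just a convenience, not a required step:)
# Expect an HTTP response with a 200 status from the Phoenix endpoint
curl -I http://localhost:4000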
I copy the _build/prod/rel/my_app directory from the built docker container (as this is all I'll be transferring across to my ec2 instance).
# list all containers
docker container ls -a
# locate the container with image tag: my_app:latest. It should look like:
# f9c46df97e55 my_app:latest "_build/prod/rel/my_…"
# note the container_id, and copy across the build release
docker cp f9c46df97e55:/opt/app/_build/prod/rel/my_app .
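If you prefer not to look up the container id by hand, the copy step can be scripted; a rough sketch, assuming the my_app:latest tag from above:
# Find the most recently created container built from the my_app:latest image
CONTAINER_ID=$(docker ps -a -q --filter ancestor=my_app:latest | head -n 1)
docker cp "$CONTAINER_ID":/opt/app/_build/prod/rel/my_app .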
We create an instance.Dockerfile to reproduce what will run on the EC2 instance:
FROM amazonlinux:latest
COPY my_app my_app
CMD ["my_app/bin/my_app", "start"]
and run it:
docker build . -f instance.Dockerfile -t my_app_instance && docker run -p 4000:4000 -i my_app_instance:latest
This fails to run, with the error:
[load_failed,"Failed to 2020-09-08 13:27:49.980715
args: load NIF library /my_app/lib/crypto-4.7/priv/lib/crypt format: label: 2020-09-08 13:27:49.981847 supervisor_report o: 'libcrypto.so.1.1: cannot open shared object file: No such file or directory'","OpenSSL might not be installed on this system.\n"]
"Unable to load crypto library. Failed with error:~n\"~p, ~s\"~n~s"
{error_logger,error_msg}
Note:
I am able to replicate the error on a debian:buster machine with the above docker build ... && docker run ... command, but with this instance.Dockerfile:
FROM debian:buster
RUN apt-get update
RUN apt-get install -y locales
# Set locale
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
COPY my_app my_app
CMD ["my_app/bin/my_app", "start"]
and fix the error by changing: RUN apt-get install -y locales to RUN apt-get install -y locales curl.
I have tried yum install curl and yum install openssl on the amazonlinux:latest machine, but still experience the same error.
Question:
Where should I look to make progress on this? It seems to be an Erlang/OTP requirement issue, but the above is hardly an SSCCE, so it is difficult to raise.
I have struggled to find which crypto or OpenSSL library the curl apt package pulls in that makes the error go away.
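One way to see what the curl package would pull in on the same base image is to simulate the install; a sketch, run inside a debian:buster container:
apt-get update
# Simulate the install to list everything apt would pull in;
# on debian:buster this includes libssl1.1, which ships libcrypto.so.1.1
apt-get install -s curl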
Any pointers to a particular forum to ask for help, or what to try next would be greatly appreciated.
Thanks in advance for the help.
Thanks to the suggestion from @VenkatakumarSrinivasan to build it on CentOS, I managed to get it working on an Amazon Linux machine with the following Dockerfiles.
Building the release:
FROM centos:7
RUN yum update -y
RUN yum clean -y all
RUN echo 'LC_ALL="en_US.UTF-8"' >> /etc/locale.conf
ENV LC_ALL="en_US.UTF-8"
RUN yum install -y epel-release
RUN yum install -y gcc gcc-c++ glibc-devel make ncurses-devel openssl-devel \
autoconf java-1.8.0-openjdk-devel git wget wxBase.x86_64
WORKDIR /opt
RUN wget http://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
RUN rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
RUN yum update -y
RUN yum install -y erlang
WORKDIR /opt/elixir
RUN git clone https://github.com/elixir-lang/elixir.git /opt/elixir
RUN make clean test
ENV PATH=/opt/elixir/bin:${PATH}
RUN mix do local.hex --force, local.rebar --force
RUN mix archive.install hex phx_new 1.5.4 --force
WORKDIR /opt/app
COPY mix.exs mix.lock ./
COPY config config
COPY priv priv
COPY lib lib
RUN mix deps.get
ENV SECRET_KEY_BASE='secretExampleQrzdplBPdbHHhr2bpELjiGVGVqmjvFl2JEXdkyla8l6+b2CCcvs'
ENV MIX_ENV=prod
RUN mix phx.digest
RUN mix compile
RUN mix release
CMD ["_build/prod/rel/my_app/bin/my_app", "start"]
Running the release:
FROM amazonlinux:latest
RUN yum -y update
ENV LANG="en_US.UTF-8"
ENV LC_ALL="en_US.UTF-8"
RUN ln -s /usr/lib64/libtinfo.so.{6,5}
COPY my_app my_app
CMD ["my_app/bin/my_app", "start"]
I'm not sure this is a satisfying solution, though; I'm open to a more elegant one.
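For what it's worth, a way to confirm whether the release's crypto NIF can find its shared libraries on the target image is to run ldd against it inside the container; a sketch, assuming the crypto-* version directory shipped with the release above:
# Missing entries such as "libcrypto.so.1.1 => not found" point to an OpenSSL version mismatch
ldd my_app/lib/crypto-*/priv/lib/crypto.so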
I am trying to run a dotnet command located in a shell script that is called by the Dockerfile during the docker build process.
Here is the Dockerfile snippet:
FROM ubuntu:16.04
FROM microsoft/dotnet:2.2-sdk as build-env
# .net core
RUN apt-get update -y && apt-get install -y wget apt-transport-https
RUN wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb && dpkg -i packages-microsoft-prod.deb
RUN apt-get update -y && apt-get install -y aspnetcore-runtime-2.2=2.2.1-1
# dotnet tool command
RUN apt-get update -y && apt-get install dotnet-sdk-2.2 -y
# for dot net tool #https://stackoverflow.com/questions/51977474/install-dotnet-core-tool-dockerfile
ENV PATH="${PATH}:/root/.dotnet/tools"
# Supervisor
RUN apt-get update -y && apt-get install -y supervisor && mkdir -p /etc/supervisor
# main script as default command when docker container runs
# Run the main sh script, which runs each xxx/*/db-migrate.sh.
CMD ["/xxx/main-migrate.sh"]
# Microservice files
ADD xxx /xxx
# install the xxx deploy tool
WORKDIR /xxx
RUN for d in /xxx/*/ ; do cd "$d"; if [ -f "./install.sh" ]; then sh ./install.sh; fi; done
In the install.sh, here is the code:
dotnet tool install -g xxx.DEPLOY --version [$(cat version)] --add-source /xxx/
When I run docker build -t xxx:v0 ., I get an error message saying:
./install.sh: 1: ./install.sh: dotnet: not found
I have added FROM microsoft/dotnet:2.2-sdk as build-env and RUN apt-get update -y && apt-get install dotnet-sdk-2.2 -y, so why can't Docker find the dotnet command during the build?
How do I call the dotnet command located in a shell script during the docker build process?
Thank you
FROM ubuntu:16.04
FROM microsoft/dotnet:2.2-sdk as build-env
In the lines above, FROM ubuntu:16.04 is effectively ignored: each FROM starts a new build stage, and because nothing is ever copied out of that first stage, only the last FROM determines the image your instructions run in, which is microsoft/dotnet:2.2-sdk, not Ubuntu.
So if your base image is already FROM microsoft/dotnet:2.2-sdk as build-env, why bother with these complex scripts to install dotnet at all?
You can check the dotnet version right away:
FROM microsoft/dotnet:2.2-sdk as build-env
RUN dotnet --version
output
Step 1/6 : FROM microsoft/dotnet:2.2-sdk as build-env
---> f13ac9d68148
Step 2/6 : RUN dotnet --version
---> Running in f1d34507c7f2
> 2.2.402
Removing intermediate container f1d34507c7f2
---> 7fde8596c331
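If the intent was to actually use both images, that is what multi-stage builds are for: build with the SDK image and copy the published output into a runtime stage. A rough sketch, with MyApp as a placeholder project name:
# Build stage: restore and publish with the SDK image
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
COPY . .
RUN dotnet publish -c Release -o out
# Runtime stage: only the published output is carried over
FROM microsoft/dotnet:2.2-aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyApp.dll"]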
I'm fairly new to Docker, so I'm trying to learn more about it using a Laravel project. I'm following this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-set-up-laravel-nginx-and-mysql-with-docker-compose
I've adjusted the Dockerfile a bit from what the tutorial has, but even the unmodified tutorial file produces the same result.
FROM php:7.3-fpm
# Copy composer.lock and composer.json
COPY composer.lock composer.json /var/www/
# Install dependencies
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && \
    apt-get update && apt-get install -y mysql-client nodejs build-essential vim git curl
RUN npm install -g npm
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install extensions
RUN docker-php-ext-install pdo pdo_mysql
# Install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Add user for laravel application
RUN groupadd -g 1000 www
RUN useradd -u 1000 -ms /bin/bash -g www www
# Copy existing application directory contents
COPY . /var/www
# Copy existing application directory permissions
COPY --chown=www:www . /var/www
# Change current user to www
USER www
# Set working directory
WORKDIR /var/www
# Expose port 9000 and start php-fpm server
EXPOSE 9000
CMD ["php-fpm"]
But I keep getting the following error when I run docker-compose up -d:
E: Package 'mysql-client' has no installation candidate
ERROR: Service 'app' failed to build: The command '/bin/sh -c curl -sL https://deb.nodesource.com/setup_10.x | bash - && apt-get update && apt-get install -y mysql-client nodejs build-essential vim git curl' returned a non-zero code: 100
Am I missing something? I expected this to work since I am running apt-get update before installing mysql-client.
Thanks.
php:7.3-fpm now uses Debian 10 (Buster) as its base image, and Buster ships with MariaDB, so replacing mysql-client with mariadb-client should fix it.
If you still want the MySQL client specifically, the package is now called default-mysql-client.
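Applied to the Dockerfile above, the failing RUN line would then look roughly like this:
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - && \
    apt-get update && apt-get install -y mariadb-client nodejs build-essential vim git curl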
php:7.2-apache triggers the error as well, but I resolved it by pinning to php:7.2.18-apache.
This worked for me: sudo apt-get update && apt-get install -y git curl libmcrypt-dev default-mysql-client
Alternatively, run apt-cache search mysql-server to find the available server packages, then install them, e.g. sudo apt-get install default-mysql-server default-mysql-server-core mariadb-server-10.6 mariadb-server-core-10.6
In my case, the commands above were what fixed it.
FROM phusion/baseimage:0.9.16
MAINTAINER Raheel <raheelwp#gmail.com>
RUN apt-get update
RUN apt-get -y install apache2 libapache2-mod-php5 curl git
RUN add-apt-repository ppa:ondrej/php5-5.6
RUN apt-get update
RUN apt-get -y install python-software-properties
RUN apt-get update
RUN apt-get -y --force-yes install php5 php5-cli php5-mcrypt php5-curl php5-xdebug php5-json
RUN apt-get clean
RUN php5enmod mcrypt
RUN rm -rf /var/www/html
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/local/bin/composer
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD /home/raheel/code/laravel-app /var/www/laravel-app
EXPOSE 80
COPY setup-laravel.sh /var/www/laravel-app/setup-laravel.sh
RUN ["chmod", "+x", "/var/www/laravel-app/setup-laravel.sh"]
ENTRYPOINT ["/var/www/laravel-app/setup-laravel.sh"]
I created the above Dockerfile to run a Laravel application.
Then I ran the docker build -t raheelwp/laravel-app . command, but it created the image without a name, so I ran docker tag imageid raheelwp/laravel-app. After that, docker images shows my image with a name.
So far so good. But when I run docker run -t raheelwp/laravel-app, log into the container with docker exec -it containerid bash, and run ls on the /var/www directory, there is no laravel-app folder there.
I am new to Docker and this is my first ever Dockerfile. I would be very grateful if you could explain why I am having this problem and how to fix it.
Thanks