There are two files, Dockerfile.infra and docker-compose-infra.yml. First, docker-compose-infra.yml is built with the following command:
docker-compose --file docker-compose-infra.yml build
This results in no errors and finishes as expected.
The problem arises when trying to deploy this to AWS. The following command:
docker-compose --file docker-compose-infra.yml run cdk
Produces this error:
bash: cdk: command not found
This appears to be triggered when docker-compose-infra.yml attempts to run the cdk deploy bash command.
The command should work, because the Dockerfile.infra build installs cdk via npm install -g aws-cdk-lib.
Dockerfile.infra file:
FROM node:16-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN npm install -g aws-cdk-lib \
    && apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends \
        # install Python
        python3-pip \
        # install Poetry via curl
        curl \
    && curl -k https://install.python-poetry.org | python3 - \
    && apt-get remove curl -y \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
COPY pyproject.toml poetry.lock /
ENV PATH=/root/.local/bin:$PATH
RUN poetry config virtualenvs.create false \
    && poetry install --no-dev
WORKDIR /app/
COPY app.py cdk.json cdk.context.json /app/
COPY stacks/ /app/stacks/
docker-compose-infra.yml:
version: "3"
services:
cdk:
command: bash -c "cdk deploy --require-approval never --all --parameters my-app-${ENVIRONMENT}-service:MyServiceImageTag=${IMAGE_TAG}"
build:
context: ./
dockerfile: Dockerfile.infra
environment:
- AWS_PROFILE=${AWS_PROFILE}
- ENVIRONMENT=${ENVIRONMENT}
- DEPLOY_ACCOUNT=${DEPLOY_ACCOUNT}
volumes:
- ~/.aws/credentials:/root/.aws/credentials
You need to install aws-cdk, not aws-cdk-lib:
RUN npm install -g aws-cdk \
This might be a bit confusing, because aws-cdk-lib is both a valid npm package and the name of the Python dependency required when writing Python CDK apps; the cdk CLI itself, however, ships in the aws-cdk npm package.
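After switching the install to aws-cdk and rebuilding, a quick sanity check is to override the service's configured command, which docker-compose run allows (a sketch; the service name is taken from the compose file above):
docker-compose --file docker-compose-infra.yml build --no-cache
docker-compose --file docker-compose-infra.yml run cdk cdk --version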
Related
This is my Dockerfile. I also have an app.py file with two handlers, and I need to be able to run one of them based on a variable or something similar. Right now I run only app.lambda_handler1, but I also want to be able to run app.lambda_handler2.
# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.8"
# Bundle base image + runtime
FROM python:${RUNTIME_VERSION}-buster
# Include global args
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
    apt-get install -y \
        g++ \
        make \
        cmake \
        unzip \
        libcurl4-openssl-dev
# Install OCR build dependencies
RUN apt-get update \
    && apt-get install -y tesseract-ocr tesseract-ocr-spa \
    && apt-get install -y poppler-utils
RUN apt-get update \
    && apt-get install -y libgl1
RUN python${RUNTIME_VERSION} -m pip install --upgrade pip
RUN python${RUNTIME_VERSION} -m pip install tesseract pillow pytesseract
# Copy handler function
RUN mkdir -p ${FUNCTION_DIR}
COPY app/* ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}
# Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install -r ${FUNCTION_DIR}/docker-requirements.txt
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
COPY entry.sh /
RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/entry.sh"]
CMD [ "app.lambda_handler1" ]
I'm trying to build a docker image but it throws an error and I can't seem to figure out why.
It is stuck at RUN apt-get -y update with the following error messages:
4.436 E: Release file for http://security.debian.org/debian-security/dists/buster/updates/InRelease is not valid yet (invalid for another 2d 16h 26min 22s). Updates for this repository will not be applied.
4.436 E: Release file for http://deb.debian.org/debian/dists/buster-updates/InRelease is not valid yet (invalid for another 3d 10h 28min 24s). Updates for this repository will not be applied.
executor failed running [/bin/sh -c apt-get -y update]: exit code: 100
Here's my docker file:
FROM python:3.7
# Adding trusting keys to apt for repositories
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
# Adding Google Chrome to the repositories
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
# Updating apt to see and install Google Chrome
RUN apt-get -y update
# Magic happens
RUN apt-get install -y google-chrome-stable
# Installing Unzip
RUN apt-get install -yqq unzip
# Download the Chrome Driver
RUN CHROMEDRIVER_RELEASE=$(curl http://chromedriver.storage.googleapis.com/LATEST_RELEASE) && \
    echo "Chromedriver latest version: $CHROMEDRIVER_RELEASE" && \
    wget --quiet "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_RELEASE/chromedriver_linux64.zip" && \
    unzip chromedriver_linux64.zip && \
    rm -rf chromedriver_linux64.zip && \
    mv chromedriver /usr/local/bin/chromedriver && \
    chmod +x /usr/local/bin/chromedriver && \
    chromedriver --version
# Set display port as an environment variable
ENV DISPLAY=:99
WORKDIR /
COPY requirements.txt ./
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY . .
RUN pip install -e .
What is happening here?
In my case, Docker was still using the cached RUN apt update && apt upgrade layer, and therefore not updating the package sources.
The solution was to build the docker image once with the --no-cache flag:
docker build --no-cache .
If you are using Docker Desktop, check that enough resources are set in Settings/Preferences, e.g. memory and disk.
It's answered here: https://askubuntu.com/questions/1059217/getting-release-is-not-valid-yet-while-updating-ubuntu-docker-container
Correct your system clock. (In the comments I also suggested checking for a mismatch between the clock and your timezone.)
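A quick way to confirm the skew is to compare the host clock with the clock inside a container (a sketch; any small image works, python:3.7 is used here only because it is the base image above):
date -u
docker run --rm python:3.7 date -u
If the container's clock lags by days, as the "invalid for another 2d 16h" message suggests, restart Docker (or the VM) or re-sync the host clock, e.g. via NTP.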
I got this ERROR: executor failed running [...]: exit code: 100 error message when I mistyped the name of a package.
This was in my Dockerfile:
RUN sudo apt-get update; \
    sudo apt-get -y upgrade; \
    sudo apt-get install -y gnupg2 wget lsb_release
instead of this:
RUN sudo apt-get update; \
    sudo apt-get -y upgrade; \
    sudo apt-get install -y gnupg2 wget lsb-release
(see the difference between the underscore and the dash.)
Fixing the package name solved the problem.
This can also be OS-specific. I had the same issue running MariaDB on Windows 10.
Check your Docker settings (Docker Desktop > Settings > Docker Engine):
{
  "registry-mirrors": [],
  "insecure-registries": [],
  "debug": false,
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}
Remove the block below, and it should work:
  "features": {
    "buildkit": true
  },
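If you only want to rule out BuildKit for a single build rather than editing the settings file, the standard environment switch for the classic builder does the same thing:
DOCKER_BUILDKIT=0 docker build .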
I had this error and I think it was because I installed buildx but the version of the plugin didn't match my docker installation. Uninstalling buildx resolved the issue for me:
docker buildx uninstall
For me adding this to the Dockerfile did the job:
RUN apk add --update linux-headers;
I tried to run a dotnet command located in a shell script that is called by the Dockerfile during the docker build process.
Here is the dockerfile snippet:
FROM ubuntu:16.04
FROM microsoft/dotnet:2.2-sdk as build-env
# .net core
RUN apt-get update -y && apt-get install -y wget apt-transport-https
RUN wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb && dpkg -i packages-microsoft-prod.deb
RUN apt-get update -y && apt-get install -y aspnetcore-runtime-2.2=2.2.1-1
# dotnet tool command
RUN apt-get update -y && apt-get install dotnet-sdk-2.2 -y
# for dot net tool #https://stackoverflow.com/questions/51977474/install-dotnet-core-tool-dockerfile
ENV PATH="${PATH}:/root/.dotnet/tools"
# Supervisor
RUN apt-get update -y && apt-get install -y supervisor && mkdir -p /etc/supervisor
# main script as default command when docker container runs
# Run the main sh script to run script in each xxx/*/db-migrate.sh.
CMD ["/xxx/main-migrate.sh"]
# Microservice files
ADD xxx /xxx
# install the xxx deploy tool
WORKDIR /xxx
RUN for d in /xxx/*/ ; do cd "$d"; if [ -f "./install.sh" ]; then sh ./install.sh; fi; done
In the install.sh, here is the code:
dotnet tool install -g xxx.DEPLOY --version [$(cat version)] --add-source /xxx/
When I run docker build -t xxx:v0 ., I get an error message saying:
./install.sh: 1: ./install.sh: dotnet: not found
I have added FROM microsoft/dotnet:2.2-sdk as build-env and RUN apt-get update -y && apt-get install dotnet-sdk-2.2 -y, so why can't Docker find the dotnet command during the build?
How do I call the dotnet command located in the shell script during the docker build process?
Thank you
FROM ubuntu:16.04
FROM microsoft/dotnet:2.2-sdk as build-env
In the lines above, FROM ubuntu:16.04 is effectively ignored: each FROM starts a new build stage, and since nothing is copied from the first stage, the final image is based on the last FROM, which is microsoft/dotnet:2.2-sdk, not ubuntu.
So if your base image is FROM microsoft/dotnet:2.2-sdk as build-env, why bother running these complex scripts to install dotnet?
You can check the dotnet version directly:
FROM microsoft/dotnet:2.2-sdk as build-env
RUN dotnet --version
Output:
Step 1/6 : FROM microsoft/dotnet:2.2-sdk as build-env
---> f13ac9d68148
Step 2/6 : RUN dotnet --version
---> Running in f1d34507c7f2
> 2.2.402
Removing intermediate container f1d34507c7f2
---> 7fde8596c331
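For completeness: when both images are genuinely needed, multi-stage builds are the intended mechanism; you build in one stage and copy artifacts into the next. A minimal sketch with placeholder paths, not the poster's actual project:
# build stage: compile with the SDK image
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /out

# final stage: pull in only the published output
FROM ubuntu:16.04
COPY --from=build-env /out /app
Note that the final image would still need a .NET runtime to actually run the output; the point is only that a stage's files reach the final image via COPY --from, never implicitly.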
I have created a Laravel project and put it in a Docker container, and it all seems to load fine; however, at some point I get an error message saying:
There is no existing directory at "/Users/john/Documents/work/dashboard/src/storage/logs" and its not buildable: Permission denied
It is strange that Laravel is trying to write to a file or directory that is part of my local environment, instead of the Docker container environment, which I would expect to be something like:
/var/www/storage/logs
This is my docker-compose.yml:
version: '2'
services:
  # The Web Server
  web:
    build:
      context: ./
      dockerfile: web.signup.dockerfile
    working_dir: /var/www
    volumes:
      - ./src:/var/www
      - /var/www/storage
    env_file: 'src/.env'
    ports:
      - 80:80
volumes:
  dbdata:
And this is my Dockerfile
FROM centos:latest
RUN set -ex \
    && yum install -y epel-release \
    && yum update -y mysql-client libmagickwand-dev \
    && yum install -y libmcrypt-devel \
    && yum install -y python-pip \
    && pip install --upgrade pip \
    && yum install -y zip unzip \
    && yum install -y java-1.8.0-openjdk \
    && yum clean all
RUN pip install --upgrade --ignore-installed six awscli
RUN yum install -y supervisor
RUN yum install -y php-pear php-devel
RUN pecl install imagick
# Add the Ngix
ADD nginx.repo /etc/yum.repos.d/nginx.repo
# Add the Centos PHP dependent repository
RUN rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-7.rpm
RUN yum update -y
# Installing Nginx
RUN yum -y install nginx
# Installing PHP
RUN yum -y --enablerepo=remi,remi-php72 install php-fpm php-common php-mcrypt php-mbstring
WORKDIR /var/www
ADD vhost.prod.conf /etc/nginx/conf.d/default.conf
COPY src/. /var/www
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" \
    && php -r "if (hash_file('SHA384', 'composer-setup.php') === '544e09ee996cdf60ece3804abc52599c22b1f40f4323403c44d44fdfdd586475ca9813a858088ffbc1f233e9b180f061') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" \
    && php composer-setup.php \
    && php -r "unlink('composer-setup.php');" \
    && php composer.phar install --no-dev --no-scripts \
    && rm composer.phar
RUN chown -R apache:apache \
    /var/www/storage \
    /var/www/bootstrap/cache
EXPOSE 80
EXPOSE 9000
RUN mkdir -p /run/php-fpm
COPY supervisord.conf /supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/supervisord.conf"]
Any ideas at all?
This works as designed: you are mounting ./src from your current directory into the container.
volumes:
  - ./src:/var/www
If you want your code inside the container independent of the host, you could add the files while building instead.
To fix your "bug",
chmod -R 777 /Users/john/Documents/work/dashboard/src/storage/logs
would be OK for a local development environment.
On my end, no matter what permissions we changed on the physical folder, the result was the same. Deleting everything in bootstrap/cache/* solved the issue.
In my case, I got this error when I ran out of disk space; clearing space fixed it.
Only this helped:
rm bootstrap/cache/config.php
php artisan cache:clear
composer dump-autoload
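The likely reason this works: bootstrap/cache/config.php stores absolute paths from whichever environment last cached the configuration, so a config file cached on the host sends Laravel after host paths inside the container. Assuming artisan runs inside the container, the built-in equivalent of deleting that file is:
php artisan config:clear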
This question has been asked here in many ways, but none of them seemed to be exactly what I'm looking for. Note: I'm totally new to Docker.
I've currently got a Docker image ("padcrawler") on my Mac, which is an image built off of scrapinghub/splash.
I'm basically just trying to make it so that any changes I make on my host machine (Mac) are automatically updated within the container. I'm using docker-machine, NOT boot2docker.
This is my Dockerfile:
FROM scrapinghub/splash:latest
# Update UBUNTU base image and install deps
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
        build-essential \
        ca-certificates \
        gcc \
        git \
        libpq-dev \
        make \
        libxml2-dev \
        libxslt-dev \
        libssl-dev \
        libffi-dev \
        zlib1g-dev \
        python-pip \
        python \
        python-dev \
        ssh \
    && apt-get autoremove \
    && apt-get clean
RUN mkdir -p /usr/src/pad
WORKDIR /usr/src/pad
# Copy over everything to container directory
COPY . /usr/src/pad
# Install Python (2.7) deps
RUN /usr/bin/pip install -r /usr/src/pad/requirements.txt
# Launch pad-crawler upon docker container bootup
ENTRYPOINT /usr/src/pad/start
I'm also using docker-compose and this is my yml file:
version: '2'
services:
  crawler:
    build: .
    volumes:
      - ".:/usr/src/pad"
    image: padcrawler
However, I'm still unable to get host file changes to automatically sync with the mounted volume in the container. Obviously it's annoying and frustrating to have to manually copy over the changes every time. Is there a bigger issue here? Am I using Docker totally wrong?
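A quick diagnostic sketch to see whether the bind mount is in effect at all (service name taken from the compose file above):
# on the host, inside the project directory
touch test-sync.txt
# inside the running container
docker-compose exec crawler ls -l /usr/src/pad/test-sync.txt
If the file never appears, the mount itself is the problem; with docker-machine the host path is resolved on the VM, not on the Mac, so the directory must also be shared into the VM (the VirtualBox driver shares /Users by default).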