Why is the dotnet command not found in a Dockerfile? - shell

I am trying to run a dotnet command from a shell script that is called by my Dockerfile during the docker build process.
Here is the dockerfile snippet:
FROM ubuntu:16.04
FROM microsoft/dotnet:2.2-sdk as build-env
# .net core
RUN apt-get update -y && apt-get install -y wget apt-transport-https
RUN wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb && dpkg -i packages-microsoft-prod.deb
RUN apt-get update -y && apt-get install -y aspnetcore-runtime-2.2=2.2.1-1
# dotnet tool command
RUN apt-get update -y && apt-get install dotnet-sdk-2.2 -y
# for dot net tool #https://stackoverflow.com/questions/51977474/install-dotnet-core-tool-dockerfile
ENV PATH="${PATH}:/root/.dotnet/tools"
# Supervisor
RUN apt-get update -y && apt-get install -y supervisor && mkdir -p /etc/supervisor
# main script as default command when the docker container runs
# Run the main sh script, which runs each xxx/*/db-migrate.sh.
CMD ["/xxx/main-migrate.sh"]
# Microservice files
ADD xxx /xxx
# install the xxx deploy tool
WORKDIR /xxx
RUN for d in /xxx/*/ ; do cd "$d"; if [ -f "./install.sh" ]; then sh ./install.sh; fi; done
In install.sh, here is the code:
dotnet tool install -g xxx.DEPLOY --version [$(cat version)] --add-source /xxx/
When I run docker build -t xxx:v0 ., I get an error message saying:
./install.sh: 1: ./install.sh: dotnet: not found
I have added FROM microsoft/dotnet:2.2-sdk as build-env and RUN apt-get update -y && apt-get install dotnet-sdk-2.2 -y, so why can't Docker find the dotnet command during the build?
How do I call the dotnet command located in the shell script during the docker build process?
Thank you

FROM ubuntu:16.04
FROM microsoft/dotnet:2.2-sdk as build-env
In the above lines, FROM ubuntu:16.04 is completely ignored: each FROM starts a new build stage, and since no instructions follow the first one, every later instruction runs in the stage started by the last FROM, i.e. on microsoft/dotnet:2.2-sdk, not on ubuntu.
So if your base image is FROM microsoft/dotnet:2.2-sdk as build-env, why bother running these complex scripts to install dotnet?
You can verify that dotnet is already available:
FROM microsoft/dotnet:2.2-sdk as build-env
RUN dotnet --version
output
Step 1/6 : FROM microsoft/dotnet:2.2-sdk as build-env
---> f13ac9d68148
Step 2/6 : RUN dotnet --version
---> Running in f1d34507c7f2
2.2.402
Removing intermediate container f1d34507c7f2
---> 7fde8596c331
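A trimmed version of the Dockerfile (a sketch assembled from the snippet above; the xxx paths are the asker's placeholders) would drop the unused ubuntu stage and the apt-based dotnet installs, keeping only what the SDK image doesn't already provide:
FROM microsoft/dotnet:2.2-sdk as build-env
# global dotnet tools are installed to /root/.dotnet/tools
ENV PATH="${PATH}:/root/.dotnet/tools"
# Supervisor
RUN apt-get update -y && apt-get install -y supervisor && mkdir -p /etc/supervisor
# Microservice files and per-service install scripts
ADD xxx /xxx
WORKDIR /xxx
RUN for d in /xxx/*/ ; do cd "$d"; if [ -f "./install.sh" ]; then sh ./install.sh; fi; done
# main script as default command when the docker container runs
CMD ["/xxx/main-migrate.sh"]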

Related

When Deploying CDK to AWS via Docker, bash: cdk: command not found

There are two files, Dockerfile.infra and docker-compose-infra.yml. First, docker-compose-infra.yml is built via the following command:
docker-compose --file docker-compose-infra.yml build
This results in no errors and finishes as expected.
The problem arises when trying to deploy this to AWS. The following command:
docker-compose --file docker-compose-infra.yml run cdk
Produces this error:
bash: cdk: command not found
This appears to be triggered when docker-compose-infra.yml attempts to run the cdk deploy bash command.
The command should work because, within the Dockerfile.infra build, cdk is installed via npm install -g aws-cdk-lib.
Dockerfile.infra file:
FROM node:16-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN npm install -g aws-cdk-lib \
    && apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends \
        # install Python
        python3-pip \
        # install Poetry via curl
        curl \
    && curl -k https://install.python-poetry.org | python3 - \
    && apt-get remove curl -y \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
COPY pyproject.toml poetry.lock /
ENV PATH=/root/.local/bin:$PATH
RUN poetry config virtualenvs.create false \
    && poetry install --no-dev
WORKDIR /app/
COPY app.py cdk.json cdk.context.json /app/
COPY stacks/ /app/stacks/
docker-compose-infra.yml:
version: "3"
services:
cdk:
command: bash -c "cdk deploy --require-approval never --all --parameters my-app-${ENVIRONMENT}-service:MyServiceImageTag=${IMAGE_TAG}"
build:
context: ./
dockerfile: Dockerfile.infra
environment:
- AWS_PROFILE=${AWS_PROFILE}
- ENVIRONMENT=${ENVIRONMENT}
- DEPLOY_ACCOUNT=${DEPLOY_ACCOUNT}
volumes:
- ~/.aws/credentials:/root/.aws/credentials
You need to install aws-cdk, not aws-cdk-lib:
RUN npm install -g aws-cdk \
This might be a bit confusing, because aws-cdk-lib is both the name of the required Python dependency when writing Python CDK apps and a valid npm package.
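After rebuilding, a quick way to confirm the CLI is on the PATH (a sketch; the trailing cdk --version overrides the compose command for the cdk service defined above):
docker-compose --file docker-compose-infra.yml build --no-cache
docker-compose --file docker-compose-infra.yml run cdk cdk --version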

How can I run a custom docker image with multiple Lambda handlers?

This is my Dockerfile. I also have an app.py file with two handlers, and I need to be able to run one of them based on a variable or similar. Right now I run only app.lambda_handler1, but I also want to be able to run app.lambda_handler2.
# Define global args
ARG FUNCTION_DIR="/home/app/"
ARG RUNTIME_VERSION="3.8"
# Bundle base image + runtime
FROM python:${RUNTIME_VERSION}-buster
# Include global args
ARG FUNCTION_DIR
ARG RUNTIME_VERSION
# Install aws-lambda-cpp build dependencies
RUN apt-get update && \
    apt-get install -y \
        g++ \
        make \
        cmake \
        unzip \
        libcurl4-openssl-dev
# Install OCR build dependencies
RUN apt-get update \
    && apt-get install -y tesseract-ocr tesseract-ocr-spa \
    && apt-get install -y poppler-utils
RUN apt-get update \
    && apt-get install libgl1 -y
RUN python${RUNTIME_VERSION} -m pip install --upgrade pip
RUN python${RUNTIME_VERSION} -m pip install tesseract pillow pytesseract
# Copy handler function
RUN mkdir -p ${FUNCTION_DIR}
COPY app/* ${FUNCTION_DIR}
# Install Lambda Runtime Interface Client for Python
RUN python${RUNTIME_VERSION} -m pip install awslambdaric --target ${FUNCTION_DIR}
# Install the function's dependencies
RUN python${RUNTIME_VERSION} -m pip install -r ${FUNCTION_DIR}/docker-requirements.txt
# Set working directory to function root directory
WORKDIR ${FUNCTION_DIR}
# (Optional) Add Lambda Runtime Interface Emulator and use a script in the ENTRYPOINT for simpler local runs
ADD https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie /usr/bin/aws-lambda-rie
COPY entry.sh /
RUN chmod 755 /usr/bin/aws-lambda-rie /entry.sh
ENTRYPOINT [ "/entry.sh"]
CMD [ "app.lambda_handler1" ]

VMSS Custom script not fully executing

We are currently experimenting with new VMSS build agents for our devops environment, which require some components for each build pipeline.
To make sure we don't need to add these to every build pipeline, we created a startup script that is executed every time a machine is created (standard custom script extension), with the following contents:
#install kubectl
echo "Installing KubeCTL"
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
echo "Installing DotNet SDK"
#install dotnet runtime (5) for dotnet tool install
sudo apt-get install -y dotnet-sdk-5.0
echo "Installing DotNet DotNet Runtime"
#install dotnet runtime (5) for FluentMigrator
sudo apt-get install -y dotnet-runtime-5.0
echo "Installing DotNet AzureCLI"
#install AzureCLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
echo "Installing Powershell"
#install powershell
# Update the list of packages
sudo apt-get update
# Install pre-requisite packages.
sudo apt-get install -y wget apt-transport-https software-properties-common
# Download the Microsoft repository GPG keys
wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb"
# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update the list of packages after we added packages.microsoft.com
sudo apt-get update
# Install PowerShell
sudo apt-get install -y powershell
echo "Installing FluentMigrator"
#install fluent migrator
dotnet tool install -g FluentMigrator.DotNet.Cli
echo "Installing OpenJDK"
#install openJDK
sudo apt-get install -y openjdk-11-jre
Now, everything in this script executes and installs without any issues, and the build pipeline runs correctly after a new agent is booted up.
However, at release time we require FluentMigrator, which is not installed even though it is included in the script.
If we add the same install line, dotnet tool install -g FluentMigrator.DotNet.Cli, as a build step or as a release step, it gets installed correctly. To do this we run a custom bash task with the command.
However, I would very much prefer to have this run during boot-up of the machine instead of adding a custom bash script to 20 release pipelines. Does anyone have any idea why the tool is not installing correctly within this script?
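One thing worth checking (an assumption, not confirmed in the post): dotnet tool install -g is per-user, installing into $HOME/.dotnet/tools of whoever runs the command, and the Linux custom script extension runs as root. The tool then lands in /root/.dotnet/tools, which is not on the build agent user's PATH. A machine-wide alternative is --tool-path pointing at a directory every user already has on PATH, e.g.:
echo "Installing FluentMigrator"
# install machine-wide instead of per-user (-g)
sudo dotnet tool install --tool-path /usr/local/bin FluentMigrator.DotNet.Cli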

Dockerfile | => ERROR [base 2/7] RUN apt-get update -y && apt-get -y --no-install-recommends install curl wget && rm -rf /var/lib/apt/lists/* [duplicate]

I'm trying to build a docker image but it throws an error and I can't seem to figure out why.
It is stuck at RUN apt-get -y update with the following error messages:
4.436 E: Release file for http://security.debian.org/debian-security/dists/buster/updates/InRelease is not valid yet (invalid for another 2d 16h 26min 22s). Updates for this repository will not be applied.
4.436 E: Release file for http://deb.debian.org/debian/dists/buster-updates/InRelease is not valid yet (invalid for another 3d 10h 28min 24s). Updates for this repository will not be applied.
executor failed running [/bin/sh -c apt-get -y update]: exit code: 100
Here's my Dockerfile:
FROM python:3.7
# Adding trusting keys to apt for repositories
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
# Adding Google Chrome to the repositories
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
# Updating apt to see and install Google Chrome
RUN apt-get -y update
# Magic happens
RUN apt-get install -y google-chrome-stable
# Installing Unzip
RUN apt-get install -yqq unzip
# Download the Chrome Driver
RUN CHROMEDRIVER_RELEASE=$(curl http://chromedriver.storage.googleapis.com/LATEST_RELEASE) && \
    echo "Chromedriver latest version: $CHROMEDRIVER_RELEASE" && \
    wget --quiet "http://chromedriver.storage.googleapis.com/$CHROMEDRIVER_RELEASE/chromedriver_linux64.zip" && \
    unzip chromedriver_linux64.zip && \
    rm -rf chromedriver_linux64.zip && \
    mv chromedriver /usr/local/bin/chromedriver && \
    chmod +x /usr/local/bin/chromedriver && \
    chromedriver --version
# Set display port as an environment variable
ENV DISPLAY=:99
WORKDIR /
COPY requirements.txt ./
RUN pip install --upgrade pip && pip install -r requirements.txt
COPY . .
RUN pip install -e .
What is happening here?
In my case, Docker was still using the cached RUN apt update && apt upgrade layer and thus not updating the package sources.
The solution was to build the docker image once with the --no-cache flag:
docker build --no-cache .
If you are using Docker Desktop, check whether enough resources are set in Settings/Preferences, e.g. the memory and disk requirements.
This is answered here: https://askubuntu.com/questions/1059217/getting-release-is-not-valid-yet-while-updating-ubuntu-docker-container
Correct your system clock. (In the comments I also suggested checking for a mismatch between the clock and your timezone.)
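To confirm a clock skew quickly, you can compare the clock Docker sees with your host's (a sketch; busybox is just a convenient small image):
docker run --rm busybox date
date
If the two disagree by days, as the "invalid for another 2d 16h" message suggests, fixing the host or VM clock (e.g. restarting Docker Desktop or the WSL2 VM) resolves the apt error.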
I got this ERROR: executor failed running [...]: exit code: 100 error message when I mistyped the name of a package.
This was in my Dockerfile:
RUN sudo apt-get update; \
    sudo apt-get -y upgrade; \
    sudo apt-get install -y gnupg2 wget lsb_release
instead of this:
RUN sudo apt-get update; \
    sudo apt-get -y upgrade; \
    sudo apt-get install -y gnupg2 wget lsb-release
(see the difference between the underscore and the dash.)
Fixing the package name solved the problem.
This can also be OS-specific.
I had the same issue running MariaDB on Windows 10.
Check your Docker settings:
{
  "registry-mirrors": [],
  "insecure-registries": [],
  "debug": false,
  "experimental": false,
  "features": {
    "buildkit": true
  },
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "20GB"
    }
  }
}
Remove the block below, and it should work:
"features": {
  "buildkit": true
},
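A less invasive equivalent is to disable BuildKit for a single build via an environment variable instead of editing the settings file (a sketch; myimage is a placeholder tag):
DOCKER_BUILDKIT=0 docker build -t myimage .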
I had this error, and I think it was because I installed buildx but the version of the plugin didn't match my Docker installation. Uninstalling buildx resolved the issue for me:
docker buildx uninstall
For me adding this to the Dockerfile did the job:
RUN apk add --update linux-headers;

Setup Robot Framework pipeline with GitLab CI / CD

So I have written my automated Robot Framework tests and they are in a GitLab repo. I want to run these automatically once a day.
Is this possible?
Do I need a .gitlab-ci.yml file for it? (If yes, what do I put in it?)
Yes, you can totally run the Robot tests in GitLab CI; in fact, that is how you execute pipeline tests. You just need to build a Dockerfile that has the things you need to execute the framework inside Docker. Here's a sample Dockerfile. I would suggest you wrap the .robot invocation in a bash script (like robot -d *.robot).
FROM ubuntu:18.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update --fix-missing && \
    apt-get install -y python3-setuptools wget git bzip2 ca-certificates curl bash chromium-browser chromium-chromedriver firefox python3.8 python3-pip nano && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN wget https://github.com/mozilla/geckodriver/releases/download/v0.27.0/geckodriver-v0.27.0-linux64.tar.gz
RUN tar xvf geckodriver*
RUN chmod +x geckodriver
RUN mv geckodriver /usr/bin
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
RUN pip3 install --upgrade pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN pip install rpaframework
COPY . /usr/src/
ADD robot.sh /usr/local/bin/robot.sh
RUN chmod +x /usr/local/bin/robot.sh
WORKDIR /usr/src
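The wrapper script mentioned above is not shown in the answer; a minimal sketch of what it could contain (hypothetical contents; note the Dockerfile adds it as robot.sh while the CI file below calls robot-test.sh, so the names need to be aligned):
#!/usr/bin/env bash
set -e
# write logs/reports to the directory exported as ARTIFACT_REPORT_PATH in .gitlab-ci.yml
mkdir -p "${ARTIFACT_REPORT_PATH:-reports}"
robot -d "${ARTIFACT_REPORT_PATH:-reports}" *.robot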
Now you need a .gitlab-ci.yml in your repository with content like this:
stages:
  - build
  - run

variables:
  ARTIFACT_REPORT_PATH: "${CI_PROJECT_DIR}/reports"

build_image:
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  script:
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout

robot_tests:
  stage: run
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - robot-test.sh
  artifacts:
    paths:
      - $ARTIFACT_REPORT_PATH
    when: always
That should be it; once the job finishes, you will see the output in the job artifacts at the configured path. To run the tests once a day, as asked, add a pipeline schedule for the project under CI/CD → Schedules in GitLab.
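If robot_tests should run only from that daily schedule and not on every push, the job can be restricted (a sketch using GitLab's only keyword, to be merged into the robot_tests job above):
robot_tests:
  only:
    - schedules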
