ASP.NET Core SQL Server Docker Compose entry point bash script shows errors in the logs - bash

I'm dockerizing an app, but the entry point script doesn't work. The app can't perform CRUD actions and shows a "failed logging in with sa" error. I checked the database because I suspected it hadn't been created, and indeed it hadn't. So I figured the problem has to be the entry point script:
#!/bin/sh
set -e
until dotnet database update; do
echo "SQL Server is starting up"
sleep 1
done
echo "SQL Server is up - executing command"
dotnet database update
This script doesn't stop the app from building but it shows this in the log:
./Setup.sh: 1: ./Setup.sh: #!/bin/bash
not found
./Setup.sh: 2: ./Setup.sh:
not found
./Setup.sh: 3: set: Illegal option -
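Those messages are the classic symptom of a script saved with Windows-style CRLF line endings: the shell sees a stray carriage return on every line, which is what turns set -e into "set: Illegal option -" and makes otherwise-valid lines report "not found" (the complaint about the shebang on line 1 additionally suggests a UTF-8 BOM in front of it). A quick way to confirm and fix this, assuming the script sits next to the Dockerfile as Setup.sh and that dos2unix is available or installed first:
file Setup.sh                 # "with CRLF line terminators" (and possibly "with BOM") confirms it
dos2unix Setup.sh             # converts to LF and drops a UTF-8 BOM by default
# or, without dos2unix:
sed -i 's/\r$//' Setup.sh
The related questions further down ran into the same line-ending issue.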
Here's the Dockerfile that's supposed to run the script above (named Migrations.Dockerfile):
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["Project/Project.csproj", "Project/"]
COPY Setup.sh Project/Setup.sh
RUN dotnet tool install --global dotnet-ef
RUN dotnet restore "./Project/Project.csproj"
COPY . .
WORKDIR "/src/Project"
RUN /root/.dotnet/tools/dotnet-ef migrations add InitialMigrations
RUN chmod +x ./Setup.sh
CMD /bin/sh ./Setup.sh
And here's the docker-compose.yml file:
version: '3.4'
services:
  project:
    image: ${DOCKER_REGISTRY-}project
    build:
      context: .
      dockerfile: Project/Dockerfile
    ports:
      - "9080:80"
    depends_on:
      - migrations
      - db
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      SA_PASSWORD: "W0WV3RYSTRONGPASSWORD!"
      ACCEPT_EULA: "Y"
    ports:
      - "14331:1433"
    depends_on:
      - migrations
  migrations:
    build:
      context: .
      dockerfile: Migrations.Dockerfile
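Two things are worth noting about the setup above, independent of the line-ending errors: dotnet database update is not an EF Core command (the usual invocation is dotnet ef database update, served by the dotnet-ef global tool the Dockerfile already installs), and the global-tools directory is not on PATH by default. A sketch of what Setup.sh could look like, saved with LF endings; it assumes the connection string in the migrations container points at the db service hostname rather than localhost:
#!/bin/sh
set -e

# the Dockerfile installs dotnet-ef into the global tools folder; put it on PATH
export PATH="$PATH:/root/.dotnet/tools"

# keep retrying until SQL Server accepts connections and the migration succeeds
until dotnet ef database update; do
  echo "SQL Server is starting up"
  sleep 1
done
echo "SQL Server is up - migrations applied"
Because depends_on only waits for the db container to start, not for SQL Server inside it to become ready, the retry loop is what actually bridges that gap.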

Related

Dockerfile ENTRYPOINT script doesn't run when the container starts if docker compose is run from GitLab pipelines

My container has a shell script that runs after the container starts. It was working before, but when I push it to GitLab and run it with a pipeline, it doesn't start. Here's my Dockerfile.
Dockerfile
# Setup SQL Server 2019
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV SA_PASSWORD=Banana100
ENV ACCEPT_EULA=Y
ENV MSSQL_PID=Developer
USER mssql
#Copy test database and create DB script to image working directory
WORKDIR /usr/src/app
COPY . /usr/src/app
EXPOSE 1433
ENTRYPOINT /bin/bash ./entrypoint.sh
What's weird is that whenever I delete the /bin/bash ./entrypoint.sh line, retype it, then run docker compose up --force-recreate --build -d on my local machine, the shell script executes just fine. But when the pipeline does it, it doesn't work. I have nothing to push to my repo; Git thinks I didn't make any changes at all.
This post here suggests that it could be an issue with the line endings. It apparently worked for most people, but I have tried these with no luck:
Daniel Howard's answer
P.J.Meisch's answer
Ryan Allen's answer
My entrypoint.sh is a shell script that runs an SQL file to create and restore a database. Please see the code below.
entrypoint.sh
# /opt/mssql/bin/sqlservr & /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P FBuilder*09 -d master -i createDB.sql
/opt/mssql/bin/sqlservr & /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P $SA_PASSWORD -d master -i createDB.sql
while true; do sleep 1000; done
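As an aside, the single sqlservr & sqlcmd ... line races against SQL Server's startup, so createDB.sql can run before logins are accepted. A variant that waits first and then keeps the container alive on the sqlservr process itself (a sketch reusing the paths and variables from the script above) could look like:
#!/bin/bash
# start SQL Server in the background and remember its PID
/opt/mssql/bin/sqlservr &
pid=$!

# retry until SQL Server accepts logins, then run the setup script
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1; do
  echo "Waiting for SQL Server to start..."
  sleep 2
done
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -d master -i createDB.sql

# keep the container in the foreground instead of sleeping forever
wait "$pid"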
docker-compose.yml
version: "3"
services:
fb_db:
container_name: test_fbdb
build: ./test.environment
ports:
- "5002:1433"
gitlab-ci.yml
.ci_server:
  tags:
    - ci
stages:
  - build
  - test
variables:
  SOLUTION_NAME: FBSVC.sln
before_script:
  - Set-Variable -Name "time" -Value (date -Format "%H:%m")
  - echo ${time}
  - echo "started by ${GITLAB_USER_NAME}"
build:
  stage: build
  extends:
    - .ci_server
  script:
    - echo "Restoring project dependencies..."
    - $Env:Path += ";C:\nuget"
    - nuget restore
    - echo "Preparing development environment"
    - copy "D:\CI_TestDB\FormulaBuilder\FormulaBuilder.bak" "D:\Project Files\Gitlab-Runner\builds\_1ZgTgcF\0\water-utilities\formula-builder\test.environment"
    - docker compose up --force-recreate --build -d
    - echo "Publishing project..."
    - $Env:Path += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\MSBuild\Current\Bin"
    - msbuild FBSVC.sln /p:DeployOnBuild=true /p:PublishProfile=CI_IntegrationTest
    - echo "Build stage completed successfully!"
  only:
    - merge_requests
test:
  stage: test
  variables:
    GIT_STRATEGY: clone
  extends:
    - .ci_server
  script:
    - echo "Installing dev dependencies..."
    - npm ci
    - echo "Running tests..."
    - npm run cy:test
  dependencies:
    - build
  only:
    - merge_requests
The GitLab Runner is running on my local physical machine; it is not a shared runner. Any help would be appreciated. Thanks!
UPDATE (June 24, 2022)
I have replaced the ENTRYPOINT in the Dockerfile with ENTRYPOINT [ "/bin/bash", "-c", "./entrypoint.sh" ], and it still has the same issue. Looking at the "Select End of Line Sequence" menu in VS Code (bottom-right corner), the file is set to "LF". When I checked the Dockerfile being pulled by the pipeline, it is now set to "CRLF". Does this matter?
I have run git config --global core.autocrlf input multiple times, but when the file is pulled from the repo it gets converted automatically anyway. Hence, I decided to detach the folder containing the files that build my database environment for the time being. Doing this requires rewriting the whole ENTRYPOINT line in my Dockerfile for things to work properly again. If you have any advice, please let me know.
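Rather than detaching the folder, a per-repository rule in .gitattributes usually stops the runner's checkout from converting the scripts, regardless of what core.autocrlf is set to on each machine. A sketch (adjust the *.sh pattern so it covers entrypoint.sh):
# force LF for shell scripts on every checkout, overriding core.autocrlf
printf '*.sh text eol=lf\n' >> .gitattributes
git add --renormalize .
git commit -m "Force LF line endings for shell scripts"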

Docker compose on Windows: no such file or directory

I'm trying to run two services with docker compose: a MySQL server and a SAMP server.
My docker-compose.yml looks like this:
version: '3.9'
services:
  mysql:
    image: mysql:8.0.28
    volumes:
      - ./mysql:/var/nw-mysql-data
    env_file:
      - .env
  samp:
    build:
      context: .
      dockerfile: Dockerfile
    command: ./samp03svr
    ports:
      - "7777:7777"
    env_file:
      - .env
    depends_on:
      - mysql
And my Dockerfile:
FROM ubuntu:20.04
RUN mkdir /app
COPY . /app/
WORKDIR /app
COPY ./samp03svr /app/
The problem is in the command of the samp service. It throws the following exception when I run docker-compose up:
standard_init_linux.go:228: exec user process caused: no such file or directory
The weird thing is that if I change the samp service's command to an ls like this:
command: ls
I can see the binary file that I'm trying to execute listed, and docker is saying that there's no such file...
Help?
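When exec reports "no such file or directory" for a binary that ls plainly shows, the message usually refers to the binary's interpreter (its dynamic loader) or a missing shared library rather than the file itself; a 32-bit executable inside a 64-bit base image is a common case. A quick check, assuming you can open a shell in the image (the file utility may need installing first):
# e.g. docker-compose run samp bash, then inside the container:
file ./samp03svr     # reports the architecture and the interpreter the binary expects
ldd ./samp03svr      # "not found" entries show which loader/libraries are missing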
EDIT: Here is a screenshot of the ls command output, run from the command instruction of the samp service:

docker is not found when running a docker command in entrypoint.sh

I'm getting
app_1 | ./entrypoint.sh: line 2: docker: command not found
when running this line of code in entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
How would i properly execute this command ?
entrypoint.sh
# entrypoint.sh
docker exec -it fullstacktypescript_database_1 psql -U postgres -c "CREATE DATABASE elitypescript"
npm run seed # my attempt to run seed before the server kicks in, but it doesn't work
npm run server
docker-compose.yml
# docker-compose.yml
version: "3"
services:
app:
build: ./server
depends_on:
- database
ports:
- 5000:5000
environment:
PSQL_HOST: database
PSQL_PORT: 5430
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
POSTGRES_USER: ${POSTGRES_USER:-postgres}
POSTGRES_DB: ${POSTGRES_DB:-elitypescript}
entrypoint: ["/bin/bash", "./entrypoint.sh"]
client:
build: ./client
image: react_client
links:
- app
working_dir: /home/node/app/client
volumes:
- ./:/home/node/app
ports:
- 3001:3001
command: npm run start
env_file:
- ./client/.env
database:
image: postgres:9.6.8-alpine
volumes:
- database:/var/lib/postgresql/data
ports:
- 3030:5439
volumes:
database:
Try this Dockerfile:
FROM node:10.6.0
COPY . /home/app
WORKDIR /home/app
COPY package.json ./
RUN npm install
ENV DOCKERVERSION=18.03.1-ce
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz
EXPOSE 5000
You are trying to run a Docker container inside another Docker container. In most cases this is a bad approach and you should avoid it. But if you really need it, and you really understand what you are doing, you have to use Docker-in-Docker (dind).
As far as I understand, you need to run the script CREATE DATABASE elitypescript. The better option would be to apply the sidecar pattern: run another container with a PostgreSQL client that runs your script.
Link the containers together and connect using the hostname.
# docker-compose
services:
  app:
    links:
      - database
    ...
then just:
# entrypoint.sh
# the database container is available under the hostname database
psql -h database -p 3030 -U postgres -c "CREATE DATABASE elitypescript"
Links are a legacy option, but easier to use than networks.
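One more caveat: the entrypoint can still fire before Postgres is ready to accept connections, so the CREATE DATABASE and the seed may need to wait. A sketch of the entrypoint with a readiness loop, assuming the postgres client tools (psql, pg_isready) are installed in the app image and that the database container listens on its default port 5432 inside the compose network:
# entrypoint.sh
export PGPASSWORD="$POSTGRES_PASSWORD"
# wait until the database service accepts connections
until pg_isready -h database -p 5432 -U postgres; do
  echo "Waiting for Postgres..."
  sleep 1
done
psql -h database -p 5432 -U postgres -c "CREATE DATABASE elitypescript"
npm run seed
npm run server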

Bash: sudo: command not found

I have up-and-running containers and I wish to perform a database backup. Apparently, a simple command inside the container such as sudo mkdir new_folder results in: bash: sudo: command not found
What have I tried (on an intuitive level): I accessed one of the running containers with docker exec -i -t 434a38fedd69 /bin/bash and ran
apt-get update
apt-get install sudo
Then I exited back to docker and tried to perform sudo mkdir new_folder, but I got the same message: bash: sudo: command not found
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ mkdir new_folder
mkdir: cannot create directory ‘new_folder’: Permission denied
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ sudo mkdir new_folder
bash: sudo: command not found
BTW, I'm not sure if this is relevant but the docker-compose file I was using is:
version: '2'
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: changeme
      PGDATA: /data/postgres
    volumes:
      - /data/postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_postgres
  pgadmin:
    links:
      - postgres:postgres
    image: fenglc/pgadmin4
    volumes:
      - /data/pgadmin:/root/.pgadmin
    ports:
      - "5050:5050"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_pgadmin
networks:
  postgres:
    driver: bridge
First, nothing you do in a docker exec is persistent outside of that particular running container (copy of the image), so if you want future containers run from that image to include sudo, those apt-get commands need to go into the Dockerfile that builds the image. Which, since you're using docker-compose, would require you to first make a Dockerfile and specify its location in the YAML.
Second, what do you mean "exit back to docker"? Nothing you do inside a container is going to have any effect on the system that Docker itself is running on, but it looks like you're running software install commands inside a Docker container and then expecting that to result in the newly-installed software being available outside the container on the Windows system that is running Docker.
To do a backup of the postgres database in the container, you first have to enter the container (similar to how you do it):
docker exec -it postgres bash
(substitute postgres with the real container name you get from docker-compose ps)
Now you are in the container as root. That means, you don't need sudo for anything. Next create your backup folder:
mkdir /tmp/backup
Now run the backup command, from a quick Google I found the following (you might know better):
pg_dumpall > /tmp/backup/filename
Then exit the shell within the container by typing exit. From your host system run the following to copy the backup file out of the container:
docker cp postgres:/tmp/backup/filename .
(postgres is your container name again)
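If you'd rather not leave the dump inside the container at all, the same backup can be taken in one step from the host (a sketch using the container_name from the compose file above, relying on the image's default trust setting for local connections):
# run pg_dumpall inside the container, write the dump on the host
docker exec xx_postgres pg_dumpall -U postgres > backup.sql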

Docker compose containers fail and exit with code 127 missing /bin/env bash

I'm new to Docker, so bear with me if I use the wrong terms.
I have Docker Tools installed on Windows 7 and I'm trying to run a Docker Compose file for a proprietary existing project stored in a git repository, which has probably only ever been run on Linux.
These are the commands I ran:
1. docker-machine start
2. docker-machine env
3. @FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i (this was output by step 2)
4. docker-compose -f <docker-file.yml> up
Most of the Docker work has gone fine (image download, extraction, etc).
It is failing at container start, where some containers run fine - I recognize a working MongoDB instance since its log doesn't report any error - but other containers exit pretty soon with an error code, i.e.:
frontend_1 exited with code 127
Scrolling up a bit in the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching docker-compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:
It turned out it was a matter of file line endings, caused by git clone, as pointed out by @mklement0 in his answer to the "env: bash\r: No such file or directory" question.
Disabling core.autocrlf then recloning the repo solved it.
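Spelled out, the fix amounts to something like this (the repository URL is a placeholder):
git config --global core.autocrlf false   # stop Git from rewriting LF to CRLF on checkout
git clone <repository-url>                # a fresh clone now keeps the scripts' LF endings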
