I have created .ddev/web-build/Dockerfile. It builds, but it doesn't do what I want. How do I figure out why? DRUD_DEBUG=1 ddev start doesn't show any additional output. I don't know the proper docker-compose command to try building it manually either. This does not work:
cd .ddev
docker-compose build web
because it needs specific environment variables that DDEV injects.
On current versions of ddev (v1.18+) you can
cd .ddev
~/.ddev/bin/docker-compose -f .ddev-docker-compose-full.yaml build web --no-cache
and you'll see the full build go by. You can use additional build arguments as well to change the output format, etc. For example, ~/.ddev/bin/docker-compose -f .ddev-docker-compose-full.yaml build web --no-cache --progress=plain
You can review the entire Dockerfile that's being built at .ddev/.webimageBuild/Dockerfile.
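If you just want to check what ended up in the generated file, a quick look at the path mentioned above is enough:
cat .ddev/.webimageBuild/Dockerfile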
See the DDEV docs for more detail.
I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of my required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting so the script can finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json ./
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume and overwrites the image's code with a bind mount. If you update your image, the container will still get the old node_modules directory from the anonymous volume and ignore the image updates. Delete those volumes: entries and use the code that's built into the image; see the sketch below.)
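For illustration only, the pattern to look for and remove in docker-compose.yml usually resembles this (the service name and paths are assumptions, not taken from your setup):

services:
  web:
    build: .
    volumes:
      - .:/app              # bind mount hides the code baked into the image
      - /app/node_modules   # anonymous volume pins an old node_modules

Removing both volumes: entries lets the container run the code and dependencies that were installed at image build time.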
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$#"
Make this script executable and make it the image's ENTRYPOINT, leaving the CMD as the command that actually starts the application. On every container startup it will run the migrations and then run the main container command, whatever it may be.
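A sketch of that wiring in the Dockerfile (the entrypoint.sh filename and the node CMD are assumptions; use whatever matches your project):

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "index.js"]   # or whatever actually starts your app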
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up -d --force-recreate
I have a project set up to run locally in Docker with docker-compose. Until recently, it's been working fine. I don't believe I changed anything that should affect this (except maybe a VS upgrade?), and I've even tried rolling back to an older commit. In all cases, I'm now getting an error message, which appears in Visual Studio's output window as:
docker exec -i f93fb2962a1e sh -c ""dotnet" --additionalProbingPath /root/.nuget/packages --additionalProbingPath /root/.nuget/fallbackpackages "bin/Debug/netcoreapp3.1/MattsTwitchBot.Web.dll" | tee /dev/console"
sh: 0: getcwd() failed: No such file or directory
It was not possible to find any installed .NET Core SDKs
Did you mean to run .NET Core SDK commands? Install a .NET Core SDK from:
https://aka.ms/dotnet-download
I've tried a variety of different things (changing the base image in the Docker file, deleting old images and containers, and more) but I keep getting the same error message. The weird thing is that when I do a File->New, Visual Studio generates a very similar looking Docker file and it works fine. I have no idea what the problem is, but I'm hoping someone here can spot it.
My full repo is available on GitHub. Here is the Dockerfile for the ASP.NET Core project:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["MattsTwitchBot.Web/MattsTwitchBot.Web.csproj", "MattsTwitchBot.Web/"]
COPY ["MattsTwitchBot.Core/MattsTwitchBot.Core.csproj", "MattsTwitchBot.Core/"]
RUN dotnet restore "MattsTwitchBot.Web/MattsTwitchBot.Web.csproj"
COPY . .
WORKDIR "/src/MattsTwitchBot.Web"
RUN dotnet build "MattsTwitchBot.Web.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MattsTwitchBot.Web.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MattsTwitchBot.Web.dll"]
and the docker-compose for the solution (even without the Couchbase stuff I'm getting the same error, but I'm pasting it here for completeness):
version: '3.4'

services:
  couchbase:
    image: couchbase:6.5.0-beta2
    volumes:
      - "./couchbasetwitchbot:/opt/couchbase/var" # couchbase data folder
    ports:
      - "8091-8096:8091-8096" # https://docs.couchbase.com/server/current/install/install-ports.html
      - "11210-11211:11210-11211"
  mattstwitchbot.web:
    image: ${DOCKER_REGISTRY-}mattstwitchbotweb
    build:
      context: .
      dockerfile: MattsTwitchBot.Web/Dockerfile
    environment:
      Couchbase__Servers__0: http://couchbase:8091/ # Reference to the "couchbase" service name on line 4
    depends_on:
      - couchbase # Reference to the "couchbase" service name on line 4
    command: ["./wait-for-it.sh", "http://couchbase:8091"]
I don't have enough reputation to comment but I think it might be your .csproj file. You mentioned you upgraded Visual Studio. Since the .csproj file contains information about the project (including references to system assemblies), and you are copying it in your Dockerfile, it's possible that:
The .csproj file needs to be updated since you updated VS.
The dotnet core version in your dockerfile 'FROM' statement is a different version than what you're using locally.
Maybe test this by starting a new project and adding your source, then do a diff on the old and new .csproj files. You can also back up the original and try modifying the .csproj file manually. I found a blog post that demonstrates upgrading a VS2015 .csproj file to VS2017. Hopefully it helps.
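For example, a rough way to do that comparison from the command line (FreshApp is just a placeholder project created with whatever template your current SDK provides):

dotnet new mvc -o FreshApp
diff FreshApp/FreshApp.csproj MattsTwitchBot.Web/MattsTwitchBot.Web.csproj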
Because I don't have enough reputation, I cannot comment on your question.
But one thing that puzzles me is that you are using, as your base image, an image that doesn't have the .NET SDK, so if you try to run a command that requires the SDK it looks like it will fail.
I am assuming that the container f93fb2962a1e is using the image created by the Dockerfile you posted in the question.
The getcwd() error means the shell lost its working directory. I found that completely removing the docker-compose project and the associated Dockerfile from the solution fixed the problem. It's hacky, but it works if you're in a bind.
I had a similar issue while debugging code that had previously worked. Simply closing my terminal and re-opening it resolved the issue for me. I am using WSL and had changed some environment variables, which appear to have caused the initial issues.
I believe that your current working directory got deleted, or that the path to the working directory was reset. The first option is more likely: the VS update may have removed a directory such as /tmp on your Docker machine, so it no longer exists and will only be recreated by some external event.
Alternatively, a blocked port may be preventing the connection to your Docker machine.
Check the connection to the Docker machine.
Check that the folder used as the working directory exists on the Docker machine.
If you still haven't found the problem, continue with this:
docker exec -it {containerID} /bin/sh
You can also use the official Docker debugging article. From inside the container, follow the directories that Docker is trying to access and check that they exist.
With this debugging you should be able to discover the problem. I hope this helps; see the example below.
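A rough illustration of that check, using the container ID from the error message in the question (the path is an assumption based on the posted Dockerfile):

docker exec -it f93fb2962a1e /bin/sh
pwd            # reports the same getcwd() error if the working directory is gone
ls -ld /app    # /app is the WORKDIR in the posted Dockerfile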
I'm following the current Jenkins Maven Project tutorial, which uses Docker:
https://jenkins.io/doc/tutorials/build-a-java-app-with-maven/
I keep getting this error at the Build stage:
[simple-java-maven-app] Running shell script
sh: can't create
/var/jenkins_home/workspace/simple-java-maven-app#tmp/durable-bae402a9/jenkins-log.txt:
nonexistent directory
sh: can't create
/var/jenkins_home/workspace/simple-java-maven-app#tmp/durable-bae402a9/jenkins-result.txt:
nonexistent directory
I've tried setting least restrictive permissions with chmod -R 777, chown -R nobody and chown -R 1000 on the listed directories, but nothing seems to work.
This is happening with the jenkins image on Docker version 17.12.0-ce, build c97c6d6 on Windows 10 Professional.
As this is happening with the Maven project tutorial on the Jenkins site, I'm wondering how many others have run into this issue.
I also had the same problem, on macOS.
After a few hours of research, I finally found the solution.
To solve the problem, it's important to understand that Jenkins itself runs inside a container, and when the Docker agent inside that container talks to your Docker engine, it passes volume-mount paths as they appear inside the container. But your Docker engine is outside, so for things to work correctly, a path inside the container must match the same path outside the container on your host.
To make it work correctly, you need to change two things:
docker run arguments
Jenkinsfile docker agent arguments
For my own usage, I used this
docker run -d \
--env "JENKINS_HOME=$HOME/Library/Jenkins" \
--restart always \
--name jenkins \
-u root \
-p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v $HOME/Library/Jenkins:$HOME/Library/Jenkins \
-v "$HOME":/home \
jenkinsci/blueocean
In the Jenkinsfile
Replace the agent part
agent {
    docker {
        image 'maven:3-alpine'
        args '-v /root/.m2:/root/.m2'
    }
}

with

agent {
    docker {
        image 'maven:3-alpine'
        args '-v <host_home_path>/.m2:/root/.m2'
    }
}
It's quite likely that this issue resulted from a recent change in Docker behaviour, which was no longer being handled correctly by the Docker Pipeline plugin in Jenkins.
Without going into too much detail, the issue was causing Jenkins to no longer be able to identify the container it was running in, which results in the errors (above) that you encountered with these tutorials.
A new version (1.15) of the Docker Pipeline plugin was released yesterday (https://plugins.jenkins.io/docker-workflow).
If you upgrade this plugin on your Jenkins (in Docker) instance (via Manage Jenkins > Manage Plugins), you'll find that these tutorials should start working again (as documented).
The error message means that the directory durable-bae402a9 was not created.
Walk back through the tutorial to find the step that should have created that directory, and make whatever changes are needed to make sure it succeeds.
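If you want to confirm that quickly from the host, something along these lines might help (the container name is a placeholder for whatever you called your Jenkins container):

docker exec -it <jenkins-container> ls -ld /var/jenkins_home/workspace/simple-java-maven-app
docker exec -it <jenkins-container> ls /var/jenkins_home/workspace/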
My scenario is as follows:
I need to add the "project" folder to the Docker container for the production build, but for the development build I want to mount a local volume onto the project folder of the container,
e.g. ADD project /var/www/html/project in production,
and nothing in development (I can copy a dummy folder in development).
If I copy the whole project folder to the container in development, then any change in the project folder will invalidate the Docker cache of the layers after the ADD command. It will take time to build the Docker image in development.
I want to use the same Dockerfile for both environments.
To achieve that I used ADD $PROJECT_DIR /var/www/html/project in the Dockerfile, where $PROJECT_DIR is an environment variable.
Setting the environment variable in the Dockerfile, like ENV PROJECT_DIR project or ENV PROJECT_DIR dummy-folder, adds the correct folder to the container, but it requires me to change the Dockerfile each time.
I can also pass a "build-arg" parameter when building the Docker image, like
docker build -t myproject --build-arg "BUILD_TYPE=PROD" --build-arg "PROJECT_DIR=project" .
As the variables BUILD_TYPE and PROJECT_DIR are related, I want to set the PROJECT_DIR variable based on BUILD_TYPE. This will prevent the case where I forget to change one of the parameters.
For setting the PROJECT_DIR variable, I wrote the following script, "set_config_path.sh":
if [ "$BUILD_TYPE" = "PROD" ]; then
    PROJECT_DIR="project";
else
    PROJECT_DIR="dummy-folder";
fi
I then run the script in the Dockerfile using
RUN . /root/set_project_folder.sh
Doing this, the set_project_folder.sh script can access the BUILD_TYPE variable, but PROJECT_DIR is not reflected back in the Dockerfile.
When running set_project_folder.sh in my local machine's terminal, the PROJECT_DIR variable is changed, but it does not work with the Dockerfile.
Is there any way to change an environment variable from a subshell script, e.g. "set_config_path.sh" in the question above?
If it is possible, it could be used in many cases to make the Docker build dynamic.
Am I doing anything wrong here?
OR
Is there another good way to achieve this?
You can use something like the following:
FROM alpine
ARG BUILD_TYPE=prod
ARG CONFIG_FILE_PATH=config-$BUILD_TYPE.yml
RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
CMD echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
The output would be like
Step 4/4 : RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
---> Running in b5de774d9ebe
BUILD_TYPE=prod CONFIG_FILE_PATH=config-prod.yml
But if you run the image
$ docker run 9df23a126bb1
BUILD_TYPE= CONFIG_FILE_PATH=
This is because build args are not persisted as environment variables. If you want to persist these variables in the image as well, then you need to add the line below:
ENV BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH
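Placed in context, the Dockerfile above would then look something like this (the ENV line added right after the ARGs):

FROM alpine
ARG BUILD_TYPE=prod
ARG CONFIG_FILE_PATH=config-$BUILD_TYPE.yml
ENV BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH
RUN echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"
CMD echo "BUILD_TYPE=$BUILD_TYPE CONFIG_FILE_PATH=$CONFIG_FILE_PATH"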
And now docker run will also output
$ docker run c250a9d1d109
BUILD_TYPE=prod CONFIG_FILE_PATH=config-prod.yml
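And to flip both values for a dev build, it should be enough to override just BUILD_TYPE at build time, since the CONFIG_FILE_PATH default is derived from it (the image name here is only an example):

docker build -t myproject --build-arg BUILD_TYPE=dev .
docker run myproject
# expected: BUILD_TYPE=dev CONFIG_FILE_PATH=config-dev.yml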
I am looking for a solution to a simple configuration problem; it has been nagging me for quite some time now. :)
I have a golang project on GitHub which gives me a static binary, and it uses godep.
Now I want to ensure that the godep go install ... command can be run after a git clone, and that a Docker container can then be built locally from this newly built binary.
As an option, the user should be able to push it to docker hub or a private repo as applicable.
I am thinking of using Makefiles, but that seems too complicated (set the gopath, then godep build, modify the Dockerfile dynamically to point to the place where the binary is located, then a docker build).
Is there an easier way to do it?
So far, when I've been in your situation, I've always come up with a Makefile for doing all the work, which, like you said, has never been simple. I've done it using at least two different approaches, depending on the level of dependency you want between the build process and the development environment.
The simplest way is, as you say, to just put the steps you would do by yourself into a Makefile. You can use Dockerfile ARGuments to parameterize the binary name so you don't have to modify the Dockerfile during the build.
Here you have a quick and dirty (working) Makefile I've just made up for getting you started:
APP_IMAGE=group/example
APP_TAG=1.0
APP_BINARY=example

.PHONY: clean image binary

all: image

clean:
	if [ -r $(APP_BINARY) ]; then rm $(APP_BINARY); fi
	if [ -n "$$(docker images -q $(APP_IMAGE):$(APP_TAG))" ]; then docker rmi $(APP_IMAGE):$(APP_TAG); fi

image: binary
	docker build --build-arg APP_BINARY=$(APP_BINARY) -t $(APP_IMAGE):$(APP_TAG) $(dir $(abspath $(lastword $(MAKEFILE_LIST))))

binary: $(APP_BINARY)

$(APP_BINARY): main.go
	go build -o $@ $^
That Makefile expects to be in the same directory as the Dockerfile.
Here is a minimal one (which works):
FROM alpine:3.5
ARG APP_BINARY
ADD ${APP_BINARY} /app
ENTRYPOINT /app
I've tested this with a main.go in the same top-level project directory as both the Makefile and the Dockerfile; in case you have a main.go nested inside some inner directory ("ROOT/cmd/bla" is commonplace), you should change the "go build" line to account for that, as sketched below.
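For instance, with the hypothetical "cmd/bla" layout mentioned above, the last rule might become:

$(APP_BINARY): cmd/bla/main.go
	go build -o $@ ./cmd/bla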
Although I've been doing things like that for a while, now that I see (and think about) your question, I've come to realize that a dedicated tool which knows specifically how to do this could be great to have. Specifically, a tool that mimics "go build/get/install" but can build Docker images, so that where you can run the following command to get a binary:
go install github.com/my/program
You could also run the following command to get a simple docker image:
goi install github.com/my/program
How does that sound? Is there something like that in existence? Should I get started?
Provide a base image (with GOPATH, godep, ... pre-configured).
Create a Dockerfile based on that base image; then the user only needs to alter the COPY path.
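A minimal sketch of that idea (image names, paths, and the final CMD are made up for illustration):

# Dockerfile.base — built and pushed once as yourorg/godep-base
FROM golang:1.8
RUN go get github.com/tools/godep

# Dockerfile — per project; the user only changes the COPY/WORKDIR path
FROM yourorg/godep-base
COPY . /go/src/github.com/my/program
WORKDIR /go/src/github.com/my/program
RUN godep go install ./...
CMD ["/go/bin/program"]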