I've created a wercker setup with "build" and "docker" pipelines. The two pipelines use different boxes. The "docker" pipeline copies the compiled code from $WERCKER_SOURCE_DIR/target to the correct folders, and its last step is an internal/docker-push.
When I inspect the resulting Docker image, I still see all the sources in its /pipeline/source folder.
I've tried adding a script step that runs rm -rf /pipeline after the docker-push, but on inspection /pipeline (with all the sources, build cache, and scripts) is still part of the resulting image.
Is there any way to clean this up and create a leaner image? I just want my compiled code in the image.
Thanks,
Danny
I've moved the step that removes the /pipeline folder to before the internal/docker-push step, and this works out fine.
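For anyone hitting the same thing, the relevant part of my "docker" pipeline now looks roughly like this (a sketch from memory; the box, repository, and cmd values are placeholders, and the registry credentials are assumed to come from environment variables):

docker:
  box: some/runtime-box
  steps:
    - script:
        name: copy compiled code into place
        code: cp -r $WERCKER_SOURCE_DIR/target/* /app/
    - script:
        name: remove pipeline sources before pushing
        code: rm -rf /pipeline
    - internal/docker-push:
        username: $DOCKER_USERNAME
        password: $DOCKER_PASSWORD
        repository: myuser/myapp
        cmd: /app/run.sh

The key point is simply that the rm -rf /pipeline script step comes before internal/docker-push, so the snapshot taken for the pushed image no longer contains the sources.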
My docker build is failing due to the following error:
COPY failed: CreateFile \?\C:\ProgramData\Docker\tmp\docker-builder117584470\Aeros.Services.Kubernetes\Aeros.Services.Kubernetes.csproj: The system cannot find the path specified.
I am fairly new to Docker and went with the basic project template that is set up when you create a Kubernetes container project, so I figured it would work out of the box, but I was mistaken.
I'm having problems figuring out what it's attempting to do in the temp directory structure and why it is failing. Can anyone offer some assistance? I've done some searching, and others have said the default Docker template in Visual Studio was incorrect, but I'm not seeing any of the files being copied over to the temp directory to begin with, so figuring out what is going on has been rather problematic.
Here is the Dockerfile; the only thing I've added is a publishingProfile arg so I can tell it which profile to use in the build and publish steps:
ARG publishingProfile
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
COPY ["Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"]
RUN dotnet restore "Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj"
COPY . ./
WORKDIR "/src/Aeros.Services.Kubernetes"
RUN dotnet build "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app
FROM build AS publish
RUN dotnet publish "Aeros.Services.Kubernetes.csproj" -c $publishingProfile -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "Aeros.Services.Kubernetes.dll"]
I haven't touched the YAML file, but if you need it I can provide it as well. Again, all I've done is add a few NuGet package references to the project.
docker build . --build-arg publishingProfile=Release
is failing with the error mentioned above.
Can someone be so kind as to offer some enlightenment? Thanks!
Edit 1:
I am executing this from the project's folder via a PowerShell command line.
Leandro's comments helped me come across the solution.
First, a rundown of that COPY command: it takes two parameters, source and destination.
The Dockerfile template from Visual Studio includes the folder location of the .csproj file it is attempting to copy. In my case, the command read as follows:
COPY ["Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"]
So it is looking for my Aeros.Services.Kubernetes.csproj file in the Aeros.Services.Kubernetes project folder and copying it to the Aeros.Services.Kubernetes folder under /src in the image.
The problem is that in the default setup, the Dockerfile sits inside the project folder. If you execute the docker build from within the project folder, the COPY command is looking in the wrong location. For instance, if your project is TestApp.csproj located in the TestApp project folder, and you run the docker build for that Dockerfile from the same folder, the COPY command:
COPY ["TestApp/TestApp.csproj", "TestApp/"]
is actually looking for: TestApp/TestApp/TestApp.csproj.
The correct syntax for the COPY command in this situation should be:
COPY ["TestApp.csproj", "TestApp/"]
since you are already within the TestApp project folder.
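Alternatively, you can keep the template's COPY paths as they are and run the build from the solution folder (one level up), pointing Docker at the Dockerfile inside the project; TestApp is a placeholder name here:

docker build -f TestApp/Dockerfile --build-arg publishingProfile=Release .

This matches how Visual Studio itself invokes the build, with the solution folder as the build context, which is why the template's paths work from the IDE but not from inside the project folder.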
Another problem with the default template that may trouble some people is that it doesn't copy the rest of the project's files either, so once you get past the COPY and dotnet restore steps, you will fail during the build with a:
CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point
This is resolved by adding:
COPY . ./
following your RUN dotnet restore command to copy your files.
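Putting both fixes together, the build stage of the Dockerfile could look something like this (a sketch, not my verbatim final file; I've hardcoded Release and pointed the second COPY at the project subfolder so the sources land next to the restored project file):

FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src
# The build context is the project folder itself, so the .csproj sits at its root
COPY ["Aeros.Services.Kubernetes.csproj", "Aeros.Services.Kubernetes/"]
RUN dotnet restore "Aeros.Services.Kubernetes/Aeros.Services.Kubernetes.csproj"
# Copy the remaining sources next to the restored project file
COPY . ./Aeros.Services.Kubernetes/
WORKDIR "/src/Aeros.Services.Kubernetes"
RUN dotnet build "Aeros.Services.Kubernetes.csproj" -c Release -o /app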
Once these pieces have been addressed in the default template provided, everything should be functioning as expected.
Thanks for the help!
On which line does the problem happen? I do not remember whether docker build shows it.
Where are you executing this build? The problem is that it is not finding the file you are trying to copy; the file has to be local to where the command is executed.
I see it now: the problem is in the first COPY.
I have a Google Cloud Container Builder build with the following steps:
gcr.io/cloud-builders/mvn to run mvn clean package
gcr.io/cloud-builders/docker to create a Docker image
My Docker image includes and will run Tomcat.
Both these steps work fine independently.
How can I copy the artifacts built in step 1 into the correct folder of my Docker image? I need to move either the built wars or specific lib files from step 1 into the Tomcat directory in my image.
Echoing the contents of the /workspace and /root directories from my Dockerfile doesn't show the artifacts. I think I'm misunderstanding the relationship.
Thanks!
Edit:
I ended up changing the Dockerfile to set the WORKDIR to /workspace and then
COPY /{files built by maven} {target}
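To illustrate the idea, a minimal pair of files along these lines should work (a sketch; the image tag, war file name, and Tomcat paths are placeholders):

# cloudbuild.yaml: both steps run in /workspace, which persists between them
steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['clean', 'package']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']

# Dockerfile: the build context is /workspace, so Maven's output is visible
FROM tomcat:8
COPY target/my-app.war /usr/local/tomcat/webapps/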
The working directory is a persistent volume mounted in the builder containers, by default under /workspace. You can find more details in the documentation here: https://cloud.google.com/container-builder/docs/build-config#dir
I am not sure what is happening in your case, but there is an example with a Maven step and a Docker build step in the documentation of the gcr.io/cloud-builders/mvn builder: https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/mvn/examples/spring_boot. I suggest you compare it with your files.
If that does not help, could you share your Dockerfile and cloudbuild.yaml? Please make sure you remove any sensitive information.
Also, you can inspect the working directory by running the build locally: https://cloud.google.com/container-builder/docs/build-debug-locally#preserve_intermediary_artifacts
I would like to use multi-stage builds to avoid downloading all the Maven dependencies required by my Java project every time I build the app.
I am thinking of resolving the Maven dependencies in a first stage, then building the app in a second stage which would require access to the dependencies downloaded in the previous stage.
If I understood multi-stage builds correctly, I could copy files created in the first stage into the second stage, but ideally I would like to "mount" or "share" the folder from the first stage where the dependencies live instead of copying the files. Is that possible? Or is there a better way to achieve this?
Thanks.
EDIT:
This is the first stage I was thinking about:
FROM some-image-with-maven AS maven-repo
WORKDIR /workspace/
COPY pom.xml .
RUN mvn -B -f pom.xml dependency:resolve
But since the pom file will be different most of the time (because I would like to share this stage across projects), the step that resolves dependencies will download all of them again (instead of using a cached layer).
If you are not using volumes, you can only copy stuff from the first stage. With volumes you can share data between stages, which are basically separate container instances.
Since cleaning up volumes is often not handled properly, I suggest sticking to the copy strategy. There is no real benefit to using a bind mount to share data over the copy approach.
I don't believe there's a way to do this currently. To share from one build stage to the next, the only option is to COPY files from one stage's directory to the current stage.
To use the first stage as a build cache and avoid copying all the dependencies, I'd run your build in that first stage. Or you can make a second intermediate stage that is FROM stage1name if you want additional separation between the stages. The output of your build can then be copied to the final stage, avoiding the need to copy all the build dependencies, as sketched below.
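A minimal sketch of that approach, reusing the maven-repo stage from the question (the image names and app.jar are placeholders):

FROM some-image-with-maven AS maven-repo
WORKDIR /workspace/
COPY pom.xml .
RUN mvn -B dependency:resolve
# Building in the same stage reuses the dependencies resolved above
COPY src/ ./src/
RUN mvn -B package

FROM some-runtime-image
# Only the build output reaches the final stage
COPY --from=maven-repo /workspace/target/app.jar /app/app.jar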
Answering from the future...
If you are using BuildKit or something compatible (most people probably are by now), you can mount a previous stage with a bind mount. Something like this would accomplish what the original post was asking for:
FROM someimage AS build
COPY pom.xml .
RUN mvn -Dmaven.repo.local=/.m2_repository -B -f pom.xml dependency:resolve

FROM runtimeimage
COPY pom.xml .
COPY src/ ./src/
# Mount the repository from the build stage and point Maven at it
RUN --mount=type=bind,from=build,source=/.m2_repository,target=/.m2_repository \
    mvn -Dmaven.repo.local=/.m2_repository package
More to the point, there is also a cache mount that you can use instead. You would incur the cost of downloading all the deps on the first run, but subsequent runs would find them in the cache:
FROM runtimeimage
COPY pom.xml .
COPY src/ ./src/
# Cache the local repository across builds and point Maven at it
RUN --mount=type=cache,target=/.m2_repository,sharing=locked \
    mvn -Dmaven.repo.local=/.m2_repository package
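Both variants require BuildKit. On Docker versions where it is not the default builder, you can opt in per invocation (the image tag here is just a placeholder):

DOCKER_BUILDKIT=1 docker build -t my-app .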
Is it possible to make TeamCity only clean up certain files when fetching files from my git repo? I modify one file as a build step, and thus always need a clean version of that file. However, it's really unnecessary to fetch the whole repo every time, because usually only a few files are modified (so I'd rather not use the 'Clean all files before build' option).
Thanks!
To clarify, let's say I have the following structure:
- index.html
- js/script.js
- js/plugins.js
I always want to check out index.html (regardless of whether it has changed). The files in the js folder I only want to replace when they have actually been updated.
If you are using TeamCity 6.5 or above, you can use the Build Files Cleaner (Swabra) build feature. Once you have added it to your build configuration and run a clean build, it will clean any new unversioned files generated during the build, either before the next build starts or at the end of the current build.
I personally prefer to run it before the next build starts, as that lets you look at the output when trying to work out why something went wrong.
Basically, it makes sure that before each build there is nothing in the build agent's work folder that was not pulled from the repository.
I have a Maven project which performs a number of time consuming tests as part of the integration-test Maven cycle. I'm using Jenkins as the CI server.
During the integration test a number of files are produced in the target folder. For example, an "actual" BMP file is produced and compared to an "expected" BMP file. If the test fails, I need to look at the files in the target folder to determine how to deal with the error. Maybe the actual BMP looks fine and so it should be promoted to the new expected BMP. On the other hand, it may reveal a problem that requires a code fix.
The thing is I don't have any way to get access to these files, other than to ssh into the CI server and manually scp the files over to my own machine for closer inspection. It would be extremely helpful if I could access these files from the Jenkins web interface.
I tried using the build-helper-maven-plugin to attach the relevant files as Maven artifacts, but the problem is that no suitable Maven phase executes after integration-test when a test fails.
What can I do? Can I use the "Copy Artifact" plugin for this?
1) The files in the target folder can be accessed using a link such as /ws/projectname/target/filename...
2) Rather than typing the URL each time, the SideBar plugin can be used to add a link to the file in Jenkins' left-hand menu, making it easily accessible.
You need to copy your files into your workspace in a build step and archive them from there - Jenkins lets you specify artifacts only relative to the workspace.
I usually create a directory keyed by the BUILD_ID in the workspace, so that artifacts from different builds do not get mixed up in case I do not clean the workspace, and archive from there (specifying ${BUILD_ID}/**/* in the archiving step).
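As a rough sketch, a shell build step like this stages the comparison images before archiving (target/*.bmp is a placeholder for whatever files your tests actually produce):

# Stage test outputs under a per-build directory inside the workspace
mkdir -p "${BUILD_ID}"
cp target/*.bmp "${BUILD_ID}/"

Then point the archiving step at ${BUILD_ID}/**/* as described above.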
In case your build fails before it can run the copying step and therefore never does the copy, take a look at this question.