Best way to debug a docker build error - debugging

I am trying to build a docker image from a Dockerfile that currently has, let's say, 20-30 commands. If the build fails with an error at the 25th command, I have to run the whole build process again from the start.
Is there a proper approach to building docker images that avoids this?
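Not an answer from the original thread, but a sketch of how the layer cache already helps here: every instruction that completed successfully is cached, so rerunning docker build skips the unchanged steps and resumes near the failure, and with the classic (non-BuildKit) builder you can start a shell from the last successful intermediate image to debug the failing command. The image id below is hypothetical; copy it from your own build output.
# Unchanged, previously successful steps are served from the layer cache on a rerun
docker build -t myapp .
# Debug the failing step interactively, starting from the last successful layer (hypothetical id)
docker run --rm -it 3f2a1b4c5d6e /bin/sh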

Related

Docker on windows generates different image layer id for COPY command

Environment:
Docker on Windows
Code in question: COPY configure_my_app.bat .
Expected: If I run docker build multiple times, I expect the image id to be the same across runs of docker build, given that there is no change in the Dockerfile.
What happens: The image id after the above command changes for each run of docker build.
I read somewhere a while ago that this might have something to do with file attributes/permissions. How can I get around this? I want to use layers from the cache if the Dockerfile has not changed.
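A sketch, not from the original post, for narrowing this down: build twice and compare the layer histories; the first layer whose id differs between runs is where the cache breaks, which for COPY points at the file metadata Docker includes in its checksum.
docker build -t project:one .
docker build -t project:two .
# The first differing layer id shows where the cache stops being reused
docker history project:one
docker history project:two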

Windows automated batch file for building docker images

You might ask why I want to automate that: because I'm tired of updating 4-5 repos every week and pushing them to Docker manually. So I want to use a simple Windows batch file to automate that.
I have my Dockerfile and my commands:
docker build -t project .
docker tag project repo/project:tag
docker push repo/project:tag
The problem I have is that, for example, the build command takes a bit of time, and each command should only run once its immediate predecessor has finished.
START /B
somehow did not work.
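A minimal sketch of such a batch file, not taken from the original post: cmd already runs each line to completion before moving to the next, so START /B (which launches the command asynchronously) is not needed, and the errorlevel checks stop the script as soon as one step fails. The image name, repo and tag are the placeholders from the question.
@echo off
REM Each command runs to completion before the next line executes
docker build -t project .
if errorlevel 1 exit /b 1
docker tag project repo/project:tag
if errorlevel 1 exit /b 1
docker push repo/project:tag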

Committing a docker container after a failed build

I'm trying to use the Docker plugin in Jenkins to commit a docker container when the build running on it fails. Currently I have a Jenkins server with ~15 nodes, each with its own docker cloud. The nodes all have the latest version of docker-ce installed. I have a build set up to run on a docker container. What I want to do is commit the container when the build fails. Below are the things I have tried:
Adding a post build task, where I obtain the container ID and the hostname of the node running the container. I then SSH into the node and commit the container.
The problem: I am not able to SSH from inside a container, as it requires a password, and there is no way to add the node to the container's list of known hosts.
Checking the "commit container" box in the build's general configurations
The problem: This is probably working, but I don't know where the container is being committed to. Also, this happens on every build, not just when the build fails.
Using the build script
Same problem as with the post build task.
Execute a docker command (Build step)
This option asks for the container ID, which I have no way of knowing, as it is new every time a build runs.
Please let me know if I have misunderstood any of the above ways! I am still new to Jenkins and Docker so I am learning as I go. :)
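Not part of the original question, but one possible sketch: since the nodes run docker-ce, a post build shell step executed on the node itself (rather than inside the build container) can find and commit the container without SSH. The "jenkins_build" label and the target image name are hypothetical; BUILD_TAG, JOB_NAME and BUILD_NUMBER are standard Jenkins environment variables, and the step is assumed to run only when the build has failed.
# Locate this build's container via a label applied when the agent container was started
# ("jenkins_build" is a hypothetical label name)
CONTAINER_ID=$(docker ps -q --filter "label=jenkins_build=${BUILD_TAG}")
# Commit it under a name derived from the job (assumes JOB_NAME is a valid lowercase image name)
docker commit "$CONTAINER_ID" "failed-builds/${JOB_NAME}:${BUILD_NUMBER}"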

Building and running a docker image for a Go executable

I am trying to containerize my Go application using Docker. I have a fully working executable running on my system. To run it in a container I have created a Dockerfile.
FROM golang:1.7
EXPOSE "portno"
I have kept my Dockerfile very simple because I already have an executable file on my system. Please help me with what I should add to get the Go app running. I am not able to run the Go app, as many of the files are not getting copied into the container.
You need to add your executable file to your container using the ADD command:
ADD ./app /go/bin/app
And then you need to tell docker that it should be executed as the main container process:
CMD ["/go/bin/app"]
Note that it may be better to build your application from the source code inside your container. It can be done when you build your docker image.
As an example, see this article for more information: http://thenewstack.io/dockerize-go-applications/
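A minimal sketch of that approach, not from the original answer, assuming the source lives in the build context, its dependencies are vendored or standard-library only, and the package name "app" and port 8080 are placeholders:
FROM golang:1.7
# Copy the source into the GOPATH and compile it inside the image
WORKDIR /go/src/app
COPY . .
RUN go build -o /go/bin/app .
EXPOSE 8080
CMD ["/go/bin/app"]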

Creating docker images on successful TeamCity build

I'm currently trying to simulate a situation where I can make a docker image after a successful build in TeamCity. I'm using Docker Hub to store my docker images and build them. After that, I web-hooked them to Tutum (Docker Cloud) to eventually push them into Microsoft Azure.
What is the best practice to make sure there is always a valid docker image in my repo on Docker Hub? I'm running several tests in TeamCity and want to create a Docker image when the build is successful. The TeamCity server is not running a docker host, but my project has a Dockerfile.
Any ideas?
Thanks in advance,
Tim
You can pull the last successful build's artifacts into your Dockerfile with the ADD command:
ADD http://{{TeamcityUrl}}/guestAuth/repository/download/{{BuildName}}/latest.lastSuccessful/dist.zip /{{DockerFolder}}
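To put that line in context, here is a hedged sketch around it; the base image, the unzip install step and the use of a temporary file are assumptions, and the {{...}} placeholders are the ones from the answer. Note that ADD fetches remote URLs but does not unpack remote archives, so an explicit unzip step is needed.
FROM ubuntu:16.04
# unzip is required because ADD does not auto-extract archives fetched over HTTP
RUN apt-get update && apt-get install -y unzip && rm -rf /var/lib/apt/lists/*
ADD http://{{TeamcityUrl}}/guestAuth/repository/download/{{BuildName}}/latest.lastSuccessful/dist.zip /tmp/dist.zip
RUN unzip /tmp/dist.zip -d {{DockerFolder}} && rm /tmp/dist.zip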
