I'm currently using Docker for Windows with Bamboo. The problem I'm running into is that when a Docker task fails, the error log only shows that the task failed; it doesn't show the error message from inside Docker, such as compilation errors, access denied, or out of memory. Is there any way to get to these error messages?
You can set up a task in the "Final tasks" area of your job. A final task executes regardless of whether the preceding tasks in the job failed or not.
Depending on how you are running Docker in Bamboo, you can make this a Docker task or a script task and use the docker logs command to output the logs from the container. That should tell you why the container failed to run (e.g., a build error, a run-time error, or an initialization failure).
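As a rough sketch, such a final script task could look like the following, where bamboo-build is a placeholder for whatever name your Docker task gives the container:
#!/bin/bash
# Final task: dump the container's output so the real failure is visible.
# "bamboo-build" is a hypothetical container name; substitute your own.
docker logs bamboo-build 2>&1 || true
The trailing || true keeps the final task itself from failing if the container has already been removed.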
I'm running a bash script in an Azure pipeline. I am trying to rerun the npm publish step up to x times if any network error is detected.
Is there any way to detect a network error specifically and rerun the whole task?
I've found this document (https://learn.microsoft.com/en-us/azure/devops/release-notes/2021/sprint-195-update#automatic-retries-for-a-task), but I believe it reruns the task regardless of the error type.
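In the absence of built-in error-type filtering, one way to approximate this is a small wrapper script that retries only when the failure looks network-related. This is just a sketch; the error codes matched below (ENOTFOUND, ETIMEDOUT, ECONNRESET) are assumptions about what npm prints on network failures, and the retry count is arbitrary:
#!/bin/bash
# Retry npm publish up to MAX_TRIES times, but only for network-looking errors.
MAX_TRIES=3
for attempt in $(seq 1 "$MAX_TRIES"); do
  if output=$(npm publish 2>&1); then
    echo "$output"
    exit 0
  fi
  echo "$output"
  # Assumed markers of a network failure in npm's output.
  if ! echo "$output" | grep -qE 'ENOTFOUND|ETIMEDOUT|ECONNRESET'; then
    exit 1   # non-network error: fail immediately
  fi
  echo "Network error detected, retrying ($attempt/$MAX_TRIES)..."
done
exit 1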
I am using an Azure DevOps pipeline in which one task creates a KVM guest VM through Packer inside the host and, once the VM is created, runs a bash script to check the status of the services running inside the guest VM.
If any service is not running or throws an error, this bash script will exit with code 3, as I have added the following to the script:
set -e
I want the task to fail if the above bash script fails. The issue is that the KVM guest VM is created in the same task, and booting it up and shutting it down throws expected errors; I don't want the task to fail due to those errors, only when the bash script fails.
I have selected the "Fail on Standard Error" option on the task.
But I'm not sure how to fail the task specifically on a bash script error. Does anyone have suggestions?
You can use the exit 1 command to make the bash task fail, and it is often a command you'll issue right after an error is logged.
Additionally, you may also use logging commands to customize the error message. Refer to the sample below.
#!/bin/bash
# Write a custom error message to the pipeline log, then exit non-zero
# so the task is marked as failed.
echo "##vso[task.logissue type=error]Something went very wrong."
exit 1
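Applied to the KVM scenario above, the same idea lets you ignore the expected boot/shutdown noise and fail the task only on the service check itself. A sketch, where check_services.sh stands in for the script from the question:
#!/bin/bash
# ... VM creation and boot happen earlier in the task; their expected
# errors are ignored because we never exit non-zero for them.
if ! ./check_services.sh; then
  echo "##vso[task.logissue type=error]Service check failed inside the guest VM."
  exit 1
fi
exit 0
With this approach you can leave "Fail on Standard Error" unchecked, so the VM's expected stderr output does not fail the task.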
In an Azure DevOps build pipeline running on a self-hosted Windows agent, I am trying to execute a tool that runs a Docker container.
Unfortunately, I get this error:
Failed to start: failed to create container: Error response from daemon: CreateFile c:\Users\BUILDAGENT\.aerokube\selenoid: Access is denied.
The build agent is configured with its own Windows local user, BUILDAGENT, so it has permissions on the C:\Users\BUILDAGENT\ folder.
Looking at the process manager, I see that, except for com.docker.service, the other Docker processes are running as the user who launched Docker Desktop (my coworker).
If I restart Windows and relaunch Docker myself, the settings selected by my coworker ("Disk Image Location", for instance) are not restored...
Is there a way to make Docker run as a daemon on startup with a specific user (a service or system user, but not mine or my coworker's)?
Once this is done, I guess I just have to give that specific user permissions on the C:\Users\BUILDAGENT\ folder to solve my issue, right?
Update:
I added my BUILDAGENT user to the docker-users group, and it solved the permission issue, but I would still like to run Docker as a service instead of logging in as my local user to launch it with its GUI...
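For reference, adding the user to that group can be done from an elevated command prompt; a sketch of the command:
REM Add the build agent's local user to the docker-users group.
net localgroup docker-users BUILDAGENT /add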
but I would still like to run Docker as a service instead of logging in as my local user to launch it with its GUI
You could create a Task Scheduler task to run Docker with that specific user when your PC starts.
Please check this thread, How to create an automated task using Task Scheduler on Windows 10, for more details.
In this case, Docker will start automatically every time you start your computer.
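A sketch of such a scheduled task, created from an elevated prompt; the task name and the Docker Desktop install path are assumptions, so adjust them for your machine:
REM Start Docker Desktop at system startup as the BUILDAGENT user
REM (schtasks will prompt for that user's password).
schtasks /Create /TN "StartDockerDesktop" /SC ONSTART /RU BUILDAGENT /TR "\"C:\Program Files\Docker\Docker\Docker Desktop.exe\""
Keep in mind that Docker Desktop is a GUI application, so starting it this way is a workaround rather than a true Windows service.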
We are constantly getting an error while starting our Beam Go SDK pipeline (driver program) from a Docker image, although it works when started from a local machine / VM instance. We are using the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set with the service account for our GCP cluster. When running the job locally, it gets submitted to Dataflow and completes successfully.
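For context, that variable simply points at a service-account key file; the path below is a placeholder:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json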
DOCKER SETUP:
The build image used is FROM golang:1.14-alpine. When we pack the same program with a Dockerfile and try to run it, it fails with this error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"), skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff: "back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to figure out what's failing. After multiple retries, we were able to rule out any permission/access related issues with the pods. Not sure what else could be the problem here.
After multiple attempts, we decided to start the job manually from a new Debian 10 based VM instance, and it worked. This brought to our notice that the alpine-based golang image we were using in Docker may not have all the dependencies required to start the job.
On the golang Docker Hub page, we found golang:1.14-buster, where buster is the codename for Debian 10. Using that for the docker build solved the issue. Self-answering here to help anyone else facing the same problem.
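A minimal sketch of the change; the build paths and binary name below are placeholders, and the only substantive difference from our original Dockerfile is the Debian-based base image:
# Debian 10 ("buster") base instead of alpine, so the runtime
# dependencies the job needs are present.
FROM golang:1.14-buster
WORKDIR /app
COPY . .
RUN go build -o /bin/worker .
ENTRYPOINT ["/bin/worker"]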
I'm trying to run pishrink on macOS using a Docker host, as explained here. The pishrink script shrinks the size of an .img file so it's quicker to burn onto an SD card.
I have Docker Desktop running, I've added the repo at the top level of my file system (/pishrink), and I'm running the following command:
docker-compose run pishrink /pishrink/pishrink.sh /pishrink/big-image.img /pishrink/small-image.img
When I do, I get the following error:
Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"/pishrink/pishrink.sh\": permission denied": unknown
Can someone help me debug this issue? I'm relatively new to using Docker, so I might be making some simple, fundamental mistakes.
I was able to fix this with the following command, using sudo as suggested:
sudo docker-compose run pishrink /pishrink/pishrink.sh /pishrink/big-image.img /pishrink/small-image.img
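If you'd rather avoid sudo, the "permission denied" on exec often just means the script isn't marked executable on the host, so granting the execute bit may also work; a sketch:
# Mark the script executable on the host, then run it without sudo.
chmod +x /pishrink/pishrink.sh
docker-compose run pishrink /pishrink/pishrink.sh /pishrink/big-image.img /pishrink/small-image.img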