I've been stuck on this failure for a few days.
It happens when I build an image or try to commit after installing a particular application. I'm using mcr.microsoft.com/windows/servercore:ltsc2019 as the base image.
"Error response from daemon: re-exec error: exit status 1: output: hcsshim::ImportLayer - failed failed in Win32: The system cannot find the path specified. (0x3)"
If I do not install my application to the image, I do not get this error. The application installs fine without any failures. I'm able to run the container fine with this application installed, but it fails when I commit it to an image.
I came across a few existing posts with this error, but I couldn't get anything from them to work. Some of them mention a possible limit on image size, but size doesn't seem to be the issue here. The error is too vague for me to act on. Where can I look for detailed logging from the Docker daemon to understand what in my application is causing the docker commit to fail?
I tried looking in the logs under the following directory, but I don't find anything useful there:
AppData\Local\Docker
I'd appreciate any help or pointers on finding out what in my application could be causing this commit failure.
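For reference, a minimal sketch of how one might turn on more verbose daemon logging, assuming the Windows Docker engine rather than Docker Desktop (the daemon.json path and the event-log source below come from the standard engine setup and may not match your installation):

# enable debug output in the daemon configuration, then restart the service
notepad C:\ProgramData\docker\config\daemon.json   # add { "debug": true }
Restart-Service docker

# the Windows engine also writes its daemon log to the Application event log
Get-EventLog -LogName Application -Source Docker -Newest 50

With debug enabled, re-running the failing docker commit should log more context around the hcsshim::ImportLayer call.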
I've been running a few containers (approximately a dozen) for a while now. I've approached whatever the hard limit is on container/image sizes in the past and had to clean these up to keep it from barfing all over everything, and recently the same has happened again.
I have identified several containers and images I can safely remove to reduce its footprint. But just as I was getting ready to do so, Docker crashed on me. And when I attempt to restart it, it crashes with the error message:
Fatal Error
Docker daemon failed to start
[timestamp] dockerd failed to start daemon: error initializing graphdriver: driver not supported
Thus, I can't use any of the command-line tools to remove these images/containers.
As there are running containers that I don't dare delete at this point, this makes it a little difficult to resolve. Is there a way to start Docker (on the Mac) that doesn't actually start any of the containers, so that maybe I can avoid this error?
Is the error message even related to my problem? I'm on Docker 2.3.0.4 if it matters.
You could switch to the overlay2 driver instead of the current graph driver.
You can follow the documentation below to switch:
https://docs.docker.com/storage/storagedriver/overlayfs-driver/
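A minimal sketch of what that change could look like; in Docker Desktop the engine JSON is edited under Preferences > Docker Engine (older releases call it the Daemon tab), and whether other settings need to be preserved alongside it is something to check on your side:

{
  "storage-driver": "overlay2"
}

After applying the change, restart Docker Desktop so the daemon comes back up with the new storage driver.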
We are facing the same issue as described in "Artifactory: java.io.IOException: Failed to deploy file. Status code: 404 Response message" when running our deployment via Bitbucket Pipelines.
This started happening on Artifactory Cloud for all pipelines from one day to the next.
Execution failed for task ':artifactoryDeploy'.
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed to deploy file. Status code: 404 Response message: Artifactory returned the following errors:
Failed to persist file with sha1: 0fexxxxxxxxxxxxxxxx Status code: 404
In the Artifactory system logs I get the following warning all the time, but I'm not sure whether it's connected to this issue. Apart from the following message, there are no errors in the logs:
2020-08-25T16:26:43.889Z [jfrt ] [WARN ] [c19ba246224f712c] [ntuallyPersistedAddFileTask:96] [al-binary-provider-2] - Failed to delete 'add file' after completing eventually persisted task '/storage/eventual/_add/a3/a396fb897aXXXXXXXXXXXXXXXXXXXXXXXX'
ERROR in request.log
2020-08-26T07:05:43.041Z|1765ac2ce37a6ffc|34.232.119.183|gradle-build|PUT|/gradle-dev-local/app/app-front/1.0.1.418_dev/app-front-1.0.1.418_dev.war;build.timestamp=1598425011065;build.name=app;build.number=1598425011337|404|0|0|9|ArtifactoryBuildClient/2.18.0
2020-08-26T07:05:44.014Z|e62cf9a7063d3fff|34.232.119.183|gradle-build|PUT|/gradle-dev-local/com/customer/app/app-core/1.0.1.418_dev/app-core-1.0.1.418_dev.pom;build.timestamp=1598425011065;build.name=app;build.number=1598425011337|404|4474|0|184|ArtifactoryBuildClient/2.18.0
Does anyone have an idea what the reason could be and what else could be checked?
We deploy via the Artifactory Gradle plugin (https://bintray.com/jfrog/jfrog-jars/build-info-extractor-gradle#release).
We pin the plugin version, but I also updated it to 4.17.1 (we previously used 4.9.8).
Thanks in advance!
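One way to take the Gradle plugin out of the equation (a sketch; the instance URL, repository path, and credentials are placeholders) is to repeat the failing PUT by hand and capture Artifactory's full response:

curl -v -u "$ART_USER:$ART_TOKEN" \
  -T app-front-1.0.1.418_dev.war \
  "https://<your-instance>.jfrog.io/artifactory/gradle-dev-local/app/app-front/1.0.1.418_dev/app-front-1.0.1.418_dev.war"

If the manual upload fails with the same "Failed to persist file with sha1" 404, the problem is on the Artifactory side rather than in the plugin or the pipeline.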
That sounds like more of an internal issue than something with your client.
It sounds like you may be using some sort of cloud storage, which in turn is using eventual storage. I can imagine a situation like this arising from using a mounted eventual directory over a sharded one in an HA setup.
I'd recommend checking whether that file still exists in the filestore, or whether it has odd permissions that prevented it from being removed. If it is indeed a mounted eventual directory, it would also be worth checking whether the request to upload that artifact came in multiple times; perhaps it was a collision of some sort.
Along those lines, since it's a 404 (not found) and the file couldn't be deleted, I'm wondering whether it simply couldn't be written to _add in the first place.
To summarize, with the information so far it could be one of two things in my opinion:
You are using a mounted eventual directory, which may be causing issues
The permissions on the filestore are not correct, affecting the filestore operations
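If you can get at the filestore (which on the cloud offering probably means going through JFrog support), here is a rough sketch of the existence/permissions check described above, using the path from the warning and otherwise assumed defaults:

# does the leftover 'add file' still exist, and with what owner/permissions?
ls -l /storage/eventual/_add/a3/
stat /storage/eventual/_add/a3/a396fb897a*    # prefix taken from the warning above

# compare against the user the Artifactory (java) process actually runs as
ps -o user= -C java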
I have installed Kyma version 1.13.0 on Windows, and it works fine as long as I don't restart my machine or minikube. But when I restart minikube by following the steps provided in the link below, Kyma stops working.
https://kyma-project.io/docs/latest/root/kyma#installation-install-kyma-locally-stop-and-restart-kyma-without-reinstalling
I need to reinstall Kyma to make it work again.
Any help would be appreciated
This sounds similar to what I get on my Windows machine.
This is the error that I get after restarting minikube.
stderr:
error execution phase addon/coredns: unable to patch the CoreDNS deployment: Timeout: request did not complete within requested timeout 30s
To see the stack trace of this error execute with --v=5 or higher
If you get the same error, it has been reported as a bug:
https://github.com/kyma-project/cli/issues/455
My workaround is to get Kyma working by issuing the provision command twice, so give it a try.
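For completeness, the sequence meant here, assuming the kyma CLI from the linked issue (flags and defaults depend on your CLI version):

kyma provision minikube
# if it fails with the CoreDNS patch timeout shown above, simply run it again
kyma provision minikube

The idea is that the second run gets past the addon timeout, so the existing installation comes back without a full reinstall.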
I was trying to run a custom script on my scale set VM, and the execution failed because of a wrong location for the .sh file. But after that, when I try to remove (az vmss extension delete) or rerun (az vmss extension set) the custom script with the correct URL, I keep getting the same error. It is stuck. How do I fix it?
Deployment failed. Correlation ID:
249a034f-76e2-4b0d-beb2-e9c6577623d1. VM has reported a failure when
processing extension 'customScript'. Error message: "Enable failed:
processing file downloads failed: failed to download file[0]: failed
to download file: http request failed: Get
https://wrongurl.blob.core.windows.net/script/deploytemp.sh: dial tcp:
lookup wrongurl.blob.core.windows.net on 164.33.122.16:53: no such
host".
Delete the instance and rebuild it!
Deleting the instance and rebuilding it might not be the answer you were hoping for, but applications on a VMSS should by nature be resilient enough to let you do so.
Also, I'm curious whether auto-healing/remediation helps you here, though I know it does not reinstall the extension.
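If you do go the rebuild route, a rough sketch with the az CLI; the resource group, scale set name, and instance ID below are placeholders:

# remove the broken extension from the scale set model (the step that was getting stuck)
az vmss extension delete -g myResourceGroup --vmss-name myScaleSet -n customScript

# then replace the affected instance so it is recreated from the updated model
az vmss delete-instances -g myResourceGroup -n myScaleSet --instance-ids 0

Re-adding the extension with az vmss extension set and the corrected URL should then apply cleanly to the new instance.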
I downloaded Hudson, and am trying to install it as a service. I followed the steps from this page, but when I try to start the service, it always fails. I'm not really getting any defined error codes either. If I try to run the service from the command line (using net start) I get the following (unhelpful) message:
The hudson service could not be started.
The service did not report an error.
The install process seemed to work fine, as the hudson service is installed, and all the files are in the new directory, but the service won't start. Has anyone else run into this problem?
As documented on that page, there is a note:
If a restart fails for some reason, check the output from Hudson, which is stored in the installation directory that you specified.
Is there anything in that output that points to further errors?
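For anyone who hits this, a sketch of where to look; the install path is a placeholder and the log file names are an assumption based on how the service wrapper typically names them:

cd C:\Hudson
type hudson.err.log
type hudson.out.log
type hudson.wrapper.log

# running the war directly often surfaces the real problem (e.g. a missing or incompatible Java)
java -jar hudson.war --httpPort=8080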