We are facing the same issue as described in "Artifactory : java.io.IOException: Failed to deploy file. Status code: 404 Response message" when running our deployment via Bitbucket Pipelines.
This started happening on Artifactory Cloud for all pipelines from one day to the next.
Execution failed for task ':artifactoryDeploy'.
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed to deploy file. Status code: 404 Response message: Artifactory returned the following errors:
Failed to persist file with sha1: 0fexxxxxxxxxxxxxxxx Status code: 404
In the Artifactory system logs I get the following warning all the time, but I'm not sure whether it is connected to this issue. Besides this message there are no errors in the logs:
2020-08-25T16:26:43.889Z [jfrt ] [WARN ] [c19ba246224f712c] [ntuallyPersistedAddFileTask:96] [al-binary-provider-2] - Failed to delete 'add file' after completing eventually persisted task '/storage/eventual/_add/a3/a396fb897aXXXXXXXXXXXXXXXXXXXXXXXX'
Errors in request.log:
2020-08-26T07:05:43.041Z|1765ac2ce37a6ffc|34.232.119.183|gradle-build|PUT|/gradle-dev-local/app/app-front/1.0.1.418_dev/app-front-1.0.1.418_dev.war;build.timestamp=1598425011065;build.name=app;build.number=1598425011337|404|0|0|9|ArtifactoryBuildClient/2.18.0
2020-08-26T07:05:44.014Z|e62cf9a7063d3fff|34.232.119.183|gradle-build|PUT|/gradle-dev-local/com/customer/app/app-core/1.0.1.418_dev/app-core-1.0.1.418_dev.pom;build.timestamp=1598425011065;build.name=app;build.number=1598425011337|404|4474|0|184|ArtifactoryBuildClient/2.18.0
Does anyone have an idea what the reason could be, and what else we could check?
We deploy via the Artifactory Gradle plugin (https://bintray.com/jfrog/jfrog-jars/build-info-extractor-gradle#release).
We pin the plugin to a fixed version; I also updated it to 4.17.1 (before we used 4.9.8).
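For reference, this is roughly how we pin the plugin version in build.gradle (a sketch; the repository and artifactory configuration blocks are omitted):

// pin the Artifactory Gradle plugin to a fixed version
plugins {
    id "com.jfrog.artifactory" version "4.17.1"
}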
Thanks in advance!
That sounds like more of an internal issue than something with your client.
It sounds like you may be using some sort of cloud storage, which in turn is using eventual storage. I can imagine a situation like this arising from using a mounted eventual directory over a sharded one in an HA setup.
I'd recommend checking whether that file still exists in the filestore, or whether it has odd permissions that prevented it from being removed. If it is indeed a mounted eventual directory, it would also be worth checking whether the request to upload that artifact came in multiple times; perhaps it was a collision of some sort.
Along those lines, since it's a 404 (not found) and it couldn't delete that file, I'm wondering whether it just couldn't write it to _add in the first place.
To summarize, with the information so far it could be one of two things in my opinion (a quick check for the second is sketched below):
You are using a mounted eventual directory, which may be causing issues
The permissions on the filestore are not correct, affecting filestore operations
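If you have shell access to the node (i.e., not a fully managed cloud instance), something like this would surface a permissions problem (a sketch; the path comes from the warning above, and the service user name is an assumption):

# check ownership and permissions on the eventual-storage directories
ls -ld /storage/eventual/_add
ls -l /storage/eventual/_add/a3/
# verify the user running Artifactory (often 'artifactory') can write and delete there
sudo -u artifactory touch /storage/eventual/_add/.permtest
sudo -u artifactory rm /storage/eventual/_add/.permtest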
I've been stuck on this failure for a few days.
It happens when I build an image, or when I try to commit after installing a particular application. I'm using mcr.microsoft.com/windows/servercore:ltsc2019 as the base image.
"Error response from daemon: re-exec error: exit status 1: output: hcsshim::ImportLayer - failed failed in Win32: The system cannot find the path specified. (0x3)"
If I do not install my application into the image, I do not get this error. The application installs fine without any failures, and I'm able to run the container fine with the application installed, but committing it to an image fails.
I came across a few existing posts with this error, but none of them got this working for me. Some posts mention a possible size limit on the image, but size doesn't appear to be the issue here. The error is too vague for me to act on. Where can I find detailed logging from the Docker daemon to understand what in my application is causing the docker commit to fail?
I tried looking at the logs under AppData\Local\Docker, but I didn't find anything useful there.
I'd appreciate any help or pointers for finding what in my application can cause this commit failure.
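One place to start: enabling debug logging on the daemon and reading the Windows event log (a sketch, assuming the default daemon.json location for Docker on Windows Server):

# 1. Add  { "debug": true }  to C:\ProgramData\docker\config\daemon.json,
#    then restart the service (PowerShell):
Restart-Service docker
# 2. Reproduce the failing 'docker commit', then read the daemon's entries,
#    which land in the Application event log under the source 'docker':
Get-EventLog -LogName Application -Source docker -Newest 50 |
    Format-List TimeGenerated, Message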
I'm having issues deploying a Node-RED app after modifying package.json to add the Dashboard and the IBM IoT input and output nodes. The log states that I exceeded my organization's memory limit, with the error message:
Error restarting application: Server error, status code: 400, error code: 100005, message: You have exceeded your organization's memory limit: app requested more memory than available
This is not true, so I tried reducing the memory and the number of app instances, as suggested here:
https://cloud.ibm.com/docs/cloud-foundry-public?topic=cloud-foundry-public-ts-cf-apps
I also tried deleting everything and starting over, but nothing seems to work.
The code added to package.json is:
"node-red-dashboard":"^2.22.1",
"node-red-contrib-scx-ibmiotapp":"0.0.49"
I was able to deploy the service after I set the memory value to 128MB in the manifest.yml file, which can be found at the root of your Node-RED repository.
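For reference, the relevant part of manifest.yml ends up looking roughly like this (a sketch; the application name is illustrative):

applications:
- name: my-nodered-app
  memory: 128M
  instances: 1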
The clue to solving this problem was found in this post:
How do I find out memory requirement when deploy Python sample to Bluemix?
Thanks to whitfiea
I was trying to run a custom script on my scale set VM; because the location of the .sh file was wrong, the execution failed. But after that, when I try to remove (az vmss extension delete) or rerun (az vmss extension set) the custom script with the correct URL, I keep getting the same error. It is stuck. How do I fix it?
Deployment failed. Correlation ID:
249a034f-76e2-4b0d-beb2-e9c6577623d1. VM has reported a failure when
processing extension 'customScript'. Error message: "Enable failed:
processing file downloads failed: failed to download file[0]: failed
to download file: http request failed: Get
https://wrongurl.blob.core.windows.net/script/deploytemp.sh: dial tcp:
lookup wrongurl.blob.core.windows.net on 164.33.122.16:53: no such
host".
Delete the instance and rebuild it!
Deleting the instance and rebuilding it might not be the ideal answer, but applications on VMSS should by nature be resilient enough to let you do so.
Also, I'm curious whether auto-healing/remediation helps you here, though I know it does not reinstall the extension.
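If you want to try recovering in place first, the sequence below is a sketch of removing the failed extension, re-applying it, and forcing the instances onto the new model (resource names and the corrected URL are placeholders; publisher and version are the usual ones for the Linux custom script extension):

az vmss extension delete -g myResourceGroup --vmss-name myScaleSet --name customScript
az vmss extension set -g myResourceGroup --vmss-name myScaleSet \
    --name CustomScript --publisher Microsoft.Azure.Extensions --version 2.0 \
    --settings '{"fileUris":["https://<correct-account>.blob.core.windows.net/script/deploytemp.sh"],"commandToExecute":"bash deploytemp.sh"}'
# with a Manual upgrade policy, existing VMs keep the old model until updated:
az vmss update-instances -g myResourceGroup -n myScaleSet --instance-ids "*"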
I'm working on a fresh installation of stock DSpace 5.3 (Windows Server 2012, Tomcat 8.0, Maven 3.2.5, Ant 1.9.6). This particular instance will be a dark archive without Google Analytics enabled; we don't currently have a GA account or analytics key, although we plan to register one eventually for a separate public-facing instance.
As per the problem described in JIRA ticket DS-2718, DSpace hangs with the following message in dspace.log when I attempt to download a bitstream:
2015-10-20 09:52:02,324 INFO org.apache.http.impl.execchain.RetryExec
# I/O exception (java.net.SocketException) caught when processing
request to {s}->https://www.google-analytics.com:443: Network is
unreachable: connect
2015-10-20 09:52:02,324 INFO org.apache.http.impl.execchain.RetryExec
# Retrying request to {s}->https://www.google-analytics.com:443
Since we won't be using GA on this instance, disabling it in Spring is a good workaround until the issue is resolved. As per the instructions, I commented out the Google Analytics entry in dspace-5.3-src-release\dspace-xmlui\src\main\webapp\WEB-INF\spring\applicationContext.xml, stopped Tomcat and rebuilt DSpace. An initial attempt with mvn package -Dmirage2.on=true still produced the problem, so I tried a ground-up rebuild:
cd d:\dspace-5.3-src-release\dspace
mvn clean package -U -Dmirage2.on=true
[successful build]
cd d:\dspace-5.3-src-release\dspace\target\dspace-installer
ant update
[successful update]
[copy webapps to Tomcat 8.0\webapps and start Tomcat]
Even after the rebuild, however, I'm still getting the same error, with the same java.net.SocketException in dspace.log.
I'm not sure why this isn't working. Have I missed a step or setting in the rebuild process, so that the change to applicationContext.xml isn't being applied?
FWIW, I tried grepping for "google" in dspace-5.3-src-release\dspace-xmlui-mirage2 to see if this could be a Mirage 2 problem, but I don't see anything that looks relevant.
This isn't an answer to why you're still seeing the SocketException, but the real fix for the problem you're describing is to remove the default GA key from dspace-services/src/main/resources/config/dspace-defaults.cfg (see https://github.com/DSpace/DSpace/commit/5b84fef1ad789443d06c338558a92f854b20c8ef). Have you tried doing that?
The issue resolved itself after I ran mvn clean -Dmirage2.on=true in both [dspace-src] and [dspace-src]\dspace. I'm guessing that the issue originated on our end due to someone running a maven build from the wrong directory.
I've also removed the default key from dspace-defaults.cfg as suggested. Everything's now working.
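For reference, the sequence that cleared it (same placeholder paths as above):

cd [dspace-src]
mvn clean -Dmirage2.on=true
cd [dspace-src]\dspace
mvn clean -Dmirage2.on=true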
I'm trying to use Jenkins to build and deploy a WAR file to a Tomcat instance on a different server, and I'm getting the following error:
Deploying /var/lib/jenkins/jobs/ura_Web/workspace/ura-1.0.war to container Tomcat 6.x Remote
ERROR: Publisher hudson.plugins.deploy.DeployPublisher aborted due to exception
org.codehaus.cargo.container.ContainerException: Failed to redeploy [/var/lib/jenkins/jobs/ura_Web/workspace/ura-1.0.war]
at org.codehaus.cargo.container.tomcat.internal.AbstractTomcatManagerDeployer.redeploy(AbstractTomcatManagerDeployer.java:195)
at hudson.plugins.deploy.CargoContainerAdapter.deploy(CargoContainerAdapter.java:64)
at hudson.plugins.deploy.CargoContainerAdapter$1.invoke(CargoContainerAdapter.java:90)
at hudson.plugins.deploy.CargoContainerAdapter$1.invoke(CargoContainerAdapter.java:77)
at hudson.FilePath.act(FilePath.java:905)
at hudson.FilePath.act(FilePath.java:878)
at hudson.plugins.deploy.CargoContainerAdapter.redeploy(CargoContainerAdapter.java:77)
at hudson.plugins.deploy.DeployPublisher.perform(DeployPublisher.java:47)
at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:36)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:804)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:776)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:969)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:726)
at hudson.model.Run.execute(Run.java:1618)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:491)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:247)
Caused by: java.io.FileNotFoundException: http://192.168.2.X/manager/list
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1401)
at org.codehaus.cargo.container.tomcat.internal.TomcatManager.invoke(TomcatManager.java:504)
at org.codehaus.cargo.container.tomcat.internal.TomcatManager.list(TomcatManager.java:622)
at org.codehaus.cargo.container.tomcat.internal.TomcatManager.getStatus(TomcatManager.java:635)
at org.codehaus.cargo.container.tomcat.internal.AbstractTomcatManagerDeployer.redeploy(AbstractTomcatManagerDeployer.java:176)
... 16 more
Can anyone tell me what is wrong?
I had exactly this problem just now and still haven't solved it. However, I suspect it is happening because of proxy issues.
Is it possible to try setting the manager URL to http://localhost rather than http://192.168.2.X? This worked for me, showing that the credentials were at least correct and that the module functions. When I switched the manager URL back to a remote machine or the FQDN of the local server, it failed again. That points to something proxy-related.
The only trouble then is configuring the proxy settings for Jenkins, especially http.nonProxyHosts. If you can do that, maybe you'll have more luck than I did; I cannot get the Jenkins System Information proxy values to change no matter what I do!
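For what it's worth, the standard way to express the exclusion is the JVM's http.nonProxyHosts property passed to the Jenkins process (a sketch; the file location is the Debian/Ubuntu default and the host pattern is illustrative):

# e.g. in /etc/default/jenkins
JAVA_ARGS="$JAVA_ARGS -Dhttp.nonProxyHosts=localhost|192.168.2.*"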
Also, manually test your access to the manager URL from a browser, both on the build server and elsewhere: http://192.168.2.X/manager/list
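A quick command-line equivalent of that check (credentials are placeholders; on Tomcat 6 the /manager/list endpoint requires a user with the manager role):

curl -u tomcatuser:password http://192.168.2.X/manager/list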