I am trying to create a Jenkins job and noticed that whenever I try to build my Maven project, it exits with the following error in the log (which happens after about 2 minutes, once memory usage reaches almost 99%):
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Could not init C:\Users\rami_\.jenkins\workspace\Soc
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:994)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:749)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1222)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1300)
at hudson.scm.SCM.checkout(SCM.java:505)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1211)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:636)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:508)
at hudson.model.Run.execute(Run.java:1906)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:543)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "C:\Program Files\Git\bin\git init C:\Users\rami_\.jenkins\workspace\Soc" returned status code 1:
stdout:
stderr: error launching git: Insufficient system resources exist to complete the requested service.
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2608)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2538)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:2534)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1920)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$5.execute(CliGitAPIImpl.java:992)
... 12 more
ERROR: Error cloning remote repo 'origin'
So I tried running git init from CMD to check whether the problem is Jenkins-related or not, and I got exactly the same behavior: the two-minute wait, and when memory usage reaches 95-100% it exits with the following message:
error launching git: Insufficient system resources exist to complete the requested service.
I have 12 GB of RAM installed on my PC and the project I am working with is rather small. Memory usage before running the command is usually around 50%.
I saw some people suggesting a "scan disk for errors", which I tried, but it did not work.
Worth noting that GitHub Desktop is working flawlessly.
The issue was resolved by uninstalling Git, removing all its files under AppData, restarting, and then installing it again.
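For what it's worth, that cleanup would look roughly like this from an elevated Command Prompt (a sketch; the leftover path is an assumption based on a default per-user install, so check what actually exists under your AppData before deleting anything):

rem 1. Uninstall Git for Windows via Settings > Apps (or its bundled uninstaller).
rem 2. Remove leftover per-user files (path is an assumption; verify it first):
rmdir /s /q "%LocalAppData%\Programs\Git"
rem 3. Reboot, then reinstall Git from git-scm.com.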
I've been stuck with this failure for a few days.
It happens when I build an image or try to commit after installing a particular application. I'm using mcr.microsoft.com/windows/servercore:ltsc2019 as the base image.
"Error response from daemon: re-exec error: exit status 1: output: hcsshim::ImportLayer - failed failed in Win32: The system cannot find the path specified. (0x3)"
If I do not install my application to the image, I do not get this error. The application installs fine without any failures. I'm able to run the container fine with this application installed, but it fails when I commit it to an image.
I came across a few existing posts with this error, but I couldn't get this to work. Some of them mention a possible size limit on the image, but here I don't see size being an issue. The error is too vague for me to act on. Where can I look for detailed logging from the Docker daemon to understand what in my application is causing docker commit to fail?
I tried looking into the logs under AppData\Local\Docker, but I didn't find anything useful for understanding this failure.
I'd appreciate any help or pointers for finding what in my application can cause this commit failure.
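One thing that might help (my suggestion, not something from the original post) is turning on the daemon's debug logging. On Windows the daemon configuration normally lives at C:\ProgramData\docker\config\daemon.json:

{
  "debug": true
}

After restarting the Docker service, the daemon should log layer operations in much more detail, which may show which file or path the hcsshim ImportLayer step trips over.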
For the last couple of days I've been unsuccessfully trying to clone our huge SVN repository to Git.
Every time, sooner or later, I run into the following error:
Software caused connection to abort: Error running context: Software caused connection abort at: C:/Program Files/Git/mingw64/share/perl5/Git/SVN/Ra.pm line 312.
I couldn't find any log entry, either on my Windows 10 client or on the Ubuntu server, giving details on the reason for this error.
StackOverflow question #53157918 suggested increasing the Apache server timeout value. I increased it to 10 times the original value, but apparently this didn't help.
Judging by the stdout output, reading each of the files is a snap, so I don't think it's a transmission timeout issue anyway.
Edit
I just tried again ... This time the error is Out of Memory:
libsvn: Out of memory - terminating application.
1 [main] perl 735 cygwin_exception::open_stackdumpfile: Dumping stack trace to perl.exe.stackdump
As a workaround, you can spin up an Ubuntu virtual machine and import your repository there. Here is what I did.
Download and install Oracle VirtualBox.
Download an Ubuntu ISO and use it as the guest OS.
Open a terminal and install git and git-svn:
sudo apt-get install git git-svn
Then follow your usual procedure to check out. Using this method I successfully imported a big SVN repo into Git and transferred it to Windows.
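For completeness, the checkout step inside the VM would be something like this (the repository URL and target directory are placeholders, not from the original answer; --stdlayout assumes a standard trunk/branches/tags layout):

git svn clone https://svn.example.com/repo --stdlayout my-repo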
Since you encountered the out-of-memory error, it would be a good idea to increase Git's pack limit, window size, cache, and so on. This is probably the solution:
cd into /migrated/git/repo (the directory created by the git svn clone command) and edit its .git/config file as below.
Under the [core] section add the three lines shown, and also add the [http] and [pack] sections:

[core]
    packedGitLimit = 512m
    packedGitWindowSize = 512m
    longpaths = true
[http]
    postBuffer = 100000000
[pack]
    deltaCacheSize = 256m
    packSizeLimit = 256m
    windowMemory = 1024m
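Alternatively, if you prefer not to edit the file by hand, the same settings can be applied from inside the repository with git config (equivalent to the block above):

git config core.packedGitLimit 512m
git config core.packedGitWindowSize 512m
git config core.longpaths true
git config http.postBuffer 100000000
git config pack.deltaCacheSize 256m
git config pack.packSizeLimit 256m
git config pack.windowMemory 1024m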
We are facing the same issue as described in Artifactory : java.io.IOException: Failed to deploy file. Status code: 404 Response message when running our deployment via Bitbucket Pipelines.
This started happening on Artifactory Cloud, for all pipelines, from one day to the next.
Execution failed for task ':artifactoryDeploy'.
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed to deploy file. Status code: 404 Response message: Artifactory returned the following errors:
Failed to persist file with sha1: 0fexxxxxxxxxxxxxxxx Status code: 404
In the Artifactory system logs I get the following warning all the time, but I'm not sure whether it is connected to this issue. Besides this message there are no errors in the logs:
2020-08-25T16:26:43.889Z [jfrt ] [WARN ] [c19ba246224f712c] [ntuallyPersistedAddFileTask:96] [al-binary-provider-2] - Failed to delete 'add file' after completing eventually persisted task '/storage/eventual/_add/a3/a396fb897aXXXXXXXXXXXXXXXXXXXXXXXX'
ERROR in request.log
2020-08-26T07:05:43.041Z|1765ac2ce37a6ffc|34.232.119.183|gradle-build|PUT|/gradle-dev-local/app/app-front/1.0.1.418_dev/app-front-1.0.1.418_dev.war;build.timestamp=1598425011065;build.name=app;build.number=1598425011337|404|0|0|9|ArtifactoryBuildClient/2.18.0
2020-08-26T07:05:44.014Z|e62cf9a7063d3fff|34.232.119.183|gradle-build|PUT|/gradle-dev-local/com/customer/app/app-core/1.0.1.418_dev/app-core-1.0.1.418_dev.pom;build.timestamp=1598425011065;build.name=app;build.number=1598425011337|404|4474|0|184|ArtifactoryBuildClient/2.18.0
Does anyone have an idea what the reason could be and what else we could check?
We deploy via the Artifactory plugin and Gradle (https://bintray.com/jfrog/jfrog-jars/build-info-extractor-gradle#release).
We pin the plugin version, but I also updated it to 4.17.1 (before, we used 4.9.8).
Thanks in advance!
That sounds more like an internal issue than something with your client.
It sounds like you may be using some sort of cloud storage, which in turn is using eventual storage. I can imagine a situation like this arising from using a mounted eventual directory over a sharded one in an HA setup.
I'd recommend checking whether that file still exists in the filestore, or whether it has odd permissions that prevented it from being removed. If it is indeed a mounted eventual directory, it's also worth checking whether the request to upload that artifact came in multiple times; perhaps it was a collision of some sort.
Along those lines, since it's a 404 (not found) and Artifactory couldn't delete that file, I'm wondering whether it could ever write it to _add in the first place.
To summarize, with the information so far it could be one of two things in my opinion:
You are using a mounted eventual directory, which may be causing issues
The permissions on the filestore are not correct, affecting the filestore operations
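A quick way to check either theory is to inspect the leftover eventual files directly on the node, using the path prefix from the warning above (a sketch; adjust to your actual filestore layout):

# list the stuck 'add file' entries and their permissions/ownership
ls -la /storage/eventual/_add/a3/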
I'm getting this error when setting up my GitHub repo in Jenkins on an OS X machine.
Failed to connect to repository : Command "git ls-remote -h https://github.com/username/repo.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'https://github.com/username/repo.git/': Internal SSL engine error encountered during the SSL handshake
This had been working before and I had been running builds successfully against this repo, but all of a sudden I started getting this message. Does anybody have any idea?
Thanks
PS. I have looked at the other related questions, but they don't have exactly the same issue and/or are not on the same platform as me.
I think I found what the issue was. I had logged into the Mac machine as a certain user and started the jenkins.war file from there. It worked fine until I logged that user out; that's when the problem appeared, and that's why restarting the computer resolved it. So as long as I don't log out the Jenkins user, everything is fine.
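If you need Jenkins to survive logouts entirely, one option (my suggestion, not part of the original fix; the war path and label are assumptions) is to run it as a LaunchDaemon so it is not tied to any login session. A sketch of /Library/LaunchDaemons/org.jenkins-ci.plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.jenkins-ci</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/java</string>
        <string>-jar</string>
        <!-- assumed location of the war file -->
        <string>/Users/Shared/Jenkins/jenkins.war</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>

Load it once with sudo launchctl load /Library/LaunchDaemons/org.jenkins-ci.plist and it will also start at boot.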
For some months we've run Hudson on a Windows XP "server" under a user account. This means someone manually logs in and starts Hudson via a .bat file (that sets up a few environment variables, then runs java -jar hudson.war).
However a few recent power cuts have resulted in the requirement to have Hudson start automatically at the time the server boots up. So I've turned to looking at Hudson running as a Windows Service. This would allow Hudson to start automatically with Windows, and would not require a specific user account.
I've managed to install it as a service, and I've changed hudson.xml so that the batch file is run rather than java directly. I do this because we build with git on Cygwin and SHELLOPTS=igncr must be set before bash starts java/Hudson.
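For reference, the relevant change in hudson.xml would look something like this (a sketch; the batch file path is an assumption, and since the service wrapper also supports <env> entries, SHELLOPTS could alternatively be set there instead of in the .bat):

<service>
  <id>hudson</id>
  <name>Hudson</name>
  <description>Hudson continuous integration server</description>
  <!-- run the wrapper .bat (sets SHELLOPTS=igncr, then runs java -jar hudson.war) -->
  <executable>C:\hudson\start-hudson.bat</executable>
  <env name="SHELLOPTS" value="igncr"/>
</service>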
The service seems to start properly, and the web interface is present and functional. However, it appears that the user that Hudson is now running under is unable to write/modify existing jobs in C:\hudson:
FATAL: Could not checkout 4a121704f178123c36f6ab4e861b3c771953b187
hudson.plugins.git.GitException: Could not checkout 4a121704f178123c36f6ab4e861b3c771953b187
at hudson.plugins.git.GitAPI.checkout(GitAPI.java:382)
at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:529)
at hudson.plugins.git.GitSCM$4.invoke(GitSCM.java:521)
at hudson.FilePath.act(FilePath.java:676)
at hudson.FilePath.act(FilePath.java:660)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:521)
at hudson.model.AbstractProject.checkout(AbstractProject.java:833)
at hudson.model.AbstractBuild$AbstractRunner.checkout(AbstractBuild.java:314)
at hudson.model.AbstractBuild$AbstractRunner.run(AbstractBuild.java:266)
at hudson.model.Run.run(Run.java:948)
at hudson.model.Build.run(Build.java:112)
at hudson.model.ResourceController.execute(ResourceController.java:93)
at hudson.model.Executor.run(Executor.java:118)
Caused by: hudson.plugins.git.GitException: Error performing c:\cygwin\bin\git.exe checkout -f 4a121704f178123c36f6ab4e861b3c771953b187
at hudson.plugins.git.GitAPI.launchCommandIn(GitAPI.java:302)
at hudson.plugins.git.GitAPI.launchCommand(GitAPI.java:276)
at hudson.plugins.git.GitAPI.checkout(GitAPI.java:380)
... 12 more
Caused by: hudson.plugins.git.GitException: Command returned status code 1: error: git checkout-index: unable to create file .gitignore (Permission denied)
error: git checkout-index: unable to create file .gitmodules (Permission denied)
error: git checkout-index: unable to create file Makefile (Permission denied)
I'm not really a Windows sort of person, but I thought that perhaps adding "Full Access" security permissions on C:\hudson for the user "LOCAL_SERVICE" might fix it. Alas, it did not. I also tried full permissions for the user "Everyone", but that did not solve the problem either.
What am I missing here? Is there any way to allow a process running as a Service unfettered access to a subdirectory on a local disk?
How about changing the user that the service runs as? Create a new "technical" user account whose password nobody knows (except an envelope in your safe) and make this user the owner of all your Hudson job folders. This also has the advantage that you can take permissions away from Hudson: this way a Hudson job cannot act as an admin on your Windows machine.
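A sketch of how to switch the service account from an elevated prompt (the service name and account are assumptions; note that sc requires a space after each =):

sc config hudson obj= ".\hudson-svc" password= "the-password-from-the-envelope"

If you set the account via the service's Log On tab in services.msc instead, Windows grants it the "Log on as a service" right automatically.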
The advantage of a service compared to a scheduled job is that it restarts when it crashes.
Instead of running it as a service, maybe you should use Task Scheduler to run the process at logon and have the user account log in automatically. This is probably much less hassle than dealing with service permissions, especially if you have to communicate with other machines.
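For example, a sketch with schtasks (the task name, account, and .bat path are assumptions):

schtasks /create /tn "Hudson" /tr "C:\hudson\start-hudson.bat" /sc onlogon /ru hudsonuser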