Jenkins/Hudson java.io error: Unable to clean workspace - Windows server - continuous-integration

I have a Jenkins/Hudson CI server, hosted on a dedicated server (kindly hosted by someone else). We have run into a problem we cannot solve and need help from people who may know the solution:
When we try to run a build, the build fails with:
java.io.IOException: Unable to delete C:\Program Files (x86)\Jenkins\jobs\JumpPorts-2\workspace
Jenkins was able to create the files, so surely it can delete them? It is running as a service, and it is cloning the source (Maven - Java) from GitHub. This is on a Windows server. I tested it on my VPS (CentOS 5) and it worked correctly, but because it is a VPS, Java does not run well alongside my other services, so I am unable to host it there.
Full Error: http://pastebin.com/0tWVVdiH
Thanks in advance

Most likely you are using the Maven project type.
The Maven project type can parse the pom on disk before building and while you access the GUI. As a result, when building on Windows, there is a chance that Windows' strict file locking can get in the way, marking a file as in use until absolutely every file handle is released.
One way to reduce this issue is to have the Windows builds run on a slave node rather than the master (note that the slave node can be the same physical machine; because the remoting channel is required to see the slave's filesystem, the file handles may not be as big an issue).
Another way to reduce this issue is to switch to a FreeStyle project with a Maven build step. Note that my personal preference is to avoid the Maven project type on pain of death. ;-)
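As a rough illustration of that second option (the Maven goals here are just an example, not taken from the original job), the FreeStyle job's "Execute Windows batch command" build step can simply call Maven directly:

    REM hypothetical build step in a FreeStyle job on a Windows node
    REM -B keeps Maven non-interactive; adjust the goals to whatever the job needs
    mvn -B clean package

Because the FreeStyle job only shells out to Maven, you avoid the GUI-time pom parsing that the Maven project type does.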

Related

How to update the code of Laravel that is deployed on live server?

My Laravel project, which uses a Vue.js UI, is deployed on the server. I now need to change the code; it works fine on my local machine, but to deploy I have to zip all the files and upload them again, which is tedious. Also, after uploading, the application did not reflect the changes I saw locally. What should I do? I also don't have Node installed on my cPanel, so I was unable to run npm run dev.
The preferred way is to use a Version Control System (VCS) like Git.
VCS
Version control systems are software tools that help software teams manage changes to source code over time. Consider uploading your project to a GitHub repository.
If you Google this, you’ll find tutorials that can explain it much better than we can in an answer here.
Note: You require SSH access to the server in order to run Git commands. Having SSH access will also solve your problem of not being able to run commands like npm run dev. Consider deploying your repository on a Virtual Private Server (VPS).
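As a minimal sketch of that workflow (the branch name and server path below are placeholders), changes are committed and pushed from your local machine, then pulled on the server over SSH:

    # on your local machine
    git add -A
    git commit -m "Describe the change"
    git push origin main

    # on the server, over SSH (clone the repository once, then just pull)
    cd /path/to/your/laravel-app
    git pull origin main
    npm install && npm run dev    # only if Node is available on the server

If Node cannot be installed on the server, another common approach is to run the asset build locally and commit or upload the compiled assets.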
(S)FTP
There are several ways of deploying. One of them being, manually transferring files using SFTP or FTP. However, as you've mentioned, this is a tedious process.

Azure Artifacts - Downloading Maven package content throws UnhandledPromiseRejectionWarning

One of our development teams has recently migrated their Maven project files from another version control system to Azure DevOps. However, a major caveat is that these Maven projects were created with no POM files (I have no idea why).
The project team has in any case managed to move all of their Maven packages into Azure Artifacts. When a release pipeline job runs, it is expected to retrieve the desired artifacts (*.war files) from Azure Artifacts, download them to the agent's artifacts directory, and subsequently deploy them to the target server. The pipeline itself completes successfully (all tasks show green status), but when we review the output log we discover that the artifacts were never downloaded; instead, the download task returns an "UnhandledPromiseRejectionWarning". Any idea why this is happening?
So far we've tried using curl as an alternative to download the artifacts, but all has been in vain. Naturally, we've also tried and failed with Azure Artifacts' dedicated "Download Package" pipeline task, but we are willing to try an alternative solution if anyone has suggestions. We've been wondering whether something like PowerShell (or another script) could, for instance, be used to download the desired package files.
I must also mention that the curl option appears to fail only on OAuth authentication. Any advice on that front would also be helpful, as it is probably our quickest route to a workaround.
I had a similar issue; it turned out the problem was just letter case.
The DownloadPackage task was trying to download 0.0.30-SNAPSHOT, but the package can only be downloaded as 0.0.30-snapshot, as stated in the official Microsoft docs.
To fix it, I had to install an external plugin and then add an additional task before the download step that converts the name to lower case.
I was using the classic release pipeline, by the way.
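If installing an extra plugin is not an option, one hedged alternative (the variable name packageVersion is an assumption, not something from the original pipeline) is a small inline script task placed before the download task that lowercases the version and writes it back as a pipeline variable:

    # hypothetical Bash (or "Command Line") task that runs before the Download Package task
    # pipeline variables are exposed to scripts as uppercase environment variables
    lower=$(echo "$PACKAGEVERSION" | tr '[:upper:]' '[:lower:]')
    echo "##vso[task.setvariable variable=packageVersion]$lower"

The download task can then reference $(packageVersion) and receive 0.0.30-snapshot instead of 0.0.30-SNAPSHOT.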

How to set up and maintain directory structure in TFS build server?

So I have this pretty huge solution with many projects; a few of them use DLLs from other projects in the solution, and some projects copy files to other directories after the build is performed (as post-build events).
When I build the solution locally on my machine, everything is great and working, but when I configure a build and run it on the build server (we use TFS), something goes wrong and I get an error when I try to load one of the applications in this solution (the error does not give me much information about what went wrong).
So before I sit down to debug all of this: does anybody know how I can sensibly manage all the build actions that are performed locally and via the build server, and see the deltas?
I would like to be able to build the solution on the build server exactly the same way I do on my machine (directory structure, post-build events, etc.).
Thanks a lot
The generally accepted way to do what you're after is to use NuGet for managing your assembly references. You can publish your dependent assemblies into NuGet as part of a continuous delivery process, then reference (and update!) those dependencies in the solutions that consume them as necessary.
This removes ambiguity ("What version of Foo.dll is Project X using?") and reduces runtime errors ("Why is Project X using Foo.dll 3.0? It was never tested with 3.0! It needs to run with 2.7!").
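A hedged sketch of that flow with the NuGet CLI (the package ID, version, feed URL, and API key variable are placeholders):

    REM on the build that produces the shared assembly
    nuget pack Foo.csproj -Version 2.7.0
    nuget push Foo.2.7.0.nupkg -Source https://your-internal-feed/nuget/v3/index.json -ApiKey %NUGET_API_KEY%

    REM in a consuming solution, pin the version that was actually tested
    nuget install Foo -Version 2.7.0

The consuming solution's package references then record exactly which version of Foo.dll it builds against, on both the developer machine and the TFS build agent.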

SVN Post-Commit to Update Working Copy when Working Copy is on a Network Drive

I work for a fairly new web development company and we are currently testing Subversion installations to implement a versioning system. One of the things we need the versioning system to do is update the development server with an edited file once it has been committed.
We would like to maintain one server for all of our SVN repositories, even though, due to system requirements, we need to maintain several separate development servers. I understand that the updates are fairly simple when the development server resides in the same location as SVN, but that is just not possible for us. So, we need to map separate network drives to the SVN server for each development server.
However, this errors on commit. Here is my working copy test directory, as referenced in the post-commit.bat file:
SET WORKING_COPY=Z:\testweb
This, however, results in an error...
post-commit hook failed (exit code 1) with output: svn: Error resolving case of 'Z:\testweb'
I'm sure this is because the server is not running as the same user as me and therefore does not have the share I need mapped to "Z:" - I just have no idea how to work around this. Can anyone help?
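For what it's worth, one commonly suggested adjustment (the UNC path below is a placeholder, not the actual share) is to avoid per-user drive letters entirely and point the hook at a UNC path, since a mapped drive like Z: belongs to the interactive user's session and is invisible to the account the SVN server runs under:

    REM hypothetical post-commit.bat variant using a UNC path instead of a mapped drive
    SET WORKING_COPY=\\devserver\webshare\testweb
    svn update "%WORKING_COPY%"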
UPDATE: The more I look into these issues, the more it appears that the real solution is to use a CI server to accomplish what I am attempting. I am currently looking into TeamCity and what it might do for us.
Don't do this through a post-commit hook. If you ever manage to get the hook to succeed, you'll be making the person who did the commit wait until the update is complete. Instead, I recommend that you use Jenkins, which is a continuous build engine.
It is possible that you don't have anything to build. After all, if you're using PHP or JavaScript, there's nothing to compile. However, you can still use Jenkins to do the update for you.
I can't get into the nitty-gritty detail here, but one of the things you can do with Jenkins is redefine its working directory. You do this by clicking the Advanced button when you define a job, and it will ask you where you want the working directory. In this case, you can specify your server's working directory.
Another thing you can do with Jenkins is have it automatically run tests, or handle a smoother update. For example, you might have to restart your web server when you change a few files, or you may need to make sure that if you're changing 100 files they all get changed at once so your server isn't left in an unstable state. You could use Jenkins to do this too. And if there are any problems, you can have Jenkins email the person responsible for the server that the update failed.
Jenkins is easy to set up and use. You can download it and have it running in 10 minutes. Setting up a job might take you another 15 minutes, even if you had never seen Jenkins before and had no idea how it works.
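To tie this back to Subversion, the post-commit hook can then shrink to a one-line notification that triggers the Jenkins job (the host, job name, and token below are placeholders, and the job must have "Trigger builds remotely" enabled); Jenkins performs the update in the custom workspace you configured under Advanced:

    REM hypothetical post-commit.bat: just ask Jenkins to run the update job
    curl -X POST "http://your-jenkins-host:8080/job/update-dev-server/build?token=YOUR_TOKEN"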

In continuous integration what is the best way to deal with external application dependencies

In using our TeamCity continuous integration server, we have uncovered some issues that we are unsure how best to handle, namely how to reference external applications that our application requires on the CI server.
This was initially uncovered with a dependency on Crystal Reports, so we went and installed Crystal Reports on the server, fixing the immediate problem. However, as we move more applications over to the CI server, we are finding more dependencies.
What is the best strategy here? Is it to continue installing the required applications on the server?
Thanks
Where possible make the external dependencies part of your build system.
For instance, check the installer into your version control system and have a step that checks it out and runs it in silent mode (many installers support a mode with no user interaction, often via a command-line switch such as /s).
This way if you need to set up another build machine for a branch or just for new hardware everything is repeatable.
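A rough sketch of what such a step might look like on a Windows build agent (the file names are placeholders, and the exact silent switch varies by installer: /s, /S, /quiet, or, for MSI packages, msiexec's /qn):

    REM hypothetical build step, assuming the installers are checked out alongside the sources
    tools\CrystalReportsRuntime.exe /S
    msiexec /i tools\SomeOtherDependency.msi /qn /norestart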
If your builds require the actual application to complete the build, then you should probably continue to install the application on your build server.
If you just need references to DLLs or assemblies from the application, then what we've done at my company is to create installable 'SDKs' of the references required for a particular application and install them on our development and build machines in well-known library directories that our solutions reference.
On the build machine, our pre-build steps install the correct version of the dependencies and then clean them up when we are finished.
Recently, we've moved to using virtual machines as our build machines, which our build process spins up. These VMs get the SDKs installed on them as a pre-build step and are then restored to their snapshot state after the build. We had some dependencies that were almost impossible to uninstall, so this made for a clean starting point each time.
If you use Maven to build, you can define your dependencies in the pom.xml file. They will then be automatically downloaded if necessary.
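For example (the coordinates below are placeholders for whatever library the build actually needs), a dependency declared in pom.xml is resolved from the configured repository on each build, so the build agent needs no manual install:

    <!-- hypothetical entry inside the <dependencies> section of pom.xml -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-external-library</artifactId>
      <version>1.2.3</version>
    </dependency>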
I am not sure if I followed correctly...
I am assuming your application depends on this external app while building? In that case it should be on the machine doing the CI.
