Jenkins SVN plugin revert action slower than command line job? - Windows

I have three build machines with Jenkins v1.473 installed.
Let's call them machine A, B and C. All are running Windows 7 x64.
The Jenkins installations on machines B and C were carried over from machine A: I simply copied the folder over, and everything was imported and works fine.
We initially had a Windows command line job that would call svn revert and then svn update on a bunch of folders. After that, we would build our project. The average running time was about 10 mins. The SVN command line tools are version 1.7 on all 3 machines (installed with TortoiseSVN).
Since we were using command line svn commands, we didn't have access to $CHANGES when sending the job completion email, so we switched to the Jenkins SVN plugin.
The switch was made on machine B and C first and after confirming it was working fine, we applied the same changes to machine A. The plugin version is 1.50.
Check-out Strategy is set to "Use 'svn update' as much as possible, with 'svn revert' before update".
Repository browser is set to Auto.
Under Jenkins->Configuration, Subversion Workspace Version is set to 1.7.
The other fields are left as-is (Exclusion revprop name is empty, Validate repository URLs up to the first variable name is unchecked, Update default Subversion credentials cache after successful authentication is checked).
Now, onto my issue.
The running time on machines B and C stayed about the same: around 10 mins.
However, on machine A, the running time has more than doubled: it now averages 25 mins.
Looking at the job log, the revert part seems to be the culprit.
Is there a reason why switching from the svn command line to the plugin would make it run slower? And on top of that, only on one particular machine?

After digging and testing several jobs across all machines, here's what I found out:
The command line tools are noticeably faster than their plugin counterparts, most likely because of the programs themselves (a compiled C binary vs. SVNKit, which is written in Java and runs on top of a JVM).
Machine A is also noticeably slower than B and C, despite similar specs, most likely because of other applications and services running at the same time (B and C are near-fresh installs, while A has been running for longer, with more applications installed, etc.).
In the end, I just split the clean/revert part from the update:
The clean/revert step is done on the command line (it is I/O heavy due to folder traversal, I believe), so it is as fast as it can be.
The update (and update only, with no revert done within the plugin) is then done with the plugin (fairly fast, usually less than a minute for several dozen files), which still gives me access to the $CHANGES variable.
With this, I managed to keep the build time on machine A nearly unchanged (around 10 mins).
Machines B and C, however, have seen their build times improve considerably (from 10 mins to 5 mins in general).
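For reference, the command-line clean/revert step boils down to something like this (a sketch only, with hypothetical workspace paths; the actual job runs it over several folders):

    REM Sketch only - hypothetical workspace path; the real job loops over
    REM several folders. Revert throws away local modifications, cleanup
    REM releases any stale working-copy locks before the plugin's "svn update".
    svn revert --recursive C:\jenkins\workspace\myproject\src
    svn cleanup C:\jenkins\workspace\myproject\src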

Related

Azure DevOps on-premise, workspace mapping really slow

We are using the on-premise version of Azure DevOps Server 2019 (currently Update 1) with self-hosted agents (the agents have the latest version available from GitHub) in combination with TFVC (2019).
The DevOps server runs in one virtual machine and the TFVC server runs in a different virtual machine.
The communication between them is fast; I already tested this by simply copying big test data from one to the other over the network. There is no problem there.
On each and every run, at the very beginning, the workspace mapping from the previous run is deleted, a new one is created, and then a new workspace mapping to every source path defined in the repository is established. This takes about 30-60 minutes on each and every pipeline/run.
We don't have just one single path defined in the repository; there are a lot of mappings, so that the amount of code taken from TFS stays small and only contains the source code needed by the solution being built.
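For context, each of those mappings corresponds roughly to what a manual tf workfold call would do (a sketch only, with hypothetical collection, server, and agent paths; the agent issues the equivalent of one such mapping per defined source path on every run):

    REM Sketch only - hypothetical paths; one mapping per source path defined
    REM in the pipeline's repository settings.
    tf workfold /map "$/MyProject/Framework/Core" "C:\agent\_work\1\s\Framework\Core" /collection:https://devops.example.local/DefaultCollection /workspace:ws_1_agent01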
This can't be changed and has to stay as it is, and we also can't simply move to GitHub (just saying, in case someone would like to advise moving to GitHub :)).
Has anyone experienced the same behaviour in the past, where the repository path mapping in the first build step takes about 30-60 minutes when a build is executed?
Thanks for any hints in advance.
The solution, in the end, was installing everything from scratch on a new machine.
After that, the mappings run in a tenth of the time they took before.

Trying to install Git and perform a pull with a single batch file

I have a situation where I need to deploy ruby code to lab devices. These machines are slow and very locked down (hard to get files to). They get re-imaged fairly often, but I am not allowed to bake the install into the image.
I've solved that problem by using the same open ports that Git uses for code distribution to distribute my install files.
I have a long install process boiled down to 3 batch files, but if I could reduce it to one, it would make life a lot easier (so I don't have to babysit a lot of installs via VNC). I'm only mentioning all this for reference.
The problem:
I can't install Git and then do a pull from the command line without opening a new cmd prompt - I think it has to do with environment variables, but I'm not 100% sure.
I get "git is not recognized" blah blah blah if I don't break out at this point and start the next batch. Same deal when I install Ruby and don't break out before starting the DevKit install.
PowerShell is not an option (I'm only allowed to install Git, Ruby, and their support: DevKit, C++ redistributables, .NET Client 4), and some of the machines do not have it.
I did a version where I scripted reboots and progressively moved the batches into the Startup folder - it works, but some of the other machines are tied together in a way that makes rebooting an issue (please don't make me explain - it's complicated lol).
Is there a way to merge my 3 batch files and execute all steps without reboots?
Edit: I have tried starting the next batch from the first (Start... Call...), and even creating a scheduled task to execute the next step. Can't swear I haven't made any mistakes, but they all seem to inherit the initial conditions and don't recognize the "git" command unless a new cmd prompt is opened.
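For what it's worth, a minimal sketch of one way around the stale-environment problem is to append the install directory to the current session's PATH yourself instead of waiting for a new cmd prompt (installer name, switches, and paths below are assumptions, not your actual setup):

    @echo off
    REM Sketch only - installer name/switches and paths are assumptions.
    REM 1. Install Git silently (Git for Windows uses Inno Setup switches).
    Git-64-bit-installer.exe /VERYSILENT /NORESTART
    REM 2. The installer updates the machine PATH in the registry, but this
    REM    cmd session keeps the environment it was started with, so "git"
    REM    is still not recognized. Extend PATH for this session manually.
    set "PATH=%PATH%;C:\Program Files\Git\cmd"
    REM 3. The pull now works from the same batch file, no new prompt needed
    REM    (run from the working copy directory).
    cd /d C:\lab\working-copy
    git pull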

SonarQube Incremental mode gives different results on different machines with same code

I am running a new sonar server with one project and one quality profile.
I have a "runner" machine that continuously running full scans (after getting the latest code)
and the developers machines where they would like to run incremental analyses before checking-in.
when running an incremental on the "runner" machine without making any changes I get 0 issues as expected (but I am also getting a lot of "resolved" issues - what is the deal with that? I expected to get 0 new and 0 resolved since I changed NOTHING).
BUT, when running incremental on the developer machine (after getting the latest code) I am getting a huge number of new issues, even though they also made no changes to the code.
to make sure I am not making any mistakes, I used TFS to compare the two project directories (on the runner and on the dev local machines, the folders that the analysis uses) - and proved that both are the same (except for the sonar generated files)
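For reference, an incremental analysis in that SonarQube generation is launched with something along these lines (a sketch only; the project key and source path are hypothetical):

    REM Sketch only - hypothetical project key and source path; incremental
    REM mode is selected via the sonar.analysis.mode property.
    sonar-runner -Dsonar.projectKey=my:project -Dsonar.sources=src -Dsonar.analysis.mode=incremental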
So to sum it up:
What could be the cause of such behavior?
Why would I get resolved issues if I did not make any changes to the code?
Could it have anything to do with machine clocks? (I am desperate...)
If you told me that there is no chance in hell that such a problem can occur, then I would go back and check my own setup, but I am running such a simple configuration that I don't think I am missing anything.
Thank you for your help.

Ruby Redmine SVN post-commit hook is extremely slow

I use the following post-commit hook for svn:
"path\to\ruby.exe" "path\to\redmine\script\runner" "Repository.fetch_changesets; Rails.logger.flush" -e production
It works correctly, but it takes about 1-2 minutes.
I also thought that a lot of the time might be needed only for the first commit, but successive commits take the same amount of time.
Is it possible to improve this behavior?
I know Ruby is slow on Windows, roughly 3 times slower, but in my case it is much slower than that.
The configuration is as follows: Windows Vista, Redmine 1.1.1, Ruby 1.8.7 with RubyGems 1.8.7; all packages are installed and testing is performed on the same PC.
The problem is that script/runner starts up a new Rails process from scratch each time it's run, which will make your commit pause. So 3 commits == 3 startups and 3 shutdowns.
There are a few things you can do to improve this:
Run the script/runner process in the background so it doesn't slow down your commit. On Linux you can do this by adding an & at the end of the command; I don't remember the exact equivalent on Windows off-hand (see the sketch after this list).
Instead of fetching changesets on each commit you can run it regularly through cron or a scheduled task. The rake task redmine:fetch_changesets is built for this purpose, it will iterate through each project and run fetch_changesets for you.
The command you are running goes through every project and fetches the changes. If you know the project identifier you can change the query so it only gets the changes for the project you are working on:
script\runner "Project.find('your-project').try(:repository).try(:fetch_changesets); Rails.logger.flush"
Replace 'your-project' with the project identifier (found in most of the urls in Redmine). The try parts are used to make sure you don't get an empty record.
Use the web service to fetch changesets instead of script/runner. The web service will use an existing Ruby process so it should already be loaded and the only slow down will be while Redmine downloads and processes the changes. This can also be used with the first option (i.e. background a web service request). Docs: http://www.redmine.org/projects/redmine/wiki/HowTo_setup_automatic_refresh_of_repositories_in_Redmine_on_commit
Personally, I just run a cronjob every hour (#2). Having the Repository view current isn't that important to me. Hope this gives you some ideas.
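Regarding the first suggestion, the rough Windows equivalent of the trailing & is the start command. A minimal sketch of the post-commit hook (hooks\post-commit.bat), reusing the command from the question:

    REM Sketch only: launch the runner detached so the commit returns
    REM immediately; "" is the (empty) window title start expects, and /B
    REM keeps it from opening a new console window.
    start "" /B "path\to\ruby.exe" "path\to\redmine\script\runner" "Repository.fetch_changesets; Rails.logger.flush" -e production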

How to speed up the Eclipse project 'refresh'

I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a Windows machine, while the files are hosted on a Debian file server. I connect via a mapped drive on Windows.
Despite having a 1 Gbit Ethernet connection, doing an Eclipse project refresh is quite slow - up to 5 minutes - and I am blocked from working while this happens.
This normally wouldn't be such a problem since Eclipse theoretically shouldn't have to do a full refresh very often. However I use the subclipse plugin also which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is Eclipse checking the 10k files one by one for changes over Samba.
There is a large number of files in the codebase that I would never need to access from eclipse, so I don't need it to check them at all. However I can't figure out how to prevent it from doing so. I have tried marking them 'derived'. This prevents them from being included in the build process etc. But it doesn't seem to speed up the refresh process at all. It seems that Eclipse still checks their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' process but again it doesn't speed up the actual refresh that precedes building (and which is what takes the most time).
Thanks all for your suggestions. Basically, JW was on the right track. Work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
Given it's subversioned, why not have the files locally, and use a post commit hook to update to the latest version on the dev server after every commit? (or have a specific string in the commit log (eg '##DEPLOY##') when you want to update dev, and only run the update when the post commit hook sees this string).
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in eclipse, and the dev server is still ok (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
I solved this problem by changing the "File Transfer Buffer Size" at:
Window -> Preferences -> Remote Systems -> Files
and changing the "File transfer buffer size" Download (KB) and Upload (KB) values to a high value. I set it to 1000 KB; by default it is 40 KB.
Use the offline folders feature in Windows by right-clicking the share and selecting "Make available offline".
It can save a lot of time and round-trip delay in the file sharing protocol.
Using svn externals with the revision flag for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest you do some simple testing before doing it for real.
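For illustration, pinning an external to a fixed revision looks roughly like this (a sketch only; the folder names and revision number are hypothetical):

    REM Sketch only - hypothetical folder names and revision number.
    REM A pinned external stays at the same revision between updates, so the
    REM hope is there is nothing for Subclipse to re-check (test this first,
    REM as noted above).
    svn propset svn:externals "-r 1234 ^/vendor/thirdparty thirdparty" .
    svn commit -m "Pin rarely-changing code to a fixed revision"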
