Jenkins Timeout because of long script execution - performance

I have some issues with Jenkins running a PowerShell script. Long story short: the script takes about 8x longer to execute through Jenkins than when run manually on the server (slave), where it takes just a few minutes.
I'm wondering why.
The script contains functions which invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs on the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes that long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that is the solution to the problem.
There are no error outputs or any other information about why it takes that long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins invokes those commands internally.
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1

I created my own PowerShell CI service before I found that Jenkins has its own plugin for this. In my implementation and in my current job configs I follow a simple segregation principle: more separate steps is better. I found that my CI service works better when it is separated into distinct steps (which also makes root-cause analysis a lot easier when something fails). The single-responsibility principle is helpful here too, so as in Jenkins, the pre-, post-, build, and email steps each live in a separate script.
About msbuild.exe: as far as I remember, in my case there were issues related to operations on file-system paths. When the script was divided into separate functions (with additional checks of the parameters), we got better performance.

Use "divide and conquer" technique. You have two choices: modify your script so that will display what is doing and how much it takes for every step. Second option is to make smaller scripts to perform actions like:
get the code source,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs
send notification.
The most problematic is usually the first step: getting the source code from Git, SVN, Mercurial, or whatever you have as a version control system. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In the script, make sure you flush standard output after every step or every few steps; some languages buffer standard output, so you see the results only at the end.
You can also create log files, which are helpful for archiving and checking the status of older runs (see the sketch below). From my experience, using Jenkins with more than 10 steps requires you to create a specialized application that can run multiple steps, like Robot Framework.
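As a rough illustration, here is a step-runner wrapper in bash; the step script names and log file path are hypothetical, not from this answer:

#!/bin/bash
# Run each small step script in order, timestamping and timing every step,
# so the console output shows exactly where the time goes. All script names
# below (checkout.sh, build.sh, ...) are placeholders.
set -eo pipefail

LOG="build-$(date +%Y%m%d-%H%M%S).log"

run_step() {
    echo "=== $(date '+%H:%M:%S') starting: $1 ==="
    local start=$SECONDS
    ./"$1" 2>&1 | tee -a "$LOG"    # tee also keeps a log file for older runs
    echo "=== $1 finished in $((SECONDS - start))s ==="
}

run_step checkout.sh
run_step build.sh
run_step test.sh
run_step package.sh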

Related

Confirming compile-time script execution in Xcode

I am downloading data from a remote server using curl in Build Phases > Run Script. Downloading takes 5-15 s; not that much, but done multiple times a day it consumes considerable time. Is there a better way to skip a script than commenting it out? Ideally, it would be some kind of confirmation at compile time (e.g. "Do you really need to download X?" y/n).
You can’t make the run script interactive in the console, as far as I know. But you can use a shell conditional with an AppleScript interactive dialog, because AppleScript itself blocks while the dialog is shown. See for example https://cantina.co/adding-interactivity-to-the-xcode-build-process/.
However, introducing uncertainty into a build is dangerous. Plus you’d never be able to automate the build. In my view you’d be better off flipping a custom build setting / environment variable.
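A minimal sketch of both suggestions for a Run Script phase; the SKIP_DOWNLOAD setting name and the download URL are assumptions, not from the original post:

# Non-interactive guard: flip SKIP_DOWNLOAD in a custom build setting or
# scheme environment variable to skip the download entirely.
if [ "${SKIP_DOWNLOAD:-0}" = "1" ]; then
    echo "Skipping remote data download"
    exit 0
fi

# Interactive variant: osascript blocks until the dialog is dismissed.
answer=$(osascript -e 'button returned of (display dialog "Download remote data?" buttons {"Skip", "Download"} default button "Download")')
if [ "$answer" = "Skip" ]; then
    exit 0
fi

curl -fsSL "https://example.com/data.json" -o "$SRCROOT/data.json"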

Is there a way to prevent a bash script from running certain commands if the script has to be run again?

I have a bash script that works at the moment. It gets an image and JDK 8 from a link and then runs an installer for JDK 8 before moving on to setting up another piece of software.
As I was debugging the script, I kept finding myself having to delete directories and even the Java installation, because whenever I introduce a fix and rerun the script, I have to wait for everything to download again, and I have to worry about duplicate files messing up my current logic (which can probably be improved, but I'll go to the Stack Exchange Code Review site later).
At the moment, I would like to know what approaches there are to prevent commands (like downloading the JDK and running the JDK installer script, among others) from running all over again.
What kind of general approaches are out there for cases such as these?
For the JDK download and the installer, I did think of simply checking for the existence of Java on the system, and if it is there, the script would know not to run those commands.
However, there are other commands I do not want rerun, and I wonder whether I should simply check, for example, for the existence of certain files to prevent wget-ing them all over again and moving them (causing duplicates). (Should I maybe suck it up and do that anyway, as that might be best practice?)
I did also think of outputting, say, a 1 to a text file after each successful command and mapping each line in that text file to the commands run in the script (using an if statement to see whether that command had a 1 in the text file); if it was a 0, the script would know to run only that command and never the ones marked 1.
That sounded clunky to me and I am pretty sure that is not a good approach.
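A minimal bash sketch of the existence-check and stamp-file approaches described above; the file names, URL, and helper scripts are hypothetical:

#!/bin/bash
set -euo pipefail

JDK_TARBALL="jdk-8-linux-x64.tar.gz"    # assumed artifact name

# 1. Skip the download if the file is already there (wget -nc behaves
#    similarly by refusing to clobber an existing file).
if [ ! -f "$JDK_TARBALL" ]; then
    wget "https://example.com/$JDK_TARBALL"
fi

# 2. Skip the installer if the command is already on the PATH.
if ! command -v java >/dev/null 2>&1; then
    ./install-jdk.sh "$JDK_TARBALL"     # assumed installer script
fi

# 3. Stamp-file variant of the "1 in a text file" idea: create a marker
#    after a step succeeds, and skip the step whenever the marker exists.
if [ ! -f .setup_done ]; then
    ./setup-other-software.sh           # assumed follow-up setup step
    touch .setup_done
fi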

Jenkins Build Never Finishing

I have a Jenkins master/slave set up which has been working quite happily, running Oracle imports on some Linux boxes.
I have just added a new slave node and tried to run our existing database import job on this new node. This job consists of three subprojects: the first runs some execute shells, copying files and changing permissions, and currently completes successfully; the second runs an execute shell which ends with an Oracle impdp. The impdp completes (the DB exists and ps -ef no longer shows impdp running), but the Jenkins subproject never finishes. The UI just sits there with the clock whirring around.
I've tried adding an echo after the impdp, and this also executes correctly, but the subproject still never finishes.
If I add a Post-Build email notification, it is not sent.
The third subproject is never reached.
What could be the cause of this and how do I debug what is happening?
In our case, the jobs would declare "Finished: SUCCESS", but then continue with some unknown Jenkins business for another 10 or 20 minutes. After putting on more detailed logging, we found it was related to the ill-named LogRotator.
We have thousands of old builds and are deleting the artifacts for those older than a certain number of days. Because of the way old builds are handled, Jenkins searches the entire list of old builds even though they have already had their artifacts removed.
There is an issue related to this that has since been fixed: https://issues.jenkins-ci.org/browse/JENKINS-22607
As of right now I do not see it in a release, but if you have this issue, the temporary workaround is to turn off the deletion.
This turned out to be something horrible :-)
After finishing the work, Jenkins tries to kill all processes it spawned. To identify them, it goes through all processes in the OS, reads /proc/<pid>/environ (this is a Linux box), which contains each process's environment variables, and compares them with the environment it sets for Jenkins processes.
The problem was that there was one particular Oracle process running on our DB server where, if you tried to read /proc/<pid>/environ for it, the read would just hang forever, which is where the Jenkins code got stuck.
I have no idea why it was hanging like this, and neither did our DBA. We restarted the process and now it works.
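For reference, you can inspect any process's environment the same way from a shell; a read of this file hanging is exactly where Jenkins got stuck:

# Entries in /proc/<pid>/environ are NUL-separated key=value pairs;
# tr makes them readable. $$ (the current shell's PID) is just an example.
tr '\0' '\n' < /proc/$$/environ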
You can add set -x to the top of shell scripts to see which commands are actually executed. That way you should be able to easily see from the output which command is blocking.
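For example, at the top of the step's script:

#!/bin/bash
set -x    # echo each command, with expanded arguments, before running it
cp build.log /tmp/build.log
# the trace appears on stderr prefixed with '+', e.g.: + cp build.log /tmp/build.log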

Run a pre-script to determine if a Jenkins matrix job should be run

I have a Jenkins job consisting of a matrix-style configuration. I want to run a script that determines whether a given combination should be run (similar to the combination filter, but dynamic), either setting that run to 'No Run' (grey) or running the rest of the scripts over it and producing a result.
Is there a way (or a plugin) to do this? In the event there isn't, is there a way to set a job to 'No Run' once it has started running?
Cheers,
Stu
Edit: The discussion at
http://groups.google.com/group/jenkinsci-users/browse_thread/thread/d99e865b17575e92/6c83ee0f894980fb?lnk=gst&q=dynamic#6c83ee0f894980fb
suggests two plugins, but perhaps a pre-script looking at the previous build is just as easy.
I've done the same thing.
I couldn't do it with the matrix filter feature because it isn't dynamic.
So what I did was add two Groovy scripts (pre and post) to my build which determine whether the build should be done or not.
The first script looks at the execution log of older builds to determine whether the current build should be done, and saves the state in a file named disabled at the root of the workspace.
The build process only continues if the disabled file is not present.
The second Groovy script records that a particular build has been done, stores that for further executions, and removes the file.
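The guard itself can be a few lines at the top of the build step; a sketch in shell, assuming the pre-step Groovy script has already created or removed the disabled marker:

# Skip the real build if the pre-step script left a "disabled" marker
# at the root of the workspace.
if [ -f "$WORKSPACE/disabled" ]; then
    echo "This configuration is disabled for this run; skipping."
    exit 0
fi
# ... actual build commands follow ...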
With this method, I can do a round-robin build of 50 configurations, 6 per day.
The only problem I have is that all builds are launched, so all show green, and we can't quickly see which builds have really been done.
Regards,
Ludovic SMADJA
JALIOS - R&D
http://www.jalios.com

Is there a GUI for nosetests?

I've been using nosetests for the last few months to run my Python unit tests.
It definitely does the job but it is not great for giving a visual view of what tests are working or breaking.
I've used several other GUI based unit test frameworks that provide a visual snap shot of the state of your unit tests as well as providing drill down features to get to detailed error messages.
Nosetests dumps most of its information to the console, leaving it to the developer to sift through the detail.
Any recommendations?
You can use the rednose plugin to color up your console. The visual feedback is much better with it.
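Installation and usage are a one-liner each (the --rednose flag is how the plugin is enabled, as far as I recall):

pip install rednose
nosetests --rednose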
I've used Trac + Bitten for continuous integration. It was a fairly complex setup and required a substantial amount of time to RTFM, set up, and then maintain everything, but I could get nice visual reports with failed tests and error messages, plus graphs over time for failed tests, pylint problems, and code coverage.
Bitten is a continuous integration plugin for Trac. It has a master-slave architecture. The Bitten master is integrated with and runs together with Trac. A Bitten slave can run on any system that can communicate with the master. It regularly polls the master for build tasks. If there is a pending task (somebody has committed something recently), the master sends a "build recipe", similar to Ant's build.xml, to the slave; the slave follows the recipe and sends back the results. A recipe can contain instructions like "check out code from that repository", "execute this shell script", or "run nosetests in this directory".
The build reports and statistics then show up in Trac.
I know this question was asked 3 years ago, but I'm currently developing a GUI to make nosetests a little easier to work with on a project I'm involved in.
Our project uses PyQt, which made it really simple to start with this GUI as it provides all you need to create interfaces. I've not been working with Python for long, but it's fairly easy to get to grips with, so if you know what you're doing it'll be perfect, provided you have the time.
You can convert .UI files created in the PyQt Designer to python scripts with:
pyuic4 -x interface.ui -o interface.py
And you can get a few good tutorials to get a feel for PyQt here. Hope that helps someone :)
I like to open a second terminal, next to my editor, in which I just run a loop which re-runs nosetests (or any test command, e.g. plain old unittests) every time any file changes. Then you can keep focus in your editor window, while seeing test output update every time you hit 'save' in your editor.
I'm not sure what the OP means by 'drill down', but personally all I need from the test output is the failure traceback, which of course is displayed whenever a test fails.
This is especially effective when your code and tests are well-written, so that the vast majority of your tests only take milliseconds to run. I might run these fast unit tests in a loop as described above while I edit or debug, and then run any longer-running tests manually at the end, just before I commit.
You can re-run tests with the 'watch' command (but this just runs them every X seconds. Which is fine, but it isn't quite snappy enough to keep me happy.)
Alternatively, I wrote a quick Python package, 'rerun', which polls for filesystem changes and then reruns the command you give it. Polling for changes isn't ideal, but it was easy to write, is completely cross-platform, is fairly snappy if you tell it to poll every 0.25 seconds, doesn't cause me any noticeable lag or system load even with large projects (e.g. the Python source tree), and works even in complicated cases (see below).
https://pypi.python.org/pypi/rerun/
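Typical usage of both approaches (the exact rerun command-line form is an assumption based on its PyPI description):

# Blunt instrument: re-run the whole suite every 2 seconds.
watch -n 2 nosetests

# Or re-run only when a file in the current directory changes.
pip install rerun
rerun "nosetests"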
A third alternative is to use a more general-purpose 'wait on filesystem changes' program like 'watchdog', but this seemed heavyweight for my needs, and solutions like this which listen for filesystem events sometimes don't work as I expected (e.g. if Vim saves a file by saving a tmp somewhere else and then moving it into place, the events that happen sometimes aren't the ones you expect.) Hence 'rerun'.
