I have a requirement to run a script on all available slave machines, primarily so they get relevant Windows hotfixes and new third-party tools before building.
The script I have can be run multiple times without undesirable side effects and is quite lightweight, so I'm happy for this to be brute force if necessary.
Can anybody give suggestions as to how to ensure that a slave is 'up-to-date' before it works on a job?
I'm happy with solutions that are driven by a job on the master, or ones which can inject the task (automatically) before normal slave job processing.
My shop does this as part of the slave launch process. We have the slaves configured to launch via execution of a command on the master; this command runs a shell script that rsyncs the latest tool files to the slave and then launches the slave process. When there is a tool update, all we need to do is restart the slaves or the master.
However - we use Linux whereas it looks like you are on Windows, so I'm not sure what the equivalent solution would be for you.
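For what it's worth, here is a minimal sketch of that kind of Linux-side launch command; the hostname, paths, and jar location are all placeholders, since yours will differ:
#!/bin/bash
# Master-side launch command for one slave; every name and path is a placeholder.
SLAVE_HOST="build-slave-01"
TOOLS_DIR="/opt/build-tools/"

# Push the latest tool files to the slave before it starts taking jobs.
rsync -az --delete "$TOOLS_DIR" "$SLAVE_HOST:$TOOLS_DIR"

# Hand over to the agent process; Jenkins talks to it over this command's stdin/stdout.
exec ssh "$SLAVE_HOST" java -jar /opt/jenkins/slave.jar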
To your title: either use the Parameter Plugin or use a matrix configuration and list your nodes in it.
To your question about ensuring a slave is reliable: we mark it with a 'testbox' label and try out a variety of jobs on it. You could also have a job that is deployed to all of them and have the job take the machine offline if it fails, I imagine.
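If you go that verification-job route, the failing check could take its own node offline through the Jenkins CLI; here is a rough sketch, where the check script is hypothetical and NODE_NAME is the environment variable Jenkins sets during a build:
#!/bin/bash
# Build step for a hypothetical verification job. jenkins-cli.jar and its
# offline-node command are standard Jenkins; everything else is a placeholder.
JENKINS_URL="http://jenkins.example.com/"

if ! ./verify-toolchain.sh; then   # hypothetical check script
    java -jar jenkins-cli.jar -s "$JENKINS_URL" \
        offline-node "$NODE_NAME" -m "toolchain verification failed"
    exit 1
fi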
Using Windows for slaves is very obnoxious for us too :(
I have Cucumber tests (feature files) in the RubyMine IDE, and lately I have a need to execute one of the feature files repeatedly at a scheduled time.
I haven't found a way to do so. Any ideas or thoughts on scheduling that feature file?
You can create a cron job which will execute a rake task.
The software utility Cron is a time-based job scheduler in Unix-like
computer operating systems. People who set up and maintain software
environments use cron to schedule jobs (commands or shell scripts) to
run periodically at fixed times, dates, or intervals.
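For example, a crontab entry (edit with crontab -e) that runs one feature file every morning might look like this; the paths are placeholders, and you could point it at a rake task instead of invoking cucumber directly:
# min hour day month weekday  command  (all paths below are placeholders)
0 7 * * * cd /path/to/project && bundle exec cucumber features/daily.feature >> /tmp/cucumber-cron.log 2>&1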
These links might help
How to create a cron job using Bash
how to create a cron job to run a ruby script?
http://rake.rubyforge.org/
I solved the problem by simply installing Jenkins on my machine from its official site, https://jenkins-ci.org/. I configured master and slave nodes on my own machine because I just needed to run one feature file (it has the script I want to run on a daily basis).
That said, it is recommended to configure the slave on a different machine if you have multiple jobs to run and the jobs are resource-intensive.
There is a very good walkthrough of installing, configuring, and running jobs at http://yakiloo.com/setup-jenkins-and-windows/
I'm using git-bash on Windows and I find it annoying to open up two terminal windows (and navigate to the right path in both) to:
start an http-server to serve static files (a Node tool)
start grunt (the default grunt task is grunt-watch, which watches the file system and runs tasks when things change)
What I want is to be able to execute a bash script or something to
start the http-server
start other things if relevant
run the grunt command to start it watching
My questions are:
is it possible?
is it practical? (i.e. console feedback from multiple processes would be interwoven in one window, which might be confusing, if showing it is even possible)
is there a better way? -- other than multiple terminals :o)
If you're using Grunt already, you should be able to utilize Grunt's task queues to run multiple tasks in one go. Typically, for each project you have some default task that orchestrates a running development environment, like so:
grunt.registerTask(
    'default',
    'Starts the server in development mode and watches for changes',
    ['build', 'server', 'watch']);
Sometimes, though, merely queuing tasks isn't enough. You can drop into writing ad-hoc tasks and utilize Grunt's extensive API, such as grunt.task.run and the rich context inside of tasks.
I am not going to bombard you with examples, but things you can do here range from fetching data from remote sources, to spawning child processes and piping their output to the Grunt process, to starting arbitrary tasks using grunt.task.run.
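That said, the two-terminal part of the question can also be handled by a plain shell script run from git-bash; here is a minimal sketch, assuming http-server and grunt are on your PATH and with a placeholder project path:
#!/bin/bash
# Run from git-bash; the project path and static folder are placeholders.
cd /c/path/to/project || exit 1

# Start the static file server in the background.
http-server ./static &
SERVER_PID=$!

# Stop the server again when grunt exits (Ctrl+C included).
trap 'kill $SERVER_PID' EXIT

# Run the default grunt task (the watch) in the foreground.
grunt
Output from both processes lands interleaved in the one window, which is usually tolerable for two processes; beyond that, separate terminals or a multiplexer are easier to read.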
Our Jenkins server (a Linux machine) slows down over time and becomes unresponsive. All the jobs take an unexpectedly long time (even though they run on slaves, which are separate machines from the server). One of the things I have observed is an increase in the number of open files; the count keeps growing, as shown in the image below. Does anyone have a solution to keep this in check without restarting the server? Also, are there any configurations/tweaks that could improve the performance of the Jenkins server?
We have been using Jenkins for four years and have tried to keep it up to date (Jenkins + plug-ins).
Like you, we experienced some problems, depending on new versions of Jenkins or plug-ins...
So we decided to stop this "continuous" upgrading.
Here are some humble tips:
Avoid technical debt. Update Jenkins as much as you can, but use only "Long Term Support" versions (latest is 2.138.2)
Back up your entire jenkins_home before any upgrade!
Restart Jenkins every night (see the cron sketch below)
Add RAM to your server. Jenkins uses the file system a lot, and this will improve caching
Define JVM min/max memory parameters with the same value to avoid dynamic reallocation, for example: -Xms4G -Xmx4G
Add slaves and execute jobs only on slaves
In addition to the above, you can also try:
Discarding old builds
Distribute the builds on multiple slaves, if possible.
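For the open-files symptom specifically, here are two things you can script; the service name and the jenkins.war process match are assumptions for a typical Linux package install:
# Watch the file descriptor count of the Jenkins process (needs lsof and pgrep).
lsof -p "$(pgrep -f jenkins.war)" | wc -l

# Nightly restart at 03:00 from root's crontab; the service name is an assumption.
0 3 * * * service jenkins restart
If the descriptor count climbs toward the process limit (visible in /proc/<pid>/limits), raising the limit buys time, but the restart is what actually clears leaked handles.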
I would like a periodic job that rebuilds the Jenkins build slaves, but I don't want it to fire while jobs are running. My thoughts are either to
consume all possible build slots on a slave, or
disable the slave from the job and wait for it to go idle
I don't know how to do either from a job. Is it possible? Maybe another approach?
Thanks!
I think the way I'm going to solve this is to use the "launch slave via script" option. The node configuration has options for periodically taking the slave offline, and the script can then clean it up, rebuild it, and relaunch it.
The offline process has safeguards to make sure it won't happen when a build is executing.
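A rough sketch of such a launch command, run on the master; the rebuild script, host, and jar path are placeholders:
#!/bin/bash
# "Launch agent via execution of command on the master" script. Jenkins only
# runs this when it (re)starts the agent, and the periodic-offline settings
# ensure that happens while the node is idle. All names are placeholders.
SLAVE_HOST="build-slave-01"

# Clean up and rebuild the slave first (hypothetical admin script).
ssh "$SLAVE_HOST" /opt/admin/rebuild-slave.sh

# Then hand the agent process back to Jenkins over stdin/stdout.
exec ssh "$SLAVE_HOST" java -jar /opt/jenkins/slave.jar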
Can anyone recommend an automated backup solution that can handle VMware instances?
I would like something to run overnight, suspend any running virtual machines, back up the files over the network (or hand off to another backup job), and (optionally) resume any VMs that it suspended.
A free/open source solution would be ideal, but I'll pay for a closed solution if necessary.
You could do this with a scheduled task and a script - Workstation is pretty easy to automate from the command line.
Roughly, as a batch script (vmrun list prints a count on its first line, then the .vmx path of each running VM):
@echo off
rem Suspend, back up, and resume every running VM.
for /f "skip=1 delims=" %%V in ('vmrun list') do (
    vmrun suspend "%%V"
    xcopy "%%~dpV*.vmdk" "\\backup-server\vmbackups\%%~nV\" /Y /I
    vmrun start "%%V" nogui
)
There's some more plumbing to be done, but once you have a working backup script you can schedule it or run it whenever you like. If you get your VM information from vmrun.exe list, you don't have to worry about keeping the list of running VMs up to date yourself. Hope that gets you started.
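To run it overnight, pair the script with a scheduled task; for example (the task name and script path are placeholders):
schtasks /create /tn "VM backup" /tr "C:\scripts\vm-backup.cmd" /sc daily /st 02:00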