Is there a way to have my Hudson slaves used by multiple Hudson masters?
A bit of background info:
My build guy has set up separate Hudson masters to do the deployment and testing of our solution into different test environments. My tests are run on Hudson slaves (I have 4 slaves). These slaves are associated with one specific Hudson master. I want the slaves to be available for use by any of the Hudson masters.
I believe the build guy chose to use multiple Hudson masters to manage the number of jobs on each master. His setup for one environment has 8 view tabs, so 5 environments would mean 40 tabs. Unfortunately, as is common, the solution to one problem creates another.
Yes, you can add the slaves to both Hudson masters. The problem is that each master will not be aware of the resource utilization by the other master, so you'll have to figure out some mechanism for that, such as reducing the number of executors.
Even better would be to combine the two Hudson masters into a single Hudson instance. Your question doesn't explain the motivation for having two masters.
As I cannot comment above, I'll try an answer.
I think you can have several independent slaves on the same machine, each attaching to and communicating with its own master. I also think that different slaves on the same machine sharing the same home directory is not supported and does not work. And of course, if they are completely independent, as Michael Donohue said above, there is a workload-sharing issue to resolve.
Hudson v1.366 added support for Windows slaves running as a Win32 service to serve multiple masters; see http://hudson-ci.org/changelog.html
Hudson jobs can also be parameterized, with a default value used for scheduled jobs and a web page offered for parameter input on manually triggered jobs. That can work in some situations to reduce the need for multiple jobs.
Or try the nested view plugin if the number of tabs is an issue and you can't reduce the number of jobs.
I was hoping to get some help/suggestions regarding my JMeter master/slave test setup.
Here is my scenario:
I need to do load testing using a JMeter master/slave setup. I am planning to launch the master and slave nodes on AWS (Windows boxes, due to a dependency of one of the tools I launch via JMeter). I want to launch this master/slave setup in AWS on demand, where I can specify how many slave nodes I want. I have looked around at a lot of blogs about using JMeter with AWS, and everywhere they assume the nodes will be launched manually and need further configuration for the master and slave nodes to talk to each other.

For tests with 5 or 10 slave nodes this would be fine, but for my tests I want to launch around 50 instances (again, the tool I use with JMeter has a limitation that forces me to use each JMeter slave node as 1 user, instead of using 1 slave node to act as multiple users), and manually updating each of the slave nodes would be very cumbersome. So I was wondering if anybody else has run into this issue and has any suggestions. In the meantime I am looking into other solutions that would let me use the same slave node to mimic multiple users, which would reduce the need to launch so many slave nodes.
Regards,
Vikas
Have you seen the JMeter ec2 script? It seems to be what you're looking for.
If for any reason you don't want to use this particular script, be aware that Amazon has an API, so you should be able to automate instance creation with a script using the AWS Java SDK or the AWS CLI.
You can even automate instance creation from within JMeter itself using either the JSR223 Sampler or the OS Process Sampler (this approach will require a separate JMeter script, of course).
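Purely as an illustrative sketch (not the JMeter ec2 script itself), here is roughly what the on-demand launch could look like using boto3, the Python AWS SDK, instead of the Java SDK. The AMI ID, instance type, key pair, security group, and instance count are placeholder assumptions you would replace with your own values.

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launch N pre-baked JMeter slave instances in one call.
    slaves = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",            # placeholder: your JMeter slave AMI
        InstanceType="t3.medium",                   # placeholder instance type
        MinCount=50,                                # how many slave nodes you want
        MaxCount=50,
        KeyName="jmeter-key",                       # placeholder key pair
        SecurityGroupIds=["sg-0123456789abcdef0"],  # must allow RMI traffic between master and slaves
    )

    # Wait until everything is running, then collect private IPs so the master's
    # remote_hosts list (jmeter.properties) can be generated instead of edited by hand.
    for instance in slaves:
        instance.wait_until_running()
        instance.reload()

    remote_hosts = ",".join(i.private_ip_address for i in slaves)
    print("remote_hosts=" + remote_hosts)

The OS Process Sampler could run a script like this before the test starts; the AWS CLI (aws ec2 run-instances) can do the same thing from a shell command.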
I am using a Jenkins configuration where the same job is executed in different locations: one in farm1 and another in an overseas farm2.
The Jenkins master server is located in farm1.
I am seeing a situation where the job on farm2 takes much longer to finish, sometimes twice the elapsed time.
Do you have an idea what could be the reason for that?
Is there continuous master-slave communication during the build that could cause such a delay?
The job is a Maven JUnit test + UI Selenium test using a VNC server on the slave.
Thanks in advance,
Roy
I assume your server farms have identical hardware specs?
Network differences while checking out code, downloading dependencies, etc.: the workspaces of the Master and the Slave are on different servers.
If you are archiving artifacts, they are usually archived back on the Master, even when the job is run on a Slave.
Install the Timestamper plugin, enable it, and then review the logs of both the Master and the Slave runs to see where there is a big time difference (you can configure Timestamper to show time as increments from the start of the job, which would be helpful here).
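As a rough illustration of that comparison (an assumption on my part, not part of the plugin), the snippet below parses two Timestamper-annotated console logs, assuming the elapsed-time HH:MM:SS.mmm prefix format, and prints the slowest steps of each run so you can see where farm2 loses time. The log file names are placeholders.

    import re

    TS = re.compile(r"^(\d{2}):(\d{2}):(\d{2})\.(\d{3})\s+(.*)")

    def step_durations(path):
        """Return (seconds_spent_on_step, step_text) pairs for one console log."""
        steps, prev_t, prev_text = [], None, None
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = TS.match(line)
                if not m:
                    continue
                h, mnt, s, ms, text = m.groups()
                t = int(h) * 3600 + int(mnt) * 60 + int(s) + int(ms) / 1000.0
                if prev_t is not None:
                    steps.append((t - prev_t, prev_text))
                prev_t, prev_text = t, text
        return steps

    # Print the ten slowest steps of each run so the two farms can be compared.
    for log in ("farm1_console.log", "farm2_console.log"):   # placeholder file names
        print("--- " + log + " ---")
        for seconds, text in sorted(step_durations(log), reverse=True)[:10]:
            print("%8.1fs  %s" % (seconds, text))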
I am using Jenkins v1.564 with the Amazon EC2 Plugin and have set up two AMIs. The first AMI has the label small and the second AMI has the label large. Both AMIs have the Usage setting set to Utilize this node as much as possible.
Now, I have created two jobs. The first job has Restrict where this project can be run set to small. The second job, similarly, is set to large.
So then I trigger a build of the first job. No slaves were previously running, so the plugin fires up a small slave. I then trigger a build of the second job, and it waits endlessly for the slave with the message All nodes of label `large' are offline.
I would have expected the plugin to fire up a large node since no nodes of that label are running. Clearly I'm misunderstanding something. I have gone over the plugin documentation but clearly I'm not getting it.
Any feedback or pointers to documentation that explains this would be much appreciated.
Are the two machine configurations using the same image? If so, you're probably running into this: https://issues.jenkins-ci.org/browse/JENKINS-19845
The EC2 plugin counts the number of instances based on the AMI ID rather than the slave template, so two templates that share the same image also count against the same instance cap.
Found there's an Instance Cap setting in Manage Jenkins -> Configure System, under Advanced in the EC2 section, which limits how many instances the plug-in can launch at any one time. It was set to 2. Still odd, as I only had one instance running and it wasn't starting another one (so maybe the limit is "less than"). Anyway, increasing the cap to a higher number made the instance fire up.
I am new to Jenkins and know how to create Jobs and add servers for JAR deployment.
I need to create a deployment job using Jenkins which takes a JAR file and deploys it to 50-100 servers.
These servers are grouped into 6 categories. A different process will run on each server, but the same JAR will be used.
Please suggest the best approach to create a job for this.
As of now, there are fewer servers (6-7); I have added each server to Jenkins and use command execution over SSH to run the process. But for 50 servers this is not feasible.
Jenkins is a great tool for managing builds and dependencies, but it is not a great tool for Configuration Management. If you're deploying to more than 2 targets (and especially if different targets have different configurations), I would highly recommend investing the time to learn a configuration management tool.
I can personally recommend Puppet and Ansible. In particular, Ansible works over an SSH connection to the target (which it sounds like you have) and requires only a base Python install.
My company has thousands of server instances running application code - some instances run databases, others are serving web apps, still others run APIs or Hadoop jobs. All servers run Linux.
In this cloud, developers typically want to do one of two things to an instance:
Upgrade the version of the application running on that instance. Typically this involves a) tagging the code in the relevant subversion repository, b) building an RPM from that tag, and c) installing that RPM on the relevant application server. Note that this operation would touch four instances: the SVN server, the build host (where the build occurs), the YUM host (where the RPM is stored), and the instance running the application.
Today, a rollout of a new application version might be to 500 instances.
Run an arbitrary script on the instance. The script can be written in any language, provided the interpreter exists on that instance. For example, the UI developer wants to run his "check_memory.php" script, which does x, y, z on the 10 UI instances and then restarts the web server if some conditions are met.
What tools should I look at to help build this system? I've seen Celery, Resque, and delayed_job, but they seem like they're built for churning through a lot of tasks. This system is under much less load: maybe on a big day a thousand upgrade jobs might run, plus a couple hundred executions of arbitrary scripts. Also, they don't support tasks written in arbitrary languages.
How should the central "job processor" communicate with the instances? SSH, message queues (which one), something else?
Thank you for your help.
NOTE: this cloud is proprietary, so EC2 tools are not an option.
I can think of two approaches:
Set up password-less SSH on the servers, have a file that contains the list of all machines in the cluster, and run your scripts directly using SSH. For example: ssh user@foo.com "ls -la". This is the same approach used by Hadoop's cluster startup and shutdown scripts. If you want to assign tasks dynamically, you can pick nodes at random. There is a rough sketch of this approach below, after the second option.
Use something like Torque or Sun Grid Engine to manage your cluster.
The package installation can be wrapped inside a script, so you just need to solve the second problem, and use that solution to solve the first one :)
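Since both operations boil down to "run a command on every machine in a list over password-less SSH", here is a bare-bones sketch of the first approach. The hosts file name, SSH user, worker count, and yum package name are placeholder assumptions; a real version would want retries, logging, and better error handling.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS_FILE = "cluster_hosts.txt"   # one hostname per line (placeholder)
    SSH_USER = "deploy"                # placeholder user with password-less SSH keys

    def run_on_host(host, command):
        """Run one command on one host over SSH; return (host, exit_code, output)."""
        result = subprocess.run(
            ["ssh", SSH_USER + "@" + host, command],
            capture_output=True, text=True, timeout=600,
        )
        return host, result.returncode, result.stdout + result.stderr

    def run_everywhere(command):
        with open(HOSTS_FILE) as f:
            hosts = [line.strip() for line in f if line.strip()]
        # Fan out in parallel so a 500-instance rollout doesn't run serially.
        with ThreadPoolExecutor(max_workers=20) as pool:
            for host, code, output in pool.map(lambda h: run_on_host(h, command), hosts):
                print("%s: %s" % (host, "OK" if code == 0 else "FAILED (%d)" % code))

    if __name__ == "__main__":
        # The "upgrade" case is just another command, e.g. installing the freshly
        # built RPM from your YUM host (package name is a placeholder).
        run_everywhere("sudo yum -y install myapp")

Anything more sophisticated than this (retries, scheduling, picking nodes dynamically) is what the Torque / Sun Grid Engine route buys you.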