Jenkins slave and Maven Project - maven

I have a problem understanding how Jenkins slaves work. I have a Jenkins master and a node defined as a slave. I have a Maven project that I want to run on the slave while having the report available on the Jenkins master.
Should I have Maven installed on the master, on the slave, or on both? Do I keep the Maven project on the slave, the master, or both? I think it should be both, but I cannot understand why.

When using slaves, you just have to ensure that each slave has the external tools installed (Maven in your case) and properly configured (available on the PATH, custom settings.xml if any, etc.).
To avoid having to bind a job to a single slave, an obvious best practice is to have all your slaves configured alike (i.e. all Linux slaves with all the needed tools, same for the Windows slaves, etc.).
Once all your tools are available on each slave, Jenkins takes care of running the project on an available node (master or slave). Monitoring, log tailing, build history and so on are transparently available to the user, whatever node was used.
You don't even need to care about plugins: they are automatically available to the slaves once installed on the master.

I don't think you need any Maven or Jenkins components on the slave. When you create the slave, it gives you the option to launch it via a JNLP (Java Network Launch Protocol) file.
Save this file, copy it to the slave machine, and launch it. The only prerequisite is to have Java installed on the slave machine.
On launching, it will establish a connection with the master. I am using a Selenium grid like that. I am not aware of your exact use case, but this may help.
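As a sketch, the launch on the slave typically boils down to fetching agent.jar from the master and connecting over JNLP. The URL, node name and secret below are placeholders; the real values come from the node's configuration page on the Jenkins master.

```shell
#!/bin/sh
# Placeholders -- substitute the values shown on the node's page in Jenkins.
MASTER_URL=http://jenkins.example.com:8080
NODE_NAME=build-slave-1
SECRET=0123456789abcdef

# Fetch the agent jar from the master, then connect over JNLP.
curl -sO "$MASTER_URL/jnlpJars/agent.jar"
java -jar agent.jar \
  -jnlpUrl "$MASTER_URL/computer/$NODE_NAME/slave-agent.jnlp" \
  -secret "$SECRET"
```

On older Jenkins versions the jar is named slave.jar rather than agent.jar; the node's configuration page shows the exact command to copy.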

Related

Remote gradle build daemon

This is in the context of a CI/CD system. Is there any way to have a remote, long-lived Gradle daemon that does not live on the same box as the Gradle client? I.e., a daemon that I can call over the network with the local Gradle CLI.
No, Gradle does not (yet) support distributing builds to remote machines. There is a hint from the developers here that suggests that the feature might come eventually.

Switching Spark versions and distributing jars to all nodes - Yarn v Standalone

I have an environment set up with both Spark 2.0.1 and 2.2.0. Both run in standalone mode and contain a master and 3 slaves. They sit on the same servers and are configured in exactly the same way. I only ever want to run one at a time, and to do so I set the SPARK_HOME environment variable to the location of the Spark version I wish to start and run start-master.sh and start-slaves.sh from that particular version.
I have a jar file which I wish to use for all Spark programs, regardless of version. I'm aware I could just pass it via the spark-submit --jars parameter, but I don't want to have to account for any transfer time in the job execution, so I am currently placing the jar file in the jars folder of each of the master and slave nodes prior to startup. This is a regular task, as the jar file gets updated quite often.
If I wish to switch Spark versions, I must run stop-slaves.sh and stop-master.sh from the version I wish to stop, then go through the above process again.
The key things I wish to achieve are that I can separate the transfer of jars from the execution timings, and that I can easily switch versions. I am able to do this with my current setup, but it's all done manually and I'm looking at automating it. However, I don't want to spend time doing this if there's already a solution that does what I need.
Is there a better way of doing this? I'm currently looking at Yarn to see if it can offer anything.

TeamCity forcing "checking for changes" only on agent

I have the following set-up:
TeamCity server running on one machine
TeamCity agent on a separate machine, connected via VPN to source control (TFS).
The VPN is a bit tricky to set up to run as a service so can't/don't want to set it up on the server as well. Rather, I was hoping to have everything go through that agent.
The build fails on the server while collecting sources; it appears it's trying to figure out what changes were made in TFS (but it can't find the TFS host, since the server is not on that VPN). The build is set to check out the sources only on the agent.
I'm afraid the answer is obvious, but I couldn't find any documentation confirming this... Is it possible to have such a setup? Or does the build server need access to the TFS repo to check for changes and trigger builds?
The TeamCity server will still require access to the VCS root to evaluate the current revision and changeset details.
It's important to note the additional side-effects of agent side checkout as well. See VCS Checkout Mode in the TeamCity docs for more information (note the 2nd line).

Running load tests via Jenkins on a slave EC2 instance that starts and stops with the build

Ideally, we'd like to run load tests on an EC2 Jenkins slave that starts and stops with our build.
Are there any tools out there (without writing our own plugins) that currently solve this?
I've come across this, but it seems to only be triggered based on the load of Jenkins in general, and not tied to a build.
This configuration is environment specific, and not project specific, so I would prefer to keep this maintained within Jenkins instead of within Maven and the project itself. Although, I'm open to suggestions in that realm.
You can check out the WebLOAD Jenkins plugin; it executes RadView's WebLOAD load testing tool, triggered by Jenkins. WebLOAD itself can launch EC2 cloud machines as needed, if that's what you need.

Jenkins/Hudson Java.IO error Unable to clean workspace - Windows server

I have a Jenkins/Hudson CI server, hosted on a dedicated server (kindly hosted by someone else). We have run into a problem which we cannot solve, and need help from people who may know solutions:
When we try to run a build, we get a Build Failed, and
java.io.IOException: Unable to delete C:\Program Files (x86)\Jenkins\jobs\JumpPorts-2\workspace
Jenkins was able to create the files, so surely it can delete them? It is running as a service, and it is cloning the source (Maven - Java) from GitHub. This is on a Windows server. I tested it on my VPS (CentOS 5) and it worked correctly, but because it is a VPS, Java does not run well alongside my other services, so I am unable to host it there.
Full Error: http://pastebin.com/0tWVVdiH
Thanks in advance
Most likely you are using the Maven project type.
The Maven project type can parse the POM on disk before building and while accessing the GUI. As a result, when building on Windows, there is a chance that Windows' strict file locking can get in the way, marking a file as in use until absolutely every file handle is released.
One way to reduce this issue is to have the Windows builds run on a slave node rather than the master (note that the slave node can be the same physical machine, but because the remoting channel is required to see the slave's filesystem, the file handles may not be as big an issue).
Another way to reduce this issue is to switch to a FreeStyle project with a Maven build step. Note that my personal preference is to avoid the Maven project type on pain of death. ;-)
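For reference, the FreeStyle alternative amounts to an "Execute shell" (or, on Windows, "Execute Windows batch command") build step that invokes Maven directly; the goals here are an assumption, substitute your own:

```shell
# Sketch of the equivalent FreeStyle build step. -B keeps Maven in
# non-interactive batch mode, which is appropriate for a CI node.
mvn -B clean install
```

Since the FreeStyle job only runs Maven during the build step, nothing parses the POM between builds, so no stray file handles are left to block workspace deletion.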
