Scheduling Cucumber test features to run repeatedly - ruby

I have Cucumber tests (feature files) in the RubyMine IDE, and lately I need to execute one of the feature files repeatedly on a schedule.
I haven't found a way to do so. Any ideas or thoughts on scheduling that feature file?

You can create a cron job which will execute a rake task.
The software utility Cron is a time-based job scheduler in Unix-like
computer operating systems. People who set up and maintain software
environments use cron to schedule jobs (commands or shell scripts) to
run periodically at fixed times, dates, or intervals.
These links might help:
How to create a cron job using Bash
How to create a cron job to run a Ruby script?
http://rake.rubyforge.org/
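For example, a crontab entry along these lines would run a single feature file every morning (the project path, feature name, and log file are illustrative assumptions; adjust them to your setup):

```shell
# Run one Cucumber feature every day at 6 AM via bundler.
# All paths below are assumptions for illustration.
CRON_LINE='0 6 * * * cd /home/me/myproject && bundle exec cucumber features/login.feature >> /tmp/cucumber_cron.log 2>&1'
# To install it (commented out; uncomment to actually modify your crontab):
# (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
echo "$CRON_LINE"
```

The five leading fields are minute, hour, day of month, month, and day of week, so `0 6 * * *` means 06:00 every day.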

I solved the problem by simply installing Jenkins on my machine from its official site, https://jenkins-ci.org/. I configured the master and slave nodes on my own machine because I only needed to run one feature file (it has the script I want to run on a daily basis).
That said, it is recommended to configure the slave on a different machine if you have multiple jobs to run and your jobs are resource intensive.
There is a very good walkthrough of installing, configuring, and running jobs at this link: http://yakiloo.com/setup-jenkins-and-windows/

Related

What's the difference between spark-shell and submitted SBT programs

Spark-shell can be used to interact with data in distributed storage, so what is the essential difference between coding in spark-shell and submitting a packaged sbt application to the cluster? (One difference I found is that a job submitted via sbt can be seen in the cluster management interface, while the shell's jobs cannot.) After all, sbt is rather cumbersome, and the shell is very convenient.
Thanks a lot!
Spark-shell gives you a bare console-like interface in which you can run your code as individual commands. This can be very useful if you're still experimenting with the packages or debugging your code.
One difference I found is that a job submitted via sbt can be seen in the cluster management interface, while the shell's jobs cannot
Actually, the Spark shell also shows up in the job UI as "Spark-Shell" itself, and you can monitor the jobs you are running through that.
Building Spark applications using SBT gives your development process some structure, along with iterative compilation, which is helpful in day-to-day development, and it lets you avoid a lot of manual work. If you have a constant set of things you always run, you can simply run the same package again instead of going through the trouble of re-entering everything as commands. SBT does take some getting used to if you are new to the Java style of development, but it can help maintain applications in the long run.

How to improve Jenkins server performance?

Our Jenkins server (a Linux machine) slows down over time and becomes unresponsive. All jobs take unexpectedly long (even though they run on slaves, which are separate machines from the server). One thing I have observed is an increase in the number of open files, as shown in the image below. Does anyone have a solution to keep this in check without restarting the server? Also, are there any configurations/tweaks that could improve the performance of the Jenkins server?
We have been using Jenkins for four years, and we tried to keep it up to date (Jenkins + plug-ins).
Like you, we ran into some problems, depending on new versions of Jenkins or plug-ins...
So we decided to stop this "continuous" upgrading.
Here are some humble tips:
Avoid technical debt. Update Jenkins as much as you can, but use only "Long Term Support" versions (the latest is 2.138.2)
Back up your entire jenkins_home before any upgrade!
Restart Jenkins every night
Add RAM to your server. Jenkins uses the file system a lot, and this will improve caching
Define the JVM min/max memory parameters with the same value to avoid dynamic reallocation, for example: -Xms4G -Xmx4G
Add slaves and execute jobs only on slaves
In addition to the above, you can also try:
Discarding old builds
Distribute the builds on multiple slaves, if possible.
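As a sketch of the JVM memory tip above: on a Debian/Ubuntu package install, the heap flags can go in the Jenkins defaults file (the path and 4G values are assumptions; size the heap to your machine):

```shell
# /etc/default/jenkins (Debian/Ubuntu package layout; path is an assumption)
# Pin min and max heap to the same size to avoid dynamic reallocation:
JAVA_ARGS="-Xms4G -Xmx4G"
```

Restart Jenkins after changing this so the new JVM options take effect.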

How to Run JMETER everyday at a specified time

I want to run my JMeter test scenarios every day at 12 PM. Any suggestions?
You can use a Linux cron task or the Windows equivalent (Task Scheduler).
You could also do it from JMeter itself using the scheduler, but that is something of a hack.
You can also use Jenkins; it is pretty handy and designed for exactly this kind of task.
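A minimal sketch of the cron approach, running JMeter in non-GUI mode (`-n`); the JMeter install path and the .jmx/.jtl file paths are assumptions:

```shell
# Cron entry to run a JMeter test plan non-interactively every day at 12 PM.
# /opt/jmeter and the test plan / results paths are assumptions.
CRON_LINE='0 12 * * * /opt/jmeter/bin/jmeter -n -t /home/me/tests/scenario.jmx -l /home/me/results/run.jtl'
# To install it (commented out; uncomment to actually modify your crontab):
# (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
echo "$CRON_LINE"
```

Non-GUI mode is the recommended way to run JMeter unattended, since the GUI is meant for building and debugging test plans.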

How can I run a Java program automatically

I have a Java package.
I want my program to be run automatically every night at 0 o'clock (midnight).
How can I do this?
Generally you have two solutions:
Create an application that runs your code every night, i.e. implement the scheduling yourself. Obviously you can (and should) use tools that help you do the scheduling.
Use OS-specific tools, for example cron on Unix and Task Scheduler on Windows.
You can either schedule it in your OS (on *nix, there is cron; I'm not sure what is used on Windows), or you can make your own Java program do the scheduling: when it starts, it sets a timer to execute your task at a specific time.
You could use Thread.sleep() to count down the time from now until midnight, but that's a poor man's solution. Quartz is a better fit, as it is designed for scheduling your tasks.
If you choose the in-application scheduling path, don't forget to run your application at OS startup.
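As a sketch of the cron option, assuming the program is packaged as a runnable jar at a hypothetical path:

```shell
# Run the jar every night at midnight (minute 0, hour 0).
# The jar path and log file are assumptions for illustration.
CRON_LINE='0 0 * * * /usr/bin/java -jar /home/me/app/myapp.jar >> /tmp/myapp-nightly.log 2>&1'
# To install it (commented out; uncomment to actually modify your crontab):
# (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
echo "$CRON_LINE"
```

Redirecting stdout/stderr to a log file is worth doing, since cron jobs otherwise fail silently.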

Jenkins/Hudson - Run script on all slaves

I have a requirement to run a script on all available slave machines. Primarily this is so they get relevant windows hotfixes and new 3rd party tools before building.
The script I have can be run multiple times without undesirable side effects and is quite lightweight, so I'm happy for this to be brute force if necessary.
Can anybody give suggestions as to how to ensure that a slave is 'up-to-date' before it works on a job?
I'm happy with solutions that are driven by a job on the master, or ones which can inject the task (automatically) before normal slave job processing.
My shop does this as part of the slave launch process. We have the slaves configured to launch via execution of a command on the master; this command runs a shell script that rsync's the latest tool files to the slave and then launches the slave process. When there is a tool update, all we need to do is to restart the slaves or the master.
However - we use Linux whereas it looks like you are on Windows, so I'm not sure what the equivalent solution would be for you.
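A rough sketch of that master-side launch command, with a hypothetical slave hostname, tools directory, and agent jar path (it defaults to a dry run that just prints the commands):

```shell
#!/bin/sh
# Sketch of a slave-launch wrapper: sync the latest tools, then start the agent.
# SLAVE_HOST, TOOLS_DIR, and the agent.jar path are assumptions for illustration.
SLAVE_HOST="${SLAVE_HOST:-build-slave-01}"
TOOLS_DIR="${TOOLS_DIR:-/opt/build-tools/}"
# Print commands by default; set DRY_RUN=0 to actually execute them.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }
run rsync -az --delete "$TOOLS_DIR" "$SLAVE_HOST:$TOOLS_DIR"
run ssh "$SLAVE_HOST" java -jar /opt/jenkins/agent.jar
```

Because the tool sync happens before the agent starts, restarting the slaves (or the master) is all it takes to roll out a tool update.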
To your title: either use the Parameter Plugin or use a matrix configuration and list your nodes in it.
To your question about ensuring a slave is reliable: we mark it with a 'testbox' label and try out a variety of jobs on it. You could also have a job that is deployed to all of them and have it take the machine offline if it fails, I imagine.
Using Windows for slaves is very obnoxious for us too :(