I have a system that's highly reliant on various web APIs. I would like to run my API-specific tests at least once per day to make sure all the APIs are still alive and playing nicely. I have a set of unit tests (plain .rb files that test API calls for expected data) and would like to run these every 24 hours. If something breaks, I would like to take an action (e.g. email or SMS me).
How best to set up automated Ruby tests and parse the results? Can I just set up a cron job to handle the .rb files? How would I detect programmatically that the tests are failing, and take an action? Maybe there is some kind of continuous integration solution for Ruby that can handle this?
I've just gone through the process of setting up Hudson CI as my integration server, using this amazing tutorial from Dr. Nic. It installs through a gem, coming pretty much preconfigured, and was extremely simple to get working.
I'm using rspec and cucumber, and Hudson runs all tests when it sees a new commit on my git repository. If all tests pass, it merges the code into my master branch. If any test fails, it holds its horses and sends me an email.
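If memory serves from that tutorial, getting a server running was roughly the two commands below; treat the exact syntax as an assumption and defer to the tutorial for the current version.

gem install hudson
hudson server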
EDIT:
I also want to give ten thumbs up to the ChuckNorris plugin for Hudson. Agile doesn't get better than pair programming with Walker, Texas Ranger.
Ruby has Test::Unit built in, plus RSpec, ZenTest, Shoulda, Cucumber, and probably many more tools to help with testing. Being built in, Test::Unit is used a lot and is the target the other tools aim to beat.
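As a baseline, a Test::Unit file in the style the question describes might look like this minimal sketch (the URL and the expected JSON field are hypothetical):

# api_test.rb
require 'test/unit'
require 'net/http'
require 'json'

class ApiTest < Test::Unit::TestCase
  def test_status_endpoint_is_alive
    response = Net::HTTP.get_response(URI('http://api.example.com/status'))
    assert_equal '200', response.code
    assert_equal 'ok', JSON.parse(response.body)['status']
  end
end

Run it with ruby api_test.rb; the process exits non-zero if any assertion fails, which matters for cron later.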
ZenTest and RSpec can do continuous testing: You make a change and save a file and they'll see it and run the test suite. I like that because then I know the state of things right away.
I haven't used Cucumber, but I have used the rest. I've heard Cucumber's emphasis is on integration testing, but that might have been the commenter's impression rather than the developers' design intent. The list of tutorials for Cucumber makes interesting browsing; in particular there's webrat: Automated Acceptance Testing with RSpec or Cucumber.
Any of these can be wired up with cron to run periodically; just treat them as you would any other set of command-line apps.
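For example, here is a minimal sketch of a cron-driven wrapper (the file names, email addresses, and schedule are hypothetical; it assumes the mail gem and a working local sendmail):

# run_api_tests.rb
require 'mail'

# system returns false when the test process exits non-zero,
# i.e. when any test failed
passed = system('ruby', 'api_test.rb')

unless passed
  Mail.deliver do
    from    'tests@example.com'
    to      'me@example.com'
    subject 'Daily API tests FAILED'
    body    "API test run failed at #{Time.now}"
  end
end

And a crontab entry to run it every morning at 6:

0 6 * * * /usr/bin/ruby /path/to/run_api_tests.rb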
It should be easy to tie in web testing too, but you'll have to identify the gems/modules needed and write the glue code. I haven't had need for such a beast, as I'd go at it using Mechanize and/or one of the other HTTP gems, plus Nokogiri, to ransack the pages.
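That approach would look something like this rough sketch (the URL and selector are hypothetical):

# page_check.rb
require 'mechanize'

agent = Mechanize.new
page  = agent.get('http://example.com/dashboard')

# Mechanize already exposes the page as a Nokogiri document via #parser
heading = page.parser.at_css('h1')
abort 'expected heading not found' unless heading && heading.text.include?('Dashboard')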
Related
What is the point of using a Continuous Integration system to test your code if you already have a system like Husky that lets you test your code in pre-commit and pre-push hooks?
Pre-commit and pre-push hooks are great for quick operations and tests. Sometimes you can even set up a hook in your IDE that runs quick unit tests every time you save a file. But usually you have multiple suites of tests, and unlike unit tests, functional, integration, and performance tests often take a long time to run, which is not feasible for hooks.
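For the quick-check case, a hook can be as small as this sketch (the rake task name is hypothetical; git aborts the commit on a non-zero exit, and the hook file must be executable):

#!/usr/bin/env ruby
# .git/hooks/pre-commit
exit 0 if system('rake test:units')
warn 'Unit tests failed; commit aborted.'
exit 1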
Also, you want to run your tests in the same environment where you build your deliverables, which is usually not your local machine.
Another reason to use CI system is to run post-merge tests to verify that there are no issues introduced by multiple parallel merges.
All in all, the more tests you run the better, and a CI system allows you to run both pre-merge tests (usually triggered by some sort of pull request hook) and post-merge tests. And all of that in a controlled, reliable environment.
I'm not really interested in whether it passes in your local environment, where you may have a different version of some dependent library on your environment path. I want to know for sure that no one's contribution breaks the software when it is linked against the specific library versions that we ship.
One reason to test using a Continuous Integration platform like Travis would be to ensure developers haven't circumvented their own local development environment's testing git hooks.
CI is not only tests, it's a lot more, but the test stage is of course a very important part of the flow.
As you said in your own answer, local environments can differ, the tests on the CI can have stricter settings, and the environment you test on can be more like the environment the end user runs (say, fixed versions of software or even hardware).
Say, for example, that you develop a PHP package. The package supports everything between PHP 5.6 and 7.2, it should also support multiple operating systems, and it should behave differently depending on whether ext/open_ssl is installed. A local test suite would rarely have a setup allowing the developer to test each of the possible versions on each of the required platforms, but a test suite set up in a CI pipeline can.
And honestly, it's always a good idea to test one more time, just to be safe! ;)
In certain useful and reasonable workflows, it is acceptable to commit and push broken commits (though not to the master branch). Preventing such workflows with git hooks is annoying.
Rebasing or merging, for example, does not run hooks again, even though files change.
Hooks are also very difficult to get right. They check a local state which might not be what gets pushed (e.g. if files are present locally that are not in git).
CI servers also provide a stable, predictable environment. Consider, for example, a CI server running Linux while developers use macOS laptops: the git hooks run on macOS, whose file system is case-insensitive, allowing tests to pass even when filenames have the wrong case.
Hooks also punish diligent developers who already run checks manually before committing, because the same tests are then run one more time.
Every professional project should have CI. The real question is why any project should maintain annoying, slow, fragile, broken local hooks when you already have CI.
Use hooks only for private toy projects.
We are looking at Cucumber for our automation test framework because everyone, including business people, can understand it.
We use an AngularJS frontend and a Java REST backend. The team that is going to write the step definitions likes Ruby, so we want to stick with Ruby for that.
We would also like to use Maven to tie this process into our build process.
Will Cucumber be a good fit, given the story above?
Hui Peztherez, from my perspective Cucumber is a great choice; we use it with the same architecture, except for Angular.
We use Maven too, and it's very useful to orchestrate the runs with Jenkins, using Maven to run tagged scenarios:
mvn test -Dcucumber.options="--tags #smoke"
ref: https://cucumber.io/docs/reference/jvm
Jenkins also has several plugins for reporting on Cucumber results, which is very useful for testers. We are now working on the HPQ server integration with a plugin called Bumblebee (this part is still under development on both sides, ours and Bumblebee's).
Ruby is another good choice; step definitions are very easy to write in Ruby (see the sketch below)...
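For instance, a Ruby step definition against a REST backend might look like this minimal sketch (the endpoint, port, and JSON shape are hypothetical; the expectations assume rspec-expectations is available, which Cucumber picks up by default):

# features/step_definitions/api_steps.rb
require 'net/http'
require 'json'

When(/^I request the user list$/) do
  @response = Net::HTTP.get_response(URI('http://localhost:8080/api/users'))
end

Then(/^the response status is (\d+)$/) do |status|
  expect(@response.code).to eq(status)
end

Then(/^the first user is named "([^"]*)"$/) do |name|
  users = JSON.parse(@response.body)
  expect(users.first['name']).to eq(name)
end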
We also integrate with Selenium on the front-end side, and that works well too...
So go for it!
We used Cucumber in Java with Gradle in the past; before that it was in Maven, and it worked fine. We have frameworks for both UI and API: in the UI framework we used WebDriver to write step definitions, and in the API framework we used REST Assured. You can do the same things in Java that you can in Ruby.
Maven artifact for Java Cucumber:
http://mvnrepository.com/artifact/info.cukes/cucumber-java/1.2.4 - please add other dependencies as required.
Jenkins plugin: https://wiki.jenkins-ci.org/display/JENKINS/Cucumber+Reports+Plugin
Will Cucumber be a good fit, given the story above?
Yes, it is a good fit. I would suggest presenting a POC (proof of concept) to management first. In my experience, management often has no clue about BDD and a very hard time understanding coverage; we had to dive very deep to provide them with all the information. It is very important to answer the following questions for management:
Does the BDD report give management an accurate idea of test coverage?
Is everyone on the team able to write feature files, at a consistent level of quality?
Will feature files and the BDD report be the starting tools for checking test coverage?
Thank you.
Please be aware that Cucumber is a BDD framework that is used on top of a browser automation framework like Selenium WebDriver, Watir, or Protractor; they are two distinct things. Most of those tools implement Selenium WebDriver's protocol.
My only concern is your use of Maven in that project setup: I know you can run Ruby code on the JVM using JRuby, but I'm not sure which plugin you'd use to trigger that from Maven.
I am using tools like Puppet/Chef/Ansible to set up and configure development environments and production servers.
Whenever I update the configuration, I run the tool against my development environment and log in to check manually whether things work as expected.
But this is tedious, and I can't test everything every time, so is there any way I can automate the testing?
There are Infrastructure Testing Frameworks for this:
ServerSpec / InSpec - Ruby-based. Famous, big community, nice looking, and best in its class (see the sketch after this list).
BATS - Bash Automated Testing System, which is a bit easier.
TestInfra - Python-based infra testing framework. Still pretty young, very small community. Intro.
Goss - Fast (written in Go), small tool for validating server/infra configuration. Test scenarios are written in yaml.
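As an idea of what these look like, here is a minimal Serverspec sketch (the package and service names are hypothetical; it assumes the serverspec gem, checking the local machine, run via rspec):

# spec/httpd_spec.rb
require 'serverspec'
set :backend, :exec

describe package('httpd') do
  it { should be_installed }
end

describe service('httpd') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end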
Automation:
There is the interesting Molecule project - some automation for testing Ansible roles, designed at Cisco. I have never tried it yet.
A step further would be Test Kitchen, which handles the automation to spin up Vagrant or Docker (or even AWS) instances and test Puppet/Chef/Ansible with RSpec/BATS against the freshly spun-up machines.
So what you need is to pick a framework, write tests, and run your playbooks/recipes and tests against throwaway VMs.
Ideally, keep your "infra as code" in a VCS and configure a CI service like Travis CI to run your tests for every PR that brings new changes into your repository.
You can even follow TDD here: write the tests first, watch them fail, then write the actual implementation in your favorite configuration management tool and see whether that change makes the tests green/passing.
MOAR Infrastructure Testing & Automation!
If you let us know what you want to test, we can help better.
But did you check dry-run mode? I think Puppet and Ansible both support it; you can have a cron job or some automated script run all of your Puppet/Ansible modules against a single (test) node.
More info:
1. http://docs.ansible.com/ansible/playbooks_checkmode.html
2. Check the noop mode in https://docs.puppet.com/puppet/latest/reference/man/agent.html
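For example, the invocations look roughly like this (the playbook name is hypothetical; check the docs above for your versions):

ansible-playbook site.yml --check
puppet agent --test --noop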
While the refactoring step of test-driven development should always involve another full run of the tests for the given functionality, what is your approach to preventing possible regressions beyond that functionality itself?
My professional experience makes me want to retest the whole functional module after any code change. Is that what TDD recommends?
Thank you.
While the refactoring step of test-driven development should always involve another full run of the tests for the given functionality, what is your approach to preventing possible regressions beyond that functionality itself?
When you are working on a specific feature, it is enough to run the tests for the given functionality only. There is no need to do a full regression run.
My professional experience makes me want to retest the whole functional module after any code change.
You do not need to do a full regression run, but you can, since unit tests are small, simple, and fast.
Also, there are several tools that are used for "Continuous Testing" in different languages:
in Ruby (e.g. Watchr; see the sketch after this list)
in PHP (e.g. Sismo)
in .NET (e.g. NCrunch)
All these tools are used to run tests automatically on your local machine to get fast feedback.
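As an example, a Watchr script is just a few mapping rules, as in this minimal sketch (the file layout is hypothetical; it assumes the watchr gem, run with watchr test.watchr):

# test.watchr
# re-run the matching test when a lib file changes
watch('lib/(.*)\.rb')     { |m| system("ruby test/#{m[1]}_test.rb") }
# re-run a test file when it changes
watch('test/.*_test\.rb') { |m| system("ruby #{m[0]}") }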
Only when you are about to finish the implementation of the feature is it time to do a full run of all your tests.
Running tests on a Continuous Integration (CI) server is essential, especially when you have lots of integration tests.
TDD is just a methodology for writing new code or modifying old code. Your entire test base should be run every time a modification is made to any code file (new feature or refactoring). That is how you ensure no regression has taken place. We're talking about automated testing here (unit tests, system tests, acceptance tests, sometimes performance tests as well).
Continuous integration (CI) will help you achieve that: a CI server (Jenkins, Hudson, TeamCity, CruiseControl...) will have all your tests and run them automatically when you commit a change to source control. It can also calculate test coverage and indicate where your code is insufficiently tested (note that if you do proper TDD, your test coverage should always be 100%).
Is there a facility similar to Selenium Grid that I can use to run webrat (or another, similar framework) browser automation tests in parallel across a farm of coordinated agents?
Coordinated via TeamCity with rake?
Edit: We're looking at using cucumber+webrat to do functional and acceptance testing as described in Testing ASP.NET Web Applications
I've worked on just this project, actually. If you're working on Rails, check out http://github.com/sgrove/spec_storm . It's only set up to run RSpec + Selenium tests in parallel, but it can be extended to others depending on demand. And of course if you have any questions, I'm more than happy to help out. The more people using it, the happier I am :D