I am using tools like Puppet/Chef/Ansible to set up and configure development environments and production servers.
Whenever I update the configuration, I run the tool against my development environment and log in to check manually whether things work as expected.
But this is tedious, and I can't test everything every time, so is there any way I can automate the testing?
There are Infrastructure Testing Frameworks for this:
ServerSpec / InSpec - Ruby-based. Well known, big community, nice looking, and best in its class.
BATS - Bash Automated Testing System, which is a bit easier to pick up (see the sketch after this list).
TestInfra - Python-based infra testing framework. Still pretty young, with a very small community.
Goss - fast (written in Go), small tool for validating server/infra configuration. Test scenarios are written in YAML.
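For instance, here is what a minimal BATS test file could look like (a sketch; the package and port are hypothetical, adapt them to whatever your configuration actually manages):

    #!/usr/bin/env bats

    # Check that the web server package actually got installed
    @test "nginx package is installed" {
      run dpkg -s nginx
      [ "$status" -eq 0 ]
    }

    # Check that something is listening where we expect it to
    @test "port 80 is listening" {
      run ss -tln
      [[ "$output" == *":80 "* ]]
    }

Run it against the provisioned machine with bats webserver.bats; a non-zero exit code means a failed test, which is exactly what CI tools key off.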
Automation:
There is also the interesting Molecule project - some automation for testing Ansible roles, designed by Cisco. I haven't tried it yet.
A step further would be using Test Kitchen, which handles the automation to spin up a Vagrant or Docker or even AWS instance and test your Puppet/Chef/Ansible code with RSpec/BATS against the freshly spun-up machines.
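The Test Kitchen flow from the command line looks roughly like this (the drivers, platforms and suites come from your .kitchen.yml):

    kitchen create     # spin up the Vagrant/Docker/AWS instance
    kitchen converge   # apply your Puppet/Chef/Ansible code to it
    kitchen verify     # run your Serverspec/BATS tests against it
    kitchen destroy    # tear the instance down
    kitchen test       # or do all of the above in one shot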
So what you need to do is: pick a framework, write tests, and run your playbooks/recipes and tests against disposable VMs.
Ideally, keep your "infra as code" in a VCS and configure a CI service like Travis CI to run your tests for every PR that brings new changes into your repository.
You can even follow TDD here: write the tests first, watch them fail, then write the actual implementation in your favorite configuration management tool and see whether that change makes the tests pass.
MOAR Infrastructure Testing & Automation!
If you let us know what you want to test, we can help better.
But did you check the dry-run mode? I think Puppet and Ansible both support it; you could have a cron job or some automated script run all the Puppet/Ansible modules against a single (test) node.
More info:
1. http://docs.ansible.com/ansible/playbooks_checkmode.html
2. Check the noop mode in https://docs.puppet.com/puppet/latest/reference/man/agent.html
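For example (a sketch; site.yml and the node setup are placeholders):

    # Ansible: report what would change, without changing anything
    ansible-playbook site.yml --check --diff

    # Puppet: do a no-op run on the test node
    puppet agent --test --noop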
Related
I have a crazy idea to run integration tests (xUnit in .NET) in the Jenkins pipeline by using Docker Compose. The goal is to create the testing environment ad hoc and run the integration tests from Jenkins (and Visual Studio) without using DBs etc. on a physical server. In my previous project there were cases where two builds overwrote each other's test data, and I would like to avoid that.
The plan is the following:
Add a Dockerfile for each test project
Add references in the Docker Compose file (with creation of the DBs in Docker)
Add a step in Jenkins that will run the integration tests
I don't have much experience with containerization, so I cannot predict what problems might appear.
The questions are:
Does it make any sense?
Is it possible?
Can it be done simpler?
I suppose that the Visual Studio test runner won't be able to get results from the Docker images. Am I right?
It looks like developing the tests will be more difficult, because the tests will run inside Docker. Am I right?
Thanks for all your suggestions.
Depends very much on the details. In a small project - no; in a big project with multiple microservices and many devs - sure.
Absolutely. Anything that can be done with shell commands can be automated with Jenkins.
Yes, just have a test DB running somewhere, or just run it locally with a simple script. Automation and containerization are the opposite of simple; you would only do it if the overhead is worth it in the long run.
Normally it wouldn't even run on the same machine, so that could be tricky. I am no Visual Studio expert though.
The goal of containers is to make things simpler because the environment does not change, but they add configuration overhead. Most days it shouldn't make a difference, but whenever you make a big change it will cost some time.
I'd say running Jenkins on your local machine is rarely worth it; you could just use Docker locally with scripts (bash or WSL).
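If you do wire it into Jenkins, the shell step could boil down to something like this (a sketch, assuming a docker-compose.yml with a db service and a tests service built from your test project's Dockerfile - all names hypothetical):

    # Bring up a throwaway environment, run the tests, always clean up
    docker-compose up -d db
    docker-compose run --rm tests; rc=$?
    docker-compose down -v    # drop containers and volumes so builds can't share state
    exit $rc

A non-zero exit code from the test container fails the Jenkins build. For truly parallel builds, add -p "$BUILD_TAG" to each docker-compose call so every build gets its own Compose project and its own databases.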
What is the point of using a Continuous Integration system to test your code if you already have a system like Husky running that allows you to test your code with pre-commit and pre-push hooks?
Pre-commit and pre-push hooks are great for quick operations and tests. Sometimes you can even set up a hook in your IDE that runs quick unit tests every time you save a file. But usually you have multiple suites of tests, and unlike unit tests, functional, integration and performance tests often take a long time to run, which is not feasible for hooks.
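For illustration, a minimal pre-push hook (a sketch; the make target is hypothetical, point it at whatever your quick suite is):

    #!/bin/sh
    # .git/hooks/pre-push - run only the fast unit suite, leave the slow suites to CI
    # (remember to chmod +x this file)
    if ! make unit-test; then
        echo "Unit tests failed; push aborted." >&2
        exit 1
    fi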
Also, you want to run your tests in the same environment where you build your deliverables, which is usually not your local machine.
Another reason to use CI system is to run post-merge tests to verify that there are no issues introduced by multiple parallel merges.
All in all, the more tests you run the better, and a CI system allows you to run both pre-merge tests (usually triggered by some sort of pull request hook) and post-merge tests. And all of that in a controlled, reliable environment.
I'm not really interested in whether it passes in your local environment, where you may have a different version of some dependent library on your environment path. I want to know for sure that nobody's contribution breaks the software when linked against the specific library versions that we ship with.
One reason to test using a Continuous Integration platform like Travis would be to ensure that developers haven't circumvented their own local development environment's testing git hooks.
CI is not only tests, it's a lot more, but the test stage is of course a very important part of the flow.
As you said in your own answer, local environments can be changed, the tests on the CI can have stricter settings, and the environment you test on can be more like the environment the end user uses (say, fixed versions of software or even hardware).
Say, for example, that you develop a PHP package. The package supports everything between PHP 5.6 and 7.2, it should also support multiple operating systems, and it should behave differently depending on whether ext/open_ssl is installed or not. A local test suite would rarely have a setup allowing the developer to test each of the possible versions on each of the required platforms, but a test suite set up in a CI pipeline could.
And honestly, it's always a good idea to test one more time, just to be safe! ;)
In certain useful and reasonable workflows, it is acceptable to commit and push broken commits (though not to the master branch). Preventing such workflows with git hooks is annoying.
Rebasing or merging, for example, does not run hooks again, even though files are changed.
Hooks are also very difficult to get right. They check a local state which might not be what gets pushed (if certain files are present that are not in git).
CI servers also provide a stable, predictable environment. E.g. consider a CI server running Linux and developers using macOS laptops. The git hooks run on macOS, which has a case-insensitive file system, allowing tests to pass even if filenames are wrong.
Hooks also punish diligent developers who run checks manually before committing, because the tests are then just run one more time.
Every professional project should have CI. The real question is why any project should maintain annoying, slow, fragile, broken local hooks when you already have CI.
Use hooks only for private toy projects.
I have a system that's highly reliant on various web APIs. I would like to run my API-specific tests at least once per day to make sure all the APIs are still alive and playing nicely. I have a set of unit tests (just plain .rb files that test API calls for expected data) and would like to run these every 24 hours. If something breaks, I would like to take an action (e.g. email or SMS me).
How best to set up automated Ruby tests and parse the results? Can I just set up a cron job to handle the .rb files? How would I detect programmatically whether the tests are failing and take an action? Maybe there is some kind of continuous integration solution for Ruby that can handle this?
I've just gone through the process of setting up Hudson CI as my integration server, using this amazing tutorial from Dr. Nic. It installs through a gem, coming pretty much preconfigured, and was extremely simple to get working.
I'm using rspec and cucumber, and Hudson runs all tests when it sees a new commit on my git repository. If all tests pass, it merges the code into my master branch. If any test fails, it holds its horses and sends me an email.
EDIT:
I also want to give ten thumbs up to the ChuckNorris plugin for Hudson. Agile doesn't get better than pair programming with Walker, Texas Ranger.
Ruby has the built-in Test::Unit, plus RSpec, ZenTest, shoulda, cucumber and probably many more tools to help with testing. Being built in, Test::Unit is used a lot and is the target to beat for the other tools.
ZenTest and RSpec can do continuous testing: you make a change and save a file, and they'll see it and run the test suite. I like that because then I know the state of things right away.
I haven't used cucumber, but have used the rest. I heard cucumber's emphasis is on integration testing, but that might have been the commenter's impression rather than the developers' design. The list of tutorials for cucumber is interesting browsing. In particular there's webrat: Automated Acceptance Testing with RSpec or Cucumber.
Any of these could be wired up with cron to run periodically; just treat them as you would any other set of command-line apps.
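For example, a daily cron setup could look like this (a sketch; the paths, file names and email address are hypothetical, and it assumes your test files exit non-zero on failure, which Test::Unit and RSpec both do). The crontab entry:

    # run the API suite every day at 06:00
    0 6 * * * /home/me/api_tests/run_tests.sh

And run_tests.sh itself:

    #!/bin/sh
    # Run each test file; mail the full log if anything failed.
    cd /home/me/api_tests || exit 1
    : > last_run.log               # truncate the previous log
    failed=0
    for f in test_*.rb; do
        ruby "$f" >> last_run.log 2>&1 || failed=1
    done
    if [ "$failed" -eq 1 ]; then
        mail -s "API tests failed" me@example.com < last_run.log
    fi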
It should be easy to tie in web testing too, but you'll have to identify the gems/modules needed and write the glue code. I haven't had need for such a beast, as I'd go at it using Mechanize and/or one of the other HTTP gems, plus Nokogiri to ransack the pages.
I've just installed Hudson and it is running beautifully. It builds, runs JUnit tests and also CheckStyle analysis.
The next step for us would be to create an installer, install the product, and then run automated tests against the actual installation. I would then like to fail the build if the tests fail, or at least publish the results somehow. I think we would set it up so that this part runs periodically or is triggered manually.
We use InstallAnywhere for installation and IBM Rational Functional Tester for automated tests.
So my questions are: has anyone created a similar setup? Are there any plugins, tutorials or other resources that could help me along? Or do you have any tips or advice in general?
The command line reference for Rational Functional Tester:
http://publib.boulder.ibm.com/infocenter/rfthelp/v8r0m0/index.jsp?topic=/com.ibm.rational.test.ft.doc/topics/RobotJCommandLine.html
Sample command for running a test:
java -classpath "C:\IBM\RFT\FunctionalTester\bin\rational_ft.jar" com.rational.test.ft.rational_ft ^
    -datastore \\My_project\AUser\RobotJProjects -user admin ^
    -project \\My_project\AUser\TestManagerProjects\Test.rsp ^
    -build "Build 1" -logfolder "Default" -log "Al_SimpleClassicsA#1" ^
    -rt.log_format "TestManager" -rt.bring_up_logviewer true ^
    -playback basetests.SimpleClassicsA_01
An additional note: you'll want to configure Windows properly on the agent machine that will be running the tests. This is not advice specific to Hudson or RFT, but rather to all GUI automation tools on Windows. RFT requires an interactive desktop environment to be able to click buttons, etc. If your Hudson agent runs as a Windows service, there will be no desktop. See the following: Silverlight tests not working unless RDP connection open
We have run a fairly complicated distributed build on Hudson, it is a process that basically follows:
Test on Windows.
Test on OSX, run code coverage & push results to server.
Test on OSX Tiger.
Package for OSX Leopard & push build to server.
Package for Windows & push build to server.
Update product website.
We don't use InstallAnywhere or Rational Functional Tester, but have similar mechanisms in their place. The key we found to making it all sing in Hudson was being able to run our various steps from the command line. Maven and the appropriate plugins made short work of this task. So my advice would be just that: whatever build tool you are using (Ant, Maven, ...), configure it so that you can run Rational Functional Tester and InstallAnywhere from the command line with a simple goal passed to your build tool (i.e. mvn test or mvn assembly:assembly).
After that, make sure whatever machine Hudson is running on has everything installed (i.e. Rational Functional Tester) and configured, so that you can open up the command line and type in the goal and have your tests correctly execute.
Hooking it up in Hudson from that point on is fairly simple - just pass in the goal when you configure the build.
I believe the best answer is that integrating RFT with Hudson/Jenkins is a useless endeavor.
As this IBM FAQ says, to make RFT work you must:
be logged in to the machine;
the screen can't be locked;
if you are remotely connected, you can't minimize the connection screen.
So you can't run Jenkins/Hudson as a service, which makes it not very useful; you must run it from your logged-in account. If you are on a corporate computer (very probable if you are using RFT), you will probably need a hack to prevent the screen saver from starting. If the screen is locked, your tests will always fail.
It isn't very difficult to configure your tests to run from the command line; you just have to take care of the return codes when the tests fail and succeed.
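For instance, a wrapper along these lines (a sketch assuming a POSIX shell on the agent, e.g. Git Bash, and that the playback command above really does exit non-zero on failure - verify that on your installation):

    #!/bin/sh
    # run_rft.sh - run one RFT playback and surface the result to CI / Task Scheduler
    java -classpath 'C:\IBM\RFT\FunctionalTester\bin\rational_ft.jar' \
        com.rational.test.ft.rational_ft \
        -datastore '\\My_project\AUser\RobotJProjects' -user admin \
        -project '\\My_project\AUser\TestManagerProjects\Test.rsp' \
        -playback basetests.SimpleClassicsA_01
    if [ $? -ne 0 ]; then
        echo "RFT playback failed" >&2
        exit 1
    fi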
Jenkins/Hudson would also give you some advantages, like integrating the tests with your version control, probably running the tests automatically when a commit is made. It would also help with sending emails when the tests fail.
But you would still have to integrate the RFT logs with some kind of JUnit plugin to get a nice report, and you would also need a script to run the tests from the command line.
I think it is not worth the trouble to use a continuous integration server with RFT. Better to just have your tests run every day via the Windows Task Scheduler. It is a simpler solution with fewer failure points.
Or use my final solution: quit RFT and use the free Selenium with a headless web driver.
I have some general advice on this because I have not yet implemented this myself.
I am assuming you want Hudson to run the RFT scripts automatically for you via a build or Hudson process?
I want to implement something similar in my organisation as well.
I have not yet been able to implement this because of organisational constraints but here is what I have thought out/done so far:
Downloaded the Windows process viewer and got the command used for running the tests.
Made a shell script out of it and separated out the variables, etc.
The future plan is to set up a Windows slave machine which would have all the tools required once the tests are kicked off, e.g. the correct versions of browsers, environment variables, and other required tools.
Hudson would kick off a process which runs the shell scripts, which in turn run all the RFT scripts and perform the necessary operations on the slave machine.
I've been exploring different strategies for running integration tests within some NAnt build scripts. Typically a number of different scripts are chained in one monolithic build that has separate targets: staging (build a staging version), build (just build the stuff), and integration (build the stuff and run the integration tests). This works reasonably well: the build target takes about a third of the time of the integration target, and it isn't painfully long, so I don't find myself disinclined to run it frequently.
The integration target, on the other hand, takes long enough that I don't want to run it very often - ideally just before I'm ready to do a deploy. Does this seem like a reasonable strategy? IOW, am I doing it right?
The plan is to eventually move this project to Continuous Integration. I'm new to the whole Continuous Integration thing, but I think I understand the concept of "breaking the build", so I'm wondering what some good practices are to pick up in order to make the most of it.
Any good sources of reading on this subject would be appreciated as well. Thanks!
Yes, you are on the right track. What you need to do now is hook up your NAnt targets to an automated process. I recommend using either TeamCity or CruiseControl as your CI tool. Once you have your automated server set up, you can run your build and unit tests on each check-in (continuous integration). Your integration tests could then run at night or over the weekend, since they typically take longer. If your integration tests are successful, you can then have a job that deploys to a QA or other server.
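In practice the CI jobs just shell out to the targets you already have, e.g. (a sketch; target names taken from the question, assuming your default build file defines them):

    # per-check-in job: quick feedback
    nant build

    # nightly job: the slow stuff
    nant integration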
Sounds like you're 99% of the way there. My advice is to just dive in and start doing it. You'll learn a lot more by actually taking the plunge and doing it than by thinking about whether you're doing it right.
My company is currently using CruiseControl and I personally think it's great.
See this related thread What is a good CI build process?
You are on the right track. If you're using a decent CI tool, you should be able to set each step up as a separate project that triggers the next step in the chain... i.e. a successful build triggers the tests, which trigger deployment, which triggers integration, etc.
This way your earliest "break" stops the line, so to speak.
We use CruiseControl to build, unit-test, configure and deploy, run integration tests and code coverage, run acceptance tests, and package for release. This is with a system of 8 or so web services and a dozen or so databases, all with interrelated configuration and deployment dependencies across multiple environments with different configurations (anything from single boxes to redundant boxes for each component).