When a new Puppet module is created in Geppetto, the manifests, spec, and tests folders are created as well. The init.pp that resides in the manifests folder looks as follows:
define X (
  $version = "",
  $installer = "",
  $installer_extension = ""
) {
  file { "${name}_installer_copied_to_temp_directory":
    path    => "c:/temp/${name}-${version}.${installer_extension}",
    source  => "puppet:///files/${installer}.${installer_extension}",
    before  => Exec["${name}_installed"],
    require => File["c:/temp"];
  }
}
The associated init.pp which resides in the tests folder looks as follows:
include X
If TDD in Puppet makes sense, how does one create a test for the define X snippet, and how does one run the tests in Geppetto?
It does make sense!
I'd begin by looking at rspec-puppet. It provides you with custom RSpec matchers that will help you test your Puppet resources.
I'm not sure about what the boilerplate created by Geppetto includes, but running rspec-puppet-init at the root of your module will set you up for it.
In your case, you should probably begin by creating a spec/defines/x_spec.rb file (spec/, singular, is the directory layout rspec-puppet expects) and writing a few test cases with variations on the define's inputs, checking that the declared File resource has the right attributes.
You can check rspec-puppet's tutorial for examples on how to write your test cases.
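For instance, a spec along these lines would exercise the define above (a minimal sketch; the title and parameter values are invented for illustration):

require 'spec_helper'

describe 'X' do
  let(:title) { 'myapp' }   # becomes ${name} inside the define
  let(:params) do
    { :version             => '1.0.0',
      :installer           => 'myapp',
      :installer_extension => 'exe' }
  end

  it 'copies the installer into the temp directory' do
    should contain_file('myapp_installer_copied_to_temp_directory').with(
      'path'   => 'c:/temp/myapp-1.0.0.exe',
      'source' => 'puppet:///files/myapp.exe'
    )
  end
end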
Be aware that you're not supposed to test whether a file was actually created on disk. That is Puppet's domain (not your module's), and is already tested in Puppet's own codebase.
Just to clarify at a higher level: the way you do TDD in Puppet with rspec-puppet may be a little different from the TDD you've done in other languages. rspec-puppet only tests the generated catalog; it does not "smoke test" in the way you might think of first, but it works very nicely for unit testing.
As an example of what it is NOT for: say that in the general package-file-service paradigm, there's some environment-specific reason the package might not install -- for instance, it conflicts with another RPM. rspec-puppet can't test or catch things like this, because it only examines the catalog that gets generated, not the actual application of that catalog on a real system.
In general, a great approach for getting started with testing is to identify (a) branching logic and (b) varying data/values in your module. Write tests against all logic branches as a first step, and test valid, invalid, and outlier values for variables that appear in your code.
In the above example there really isn't much in the way of logic, but you might test what happens when odd values for $name or $version are passed in; you want your Puppet code to fail gracefully, because unhandled Puppet errors are not, as a rule, the most communicative and clear of errors.
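For instance, if you added a validation guard to the define (say, a fail() when $version is empty), a sketch of such a test, inside the same describe block, might be:

context 'with an empty $version' do
  let(:title) { 'myapp' }
  let(:params) do
    { :version => '', :installer => 'myapp', :installer_extension => 'exe' }
  end

  # expect catalog compilation itself to fail with a readable message
  it { should compile.and_raise_error(/version/) }
end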
At the absolute minimum...
it { should compile }
(Getting it running cleanly after install can be a bit of a bear, it's true. I also use puppetlabs_spec_helper with it.)
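If you do pull in puppetlabs_spec_helper, a one-line Rakefile at the module root is enough to get a rake spec task; as far as I know Geppetto has no built-in runner for this, so run it from a terminal (or wire it up as an external tool in Eclipse/Geppetto):

# Rakefile -- puppetlabs_spec_helper supplies the `spec` task,
# so `rake spec` runs everything under spec/
require 'puppetlabs_spec_helper/rake_tasks'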
Related
For a blog, I put together an inline tag that pulls header information from a specified webpage. It works, but I need to add caching, so that I'm not making redundant network calls.
I'd like a tighter debugging cycle than restarting the server and waiting for a rebuild, but running the plugin as plain Ruby code reports uninitialized constant Liquid (NameError). That makes sense: Liquid isn't required, and the plugin wouldn't run anyway, since it's just a class definition.
So, I tried creating some scaffolding to run the code, or what I thought would be scaffolding, anyway.
require 'liquid'
require_relative '../_plugins/header.rb'
ht = HeaderInlineTag.new
ht.initialize 'header', 'some path'
puts ht.render()
This produces...
_test/header.rb:4:in `<main>': private method `new' called for HeaderInlineTag:Class (NoMethodError)
Considering the possibility that initialize() might be the way to create the object, I combined the .new and .initialize calls into one, but that didn't work either. Same error, different method name. The plugin doesn't mark anything as private, and declaring the methods public doesn't change anything.
What more is needed to test the plugin without carrying around the entire blog?
The solution was beyond my Ruby knowledge, but mostly straightforward, once I connected the pieces of information floating around.
First, this existing answer is about a specific issue dealing with Rails, but incidentally shows how to deal with private new methods: Call them through send, as in HeaderInlineTag.send :new.
Then, this indirect call to .new() now (of course) calls .initialize(), meaning that it needs the three parameters required of any such plugin. Two of the parameters are for the test itself, so they're easy. The third is a parse context. Documentation on writing Jekyll plugins is never clear about what the parse context actually is, since it gets passed automatically as part of the build process. However, some research and testing turns up Liquid::ParseContext as the culprit.
Finally, .render() also takes the ParseContext value.
Therefore, the test scaffold should look something like this.
require 'liquid'
require_relative '../_plugins/header.rb'
context = Liquid::ParseContext.new
ht = HeaderInlineTag.send :new, 'header', 'some path...', context
puts ht.render context
I can call this from my blog's root folder with ruby _test/header.rb, and it prints the plugin's output. I can now update this script to pull the parameters from the command line or a CSV file, depending on the tests required.
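For example, pulling the markup from the command line is a small change to the scaffold ('header' stays hard-coded; the fallback is just the placeholder from above):

require 'liquid'
require_relative '../_plugins/header.rb'

# take the tag's markup from the first command-line argument,
# falling back to the placeholder used above
markup = ARGV.fetch(0, 'some path...')

context = Liquid::ParseContext.new
ht = HeaderInlineTag.send :new, 'header', markup, context
puts ht.render(context)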
I would like to ask if we can also achieve a Page/Window Object Model in AutoIt. The majority of my project assignments were web automation, and I'm using Selenium WebDriver with a framework that uses the Page Object Model. Currently I'm assigned to a project for GUI automation. I'd like to implement this kind of approach in AutoIt as well, if feasible, so that I can reuse the objects in other classes. We are planning to use AutoIt standalone. I noticed that in most of the examples available on the internet, the objects are created in each class/script.
Your insights are highly appreciated.
Thanks!
General:
The common approach of using the Page Object Model (POM) design pattern isn't really feasible in AutoIt in its usual form. Of course you can create an object structure with AutoIt too, but the language wasn't intended for that. Still, some of the goals of POM can be achieved with the following suggested test structure.
Please notice:
Since you don't provide enough information about your application under test (AUT), I'll explain a basic structure. The implementation depends on your application (Swing/RCP, WinForms, etc.). It's also important which tool support you need for your page-object recognition. Besides WinForms controls, which can be driven by AutoIt's ControlCommand functions, a proper way is to use UIASpy or au3_uiautomation as helper tools.
UIASpy - UI Automation Spy Tool
au3_uiautomation
It's an advantage to know the POM structure from the Selenium context. I usually include a test case description with behavior-driven development (BDD; Gherkin syntax with Cucumber or SpecFlow), but that won't be part of this example.
Example structure:
The structure consists of two applications under test, Calc and VlcPlayer. Both follow the common structure of PageObjects and Tests. You should try to divide your page objects (files) into subfolders to keep an overview. This substructure should be similar for the Tests folder/subfolders.
In the Tests area you could include several test stages or test categories, depending on your test goals (acceptance/UI tests, just functional smoke tests, and so on). It's also a good idea to control the execution order with a separate wrapper file, TestCaseExecutionOrder.au3. One of these should exist per test category, to avoid mixing them.
This wrapper .au3 file contains the function calls; it's the entry point that starts and controls processing.
Approach description:
TestCaseExecutionOrder.au3
Calls the functions which are the test cases in the subfolders (Menu, PlaylistContentArea, SideNavigation).
Test case NiceName consists of some test steps.
These test steps have to be included into that script/file by:
#include-once ; this line is optional
#include "Menu\OpenFolder.au3"
Test step OpenFolder.au3 (which is part of a test case) contains the function(s) that load the folder and its content.
In those functions, the page object MenuItemMedia.au3 is loaded/included into the script/file by:
#include-once ; this line is optional
#include "..\..\..\PageObjects\Menu\MenuItemMedia.au3"
The file MenuItemMedia.au3 should only contain the recognition mechanism for that area, plus its actions.
This could be "find menu item Media" (as a function),
or "find open folder menu item" (as a function), and so on.
Func _findMenuItemMedia()
    ; do the recognition action
    ; ...
    Return $oMenuItem
EndFunc
The test step OpenFolder.au3 then calls _findMenuItemMedia() like:
Global $oMedia = _findMenuItemMedia()
and on the returned object a .click can be executed, or something like .getText, etc.
The test cases should only #include the files that are necessary (the test steps). The test steps should likewise only #include the necessary files (the page objects), and so on. This makes it possible to adjust a recognition function once and have it used by all the corresponding test steps.
Conclusion:
Of course it's hard to explain in this form, but with this approach you can work in a way similar to Selenium web testing. Note that you will probably have to use Global variables often. You have to ensure the includes are correct and take care not to lose the overview of your tests, which is much easier in OOP-based test approaches.
I recommend the usage of VS Code, because you can jump from file to file at the #include statements. That's pretty handy.
I hope this will help you.
I am quite happy to have found FakeFS, which lets me fake a filesystem sandbox in which my tests can mess around. Now I want to be able to test FileUtils.chown and chmod operations, and therefore want to work with fake users in the test context rather than tying my code to the operating system's user database, so that the tests stay fully portable and discrete.
I do my testing with rspec.
What would be the best way to do that?
I'm not sure I get the question, but I assume you want to test the "real" results of the FileUtils calls, i.e. the changed permissions and ownership of files; that's why you need some kind of users. I wouldn't do that, since "SELECT isn't broken": http://pragmatictips.com/26
Instead, assume the following: if you call FileUtils.chown and friends with the right parameters, they will do the right thing. To make sure your application passes the correct parameters to FileUtils, use a mock. Here's an example: https://gist.github.com/phillipoertel/5020102
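The gist spells this out; roughly, the pattern looks like this (a sketch, with a made-up PermissionSetter class standing in for your own code):

require 'fileutils'

describe PermissionSetter do
  it 'passes the expected owner, group and path to FileUtils.chown' do
    # stub the call: nothing touches the real filesystem or user database
    expect(FileUtils).to receive(:chown).with('alice', 'staff', '/data/report.txt')
    PermissionSetter.new.grant('/data/report.txt', 'alice', 'staff')
  end
end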
If you don't want to couple your test so closely to the implementation (the mock knows what's going on inside your class), you could test the changed permissions by expecting their effects. For example: if #user wasn't allowed to access #file before, call the method which changes permissions, then call one of your methods which requires the changed permissions and assert that it works. But this approach accesses the filesystem, which you didn't want in the first place.
There is a very easy way to do this: FileUtils::DryRun. It's a module that acts just like FileUtils but doesn't actually do anything to the filesystem; it exists for exactly these kinds of scenarios.
If you need it to replace FileUtils, just run FileUtils = FileUtils::DryRun
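A sketch of how that behaves (to my knowledge, DryRun echoes each operation instead of performing it):

require 'fileutils'

# prints something like `chown alice:staff /data/report.txt`
# without touching the filesystem or the user database
FileUtils::DryRun.chown('alice', 'staff', '/data/report.txt')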
Recently I started reading up on BDD and TDD, and I got hooked. I got lost in the amount of unorganized information and the differing opinions on what's best and what's not. In the end I settled on xBehave and xUnit. I like the fluent syntax and the ease of defining behaviors with Fluent Assertions and Fluent Validation.
I'm also trying to implement the onion architecture with a test project I'm working on for learning. Here's my scenario: the project, to keep it simple, is a product tracker. I can create products and track who owns them. I want to implement two specs:
when a new product is created without a name then an error should be displayed
when a new product is created without an owner assigned then an error should be displayed.
I created the spec, which instantiates a new Product and a new ProductService, which in turn creates the Product. The spec passes and the validation is occurring. Now the questions are:
How do I test my ProductRepository class? Do I test it next or mock it and finish all specs first then come back and test repository classes?
Should I have mocked the ProductService class in the first spec?
Is that done at the unit-test level? Should I create a unit test class?
Wouldn't testing the repository make it an integration test?
So far I don't have a UI, and I'm writing my specs for the domain, service, and infrastructure layers.
Do I need to use WatiN for my UI tests?
Would switching to WatiN/SpecFlow make more sense, and would it save effort in getting fully tested layers from top to bottom?
Here's one of the specs I worked on:
[Scenario]
public void creating_new_product_without_a_name_should_throw_error()
{
    var productService = default(IProductService);
    var action = default(Action);

    _
        .Given("a new product", () =>
            productService = new ProductService() as IProductService)
        .When("creating the new product without a name", () =>
            action = () => productService.Create(new Product()))
        .Then("it should display an error", () =>
            action.ShouldThrow<ValidationException>().WithMessage("Name is required."));
}
Thank you in advance for your reply, and, if you answer this thread, please back it up with some materials/articles/sample code as to why your suggestion would be better to follow.
It sounds like you are testing small parts and then planning to glue them together into something big. This is (IMO, again) not TDD (and certainly not BDD): you are not letting the tests drive the design/architecture.
To start with, don't think that much about the design. Don't think onion architecture, don't think repositories, don't think services.
Start by writing a test that verifies the whole solution, from end to end. Make that test as small as possible. To start with, just verify that e.g. a label is displayed. Then write the smallest solution that you can think of to make the test pass. Then write another test and the solution for that.
After a while, have a look at the code (production, but also test) to find responsibilities. Is there an embryo of a service in there somewhere? Extract it! But don't do it prematurely. Let the code tell you what it wants to look like.
Start thinking about the domain (Product, Owner, etc.) up front and include it early in your code. Wait a little longer with persistence (repositories), but not too long.
Keep testing at this level (end-to-end). Add micro tests when necessary. So my answer to the question "how do I test my ProductRepository/Service class" is a question to you: do you need to? Or is it sufficiently covered by the end-to-end tests? If not, why?
Within a test file (MyTest.cs) it is possible to do setup and teardown at the class and the individual test level. Do similar hooks exist for the entire project? Entire solution?
No, I don't believe they do.
Typically when people ask this question it's because they have tests which depend on something heavy, like a DB setup which needs to be reset for each test pass. Usually the right thing to do here is to mock/stub/fake out the dependency and remove the part that's causing the issue. Of course, your reasons may be completely different.
Updated: Thinking about this some more, I think you could do something with attributes and static types. You could add an assembly-level attribute to each test assembly and pass it a static type.
[OnLoadAttribute(typeof(ProjectInitializer))]
When the assembly loads, the type will get resolved, and its static constructor will be executed the first time it is resolved (when the assembly is loaded).
Doing something at the solution level is much harder, because it depends on how your unit test runner deals with tests and how it loads them into AppDomains: per test, per test class, or per test project. I suspect most runners create a new AppDomain per project.
I'm not exactly recommending this, as I haven't tried it and there may be some repercussions; it's an idea you might want to try. Another option would be to derive all your tests from a common base class whose constructor resolves a singleton that in turn does your setup. This is less hacky but means having a common base class. You could also use an aspect-oriented approach, I suspect.
Hope this helps. These are just thoughts as to how you could do this.
Ade
We use the [AssemblyInitialize] / [AssemblyCleanup] attributes for project level test setup and cleanup code. We do this for two things:
creating a test database
creating configuration files in a temp directory
It works fine for us, although we have to be careful that each test leaves the database as it found it. It looks a little like this (simplified):
[AssemblyInitialize]
public static void AssemblyInit(TestContext context)
{
    ConnectionString = DatabaseHelper.CreateDatabaseForUnitTests();
}

[AssemblyCleanup]
public static void AssemblyCleanup()
{
    DatabaseHelper.DeleteDatabase(ConnectionString);
}
I do not know of a way to do this solution-wide (I guess this really means for the whole test run, across potentially multiple projects).