Using Context in Behat - the right approach (TDD)

What is the right approach for using FooContext classes in Behat?
What the docs say:
A simple mnemonic for context classes is: “testing features in a context”. (...), the way you will test those features pretty much depends on the context you test them in.
In the docs, FeatureContext seems to me to be only a dummy context file so that you can quickly create a Behat test.
The context class should be called FeatureContext. It’s a simple convention inside the Behat infrastructure. FeatureContext is the name of the context class for the default suite.
It doesn't say directly that there has to be one context file per feature.
The only other real examples in the docs are contexts like ApiContext or WebContext.
default:
    suites:
        web_features:
            paths: [ %paths.base%/features/web ]
            contexts: [ WebContext ]
        api_features:
            paths: [ %paths.base%/features/api ]
            contexts: [ ApiContext ]
I also found a CommandFeature and a CommandLineProcessContext.
So if I have many features to test, a single context file would blow up very fast.
Then I saw something closer to a context-file-per-feature example by Marco Pivetta, using an Aggregate as the Context.
Is it a good idea to have a single context file per feature (foo.feature)? Or are context files meant to be environmental contexts like ApiContext or WebContext in the docs?

Usually I prefer to divide contexts and features as follows:
One folder for each domain subject
One feature / context file for each action you can perform on these subjects
That way you'll end up with a single suite using more than one context, but with each context keeping a single responsibility.
Quick example
Features
|--- User
|----- login.feature
|----- change_password.feature
|----- impersonate.feature
|----- ban.feature
|----- ...
|--- ...
|--- Order
|----- checkout.feature
|----- cancel.feature
...
Context
|--- User
|---- LoginContext
|---- ChangePasswordContext
|---- ImpersonateContext
|---- BanContext
|---- ...
|--- Order
|---- CheckoutContext
|---- CancelContext
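Each feature file then contains only the scenarios for that one action. For instance, login.feature could look roughly like this (the steps are illustrative, not from the original answer):
Feature: Login
  Scenario: Logging in with valid credentials
    Given I am on the login page
    When I log in as "alice" with password "secret"
    Then I should see my dashboard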
Each suite is composed of many contexts (for example, every time you need to log in to check a behavior, you'll include LoginContext in your suite).
default:
    suites:
        suite_name:
            paths:
                - '%paths.base%/Path/To/Feature/File'
            contexts:
                - Path\To\Context\LoginContext
                - Path\To\A\SecondContext
                - ...
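For illustration, a minimal single-responsibility context like LoginContext might look like this (a sketch; the step definitions and page-access details are assumptions, not part of the original answer):
<?php

use Behat\Behat\Context\Context;

class LoginContext implements Context
{
    /** @Given I am on the login page */
    public function iAmOnTheLoginPage()
    {
        // navigate to the login page, e.g. via a Mink session
    }

    /** @When I log in as :username with password :password */
    public function iLogInAs($username, $password)
    {
        // fill in and submit the login form
    }

    /** @Then I should see my dashboard */
    public function iShouldSeeMyDashboard()
    {
        // assert that the dashboard is shown
    }
}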
This approach has many advantages in terms of maintainability, intuitiveness and so on.
If you would like a more general overview of this subject, you can check the slides from my talk at Symfony Day 2017 in Milan.


Include Controller (executing all the simple controllers in the test fragment)

main-sub projects (screenshot): https://i.stack.imgur.com/1YL97.jpg
As shown in the picture, I've created two JMeter projects (Main.jmx and Sub1.jmx). Main.jmx has a thread group (Subscription) with an Include Controller that runs the Sub1.jmx project.
Sub1.jmx contains a Thread Group (Subscription) and a Test Fragment. All the reusable functionality is added under the Test Fragment as simple controllers, such as Sub1-Login, Sub1-Logout, and Sub1-Close. A test case (Test Case-1) created under the Subscription thread group invokes the Sub1-Login and Sub1-Logout simple controllers via Module Controllers. The Sub1-Close simple controller is not used in the test case.
When I run Main.jmx, it executes the test case (Test Case-1), which includes login and logout, but it also executes the remaining simple controllers from the Test Fragment (Sub1-Close).
What is the best way to run only the simple controllers (defined in the Test Fragment) that are referenced in the test cases?
JMeter's Module Controller runs a Test Fragment in its entirety.
If you don't want to run a certain part of the Test Fragment, you have 2 options:
Either put it under an If Controller and come up with a JMeter Function or Variable which controls whether it is executed or not (see the example below)
Or split your Test Fragment into 2 (or more) separate fragments and compose your "test case" from the smaller parts.
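For the first option, the If Controller condition can be driven by a plan-specific variable; for example (the runClose variable name is made up for illustration):
${__jexl3("${runClose}" == "true")}
Then define runClose as a User Defined Variable only in the plans that should execute that part of the fragment.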

AutoIT Page/Window Object Model

I would like to ask whether a Page/Window Object Model can also be achieved in AutoIt. Most of my project assignments have been web automation, where I use Selenium WebDriver with a framework based on the Page Object Model. Currently I'm assigned to a GUI automation project. I would like to implement this kind of approach in AutoIt as well, if feasible, so that I can reuse the objects in other classes. We are planning to use AutoIt standalone. I noticed that in most of the examples available on the internet, the objects are created in each class/script.
Your insights are highly appreciated.
Thanks!
General:
The common approach of using the Page Object Model (POM) design pattern isn't really a good fit for AutoIt. Of course you can create an object structure with AutoIt too, but the language wasn't intended for it. Still, some of the goals of POM can be achieved with the following suggested test structure.
Please note:
Since you don't provide much information about your application under test (AUT), I'll explain a basic structure. The implementation depends on your application (Swing/RCP, WinForms, etc.). It also matters which tool support you need for your page object recognition. Apart from WinForms controls, which can be driven by AutoIt's ControlCommand functions, a proper way is to use UIASpy or au3_uiautomation as helper tools.
UIASpy - UI Automation Spy Tool
au3_uiautomation
It's an advantage to know the POM structure in the context of Selenium. I usually include a test case description using behavior-driven development (BDD; Gherkin syntax with Cucumber or SpecFlow), but that won't be part of this example.
Example structure:
The structure consists of two applications under test, Calc and VlcPlayer. Both follow the common structure of PageObjects and Tests. You should try to divide your page objects (files) into several subfolders to keep an overview. This substructure should be similar for the Tests folder/subfolders.
In the Tests area you could include several test stages or test categories, depending on your test goals (acceptance/UI tests, just functional smoke tests, and so on). It's also a good idea to control the execution order with a separate wrapper file, TestCaseExecutionOrder.au3. This should exist for each test category to avoid mixing them.
This wrapper au3 file contains the function calls; it is the starting and controlling point of the processing.
Approach description:
TestCaseExecutionOrder.au3
Calls the functions which are the test cases in the subfolders (Menu, PlaylistContentArea, SideNavigation).
Test case NiceName consists of some test steps.
These test steps have to be included into that script/file by:
#include-once ; this line is optional
#include "Menu\OpenFolder.au3"
The test step OpenFolder.au3 (which is a part of a test case) contains the function(s) that load the folder and its content.
In those functions, the page object MenuItemMedia.au3 is loaded/included into the script/file by:
#include-once ; this line is optional
#include "..\..\..\PageObjects\Menu\MenuItemMedia.au3"
The file MenuItemMedia.au3 should only contain the recognition mechanism for that area, and its actions.
This could be "find menu item Media" (as a function),
or "find open folder menu item" (as a function), and so on.
Func _findMenuItemMedia()
    Local $oMenuItem
    ; do the recognition action and assign the result to $oMenuItem
    ; ...
    Return $oMenuItem
EndFunc
The test step OpenFolder.au3 then calls _findMenuItemMedia() like this:
Global $oMedia = _findMenuItemMedia()
and on the returned object a .Click can be executed, or something like .GetText, etc.
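Put together, a test step function could look roughly like this (a sketch; the function name, the parameter and the .Click call are assumptions that depend on your recognition helper):
#include-once
#include "..\..\..\PageObjects\Menu\MenuItemMedia.au3"

Func _openFolderViaMediaMenu($sFolderPath)
    ; look up the page object, then act on the returned object
    Local $oMedia = _findMenuItemMedia()
    $oMedia.Click()
    ; ... continue in the open-folder dialog using $sFolderPath
EndFunc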
The test cases should only #include the files which are necessary (test steps). The test steps should also only #include the necessary files (page objects), and so on. That way you can adjust the recognition functions in one place, and the change is picked up by the corresponding test steps.
Conclusion:
Of course it's hard to explain this briefly, but with this approach you can work in a similar way to Selenium for web testing. Note that you will probably have to use Global variables often. You have to ensure the includes are correct and take care not to lose the overview of your test, which is much easier in OOP-based test approaches.
I recommend the usage of VS Code, because you can jump from file to file at the #include statements. That's pretty handy.
I hope this will help you.

How can I organize and load packages based on configurable properties?

The system I'm working on can be configured with different boolean properties. The theoretical maximum of different configurations is 2^n, where n is the number of such properties.
There is a configuration tool separated from the production code, which uses boolean expressions to select which code packages are loaded.
This tool contains a class that defines methods like isWithX or more complex ones like isWithXwithoutYwithZ.
A package is basically a list of class extensions that define or redefine some method definitions.
Today the packages are named like myPackageWithAWithoutBwithCwithDwithoutE.
Since the number of boolean configuration properties keeps growing, the number of different packages and the length of their names are getting ridiculous, and we can't read the names without scrolling all the time.
There is also a lot of code duplication.
EDIT: since the production code has no access to the configured properties, the question is now limited to these issues:
the list of package names
for each package and each functionality: how many methods and how to name them
Right now the list of package names is basically all different combinations of names like this one: myPackageWithAWithoutBwithCwithDwithoutE, except those where there is no configuration-specific behaviour that we need to implement.
Right now, for each package and each functionality, there is one method per package, named after the functionality.
I cannot tell you how to solve your specific problem, so I will share with you some hints that might help you get started in finding the design that will actually work for you.
Consider a hierarchy of configuration objects:
ConfigurationSettings
ApplicationSettings
UserSettings
DisplaySettings
...
The abstract class ConfigurationSettings provides basic services to read/write settings that belong to any specific section. The hierarchy allows for simpler naming conventions because selectors can be reimplemented in different subclasses.
The ApplicationSettings subclass plays a different role: it registers all the sections in its registry instance variable, a Dictionary where the keys are section names and the values the corresponding subclass instances:
ApplicationSettings current register: anEmailSettings under: 'Email'
The abstract class provides the basic services for reading and writing settings:
settingsFor: section
settingAt: key for: section
settingAt: key for: section put: value
Subclasses use these services to access individual settings and implement the logic required by the client to test whether the current configuration is, has, supports, etc., a particular feature or combination. These more specific methods are implemented in terms of the more basic settingAt:for:.
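For instance, a section subclass could implement a testing method on top of those basic services (the selector and setting keys here are invented for illustration):
EmailSettings >> isTlsWithoutAuthentication
    ^(self settingAt: 'useTls' for: 'Email')
        and: [(self settingAt: 'requiresAuth' for: 'Email') not]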
When a new package registers its own subclass, its testing methods become available as follows:
self <section>Settings isThisButNotThat
where, for instance,
emailSettings
    ^(ApplicationSettings current for: 'Email') isThisButNotThat
and similarly for any other section. As I mentioned above, the division in subclasses will allow you to have simpler selectors that implicitly refer to the section (#isThisButNotThat instead of #isEmailThisButNotThat).
One other feature that is important to support Apply/Cancel dialogs for the user to modify settings is provided by two methods:
ConfigurationSettings >> #readFrom:
and
ConfigurationSettings >> #writeOn:
So, when you open the GUI that displays the settings, you don't open it on the current instance but on a copy of it:
settings := ApplicationSettings new readFrom: ApplicationSettings current.
Then you present this copy to the user in the GUI. If the user cancels the dialog, you simply forget the copy. Otherwise you apply the changes this way:
settings writeOn: ApplicationSettings current
The implementation of these two services follows a simple pattern:
ApplicationSettings >> readFrom: anApplicationSettings
    registry keysAndValuesDo: [:area :settings | | section |
        section := anApplicationSettings for: area.
        settings readFrom: section]
and
ApplicationSettings >> writeOn: anApplicationSettings
    registry keysAndValuesDo: [:area :settings | | section |
        section := anApplicationSettings for: area.
        settings writeOn: section]
I don't fully understand all of the aspects of your problem but maybe you could use a dynamic approach. For example, you could override #doesNotUnderstand: to parse the selector sent to the configuration and extract the package names:
doesNotUnderstand: aMessage
    | stream included excluded |
    "parse selectors like #isWithXwithoutYwithoutZ"
    stream := (aMessage selector allButFirst: 6) asLowercase readStream.
    included := Set new.
    excluded := Set new.
    [ stream atEnd ] whileFalse: [
        (stream peek: 3) = 'out'
            ifTrue: [
                stream next: 3.
                excluded add: (stream upToAll: 'with') ]
            ifFalse: [ included add: (stream upToAll: 'with') ] ].
Then, all you need is a bit more code to generate the list of packages from that (I hope).

What is a good Visual Studio source code layout for cross-project inheritance hierarchies?

I develop similar calculation packages for different clients (Bob, Rick, Sue, Eve). The calculation packages consist of several calculation modules (A, B, C, D, ...), which are organized in a "Chain of Responsibility" pattern. Assembly is done with an Abstract Factory.
As a lot of code is shared between the calculation modules of different clients, they are organized in a hierarchy:
ICalcA
|--- AbstrCalcA
|----- AbstrCalcAMale
|------- CalcABob
|------- CalcARick
|----- AbstrCalcAFemale
|------- CalcASue
|------- CalcAEve
(same hierarchy for B, C, D, ...).
Now release management dictates that I organize the source code per inheritance level:
Project: CalcCommon
    [CalcA]
        ICalcA.cs
        AbstrCalcA.cs
    [CalcB]
        ICalcB.cs
        AbstrCalcB.cs
    [CalcC]
        ...
Project: CalcMale
    [CalcA]
        AbstrCalcAMale.cs
    [CalcB]
        AbstrCalcBMale.cs
    [CalcC]
        ...
Project: CalcBob
    [CalcA]
        CalcABob.cs
    [CalcB]
        CalcBBob.cs
    [CalcC]
        ...
Project: CalcFemale
    ...
For Bob, I release CalcCommon.dll, CalcMale.dll and CalcBob.dll.
Now this is all good, but with many modules, helper classes, and so on, it is very cumbersome to work within the same module hierarchy. Closely related classes (e.g. ICalcA and CalcABob) are far apart in the solution explorer. No one on my team seems to find anything without searching for the class name, if they can remember it. Features tend to get implemented at the wrong hierarchy level, or at multiple levels.
How can I improve the situation?
I was thinking of creating one project per Module and hierarchy level (Projects: CalcCommonA, CalcMaleA, CalcBobA, CalcRickA, CalcCommonB, CalcMaleB, ...), and grouping them via solution folders.
I just found out that the new search bar on top of the solution explorer comes in handy for this.
Step 1: Make sure all classes related to feature A contain "FeatureA" in their class name.
Step 2: When working in the FeatureA hierarchy, enter "FeatureA" into the search/filter bar.
This will display just the classes of this particular hierarchy, exactly as required.

Business Layer structure, how do you build yours?

I am a big fan of n-tier architecture for my development choices; of course it doesn't fit every scenario.
I am currently working on a new project, and I am trying to have a play with the way I normally work to see if I can clean it up, as I have been a very bad boy and have been putting too much code in the presentation layer.
My normal business layer structure is this (basic view of it):
Business
    Services
        FooComponent
            FooHelpers
            FooWorkflows
        BahComponent
            BahHelpers
            BahWorkflows
    Utilities
        Common
        ExceptionHandlers
        Importers
        etc...
Now with the above I have direct access to save a Foo object and a Bah object via their respective helpers.
The XXXHelpers give me access to Save, Edit and Load the respective objects, but where do I put the logic to save objects together with their child objects?
For example:
We have the below objects (not very good objects I know)
Employee
EmployeeDetails
EmployeeMembership
EmployeeProfile
Currently I would build these all up in the presentation layer and then pass them to their Helpers. I feel this is wrong; I think the data should be passed to a single point above the presentation layer, somewhere in the business layer, and sorted out there.
But I'm at a bit of a loss as to where I would put this logic and what to call that area. Would it go under Utilities as an EmployeeManager or something like that?
What would you do? (I know this is all down to preference.)
A more detailed layout
The workflows contain all the direct calls to the DataRepository, for example:
public ObjectName GetById(Guid id)
{
    return DataRepository.ObjectNameProvider.GetById(id);
}
And then the helpers provide access to the workflows:
public ObjectName GetById(Guid id)
{
    return loadWorkflow.GetById(id);
}
This is to cut down on duplicate code: you can have one call in the workflow, such as getBySomeProperty, and then several calls in the Helper which do other operations and return the data in different ways (a crude example would be GetByIdAsc and GetByIdDesc).
By separating the calls to the data model behind the DataRepository, the idea was that it would be possible to swap out the model for another implementation; but ProviderHelper has not been broken down, so it is not interchangeable, as it is unfortunately hardcoded to EF.
I don't intend to change the access technology, but in the future there might be something better, or just something that all the cool kids are now using, that I might want to implement instead.
projectName.Core
projectName.Business
    - Interfaces
        - IDeleteWorkflows.cs
        - ILoadWorkflows.cs
        - ISaveWorkflows.cs
        - IServiceHelper.cs
        - IServiceViewHelper.cs
    - Services
        - ObjectNameComponent
            - Helpers
                - ObjectNameHelper.cs
            - Workflows
                - DeleteObjectNameWorkflow.cs
                - LoadObjectNameWorkflow.cs
                - SaveObjectNameWorkflow.cs
    - Utilities
        - Common
            - SettingsManager.cs
            - JavascriptManager.cs
            - XmlHelper.cs
            - others...
        - ExceptionHandlers
            - ExceptionManager.cs
            - ExceptionManagerFactory.cs
            - ExceptionNotifier.cs
projectName.Data
    - Bases
        - ObjectNameProviderBase.cs
    - Helpers
        - ProviderHelper.cs
    - Interfaces
        - IProviderBase.cs
    - DataRepository.cs
projectName.Data.Model
    - Database.edmx
projectName.Entities (entities that represent the DB tables are created by EF in .Data.Model; this project is for others that I may need that are not related to the database)
    - Helpers
        - EnumHelper.cs
projectName.Presentation (depends on what kind of application is called for)
    - projectName.web
    - projectName.mvc
    - projectName.admin
The test projects:
    - projectName.Business.Tests
    - projectName.Data.Tests
+1 for an interesting question.
So, the problem you describe is pretty common - I'd take a different approach - first with the logical tiers and secondly with the utility and helper namespaces, which I'd try and factor out completely - I'll tell you why in a second.
But first: my preferred approach here is a pretty common enterprise architecture, which I'll try to outline briefly, though there's much more depth out there. It does require some radical changes in thinking - using NHibernate or Entity Framework to allow you to query your object model directly and let the ORM deal with things like mapping to and from the database, lazy loading relationships, etc. Doing this will allow you to implement all of your business logic within a domain model.
First, the tiers (or projects in your solution):
YourApplication.Domain
The domain model - the objects representing your problem space. These are plain old CLR objects with all of your key business logic. This is where your example objects would live, and their relationships would be represented as collections. There is nothing in this layer that deals with persistence etc, it's just objects.
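Using your sample classes, such plain objects might look roughly like this (a sketch; the virtual members assume NHibernate-style lazy loading, and the method is just a placeholder):
public class Employee
{
    public virtual Guid Id { get; set; }
    public virtual EmployeeDetails Details { get; set; }
    public virtual EmployeeMembership Membership { get; set; }
    public virtual EmployeeProfile Profile { get; set; }

    // key business logic lives here, next to the data it operates on
    public virtual void ChangePassword(string newPassword)
    {
        // validate the password and update the related objects
    }
}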
YourApplication.Data
Repository classes - these are classes that deal with getting the aggregate root(s) of your domain model.
For instance, it's unlikely in your sample classes that you would want to look at EmployeeDetails without also looking at Employee (an assumption I know, but you get the gist - invoice lines are a better example; you generally get to invoice lines via an invoice rather than loading them independently). As such, the repository classes, of which you have one per aggregate root, will be responsible for getting the initial entities out of the database using the ORM in question, implementing any query strategies (like paging or sorting) and returning the aggregate root to the consumer. The repository would consume the current active data context (ISession in NHibernate) - how this session is created depends on what type of app you are building.
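A minimal repository for the Employee aggregate root might then look like this (a sketch assuming NHibernate's ISession; the interface and names are illustrative):
public interface IEmployeeRepository
{
    Employee GetById(Guid id);
    void Save(Employee employee);
}

public class EmployeeRepository : IEmployeeRepository
{
    private readonly ISession _session;

    public EmployeeRepository(ISession session)
    {
        _session = session;
    }

    public Employee GetById(Guid id)
    {
        // the ORM materialises the aggregate root; child objects lazy-load on access
        return _session.Get<Employee>(id);
    }

    public void Save(Employee employee)
    {
        // cascades configured in the mapping persist the children as well
        _session.SaveOrUpdate(employee);
    }
}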
YourApplication.Workflow
Could also be called YourApplication.Services, but this can be confused with web services
This tier is all about interrelated, complex atomic operations - rather than have a bunch of things to be called in your presentation tier, and therefore increase coupling, you can wrap such operations into workflows or services.
It's possible you could do without this in many applications.
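As a rough sketch of one such wrapped operation (all names invented; the transaction comes from NHibernate's ISession):
public class HireEmployeeWorkflow
{
    private readonly ISession _session;
    private readonly IEmployeeRepository _employees;

    public HireEmployeeWorkflow(ISession session, IEmployeeRepository employees)
    {
        _session = session;
        _employees = employees;
    }

    public void Execute(Employee newEmployee)
    {
        // one atomic operation behind a single call from the presentation tier
        using (var tx = _session.BeginTransaction())
        {
            _employees.Save(newEmployee);
            // ... any related operations (membership setup, notifications, etc.)
            tx.Commit();
        }
    }
}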
Other tiers then depend on your architecture and the application you're implementing.
YourApplication.YourChosenPresentationTier
If you're using web services to distribute your tiers, then you would create DTO contracts that represent just the data you are exposing between the domain and the consumers. You would define assemblers that would know how to move data in and out of these contracts from the domain (you would never send domain objects over the wire!)
In this situation, if you're also creating the client, you would consume the operation and data contracts defined above in your presentation tier, probably binding to the DTOs directly, as each DTO should be view-specific.
If you have no need to distribute your tiers, remembering the first rule of distributed architectures is don't distribute, then you would consume the workflow/services and repositories directly from within asp.net, mvc, wpf, winforms etc.
That just leaves where the data contexts are established. In a web application, each request is usually pretty self contained, so a request scoped context is best. That means that the context and connection is established at the start of the request and disposed at the end. It's trivial to get your chosen IoC/dependency injection framework to configure per-request components for you.
In a desktop app, WPF or winforms, you would have a context per form. This ensures that edits to domain entities in an edit dialog that update the model but don't make it to the database (eg: Cancel was selected) don't interfere with other contexts or worse end up being accidentally persisted.
Dependency injection
All of the above would be defined as interfaces first, with concrete implementations realised through an IoC/dependency injection framework (my preference is Castle Windsor). This allows you to isolate, mock and unit test individual tiers independently, and in a large application, dependency injection is a life saver!
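For example, registering a per-web-request NHibernate session with Castle Windsor could look like this (a sketch; the session factory setup is omitted):
container.Register(
    Component.For<ISession>()
        .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession())
        .LifestylePerWebRequest());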
Those namespaces
Finally, the reason I'd lose the helpers namespace is that, in the model above, you don't need it. Also, like utility namespaces, helper namespaces give lazy developers an excuse not to think about where a piece of code logically sits. MyApp.Helpers.* and MyApp.Utility.* just mean that if I have some code, say an exception that logically belongs within MyApp.Data.Repositories.Customers (maybe it's a "customer ref is not unique" exception), a lazy developer can just place it in MyApp.Utility.CustomerRefNotUniqueException without really having to think.
If you have common framework-type code that you need to wrap up, add a MyApp.Framework project and relevant namespaces. If you're adding a new model binder, put it in MyApp.Framework.Mvc; if it's common logging functionality, put it in MyApp.Framework.Logging, and so on. In most cases there shouldn't be any need to introduce a utility or helpers namespace.
Wrap up
So that scratches the surface - hope it's of some help. This is how I'm developing software today, and I've intentionally tried to be brief - if I can elaborate on any specifics, let me know. The final thing to say about this opinionated piece is that the above is intended for reasonably large-scale development - if you're writing Notepad version 2 or a corporate phone book, it is probably total overkill!!!
Cheers
Tony
There is a nice diagram and a description of the application layout on this page, although it looks like further down the article the application isn't split into physical layers (separate projects) - Entity Framework POCO Repository
