I'm still very new at TDD, and I am also trying to work with a legacy application that was not built with testing in mind.
One feature (bug) I am trying to test is identifying whether an order is in a state allowing a user to schedule delivery.
The story is: given an order with a ReadyFrom date more than 10 days and less than 28 days from today, where [a web service that checks the order is in a valid state for delivery] returns true, the system should list 7 available delivery dates starting from the ReadyFrom date.
So I identified some orders suitable for testing these conditions. I think I should also make a stub for the web service, so it returns true or false depending on the test.
I wrote a failing test and used it to fix the bug against a copy of the live database. The problem is that next week the orders I've been using will no longer satisfy some of the conditions, which are based on the system date.
Am I right in thinking I should put the test orders into a fixture and, during setup, dynamically alter the relevant date values on these orders before using them in the test, and also dynamically change my expectations of the set of delivery dates the system sends back? (The delivery dates are also returned by a web service, which would have to be mocked too.)
Or would this invite problems as the application develops?
Thanks
I use the test data builder pattern for creating test data and set it up in the test method itself. IMHO it makes for very readable test code. The builder itself goes something like this (C# + Rhino Mocks):
public class OrderBuilder
{
    MockRepository _mockRepository;
    IOrder _order;

    public OrderBuilder()
    {
        _mockRepository = new MockRepository();
        _order = _mockRepository.Stub<IOrder>();
    }

    public OrderBuilder WithDate(DateTime date)
    {
        _order.Date = date;
        return this;
    }

    public IOrder Build()
    {
        _mockRepository.ReplayAll();
        return _order;
    }
}
In the test method the order is created with this syntax:
DateTime someValidDate = new DateTime(2012, 2, 1);
IOrder order = new OrderBuilder()
.WithDate(someValidDate)
.Build();
Isn't that pretty? :o)
Put the test orders into a fixture
Yes.
dynamically alter the relevant date values on these orders during the setup
Sort of.
Don't make it sound too complex. The fixture should have a bunch of test orders with fixed, known dates. You shouldn't have to alter much.
If -- for some reason -- the dates can't be fixed, known dates, then setUp does three things.
Builds orders "from scratch" with appropriate dates.
Configures the mock with appropriate dates.
Saves some "expected results" hint for the actual tests to use.
Again: don't make it sound complex. You're not "dynamically changing your expectations"; you're just setting the expectations in setUp.
dynamically change my expectations of the set of delivery dates the system sends back
(the delivery dates are also returned by a web service, which would have to be mocked too)
Mocking the web service means the dates could be fixed. The mock and the fixture should be able to return one, fixed set of dates.
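To make this concrete, here is a minimal sketch of such a setUp in C# with NUnit and Rhino Mocks. It illustrates the "dates can't be fixed" branch above; Order, IDeliveryService and DeliveryScheduler are placeholder names invented for illustration, not types from the question:
using System;
using NUnit.Framework;
using Rhino.Mocks;

[TestFixture]
public class DeliverySchedulingTests
{
    Order _order;
    IDeliveryService _deliveryService;

    [SetUp]
    public void SetUp()
    {
        // Build the order "from scratch" with a ReadyFrom date that is
        // always 14 days from today (inside the 10-28 day window), so
        // the test never goes stale as the system date moves on.
        _order = new Order { ReadyFrom = DateTime.Today.AddDays(14) };

        // Stub the web service so it reports a valid delivery state;
        // a second fixture would stub it to return false instead.
        _deliveryService = MockRepository.GenerateStub<IDeliveryService>();
        _deliveryService.Stub(s => s.IsValidForDelivery(_order)).Return(true);
    }

    [Test]
    public void ListsSevenDeliveryDatesStartingFromReadyFrom()
    {
        var dates = new DeliveryScheduler(_deliveryService).AvailableDates(_order);

        Assert.AreEqual(7, dates.Count);
        Assert.AreEqual(_order.ReadyFrom, dates[0]);
    }
}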
I understand that within this framework users are unable to call scenarios within scenarios.
I am trying to create end-to-end test cases where there are validation points at multiple stages of the test.
Using 'bad' Gherkin syntax, the process for the scenario would be something like:
Given Item A exists
When User processes Item
Then Warning is displayed
When User accesses Item
Then Warning is not displayed
When User finalises Item
Then Item status = "CURRENT"
And Record Status = "COMPLETED"
The first thing I considered was breaking the scenario into 3 distinct GWT scenarios.
That is fine.
However, suppose I now want to create a new end-to-end scenario that can re-use one of the 3 scenarios created (as you would re-use a function).
How do I do this without duplication of Gherkin code?
I cannot use Background as the sections that I require to re-use are in the middle of the execution.
Steps prior and after may be different.
IN SUMMARY: I am trying to re-use a GWT scenario that is common to many end-to-end scenarios, where the end-to-end scenarios are inherently different.
Any feedback or assistance you can provide would be greatly appreciated.
Cheers and thanks,
I think I know what you mean. I have done some "meta" scenarios. For example, there are very fine-grained ones that test login, maintaining details and creating an account. Then I have scenarios that test things further along the way, assuming that the happy path has been taken up to that point. So imagine having a scenario step called:
The user is logged in and has successfully created an account and is at the add product screen
For that, in the step definition, I would just combine function calls with hardcoded values:
#When("^The user is logged in and has successfully created an account and is at the add product screen$")
def addProducts(String val) {
navLibrary.loginAsUser("user", "password")
navLibrary.createAccount("some", "params", "you", "need")
navLibrary.navToProducts("some param")
}
And then start with the finer details in new, separate steps. You have to think in terms of system design and cascading reuse. For me it meant a lot of initial rework as the tests started growing, but now it's a breeze. Pick the right level of reusability. It is testing code, so it doesn't have to subscribe perfectly to all the programming precepts. It must work, and it must be low-maintenance in the long run.
I didn't use Cypress though. My tests are in Groovy with Selenium and Cucumber.
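To illustrate the reuse, here are two hypothetical feature-file scenarios that both start from that one coarse-grained step (the step text matches the definition above; everything else is made up):
Scenario: Add a product to a fresh account
  When The user is logged in and has successfully created an account and is at the add product screen
  And The user adds a product to the basket
  Then The basket shows 1 item

Scenario: Remove a product from a fresh account
  When The user is logged in and has successfully created an account and is at the add product screen
  And The user adds a product to the basket
  And The user removes the product
  Then The basket is empty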
My situation is this: I have multiple components in my view that ultimately depend on the same data, but in some cases the view state is derived from the data. How do I make sure my whole view stays in sync when the underlying data changes? I'll illustrate with an example using everyone's favorite Star Wars API.
First, I show a list of all the films, with a query like this:
# ALL_FILMS
query {
allFilms {
id
title
releaseDate
}
}
Next, I want a separate component in the UI to highlight the most recent film. There's no query for that, so I'll implement it with a client-side resolver. The query would be:
# MOST_RECENT_FILM
query {
mostRecentFilm @client {
id
title
}
}
And the resolver:
function mostRecentFilmResolver(parent, variables, context) {
return context.client.query({ query: ALL_FILMS }).then(result => {
// Omitting the implementation here since it's not relevant
return deriveMostRecentFilm(result.data);
})
}
Now, where it gets interesting is when SWAPI gets around to adding The Last Jedi and The Rise of Skywalker to its film list. We can suppose I'm polling on the list so that it gets periodically refetched. That's great, now my list UI is up to date. But my "most recent film" UI isn't aware that anything has changed — it's still stuck in 2015 showing The Force Awakens, even though the user can clearly see there are newer films.
Maybe I'm spoiled; I come from the world of MobX where stuff like this Just Works™. But this doesn't feel like an uncommon problem. Is there a best practice in the realm of Apollo/GraphQL for keeping things in sync? Am I approaching this problem in entirely the wrong way?
A few ideas I've had:
My "most recent film" query could also poll periodically. But you don't want to poll too often; after all, Star Wars films only come out every other year or so. (Thanks, Disney!) And depending on how the polling intervals overlap there will still be a big window where things are out of sync.
Instead of putting the deriveMostRecentFilm logic in a resolver, just put it in the component and share the ALL_FILMS query between components. That would work, but it's basically answering "How do I get this to work in Apollo?" with "Don't use Apollo."
Some complicated system of keeping track of the dependencies between queries and chaining refreshes based on that. (I'm not keen to invent this if I can avoid it!)
In Apollo, the observables (in components) watch queried values, i.e. cached data 'slots'. Your mostRecentFilm, however, is not observable in that way: it is not based on watched cache values but on the result of a one-time query (the result is cached, but only updated on demand).
You're only missing an 'updating connection', e.g. like this:
# ALL_FILMS
query {
allFilms {
id
title
releaseDate
isMostRecentFilm @client
}
}
Use the isMostRecentFilm local resolver to update the mostRecentFilm value in the cache.
Any query (useQuery) related to mostRecentFilm @client will then be updated automatically. All without additional queries, polling etc. - Just Works? (not tested, but it should work) ;)
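For what it's worth, a sketch of what that isMostRecentFilm resolver could look like with Apollo Client 2.x local resolvers. The cache-writing step is an assumption about how to push the result into the slot that MOST_RECENT_FILM watches, and deriveMostRecentFilm is the helper from the question:
const resolvers = {
  Film: {
    // Runs for every film each time ALL_FILMS (which now also selects
    // isMostRecentFilm @client) is fetched or refetched by polling.
    isMostRecentFilm: (film, _args, { cache }) => {
      const data = cache.readQuery({ query: ALL_FILMS });
      const mostRecent = deriveMostRecentFilm(data);
      const isMostRecent = film.id === mostRecent.id;
      if (isMostRecent) {
        // Write the winner into the cache entry that MOST_RECENT_FILM
        // watches, so any useQuery(MOST_RECENT_FILM) re-renders.
        cache.writeQuery({
          query: MOST_RECENT_FILM,
          data: { mostRecentFilm: mostRecent },
        });
      }
      return isMostRecent;
    },
  },
};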
I have an entity which I've created two setAttribute functions for:
public function setStartAttribute($value) { }
and
public function setEndAttribute($value) { }
These attributes, start and end, are both datetimes which I check against some criteria in each of my setter functions before allowing the update. Under certain conditions, I prevent or allow the start or end attribute to be updated.
I've hit a wall, however, in that if I prevent one of these from being updated, I need to prevent both. In other words, if the user tries to update the entity with a start date which is out of bounds, I need to prevent the start date from being updated, but I also need to prevent the end date from being updated.
As these are two separate functions, I'm not sure how to use one to prevent the other in a case like this.
EDIT:
Since the answer is extremely obvious (just do it both in one function) without adding this extra info, I'll add that the part that makes this less straightforward is that I'm using Backpack for Laravel. Within the Backpack admin panel is the CRUD that lets me create or update my entity. I'm using the date_range field type to allow setting the start and end time/dates on my entity. It's upon saving this that I need to be able to pass both the start and end values to a function and validate them, prior to setting them on my entity. I found that creating the two separate functions above setStartAttribute() and setEndAttribute() allowed me to validate those values and choose whether to assign them to the entity, however I need to be able to use one unified function rather than two separate ones. It is this integration with Backpack which makes this problem less straightforward for me.
If those start and end attributes are connected somehow (one can't be set if the other is invalid), you'd better make a single method to set both of them. Something like this, with isValidStart() and isValidEnd() standing in for your validation criteria:
public function setStartAndEnd($start, $end)
{
    // Only assign when both values pass validation, so the entity
    // never ends up with one updated and one stale date.
    if ($this->isValidStart($start) && $this->isValidEnd($end)) {
        $this->start = $start;
        $this->end = $end;
    }
}
Which you can use as follows:
$entity->setStartAndEnd($date, $another_date);
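If Backpack's date_range field forces you to keep the two separate mutators, one option is to buffer the first value and validate both in the second mutator. This is only a sketch: it assumes start is filled before end (worth verifying against how Backpack submits the field), and isValidStart()/isValidEnd() again stand in for your criteria:
private $pendingStart;

public function setStartAttribute($value)
{
    // Don't assign yet; just remember the candidate value.
    $this->pendingStart = $value;
}

public function setEndAttribute($value)
{
    // By the time end arrives, start has been buffered, so both
    // values can be validated together before either is committed.
    if ($this->isValidStart($this->pendingStart) && $this->isValidEnd($value)) {
        $this->attributes['start'] = $this->pendingStart;
        $this->attributes['end'] = $value;
    }
}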
Data validation should occur at the following places in a web-application:
Client-side: browser. To speed up user error reporting
Server-side: controller. To check if user input is syntactically valid (no sql injections, for example, valid format for all passed in fields, all required fields are filled in etc.)
Server-side: model (domain layer). To check if user input is domain-wise valid (no duplicating usernames, account balance is not negative etc.)
I am currently a DDD fan, so I have UI and Domain layers separated in my applications.
I am also trying to follow the rule that the domain model should never contain invalid data.
So, how do you design the validation mechanism in your application so that validation errors that occur in the domain propagate properly to the client? For example, when the domain model raises an exception about a duplicate username, how do you correctly bind that exception to the submitted form?
The article that inspired this question can be found here: http://verraes.net/2015/02/form-command-model-validation/
I've seen no such mechanism in the web frameworks known to me. What first springs to mind is to make the domain model include the name of the offending field in the exception data, and then have the UI layer provide a map between form fields and model fields so the error can be shown in its proper context for the user. Is this approach valid? It looks shaky... Are there examples of better designs?
Although not exactly the same question as this one, I think the answer is the same:
Encapsulate the validation logic into a reusable class. These classes are usually called specifications, validators or rules and are part of the domain.
Now you can use these specifications in both the model and the service layer.
If your UI uses the same technology as the model, you may also be able to use the specifications there (e.g. when using NodeJS on the server, you're able to write the specs in JS and use them in the browser, too).
Edit - additional information after the chat
Create fine-grained specifications, so that you are able to display appropriate error messages if a spec fails.
Don't make business rules or specifications aware of form fields.
Only create specs for business rules, not for basic input validation tasks (e.g. checking for null).
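For illustration, a minimal specification sketch in C#; UniqueUsernameSpecification and IUserRepository are placeholder names, not from the answer:
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
    string ErrorMessage { get; }
}

public class UniqueUsernameSpecification : ISpecification<string>
{
    private readonly IUserRepository _users;

    public UniqueUsernameSpecification(IUserRepository users)
    {
        _users = users;
    }

    public bool IsSatisfiedBy(string username)
    {
        // Domain rule: no two accounts may share a username.
        return _users.FindByUsername(username) == null;
    }

    public string ErrorMessage
    {
        get { return "This username is already taken."; }
    }
}
The domain model uses the spec to guard its invariant, while the service layer can run the same spec up front and map a failure to the form field it knows about; that keeps the business rule itself ignorant of form fields, as recommended above.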
I want to share the approach we used in one DDD project.
We created a BaseClass with two fields, ErrorId and ErrorMessage. Every domain model derives from this BaseClass and thus has those two extra fields available.
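A minimal sketch of that base class in C# (the exact shape is an assumption based on the description):
public abstract class BaseClass
{
    // Populated by the exception-handling code instead of throwing;
    // null or empty means the operation succeeded.
    public string ErrorId { get; set; }
    public string ErrorMessage { get; set; }
}

public class Order : BaseClass
{
    public string CustomerNo { get; set; }
}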
Whenever an exception occurs, we handle it (log it on the server, run the appropriate compensating logic, and fetch a user-friendly message from a localized resource file based on the client's locale) and then let the data flow back normally without raising or rethrowing the exception.
On the client side, if ErrorMessage is not null, we show the error.
It's a basic, simple approach that we followed from the start of the project.
For a new project this is an uncomplicated and efficient approach, but if you are making changes in a big old project it might not help, as the changes are extensive.
For validation at the individual field level, use the Validation Application Block from Enterprise Library.
It can be used as follows:
Decorate the domain model properties with the appropriate attributes, like:
public class AttributeCustomer
{
    [NotNullValidator(MessageTemplate = "Customer must have valid no")]
    [StringLengthValidator(5, RangeBoundaryType.Inclusive,
        5, RangeBoundaryType.Inclusive,
        MessageTemplate = "Customer no must have {3} characters.")]
    [RegexValidator("[A-Z]{2}[0-9]{3}",
        MessageTemplate = "Customer no must be 2 capital letters and 3 numbers.")]
    public string CustomerNo { get; set; }
}
Create a validator instance (the factory comes from the Enterprise Library container):
ValidatorFactory valFactory =
    EnterpriseLibraryContainer.Current.GetInstance<ValidatorFactory>();
Validator<AttributeCustomer> cusValidator =
    valFactory.CreateValidator<AttributeCustomer>();
Use the object and run the validation:
customer.CustomerNo = "AB123";
customer.FirstName = "Brown";
customer.LastName = "Green";
customer.BirthDate = "1980-01-01";
customer.CustomerType = "VIP";
ValidationResults valResults = cusValidator.Validate(customer);
Check the validation results:
if (valResults.IsValid)
{
MessageBox.Show("Customer information is valid");
}
else
{
foreach (ValidationResult item in valResults)
{
// Put your validation detection logic
}
}
The code example is taken from Microsoft Enterprise Library 5.0 - Introduction to Validation Block.
These links will help in understanding the Validation Application Block:
http://www.codeproject.com/Articles/256355/Microsoft-Enterprise-Library-Introduction-to-V
https://msdn.microsoft.com/en-in/library/ff650131.aspx
https://msdn.microsoft.com/library/cc467894.aspx
I'm using Grails with an Oracle database. Most of the data in my application is part of a hierarchy that goes something like this (each item containing the following one):
Direction
Group
Building site
Contract
Inspection
Non-conformity
Data visible to a user is filtered according to their access grants, which can be at the Direction, Group or Building Site level depending on the user's role.
We easily accomplished this by creating a listWithSecurity method for the BuildingSite domain class, which we use instead of list across most of the system. We created another listWithSecurity method for Contract; it basically does a Contract.findAllByBuildingSiteIn(BuildingSite.listWithSecurity()). And so on with the other classes. This has the advantage of keeping all the actual access logic in BuildingSite.listWithSecurity.
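In code, that delegation looks roughly like this (a sketch, assuming Contract has a buildingSite property):
class Contract {
    BuildingSite buildingSite

    static List<Contract> listWithSecurity() {
        // All of the real access logic stays in BuildingSite
        Contract.findAllByBuildingSiteIn(BuildingSite.listWithSecurity())
    }
}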
The problem came when we started getting real data in the system. We quickly hit the "ORA-01795: maximum number of expressions in a list is 1000" error. Fair enough, passing a list of over 1000 literals is not the most efficient thing to do, so I tried other ways, even though it meant I would have to move the security logic into each controller.
The obvious way seemed to be a criteria query such as this (I only put the Direction-level access here for simplicity):
def c = NonConformity.createCriteria()
def listToReturn = c.list(max:params.max, offset: params.offset?.toInteger() ?: 0)
{
inspection {
contract {
buildingSite {
group {
'in'("direction",listOfOneOrTwoDirections)
}
}
}
}
}
I was expecting Grails to generate a single query with joins that would avoid the ORA-01795 error, but it seems to run a separate query for each level and pass the results back to Oracle as literals in an 'in' clause to query the next level. In other words, it does exactly what I was doing, so I get the same error.
Actually, it might be optimising a bit. It seems to be solving the problem but only for one level. In the previous example, I wouldn't get an error for 1001 inspections but I would get it for 1001 contracts or building sites.
I also tried to do basically the same thing with findAll and a single HQL where statement to which I passed a single direction to get the nonConformities in one query. Same thing. It solves the first levels but I get the same error for other levels.
I did manage to patch it by splitting my 'in' criteria into many 'in' clauses inside an 'or' so that no single list of literals is more than 1000 long, but that's profoundly ugly code. A single findAllBy[…]In becomes over 10 lines of code. And in the long run, it will probably cause performance problems since we're stuck doing queries with a very large number of parameters.
Has anyone encountered and solved this problem in a more elegant and efficient way?
This won't win any efficiency awards, but I thought I'd post it as an option for when you just plainly need to query a list of more than 1000 items and none of the more efficient options is available or appropriate. (This Stack Overflow question is at the top of Google search results for "grails oracle 1000".)
In a Grails criteria query you can make use of Groovy's collate() method to break up your list...
Instead of this:
def result = MyDomain.createCriteria().list {
'in'('id', idList)
}
...which throws this exception:
could not execute query
org.hibernate.exception.SQLGrammarException: could not execute query
at grails.orm.HibernateCriteriaBuilder.invokeMethod(HibernateCriteriaBuilder.java:1616)
at TempIntegrationSpec.oracle 1000 expression max in a list(TempIntegrationSpec.groovy:21)
Caused by: java.sql.SQLSyntaxErrorException: ORA-01795: maximum number of expressions in a list is 1000
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440)
You'll end up with something like this:
def result = MyDomain.createCriteria().list {
or { idList.collate(1000).each { 'in'('id', it) } }
}
It's unfortunate that Hibernate or Grails doesn't do this for you behind the scenes when you try to do an inList of > 1000 items and you're using an Oracle dialect.
I agree with the many discussions on this topic about refactoring your design so as not to end up with lists of 1000+ items, but regardless, the above code will do the job.
Along the same lines as Juergen's comment, I've approached a similar problem by creating a DB view that flattens out user/role access rules at their most granular level (Building Site in your case?) At a minimum, this view might contain just two columns: a Building Site ID and a user/group name. So, in the case where a user has Direction-level access, he/she would have many rows in the security view - one row for each child Building Site of the Direction(s) that the user is permitted to access.
Then, it would be a matter of creating a read-only GORM class that maps to your security view, joining this to your other domain classes, and filtering using the view's user/role field. With any luck, you'll be able to do this entirely in GORM (a few tips here: http://grails.1312388.n4.nabble.com/Grails-Domain-Class-and-Database-View-td3681188.html)
You might, however, need to have some fun with Hibernate: http://grails.org/doc/latest/guide/hibernate.html
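A sketch of what that read-only GORM class might look like; the view and column names are assumptions based on the description above:
class BuildingSiteAccess implements Serializable {
    Long buildingSiteId
    String username

    static mapping = {
        table 'building_site_access'                  // the flattened security view
        version false                                 // a view has no version column
        id composite: ['buildingSiteId', 'username']  // a view has no surrogate key
    }
}
You could then filter with an HQL join such as BuildingSite.executeQuery('select bs from BuildingSite bs, BuildingSiteAccess a where a.buildingSiteId = bs.id and a.username = :user', [user: currentUsername]), which keeps the whole filter in the database instead of passing thousands of literals back to Oracle.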