MSTest - data driven test from a method?

Does anyone know how? There is a way to put in a data source, but what is the syntax to have the data injected from a method?
I need to test all classes with specific attributes. The test basically validates certain attributes in certain assemblies (checking whether the database is in sync).
For that it would be nice to use one data-driven test that has a "driver" method that feeds in the names or types of the classes to test.

You can use the DataSourceAttribute http://msdn.microsoft.com/en-us/library/microsoft.visualstudio.testtools.unittesting.aspx
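As a rough sketch (not from the question; the file name, column name and CSV provider string are assumptions), a CSV-backed data source could feed the type names into a single test, with each row exposed through TestContext.DataRow:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class AttributeSyncTests
{
    // MSTest injects the current data row through this property.
    public TestContext TestContext { get; set; }

    // Hypothetical CSV file "ClassesToCheck.csv" with a "TypeName" column.
    [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                "|DataDirectory|\\ClassesToCheck.csv", "ClassesToCheck#csv",
                DataAccessMethod.Sequential)]
    [TestMethod]
    public void Attributes_are_in_sync_with_database()
    {
        string typeName = TestContext.DataRow["TypeName"].ToString();
        Type type = Type.GetType(typeName);
        Assert.IsNotNull(type, "Could not load type " + typeName);
        // ... validate the attributes on 'type' against the database here
    }
}

Note that DataSourceAttribute reads rows from an external source (CSV, XML, database table) rather than calling a method; if the list of types has to come from code, the data source itself would need to be generated before the test run.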

Related

Creational adapter

I have a lot of code like this:
additional_params = {
  date_issued: pending.present? ? pending.date_issued : Time.current,
  gift_status: status,
  date_played: status == "Opened" ? Chronic.parse("now") : (opened.present? ? opened.date_played : nil),
  email_template: service&.email_template,
  email_text: service&.email_text,
  email_subject: service&.email_subject,
  label: service&.label,
  vendor_confirmation_code: service&.vendor_confirmation_code
}
SomeService.new(reward, employee: employee, **additional_params).create
The same pattern applies to many models and services.
What is the name of this pattern?
How can I refactor the current solution?
Is there a gem that solves this kind of problem, like Draper or something else?
To me, that looks a bit like a god object for every type of entity. You expect your service to take care of everything related to your entity. The entity itself just acts as a data container and isn't responsible for its data. That's called an anemic model.
First of all, you need to understand that there can be several representations of the same entity. You can have several different classes that represent a user. On the "List users" page, the class contains just a subset of the information, maybe combined with information from the account system (last login, login attempts, etc.). On the user registration page, you have another class, since it isn't valid to supply all of the user's information there.
Those classes are called data transfer objects. Their purpose is to provide the information required for a specific use case and to decouple the internal entity from the external API (i.e. the web page).
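As a hedged illustration (class and field names are made up, not from the question), such a DTO in Ruby can be as small as a value object built for one page:

# Hypothetical DTO for a "list users" page: only the fields that view needs.
UserListItem = Struct.new(:id, :name, :last_login, keyword_init: true) do
  def self.from(user, account)
    new(id: user.id, name: user.name, last_login: account.last_login)
  end
end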
Once you have done that, your service classes will start to shrink and you need fewer custom parameters for every method call.
Now your service class has two responsibilities: To manage all entities and to be responsible for their business rules.
To solve that, you should start to only modify your entities through behaviors (methods) and never update the fields directly. When you do so, you will automatically move logic from your service class to your entity class.
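As a hypothetical sketch using the fields from the question, instead of the service computing gift_status and date_played itself, the entity could expose a behavior:

# Hypothetical behavior on the entity; the service then calls reward.open!
# instead of assembling gift_status and date_played by hand.
class Reward
  def open!(at: Time.current)
    self.gift_status = "Opened"
    self.date_played = at
  end
end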
Once that is done, your service classes will be even cleaner.
You can read about Domain Driven Design to get inspired (no need to use DDD, but get inspired by how the application layer is structured in it).
You can try the builder pattern. I am not familiar with a Ruby gem for it, but you can find information here: https://en.wikipedia.org/wiki/Builder_pattern and https://en.wikipedia.org/wiki/Fluent_interface
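A hand-rolled sketch of that builder/fluent style in Ruby (names are hypothetical, and this is only one way to cut it) could collect the optional parameters step by step:

# Hypothetical builder that gathers the optional service parameters in one place.
class RewardParamsBuilder
  def initialize
    @params = {}
  end

  def with_service(service)
    @params.merge!(
      email_template: service&.email_template,
      email_text: service&.email_text,
      label: service&.label
    )
    self
  end

  def with_status(status, opened: nil)
    @params[:gift_status] = status
    @params[:date_played] = status == "Opened" ? Time.current : opened&.date_played
    self
  end

  def build
    @params
  end
end

# Usage:
# params = RewardParamsBuilder.new.with_service(service).with_status(status, opened: opened).build
# SomeService.new(reward, employee: employee, **params).create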

test if collection contains 2 objects in any order with equals

What is the best way to test in JUnit that a collection contains two complex objects?
I know that there is containsInAnyOrder(), but I have no control over the objects, as they are created via a REST API and stored in a database. I need them to be compared by equals, not by reference.
Alternatively, it would be sufficient if I could test whether some of their attributes are equal, but since the method the test covers involves AsyncCircuitBreakers, I'm not sure of the order.
How can I make sure the two objects are created in the database with the data I have in mind?
assertThat(Arrays.asList(array), hasItems(yourItem1, yourItem2));
Don't forget to implement equals and hashCode in your item class. hasItems is a Hamcrest matcher.
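A minimal sketch, assuming a hypothetical Item class with two fields, of the equals/hashCode implementation together with an order-independent assertion:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.containsInAnyOrder;

import java.util.List;
import java.util.Objects;

// Hypothetical domain class; only the fields compared in equals() matter here.
class Item {
    private final String name;
    private final int quantity;

    Item(String name, int quantity) {
        this.name = name;
        this.quantity = quantity;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Item)) return false;
        Item other = (Item) o;
        return quantity == other.quantity && Objects.equals(name, other.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, quantity);
    }
}

// In the test the order does not matter and comparison goes through equals():
// List<Item> fromDb = repository.findAll();   // hypothetical lookup
// assertThat(fromDb, containsInAnyOrder(new Item("a", 1), new Item("b", 2)));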

Groovy pass request params between classes

If I want to handle many parameters from for example a web request and pass it between classes (layers) - what is the preferred way?
I know it is easy to pass optional numbers of parameters through the constructor as a map.
I can also pass a map directly, and if the keys match the receiving object's property names it should work in a similar way.
Or I could just pass the map along and then instantiate, for example, domain classes from it.
I could use a special class as a data carrier with a given number of properties.
I have a domain class (not database domain but business domain) that needs data from the user interface.
What is the best way to pass data through the layers, and how do I know that all required data is being passed if I use a data structure - like a map - with key/value pairs? If I had a more static constructor with a fixed number of parameters, I would know that the parameters are being passed. But how do I ensure this when using a more dynamic approach? With unit tests?
Well, in Grails, command objects are an excellent choice. You can pass them up through various layers without issues. They are pretty analogous to domain classes, only without the whole persistence functionality.
Otherwise I would recommend using plain old Groovy classes (POGOs). Groovy allows you to keep your code very short (compared to Java and many other languages as well) and offers very handy transforms for common design patterns you might need (e.g. Canonical, Immutable, IndexedProperty, DelegatesTo...).
Compared to command objects, POGOs do require you to write e.g. validation code yourself, but this can be as simple as:
boolean isValid() {
    name && lastName && countryCode in ['US', 'CA']
}
You can keep static factories in a POGO to help you construct them in the various circumstances. Plus you can define more than one class in a file so you can keep the POGO code wherever it makes most sense. I would definitely prefer this approach to simple maps because the code is better encapsulated, POGOs can be unit tested & documented.
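A hedged sketch of such a POGO, using the @Canonical transform plus a static factory, with hypothetical field names taken from a web form:

import groovy.transform.Canonical

// Hypothetical data carrier passed from the web layer down to the business layer.
@Canonical
class RegistrationData {
    String name
    String lastName
    String countryCode

    boolean isValid() {
        name && lastName && countryCode in ['US', 'CA']
    }

    // Static factory: build the carrier straight from the request params map.
    static RegistrationData fromParams(Map params) {
        new RegistrationData(params.name, params.lastName, params.countryCode)
    }
}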

grails - I need to define my validation at runtime

I have an idea to read an XML document from the database and generate simple CRUD screens (via Grails) based on the data defined. My application will call RESTful services to persist the data, so I don't need Hibernate on the client side. I have ideas about how to generate the UI, but where I'm stumped is how to perform the validation.
I'll have a single, generic domain/command object that contains only the fields that are common for all instances of this "runtime" data type. All other fields are defined via the XML found in the database. I need something like this:
String xml // defines the fields, constraints, UI information for this data type
def constraints = {
    callMyCustomValidator(obj)
}
and in my callMyCustomValidator method, I'll extract the xml for obj and perform my validation as needed.
Note: We have a working example of this in a different app (written in Java/servlets/JSP), and without any formal "framework" this isn't difficult to do. Why do I need this? We need to add simple data types on the fly (via script) without a release.
You can use the validator constraint to add custom validation to your domain class. Just add it to some of your common fields.
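A hedged example of what that could look like on the generic command object; the helper that interprets the XML rules (XmlRuleValidator) is made up and stands in for the logic the question describes:

// Hypothetical command object: 'xml' holds the runtime field/constraint definition.
class DynamicRecordCommand {
    String xml
    Map dynamicFields = [:]

    static constraints = {
        dynamicFields validator: { val, obj, errors ->
            // Check each runtime-defined field against the rules parsed from obj.xml.
            XmlRuleValidator.check(obj.xml, val).each { fieldName, message ->
                errors.rejectValue('dynamicFields', 'invalid', message)
            }
        }
    }
}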

Creating mock data for unit testing

I consider myself still pretty new to the TDD scene. But I find that no matter which method I use (mocking framework or stubbing my own objects), I have to write a lot of code to create mock data. I like the idea of loading up objects to create an in-memory database. But what I don't like is cluttering up my tests with a ton of code whose sole purpose is creating mock data. This is especially the case when the data needs to account for all the different cases.
I'd love some suggestions for a better way of doing this.
It would seem to me that I should be able to load the data into a known state once from some data store, and then use a snapshot of that state, loaded in the test setup/initialize before each test method executes. This would satisfy proper testing practice while providing convenience, letting me focus on writing tests instead of writing code to create test data "by hand".
Maybe you could try the NBuilder library. It provides a very fluent interface and is easy to use. You can use it to generate single instances of a class with default values, or to generate lists with default or overridden values. You can have a look at this one.
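For instance, a minimal NBuilder sketch (the Customer class and its properties here are made up):

using FizzWare.NBuilder;

// Hypothetical entity; NBuilder fills its public properties with sequential defaults.
var single = Builder<Customer>.CreateNew()
    .With(c => c.Name = "Alice")          // override only what the test cares about
    .Build();

var many = Builder<Customer>.CreateListOfSize(10)
    .All().With(c => c.IsActive = true)
    .Build();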
If you are using .NET, try NDbUnit.
You populate your store and then it reverts your DB to a known state at test time, for each test. The Autumn of Agile screencast series shows this in pretty good detail.
Or you can do this manually... build a stored procedure or whatever to truncate your tables and copy in the data in your teardown method.
You can have Builder class(es) that help you build the instances you need - in this case, the ones you would use with the repository.
Have the builder use appropriate defaults, and in your tests you can override what you need. This helps you avoid having every single case of "data" mixed up across all the different tests (which introduces problems, because usually there are cases that aren't compatible between tests).
**Update 1:** Take a look at www.markhneedham.com/blog/2009/01/21/c-builder-pattern-still-useful-for-test-data
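A small hand-rolled version of that idea (C#, with a hypothetical Order type) keeps the defaults in one place and lets each test override only what it asserts on:

// Hypothetical test-data builder: sensible defaults, overridable per test.
public class OrderBuilder
{
    private string _customer = "default customer";
    private decimal _total = 10m;

    public OrderBuilder WithCustomer(string customer) { _customer = customer; return this; }
    public OrderBuilder WithTotal(decimal total) { _total = total; return this; }

    public Order Build()
    {
        return new Order { Customer = _customer, Total = _total };
    }
}

// In a test: var order = new OrderBuilder().WithTotal(0m).Build();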
I know exactly what you mean. I think a good approach to solving this problem is to actually have a separate MockFramework project that houses all your mock data, outside the test project. This way you can generate mock data separately, store it in memory if you want to, or not, and then reference the mock framework from the test project. If you use a third party framework to do this, all the better, but you can still wrap that third party framework in your own mock framework so you can get all that "glue" that creates the mock data the way you need it out of your tests so the tests can really be only what they need to be.
Thanks for all the suggestions. I think the solution requires a little bit of everything. I don't want these tests to end up being regression tests, but without some kind of existing data store everything still boils down to creating the data by "manually" building the objects.
What would really be nice would be a framework that allowed me to use my existing DAL to either script the data to code for me, or to get the data into memory and access it like an in-memory database.
Untils.org covers this way better than I ever could.
Their whole guide is actually very good.
But basically, if your units require "a lot of data", they may not be unit tests anymore. I'd recommend testing the smaller pieces individually.
