How to test external APIs? - laravel

I'm having a hard time figuring out how to start testing the integration with an external API (the Vimeo API) for things like deleting a video, uploading a video, and so on.
Should I hit the real Vimeo's server while testing?
Is it a bad idea to do this like:
use Tests\TestCase;
use Vimeo\Laravel\VimeoManager;

class VimeoApiTest extends TestCase
{
    protected $vimeo;

    protected function setUp(): void
    {
        parent::setUp();
        $this->vimeo = new VimeoManager();
    }

    /** @test */
    public function a_video_can_be_deleted()
    {
        $video = $this->vimeo->upload($fakeVideo);
        // make http request to delete the video
        $result = $this->vimeo->delete($video['id']);
        $this->assertEquals('success', $result['status']);
    }
}

In my eyes, testing a package is not your responsibility; testing your implementation of the package is. Your example tests the package's VimeoManager directly, which in my opinion is not what you should do.
What you should test is making a request to your own API route that uploads the video. During this test you do not want to upload to the real Vimeo API; you want to mock it.
Laravel ships with a package that can do this, called Mockery. You can mock methods of classes so that they return a value without the original logic being executed. In this case you would mock the delete method of the VimeoManager.
Laravel also provides so-called Facades, which can easily be mocked, and I can see this package exposes such a facade. In that case you can do the following to test the implementation of, let's say, your delete request:
use Vimeo\Laravel\Facades\Vimeo;

/** @test */
public function your_test()
{
    Vimeo::shouldReceive('delete')
        ->once()
        ->with(...)       // the parameters it should receive
        ->andReturn(...); // the value it should return

    // Make the delete request to your API.
}

Well, you can test however you see fit. I've found it useful to have two types of tests.
A test which interacts with "something" else (a service, a system, etc.) is known as an integration test. These are nice and give some peace of mind, but they depend on the system you are interacting with being online, which isn't always the case.
The second type of test can have a couple of different names, but that isn't really the point. The point of this second type of test is that you "mock" out external/internal dependencies, ensuring that the "thing" your code depends on is always available and behaves how you want. Mocking is when you manipulate a "thing" to respond in a certain way, usually via some framework or language feature. These tests put a far larger burden on your code, and in my opinion count for more.
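The mocking idea described above fits in a few lines of code. Here is a minimal, language-neutral sketch using Python's stdlib unittest.mock; VideoService and its client are invented names for illustration, not part of any real API.

```python
# A service with an injected dependency, and a test that mocks it out.
from unittest.mock import Mock

class VideoService:
    def __init__(self, client):
        self.client = client  # injected dependency: trivial to swap in a test

    def remove(self, video_id):
        result = self.client.delete(video_id)  # would hit the network for real
        return result["status"] == "success"

# The test never talks to the external system; the mock answers instead,
# so the test passes whether or not the real service is online.
client = Mock()
client.delete.return_value = {"status": "success"}

service = VideoService(client)
ok = service.remove(42)
client.delete.assert_called_once_with(42)  # verify the interaction happened
```

Because the dependency is handed in from outside, the same service runs against the real client in production and against the mock in tests.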

I'm generally against mocking the server my tests should be connected to.
The main disadvantage of mocking servers and using simulators is that the implementation of the API may change (it may, for example, throw an exception or a new status code, or timeouts may become shorter), or two versions of the server may expose incompatible APIs.
Should I hit the real Vimeo's server while testing?
If you have the possibility, create a local Vimeo server using Docker :)
Your tests will then find out if something changes in the API in the future.

Related

Unit, Integration or Feature Test?

A simple question: How do you differentiate between a feature, unit and integration test?
There are a lot of differing opinions, but I'm specifically trying to determine how to organise a Laravel test which touches a model's relationship. Here is an example of some PHP code which would require testing:
public function prices()
{
    return $this->hasMany(Prices::class);
}

public function getPriceAttribute($value = null)
{
    return $this->prices()->first() * 2;
}
The test descriptions as I understand them (feel free to correct me):
Unit test
Tests the smallest part of your code
Does not touch the database
Does not interact with any other part of the system
Integration test
Tests part of the system working together
e.g controllers which call helper functions which need to be tested together
Feature test
Blackbox test
e.g. Call an api end point, see that it has returned the correct JSON response
Here is my issue given those descriptions:
My Laravel model test needs to test the smallest unit of code - the calculated accessor of a model, which makes it feel like a Unit test
But, it touches the database when it loads the model's relationship
It doesn't feel like an Integration test, because it is only touching other related models, not internal or external services
Other property accessor tests in Laravel would fall under Unit tests when they do not touch the database or the model's relationships
Separating these types of tests into integration tests would mean that a single model's tests against its properties are fragmented between integration and unit tests
So, without mocking relationships between models, where would my test belong?
If I’m interpreting your original question correctly, I think the killer constraint here is:
So, without mocking relationships between models, where would my test belong?
If mocking isn't allowed and you're required to touch a DB then, by your/and google's definition, it has to belong as an integration/medium size test :)
The way I think of this is that the getPriceAttribute functionality is separate from the DB. Even though it's in the model, the prices could come from anywhere. Right now it's an RDBMS, but what if your org gets really big and it's split into another service? Basically, I believe the capability of getPriceAttribute is distinct from the storage of attributes:
public function getPriceAttribute($value = null)
{
    return $this->prices()->first() * 2;
}
If you buy into this reasoning, it creates a logical separation that supports unit tests. prices() can be mocked to return collections of 0, 1, and many (2) results. The test can then be executed as a unit test, for orders-of-magnitude faster execution (on the order of 1 ms, versus potentially 10s or 100s of ms talking to a local DB).
I am not familiar with the PHP test ecosystem, but one way to do this could be with a test-specific subclass (not sure if the following is valid PHP :p):
class PricedModel extends YourModel
{
    private $stub_prices;

    public function __construct($stub_prices_supporting_first)
    {
        $this->stub_prices = $stub_prices_supporting_first;
    }

    public function prices()
    {
        return $this->stub_prices;
    }
}
Tests:
function test_priced_model_0_prices()
{
    $p = new PricedModel(collect([]));
    assert($p->getPriceAttribute() == null);
}

function test_priced_model_1_price()
{
    $p = new PricedModel(collect([1]));
    assert($p->getPriceAttribute() == 2);
}

function test_priced_model_2_prices()
{
    $p = new PricedModel(collect([5, 1]));
    assert($p->getPriceAttribute() == 10);
}
The above should hopefully allow you to fully control the input to the getPriceAttribute method, supporting direct, IO-free unit testing.
——
Also, all the unit tests above can tell you is that you're able to process prices correctly; they don't provide any feedback on whether you're able to query prices!
What distinguishes the tests is their respective goal:
Unit-testing aims at finding those bugs that can be found in isolated small parts of the software. (Note that this does not say you must isolate; it only means your focus is on the isolated code. Isolation and mocking are often not needed to reach this goal: think of a call to a sin function. You almost never need to mock this; you just let your system under test call the original one.)
Integration testing aims at finding bugs in the interaction of two or more components, for example mutual misconceptions about an interface. These bugs cannot be found in the isolated software: if you test code in isolation, you also write your tests on your (possibly wrong) understanding of the other components.
Feature tests as you describe them will then have the goal to find further bugs, which the other tests so far could not detect. One example for such a bug could be, that an old version of the feature was integrated (which was correct at that time, but lacked some functionality).
The conclusion, although it may be surprising, is that it is not in the strict sense forbidden to make database accesses in unit testing. Consider the following scenario: you start writing unit tests and mock the database accesses. Later, you realize you can be lazier and just use the database without mocking, but otherwise leave all the tests as they are. Your tests have not changed, and they will continue finding the bugs in the isolated code as before. They may run a bit slower now, and the setup may be more complex than with the mocked database, but the goal of the test suite is the same with and without mocking the database.
This scenario simplifies things a bit, because there may be test cases that can only be done with a mock: for example, testing that your code properly handles a database that gets corrupted in a specific way. With the real database such test cases may be practically impossible to set up.
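A failure case like the one just described is trivial to stage with a mock. Here is a minimal sketch using Python's stdlib unittest.mock; ReportService and its fetch_amounts call are invented for illustration.

```python
# Simulating a broken database with a mock that raises on access.
from unittest.mock import Mock

class ReportService:
    def __init__(self, db):
        self.db = db

    def total(self):
        try:
            return sum(self.db.fetch_amounts())
        except ConnectionError:
            return None  # degrade gracefully instead of crashing

# With a real database this failure is hard to reproduce on demand;
# with a mock it is one line.
db = Mock()
db.fetch_amounts.side_effect = ConnectionError("database unreachable")

result = ReportService(db).total()  # exercises the error-handling path
```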

TDD dilemma: Testing behavior instead of testing state VS Tests should be unaware of implementation

I am trying to implement my Spring website using TDD technique.
There are some TDD rules:
Test behaviour instead of state.
Tests shouldn't depend on implementation.
I created an empty UsersService class which depends on a CRUD UsersRepository.
Now I am trying to write a test for signing up new users, but I don't know how to do this properly.
@Test
public void signUp_shouldCheckIfUserExistsBeforeSign() throws ServiceException {
    // given
    User user = new User();
    user.setEmail(EMAIL);
    when(usersRepository.save(user)).thenReturn(user);
    when(usersRepository.exists(anyString())).thenReturn(Boolean.FALSE);
    // when
    usersService.signUp(user);
    // then
    thrown.expect(UserAlreadyExistsServiceException.class);
    usersService.signUp(user);
}
This code tests behaviour, but it also forces me to implement my service using the exists() method instead of, for example, findByEmail().
What should this test look like?
Your test seems to reflect some confusion about behavior and implementation. It seems like you expect a state change when you call signUp() the first time, but because you're using mocks that won't happen, so don't call signUp() twice if you're using mocks (and the expect() should come before signUp(), I believe). If you weren't using mocks, calling signUp() twice would be a valid test with no implementation dependency. But you're (wisely, IMHO) using mocks to avoid slow, database-dependent tests for easily mockable dependencies, so call signUp() just once and let the mocks simulate the state. It makes sense to mock your storage interface when testing your service behavior.
As for your two testing rules, you can't use mocks without some notion of implementation (I prefer to think of it as "interactions", especially if you mock interfaces rather than concrete classes). You seem to have a modular design, so I wouldn't worry about simulating an obvious interaction. If you change your mind later about the interaction (whether you should retrieve a user object instead of the boolean existence check), you change your test; no big deal, IMHO. Having unit tests should make you LESS afraid to change your code. It's true that mocks can make tests more brittle if the interactions need to change. The flip side is that you think more about those interactions before coding them, which is good, but don't get stuck.
Your dilemma about whether to retrieve the user by email or to check its existence with a boolean exists() call sounds like a case of YAGNI to me. If you don't know what you're going to do with the User object you retrieve besides checking whether it's null, go with the boolean. If you change your mind later, you may have a few broken tests to (easily) fix, but you'll have a clearer idea of how things should work.
So your test might look like this if you decide to stick with exists():
@Test
public void signUp_shouldCheckIfUserExistsBeforeSign() throws ServiceException {
    // given
    User user = new User();
    user.setEmail(EMAIL);
    when(usersRepository.exists(anyString())).thenReturn(Boolean.TRUE);
    thrown.expect(UserAlreadyExistsServiceException.class);
    // when
    usersService.signUp(user);
    // then - no validation because of expected exception
}
BTW (this is a side issue, and there are lots of different approaches to expecting exceptions in tests, covered elsewhere on Stack Overflow), it would be nice to be able to put the expect() call in the "then" section, but it has to come before signUp(). If you don't want to call expect() in the "given" section, you could instead use the expected parameter of @Test. Apparently JUnit 5 will allow wrapping the throwing call in an expectation call that returns the exception thrown, or fails if none is thrown.
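The "wrap the throwing call" style mentioned above exists in most test frameworks. As a language-neutral sketch, here it is with Python's stdlib unittest; sign_up and UserAlreadyExists are invented stand-ins for the service under test.

```python
# Expecting an exception by wrapping the call itself.
import unittest

class UserAlreadyExists(Exception):
    pass

users = {"taken@example.com"}

def sign_up(email):
    if email in users:
        raise UserAlreadyExists(email)
    users.add(email)

case = unittest.TestCase()
# The expectation wraps the call (as JUnit 5's assertThrows does),
# so it can sit in the "then" section rather than before the "when".
with case.assertRaises(UserAlreadyExists):
    sign_up("taken@example.com")
```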
Testing behavior is good, but the production code will need to exhibit that behavior, which will affect the implementation to some degree.
Focus the test on a single behavior:
@Test
public void signUpFailsIfUserEmailAlreadyExists() throws ServiceException {
    // given
    User user = new User();
    user.setEmail(EMAIL);
    when(usersRepository.emailExists(EMAIL)).thenReturn(Boolean.TRUE);
    thrown.expect(UserAlreadyExistsServiceException.class);
    // when
    usersService.signUp(user);
    // then - the expected exception is the assertion
}

Multiple JUnit test in one browser session

I've written a program in Selenium WebDriver, but for my next project I would like to make it more maintainable by using better programming techniques. The main part I want to focus on is launching the browser once (one session), running say 10 different tests, and then closing the browser, but I'm not sure how to do this. Using JUnit, this is how I currently have my project laid out:
package example1;

public class TestBase { // main class
    @Before
    public void setup() {
        // launch browser
    }

    @Test // all tests run here
    public void test1() {
        login();
        homepage();
    }

    @After
    public void teardown() {
        // close browser
    }
}

package example1;

public class login {
    // do some action
}

package example1;

public class homepage {
    // do some action
}

package example1;

public class storeMethods {
    // all methods are stored here and called by different classes
}
I'm not sure if the @Test annotation should even be in the main class or in its own class (login(), homepage()), because I read somewhere that tests should not depend on each other. I don't have much experience in Java, but I'm more than willing to learn. I just need some guidance on best practices and how to write good, maintainable tests, so if someone could help me out or point me in the right direction I'd really appreciate it.
While what Robbie Wareham said is correct (reusing the browser is not a good idea), you said that your overall goal is maintainability.
The technique I've found that most increases maintainability is the Page Object pattern, with separate functions to interact with each page.
The Page Object pattern separates the selectors from the rest of the code. That way, if an element on a page changes and your tests use that element 5 times, you only change your code in one spot. It is also standard to include isLoaded(), a function that identifies whether you are already on the page you need, so you don't reload the page.
I would also recommend that your tests not deal directly with the Page you created. If you had a toolbar that you had to use to go to page X, and then the toolbar changed so that the link you wanted moved into a sub-menu, every place your tests used that link you would have to change the method that clicks on it. Creating sets of Selenium commands that interact with the page will make your tests high-level and easy to read.
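The pattern described above can be sketched briefly. This is an illustrative Page Object in Python; LoginPage, its locators, and the driver calls mirror Selenium's API shape, but all the names here are invented, and a stdlib Mock stands in for a real WebDriver so the sketch is self-contained.

```python
# A bare-bones Page Object: selectors live in exactly one place.
class LoginPage:
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("id", "submit")

    def __init__(self, driver):
        self.driver = driver

    def is_loaded(self):
        # lets callers avoid re-navigating if already on the page
        return self.driver.current_url.endswith("/login")

    def log_in(self, user, password):
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# Tests talk only to the page's methods, never to raw selectors.
from unittest.mock import Mock

driver = Mock()
driver.current_url = "https://example.test/login"
page = LoginPage(driver)
loaded = page.is_loaded()
page.log_in("alice", "secret")
```

If the username field's locator changes, only the USERNAME constant changes; every test that logs in keeps working.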
I would suggest that reusing the browser does not follow good automation programming practice.
Reusing the browser will result in unstable and unreliable tests, with inter-test dependencies.
In my opinion, it is far better to have atomic self contained tests.
If test runtime is an issue, then look at parallelism and using Selenium Grid.

What are the possible problems with unit testing ASP.NET MVC code in the following way?

I've been looking at the way unit testing is done in the NuGetGallery. I observed that when controllers are tested, service classes are mocked. This makes sense to me because, while testing the controller logic, I didn't want to be worried about the architectural layers below. But after using this approach for a while, I noticed how often I was running around fixing my mocks all over my controller tests whenever my service classes changed. To solve this problem, without consulting people smarter than me, I started writing tests like this (don't worry, I haven't gotten that far):
public class PersonController : Controller
{
    private readonly LESRepository _repository;

    public PersonController(LESRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
            .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
public class PersonControllerTests
{
    public void can_get_person()
    {
        var person = _helper.CreatePerson(username: "John");
        var controller = new PersonController(_repository);
        controller.FakeOutContext();
        var result = (ViewResult)controller.Index(person.Id);
        var model = (VMPerson)result.Model;
        Assert.IsTrue(model.Person.Username == "John");
    }
}
I guess this would be integration testing because I am using a real database (I'd prefer an in-memory one). I begin my test by putting data in my database (each test runs in a transaction and is rolled back when the test completes). Then I call my controller, and I really don't care how it retrieves the data from the database (via a repository or service class), just that the model sent to the view must contain the record I put into the database; that is my assertion. The cool thing about this approach is that a lot of times I can continue to add more layers of complexity without having to change my controller tests:
public class PersonController : Controller
{
    private readonly LESRepository _repository;
    private readonly PersonService _personService;

    public PersonController(LESRepository repository)
    {
        _repository = repository;
        _personService = new PersonService(_repository);
    }

    public ActionResult Index(int id)
    {
        var model = _personService.GetActivePerson(id);
        if (model == null)
            return PersonNotFoundResult();
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Now, I realize I didn't create an interface for my PersonService and pass it into the constructor of my controller. The reasons are that 1) I don't plan to mock my PersonService, and 2) I didn't feel I needed to inject the dependency, since my PersonController for now only depends on one type of PersonService.
I'm new at unit testing and I'm always happy to be shown that I'm wrong. Please point out why the way I'm testing my controllers could be a really bad idea (besides the obvious increase in the time my tests take to run).
Hmm, a few things here, mate.
First, it looks like you're trying to test a controller method. Great :)
So this means that anything the controller needs should be mocked. This is because:
You don't want to worry about what happens inside that dependency.
You can verify that the dependency was called/executed.
OK, so let's look at what you did and I'll see if I can refactor it to make it a bit more testable.
-REMEMBER- I'm testing the CONTROLLER METHOD, not the stuff the controller method calls/depends upon.
So this means I don't care about the service instance or the repository instance (whichever architectural approach you decide to follow).
NOTE: I've kept things simple, so I've stripped lots of crap out, etc.
Interface
First, we need an interface for the repository. This can be implemented as an in-memory repo, an Entity Framework repo, etc. You'll see why soon.
public interface ILESRepository
{
    IQueryable<T> GetAll<T>() where T : class;
}
Controller
Here, we use the interface. This means it's really easy and awesome to use a mock IRepository or a real instance.
public class PersonController : Controller
{
    private readonly ILESRepository _repository;

    public PersonController(ILESRepository repository)
    {
        if (repository == null)
        {
            throw new ArgumentNullException("repository");
        }
        _repository = repository;
    }

    public ActionResult Index(int id)
    {
        var model = _repository.GetAll<Person>()
            .FirstOrDefault(x => x.Id == id);
        var viewModel = new VMPerson(model);
        return View(viewModel);
    }
}
Unit Test
Ok - here's the magic money shot stuff.
First, we create some fake people. Just work with me here; I'll show you where we use this in a tick. It's just a boring, simple list of your POCOs.
public static class FakePeople
{
    public static IList<Person> GetSomeFakePeople()
    {
        return new List<Person>
        {
            new Person { Id = 1, Username = "John" },
            new Person { Id = 2, Username = "Fred" },
            new Person { Id = 3, Username = "Sally" },
        };
    }
}
Now we have the test itself. I'm using xUnit for my testing framework and Moq for my mocking. Any framework is fine here.
public class PersonControllerTests
{
    [Fact]
    public void GivenAListOfPeople_Index_Returns1Person()
    {
        // Arrange.
        var mockRepository = new Mock<ILESRepository>();
        mockRepository.Setup(x => x.GetAll<Person>())
            .Returns(FakePeople.GetSomeFakePeople().AsQueryable());
        var controller = new PersonController(mockRepository.Object);
        controller.FakeOutContext();

        // Act.
        var result = controller.Index(1) as ViewResult;

        // Assert.
        Assert.NotNull(result);
        var model = result.Model as VMPerson;
        Assert.NotNull(model);
        Assert.Equal(1, model.Person.Id);
        Assert.Equal("John", model.Person.Username);
        // Make sure we actually called the GetAll<Person>() method on our mock.
        mockRepository.Verify(x => x.GetAll<Person>(), Times.Once());
    }
}
OK, let's look at what I did.
First, I arrange my crap. I create a mock of the ILESRepository.
Then I say: if anyone ever calls the GetAll<Person>() method, well... don't -really- hit a database or a file or whatever; just return the list of people created in FakePeople.GetSomeFakePeople().
So this is what would happen in the controller ...
var model = _repository.GetAll<Person>()
.FirstOrDefault(x => x.Id == id);
First, we ask our mock to handle the GetAll<Person>() call. I just 'set it up' to return a list of people, so we then have a list of 3 Person objects. Next, we call FirstOrDefault(...) on this list of 3 Person objects, which returns a single object or null, depending on the value of id.
Tada! That's the money shot :)
Now back to the rest of the unit test.
We Act and then we Assert. Nothing hard there.
For bonus points, I verify that we actually called the GetAll<Person>() method on the mock, inside the controller's Index method. This is a safety check to make sure the controller logic we're testing was done right.
Sometimes you might want to check for bad scenarios, like a person passing in bad data. This means you might never even reach the mocked methods (which is correct), so you verify that they were never called.
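"Verify it was never called" is a one-liner in most mocking frameworks. Here is an illustrative sketch with Python's stdlib unittest.mock; the guard clause and the index function are invented for the example.

```python
# Asserting that a dependency was never reached on the bad-input path.
from unittest.mock import Mock

def index(repository, person_id):
    if person_id <= 0:
        return "BadRequest"  # bad input short-circuits before any data access
    return repository.get_all()

repository = Mock()
outcome = index(repository, -1)

# The dependency was (correctly) never hit.
repository.get_all.assert_not_called()
```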
Ok - questions, class?
Even when you do not plan to mock an interface, I strongly suggest that you do not hide the real dependencies of an object by creating them inside the constructor; you are breaking the Single Responsibility Principle and writing untestable code.
The most important thing to consider when writing tests is: "There is no magic key to writing tests." There are a lot of tools out there to help you write tests, but the real effort should be put into writing testable code, rather than trying to hack existing code into a test, which usually ends up being an integration test instead of a unit test.
Creating a new object inside a constructor is one of the first big signals that your code is not testable.
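The difference is easy to see side by side. A minimal sketch in Python; every class name here is invented for illustration.

```python
# Hard-wired vs. injected dependencies.
class RealService:
    def get(self, user_id):
        raise RuntimeError("talks to a real database")

class HardWiredController:
    def __init__(self):
        # dependency created inside the constructor: cannot be swapped in a test
        self.service = RealService()

class InjectedController:
    def __init__(self, service):
        # dependency passed in: trivial to hand over a fake
        self.service = service

class FakeService:
    def get(self, user_id):
        return {"id": user_id}

controller = InjectedController(FakeService())
person = controller.service.get(7)
```

The hard-wired version can only ever be exercised against the real service; the injected version tests in isolation with no framework tricks at all.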
These links helped me a lot when I was making the transition to writing tests, and let me tell you that after you start, it becomes a natural part of your daily work and you will love the benefits. I cannot picture myself writing code without tests anymore.
Clean code guide (used in Google): http://misko.hevery.com/code-reviewers-guide/
To get more information read the following:
http://misko.hevery.com/2008/09/30/to-new-or-not-to-new/
and watch this video cast from Misko Hevery
http://www.youtube.com/watch?v=wEhu57pih5w&feature=player_embedded
Edited:
This article from Martin Fowler explain the difference between a Classical and a Mockist TDD approach
http://martinfowler.com/articles/mocksArentStubs.html
As a summary:
Classic TDD approach: this means testing everything you can without creating substitutes or doubles (mocks, stubs, dummies), with the exception of external services like web services or databases. Classical testers use doubles for the external services only.
Benefits: When you test, you are actually testing the wiring logic of your application and the logic itself (not in isolation).
Cons: If an error occurs, you will potentially see hundreds of tests failing, and it will be hard to find the code responsible.
Mockist TDD approach: People following the Mockist approach will test all the code in isolation, because they create doubles for every dependency.
Benefits: You are testing each part of your application in isolation. If an error occurs, you know exactly where it occurred, because just a few tests will fail, ideally only one.
Cons: Well, you have to double all your dependencies, which makes the tests a little harder to write, but you can use tools like AutoFixture to create doubles for the dependencies automatically.
This is another great article about writing testable code
http://www.loosecouplings.com/2011/01/how-to-write-testable-code-overview.html
There are some downsides.
First, when you have a test that depends on an external component (like a live database), that test is no longer really predictable. It can fail for any number of reasons - a network outage, a changed password on the database account, missing some DLLs, etc. So when your test suddenly fails, you cannot be immediately sure where the flaw is. Is it a database problem? Some tricky bug in your class?
When you can immediately answer that question just by knowing which test failed, you have the enviable quality of defect localization.
Secondly, if there is a database problem, all your tests that depend on it will fail at once. This might not be so severe, since you can probably realize what the cause is, but I guarantee it will slow you down to examine each one. Widespread failures can mask real problems, because you don't want to look at the exception on each of 50 tests.
And I know you want to hear about factors besides the execution time, but that really does matter. You want to run your tests as frequently as possible, and a longer runtime discourages that.
I have two projects: one with 600+ tests that run in 10 seconds, one with 40+ tests that runs in 50 seconds (this project does actually talk to a database, on purpose). I run the faster test suite much more frequently while developing. Guess which one I find easier to work with?
All of that said, there is value in testing external components. Just not when you're unit-testing. Integration tests are more brittle, and slower. That makes them more expensive.
Accessing the database in unit tests has the following consequences:
Performance. Populating the database and accessing it is slow. The more tests you have, the longer the wait. If you use mocking, your controller tests may run in a couple of milliseconds each, compared to seconds when hitting the database directly.
Complexity. For shared databases, you'll have to deal with concurrency issues where multiple agents run tests against the same database. The database needs to be provisioned, the structure created, data populated, etc. It becomes rather complex.
Coverage. You might find that some conditions are nearly impossible to test without mocking. Examples include verifying what to do when the database times out, or what to do if sending an email fails.
Maintenance. Changes to your database schema, especially frequent ones, will impact almost every test that uses the database. In the beginning, when you have 10 tests, that may not seem like much, but consider when you have 2000. You may also find changing business rules and adapting the tests to be more complex, as you'll have to modify the data populated in the database to verify each business rule.
You have to ask whether it is worth it for testing business rules. In most cases, the answer may be "no".
The approach I follow is:
Unit test classes (controllers, service layers, etc.) by mocking out dependencies and simulating conditions that may occur (like database errors). These tests verify business logic, and the aim is to cover as many decision paths as possible. Use a tool like PEX to highlight issues you never thought of. You'll be surprised how much more robust (and resilient) your application becomes after fixing some of the issues PEX highlights.
Write database tests to verify that the ORM I'm using works with the underlying database. You'll be surprised by the issues EF and other ORMs have with certain database engines (and versions). These tests are also useful for tuning performance and reducing the number of queries and the amount of data sent to and from the database.
Write coded UI tests that automate the browser and verify the system actually works. In this case I would populate the database beforehand with some data. These tests simply automate what I would have tested manually. The aim is to verify that critical pieces still work.

TDD Data Access layer

In TDD, I've been testing business logic by mocking data-access functionality.
But in reality I need the layers below the business layer to be implemented as well for the application to work.
Should I be implementing the data access layer using TDD?
Based on the discussions I've seen on the web, unit tests should not connect to any external resources such as databases, web services, etc. If they do, they become integration tests.
Could someone shed some light on this please?
Thank you very much.
You are right: contact with the outside world makes it an integration test, but that contact is important to test as well. Using TDD, it should become apparent that the contact surface should be kept as small as possible. That can be achieved by using wrappers for each record, or similar methods.
If you're using something like Hibernate, for example, and you have any kind of logic in your DAO, you can mock out the calls to e.g. Session and Query and unit test without hitting the database.
If you want to test the queries themselves, you could use an in-memory database and something like DbUnit. I would count these as integration tests and run them separately, as they tend to take longer.
Here's an example of a typical Java Spring/Hibernate DAO method with some logic you might want to test:
public List<Entity> searchEntities(final String searchText, final int maxResults) {
    String sql = "select * from entity where field like :searchText";
    Query query = sessionFactory.getCurrentSession().createSQLQuery(sql);
    if (maxResults != 0) {
        query.setMaxResults(maxResults);
    }
    query.setString("searchText", "%" + searchText + "%");
    return query.list();
}
Using a mocking framework, you could mock sessionFactory, session and query, and create a unit test that expects query.setMaxResults to be called only if maxResults is not 0, and query.setString to be called with the correct string value. You could also assert that whatever query.list() returns is the thing returned by the method.
However, this couples your test code to your implementation of this method. Also, if your DAO method has a lot of logic in it, you should consider refactoring and moving that logic to a service layer, where you could unit test it separately from any database interaction.
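The test described above can be sketched concretely. This is an illustrative port of the DAO logic to Python with stdlib unittest.mock, so the branch can be exercised without a database; search_entities and the mocked method names are invented to mirror the Java shape, not a real API.

```python
# Unit testing conditional DAO logic against a mocked session/query.
from unittest.mock import Mock

def search_entities(session, search_text, max_results):
    query = session.create_query("select * from entity where field like :searchText")
    if max_results != 0:
        query.set_max_results(max_results)
    query.set_string("searchText", "%" + search_text + "%")
    return query.list()

session = Mock()
query = session.create_query.return_value  # every create_query call yields this mock

search_entities(session, "abc", 0)
query.set_max_results.assert_not_called()            # 0 means "no limit"
query.set_string.assert_called_with("searchText", "%abc%")

search_entities(session, "abc", 5)
query.set_max_results.assert_called_once_with(5)     # limit applied exactly once
```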
You can use Dev Magic Fake to fake the DAL, so you can work with TDD as you need without writing any code for the fake DAL. For example:
Just add a reference to DevMagicFake.dll and you can write the following:
[HttpPost]
public ActionResult Create(VendorForm vendorForm)
{
    var repository = new FakeRepository<VendorForm>();
    repository.Save(vendorForm);
    return View("Page", repository.GetAll());
}
This will save the VendorForm permanently in memory, and you can retrieve it any time you need. You can also generate data for this object, or any other object in your model, without writing any code for that operation, so you can do TDD as if you had already finished your DAL. For more information about Dev Magic Fake, see the following link on CodePlex:
http://devmagicfake.codeplex.com
Thanks
M.Radwan