For an application we use configuration files that define a large number of endpoint characteristics (relations, fillables, visibles, roles, etc.). We would like to loop through these files and run automatic tests with PHPUnit, simply to check that we receive a response, that validation errors are triggered, that the response matches the files, and so on.
We load the configuration and perform the tests for each endpoint configuration:
public function testConfigurationFiles()
{
    $config = resolve('App\Contracts\ConfigInterface');

    foreach ($config->resources as $resource => $configuration) {
        foreach ($configuration->endpoints() as $method => $rules) {
            $this->endpoint($method, $resource, $configuration);
        }
    }
}
After that we use a switch statement to test each type of method differently (index, show, create, update, delete). In total this comes to dozens of tests with hundreds of assertions.
However, if even one of these endpoints fails, the entire test fails without showing explicit information about what went wrong. Is there a way to automatically generate a "test{$resource}{$method}" method for each endpoint, so that each one is handled as an individual test?
Besides these tests we also run unit tests and e2e tests, so we are fully aware of the disadvantages of this way of testing.
After studying PHPUnit some more, I found my answer in dataProviders:
https://phpunit.de/manual/current/en/writing-tests-for-phpunit.html#writing-tests-for-phpunit.data-providers
This way you can specify a data provider for a test method; the provider returns an array with all the cases you want to iterate over.
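Applied to the configuration loop from the question, a minimal sketch could look like the following (the resolve() call and the endpoint() helper are taken from the question; note that PHPUnit invokes data providers before the test itself runs, so loading the configuration inside the provider may need extra bootstrapping in a Laravel test case):

/**
 * @dataProvider endpointProvider
 */
public function testEndpoint($method, $resource, $configuration)
{
    $this->endpoint($method, $resource, $configuration);
}

public function endpointProvider()
{
    $config = resolve('App\Contracts\ConfigInterface');

    $cases = [];
    foreach ($config->resources as $resource => $configuration) {
        foreach ($configuration->endpoints() as $method => $rules) {
            // The key shows up in the test output, e.g. testEndpoint with data set "users.index"
            $cases["{$resource}.{$method}"] = [$method, $resource, $configuration];
        }
    }

    return $cases;
}

Each named data set is then reported (and can fail) individually, which gives the per-endpoint feedback that was missing.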
I am trying to test the following bit of code:
DimonaClient is just a simple wrapper around a Laravel HttpClient; simplified function here:
The getDeclaration() response is an \Illuminate\Http\Client\Response.
What I am trying to do in my test is:
Mock the DimonaClient class so I don't make an actual API call
"Mock" (use Laravel's Http::response()) the response I want so that I can test that a 200 w/ certain statuses dispatches the appropriate event (also mocked, but not relevant here)
My test code looks like this:
My issue(s) seem to be:
the getDeclaration() mock has an expectation of Illuminate\Http\Client\Response, but I can't seem to create anything that will satisfy it (a new Response wants a MessageInterface, etc.)
I don't actually need getDeclaration() to return anything for my testing, so I wonder if I should be mocking this differently in any case (I base this assumption on Http::response handling the internal code I'm testing, such as $response->ok(), instead of a Mockery expectation)
I feel like I'm one small step away from making this work, but going round in circles trying to hook it up correctly.
TIA!
If you are using the Http facade, you don't need to mock DimonaClient. You are nearly there with your test; let me show you how it could look:
/** @test */
public function it_can_handle_an_approved_submission(): void
{
    Http::fake([
        '*' => Http::response([
            'declarationStatus' => [
                'result' => DimonaDeclarationStatus::ACCEPTED,
                'dimonaPeriodId' => $this->faker->numerify('############'),
            ],
        ]),
    ]);

    $dimonaDeclarationId = $this->faker->numerify('############');

    // Do your normal call, and then assertions
}
Doing this, you tell Http to fake any URL, because we are using *. I would recommend using $this->endpoint/$declarationId instead, so that if the URL does not match you will also know you did not hit the right endpoint.
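To also verify that you hit the endpoint you expect, you can follow up the fake with an assertion. A small sketch (str_contains requires PHP 8; use Str::contains on older versions):

// after running the code under test
Http::assertSent(function ($request) use ($dimonaDeclarationId) {
    return str_contains($request->url(), $dimonaDeclarationId);
});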
I am not sure which Laravel version you are using, but this has been available since Laravel 6+; check "Http fake URLs" in the documentation.
I've split my project from one huge test into a few smaller ones to speed up the tests and avoid some errors. Is there any way to run all of them in parallel with a single conf file? I must run login.js before every test case.
specs: ['login.js', 'test1.js'],
I suggest changing your login.js spec into a file which exports a login function (a sketch follows the conf snippet below). Then create a beforeAll in the onPrepare of your conf. This will be executed before every describe block, which in your case is every test.
onPrepare: function () {
    beforeAll(function () {
        loginToApp();
    });
},
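And login.js could then look roughly like this; the loginToApp name comes from the snippet above, while the selectors and credentials are assumptions you would replace with your own:

// login.js
module.exports.loginToApp = function () {
    browser.get('/login');
    element(by.id('username')).sendKeys('myUser');
    element(by.id('password')).sendKeys('myPassword');
    element(by.buttonText('Log in')).click();
};

In the conf you would then require it, e.g. const { loginToApp } = require('./login');, before using it in beforeAll.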
I know you have already split up the files, but I would seriously consider using the Page Object Model to structure your tests if you have the time.
Is it possible to use something other than reflection and the [Fact] attribute on a test method to expose tests to the xUnit test runner? For example, I'd like to do something like:
[FactSource] // just making this up
public IEnumerable<ITest> GetUnitTests()
{
    yield return new TestCase("test case 1", () => FooAssertion());
    yield return new TestCase("test case 2", () => BarAssertion());
}
I've wanted to do this many times to avoid the boilerplate of wrapping every single case in its own function. Usually it makes sense, but when I am testing 100 API endpoints it's the difference between a file with 100 lines and one with 400 lines of code. Also, I have cases where I want to load the tests from a JSON or XML file, so it would be great if there were another way to load the tests rather than just the [Fact] or [Theory] attributes.
NOTE: [Theory] works great for some tests like this, but it doesn't work for loading the cases from a file or for the case I demonstrate above where I am using lambda expressions.
Thank you!
Check out Exude. It does exactly what you want.
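For illustration, usage is roughly along these lines (written from memory of Exude's README, so the attribute and constructor names here are assumptions; check the project itself for the current API):

public class Scenario
{
    [FirstClassTests]
    public static IEnumerable<ITestCase> YieldFirstClassTests()
    {
        // Each yielded TestCase is reported as an individual test by the runner
        yield return new TestCase(_ => Assert.Equal(1, 1));
        yield return new TestCase(_ => Assert.Equal(2, 2));
    }
}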
I'm writing unit tests for a Spring HATEOAS backend using MockMvc and JsonPath.
To test the links contained in a response I'm doing something like:
@Test
public void testListEmpty() throws Exception {
    mockMvc.perform(get("/rest/customers"))
        .andExpect(status().isOk())
        .andExpect(content().contentType(MediaType.APPLICATION_JSON))
        .andExpect(jsonPath("$.links", hasSize(1))) // make sure links only contains the self link
        .andExpect(jsonPath("$.links[?(@.rel=='self')]", hasSize(1))) // make sure the self link exists exactly once
        .andExpect(jsonPath("$.links[?(@.rel=='self')].href", contains("http://localhost/rest/customers{?page,size,sort}"))) // test that the self link is correct
        .andExpect(jsonPath("$.links[?(@.rel=='self')][0].href", is("http://localhost/rest/customers{?page,size,sort}"))) // alternative way to test that the self link is correct
        .andExpect(jsonPath("$.content", hasSize(0))); // make sure no content elements exist
}
However, I wonder if there are some best practices I could use to make this easier for myself, for example:
Testing that the link contains http://localhost does not feel right. Can I use some Spring MockMvc helper to determine the host?
With JsonPath it's difficult to test whether an array contains an element where two attributes have a certain value.
For example, that the array should contain a self link with a certain href.
Is there a better way to test that than the above?
This will also come into play when testing validation errors for fields with error messages.
I've seen a technique like the one below in some blog posts:
.andExpect(jsonPath("$.fieldErrors[*].path", containsInAnyOrder("title", "description")))
.andExpect(jsonPath("$.fieldErrors[*].message", containsInAnyOrder(
"The maximum length of the description is 500 characters.",
"The maximum length of the title is 100 characters.")));
But this does not guarantee at all that the title has that specific error message.
It could also be that the title incorrectly carries the message "The maximum length of the description is 500 characters." and the test would still succeed.
You may use Traverson (included in Spring HATEOAS) to traverse the links in your tests.
If you are using Spring Boot, I'd consider using @WebIntegrationTest("server.port=0") rather than MockMvc, as in some cases I experienced behaviour slightly different from the actual application.
You may find some examples in a post of mine: Implementing HAL hypermedia REST API using Spring HATEOAS.
Also look at the tests in the sample project.
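A small sketch of what traversing a link with Traverson can look like; the base URI, relation names, and JsonPath here are assumptions for illustration:

// org.springframework.hateoas.client.Traverson, org.springframework.hateoas.MediaTypes
Traverson traverson = new Traverson(URI.create("http://localhost:8080/rest"), MediaTypes.HAL_JSON);

String customerName = traverson
        .follow("customers")
        .toObject("$._embedded.customers[0].name");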
One approach that addresses the http://localhost concern, without sacrificing the ability to test two attribute constraints on an array element, is to use the org.hamcrest.CoreMatchers.hasItem(org.hamcrest.Matcher nestedMatcher) matcher. The test you showed above then becomes:
.andExpect(jsonPath("$.links[?(#.rel=='self')].href", hasItem(endsWith("/rest/customers{?page,size,sort}"))))
.andExpect(jsonPath("$.links[?(#.rel=='self')][0].href", hasItem(endsWith("/rest/customers{?page,size,sort}"))))
I have a scenario running test cases in Go where I know that a test file, e.g. first_test.go, will only pass after the second or third attempt,
assuming it is opening a connection to a database, calling a REST service, or some other typical scenario.
I went through the options available for the go test command, but there is no parameter for multiple tries.
Is there any way of implementing retries for a file, or calling a method from a helper file that retries a method 3-4 times, as in this typical scenario:
func TestTry(t *testing.T) {
    // Code to connect to a database
}
One idiom is to use build tags. Create a special test file only for integration tests and add:
// +build integration

package mypackage

import "testing"
Then, to run the integration tests, run:
go test -tags=integration
And then you can add the retry logic:
// +build integration

package mypackage

var maxAttempts = flag.Int(...)

func TestMeMaybe(t *testing.T) {
    for i := 0; i < *maxAttempts; i++ {
        innerTest()
    }
}
No, this would be very strange: What good is a test if it randomly succeeds sometimes?
Why don't you "try" yourself inside the test? The real test either passes or fails, and the test itself encodes your knowledge that "I need to 'try' calling this external resource n times to wake it up."
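A minimal sketch of what that could look like, assuming a hypothetical connectToDB helper standing in for the flaky resource (uses the standard testing and time packages):

func TestWithRetry(t *testing.T) {
    const maxAttempts = 3

    var err error
    for i := 0; i < maxAttempts; i++ {
        if err = connectToDB(); err == nil {
            break
        }
        time.Sleep(time.Second) // give the external resource time to wake up
    }
    if err != nil {
        t.Fatalf("resource not reachable after %d attempts: %v", maxAttempts, err)
    }

    // ... the real assertions go here, now that the resource is available
}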
That's not the way tests are meant to work: a test is there to tell you whether your code works as expected, not whether an external resource is available.
The simplest way to deal with an external resource (a web service or API, for example) is to mock out its functionality with fake calls that return a valid response, and run your code against that. Then you will be able to test your code.
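In Go, the standard net/http/httptest package is a common way to do exactly that; a small sketch (the JSON body and the idea of pointing the code under test at srv.URL are assumptions for illustration):

func TestAgainstFakeService(t *testing.T) {
    // uses the standard net/http and net/http/httptest packages
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        w.Write([]byte(`{"status":"ok"}`))
    }))
    defer srv.Close()

    // Point the code under test at srv.URL instead of the real endpoint,
    // then assert on its behaviour as usual.
}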