Can I use karate for testing microservices? [duplicate] - spring-boot

Let me give some background. We're trying to do e2e testing between a bunch of Spring Boot services that write to Kafka, move files, and talk to other services to do the file moving. I think we're pretty good on integration testing with mocks and whatnot, but is there a way to leverage Karate or any other testing framework to achieve fully e2e tests for the few major scenarios? For example, can I have a test on one service that takes actual data and sends it to the next service without mocking?
I hope that made sense. Thank you.

It really does not matter whether the "system under test" is a single service or a collection of services - we just need to think in terms of inputs to the system and outputs from the system. From the description, it looks like the "system under test" does the following:
(a) interacts with some downstream services: you should be able to mock them using Karate
(b) produces to Kafka: see if https://github.com/Sdaas/karate-kafka can help here
(c) does some file I/O
For the last one, you may need to write some custom Java code that can be called from within Karate. See https://github.com/Sdaas/karate-kafka/tree/master/src/test/java/karate/java for some examples.
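For illustration, a sketch of the kind of helper Karate can call through Java interop (the class name, package and paths are hypothetical, not taken from the karate-kafka repo):

package com.example;

import java.nio.file.Files;
import java.nio.file.Path;

public class FileMover {
    // in a feature file: * def FileMover = Java.type('com.example.FileMover')
    public static void move(String from, String to) throws Exception {
        Files.move(Path.of(from), Path.of(to));
    }

    public static boolean exists(String path) {
        return Files.exists(Path.of(path));
    }
}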

As you said, Karate is for e2e testing. The answer is quite easy: if you can assess changes in your overall state across all microservices based on the result of some action(s), then yes.
Of course, all these changes, actions and assessments have to happen over HTTP, since that is Karate's main protocol.

The answer is yes. Even within the same Scenario you can make calls to two different URLs. So, for example, you can GET from service A and use the response to POST to service B.
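A minimal feature-file sketch of that pattern (serviceAUrl, serviceBUrl and the paths are assumptions, e.g. values you would define in karate-config.js):

Scenario: GET from service A, POST to service B
  # serviceAUrl / serviceBUrl would come from karate-config.js (assumed)
  Given url serviceAUrl
  And path 'items', '42'
  When method get
  Then status 200
  * def item = response

  Given url serviceBUrl
  And path 'items'
  And request item
  When method post
  Then status 201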

Related

Is Cypress a good choice for API Automation Testing than Rest Assured/Karate? [closed]

I have been googling about using Cypress for API test automation. I found the links below:
Example: Cypress Real World App - API Testing
Docs: API Testing
Blog: Add GUI to your e2e API tests
Plugin: cy-api
API Testing with Cypress
Most of the cases I can think of (OAuth, all REST methods including form-data, GraphQL, assertions, mocking) can be achieved using Cypress.
However, I was wondering why it's not being used for this, and whether it is a good choice over other tools.
Please suggest!
Thanks a lot in advance.
Ask these questions and you will get the answer:
How easy is it to chain API requests and feed the response of one request into the next?
How easy is it to create data-driven tests? That is, to loop over a tabular set of data and make requests to the same endpoint.
Can you run your tests in parallel? (Without having to pay extra)
And when you run tests in parallel, do you get a single aggregate HTML report?
Can you mix async operations (e.g. message queues, API hooks / call-backs) into your test suites?
Can you mix database calls into your test suites and assertions?
Can you assert "deeply" on response payloads, and easily ignore (or just check formats for) any data-values that are random, such as time-stamps and generated id-s?
Can you re-use your functional API tests as performance tests?
Can you step-through and debug your API tests using an IDE?
Can you write mocks in the same framework, that you can then use to replace services you depend on in your end-to-end tests?
I've found very few frameworks that can actually do all the above. You can do the research :)
I currently work with Cypress only for API testing (a decision made by the company). There's no right or wrong answer. If it's the tool you want to use and you are comfortable with it, then go for it.
That being said, take into account that Cypress is a framework designed for end-to-end or even component testing, and therefore, even when you do not have a UI, it will always use the browser to perform actions (in the long term, depending on the number of tests, this can have an impact on run time). Lots of things are designed to interact with UI components, and you need to patch/wrap your way through to perform more complex stuff.
I personally had some issues with nested requests, or when having to perform multiple actions with responses, and with asynchrony in general (handled in a weird way by Cypress, in my opinion).
If I could choose, I would go with a framework designed for API testing, just because it will be more flexible and will probably have more tools for you to use when dealing with some of the things I mentioned above.
Hope this helps and good luck!

Creating test that depends on another test case

I'm currently working on an application that uploads a file to a web service (using Spring RestTemplate). This upload function returns an id which can be used to download the uploaded file later on.
I want this scenario covered by a test (I'm not talking about a unit test - maybe an integration or functional test, whichever is appropriate).
What I want to do is have the download test case depend on the result of the upload test (since the id comes from the upload function). This will be tested against an actual web service, for me to confirm that the upload and download functions work properly.
I'm not sure if this approach is correct, so if anyone can suggest a good way to implement it, it would be greatly appreciated.
Thanks in advance!
Since this upload/download functionality is already covered at the unit level:

"I want this scenario covered by a test (I'm not talking about a unit test - maybe an integration or functional test, whichever is appropriate)."

I know test chaining is considered harmful:

"the download test case will depend on the result of the upload test (since the id comes from the upload function)"

and can cause lots of overlap between tests, so changes to one can cascade outwards and cause failures everywhere. Furthermore, tests should have atomicity (isolation). But if the trade-off suits your case, my advice is to use it.
What you can look at is a proper test fixture strategy. The other fixture setup patterns can help you with this.
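If you do chain the two steps, a common compromise is to keep the whole chain inside one test, so nothing depends on leftover state from another test. A minimal sketch with JUnit and RestTemplate (the URLs and the upload/download contract are assumptions):

import org.junit.jupiter.api.Test;
import org.springframework.web.client.RestTemplate;
import static org.junit.jupiter.api.Assertions.assertArrayEquals;

class UploadDownloadRoundTripTest {
    private final RestTemplate rest = new RestTemplate();

    @Test
    void uploadedFileCanBeDownloadedAgain() {
        byte[] content = "hello".getBytes();
        // upload returns the id needed for the download (assumed contract)
        String id = rest.postForObject("http://localhost:8080/files", content, String.class);
        byte[] downloaded = rest.getForObject("http://localhost:8080/files/{id}", byte[].class, id);
        assertArrayEquals(content, downloaded);
    }
}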
Sounds like an 'acceptance test' is what is required. This would basically be an integration test of a subsystem for the desired feature.
Have a look at Cucumber as a good, easy framework to get started with.
Here you would define your steps
Given:
When:
Then:
and you can then test the feature as a whole.
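For instance, the Java step definitions behind such a feature might look like this (FileClient is a hypothetical wrapper around your RestTemplate code):

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertArrayEquals;

public class FileTransferSteps {
    private final FileClient client = new FileClient(); // hypothetical wrapper
    private byte[] content;
    private byte[] downloaded;

    @Given("a file with some content")
    public void aFileWithSomeContent() {
        content = "hello".getBytes();
    }

    @When("I upload the file and download it by the returned id")
    public void uploadAndDownload() {
        String id = client.upload(content);
        downloaded = client.download(id);
    }

    @Then("the downloaded content matches the uploaded content")
    public void contentsMatch() {
        assertArrayEquals(content, downloaded);
    }
}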
Services external to your application (that you have no control over) have to be mocked, even in an e2e test.
This means that the service you are uploading the file to should be faked. Just set up a dummy HTTP server that pretends to be the real service.
With such a fake service you can set up its behaviour for every test; for example, you can prepare a file to be downloaded with a given id.
Pseudo code:
// given
file = File(id, content);
fakeFileService.addFile(file);
// when
applicationRunner.downloadFile(file.id());
// then
assertThatFileWasDownloaded(file);
This is a test which checks whether the application can download a given file.
File is a domain object in your application, not a system File!
fakeFileService is the instance that controls the dummy file service.
applicationRunner is a wrapper around your application that makes it do what you want.
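A minimal sketch of what fakeFileService could look like, using WireMock (my tool choice here, not something the answer prescribes; the port and URL layout are assumptions):

import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class FakeFileService {
    private final WireMockServer server = new WireMockServer(8089);

    public void start() { server.start(); }
    public void stop() { server.stop(); }

    // pretend the real service serves this content for the given id
    public void addFile(String id, byte[] content) {
        server.stubFor(get(urlEqualTo("/files/" + id))
                .willReturn(aResponse().withStatus(200).withBody(content)));
    }
}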
I recommend reading "Growing Object-Oriented Software, Guided by Tests".

Integration testing with Web API - non-in-memory tests or in-memory tests

I would like to do integration testing on my Web API controllers.
When the integration test starts, the whole request/response pipeline of the Web API should be processed, so it's a real integration test.
I have read some blogs about non-in-memory tests vs. in-memory tests. I need to know what the difference is, and which of those approaches matches my criteria above.
I would really be glad about some explanations from people who have really dealt with integration testing on Web API, for self-hosting or IIS hosting (if there is a difference in testing...).
Not sure what you mean by non-in-memory testing, but with integration testing involving an in-memory hosted Web API, the requests are sent directly to the HttpServer, which is basically the first component to run in the ASP.NET Web API pipeline. This means the requests do not hit the network stack. So you don't need to worry about running on specific ports, etc., and if you write a good number of tests, the time it takes to run them all will not be too big, since you deal with memory and not the network. You should get running times comparable to a typical unit test. Look at this excellent post from Kiran for more details on in-memory testing.
In-memory testing will exercise all the components you set up to run in the pipeline, but one thing to watch out for is formatters. If you send ObjectContent in the request, there is no need to run media formatters, since the request is already in deserialized format, and hence media formatting does not happen.
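As an aside, the same in-memory idea exists in other stacks; purely as an illustration of the concept (this page's tag is spring-boot), here is a minimal Spring MockMvc sketch in Java, where requests likewise go straight into the server-side pipeline without touching sockets or ports (SomeController is a stand-in):

import org.junit.jupiter.api.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

class InMemoryPipelineTest {

    @RestController
    static class SomeController {
        @GetMapping("/api/values")
        String values() { return "[]"; }
    }

    // requests are dispatched in memory: no sockets, no ports
    private final MockMvc mvc =
            MockMvcBuilders.standaloneSetup(new SomeController()).build();

    @Test
    void hitsThePipelineWithoutTheNetwork() throws Exception {
        mvc.perform(get("/api/values")).andExpect(status().isOk());
    }
}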
If you want to get closer to production and are willing to take a hit on running time, you can write your tests using a self-host. Is that what you mean by non-in-memory testing? As an example, you can use OWIN self-hosting: use the Katana hosting APIs, host your Web API and hit it with your requests. Of course, this will use the real HttpListener, and the requests do traverse the network stack, although it is all happening on the same machine. The tests will be comparatively slower, but you probably get much closer to your production runs.
I personally have not seen anyone using web hosting (IIS) and doing lots of integration testing. It is technically possible to fire off your requests using HttpClient and inspect the response and assert stuff, but you will not have a lot of control over arranging your tests programmatically.
My choice is to mix and match: use in-memory tests as much as possible, and use a Katana-based host only for those specific cases where I need to really hit the network.

How does one unit test network-dependent operations?

Some of my apps grab data from the internet. I'm getting into the habit of writing unit tests, and I suspect that if I write a test for values returned from the internet, I will run into latency and/or reliability problems.
What's a valid way to test data that "lies on the other end of a web request"?
I'm using the stock unit testing toolkit that comes with Xcode, but the question theoretically applies to any testing framework.
A unit test is focused specifically on the business logic of your class. There would be no latency, reliability issues, etc., as you would use mock objects to simulate whatever you actually interact with.
What you are describing is some form of integration testing, and from the OP it seems like that is not what you intend.
You should "obscure" the other end by mocking it, and not really access the network, a remote database, etc.
Among others:
introduce artificial latency in requests
use another machine on the same network, or at least another VM
test connection failures (either by connecting to a non-existent server or by physically cutting the connection)
test for incomplete data (the connection could be cut half way)
test for duplicate data (the app could try to submit the request more than once if it thinks it was not successful, and in some scenarios this may lead to lost data)
All of these should fail gracefully (either on the server side or on the client side). A sketch of how to simulate some of these cases follows below.
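For instance, in Java with OkHttp's MockWebServer (the question is about Xcode, so treat this purely as an illustration of the failure modes, not the asker's stack):

import java.util.concurrent.TimeUnit;
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;
import okhttp3.mockwebserver.SocketPolicy;

public class FailureModes {
    public static void main(String[] args) throws Exception {
        MockWebServer server = new MockWebServer();
        // incomplete data: cut the connection half way through the body
        server.enqueue(new MockResponse()
                .setBody("{\"partial\":")
                .setSocketPolicy(SocketPolicy.DISCONNECT_DURING_RESPONSE_BODY));
        // artificial latency: delay the body by five seconds
        server.enqueue(new MockResponse()
                .setBody("{}")
                .setBodyDelay(5, TimeUnit.SECONDS));
        server.start();
        // point the client under test at server.url("/") and assert graceful failure
    }
}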
I posed this question to the venerable folks on #macdev IRC channel on freenode.net, and I got a few really good answers.
Mike Ash (of mikeash.com) suggests implementing a local web server inside my app. For complex cases, I'd probably do this. However, I'm just using the built-in initWithContentsOfURL:(NSURL *)url method of NSData.
For simpler cases, Mike says an alternative method is to pass base64-encoded dummy data directly to the NSData initializer. (Use data://dummyDataEncodedAsBase64GoesAfterTheDataProtocolThingy.)
Similarly, "alistra" suggests using local file URLs that point to files containing mock data.

Using Specflow to drive outside in development on .NET MVC 3 based projects

I want to do ATDD with TDD and DDD, and I want to first discover the behaviors (using mocks) of a domain model (e-commerce in my example).
You can imagine that in DDD layering we can have application services calling domain services and repositories, or other services and non-business-logic code (only tasks related to the application).
Please use the text below to understand what I am trying to do:
HOW TO USE MOCKS TO DISCOVER BEHAVIOUR OF MY ECOMMERCE DOMAIN AND THEN ENTER MORE GRANULAR TDD DEVELOPMENT TO IMPLEMENT DESIRED BEHAVIOUR.
This is an excerpt from an answer to another question, "BDD, what's a feature?":
"Pick whatever task that you need to implement, open a blank text file and try to explain using simple sentences the behavior. Every sentence should start with one of three keywords: given, when and then. Using your favorite BDD framework write the code that will parse these sentences and stimulate the application to get into the start state (given), execute some commands (when) and assert the transitioned state (then). Application code may start from mere mocks. Replace gradually those mocks with gradually built code and grow your application with higher confidence and quality levels."
Can someone provide some concrete examples of starting with mocks (Rhino Mocks, Moq) using two approaches:
1. Driving ATDD via controller actions, and
2. Using a WatiN driver (Page Objects, WatiN MVCContrib extensions) or Selenium.
If I am using no. 2, will I be able to see some example data when I visit some pages myself and do some actions ("When" I do something: navigate, post data) and validate the results of these actions?
To fully understand the nature of my question please read this:
http://jockeholm.wordpress.com/2010/02/14/combining-tddbdd-with-ddd/
Especially Steps 3. and 4.
I will provide the text for step 3:
3. [BDD/ATDD] For each test scenario, implement an executable example that fails, since that behaviour is not supported by the system. Then, use outside-in development, with an extensive use of mock objects, to flesh out the behavior specified in the executable example.
Thanks,
Rad
This may help:
http://msdn.microsoft.com/en-us/magazine/dd882516.aspx
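For the "application code may start from mere mocks" part, here is a tiny sketch in Java with Mockito (the .NET equivalents would be Moq or Rhino Mocks; all names here are hypothetical):

import org.junit.jupiter.api.Test;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

class PlaceOrderTest {
    // a collaborator discovered by the test, not yet implemented for real
    interface OrderRepository { void save(String orderId); }

    // minimal production code grown from the test
    static class OrderService {
        private final OrderRepository repo;
        OrderService(OrderRepository repo) { this.repo = repo; }
        void place(String id) { repo.save(id); }
    }

    @Test
    void placingAnOrderStoresIt() {
        OrderRepository repo = mock(OrderRepository.class);
        // "when": drive the behaviour you wish existed
        new OrderService(repo).place("order-1");
        // "then": assert the state transition through the interaction
        verify(repo).save("order-1");
    }
}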
