Hoverfly API Simulations with Golang repositories: how to get started - go

I have just started to experiment with Hoverfly, and I have a Golang backend calling a number of 3rd party APIs for which I need to create simulations. I am aware that Hoverfly has Java and Python bindings, and I have come across a number of tutorials using Hoverfly with both. I think I am possibly missing a very trivial point here: once I have created the simulations (via Capture Mode), what is the next step? Do I simply create integration tests making use of them? Do I import the Go package into my repository? I was looking for some sample usages in the examples folder, but the ones I have seen are mostly Python-driven. Is there an available example that I have totally missed?
Thank you

For Golang testing, you can have a look at the functional tests in the Hoverfly project: https://github.com/SpectoLabs/hoverfly/tree/master/functional-tests. It uses Hoverfly to test Hoverfly!
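To make the next step concrete: after capturing, you run Hoverfly in simulate mode (for example hoverctl start, hoverctl import simulation.json, hoverctl mode simulate) and point your Go HTTP client at its proxy, so your integration tests exercise your real code against the simulation instead of the live 3rd-party API. A minimal sketch, assuming Hoverfly's default proxy port (8500) and a hypothetical captured endpoint:

package integration

import (
    "crypto/tls"
    "net/http"
    "net/url"
    "testing"
)

func TestThirdPartyAPIAgainstSimulation(t *testing.T) {
    // Hoverfly's default proxy listens on localhost:8500.
    proxyURL, err := url.Parse("http://localhost:8500")
    if err != nil {
        t.Fatal(err)
    }
    client := &http.Client{
        Transport: &http.Transport{
            Proxy: http.ProxyURL(proxyURL),
            // Hoverfly re-signs HTTPS traffic with its own certificate;
            // skip verification here, or install Hoverfly's CA cert instead.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }

    // api.example.com stands in for whichever 3rd-party API you captured.
    resp, err := client.Get("https://api.example.com/v1/things")
    if err != nil {
        t.Fatalf("request through Hoverfly failed: %v", err)
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        t.Fatalf("expected 200 from the simulation, got %d", resp.StatusCode)
    }
}

In a real test you would call your own backend code (which in turn calls the 3rd-party API) rather than issuing the request directly; the key point is that only the proxy configuration changes, so no Hoverfly-specific Go package import is required.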

Related

One-click OpenAPI/Swagger build architecture for backend server code

I use Swagger to generate both client and server API code. On my frontend (React/TS/Axios), integrating the generated code is very easy: I make a change to my spec, consume the new version through NPM, and immediately the new API methods are available.
On the server side, however, the process is a lot more janky. I am using Java, and I have to copy and paste specific directories over (such as the routes, data types, etc.), and a lot of the generated code doesn't play nice with my existing application (in terms of imports etc.). I am having the same experience with a Flask instance that I have.
I know that comparing client to server is apples to oranges, but is there a better way to construct a build architecture so that I don't have to go through this error-prone/tedious process every time? Any pointers here?

gRPC calls through JMeter

I want to make gRPC calls through JMeter. I have done some searching but haven't found proper documentation or a site for it.
I would like to know if it is possible to make gRPC calls through JMeter.
Also, do we have an example doc/site for the same?
gRPC has a Java API, so you can generate client code (or reuse your application code if it's in Java) and implement the business use cases using JSR223 Samplers and the Groovy language.
Unfortunately, that is the absolute maximum of information I can provide, as the client code will be specific to your application; no one will be able to help further unless you want to share the source code of your app, which is highly unlikely to happen.
You may also take a look at the jmeter-grpc-plugin (you will still have to write the code) and the Client-side streaming RPC example.
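To illustrate the generate-stubs-then-call pattern (shown here in Go, the language of this thread's original question; the Java client you would embed in a JSR223 sampler follows the same shape), here is a minimal sketch using the canonical Greeter example. The proto import path and the service/method names are hypothetical placeholders for your own generated code:

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"

    pb "example.com/yourapp/proto" // hypothetical generated stubs
)

func main() {
    // Open a plaintext connection to the gRPC server.
    conn, err := grpc.Dial("localhost:50051",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatalf("dial failed: %v", err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Call a unary RPC through the generated client stub.
    client := pb.NewGreeterClient(conn)
    resp, err := client.SayHello(ctx, &pb.HelloRequest{Name: "jmeter"})
    if err != nil {
        log.Fatalf("RPC failed: %v", err)
    }
    log.Printf("response: %v", resp)
}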

Test endpoint compliance against an OpenAPI contract in Spring Boot REST

I am looking for a nice way to write tests to make sure that endpoints in a Spring Boot REST (ver. 2.1.9) application follow the OpenAPI contract.
In the project I recently moved to, the workflow is as follows: architects write the openapi.yml contract, and developers have to implement the endpoints to comply with it. Unfortunately a lot of differences creep in, these tests have to catch such situations, and it is not possible to change this workflow :(
I was thinking about a solution that generates openapi.yml from the current endpoints and compares the two somehow, but I wonder if there is some out-of-the-box solution.
I was thinking about a solution that generates openapi.yml from the current endpoints and compares the two somehow, but I wonder if there is some out-of-the-box solution.
In the general case, even the generated spec may not match the actual app behavior, because some things can't be expressed with Open API. However, it could still be helpful as a starting point.
Open API provides a way to specify examples that could be used to verify the contract, but the actual schemas might be a better source of expectations.
I want to note two tools that can generate and execute test cases based only on the input Open API spec:
Schemathesis uses both examples and schemas and doesn't require configuration by default. It utilizes property-based testing and verifies properties defined in the tested schema - response codes, schemas, and headers. It supports Open API 2 & 3.
Dredd focuses more on examples and provides several automatic expectations. It supports only Open API 2, and the third version is experimental.
Both provide a CLI and could be extended with various hooks to fit the desired workflow.
I'd suggest passing the contracts (the spec you mentioned) to Schemathesis, and it will verify whether all schemas and examples are handled correctly by your app.

How to create a performance testing framework in JMeter?

For functional automation we usually create a framework that is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information regarding it.
You can consider JMeter itself as a "framework": it already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to re-use an existing script for another application, as JMeter acts on the protocol level and therefore the requests will be different for different applications.
There is a mechanism in JMeter that allows you to re-use pieces of a test plan as modules so you won't have to duplicate your code; check out Test Fragments and the Module Controller. However, it is more applicable to a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to a continuous integration process, so you have a build step which executes the performance tests and publishes reports. Basically, you will be able to re-use the same test setup and reporting routine, and the only thing that changes from application to application will be the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API where the exposed external interface is static, but the code that handles it on the back end is changing, then you have a good shot at building something with a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and accept that you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is that the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more of the intelligence under the covers of your test interface, the resource cost per virtual user is higher, which dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. As an example, run a Selenium script instead of a bare JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.

Creating a test that depends on another test case

I'm currently working on an application that uploads a file to a web service (using Spring RestTemplate). This upload function returns an id which can be used to download the uploaded file later on.
I want this scenario covered by a test (I'm not talking about a unit test; maybe an integration or functional test, whichever is appropriate).
What I want is for the download test case to depend on the result of the upload test (since the id comes from the upload function). This will be run against an actual web service so I can confirm that the upload and download functions work properly.
I'm not sure if this approach is correct, so if anyone can suggest a good approach for implementing it, it would be greatly appreciated.
Thanks in advance!
Since this upload/download functionality is already covered at the unit level
I want this scenario covered by a test (I'm not talking about a unit test; maybe an integration or functional test, whichever is appropriate).
I know Test chaining is considered harmful
the download test case to depend on the result of the upload test (since the id comes from the upload function)
and can cause lots of overlap between tests, so changes to one can cascade outwards and cause failures everywhere. Furthermore, tests should have atomicity (isolation). But if the trade-off suits your case, my advice is to use it.
What you can look at is a proper Test Fixture strategy; the other Fixture Setup patterns can help you with this. A sketch of a chained flow kept inside a single test follows below.
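As a minimal sketch of what the chained flow can look like when kept inside one test (shown in Go, the language of this thread's original question; the same shape applies in JUnit), with in-memory stand-ins for the real client calls. The uploadFile and downloadFile helper names are hypothetical:

package chaining

import (
    "fmt"
    "testing"
)

// In-memory stand-ins for the real client calls; a real test would hit the
// web service over HTTP instead.
var store = map[string][]byte{}

func uploadFile(content []byte) (string, error) {
    id := fmt.Sprintf("id-%d", len(store))
    store[id] = content
    return id, nil
}

func downloadFile(id string) ([]byte, error) {
    content, ok := store[id]
    if !ok {
        return nil, fmt.Errorf("no file with id %s", id)
    }
    return content, nil
}

// Keeping upload and download in one test function preserves isolation at the
// suite level while still letting the download step depend on the upload's id.
func TestUploadThenDownload(t *testing.T) {
    content := []byte("hello")

    id, err := uploadFile(content)
    if err != nil {
        t.Fatalf("upload failed: %v", err)
    }

    got, err := downloadFile(id)
    if err != nil {
        t.Fatalf("download failed: %v", err)
    }
    if string(got) != string(content) {
        t.Fatalf("downloaded content mismatch: got %q, want %q", got, content)
    }
}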
Sounds like an 'acceptance test' is what is required. This would basically be an integration test of a subsystem for the desired feature.
Have a look at Cucumber as a good, easy framework to get started with.
Here you would define your steps, for example:
Given: a file has been uploaded to the web service
When: the file is downloaded using the id returned by the upload
Then: the downloaded content matches the uploaded file
and you can then test the feature as a whole.
External services (that you have no control over) have to be mocked, even in e2e tests.
This means that the service you are uploading the file to should be faked. Just set up a dummy HTTP server that pretends to be the real service.
With such a fake service you can set up its behaviour for every test; for example, you can prepare a file to be downloaded with a given id.
Pseudo code:
// given
File file = new File(id, content);
fakeFileService.addFile(file);
// when
applicationRunner.downloadFile(file.id());
// then
assertThatFileWasDownloaded(file);
This is a test which checks whether the application can download a given file.
The File class here is a domain object in your application, not a filesystem File!
fakeFileService is the instance that controls the dummy file service.
applicationRunner is a wrapper around your application that makes it do what you want.
I recommend reading "Growing Object-Oriented Software, Guided by Tests".
