Mocking a system's dependencies - cucumber-js

I'm working on a system that calls external APIs, some of which are owned by my company and others that are not.
My system exposes an HTTP interface that takes orders and publishes them onto a message queue in order to run an operation chain. It is composed of 3 NodeJS processes (1 for HTTP, 2 message queue consumers), 2 databases and a message queue.
As I develop my application, it becomes hard to test all the scenarios covered by my system (even though I have unit tests). To ensure that all the components work together, I am writing specifications using the Gherkin language and cucumber-js.
To test the system, I want to be as close as possible to the deployment environment, so I start the whole system, including the databases, the NodeJS processes and the message queue, with docker-compose. All of the components communicate through a docker network defined in the docker-compose configuration.
The issue is that I can't make sure that all of the external APIs are in the right state to accept my requests and that they will respond in a way that is useful for my test steps.
So, I thought about using a Mock server for each of my dependencies and discovered pact.io. As I understand, Pact allows me to write contracts and start a mock server so my system can then run HTTP requests against the mock server. Pact also allows me to give the contract to the service provider so it can also run the contract against the real app to see if it really works.
I saw the examples, in javascript, and I am able to start a mock service, provide it an interaction, verify the interaction and close the mock service. (JS with mocha example)
My issue is that I want my system to be as close to production as possible, so I want it to access the Pact mock service through my docker network. I saw a Pact CLI docker image to run the pact mock service (Pact CLI docker image), but once my mock server is dockerized, I lose the control that I had with the JS wrapper to add new interactions.
Also, I don't want to write pact files, I want to add interactions at the time my tests are running otherwise I will declare the test data twice (once in the cucumber tests scenarios and once in the pact files).
My questions are:
Is there a way to bind the JS wrapper to an existing mock service, a dockerized one?
When using the docker pact image, is there a way to add interactions at runtime?
Is pact the right tool to use since I just need a mock service?
Edit
I just created a sandbox environment to see what could be done with the NodeJS wrapper. It seems that you can create a mock service using docker and then control it via the NodeJS wrapper.
# Starts the docker container
docker run -dit \
--rm \
--name pact-mock-service \
-p 1234:1234 \
-v <YOUR_PATH>/docker_pacts/:/tmp/pacts \
pactfoundation/pact-cli:latest \
mock-service \
-p 1234 \
--host 0.0.0.0 \
--pact-dir /tmp/pacts
const { Pact, MockService } = require('@pact-foundation/pact')
const axios = require('axios')

const pact = new Pact({
  consumer: "my client",
  provider: "some provider",
  // These two are ignored since we override the inner mock service below
  port: 1234,
  host: 'localhost'
})

// The consumer and provider names have to be duplicated here; normally they
// are passed along by the pact object when calling `pact.setup()`.
const mockService = new MockService(
  'my client',
  'some provider',
  // The port and host of the dockerized mock service
  1234,
  'localhost'
)

pact.mockService = mockService

async function run () {
  await pact.addInteraction({
    state: "some data is created",
    uponReceiving: 'a GET request for /hello',
    withRequest: {
      method: "GET",
      path: "/hello"
    },
    willRespondWith: {
      status: 200,
      body: {
        hello: 'hello world'
      }
    }
  })

  const response = await axios.get('http://localhost:1234/hello')
  console.log(response.data) // { "hello": "hello world" }
}

run().catch(console.error)
Edit 2
I will probably follow Matthew Fellows's answer and test my system with unit-level tests, using Pact to mock the external interactions of my system.

So, I thought about using a Mock server for each of my dependencies and discovered pact.io. As I understand, Pact allows me to write contracts and start a mock server so my system can then run HTTP requests against the mock server. Pact also allows me to give the contract to the service provider so it can also run the contract against the real app to see if it really works.
Yes, that's correct. Pact can probably be considered a unit testing tool for your API client that uses contract testing to ensure the mocks are valid.
If you're using Pact just for mocking, though, you're missing out on all of the key benefits.
Using Pact in such a high-level test is considered bad practice and, as you can see, is difficult to do (you're working against the way it is intended to be used).
I would worry less about overlapping tests (end-to-end tests always double up with other test layers by design), and concern yourself more with ensuring each of the API contracts is covered by Pact. These tests run much faster, are more precise in what they test, and are less flaky.
This way, you can reduce the scope of your end-to-end BDD scenarios to the key ones, which should reduce the maintenance cost.
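For illustration, here is a minimal sketch of what one of those Pact contract tests could look like with pact-js and mocha; the consumer/provider names and the /orders/42 endpoint are invented for the example:

const path = require('path')
const { Pact } = require('@pact-foundation/pact')
const axios = require('axios')
const { expect } = require('chai')

const provider = new Pact({
  consumer: 'order-http-interface',
  provider: 'external-orders-api',
  port: 1234,
  dir: path.resolve(process.cwd(), 'pacts')
})

describe('orders API client', () => {
  before(() => provider.setup())
  after(() => provider.finalize())
  afterEach(() => provider.verify())

  it('fetches a single order', async () => {
    await provider.addInteraction({
      state: 'order 42 exists',
      uponReceiving: 'a request for order 42',
      withRequest: { method: 'GET', path: '/orders/42' },
      willRespondWith: {
        status: 200,
        headers: { 'Content-Type': 'application/json' },
        body: { id: 42, status: 'pending' }
      }
    })

    // In a real test this call would go through your API client module.
    const response = await axios.get('http://localhost:1234/orders/42')
    expect(response.data.status).to.equal('pending')
  })
})

Each external API gets its own small suite like this, so the cucumber scenarios no longer need to carry the mock's interaction data.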

Related

Using gorilla/mux in a GCP Cloud Function

I am new to Go and new to GCP, so I may not be able to give all the details, but I will try to share what I have.
I have set up a small microservice using docker. The docker-compose file runs my main method, which registers an HTTP handler via gorilla/mux ... it is working as expected. Here is sample code:
package main

import (
    "fmt"
    "net/http"

    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/stores/orders/{orderID}/status", handler).Methods("GET")
    http.Handle("/", r)
    fmt.Printf("%+v", http.ListenAndServe(":8080", nil))
}

func handler(w http.ResponseWriter, r *http.Request) {
    fmt.Println("------ in handler!!!")
}
With this code, I am able to call my service after running docker-compose up. What I am confused about is how I would use gorilla/mux to route my calls in a Google Cloud Function.
Based on my understanding, for a GCP Cloud Function, I would specify which method is the entry point, i.e.:
gcloud functions deploy service-name <removing_other_details> --entry-point handler
This handler would be called whenever a request is received; there is no ListenAndServe. So how can I use gorilla/mux in that case?
What I eventually want to do is extract path variables from the incoming request. One approach is to use string manipulation and just pull the path variables out of the request object, but this could be error prone. So I thought I could use gorilla/mux to handle such things.
Any ideas?
Google Cloud Functions is used to execute single-purpose functions based on triggers like HTTP or other GCP server-to-server triggers. Your Go service looks more like an HTTP server than a single function.
If you want to establish a microservice architecture with Cloud Functions, what you are going to do is create a bunch of different functions, mainly triggered by HTTP (each one will automatically be assigned a different HTTP address), and then call them from your application without needing any external HTTP router.
If you want to have a distributed microservice (with every single service sharing the same URL but with a different endpoint within the URL), you want to take a look at App Engine, where you can deploy your server. You can use this tutorial to get started with Google App Engine.
In addition to PoWar's answer, I propose you use Cloud Run instead of App Engine. Cloud Run works on the same underlying infrastructure as Cloud Functions, and a lot of the features are similar.
In addition, Cloud Run uses containers, which are one of the current best practices for packaging applications. You can use a standard Dockerfile like the one in the documentation, and then build it with docker, or with Cloud Build if you don't have docker installed:
gcloud builds submit -t gcr.io/PROJECT_ID/containerName
Or you can even use an alpha feature based on buildpacks if you don't want to write or customize a Dockerfile:
gcloud alpha builds submit --pack=image=gcr.io/PROJECT_ID/containerName
And then deploy your image to Cloud Run:
gcloud run deploy --image=gcr.io/PROJECT_ID/containerName --platform=managed --region=yourRegion myService

Faking non-XHR network requests for Cypress

I have an AngularJS application and I'm trying to use Cypress to stub some of the network requests that it makes. Currently, my problem is with a request with resource type Img. I know from Cypress documentation that Cypress cannot stub non-XHR resource types/requests, but I'm looking for a workaround.
My application requests the image from a backend server, which I want to stub or fake. I prefer not to modify the application code, and would rather create an external workaround.
I've looked into the following and found them not to be useful in this scenario:
Sinon.js - similarly, it can only handle XHR requests.
nock - Replaces node's http.request, but that doesn't seem to work within Cypress. It might work if I added it straight into my application code, which I prefer not to do.
I've also tried the following but was unsuccessful:
mockserver - Ran the mockserver and added an expectation, but none of the requests made to the mockserver seemed to go through.
Service Worker API - Was unsure about how to register my service worker, since it requires a .js file as an input. What .js file would be served as input if I'm controlling the service worker via Cypress?
a mock server using express - The issue is the application is running on localhost:<some_port>, while the mock server is running on localhost:<some_other_port>. I'm having trouble specifying port numbers when constructing the request through the application. Basically, my application isn't really respecting different port numbers.
EDIT:
I've since been successful with creating a mock server using express. According to the Cypress documentation, servers shouldn't be started and stopped within before() and after() hooks. Instead, they should be started before Cypress starts, and stopped after Cypress stops.
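For reference, a minimal sketch of such an external express mock; the /images/:id route and port 3001 are placeholders, and the application under test would need to request its images from that port:

const express = require('express')
const path = require('path')

const app = express()

// Answer every image request with a local fixture file.
app.get('/images/:id', (req, res) => {
  res.sendFile(path.resolve(__dirname, 'fixtures', 'placeholder.png'))
})

// Started from an npm script before Cypress runs, not from before()/after().
app.listen(3001, () => console.log('image mock listening on port 3001'))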

How to e2e test multiple client synchronization?

I am writing e2e tests where I want to test that when I add an entity on one client, the other online client is synced and sees the added entity. (Think Google Docs: when you type, the words appear on the other users' screens.)
My question is: how can I e2e test client synchronization through WebSockets?
Should I mock WebSockets if possible? Should I find an e2e framework that allows multiple tabs/ browser instances and test that the clients sync like that? Is there another way?
I have looked at applications that also use synchronization like these: https://github.com/automerge/trellis/blob/master/test/application.js and https://github.com/automerge/pixelpusher. Unfortunately, they either don't have tests or don't use WebSockets.
I think the simplest way would be to start two tests simultaneously like this:
create a new script entry in the scripts section of the package.json file:
"scripts" : {
"testcafe": testcafe chrome,
"test-synchro": npm run testcafe -- test1.js & npm run testcafe -- test2.js
}
in test1.js you add one entity and then write a JSON file that contains all the information on the new entity.
in test2.js you wait for this file to be present and stable in the file system, and then you act on it. Maybe you could use the wait-on package to achieve this, as sketched below.
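A minimal sketch of that hand-off in test2.js, assuming test1.js writes ./sync/new-entity.json after creating the entity (the file path, page URL and selector are invented for the example):

import fs from 'fs'
import waitOn from 'wait-on'
import { Selector } from 'testcafe'

fixture('client 2 sees the synced entity')
    .page('http://localhost:3000')

test('the entity created by client 1 shows up', async t => {
    // Block until the file exists and its size has been stable for 750 ms.
    await waitOn({ resources: ['./sync/new-entity.json'], window: 750 })
    const entity = JSON.parse(fs.readFileSync('./sync/new-entity.json', 'utf8'))

    // The second client should have received the entity over the WebSocket.
    await t.expect(Selector('.entity-list').innerText).contains(entity.name)
})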

Testing 2 web applications that interact with each other

We are developing 2 different web applications (WARs).
Both use the same message bus (ActiveMQ - JMS).
We would like to perform tests that trigger an action on webapp#1; that action should produce a message that will be consumed by webapp#2 and mutate the DB.
How can we test this end-to-end scenario?
We would like to have an automated test for that, and would like to avoid manual testing as much as possible.
We are using JUnit with the Spring framework, and already have tons of JUnit tests that run daily, but none of them so far involve the message bus. It appears that this scenario is a whole different story to automate.
Is there any way to test this scenario with an automated script (Spring / JUnit / other)?
A JUnit test could certainly drive this integration test sequence:
send an HTTP request to webapp#1 to trigger the action, using HttpURLConnection for example
run a SQL command (using JDBC) to check whether the database contains the expected value, polling with a timeout since the message is consumed asynchronously
In the test setup, the database needs to be initialized (reset) so that the second step does not give a false-positive result.

Use mocha with grunt-contrib-connect or through filesystem?

Is it better to use mocha with a local server through the grunt-contrib-connect task or just run it with grunt-mocha?
What are the differences/downsides of both?
They are two totally different things. You do not automatically run spec files with grunt-contrib-connect; it is meant to be used in conjunction with other tasks that hit the connect server. You can use it with grunt-mocha (see the urls option), but it's really only useful if you need to test against server logic. Otherwise, you can mock server responses and XHR requests in your tests using sinon, for example:
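A minimal sketch of that approach with sinon's fake server; the loadItems function stands in for whatever code under test issues the XHR:

const sinon = require('sinon')
const { expect } = require('chai')

// Hypothetical code under test: fetches items over XHR.
function loadItems (done) {
  const xhr = new XMLHttpRequest()
  xhr.open('GET', '/api/items')
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4) done(JSON.parse(xhr.responseText))
  }
  xhr.send()
}

describe('items view', () => {
  let server

  beforeEach(() => {
    // Replaces the browser's XMLHttpRequest with sinon's fake implementation.
    server = sinon.fakeServer.create()
    server.respondWith('GET', '/api/items',
      [200, { 'Content-Type': 'application/json' }, JSON.stringify([{ id: 1 }])])
  })

  afterEach(() => server.restore())

  it('loads items without a real server', () => {
    let items
    loadItems(result => { items = result })
    server.respond() // flush the queued fake response
    expect(items).to.have.length(1)
  })
})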
