Performance test for GraphQL API - JMeter

Today I do my API automation testing and performance testing with JMeter against a REST API.
Now development has switched to a GraphQL API, and I have two questions about it:
What is the best way to perform API automation and performance testing?
Does JMeter support GraphQL APIs?

I use Apollo to build the GraphQL server, and use JMeter to query the GraphQL API as below.
1. Set up HTTP Request
2. Set up HTTP Headers
Depending on your application, you might also need to set an Authorization HTTP header for JWT (JSON Web Token) authentication, such as:
Authorization: Bearer xxxxxxxxxxxxxxxxxxxxxxxxxxxx
3. Set up the HTTP Cookie Manager if your app needs cookies
4. Run the test
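As an illustration of steps 1 and 2: the HTTP Request sampler typically POSTs to the server's /graphql endpoint (the path is an assumption; adjust it to your Apollo setup) with Content-Type: application/json, and the Body Data wraps the query in a JSON envelope, for example:
{
  "query": "query { __typename }",
  "variables": {}
}
The __typename query above is just a placeholder that any spec-compliant GraphQL server will answer; substitute your real query.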

Disclaimer: I work for Load Impact, the company behind k6.
If you are willing to consider an alternative, I've recently written a blog post about this topic: Load testing GraphQL with k6.
This is what a k6 example looks like:
import http from 'k6/http';

let accessToken = "YOUR_GITHUB_ACCESS_TOKEN";

let query = `
  query FindFirstIssue {
    repository(owner: "loadimpact", name: "k6") {
      issues(first: 1) {
        edges {
          node {
            id
            number
            title
          }
        }
      }
    }
  }`;

let headers = {
  'Authorization': `Bearer ${accessToken}`,
  'Content-Type': 'application/json',
};

export default function () {
  // send the query as a JSON POST body to the GitHub GraphQL endpoint
  let res = http.post(
    'https://api.github.com/graphql',
    JSON.stringify({ query: query }),
    { headers: headers }
  );
}
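Assuming you save the script to a file such as script.js (the name is arbitrary), you can then run it with k6 run script.js.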

Looking into the Serving over HTTP section of the GraphQL documentation:
When receiving an HTTP GET request, the GraphQL query should be specified in the "query" query string.
So you can just append your GraphQL query to your request URL.
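For example (taken from the same documentation page), a query like { me { name } } can be sent via an HTTP GET like so:
http://myapi/graphql?query={me{name}}
In practice the query value should be URL-encoded, and query variables can be sent as a JSON-encoded string in an additional "variables" query parameter.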
With regards to "best practices" - you should follow the "normal" recommendations for testing web applications and HTTP APIs, for example check out the REST API Testing - How to Do it Right article.

You can try using easygraphql-load-tester
How it works:
easygraphql-load-tester is a Node library created to do load testing on GraphQL based on the schema; it will create a set of queries, which are the ones used to test your server.
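As a rough sketch of the Artillery integration described in the library's README (the file name schema.gql and the empty args object are assumptions; double-check the easygraphql-load-tester docs for the exact setup), the Artillery processor file generates the test cases from your schema:
// index.js - Artillery processor (sketch)
const EasyGraphQLLoadTester = require('easygraphql-load-tester')
const fs = require('fs')
const path = require('path')

// The GraphQL schema the queries are generated from
const schema = fs.readFileSync(path.join(__dirname, 'schema.gql'), 'utf8')

// Optional arguments for queries/fields that require them
const args = {}

const loadTester = new EasyGraphQLLoadTester(schema, args)

// Expose the generated queries to the Artillery scenario
const testCases = loadTester.artillery()

module.exports = { testCases }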
Examples:
Artillery.io
K6
Result:
Using this package, I was able to identify a bad implementation on the server related to missing dataloaders.
Results without dataloaders
All virtual users finished
Summary report @ 10:07:55(-0500) 2018-11-23
  Scenarios launched: 5
  Scenarios completed: 5
  Requests completed: 295
  RPS sent: 36.88
  Request latency:
    min: 1.6
    max: 470.9
    median: 32.9
    p95: 233.2
    p99: 410.8
  Scenario counts:
    GraphQL Query load test: 5 (100%)
  Codes:
    200: 295
Results with dataloaders
All virtual users finished
Summary report @ 10:09:09(-0500) 2018-11-23
  Scenarios launched: 5
  Scenarios completed: 5
  Requests completed: 295
  RPS sent: 65.85
  Request latency:
    min: 1.5
    max: 71.9
    median: 3.3
    p95: 19.4
    p99: 36.2
  Scenario counts:
    GraphQL Query load test: 5 (100%)
  Codes:
    200: 295

I am testing our GraphQL implementation; you will need:
Thread Group
HTTP Header Manager: add Content-Type: application/json
https://i.stack.imgur.com/syXqK.png
HTTP Request: use GET and add your query in the Body Data
https://i.stack.imgur.com/MpxAb.png
Response Assertion: you want to count as correct only the responses without errors
https://i.stack.imgur.com/eXWGs.png
A Listener:
https://i.stack.imgur.com/VOVLo.png
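A note on why the Response Assertion matters: GraphQL servers usually return HTTP 200 even when a query fails, reporting the failure in an "errors" array in the body, so asserting on the response body (for example, that it does not contain "errors") is what actually catches failed queries. A failing body looks roughly like this (the message text is only illustrative):
{
  "errors": [
    {
      "message": "Cannot query field \"foo\" on type \"Query\".",
      "locations": [ { "line": 1, "column": 3 } ]
    }
  ]
}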

I have recently tried API testing GraphQL with both GET and POST requests in JMeter.
Make sure it is a POST request for both queries and mutations.
Example GraphQL query:
{
  storeConfig {
    default_title
    copyright
  }
}
For JMeter it would look like this:
{
  "query": "{ storeConfig { default_title copyright } }"
}
Set up the HTTP Request
In place of localhost, use your domain name. Make sure you don't add https.
Example: https://mydomainname.com
In JMeter: mydomainname.com
Set up the HTTP Header Manager
For requesting a mutation in JMeter
Example mutation in GraphQL:
mutation {
  generateCustomerToken(
    email: "rd@mailinator.com"
    password: "1234567"
  ) {
    token
  }
}
In JMeter the mutation will look like this:
{
  "query": "mutation { generateCustomerToken( email: \"rd@mailinator.com\" password: \"1234567\" ) { token } }"
}
Escape the inner double quotes with \" as shown in the query above.
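As an alternative sketch (not part of the original answer), you can avoid most of the escaping by passing the arguments as GraphQL variables in a separate "variables" field; this assumes the schema declares email and password as String! arguments:
{
  "query": "mutation GenerateToken($email: String!, $password: String!) { generateCustomerToken(email: $email, password: $password) { token } }",
  "variables": {
    "email": "rd@mailinator.com",
    "password": "1234567"
  }
}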

The easiest way is to use the GraphQL query directly in JMeter without the need to convert it to JSON.
All you need to do is pass "Content-Type" as "application/graphql" in the header.
Image Link for: HTTP Request with GraphQL Query as input
Image Link for: Header details
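With application/graphql the request body is just the raw query text, so the request looks roughly like this (not every GraphQL server accepts this content type, so verify it against your implementation; the /graphql path is an assumption):
POST /graphql HTTP/1.1
Content-Type: application/graphql

{ storeConfig { default_title copyright } }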

Related

grpc-gateway: redirect mismatch with the definition (proto file)

I use grpc-gateway to host a REST API for my gRPC service.
I have 2 APIs:
API A
option (google.api.http) = {
  post: "/v1/product/{id}"
  body: "*"
};
API B
option (google.api.http) = {
  post: "/v1/product/unit"
  body: "*"
};
But when I call POST v1/product/unit,
grpc-gateway routes it to the method of API A.
Am I missing something?
It depends on the order in which you register your gRPC service handlers on the gateway mux. It's not obvious, but the handler whose pattern is registered first wins, so make sure the more exact path (/v1/product/unit) is registered before the more general one (/v1/product/{id}). In other words, change the order in which you register your gRPC services on the grpc-gateway ServeMux.

Load/stress test of a SPA with Hasura Cloud GraphQL as a backend and subscriptions

I'm trying to do a performance test on a SPA with:
A frontend in React, deployed with Netlify.
As a backend, Hasura Cloud GraphQL (std version) https://hasura.io/, where everything from the client goes directly through Hasura to the DB.
A DB in Postgres hosted on Heroku (Standard 0 tier).
We're hoping to be able to handle around 800 simultaneous users.
The problem is that I'm lost about how to do it, or whether I'm doing it correctly, since most of our operations are subscriptions/mutations that I had to transform into queries. I tried doing these tests with k6 and JMeter, but I'm not sure I'm doing them properly.
k6 test
At first, I did a quick search and collected around 10 commonly used subscriptions. Then I tried to create a performance test with k6 https://k6.io/docs/using-k6/http-requests/ but I wasn't able to create a working subscription test, so I just transformed each subscription into a query and performed an http.post with this setup:
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '30s', target: 75 },
    { duration: '120s', target: 75 },
    { duration: '60s', target: 50 },
    { duration: '30s', target: 30 },
    { duration: '10s', target: 0 }
  ]
};

export default function () {
  // prod, listaQueries, keys and params are project-specific values defined elsewhere
  var res = http.post(prod,
    JSON.stringify({
      query: listaQueries.GetDesafiosCursosByKey(
        keys.desafioCursoKey
      )}), params);
  sleep(1)
}
I did this for every query and ran each test individually. Unfortunately, the numbers I got were bad, and somehow our test environment was getting better times than production. (The only difference afaik is that we're using Hasura Cloud for production.)
I tried to implement WebSockets, but I couldn't get them to work and configure them for a stress/load test.
K6 result
Jmeter test
After that, I tried something similar with JMeter, but again I couldn't figure out how to set up a subscription test (after a while, I read in a blog that JMeter doesn't support it:
https://qainsights.com/deep-dive-into-graphql-in-jmeter/ ), so I simply transformed all subscriptions into queries and tried to do the same, but the numbers I was getting were different and much higher than with k6.
Jmeter query Config 1
Jmeter query config 2
Jmeter thread config
Questions
I'm not sure if I'm doing it correctly, or whether transforming every subscription into a query and performing an HTTP request is a correct approach. (At least I know that those queries return the data correctly.)
Should I just increase the number of VUs/threads until I get a constant timeout, to simulate a stress test? Some tests were causing a GraphQL error on the website, and others were producing a
""WARN[0059] Request Failed error="Post \"https://xxxxxxx-xxxxx.herokuapp.com/v1/graphql\": EOF""
in the k6 console.
Or should I just give up on k6/JMeter and look for another tool to perform these tests?
Thank you in advance, and sorry for my English and explanation, but I'm a complete newbie at this.
I'm not sure if I'm doing it correctly, or whether transforming every
subscription into a query and performing an HTTP request is a correct
approach. (At least I know that those queries return the data
correctly.)
Ideally you would be using WebSockets, as that is what actual clients will most likely be using.
For code samples, check out the answer here.
Here's a more complete example utilizing a main.js entry script with modularized subscription code in subscriptions/bikes-brands.js. It also uses the Httpx library to set a global request header:
// main.js
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.5/index.js';
import { getBikeBrandsByIdSub } from './subscriptions/bikes-brands.js';

const session = new Httpx({
  baseURL: `http://54.227.75.222:8080`
});

const pauseMin = 2;
const pauseMax = 6;

export const options = {};

export default function () {
  session.addHeader('Content-Type', 'application/json');
  getBikeBrandsByIdSub(1);
}

// subscriptions/bikes-brands.js
import ws from 'k6/ws';

// Hasura GraphQL WebSocket endpoint
const wsUri = 'wss://54.227.75.222:8080/v1/graphql';

/* using string concatenation */
export function getBikeBrandsByIdSub(id) {
  const query = `
    subscription getBikeBrandsByIdSub {
      bikes_brands(where: {id: {_eq: ${id}}}) {
        id
        brand
        notes
        updated_at
        created_at
      }
    }
  `;

  const subscribePayload = {
    id: "1",
    payload: {
      extensions: {},
      operationName: "query",
      query: query,
      variables: {},
    },
    type: "start",
  }

  const initPayload = {
    payload: {
      headers: {
        "content-type": "application/json",
      },
      lazy: true,
    },
    type: "connection_init",
  };

  console.debug(JSON.stringify(subscribePayload));

  // start a WS connection
  const res = ws.connect(wsUri, initPayload, function(socket) {
    socket.on('open', function() {
      console.debug('WS connection established!');
      // send the connection_init:
      socket.send(JSON.stringify(initPayload));
      // send the subscribe payload for the bikes_brands subscription:
      socket.send(JSON.stringify(subscribePayload));
    });

    socket.on('message', function(message) {
      let messageObj;
      try {
        messageObj = JSON.parse(message);
      }
      catch (err) {
        console.warn('Unable to parse WS message as JSON: ' + message);
      }
      if (messageObj && messageObj.type === 'data') {
        console.log(`${messageObj.type} message received by VU ${__VU}: ${Object.keys(messageObj.payload.data)[0]}`);
      }
      console.log(`WS message received by VU ${__VU}:\n` + message);
    });
  });
}
Should I just increase the number of VUs/threads until I get a
constant timeout, to simulate a stress test?
Timeouts and errors that only happen under load are signals that you may be hitting a bottleneck somewhere. Do you only see the EOFs under load? These are basically the server sending back incomplete responses/closing connections early which shouldn't happen under normal circumstances.
My expectation is that your test should replicate real user activity as closely as possible. I doubt that real users will be sending requests to GraphQL directly, and a well-behaved load test must replicate real-life application usage as closely as possible.
So I believe you should move to the HTTP protocol level and mimic the network footprint of a real browser instead of trying to come up with individual GraphQL queries.
With regards to the JMeter and k6 differences: it might be the case that k6 produces higher throughput on the same hardware when running requests at maximum speed, as evidenced by the kind-of benchmark in the Open Source Load Testing Tools 2021 article. However, given that you're trying to simulate real users with real browsers accessing your application, and real users don't hammer the application non-stop (they need some time to "think" between operations), you should be getting the same number of requests from both load testing tools. If JMeter doesn't give you the load you want to conduct, make sure to follow JMeter Best Practices and/or consider running it in distributed mode.

Azure Form Recognizer training not finding data

I'm trying to train a Form Recognizer model using the browser API console (https://eastus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api/operations/TrainCustomModel/console). I've uploaded training images to a container and created an SAS. The browser API console generates the following HTTP request:
POST https://eastus.api.cognitive.microsoft.com/formrecognizer/v1.0-preview/custom/train?source=https://pythonimages.blob.core.windows.net/?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-01-22T00:23:33Z&st=2020-01-21T16:23:33Z&spr=https&sig=••••••••••••••••••••••••••••••••&prefix=images HTTP/1.1
Host: eastus.api.cognitive.microsoft.com
Content-Type: application/json
Ocp-Apim-Subscription-Key: ••••••••••••••••••••••••••••••••
{
  "source": "string",
  "sourceFilter": {
    "prefix": "string",
    "includeSubFolders": true
  }
}
However, the answer I get back is
Transfer-Encoding: chunked
x-envoy-upstream-service-time: 4
apim-request-id: 5ad37aa2-e251-4b61-98ae-023930b47d27
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
Date: Tue, 21 Jan 2020 16:25:03 GMT
Content-Type: application/json; charset=utf-8
{
  "error": {
    "code": "1004",
    "message": "Dataset path must be relative to local input mount path '/input' if local data is referenced."
  }
}
I don't understand why it seems to be looking for data locally. I've experimented with the SAS, e.g. including the container name (images) in the blob http address rather than as a query parameter, but no success so far.
I've also tried the Python/REST path (described here: https://learn.microsoft.com/en-gb/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract-v1), which results in a different error:
Response status code: 408
Response body: {'error': {'code': '1011', 'innerError': {'requestId': 'e7f9ef9f-97bc-4b6a-86f3-0b29c9591c87'}, 'message': 'The operation exceeded allowed time limit and was canceled. The common reasons are that the data source is too large or contains unsupported content. Please check that your request conforms to service limits and retry with redacted data source.'}}
For completeness, the code I use is as follows (key/signature starred out):
########### Python Form Recognizer Train #############
from requests import post as http_post

# Endpoint URL
base_url = r"https://markusformsrecognizer.cognitiveservices.azure.com/" + "/formrecognizer/v1.0-preview/custom"
source = r"https://pythonimages.blob.core.windows.net/images?sv=2019-02-02&ss=bfqt&srt=sco&sp=rl&se=2020-01-22T15:37:26Z&st=2020-01-22T07:37:26Z&spr=https&sig=*********************************"

headers = {
    # Request headers
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '*********************************'
}

url = base_url + "/train"
body = {"source": source}

try:
    resp = http_post(url = url, json = body, headers = headers)
    print("Response status code: %d" % resp.status_code)
    print("Response body: %s" % resp.json())
except Exception as e:
    print(str(e))
For error code 1004: please follow the steps below to get the source path containing the training documents and pass it as the value of the source key.
{
  "source": "string",
  "sourceFilter": {
    "prefix": "string",
    "includeSubFolders": true
  }
}
Replace the source value with the Azure Blob storage container's shared access signature (SAS) URL. To retrieve the SAS URL, open the Microsoft Azure Storage Explorer, right-click your container, and select Get shared access signature.
Make sure the Read and List permissions are checked, and click Create.
Then copy the value in the URL section. It should have the form:
https://<storage account>.blob.core.windows.net/<container name>?<SAS value>
Please use the new Form Recognizer v2.0 release; it is an async API and enables training on large data sets and analyzing large documents. https://aka.ms/form-recognizer/api
Quick start: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/python-train-extract
To get started with Form Recognizer, please log in to the Azure Portal using this link to create a Form Recognizer resource (for v2.0 (preview) please use the West US 2 or West Europe regions).
Try removing the string value from the prefix property:
{
  "source": "string",
  "sourceFilter": {
    "prefix": "",
    "includeSubFolders": true
  }
}
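Putting the two suggestions above together, the train request body would then look roughly like this, with the container name from the question (images) in the URL path and the SAS token appended (token elided here):
{
  "source": "https://pythonimages.blob.core.windows.net/images?<SAS token>",
  "sourceFilter": {
    "prefix": "",
    "includeSubFolders": true
  }
}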
The Python Quick Start code for version 2.0 seems to be working; at least I don't get any errors anymore. I'm now feeling slightly silly that I didn't try this earlier. The API (web-browser) console, linked from the Quick Start page of the Form Recognizer, seems to automatically assume I want to use version 1.0 and there's no way to change that (or perhaps I've just overlooked something). Hence I assumed I'd been allocated a v1.0 trial, and therefore that's what I used when I tried the Python Quick Start the first time around.
Instead of using just the SAS URI in the "source" of the request parameters on the API POST call, use the complete string of the container followed by the SAS URI token.
For example:
https://<storage account>.blob.core.windows.net/<container name>?<SAS token>

How can I set HTTP request headers when using Go-Github and an http.Transport?

I am writing an app that uses the GitHub API to look at repositories in my GitHub orgs. I am using the github.com/google/go-github library.
I am also using the github.com/gregjones/httpcache so that I can do token based authentication as well as set the conditional headers for the API calls. I have got authentication working thus:
ctx := context.Background()

// GitHub API authentication
transport = &oauth2.Transport{
    Source: oauth2.StaticTokenSource(
        &oauth2.Token{
            AccessToken: gh.tokens.GitHub.Token,
        },
    ),
}

// Configure HTTP memory caching
transport = &httpcache.Transport{
    Transport:           transport,
    Cache:               httpcache.NewMemoryCache(),
    MarkCachedResponses: true,
}

// Create the HTTP client that GitHub will use
httpClient := &http.Client{
    Transport: transport,
}

// Attempt to login to GitHub
client := github.NewClient(httpClient)
However, I am unable to work out how to add the necessary If-Match header when I use client.Repositories.Get, for example. This is so I can work out if the repo has changed in the last 24 hours, for example.
I have searched for how to do this, but the examples I come across show how to create an HTTP client, then create a request (so the headers can be added) and then call Do on it. However, as I am using the go-github client directly, I do not have that option.
The documentation for go-github states that for conditional requests:
The GitHub API has good support for conditional requests which will help prevent you from burning through your rate limit, as well as help speed up your application. go-github does not handle conditional requests directly, but is instead designed to work with a caching http.Transport. We recommend using https://github.com/gregjones/httpcache for that.
Learn more about GitHub conditional requests at https://developer.github.com/v3/#conditional-requests.
I do not know how to add it in my code, any help is greatly appreciated.
As tends to be the case with these things, shortly after posting my question I found the answer.
The trick is to set the headers using the Base field of the OAuth2 transport, like this:
transport = &oauth2.Transport{
    Source: oauth2.StaticTokenSource(
        &oauth2.Token{
            AccessToken: gh.tokens.GitHub.Token,
        },
    ),
    Base: &transportHeaders{
        modifiedSince: modifiedSince,
    },
}
The struct and method look like:
type transportHeaders struct {
    modifiedSince string
}

func (t *transportHeaders) RoundTrip(req *http.Request) (*http.Response, error) {
    // Determine the last modified date based on the transportHeaders options.
    // Do not add any headers if blank or zero.
    if t.modifiedSince != "" {
        req.Header.Set("If-Modified-Since", t.modifiedSince)
    }
    return http.DefaultTransport.RoundTrip(req)
}
So by doing this I can intercept the call to RoundTrip and add my own header. This now means I can check the resources and see if they return a 304 HTTP status code. For example:
ERRO[0001] Error retrieving repository error="GET https://api.github.com/repos/chef-partners/camsa-setup: 304 []" name=camsa-setup vcs=github
I worked out how to do this after coming across this page - https://github.com/rmichela/go-reddit/blob/bd882abbb7496c54dbde66d92c35ad95d4db1211/authenticator.go#L117

Couldn't make new request verification for Slack API

I'm trying the new request verification process for Slack API on AWS Lambda but I can't produce a valid signature from a request.
The example shown in https://api.slack.com/docs/verifying-requests-from-slack is for a slash command, but I'm using it for an event subscription, specifically a subscription to a bot event (app_mention). Does the new process support event subscriptions as well?
If so, am I missing something?
This is my mapping template for the Integration Request in API Gateway. I can't get the raw request as the Slack documentation describes, but I did my best, like this:
{
  "body" : $input.body,
  "headers": {
    #foreach($param in $input.params().header.keySet())
    "$param": "$util.escapeJavaScript($input.params().header.get($param))" #if($foreach.hasNext),#end
    #end
  }
}
My function for verification:
def is_valid_request(headers, body):
    logger.info(f"DECODED_SECRET: {DECODED_SECRET}")
    logger.info(f"DECRYPTED_SECRET: {DECRYPTED_SECRET}")
    timestamp = headers.get(REQ_KEYS['timestamp'])
    logger.info(f"timestamp: {timestamp}")
    encoded_body = urlencode(body)
    logger.info(f"encoded_body: {encoded_body}")
    base_str = f"{SLACK_API_VER}:{timestamp}:{encoded_body}"
    logger.info(f"base_str: {base_str}")
    base_b = bytes(base_str, 'utf-8')
    dgst_str = hmac.new(DECRYPTED_SECRET, base_b, digestmod=sha256).hexdigest()
    sig_str = f"{SLACK_API_VER}={dgst_str}"
    logger.info(f"signature: {sig_str}")
    req_sig = headers.get(REQ_KEYS['sig'])
    logger.info(f"req_sig: {req_sig}")
    logger.info(f"comparing: {hmac.compare_digest(sig_str, req_sig)}")
    return hmac.compare_digest(sig_str, req_sig)
Lambda Log in CloudWatch. I can't show the values for security reasons but it seems like each variable/constant has a reasonable value:
DECODED_SECRET: ...
DECRYPTED_SECRET: ...
timestamp: 1532011621
encoded_body: ...
base_str: v0:1532011621:token= ... &team_id= ... &api_app_id= ...
signature: v0=3 ...
req_sig: v0=1 ...
comparing: False
signature should match req_sig, but it doesn't. I guess there is something wrong with base_str = f"{SLACK_API_VER}:{timestamp}:{encoded_body}", I mean the concatenation or the URL-encoding of the request body, but I'm not sure. Thank you in advance!
