JMeter: How to send GET request with body data?

I want to perform a load test on my Elasticsearch deployment. The _search API of Elasticsearch expects body data with the search request, as documented here.
However, I see that the body data is sent empty when I send a GET request. I could verify it from "View Results Tree" as well as from the logs on my server. Is it not allowed to send Body Data in a GET request, or am I doing something wrong? I am using JMeter 3.0 r1743807. Screenshot also attached.
<HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="One-Dimension aggregation" enabled="true">
<boolProp name="HTTPSampler.postBodyRaw">true</boolProp>
<elementProp name="HTTPsampler.Arguments" elementType="Arguments">
<collectionProp name="Arguments.arguments">
<elementProp name="" elementType="HTTPArgument">
<boolProp name="HTTPArgument.always_encode">false</boolProp>
<stringProp name="Argument.value">{
"query": {
"filtered": {
"query": {
"query_string": {
"query": "+_exists_:category_list",
"analyze_wildcard": true
}
}
}
},
"size": 0,
"aggs": {
"2": {
"terms": {
"field": "category_list.raw",
"size": 20,
"order": {
"_count": "desc"
}
}
}
}
}</stringProp>
<stringProp name="Argument.metadata">=</stringProp>
</elementProp>
</collectionProp>
</elementProp>
<stringProp name="HTTPSampler.domain"></stringProp>
<stringProp name="HTTPSampler.port"></stringProp>
<stringProp name="HTTPSampler.connect_timeout"></stringProp>
<stringProp name="HTTPSampler.response_timeout"></stringProp>
<stringProp name="HTTPSampler.protocol"></stringProp>
<stringProp name="HTTPSampler.contentEncoding"></stringProp>
<stringProp name="HTTPSampler.path">/-*kibana*/_search/</stringProp>
<stringProp name="HTTPSampler.method">GET</stringProp>
<boolProp name="HTTPSampler.follow_redirects">true</boolProp>
<boolProp name="HTTPSampler.auto_redirects">false</boolProp>
<boolProp name="HTTPSampler.use_keepalive">true</boolProp>
<boolProp name="HTTPSampler.DO_MULTIPART_POST">false</boolProp>
<boolProp name="HTTPSampler.monitor">false</boolProp>
<stringProp name="HTTPSampler.embedded_url_re"></stringProp>
</HTTPSamplerProxy>

I recall answering a similar question here.
In short: you cannot do it using JMeter's HTTP Request sampler, but it is possible via scripting. I would recommend getting familiar with the How to Use BeanShell: JMeter's Favorite Built-in Component article prior to implementing the solution from the above answer.
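Outside of JMeter's HTTP Request sampler, most HTTP libraries have no trouble attaching a body to a GET. As a minimal sketch with Python's standard library (the URL and query here are placeholders, not taken from the question):

```python
import json
import urllib.request

# Build a GET request that carries a JSON body, the way Elasticsearch's
# _search API expects. urllib keeps the method as GET when it is set
# explicitly, even though a body is attached.
query = {"query": {"match_all": {}}, "size": 0}
req = urllib.request.Request(
    "http://localhost:9200/my-index/_search",  # placeholder URL
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="GET",
)

print(req.get_method())      # GET
print(req.data is not None)  # True
```

The request object confirms that the method stays GET while the body is attached, which is exactly what the sampler in JMeter 3.0 refused to do.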

It's fixed in the latest JMeter version, 3.1. See Bugzilla #60358.

When it comes to JMeter + Elasticsearch, POST and GET requests on /index-name/_search are equivalent.
At least for me.
Below is a screenshot from Insomnia:

Related

Find the keywords that matched the query in Elastic Search

I'm using Elastic Search to search several indices.
When the user performs a query, the matches are split between 1 or 2 keywords that yield results. I'd like to be able to know for every hit, which keyword it originated from.
So if the user searched for "ventolin for asthma", I'd like to know which hits are for "ventolin" and which are for "asthma".
That is, for this query:
{
'query': {
'multi_match': {
'query': 'ventolin for asthma',
'fuzziness': 2,
'prefix_length': 1,
'type': 'best_fields',
'fields': ['term*']
}
}
}
And these hits:
{
...
'hits': {
'total': {
'value': 287,
'relation': 'eq'
},
'max_score': 10.301256,
'hits': [{
'_index': 'normalized-term-mapping',
'_type': '_doc',
'_id': '194526',
'_score': 10.301256,
'_source': {
'term': 'Ventolin Evohaler',
...
}
}, {
'_index': 'normalized-term-mapping',
'_type': '_doc',
'_id': '194362',
'_score': 8.529675,
'_source': {
'term': 'Childhood Asthma',
...
}
},
...
]
}
}
I want to match the first hit with the keyword Ventolin and the second hit with Asthma.
Note that:
I use fuzziness == 2, so the keywords may not exactly match the hit term.
The indices use an analyzer (not a complex one, but not trivial).
I can try to write code to match the terms against the query, but that would effectively mean reimplementing the Elastic analysis in code, which is not a great solution.
Is there a way to get the matched term from the original query from Elastic?
Yes, there is a way to get the matched terms using the Highlight API.
You're using a multi_match query so the default highlight options may be sufficient for you. You do need to specify the fields you want to highlight with something like this:
{
'query': {
'multi_match': {
'query': 'ventolin for asthma',
'fuzziness': 2,
'prefix_length': 1,
'type': 'best_fields',
'fields': ['term*']
}
},
'highlight': {
'fields': {
'term*': {}
}
}
}
However, this won't return an array of matched items. Instead, you will get the fields with existing matches marked (usually with HTML, but you can customize it). You could use that markup to post-process and isolate the individual matches if you need them.
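As a sketch of that post-processing step, here is how the default <em> markers could be pulled out of a highlight fragment (the response snippet below is hypothetical, shaped like the hits in the question):

```python
import re

# Hypothetical 'highlight' fragments as returned with default settings,
# where matched words are wrapped in <em> tags.
hits = [
    {"_source": {"term": "Ventolin Evohaler"},
     "highlight": {"term": ["<em>Ventolin</em> Evohaler"]}},
    {"_source": {"term": "Childhood Asthma"},
     "highlight": {"term": ["Childhood <em>Asthma</em>"]}},
]

def matched_terms(hit):
    """Collect every <em>-marked word across all highlighted fields."""
    fragments = [f for frags in hit.get("highlight", {}).values() for f in frags]
    return [m for frag in fragments for m in re.findall(r"<em>(.*?)</em>", frag)]

for hit in hits:
    print(hit["_source"]["term"], "->", matched_terms(hit))
# Ventolin Evohaler -> ['Ventolin']
# Childhood Asthma -> ['Asthma']
```

This maps the first hit to "Ventolin" and the second to "Asthma" without reimplementing the analysis chain; the fuzziness and analyzer are applied by Elasticsearch before it emits the markers.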

How do I work with two endpoints using the JetBrains GraphQL plugin?

Currently I'm using something like the following in PyCharm with the GraphQL plugin for JetBrains IDEs to work with two different GraphQL endpoints (a local and a remote) that I switch between in my work, with the consequence that I need to manually overwrite the schema file whenever I switch.
Is there a way to do this with a different schema file for each endpoint? What is the correct idiom for working with (and switching between) two endpoints?
{
"name": "My Schema",
"schemaPath": "_schema.graphql",
"extensions": {
"endpoints": {
"Local GraphQL Endpoint": {
"url": "http://localhost:5000",
"headers": {
"user-agent": "JS GraphQL"
},
"introspect": true
},
"Remote GraphQL Endpoint": {
"url": "http://my.remote.io",
"headers": {
"user-agent": "JS GraphQL"
},
"introspect": true
}
}
}
}
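One approach worth trying (an assumption on my part, based on the graphql-config format the plugin reads, not something the plugin's docs were checked against here) is to split the configuration into named projects, each with its own schemaPath and endpoint, so neither schema file gets overwritten. The schema file names below are made up:

```json
{
  "projects": {
    "local": {
      "schemaPath": "schema-local.graphql",
      "extensions": {
        "endpoints": {
          "Local GraphQL Endpoint": {
            "url": "http://localhost:5000",
            "headers": { "user-agent": "JS GraphQL" },
            "introspect": true
          }
        }
      }
    },
    "remote": {
      "schemaPath": "schema-remote.graphql",
      "extensions": {
        "endpoints": {
          "Remote GraphQL Endpoint": {
            "url": "http://my.remote.io",
            "headers": { "user-agent": "JS GraphQL" },
            "introspect": true
          }
        }
      }
    }
  }
}
```

Alternatively, keeping one .graphqlconfig per directory (each pointing at its own schema file) achieves the same separation, since the plugin resolves the nearest config file.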

Executing an Azure API Management operation to save data in Azure Blob Storage fails in a PowerShell script, but not in Postman or the Developer Portal

In Azure API Management, different clients produce different results.
This is the API Management Policy to store a JSON-Document in Azure Blob Storage:
<base />
<!-- ########## put to storage ########## -->
<set-variable name="resource" value="#{
string prefix = "/payloads/" + context.Request.MatchedParameters["business-object"] + "/";
string fileName = string.empty;
return prefix + fileName;
}" />
<set-variable name="storageUrl" value="{{STORAGE_URL}}" />
<set-variable name="blobUrl" value="#((string)context.Variables["storageUrl"] + (string)context.Variables["resource"])" />
<set-variable name="storageKey" value="{{STORAGE_KEY}}" />
<set-variable name="storageAccountName" value="#(context.Variables.GetValueOrDefault<string>("storageUrl").Split('.')[0].Split('/')[2])" />
<set-variable name="date" value="#(DateTime.UtcNow.ToString("R"))" />
<set-variable name="version" value="2018-03-28" />
<trace source="keyInput">#{
string body = context.Request.Body.As<string>(preserveContent: true);
string contentType = "text/plain";
string contentLength = context.Request.Headers["Content-Length"][0];
var hmacSha256 = new System.Security.Cryptography.HMACSHA256 { Key = Convert.FromBase64String(context.Variables.GetValueOrDefault<string>("storageKey")) };
var payLoad = string.Format("{0}\n\n\n{1}\n\n{2}\n\n\n\n\n\n\nx-ms-blob-type:BlockBlob\nx-ms-date:{3}\nx-ms-version:{4}\n{5}",
"PUT",
contentLength,
contentType,
context.Variables["date"],
context.Variables["version"],
"/" + context.Variables.GetValueOrDefault<string>("storageAccountName") + context.Variables.GetValueOrDefault<string>("resource"));
return payLoad;
}</trace>
<send-request mode="new" response-variable-name="putStorageRequest" timeout="5" ignore-error="true">
<set-url>#((string)context.Variables["blobUrl"])</set-url>
<set-method>PUT</set-method>
<set-header name="x-ms-date" exists-action="override">
<value>#((string) context.Variables["date"] )</value>
</set-header>
<set-header name="x-ms-version" exists-action="override">
<value>#((string) context.Variables["version"] )</value>
</set-header>
<set-header name="x-ms-blob-type" exists-action="override">
<value>BlockBlob</value>
</set-header>
<set-header name="Content-Type" exists-action="override">
<value>application/json</value>
</set-header>
<set-header name="Authorization" exists-action="override">
<value>#{
string body = context.Request.Body.As<string>(preserveContent: true);
string contentType = "application/json";
string contentLength = context.Request.Headers["Content-Length"][0];
var hmacSha256 = new System.Security.Cryptography.HMACSHA256 { Key = Convert.FromBase64String(context.Variables.GetValueOrDefault<string>("storageKey")) };
var payLoad = string.Format("{0}\n\n\n{1}\n\n{2}\n\n\n\n\n\n\nx-ms-blob-type:BlockBlob\nx-ms-date:{3}\nx-ms-version:{4}\n{5}",
"PUT",
contentLength,
contentType,
context.Variables["date"],
context.Variables["version"],
"/" + context.Variables.GetValueOrDefault<string>("storageAccountName") + context.Variables.GetValueOrDefault<string>("resource"));
return "SharedKey "+ context.Variables.GetValueOrDefault<string>("storageAccountName") + ":" + Convert.ToBase64String(hmacSha256.ComputeHash(System.Text.Encoding.UTF8.GetBytes(payLoad)));
}</value>
</set-header>
<set-body>#( context.Request.Body.As<string>(true) )</set-body>
</send-request>
<choose>
<when condition="#(context.Variables["putStorageRequest"] == null)">
<return-response>
<set-status code="500" reason="Storage failure" />
<set-body />
</return-response>
</when>
<when condition="#(((IResponse)context.Variables["putStorageRequest"]).StatusCode != 201)">
<return-response>
<set-status code="500" reason="Storage failure" />
<set-body>#(((IResponse)context.Variables["putStorageRequest"]).Body.As<string>())</set-body>
</return-response>
</when>
</choose>
</inbound>
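The Authorization header built above is a Storage SharedKey signature: an HMAC-SHA256 over a canonical string-to-sign. The same computation can be reproduced in a few lines of Python (the key and values below are made up for illustration, mirroring the policy's variables):

```python
import base64
import hashlib
import hmac

# Hypothetical inputs mirroring the policy's variables.
storage_account = "loremipsum"
storage_key = base64.b64encode(b"not-a-real-key").decode()
content_length = "13782"
content_type = "text/plain"
date = "Tue, 22 Oct 2019 13:52:47 GMT"
version = "2018-03-28"
resource = "/payloads/stuff/"

# Same canonical string the policy formats: VERB, standard headers,
# canonicalized x-ms-* headers, canonicalized resource.
string_to_sign = (
    "PUT\n\n\n{0}\n\n{1}\n\n\n\n\n\n\n"
    "x-ms-blob-type:BlockBlob\nx-ms-date:{2}\nx-ms-version:{3}\n{4}"
).format(content_length, content_type, date, version,
         "/" + storage_account + resource)

signature = base64.b64encode(
    hmac.new(base64.b64decode(storage_key),
             string_to_sign.encode("utf-8"),
             hashlib.sha256).digest()).decode()

print("SharedKey " + storage_account + ":" + signature)
```

The key point for the failure below: the Content-Length value is baked into the signed string. If the length actually sent on the wire differs from the one that was signed, the storage service computes a different signature and rejects the request with 403.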
Ocp-Apim-Subscription-Key is used as an HTTP header for authentication.
Executing it in the API Management Developer Portal and Postman works as expected, and the document is stored in Azure Blob Storage.
Executing it from a PowerShell script fails:
Invoke-RestMethod -Method POST -Uri $url -Headers $authHeaders -Body $body -ContentType "application/json"
Exception:
code: 403
reason: "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature."
The problem is that the Content-Length changes during the inbound flow.
Please find an excerpt of the OCP-Trace below:
{
"traceEntries": {
"inbound": [
{
"source": "api-inspector",
"timestamp": "2019-10-22T13:52:47.4545895Z",
"elapsed": "00:00:00.0019930",
"data": {
"request": {
"method": "POST",
"url": "https://lorem.ipsum/private/api/store",
"headers": [
{
"name": "Ocp-Apim-Subscription-Key",
"value": "secret"
},
{
"name": "Connection",
"value": "Keep-Alive"
},
{
"name": "Content-Length",
"value": "13782"
},
{
"name": "Content-Type",
"value": "application/json"
},
{
"name": "Host",
"value": "lorem.ipsum"
},
{
"name": "User-Agent",
"value": "Mozilla/5.0 (Windows NT; Windows NT 10.0; en-US) WindowsPowerShell/5.1.14393.3053"
}
]
}
}
},
{
"source": "keyInput",
"timestamp": "2019-10-22T13:52:47.4545895Z",
"elapsed": "00:00:00.0036425",
"data": "PUT 13782 text/plain x-ms-blob-type:BlockBlob x-ms-date:Tue, 22 Oct 2019 13:52:47 GMT x-ms-version:2018-03-28 --CUTTED--"
},
{
"source": "send-request",
"timestamp": "2019-10-22T13:52:47.4545895Z",
"elapsed": "00:00:00.0040858",
"data": {
"message": "Request is being forwarded to the backend service. Timeout set to 5 seconds",
"request": {
"method": "PUT",
"url": "https://lorem.ipsum.blob.core.windows.net/payloads/stuff/b812a1b4-decd-45a1-bf00-f7792fb3789a",
"headers": [
{
"name": "Content-Length",
"value": 13784
}
]
}
}
},
{
"source": "send-request",
"timestamp": "2019-10-22T13:52:47.5639587Z",
"elapsed": "00:00:00.1123550",
"data": {
"response": {
"status": {
"code": 403,
"reason": "Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature."
}
}
}
}
]
}
}
The Content-Length does not change if the API is called with Postman.
Why does the Content-Length change and cause an authentication issue?
--- UPDATE ---
It also depends on the content:
- vs. –
Good:
{
"value": "te-st"
}
Bad:
{
"value": "te–st"
}
The other thing is the file encoding of the JSON document used in PowerShell:
The "Bad" document works when saved as ANSI.
The "Bad" document does not work when saved as UTF-8.
This makes sense; it's also documented:
https://learn.microsoft.com/en-us/powershell/scripting/components/vscode/understanding-file-encoding?view=powershell-6#common-causes-of-encoding-issues
This problem occurs because VSCode encodes the character – in UTF-8 as the bytes 0xE2 0x80 0x93. When these bytes are decoded as Windows-1252, they are interpreted as the characters â€“.
Some strange character sequences that you might see include:
â€“ instead of –
â€” instead of —
But this does not explain, why the Content-Length changes in API Management.
And how do I handle wrong encoding in API Management?
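The two extra bytes in the trace (13782 signed vs. 13784 sent) line up exactly with a single en dash being re-encoded: it is one byte in Windows-1252 but three bytes in UTF-8. A quick stdlib-only Python check:

```python
# "te-st" (ASCII hyphen) vs. "te–st" (en dash, U+2013)
good = "te-st"
bad = "te\u2013st"

print(len(good.encode("utf-8")))   # 5 bytes: every character is ASCII
print(len(bad.encode("cp1252")))   # 5 bytes: the en dash is one byte in Windows-1252
print(len(bad.encode("utf-8")))    # 7 bytes: the en dash becomes 0xE2 0x80 0x93

# If the signature is computed over the single-byte length but the body is
# forwarded re-encoded as UTF-8, Content-Length grows by 2 per en dash,
# matching 13782 vs. 13784 in the trace above.
print(len(bad.encode("utf-8")) - len(bad.encode("cp1252")))  # 2
```

On the PowerShell side, one way to keep the length stable is to send the body as an explicit UTF-8 byte array rather than a string, so no client-side re-encoding happens between signing and sending.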

Spring Data MongoDB - custom query [duplicate]

This question already has answers here:
Query with sort() and limit() in Spring Repository interface
(2 answers)
Closed 5 years ago.
I want to store trace info like below:
{
"timestamp": 1394343677415,
"info": {
"method": "GET",
"path": "/trace",
"headers": {
"request": {
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
"Connection": "keep-alive",
"Accept-Encoding": "gzip, deflate",
"User-Agent": "Mozilla/5.0 Gecko/Firefox",
"Accept-Language": "en-US,en;q=0.5",
"Cookie": "_ga=GA1.1.827067509.1390890128; ..."
"Authorization": "Basic ...",
"Host": "localhost:8080"
},
"response": {
"Strict-Transport-Security": "max-age=31536000 ; includeSubDomains",
"X-Application-Context": "application:8080",
"Content-Type": "application/json;charset=UTF-8",
"status": "200"
}
}
}
}
My @Document entity extends HashMap.
Now I have to write custom query for pagination.
In Mongo client shell I would write it:
db.traceInfo.find({"headers.response.status": "404"}).limit(n);
and it works, but I don't know how to write this query as @Query in a Spring MongoRepository. How can I do that?
This is straightforward, but the keyword limit is not supported by spring-data-mongodb; check the supported repository query keywords here: https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repository-query-keywords
A possible solution:
@Query("{ 'headers.response.status': '?0' }")
List<T> findByGivenStatus(String status, Pageable pageable);
// usage: findByGivenStatus("404", PageRequest.of(0, n)) caps the result at n documents

Elasticsearch response time - strange values

I'm querying a simple Elasticsearch index with house-number data.
".house-numbers": {
"mappings": {
"house-number": {
"properties": {
"id": {
"type": "keyword"
},
"value": {
"type": "text",
"index_options": "docs"
}
}
}
}
}
Then I query the data with a POST HTTP request.
Request URL:
http://localhost:9200/.house-numbers/housenumber/_search
Headers:
Content-Type: text/plain
Content-Length: 55
Accept: */*
Accept-Encoding: gzip, deflate, br
Request body:
{
"size": 30,
"query": {
"match": {
"value": {
"query": "2 3"
}
}
}
}
The request returns data in 10-30 ms and everything works fine. The Elasticsearch response parameter took is small in all cases, 3-5 ms.
When I change size in the request body to "size": 35, the response time suddenly jumps to 500 ms. The took parameter from Elasticsearch stays the same. There are no special characters and the size of the response is very similar.
I tried many clients (NEST, Postman, Fiddler) to make these requests; every client shows the same behaviour.
My Elasticsearch settings contain only:
http.compression : true
http.compression_level : 9
My JVM stats:
"jvm": {
"timestamp": 1478108615141,
"uptime_in_millis": 17150141,
"mem": {
"heap_used_in_bytes": 1384307624,
"heap_used_percent": 66,
"heap_committed_in_bytes": 2077753344,
"heap_max_in_bytes": 2077753344,
"non_heap_used_in_bytes": 96403904,
"non_heap_committed_in_bytes": 101502976,
"pools": {
"young": {
"used_in_bytes": 324358632,
"max_in_bytes": 558432256,
"peak_used_in_bytes": 558432256,
"peak_max_in_bytes": 558432256
},
"survivor": {
"used_in_bytes": 69730304,
"max_in_bytes": 69730304,
"peak_used_in_bytes": 69730304,
"peak_max_in_bytes": 69730304
},
"old": {
"used_in_bytes": 990220848,
"max_in_bytes": 1449590784,
"peak_used_in_bytes": 1190046816,
"peak_max_in_bytes": 1449590784
}...
I tried different versions of Elasticsearch.
I tried different settings: turning off http.compression, changing compression_level.
I tried other hosts for Elasticsearch.
I have no idea what can cause this problem, and I can't continue with my work.
Any idea where to look or how to proceed?
Of course, the problem was not in Elasticsearch but in the HTTP communication, especially when HTTP compression was turned on.
Hints to remove the delays:
close Fiddler
disable firewall and anti-virus software
close all programs that might intercept HTTP communication
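Separately from the interception issue, http.compression_level: 9 trades CPU time for output size on every response. The trade-off can be sketched with Python's stdlib gzip (the payload below is synthetic and repetitive, not real Elasticsearch output, so the ratios are only illustrative):

```python
import gzip
import time

# Synthetic, repetitive payload roughly the shape of a JSON hits array.
payload = b'{"_id":"194526","value":"2 3","score":10.3},' * 5000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=level)
    elapsed = time.perf_counter() - start
    # Higher levels spend more CPU for (often marginally) smaller output.
    print(f"level {level}: {len(compressed)} bytes in {elapsed * 1000:.2f} ms")
    assert gzip.decompress(compressed) == payload
```

For latency-sensitive search responses, the default level (6) or lower is usually the safer choice than 9.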
