How to add a unique id to each request in JMeter

I have a JSON array with n elements, each containing productName and productId. I would like to generate a unique productId for each element and for each request.
Currently I'm reading productId from a .csv file, but within each request the same productId is applied to all the elements. For example:
test.csv
productId
10
11
12
13
14
In JMeter it is substituted like below for request 1:
[
{
"productName": "Apple",
"productId": "10"
},
{
"productName": "Apple",
"productId": "10"
},
{
"productName": "Apple",
"productId": "10"
},
{
"productName": "Apple",
"productId": "10"
}
]
request 2:
[
{
"productName": "Apple",
"productId": "11"
},
{
"productName": "Apple",
"productId": "11"
},
{
"productName": "Apple",
"productId": "11"
},
{
"productName": "Apple",
"productId": "11"
}
]
But what I expect is that the first request should be
[
{
"productName": "Apple",
"productId": "10"
},
{
"productName": "Apple",
"productId": "11"
},
{
"productName": "Apple",
"productId": "12"
},
{
"productName": "Apple",
"productId": "13"
}
]
And the second request should be like below, and so on:
[
{
"productName": "Apple",
"productId": "14"
},
{
"productName": "Apple",
"productId": "15"
},
{
"productName": "Apple",
"productId": "16"
},
{
"productName": "Apple",
"productId": "17"
}
]
productId should be filled with some random id for each request, and each element in the JSON should get its own id. How can we achieve this in JMeter?

You can generate a random unique value with the JMeter __UUID function.
Replace the productId value with the following:
${__UUID}
Example
[
{
"productName": "Apple",
"productId": "${__UUID}"
},
{
"productName": "Apple",
"productId": "${__UUID}"
},
{
"productName": "Apple",
"productId": "${__UUID}"
},
{
"productName": "Apple",
"productId": "${__UUID}"
}
]

As per the CSV Data Set Config documentation:
By default, the file is only opened once, and each thread will use a different line from the file. However the order in which lines are passed to threads depends on the order in which they execute, which may vary between iterations. Lines are read at the start of each test iteration. The file name and mode are resolved in the first iteration.
If you want to generate a random number you can just go for the __Random() function, which produces a random number within the given range:
[
{
"productName": "Apple",
"productId": "${__Random(1,2147483647,)}"
},
{
"productName": "Apple",
"productId": "${__Random(1,2147483647,)}"
},
{
"productName": "Apple",
"productId": "${__Random(1,2147483647,)}"
},
{
"productName": "Apple",
"productId": "${__Random(1,2147483647,)}"
}
]
More information on the JMeter Functions concept: Apache JMeter Functions - An Introduction

Another solution could be using the __CSVRead function instead of the CSV Data Set Config element.
Note:
You will have to remove the column names, i.e. the first row.
Ensure you have sufficient test data in the CSV file (a sample productIds.csv is shown after the example below).
[
{
"productName": "Apple",
"productId": "${__CSVRead(productIds.csv,0)}${__CSVRead(productIds.csv,next)}"
},
{
"productName": "Apple",
"productId": "${__CSVRead(productIds.csv,0)}${__CSVRead(productIds.csv,next)}"
},
{
"productName": "Apple",
"productId": "${__CSVRead(productIds.csv,0)}${__CSVRead(productIds.csv,next)}"
},
{
"productName": "Apple",
"productId": "${__CSVRead(productIds.csv,0)}${__CSVRead(productIds.csv,next)}"
}
]
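For reference, productIds.csv for the __CSVRead approach would then contain only values, one per line, with no header row. The values below simply mirror the ids from the question:
10
11
12
13
14
15
16
17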

Related

Elasticsearch merge new document with the existing document

I want to merge a new document with the existing document in Elasticsearch instead of overriding it. I have the below record in ES:
{
"id": "1",
"student_name": "Rahul",
"books": [
{
"book_id": "11",
"book_name": "History",
"status": "Started"
}
]
}
I have received another JSON to process; I need to update the existing document if the id is the same, or just insert it otherwise. Suppose I receive the below JSON:
{
"id": "1",
"address": "Bangalore",
"books": [
{
"book_id": "11",
"book_name": "History",
"status": "Finished"
},
{
"book_id": "12",
"book_name": "History",
"status": "Started"
}
]
}
I want to have my final document like below:
{
"id": "1",
"student_name": "Rahul",
"address": "Bangalore",
"books": [
{
"book_id": "11",
"book_name": "History",
"status": "Finished"
},
{
"book_id": "12",
"book_name": "History",
"status": "Started"
}
]
}
So basically I want to merge the new JSON with the existing document, if any. That is, for any given key, be it top-level or nested, if it is present in the DB but not received this time, I have to retain it as it is; if I get any new key, I have to add it; and if a value has changed, I have to modify it.
Also, for the array of JSON objects inside the doc: if I get the same id in the new JSON, I have to replace that element, but if it is a new JSON object with a new id, I need to append it to the array.
I want to understand whether this is possible via ES queries, and if yes, how to achieve it. Merging at the application level and then overriding is one way I can think of, but I want to know a better way.
You can achieve this with an upsert query.
The first piece will be indexed as a new document because it doesn't exist yet:
POST my-index/_doc/1/_update
{
"doc": {
"id": "1",
"student_name": "Rahul",
"books": [
{
"book_id": "11",
"book_name": "History",
"status": "Started"
}
]
},
"doc_as_upsert": true
}
And the second piece will be merged with the first one because it already exists:
POST my-index/_doc/1/_update
{
"doc": {
"id": "1",
"address": "Bangalore",
"books": [
{
"book_id": "11",
"book_name": "History",
"status": "Finished"
},
{
"book_id": "12",
"book_name": "History",
"status": "Started"
}
]
},
"doc_as_upsert": true
}
The document you get after the two commands will be the one you expect:
GET my-index/_doc/1
=>
{
"id": "1",
"student_name": "Rahul",
"address": "Bangalore",
"books": [
{
"book_id": "11",
"book_name": "History",
"status": "Finished"
},
{
"book_id": "12",
"book_name": "History",
"status": "Started"
}
]
}
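Note that a partial-doc update (doc_as_upsert) merges objects at the field level, but arrays are replaced wholesale rather than merged element by element; it gives the expected result here only because the second payload contains both books. If you need the books array itself merged by book_id, one option is a scripted update. The following is a minimal sketch, not part of the answer above, assuming the same index/id and a Painless-capable version; it only handles the books array:
POST my-index/_doc/1/_update
{
  "script": {
    "lang": "painless",
    "source": "for (def nb : params.new_books) { boolean found = false; for (int i = 0; i < ctx._source.books.size(); i++) { if (ctx._source.books[i].book_id == nb.book_id) { ctx._source.books[i] = nb; found = true; } } if (!found) { ctx._source.books.add(nb); } }",
    "params": {
      "new_books": [
        { "book_id": "11", "book_name": "History", "status": "Finished" },
        { "book_id": "12", "book_name": "History", "status": "Started" }
      ]
    }
  }
}
Scalar fields such as address would still need to be set in the same script (e.g. ctx._source.address = params.address) or sent as a separate partial update.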

GraphQL query to get an object nested in multiple layers

I am a GraphQL noob and the first query I have to write turned out to be a complex one. Imagine this is the object I'm looking for:
{
"name": "ferrari",
"year": "1995"
}
Now, there is a nested object in which this could be present. The object could look like this:
{
"name": "car",
"year": "1990",
"morecars": [
{
"name": "ferrari",
"year": "1995"
},
{
"name": "bmw",
"year": "200"
}
]
}
or this
{
"name": "car",
"year": "1990",
"morecars": [
{
"name": "red",
"year": "1990",
"morecares": [
{
"name": "ferrari",
"year": "1995"
}
]
},
{
"name": "bmw",
"year": "200"
}
]
}
How do I fetch the ferrari I need?

Elasticsearch: how to apply multiple filters to the same value?

In short: when a field has multiple values, how can I get only those items where both of my filters apply to the SAME value in the multi-valued field?
Details
I have stored in Elasticsearch some items which have a nested field with multiple values, e.g.
"hits": [
{
"name": "John",
"tickets": [
{
"color": "green",
"code": "001"
},
{
"color": "red",
"code": "002"
}
]
},
{
"name": "Frank",
"tickets": [
{
"color": "red",
"code": "001"
},
{
"color": "green",
"code": "002"
}
]
}
]
Now consider these filters:
...
filter: [
{ terms: { 'tickets.code': '001' } },
{ terms: { 'tickets.color': 'green' } },
]
...
Both items match, because each one of them has at least one ticket with code "001" and each one of them has a ticket with color "green".
How do I write my filters so that only the first one matches, because it has a single ticket with code "001" AND color "green"?
Thank you in advance for any suggestion.
Your problem is caused by the fact that Elasticsearch flattens objects. So internally, your data is represented something like this:
{
"name": "John",
"tickets.color": ["green", "red"],
"tickets.code": ["001", "002"]
},
{
"name": "Frank",
"tickets.color": ["red", "green"],
"tickets.code": ["001", "002"]
}
It's impossible to know which color and code are on the same object. (The original source is also stored, in order to be returned when you make a request, but that's not the data that's queried when you search.)
There are two potential solutions here: denormalization, or nested data type. If you can at all get away with it, denormalization is the better choice here, because it's more efficient. If you denormalize your data, you might end up with a representation like this:
{
"name": "John",
"ticket": {
"color": "green",
"code": "001"
}
},
{
"name": "John",
"ticket": {
"color": "red",
"code": "002"
}
},
{
"name": "Frank",
"ticket": {
"color": "red",
"code": "001"
}
},
{
"name": , "Frank",
"ticket": {
"color": "green",
"code": "002"
}
}
If you use a nested data type, you'll have to use a mapping something like this:
{
"ticket": {
"type": "nested",
"properties": {
"color": {"type": "keyword"},
"code": {"type": "keyword"}
}
}
}
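With such a nested mapping in place, a query that requires both conditions to hold on the same ticket would look roughly like the sketch below, assuming an index called my-index and the keyword sub-fields mapped above:
GET my-index/_search
{
  "query": {
    "bool": {
      "filter": {
        "nested": {
          "path": "tickets",
          "query": {
            "bool": {
              "filter": [
                { "term": { "tickets.code": "001" } },
                { "term": { "tickets.color": "green" } }
              ]
            }
          }
        }
      }
    }
  }
}
Because the nested query scopes both term filters to a single tickets object, only John matches.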

Object Array search support in Elasticsearch

I have an array of objects in Elasticsearch.
I would like to search whether a particular field value appears in the top 2 positions of the array, without using a script.
Imagine my ES data is as follows
[
{
"_id": "TestID1",
"data": [
{
"name": "Test1",
"priority": 2
},
{
"name": "Test2",
"priority": 3
},
{
"name": "Test3",
"priority": 4
}
]
},
{
"_id": "TestID2",
"data": [
{
"name": "Test3",
"priority": 2
},
{
"name": "Test9",
"priority": 3
},
{
"name": "Test5",
"priority": 4
},
{
"name": "Test10",
"priority": 5
}
]
},
{
"_id": "TestID3",
"data": [
{
"name": "Test1",
"priority": 2
},
{
"name": "Test2",
"priority": 3
},
{
"name": "Test3",
"priority": 6
}
]
}
]
Here I would like to make a query which searches for Test3 ONLY within the top 2 elements of the data array.
Searching here would return the result
_id: TestID2's data
because only TestID2 has Test3 in the top 2 of the data array.
You will not be able to perform such a request directly without using a script. The only solution that I can think of is to create a copy of the array field containing only the first 2 elements. You will then be able to search on this field.
You can add an ingest pipeline to trim your array automatically.
PUT /_ingest/pipeline/top2_elements
{
"description": "Create a top2 field containing only the first two values of an array",
"processors": [
{
"script": {
"source": "ctx.top2 = [ctx.data[0], ctx.data[1]]"
}
}
]
}
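Documents then need to be indexed through that pipeline, for example PUT my-index/_doc/TestID2?pipeline=top2_elements (index name assumed). After that, a query against the copied field could look like this minimal sketch, assuming data (and therefore top2) is mapped as a plain object rather than nested:
GET my-index/_search
{
  "query": {
    "match": { "top2.name": "Test3" }
  }
}
If top2 were mapped as nested, the match would have to be wrapped in a nested query instead.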

Elasticsearch order by a certain field value first

I'd like to apply a certain sort to a query: it should sort my documents by a single value first, then all the others. I need to achieve something like ORDER BY CASE WHEN in MySQL, but I couldn't find out how to do it.
Each element in the index in Elastic has the following structure:
{
"id": 123,
"name": "Title",
"categories": ["A", "B", "C"],
"price": 100,
"city": "London",
"country": "United Kingdom",
"status": 1
}
I do the following query:
{
"fields": [],
"sort": [{"price": {"order": "asc"}}],
"size": 0,
"query": {
"query_string": {
"query": "status:1 AND country:'United Kingdom'"
}
},
"aggs": {
"id": {
"terms": {
"field": "id",
"size": 10
}
}
}
}
So, sorting so that the city value "Liverpool" comes first, and considering the following example:
{"id": 1, "name": "Test", "categories": ["A", "B", "C"], "price": 100, "city": "London", "country": "United Kingdom", "status": 1 }
{"id": 2, "name": "Sample", "categories": ["A", "D", "F"], "price": 200, "city": "Manchester", "country": "United Kingdom", "status": 1 }
{"id": 3, "name": "Title", "categories": ["X", "Y", "Z"], "price": 1000, "city": "Liverpool", "country": "United Kingdom", "status": 1 }
I expect to have as output the following id: 3, 1, 2.
How can I change my query to obtain this behaviour?
UPDATE: The version is 1.7.2
You should use "_script" for 1.7 version. Try this:
1.7:
"query" : {
....
},
"sort" : {
"_script" : {
"script" : "doc['city'].value == 'Liverpool' ? 1 : 0",
"type" : "number",
"order" : "desc"
},
"example_other_field_order":"asc",
"next_example_field_order":"asc"
}
For later versions of Elasticsearch (>= 5.5), check this doc.
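For reference, on those newer versions the equivalent script sort would look something like the sketch below, assuming city is mapped as keyword (otherwise point doc[...] at the not-analyzed sub-field, e.g. city.raw):
"sort": [
  {
    "_script": {
      "type": "number",
      "script": {
        "lang": "painless",
        "source": "doc['city'].value == 'Liverpool' ? 1 : 0"
      },
      "order": "desc"
    }
  },
  { "price": { "order": "asc" } }
]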
Actually, you want to sort by city first and then by price?
The key point is the type of the "city" field: it's a string (maybe you set it to analyzed?), not an integer. So if the "city" field is analyzed, you'd better add a sub-field named "raw" that is not analyzed; then you can sort by "city.raw" first.
Try the following:
"sort": [{"city.raw": {"order": "asc"}},{"price": {"order": "asc"}}],
You can visit https://www.elastic.co/guide/en/elasticsearch/guide/current/multi-fields.html for more help.
Otherwise, if you only need to make the "Liverpool" city come first and then the other cities when searching and sorting, you should use the boost feature in your query. Visit https://www.elastic.co/guide/en/elasticsearch/guide/2.x/multi-query-strings.html for more help.
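A boost-based sketch of that approach, written in current query DSL (ES 1.7 would need the older filtered query instead; field names taken from the question): move the filter conditions into a bool filter, add a boosted should clause on the preferred city, and sort by score before price.
{
  "query": {
    "bool": {
      "filter": {
        "query_string": { "query": "status:1 AND country:'United Kingdom'" }
      },
      "should": {
        "match": { "city": { "query": "Liverpool", "boost": 2 } }
      }
    }
  },
  "sort": [ "_score", { "price": { "order": "asc" } } ]
}
Documents matching the boosted should clause get a non-zero score and come first; the remaining documents all score 0 and fall back to the price sort, giving the expected order 3, 1, 2.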
