Pass multiple (distinct) values to a single attribute in one iteration in JMeter - performance

Unable to pass multiple values in a single iteration from the CSV Data Set Config.
POST body:
[{"customerOrderId": "WS_${_RandomString(5,ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,)}", "shipFromLocationId": "${ship_from_location}", "deliveryDate": "${expected_arrival_date}Z", "deliveryLatestDate": "${expected_arrival_date}Z", "packId": "${pack_id}", "itemId": "${item_id}", "type": "fresh", "itemDescription": "${random_text}", "quantityOrdered": "${qty_ordered}", "quantityType": "CS", "createdAt": "${current_time}Z"},
{"customerOrderId": "WS${_RandomString(5,ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,)}", "shipFromLocationId": "${ship_from_location}", "deliveryDate": "${expected_arrival_date}Z", "deliveryLatestDate": "${expected_arrival_date}Z", "packId": "${pack_id}", "itemId": "${item_id}", "type": "fresh", "itemDescription": "${random_text}", "quantityOrdered": "${qty_ordered}", "quantityType": "CS", "createdAt": "${current_time}Z"},
{"customerOrderId": "WS${__RandomString(5,ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,)}", "shipFromLocationId": "${ship_from_location}", "deliveryDate": "${expected_arrival_date}Z", "deliveryLatestDate": "${expected_arrival_date}Z", "packId": "${pack_id}", "itemId": "${item_id}", "type": "fresh", "itemDescription": "${random_text}", "quantityOrdered": "${qty_ordered}", "quantityType": "CS", "createdAt": "${current_time}Z"}]
CSV Data File
Column 2 is itemId; I need a different itemId in a single iteration. As the body contains several itemId fields, I need to make sure each one gets a different item ID from the CSV.
How can I achieve this? With the CSV Data Set Config I don't see any option to achieve this.

Given the default CSV Data Set Config Sharing Mode of All Threads, each virtual user will read the next line from the CSV file on each iteration.
If you want to read several lines within the bounds of a single iteration, you will need to consider switching to the __CSVRead() function instead.
Replace ${item_id} with ${__CSVRead(test.csv,1)}${__CSVRead(test.csv,next)}. The first call reads column 1 of the current row (columns are zero-based, so this is your second column) and the second call advances the pointer to the next row:
[
{
"customerOrderId": "WS_${_RandomString(5,ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,)}",
"shipFromLocationId": "${ship_from_location}",
"deliveryDate": "${expected_arrival_date}Z",
"deliveryLatestDate": "${expected_arrival_date}Z",
"packId": "${pack_id}",
"itemId": "${__CSVRead(test.csv,1)}${__CSVRead(test.csv,next)}",
"type": "fresh",
"itemDescription": "${random_text}",
"quantityOrdered": "${qty_ordered}",
"quantityType": "CS",
"createdAt": "${current_time}Z"
},
{
"customerOrderId": "WS${_RandomString(5,ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,)}",
"shipFromLocationId": "${ship_from_location}",
"deliveryDate": "${expected_arrival_date}Z",
"deliveryLatestDate": "${expected_arrival_date}Z",
"packId": "${pack_id}",
"itemId": "${__CSVRead(test.csv,1)}${__CSVRead(test.csv,next)}",
"type": "fresh",
"itemDescription": "${random_text}",
"quantityOrdered": "${qty_ordered}",
"quantityType": "CS",
"createdAt": "${current_time}Z"
},
{
"customerOrderId": "WS${__RandomString(5,ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890,)}",
"shipFromLocationId": "${ship_from_location}",
"deliveryDate": "${expected_arrival_date}Z",
"deliveryLatestDate": "${expected_arrival_date}Z",
"packId": "${pack_id}",
"itemId": "${__CSVRead(test.csv,1)}${__CSVRead(test.csv,next)}",
"type": "fresh",
"itemDescription": "${random_text}",
"quantityOrdered": "${qty_ordered}",
"quantityType": "CS",
"createdAt": "${current_time}Z"
}
]
Again, replace test.csv with either a relative or a full path to your CSV file.
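For illustration, a hypothetical test.csv could look like this, with the item IDs in the second column (index 1):
P001,ITEM-001
P002,ITEM-002
P003,ITEM-003
Within a single iteration of one thread, the three occurrences of ${__CSVRead(test.csv,1)}${__CSVRead(test.csv,next)} would then resolve to ITEM-001, ITEM-002 and ITEM-003, because each ,next call moves that thread's pointer down one row.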

Related

Nifi - Route the JSON based on the Array Name

I am new to NiFi. I have a requirement where we get multiple JSON inputs with different header names. I have to parse the JSON and insert it into different tables based on the header value.
I am not sure how to use the RouteOnContent processor or the EvaluateJsonPath processor.
Input 1
{
"Location": [
{
"country": "US",
"division": "Central",
"region": "Big South",
"locationID": 1015,
"location_name": "Hattiesburg, MS (XF)",
"location_type": "RETAIL",
"location_sub_type": "COS",
"store_type": "",
"planned_open_date": "",
"planned_close_date": "",
"actual_open_date": "2017-07-26",
"actual_close_date": "",
"new_store_flag": "",
"address1": "2100 Lincoln Road",
"address2": "",
"city": "Hattiesburg",
"state": "MS",
"zip": 39402,
"include_for_planning": "Y"
},
{
"country": "US",
"division": "Central",
"region": "Big South",
"locationID": 1028,
"location_name": "Laurel, MS",
"location_type": "RETAIL",
"location_sub_type": "COS",
"store_type": "",
"planned_open_date": "",
"planned_close_date": "",
"actual_open_date": "",
"actual_close_date": "",
"new_store_flag": "",
"address1": "1225 5th street",
"address2": "",
"city": "Laurel",
"state": "MS",
"zip": 39440,
"include_for_planning": "Y"
}
]
}
Input 2
{
"Item": [
{
"npi_code": "NEW",
"cifa_category": "XM",
"o9_category": "Accessories"
},
{
"npi_code": "NEW",
"cifa_category": "XM0",
"o9_category": "Accessories"
}
]
}
Use the website https://jsonpath.com/ to figure out the proper JSONPath expression. What you could potentially do is: if the array contains $.npi_code then do X, and if it contains $.country then do Y.
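A rough sketch of one way to wire this with RouteOnContent (the dynamic property names below are only illustrative): set Match Requirement to "content must contain match" and add one regex property per payload type; each property becomes a relationship you can connect to the matching downstream flow.
Match Requirement: content must contain match
location: "Location"\s*:\s*\[
item: "Item"\s*:\s*\[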

Kafka JDBC source connector for Oracle DB - How to map nested json to avro schema

I am using the Kafka JDBC source connector (confluent.io) to get data from an Oracle DB and push it to a Kafka topic.
The DB table "test_table" has these columns.
create table test_table
(
json_obj varchar2(4000),
last_update_date DATE
)
The JSON returned by the query has the structure below.
{
"name": "John",
"department": {
"deptId": "1",
"deptName": "dept1"
}
}
The Avro schema applied to the Kafka topic is:
{
"fields": [
{
"name": "name",
"type": "string"
},
{
"name": "department",
"type":[
{
"fields": [
{
"name": "deptId",
"type": "string"
},
{
"name": "deptName",
"type": [
"string",
"null"
]
}
],
"name": "DeptObj",
"type": "record"
},"null"]
}
],
"name": "EmpObj",
"namespace": "com.test",
"type": "record"
}
And I am using the below connector config.
{
"name": "JdbcSourceConnectorConnector_0",
"config": {
"schema.registry.url": "<schema-reg-url>",
"name": "JdbcSourceConnectorConnector_0",
"connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
"tasks.max": "1",
"connection.url": "jdbc:oracle:thin:#<ldap-connect-string>/<db-name>",
"connection.user": "<db-schema-name>",
"connection.password": "<db-pwd>",
"numeric.mapping": "best_fit",
"dialect.name": "OracleDatabaseDialect",
"mode": "timestamp",
"timestamp.column.name": "last_update_date",
"validate.non.null": "false",
"query": "SELECT * FROM (SELECT JSON_VALUE(json_obj, '$.empId') as \"empId\", JSON_VALUE(json_obj, '$.department.deptId') as \"department.deptId\", JSON_VALUE(json_obj, '$.department.deptName') as \"department.deptName\" FROM test_table) A",
"table.types": "TABLE",
"poll.interval.ms": "30000",
"topic.prefix": "<my-topic>",
"db.timezone": "America/Los_Angeles",
"key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
"value.serializer": "io.confluent.kafka.serializers.KafkaAvroSerializer",
"transforms": "createKeyStruct,ExtractField, addNamespace",
"transforms.createKeyStruct.fields": "empId",
"transforms.createKeyStruct.type": "org.apache.kafka.connect.transforms.ValueToKey",
"transforms.ExtractField.field": "empId",
"transforms.ExtractField.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
"transforms.addNamespace.type":"org.apache.kafka.connect.transforms.SetSchemaMetadata$Value",
"transforms.addNamespace.schema.name": "EmpObj"
}
}
But when this connector gets the JSON data, it throws the below error.
Caused by: org.apache.avro.SchemaParseException: Illegal character in: department.deptId
at org.apache.avro.Schema.validateName(Schema.java:1561)
at org.apache.avro.Schema.access$400(Schema.java:87)
at org.apache.avro.Schema$Field.<init>(Schema.java:541)
at org.apache.avro.Schema$Field.<init>(Schema.java:580)
at io.confluent.connect.avro.AvroData.addAvroRecordField(AvroData.java:1114)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:910)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:732)
at io.confluent.connect.avro.AvroData.fromConnectSchema(AvroData.java:726)
at io.confluent.connect.avro.AvroConverter.fromConnectData(AvroConverter.java:85)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:63)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$3(WorkerSourceTask.java:321)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 11 more
[2022-10-10 07:53:57,012] INFO Stopping JDBC source task (io.confluent.connect.jdbc.source.JdbcSourceTask)
That means the way I am trying to map the JSON to Avro is not correct. I need to know how to map department.deptId to the Avro schema.
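The stack trace points at Avro name validation: Avro field names may contain only letters, digits and underscores, so an alias such as department.deptId can never become an Avro field name as-is. A minimal sketch of the same query with dot-free aliases (this yields a flat value, so the nested DeptObj record in the schema above would still need to be built some other way):
SELECT
JSON_VALUE(json_obj, '$.empId') AS "empId",
JSON_VALUE(json_obj, '$.department.deptId') AS "deptId",
JSON_VALUE(json_obj, '$.department.deptName') AS "deptName"
FROM test_table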

Kafka to Elasticsearch connector not able to start

Elasticsearch version: elasticsearch-7.6.0
I am trying to move data from a Kafka topic to Elasticsearch by using the Elasticsearch Sink connector.
But starting the connector shows the below error:
2022-05-28 09:39:27,864] ERROR [SINK_ELASTIC_TEST_01|task-0] Failed to create mapping for index users.user_details3 with schema Schema{STRING} due to 'ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=Root mapping definition has unsupported parameters: [type : text] [fields : {keyword={ignore_above=256, type=keyword}}]]]' after 6 attempt(s) (io.confluent.connect.elasticsearch.RetryUtil:164)
ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=Root mapping definition has unsupported parameters: [type : text] [fields : {keyword={ignore_above=256, type=keyword}}]]]
at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:178)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2484)
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:2461)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:2184)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:2154)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:2118)
at org.elasticsearch.client.IndicesClient.putMapping(IndicesClient.java:440)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$createMapping$3(ElasticsearchClient.java:238)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:158)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:119)
at io.confluent.connect.elasticsearch.ElasticsearchClient.callWithRetries(ElasticsearchClient.java:426)
at io.confluent.connect.elasticsearch.ElasticsearchClient.createMapping(ElasticsearchClient.java:236)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.checkMapping(ElasticsearchSinkTask.java:151)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.tryWriteRecord(ElasticsearchSinkTask.java:294)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.put(ElasticsearchSinkTask.java:118)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:584)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:255)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://localhost:9200], URI [/users.user_details3/_mapping?master_timeout=30s&timeout=30s], status line [HTTP/1.1 400 Bad Request]
Topic data
rowtime: 2022/05/27 13:46:48.136 Z, key: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D642000000022B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D642F23DB1FBE13668180004"}}"}, value: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D642000000022B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D642F23DB1FBE13668180004"}, "operationType": "insert", "clusterTime": {"$timestamp": {"t": 1653659202, "i": 2}}, "fullDocument": {"_id": {"$oid": "6290d642f23db1fbe1366818"}, "userid": 1.0, "name": "Gaurav"}, "ns": {"db": "users", "coll": "user_details3"}, "documentKey": {"_id": {"$oid": "6290d642f23db1fbe1366818"}}}"}, partition: 0
rowtime: 2022/05/27 13:47:06.142 Z, key: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D654000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D654F23DB1FBE13668190004"}}"}, value: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D654000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D654F23DB1FBE13668190004"}, "operationType": "insert", "clusterTime": {"$timestamp": {"t": 1653659220, "i": 1}}, "fullDocument": {"_id": {"$oid": "6290d654f23db1fbe1366819"}, "userid": 1.0, "name": "Gaurav"}, "ns": {"db": "users", "coll": "user_details3"}, "documentKey": {"_id": {"$oid": "6290d654f23db1fbe1366819"}}}"}, partition: 0
rowtime: 2022/05/27 13:47:24.149 Z, key: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D668000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D668F23DB1FBE136681A0004"}}"}, value: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D668000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D668F23DB1FBE136681A0004"}, "operationType": "insert", "clusterTime": {"$timestamp": {"t": 1653659240, "i": 1}}, "fullDocument": {"_id": {"$oid": "6290d668f23db1fbe136681a"}, "userid": 1.0, "name": "Gaurav"}, "ns": {"db": "users", "coll": "user_details3"}, "documentKey": {"_id": {"$oid": "6290d668f23db1fbe136681a"}}}"}, partition: 0
rowtime: 2022/05/27 13:48:00.156 Z, key: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D68A000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D68AF23DB1FBE136681B0004"}}"}, value: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D68A000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D68AF23DB1FBE136681B0004"}, "operationType": "insert", "clusterTime": {"$timestamp": {"t": 1653659274, "i": 1}}, "fullDocument": {"_id": {"$oid": "6290d68af23db1fbe136681b"}, "userid": 1.0, "name": "Gaurav"}, "ns": {"db": "users", "coll": "user_details3"}, "documentKey": {"_id": {"$oid": "6290d68af23db1fbe136681b"}}}"}, partition: 0
rowtime: 2022/05/27 13:50:00.182 Z, key: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D706000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D706F23DB1FBE136681C0004"}}"}, value: {"schema":{"type":"string","optional":false},"payload":"{"_id": {"_data": "826290D706000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D706F23DB1FBE136681C0004"}, "operationType": "insert", "clusterTime": {"$timestamp": {"t": 1653659398, "i": 1}}, "fullDocument": {"_id": {"$oid": "6290d706f23db1fbe136681c"}, "userid": 2.0, "name": "Gaurav"}, "ns": {"db": "users", "coll": "user_details3"}, "documentKey": {"_id": {"$oid": "6290d706f23db1fbe136681c"}}}"}, partition: 0
ElasticSearchSink Configuration
{
"name": "SINK_ELASTIC_TEST_01",
"config": {
"type.name": "kafka-connect",
"name": "SINK_ELASTIC_TEST_01",
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"errors.log.enable": "true",
"errors.log.include.messages": "true",
"topics": "users.user_details3",
"connection.url": "http://localhost:9200",
"connection.username": "",
"connection.password": "",
"key.ignore": "true",
"schema.ignore": "false"
}
}
Please note that I am creating the topic from a MongoDB source connector, so at a high level I am trying to achieve this:
MongoDB --> MongoDBSourceConnector--> Kafka --> ElasticSearchSinkConnector --> ElasticSearch
Please find the MongoDBSourceConnector Configuration for reference
{
"name": "source-mongodb-kafka-stream",
"config": {
"name": "source-mongodb-kafka-stream",
"connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"connection.uri": "mongodb://localhost:27017",
"database": "users",
"collection": "",
"topic.prefix": ""
}
}
Update 06-03-2022 after OneCricketeer's comments
I have created an index in Elasticsearch with the same name as the topic:
PUT /users_generic/_doc/1
{
"id":{
"_data":"826290D706000000012B022C0100296E5A100429BEBD4C4B7F4C0BA80B881299B5FF9246645F696400646290D706F23DB1FBE136681C0004"
},
"operationType":"insert",
"clusterTime":{
"$timestamp":{
"t":1653659398,
"i":1
}
},
"fullDocument":{
"id":{
"$oid":"6290d706f23db1fbe136681c"
},
"userid":2.0,
"name":"Gaurav"
},
"ns":{
"db":"users",
"coll":"user_details3"
},
"documentKey":{
"id":{
"$oid":"6290d706f23db1fbe136681c"
}
}
}
With the above step the mapping has been created. Now when I execute the connector again with the same parameters as above, it throws a different error:
[2022-06-03 12:39:16,691] ERROR [SINK_ELASTIC_TEST_01|task-0] Failed to execute bulk request due to 'org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes' after 6 attempt(s) (io.confluent.connect.elasticsearch.RetryUtil:164)
org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes

Compare two JSON arrays using two or more columns values in Dataweave 2.0

I had a task where I needed to compare and filter two JSON arrays based on matching values in one column of each array, so I used this answer from this question.
However, now I need to compare the two JSON arrays matching two, or even three, column values.
I already tried to use one map inside another, but it isn't working.
The examples could be the ones in the answer I used: compare db.code = file.code, db.name = file.nm and db.id = file.identity.
var db = [
{
"CODE": "A11",
"NAME": "Alpha",
"ID": "C10000"
},
{
"CODE": "B12",
"NAME": "Bravo",
"ID": "B20000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
},
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "E12",
"NAME": "Echo",
"ID": "E50000"
}
]
var file = [
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12"
},
{
"IDENTITY": "C30000",
"NM": "Charlie",
"CODE": "C11"
}
]
See if this works for you
%dw 2.0
output application/json
var file = [
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12"
},
{
"IDENTITY": "C30000",
"NM": "Charlie",
"CODE": "C11"
}
]
var db = [
{
"CODE": "A11",
"NAME": "Alpha",
"ID": "C10000"
},
{
"CODE": "B12",
"NAME": "Bravo",
"ID": "B20000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
},
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "E12",
"NAME": "Echo",
"ID": "E50000"
}
]
---
file flatMap(v) -> (
db filter (v.IDENTITY == $.ID and v.NM == $.NAME and v.CODE == $.CODE)
)
flatMap is used instead of map in order to flatten the result; otherwise you would get an array of arrays in the output. That is cleaner unless you are expecting the possibility of multiple matches per file entry, in which case I'd stick with map.
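With the sample db and file values above, this script would output the two matching db entries, Delta first and then Charlie, following the order of file:
[
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
}
]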
You can compare objects in DW directly, so the solution you linked can be modified to the following:
%dw 2.0
import * from dw::core::Arrays
output application/json
var db = [
{
"CODE": "A11",
"NAME": "Alpha",
"ID": "C10000"
},
{
"CODE": "B12",
"NAME": "Bravo",
"ID": "B20000"
},
{
"CODE": "C11",
"NAME": "Charlie",
"ID": "C30000"
},
{
"CODE": "D12",
"NAME": "Delta",
"ID": "D40000"
},
{
"CODE": "E12",
"NAME": "Echo",
"ID": "E50000"
}
]
var file = [
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12"
},
{
"IDENTITY": "C30000",
"NM": "Charlie",
"CODE": "C11"
}
]
---
db partition (e) -> file contains {IDENTITY:e.ID,NM:e.NAME,CODE:e.CODE}
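If you only need the matching entries, keep in mind that partition splits the array into matching and non-matching parts (success and failure), so a variant of the above (a sketch, assuming you only care about the matches) would be:
(db partition (e) -> file contains {IDENTITY: e.ID, NM: e.NAME, CODE: e.CODE}).success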
You can make use of filter directly together with contains:
db filter(value) -> file contains {IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE}
This filters the db array based on whether file contains the object {IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE}. However, this will not work if the objects in the file array have other fields that are not used for the comparison. In that case you can update the filter condition to check whether an object exists in the file array (using a selector) for which the condition applies. You can use the below to check that:
db filter(value) -> file[?($.IDENTITY==value.ID and $.NM == value.NAME and $.CODE == value.CODE)] != null
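To see why the selector-based variant is more tolerant, suppose a file entry carried an extra field that is not part of the comparison (hypothetical):
{
"IDENTITY": "D40000",
"NM": "Delta",
"CODE": "D12",
"SOURCE": "SAP"
}
contains compares whole objects, so {IDENTITY: value.ID, NM: value.NAME, CODE: value.CODE} would no longer equal this entry and the first form would miss it, while file[?($.IDENTITY == value.ID and $.NM == value.NAME and $.CODE == value.CODE)] only checks the three fields and still matches.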

Require help in creating a JSR223 request for getting CSV data and passing it to JMeter

I have an HTTP request whose body data (which is in JSON) is given below:
{
"$ct": false,
"Source": [
"DFT"
],
"Type": "View",
"Apply": "Filter",
"Format": "PDF",
"validationFactors": {
"Expand": "attributes",
"FilterConstraints": [{
"type": "articles",
"Apply": "All",
"CreatedUpdated": [{
"title": "UN",
"FirstName": "Alia",
"MiddleName": "",
"LastName": "Stve",
"Datatype": "string",
"Encode": "Pswd",
"Local": "project",
"Id": "146FG"
}]
},
{
"type": "articles",
"Apply": "All",
"CreatedUpdated": [{
"title": "UA",
"FirstName": "ABC",
"MiddleName": "XYZ",
"LastName": "TFG",
"Datatype": "string",
"Encode": "title",
"Local": "project",
"Id": "ST6879GIGOYGO790"
}]
}
]
}
}
In the above JSON I have parameterized the attributes below; their values are stored in a CSV: "title": "${title}", "FirstName": "${FirstName}", "MiddleName": "${MiddleName}", "LastName": "${LastName}", "Datatype": "${Datatype}", "Encode": "${Encode}", "Local": "${Local}", "Id": "${Id}"
Problem: I have created a JSR223 element below my HTTP request, but in the script area how do I get the data from the CSV and parameterize it? Thanks in advance.
You don't need a JSR223 PreProcessor for this; just placing the JSON payload into the "Body Data" tab of the HTTP Request sampler should be sufficient. Simply replace the hard-coded values with the JMeter variables matching the CSV Data Set Config reference names.
You might also need to add an HTTP Header Manager and configure it to send a Content-Type header with the value of application/json.
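For example, assuming the CSV Data Set Config has its Variable Names field set to title,FirstName,MiddleName,LastName,Datatype,Encode,Local,Id, each CreatedUpdated entry in the Body Data tab would simply use the variables, for instance:
"CreatedUpdated": [{
"title": "${title}",
"FirstName": "${FirstName}",
"MiddleName": "${MiddleName}",
"LastName": "${LastName}",
"Datatype": "${Datatype}",
"Encode": "${Encode}",
"Local": "${Local}",
"Id": "${Id}"
}]
and a CSV line such as UN,Alia,,Stve,string,Pswd,project,146FG (a made-up sample matching the first block above) would be substituted on each iteration.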
