How to commit each DB transaction in mapping node in IIB Toolkit? - ibm-integration-bus

I have a JSON body that is an array. Each element of the array has to be inserted into the DB one by one. There is a constraint to use the Mapping node only, because it has a GUI. However, in the Mapping node I cannot change the transaction setting from Automatic to any other value. As a result, when the first element is inserted into the DB and the second one fails, the first insert is also rolled back. How can I design the Mapping node flow so that when the first insert succeeds it is committed regardless, and if the first insert fails the second is still attempted?
Here is the JSON request, an array of at least two elements:
[
{
"batch_id": 2019,
"trx_no": "test",
"sys_ref_no": "test",
"trx_line_num": 1,
"trx_start_date": "test",
"trx_post_on_date": "test",
"trx_post_on_date_dr": "06-APR-20",
"company_code": "test",
"company_id": 1,
"shop_num": "test",
"shop_id": 1,
"pos_num": "test",
"pos_id": 1,
"tax_type": "test",
"tax_code": "test",
"tax_rate": 1,
"currency_code": "test",
"currency_id": 1,
"tax_amt": 1,
"tax_id": 1,
"tax_base_amt": 1,
"tax_exchange_rate": 1,
"tax_exchange_date": "test",
"tax_exchange_type": "test",
"status": "test",
"error_message": "test",
"creation_date": "06-APR-20",
"created_by": 1,
"last_update_date": "06-APR-20",
"last_update_by": 1,
"data_source": "test",
"request_id": 1,
"billing_intf_flag": "test",
"n_attribute1": 1,
"n_attribute2": 1,
"n_attribute3": 1,
"c_attribute1": "test",
"c_attribute2": "test",
"c_attribute3": "test",
"d_attribute1": "test",
"d_attribute2": "test",
"d_attribute3": "test",
"d_attribute1_dr": "06-APR-20",
"d_attribute2_dr": "06-APR-20",
"d_attribute3_dr": "06-APR-20"
},
{
"batch_id": 2020,
"trx_no": "test",
"sys_ref_no": "test",
"trx_line_num": 1,
"trx_start_date": "test",
"trx_post_on_date": "test",
"trx_post_on_date_dr": "06-APR-20",
"company_code": "test",
"company_id": 1,
"shop_num": "test",
"shop_id": 1,
"pos_num": "test",
"pos_id": 1,
"tax_type": "test",
"tax_code": "test",
"tax_rate": 1,
"currency_code": "test",
"currency_id": 1,
"tax_amt": 1,
"tax_id": 1,
"tax_base_amt": 1,
"tax_exchange_rate": 1,
"tax_exchange_date": "test",
"tax_exchange_type": "test",
"status": "test",
"error_message": "test",
"creation_date": "06-APR-20",
"created_by": 1,
"last_update_date": "06-APR-20",
"last_update_by": 1,
"data_source": "test",
"request_id": 1,
"billing_intf_flag": "test",
"n_attribute1": 1,
"n_attribute2": 1,
"n_attribute3": 1,
"c_attribute1": "test",
"c_attribute2": "test",
"c_attribute3": "test",
"d_attribute1": "test",
"d_attribute2": "test",
"d_attribute3": "test",
"d_attribute1_dr": "06-APR-20",
"d_attribute2_dr": "06-APR-20",
"d_attribute3_dr": "06-APR-20"
}
]

There is a constraint to use the Mapping node only, because it has a GUI.
This problem would be simple to solve using a Compute node and a small amount of ESQL. As far as I know, it is impossible using the Mapping node. So either that constraint gets relaxed, or the requirement for independent transactions has to be dropped.
In general, it is a bad idea to make rigid rules about which IIB transformation language to use, because it leads to awkward situations like this one. It is fine to have a preferred or default language, as long as other languages are allowed when it makes sense. Your situation is actually slightly unusual; it is more common (but still unwise) for ESQL to be the one and only allowed language!
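For reference, a rough, untested ESQL sketch of that Compute-node approach is below. The table name TAX_TRX and the abbreviated column list are illustrative, not taken from the question. Each array element gets its own INSERT, and a CONTINUE handler swallows a per-row database error, so one bad element no longer throws and rolls back the flow; the successful inserts are committed when the flow completes normally.
CREATE COMPUTE MODULE InsertEachElement
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        -- A JSON array parses into anonymous Item children under JSON.Data
        DECLARE element REFERENCE TO InputRoot.JSON.Data.Item[1];
        WHILE LASTMOVE(element) DO
            BEGIN
                -- If this row fails, note the failure and carry on with the next element
                DECLARE CONTINUE HANDLER FOR SQLSTATE LIKE '%'
                    SET Environment.Variables.LastFailedBatchId = element.batch_id;
                INSERT INTO Database.TAX_TRX (BATCH_ID, TRX_NO, SYS_REF_NO)
                    VALUES (element.batch_id, element.trx_no, element.sys_ref_no);
            END;
            MOVE element NEXTSIBLING REPEATNAME;
        END WHILE;
        SET OutputRoot = InputRoot;  -- pass the original message on unchanged
        RETURN TRUE;
    END;
END MODULE;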

Since you don't want to process this JSON as a single unit of work (i.e. either commit everything to the DB or roll everything back), you could divide the work into two separate message flows. The first message flow splits the input JSON with multiple children into individual JSON messages, each containing a single element. Feed each of these single-element messages to the second message flow, which uses the Mapping node to insert into the database. Each insert then runs in its own flow transaction, so you can work around your constraint of using the Mapping node; a rough sketch of such a splitter is shown below.
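A rough, untested ESQL sketch of the splitting flow's Compute node (the module name is illustrative); each propagated message carries a single array element, and the second flow, which keeps the Mapping node, performs one insert per message:
CREATE COMPUTE MODULE SplitJsonArray
    CREATE FUNCTION Main() RETURNS BOOLEAN
    BEGIN
        DECLARE element REFERENCE TO InputRoot.JSON.Data.Item[1];
        WHILE LASTMOVE(element) DO
            CALL CopyMessageHeaders();
            -- Copy just this element so the output body is a single JSON object
            SET OutputRoot.JSON.Data = element;
            PROPAGATE TO TERMINAL 'out' DELETE NONE;
            MOVE element NEXTSIBLING REPEATNAME;
        END WHILE;
        RETURN FALSE;  -- everything has already been propagated above
    END;

    CREATE PROCEDURE CopyMessageHeaders() BEGIN
        -- Standard header-copying helper, as generated by the Toolkit
        DECLARE I INTEGER 1;
        DECLARE J INTEGER;
        SET J = CARDINALITY(InputRoot.*[]);
        WHILE I < J DO
            SET OutputRoot.*[I] = InputRoot.*[I];
            SET I = I + 1;
        END WHILE;
    END;
END MODULE;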

Related

How to get values from logs into the alert text message in Elasticsearch Kibana

I am continuously sending health data from my Ubuntu machine to Elasticsearch using td-agent. This health data contains the CPU temperature, which I have to monitor. So I have created an alert: if the temperature value increases to more than 60°F, it sends an alert to my Microsoft Teams channel. This whole setup is working fine.
Below is the log data:
{
"_index": "health_skl_gateway",
"_type": "_doc",
"_id": "DwxjinkBwxSy0OQ_4rhS",
"_version": 1,
"_score": null,
"_source": {
"Data": {
"WiFiIP": "N/A",
"signal_strength": "N/A",
"signal_percent": 0,
"signal_level": "N/A",
"EthIP": "192.168.100.30 ",
"TotalDisk": "916G",
"UsedDisk": "40G",
"FreeDisk": "830G",
"DiskPercent": "5%",
"TotalRAM": "16312468",
"UsedRAM": "3735596",
"FreeRAM": "5866548",
"CPU": 27,
"cpu_temp": 57,
"Internet": true,
"Publish msg count": 442,
"Created": "2021-05-20T15:26:51.557564",
"DeviceId": "TX-G1-318",
"UpTime": "2021-05-19T07:13:05"
},
"hostname": "TX-G1-318",
"Version": "V2"
},
"fields": {
"Data.UpTime": [
"2021-05-19T07:13:05.000Z"
],
"Data.Created": [
"2021-05-20T15:26:51.557Z"
]
},
"sort": [
1621524411557
]
}
In Kibana alerting, I have set an alert so that if the count of documents in the index health_skl_gateway over the last 10 minutes where Data.cpu_temp is greater than 60 reaches 3, it sends an alert to the Microsoft Teams channel. Below is how I have configured the message that is sent to Microsoft Teams.
So in the message, I am just sending a static text message. But I want to send the actual Data.cpu_temp value in the message.
Is this possible? How can we do this? Thanks.
Did you try using double braces, as in the documentation example below? I guess the mapping is done in the same way for all alert types.
In the server monitoring example, the email action type is used, and server is mapped to the body of the email using the template string "CPU on {{server}} is high".

Jmeter - Compare/assert multiple data: Data from JSON against data from DB

For the purpose of my task I need to compare data from JSON against data from the DB, but I have a few doubts about how to build the scenario. My scenario is:
1. Perform a DB query, which returns a dynamic result set like:
url                  secret
https://test1.com/   1234
https://test2.com/   1234
https://test3.com/   1234
Based on this dynamic set, I drive my Loop Controller. Each call then produces a different JSON, like:
[
{
"adminLink": "",
"BTCAmount": 0,
"lastName": "test",
"amount": 1,
"clientId": "e1d4ab18517711eaa84cfa163eb75a2c",
"foundingSourceName": "test",
"secretId": "2938663415",
"txId": "",
"mcTxId": "1079249234",
"paymentAddress": "",
"result": "transaction timed out",
"firstName": "test",
"phoneNumber": "",
"currency": "USD",
"refoundAmount": 0,
"approveTime": 1582543463,
"email": "",
"status": 1,
"timestamp": 1581938595
},
{
"adminLink": "",
"BTCAmount": 0,
"lastName": "test",
"amount": 550,
"clientId": "ffe22f34742311eab73f06ed6719cf46",
"foundingSourceName": "test",
"secretId": "3096308675",
"txId": "",
"mcTxId": "1101155492",
"paymentAddress": "",
"result": "transaction timed out",
"firstName": "test",
"phoneNumber": "",
"currency": "USD",
"refoundAmount": 0,
"approveTime": 1586355699,
"email": "",
"status": 1,
"timestamp": 1585750862
}
]
2. From this dynamic JSON, I can extract mcTxId.
3. For every single mcTxId, I need to perform a JDBC query using:
SELECT *
FROM affiliate_transaction
WHERE affiliate_id = 1 AND mctxid = '${mcTxId_1}'
This returns the corresponding record from the DB.
I managed to solve everything up to the Loop Controller and extracting every single mcTxId, but I am stuck on the nested looping logic and asserting each piece of data.
How can I compare/assert every single clientId and approveTime between the API call (JSON) and the DB query, when the data sets are always dynamic?
Any help is highly appreciated.
Apologies for the long post
Change your query to:
SELECT client_id
FROM affiliate_transaction
WHERE affiliate_id = 1 AND mctxid = '${mcTxId_1}'
and store the result in a JMeter Variable like client_id_from_db.
Use a JSON Extractor to get the client ID from the API response and store it in a JMeter Variable like client_id_from_api.
Once done, you should be able to compare the two JMeter Variables using a Response Assertion.

Elasticsearch record upsert with a complex _id field

I have to upsert bulk records into an Elasticsearch index, with _id being a combination of more than one field from the message. Can I do so? If it can be done, please give me a sample JSON for the same.
Regards
For the _id field, I am looking for something like below:
{
"_index": "kpi_aggr",
"_type": "KPIBackChannel",
"_id": "<<<combination of name , period_type>>>",
"_score": 1,
"_source": {
"name": "kpi-v1",
"period_type": "w",
"country": "AL",
"pg_name": "DENTAL CARE",
"panel_type": "retail",
"number_of_records_with_proposal": 10000,
"number_of_proposals": 80000,
"overall_number_of_records": 2000,
"#timestamp": 1442162810
}
}
Naturally, you can specify your own Elasticsearch document ids during a call to the Index API:
PUT kpi_aggr/KPIBackChannel/kpi-v1,w
{
"name": "kpi-v1",
"period_type": "w",
"country": "AL",
"pg_name": "DENTAL CARE",
"panel_type": "retail",
"number_of_records_with_proposal": 10000,
"number_of_proposals": 80000,
"overall_number_of_records": 2000,
"#timestamp": 1442162810
}
You can also do so during a _bulk API call:
POST _bulk
{ "index" : { "_index" : "kpi_aggr", "_type" : "KPIBackChannel", "_id" : "kpi-v1,w" } }
{"name":"kpi-v1","period_type":"w","country":"AL","pg_name":"DENTAL CARE","panel_type":"retail","number_of_records_with_proposal":10000,"number_of_proposals":80000,"overall_number_of_records":2000,"#timestamp":1442162810}
Notice that Elasticsearch will replace the document with the new version.
If you execute these two queries on an empty index, then querying by document id:
GET kpi_aggr/KPIBackChannel/kpi-v1,w
will give you the following:
{
"_index": "kpi_aggr",
"_type": "KPIBackChannel",
"_id": "kpi-v1,w",
"_version": 2,
"found": true,
"_source": {
"name": "kpi-v1",
"period_type": "w",
"country": "AL",
"pg_name": "DENTAL CARE",
"panel_type": "retail",
"number_of_records_with_proposal": 10000,
"number_of_proposals": 80000,
"overall_number_of_records": 2000,
"#timestamp": 1442162810
}
}
Notice "_version": 2, which in our case indicates that a document has been indexed twice, hence performed an "upsert" (but in general is meant to be used for Optimistic Concurrency Control).
Hope that helps!

Aggregations on PyElasticSearch (pyes)

I wish to calculate value-count aggregations on some indexed product data, but I seem to be getting some parameters in the ValueCountAgg constructor wrong.
An example of such indexed data is as follows:
{
"_index": "test-index",
"_type": "product_product",
"_id": "1",
"_score": 1,
"_source": {
"code": "SomeProductCode1",
"list_price": 10,
"description": null,
"displayed_on_eshop": "true",
"active": "true",
"tree_nodes": [],
"id": 1,
"category": {},
"name": "This is Product",
"price_lists": [
{
"price": 10,
"id": 1
},
{
"price": 10,
"id": 2
}
],
"attributes": {
"color": "blue",
"attrib": "something",
"size": "L"
},
"type": "goods"
}
}
I'm calculating aggregations as follows:
for attribute in filterable_attributes:
    count = ValueCountAgg(
        name='count_' + attribute, field='attributes.' + attribute
    )
    query.agg.add(count)
where query is a pyes.query.Query object wrapped inside a pyes.query.Search object, and filterable_attributes is a list of attribute names, such as color and size.
I have tried setting field=attribute as well, but it seems to make no difference. The result set that I obtain on conducting the search has the following as its aggs attribute:
{'count_size': {'value': 0}, 'count_color': {'value': 0}}
where size and color are indexed inside the attributes dictionary as shown above. These are evidently wrong results, and I think it is because I am not setting field properly.
Where am I going wrong?
I've found where I was going wrong.
According to Scoping Aggregations, the scope of an aggregation is by default associated with its query. My query was returning zero results, so I had to modify the search phrase.
After doing that I got the required results, and the aggregations come out right.
{'count_size': {'value': 3}, 'count_color': {'value': 3}}

Elasticsearch _source is always empty in the response

I am posting a query to http://localhost:9200/movie_db/movie/_search but the _source attribute is always empty in the response. I enabled it, but that doesn't help.
Movie DB:
TRY DELETE /movie_db
PUT /movie_db
{
  "mappings": {
    "movie": {
      "properties": {
        "title": {"type": "string", "analyzer": "snowball"},
        "actors": {"type": "string", "position_offset_gap": 100, "analyzer": "standard"},
        "genre": {"type": "string", "index": "not_analyzed"},
        "release_year": {"type": "integer", "index": "not_analyzed"},
        "description": {"_source": true, "type": "string", "analyzer": "snowball"}
      }
    }
  }
}
BULK INDEX movie_db/movie
{"_id": 1, "title": "Hackers", "release_year": 1995, "genre": ["Action", "Crime", "Drama"], "actors": ["Johnny Lee Miller", "Angelina Jolie"], "description": "High-school age computer expert Zero Cool and his hacker friends take on an evil corporation's computer virus with their hacking skills."}
{"_id": 2, "title": "Johnny Mnemonic", "release": 1995, "genre": ["Science Fiction", "Action"], "actors": ["Keanu Reeves", "Dolph Lundgren"], "description": "A guy with a chip in his head shouts incomprehensibly about room service in this dystopian vision of our future."}
{"_id": 3, "title": "Swordfish", "release_year": 2001, "genre": ["Action", "Crime"], "actors": ["John Travolta", "Hugh Jackman", "Halle Berry"], "description": "A cast of characters challenge society's commonly held view that computer experts are not the beautiful people. Somehow, the CIA is hacked in under 5 minutes."}
{"_id": 4, "title": "Tomb Raider", "release_year": 2001, "genre": ["Adventure", "Action", "Fantasy"], "actors": ["Angelina Jolie", "Jon Voigt"], "description": "The story of a girl and her quest for antiquities in the face of adversity. This epic is adapter from its traditional video-game format to the big screen"}
Query:
{
  "query": {
    "term": { "genre": "Crime" }
  }
}
Results:
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 0.30685282,
"hits": [
{
"_index": "movie_db",
"_type": "movie",
"_id": "3",
"_score": 0.30685282,
"_source": {}
},
{
"_index": "movie_db",
"_type": "movie",
"_id": "1",
"_score": 0.30685282,
"_source": {}
}
]
}
}
I had the same problem: despite enabling _source in my query as well as in my mappings, _source would always be {}.
Your proposed solution of setting cluster.name in elasticsearch.yml gave me the hint that the problem must be some hidden setting in the old cluster.
I found out that I had an index template definition that came with a plugin I installed (in my case elasticsearch-transport-couchbase), which said
"_source" : {
"includes" : [ "meta.*" ]
},
thereby implicitly excluding all fields other than meta.* from _source.
Check your templates like this:
curl -XGET localhost:9200/_template/?pretty
I deleted the couchbase template like so
curl -XDELETE localhost:9200/_template/couchbase
and created a new, almost identical one but with source enabled.
Here is how:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
Solution:
In the Elasticsearch config folder, open elasticsearch.yml and set cluster.name to a different value, then restart elasticsearch.bat.
I once accidentally passed a single field in the _source array that didn't exist, for example "_source": ["bazinga"], and in the results the source was empty.
So maybe you have simply passed a totally unrelated string into the _source array. Checking for that can be a better solution than making changes to the elasticsearch.yml file.
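For comparison, a search request whose _source filter names fields that do exist (reusing the movie_db example above) returns those fields as expected. A minimal sketch:
GET movie_db/movie/_search
{
  "query": { "term": { "genre": "Crime" } },
  "_source": ["title", "genre"]
}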
