NiFi ReplaceText processor inserting zero-length strings - apache-nifi

The Apache NiFi ReplaceText processor isn't behaving the way I expect. I can't figure out why the expression evaluation is inserting zero-length strings where the data should be going.
The ReplaceText processor's configuration is:
Replacement Strategy is: Always Replace.
Evaluation Mode is: Entire text.
The processor chain is: QueryDatabaseTable->SplitAvro->UpdateAttribute->ReplaceText->PutSQL
The replacement value in the ReplaceText processor is:
INSERT INTO public.journal (payee, amount, billed_date, paid_date)
VALUES ('${payee}', ${amount}, '${billed_date}', '${paid_date}');
It should become:
INSERT INTO public.journal (payee, amount, billed_date, paid_date)
VALUES ('Dead End LLC', 2000.00, '2018-02-01', '2018-02-02');
Instead I get:
INSERT INTO public.journal (payee, amount, billed_date, paid_date)
VALUES ('', , '', '');
Which is especially frustrating when I look at the output of the preceding UpdateAttribute processor step and see…
[ {
  "payee" : "Dead End LLC",
  "amount" : "2000.00",
  "billed_date" : "2018-02-01",
  "paid_date" : "2018-02-02"
} ]
This breaks my brain, since the expression processing appears to be working just fine but isn't pulling in the right data (which my naive implementation assumes will be there).
Previous reading that got me to where I am now:
Database Extract
Database Insert

The reason you are getting empty strings is that the expressions '${payee}', ${amount}, '${billed_date}', and '${paid_date}' are evaluating to no value, and that is because you probably do not have flow file attributes with those names.
You cannot go directly from a value in Avro in the content of a flow file into NiFi's Expression Language; you would first need to extract the values from the content into flow file attributes.
Something like this would probably work...
QueryDatabaseTable -> SplitAvro -> ConvertAvroToJSON -> EvaluateJsonPath -> UpdateAttribute -> ReplaceText -> PutSQL
EvaluateJsonPath is where you would extract the values from the JSON into flow file attributes.
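A sketch of that EvaluateJsonPath configuration (assuming each flow file out of ConvertAvroToJSON holds a single record shaped like the JSON shown above; the exact JsonPaths are illustrative):
Destination: flowfile-attribute
payee: $.payee
amount: $.amount
billed_date: $.billed_date
paid_date: $.paid_date
With those attributes in place, the '${payee}' style expressions in ReplaceText should resolve as expected.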

Related

Power Automate Parsing JSON Array

I've seen the JSON array questions here and I'm still a little lost, so I could use some extra help.
Here's the setup:
My Flow calls a sproc on my DB and that sproc returns this JSON:
{
  "ResultSets": {
    "Table1": [
      {
        "OrderID": 9518338,
        "BasketID": 9518338,
        "RefID": 65178176,
        "SiteConfigID": 237
      }
    ]
  },
  "OutputParameters": {}
}
Then I use a Parse JSON action to get what looks like the same result, but now I'm told it's parsed and I can call variables.
The issue is that when I try to call just, say, SiteConfigID, I get "The output you selected is inside a collection and needs to be looped over to be accessed. This action cannot be inside a foreach."
After some research, I know what's going on here. Table1 is an array, and I need to tell Power Automate to just grab the first record of that array so it knows it's working with a single record instead of a full array. Fair enough. So I spin up a "Return value(s) to Power Virtual Agents" action just to see my output. I know I'm supposed to use a 'first' expression or a 'get [0] from array' expression here, but I can't seem to make them work. Below are what I've tried and the errors I get:
Tried:
first(body('Parse-Sproc')?['Table1/SiteConfigID'])
Got: InvalidTemplate. Unable to process template language expressions in action 'Return_value(s)_to_Power_Virtual_Agents' inputs at line '0' and column '0': 'The template language function 'first' expects its parameter be an array or a string. The provided value is of type 'Null'. Please see https://aka.ms/logicexpressions#first for usage details.'.
Also Tried:
body('Parse-Sproc')?['Table1/SiteconfigID']
which just returns a null-valued variable.
Finally I tried
outputs('Parse-Sproc')?['Table1']?['value'][0]?['SiteConfigID']
Which STILL gives me a null-valued variable. It's the worst.
In that last expression, I also switched the variable type in the Return value(s) to Power Virtual Agents action to a string instead of a number; no dice.
I also changed 'outputs' in that expression to 'body'; also no dice.
To be clear: the end result I'm looking for is for the system to just return SiteConfigID as a string or an int so that I can pipe that into a virtual agent.
I believe this is what you need as an expression ...
body('Parse-Sproc')?['ResultSets']['Table1'][0]?['SiteConfigID']
You can see I'm just traversing down to the object and through the array to get the value.
Naturally, I don't have your exact flow, but if I use your JSON and load it into a Parse JSON step to get the schema, I am able to get the result. I do get a different schema to yours, though, so it will be interesting to see if it directly translates.
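If you'd rather use the first() function from your original attempts, an equivalent expression (untested against your exact flow, but the same traversal) should be:
first(body('Parse-Sproc')?['ResultSets']?['Table1'])?['SiteConfigID']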

LookupRecord Processor always routes to unmatched even if matched

I have the following JSON file:
{
  "a": {
    "after": {
      "a123": {
        "ip": "1.0.0.0",
        "p_id": "4500"
      }
    },
    "before": {
      "a123": {
        "ip": "1.0.0.0",
        "p_id": "4500"
      }
    }
  }
}
I'm trying to configure a LookupRecord processor in NiFi to match on p_id: if matched, route one way, and if unmatched, route another. The lookup service I'm using is a MongoDBLookupService.
The user-defined property I've written for the LookupRecord processor is:
key : /a/after/['p_id']
Note: I used [] because that part of the JSON (the "a123" key) changes with every JSON file.
I've even tried /a/after/*/p_id and nothing seems to work. Everything routes to unmatched. Please help me figure out if I'm doing something wrong in the expression.

Split array of strings and put each string on a flow file attribute in NiFi

I'm trying to extract each element from the frequentlyBoughtTogether array and put it on a flow file attribute:
{
  "frequentlyBoughtTogether": ["a", "b", "c"]
}
First step: SplitJson.
Second step: EvaluateJsonPath, to make each element a flow file attribute.
However, this gives me an error (the flow files route to failure). When I log the failure, I can see the element in the flow file content, but I need it to be an attribute. Any ideas how to solve this issue?
Use ExtractText processor instead of EvaluateJsonPath processor.
EvaluateJsonPath evaluates the flow file content as JSON; if the content is not valid JSON (here, after SplitJson each flow file holds a single bare array element rather than a JSON document), the processor routes the flow file to failure.
ExtractText, by contrast, just extracts the content of the flow file by applying a regex.
ExtractText configs:
Add a new property:
val
(.*)
The processor then adds a new attribute named val to the flow file, with the extracted flow file content as its value.
Flow:
SplitJson->ExtractText
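For example (a sketch based on the sample array above): the split flow file whose content is a should come out of ExtractText with an attribute val = a, and likewise b and c for the other splits, ready to be referenced as ${val} in later processors. (ExtractText may also add numbered capture-group attributes such as val.0 and val.1.)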

How to get a value from a column referenced by number, from the JDBC Response object of JMeter?

I know the docs advise getting a cell value this way:
columnValue = vars.getObject("resultObject").get(0).get("Column Name");
as stated in the JMeter doc: Component Reference: JDBC Request.
But: how do I access the same result-set cell value by just the column's number?
RS.get(0).get(4);
...instead of giving it a String with the column Name/Label.
edit 1: Let's use Groovy/Java instead of BeanShell. Thanks.
edit 2: The original motivation was the difference between column Name and Label, which seem not to be fully guaranteed (it isn't clear, at least not to me), especially regarding case sensitivity ("id"/"ID", "name"/"Name"/"NAME", ...).
It should be something like:
String value = (new ArrayList<String>(vars.getObject("resultObject").get(0).values())).get(4)
More information: Debugging JDBC Sampler Results in JMeter
Be aware that according to the HashMap documentation:
This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.
So the order of columns might be a big question mark.
The row itself is a HashMap, defined in source code as:
HashMap<String, Object> row
So using BeanShell syntax, you could get it as
row = vars.getObject("resultObject").get(0); // returns HashMap
In a HashMap, you cannot access an item (column) by index. You could, however, apply one of the methods described here, but HashMap doesn't guarantee order, so you cannot be sure what "column 4" will contain.
If you want to be able to loop through all columns, it's better to do it in a Map style, not by index. For example using entrySet() with BeanShell:
for(Map.Entry entry : row.entrySet())
{
log.info(entry.getKey() + "=" + entry.getValue());
}
See various ways to iterate through Map here.
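Since edit 1 asks for Groovy, here is a minimal JSR223 sketch of both the iteration and access-by-position (assuming the JDBC Request's Result Variable Name is resultObject; the usual column-order caveat applies):
def row = vars.getObject('resultObject').get(0) // first row: a Map of column name -> value
// iterate all columns with a positional index
row.eachWithIndex { entry, i ->
    log.info('column ' + i + ': ' + entry.key + '=' + entry.value)
}
// grab the 5th column's value by position (order is not guaranteed)
def value = row.values().toList()[4]
log.info('column 4 value: ' + value)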

Specifying null with NiFi Expression Language

I'm trying to replace an empty field with null in an UpdateRecord processor.
/title ${field.value:replaceEmpty(null)}
This fails because "null" is not a valid keyword. How does one specify null in the NiFi Expression Language?
You can use the literal() function to return a String value that is the exact input to the function, and you can nest that inside your replaceEmpty method. Try using the expression ${field.value:replaceEmpty(${literal('null')})}.
If you are doing this in the UpdateRecord processor, you want to use Apache NiFi RecordPath syntax, not Expression Language. I believe the CSVReader and others parse even a field value containing only spaces to empty, so a regular expression like replaceRegex( /title, '^(?![\s\S])$', 'null' ) doesn't work.
My suggestion would be to file a Jira requesting this capability. In the meantime, do not use UpdateRecord for this, but rather ReplaceText with a regular expression like ,\s?, that matches an empty CSV value, replacing it with null.
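A sketch of that ReplaceText configuration (assuming raw CSV content; the pattern is illustrative and would need care around consecutive empty fields):
Replacement Strategy: Regex Replace
Evaluation Mode: Entire text
Search Value: ,\s?,
Replacement Value: ,null,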
There is a trick using RecordPath: if the field value is blank, you can do this to get a null value.
/fieldName[not(isBlank(/fieldName))]
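For example (a sketch, assuming an UpdateRecord processor with Replacement Value Strategy set to Record Path Value and a record field named fieldName), add a property like:
/fieldName
/fieldName[not(isBlank(/fieldName))]
When the field is blank, the predicate matches nothing, so the field is written as a true null; otherwise it keeps its original value.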
Note: if you instead replace the value with the text null (for example via ${literal('null')}), the output looks like
{
  "fieldname" : "null"
}
where null here is a string, not a true null value.
