JMeter: read a BLOB (which may contain XML) from Oracle

I have a database table in which one column holds BLOB data that can contain either plain text or XML [please do not ask why; somehow a dev made this choice]. I want to read the BLOB data and save it to a file. Can anyone help me with how to read the BLOB data?
I have the following JSR223 PostProcessor (Groovy code):
byte[] blobByte = vars.getObject("gaurav").get(0).get("TEXT");
String blob = new String(blobByte);
log.info(blob);
However, I get the following error:
javax.script.ScriptException: groovy.lang.GroovyRuntimeException: Ambiguous method overloading for method java.lang.String#<init>.
Cannot resolve which method to invoke for [null] due to overlapping prototypes between:
[class [B]
[class [C]
[class java.lang.String]
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:324) ~[groovy-all-2.4.16.jar:2.4.16]
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:72) ~[groovy-all-2.4.16.jar:2.4.16]
at javax.script.CompiledScript.eval(Unknown Source) ~[?:1.8.0_251]
at org.apache.jmeter.util.JSR223TestElement.processFileOrScript(JSR223TestElement.java:223) ~[ApacheJMeter_core.jar:5.2.1]
at org.apache.jmeter.extractor.JSR223PostProcessor.process(JSR223PostProcessor.java:44) [ApacheJMeter_components.jar:5.2.1]
at org.apache.jmeter.threads.JMeterThread.runPostProcessors(JMeterThread.java:931) [ApacheJMeter_core.jar:5.2.1]
at org.apache.jmeter.threads.JMeterThread.executeSamplePackage(JMeterThread.java:569) [ApacheJMeter_core.jar:5.2.1]
at org.apache.jmeter.threads.JMeterThread.processSampler(JMeterThread.java:490) [ApacheJMeter_core.jar:5.2.1]
Can anyone help me figure out what is wrong in my logic?
PS: I referred to this page for the conversion logic: https://www.blazemeter.com/blog/performance-testing-blob-from-a-mysql-database-with-jmeter

In the article you're referring to, TEXT is the name of the column. I strongly doubt that your column has this name, therefore your blobByte is null and therefore your BLOB reading logic fails.
Check which variables are being produced by your JDBC Request sampler using the Debug Sampler and View Results Tree listener combination; your approach will only work if you see a gaurav JMeter Variable holding a java.sql.Blob value.
See the Debugging JDBC Sampler Results in JMeter article for more information on working with JDBC result sets in JMeter tests.
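For illustration only, here is a sketch of a JSR223 PostProcessor that reads the BLOB and saves it to a file. It assumes gaurav is the Result Variable Name of the JDBC Request, and MY_BLOB_COLUMN is a placeholder for whatever column your query actually returns:

def row = vars.getObject('gaurav').get(0)              // first row of the JDBC result set
def blobValue = row.get('MY_BLOB_COLUMN')              // placeholder column name, use your real one

byte[] blobBytes
if (blobValue instanceof java.sql.Blob) {
    // the value may come back as a java.sql.Blob, so read all of its bytes
    blobBytes = blobValue.getBytes(1L, (int) blobValue.length())
} else {
    // or it may already be a byte array, as in the snippet above
    blobBytes = (byte[]) blobValue
}

String content = new String(blobBytes, 'UTF-8')        // works for plain text and XML alike
log.info(content)
new File('blob-content.txt').bytes = blobBytes         // save the raw bytes to a file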

Related

How to pass a dynamic file name to the CSV Data Set Config in JMeter when the name is generated while saving data from a previous request?

I have an HTTP request whose response is nested JSON, and using Groovy I save that data into different CSV files based on conditions.
The name of the CSV file is generated dynamically and saved in a variable using the vars.put() function:
vars.put("_cFileName",cFileName.toString());
When I try to use this variable in the CSV Data Set Config I get the following error message:
2022-01-19 16:58:39,370 ERROR o.a.j.t.JMeterThread: Test failed!
java.lang.IllegalArgumentException: File ${_cFileName} must exist and be readable
It is not converting the variable to the actual file name.
However, if the file name is not dynamic and the variable is defined under User Defined Variables in the Test Plan, it does resolve to the actual file name.
Is there any way to use a dynamic name created in a previous request's post-processor?
You cannot. As per the JMeter Test Elements Execution Order, Configuration Elements are executed before everything else; the CSV Data Set Config is a Configuration Element, hence it is initialized before any JMeter Variables are set.
The solution would be moving to the __CSVRead() function. JMeter Functions are evaluated at the time they're called, so it is possible to provide a dynamic file name there; see the How to Pick Different CSV Files at JMeter Runtime guide for more details if needed.
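For example (a sketch only; the column index is arbitrary), the __CSVRead() function takes the file name, which can itself come from a variable such as ${_cFileName}, and a column number, while the special value next advances to the following line:

${__CSVRead(${_cFileName},0)}      reads column 0 of the current line of the file named by ${_cFileName}
${__CSVRead(${_cFileName},next)}   moves the pointer to the next line of that file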

Using the JMeter GUI JDBC Request with a Callable Statement – how do I get the ResultSet/MetaData?

I've got a call to my database working as a SQL SELECT statement, but now I am trying to call a stored procedure from JMeter for further testing. I'm strictly working off of the JMX files and do not have JMeter integrated into our main Java project at this time.
I've set up the JMeter GUI with the JDBC Connection Configuration and the JDBC Request. I've made a successful call to my database with my callable statement with my string INPUT, and I get the string OUTPUT parameter back.
The OUTPUT parameter string only contains information about the call (user, system, success, etc.), but none of the values/data from the table, which are found in the ResultSet/MetaData. I cannot figure out how to get the ResultSet or the MetaData using the JDBC Request in JMeter.
In Java, I know I would use the statement, call statement.getResultSet(), and loop while resultSet.next() returns true. How do I do this in JMeter?
I've tried adding an additional out parameter, but then my statement rejects the call because there is only one in-parameter. I've tried a variety of JMeter Assertions, but because the main call only returns the out parameter, I cannot grab additional data.
Query: call XXXXX.readUser(?)
Parameter Values: ${inputJSONString}
Parameter Types: INOUT VARCHAR
Variable Names: ouputJSONString
Result Variable Name: ouputJSONString
View Results Tree: Response code: 200, Response message: OK. Output variables by position: contains the whole JSON out-parameter string with user, system, and success. It returns the table column headers but no values.
I do not have errors - the call is being made successfully. I just cannot figure out how to access the ResultSet from JMeter.
Don't use the same reference name for the Variable Names and the Result Variable Name, as the latter will be overwritten.
So:
Change ouputJSONString to e.g. ouputJSONStringObject
Add a JSR223 PostProcessor as a child of the request
You will be able to access JMeter's representation of the ResultSet as vars.getObject('ouputJSONStringObject') (basically an ArrayList of HashMaps)
See the Debugging JDBC Sampler Results in JMeter article for more details.
Unfortunately you cannot access the underlying ResultSet itself, as it is not exposed anywhere and is converted via a private function.
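For example, a minimal JSR223 PostProcessor sketch (assuming the Result Variable Name has been changed to ouputJSONStringObject as suggested above) that logs every column of every row could look like this:

def rows = vars.getObject('ouputJSONStringObject')   // ArrayList of HashMaps, one map per row
rows.eachWithIndex { row, index ->
    row.each { columnName, value ->
        log.info("row ${index}: ${columnName} = ${value}")
    }
}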

JMeter JDBC stored procedure invocation with a BLOB parameter

For testing purposes I need to invoke a DB2 stored procedure via JMeter.
I set up a JDBC connection, then added the JDBC Request step, but I have a problem populating the parameters.
The problem is that one of the parameters is a BLOB taken from a .bin file, and I can't find a way to insert it. The parameters are read from a CSV file.
What I did was:
Query Type: Callable statement
CALL MY.STOREDPROCEDURE(?,?,?)
Parameter values: ${par1},FROM_FILE('${par2}'),'a'
Parameter types: IN VARCHAR, INOUT BLOB, OUT VARCHAR
The error I get is that it can't convert a String to a Blob:
Illegal conversion: can not convert from "java.lang.String" to "java.sql.Blob" ERRORCODE=-4474 SQLSTATE=null.
I think that the problem is that the FROM_FILE function returns a String with the content of the file.
Following an idea I found online, I set up a JSR223 Sampler to load the file with a Groovy script and saved the file as an Object, but when I read it back it still seems to be treated as a String, even with a Groovy script (${__groovy()}).
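(For clarity, the kind of loading script meant here is roughly the following sketch; the path and variable name are placeholders:)

byte[] payload = new File('/path/to/parameter.bin').bytes   // placeholder path to the .bin file
vars.putObject('blobPayload', payload)                       // stored as an Object rather than via vars.put()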
I tried to add a cast in the invocation, but then I get an error about "Literal replacement parsing failed for procedure call".
How can I pass a Blob to the invocation?

NiFi PutElasticsearch5 error - Content type is missing

I am using the PutElasticsearch5 processor to index documents into Elasticsearch. My workflow has a couple of other processors before PutElasticsearch5 which convert Avro to JSON.
I am getting the error below when I run the workflow.
java.lang.IllegalArgumentException: Validation Failed: 1: content type is missing;2: content type is missing;
I couldn't find any other relevant information to troubleshoot this. There is no setting for "Content Type" under the PutElasticsearch5 configuration.
I'm also having this issue. Like user2297083 said, if you are sending a batched JSON file into PutElasticsearch5 then it will throw this exception and move the file into the FAILURE relationship. The processor seems to handle only one JSON object per file at a time, and it cannot be surrounded by array brackets. So if you have a file with content such as:
[{"key":"value"}]
then the processor will fail. However, if you send the same document as:
{"key":"value"}
then the processor will index successfully, considering your other configurations are correct.
One solution, if you don't want to send everything through a splitter before it reaches the PutElasticsearch5 processor, is to use a splitter processor that works off the FAILURE relationship of PutElasticsearch5 and sends the data back into the same PutElasticsearch5. More FlowFiles means more I/O on your node, so I'm actively looking for a way to have the PutElasticsearch5 processor handle a batched JSON document. I feel like there must be a way without writing a custom iteration of it or creating a ton of new FlowFiles.
EDIT:
Actually, it does answer the question. His question is:
I am using the PutElasticsearch5 processor to index documents into Elasticsearch. My workflow has a couple of other processors before PutElasticsearch5 which convert Avro to JSON.
I am getting the error below when I run the workflow.
java.lang.IllegalArgumentException: Validation Failed: 1: content type is missing;2: content type is missing;
which is exactly the exception message that the PutElasticsearch5 processor gives when it is passed a JSON file that is not formatted correctly. His question is why this is happening.
My answer states why it's happening (one possible use case) and how to work around it by giving a solution that does work.
In this case, correctly formatted JSON means a FlowFile that has a single JSON object as its content, as I have shown above.
Looking into this further, it makes sense that the processor only takes a single-JSON-document FlowFile at a time, because you can use FlowFile attributes to specify the "id" of the indexed document. If the uuid of the FlowFile were used and it were a batched JSON, e.g.
[{"one":1},{"two":2},{"three":3}]
then each JSON object would be indexed in Elasticsearch using the same "index", "type", and "id" (the id being the FlowFile uuid), and this would not be desired.

Spring Data Mongo: IllegalArgumentException trying to use field reference for minus()

I'm trying to use the $subtract operator to project the difference between two fields as a separate third field using this line of code:
pipeline.add(Aggregation.project("createdTime","modResult").andExpression("createdTime").minus("modResult").as("bucketedTime"));
However, when I try to perform the aggregation, I get the following exception:
org.springframework.web.util.NestedServletException: Request processing failed; nested exception is java.lang.IllegalArgumentException: can't serialize class org.springframework.data.mongodb.core.aggregation.Fields$AggregationField
What am I doing wrong here? I've noticed that if I provide an integer instead of a field name, there is no issue. Thanks for your help.
As per my question here, it turns out I should have been using and() instead of andExpression(). The latter tries to interpret the argument as an expression such as field1 + field2, which is a more convenient way of writing .and("field1").plus("field2").
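For reference, a sketch of the corrected projection (using the same pipeline variable and field names as above; not verified against every Spring Data version):

pipeline.add(
    Aggregation.project("createdTime", "modResult")
        .and("createdTime").minus("modResult")   // field reference via and()/minus() instead of andExpression()
        .as("bucketedTime")
);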

Resources