How to get an array response of a JDBC Request in JMeter?

Example: I have my JDBC Request and the response looks like:
X Y Z
a1 b1 c1
a2 b2 c2
a3 b3 c3
a4 b4 c4
a5 b5 c5
. . .
. . .
. . .
How can I get all the values of X, Y and Z?
Then I have an HTTP request, and I want to assert that the whole response matches the data selected via JDBC.
Example response:
[
    {
        "x": "a1",
        "y": "b1",
        "z": "c1"
    },
    {
        "x": "a2",
        "y": "b2",
        "z": "c2"
    },
    {
        "x": "a3",
        "y": "b3",
        "z": "c3"
    },
    {
        "x": "a4",
        "y": "b4",
        "z": "c4"
    },
    {
        "x": "a5",
        "y": "b5",
        "z": "c5"
    },
    {
        "x": "a6",
        "y": "b6",
        "z": "c6"
    },
    {
        "x": "a7",
        "y": "b7",
        "z": "c7"
    },
    {
        "x": "a8",
        "y": "b8",
        "z": "c8"
    },
    ...
]

As per the JDBC Request sampler documentation:
If the Variable Names list is provided, then for each row returned by a Select statement, the variables are set up with the value of the corresponding column (if a variable name is provided), and the count of rows is also set up. For example, if the Select statement returns 2 rows of 3 columns, and the variable list is A,,C, then the following variables will be set up:
A_#=2 (number of rows)
A_1=column 1, row 1
A_2=column 1, row 2
C_#=2 (number of rows)
C_1=column 3, row 1
C_2=column 3, row 2
So given you provide "Variable Names" as X,Y,Z you should be able to access the values as ${X_1}, ${Y_2}, etc.
See Debugging JDBC Sampler Results in JMeter for more detailed information on working with JDBC Test Elements results and result sets.

You should fill in the "Variable Names" field and also declare a Result Variable Name (the code below assumes it is result1).
Then you can access the values using the _1, _2, ... suffixes. Below is sample code that you can use in a Beanshell PostProcessor.
import java.util.ArrayList;
import net.minidev.json.JSONObject;

// "result1" is the Result Variable Name configured in the JDBC Request sampler
ArrayList items = vars.getObject("result1");

// Build one JSON object per result row from the x_1/y_1/z_1, x_2/y_2/z_2, ... variables
// (the names must match the "Variable Names" field of the JDBC Request sampler)
for (int i = items.size() - 1; i >= 0; i--) {
    JSONObject jsonitemElement = new JSONObject();
    jsonitemElement.put("x", vars.get("x_" + (i + 1)));
    jsonitemElement.put("y", vars.get("y_" + (i + 1)));
    jsonitemElement.put("z", vars.get("z_" + (i + 1)));
    log.info(jsonitemElement.toString());
}
Since these values come back in the response payload of the HTTP request, you should add code to parse that JSON response in an assertion or post-processor and compare it with the elements built by the sample code above.
A point to note: different applications may send the target JSON in any order. There is no guarantee that the HTTP response will always return the rows as A1,B1,C1 - A2,B2,C2, etc.; it can start with A5,B5,C5, for example. It is better to use a hashmap, or to write your array comparison so that it checks that your result set completely matches the HTTP response regardless of order.
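The comparison idea itself is language-agnostic; the sketch below shows it in Python purely for illustration (in JMeter you would implement the same logic in a JSR223/Beanshell element), with hypothetical db_rows and api_response_text inputs: build one set of (x, y, z) tuples from the JDBC variables, another from the parsed JSON, and check that they match regardless of order.
import json

def rows_match(db_rows, api_response_text):
    """Compare DB rows with the JSON array from the HTTP response, ignoring order."""
    # db_rows: list of dicts built from the JMeter variables x_1/y_1/z_1, x_2/y_2/z_2, ...
    expected = {(row["x"], row["y"], row["z"]) for row in db_rows}
    actual = {(item["x"], item["y"], item["z"]) for item in json.loads(api_response_text)}
    return expected == actual

# Hypothetical data for illustration:
db_rows = [{"x": "a1", "y": "b1", "z": "c1"}, {"x": "a2", "y": "b2", "z": "c2"}]
response = '[{"x": "a2", "y": "b2", "z": "c2"}, {"x": "a1", "y": "b1", "z": "c1"}]'
print(rows_match(db_rows, response))  # True, even though the order differs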

Related

Data Store - dash_table conditional formatting failing

@dashapp.callback(
    Output(component_id='data-storage', component_property='data'),
    Input(component_id='input', component_property='n_submit')
    ...
    return json_data

@dashapp.callback(
    Output('table', component_property='columns'),
    Output('table', component_property='data'),
    Output('table', component_property='style_cell_conditional'),
    Input(component_id='data-storage', component_property='data'),
    ...
    column_name = 'Target Column'
    value = 'This value is a string'
    table_columns = [{"name": i, "id": i} for i in df.columns]
    table_data = df.to_dict("records")
    conditional_formatting = [{
        'if': {
            'filter_query': f'{{{column_name}}} = {value}'
        },
        'backgroundColor': 'white',
        'color': 'black',
    }]
    return table_columns, table_data, conditional_formatting
When the code above is used WITH the conditional_formatting part, it works for some values and does not work for others.
When the code above is used WITHOUT the conditional_formatting part, it works as expected for all values.
Note that when the conditional_formatting part is used, all callbacks are triggered twice. After this happens, the Data Store acts as if it has been "infected" by the failing value and does not accept new data.
Example:
Step 1. Use working input -> All callbacks triggered once -> Data Store is populated -> Data is displayed as expected
Step 2. Use working input -> All callbacks triggered once -> Data Store is populated -> Data is displayed as expected
Step 3. Use non-working input -> All callbacks triggered once -> All callbacks are triggered again -> Data related to the input from Step 2 is displayed
Step 4. Use working input -> All callbacks triggered once -> All callbacks are triggered again -> Data related to the input from Step 2 is displayed
Any ideas why this happens?
Any feedback is appreciated!
conditional_formatting = [{
    'if': {
        'filter_query': f'{{{column_name}}} = "{value}"'
    },
    'backgroundColor': 'white',
    'color': 'black',
}]
The issue was that the failing values contained spaces (e.g. San Francisco). Adding quotes around the value solved it.
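A quick way to see why the space breaks the filter is to print the two rendered filter_query strings (the value below is just an illustrative example):
column_name = 'Target Column'
value = 'San Francisco'  # a value containing a space

# Without quotes the value is parsed as two tokens and the filter fails:
print(f'{{{column_name}}} = {value}')    # {Target Column} = San Francisco

# With quotes the whole string is treated as a single literal:
print(f'{{{column_name}}} = "{value}"')  # {Target Column} = "San Francisco"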

DynamoDB Stream - Lambda to process formula

I've got a DynamoDB Table that contains attributes similar to:
{
    "pk": "pk1",
    "values": {
        "v2": 5,
        "v1": 90
    },
    "formula": "(v1 + v2) / 100",
    "calc": 5.56
}
I have a Lambda that is triggered by the DynamoDB Stream. Is there any way to calculate the "calc" attribute based on the formula and the values? Ideally I'd like to do it during the update_item call that updates this table every time the stream sends a message.
Your Lambda function receives the stream event and can process it like this:
def lambda_handler(event, context):
    records = event['Records']
    for record in records:
        # NewImage holds the item as it looks after the change
        new_record = record['dynamodb']['NewImage']
        calc = new_record.get('calc')
        # do your stuff here
        calc = some_functions()  # placeholder for your own calculation
    return event
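To actually fill in "calc" from the stream-triggered Lambda, a minimal sketch could look like the following. It assumes the stream is configured with NEW_IMAGE (or NEW_AND_OLD_IMAGES), uses boto3's TypeDeserializer to turn the stream image into plain Python values, and evaluates the formula with a restricted eval. The table name is hypothetical, and for untrusted formula strings you would want a real expression parser instead of eval.
import boto3
from decimal import Decimal
from boto3.dynamodb.types import TypeDeserializer

table = boto3.resource('dynamodb').Table('my-table')  # hypothetical table name
deserializer = TypeDeserializer()

def lambda_handler(event, context):
    for record in event['Records']:
        if record['eventName'] == 'REMOVE':
            continue  # nothing to calculate for deletions

        # Stream images use DynamoDB's attribute-value format, so deserialize first
        image = record['dynamodb']['NewImage']
        item = {k: deserializer.deserialize(v) for k, v in image.items()}

        values = {k: float(v) for k, v in item['values'].items()}
        formula = item['formula']  # e.g. "(v1 + v2) / 100"

        # Evaluate the formula with only the stored values in scope.
        # NOTE: eval is only acceptable if you fully trust the formula strings.
        result = eval(formula, {"__builtins__": {}}, values)
        new_calc = Decimal(str(round(result, 2)))  # boto3 requires Decimal for numbers

        # Write the result back; the equality check avoids re-triggering the
        # stream indefinitely once calc is already up to date.
        if item.get('calc') != new_calc:
            table.update_item(
                Key={'pk': item['pk']},
                UpdateExpression='SET calc = :c',
                ExpressionAttributeValues={':c': new_calc},
            )
    return event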

How to compare value at specific position from DB to JSON variable in JMeter

In the database there is a field named Diagnosis Reference and its value looks like "123456".
In the JSON the value should display as:
Diagnosis Reference 1 : 1 ( Value at position 1)
Diagnosis Reference 2 : 2 ( Value at position 2)
Diagnosis Reference 3 : 3 ( Value at position 3)
Diagnosis Reference 4 : 4 ( Value at position 4)
Diagnosis Reference 5 : 5 ( Value at position 5)
Diagnosis Reference 6 : 6 ( Value at position 6)
How can I fetch the value at each position from the DB and compare it with the JSON values using JMeter?
You can save the value from the database into a JMeter Variable using a JDBC Request sampler with the "Variable Names" field filled in (the code below assumes the variable is called diagnosisReference).
Then my assumption is that you need to execute an HTTP Request to get the JSON from some API endpoint.
At this stage you can add a JSR223 Assertion to parse the JSON response and compare it with the variable from the JDBC Request sampler.
If your JSON response looks like:
[
    {
        "Diagnosis Reference 1": 1
    },
    {
        "Diagnosis Reference 2": 2
    },
    {
        "Diagnosis Reference 3": 3
    },
    {
        "Diagnosis Reference 4": 4
    },
    {
        "Diagnosis Reference 5": 5
    },
    {
        "Diagnosis Reference 6": 6
    }
]
you could use code like this:
// Parse the JSON array from the previous sampler's response
def response = new groovy.json.JsonSlurper().parse(prev.getResponseData())

// Walk the DB value (e.g. "123456") character by character and compare each
// digit with the matching "Diagnosis Reference N" entry in the response
vars.get('diagnosisReference_1').toCharArray().eachWithIndex { number, index ->
    if (response[index].get('Diagnosis Reference ' + number) != Character.getNumericValue(number)) {
        AssertionResult.setFailure(true)
        AssertionResult.setFailureMessage('Numbers mismatch, expected: ' + number + ', got: ' + response[index].get('Diagnosis Reference ' + number))
    }
}

How to loop an HTTPS requests.post from CSV data

I want to loop POST requests over data from a CSV file.
The CSV file has 2 columns:
agentLicenseID(x) licenseExpirationDate(y)
271844 6/20/2021
271847 6/30/2021
271848 5/21/2021
body = {'sid':API_SID,'key':API_KEY, 'agentLicenseID':x,'licenseExpirationDate':y }
response = requests.post(url=UPD_URL,data=body)
I intend to loop the request for different values of x and y (agentLicenseID and licenseExpirationDate) from the CSV file.
With the help of pandas:
import pandas as pd

df = pd.read_csv("your_file.csv", sep=r"\s+")  # <-- change the separator if it's different
for x, y in zip(df["agentLicenseID"], df["licenseExpirationDate"]):
    body = {
        "sid": API_SID,
        "key": API_KEY,
        "agentLicenseID": x,
        "licenseExpirationDate": y,
    }
    response = requests.post(url=UPD_URL, data=body)
    # ...

Use of sum function to get a value in sparkR

I have a DataFrame 'data' in SparkR which contains ID = 1, 2, ... and amount = 232, 303, 444, 10, ...
I want to check if the sum of amount is greater than 5000.
sum(data$amount ) > 5000
Now SparkR should return TRUE if it is true and FALSE otherwise, but all I get is this message:
Column (SUM(amount)>5000)
How can I check if it's true?
It might not be the best possible solution, but it works. You did create a Column of one item, but I did not find a way to get the value stored in that item, therefore I applied a different approach:
df <- data.frame(ID=c(1,2,3,4),amount=c(232, 303, 444, 10))
data <- createDataFrame(sqlContext,df)
data <- withColumn(data, "constant", data$ID * 0)
sumFrame <- agg(groupBy(data, data$constant), sumAmount = sum(data$amount))
localResult <- collect(sumFrame)
localResult$sumAmount > 5000
With this approach I create a DataFrame of one row, but a DataFrame can be collected to obtain the result.
