How can I write all values extracted via regex to a file? - jmeter

I've got a regular expression which I've tested in JMeter's RegExp Tester, and it returns multiple results (10), which is what I'm expecting.
I'm using the Regular Expression Extractor to retrieve the values, and I would like to write ALL of them to a CSV file. I'm using the Beanshell PostProcessor, but I'm only aware of a way to write one value to a file.
My script in Beanshell so far:
temp = vars.get("VALUES"); // VALUES is the Reference Name in regex extractor
FileWriter fstream = new FileWriter("c:\\downloads\\results.txt",true);
BufferedWriter out = new BufferedWriter(fstream);
out.write(temp);
out.close();
How can I write all the values found via the regex to a file? Thanks.

If you look at the Debug Sampler output, you'll see that VALUES is just a prefix, like:
VALUES=...
VALUES_g=...
VALUES_g0=...
VALUES_g1=...
etc.
You can use a ForEach Controller to iterate over them.
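For illustration only, here is a minimal sketch of what goes inside that loop. Assuming the controller's "Input variable prefix" is VALUES, "Add '_' before number?" is ticked, and the "Output variable name" is VALUE (that output name is just an example), a Beanshell Sampler placed under the ForEach Controller can append the current match to the file:
import java.io.BufferedWriter;
import java.io.FileWriter;

// runs once per match; the ForEach Controller exposes the current match as VALUE
BufferedWriter out = new BufferedWriter(new FileWriter("c:\\downloads\\results.txt", true));
out.write(vars.get("VALUE"));
out.newLine();
out.close();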
If you want to do it all in Beanshell without the ForEach Controller, you'll need to iterate over all the JMeter variables, like:
import java.io.FileOutputStream;
import java.util.Map;
import java.util.Set;

FileOutputStream out = new FileOutputStream("c:\\downloads\\results.txt", true); // append mode
String newline = System.getProperty("line.separator");

Set variables = vars.entrySet(); // all JMeter variables for the current thread
for (Map.Entry entry : variables) {
    if (entry.getKey().startsWith("VALUES")) { // keep only the extractor's variables
        out.write(entry.getValue().toString().getBytes("UTF-8"));
        out.write(newline.getBytes("UTF-8"));
        out.flush();
    }
}
out.close();
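Note that a plain startsWith("VALUES") check also picks up the bookkeeping variables the extractor creates, such as VALUES_matchNr (the match count) and the VALUES_g* group variables, so tighten the prefix test if you only want the match values themselves.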

To write the extracted values into the file, something like the following should work (untested). Note that vars.get("VALUES") returns a single String, not an array; the individual matches are stored as separate variables named VALUES_1 … VALUES_n, with the count in VALUES_matchNr:
import java.io.BufferedWriter;
import java.io.FileWriter;

int matchCount = Integer.parseInt(vars.get("VALUES_matchNr")); // number of matches found
FileWriter fstream = new FileWriter("c:\\downloads\\results.txt", true);
BufferedWriter out = new BufferedWriter(fstream);
for (int i = 1; i <= matchCount; i++) {
    out.write(vars.get("VALUES_" + i)); // VALUES_1, VALUES_2, ...
    out.newLine();
    out.flush();
}
out.close();
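One caveat: the numbered VALUES_1 … VALUES_n variables (and VALUES_matchNr) are only created when the extractor's "Match No." field is set to -1, which tells JMeter to keep all matches instead of a single one.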

Related

Trying to read a csv file in Jmeter using Beanshell Sampler

I am trying to read the cells in a CSV file. This is the code I have. Please let me know where I am going wrong. I don't see any error in the log viewer, but it isn't printing anything.
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;

public class ApacheCommonsCSV {
    public void readCSV() throws IOException {
        String CSV_File_Path = "C:\\source\\Test.csv";
        // read the file
        Reader reader = Files.newBufferedReader(Paths.get(CSV_File_Path));
        // parse the file into csv values
        CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
        for (CSVRecord csvRecord : csvParser) {
            // Accessing Values by Column Index
            String name = csvRecord.get(0);
            String product = csvRecord.get(1);
            // print the value to console
            log.info("Record No - " + csvRecord.getRecordNumber());
            log.info("---------------");
            log.info("Name : " + name);
            log.info("Product : " + product);
            log.info("---------------");
        }
    }
}
You're declaring the readCSV() function but never calling it, which is why your code doesn't even get executed.
You need to add a readCSV() function call; if you're lucky it will start working, and if not, you will see an error in the jmeter.log file.
For example like this:
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class ApacheCommonsCSV {
    public void readCSV() throws IOException {
        String CSV_File_Path = "C:\\source\\Test.csv";
        // read the file
        Reader reader = Files.newBufferedReader(Paths.get(CSV_File_Path));
        // parse the file into csv values
        CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
        for (CSVRecord csvRecord : csvParser) {
            // Accessing Values by Column Index
            String name = csvRecord.get(0);
            String product = csvRecord.get(1);
            // print the value to console
            log.info("Record No - " + csvRecord.getRecordNumber());
            log.info("---------------");
            log.info("Name : " + name);
            log.info("Product : " + product);
            log.info("---------------");
        }
    }
}

new ApacheCommonsCSV().readCSV(); // here is the entry point: the call must sit outside the class body
Just make sure to have commons-csv.jar in JMeter Classpath
Last 2 cents:
Any reason for not using CSV Data Set Config?
Since JMeter 3.1 you should be using JSR223 Test Elements and Groovy language for scripting so consider migrating to Groovy right away. Check out Apache Groovy - Why and How You Should Use It for reasons, tips and tricks.
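For example, a minimal JSR223 Sampler sketch of the same CSV read (Groovy accepts plain Java syntax, so the code ports almost unchanged; the file path is the one from the question):
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

Reader reader = Files.newBufferedReader(Paths.get("C:\\source\\Test.csv"));
CSVParser csvParser = new CSVParser(reader, CSVFormat.DEFAULT);
for (CSVRecord csvRecord : csvParser) {
    // log is the standard binding JMeter provides to JSR223 elements
    log.info("Record No - " + csvRecord.getRecordNumber());
    log.info("Name : " + csvRecord.get(0));
    log.info("Product : " + csvRecord.get(1));
}
reader.close();
No class or entry-point call is needed here; the script body is executed directly.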

Evaluate expressions as well as regex in a single field in custom processors of NiFi

In my custom processor I have added the field below:
public static final PropertyDescriptor CACHE_VALUE = new PropertyDescriptor.Builder()
        .name("Cache Value")
        .description("Cache Value")
        .required(true)
        .addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
        .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
        .build();
where I expect to read flowfile attributes like ${fieldName}, as well as a regex like .* to read the full content, or some part of the content like $.nodename.subnodename.
For that I have added the code below:
for (FlowFile flowFile : flowFiles) {
    final String cacheKey = context.getProperty(CACHE_KEY).evaluateAttributeExpressions(flowFile).getValue();
    String cacheValue = context.getProperty(CACHE_VALUE).evaluateAttributeExpressions(flowFile).getValue();
    if (".*".equalsIgnoreCase(cacheValue.trim())) {
        final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        session.exportTo(flowFile, bytes);
        cacheValue = bytes.toString();
    }
    cache.put(cacheKey, cacheValue);
    session.transfer(flowFile, REL_SUCCESS);
}
How can I achieve this for some part of the content, like $.nodename.subnodename? Do I need to parse the JSON, or is there another way?
You will either have to parse the JSON yourself, or use an EvaluateJsonPath processor before reaching this processor to extract content values out to attributes via JSON Path expressions, and then in your custom code, reference the value of the attribute.
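If you do decide to parse the JSON inside the processor, a JSON Path library such as json-path is one option. A rough sketch under that assumption (the JsonPath dependency and the startsWith("$.") dispatch convention are additions, not part of the original code):
import com.jayway.jsonpath.JsonPath;

// inside the per-flowfile loop, after evaluating the property:
if (cacheValue != null && cacheValue.trim().startsWith("$.")) {
    final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    session.exportTo(flowFile, bytes); // read the flowfile content
    String content = bytes.toString();
    // evaluate the JSON Path expression against the content
    Object result = JsonPath.read(content, cacheValue.trim());
    cacheValue = String.valueOf(result);
}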

store csv values to hashmap in jmeter

I want to store CSV values in a HashMap. Suppose my column names in the CSV are errorcode and errormessage; can anyone please help me with sample code, as I am stuck. I have tried to do it in Java, but I am not able to read the values.
You can do it via Beanshell scripting and the bsh.shared namespace, assuming the errorcode and errormessage variables are populated (for example by a CSV Data Set Config).
To put the values:
Map map = null;
if (bsh.shared.map == void) {
    map = new HashMap();
} else {
    map = bsh.shared.map;
}
map.put(vars.get("errorcode"), vars.get("errormessage"));
bsh.shared.map = map;
To read the values:
Map map = bsh.shared.map;
Iterator iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
    Map.Entry pair = (Map.Entry) iterator.next();
    String errorcode = pair.getKey();
    String errormessage = pair.getValue();
    // do what you need to do with the values from the HashMap
}
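Keep in mind that the bsh.shared namespace is shared across all threads, so if several threads put values into the map at the same time you may want to synchronize access or use a java.util.concurrent.ConcurrentHashMap instead of a plain HashMap.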
See the How to use BeanShell: JMeter's favorite built-in component guide for comprehensive information on Beanshell scripting in JMeter.

How to write in response in InDesign scripts

I am running this script to get all the form fields of an INDD file.
var _ALL_FIELDS = "";
var allFields = myDocument.formFields;
for (var i = 0; i < allFields.length; i++) {
    var tf = allFields[i];
    alert(tf.id);
    alert(tf.label);
    alert(tf.name);
    alert(_ALL_FIELDS = _ALL_FIELDS + "," + tf.name);
}
What I have done is create a SOAP Java-based client that calls the runScript method.
Now I am able to get these fields, but how do I send them back to the client, i.e. how do I write them into the response, and then on the client side how do I read them from the response?
The code for calling the runScript method is:
Service inDesignService = new ServiceLocator();
ServicePortType inDesignServer = inDesignService.getService(new URL(parsedArgs.getHost()));
IntHolder errorNumber = new IntHolder(0);
StringHolder errorString = new StringHolder();
DataHolder results = new DataHolder();
inDesignServer.runScript(runScriptParams, errorNumber, errorString, results);
Also, I have found in the docs that the runScript method returns RunScriptResponse, but in my case it's returning void.
http://wwwimages.adobe.com/content/dam/Adobe/en/devnet/indesign/sdk/cs6/server/ids-solutions.pdf
It looks like you want to return an array of formField names. You could take advantage of the everyItem() collection interface method in your javascript:
var result = myDocument.formFields.everyItem().name;
result;
The JavaScript returns the value of the last expression in the called script, so, just to make it completely obvious, the last line is simply the value to be returned.
On the Java side, the runScript method is passing the results variable as the 4th parameter, and that's where you'll find your response. So, after your code snippet, you might have something like this:
List<String> formFieldNames = new ArrayList<String>();
if (results.value.getData() != null) {
    Data[] resultsArray = (Data[]) results.value.getData();
    for (int i = 0; i < resultsArray.length; i++) {
        formFieldNames.add(resultsArray[i].getData().toString());
    }
}

Pig Not Interpreting Int Correctly -- Custom Loader

So this is my first time ever using Pig, and I'm having a hard time getting it to interpret my data correctly. I don't want to have to define a schema for my input files until run time, so I wrote a super simple custom loader. The only change I made to PigStorage was to the getSchema method, which now reads the first two lines of my file and builds a schema from them:
public ResourceSchema getSchema(String location, Job job) throws IOException {
    BufferedReader br = new BufferedReader(new FileReader(location.replace("file://", "")));
    String[] line = br.readLine().split(","); // header row: field names
    String[] data = br.readLine().split(","); // first data row: used to infer types
    List<FieldSchema> fields = new ArrayList<FieldSchema>();
    for (int f = 0; f < line.length; f++) {
        Byte type = GetType(data[f].replace("\"", ""));
        fields.add(new FieldSchema(line[f].replace("\"", ""), type));
    }
    schema = new ResourceSchema(new Schema(fields));
    return schema;
}

private Byte GetType(Object Data) {
    try {
        int number = Integer.parseInt(Data.toString());
        return org.apache.pig.data.DataType.INTEGER;
    } catch (Exception e) {}
    try {
        double dnumber = Double.parseDouble(Data.toString());
        return org.apache.pig.data.DataType.DOUBLE;
    } catch (Exception e) {}
    return org.apache.pig.data.DataType.CHARARRAY;
}
When I load a file and run DESCRIBE on it, it looks like what I want, for instance:
{CU_NUMBER: int,CYCLE_DATE: chararray,JOIN_NUMBER: int,RSSD: int,CU_TYPE: int,CU_NAME: chararray}
And the first 10 Rows look like this:
(1,9/30/2013 0:00:00,2,"50377","1","MORRIS SHEPPARD TEXARKANA")
(5,9/30/2013 0:00:00,6,"859879","1","FIRST CASTLE")
(6,9/30/2013 0:00:00,7,"54571","1","THE NEW ORLEANS FIREMEN'S")
(12,9/30/2013 0:00:00,11,"56678","1","FRANKLIN TRUST")
(13,9/30/2013 0:00:00,12,"861676","1","E")
(16,9/30/2013 0:00:00,14,"59277","1","WOODMEN")
(19,9/30/2013 0:00:00,16,"863773","1","NEW HAVEN TEACHERS")
(22,9/30/2013 0:00:00,17,"61074","1","WATERBURY CONNECTICUT TEACHER")
(26,9/30/2013 0:00:00,19,"866372","1","FARMERS")
(28,9/30/2013 0:00:00,21,"953375","1","CENTRIS")
However, when I try to do stuff with the data like:
FOICU = LOAD 'file:///home/biadmin/NCUA/foicu.txt' USING org.apache.pig.builtin.PigStorageInferSchema(',', '-schema');
FirstSixColumns = FOREACH FOICU GENERATE CU_NUMBER, CYCLE_DATE, JOIN_NUMBER, RSSD, CU_TYPE, CU_NAME;
TopTen = LIMIT FirstSixColumns 10;
FOICUFiltered = FILTER TopTen BY CU_NUMBER > 20;
CU_FIVE = FILTER TopTen BY CU_NUMBER == 5;
DUMP FOICUFiltered;
DUMP CU_FIVE;
FOICUFiltered returns all 10 rows even though 7 of them have a CU_NUMBER less than 20:
(1,9/30/2013 0:00:00,2,"50377","1","MORRIS SHEPPARD TEXARKANA")
(5,9/30/2013 0:00:00,6,"859879","1","FIRST CASTLE")
(6,9/30/2013 0:00:00,7,"54571","1","THE NEW ORLEANS FIREMEN'S")
(12,9/30/2013 0:00:00,11,"56678","1","FRANKLIN TRUST")
(13,9/30/2013 0:00:00,12,"861676","1","E")
(16,9/30/2013 0:00:00,14,"59277","1","WOODMEN")
(19,9/30/2013 0:00:00,16,"863773","1","NEW HAVEN TEACHERS")
(22,9/30/2013 0:00:00,17,"61074","1","WATERBURY CONNECTICUT TEACHER")
(26,9/30/2013 0:00:00,19,"866372","1","FARMERS")
(28,9/30/2013 0:00:00,21,"953375","1","CENTRIS")
And CU_FIVE returns no rows at all.
Does anybody know what I've done wrong here, and is there a better way to dynamically load the schema at run time without using schema files?
