Memory limitation of JMeter Beanshell sampler holding many variables

I have a CSV file with 450K rows and 2 columns. Using the CSV Data Set Config results in a "SocketException: Too many open files" error on some load generators. To get around it, I used a Beanshell sampler to read the contents of the large CSV into memory just once; however, when it tries to save variable #22,770 it throws java.lang.ArrayIndexOutOfBoundsException: null
Here is my simple code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

BufferedReader lineReader = null;
try {
    lineReader = new BufferedReader(new FileReader("${skufile}"));
    String line = null;
    int count = 0;
    while ((line = lineReader.readLine()) != null) {
        String[] values = line.split(",");
        vars.put("sku_" + count, values[0]);
        vars.put("optionid_" + count, values[1]);
        log.info("Sku# " + count + " : " + vars.get("sku_" + count));
        count++;
    }
} catch (Throwable e) {
    log.error("Error in Beanshell", e);
    throw e;
}
I have tried using both props and vars.

The error is not connected with any form of limit. Take a look at line 22771 of your CSV file: it might, for example, not contain a comma, in which case split(",") returns a single-element array and accessing values[1] throws the ArrayIndexOutOfBoundsException.
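A quick way to confirm the diagnosis is to guard the array access in the loop; a sketch reusing the names from the question's script:

String[] values = line.split(",");
if (values.length < 2) {
    log.warn("Skipping malformed line #" + count + ": " + line); // no comma, so only one token
} else {
    vars.put("sku_" + count, values[0]);
    vars.put("optionid_" + count, values[1]);
}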
Holding the file in memory is not the best option. I would rather recommend going with the CSV Data Set Config and increasing the maximum number of open files, which might be as low as 1024 for a normal user on most Linux distributions. The steps are:
add the following lines to the /etc/security/limits.conf file:
your_user_name soft nofile 4096
your_user_name hard nofile 65536
you can also run the following command to raise the limit for the current shell session (check the current value with ulimit -n):
ulimit -n 8192
Be aware that since JMeter 3.1 it is recommended to use JSR223 test elements and the Groovy language for scripting. Groovy is not only compatible with the latest Java language features and offers syntactic sugar on top, it also has much better performance than Beanshell.
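For illustration, a JSR223/Groovy sketch of the same one-off read, assuming the skufile variable from the question (vars and log are the standard JSR223 bindings):

int count = 0
new File(vars.get('skufile')).eachLine { String line ->
    String[] values = line.split(',')
    if (values.length >= 2) {               // skip lines without a comma
        vars.put('sku_' + count, values[0])
        vars.put('optionid_' + count, values[1])
    }
    count++
}
log.info('Loaded ' + count + ' lines')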

Related

Spring Batch Best Architecture to Read XML

What is the best-performing architecture to read XML in Spring Batch? Each XML is approximately 300 KB in size and we are processing 1 million of them.
Our current approach:
30 partitions and 30 grids; each slave gets 166 XMLs
Commit chunk: 100
Application start memory is 8 GB
Using JAXB in the reader (default bean scope)
@StepScope
@Qualifier("xmlItemReader")
public IteratorItemReader<BaseDTO> xmlItemReader(
        @Value("#{stepExecutionContext['fileName']}") List<String> fileNameList) throws Exception {
    String readingFile = "File Not Found";
    logger.info("----StaxEventItemReader----fileName--->" + fileNameList.toString());
    List<BaseDTO> fileList = new ArrayList<BaseDTO>();
    for (String filePath : fileNameList) {
        try {
            readingFile = filePath.trim();
            Invoice bill = (Invoice) getUnMarshaller().unmarshal(new File(filePath));
            UnifiedInvoiceDTO unifiedDTO = new UnifiedInvoiceDTO(bill, environment);
            unifiedDTO.setFileName(filePath);
            BaseDTO baseDTO = new BaseDTO();
            baseDTO.setUnifiedDTO(unifiedDTO);
            fileList.add(baseDTO);
        } catch (Exception e) {
            UnifiedInvoiceDTO unifiedDTO = new UnifiedInvoiceDTO();
            unifiedDTO.setFileName(readingFile);
            unifiedDTO.setErrorMessage(e);
            BaseDTO baseDTO = new BaseDTO();
            baseDTO.setUnifiedDTO(unifiedDTO);
            fileList.add(baseDTO);
        }
    }
    return new IteratorItemReader<>(fileList);
}
Our questions:
Is this architecture correct?
Is there any performance or architectural advantage of using StaxEventItemReader and XStreamMarshaller over JAXB?
How do we handle memory properly to avoid slowdowns?
I would create a job per XML file by using the file name as a job parameter. This approach has many benefits:
Restartability: If a job fails, you only restart the failed file (from where it left off)
Scalability: This approach allows you to run multiple jobs in parallel. If a single machine is not enough, you can distribute the load on multiple machines
Logging: Logs are separate by design, you don't need to use an MDC or any other technique to separate logs
We are receiving the XML file paths in a *.txt file
You can create a script that iterates over these lines and launches a job per line (i.e. per file), as sketched below. GNU Parallel (or a similar tool) is a good option to launch jobs in parallel.
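A minimal sketch of that loop, assuming a Spring context that exposes a JobLauncher and the file-processing Job; the bean names, files.txt path, and inputFile parameter key are illustrative:

import org.springframework.batch.core.Job
import org.springframework.batch.core.JobParameters
import org.springframework.batch.core.JobParametersBuilder
import org.springframework.batch.core.launch.JobLauncher

// jobLauncher and invoiceJob are assumed to come from your Spring context
void launchPerFile(JobLauncher jobLauncher, Job invoiceJob, File fileList) {
    fileList.eachLine { String path ->
        JobParameters params = new JobParametersBuilder()
                .addString('inputFile', path.trim())   // file name as identifying job parameter
                .toJobParameters()
        jobLauncher.run(invoiceJob, params)            // one job execution per file
    }
}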

Files are overwriting instead of appending in Beanshell PostProcessor in JMeter while running 1000 threads

I have written the below code in a Beanshell PostProcessor, but when I run 1000 threads the files overwrite the existing content instead of appending. It works for 1-5 threads. Can anyone help me with this?
import org.apache.commons.io.FileUtils;
import java.util.ArrayList;
import java.util.Collections;

File fExceptionLog = new File("${logPath}/ExceptionLog.txt");
String extExceptionData = FileUtils.readFileToString(fExceptionLog);
id = vars.get("id");
try {
    String cDatestamp = "${__time(yyyyMMddHHmmssSSS)}";
    String cResponce = prev.getResponseDataAsString();
    String cRequest = prev.getQueryString();
    String cResponceCode = prev.getResponseCode();
    cTransactionName = prev.getSampleLabel();
    cResponseTime = prev.getTime();
    cSize = prev.getBytesAsLong();
    cIsSuccessful = prev.isSuccessful();
    File fRequestLog = new File("${logPath}/RequestLog.txt");
    File fHitLog = new File("${logPath}/HitLog.txt");
    File fResponceLog = new File("${logPath}/ResponceLog.txt");
    File fErrorLog = new File("${logPath}/ErrorLog.txt");
    String extHitData = FileUtils.readFileToString(fHitLog);
    String extRequestData = FileUtils.readFileToString(fRequestLog);
    String extResponceData = FileUtils.readFileToString(fResponceLog);
    String extErrorData = FileUtils.readFileToString(fErrorLog);
    log.info("cResponceCode" + cResponceCode);
    FileUtils.writeStringToFile(fHitLog, extHitData + id + "~" + cDatestamp + "~" + cTransactionName + "~" + cResponceCode + "~" + cResponseTime + "~" + cSize + "~" + cIsSuccessful + "\n");
    if (cResponceCode.equals("200")) {
        FileUtils.writeStringToFile(fRequestLog, extRequestData + id + "~" + cDatestamp + "~" + cTransactionName + "~" + cResponce + "\n");
        FileUtils.writeStringToFile(fResponceLog, extResponceData + id + "~" + cDatestamp + "~" + cResponceCode + "~" + cResponce + "\n");
    } else {
        FileUtils.writeStringToFile(fErrorLog, extErrorData + id + "~" + cDatestamp + "~" + cTransactionName + "~" + cResponce + "\n" + id + "~" + cDatestamp + "~" + cResponceCode + "~" + cResponce + "\n");
    }
} catch (Exception e) {
    FileUtils.writeStringToFile(fExceptionLog, extExceptionData + id + "~" + cDatestamp + "~" + cTransactionName + "~" + e + "\n");
}
You're violating at least 3 JMeter best practices:
You're referring to JMeter variables like ${logPath} while you should be using the vars shorthand instead, like vars.get("logPath")
You're using Beanshell, while starting from JMeter 3.1 you should be using JSR223 test elements and Groovy
And last but not least, you have introduced a race condition: when several threads write to the same file concurrently, data will be lost. You can put this Beanshell test element (along with the parent sampler(s)) under the Critical Section Controller, but that reduces concurrency of the parent sampler(s) to one at a time
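If you do insist on writing from a script, a minimal JSR223/Groovy sketch that appends (rather than re-reading and rewriting the whole file) and serializes writers with a JVM-wide monitor; using the JMeterUtils class as the lock is just one convenient shared object, and the HitLog.txt entry mirrors the question:

// a sketch: append one log entry per sample, serialized across threads
File hitLog = new File(vars.get('logPath'), 'HitLog.txt')
String entry = [vars.get('id'), prev.getSampleLabel(), prev.getResponseCode(), prev.getTime()].join('~')
synchronized (org.apache.jmeter.util.JMeterUtils) {    // JVM-wide shared monitor
    hitLog << entry << System.lineSeparator()          // << appends, it does not truncate
}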
If you need to write some metrics into a custom file in your own format, I would rather recommend migrating to the Flexible File Writer, which is extremely "flexible" with regard to which values to store; it accumulates multiple entries in memory and flushes them periodically in batches, so all the data is stored without collisions.
You can install Flexible File Writer using JMeter Plugins Manager

my access token is getting overwritten every time

import java.io.*;
import com.opencsv.CSVWriter;

File f = new File("C:\\Users\\Web\\Desktop\\Tokenss.csv");
FileWriter fw = new FileWriter(f);
BufferedWriter bw = new BufferedWriter(fw);
//var rc = prev.getResponseCode();
//ctx.getPreviousResult().getResponseHeaders();
String tok = vars.get("Token");
bw.write(tok);
bw.newLine();
bw.close();
fw.close();
Question: how do I write the access_token to the CSV always in a new row? It overwrites my access token every time.
You're overwriting the whole file each time you call your script. In order to append a new line at the end of the file, you need to change this:
FileWriter fw= new FileWriter(f);
to this:
FileWriter fw = new FileWriter(f, true);
where the second argument is the switch for "append" mode.
In general, since JMeter 3.1 you should be using JSR223 test elements and the Groovy language, so consider migrating to Groovy at the next opportunity. You will either be able to re-use your existing code or simplify it to something like:
new File('C:\\Users\\Web\\Desktop\\Tokenss.csv') << vars.get('Token') << System.getProperty('line.separator')
See Apache Groovy - Why and How You Should Use It article for more information on Groovy scripting in JMeter

JRecord - Formatting file transferred from Mainframe

I am trying to display a mainframe file in an Eclipse RCP application using the JRecord library. I already have the COBOL copybook as a text file.
To accomplish that:
1. I am transferring the file from the mainframe to my desktop through the Apache Commons Net FTPClient API
2. Now I have a text file
3. I am removing the newline and carriage return characters
4. Then I read it via a CobolIoProvider and convert it into an ArrayList of type AbstractLine
But I have offset issues because of some special characters.
Here are the issues:
when I don't perform step #3, there are offset issues right from record 1; hence I included step #3
even when I perform step #3, the first few thousand records seem to be formatted (or read) by the AbstractLineReader correctly until it encounters a special character (not sure, but that's my assumption)
Code snippet:
ArrayList<AbstractLine> lines = new ArrayList<AbstractLine>();
InputStream copyStream;
InputStream fis;
try {
    copyStream = new FileInputStream(new File(copybookfile));
    String filec = FileUtils.readFileToString(new File(datafile));
    System.out.println("initial len: " + filec.length());
    filec = filec.replaceAll("\r", "");
    filec = filec.replaceAll("\n", "");
    System.out.println("initial len: " + filec.length());
    fis = new ByteArrayInputStream(filec.getBytes());
    CobolIoProvider ioProvider = CobolIoProvider.getInstance();
    AbstractLineReader reader = ioProvider.newIOBuilder(copyStream, "REQUEST",
            Convert.FMT_MAINFRAME).newReader(fis);
    AbstractLine line;
    while ((line = reader.read()) != null) {
        lines.add(line);
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
What am I missing here? Is there additional preprocessing that I need to do for the file transferred from the mainframe?
If it is a text file (no binary data) with \r\n line delimiters, try:
ArrayList<AbstractLine> lines = new ArrayList<AbstractLine>();
InputStream copyStream;
try {
    copyStream = new FileInputStream(new File(copybookfile));
    AbstractLineReader reader = CobolIoProvider.getInstance()
            .newIOBuilder(copyStream, "REQUEST", ICopybookDialects.FMT_MAINFRAME)
            .setFileOrganization(Constants.IO_STANDARD_TEXT_FILE)
            .newReader(datafile);
    AbstractLine line;
    while ((line = reader.read()) != null) {
        lines.add(line);
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException e) {
    e.printStackTrace();
}
Note: setFileOrganization tells JRecord what type of file it is. So .setFileOrganization(Constants.IO_STANDARD_TEXT_FILE) tells JRecord it is a text file with \n or \r\n end-of-line markers. Here is a description of FileOrganisation in JRecord.
The special characters worry me though: if there is a \n in the data it will be treated as an end-of-line. You may need to do a binary transfer and keep the RDW (Record-Descriptor-Word) if it is a VB file.
If the file contains binary data, you will need to (see the sketch after this list):
do a binary transfer (with the RDW if it is a VB file)
use the appropriate file organization
specify EBCDIC (.setFont("cp037") tells JRecord it is US-EBCDIC)
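Put together, a sketch of the binary/VB variant; IO_VB and cp037 are illustrative here, so pick the organization and font that match your actual file:

InputStream copyStream = new FileInputStream(new File(copybookfile));
AbstractLineReader reader = CobolIoProvider.getInstance()
        .newIOBuilder(copyStream, "REQUEST", ICopybookDialects.FMT_MAINFRAME)
        .setFont("cp037")                       // US-EBCDIC
        .setFileOrganization(Constants.IO_VB)   // VB file transferred in binary with the RDW kept
        .newReader(datafile);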
I will add a second answer for Generating Code using the RecordEditor
If you are absolutely sure all the records are the same length, you can use the low-level routines to do the reading; see the ReadAqtrans.java program in https://sourceforge.net/p/jrecord/discussion/678634/thread/4b00fed4/
Basically you would do:
ICobolIOBuilder iobuilder = CobolIoProvider.getInstance()
        .newIOBuilder("copybookFileName", ICopybookDialects.FMT_MAINFRAME)
        .setFont("CP037")
        .setFileOrganization(Constants.IO_FIXED_LENGTH);
LayoutDetail layout = iobuilder.getLayout();
FixedLengthByteReader br
        = new FixedLengthByteReader(layout.getMaximumRecordLength() + 2);
br.open("...");
byte[] bytes;
while ((bytes = br.read()) != null) {
    lines.add(iobuilder.newLine(bytes));
}
Future Reference / Binary File
If the file does contain binary data, you really need to do a binary transfer. You may find the RecordEditor useful.
The RecordEditor 0.98 has a JRecord code generation function. The advantages of using the RecordEditor generate function are:
The RecordEditor will try to work out the appropriate file attributes by looking at the file
You can try out various attributes (left-hand pane) and see what the file looks like with those attributes (right-hand side)
When happy, hit the Generate button and the RecordEditor will generate JRecord code. There are several code templates available:
Standard - will generate basic JRecord code (with a field-name class)
lineWrapper - will generate a "wrapper" class with the Cobol fields represented as get/set methods
RecordEditor Generate
In the RecordEditor select Generate >>> Java~JRecord code for Cobol
Generate Screen
Enter the Cobol CopyBook / Sample file and adjust the attributes as needed
Code Template
Next you can select the Code Template
Generated Code
Finally the RecordEditor will generate JRecord code based on the Attributes entered.

JMeter: How to fetch and use CSV values from a Beanshell script so as to avoid the "Too many open files" error?

Let's say a CSV file (abc.csv) contains 10 records of login credentials (email, password), and I want to fetch those values all at once using a Beanshell script, just to make sure the CSV gets opened only once instead of 10 times (once per record), which causes the following error:
"Too many open files" in the response data.
Is there any way to do it?
I don't think your issue with "too many open files" will be fixed by the workaround you have in mind.
IMHO, it will just make your test less maintainable and less scalable.
It's a server configuration issue for your account.
You should apply what has been advised to you in this question:
JMeter Ubuntu: java.net.SocketException: Too many open files
You can do that with something like:
import org.apache.jmeter.threads.JMeterContextService;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.Map;
import java.util.Scanner;

File csvFile = new File("/home/yourname/folder/csvFile.csv");
Map<String, String> csvData = new HashMap<String, String>();
try (Scanner scanner = new Scanner(csvFile)) {
    while (scanner.hasNextLine()) {
        String[] line = scanner.nextLine().split(",");
        csvData.put(line[0], line[1]);
    }
} catch (FileNotFoundException ex) {
    ex.printStackTrace();
}
// putObject (rather than put) is needed because the value is not a String
JMeterContextService.getContext().getVariables().putObject("csvHashmap", csvData);
This can be done at the beginning; you open the file just once and afterwards use the hash map object stored in memory.
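In the samplers that need the credentials, the map can then be read back without reopening the file; a sketch, where the email lookup key is illustrative:

Map<String, String> csvData = (Map<String, String>) vars.getObject('csvHashmap')
String password = csvData.get(vars.get('email'))   // hypothetical key set elsewhere in the test plan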
