JMeter: use a separate .csv file for each thread

I want to run 5 threads, and each thread pulls in data from a different .csv file. For example, thread 1 maps to data_1.csv... I do NOT want to create 5 Thread Groups.
Please help. Thank you!

To be able to open different CSV files in the same test plan execution, you have to build the file name with the __threadNum function.
Following your example, you would set the filename to "data_${__threadNum}.csv" in the CSV Data Set Config so that the 5 threads load your 5 files.
The files are shared based on their filenames, so the sharing mode is not an issue: each thread resolves a different filename and therefore reads its own file.
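For illustration, a minimal sketch of the element under this approach (the variable names are placeholders, not taken from the question):

CSV Data Set Config
  Filename:       data_${__threadNum}.csv   (thread 1 opens data_1.csv, ..., thread 5 opens data_5.csv)
  Variable Names: col1,col2                 (placeholders)
  Sharing mode:   All threads               (each resolved filename is opened separately anyway)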

According to the user manual, by default the file is only opened once, and each thread will use a different line from the file. You can change the sharing mode, but not open several different CSV files, i.e. one file for each thread.
Update: on the other hand, if you don't have a lot of threads, you can try this.

Related

JMeter: read 2 different CSV files in different loops

Currently, I have a requirement where I need to make sure that data, once read, is not read again. Earlier I used HttpSimpleTableServer with keep=false when I had to run only one loop. However, now I need to run 2 loops, and the above option doesn't work because the same CSV is read from the start again for the second loop. So I was wondering whether there is a way to read data from a different CSV file per loop. If not, how can I make sure that different data is read from the CSV for every loop and no data is ever repeated? My JMeter version is 5.3.
You can use the CSV Data Set Config component to read the data from CSV files.
Set the `Recycle on EOF?` flag to false to read the data only once.
You may set the remaining flags based on your needs.
You may add two different CSV Data Set Config elements to work with different CSV files.
If you want to handle this programmatically, the API documentation will be useful.
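For illustration, a minimal sketch of two such elements with hypothetical file and variable names:

CSV Data Set Config (first loop's data)
  Filename:            loop1_data.csv
  Variable Names:      user1,pass1
  Recycle on EOF?:     False        (read the data only once)
  Stop thread on EOF?: False
  Sharing mode:        All threads

CSV Data Set Config (second loop's data)
  Filename:            loop2_data.csv
  Variable Names:      user2,pass2
  Recycle on EOF?:     False
  Stop thread on EOF?: False
  Sharing mode:        All threads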
If you need to read 2 different files in 2 different loops, you should consider going for the __CSVRead() function instead:
Create 2 files like file0.csv and file1.csv
Once done you will be able to:
${__CSVRead(file${__jm__Thread Group__idx}.csv,0)} - read first column
${__CSVRead(file${__jm__Thread Group__idx}.csv,1)} - read second column
${__CSVRead(file${__jm__Thread Group__idx}.csv,next)} - proceed to next row
etc.
The __CSVRead() function will proceed to the next file on the next Thread Group iteration.
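For illustration, with hypothetical two-column files and a Thread Group literally named "Thread Group", the values resolve per iteration roughly like this:

file0.csv contains: alice,secret1
file1.csv contains: carol,secret3

1st iteration (idx = 0): ${__CSVRead(file${__jm__Thread Group__idx}.csv,0)} resolves to alice
2nd iteration (idx = 1): ${__CSVRead(file${__jm__Thread Group__idx}.csv,0)} resolves to carol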
More information: How to Pick Different CSV Files at JMeter Runtime

How to fix threads not iterating through CSV file?

I have the following Test Plan:
Test Plan
Thread Group
Java Request
CSV Data Set Config
My Thread Group has 1 thread looping forever. To my understanding, the thread should go down the CSV file line by line, 1 line per loop. However, it stays on the same first line. If I have two threads, then the first thread will stay on the first line, second thread on the second line, and so on.
I have tried all the different options in CSV Data Set Config (even if it doesn't make sense to try those options), including:
Checked path to file is correct
Tried file encoding as empty, UTF-8, UTF-16
Checked delimiter was correct in CSV
Checked variable names were correct
Allow quoted data true and false
Recycle on EOF true and false
Stop thread on EOF true and false
Tried all sharing modes
I also ensured the CSV file had no empty lines. I am using JMeter 2.13 and the line break character in the CSV is CR LF if that helps.
I've looked at tutorials and other JMeter questions on here; it seems that by default the threads should go down the CSV file. I remember it was behaving properly a while back, but I'm unsure when it started behaving this way.
It is hard to say anything without seeing the code of the Java Request sampler that reads the variable from the CSV, and your CSV Data Set Config settings.
If you want each thread to read the next line from the CSV file on each iteration, you need to set the Sharing Mode to All Threads.
Try using another sampler, e.g. the Debug Sampler, as it might be the case that your approach to reading the variable from the CSV file is not valid.
According to JMeter Best Practices you should always be using the latest version of JMeter, and you're sitting on a 4-year-old version; it might be the case you're suffering from an issue which has already been fixed, so consider migrating to JMeter 5.1.1 or whatever is the latest stable version available on the JMeter Downloads page.
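If the issue is in how the variable is read, a minimal sketch of a Java Request sampler that looks up the CSV-driven variable on every iteration might look like the following (the class name and the variable name myVar are assumptions, not taken from the question):

import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.threads.JMeterContextService;

public class CsvEchoSampler extends AbstractJavaSamplerClient {

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();

        // Look up the current value of the CSV-driven variable on each iteration.
        // "myVar" is a placeholder for whatever Variable Name is configured in the
        // CSV Data Set Config element.
        String value = JMeterContextService.getContext().getVariables().get("myVar");

        result.setResponseData("Read from CSV: " + value, "UTF-8");
        result.setSuccessful(true);
        result.sampleEnd();
        return result;
    }
}

Comparing its response data across iterations (e.g. in a View Results Tree listener) should show whether the CSV Data Set Config is actually advancing or the sampler is simply reading a stale value.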

Download multiple files with multiple threads

I am trying to download a few files using 3 threads. My requirement is to download the files on 3 threads so that all files are downloaded 3 times into 3 different folders and don't overwrite each other. I am using __counter to append 1, 2, 3 to the folder names. The problem is that whether I set the thread count to 1, 2 or 3, it behaves the same in all scenarios: it always creates two folders, Folder1 and Folder2; all the files are downloaded into Folder1, and only the last file is downloaded into Folder2, with a size of 0 KB.
Number of threads = 1
Attaching what I have tried so far:
Please try without the counter function, with a prefix, and with two threads. I am guessing this based on the information below.
https://jmeter.apache.org/usermanual/component_reference.html#Save_Responses_to_a_file
Please note that Filename Prefix must not contain Thread related data, so don't use any Variable (${varName}) or functions like ${__threadNum} in this field
Or try to keep some delay/pacing between two threads.
Hope this helps.
Update:
Just give the folder path and the file name without an extension. It will save the file with the extension. I tried with an image and it was saved as Myfile1.jpeg.
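For illustration, a minimal sketch of the Save Responses to a file settings this describes (the folder and file name here are examples only, not taken from the question):

Save Responses to a file
  Filename prefix:               C:\downloads\run1\Myfile   (folder path plus file name, no extension)
  Don't add number to prefix:    unchecked                  (JMeter numbers the files: Myfile1, Myfile2, ...)
  Don't add content type suffix: unchecked                  (JMeter appends the extension, e.g. .jpeg)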

hadoop/HDFS: Is it possible to write from several processes to the same file?

e.g. create a 20-byte file.
1st process will write from 0 to 4
2nd from 5 to 9
etc.
I need this to parallelize creating a big file using my MapReduce job.
Thanks.
P.S. Maybe it is not implemented yet, but if it is possible in general, please point me to where I should dig.
Are you able to explain what you plan to do with this file after you have created it?
If you need to get it out of HDFS in order to use it, you can let Hadoop M/R create separate files and then use a command like hadoop fs -cat /path/to/output/part* > localfile to combine the parts into a single file saved off to the local file system.
Otherwise, there is no way you can have multiple writers open to the same file - reading and writing to HDFS is stream based, and while you can have multiple readers open (possibly reading different blocks), multiple writing is not possible.
Web downloaders request parts of the file using the Range HTTP header in multiple threads, and then either use tmp files before merging the parts together later (as Thomas Jungblut suggests), or they make use of random IO, buffering the downloaded parts in memory before writing them to the output file at the correct locations. Unfortunately, you don't have the ability to perform random writes with Hadoop HDFS.
I think the short answer is no. The way you accomplish this is to write your multiple 'preliminary' files to Hadoop and then M/R them into a single consolidated file. Basically, use Hadoop; don't reinvent the wheel.
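If you go the "write parts, then combine" route and want the consolidated file to stay in HDFS, a minimal sketch using the standard FileSystem API might look like this (the paths are hypothetical):

import java.util.Arrays;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class MergeParts {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path partsDir = new Path("/path/to/output");        // directory holding part-r-00000, part-r-00001, ...
        Path merged = new Path("/path/to/merged/bigfile");  // single consolidated output file

        FileStatus[] parts = fs.listStatus(partsDir, p -> p.getName().startsWith("part-"));
        Arrays.sort(parts);  // FileStatus sorts by path, so parts are appended in order

        // Only one writer is ever open on the consolidated file; the parts are streamed in sequentially.
        try (FSDataOutputStream out = fs.create(merged)) {
            for (FileStatus part : parts) {
                try (FSDataInputStream in = fs.open(part.getPath())) {
                    IOUtils.copyBytes(in, out, 4096, false);
                }
            }
        }
        fs.close();
    }
}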

Joining large files into one humongous file

Is there a way in Windows to link multiple files together without having to open the target file and read the contents of the source files to append them to the target file? Something like a shell link API?
Background
I have up to 8 separate processes creating parts of a data file that I want to recombine into one large file.
A less radical solution that should work just fine:
system("copy /b filefragment.1+filefragment.2+filefragment.3+....+filefragment.8 outputfile.bin");
No simple way that I know of. But here's a radical idea.
Use a virtual file system (Dokan, EldoS CBFS, Pismo Technic, etc.) to emulate one logical file that is actually backed by separate files on disk.
I have up to 8 separate processes creating parts of a data file that I want to recombine into one large file.
How do you want them concatenated? Mixed or one after the other?
If you want them mixed, you can just open() your output file and write() to it from your threads. If you want them one after the other, your best bet is to write to separate files and join them together at the end.
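For the "one after the other" option, a short sketch of plain sequential concatenation in Java (reusing the fragment names from the copy example above) could look like this:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class JoinFragments {
    public static void main(String[] args) throws IOException {
        Path output = Paths.get("outputfile.bin");

        // Append filefragment.1 .. filefragment.8 to the output, one after the other.
        try (OutputStream out = Files.newOutputStream(output)) {
            for (int i = 1; i <= 8; i++) {
                Files.copy(Paths.get("filefragment." + i), out);
            }
        }
    }
}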
