JMeter won't write to CSV

I've been scratching my head on this issue. It seems that JMeter won't write to one of my empty CSV files and throws this error, but another CSV file in the same thread group doesn't replicate the error and writes properly.
This is how I call my CSV file and how I write to it.

I don't see anything which could cause the error in your partial screenshot. If you need further assistance, consider:
Adding a Debug Sampler to visualize all the variables being used in the script.
Adding the full code as text, not as a screenshot, so the issue can be reproduced.
Going forward it might be a better idea to use e.g. the Flexible File Writer instead of Groovy for writing the variables into the file: if you run your script with 2+ threads you might run into a race condition, with several threads concurrently writing into the file, which will most probably result in data corruption or loss.
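The race condition the last point warns about can be sketched in plain Java (the class name, lock, and temp-file setup here are illustrative, not part of the original script): without a shared lock, concurrent appends can interleave mid-line; with one, every line arrives intact.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class SafeCsvAppend {
    // Shared lock: every thread that appends must synchronize on it,
    // otherwise interleaved writes can corrupt lines in the file.
    private static final Object LOCK = new Object();

    static void appendLine(Path file, String line) throws IOException {
        synchronized (LOCK) {
            Files.write(file, List.of(line + System.lineSeparator()),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    public static void main(String[] args) throws Exception {
        Path out = Files.createTempFile("results", ".csv");
        Thread[] threads = new Thread[4]; // stand-ins for JMeter threads
        for (int t = 0; t < threads.length; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100; i++) {
                    try {
                        appendLine(out, id + "," + i);
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        // All 4 * 100 lines should be present and intact.
        System.out.println(Files.readAllLines(out).size());
    }
}
```

A component like the Flexible File Writer does this coordination for you, which is why it's preferable to hand-rolled Groovy file writes under load.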


The loop with one go-exiftool instance hangs on a large number of files

I'm looping over ~10k big files using go-exiftool.
I'm using one instance of go-exiftool to get info for all required files.
This code is called 10k times in the loop, with a different file each time:
fileInfos := et.ExtractMetadata(file)
After the ~7k loops the program hangs. I debugged go-exiftool and found that it hangs in
https://github.com/barasher/go-exiftool/blob/master/exiftool.go#L121
on the line:
fmt.Fprintln(io.WriteCloser, "-execute")
If I understood correctly, io.WriteCloser holds the instance returned by exec.Command(binary, initArgs...).StdinPipe().
So, the questions are:
Does exec.Command have an execution limit?
If not, what else could be the reason?
Does it depend on the file sizes? I tried another folder; it worked for 35k files and then hung. How can I check that?
UPDATE:
I tried running the same file in a 10k-iteration loop. That works fine. It looks like it runs out of memory; could that be? I see no problem in the system memory graph. Or maybe stdin is overflowing. I have no idea how to check that.
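Regarding the stdin-overflow hypothesis: OS pipes do have a fixed buffer (commonly 64 KiB), and a parent that keeps writing while nobody drains the child's output pipe will eventually block on both sides. This doesn't prove that's what go-exiftool hits (it does read stdout between commands, though stderr is another pipe that can fill the same way), but the pattern and the fix can be sketched with a plain `cat` child process standing in for exiftool:

```go
package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

// pump writes n command lines to a child process's stdin while a separate
// goroutine drains its stdout, and returns how many lines came back.
// Without the draining goroutine, writing more than a pipe buffer's worth
// of data would deadlock: the child blocks on a full stdout pipe, and the
// parent blocks on Fprintln to a full stdin pipe.
func pump(n int) int {
	cmd := exec.Command("cat") // stand-in for the exiftool child process
	stdin, _ := cmd.StdinPipe()
	stdout, _ := cmd.StdoutPipe()
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan int)
	go func() {
		// Drain stdout concurrently so the child never blocks.
		count := 0
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			count++
		}
		done <- count
	}()

	// Write far more than one pipe buffer's worth of data.
	for i := 0; i < n; i++ {
		fmt.Fprintln(stdin, "-execute")
	}
	stdin.Close() // signal EOF so the child terminates

	count := <-done
	cmd.Wait()
	return count
}

func main() {
	fmt.Println(pump(200000)) // prints 200000
}
```

If the hang is pipe-related, checking whether go-exiftool drains the child's stderr (not just stdout) would be the first thing to look at; exiftool can emit many warnings on problematic files.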

How to fix threads not iterating through CSV file?

I have the following Test Plan:
Test Plan
Thread Group
Java Request
CSV Data Set Config
My Thread Group has 1 thread looping forever. To my understanding, the thread should go down the CSV file line by line, 1 line per loop. However, it stays on the same first line. If I have two threads, then the first thread will stay on the first line, second thread on the second line, and so on.
I have tried all the different options in CSV Data Set Config (even where an option didn't make sense to try), including:
Checked path to file is correct
Tried file encoding as empty, UTF-8, UTF-16
Checked delimiter was correct in CSV
Checked variable names were correct
Allow quoted data true and false
Recycle on EOF true and false
Stop thread on EOF true and false
Tried all sharing modes
I also ensured the CSV file had no empty lines. I am using JMeter 2.13 and the line break character in the CSV is CR LF if that helps.
I've looked at tutorials and other JMeter questions on here; it seems that by default the threads should go down the CSV file. I remember it behaving properly a while back, but I'm unsure when it started behaving this way.
It is hard to say anything without seeing the code of your Java Request sampler that reads the variable from the CSV, and your CSV Data Set Config settings.
If you want each thread to read the next line from the CSV file on each iteration, you need to set the Sharing Mode to All Threads.
Try using another sampler, e.g. the Debug Sampler, as it might be that your approach to reading the variable from the CSV file is not valid.
According to JMeter Best Practices you should always be using the latest version of JMeter, and you're sitting on a four-year-old version. It might be that you're suffering from an issue which has already been fixed, so consider migrating to JMeter 5.1.1, or whatever the latest stable version at the JMeter Downloads page is.
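For reference, this is roughly what a CSV Data Set Config element looks like in the saved .jmx file (an illustrative fragment from memory, not your actual plan; file name and variable names are made up). The "All threads" sharing mode is stored as shareMode.all:

```xml
<CSVDataSet guiclass="TestBeanGUI" testclass="CSVDataSet"
            testname="CSV Data Set Config" enabled="true">
  <stringProp name="filename">data.csv</stringProp>
  <stringProp name="fileEncoding">UTF-8</stringProp>
  <stringProp name="variableNames">user,password</stringProp>
  <stringProp name="delimiter">,</stringProp>
  <boolProp name="quotedData">false</boolProp>
  <boolProp name="recycle">true</boolProp>
  <boolProp name="stopThread">false</boolProp>
  <!-- "All threads": every thread pulls the next unread line -->
  <stringProp name="shareMode">shareMode.all</stringProp>
</CSVDataSet>
```

Diffing this element in your plan against a freshly created one can reveal a setting that the GUI doesn't make obvious.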

Loading Data into the application from GUI using Ruby

Problem:
Hi everyone, I am currently building an automation suite using Ruby, Selenium WebDriver and Cucumber to load data into the application through its GUI. I take input from mainframe .txt files. The scenarios are, for example: create a customer, then load multiple accounts for them as per the data provided in the input.
Current Approach
Execute the scenario using a rake task, passing the line number as a parameter, so the script is executed for only one set of data.
To read the data for a particular line, I'm using the code below:
File.readlines(file_path)[line_number.to_i - 1]
My purpose in loading line by line is to keep the execution running even if one line fails to load.
Shortcomings
Suppose I have to load 10 accounts for a single customer. My current script will run 10 times, loading one account each time. I want something that can load all the accounts in a single go.
What I am looking for
To overcome the above shortcoming, I want to capture the entire data set for a single customer from the file (accounts etc.) and load it into the application in a single execution.
I also have to keep track of execution time and memory allocation.
Please share your thoughts on this approach; any suggestions or improvements are welcome. (Sorry for the long post.)
The first thing I'd do is break this down into steps -- as you said in your comment, but more formally here:
Get the data that applies to all records. Put up a page with the necessary information (or support command-line specification if that's not too much work?).
For each line in the file, do the following (automated):
Get the web page for inputting its data;
Fill in the fields;
Submit the form.
Given this, I'd say the 'for each line' instruction should definitely be reading a line at a time from the file using File.foreach or similar.
Is there anything beyond this that needs to be taken into account?
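Following that suggestion, here is a minimal sketch of the "load all accounts for one customer in a single run" idea. The two-column `customer_id,account_no` line format is an assumption about the input file, and the method name is made up: read the file once with File.foreach, group the accounts per customer, and drive the load from the grouped hash.

```ruby
require "tempfile"

# Group account numbers by customer id, streaming the file a line at a
# time with File.foreach (no need to slurp the whole file into memory).
def accounts_by_customer(path)
  groups = Hash.new { |h, k| h[k] = [] }
  File.foreach(path) do |line|
    customer, account = line.strip.split(",", 2)
    next if customer.nil? || customer.empty? # skip blank/malformed lines
    groups[customer] << account
  end
  groups
end

# Usage with a throwaway input file:
file = Tempfile.new("input")
file.write("C1,ACC1\nC1,ACC2\nC2,ACC3\n")
file.close

accounts_by_customer(file.path).each do |customer, accounts|
  # In the real suite this is where the Cucumber steps would create the
  # customer once and then fill in each account form before submitting.
  puts "#{customer}: #{accounts.join(' ')}"
end
# prints:
# C1: ACC1 ACC2
# C2: ACC3
```

This keeps the per-line fault isolation you wanted (a bad line is skipped, not fatal) while letting each customer be processed in one execution instead of one rake invocation per account.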

Spark: Silently execute sc.wholeTextFiles

I am loading about 200k text files in Spark using input = sc.wholeTextFiles("hdfs://path/*").
I then run println(input.count).
It turns out that my Spark shell outputs a ton of text (the path of every file), and after a while it just hangs without returning my result.
I believe this may be due to the amount of text output by wholeTextFiles. Do you know of any way to run this command silently? Or is there a better workaround?
Thanks!
How large are your files?
From the wholeTextFiles API:
Small files are preferred, large files are also allowable, but may cause bad performance.
In conf/log4j.properties, you can suppress excessive logging, like this:
# Set everything to be logged to the console
log4j.rootCategory=ERROR, console
That way, you'll get back only the res result line in the REPL, just like in the plain Scala (the language) REPL.
Here are all other logging levels you can play with: log4j API.
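If editing conf/log4j.properties isn't convenient, the level can also be changed from inside the running shell via SparkContext.setLogLevel (available since Spark 1.4); this is a sketch of the same count with logging quieted first:

```scala
// In spark-shell: silence INFO/WARN chatter before the heavy call.
sc.setLogLevel("ERROR")

val input = sc.wholeTextFiles("hdfs://path/*")
println(input.count)
```

This only suppresses log output; if the hang persists with logging off, the bottleneck is more likely the cost of listing and reading 200k small files themselves.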

Help in understanding this bash file

I am trying to understand the code on this page: https://github.com/corroded/git-achievements/blob/gh-pages/git-achievements
and I'm kind of at a loss as to how it actually works. I do know some bash and shell scripting, but how does this script actually "store" how many times you've used a command (I'm guessing by saving into a text file?), and how does it "sense" that you actually typed a git command? I have a feeling it's line 464 onwards that does it, but I don't quite follow the logic.
Can anyone explain this in a bit more understandable context?
I plan to add achievements for other commands, and I hope to get an idea of HOW to go about it without randomly copying and pasting stuff and voodoo.
Yes, the script proper starts at line 464; everything before that is helper functions. I don't know how it gets installed, but I would assume you have to call this script instead of the normal git command. It just checks whether the first parameter is achievement, and if not, regular git is executed with the remaining parameters. Afterwards it checks whether an error happened (and exits if so), and then it calls log_action and check_for_achievments. log_action just writes the issued command with a date into a text file, while check_for_achievments scans that log file for certain events. If you want to add another achievement, you have to do it in check_for_achievments.
Just look at how the big case statement handles it (most of the achievements call count_function, which counts the number of usages of the command and matches when a power of 2 is reached).
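The mechanism described above can be sketched in a few lines of shell (the log file path and function bodies here are illustrative, not the actual code from git-achievements):

```shell
#!/bin/sh
# Two pieces: log_action appends each issued command to a log file, and
# count_function counts how often a command appears in that log, reporting
# an achievement when the count hits a power of two.

LOG_FILE="${TMPDIR:-/tmp}/achievements.log"
: > "$LOG_FILE"   # start with an empty log for this demo

log_action() {
    # Record the command with a date, one line per invocation.
    echo "$(date '+%Y-%m-%d') $1" >> "$LOG_FILE"
}

count_function() {
    # Count occurrences of the command in the log.
    n=$(grep -c " $1\$" "$LOG_FILE")
    # n is a power of two exactly when n > 0 and (n & (n - 1)) == 0.
    if [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]; then
        echo "achievement: $1 used $n times"
    fi
}

log_action "git commit"
log_action "git commit"
count_function "git commit"   # 2 is a power of two, so this prints
```

Adding a new achievement in this scheme amounts to adding another branch that inspects the log file, which matches where the answer says to hook in (check_for_achievments).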
