How to write a script in the Beanshell PreProcessor in the JMeter tool - jmeter

I am trying to write a script in the Beanshell PreProcessor to manipulate an input text file containing a list of locations. I want to pass Location 1 as input for the 1st user's destination, Location 2 as the second user's destination, and so on... I also want to send a combination of locations for some users. Please help me with this.
Thanks in advance.

If you need to parameterize your test so different users use different locations from the text file - you don't even need Beanshell. Take a look at the __StringFromFile() function - it reads the next line from the specified file each time it's called.
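For example, assuming the locations live in a file called locations.txt (the file name is an assumption), a sampler parameter could be set to:

${__StringFromFile(locations.txt)}

Each time a virtual user hits the sampler, the function returns the next line of the file.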
If you still want to use Beanshell - just treat it as Java, as it's almost fully Java-compliant. To be completely sure that your test will work - write it the J2SE 1.4 way.
Be aware that if your script logic is complex, does something "heavy", and/or you plan to produce immense load - it's better to consider the JSR223 PreProcessor and the Groovy scripting language, as:
Groovy is even more Java-compliant than Beanshell
Groovy engine performance is much higher
See the Beanshell vs JSR223 vs Java JMeter Scripting: The Performance-Off You've Been Waiting For! guide for benchmarks of the different scripting engines, instructions on installing the Groovy engine, and scripting best practices.
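For illustration, a minimal JSR223 PreProcessor sketch in Groovy could look like this; the file name locations.txt and the variable name destination are assumptions, and ctx.getThreadNum() returns the zero-based number of the current virtual user:

// Read the list of locations (re-read per call here for brevity only)
def locations = new File('locations.txt').readLines()
// Map the 1st user to Location 1, the 2nd user to Location 2, and so on
def index = ctx.getThreadNum() % locations.size()
vars.put('destination', locations[index])
// For users that need a combination of locations, join several lines, e.g.:
// vars.put('destination', locations[index] + ',' + locations[(index + 1) % locations.size()])

The sampler can then reference the chosen value as ${destination}.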

Related

JMeter - when NOT to use Cache compiled script if available

I want to know when checking the Cache compiled script if available checkbox is wrong.
Following Best Practices, there are some situations where Cache compiled script shouldn't be used, but the example about not using ${varName} seems wrong: I did a test, and the value it takes is the updated value of ${varName}, not the first value.
When using JSR 223 elements, it is advised to check Cache compiled script if available property to ensure the script compilation is cached if underlying language supports it. In this case, ensure the script does not use any variable using ${varName} as caching would take only first value of ${varName}.
Does someone know of a real case where it's wrong to use caching?
EDIT
I checked using ${varName} in the script, and the results are similar with and without caching:
I defined a JMeter variable called aa with value 1, and created a script:
def aa = "2";
aa = "3";
log.info("${aa}");
Value 1 was returned in both states of the checkbox, so it doesn't relate to caching.
I also tried with Beanshell (a non-compiled language, without the def aa = "2"; line) and got the same results.
What the documentation means is that whenever ${varName} has a different value, a new entry is stored in the cache, which will end up filling it with useless data.
So in this case it is wrong, and ${varName} should be replaced with vars.get("varName").
In fact, I don't see a real reason for unchecking this option, provided you use the right JMeter syntax.
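To make the difference concrete, here is a minimal Groovy sketch (varName is just a placeholder name):

// Bad: JMeter substitutes ${varName} into the script text before compilation,
// so each distinct value yields a different script to compile and cache
log.info('value: ${varName}')

// Good: the script text never changes and the value is fetched at run time
log.info('value: ' + vars.get('varName'))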
The option is unchecked by default due to the risk described above, and also for "non-consensus" reasons:
https://bz.apache.org/bugzilla/show_bug.cgi?id=56554
http://mail-archives.apache.org/mod_mbox/jmeter-dev/201212.mbox/%3CCAH9fUpZ7dMvuxq9U5iQYoBtVX7G-TqOx4aJjjOnPcp%3D5Fc%3D8Qw%40mail.gmail.com%3E
http://mail-archives.apache.org/mod_mbox/jmeter-dev/201301.mbox/%3CCAH9fUpbnde3mnyKmHXwoQeL-SqZUA0Wt1%3D%3D-XMxQWq3ZAS6Pvw%40mail.gmail.com%3E
As to performance, it is exactly the same whether or not you check it for a language that does not support compilation, as the first thing JMeter does is check supportsCompilable before using the checkbox, see:
https://github.com/apache/jmeter/blob/trunk/src/core/org/apache/jmeter/util/JSR223TestElement.java#L171
You should not be using caching with a scripting engine which does not support caching of compiled scripts. Since only Groovy is capable of compiling scripts, you should tick this box for Groovy and untick it for the other engines (otherwise the caching branch, which makes no sense for them, would still be triggered each time the script is called).
A well-behaved Groovy engine should:
Compile the script at runtime to avoid interpreting it on each call
Cache the compiled script to avoid recompilation
Recompile the script and update the cache if any changes are made to it.
Inlining JMeter functions and variables into scripts is a little bit dangerous for any language, as it might resolve into something which causes a compilation failure or, even worse, results in code you don't expect. In the case of Groovy, the JMeter variables syntax conflicts with Groovy's GString templates.
So inlining variables also causes the overhead of recompiling the script each time it's called.
So please continue following JMeter's Best Practices and remember one more little tip: avoid scripting where possible, as no scripting option performs as fast as Java does; check out the Beanshell vs JSR223 vs Java JMeter Scripting: The Performance-Off You've Been Waiting For! guide for details.

How to remove result files in runtime?

I'm trying to remove result files from a listener, but it won't work. It seems JMeter locks the result files at runtime.
The screenshot below shows that I save the result to a CSV file, 'raw-result-table.csv'.
In the setUp Thread Group, I added an OS Process Sampler to remove the result files. See the screenshot below.
It cannot remove the files. I think it is because JMeter locks the file at runtime.
Please note that the OS Process Sampler itself is correct: it can remove the files when I disable the thread 'AD'. I've tried a BeanShell script and the result is the same.
Actually, you won't be able to delete the file which is being used for storing the results of the current session. There are also some issues with your test design:
You should not be using any Listeners, especially View Results in Table / Tree; they consume a lot of resources and may ruin your test.
You should be running your test in command-line non-GUI mode. You can combine it with deleting the previous results file, like:
del *result*.csv && jmeter -n -t test.jmx -l result.csv
Upon test completion you can open the result.csv file in the JMeter GUI and perform the analysis.
You should be using JSR223 Test Elements and the Groovy language instead of Beanshell (the same applies to functions: substitute the __Beanshell() function with the __groovy() function), as Groovy has much better performance.
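If the cleanup has to live inside the test plan, a minimal JSR223 Sampler sketch in Groovy for the setUp Thread Group could look like the following; the file name pattern is an assumption, and the file of the current run will stay locked, so the new results must be written under a different name:

// Delete result files left over from previous runs in the working directory
new File('.').listFiles().findAll { it.name ==~ /.*result.*\.csv/ }.each {
    log.info('Deleting ' + it.name + ': ' + it.delete())
}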

JMeter results size limit

I am running a stability test (60hrs) in JMeter. I have several graphs in the test plan to capture system resources like CPU, threads, and heap.
The size of the View_Results_Tree.xml file is 9GB after 24hrs. I am afraid JMeter won't sustain 60hrs.
Is there a size limit for View_Results_Tree.xml or for the results folder in JMeter?
What are the best practices to follow in JMeter before running such long tests? I am looking for recommended config/properties for such long tests.
Thanks
Veera.
There is no results file limit as long as it fits on your hard drive for storage, or in your RAM to open and analyze.
The general recommendations are:
Use CSV format instead of XML
Store only those metrics which you really need; saving unnecessary stuff causes massive memory and disk I/O overheads.
Look into the jmeter.properties file (located in JMeter's "bin" folder) for the properties whose names start with jmeter.save.saveservice, i.e.
#jmeter.save.saveservice.output_format=csv
#jmeter.save.saveservice.assertion_results_failure_message=true
#etc.
Copy them all to the user.properties file, set the "interesting" properties to true and the others to false - that will save a lot of disk space and release valuable resources for the load testing itself.
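For example, a user.properties fragment that switches to CSV and drops the heavyweight fields might look like this (keep whichever fields you actually need for your analysis):

jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.response_data=false
jmeter.save.saveservice.samplerData=false
jmeter.save.saveservice.requestHeaders=false
jmeter.save.saveservice.responseHeaders=false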
See 9 Easy Solutions for a JMeter Load Test "Out of Memory" Failure for a more detailed explanation of the above recommendations and a few more JMeter performance tuning tweaks.
There are no limits on file size in JMeter; the limit is your disk space.
From the file name, I guess you chose XML output; it is better to choose CSV output (see below for another reason to do so).
Besides, ensure you're not using the GUI for load testing in JMeter, which is a bad practice that will certainly break your test.
Switch to non-GUI mode and ensure you follow those recommendations.
jmeter -n -t test.jmx -l results.csv
Since JMeter 3.0, you can even generate a report at the end of the load test, or from an existing CSV (JTL, not XML format) file, see:
http://jmeter.apache.org/usermanual/generating-dashboard.html
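For example (the file and folder names are placeholders, and the -o folder must be empty or non-existent):

jmeter -n -t test.jmx -l results.csv -e -o dashboard   (generate the report at the end of the run)
jmeter -g results.csv -o dashboard   (generate it later from an existing results file)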
As you need the GUI for monitoring, run JMeter in GUI mode only for the monitoring part.
I agree with the answer provided by UBIK LOAD PACK. But in case you need the results stored somewhere, you don't need to worry about the size of the file: the best solution is using Grafana and Graphite (or InfluxDB) for realtime reports and efficient storage.
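If you go that way, the usual wiring is JMeter's Backend Listener with the Graphite client; the host and prefix below are placeholders:

Backend Listener implementation: org.apache.jmeter.visualizers.backend.graphite.GraphiteBackendListenerClient
graphiteHost: graphite.example.com
graphitePort: 2003
rootMetricsPrefix: jmeter.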

How can I have different states of expansion in LuaTeX/LuaLaTeX, for debugging for instance?

I am preparing LaTeX/TeX fragments with Lua programs that get information from SQL requests to a database (LuaSQL).
I wish I could see intermediate states of expansion for debugging purposes, but also to check what has been brought in by the SQL requests and the Lua processing.
My dream would be, for instance, to see the code of my LaTeX page as if I had typed it manually, with all the information given by the SQL requests and the Lua processing.
I would then run a first pass with my Lua programs and SQL requests to build valid and readable LuaLaTeX code that I could amend if necessary; then I would compile that file again to obtain the wanted PDF document.
Today, I use a Lua development environment, ZeroBrane Studio, to execute and test the Lua chunk before I integrate it into my LuaLaTeX code. For instance:
my Lua chunk:
for k, v in pairs(data.param) do
  print('\\gdef\\' .. k .. '{' .. v .. '}')
end
Lua print output:
\gdef\pB{0.7}
\gdef\pAinterB{0.5}
\gdef\pA{0.4}
\gdef\pAuB{0.6}
LuaLaTeX code:
nothing visible! except that now I can use \pA, for instance, in the code
my dream would be to have, in the LuaLaTeX code:
\gdef\pB{0.7}
\gdef\pAinterB{0.5}
\gdef\pA{0.4}
\gdef\pAuB{0.6}
Maybe a solution would lie in the use of the expl3 extension? But since I am familiar neither with it nor with the precise TeX expansion process, I prefer to ask you experts before I invest heavily in understanding this module.
Addition:
Pushing the reflection further, a consequence would be that from LaTeX code I get LaTeX code instead of, for instance, a PDF file. This implies using only the first two of the four TeX processors described by Eijkhout in "TeX by Topic": the input processor and the expansion processor (with a controlled depth of expansion), not the execution processor nor the visual processor. Moreover, there would be a need to show the intermediate state, which means a new processor able to turn tokens back into readable strings and correct TeX/LaTeX code that can be processed later.
Unless somebody has already done or seen something like that, I feel that my wish may be unfeasible in the short and middle term. What is your feeling - should I abandon all hope?

Code coverage tools for validating shell scripts

Is there any way to measure the coverage of shell scripts? I have a project with lots of shell scripting, and I need to ensure static analysis can be performed on the coverage of the shell scripts. Is there any tool available?
I seriously doubt that any static code analysis can be performed on shell scripts - especially because shell scripts are supposed to call external programs and branch on what these external programs return, and there are myriads of external programs and external environment states. It's similar to the problem of statically analyzing code that relies heavily on eval-like mechanisms - and shell scripting is all about eval-style programming.
However, there are some general pointers that could prove useful for "proper" validation, code coverage and documentation of shell scripts, as major languages have:
You can always run a script with the -x (AKA xtrace) option - it will output a trace like the following to stderr:
+ log_end_msg 0
+ [ -z 0 ]
+ retval=0
+ log_end_msg_pre 0
+ :
+ log_use_fancy_output
+ TPUT=/usr/bin/tput
+ EXPR=/usr/bin/expr
+ [ -t 1 ]
+ [ xxterm != x ]
+ [ xxterm != xdumb ]
Bash makes it possible to redirect this stream to a separate file descriptor (using the BASH_XTRACEFD variable) - that's much easier to parse in practice.
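A minimal sketch of that redirection (the descriptor number and file name are arbitrary choices):

#!/usr/bin/env bash
exec 7> xtrace.log     # open file descriptor 7, pointing at a log file
BASH_XTRACEFD=7        # make bash write the xtrace stream there
set -x                 # enable tracing from this point on
echo "hello"           # the trace of this command lands in xtrace.log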
It's not trivial, but it's possible to write a program that uses the xtrace output to find which pieces of code were executed and produce a fancy "code coverage" report - e.g. what was called how many times, and which pieces of code weren't run at all and thus lack test coverage.
In fact, there's a wonderful tool named shcov already written that uses this approach - albeit it's somewhat simplistic and doesn't handle all possible cases very well (especially when it comes to long and complex lines).
Last, but not least - there's the minimalistic shelldoc project (akin to javadoc) that helps generate documentation from comments in shell scripts. Yep, that's a shameless plug :)
I don't think there are COTS tools available for test coverage of shell scripts - or for scripting languages in general, and there are lots of them.
Another poster suggested an ad hoc approach that might work with some tools: get them to dump some trace data, and try to match that up with the actual code to get your coverage. He says it sort of works... that's the problem with most heuristics.
Another approach one might take to construct a test coverage tool for your favorite scripting language is covered by my technical paper on a general approach for building test coverage tools using program transformations. My company builds a line of such tools for the more popular languages this way.
You might try looking at shcov, a GPL v2-licensed, Python-based tool. It seems to have been abandoned by the author, but it does produce HTML-based graphical reports and seemed (in my limited testing) to be reasonably accurate in terms of coverage analysis.
I have written a tool that can measure coverage for shell scripts; its name is shAge, which stands for shell script coverage. The project is hosted here.
To measure the coverage of any shell script, do the following:
First, download shAge.jar
Make sure you have JDK 1.6 update 65 onwards
Run the program like:
java -jar shAge.jar hello.sh
The content of the shell script will be executed, and finally a report will be generated
The report is generated as an .html file named after the script, e.g. hello.sh.html
