Graphs generated but show "Waiting for samples" in JMeter

Hi, I have the same question as JMeter: jp@gc Graphs Generator: I got a .png with just the text "Waiting for sample...". The .jtl file has been created without an empty line, and I have edited the user.properties file.
I followed the steps mentioned in this link for the graph generator.
sh jmeter -t /home/Annie/JMeter/grp.jmx -n -l /home/Annie/JMeter/g.jtl -JTEST_RESULTS_FILE=/home/Annie/JMeter/g.jtl
Creating summariser <summary>
Created the tree successfully using /home/Annie/JMeter/grp.jmx
Starting the test # Mon Oct 16 11:27:30 IST 2017 (1508133450438)
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary + 1 in 00:00:03 = 0.3/s Avg: 3133 Min: 3133 Max: 3133 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 14 in 00:00:14 = 1.0/s Avg: 2731 Min: 2098 Max: 4216 Err: 0 (0.00%) Active: 0 Started: 5 Finished: 5
summary = 15 in 00:00:18 = 0.9/s Avg: 2757 Min: 2098 Max: 4216 Err: 0 (0.00%)
Tidying up ... # Mon Oct 16 11:27:48 IST 2017 (1508133468522)
... end of run
The log shows:
WARN o.a.j.v.ViewResultsFullVisualizer:Error loading result renderer: org.apache.jmeter.visualizers.RenderInBrowser
java.lang.NoClassDefFoundError: javafx/embed/swing/JFXPanel
Caused by: java.lang.ClassNotFoundException: javafx.embed.swing.JFXPanel
What should be done to get the graph?

My expectation is that you are using OpenJDK on Linux, which doesn't include JavaFX.
Use your Linux distribution's package manager to get Oracle Java 8 and make sure JMeter is configured to use Oracle Java instead of OpenJDK.
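A minimal sketch of the switch, assuming Oracle Java 8 landed under /usr/lib/jvm/java-8-oracle (the actual path varies by distribution); JMeter's startup script honours JAVA_HOME:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle   # assumed install path, adjust to yours
export PATH="$JAVA_HOME/bin:$PATH"
java -version    # should now report the Oracle runtime, not OpenJDK
sh jmeter -t /home/Annie/JMeter/grp.jmx -n -l /home/Annie/JMeter/g.jtl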
If you are trying to use the PerfMon Metrics Collector listener in GUI mode to test it, make sure a JMeter test is running at that time: it is a Listener, so it needs to process sample events in order to display anything; even a Dummy Sampler firing every N seconds will do. See the How to Monitor Your Server Health & Performance During a JMeter Load Test guide for more details.
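It is also worth re-checking the results-file settings, since the Graphs Generator works from the .jtl. A sketch of typical user.properties entries (these are standard JMeter property names with example values; which ones you actually need depends on the graphs you generate, and $JMETER_HOME is a placeholder for your installation directory):

cat >> "$JMETER_HOME/bin/user.properties" <<'EOF'
jmeter.save.saveservice.output_format=csv
jmeter.save.saveservice.timestamp_format=ms
jmeter.save.saveservice.thread_counts=true
EOF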

Related

Running a Lambda function from a container image which runs a bash script when invoked. The script works perfectly but it still throws Runtime.ExitError at the end

I am running a Lambda function from a container image. Basically, the image runs a bash script whenever someone invokes the function. The script works perfectly, but it still throws a Runtime.ExitError at the end. How can I fix this?
START RequestId: 13b31468-aa8d-45cc-9c9a-b4fcaaca9b79 Version: $LATEST
main.sh
2022-04-05 16:29:09 arghyadockercli01
2022-03-31 17:38:17 aws-cloudtrail-logs-201043775914-116d21af
2022-04-05 18:45:45 dkabdjkdjse
2022-04-04 14:31:41 dsfdhfjdhdhhjkjfhjdhfj
2022-04-04 05:13:06 ghfjgfhjg
2022-04-01 14:10:49 s3-trail-log-1
2022-03-31 15:46:30 s3trail-bucket
file copied 01
main.sh
2022-04-05 16:29:09 arghyadockercli01
2022-03-31 17:38:17 aws-cloudtrail-logs-201043775914-116d21af
2022-04-05 18:45:45 dkabdjkdjse
2022-04-04 14:31:41 dsfdhfjdhdhhjkjfhjdhfj
2022-04-04 05:13:06 ghfjgfhjg
2022-04-01 14:10:49 s3-trail-log-1
2022-03-31 15:46:30 s3trail-bucket
file copied 01
END RequestId: 13b31468-aa8d-45cc-9c9a-b4fcaaca9b79
REPORT RequestId: 13b31468-aa8d-45cc-9c9a-b4fcaaca9b79 Duration: 11095.96 ms Billed Duration: 11096 ms Memory Size: 128 MB Max Memory Used: 49 MB
RequestId: 13b31468-aa8d-45cc-9c9a-b4fcaaca9b79 Error: Runtime exited without providing a reason
Runtime.ExitError
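A likely cause, assuming the image's ENTRYPOINT/CMD runs the script directly: once the script finishes, the process exits without completing the Lambda Runtime API request/response cycle, and Lambda reports exactly this "Runtime exited without providing a reason" error. A minimal custom-runtime bootstrap sketch, assuming curl is available in the image and the real work lives in main.sh:

#!/bin/bash
set -euo pipefail
while true; do
  HEADERS="$(mktemp)"
  # Block until the Runtime API hands over the next invocation event
  EVENT_DATA=$(curl -sS -LD "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" \
    | tr -d '[:space:]' | cut -d: -f2)
  # Do the actual work
  ./main.sh
  # Post a response for this request id so the runtime ends the cycle cleanly
  curl -sS -X POST \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response" \
    -d '{"status":"ok"}'
done

With a loop like this as the entrypoint, the container keeps serving invocations instead of exiting after the first one.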

Hadoop container failed even though 100 percent completed

I have set up a small cluster with Hadoop 2.7, HBase 0.98, and Nutch 2.3.1. I have written a custom job that first combines docs of the same domain; after that, each URL of the domain is obtained from a cache (i.e., a list), the corresponding key is used to fetch the object via datastore.get(url_key), and then, after the score is updated, it is written via context.write.
The job should complete after all docs are processed, but what I have observed is that each attempt fails due to a timeout even though its progress shows 100 percent complete. Here is the log:
attempt_1549963404554_0110_r_000001_1 100.00 FAILED reduce > reduce node2:8042 logs Thu Feb 21 20:50:43 +0500 2019 Fri Feb 22 02:11:44 +0500 2019 5hrs, 21mins, 0sec AttemptID:attempt_1549963404554_0110_r_000001_1 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
attempt_1549963404554_0110_r_000001_3 100.00 FAILED reduce > reduce node1:8042 logs Fri Feb 22 04:39:08 +0500 2019 Fri Feb 22 07:25:44 +0500 2019 2hrs, 46mins, 35sec AttemptID:attempt_1549963404554_0110_r_000001_3 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
attempt_1549963404554_0110_r_000002_0 100.00 FAILED reduce > reduce node3:8042 logs Thu Feb 21 12:38:45 +0500 2019 Thu Feb 21 22:50:13 +0500 2019 10hrs, 11mins, 28sec AttemptID:attempt_1549963404554_0110_r_000002_0 Timed out after 1800 secs Container killed by the ApplicationMaster. Container killed on request. Exit code is 143 Container exited with a non-zero exit code 143
Why is this so? When an attempt is 100.00 percent complete, it should be marked as successful. Unfortunately, there is no error information other than the timeout in my case. How do I debug this problem?
My reducer is more or less the one posted in another question:
Apache Nutch 2.3.1 map-reduce timeout occurred while updating the score
I have observed that, across the three attempts in the log, the execution times vary widely. Please take another look at the job you are executing.
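Beyond that, "Timed out after 1800 secs" points at the MapReduce task timeout: a task is killed when it neither emits records nor reports progress within mapreduce.task.timeout milliseconds. Calling context.progress() periodically inside the long-running reducer loop is the proper fix; as a stopgap, the timeout can be raised per run. A sketch, where the jar and class names are placeholders and -D only takes effect if the job uses ToolRunner/GenericOptionsParser:

# Raise the task timeout to one hour for this run (value is in milliseconds)
hadoop jar my-job.jar com.example.ScoreUpdateJob \
  -Dmapreduce.task.timeout=3600000 \
  <input_path> <output_path>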

JMeter testing SOAP web service programmatically

Using JMeter APIs, I tried to call a SOAP web service programmatically.
After executing the service, I checked the web service server logs: no transaction or entry had been made there. It seems the SOAP request was not sent to the server at all. But when I invoke the web service directly using the SOAPConnection call() method, without the JMeter APIs, it works fine and the server generates logs.
Could anyone please check this and correct my approach to sending SOAP requests using the JMeter APIs?
My full source code is as follows:
JMeterUtils.setJMeterHome("JmeterHome");
JMeterUtils.loadJMeterProperties("JmeterHome\\bin\\jmeter.properties");
JMeterUtils.initLogging();
JMeterUtils.initLocale();
HashTree testPlanTree=new HashTree();
HTTPSampler httpSampler=new HTTPSampler();
httpSampler.setDomain("domain");
httpSampler.setPort(<port number>);
httpSampler.setProtocol("http");
httpSampler.setPath("/wsdl");
httpSampler.setMethod("GET");
httpSampler.setName("Webservice Sampler");
httpSampler.setPostBodyRaw(true);
HeaderManager headerManager=new HeaderManager();
headerManager.add(new Header("SM_USER", "xyz#abc.com"));
headerManager.add(new Header("Content-Type","text/xml; charset=utf-8"));
headerManager.add(new Header("SOAPAction","http://example.com:<port number>/services/MyWebService?wsdl"));
httpSampler.setHeaderManager(headerManager);
HTTPArgument httpArgument=new HTTPArgument();
httpArgument.setValue(<SOAP Message>);
httpSampler.addTestElement(httpArgument);
LoopController loopController=new LoopController();
loopController.setLoops(1);
loopController.addTestElement(httpSampler);
loopController.setFirst(true);
loopController.initialize();
ThreadGroup threadGroup=new ThreadGroup();
threadGroup.setNumThreads(5);
threadGroup.setRampUp(1);
threadGroup.setSamplerController(loopController);
TestPlan testPlan=new TestPlan("Web Service Operations Test Plan");
testPlanTree.add("testPlan", testPlan);
testPlanTree.add("threadGroup",threadGroup);
testPlanTree.add("httpSampler",httpSampler);
testPlanTree.add("loopController",loopController);
SaveService.saveTree(testPlanTree, new FileOutputStream("jmeter_api.jmx"));
Summariser summary=null;
String summariserName=JMeterUtils.getPropDefault("summariser.name", "summary");
if (summariserName.length() > 0) {
summary = new Summariser(summariserName);
}
String reportFile = "report.jtl";
ResultCollector logger = new ResultCollector(summary);
logger.setFilename(reportFile);
testPlanTree.add(testPlanTree.getArray()[0], logger);
StandardJMeterEngine jmeter=new StandardJMeterEngine();
jmeter.configure(testPlanTree);
jmeter.run();
System.exit(0);
I am getting the following output in the JMeter log file:
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread finished: 1-1
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread is done: 1-3
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread finished: 1-3
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread is done: 1-4
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread finished: 1-4
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread is done: 1-5
2017/01/10 19:50:14 INFO - jmeter.threads.JMeterThread: Thread finished: 1-5
2017/01/10 19:50:14 INFO - jmeter.engine.StandardJMeterEngine: Notifying test listeners of end of test
2017/01/10 19:50:14 INFO - jmeter.reporters.Summariser: summary = 5 in 00:00:02 = 3.1/s Avg: 877 Min: 780 Max: 1088 Err: 0 (0.00%)
The following results are captured in the result file:
2017-01-10 19:50:13.070,894, Webservice Sampler,200,OK, 1-2,text,true,,35016,0,5,5,887,0,0
2017-01-10 19:50:13.070,1088, Webservice Sampler,200,OK, 1-1,text,true,,35016,0,5,5,1082,0,0
2017-01-10 19:50:13.466,813, Webservice Sampler,200,OK, 1-3,text,true,,35016,0,3,3,806,0,0
2017-01-10 19:50:13.666,780, Webservice Sampler,200,OK, 1-4,text,true,,35016,0,2,2,776,0,0
2017-01-10 19:50:13.866,811, Webservice Sampler,200,OK, 1-5,text,true,,35016,0,1,1,802,0,0
You are not going about creating a JMeter test the recommended way; check out Building a WebService Test Plan to learn how to do it using the JMeter GUI.
If for some reason you need to create a SOAP request programmatically, you can get a known-good test plan HashTree structure by calling the SaveService.loadTree() method with an existing .jmx script as the parameter; that way you will have a reference test plan in your debugger which you can re-create using the JMeter API.
You can find an example of adding a Header Manager to the HTTP Request here
In any case, I would strongly recommend double-checking that the script does what it is supposed to do, via a View Results Tree listener or a sniffer tool like Wireshark.
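One quick cross-check along those lines: the code above already serializes the plan to jmeter_api.jmx via SaveService.saveTree(), so you can replay exactly that plan through the stock JMeter CLI and compare the resulting .jtl (and a Wireshark capture) against the API-driven run:

# Replay the plan the code saved and compare results with the API-driven run
jmeter -n -t jmeter_api.jmx -l cli_run.jtl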

ab (Apache Benchmark) with authentication

I want to test the performance of my Apache server with ab (Apache Benchmark) with an authentication parameter.
I followed the steps in this tutorial:
Using Apache Benchmark (ab) on sites with authentication
and when I execute the command:
ab -c 1 -n 1 -C PHPSESSID=65pnirttcbn0l6seutjkf28452 http://my-web-site
the authentication does not pass:
testeur@ERP:~$ ab -c 1 -n 1 -C PHPSESSID=65pnirttcbn0l6seutjkf28452 http://my-web-site.com/mapviewimproved
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking my-web-site.com (be patient).....done
Server Software: Apache
Server Hostname: algeotrack.com
Server Port: 80
Document Path: /my-page
Document Length: 0 bytes
Concurrency Level: 1
Time taken for tests: 0.627 seconds
Complete requests: 1
Failed requests: 0
Write errors: 0
Non-2xx responses: 1
Total transferred: 335 bytes
HTML transferred: 0 bytes
Requests per second: 1.59 [#/sec] (mean)
Time per request: 627.320 [ms] (mean)
Time per request: 627.320 [ms] (mean, across all concurrent requests)
Transfer rate: 0.52 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 36 36 0.0 36 36
Processing: 591 591 0.0 591 591
Waiting: 591 591 0.0 591 591
Total: 627 627 0.0 627 627
Note that the application is developed with Zend Framework 1.
Can you help me, please?
You have to quote the cookie value:
ab -c 1 -n 1 -C 'PHPSESSID=65pnirttcbn0l6seutjkf28452' http://my-web-site.com/mapviewimproved
see this other question as reference: How do I pass a complex cookie to ab for testing?
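Separately, the "Non-2xx responses: 1" line in the output suggests the server answered with a redirect (most likely to a login page), i.e. the session cookie was not accepted. A quick sanity check with curl, reusing the URL and cookie value from the question, before running ab again:

# A 200 here means the cookie authenticates; a 302 means it does not
curl -s -o /dev/null -w '%{http_code}\n' \
  -b 'PHPSESSID=65pnirttcbn0l6seutjkf28452' \
  http://my-web-site.com/mapviewimproved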

What is the meaning of Karma runner log times

I've got a very simple question that I've tried to answer myself, but without satisfactory results.
An example is below:
INFO [karma]: Karma server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [launcher]: Starting browser Firefox
INFO [Chrome 28.0 (Linux)]: Connected on socket id MIsxYm-yXOtkIlbXrkr4
INFO [Chrome 28.0 (Linux)]: Connected on socket id Ek6biR3iiKgej2a-rkr5
INFO [Firefox 21.0 (Linux)]: Connected on socket id OcDqEq-VZ5o7tCjNrkr6
Chrome 28.0 (Linux): Executed 2 of 2 SUCCESS (1.655 secs / 1.392 secs)
Chrome 28.0 (Linux): Executed 2 of 2 SUCCESS (2.131 secs / 1.659 secs)
Firefox 21.0 (Linux): Executed 2 of 2 SUCCESS (2.351 secs / 1.414 secs)
TOTAL: 6 SUCCESS
The question is: what exactly do these times (1.655 secs / 1.392 secs) mean?
Correct me if I created a bad main question (title) :)
The two numbers are totalTime / netTime (source). The total time is the wall-clock time from start to end, and the net time is the sum of the time spent in individual tests (source). For the first Chrome line above, for example, 1.392 secs was spent inside the two tests, and the remainder of the 1.655 secs total is overhead such as communication with the browser and result reporting.
