I know how to prepare a test plan and run it in JMeter using the Java API; there are quite a number of examples showing how to do that. What's missing is a way to collect the results directly. I know it's possible to save the results in a .jtl file, but that would require me to open the file after saving it and parse it (depending on its format). I have seen that the API provides quite a number of Result classes, but I wasn't able to figure out how to use them. I have also tried debugging to figure out which classes are involved and to understand the execution model.
Any help would be really appreciated.
Right, I am not sure this is the right answer; I think there's no single right answer because it really depends on your needs. At least I managed to understand a bit more by debugging a test execution.
Basically, what I ended up doing was to extend ResultCollector and add it to the instance of TestPlan. The collector stores the events as they are received and prints them at the end of the test (but at that point you can do whatever you want with them).
If you have better approaches please let me know (I guess a more generic approach would be to implement SampleListener and TestStateListener without relying on the specific ResultCollector implementation; a sketch of that is at the end of this answer).
import java.util.LinkedList;

import org.apache.jmeter.reporters.ResultCollector;
import org.apache.jmeter.samplers.SampleEvent;

public class RealtimeResultCollector extends ResultCollector {

    LinkedList<SampleEvent> collectedEvents = new LinkedList<>();

    /**
     * When a test result is received, store it internally.
     *
     * @param event
     *            the sample event that was received
     */
    @Override
    public void sampleOccurred(SampleEvent event) {
        collectedEvents.add(event);
    }

    /**
     * When the test ends, print the response code for all the events collected.
     *
     * @param host
     *            the host the test was running from
     */
    @Override
    public void testEnded(String host) {
        for (SampleEvent e : collectedEvents) {
            System.out.println("TEST_RESULT: Response code = " + e.getResult().getResponseCode()); // or do whatever you want ...
        }
    }
}
And in the code of the main, or wherever you have created your test plan:
...
HashTree ht = new HashTree();
...
TestPlan tp = new TestPlan("MyPlan");
RealtimeResultCollector rrc = new RealtimeResultCollector();
// after a lot of configuration, before executing the test plan ...
ht.add(tp);
ht.add(ht.getArray()[0], rrc); // attach the collector under the test plan node
For the details about the code above, you can find a number of examples on zgrepcode.com.
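For reference, here is a rough, untested sketch of the more generic approach mentioned above: a plain test element implementing SampleListener and TestStateListener directly, without extending ResultCollector. The class name is mine; it would be added to the HashTree exactly like the collector above.

import java.util.Collections;
import java.util.LinkedList;
import java.util.List;

import org.apache.jmeter.samplers.SampleEvent;
import org.apache.jmeter.samplers.SampleListener;
import org.apache.jmeter.testelement.AbstractTestElement;
import org.apache.jmeter.testelement.TestStateListener;

public class GenericResultListener extends AbstractTestElement
        implements SampleListener, TestStateListener {

    // samplers may fire from many threads, so use a synchronized list
    private final List<SampleEvent> collectedEvents =
            Collections.synchronizedList(new LinkedList<>());

    @Override
    public void sampleOccurred(SampleEvent event) {
        collectedEvents.add(event);
    }

    @Override
    public void sampleStarted(SampleEvent event) { /* not needed here */ }

    @Override
    public void sampleStopped(SampleEvent event) { /* not needed here */ }

    @Override
    public void testStarted() { }

    @Override
    public void testStarted(String host) { }

    @Override
    public void testEnded() {
        testEnded("local");
    }

    @Override
    public void testEnded(String host) {
        for (SampleEvent e : collectedEvents) {
            System.out.println("TEST_RESULT: Response code = " + e.getResult().getResponseCode());
        }
    }
}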
I need to redo the setup for every thread (per thread number). Right now the setUp thread group runs twice first, and then the main thread group runs twice; both groups are configured with 2 threads.
The problem is that I end up getting requests with the same JSESSIONID.
I want it to look like this:
First thread:
CSRF-TOKEN
LOGIN
Test(first csrf login)
Second thread:
CSRF-TOKEN
LOGIN
Test(second csrf-token)
I know this can be done in a single thread, but there can be many such requests, so I need a single setup in order not to duplicate them.
UPDATED:
I changed my code as you suggested, and now it doesn't work.
I use this code in a PostProcessor:
import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;
import org.apache.jmeter.testelement.property.PropertyIterator;
import org.apache.jmeter.testelement.property.JMeterProperty;
CookieManager manager = ctx.getCurrentSampler().getCookieManager();
PropertyIterator iter = manager.getCookies().iterator();
while (iter.hasNext()) {
    JMeterProperty prop = iter.next();
    Cookie cookie = prop.getObjectValue();
    if (cookie.getName().equals("JSESSIONID")) {
        vars.putObject("JSESSIONID", cookie);
        break;
    }
}
while (iter.hasNext()) {
    JMeterProperty prop = iter.next();
    Cookie cookie = prop.getObjectValue();
    if (cookie.getName().equals("XSRF-TOKEN")) {
        vars.putObject("XSRF-TOKEN", cookie);
        break;
    }
}
This is the PreProcessor in the main thread group:
CookieManager manager = sampler.getCookieManager();
manager.add(vars.getObject("JSESSIONID"));
manager.add(vars.getObject("XSRF-TOKEN"));
But now it doesn't work.
I used to use:
props.put("JSESSIONID", cookie);
The correctness of your test design is a big question mark; normally you don't need to pass any data between thread groups.
However, if you've been told to do so by your not very competent team lead, then looking at my crystal ball I can see the following possible problems:
If you're using something like props.put("property-name", vars.get("csrf-token")), you're overwriting the previous value with the next one on each occurrence/iteration, because properties are global to the whole JVM process. Consider using ctx.getThreadNum() as a property prefix/postfix, as in the sketch below.
If you're using something like props.put("property-name", "${csrf-token}"), the first occurrence of the variable is cached and reused on subsequent iterations. Consider using vars.get("csrf-token") instead.
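A minimal JSR223 sketch of the per-thread property idea; it assumes the token was already extracted into a JMeter variable named csrf-token, and that the setUp and main thread groups use the same number of threads so the thread numbers line up:

// JSR223 PostProcessor in the setUp thread group: store one property per thread
props.put("csrf-token-" + ctx.getThreadNum(), vars.get("csrf-token"));

// JSR223 PreProcessor in the main thread group: read the matching property back
vars.put("csrf-token", (String) props.get("csrf-token-" + ctx.getThreadNum()));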
More information on these ctx, vars and props guys: Top 8 JMeter Java Classes You Should Be Using with Groovy
I am using Akka in a Play controller and performing an ask() to an actor named publish; internally, the publish actor performs asks to multiple actors and passes along a reference to the sender. The controller needs to wait for the responses from multiple actors and build a list of them.
Please find the code below; it only waits for one response and then terminates. Please suggest a fix.
// Performs ask to publish actor
Source<Object, NotUsed> inAsk = Source.fromFuture(ask(publishActor, service.getOfferVerifyRequest(request).getPayloadData(), 1000));
final Sink<String, CompletionStage<String>> sink = Sink.head();
final Flow<Object, String, NotUsed> f3 = Flow.of(Object.class).map(elem -> {
    log.info("Data in Graph is " + elem.toString());
    return elem.toString();
});
RunnableGraph<CompletionStage<String>> result = RunnableGraph.fromGraph(
    GraphDSL.create(
        sink, (builder, out) -> {
            final Outlet<Object> source = builder.add(inAsk).out();
            builder
                .from(source)
                .via(builder.add(f3))
                .to(out); // to() expects a SinkShape
            return ClosedShape.getInstance();
        }
    ));
ActorMaterializer mat = ActorMaterializer.create(aSystem);
CompletionStage<String> fin = result.run(mat);
fin.toCompletableFuture().thenApply(a -> {
    log.info("Data is " + a);
    return true;
});
log.info("COMPLETED CONTROLLER ");
If you expect several responses, ask won't cut it; it is only for a single request-response pair, where the response ends up in a Future/CompletionStage.
There are a few different strategies to wait for all the answers:
One is to create an intermediate actor whose only job is to collect all the answers and, when all partial responses have arrived, respond to the original requestor; that way you can still use ask to get a single aggregate response back. A sketch of such an actor follows.
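A rough sketch of such an intermediate actor (classic Akka Java API; the class name, the expected-count constructor argument, and the stop-when-done behaviour are all my assumptions):

import java.util.ArrayList;
import java.util.List;

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;

public class ResponseAggregator extends AbstractActor {

    private final int expected;        // how many partial responses to wait for
    private final ActorRef replyTo;    // the original requestor
    private final List<Object> received = new ArrayList<>();

    public ResponseAggregator(int expected, ActorRef replyTo) {
        this.expected = expected;
        this.replyTo = replyTo;
    }

    public static Props props(int expected, ActorRef replyTo) {
        return Props.create(ResponseAggregator.class, () -> new ResponseAggregator(expected, replyTo));
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
                .matchAny(msg -> {
                    received.add(msg);
                    if (received.size() == expected) {
                        // reply once with the aggregate, then stop
                        replyTo.tell(new ArrayList<>(received), self());
                        getContext().stop(self());
                    }
                })
                .build();
    }
}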
Another option is to use Source.actorRef to get an ActorRef that you can use as the sender together with tell (skipping ask entirely). Inside the stream you then take elements until some criterion is met (time has passed or enough elements have been seen). You may have to add an operator that mimics the ask response timeout, to make sure the stream fails if the actor never responds. See the sketch below.
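A sketch of the Source.actorRef variant (classic Akka streams Java API, roughly 2.5.12+, which accepts java.time.Duration; the buffer size, expectedReplies, message, and the timeout are arbitrary assumptions, and materializer should come injected from Play):

import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletionStage;

import akka.actor.ActorRef;
import akka.japi.Pair;
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

// materialize the stream first, to get the ActorRef that will receive the replies
Pair<ActorRef, CompletionStage<List<Object>>> pair =
        Source.<Object>actorRef(256, OverflowStrategy.dropNew())
                .take(expectedReplies)                     // complete after N replies
                .completionTimeout(Duration.ofSeconds(5))  // fail if they never arrive
                .toMat(Sink.seq(), Keep.both())
                .run(materializer);

ActorRef replyTo = pair.first();
publishActor.tell(message, replyTo);                       // replies flow into the stream
pair.second().thenAccept(replies ->
        log.info("Got " + replies.size() + " replies"));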
There are some other issues with the code you shared. One is creating a materializer on each request: materializers have a lifecycle and will fill up your heap over time; you should instead have a single materializer injected by Play.
With the given logic there is no need whatsoever for the GraphDSL; it is only needed for complex streams with multiple inputs and outputs or cycles. You should be able to compose the operators using the Flow API alone (see for example https://doc.akka.io/docs/akka/current/stream/stream-flows-and-basics.html#defining-and-running-streams ), as in the sketch below.
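For completeness, the original single-response pipeline without GraphDSL would be roughly this, reusing the question's own publishActor and payload expression (mat being the injected materializer):

CompletionStage<String> fin =
        Source.fromFuture(ask(publishActor, service.getOfferVerifyRequest(request).getPayloadData(), 1000))
                .map(elem -> elem.toString())
                .runWith(Sink.head(), mat);
fin.thenAccept(a -> log.info("Data is " + a));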
I have a case where I want to run a PHPUnit test and check the behaviour of the class under test, as follows:
public function it_allows_to_add_items()
{
    // Create prophesies
    $managerProphecy = $this->getProphet(ListingManager::class);
    $listingItemProphecy = $this->getProphet(ListingItemInterface::class);

    $listing = factory(\App\Misc\Listings\Listing::class)->create();
    $manager = new ListingManager($listing);

    $item = factory(\App\Misc\Listings\ListingItem::class)->make(['listing_id' => null]);
    $item2 = factory(\App\Misc\Listings\ListingItem::class)->make(['listing_id' => null]);

    $manager->addItem($item);

    $managerProphecy->validate($listingItemProphecy)->shouldBeCalledTimes(2);

    $manager->addItem($item2);

    $this->assertTrue(true);
}
Is that even possible?
Of course I'm getting:
1) GenericListingManagerTest::it_allows_to_add_items
Some predictions failed:
Double\App\Misc\Listings\ListingManager\P2:
Expected exactly 2 calls that match:
Double\App\Misc\Listings\ListingManager\P2->validate(exact(Double\ListingItemInterface\P1:00000000058d2b7a00007feda4ff3b5f Object (
'objectProphecy' => Prophecy\Prophecy\ObjectProphecy Object (*Prophecy*)
)))
but none were made.
I think your approach to this test is a bit off. If I understand correctly, you want to verify that addItem(object $item) works properly, i.e. that the manager contains the item and the item is the same one you added. For this you should not need prophecies, and in fact your test does not actually use the prophecies you created. Depending on what your manager looks like, you could write something like this:
function test_manager_add_items_stores_item_and_increases_count()
{
    $manager = new ListingManager(); // (2)
    $item = new ListingItem();       // (3)

    $initialCount = $manager->countItems(); // (1)

    $manager->addItem($item);

    $this->assertEquals($initialCount + 1, $manager->countItems());
    // Assuming offset equals (item count - 1), just like in a numeric array
    $this->assertSame($item, $manager->getItemAtOffset($initialCount));
}
(1) Assuming your manager has a countItems() method, you can check before and after whether the count increased by the number of added items.
(2) We want to test the real manager; that's why we don't need a mock created by Prophecy here, and we can use a real item because it's just a value.
(3) I'm not sure why you have a ListingItemInterface. Are there really different implementations of ListingItem, and if so, do you really want a generic ListingManager that possibly contains all of them, or do you need a more specific one to make sure each manager contains only its specific kind of item? This really depends on your use case, but it looks like you might be violating the I (Interface Segregation Principle) or the L (Liskov Substitution Principle) in SOLID.
Depending on your use case, you might want to add real items, e.g. two different types, to make clear that putting two different implementations of the interface in there is intended; or you can do it like above, just add a ListingItem, and verify that each item in the manager implements the interface (I leave finding the assertion for that to you ;)). Of course you can use your factory to create the item as well. The important thing is that we test with assertSame() whether the managed object and the one we created initially are the same, meaning they reference the exact same object.
You could add additional tests if you want to ensure additional behaviour, such as restricting the kind of items you can put in the manager or how it behaves when you put an invalid object in there.
The important thing is that you want to test the actual behaviour of the manager, and that's why you don't want to use a mock for it. You could use a mock for the ListingItemInterface, should you really require it. In that case the test would probably look something like this:
function test_manager_add_items_stores_item_and_increases_count()
{
    $manager = new ListingManager();
    $dummyItem = $this->prophesize(ListingItemInterface::class);

    $initialCount = $manager->countItems();

    $manager->addItem($dummyItem->reveal());

    $this->assertEquals($initialCount + 1, $manager->countItems());
    $this->assertSame($dummyItem->reveal(), $manager->getItemAtOffset($initialCount));
}
Edit: if addItem() triggers the validation that you want to skip, e.g. because the empty item you provide is not valid and you don't care, you can use PHPUnit's own mock framework to partially mock the manager, like this:
$item = new ListingItem();

$managerMock = $this->getMockBuilder(ListingManager::class)
    ->setMethods(['validate'])
    ->getMock();

$managerMock
    ->expects($this->exactly(2))
    ->method('validate')
    ->with($this->identicalTo($item))
    ->willReturn(true);

$managerMock->addItem($item);
$managerMock->addItem($item);
You don't have to assert anything at the end, because expects() already adds an assertion. Your manager will work normally except for validate(), meaning the code in addItem() runs for real, and if validate() is called in there (once per item) the test will pass.
I want to do sentence detection using OpenNLP and Hadoop. I have implemented it successfully in plain Java and now want to implement it on the MapReduce platform. Can anyone help me out?
I have done this in two different ways.
One way is to push your sentence-detection model out to each node into a standard directory (i.e. /opt/opennlpmodels/), read the serialized model in at the class level of your mapper class, and then use it appropriately in your map or reduce function.
Another way is to put the model in a database or in the distributed cache (as a blob or something; I have used Accumulo to store document-categorization models this way before), then at the class level make the connection to the database and read the model in as a ByteArrayInputStream. A distributed-cache sketch follows below.
I have used Puppet to push out the models, but use whatever you typically use to keep files up to date on your cluster.
Depending on your Hadoop version, you may be able to sneak the model in as a property on job setup, and then only the master (or wherever you launch jobs from) needs to have the actual model file on it. I've never tried this.
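A minimal sketch of the distributed-cache option, assuming the newer org.apache.hadoop.mapreduce API; the HDFS path, file names, and mapper types are my own placeholders:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import opennlp.tools.sentdetect.SentenceDetectorME;
import opennlp.tools.sentdetect.SentenceModel;

public class SentenceMapper extends Mapper<LongWritable, Text, Text, Text> {

    private SentenceDetectorME sd;

    // In the driver, ship the model with the job; the "#en-sent.bin" fragment
    // symlinks the cached file into each task's working directory:
    //   job.addCacheFile(new URI("hdfs:///models/en-sent.bin#en-sent.bin"));

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // a relative path works here because of the symlink created by the cache
        try (InputStream in = new FileInputStream("en-sent.bin")) {
            sd = new SentenceDetectorME(new SentenceModel(in));
        }
    }
}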
If you need to know how to actually use the OpenNLP sentence detector, let me know and I'll post an example.
HTH
import java.io.File;
import java.io.FileInputStream;

import opennlp.tools.sentdetect.SentenceDetector;
import opennlp.tools.sentdetect.SentenceDetectorME;
import opennlp.tools.sentdetect.SentenceModel;
import opennlp.tools.util.Span;

public class SentenceDetection {

    SentenceDetector sd;

    public Span[] getSentences(String docTextFromMapFunction) throws Exception {
        if (sd == null) {
            sd = new SentenceDetectorME(new SentenceModel(new FileInputStream(new File("/standardized-on-each-node/path/to/en-sent.zip"))));
        }
        /**
         * this gives you the actual sentences as a string array
         */
        // String[] sentences = sd.sentDetect(docTextFromMapFunction);
        /**
         * this gives you the spans (the char indexes of the start and end of each
         * sentence in the doc)
         */
        Span[] sentenceSpans = sd.sentPosDetect(docTextFromMapFunction);
        /**
         * you can do this as well to get the actual sentence strings based on the spans
         */
        // String[] spansToStrings = Span.spansToStrings(sentenceSpans, docTextFromMapFunction);
        return sentenceSpans;
    }
}
HTH... just make sure the file is in place. There are more elegant ways of doing this, but this works and it's simple.
I have the following unit test:
[TestMethod]
public void NewGamesHaveDifferentSecretCodesToThePreviousGame()
{
    var theGame = new BullsAndCows();
    List<int> firstCode = new List<int>(theGame.SecretCode);

    theGame.NewGame();
    List<int> secondCode = new List<int>(theGame.SecretCode);

    theGame.NewGame();
    List<int> thirdCode = new List<int>(theGame.SecretCode);

    CollectionAssert.AreNotEqual(firstCode, secondCode);
    CollectionAssert.AreNotEqual(secondCode, thirdCode);
}
When I run it in Debug mode, my code passes the test, but when I run the test as normal (run mode) it does not pass. The exception thrown is:
CollectionAssert.AreNotEqual failed. (Both collection contain same elements).
Here is my code:
// constructor
public BullsAndCows()
{
    Gueses = new List<Guess>();
    SecretCode = generateRequiredSecretCode();
    previousCodes = new Dictionary<int, List<int>>();
}

public void NewGame()
{
    var theCode = generateRequiredSecretCode();

    if (previousCodes.Count != 0)
    {
        if (!isPreviouslySeen(theCode))
        {
            SecretCode = theCode;
            previousCodes.Add(previousCodes.Last().Key + 1, SecretCode);
        }
    }
    else
    {
        SecretCode = theCode;
        previousCodes.Add(0, theCode);
    }
}
previousCodes is a property on the class whose type is Dictionary<int, List<int>>. SecretCode is also a property on the class, of type List<int>.
If I were to make a guess, I would say the reason is that NewGame() is called again while the first call hasn't really finished what it needs to do. As you can see, other methods are called from within NewGame() (e.g. generateRequiredSecretCode()).
When running in Debug mode, the slow pace of my pressing F10 gives the processes sufficient time to finish.
But I am not really sure how to fix that, assuming I am right in identifying the cause.
What happens to SecretCode when generateRequiredSecretCode() generates a duplicate? That case appears to be unhandled.
One possibility is that you are getting a duplicate, so SecretCode keeps its previous value. How does the generator work?
Also, you didn't show how the BullsAndCows constructor initializes SecretCode. Is it calling NewGame()?
I doubt the speed of keypresses has anything to do with it, since your test method calls the functions in turn without waiting for input. And unless generateRequiredSecretCode() is spawning a thread, it will complete whatever it is doing before it returns.
-- after update --
I see two bugs:
1) The very first SecretCode, generated in the constructor, is not added to previousCodes, so the duplicate check won't catch the case where the second game has the same code as the first.
2) Once previousCodes is populated, you don't handle the case where you generate a duplicate: a duplicate is previously seen, so you don't add it to previousCodes, but you don't update SecretCode either, so it keeps the old value.
I'm not exactly sure why this only shows up outside Debug mode, but it could be a difference in the way Debug mode handles the random number generator; see How to randomize in WPF. Release mode is faster, so both calls use the same timestamp as the seed and therefore generate exactly the same sequence of digits.
If that's the case, you can fix it by making the Random instance a class field instead of creating a new one on each call to the generator.
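I can't see generateRequiredSecretCode(), but if it creates a new time-seeded RNG on every call, hoisting the RNG to a field is the fix. A minimal illustration of the idea in Java (the question's code is C#, where new Random() is time-seeded by default; the class and method names here are mine):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class CodeGenerator {

    // One shared RNG for the lifetime of the object. Creating
    // `new Random(System.currentTimeMillis())` inside the method instead
    // can yield identical "random" codes when called in quick succession,
    // because both calls see the same millisecond timestamp.
    private final Random random = new Random();

    public List<Integer> generateCode(int digits) {
        List<Integer> code = new ArrayList<>();
        for (int i = 0; i < digits; i++) {
            code.add(random.nextInt(10)); // one digit, 0-9
        }
        return code;
    }
}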