Gatling: define message rate in minutes (JMS scenario)

I'm performing a load test against IBM MQ and would like to have 10 users submit 10 messages over 10 minutes (just as a proof of concept).
I'm injecting the respective load like this:
scn_message_ZIP_DP102.inject(rampUsers(10) over(10 minutes)).protocols(jmsConfigMQ1)
But when checking the logs I see the application is being flooded with messages. I'd expect just 10 messages to be submitted in a timeframe of 10 minutes.

Well, we have an answer: in 10 minutes you start 10 users, and each of them sends message after message in a 48-hour loop, so instead of 10 messages you probably have hundreds of millions of them. Remove the during loop and it should be fine, e.g.:
val scnMessageID14 = scenario("Load testing InboundQueue on MQ-HOST-1 with MessageID14")
  .exec(
    jms("F&F testing with MessageID 14")
      .send
      .queue("MESSAGES.QUEUE")
      .textMessage(message14)
  )

Related

How to limit the message consumption rate of a Kafka consumer in Spring Boot? (Kafka Streams)

I want to limit my Kafka consumer's message consumption rate to 1 message per 10 seconds. I'm using Kafka Streams in Spring Boot.
The following are the properties I tried to make this work, but it didn't work as expected (it consumed many messages at once).
config.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, brokersUrl);
config.put(StreamsConfig.APPLICATION_ID_CONFIG, applicationId);
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
//
config.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG,1);
config.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 10000);
Is there any way to manually ack (manual offset commits) in Kafka Streams? That would be useful to control the message consumption rate.
Please note that I'm using KStreams (Kafka Streams).
Any help is really appreciated. :)
I think you misunderstand what MAX_POLL_INTERVAL_MS_CONFIG actually does.
That is the maximum time the client is allowed to take between poll invocations.
From the docs:
controls the maximum time between poll invocations before the consumer will proactively leave the group (5 minutes by default). The value of the configuration request.timeout.ms (default 30 seconds) must always be smaller than max.poll.interval.ms (default 5 minutes), since that is the maximum time that a JoinGroup request can block on the server while the consumer is rebalancing
It says "maximum time", not a "delay", between poll invocations.
Kafka Streams will poll constantly; you cannot easily pause/start it to delay record polling.
To read an event every 10 seconds without losing consumers in the group due to lost heartbeats, you should use the Consumer API with the pause() method: pause, sleep for 10 seconds (Thread.sleep), then resume() and poll(), while setting max.poll.records=1
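The pause/sleep/resume cadence can be sketched as follows. To keep the sketch self-contained, a small invented interface (PausableSource) stands in for the real KafkaConsumer; in real code the calls map to consumer.pause(consumer.assignment()), Thread.sleep(...), consumer.resume(consumer.paused()), and consumer.poll(...) with max.poll.records=1:

```java
import java.util.List;

// Hypothetical stand-in for the few Consumer-API calls this pattern needs.
interface PausableSource {
    List<String> poll();  // with max.poll.records=1, returns at most one record
    void pause();         // stop fetching records, but poll() stays legal
    void resume();
}

class PacedConsumption {
    // Consume at most one record per round, waiting delayMillis between rounds.
    static int run(PausableSource consumer, int rounds, long delayMillis)
            throws InterruptedException {
        int consumed = 0;
        for (int i = 0; i < rounds; i++) {
            consumed += consumer.poll().size();  // at most one record here
            consumer.pause();                    // nothing is fetched while paused...
            Thread.sleep(delayMillis);           // ...so we can wait out the interval
            consumer.poll();                     // empty poll keeps max.poll.interval.ms happy
            consumer.resume();
        }
        return consumed;
    }
}
```

With a real KafkaConsumer, the empty poll while paused is what prevents the consumer from exceeding max.poll.interval.ms and being dropped from the group during the sleep.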
Finally, I achieved the desired consumption limit using Thread.sleep().
Since there is no way to control the message consumption rate using Kafka config properties alone, I had to control the rate of consumption in my application code.
For example, to limit consumption to, say, 4 messages per 10 seconds: consume 4 messages (keeping a count in parallel); once 4 records are consumed, put the thread to sleep for 10 seconds, then repeat the process.
I know it's not a good solution, but there was no other way.
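That count-then-sleep pacing can be sketched in plain Java like this (the message source is an ordinary iterator standing in for the Kafka consumption; the names are made up):

```java
import java.util.Iterator;

class BatchPacer {
    // Consume up to batchSize messages, then sleep windowMillis before the
    // next batch, repeating until the source runs dry.
    static int consumePaced(Iterator<String> messages, int batchSize,
                            long windowMillis) throws InterruptedException {
        int total = 0, inBatch = 0;
        while (messages.hasNext()) {
            messages.next();  // "process" the message
            total++;
            if (++inBatch == batchSize && messages.hasNext()) {
                Thread.sleep(windowMillis);  // wait out the rest of the window
                inBatch = 0;
            }
        }
        return total;
    }
}
```

In the scenario described above, batchSize would be 4 and windowMillis 10 000.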
thank you OneCricketeer

Do Gatling reports req/s include pauses and pace?

I'm running load tests in gatling, and noticed when I ramp 250 users over 10 seconds, the report gives me an average of 31 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .exec(saveData)
  .exec(processDocumentRequest)
val scn = List(OAuthRequest.inject(atOnceUsers(1)),
combinedScenario.inject(nothingFor(5 seconds),
rampUsers(250) over (10 seconds)));
setUp(scn).protocols(httpConf).maxDuration(60 minutes)
However, when I surround the scenario in a forever loop and put a 60 second pace in between each set of requests, the report then says I average about 8 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .forever(
    pace(60 seconds)
      .exec(saveData)
      .exec(processDocumentRequest)
  )
Is this simply because the report factors in the 50 seconds in between iterations where 0 requests are being sent? Can I assume that it's still sending around 31 req/s for the short bursts of requests being sent each minute?
Yes - the reports just show the actual throughput achieved during the scenario, not some hypothetical maximum. The number you get could be constrained by your scenario or by the application under test; you would need to run some experiments to confirm. As a rough sanity check (assuming each user runs the scenario's two requests once per iteration): 250 users x 2 requests = 500 requests, and 500 spread over the 60-second pace is about 8.3 req/s, matching the paced report, while the unpaced run pushes those requests through in a burst of roughly 16 seconds, i.e. about 31 req/s.
With the pace in the scenario, you should also be able to increase the number of concurrent users, based on your initial testing.

How to pause sending requests in JMeter for a certain time when getting 502

I am currently doing performance testing for an application. We need to test with a number of concurrent users (e.g. 300). We are using the Stepping Thread Group and it is working fine.
The test takes about 38 minutes. At some point, when the server's memory is overloaded, the memory is cleaned and the restart takes 10 to 20 seconds; during that time we get 502 Bad Gateway responses.
We have almost 6 modules (each in a Transaction Controller), and each controller has about 20 to 30 API calls.
I just want to pause for 20 seconds when we first encounter a 502. Is it possible to do that? I could use an If Controller, but adding one in front of all 20 calls to check that the previous sample was OK would be a time-consuming process. Is there another way?
I would check the response code in a PostProcessor and, in case it is 502 Bad Gateway, put the current thread to sleep. Note that JMeterThread (as returned by JMeterContext.getThread()) does not extend java.lang.Thread, so it does not inherit sleep(); use the static Thread.sleep() instead, e.g. in a JSR223 PostProcessor:
if (prev.getResponseCode().equals("502")) {
    Thread.sleep(20000);
}
More samples are here:
https://www.programcreek.com/java-api-examples/?api=org.apache.jmeter.threads.JMeterContext

Is there any way to get a delay of a specific time using a persistence database like Redis?

I want to insert a delay of 30 minutes between two function calls, e.g.:
sending an email 30 minutes after an FCM/SMS is sent.
I'm trying to use Redis for this, so I'm using a Node module named bull, which allows me to create a job with a delay and push it into the queue.
//sending sms to user
sms(null, {to: phone,content: {msg: "test message"},sender: "XYZ"});
// here I have to add a delay of 30 minutes
// sending notification to user
fcm(null,{user_ids: userId,message: "restart!!!"});
I don't want to use setTimeout, as it will not survive an app restart.
I was able to do it using bull (https://www.npmjs.com/package/bull); this package has an option to add a delay in milliseconds.
For example, adding a delay of 30 minutes:
providerDelayedQueue.add(options, { delay: 1800000 });
The above job will be consumed after a delay of at least 30 minutes.

FB Messenger API - Receiving double requests

I have a working FB Bot built with Ruby which allows players to play a scavenger hunt.
Sometimes, though, when I have multiple players in a team, FB sends me a player's 'Answer' webhook twice. I looked into it and at first thought it had to do with the 20-second timeout if FB gets no 200 OK response (docs here). After checking the logs, though, I am receiving the second webhook from FB only 14 seconds later. See below:
# Webhook #1
{"object"=>"page", "entry"=>[{"id"=>"252445748474312", "time"=>1532153642358, "messaging"=>[{"sender"=>{"id"=>"1709242109154907"}, "recipient"=>{"id"=>"252445748474312"}, "timestamp"=>1532153641935, "message"=>{"mid"=>"0FeOChulGjuPgg3YJqEgajNsY8kMfNRt_bpIdeegEeE54h-KB8szcd-EQ-UHUT3850RwHgH4TxVYFkoFwxqhtg", "seq"=>402953, "text"=>"Larrikins"}}]}]}
# Webhook #2 (14 seconds later)
{"object"=>"page", "entry"=>[{"id"=>"252445748474312", "time"=>1532153656901, "messaging"=>[{"sender"=>{"id"=>"1709242109154907"}, "recipient"=>{"id"=>"252445748474312"}, "timestamp"=>1532153641935, "message"=>{"mid"=>"0FeOChulGjuPgg3YJqEgajNsY8kMfNRt_bpIdeegEeE54h-KB8szcd-EQ-UHUT3850RwHgH4TxVYFkoFwxqhtg", "seq"=>402953, "text"=>"Larrikins"}}]}]}
Notice both are exactly the same apart from the first "time" attribute (14 seconds later).
Due to a number of methods and calls that run after receiving the first webhook, the 200 OK response is only sent back to FB once I have finished sending my messages in response (hence the 14-second delay).
So I have two questions:
Is the 14-second delay too long, and is that why FB is resending? If so, how can I send a 200 OK response straight away (head :ok)?
Or is it another issue entirely?
Also ensure that "Echo" is disabled: go to Settings > Webhooks and edit the events.
An asynchronous approach (e.g. NodeJS) is recommended. In my case I work with AWS SQS: I have workers that process the requests without blocking (they don't wait), and I return 200 "ok" to FB right away to avoid FB sending the message to my webhook again.
Another approach may be to store the mid in a database and check on each request whether that mid already exists; if it does, don't process the message. I used DynamoDB (AWS) with TTL enabled, so the database cleans itself every hour, erasing old requests.
I think it is the 15-second wait before replying; it was also happening to me, as Facebook auto-retries when you don't reply fast enough. Te EEe Te's idea is solid: write some mechanism to cache mids and check whether a message is a duplicate before processing it.
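That mid-caching idea can be sketched like this (the original bot is Ruby; this is a hypothetical in-memory version in Java, with a TTL playing the role of the DynamoDB TTL mentioned above):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: remember recently seen message ids ("mid") and drop duplicates.
// The TTL keeps the map from growing forever, mirroring the DynamoDB-TTL idea.
class SeenMessages {
    private final Map<String, Long> seen = new ConcurrentHashMap<>();
    private final long ttlMillis;

    SeenMessages(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Returns true only the first time a mid is offered within the TTL window.
    boolean firstTime(String mid, long nowMillis) {
        seen.values().removeIf(t -> nowMillis - t > ttlMillis); // evict expired entries
        return seen.putIfAbsent(mid, nowMillis) == null;
    }
}
```

The webhook handler would call firstTime(mid, now) before doing any work and simply return 200 OK when it yields false.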
