Control input to ItemProcessor in Spring Batch

I have a Spring Batch job with a reader/processor/writer, which works perfectly fine. The new requirement is that the processor should process only 1 or 2 messages and then sleep for 1 or 2 seconds, depending on the load on the server (my ItemProcessor calls a 3rd-party API to send some data, and the calls to that API should not be continuous).
I have implemented a ChunkListener, but I am not able to work out how to throttle the item-processor calls from it; the listener's read count only returns the cumulative number of messages processed:
paramChunkContext.getStepContext().getStepExecution().getReadCount() // ChunkContext
One option I am considering is to take the read count modulo 2 and sleep for 1 second whenever the result is 0:
Thread.sleep(1000);
Is there a better way to do this? The approach above feels like a dirty hack.
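If the goal is simply to cap the rate of the 3rd-party calls, a cleaner alternative to chunk-count arithmetic is to rate-limit inside the processor itself. Below is a minimal sketch using Guava's RateLimiter; Message, ThirdPartyClient, and the 2-calls-per-second rate are illustrative assumptions, not names from the original job:

    import com.google.common.util.concurrent.RateLimiter;
    import org.springframework.batch.item.ItemProcessor;

    // Sketch: throttle the external call instead of counting chunks and sleeping.
    public class ThrottlingItemProcessor implements ItemProcessor<Message, Message> {

        private final ThirdPartyClient client;                            // hypothetical API client
        private final RateLimiter rateLimiter = RateLimiter.create(2.0);  // ~2 calls per second

        public ThrottlingItemProcessor(ThirdPartyClient client) {
            this.client = client;
        }

        @Override
        public Message process(Message item) {
            rateLimiter.acquire();  // blocks just long enough to respect the configured rate
            client.send(item);      // the 3rd-party call being paced
            return item;
        }
    }

This keeps the pacing concern in one place instead of coupling it to the chunk size or a listener.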

Related

How to keep a certain thread delay between requests and store a correlation id for all requests during a load test

May I know how to achieve this scenario in JMeter?
Requirement 1: Request 1 should execute for 15 minutes; once the 15 minutes have passed, Request 2 should execute and Request 1 should stop.
Requirement 2: In Request 1 we need to capture all the dynamic values and store them somewhere, and the same dynamic values should be used as the request body for Request 2. We would like to run a large number of users, and we are not sure how to store all the responses in files or some other way.
Example: Request 1 -> trigger -> store the response somewhere (15-minute run & 100 iterations) -> stopped
Request 2 -> trigger after 15 minutes -> execute the request with the above 100 iteration responses
Either take a look at the Runtime Controller, which lets you choose how long its child(ren) will run, or simply put Requests 1 and 2 into separate Thread Groups.
If you want to store full responses into a file, take a look at:
Post-Processors, to capture the required part of the response into a JMeter Variable
the Sample Variables property, to tell JMeter to save this variable into the .jtl results file
the Flexible File Writer, if you want to write the values into a separate file
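For example, assuming the Post-Processor stored the value in a variable named correlationId (an illustrative name), one line in user.properties tells JMeter to record it:

    # Write the correlationId variable into the .jtl results file
    sample_variables=correlationId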

KafkaConsumer poll() behavior understanding

Trying to understand (I am new to Kafka) how the poll event loop in Kafka works.
Use case: 25 records on the topic, max poll size set to 5.
max.poll.interval.ms = 5000 // 5 seconds
max.poll.records = 5
Sequence of tasks:
1. Poll the records from the topic.
2. Process the records in a for loop.
3. Apply some processing logic, which either passes or fails.
4. If the logic passes, the record's offset is added to a map.
5. The map of offsets is then committed using a commitSync call.
6. If the logic fails, the loop breaks, and whatever succeeded before the failure is committed (see the sketch after this list). The problem starts after this.
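A rough sketch of that loop with a plain Java consumer, for reference; process() stands in for the pass/fail business logic and is a hypothetical method, not code from the question:

    import java.time.Duration;
    import java.util.Collections;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    static void pollAndProcess(Consumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
        for (ConsumerRecord<String, String> record : records) {
            try {
                process(record); // hypothetical business logic; throws on failure
                // The committed offset is the *next* offset to read, hence +1
                TopicPartition tp = new TopicPartition(record.topic(), record.partition());
                consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(record.offset() + 1)));
            } catch (Exception e) {
                break; // offsets up to the last success are already committed
            }
        }
    }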
After the failure, the next poll just keeps moving on in batches of 5. Is that expected?
What we basically expect is that when the loop breaks, the offsets of the successfully processed messages are committed, and the next poll continues from the failed message.
For example: in the first poll of 5 messages, offsets 1 and 2 succeed and are committed, then offset 3 fails. Yet the poll call keeps moving on to the next batches (5-10, 10-15). If there is an error anywhere in between, we expect polling to stop at that point and resume from offset 3 in the first case, or, if offset 8 fails in the second batch, from offset 8, not from wherever the next max-poll batch would start. If it matters, this is a Spring Boot project and enable.auto.commit is false.
I have tried to find this in the documentation, with no luck, and tried tweaking max.poll.interval.ms, which did not help either.
EDIT: I did not accept the answer below because there is no direct solution for a custom consumer; keeping it here for informational purposes.
max.poll.interval.ms is in milliseconds, not seconds, so 5000 means 5 seconds.
Once the records have been returned by a poll (and their offsets not committed), they won't be returned again unless you restart the consumer or perform seek() operations on the consumer to rewind to the unprocessed offsets.
The Spring for Apache Kafka project provides a SeekToCurrentErrorHandler to perform this task for you.
If you are using the consumer directly yourself (which it sounds like you are), you must do the seeks.
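A minimal sketch of wiring that up in Spring Kafka (assuming spring-kafka 2.3+, where SeekToCurrentErrorHandler accepts a BackOff; the bean layout and retry policy are illustrative):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
    import org.springframework.util.backoff.FixedBackOff;

    @Configuration
    public class KafkaConsumerConfig {

        @Bean
        public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
                ConsumerFactory<String, String> consumerFactory) {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory);
            // On failure, seek back so the failed record is redelivered;
            // retry it up to 3 times, 1 second apart, before giving up.
            factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(1000L, 3)));
            return factory;
        }
    }

With this in place, an unprocessed record is re-polled rather than skipped, which is the behavior the question expects.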
On failure you can manually seek each assigned partition back to the first offset of the current poll; I am not sure how to do this with the Spring consumer.
Here is sample code for seeking back to the start of the batch with a plain consumer. In the code below I get the records list per partition and then seek to the offset of the first record:
import scala.jdk.CollectionConverters._

// Rewind every partition in the batch to the offset of its first record
def seekBack(records: ConsumerRecords[String, String]): Unit =
  records.partitions().asScala.foreach { partition =>
    val partitionedRecords = records.records(partition)
    val offset = partitionedRecords.get(0).offset() // first record of this poll
    consumer.seek(partition, offset)
  }
One caveat: doing this unconditionally in production is a bad idea, since you only want to seek back on transient errors; otherwise you will end up retrying the same record infinitely.

Load distribution using Throughput Controller in JMeter when a Throughput Controller has a child Throughput Controller

Please refer to the attachment, where I have mentioned the anticipated volume for each request.
Had I not had Action 2, I could easily have derived the load distribution.
I am stuck because I have one more transaction inside the Throughput Controller for Action 1. Can anyone please suggest what the Throughput Controller value (%) needs to be when I still have to derive the load from it for the child request?
If I add up the "Search Action 1" Throughput Controller and the "Action 2" Throughput Controller, I end up with more volume for Action 1 than intended.
I hope my requirement is clear. Can anyone please suggest how I can achieve the anticipated load for all 4 requests?
You can go for something like:
If you need to execute the 15-times samplers after the 32-times ones, you can use the Inter-Thread Communication Plugin in order to pause them until the data becomes available.
You can install the Inter-Thread Communication plugin using the JMeter Plugins Manager.
In case you need to execute 32 "Action 1"s with 15 (or 18, or 19) of them followed by "Action 2", you have to put not one but two Throughput Controllers under "Search action 32 of 210":
the first takes the 19 "Action 1"s that are followed by "Action 2" out of the 32
the second takes the remaining 13 "Action 1"s alone (not followed by "Action 2")
Is that what you are aiming for?
Here goes my answer to my own question:
As I said, below is my Throughput Shaping Timer.
The number of virtual users is 3, because to generate 0.35 RPS with a response time + think time of 10 seconds you need roughly 0.35 RPS * 10 sec users.
Below is the workload model:
As Launch and Login are pretty clear, below is my explanation of Action 1 and Action 2.
Since 51 of the 210 requests (Action 1 and Action 2) should fall under one flow, I need one Throughput Controller as a parent, which makes sure that 51 of the 210 requests come from its child requests. Not done yet.
Now I can't let Action 1 take the full 51 load, because my requirement is 32. Hence I take 32 of the 51 requests for Action 1 (32/51 ≈ 62% of the parent's load).
Now for Action 2 the parent is Action 1's load, so I need to make sure that when control comes to Action 1 (32 times), only 19 of those executions proceed to Action 2; hence 19/32 ≈ 59%.
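Spelling out the arithmetic behind those percentages (the values that go into each Throughput Controller):

    Parent controller:               51 / 210 ≈ 24.3% of all requests (the Action 1 + Action 2 flow)
    Action 1:                        32 / 51  ≈ 62.7% of the parent's load
    Action 2 (nested under Action 1): 19 / 32 ≈ 59.4% of Action 1's load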
Thus I achieved the desired load by making the parent Throughput Controller responsible for keeping the other Throughput Controllers from taking more than their allotted load (Launch and Login).
I used a Gaussian Random Timer with an 8-second delay (2-second deviation) when testing the actual application.

Without using a Once Only Controller, I want the login request to be executed once

I am trying to do load testing of pages which can be accessed only after login.
As I am using a Once Only Controller for the login request, when I change the number of threads to 5 or more, login executes 5 times.
Since the Once Only Controller only works across loop iterations, I used the loop count instead, but it slows down my process and executes the whole test plan.
My test plan is:
login thread A - one-time execution - how do I do this?
HTTP request B - multiple times (by using the number of threads)
HTTP request C - multiple times (by using the number of threads)
What should I use to execute login once while the subsequent requests execute multiple times, without using a Once Only Controller?
Kindly follow these steps:
1. In the Thread Group, set number of threads: 5, ramp-up: 0, loop count: 1.
2. Put your login part in a Once Only Controller.
3. Right-click on your Thread Group > Add > Logic Controller > Loop Controller.
4. Put your HTTP request part in the Loop Controller, and set the Loop Controller's loop count to however many times you want it to run.
5. Run the test; you will get exactly what you want.
The loop count in the Thread Group is the loop count for your full script, not for your transactions; for the transactions you have to set a separate loop count before the transaction starts.
Please do the following. Your Thread Group settings should be:
A. Number of threads: 1
B. Ramp-up period: 0, or as per the application's response
C. Loop count: 5
Put your login request under a Once Only Controller.
Requests B and C should be at the Thread Group level, i.e. one step above it.
Run the test.
Please find the sample JMX for your reference and give it a try. Hope it resolves your issue.

Limitation in retrieving rows from MongoDB from Ruby code

I have code which gets all the records from a MongoDB collection and then performs some computations on them.
My program takes too much time, as the coll_id.find().each do |eachitem| ... loop returns only 300 records at a time.
If I place a counter inside the loop, it prints 300 records and then sleeps for around 3 to 4 seconds before printing the counter values for the next set of 300 records:
counter = 0
coll_id.find().each do |eachcollectionitem|
  puts "counter value for record " + counter.to_s
  counter = counter + 1
  # ---- my computations here ----
end
Is this a limitation of the Ruby MongoDB API, or does some configuration need to be done so that the code can access all the records at once?
How large are your documents? It's possible that the deserialization is taking a long time. Are you using the C extensions (bson_ext)?
You might want to try passing a logger when you connect; that could help sort out what's going on. Alternatively, can you paste in the MongoDB log? What's happening there during the pause?
