I need to call an external REST service; if it fails on the first attempt, I have to call it again after 30 minutes, up to a maximum of 3 attempts.
I know Spring has RetryTemplate for retries, but I feel it is not a good fit for my case: I have to make this kind of call for more than 1000 records.
Any idea how I can achieve this in Spring?
Use a TaskScheduler.
scheduler.schedule(() -> { ... },
    new Date(System.currentTimeMillis() + (30 * 60_000)));
Keep track of how many attempts have been made and, if they are not exhausted, re-schedule.
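A minimal sketch of that idea, using the Instant-based schedule overload; RemoteClient and callRemoteService are hypothetical stand-ins for the actual REST call:
import java.time.Duration;
import java.time.Instant;
import org.springframework.scheduling.TaskScheduler;

public class RetryingCaller {

    private static final int MAX_ATTEMPTS = 3;
    private static final Duration RETRY_DELAY = Duration.ofMinutes(30);

    private final TaskScheduler scheduler;
    private final RemoteClient remoteClient; // hypothetical client wrapping the REST call

    public RetryingCaller(TaskScheduler scheduler, RemoteClient remoteClient) {
        this.scheduler = scheduler;
        this.remoteClient = remoteClient;
    }

    public void call(String recordId) {
        attempt(recordId, 1);
    }

    private void attempt(String recordId, int attemptNumber) {
        try {
            remoteClient.callRemoteService(recordId); // the external REST call
        } catch (Exception ex) {
            if (attemptNumber < MAX_ATTEMPTS) {
                // Not exhausted yet: re-schedule the same record 30 minutes from now.
                scheduler.schedule(
                        () -> attempt(recordId, attemptNumber + 1),
                        Instant.now().plus(RETRY_DELAY));
            }
            // else: give up after 3 attempts (log it, dead-letter it, etc.)
        }
    }

    // Hypothetical interface standing in for the real REST client.
    public interface RemoteClient {
        void callRemoteService(String recordId);
    }
}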
I want to consume messages in batch mode every 15 minutes.
For that, I have set these properties:
spring.cloud.stream.kafka.binder.consumer-properties.max.poll.records=5000000
spring.cloud.stream.kafka.binder.consumer-properties.fetch.max.wait.ms=900000
spring.cloud.stream.kafka.binder.consumer-properties.fetch.min.bytes=500000000
Consuming messages works fine when I set spring.cloud.stream.kafka.binder.consumer-properties.fetch.max.wait.ms to between 10000 and 30000 (10 or 30 seconds).
But if I increase fetch.max.wait.ms to 1 minute or more, it doesn't consume messages even after the waiting time is over.
I know the default value is 500ms, but will there be an issue if I increase that?
And how can I get the desired behaviour (the consumer waiting 10-15 minutes before consuming the next batch)?
Can I use max.poll.interval.ms for that?
I was able to consume messages every 15 minutes by setting this property:
spring.cloud.stream.kafka.binder.consumer-properties.max.poll.interval.ms=1000000
and setting the idle time between polls using a container property:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
    // idlePollTimeout is the desired pause between polls, in milliseconds (e.g. 900000 for 15 minutes)
    return (container, dest, group) ->
            container.getContainerProperties().setIdleBetweenPolls(idlePollTimeout);
}
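A minimal sketch of how this customizer might be wired up, assuming the idle time comes from a hypothetical app.idle-between-polls property (in milliseconds):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

@Configuration
public class KafkaIdleBetweenPollsConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            @Value("${app.idle-between-polls:900000}") long idlePollTimeout) { // 15 minutes by default
        // Pause idlePollTimeout milliseconds between consecutive polls of the consumer.
        return (container, destinationName, group) ->
                container.getContainerProperties().setIdleBetweenPolls(idlePollTimeout);
    }
}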
In our project we need to retrieve prices from a remote FTP server. During office hours this works fine: prices are retrieved and successfully processed. After office hours no new prices are published on the FTP server, so as expected we don't find anything new.
Our problem is that after a few hours of not finding new prices, the poller just stops polling. There is no error in the logfiles (even when running org.springframework.integration at debug level) and no exceptions. We are now using a separate TaskExecutor to isolate the issue, but still the poller just stops. In the meantime we adjusted the cron expression to match these hours, to limit the resource use, but the poller still stops when it is supposed to run.
Any help to troubleshoot this issue is very much appreciated!
We use an @InboundChannelAdapter on an FtpStreamingMessageSource, which is configured like this:
@Bean
@InboundChannelAdapter(
    value = FTP_PRICES_INBOUND,
    poller = [Poller(
        maxMessagesPerPoll = "\${ftp.fetch.size}",
        cron = "\${ftp.poll.cron}",
        taskExecutor = "ftpTaskExecutor"
    )],
    autoStartup = "\${ftp.fetch.enabled:false}"
)
fun ftpInboundFlow(
    @Value("\${ftp.remote.prices.dir}") pricesDir: String,
    @Value("\${ftp.remote.prices.file.pattern}") remoteFilePattern: String,
    @Value("\${ftp.fetch.size}") fetchSize: Int,
    @Value("\${ftp.fetch.enabled:false}") fetchEnabled: Boolean,
    clock: Clock,
    remoteFileTemplate: RemoteFileTemplate<FTPFile>,
    priceParseService: PriceParseService,
    ftpFilterOnlyFilesFromMaxDurationAgo: FtpFilterOnlyFilesFromMaxDurationAgo
): FtpStreamingMessageSource {
    val messageSource = FtpStreamingMessageSource(remoteFileTemplate, null)
    messageSource.setRemoteDirectory(pricesDir)
    messageSource.maxFetchSize = fetchSize
    messageSource.setFilter(
        inboundFilters(
            remoteFilePattern,
            ftpFilterOnlyFilesFromMaxDurationAgo
        )
    )
    return messageSource
}
The property values are:
poll.cron: "*/30 * 4-20 * * MON-FRI"
fetch.size: 10
fetch.enabled: true
Before we limited the poll.cron to these hours, we used to retrieve prices every minute.
In the related DefaultFtpSessionFactory, the timeouts are set to 60 seconds to override the default value of -1 (which means no timeout at all):
sessionFactory.setDataTimeout(timeOut)
sessionFactory.setConnectTimeout(timeOut)
sessionFactory.setDefaultTimeout(timeOut)
Maybe my answer seems a bit too easy, but is it because your cron expression states that it should schedule the job between hour 4 and hour 20? After 8:00 PM it will not schedule the job anymore, and it will start polling again at 4:00 AM.
It turned out that the processing took longer than the scheduled interval, so a new task was already being executed while the previous one was still processing. So eventually multiple tasks were trying to accomplish the same thing.
We solved this by using a fixedDelay on the poller instead of a fixedRate.
The difference is that fixedRate schedules at a regular interval regardless of whether the previous task has finished, whereas fixedDelay schedules the delay only after the task has finished.
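A minimal sketch of that change, shown here in Java for brevity; the channel name, remote directory, and 60-second delay are illustrative, not the original project's values:
import org.apache.commons.net.ftp.FTPFile;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.file.remote.RemoteFileTemplate;
import org.springframework.integration.ftp.inbound.FtpStreamingMessageSource;

@Configuration
public class FtpPollerConfig {

    @Bean
    @InboundChannelAdapter(
            value = "ftpPricesInbound",                        // illustrative channel name
            poller = @Poller(fixedDelay = "60000",             // next poll starts 60s AFTER the previous one finishes
                             maxMessagesPerPoll = "10"))
    public FtpStreamingMessageSource ftpInboundFlow(RemoteFileTemplate<FTPFile> remoteFileTemplate) {
        FtpStreamingMessageSource messageSource = new FtpStreamingMessageSource(remoteFileTemplate, null);
        messageSource.setRemoteDirectory("/prices");           // hypothetical remote directory
        return messageSource;
    }
}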
I have a simple spring boot app with the following config (the project is available here on GitHub):
management:
  metrics:
    export:
      simple:
        mode: step
  endpoints:
    web:
      exposure:
        include: "*"
The above config creates a SimpleMeterRegistry and configures its metrics to be step-based, with a 60-second step. I have one script that sends 50-100 requests per second to a dummy endpoint of the service, and another script that polls the data from /actuator/metrics/http.server.requests every X seconds. When the latter script runs every 60 seconds everything works as expected, but when it runs every 120 seconds, the response always contains zeros for the TOTAL_TIME and COUNT metrics.
Can anyone explain this behavior?
I have read the documentation here. The diagram it contains could indicate that the registry will try to aggregate the data for the previous interval only if pollAsRate is called during the current interval. That would explain why it does not work with a 120-second interval. But this is just my assumption; does anyone know what is really happening here?
Spring boot version: 2.1.7.RELEASE
UPDATE
I did a similar test with management.metrics.export.simple.step=10s: it works fine when the polling interval is 10s and does not work when it is 20s. With a 15s interval it works sporadically. So it's definitely related to the step size and the polling frequency.
MAX, TOTAL_TIME, and COUNT are properties of Statistic.
DistributionStatisticConfig has .expiry(Duration.ofMinutes(2)), which resets the measurement to 0 if no request has been made in the last 2 minutes (120 seconds).
Methods such as public TimeWindowMax(Clock clock, ...) and private void rotate() have been written for this purpose. You may see the implementation here.
More Detailed Answer
Finally figured out what is happening.
On every request to /actuator/metrics, MetricsEndpoint is going to merge measures (see here). That is done by collecting the values for all meters with measurement.getValue(). StepMeasurement.getValue() will not simply return the value; it will update the current and previous intervals and counts, and roll the count (see here and here).
StepMeasurement.getValue
public double getValue() {
    double absoluteCount = (Double) this.f.get();
    double inc = Math.max(0.0D, absoluteCount - this.lastCount.sum());
    this.lastCount.add(inc);
    this.value.getCurrent().add(inc);
    return this.value.poll();
}
StepDouble.poll
public double poll() {
    rollCount(clock.wallTime());
    return previous;
}
How is this related to the polling interval? If you do not poll the /actuator/metrics endpoint, the current and previous intervals will not be updated, so the current interval is not kept up to date and metrics end up being recorded against the "wrong" interval.
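A minimal sketch of that step behaviour, assuming Micrometer's SimpleMeterRegistry in CountingMode.STEP driven by a MockClock (the meter name and the 10-second step are illustrative): a window's recordings only become visible once the value is polled after the step boundary has passed.
import java.time.Duration;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MockClock;
import io.micrometer.core.instrument.simple.CountingMode;
import io.micrometer.core.instrument.simple.SimpleConfig;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class StepCountingDemo {
    public static void main(String[] args) {
        MockClock clock = new MockClock();
        SimpleConfig config = new SimpleConfig() {
            @Override public String get(String key) { return null; }
            @Override public CountingMode mode() { return CountingMode.STEP; }
            @Override public Duration step() { return Duration.ofSeconds(10); }
        };
        SimpleMeterRegistry registry = new SimpleMeterRegistry(config, clock);
        Counter requests = registry.counter("http.server.requests.demo");

        requests.increment(5);
        System.out.println(requests.count()); // 0.0 - still inside the current step window

        clock.add(Duration.ofSeconds(10));    // cross the step boundary
        System.out.println(requests.count()); // 5.0 - the previous window's total is now reported
    }
}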
I am trying to implement session management, where we store a JWT token in Redis. Now I want to remove the key if the object idle time is more than 8 hours. Please help.
There is no good reason that comes to mind for using IDLETIME instead of the much simpler pattern of issuing a GET followed by an EXPIRE, apart from the very trivial memory requirements of key expiry.
Recommended Way: GET and EXPIRE
1. GET the key you want.
2. Issue an EXPIRE <key> 28800.
Way using OBJECT IDLETIME, DEL and some application logic:
1. GET the key you want.
2. Call OBJECT IDLETIME <key>.
3. Check in your application code if the idletime is > 8h.
4. If condition 3 is met, then issue a DEL command.
The second way is more cumbersome and introduces network latency, since you need three round trips to your Redis server, whereas the first solution does it in one round trip if you use a pipeline, or at worst two round trips with no application-side processing in between.
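A minimal sketch of the recommended GET-then-EXPIRE pattern, using the Jedis client that appears later in this thread; the host, port, and "session:..." key name are assumptions:
import redis.clients.jedis.Jedis;

public class SessionTouch {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {  // assumed host/port
            String key = "session:some-jwt-id";             // hypothetical key name
            String token = jedis.get(key);                  // GET the key you want
            if (token != null) {
                jedis.expire(key, 8 * 60 * 60);             // EXPIRE <key> 28800 (8 hours)
            }
        }
    }
}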
This is what I did using Jedis. I am fetching 1000 records at a time. You can add a loop to fetch all records in batches.
Jedis jedis = new Jedis("addURLHere");
ScanParams scanParams = new ScanParams().count(1000);
ScanResult<String> scanResult = jedis.scan(ScanParams.SCAN_POINTER_START, scanParams);
List<String> result = scanResult.getResult();
result.stream().forEach((key) -> {
    if (jedis.objectIdletime(key) > 8 * 60 * 60) { // idle for more than 8 hours
        // your functionality here
    }
});
I have a Spring application that uses a Quartz cron trigger. I have given the following expression for the frequency: 0 0/20 * * * ? (once every 20 minutes). But I want the first execution to run immediately. Right now, when I start the application, it runs after 20 minutes. I was hoping it would run right away and then every 20 minutes after that.
Thanks in advance.
It sounds like you want to use an interval trigger (SimpleTrigger in Quartz can do the job).
The CronTrigger wants you to specify the minutes at which to run.
So your trigger schedule says: start at 0 minutes, and run every 20 minutes after that until the hour is over. Then start at 0 again.
But with the SimpleTrigger, you say - start now and run every 20 minutes.
Here is a tutorial on SimpleTrigger:
http://quartz-scheduler.org/documentation/quartz-2.x/tutorials/tutorial-lesson-05
Here is a tutorial on CronTrigger:
http://quartz-scheduler.org/documentation/quartz-1.x/tutorials/crontrigger
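A minimal sketch of such a trigger using the Quartz 2.x builder API; the trigger name is illustrative and the job wiring is omitted:
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.SimpleTrigger;

public class TriggerFactory {
    // Fires once immediately, then every 20 minutes, indefinitely.
    public static SimpleTrigger every20MinutesStartingNow() {
        return newTrigger()
                .withIdentity("every20MinutesTrigger")   // illustrative trigger name
                .startNow()                              // first execution happens right away
                .withSchedule(simpleSchedule()
                        .withIntervalInMinutes(20)
                        .repeatForever())
                .build();
    }
}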
You don't need a CRON expression (or Quartz at all!) to run the given code every 20 minutes. Just use a fixed rate (built into Spring):
@Scheduled(fixedRate = 20 * 60 * 1000)
That's it! By default the first invocation happens immediately and the second after 20 minutes. Since Spring 3.2 you can even say initialDelay = 10000 to run for the first time after exactly 10 seconds.
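A minimal sketch of how this might look in full, assuming scheduling is enabled on a configuration class; the class and method names are illustrative:
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig {
}

@Component
class PriceRefreshTask {

    // Runs immediately at startup, then every 20 minutes.
    @Scheduled(fixedRate = 20 * 60 * 1000)
    public void refresh() {
        // the work that previously ran from the Quartz cron trigger
    }
}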
If you really want to use Quartz, check out SimpleTrigger.