How @Transactional timeout works with multiple dataSources - Spring

So here is the example: I have a method where I defined timeout = 10 seconds for DS1. This method also calls another transactional method for DS2. Will the time taken by the DS2 call also be counted toward the timeout?
MainService.java
@Transactional(value = "transactionManagerDS1", timeout = 10)
public void tranTest() {
    try {
        DS1service.sleepDS1(1);
        DS2service.sleepDS2(10);
        DS1service.sleepDS1(1);
    } catch (RuntimeException e) {
        logger.error(e.getMessage());
    }
}
These DS1service and DS2service methods run a select pg_sleep(...) for the given number of seconds:
@Transactional(value = "transactionManagerDS1")
public void sleepDS1(int sec) {
    SQLQuery query = getCurrentSession().createSQLQuery("select pg_sleep(" + sec + ")");
    System.out.println(query.uniqueResult());
}

@Transactional(value = "transactionManagerDS2")
public void sleepDS2(int sec) {
    SQLQuery query = getCurrentSession().createSQLQuery("select pg_sleep(" + sec + ")");
    System.out.println(query.uniqueResult());
}
Here, the time taken by the DS2 query was also counted and a timeout exception was thrown. Is this the correct behaviour?

Related

Springboot Kafka @Listener consumer pause/resume not working

I have a Spring Boot Kafka consumer and producer. The consumer is expected to read records from topic1 one by one, process them (time consuming), write the result to another topic, and then manually commit the offset.
In order to avoid rebalancing, I have tried to call pause() and resume() on the KafkaContainer, but the consumer is always running and never responds to the pause() call; I tried it even with a while loop and had no success (unable to pause the consumer). KafkaListenerEndpointRegistry is autowired.
Spring Boot version = 2.6.9, spring-kafka version = 2.8.7
@KafkaListener(id = "c1", topics = "${app.topics.topic1}", containerFactory = "listenerContainerFactory1")
public void poll(ConsumerRecord<String, String> record, Acknowledgment ack) {
    log.info("Received Message by consumer of topic1: " + record.value());
    String result = process(record.value());
    producer.sendMessage(result + " topic2");
    log.info("Message sent from " + topicIn + " to " + topicOut); // topicIn/topicOut are configured fields
    ack.acknowledge();
    log.info("Offset committed by consumer 1");
}
private String process(String value) {
    try {
        pauseConsumer();
        // Perform time intensive network IO operations
        resumeConsumer();
    } catch (InterruptedException e) {
        log.error(e.getMessage());
    }
    return value;
}
private void pauseConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isRunning()) {
        log.info("Attempting to pause consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).pause();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}

private void resumeConsumer() throws InterruptedException {
    if (registry.getListenerContainer("c1").isContainerPaused() || registry.getListenerContainer("c1").isPauseRequested()) {
        log.info("Attempting to resume consumer");
        Objects.requireNonNull(registry.getListenerContainer("c1")).resume();
        Thread.sleep(5000);
        log.info("kafkalistener container state - " + registry.getListenerContainer("c1").isRunning());
    }
}
Am I missing something? Could someone please guide me on the right way to achieve the required behaviour?
You are running the process() method on the listener thread, so pause()/resume() will not have any effect; the pause only takes effect when the listener thread exits the listener method (and after it has processed all the records received by the previous poll).
The next version (2.9), due later this month, has a new container property, pauseImmediate, which causes the pause to take effect after the current record is processed.
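Once 2.9 is available, a minimal sketch of enabling it, assuming the property is exposed on the container properties (the factory name mirrors the question; the rest is illustrative):
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> listenerContainerFactory1(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // Pause takes effect after the current record instead of after the current batch
    factory.getContainerProperties().setPauseImmediate(true);
    return factory;
}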
You can try it like this; it works for me:
import java.time.Duration;
import java.util.Collections;
import java.util.List;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class KafkaConsumerRunner {
    public void run(String topicName) {
        try {
            Consumer<String, String> consumer = new KafkaConsumer<>(config); // config: consumer properties
            consumer.subscribe(Collections.singleton(topicName));
            while (true) {
                try {
                    ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(80000));
                    for (TopicPartition partition : consumerRecords.partitions()) {
                        List<ConsumerRecord<String, String>> partitionRecords = consumerRecords.records(partition);
                        for (ConsumerRecord<String, String> record : partitionRecords) {
                            String kafkaEvent = record.value();
                            consumer.pause(consumer.assignment());
                            // Implement your business logic here.
                            // Once your processing is done:
                            consumer.resume(consumer.assignment());
                            try {
                                consumer.commitSync();
                            } catch (CommitFailedException e) {
                                // log and continue; the next poll will re-deliver
                            }
                        }
                    }
                } catch (Exception e) {
                    continue;
                }
            }
        } catch (Exception e) {
            // ignore and exit
        }
    }
}

Trying to understand DeferredResult performance improvement

We are trying to follow this blog to understand DeferredResult: https://www.linkedin.com/pulse/building-async-non-blocking-microservices-using-spring-patnaik/
In the blog, the author gives both a normal blocking and a non-blocking version of the code. I have copied it here.
The author ran JMeter with 1000 concurrent threads for 5 minutes and reported that latency and TPS differed for the non-blocking code. When I run JMeter with the same settings, I see the same latency for both the non-blocking and the blocking endpoints.
I have already tried decreasing and increasing the processing times, etc.
public SpringBootAppController() {
    timer = new Timer();
    ses = new ScheduledThreadPoolExecutor(10);
}

@RequestMapping("/blockingprocess")
public String blockingProcessing(@RequestParam(value = "processingtime") long processingtime) throws InterruptedException
{
    long startTime = System.currentTimeMillis();
    Thread.sleep(processingtime);
    // add more processing later
    long endTime = System.currentTimeMillis();
    long timeTaken = endTime - startTime;
    return "SUCCESS. Blocking process completed in " + timeTaken + " Ms";
}

@RequestMapping("/nonblockingprocess")
public DeferredResult<String> nonBlockingProcessing(@RequestParam(value = "processingtime") long processingtime) throws InterruptedException
{
    DeferredResult<String> deferredResult = new DeferredResult<String>();
    NewProcess j = new NewProcess(deferredResult, processingtime);
    ses.schedule(j, processingtime, TimeUnit.MILLISECONDS);
    System.out.println("hello");
    return deferredResult;
}
Another class.
public NewProcess(DeferredResult<String> deferredresult, long processingtime)
{
    this.deferredresult = deferredresult;
    this.processingtime = processingtime;
}

@Override
public void run()
{
    String result = "SUCCESS non blocking process completed in " + processingtime + " Ms";
    deferredresult.setResult(result);
}
I expected a difference, with better performance for the non-blocking version compared to the blocking one.

How to limit the request/second with WebClient?

I'm using a WebClient object to send HTTP POST requests to a server.
It's sending a huge number of requests quite rapidly (there are about 4000 messages in a QueueChannel). The problem is that the server can't seem to respond fast enough, so I'm getting a lot of 500 server errors and prematurely closed connections.
Is there a way to limit the number of requests per second, or to limit the number of threads it's using?
EDIT:
The message endpoint processes messages from a QueueChannel:
@MessageEndpoint
public class CustomServiceActivator {

    private static final Logger logger = LogManager.getLogger();

    @Autowired
    IHttpService httpService;

    @ServiceActivator(
        inputChannel = "outputFilterChannel",
        outputChannel = "outputHttpServiceChannel",
        poller = @Poller(fixedDelay = "1000")
    )
    public void processMessage(Data data) {
        httpService.push(data);
        try {
            Thread.sleep(20);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
The WebClient service class:
@Service
public class HttpService implements IHttpService {

    private static final String URL = "http://www.blabla.com/log";
    private static final Logger logger = LogManager.getLogger();

    @Autowired
    WebClient webClient;

    @Override
    public void push(Data data) {
        String body = constructString(data);
        Mono<ResponseEntity<Response>> res = webClient.post()
                .uri(URL + getLogType(data))
                .contentLength(body.length())
                .contentType(MediaType.APPLICATION_JSON)
                .syncBody(body)
                .exchange()
                .flatMap(response -> response.toEntity(Response.class));
        res.subscribe(new Consumer<ResponseEntity<Response>>() { ... });
    }
}
Resilience4j has excellent support for non-blocking rate limiting with Project Reactor.
Required dependencies (besides Spring WebFlux):
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-reactor</artifactId>
    <version>1.6.1</version>
</dependency>
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-ratelimiter</artifactId>
    <version>1.6.1</version>
</dependency>
Example:
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.reactor.ratelimiter.operator.RateLimiterOperator;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.concurrent.atomic.AtomicInteger;
public class WebClientRateLimit
{
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    private final WebClient webClient;
    private final RateLimiter rateLimiter;

    public WebClientRateLimit()
    {
        this.webClient = WebClient.create();
        // enables 3 requests every 5 seconds
        this.rateLimiter = RateLimiter.of("my-rate-limiter",
                RateLimiterConfig.custom()
                        .limitRefreshPeriod(Duration.ofSeconds(5))
                        .limitForPeriod(3)
                        .timeoutDuration(Duration.ofMinutes(1)) // max wait time for a request; if reached, an error is raised
                        .build());
    }

    public Mono<?> call()
    {
        return webClient.get()
                .uri("https://jsonplaceholder.typicode.com/todos/1")
                .retrieve()
                .bodyToMono(String.class)
                .doOnSubscribe(s -> System.out.println(COUNTER.incrementAndGet() + " - " + LocalDateTime.now()
                        + " - call triggered"))
                .transformDeferred(RateLimiterOperator.of(rateLimiter));
    }

    public static void main(String[] args)
    {
        WebClientRateLimit webClientRateLimit = new WebClientRateLimit();
        long start = System.currentTimeMillis();
        Flux.range(1, 16)
                .flatMap(x -> webClientRateLimit.call())
                .blockLast();
        System.out.println("Elapsed time in seconds: " + (System.currentTimeMillis() - start) / 1000d);
    }
}
Example output:
1 - 2020-11-30T15:44:01.575003200 - call triggered
2 - 2020-11-30T15:44:01.821134 - call triggered
3 - 2020-11-30T15:44:01.823133100 - call triggered
4 - 2020-11-30T15:44:04.462353900 - call triggered
5 - 2020-11-30T15:44:04.462353900 - call triggered
6 - 2020-11-30T15:44:04.470399200 - call triggered
7 - 2020-11-30T15:44:09.461199100 - call triggered
8 - 2020-11-30T15:44:09.463157 - call triggered
9 - 2020-11-30T15:44:09.463157 - call triggered
11 - 2020-11-30T15:44:14.461447700 - call triggered
10 - 2020-11-30T15:44:14.461447700 - call triggered
12 - 2020-11-30T15:44:14.461447700 - call triggered
13 - 2020-11-30T15:44:19.462098200 - call triggered
14 - 2020-11-30T15:44:19.462098200 - call triggered
15 - 2020-11-30T15:44:19.468059700 - call triggered
16 - 2020-11-30T15:44:24.462615 - call triggered
Elapsed time in seconds: 25.096
Docs: https://resilience4j.readme.io/docs/examples-1#decorate-mono-or-flux-with-a-ratelimiter
The question Limiting rate of requests with Reactor provides two answers (one in a comment):
zipWith another flux that acts as a rate limiter:
.zipWith(Flux.interval(Duration.of(1, ChronoUnit.SECONDS)))
or just delay each web request, using the delayElements operator.
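A minimal sketch of both suggestions, assuming the question's push() is reworked to return a Mono<Void> instead of subscribing internally (messages stands for the queued payloads; Duration is java.time.Duration):

// Pair each message with a tick of an interval: at most one request per second
Mono<Void> paced = Flux.fromIterable(messages)
        .zipWith(Flux.interval(Duration.ofSeconds(1)))
        .flatMap(tuple -> httpService.push(tuple.getT1()))
        .then();

// Or simply space the elements out before the web call
Mono<Void> delayed = Flux.fromIterable(messages)
        .delayElements(Duration.ofMillis(250)) // ~4 requests per second
        .flatMap(data -> httpService.push(data))
        .then();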
Edit: the answer below is valid for the blocking RestTemplate, but it does not really fit the reactive pattern.
WebClient does not have a built-in ability to limit requests, but you could easily add this feature using composition.
You may throttle your client externally using RateLimiter from Guava
(https://google.github.io/guava/releases/19.0/api/docs/index.html?com/google/common/util/concurrent/RateLimiter.html).
In this tutorial, http://www.baeldung.com/guava-rate-limiter, you will find how to use the rate limiter in a blocking way, or with timeouts.
I would decorate all calls that need to be throttled in a separate class that
limits the number of calls per second, and
performs the actual web call using WebClient, as sketched below.
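A minimal sketch of such a decorator, assuming Guava's blocking RateLimiter and the HttpService from the question (the 50-permits-per-second rate and the response type are illustrative):

import com.google.common.util.concurrent.RateLimiter;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class ThrottledHttpService {

    private final WebClient webClient = WebClient.create();
    private final RateLimiter rateLimiter = RateLimiter.create(50.0); // max 50 calls per second

    public Mono<String> push(String body) {
        rateLimiter.acquire(); // blocks the calling thread until a permit is available
        return webClient.post()
                .uri("http://www.blabla.com/log")
                .syncBody(body)
                .exchange()
                .flatMap(response -> response.bodyToMono(String.class));
    }
}

Note that acquire() blocks the thread that calls push(), which is exactly why this approach suits the blocking RestTemplate better than a fully reactive pipeline, as the edit above points out.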
I hope I'm not late for the party. Anyway, limiting the rate of requests was just one of the problems I faced a week ago when I was creating a crawler. Here were the issues:
I have to do recursive, paginated, sequential requests. The pagination parameters are included in the API that I'm calling.
Once a response is received, pause for 1 second before doing the next request.
For certain errors encountered, do a retry.
On retry, pause for a certain number of seconds.
Here's the solution:
private Flux<HostListResponse> sequentialCrawl() {
    AtomicLong pageNo = new AtomicLong(2);
    // Solution for #1 - Flux.expand
    return getHosts(1)
        .doOnRequest(value -> LOGGER.info("Start crawling."))
        .expand(hostListResponse -> {
            final long totalPages = hostListResponse.getData().getTotalPages();
            long currPageNo = pageNo.getAndIncrement();
            if (currPageNo <= totalPages) {
                LOGGER.info("Crawling page " + currPageNo + " of " + totalPages);
                // Solution for #2
                return Mono.just(1).delayElement(Duration.ofSeconds(1)).then(
                    getHosts(currPageNo)
                );
            }
            return Flux.empty();
        })
        .doOnComplete(() -> LOGGER.info("End of crawling."));
}

private Mono<HostListResponse> getHosts(long pageNo) {
    final String uri = hostListUrl + pageNo;
    LOGGER.info("Crawling " + uri);
    return webClient.get()
        .uri(uri)
        .exchange()
        // Solution for #3
        .retryWhen(companion -> companion
            .zipWith(Flux.range(1, RETRY + 1), (error, index) -> {
                String message = "Failed to crawl uri: " + error.getMessage();
                if (index <= RETRY && (error instanceof RequestIntervalTooShortException
                        || error instanceof ConnectTimeoutException
                        || "Connection reset by peer".equals(error.getMessage())
                )) {
                    LOGGER.info(message + ". Retries count: " + index);
                    return Tuples.of(error, index);
                } else {
                    LOGGER.warn(message);
                    throw Exceptions.propagate(error); // terminate the source with the 4th onError
                }
            })
            // flatMap (rather than map) so the Mono.delay is actually subscribed and the retry waits
            .flatMap(tuple -> {
                // Solution for #4
                Throwable e = tuple.getT1();
                int delaySeconds = tuple.getT2();
                // TODO: Adjust these values according to your needs
                if (e instanceof ConnectTimeoutException) {
                    delaySeconds = delaySeconds * 5;
                } else if ("Connection reset by peer".equals(e.getMessage())) {
                    // The API that this app is calling will sometimes think that the requests are SPAM. So let's rest longer before retrying the request.
                    delaySeconds = delaySeconds * 10;
                }
                LOGGER.info("Will retry crawling after " + delaySeconds + " seconds to " + uri + ".");
                return Mono.delay(Duration.ofSeconds(delaySeconds));
            })
            .doOnNext(s -> LOGGER.warn("Request is too short - " + uri + ". Retried at " + LocalDateTime.now()))
        )
        .flatMap(clientResponse -> clientResponse.toEntity(String.class))
        .map(responseEntity -> {
            HttpStatus statusCode = responseEntity.getStatusCode();
            if (statusCode != HttpStatus.OK) {
                Throwable exception;
                // Convert the JSON string to a Java POJO
                HostListResponse response = toHostListResponse(uri, statusCode, responseEntity.getBody());
                // The API that I'm calling returns error code 06 if the request interval is too short
                if (statusCode == HttpStatus.BAD_REQUEST && "06".equals(response.getError().getCode())) {
                    exception = new RequestIntervalTooShortException(uri);
                } else {
                    exception = new IllegalStateException("Request to " + uri + " failed. Reason: " + responseEntity.getBody());
                }
                throw Exceptions.propagate(exception);
            } else {
                return toHostListResponse(uri, statusCode, responseEntity.getBody());
            }
        });
}
I use this to limit the number of active requests:
public DemoClass(WebClient.Builder webClientBuilder) {
    AtomicInteger activeRequest = new AtomicInteger();
    this.webClient = webClientBuilder
            .baseUrl("http://httpbin.org/ip")
            .filter(
                (request, next) -> Mono.just(next)
                    .flatMap(a -> {
                        if (activeRequest.intValue() < 3) {
                            activeRequest.incrementAndGet();
                            return next.exchange(request)
                                    // doFinally (rather than doOnNext) so the slot is also released on error or cancel
                                    .doFinally(signal -> activeRequest.decrementAndGet());
                        }
                        return Mono.error(new RuntimeException("Too many requests"));
                    })
                    .retryWhen(Retry.anyOf(RuntimeException.class)
                            .randomBackoff(Duration.ofMillis(300), Duration.ofMillis(1000))
                            .retryMax(50)
                    )
            )
            .build();
}

public Mono<String> call() {
    return webClient.get()
            .retrieve()
            .bodyToMono(String.class);
}
We can customize the ConnectionProvider to rate limit the active connections on WebClient.
We need to add pendingAcquireMaxCount for the number of requests waiting in the queue, as the default queue size is always 2 * maxConnections.
This limits the WebClient to serving a bounded number of requests at a time:
ConnectionProvider provider = ConnectionProvider.builder("builder")
        .maxConnections(maxConnections)
        .pendingAcquireMaxCount(maxPendingRequests)
        .build();
TcpClient tcpClient = TcpClient.create(provider);
WebClient client = WebClient.builder()
        .baseUrl("url")
        .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient)))
        .build();

Chronicle Roll Files Daily

I am trying to introduce Chronicle Queue into our system and have a question about rolling files daily, but at a specific time in the local time zone of the process. I read a few write-ups on how to specify the roll cycle, but per the documentation the epoch time works relative to midnight UTC. What would I need to do to configure a roll cycle at, say, 5PM in the local time zone of the process? Any suggestions?
public class TestRollCycle {

    public class TestClass implements TestEvent {
        private int counter = 1;

        @Override
        public void setOrGetEvent(String event) {
            System.out.println("Counter Read Value: " + counter);
            counter++;
        }
    }

    public interface TestEvent {
        void setOrGetEvent(String event);
    }

    @Test
    public void testRollProducer() {
        int insertCount = 1;
        String pathOfFile = "rollPath";
        // Epoch is 5:15PM EDT
        SingleChronicleQueue producerQueue = SingleChronicleQueueBuilder.binary(pathOfFile).epoch(32940000).build();
        ExcerptAppender myAppender = producerQueue.acquireAppender();
        TestEvent eventWriter = myAppender.methodWriter(TestEvent.class);
        while (true) {
            String testString = "Insert String";
            eventWriter.setOrGetEvent(testString);
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("Counter Write Value: " + insertCount);
            insertCount++;
        }
    }

    @Test
    public void testRollConsumer() throws InterruptedException {
        String pathOfFile = "rollPath";
        // Epoch is 5:15PM EDT
        SingleChronicleQueue producerQueue = SingleChronicleQueueBuilder.binary(pathOfFile).epoch(32940000).build();
        TestClass myClass = new TestClass();
        ExcerptTailer tailer = producerQueue.createTailer();
        MethodReader methodReader = tailer.methodReader(myClass);
        while (true) {
            if (!methodReader.readOne()) {
                Thread.sleep(1000);
            } else {
                //System.out.println(tailer.index());
            }
        }
    }
}
This is a feature we added to Chronicle Queue Enterprise. I suggest you contact sales@chronicle.software if you are willing to pay for it.
I think there's a problem in your test - the epoch of 32940000 ms supplied to the queue builder is 9 hours 9 minutes (9.15 hours) after midnight, i.e. 9:09AM UTC or 5:09AM EDT, not 5:15PM. The offset needs to be roughly another 12 hours for the roll-time to be 5:15PM EDT (21:15 UTC).
I've added a test that documents the current behaviour for your use-case, and it passes as expected. Can you double-check that you're supplying the correct epoch offset, and perhaps implement a StoreFileListener in order to capture/log any roll events?
The roll will not actually occur until an event is written to the queue after the roll-time boundary, so an idle queue that is not being written to will not roll without input events.
The test is on github:
https://github.com/OpenHFT/Chronicle-Queue/blob/master/src/test/java/net/openhft/chronicle/queue/impl/single/QueueEpochTest.java
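For reference, a minimal sketch of computing the offset, assuming (as described above) that epoch() takes milliseconds after midnight UTC and that the target is 5:15PM EDT, i.e. 21:15 UTC:

import java.time.Duration;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

// 5:15PM EDT == 21:15 UTC
long epochMs = Duration.ofHours(21).plusMinutes(15).toMillis(); // 76,500,000 ms
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary("rollPath")
        .epoch(epochMs)
        .build();

Bear in mind this pins the roll time to UTC, so the local roll time will shift by an hour across daylight-saving changes.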

Commit during transaction in @Transactional

Is it possible to perform a commit inside a method that is marked with Spring's @Transactional?
@PersistenceContext
private EntityManager em;

@Transactional(propagation = Propagation.REQUIRED)
public void saveMembersWithMultipleCommits(List<Member> members) throws HibernateException {
    Iterator<Member> it = members.iterator();
    int i = 0;
    while (it.hasNext()) {
        Member wsBean = it.next();
        em.persist(wsBean); // overall commit will be made after method exit
        log.info("Webservices record " + wsBean + " saved. " + i++);
    }
}
I would like to commit to the DB after, say, every 500 items. Is that possible in the aforementioned context?
No, you need to do it programmatically using, for instance, the TransactionTemplate API. Read more here.
It would look something like
while (it.hasNext()) {
    transactionTemplate.execute(new TransactionCallbackWithoutResult() {
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            int counter = 0;
            while (it.hasNext() && counter < 500) {
                Member wsBean = it.next();
                em.persist(wsBean);
                log.info("Webservices record " + wsBean + " saved. " + counter++);
            }
        }
    });
}
Your question suggests that you have misplaced your transaction boundary.
You can move the persist call into another method and make that method transactional instead of the outer one. This method could accept 500 members at a time and would then commit when it exits.
If you need to commit inside your other (outer) transaction, you might need to use @Transactional(propagation = Propagation.REQUIRES_NEW), as sketched below.
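A minimal sketch of that approach, assuming the inner method lives in a separate Spring bean (self-invocation would bypass the transactional proxy; all names here are illustrative):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MemberBatchWriter {

    @PersistenceContext
    private EntityManager em;

    // Runs in its own transaction and commits when the method returns
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveBatch(List<Member> batch) {
        for (Member member : batch) {
            em.persist(member);
        }
    }
}

The outer method then splits the full list into chunks of 500 and calls saveBatch once per chunk, so each chunk is committed independently.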
An alternative strategy is to create a method in the DAO and mark it @Transactional. This method does a bulk update (e.g., 500 records at a time). So you can have a method like:
@Transactional
public void mybatchUpdateMethod() {
    StatelessSession session = this.hibernateTemplate.getSessionFactory().openStatelessSession();
    Transaction transaction = null;
    long entryCounter = 0L;
    PreparedStatement batchUpdate = null;
    try {
        transaction = session.beginTransaction();
        batchUpdate = session.connection().prepareStatement(insertSql);
        for (BatchSnapshotEntry entry : entries) {
            entry.addEntry(batchUpdate);
            batchUpdate.addBatch();
            if (++entryCounter % 500 == 0) {
                // Reached the limit for unflushed entries, so execute the batch
                batchUpdate.executeBatch();
            }
        }
        batchUpdate.executeBatch();
        batchUpdate.close();
        batchUpdate = null;
        transaction.commit(); // without this, the work begun above is never committed
    } catch (HibernateException ex) {
        transaction.rollback();
        transaction = null;
    }
}
Every time you call this method, it will commit after every 500 inserts/updates.
