How to solve slow throughput with blocking calls in Spring Boot 2? - spring-boot

I use Spring Boot 2 with router functions, and everything is fine as long as there is no blocking code; but if a handler blocks for 2 seconds, throughput collapses. My application can't serve a large number of concurrent users and I can't improve throughput.
The docs have a section, "How do I wrap a synchronous, blocking call?", but that approach doesn't solve the problem.
I created a simple Spring Boot 2 application that reproduces the problem.
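To see why a 2-second block is so damaging to a reactive server, a back-of-envelope calculation helps (the thread count below is an assumption for illustration; Netty defaults to one event-loop thread per CPU core):

```java
public class ThroughputEstimate {
    public static void main(String[] args) {
        // Assumption: 4 Netty event-loop threads (one per core on a 4-core box).
        int eventLoopThreads = 4;
        double blockSeconds = 2.0;   // each request blocks its thread for 2 s

        // While a thread is blocked it can serve no other request, so:
        double maxRequestsPerSecond = eventLoopThreads / blockSeconds;
        System.out.println("Max throughput: " + maxRequestsPerSecond + " req/s");  // 2.0 req/s
    }
}
```

With only a handful of event-loop threads, blocking any of them caps throughput at a few requests per second, no matter how many users connect.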
return serverRequest
        .bodyToMono(ValueDto.class)
        .doOnNext(order -> log.info("get order request " + order))
        .map(i -> {
            log.info("map 1 " + requestId);
            return i;
        })
        .map(i -> {
            log.info("map 2 " + requestId);
            return i;
        })
        .map(i -> {
            log.info("map 3 " + requestId);
            return i;
        })
        .flatMap(i -> Mono.fromCallable(() -> executeLongMethod(i, requestId))
                .subscribeOn(Schedulers.elastic()))
        .map(v -> {
            log.info("map 5 " + requestId);
            return v;
        })
        .flatMap(req -> ServerResponse.ok().build());
private ValueDto executeLongMethod(final ValueDto dto, final String requestId) {
    final long start = System.currentTimeMillis();
    try {
        log.info("start executeLongMethod. requestId:" + requestId);
        TimeUnit.MILLISECONDS.sleep(1500);
        return dto;
    } catch (InterruptedException e) {
        e.printStackTrace();
        return dto;
    } finally {
        log.info("finish executeLongMethod requestId:" + requestId + " executed in " + (System.currentTimeMillis() - start) + "ms.");
    }
}
I performed automated load testing with JMeter, using these settings:
ThreadGroup:
Number of Threads (number of concurrent threads to run during the test): 30
Ramp-Up Period (linearly increase the load from 0 to the target load over this time): 1
Loop Count: forever
POST request body:
{
    "valueA": "fake",
    "valueB": "fake",
    "valueC": "fake"
}
Results:
Code samples can be found over on GitHub.

Related

How to use "cache" method of Mono

I'm a beginner with Spring WebFlux. While researching I found some code like:
Mono result = someMethodThatReturnMono().cache();
The name "cache" suggests caching something, but where is the cache and how do I retrieve the cached value? Is it something like Caffeine?
It caches the result of the steps of the Flux/Mono that precede the cache() call; check the output of this code to see it in action:
import reactor.core.publisher.Mono;

public class CacheExample {
    public static void main(String[] args) {
        var mono = Mono.fromCallable(() -> {
                    System.out.println("Go!");
                    return 5;
                })
                .map(i -> {
                    System.out.println("Double!");
                    return i * 2;
                });
        var cached = mono.cache();

        System.out.println("Using cached");
        System.out.println("1. " + cached.block());
        System.out.println("2. " + cached.block());
        System.out.println("3. " + cached.block());

        System.out.println("Using NOT cached");
        System.out.println("1. " + mono.block());
        System.out.println("2. " + mono.block());
        System.out.println("3. " + mono.block());
    }
}
output:
Using cached
Go!
Double!
1. 10
2. 10
3. 10
Using NOT cached
Go!
Double!
1. 10
Go!
Double!
2. 10
Go!
Double!
3. 10
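The memoization behaviour that cache() provides can be mimicked in plain Java with a memoizing Supplier. This is a rough analogy I wrote for illustration, not Reactor's actual implementation (and it skips cache()'s handling of errors and concurrent first subscribers):

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class MemoDemo {
    // Rough plain-Java analogy of Mono#cache(): the first get() runs the
    // upstream work and stores the result; later calls replay the stored value.
    static <T> Supplier<T> memoize(Supplier<T> upstream) {
        AtomicReference<T> ref = new AtomicReference<>();
        return () -> {
            T v = ref.get();
            if (v == null) {
                v = upstream.get();
                ref.set(v);
            }
            return v;
        };
    }

    public static void main(String[] args) {
        Supplier<Integer> source = () -> {
            System.out.println("Go!");   // side effect runs once when memoized
            return 5 * 2;
        };
        Supplier<Integer> cached = memoize(source);
        System.out.println("1. " + cached.get()); // "Go!" then "1. 10"
        System.out.println("2. " + cached.get()); // just "2. 10", no "Go!"
    }
}
```

So the "cache" lives inside the returned Mono itself; you retrieve the cached value simply by subscribing (or blocking) again.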

Why does performance differ between Flux.range and Flux.range().flatMap(Flux.range())?

I'm using Spring WebClient, based on Netty, to issue a massive number of HTTP requests. Assume:
total = page * cnt
where total is the total number of requests.
One option is to use Flux.range(1, total) as:
@RequestMapping(value = "/tid1", method = RequestMethod.GET)
public Mono<String> total(@RequestParam(value = "total") Integer total) {
    long ms = System.currentTimeMillis();
    return Flux.range(1, total)
            .flatMap(n -> {
                // WebClient POST request here
            })
            .then(Mono.just(total))
            .map(n -> {
                long ms1 = System.currentTimeMillis();
                long du = n * 1000 / (ms1 - ms);
                String res = "Num: " + n + ", QPS: " + du;
                log.error(res);
                return res;
            });
}
Another option is splitting total into page pages, each page takes cnt requests:
@RequestMapping(value = "/tid", method = RequestMethod.GET)
public Mono<String> paging(@RequestParam(value = "page") Integer page, @RequestParam(value = "cnt") Integer cnt) {
    long ms = System.currentTimeMillis();
    return Flux.range(1, page)
            .flatMap(n -> {
                return Flux.range(1, cnt)
                        .flatMap(k -> {
                            // WebClient POST request here
                        });
            })
            .then(Mono.just(cnt * page))
            .map(n -> {
                long ms1 = System.currentTimeMillis();
                long du = n * 1000 / (ms1 - ms);
                String res = "Num: " + n + ", QPS: " + du;
                log.error(res);
                return res;
            });
}
I found that the performance differs a lot between these options: the first has a QPS of 12500, the second a QPS of 7000 (QPS being the number of requests processed per second).
Unfortunately I must adopt the paging solution, and I wonder why the two approaches differ so much.
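One factor worth checking (an assumption on my part, not a confirmed diagnosis of the QPS gap) is that Flux.flatMap bounds how many inner publishers it subscribes to at once - Reactor's default concurrency is 256 - and nesting two flatMaps stacks two such bounds, changing how many requests are in flight. The effect of a concurrency bound can be sketched in plain Java with a Semaphore:

```java
import java.util.concurrent.*;

public class BoundedConcurrency {
    // Simulates flatMap's concurrency bound: at most `limit` of the
    // submitted tasks run at the same time.
    static long runAll(int tasks, int limit, long taskMillis) throws InterruptedException {
        Semaphore slots = new Semaphore(limit);
        ExecutorService pool = Executors.newCachedThreadPool();
        CountDownLatch done = new CountDownLatch(tasks);
        long start = System.nanoTime();
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> {
                try {
                    slots.acquire();          // wait for a free slot
                    Thread.sleep(taskMillis); // stand-in for an HTTP call
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    slots.release();
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        // 100 tasks of 50 ms: a bound of 100 can finish in roughly one wave,
        // a bound of 10 needs about 10 waves.
        System.out.println("limit=100: " + runAll(100, 100, 50) + " ms");
        System.out.println("limit=10:  " + runAll(100, 10, 50) + " ms");
    }
}
```

If this is the cause, passing an explicit concurrency argument to the outer and inner flatMap calls should move the paging variant's QPS toward the flat variant's.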

Quarkus HTTP calls load test results: 1000 requests - 16 seconds vs 65 seconds

Test 1:
@Path("/performance")
public class PerformanceTestResource {

    @Timeout(20000)
    @GET
    @Path("/resource")
    @Produces(MediaType.APPLICATION_JSON)
    public Response performanceResource() {
        final String name = Thread.currentThread().getName();
        System.out.println(name);
        Single<Data> dataSingle = null;
        try {
            dataSingle = Single.fromCallable(() -> {
                final String name2 = Thread.currentThread().getName();
                System.out.println(name2);
                Thread.sleep(1000);
                return new Data();
            }).subscribeOn(Schedulers.io());
        } catch (Exception ex) {
            int a = 1;
        }
        return Response.ok().entity(dataSingle.blockingGet()).build();
    }
}
The test itself (see also the callPeriodically definition):
@QuarkusTest
public class PerformanceTestResourceTest {

    @Tag("load-test")
    @Test
    public void loadTest() throws InterruptedException {
        int CALL_N_TIMES = 1000;
        final long CALL_NIT_EVERY_MILLISECONDS = 10;

        final LoadTestMetricsData loadTestMetricsData = LoadTestUtils.callPeriodically(
                this::callHttpEndpoint,
                CALL_N_TIMES,
                CALL_NIT_EVERY_MILLISECONDS
        );
        assertThat(loadTestMetricsData.responseList.size(), CoreMatchers.is(equalTo(Long.valueOf(CALL_N_TIMES).intValue())));

        long executionTime = loadTestMetricsData.duration.getSeconds();
        System.out.println("executionTime: " + executionTime + " seconds");
        assertThat(executionTime, allOf(greaterThanOrEqualTo(1L), lessThan(20L)));
    }
}
Results test 1:
executionTime: 16 seconds
Test 2: same but without the @Timeout annotation:
executionTime: 65 seconds
Q: Why? I think even 16 seconds is slow.
Q: How to make it faster, say 2 seconds for 1000 calls?
I realise that I use .blockingGet() in the resource, but I would still expect reuse of the blocked threads.
P.S.
I tried to go more 'reactive' by returning Single or CompletionStage from the resource - but this seems not yet ready (buggy on the RESTEasy side). So I go with a simple .blockingGet() and Response.
UPDATE: Reactive / RX Java 2 Way
@Path("/performance")
public class PerformanceTestResource {

    //@Timeout(20000)
    @GET
    @Path("/resource")
    @Produces(MediaType.APPLICATION_JSON)
    public Single<Data> performanceResource() {
        final String name = Thread.currentThread().getName();
        System.out.println("name: " + name);
        return Single.fromCallable(() -> {
            final String name2 = Thread.currentThread().getName();
            System.out.println("name2: " + name2);
            Thread.sleep(1000);
            return new Data();
        });
    }
}
pom.xml:
<dependency>
    <groupId>io.smallrye</groupId>
    <artifactId>smallrye-context-propagation-propagators-rxjava2</artifactId>
</dependency>
<dependency>
    <groupId>org.jboss.resteasy</groupId>
    <artifactId>resteasy-rxjava2</artifactId>
</dependency>
Then when run same test:
executionTime: 64 seconds
The output would be something like:
name: vert.x-worker-thread-5
vert.x-worker-thread-9
name: vert.x-worker-thread-9
name2: vert.x-worker-thread-9
name2: vert.x-worker-thread-5
So we are blocking the worker thread that is used on the REST/resource side. That's why. Then:
If I use Schedulers.io() to put the sleep-1000 call on a separate execution context:
return Single.fromCallable(() -> { ... }).subscribeOn(Schedulers.io());
executionTime: 16 seconds
The output will be something like this (note the newcomer, RxCachedThreadScheduler):
name: vert.x-worker-thread-5
name2: RxCachedThreadScheduler-1683
vert.x-worker-thread-0
name: vert.x-worker-thread-0
vert.x-worker-thread-9
name: vert.x-worker-thread-9
name2: RxCachedThreadScheduler-1658
vert.x-worker-thread-8
It seems that regardless of whether I use blockingGet() explicitly or not, I get the same result.
I assume that if I were not blocking, it would be around 2-3 seconds.
Q: Is there a way to fix/tweak this from this point?
I assume the use of Schedulers.io(), which brings in RxCachedThreadScheduler, is the bottleneck here, so I end up with the 16 seconds - is 200 I/O threads the limit by default? But those threads should be reused, not really blocked. (I don't think it is a good idea to raise that limit to 1000.)
Q: Or anyway: how would I make the app as responsive/reactive/performant as it should be with Quarkus? What did I miss?
Thanks!
OK. Maybe it is me.
In my callPeriodically() I pass CALL_NIT_EVERY_MILLISECONDS = 10 milliseconds.
10 ms * 1000 calls = 10,000 ms - ten seconds spent just issuing the requests.
So I set it to 0.
And got 6 seconds for 1000 simultaneous requests to the server.
Still not 2-3 seconds, but 6.
It seems to make no difference whether I use .blockingGet() and return a Response, or return a Single.
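The pacing arithmetic above can be sanity-checked directly:

```java
public class PacingMath {
    public static void main(String[] args) {
        int calls = 1000;
        long delayMs = 10;                        // CALL_NIT_EVERY_MILLISECONDS
        long pacingOverheadMs = calls * delayMs;  // time spent only on scheduling calls
        System.out.println(pacingOverheadMs / 1000 + " s of pure pacing overhead");  // 10 s
    }
}
```

So with a 10 ms inter-call delay, the load generator alone accounts for 10 of the measured 16 seconds, regardless of how fast the server is.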
--
But just to mention it: the hello-world app below takes 1 second to process 1000 parallel requests, while the Quarkus one takes 6 seconds.
public class Sample2 {

    static final AtomicInteger atomicInteger = new AtomicInteger(0);

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        final List<Single<Response>> listOfSingles = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 1000; i++) {
            // try {
            //     Thread.sleep(10);
            // } catch (InterruptedException e) {
            //     e.printStackTrace();
            // }
            final Single<Response> responseSingle = longCallFunction();
            listOfSingles.add(responseSingle);
        }
        Single<Response> last = Single.merge(listOfSingles).lastElement().toSingle();
        final Response response = last.blockingGet();
        long end = System.currentTimeMillis();
        System.out.println("Execution time: " + (end - start) / 1000);
        System.out.println(response);
    }

    static Single<Response> longCallFunction() {
        return Single.fromCallable(() -> { // 1 sec
            System.out.println(Thread.currentThread().getName());
            Thread.sleep(1000);
            int code = atomicInteger.incrementAndGet();
            //System.out.println(code);
            return new Response(code);
        }).subscribeOn(Schedulers.io());
    }
}

Is it possible to change the frequency at which Spring Actuator performs a health pulse?

I am looking around to see how I can modify my Actuator endpoints (specifically health) to limit their frequency. I want to set it up to trigger once a minute for a specific dataset (e.g. mail) but leave the others alone.
So far I can't find that logic anywhere. The only way I can think of is creating your own health service:
@Component
@RefreshScope
public class HealthCheckService implements HealthIndicator, Closeable {

    @Override
    public Health health() {
        // check if things are stale
        if (System.currentTimeMillis() - this.lastUpdate.get() > this.serviceProperties.getMonitorFailedThreshold()) {
            String errMsg = '[' + this.serviceName + "] health status has not been updated in over ["
                    + this.serviceProperties.getMonitorFailedThreshold() + "] milliseconds. Last updated: ["
                    + this.lastUpdate.get() + ']';
            log.error(errMsg);
            return Health.down().withDetail(this.serviceName, errMsg).build();
        }
        // trace level since this could be called a lot.
        if (this.detailMsg != null) {
            Health.status(this.status);
        }
        Health.Builder health = Health.status(this.status);
        return health.build();
    }

    /**
     * Scheduled, low latency health check.
     */
    @Scheduled(fixedDelayString = "${health.update-delay:60000}")
    public void healthUpdate() {
        if (this.isRunning.get()) {
            if (log.isDebugEnabled()) {
                log.debug("Updating Health Status of [" + this.serviceName + "]. Last Status = ["
                        + this.status.getCode() + ']');
            }
            // do some sort of checking and update the value appropriately.
            this.status = Status.UP;
            this.lastUpdate.set(System.currentTimeMillis());
            if (log.isDebugEnabled()) {
                log.debug("Health Status of [" + this.serviceName + "] updated to [" + this.status.getCode() + ']');
            }
        }
    }
}
I am not sure if there is a way to set this in Spring as configuration, or whether the only way around it is to build a custom HealthIndicator.

Trying to manually commit during interceptor managed transaction

This is a weird situation that I would normally never allow, but our system has unfortunately come to require this kind of scenario.
The System
We are running a Spring/Hibernate application that uses OpenSessionInView and TransactionInterceptor to manage our transactions. For the most part it works great. However, we recently needed to spawn a number of threads to make concurrent HTTP requests to providers.
The Problem
We need the entity that is passed into the thread to have all of the data that we have updated in our current transaction. The problem is that we spawn the thread deep down in the guts of our service layer, and it's very difficult to carve out a smaller transaction to allow this to work. We originally tried just passing the entity to the thread and calling:
leadDao.update(lead);
The problem is that we then get the error about the entity living in two sessions. Next we tried committing the original transaction and reopening it as soon as the threads are complete.
This is what I have listed here.
try {
    logger.info("------- BEGIN MULTITHREAD PING for leadId:" + lead.getId());
    start = new Date();
    leadDao.commitTransaction();
    List<Future<T>> futures = pool.invokeAll(buyerClientThreads, lead.getAffiliate().getPingTimeout(), TimeUnit.SECONDS);
    for (int i = 0; i < futures.size(); i++) {
        Future<T> future = futures.get(i);
        T leadStatus = null;
        try {
            leadStatus = future.get();
            if (logger.isDebugEnabled())
                logger.debug("Retrieved results from thread buyer" + leadStatus.getLeadBuyer().getName() + " leadId:" + leadStatus.getLead().getId() + " time:" + DateUtils.formatDate(start, "HH:mm:ss"));
        } catch (CancellationException e) {
            leadStatus = extractErrorPingLeadStatus(lead, "Timeout - CancellationException", buyerClientThreads.get(i).getBuyerClient().getLeadBuyer(), buyerClientThreads.get(i).getBuyerClient().constructPingLeadStatusInstance());
            leadStatus.setTimeout(true);
            leadStatus.setResponseTime(new Date().getTime() - start.getTime());
            logger.debug("We had a ping that didn't make it in time");
        }
        if (leadStatus != null) {
            completed.add(leadStatus);
        }
    }
} catch (InterruptedException e) {
    logger.debug("There was a problem calling the pool of pings", e);
} catch (ExecutionException e) {
    logger.error("There was a problem calling the pool of pings", e);
}
leadDao.beginNewTransaction();
leadDao.beginNewTransaction();
The begin transaction looks like this:
public void beginNewTransaction() {
    if (getCurrentSession().isConnected()) {
        logger.info("Session is not connected");
        getCurrentSession().reconnect();
        if (getCurrentSession().isConnected()) {
            logger.info("Now connected!");
        } else {
            logger.info("STill not connected---------------");
        }
    } else if (getCurrentSession().isOpen()) {
        logger.info("Session is not open");
    }
    getCurrentSession().beginTransaction();
    logger.info("BEGINNING TRANSAACTION - " + getCurrentSession().getTransaction().isActive());
}
The threads are using TransactionTemplates since my buyerClient object is not managed by spring (long involved requirements).
Here is that code:
@SuppressWarnings("unchecked")
private T processPing(Lead lead) {
    Date now = new Date();
    if (logger.isDebugEnabled()) {
        logger.debug("BEGIN PINGING BUYER " + getLeadBuyer().getName() + " for leadId:" + lead.getId() + " time:" + DateUtils.formatDate(now, "HH:mm:ss:Z"));
    }
    Object leadStatus = transaction(lead);
    if (logger.isDebugEnabled()) {
        logger.debug("PING COMPLETE FOR BUYER " + getLeadBuyer().getName() + " for leadId:" + lead.getId() + " time:" + DateUtils.formatDate(now, "HH:mm:ss:Z"));
    }
    return (T) leadStatus;
}

public T transaction(final Lead incomingLead) {
    final T pingLeadStatus = this.constructPingLeadStatusInstance();
    Lead lead = leadDao.fetchLeadById(incomingLead.getId());
    T object = transactionTemplate.execute(new TransactionCallback<T>() {
        @Override
        public T doInTransaction(TransactionStatus status) {
            Date startTime = null, endTime = null;
            logger.info("incomingLead obfid:" + incomingLead.getObfuscatedAffiliateId() + " affiliateId:" + incomingLead.getAffiliate().getId());
            T leadStatus = null;
            if (leadStatus == null) {
                leadStatus = filterLead(incomingLead);
            }
            if (leadStatus == null) {
                leadStatus = pingLeadStatus;
                leadStatus.setLead(incomingLead);
                // ...LOTS OF CODE
            }
            if (logger.isDebugEnabled())
                logger.debug("RETURNING LEADSTATUS FOR BUYER " + getLeadBuyer().getName() + " for leadId:" + incomingLead.getId() + " time:" + DateUtils.formatDate(new Date(), "HH:mm:ss:Z"));
            return leadStatus;
        }
    });
    if (logger.isDebugEnabled()) {
        logger.debug("Transaction complete for buyer:" + getLeadBuyer().getName() + " leadId:" + incomingLead.getId() + " time:" + DateUtils.formatDate(new Date(), "HH:mm:ss:Z"));
    }
    return object;
}
However, when we begin our new transaction we get this error:
org.springframework.transaction.TransactionSystemException: Could not commit Hibernate transaction; nested exception is org.hibernate.TransactionException: Transaction not successfully started
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:660)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:754)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:723)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:393)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:120)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:90)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
My Goal
My goal is to have that entity fully initialized on the other side. Does anyone have ideas on how I can commit the data to the database so the thread can have a fully populated object? Or a way to query for a full object?
Thanks - I know this is really involved. I apologize if I haven't been clear enough.
I have tried:
Hibernate.initialize()
saveWithFlush()
update(lead)
I didn't follow everything, but you can try one of these to work around the issue you get about the same object being associated with two sessions.
// Do this in the main thread to detach the object from the current session.
// If it has associations that also need to be handled, cascade=evict should
// be specified. Another option is to do flush & clear on the session.
session.evict(object);

// Pass the object to the other thread; in the other thread, use merge:
session.merge(object);
Second approach: create a deep copy of the object and pass the copy. This is easy if your entity classes are serializable - just serialize the object and deserialize it.
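The serialize/deserialize deep copy can be sketched like this (plain Java; it assumes the entity and everything it references implement Serializable, and the ArrayList in main stands in for an entity graph):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class DeepCopy {
    // Deep-copies any Serializable object graph by round-tripping it
    // through an in-memory byte stream.
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T copy(T original) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(original);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> lead = new ArrayList<>(List.of("a", "b"));
        ArrayList<String> detachedCopy = copy(lead);
        detachedCopy.add("c");
        System.out.println(lead);         // [a, b] - original untouched
        System.out.println(detachedCopy); // [a, b, c]
    }
}
```

The copy is completely detached from any Hibernate session, so the worker thread can read it freely; changes to it would of course need to be merged back explicitly.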
Thanks @gkamal for your help.
For everyone living in posterity: the answer to my dilemma was a leftover call to hibernateTemplate instead of getCurrentSession(). I made the move about a year and a half ago and for some reason missed a few key places. This was generating a second transaction. After that I was able to use @gkamal's suggestion: evict the object and grab it again.
This post helped me figure it out:
http://forum.springsource.org/showthread.php?26782-Illegal-attempt-to-associate-a-collection-with-two-open-sessions