Hystrix does not terminate the thread after calling fallback - spring-boot

I am testing a slow Feign client response using Chaos Monkey for Spring Boot. The fallback gets called and the response is returned, but the method execution continues:
logger.info("Get the value from the organization ms {}",Thread.currentThread().getName());
organizationDTO = organizationRemoteData.getRemoteOrgData(organizationId); // 1
logger.info("saving data in cache {} by {}", organizationDTO,Thread.currentThread().getName());
// saving data in redis
cacheOrganizationObject(organizationDTO);
return organizationDTO;
The line marked // 1 fails and the fallback gets called, but I still see "saving data in cache" in the logs. This behavior makes the application inconsistent. Is there any workaround?
Logs:
Get the value from the organization ms hystrix-organizationThreadPool-1
calling fallback method to get the organization data for id 1
saving data in cache OrganizationDTO [] by hystrix-organizationThreadPool-1
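One possible mitigation (a sketch, relying on Hystrix thread isolation interrupting the command thread on timeout, which is the default via execution.isolation.thread.interruptOnTimeout): check the interrupt flag after the remote call and skip side effects such as the cache write:
logger.info("Get the value from the organization ms {}", Thread.currentThread().getName());
organizationDTO = organizationRemoteData.getRemoteOrgData(organizationId); // 1

// If Hystrix timed out, it interrupted this thread and the fallback has
// already been returned to the caller; skip the stale cache write.
if (Thread.currentThread().isInterrupted()) {
    logger.warn("Hystrix command timed out; skipping cache write for organization {}", organizationId);
    return organizationDTO;
}

// saving data in redis
cacheOrganizationObject(organizationDTO);
return organizationDTO;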

Related

Firestore: save document asynchronously in Java Spring application

Per the Java example, a document can be written as:
ApiFuture<WriteResult> future = db.collection("cities").document("LA").set(docData);
// ...
// future.get() blocks on response
System.out.println("Update time : " + future.get().getUpdateTime());
but if there is a big document and I do not want to wait (block) on it to finish, but want to let it finish in the background, I tried using:
future.get(2, TimeUnit.MILLISECONDS).getUpdateTime()
Will this guarantee that the document is written? Sometimes I get the following error:
java.util.concurrent.TimeoutException: Waited 2 nanoseconds for
TransformFuture#aaac[status=PENDING, info=[inputFuture=
[com.google.api.core.ApiFutureToListenableFuture#qqqqa5b9],
function=[com.google.api.core.ApiFutures$ApiFunctionToGuavaFunction#3244n86]]] at
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:508)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get
(FluentFuture.java:93)
So does the application need to wait for the write (set) operation to finish, or will Firestore take care of it? The app is running on Google Cloud Run.
Since you have an asynchronous operation, you need to ensure you only send a response to the caller once that asynchronous operation has completed.
See the documentation on avoiding background activities if CPU is allocated only during request processing, specifically the part about Spring Boot in there.
Failing to follow this guidance may result in your code getting terminated by Cloud Run before it completes, or in your being charged for code that runs longer than necessary.
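Concretely, a minimal sketch (assuming the db and docData from the question inside a Spring @RestController; the endpoint name is illustrative):
@PostMapping("/cities/la")
public String updateCity() throws InterruptedException, ExecutionException {
    ApiFuture<WriteResult> future = db.collection("cities").document("LA").set(docData);
    // Block until Firestore acknowledges the write, then send the response;
    // Cloud Run keeps the CPU allocated while the request is still in flight.
    return "Update time: " + future.get().getUpdateTime();
}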

Webflux - hanging requests when using bounded elastic Scheduler

I have a service written with WebFlux that is under high load (40 requests per second),
and I'm encountering really bad latency and performance issues with behaviour I can't explain: at some point during peaks, the service hangs in random locations, as if it doesn't have any threads left to handle the request.
The service does, however, make several calls to different services that aren't reactive (using WebClient), and another call to a main service that retrieves the main data through an SDK wrapped in Mono.fromCallable(..).publishOn(Schedulers.boundedElastic()).
So the flow is:
1. receive a request, such as Mono<Request>
2. convert it to an internal object, Mono<RequestAggregator>
3. call GCP to get a JWT token, then call some service to get data using WebClient
4. call the main service using Mono.fromCallable(MainService.getData(RequestAggregator)).publishOn(Schedulers.boundedElastic())
5. call another service to get more data (same as 3)
6. call another service to get more data (same as 3)
7. do some manipulation with all the data and return a Mono<Response>
The WebClient calls look something like this:
Mono.fromCallable(() -> GoogleService.getToken(account, clientId)
        .buildIapRequest(REQUEST_URL))
    .map(httpRequest -> httpRequest.getHeaders().getAuthorization())
    .flatMap(authToken -> webClient.post()
        .uri("/call/some/endpoint")
        .header(HttpHeaders.AUTHORIZATION, authToken)
        .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
        .header(HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON_VALUE)
        .body(BodyInserters.fromValue(countries))
        .retrieve()
        .onStatus(HttpStatus::isError, clientResponse -> {
            log.error("{} got status code: {}",
                    ERROR_MSG_ERROR, clientResponse.statusCode());
            return Mono.error(new SomeWebClientException(STATE_ABBREVIATIONS_ERROR));
        })
        .bodyToMono(SomeData.class));
Sometimes step 6 hangs for more than 11 minutes, even though the called service itself does not have any issues; it's not reactive, but its responses take ~400 ms.
Another thing worth mentioning is that MainService performs a heavy IO operation that might take 1 minute or more.
I feel like a lot of requests hang on MainService and there aren't any threads left for the other operations. Does that make sense? If so, how does one solve something like that?
Can someone suggest any reason for this issue? I'm all out of ideas.
It's not possible to tell for sure without knowing the full application, but indeed the blocking IO operation is the most likely culprit.
Schedulers.boundedElastic(), as its name suggests, is bounded. By default the bound is "ten times the number of available CPU cores", so on a 2-core machine it would be 20. If you have more concurrent requests than the limit, the rest are put into a queue, waiting indefinitely for a free thread. If you need more concurrency than that, you should consider setting up your own scheduler using Schedulers.newBoundedElastic or Schedulers.fromExecutor with a higher limit, as sketched below.
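A sketch (the caps of 100 threads / 100,000 queued tasks are illustrative, and mainService/aggregator stand in for the question's MainService call):
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Scheduler;
import reactor.core.scheduler.Schedulers;

// A dedicated pool for the slow MainService call, so it cannot starve the
// shared boundedElastic pool used by the rest of the application.
Scheduler mainServicePool = Schedulers.newBoundedElastic(100, 100_000, "main-service");

Mono<MainData> data = Mono.fromCallable(() -> mainService.getData(aggregator))
        // subscribeOn, not publishOn: subscribeOn moves the blocking callable
        // itself onto the scheduler, while publishOn only affects operators
        // downstream of it.
        .subscribeOn(mainServicePool)
        // Fail fast rather than letting requests queue for 11+ minutes.
        .timeout(Duration.ofSeconds(90));
Note also that in the question's snippet, publishOn(Schedulers.boundedElastic()) placed after Mono.fromCallable does not move the blocking call itself off the calling thread; subscribeOn does.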

Does Spring allow configuring a retry and recovery mechanism for the KafkaTemplate send method?

Trying to build a list of possible errors that can potentially happen during the execution of the kafkaTemplate.send() method:
Errors related to the serialization process;
Network issues, or the broker is down;
Technical issues on the broker side, for example an acknowledgement not received from the broker, etc.
And now I need to find a way to handle all possible errors in the right way.
Based on the business requirements, in case of any exception I need to do the following:
Retry 3 times;
If all 3 retries fail, log an appropriate message.
I found that the configuration property spring.kafka.producer.retries is available, and I believe it is exactly what I need.
But how can I configure a recovery method (a method that will be executed when all retries have failed)?
That spring.kafka.producer.retries is probably not what you are looking for.
This auto-configuration property is mapped directly to ProducerConfig:
map.from(this::getRetries).to(properties.in(ProducerConfig.RETRIES_CONFIG));
and then we can read the docs for that ProducerConfig.RETRIES_CONFIG property:
private static final String RETRIES_DOC = "Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error."
+ " Note that this retry is no different than if the client resent the record upon receiving the error."
+ " Allowing retries without setting <code>" + MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION + "</code> to 1 will potentially change the"
+ " ordering of records because if two batches are sent to a single partition, and the first fails and is retried but the second"
+ " succeeds, then the records in the second batch may appear first. Note additionally that produce requests will be"
+ " failed before the number of retries has been exhausted if the timeout configured by"
+ " <code>" + DELIVERY_TIMEOUT_MS_CONFIG + "</code> expires first before successful acknowledgement. Users should generally"
+ " prefer to leave this config unset and instead use <code>" + DELIVERY_TIMEOUT_MS_CONFIG + "</code> to control"
+ " retry behavior.";
As you can see, spring-retry is not involved in the process at all; all the retries are done directly inside the Kafka client and its KafkaProducer infrastructure.
That is not all, though. Pay attention to the KafkaProducer.send() contract:
Future<RecordMetadata> send(ProducerRecord<K, V> record);
It returns a Future. If we take a closer look at the implementation, we will see that there is a synchronous part (the topic metadata request and serialization) and an enqueuing of the batch for asynchronous sending to the Kafka broker. The mentioned ProducerConfig.RETRIES_CONFIG has an effect only in Sender.completeBatch().
I believe the Future is completed with an error when those internal retries are exhausted. So you should probably think about using a RetryTemplate manually in the service method around KafkaTemplate, to be able to control the retry (and recovery, respectively) around the metadata request and serialization, which are really synchronous and blocking in the current call. You can also control the actual send with a retry in that method, but only if you call Future.get() to block for a response or error from the Kafka client.
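A sketch of that manual approach (the method name, logger, and 1-second backoff are illustrative, not Spring API):
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.retry.RecoveryCallback;
import org.springframework.retry.RetryCallback;
import org.springframework.retry.support.RetryTemplate;

public SendResult<String, String> sendWithRetry(KafkaTemplate<String, String> kafkaTemplate,
        String topic, String payload) throws Exception {
    RetryTemplate retryTemplate = RetryTemplate.builder()
            .maxAttempts(3)      // per the business requirements
            .fixedBackoff(1_000) // 1 s between attempts (illustrative)
            .build();

    // get() blocks for the broker ack, so metadata, serialization and the
    // actual send all surface their failures here and trigger a retry.
    RetryCallback<SendResult<String, String>, Exception> send =
            context -> kafkaTemplate.send(topic, payload).get();

    // Runs once, only after all 3 attempts have failed.
    RecoveryCallback<SendResult<String, String>> recover = context -> {
        log.error("Send to {} failed after {} attempts", topic,
                context.getRetryCount(), context.getLastThrowable());
        return null;
    };

    return retryTemplate.execute(send, recover);
}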

Why is returning a Spring Boot ResponseEntity so slow?

So I am building this Spring Boot REST consumer within an API. The API request depends on a different API:
the user makes a request to my API, and my API makes a request to another service to log the user in.
While building this I came to the conclusion that returning a ResponseEntity is much slower than just returning the body object directly.
This is my fast code; response time is less than a second:
@PostMapping("/adminLogin")
fun adminLogin(@RequestBody credentials: Credentials): AuthResponse {
    return RestTemplate().getForEntity(
        "$authenticatorURL/adminLogin?userName=${credentials.username}&passWord=${credentials.password}",
        AuthResponse::class.java).body
}
This version takes many seconds to respond:
@PostMapping("/adminLogin")
fun adminLogin(@RequestBody credentials: Credentials): ResponseEntity<AuthResponse> {
    return RestTemplate().getForEntity(
        "$authenticatorURL/adminLogin?userName=${credentials.username}&passWord=${credentials.password}",
        AuthResponse::class.java)
}
Can someone explain what the difference is and why one approach is faster than the other?
I had the same issue yesterday. The problem was as follows: imagine the API I use is sending JSON like this:
{"id": "12"}
What I do is read that into a ResponseEntity<IdDTO>, where IdDTO stores the id field as an integer. When I return this ResponseEntity as the response to my request, it produces this:
{"id": 12} // notice the absence of string quotes around 12
The problem is as follows: the API I call sends a Content-Length header equal to 12, but after my DTO conversion the actual body becomes 10 characters long.
Spring does not recalculate the content length, so the client reads the 10 characters you sent and then waits for the other 2. It never receives anything, and Spring closes the connection after 1 minute (the default timeout for a connection).
If you create a new response entity and put your data into it, Spring will calculate the new content length and it will be as fast as the first case you mentioned.
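In other words, a sketch of that fix (shown in Java for illustration; the Kotlin version is analogous, and the Credentials getters are assumptions):
@PostMapping("/adminLogin")
public ResponseEntity<AuthResponse> adminLogin(@RequestBody Credentials credentials) {
    ResponseEntity<AuthResponse> upstream = new RestTemplate().getForEntity(
            authenticatorURL + "/adminLogin?userName=" + credentials.getUsername()
                    + "&passWord=" + credentials.getPassword(),
            AuthResponse.class);
    // A fresh entity: Spring serializes the body again and computes a correct
    // Content-Length, instead of reusing the stale upstream headers.
    return ResponseEntity.status(upstream.getStatusCode()).body(upstream.getBody());
}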

Releasing Grails database connection early

I have a controller action that does the following things:
Gets a domain object from the database
Uses info on that object to find a data file (on disk) and writes the contents of that file to the response output stream.
My problem is that the database connection is reserved for the duration of the action, including the (long) time required to stream the data. This results in a lot of database connections being held unnecessarily when several users are streaming data at the same time.
def stream() {
    StreamDetails sd = StreamDetails.get(params.id)
    // Extract info needed to read the stream
    String filename = sd.filename
    // The database connection is no longer needed, how to properly release it?

    // Start writing the data stream to response output
    // This may take a long time and does not use a db connection
    streamService.writeToOutput(filename, response.getOutputStream())
}
I have tried:
Injecting the sessionFactory bean into the controller and calling sessionFactory.currentSession.close() before calling the service. However, this causes a SessionException on the line calling the service, i.e. before entering the writeToOutput() method (and nothing in that method needs a database connection). Besides, I don't think the session should really be closed, just released to the pool.
Copy-pasting the code from streamService.writeToOutput(...) into the controller to avoid the service call. In this case all the code gets executed, but a SessionException is still thrown after the action completes.
How to properly release the connection early?
Have you tried injecting the dataSource? You could use DataSourceUtils to obtain a connection that you then use to get the filename, and manually close() (release) this connection afterwards, as sketched below.
I don't know if you can use this connection in combination with GORM, so you might have to write a custom SQL query as well.
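A sketch of that approach (JDBC shown Java-style; the table and column names are assumptions, exception handling is elided, and the Groovy version inside the controller action is nearly identical):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.springframework.jdbc.datasource.DataSourceUtils;

// Borrow a pooled connection just long enough to read the filename...
Connection conn = DataSourceUtils.getConnection(dataSource);
String filename = null;
try {
    PreparedStatement ps = conn.prepareStatement(
            "select filename from stream_details where id = ?");
    ps.setLong(1, id);
    ResultSet rs = ps.executeQuery();
    if (rs.next()) {
        filename = rs.getString("filename");
    }
    rs.close();
    ps.close();
} finally {
    // ...and hand it straight back to the pool (releaseConnection returns a
    // pooled connection rather than destroying it).
    DataSourceUtils.releaseConnection(conn, dataSource);
}
// The long-running streaming below holds no database connection.
streamService.writeToOutput(filename, response.getOutputStream());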