How can I make this full CompletableFuture chain run asynchronously on a separate executor?
.thenApply(r -> validateStudents())
.thenCompose(r -> fetchAll(r))
.thenCompose(r -> processAll(r))
.whenComplete((r, t) -> {
});
You can use the Async methods from CompletableFuture with the default ForkJoinPool.
All async methods without an explicit Executor argument are performed using ForkJoinPool.commonPool() (unless it does not support a parallelism level of at least two, in which case a new Thread is created to run each task):
System.out.println(Thread.currentThread().getName()); // prints "main"
CompletableFuture.supplyAsync(() -> {
    System.out.println(Thread.currentThread().getName()); // runs on a commonPool worker
    return "supplyAsync";
}).thenApplyAsync(supply -> {
    System.out.println(Thread.currentThread().getName() + "----" + supply);
    return "applyAsync";
}).thenComposeAsync(compose -> {
    System.out.println(Thread.currentThread().getName() + "----" + compose);
    return CompletableFuture.completedStage("composeAsync");
});
Output:
main
ForkJoinPool.commonPool-worker-3
ForkJoinPool.commonPool-worker-3----supplyAsync
ForkJoinPool.commonPool-worker-3----applyAsync
You can also define a custom thread pool and pass it as the Executor argument to each Async method:
ExecutorService pool = Executors.newFixedThreadPool(1);
System.out.println(Thread.currentThread().getName());
CompletableFuture.supplyAsync(() -> {
    System.out.println(Thread.currentThread().getName());
    return "supplyAsync";
}, pool).thenApplyAsync(supply -> {
    System.out.println(Thread.currentThread().getName() + "----" + supply);
    return "applyAsync";
}, pool).thenComposeAsync(compose -> {
    System.out.println(Thread.currentThread().getName() + "----" + compose);
    return CompletableFuture.completedStage("composeAsync");
}, pool);
Output:
main
pool-1-thread-1
pool-1-thread-1----supplyAsync
pool-1-thread-1----applyAsync
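Applying the same idea to the chain from the question, here is a minimal runnable sketch; the bodies of validateStudents, fetchAll, and processAll are hypothetical stand-ins, since the originals were not shown:
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncChain {
    // hypothetical stand-ins for the question's methods
    static String validateStudents() { return "validated"; }
    static CompletableFuture<String> fetchAll(String r) { return CompletableFuture.completedFuture(r + "-fetched"); }
    static CompletableFuture<String> processAll(String r) { return CompletableFuture.completedFuture(r + "-processed"); }

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        CompletableFuture
                .supplyAsync(AsyncChain::validateStudents, executor) // starts the chain on the custom pool
                .thenComposeAsync(AsyncChain::fetchAll, executor)    // each *Async stage takes the pool explicitly
                .thenComposeAsync(AsyncChain::processAll, executor)
                .whenComplete((r, t) -> executor.shutdown())         // release the pool when the chain ends
                .join();
    }
}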
I want to be able to execute multiple jobs concurrently on a Job Consumer. At the moment, if I run one service instance and try to execute 2 jobs concurrently, 1 job waits for the other to complete (i.e. it waits for the single job slot to become available).
However, if I run 2 instances by using dotnet run twice to create 2 separate processes, I get the desired behavior where both jobs run at the same time.
Is it possible to run 2 (or more) jobs at the same time for a given consumer inside a single process? My application requires the ability to run several jobs concurrently but I don't have the ability to deploy many instances of my application.
Checking the application log I see this line which I feel may have something to do with it:
[04:13:43 DBG] Concurrent Job Limit: 1
I tried changing the SagaPartitionCount to something other than 1 on instance.ConfigureJobServiceEndpoints to no avail. I can't seem to get the Concurrent Job Limit to change.
My configuration looks like this:
services.AddMassTransit(x =>
{
x.AddDelayedMessageScheduler();
x.SetKebabCaseEndpointNameFormatter();
// registering the job consumer
x.AddConsumer<DeploymentConsumer>(typeof(DeploymentConsumerDefinition));
x.AddSagaRepository<JobSaga>()
.EntityFrameworkRepository(r =>
{
r.ExistingDbContext<JobServiceSagaDbContext>();
r.LockStatementProvider = new SqlServerLockStatementProvider();
});
// add other saga repositories here for JobTypeSaga and JobAttemptSaga here as well
x.UsingRabbitMq((context, cfg) =>
{
var rmq = configuration.GetSection("RabbitMq").Get<RabbitMq>();
cfg.Host(rmq.Host, rmq.Port, rmq.VirtualHost, h =>
{
h.Username(rmq.Username);
h.Password(rmq.Password);
});
cfg.UseDelayedMessageScheduler();
var options = new ServiceInstanceOptions()
.SetEndpointNameFormatter(context.GetService<IEndpointNameFormatter>() ?? KebabCaseEndpointNameFormatter.Instance);
cfg.ServiceInstance(options, instance =>
{
instance.ConfigureJobServiceEndpoints(js =>
{
js.SagaPartitionCount = 1;
js.FinalizeCompleted = true;
js.ConfigureSagaRepositories(context);
});
instance.ConfigureEndpoints(context);
});
});
});
Where DeploymentConsumerDefinition looks like:
public class DeploymentConsumerDefinition : ConsumerDefinition<DeploymentConsumer>
{
protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
IConsumerConfigurator<DeploymentConsumer> consumerConfigurator)
{
consumerConfigurator.Options<JobOptions<DeploymentConsumer>>(options =>
{
options.SetJobTimeout(TimeSpan.FromMinutes(20));
options.SetConcurrentJobLimit(10);
options.SetRetry(r =>
{
r.Ignore<InvalidOperationException>();
r.Interval(5, TimeSpan.FromSeconds(10));
});
});
}
}
Your definition should specify the job consumer message type, not the job consumer type:
public class DeploymentConsumerDefinition : ConsumerDefinition<DeploymentConsumer>
{
protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
IConsumerConfigurator<DeploymentConsumer> consumerConfigurator)
{
// MESSAGE TYPE NOT CONSUMER TYPE
consumerConfigurator.Options<JobOptions<DeploymentCommand>>(options =>
{
options.SetJobTimeout(TimeSpan.FromMinutes(20));
options.SetConcurrentJobLimit(10);
options.SetRetry(r =>
{
r.Ignore<InvalidOperationException>();
r.Interval(5, TimeSpan.FromSeconds(10));
});
});
}
}
From time to time a Spring REST function fails with: "kotlinx.coroutines.JobCancellationException: MonoCoroutine was cancelled".
It is a suspend function which calls another service using the spring-webflux client. There are multiple suspend functions in my REST class. It looks like this problem occurs when multiple requests arrive at the same time. But maybe not :-)
The application runs on a Netty server.
Example:
@GetMapping("/customer/{id}")
suspend fun getCustomer(@PathVariable @NotBlank id: String): ResponseEntity<CustomerResponse> =
withContext(MDCContext()) {
ResponseEntity.status(HttpStatus.OK)
.body(customerService.aggregateCustomer(id))
}
Service call:
suspend fun executeServiceCall(vararg urlData: Input) = webClient
.get()
.uri(properties.url, *urlData)
.retrieve()
.bodyToMono(responseTypeRef)
.retryWhen(
Retry.fixedDelay(properties.retryCount, properties.retryBackoff)
.onRetryExhaustedThrow { _, retrySignal ->
handleRetryException(retrySignal)
}
.filter { it is ReadTimeoutException || it is ConnectTimeoutException }
)
.onErrorMap {
// throw exception
}
.awaitFirstOrNull()
Part of the stack trace:
Caused by: kotlinx.coroutines.JobCancellationException: MonoCoroutine was cancelled; job="coroutine#1":MonoCoroutine{Cancelling}#650774ce
at kotlinx.coroutines.JobSupport.cancel(JobSupport.kt:1578)
at kotlinx.coroutines.Job$DefaultImpls.cancel$default(Job.kt:183)
at kotlinx.coroutines.reactor.MonoCoroutine.dispose(Mono.kt:122)
at reactor.core.publisher.FluxCreate$SinkDisposable.dispose(FluxCreate.java:1033)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.disposeResource(MonoCreate.java:313)
at reactor.core.publisher.MonoCreate$DefaultMonoSink.cancel(MonoCreate.java:300)
I'm using the following code to send a request/response message between two different processes.
This is the process that "sends" the request:
// configure host
var hostUri = new Uri(configuration["RabbitMQ:Host"]);
services.AddSingleton(provider => Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(hostUri, h =>
{
h.Username(configuration["RabbitMQ:Username"]);
h.Password(configuration["RabbitMQ:Password"]);
});
}));
// add request client
services.AddScoped(provider => provider.GetRequiredService<IBus>()
    .CreateRequestClient<QueryUserInRole, QueryUserInRoleResult>(
        new Uri(hostUri, "focus-authorization"),
        TimeSpan.FromSeconds(5)));
// add dependencies
services.AddSingleton<IPublishEndpoint>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<ISendEndpointProvider>(provider => provider.GetRequiredService<IBusControl>());
services.AddSingleton<IBus>(provider => provider.GetRequiredService<IBusControl>());
// add the service class so that the runtime can automatically handle the start and stop of our bus
services.AddSingleton<Microsoft.Extensions.Hosting.IHostedService, BusService>();
Here's the implementation of the BusService:
public class BusService : Microsoft.Extensions.Hosting.IHostedService
{
private readonly IBusControl _busControl;
public BusService(IBusControl busControl)
{
_busControl = busControl;
}
public Task StartAsync(CancellationToken cancellationToken)
{
return _busControl.StartAsync(cancellationToken);
}
public Task StopAsync(CancellationToken cancellationToken)
{
return _busControl.StopAsync(cancellationToken);
}
}
The problem is that when the CreateRequestClient code runs, the bus has not started up yet. Thus the response is never returned from the consumer.
If I replace the host configuration with the following code, I get the desired behavior:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
var host = cfg.Host(hostUri, h =>
{
h.Username(configuration["RabbitMQ:Username"]);
h.Password(configuration["RabbitMQ:Password"]);
});
});
bus.Start();
services.AddSingleton(bus);
For some reason, the BusService (IHostedService) executes AFTER the AddScoped delegates.
My question is: what is the correct way to start up the bus before using the CreateRequestClient method? Or is the latter approach to starting up the bus sufficient?
I am new to Spring 5.
1) How can I log the method params which are of Mono and Flux type without blocking them?
2) How do I map models at the API layer to business objects at the service layer using MapStruct?
Edit 1:
I have this imperative code which I am trying to convert into reactive code. It has a compilation issue at the moment due to the introduction of Mono in the argument.
public Mono<UserContactsBO> getUserContacts(Mono<LoginBO> loginBOMono)
{
LOGGER.info("Get contact info for login: {}, and client: {}", loginId, clientId);
if (StringUtils.isAllEmpty(loginId, clientId)) {
LOGGER.error(ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
throw new ServiceValidationException(
ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getErrorCode(),
ErrorCodes.LOGIN_ID_CLIENT_ID_NULL.getDescription());
}
if (!loginId.equals(clientId)) {
if (authorizationFeignClient.validateManagerClientAccess(new LoginDTO(loginId, clientId))) {
loginId = clientId;
} else {
LOGGER.error(ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
throw new AuthorizationException(
ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getErrorCode(),
ErrorCodes.LOGIN_ID_VALIDATION_ERROR.getDescription());
}
}
UserContactDetailEntity userContactDetail = userContactRepository.findByLoginId(loginId);
LOGGER.debug("contact info returned from DB{}", userContactDetail);
//mapstruct to map entity to BO
return contactMapper.userEntityToUserContactBo(userContactDetail);
}
You can try it like this.
If you want to add logs you may use .map (or .doOnNext) and add them there. If a filter does not pass, the chain completes empty, which you can handle with switchIfEmpty.
loginBOMono
    .filter(loginBO -> !StringUtils.isAllEmpty(loginBO.loginId, loginBO.clientId))
    // proceed when the ids match, or when manager access is validated
    .filter(loginBO -> loginBO.loginId.equals(loginBO.clientId)
            || authorizationFeignClient.validateManagerClientAccess(
                    new LoginDTO(loginBO.loginId, loginBO.clientId)))
    .map(loginBO -> {
        loginBO.loginId = loginBO.clientId;
        return loginBO;
    })
    // assumes a reactive repository returning Mono<UserContactDetailEntity>
    .flatMap(loginBO -> userContactRepository.findByLoginId(loginBO.loginId))
    .map(contactMapper::userEntityToUserContactBo);
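For the logging part of the question (part 1), here is a minimal non-blocking sketch using Reactor's doOnNext/doOnError side-effect operators; the class and logger names are hypothetical, not from the original code:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import reactor.core.publisher.Mono;

public class ReactiveLogging {
    private static final Logger LOGGER = LoggerFactory.getLogger(ReactiveLogging.class);

    // Logs the value carried by a Mono without blocking: the callbacks fire
    // as signals flow through the pipeline, so the value is never awaited.
    static <T> Mono<T> logged(String name, Mono<T> source) {
        return source
                .doOnNext(value -> LOGGER.info("{} emitted: {}", name, value))
                .doOnError(e -> LOGGER.error("{} failed", name, e));
    }
}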
I'm looking to eagerly cache the results of a Reactor Mono. It's scheduled to be updated in cache every 10 minutes, but since the Mono is only evaluated when subscribed to, the task doesn't actually refresh the cache.
Example:
@Scheduled(fixedRate = 10 * 60 * 1000 + 3000)
fun getMessage(): Mono<String> {
return Mono.just("Hello")
.map { it.toUpperCase() }
.cache(Duration.ofMinutes(10))
}
You need to store your Mono somewhere, otherwise each invocation of the method (through @Scheduled or directly) will return a different instance.
Perhaps as a companion object?
Here is how I would do it naïvely in Java:
protected Mono<String> cached;
//for the scheduler to periodically eagerly refresh the cache
@Scheduled(fixedRate = 10 * 60 * 1000 + 3000)
void refreshCache() {
    this.cached = Mono.just("Hello")
            .map(String::toUpperCase)
            .cache(Duration.ofMinutes(10));
    this.cached.subscribe(v -> {}, e -> {}); //swallows errors during refresh
}
//for users
public Mono<String> getMessage() {
return this.cached;
}
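One caveat with this sketch: refreshCache() and getMessage() run on different threads, so the cached field should be volatile (or an AtomicReference) to guarantee readers see the freshly swapped Mono. A variant of the same idea with that assumption made explicit:
import java.time.Duration;
import java.util.concurrent.atomic.AtomicReference;
import reactor.core.publisher.Mono;

public class CachedMessage {
    // AtomicReference makes the periodic swap safely visible to reader threads
    private final AtomicReference<Mono<String>> cached =
            new AtomicReference<>(Mono.empty());

    // would carry the @Scheduled annotation in the real class
    void refreshCache() {
        Mono<String> fresh = Mono.just("Hello")
                .map(String::toUpperCase)
                .cache(Duration.ofMinutes(10));
        fresh.subscribe(v -> {}, e -> {}); // eagerly populate; swallow refresh errors
        cached.set(fresh);
    }

    public Mono<String> getMessage() {
        return cached.get();
    }
}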