SSE events, check if there is active session - spring-boot

In a Spring Boot application I have an endpoint which returns a Flux backed by a custom sink, similar to this post. I don't have a Flux with an interval.
The code looks like this. Constructor:
private final MyService service;
private final Flux<MyDto> events;

public MyController(MyService service, EventPublisher publisher) {
    this.service = service;
    // share() multicasts the sink-backed Flux to all subscribers
    this.events = Flux.create(publisher).share();
}
API:
@GetMapping(path = "/path/events-stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<MyDto> fluxEvents() {
    return this.events.map(event -> (MyDto) event.getSource());
}
The event is triggered by an external factor, exactly like the post mentioned.
How can I know if there are active connections on the events-stream? I need this to know whether I should trigger new events to the user.
The question is asked here again with no answer.
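One common way to track this (my sketch, not from the linked thread) is to count subscriptions on the shared Flux yourself: increment a counter when a client subscribes and decrement it on any terminal signal or cancellation. The activeSubscribers field and hasActiveConnections() helper below are hypothetical names:
import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger activeSubscribers = new AtomicInteger();

@GetMapping(path = "/path/events-stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<MyDto> fluxEvents() {
    return this.events
            .map(event -> (MyDto) event.getSource())
            // one increment per connected SSE client
            .doOnSubscribe(subscription -> activeSubscribers.incrementAndGet())
            // doFinally fires on cancel (client disconnect), complete, and error alike
            .doFinally(signal -> activeSubscribers.decrementAndGet());
}

// The publisher side can consult this before emitting new events
public boolean hasActiveConnections() {
    return activeSubscribers.get() > 0;
}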

Related

How to assign values / call a method in the subscribe method in Spring Boot WebClient (non-blocking)

I have a Spring Boot application. Endpoint A calls three different REST endpoints X, Y, Z. All the calls were using RestTemplate. I am trying to change from RestTemplate to WebClient. As part of this I changed endpoint Y from RestTemplate to WebClient.
I had blocking code and it was working as expected. But when I changed it to non-blocking using subscribe, things are not working as expected.
With blocking code:
public class SomeImplClass {
    @Autowired
    private WebClient webClient;

    public someReturnType someMethodName() {
        List myList = new ArrayList<>();
        Mono<SomeResponse> result = this.webClient.post().uri(url).header(…).bodyValue(…).retrieve().bodyToMono(responseType);
        SomeResponse someResponse = result.block(someDuration);
        if (someResponse.getId().equals("000")) {
            myList.addAll(this.somemethod(someResponse));
        } else {
            log.error("some error");
            throw new SomeCustomException("some error");
        }
        return myList;
    }
}
With non-blocking code:
public class SomeImplClass {
    @Autowired
    private WebClient webClient;

    public someReturnType someMethodName() {
        List myList = new ArrayList<>();
        Mono<SomeResponse> result = this.webClient.post().uri(url).header(…).bodyValue(…).retrieve().bodyToMono(responseType);
        result.subscribe(someResponse -> {
            if (someResponse.getId().equals("000")) {
                myList.addAll(this.somemethod(someResponse));
            } else {
                log.error("some error");
                throw new SomeCustomException("some error"); // Not able to throw a custom exception here.
            }
        });
        return myList;
    }
}
I am getting 2 issues:
With the non-blocking code the list which I am returning is empty. I guess return is called before subscribe consumes the data. How do I resolve this? I tried result.doOnSuccess and doOnNext, but neither works. If I add Thread.sleep(5000) before the return, everything works as expected. How can I achieve this without adding Thread.sleep?
I am able to throw RuntimeExceptions alone from subscribe. How do I throw custom exceptions?
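A minimal sketch of one way out (my addition, not from the thread): stay reactive end to end. Instead of subscribing inside the method and returning a half-filled list, return the Mono itself, build the list inside the pipeline, and signal the custom exception through the error channel. SomeItem stands in for whatever somemethod returns:
public Mono<List<SomeItem>> someMethodName() {
    return this.webClient.post().uri(url).retrieve()
            .bodyToMono(SomeResponse.class)
            .flatMap(someResponse -> {
                if ("000".equals(someResponse.getId())) {
                    // build the result inside the pipeline instead of mutating an outer list
                    List<SomeItem> myList = new ArrayList<>(this.somemethod(someResponse));
                    return Mono.just(myList);
                }
                log.error("some error");
                // any exception type can travel to the subscriber as an error signal
                return Mono.error(new SomeCustomException("some error"));
            });
}

The caller (or the framework, if this is returned from a controller) subscribes, so nothing blocks and no Thread.sleep is needed.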

Spring cloud function Function interface return success/failure handling

I currently have a Spring Cloud Stream application with a listener function that mainly listens to a certain topic and executes the following in sequence:
1. Consume messages from a topic
2. Store the consumed message in the DB
3. Call an external service for some information
4. Process the data
5. Record the results in the DB
6. Send the message to another topic
7. Acknowledge the message (I have the acknowledge mode set to manual)
We have decided to move to Spring Cloud Function, and I have already been able to do almost all of the steps above using the Function interface, with the source topic as input and the sink topic as output.
@Bean
public Function<Message<NotificationMessage>, Message<ValidatedEvent>> validatedProducts() {
    return message -> {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        return MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
    };
}
My problem is with exception handling in step 7 (acknowledge the message). We only acknowledge the message if we are sure that it was sent successfully to the sink queue; otherwise we do not acknowledge it.
My question is: how can such a thing be implemented within Spring Cloud Function, especially since the send is performed entirely by the framework (as the result of the Function interface implementation is evaluated)?
Earlier, we could do this through try/catch:
@StreamListener(value = NotificationMessage.INPUT)
public void onMessage(Message<NotificationMessage> message) {
    try {
        Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
        notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
        String status = restEndpoint.getStatusFor(message.getPayload());
        ValidatedEvent event = getProcessingResult(message.getPayload(), status);
        Message<ValidatedEvent> outboundMessage = MessageBuilder
                .withPayload(event)
                .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                .build();
        kafkaTemplate.send(outboundMessage);
        notificationMessageService.saveOrUpdate(notificationMessage, 1, true);
        Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
    } catch (Exception exception) {
        notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
    }
}
Is there a listener that triggers after the Function interface has returned successfully, something like KafkaSendCallback but without specifying a template?
Building upon what Oleg mentioned above: if you want to strictly restore the behavior of your StreamListener code, here is something you can try. Instead of using a Function, you can switch to a Consumer and then use KafkaTemplate to send on the outbound as you did previously.
@Bean
public Consumer<Message<NotificationMessage>> validatedProducts() {
    return message -> {
        try {
            Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);
            notificationMessageService.saveOrUpdate(notificationMessage, 0, false);
            String status = restEndpoint.getStatusFor(message.getPayload());
            ValidatedEvent event = getProcessingResult(message.getPayload(), status);
            Message<ValidatedEvent> outboundMessage = MessageBuilder
                    .withPayload(event)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, event.getKey().getBytes())
                    .build();
            kafkaTemplate.send(outboundMessage); // here, make sure that the data was sent successfully by using some callback
            // only ack if the data was sent successfully
            Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge);
        } catch (Exception exception) {
            notificationMessageService.saveOrUpdate(notificationMessage, 1, false);
        }
    };
}
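As a sketch of the callback the comment above alludes to (my addition, assuming a Spring Kafka 2.x KafkaTemplate, where send() returns a ListenableFuture): acknowledge only in the success branch and record the failure otherwise.
kafkaTemplate.send(outboundMessage).addCallback(
        // ack only once the broker has confirmed the send
        sendResult -> Optional.ofNullable(acknowledgment).ifPresent(Acknowledgment::acknowledge),
        // on failure, record the unprocessed state instead of acknowledging
        throwable -> notificationMessageService.saveOrUpdate(notificationMessage, 1, false));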
Another thing worth looking into is Kafka transactions, in which case if the flow doesn't succeed end-to-end, no acknowledgment happens. The Spring Cloud Stream binder has support for this based on the foundations in Spring for Apache Kafka. More details here. Here is the Spring Cloud Stream doc on this.
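For reference (my addition, not from the answer), the Kafka binder enables transactions once a transaction id prefix is configured, along these lines:
spring.cloud.stream.kafka.binder.transaction.transaction-id-prefix=tx-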
Spring Cloud Stream has no knowledge of functions. It is just the same message handler as it was before, so the same approach with a callback as you used before would work with functions. So perhaps you can share some code that could clarify what you mean? I also don't understand what you mean by "..send method is fully dependent on the Spring Framework..".
Alright, so what I opted for was actually not to use KafkaTemplate (or StreamBridge, for that matter). While it is a feasible solution, it would mean that my Function is going to be split into a Consumer and some sort of an improvised supplier (the KafkaTemplate in this case).
As I wanted to adhere to the design goals of the functional interface, I have isolated the behaviour for the database update in a ProducerListener implementation:
@Configuration
public class ProducerListenerConfiguration {
    private final MongoTemplate mongoTemplate;

    public ProducerListenerConfiguration(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Bean
    public ProducerListener myProducerListener() {
        return new ProducerListener() {
            @SneakyThrows
            @Override
            public void onSuccess(ProducerRecord producerRecord, RecordMetadata recordMetadata) {
                final ValidatedEvent event = new ObjectMapper().readerFor(ValidatedEvent.class).readValue((byte[]) producerRecord.value());
                final var updateResult = updateDocumentProcessedState(event.getKey(), event.getPayload().getVersion(), true);
            }

            @SneakyThrows
            @Override
            public void onError(ProducerRecord producerRecord, @Nullable RecordMetadata recordMetadata, Exception exception) {
                ProducerListener.super.onError(producerRecord, recordMetadata, exception);
            }
        };
    }

    public UpdateResult updateDocumentProcessedState(String id, long version, boolean isProcessed) {
        Query query = new Query();
        query.addCriteria(Criteria.where("_id").is(id));
        Update update = new Update();
        update.set("processed", isProcessed);
        update.set("version", version);
        return mongoTemplate.updateFirst(query, update, ProductChangedEntity.class);
    }
}
Then with each successful attempt, the DB is updated with the processing result and the updated version number.

How to consume events from Kafka by a Spring REST endpoint

I'm new to Kafka. I've seen that the consumer is "always running" and retrieves messages from a topic as soon as they are published.
In a typical database web application you have a REST API that connects to the DB and returns some response.
From what I see, the consumer stays active and never closes, so I can't figure out how to return a subset of messages from a topic based on a client request.
I thought the service would create a consumer to get what I need, but since the consumer never closes, I guess my assumption is not correct.
What should I do?
Then it's a simple question of persisting the messages received through the KafkaListener, let's say by adding each of them to a simple collection (along with its timestamp), and implementing an endpoint to filter the messages accordingly and return some of them.
@Controller
public class KafkaController {
    @Autowired
    private KafkaProducerConfig kafkaProducerConfig;

    // ConcurrentHashMap: the listener and the endpoint run on different threads
    private Map<Date, String> msgMap = new ConcurrentHashMap<>();

    @KafkaListener(topics = "myTopic", groupId = "myGroup")
    public void listenAndAddMsg(String message) {
        msgMap.put(new Date(), message);
    }

    @PostMapping("messages")
    @ResponseBody
    public Map<Date, String> filterMessages(@RequestBody Interval interval) {
        return msgMap.entrySet()
                .stream()
                .filter(entry -> entry.getKey().after(interval.getStartDate()) && entry.getKey().before(interval.getEndDate()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
public class Interval {
    private Date startDate;
    private Date endDate;
    // setters and getters
}

Spring Cloud - HystrixCommand - How to properly enable with shared libraries

Using Spring Boot 1.5.x, Spring Cloud, and JAX-RS:
I could use a second pair of eyes, since it is not clear to me whether the Spring-configured Javanica HystrixCommand works for all use cases, or whether I may have an error in my code. Below is an approximation of what I'm doing; the code below will not actually compile.
In the code below, WebService lives in a library on a separate package path from the main application(s), while MyWebService lives in the application, in the same context path as the Spring Boot application. MyWebService itself is functional, no issues there. This is just about the visibility of the HystrixCommand annotation with respect to Spring Boot based configuration.
At runtime, what I notice is that when code like the one below runs, I do see "commandKey=A" in my response. This one I did not quite expect, since it's still running while the data is obtained. And since we log the HystrixRequestLog, I also see this command key in my logs.
But all the other command keys are not visible at all, regardless of where I place them in the file. If I remove commandKey "A" then no commands are visible whatsoever.
Thoughts?
// Example WebService that we use as a shared component for performing a backend call that is the same across different resources
@RequiredArgsConstructor
@Accessors(fluent = true)
@Setter
public abstract class WebService {
    private final @Nonnull Supplier<X> backendFactory;

    @Setter(AccessLevel.PACKAGE)
    private @Nonnull Supplier<BackendComponent> backendComponentSupplier = () -> new BackendComponent();

    @GET
    @Produces("application/json")
    @HystrixCommand(commandKey = "A")
    public Response mainCall() {
        Object obj = new Object();
        try {
            otherCommandMethod();
        } catch (Exception commandException) {
            // do nothing (for this example)
        }
        // get the hystrix request information so that we can determine what was executed
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = hystrixExecutedCommands();
        // set the hystrix data, viewable in the response
        obj.setData("hystrix", executedCommands.orElse(Collections.emptyList()));
        if (hasError(obj)) {
            return Response.serverError()
                    .entity(obj)
                    .build();
        }
        return Response.ok()
                .entity(obj)
                .build();
    }

    @HystrixCommand(commandKey = "B")
    private void otherCommandMethod() {
        backendComponentSupplier
                .get()
                .observe()
                .toBlocking()
                .subscribe();
    }

    Optional<Collection<HystrixInvokableInfo<?>>> hystrixExecutedCommands() {
        Optional<HystrixRequestLog> hystrixRequest = Optional
                .ofNullable(HystrixRequestLog.getCurrentRequest());
        // get the hystrix executed commands
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = Optional.empty();
        if (hystrixRequest.isPresent()) {
            executedCommands = Optional.of(hystrixRequest.get()
                    .getAllExecutedCommands());
        }
        return executedCommands;
    }

    @Setter
    @RequiredArgsConstructor
    public class BackendComponent implements ObservableCommand<Void> {
        @Override
        @HystrixCommand(commandKey = "Y")
        public Observable<Void> observe() {
            // make some backend call
            return backendFactory.get()
                    .observe();
        }
    }
}
// Then later this component gets configured in the specific applications, with sample configuration that looks like this:
@SuppressWarnings({ "unchecked", "rawtypes" })
@Path("resource/somepath")
@Component
public class MyWebService extends WebService {
    @Inject
    public MyWebService(Supplier<X> backendSupplier) {
        super((Supplier) backendSupplier);
    }
}
There is an issue with mainCall() calling otherCommandMethod(). Methods annotated with @HystrixCommand cannot be called from within the same class.
As discussed in the answers to this question, this is a limitation of Spring's AOP: self-invocation bypasses the proxy.
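A minimal sketch of the usual workaround (my addition; BackendCommands is a hypothetical name): move the annotated method into a separate Spring bean, so the call crosses the proxy boundary.
@Component
public class BackendCommands {
    // public method on a separate bean: the Hystrix/AOP proxy is now in the call path
    @HystrixCommand(commandKey = "B")
    public void otherCommandMethod(Supplier<BackendComponent> backendComponentSupplier) {
        backendComponentSupplier.get().observe().toBlocking().subscribe();
    }
}

WebService would then inject BackendCommands and delegate to it; with the self-invocation gone, commandKey "B" should appear in the HystrixRequestLog alongside "A".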

How to mimic SimpMessagingTemplate.convertAndSendToUser using RabbitTemplate?

So I've been reading about Spring's message relay (Spring Messaging) capability with a RabbitMQ broker. What I want to achieve is as follows:
Have a service (1) which acts as a message relay between RabbitMQ and a browser. This works fine now; I'm using MessageBrokerRegistry.enableStompBrokerRelay to do that.
Have another service (2) on the back-end, which will send a message to a known queue on RabbitMQ and have that message routed to a specific user. As a sender, I want to have control over who the message gets delivered to.
Normally, you'd use SimpMessagingTemplate to do that. The problem though is that the origin of the message doesn't actually have access to that template: it's not acting as a relay, it's not using WebSockets, and it doesn't hold the mapping of queue names to session ids.
One way I could think of doing it is writing a simple class in service (1) which listens on all queues and forwards the messages using the simp template. I feel, however, that this is not an ideal way to do it, and that there might already be a way to do it in Spring.
Can you please advise?
This question got me thinking about the same dilemma I was facing. I have started playing with a custom UserDestinationResolver that arrives at a consistent topic-naming scheme that uses just the username and not the session ID used by the default resolver.
That lets me subscribe in JS to "/user/exchange/amq.direct/current-time" but send via a vanilla RabbitMQ application to "/exchange/amq.direct/users.me.current-time" (to a user named "me").
The latest source code is here, and I am "registering" it as a @Bean in an existing @Configuration class that I had.
Here's the custom UserDestinationResolver itself:
public class ConsistentUserDestinationResolver implements UserDestinationResolver {

    private static final Pattern USER_DEST_PREFIXING_PATTERN =
            Pattern.compile("/user/(?<name>.+?)/(?<routing>.+)/(?<dest>.+?)");
    private static final Pattern USER_AUTHENTICATED_PATTERN =
            Pattern.compile("/user/(?<routing>.*)/(?<dest>.+?)");

    @Override
    public UserDestinationResult resolveDestination(Message<?> message) {
        SimpMessageHeaderAccessor accessor = MessageHeaderAccessor.getAccessor(message, SimpMessageHeaderAccessor.class);
        final String destination = accessor.getDestination();
        final String authUser = accessor.getUser() != null ? accessor.getUser().getName() : null;
        if (destination != null) {
            if (SimpMessageType.SUBSCRIBE.equals(accessor.getMessageType()) ||
                    SimpMessageType.UNSUBSCRIBE.equals(accessor.getMessageType())) {
                if (authUser != null) {
                    final Matcher authMatcher = USER_AUTHENTICATED_PATTERN.matcher(destination);
                    if (authMatcher.matches()) {
                        String result = String.format("/%s/users.%s.%s",
                                authMatcher.group("routing"), authUser, authMatcher.group("dest"));
                        return new UserDestinationResult(destination, Collections.singleton(result), result, authUser);
                    }
                }
            }
            else if (accessor.getMessageType().equals(SimpMessageType.MESSAGE)) {
                final Matcher prefixMatcher = USER_DEST_PREFIXING_PATTERN.matcher(destination);
                if (prefixMatcher.matches()) {
                    String user = prefixMatcher.group("name");
                    String result = String.format("/%s/users.%s.%s",
                            prefixMatcher.group("routing"), user, prefixMatcher.group("dest"));
                    return new UserDestinationResult(destination, Collections.singleton(result), result, user);
                }
            }
        }
        return null;
    }
}
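For completeness, the "@Bean registration" the answer mentions would look roughly like this (a sketch; whether it cleanly replaces the default resolver depends on your Spring version and bean-overriding settings):
@Configuration
public class UserDestinationConfig {
    @Bean
    public UserDestinationResolver userDestinationResolver() {
        return new ConsistentUserDestinationResolver();
    }
}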
