Spring-boot metrics count and gauge every requestURI for RESTful service - jersey

I'm using starter-jersey and starter-actuator. If I have an endpoint as below,
@Component
@Path("/greeting")
public class GreetingEndpoint {

    @GET
    @Path("{id}/{message}")
    @Produces(MediaType.APPLICATION_JSON)
    public Greeting sayHello(@PathParam("id") Long id, @PathParam("message") String message) {
        return new Greeting(id, message);
    }
}
I end up with counters/gauges for every request URI, as shown below. Will this cause a memory blowup?
counter.status.200.greeting.1000.look: 1
counter.status.200.greeting.1001.watch-out: 1
counter.status.200.metrics: 1
gauge.response.greeting.1000.look: 109
gauge.response.greeting.1001.watch-out: 6
gauge.response.metrics: 32

To answer the question asked: yes, it could blow up if you have an essentially unlimited number of id/message combinations. All of that information is stored in memory, and unless you control the client calling the endpoint, unbounded combinations are certainly possible. It might take a long time, but nothing reaps the metrics repository, so the entries live for the life of the app.
There may be a workaround for you (I can't explain why this works, however): use @RestController + @RequestMapping + @PathVariable. In my experience that creates a single entry, counter.status.200.greeting.id,message. If you just want to get rid of the counters/gauges for HTTP requests but keep all the other auto-configured features, you can include this:
@EnableAutoConfiguration(exclude={MetricFilterAutoConfiguration.class})
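Going back to the @RestController + @PathVariable workaround mentioned above, here is a minimal sketch of what that variant could look like (the class name and mapping are my own, not from the question; Greeting is the type you already have):
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // The metric key is derived from the URI template, so all requests share a
    // single counter/gauge entry instead of one entry per concrete id/message pair.
    @RequestMapping(value = "/greeting/{id}/{message}", method = RequestMethod.GET,
            produces = MediaType.APPLICATION_JSON_VALUE)
    public Greeting sayHello(@PathVariable("id") Long id, @PathVariable("message") String message) {
        return new Greeting(id, message);
    }
}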
Hope this helps.

Related

Start processing Flux response from server before completion: is it possible?

I have two Spring Boot reactive apps, one server and one client; the client calls the server like so:
Flux<Thing> things = thingsApi.listThings(5);
And I want to have this as a list for later use:
// "extractContent" operation takes 1.5s per "thing"
List<String> thingsContent = things.map(ThingConverter::extractContent)
.collect(Collectors.toList())
.block()
On the server side, the endpoint definition looks like this:
@Override
public Mono<ResponseEntity<Flux<Thing>>> listThings(
        @NotNull @Valid @RequestParam(value = "nbThings") Integer nbThings,
        ServerWebExchange exchange
) {
    // the "getThings" operation takes 1.5 s per "thing"
    Flux<Thing> things = thingsService.getThings(nbThings);
    return Mono.just(new ResponseEntity<>(things, HttpStatus.OK));
}
The signature comes from the OpenAPI-generated code (Spring Boot server, reactive mode).
What I observe: the client jumps to things.map immediately but only starts processing the Flux after the server has finished sending all the "things".
What I would like: the server should send the "things" as they are generated so that the client can start processing them as they arrive, effectively halving the processing time.
Is there a way to achieve this? I've found many tutorials online for the server part, but none with a java client. I've heard of server-sent events, but can my goal be achieved using a "classic" Open-API endpoint definition that returns a Flux?
The problem seemed too complex to fit a minimal viable example in the question body; full code available for reference on Github.
EDIT: the link now redirects to the main branch after the merge of the proposed solution.
I got it running by changing two things.
First: I changed the content type of the response of your /things endpoint to:
content:
  text/event-stream
Don't forget to also change the default response; otherwise the client will expect application/json and will wait for the whole response.
Second: I changed the return of ThingsService.getThings to this.getThingsFromExistingStream (the method you commented out).
I pushed my changes to a new branch fix-flux-response on your GitHub, so you can test them directly.
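To illustrate the effect of those two changes outside the generated code, here is a rough hand-written equivalent (Thing, ThingsService and ThingConverter are the types from the question; the class names and URLs are assumptions for the sketch):
import java.util.List;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

@RestController
class ThingsStreamController {

    private final ThingsService thingsService;

    ThingsStreamController(ThingsService thingsService) {
        this.thingsService = thingsService;
    }

    // text/event-stream lets the server flush each Thing as soon as it is produced
    @GetMapping(value = "/things", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    Flux<Thing> listThings(@RequestParam("nbThings") Integer nbThings) {
        return thingsService.getThings(nbThings);
    }
}

class ThingsStreamClient {

    List<String> fetchContent() {
        // the client subscribes to the stream, so extractContent runs per element
        // as it arrives instead of waiting for the complete response body
        return WebClient.create("http://localhost:8080")
                .get()
                .uri("/things?nbThings=5")
                .accept(MediaType.TEXT_EVENT_STREAM)
                .retrieve()
                .bodyToFlux(Thing.class)
                .map(ThingConverter::extractContent)
                .collectList()
                .block();
    }
}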

Spring Boot Webflux/Netty - Detect closed connection

I've been working with spring-boot 2.0.0.RC1 using the webflux starter (spring-boot-starter-webflux). I created a simple controller that returns an infinite Flux. I would like the Publisher to do its work only if there is a client (Subscriber). Let's say I have a controller like this one:
@RestController
public class Demo {

    @GetMapping(value = "/")
    public Flux<String> getEvents() {
        return Flux.create((FluxSink<String> sink) -> {
            while (!sink.isCancelled()) {
                // TODO e.g. fetch data from somewhere
                sink.next("DATA");
            }
            sink.complete();
        }).doFinally(signal -> System.out.println("END"));
    }
}
Now, when I try to run that code and access the endpoint http://localhost:8080/ with Chrome, then I can see the data. However, once I close the browser the while-loop continues since no cancel event has been fired. How can I terminate/cancel the streaming as soon as I close the browser?
From this answer I quote that:
Currently with HTTP, the exact backpressure information is not
transmitted over the network, since the HTTP protocol doesn't support
this. This can change if we use a different wire protocol.
I assume that, since backpressure is not supported by the HTTP protocol, it means that no cancel request will be made either.
Investigating a little bit further, by analyzing the network traffic, showed that the browser sends a TCP FIN as soon as I close the browser. Is there a way to configure Netty (or something else) so that a half-closed connection will trigger a cancel event on the publisher, making the while-loop stop?
Or do I have to write my own adapter similar to org.springframework.http.server.reactive.ServletHttpHandlerAdapter where I implement my own Subscriber?
Thanks for any help.
EDIT:
An IOException will be raised on the attempt to write data to the socket if there is no client, as you can see in the stack trace.
But that's not good enough, since it might take a while before the next chunk of data is ready to send, so it can take just as long to detect that the client is gone. As pointed out in Brian Clozel's answer, it is a known issue in Reactor Netty. I tried to use Tomcat instead by adding the dependency to the POM.xml, like this:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
Although this replaces Netty with Tomcat, it does not seem reactive: the browser does not show any data, yet there is no warning/info/exception in the console. Is spring-boot-starter-webflux in this version (2.0.0.RC1) supposed to work together with Tomcat?
Since this is a known issue (see Brian Clozel's answer), I ended up using one Flux to fetch my real data and having another one in order to implement some sort of ping/heartbeat mechanism. As a result, I merge both together with Flux.merge().
Here you can see a simplified version of my solution:
@RestController
public class Demo {

    public interface Notification {}

    public static class MyData implements Notification {
        …
        public boolean isEmpty() {…}
    }

    @GetMapping(value = "/", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent<? extends Notification>> getNotificationStream() {
        return Flux.merge(getEventMessageStream(), getHeartbeatStream());
    }

    private Flux<ServerSentEvent<Notification>> getHeartbeatStream() {
        return Flux.interval(Duration.ofSeconds(2))
                .map(i -> ServerSentEvent.<Notification>builder().event("ping").build())
                .doFinally(signalType -> System.out.println("END"));
    }

    private Flux<ServerSentEvent<MyData>> getEventMessageStream() {
        return Flux.interval(Duration.ofSeconds(30))
                .map(i -> {
                    // TODO e.g. fetch data from somewhere,
                    // if there is no data return an empty object
                    return data;
                })
                .filter(data -> !data.isEmpty())
                .map(data -> ServerSentEvent
                        .builder(data)
                        .event("message").build());
    }
}
I wrap everything up as ServerSentEvent<? extends Notification>. Notification is just a marker interface. I use the event field of the ServerSentEvent class to distinguish between data and ping events. Since the heartbeat Flux sends events constantly and at short intervals, the time it takes to detect that the client is gone is at most the length of that interval. Remember, I need this because it might take a while before I have real data to send, and consequently it might also take a while to detect that the client is gone. With the heartbeat, the gone client is detected as soon as the ping (or possibly the message event) can no longer be sent.
One last note on the marker interface, which I called Notification: it is not strictly necessary, but it gives some type safety. Without it, we could write Flux<ServerSentEvent<?>> instead of Flux<ServerSentEvent<? extends Notification>> as the return type of getNotificationStream(), or make getHeartbeatStream() return Flux<ServerSentEvent<MyData>>. However, that would allow any object to be sent, which I don't want, so I added the interface.
I'm not sure why this behaves like this, but I suspect it is because of the choice of generation operator. I think using the following would work:
return Flux.interval(Duration.ofMillis(500))
        .map(input -> "DATA");
According to Reactor's reference documentation, you're probably hitting the key difference between generate and push (I believe a quite similar approach using generate would probably work as well).
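For completeness, a generate-based variant of the original getEvents() method might look roughly like this (my sketch, not taken from the question); generate emits one element per downstream request, so a cancellation stops production instead of spinning in an unbounded while-loop:
@GetMapping(value = "/")
public Flux<String> getEvents() {
    return Flux.<String>generate(sink -> {
        // e.g. fetch the next chunk of data from somewhere; emit exactly one element per invocation
        sink.next("DATA");
    }).doFinally(signal -> System.out.println("END"));
}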
My comment was referring to the backpressure information (how many elements a Subscriber is willing to accept), but the success/error information is communicated over the network.
Depending on your choice of web server (Reactor Netty, Tomcat, Jetty, etc), closing the client connection might result in:
a cancel signal being received on the server side (I think this is supported by Netty)
an error signal being received by the server when it's trying to write to a connection that's been closed (I believe the Servlet spec does not provide that callback, so we're missing the cancel information).
In short: you don't need to do anything special, it should be supported already, but your Flux implementation might be the actual problem here.
Update: this is a known issue in Reactor Netty

How can I improve the performance of the JbossFuse (v6.3) DSL route code?

APPLICATION INFO:
Code below: it reads from an IBM MQ queue and then posts the message to a REST service
(note: reading from the MQ queue is fast and not an issue; rather, it is the post operation performance I am having trouble improving)...
PROBLEM:
Unable to output/post more than 44-47 messages per second...
QUESTION:
How can I improve the performance of the JbossFuse (v6.3) DSL route code below?... (What techniques are available that would make it faster?)
package aaa.bbb.ccc;

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;

@ContextName("rest-dsl")
public class Netty4HttpSlowRoutes extends RouteBuilder {

    public Netty4HttpSlowRoutes() {
    }

    private final org.apache.camel.Processor proc1 = new Processor1();

    @Override
    public void configure() throws Exception {
        org.apache.log4j.MDC.put("app.name", "netty4HttpSlow");
        System.getProperties().list(System.out);

        errorHandler(defaultErrorHandler().maximumRedeliveries(3).log("***FAILED_MESSAGE***"));

        from("wmq:queue:mylocalqueue")
            .log("inMessage=" + (null == body() ? "" : body().toString()))
            .to("seda:node1?concurrentConsumers=20");

        from("seda:node1")
            .streamCaching()
            .threads(20)
            .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http4.HttpMethods.POST))
            .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
            .toD("netty4-http:http://localhost:7001/MyService/myServiceThing?textline\\=true");
    }
}
Just a couple of thoughts. First things first: did you measure the slowness? How much time do you spend in Camel vs. how much time do you spend sending the HTTP request?
If the REST service is slow, there's nothing you can do in Camel. Depending on what the service does, you could try reducing the number of threads.
Try disabling streamCaching, since it looks like you're not using it.
Then use to instead of toD to invoke the service; I see that the URL is always the same. In the docs for toD I read:
By default the Simple language is used to compute the endpoint.
There may be a little overhead while parsing the URI string each time you invoke the route.
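Putting those suggestions together, the second route could look roughly like this (a sketch, assuming the target URL really is always the same):
from("seda:node1")
    .threads(20)
    .setHeader(Exchange.HTTP_METHOD, constant(org.apache.camel.component.http4.HttpMethods.POST))
    .setHeader(Exchange.CONTENT_TYPE, constant("application/json"))
    // "to" resolves the endpoint once at route startup; no streamCaching, no per-exchange URI parsing
    .to("netty4-http:http://localhost:7001/MyService/myServiceThing?textline=true");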

Preserving TestSecurityContextHolder during pure binary WebSocket connection in Spring Boot test

I have a Spring Boot (1.5.2.RELEASE) app that uses binary WebSocket (i.e. no STOMP or AMQP, just a pure binary buffer). In my test I am able to send messages back and forth, which works just great.
However, I am experiencing the following unexplained behaviour related to the TestSecurityContextHolder during the WebSocket calls to the application.
The TestSecurityContextHolder has a context that is being set correctly, i.e. my custom @WithMockCustomUser is setting it, and I can validate that by putting a breakpoint at the beginning of the test, i.e.
public class WithMockCustomUserSecurityContextFactory implements WithSecurityContextFactory<WithMockCustomUser>,
That works great, and I am able to test server-side methods that use method-level security, such as
@PreAuthorize("hasRole('ROLE_USER') or hasRole('ROLE_ADMIN')")
public UserInterface get(String userName) {
    …
}
The problem I started experiencing is when I want to do a full integration test of the app, i.e. within the test I create my own WebSocket connection to the app, using only plain Java annotations (no Spring annotations in the client).
ClientWebsocketEndpoint clientWebsocketEndpoint = new ClientWebsocketEndpoint(uri);
@ClientEndpoint
public class ClientWebsocketEndpoint {

    private javax.websocket.Session session = null;
    private ClientBinaryMessageHandler binaryMessageHandler;
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    public ClientWebsocketEndpoint(URI endpointURI) {
        try {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            container.connectToServer(this, endpointURI);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
    ….
}
If I try calling the WebSocket, I first see that the SecurityContextPersistenceFilter removes the current SecurityContext, which is fully expected. I actually want it to get removed, since I want to test authentication anyway; authentication is part of the WebSocket communication and not part of the HTTP call in my case. But what bothers me is the following.
So far there has been only one HTTP call (Wireshark proves that), and the SecurityContextPersistenceFilter has cleared the session only once; by setting a breakpoint on the clear method I see that it has indeed been called only once. After 6 binary messages have been exchanged between the client and the server (the SecurityContext is set in the 5th message received from the client), I authenticate with a custom token and write that token to both the TestSecurityContextHolder and the SecurityContextHolder, i.e.
SecurityContext realContext = SecurityContextHolder.getContext();
SecurityContext testContext = TestSecurityContextHolder.getContext();
token.setAuthenticated(true);
realContext.setAuthentication(token);
testContext.setAuthentication(token);
I see that the hashCode of that token is the same in both ContextHolders, which means it is the same object. However, the next time I receive a ByteBuffer from the client, the result of SecurityContextHolder.getContext().getAuthentication() is null. I first thought that this is related to the SecurityContextChannelInterceptor, since I read a good article about WebSockets and Spring (here), but this does not seem to be the case. The SecurityContextChannelInterceptor is not executed or called anywhere; at least, when I put breakpoints there, the IDE does not stop. Please note that I am deliberately not extending AbstractWebSocketMessageBrokerConfigurer here, since I do not need it: this is plain binary WebSocket with no STOMP, AMQP, etc. (no known messaging protocol). However, I do see another class, WithSecurityContextTestExecutionListener, clearing the context:
TestSecurityContextHolder.clearContext() line: 67
WithSecurityContextTestExecutionListener.afterTestMethod(TestContext) line: 143
TestContextManager.afterTestMethod(Object, Method, Throwable) line: 319
RunAfterTestMethodCallbacks.evaluate() line: 94
but only when the test finishes, i.e. well after the SecurityContext has already become null, although it was manually set with the custom token before. It seems that something like a filter (but for WebSockets, not HTTP) is clearing the SecurityContext on each WsFrame received, and I have no idea what that is. Also possibly relevant: on the server side, in the stack trace I can see that StandardWebSocketHandlerAdapter is called, which creates the StandardWebSocketSession.
StandardWebSocketHandlerAdapter$4.onMessage(Object) line: 84
WsFrameServer(WsFrameBase).sendMessageBinary(ByteBuffer, boolean) line: 592
In the StandardWebSocketSession I see that there is a field "Principal user". Who is supposed to set that principal? I do not see any setter there; the only way to set it is during the AbstractStandardUpgradeStrategy, i.e. in the first call. But then what do you do once the session is established? RFC 6455 states, in
10.5. WebSocket Client Authentication
This protocol doesn't prescribe any particular way that servers can
authenticate clients during the WebSocket handshake. The WebSocket
server can use any client authentication mechanism available
For me that means that I SHOULD be able to define the user Principal at a later stage, whenever I want.
Here is how the test is run:
@RunWith(SpringRunner.class)
@TestExecutionListeners(listeners = { // ServletTestExecutionListener.class,
        DependencyInjectionTestExecutionListener.class,
        TransactionalTestExecutionListener.class,
        WithSecurityContextTestExecutionListener.class
})
@SpringBootTest(classes = {
        SecurityWebApplicationInitializerDevelopment.class,
        SecurityConfigDevelopment.class,
        TomcatEmbededDevelopmentProfile.class,
        Internationalization.class,
        MVCConfigDevelopment.class,
        PersistenceConfigDevelopment.class
})
@WebAppConfiguration
@ActiveProfiles(SConfigurationProfiles.DEVELOPMENT_PROFILE)
@ComponentScan({
        "org.Server.*",
        "org.Server.config.*",
        "org.Server.config.persistence.*",
        "org.Server.core.*",
        "org.Server.logic.**",
})
@WithMockCustomUser
public class workingWebSocketButNonWorkingAuthentication {
....
Here is the @Before part:
@Before
public void setup() {
    System.out.println("Starting Setup");
    mvc = MockMvcBuilders
            .webAppContextSetup(webApplicationContext)
            .apply(springSecurity())
            .build();
    mockHttpSession = new MockHttpSession(webApplicationContext.getServletContext(), UUID.randomUUID().toString());
}
To summarize, my question is: what could be causing the behaviour where the SecurityContext returned from both the TestSecurityContextHolder and the SecurityContextHolder is null after another ByteBuffer (WsFrame) is received from the client?
Added 31 May:
I found by coincidence, when running the test multiple times, that sometimes the context is not null and the test passes, i.e. sometimes the context is indeed filled with the token I supplied. I guess this has something to do with the fact that the Spring Security Authentication is bound to a ThreadLocal; it will need further digging.
Added 6 June 2017:
I can now confirm that the problem is in the threads: the authentication is successful, but when execution jumps from http-nio-8081-exec-4 to http-nio-8081-exec-5 the SecurityContext is lost, and that is with the SecurityContextHolder strategy set to MODE_INHERITABLETHREADLOCAL. Any suggestions are greatly appreciated.
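For reference, switching the strategy is typically done with a call like the following (a sketch; where exactly it runs in the test setup is not shown in the question):
// child threads spawned by an already-populated thread inherit the context;
// threads taken from a pre-existing pool do not
SecurityContextHolder.setStrategyName(SecurityContextHolder.MODE_INHERITABLETHREADLOCAL);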
Added 07 June 2017
Even if I add the SecurityContextPropagationChannelInterceptor, it does not propagate the SecurityContext in the case of the plain WebSocket:
@Bean
@GlobalChannelInterceptor(patterns = {"*"})
public ChannelInterceptor securityContextPropagationInterceptor() {
    return new SecurityContextPropagationChannelInterceptor();
}
Added 12 June 2017
I did the test with the @Async annotation, i.e. the one found here: spring-security-async-principal-propagation. That shows that the SecurityContext is transferred correctly between methods executed in different threads within Spring, but for some reason the same thing does not work for the Tomcat threads, i.e. http-nio-8081-exec-4, http-nio-8081-exec-5, http-nio-8081-exec-6, http-nio-8081-exec-7, etc. I have the feeling this has something to do with the executor, but so far I do not know how to change that.
Added 13 June 2017
I have found, by printing the current threads and the SecurityContext, that the very first thread, i.e. http-nio-8081-exec-1, does have the security context populated as expected per MODE_INHERITABLETHREADLOCAL; however, all further threads, i.e. http-nio-8081-exec-2, http-nio-8081-exec-3, do not. Now the question is: is that expected? I have found here (working with threads in Spring) the statement that
you cannot share security context among sibling threads (e.g. in a thread pool). This method only works for child threads that are spawned by a thread that already contains a populated SecurityContext.
which basically explains it. However, since in Java there is no way to find out the parent of a thread, I guess the question is: who is creating the thread http-nio-8081-exec-2? Is it the dispatcher servlet, or is it Tomcat somehow magically deciding to create a new thread? I am asking because I see that parts of the code are sometimes executed in the same thread and sometimes in different ones, depending on the run.
Added 14 June 2017
Since I do not want to put everything in one place, I have created a separate question that deals with how to propagate the security context to all sibling threads created by Tomcat in a Spring Boot app, found here.
I'm not 100% sure I understand the problem, but it's unlikely that the Java dispatcher servlet will create a new thread without being told to. I think Tomcat handles each request in a different thread, so that might be why the threads are being created. You can check this and this out. Best of luck!

Heavy REST Application

I have an Enterprise Service Bus (ESB) that posts data to microservices (MCS) via REST; I use Spring to do this. The main problem is that I have 6 microservices that run one after another, so it looks like this: MCS1 -> ESB -> MCS2 -> ESB -> ... -> MCS6
So my problem looks like this (ESB):
#RequestMapping(value = "/rawdataservice/container", method = RequestMethod.POST)
#Produces(MediaType.APPLICATION_JSON)
public void rawContainer(#RequestBody Container c)
{
// Here i want to do something to directly send a response and afterwards execute the
// heavy code
// In the heavy code is a postForObject to the next Microservice
}
And the Service does something like this:
#RequestMapping(value = "/container", method = RequestMethod.POST)
public void addDomain(#RequestBody Container container)
{
heavyCode();
RestTemplate rt = new RestTemplate();
rt.postForObject("http://134.61.64.201:8080/rest/rawdataservice/container",container, Container.class);
}
But I don't know how to do this. I looked up the postForLocation method, but I don't think it would solve the problem.
EDIT:
I have a chain of microservices. The first microservice waits for a response from the ESB. While producing that response, the ESB posts to the next microservice and waits for its response, and that one does the same as the first. So the problem is that the first microservice is blocked until the complete microservice route has finished.
ESB Route
Maybe a picture could help. 1.rawdataService 2.metadataservice 3.syntaxservice 4.semantik
// Here I want to do something to directly send a response and afterwards execute the
// heavy code
The usual spelling of that is to use the data from the HTTP request to create a Runnable that knows how to do the work, and dispatch that Runnable to an executor service for later processing. Much the same way, you can copy the data you need into a queue, which is polled by other threads ready to complete the work.
The HTTP request handler then returns as soon as the executor service/queue has accepted the pending work. The most common implementation is to return a "202 Accepted" response, including in the Location header the URL of a resource that will allow the client to monitor the work in progress, if desired.
In Spring, it might be ResponseEntity that manages the codes for you. For instance
ResponseEntity.accepted()....
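A rough sketch of how that could look in the ESB endpoint (the executor, the job id, the status URI and the heavyCode wiring are illustrative assumptions, not the asker's actual code; Container is the type from the question):
import java.net.URI;
import java.util.UUID;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RawDataServiceController {

    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    @RequestMapping(value = "/rawdataservice/container", method = RequestMethod.POST)
    public ResponseEntity<Void> rawContainer(@RequestBody Container c) {
        String jobId = UUID.randomUUID().toString();

        // hand the heavy work (including the postForObject to the next microservice)
        // off to another thread so this handler can return immediately
        executor.submit(() -> heavyCode(c));

        // 202 Accepted plus a Location the client could poll for progress
        return ResponseEntity.accepted()
                .location(URI.create("/rawdataservice/container/status/" + jobId))
                .build();
    }

    private void heavyCode(Container c) {
        // placeholder for the actual heavy processing and the call to the next microservice
    }
}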
See also:
How to respond with HTTP 400 error in a Spring MVC #ResponseBody method returning String?
REST - Returning Created Object with Spring MVC
From the caller's point of view, it would invoke RestTemplate.postForLocation, receive a URI, and throw away that URI, because the microservice only needs to know that the work has been accepted.
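For example (a sketch using the URL from the question; whether the caller keeps the returned URI is up to it):
URI location = new RestTemplate().postForLocation(
        "http://134.61.64.201:8080/rest/rawdataservice/container", container);
// the Location URI could be polled for progress, or simply ignored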
Side note: in the long term, you are probably going to want to be able to correlate the activities of the different micro services, especially when you are troubleshooting. So make sure you understand what Gregor Hohpe has to say about correlation identifiers.
