I have a Feign client like this, with endpoints for two APIs from PROJECT-SERVICE:
@FeignClient(name = "PROJECT-SERVICE", fallbackFactory = ProjectServiceFallbackFactory.class)
public interface ProjectServiceClient {

    @GetMapping("/api/projects/{projectKey}")
    public ResponseEntity<Project> getProjectDetails(@PathVariable("projectKey") String projectKey);

    @PostMapping("/api/projects")
    public ResponseEntity<Project> createProject(@RequestBody Project project);
}
I'm using those clients like this:
@Service
public class MyService {

    @Autowired
    private ProjectServiceClient projectServiceClient;

    public void doSomething() {
        // Some code
        ResponseEntity<Project> projectResponse = projectServiceClient.getProjectDetails(projectKey);
        // Some more code
    }

    public void doSomethingElse() {
        // Some code
        ResponseEntity<Project> projectResponse = projectServiceClient.createProject(projectToBeCreated);
        // Some more code
    }
}
My problem is that, most of the time (around 60% of the time), one of these Feign calls results in a HystrixTimeoutException.
I initially thought there could be a problem in the downstream microservice (PROJECT-SERVICE in this case), but that is not the case. In fact, when getProjectDetails() or createProject() is called, PROJECT-SERVICE actually does the work and returns a ResponseEntity<Project> with status 200 or 201 respectively, yet my fallback is activated with a HystrixTimeoutException.
I'm trying in vain to find what might be causing this issue.
I do, however, have this in my main application configuration:
feign.hystrix.enabled=true
feign.client.config.default.connect-timeout=5000
feign.client.config.default.read-timeout=60000
Can anyone point me towards a solution?
Thanks,
Sriram Sridharan
Hystrix's timeout is not tied to Feign's. Hystrix has a default execution timeout of 1 second. You need to configure the Hystrix timeout to be slightly longer than Feign's, so that the HystrixTimeoutException is not thrown before Feign's own timeout has had a chance to fire. Like so:
feign.client.config.default.connect-timeout=5000
feign.client.config.default.read-timeout=5000
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=6000
Doing so allows the FeignException caused by the timeout after 5 seconds to be thrown first, and only then wrapped in a HystrixTimeoutException.
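If it helps with debugging, the fallback factory itself can expose the cause that tripped the fallback, which makes it easy to confirm that it really was the Hystrix timer firing rather than a downstream failure. A minimal sketch (the class name matches the one referenced in the question; rethrowing is just one possible choice):

import feign.hystrix.FallbackFactory;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;

@Component
public class ProjectServiceFallbackFactory implements FallbackFactory<ProjectServiceClient> {

    @Override
    public ProjectServiceClient create(Throwable cause) {
        // 'cause' is the exception that triggered the fallback, e.g. a HystrixTimeoutException
        return new ProjectServiceClient() {
            @Override
            public ResponseEntity<Project> getProjectDetails(String projectKey) {
                throw new IllegalStateException("getProjectDetails fallback: " + cause.getMessage(), cause);
            }

            @Override
            public ResponseEntity<Project> createProject(Project project) {
                throw new IllegalStateException("createProject fallback: " + cause.getMessage(), cause);
            }
        };
    }
}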
I want to use a MockServerContainer to mock a webhook site.
My code makes requests to that webhook while it runs. In the test, I expect 3 requests with different bodies. The test I want to write runs the code and then checks that the MockServerContainer received those 3 requests with the expected bodies.
I need a method from Testcontainers that retrieves all the requests recorded by the MockServer. Is there one?
In the end, it was not so hard to find the solution, but maybe I can save someone half an hour.
My main error was focusing on the MockServerContainer instead of the client. The client has a really nice method (retrieveRecordedRequests) that does exactly what I need.
In the end I do need to keep a reference to the client:
public class NotificationContainer {

    public static final MockServerContainer server =
            new MockServerContainer(DockerImageName.parse("jamesdbloom/mockserver")
                    .withTag("mockserver-5.5.4"));

    public static MockServerClient client;

    public static String getUrl() {
        return server.getEndpoint();
    }

    public void callAfterStartSever() {
        ContainerUtils.waitFor(server, Duration.ofSeconds(10));
        NotificationContainer.client = new MockServerClient(server.getHost(), server.getServerPort());
        client.when(HttpRequest.request())
                .respond(HttpResponse.response());
    }
}
And then use it in the test to get the recorded requests:
@Test
void checkWebhook() {
    // Do the call that triggers the webhooks
    val responses = NotificationContainer.client.retrieveRecordedRequests(null);
    // Validate that the requests are the expected ones
}
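For completeness, the body checks might look something like the sketch below; the expected payloads and the use of JUnit 5 assertions are assumptions, and the request matcher simply narrows the recorded traffic to POSTs:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;
import org.mockserver.model.HttpRequest;

@Test
void checkWebhookBodies() {
    // Run the code that triggers the three webhook calls

    HttpRequest[] recorded = NotificationContainer.client
            .retrieveRecordedRequests(HttpRequest.request().withMethod("POST"));

    assertEquals(3, recorded.length);
    // Compare each recorded body with the payload you expect (the value below is a placeholder)
    assertEquals("{\"event\":\"first\"}", recorded[0].getBodyAsString());
}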
I've just started with Vert.x and would like to understand the right way of handling potentially long (blocking) operations while processing a REST HTTP request. The application itself is a Spring app.
Here is a simplified REST service I have so far:
public class MainApp {

    // instantiated by Spring
    private AlertsRestService alertsRestService;

    @PostConstruct
    public void init() {
        Vertx.vertx().deployVerticle(alertsRestService);
    }
}
public class AlertsRestService extends AbstractVerticle {

    // instantiated by Spring
    private PostgresService pgService;

    @Value("${rest.endpoint.port:8080}")
    private int restEndpointPort;

    @Override
    public void start(Future<Void> futureStartResult) {
        HttpServer server = vertx.createHttpServer();
        Router router = Router.router(vertx);

        // enable reading of the request body for all routes
        router.route().handler(BodyHandler.create());
        router.route(HttpMethod.GET, "/allDefinitions")
                .handler(this::handleGetAllDefinitions);

        server.requestHandler(router)
                .listen(restEndpointPort,
                        result -> {
                            if (result.succeeded()) {
                                futureStartResult.complete();
                            } else {
                                futureStartResult.fail(result.cause());
                            }
                        }
                );
    }

    private void handleGetAllDefinitions(RoutingContext routingContext) {
        HttpServerResponse response = routingContext.response();
        Collection<AlertDefinition> allDefinitions = null;
        try {
            allDefinitions = pgService.getAllDefinitions();
        } catch (Exception e) {
            response.setStatusCode(500).end(e.getMessage());
            return; // don't attempt to write the response twice
        }
        response.putHeader("content-type", "application/json")
                .setStatusCode(200)
                .end(Json.encodePrettily(allDefinitions));
    }
}
Spring config:
<bean id="alertsRestService" class="com.my.AlertsRestService"
p:pgService-ref="postgresService"
p:restEndpointPort="${rest.endpoint.port}"
/>
<bean id="mainApp" class="com.my.MainApp"
p:alertsRestService-ref="alertsRestService"
/>
Now the question is: how do I properly handle the (blocking) call to my postgresService, which may take a long time if there are many items to fetch and return?
After researching and looking at some examples, I see a few ways to do it, but I don't fully understand differences between them:
Option 1. convert my AlertsRestService into a Worker Verticle and use the worker thread pool:
public class MainApp {

    private AlertsRestService alertsRestService;

    @PostConstruct
    public void init() {
        DeploymentOptions options = new DeploymentOptions().setWorker(true);
        Vertx.vertx().deployVerticle(alertsRestService, options);
    }
}
What confuses me here is this statement from the Vert.x docs: "Worker verticle instances are never executed concurrently by Vert.x by more than one thread, but can [be] executed by different threads at different times"
Does this mean that all HTTP requests to my alertsRestService are effectively going to be throttled and executed sequentially, one thread at a time? That's not what I would like: this service is purely stateless and should be able to handle concurrent requests just fine.
So, maybe I need to look at the next option:
Option 2. convert my service to be a multi-threaded Worker Verticle, by doing something similar to the example in the docs:
public class MainApp {

    private AlertsRestService alertsRestService;

    @PostConstruct
    public void init() {
        DeploymentOptions options = new DeploymentOptions()
                .setWorker(true)
                .setInstances(5) // matches the worker pool size below
                .setWorkerPoolName("the-specific-pool")
                .setWorkerPoolSize(5);
        Vertx.vertx().deployVerticle(alertsRestService, options);
    }
}
So, in this example, what exactly will happen? As I understand it, the .setInstances(5) directive means that 5 instances of my alertsRestService will be created. I configured this service as a Spring bean, with its dependencies wired in by the Spring framework. However, in this case it seems that the 5 instances would not be created by Spring, but rather by Vert.x. Is that true? And how could I change that to use Spring instead?
Option 3. use the 'blockingHandler' for routing. The only change in the code would be in the AlertsRestService.start() method in how I define a handler for the router:
boolean ordered = false;
router.route(HttpMethod.GET, "/allDefinitions")
.blockingHandler(this::handleGetAllDefinitions, ordered);
As I understand, setting the 'ordered' parameter to FALSE means that the handler can be called concurrently. Does that make this option equivalent to Option #2 with multi-threaded Worker Verticles?
What is the difference? Is it that the blocking, multi-threaded execution applies only to the one specific route (the /allDefinitions path) rather than to the whole AlertsRestService verticle?
Option 4. The last option I found is to use executeBlocking() explicitly, to run only the enclosed code on worker threads. I could not find many examples of how to do this with HTTP request handling, so below is my attempt (possibly incorrect). The difference here is only in the implementation of the handler method, handleGetAllAlertDefinitions(), but it is rather involved:
private void handleGetAllAlertDefinitions(RoutingContext routingContext) {
    vertx.executeBlocking(
            fut -> fut.complete(sendAsyncRequestToDB(routingContext)),
            false,
            res -> handleAsyncResponse(res, routingContext)
    );
}

public Collection<AlertDefinition> sendAsyncRequestToDB(RoutingContext routingContext) {
    Collection<AlertDefinition> allAlertDefinitions = new LinkedList<>();
    try {
        allAlertDefinitions = alertDefinitionsDao.getAllAlertDefinitions();
    } catch (Exception e) {
        routingContext.response().setStatusCode(500)
                .end(e.getMessage());
    }
    return allAlertDefinitions;
}

private void handleAsyncResponse(AsyncResult<Object> asyncResult, RoutingContext routingContext) {
    if (asyncResult.succeeded()) {
        try {
            routingContext.response().putHeader("content-type", "application/json")
                    .setStatusCode(200)
                    .end(Json.encodePrettily(asyncResult.result()));
        } catch (EncodeException e) {
            routingContext.response().setStatusCode(500)
                    .end(e.getMessage());
        }
    } else {
        routingContext.response().setStatusCode(500)
                .end(asyncResult.cause().getMessage());
    }
}
How is this different from the other options? And does Option 4 provide concurrent execution of the handler, or single-threaded execution like Option 1?
Finally, coming back to the original question: what is the most appropriate Option for handling longer-running operations when handling REST requests?
Sorry for such a long post.... :)
Thank you!
That's a big question, and I'm not sure I'll be able to address it fully. But let's try:
In Option #1, what it actually means is that you shouldn't use ThreadLocal in your worker verticles if you use more than one worker of the same type. Using only one worker means that your requests will be serialised.
Option #2 is simply incorrect. You cannot use setInstances with an instance of a class, only with its name. You're correct, though, that if you deploy by class name, Vert.x will instantiate the verticles itself.
Option #3 is less concurrent than using workers, and shouldn't be used.
Option #4 (executeBlocking) is basically doing what Option #3 does, and is also quite bad.
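To make the point about Option #2 concrete, deployment by class name (rather than by instance) might look like the sketch below. Note that Vert.x, not Spring, would then instantiate the five verticles, so AlertsRestService would need a no-arg constructor and another way to obtain its dependencies:

DeploymentOptions options = new DeploymentOptions()
        .setWorker(true)
        .setInstances(5)                       // five independent instances of the verticle
        .setWorkerPoolName("the-specific-pool")
        .setWorkerPoolSize(5);

// Deploy by fully qualified class name; Vert.x creates the instances itself
Vertx.vertx().deployVerticle("com.my.AlertsRestService", options);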
I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);
    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);
    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);
    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If that is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution like the one posted at the bottom of this post: Is this JAX-WS client call thread safe? I created a pool for the proxies and use it to access them from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
this::buildConsumerSupportServiceClient,
getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
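A borrow/return cycle might look like the sketch below; the operation name searchConsumer and the request/response types are hypothetical, and exception handling is reduced to a simple throws clause:

public SearchConsumerResponse callService(SearchConsumerRequest request) throws Exception {
    // Borrow a proxy for exactly one invocation and always hand it back to the pool
    ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
    try {
        return proxy.searchConsumer(request); // hypothetical service operation
    } finally {
        consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
    }
}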
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
Using Spring Boot 1.5.x, Spring Cloud, and JAX-RS:
I could use a second pair of eyes, since it is not clear to me whether the Spring-configured Javanica @HystrixCommand works for all use cases or whether I have an error in my code. Below is an approximation of what I'm doing; the code will not actually compile.
In the code below, WebService lives in a library with a separate package path from the main application(s), while MyWebService lives in the application itself, in the same context path as the Spring Boot application. MyWebService is functional, no issues there. This is just about the visibility of the @HystrixCommand annotation with respect to the Spring Boot based configuration.
At runtime, what I notice is that when code like the one below runs, I do see "commandKey=A" in my response. This one I did not quite expect, since it is still running while the data is obtained. And since we log the HystrixRequestLog, I also see this command key in my logs.
But all the other command keys are not visible at all, regardless of where I place them in the file. If I remove command key A, then no commands are visible whatsoever.
Thoughts?
// Example WebService that we use as a shared component for performing a backend call that is the same across different resources
@RequiredArgsConstructor
@Accessors(fluent = true)
@Setter
public abstract class WebService {

    private final @Nonnull Supplier<X> backendFactory;

    @Setter(AccessLevel.PACKAGE)
    private @Nonnull Supplier<BackendComponent> backendComponentSupplier = () -> new BackendComponent();

    @GET
    @Produces("application/json")
    @HystrixCommand(commandKey = "A")
    public Response mainCall() {
        Object obj = new Object();

        try {
            otherCommandMethod();
        } catch (Exception commandException) {
            // do nothing (for this example)
        }

        // get the hystrix request information so that we can determine what was executed
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = hystrixExecutedCommands();

        // set the hystrix data, viewable in the response
        obj.setData("hystrix", executedCommands.orElse(Collections.emptyList()));

        if (hasError(obj)) {
            return Response.serverError()
                    .entity(obj)
                    .build();
        }

        return Response.ok()
                .entity(obj)
                .build();
    }

    @HystrixCommand(commandKey = "B")
    private void otherCommandMethod() {
        backendComponentSupplier
                .get()
                .observe()
                .toBlocking()
                .subscribe();
    }

    Optional<Collection<HystrixInvokableInfo<?>>> hystrixExecutedCommands() {
        Optional<HystrixRequestLog> hystrixRequest = Optional
                .ofNullable(HystrixRequestLog.getCurrentRequest());

        // get the hystrix executed commands
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = Optional.empty();
        if (hystrixRequest.isPresent()) {
            executedCommands = Optional.of(hystrixRequest.get()
                    .getAllExecutedCommands());
        }
        return executedCommands;
    }

    @Setter
    @RequiredArgsConstructor
    public class BackendComponent implements ObservableCommand<Void> {

        @Override
        @HystrixCommand(commandKey = "Y")
        public Observable<Void> observe() {
            // make some backend call
            return backendFactory.get()
                    .observe();
        }
    }
}
// then later this component gets configured in the specific applications, with sample configuration that looks like this:
@SuppressWarnings({ "unchecked", "rawtypes" })
@Path("resource/somepath")
@Component
public class MyWebService extends WebService {

    @Inject
    public MyWebService(Supplier<X> backendSupplier) {
        super((Supplier) backendSupplier);
    }
}
There is an issue with mainCall() calling otherCommandMethod(): methods annotated with @HystrixCommand cannot be called from within the same class.
As discussed in the answers to this question, this is a limitation of Spring's AOP (self-invocation bypasses the proxy).
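One common workaround for the self-invocation limitation (a sketch, not a statement about how the original library must be restructured) is to move the nested command into a separate Spring bean, so the call crosses a proxy boundary and the @HystrixCommand aspect can intercept it. This assumes BackendComponent is made accessible outside WebService:

import java.util.function.Supplier;

import org.springframework.stereotype.Component;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Component
public class OtherCommandDelegate {

    @HystrixCommand(commandKey = "B")
    public void otherCommandMethod(Supplier<BackendComponent> backendComponentSupplier) {
        // same body as before, but now invoked through the Spring proxy
        backendComponentSupplier
                .get()
                .observe()
                .toBlocking()
                .subscribe();
    }
}

WebService would then inject OtherCommandDelegate and call otherCommandDelegate.otherCommandMethod(backendComponentSupplier) instead of its private method.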
I am connecting through SockJS over STOMP to my Spring backend. Everything works fine; the configuration works well for all browsers, etc. However, I cannot find a way to send an initial message. The scenario is as follows:
The client connects to the topic
function connect() {
    var socket = new SockJS('http://localhost:8080/myEndpoint');
    stompClient = Stomp.over(socket);
    stompClient.connect({}, function(frame) {
        setConnected(true);
        console.log('Connected: ' + frame);
        stompClient.subscribe('/topic/notify', function(message) {
            showMessage(JSON.parse(message.body).content);
        });
    });
}
and the backend config looks more or less like this:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketAppConfig extends AbstractWebSocketMessageBrokerConfigurer {

    ...

    @Override
    public void registerStompEndpoints(final StompEndpointRegistry registry) {
        registry.addEndpoint("/myEndpoint").withSockJS();
    }
}
I want to send the client an automatic reply from the backend (on the connect event) so that I can already provide it with some dataset (e.g. something read from the DB) without the client having to send a GET request (or any other). To sum up, I just want to send a message on the topic with the SimpMessagingTemplate object right after the client connects.
Usually I do it the following way, e.g. in a REST controller, where the template is already autowired:
@Autowired
private SimpMessagingTemplate template;

...

template.convertAndSend(TOPIC, new Message("it works!"));
How can I achieve this on the connect event?
UPDATE
I have managed to make it work. However, I am still a bit confused by the configuration. I will show 2 configurations in which the initial message can be sent:
1) First solution
JS part
stompClient.subscribe('/app/pending', function(message) {
    showMessage(JSON.parse(message.body).content);
});

stompClient.subscribe('/topic/incoming', function(message) {
    showMessage(JSON.parse(message.body).content);
});
Java part
@Controller
public class WebSocketBusController {

    @SubscribeMapping("/pending")
    ...
}

Configuration

@Override
public void configureMessageBroker(final MessageBrokerRegistry config) {
    config.enableSimpleBroker("/topic");
    config.setApplicationDestinationPrefixes("/app");
}

... and other calls

template.convertAndSend("/topic/incoming", outgoingMessage);
2) Second solution
JS part
stompClient.subscribe('/topic/incoming', function(message){
showMessage(JSON.parse(message.body).content);
})
Java part
@Controller
public class WebSocketBusController {

    @SubscribeMapping("/topic/incoming")
    ...
}

Configuration

@Override
public void configureMessageBroker(final MessageBrokerRegistry config) {
    config.enableSimpleBroker("/topic");
    // NO APPLICATION PREFIX HERE
}

... and other calls

template.convertAndSend("/topic/incoming", outgoingMessage);
SUMMARY:
The first case uses two subscriptions, which I wanted to avoid; I thought this could be managed with only one.
The second one, however, has no application prefix. But at least I can have a single subscription that listens on the provided topic and also receives the initial message.
If you just want to send a message to the client upon connection, use an appropriate ApplicationListener:
@Component
public class StompConnectedEvent implements ApplicationListener<SessionConnectedEvent> {

    private static final Logger log = Logger.getLogger(StompConnectedEvent.class);

    @Autowired
    private Controller controller;

    @Override
    public void onApplicationEvent(SessionConnectedEvent event) {
        log.debug("Client connected.");
        // you can use a controller to send your msg here
    }
}
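A variant of the same listener with the send wired in directly might look like the sketch below, assuming the Message type and the /topic/notify destination used earlier in the question:

@Component
public class StompConnectedListener implements ApplicationListener<SessionConnectedEvent> {

    @Autowired
    private SimpMessagingTemplate template;

    @Override
    public void onApplicationEvent(SessionConnectedEvent event) {
        // push an initial dataset to the topic as soon as a client connects
        template.convertAndSend("/topic/notify", new Message("it works!"));
    }
}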
You can't do that on connect; however, @SubscribeMapping covers that case.
You just need to mark the service method with that annotation, and its return value is sent back to the subscribing client.
From the Spring Reference Manual:
An @SubscribeMapping annotation can also be used to map subscription requests to @Controller methods. It is supported on the method level, but can also be combined with a type-level @MessageMapping annotation that expresses shared mappings across all message handling methods within the same controller.
By default the return value from an @SubscribeMapping method is sent as a message directly back to the connected client and does not pass through the broker. This is useful for implementing request-reply message interactions; for example, to fetch application data when the application UI is being initialized. Or alternatively an @SubscribeMapping method can be annotated with @SendTo in which case the resulting message is sent to the "brokerChannel" using the specified target destination.
UPDATE
Referring to this example: https://github.com/revelfire/spring4Test, how would it be possible to send anything from the Spring controllers when line 24 of index.html is invoked: stompClient.subscribe('/user/queue/socket/responses' ...?
Well, it looks like this:
#SubscribeMapping("/queue/socket/responses")
public List<Employee> list() {
return getEmployees();
}
The Stomp client part remains the same.