I'm having a problem using websocket transport when using Atmosphere. The code is shown below:
When I use long-polling as the transport, everything works fine: I'm able to call methods on the injected class without any problems. When I configure the client to use websocket, however, I get an exception saying IllegalStateException: Singleton not set for org.glassfish.main.core.kernel and the method call fails. I suspect this is due to the way Atmosphere/Glassfish handles WebSockets outside the normal servlet (and therefore ApplicationContext) environment.
@ManagedService(path = "/rt/test/{id}")
public class TestWebService {

    private @Inject TestWebController controller;

    @Get
    public void onGet(AtmosphereResource r) {
        Broadcaster b = controller.getBroadcaster(r.getRequest().getPathInfo(), true);
        if (b != null)
            r.setBroadcaster(b);
        else
            r.getResponse().setStatus(400, "Invalid parameter");
    }
}
Now the controller:
@ApplicationScoped
public class TestWebController {

    private @Inject IdValidator validator;

    public void eventHandler(@Observes MyEvent myEvent) {
        Broadcaster b = getBroadcaster(myEvent.getId(), false);
        if (b != null)
            b.broadcast(myEvent.getMessage());
    }

    public Broadcaster getBroadcaster(String id, boolean create) {
        if (validator.validate(id)) {
            Broadcaster b = BroadcasterFactory.getDefault().lookup(id);
            if (b == null && create)
                b = BroadcasterFactory.getDefault().get(id);
            return b;
        }
        return null;
    }
}
I've tried different scopes for the controller and just about everything else. If I move the broadcaster creation logic into the web service class, it works just fine (without the annotation), but then I lose the ability to validate the id, which is somewhat important.
I'm using Glassfish 4 and Atmosphere 2.1.7 with atmosphere-runtime, atmosphere-annotations and atmosphere-cdi jars loaded.
I'm going to try the latest 2.2 release candidate but there are no mentions of any similar issues in the bug tracker so I'm not optimistic.
UPDATE: The same issue occurs with Atmosphere 2.2 RC3.
UPDATE: I was able to make this work by annotating the class with @javax.ejb.Stateful. I have not performed any substantial testing yet to see if there are any side effects, but I am able to connect with a websocket and have confirmed that the injection happens correctly. I am using Atmosphere 2.2.0.
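For reference, a minimal sketch of that workaround, assuming the EJB annotation goes on the managed service class shown above (the class whose injection was failing):

    @javax.ejb.Stateful
    @ManagedService(path = "/rt/test/{id}")
    public class TestWebService {

        // unchanged from above
        private @Inject TestWebController controller;

        // onGet(...) stays the same; with @Stateful in place the container treats
        // the instance as an EJB, so the CDI injection of TestWebController is
        // resolved even on the websocket path
    }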
This is actually due to a shortcoming in the CDI spec:
https://java.net/jira/plugins/servlet/mobile#issue/GLASSFISH-20468
I've had to resort to long polling for now as a result, sadly.
General problem description
Due to compatibility issues with the database I have to work with, I cannot use its R2DBC driver. The only possible option is using the standard JDBC driver, but I have faced some issues getting transactions to work in the spring-webflux / Project Reactor context.
Transactions with JDBC usually rely on the connection being thread-local. In Project Reactor's Flux/Mono it is not guaranteed that each step of a stream executes on the same thread; in fact, I assume one of the major benefits of reactive programming is the ability to switch threads without having to worry about it. For this reason the standard Spring JDBC TransactionManager cannot be used, while a ReactiveTransactionManager is only implemented for R2DBC. As I am using JDBC in this case, I can neither use the JdbcTransactionManager nor a ReactiveTransactionManager.
First of all: is there a simple solution to this problem?
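To make the problem concrete, here is a small sketch of the kind of thread hop that breaks a ThreadLocal-bound JDBC transaction (the repositories and the Order type are just placeholders for this example, not my real code):

    // Placeholder repositories and Order type; the point is only the scheduler hop in between
    Mono<Void> writeBoth(Order order) {
        return Mono.fromRunnable(() -> orderJdbcRepository.insert(order)) // executes on the subscribing thread
                .publishOn(Schedulers.boundedElastic())                   // switch to another thread
                .then(Mono.fromRunnable(() -> auditJdbcRepository.insert(order)));
                // the second insert runs on the boundedElastic thread, so a transaction
                // bound to the first thread's ThreadLocal no longer covers it
    }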
"Hacky" solution
I will now elaborate on the steps I already took to solve this issue. My idea was to implement a custom ReactiveTransactionManager based on the provided JdbcTransactionManager. My assumption was that it would be possible to wrap a transaction around a Mono/Flux this way. The problem is that I did not take into account the issue described above: it currently only works in a thread-local context, as the underlying JDBC transactions still rely on it. Because of this, the inner transactions are handled (commit, rollback) individually if the thread changes in between.
The following class is the implementation of my custom transaction manager to be included in a reactive stream.
public class JdbcReactiveTransactionManager implements ReactiveTransactionManager {

    // JDBC/connection-based transaction manager that does the actual work
    private final DataSourceTransactionManager transactionManager;

    public JdbcReactiveTransactionManager(DataSourceTransactionManager transactionManager) {
        this.transactionManager = transactionManager;
    }

    // ReactiveTransaction delegates everything to TransactionStatus.
    static class JdbcReactiveTransaction implements ReactiveTransaction {

        private final TransactionStatus transactionStatus;

        public JdbcReactiveTransaction(TransactionStatus transactionStatus) {
            this.transactionStatus = transactionStatus;
        }

        public TransactionStatus getTransactionStatus() {
            return transactionStatus;
        }

        // [...]
    }

    @Override
    public @NonNull Mono<ReactiveTransaction> getReactiveTransaction(TransactionDefinition definition)
            throws TransactionException {
        return Mono.just(transactionManager.getTransaction(definition)).map(JdbcReactiveTransaction::new);
    }

    @Override
    public @NonNull Mono<Void> commit(@NonNull ReactiveTransaction transaction) throws TransactionException {
        if (transaction instanceof JdbcReactiveTransaction t) {
            transactionManager.commit(t.getTransactionStatus());
            return Mono.empty();
        } else {
            return Mono.error(new IllegalTransactionStateException("Illegal ReactiveTransaction type used"));
        }
    }

    @Override
    public @NonNull Mono<Void> rollback(@NonNull ReactiveTransaction transaction) throws TransactionException {
        if (transaction instanceof JdbcReactiveTransaction t) {
            transactionManager.rollback(t.getTransactionStatus());
            return Mono.empty();
        } else {
            return Mono.error(new IllegalTransactionStateException("Illegal ReactiveTransaction type used"));
        }
    }
}
The implemented solution works in all scenarios where the thread does not change. But a fixed thread is not what one usually wants to achieve when using reactive approaches. Therefore the thread must be pinned using publishOn and subscribeOn. This is all very hacky; I do not consider it a good solution myself, but I do not see a better alternative currently. As this is only required for one use case right now, I can probably live with it, but I would really like to find a better approach.
Pinning the Thread
The example below shows the situation in which I need to use both publishOn and subscribeOn to pin the thread. If I omit either one of these, some statements won't be executed on the same thread. My current assumption is that Netty performs the body parsing on a separate thread (or event loop), so the additional publishOn is required.
public Mono<ServerResponse> allocateFlows(ServerRequest request) {
    final val single = Schedulers.newSingle("AllocationService-allocateFlows");
    return request.bodyToMono(FlowsAllocation.class)
            .publishOn(single) // Why do I need this although I execute subscribeOn later?
            .flatMapMany(this::someProcessingLogic)
            .concatMapDelayError(this::someOtherProcessingLogic)
            .as(transactionalOperator::transactional)
            .subscribeOn(single, false)
            .then(ServerResponse.ok().build());
}
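For completeness, this is roughly how the custom manager can be wired into the transactionalOperator used above (a sketch; the bean names and the DataSource parameter are assumptions, not part of the code shown so far):

    @Bean
    public ReactiveTransactionManager jdbcReactiveTransactionManager(DataSource dataSource) {
        // wrap the blocking DataSourceTransactionManager in the custom reactive adapter
        return new JdbcReactiveTransactionManager(new DataSourceTransactionManager(dataSource));
    }

    @Bean
    public TransactionalOperator transactionalOperator(ReactiveTransactionManager transactionManager) {
        // TransactionalOperator.create(...) accepts any ReactiveTransactionManager,
        // including the custom JdbcReactiveTransactionManager above
        return TransactionalOperator.create(transactionManager);
    }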
I'm creating a Spring Reactor application to consume messages from a websockets server, transform them, and later save them to Redis and an SQL database; the Redis and SQL writes are also reactive. Also, before being written to Redis and the SQL database, messages will be windowed (with different timespans) and aggregated.
I'm not sure whether the way I've accomplished this is properly reactive, i.e. whether I'm losing any of the reactive benefits (performance).
First, let me show you what I got:
@Service
class WebSocketsConsumer {

    public ConnectableFlux<String> webSocketFlux() {
        return Flux.<String>create(emitter -> {
            createWebSocketClient()
                    .execute(URI.create("wss://some-url-goes-here.com"), session -> {
                        WebSocketMessage initialMessage = session.textMessage("SOME_MSG_HERE");

                        Flux<String> flux = session.send(Mono.just(initialMessage))
                                .thenMany(session.receive())
                                .map(WebSocketMessage::getPayloadAsText)
                                .doOnNext(emitter::next);

                        Flux<String> sessionStatus = session.closeStatus()
                                .switchIfEmpty(Mono.just(CloseStatus.GOING_AWAY))
                                .map(CloseStatus::toString)
                                .doOnNext(emitter::next)
                                .flatMapMany(Flux::just);

                        return flux
                                .mergeWith(sessionStatus)
                                .then();
                    })
                    .subscribe(); //1: highlighted by IntelliJ IDEA: "Calling 'subscribe' in non-blocking context"
        })
        .publish();
    }

    private ReactorNettyWebSocketClient createWebSocketClient() {
        return new ReactorNettyWebSocketClient(
                HttpClient.create(),
                () -> WebsocketClientSpec.builder().maxFramePayloadLength(131072 * 100)
        );
    }
}
And
@Service
class WebSocketMessageDispatcher {

    private final WebSocketsConsumer webSocketsConsumer;
    private final Consumer<String> reactiveRedisConsumer;
    private final Consumer<String> reactiveJdbcConsumer;

    private Disposable webSocketsDisposable;

    WebSocketMessageDispatcher(WebSocketsConsumer webSocketsConsumer, Consumer<String> redisConsumer, Consumer<String> dbConsumer) {
        this.webSocketsConsumer = webSocketsConsumer;
        this.reactiveRedisConsumer = redisConsumer;
        this.reactiveJdbcConsumer = dbConsumer;
    }

    @EventListener(ApplicationReadyEvent.class)
    public void onReady() {
        ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();
        messages.subscribe(reactiveRedisConsumer);
        messages.subscribe(reactiveJdbcConsumer);
        webSocketsDisposable = messages.connect();
    }

    @PreDestroy
    public void onDestroy() {
        if (webSocketsDisposable != null) webSocketsDisposable.dispose();
    }
}
Questions:
Is this a proper use of reactive streams? Maybe the Redis and database writes should be done in flatMap; however, IMO they can't be, as I want them to happen in the background and they will also aggregate messages with different time windows. Also note comment 1 in the code above, where IntelliJ IDEA lints my code. The code works, but I wonder what this lint may result in? Maybe I should use doOnNext not to call emitter::next but to invoke some dispatcher of messages there, with some function like doOnNext(dispatcher::dispatchMessage)?
I want the websockets client to start immediately after the application is ready and stop consuming messages when the application shuts down. Are the @EventListener(ApplicationReadyEvent.class) and @PreDestroy annotations and the code shown above a proper way to handle this scenario in the reactive world?
As I said, saving to Redis and the SQL database is also reactive, i.e. those saves also produce Mono<T>. Is subscribing to those Monos inside the subscribe of the websockets flux OK, or should it be accomplished some other way (comments 2 and 3 in the code above)?
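For illustration, here is a minimal sketch of what a windowed, flatMap-style write could look like; the 5-second buffer, the reactiveRedisTemplate bean, and the key name are assumptions for the example, not part of my actual code:

    ConnectableFlux<String> messages = webSocketsConsumer.webSocketFlux();

    messages
            .buffer(Duration.ofSeconds(5))                  // aggregate messages into 5-second batches
            .flatMap(batch -> reactiveRedisTemplate.opsForList()
                    .rightPushAll("ws:batches", batch))     // reactive Redis write, returns Mono<Long>
            .subscribe();

    messages.connect();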
I have a Feign client like this, with endpoints for two APIs from PROJECT-SERVICE:
@FeignClient(name = "PROJECT-SERVICE", fallbackFactory = ProjectServiceFallbackFactory.class)
public interface ProjectServiceClient {

    @GetMapping("/api/projects/{projectKey}")
    public ResponseEntity<Project> getProjectDetails(@PathVariable("projectKey") String projectKey);

    @PostMapping("/api/projects")
    public ResponseEntity<Project> createProject(@RequestBody Project project);
}
I'm using those clients like this:
@Service
public class MyService {

    @Autowired
    private ProjectServiceClient projectServiceClient;

    public void doSomething() {
        // Some code
        ResponseEntity<Project> projectResponse = projectServiceClient.getProjectDetails(projectKey);
        // Some more code
    }

    public void doSomethingElse() {
        // Some code
        ResponseEntity<Project> projectResponse = projectServiceClient.createProject(projectToBeCreated);
        // Some more code
    }
}
My problem is that most of the time (around 60% of the time), one of these Feign calls results in a HystrixTimeoutException.
I initially thought there could be a problem in the downstream microservice (PROJECT-SERVICE in this case), but that is not the case. In fact, when getProjectDetails() or createProject() is called, PROJECT-SERVICE actually does the job and returns a ResponseEntity<Project> with status 200 or 201 respectively, but my fallback is activated with the HystrixTimeoutException.
I'm trying in vain to find what might be causing this issue.
I do, however, have this in my main application configuration:
feign.hystrix.enabled=true
feign.client.config.default.connect-timeout=5000
feign.client.config.default.read-timeout=60000
Can anyone point me towards a solution?
Thanks,
Sriram Sridharan
Hystrix's timeout is not tied to Feign's. Hystrix has a default execution timeout of 1 second. You need to configure the Hystrix timeout to be slightly longer than Feign's, so that a HystrixTimeoutException is not thrown before Feign's own timeout is reached. Like so:
feign.client.config.default.connect-timeout=5000
feign.client.config.default.read-timeout=5000
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=6000
Doing so allows the FeignException, caused by the timeout after 5 seconds, to be thrown first and then wrapped in a HystrixTimeoutException.
I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);

    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);

    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();

    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);

    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);

    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);
    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution kind of like the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and use it to access proxies from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {

            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }

        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
this::buildConsumerSupportServiceClient,
getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
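For clarity, buildConsumerSupportServiceClient is essentially the body of the consumerSupportService() bean method from the question, minus the @Bean registration; a condensed sketch (the WS-Addressing setup is omitted here for brevity):

    private ConsumerSupportService buildConsumerSupportServiceClient() {
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setServiceClass(ConsumerSupportService.class);
        factory.setAddress("https://www.someservice.com/service?wsdl");
        factory.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);

        ConsumerSupportService service = (ConsumerSupportService) factory.create();
        setClientTimeout(ClientProxy.getClient(service)); // same timeout setup as in the question
        return service;
    }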
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
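In practice that looks roughly like this (a sketch using the standard commons-pool2 borrowObject/returnObject calls; the searchConsumer invocation is just a placeholder):

    ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
    try {
        // use the borrowed proxy for a single invocation (placeholder call)
        proxy.searchConsumer(request);
    } finally {
        // always return the proxy so other threads can reuse it
        consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
    }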
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
Using Spring Boot 1.5.x, Spring Cloud, and JAX-RS:
I could use a second pair of eyes, since it is not clear to me whether the Spring-configured Javanica HystrixCommand works for all use cases or whether I may have an error in my code. Below is an approximation of what I'm doing; the code below will not actually compile.
In the code below, WebService lives in a library with a separate package path from the main application(s), while MyWebService lives in the application that shares the context path of the Spring Boot application. MyWebService itself is functional, no issues there. This just has to do with the visibility of the HystrixCommand annotation with regard to Spring Boot based configuration.
At runtime, what I notice is that when code like the one below runs, I do see "commandKey=A" in my response. This one I did not quite expect, since it is still running while the data is obtained. And since we log the HystrixRequestLog, I also see this command key in my logs.
But all the other command keys are not visible at all, regardless of where I place them in the file. If I remove commandKey "A", then no commands are visible whatsoever.
Thoughts?
// Example WebService that we use as a shared component for performing a backend call that is the same across different resources
@RequiredArgsConstructor
@Accessors(fluent = true)
@Setter
public abstract class WebService {

    private final @Nonnull Supplier<X> backendFactory;

    @Setter(AccessLevel.PACKAGE)
    private @Nonnull Supplier<BackendComponent> backendComponentSupplier = () -> new BackendComponent();

    @GET
    @Produces("application/json")
    @HystrixCommand(commandKey = "A")
    public Response mainCall() {
        Object obj = new Object();

        try {
            otherCommandMethod();
        } catch (Exception commandException) {
            // do nothing (for this example)
        }

        // get the hystrix request information so that we can determine what was executed
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = hystrixExecutedCommands();

        // set the hystrix data, viewable in the response
        obj.setData("hystrix", executedCommands.orElse(Collections.emptyList()));

        if (hasError(obj)) {
            return Response.serverError()
                    .entity(obj)
                    .build();
        }

        return Response.ok()
                .entity(obj)
                .build();
    }

    @HystrixCommand(commandKey = "B")
    private void otherCommandMethod() {
        backendComponentSupplier
                .get()
                .observe()
                .toBlocking()
                .subscribe();
    }

    Optional<Collection<HystrixInvokableInfo<?>>> hystrixExecutedCommands() {
        Optional<HystrixRequestLog> hystrixRequest = Optional
                .ofNullable(HystrixRequestLog.getCurrentRequest());

        // get the hystrix executed commands
        Optional<Collection<HystrixInvokableInfo<?>>> executedCommands = Optional.empty();
        if (hystrixRequest.isPresent()) {
            executedCommands = Optional.of(hystrixRequest.get()
                    .getAllExecutedCommands());
        }
        return executedCommands;
    }

    @Setter
    @RequiredArgsConstructor
    public class BackendComponent implements ObservableCommand<Void> {

        @Override
        @HystrixCommand(commandKey = "Y")
        public Observable<Void> observe() {
            // make some backend call
            return backendFactory.get()
                    .observe();
        }
    }
}
// Then later this component gets configured in the specific applications, with sample configuration that looks like this:
@SuppressWarnings({ "unchecked", "rawtypes" })
@Path("resource/somepath")
@Component
public class MyWebService extends WebService {

    @Inject
    public MyWebService(Supplier<X> backendSupplier) {
        super((Supplier) backendSupplier);
    }
}
There is an issue with mainCall() calling otherCommandMethod(): methods annotated with @HystrixCommand cannot be called from within the same class.
As discussed in the answers to this question, this is a limitation of Spring's AOP.
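A common way around this limitation is to move the annotated method onto a separate Spring bean, so that the call crosses a proxy boundary. A minimal sketch under that assumption (the bean name is illustrative, not from the question's code, and the method must be public so the proxy can intercept it):

    @Component
    public class OtherCommandService {

        @HystrixCommand(commandKey = "B")
        public void otherCommandMethod(BackendComponent backendComponent) {
            backendComponent
                    .observe()
                    .toBlocking()
                    .subscribe();
        }
    }

WebService would then inject OtherCommandService and call otherCommandService.otherCommandMethod(backendComponentSupplier.get()) instead of its own private method, so the Hystrix aspect for commandKey "B" is actually applied.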