Websocket timeout with Apache Wicket and embedded Jetty

I am using Wicket with an embedded Jetty webserver. The application uses websockets, so the Jetty setup looks like the following:
FilterHolder filterHolder = new FilterHolder(Jetty9WebSocketFilter.class);
filterHolder.setInitParameter("applicationClassName", "MyApplication");
filterHolder.setInitParameter("filterMappingUrlPattern", "/*");

WebAppContext context = new WebAppContext();
context.setResourceBase(".");
context.addFilter(filterHolder, "/*", null);
context.addServlet(DefaultServlet.class, "/*");

Server server = new Server(8083);
server.setHandler(context);
try {
    server.start();
} catch (Exception e) {
    e.printStackTrace();
}
Everything works fine, apart from the websocket timeout. Research showed that Jetty's websocket connections time out, although this is apparently not common for webservers.
While researching I stumbled upon the following init parameter, which I now pass to my filterHolder:
filterHolder.setInitParameter("maxIdleTime", "5000");
And apparently this parameter does something, because now the timeout occurs noticeably faster than before: exactly after 5 seconds.
But I couldn't figure out how to disable the timeout completely. Using -1, 0 or even Integer.MIN_VALUE instead of 5000 changes nothing; there is still a timeout after a while. The documentation I found says nothing about a suitable value for this.
What init parameter can I use to disable the websocket timeout? Or do I have to stick with setting the timeout to some ridiculously high value?

Jetty 9.1 has its own WebSocketUpgradeFilter; use that one and then modify the default policy's idle timeout.
Example:
package jetty.websocket;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter;
import org.eclipse.jetty.websocket.server.pathmap.ServletPathSpec;
import org.eclipse.jetty.websocket.servlet.ServletUpgradeRequest;
import org.eclipse.jetty.websocket.servlet.ServletUpgradeResponse;
import org.eclipse.jetty.websocket.servlet.WebSocketCreator;
public class JettyWebSocketViaFilter
{
    @WebSocket
    public static class EchoSocket
    {
        @OnWebSocketMessage
        public void onMessage(Session session, String msg)
        {
            // echo the message back to the client asynchronously
            session.getRemote().sendStringByFuture(msg);
        }
    }

    public static class EchoCreator implements WebSocketCreator
    {
        private EchoSocket echoer = new EchoSocket();

        @Override
        public Object createWebSocket(ServletUpgradeRequest req, ServletUpgradeResponse resp)
        {
            return echoer;
        }
    }

    public static void main(String[] args)
    {
        Server server = new Server();
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);

        // Setup the basic application "context" for this application at "/"
        // This is also known as the handler tree (in jetty speak)
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        server.setHandler(context);

        // Add the websocket filter and configure its default policy
        WebSocketUpgradeFilter wsfilter = WebSocketUpgradeFilter.configureContext(context);
        wsfilter.getFactory().getPolicy().setIdleTimeout(5000); // websocket endpoint idle timeout (ms)
        wsfilter.addMapping(new ServletPathSpec("/"), new EchoCreator());

        // The filesystem paths we will map
        String staticRoot = "src/main/webapp/websocket/protocol";
        ServletHolder holderDefault = new ServletHolder("default", DefaultServlet.class);
        holderDefault.setInitParameter("resourceBase", staticRoot);
        holderDefault.setInitParameter("dirAllowed", "true");
        context.addServlet(holderDefault, "/");

        try
        {
            server.start();
            server.join();
        }
        catch (Throwable t)
        {
            t.printStackTrace(System.err);
        }
    }
}
Timeouts in Jetty with WebSockets:
Since a WebSocket connection starts as an upgraded HTTP/1.1 request, you essentially have two timeouts to worry about.
First is the connector idle timeout, which applies to the initial HTTP/1.1 portion of the incoming Upgrade request. This is configured at the server level, on the connector itself. With Jetty, any value of 0 or below is treated as an infinite timeout.
Next, you have the websocket endpoint's own idle timeout. If it has a value greater than 0, it takes over as the idle timeout for the already established connection (on the server side).
Some combinations and what it means ...
Server Connector IdleTimeout | WebSocket Endpoint IdleTimeout | Actual Timeout
------------------------------+--------------------------------+----------------
30,000 ms | -1 | 30,000 ms
30,000 ms | 10,000 ms | 10,000 ms
30,000 ms | 400,000 ms | 400,000 ms
500,000 ms | -1 | 500,000 ms
500,000 ms | 200 ms | 200 ms
-1 | -1 | (infinite)
-1 | 1,000 ms | 1,000 ms
You can think of the Server Connector IdleTimeout as a TCP level timeout, while the WebSocket endpoint timeout is an application level idle timeout.
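For reference, here is a minimal sketch of where each of those two timeouts is set in embedded Jetty 9 (the port and the timeout values are illustrative, not recommendations):
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(8080);
connector.setIdleTimeout(0);                          // connector (TCP/HTTP) level; 0 or below means never idle out
server.addConnector(connector);

ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath("/");
server.setHandler(context);

WebSocketUpgradeFilter wsfilter = WebSocketUpgradeFilter.configureContext(context);
wsfilter.getFactory().getPolicy().setIdleTimeout(-1); // endpoint (application) level; -1 leaves the connector value in charge, per the table above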
Hope this helps.

Related

Spring Boot JMS DefaultListenerContainer occasionally drops connection and is not auto-recovered with TIBCO EMS

The issue is similar to the one described in "Spring JMS Consumers to a TIBCO EMS Server expire on their own"; we have to restart our Spring Boot application to re-establish the connection.
Below is the code snippet we are using for the listener configuration:
public JmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    factory.setMaxMessagesPerTask(5);
    factory.setConcurrency("5");
    return factory;
}
And the connection factory:
@Bean
public ConnectionFactory connectionFactory() {
    ConnectionFactory connectionFactory = null;
    Tibjms.setPingInterval(10);
    try {
        TibjmsConnectionFactory tibjmsConnectionFactory = new TibjmsConnectionFactory(
                environment.getProperty("url"));
        // a few more statements to set other properties
        connectionFactory = tibjmsConnectionFactory;
    } catch (Exception ex) {
        // exception is swallowed here
    }
    return connectionFactory;
}
The issue is observed during VPN failovers. We have an active and a failover VPN connection; when the VPN switches, netstat on the application side shows the connection as established, but netstat on the EMS side indicates that the connection is terminated or not found after a few minutes, meaning there is no listener at the EMS end.
We are using the DefaultListenerContainer factory, which is supposed to poll and refresh the connection if it is terminated, but it fails to do so and we have to restart the server.
We suspect that, due to some configuration issue on the VPN side, the DefaultListenerContainer is not able to detect that the connection has been terminated and therefore cannot refresh the JMS connection.
Please let me know if there are any other parameters or properties that can help the DefaultListenerContainer identify such scenarios.
If you look at the TIBCO EMS documentation (https://docs.tibco.com/pub/ems/8.5.1/doc/html/api/javadoc/com/tibco/tibjms/TibjmsConnectionFactory.html),
you can see that there are parameters to manage reconnections:
setConnAttemptCount(int attempts)
setConnAttemptDelay(int delay)
setConnAttemptTimeout(int timeout)
setReconnAttemptCount(int attempts)
setReconnAttemptDelay(int delay)
setReconnAttemptTimeout(int timeout)
As an example you can use the following values (delay and timeout are in msec); a short code sketch applying them follows the list:
setConnAttemptCount(int attempts) 60
setConnAttemptDelay(int delay) 2000
setConnAttemptTimeout(int timeout) 1000
setReconnAttemptCount(int attempts) 120
setReconnAttemptDelay(int delay) 2000
setReconnAttemptTimeout(int timeout) 1000
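For illustration, here is a rough Java sketch applying those example values to the factory (the server URLs are placeholders; adapt them to your environment):
TibjmsConnectionFactory factory =
        new TibjmsConnectionFactory("tcp://server1:7222,tcp://server2:7222"); // placeholder URLs
// initial connection attempts
factory.setConnAttemptCount(60);
factory.setConnAttemptDelay(2000);     // msec
factory.setConnAttemptTimeout(1000);   // msec
// reconnection attempts after an established connection is lost
factory.setReconnAttemptCount(120);
factory.setReconnAttemptDelay(2000);   // msec
factory.setReconnAttemptTimeout(1000); // msec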
You can also define reconnection parameters in the Connection Factory definition, for example:
[QueueConnectionFactory]
type = queue
url = tcp://serveur1:7222,tcp://serveur2:7222
connect_attempt_count = 60
connect_attempt_delay = 2000
connect_attempt_timeout = 1000
reconnect_attempt_count = 120
reconnect_attempt_delay = 2000
reconnect_attempt_timeout = 1000
You may adjust the parameter values to cope with network issues that last a long time.
Note also that for the EMS client library to detect the loss of the connection to the EMS server and trigger the reconnection mechanism, you need the following parameters in the EMS tibemsd.conf file (durations are in seconds here):
client_heartbeat_server = 20
server_timeout_client_connection = 90
server_heartbeat_client = 20
client_timeout_server_connection = 90
The above should resolve your issue, but I recommend testing to fine-tune the values of the reconnection parameters.

How to use the Jedis connection pool efficiently without the number of connections exceeding maxTotal?

The Jedis pool is not working as expected. I have configured a maximum of 10 active connections, but it allows more than 10.
I have overridden the getConnection() method from RedisConnectionFactory. This method has been called almost 30 times to get a connection.
I have configured the Jedis pool as shown below.
Can someone please help me understand why it is creating more connections than maxTotal? And can someone also help me with closing the Jedis connection pool properly?
@Configuration
public class RedisConfiguration {

    @Bean
    public RedisTenantDataFactory redisTenantDataFactory(){
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxIdle(1);
        poolConfig.setMaxTotal(10);
        poolConfig.setBlockWhenExhausted(true);
        poolConfig.setMaxWaitMillis(10);
        JedisConnectionFactory jedisConnectionFactory = new JedisConnectionFactory(poolConfig);
        jedisConnectionFactory.setHostName(redisHost);
        jedisConnectionFactory.setUsePool(true);
        jedisConnectionFactory.setPort(Integer.valueOf(redisPort));
    }

    @Bean
    public RedisTemplate<String, Object> redisTemplate(@Autowired RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);
        template.afterPropertiesSet();
        return template;
    }
}
I have overridden getConnection() method from RedisConnectionFactory. This method has been called almost for 30 times for getting the connection.
This is probably a misunderstanding of the connection pool's behaviour; I can only guess without the details of how you are using the pool in your application.
So your pool is configured as follows:
...
poolConfig.setMaxIdle(1);
poolConfig.setMaxTotal(10);
poolConfig.setBlockWhenExhausted(true);
...
This means, as you expect, that you will not have more than 10 active connections from this specific pool to Redis.
You can check the number of clients (open connections) on Redis itself using RedisInsight or the CLIENT LIST command; you will see that you do not have more than 10 connections coming from this JVM.
The fact that you see many calls to getConnection() is just because your application calls it each time a connection is needed.
This does NOT mean "open a new connection"; it means "give me a connection from the pool", and your configuration defines the behaviour, as follows:
poolConfig.setMaxIdle(1) => the pool keeps at most 1 idle connection around; connections returned beyond that are closed. It is important to choose a good number here, since creating a new connection takes time and resources (1 is probably too low for a normal application).
poolConfig.setMaxTotal(10) => this means that the pool will not have more than 10 connections open at the same time. So you must define what happens when all 10 are in use and your application needs another one. This is where the next setting comes in:
poolConfig.setBlockWhenExhausted(true) => this means that if your application is already using all 10 "active" connections and it calls getConnection(), the call will block until one of the 10 connections is returned to the pool.
So "blocking" is probably not a very good idea... (but once again it depends of your application)
Maybe you are wondering why your application calls getConnection() 30 times, and why it does not stop/block at 10...
Because your code is doing the right thing ;) What I mean is that your application:
1- calls Jedis jedis = pool.getResource(); (so it takes one active connection from the pool)
2- uses the jedis connection as much as needed
3- closes the connection with jedis.close() (this does not necessarily close the real connection; it returns the connection to the pool, and the pool can reuse it or close it depending on the application/configuration)
Does it make sense?
Usually you will work with code like the following:
// Jedis implements Closeable, so the jedis instance is auto-closed at the end of the try block.
try (Jedis jedis = pool.getResource()) {
    // ... do stuff here ... for example
    jedis.set("foo", "bar");
    String foobar = jedis.get("foo");
    jedis.zadd("sose", 0, "car");
    jedis.zadd("sose", 0, "bike");
    Set<String> sose = jedis.zrange("sose", 0, -1);
}
// ... when shutting down your application:
pool.close();
You can find more information about JedisPool and Apache Commons Pool here:
Getting-started
Apache Commons Pool

How to add timeout in JAX-RS API

I have a JAX-RS API that performs long-running work, and the API is called by the client via an AJAX call. The client gets a 503 Service Unavailable status after 50 seconds.
How can I increase this timeout value? I tried increasing the connection timeout in Tomcat (which is hosting the API). I also tried adding a timeout to the AJAX call, but that didn't work either.
You could use the @Suspended annotation and create a TimeoutHandler.
I am not sure you still need to increase the timeout in Tomcat when using this approach.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import javax.ws.rs.GET;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.container.TimeoutHandler;

public class Resource {

    private ExecutorService executor = Executors.newSingleThreadExecutor();

    @GET
    public void asyncGet(@Suspended final AsyncResponse asyncResponse) {
        asyncResponse.setTimeoutHandler(new TimeoutHandler() {
            @Override
            public void handleTimeout(AsyncResponse asyncResponse) {
                // resume with a fallback response if the async work takes too long
                asyncResponse.resume("Processing timeout.");
                executor.shutdown();
            }
        });
        asyncResponse.setTimeout(60, TimeUnit.SECONDS);

        executor.submit(() -> {
            String result = someService.expensiveOperation();
            asyncResponse.resume(result);
            executor.shutdown();
        });
    }
}
Jersey documentation here

Spring Boot with CXF Client Race Condition/Connection Timeout

I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);

    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);

    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);
    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}

private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If that is actually not the case, should I be creating a new client proxy for each invocation, or use a pool of proxies?
It turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution similar to the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and use it to access proxies from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {
            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
this::buildConsumerSupportServiceClient,
getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
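Borrowing and returning looks roughly like this (a sketch; the searchConsumer operation and its request/response types are placeholders for the real service call):
public SearchConsumerResponse callService(SearchConsumerRequest request) throws Exception {
    ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
    try {
        return proxy.searchConsumer(request); // placeholder operation
    } finally {
        consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
    }
}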
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.

Spring ws timeout on server side

I have some web services exposed using Spring Web Services.
I would like to set a maximum timeout on the server side; I mean, when a client invokes my web service, it should not take more than a fixed amount of time. Is that possible?
I have found a lot of information about client timeouts, but not about server timeouts.
This is set at the level of the server itself and not the application, so it's application server dependent.
The reason for this is that it's the server code that opens the listening socket used by the HTTP connection, so only the server code can set a timeout by passing it to the socket API call that starts listening to a given port.
As an example, this is how to do it in Tomcat in file server.xml:
<Connector connectionTimeout="20000" ... />
You can work around this by having the web service trigger the real work on another thread, count down the timeout itself, and return a failure if it times out.
Here is an example of how you can do it; it should time out after 10 seconds:
public class Test {

    private static final int ONE_SECOND = 1_000;

    public String webserviceMethod(String request) {
        AtomicInteger counter = new AtomicInteger(0);
        final ResponseHolder responseHolder = new ResponseHolder();

        // Do the actual work on another thread
        Runnable worker = () -> {
            // ... do actual work ...
            responseHolder.response = "Done"; // actual response
            responseHolder.finished = true;   // set last, so the response is visible once finished is true
        };
        new Thread(worker).start();

        // Poll once per second, for at most 10 seconds
        while (counter.addAndGet(1) < 10) {
            try {
                Thread.sleep(ONE_SECOND);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            if (responseHolder.finished) {
                return responseHolder.response;
            }
        }
        return "Operation Timeout"; // can throw an exception here instead
    }

    private final class ResponseHolder {
        private volatile boolean finished;
        private volatile String response; // can be any type of response needed
    }
}
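A slightly more compact variant of the same idea is to submit the work to an ExecutorService and let Future.get enforce the timeout (a sketch, not tied to the code above; the class name and timeout value are illustrative):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeLimitedEndpoint {

    private final ExecutorService executor = Executors.newCachedThreadPool();

    public String webserviceMethod(String request) {
        Future<String> future = executor.submit(() -> {
            // ... do the actual work here and build the real response ...
            return "Done";
        });
        try {
            return future.get(10, TimeUnit.SECONDS); // wait at most 10 seconds
        } catch (TimeoutException e) {
            future.cancel(true);                     // interrupt the worker
            return "Operation Timeout";              // or throw an exception here
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}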
