We are currently using Apache HttpClient in our custom load test framework. I have to implement support for WebSocket testing, and since HttpClient does not support WebSockets, I am looking for alternatives. So, I've been playing with OkHttp and Async Http Client. In general, I could use either as a replacement, but each one presents a few challenges I need to tackle. Here are the ones I'm facing with OkHttp:
OkHttp uses global static state in com.squareup.okhttp.internal.Internal, which seems a little scary to me. In our framework, we launch lots of threads which in turn execute what we call request flows. Each thread would get its own HttpClient instance. It seems that with OkHttp you cannot have multiple independent OkHttpClient instances in one VM, because they would interfere due to this static state. I seem to be able to work around this problem by using one OkHttpClient instance with a CookieHandler that delegates to thread-scoped CookieHandlers (sketch below). I might have to do the same with ConnectionPools. Am I on the right track here?
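For illustration, the delegating handler I have in mind looks roughly like this; it's a minimal sketch, and the ThreadLocal-per-flow scoping (and Java 8's ThreadLocal.withInitial) is my assumption about how to isolate the flows:
import java.io.IOException;
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.URI;
import java.util.List;
import java.util.Map;

// Routes all cookie operations to a CookieHandler owned by the current thread,
// so one shared OkHttpClient can serve many independent request flows.
public class ThreadScopedCookieHandler extends CookieHandler {

    private final ThreadLocal<CookieHandler> perThreadHandler =
            ThreadLocal.withInitial(CookieManager::new);

    @Override
    public Map<String, List<String>> get(URI uri, Map<String, List<String>> requestHeaders)
            throws IOException {
        return perThreadHandler.get().get(uri, requestHeaders);
    }

    @Override
    public void put(URI uri, Map<String, List<String>> responseHeaders) throws IOException {
        perThreadHandler.get().put(uri, responseHeaders);
    }
}
The shared client would then be configured once with client.setCookieHandler(new ThreadScopedCookieHandler()).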
I need to be able to set the local address. We have client machines with multiple NICs which have to be evenly used. That's no problem with Apache HttpClient and AsyncHttpClient. How can this be done with OkHttp?
Update/Solution:
Digging through the code again after Jesse posted his comments, I realized that the Internal singleton does not keep state. I guess I was just irritated by the fact that I found a singleton in the codebase, even though it does not do any harm.
I was able to set the local address using a custom SocketFactory, as Jesse suggested:
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.UnknownHostException;
import javax.inject.Provider; // assuming a JSR-330 Provider
import javax.net.SocketFactory;

public class LocalAddressSocketFactory extends SocketFactory {

    private static final String ERROR_MSG = "Only unconnected sockets are supported";

    private final SocketFactory delegate;
    private final Provider<InetAddress> localAddressProvider;

    public LocalAddressSocketFactory(final SocketFactory delegate,
                                     final Provider<InetAddress> localAddressProvider) {
        this.delegate = delegate;
        this.localAddressProvider = localAddressProvider;
    }

    @Override
    public Socket createSocket() throws IOException {
        // Bind the still-unconnected socket to the local address chosen by
        // the provider; port 0 lets the OS pick an ephemeral port.
        Socket socket = delegate.createSocket();
        socket.bind(new InetSocketAddress(localAddressProvider.get(), 0));
        return socket;
    }

    @Override
    public Socket createSocket(final String remoteAddress, final int remotePort)
            throws IOException, UnknownHostException {
        throw new UnsupportedOperationException(ERROR_MSG);
    }

    @Override
    public Socket createSocket(final InetAddress remoteAddress, final int remotePort)
            throws IOException {
        throw new UnsupportedOperationException(ERROR_MSG);
    }

    @Override
    public Socket createSocket(final String remoteAddress, final int remotePort,
            final InetAddress localAddress, final int localPort)
            throws IOException, UnknownHostException {
        throw new UnsupportedOperationException(ERROR_MSG);
    }

    @Override
    public Socket createSocket(final InetAddress remoteAddress, final int remotePort,
            final InetAddress localAddress, final int localPort)
            throws IOException {
        throw new UnsupportedOperationException(ERROR_MSG);
    }
}
This class relies on the fact that OkHttp only creates unconnected Sockets. I'd appreciate it if this could be documented. OkHttpClient allows a custom SocketFactory to be set, but you have to find out for yourself how it is used. I just submitted a pull request enhancing the Javadocs: https://github.com/square/okhttp/pull/1626.
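For completeness, wiring the factory into the client then looks something like this (the NIC-rotating Provider implementation is hypothetical):
// Hypothetical provider that rotates across the machine's local NIC addresses.
Provider<InetAddress> roundRobinNics = new RoundRobinAddressProvider(localAddresses);

OkHttpClient client = new OkHttpClient();
client.setSocketFactory(
        new LocalAddressSocketFactory(SocketFactory.getDefault(), roundRobinNics));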
I have two Java processes and I am connecting them using a WebSocket in Spring Boot. One process acts as the client and connects like this:
List<Transport> transports = new ArrayList<Transport>(1);
transports.add(new WebSocketTransport(new StandardWebSocketClient()));
WebSocketClient client = new SockJsClient(transports);
WebSocketStompClient stompClient = new WebSocketStompClient(client);
stompClient.setMessageConverter(new MappingJackson2MessageConverter());
StompSessionHandler firstSessionHandler = new MyStompSessionHandler("Philip");
stompClient.connect("ws://localhost:8080/chat", firstSessionHandler);
The session handler extends StompSessionHandlerAdapter and provides these methods (I am subscribing by username so each client can receive its own messages):
@Override
public void afterConnected(
StompSession session, StompHeaders connectedHeaders) {
session.subscribe("/user/" + userName + "/reply", this);
session.send("/app/chat", getSampleMessage());
}
@Override
public void handleFrame(StompHeaders headers, Object payload) {
Message msg = (Message) payload;
// etc.....
}
On the server side I have a Controller exposed and I am writing data by calling the endpoint from a worker thread.
@Autowired
private SimpMessagingTemplate template;

@MessageMapping("/chat")
public void send(Message message) throws Exception {
    template.convertAndSendToUser(message.getFrom(), "/reply", message);
}
In the websocket config I am overriding the method to set the limits:
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic", "/user");
        config.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
        registration.setMessageSizeLimit(500 * 1024);     // 500 KB per message
        registration.setSendBufferSizeLimit(1024 * 1024); // 1 MB send buffer
        registration.setSendTimeLimit(20000);             // 20 s send time limit
    }
}
My question is this: if the load on the server gets high enough that I overrun the limit, the WebSocket fails catastrophically, and I want to avoid that. What I would like is for the controller to be able to ask the message broker, "Will this message fit in the buffer?", so that I can throttle to stay under the limit. I searched the API documentation but don't see any way of doing that. Are there any other obvious solutions that I am missing?
Thanks.
Actually I found a solution, so if anyone is interested, here it is.
On the server side configuration of the websockets I installed an Interceptor on the Outbound Channel (this is part of the API), which is called after each send from the embedded broker.
So I know how much is coming in, which I keep track of in my Controller class, and I know how much is going out through the Interceptor I installed; together these let me always stay under the limit.
Before accepting any new messages to be queued up for the broker, the controller first determines whether enough room is available; if not, it queues the message in external storage until room becomes available. A sketch of the interceptor follows.
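A minimal sketch, assuming Spring 4.1+ and that the outbound payloads arrive as byte arrays (worth verifying for your broker setup):
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.ChannelInterceptorAdapter;

// Counts bytes leaving through the client outbound channel so the
// controller can estimate how much send-buffer headroom remains.
public class OutboundByteCountingInterceptor extends ChannelInterceptorAdapter {

    private final AtomicLong bytesSent = new AtomicLong();

    @Override
    public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
        if (sent && message.getPayload() instanceof byte[]) {
            bytesSent.addAndGet(((byte[]) message.getPayload()).length);
        }
    }

    public long getBytesSent() {
        return bytesSent.get();
    }
}
It would be registered in the WebSocket config alongside the other overrides; the controller needs a reference to the same instance, e.g. by exposing it as a bean:
@Override
public void configureClientOutboundChannel(ChannelRegistration registration) {
    registration.interceptors(outboundByteCountingInterceptor);
}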
I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);
WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
wsAddressingFeature.setAddressingRequired(true);
jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);
ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
Client client = ClientProxy.getClient(service);
AddressingProperties addressingProperties = new AddressingProperties();
AttributedURIType to = new AttributedURIType();
to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
addressingProperties.setTo(to);
AttributedURIType action = new AttributedURIType();
action.setValue("http://serviceaction/SearchConsumer");
addressingProperties.setAction(action);
client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);
setClientTimeout(client);
return service;
}
private void setClientTimeout(Client client) {
HTTPConduit conduit = (HTTPConduit) client.getConduit();
HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution kind of like one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and I use that to access proxies from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {
JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
super(new BasePooledObjectFactory<T>() {
@Override
public T create() throws Exception {
return factory.get();
}
@Override
public PooledObject<T> wrap(T t) {
return new DefaultPooledObject<>(t);
}
}, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
}
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {
private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();
public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
Assert.notNull(serviceTypeClass);
Assert.notNull(factory);
if (!registry.containsKey(serviceTypeClass)) {
registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
}
}
public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
register(serviceTypeClass, factory, null);
}
@SuppressWarnings("unchecked")
public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
Assert.notNull(serviceTypeClass);
return registry.get(serviceTypeClass);
}
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
this::buildConsumerSupportServiceClient,
getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
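For reference, that builder is essentially the factory logic from the bean definition above extracted into a method; a sketch based on that code:
private ConsumerSupportService buildConsumerSupportServiceClient() {
    JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
    factory.setServiceClass(ConsumerSupportService.class);
    factory.setAddress("https://www.someservice.com/service?wsdl");
    factory.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);
    // WS-Addressing, the request context, and timeouts would be
    // configured here exactly as in the original bean definition.
    return (ConsumerSupportService) factory.create();
}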
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
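The borrow/return step itself, wrapped in try/finally so a failed call doesn't leak a proxy (the request variable and operation name are hypothetical):
ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
try {
    proxy.searchConsumer(request); // hypothetical service operation
} finally {
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}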
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
I want to test my Spring services which should send emails.
I am trying to use org.subethamail:subethasmtp.
To achieve my goal I created a service, MySender, where I send the email:
@Autowired
private MailSender mailSender;
//...
SimpleMailMessage message = new SimpleMailMessage();
message.setTo("example#example.com");
message.setSubject("Subject");
message.setText("Text");
mailSender.send(message);
// ...
To test this piece of code I created test application.properties (in test scope):
spring.mail.host=127.0.0.1
spring.mail.port=${random.int[4000,6000]}
And a test configuration class which should start the Wiser SMTP server and make it available in tests:
@Configuration
public class TestConfiguration {

    @Autowired
    private Wiser wiser;

    @Value("${spring.mail.host}")
    String smtpHost;

    @Value("${spring.mail.port}")
    int smtpPort;

    @Bean
    public Wiser provideWiser() {
        // provide Wiser for verification in tests
        return new Wiser();
    }

    @PostConstruct
    public void initializeMailServer() {
        // start the server
        wiser.setHostname(smtpHost);
        wiser.setPort(smtpPort);
        wiser.start();
    }

    @PreDestroy
    public void shutdownMailServer() {
        // stop the server
        wiser.stop();
    }
}
The expected result is that the application sends email using the Wiser SMTP server and the test verifies the number of sent messages.
But when I run the service, the application throws MailSendException (Couldn't connect to host, port: 127.0.0.1, 4688; timeout -1;).
Yet when I add a breakpoint and try to connect using telnet, the SMTP server accepts the connection and doesn't throw Connection refused.
Do you have any idea why I can't test sending mails?
A full code preview is available on GitHub:
https://github.com/karolrynio/demo-mail
I faced the same problem. Using a constant port number for spring.mail.port in the test Spring configuration, combined with Maven test forking, resulted in tests randomly failing with port conflicts when starting Wiser.
As noted here in the comments, using random.int doesn't help - it returns a different value each time it's referenced, and this is expected behavior (see this issue).
Hence, we need a different way to initialize spring.mail.port with a random value so that it stays constant within the test execution. Here's a way to do it (thanks for the advice here):
First, we don't set spring.mail.port in the test properties file at all. We'll initialize it in a TestPropertySource instead. We'll need a class like this:
public class RandomPortInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext applicationContext) {
        int randomPort = SocketUtils.findAvailableTcpPort();
        TestPropertySourceUtils.addInlinedPropertiesToEnvironment(applicationContext,
                "spring.mail.port=" + randomPort);
    }
}
Now we can run our tests this way (not too different from what's found in the OP):
@RunWith(SpringRunner.class)
@ContextConfiguration(initializers = RandomPortInitializer.class)
public class WhenEmailingSomeStuff {

    @Value("${spring.mail.host}")
    String smtpHost;

    @Value("${spring.mail.port}")
    int smtpPort;

    private Wiser wiser;

    @Before
    public void startEmailServer() {
        wiser = new Wiser();
        wiser.setPort(smtpPort);
        wiser.setHostname(smtpHost);
        wiser.start();
    }

    @After
    public void stopEmailServer() {
        wiser.stop();
    }

    @Test
    public void testYourJavaMailSenderHere() {
        //
    }
}
In the application properties, can you also add:
mail.smtp.auth=false
mail.smtp.starttls.enable=false
Then change your code to include these two extra values:
@Value("${mail.smtp.auth}")
private boolean auth;

@Value("${mail.smtp.starttls.enable}")
private boolean starttls;
and put these options in your initializeMailServer:
Properties mailProperties = new Properties();
mailProperties.put("mail.smtp.auth", auth);
mailProperties.put("mail.smtp.starttls.enable", starttls);
wiser.setJavaMailProperties(mailProperties);
wiser.setHostname(smtpHost);
wiser.setPort(smtpPort);
wiser.start();
Let me know if this works for you.
I'm implementing an IBackingMap for my Trident topology to store tuples to ElasticSearch. (I know there are several existing implementations for Trident/ElasticSearch integration on GitHub; however, I've decided to implement a custom one which suits my task better.)
So my implementation is a classic one with a factory:
public class ElasticSearchBackingMap implements IBackingMap<OpaqueValue<BatchAggregationResult>> {
// omitting here some other cool stuff...
private final Client client;
public static StateFactory getFactoryFor(final String host, final int port, final String clusterName) {
return new StateFactory() {
@Override
public State makeState(Map conf, IMetricsContext metrics, int partitionIndex, int numPartitions) {
ElasticSearchBackingMap esbm = new ElasticSearchBackingMap(host, port, clusterName);
CachedMap cm = new CachedMap(esbm, LOCAL_CACHE_SIZE);
MapState ms = OpaqueMap.build(cm);
return new SnapshottableMap(ms, new Values(GLOBAL_KEY));
}
};
}
public ElasticSearchBackingMap(String host, int port, String clusterName) {
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", clusterName).build();
// TODO add a possibility to close the client
client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress(host, port));
}
// the actual implementation is left out
}
As you can see, it gets host/port/cluster name as input params and creates an ElasticSearch client as a member of the class, BUT IT NEVER CLOSES THE CLIENT.
It is then used from within a topology in a pretty familiar way:
tridentTopology.newStream("spout", spout)
// ...some processing steps here...
.groupBy(aggregationFields)
.persistentAggregate(
ElasticSearchBackingMap.getFactoryFor(
ElasticSearchConfig.ES_HOST,
ElasticSearchConfig.ES_PORT,
ElasticSearchConfig.ES_CLUSTER_NAME
),
new Fields(FieldNames.OUTCOME),
new BatchAggregator(),
new Fields(FieldNames.AGGREGATED));
This topology is wrapped into some public static void main, packed in a jar and sent to Storm for execution.
The question is, should I worry about closing the ElasticSearch connection, or is it Storm's own business? If it is not done by Storm, how and when in the topology's lifecycle should I do that?
Thanks in advance!
Okay, answering my own question.
First of all, thanks again @dedek for the suggestions and for reviving the ticket in Storm's Jira.
Finally, since there's no official way to do that, I've decided to go with the cleanup() method of Trident's Filter. So far I've verified the following (for Storm v0.9.4):
With LocalCluster
cleanup() gets called on the cluster's shutdown
cleanup() DOESN'T get called when killing the topology. This shouldn't be a tragedy, though; very likely one won't use LocalCluster for real deployments anyway
With a real cluster
it gets called when the topology is killed as well as when the worker is stopped using pkill -TERM -u storm -f 'backtype.storm.daemon.worker'
it doesn't get called if the worker is killed with kill -9 or when it crashes or - sadly - when the worker dies due to an exception
Overall that gives a more or less decent guarantee of cleanup() getting called, provided you're careful with exception handling (I tend to add 'thundercatches' to every one of my Trident primitives anyway).
My code:
public class CloseFilter implements Filter {
private static final Logger LOG = LoggerFactory.getLogger(CloseFilter.class);
private final Closeable[] closeables;
public CloseFilter(Closeable... closeables) {
this.closeables = closeables;
}
@Override
public boolean isKeep(TridentTuple tuple) {
return true;
}
@Override
public void prepare(Map conf, TridentOperationContext context) {
}
@Override
public void cleanup() {
for (Closeable c : closeables) {
try {
c.close();
} catch (Exception e) {
LOG.warn("Failed to close an instance of {}", c.getClass(), e);
}
}
}
}
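Hooking it into the topology is then just a matter of appending it as a pass-through step somewhere in the stream; a rough sketch, where stream and esClient stand in for the actual stream and a Closeable ElasticSearch client:
// The filter is serialized with the topology, so whatever is handed to it
// must be serializable or be (re)created in prepare().
stream.each(new Fields(FieldNames.OUTCOME), new CloseFilter(esClient));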
However, it would be nice if some day hooks for closing connections became part of the API.
I am experiencing problems when configuring my Jersey Client with the ApacheConnector. It seems to ignore all request headers that I define in a WriterInterceptor. I can tell that the WriterInterceptor is called when I set a breakpoint within WriterInterceptor#aroundWriteTo(WriterInterceptorContext). Contrary to that, I can observe that the modification of an InputStream is preserved.
Here is a runnable example demonstrating my problem:
public class ApacheConnectorProblemDemonstration extends JerseyTest {
private static final Logger LOGGER = Logger.getLogger(JerseyTest.class.getName());
private static final String QUESTION = "baz", ANSWER = "qux";
private static final String REQUEST_HEADER_NAME_CLIENT = "foo-cl", REQUEST_HEADER_VALUE_CLIENT = "bar-cl";
private static final String REQUEST_HEADER_NAME_INTERCEPTOR = "foo-ic", REQUEST_HEADER_VALUE_INTERCEPTOR = "bar-ic";
private static final int MAX_CONNECTIONS = 100;
private static final String PATH = "/";
@Path(PATH)
public static class TestResource {
@POST
public String handle(InputStream questionStream,
                     @HeaderParam(REQUEST_HEADER_NAME_CLIENT) String client,
                     @HeaderParam(REQUEST_HEADER_NAME_INTERCEPTOR) String interceptor)
        throws IOException {
assertEquals(REQUEST_HEADER_VALUE_CLIENT, client);
// Here, the header that was set in the client's writer interceptor is lost.
assertEquals(REQUEST_HEADER_VALUE_INTERCEPTOR, interceptor);
// However, the input stream got gzipped so the WriterInterceptor has been partly applied.
assertEquals(QUESTION, new Scanner(new GZIPInputStream(questionStream)).nextLine());
return ANSWER;
}
}
@Provider
@Priority(Priorities.ENTITY_CODER)
public static class ClientInterceptor implements WriterInterceptor {
@Override
public void aroundWriteTo(WriterInterceptorContext context)
throws IOException, WebApplicationException {
context.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
context.proceed();
}
}
@Override
protected Application configure() {
enable(TestProperties.LOG_TRAFFIC);
enable(TestProperties.DUMP_ENTITY);
return new ResourceConfig(TestResource.class);
}
@Override
protected Client getClient(TestContainer tc, ApplicationHandler applicationHandler) {
ClientConfig clientConfig = tc.getClientConfig() == null ? new ClientConfig() : tc.getClientConfig();
clientConfig.property(ApacheClientProperties.CONNECTION_MANAGER, makeConnectionManager(MAX_CONNECTIONS));
clientConfig.register(ClientInterceptor.class);
// If I do not use the Apache connector, I avoid this problem.
clientConfig.connector(new ApacheConnector(clientConfig));
if (isEnabled(TestProperties.LOG_TRAFFIC)) {
clientConfig.register(new LoggingFilter(LOGGER, isEnabled(TestProperties.DUMP_ENTITY)));
}
configureClient(clientConfig);
return ClientBuilder.newClient(clientConfig);
}
private static ClientConnectionManager makeConnectionManager(int maxConnections) {
PoolingClientConnectionManager connectionManager = new PoolingClientConnectionManager();
connectionManager.setMaxTotal(maxConnections);
connectionManager.setDefaultMaxPerRoute(maxConnections);
return connectionManager;
}
@Test
public void testInterceptors() throws Exception {
Response response = target(PATH)
.request()
.header(REQUEST_HEADER_NAME_CLIENT, REQUEST_HEADER_VALUE_CLIENT)
.post(Entity.text(QUESTION));
assertEquals(200, response.getStatus());
assertEquals(ANSWER, response.readEntity(String.class));
}
}
I want to use the ApacheConnector in order to optimize for concurrent requests via the PoolingClientConnectionManager. Did I mess up the configuration?
PS: The exact same problem occurs when using the GrizzlyConnector.
After further research, I assume that this is rather a misbehavior in the default Connector that uses an HttpURLConnection. As I explained in this other self-answered question of mine, the documentation states:
Whereas filters are primarily intended to manipulate request and response parameters like HTTP headers, URIs and/or HTTP methods, interceptors are intended to manipulate entities, via manipulating entity input/output streams.
A WriterInterceptor is not supposed to manipulate header values, while a {Client,Server}RequestFilter is not supposed to manipulate the entity stream. If you need to do both, the two components should be bundled within a javax.ws.rs.core.Feature, or within the same class that implements both interfaces. (This can be problematic if you need to set two different Priority values, though.)
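A sketch of the second option: one class implements both contracts, so header manipulation stays in the filter and stream manipulation stays in the interceptor (class name is mine; the header names are the ones from the test above):
import java.io.IOException;
import java.util.zip.GZIPOutputStream;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.client.ClientRequestContext;
import javax.ws.rs.client.ClientRequestFilter;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

// Headers are set in the filter (where headers belong); the entity
// stream is wrapped in the interceptor (where streams belong).
public class GzipWithHeaderFeature implements ClientRequestFilter, WriterInterceptor {

    @Override
    public void filter(ClientRequestContext requestContext) throws IOException {
        requestContext.getHeaders().add(REQUEST_HEADER_NAME_INTERCEPTOR, REQUEST_HEADER_VALUE_INTERCEPTOR);
    }

    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        context.setOutputStream(new GZIPOutputStream(context.getOutputStream()));
        context.proceed();
    }
}
Registered once via clientConfig.register(GzipWithHeaderFeature.class), both halves then take effect with the ApacheConnector as well.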
All this is very unfortunate, though, since JerseyTest uses the default Connector with its HttpURLConnection, such that all my unit tests succeeded while the real-life application misbehaved, since it was configured with an ApacheConnector. Also, rather than silently suppressing changes, I wish Jersey would throw me some exceptions. (This is a general issue I have with Jersey: when I, for example, used a too-new version of the ClientConnectionManager, where the interface was renamed to HttpClientConnectionManager, I was simply informed in a one-line log statement that all my configuration efforts were being ignored. I did not discover this log statement until very late in development.)