I need to implement a TCP server that opens a socket as part of a handshake with a client.
After the socket is open, the server needs to keep it open and be able to push messages from the server to the client over that open socket.
I have looked at some Spring Integration examples, but I am not sure the examples I saw actually match my requirement.
1. Does Spring Integration TCP have the ability to keep a socket open and send a message from the server to the client?
The server should also support incoming requests.
The client-side implementation is a mock, written as a simple Java TCP client.
Thanks!
Nir
Here is the context configuration:
<int-ip:tcp-connection-factory id="server"
type="server"
port="5679"
host="localhost"
single-use="false"
deserializer="javaDeserializer"
serializer="javaSerializer" />
<int-ip:tcp-inbound-channel-adapter id="inboundServer"
channel="inloop"
connection-factory="server" client-mode="false"/>
<int-ip:tcp-outbound-channel-adapter id="outboundServer"
channel="outloop"
connection-factory="server" client-mode="false"/>
<channel id="inloop"/>
<channel id="outloop"/>
On the server side I send with:
outputChannel.send(new GenericMessage<String>("HI World from server16\n", header));
and on the client side I read the pushed message with:
BufferedReader stdIn = new BufferedReader(new InputStreamReader(socketClient.getInputStream()));
while ((serverResponse = stdIn.readLine()) != null) {
_logger.info("RESPONSE FROM SERVER::"+serverResponse);
}
The client side is a plain Java TCP client, not configured with Spring Integration; it is a mock client for future integration.
To support an echo server for requests whose byte array is not terminated with '\n', I extended AbstractByteArraySerializer and overrode deserialize:
public byte[] deserialize(InputStream inputStream) throws IOException {
    _Logger.trace("start deserialize");
    byte[] result;
    try {
        byte[] buffer = new byte[getMaxMessageSize()];
        // NOTE: a single read() is only reliable when each message arrives in one chunk;
        // it may return fewer bytes than the full message.
        int numOfBytes = inputStream.read(buffer, 0, buffer.length);
        if (numOfBytes < 0) {
            // the socket was closed between messages - signal an orderly end of stream
            throw new SoftEndOfStreamException("Stream closed between payloads");
        }
        result = copyToSizedArray(buffer, numOfBytes);
    } catch (IOException e) {
        _Logger.error("Exception on deserialize tcp inbound stream ", e);
        //publishEvent(e, , n);
        throw e;
    }
    return result;
}
You can use collaborating channel adapters for completely arbitrary messaging between peers.
See TCP Events.
The tcp inbound channel adapter (actually the connection factory) will publish a TcpConnectionOpenEvent when the client connects.
You can use an ApplicationListener to receive these events.
The event contains a connectionId. You can then start sending messages to a tcp outbound channel adapter with this connectionId in the header named ip_connectionId (IpHeaders.CONNECTION_ID).
Inbound messages (if any) from the client received by the inbound adapter will have the same value in the header.
Simply configure a server connection factory and configure both adapters to use it.
If the server has to open the socket, use client-mode="true" and inject a client connection factory.
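By way of illustration, here is a minimal sketch of that approach against the configuration above; the listener class, its field names and the push() method are invented for this example (it also assumes a single client, keeping only the latest connection id):

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.ApplicationListener;
import org.springframework.integration.ip.IpHeaders;
import org.springframework.integration.ip.tcp.connection.TcpConnectionOpenEvent;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Component;

@Component
public class ClientConnectionListener implements ApplicationListener<TcpConnectionOpenEvent> {

    // channel wired to the tcp-outbound-channel-adapter ("outloop" in the config above)
    private final MessageChannel outloop;

    // id of the most recently opened client connection
    private volatile String connectionId;

    public ClientConnectionListener(@Qualifier("outloop") MessageChannel outloop) {
        this.outloop = outloop;
    }

    @Override
    public void onApplicationEvent(TcpConnectionOpenEvent event) {
        this.connectionId = event.getConnectionId();
    }

    // call this whenever the server wants to push a message to the connected client
    public void push(String payload) {
        this.outloop.send(MessageBuilder.withPayload(payload)
                .setHeader(IpHeaders.CONNECTION_ID, this.connectionId)
                .build());
    }
}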
I am using org.springframework.boot:spring-boot-starter-amqp:2.6.6.
According to the documentation, I set up a @RabbitListener. I use SimpleRabbitListenerContainerFactory, and the configuration looks like this:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ObjectMapper om) {
SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
factory.setConnectionFactory(connectionFactory());
factory.setAcknowledgeMode(AcknowledgeMode.MANUAL);
factory.setConcurrentConsumers(rabbitProperties.getUpdater().getConcurrentConsumers());
factory.setMaxConcurrentConsumers(rabbitProperties.getUpdater().getMaxConcurrentConsumers());
factory.setMessageConverter(new Jackson2JsonMessageConverter(om));
factory.setAutoStartup(rabbitProperties.getUpdater().getAutoStartup());
factory.setDefaultRequeueRejected(false);
return factory;
}
The logic of the service is to receive messages from RabbitMQ, contact an external service via a REST API (using RestTemplate), and write some information to the database based on the response (using Spring Data JPA). The service does this successfully, but during testing I ran into a problem: if any exception is thrown up the stack during processing, the message is not sent to the configured DLQ, but simply hangs in the broker as unacked. Can you please tell me how to tell Spring AMQP that, if any error occurs, the message should be redirected to the DLQ?
The listener itself looks something like this:
@RabbitListener(
queues = {"${rabbit.updater.consuming.queue.name}"},
containerFactory = "rabbitListenerContainerFactory"
)
@Override
public void listen(
        @Valid @Payload MessageDTO message,
        Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) Long deliveryTag
) {
log.debug(DebugMessagesConstants.RECEIVED_MESSAGE_FROM_QUEUE, message, deliveryTag);
messageUpdater.process(message);
channel.basicAck(deliveryTag, false);
log.debug(DebugMessagesConstants.PROCESSED_MESSAGE_FROM_QUEUE, message, deliveryTag);
}
In the RabbitMQ management console the message shows as Unacked, and it stays unacked until the consuming application stops.
See error handling documentation: https://docs.spring.io/spring-amqp/docs/current/reference/html/#annotation-error-handling.
So, just don't use AcknowledgeMode.MANUAL and rely on the Dead Letter Exchange configuration for messages that are rejected in case of an error.
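If the consuming queue is declared by the application, a dead letter exchange can be attached via queue arguments; a minimal sketch (the queue, exchange and routing key names here are made up):

@Bean
public Queue updaterQueue() {
    // rejected messages are republished to "updater.dlx" with routing key "updater.dlq"
    return QueueBuilder.durable("updater.queue")
            .withArgument("x-dead-letter-exchange", "updater.dlx")
            .withArgument("x-dead-letter-routing-key", "updater.dlq")
            .build();
}

Combined with setDefaultRequeueRejected(false) from the factory above, a listener exception then ends up in the DLQ instead of being requeued.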
Or call this.channel.basicNack(deliveryTag, false, false) when messageUpdater.process(message) throws an exception...
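For example, keeping MANUAL acks, the listener could look roughly like this (a sketch of the idea, not tested):

@RabbitListener(
        queues = {"${rabbit.updater.consuming.queue.name}"},
        containerFactory = "rabbitListenerContainerFactory"
)
public void listen(
        @Valid @Payload MessageDTO message,
        Channel channel,
        @Header(AmqpHeaders.DELIVERY_TAG) long deliveryTag
) throws IOException {
    try {
        messageUpdater.process(message);
        channel.basicAck(deliveryTag, false);
    }
    catch (Exception e) {
        // requeue = false, so the broker dead-letters the message
        // (requires a DLX configured on the queue)
        channel.basicNack(deliveryTag, false, false);
    }
}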
I have 2 spring-boot-starter-web services. Service A sends a request via Retrofit to service B. I have configured it to time out after 10 seconds. Service A detects the timeout (SocketTimeoutException), but I have no way for service B to detect it. How can I verify that the socket is closed? I send a file via the OutputStream of the HttpServletResponse and it does not detect that it is closed. It looks like it sends the file to service A even though service A has already timed out.
try (FileInputStream in = new FileInputStream(file)){
OutputStream out = httpServletResponse.getOutputStream();
IOUtils.copy(in,out); // copy from in to out
out.close();
} catch (IOException e) {
// In AWS I get a "broken pipe" IOException. But, locally, I don't get any exception.
}
I don't know if there is a way to check whether a socket is closed until the server tries to read from or write to it. A workaround is to handle these IOExceptions like this:
@ExceptionHandler(IOException.class)
@ResponseStatus(HttpStatus.SERVICE_UNAVAILABLE) //(1)
public Object exceptionHandler(IOException e, HttpServletRequest request) {
if (StringUtils.containsIgnoreCase(ExceptionUtils.getRootCauseMessage(e), "Broken pipe")) { //(2)
return null; //(2) socket is closed, cannot return any response
} else {
return new HttpEntity<>(e.getMessage()); //(3)
}
}
This blog post Spring - How to handle "IOException: Broken pipe" can help.
Hello, I am new to Spring Integration.
I checked examples for Spring Integration dynamic routing and finally found one here:
Dynamic TCP Client
It contains these lines:
@Component
@MessagingGateway(defaultRequestChannel = "toTcp.input")
public interface TcpClientGateway {
    byte[] send(String data, @Header("host") String host, @Header("port") int port);
}
private MessageChannel createNewSubflow(Message<?> message) {
String host = (String) message.getHeaders().get("host");
Integer port = (Integer) message.getHeaders().get("port");
Assert.state(host != null && port != null, "host and/or port header missing");
String hostPort = host + port;
TcpNetClientConnectionFactory cf = new TcpNetClientConnectionFactory(host, port);
TcpSendingMessageHandler handler = new TcpSendingMessageHandler();
handler.setConnectionFactory(cf);
IntegrationFlow flow = f -> f.handle(handler);
IntegrationFlowContext.IntegrationFlowRegistration flowRegistration =
this.flowContext.registration(flow)
.addBean(cf)
.id(hostPort + ".flow")
.register();
MessageChannel inputChannel = flowRegistration.getInputChannel();
this.subFlows.put(hostPort, inputChannel);
return inputChannel;
}
but I changed it to:
private MessageChannel createNewSubflow(Message<?> message) {
String host = (String) message.getHeaders().get("host");
Integer port = (Integer) message.getHeaders().get("port");
Assert.state(host != null && port != null, "host and/or port header missing");
String hostPort = host + port;
TcpNetClientConnectionFactory cf = new TcpNetClientConnectionFactory(host, port);
cf.setLeaveOpen(true);
//cf.setSingleUse(true);
ByteArrayCrLfSerializer byteArrayCrLfSerializer =new ByteArrayCrLfSerializer();
byteArrayCrLfSerializer.setMaxMessageSize(1048576);
cf.setSerializer(byteArrayCrLfSerializer);
cf.setDeserializer(byteArrayCrLfSerializer);
TcpOutboundGateway tcpOutboundGateway = new TcpOutboundGateway();
tcpOutboundGateway.setConnectionFactory(cf);
IntegrationFlow flow = f -> f.handle(tcpOutboundGateway);
IntegrationFlowContext.IntegrationFlowRegistration flowRegistration =
this.flowContext.registration(flow)
.addBean(cf)
.id(hostPort + ".flow")
.register();
MessageChannel inputChannel = flowRegistration.getInputChannel();
this.subFlows.put(hostPort, inputChannel);
return inputChannel;
}
to work with a request/response architecture. It really works fine because it provides dynamic routing without creating TCP clients by hand.
At this point I need some help to improve my scenario. My scenario is as follows:
The client sends a message to the server and receives that message's response, but then the server needs to send arbitrary messages to that client (like GPS location updates). When the server starts to send these messages, the client generates error messages like the one below:
ERROR 54816 --- [pool-2-thread-1] o.s.i.ip.tcp.TcpOutboundGateway : Cannot correlate response - no pending reply for ::58628:62fd67b6-af2d-42f1-9c4d-d232fbe9c8ca
I checked the Spring Integration documentation and noticed that gateways only work with request/response, so I learned that I should use adapters instead, but I do not know how to use adapters with a dynamic TCP client.
Here I found similar topics and some answers, but I could not reach my goal or find an example combining the solutions:
Spring Integration TCP
Spring integration TCP server push to client
You just need to register two flows; one for input; one for output - the problem is correlating the response for the reply, and routing the arbitrary messages to some place other than the gateway.
I updated the sample for this use case on this branch.
You can see the changes in the last commit on that branch; most of the changes were to simulate your server side.
On the client side, we simply register two flows and use a @ServiceActivator method to get the inbound messages; you can identify which server they come from via the connection id.
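Not the actual code from that branch, but roughly the idea on the client side: register an outbound flow with a TcpSendingMessageHandler and an inbound flow with a TcpReceivingChannelAdapter, both sharing the same connection factory, so unsolicited server messages arrive on their own channel. The channel name "fromServer" and the flow ids are illustrative, and reply correlation (which the sample on the branch handles) is not shown here:

private MessageChannel createNewSubflow(Message<?> message) {
    String host = (String) message.getHeaders().get("host");
    Integer port = (Integer) message.getHeaders().get("port");
    Assert.state(host != null && port != null, "host and/or port header missing");
    String hostPort = host + port;

    TcpNetClientConnectionFactory cf = new TcpNetClientConnectionFactory(host, port);
    cf.setSingleUse(false); // keep the socket open so the server can push at any time

    TcpSendingMessageHandler outbound = new TcpSendingMessageHandler();
    outbound.setConnectionFactory(cf);

    TcpReceivingChannelAdapter inbound = new TcpReceivingChannelAdapter();
    inbound.setConnectionFactory(cf);
    inbound.setClientMode(true); // open the connection proactively so pushes can arrive

    IntegrationFlow outFlow = f -> f.handle(outbound);
    IntegrationFlow inFlow = IntegrationFlows.from(inbound)
            .channel("fromServer") // consumed by a @ServiceActivator elsewhere
            .get();

    this.flowContext.registration(inFlow)
            .id(hostPort + ".in")
            .register();
    IntegrationFlowContext.IntegrationFlowRegistration outRegistration =
            this.flowContext.registration(outFlow)
                    .addBean(cf)
                    .id(hostPort + ".out")
                    .register();

    MessageChannel inputChannel = outRegistration.getInputChannel();
    this.subFlows.put(hostPort, inputChannel);
    return inputChannel;
}

and, for example:

@ServiceActivator(inputChannel = "fromServer")
public void handleServerPush(byte[] payload,
        @Header(IpHeaders.CONNECTION_ID) String connectionId) {
    // connectionId tells you which server this unsolicited message came from
}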
In this great answer https://stackoverflow.com/a/27161986/4358405 there is an example of how to use raw Spring4 WebSockets without STOMP subprotocol (and without SockJS potentially).
Now my question is: how do I broadcast to all clients? I expected to see an API that I could use in similar fashion with that of pure JSR 356 websockets API: session.getBasicRemote().sendText(messJson);
Do I need to keep all WebSocketSession objects on my own and then call sendMessage() on each of them?
I found a solution. In the WebSocket handler, we keep a list of WebSocketSession objects and add each new session in afterConnectionEstablished:
// CopyOnWriteArrayList is safe to iterate while sessions are added or removed concurrently
private final List<WebSocketSession> sessions = new CopyOnWriteArrayList<>();

@Override
public void afterConnectionEstablished(WebSocketSession session) throws Exception {
    this.sessions.add(session);
    System.out.println("New Session: " + session.getId());
}
When we need to broadcast, just iterate over all sessions in the list and send the message:
for (WebSocketSession sess : sessions) {
TextMessage msg = new TextMessage("Hello from " + session.getId() + "!");
sess.sendMessage(msg);
}
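To keep the list accurate, you probably also want to drop sessions when they close; a small sketch, assuming the same handler class:

@Override
public void afterConnectionClosed(WebSocketSession session, CloseStatus status) throws Exception {
    this.sessions.remove(session);
    System.out.println("Closed Session: " + session.getId());
}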
Hope this helps!
As far as I know, and as I can gather from the documentation here, you can't broadcast using the WebSocketHandler.
Instead you should use STOMP over WebSocket, configured by a WebSocketMessageBrokerConfigurer, as described here.
Use a SimpMessagingTemplate anywhere in your code to send messages to subscribed clients, as described here.
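For example (a minimal sketch; the destination "/topic/updates" is a placeholder and assumes a simple broker is enabled for "/topic"):

@Service
public class Broadcaster {

    private final SimpMessagingTemplate template;

    public Broadcaster(SimpMessagingTemplate template) {
        this.template = template;
    }

    public void broadcast(String payload) {
        // every client subscribed to /topic/updates receives this message
        this.template.convertAndSend("/topic/updates", payload);
    }
}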
Just wondering if anyone has experimented with WebSocket proxying (for a transparent proxy) using embedded Jetty?
After about a day and a half playing with Jetty 9.1.2.v20140210, all I can tell is that it can't proxy WebSockets in its current form, and adding such support is a non-trivial task (afaict at least).
Basically, Jetty's ProxyServlet strips out the "Upgrade" and "Connection" header fields regardless of whether they come from a WebSocket handshake request. Adding these fields back is easy, as shown below. But when the proxied server returns a response with HTTP code 101 (Switching Protocols), no protocol upgrade is done on the proxy server. So, when the first WebSocket packet arrives, the HttpParser chokes and sees it as a bad HTTP request.
If anyone already has a solution for it or is familiar with Jetty to suggest what to try, that would be very much appreciated.
Below is the code from my experiment, with the unimportant bits stripped out:
public class ProxyServer
{
public static void main(String[] args) throws Exception
{
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(8888);
server.addConnector(connector);
// Setup proxy handler to handle CONNECT methods
ConnectHandler proxy = new ConnectHandler();
server.setHandler(proxy);
// Setup proxy servlet
ServletContextHandler context = new ServletContextHandler(proxy, "/", ServletContextHandler.SESSIONS);
ServletHolder proxyServlet = new ServletHolder(MyProxyServlet.class);
context.addServlet(proxyServlet, "/*");
server.start();
}
}
@SuppressWarnings("serial")
public class MyProxyServlet extends ProxyServlet
{
@Override
protected void customizeProxyRequest(Request proxyRequest, HttpServletRequest request)
{
// Pass through the upgrade and connection header fields for websocket handshake request.
String upgradeValue = request.getHeader("Upgrade");
if (upgradeValue != null && upgradeValue.compareToIgnoreCase("websocket") == 0)
{
setHeader(proxyRequest, "Upgrade", upgradeValue);
setHeader(proxyRequest, "Connection", request.getHeader("Connection"));
}
}
@Override
protected void onResponseHeaders(HttpServletRequest request, HttpServletResponse response, Response proxyResponse)
{
super.onResponseHeaders(request, response, proxyResponse);
// Restore the upgrade and connection header fields for websocket handshake request.
HttpFields fields = proxyResponse.getHeaders();
for (HttpField field : fields)
{
if (field.getName().compareToIgnoreCase("Upgrade") == 0)
{
String upgradeValue = field.getValue();
if (upgradeValue != null && upgradeValue.compareToIgnoreCase("websocket") == 0)
{
response.setHeader(field.getName(), upgradeValue);
for (HttpField searchField : fields)
{
if (searchField.getName().compareToIgnoreCase("Connection") == 0) {
response.setHeader(searchField.getName(), searchField.getValue());
}
}
}
}
}
}
}
Let's imagine the proxy scheme that you are trying to build: we have client A, server B and proxy P. Now let's walk through the connect workflow:
A establishes a TCP connection with proxy P (A-P)
A sends a CONNECT addr(B) request with the WebSocket handshake
Here you have the first problem: per the HTTP RFC, the headers used in the WS handshake are not end-to-end headers, because for HTTP they only make sense on the transport layer (between two hops).
P establishes a TCP connection to B (P-B)
P sends the WS handshake HTTP request to B
B responds with the HTTP->WS upgrade (by sending 101)
And here is the other problem: after the HTTP 101 is sent, server B and client A communicate only over raw TCP, but the Jetty servlet does not support propagating plain TCP packets. In other words, the Jetty proxy servlet waits for client A to start transmitting an HTTP request, which will never happen after A receives HTTP 101.
You would need to implement this yourself using a WS server and a WS client.
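If you go that route, the rough shape is a server-side WebSocket endpoint on the proxy that, on connect, opens a second WebSocket to the backend and relays text frames in both directions. A minimal sketch with the Jetty 9 WebSocket server and client APIs (the class names and backend URI are made up; error handling, binary frames and close propagation are omitted):

import java.io.IOException;
import java.net.URI;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.client.ClientUpgradeRequest;
import org.eclipse.jetty.websocket.client.WebSocketClient;

@WebSocket
public class RelaySocket {

    private static final URI BACKEND_URI = URI.create("ws://backend-host:8080/ws");

    private final WebSocketClient client = new WebSocketClient();
    private Session upstream;   // browser <-> proxy
    private Session downstream; // proxy <-> backend

    @OnWebSocketConnect
    public void onConnect(Session session) throws Exception {
        this.upstream = session;
        this.client.start();
        // open the second leg to the real server and wait until it is up
        this.downstream = this.client
                .connect(new BackendSocket(), BACKEND_URI, new ClientUpgradeRequest())
                .get(10, TimeUnit.SECONDS);
    }

    @OnWebSocketMessage
    public void onText(String message) throws IOException {
        this.downstream.getRemote().sendString(message); // browser -> backend
    }

    // backend -> browser leg
    @WebSocket
    public class BackendSocket {

        @OnWebSocketMessage
        public void onText(String message) throws IOException {
            upstream.getRemote().sendString(message);
        }
    }
}

The RelaySocket would be registered on the proxy with a WebSocketHandler or WebSocketServlet instead of the ProxyServlet shown above.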