Spring Integration server with Java DSL - spring-boot

I am looking for an example of a Spring Integration 4.3.14 TCP server that responds to a message, using the Java DSL rather than XML.
The 4.3.14 requirement is set by corporate policy, which also rules out XML.
The end requirement is to receive a formatted text payload from a PLC and respond in kind. The PLC code is legacy and not at all well defined, and similar payloads can have different formats.
The easy way to deal with the input payload is to treat it as a string and handle it in Java code.
I have a basic receive working but cannot work out how to send the reply. I have read a lot of examples, but by now my mind is just confused, so a simple working example would be ideal.
Many thanks

Here you go...
@SpringBootApplication
public class So50412811Application {

    public static void main(String[] args) {
        SpringApplication.run(So50412811Application.class, args).close();
    }

    @Bean
    public TcpNetServerConnectionFactory cf() {
        return new TcpNetServerConnectionFactory(1234);
    }

    @Bean
    public TcpInboundGateway gateway() {
        TcpInboundGateway gw = new TcpInboundGateway();
        gw.setConnectionFactory(cf());
        return gw;
    }

    @Bean
    public IntegrationFlow flow() {
        return IntegrationFlows.from(gateway())
                .transform(Transformers.objectToString())
                .<String, String>transform(String::toUpperCase)
                .get();
    }

    // client

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            Socket socket = SocketFactory.getDefault().createSocket("localhost", 1234);
            socket.getOutputStream().write("foo\r\n".getBytes()); // default CRLF deserializer
            InputStream is = socket.getInputStream();
            int in = 0;
            while (in != 0x0a) { // read the reply up to the terminating LF
                in = is.read();
                System.out.print((char) in);
            }
            socket.close();
        };
    }

}
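Note that the flow never sends the reply explicitly: the TcpInboundGateway correlates whatever the flow returns (the upper-cased string here) as the reply and writes it back over the same connection, framed by the default CRLF (de)serializer.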

Related

Use Function to replyTo RPC request

I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with the @RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {

    fun send() = template.convertSendAndReceive(
            "rpc-exchange",
            "rpc-routing-key",
            "payload message"
    )

}
Server (approach 1, working)
class Server {

    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }

}
Server (approach 2, not working) --> goal
class Server {

    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }

}
With the config (approach 2)
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the request, but unfortunately the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use the @RabbitListener.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically like @RabbitListener does.
You can, however, achieve it by adding an output binding that routes the reply to the default exchange, using the replyTo header as the routing key:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }

}
PAYLOAD MESSAGE
Note that the reply will come as a byte[]; you can use a custom message converter on the template to convert to String.
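A minimal sketch of that, assuming the replies are UTF-8 text (the anonymous override is illustrative, not the only way to do it):

    // assumption: replies are UTF-8 text; turn byte[] reply bodies into Strings
    template.setMessageConverter(new SimpleMessageConverter() {

        @Override
        public Object fromMessage(Message message) {
            return new String(message.getBody(), StandardCharsets.UTF_8);
        }

    });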
EDIT
In reply to a follow-up comment.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo queue created by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but those are also routed to via the default exchange "".
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {

        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }

}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name, and you should revert to the previous example's routing key expression (which gets the queue name from the header).

How to send keyed message to Kafka using Spring Cloud Stream Supplier

I want to use Spring Cloud Stream to produce keyed messages (messages with a specific key) to Kafka.
@SpringBootApplication
public class SpringCloudStreamKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringCloudStreamKafkaApplication.class, args);
    }

    @Bean
    Supplier<DataRecord> process() {
        return () -> new DataRecord(42L);
    }

}
What do I need to change in the Supplier code to provide the key?
Is it possible in the new style of API (using lambdas)?
Thank you
Return a Message<?> and set the KafkaHeaders.MESSAGE_KEY header:
@Bean
Supplier<Message<String>> process() {
    return () -> MessageBuilder.withPayload("foo")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
            .build();
}
(This assumes the default key serializer, byte[].)
EDIT
This will be called endlessly.
If you want to send a finite stream, I believe you have to switch to the reactive model.
@Bean
Supplier<Flux<Message<String>>> processFinite() {
    Message<String> msg1 = MessageBuilder.withPayload("foo")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
            .build();
    Message<String> msg2 = MessageBuilder.withPayload("baz")
            .setHeader(KafkaHeaders.MESSAGE_KEY, "qux".getBytes())
            .build();
    return () -> {
        return Flux.just(msg1, msg2);
    };
}
There is also Flux.fromStream(myStream), which will end at the end of the stream.
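For example, a minimal sketch (the stream contents here are made up; the flux completes when the stream is exhausted):

@Bean
Supplier<Flux<Message<String>>> processFromStream() {
    // hypothetical finite source built from a java.util.stream.Stream
    return () -> Flux.fromStream(Stream.of("foo", "baz")
            .map(payload -> MessageBuilder.withPayload(payload)
                    .setHeader(KafkaHeaders.MESSAGE_KEY, payload.getBytes())
                    .build()));
}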
EDIT2
You can also use the StreamBridge.
https://docs.spring.io/spring-cloud-stream/docs/3.1.4/reference/html/spring-cloud-stream.html#_sending_arbitrary_data_to_an_output_e_g_foreign_event_driven_sources
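A sketch of that approach; the binding name process-out-0 is an assumption and must match your output binding configuration:

@Bean
ApplicationRunner sender(StreamBridge streamBridge) {
    // sends a single keyed record on demand, instead of polling a Supplier
    return args -> streamBridge.send("process-out-0",
            MessageBuilder.withPayload("foo")
                    .setHeader(KafkaHeaders.MESSAGE_KEY, "bar".getBytes())
                    .build());
}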

How to configure Spring cloud stream (kafka) to use protobuf as serialization

I am using Spring Cloud Stream (Kafka) to exchange messages between producer and consumer microservices.
It exchanges data with native Java serialization. As per the Spring Cloud documentation, it supports JSON and Avro serialization.
Has anyone tried protobuf serialization (a message converter) in Spring Cloud Stream?
EDIT
I wrote this MessageConverter:
public class ProtobufMessageConverter<T extends AbstractMessage> extends AbstractMessageConverter
{
    private Parser<T> parser;

    public ProtobufMessageConverter(Parser<T> parser)
    {
        super(new MimeType("application", "protobuf"));
        this.parser = parser;
    }

    @Override
    protected boolean supports(Class<?> clazz)
    {
        if (clazz != null)
        {
            return EquipmentProto.Equipment.class.isAssignableFrom(clazz);
        }
        return true;
    }

    @Override
    public Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint)
    {
        if (!(message.getPayload() instanceof byte[]))
        {
            return null;
        }
        try
        {
            // return EquipmentProto.Equipment.parseFrom((byte[]) message.getPayload());
            return parser.parseFrom((byte[]) message.getPayload());
        }
        catch (Exception e)
        {
            this.logger.error(e.getMessage(), e);
        }
        return null;
    }

    @Override
    protected Object convertToInternal(Object payload, MessageHeaders headers, Object conversionHint)
    {
        return ((AbstractMessage) payload).toByteArray();
    }
}
It's really not a question of trying but rather just doing it, since converters are a natural extension mechanism (inherited from Spring Integration) in Spring Cloud Stream that exists specifically to address these concerns. So yes, you can add your own custom converter.
Also, keep in mind that with Kafka there is also the concept of native serde (serialization/deserialization performed by Kafka itself), so you need to make sure the two do not conflict.
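As a sketch, registering the converter above could look like this (the parser() accessor on the generated class is an assumption; on older Spring Cloud Stream versions the bean may also need the @StreamMessageConverter annotation):

@Bean
public MessageConverter equipmentMessageConverter() {
    // assumes the generated protobuf class exposes a static parser()
    return new ProtobufMessageConverter<>(EquipmentProto.Equipment.parser());
}

with a matching content type on the binding so this converter is selected, e.g. spring.cloud.stream.bindings.<binding-name>.content-type=application/protobuf.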

Spring-Boot MQTT Configuration

I have a requirement to send a payload to a lot of devices whose names are picked from a database. I then have to send to a different topic per device, named like settings/{put devicename here}.
Below is the configuration I was using, which I got from the Spring Boot reference documents.
MQTTConfiguration.java
@Configuration
@IntegrationComponentScan
public class MQTTConfiguration {

    @Autowired
    private Settings settings;

    @Autowired
    private DevMqttMessageListener messageListener;

    @Bean
    MqttPahoClientFactory mqttClientFactory() {
        DefaultMqttPahoClientFactory clientFactory = new DefaultMqttPahoClientFactory();
        clientFactory.setServerURIs(settings.getMqttBrokerUrl());
        clientFactory.setUserName(settings.getMqttBrokerUser());
        clientFactory.setPassword(settings.getMqttBrokerPassword());
        return clientFactory;
    }

    @Bean
    MessageChannel mqttOutboundChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "mqttOutboundChannel")
    public MessageHandler mqttOutbound() {
        MqttPahoMessageHandler messageHandler = new MqttPahoMessageHandler("dev-client-outbound",
                mqttClientFactory());
        messageHandler.setAsync(true);
        messageHandler.setDefaultTopic(settings.getMqttPublishTopic());
        return messageHandler;
    }

    @MessagingGateway(defaultRequestChannel = "mqttOutboundChannel")
    public interface DeviceGateway {

        void sendToMqtt(String payload);

    }

}
Here, I am sending to only one topic, so I added the bean below to send to any number of topics:
@Bean
public MqttClient mqttClient() throws MqttException {
    MqttClient mqttClient = new MqttClient(settings.getMqttBrokerUrl(), "dev-client-outbound");
    MqttConnectOptions connOptions = new MqttConnectOptions();
    connOptions.setUserName(settings.getMqttBrokerUser());
    connOptions.setPassword(settings.getMqttBrokerPassword().toCharArray());
    mqttClient.connect(connOptions);
    return mqttClient;
}
and I send using:
try {
    mqttClient.publish(settings.getMqttPublishTopic() + device.getName(), mqttMessage);
} catch (MqttException e) {
    LOGGER.error("Error While Sending Mqtt Messages", e);
}
This works. But my question is: can I achieve the same using the output channel, for better performance? If yes, any help is greatly appreciated. Thank you.
MqttClient is synchronous.
The MqttPahoMessageHandler uses an MqttAsyncClient and can be configured (set async to true) not to wait for the confirmation, but to publish the confirmation later as an application event.
If you are using your own code and sending multiple messages in a loop, it will probably be faster to use an async client and wait for the IMqttDeliveryToken completions later.
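Also, with the existing MqttPahoMessageHandler the topic can be set per message via the mqtt_topic header (MqttHeaders.TOPIC), which overrides the default topic, so the output channel can serve all devices. A sketch reusing the gateway from the question (the two-argument signature is an assumption):

@MessagingGateway(defaultRequestChannel = "mqttOutboundChannel")
public interface DeviceGateway {

    // the un-annotated parameter becomes the payload; the header sets the topic
    void sendToMqtt(@Header(MqttHeaders.TOPIC) String topic, String payload);

}

Each send, e.g. deviceGateway.sendToMqtt(settings.getMqttPublishTopic() + device.getName(), payload), then goes to the per-device topic through the same asynchronous handler.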

How to read x-death header of a RabbitMQ dead-lettered message using Spring Boot?

I am trying to implement re-routing of dead-lettered messages as described in this answer. I am using Spring config. I have no idea how to read the headers to get the original routing key and original queue. The following is my config:
@Configuration
public class NotifEngineRabbitMQConfig {

    @Bean
    public MessageHandler handler() {
        return new MessageHandler();
    }

    @Bean
    public Jackson2JsonMessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(handler(), messageConverter());
    }

    /**
     * Listens for incoming messages.
     * Allows listening to multiple queues.
     */
    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.addQueueNames(QUEUE_TO_LISTEN_TO.split(","));
        container.setMessageListener(messageListenerAdapter());
        container.setConnectionFactory(rabbitConnectionFactory());
        container.setDefaultRequeueRejected(false);
        return container;
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory(HOST);
        factory.setUsername(USERNAME);
        factory.setPassword(PASSWORD);
        return factory;
    }

}
The headers are not available using "old" style POJO messaging (with a MessageListenerAdapter). You need to implement MessageListener, which gives you access to the headers.
However, you will need to invoke the converter yourself in that case and, if you are using request/reply messaging, you lose the reply mechanism within the adapter and you have to send the reply yourself.
Alternatively, you can use a custom message converter and "enhance" the converted object with the header after invoking the standard converter.
Consider instead using the newer style POJO messaging with @RabbitListener - it gives you access to the headers and has request/reply capability.
Here's an example:
@SpringBootApplication
public class So37581560Application {

    public static void main(String[] args) {
        SpringApplication.run(So37581560Application.class, args);
    }

    @Bean
    public FooListener fooListener() {
        return new FooListener();
    }

    public static class FooListener {

        @RabbitListener(queues = "foo")
        public void pojoListener(String body,
                @Header(required = false, name = "x-death") List<String> xDeath) {
            System.out.println(body + ":" + (xDeath == null ? "" : xDeath));
        }

    }

}
Result:
Foo:[{reason=expired, count=1, exchange=, time=Thu Jun 02 08:44:19 EDT 2016, routing-keys=[bar], queue=bar}]
Gary's answer is the right one. Just a little detail: the type of xDeath is better declared as List<Map<String, ?>> instead of List<String>. Then you can access any field with something like xDeath.get(0).get("count").
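A minimal variant of the listener above with that signature (a sketch; only the header type changes):

@RabbitListener(queues = "foo")
public void pojoListener(String body,
        @Header(required = false, name = "x-death") List<Map<String, ?>> xDeath) {
    // each entry describes one dead-lettering event: count, reason, queue, exchange, routing-keys, time
    System.out.println(body + ":" + (xDeath == null ? "" : xDeath));
}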
