Nifi Custom Processor errors with a "ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75" - websocket

*** Update: I have changed my approach, as described in my answer below, which makes the originally reported issue moot. ***
I'm trying to develop a NiFi application that provides a WebSocket interface to Kafka. I could not accomplish this using the standard NiFi components in the flow I tried below (it may not make sense, but intuitively this is what I want to accomplish):
I have now created a custom processor "ReadFromKafka" that I intend to use as shown in the image below. "ReadFromKafka" would use the same implementation as the standard "PutWebSocket" component, but would read messages from a Kafka topic and send them as responses to the WebSocket client.
I have provided a code snippet of the implementation below:
@SystemResourceConsideration(resource = SystemResource.MEMORY)
public class ReadFromKafka extends AbstractProcessor {
public static final PropertyDescriptor PROP_WS_SESSION_ID = new PropertyDescriptor.Builder()
.name("websocket-session-id")
.displayName("WebSocket Session Id")
.description("A NiFi Expression to retrieve the session id. If not specified, a message will be " +
"sent to all connected WebSocket peers for the WebSocket controller service endpoint.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_SESSION_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ID = new PropertyDescriptor.Builder()
.name("websocket-controller-service-id")
.displayName("WebSocket ControllerService Id")
.description("A NiFi Expression to retrieve the id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_CS_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ENDPOINT = new PropertyDescriptor.Builder()
.name("websocket-endpoint-id")
.displayName("WebSocket Endpoint Id")
.description("A NiFi Expression to retrieve the endpoint id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_ENDPOINT_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_MESSAGE_TYPE = new PropertyDescriptor.Builder()
.name("websocket-message-type")
.displayName("WebSocket Message Type")
.description("The type of message content: TEXT or BINARY")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.defaultValue(WebSocketMessage.Type.TEXT.toString())
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.build();
public static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("FlowFiles that are sent successfully to the destination are transferred to this relationship.")
.build();
public static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("FlowFiles that failed to send to the destination are transferred to this relationship.")
.build();
private static final List<PropertyDescriptor> descriptors;
private static final Set<Relationship> relationships;
static{
final List<PropertyDescriptor> innerDescriptorsList = new ArrayList<>();
innerDescriptorsList.add(PROP_WS_SESSION_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ENDPOINT);
innerDescriptorsList.add(PROP_WS_MESSAGE_TYPE);
descriptors = Collections.unmodifiableList(innerDescriptorsList);
final Set<Relationship> innerRelationshipsSet = new HashSet<>();
innerRelationshipsSet.add(REL_SUCCESS);
innerRelationshipsSet.add(REL_FAILURE);
relationships = Collections.unmodifiableSet(innerRelationshipsSet);
}
@Override
public Set<Relationship> getRelationships() {
return relationships;
}
@Override
public final List<PropertyDescriptor> getSupportedPropertyDescriptors() {
return descriptors;
}
@Override
public void onTrigger(final ProcessContext context, final ProcessSession processSession) throws ProcessException {
final FlowFile flowfile = processSession.get();
if (flowfile == null) {
return;
}
final String sessionId = context.getProperty(PROP_WS_SESSION_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceId = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceEndpoint = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ENDPOINT)
.evaluateAttributeExpressions(flowfile).getValue();
final String messageTypeStr = context.getProperty(PROP_WS_MESSAGE_TYPE)
.evaluateAttributeExpressions(flowfile).getValue();
final WebSocketMessage.Type messageType = WebSocketMessage.Type.valueOf(messageTypeStr);
if (StringUtils.isEmpty(sessionId)) {
getLogger().debug("Specific SessionID not specified. Message will be broadcast to all connected clients.");
}
if (StringUtils.isEmpty(webSocketServiceId)
|| StringUtils.isEmpty(webSocketServiceEndpoint)) {
transferToFailure(processSession, flowfile, "Required WebSocket attribute was not found.");
return;
}
final ControllerService controllerService = context.getControllerServiceLookup().getControllerService(webSocketServiceId);
if (controllerService == null) {
getLogger().debug("ControllerService is NULL");
transferToFailure(processSession, flowfile, "WebSocket ControllerService was not found.");
return;
} else if (!(controllerService instanceof WebSocketService)) {
getLogger().debug("ControllerService is not instance of WebSocketService");
transferToFailure(processSession, flowfile, "The ControllerService found was not a WebSocket ControllerService but a "
+ controllerService.getClass().getName());
return;
}
...
processSession.getProvenanceReporter().send(updatedFlowFile, transitUri.get(), transmissionMillis);
processSession.transfer(updatedFlowFile, REL_SUCCESS);
processSession.commit();
} catch (WebSocketConfigurationException|IllegalStateException|IOException e) {
// WebSocketConfigurationException: If the corresponding WebSocketGatewayProcessor has been stopped.
// IllegalStateException: Session is already closed or not found.
// IOException: other IO error.
getLogger().error("Failed to send message via WebSocket due to " + e, e);
transferToFailure(processSession, flowfile, e.toString());
}
}
private FlowFile transferToFailure(final ProcessSession processSession, FlowFile flowfile, final String value) {
flowfile = processSession.putAttribute(flowfile, ATTR_WS_FAILURE_DETAIL, value);
processSession.transfer(flowfile, REL_FAILURE);
return flowfile;
}
}
I have deployed the custom processor and when I connect to it using the Chrome "Simple Web Socket Client" I can see the following message in the logs:
ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75
I'm using the exact same code as in PutWebSocket and can't figure out why it would behave any differently in my custom processor. I have configured "JettyWebSocketServer" as the ControllerService under "ListenWebSocket", as shown in the image below.
Additional exception details seen in the log are provided below:
java.lang.ClassCastException: class com.sun.proxy.$Proxy75 cannot be cast to class org.apache.nifi.websocket.WebSocketService (com.sun.proxy.$Proxy75 is in unnamed module of loader org.apache.nifi.nar.InstanceClassLoader @35c646b5; org.apache.nifi.websocket.WebSocketService is in unnamed module of loader org.apache.nifi.nar.NarClassLoader @361abd01)

I ended up modifying my flow to use the out-of-the-box ListenWebSocket and PutWebSocket processors, plus a custom "FetchFromKafka" processor that is a modified version of ConsumeKafkaRecord. With this I'm able to provide a WebSocket interface to Kafka. I have provided a screenshot of the updated flow below. More work needs to be done on the custom processor to support multiple sessions.
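A rough sketch of the core of such a "FetchFromKafka" onTrigger is shown below. The names are illustrative only (the real processor is a modified ConsumeKafkaRecord); it assumes a KafkaConsumer created in @OnScheduled and reuses the same ATTR_WS_* attribute constants that PutWebSocket evaluates for routing:
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) throws ProcessException {
    // Assumptions: "consumer" is a KafkaConsumer<String, String> created in @OnScheduled;
    // wsServiceId, wsEndpointId and wsSessionId come from processor properties.
    final ConsumerRecords<String, String> polled = consumer.poll(Duration.ofMillis(500));
    for (final ConsumerRecord<String, String> record : polled) {
        FlowFile flowFile = session.create();
        // The Kafka record value becomes the FlowFile content that PutWebSocket will send.
        flowFile = session.write(flowFile, out -> out.write(record.value().getBytes(StandardCharsets.UTF_8)));
        // Attributes PutWebSocket uses to pick the controller service, endpoint and session.
        flowFile = session.putAttribute(flowFile, ATTR_WS_CS_ID, wsServiceId);
        flowFile = session.putAttribute(flowFile, ATTR_WS_ENDPOINT_ID, wsEndpointId);
        flowFile = session.putAttribute(flowFile, ATTR_WS_SESSION_ID, wsSessionId);
        session.transfer(flowFile, REL_SUCCESS);
    }
}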

Related

ListenerExecutionFailedException Nullpointer when trying to index kafka payload through new ElasticSearch Java API Client

I'm migrating from the HLRC to the new client; things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration{
@Autowired
private InternalProperties conf;
public ElasticsearchClient sslClient(){
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
try {
SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setSSLContext(sslContext)
.setDefaultCredentialsProvider(credentialsProvider);
}
});
} catch (Exception e) {
e.printStackTrace();
}
RestClient restClient=restClientBuilder.build();
ElasticsearchTransport transport = new RestClientTransport(
restClient, new JacksonJsonpMapper());
ElasticsearchClient client = new ElasticsearchClient(transport);
return client;
}
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties{
public ThisDtoIndexClass() {
}
//client is declared in the class it's extending from
public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
this.client = esClient.sslClient();
}
@KafkaListener(topics = "esTopic")
public void in(@Payload(required = false) customDto doc)
throws ThisDtoIndexClassException, ElasticsearchException, IOException {
if(doc!= null && doc.getId() != null) {
IndexRequest.Builder<customDto > indexReqBuilder = new IndexRequest.Builder<>();
indexReqBuilder.index("index-for-this-Dto");
indexReqBuilder.id(doc.getId());
indexReqBuilder.document(doc);
IndexResponse response = client.index(indexReqBuilder.build());
} else {
throw new ThisDtoIndexClassException("document is null");
}
}
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client and/or the Kafka listener are set up. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it overrides the extended constructor (or Spring simply never calls the extended constructor), so my client was always null. The error message was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring initialize the class through the correct constructor, and I was able to index again. I assume this was a Spring Boot bean-loading "issue".
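For reference, a minimal sketch of the fixed class with only the autowired constructor (assuming, as in the snippet above, that the inherited client field comes from ConfigAndProperties):
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {
    // No default constructor any more, so Spring has to use this one and "client" is always set.
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }
    // ... the @KafkaListener method stays unchanged ...
}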

creating Opentelemetry Context using trace-id and span-id of remote parent

I have a microservice which supports OpenTracing and injects the trace-id and span-id into the headers. Another microservice supports OpenTelemetry. How can I create a parent span using the trace-id and span-id in the second microservice?
Thanks,
You can use the W3C Trace Context specification to achieve this. We need to send a traceparent header (e.g. 00-8652a752089f33e2659dff28d683a18f-7359b90f4355cfd9-01) from the producer via HTTP headers (or you can create it from the trace-id and span-id in the consumer). Then we can extract the remote context and create the span with that parent.
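If only the raw trace-id and span-id are available rather than a full traceparent header, a small sketch using the OpenTelemetry API to build the remote parent directly (assuming the ids are valid hex strings and the sampled flag should be kept) could be:
SpanContext remoteParent = SpanContext.createFromRemoteParent(
        traceId,                     // 32-character hex trace-id from the incoming header
        spanId,                      // 16-character hex span-id from the incoming header
        TraceFlags.getSampled(),
        TraceState.getDefault());
Span serverSpan = tracer.spanBuilder("CloudEvents Server")
        .setParent(Context.current().with(Span.wrap(remoteParent)))
        .setSpanKind(SpanKind.SERVER)
        .startSpan();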
This is the consumer controller. A TextMapGetter is used to map the traceparent data into the Context. ExtractModel is just a custom carrier class.
@GetMapping(value = "/second")
public String sencondTest(@RequestHeader(value = "traceparent") String traceparent){
try {
Tracer tracer = openTelemetry.getTracer("cloud.events.second");
TextMapGetter<ExtractModel> getter = new TextMapGetter<>() {
@Override
public String get(ExtractModel carrier, String key) {
if (carrier.getHeaders().containsKey(key)) {
return carrier.getHeaders().get(key);
}
return null;
}
@Override
public Iterable<String> keys(ExtractModel carrier) {
return carrier.getHeaders().keySet();
}
};
ExtractModel model = new ExtractModel();
model.addHeader("traceparent", traceparent);
Context extractedContext = openTelemetry.getPropagators().getTextMapPropagator()
.extract(Context.current(), model, getter);
try (Scope scope = extractedContext.makeCurrent()) {
// Automatically use the extracted SpanContext as parent.
Span serverSpan = tracer.spanBuilder("CloudEvents Server")
.setSpanKind(SpanKind.SERVER)
.startSpan();
try {
Thread.sleep(150);
} finally {
serverSpan.end();
}
}
} catch (InterruptedException e) {
throw new RuntimeException(e);
}
return "Server Received!";
}
Then, when configuring the OpenTelemetrySdk, we need to set the W3CTraceContextPropagator in the context propagators.
// Use W3C Propagator(to extract span from HTTP headers) since we use the W3C specifications
TextMapPropagator textMapPropagator = W3CTraceContextPropagator.getInstance();
OpenTelemetrySdk openTelemetrySdk = OpenTelemetrySdk.builder()
.setTracerProvider(tracerProvider)
.setPropagators(ContextPropagators.create(textMapPropagator))
.buildAndRegisterGlobal();
Here is my custom ExtractModel class:
public class ExtractModel {
private Map<String, String> headers;
public void addHeader(String key, String value) {
if (this.headers == null){
headers = new HashMap<>();
}
headers.put(key, value);
}
public Map<String, String> getHeaders() {
return headers;
}
public void setHeaders(Map<String, String> headers) {
this.headers = headers;
}
}
You can find more details in the official documentation for manual instrumentation.
Generally you have to propagate the span-id and trace-id if they are available in the headers. For any request your microservice receives, check whether the headers contain a span-id and trace-id. If they do, extract them and use them in your service.
If they are not present, create new ones, use them in your service, and also add them to any requests that go out of your microservice.
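For the outgoing side, a small sketch (assuming the OpenTelemetry instance is configured with the W3C propagator as shown earlier) that injects the current context into a header map, which can then be copied onto the outbound HTTP request:
Map<String, String> outgoingHeaders = new HashMap<>();
// Writes a "traceparent" entry (and "tracestate", if present) for the current span context.
openTelemetry.getPropagators().getTextMapPropagator()
        .inject(Context.current(), outgoingHeaders, (carrier, key, value) -> carrier.put(key, value));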

Send websocket message to user across dynos

I have a Spring Boot application running on Heroku. I use WebSockets to send messages between the client and the server for a specific user. I use Spring Boot's SimpMessagingTemplate.convertAndSendToUser to send and receive messages, which works fine when a user needs to get a message back from the server. I use Heroku session affinity, which means that even if I scale up, the user and the WebSocket still share the same session.
My problem comes when one user needs to send a message to another user. It works fine if both users share the same session, but if not, the message will not come through.
Is it possible to send a message from one user to another across different sessions using SimpMessagingTemplate, or would I need to use a message broker, e.g. Redis?
I was looking into sending a message using StringRedisTemplate, but I'm not sure how to send a message to a particular user.
private SimpMessagingTemplate messagingTemplate;
@Autowired
public MessageController(SimpMessagingTemplate messagingTemplate) {
this.messagingTemplate = messagingTemplate;
}
@MessageMapping("/secured/user-in")
public void sendToDevice(Message msg, @AuthenticationPrincipal User principal) throws Exception {
if (msg.getTo() != null) {
String email = msg.getTo();
Message out = new Message();
out.setMsg(msg.getMsg());
out.setFrom(msg.getFrom());
out.setTo(msg.getTo());
out.setSentTime(new Date());
out.setStatus(msg.getStatus());
messagingTemplate.convertAndSendToUser(email, "/secured/topic", out);
}
}
JS
function connect() {
var socket = new SockJS('/secured/user-in');
ST.stompClient = Stomp.over(socket);
var headers = {};
headers[ST.getHeader()] = ST.getToken();
ST.getStompClient().connect(headers, function (frame) {
retries = 1;
console.log('Connected: ' + frame);
ST.getStompClient().subscribe('/user/secured/topic', function (event){
var msg = JSON.parse(event.body);
showMessage(msg.msg);
});
});
}
UPDATE 1
I am guessing I could do something like this, as done here:
SimpMessageHeaderAccessor headerAccessor = SimpMessageHeaderAccessor
.create(SimpMessageType.MESSAGE);
headerAccessor.setSessionId(sessionId);
headerAccessor.setLeaveMutable(true);
messagingTemplate.convertAndSendToUser(sessionId,"/queue/something", payload,
headerAccessor.getMessageHeaders());
But how could I get the session id of another user? I am using Redis to store session info: @EnableRedisHttpSession
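A hedged sketch of one way to look up another user's session ids when Spring Session keeps them in Redis (this assumes a FindByIndexNameSessionRepository bean, which the Redis-backed repository implements, and that sessions are indexed by principal name):
@Autowired
private FindByIndexNameSessionRepository<? extends Session> sessionRepository;

public Set<String> sessionIdsForUser(String username) {
    // The keys of the returned map are the session ids indexed for that principal name.
    return sessionRepository.findByIndexNameAndIndexValue(
            FindByIndexNameSessionRepository.PRINCIPAL_NAME_INDEX_NAME, username).keySet();
}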
I had my terminology a bit mixed up: I was trying to send a message to another user on another dyno, rather than in another session.
I ended up using Redis pub/sub.
When a message is received by the controller it is published to Redis, and the Redis MessageListenerAdapter invokes the convertAndSendToUser method.
@MessageMapping("/secured/user-in")
public void sendToDevice(Message msg, @AuthenticationPrincipal User principal) throws Exception {
publishMessageToRedis(msg);
}
private void publishMessageToRedis(Message message) throws JsonProcessingException {
ObjectMapper objectMapper = new ObjectMapper();
String messageString = objectMapper.writeValueAsString(message);
stringRedisTemplate.convertAndSend("message", messageString);
}
redis config
@Bean
RedisMessageListenerContainer container( MessageListenerAdapter chatMessageListenerAdapter) throws URISyntaxException {
RedisMessageListenerContainer container = new RedisMessageListenerContainer();
container.setConnectionFactory(connectionFactory());
container.addMessageListener(chatMessageListenerAdapter, new PatternTopic("message"));
return container;
}
#Bean("chatMessageListenerAdapter")
MessageListenerAdapter chatMessageListenerAdapter(RedisReceiver redisReceiver) {
return new MessageListenerAdapter(redisReceiver, "receiveChatMessage");
}
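The container bean above calls connectionFactory(), which isn't shown here; a minimal sketch (assuming Lettuce and a Redis instance on localhost:6379) would be:
@Bean
RedisConnectionFactory connectionFactory() {
    return new LettuceConnectionFactory("localhost", 6379);
}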
public class RedisReceiver {
private static final Logger LOG = LogManager.getLogger(RedisReceiver.class);
private final WebSocketMessageService webSocketMessageService;
@Autowired
public RedisReceiver(WebSocketMessageService webSocketMessageService) {
this.webSocketMessageService = webSocketMessageService;
}
// Invoked when a message is published to the "message" channel
public void receiveChatMessage(String messageStr) throws IOException {
ObjectMapper objectMapper = new ObjectMapper();
Message message = objectMapper.readValue(messageStr, Message.class);
webSocketMessageService.sendChatMessage(message);
}
}
@Service
public class WebSocketMessageService {
private final SimpMessagingTemplate template;
private static final Logger LOG = LogManager.getLogger(WebSocketMessageService.class);
public WebSocketMessageService(SimpMessagingTemplate template) {
this.template = template;
}
public void sendChatMessage(Message message) {
template.convertAndSendToUser(message.getTo(), "/secured/topic", message);
}
}
The solution was based on this Git repository.

Kafka ConsumerRecord returns null

When trying to implement a unit test in a Spring Boot application, I can't retrieve a ConsumerRecord, even though a custom serializer using my own POJO is working. I checked it with the kafka-console-consumer, where a new message is generated and appears on the console each and every time I run the test.
What do I have to do to get the record instead of a null?
@RunWith(SpringRunner.class)
@SpringBootTest
@DisplayName("Testing GlobalMessageTest")
@DirtiesContext
public class NumberPlateSenderTest {
private static Logger log = LogManager.getLogger(NumberPlateSenderTest.class);
@Autowired
KafkaeskAdapterApplication kafkaeskAdapterApplication;
@Autowired
private NumberPlateSender numberPlateSender;
private KafkaMessageListenerContainer<String, NumberPlate> container;
private BlockingQueue<ConsumerRecord<String, NumberPlate>> records;
private static final String SENDER_TOPIC = "numberplate_test_topic";
@ClassRule
public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, SENDER_TOPIC);
@Before
public void setUp() throws Exception {
// set up the Kafka consumer properties
Map<String, Object> consumerProperties = KafkaTestUtils.consumerProps("sender", "false", embeddedKafka);
consumerProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
consumerProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, NumberPlateDeserializer.class);
// create a Kafka consumer factory
DefaultKafkaConsumerFactory<String, NumberPlate> consumerFactory =
new DefaultKafkaConsumerFactory<>(consumerProperties);
// set the topic that needs to be consumed
ContainerProperties containerProperties = new ContainerProperties(SENDER_TOPIC);
// create a Kafka MessageListenerContainer
container = new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
// create a thread safe queue to store the received message
records = new LinkedBlockingQueue<>();
// setup a Kafka message listener
container.setupMessageListener((MessageListener<String, NumberPlate>) record -> {
log.info("Message Listener received message='{}'", record.toString());
records.add(record);
});
// start the container and underlying message listener
container.start();
// wait until the container has the required number of assigned partitions
ContainerTestUtils.waitForAssignment(container, embeddedKafka.getPartitionsPerTopic());
}
#DisplayName("Should send a Message to a Producer and retrieve it")
#Test
public void TestProducer() throws InterruptedException {
//Test instance of Numberplate to send
NumberPlate localNumberplate = new NumberPlate();
byte[] bytes = "0x33".getBytes();
localNumberplate.setImageBlob(bytes);
localNumberplate.setNumberString("ABC123");
log.info(localNumberplate.toString());
//Send it
numberPlateSender.sendNumberPlateMessage(localNumberplate);
//Retrieve it
ConsumerRecord<String, NumberPlate> received = records.poll(3, TimeUnit.SECONDS);
log.info("Received the following content of ConsumerRecord: {}", received);
if (received == null) {
assert false;
} else {
NumberPlate retrNumberplate = received.value();
Assert.assertEquals(retrNumberplate, localNumberplate);
}
}
@After
public void tearDown() {
// stop the container
container.stop();
}
}
The complete code can be seen at my github repository.
I have read a load of different SO questions and searched the web, but can't figure out what is wrong with my code. Other users posted similar problems, but to no avail.
The Kafka version which runs on my Craptop is kafka_2.11-1.0.1.
The Spring Kafka client is version 2.1.5.RELEASE.
Your problem is that you start the consumer against the embedded Kafka but send the data to the real one. I don't know what your goal is, but I made it work against the embedded Kafka like this:
@BeforeClass
public static void setup() {
System.setProperty("kafka.bootstrapAddress", embeddedKafka.getBrokersAsString());
}
I override your kafka.bootstrapAddress configuration property for the producer with the broker address provided by the embedded Kafka.
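This works because Spring also resolves ${kafka.bootstrapAddress} from system properties; a typical producer configuration that would pick up the override might look like the following sketch (the bean is illustrative, and NumberPlateSerializer stands in for the custom serializer mentioned in the question):
@Value("${kafka.bootstrapAddress}")
private String bootstrapAddress;

@Bean
public ProducerFactory<String, NumberPlate> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    // Placeholder for the custom serializer of the NumberPlate POJO.
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, NumberPlateSerializer.class);
    return new DefaultKafkaProducerFactory<>(props);
}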
In this case I fail with:
java.lang.AssertionError: expected: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}> but was: dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Expected :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
Actual :dev.semo.kafkaeskadapter.models.NumberPlate<NumberPlate{numberString='ABC123', imageBlob=[48, 120, 51, 51]}>
But that's just because you use this assertion:
Assert.assertEquals(retrNumberplate, localNumberplate);
Meanwhile your NumberPlate doesn't provide a proper equals() implementation. Something like this:
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
NumberPlate that = (NumberPlate) o;
return Objects.equals(numberString, that.numberString) &&
Arrays.equals(imageBlob, that.imageBlob);
}
@Override
public int hashCode() {
int result = Objects.hash(numberString);
result = 31 * result + Arrays.hashCode(imageBlob);
return result;
}
Thank you for providing the whole project to play with and reproduce the issue! With the "question-answer-question-answer" game we would spend too much time here :-).

Spring Integration: how to access the returned values from last Subscriber

I'm trying to implement an SFTP upload of two files which has to happen in a certain order: first a PDF file, and after its successful upload, a text file with meta information about the PDF.
I followed the advice in this thread, but can't get it to work properly.
My Spring Boot Configuration:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
final DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
final Properties jschProps = new Properties();
jschProps.put("StrictHostKeyChecking", "no");
jschProps.put("PreferredAuthentications", "publickey,password");
factory.setSessionConfig(jschProps);
factory.setHost(sftpHost);
factory.setPort(sftpPort);
factory.setUser(sftpUser);
if (sftpPrivateKey != null) {
factory.setPrivateKey(sftpPrivateKey);
factory.setPrivateKeyPassphrase(sftpPrivateKeyPassphrase);
} else {
factory.setPassword(sftpPasword);
}
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(factory);
}
@Bean
@BridgeTo
public MessageChannel toSftpChannel() {
return new PublishSubscribeChannel();
}
@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
@Order(0)
public MessageHandler handler() {
final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
handler.setFileNameGenerator(message -> {
if (message.getPayload() instanceof byte[]) {
return (String) message.getHeaders().get("filename");
} else {
throw new IllegalArgumentException("File expected as payload.");
}
});
return handler;
}
@ServiceActivator(inputChannel = "toSftpChannel")
@Order(1)
public String transferComplete(@Payload byte[] file, @Header("filename") String filename) {
return "The SFTP transfer complete for file: " + filename;
}
@MessagingGateway
public interface UploadGateway {
@Gateway(requestChannel = "toSftpChannel")
String upload(@Payload byte[] file, @Header("filename") String filename);
}
My Test Case:
final String pdfStatus = uploadGateway.upload(content, documentName);
log.info("Upload of {} completed, {}.", documentName, pdfStatus);
From the return of the gateway upload call I expect to get the String confirming the upload, e.g. "The SFTP transfer complete for file: ...", but instead I get the content of the uploaded file back as a byte[]:
Upload of 123456789.1.pdf completed, 37,80,68,70,45,49,46,54,13,37,-30,-29,-49,-45,13,10,50,55,53,32,48,32,111,98,106,13,60,60,47,76,105,110,101,97,114,105,122,101,100,32,49,47,76,32,50,53,52,55,49,48,47,79,32,50,55,55,47,69,32,49,49,49,55,55,55,47,78,32,49,47,84,32,50,53,52,51,53,57,47,72,32,91,32,49,49,57,55,32,53,51,55,93,62,62,13,101,110,100,111,98,106,13,32,32,32,32,32,32,32,32,32,32,32,32,13,10,52,55,49,32,48,32,111,98,106,13,60,60,47,68,101,99,111,100,101,80,97,114,109,115,60,60,47,67,111,108,117,109,110,115,32,53,47,80,114,101,100,105,99,116,111,114,32,49,50,62,62,47,70,105,108,116,101,114,47,70,108,97,116,101,68,101,99,111,100,101,47,73,68,91,60,57,66,53,49,56,54,69,70,53,66,56,66,49,50,52,49,65,56,50,49,55,50,54,56,65,65,54,52,65,57,70,54,62,60,68,52,50,68,51,55,54,53,54,65,67,48,55,54,52,65,65,53,52,66,52,57,51,50,56,52,56,68,66 etc.
What am I missing?
I think @Order(0) doesn't work together with @Bean.
To fix it you should do this in that bean definition instead:
final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
handler.setOrder(0);
See the Reference Manual for more info:
When using these annotations on consumer @Bean definitions, if the bean definition returns an appropriate MessageHandler (depending on the annotation type), attributes such as outputChannel, requiresReply etc., must be set on the MessageHandler @Bean definition itself.
In other words: if you can use a setter, you have to. We don't process the annotations for this case because there is no guarantee which should take precedence. So, to avoid such confusion, we have left you only the setter option.
UPDATE
I see your problem and it is here:
@Bean
@BridgeTo
public MessageChannel toSftpChannel() {
return new PublishSubscribeChannel();
}
That is confirmed by the logs:
Adding {bridge:dmsSftpConfig.toSftpChannel.bridgeTo} as a subscriber to the 'toSftpChannel' channel
Channel 'org.springframework.context.support.GenericApplicationContext#b3d0f7.toSftpChannel' has 3 subscriber(s).
started dmsSftpConfig.toSftpChannel.bridgeTo
So, you really have one more subscriber on that toSftpChannel, and it is a BridgeHandler whose output goes to the replyChannel header. Its default order is private volatile int order = Ordered.LOWEST_PRECEDENCE;, so it ends up as the first subscriber, and exactly this one returns you that byte[], just because that is the payload of the request.
You need to decide if you really need such a bridge. There is no workaround for the @Order though...
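If the bridge isn't actually needed, one option (a sketch, not a tested fix) is to drop @BridgeTo so that only the two ordered @ServiceActivator subscribers remain and the gateway reply comes from transferComplete():
@Bean
public MessageChannel toSftpChannel() {
    // No @BridgeTo: the gateway reply now comes from transferComplete() instead of the bridged request payload.
    return new PublishSubscribeChannel();
}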
