"For an upload InputStream with no MD5 digest metadata, the markSupported() method must evaluate to true." in Spring Integration AWS - spring

UPDATE: There is a bug in spring-integration-aws-2.3.4.
I am integrating SFTP (SftpStreamingMessageSource) as the source with S3 as the destination.
I have a Spring Integration configuration similar to this:
@Bean
public S3MessageHandler.UploadMetadataProvider uploadMetadataProvider() {
return (metadata, message) -> {
if ( message.getPayload() instanceof DigestInputStream) {
metadata.setContentType( MediaType.APPLICATION_JSON_VALUE );
// cannot read the stream here to manually compute the MD5
// metadata.setContentMD5("BLABLA==");
// this is the wrong approach: metadata.setContentMD5(BinaryUtils.toBase64(((DigestInputStream) message.getPayload()).getMessageDigest().digest()));
}
};
}
@Bean
@InboundChannelAdapter(channel = "ftpStream")
public MessageSource<InputStream> ftpSource(SftpRemoteFileTemplate template) {
SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template);
messageSource.setRemoteDirectory("foo");
messageSource.setFilter(new AcceptAllFileListFilter<>());
messageSource.setMaxFetchSize(1);
messageSource.setLoggingEnabled(true);
messageSource.setCountsEnabled(true);
return messageSource;
}
...
@Bean
@ServiceActivator(inputChannel = "ftpStream")
public MessageHandler s3MessageHandler(AmazonS3 amazonS3, S3MessageHandler.UploadMetadataProvider uploadMetadataProvider) {
S3MessageHandler messageHandler = new S3MessageHandler(amazonS3, "bucketName");
messageHandler.setLoggingEnabled(true);
messageHandler.setCountsEnabled(true);
messageHandler.setCommand(S3MessageHandler.Command.UPLOAD);
messageHandler.setUploadMetadataProvider(uploadMetadataProvider);
messageHandler.setKeyExpression(new ValueExpression<>("key"));
return messageHandler;
}
After startup, I am getting the following error:
"For an upload InputStream with no MD5 digest metadata, the markSupported() method must evaluate to true."
This is because ftpSource is producing an InputStream payload without mark/reset support. I even tried to transform the InputStream to a BufferedInputStream using a @Transformer, e.g.:
return new BufferedInputStream((InputStream) message.getPayload());
with no success, because then I get "java.io.IOException: Stream closed", since S3MessageHandler:338 calls Md5Utils.md5AsBase64(inputStream), which closes the stream too early.
How can I generate the MD5 for all messages in Spring Integration AWS without pain?
I am using spring-integration-aws-2.3.4.RELEASE.

The S3MessageHandler does this:
if (payload instanceof InputStream) {
InputStream inputStream = (InputStream) payload;
if (metadata.getContentMD5() == null) {
Assert.state(inputStream.markSupported(),
"For an upload InputStream with no MD5 digest metadata, "
+ "the markSupported() method must evaluate to true.");
String contentMd5 = Md5Utils.md5AsBase64(inputStream);
metadata.setContentMD5(contentMd5);
inputStream.reset();
}
putObjectRequest = new PutObjectRequest(bucketName, key, inputStream, metadata);
}
That Md5Utils.md5AsBase64() closes the InputStream at the end, which is bad for us.
This is an omission on our side. Please raise a GH issue and we will fix it ASAP, or feel free to provide a contribution.
As a workaround, I would suggest adding a transformer in front of this S3MessageHandler with code like:
return org.springframework.util.StreamUtils.copyToByteArray(inputStream);
This way you will already have a byte[] payload for the S3MessageHandler, which will use a different branch for processing:
else if (payload instanceof byte[]) {
byte[] payloadBytes = (byte[]) payload;
InputStream inputStream = new ByteArrayInputStream(payloadBytes);
if (metadata.getContentMD5() == null) {
String contentMd5 = Md5Utils.md5AsBase64(inputStream);
metadata.setContentMD5(contentMd5);
inputStream.reset();
}
if (metadata.getContentLength() == 0) {
metadata.setContentLength(payloadBytes.length);
}
putObjectRequest = new PutObjectRequest(bucketName, key, inputStream, metadata);
}
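For completeness, here is a minimal sketch of such a transformer, assuming the streaming adapter publishes to ftpStream and the S3MessageHandler is moved to listen on a separate channel (the channel name s3Channel and the method name are illustrative, not taken from the question):
@Transformer(inputChannel = "ftpStream", outputChannel = "s3Channel")
public byte[] sftpStreamToBytes(InputStream payload) throws IOException {
    // read the remote file fully into memory, so the S3MessageHandler receives a byte[]
    // and can compute the MD5 from a resettable ByteArrayInputStream
    return StreamUtils.copyToByteArray(payload);
}
Alternatively, the framework's built-in org.springframework.integration.transformer.StreamTransformer performs the same InputStream-to-byte[] conversion.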

Related

ListenerExecutionFailedException Nullpointer when trying to index kafka payload through new ElasticSearch Java API Client

I'm migrating from the HLRC to the new client. Things were smooth, but for some reason I cannot index a specific class/document. Here is my client implementation and index request:
@Configuration
public class ClientConfiguration {
@Autowired
private InternalProperties conf;
public ElasticsearchClient sslClient(){
CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
new UsernamePasswordCredentials(conf.getElasticsearchUser(), conf.getElasticsearchPassword()));
HttpHost httpHost = new HttpHost(conf.getElasticsearchAddress(), conf.getElasticsearchPort(), "https");
RestClientBuilder restClientBuilder = RestClient.builder(httpHost);
try {
SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(null, (x509Certificates, s) -> true).build();
restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
@Override
public HttpAsyncClientBuilder customizeHttpClient(HttpAsyncClientBuilder httpClientBuilder) {
return httpClientBuilder.setSSLContext(sslContext)
.setDefaultCredentialsProvider(credentialsProvider);
}
});
} catch (Exception e) {
e.printStackTrace();
}
RestClient restClient=restClientBuilder.build();
ElasticsearchTransport transport = new RestClientTransport(
restClient, new JacksonJsonpMapper());
ElasticsearchClient client = new ElasticsearchClient(transport);
return client;
}
}
@Service
public class ThisDtoIndexClass extends ConfigAndProperties{
public ThisDtoIndexClass() {
}
//client is declared in the class it's extending from
public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
this.client = esClient.sslClient();
}
@KafkaListener(topics = "esTopic")
public void in(@Payload(required = false) customDto doc)
throws ThisDtoIndexClassException, ElasticsearchException, IOException {
if(doc!= null && doc.getId() != null) {
IndexRequest.Builder<customDto > indexReqBuilder = new IndexRequest.Builder<>();
indexReqBuilder.index("index-for-this-Dto");
indexReqBuilder.id(doc.getId());
indexReqBuilder.document(doc);
IndexResponse response = client.index(indexReqBuilder.build());
} else {
throw new ThisDtoIndexClassException("document is null");
}
}
}
This is all done in Spring Boot (v2.6.8) with ES 7.17.3. According to the debugger, the payload is NOT null! It even fetches the id correctly while stepping through. For some reason, it throws an org.springframework.kafka.listener.ListenerExecutionFailedException on the last line (during the .build()?). Nothing gets indexed, but the response comes back 200. I'm lost on where I should be looking. I have a different class that also writes to a different index, also getting a payload from Kafka directly (all separate consumers). That one functions just fine.
I suspect it has something to do with the way my client is set up and/or the Kafka setup. Please point me in the right direction.
I solved it by deleting the default constructor. If I put it back, it takes precedence over the other constructor (or Spring simply never calls the other constructor), so my client was always null. The error message it gave me was extremely misleading, since it actually wasn't Kafka's fault!
Removing the default constructor makes Spring use the correct constructor, and I was able to index again. I assume this was a Spring Boot loading-related "issue".
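To make the fix concrete, here is a minimal sketch of the resulting class (same names as above, with the @KafkaListener method left unchanged):
@Service
public class ThisDtoIndexClass extends ConfigAndProperties {
    // no default constructor anymore: Spring has to use this one for injection,
    // so the inherited "client" field is actually initialized
    public ThisDtoIndexClass(@Autowired ClientConfiguration esClient) {
        this.client = esClient.sslClient();
    }
    // ... @KafkaListener method as shown in the question
}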

Nifi Custom Processor errors with a "ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75"

*** Update: I have changed my approach as described in my answer below, which makes the originally reported issue moot. ***
I'm trying to develop a NiFi application that provides a WebSocket interface to Kafka. I could not accomplish this using the standard NiFi components, as I have tried below (it may not make sense, but intuitively this is what I want to accomplish):
I have now created a custom processor "ReadFromKafka" that I intend to use as shown in the image below. "ReadFromKafka" would use the same implementation as the standard "PutWebSocket" component, but would read messages from a Kafka topic and send them as a response to the WebSocket client.
I have provided a code snippet of the implementation below:
@SystemResourceConsideration(resource = SystemResource.MEMORY)
public class ReadFromKafka extends AbstractProcessor {
public static final PropertyDescriptor PROP_WS_SESSION_ID = new PropertyDescriptor.Builder()
.name("websocket-session-id")
.displayName("WebSocket Session Id")
.description("A NiFi Expression to retrieve the session id. If not specified, a message will be " +
"sent to all connected WebSocket peers for the WebSocket controller service endpoint.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_SESSION_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ID = new PropertyDescriptor.Builder()
.name("websocket-controller-service-id")
.displayName("WebSocket ControllerService Id")
.description("A NiFi Expression to retrieve the id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_CS_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ENDPOINT = new PropertyDescriptor.Builder()
.name("websocket-endpoint-id")
.displayName("WebSocket Endpoint Id")
.description("A NiFi Expression to retrieve the endpoint id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_ENDPOINT_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_MESSAGE_TYPE = new PropertyDescriptor.Builder()
.name("websocket-message-type")
.displayName("WebSocket Message Type")
.description("The type of message content: TEXT or BINARY")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.defaultValue(WebSocketMessage.Type.TEXT.toString())
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.build();
public static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("FlowFiles that are sent successfully to the destination are transferred to this relationship.")
.build();
public static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("FlowFiles that failed to send to the destination are transferred to this relationship.")
.build();
private static final List<PropertyDescriptor> descriptors;
private static final Set<Relationship> relationships;
static{
final List<PropertyDescriptor> innerDescriptorsList = new ArrayList<>();
innerDescriptorsList.add(PROP_WS_SESSION_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ENDPOINT);
innerDescriptorsList.add(PROP_WS_MESSAGE_TYPE);
descriptors = Collections.unmodifiableList(innerDescriptorsList);
final Set<Relationship> innerRelationshipsSet = new HashSet<>();
innerRelationshipsSet.add(REL_SUCCESS);
innerRelationshipsSet.add(REL_FAILURE);
relationships = Collections.unmodifiableSet(innerRelationshipsSet);
}
@Override
public Set<Relationship> getRelationships() {
return relationships;
}
@Override
public final List<PropertyDescriptor> getSupportedPropertyDescriptors() {
return descriptors;
}
@Override
public void onTrigger(final ProcessContext context, final ProcessSession processSession) throws ProcessException {
final FlowFile flowfile = processSession.get();
if (flowfile == null) {
return;
}
final String sessionId = context.getProperty(PROP_WS_SESSION_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceId = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceEndpoint = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ENDPOINT)
.evaluateAttributeExpressions(flowfile).getValue();
final String messageTypeStr = context.getProperty(PROP_WS_MESSAGE_TYPE)
.evaluateAttributeExpressions(flowfile).getValue();
final WebSocketMessage.Type messageType = WebSocketMessage.Type.valueOf(messageTypeStr);
if (StringUtils.isEmpty(sessionId)) {
getLogger().debug("Specific SessionID not specified. Message will be broadcast to all connected clients.");
}
if (StringUtils.isEmpty(webSocketServiceId)
|| StringUtils.isEmpty(webSocketServiceEndpoint)) {
transferToFailure(processSession, flowfile, "Required WebSocket attribute was not found.");
return;
}
final ControllerService controllerService = context.getControllerServiceLookup().getControllerService(webSocketServiceId);
if (controllerService == null) {
getLogger().debug("ControllerService is NULL");
transferToFailure(processSession, flowfile, "WebSocket ControllerService was not found.");
return;
} else if (!(controllerService instanceof WebSocketService)) {
getLogger().debug("ControllerService is not instance of WebSocketService");
transferToFailure(processSession, flowfile, "The ControllerService found was not a WebSocket ControllerService but a "
+ controllerService.getClass().getName());
return;
}
...
processSession.getProvenanceReporter().send(updatedFlowFile, transitUri.get(), transmissionMillis);
processSession.transfer(updatedFlowFile, REL_SUCCESS);
processSession.commit();
} catch (WebSocketConfigurationException|IllegalStateException|IOException e) {
// WebSocketConfigurationException: If the corresponding WebSocketGatewayProcessor has been stopped.
// IllegalStateException: Session is already closed or not found.
// IOException: other IO error.
getLogger().error("Failed to send message via WebSocket due to " + e, e);
transferToFailure(processSession, flowfile, e.toString());
}
}
private FlowFile transferToFailure(final ProcessSession processSession, FlowFile flowfile, final String value) {
flowfile = processSession.putAttribute(flowfile, ATTR_WS_FAILURE_DETAIL, value);
processSession.transfer(flowfile, REL_FAILURE);
return flowfile;
}
}
I have deployed the custom processor and when I connect to it using the Chrome "Simple Web Socket Client" I can see the following message in the logs:
ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75
I'm using the exact same code as in PutWebSocket and can't figure out why it would behave any differently when I use my custom processor. I have configured "JettyWebSocketServer" as the ControllerService under "ListenWebSocket" as shown in the image below.
Additional exception details seen in the log are provided below:
java.lang.ClassCastException: class com.sun.proxy.$Proxy75 cannot be cast to class org.apache.nifi.websocket.WebSocketService (com.sun.proxy.$Proxy75 is in unnamed module of loader org.apache.nifi.nar.InstanceClassLoader @35c646b5; org.apache.nifi.websocket.WebSocketService is in unnamed module of loader org.apache.nifi.nar.NarClassLoader @361abd01)
I ended up modifying my flow to utilize the out-of-the-box ListenWebSocket and PutWebSocket processors, plus a custom "FetchFromKafka" processor that is a modified version of ConsumeKafkaRecord. With this I'm able to provide a WebSocket interface to Kafka. I have provided a screenshot of the updated flow below. More work needs to be done on the custom processor to support multiple sessions.

Extra characters are added to a video file downloaded via a Spring @RestController

I am downloading a file via a Spring @RestController, returning the bytes of the video in the OutputStream of the ResponseEntity object.
@GetMapping(value = "/video/{id}/data")
@ResponseBody
public ResponseEntity<OutputStream> getVideoData(@PathVariable("id") Long id, HttpServletResponse response) {
try {
for (Video video : videos) {
if (id == video.getId()) {
if (videoDataMgr == null) {
videoDataMgr = VideoFileManager.get();
}
videoDataMgr.copyVideoData(video, response.getOutputStream());
return new ResponseEntity<OutputStream>(response.getOutputStream(), HttpStatus.OK);
}
}
response.sendError(HttpServletResponse.SC_NOT_FOUND);
} catch (IOException ioe) {
logger.log (Level.WARNING, ioe.toString());
}
return null;
}
If I play the video being served and the downloaded video, they both show the same content. However, the following test fails when comparing the contents of the two MP4 files byte by byte:
@Test
public void testAddVideoData() throws Exception {
Video received = videoSvc.addVideo(video);
VideoStatus status = videoSvc.setVideoData(received.getId(),
new TypedFile(received.getContentType(), testVideoData));
assertEquals(VideoState.READY, status.getState());
Response response = videoSvc.getData(received.getId());
assertEquals(200, response.getStatus());
InputStream videoData = response.getBody().in();
byte[] originalFile = IOUtils.toByteArray(new FileInputStream(testVideoData));
byte[] retrievedFile = IOUtils.toByteArray(videoData);
assertTrue(Arrays.equals(originalFile, retrievedFile));
}
Comparing the files, I can see that the videos are almost the same except for some characters at the end (the content of the downloaded file is on the right):
Where is {"ready": false} coming from?

How can I put the port and host in a property file in Spring?

I have this URL:
private static final String PRODUCTS_URL = "http://localhost:3007/catalog/products/";
And these methods:
public JSONObject getProductByIdFromMicroservice(String id) throws IOException, JSONException {
return getProductsFromProductMicroservice(PRODUCTS_URL + id);
}
public JSONObject getProductsFromProductMicroservice(String url) throws IOException, JSONException {
CloseableHttpClient productClient = HttpClients.createDefault();
HttpGet getProducts = new HttpGet(url);
CloseableHttpResponse microserviceResponse = productClient.execute(getProducts);
HttpEntity entity = microserviceResponse.getEntity();
BufferedReader br = new BufferedReader(new InputStreamReader((entity.getContent())));
StringBuilder sb = new StringBuilder();
String line;
while ((line = br.readLine()) != null) {
sb.append(line);
}
br.close();
System.out.println(sb.toString());
JSONObject obj = new JSONObject(sb.toString());
System.out.println(obj);
return obj;
}
I want to put the port and host in a separate property file. I have already seen examples using .properties and .yml files, but I do not understand how my methods will then use the port and host from the properties file when creating an instance of the class. Can you tell me?
You can put your properties in a properties file in the resources directory, for example:
PRODUCTS_URL=http://localhost:3007/catalog/products/
and add @PropertySource("YOUR_RESOURCE_FILE_HERE.properties") to your main class (Application.java):
@SpringBootApplication
@PropertySource("products.properties")
public class Application {...}
and then use @Value("${YOUR_PROPERTY_NAME}") to load it:
@Value("${PRODUCTS_URL}")
private String PRODUCTS_URL;
Check this tutorial
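Applied to the code in the question, a minimal sketch could look like this (the class name ProductClient and the field name productsUrl are illustrative):
@Service
public class ProductClient {

    @Value("${PRODUCTS_URL}")
    private String productsUrl; // injected from products.properties

    public JSONObject getProductByIdFromMicroservice(String id) throws IOException, JSONException {
        // the hard-coded constant is replaced by the injected property
        return getProductsFromProductMicroservice(productsUrl + id);
    }

    // getProductsFromProductMicroservice(...) stays exactly as in the question
}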
This is how I do it:
CONFIG FILE
#Database Server Properties
dbUrl=jdbc:sqlserver://localhost:1433;database=Something;
dbUser=sa
dbPassword=SomePassword
Then I annotate a config class with this:
#PropertySource("file:${ENV_VARIABLE_TO_PATH}/config.properties")
Then autowire this field:
@Autowired
private Environment environment;
Then create the data source:
@Bean
public DataSource dataSource()
{
HikariDataSource dataSource = new HikariDataSource();
try
{
dataSource.setDriverClassName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
dataSource.setConnectionTestQuery("SELECT 1");
dataSource.setMaximumPoolSize(100);
String dbUrl = environment.getProperty("dbUrl");
if (dbUrl != null)
{
dataSource.setJdbcUrl(dbUrl);
}
else
{
throw new PropertyNotFoundException("The dbUrl property is missing from the config file!");
}
String dbUser = environment.getProperty("dbUser");
if (dbUser != null)
{
dataSource.setUsername(dbUser);
}
else
{
throw new PropertyNotFoundException("The dbUser property is missing from the config file!");
}
String dbPassword = environment.getProperty("dbPassword");
if (dbPassword != null)
{
dataSource.setPassword(dbPassword);
}
else
{
throw new PropertyNotFoundException("The dbPassword property is missing from the config file!");
}
logger.debug("Successfully initialized datasource");
}
catch (PropertyNotFoundException ex)
{
logger.fatal("Error initializing datasource : " + ex.getMessage());
}
return dataSource;
}
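Put together, the pieces above would sit in a single configuration class, roughly like this (the class name is illustrative, and the dataSource() body is the one shown above):
@Configuration
@PropertySource("file:${ENV_VARIABLE_TO_PATH}/config.properties")
public class DataSourceConfig {

    @Autowired
    private Environment environment;

    @Bean
    public DataSource dataSource() {
        HikariDataSource dataSource = new HikariDataSource();
        // configure the pool from environment.getProperty("dbUrl"), "dbUser" and
        // "dbPassword", exactly as in the method body shown above
        return dataSource;
    }
}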
I know this is not exactly your scenario, but perhaps you can find inspiration in this code to suit your specific needs.
Other answers here mention using the @PropertySource annotation to specify the path of config files. If this is test code (unit/integration), you can also make use of another annotation: @TestPropertySource.
With it, we can define configuration sources that have higher precedence than any other source used in the project.
See here: https://www.baeldung.com/spring-test-property-source
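For example, a minimal sketch of a test class using it (the property value is just the URL from the question; the test class name is illustrative):
@SpringBootTest
@TestPropertySource(properties = "PRODUCTS_URL=http://localhost:3007/catalog/products/")
class ProductUrlPropertyTest {

    @Value("${PRODUCTS_URL}")
    private String productsUrl;

    @Test
    void productsUrlIsOverriddenForTests() {
        // the value declared in @TestPropertySource wins over application.properties
        assertEquals("http://localhost:3007/catalog/products/", productsUrl);
    }
}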

Spring Integration: how to access the returned values from last Subscriber

I'm trying to implement an SFTP file upload of two files which has to happen in a certain order: first a PDF file, and after successful upload of that, a text file with meta information about the PDF.
I followed the advice in this thread, but can't get it to work properly.
My Spring Boot Configuration:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
final DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
final Properties jschProps = new Properties();
jschProps.put("StrictHostKeyChecking", "no");
jschProps.put("PreferredAuthentications", "publickey,password");
factory.setSessionConfig(jschProps);
factory.setHost(sftpHost);
factory.setPort(sftpPort);
factory.setUser(sftpUser);
if (sftpPrivateKey != null) {
factory.setPrivateKey(sftpPrivateKey);
factory.setPrivateKeyPassphrase(sftpPrivateKeyPassphrase);
} else {
factory.setPassword(sftpPasword);
}
factory.setAllowUnknownKeys(true);
return new CachingSessionFactory<>(factory);
}
@Bean
@BridgeTo
public MessageChannel toSftpChannel() {
return new PublishSubscribeChannel();
}
@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
@Order(0)
public MessageHandler handler() {
final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
handler.setFileNameGenerator(message -> {
if (message.getPayload() instanceof byte[]) {
return (String) message.getHeaders().get("filename");
} else {
throw new IllegalArgumentException("File expected as payload.");
}
});
return handler;
}
@ServiceActivator(inputChannel = "toSftpChannel")
@Order(1)
public String transferComplete(@Payload byte[] file, @Header("filename") String filename) {
return "The SFTP transfer complete for file: " + filename;
}
@MessagingGateway
public interface UploadGateway {
@Gateway(requestChannel = "toSftpChannel")
String upload(@Payload byte[] file, @Header("filename") String filename);
}
My Test Case:
final String pdfStatus = uploadGateway.upload(content, documentName);
log.info("Upload of {} completed, {}.", documentName, pdfStatus);
From the return of the gateway upload call I expect to get the String confirming the upload, e.g. "The SFTP transfer complete for file: ...", but instead I get the content of the uploaded file back as a byte[]:
Upload of 123456789.1.pdf completed, 37,80,68,70,45,49,46,54,13,37,-30,-29,-49,-45,13,10,50,55,53,32,48,32,111,98,106,13,60,60,47,76,105,110,101,97,114,105,122,101,100,32,49,47,76,32,50,53,52,55,49,48,47,79,32,50,55,55,47,69,32,49,49,49,55,55,55,47,78,32,49,47,84,32,50,53,52,51,53,57,47,72,32,91,32,49,49,57,55,32,53,51,55,93,62,62,13,101,110,100,111,98,106,13,32,32,32,32,32,32,32,32,32,32,32,32,13,10,52,55,49,32,48,32,111,98,106,13,60,60,47,68,101,99,111,100,101,80,97,114,109,115,60,60,47,67,111,108,117,109,110,115,32,53,47,80,114,101,100,105,99,116,111,114,32,49,50,62,62,47,70,105,108,116,101,114,47,70,108,97,116,101,68,101,99,111,100,101,47,73,68,91,60,57,66,53,49,56,54,69,70,53,66,56,66,49,50,52,49,65,56,50,49,55,50,54,56,65,65,54,52,65,57,70,54,62,60,68,52,50,68,51,55,54,53,54,65,67,48,55,54,52,65,65,53,52,66,52,57,51,50,56,52,56,68,66 etc.
What am I missing?
I think @Order(0) doesn't work together with the @Bean.
To fix it, you should do this in that bean definition instead:
final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
handler.setOrder(0);
See the Reference Manual for more info:
When using these annotations on consumer @Bean definitions, if the bean definition returns an appropriate MessageHandler (depending on the annotation type), attributes such as outputChannel, requiresReply etc., must be set on the MessageHandler @Bean definition itself.
In other words: if you can use a setter, you have to. We don't process annotations for this case because there is no guarantee which should take precedence. So, to avoid such confusion, we have left you only the setters choice.
UPDATE
I see your problem and it is here:
@Bean
@BridgeTo
public MessageChannel toSftpChannel() {
return new PublishSubscribeChannel();
}
That is confirmed by the logs:
Adding {bridge:dmsSftpConfig.toSftpChannel.bridgeTo} as a subscriber to the 'toSftpChannel' channel
Channel 'org.springframework.context.support.GenericApplicationContext@b3d0f7.toSftpChannel' has 3 subscriber(s).
started dmsSftpConfig.toSftpChannel.bridgeTo
So, you really have one more subscriber on that toSftpChannel, and it is a BridgeHandler with an output to the replyChannel header. With a default order like private volatile int order = Ordered.LOWEST_PRECEDENCE;, this one ends up being the subscriber that produces the reply, and exactly this one returns you that byte[], just because it is the payload of the request.
You need to decide if you really need such a bridge. There is no workaround for the @Order though...
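If the bridge is not really needed, a minimal sketch of the adjusted configuration (based on the snippets above; the reply to the gateway then comes from the @Order(1) transferComplete() service activator, which stays as it is, as does the UploadGateway):
@Bean
public MessageChannel toSftpChannel() {
    // plain pub-sub channel, no @BridgeTo, so there is no extra BridgeHandler
    // subscriber producing the gateway reply from the request payload
    return new PublishSubscribeChannel();
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    // order set via the setter instead of @Order, as explained above
    handler.setOrder(0);
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
    handler.setFileNameGenerator(message -> (String) message.getHeaders().get("filename"));
    return handler;
}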
