MessageBodyWriter not found for StreamingResponseBody - ResponseEntity

I am trying to make StreamingResponseBody work with sample hardcoded data.
@POST
@Path("filetypecsv")
@Produces("text/plain")
public ResponseEntity<StreamingResponseBody> studentsFile() {
    String name = "name";
    String rollNo = "rollNo";
    StreamingResponseBody stream = output -> {
        Writer writer = new BufferedWriter(new OutputStreamWriter(output));
        writer.write("name,rollNo" + "\n");
        for (int i = 1; i <= 1000; i++) {
            writer.write(name + i + " ," + rollNo + i + "\n");
            writer.flush();
        }
    };
    return ResponseEntity.ok()
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=students.csv")
            .contentType(org.springframework.http.MediaType.TEXT_PLAIN)
            .body(stream);
}
I am always getting this error:
SEVERE: MessageBodyWriter not found for media type=text/plain, type=class org.springframework.http.ResponseEntity, genericType=org.springframework.http.ResponseEntity<StreamingResponseBody>.
I have added the jersey-media-json-jackson dependency, but I am still getting this error. Please advise.

This solution applies if your code uses JAX-RS (javax.ws.rs.core) and not a Spring @RestController. I have not seen a solution that uses Spring's StreamingResponseBody together with JAX-RS.
Instead, you can use the JAX-RS StreamingOutput: return a JAX-RS Response with MediaType.TEXT_PLAIN or an equivalent such as an octet stream.
Please see this link - https://dzone.com/articles/jax-rs-streaming-response
StreamingOutput stream = new StreamingOutput() {
    @Override
    public void write(OutputStream os) throws IOException, WebApplicationException {
        Writer writer = new BufferedWriter(new OutputStreamWriter(os));
        for (org.neo4j.graphdb.Path path : paths) {
            writer.write(path.toString() + "\n");
        }
        writer.flush();
    }
};
return Response.ok(stream).build();
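Adapting the questioner's CSV example to StreamingOutput, a complete JAX-RS resource method might look like the following minimal sketch (the resource class name and header values are illustrative):

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.io.Writer;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;

@Path("filetypecsv")
public class StudentsResource {

    @POST
    @Produces(MediaType.TEXT_PLAIN)
    public Response studentsFile() {
        // Write rows to the container-provided stream instead of
        // buffering the whole CSV in memory.
        StreamingOutput stream = output -> {
            Writer writer = new BufferedWriter(new OutputStreamWriter(output));
            writer.write("name,rollNo\n");
            for (int i = 1; i <= 1000; i++) {
                writer.write("name" + i + ",rollNo" + i + "\n");
            }
            writer.flush();
        };
        return Response.ok(stream)
                .header("Content-Disposition", "attachment; filename=students.csv")
                .build();
    }
}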

Related

Spring: No "Access-Control-Allow-Origin" after repackaging

I did a simple repackaging, and that caused an Access-Control-Allow-Origin issue with my S3 cloud.
I have a local S3-compatible server to store videos and, using Spring, I am streaming the videos directly from my local cloud.
Everything worked as expected until I tried to repackage my classes.
I had one package, com.example.video, with the following classes:
S3Config.java: contains the AmazonS3Client
User.java: a model class
VideoController.java: a simple controller
VideoStreamingServiceApplication.java: the application class
When I created a new package, com.example.s3, and moved both User.java and S3Config.java into it, I had an autowiring issue, and that was fixed by using component scan as this answer suggested (sketched below).
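A minimal sketch of that component-scan fix, assuming a Spring Boot application class and the two package names above:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
// com.example.s3 is a sibling package, so it is not covered by the
// default scan rooted at com.example.video and must be listed explicitly.
@ComponentScan(basePackages = {"com.example.video", "com.example.s3"})
public class VideoStreamingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(VideoStreamingServiceApplication.class, args);
    }
}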
Even after the autowiring issue was fixed, I am getting an error when I try to stream:
Access to XMLHttpRequest at 'http://localhost:9999/recordings/a.m3u8' from origin 'null' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Although I do set that header on my response. Here is my VideoController.java:
@RestController
@RequestMapping("/cloud")
@ConfigurationProperties(prefix = "amazon.credentials")
public class VideoController {

    @Autowired
    private S3Config s3Client;
    private String bucketName = "recordings";
    Logger log = LoggerFactory.getLogger(VideoController.class);

    @Autowired
    User userData;

    @GetMapping(value = "/recordings/{fileName}", produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
    public ResponseEntity<StreamingResponseBody> streamVideo(HttpServletRequest request, @PathVariable String fileName) {
        try {
            long rangeStart = 0;
            long rangeEnd;
            AmazonS3 s3client = s3Client.getAmazonS3Client();
            String uri = request.getRequestURI();
            System.out.println("Fetching " + uri);
            S3Object object = s3client.getObject("recordings", fileName);
            long size = object.getObjectMetadata().getContentLength();
            S3ObjectInputStream finalObject = object.getObjectContent();
            final StreamingResponseBody body = outputStream -> {
                int numberOfBytesToWrite = 0;
                byte[] data = new byte[(int) size];
                while ((numberOfBytesToWrite = finalObject.read(data, 0, data.length)) != -1) {
                    outputStream.write(data, 0, numberOfBytesToWrite);
                }
                finalObject.close();
            };
            rangeEnd = size - 1;
            return ResponseEntity.status(HttpStatus.OK)
                    .header("Content-Type", "application/vnd.apple.mpegurl")
                    .header("Accept-Ranges", "bytes")
                    // HERE IS THE ACCESS CONTROL ALLOW ORIGIN
                    .header("Access-Control-Allow-Origin", "*")
                    .header("Content-Length", String.valueOf(size))
                    .header("display", "staticcontent_sol, staticcontent_sol")
                    .header("Content-Range", "bytes" + " " + rangeStart + "-" + rangeEnd + "/" + size)
                    .body(body);
            //return new ResponseEntity<StreamingResponseBody>(body, HttpStatus.OK);
        } catch (Exception e) {
            System.err.println("Error " + e.getMessage());
            return new ResponseEntity<StreamingResponseBody>(HttpStatus.BAD_REQUEST);
        }
    }
}
If I restore the classes to one package, as it was before, everything works fine.
MY QUESTION: Why did repackaging cause this issue, and any idea how to fix it?
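One way to make the Access-Control-Allow-Origin header independent of any single controller is a global CORS configuration. A minimal sketch, assuming Spring Web MVC; the path pattern and origins are illustrative, and this is not a confirmed fix for the repackaging issue:

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        // Registers CORS mappings globally instead of relying on a
        // per-response header set inside one controller method.
        registry.addMapping("/cloud/**")
                .allowedOrigins("*")
                .allowedMethods("GET");
    }
}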

"For an upload InputStream with no MD5 digest metadata, the markSupported() method must evaluate to true." in Spring Integration AWS

UPDATE: There is a bug in spring-integration-aws-2.3.4.
I am integrating SFTP (SftpStreamingMessageSource) as the source with S3 as the destination.
I have a Spring Integration configuration similar to this:
@Bean
public S3MessageHandler.UploadMetadataProvider uploadMetadataProvider() {
    return (metadata, message) -> {
        if (message.getPayload() instanceof DigestInputStream) {
            metadata.setContentType(MediaType.APPLICATION_JSON_VALUE);
            // can not read stream to manually compute MD5
            // metadata.setContentMD5("BLABLA==");
            // this is wrong approach: metadata.setContentMD5(BinaryUtils.toBase64((((DigestInputStream) message.getPayload()).getMessageDigest().digest()));
        }
    };
}
@Bean
@InboundChannelAdapter(channel = "ftpStream")
public MessageSource<InputStream> ftpSource(SftpRemoteFileTemplate template) {
    SftpStreamingMessageSource messageSource = new SftpStreamingMessageSource(template);
    messageSource.setRemoteDirectory("foo");
    messageSource.setFilter(new AcceptAllFileListFilter<>());
    messageSource.setMaxFetchSize(1);
    messageSource.setLoggingEnabled(true);
    messageSource.setCountsEnabled(true);
    return messageSource;
}
...
@Bean
@ServiceActivator(inputChannel = "ftpStream")
public MessageHandler s3MessageHandler(AmazonS3 amazonS3, S3MessageHandler.UploadMetadataProvider uploadMetadataProvider) {
    S3MessageHandler messageHandler = new S3MessageHandler(amazonS3, "bucketName");
    messageHandler.setLoggingEnabled(true);
    messageHandler.setCountsEnabled(true);
    messageHandler.setCommand(S3MessageHandler.Command.UPLOAD);
    messageHandler.setUploadMetadataProvider(uploadMetadataProvider);
    messageHandler.setKeyExpression(new ValueExpression<>("key"));
    return messageHandler;
}
After starting, I am getting the following error:
"For an upload InputStream with no MD5 digest metadata, the markSupported() method must evaluate to true."
This is because ftpSource produces an InputStream payload without mark/reset support. I even tried to transform the InputStream to a BufferedInputStream using a @Transformer, e.g.:
return new BufferedInputStream((InputStream) message.getPayload());
with no success, because then I get "java.io.IOException: Stream closed": S3MessageHandler line 338 calls Md5Utils.md5AsBase64(inputStream), which closes the stream too early.
How can I generate MD5 for all messages in Spring Integration AWS without pain?
I am using spring-integration-aws-2.3.4.RELEASE.
The S3MessageHandler does this:
if (payload instanceof InputStream) {
    InputStream inputStream = (InputStream) payload;
    if (metadata.getContentMD5() == null) {
        Assert.state(inputStream.markSupported(),
                "For an upload InputStream with no MD5 digest metadata, "
                        + "the markSupported() method must evaluate to true.");
        String contentMd5 = Md5Utils.md5AsBase64(inputStream);
        metadata.setContentMD5(contentMd5);
        inputStream.reset();
    }
    putObjectRequest = new PutObjectRequest(bucketName, key, inputStream, metadata);
}
That Md5Utils.md5AsBase64() closes the InputStream at the end - bad for us.
This is an omission on our side. Please raise a GH issue and we will fix it ASAP. Or feel free to provide a contribution.
As a workaround, I would suggest a transformer upstream of this S3MessageHandler with code like:
return org.springframework.util.StreamUtils.copyToByteArray(inputStream);
This way you will already have a byte[] payload for the S3MessageHandler, which will use a different branch for processing:
else if (payload instanceof byte[]) {
    byte[] payloadBytes = (byte[]) payload;
    InputStream inputStream = new ByteArrayInputStream(payloadBytes);
    if (metadata.getContentMD5() == null) {
        String contentMd5 = Md5Utils.md5AsBase64(inputStream);
        metadata.setContentMD5(contentMd5);
        inputStream.reset();
    }
    if (metadata.getContentLength() == 0) {
        metadata.setContentLength(payloadBytes.length);
    }
    putObjectRequest = new PutObjectRequest(bucketName, key, inputStream, metadata);
}
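For completeness, a minimal sketch of such a transformer as a component. The channel names are illustrative: with this in place, the @ServiceActivator above would listen on the transformer's output channel instead of ftpStream.

import java.io.IOException;
import java.io.InputStream;
import org.springframework.integration.annotation.Transformer;
import org.springframework.stereotype.Component;
import org.springframework.util.StreamUtils;

@Component
public class StreamToBytesTransformer {

    @Transformer(inputChannel = "ftpStream", outputChannel = "s3Channel")
    public byte[] toBytes(InputStream payload) throws IOException {
        // Buffering the remote stream into a byte[] lets S3MessageHandler
        // compute the MD5 from a resettable ByteArrayInputStream.
        return StreamUtils.copyToByteArray(payload);
    }
}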

Spring AMQP, CorrelationId and GZipPostProcessor: UnsupportedEncodingException

I have a project with Spring AMQP (1.7.12.RELEASE).
If I set a value for the correlationId field (getMessageProperties().setCorrelationId(...)) and I use GZipPostProcessor, the following error always occurs:
"org.springframework.amqp.AmqpUnsupportedEncodingException: java.io.UnsupportedEncodingException: gzip"
To solve it, the following code seems to work:
DefaultMessagePropertiesConverter messageConverter = new DefaultMessagePropertiesConverter();
messageConverter.setCorrelationIdAsString(DefaultMessagePropertiesConverter.CorrelationIdPolicy.STRING);
template.setMessagePropertiesConverter(messageConverter);
but I do not know what implications using it will have in production with clients that do not use Spring AMQP (I set this field when the incoming message carries it).
Here is a complete code example:
@Configuration
public class SimpleProducerGZIP
{
    static final String queueName = "spring-boot";

    @Bean
    public CachingConnectionFactory connectionFactory() {
        com.rabbitmq.client.ConnectionFactory factory = new com.rabbitmq.client.ConnectionFactory();
        factory.setHost("localhost");
        factory.setAutomaticRecoveryEnabled(false);
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory(factory);
        return connectionFactory;
    }

    @Bean
    public AmqpAdmin amqpAdmin() {
        RabbitAdmin rabbitAdmin = new RabbitAdmin(connectionFactory());
        rabbitAdmin.setAutoStartup(true);
        return rabbitAdmin;
    }

    @Bean
    Queue queue() {
        Queue qr = new Queue(queueName, false);
        qr.setAdminsThatShouldDeclare(amqpAdmin());
        return qr;
    }

    @Bean
    public RabbitTemplate rabbitTemplate()
    {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setEncoding("gzip");
        template.setBeforePublishPostProcessors(new GZipPostProcessor());
        // TODO :
        DefaultMessagePropertiesConverter messageConverter = new DefaultMessagePropertiesConverter();
        messageConverter.setCorrelationIdAsString(DefaultMessagePropertiesConverter.CorrelationIdPolicy.STRING);
        template.setMessagePropertiesConverter(messageConverter);
        return template;
    }

    public static void main(String[] args)
    {
        @SuppressWarnings("resource")
        ApplicationContext context = new AnnotationConfigApplicationContext(SimpleProducerGZIP.class);
        RabbitTemplate _rabbitTemplate = context.getBean(RabbitTemplate.class);
        int contador = 0;
        try {
            while (true)
            {
                contador = contador + 1;
                int _nContador = contador;
                System.out.println("\nInicio envio : " + _nContador);
                Object _o = new String(("New Message : " + contador));
                try
                {
                    _rabbitTemplate.convertAndSend(queueName, _o,
                            new MessagePostProcessor() {
                                @SuppressWarnings("deprecation")
                                @Override
                                public Message postProcessMessage(Message msg) throws AmqpException {
                                    if (_nContador % 2 == 0) {
                                        System.out.println("\t--- msg.getMessageProperties().setCorrelationId ");
                                        msg.getMessageProperties().setCorrelationId("NewCorrelation".getBytes(StandardCharsets.UTF_8));
                                    }
                                    return msg;
                                }
                            });
                    System.out.println("\tOK");
                } catch (Exception e) {
                    System.err.println("\t\tError en envio : " + contador + " - " + e.getMessage());
                }
                System.out.println("Fin envio : " + contador);
                Thread.sleep(500);
            }
        } catch (Exception e) {
            System.err.println("Exception : " + e.getMessage());
        }
    }
}
The question is: if I change the configuration of the rabbitTemplate so that the error does not happen, can that have implications for clients, whether they use Spring AMQP or other libraries?
--- EDIT (28/03/2019)
This is the complete stack trace with the code:
org.springframework.amqp.AmqpUnsupportedEncodingException: java.io.UnsupportedEncodingException: gzip
at org.springframework.amqp.rabbit.support.DefaultMessagePropertiesConverter.fromMessageProperties(DefaultMessagePropertiesConverter.java:211)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doSend(RabbitTemplate.java:1531)
at org.springframework.amqp.rabbit.core.RabbitTemplate$3.doInRabbit(RabbitTemplate.java:716)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:1455)
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:1411)
at org.springframework.amqp.rabbit.core.RabbitTemplate.send(RabbitTemplate.java:712)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:813)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:791)
at es.jab.example.SimpleProducerGZIP.main(SimpleProducerGZIP.java:79)
Caused by: java.io.UnsupportedEncodingException: gzip
at java.lang.StringCoding.decode(Unknown Source)
at java.lang.String.<init>(Unknown Source)
at java.lang.String.<init>(Unknown Source)
at org.springframework.amqp.rabbit.support.DefaultMessagePropertiesConverter.fromMessageProperties(DefaultMessagePropertiesConverter.java:208)
... 8 more
I'd be interested to see the complete stack trace for more information about the problem.
This code was part of a transition from a byte[] correlation id to a String; it was needed to avoid a byte[]/String/byte[] conversion.
When the policy is STRING, you should use the correlationIdString property instead of correlationId; otherwise the correlation id won't be mapped into outbound messages (we don't look at correlationId in that case). For inbound messages, the policy controls which property is populated.
In 2.0 and later, correlationId is a String instead of a byte[], so this setting is no longer needed.
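For example, under the STRING policy the message post-processor from the question would populate the string property instead (a sketch against the 1.7.x API):

// With CorrelationIdPolicy.STRING, set correlationIdString rather than
// the byte[] correlationId property.
msg.getMessageProperties().setCorrelationIdString("NewCorrelation");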
EDIT
Now I see the stack trace, this...
template.setEncoding("gzip");
...is wrong.
/**
 * The encoding to use when inter-converting between byte arrays and Strings in message properties.
 *
 * @param encoding the encoding to set
 */
public void setEncoding(String encoding) {
    this.encoding = encoding;
}
There is no such Charset as gzip. This property has nothing to do with the message content; it is simply used when converting byte[] to/from String in message properties, and it is UTF-8 by default.
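Applied to the question's configuration, a minimal corrected sketch of the template bean (assuming, per the GZipPostProcessor contract, that the post-processor sets the message's content-encoding itself):

@Bean
public RabbitTemplate rabbitTemplate() {
    RabbitTemplate template = new RabbitTemplate(connectionFactory());
    // Do not call template.setEncoding("gzip") here: that property is the
    // charset for byte[]/String property conversion, not a content encoding.
    template.setBeforePublishPostProcessors(new GZipPostProcessor());
    DefaultMessagePropertiesConverter messageConverter = new DefaultMessagePropertiesConverter();
    messageConverter.setCorrelationIdAsString(DefaultMessagePropertiesConverter.CorrelationIdPolicy.STRING);
    template.setMessagePropertiesConverter(messageConverter);
    return template;
}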

Veracode CWE ID 611

I have a piece of code with a Veracode finding for Improper Restriction of XML External Entity Reference ('XXE'), CWE ID 611.
Code:
Transformer transformer = TransformerFactory.newInstance().newTransformer();
StreamResult result = new StreamResult(new StringWriter());
DOMSource source = new DOMSource(node);
transformer.transform(source, result); //CWE ID 611, impacted line.
I used
transformer.setOutputProperty(XMLConstants.ACCESS_EXTERNAL_DTD, "");
transformer.setOutputProperty(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, "");
but no luck.
The issue got resolved by setting the attributes on the TransformerFactory rather than as output properties on the Transformer:
TransformerFactory transformerFactory = TransformerFactory.newInstance();
transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, "");
StreamResult result = new StreamResult(new StringWriter());
DOMSource source = new DOMSource(node);
transformerFactory.newTransformer().transform(source, result);
It is advisable to wrap the calls in a try-catch block:
try {
    transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
    transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, "");
} catch (IllegalArgumentException e) {
    // JAXP 1.5 feature not supported
}
Please note, for anyone running the application on JDK 5 or older, that you will not have these XMLConstants available:
transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
transformerFactory.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, "");
Instead, you will have to parse into a Document using a secured DocumentBuilder and then use a DOMSource in your Transformer:
private static void example(String xmlDocument, Result result) throws ParserConfigurationException, IOException, SAXException, TransformerException {
    DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
    DocumentBuilder db = dbf.newDocumentBuilder();
    db.setEntityResolver(new EntityResolver() {
        public InputSource resolveEntity(String s, String s1) throws SAXException, IOException {
            // Resolve every external entity to an empty stream, defusing XXE.
            return new InputSource(new StringReader(""));
        }
    });
    Document doc = db.parse(new InputSource(new StringReader(xmlDocument)));
    DOMSource domSource = new DOMSource(doc);
    Transformer transformer = TransformerFactory.newInstance().newTransformer();
    transformer.transform(domSource, result);
}

How to transfer *.pgp files using SFTP spring Integration

We are developing a generic automated application that downloads *.pgp files from an SFTP server.
The application works fine with *.txt files, but when we try to pull *.pgp files we get the exception below.
2016-03-18 17:45:45 INFO jsch:52 - SSH_MSG_SERVICE_REQUEST sent
2016-03-18 17:45:46 INFO jsch:52 - SSH_MSG_SERVICE_ACCEPT received
2016-03-18 17:45:46 INFO jsch:52 - Next authentication method: publickey
2016-03-18 17:45:48 INFO jsch:52 - Authentication succeeded (publickey).
sftpSession org.springframework.integration.sftp.session.SftpSession#37831f
files size158
java.io.IOException: inputstream is closed
at com.jcraft.jsch.ChannelSftp.fill(ChannelSftp.java:2884)
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2908)
at com.jcraft.jsch.ChannelSftp.access$500(ChannelSftp.java:36)
at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1390)
at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1340)
at org.springframework.util.StreamUtils.copy(StreamUtils.java:126)
at org.springframework.util.FileCopyUtils.copy(FileCopyUtils.java:109)
at org.springframework.integration.sftp.session.SftpSession.read(SftpSession.java:129)
at com.sftp.test.SFTPTest.main(SFTPTest.java:49)
Java code:
public class SFTPTest {

    public static void main(String[] args) {
        ApplicationContext applicationContext = new ClassPathXmlApplicationContext("beans.xml");
        DefaultSftpSessionFactory defaultSftpSessionFactory = applicationContext.getBean("defaultSftpSessionFactory", DefaultSftpSessionFactory.class);
        System.out.println(defaultSftpSessionFactory);
        SftpSession sftpSession = defaultSftpSessionFactory.getSession();
        System.out.println("sftpSession " + sftpSession);
        String remoteDirectory = "/";
        String localDirectory = "C:/312421/temp/";
        OutputStream outputStream = null;
        List<String> fileAtSFTPList = new ArrayList<String>();
        try {
            String[] fileNames = sftpSession.listNames(remoteDirectory);
            for (String fileName : fileNames) {
                boolean isMatch = fileCheckingAtSFTPWithPattern(fileName);
                if (isMatch) {
                    fileAtSFTPList.add(fileName);
                }
            }
            System.out.println("files size" + fileAtSFTPList.size());
            for (String fileName : fileAtSFTPList) {
                File file = new File(localDirectory + fileName);
                /*InputStream ipstream = sftpSession.readRaw(fileName);
                FileUtils.writeByteArrayToFile(file, IOUtils.toByteArray(ipstream));
                ipstream.close();*/
                outputStream = new FileOutputStream(file);
                sftpSession.read(remoteDirectory + fileName, outputStream);
                outputStream.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (outputStream != null) {
                    outputStream.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static boolean fileCheckingAtSFTPWithPattern(String fileName) {
        Pattern pattern = Pattern.compile(".*\\.pgp$");
        Matcher matcher = pattern.matcher(fileName);
        return matcher.find();
    }
}
Please suggest how to sort out this issue.
Thanks
The file type is irrelevant to Spring Integration - it looks like the server is closing the connection while reading the preamble, before the data is fetched...
at com.jcraft.jsch.ChannelSftp.header(ChannelSftp.java:2908)
at com.jcraft.jsch.ChannelSftp.access$500(ChannelSftp.java:36)
at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1390)
at com.jcraft.jsch.ChannelSftp$2.read(ChannelSftp.java:1340)
The data itself is not read until later (line 1442 in ChannelSftp).
So it looks like a server-side problem.
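If you want to rule out the output-stream path on the client, a minimal sketch of the raw-stream variant that the question left commented out (readRaw() must be paired with finalizeRaw() on the same session):

// Sketch only: readRaw() hands back the channel's InputStream directly;
// finalizeRaw() must be called afterwards to complete the transfer.
InputStream raw = sftpSession.readRaw(remoteDirectory + fileName);
java.nio.file.Files.copy(raw, new File(localDirectory + fileName).toPath(),
        java.nio.file.StandardCopyOption.REPLACE_EXISTING);
sftpSession.finalizeRaw();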
