Spring: No "Access-Control-Allow-Origin" after repackaging

I did a simple repackaging and that caused an Access-Control-Allow-Origin issue with my S3 cloud.
I have a local S3-compatible server to store videos and, using Spring, I am streaming the videos directly from my local cloud.
Everything was working as expected until I tried to repackage my classes.
I had one package, com.example.video, with the following classes:
S3Config.java - contains the AmazonS3Client
User.java - a model class
VideoController.java - a simple controller
VideoStreamingServiceApplication.java - the application class
When I created a new package com.example.s3 and moved both User.java and S3Config.java into it, I had an autowiring issue, and that was fixed by using component scan as this answer suggested.
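For reference, the component-scan fix looks roughly like the sketch below (the package names are the ones described above; the rest of the class is just a plain Spring Boot main class, not my exact file):

// Sketch of the component-scan fix, assuming the two packages mentioned above
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
@ComponentScan(basePackages = {"com.example.video", "com.example.s3"})
public class VideoStreamingServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(VideoStreamingServiceApplication.class, args);
    }
}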
Even after the autowiring issue was fixed, I am getting an error when I try to stream:
Access to XMLHttpRequest at 'http://localhost/9999/recordins/a.m3u8' from origin 'null' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Although I do set that header on the response. Here is my VideoController.java:
@RestController
@RequestMapping("/cloud")
@ConfigurationProperties(prefix = "amazon.credentials")
public class VideoCotroller {

    @Autowired
    private S3Config s3Client;

    private String bucketName = "recordings";

    Logger log = LoggerFactory.getLogger(VideoCotroller.class);

    @Autowired
    User userData;

    @GetMapping(value = "/recordings/{fileName}", produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE })
    public ResponseEntity<StreamingResponseBody> streamVideo(HttpServletRequest request, @PathVariable String fileName) {
        try {
            long rangeStart = 0;
            long rangeEnd;
            AmazonS3 s3client = s3Client.getAmazonS3Client();
            String uri = request.getRequestURI();
            System.out.println("Fetching " + uri);
            S3Object object = s3client.getObject("recordings", fileName);
            long size = object.getObjectMetadata().getContentLength();
            S3ObjectInputStream finalObject = object.getObjectContent();
            // Copy the S3 object content to the response output stream
            final StreamingResponseBody body = outputStream -> {
                int numberOfBytesToWrite = 0;
                byte[] data = new byte[(int) size];
                while ((numberOfBytesToWrite = finalObject.read(data, 0, data.length)) != -1) {
                    outputStream.write(data, 0, numberOfBytesToWrite);
                }
                finalObject.close();
            };
            rangeEnd = size - 1;
            return ResponseEntity.status(HttpStatus.OK)
                    .header("Content-Type", "application/vnd.apple.mpegurl")
                    .header("Accept-Ranges", "bytes")
                    // HERE IS THE ACCESS CONTROL ALLOW ORIGIN
                    .header("Access-Control-Allow-Origin", "*")
                    .header("Content-Length", String.valueOf(size))
                    .header("display", "staticcontent_sol, staticcontent_sol")
                    .header("Content-Range", "bytes" + " " + rangeStart + "-" + rangeEnd + "/" + size)
                    .body(body);
            //return new ResponseEntity<StreamingResponseBody>(body, HttpStatus.OK);
        } catch (Exception e) {
            System.err.println("Error " + e.getMessage());
            return new ResponseEntity<StreamingResponseBody>(HttpStatus.BAD_REQUEST);
        }
    }
}
If I restore everything back to one package, as it was before, everything works fine.
MY QUESTION: Why did repackaging cause this issue? Any idea how to fix it?
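One way to rule out the per-handler header while debugging this is a global CORS mapping, which is applied even when the handler returns an error response; a minimal sketch (the path pattern and allowed origins below are assumptions, not taken from the code above):

// Sketch only: a global CORS mapping so the Access-Control-Allow-Origin header
// does not depend on any single controller method.
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/cloud/**")   // hypothetical path pattern
                .allowedOrigins("*")
                .allowedMethods("GET", "HEAD", "OPTIONS");
    }
}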

Related

MessageBodyWriter not found for StreamingResponseBody

I am trying to make StreamingResponseBody work with sample hardcoded data.
@POST
@Path("filetypecsv")
@Produces("text/plain")
public ResponseEntity<StreamingResponseBody> studentsFile() {
    String name = "name";
    String rollNo = "rollNo";
    StreamingResponseBody stream = output -> {
        Writer writer = new BufferedWriter(new OutputStreamWriter(output));
        writer.write("name,rollNo" + "\n");
        for (int i = 1; i <= 1000; i++) {
            writer.write(name + i + " ," + rollNo + i + "\n");
            writer.flush();
        }
    };
    return ResponseEntity.ok()
            .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=students.csv")
            .contentType(org.springframework.http.MediaType.TEXT_PLAIN)
            .body(stream);
}
I am always getting this error:
SEVERE: MessageBodyWriter not found for media type=text/plain, type=class org.springframework.http.ResponseEntity, genericType=org.springframework.http.ResponseEntity<StreamingResponseBody>.
I have added the dependency : jersey-media-json-jackson.
But I am still getting this error, please advise.
This solution applies if your code is using JAX-RS (javax.ws.rs.core) and not Spring's @RestController. I have not seen a solution where you can use Spring's StreamingResponseBody together with JAX-RS.
Instead you can use the JAX-RS StreamingOutput. You can return a JAX-RS Response with MediaType.TEXT_PLAIN, or an equivalent such as an octet stream.
Please see this link - https://dzone.com/articles/jax-rs-streaming-response
StreamingOutput stream = new StreamingOutput() {
    @Override
    public void write(OutputStream os) throws IOException, WebApplicationException {
        Writer writer = new BufferedWriter(new OutputStreamWriter(os));
        for (org.neo4j.graphdb.Path path : paths) {
            writer.write(path.toString() + "\n");
        }
        writer.flush();
    }
};
return Response.ok(stream).build();
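Applied to the CSV example from the question, the same approach would look roughly like the sketch below (the resource class name and root path are assumptions; only the method body mirrors the original snippet):

// Sketch only: the students CSV endpoint rewritten with JAX-RS StreamingOutput
// and Response instead of Spring's StreamingResponseBody/ResponseEntity.
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.io.Writer;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.StreamingOutput;

@Path("/")
public class StudentsResource {

    @POST
    @Path("filetypecsv")
    @Produces("text/plain")
    public Response studentsFile() {
        StreamingOutput stream = output -> {
            Writer writer = new BufferedWriter(new OutputStreamWriter(output));
            writer.write("name,rollNo\n");
            for (int i = 1; i <= 1000; i++) {
                writer.write("name" + i + " ,rollNo" + i + "\n");
                writer.flush();
            }
        };
        return Response.ok(stream)
                .header("Content-Disposition", "attachment; filename=students.csv")
                .build();
    }
}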

Nifi Custom Processor errors with a "ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75"

*** Update: I have changed my approach as described in my answer to the question, due to which the original issue reported becomes moot. ***
I'm trying to develop a NiFi application that provides a WebSocket interface to Kafka. I could not accomplish this using the standard NiFi components as I have tried below (it may not make sense, but intuitively this is what I want to accomplish):
I have now created a custom Processor "ReadFromKafka" that I intend to use as shown in the image below. "ReadFromKafka" would use the same implementation as the standard "PutWebSocket" component but would read messages from a Kafka topic and send them as a response to the WebSocket client.
I have provided a code snippet of the implementation below:
@SystemResourceConsideration(resource = SystemResource.MEMORY)
public class ReadFromKafka extends AbstractProcessor {
public static final PropertyDescriptor PROP_WS_SESSION_ID = new PropertyDescriptor.Builder()
.name("websocket-session-id")
.displayName("WebSocket Session Id")
.description("A NiFi Expression to retrieve the session id. If not specified, a message will be " +
"sent to all connected WebSocket peers for the WebSocket controller service endpoint.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_SESSION_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ID = new PropertyDescriptor.Builder()
.name("websocket-controller-service-id")
.displayName("WebSocket ControllerService Id")
.description("A NiFi Expression to retrieve the id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_CS_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_CONTROLLER_SERVICE_ENDPOINT = new PropertyDescriptor.Builder()
.name("websocket-endpoint-id")
.displayName("WebSocket Endpoint Id")
.description("A NiFi Expression to retrieve the endpoint id of a WebSocket ControllerService.")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.defaultValue("${" + ATTR_WS_ENDPOINT_ID + "}")
.build();
public static final PropertyDescriptor PROP_WS_MESSAGE_TYPE = new PropertyDescriptor.Builder()
.name("websocket-message-type")
.displayName("WebSocket Message Type")
.description("The type of message content: TEXT or BINARY")
.required(true)
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.defaultValue(WebSocketMessage.Type.TEXT.toString())
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.build();
public static final Relationship REL_SUCCESS = new Relationship.Builder()
.name("success")
.description("FlowFiles that are sent successfully to the destination are transferred to this relationship.")
.build();
public static final Relationship REL_FAILURE = new Relationship.Builder()
.name("failure")
.description("FlowFiles that failed to send to the destination are transferred to this relationship.")
.build();
private static final List<PropertyDescriptor> descriptors;
private static final Set<Relationship> relationships;
static{
final List<PropertyDescriptor> innerDescriptorsList = new ArrayList<>();
innerDescriptorsList.add(PROP_WS_SESSION_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ID);
innerDescriptorsList.add(PROP_WS_CONTROLLER_SERVICE_ENDPOINT);
innerDescriptorsList.add(PROP_WS_MESSAGE_TYPE);
descriptors = Collections.unmodifiableList(innerDescriptorsList);
final Set<Relationship> innerRelationshipsSet = new HashSet<>();
innerRelationshipsSet.add(REL_SUCCESS);
innerRelationshipsSet.add(REL_FAILURE);
relationships = Collections.unmodifiableSet(innerRelationshipsSet);
}
@Override
public Set<Relationship> getRelationships() {
return relationships;
}
@Override
public final List<PropertyDescriptor> getSupportedPropertyDescriptors() {
return descriptors;
}
@Override
public void onTrigger(final ProcessContext context, final ProcessSession processSession) throws ProcessException {
final FlowFile flowfile = processSession.get();
if (flowfile == null) {
return;
}
final String sessionId = context.getProperty(PROP_WS_SESSION_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceId = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ID)
.evaluateAttributeExpressions(flowfile).getValue();
final String webSocketServiceEndpoint = context.getProperty(PROP_WS_CONTROLLER_SERVICE_ENDPOINT)
.evaluateAttributeExpressions(flowfile).getValue();
final String messageTypeStr = context.getProperty(PROP_WS_MESSAGE_TYPE)
.evaluateAttributeExpressions(flowfile).getValue();
final WebSocketMessage.Type messageType = WebSocketMessage.Type.valueOf(messageTypeStr);
if (StringUtils.isEmpty(sessionId)) {
getLogger().debug("Specific SessionID not specified. Message will be broadcast to all connected clients.");
}
if (StringUtils.isEmpty(webSocketServiceId)
|| StringUtils.isEmpty(webSocketServiceEndpoint)) {
transferToFailure(processSession, flowfile, "Required WebSocket attribute was not found.");
return;
}
final ControllerService controllerService = context.getControllerServiceLookup().getControllerService(webSocketServiceId);
if (controllerService == null) {
getLogger().debug("ControllerService is NULL");
transferToFailure(processSession, flowfile, "WebSocket ControllerService was not found.");
return;
} else if (!(controllerService instanceof WebSocketService)) {
getLogger().debug("ControllerService is not instance of WebSocketService");
transferToFailure(processSession, flowfile, "The ControllerService found was not a WebSocket ControllerService but a "
+ controllerService.getClass().getName());
return;
}
...
processSession.getProvenanceReporter().send(updatedFlowFile, transitUri.get(), transmissionMillis);
processSession.transfer(updatedFlowFile, REL_SUCCESS);
processSession.commit();
} catch (WebSocketConfigurationException|IllegalStateException|IOException e) {
// WebSocketConfigurationException: If the corresponding WebSocketGatewayProcessor has been stopped.
// IllegalStateException: Session is already closed or not found.
// IOException: other IO error.
getLogger().error("Failed to send message via WebSocket due to " + e, e);
transferToFailure(processSession, flowfile, e.toString());
}
}
private FlowFile transferToFailure(final ProcessSession processSession, FlowFile flowfile, final String value) {
flowfile = processSession.putAttribute(flowfile, ATTR_WS_FAILURE_DETAIL, value);
processSession.transfer(flowfile, REL_FAILURE);
return flowfile;
}
}
I have deployed the custom processor and when I connect to it using the Chrome "Simple Web Socket Client" I can see the following message in the logs:
ControllerService found was not a WebSocket ControllerService but a com.sun.proxy.$Proxy75
I'm using the exact same code as in PutWebSocket and can't figure out why it would behave any differently when I use my custom Processor. I have configured "JettyWebSocketServer" as the ControllerService under "ListenWebSocket" as shown in the image below.
Additional exception details seen in the log are provided below:
java.lang.ClassCastException: class com.sun.proxy.$Proxy75 cannot be cast to class org.apache.nifi.websocket.WebSocketService (com.sun.proxy.$Proxy75 is in unnamed module of loader org.apache.nifi.nar.InstanceClassLoader #35c646b5; org.apache.nifi.websocket.WebSocketService is in unnamed module of loader org.apache.nifi.nar.NarClassLoader #361abd01)
I ended up modifying my flow to use the out-of-the-box ListenWebSocket and PutWebSocket processors, plus a custom "FetchFromKafka" processor that is a modified version of ConsumeKafkaRecord. With this I'm able to provide a WebSocket interface to Kafka. I have provided a screenshot of the updated flow below. More work needs to be done on the custom processor to support multiple sessions.

Issue Spring Boot Redis Multiple Applications Consuming Data

I have two applications for which I am using Spring Boot and Redis.
From both the applications I am producing data to Redis.
ISSUE: The data produced by Spring Boot Redis Application 1 is not available to Redis Application 2, and vice versa.
Redis is running locally.
The application YAML for both applications is the same:
spring:
redis:
host: localhost
port: 6379
Model Class-
@RedisHash(timeToLive = 300, value = "alerts")
@Data
@NoArgsConstructor
@AllArgsConstructor
public class RedisModel {

    @Id
    private String id;

    private String message;

    public RedisModel(String message) {
        this.message = message;
    }

    @Override
    public String toString() {
        return "RedisModel{" +
                "id='" + id + '\'' +
                ", message='" + message + '\'' +
                '}';
    }
}
Are there some parameters that I have missed?
Please let me know in case of any queries.
Spring Boot version: 2.2.0.
I think the RedisConfiguration created by Spring Boot will namespace those cache keys with your application name, so they won't be visible to each other. To get around this, you will have to provide your own RedisConfiguration, disable the prefix via disableKeyPrefix(), and then set your own prefix with computePrefixWith(...). Here is an example:
@Bean
public RedisCacheManager cacheManager( RedisConnectionFactory redisConnectionFactory,
        ResourceLoader resourceLoader ) {
    RedisCacheManager.RedisCacheManagerBuilder builder = RedisCacheManager
            .builder( redisConnectionFactory )
            .cacheDefaults( determineConfiguration( resourceLoader.getClassLoader() ) );
    List<String> cacheNames = this.cacheProperties.getCacheNames();
    if ( !cacheNames.isEmpty() ) {
        builder.initialCacheNames( new LinkedHashSet<>( cacheNames ) );
    }
    return builder.build();
}

private RedisCacheConfiguration determineConfiguration( ClassLoader classLoader ) {
    if ( this.redisCacheConfiguration != null ) {
        return this.redisCacheConfiguration;
    }
    CacheProperties.Redis redisProperties = this.cacheProperties.getRedis();
    RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig();
    // Build the mapper ourselves because GenericJackson2JsonRedisSerializer registers some internal modules
    ObjectMapper mapper = new Jackson2ObjectMapperBuilder()
            .modulesToInstall( new SimpleModule().addSerializer( new NullValueSerializer( null ) ) )
            .failOnEmptyBeans( false )
            .build();
    mapper.enableDefaultTyping( ObjectMapper.DefaultTyping.NON_FINAL, JsonTypeInfo.As.PROPERTY );
    GenericJackson2JsonRedisSerializer serializer = new GenericJackson2JsonRedisSerializer( mapper );
    config = config.serializeValuesWith( RedisSerializationContext.SerializationPair.fromSerializer( serializer ) );
    if ( redisProperties.getTimeToLive() != null ) {
        config = config.entryTtl( redisProperties.getTimeToLive() );
    }
    if ( redisProperties.getKeyPrefix() != null ) {
        config = config.prefixKeysWith( redisProperties.getKeyPrefix() );
    }
    if ( !redisProperties.isCacheNullValues() ) {
        config = config.disableCachingNullValues();
    }
    if ( !redisProperties.isUseKeyPrefix() ) {
        config = config.disableKeyPrefix();
        config = config.computePrefixWith( cacheName -> cacheName + "::" );
    }
    return config;
}
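As a quick sanity check, it can also help to confirm which keys each application actually writes; a small sketch (it assumes a StringRedisTemplate bean, which Spring Boot auto-configures with spring-boot-starter-data-redis, and the "alerts" keyspace from the model above):

// Sketch: dump the keys under the "alerts" keyspace to verify that both
// applications are talking to the same Redis database.
import java.util.Set;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.StringRedisTemplate;

@Configuration
public class AlertKeyDumper {

    @Bean
    CommandLineRunner dumpAlertKeys(StringRedisTemplate redisTemplate) {
        return args -> {
            Set<String> keys = redisTemplate.keys("alerts*"); // KEYS is fine for debugging only
            System.out.println("alerts keys visible to this application: " + keys);
        };
    }
}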
Actually, Redis is not very suitable for such a situation. A better solution would be to use Infinispan.

How to upload an image or a video to a persistent folder on the classpath with Spring Boot?

I am new to Spring Boot...
I want to upload images or videos and store them in a persistent folder "upload-storage" on the classpath of my project on the server. I don't want to store them in the database (20 MB).
Spring Boot stores them in target/upload-storage.
That works: I can show the videos in the view with the controller and Thymeleaf. I can close Tomcat, close the browser, and open them again: that still works.
But the day after, upload-storage has disappeared!
I think I am not using the right approach.
I found how to upload an image: ok. I found how to show images from a folder on the classpath: ok. I found how to upload images to the database. But I found nothing on storing the uploaded images in a persistent folder.
Can you help me? Can you tell me the right approach?
Some details:
I have an entity "Video" to store the name, extension, length, ... of the video.
I have "VideoRepository" and "VideoService" to manage the requests involving "Video".
I have a "StorageService" and "StorageServiceImpl" to manage the upload of videos and images: it has to upload the video and save it in a folder called "upload-storage". I will come back to it further on.
I have a videoForm.html, first with a form to select a file and send it to "UploadController", then another form to show the video and the data extracted from it, modify the name or add details, and send this form to a "VideoController" which saves the entity.
Part of the code of "UploadController":
@Controller
public class UploadController extends BaseController {

    private final StorageService storageServiceImpl;

    @Autowired
    public UploadController(StorageService storageServiceImpl) {
        this.storageServiceImpl = storageServiceImpl;
    }

    @PostMapping("/upload")
    public String recupereUpload(@RequestParam("file") MultipartFile file, Model model) {
        String filename = "";
        try {
            final long limit = 200 * 1024 * 1024;
            if (file.getSize() > limit) {
                model.addAttribute("message", "Taille du fichier trop grand (>200MB)");
                model.addAttribute("ok", false);
            }
            filename = storageServiceImpl.store(file);
            model.addAttribute("filename", filename);
            model.addAttribute("message", "Le téléchargement de " + filename + " est réussi !");
        } catch (Exception e) {
            model.addAttribute("message", "FAIL to upload " + filename + "!");
            model.addAttribute("ok", false);
        }
        Video video = new Video();
        model.addAttribute("ok", true);
        model.addAttribute("video", video);
        String baseName = storageServiceImpl.getBaseName(filename);
        String ext = storageServiceImpl.getExtension(filename);
        model.addAttribute("nom", baseName);
        model.addAttribute("ext", ext);
        model.addAttribute("nomorigin", filename);
        model.addAttribute("size", Math.round(file.getSize() / 1024));
        String typExt = storageServiceImpl.getType(ext);
        model.addAttribute("typExt", typExt);
        return "elementVideo/videoForm";
    }
}
"StorageServiceImpl" has different methods :
getExtension(String filename){...}
getType(String ext){...}
getType(String ext){...}
getBaseName(String filename){...}
The main method is store(MultipartFile file) {...} :
@Service
public class StorageServiceImpl implements StorageService {

    private final Path storageLocation = Paths.get("upload-storage");

    @Override
    public String store(MultipartFile file) {
        try {
            // Check that the file is not empty:
            if (file.isEmpty()) {
                throw new Exception("Failed to store empty file " + file.getOriginalFilename());
            }
            // Check the type of the uploaded file and process it:
            String ext = getExtension(file.getOriginalFilename());
            String[] extAutorise = {"mp4", "avi", "ogg", "ogv", "jpg", "jpeg", "png", "gif"};
            String fileNameTarget = "";
            if (ArrayUtils.contains(extAutorise, ext)) {
                // Define the destination file:
                fileNameTarget = file.getOriginalFilename();
                fileNameTarget = fileNameTarget.replaceAll(" ", "_");
                File dir = storageLocation.toFile();
                String serverFile = dir.getAbsolutePath() + File.separator + fileNameTarget;
                try {
                    try (InputStream is = file.getInputStream();
                         BufferedOutputStream stream = new BufferedOutputStream(new FileOutputStream(serverFile))
                    ) {
                        int i;
                        while ((i = is.read()) != -1) {
                            stream.write(i);
                        }
                        stream.flush();
                    }
                } catch (IOException e) {
                    System.out.println("error : " + e.getMessage());
                }
            }
            return fileNameTarget;
        } catch (Exception e) {
            throw new RuntimeException("FAIL!");
        }
    }
}
With this code, a folder "upload-storage" is created at the root of the project.
The video is uploaded into this folder...
But in videoForm.html, the code
<video id="video" th:src="'/upload-storage/'+${filename}" height="60" autoplay="autoplay"></video>
shows nothing.
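A common way to make a folder that lives outside src/main/resources reachable from the browser is to register it as a static resource location; the following is only a sketch under that assumption, not code from the project:

// Sketch: serve files from the external "upload-storage" directory at /upload-storage/**.
// The "file:upload-storage/" location (relative to the working directory) is an assumption.
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class UploadStorageWebConfig implements WebMvcConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/upload-storage/**")
                .addResourceLocations("file:upload-storage/");
    }
}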
I have another solution.
In StorageServiceImpl, I use the code:
private final String storageLocation = this.getClass().getResource("/static/").getPath();
instead of:
private final Path storageLocation = Paths.get("upload-storage");
then:
File dir = new File(storageLocation + File.separator + "upload-storage");
instead of:
File dir = storageLocation.toFile();
then:
File serverFile = new File(dir.getAbsolutePath() + File.separator + fileNameTarget);
instead of:
String serverFile = dir.getAbsolutePath() + File.separator + fileNameTarget;
With this solution, upload-storage is created in the target folder.
I use another controller, BaseController:
public class BaseController {

    public static final String PARAM_BASE_URL = "baseURL";

    public String getBaseURL(HttpServletRequest request) {
        return request.getScheme() + "://" + request.getServerName() + ":" + request.getServerPort() + request.getContextPath();
    }
}
UploadController extends this BaseController.
I add HttpServletRequest request to recupereUpload():
@PostMapping("/upload")
public String recupereUpload(@RequestParam("file") MultipartFile file,
        Model model, HttpServletRequest request) {
I add this to the model sent by recupereUpload():
model.addAttribute(PARAM_BASE_URL, getBaseURL(request));
And at last, I can see my video in videoForm.html with the code:
<video id="video" th:src="${baseURL}+'/upload-storage/'+${filename}" height="60" autoplay="autoplay"></video>
I can close Tomcat, close Eclipse, close the machine, and open everything again: everything is preserved and I can see the video.
But some time later, everything has disappeared again.
There must be a better solution.
Can you help me ?
Why don't you use Spring Content for the video content portion of your solution? That way you won't need to implement any of the video content handling; Spring Content will provide this for you. To add Spring Content to your project:
Add Spring Content to your classpath.
pom.xml
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>spring-content-rest-boot-starter</artifactId>
<version>0.0.10</version>
</dependency>
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>content-fs-spring-boot-starter</artifactId>
<version>0.0.10</version>
</dependency>
Associate content with your Video entity.
Video.java
@Entity
public class Video {
    ...
    @ContentId
    private String contentId;

    @ContentLength
    private Long contetLen;

    @MimeType
    private String mimeType;
    ...
Set up a "persistent folder" as the root of your video store. This is where uploaded videos will be stored/streamed from. Also create a VideoStore interface to describe to SC how you want to associate your content.
SpringBootApplication.java
@SpringBootApplication
public class YourSpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(YourSpringBootApplication.class, args);
    }

    @Configuration
    @EnableFilesystemStores
    public static class StoreConfig {

        File filesystemRoot() {
            return new File("/path/to/your/videos");
        }

        @Bean
        public FileSystemResourceLoader fsResourceLoader() throws Exception {
            return new FileSystemResourceLoader(filesystemRoot().getAbsolutePath());
        }
    }

    @StoreRestResource(path = "videos")
    public interface VideoStore extends ContentStore<Video, String> {
        //
    }
}
This is all you need to create a REST-based video service at /videos. Essentially, when your application starts, Spring Content will look at your dependencies (seeing Spring Content FS/REST), look at your VideoStore interface and inject an implementation of that interface based on the filesystem. It will also inject a controller that forwards http requests to that implementation as well. This saves you having to implement any of this yourself.
So...
POST /videos/{video-entity-id}
with a multipart/form-data request will store the video in /path/to/your/videos and associate it with the video entity whose id is video-entity-id.
GET /videos/{video-entity-id}
will fetch it again. This supports partial content requests or byte ranges; i.e. video streaming too.
and so on...support full CRUD.
There are a couple of getting started guides here. The reference guide is here. And there is a tutorial video here. The coding bit starts about 1/2 way through.
HTH
Did you enable the upload by adding the following property in the application.properties file?
## MULTIPART (MultipartProperties)
# Enable multipart uploads
spring.servlet.multipart.enabled=true
I have written an article about how to upload a multipart file in Spring Boot using Thymeleaf. Here is the service used for the upload.
package com.uploadMultipartfile.storage;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.util.StringUtils;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
@Service
public class FileSystemStorageService implements StorageService {

    private final Path rootLocation;

    @Autowired
    public FileSystemStorageService(StorageProperties properties) {
        this.rootLocation = Paths.get(properties.getUploadDir()).toAbsolutePath().normalize();
        try {
            Files.createDirectories(this.rootLocation);
        } catch (Exception ex) {
            throw new StorageException("Could not create the directory where the uploaded files will be stored.", ex);
        }
    }

    @Override
    public String store(MultipartFile file) {
        // Normalize file name
        String fileName = StringUtils.cleanPath(file.getOriginalFilename());
        try {
            if (file.isEmpty()) {
                throw new StorageException("Failed to store empty file " + file.getOriginalFilename());
            }
            // Copy file to the target location (replacing an existing file with the same name)
            Path targetLocation = this.rootLocation.resolve(fileName);
            Files.copy(file.getInputStream(), targetLocation, StandardCopyOption.REPLACE_EXISTING);
            return fileName;
        } catch (IOException e) {
            throw new StorageException("Failed to store file " + file.getOriginalFilename(), e);
        }
    }

    @Override
    public void init() {
        try {
            Files.createDirectory(rootLocation);
        } catch (IOException e) {
            throw new StorageException("Could not initialize storage", e);
        }
    }
}
Here is a link to get the code of the application. http://mkaroune.e-monsite.com/pages/spring/spring-boot-multipart-file-upload.html

Downloading uploaded files from Amazon S3 service in a Spring MVC web app

I have a problem downloading my uploaded files from the Amazon S3 service. I have successfully implemented the upload section; all I need is to download these files to my local hard drive to view them later. My application is a Spring MVC application.
This is my controller to call the download service
@Controller
public class fileController {

    @Autowired S3Service s3Service;
    @Autowired AwsConfig awsConfig;
    @Autowired Environment env;
    @Autowired DocRepository docRepo;

    @RequestMapping(value = "downloadDocume")
    public void downloadDocument(@RequestParam("docId") Long docId,
            HttpServletRequest request, HttpServletResponse response) throws IOException {
        Document doc = docRepo.findOne(docId);
        String docName = doc.getAsset().getName();
        String ASSET_PATH = awsConfig.getBaseUrl() + "/" +
                awsConfig.getBucket() + "/";
        if (Objects.equals(env.getProperty("spring.profiles.active"), "prod")) {
            ASSET_PATH = awsConfig.getBaseUrl() + "/" +
                    awsConfig.getBucket() + "/";
        }
        String filtered = StringUtils.delete(docName, ASSET_PATH);
        String mimetype = request.getSession().getServletContext().getMimeType(filtered);
        FileStream file = s3Service.getAssetByName("/Documents/", filtered);
        response.setContentType(mimetype);
        response.setContentLength((int) file.getSize());
        response.setHeader("Content-Disposition", "attachment; filename=\"" + docName + "\"");
        FileCopyUtils.copy(file.getInputStream(), response.getOutputStream());
    }
}
// This is my S3Service class with the download method
@Service
public class S3Service {

    @Autowired AwsConfig awsConfig; // field not shown in the original snippet, but referenced below

    public FileStream getAssetByName(String path, String name) throws FileNotFoundException {
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials(awsConfig.getAccessKey(), awsConfig.getSecretKey()));
        s3.setEndpoint(awsConfig.getBaseUrl());
        s3.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
        S3Object obj = s3.getObject(new GetObjectRequest(awsConfig.getBucket(), getS3Path(path) + name));
        return new FileStream(obj.getObjectContent(), obj.getObjectMetadata().getContentLength());
    }
}
Wow... the solution was very simple. I just used the HTML download link and passed the parameters in my JSP like this. This is my document.jsp:
<a class="btn btn-primary" href="${document.asset.name}" download="${document.asset.name}">Download Document</a>
I changed my downloadDocument() in my controller to look like this:
public String downloadDocument(@RequestParam("docId") Long docId,
        HttpServletRequest request, HttpServletResponse response, Model model) {
    Document doc = docRepo.findOne(docId);
    model.addAttribute("document", doc);
    return "document";
}
