Spring Boot upload and download file (multiple formats) from S3 - spring

I'm trying to expose a simple REST controller that takes a multipart file as input and uploads it to S3, plus a download API that takes a file key as input, downloads the file from S3, and sends it to the front end.
The API should support all standard file formats.
Is there a generic implementation for this? It looks like a pretty standard feature, but I could not find any implementation.

Why don't you try Spring Content? It does exactly what you need.
Assuming Maven, Spring Boot and Spring Data (let me know if you are using something else):
pom.xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <!-- HSQL -->
    <dependency>
        <groupId>org.hsqldb</groupId>
        <artifactId>hsqldb</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.hateoas</groupId>
        <artifactId>spring-hateoas</artifactId>
    </dependency>
    <dependency>
        <groupId>com.github.paulcwarren</groupId>
        <artifactId>content-s3-spring-boot-starter</artifactId>
        <version>${spring-content-version}</version>
    </dependency>
    <dependency>
        <groupId>com.github.paulcwarren</groupId>
        <artifactId>content-rest-spring-boot-starter</artifactId>
        <version>${spring-content-version}</version>
    </dependency>
    ...
</dependencies>
Update your entity with the managed Spring Content annotations.
Document.java
@Entity
public class Document {

    // ...existing fields...

    @ContentId
    private String contentId;

    @ContentLength
    private Long contentLen;

    @MimeType
    private String mimeType;

    // ...getters and setters...
}
Create a connection to your S3 store. The S3 Store has been implemented to use a SimpleStorageResourceLoader so this bean will ultimately be used by your store.
S3Config.java
@Configuration
@EnableS3Stores
public class S3Config {

    @Autowired
    private Environment env;

    public Region region() {
        return Region.getRegion(Regions.fromName(System.getenv("AWS_REGION")));
    }

    @Bean
    public BasicAWSCredentials basicAWSCredentials() {
        return new BasicAWSCredentials(env.getProperty("AWS_ACCESS_KEY_ID"), env.getProperty("AWS_SECRET_KEY"));
    }

    @Bean
    public AmazonS3 client(AWSCredentials awsCredentials) {
        AmazonS3Client amazonS3Client = new AmazonS3Client(awsCredentials);
        amazonS3Client.setRegion(region());
        return amazonS3Client;
    }

    @Bean
    public SimpleStorageResourceLoader simpleStorageResourceLoader(AmazonS3 client) {
        return new SimpleStorageResourceLoader(client);
    }
}
Define a Store typed to Document - as that is what you are associating content with.
DocumentStore.java
@StoreRestResource
public interface DocumentStore extends ContentStore<Document, String> {
}
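Although the REST endpoints described below usually make direct calls unnecessary, the injected store can also be used programmatically. A minimal sketch (the service and repository here are hypothetical, just to illustrate the ContentStore API):
@Service
public class DocumentService {

    @Autowired
    private DocumentStore documentStore;            // implementation injected by Spring Content

    @Autowired
    private DocumentRepository documentRepository;  // hypothetical Spring Data repository for Document

    public void attachContent(Long documentId, InputStream content) {
        Document document = documentRepository.findById(documentId).orElseThrow();
        // streams the bytes to S3 and populates the @ContentId/@ContentLength fields on the entity
        documentStore.setContent(document, content);
        // persist the updated content metadata
        documentRepository.save(document);
    }
}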
When you run this you also need to set the bucket for your store. This can be done by specifying spring.content.s3.bucket in application.properties/yaml or by setting the AWS_BUCKET environment variable.
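For example, in application.properties (the bucket name is just a placeholder):
spring.content.s3.bucket=my-bucket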
This is enough to create a REST-based content service for storing content in S3 and associating that content with your Document entity. Spring Content will see the Store interface and the S3 dependencies, assume you want to store content in S3, and inject an implementation of your interface for you, meaning you don't have to implement it yourself. You will be able to store content by POSTing a multipart-form-data request to:
POST /documents/{documentId}/content
and fetching it again with:
GET /documents/{documentId}/content
(the service supports full CRUD BTW and video streaming in case that might be important).
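For example, with curl (a sketch; the host, document id, and file name are placeholders):
curl -X POST -F "file=@/path/to/report.pdf" http://localhost:8080/documents/1/content
curl http://localhost:8080/documents/1/content > report.pdf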
You'll see that Spring Content associates content with your entity by managing the content-related annotations for you.
This can be used with or without Spring Data. The dependencies and the Store are a little different depending on which you use. I assume you have entities that you want to associate data with, since you added a spring-data tag, but let me know if not and I can adapt the answer.
There is a video of this here - the demo starts about halfway through. It uses the Filesystem module rather than S3, but they are interchangeable; you just need to pick the right dependencies for the type of store you are using - S3 in your case.
HTH

Related

Using a custom identity provider in Quarkus

In my current project, we store user login info inside a MongoDB collection. We would like to implement an authentication mechanism that checks the credentials from a request against the information stored in said MongoDB. There is a tutorial for doing this with JPA + Postgres but there is no information on using MongoDB in the same capacity. I suspect that I would need to write a custom IdentityProvider for this case. I tried using the JPA identity provider as a base, but it looks like the security-jpa source code contains only an abstract identity provider, while the actual provider is generated automatically using black magic. Has anyone ever had success adapting the existing Quarkus security architecture to MongoDB or anything else that is not covered by security-jpa?
After some research, I was able to get a custom IdentityProvider to work. Here's a very simple demo (without any MongoDB logic):
@ApplicationScoped
public class DemoIdentityProvider implements IdentityProvider<UsernamePasswordAuthenticationRequest> {

    private static final Map<String, String> CREDENTIALS = Map.of("bob", "password124", "alice", "hunter2");

    @Override
    public Class<UsernamePasswordAuthenticationRequest> getRequestType() {
        return UsernamePasswordAuthenticationRequest.class;
    }

    @Override
    public Uni<SecurityIdentity> authenticate(UsernamePasswordAuthenticationRequest request,
            AuthenticationRequestContext authenticationRequestContext) {
        if (new String(request.getPassword().getPassword()).equals(CREDENTIALS.get(request.getUsername()))) {
            return Uni.createFrom().item(QuarkusSecurityIdentity.builder()
                    .setPrincipal(new QuarkusPrincipal(request.getUsername()))
                    .addCredential(request.getPassword())
                    .setAnonymous(false)
                    .addRole("admin")
                    .build());
        }
        throw new AuthenticationFailedException("password invalid or user not found");
    }
}
Note that in order to access QuarkusSecurityIdentity, the quarkus-security extension needs to be included as a dependency in pom.xml:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-security</artifactId>
</dependency>
Furthermore, quarkus.http.auth.basic=true needs to be added to application.properties for the identity provider to be used with basic auth.
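For the actual MongoDB-backed check, the same pattern should extend naturally. Here is a rough sketch, assuming the quarkus-mongodb-client extension and a users collection with plain username/password fields (the database, collection, and field names are my assumptions, not from the answer above); the blocking driver call is wrapped in runBlocking so it runs on a worker thread:
@ApplicationScoped
public class MongoIdentityProvider implements IdentityProvider<UsernamePasswordAuthenticationRequest> {

    @Inject
    MongoClient mongoClient; // provided by the quarkus-mongodb-client extension

    @Override
    public Class<UsernamePasswordAuthenticationRequest> getRequestType() {
        return UsernamePasswordAuthenticationRequest.class;
    }

    @Override
    public Uni<SecurityIdentity> authenticate(UsernamePasswordAuthenticationRequest request,
            AuthenticationRequestContext context) {
        // run the blocking driver call on a worker thread instead of the IO thread
        return context.runBlocking(() -> {
            org.bson.Document user = mongoClient.getDatabase("mydb").getCollection("users")
                    .find(Filters.eq("username", request.getUsername())).first();
            String password = new String(request.getPassword().getPassword());
            // in a real application the stored password should be hashed, not compared in plain text
            if (user == null || !password.equals(user.getString("password"))) {
                throw new AuthenticationFailedException("password invalid or user not found");
            }
            return QuarkusSecurityIdentity.builder()
                    .setPrincipal(new QuarkusPrincipal(request.getUsername()))
                    .addCredential(request.getPassword())
                    .setAnonymous(false)
                    .build();
        });
    }
}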

Spring Cassandra ddl-auto

I'm using Spring with Cassandra and I'm trying to deploy the service against an existing Cassandra DB in production. I've been reading about ddl-auto and I'm not sure whether my code will overwrite the schema or the data there.
These are my dependencies,
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-cassandra</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-security</artifactId>
    </dependency>
</dependencies>
and I'm using the following to query the Repository,
import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
...

@Repository
public interface PostsRepo extends CrudRepository<Posts, String> {
    Optional<Posts> findBypostid(String id);
}
I don't have any sql file in my project, and my application.properties file is empty.
My questions are:
Do I need to define something specifically to stop/disable automatic schema creation?
Is the automatic schema creation option only applicable to embedded DBs, so there is nothing to worry about here?
What about spring.jpa.hibernate.ddl-auto and spring.jpa.defer-datasource-initialization? Should I set them to none and false, or is simply not having them enough?
Spring Data Cassandra can create the schema in Cassandra for you (Tables and Types). It is not enabled by default.
If you work with Spring Boot and the starter spring-boot-starter-data-cassandra, you can use the flag spring.data.cassandra.schema-action in your application.yaml:
spring:
  data:
    cassandra:
      keyspace-name: sample_keyspace
      username: token
      password: passwd
      schema-action: create-if-not-exists
      request:
        timeout: 10s
      connection:
        connect-timeout: 10s
        init-query-timeout: 10s
If you work with Spring Data Cassandra without Spring Boot, you may inherit from AbstractCassandraConfiguration and override the method getSchemaAction as shown below:
@Configuration
@EnableCassandraRepositories
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.contactpoints}")
    private String contactPoints;

    @Value("${cassandra.port}")
    private int port;

    @Value("${cassandra.keyspace}")
    private String keySpace;

    @Value("${cassandra.basePackages}")
    private String basePackages;

    @Override
    protected String getKeyspaceName() {
        return keySpace;
    }

    @Override
    protected String getContactPoints() {
        return contactPoints;
    }

    @Override
    protected int getPort() {
        return port;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE_IF_NOT_EXISTS;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[] {basePackages};
    }
}
Multiple values are allowed for the field as described in the official Spring documentation quoted here:
SchemaAction.NONE: No tables or types are created or dropped. This is the default setting.
SchemaAction.CREATE: Create tables, indexes, and user-defined types from entities annotated with @Table and types annotated with @UserDefinedType (see the entity sketch after this list). Existing tables or types cause an error when you try to create them.
SchemaAction.CREATE_IF_NOT_EXISTS: Like SchemaAction.CREATE but with IF NOT EXISTS applied. Existing tables or types do not cause any errors but may remain stale.
SchemaAction.RECREATE: Drops and recreates existing tables and types that are known to be used. Tables and types that are not configured in the application are not dropped.
SchemaAction.RECREATE_DROP_UNUSED: Drops all tables and types and recreates only known tables and types.
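For reference, schema creation is driven by the mapped entities. A minimal entity sketch (loosely matching the PostsRepo above; the field names are assumptions):
@Table("posts")
public class Posts {

    @PrimaryKey
    private String postid;   // partition key; drives the generated CREATE TABLE

    private String title;

    // getters and setters omitted
}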
Even though the feature might be useful during development, I do not recommend using it, especially in production. Here is my rationale:
The way to implement an efficient data model with Cassandra is to design your queries first and, based on them, define the needed tables. If two queries work with the same data, it is recommended to create two tables with the same data but different primary keys. If you work with object mapping (object => table), you may be tempted to reuse the same bean for different queries... with the same table.
Creating the schema in production will require fine-tuning of the DDL requests (overriding the TTL, the compaction strategy, enabling NodeSync, special properties).
Human error: if you leave your schema-action at RECREATE... good luck.

How to update image using image url

I have a method which takes a multipart image file. If I want to update the same image, then obviously I have to take the image URL as input, but I can't take the input as a URL since the method only accepts the file format.
my method:
MediaType.APPLICATION_OCTET_STREAM_VALUE}, produces = MediaType.APPLICATION_JSON_VALUE)
public ResponseEntity<ApiResponse> updatePersonalDataForUser(
        @RequestHeader("accessToken") @NotEmpty(message = "accessToken is mandatory") String bearer,
        @RequestHeader("mappingId") @NotEmpty(message = "mappingId is mandatory") String mappingId,
        @RequestPart("personalInfoObj") String personalInfoObj,
        @RequestPart(value = "profileImage") MultipartFile profileImage)
        throws IOException {
    jobPostController.userRoleAuthorization(mappingId);
    userController.oAuthByRedisAccessToken(bearer, mappingId);
    ObjectMapper objectMapper = new ObjectMapper();
    PersonalInfoResponse personalInfoConv = objectMapper.readValue(personalInfoObj, PersonalInfoResponse.class);
    return userController.updatePersonalData(mappingId, personalInfoConv, profileImage, Contants.UserRoleName);
}
You should take a look at the Spring community project called Spring Content.
This project makes it easy to build contentful applications and services. It has the same programming model as Spring Data, meaning it can supply implementations of file storage and of REST controllers on top of that storage, so you don't need to create these yourself. It is to content (or unstructured data) what Spring Data is to structured data.
This might look something like the following:
pom.xml (for Spring Web MVC. Spring Boot also supported)
<!-- Spring Web MVC dependencies -->
...
<!-- Java API -->
<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>spring-content-fs</artifactId>
    <version>1.0.0.M5</version>
</dependency>
<!-- REST API -->
<dependency>
    <groupId>com.github.paulcwarren</groupId>
    <artifactId>spring-content-rest</artifactId>
    <version>1.0.0.M5</version>
</dependency>
StoreConfig.java
@Configuration
@EnableFilesystemStores
@Import(RestConfiguration.class)
public class EnableFilesystemStoresConfig {

    @Bean
    File filesystemRoot() {
        // new File(...) does not throw, so no try/catch is needed here
        return new File("/path/to/your/uploaded/files");
    }

    @Bean
    FileSystemResourceLoader fileSystemResourceLoader() {
        return new FileSystemResourceLoader(filesystemRoot().getAbsolutePath());
    }
}
ImageStore.java
@StoreRestResource(path="images")
public interface ImageStore extends Store<String> {
}
This is all you need to do to get REST endpoints that will allow you to store and retrieve files. As mentioned, this works very much like Spring Data. When your application starts, Spring Content will see the spring-content-fs dependency, know that you want to store content on your filesystem, and inject a filesystem implementation of the ImageStore interface into the application context. It will also see spring-content-rest and inject a controller (i.e. REST endpoints) that talks to the ImageStore interface. Therefore, you don't have to do any of this yourself.
So, for example:
curl -X POST /images/myimage.jpg -F "file=@/path/to/myimage.jpg"
will store the image on the filesystem at /path/to/your/uploaded/files/myimage.jpg
And:
curl /images/myimage.jpg
will fetch it again, and so on. These endpoints support full CRUD, and the GET & PUT endpoints also support video streaming (byte-range requests).
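For example, a byte-range request for the first kilobyte might look like this (the header value is illustrative):
curl -H "Range: bytes=0-1023" /images/myimage.jpg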
You could also decide to store the contents elsewhere like in the database with your entities, or in S3 by swapping the spring-content-fs dependency for the appropriate Spring Content Storage module. Examples for every type of storage are here.
In addition, in case it is helpful, content is often associated with Spring Data entities. So, it is also possible to have the ImageStore interface extend ContentStore, like this:
ImageStore.java
@StoreRestResource(path="images")
public interface ImageStore extends ContentStore<PersonalInfo, String> {
}
And to add Spring Content-annotated fields to your Spring Data entities, like this:
PersonalInfo.java
@Entity
public class PersonalInfo {

    @Id
    @GeneratedValue
    private long id;

    // ...other existing fields...

    @ContentId
    private String contentId;

    @ContentLength
    private long contentLength = 0L;

    @MimeType
    private String mimeType = "text/plain";

    ...
}
This approach changes the REST endpoints as the content is now addressable via the Spring Data URL. So:
POST /personalInfos/{personalInfoId} -F "image=@/some/path/to/myimage.jpg"
will upload myimage.jpg to /path/to/your/uploaded/files/myimage.jpg, as it did before, but it will also update the fields on the PersonalInfo entity with id personalInfoId.
GET /personalInfos/{personalInfoId}
will get it again.
HTH

Spring Cloud Function expects Flux<String> instead of String, when deploying on Azure Functions

I'm currently looking into Spring Cloud Function and its possibilities to deploy one function on different cloud environments (AWS Lambda and Azure Functions).
My function looks like this (of course very simplified):
@Component
public class EchoFunction implements Function<String, String> {

    @Override
    public String apply(String m) {
        String message = "Received message: " + m;
        return message;
    }
}
When deploying that on AWS Lambda, it works perfectly (the full project can be found here).
However, if I run the same function as a local Azure Functions deployment using the Azure Functions Core Tools, I get the following exception when calling the function:
[24.01.19 21:58:50] Caused by: java.lang.ClassCastException: reactor.core.publisher.FluxJust cannot be cast to java.lang.String
[24.01.19 21:58:50] at de.margul.awstutorials.springcloudfunction.function.EchoFunction.apply(EchoFunction.java:9)
[24.01.19 21:58:50] at org.springframework.cloud.function.adapter.azure.AzureSpringBootRequestHandler.handleRequest(AzureSpringBootRequestHandler.java:56)
[24.01.19 21:58:50] at de.margul.awstutorials.springcloudfunction.azure.handler.FunctionHandler.execute(FunctionHandler.java:19)
[24.01.19 21:58:50] ... 16 more
For some reason, the function seems to expect a Flux instead of a String.
I think this might be related to what the documentation (https://cloud.spring.io/spring-cloud-static/spring-cloud-function/2.0.0.RELEASE/single/spring-cloud-function.html#_function_catalog_and_flexible_function_signatures) says about this:
One of the main features of Spring Cloud Function is to adapt and support a range of type signatures for user-defined functions, while providing a consistent execution model. That's why all user defined functions are transformed into a canonical representation by FunctionCatalog, using primitives defined by the Project Reactor (i.e., Flux and Mono). Users can supply a bean of type Function<String, String>, for instance, and the FunctionCatalog will wrap it into a Function<Flux<String>, Flux<String>>.
So the problem might be related to this:
If I change the function in the following way, it works:
@Component
public class EchoFunction implements Function<String, Flux<String>> {

    @Override
    public Flux<String> apply(String m) {
        String message = "Received message: " + m;
        return Flux.just(message);
    }
}
My function handler looks like this:
public class FunctionHandler extends AzureSpringBootRequestHandler<String, String> {

    @FunctionName("createEntityFunction")
    public String execute(@HttpTrigger(name = "req", methods = {
            HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<String> entity,
            ExecutionContext context) {
        return handleRequest(entity.getBody(), context);
    }

    @Bean
    public EchoFunction createEntityFunction() {
        return new EchoFunction();
    }
}
For the AWS deployment, I had the following dependencies:
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-function-adapter-aws</artifactId>
        <version>2.0.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-core</artifactId>
        <version>1.2.0</version>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-lambda-java-events</artifactId>
        <version>2.2.5</version>
    </dependency>
</dependencies>
For the Azure deployment, I have only one dependency:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-function-adapter-azure</artifactId>
    <version>2.0.0</version>
</dependency>
I've already looked into the source code of both adapters:
On AWS, the SpringBootRequestHandler invokes the target function (in line 48).
On Azure, the AzureSpringBootRequestHandler invokes the target function (in line 56).
To me, it looks like a Flux is handed over in both cases.
However, for the AWS adapter, the object is obviously unwrapped somewhere in between.
This is not the case with the Azure adapter.
Any ideas why?
@margul Sorry for the late reply. Without the newly created spring-cloud-function tag your question was kind of lost.
I just looked at it and also the issue you opened in GH, and it appears to be a bug on our side.
Basically it seems like if we can't find the function in the catalog, we fall back on the bean factory. The problem with this approach is that the bean factory has the raw function bean (not fluxified yet), hence the ClassCastException.
Anyway, I'll address the rest in GH.
Just to close this off, please see this issue

JsonView for filtering Json properties in Spring MVC not working

In our Spring MVC web application for job recruiting, I work on a RESTful service to get information about available companies for a given account, or more detailed data for a single company.
This is implemented using Spring MVC in a pretty straightforward way.
Logically, the API for a single company shows more details than for a list of companies. In particular, there are two fields (CoverPhoto and Logo) which are only to be shown when querying the details for a single company by its id.
For the generation of the Json output, I use Jackson to annotate the returned DTO object for specific field names because sometimes they are different from the member variable names.
One of the ways to implement this in an elegant way is using JsonViews, as described in these tutorials:
https://spring.io/blog/2014/12/02/latest-jackson-integration-improvements-in-spring
http://www.baeldung.com/jackson-json-view-annotation
The only difference between them is that the second one uses interfaces for the View classes, and the first one uses classes. But that should not make any difference and my code is not working as expected with either of them.
I have created two interfaces (ObjectList and ObjectDetails) and annotated the fields in my DTO with
@JsonView(Views.ObjectList.class)
for the fields I want to see on both the list and the details API, and with
@JsonView(Views.ObjectDetails.class)
for the fields only to be shown in the single-company API.
But unfortunately, both APIs show all fields, regardless of the annotation. Also, fields without a @JsonView annotation appear in the output JSON, while according to the documentation, when annotating the controller method with a @JsonView, each field should also be annotated with a @JsonView annotation to show up.
My simplified code looks as follows:
DTO:
package nl.xxxxxx.dto.too;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import com.fasterxml.jackson.annotation.JsonView;
@JsonAutoDetect
@JsonPropertyOrder({"id", "name", "logo", "coverPhoto", "description", "shortDescription",
        "phone", "address"})
public class XxxCompanyDto {

    @JsonView(Views.ObjectList.class)
    private Long id;

    @JsonView(Views.ObjectList.class)
    private String name;

    @JsonView(Views.ObjectDetails.class)
    private String logo;

    @JsonView(Views.ObjectDetails.class)
    @JsonProperty("cover_photo")
    private String coverPhoto;

    @JsonView(Views.ObjectList.class)
    private String description;

    // more fields
    // setters; no getters are used, to prevent ambiguity in JSON generation
    // omitted for clarity
}
Views:
package nl.xxx.dto.too;
public class Views {
    public interface ObjectList {}
    public interface ObjectDetails extends ObjectList {}
}
Controller:
package nl.xxx.controller;
import com.fasterxml.jackson.annotation.JsonView;
import org.springframework.web.bind.annotation.*;
//more imports
/**
* Created by Klaas van Gelder on 17-Nov-16.
*/
@RestController
public class XxxCompanyController {

    @Autowired
    // services omitted

    @JsonView(Views.ObjectDetails.class)
    @RequestMapping(value = "/public-api/xxx/company/{companyId}", method = RequestMethod.GET)
    public TooCompanyDto getCompanyById(
            @RequestHeader(value = "X-Channel") String publicationChannelToken,
            @PathVariable(value = "companyId") Long companyId) {
        XxxCompany tooCompany = tooCompanyService.getCompanyById(companyId);
        // some verifications omitted
        TooCompanyDto tooCompanyDto = tooCompanyJsonConverter.convertToDto(tooCompany);
        return tooCompanyDto;
    }

    @JsonView(Views.ObjectList.class)
    @RequestMapping(value = "/public-api/xxx/company", method = RequestMethod.GET)
    public List<TooCompanyDto> listCompaniesForChannel(
            @RequestHeader(value = "X-Channel") String publicationChannelToken) {
        XxxPublicationChannel channel = tooVacancyService.findPublicationChannelByToken(publicationChannelToken);
        List<XxxCompany> xxxCompaniesForChannel = xxxCompanyService.findCompaniesByPublicationChannelToken(publicationChannelToken);
        List<XxxCompanyDto> dtoList = new ArrayList<>();
        for (XxxCompany xxxCompany : xxxCompaniesForChannel) {
            XxxCompanyDto xxxCompanyDto = xxxCompanyJsonConverter.convertToDto(xxxCompany);
            dtoList.add(xxxCompanyDto);
        }
        return dtoList;
    }
}
Maven:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.2.2.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>${jackson-2-version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>${jackson-2-version}</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>${jackson-2-version}</version>
</dependency>
<!-- more dependencies -->
with <jackson-2-version>2.2.2</jackson-2-version> in parent POM
It seems that the JsonView annotations are completely ignored. I could probably work around this with two separate DTO classes, but it would be nice to get this working as it should!
Any hints are more than welcome!
