I'm using Spring with Cassandra, and I'm trying to deploy the service against an existing Cassandra DB in production. I've been reading about ddl-auto, and I'm not sure whether my code will override the schema or the data there.
These are my dependencies,
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-cassandra</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>
</dependencies>
and I'm using the following to query the Repository,
import org.springframework.data.cassandra.repository.CassandraRepository;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
...
...
@Repository
public interface PostsRepo extends CrudRepository<Posts, String> {
    Optional<Posts> findBypostid(String id);
}
I don't have any SQL file in my project, and my application.properties file is empty.
My questions are:
Do I need to define something specific to stop/disable automatic schema creation?
Is the automatic schema creation option only applicable to embedded DBs, so there is nothing to worry about here?
What about spring.jpa.hibernate.ddl-auto or spring.jpa.defer-datasource-initialization? Should I set them to none and false, or is simply not having them enough?
Spring Data Cassandra can create the schema in Cassandra for you (tables and types). This is not enabled by default.
If you work with Spring Boot and the starter spring-boot-starter-data-cassandra, you can use the property spring.data.cassandra.schema-action in your application.yaml:
spring:
  data:
    cassandra:
      keyspace-name: sample_keyspace
      username: token
      password: passwd
      schema-action: create-if-not-exists
      request:
        timeout: 10s
      connection:
        connect-timeout: 10s
        init-query-timeout: 10s
If you work with Spring Data Cassandra without Spring Boot, you can extend AbstractCassandraConfiguration and override the method getSchemaAction, as described below:
@Configuration
@EnableCassandraRepositories
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Value("${cassandra.contactpoints}")
    private String contactPoints;

    @Value("${cassandra.port}")
    private int port;

    @Value("${cassandra.keyspace}")
    private String keySpace;

    @Value("${cassandra.basePackages}")
    private String basePackages;

    @Override
    protected String getKeyspaceName() {
        return keySpace;
    }

    @Override
    protected String getContactPoints() {
        return contactPoints;
    }

    @Override
    protected int getPort() {
        return port;
    }

    @Override
    public SchemaAction getSchemaAction() {
        return SchemaAction.CREATE_IF_NOT_EXISTS;
    }

    @Override
    public String[] getEntityBasePackages() {
        return new String[] {basePackages};
    }
}
Multiple values are allowed for this setting, as described in the official Spring documentation, quoted here:
SchemaAction.NONE: No tables or types are created or dropped. This is the default setting.
SchemaAction.CREATE: Create tables, indexes, and user-defined types from entities annotated with @Table and types annotated with @UserDefinedType. Existing tables or types cause an error if you tried to create the type.
SchemaAction.CREATE_IF_NOT_EXISTS: Like SchemaAction.CREATE but with IF NOT EXISTS applied. Existing tables or types do not cause any errors but may remain stale.
SchemaAction.RECREATE: Drops and recreates existing tables and types that are known to be used. Tables and types that are not configured in the application are not dropped.
SchemaAction.RECREATE_DROP_UNUSED: Drops all tables and types and recreates only known tables and types.
While the feature might be useful for your development work, I do not recommend using it, especially in production. Here is my rationale:
The way to implement an efficient data model with Cassandra is to design your queries first and, based on them, define the tables you need. If two queries work with the same data, it is recommended to create two tables holding the same data but with different primary keys. If you work with object mapping (object => table), you may be tempted to reuse the same bean for different queries... with the same table.
Creating the schema in production will require fine-tuning of the DDL statements (overriding the TTL, the compaction strategy, enabling NodeSync, special properties).
Human error. If you leave your schema-action set to RECREATE... good luck.
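To answer the original questions directly: the spring.jpa.* properties (spring.jpa.hibernate.ddl-auto, spring.jpa.defer-datasource-initialization) only configure JPA/Hibernate and are ignored on the Cassandra path, so simply not having them is enough. And since SchemaAction.NONE is the default, your empty application.properties already means nothing will be created or dropped. If you want to make that explicit anyway, a minimal application.properties sketch:
# NONE is already the default; setting it explicitly just documents the intent
spring.data.cassandra.schema-action=none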
Very simple and straightforward question: is there any way to cache exceptions using Caffeine/Spring Boot?
Some specific exceptions in my method can be very time-consuming (404, for example). I wish I could cache them and avoid the long processing.
A simple way to cache exceptions is to encapsulate the call, catch the exception, and represent it as a value. I am just adding more details based on @ben-manes' comment.
Approach 1: encapsulate the exception as a business object.
Approach 2: return null or an Optional object on exception. For caching null values, you need to explicitly enable caching of null values (refer here - Spring Boot Cacheable - Cache null values); see also the configuration sketch after the example below.
Here is an example based on Spring caching (it can be extended to Caffeine). The following class loads a Book entity, which may result in an exception. The exception is handled and cached, so the next time the same ISBN code (argument) is passed, the cached value is returned from the cache. (The actual DB call, loadBookFromDb below, is elided.)
@Component
public class SimpleBookRepository implements BookRepository {

    @Override
    @Cacheable("books")
    public Book getByIsbn(String isbn) {
        Book book = loadBook(isbn);
        return book;
    }

    // Don't do this at home
    private Book loadBook(String isbn) {
        Book book;
        try {
            // get book from DB here; the DB call (not shown) can throw an exception
            book = loadBookFromDb(isbn);
        } catch (Exception e) {
            book = new Book("None found"); // Approach 1: encapsulate the error as an entity
            // book = null;                // Approach 2: return null instead
        }
        return book;
    }
}
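For Approach 2, note that Caffeine itself does not accept null values, so the Spring cache wrapper must be configured to allow them. A minimal sketch of such a configuration, assuming the cache name books from the example above (the size and expiry values are arbitrary):
import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableCaching
public class CacheConfig {

    @Bean
    public CacheManager cacheManager() {
        CaffeineCacheManager manager = new CaffeineCacheManager("books");
        manager.setCaffeine(Caffeine.newBuilder()
                .maximumSize(500)
                .expireAfterWrite(5, TimeUnit.MINUTES));
        // Wrap nulls in a placeholder so "no result" can be cached too (Approach 2)
        manager.setAllowNullValues(true);
        return manager;
    }
}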
1. First, add these dependencies in pom.xml:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>2.7.0</version>
</dependency>
2. Add @CacheConfig(cacheNames = {"customer"}) and @Slf4j under the @Service annotation.
3. Add @Cacheable on the method you want cached, and add a log statement in the method:
@Cacheable
public Customer saveCustomer(Customer customer) {
    log.info("Inside saveCustomer method of CustomerService");
    return customerRepository.save(customer);
}
There are a few important points:
@CacheConfig is a class-level annotation that helps streamline caching configuration.
The @Cacheable annotation is used to demarcate methods that are cacheable. In simple words, this annotation tells the caching API that we want to store the result of this method in the cache so that, on subsequent invocations, the value from the cache is returned without executing the method.
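One thing the steps above leave implicit: annotation-driven caching must also be switched on, typically by putting @EnableCaching on a configuration class. With Spring Boot and Caffeine on the classpath, the cache can then be tuned from application.properties; a minimal sketch (the cache name matches step 2, the size and expiry values are assumptions):
spring.cache.cache-names=customer
spring.cache.caffeine.spec=maximumSize=500,expireAfterWrite=10m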
I have created a new Spring Boot project with PostgreSQL. I would like to use the PostgreSQL array_agg function (e.g. to get all departments) in a JPA repository native query, but I get the error posted below. I have tried some alternative solutions but could not get the expected data.
Error:
org.springframework.orm.jpa.JpaSystemException: No Dialect mapping for JDBC type: 2003;
nested exception is org.hibernate.MappingException: No Dialect mapping for JDBC type: 2003
Expected: should get an array or list of data
@Repository
public interface PostGroupRepository extends JpaRepository<PostGroup, Integer> {
    @Query(value = "SELECT array_agg(department) FROM boxinfo;", nativeQuery = true)
    public Object[] getDept();
}
The first solution is to use the dependency below:
<dependency>
<groupId>com.vladmihalcea</groupId>
<artifactId>hibernate-types-52</artifactId>
<version>2.11.1</version>
</dependency>
It ships with the custom types already written; register the relevant one in a custom dialect like below:
import java.sql.Types;
import com.vladmihalcea.hibernate.type.array.StringArrayType;
import org.hibernate.dialect.PostgreSQL10Dialect;

public class CustomPostgreDialect extends PostgreSQL10Dialect {
    public CustomPostgreDialect() {
        super();
        // java.sql.Types.ARRAY == 2003, the JDBC type code from the error message
        this.registerHibernateType(Types.ARRAY, StringArrayType.class.getName());
    }
}
Then use this dialect as the Hibernate dialect in the application.yaml or application.properties of your Spring Boot project:
spring.jpa.properties.hibernate.dialect: <packageName>.CustomPostgreDialect
The second solution, if you don't want to use the dependency, is to write the custom type yourself and register it in the dialect as shown above.
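With the custom dialect registered, the repository from the question can declare a concrete array return type instead of Object[]. A hedged sketch, assuming the department column is a text type:
@Repository
public interface PostGroupRepository extends JpaRepository<PostGroup, Integer> {

    // The registered StringArrayType maps the JDBC ARRAY (type 2003) result to String[]
    @Query(value = "SELECT array_agg(department) FROM boxinfo", nativeQuery = true)
    String[] getDept();
}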
I'm trying to expose a simple REST controller that takes a multipart file as input and uploads it to S3, plus a download API that takes a file key as input, downloads the file from S3, and sends it to the FE.
The API should support all standard file formats.
Is there a generic implementation for this? It looks like a pretty standard feature, but I could not find any implementation.
Why don't you try Spring Content? It does exactly what you need.
Assuming Maven, Spring Boot and Spring Data (let me know if you are using something else):
pom.xml
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<!-- HSQL -->
<dependency>
<groupId>org.hsqldb</groupId>
<artifactId>hsqldb</artifactId>
<scope>runtime</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.hateoas</groupId>
<artifactId>spring-hateoas</artifactId>
</dependency>
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>content-s3-spring-boot-starter</artifactId>
<version>${spring-content-version}</version>
</dependency>
<dependency>
<groupId>com.github.paulcwarren</groupId>
<artifactId>content-rest-spring-boot-starter</artifactId>
<version>${spring-content-version}</version>
</dependency>
...
</dependencies>
Update your entity with the managed Spring Content annotations.
Document.java
@Entity
public class Document {

    ...existing fields...

    @ContentId
    private String contentId;

    @ContentLength
    private Long contentLen;

    @MimeType
    private String mimeType;

    ...getters and setters...
}
Create a connection to your S3 store. The S3 Store has been implemented to use a SimpleStorageResourceLoader so this bean will ultimately be used by your store.
S3Config.java
@Configuration
@EnableS3Stores
public class S3Config {

    @Autowired
    private Environment env;

    public Region region() {
        return Region.getRegion(Regions.fromName(System.getenv("AWS_REGION")));
    }

    @Bean
    public BasicAWSCredentials basicAWSCredentials() {
        return new BasicAWSCredentials(env.getProperty("AWS_ACCESS_KEY_ID"), env.getProperty("AWS_SECRET_KEY"));
    }

    @Bean
    public AmazonS3 client(AWSCredentials awsCredentials) {
        AmazonS3Client amazonS3Client = new AmazonS3Client(awsCredentials);
        amazonS3Client.setRegion(region());
        return amazonS3Client;
    }

    @Bean
    public SimpleStorageResourceLoader simpleStorageResourceLoader(AmazonS3 client) {
        return new SimpleStorageResourceLoader(client);
    }
}
Define a Store typed to Document - as that is what you are associating content with.
DocumentStore.java
@StoreRestResource
public interface DocumentStore extends ContentStore<Document, String> {
}
When you run this you also need to set the bucket for your store. This can be done by specifying spring.content.s3.bucket in application.properties/yaml or by setting the AWS_BUCKET environment variable.
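For example, in application.properties (the bucket name is a placeholder):
spring.content.s3.bucket=my-bucket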
This is enough to create a REST-based content service for storing content in S3 and associating that content with your Document entity. Spring Content will see the Store interface and the S3 dependencies, assume you want to store content in S3, and inject an implementation of your interface for you, meaning you don't have to implement it yourself. You will be able to store content by POSTing a multipart-form-data request to:
POST /documents/{documentId}/content
and fetching it again with:
GET /documents/{documentId}/content
(the service supports full CRUD BTW and video streaming in case that might be important).
You'll see that Spring Content associates content with your entity by managing the content related annotations for you.
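If you ever need to store or fetch content programmatically rather than over REST, you can also inject the store directly. A hedged sketch (DocumentRepository is an assumed Spring Data repository for the Document entity):
import java.io.InputStream;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class DocumentService {

    @Autowired
    private DocumentStore store;

    @Autowired
    private DocumentRepository repo; // assumed: a Spring Data repository for Document

    public void upload(Long documentId, InputStream content) {
        Document doc = repo.findById(documentId)
                .orElseThrow(IllegalArgumentException::new);
        // Stores the bytes in S3 and fills the @ContentId/@ContentLength fields
        store.setContent(doc, content);
        repo.save(doc); // persist the updated content metadata
    }
}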
This can be used with or without Spring Data; the dependencies and the Store are a little different depending on which you use. I assume you have entities that you want to associate data with, as you added a spring-data tag, but let me know if not and I can adapt the answer.
There is a video of this here; the demo starts about halfway through. It uses the Filesystem module rather than S3, but they are interchangeable. Just pick the right dependencies for the type of store you are using, S3 in your case.
HTH
I'm currently looking into Spring Cloud Function and its possibilities to deploy one function on different cloud environments (AWS Lambda and Azure Functions).
My function looks like this (of course very simplified):
@Component
public class EchoFunction implements Function<String, String> {

    @Override
    public String apply(String m) {
        String message = "Received message: " + m;
        return message;
    }
}
When deploying that on AWS Lambda, it works perfectly (the full project can be found here).
However, if I run the same function as local Azure Functions deployment using the Azure Functions Core Tools, I get the following exception when calling the function:
[24.01.19 21:58:50] Caused by: java.lang.ClassCastException: reactor.core.publisher.FluxJust cannot be cast to java.lang.String
[24.01.19 21:58:50] at de.margul.awstutorials.springcloudfunction.function.EchoFunction.apply(EchoFunction.java:9)
[24.01.19 21:58:50] at org.springframework.cloud.function.adapter.azure.AzureSpringBootRequestHandler.handleRequest(AzureSpringBootRequestHandler.java:56)
[24.01.19 21:58:50] at de.margul.awstutorials.springcloudfunction.azure.handler.FunctionHandler.execute(FunctionHandler.java:19)
[24.01.19 21:58:50] ... 16 more
For some reason, the function seems to expect a Flux instead of a String.
I think this might be related to what the documentation (https://cloud.spring.io/spring-cloud-static/spring-cloud-function/2.0.0.RELEASE/single/spring-cloud-function.html#_function_catalog_and_flexible_function_signatures) says about this:
One of the main features of Spring Cloud Function is to adapt and support a range of type signatures for user-defined functions, while providing a consistent execution model. That’s why all user defined functions are transformed into a canonical representation by FunctionCatalog, using primitives defined by the Project Reactor (i.e., Flux and Mono). Users can supply a bean of type Function<String, String>, for instance, and the FunctionCatalog will wrap it into a Function<Flux<String>, Flux<String>>.
So the problem might be related to this:
If I change the function in the following way, it works:
@Component
public class EchoFunction implements Function<String, Flux<String>> {

    @Override
    public Flux<String> apply(String m) {
        String message = "Received message: " + m;
        return Flux.just(message);
    }
}
My function handler looks like this:
public class FunctionHandler extends AzureSpringBootRequestHandler<String, String> {

    @FunctionName("createEntityFunction")
    public String execute(@HttpTrigger(name = "req", methods = {
            HttpMethod.POST }, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<String> entity,
            ExecutionContext context) {
        return handleRequest(entity.getBody(), context);
    }

    @Bean
    public EchoFunction createEntityFunction() {
        return new EchoFunction();
    }
}
For the AWS deployment, I had the following dependencies:
<dependencies>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-aws</artifactId>
<version>2.0.0</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-core</artifactId>
<version>1.2.0</version>
</dependency>
<dependency>
<groupId>com.amazonaws</groupId>
<artifactId>aws-lambda-java-events</artifactId>
<version>2.2.5</version>
</dependency>
</dependencies>
For the Azure deployment, I have only one dependency:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-azure</artifactId>
<version>2.0.0</version>
</dependency>
I've already looked into the source code of both adapters:
On AWS, the SpringBootRequestHandler invokes the target function (in line 48).
On Azure, the AzureSpringBootRequestHandler invokes the target function (in line 56).
For me, it looks like in both cases, a Flux is handed over.
However, for the AWS adapter, the object is obviously unwrapped somewhere in between.
But this is not the case with the Azure adapter.
Any ideas why?
@margul Sorry for the late reply. Without the newly created spring-cloud-function tag, your question was kind of lost.
I just looked at it, and also at the issue you opened in GH, and it appears to be a bug on our side.
Basically, it seems that if we can't find the function in the catalog, we fall back on the bean factory. The problem with this approach is that the bean factory has the raw function bean (not yet fluxified), hence the ClassCastException.
Anyway, I'll address the rest in GH.
Just to close this off, please see this issue.
In our Spring MVC web application for job recruiting, I work on a RESTful service to get information about available companies for a given account, or more detailed data for a single company.
This is implemented using Spring MVC in a pretty straightforward way.
Logically, the API for a single company shows more details than the one for a list of companies. In particular, there are two fields (CoverPhoto and Logo) which are only to be shown when querying the details for a single company by its id.
For the generation of the JSON output, I use Jackson and annotate the returned DTO object with specific field names, because sometimes they differ from the member variable names.
An elegant way to implement this is using JsonViews, as described in these tutorials:
https://spring.io/blog/2014/12/02/latest-jackson-integration-improvements-in-spring
http://www.baeldung.com/jackson-json-view-annotation
The only difference between them is that the second one uses interfaces for the view classes, while the first one uses classes. But that should not make any difference, and my code does not work as expected with either of them.
I have created two interfaces (ObjectList and ObjectDetails) and annotated the fields in my DTO with
@JsonView(Views.ObjectList.class)
for the fields I want to see in both the list and the details API, and with
@JsonView(Views.ObjectDetails.class)
for the fields only to be shown in the single-company API.
But unfortunately, both APIs show all fields, regardless of the annotation. Fields without a @JsonView annotation also appear in the output JSON, although according to the documentation, when the controller method is annotated with a @JsonView, each field must also carry a @JsonView annotation to show up.
My simplified code looks as follows:
DTO:
package nl.xxxxxx.dto.too;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;
import com.fasterxml.jackson.annotation.JsonView;
@JsonAutoDetect
@JsonPropertyOrder({"id", "name", "logo", "coverPhoto", "description", "shortDescription",
        "phone", "address"})
public class XxxCompanyDto {

    @JsonView(Views.ObjectList.class)
    private Long id;

    @JsonView(Views.ObjectList.class)
    private String name;

    @JsonView(Views.ObjectDetails.class)
    private String logo;

    @JsonView(Views.ObjectDetails.class)
    @JsonProperty("cover_photo")
    private String coverPhoto;

    @JsonView(Views.ObjectList.class)
    private String description;

    //more fields
    //setters, no getters are used to prevent ambiguity for Json generation
    //omitted for clarity
}
Views:
package nl.xxx.dto.too;
public class Views {
public interface ObjectList {}
public interface ObjectDetails extends ObjectList {}
}
Controller:
package nl.xxx.controller;
import com.fasterxml.jackson.annotation.JsonView;
import org.springframework.web.bind.annotation.*;
//more imports
/**
* Created by Klaas van Gelder on 17-Nov-16.
*/
@RestController
public class XxxCompanyController {

    @Autowired
    //services omitted

    @JsonView(Views.ObjectDetails.class)
    @RequestMapping(value = "/public-api/xxx/company/{companyId}", method = RequestMethod.GET)
    public TooCompanyDto getCompanyById(
            @RequestHeader(value = "X-Channel") String publicationChannelToken,
            @PathVariable(value = "companyId") Long companyId) {
        XxxCompany tooCompany = tooCompanyService.getCompanyById(companyId);
        //some verifications omitted
        TooCompanyDto tooCompanyDto = tooCompanyJsonConverter.convertToDto(tooCompany);
        return tooCompanyDto;
    }

    @JsonView(Views.ObjectList.class)
    @RequestMapping(value = "/public-api/xxx/company", method = RequestMethod.GET)
    public List<TooCompanyDto> listCompaniesForChannel(
            @RequestHeader(value = "X-Channel") String publicationChannelToken) {
        XxxPublicationChannel channel = tooVacancyService.findPublicationChannelByToken(publicationChannelToken);
        List<XxxCompany> xxxCompaniesForChannel = xxxCompanyService.findCompaniesByPublicationChannelToken(publicationChannelToken);
        List<XxxCompanyDto> dtoList = new ArrayList<>();
        for (XxxCompany xxxCompany : xxxCompaniesForChannel) {
            XxxCompanyDto xxxCompanyDto = xxxCompanyJsonConverter.convertToDto(xxxCompany);
            dtoList.add(xxxCompanyDto);
        }
        return dtoList;
    }
}
Maven:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.2.2.BUILD-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-webmvc</artifactId>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-core</artifactId>
<version>${jackson-2-version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-annotations</artifactId>
<version>${jackson-2-version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson-2-version}</version>
</dependency>
<!-- more dependencies -->
with <jackson-2-version>2.2.2</jackson-2-version> in parent POM
It seems that the JsonView annotations are completely ignored. I could probably fall back on another solution using two separate DTO classes, but it would be nice to get this working as it should!
Any hints are more than welcome!