I am learning Spring Data Redis, and need some help with the RedisTemplate:
Looking at the javadoc of RedisTemplate, I can see the following serializer setters:
public void setKeySerializer(RedisSerializer<?> serializer)
public void setHashKeySerializer(RedisSerializer<?> hashKeySerializer)
public void setValueSerializer(RedisSerializer<?> serializer)
public void setHashValueSerializer(RedisSerializer<?> hashValueSerializer)
1. When does the framework use which of these serializers? What's the difference between the ones with 'Hash' in the name and the ones without?
2. To use Spring Data Redis repositories (where secondary index matching is supported), I should provide a RedisTemplate with default settings and name it redisTemplate, right?
3. What happens when I need both pub/sub with a custom message body format (say JSON) and Spring Data repositories in the same app? Should I define a default RedisTemplate (as in point 2) for the repositories and create my own configuration class to build the message publisher, instead of registering another RedisTemplate with a Jackson2JsonRedisSerializer (see the sketch below)?
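To make point 3 concrete, here's roughly the setup I have in mind (a minimal sketch, assuming a single connection factory; the jsonRedisTemplate bean name and the Object.class value type are my own choices):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    // Default template, named "redisTemplate" so the Spring Data Redis
    // repositories can pick it up.
    @Bean
    public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<Object, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        return template;
    }

    // Separate template used only by the pub/sub publisher, with JSON bodies.
    @Bean
    public RedisTemplate<String, Object> jsonRedisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(new Jackson2JsonRedisSerializer<>(Object.class));
        return template;
    }
}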
Thanks in advance for the help!
I'm migrating a legacy application from Spring Core 4 to Spring Boot 2.5.2.
The application is using spring-data-rest (SDR) alongside spring-data-mongodb to handle our entities.
The legacy code was overriding SDR configuration by extending the RepositoryRestMvcConfiguration and overriding the bean definition for persistentEntityJackson2Module to remove serializerModifier and deserializerModifier.
@EnableWebMvc
@EnableSpringDataWebSupport
@Configuration
class RepositoryConfiguration extends RepositoryRestMvcConfiguration {
...
...
    @Bean
    @Override
    protected Module persistentEntityJackson2Module() {
        // Remove the existing Ser/DeserializerModifier because Spring Data REST expects linked resources to be in href form. Our platform is not tailored for that yet.
        return ConverterHelper.configureSimpleModule((SimpleModule) super.persistentEntityJackson2Module())
                .setDeserializerModifier(null)
                .setSerializerModifier(null);
    }
}
This was to avoid having to process DBRefs as href links when posting entities: we pass the plain POJO instead of the href, and we persist it manually before the entity.
Following the migration, there is no way to set the same overridden configuration, but to avoid altering all our creation processes we would like to keep passing the POJO, even for DBRefs.
Here is an example of what was working before:
We have the entity we want to persist :
public class EntityWithDbRefRelation {
    ....
    @Valid
    @CreateOnTheFly // Custom annotation to create the dbrefEntity before persisting the current entity
    @DBRef
    private MyDbRefEntity myDbRefEntity;
}
the DbRefEntity:
public class MyDbRefEntity {
    ...
    private String name;
}
and the JSON POST request we are doing:
POST base-api/entityWithDbRefRelations
{
    ...
    "myDbRefEntity": {
        "name": "My own dbRef entity"
    }
}
In our database this request creates our myDbRefEntity and then creates the target entityWithDbRefRelation with a DBRef linked to the other entity.
Following the migration, the DBRef is never created, because when deserializing the JSON into a PersistingEntity the myDbRefEntity is ignored: an href is expected instead of a complex object.
I see 3 solutions:
1. Modify all our processes to first create the DBRef through one request, then create our entity with the link to that DBRef.
   - Very costly, as we have a lot of services creating entities through this backend.
   - Compliant with SDR.
2. Define our own REST MVC controllers for these operations, bypassing the SDR mapping mechanism.
3. Add AOP into the RepositoryRestMvcConfiguration around persistentEntityJackson2Module to set the serializerModifier and deserializerModifier to null (see the sketch below).
   - I would really prefer to avoid this solution, as Spring Boot must have removed this way of configuring it on purpose, and it could break when migrating to a newer version.
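For illustration, here is a rough sketch of what I mean in option 3, using a BeanPostProcessor rather than AOP. This is only a sketch under the assumption that the module bean is still named persistentEntityJackson2Module and is still a SimpleModule in Spring Boot 2.5.2; I have not verified that:

import com.fasterxml.jackson.databind.module.SimpleModule;
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.stereotype.Component;

@Component
public class ModifierStrippingPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        // Assumption: Spring Data REST still exposes its Jackson module under this bean name.
        if ("persistentEntityJackson2Module".equals(beanName) && bean instanceof SimpleModule) {
            ((SimpleModule) bean)
                    .setSerializerModifier(null)
                    .setDeserializerModifier(null);
        }
        return bean;
    }
}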
Does anyone know a way to keep treating the property as a complex object instead of an href link, apart from my 3 previous points?
Tell me if you need more information and thanks in advance for your help!
I save my data to the database using Spring.
@RepositoryRestResource(collectionResourceRel = "operators", path = "operators")
public interface OperatorsRepository extends MongoRepository<Operator, String> {
}
and I have file:
main\resources\application.properties
spring.data.mongodb.uri=mongodb://admin:password@myclusterurl/test?retryWrites=true&w=majority
In my config class I use:
@Bean
CommandLineRunner commandLineRunner(OperatorsRepository operatorsRepository) {
    return args -> operatorsRepository.save(myobjToSave);
}
Everything works fine: I get the saved data back using REST. But my problem is that in MongoDB Compass I don't see the created collections and data. Why? It's the same with the mongo shell and MongoDB Atlas.
Declaring a bean doesn't mean it is automatically executed. If you want to create a new collection from, let's say, a JSON file in src/main/resources (or test), then you have to trigger the call of this method somehow.
I suggest using the @PostConstruct annotation, which triggers once upon object creation. Since you want to create data using the OperatorsRepository, I'd use it in a @Service class injecting that object:
@PostConstruct
void createData() {
    this.operatorsRepository.save(myobjToSave);
}
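A minimal sketch of what that service could look like (the OperatorDataInitializer name and constructor injection are my own choices; myobjToSave stands in for whatever Operator you build, as in the question):

import javax.annotation.PostConstruct;
import org.springframework.stereotype.Service;

@Service
public class OperatorDataInitializer {

    private final OperatorsRepository operatorsRepository;

    public OperatorDataInitializer(OperatorsRepository operatorsRepository) {
        this.operatorsRepository = operatorsRepository;
    }

    @PostConstruct
    void createData() {
        // Save the seed data once the bean has been constructed.
        this.operatorsRepository.save(myobjToSave);
    }
}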
Try this
spring.data.mongodb.uri=mongodb://xxxxxxxxxxxxxxxxxxx
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
I am just starting to use Kafka with Spring Boot and want to send and consume JSON objects.
I am getting the following error when I attempt to consume a message from the Kafka topic:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition dev.orders-0 at offset 9903. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalArgumentException: The class 'co.orders.feedme.feed.domain.OrderItem' is not in the trusted packages: [java.util, java.lang]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:139) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:113) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:218) ~[spring-kafka-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:923) ~[kafka-clients-1.0.1.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher.access$2600(Fetcher.java:93) ~[kafka-clients-1.0.1.jar:na]
I have attempted to add my package to the list of trusted packages by defining the following property in application.properties:
spring.kafka.consumer.properties.spring.json.trusted.packages = co.orders.feedme.feed.domain
This doesn't appear to make any difference. What is the correct way to add my package to the list of trusted packages for Spring's Kafka JsonDeserializer?
Since you have the trusted-packages issue solved, for your next problem you could take advantage of the overloaded constructor
DefaultKafkaConsumerFactory(Map<String, Object> configs,
Deserializer<K> keyDeserializer,
Deserializer<V> valueDeserializer)
and the JsonDeserializer "wrapper" of Spring Kafka:
JsonDeserializer(Class<T> targetType, ObjectMapper objectMapper)
Combining the above, for Java I have:
new DefaultKafkaConsumerFactory<>(properties,
        new IntegerDeserializer(),
        new JsonDeserializer<>(Foo.class,
                new ObjectMapper()
                        .registerModules(new KotlinModule(), new JavaTimeModule())
                        .setSerializationInclusion(JsonInclude.Include.NON_NULL)
                        .setDateFormat(new ISO8601DateFormat())
                        .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)));
Essentially, you can tell the factory to use your own Deserializers and for the Json one, provide your own ObjectMapper. There you can register the Kotlin Module as well as customize date formats and other stuff.
OK, I have read the documentation in a bit more detail and have found an answer to my question. I am using Kotlin, so the creation of my consumer looks like this:
@Bean
fun consumerFactory(): ConsumerFactory<String, FeedItem> {
    val configProps = HashMap<String, Any>()
    configProps[ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG] = bootstrapServers
    configProps[ConsumerConfig.GROUP_ID_CONFIG] = "feedme"
    configProps[ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG] = StringDeserializer::class.java
    configProps[ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG] = JsonDeserializer::class.java
    configProps[JsonDeserializer.TRUSTED_PACKAGES] = "co.orders.feedme.feed.domain"
    return DefaultKafkaConsumerFactory(configProps)
}
Now I just need a way to override the creation of the Jackson ObjectMapper in the JsonDeserializer so that it can work with my Kotlin data classes that don't have a zero-argument constructor :)
Here's a bit of code from the Apache CXF documentation:
CustomMessageBodyReaderWriter provider = new CustomMessageBodyReaderWriter();
provider.setCustomProperty(true);

Dictionary<String, Object> properties = new Hashtable<>();
properties.put("org.apache.cxf.rs.provider", provider);
bundleContext.registerService(
        new String[] {"org.books.BookService"}, new BookServiceImpl(), properties);
Note that this piece of an activator method registers an OSGi service where one of the property values is an object created and configured at runtime.
Now, what if I wanted this to be a CXF DOSGi component? The only way I know to specify service registration properties for DS @Components requires the property value to be a string in the 'properties' slot of the @Component. Is there some way to have executable code involved?
CXF is not using the service properties correctly. The spec says:
Properties hold information as key/value pairs. The key must be a String object and the value should be a type recognized by Filter objects (see Filters on page 138 for a list). Multiple values for the same key are supported with arrays ([]) and Collection objects. The values of properties should be limited to primitive or standard Java types to prevent unwanted inter-bundle dependencies. ...

The service properties are intended to provide information about the service. The properties should not be used to participate in the actual function of the service. Modifying the properties for the service registration is a potentially expensive operation. For example, a Framework may pre-process the properties into an index during registration to speed up later look-ups.
Anyway, AFAIK, it's not possible with the current DS to create such properties. You can, however:
- The 'DS way': use an immediate component which creates the real component with a ComponentFactory
- Use an immediate component, and register your service with the raw OSGi API
- If you use Felix SCR, you can use an ExtComponentContext to override your component properties
Update:
An example of a ComponentFactory:
@Component(factory = "bookService")
public class BookServiceImpl implements BookService {
...
}
And a component using this factory:

@Component
public class BookServiceManager {

    @Reference(target = "(component.factory=bookService)")
    private ComponentFactory bookServiceFactory;

    @Activate
    public void start() {
        CustomMessageBodyReaderWriter provider = new CustomMessageBodyReaderWriter();
        provider.setCustomProperty(true);

        Dictionary<String, Object> properties = new Hashtable<>();
        properties.put("org.apache.cxf.rs.provider", provider);
        bookServiceFactory.newInstance(properties);
    }
}
To be honest, with this use-case, I prefer using the raw OSGi API. But this approach can be useful if you want DS to manage the @Reference dependencies in your ComponentFactory: when the dependencies are not satisfied, the ComponentFactory and all its ComponentInstances will be deactivated.
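For completeness, a rough sketch of the second option (an immediate component registering the service through the raw OSGi API); the class name and the lifecycle handling are my own, not from the CXF documentation:

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;

@Component(immediate = true)
public class BookServiceRegistrar {

    private ServiceRegistration<?> registration;

    @Activate
    public void activate(BundleContext bundleContext) {
        // Build and configure the provider at runtime, as in the question.
        CustomMessageBodyReaderWriter provider = new CustomMessageBodyReaderWriter();
        provider.setCustomProperty(true);

        Dictionary<String, Object> properties = new Hashtable<>();
        properties.put("org.apache.cxf.rs.provider", provider);
        registration = bundleContext.registerService(
                new String[] {"org.books.BookService"}, new BookServiceImpl(), properties);
    }

    @Deactivate
    public void deactivate() {
        if (registration != null) {
            registration.unregister();
        }
    }
}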
I am currently working on a 2.0 version of CXF DOSGi which will allow setting provider instances as intents. You can then publish an instance of CustomMessageBodyReaderWriter under an intent name and refer to it from your remote service by listing the required intents.
I've been struggling for the past week to successfully integrate Spring Data MongoDB into our application. We use the fairly common practice of having a separate database for each collection that we rely on. For instance, the TenantConfiguration database contains only the TenantConfigurations collection.
I've read through the documentation several times and trawled through the code for a solution but have turned up nothing. Surely such a widely adopted project has some solution for this issue? My current attempt looks like this:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = TenantConfigurationRepository.class,
        mongoTemplateRef = "tenantConfigurationTemplate")
public class TenantConfigurationRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate tenantConfigurationTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }

        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();

        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "TenantConfiguration");
    }
}
Here is one of the other individual repository configurations:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = RegisteredCardRepository.class,
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate registeredCardTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }

        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();

        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "RegisteredCard");
    }
}
Now here is the actual repository definition for the RegisteredCard repository:
@Repository
public interface RegisteredCardRepository extends MongoRepository<RegisteredCard, Guid>,
        QueryDslPredicateExecutor<RegisteredCard> { }
This all makes perfect sense to me: the individual configurations uniquely identify the specific repository interfaces they configure and, via the mongoTemplateRef parameter of the annotation, the specific template bean to use with that repository. At least, this is how the documentation seems to imply it should work.
In reality, when I start up the application, the RegisteredCardRepository resolves to a MongoDB repository instance with an associated MongoDbFactory that is bound to the TenantConfiguration database. In fact, every single repository receives the same, incorrect MongoOperations object. Despite each repository having its own unique configuration, whatever database is accessed first remains the target database for every repository.
Are there any solutions available to this problem?
It's taken me almost a week, but I've actually found a passable solution to this issue. Here's a quick run-down of facts I picked up while researching it:
@EnableMongoRepositories(basePackageClasses = Whatever.class) simply uses a qualified class name to indicate which package it should scan for all of your defined data models. It is entirely equivalent to @EnableMongoRepositories(basePackages = "com.mypackage.whatevers") if Whatever.class resides in that package.
@EnableMongoRepositories is not repeatable, but it can be used to annotate several classes. This has been covered in other SO conversations but bears repeating here. You will need to define several repository configuration classes, one for each database you intend to interact with.
Each of your individual repository configurations must specify its own MongoTemplate instance in the @EnableMongoRepositories annotation. You can get away with providing only a single Mongo bean, but each MongoTemplate relies on a specific MongoMappingContext.
The @EnableMongoRepositories annotation helps define your mapping context, which understands the structure of your data models and how to serialize them. It also understands the @Document and @Field annotations and does the heavy lifting of persisting your objects. The MongoTemplate instances are where you specify which database you want to interact with. So by providing the @EnableMongoRepositories annotation with both a basePackages attribute and a mongoTemplateRef attribute, you can tell Spring Data Mongo to "take these models and persist them in this specific database".
The unfortunate requirement of this solution is that you must organize your data models into separate packages depending on which database they belong in. If, like me, you are using a Mongo database structure that allocates a single collection to each database (fairly common for heavily accessed collections), this means that each of your data models must reside in its own package. Each of these packages must be pointed to by an @EnableMongoRepositories annotation whose mongoTemplateRef attribute names a unique MongoTemplate bean, as illustrated below.
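To make that concrete, a hypothetical layout for the two databases in the question might look like this (the package names are mine):

com.whatever.service.model.tenantconfiguration -> entities stored in the TenantConfiguration database
com.whatever.service.model.registeredcard -> entities stored in the RegisteredCard database

Each package then gets its own configuration class whose @EnableMongoRepositories points basePackages at that one package and mongoTemplateRef at the matching MongoTemplate bean.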
I hope this helps someone avoid the trouble I've gone through trying to accomplish what should be a fairly run-of-the-mill Mongo integration.
PS: Abandon all hope, those who seek to combine auditing with this configuration.
I know this is old but for those who are looking for a short solution like me:
@Autowired
@Qualifier("registeredCardTemplate")
private MongoTemplate template;
The qualifier name is whatever you set as mongoTemplateRef.