Spring Boot MongoRepository with AWS DocumentDB: "Retryable writes are not supported"

I'm trying to update a document using MongoRepository in Spring, connected to an AWS DocumentDB cluster. I get a "301 - Retryable writes are not supported" error even though the URL used to connect to DocumentDB includes retryWrites=false, so I don't know how I'm supposed to update documents, or whether retryable writes have to be disabled somewhere else in Spring as well.
The URL for the DocumentDB connection looks like this:
mongodb://<username>:<password>@mongo-dev-cluster.cluster-xxxxx.eu-west-2.docdb.amazonaws.com:27017/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false
The code for the model, repository and service looks like this:
@Service
public class CarService {

    @Autowired
    private CarRepository carRepository;

    public void update(String id, Car car) {
        // Just saving wouldn't work because there is an indexed key
        car.setId(id);
        carRepository.save(car);
    }
}
@Repository
public interface CarRepository extends MongoRepository<Car, String> {
}
@Document
@TypeAlias("car")
public class Car {

    @Id
    private String id;

    @Indexed(unique = true)
    private String carName;

    private String color;
}
The application.properties looks like this:
spring.data.mongodb.username=${DATABASE_USERNAME}
spring.data.mongodb.password=${DATABASE_PASSWORD}
spring.data.mongodb.database=cars-db
spring.data.mongodb.port=27017
spring.data.mongodb.host=mongo-dev-cluster.cluster-xxxxx.eu-west-2.docdb.amazonaws.com
How can I stop this error from happening when updating a document where I want to keep the ID and its indexed values the same?

I figured out what I was doing wrong. I was configuring MongoDB in Spring with separate host and port properties, which meant the retryWrites=false parameter in the URL was never being read. Instead, I changed my application.properties to use a URI rather than a host and port. It now looks like this:
spring.data.mongodb.database=cars-db
spring.data.mongodb.uri=mongodb://<username>:<password>@mongo-dev-cluster.cluster-xxxxx.eu-west-2.docdb.amazonaws.com:27017/?retryWrites=false
This means the retryWrites=false parameter is now read correctly; before, Spring was apparently only reading the host name, so retryable writes were never disabled.
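For reference, if you'd rather keep the separate host and port properties, Spring Boot (2.x and later) also lets you customise the driver settings with a MongoClientSettingsBuilderCustomizer bean; a minimal sketch of that approach:
import org.springframework.boot.autoconfigure.mongo.MongoClientSettingsBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DocumentDbConfig {

    // Spring Boot applies this customizer to the auto-configured MongoClient,
    // so retryable writes are disabled regardless of how the connection
    // details are supplied.
    @Bean
    public MongoClientSettingsBuilderCustomizer disableRetryableWrites() {
        return builder -> builder.retryWrites(false);
    }
}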

Related

How to avoid Spring Repository<T, ID> to leak persistence information into service tier

I'm using spring-data-mongodb at the moment, so this question is primarily in the context of MongoDB, but I suspect it applies to repository code in general.
Out of the box, when using a MongoRepository<T, ID> interface (or any other Repository<T, ID> descendant), the entity type T is expected to be the document type (the type that defines the document schema).
As a result, injecting such a repository into a service component means the repository leaks database schema information into the service tier (highly pseudo):
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

interface MyRepository extends MongoRepository<MyDocument, String> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        var documentId = convert(id, ...);
        var matchingDocument = repository.findById(documentId).orElse(...);
        var model = convert(matchingDocument, ...);
        return model;
    }
}
Whilst ideally I'd want to do this:
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

@Configuration
class MyMagicConversionConfig {
    ...
}

class MyDocumentToModelConverter implements Converter<MyDocument, MyModel> {
    ...
}

class MyModelToDocumentConverter implements Converter<MyModel, MyDocument> {
    ...
}

// Note that the model and the model's ID type are used in the repository declaration
interface MyRepository extends MongoRepository<MyModel, UUID> {
}

class MyService {
    MyRepository repository;

    MyModel getById(UUID id) {
        // Repository now returns the model because it was converted upstream
        // by the mongo persistence layer.
        var matchingModel = repository.findById(id).orElse(...);
        return matchingModel;
    }
}
Defining this conversion once seems significantly more practical than having to do it consistently throughout the service code, so I suspect I'm just missing something.
But of course this requires some way to inform the Mongo mapping layer of the conversion to apply when moving between MyModel and MyDocument, and to use the latter as its actual source of mapping metadata (e.g. @Document, @Id, etc.).
I've been fiddling with custom converters, but I just can't seem to make the MongoDB mapping component do the above.
My two questions are:
Is it currently possible to define custom converters or implement callbacks that allow me to define and implement this model <-> document conversion once and abstract it away from my service tier?
If not, what is the idiomatic way to clean this up so that the service layer can stay blissfully unaware of how, or with what schema, an entity is persisted? A lot of Spring Boot codebases appear to be fine with using the type that defines the database schema as their model, but that seems suboptimal. Suggestions welcome!
Thanks!
I think you're blowing things a bit out of proportion. The service layer is not aware of the schema. It is aware of the types returned by the repository. How the properties of those are mapped onto the schema depends on the object-document mapping, which by default uses the property name, as that's the most straightforward thing to do. That translation can be customized either with annotations on the document type or by registering a FieldNamingStrategy with Spring Data MongoDB.
Spring Data MongoDB's object-document mapping subsystem provides a lot of customization hooks that allow transforming arbitrary MongoDB documents into entities. The types the repositories return are your domain objects, which - again, only by default - are mapped onto a MongoDB document 1:1, simply because that's the most reasonable thing to do in the first place.
If really in doubt, you can manually implement individual repository methods using the MongoTemplate API, which lets you explicitly define the type the data should be projected into.
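For illustration, a manually implemented repository fragment along those lines might look roughly like this (the collection name "myDocuments" is an assumption, and the UUID/String id conversion from the question is glossed over):
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Fragment interface that the repository interface can extend.
interface MyModelLookup {
    MyModel findModelById(String documentId);
}

class MyModelLookupImpl implements MyModelLookup {

    private final MongoTemplate template;

    MyModelLookupImpl(MongoTemplate template) {
        this.template = template;
    }

    @Override
    public MyModel findModelById(String documentId) {
        // Query the collection backing MyDocument, but project the
        // result into MyModel instead of the document type.
        Query query = Query.query(Criteria.where("_id").is(documentId));
        return template.findOne(query, MyModel.class, "myDocuments");
    }
}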
You can use something like MapStruct or write your own singleton mapper.
Then create default methods in your repository:
interface DogRepository extends MongoRepository<DogDocument, String> {

    default DogModel dogById(String id) {
        // findById is inherited from CrudRepository and returns an Optional,
        // so the mapping is applied to the value if one is present.
        return findById(id)
                .map(DogMapper.INSTANCE::toModel)
                .orElse(null);
    }
}
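The DogMapper referenced above isn't shown in the answer; with MapStruct it could be declared roughly like this (assuming DogDocument and DogModel have matching property names):
import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface DogMapper {

    DogMapper INSTANCE = Mappers.getMapper(DogMapper.class);

    // MapStruct generates the field-by-field mapping at compile time.
    DogModel toModel(DogDocument document);

    DogDocument toDocument(DogModel model);
}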

Returning only the first 10 records - Redis OM

I'm using Redis OM for Spring Boot, and I'm having trouble querying objects because it only returns the first 10 records.
Repository Class:
public interface RedisBillerRepository extends RedisDocumentRepository<Biller, Long> {
    List<Biller> findByClientIds(String clientId);
}
Is there a way to return ALL the objects with the specific clientId, not just the first 10?
The only way I found was with the Page interface. For example, your repository would look like this:
public interface RedisBillerRepository extends RedisDocumentRepository<Biller, Long> {
Page<Biller> findByClientIds(String clientId, Pageable pageable);
}
And your class could look like this:
public class BillerService {

    @Autowired
    RedisBillerRepository redisBillerRepository;

    public List<Biller> getAllClientsById(String clientId) {
        Pageable pageRequest = PageRequest.of(0, 500000);
        Page<Biller> foundBillers = redisBillerRepository.findByClientIds(clientId, pageRequest);
        List<Biller> billersAsList = foundBillers.getContent();
        return billersAsList;
    }
}
You have to set the limit for now.
I'm the author of the library... @member2 is correct. RediSearch currently defaults the underlying FT.SEARCH command (https://redis.io/commands/ft.search/) to returning the first 10 records found. Currently, the only way to override that is to use the pagination constructs in Spring.
I will expose a configuration parameter in upcoming versions to set the MAX globally.
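Until that parameter exists, a store-agnostic workaround is to walk the pages with the standard Spring Data Page API instead of guessing one huge page size. A minimal sketch reusing the repository above:
import java.util.ArrayList;
import java.util.List;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Service;

@Service
public class BillerService {

    private final RedisBillerRepository redisBillerRepository;

    public BillerService(RedisBillerRepository redisBillerRepository) {
        this.redisBillerRepository = redisBillerRepository;
    }

    // Collects every match by requesting successive pages until exhausted.
    public List<Biller> getAllClientsById(String clientId) {
        List<Biller> all = new ArrayList<>();
        Pageable pageable = PageRequest.of(0, 100);
        Page<Biller> page;
        do {
            page = redisBillerRepository.findByClientIds(clientId, pageable);
            all.addAll(page.getContent());
            pageable = page.nextPageable();
        } while (page.hasNext());
        return all;
    }
}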

Table name configured with external properties file

I'm building a Spring Boot application that accesses a database and extracts data from it. Everything is working fine, but I want to configure the table names from an external .properties file, like:
@Entity
@Table(name = "${fleet.table.name}")
public class Fleet {
    ...
}
I tried to find something but didn't find anything. I know you can access external properties with the @Value("...") annotation.
So my question is: is there any way I can configure the table names? Or can I change/intercept the query that is sent by Hibernate?
Solution:
OK, Hibernate 5 works with the PhysicalNamingStrategy, so I created my own PhysicalNamingStrategy.
@Configuration
public class TableNameConfig {

    @Value("${fleet.table.name}")
    private String fleetTableName;

    @Value("${visits.table.name}")
    private String visitsTableName;

    @Value("${route.table.name}")
    private String routeTableName;

    @Bean
    public PhysicalNamingStrategyStandardImpl physicalNamingStrategyStandard() {
        return new PhysicalNamingImpl();
    }

    class PhysicalNamingImpl extends PhysicalNamingStrategyStandardImpl {

        @Override
        public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
            switch (name.getText()) {
                case "Fleet":
                    return new Identifier(fleetTableName, name.isQuoted());
                case "Visits":
                    return new Identifier(visitsTableName, name.isQuoted());
                case "Result":
                    return new Identifier(routeTableName, name.isQuoted());
                default:
                    return super.toPhysicalTableName(name, context);
            }
        }
    }
}
Also, this Stack Overflow question about NamingStrategy gave me the idea.
Table names really come from Hibernate itself via its strategy interfaces. Boot configures this as SpringNamingStrategy, and there were some changes in Boot 2.x to how things can be customised. It's worth reading gh-1525, where these changes were made. Configure Hibernate Naming Strategy has some more info.
There were some ideas to add custom properties to configure SpringNamingStrategy, but we went with allowing easier customisation of the whole strategy bean, as that allows users to do whatever they need to do.
AFAIK there's no direct way to do the config you asked about, but I'd assume that if you create your own strategy you can auto-wire your own properties into it. As those customised strategy interfaces see the entity name, you could reserve a keyspace in Boot's configuration properties for this and match entity names:
mytables.naming.fleet.name=foobar
mytables.naming.othertable.name=xxx
Your configuration properties would take mytables, and within that, naming would be a Map. Then in your custom strategy it would simply be a matter of checking the map for whether you defined a custom name for a given entity.
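A sketch of what such a properties holder might look like (the class and property names here are illustrative, not an existing Boot API):
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;

// Binds mytables.naming.<entity>.name=<table> entries into a map,
// e.g. mytables.naming.fleet.name=foobar
@ConfigurationProperties(prefix = "mytables")
public class TableNamingProperties {

    private final Map<String, Naming> naming = new HashMap<>();

    public Map<String, Naming> getNaming() {
        return naming;
    }

    public static class Naming {

        private String name;

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }
    }
}
You would still need to register it (e.g. with @EnableConfigurationProperties(TableNamingProperties.class)) and inject it into your custom naming strategy, falling back to the default name when the map has no entry for an entity.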
Spring boot solution:
Create the class below:
@Configuration
public class CustomPhysicalNamingStrategy extends SpringPhysicalNamingStrategy {

    @Value("${table.name}")
    private String tableName;

    @Override
    public Identifier toPhysicalTableName(final Identifier identifier, final JdbcEnvironment jdbcEnv) {
        return Identifier.toIdentifier(tableName);
    }
}
Add the properties below to application.properties:
spring.jpa.properties.hibernate.physical_naming_strategy=<package.name>.CustomPhysicalNamingStrategy
table.name=product

Spring Boot @Cacheable returning all fields of superclasses filled with null values

We are facing a strange problem and I don't quite understand what's going on. I hope someone else has already had the same issue and has a clue.
We wrote a simple REST service making use of @Cacheable:
@GetMapping(value = "/get/{" + PARAM_TENANT + "}/{" + PARAM_UID + "}")
@Cacheable(value = GET_ORDERS_BY_UID)
public GetOrdersResponseDto getOrdersByUid(@PathVariable final String tenant, @PathVariable final String uid) {
    ....
    return new GetOrdersResponseDto(createCacheKey(), orderResponseDtos);
}
GetOrdersResponseDto consists of several fields. Some contain instances of custom classes, some contain lists of them, and others hold simple primitive values.
When the GetOrdersResponseDto response is served from the cache, all fields of objects that are stored inside a list AND are declared in the objects' superclass are filled with null values.
We are using Hazelcast as the cache implementation, and our cache config is very basic:
@Component
public class HazelcastConfig extends Config {

    @Autowired
    public HazelcastConfig(final ConfigClient configClient) {
        super();
        final GroupConfig groupConfig = getGroupConfig();
        final String name = configClient
                .getConfigPropertyValueOrThrow("public", "com.orderservice.hazelcast.group.name");
        groupConfig.setName("foogroup");
        final String password = configClient
                .getConfigPropertyValueOrThrow("public", "com.orderservice.hazelcast.group.password");
        groupConfig.setPassword(password);
    }
}
The response class looks as follows:
public class GetOrdersResponseDto implements Serializable {
    private String cacheSerial;
    private List<OrderResponseDto> orderResponseDtos;
}
And the problem occurs only for fields of OrderResponseDto that are declared in its superclass.
I hope someone can give us a hint about the cause of this strange behaviour.
Edit: I found out that the problem only occurs for objects that are stored inside lists...
This is Java serialization behaviour. See https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html
If your object is serializable but extends a class that is not, then instead of the NotSerializableException which would be useful, the parent's fields are simply not written; on deserialization they are re-initialized by the parent's no-arg constructor, which is why you see them as nulls.
You can prove this in a unit test.
Here's one to reuse - https://github.com/hazelcast/hazelcast-code-samples/blob/master/serialization/hazelcast-airlines/the-code/src/test/java/com/hazelcast/samples/serialization/hazelcast/airlines/V1FlightTest.java
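If you'd rather see the effect without Hazelcast, here's a minimal, self-contained demonstration (the class names are made up for the example):
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

class Parent { // deliberately NOT Serializable
    String parentField = "set in constructor";
}

class Child extends Parent implements Serializable {
    String childField = "child value";
}

public class NonSerializableParentDemo {
    public static void main(String[] args) throws Exception {
        Child original = new Child();
        original.parentField = "changed before serialization";

        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(original);
        }
        Child copy;
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            copy = (Child) in.readObject();
        }

        // childField survives the round trip; parentField is re-initialized
        // by Parent's no-arg constructor, so the change made above is lost.
        System.out.println(copy.childField);  // child value
        System.out.println(copy.parentField); // set in constructor
    }
}
Making the superclass implement Serializable (or handing Hazelcast a custom serializer) avoids the problem.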

Why is this method in a Spring Data repository considered a query method?

We have implemented an application that should be able to use either JPA, Couchbase or MongoDB (the list may grow in the future). We successfully implemented JPA and Couchbase by separating the repositories for each, e.g. the JPA ones live in org.company.repository.jpa while the Couchbase ones live in org.company.repository.cb. All repository interfaces extend a common repository found in org.company.repository. We are now targeting MongoDB by creating a new package, org.company.repository.mongo. However, we are encountering this error:
No property updateLastUsedDate found for type TokenHistory!
Here is our code:
@Document
public class TokenHistory extends BaseEntity {
    private String subject;
    private Date lastUpdate;
    // Getters and setters here...
}
Under org.company.repository.TokenHistoryRepository.java:
@NoRepositoryBean
public interface TokenHistoryRepository<ID extends Serializable> extends TokenHistoryRepositoryCustom, BaseEntityRepository<TokenHistory, ID> {
    // No problem here. Handled by Spring Data
    TokenHistory findBySubject(@Param("subject") String subject);
}

// The custom method
interface TokenHistoryRepositoryCustom {
    void updateLastUsedDate(@Param("subject") String subject);
}
Under org.company.repository.mongo.TokenHistoryMongoRepository.java:
@RepositoryRestResource(path = "/token-history")
public interface TokenHistoryMongoRepository extends TokenHistoryRepository<String> {
    TokenHistory findBySubject(@Param("subject") String subject);
}

class TokenHistoryMongoRepositoryCustomImpl {
    public void updateLastUsedDate(String subject) {
        //TODO implement this
    }
}
And for the Mongo configuration:
@Configuration
@Profile("mongo")
@EnableMongoRepositories(basePackages = {
        "org.company.repository.mongo"
}, repositoryImplementationPostfix = "CustomImpl",
        repositoryBaseClass = BaseEntityRepositoryMongoImpl.class
)
public class MongoConfig {
}
The setup is the same for both JPA and Couchbase, but we didn't encounter this error there; both were able to use the inner class with the "CustomImpl" suffix, which should be the case based on the documentation.
Is there a problem with my setup or configuration for MongoDB?
Your TokenHistoryMongoRepositoryCustomImpl doesn't actually implement the TokenHistoryRepositoryCustom interface, which means there is no way for Spring Data to discover that updateLastUsedDate(…) in that class is meant to be the implementation of the interface method. Hence it's considered a query method, which then triggers query derivation.
I highly doubt that this works for the other stores as claimed, as the code inspecting query methods is shared in DefaultRepositoryInformation.
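For illustration, a fragment implementation that Spring Data would pick up could look like this (the MongoTemplate-based update body is a hypothetical sketch, not code from the question):
import java.util.Date;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

// Implementing the custom interface lets Spring Data match this class to
// updateLastUsedDate(...), so it is no longer treated as a query method.
class TokenHistoryMongoRepositoryCustomImpl implements TokenHistoryRepositoryCustom {

    private final MongoTemplate template;

    TokenHistoryMongoRepositoryCustomImpl(MongoTemplate template) {
        this.template = template;
    }

    @Override
    public void updateLastUsedDate(String subject) {
        template.updateFirst(
                Query.query(Criteria.where("subject").is(subject)),
                Update.update("lastUpdate", new Date()),
                TokenHistory.class);
    }
}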
