I am trying, unsuccessfully, to use EnumCodec from the latest version of r2dbc-postgresql (0.8.4), and I wondered if you could help me.
I also use spring-data-r2dbc version 1.1.1.
I took the exact example from GitHub and created an enum type “my_enum” in my Postgres database,
and a table “sample_table” which contains ‘name’ (text) and ‘value’ (my_enum).
Then I did as in the example:
SQL:
CREATE TYPE my_enum AS ENUM ('FIRST', 'SECOND');
Java Model:
enum MyEnumType {
FIRST, SECOND;
}
Codec Registration:
PostgresqlConnectionConfiguration.builder()
.codecRegistrar(EnumCodec.builder().withEnum("my_enum", MyEnumType.class).build());
I use DatabaseClient in order to communicate with the DB.
I tried to insert using 2 methods:
databaseClient.insert().into(SampleTable.class)
.using(sampleTable).fetch().rowsUpdated();
or:
databaseClient.insert().into("sample_table")
.value("name", sampleTable.getName())
.value("value", sampleTable.getValue())
.then();
where SampleTable is:
@Data
@AllArgsConstructor
@NoArgsConstructor
@Builder
@Table("sample_table")
@JsonIgnoreProperties(ignoreUnknown = true)
@JsonInclude(JsonInclude.Include.NON_NULL)
public class SampleTable implements Serializable {

    private String name;

    @Column("value")
    @JsonProperty("value")
    private MyEnumType value;
}
But I get the same error using both:
column "value" is of type my_enum but expression is of type character varying
Can you please help me understand what I did wrong, or refer me to some working example?
I appreciate your help!
Spring Data considers enum values as values to be converted to String by default. You need to register a Converter that retains the type by writing the enum type as-is.
@WritingConverter
class MyEnumTypeConverter implements Converter<MyEnumType, MyEnumType> {

    @Override
    public MyEnumType convert(MyEnumType source) {
        return source;
    }
}
Next, you need to register the converter. If you're using Spring Data R2DBC's AbstractR2dbcConfiguration, then override getCustomConverters():
class MyConfiguration extends AbstractR2dbcConfiguration {

    @Override
    protected List<Object> getCustomConverters() {
        return Collections.singletonList(new MyEnumTypeConverter());
    }

    // …
}
Alternatively, if you configure DatabaseClient standalone, then you need a bit more code:
PostgresqlConnectionConfiguration configuration = PostgresqlConnectionConfiguration.builder()
.codecRegistrar(EnumCodec.builder().withEnum("my_enum", MyEnumType.class).build())
.host(…)
.username(…)
.password(…)
.database(…).build();
R2dbcDialect dialect = PostgresDialect.INSTANCE;
DefaultReactiveDataAccessStrategy strategy = new DefaultReactiveDataAccessStrategy(dialect, Collections.singletonList(new MyEnumTypeConverter()));
DatabaseClient databaseClient = DatabaseClient.builder()
.connectionFactory(new PostgresqlConnectionFactory(configuration))
.dataAccessStrategy(strategy)
.build();
However, there are two bugs in the R2DBC driver that prevent Spring Data from working as expected:
Row.decode(…) fails for enum type with IllegalArgumentException: 72093 is not a valid object id #301
EnumCodec decoding fails if the requested value type is Object #302
As a temporary workaround, you can duplicate EnumCodec in your codebase and apply the fix from #302 until a new release of R2DBC Postgres is available.
I have tried using the pg enum type and a Java Enum class in my sample projects.
If you are using the DatabaseClient API (in Spring 5.3 core, without Spring Data R2dbc), registering an EnumCodec in the PostgresqlConnectionFactory is enough.
Check my example.
If you create a pg enum as a column type in the table schema and register an EnumCodec via PostgresqlConnectionConfiguration.builder, you also need to write a custom @ReadingConverter to read the custom enum.
Check my example here.
If you use a text-based type (varchar) in the table schema with a Java Enum, there is no need for extra conversion effort; check my example here.
The Spring Data R2dbc documentation says that if you use the driver's built-in mechanism to handle enums, you have to register an EnumWriteSupport. But in my experience, when using Spring Data R2dbc, writing is handled automatically, while a reading converter is required to read an enum from Postgres.
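For illustration, such a reading converter for the enum from the first question might look like this (a minimal sketch, assuming the value comes back from the driver as text; register it alongside the writing converter, e.g. via getCustomConverters()):
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

// Minimal sketch: maps the textual enum value coming back from Postgres onto the Java enum.
@ReadingConverter
class MyEnumTypeReadingConverter implements Converter<String, MyEnumType> {

    @Override
    public MyEnumType convert(String source) {
        return MyEnumType.valueOf(source);
    }
}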
I'm going to use @InsertOnlyProperty with Spring Boot 2.7, as it will take time for us to migrate to Spring Boot 3.0!
So I'm going to create my own DataAccessStrategy based on the DefaultDataAccessStrategy, and also override SqlParametersFactory so that I can pass the RelationalPersistentProperty::isInsertOnly condition to the getParameterSource method, which means overriding RelationalPersistentProperty to add isInsertOnly. Is there a way to override RelationalPersistentProperty to add an isInsertOnly property? Am I on the right track, or is there a better solution than switching to Spring Boot 3.0 now? Thank you!
Since @InsertOnlyProperty is only supported for the aggregate root (in Spring Boot 3.0), one approach could be to copy the data to a surrogate object and use a custom method to save it. It would look something like this:
public record MyAggRoot(@Id Long id,
        /* @InsertOnlyProperty */ Instant createdAt, int otherField) {}
public interface MyAggRootRepository
extends Repository<MyAggRoot, Long>, MyAggRootRepositoryCustom { /* ... */ }
public interface MyAggRootRepositoryCustom {
MyAggRoot save(MyAggRoot aggRoot);
}
@Component
public class MyAggRootRepositoryCustomImpl implements MyAggRootRepositoryCustom {

    private final JdbcAggregateOperations jao;

    public MyAggRootRepositoryCustomImpl(JdbcAggregateOperations jao) {
        this.jao = jao;
    }

    // Override table name which would otherwise be derived from the class name
    @Table("my_agg_root")
    private record MyAggRootForUpdate(@Id Long id, int otherField) {}

    @Override
    public MyAggRoot save(MyAggRoot aggRoot) {
        // If this is a new instance, insert as-is
        if (aggRoot.id() == null) return jao.save(aggRoot);
        // Create a copy without the insert-only field
        var copy = new MyAggRootForUpdate(aggRoot.id(), aggRoot.otherField());
        jao.update(copy);
        return aggRoot;
    }
}
It is, however, a bit verbose, so it is only a reasonable solution if you need it in a few places.
I'm using spring-data-mongodb at the moment, so this question is primarily in the context of MongoDB, but I suspect it applies to repository code in general.
Out of the box, when using a MongoRepository<T, ID> interface (or any other Repository<T, ID> descendant), the entity type T is expected to be the document type (the type that defines the document schema).
As a result, injecting such a repository into a service component means the repository leaks database schema information into the service tier (highly pseudo-code):
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

interface MyRepository extends MongoRepository<MyDocument, String> {
}
class MyService {

    MyRepository repository;

    MyModel getById(UUID id) {
        var documentId = convert(id, ...);
        var matchingDocument = repository.findById(documentId).orElse(...);
        var model = convert(matchingDocument, ...);
        return model;
    }
}
Whilst ideally I'd want to do this:
class MyModel {
    UUID id;
}

@Document
class MyDocument {
    @Id
    String id;
}

@Configuration
class MyMagicConversionConfig {
    ...
}

class MyDocumentToModelConverter implements Converter<MyDocument, MyModel> {
    ...
}

class MyModelToDocumentConverter implements Converter<MyModel, MyDocument> {
    ...
}

// Note that the model and the model's ID type are used in the repository declaration
interface MyRepository extends MongoRepository<MyModel, UUID> {
}
class MyService {

    MyRepository repository;

    MyModel getById(UUID id) {
        // Repository now returns the model because it was converted upstream
        // by the mongo persistence layer.
        var matchingModel = repository.findById(id).orElse(...);
        return matchingModel;
    }
}
Defining this conversion once seems significantly more practical than having to do it consistently throughout the service code, so I suspect I'm just missing something.
But of course this requires some way to make the Mongo mapping layer aware of which conversion has to be applied to move between MyModel and MyDocument, and to use the latter as its actual source of mapping metadata (e.g. @Document, @Id, etc.).
I've been fiddling with custom converters, but I just can't seem to make the MongoDB mapping component do the above.
My two questions are:
1. Is it currently possible to define custom converters or implement callbacks that allow me to define and implement this model <-> document conversion once and abstract it away from my service tier?
2. If not, what is the idiomatic way to clean this up so that the service layer can stay blissfully unaware of how, or with what schema, an entity is persisted? A lot of Spring Boot codebases appear to be fine with using the type that defines the database schema as their model, but that seems suboptimal. Suggestions welcome!
Thanks!
I think you're blowing things a bit out of proportion. The service layer is not aware of the schema. It is aware of the types returned by the repository. How the properties of those are mapped onto the schema depends on the object-document mapping. This, by default, uses the property name, as that's the most straightforward thing to do. That translation can be customized either by using annotations on the document type or by registering a FieldNamingStrategy with Spring Data MongoDB.
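For example, the annotation-based customization is just a @Field on the document type (a sketch; the displayName property is made up for illustration):
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Field;

@Document
class MyDocument {

    @Id
    String id;

    // Persisted as "display_name" in MongoDB instead of the Java property name.
    @Field("display_name")
    String displayName;
}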
Spring Data MongoDB's object-document mapping subsystem provides a lot of customization hooks that allow transforming arbitrary MongoDB documents into entities. The types the repositories return are your domain objects, which - again, only by default - are mapped onto a MongoDB document 1:1, simply because that's the most reasonable thing to do in the first place.
If you are really in doubt, you can manually implement individual repository methods using the MongoTemplate API, which allows you to explicitly define the type the data should be projected into.
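As an illustration of that last point, a manually implemented repository fragment could centralize the conversion in one place (a sketch built on the question's types; MyRepositoryCustom, the MyModel constructor, and the id translation are assumptions):
import java.util.Optional;
import java.util.UUID;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;

// Sketch of a manually implemented repository fragment.
class MyRepositoryCustomImpl implements MyRepositoryCustom {

    private final MongoTemplate mongoTemplate;

    MyRepositoryCustomImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public Optional<MyModel> findModelById(UUID id) {
        Query query = Query.query(Criteria.where("_id").is(id.toString()));
        // Read via the document type, so its mapping metadata (@Document, @Id) applies...
        MyDocument document = mongoTemplate.findOne(query, MyDocument.class);
        // ...then convert to the service-facing model in this one central place.
        return Optional.ofNullable(document).map(d -> new MyModel(UUID.fromString(d.id)));
    }
}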
You can use something like MapStruct or write your own Singleton Mapper.
Then create default methods in your repository:
interface DogRepository extends MongoRepository<DogDocument, String> {

    DogDocument findById(String id);

    default DogModel dogById(String id) {
        return DogMapper.INSTANCE.toModel(findById(id));
    }
}
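The DogMapper used above could be declared roughly like this with MapStruct (a sketch; it assumes DogModel and DogDocument have matching property names):
import org.mapstruct.Mapper;
import org.mapstruct.factory.Mappers;

@Mapper
public interface DogMapper {

    DogMapper INSTANCE = Mappers.getMapper(DogMapper.class);

    // MapStruct generates the field-by-field mapping implementation at compile time.
    DogModel toModel(DogDocument document);
}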
I want to upgrade my Spring Boot project from 1.5.22.RELEASE to 2.6.6. While upgrading, I'm getting the following errors; please suggest how to fix them:
The method findAll() in the type CrudRepository<Build,ObjectId> is not applicable for the arguments (Predicate)
The method findAll() in the type CrudRepository<Build,ObjectId> is not applicable for the arguments (Predicate, PageRequest)
Repository:
package com.capitalone.dashboard.repository;

import org.springframework.data.querydsl.QueryDslPredicateExecutor;
import org.springframework.data.repository.CrudRepository;

public interface BuildRepository extends CrudRepository<Build, ObjectId>, QueryDslPredicateExecutor<Build> {

    Build findByCollectorItemIdAndNumber(ObjectId collectorItemId, String number);

    Build findByCollectorItemIdAndBuildUrl(ObjectId collectorItemId, String buildUrl);

    ...
}
Client code:
Iterable<Build> result;
if (request.getMax() == null) {
    result = buildRepository.findAll(builder.getValue());
} else {
    PageRequest pageRequest = PageRequest.of(0, request.getMax(), Sort.Direction.DESC, "timestamp");
    result = buildRepository.findAll(builder.getValue(), pageRequest).getContent();
}
Build class:
@Document(collection = "builds")
public class Build extends BaseModel {
    private ObjectId collectorItemId;
    private long timestamp;
    private String number;
I also tried changing it to findAllById, but then got the below error:
The method findAllById() in the type CrudRepository<Build,ObjectId> is not applicable for the arguments (Predicate)
Though the interface extends QueryDslPredicateExecutor, why am I not able to use findAll(predicate)?
Looking at the API docs for CrudRepository and JpaRepository, I see that findAll(Example<S> example) is only available in the JpaRepository interface. (I assume builder.getValue() in your code is a single value.)
For CrudRepository you would need to use findAllById(Iterable<ID> ids).
Thus, I suggest switching to JpaRepository or using findAllById.
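For example, the findAllById variant is called with a collection of ids rather than a Predicate (a sketch; the ids shown are placeholders):
// Sketch: CrudRepository's findAllById takes a collection of ids, not a Predicate.
List<ObjectId> ids = List.of(firstBuildId, secondBuildId); // placeholder ids
Iterable<Build> builds = buildRepository.findAllById(ids);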
Say I have a class structure as follows; it is pretty basic inheritance:
class Manager extends Person {

    private String name;

    Manager() {
    }
}

class Clerk extends Person {

    private String salary;
}
In Spring Data, if I store these in Mongo, is it possible to configure it to map to the correct class when I do a getById? I assume I will have to store some class info?
What I don't want to do is create separate repository classes if I can avoid it; also, I don't know which type the object will be when I do a getById.
If you are using a spring-data-mongodb MongoRepository to write data to your database according to your entity model, a _class field will be added to document roots and to complex property types (see this section). This field stores the fully qualified name of the Java class and allows disambiguation when mapping from a MongoDB Document to the Spring Data model.
However, if you only use MongoRepository to read from your database, you need to tell Spring Data how to map your entities explicitly. You will need to Override Mapping with Explicit Converters.
PersonReadConverter.class
public class PersonReadConverter implements Converter<Document, Person> {

    @Override
    public Person convert(Document source) {
        if (source.get("attribute_specific_to_Clerk") != null) {
            Clerk clerk = new Clerk();
            // Set attributes using setters or a defined constructor
            return clerk;
        } else {
            Manager manager = new Manager();
            // Set attributes using setters or a defined constructor
            return manager;
        }
    }
}
Then, you have to Register Spring Converters with the MongoConverter.
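For instance, the registration can be done by exposing a MongoCustomConversions bean (a sketch; the config class name is made up):
import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.convert.MongoCustomConversions;

@Configuration
public class MongoConversionConfig {

    // Makes the custom read converter known to the MappingMongoConverter.
    @Bean
    public MongoCustomConversions customConversions() {
        return new MongoCustomConversions(List.of(new PersonReadConverter()));
    }
}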
You can find an example of my own at: Spring Data Mongo - How to map inherited POJO entities?
I built a Spring Boot application that accesses a database and extracts data from it. Everything is working fine, but I want to configure the table names from an external .properties file,
like:
@Entity
@Table(name = "${fleet.table.name}")
public class Fleet {
    ...
}
I tried to find something, but I didn't find anything.
You can access external properties with the @Value("...") annotation.
So my question is: is there any way I can configure the table names? Or can I change/intercept the query that is sent by Hibernate?
Solution:
OK, Hibernate 5 works with the PhysicalNamingStrategy, so I created my own PhysicalNamingStrategy.
@Configuration
public class TableNameConfig {

    @Value("${fleet.table.name}")
    private String fleetTableName;

    @Value("${visits.table.name}")
    private String visitsTableName;

    @Value("${route.table.name}")
    private String routeTableName;

    @Bean
    public PhysicalNamingStrategyStandardImpl physicalNamingStrategyStandard() {
        return new PhysicalNamingImpl();
    }

    class PhysicalNamingImpl extends PhysicalNamingStrategyStandardImpl {

        @Override
        public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
            switch (name.getText()) {
                case "Fleet":
                    return new Identifier(fleetTableName, name.isQuoted());
                case "Visits":
                    return new Identifier(visitsTableName, name.isQuoted());
                case "Result":
                    return new Identifier(routeTableName, name.isQuoted());
                default:
                    return super.toPhysicalTableName(name, context);
            }
        }
    }
}
Also, this Stack Overflow post about NamingStrategy gave me the idea.
Table names really come from Hibernate itself via its strategy interfaces. Boot configures this as SpringNamingStrategy, and there were some changes in Boot 2.x to how things can be customised. It's worth reading gh-1525, where these changes were made. Configure Hibernate Naming Strategy has some more info.
There were some ideas to add custom properties to configure SpringNamingStrategy, but we went with allowing easier customisation of whole strategy beans, as that lets users do whatever they need to do.
AFAIK, there's no direct way to do the config you asked for, but I'd assume that if you create your own strategy, you can then autowire your own properties into it. As those customised strategy interfaces see the entity name, you could reserve a keyspace in Boot's configuration properties for this and match entity names.
mytables.naming.fleet.name=foobar
mytables.naming.othertable.name=xxx
Your configuration properties would take mytables, and within that, naming would be a Map. Then, in your custom strategy, it would simply be a matter of checking the mapping table for whether you defined a custom name.
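A sketch of what that could look like as a @ConfigurationProperties holder (the class and nested type names are illustrative, not from Boot itself):
import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;

// Binds the "mytables.naming.*" keyspace shown above into a Map keyed by entity name.
@ConfigurationProperties(prefix = "mytables")
public class MyTablesProperties {

    // Keyed by entity name, e.g. "fleet" or "othertable" from the properties above.
    private Map<String, Table> naming = new HashMap<>();

    public Map<String, Table> getNaming() {
        return naming;
    }

    public static class Table {

        private String name;

        public String getName() { return name; }

        public void setName(String name) { this.name = name; }
    }
}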
Spring Boot solution:
Create the class below:
@Configuration
public class CustomPhysicalNamingStrategy extends SpringPhysicalNamingStrategy {

    @Value("${table.name}")
    private String tableName;

    @Override
    public Identifier toPhysicalTableName(final Identifier identifier, final JdbcEnvironment jdbcEnv) {
        return Identifier.toIdentifier(tableName);
    }
}
Add the properties below to application.properties:
spring.jpa.properties.hibernate.physical_naming_strategy=<package.name>.CustomPhysicalNamingStrategy
table.name=product