I built a Spring Boot application that accesses a database and extracts data from it. Everything is working fine, but I want to configure the table names from an external .properties file.
Like this:
@Entity
@Table(name = "${fleet.table.name}")
public class Fleet {
    ...
}
I tried to find something like this but couldn't.
You can access external properties with the @Value("...") annotation.
So my question is: is there any way I can configure the table names? Or can I change/intercept the query that is sent by Hibernate?
Solution:
OK, Hibernate 5 works with the PhysicalNamingStrategy, so I created my own PhysicalNamingStrategy:
@Configuration
public class TableNameConfig {

    @Value("${fleet.table.name}")
    private String fleetTableName;

    @Value("${visits.table.name}")
    private String visitsTableName;

    @Value("${route.table.name}")
    private String routeTableName;

    @Bean
    public PhysicalNamingStrategyStandardImpl physicalNamingStrategyStandard() {
        return new PhysicalNamingImpl();
    }

    class PhysicalNamingImpl extends PhysicalNamingStrategyStandardImpl {

        @Override
        public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
            switch (name.getText()) {
                case "Fleet":
                    return new Identifier(fleetTableName, name.isQuoted());
                case "Visits":
                    return new Identifier(visitsTableName, name.isQuoted());
                case "Result":
                    return new Identifier(routeTableName, name.isQuoted());
                default:
                    return super.toPhysicalTableName(name, context);
            }
        }
    }
}
Also, this Stack Overflow question about NamingStrategy gave me the idea.
Table names really come from Hibernate itself via its naming strategy interfaces. Boot configures this as SpringNamingStrategy, and there were some changes in Boot 2.x to how things can be customised. It's worth reading gh-1525, where these changes were made. Configure Hibernate Naming Strategy has some more info.
There were some ideas to add custom properties to configure SpringNamingStrategy, but we went with allowing easier customisation of whole strategy beans, as that lets users do whatever they need to do.
AFAIK, there's no direct way to do the configuration you asked for, but I'd assume that if you create your own strategy you can then auto-wire your own properties into it. Since those customised strategy interfaces see the entity name, you could reserve a keyspace in Boot's configuration properties for this and match entity names.
mytables.naming.fleet.name=foobar
mytables.naming.othertable.name=xxx
Your configuration properties would use the mytables prefix, and within that, naming would be a Map. Then your custom strategy would simply check that map to see whether a custom name was defined, as in the sketch below.
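A minimal sketch of that idea, assuming Boot 2.x and its SpringPhysicalNamingStrategy; the TableNamingProperties class, the mytables prefix binding, and the lower-cased entity name used as the lookup key are assumptions for illustration, not an existing API:

import java.util.HashMap;
import java.util.Map;

import org.hibernate.boot.model.naming.Identifier;
import org.hibernate.boot.model.naming.PhysicalNamingStrategy;
import org.hibernate.engine.jdbc.env.spi.JdbcEnvironment;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.boot.orm.jpa.hibernate.SpringPhysicalNamingStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@ConfigurationProperties(prefix = "mytables")
public class TableNamingProperties {

    // Binds mytables.naming.<entity>.name=<physical table name>
    private final Map<String, TableName> naming = new HashMap<>();

    public Map<String, TableName> getNaming() { return naming; }

    public static class TableName {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
}

@Configuration
@EnableConfigurationProperties(TableNamingProperties.class)
class TableNamingConfig {

    @Bean
    public PhysicalNamingStrategy physicalNamingStrategy(TableNamingProperties props) {
        return new SpringPhysicalNamingStrategy() {
            @Override
            public Identifier toPhysicalTableName(Identifier name, JdbcEnvironment context) {
                // Look up the entity name in the map; fall back to Boot's default rules
                TableNamingProperties.TableName custom = props.getNaming().get(name.getText().toLowerCase());
                return custom != null
                        ? Identifier.toIdentifier(custom.getName(), name.isQuoted())
                        : super.toPhysicalTableName(name, context);
            }
        };
    }
}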
Spring Boot solution:
Create the class below:
@Configuration
public class CustomPhysicalNamingStrategy extends SpringPhysicalNamingStrategy {

    @Value("${table.name}")
    private String tableName;

    @Override
    public Identifier toPhysicalTableName(final Identifier identifier, final JdbcEnvironment jdbcEnv) {
        return Identifier.toIdentifier(tableName);
    }
}
Add the properties below to application.properties:
spring.jpa.properties.hibernate.physical_naming_strategy=<package.name>.CustomPhysicalNamingStrategy
table.name=product
I'm going to use @InsertOnlyProperty with Spring Boot 2.7, as it will take time for us to migrate to Spring Boot 3.0!
So I'm going to create my DataAccessStrategy based on the DefaultAccessStrategy and also override SqlParametersFactory so that I can pass the RelationalPersistentProperty::isInsertOnly condition to the getParameterSource method, also overriding RelationalPersistentProperty by adding isInsertOnly. Is there a way to override RelationalPersistentProperty to add an isInsertOnly property? Am I correct, or is there a better solution than switching to Spring Boot 3.0 now? Thank you!
Since @InsertOnlyProperty is only supported for the aggregate root (in Spring Boot 3.0), one approach could be to copy the data to a surrogate object and use a custom method to save it. It would look something like this:
public record MyAggRoot(@Id Long id,
        /* @InsertOnlyProperty */ Instant createdAt, int otherField) {}

public interface MyAggRootRepository
        extends Repository<MyAggRoot, Long>, MyAggRootRepositoryCustom { /* ... */ }

public interface MyAggRootRepositoryCustom {
    MyAggRoot save(MyAggRoot aggRoot);
}

@Component
public class MyAggRootRepositoryCustomImpl implements MyAggRootRepositoryCustom {

    private final JdbcAggregateOperations jao;

    @Autowired
    public MyAggRootRepositoryCustomImpl(JdbcAggregateOperations jao) {
        this.jao = jao;
    }

    // Override table name which would otherwise be derived from the class name
    @Table("my_agg_root")
    private record MyAggRootForUpdate(@Id Long id, int otherField) {}

    @Override
    public MyAggRoot save(MyAggRoot aggRoot) {
        // If this is a new instance, insert as-is
        if (aggRoot.id() == null) return jao.save(aggRoot);
        // Create a copy without the insert-only field
        var copy = new MyAggRootForUpdate(aggRoot.id(), aggRoot.otherField());
        jao.update(copy);
        return aggRoot;
    }
}
It is, however, a bit verbose, so it would only be a reasonable solution if you need it in just a few places.
@Configuration
public class MyWebMvcConfigurationSupport extends WebMvcConfigurationSupport {

    @Override
    public FormattingConversionService mvcConversionService() {
        FormattingConversionService f = super.mvcConversionService();
        f.addFormatter(new DateFormatter("yyyy-MM-dd"));
        return f;
    }
}
@RestController
public class TestController {

    @GetMapping
    public Date test(Date date) {
        return date;
    }
}
When we access http://localhost:8080?date=2021-09-04, the argument type is converted through the DateFormatter's parse method, which relies on the Spring MVC framework to do the conversion. I wonder if the print method can also be invoked through the framework to return a string.
Or do we need to manually invoke the print method, for example:
@RestController
public class TestController {

    @Resource
    private FormattingConversionService conversionService;

    @GetMapping
    public String test(Date date) {
        return conversionService.convert(date, String.class);
    }
}
Inside the controller
You could use a class extending java.text.Format, like SimpleDateFormat, in your controller:
@RestController
public class TestController {

    private static final SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");

    @GetMapping
    public String test(Date date) {
        return dateFormat.format(date);
    }
}
At application level
Use DateTimeFormatterRegistrar to register your formats, as described in this tutorial.
Then you can register this set of formatters with Spring's FormattingConversionService, as in the sketch below.
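A minimal sketch of that application-level registration, assuming a Spring Boot MVC setup (the configuration class name and the chosen patterns are illustrative):

import java.time.format.DateTimeFormatter;

import org.springframework.context.annotation.Configuration;
import org.springframework.format.FormatterRegistry;
import org.springframework.format.datetime.DateFormatter;
import org.springframework.format.datetime.DateFormatterRegistrar;
import org.springframework.format.datetime.standard.DateTimeFormatterRegistrar;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class DateFormatConfig implements WebMvcConfigurer {

    @Override
    public void addFormatters(FormatterRegistry registry) {
        // java.util.Date parameters and return values
        DateFormatterRegistrar dateRegistrar = new DateFormatterRegistrar();
        dateRegistrar.setFormatter(new DateFormatter("yyyy-MM-dd"));
        dateRegistrar.registerFormatters(registry);

        // java.time types (LocalDate, LocalDateTime, ...)
        DateTimeFormatterRegistrar timeRegistrar = new DateTimeFormatterRegistrar();
        timeRegistrar.setDateFormatter(DateTimeFormatter.ISO_LOCAL_DATE);
        timeRegistrar.registerFormatters(registry);
    }
}

This registers the formatters on the same conversion service Spring MVC uses for binding, so both parse and print go through them.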
Using Jackson
However, if you would like to work with JSON or XML, you should consider using FasterXML's Jackson. See this similar question:
Spring 3.2 Date time format
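For example (a sketch; the response class and field are illustrative), Jackson can format the date during JSON serialization, either per field with @JsonFormat or globally via the spring.jackson.date-format property:

import java.util.Date;

import com.fasterxml.jackson.annotation.JsonFormat;

public class DateResponse {

    // Serialized as "2021-09-04" instead of a numeric timestamp
    @JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd")
    private final Date date;

    public DateResponse(Date date) {
        this.date = date;
    }

    public Date getDate() {
        return date;
    }
}

Globally, the same effect can be had for all java.util.Date fields with spring.jackson.date-format=yyyy-MM-dd in application.properties.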
This is the interface representing the environment in which the current application is running. It models two key aspects of the application environment: profiles and properties. The methods related to property access are exposed via the PropertyResolver superinterface.
A profile is a named, logical group of bean definitions to be registered with the container only if the given profile is active. Beans may be assigned to a profile whether defined in XML or via annotations; see the spring-beans 3.1 schema or the @Profile annotation for syntax details. The role of the Environment object with relation to profiles is in determining which profiles (if any) are currently active, and which profiles (if any) should be active by default.
Properties play an important role in almost all applications, and may originate from a variety of sources: properties files, JVM system properties, system environment variables, JNDI, servlet context parameters, ad-hoc Properties objects, Maps, and so on. The role of the environment object with relation to properties is to provide the user with a convenient service interface for configuring property sources and resolving properties from them.
Beans managed within an ApplicationContext may register to be EnvironmentAware or @Inject the Environment in order to query profile state or resolve properties directly.
In most cases, however, application-level beans should not need to interact with the Environment directly, but may instead have their ${...} property values replaced by a property placeholder configurer such as PropertySourcesPlaceholderConfigurer, which itself is EnvironmentAware and, as of Spring 3.1, is registered by default when using <context:property-placeholder/>.
Configuration of the environment object must be done through the ConfigurableEnvironment interface, returned from all AbstractApplicationContext subclass getEnvironment() methods. See ConfigurableEnvironment Javadoc for usage examples demonstrating manipulation of property sources prior to application context refresh().
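A small sketch of both access styles described above, assuming Spring 5.1+ (the bean, property names, and profile are made up):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.core.env.Environment;
import org.springframework.core.env.Profiles;
import org.springframework.stereotype.Component;

@Component
public class DataSourceSettings {

    // Resolved by a property placeholder configurer such as PropertySourcesPlaceholderConfigurer
    @Value("${db.url}")
    private String url;

    private final Environment env;

    public DataSourceSettings(Environment env) {
        this.env = env;
    }

    public boolean isProdActive() {
        // Query profile state directly from the Environment
        return env.acceptsProfiles(Profiles.of("prod"));
    }

    public String username() {
        // Resolve a property programmatically, with a default value
        return env.getProperty("db.username", "sa");
    }
}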
I am developing a REST API using spring-boot-starter-data-rest. One class I want to sync with JPA is the User class containing information about users, including who is allowed to access the API.
Unfortunately, having the User and the UserRepository means that my User class is exposed in my API. I was able to remove things like the id (in the configureRepositoryRestConfiguration method) and usernames and passwords (by adding @JsonIgnore to every field of my User class).
Unfortunately, users of the API can still ask for the users table (which returns a list of empty users). Although this is not really a problem, I would rather remove the /users endpoint.
Adding @JsonIgnore to the whole User class is not possible.
Whether repositories are exported depends on the RepositoryDetectionStrategy. The default strategy is:
Exposes all public repository interfaces but considers @(Repository)RestResource's exported flag.
Accordingly, to disable exporting of your repo, you can set the exported flag to false for that repo:
@RepositoryRestResource(exported = false)
public interface UserRepo extends JpaRepository<User, Integer> {
    //...
}
Another approach is to globally change the RepositoryDetectionStrategy to ANNOTATED:
Only repositories annotated with @(Repository)RestResource are exposed, unless their exported flag is set to false.
@Configuration
public class RestConfig extends RepositoryRestConfigurerAdapter {

    @Override
    public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        config.setRepositoryDetectionStrategy(RepositoryDetectionStrategy.RepositoryDetectionStrategies.ANNOTATED);
        super.configureRepositoryRestConfiguration(config);
    }
}
Then don't apply the @RepositoryRestResource annotation to repos that shouldn't be exported.
UPDATE
We can also use this application property to set up the strategy:
spring.data.rest.detection-strategy=default
Source
You can hide certain repositories by adding this annotation to your repository: @RepositoryRestResource(exported = false).
More information here: http://docs.spring.io/spring-data/rest/docs/current/reference/html/#customizing-sdr.hiding-repositories
There's such a thing as projections.
You can define an interface with the fields you want and use it as the return type of a repository method:
@Projection(name = "simpleUser", types = { User.class })
interface SimpleUser {
    String getFirstName();
    String getLastName();
}
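For instance (a hedged sketch; the repository and the derived query method name are illustrative), the projection can be returned from a query method, or configured as the excerpt projection so collection resources render it by default:

import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(excerptProjection = SimpleUser.class)
public interface UserRepo extends JpaRepository<User, Integer> {

    // Callers of this method only ever see firstName/lastName
    List<SimpleUser> findAllProjectedBy();
}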
I would like to use the Oracle NoSQL database together with Spring Data. The aim is to access the data via Spring Data repositories and even use Spring Data REST on top of it.
So I think the spring-data-keyvalue project would help me implement an adapter for Oracle NoSQL KV.
I tried to understand the documentation of spring-data-keyvalue (http://docs.spring.io/spring-data/keyvalue/docs/current/reference/html/#key-value.core-concepts), but didn't get the idea.
An example/tutorial about how to implement an adapter from scratch would be very helpful.
What I have is this configuration class where I provide a custom KeyValueAdapter. Now if I use CrudRepository methods, it uses my custom adapter.
@Configuration
@EnableMapRepositories
public class KeyValueConfig {

    @Bean
    public KeyValueOperations keyValueTemplate() {
        return new KeyValueTemplate(new OracleKeyValueAdapter());
    }
}
The OracleKeyValueAdapter is an implementation of KeyValueAdapter. I got this from the spring-data-keyvalue-redis project (https://github.com/christophstrobl/spring-data-keyvalue-redis/blob/master/src/main/java/org/springframework/data/keyvalue/redis/RedisKeyValueAdapter.java)
public class OracleKeyValueAdapter extends AbstractKeyValueAdapter {

    private KVStore store;

    public OracleKeyValueAdapter() {
        String storeName = "kvstore";
        String hostName = "localhost";
        String hostPort = "5000";
        store = KVStoreFactory.getStore(
                new KVStoreConfig(storeName, hostName + ":" + hostPort));
    }

    // Custom implementations:
    @Override
    public Object put(Serializable serializable, Object o, Serializable serializable1) {
        return null;
    }

    @Override
    public boolean contains(Serializable serializable, Serializable serializable1) {
        return false;
    }

    // ...
}
Now I'm trying to implement this OracleKeyValueAdapter, but I don't know if that even makes sense.
Can you help me?
You might want to start with how spring-data-keyvalue is implemented over Redis; the link here should be a good starting point: http://docs.spring.io/spring-data/data-keyvalue/docs/1.0.0.BUILD-SNAPSHOT/reference/redis.html
Let me know how that goes, I am interested in what you are trying to accomplish.
The following configuration should work (tested on v2.4.3):
@Configuration
@EnableMapRepositories
public class KeyValueConfig {

    @Bean
    public KeyValueOperations mapKeyValueTemplate() {
        return new KeyValueTemplate(keyValueAdapter());
    }

    @Bean
    public KeyValueAdapter keyValueAdapter() {
        return new YourKeyValueAdapter();
    }
}
The name (mapKeyValueTemplate) of the KeyValueOperations bean is important here, but it can also be changed as follows:
@Configuration
@EnableMapRepositories(keyValueTemplateRef = "foo")
public class KeyValueConfig {

    @Bean
    public KeyValueOperations foo() {
        return new KeyValueTemplate(keyValueAdapter());
    }

    @Bean
    public KeyValueAdapter keyValueAdapter() {
        return new YourKeyValueAdapter();
    }
}
I looked at the sources of the Spring KeyValue repository:
https://github.com/spring-projects/spring-data-keyvalue
I recommend understanding how Spring repositories work internally.
If you want to implement your own repository (CustomKeyValueRepository), you must create at least six classes:
EnableCustomKeyValueRepositories - annotation to enable the repository type in your project.
CustomKeyValueRepositoriesRegistrar - registrar for this annotation.
CustomKeyValueRepository - the repository interface.
CustomKeyValueRepositoryConfigurationExtension - implementation of Spring's ConfigurationExtension.
CustomKeyValueAdapter - implementation of a custom adapter for your data store.
CustomKeyValueConfiguration - configuration of the adapter and template beans.
I implemented an Infinispan KeyValue repository this way:
https://github.com/OsokinAlexander/infinispan-spring-repository
I also wrote an article about this:
https://habr.com/ru/post/535218/
In Chrome you can translate it to your language.
The simplest approach is to implement only the CustomKeyValueAdapter and the configuration. In the configuration you must redefine Spring's KeyValueAdapter bean and the KeyValueTemplate (it is very important that the bean name starts with a lowercase letter; that's the only way it worked for me):
@Configuration
public class CustomKeyValueConfiguration extends CachingConfigurerSupport {

    @Autowired
    private ApplicationContext applicationContext;

    @Bean
    public CustomKeyValueAdapter getKeyValueAdapter() {
        return new CustomKeyValueAdapter();
    }

    @Bean("keyValueTemplate")
    public KeyValueTemplate getKeyValueTemplate() {
        return new KeyValueTemplate(getKeyValueAdapter());
    }
}
I'm trying to configure Spring JPA to update timestamp columns using the JPA auditing framework.
I think I've got it configured correctly, but whenever I create or update a row it just sets null on all the auditable fields. (Note: the fields are created in the database, and if I manually write a value, it gets overwritten with null.)
What am I missing here? Do I need to explicitly set the last modified date, etc.?
Also, my auditor bean isn't being triggered; I set a breakpoint and it's never entered, which leads me to suspect I'm missing some configuration for the auditing service.
So far I have these definitions:
@Configuration
@EnableTransactionManagement
@EnableJpaAuditing(auditorAwareRef = "auditorBean")
@EnableJpaRepositories(basePackages = "com.ideafactory.mvc.repositories.jpa")
public class PersistenceConfig {
    ...
And the auditor aware class:
@Component
public class AuditorBean implements AuditorAware<Customer> {

    private static final Logger LOGGER = LoggerFactory.getLogger(AuditorBean.class);

    private Customer currentAuditor;

    @Override
    public Customer getCurrentAuditor() {
        // Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        //
        // if (authentication == null || !authentication.isAuthenticated()) {
        //     return null;
        // }
        //
        // return ((MyUserDetails) authentication.getPrincipal()).getUser();
        LOGGER.debug("call AuditorAware.getCurrentAuditor()");
        return currentAuditor;
    }

    public void setCurrentAuditor(Customer currentAuditor) {
        this.currentAuditor = currentAuditor;
    }
}
And my entity configuration:
@Entity
@Table(name = "contact_us_notes")
public class ContactUsNote extends AbstractAuditable<Customer, Long> {...
========================== Updated ============================
OK, so I went back over the docs, and it seems I'd missed configuring the entity listener. So it's kind of working now.
But now my question becomes: how, in Java configuration, do I configure the listener as a default for all entities (similar to the way the docs recommend doing it in orm.xml)?
I added the entity listeners annotation below.
@Entity
@Table(name = "contact_us_notes")
@EntityListeners({AuditingEntityListener.class})
public class ContactUsNote extends AbstractAuditable<Customer, Long> {
Have you created an orm.xml file in /resources/META-INF? I don't see it posted in your question.
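If not, a sketch of the orm.xml registration the Spring Data docs describe, which applies the auditing listener to all entities by default (placed in /resources/META-INF/orm.xml; the exact schema version may differ in your setup):

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings xmlns="http://xmlns.jcp.org/xml/ns/persistence/orm" version="2.1">
    <persistence-unit-metadata>
        <persistence-unit-defaults>
            <entity-listeners>
                <!-- Registers AuditingEntityListener as a default for every entity -->
                <entity-listener class="org.springframework.data.jpa.domain.support.AuditingEntityListener"/>
            </entity-listeners>
        </persistence-unit-defaults>
    </persistence-unit-metadata>
</entity-mappings>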