I really don't know how to do what I want 'out of the box', and I think it should be easy(?) ...
Use case:
In development, I have a set of properties which are required. I should be able to specify these in a file that is distributed in the jar, and they are used by/available to the @Value annotation (which I realise is another level, ...).
In (unit) tests, I should be able to override these properties, using the same filename.
In production, I need to be able to gather the ones I want to override from an arbitrary location. When I run my application, I would like all of these to be merged, in (that sort of) priority order: basic defaults are provided 'in jar', and then overridden by 'local'.
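Roughly, the layering I have in mind looks like this (just a sketch of the intended priority; the file locations and the app.timeout property are made up):

src/main/resources/application.properties (packaged in the jar, the defaults): app.timeout=30
src/test/resources/application.properties (on the test classpath, overrides the jar): app.timeout=1
/etc/myapp/application.properties (arbitrary external location, wins in production): app.timeout=120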
Is there a way to do this (whether in core spring or spring-boot)?
I'm really not looking for a workaround - I'm willing to do a tailored solution for my app - but I'd just like to make sure that I haven't missed an obvious solution.
Thx
New to Spring Boot here, long-time Spring Framework user though.
I'm looking for a way to split my externalised configuration into multiple .properties files, for better readability and manageability.
I already saw this SO answer: having the ability to specify a list of configuration file names in spring.config.name (which, by the way, doesn't seem to be mentioned in the Boot reference documentation, correct me if I'm wrong) would solve my problem perfectly. However, that configuration property can be specified only via system properties or environment variables.
If I try to specify it inside my application.properties file, it gets ignored. The same happens for spring.config.additional-location. I understand this happens because, by the time application.properties is read, it's too late to tell Spring Boot to search for different externalised configuration file names.
However, this is not a proper solution, because the way I split my configuration should be an "implementation detail" that the consumer of my application shouldn't be aware of, so I don't expect the consumer to specify an external parameter, otherwise my application breaks out-of-the-box.
I think that a way to do this should be provided. Perhaps some import mechanism for .properties files or the ability to specify spring.config.name even in application.properties (some known and reasonable limitations would be acceptable).
The best I could find is to use @PropertySource, but this is not profile-aware: unless you use some ugly nested class hack, or you put the spring.profiles.active variable in the resource name (which will break if multiple profiles have been activated), you won't get the benefit you have with application.properties profile-specific files.
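To illustrate what I mean by putting the active profile in the resource name (just a sketch, the file name is made up):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;

@Configuration
@PropertySource("classpath:database-${spring.profiles.active}.properties")
public class DatabaseConfig {
    // resolves to database-dev.properties when only the 'dev' profile is active,
    // but breaks as soon as spring.profiles.active contains more than one profile
}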
I was not able to find an "official way" to do this, apart from some statements from Spring Boot devs saying that they would rather promote the use of a single (possibly giant...) externalised configuration file. It seems like this position is not so popular, judging from the post reactions on GitHub, and IMHO it really seems to be a basic feature that is missing. I have been working with multiple properties files in Spring Framework (using XML configuration) for years and I never felt that having a single huge file would have been better.
If I understand it right, in Boot 1.x this was in some way possible using the location attribute of @ConfigurationProperties, which has however been removed in Boot 2.x.
Any suggestion?
Have you tried using Spring profiles?
What you can do is create application-file1.properties/yml, application-file2.properties/yml, put them in the config location, and then add spring.profiles.active=<your env profiles>,file1,file2.
It will then load those files.
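For example (just a sketch, the file and profile names are placeholders):

config/application-file1.properties   (e.g. database-related properties)
config/application-file2.properties   (e.g. mail-related properties)

java -jar app.jar --spring.profiles.active=prod,file1,file2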
This profile entry can be in bootstrap.yml, in JVM args to the application, or in manifest-<env>.yml in the case of Pivotal Cloud Foundry. I'm not sure about AWS and other cloud providers.
Hope this helps.
Over the course of writing Spring Boot apps, our team adds a lot of @Value annotations to help make things configurable. At some point we start to lose track of exactly what we added and what can be configured. We get a lot of questions from the QA and DevOps teams about what exactly can be configured and what can't.
Currently we just do a grep through the code base and apply some crude regular expressions to try and parse out the meaningful pieces. But this doesn't catch 100% of cases and inevitably we end up digging through the code to find out what fields can be configured.
I know we could use JavaDoc to somewhat achieve our goal, but the documentation would be buried with other JavaDoc (methods, fields, classes, etc) and it's still reliant on developers to remember to add the JavaDoc to each field.
Has anyone found a more automated way to document their @Value fields? I'm thinking of something like Swagger, but specifically for Spring and the various ways it can externalize configuration.
Javadoc is indeed a way to document things for developers, not for QA or operators.
Your question is really interesting, but answering it canonically is hard because @Value injections are implementation details of components. Swagger, which you mention, documents REST contracts; that is an important difference.
Here are some ideas:
Functionally, writing a BDD test for these properties that also serves as documentation makes little sense, but technically it does.
Indeed, you could write a BDD integration test (with Cucumber or any other library) where you document and test the presence of each expected property.
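As a minimal sketch of that idea, using plain JUnit 5 and Spring Boot's test support rather than Cucumber (the property keys here are only examples):

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.core.env.Environment;

import static org.junit.jupiter.api.Assertions.assertTrue;

@SpringBootTest
class ConfigurablePropertiesDocumentationTest {

    @Autowired
    private Environment environment;

    @Test
    void documentedPropertiesArePresent() {
        // Each assertion both documents and verifies one configurable property.
        assertTrue(environment.containsProperty("app.database.host"), "app.database.host should be configurable");
        assertTrue(environment.containsProperty("app.mail.from"), "app.mail.from should be configurable");
    }
}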
Not a perfect solution, but you could at least retrieve the exposed properties, and a little more, with these Spring Boot actuator endpoints:
configprops: displays a collated list of all @ConfigurationProperties.
env: exposes properties from Spring's ConfigurableEnvironment.
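For instance, to expose those two endpoints over HTTP (assuming Spring Boot 2.x, where only health and info are exposed by default):

management.endpoints.web.exposure.include=configprops,env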
Whenever you can, favor @ConfigurationProperties injection, which groups properties that work together, over @Value. Isolating them in @ConfigurationProperties classes and adding Javadoc to them is not a bad way at all to document their presence and usage.
As suggested by caco3, you can also generate your own metadata by using the annotation processor:
You can easily generate your own configuration metadata file from items annotated with @ConfigurationProperties...
The processor picks up both classes and methods that are annotated with @ConfigurationProperties. The Javadoc for field values within configuration classes is used to populate the description attribute.
This ties in with the previous point: favor @ConfigurationProperties whenever possible.
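As a rough sketch (the prefix and property names are invented), a documented @ConfigurationProperties class could look like this; with spring-boot-configuration-processor on the classpath, the Javadoc ends up in the generated spring-configuration-metadata.json:

import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "app.mail")
public class MailProperties {

    /** Address used as the From header of outgoing mails. */
    private String from;

    /** Connection timeout in milliseconds for the SMTP server. */
    private int connectionTimeout = 5000;

    public String getFrom() { return from; }
    public void setFrom(String from) { this.from = from; }

    public int getConnectionTimeout() { return connectionTimeout; }
    public void setConnectionTimeout(int connectionTimeout) { this.connectionTimeout = connectionTimeout; }
}

Remember that the class still has to be registered, for example via @EnableConfigurationProperties(MailProperties.class), for the binding to work at runtime.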
I currently have the following config setup in spring boot:
application.properties
app.database.host=${DB_HOST}
app.database.port=${DB_PORT}
app.database.name=${DB_NAME}
app.database.user=${DB_USER}
app.database.password=${DB_PASSWORD}
app.database.schema=${DB_SCHEMA:public}
spring.datasource.url=jdbc:postgresql://${app.database.host}:${app.database.port}/${app.database.name}
spring.datasource.username=${app.database.user}
spring.datasource.password=${app.database.password}
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
application-local-dev.properties:
app.database.host=${DB_HOST:localhost}
app.database.port=${DB_PORT:5432}
app.database.name=${DB_NAME:db_name}
app.database.user=${DB_USER:root}
app.database.password=${DB_PASSWORD:root}
app.database.schema=${DB_SCHEMA:public}
application-load-fixtures.properties:
spring.profiles.include=local-dev
spring.profiles.active=load-fixtures,local-dev
app.database.name=${DB_NAME:db_name}_fixtures
The idea here is that when starting the app in default mode, it will fail to boot when critical properties like database name are missing.
They should be passed via environment variables.
For development purposes, this is unnecessary overhead when setting up the project because we have a docker container with static credentials and I'd like to provide them as defaults. Therefore, I created a profile local-dev that will use default values to be able to connect to our docker database and still have the ability to override them via environment variables in case someone needs to.
Until here, everything works fine.
But now, we also have a profile that is used to load fixtures into the database (drop all tables, recreate and fill them with data).
For obvious reasons, I want to ensure that this cannot be done on an arbitrary database, so I created a profile load-fixtures that should inherit all properties from local-dev and override the database name. However, this approach seems to be wrong. I can see in the Spring log that the profiles are loaded properly:
2017-11-16 13:32:11.508 INFO 23943 --- [ main] Main: The following profiles are active: load-fixtures,local-dev
But it still uses the database name provided by the local-dev profile.
When I remove the line
app.database.name=${DB_NAME:db_name}
from the local-dev config file, it works.
However, what I want to avoid is having to add new properties to both local-dev and load-fixtures whenever we add a new configuration property to the project.
I understand that profile-specific properties take precedence over non-profile-specific ones, and also that properties from non-default locations take precedence over properties from the default locations. But here, both profiles (local-dev and load-fixtures) are in the same location, and they are both profile-specific.
What are proper ways to go about this problem?
Thanks in advance!
I recently came across much the same problem and had to figure out what precedence Spring applies to multiple profile-specific property files. Unfortunately this is not well documented, and I did not find the code that is responsible for it.
However, after some tests and experiments, I'm pretty sure it works like this (or at least in a similar way):
Probably some kind of map is used to gather up all properties from all the different places where you can define them, as documented here. So, for example, a property my.value is defined in application.properties and stored in that map. Then the same property is found as a Java system property. Since that way of defining a property is higher in the PropertySource order, it overrides the value found before in the map. Up to this point it is clear from the documentation that the Java system property wins.
But when we come to two different sources on the same precedence level, such as two different profile-specific property files, the documentation is not 100% clear in my opinion. However, it says in section 24.4:
If several profiles are specified, a last-wins strategy applies. For example, profiles specified by the spring.profiles.active property are added after those configured through the SpringApplication API and therefore take precedence.
Maybe it is just the example that is not optimal here, or I just don't understand it correctly. But I guess the "last-wins" strategy also applies to the profiles listed in spring.profiles.active. That means that if you run java -jar -Dspring.profiles.active=dev,fix application.jar, the properties in application-fix.properties will overwrite properties with the same key in application-dev.properties.
So in your case, considering the output of your application, I guess you specified something like java -jar -Dspring.profiles.active=load-fixtures,local-dev application.jar. If I'm right, you would just have to change that to java -jar -Dspring.profiles.active=local-dev,load-fixtures application.jar.
We have a Spring-based (Spring.NET) web application and use VariablePlaceholderConfigurer to keep some settings in a separate properties file.
These properties are mainly different values affecting business logic, like emails, timeouts, paths, etc.
Now we need to implement an administrative UI to allow users to change these settings in a more friendly way.
So we will move all these settings to a database.
Question: What is the best (standard, common) approach to implementing settings like I described in Spring based application? (Assuming we want changes to be effective immediately without application restart.)
It would be good if we could keep our current approach of setting values as just properties of beans.
The VariablePlaceholderConfigurer is an IObjectFactoryPostProcessor, which is only invoked after the object definitions have been read. So you cannot simply introduce a new IVariableSource that you refer to in your VariablePlaceholderConfigurer configuration, because it would only take effect after a container reload.
You have to create an IObjectPostProcessor to modify properties on container-managed objects at runtime.
I'm new to XML Schema and to JAXB and wondering what the best or expected approach to using the Maven JAXB plugin (http://static.highsource.org/mjiip/maven-jaxb2-plugin/generate-mojo.html) is.
I have a simple XML document format for which I've defined a schema. I'm primarily interested in reading a compliant XML file into Java, but I'll probably also want to add extra properties to the POJOs which won't be in the XML, but will be used at runtime.
By default the plugin places generated code into ${project.build.directory}/generated-sources/xjc. What I think I want to do is copy the generated code into /src/main/java/whatever and add to/modify the code to add my extra properties. When I change the schema, I'd then merge changes from the newly generated POJOs into my own ones.
The alternative is to tell the plugin to place the generated source directly into /src/main/java and to perhaps subclass the POJOs to add my own properties, but I'm not sure whether the marshaling/unmarshaling can still be made to use my extended classes.
Anyone have any guidance on which approach is more normal or what the pitfalls of each are?
In your place I'd leave the generated sources where they are, so that the corresponding jar can be built by Maven without further configuration, and put your custom code in a different project that depends on the first one. This ensures that everything is built in the right order.
It is your choice whether to derive from the generated classes or just use instances of them in your code, as attributes or, even better, local variables. Personally I'd avoid derivation; after all, JAXB is just low-level machinery you use to perform I/O in a specific format.
Most importantly: forget about modifying the generated sources; why introduce an error prone manual step in your development process when you can get the same effect automatically?
(To provide a slight variation on Nicola's answer)
If your schema rarely changes it might make sense to have a completely separate build which just creates the JAXB generated code, jars it, versions it, and sticks it in your repository.
Then in your downstream code you use that jar as a dependency and subclass the JAXB code as necessary to add your new fields.
We went this route because we felt that having JAXB compile every time we did a build was unnecessary, as our schemas were pretty static.
Most importantly: forget about modifying the generated sources; why introduce an error prone manual step in your development process when you can get the same effect automatically?
Absolutely.
To elaborate and extend on a point already well-made... if there are a lot of implicit relationships and things you'd like to put "getters" on the JAXB code for, bite the bullet and wrap the JAXB class hierarchy in one that does exactly what you want where you want it.
With IDE-assisted delegation, this is only a little tedious, and factors a lot of straightforward, distracting, low-level code out of your main app.
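A minimal sketch of such a wrapper (the class and accessor names are invented; OrderType stands in for a JAXB-generated class):

// OrderType is assumed to be generated by XJC with getCustomerName() and getItem() accessors.
public class Order {

    private final OrderType jaxbOrder;

    public Order(OrderType jaxbOrder) {
        this.jaxbOrder = jaxbOrder;
    }

    // Plain delegation to the generated class.
    public String getCustomerName() {
        return jaxbOrder.getCustomerName();
    }

    // A convenience "getter" the generated code cannot offer.
    public boolean hasItems() {
        return jaxbOrder.getItem() != null && !jaxbOrder.getItem().isEmpty();
    }
}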
Another benefit of this is that you'll spend a lot less time fighting JAXB to generate things exactly the way you want - the wrappers will make you care a whole lot less.