I was reading the Quarkus documentation about configuration, and this caught my attention:
Quarkus does much of its configuration and bootstrap at build time. Most properties will then be read and set during the build time step. To change them, make sure to repackage your application.
Where can I find the list of configurations that are not changeable on deployment time/runtime?
All of the Quarkus configuration options can be found here:
https://quarkus.io/guides/all-config
To the left of some properties there is a "lock" icon, which means the configuration property is fixed at build time. Properties without the "lock" icon may be overridden at runtime.
For example, the quarkus.datasource.jdbc.driver property is fixed at build time, meaning you must use the same JDBC driver across dev/test/prod. On the other hand, properties such as quarkus.datasource.jdbc.url may be overridden at runtime, so at dev/test time it could point to jdbc:postgresql://localhost:5432/myDB while in production it points to the production DB URL.
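As a rough sketch of how that split might look in practice (the database name, hosts, and jar path are illustrative, and the runtime override could equally be supplied via the QUARKUS_DATASOURCE_JDBC_URL environment variable):

# application.properties (values baked in at build time)
quarkus.datasource.db-kind=postgresql
quarkus.datasource.jdbc.driver=org.postgresql.Driver
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/myDB

# at deployment time, only the runtime property is overridden
java -Dquarkus.datasource.jdbc.url=jdbc:postgresql://prod-db:5432/myDB -jar target/quarkus-app/quarkus-run.jar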
Related
For a cloud native application written with Vert.x, I need to change the context path using an environment variable depending on where it is deployed. I tried with:
quarkus.http.root-path=${CONTEXT_PATH:/app}
in application.properties, but it is only taken into account at build time, not at runtime. It is also reported here that this property is build-time only. What is the way to change the context path then?
Is it possible to set the root-path of a Quarkus service at runtime?
When I set quarkus.http.root-path at runtime I see the following error:
[io.qua.run.ConfigChangeRecorder] (main) Build time property cannot be changed at runtime. quarkus.http.root-path was /{old-context-path} at build time and is now /{new-context-path}
Changing the root path of a running application is impossible, or at least hard to do. With Vert.x it would be possible, but that requires some coordination, since you would need to deploy/undeploy components (Verticles) at runtime.
What you can still do: start another instance (process) of your Quarkus application with the new root path (for example -Dquarkus.http.root-path=<newroot>), then shut down the existing instance; a rough sketch follows below.
This will, however, make the old root path unavailable to all clients which rely on it.
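Since quarkus.http.root-path is fixed at build time, the new instance has to be repackaged with the new value rather than just restarted. A minimal sketch, assuming a Maven build and the default fast-jar output location (both are assumptions, and the output path varies by Quarkus version):

# repackage with the new root path, which is a build-time property
./mvnw package -Dquarkus.http.root-path=/new-context
# start the new instance, then stop the old one once the new one is healthy
java -jar target/quarkus-app/quarkus-run.jar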
I currently have the following config setup in Spring Boot:
application.properties
app.database.host=${DB_HOST}
app.database.port=${DB_PORT}
app.database.name=${DB_NAME}
app.database.user=${DB_USER}
app.database.password=${DB_PASSWORD}
app.database.schema=${DB_SCHEMA:public}
spring.datasource.url=jdbc:postgresql://${app.database.host}:${app.database.port}/${app.database.name}
spring.datasource.username=${app.database.user}
spring.datasource.password=${app.database.password}
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
application-local-dev.properties:
app.database.host=${DB_HOST:localhost}
app.database.port=${DB_PORT:5432}
app.database.name=${DB_NAME:db_name}
app.database.user=${DB_USER:root}
app.database.password=${DB_PASSWORD:root}
app.database.schema=${DB_SCHEMA:public}
application-load-fixtures.properties:
spring.profiles.include=local-dev
spring.profiles.active=load-fixtures,local-dev
app.database.name=${DB_NAME:db_name}_fixtures
The idea here is that when the app is started in default mode, it will fail to boot if critical properties like the database name are missing.
They should be passed via environment variables.
For development purposes, this is unnecessary overhead when setting up the project, because we have a Docker container with static credentials and I'd like to provide them as defaults. Therefore, I created a profile local-dev that uses default values so we can connect to our Docker database, while still being able to override them via environment variables in case someone needs to.
Up to this point, everything works fine.
But now, we also have a profile that is used to load fixtures into the database (drop all tables, recreate and fill them with data).
For obvious reasons, I want to ensure that this cannot be done on an arbitrary database, so I created a profile load-fixtures that should inherit all properties from local-dev and override the database name. However, this approach seems to be wrong. I can see in the Spring log that the profiles are loaded properly:
2017-11-16 13:32:11.508 INFO 23943 --- [ main] Main:
The following profiles are active: load-fixtures,local-dev
But it still uses the database name provided by the local-dev profile.
When I remove the line
app.database.name=${DB_NAME:db_name}
from the local-dev config file, it works.
However, what I want to avoid is having to add new properties to both local-dev and load-fixtures whenever we add a new configuration property to the project.
I understand that profile-specific properties take precedence over non-profile-specific ones, and also that properties from non-default locations take precedence over properties from the default locations. But here, both profiles (local-dev and load-fixtures) are in the same location, and they are both profile-specific.
What are proper ways to go about this problem?
Thanks in advance!
I recently ran into much the same problem and had to figure out what precedence Spring applies to multiple profile-specific property files. Unfortunately, this is not well documented, and I did not find the code responsible for it.
However, after some experimenting, I'm fairly sure it works like this (or at least in a similar way):
Presumably some kind of map is used to gather up the properties from all the different places where you can define them, as documented here. So, for example, a property my.value is defined in application.properties and stored in that map. Then the same property is found as a Java system property. Since that way of defining a property is higher in the PropertySource order, it overrides the value found before in the map. Up to this point it is clear from the documentation that the Java system property wins.
But when we come to two different sources at the same precedence level, such as two different profile-specific property files, the documentation is not 100% clear in my opinion. However, it says in 24.4:
If several profiles are specified, a last-wins strategy applies. For example, profiles specified by the spring.profiles.active property are added after those configured through the SpringApplication API and therefore take precedence.
Maybe it is just the example that is not optimal here, or I do not understand it correctly. But I believe the "last-wins" strategy also applies to the profiles listed in spring.profiles.active. That means if you run java -jar -Dspring.profiles.active=dev,fix application.jar, the properties in application-fix.properties will overwrite the values of properties with the same key in application-dev.properties.
So in your case, considering your application's output, I guess you specified something like java -jar -Dspring.profiles.active=load-fixtures,local-dev application.jar. If I am correct, you would just have to change that to java -jar -Dspring.profiles.active=local-dev,load-fixtures application.jar.
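To illustrate the last-wins behaviour with a minimal, hypothetical example (the property name and values are made up):

# application-dev.properties
my.value=from-dev

# application-fix.properties
my.value=from-fix

# "fix" is listed last, so my.value resolves to "from-fix"
java -jar -Dspring.profiles.active=dev,fix application.jar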
I have externalized all the property files my webapps need out of the webapps in Tomcat. Now I can simply change a property value without having to rebuild and redeploy the WAR file. However, every change to a property file still requires recycling the server.
Is there a way to avoid recycling the server when a property file changes?
I am using Spring to read the property files for some webapps and the traditional java.util.Properties approach for others.
Please suggest how to achieve this.
You may want to consider spring-cloud-config-server or spring-cloud-consul. Both options support distributed property management as well as refreshing changed values without the need to recycle app servers.
You can use @RefreshScope for Spring beans that should be reinitialized when the configuration changes. They also provide the following management endpoints out of the box (and many more, as explained on the project's Git page):
/refresh for refreshing the @RefreshScope beans
/restart for restarting the Spring context (disabled by default)
This is supported by either option (spring-cloud-config-server or spring-cloud-consul); a sketch of a refresh-scoped bean is shown below.
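A minimal sketch of such a bean, assuming Spring Cloud is on the classpath and the /refresh endpoint is exposed (the property name app.message and the class name are illustrative):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

@RefreshScope
@Component
public class MessageService {

    // re-resolved from the refreshed Environment when the bean is recreated after a /refresh call
    @Value("${app.message:hello}")
    private String message;

    public String getMessage() {
        return message;
    }
}

After changing the value in the backing store (Config Server or Consul), a POST to /refresh rebuilds the bean with the new value.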
You may also give cfg4j a try. It supports reloading configuration from local files as well as remote services (git repository, Consul, etc.).
I have a Maven 3 project that uses Hibernate 3. In the Hibernate properties file, there is an entry for hibernate.connection.provider_class with the class corresponding to the C3P0 connection provider (org.hibernate.connection.C3P0ConnectionProvider). Obviously, this class is only used at runtime, so I don't need to add the corresponding dependency to my POM with the compile scope. Now, I want to allow any desired connection pooling framework to be used, so I also don't add a runtime dependency to the POM.
What is the best practice?
I thought about adding an entry to the classpath corresponding to the runtime dependency (in this case, hibernate-c3p0) when the application is run (for example, via the command line), but I don't know if that's possible.
This is almost (maybe exactly) the same problem as with SLF4J. I don't know if Hibernate also uses the facade pattern for connection pooling.
Thanks
Since your code doesn't depend on the connection pooling (neither the main code nor the tests need it), there is no point in mentioning the dependency anywhere.
If anyone should mention it, it would be Hibernate, because Hibernate offers this feature in its configuration.
But you can add it to your POM with <optional>true</optional> (see the sketch after the list below) to indicate:
I support this feature
If you use it, then I recommend this framework and this version
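For example, the POM entry might look like this (the version shown is illustrative):

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-c3p0</artifactId>
    <version>3.6.10.Final</version>
    <optional>true</optional>
</dependency>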
That will make life slightly simpler for consumers of your project.
But overall, you should not mention features provided/needed by other projects unless they have some impact on your code (for example, when you offer a simpler way to configure connection pooling for Hibernate).
[EDIT] Your main concern is probably how to configure the project for QA. The technical term for this movement is "DevOps": instead of producing a dumb WAR which the customer (QA) has to configure painstakingly, configuration is part of the development process just like everything else. What you pass on is a completely configured, ready-to-run setup.
To implement this, create another Maven module called "project-qa" which depends on your project and everything else you need to turn the dead code into a running application (so it will depend on DBCP plus it will contain all the necessary config files).
Maven supports WAR overlays, which allow you to implement this painlessly.
You can mark your dependency as optional. In that case it will not be packaged into the archives, but you have to ensure that your container provides the required library.
You could use a different profile for each connection provider. In each profile, you put the runtime dependency that corresponds to the connection provider you want to use and change the hibernate.connection.provider_class property accordingly; a rough sketch follows after the links below.
For more details about how to configure dependencies in profiles, see Different dependencies for different build profiles in maven.
To see how to change the value of the hibernate.connection.provider_class property see How can I change a .properties file in maven depending on my profile?
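A hedged sketch of such a profile (the artifact coordinates, version, and the filtered property are illustrative and depend on your setup):

<profiles>
    <profile>
        <id>c3p0</id>
        <properties>
            <!-- substituted into hibernate.properties via resource filtering -->
            <hibernate.connection.provider_class>org.hibernate.connection.C3P0ConnectionProvider</hibernate.connection.provider_class>
        </properties>
        <dependencies>
            <dependency>
                <groupId>org.hibernate</groupId>
                <artifactId>hibernate-c3p0</artifactId>
                <version>3.6.10.Final</version>
                <scope>runtime</scope>
            </dependency>
        </dependencies>
    </profile>
</profiles>

Building with mvn package -Pc3p0 would then pull in the C3P0 provider and set the matching provider class.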