Switch between Prod and Dev environment - spring

I've built a REST service with Spring Boot. The setup in the development environment is different from the setup in the production environment. What is the best approach to switching the setup between the development and production environments? By setup I mean, for instance, the path to the database, which differs between development and production. I can think of three approaches: use environment variables, use a properties file, or use a config file. Other suggestions are welcome, as well as advice on what I should think about when choosing.

You should have a look at Spring Profiles. Using Spring profiles, you can easily switch configurations between the different environments.
Just name your configuration for "dev" as "application-dev.(properties|yaml)" and provide -Dspring.profiles.active=dev when running the app from the command line.
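As a minimal sketch (the file contents and database URLs below are placeholders, not taken from the question):

# application-dev.properties
spring.datasource.url=jdbc:h2:mem:devdb

# application-prod.properties
spring.datasource.url=jdbc:oracle:thin:@//prod-host:1521/PROD

# pick the dev configuration at startup
java -Dspring.profiles.active=dev -jar app.jar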

Related

how to use environment variables in tomcat without restarting Tomcat

I have a Tomcat server with several applications. I am going to deploy a Spring Boot application on that Tomcat, but before deploying it I set an environment variable on my server, because the application should use that environment variable. I would like to avoid restarting Tomcat after setting a new environment variable on the server.
Does anyone have a solution? Help me, please.
You cannot change environment variables from Java without resorting to dirty tricks.
You can, however, change the values of system properties. Consider using system properties instead of environment variables to adjust the behavior of your application.
Better yet, don't use globally-visible/mutable configuration and instead configure components individually through some other mechanism, such as a configuration file.
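To illustrate the difference, a small sketch (the property and variable names are made up for this example): a system property can be supplied at startup or changed from inside the JVM, while an environment variable can only be read.

public class ConfigDemo {
    public static void main(String[] args) {
        // A system property can be passed at startup (java -Dapp.mode=uat ...) or set at runtime
        System.setProperty("app.mode", "uat");
        System.out.println(System.getProperty("app.mode")); // prints "uat"

        // An environment variable can only be read; changing it means restarting the process
        System.out.println(System.getenv("APP_MODE")); // null unless exported before the JVM started
    }
}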

Spring Boot application profiles

I understand there are multiple ways of handling application property files and profiles in Spring Boot, and I've seen multiple questions and answers on how to handle each, but I'm trying to find the "best" way of handling it for a new project, if there is one.
The application is hosted in WebLogic 12c in production/pre-prod (with a JNDI database connection) and run locally in Tomcat (with hardcoded database details) for development. I'd like it so that when the project is built via Gradle and deployed to production it uses the JNDI properties, and when run locally it defaults to the hardcoded datasource, with minimal changes required.
src/main/resources/application.properties
# DEV
spring.datasource.url=
spring.datasource.username=
spring.datasource.password=
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
# DEV
# PROD
# spring.datasource.jndi-name=
# spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver
# PROD
From my understanding, the recommended way is to externalize the property files and place the required one in a config directory alongside the WAR file for any differing config, which is then automatically picked up and used?
You should consider creating multiple profiles. This means either multiple properties files or multiple profiles in one file:
See https://docs.spring.io/spring-boot/docs/current/reference/html/howto-properties-and-configuration.html
I would recommend using multiple application-ENV.properties files, e.g.
application-prod.properties and application-preprod.properties.
There is always one active profile, and settings from application.properties (without any profile suffix) are used as default values if not overridden in a profile-specific file.
Depending on your environment (local, prod, etc.) you should set an environment variable (start the Java process/application server with that environment variable), e.g.:
SPRING_PROFILES_ACTIVE=prod
On your local machine you would set:
SPRING_PROFILES_ACTIVE=dev
With this variable you can control which profile is currently active.
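A minimal sketch of what those profile-specific files could look like for the setup in the question (the property values and the JNDI name are placeholders):

# application-dev.properties - local Tomcat with hardcoded details
spring.datasource.url=jdbc:oracle:thin:@//localhost:1521/DEVDB
spring.datasource.username=dev_user
spring.datasource.password=dev_password
spring.datasource.driver-class-name=oracle.jdbc.driver.OracleDriver

# application-prod.properties - WebLogic with a JNDI datasource
spring.datasource.jndi-name=jdbc/appDataSource

With SPRING_PROFILES_ACTIVE=prod set on the WebLogic server and SPRING_PROFILES_ACTIVE=dev locally, the same WAR picks up the right datasource without code changes.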
Also consider integrating the active profile into your Continuous Integration/Deployment settings.
Please note that putting plain-text passwords hardcoded into committed files is not a good idea. Consider using Jasypt or a Spring Cloud Config server for your prod database configuration, or use whatever mechanism your cloud provider offers, if you use one. Microsoft Azure, for example, provides a Key Vault for sensitive data.
https://cloud.spring.io/spring-cloud-config/multi/multi_spring-cloud-config.html
http://www.jasypt.org/
If you use Gradle, a good solution is to set up application.properties and application-test.properties files and put the properties for prod and preprod into them respectively.
Then run the application with different VM arguments: -Dspring.profiles.active=test for application-test.properties and without arguments for application.properties.
Use Gradle tasks and configure them once for test and prod. For example, configure gradle bootWar without VM arguments and a gradle bootWarTest task with -Dspring.profiles.active=test. Save the configurations once and you can create a WAR for the different environments just by selecting between the two tasks in Gradle.

How to configure different data sources for local testing and deployment in Spring Boot Application

I am trying to find the best way to configure my Spring Boot Web application to easily switching between the following data sources for both local testing and deployment.
H2 in memory db. Local testing only.
Dev oracle. Local testing and deployment.
Prod oracle. Deployment only.
By local testing, I mean testing in the IDE (Eclipse). The dev and prod Oracle databases are set up on two remote servers.
After some research, there are different ways to switch from one data source to another.
1. Use Spring profiles. See Using H2 and Oracle with Spring Boot. Set up the following files on the classpath: application.properties, application-h2.properties and application-dev.properties. While the connections for h2 and dev are defined in the corresponding properties files, spring.profiles.active is set in application.properties. My understanding is that this property can be overridden during the build by specifying spring.profiles.active. However, it seems to be a JVM variable; how do I set it when running Maven?
2. Maven profiles. Create multiple profiles in the POM and a filter pointing to the application properties files. The profile specified with the -P option during the Maven build determines which application properties file is used. However, according to maven application with multi environment configuration can't deploy on tomcat, this generates multiple WARs for different deployments, so method 1 is preferred. Plus, it does not apply to switching data sources while testing locally.
3. Persistence units. Define different persistence units for different data sources in persistence.xml and obtain the EntityManager from a specific unit. A variation of this method is to have a variable in the unit names that is resolved from application.properties.
4. JNDI lookup. Set up a JNDI name in application.properties with spring.datasource.jndi-name. The actual database information, including the URL and credentials, is specified in context.xml in the Tomcat folder where the WAR is deployed.
My mind is set for the local testing environment: I'm going to go with method 1. Switching between in-memory H2 and Oracle is as easy as changing the property in application.properties. Since testing is usually done in the IDE, no WAR needs to be generated, although answers on running mvn install with spring.profiles.active are welcome.
As for deployment, JNDI is definitely the way to go. However, I am concerned that the two properties in application.properties, spring.profiles.active and spring.datasource.jndi-name, may conflict with each other. If I have spring.profiles.active=h2 and then deploy the WAR to the prod server, does it try to connect to h2 based on the Spring profile, or to the prod DB based on the jndi-name? What is the best practice to accommodate all scenarios with enough flexibility?
Also, is an explicit configuration class for the DataSource required, as in Configure Mutiple DataSource in Spring Boot with JNDI? My understanding is that application.properties and a Spring profile should be enough to handle it, right?
Definitely use Spring profiles.
You don't want to use Maven profiles as they create different artifacts. Ask your QA/release engineers how they feel about having different artifacts for different environments :). They wouldn't be happy.
H2 is what you want to use in CI server integration testing as well. Such integration testing is fast and easy.
Instead of changing the profile in application.properties, consider defining the profile via a command-line parameter, so that no configuration file changes are required to run your application with different profiles.
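As a rough sketch of that (the artifact name is a placeholder; the plugin property below applies to the Spring Boot 2.x Maven plugin, older plugin versions use -Drun.profiles instead):

# plain java launch with the profile passed as a program argument
java -jar target/myapp.jar --spring.profiles.active=dev

# running through the Spring Boot Maven plugin
mvn spring-boot:run -Dspring-boot.run.profiles=dev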

WAS Liberty: DEV and UAT Configuration

I am wondering what the best way is to configure a WAS Liberty installation so that it can switch dynamically from a DEV environment configuration to a UAT (testing) environment configuration.
To elaborate, we have a similar setup with our Glassfish servers: we simply configure system properties for both in the Glassfish console. For example
hostname.uat="some uat value"
hostname.dev="some dev value"
Dropping the ".uat" or ".dev" in the system property configuration in Glassfish makes that property active. In Glassfish, this can be done dynamically and while the application is running (no need to reboot).
Is there or can someone elaborate how I could achieve a similar setup in WAS Liberty?
Thank-you kindly
You can create a server.env file in two possible places:
${wlp.install.dir}/etc/server.env (properties are applied to all servers) or
${server.config.dir}/server.env (properties applied only to one server)
and specify any environment variables in that file.
For example:
# Specify properties and values
admin.email=dev.admin#domain.com
admin.email.uat=uat.admin#domain.com
To access these properties in an application environment (such as a Servlet) do the following:
System.getenv("admin.email"); // returns "dev.admin#domain.com"
Other useful properties can be specified in the server.env file as well such as JAVA_HOME, WLP_USER_DIR, WLP_OUTPUT_DIR, and WLP_DEBUG_ADDRESS.
For IBM's full doc on this, see: Customizing the Liberty Environment.
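If you want to keep the Glassfish-style suffix convention from the question, a small lookup helper (entirely hypothetical, not part of Liberty; the ENVIRONMENT variable is an assumption) could try the environment-specific key first and fall back to the base key:

public class EnvLookup {
    // Resolves "admin.email.uat" before "admin.email" when ENVIRONMENT=uat, for example
    public static String resolve(String key) {
        String env = System.getenv("ENVIRONMENT"); // e.g. "dev" or "uat", set in server.env
        if (env != null) {
            String specific = System.getenv(key + "." + env);
            if (specific != null) {
                return specific;
            }
        }
        return System.getenv(key);
    }
}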
What we do is generate the Liberty server with Ansible, where the variables can be added to an Ansible inventory per environment.
So, our deployments essentially drop and recreate the Liberty server, using Ansible templates and roles to stamp it out as needed.
Lastly, we use HashiCorp Vault (you can also use ansible-vault) to fetch credentials or secrets at deploy time. These are injected into Ansible as JSON and used to stamp out server.xml and other related configuration files.

Best Way to externalize system properties on a multi-environment application

We are working with a Spring 3 application that runs in several environments (test, UAT and production). These environments are managed by a third-party company, so we have almost no access to the servers.
We have tried JBoss system properties and Maven 2 profiles. Both solutions worked fine; however, we don't want to tie the application to one specific server (JBoss in this case) and we don't want to do environment-specific builds (required for Maven 2 profiles).
Is there a good way to have environment-specific properties for the app that does not require a different build per environment, needs no modifications on the server side, and can run on different servers? (Some sort of PropertyPlaceholderConfigurer that could read property files outside the app context should do the trick.)
Environment-specific builds are not a bad option.
But Spring 3.1 provides what you are looking for: environment-specific configuration through profiles and the Environment/property-source abstraction. See the Spring 3.1 reference documentation on bean definition profiles and @PropertySource.
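A minimal sketch of that idea with Spring 3.1 Java config: the environment name comes from a single system property and the matching properties file is loaded from an external location, so the same build runs everywhere (the property name, file path and file naming scheme are assumptions for illustration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;

@Configuration
// "env" is supplied at startup, e.g. -Denv=uat; the file lives outside the deployed artifact
@PropertySource("file:${user.home}/config/app-${env}.properties")
public class AppConfig {

    // Needed so ${...} placeholders in @Value annotations are resolved against the property source
    @Bean
    public static PropertySourcesPlaceholderConfigurer placeholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}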
