How to update application.properties in a dockerised Spring Boot app - spring-boot

I have a dockerised Spring Boot application and I want to update some of the values in application.properties. It seems this can be achieved in four ways:
Update the application.properties file, rebuild the image.
Add --spring.config.location= to the ENTRYPOINT, update the prop file, rebuild the image.
Use a volume mount, mention the prop file location, update the prop file, rebuild the image.
Use Spring profiles and pass the profile info before running the container. Even in this approach, updating the profile-specific prop file means rebuilding the image.
As we can see, all the approaches involve rebuilding the image. Is there a way to make changes to application.properties without rebuilding the image? What is the preferred approach in prod scenarios?
Thanks!

Yes, there is one more way: injecting properties via environment variables.
As per Spring's property resolution order, environment variables take precedence over property files.
For example, say you want to update spring.datasource.username; you can set it via the environment variable SPRING_DATASOURCE_USERNAME. Basically, replace each . with _ and use upper case; this is the relaxed-binding convention Spring uses to map environment variables to properties.
This variable can be passed when you create the container from your image, as suggested in this answer.
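For example, a minimal sketch (the image name and credentials are placeholders):
docker run -e SPRING_DATASOURCE_USERNAME=produser \
  -e SPRING_DATASOURCE_PASSWORD=s3cret \
  my-spring-app:latest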

I would recommend using environment variables, as @YogeshBadke suggests in their answer.
Your option of using a volume is also a good one. Host files mounted with docker run -v replace files in the image when the container starts, and this does not require an image rebuild. For example:
docker run -v $PWD/application.properties.prod:/app/application.properties ...
As a general rule, you should not need to rebuild your image to run it in a different environment, and you should avoid baking environment-specific hostnames or similar settings into the image. I would not recommend separate "dev" vs. "prod" profiles in a Docker-hosted solution, since you'll have to rebuild the image as soon as you add a "qa" environment or anything else changes; it's better to be able to change just the deploy-time settings.
(If you happen to be deploying this in Kubernetes, my experience has been that setting individual values via environment variables is easiest, injecting an entire properties file via a ConfigMap works, and trying to embed the properties in the image simply doesn't.)
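To illustrate the Kubernetes options just mentioned, a rough sketch (all names are invented):
# fragment of a Deployment pod spec
containers:
  - name: myapp
    image: my-spring-app:latest
    env:
      - name: SPRING_DATASOURCE_USERNAME   # individual value via env var
        value: produser
    volumeMounts:
      - name: app-config                   # whole properties file from a ConfigMap
        mountPath: /app/config
volumes:
  - name: app-config
    configMap:
      name: myapp-properties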

Related

How to link an APM agent like NewRelic to a Spring Boot application with bootBuildImage?

I have a gradle based Spring Boot 3 application. I use the bootBuildImage gradle task in circleci to build a docker image of this application.
Now, I want to add NewRelic to this application. I know I can do it by writing my own Dockerfile but I want to do it by configuring the bootBuildImage gradle task.
I saw that I can add buildPacks like this:
tasks.named("bootBuildImage") {
    buildpacks = [...]
}
And it appears that NewRelic has a buildpack here.
How can I generate the docker image with NewRelic integration?
Bonus: I need to inject an environment variable such as NEW_RELIC_ENABLE_AGENT=true|false. How can I do it?
You're on the right track. You want to use the New Relic Buildpack that you found.
High-level instructions for that buildpack can be found here. It essentially works by taking in bindings (the secret config data) and securely mapping those values to the standard New Relic agent configuration properties (through env variables).
An example of an APM tool configured through bindings can be found here. The specific example is using a different APM tool, but the same steps will work with any APM tool configured through bindings, like New Relic.
For your app:
Create a bindings directory. The root of your project is a reasonable place, but the path doesn't ultimately matter. Don't check in binding files that contain secret data :)
In the folder, create a subfolder called new-relic. Again, the name doesn't really matter.
In the folder from the previous step, create a file called type. The name does matter. In that file, write NewRelic and that's it. Save the file. This is how the buildpack identifies the bindings.
In the same folder, you can now add additional files to configure New Relic. The name of each file is the key and the contents of the file are the value. When your app runs, the buildpack will read the bindings and translate them to New Relic configuration settings in the form NEW_RELIC_<KEY>=<VALUE>. Thus if you read the New Relic docs and see a property called foo, you could make a file called foo, set its contents to bar, and at runtime you'll end up with the env variable NEW_RELIC_foo=bar being set. The New Relic agent reads environment variables for its configuration, although sometimes that's not the first method mentioned in their docs.
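For example, steps 1-4 as shell commands (the license_key property name and its value are only an illustration):
mkdir -p bindings/new-relic
printf 'NewRelic' > bindings/new-relic/type
printf 'abc123-not-a-real-key' > bindings/new-relic/license_key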
Next you need to configure your build.gradle file. These changes will tell bootBuildImage to add the New Relic buildpack and to pass through your bindings.
In the tasks.named("bootBuildImage") block, add buildpacks = ["urn:cnb:builder:paketo-buildpacks/java", "gcr.io/paketo-buildpacks/new-relic"]. This will run the standard Java buildpack and then append New Relic onto the end of that list. Example.
Add a bindings list. In the same tasks.named("bootBuildImage") block, add bindings = ["path/to/local/bindings/new-relic:/platform/bindings/new-relic"]. This will mount path/to/local/bindings/new-relic on your host to /platform/bindings/new-relic in the container, which is where the buildpack expects bindings to live. You will need to change the first path to point to the local bindings you created above (you can probably use a Gradle variable for the project path, but I don't know it off the top of my head). Don't change the path on the container side; that needs to be exactly what I put above.
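Putting the two changes together, the block might look like this (the host path is an example; adjust it to wherever you created the bindings):
tasks.named("bootBuildImage") {
    buildpacks = ["urn:cnb:builder:paketo-buildpacks/java",
                  "gcr.io/paketo-buildpacks/new-relic"]
    bindings = ["${project.projectDir}/bindings/new-relic:/platform/bindings/new-relic"]
}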
Run your build. ./gradlew bootBuildImage. In the output, you should see the New Relic buildpack pass detection (it passes if it finds the type file with NewRelic as the contents) and it should also run and contribute the New Relic agent as is described in the buildpack README.md.
After a successful build, you'll have the image. The key to remember is that bindings are not added to the image. This is intentional for security reasons. You don't want secret binding info to be included in the image, as that will leak your secrets.
This means that you must also pass the bindings through to your container runtime when you run the image. If you're using Docker, you can docker run --volume path/to/local/bindings/new-relic:/platform/bindings/new-relic ... and use the same paths as at build time. If you're deploying to Kubernetes, you'll need to set up Secrets in K8s and mount those secrets as files within the container under the same path as before, /platform/bindings/new-relic. So you need to make a type file, /platform/bindings/new-relic/type, and a file for each key/value parameter you want to set.
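A rough Kubernetes sketch (all names invented), using a Secret mounted at exactly that path:
apiVersion: v1
kind: Secret
metadata:
  name: new-relic-binding
stringData:
  type: NewRelic
  license_key: abc123-not-a-real-key
---
# fragment of the Deployment pod spec
containers:
  - name: myapp
    volumeMounts:
      - name: new-relic-binding
        mountPath: /platform/bindings/new-relic
        readOnly: true
volumes:
  - name: new-relic-binding
    secret:
      secretName: new-relic-binding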
At some point in the future, we're working to have all of the APM buildpacks included in the main Java buildpack by default. This would eliminate the first config change in step #5.
Because managing bindings can be kind of a pain, I also have a project called binding-tool that can help with steps 1-3. It allows you to easily create the binding files, like bt add -t NewRelic -p key1=val1 -p key2=val2. It's not doing anything magical, it just creates the files for you, but I find it handy. In the future, I want it to generate the Kubernetes YAML as well.

Using Helm For Deploying Spring Boot Microservice to K8s

We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create Configmap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc along with Go and Sprig and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resources/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter cannot be done if the config is baked into the container). This allows you, for example, to change a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view: you might not want the containers running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc). This approach is similar to the continuous delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and put the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and probably the suggestion I gave to change a command-line parameter to pick the config file works well. At the same time, keep the 12-factor approach in mind in case you find you do need it in the future.
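As a sketch of that env-variable approach (the template fragment, value name, and values file are all invented):
# templates/deployment.yaml (fragment)
env:
  - name: DB_POOL_SIZE
    value: {{ .Values.dbPoolSize | quote }}
# values-dev.yaml
dbPoolSize: 5
Each environment then gets its own values file, selected at deploy time with something like helm install myapp ./deploy -f values-dev.yaml.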

Best way to create image for different environment

What is the best way to maintain Images for different environments and why?
Option 1:
Create different images for each specific environment (dev, stg, prod). We have to tell the Jenkins job which environment we are building the image for, and Spring Boot will load the specific configuration files.
Advantages:
Environment-specific images.
Disadvantages:
Every environment has a different image, so we have to build one every time.
Option 2:
Build one image and externalize the config file. While building the image, create a shared/mounted path and place the appropriate config file there; load the config file at initialization.
Advantages:
One image can be used by all environments.
Disadvantages:
Custom configuration handling.
Coordination needed between two teams.
Let me know if there are other options, and what the advantages and disadvantages of the above approaches (or any others) are.
Build once, deploy anywhere is considered one of the fundamental principles of continuous delivery (Google it for its advantages). So I would build the same image for all environments, and when running the image there need to be ways to set these configurations based on the environment.
In terms of Docker, it allows configuring environment variables when running a container (e.g. see this in the case of docker-compose).
In terms of Spring Boot, OS environment variables override the application properties in the app.
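For example, a docker-compose sketch (the service name and values are placeholders):
services:
  app:
    image: my-spring-app:latest
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://prod-db:5432/appdb
      - SPRING_DATASOURCE_USERNAME=produser
Here the environment entries override spring.datasource.url and spring.datasource.username from application.properties.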
When designing your images, split the filesystem and environment into 3 pieces.
Binaries, runtime, libraries, code needed to run the application. This belongs in your image.
Configurations and secrets that will differ for different users of the image. These belong in configuration files and environment variables that are injected at runtime (either as a bind mount, docker-compose.yml env vars, k8s config map, etc).
Data. This should be mounted as a volume, or in an external database accessed with a configuration/secret.
This keeps with the 12-factor design and enables portability, easier testing, and less risk of deploying something into production that differs from what was tested in CI.
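As a sketch of that split at run time (names are placeholders), the image carries the code while config and data are injected:
# config via env var and bind mount; data in a named volume
docker run \
  -e SPRING_DATASOURCE_USERNAME=produser \
  -v $PWD/application.properties:/app/application.properties \
  -v appdata:/app/data \
  my-spring-app:latest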
You can also build a Docker image for each environment separately, for example reading the variables of the dev environment from a .env.dev file; that is, create images that contain the specific configuration for every environment.

Dynamically change the property or configuration files using Ansible?

I have a web application which contains multiple property files, like test1.prop, test2.prop, and so on.
I am writing an Ansible playbook to deploy my code, but I am stuck when it comes to property file changes.
Right now the support team changes it manually by referring the release notes.
What I want to achieve is this:
When the release comes, it needs to be deployed to different environments like dev, preprod, prod etc.
The values in the property files could differ depending on the environment, like env.baseurl={{env_baseurl}}. I was thinking of using Jinja templates for this.
I need to loop through the currently deployed property files, check whether each file is the same as in the current release or differs, and change it accordingly.
How do I make a script or playbook which is generic and can be used to deploy to each environment?
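For example, I'm imagining something along these lines (just a sketch; the paths and variable names are made up):
# deploy.yml, run with: ansible-playbook deploy.yml -e env=dev
- hosts: app_servers
  vars_files:
    - "vars/{{ env }}.yml"          # per-environment values, e.g. env_baseurl
  tasks:
    - name: render property files from Jinja2 templates
      template:
        src: "templates/{{ item }}.j2"
        dest: "/opt/app/conf/{{ item }}"
      loop:
        - test1.prop
        - test2.prop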
Please let me know if someone can help.

Changing ENV variables in Heroku doesn't change them in Phoenix application

I have a Phoenix 1.2 application running on Heroku, with an ENV variable that sets the email addresses I wish to send email to.
When I change the environment variable's value, it doesn't seem to take; only after I make a PR and redeploy does the new value take effect.
This makes it seem like I need to "reload" the code or memory somehow. Thus, 2 questions:
Why is this occurring?
Any ideas on how to fix it?
I'm assuming you're setting your env values in config files and using Application.get_env to access them in your application.
Elixir applications are compiled, not interpreted. When you deploy your application to Heroku, it is compiled with the available environment variables, and their values become hardcoded into the app. So even restarting the application would not work; it needs to be recompiled with the new environment variables.
Here are a few solutions:
You can use RELX_REPLACE_OS_VARS=true if you're using Exrm to build releases;
Use System.get_env for getting ENV variables instead, but this won't work unless the application is restarted after changing the environment configuration;
Use a simple wrapper module that lets you use environment configurations by specifying them like {:system, "MY_VARIABLE"} in config.exs;
Or use an existing package like Confex or Conform to manage your configurations.
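For example, a minimal sketch of such a wrapper module (the module name and config key are invented):
defmodule MyApp.Config do
  # Resolve {:system, "VAR"} tuples at runtime instead of compile time.
  def get(app, key, default \\ nil) do
    case Application.get_env(app, key, default) do
      {:system, var} -> System.get_env(var) || default
      value -> value
    end
  end
end
With config :my_app, recipient_emails: {:system, "RECIPIENT_EMAILS"} in config.exs, calling MyApp.Config.get(:my_app, :recipient_emails) reads the variable when the app runs, so a restart after heroku config:set picks up the new value.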
