Best way to create images for different environments - spring-boot

What is the best way to maintain images for different environments, and why?
Option 1:
Create different images for each specific environment (dev, stg, prod). We tell the Jenkins job which environment we are building the image for, and Spring Boot loads the environment-specific configuration files.
Advantages:
Environment-specific images.
Disadvantages:
Every environment has a different image, so we have to build one every time.
Option 2:
Build one image and externalize the config file. While building the image, create a shared/mounted path and place the appropriate config file there; load that config file at initialization.
Advantages:
One image can be used by all environments.
Disadvantages:
Custom configuration handling.
Coordination needed between two teams.
Let me know if there are other options, and what the advantages and disadvantages of the above approaches (or any others) are.

Build once, deploy anywhere is considered a fundamental principle of continuous delivery (Google it for its advantages), so I would build the same image for all environments. When running the image, there needs to be some way to configure it for the environment it runs in.
In terms of Docker, you can set environment variables when running a container (e.g. see this in the case of docker-compose).
In terms of Spring Boot, OS environment variables override the application properties in the app.
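As a minimal docker-compose sketch of that idea (the service name, image tag, and property values are illustrative assumptions, not from the question):

# docker-compose.yml (illustrative): the same image is reused everywhere,
# only the environment values change per environment.
services:
  app:
    image: my-spring-app:1.0.0          # built once, promoted to every environment
    environment:
      # Spring Boot's relaxed binding maps SPRING_DATASOURCE_URL
      # onto spring.datasource.url, overriding application.properties.
      SPRING_DATASOURCE_URL: jdbc:postgresql://prod-db:5432/app
      SPRING_PROFILES_ACTIVE: prod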

When designing your images, split the filesystem and environment into 3 pieces.
Binaries, runtime, libraries, code needed to run the application. This belongs in your image.
Configurations and secrets that will differ for different users of the image. These belong in configuration files and environment variables that are injected at runtime (either as a bind mount, docker-compose.yml env vars, k8s config map, etc).
Data. This should be mounted as a volume, or in an external database accessed with a configuration/secret.
This is in keeping with the 12-factor design and enables portability, easier testing, and less risk of deploying something into production that is different from what was tested in CI.
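A minimal docker-compose sketch of that three-way split (the paths, names, and the SPRING_CONFIG_ADDITIONAL_LOCATION choice are assumptions for illustration):

# docker-compose.yml (illustrative sketch of the three-way split)
services:
  app:
    image: my-spring-app:1.0.0            # 1. binaries, runtime, libraries, code live in the image
    environment:
      # Point Spring Boot at the injected config file (assumed location).
      SPRING_CONFIG_ADDITIONAL_LOCATION: /config/application.properties
    volumes:
      # 2. configuration injected at runtime via a bind mount
      - ./env/prod/application.properties:/config/application.properties:ro
      # 3. data on a named volume
      - app-data:/var/lib/app
volumes:
  app-data: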

You can build a Docker image for each environment separately.
For example: read the dev environment's variables from a .env.dev file.

Create images that contain environment-specific configuration for each environment.

Related

Using Helm For Deploying Spring Boot Microservice to K8s

We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
For the current deployment, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Creating a ConfigMap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc along with Go and Sprig and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-generated ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resources/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter cannot be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the continuous delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and put the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and probably the suggestion I gave of changing a command line parameter to pick the config file works well. At the same time, keep the 12-factor-app approach in mind in case you find out you do need it in the future.
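To sketch the Helm-environment-variable suggestion above (the chart layout, value names, and chosen properties are assumptions, not taken from the question's chart):

# values.dev.yaml (illustrative) - only these values differ per environment
db:
  url: jdbc:postgresql://dev-db.internal:5432/app
springProfile: dev

# templates/deployment.yaml (excerpt, illustrative)
# The same chart and image are installed everywhere; helm install -f values.dev.yaml
# injects environment-specific values as container env vars, which Spring Boot
# maps onto application properties.
env:
  - name: SPRING_DATASOURCE_URL          # overrides spring.datasource.url
    value: {{ .Values.db.url | quote }}
  - name: SPRING_PROFILES_ACTIVE
    value: {{ .Values.springProfile | quote }}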

Testing app that depends on environment variables locally

Google's App Engine provides a list of predefined environment variables and additional environment variables may be defined in app.yaml. Meanwhile, the instructions for Testing and Deploying your Application just say to use go run to test the app locally. If I test my app locally inside a cloud-sdk Docker container, is there a gcloud command (or another tool) that would create the same environment variables in my local container as in App Engine? Right now I am just setting the environment variables locally with a bash script, but that means that I need to maintain the variables in multiple locations.
The variables are all runtime metadata. Only the runtime can provide values for these variables, and the data is specific to the deployment.
If your app needs this metadata, you will know which variables it uses and how it uses them, and, when you specify the value, you will need to provide the variable name anyway, e.g. GAE_SERVICE="freddie".
For these reasons, it's likely not useful for a tool to spoof these values for you during local testing. When you go run your app, there's nothing intrinsic about it that makes it an App Engine app. It only becomes one after you deploy it, because it's then running on the App Engine service.
If you're running your code in a container, you can provide environment variables to the container runtime. Doing so is likely preferable to scripting these:
GAE_SERVICE="freddie"
docker run ... \
  --env=GAE_SERVICE="${GAE_SERVICE}" \
  ...
Although not really practical with App Engine, there's an argument for having your code not bind directly to any runtime (e.g. App Engine) metadata. If it does, you can't run it easily elsewhere.
On other platforms, metadata would be abstracted further, and some sort of sidecar would convert the metadata into a form that's presented consistently regardless of where you deploy it; your code doesn't change, but an adapter configures it correctly for each runtime.

Docker and casual work/dev on virtual machine and IDE

I have a general question about good practices and, let's say, the way of working between Docker and an IDE.
Right now I am learning Docker and Docker Compose, and I must admit that I like the idea of containers! I've deployed my whole Spring Boot microservices architecture in containers, and everything is working really well!
The thing is that everywhere in the properties where I declared a localhost address, I was forced to change localhost to a custom container name, for example localhost:8888 --> naming-server:8888. That is okay for running in containers, but obviously when I try to run this in the IDE, it fails. I like working on/optimizing/debugging microservices in the IDE, but I don't want to rebuild the image and re-run the whole docker-compose setup every time I make a tiny change.
What does it look like in real development?
Regards!
In my day job there are at least four environments my code can run in: my desktop development environment, a developer-oriented container environment, and pre-production and production container environments. All four of these environments can have different values for things like host names. That means they must be configurable in some way.
If you've hard-coded localhost as a hostname in your application source code, it will not run in any environment other than your development system, and it needs to be changed to a configuration option.
From a pure-Docker point of view, making these configurable via environment variables is easiest (and Spring can set property values from environment variables). Spring also has the notion of a profile, which in principle matches the concept of having different settings for different environments, but injecting a whole profile configuration can be a little more complex at deployment time.
The other practice I've found helpful is to have the environment variable settings default to reasonable things for developers. The pre-production and production deployments are all heavily scripted and so there's a reasonably strong guarantee that they will have all of the correct environment variables set. If $PGHOST defaults to localhost that's right for a non-Docker developer, and all of the container-based setups can set an appropriate value for their environment at deploy time.
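As a sketch of that defaulting pattern (the property, variable name, and database URL are illustrative assumptions):

# src/main/resources/application.yml (illustrative)
# The hostname comes from an environment variable with a developer-friendly
# default, so the same build runs on a laptop and, with PGHOST set, in a container.
spring:
  datasource:
    url: jdbc:postgresql://${PGHOST:localhost}:5432/app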
Even though our actual deployment system is based on containers (via Kubernetes) I do my day-to-day development in a mostly non-Docker environment. I can run an individual microservice by launching it from a shell prompt, possibly with setting some environment variables, and services have unit tests that can run just on the checked-out source tree, without needing any Docker at all. A second step is to build an image and deploy it into the development environment, and our CI system runs integration tests against the images it builds.

How to update application.prop in dockerised spring boot app

I have a dockerised Spring Boot based application and I wanted to update some of the values in application.properties. This seems achievable in the following ways:
Update the application.properties file, rebuild the image.
Add --spring.config.location= to the ENTRYPOINT, update the prop file, rebuild the image.
Use a volume mount, mention the prop file location, update the prop file, rebuild the image.
Use Spring profiles and pass the profile info before running the container. Even in this approach, update the profile-specific prop file and rebuild the image.
As we can see, all the approaches involve rebuilding the image. Is there a way to make changes to application.properties without rebuilding the image? What is the preferred approach in prod scenarios?
Thanks!
Yes, there is one more way: inject the properties using environment variables.
As per Spring's property resolution order, environment variables take precedence over property files.
For example, say you want to update spring.datasource.username; you can set it via the environment variable SPRING_DATASOURCE_USERNAME. Basically, replace . with _ and use upper case; Spring's relaxed binding maps this form back onto the property name.
This variable can be passed when you create the container from your image, as suggested in this answer.
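A minimal docker-compose sketch of passing such a variable at container start (the image and value names are illustrative assumptions):

# docker-compose.yml (illustrative): the property is overridden when the
# container starts, so no image rebuild is needed to change it.
services:
  app:
    image: my-dockerised-spring-app:latest
    environment:
      SPRING_DATASOURCE_USERNAME: appuser   # overrides spring.datasource.username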
I would recommend using environment variables, as @YogeshBadke suggests in their answer.
Your option of using a volume is also a good one. Host files mounted using docker run -v replace files in the image when the container is started, and this does not require an image rebuild. For example:
docker run -v $PWD/application.properties.prod:/app/application.properties ...
As a general rule you should not need to rebuild your image to run it in a different environment, and you should avoid mentioning environment-specific hostnames or similar settings anywhere in what gets built into your image. I would not recommend having separate "dev" vs. "prod" profiles in a Docker-hosted solution, since you'll have to rebuild the image as soon as you add a "qa" environment or anything else happens; it's better to be able to just change the deploy-time settings.
(If you happen to be deploying this in Kubernetes, my experience has been that setting individual values via environment variables is easiest, injecting an entire properties file via a ConfigMap works, and trying to embed the properties in the image simply doesn't.)
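For the Kubernetes case, a hedged sketch of the ConfigMap approach mentioned above (the names, paths, and the SPRING_CONFIG_ADDITIONAL_LOCATION choice are assumptions):

# configmap.yaml (illustrative): the whole properties file lives in a ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.properties: |
    key1=value1
    key2=value2

# deployment.yaml (excerpt, illustrative): mount the ConfigMap and point
# Spring Boot at the mounted directory.
containers:
  - name: app
    image: my-spring-app:1.0.0
    env:
      - name: SPRING_CONFIG_ADDITIONAL_LOCATION
        value: /config/
    volumeMounts:
      - name: config
        mountPath: /config
volumes:
  - name: config
    configMap:
      name: app-config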

Octopus Deploy; using parameters to define config file in transformations?

We're testing Octopus Deploy 2.0 (OD) to deploy web services, Windows services and Citrix applications.
QUICK QUESTION:
When using config transformation, can parameters be used to indicate which config file should be used for the transformations?
MORE DETAIL:
When setting up for config transformations, we would like to have files named
MyApp.DEV_US.config
MyApp.DEV_CANADA.config
MyApp.DEV_AUSTRALIA.config
and so on for TEST, STAGE and PRODUCTION
Our deployments to DEV, for example, always include deployments to all regions. So we would prefer if OD environments were DEV, TEST, STAGE and PRODUCTION. Then in each deployment, we have multiple steps that deploy to each region.
However, OD config transformations only look at OD environments when determining which config files to use in the transformation. It seems OD would require us to promote each region to the environment level, which from our POV is not ideal and would clutter the dashboard.
Can we pass parameters into the config transformation process such that we can indicate which file to use for the transform?
I believe you can achieve what you are after with the following, but it will require multiple steps in the process.
Create a step called Deploy to Dev - US and a step called Deploy to Dev - Canada.
Now define a variable called CountrySpecificConfigFiles, and scope it to the required step (and environment, etc.).
In the Configuration transformations section of each step, choose the variable defined above.
You could abstract this further by naming your steps DEV_US and DEV_CANADA and defining just one variable value, Web.#{Octopus.Task.Name}.config, without any scoping to steps, or by removing the variable and doing it inline in the Additional Transforms field.
