We're testing Octopus Deploy 2.0 (OD) to deploy web services, Windows services and Citrix applications.
QUICK QUESTION:
When using config transformation, can parameters be used to indicate which config file should be used for the transformations?
MORE DETAIL:
When setting up for config transformations, we would like to have files named
MyApp.DEV_US.config
MyApp.DEV_CANADA.config
MyApp.DEV_AUSTRALIA.config
and so on for TEST, STAGE and PRODUCTION
Our deployments to DEV, for example, always include deployments to all regions. So we would prefer the OD environments to be DEV, TEST, STAGE and PRODUCTION, and then have each deployment contain multiple steps, one deploying to each region.
However, OD config transformations only consider OD environments when determining which config files to use as part of the transformation. It seems OD would require us to promote each region to the environment level, which from our POV is not ideal and would clutter the dashboard.
Can we pass parameters into the config transformation process such that we can indicate which file to use for the transform?
I believe you can achieve what you are after with the following, but it will require multiple steps in the process.
Create a step called Deploy to Dev - US and a step called Deploy to Dev - Canada
Now define a variable called CountrySpecificConfigFiles, and scope its values to the required step (and environment, etc.)
In the Configuration transformations section of each step, choose the variable defined above.
You could abstract this further by naming your steps DEV_US and DEV_CANADA and defining just the one variable value, Web.#{Octopus.Task.Name}.config, without any scoping to steps, or by removing the variable and doing it inline in the Additional Transforms field.
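A sketch of how those pieces could fit together (the file names come from the question above; the step and variable names are only illustrative, and the transform => target syntax is how the Additional Transforms field maps a transform file onto the file it modifies):

Steps (all scoped to the DEV environment):
  Deploy to Dev - US
  Deploy to Dev - Canada

Variable CountrySpecificConfigFiles, one value per step:
  MyApp.DEV_US.config      (scoped to step "Deploy to Dev - US")
  MyApp.DEV_CANADA.config  (scoped to step "Deploy to Dev - Canada")

Additional Transforms entry on each step:
  #{CountrySpecificConfigFiles} => MyApp.config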
We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create a ConfigMap
Install a Service.yaml
Install a Deployment.yaml
Install an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc (along with Go templates and Sprig) and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files, one specific to each of our 5 environments. These properties files are in a simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter cannot be done if the config is baked into the container). This allows you, for example, to change a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc). This approach is similar to the continuous delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and have the parameters that change between environments injected as Helm-managed environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (it might be considered otherwise in the future...).
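For example, a minimal sketch of that split, assuming a Spring Boot app and made-up value names: one small values file per environment, with the environment-specific parameters injected as environment variables.

values-dev.yaml:

springProfile: dev
db:
  url: jdbc:postgresql://dev-db:5432/app

templates/deployment.yaml (container env section only):

env:
  - name: SPRING_PROFILES_ACTIVE
    value: {{ .Values.springProfile | quote }}
  - name: SPRING_DATASOURCE_URL
    value: {{ .Values.db.url | quote }}

Each environment would then be installed with something like helm install my-service ./deploy -f values-dev.yaml.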
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave of changing a command line parameter to pick the config file will probably work well. At the same time, keep the 12-factor-app approach in mind in case you find out you do need it in the future.
What is the best way to maintain images for different environments, and why?
Option 1:
Create different images for each specific environment (dev, stg, prod). We have to tell the Jenkins job which environment we are building the image for, and Spring Boot will load the environment-specific configuration files.
Advantages:
Environment-specific images.
Disadvantages:
Every environment has a different image, so we have to build one every time.
Option 2:
Build one image and externalize the config file. When deploying, create a shared/mounted path and place the appropriate config file there; load that config file at initialization.
Advantages:
One image can be used by all environments.
Disadvantages:
Custom configuration handling.
Need coordination between 2 teams.
Let me know if there are other options, and what the advantages and disadvantages of the above approaches (or any others) are.
Build once, deploy anywhere is considered a fundamental principle of continuous delivery (Google it for its advantages). So I would build the same image for all environments, and when running the image, there needs to be some way to configure the application based on the environment.
In terms of Docker, it allows you to configure environment variables when running a container (e.g. see this in the case of docker-compose).
In terms of Spring Boot, OS environment variables will override the application properties in the app.
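For example, a minimal sketch of that combination (the service name and values are made up): the image carries a default application.properties, and the compose file overrides individual properties per environment through environment variables.

docker-compose.yml:

services:
  app:
    image: my-company/my-service:1.0.0
    environment:
      # Spring Boot's relaxed binding maps this onto spring.datasource.url,
      # overriding the default baked into the image
      SPRING_DATASOURCE_URL: jdbc:postgresql://test-db:5432/app
      SPRING_PROFILES_ACTIVE: test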
When designing your images, split the filesystem and environment into 3 pieces.
Binaries, runtime, libraries, code needed to run the application. This belongs in your image.
Configurations and secrets that will differ for different users of the image. These belong in configuration files and environment variables that are injected at runtime (either as a bind mount, docker-compose.yml env vars, k8s config map, etc).
Data. This should be mounted as a volume, or in an external database accessed with a configuration/secret.
This keeps with the 12-factor design and enables portability, easier testing, and less risk of deploying something into production that is different from what was tested in CI.
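A rough Kubernetes-flavoured sketch of that three-way split (all names are illustrative):

Deployment (pod spec excerpt):

containers:
  - name: app
    image: my-company/my-service:1.0.0      # 1. binaries, runtime, code
    envFrom:
      - configMapRef:
          name: my-service-config           # 2. environment-specific configuration
      - secretRef:
          name: my-service-secrets          # 2. secrets
    volumeMounts:
      - name: data
        mountPath: /var/lib/my-service      # 3. data
volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-service-data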
You can build a Docker image for each environment separately.
For example: read the dev environment's variables from a .env.dev file.
Create images that contain the specific configuration for each environment.
Using Pivotal Cloud Foundry (PCF), is there a way to set SPRING_PROFILES_ACTIVE for each space within an org?
space1: SPRING_PROFILES_ACTIVE: development
space2: SPRING_PROFILES_ACTIVE: performance
space3: SPRING_PROFILES_ACTIVE: production
etc...
Thanks,
Brian
The primary way that you would set Spring profiles on Cloud Foundry is via environment variables.
Cloud Foundry does not provide a way to set environment variable groups per org or space. You can only set a staging and a running environment variable group, which apply to all staging apps or all running apps, respectively. That's in addition to the standard facilities for setting environment variables on an application.
I think you might be able to get this to work, but it'll take a little effort. Here's the idea.
Create a custom buildpack (don't panic, this isn't that difficult). The buildpack's only responsibility would be to create a .profile.d/ script (just a regular Bash script) that contains export SPRING_PROFILES_ACTIVE=<some-profile>.
Any buildpack can create .profile.d/ scripts which are primarily used to configure environment variables. These scripts are automatically sourced by the environment before any application starts. Thus if the buildpack sets SPRING_PROFILES_ACTIVE here, it would be available to your app and take effect.
https://docs.cloudfoundry.org/buildpacks/custom.html#contract
You would just need to create the bin/supply and bin/detect scripts as defined at the link below. The bin/supply is where you'd put your logic to create the .profile.d/ script and bin/detect could be as simple as exit 0 which would just tell it to run always.
https://docs.cloudfoundry.org/buildpacks/understand-buildpacks.html#buildpack-scripts
Your custom buildpack could be as simple as hard-coding the profiles to use, or it could be fancier and look at the VCAP_APPLICATION environment variable, which contains the space name.
Ex: echo $VCAP_APPLICATION | jq .space_name.
The buildpack could then apply logic to set the correct profile given the space name. I don't think the org name is available to the app at staging/runtime, at least not through environment variables, so it would be harder to apply logic based on that.
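A rough sketch of what bin/supply could look like (the space-to-profile mapping is made up, and it assumes jq is available on the stack; bin/detect can simply exit 0):

#!/usr/bin/env bash
# bin/supply <build-dir> <cache-dir> <deps-dir> <index>
set -e

BUILD_DIR="$1"
mkdir -p "$BUILD_DIR/.profile.d"

# VCAP_APPLICATION is available during staging and contains the space name
SPACE_NAME=$(echo "$VCAP_APPLICATION" | jq -r .space_name)

case "$SPACE_NAME" in
  space1) PROFILE=development ;;
  space2) PROFILE=performance ;;
  space3) PROFILE=production ;;
  *)      PROFILE=development ;;
esac

# .profile.d/ scripts are sourced before the application starts
echo "export SPRING_PROFILES_ACTIVE=$PROFILE" > "$BUILD_DIR/.profile.d/0_spring_profile.sh"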
The last step is using CF's multi-buildpack support. Your custom buildpack would be a supply buildpack so it would be first, then you'd list the actual buildpack to use second as you push your application.
Ex: cf push -b https://github.com/your-profile/your-custom-buildpack -b java_buildpack your-cool-app.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
Hope that helps!
I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test and prod environments, and I am planning to use AWS Parameter Store for this.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start then the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set up stages. The stage value can be passed as a parameter when executing sls commands.
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo that only very specific people have access to, and then reference it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)}, and then in your config folder you create the prod.json file, but have it point to the real one in the new repo you created. Note: this would make your project harder to maintain.
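A sketch of that layout (file names assumed):

serverless.yml:

service: my-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}

custom: ${file(./config/${opt:stage, 'dev'}.json)}

./config/dev.json and ./config/test.json would live in this repo; ./config/production.json would only be a stand-in that references (or is synced from) the restricted repo.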
Considering you don't want your developers to execute your production environment locally, you can use the environment variable that serverless-offline sets (IS_OFFLINE) to block the execution. You could also simply ask them not to do so.
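As a sketch of such a guard inside a handler (serverless-offline sets IS_OFFLINE when running locally; STAGE here is an assumed variable you would expose via provider.environment):

// handler.js (sketch)
module.exports.run = async () => {
  // refuse to run the production stage through serverless-offline
  if (process.env.IS_OFFLINE && process.env.STAGE === 'production') {
    throw new Error('Refusing to run the production stage locally');
  }
  return { statusCode: 200, body: 'ok' };
};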
Here is what should be a good practice and a solution to your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access. When your developers try to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: In this case the person/group with access to the production VPC will have to do the deploy.
If the answer does not suffice, could you please clarify which type of resource(s) are sensitive across your Serverless project? I am taking it for granted that it is the DB, as that is the most common scenario.
I have an Octopus deployment that needs to go to a load-balanced environment, but there are small changes in the config between the two servers.
So, in summary:
It deploys to the same environment (PreProd)
It gets deployed to two different servers linked to that environment
There are small differences between the web.config files on the two servers.
I already have a web.preprod.config that gets transformed into web.config. Does that mean I need to create more config files, i.e. web.server1.preprod.config and web.server2.preprod.config, or is there another, cleaner way of doing it? It is a whole section that is different, not just an appSetting.
A solution that has worked well for me in similar scenarios (with OctopusDeploy specifically) is to use the web.{environment}.config transforms to get the correct config structure in place, but to use variable substitution and define placeholders in the transform file, so that the run-time environment-specific values stay in Octopus. Exactly how you break down the substitution syntax really depends on your config, but you can use the machine-scoping features of Octopus variables to control the actual values injected.
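For example, a sketch (the section, attribute and variable names are hypothetical): the Web.PreProd.config transform puts the structure in place, and Octopus variable substitution fills in the machine-scoped values.

Web.PreProd.config:

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <myServiceSettings xdt:Transform="Replace">
    <endpoint address="#{ServiceEndpoint}" />
  </myServiceSettings>
</configuration>

ServiceEndpoint would then be defined twice in Octopus, once scoped to each machine behind the load balancer.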
This scenario is a good example of where web.config transforms start to blur the edges of configuration management; environment-specific config is really the domain of Octopus (or, more specifically, a centralised configuration store), but the solution proposed here is taking it out of Octopus and back into the source repository, which is one of the problems Octopus is actually designed to solve.
For example, what if you introduced a 3rd node into your pre-prod load balancer? That demands a code change, build, version bump and package, all of which can be completely avoided with the approach above.
The general approach to problems like this is, indeed, to create a web.server*.preprod.config, or local.config. I'd suggest looking at what exactly is different in the config, and why. Try to find things that you can merge. For instance:
If one difference is the drive letter, and your config contains these entries:
C:/a/b/c.txt
C:/a/b/d.txt
try splitting those entries into a single drive setting plus paths that reference it:
drive=C
{drive}:/a/b/c.txt
{drive}:/a/b/d.txt
In that case you only have to change drive=C to drive=D to make both entries work on the other server.