Changing ENV variables in Heroku doesn't change them in Phoenix application

I have a Phoenix 1.2 application running on Heroku, with an ENV variable that sets the email addresses I wish to send email to.
When I change the environment variable's value, the change doesn't seem to take effect; only after I make a PR and redeploy does the new value appear.
This makes it seem like I need to "reload" the code or memory somehow. Thus, two questions:
Why is this occurring?
Any ideas on how to fix it?

I'm assuming you're setting your env values in config files and using Application.get_env to access them in your application.
Elixir applications are compiled, not interpreted. When you deploy your application to Heroku, it is compiled with the environment variables available at that time, and their values become hardcoded into the app. So even restarting the application would not work; it needs to be recompiled with the new environment variables.
Here are a few solutions:
You can use RELX_REPLACE_OS_VARS=true if you're using Exrm to build releases;
Use System.get_env to read ENV variables at runtime instead, though this won't take effect until the application is restarted after the environment configuration changes;
Use a simple wrapper module that lets you use environment configurations by specifying them like {:system, "MY_VARIABLE"} in config.exs (see the sketch after this list);
Or use an existing package like Confex or Conform to manage your configurations.
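
Here's a minimal sketch of such a wrapper, assuming the variable holds the recipient addresses from the question; the module and config names are illustrative:

defmodule MyApp.Config do
  # Resolves config at runtime so that {:system, "VAR"} tuples in
  # config.exs are read from the environment when the app runs,
  # rather than being baked in at compile time.
  def get(app, key, default \\ nil) do
    case Application.get_env(app, key, default) do
      {:system, var} -> System.get_env(var) || default
      value -> value
    end
  end
end

# In config/config.exs:
#   config :my_app, :recipient_email, {:system, "RECIPIENT_EMAIL"}
#
# In application code:
#   MyApp.Config.get(:my_app, :recipient_email)

Note that on Heroku, changing a config var restarts your dynos, so a runtime lookup like this picks up the new value without a redeploy.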

Related

Testing app that depends on environment variables locally

Google's App Engine provides a list of predefined environment variables and additional environment variables may be defined in app.yaml. Meanwhile, the instructions for Testing and Deploying your Application just say to use go run to test the app locally. If I test my app locally inside a cloud-sdk Docker container, is there a gcloud command (or another tool) that would create the same environment variables in my local container as in App Engine? Right now I am just setting the environment variables locally with a bash script, but that means that I need to maintain the variables in multiple locations.
The variables are all runtime metadata. Only the runtime can provide values for these variables and then the data is specific to the deployment.
If your app needs this metadata, you will know which variables it uses and how it uses them, and when you specify the value you will need to provide the variable name anyway, e.g. GAE_SERVICE="freddie".
For these reasons, it's likely not useful for local testing to have a tool spoof these values for you. When you go run your app, there's nothing intrinsic about it that makes it an App Engine app. It only becomes one after you deploy it, because it's running on the App Engine service.
If you're running your code in a container, you can provide environment variables to the container runtime. Doing so is likely preferable to scripting these:
GAE_SERVICE="freddie"
docker run .... \
  --env=GAE_SERVICE=${GAE_SERVICE} \
  ...
Although not really practical with App Engine, there's an argument for having your code not bind directly to any runtime (e.g. App Engine) metadata. If it does, you can't run it easily elsewhere.
In other platforms, metadata would be abstracted further and some sort of sidecar would convert the metadata into a form that's presented consistently regardless of where you deploy it; your code doesn't change but some adapter configures it correctly for each runtime.
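
As a small Go sketch of that idea, the app can isolate the metadata lookup behind one helper with a local fallback (the default value here is illustrative):

package main

import (
	"fmt"
	"os"
)

// service returns the App Engine service name, falling back to a
// local default when GAE_SERVICE is absent (e.g. under plain go run).
func service() string {
	if v, ok := os.LookupEnv("GAE_SERVICE"); ok {
		return v
	}
	return "local-dev"
}

func main() {
	fmt.Println("running as service:", service())
}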

How do I deploy an Act framework app to Heroku?

There is no official support for Act on Heroku; however, the Maven buildpack seems to do almost everything the app needs except start it properly. Are there any recommended settings and/or profiles to start the app properly?
The main two things to figure out were how to bind to the dynamically assigned port, and how to load from a different profile. This Procfile handles both of those things:
web: export act_env_http_port=$PORT && java $JAVA_OPTS -Dprofile=heroku -cp target/classes:target/dependency/* com.larvalabs.gifmsgbot.AppEntry
The environment variable that specifies the port is in a special (mostly undocumented) format that allows you to automatically override configuration settings. A bit more info is in the issue that contains the relevant changes: https://github.com/actframework/actframework/issues/636
Also note that I'm using a profile named heroku here because I don't fully understand how the prod profile works yet; I couldn't load settings from it when specifying -Dprofile=prod.

How to update application.prop in dockerised spring boot app

I have a dockerised Spring Boot application and I want to update some of the values in application.properties. This seems achievable in a few ways:
Update the application.properties file, rebuild the image.
Add --spring.config.location= to the ENTRYPOINT, update the prop file, rebuild the image.
Use a volume mount, mention the prop file location, update the prop file, rebuild the image.
Use Spring profiles and pass the profile info before running the container. Even in this approach, update the profile-specific prop file and rebuild the image.
As we can see, all the approaches involve rebuilding the image. Is there a way to change application.properties without rebuilding the image? What is the preferred approach in prod scenarios?
Thanks!
Yes, there is one more way: inject properties using environment variables.
Per Spring's property resolution order, environment variables take precedence over property files.
For example, say you want to update spring.datasource.username; you can set it via the environment variable SPRING_DATASOURCE_USERNAME. Basically, replace each . with _ and use upper case. Spring's relaxed binding is not case sensitive, but by convention we put environment variables in upper case.
This variable can be passed when you create the container from your image, as suggested in this answer.
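For example (image name and credential values here are purely illustrative):

docker run -e SPRING_DATASOURCE_USERNAME=produser \
  -e SPRING_DATASOURCE_PASSWORD=s3cret \
  my-spring-app:latest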
I would recommend using environment variables, as @YogeshBadke suggests in their answer.
Your option of using a volume is also a good one. Host files mounted with docker run -v replace files in the image when the container starts, and this does not require an image rebuild. For example:
docker run -v $PWD/application.properties.prod:/app/application.properties ...
As a general rule you should not need to rebuild your image to run it in a different environment, and you should avoid baking environment-specific hostnames or similar settings into your image. I would not recommend having separate "dev" vs. "prod" profiles in a Docker-hosted solution, since you'll have to rebuild the image as soon as you add a "qa" environment or anything else changes; it's better to be able to just change the deploy-time settings.
(If you happen to be deploying this in Kubernetes, my experience has been that setting individual values via environment variables is easiest, injecting an entire properties file via a ConfigMap works, and trying to embed the properties in the image simply doesn't.)
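
As a sketch of that last Kubernetes point (all names and values are illustrative), injecting a value as an environment variable via a ConfigMap looks something like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  SPRING_DATASOURCE_USERNAME: produser
---
apiVersion: v1
kind: Pod
metadata:
  name: my-spring-app
spec:
  containers:
    - name: app
      image: my-spring-app:latest
      envFrom:
        - configMapRef:
            name: app-config

Every key in the ConfigMap becomes an environment variable in the container, so the relaxed-binding trick above applies unchanged.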

Laravel 5.7 - How to set all variables inside .env to point to test environment when the system is started?

I have 2 buckets, prod and dev.
Inside .env I have S3_PROD and S3_DEV.
I want my system to point to S3_DEV when I am in my dev environment.
Considering that I could have 10 variables that need to point to a specific endpoint based on our environment, what would be the best approach to set that?
Your .env file should not be tracked by whatever deployment/versioning system you have. Ideally your dev environment's .env file would contain the appropriate keys, and in your config you would simply call env('S3_REGION'), for example.
But for the sake of bad ideas, let's say you have almost identical .env files in your dev environment and your production: change APP_ENV=local to dev or prod, and then use an if statement in your config.
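A sketch of the first (recommended) approach, with illustrative key names, as an s3 disk entry in config/filesystems.php:

's3' => [
    'driver' => 's3',
    'key'    => env('S3_KEY'),
    'secret' => env('S3_SECRET'),
    'region' => env('S3_REGION'),
    // each environment's untracked .env supplies its own bucket
    'bucket' => env('S3_BUCKET'),
],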
I would highly recommend you follow the documentation on this.
Your .env file should not be committed to your application's source control, since each developer / server using your application could require a different environment configuration. Furthermore, this would be a security risk in the event an intruder gains access to your source control repository, since any sensitive credentials would get exposed.
If you are developing with a team, you may wish to continue including a .env.example file with your application. By putting placeholder values in the example configuration file, other developers on your team can clearly see which environment variables are needed to run your application. You may also create a .env.testing file. This file will override the .env file when running PHPUnit tests or executing Artisan commands with the --env=testing option.
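For the question's setup, such a .env.example might look like this (variable names taken from the question, values are placeholders):

APP_ENV=local
S3_KEY=your-s3-key
S3_SECRET=your-s3-secret
S3_REGION=your-s3-region
S3_DEV=your-dev-bucket
S3_PROD=your-prod-bucket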

Within Pivotal Cloud Foundry, is there a way to set SPRING_PROFILES_ACTIVE per space?

Within an org using Pivotal Cloud Foundry (PCF), is there a way to set SPRING_PROFILES_ACTIVE for each space?
space1: SPRING_PROFILES_ACTIVE: development
space2: SPRING_PROFILES_ACTIVE: performance
space3: SPRING_PROFILES_ACTIVE: production
etc...
Thanks,
Brian
The primary way that you would set Spring profiles on Cloud Foundry is via environment variables.
Cloud Foundry does not provide a way to set environment variable groups per org or space. You can only set a staging and a running environment variable group which applies to all staging or all running apps. That's in addition to the standard facilities for setting environment variables on an application.
I think you might be able to get this to work, but it'll take a little effort. Here's the idea.
Create a custom buildpack (don't panic, this isn't that difficult). The buildpack's only responsibility would be to create a .profile.d/ script (just a regular Bash script) that contains export SPRING_PROFILES_ACTIVE=<some-profile>.
Any buildpack can create .profile.d/ scripts which are primarily used to configure environment variables. These scripts are automatically sourced by the environment before any application starts. Thus if the buildpack sets SPRING_PROFILES_ACTIVE here, it would be available to your app and take effect.
https://docs.cloudfoundry.org/buildpacks/custom.html#contract
You would just need to create the bin/supply and bin/detect scripts as defined at the link below. bin/supply is where you'd put your logic to create the .profile.d/ script, and bin/detect could be as simple as exit 0, which tells the platform the buildpack should always run.
https://docs.cloudfoundry.org/buildpacks/understand-buildpacks.html#buildpack-scripts
Your custom buildpack could be as simple as hard-coding the profiles to use, or it could be fancy and look at the VCAP_APPLICATION environment variable, which contains the space name.
Ex: echo $VCAP_APPLICATION | jq .space_name.
The buildpack could then apply logic to set the correct profile given the space name (sketched below). I don't think the org name is available to the app at staging/runtime, at least not through environment variables, so it would be harder to apply logic based on that.
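A minimal sketch of such a bin/supply, assuming VCAP_APPLICATION is available at staging time and using the question's space-to-profile mapping (the mappings and file name are illustrative; bin/detect would just be exit 0):

#!/usr/bin/env bash
# bin/supply <build-dir> <cache-dir> <deps-dir> <index>
# Writes a .profile.d/ script exporting SPRING_PROFILES_ACTIVE
# based on the space name found in VCAP_APPLICATION.
set -e
BUILD_DIR="$1"

SPACE_NAME=$(echo "$VCAP_APPLICATION" | jq -r .space_name)
case "$SPACE_NAME" in
  space1) PROFILE=development ;;
  space2) PROFILE=performance ;;
  space3) PROFILE=production ;;
  *)      PROFILE=development ;;
esac

mkdir -p "$BUILD_DIR/.profile.d"
echo "export SPRING_PROFILES_ACTIVE=$PROFILE" > "$BUILD_DIR/.profile.d/0_spring_profile.sh"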
The last step is using CF's multi-buildpack support. Your custom buildpack would be a supply buildpack so it would be first, then you'd list the actual buildpack to use second as you push your application.
Ex: cf push -b https://github.com/your-profile/your-custom-buildpack -b java_buildpack your-cool-app.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
Hope that helps!
