There is no official support for Act on Heroku; however, the Maven buildpack seems to do almost everything the app needs, except start it properly. Are there any recommended settings and/or profiles to start the app properly?
The two main things to figure out were how to bind to the dynamically assigned port and how to load from a different profile. This Procfile handles both:
web: export act_env_http_port=$PORT && java $JAVA_OPTS -Dprofile=heroku -cp target/classes:target/dependency/* com.larvalabs.gifmsgbot.AppEntry
The environment variable that specifies the port is in a special (mostly undocumented) format that allows you to automatically override configuration settings. A tiny bit more info is in the issue that contains the relevant changes: https://github.com/actframework/actframework/issues/636
Also note that I'm using a profile named heroku here. This is because I don't totally understand how the prod profile works yet; I couldn't load settings from it when specifying -Dprofile=prod.
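For reference, the same entry point can be started locally with the same mechanism (a sketch; like the Procfile above, it assumes your Maven build copies dependencies into target/dependency):
# Mirror what the Procfile does, but with a fixed local port
export act_env_http_port=5000
java -Dprofile=heroku -cp target/classes:target/dependency/* com.larvalabs.gifmsgbot.AppEntry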
Google's App Engine provides a list of predefined environment variables and additional environment variables may be defined in app.yaml. Meanwhile, the instructions for Testing and Deploying your Application just say to use go run to test the app locally. If I test my app locally inside a cloud-sdk Docker container, is there a gcloud command (or another tool) that would create the same environment variables in my local container as in App Engine? Right now I am just setting the environment variables locally with a bash script, but that means that I need to maintain the variables in multiple locations.
The variables are all runtime metadata. Only the runtime can provide values for them, and the values are specific to each deployment.
If your app needs this metadata, you will know which variables it uses and how it uses them, and when you specify a value, you will need to provide the variable name anyway, e.g. GAE_SERVICE="freddie".
For these reasons, it's probably not useful for a tool to spoof these values for you during local testing. When you go run your app, there's nothing intrinsic about it that makes it an App Engine app; it only becomes one after you deploy it, because it's then running on the App Engine service.
If you're running your code in a container, you can provide environment variables to the container runtime. Doing so is likely preferable to scripting these:
GAE_SERVICE="freddie"
docker run .... \
--env=GAE_SERVICE=${GAE_SERVICE} \
...
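If you do want several of the documented variables available locally in one shot, an env file keeps them out of your scripts (a sketch; the file name and values are hypothetical, while GAE_SERVICE, GAE_VERSION, GOOGLE_CLOUD_PROJECT and PORT are among the variables App Engine documents):
cat > gae.env <<'EOF'
GAE_SERVICE=freddie
GAE_VERSION=20200101t000000
GOOGLE_CLOUD_PROJECT=my-project
PORT=8080
EOF
docker run --env-file=gae.env ...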
Although not really practical with App Engine, there's an argument for having your code not bind directly to any runtime (e.g. App Engine) metadata. If it does, you can't run it easily elsewhere.
On other platforms, metadata would be abstracted further and some sort of sidecar would convert it into a form that's presented consistently regardless of where you deploy; your code doesn't change, but an adapter configures it correctly for each runtime.
So today I basically fumbled a huge amount of traffic because I deployed a gcloud project while having the wrong project set. As you know, when we deploy to gcloud we have to make sure we choose the right project using gcloud config set project [PROJECT_NAME]; unfortunately, this is sometimes hard to remember to do when multiple projects require quick deployments to be sent out as bugs arise.
I was wondering if anyone had a good solution for this, such as a predeploy shell script that makes sure you are deploying to the right project.
Thanks in advance,
Nikita
@Ajordat's answer is good, but I think there's a simpler solution.
If you unset the default project, then you'll be required to explicitly set --project=${PROJECT} on each gcloud command.
gcloud config unset project
Then, every gcloud command will require:
gcloud ... --project=${PROJECT} ...
This doesn't inhibit specifying an incorrect ${PROJECT} value in the commands but it does encourage a more considered approach.
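If you still want the predeploy guard the question asks about, a small wrapper can pin the intended project per repository and pass it explicitly (a sketch; deploy.sh and .gcloud-project are hypothetical names):
#!/usr/bin/env bash
# deploy.sh -- always deploy with the project pinned in this repo
set -euo pipefail
PROJECT=$(cat .gcloud-project)   # single line, e.g. my-prod-project
read -r -p "Deploy to '${PROJECT}'? [y/N] " answer
[ "${answer}" = "y" ] || { echo "Aborted." >&2; exit 1; }
gcloud app deploy --project="${PROJECT}"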
A related approach is to define configurations (sets of properties) and to enable these before running commands. IMO this is problematic too and I recommend unsetting gcloud config properties and always being explicit.
See:
https://cloud.google.com/sdk/gcloud/reference/config/configurations
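For completeness, named configurations look like this (real commands, hypothetical names), though it's still easy to forget which one is active:
gcloud config configurations create staging
gcloud config set project my-staging-project
# later, switch between configurations with:
gcloud config configurations activate staging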
As seen in the documentation, the gcloud tool has some flags that are independent of the subcommand (app deploy) that is being used, such as --account, --verbosity or --log-http. One of these gcloud-wide flags is --project, which will let you specify the project you are deploying to.
Now, to deploy you must use a project; in your case, since you have to deploy to different projects, you must specify it somewhere. There's no workaround to specifying the project, because the service must have somewhere to be deployed to. You have two options:
You can set the default project to the one you want to use:
gcloud config set project <project-name>
gcloud app deploy
Notice that afterwards you would have to set the project back to the one you were previously using.
Or you can use the --project flag in order to specify where to deploy.
gcloud app deploy --project=<project-name>
I would recommend using the --project flag, as it's more deployment-specific and doesn't involve changing the default project. Moreover, you can keep working on the previous project ID since it doesn't change the default values; it's just a deploy to another project.
Notice that in the past you could specify the project in the app.yaml file with the application tag, but this was deprecated in favor of the --project flag.
I have a general question about good practices and, let's say, the way of working between Docker and an IDE.
Right now I am learning Docker and Docker Compose, and I must admit that I like the idea of containers! I've deployed my whole Spring Boot microservices architecture in containers, and everything is working really well!
The thing is that everywhere my properties declare a localhost address, I had to change localhost to a custom container name, for example localhost:8888 --> naming-server:8888. That is fine for running in containers, but obviously when I try to run the services from the IDE, it fails. I like working on/optimizing/debugging microservices in the IDE, but I don't want to rebuild the image and rerun the whole docker-compose every time I make a tiny change.
What does it look like in real dev?
Regards!
In my day job there are at least four environments my code can run in: my desktop development environment, a developer-oriented container environment, and pre-production and production container environments. All four of these environments can have different values for things like host names. That means they must be configurable in some way.
If you've hard-coded localhost as a hostname in your application source code, it will not run in any environment other than your development system, and it needs to be changed to a configuration option.
From a pure-Docker point of view, making these configurable via environment variables is easiest (and Spring can set property values from environment variables). Spring also has the notion of a profile, which in principle matches the concept of having different settings for different environments, but injecting a whole profile configuration can be a little more complex at deployment time.
The other practice I've found helpful is to have the environment variable settings default to reasonable things for developers. The pre-production and production deployments are all heavily scripted and so there's a reasonably strong guarantee that they will have all of the correct environment variables set. If $PGHOST defaults to localhost that's right for a non-Docker developer, and all of the container-based setups can set an appropriate value for their environment at deploy time.
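As a concrete sketch (the property and variable names here are hypothetical), a Spring placeholder with a localhost default lets the same artifact run unchanged in the IDE and in a container:
# application.properties (sketch):
#   naming-server.url=${NAMING_SERVER_URL:http://localhost:8888}

# From the IDE or a shell, the localhost default applies:
java -jar target/app.jar

# In docker-compose (or docker run), override it for the container network:
docker run --env NAMING_SERVER_URL=http://naming-server:8888 my-app-image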
Even though our actual deployment system is based on containers (via Kubernetes) I do my day-to-day development in a mostly non-Docker environment. I can run an individual microservice by launching it from a shell prompt, possibly with setting some environment variables, and services have unit tests that can run just on the checked-out source tree, without needing any Docker at all. A second step is to build an image and deploy it into the development environment, and our CI system runs integration tests against the images it builds.
For each space within an org using Pivotal Cloud Foundry (PCF), is there a way to set SPRING_PROFILES_ACTIVE per space?
space1: SPRING_PROFILES_ACTIVE: development
space2: SPRING_PROFILES_ACTIVE: performance
space3: SPRING_PROFILES_ACTIVE: production
etc...
Thanks,
Brian
The primary way that you would set Spring profiles on Cloud Foundry is via environment variables.
Cloud Foundry does not provide a way to set environment variable groups per org or space. You can only set a staging and a running environment variable group which applies to all staging or all running apps. That's in addition to the standard facilities for setting environment variables on an application.
I think you might be able to get this to work, but it'll take a little effort. Here's the idea.
Create a custom buildpack (don't panic, this isn't that difficult). The buildpack's only responsibility would be to create a .profile.d/ script (just a regular Bash script) that contains export SPRING_PROFILES_ACTIVE=<some-profile>.
Any buildpack can create .profile.d/ scripts, which are primarily used to configure environment variables. These scripts are automatically sourced by the environment before any application starts. Thus, if the buildpack sets SPRING_PROFILES_ACTIVE there, it will be available to your app and take effect.
https://docs.cloudfoundry.org/buildpacks/custom.html#contract
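For example, the generated script could be as small as this (hypothetical file name):
# .profile.d/spring_profile.sh
export SPRING_PROFILES_ACTIVE=performance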
You would just need to create the bin/supply and bin/detect scripts as defined at the link below. bin/supply is where you'd put the logic to create the .profile.d/ script, and bin/detect can be as simple as exit 0, which tells the platform to always run this buildpack.
https://docs.cloudfoundry.org/buildpacks/understand-buildpacks.html#buildpack-scripts
Your custom buildpack could be as simple as hard-coding the profiles to use, or it could be fancier and look at the VCAP_APPLICATION environment variable, which contains the space name.
Ex: echo $VCAP_APPLICATION | jq .space_name.
The buildpack could then apply logic to set the correct profile given the space name. I don't think the org name is available to the app at staging/runtime, at least not through environment variables, so it would be harder to apply logic based on that.
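Putting that together, a minimal bin/supply might look something like this (a sketch, not any existing buildpack's code; the file names and space-to-profile mapping are assumptions, it assumes jq is available in the stack, and bin/detect is just the exit 0 script mentioned above):
#!/usr/bin/env bash
# bin/supply <build-dir> <cache-dir> <deps-dir> <index>
set -euo pipefail
DEPS_DIR="$3"
DEPS_IDX="$4"

# profile.d scripts contributed by a supply buildpack are sourced before the app starts
mkdir -p "${DEPS_DIR}/${DEPS_IDX}/profile.d"
cat > "${DEPS_DIR}/${DEPS_IDX}/profile.d/spring_profile.sh" <<'EOF'
# VCAP_APPLICATION is provided by the platform at runtime
space_name=$(echo "${VCAP_APPLICATION}" | jq -r .space_name)
case "${space_name}" in
  space1) export SPRING_PROFILES_ACTIVE=development ;;
  space2) export SPRING_PROFILES_ACTIVE=performance ;;
  *)      export SPRING_PROFILES_ACTIVE=production  ;;
esac
EOF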
The last step is using CF's multi-buildpack support. Your custom buildpack would be a supply buildpack, so it would go first; then you'd list the actual buildpack to use second as you push your application.
Ex: cf push -b https://github.com/your-profile/your-custom-buildpack -b java_buildpack your-cool-app.
https://docs.cloudfoundry.org/buildpacks/use-multiple-buildpacks.html
Hope that helps!
I have a Phoenix 1.2 application running on Heroku, with an ENV variable that sets the email addresses I wish to send email to.
When I change the environment variable's value, it doesn't seem to take; only after I make a PR and redeploy does the change take effect.
This makes it seem like I need to "reload" the code or memory somehow. Thus, 2 questions:
Why is this occurring?
Any ideas on how to fix it?
I'm assuming you're setting your env values in config files and using Application.get_env to access them in your application.
Elixir applications are compiled, not interpreted. When you deploy your application to Heroku, it is compiled with the available environment variables, and their values become hardcoded into the app. So even restarting the application would not work; it needs to be recompiled with the new environment variables.
Here are a few solutions:
You can use RELX_REPLACE_OS_VARS=true if you're using Exrm to build releases (see the sketch after this list);
Use System.get_env for getting ENV variables instead, but this won't work unless the application is restarted after changing the environment configuration;
Use a simple wrapper module that lets you use environment configurations by specifying them like {:system, "MY_VARIABLE"} in config.exs;
Or use an existing package like Confex or Conform to manage your configurations.
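For the first option, the flag is just another Heroku config var (a sketch; EMAIL_RECIPIENTS is a hypothetical variable name, and with Exrm/relx you would reference it as "${EMAIL_RECIPIENTS}" in sys.config):
heroku config:set RELX_REPLACE_OS_VARS=true
heroku config:set EMAIL_RECIPIENTS="one@example.com,two@example.com"
Changing a config var restarts the dynos, so the release re-reads the variables the next time it boots.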