I'm confused about how the host: parameter in the Endpoint configuration in Phoenix works.
I'm deploying to different Heroku apps (prod and staging), each with its own URL. I want to configure the host to be dynamic, coming from an environment variable, like so:
config :testapp, TestApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: {:system, "HOST"}, port: 443],
  cache_static_manifest: "priv/static/manifest.json",
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: System.get_env("SECRET_KEY_BASE")
However, after deploying, my asset URLs no longer have the unique hash set by mix phoenix.digest, which is a deal breaker.
Interestingly, when I hardcode the URL:
config :testapp, TestApp.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [scheme: "https", host: "someapp-staging.herokuapp.com", port: 443],
  cache_static_manifest: "priv/static/manifest.json",
  force_ssl: [rewrite_on: [:x_forwarded_proto]],
  secret_key_base: System.get_env("SECRET_KEY_BASE")
Even if the host doesn't match the Heroku app URL, everything still seems to work fine, and the asset URLs are correct. E.g. I can deploy to an app at foo.herokuapp.com and everything still works.
The configuration above is from prod.exs. I'm using the Elixir and Phoenix static custom buildpacks, with the following config:
# elixir_buildpack.config
# Elixir version
elixir_version=1.2.3
# Always rebuild from scratch on every deploy?
always_rebuild=true
# ENV variables
config_vars_to_export=(DATABASE_URL HOST)
and
# phoenix_static_buildpack.config
# We can set the version of Node to use for the app here
node_version=5.10.0
# We can set the version of NPM to use for the app here
npm_version=3.8.3
# ENV variables
config_vars_to_export=(DATABASE_URL HOST)
I could probably introduce a separate staging.exs config file and set MIX_ENV=staging, but I would like to understand:
1) Why using {:system, "HOST"} breaks digested asset URLs
2) Why any string works fine on different applications and URLs
Any help is appreciated!
Heroku does not have an environment variable named HOST by default (though PORT is available). I'd double-check that you've added it as a config variable in your Heroku settings.
The command heroku run printenv is handy as it will output both base environment variables and config vars added manually or by add-ons.
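For example, assuming the staging app is named someapp-staging (a placeholder), the config var can be set with the Heroku CLI:
heroku config:set HOST=someapp-staging.herokuapp.com --app someapp-staging
After that, heroku run printenv should list HOST alongside PORT.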
I'm having some issues configuring Keycloak to run on our server.
Locally it works great, but in our test environment, after login, any call using the received access token fails with: "Invalid token issuer. Expected "http://keycloak:8080/auth/realms/{realmName}" but was "http://{our-test-server-IP}/auth/realms/{realmName}"".
So basically, our backend connects to the internal Keycloak Docker image, but when a request comes in, Keycloak expects the issuer to be the configured external IP. Even though both issuers point to the same service, Keycloak sees them as different and responds with a 401.
docker-compose.yml:
keycloak:
  image: jboss/keycloak:12.0.4
  restart: on-failure
  environment:
    PROXY_ADDRESS_FORWARDING: "true"
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: password
    KEYCLOAK_LOGLEVEL: DEBUG
    KEYCLOAK_IMPORT: /etc/settings/realm.json -Dkeycloak.profile.feature.upload_scripts=enabled
    TZ: Europe/Bucharest
    DB_VENDOR: POSTGRES
    DB_ADDR: db
    DB_DATABASE: user
    DB_SCHEMA: keycloak
    DB_USER: user
    DB_PASSWORD: user
  ports:
    - 8090:8080
  volumes:
    - ./settings:/etc/settings
  depends_on:
    - db
Spring application.yml:
keycloak:
  cors: true
  realm: Realm-Name
  resource: back-office
  auth-server-url: http://keycloak:8080/auth/
  public-client: false
  credentials:
    secret: 8401b642-0ae9-4dc8-87a6-2f494b388a49
keycloak-client:
  id: bcc94ed5-0099-40e0-b460-572eba3f0214
If we change the backend property auth-server-url to point to the exposed endpoint instead of the internal Docker container, we get a timeout; it seems it doesn't want to connect. I understand that the main issue is that we are running both the Keycloak instance and our backend application on the same server, but I don't see why that shouldn't work or why they cannot connect to each other.
We tried setting FRONTEND_URL in the environment when running the container and in the Keycloak admin console, but nothing changed. We've also tried setting forceBackendUrlToFrontendUrl to true in the standalone.xml/standalone-ha.xml files (./jboss-cli.sh --connect "/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.forceBackendUrlToFrontendUrl, value=true)") and reloaded the Keycloak instance inside the Docker container using ./jboss-cli.sh --connect command=:reload, but nothing changed.
I understand that by setting FRONTEND_URL, all tokens should be issued under that URL by the Keycloak instance, so we would not have this issue, but I've tried everything I've found so far regarding the Keycloak configuration and nothing seems to change. How can I make sure that the issuer that signs the access token and the one that the backend service expects are the same (hopefully the frontend one)? And how can I configure this? Is there some property I'm missing, or did I do something wrong while configuring it?
Might be related to this answer: https://stackoverflow.com/a/64095482/13494285
You could set the Host header value to the expected URL.
To override this behavior, you might try setting the KEYCLOAK_HOSTNAME environment variable to the expected URL.
The documentation seems to be out of date (it suggests the KEYCLOAK_FRONTEND_URL variable here); instead, KEYCLOAK_HOSTNAME is what configures the fixed hostname provider, as seen here.
In this context, KEYCLOAK_HTTP_PORT is also required, to set the port to 8080.
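A minimal sketch of how those two variables might be added to the compose service above (the hostname value is a placeholder for your external address):
keycloak:
  environment:
    PROXY_ADDRESS_FORWARDING: "true"
    # external hostname that clients use; placeholder value
    KEYCLOAK_HOSTNAME: our-test-server.example.com
    KEYCLOAK_HTTP_PORT: "8080"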
Setting KEYCLOAK_HOSTNAME to the external hostname (as defined in KEYCLOAK_FRONTEND_URL) definitely worked in my case (an Eclipse Che installation on a vanilla Kubernetes cluster).
I am trying to deploy my Dropwizard project to Heroku.
I have added a Procfile and a Postgres DB to the Heroku app.
My Procfile reads:
web: java $JAVA_OPTS -Ddw.server.connector.port=$PORT -Ddw.database.url=$DATABASE_URL -jar target/api-1.0-SNAPSHOT.jar server config.yml
When I try to deploy, I receive the following error/crash message in the logs.
org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator: HHH000342: Could not obtain connection to query metadata : Driver:org.postgresql.Driver@53d13cd4 returned null for URL:postgres://fdeqzbddzbefaz:138912590e989b1b8fab5d169a1aea291f04b2d3bc040b1bbf6642a9207a5355@ec2-54-235-101-91.compute-1.amazonaws.com:5432/d67crr4pvqrfee
Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
State changed from starting to crashed
Process exited with status 1
My config.yml reads:
database:
  # the name of your JDBC driver
  driverClass: org.postgresql.Driver
  # the username
  user: localusername
  # the JDBC URL
  url: jdbc:postgresql://localhost/dbname

# use the simple server factory if you only want to run on a single port
# HEROKU NOTE - the port gets overridden with the Heroku $PORT in the Procfile
server:
  type: simple
  applicationContextPath: /
  #adminContextPath: /admin # If you plan to use an admin path, you'll need to also use a non-root app path
  connector:
    type: http
    port: 8080
Does anyone have any troubleshooting ideas?
The DATABASE_URL env var is not directly compatible with the JDBC URL format. See the docs. Specifically:
The DATABASE_URL for the Heroku Postgres add-on follows the convention below:
postgres://username:password@host:port/dbname
However, the Postgres JDBC driver uses the following convention:
jdbc:postgresql://host:port/dbname?user=username&password=password
Instead, try using JDBC_DATABASE_URL, as documented here.
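Assuming the rest of the setup stays the same, the Procfile from the question could then pass that variable instead (Heroku's Java buildpack derives JDBC_DATABASE_URL from DATABASE_URL automatically):
web: java $JAVA_OPTS -Ddw.server.connector.port=$PORT -Ddw.database.url=$JDBC_DATABASE_URL -jar target/api-1.0-SNAPSHOT.jar server config.yml
Since the JDBC URL already carries the credentials as query parameters, the hardcoded database.user in config.yml should no longer be needed.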
I need to set up logging with Logstash in a Padrino project. I set up Logstash on a remote server and tried to integrate it with the Padrino project, and I found only one solution:
logger = LogStashLogger.new(type: :udp, host: host, port: 5044) if RACK_ENV == 'staging'
But it only works when I log explicitly with this code: logger.debug message: 'test', foo: 'bar'
Can I make all logs be sent to the remote server automatically?
Try this:
Padrino::Logger.logger = LogStashLogger.new(type: :udp, host: host, port: 5044)
I use this:
Padrino::Logger.logger = LogStashLogger.new(type: :udp, host: '172.16.x.x', port: 9999).extend(Padrino::Logger::Extensions)
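Putting the two answers together, a minimal sketch of where this could live, e.g. in config/boot.rb, keeping the staging-only condition from the question (host and port are placeholders):
# config/boot.rb
Padrino.before_load do
  if RACK_ENV == 'staging'
    # replace Padrino's logger so all log calls ship to Logstash over UDP
    Padrino::Logger.logger =
      LogStashLogger.new(type: :udp, host: '172.16.x.x', port: 9999)
                    .extend(Padrino::Logger::Extensions)
  end
end
After this, ordinary calls like logger.info 'something happened' anywhere in the app should go to the remote Logstash as well.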
I'm trying to migrate my Capistrano v2 script to the new v3.4 version.
All went well with the development stage: I have one EC2 instance, and the deploy completed without errors.
I'm having some trouble with my production script because there is a proxy (an EC2 instance) in front of my production servers (also EC2 instances). In my Capistrano v2 script everything worked; now I'm using cap-ec2 + Capistrano v3.4 to deploy my application only to tagged servers, but when I try it I get "Permission denied": my production servers refuse my key.
Maybe I've set something wrong in the proxy parameters in my script; can you please help me?
Thanks a lot!!
Here you can find the proxy parameters:
CAPISTRANO V2 (working)
set :gateway, "deploy#xxx.xxx.xxx.xxx"
set :ssh_options, { :forward_agent => true }
default_run_options[:pty] = true
ssh_options[:port] = "22"
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "id_rsa_deploy_myapp")]
CAPISTRANO V3 (not working)
require 'net/ssh/proxy/command'

set :ssh_options, {
  user: "deploy",
  keys: %w("~/.ssh/id_rsa_deploy_myapp"),
  auth_methods: %w(publickey),
  forward_agent: true,
  port: 22,
  proxy: Net::SSH::Proxy::Command.new('ssh xxx.xxx.xxx.xxx -W %h:%p')
}
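One detail stands out: %w() does not strip quotes, so keys: %w("~/.ssh/id_rsa_deploy_myapp") produces a string that includes the quote characters themselves rather than a valid key path. The v2 script also connected to the gateway as deploy@, while the v3 proxy command omits the user. A sketch with both points addressed (still using the original host placeholders):
require 'net/ssh/proxy/command'

set :ssh_options, {
  user: "deploy",
  keys: %w(~/.ssh/id_rsa_deploy_myapp),  # no quotes inside %w()
  auth_methods: %w(publickey),
  forward_agent: true,
  port: 22,
  # v2 connected to the gateway as deploy@, so include the user here too
  proxy: Net::SSH::Proxy::Command.new('ssh deploy@xxx.xxx.xxx.xxx -W %h:%p')
}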
require "juggernaut"
Juggernaut.publish("channel1", "Some data")
The code above works if Juggernaut is on the same server as the one running the code. What's the syntax to use a Juggernaut running on another server?
The syntax is the same. I have a juggernaut.yml config file in my config directory with settings for each environment. To run Juggernaut on port 8080 of localhost in development, I have:
development:
  host: 'localhost'
  port: 8080
To run it on a different host, just change that host setting (e.g. 'jugg.someserver.com').
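For example, a production entry pointing at a remote Juggernaut server might look like this (the hostname is a placeholder):
production:
  host: 'jugg.someserver.com'
  port: 8080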