I am trying to deploy my Dropwizard project to Heroku.
I have added a Procfile and a Postgres DB to the Heroku app.
My Procfile reads:
web: java $JAVA_OPTS -Ddw.server.connector.port=$PORT -Ddw.database.url=$DATABASE_URL -jar target/api-1.0-SNAPSHOT.jar server config.yml
When I try to deploy I receive the following error/crash message in the logs.
org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator: HHH000342: Could not obtain connection to query metadata : Driver:org.postgresql.Driver#53d13cd4 returned null for URL:postgres://fdeqzbddzbefaz:138912590e989b1b8fab5d169a1aea291f04b2d3bc040b1bbf6642a9207a5355#ec2-54-235-101-91.compute-1.amazonaws.com:5432/d67crr4pvqrfee
Unable to create requested service [org.hibernate.engine.jdbc.env.spi.JdbcEnvironment]
State changed from starting to crashed
Process exited with status 1
My config.yml reads
database:
  # the name of your JDBC driver
  driverClass: org.postgresql.Driver
  # the username
  user: localusername
  # the JDBC URL
  url: jdbc:postgresql://localhost/dbname

# use the simple server factory if you only want to run on a single port
# HEROKU NOTE - the port gets overridden with the Heroku $PORT in the Procfile
server:
  type: simple
  applicationContextPath: /
  #adminContextPath: /admin # If you plan to use an admin path, you'll need to also use non-root app path
  connector:
    type: http
    port: 8080
Does anyone have any troubleshooting ideas?
The DATABASE_URL env var is not directly compatible with the JDBC URL format. See the docs. Specifically:
The DATABASE_URL for the Heroku Postgres add-on follows this convention:
postgres://username:password@host:port/dbname
However, the Postgres JDBC driver uses the following convention:
jdbc:postgresql://host:port/dbname?user=username&password=password
Instead, try using JDBC_DATABASE_URL as documented here.
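For example, a minimal sketch of the Procfile from the question, switched to the JDBC-formatted variable (this assumes the Heroku JVM buildpack exposes JDBC_DATABASE_URL at runtime, as described in the linked docs):
web: java $JAVA_OPTS -Ddw.server.connector.port=$PORT -Ddw.database.url=$JDBC_DATABASE_URL -jar target/api-1.0-SNAPSHOT.jar server config.yml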
I'm having some issues with configuring Keycloak to run on our server.
Locally it works great, but on our test environment, after login, any call using the received access token fails with "Invalid token issuer. Expected "http://keycloak:8080/auth/realms/{realmName}" but was "http://{our-test-server-IP}/auth/realms/{realmName}"".
So basically, our backend connects to the internal Keycloak docker image, but when a request comes in it expects the issuer to be the configured external IP. Even though the issuers are basically the same service, Keycloak sees them as different and responds with a 401.
docker-compose.yml:
keycloak:
  image: jboss/keycloak:12.0.4
  restart: on-failure
  environment:
    PROXY_ADDRESS_FORWARDING: "true"
    KEYCLOAK_USER: admin
    KEYCLOAK_PASSWORD: password
    KEYCLOAK_LOGLEVEL: DEBUG
    KEYCLOAK_IMPORT: /etc/settings/realm.json -Dkeycloak.profile.feature.upload_scripts=enabled
    TZ: Europe/Bucharest
    DB_VENDOR: POSTGRES
    DB_ADDR: db
    DB_DATABASE: user
    DB_SCHEMA: keycloak
    DB_USER: user
    DB_PASSWORD: user
  ports:
    - 8090:8080
  volumes:
    - ./settings:/etc/settings
  depends_on:
    - db
Spring application.yml:
keycloak:
  cors: true
  realm: Realm-Name
  resource: back-office
  auth-server-url: http://keycloak:8080/auth/
  public-client: false
  credentials:
    secret: 8401b642-0ae9-4dc8-87a6-2f494b388a49
keycloak-client:
  id: bcc94ed5-0099-40e0-b460-572eba3f0214
If we change the backend property auth-server-url to connect to the exposed endpoint instead of the internal docker container, we get a timeout; it seems like it doesn't want to connect at all. I understand that the main issue is that we are running both the Keycloak instance and our backend application on the same server, but I don't see why it shouldn't work and why they cannot connect to each other.
We tried setting FRONTEND_URL in the environment when running the container and in the Keycloak admin console, but nothing changed. We've also tried setting forceBackendUrlToFrontendUrl to true in the standalone.xml/standalone-ha.xml files (./jboss-cli.sh --connect "/subsystem=keycloak-server/spi=hostname/provider=default:write-attribute(name=properties.forceBackendUrlToFrontendUrl, value=true)") and reloaded the Keycloak instance inside the docker container using ./jboss-cli.sh --connect command=:reload, but nothing changed.
I understand that by setting FRONTEND_URL all tokens should be issued with the frontend URL as the issuer and we would not have this issue, but I've tried everything I've found so far on this topic and nothing seems to change. How can I make sure that the issuer in the access token and the one the backend service expects are the same (hopefully the frontend)? And how can I configure this? Is there some property I'm missing, or was there something I did wrong while configuring it?
Might be related to this answer here: https://stackoverflow.com/a/64095482/13494285
You could set the Host header value to the expected URL.
To override this behavior, you might try setting the KEYCLOAK_HOSTNAME environment variable to the expected hostname.
It seems the documentation might not be up to date (it suggests the KEYCLOAK_FRONTEND_URL variable here), but KEYCLOAK_HOSTNAME is what sets the fixed hostname provider, as seen here.
In this context, KEYCLOAK_HTTP_PORT is also required, to set the port to 8080.
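As a rough sketch, the environment block of the docker-compose service above might gain these two entries (the hostname value is a placeholder for your actual external hostname):
  environment:
    # ... existing variables ...
    KEYCLOAK_HOSTNAME: your-external-hostname.example.com   # placeholder: your real external host
    KEYCLOAK_HTTP_PORT: "8080"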
Setting KEYCLOAK_HOSTNAME to the external hostname (as defined in KEYCLOAK_FRONTEND_URL) definitely worked in my case (an Eclipse Che installation on a vanilla Kubernetes cluster).
I want to deploy my Spring Boot app, packaged in a Docker container, to GCP App Engine.
When I run the Docker container locally, I can access the web site.
When I deploy the container to GCP App Engine with the command gcloud app deploy,
I get an HTTP error: 502 Bad Gateway (nginx).
The Dockerfile looks like this:
FROM adoptopenjdk/openjdk14
MAINTAINER steinko
VOLUME /tmp
COPY build/libs/atm.jar ./
ENTRYPOINT ["java"]
CMD ["-jar", "/atm.jar"]
EXPOSE 4001
The app.yaml file looks like this:
runtime: custom
env: flex
handlers:
- url: /.*
  script: this field is required, but ignored
service: atm
How do I fix this error?
According to this document, the App Engine front end routes incoming requests to the appropriate module on port 8080, so you must make sure your application code is listening on 8080. Also, it looks like the FROM image should be one of Google's base images, also listed in that document.
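As a rough sketch (not a verified fix), the Dockerfile above could be adjusted so the app listens on 8080; this assumes the Spring Boot app honors the server.port system property:
FROM adoptopenjdk/openjdk14
VOLUME /tmp
COPY build/libs/atm.jar ./
# App Engine flex routes traffic to port 8080, so make the app listen there
ENTRYPOINT ["java", "-Dserver.port=8080", "-jar", "/atm.jar"]
EXPOSE 8080
Per the linked document, you may also want to swap the FROM line for one of Google's App Engine base images.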
I'm trying to build a sample microservice app using this tutorial: Tutorial (jhipster v5.2.1). So I've created a gateway and an armory microservice, and started consul using this command:
docker-compose -f src/main/docker/consul.yml up
Then, from the armory folder, I ran this command:
./gradlew
I got this error:
2018-09-03 13:20:11.235 WARN 7224 --- [ restartedMain] o.s.c.c.c.ConsulPropertySourceLocator : Unable to load consul config from config/armory-swagger/
com.ecwid.consul.transport.TransportException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:8600 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect
Could you please help me?
If you are using Docker Toolbox, you have to replace localhost with the IP of your Docker Machine VM. You will have to adjust the bootstrap.yml properties to point to this address.
You should also be able to apply this trick: https://www.jhipster.tech/tips/020_tip_using_docker_containers_as_localhost_on_mac_and_windows.html
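For illustration, a minimal sketch of the relevant bootstrap.yml section, assuming a typical Docker Toolbox VM IP of 192.168.99.100 (a placeholder; check yours with docker-machine ip):
spring:
  cloud:
    consul:
      host: 192.168.99.100   # placeholder: the IP of your Docker Machine VM
      port: 8500             # consul's HTTP API port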
I just changed fail-fast to false in bootstrap-prod.yml
You can disable Spring Cloud Config this way.
fail-fast: false
Otherwise you have to provide proper configuration as stated above.
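For reference, a sketch of where that flag sits in bootstrap-prod.yml (assuming the standard Spring Cloud Consul config property path):
spring:
  cloud:
    consul:
      config:
        fail-fast: false   # don't fail startup if consul config is unreachable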
Or you can run this command if you have already installed consul on your development machine.
consul agent -dev
I have an RDS instance that I can connect to through the command prompt as follows:
mysql --host=rds-instance-name.ciplxctxy9hy.us-west-1.rds.amazonaws.com --user=username --password=password database
This works well. When I attempt to eb deploy to the AWS Elastic Beanstalk instance, I get this error:
INFO: Deploying new version to instance(s).
ERROR: [Instance: i-08388abd] Command failed on instance. Return code: 1 Output:
(TRUNCATED)...ash -c 'leader_only bundle exec rake db:migrate' webapp
rake aborted!
Mysql2::Error: Can't connect to MySQL server on 'rds-instance-name.ciplxctxy9hy
.us-west-1.rds.amazonaws.com' (4)
My database.yml file in my ruby instance has the following:
production:
  adapter: mysql2
  encoding: utf8
  database: database
  username: username
  password: password
  host: rds-instance-name.ciplxctxy9hy.us-west-1.rds.amazonaws.com
When I locally start the ruby server, I get no errors:
C:/Development/Ruby/environment-name/bin/rails server -b 0.0.0.0 -p 3000 -e production
What do I need to change on the RDS instance or in database.yml to allow me to deploy?
I would look at the security group on the RDS instance and make sure it's allowing incoming connections from your EC2 instance.
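As an illustrative sketch only (the group IDs below are placeholders), an ingress rule allowing the Elastic Beanstalk instances' security group to reach MySQL on the RDS instance could be added with the AWS CLI:
# placeholder IDs: sg-rds000000 is the RDS instance's security group,
# sg-eb0000000 is the security group attached to the Elastic Beanstalk EC2 instances
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds000000 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-eb0000000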
How can I connect to Heroku Postgres from another web application running on a different cloud service?
I have a Heroku Postgres database. I have another web application running on EC2. When my web app on EC2 tries to connect to the Heroku database, it fails.
I tried telnet from the EC2 instance to Heroku Postgres on port 5432; it fails as well.
Can anyone please provide a pointer?
You can use the heroku pg:credentials command to get the credentials for your database. For example:
$ heroku pg:credentials DATABASE_URL
Connection info string:
"dbname=abcdefghijklmn host=ec2-1-2-3-4.compute-1.amazonaws.com port=5432 user=abcdefghijklmn password=abcdefghijklmnopqrstuvwxyz sslmode=require"
Connection URL:
postgres://abcdefghijklmn:abcdefghijklmnopqrstuvwxyz@ec2-1-2-3-4.compute-1.amazonaws.com:5432/abcdefghijklmn
Then, you can use Connection URL or decomposed credentials in your other application. Pay attention to sslmode=require. Connections to Heroku Postgres require you to use SSL, which is an "advanced" option in some of the PostgreSQL clients (e.g. Navicat). For example:
$ psql postgres://abcdefghijklmn:abcdefghijklmnopqrstuvwxyz@ec2-1-2-3-4.compute-1.amazonaws.com:5432/abcdefghijklmn
More information here: https://devcenter.heroku.com/articles/heroku-postgresql#external-connections-ingress
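If the application on EC2 connects through JDBC rather than psql, the same example credentials map to roughly this URL form (values reused from the sample output above; sslmode=require is supported by recent PostgreSQL JDBC drivers):
jdbc:postgresql://ec2-1-2-3-4.compute-1.amazonaws.com:5432/abcdefghijklmn?user=abcdefghijklmn&password=abcdefghijklmnopqrstuvwxyz&sslmode=require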