How to attach a remote debugger to Keycloak in versions > 8

I recently upgraded Keycloak to version 9, and when running in Docker, I'm having trouble attaching a remote debugger. I suspect this has to do with Keycloak's underlying upgrade to Java 9+.
The error I get is:
handshake failed - connection prematurally closed
I have my ports mapped correctly within Docker (I can run Keycloak version 7 and it attaches just fine).

The approach depends on whether you're using standalone.sh (or presumably standalone.bat) or a Docker image.
If you're using standalone.sh, you can use the --debug option, documented in standalone.sh -h:
standalone.sh --debug '*:8000'
(the * is to allow access from any host. Plain --debug 8000 will allow access only from localhost)
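Under the hood, --debug essentially just enables the standard JDWP agent. A rough sketch of the equivalent manual invocation (my reading of the script, so treat it as an approximation rather than what every WildFly version does; note it replaces the script's default JAVA_OPTS):
JAVA_OPTS="$JAVA_OPTS -agentlib:jdwp=transport=dt_socket,address=*:8000,server=y,suspend=n" ./standalone.sh
On Java 9+, the host part of address= ('*' here) is what permits connections from outside localhost.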
For Docker images, this will be the documented approach from version 12 on, and it already works in Keycloak 11.0.2:
$ git diff
diff --git a/docker-compose/keycloak-standalone/docker-compose.yml b/docker-compose/keycloak-standalone/docker-compose.yml
index fcf3a52..93b7209 100644
--- a/docker-compose/keycloak-standalone/docker-compose.yml
+++ b/docker-compose/keycloak-standalone/docker-compose.yml
@@ -11,11 +11,14 @@ services:
     environment:
       KEYCLOAK_USER: admin
       KEYCLOAK_PASSWORD: admin
+      DEBUG: "true"
+      DEBUG_PORT: "*:8000"
     ports:
       - 8080:8080
+      - 8000:8000
     volumes:
       - data:/opt/jboss/keycloak/standalone/data
(Again, the * is to allow access from any host.)
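Once the container is up with port 8000 published, you can sanity-check the debug socket from the host with jdb before wiring up an IDE (assuming a JDK is installed on the host):
jdb -attach localhost:8000
If the handshake succeeds you get a jdb prompt instead of the "connection prematurally closed" error.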

As it turns out, Java 9 introduced a security enhancement for debugging: the JDWP agent listens on localhost only unless you bind it to a host explicitly. Information here: https://stackoverflow.com/a/60090750/2117355
In my Keycloak docker-compose service definition, I was able to add under environment:
DEBUG_PORT: "*:8787"
And that fixed the problem. I'm now able to debug.
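For reference, a minimal sketch of the relevant compose fragment (service and image names are illustrative; DEBUG/DEBUG_PORT are the same variables used above, and the debug port must be published too):
keycloak:
  image: jboss/keycloak
  environment:
    DEBUG: "true"
    DEBUG_PORT: "*:8787"
  ports:
    - 8080:8080
    - 8787:8787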

For Keycloak version 7
I'm using this command to run the Docker container with debugging enabled on port 1234:
docker run -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin \
  -e JAVA_OPTS="-server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m \
    -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman \
    -Djava.awt.headless=true \
    -agentlib:jdwp=transport=dt_socket,address=1234,server=y,suspend=n" \
  -p 8080:8080 -p 1234:1234 jboss/keycloak:7.0.0
Connect to it from IntelliJ using a Remote configuration with these agent settings:
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=1234
Note: the image's default JAVA_OPTS is shown below; I preserved it and appended the -agentlib debug flag in the command above.
-server -Xms64m -Xmx512m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m
-Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman
-Djava.awt.headless=true
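To confirm the agent actually started, look for the standard JDWP banner in the container logs (container name/id is whatever you assigned):
docker logs <container id> 2>&1 | grep "Listening for transport"
# expected: Listening for transport dt_socket at address: 1234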

You can also bake the debug parameters into your own image using a Dockerfile.
Dockerfile:
FROM jboss/keycloak:latest
ENV DEBUG true
ENV DEBUG_PORT *:8787
EXPOSE 8080 8443 9990 8787
ENTRYPOINT ${JBOSS_HOME}/../tools/docker-entrypoint.sh
Console:
docker build -t local/debug-keycloack .
docker run -p 8080:8080 -p 8443:8443 -p 9990:9990 -p 8787:8787 --name debug-keycloack local/debug-keycloack
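Before attaching an IDE, you can check that the debug port is actually reachable from the host (assuming netcat is available):
nc -vz localhost 8787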

Related

Microservice java property not set in application.properties

I have the following line in the micros1-mvc microservice's application.properties: eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER}
I execute the microservice inside the container with:
sudo docker run -p 8081:8081 --network mynetw --env JAVA_OPTS="-DEUREKA_SERVER=http://eurekaserver:8761/eureka" micros1-mvc
And when the microservice tries to connect with Eureka it says:
DiscoveryClient : DiscoveryClient_SERVICEASERVICE/1754e70517a8:serviceaservice:8081 - was unable to refresh its cache! This periodic background refresh will be retried in 30 seconds. status = There is no known eureka server; cluster server list is empty stacktrace =
com.netflix.discovery.shared.transport.TransportException: There is no known eureka server; cluster server list is empty
    at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:108)
    at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
    at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator$6.execute(EurekaHttpClientDecorator.java:137)
    at com.netflix.discovery.shared.transport.decorator.SessionedEurekaHttpClient.execute(SessionedEurekaHttpClient.java:77)
    at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134)
    at com.netflix.discovery.DiscoveryClient.getAndStoreFullRegistry(DiscoveryClient.java:1101)
    at com.netflix.discovery.DiscoveryClient.fetchRegistry(DiscoveryClient.java:1014)
    at com.netflix.discovery.DiscoveryClient.refreshRegistry(DiscoveryClient.java:1531)
It looks like the microservice's properties file doesn't receive the value specified in the docker run command.
After some searching, I came across the fact that JAVA_OPTS is specific to Catalina (Tomcat): looking in the bin folder of a Tomcat install, you'll find a shell script that handles passing JAVA_OPTS into the exec lines.
A Dockerfile like:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","${JAVA_OPTS}","-jar","/app.jar"]
With:
docker run -p 9000:9000 -e JAVA_OPTS=-Dserver.port=9000 myorg/myapp
fails. This is because the ${} substitution requires a shell: the exec form does not use a shell to launch the process, so the options are not applied. You can get around that by moving the entry point to a script or by explicitly creating a shell in the entry point. The following example shows how to create a shell in the entry point:
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]
You can then launch this app by running the following command:
docker run -p 8080:8080 -e "JAVA_OPTS=-Ddebug -Xmx128m" myorg/myapp
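The other workaround mentioned above, moving the entry point to a script, would look roughly like this (a sketch; run.sh is a hypothetical name):
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} app.jar
COPY run.sh /run.sh
RUN chmod +x /run.sh
ENTRYPOINT ["/run.sh"]
with run.sh containing:
#!/bin/sh
# exec replaces the shell with the java process, so docker stop signals reach the JVM
exec java ${JAVA_OPTS} -jar /app.jar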

How to run docker with keycloak image as a daemon in prod environment

I am using Docker to run my Keycloak server in an AWS production environment. The problem is that Keycloak uses WildFly, which runs continuously in the foreground, so I cannot close the shell. I am trying to find a way to run the Docker container as a daemon.
The command I use to run docker
docker run -p 8080:8080 jboss/keycloak
Just use Docker's detach option, -d:
docker run -p 8080:8080 -d jboss/keycloak
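The container then keeps running in the background; you can still get at its output or open a shell when needed:
docker logs -f <container id>    # follow the server log
docker exec -it <container id> bash    # open a shell inside the container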

Env variables visible in running container but not interpolated in a bash script

I set an env variable in docker-compose, like so:
cloud:
  build:
    context: foobar/.
  ports:
    - "5000:5000"
  depends_on:
    - redis
    - rabbitmq
    - postgresql
  links:
    - redis
    - rabbitmq
    - postgresql
  environment:
    - RABBITMQ_HOST=rabbitmq
And I can see that it's listed in the running container and I can echo it just fine.
WORKON_HOME=/opt/virtualenvs
VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHLVL=1
HOME=/root
no_proxy=*.local, 169.254/16
LESSOPEN=| /usr/bin/lesspipe %s
RABBITMQ_HOST=rabbitmq
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env
But when I try to use it in a configuration file in the running container, like so:
amqp://guest:guest@${RABBITMQ_HOST}
Then I run docker-compose up, and the app just exits because it can't find the RABBITMQ_HOST variable? What gives?
I'm using envdir like this in a bash script to start the Gunicorn server:
envdir /apps/foobar/.envdir gunicorn -w 2 -b 0.0.0.0:5000 dispatch:app --reload
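For what it's worth, envdir only overrides variables that exist as files in the given directory, so one sanity check is to compare what compose injected into the container with what the .envdir directory defines:
ls /apps/foobar/.envdir
docker exec -it <container id> env | grep RABBITMQ_HOST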
Update:
I'm using Docker Enterprise Edition (EE) on a Mac (El Capitan).
I'm using Flask (Python 3) with the Gunicorn application server (Dockerfile).

Passing env variables to Docker Spring Boot

I have a Spring Boot application, and its Dockerfile is as follows. I have application.properties files for different environments, like local/dev/qa/prod. When I run the application locally in the IDE, I pass -Dspring.profiles.active=local in the VM options so that it loads application-local.properties. For running as Docker containers, I build an image which contains all the application.properties files, i.e. it's the SAME Docker image for all environments.
When I run the image in an environment, I want to somehow make Spring Boot understand that it's the dev environment, so it loads application-dev.properties. I am using AWS ECS for managing the containers.
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ADD target/sample-test-sb-sample-app-1.0-exec.jar app.jar
EXPOSE 8080
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
The easiest (and probably the best) way to do it is via an environment variable in the Docker container:
SPRING_PROFILES_ACTIVE=dev,swagger
UPDATE:
In order to set environment variables for Docker, you do not need to modify the Dockerfile. Just build your Docker image and then run it with the env variables set (note that options must come before the image name):
docker run -e SPRING_PROFILES_ACTIVE='dev,swagger' -p 8080:8080 your-docker-container
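If you deploy with docker-compose instead of plain docker run, the same variable goes under environment (a sketch; the service name is illustrative):
services:
  app:
    image: your-docker-container
    ports:
      - 8080:8080
    environment:
      SPRING_PROFILES_ACTIVE: dev,swagger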
In the Dockerfile:
ENTRYPOINT [ "sh", "-c", "java -Dspring.profiles.active=${ENV} -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
And when running the container:
docker run --env ENV=local -d -p 8080:8080 <image id>
This way the ENV variable gets local as its value, and it is passed into the entrypoint when the container is brought up.
Update
You can also do it like this:
ENTRYPOINT ["sh", "-c", "java -Dspring.profiles.active=${ENV} -Djava.security.egd=file:/dev/./urandom -jar app.jar"]
(the shell form is needed here too; an exec form like ENTRYPOINT ["java","-jar",...] would not substitute ${ENV}, since no shell is involved)
and when running the docker image:
docker run --env ENV=local -d -p 8080:8080 <image id>
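Either way, you can verify which profile was actually activated from the standard Spring Boot startup log:
docker logs <container id> | grep "profiles are active"
# e.g.: The following profiles are active: local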

Elasticsearch 5.1 and Docker - How to get networking configured properly to reach Elasticsearch from the host

Using Elasticsearch:latest (v5.1) from the Docker public repo, I created my own image containing Cerebro. I am now attempting to get Elasticsearch networking properly configured so that I can connect to Elasticsearch from Cerebro. Cerebro, running inside the container I created, renders properly on my host at http://localhost:9000.
After committing my image, I created my Docker container with the following:
sudo docker run -d -it --privileged --name es5.1 --restart=always \
-p 9200:9200 \
-p 9300:9300 \
-p 9000:9000 \
-v ~/elasticsearch/5.1/config:/usr/share/elasticsearch/config \
-v ~/elasticsearch/5.1/data:/usr/share/elasticsearch/data \
-v ~/elasticsearch/5.1/cerebro/conf:/root/cerebro-0.4.2/conf \
elasticsearch_cerebro:5.1 \
/root/cerebro-0.4.2/bin/cerebro
My elasticsearch.yml in ~/elasticsearch/5.1/config currently has the following network and discovery entries:
network.publish_host: 192.168.1.26
discovery.zen.ping.unicast.hosts: ["192.168.1.26:9300"]
I have also tried 0.0.0.0, and leaving the values unspecified so they default to the loopback. In addition, I've tried specifying network.host with a combination of values. No matter how I set this, elasticsearch logs the following on startup:
[info] play.api.Play - Application started (Prod)
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
[error] p.c.s.n.PlayDefaultUpstreamHandler - Cannot invoke the action
java.net.ConnectException: Connection refused: localhost/127.0.0.1:9200
… cascading errors because of this connection refusal...
No matter how I set the elasticsearch.yml networking, the error message on Elasticsearch startup does not change. I verified that the elasticsearch.yml is being picked up inside the Docker container. Please let me know where I'm going wrong with this configuration.
Well, it looks like I'm answering my own question after a day's worth of battle with this! The issue was that elasticsearch wasn't started inside the container. To determine this, I got a terminal into the container:
docker exec -it es5.1 bash
Once in the container, I checked service status:
service elasticsearch status
To this, the OS responded with:
[FAIL] elasticsearch is not running ... failed!
I started it with:
service elasticsearch start
I'll add a single script, called from docker run, that starts both elasticsearch and cerebro, and that should do the trick (a sketch is below). However, I would still like to hear if there is a better way to configure this.
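A sketch of what that single script could look like, reusing the exact commands from above (start-all.sh is a hypothetical name; it would replace /root/cerebro-0.4.2/bin/cerebro in the docker run command):
#!/bin/sh
# start elasticsearch in the background, then run cerebro in the foreground
service elasticsearch start
exec /root/cerebro-0.4.2/bin/cerebro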
I made a GitHub docker-compose repo that will spin up an elasticsearch, kibana, logstash, cerebro cluster:
https://github.com/Shuliyey/elkc
========================================================================
On the other hand, in regard to the actual problem (elasticsearch_cerebro not working): to get elasticsearch and cerebro working in one docker container, you need to use supervisor (https://docs.docker.com/engine/admin/using_supervisord/). I will update with more details.
No need to use supervisor at all. A very simple way to solve this is to use docker-compose and bundle Elasticsearch and Cerebro together, like this:
docker-compose.yml:
version: '2'
services:
  elasticsearch:
    build: elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1500m -Xms1500m"
    networks:
      - elk
  cerebro:
    build: cerebro
    volumes:
      - ./cerebro/config/application.conf:/opt/cerebro/conf/application.conf
    ports:
      - "9000:9000"
    networks:
      - elk
networks:
  elk:
    driver: bridge
elasticsearch/Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
cerebro/Dockerfile:
FROM yannart/cerebro
Then you run docker-compose build and docker-compose up. When everything is started, you can access ES at http://localhost:9200 and Cerebro at http://localhost:9000.
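As a quick smoke test once both containers are up (standard Elasticsearch behavior, no auth assumed):
curl http://localhost:9200    # should return the cluster name/version JSON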
