Keycloak and Spring Boot web app in dockerized environment

Consider the following environment:
- one docker container is Keycloak
- another docker container is our web app that uses Keycloak for authentication
The web app is a Spring Boot application with "keycloak-spring-boot-starter" applied. In application.properties:
keycloak.auth-server-url = http://localhost:8028/auth
A user accessing our web app will be redirected to Keycloak using the URL for the exposed port of the Keycloak docker container. Login is done without problems in Keycloak and the user (browser) is redirected to our web app again. Now the authorization code needs to be exchanged for an access token. Hence, our web app (the Keycloak client) tries to connect to the same host and port configured in keycloak.auth-server-url. But this is a problem, because the web app resides in a docker container and not on the host machine, so it should rather access http://keycloak:8080 or similar, where keycloak is the linked Keycloak docker container.
So the question is: How can I configure the keycloak client to apply different URLs for browser redirection and access token endpoints?

There used to be another property, auth-server-url-for-backend-requests, but it was removed by pull request #2506 as a solution to issue #2623 on Keycloak's JIRA. In the description of that issue you'll find the reasoning and the possible workarounds: this should be solved at the DNS level or by adding entries to the hosts file.
So there is not much you can do in the client configuration, unless you change the code and build your own version of the adapter, but there is something you can do at the Docker level. For this to work properly, I first suggest you use a fully qualified domain name instead of localhost for the public hostname, as you would in production anyway, e.g. keycloak.mydomain.com. You can use a fake one (not registered in DNS servers) as long as you add it to the host's /etc/hosts file (or the Windows equivalent) as an alias next to localhost.
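For example, on the host machine the /etc/hosts entry could simply list the made-up name as an alias next to localhost (keycloak.mydomain.com is just the placeholder used throughout this answer):

127.0.0.1   localhost keycloak.mydomain.com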
Then, if you are using Docker Compose, you can set aliases (alternative hostnames) for the keycloak service on the docker network to which the containers are connected (see doc: Compose File reference / Service configuration reference / networks / aliases). For example:
version: "3.7"
services:
  keycloak:
    image: jboss/keycloak
    networks:
      # Replace 'mynet' with whatever user-defined network you are using or want to use
      mynet:
        aliases:
          - keycloak.mydomain.com
  webapp:
    image: "nginx:alpine"
    networks:
      - mynet
networks:
  mynet:
If you are just using plain Docker, you can do the equivalent with the --alias flag of the docker network connect command (see doc: Container networking / IP address and hostname).
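A minimal sketch with plain Docker commands, assuming the two containers are named keycloak and webapp:

# Create a user-defined network and attach both containers to it,
# giving the Keycloak container the public hostname as an extra alias
docker network create mynet
docker network connect --alias keycloak.mydomain.com mynet keycloak
docker network connect mynet webapp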

Related

Fetch data from getServerSideProps of a Next.js app when another API server is also running on localhost

According to the Next.js documentation:
You should not use fetch() to call an API route in getServerSideProps. Instead, directly import the logic used inside your API route. You may need to slightly refactor your code for this approach.
Fetching from an external API is fine!
So we cannot use the Next.js built-in API routes in getStaticProps or getServerSideProps. But when I use another API service, based on the Laravel framework, as the backend server and fetch from it with Axios inside getServerSideProps, I get Error: connect ECONNREFUSED 127.0.0.1:8080.
It should also be noted that everything works fine when the API server is hosted outside our development machine. In other words, the problem only occurs in the development environment, when both the Laravel backend server and the Next.js frontend server are located on localhost.
Could you help me out finding a solution for this problem?
When using localhost or 127.0.0.1 inside a docker container, that points to that docker container only, not the host computer.
There are two pretty easy solutions:
1. Create a docker network, add both containers to it, and use the container name instead of the IP (https://www.tutorialworks.com/container-networking/).
2. Use host networking for this container: https://docs.docker.com/network/host/
So, as @tperamaki's answer already mentions: "When using localhost or 127.0.0.1 inside a docker container, that points to that docker container only, not the host computer."
You can use the IP of your machine on your local network, for example 192.168.0.10:8080 instead of 127.0.0.1:8080.
But you can also connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
In your case, just add the port where the other container is listening:
http://host.docker.internal:8080
In this section of the documentation, Networking features in Docker Desktop for Mac, they explain how to connect from a container to a service on the host. Note that a Mac is mentioned there, but I tried it on a Linux distro and it also works (in another answer it is also mentioned that it works for Windows).
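As a quick sanity check, you can curl the host from inside a throwaway container. Note that on plain Docker Engine on Linux the name host.docker.internal is not predefined, so you may need to map it yourself with --add-host and the special host-gateway value (Docker 20.10+); Docker Desktop provides the name automatically:

# Reach a service listening on port 8080 of the host machine from inside a container
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl http://host.docker.internal:8080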

How to add url prefix for server api with traefik?

I'm using Traefik v2 as a gateway. I have a frontend container running at https://some.site.com, powered by Traefik.
Now I have a microservice backend with multiple services, all of them listening on port 80. I want to serve the backend on the paths https://some.site.com/api/service1, https://some.site.com/api/service2, ...
I have tried traefik.http.routers.service1.rule=(Host(some.site.com) && PathPrefix(/api/service1)), but it did not work, and traefik.http.middlewares.add-api.addprefix.prefix=/api/service1 did not work either.
How can I implement this?
Can you post your services' docker-compose configuration?
If you use middlewares, you may need to attach them to the router for the service, like:
traefik.http.routers.service1.middlewares=add-api
traefik.http.middlewares.add-api.addprefix.prefix=/api/service1
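Put together as docker-compose labels, a hypothetical sketch for one of the services could look like the following. The image and service names are placeholders; note that since the backend services listen on port 80 and typically expect requests at the root path, this sketch strips the prefix with a stripprefix middleware rather than using the addprefix middleware shown above; adjust if your backend really expects the /api/service1 prefix:

services:
  service1:
    image: myrepo/service1   # placeholder image name
    labels:
      - "traefik.enable=true"
      # Route requests for some.site.com/api/service1 to this container
      - "traefik.http.routers.service1.rule=Host(`some.site.com`) && PathPrefix(`/api/service1`)"
      # Attach the middleware to the router, as suggested above
      - "traefik.http.routers.service1.middlewares=strip-api"
      - "traefik.http.middlewares.strip-api.stripprefix.prefixes=/api/service1"
      - "traefik.http.services.service1.loadbalancer.server.port=80"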

Make k8s cluster services available to local docker containers

I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a docker container. It accesses services that are deployed using docker-compose. This has been done by using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App2 uses docker-compose, so myNetwork is defined in it, and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and do a curl to see if the service name resolves, I get:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: Docker version 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to create a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to use 0.0.0.0 instead, to ensure that it listens for traffic from all interfaces.
Then I changed my containers from the inside, making the services point to my host's IP when trying to resolve the service names. Use the method that best fits your case for this: since it's not a production environment, I just hardcoded my host IP manually to check that connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host with different port-forwards. Name resolution is no longer needed.
With this configuration, your container request will reach your host, where the port-forward routes it to the cluster. Connectivity is OK with this setup and the problem is solved.
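For reference, the same port-forward can be created with plain kubectl instead of k9s; the service name and ports below are placeholders:

# Listen on all interfaces so that docker containers can reach the forward via the host's IP
kubectl port-forward --address 0.0.0.0 service/my-cluster-service 8080:80

The containers then reach the cluster service at http://<host-ip>:8080.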

Move application from homestead to docker

My application consists of three domains:
example.com
admin.example.com
partner.example.com
All of these domains are handled by the same Laravel app. Each domain has its own controllers and view. Models and other core functionalities are shared between all three domains.
Currently my local dev-environment is built with Homestead (based on Vagrant), where each local domain (example.test, admin.example.test and partner.example.test) points to the same directory (e.g. /home/vagrant/app/public).
Because of deployment problems regarding different versions of OS, NPM, PHP, etc. I want to move to docker. I've read a lot of articles about multiple domains or apps with docker. Best practice seems to be to set up an Nginx reverse proxy which redirects all incoming requests to the desired app. Unfortunately, I haven't found examples for my case where all domains point to the same application.
If possible I would avoid having the same repository cloned three times for each docker container running one specific part of the app.
So what would be the best approach to set up a docker environment?
I created a simple gist for you to look at, showing how I would do it:
https://gist.github.com/karlisabele/f7d91594c004e227e504473ce2c60508
The nginx config file is based on the Laravel documentation (https://laravel.com/docs/5.8/deployment#nginx). Of course, in production you would also want to handle SSL and map port 443 as well, but this should serve as a POC for you.
Notice that in the nginx configuration I use the php-fpm service name to pass the request to the php-fpm container. In Docker, service names can be used as hostnames for the corresponding services, so the line fastcgi_pass php-fpm:9000; means that you are passing the request to port 9000 of the php-fpm container (the default port the fpm image listens on).
Basically, what you want to do is simply define in nginx that all three of your domains are handled by the same server configuration. Then nginx simply passes the request to php-fpm to actually process it.
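A minimal sketch of what that nginx server block could look like; the paths and fastcgi parameters are assumptions based on the standard Laravel nginx setup, and the gist linked above has the full version:

server {
    listen 80;
    # All three domains are served by the same Laravel application
    server_name example.test admin.example.test partner.example.test;
    root /var/www/html/public;

    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        # 'php-fpm' is the compose service name, resolvable inside the Docker network
        fastcgi_pass php-fpm:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
    }
}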
To test, you can just copy the two files from the gist into your project directory, replace YOUR_PROJECT_FOLDER in the docker-compose.yml file with the actual location of your project (it can be simply .:/var/www/html if you place the docker-compose.yml in the root of your project), then run docker-compose up -d. Add the domains to your hosts file (/etc/hosts on Linux/Mac) and you should be able to visit example.test and see your site.
Note: Depending on where your database is located, you might need to change its host if it's localhost at the moment, because the connection will be made from the php-fpm container, which of course does not have its own MySQL server running.

Communication between linked docker containers over http for api gateway

I'm currently working on a golang web app which is currently one application consisting of numerous packages and is deployed in an individual docker container. I have a redis instance and a mysql instance deployed and linked as separate containers. In order to get their addresses, I pull them from the environment variables set by docker. I would like to implement an api gateway pattern wherein I have one service which exposes the HTTP port (either 80 for http or 443 for https) called 'api' which proxies requests to other services. The other services ideally do not expose any ports publicly but rather are linked directly with the services they depend on.
So, api will be linked with all the services except for mysql and redis. Any service that needs to validate a user's session information will be linked with the user service, etc. My question is: how can I make my HTTP servers listen for HTTP requests on the ports that Docker links between my containers?
The simplest way to do this is Docker Compose. You simply define which services you want, and Docker Compose automatically links them in a dedicated network. Suppose you have your goapp, redis, and mysql instances and want to use nginx as your reverse proxy. Your docker-compose.yml file looks as follows:
services:
  redis:
    image: redis
  mysql:
    image: mysql
  goapp:
    image: myrepo/goapp
  nginx:
    image: nginx
    volumes:
      - /PATH/TO/MY/CONF/api.conf:/etc/nginx/conf.d/api.conf
    ports:
      - "443:443"
      - "80:80"
The advantage is that you can reference any service from other services by its name. So from your goapp you can reach your MySQL server under the hostname mysql, and so on. The only exposed ports (i.e. reachable from the host machine) are 443 and 80 of the nginx container.
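As an illustration, the mounted api.conf could contain a reverse-proxy block along these lines; the upstream port 8080 is an assumption about what the Go app listens on inside its container:

server {
    listen 80;

    location / {
        # 'goapp' resolves to the Go application container on the compose network
        proxy_pass http://goapp:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}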
You can start the whole system with docker-compose up!
