Microservice API gateway / reverse proxy design pattern

I have an application with a few microservices, as shown below:
- Python microservice - runs as a Docker container on ports 5001, 5002, 5003, 5004, 5005
- Node.js microservice - runs as a Docker container on port 4000
- MongoDB - runs as a Docker container on port 27017
- GraphQL microservice - runs as a Docker container on port 4000
I need clarification on the two options below.
OPTION 1:
Is it correct to configure nginx as a reverse proxy for each application, so that each microservice runs on port 80?
i.e * python microservice docker container + nginx
* nodejs microservice docker container + nginx
* mongodb microservice docker container + nginx
* graphql microservice docker container + nginx
OPTION 2:
Or should I configure a single nginx instance and set up upstreams for the python, nodejs, mongodb and graphql applications?
ie python + nodejs + mongodb + graphql + nginx
Note: in OPTION 2 only a single nginx instance is running, while in OPTION 1 each microservice has its own nginx instance. Which pattern is correct, OPTION 1 or OPTION 2?
Is it correct to containerize MongoDB and expose it on port 80?

Question 1:
If you use only one nginx, you have a single point of failure: if nginx fails for some reason, all the services will be down.
If you use several different nginx instances with different configurations, it will require more maintenance and resources and adds technical debt.
A good approach here is to run replicas (e.g., 2) of the same nginx server, which contains the routing rules for all the microservices.
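For illustration, a minimal nginx sketch of OPTION 2; the upstream hostnames and routing paths are assumptions, not part of the question:

upstream python_app  { server python-service:5001; }   # one of the python ports
upstream node_app    { server node-service:4000; }
upstream graphql_app { server graphql-service:4000; }

server {
    listen 80;

    # Route by path prefix to each HTTP service
    location /python/  { proxy_pass http://python_app/; }
    location /node/    { proxy_pass http://node_app/; }
    location /graphql/ { proxy_pass http://graphql_app/; }
}

# MongoDB speaks its own wire protocol, not HTTP, so the applications
# usually connect to it directly on 27017 instead of going through nginx.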
Question 2:
There is no problem with deploying MongoDB in a container as long as you have some persistent storage. The port is not a problem at all.
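As a sketch of that, a docker-compose service with a named volume (the names are made up) keeps the data across container restarts; MongoDB can stay on its default port 27017 on the internal network:

services:
  mongodb:
    image: mongo
    volumes:
      # Named volume so the data in /data/db survives container restarts
      - mongo-data:/data/db
volumes:
  mongo-data: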

Related

How to add a URL prefix for a server API with traefik?

I'm using traefik v2 as a gateway. I have a frontend container running at host https://some.site.com, powered by traefik.
Now I have a micro-service backend with multiple services, all of them listening on port 80. I want to serve the backend at https://some.site.com/api/service1, https://some.site.com/api/service2 ...
I have tried traefik.http.routers.service1.rule=(Host(some.site.com) && PathPrefix(/api/service1)) but it didn't work, and traefik.http.middlewares.add-api.addprefix.prefix=/api/service1 didn't work either.
How can I implement this?
Can you post your services' docker-compose configuration?
If you use middlewares, you may need to attach the middleware to the router, like this:
traefik.http.routers.service1.middlewares=add-api
traefik.http.middlewares.add-api.addprefix.prefix=/api/service1
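A hedged sketch of those labels in docker-compose form (the image name is hypothetical; note the backticks inside the rule, which traefik v2 requires):

services:
  service1:
    image: myorg/service1
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.service1.rule=Host(`some.site.com`) && PathPrefix(`/api/service1`)"
      # Attach the middleware to the router, as suggested above
      - "traefik.http.routers.service1.middlewares=add-api"
      - "traefik.http.middlewares.add-api.addprefix.prefix=/api/service1"

Depending on which paths the backend services actually expect, a stripprefix middleware (removing /api/service1 before forwarding) may be what is needed instead of addprefix.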

Make k8s cluster services available to local docker containers

I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. You could say it's the inverse use case.
I have an app that is running in a docker container. It accesses services that are deployed using docker-compose. This was done using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App2 uses docker-compose, so myNetwork is defined in it and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to create a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost; set it to 0.0.0.0 instead to ensure that it listens for traffic on all interfaces.
Then I changed my containers from the inside, making the services point to my host's IP when resolving the service names. Use whichever method fits your case best for this: since it's not a production environment, I just hardcoded my host IP manually to check whether connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host with different port-forwards. Name resolution is no longer needed.
With this configuration, your container's request reaches your host, where the port-forward routes it to the cluster. Connectivity is OK with this setup and the problem is solved.
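For reference, the same port-forward can be created with kubectl; the service name and ports below are made up:

# Listen on all interfaces, not just localhost, so containers can reach it
kubectl port-forward --address 0.0.0.0 svc/my-cluster-service 8080:80

# From inside app1 on Docker for Mac, host.docker.internal resolves to the host
curl http://host.docker.internal:8080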

Issue in DB pooling using cx_oracle and Flask (Python)

We have developed an app using Angular 8, flask-restplus (0.13.0) (Python 3.7.4) and cx_Oracle (7.2.3).
The Angular app is deployed on NGINX on an Ubuntu server. We have created 3 microservices and deployed them on gunicorn using docker and kubernetes pods.
The production environment has 7 kubernetes pods per service.
In the kubernetes YAML file we have configured gunicorn to run with 4 threads using the command below:
["gunicorn"]
args: ["run_app:app","-b","0.0.0.0:8080","--threads=4","--access-logfile","-","--error-logfile","-"]
The session pool code is as follows:
dbsession_pool = cx_Oracle.SessionPool('xxxxx', 'xxxxx', 'xxxxx.xxxxx.com/xxxxxdb', min=5, max=50, increment=5, threaded = True)
All the services run fine for a while and then start returning 504 gateway timeout errors.
But if we use cx_Oracle.connect instead, it works fine. Our reason for using a session pool was to save the time spent connecting and disconnecting to the DB, thus improving performance.
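For context, typical cx_Oracle session pool usage acquires a connection per request and releases it afterwards; connections that are never released are one common way a pool runs dry and requests stall. A minimal sketch (credentials and DSN are placeholders):

import cx_Oracle

pool = cx_Oracle.SessionPool("user", "password", "host.example.com/exampledb",
                             min=5, max=50, increment=5, threaded=True)

def fetch_rows(query):
    # acquire() borrows a connection from the pool
    connection = pool.acquire()
    try:
        cursor = connection.cursor()
        cursor.execute(query)
        return cursor.fetchall()
    finally:
        # Always give the connection back, even if the query raised
        pool.release(connection)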

Keycloak and Spring Boot web app in dockerized environment

Consider the following environment:
one docker container is keycloak
another docker container is our web app that uses keycloak for authentication
The web app is a Spring Boot application with "keycloak-spring-boot-starter" applied. In application.properties:
keycloak.auth-server-url = http://localhost:8028/auth
A user accessing our web app will be redirected to keycloak using the URL for the exposed port of the keycloak docker container. Login is done without problems in keycloak and the user (browser) is redirected to our web app again. Now, the authorization code needs to be exchanged for an access token. Hence, our web app (keycloak client) tries to connect to the same host and port configured in keycloak.auth-server-url. But this is a problem because the web app resides in a docker container and not on the host machine, so it should rather access http://keycloak:8080 or something where keycloak is the linked keycloak docker container.
So the question is: How can I configure the keycloak client to apply different URLs for browser redirection and access token endpoints?
There used to be another property, auth-server-url-for-backend-requests, but it was removed by pull request #2506 as the solution to issue #2623 on Keycloak's JIRA. In the description of that issue you'll find the reasons why, and possible workarounds: it should be solved at the DNS level or by adding entries to the hosts file.
So there is not much you can do in the client configuration, unless you change the code and make your own version of the adapter. But there is something you can do at the Docker level. For this to work properly, first I suggest you use a fully qualified domain name instead of localhost for the public hostname, as you would in production anyway, e.g. keycloak.mydomain.com. You can use a fake one (not registered in any DNS server) if you just add it to the host's /etc/hosts file (or its Windows equivalent) as an alias next to localhost.
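For example, the hosts entry could look like this (the domain is made up):

127.0.0.1   localhost keycloak.mydomain.com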
Then, if you are using Docker Compose, you can set aliases (alternative hostnames) for the keycloak service on the docker network to which the containers are connected (see doc: Compose File reference / Service configuration reference / networks / aliases). For example:
version: "3.7"
services:
keycloak:
image: jboss/keycloak
networks:
# Replace 'mynet' with whatever user-defined network you are using or want to use
mynet:
aliases:
- keycloak.mydomain.com
webapp:
image: "nginx:alpine"
networks:
- mynet
networks:
mynet:
If you are just using plain Docker, you can do the equivalent with the --alias flag of the docker network connect command (see doc: Container networking / IP address and hostname).
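A sketch of the plain-Docker equivalent of the Compose file above (the container names are assumed):

docker network create mynet
docker network connect --alias keycloak.mydomain.com mynet keycloak
docker network connect mynet webapp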

Communication between linked docker containers over HTTP for an API gateway

I'm currently working on a golang web app, currently one application consisting of numerous packages, deployed in an individual docker container. I have a redis instance and a mysql instance deployed and linked as separate containers; to get their addresses, I pull them from the environment variables set by docker. I would like to implement an API gateway pattern in which one service called 'api' exposes the HTTP port (either 80 for http or 443 for https) and proxies requests to the other services. The other services ideally do not expose any ports publicly but rather are linked directly with the services they depend on.
So, api will be linked with all the services except for mysql and redis. Any service that needs to validate a user's session information will be linked with the user service, etc. My question is: how can I make my HTTP servers listen for HTTP requests on the ports that docker links between my containers?
The simplest way to do this is Docker Compose. You simply define which services you want, and Docker Compose automatically links them in a dedicated network. Suppose you have your goapp, redis and mysql instances and want to use nginx as your reverse proxy. Your docker-compose.yml file would look as follows:
services:
  redis:
    image: redis
  mysql:
    image: mysql
  goapp:
    image: myrepo/goapp
  nginx:
    image: nginx
    volumes:
      - /PATH/TO/MY/CONF/api.conf:/etc/nginx/conf.d/api.conf
    ports:
      - "443:443"
      - "80:80"
The advantage is that you can reference any service from another service by its name. So from your goapp you can reach your MySQL server under the hostname mysql, and so on. The only exposed ports (i.e. reachable from the host machine) are 443 and 80 of the nginx container.
You can start the whole system with docker-compose up!
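For completeness, the mounted api.conf could look roughly like this; the goapp port is an assumption:

server {
    listen 80;

    location / {
        # "goapp" resolves via Docker's internal DNS to the goapp service
        proxy_pass http://goapp:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}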
