I'm currently working on a Go web app which is a single application consisting of numerous packages, deployed in an individual Docker container. I have a Redis instance and a MySQL instance deployed and linked as separate containers. In order to get their addresses, I pull them from the environment variables set by Docker. I would like to implement an API gateway pattern wherein I have one service called 'api' which exposes the HTTP port (either 80 for HTTP or 443 for HTTPS) and proxies requests to the other services. The other services ideally do not expose any ports publicly, but rather are linked directly with the services they depend on.
So, api will be linked with all the services except for mysql and redis. Any service that needs to validate a user's session information will be linked with the user service, etc. My question is: how can I make my HTTP servers listen to HTTP requests on the ports that Docker links between my containers?
The simplest way to do this is Docker Compose. You simply define which services you want, and Docker Compose automatically links them in a dedicated network. Suppose you have your goapp, redis, and mysql instances and want to use nginx as your reverse proxy. Your docker-compose.yml file would look as follows:
services:
  redis:
    image: redis
  mysql:
    image: mysql
  goapp:
    image: myrepo/goapp
  nginx:
    image: nginx
    volumes:
      - /PATH/TO/MY/CONF/api.conf:/etc/nginx/conf.d/api.conf
    ports:
      - "443:443"
      - "80:80"
The advantage is that you can reference any service from other services by its name. So from your goapp you can reach your MySQL server under the hostname mysql, and so on. The only exposed ports (i.e. reachable from the host machine) are 443 and 80 of the nginx container.
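To answer the original question directly: inside its container, your Go server just listens on a fixed port of its own choosing; Docker's networking and DNS take care of the rest, so no environment-variable lookups are needed. A minimal sketch (the port, DSN credentials, and database name are placeholders):

package main

import (
    "database/sql"
    "fmt"
    "log"
    "net/http"

    _ "github.com/go-sql-driver/mysql"
)

func main() {
    // "mysql" is the Compose service name; Docker's DNS resolves it
    // to the mysql container on the shared network.
    db, err := sql.Open("mysql", "user:secret@tcp(mysql:3306)/appdb")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintln(w, "hello from goapp")
    })

    // Listen on a fixed internal port; only nginx publishes ports to the host.
    log.Fatal(http.ListenAndServe(":8080", nil))
}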
You can start the whole system with docker-compose up!
I'm using Kubernetes from Docker for Windows and I encountered a problem. I use a StatefulSet with the following part of its config:
spec:
  terminationGracePeriodSeconds: 300
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
In classic Kubernetes this spec exposes all ports from the pod on the node IP, so all of them can be accessed through it. I'm trying to develop against Kubernetes from Docker for Windows, but it seems that I cannot access the node by its IP (like in minikube or microk8s); instead, Docker for Windows maps localhost to the cluster. So here is the problem: this config exposes all ports on the node IP, which is for example 192.168.65.4, but I cannot access that from Windows. I can only access the cluster via localhost, and that only exposes the protocol-related port, for example 443. So when my service runs on port 10433, say, there is no access via localhost:10433, and there is also no access through the node IP in general. Is there any way to configure it to work like classic Kubernetes, where all ports are exposed? I know that a single port can be exposed through a NodePort, but it's important for me to expose all ports from the pod to imitate real Kubernetes behaviour.
In general, Docker host networking doesn't work on non-Linux platforms. It's accepted as a valid Docker option, but the "host" network isn't actually the physical system's network. This probably applies to the Kubernetes setup embedded in Docker Desktop as well.
It should be pretty rare to need host networking, and even more unusual in Kubernetes. Host networking disables the normal inter-container communication mechanisms. Kubernetes in particular has a complex network environment and there is usually more than one node; opting out of the network setup like this can make it all but impossible to reach your service, either from inside the cluster or outside.
Instead of host networking, you should use the normal Kubernetes networking setup. Pretty much every Deployment you create will need a matching Service, and if you set that Service to have type: NodePort then it will be accessible from outside the cluster (try both the assigned nodePort: number and the service's cluster-internal port:; it's not clear which port Docker Desktop actually uses).
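For illustration, a Service of this shape (all names here are hypothetical, reusing the 10433 port from the question) exposes a Deployment's pods on every node:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # must match the Deployment's pod labels
  ports:
    - port: 80           # cluster-internal port
      targetPort: 10433  # the container's port
      nodePort: 30433    # must fall in the default 30000-32767 range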
For some purposes, the easiest approach is to set up a local port-forward to the service:
kubectl port-forward deployment/some-deployment 8888:3000
will set up a port-forward from port 8888 on the local system to port 3000 on some pod managed by the named deployment. This forwards to a single pod (if you have multiple replicas, it targets only one of them), it's slower than a direct connection, and the port-forward will fail occasionally, but this is good enough for maintenance tasks like database migrations.
Regarding the goal to "imitate real kubernetes behaviour":
In the environment I normally work in, each cluster has dozens to hundreds of nodes. The nodes can't be directly accessed from outside the cluster. It's also reasonably common to configure a PodSecurityPolicy to disallow host networking, since it can be viewed as a security concern.
I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a Docker container. It accesses services that are deployed using docker-compose. This has been done by using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App2 uses docker-compose, so myNetwork is defined in its compose file and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and do a curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: Docker version 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to use a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to put 0.0.0.0 instead, to ensure that it listens for traffic from all interfaces.
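If you prefer plain kubectl over k9s, the same effect should be achievable with its --address flag (the service name and ports below are placeholders):

kubectl port-forward --address 0.0.0.0 svc/my-cluster-service 10443:443

As a side note, on Docker for Mac containers can usually reach the host under the special name host.docker.internal, which avoids hardcoding the host IP.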
Then I changed my containers from the inside, making the services point to my host's IP when trying to resolve the service names. Use whatever method best fits your case for this: since it's not a production environment, I just tried hardcoding my host IP manually to check whether connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host with different port-forwards. Name resolution is no longer needed.
With this configuration, your container's request will reach your host, where the port-forward routes it to the cluster. Connectivity is OK with this setup and the problem is solved.
I have got an application which has a few microservices, as shown below:
- python microservice - runs as a Docker container on ports 5001, 5002, 5003, 5004, 5005
- nodejs microservice - runs as a Docker container on port 4000
- mongodb - runs as a Docker container on port 27017
- graphql microservice - runs as a Docker container on port 4000
I need clarification on the options below.
OPTION 1:
Is it correct to configure nginx as a reverse proxy for each application, so that each microservice runs on port 80?
i.e.
* python microservice docker container + nginx
* nodejs microservice docker container + nginx
* mongodb docker container + nginx
* graphql microservice docker container + nginx
OPTION 2:
Or should I configure a single nginx instance and set up upstreams for the python application, nodejs application, mongodb, and graphql?
i.e. python + nodejs + mongodb + graphql + nginx
Note: In OPTION 2 only a single nginx instance is running, while in OPTION 1 each microservice has an nginx instance running. Which pattern is correct, OPTION 1 or OPTION 2?
Is it correct to containerize mongodb and expose it on port 80?
Question 1:
If you use only one nginx, you have a single point of failure. This means that if nginx fails for some reason, all the services will be down.
If you use several different nginx instances with different configurations, it will require more maintenance, incur technical debt, and consume more resources.
A good approach here is to have replicas (e.g., 2) of the same nginx server, which contains the routing rules for all the microservices.
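As a sketch, such a single routed nginx could look like this (the upstream hostnames and ports are taken from the question; the path prefixes are assumptions):

upstream python_app {
    server python:5001;
    server python:5002;
    server python:5003;
    server python:5004;
    server python:5005;
}
upstream node_app    { server nodejs:4000; }
upstream graphql_app { server graphql:4000; }

server {
    listen 80;

    location /python/  { proxy_pass http://python_app/; }
    location /node/    { proxy_pass http://node_app/; }
    location /graphql/ { proxy_pass http://graphql_app/; }
}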
Question 2:
There is no problem with deploying MongoDB in a container as long as you have some persistent storage. The port is not a problem at all.
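For example, in a Compose file that persistence boils down to a named volume mounted at MongoDB's data directory (a minimal sketch):

services:
  mongodb:
    image: mongo
    volumes:
      - mongo-data:/data/db   # /data/db is the image's default data directory
volumes:
  mongo-data: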
Consider the following environment:
- one docker container is keycloak
- another docker container is our web app that uses keycloak for authentication
The web app is a Spring Boot application with "keycloak-spring-boot-starter" applied. In application.properties:
keycloak.auth-server-url = http://localhost:8028/auth
A user accessing our web app will be redirected to Keycloak using the URL for the exposed port of the Keycloak docker container. Login completes without problems in Keycloak, and the user (browser) is redirected back to our web app. Now the authorization code needs to be exchanged for an access token. Hence, our web app (the Keycloak client) tries to connect to the same host and port configured in keycloak.auth-server-url. But this is a problem, because the web app resides in a Docker container and not on the host machine, so it should rather access http://keycloak:8080 or similar, where keycloak is the linked Keycloak docker container.
So the question is: How can I configure the keycloak client to apply different URLs for browser redirection and access token endpoints?
There used to be another property, auth-server-url-for-backend-requests, but it was removed by pull request #2506 as a solution to issue #2623 on Keycloak's JIRA. In the description of this issue you'll find the reasons why, and possible workarounds: it should be solved at the DNS level or by adding entries to the hosts file.
So there is not much you can do in the client configuration, unless you change the code and make your own version of the adapter, but there is something you can do at the Docker level. For this to work properly, first I suggest you use a fully qualified domain name instead of localhost for the public hostname, as you would in production anyway, e.g. keycloak.mydomain.com. You can use a fake one (not registered in DNS servers) if you just add it to the host's /etc/hosts file (or the Windows equivalent) as an alias next to localhost.
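For example, the relevant /etc/hosts line could simply be:

127.0.0.1   localhost keycloak.mydomain.com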
Then, if you are using Docker Compose, you can set aliases (alternative hostnames) for the keycloak service on the docker network to which the containers are connected (see doc: Compose File reference / Service configuration reference / networks / aliases). For example:
version: "3.7"
services:
keycloak:
image: jboss/keycloak
networks:
# Replace 'mynet' with whatever user-defined network you are using or want to use
mynet:
aliases:
- keycloak.mydomain.com
webapp:
image: "nginx:alpine"
networks:
- mynet
networks:
mynet:
If you are just using plain Docker, you can do the equivalent with the --alias flag of the docker network connect command (see doc: Container networking / IP address and hostname).
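A sketch of that plain-Docker equivalent (the network and container names are assumptions):

docker network create mynet
docker network connect --alias keycloak.mydomain.com mynet keycloak
docker network connect mynet webapp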
I created a custom HTTPS LoadBalancer (details) and I need my Kubernetes workload to be exposed with this LoadBalancer. For now, if I send a request to this endpoint I get a 502 error.
When I choose the Expose option in the Workload Console page, there are only TCP and UDP service types available, and a TCP LoadBalancer is created automatically.
How do I expose a Kubernetes Workload with an existing LoadBalancer? Or maybe I don't even need to do it, and requests don't work because my instances are "unhealthy"? (healthcheck)
You need to create a Kubernetes Ingress.
First, you need to expose the deployment from k8s; for HTTPS choose port 443, and the service type can be either LoadBalancer (external IP) or ClusterIP. (You can also test that by accessing the IP or by port-forwarding.)
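For instance, a ClusterIP Service for the workload could look like this (the names and the container port are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: your-service-name
spec:
  type: ClusterIP
  selector:
    app: your-app        # must match the workload's pod labels
  ports:
    - port: 443
      targetPort: 8443   # the container's HTTPS port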
Then you need to create the ingress.
Inside the yaml file, when choosing the backend, set the port and serviceName that were configured when exposing the deployment.
For example:
- path: /some-route
  backend:
    serviceName: your-service-name
    servicePort: 443
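Wrapped in a complete manifest, that could look like the following (names are illustrative; note that on current clusters the networking.k8s.io/v1 Ingress API uses a slightly different backend syntax):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - http:
        paths:
          - path: /some-route
            backend:
              serviceName: your-service-name
              servicePort: 443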
On GCP, when the ingress is created, a load balancer will be created for it. The backends and instance groups will be built automatically too.
Then, if you want to use the already-created load balancer, you just need to select the backend services from the LB that was created by the ingress and add them there.
Also, the load balancer will work only if the health checks pass. You need a route that returns a 200 response for that.