Make k8s cluster services available to local docker containers - macOS

I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a docker container. It accesses services that are deployed using docker-compose. This was done by using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App 2 uses docker-compose, so myNetwork is defined in it and here I just run:
docker-compose up
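For reference, a minimal sketch of what app2's compose file might look like when it joins the pre-created network (the service and image names here are placeholders, not taken from my actual setup):

# Hypothetical docker-compose.yml for app2; it joins myNetwork as an external network.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  app2:
    image: app2-image        # placeholder image name
    networks:
      - myNetwork
networks:
  myNetwork:
    external: true           # the network created above with docker network create
EOF
docker-compose up -d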
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: Docker version 19.03.8 for Mac

I've found a way to solve the problem.
Instead of trying to use telepresence for the inverse use case, the solution is to create a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to set it to 0.0.0.0 instead so that it listens for traffic from all interfaces.
Then I changed my containers from the inside, making the services point to my host's IP instead of trying to resolve the service names. Use whichever method fits your case best; since it's not a production environment, I just hardcoded my host IP to check whether connectivity was achieved.
To point to a specific service in your cluster you need to use a different port for each one, since they are all mapped to your host through separate port-forwards. Name resolution is no longer needed.
With this configuration, your container's request reaches your host, where the port-forward routes it to the cluster. Connectivity works with this setup and the problem is solved.
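For completeness, the same forward can be created from the command line; k9s does the equivalent under the hood. The service name and ports below are assumptions, and host.docker.internal is the Docker Desktop for Mac alias that resolves to the host, which saves hardcoding the host IP:

# Forward local port 8080 to port 80 of the cluster service, listening on all interfaces.
kubectl port-forward --address 0.0.0.0 svc/my-cluster-service 8080:80

# From inside app1, point at the host instead of the cluster DNS name:
curl http://my-host-ip:8080               # placeholder for the hardcoded host IP described above
curl http://host.docker.internal:8080     # Docker Desktop for Mac alias for the host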

Related

docker desktop kubernetes - how to map ports with ClusterFirstWithHostNet

I'm using kubernetes from docker for windows and I encountered a problem. I use a statefulset with the following part of the config:
spec:
  terminationGracePeriodSeconds: 300
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
In classic kubernetes this spec exposes all ports from the pod on the node IP, so all of them can be accessed through it. I'm trying to develop it on kubernetes from docker for windows, but it seems that I cannot access the node by its IP (like in minikube or microk8s); instead, docker for windows maps localhost to the cluster. So here is the problem: this config exposes all ports on the node IP, which is for example 192.168.65.4, but I cannot access it from windows - I can only access the cluster via localhost, and that only exposes the protocol-related port, for example 443. So when my service runs on port 10433, for instance, there is no access from localhost:10433, but there is also no access in general through the node IP. Is there any way to configure it to work like classic kubernetes, where all ports are exposed? I know that a single port can be exposed through NodePort, but it's important for me to expose all ports from the pod to imitate real kubernetes behaviour.
In general, Docker host networking doesn't work on non-Linux platforms. It's accepted as a valid Docker option, but the "host" network isn't actually the physical system's network. This probably applies to the Kubernetes setup embedded in Docker Desktop as well.
It should be pretty rare to need host networking, and even more unusual in Kubernetes. Host networking disables the normal inter-container communication mechanisms. Kubernetes in particular has a complex network environment and there is usually more than one node; opting out of the network setup like this can make it all but impossible to reach your service, either from inside the cluster or outside.
Instead of host networking, you should use the normal Kubernetes networking setup. Pretty much every Deployment you create will need a matching Service, and if you set that Service to have type: NodePort then it will be accessible from outside the cluster (try both the assigned nodePort: number and the service's cluster-internal port:; it's not clear which port Docker Desktop actually uses).
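As an illustration only, a NodePort Service might look roughly like this; the names, labels, and ports are placeholders rather than anything from the question:

# Hypothetical Service exposing a Deployment's pods on a node port.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app          # must match the Deployment's pod labels
  ports:
    - port: 443          # cluster-internal service port
      targetPort: 10433  # container port
      nodePort: 30433    # node port (must be in the 30000-32767 range by default)
EOF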
For some purposes, the easiest approach is to set up a local port-forward to the service:
kubectl port-forward deployment/some-deployment 8888:3000
This will set up a port-forward from port 8888 on the local system to port 3000 on some pod managed by the named deployment. It forwards to a single pod (if you have multiple replicas, it targets only one of them), it's slower than a direct connection, and the port-forward will fail occasionally, but this is good enough for maintenance tasks like database migrations.
imitate real kubernetes behaviour
In the environment I work on normally, each cluster has dozens to hundreds of nodes. The nodes can't be directly accessed from outside the cluster. It's also reasonably common to configure a PodSecurityPolicy to disallow host networking since it can be viewed as a security concern.
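For illustration, a PodSecurityPolicy that disallows host networking could look roughly like this; note that PodSecurityPolicy has since been deprecated in newer Kubernetes releases, and the name and permissive defaults here are placeholders:

cat <<'EOF' | kubectl apply -f -
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-host-network
spec:
  hostNetwork: false     # pods governed by this policy may not use host networking
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
EOF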

Fetch data from getServerSideProps of a NextJs app when another API server is also running on localhost

According to the NextJs documentation:
You should not use fetch() to call an API route in getServerSideProps. Instead, directly import the logic used inside your API route. You may need to slightly refactor your code for this approach.
Fetching from an external API is fine!
So we cannot use the NextJs built-in API routes in getStaticProps or getServerSideProps. But when I use another API service based on the Laravel Framework as the back-end server and fetch it with Axios in the getServerSideProps function, I get an Error: connect ECONNREFUSED 127.0.0.1:8080 error.
It should also be noted that everything is fine if the API server is hosted outside our development machine. In other words, the problem only appears in the development environment, when both the Laravel back-end server and the NextJs front-end server are on localhost.
Could you help me out finding a solution for this problem?
When using localhost or 127.0.0.1 inside a docker container, that points to that docker container only, not the host computer.
There are two pretty easy solutions.
1. Create a docker network, add both containers to it, and use the container name instead of the IP (https://www.tutorialworks.com/container-networking/)
2. Use host networking for this container: https://docs.docker.com/network/host/
Edit: Added a link for a tutorial on how to create and use docker networks
So, as @tperamaki's answer already mentions: "When using localhost or 127.0.0.1 inside a docker container, that points to that docker container only, not the host computer."
You can use the IP of your machine on your local network, for example 192.168.0.10:8080 instead of 127.0.0.1:8080.
But you can also connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host.
In your case, just add the port where the other container is listening:
http://host.docker.internal:8080
In this section of the documentation, Networking features in Docker Desktop for Mac, they explain how to connect from a container to a service on the host. Note that a Mac is mentioned there, but I tried it on a Linux distro and it also works (and in this other answer it is mentioned that it works for Windows as well).
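A quick way to sanity-check this from a throwaway container (the curl image and the port 8080 are just for illustration):

# Should reach the Laravel server listening on port 8080 on the host.
docker run --rm curlimages/curl -s http://host.docker.internal:8080
# On Linux, recent Docker versions may need the host-gateway mapping added explicitly:
docker run --rm --add-host=host.docker.internal:host-gateway curlimages/curl -s http://host.docker.internal:8080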

What should the host value be for a Docker application connecting to a Docker database?

I have a Docker MySQL database which has been set up with the port mapping 3308:3306, meaning the internal Docker port 3306 is published on localhost port 3308, and I am able to connect to this DB from my local machine and application simply by using port 3308.
But if I run the application in Docker itself, what should the hostname and port be in the URL below to connect to the Docker database?
jdbc:mysql://hostname:port/DBName?useSSL=false
I would recommend this kind of setup with the docker run command.
Create a private bridge network.
docker network create --driver bridge private-net
Now start your application and DB containers with the following flag added:
--network private-net
On user-defined networks, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.
Read this for more details on Docker container networking.
Now you can use the following URL to access the database.
jdbc:mysql://<DB_Container_name>:port/DBName?useSSL=false
This approach might look complex, but it's the recommended way. With this setup, your DB will be in a private network and cannot be accessed by containers outside that network, which adds extra security to your database.
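A rough end-to-end sketch of that setup; the image names, credentials, and database name are placeholders:

docker network create --driver bridge private-net

# Database container; no -p mapping is needed for container-to-container access.
docker run -d --name mysql-db --network private-net \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=DBName \
  mysql:8.0

# Application container on the same network.
docker run -d --name my-app --network private-net my-app-image

Note that on the shared network the application connects to the container's internal port (3306 here), not the host-mapped 3308, so the URL becomes jdbc:mysql://mysql-db:3306/DBName?useSSL=false.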

How to access the Rundeck service from a local browser using the IP address?

I have installed Rundeck in Docker on an EC2 instance.
When I run the image and start Rundeck, it works fine:
lynx http://localhost:4440
is able to show the Rundeck dashboard.
But how can I access this Rundeck instance from a Windows browser?
I tried using the IP address, but the connection was refused.
In order to access this from outside for your setup, you need to ensure the following things:
Ensure that the host server (EC2) is forwarding ports to the docker container. You should have used -p or --publish when launching the container for this.
Test: from your EC2 instance, you should be able to access http://localhost:4440
Ensure you have a public IP assigned to your EC2 instance. You should be able to see that from your AWS EC2 console: http://console.aws.amazon.com/ec2
Ensure that the security group(s) for that instance accept inbound connections on port 4440 from your IP or from the rest of the world.
After this, http://<your-ec2-public-ip>:4440 should work.
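Putting the first point into a sketch (assuming the official rundeck/rundeck image and the default port 4440):

# Publish the container port on the EC2 host.
docker run -d --name rundeck -p 4440:4440 rundeck/rundeck

# Test from the EC2 instance itself:
curl -I http://localhost:4440

# From the Windows browser, once port 4440 is open in the security group:
# http://<ec2-public-ip>:4440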
I hope I got your question correct.
Let me know how it goes,
Thanks,
Anoop

How can I make all my docker containers use my proxy?

I am running docker on Debian Jessie, which is behind a corporate proxy. To be able to download docker images, I need to add the following to my /etc/default/docker:
http_proxy="http://localhost:3128/"
I can confirm that this works.
However, in order to be able to access the interwebz from within my container, I need to start all sessions with --net host and then set up these env variables:
export http_proxy=http://localhost:3128/
export https_proxy=https://localhost:3128/
export ftp_proxy=${http_proxy}
Ideally, I would like for the container to not need the host network, and not to know about the proxy (i.e. all outbound calls to port 20, 80, 443 in the container go via the host's proxy port). Is that possible?
Failing that, is it possible to have a site-wide setup which ensures that these env variables are set locally but never exported as part of an image? I know I can pass these things with --env http_proxy=... etc., but that's clunky. I want it to work for all users on the system without having to use aliases.
(Disclaimer: I asked this on https://superuser.com/posts/890196 but the home for docker questions is a little ambiguous at the moment).
See Proxy all the Containers:
The host server runs a container running a proxy (Squid, in this case) that can do transparent proxying. That container has some iptables rules that NAT traffic into the proxy server - this means that the container needs to run in privileged mode.
The host server also contains (and here's the magic) IP route table entries that re-route all traffic from any container except the proxy, destined for port 80, through the proxy container.
That last bit essentially means that for port-80 traffic, the route from a container to the rest of the world goes through the proxy container - giving it the chance to NAT and transparently proxy it.
https://github.com/silarsis/docker-proxy
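As a rough illustration of the kind of redirect involved (not the exact rules from that repository), inside the privileged proxy container a single NAT rule can push plain HTTP traffic into Squid's transparent port; 3129 is an assumed port here:

# Redirect incoming port-80 traffic to Squid's transparent-proxy listener.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129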
