I have a host service managed by systemd which listens on the Podman default network interface (cni-podman0) so that containers can talk to it.
The problem is that Podman only creates the network interface when the first container is started. That means that when the host service the containers depend on starts, the interface isn't up yet, and the service fails to listen on it.
So the dependency chain is:
Podman container -needs> Host Service -needs> CNI network interface
But currently the only way I know of to bring up the interface is starting a container.
How can I make systemd tell Podman / CNI to start the default bridge network interface, so that I can depend on that in the host service unit?
Is there a command to bring up the interface explicitly that I could put in a unit file?
Unless I misunderstood the question, it's possible to use the After and Wants parameters in your systemd service file.
Open your service file, e.g. vim /etc/systemd/system/my_custom_daemon.service and make sure you have the following:
[Unit]
After=network.target
Wants=network.target
If it's not the host network that you need to satisfy as a precondition, then you'd need to create a custom systemd target and reference it in your After/Wants.
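For illustration, such a target and its consumer could look roughly like this (my-network.target is a placeholder name, and something still has to activate the target, e.g. a service with WantedBy=my-network.target):

# /etc/systemd/system/my-network.target
[Unit]
Description=Custom precondition signalling that my network is ready

# In the dependent service's [Unit] section:
After=my-network.target
Wants=my-network.target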
I solved it for now by adding a oneshot systemd service unit to the host service's dependencies, which runs an immediately exiting Alpine container using Podman. This "tricks" Podman into bringing up the bridge network interface.
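A minimal sketch of that workaround (the unit name and paths are placeholders, not from the original setup):

# /etc/systemd/system/podman-network-up.service
[Unit]
Description=Bring up the Podman bridge network by running a throwaway container

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/podman run --rm alpine true

[Install]
WantedBy=multi-user.target

The host service can then declare, in its [Unit] section:

Requires=podman-network-up.service
After=podman-network-up.service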
Less hacky solutions are still more than welcome.
I am very new to Consul and have been reading about Consul clustering recently. My understanding is that for each node (equivalent to a physical machine or VM), we run a local Consul agent (in client mode), and any microservices running on that node register themselves through this agent. But what happens if this one and only agent goes down? Won't the microservices on that node be unable to register anymore? Or should we expect more than one Consul agent (in client mode) per node to handle such a situation?
You are correct. If the Consul agent is down, the services on that host will not be able to register with the agent, and Consul will consider all services which were previously registered against the agent to be unavailable.
A very simple solution is to run Consul under a process manager like systemd, and configure systemd to restart the agent if the process unexpectedly fails. You can find an example systemd unit for this at https://learn.hashicorp.com/tutorials/consul/deployment-guide#configure-systemd. If Consul is installed from the HashiCorp Linux package repo (https://learn.hashicorp.com/tutorials/consul/get-started-install), this systemd unit will be included as part of the installation package.
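As a minimal sketch of that idea (the paths and flags here are assumptions; the linked deployment guide has the complete unit):

# /etc/systemd/system/consul.service
[Unit]
Description=Consul agent
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/consul agent -config-dir=/etc/consul.d/
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target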
I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a docker container. It accesses services that are deployed using docker-compose. This has been done using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# App 2 uses docker-compose, so myNetwork is defined in it, and here I just run:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: Docker version 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is to use a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost, but to set it to 0.0.0.0 instead to ensure that it listens for traffic from all interfaces.
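The equivalent with plain kubectl would be something like this (the service name and ports are placeholders):

kubectl port-forward --address 0.0.0.0 svc/my-cluster-service 8080:80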
Then I changed my containers from the inside, making the services point to my host's IP instead of trying to resolve the cluster service names. Use whatever method best fits your case for this: since it's not a production environment, I just tried hardcoding my host IP manually to check whether connectivity was achieved.
To point to a specific service of your cluster you need to use different ports, since they will all be mapped to your host through different port-forwards. Name resolution is no longer needed.
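For example, on Docker for Mac a container can reach the host through host.docker.internal, so a quick check from inside app1 might look like this (port 8080 is the assumed forwarded port):

docker exec -it app1 curl http://host.docker.internal:8080/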
With this configuration, your container's requests reach your host, where the port-forward routes them to the cluster. Connectivity is OK with this setup and the problem is solved.
I'm currently experimenting with Swarm services with Docker for Windows. The new Win10 Insider build supports overlay networking for Windows containers, and I was pleased to see my IIS service actually starting. The only issue I came across is that I cannot reach the service in the browser, despite trying multiple things such as different ports and networks. The command issued is as follows:
docker service create --name webfarm -p 80:80 microsoft/iis
I have also tried to use the --network flag to try different networks and I have made sure to test all IP addresses visible in the docker service inspect webfarm command.
docker service ps webfarm does indicate that my service is in state RUNNING and does not have any errors, so I don't know what else I can try, especially since these commands worked fine on Linux with Apache.
I was wondering if anyone has been able to successfully create a service using Windows Containers on the Windows Insider build (15046), and if so, how?
Never mind, I found that this actually is not supported yet.
The following source states:
"At the moment only DNS round robin is implemented as described in the Microsoft blog post. You cannot use to publish ports externally right now. More to come in the near future." (https://stefanscherer.github.io/docker-swarm-mode-windows10/)
And indeed, the blog post states the following:
"Currently, Windows supports DNS Round-Robin load balancing between services. The routing mesh for Windows Docker hosts is not yet supported, but will be coming soon. Users seeking an alternative load balancing strategy today can setup an external load balancer (e.g. NGINX) and use Swarm’s publish-port mode to expose container host ports over which to load balance." (https://blogs.technet.microsoft.com/virtualization/2017/02/09/overlay-network-driver-with-support-for-docker-swarm-mode-now-available-to-windows-insiders-on-windows-10/)
I guess I'll have to wait for this feature, in the meantime I will use the alternative.
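As a sketch of that alternative, Swarm's host publish mode looks roughly like this on recent Docker versions (the published port is my own choice), with an external load balancer such as NGINX then pointed at each node's port 8080:

docker service create --name webfarm --publish mode=host,target=80,published=8080 microsoft/iis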
I'm using Docker for Mac Beta and it runs from Spotlight.
Is there any way to run it from the console, or force it to use a configuration file to specify the IP address for the Docker host?
Right now it changes from 192.168.64.3 to 192.168.64.5 (each time Docker starts it can get a random IP).
Do I perhaps need to configure the bridge interface?
com.docker.network.bridge.enable_ip_masquerade: true
com.docker.network.bridge.host_binding_ipv4: 0.0.0.0
Does anyone know how to do that?
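For reference, on Linux those two options can be set per-network when creating a user-defined bridge (my-bridge is a placeholder name), though this does not control the Docker for Mac VM's own IP:

docker network create \
  -o "com.docker.network.bridge.enable_ip_masquerade"="true" \
  -o "com.docker.network.bridge.host_binding_ipv4"="0.0.0.0" \
  my-bridge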
You can connect to the Docker Alpine host via a Unix socket, but I have not been able to figure out how to bridge to the network.
The docs say:
Unfortunately, due to limitations in OSX, we’re unable to route traffic
to containers, and from containers back to the host.
Because of the way networking is implemented in Docker for Mac, you
cannot see a docker0 interface in OSX. This interface is actually
within HyperKit.
I have a small cluster with docker nodes, which I access via a gateway server that I ssh into. What I would like to do is run e.g. Eclipse with a GUI on the cluster and access that GUI from my computer.
What I have found so far is this: http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/
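For context, the approach from that post boils down to mounting the host's X server socket into the container, roughly like this (the image name is a placeholder):

docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  my-eclipse-image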
However, the problem I'm experiencing is that the host computer doesn't run any X server, since it's only a node in a cluster, so I cannot mount the required directory into the container.
Is there a way to use GUI applications in a container with this setup?