After installing MinIO DirectPV, it lists the multipath devices as /dev/dm-XX devices in kubectl directpv drives list. But the /dev/dm-XX paths are not persistent across node reboots.
I want DirectPV to list the /dev/mapper/mpathXX devices instead of /dev/dm-XX, so that the paths remain persistent across reboots.
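For context, the /dev/mapper names are udev-managed symlinks that point at the kernel's dm-N nodes, which is why they stay stable while the dm-N numbering can change. This can be checked directly on a node (assuming multipath-tools is installed):

```shell
# /dev/mapper entries are symlinks to the current dm-N nodes,
# e.g. mpatha -> ../dm-0; dm-0 may differ after a reboot, mpatha will not
ls -l /dev/mapper/

# show each multipath map together with the dm-N device it is currently backed by
multipath -ll
```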
We are using container instances deployed with an ARM template (docs: https://learn.microsoft.com/en-us/azure/templates/microsoft.containerinstance/containergroups?tabs=json) and want to mount an on-premises volume into this container, as our environment is now both on-prem and in Azure. The on-prem environment is Windows. How can we do this?
Suggestions so far that we have been looking into:
Mount a volume through the ARM-template. (Is this even possible with on-prem volumes?)
Run container instances with privileges so we can mount later with commands. (This seems to be possible through Docker Desktop, but is it possible through container instances?)
Use SMB-protocol to reach files on-prem
Which of these suggestions is actually possible, and which is best? And is there another option that would be better?
First of all, you need to consider a couple of limitations when mounting a volume to an Azure Container Instance:
You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
Azure file share volume mounts require the Linux container to run as root.
Azure File share volume mounts are limited to CIFS support.
Unfortunately, there is no way to mount on-premises storage to an Azure Container Instance.
It is possible to mount only the following types of volumes into Azure Container Instances:
Azure Files
emptyDir
GitRepo
secret
You may try syncing your files from on-premises to an Azure Storage Account file share using Azure File Sync, and then mount that file share into your Container Instances.
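If you go the Azure File Sync route, the file share can then be mounted through the ARM template itself. A minimal sketch of the relevant template fragments, using the documented azureFile volume type (the share, account, and mount path names are placeholders, and in practice the key should come from a secure parameter rather than being inlined):

```json
{
  "properties": {
    "volumes": [
      {
        "name": "filesharevolume",
        "azureFile": {
          "shareName": "myshare",
          "storageAccountName": "mystorageaccount",
          "storageAccountKey": "<storage-account-key>"
        }
      }
    ],
    "containers": [
      {
        "name": "mycontainer",
        "properties": {
          "volumeMounts": [
            {
              "name": "filesharevolume",
              "mountPath": "/mnt/share"
            }
          ]
        }
      }
    ]
  }
}
```

Note that, per the limitations above, this only works for Linux container groups.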
I have several Windows servers available and would like to set up a Kubernetes cluster on them.
Is there some tool or a step-by-step instruction for how to do so?
What I tried so far is to install Docker Desktop and enable its Kubernetes feature.
That gives me a single-node cluster. However, adding additional nodes to that Docker Kubernetes cluster (from different Windows hosts) does not seem to be possible:
Docker desktop kubernetes add node
Should I first create a Docker Swarm and could then run Kubernetes on that Swarm? Or are there other strategies?
I guess that I need to open some ports in the Windows Firewall settings of the hosts? And map those ports to some Docker containers in which Kubernetes will be installed? Which ports?
Is there some program that I could install on each Windows host and that would help me with setting up a network with multiple hosts and connecting the Kubernetes nodes running inside Docker containers? Like a "kubeadm for Windows"?
It would be great if you could point me in the right direction.
Edit:
Related info about installing kubeadm inside Docker container:
https://github.com/kubernetes/kubernetes/issues/35712
https://github.com/kubernetes/kubeadm/issues/17
Related question about Minikube:
Adding nodes to a Windows Minikube Kubernetes Installation - How?
Info on kind (kubernetes in docker) multi-node cluster:
https://dotnetninja.net/2021/03/running-a-multi-node-kubernetes-cluster-on-windows-with-kind/
(Creates multi-node kubernetes cluster on single windows host)
Also see:
https://github.com/kubernetes-sigs/kind/issues/2652
https://hub.docker.com/r/kindest/node
You can always refer to the official Kubernetes documentation, which is the authoritative source for this kind of question.
Based on Adding Windows nodes, there are two prerequisites:
Obtain a Windows Server 2019 license (or higher) in order to configure the Windows node that hosts Windows containers. If you are using VXLAN/overlay networking, you must also have KB4489899 installed.
A Linux-based Kubernetes kubeadm cluster in which you have access to the control plane (see Creating a single control-plane cluster with kubeadm).
The second point is especially important, since all control plane components are supposed to run on Linux systems. (You could run a Linux VM on one of the servers to host the control plane components, but networking will be much more complicated.)
And once you have a properly running control plane, there is kubeadm for Windows to properly join Windows nodes to the Kubernetes cluster, as well as documentation on how to upgrade Windows nodes.
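The join step can be sketched as follows, assuming the Linux control plane is already up (the actual token, hash, and address come from your cluster):

```shell
# on the Linux control-plane node: print a fresh join command,
# including a bootstrap token and the CA cert hash
kubeadm token create --print-join-command

# then, on the Windows node (in an elevated prompt), run the printed command, e.g.:
# kubeadm join <control-plane-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```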
For the firewall, and which ports should be open, check ports and protocols.
For worker nodes (which will be the Windows nodes):
Protocol  Direction  Port Range   Purpose            Used By
TCP       Inbound    10250        Kubelet API        Self, Control plane
TCP       Inbound    30000-32767  NodePort Services  All
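On the Windows nodes, those ports can be opened with the built-in NetSecurity cmdlets. A sketch (run in an elevated PowerShell; the rule display names are arbitrary):

```powershell
# allow the kubelet API port from the table above
New-NetFirewallRule -DisplayName "Kubelet API" -Direction Inbound `
    -Protocol TCP -LocalPort 10250 -Action Allow

# allow the NodePort service range
New-NetFirewallRule -DisplayName "NodePort Services" -Direction Inbound `
    -Protocol TCP -LocalPort 30000-32767 -Action Allow
```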
Another option can be running Windows nodes in a cloud-managed Kubernetes service, for example GKE with a Windows node pool (yes, I understand that it's not your use case, but for further reference).
I have two physical machines both running in the same network, and I made one of them a manager and the other one a worker. The nodes join correctly and I was able to view them by running docker node ls.
In the Docker YAML file, I have 4 applications in total, two of which run on the manager node and the others on the worker node.
My issue is that the applications in the manager node cannot reach the applications in the worker node via the overlay network.
More information:
The manager node is running Ubuntu 18.04 LTS, and the worker node is a Mac mini (macOS 10.14.1). The architecture looks like the below:
I suspect this is a Mac issue. Any ideas?
I have been trying to work around similar issues. The root cause is that Docker Desktop for macOS is not a "true Docker" and does not forward network requests from/to other hosts properly. Details are here: https://docs.docker.com/docker-for-mac/docker-toolbox/
The workaround is to use virtual machines on macOS (e.g., VirtualBox) via docker-machine command lines. Details are introduced in How to connect to a docker container from outside the host (same network) [OSX 10.11]
I have tried the VirtualBox path, adding a third network adapter in bridged mode, and I can finally ping all 3 nodes from the container.
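A sketch of that workaround, assuming docker-machine and VirtualBox are installed on the Mac (the VM name is a placeholder, and the token and manager address come from your own swarm):

```shell
# create a VirtualBox-backed Docker host on the Mac to act as the worker
docker-machine create --driver virtualbox worker1

# on the Linux manager: print the worker join token and address
docker swarm join-token worker

# run the printed join command inside the VM instead of in Docker Desktop
docker-machine ssh worker1 \
    "docker swarm join --token <TOKEN> <MANAGER-IP>:2377"
```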
If I run IPFS and IPFS Cluster, I am the only peer in the cluster (that's clear).
But how can I add a second peer from my PC?
Maybe through another port than the first one?
Thanks for your help!
IPFS Cluster (https://cluster.ipfs.io/documentation/overview/) is mainly used to "orchestrate IPFS daemons running on different hosts", however yes you can run two IPFS nodes on the same machine and connect them both to Cluster: How to run several IPFS nodes on a single machine?.
If you are actually trying to do this for testing, you may be looking for IPTB (https://github.com/ipfs/iptb), which is a tool to "manage a cluster of sandboxed nodes locally on your computer" - see the plugins section (https://github.com/ipfs/iptb-plugins) for managing individual IPFS nodes within those test clusters.
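The two-nodes-on-one-machine approach from the linked question boils down to giving the second node its own repository and non-clashing ports. A sketch (the repo path and port numbers are arbitrary choices; the defaults are 5001, 8080, and 4001):

```shell
# initialize a second repo in a separate directory
export IPFS_PATH=~/.ipfs2
ipfs init

# move the second node's API, gateway, and swarm ports off the defaults
ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002"]'

# start the second daemon; it can now be added to the cluster as a second peer
ipfs daemon
```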
How can I ensure that software installed on a cluster is always available?
I understand that I can install the software in a shared drive and if one node goes down, the other node will take care.
But what about the Windows system dependencies like the registry, the Windows directory, services, etc.?
Will these also be shared across the nodes?
Basically, if I have software written in C++/C# which has lots of Windows OS resource dependencies (registry, services, etc.), how can I ensure that it is highly available through a cluster? Is that possible?
Thanks & Regards
Sunil
For this scenario, let's assume:
There are two servers in the cluster. ServerA and ServerB.
Each server has their own local drive. (C:)
Each server has access to a shared/common drive called F:\ (probably on an external SAN)
When installing or updating your application on the Failover Cluster, first ensure ServerA is the cluster owner/active node. Install your application as usual, ensuring the install path is a folder on the shared drive F:.
Once the install is complete to ServerA, go into Failover Cluster manager and make ServerB the cluster owner/active node. Repeat the install on ServerB, using the same folder on F:\ for the installation path.
If your application is a Windows service (or set of services), make sure after the application installation that you configure the service as a Generic Service resource in the Failover Cluster. Then, always stop/start the service via Failover Cluster Manager.
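The Generic Service resource can also be set up from PowerShell with the FailoverClusters module, and registry checkpoints address the question about registry dependencies by replicating a key to the passive node on failover. A sketch (all resource, group, service, and key names here are hypothetical placeholders):

```powershell
# register the installed Windows service as a Generic Service resource
# inside an existing clustered role/group
Add-ClusterResource -Name "MyAppService" -ResourceType "Generic Service" -Group "MyAppGroup"

# point the cluster resource at the actual Windows service name
Get-ClusterResource "MyAppService" |
    Set-ClusterParameter -Name ServiceName -Value "MyAppSvc"

# replicate a registry key the application depends on, so it follows
# the resource to whichever node becomes active
Add-ClusterCheckpoint -ResourceName "MyAppService" -RegistryCheckpoint "SOFTWARE\MyCompany\MyApp"
```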