Replacing a MinIO node in a 4-node cluster

One of the nodes in our 4-node MinIO cluster is having issues and will be terminated by our cloud provider in a couple of weeks. I have tested MinIO before and know that it will continue to function with 3 nodes. I will be bootstrapping a new node a few minutes after I terminate the old one, and our container orchestrator should drop a new MinIO container onto that node and into the MinIO cluster; I'm not concerned about that part.
What I would like to know is: how can I kick-start MinIO to rebalance after the new node is online? When I tested this scenario in the past, the new MinIO container did not pull much, if any, data from the other nodes. Is that because we still satisfy the (n/2) + 1 quorum?
Hypothetically, what would it take for me to see data being transferred between MinIO containers? Another (different) node being replaced after the new one is online?
At what point would I see data loss?
If it matters, this MinIO cluster just holds container images from an internal Docker registry. The amount of data it holds is relatively small and static, and writes only happen when I push an image to the registry.

FWIW, MinIO does not resync automatically. You need to run "mc admin heal". Even that appears to only get the added MinIO container's disk into the online state, so it is available for subsequent uploads.
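For reference, a minimal heal sequence might look like the following. This is a sketch: the alias name, endpoint, and credentials are placeholders, and flag behavior can vary between mc releases (on older releases the alias command was "mc config host add" instead of "mc alias set").

    # point mc at any surviving node (alias, endpoint, and credentials are placeholders)
    mc alias set myminio http://minio-1.internal:9000 ACCESS_KEY SECRET_KEY

    # confirm all servers and drives report online after the replacement node joins
    mc admin info myminio

    # trigger a recursive heal across all buckets and objects
    mc admin heal -r myminio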

Related

How can I migrate to instance storage for an EBS-defined Elasticsearch cluster without losing data?

I am using EBS for storage for my Elasticsearch cluster on EKS, but for performance I want to use instance storage instead of EBS. The critical point is that if an instance is terminated, its shards or replicas will be lost. How can I set up the configuration properly without losing data in this scenario?
I've made sample changes to a few storage-related parameters in the configuration files, but I'm not sure it's the right way. I left things as they are so that my changes don't cause any data loss.

Image pull over multiple K8s nodes

When I create a pod, a corresponding image is pulled to the node where the pod is created
Can I have those images shared among the cluster nodes, instead of being stored locally on each node?
Thanks a lot
Best Regards
It's possible if you have shared storage across all the Kubernetes nodes. However, it's not a good idea 🙅, since typically the place where images get stored is also the place where the container runtime stores its files when it's actually running containers. For example, if you are using Docker, everything gets stored under /var/lib/docker; in the case of containerd it's /var/lib/containerd.
So in summary, it's possible with shared/cluster file systems like NFS, Ceph, GlusterFS, AWS EFS, etc., but it's not a good idea in my opinion 🚫.
Update (#BMitch):
Make sure that the container storage driver you are using supports the filesystem that you are using.
✌️
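If you want to verify where your runtime actually keeps image data before experimenting with shared storage, commands along these lines show it (a sketch; the exact paths vary with distro and runtime configuration):

    # Docker: print the root directory that holds images and container state
    docker info --format '{{ .DockerRootDir }}'    # typically /var/lib/docker

    # containerd: the default config shows its root directory
    containerd config default | grep '^root'       # typically root = "/var/lib/containerd"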

How to change cluster IP in a replication controller run time

I am using Kubernetes 1.0.3, with a master and 5 minion nodes deployed.
I have an Elasticsearch application that is deployed on 3 nodes using a replication controller, and a service is defined.
Now I have added a new minion node to the cluster and want to run the Elasticsearch container on the new node.
I am scaling my replication controller to 4 so that, based on the node label, the Elasticsearch container is deployed on the new node. Below is my issue; please let me know if there is any solution.
The cluster IP defined in the RC is wrong, as it is not the same as in the service.yaml file. Now when I scale the RC, the new node gets an ES container pointing to the wrong cluster IP, due to which the new node is not joining the ES cluster. Is there any way I can modify the cluster IP of the deployed RC so that when I scale the RC the image is deployed on the new node with the correct cluster IP?
Since I am using an old version, I don't have the kubectl edit command, and I tried changing it with the kubectl patch command, but the IP didn't change.
The problem is that I need to do this on a production cluster, so I can't delete the existing pods; the only option is to change the cluster IP of the deployed RC and then scale, so that it picks up the new IP and the image starts accordingly.
Please let me know if there is any way I can do this.
Kubernetes creates that (virtual) ClusterIP for every service.
Whatever you defined in your service definition (which you should have posted along with your question) is being ignored by Kubernetes, if I recall correctly.
I don't quite understand the issue with scaling, but basically, you want to point at the service name (resolved by Kubernetes's internal DNS) rather than the ClusterIP.
E.g., http://myelasticsearchservice instead of http://1.2.3.4
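As a hedged illustration (the service name and port are assumptions, and this presumes the cluster DNS add-on is running), pointing at the name rather than the IP looks like:

    # look up the service; its ClusterIP is assigned by Kubernetes, not by you
    kubectl get services myelasticsearchservice

    # from inside any pod, the name resolves via cluster DNS, so the RC's pod
    # template can reference the service by name instead of a hard-coded IP
    curl http://myelasticsearchservice:9200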

Apache Ignite - move data from one server to another

I have an Ignite instance started in 'server mode' on computer A; I created a cache in it and stored 1M key->value pairs in the cache.
Then I started an Ignite instance in 'server mode' on computer B, which joined the Ignite instance on computer A, so I now have a cluster of 2 nodes.
Is it possible to move all 1M key->value pairs from computer A to computer B (without any interruption to querying or ingesting data) so that computer A can be shut down for maintenance and everything continues to work from computer B?
If this is possible, what are the steps and code to do that (move data from A to B)?
Ignite distributes data across server nodes according to Cache Modes.
In REPLICATED mode each server holds a copy of all data, so you can shut down any node and data won't be lost.
In PARTITIONED mode you can set CacheConfiguration.backups to 1 (or more) so that data is evenly distributed across server nodes, but each server also holds a copy of data from some other server. In this scenario you can shut down any single node and data won't be lost.
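A rough shell sketch of the swap under the backups >= 1 setup described above (IGNITE_HOME and the config path are placeholders; the backups setting itself lives on CacheConfiguration in your Spring XML):

    # on computer B: start a second server node with the same cluster config,
    # which must set CacheConfiguration.backups to 1 or more (or use REPLICATED)
    $IGNITE_HOME/bin/ignite.sh /path/to/cluster-config.xml

    # once rebalancing of backup partitions finishes, computer A can be
    # stopped; the Visor CLI bundled with Ignite can show cache and node state
    $IGNITE_HOME/bin/ignitevisorcmd.sh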
There are also features named "backups" and "CacheRebalanceMode" on CacheConfiguration. I think you can try these.

how to transfer elastic data from one server to another

How do I move Elasticsearch data from one server to another?
I have server A running Elasticsearch 1.4.2 on one local node with multiple indices. I would like to copy that data to server B running Elasticsearch of the same version. The lucene_version is also the same on both servers. But when I copy all the files to server B, the data is not migrated; it only shows the mappings of all the nodes. I tried the same procedure on my local computer and it worked perfectly. Am I missing something on the server end?
This can be achieved in multiple ways. The easier and safest way is to create a replica on the new node. A replica can be created by starting a new node on the new server with the same cluster name (if you have changed other network configurations, then you might need to change those as well). If you initialized your index with no replicas, you can change the number of replicas online using the update settings API.
Your cluster will be in a yellow state until your data is in sync. Normal operations won't get affected.
Once your cluster state is green, you can shut down the server you no longer wish to have. At this stage your cluster state will go yellow again. You can use the update settings API to change the replica count to 0, or add other nodes, to bring the cluster back to a green state.
This way is recommended only if both your servers are on the same network; otherwise data syncing will take a lot of time.
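For the replica approach, the relevant call on a 1.x cluster is the update settings API; a sketch (the host and replica count are assumptions):

    # raise the replica count so the new node receives a copy of every shard
    curl -XPUT 'http://localhost:9200/_settings' -d '{
      "index": { "number_of_replicas": 1 }
    }'

    # watch shard allocation until the cluster reports green
    curl 'http://localhost:9200/_cluster/health?pretty'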
Another way is to use snapshots. You can create a snapshot on your old server, copy the snapshot files from the old server to the new server in the same location, and on the new server register the same snapshot repository at that location. You will find the snapshot you copied and can restore from it. Doing this from the command line can be a bit cumbersome; you can use a plugin like kopf, which makes taking a snapshot and restoring as easy as a button click.
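And the snapshot route on 1.x, sketched with curl (the repository name and path are placeholders; the location must hold the same files on both servers, i.e. copied over or on shared storage):

    # on the old server: register a filesystem repository and take a snapshot
    curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
      "type": "fs",
      "settings": { "location": "/mnt/es-backups" }
    }'
    curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

    # copy /mnt/es-backups to the new server, register the same repository
    # there with the first command above, then restore:
    curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'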
