Is there a way to use non-contiguous IPs and paths for a MinIO server pool and cluster?

I need to make a MinIO cluster out of servers which are already in use, so I can't change the IPs or mount points of those servers.
That means I can't use the "http://host{o...z}/export{1...m}" expansion syntax, because the IPs and paths are not contiguous.
I know that for a single server pool, MinIO can accept non-contiguous IPs and paths, like this:
./minio server http://x.x.x.182/data1 http://x.x.x.184/data3 http://x.x.x.186/data5 http://x.x.x.188/data7
Is there a way to bend the rule for a cluster? Or maybe a fork that accepts non-contiguous IPs and paths.

When you expand a MinIO deployment using server pools, you run a command like:
minio server http://minio{1...4}.example.net:9000/mnt/disk{1...4}/minio http://minio{5...12}.example.net:9000/mnt/disk{1...8}/minio
Here MinIO expects hosts with sequential hostnames, but those hostnames can be mapped to non-contiguous IPs through the /etc/hosts file. In other words, you can map your non-contiguous IP addresses onto contiguous hostnames and then use pools as usual.
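For example, a minimal /etc/hosts sketch mapping the scattered addresses from the question onto the sequential names used above (the x.x.x. placeholders stand in for your real addresses):

x.x.x.182  minio1.example.net
x.x.x.184  minio2.example.net
x.x.x.186  minio3.example.net
x.x.x.188  minio4.example.net

With this mapping in place on every node, the pool can be declared as http://minio{1...4}.example.net:9000/... even though the underlying IPs are non-contiguous.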
The MinIO team is available on their public Slack channel or by email to answer questions 24/7/365.

Related

Distributed MinIO deployment duplicates server in pool

I want to run a MinIO cluster for tests; this cluster should contain 2 servers with 4 drives each.
For this purpose, MinIO was set up as a systemd service.
Both servers have the same configuration in the /etc/default/minio file:
# Volume to be used for MinIO server.
MINIO_VOLUMES="http://10.24.36.82/tmp/minio/srv/d{1...4} http://10.24.36.83/tmp/minio/srv/d{1...4}"
# Use if you want to run MinIO on a custom port.
#MINIO_OPTS="--address :9199"
# Root user for the server.
#MINIO_ROOT_USER=Root-User
# Root secret for the server.
MinIO starts OK and the cluster is working, but for some reason the admin console shows that there are 3 servers in the cluster and one is always offline. When I open the MinIO console on 10.24.36.82, it shows a third server with the same IP. Server 10.24.36.83 shows the same picture, but this time with its own clone. Lots of errors about the third server being offline are produced in the MinIO logs.
My question is: why does MinIO duplicate its instance, and how do I fix this?
The problem was in the server URL definition. MINIO_VOLUMES must contain the port for every server pool address. If it doesn't, MinIO still starts somehow and tries to find an extra server on port 80.
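A corrected MINIO_VOLUMES sketch, assuming the servers listen on MinIO's default port 9000 (substitute whatever port you actually configure):

MINIO_VOLUMES="http://10.24.36.82:9000/tmp/minio/srv/d{1...4} http://10.24.36.83:9000/tmp/minio/srv/d{1...4}"

With the port made explicit, MinIO no longer assumes port 80 and the phantom third server disappears.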

Why can't I connect to Kafka from outside?

I am running Kafka on an EC2 instance. An Amazon EC2 instance has two IPs: one is the internal IP and the second one is for external use.
I created a producer from my local machine, but it gets redirected to the internal IP and gives me a connection-unsuccessful error. Can anybody help me configure Kafka on the EC2 instance so that I can run a producer from my local machine? I have tried many combinations, but they didn't work.
In the Kafka FAQ (updated for new properties) you can read:
When a broker starts up, it registers its ip/port in ZK. You need to make sure the registered ip is consistent with what's listed in bootstrap.servers in the producer config. By default, the registered ip is given by InetAddress.getLocalHost.getHostAddress(). Typically, this should return the real ip of the host. However, sometimes (e.g., in EC2), the returned ip is an internal one and can't be connected to from outside. The solution is to explicitly set the host ip and port to be registered in ZK by setting the advertised.listeners property in server.properties.
I solved this problem by setting advertised.host.name in server.properties and metadata.broker.list in producer.properties to the public IP address, and host.name to 0.0.0.0.
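As a concrete sketch of that older-style configuration (the public IP is a placeholder; these properties are deprecated in newer Kafka releases in favor of listeners/advertised.listeners, as the FAQ quote above notes):

# server.properties
host.name=0.0.0.0
advertised.host.name=<your-public-ip>

# producer.properties
metadata.broker.list=<your-public-ip>:9092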
The easiest way to reach your Kafka server (version kafka_2.11-1.0.0) on EC2 from a consumer on an external network is to edit the properties file
kafka_2.11-1.0.0/config/server.properties
and modify the following line, using your public address:
listeners=PLAINTEXT://ec2-XXX-XXX-XXX-XXX.eu-central-1.compute.amazonaws.com:9092
Verified on 2.11-2.0.0
I just did this in AWS. First get the Kafka server to listen on the correct interface/IP using host.name. For your case this would be the internal IP, not localhost, since your intent is for outside Kafka clients to connect. Any local clients will need to use that same address, not localhost.
Then set advertised.host.name to a host name, not an IP address. The trick is to get that host name to always resolve to the correct IP for both internal and external machines. I use /etc/hosts inside and DNS outside. See my full answer about Kafka and name resolution here.
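A sketch of that split-resolution trick, using a hypothetical name kafka1.example.com and placeholder addresses:

# /etc/hosts on the broker and other internal machines (private IP)
10.0.0.12  kafka1.example.com

# public DNS A record for external clients: kafka1.example.com -> 54.12.34.56

# server.properties
advertised.host.name=kafka1.example.com

Internal machines resolve the name via /etc/hosts to the private IP, while external clients resolve it via DNS to the public IP, so a single advertised name works from both sides.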
If you want to access Kafka from the LAN, change the following two files:
In config/server.properties:
advertised.listeners=PLAINTEXT://server.ip.in.lan:9092
In config/producer.properties:
bootstrap.servers=server.ip.in.lan:9092
In my case, the server.ip.in.lan value was 192.168.15.150
Below are the steps to connect to Kafka from outside of the EC2 instance.
1. Open the Kafka server properties file on EC2: /kafka_2.11-2.0.0/config/server.properties
2. Set the value of advertised.listeners to
advertised.listeners=PLAINTEXT://ec2-xx-xxx-xxx-xx.compute-1.amazonaws.com:9092
This should be the Public DNS (IPv4) of your EC2 instance.
3. Stop the Kafka server.
4. Start the Kafka server to see the above configuration changes in action.
Now you can connect to Kafka on your EC2 instance from outside or from your localhost.
Tried and tested on kafka_2.11-2.0.0
SSH to your EC2 instance, or wherever you're hosting Kafka.
sudo nano /etc/hosts
Add:
127.0.0.1 <your-host-name> localhost
In my case it's:
127.0.0.1 ec2-12-34-56-78.ap-southeast-1.compute.amazonaws.com
Save and exit.
For EC2 you should edit the /etc/hosts file to add:
XXX.XXX.XXX.XXX ip-YYY-YYY-YYY-YYY
where XXX... is your external IP and the ip-YYY-YYY-YYY-YYY is the string returned by the hostname command. You can use 127.0.0.1 instead of your external IP to communicate inside the server.
host.name is deprecated, as are advertised.host.name and advertised.port; newer Kafka versions use listeners and advertised.listeners instead.

Docker Minecraft Host

I am trying to host Minecraft servers in docker containers on an ec2 instance, and point a different subdomain to each container, for example
a.example.com -> container 1
b.example.com -> container 2
c.example.com -> container 3
...and so on.
If these containers were running a website, I could forward the traffic with Apache, or node-http-proxy, etc. But because these servers are running TCP services, I cannot route the traffic this way.
Is this possible? And if so, how?
The Minecraft client has supported SRV DNS records for a while now (since 1.3.1, according to Google). I suggest you assign your Docker containers a stable set of port mappings with the -p flag, and then create SRV records for each FQDN pointing to the same IP but different ports.
Google gives several hits on the SRV entry format - this one is from the main MCF site: http://www.minecraftforum.net/topic/1922138-using-srv-records-to-hide-ports-on-your-server-ip/
I have four MC servers running on the same physical host with a single IP address, each with a separate friendly entry for players to use in the Minecraft client, so none of my users need to remember a port. It did cause confusion for a couple of my more technical players when they had a connectivity issue, tested with dig/ping, then thought the DNS resolution was broken when there was no A record to be found. Overall, I think that's a very small downside.
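As a sketch of the mechanics (image name, hostnames, and ports here are all hypothetical): run each container with a fixed host-port mapping, then point an SRV record at that port.

docker run -d -p 25566:25565 --name mc-a <your-minecraft-image>

_minecraft._tcp.a.example.com. 300 IN SRV 0 5 25566 example.com.

The Minecraft client looks up _minecraft._tcp.<hostname>, then connects to the returned target and port, so players just type a.example.com with no port.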
Doesn't HAProxy (http://haproxy.1wt.eu/) route TCP traffic?

Amazon EC2 - seeing files between instances

I've set up 2 instances of Windows Server 2008 on EC2. I want one to act as the database server and the other as the client. For the client app to work it needs to be able to connect to the server instance with ALL of these things:
IP address of the database instance
access through a given UDP port
server name e.g. \\MyServer
an actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
I'm a complete novice with EC2. Is there any way I can set this up?
Many thanks
At least three of the four are completely possible and I have worked with similar setups. Maybe someone else knows more about the UDP bit.
IP address of the database instance
That is standard on EC2. All instances have two network interfaces, one EC2 internal and one to the outside world. For communication between instances use the internal one. Data traffic over these interfaces is free.
Access through a given UDP port
I have never tried UDP communication in EC2, but if it works you should probably keep it within a local network of your own, i.e. a virtual private cloud (VPC).
Server name e.g. \\MyServer
This kind of host name lookup does not need a name server, although you certainly could run one (preferably within a VPC). Simply putting the server name and (internal) IP into your hosts file (%systemroot%\system32\drivers\etc\hosts) is enough.
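For example, a hypothetical hosts-file entry on the client instance (the private IP is a placeholder):

10.0.0.12  MyServer

After that, \\MyServer resolves without any DNS infrastructure.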
An actual physical path through to its database e.g. \\UNC\SharedFolder\MyDatabaseFolder
Folder sharing should work the same as with any other Windows machine, but even that should probably be kept within a VPC.
Setting up a VPC can be a little steep to start with, but the documentation is good and the hard bits are often not needed (such as VPN tunnels). Have a look at the example scenarios and follow the one best matching your needs.

How do I connect up my Amazon EC2 instances without manually modifying config files?

I have a three-tier Windows-based web application bundled into 3 AMIs on Amazon EC2 that I use for load testing.
An ASP.NET web application on IIS
An .NET application server
SQL Server
After I launch them, the config files of each tier need modifying to update the IP addresses.
At the moment I am doing this manually: I connect to the webserver instance via remote desktop and modify the config file to point to the new IP of the application server instance. Then I do the same with the application server to change the IP in the connection string.
This must be a common requirement and I must be missing something obvious. There must be a better way!
I could use Elastic IP addresses, but these machines are only provisioned for a couple of hours at a time, and I would be charged for the addresses when they were NOT in use (which would be most of the time).
Is there some way of persistently naming the machines? Can I somehow get all the machines on the same network and use machine names instead of IP addresses?
I could write some nifty PowerShell script that would perform the modifications remotely. Is there an example somewhere?
I could use a dynamic IP address service. I'm not sure if this would have any negative effect on performance or availability... Are there any downsides to this approach?
I could install some sort of self-configuring service on each machine (which connects to S3? SNS? SimpleDB?) to publish/retrieve the addresses of the other machines and update the config files automatically. Is there an example somewhere?
What is best practice?
You could use Amazon Virtual Private Cloud (Amazon VPC). You get a private subnet where you can assign a fixed IP address to an instance, though it may require launching the instance from the command line to assign the IP. VPC is charged the same way as EC2.
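A sketch of that launch step with today's AWS CLI (the AMI ID, subnet ID, and address are placeholders):

aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --subnet-id subnet-abc123 --private-ip-address 10.0.0.10

Because the private IP is fixed at launch, the connection strings in each tier's config can be written once against known addresses and never need editing again.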
