What is the maximum number of subnets that can be assigned to a single VLAN?

When configuring an SVI (Switch Virtual Interface) for a VLAN on a layer 3 switch, what is the maximum number of subnets that can be assigned to a single VLAN?

You have asked two questions:
How many subnets can exist simultaneously on a VLAN?
Unlimited for IPv4. For IPv6, a single subnet (not counting link-local addresses) when using SLAAC; unlimited when addresses are assigned manually or via DHCPv6.
How many subnets can be used with layer 3 switching?
Only a single routable address per protocol (IPv4 and IPv6) may be assigned to the VLAN interface. This is based on Cisco devices; other vendors may have different requirements.
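For illustration, a minimal Cisco IOS SVI configuration under that one-routable-address-per-protocol constraint might look like this (the VLAN number and addresses are made-up documentation examples):
interface Vlan10
 ip address 192.0.2.1 255.255.255.0
 ipv6 address 2001:db8:10::1/64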

Related

Load-balancing with a parameter in a highly-concurrent scenario

Let's say there are two service clusters, A and B, each with tens or hundreds of hosts; sometimes hosts may restart, be removed, or be added. Services in A make RPC calls to services in B via a method doRemoteCall(String shopId, ..). This is a highly concurrent scenario; the cluster QPS could be 100k or more.
Now I hope the load balancing for A's RPC calls to B follows the three rules below:
RPC requests with the same shopId are routed to the same host (ideally) or to the same group of hosts on B with high probability (the higher the better; ideally 100%).
The RPC calls are distributed relatively evenly among the hosts on B.
The routing decision on each host in A is made independently, without information about the other hosts (it could be complex for each host to gather information from other hosts, especially in a highly concurrent and dynamic scenario where hosts occasionally leave or join).
The magic google words are "consistent hashing".
In the classic consistent hashing scheme, each A host would do this (the constants 256 and 64-bit are arbitrary, but appropriate for this problem size):
For each B host and each k in 1...256, calculate a 64-bit server hash Hs = hash(B,k)
For each user id, calculate a 64-bit hash Hu
Assign the user id to the host with the smallest Hs such that Hs >= Hu. If there is none, wrap around and use the host with the smallest Hs overall.
Of course, the set of server hashes only needs to be modified when B hosts go up or down.
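A minimal Python sketch of that scheme (the host names and the hash64 helper built on hashlib.blake2b are illustrative assumptions; any stable, well-mixed 64-bit hash works):
import bisect
import hashlib

def hash64(key: str) -> int:
    # Stable 64-bit hash; blake2b is an arbitrary but well-mixed choice.
    return int.from_bytes(hashlib.blake2b(key.encode(), digest_size=8).digest(), "big")

class ConsistentHashRing:
    def __init__(self, hosts, replicas=256):
        # One ring entry per (host, k) pair: 256 virtual nodes per host, as above.
        self._ring = sorted((hash64(f"{host}#{k}"), host)
                            for host in hosts for k in range(1, replicas + 1))
        self._keys = [hs for hs, _ in self._ring]

    def lookup(self, shop_id: str) -> str:
        hu = hash64(shop_id)
        # Smallest Hs >= Hu; wraps to the smallest Hs overall when Hu is past the end.
        i = bisect.bisect_left(self._keys, hu) % len(self._keys)
        return self._ring[i][1]

ring = ConsistentHashRing([f"b-host-{n}" for n in range(100)])
print(ring.lookup("shop-42"))  # same shopId -> same host, until membership changes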
I usually prefer rendezvous hashing, but with hundreds of hosts to balance over, it gets slow:
For each user id, for each host, calculate a 64-bit hash(userid,host)
Assign the userid to the host with the smallest hash.
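For comparison, a rendezvous-hashing sketch under the same assumptions; the O(hosts) scan on every lookup is the slowness mentioned above:
import hashlib

def hash64(key: str) -> int:
    return int.from_bytes(hashlib.blake2b(key.encode(), digest_size=8).digest(), "big")

def rendezvous_lookup(shop_id: str, hosts) -> str:
    # Score every host against the key and keep the extremum (smallest, as above).
    return min(hosts, key=lambda host: hash64(f"{shop_id}#{host}"))

hosts = [f"b-host-{n}" for n in range(100)]
print(rendezvous_lookup("shop-42", hosts))  # stable until the host set changes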

Mikrotik bandwidth shaping for a campus network

I have a Mikrotik router for a University campus network. I have a total bandwidth of 100Mbps. I would like to manage the bandwidth per IP address in a way that if all of the users are connected, the bandwidth is divided equally, and when only some users are connected, the bandwidth per user increases. For example, if there are 10 users connected, each user should receive 10Mbps, and if 5 users are connected, each should receive 20Mbps. Please let me know which queue type should be used.
You need a simple queue with PCQ as the queue type. Something like this:
/queue simple
add max-limit=95M/95M name=MyQueue queue=pcq_upload_default/pcq_download_default target=your_ether_interface
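If you also want a hard per-user ceiling (say, no single IP above 20M even when the link is otherwise idle), you can define custom PCQ types instead of the defaults; a sketch, with made-up type names and rates:
/queue type
add name=pcq-up-20M kind=pcq pcq-classifier=src-address pcq-rate=20M
add name=pcq-down-20M kind=pcq pcq-classifier=dst-address pcq-rate=20M
/queue simple
add max-limit=95M/95M name=MyQueue queue=pcq-up-20M/pcq-down-20M target=your_ether_interface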

10Gb connectivity between Amazon instances

I am trying to architect a solution for Amazon EC2 that requires high network bandwidth. Is there a way to provision 10GbE connectivity between Amazon EC2 instances to get high network bandwidth?
Certain Amazon EC2 instance types launched into the same cluster placement group are placed into a non-blocking 10 Gigabit Ethernet network.
These instance types include:
m4.10xlarge
c4.8xlarge
c3.8xlarge
g2.8xlarge
r3.8xlarge
d2.8xlarge
i2.8xlarge
cc2.8xlarge
cc1.4xlarge
cr1.8xlarge
Just look in the Network Performance column in the EC2 launch console and you'll see it says "10 Gigabit".
From the Placement Groups documentation:
A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.
To provide the lowest latency, and the highest packet-per-second network performance for your placement group, choose an instance type that supports enhanced networking.
The following instances support enhanced networking: C3, C4, D2, I2, M4, R3
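For example, provisioning two such instances into a shared cluster placement group with the AWS CLI might look like this (the group name and AMI ID are placeholders):
aws ec2 create-placement-group --group-name my-cluster --strategy cluster
aws ec2 run-instances --image-id ami-xxxxxxxx --count 2 \
    --instance-type c4.8xlarge --placement GroupName=my-cluster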

How will a server running multiple Docker virtual machines handle the TCP limitation?

Under a REALLY heavy load, a server doesn't seem to "recycle" the TCP connections quickly enough.
I'm looking into using Docker to deal with a higher than usual number of requests per second to an API by creating multiple instances of a node server on one machine vs using multiple machines.
If the following sysctl settings are set, the recycling does seem to happen faster but there is still a hard limit on how many sockets there can be in existence:
net.ipv4.ip_local_port_range='1024 65000'
net.ipv4.tcp_tw_reuse='1'
net.ipv4.tcp_fin_timeout='15'
When running multiple docker instances, is the total cap on tcp connections still equal to the number of maximum tcp connections the "parent" machine can handle?
Yes, the total number of TCP connections is still capped by the Docker host.
However, there are three very different limits:
total cap of open connections (regardless of the source/destination IP address), which is related to the maximum number of file descriptors, and can be extremely high (i.e. millions)
total cap of outbound connections for a given local IP address (limited to 64K per local IP address)
total cap of connections tracked by netfilter
TCP port recycling deals with the 2nd limit. If you use netstat -nt in the host and container, you should be able to easily check if you're getting close to it. If that's the case, the sysctls that you used should help a lot.
If your container is handling outside traffic, it shouldn't be subject to that limit; however, you could hit the 3rd one. You can check the number of tracked connections with conntrack -S, and if necessary, bump up the maximum number of connections by tweaking /proc/sys/net/ipv4/netfilter/ip_conntrack_max.
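A quick way to check the 2nd and 3rd limits from the host (these are standard Linux tools; the conntrack utility may need to be installed separately):
# Limit 2: connections stuck in TIME_WAIT (the ~64K-per-local-IP budget)
netstat -nt | awk '$6 == "TIME_WAIT"' | wc -l
# Limit 3: tracked connections vs. the netfilter maximum
conntrack -S
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max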
It would also be helpful to indicate which symptoms you are seeing that make you think the server doesn't recycle connections fast enough.

AWS Elastic Load Balancer and multiple availability zones

I want to understand how ELB load balances between multiple availability zones. For example, if I have 4 instances (a1, a2, a3, a4) in zone us-east-1a and a single instance d1 in us-east-1d behind an ELB, how is the traffic distributed between the two availability zones? i.e., would d1 get nearly 50% of all the traffic or 1/5th of the traffic?
If you enable ELB Cross-Zone Load Balancing, d1 will get 20% of the traffic.
Here's what happens without Cross-Zone Load Balancing enabled:
d1 would get nearly 50% of the traffic. This is why Amazon recommends adding the same number of instances from each AZ to your ELB.
The following excerpt is extracted from Overview of Elastic Load Balancing:
Incoming traffic is load balanced equally across all Availability Zones enabled for your load balancer, so it is important to have approximately equivalent numbers of instances in each zone. For example, if you have ten instances in Availability Zone us-east-1a and two instances in us-east-1b, the traffic will still be equally distributed between the two Availability Zones. As a result, the two instances in us-east-1b will have to serve the same amount of traffic as the ten instances in us-east-1a. As a best practice, we recommend you keep an equivalent or nearly equivalent number of instances in each of your Availability Zones. So in the example, rather than having ten instances in us-east-1a and two in us-east-1b, you could distribute your instances so that you have six instances in each Availability Zone.
The load balancing between different availability zones is done via DNS. When a DNS resolver on the client asks for the IP address of the ELB, it gets two addresses and chooses one of them (usually the first). The DNS server usually responds with the addresses in a random order, so each IP is used only part of the time (half the time for two addresses, a third of the time for three, and so on).
Then behind these IP addresses you have an ELB server in each availability zone that has your instances connected to it. This is the reason why a zone with just a single instance will get the same amount of traffic as all the instances in another zone.
When you get to the point where you have a very large number of instances, ELB can decide to create two such servers in a single availability zone, but in that case it will split your instances so that each server handles half (or some other equal share) of them.
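You can observe this DNS behavior directly; a sketch with a hypothetical ELB hostname and illustrative addresses:
dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com
203.0.113.10   (ELB node in us-east-1a)
203.0.113.74   (ELB node in us-east-1d)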
