What's the expected latency for a simple connection between a pair of Amazon EC2 instances in the same region?
Thanks!
The latency should be similar to that of two computers on the same LAN. Just make sure that you are using the private IPs, not the public ones, when connecting the two instances.
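For reference, a quick way to confirm an instance's private IP from inside the instance is the EC2 instance metadata service. A minimal sketch in Python (assuming the classic IMDSv1 endpoint is reachable; IMDSv2 requires a session token):

    # Minimal sketch: read this instance's private IPv4 from the EC2 metadata service.
    import urllib.request

    private_ip = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/local-ipv4", timeout=2
    ).read().decode()
    print(private_ip)  # connect the instances using this address, not the public one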
I am trying to learn failover clustering from CBT Nuggets videos, but I have a question that I hope can be answered. Shown below is the link to the image of the three networks used in the F/O Cluster.
CL1-NUG and CL2-NUG are the cluster nodes, DC-NUG is the domain controller and FS-NUG is the file server node that will act as the shared storage for CL1-NUG and CL2-NUG.
The 192.168.1.0/24 network is the client/company network which is the only network that can go outside to the internet.
The 192.168.2.0/24 network is the cluster communication network.
The 192.168.3.0/24 network is the storage network.
If the 192.168.1.0/24 network allows the clients/company to access FS-NUG at 192.168.1.105, what is the reason for the storage network (192.168.3.0/24)? I know at one point they explain the storage network as a "dedicated network for the nodes to access the shared storage", but why can both the 192.168.1.0/24 and 192.168.3.0/24 networks access FS-NUG? If you could explain this in layman's terms, that would be greatly appreciated.
The three networks shown in Server Manager
Diagram of failover cluster
Thanks,
In an AWS VPC, is there any difference in networking performance (or in risk of networking failure) between EC2 instances in these two scenarios?
A) An EC2 instance in subnet Data communicating with an EC2 instance in subnet Apps. Both subnets belong to the same availability zone.
B) Two EC2 instances in a single subnet Data+apps.
I'm asking because even when Windows route print shows On-link for the subnet and a route to a gateway for other subnets in the VPC, I suspect that in the background there are no real differences in the routing that takes place.
There is no difference in performance or reliability when instances are on the same subnet, or on different subnets within the same availability zone, because of the way the network infrastructure actually works in VPC.
This is because the network is not really an Ethernet network with routers.
The entire network infrastructure in VPC is virtual, software-defined.
The best way to see this for yourself is to sniff packets on two machines on the same subnet. You'll find a significant difference in the behavior of ARP. On a "normal" network, machines on the same subnet are also in the same broadcast domain. They discover each other's hardware address by "arping" for each other, with "who has/tell" and "I have" messages. These are conspicuously different on VPC: machine A sends a request and gets a response... yet machine B never saw that request, and did not actually generate the response that it seems to have generated. There's also a conspicuous absence of stray incoming ARP messages you would find on a LAN. This layer 2 behavior is entirely emulated by the network infrastructure, which actually connects nodes together over a routed network using a numbering scheme that's entirely different from and unrelated to the numbering your topology uses, but simulates an Ethernet network very effectively.
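If you want to watch this for yourself, here is a minimal sketch using Scapy (my substitution for a packet sniffer; it needs root, and the interface name is an assumption) that prints every ARP frame seen on the instance:

    # Minimal sketch: print ARP traffic seen on the instance's primary interface.
    # Requires scapy (pip install scapy) and root; "eth0" is a placeholder interface name.
    from scapy.all import ARP, sniff

    def show(pkt):
        if ARP in pkt:
            print(pkt.summary())

    sniff(filter="arp", prn=show, iface="eth0", store=False)

On a physical LAN you would also see requests and replies from your neighbours; on VPC you only see the answers to your own requests.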
See also A Day in the Life of a Billion Packets for an excellent overview of how the magic comes together in VPC networking.
No. Keeping the instances in different subnets within the same region or availability zone doesn't affect the performance. However, the EC2 instance type and its provisioned resources do play a role, as a lower-capability instance may introduce some latency in responses.
I understand that IP addresses behind the ELB may change in time, new IP addresses can be added and removed depending on the traffic pattern we have at the moment.
My question is - how does this work with long living connections, e.g. websocket? Let's say I have persistent websocket connection to the web service behind the ELB. When AWS changes the ELB's IP address I'm currently connected to, replacing it with some other, what will happen? I cannot find a good answer in AWS docs.
Thanks,
Vovan
When AWS changes the ELB's IP address I'm currently connected to, replacing it with some other, what will happen? I cannot find a good answer in AWS docs.
In general there are two situations where the ELB's IP addresses will change:
Underlying ELB Failure
Think of the ELB as a scalable cluster of Load Balancers all addressable under a single DNS name, each with an IP address. If one node dies (e.g. due to an underlying hardware failure), the IP will be removed from the DNS record and replaced with a new node.
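You can see the current set of nodes by resolving the ELB's DNS name. A minimal sketch in Python (the hostname is a placeholder for your own load balancer's DNS name):

    # Minimal sketch: list the A records currently behind an ELB DNS name.
    import socket

    elb_name = "my-elb-1234567890.us-east-1.elb.amazonaws.com"  # placeholder
    addresses = sorted({info[4][0] for info in
                        socket.getaddrinfo(elb_name, 443, proto=socket.IPPROTO_TCP)})
    print(addresses)  # changes over time as ELB nodes are added, removed or replaced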
Clients connected to it at the time of failure will lose their connection and should handle a reconnect. It won't automatically be routed to a 'healthy' part of the ELB.
Traffic Variation
If the ELB is scaled up or down - because of modifications in traffic profile - as mentioned in the forum post linked above, the connections will continue to function for some time, but there is no guarantee of that period (min or max). This is especially notable in cases where the LB is scaled up quickly to meet load ("cliff face" style), as the 'old' ELB nodes may be overwhelmed (or become so) and their ability to process traffic impaired.
Consequently, developers need to handle reconnections on the client side in both cases.
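A minimal sketch of such a client-side reconnect loop, using the third-party websockets package (the URL and backoff value are placeholders):

    # Minimal sketch: reconnect loop for a long-lived websocket behind an ELB.
    import asyncio
    import websockets

    async def consume(url="wss://example.com/stream", backoff=5):
        while True:
            try:
                async with websockets.connect(url) as ws:
                    async for message in ws:
                        print(message)          # replace with real message handling
            except (websockets.ConnectionClosed, OSError):
                await asyncio.sleep(backoff)    # the ELB node went away; retry

    asyncio.run(consume())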
I wanted to know whether Asterisk 11 on Amazon EC2 would be a good idea for handling more than 100 concurrent calls. If yes, which instance type would work well?
I also have a good amount of business logic and application logic running alongside Asterisk.
How would the performance be on an EC2 instance? Is it recommended to use an EC2 instance with Asterisk?
Thanks
Amazon EC2 is a bad idea for VoIP.
It sits behind NAT and does not have precise timing. It is also not especially high-performance.
100 calls require an instance like c1.xlarge/m1.xlarge/c3.large - 8+ ECUs.
On a c1.medium, Asterisk can usually handle 50-80 calls, depending on the dialplan and your skill.
Also note that bandwidth on EC2 is VERY costly.
I do not recommend using EC2 instances for Asterisk unless you need any of the following:
an on-demand application with a failover setup.
a pay-per-minute/scalable application (for example, a planned conference service).
the ability to launch an instance on a crash, and/or other infrastructure already on EC2.
In all other cases you are much better off getting two dedicated servers and setting up failover for those servers. You will get much more performance for a similar cost.
A successful deployment of Asterisk on Amazon EC2 requires that you enable three critical ports on EC2's firewall. Without them, Asterisk will not work. Thus, the following ports are key to passing RTP packets (for voice) and SIP signaling (for devices, DTMF codes, etc.):
5060 (UDP)
4569 (UDP)
10000-20000 (UDP)
22 (TCP) (You'll need this for SSH access)
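If you manage the EC2 firewall through security groups, a minimal sketch with boto3 that opens the ports listed above (the security group ID is a placeholder, and AWS credentials are assumed to be configured):

    # Minimal sketch: open the SIP, IAX2, RTP and SSH ports in an EC2 security group.
    import boto3

    ec2 = boto3.client("ec2")
    rules = [
        ("udp", 5060, 5060),       # SIP signaling
        ("udp", 4569, 4569),       # IAX2
        ("udp", 10000, 20000),     # RTP media
        ("tcp", 22, 22),           # SSH access
    ]
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder group ID
        IpPermissions=[
            {"IpProtocol": proto, "FromPort": lo, "ToPort": hi,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
            for proto, lo, hi in rules
        ],
    )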
Use Eric Hammond's Ubuntu AMI (Amazon Machine Image), ami-ce44a1a7, and the 1000HZ AKI, aki-9b00e5f2. This AKI is important because it is specifically compiled for VoIP applications such as Asterisk. Any AKI (Amazon Kernel Image) other than one set at 1000HZ will produce undesirable results in voice quality and functionality.
TIP: Asterisk 1.4.21.1 is an older but stable version. Substitute a newer version number if you prefer.
I need to test the performance of an application running on localhost as if it were in an online environment. I mean a performance test with network traffic simulation, limited bandwidth simulation, or other parameters, as if it were online.
Could Apache ab do this kind of simulation?
We've used Charles and Firefox Throttle in the past to simulate slow networks.
Why can't you connect to a different PC, or even use a virtual machine and rate limit the virtual network connection?
Yes, but you will need to connect to your application by its IP address, not "localhost" or 127.0.0.1. For web applications (HTTP) I typically use Fiddler, which can simulate limited bandwidth, but only if you connect as noted. I'm not aware of bandwidth limiters for non-HTTP traffic.
ab won't simulate low-level network errors. If you're on Linux, you can simulate some network conditions with 'tc'. See http://www.kdedevelopers.org/node/1878 for a small example.
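For instance, the netem qdisc can add latency and packet loss to an interface. A minimal sketch driving tc from Python (needs root; the interface and the delay/loss values are placeholders):

    # Minimal sketch: add 100 ms delay and 1% loss to the loopback interface with
    # tc/netem, run the test, then remove the qdisc again. Requires root and iproute2.
    import subprocess

    def tc(*args):
        subprocess.run(["tc", *args], check=True)

    tc("qdisc", "add", "dev", "lo", "root", "netem", "delay", "100ms", "loss", "1%")
    # ... run your performance test against localhost here ...
    tc("qdisc", "del", "dev", "lo", "root", "netem")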
You can set up a local tunnel to expose your localhost to the world using ngrok. From there you can use any number of online performance tools.
You can throttle bandwidth via Chrome DevTools.