Asterisk 11 on an Amazon EC2 instance to handle 100 concurrent calls

I want to know whether Asterisk 11 on Amazon EC2 would be a good idea for handling more than 100 concurrent calls. If so, which instance type would work well?
I also have a good amount of business logic and application logic running alongside Asterisk.
How would performance be on an EC2 instance? Is it recommended to use an EC2 instance with Asterisk?
Thanks

Amazon EC2 is a bad idea for VoIP.
It sits behind NAT, its timing is not perfect, and per-instance performance is not that high.
100 calls require an instance like c1.xlarge / m1.xlarge / c3.large, i.e. 8+ ECUs.
On a c1.medium, Asterisk can usually handle 50-80 calls, depending on the dialplan and your skill.
Also note that bandwidth on EC2 is VERY costly.
I do not recommend EC2 instances for Asterisk unless you need any of the following:
an on-demand application with a failover setup;
a pay-per-minute, scalable application (for example a planned conference service);
the ability to launch a replacement instance on a crash, and/or the rest of your infrastructure is already on EC2.
In all other cases it is much better to get two dedicated servers and set up failover between them. You will get much more performance for a similar cost.

A successful deployment of Asterisk on Amazon EC2 requires that you open a few critical ports in the instance's security group (EC2's firewall). Without them, Asterisk will not work. The following ports are key to passing RTP packets (for voice) and SIP/IAX2 signaling (for devices, DTMF codes, etc.); a sketch of the equivalent security group rules follows the list:
5060 (UDP) - SIP signaling
4569 (UDP) - IAX2 signaling
10000-20000 (UDP) - RTP media
22 (TCP) - SSH access
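For illustration, here is a rough sketch of opening those ports with the modern AWS CLI against a hypothetical security group; the group ID is a placeholder, so adapt it to your own setup (or do the equivalent in the console):
# Placeholder security group ID -- replace with your own
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 5060 --cidr 0.0.0.0/0          # SIP signaling
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4569 --cidr 0.0.0.0/0          # IAX2 signaling
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 10000-20000 --cidr 0.0.0.0/0   # RTP media
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0            # SSH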
Use Eric Hammond's Ubuntu AMI (Amazon Machine Image), ami-ce44a1a7, and the 1000 Hz AKI (Amazon Kernel Image), aki-9b00e5f2. The AKI is important because it is specifically compiled for VoIP applications such as Asterisk; any AKI other than one set at 1000 Hz will produce undesirable results in voice quality and functionality.
TIP: Asterisk 1.4.21.1 is an older but stable version; substitute a newer version number if you prefer.


How to diagnose AWS port 25 egress block

I'm having trouble diagnosing what appears to be a complete blockage of outbound port 25 connections on AWS EC2.
I'm aware of the port throttling, but I don't think that's the issue, because:
I've been running this mail server for at least 7 years
Although I can't recall for sure, I'm fairly certain that I filled out the form to remove sending limitations ~ 7 years ago
The server only sends a few dozen emails per day
I've been running tcpdump on the interface for a while, and there are no more than a few attempts per hour to send outbound packets to anyone on port 25
I don't have any emails from AWS indicating I've exceeded a quota
(as an aside, the above said, is there a way to tell if AWS has turned on throttling, and/or what is the actual quota?)
I can telnet to port 25 on the AWS private networks (another aside, where does AWS perform the throttling?):
$ telnet 172.31.14.133 25
Trying 172.31.14.133...
Connected to 172.31.14.133.
Escape character is '^]'.
220 <mymailserver>.com ESMTP Postfix
I cannot telnet to the outside world from the mail server, nor from another EC2 instance set up in this VPC for testing purposes, nor from an EC2 server set up in a different VPC. For example, the exact telnet that worked above does not work if I replace the private IP address with the public one (but I can telnet to the public one from the outside world).
The outbound security group rules are Ports all Protocols all 0.0.0.0/0
The network ACL for the VPC, both inbound and outbound, is Type ALL Traffic Protocol ALL Port Range ALL Destination 0.0.0.0/0 ALLOW
Looking at the mail logs, it appears that no outbound SMTP traffic has succeeded since January 28th. I would think even if this were throttling, something would have worked somewhere along the way, and I'm now at a complete loss on how to move forward with diagnosing this.
Update: Per suggestions below, I've gone ahead and requested removal of the limit. We'll see how that goes, but I'm still unconvinced it's the problem.
Additionally, I've turned on CloudWatch logs for the VPC. The server in question has sent 14 packets outbound to port 25 in the last 12 hours, so I really would think it would be below any throttling limit. When I look at the logs, the entries are marked as "REJECT", but still no luck on figuring out what is doing the rejecting. Is there any way to determine what "rule" is causing the reject?
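(For reference, one hedged way to pull just the port-25 REJECT records out of such a flow-log group from the CLI looks roughly like this; the log group name is a placeholder and the space-delimited pattern assumes the default flow log record format.)
$ aws logs filter-log-events --log-group-name vpc-flow-logs --filter-pattern '[version, account, eni, source, destination, srcport, destport=25, protocol, packets, bytes, start, end, action=REJECT, status]'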
Any ideas?
TIA!
From Remove the Port 25 Restriction From Your EC2 Instance:
Amazon EC2 restricts traffic on port 25 of all EC2 instances by default, but you can request for this restriction to be removed.
It says that you must:
Create a DNS A record
Request AWS to remove the port 25 restriction on your instance via a Request to Remove Email Sending Limitations form
Alternatively, you could consider using Amazon Simple Email Service (Amazon SES) to send email, rather than sending it directly from the instance.
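A quick way to confirm whether outbound port 25 is open once the request has been granted (or to confirm it is still blocked) is to try a connection from the instance to a public MX host; a sketch, with example hostnames and noting that netcat flags vary slightly between builds:
$ dig +short gmail.com MX
$ nc -vz -w 5 gmail-smtp-in.l.google.com 25   # -z: just test the connection, -v: verbose, -w 5: 5-second timeout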
It seems like something is blocking the traffic on port 25. Please check the following (a sketch of the corresponding commands follows the list):
Check if there are any rules set in the VPC's network ACL to block traffic.
Check if there have been any recent updates to iptables on the OS.
Check for any recent changes to DNS / Route 53.
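Roughly, the first two checks look like this, run from the instance and with the AWS CLI (the VPC ID is a placeholder):
# Look for OUTPUT rules that might be dropping SMTP traffic
$ sudo iptables -L OUTPUT -n -v
# Inspect the network ACLs attached to the VPC (placeholder VPC ID)
$ aws ec2 describe-network-acls --filters Name=vpc-id,Values=vpc-0123456789abcdef0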

Are there differences in networking performance if EC2 instances are in different subnets?

In an AWS VPC, is there any difference in networking performance (or in risk of networking failure) between EC2 instances in these two scenarios?
A) An EC2 instance in subnet Data communicating with an EC2 instance in subnet Apps. Both subnets belong to the same availability zone.
B) Two EC2 instances in a single subnet Data+apps.
I'm asking because even when Windows route print shows On-link for the subnet and a route to a gateway for other subnets in the VPC, I suspect that in the background there are no real differences in the routing that takes place.
There is no difference in performance or reliability when instances are on the same subnet, or on different subnets within the same availability zone, because of the way the network infrastructure actually works in VPC.
This is because the network is not really an Ethernet network with routers.
The entire network infrastructure in VPC is virtual, software-defined.
The best way to see this for yourself is to sniff packets on two machines on the same subnet. You'll find a significant difference in the behavior of ARP. On a "normal" network, machines on the same subnet are also in the same broadcast domain. They discover each other's hardware address by "arping" for each other, with "who has/tell" and "I have" messages. These are conspicuously different on VPC: machine A sends a request and gets a response... yet machine B never saw that request, and did not actually generate the response that it seems to have generated. There's also a conspicuous absence of stray incoming ARP messages you would find on a LAN. This layer 2 behavior is entirely emulated by the network infrastructure, which actually connects nodes together over a routed network using a numbering scheme that's entirely different from and unrelated to the numbering your topology uses, but simulates an Ethernet network very effectively.
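If you want to try this yourself, a capture limited to ARP traffic is enough to show the difference; something along these lines on both instances (the interface name may differ, e.g. eth0 vs ens5):
$ sudo tcpdump -n -e -i eth0 arp   # run on both instances, then ping one from the other
# On the pinging instance you will see the who-has request and the reply;
# on the other instance the request never appears, and there is none of the
# stray broadcast ARP chatter a physical LAN normally shows.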
See also A Day in the Life of a Billion Packets for an excellent overview of how the magic comes together in VPC networking.
No. Keeping the instances in different subnets within the same region or availability zone doesn't affect performance. However, the EC2 instance type and its provisioned resources do play a role, as a lower-capability instance may introduce some latency in its responses.

How to handle 4000 SIP users and 10000 calls with the same IP?

Which technology should I use to handle 4000 SIP users and 10000 calls behind the same IP, with billing? I want to configure it so that all the SIP users use the same IP address and billing is handled properly.
High load is not something that can be set up easily by reading a one-page answer, or even a single book.
It takes years of experience to understand the issues that can arise.
From the open-source stack you can use OpenSIPS/Kamailio, together with a cluster running one of the open-source billing systems, the 2600hz platform, or custom billing.
In order to handle such a load you should use Kamailio plus a cluster of RTPProxy servers. The following repository contains a set of Ansible playbooks for deploying an active-passive Kamailio cluster with a cluster of load-balanced RTPProxy servers; I think it is a good starting point:
https://github.com/ghrst/Kamailio-HA

ELB IP address change and long living connections

I understand that the IP addresses behind the ELB may change over time; new IP addresses can be added and removed depending on the current traffic pattern.
My question is - how does this work with long living connections, e.g. websocket? Let's say I have persistent websocket connection to the web service behind the ELB. When AWS changes the ELB's IP address I'm currently connected to, replacing it with some other, what will happen? I cannot find a good answer in AWS docs.
Thanks,
Vovan
When AWS changes the ELB's IP address I'm currently connected to, replacing it with some other, what will happen? I cannot find a good answer in AWS docs.
In general there are two situations where the ELB's IP addresses will change:
Underlying ELB Failure
Think of the ELB as a scalable cluster of load balancers, all addressable under a single DNS name, each with its own IP address. If one node dies (e.g. due to an underlying hardware failure), its IP will be removed from the DNS record and replaced with that of a new node.
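You can watch this happen by resolving the ELB's DNS name yourself; the hostname and addresses below are made up for illustration:
$ dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com
203.0.113.10
203.0.113.24
# Re-run after the short TTL expires and the set of addresses can change
# as nodes are added, removed or replaced.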
Clients connected to it at the time of failure will lose their connection and should handle a reconnect. It won't automatically be routed to a 'healthy' part of the ELB.
Traffic Variation
If the ELB is scaled up or down - because of modifications in traffic profile - as mentioned in the forum post linked above, the connections will continue to function for some time, but there is no guarantee of that period (min or max). This is especially notable in cases where the LB is scaled up quickly to meet load ("cliff face" style), as the 'old' ELB nodes may be overwhelmed (or become so) and their ability to process traffic impaired.
Consequently, it does mean developers need to handle reconnections on the client side in both cases.

Redirect Traffic from NIC to Another NIC On Separate Networks While Using Remoting

The project I'm working on is to handle data capture from scan guns (Pocket PC 2003), process this data on a host (Win XP), and then push it into our inventory database on a separate server (Win 2000). This is all driven by the Remoting framework provided by MS and As Good As It Gets (http://gotcf.net). The application is complete enough for a general proof of concept, with both the client and server working properly in the emulator.
All was well until I began testing with actual scan guns. Due to security concerns, the scanners are on a separate network (for clarification, the 10 network) from the server (the 15 network). My development machine has dual NICs connected to both networks and can communicate with each independently. However, I am having issues with my application receiving information from the 10 network using .NET Remoting and then sending information to the server on the 15 network via a third-party app (a combination of ODBC, Btrieve, and OLE).
Is there any way to process information from one network and then update the server on another?
Any suggestions will be greatly appreciated!
Note: I'm not very familiar with networking, so I may be using the wrong names, but the gun IPs start with 10.x.x.x and the server IPs start with 15.x.x.x.
As long as the computer's routing table is properly configured, you shouldn't have to worry about this from your application. As long as you're using the proper IP addresses, the networking stack should take care of delivering things to the right place.
You might want to check the output of "route print" (at least I think that was available on Win XP -- if not, someone else will likely post the correct command for XP soon). Either way, you should see which network destinations are configured for which interfaces. You'll need to make sure that the server's IP on the 15 network is routed via the interface you want (i.e. the lowest-cost matching destination/netmask entry lists your 15 interface).
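For example, something along these lines on the XP box would pin the 15 network to the right interface; the gateway address and interface index here are illustrative and should be taken from your own "route print" output:
C:\> route print
C:\> route add 15.0.0.0 mask 255.0.0.0 15.1.1.1 metric 1 if 2
C:\> rem use "route -p add ..." instead if you want the route to survive a reboot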
The issue seems to stem from both NICs not being set up properly, plus a so-far-unresolved issue with the frameworks I've chosen.
To solve the NIC problem, the easiest solution I found was to clear the default gateway on the 10 network.
The other issue has to do with recreating the remoting objects after they've been destroyed. I currently have to warm-boot the scanner in order to reconnect to the host. To correct this I'm going to contact As Good As It Gets and see what their input is. Damn firewall.
