Using AWS EC2 as application server... very poor performance

I am planning to host a server in several countries (US, Southeast Asia, ...).
I'm testing EC2 (EBS-backed, large size) and getting horrible results.
The server just isn't fast enough: CPU, hard drive, round-trip time.
I am comparing the speed with my home Linux box (dual-core i5, 2GB memory, SATA).
My home server feels about 10 times faster
(comparing compile times of heavy libraries, performing the same DB updates, and so on).
The server application is similar to a web server in what it does: little CPU usage, many DB accesses (MySQL in the EC2 root partition).
Am I missing something obvious? For example, does an EBS-backed EC2 instance take time to stabilize after booting up, or something like that?
Or maybe connecting cross-continent (e.g., from Asia to a US-based EC2 instance) is a no-no in the AWS world?
I hope there are some explanations for why I'm getting such poor performance with a large-size EC2 instance.
I'd also like to ask whether my planned usage of AWS will work at all, or whether I should look at services other than AWS.

If you want to monitor your EC2 instance, consider using Amazon's CloudWatch service. It can monitor your instance's resources, such as CPU utilization, disk I/O, and network traffic (note that memory usage is not among the basic metrics; it requires a custom metric published from inside the instance). Basic monitoring is also included in the AWS free tier.
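For example, a minimal boto3 sketch (the region and instance ID are placeholders) that pulls average CPU utilization for the last hour:

    import boto3
    from datetime import datetime, timedelta

    # Placeholders: substitute your own region and instance ID.
    cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

    stats = cloudwatch.get_metric_statistics(
        Namespace='AWS/EC2',
        MetricName='CPUUtilization',
        Dimensions=[{'Name': 'InstanceId', 'Value': 'i-0123456789abcdef0'}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,  # 5-minute buckets
        Statistics=['Average'],
    )
    for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
        print(point['Timestamp'], round(point['Average'], 1), '%')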
Some users report that after switching from Amazon AWS to Rackspace Cloud, their applications ran faster without any extra expense. You might consider giving Rackspace a test.

Related

Serving up webpage from Amazon EC2 instance

If I'm serving up a website using Apache from an Amazon EC2 instance, does it ever make sense for me to stop the machine? Also, I'm extremely new to EC2, so I'm not entirely sure how EBS works. It looks like Amazon does give me 8GB of storage for free, but am I actually being charged for that storage 24/7? Thanks
If you stop the server, it is down. If you're in a development stage and you want to limit your costs to the bare minimum, then yes, you can stop the server at the end of each day. This is one of the advantages of an EBS-backed instance.
EBS is basically external network-attached storage. For most people, EBS-backed servers are the way to go, since you can easily clone them, stop and start them, etc. You can also make snapshots of an EBS volume, so it's a great way to have low-cost backups of your server.
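As a sketch of that backup approach with boto3 (the volume ID is a placeholder):

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Snapshot an EBS volume; snapshots are incremental, so taking
    # them regularly for the same volume stays cheap.
    snap = ec2.create_snapshot(
        VolumeId='vol-0123456789abcdef0',  # placeholder
        Description='nightly backup of web server root volume',
    )
    print(snap['SnapshotId'])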
As for EBS storage, yes, you pay for it, but it is relatively inexpensive. The real cost of EC2 ends up being CPU/runtime for the most part, although EBS certainly makes it easy to use up large amounts of storage.
does it ever make sense for me to stop the machine?
For a production machine, no. I have never had to stop prod machines in the last couple of years. We launch new machines from our AMI when required and kill them when not needed.
However, for load testing or some research work in a clustered environment, we did have to pause machines for a while. We use the stop feature at such times.
...I'm not entirely sure how EBS works.
Quoting from the official doc:
Amazon EBS volumes are off-instance storage that persists independently from the life of an instance. Amazon Elastic Block Store provides highly available, highly reliable storage volumes that can be attached to a running Amazon EC2 instance and exposed as a device within the instance.
So, in highly simplified terms, it's like an external HDD or NAS.
It looks like Amazon does give me 8GB of storage for free, but am I actually being charged for that storage 24/7?
If you're paying for the instance, it should include the cost of the storage that AWS provides. For a given instance type, EBS-backed instances cost more than instance-store ones, so I guess that covers the EBS cost; but it's their pricing policy, so I can't really comment.
Side Note
Being network storage, EBS-backed images have their pros and cons. The biggest benefit is that if the instance ever crashes, your root device does not vanish (please make sure you have checked 'do not delete root device on termination' while creating the instance). It comes in handy in times of hardware failure or accidental termination.
However, being on the network, it has all the issues that any networked device can have. For applications that have really fast and excessive I/O (like Cassandra), EBS seemed to be a bad idea.
If you have a free instance (it has to be a micro instance running their brand of EC2 Linux, not, say, CentOS), then there is no reason to turn it off.
If you are paying per hour, then yeah, it makes sense to shut down when not in use.
If you need more computing power and have a bigger instance, you could run the instance at a higher CPU rate (a more expensive instance type) for the hours when the site is going to be accessed a lot, and after that change the instance type back down to something smaller. Just don't mess with the volumes.
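A sketch of that resize flow with boto3 (the instance ID and target type are placeholders; the instance must be EBS-backed and stopped before its type can be changed):

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')
    INSTANCE = 'i-0123456789abcdef0'  # placeholder

    # Stop, wait, switch type, start again. The EBS volumes stay attached.
    ec2.stop_instances(InstanceIds=[INSTANCE])
    ec2.get_waiter('instance_stopped').wait(InstanceIds=[INSTANCE])

    ec2.modify_instance_attribute(InstanceId=INSTANCE,
                                  InstanceType={'Value': 'm1.large'})

    ec2.start_instances(InstanceIds=[INSTANCE])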
If you don't want to be offline for the couple of minutes the switch takes, you could set up a free (micro) instance and assign the Elastic IP to that instance, or even redirect to a static web page on S3.
Example: redirect to an S3 static page with a "Maintenance in progress" message displayed.
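For the failover step, a sketch of moving an Elastic IP between instances with boto3 (both IDs are placeholders):

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Re-point the Elastic IP at the standby (micro) instance.
    ec2.associate_address(
        InstanceId='i-0standby0000000000',          # placeholder
        AllocationId='eipalloc-0123456789abcdef0',  # placeholder
        AllowReassociation=True,
    )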
Also, watch out when stopping/terminating your instance. I don't know if this is the case only on Windows instances, but after starting the instance again, my non-root drives (volumes for non-system partitions) went offline (when checking "Volumes" they showed as attached), so I had to mount them again.

Scaling Tigase XMPP server on Amazon EC2

Does anyone have experience running clustered Tigase XMPP servers on Amazon's EC2? Primarily I wish to know about anything that might trip me up that is non-obvious. (For example, apparently running Ejabberd on EC2 can cause issues due to Mnesia.)
Or if you have any general advice on installing and running Tigase on Ubuntu.
Extra information:
The system I’m developing uses XMPP just to communicate (in near real-time) between a mobile app and the server(s).
The number of users will initially be small, but hopefully will grow. This is why the system needs to be scalable. Presumably for just a few thousand users you wouldn't need a cc1.4xlarge EC2 instance? (Otherwise this is going to be very expensive to run!)
I plan on using a MySQL database hosted in Amazon RDS for the XMPP server database.
I also plan on creating an external XMPP component written in Python, using SleekXMPP. It will be this external component that does all the 'work' of the server, as the application I'm making is quite different from instant messaging. For this part I have not worked out how to connect an external XMPP component written in Python to a Tigase server. The documentation seems to suggest that components are written specifically for Tigase, and not for a general XMPP server using XEP-0114: Jabber Component Protocol, as I expected.
With this extra information, if you can think of anything else I should know about I’d be glad to know.
Thank you :)
I have lots of experience, and I think there are loads of non-obvious problems. For one, the only instance type that reliably runs an application like Tigase is cc1.4xlarge. The others cause problems with CPU availability, and it is just a lottery whether you are lucky enough to run your service on a server that is not busy with other people's work.
You also need an instance with the highest possible I/O to make sure it can cope with the network traffic. The high I/O requirement applies especially to the database instance.
Not sure if this is obvious or not, but there is a problem with hostnames on EC2: every time you start an instance, the hostname and the IP address change. A Tigase cluster is quite sensitive to hostnames. There is a way to force/change the hostname for the instance, so this might be a way around the problem.
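As a rough sketch of that workaround (assuming an Ubuntu-style Linux instance; the node name is hypothetical, and the script must run as root from an init script so every boot reasserts it):

    import subprocess

    # Hypothetical stable name assigned to this cluster node, instead
    # of the EC2-generated hostname that changes on every start.
    STABLE_HOSTNAME = 'tigase-node-1.example.com'

    # Set the hostname for the running system and persist it.
    subprocess.check_call(['hostname', STABLE_HOSTNAME])
    with open('/etc/hostname', 'w') as f:
        f.write(STABLE_HOSTNAME + '\n')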
Of course, I am talking about a cluster for millions of online users and really high traffic: 100k XMPP packets per second or more. Generally, for a large installation it is way cheaper and more efficient to have dedicated servers.
Generally Tigase runs very well on Amazon EC2, but you really need the latest SVN code, as it has lots of optimizations added especially after tests on the cloud. If you provide some more details about your service, I may have some more suggestions.
More comments:
When it comes to costs, a dedicated server is always the cheaper option for a constantly running service. Unless you plan to switch servers on/off on an hourly basis, I would recommend going for a dedicated service. Costs are lower and performance is far more predictable.
However, if you really want/need to stick to Amazon EC2, let me give you some concrete numbers. Below is a list of setups and how many online users the cluster was able to reliably handle:
5*cc1.4xlarge - 1,700k online users
1*c1.xlarge - 118k online users
2*c1.xlarge - 127k online users
2*m2.4xlarge (with 5GB RAM for Tigase) - 236k online users
2*m2.4xlarge (with 20GB RAM for Tigase) - 315k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 400k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 312k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 327k online users
5*m2.4xlarge (with 60GB RAM for Tigase) - 280k online users
A few more comments:
Why does the amount of memory matter so much? Because CPU power is very unreliable and inconsistent on all but cc1.4xlarge instances. You have 8 virtual CPUs, but if you look at the top command you often see one CPU working and the rest idle. This insufficient CPU power leads to internal queues growing in Tigase. When the CPU power comes back, Tigase can process the waiting packets. The more memory Tigase has, the more packets can be queued, and the better it handles CPU deficiencies.
Why is 5*m2.4xlarge listed four times? Because I repeated the tests many times, on different days and at different times of day. As you can see, depending on the time and date the system could handle a different load. I guess this is because the Tigase instances shared CPU power with some other services; when those were busy, Tigase suffered from a CPU shortage.
That said, I think with an installation of up to 10k online users you should be fine. However, other factors like roster size matter greatly, as they affect traffic and load. Also, if you have other elements that generate significant traffic, this will put load on your system.
In any case, without some tests it is impossible to tell how your system really behaves or whether it can handle the load.
And for the last question, regarding the component:
Of course Tigase supports XEP-0114 and XEP-0225 for connecting external components, so components written in other languages should not be a problem. That said, I recommend using Tigase's API for writing components. They can be deployed either as internal Tigase components or as external components, and this is transparent to the developer; you do not have to worry about it at development time. This is part of the API and framework.
Also, you can use all the goodies from the Tigase framework: scripting capabilities, monitoring, statistics, and much easier development, as you can easily deploy your code as an internal component for tests.
You really do not have to worry about any XMPP-specific stuff; you just fill in the body of the processPacket(...) method and that's it.
There should be enough online documentation for all of this on the Tigase website.
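For the SleekXMPP route mentioned in the question, a minimal XEP-0114 component sketch might look like this (the JID, secret, host, and port are placeholders and must match the external component configured on the Tigase side):

    from sleekxmpp.componentxmpp import ComponentXMPP

    class WorkerComponent(ComponentXMPP):
        """Hypothetical external component that echoes messages back."""

        def __init__(self, jid, secret, server, port):
            ComponentXMPP.__init__(self, jid, secret, server, port)
            self.add_event_handler('message', self.on_message)

        def on_message(self, msg):
            # Replace this with the real 'work' of the server.
            if msg['type'] in ('chat', 'normal'):
                msg.reply('Received: %s' % msg['body']).send()

    # All connection details are placeholders.
    xmpp = WorkerComponent('worker.example.com', 'secret',
                           'tigase.example.com', 5270)
    if xmpp.connect():
        xmpp.process(block=True)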
Also, I would suggest reading about Python's support for multi-threading and how it behaves under very high load. It used to be not so great.

Which Amazon EC2 Instance to use?

I want to set up a proxy (using Squid) that will take ~5000 requests per day on an Amazon EC2 instance. Will there be a noticeable difference in speed between a micro and a small instance?
The requests are for HTML, not for any media like images or videos.
The micro instance's main issue is spotty CPU and disk I/O performance, but that amount of traffic is actually quite small, averaging about 3 requests a minute. As a bonus, a micro instance will qualify for Amazon's free tier. For optimal performance, make sure the machine is stripped of unnecessary services.
I would advise you to start with an EBS-backed (not instance-store) micro instance.
Later you will be able to convert the micro to a small.
Besides, if you are planning to use the instance for a long time (more than one year), think about purchasing a reserved instance (you will save some money). But before you reserve an instance, you should know for sure which is better for you: micro or small.
Good luck!
A micro instance should be able to handle that kind of traffic as long as you don't have too many other things installed on it. Since you are only charged for how long an instance is running, I would take both a micro and a small instance and run some tests to see if the micro can handle your traffic or if the small instance gives you better performance and is worth the extra cash.
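A quick way to compare the two is to time a batch of requests against each instance; a minimal sketch using only the standard library (the URL is a placeholder):

    import time
    import urllib.request

    URL = 'http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com/'  # placeholder
    N = 100

    latencies = []
    for _ in range(N):
        start = time.time()
        urllib.request.urlopen(URL, timeout=10).read()
        latencies.append(time.time() - start)

    latencies.sort()
    print('median %.3fs  p95 %.3fs'
          % (latencies[N // 2], latencies[int(N * 0.95)]))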

Amazon EC2 Capacity & Workflow Questions

I'm hoping some of you with experience using Amazon EC2 could offer some advice. Of course it'll be subjective, which is fine; I'm pretty sure your guesstimate would be better than mine.
I am planning on moving all my clients' websites from shared hosting environments to Amazon EC2. They're all pretty low-traffic sites (the busiest site receives around 50 unique visitors a day). There are about 8 sites, but I may expand this as I take on more projects and host more sites; current capacity planning is for, say, 12 sites.
Each site runs on ASP.Net (Umbraco CMS), and requires a SQL Server database.
My thoughts are one of the following:
Set up a Small instance (1.7GB RAM, 1 EC2 Compute Unit), and run IIS and SQL Server Express on that server.
Set up 2 Micro instances (613MB RAM each, up to 2 EC2 Compute Units): one for IIS, the other for SQL Server.
Which arrangement do you think would work best for my requirements? I've started setting up a Micro instance with Server 2008, SQL Server Express, etc., and found it not coping with the memory requirements, hence considering expanding. I could always configure on a Small instance, then export the AMI and fire it up in a Micro instance afterwards, and do the same every time any serious changes to the server are required. I guess I could even do all updates etc. on a spare Small spot instance, then load that AMI up in a Micro and transfer the IP address across, so I don't need to do too much work on the production servers. I figure if I store all my website data files on EBS volumes, then it should be fairly easy to move hosting between servers with minimal downtime, while never working on a production server.
I'm interested to know what you all think, and what strategies you employ for such activities as upgrades, Windows updates, software installations, etc.
And what capacity do you think I'd need for my requirements?
Cheers
Greg
Well, first up, Server 2008 doesn't play well in the 613MB of RAM the Micro instance gives you. It runs, but it's a dog, and it barks louder the more services (IIS, SSE, etc.) you layer on top. We use nothing smaller than a Small for Server 2008, and in fact typically do the environment config on a Medium and scale down to a Small once the heavy lifting is complete and the OS is ready to use. Server 2003, however, seems to breathe easier on a Micro, but we still do the config on a larger instance and scale down.
We're running low-traffic websites on Server 2003/IIS6 in a Micro, with a Server 2008/SS install on a shared, separate, Small instance. We do also have one Server 2008/IIS7 Micro build running, but only to remind ourselves why we don't use it more widely. ;)
Larger websites run Server 2008/IIS7 in either Small or Medium instances, but almost always still using that shared separate SS instance for database services. We try not to deploy multiple SS installations, since it makes maintenance and backups more complex.
Stashing content and config on EBS volumes is of course good practice, unless you like rebuilding the entire system whenever an instance disappears. Snapshotting your instances periodically is also good practice, since you can spin up a new instance from a baseline AMI and swap the snapshot in as a boot volume for fast recovery in the event of disaster.
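A sketch of that recovery path with boto3 (all IDs, the zone, and the device name are placeholders): recreate a volume from the snapshot, then attach it to the replacement instance:

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')

    # Recreate a volume from the latest snapshot, in the same AZ
    # as the replacement instance (all IDs are placeholders).
    vol = ec2.create_volume(SnapshotId='snap-0123456789abcdef0',
                            AvailabilityZone='us-east-1a')
    ec2.get_waiter('volume_available').wait(VolumeIds=[vol['VolumeId']])

    ec2.attach_volume(VolumeId=vol['VolumeId'],
                      InstanceId='i-0replacement000000',
                      Device='/dev/sdf')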

Amazon EC2 consideration - redundancy and elastic IPs

I've been tasked with determining if Amazon EC2 is something we should move our ecommerce site to. We currently use Amazon S3 for a lot of images and files. The cost would go up by about $20/mo for our host costs, but we could sell our server for a few thousand dollars. This all came up because right now there are no procedures in place if something happened to our server.
How reliable is Amazon EC2? Is the redundancy good? I don't see anything about this in the FAQ, and it's a problem on our current system that I'm looking to solve.
Are Elastic IPs beneficial? It sounds like you could point DNS at that IP and then, on Amazon's end, reroute the IP address to any EC2 instance, so you could easily get another instance up and running if the first one failed.
I'm aware of scalability, it's the redundancy and reliability that I'm asking about.
At work, we've had something like 20-40 instances running at all times for over a year. I think we've had 1-3 alert emails from Amazon suggesting that we terminate an instance and boot another (presumably because they detected possible failure in the underlying hardware). We've never had an instance go down suddenly, which seems rather good.
Elastic IPs are amazing and are part of the solution. The other part is being able to rapidly bring up new instances. I've learned that you shouldn't care about instances going down; it's more important to use proper load balancing and be able to bring up commodity instances quickly.
Yes, it's very good. If you aren't able to put together concurrent redundancy (where you have multiple servers fulfilling requests simultaneously), using the Elastic IP to quickly redirect to another EC2 instance would be a way to minimize downtime.
Yeah, I think moving from an in-house server to Amazon will definitely make a lot of sense economically. EBS-backed instances ensure that even if the machine gets rebooted, the data on the volume is not lost. And if you have a clear separation between your application and data layers and can put them on different machines, then you can build even better redundancy for your data.
For example, if you use MySQL, you can consider using the Amazon RDS service, which gives you a highly available and reliable MySQL instance, fully managed (patches and all). The application layer can then be made more resilient by having more, smaller instances rather than one larger instance, behind load balancing.
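As a sketch of putting those smaller app instances behind a (classic) Elastic Load Balancer with boto3 (the load balancer name and instance IDs are placeholders):

    import boto3

    elb = boto3.client('elb', region_name='us-east-1')

    # Register the app instances with the load balancer; its health
    # checks stop routing traffic to an instance that fails.
    elb.register_instances_with_load_balancer(
        LoadBalancerName='my-app-lb',  # placeholder
        Instances=[{'InstanceId': 'i-0aaa111122223333a'},
                   {'InstanceId': 'i-0bbb444455556666b'}],
    )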
The costs you will save on are really hardware maintenance and what you would otherwise have to spend to build in disaster recovery.
