Interesting questions related to lighttpd on Amazon EC2 - amazon-ec2

This problem appeared today and I have no idea what is going on. Please share your ideas.
I have 1 EC2 DB server (MySQL + NFS file sharing + Memcached).
And I have 3 EC2 web servers (lighttpd) which mount the NFS folders exported by the DB server.
Everything had been going smoothly for months, but suddenly there is an interesting phenomenon.
Every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything goes back to normal. Static files like .html are unaffected. All servers have the same problem at exactly the same time.
I have spent a whole day analysing the cause. Finally, I found that when the problem appears, the number of file descriptors held by lighttpd suddenly increases a lot.
I used ls /proc/1234/fd | wc -l to check the number of open file descriptors.
The fd count is around 250 in normal operation; however, when the problem appears, it rises to about 1500 and then drops back to normal.
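A rough sketch of the kind of loop that could log the fd count over time and line it up with the 8-10 minute cycle (assuming the lighttpd PID is 1234, as above):
# append a timestamped fd count for lighttpd (PID 1234) every 10 seconds
while true; do
  echo "$(date '+%H:%M:%S') $(ls /proc/1234/fd | wc -l)" >> /tmp/lighttpd_fd.log
  sleep 10
done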
It sounds funny, right? Do you have any idea what's going on?
========================
The CPU graph of one of the web servers: http://pencake.images.s3.amazonaws.com/4be1055884133.jpg

Thoughts:
Have a look at dmesg output.
The number of file descriptors jumping up sounds to me like something is blocking, including the processing of connections to lighttpd/PHP, which builds up until the blocking condition ends.
When you say the PHP file is unreachable, do you mean the file is missing? Or does the PHP script stall during execution? What do the lighttpd log files say is happening on the calls to this PHP script? Are there any other hints in the lighttpd logs?
What is the maximum number of file descriptors allowed for the process/user?
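A quick sketch for checking those limits (substitute the actual lighttpd PID; the config path is a guess):
# per-process limit for the running lighttpd (replace 1234 with its PID)
grep 'Max open files' /proc/1234/limits
# per-user/shell limit
ulimit -n
# lighttpd's own cap, if one is set in the config (path assumed)
grep -i 'max-fds' /etc/lighttpd/lighttpd.conf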
I and others have had bizarre networking behavior on EC2 instances from time to time. Give us more details on it. Maybe setup some additional monitoring of the connectivity between your instances. Consider moving your problem instance to another instance in the hopes of the problem magically disappearing. (Shot in the dark.)
And finally...
DOS attack? I doubt it; you would either be offline or not. It is way too early in the debugging process for you to infer malice on someone else's part.

Related

Stopping js ajax call from a specific user

I have done something silly and written a script for a website that does an ajax check every 2 seconds. In this case it is hitting WordPress's admin-ajax.php file every 2 seconds. This essentially burned up all the CPU power of the server and made every site on the server run really slowly.
After a lot of detective work, I finally found the script and stopped it, so it doesn't happen on new loads of that website. But looking at my Apache log, I can see that it is still running in one browser somewhere.
Is there a way for me to stop that browser from requesting that ajax call, or perhaps block it from my server? Or will I just have to wait until that browser is refreshed or closed?
Try using netstat or something similar over ssh to detect the IP and port of the unknown browser. You could also reboot the server so that it loses the connection.
PS: It's pretty hard to point you in the right direction without any logs or other evidence to go on.
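That said, a rough sketch of the log/netstat approach (the Apache log path and the example IP are assumptions):
# count requests to admin-ajax.php per client IP (log path assumed)
awk '/admin-ajax.php/ {print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
# or look at currently established connections to port 80
netstat -tn | grep ':80 ' | awk '{print $5}' | sort | uniq -c | sort -rn | head
# once the culprit is identified, drop it at the firewall (203.0.113.45 is just an example IP)
iptables -A INPUT -s 203.0.113.45 -j DROP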

Amazon ec2 instance slow

The instance started to fail on MySQL... MySQL stops out of nowhere. I restarted it and it worked again.
But now it has gotten worse. The transfer rate when I download a file from another instance starts at 900 kbps and keeps dropping until it reaches 20 kbps. The same happens for external downloads.
I also tested a zip job, zipping a big file... it starts quickly, then slows down and settles at about 10 files zipped per second, which is far too slow (another instance gets 1000 files per second).
I also can't reach the hosted websites over HTTP because everything is too slow.
I have already rebooted and done a stop/start. I also made an image and rebuilt it as a new instance, and the problem continues. I also changed the volume used by the instance, and it is still slow.
What should I do?
I'll take a stab (based on the lack of details) that your instance is just too small. It sounds like you are running at least a web server and MySQL on it, and I'll also bet you are trying to use a micro instance; micro instances have notoriously variable performance and really aren't suitable for running a web server and database server with consistent performance (maybe OK for development, but not production, imo).
Try just spinning up a larger instance and see if your problems magically go away; you can test this for a few dollars, and if it does solve the problem you can decide whether to permanently upgrade the instance size.
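Before resizing, it can be worth confirming the throttling theory by watching CPU steal and I/O wait for a while (a sketch; the sysstat package is assumed for iostat):
# the 'st' (steal) column shows CPU the hypervisor is withholding from the instance
vmstat 5
# extended per-device I/O stats, useful for spotting volume latency
iostat -x 5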

Propel and Persistent Connections

I'm having issues with a large number of concurrent connections to an Amazon RDS database using Propel as the ORM with PHP. The application runs fine during load testing with 20 to 50 connections open at a time, then seems to hit a wall, mushrooms up to the maximum number of connections almost immediately, and everything dies.
I believe Propel is using mysql_pconnect, but I can't find where it designates that, or a simple way to turn it off. I may be chasing a red herring here, but I'm stumped, and there are enough comments on the net regarding pconnect causing problems with too many connections that I thought it would be worth a shot to remove it.
Anyone know how to do this? I have been searching using various phrases but can't seem to find anything.
As it turns out, the error was being caused by the RDS redo log, which is the same size for every RDS instance size. On the larger instance sizes, it's possible to fill the redo log and wrap back to the beginning before the data has been written out to the database. At that point it does the 'furiously flushing' thing to get caught up, does not process any new requests, and they pile up like crazy. This eventually caused our app to crash. More, smaller RDS servers fixed the issue, though I'm not very happy with Amazon over this; they need to let you change the size of the redo logs.

Node.js suddenly getting extremely slow

We have this architecture with 2 node processes.
One polls a private API and pushes any changes to the second node.
The second node processes the data, calls a bunch of other APIs, and eventually emits a change event to the client, an HTML5 website, via socket.io.
This second node always processes the data and always emits changes, even if no clients are connected, so in my opinion CPU or memory usage is not greatly affected by the number of connected clients. Also note that this architecture is still running in a private staging environment.
Everything ran fine and we were ready to go live, until after a couple of days, maybe a week, we noticed the second node suddenly getting extremely slow while the first node was still fine.
It gets so bad that even the connection between the two nodes times out, and they are on the same machine, talking over localhost. It also takes more than 10 seconds just to fetch the socket.io/socket.io.js file.
I know it's very hard to understand the problem without seeing any code, but I'm kind of pulling my hair out because we have to go live in a couple of days, my logs aren't revealing anything, and Google isn't helping either.
Have you ever experienced anything like this? What was the problem and how did you fix it?
What's a good monitoring and profiling tool for node.js? (Preferably free.)
What are good practices for building a node.js app that makes a lot of outgoing API calls?
Anything or anyone that could help me in the right direction of solving or even discovering the actual problem will be greatly appreciated!
Thank you!
Never experienced anything like this, but maybe the second node is blocking the event loop by doing CPU-intensive work or waiting synchronously for some resource.
Add some logging in your code to see how much time the second node takes to process each change pushed by the first node. Maybe some type of change consumes CPU for 10 seconds or so to complete.
You should also start monitoring memory, CPU and network connections. When things slow down, your monitoring will provide some clue as to where the bottleneck is.
For monitoring you can try the following 3 tools:
nodetime
hummingbird
node-monitor
Also read http://nodetime.com/blog/monitoring-nodejs-application-performance
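Beyond those tools, even a crude shell loop can show whether CPU, memory, or the connection count climbs before the slowdown hits (a sketch; the pgrep pattern is a placeholder for however you identify the second node process):
# log CPU%, resident memory and established TCP connections every 30 seconds
PID=$(pgrep -f 'node' | head -n1)   # adjust the pattern to match the second node process
while true; do
  STATS=$(ps -o %cpu=,rss= -p "$PID")
  CONNS=$(netstat -tn 2>/dev/null | grep -c ESTABLISHED)
  echo "$(date '+%F %T') cpu/rss: $STATS conns: $CONNS" >> /tmp/node_monitor.log
  sleep 30
done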
It sounds like you have a memory leak somewhere in the second node, maybe from calling too many anonymous functions etc... do you notice your RAM usage slightly creeping up as it runs?

Causes of high network latency

I have a site that is moving incredibly slowly right now. Both Safari's inspector and Firebug are reporting that most of the load time is due to latency. The actual download is happening in less than a second. There's a lot of database activity in play (though the metrics on that indicate that it's pretty healthy), but what else can cause really high latency? Is it a purely network thing or are there changes I can make to the app to improve the latency numbers?
I'm using YSlow to help identify performance improvements, but on the whole, I don't see it reporting anything that seems crazy unreasonable. Opportunities for improvement, certainly, but nothing that seems like it would cause the huge load times I'm seeing.
Thanks.
UPDATE
Some background and metrics, in case it's useful. This is a CakePHP application and I'm using my UsersController::login action as the benchmark. For the sake of identifying how much of a factor the application code plays in this, I've printed a stacktrace immediately upon entering UsersController::beforeFilter(). Here's output:
UsersController::beforeFilter() - APP/controllers/users_controller.php, line 13
Controller::startupProcess() - CORE/cake/libs/controller/controller.php, line 522
Dispatcher::_invoke() - CORE/cake/dispatcher.php, line 187
Dispatcher::dispatch() - CORE/cake/dispatcher.php, line 171
[main] - APP/webroot/index.php, line 83
Load times, as shown by Safari's inspector, range from 11.2 seconds to 52.2 seconds. This would seem to point me away from the application code and toward something with my host, but maybe I'm completely misinterpreting or oversimplifying this?
If you cannot directly identify a slow-moving component of your application, there are a number of other steps along the way that can certainly slow your site down. Whenever I'm experiencing unusually long request times, I typically start by looking at the local DNS and then at my hosted DNS. Sometimes a cache refresh (on their part, not yours) can cause a lot of slow lookups until their database has caught up.
Else, they might actually have a service outage and your requests are being made to their secondary or backup server. If everything seems fine in terms of domain resolution, your hosting provider might be experiencing a service outage that can take a number of different shapes like serving static content from their backups or over-allocating shared resources until everything is running as it should. You can experience a ton of what they call throttling on shared cloud architectures when they have a box go down. On the plus side, you don't have a total outage in this circumstance.
One time, and this was just in a shared grid configuration, I had a processor go to hell. The bizarre part of it was that static content was still serving from a backup, but it was still polling against our database (which was on a different server) and causing our account to throttle because of over allocation on the backup. Wasn't our fault, but the host started sending nasty emails about our excessive long-polls. Moral of the story is, if it's not your application, and it's out of the blue, somewhere along the line I'll bet you'll find some hardware failure or misconfiguration.
Also now that I think of it, if you are syndicating some outside content (be it server or browser side) it might not be in your chain of responsibility altogether. If you are serving ads for example from a subscriber service, they might be having a high-load period or outage. These are just the steps that I would take to narrow down the culprit.
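One quick way to see where the time actually goes (DNS, connect, or waiting on the server) is curl's timing breakdown, roughly like this (the URL is just an example):
# break down request time into DNS lookup, TCP connect, time to first byte, and total
curl -o /dev/null -s -w 'dns: %{time_namelookup}s connect: %{time_connect}s first byte: %{time_starttransfer}s total: %{time_total}s\n' http://yoursite.example.com/users/login
If the name lookup time dominates, DNS is the likely culprit; if the gap between connect and first byte dominates, the server or application is where to look.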
Probably this won't be the solution for you, but when I had a dog-slow Safari (and FF too) I simply changed the DNS servers to OpenDNS (208.67.222.222, 208.67.220.220) and all my problems were resolved.
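For reference, on a Linux box that manages its own resolver configuration, that change is just two lines (skip this if resolv.conf is rewritten by DHCP or similar tooling):
# /etc/resolv.conf
nameserver 208.67.222.222
nameserver 208.67.220.220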
