We have this architecture with 2 node processes.
One polls a private API and pushes any changes to the second node.
The second node processes the data, calls a bunch of other APIs, and eventually emits a change event to the client, an HTML5 website, via socket.io.
This second node will always process the data and will always emit changes even if no clients are connected, so in my opinion CPU and memory usage are not greatly affected by the number of connected clients. Also note that this architecture is still running in a private staging environment.
Everything ran fine and we were ready to go live, until we noticed that after a couple of days, maybe a week, the second node suddenly gets extremely slow while the first node is still fine.
It gets so bad that even the connection between the two nodes times out, and they are on the same network, talking over localhost. It also takes more than 10 seconds to browse to the socket.io/socket.io.js file.
I know it's very hard to understand the problem without seeing any code, but I'm kind of pulling my hair out because we have to go live in a couple of days, my logs are not revealing anything, and Google isn't helping either.
Have you ever experienced anything like this? What was the problem and how did you fix it?
What's a good monitor and profiler for node.js (preferably free)?
What are good practices for building a node.js app that makes a lot of outgoing API calls?
Anything or anyone that could point me in the right direction toward solving, or even just discovering, the actual problem would be greatly appreciated!
Thank you!
I've never experienced anything like this, but maybe the second node is blocking the event loop by doing CPU-intensive work or waiting for some resource synchronously.
Add some logging to your code to see how much time the second node takes to process each change pushed by the first node. Maybe some type of change consumes the CPU for 10 seconds or so to complete.
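Something as simple as a timer around the change handler would show this; here is a rough sketch, where processChange() and the change.type field are placeholders for your own code and message shape:

```js
// Rough sketch: time each change as it arrives from the first node.
// processChange() and change.type are placeholders for your own code.
function handleChange(change) {
  const start = Date.now();
  processChange(change);              // your existing processing logic
  const elapsed = Date.now() - start; // ms the event loop was blocked
  if (elapsed > 1000) {
    console.warn('slow change took ' + elapsed + ' ms:', change.type);
  }
}
```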
You should also start monitoring memory, CPU, and network connections. When things slow down, your monitoring will provide some clue as to where the bottleneck is.
For monitoring you can try the following three tools:
nodetime
hummingbird
node-monitor
Also read http://nodetime.com/blog/monitoring-nodejs-application-performance
It sounds like you have a memory leak somewhere in the second node, maybe from creating too many anonymous functions, etc. Do you notice your RAM usage slowly creeping up as it runs?
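A quick way to watch for that is to log process.memoryUsage() on a timer; a sketch (the one-minute interval is arbitrary):

```js
// Log memory usage once a minute; a heapUsed value that climbs steadily over
// hours or days is the classic sign of a leak. The interval is arbitrary.
setInterval(() => {
  const m = process.memoryUsage();
  console.log(
    new Date().toISOString(),
    'rss=' + (m.rss / 1048576).toFixed(1) + 'MB',
    'heapUsed=' + (m.heapUsed / 1048576).toFixed(1) + 'MB'
  );
}, 60 * 1000);
```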
Related
In order to improve my flows, I would like to test a few scenarios in which the flow/App or Integration Node is stopped while a message is still being processed (to test how transactional my flows actually are, depending on different settings). As IIB9 is fast at processing simple requests, I don't have time to shut down the flow quickly enough.
I tried to use the debugger, but that doesn't seem to work; I cannot stop the flow or App while debugging, and shutting down the Integration Node doesn't seem to work well either.
Is there a built-in way to make the broker work really slowly so I have time to shut it down? Or should I just write a really complicated Compute node to keep it occupied for a few seconds?
Any suggestions (also for the latter if that is the best option) are welcome.
A really complicated Compute node will take a lot of CPU. I would prefer making the flow wait for something.
E.g. a flow with an HTTP Request node or SOAP Request node making a call to an external service. Make this external service take a long time to respond, say 120 seconds.
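For instance, a minimal stand-in service in Node.js could look like the sketch below; the port and delay are just placeholders:

```js
// slow-service.js: a deliberately slow HTTP endpoint used only for testing.
// The port (8080) and the 120-second delay are arbitrary placeholders.
const http = require('http');

const DELAY_MS = 120 * 1000;

http.createServer((req, res) => {
  console.log('request received, replying in ' + DELAY_MS / 1000 + 's');
  // Hold the request open so the calling flow stays in flight.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true }));
  }, DELAY_MS);
}).listen(8080, () => console.log('slow test service listening on :8080'));
```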
Across all browsers and devices, I find that random pages, at random times, are very slow to load or don't load at all. The browser gets stuck on 'Waiting for website.com'. I will wait 20 seconds and nothing will happen until I manually refresh the page. As I realise this is very vague, can you suggest a) the most likely issues to look for first, or b) some diagnostic tools I could use to start debugging the issue, so that my hosts/developers can solve it? Here are some results of recent speed tests.
One thing to add is that it seems to get stuck more often on particular pages, namely the pages where users take practice tests. Each time the user clicks 'Next', their selected answer is inserted into the database. My speculation is that it's potentially an issue with the DB itself, or with the process that inserts into the database. It's when clicking 'Next' that the whole website sometimes just dies, as described above.
Results from Google Speed Test
Waterfall image
A wait time of 20 seconds at random times and on random pages could possibly be due to stop-the-world garbage collection, so GC logs are probably a good starting point.
A thread sampler such as Djigger, which a colleague of mine wrote, might also help you figure out what the machine is doing during those 20 seconds.
If that doesn't help, I suggest using a profiler or, better, an APM tool to monitor what's going on in your system. Those tools give you a broader insight into the internals.
You need to run a few page speed tests and look at the waterfall images.
It is very common on shared servers for the server to be too busy to get to your request. 20 seconds would indicate a serious issue with the server.
Another common reason is that the page links to a third-party resource and that resource is too often unavailable.
In your case the culprit is website.com and I assume that is your site.
Use something like webpagetest.org to run the tests.
In the waterfall image below
Dark green is the DNS lookup time.
Orange is the time for the browser to connect to the server.
Green is the wait time for the server to put the resource into the output buffer.
Blue is the time for the server to transmit the resource to the browser.
The problem with the sample waterfall page is that the index page took 4 seconds to be generated or retrieved. Most likely this is a WordPress site with plugins.
I suspect yours may be 20 seconds. But given the randomness, there is also a good possibility that a page resource is stalling.
If it is the index page, then you likely have a poor host and/or one of the other users of the server is hogging the CPU.
Keep running the tests until you see the problem occur.
It will be very obvious where the problem is located.
You can post the waterfall image and send me a message if you have any questions.
Waterfall from webpagetest.org
I'm working on a server push service, kind of like Pusher/PubNub. The most critical part, which handles the client polling, currently uses Node.js and Redis, and it's working just fine, except for one thing: I don't really have a good idea of what's going on in the app.
The whole idea is based around long polling, which means there are A LOT of requests going back and forth, and a lot of checking in Redis to see if there's something new. The problem is that I don't really know how to monitor such a thing.
Let's say there are 10,000 users online on average on a single site. At a polling interval of 5 seconds, that works out to at least 2,000 requests, and thus 2,000 log entries, every second. How do I manage that many logs? Should I use something like Logstash to collect them so I have at least some idea of what's going on in the app?
Would it be a good idea to have all instrumentation turned off, and only enable it with something like kill -s USR2?
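If it helps, here is a minimal Node.js sketch of that idea; the flag name and log format are just placeholders:

```js
// Sketch: toggle verbose instrumentation at runtime with `kill -s USR2 <pid>`.
// The flag name and the log format are placeholders.
let verbose = false;

process.on('SIGUSR2', () => {
  verbose = !verbose;
  console.log('instrumentation ' + (verbose ? 'enabled' : 'disabled'));
});

// Example of a guarded log call inside the polling handler:
function logPoll(userId, hadNewData) {
  if (!verbose) return; // near-zero cost while instrumentation is off
  console.log(JSON.stringify({ ts: Date.now(), userId, hadNewData }));
}
```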
I thought about using the Redis MONITOR command to gather some data, but just running it slows Redis down by about 50%, not to mention the overhead of analyzing the incoming data.
How do people generally handle this? Is there any good book or something about building high-load & high-availability applications?
I have a web application that relies on very "live" data - so it needs an update every 1 second if something has changed.
I was wondering what the pros and cons of the following solutions are.
Solution 1 - Poll A Lot
So every 1 second, I send a request to the server and get back some data. Once I have the data, I wait for 1 second before doing it all again. I would detect client-side if the state had changed and take action appropriately.
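For reference, a minimal browser-side sketch of this approach; the /status endpoint and the handleChange() hook are hypothetical placeholders:

```js
// Solution 1 sketch (browser side): poll once a second and act only when the
// payload has changed. The /status URL and handleChange() are placeholders.
let lastPayload = null;

function handleChange(data) {
  console.log('state changed', data); // stand-in for the real UI update
}

async function pollOnce() {
  try {
    const res = await fetch('/status');
    const payload = await res.text();
    if (payload !== lastPayload) {
      lastPayload = payload;
      handleChange(JSON.parse(payload));
    }
  } catch (err) {
    console.error('poll failed', err);
  } finally {
    setTimeout(pollOnce, 1000); // wait 1 second after each response
  }
}

pollOnce();
```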
Solution 2 - Block A Lot
So I start a request to the server that will time out after 30 seconds. The server keeps an eye on the data by checking it once per second. If the server notices the data has changed, it sends the data back to the client, which takes action appropriately.
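And a rough server-side sketch of this long-poll variant in Node.js, assuming a hypothetical getData() function for reading the live data:

```js
// Solution 2 sketch (Node.js): hold each request open until the data changes
// or 30 seconds pass, checking once per second. getData() is a placeholder.
const http = require('http');

const getData = () => ({ value: 0 }); // placeholder for the real data source

http.createServer((req, res) => {
  const initial = JSON.stringify(getData());
  const started = Date.now();

  const timer = setInterval(() => {
    const current = JSON.stringify(getData());
    const changed = current !== initial;
    if (changed || Date.now() - started >= 30000) {
      clearInterval(timer);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(changed ? current : JSON.stringify({ changed: false }));
    }
  }, 1000);

  req.on('close', () => clearInterval(timer)); // client gave up early
}).listen(3000);
```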
Scenario
Essentially, the data is reasonably small in size, but changes at random intervals based on live events. The thing is, the web UI will be running something in the region of 2,000 instances, so do I have 2,000 requests per second coming from the UI or do I have 2,000 long-running requests that take up to 30 seconds?
Help and advice would be much appreciated, especially if you have worked with AJAX requests under similar volumes.
One common solution for such cases is to use static JSON files. Server-side scripts update them when the data changes, and they are served by a fast, lightweight web server (like nginx). Since the files are static and small, the web server will serve them straight from cache, very quickly.
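As a sketch of the producer side (the file paths and state object are placeholders), writing to a temporary file and renaming keeps the update atomic so clients never see a half-written file:

```js
// Sketch: write the latest state to a static JSON file that nginx (or any
// web server) serves directly. Paths and the state object are placeholders;
// the rename makes the update atomic.
const fs = require('fs');

function publish(state) {
  const tmp = '/var/www/data/state.json.tmp';
  const out = '/var/www/data/state.json';
  fs.writeFileSync(tmp, JSON.stringify(state));
  fs.renameSync(tmp, out);
}

publish({ updatedAt: Date.now(), score: 42 }); // example update
```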
Consider a better architecture. Implementing this kind of messaging system is trivial to do right in something like nodeJS. Message dispatch will be instantaneous, and you won't need to poll for your data on either side.
You don't need to rewrite your whole system: The data producer could simply POST the updates to the nodeJS server instead of writing them to a file, and as a bonus, you don't even need to waste time on disk IO.
If you started without knowing any nodeJS, you could still be done in a couple of hours, because you can just hack up the chat example.
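A rough sketch of that shape, assuming socket.io on the browser side; the port, URL, and event name are placeholders:

```js
// Sketch: the data producer POSTs updates to this Node.js process, which
// pushes them straight to connected browsers via socket.io. The port, the
// /update URL, and the 'update' event name are placeholders.
const http = require('http');

const server = http.createServer(handleUpdate);
const io = require('socket.io')(server); // attach socket.io to the same server

function handleUpdate(req, res) {
  if (req.method === 'POST' && req.url === '/update') {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      io.emit('update', JSON.parse(body)); // fan out to every connected client
      res.end('ok');
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
}

server.listen(3000);
```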
I can't comment yet, but I would agree with geocar. Running live or almost-live web services with just polling will leave you stuck between a rock and a hard place.
You could also look into WebSockets to allow push, as that sounds like a better fit for this than polling every second or holding requests open for 30 seconds.
Good luck!
I have a site that is moving incredibly slowly right now. Both Safari's inspector and Firebug are reporting that most of the load time is due to latency. The actual download is happening in less than a second. There's a lot of database activity in play (though the metrics on that indicate that it's pretty healthy), but what else can cause really high latency? Is it a purely network thing or are there changes I can make to the app to improve the latency numbers?
I'm using YSlow to help identify performance improvements, but on the whole, I don't see it reporting anything that seems crazy unreasonable. Opportunities for improvement, certainly, but nothing that seems like it would cause the huge load times I'm seeing.
Thanks.
UPDATE
Some background and metrics, in case they're useful. This is a CakePHP application and I'm using my UsersController::login action as the benchmark. To get an idea of how much of a factor the application code is, I've printed a stack trace immediately upon entering UsersController::beforeFilter(). Here's the output:
UsersController::beforeFilter() - APP/controllers/users_controller.php, line 13
Controller::startupProcess() - CORE/cake/libs/controller/controller.php, line 522
Dispatcher::_invoke() - CORE/cake/dispatcher.php, line 187
Dispatcher::dispatch() - CORE/cake/dispatcher.php, line 171
[main] - APP/webroot/index.php, line 83
Load times, as shown by Safari's inspector, range from 11.2 seconds to 52.2 seconds. This would seem to point away from the application code and toward something with my host, but maybe I'm completely misinterpreting or oversimplifying this?
If you cannot directly identify a slow component of your application, there are a number of other steps along the way that can certainly slow your site down. Whenever I'm experiencing unusually long waits, I typically start by looking at the local DNS and then at my hosted DNS. Sometimes a cache refresh (on their part, not yours) can cause a lot of delay until their database has caught up.
Otherwise, they might actually have a service outage and your requests are being handled by their secondary or backup server. If everything seems fine in terms of domain resolution, your hosting provider might be experiencing a service outage, which can take a number of different shapes, like serving static content from backups or over-allocating shared resources until everything is running as it should. You can experience a ton of what they call throttling on shared cloud architectures when a box goes down. On the plus side, you don't have a total outage in that case.
One time, and this was just in a shared grid configuration, I had a processor go to hell. The bizarre part was that static content was still being served from a backup, but it was still polling against our database (which was on a different server) and causing our account to be throttled because of over-allocation on the backup. It wasn't our fault, but the host started sending nasty emails about our excessive long polls. Moral of the story: if it's not your application, and it comes out of the blue, somewhere along the line I'll bet you'll find a hardware failure or misconfiguration.
Also, now that I think of it, if you are syndicating outside content (be it server- or browser-side), it might not be in your chain of responsibility at all. If you are serving ads from a subscriber service, for example, they might be having a high-load period or an outage. These are just the steps I would take to narrow down the culprit.
This probably won't be the solution for you, but when I had a dog-slow Safari (and FF too), I simply changed my DNS servers to OpenDNS (208.67.222.222, 208.67.220.220) and all my problems were resolved.