How to make "too much load" messages - performance

I see these kinds of messages on Twitter and other places... how do they work? Are they based on MySQL connections?

You can measure the response time of each request (or of a given fraction of the requests). If the response time rises above a threshold, you start rejecting requests.
If you measure SQL connections etc., you need to know where your bottlenecks are at a given moment. If your application is limited by bandwidth between your servers, or by something else, counting SQL connections will not help you.
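To make the first suggestion concrete, here is a minimal sketch of response-time-based load shedding written as WSGI middleware. The class name, window size, and threshold are all illustrative assumptions, not anything from the original answer:

```python
import time
from collections import deque

# Hypothetical WSGI middleware: keep a rolling window of recent response
# times and reject new requests once the average exceeds a threshold.
class LoadShedMiddleware:
    def __init__(self, app, window=100, threshold_s=0.5):
        self.app = app
        self.samples = deque(maxlen=window)   # recent response times (seconds)
        self.threshold_s = threshold_s        # shed load above this average

    def __call__(self, environ, start_response):
        if self.samples and sum(self.samples) / len(self.samples) > self.threshold_s:
            start_response('503 Service Unavailable',
                           [('Content-Type', 'text/plain'),
                            ('Retry-After', '30')])
            return [b'Too much load, please try again later.\n']
        start = time.monotonic()
        try:
            return self.app(environ, start_response)
        finally:
            # Measures time until the handler returns, not full streaming time.
            self.samples.append(time.monotonic() - start)
```

Returning 503 with a Retry-After header is the conventional way to say "too much load" to well-behaved clients.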

It might be shown if the database refused a connection or if the system load (read from /proc/loadavg) is too high. It could also be shown if too many users are logged in, but that's only likely for web applications where logged-in users are expensive: certain games, for example, where lots of state information must be kept while someone is logged in.
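For the /proc/loadavg variant, the check can be as small as this sketch; the 2.0-per-core budget is an assumption for illustration:

```python
import os

def system_overloaded(max_load_per_core=2.0):
    # /proc/loadavg starts with the 1-, 5- and 15-minute load averages,
    # e.g. "3.17 2.90 2.54 1/423 12345".
    with open('/proc/loadavg') as f:
        one_minute = float(f.read().split()[0])
    return one_minute > max_load_per_core * os.cpu_count()
```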

Related

What exactly do 'connections' mean in Heroku Postgres?

I am building a website with Django, hosted on Heroku. I think that at peak times about 500-600 users can be using it simultaneously, and I can't figure out which Postgres plan is best.
According to this: https://elements.heroku.com/addons/heroku-postgresql#details
heroku-postgresql:standard-0 has a connection limit of 120, and
heroku-postgresql:standard-2 has a connection limit of 400.
Are 120 connections enough for about 500 users? Or is that entirely irrelevant?
Are 120 connections enough for about 500 users?
This isn't something that can be answered with certainty by anyone other than you, but there's some general understanding that might be helpful here.
In most cases for basic web applications, one user on your website != one connection used for as long as they're using the app. For instance, that user might need a connection while they log in and load their profile, but not while they're simply viewing the content. While they're idling on a page, those database connections can service other users. With sensible defaults and connection pooling, 120 connections should be plenty for 500 concurrent users.
All that being said, it's on the application developer to manage database connections and pooling to ensure that this behavior is enforced. Also, this position only represents an average web app and there are certainly apps out there whose users require longer-lived connections.
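Since the question is about Django, one concrete knob worth knowing is Django's CONN_MAX_AGE setting, which makes each worker hold one reusable connection instead of reconnecting per request. The credentials below are placeholders; the point is that total connections scale with workers, not with users:

```python
# settings.py (sketch; host/credentials are hypothetical placeholders)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'ec2-xx-xx.compute-1.amazonaws.com',
        'PORT': '5432',
        # Reuse each worker's connection for up to 10 minutes instead of
        # opening a new one per request. Connection count then tracks
        # dynos x workers x threads, not the number of users.
        'CONN_MAX_AGE': 600,
    }
}
```

With, say, a handful of dynos each running a few workers, that is a few dozen connections at most, comfortably inside the 120 limit. An external pooler such as PgBouncer is the usual next step if you ever do approach the limit.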

How many live connections do online games keep?

In online multiplayer games where the world around you changes frequently (the user gets updates from the server about that), how many live connections are usually kept open?
For example, WebSockets can be used. Is it effective to send all data through one connection? You will have to check the type of every received message:
if it's info about the world -> make changes to the world around you;
if it's info about user's personal data -> make changes in your profile;
if it's local chat message -> add new message to the chat window.
etc.
I think this if .. else if .. else if .. chain over every incoming message hurts client-side performance a lot. Wouldn't it be better to get world changes from a second WebSocket connection? Then you wouldn't have to check the type every time. The other message types are not so frequent, so the first connection could carry them.
So the question is: how do developers usually handle connection counts and message types to increase performance?
Thanks
It depends on client-side vs. server-side load. You need to balance whether you want to place the load of having more open connections on the server, or the cost of analysing the payload on the client. If you have a simple game and a weak server, I would suggest placing more load on the client. For a high-performance game backed by an excellent server, using more WebSockets would be the recommended approach.
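It's also worth noting that the per-message type check the asker worries about is usually cheap if it's done as a table lookup rather than a long if/elif chain. A common pattern, with handler names made up for illustration:

```python
import json

# Stub handlers for each message type the question lists.
def on_world_update(payload): ...
def on_profile_update(payload): ...
def on_chat_message(payload): ...

HANDLERS = {
    'world':   on_world_update,
    'profile': on_profile_update,
    'chat':    on_chat_message,
}

def on_message(raw):
    # e.g. raw = '{"type": "chat", "data": {"text": "hi"}}'
    msg = json.loads(raw)
    handler = HANDLERS.get(msg['type'])
    if handler is not None:
        handler(msg['data'])   # dict lookup is O(1) however many types exist
```

Dispatching this way scales to many message types over a single connection, which is why most games multiplex one socket rather than opening several.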

Possible explanation of sudden spike in memcached get

Newbie at New Relic here. I have an API service hosted on Heroku and monitored with New Relic.
While I was studying how to use New Relic, I found out my two workers were being underutilised, with very low RPM and low transaction time. So I decided to cut down to one worker, which saves me $36 a month. =]
Shortly after that I received tonnes of Logentries emails reporting request timeouts on one of my web dynos. Looking into New Relic, I found that one of my actions was being called a suspiciously high number of times for 2-3 minutes.
The action is V1::CarsController#Index, which basically shows a collection of cars.
While I am not sure whether deleting the worker dyno caused memcached to do something, I also suspect that someone may be trying to scrape the data off the database. I am not sure how to investigate the issue further. I wonder if I can track down the request IPs and see whether they are the same? Or how else can I investigate?
If further information is needed I am happy to provide it in edits!
Thanks
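One quick way to test the scraper theory: Heroku's router logs include the client address in a fwd="..." field, so counting requests per IP over the spike window shows whether the burst came from a single source. This sketch assumes you've saved the router logs to a file; the path filter is illustrative:

```python
import re
from collections import Counter

counts = Counter()
with open('router.log') as f:                   # e.g. captured via `heroku logs`
    for line in f:
        m = re.search(r'fwd="([^",]+)', line)   # first IP in the fwd chain
        if m and 'cars' in line:                # filter to the hot action
            counts[m.group(1)] += 1

for ip, n in counts.most_common(10):
    print(ip, n)
```

If one or two IPs dominate, it's a scraper (or a misbehaving client); if the requests are spread evenly, the spike is more likely organic or a retry storm from the timeouts themselves.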

BitTorrent protocol 'not available'/'end connection' response?

I like being able to use a torrent app to grab the latest TV show so that I can watch it at my leisure. The problem is that the structure of the protocol tends to cause a lot of incoming noise on my connection for some time after I close the client. Since I also like to play online games, this means that I have to shut off my torrent client about an hour before I want to play (depending on how long the tracker advertises me to the swarm). Otherwise I get a horrible connection to the game because of the persistent flood of incoming torrent requests.
I threw together a small Ruby app to watch the incoming requests so I'd know when the uTP traffic let up:
http://pastebin.com/TbP4TQrK
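The pastebin holds the asker's actual Ruby app; for reference, the same idea in a rough Python sketch is below. The port is an assumption (whatever port your torrent client was listening on):

```python
import socket
import time

# Bind to the port the (now closed) torrent client was using, e.g. 6881,
# and print the incoming UDP packet rate so you can watch it taper off.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', 6881))
sock.settimeout(1.0)

count, window_start = 0, time.monotonic()
while True:
    try:
        sock.recvfrom(2048)
        count += 1
    except socket.timeout:
        pass
    if time.monotonic() - window_start >= 10:
        print(f'{count} packets in the last 10 s')
        count, window_start = 0, time.monotonic()
```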
The thought occurred to me, though, that there may be some response that I could send to notify the clients that I'm no longer participating in the swarm and that they should stop sending requests. I glanced over the protocol specifications but I didn't find anything of the sort. Does anyone more familiar with the protocol know if there's such a response?
Thanks in advance for any advice.
If a bunch of peers on the internet have your IP and think that you're in their swarm, they will try to contact you a few times before giving up. There's nothing you can do about that. Telling them to stop one at a time would probably end up using more bandwidth than just ignoring the UDP packets.
Now, there are a few things you can do to mitigate it though:
Make sure your client sends stopped requests to all its trackers (see the sketch after this list). This is part of the protocol specification and most clients do this. If this is successful, the tracker won't tell anyone about you past that point. But peers remember having seen you, so it doesn't mean nobody will try to connect to you.
Turn off DHT. The DHT acts much like a tracker, except that it doesn't have the stopped message. It will take something like 15-30 minutes for your IP to time out once it's announced to the DHT.
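For reference, the stopped request in the first point is just an ordinary announce to the tracker's HTTP endpoint with event=stopped. Your client sends it for you, but a sketch of what goes over the wire looks roughly like this; the tracker URL, info hash, and peer id are placeholders:

```python
import requests

params = {
    'info_hash': b'\x12\x34' + b'\x00' * 18,   # 20-byte SHA-1 of the torrent's info dict
    'peer_id':   b'-XX0001-abcdefghijkl',      # 20-byte client identifier
    'port':      6881,
    'uploaded':  0,
    'downloaded': 0,
    'left':      0,
    'event':     'stopped',   # tells the tracker to drop you from the swarm
}
requests.get('http://tracker.example.com/announce', params=params)
```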
I think it might also be relevant to ask yourself if these stray incoming 23 byte UDP packets really matter. Presumably you're not flooded by more than a few per second (probably less). Have you made any actual measurements or is it mostly paranoia to wait for them to let up?
I'm assuming you're playing some latency sensitive FPS, in which case the server will most likely blast you with at least 10-50 full MTU packets per second, without any congestion control. I would be surprised if you attract so many bittorrent connection attempts that it would cause any of the game packets to be dropped.

How to find the cause of bad RESTful service performance?

I am creating a service which receives some data from mobile phones and saves it to the database.
The phone sends the data every 250 ms. As I noticed that the delay before data is stored keeps increasing, I ran Wireshark and wrote a log as well.
I noticed that the web requests from the mobile phone are being made without delay (checked with Wireshark), but in the service log the requests arrive only every one and a half to two seconds.
Does anyone know where could be the problem or the way to test and determine the cause of such delay?
I am creating a service with WCF (webHttpBinding) and the database is MS SQL.
By the way, the log stores the time of the HTTP request and also the time of writing the data to the database. As mentioned above, a request is received every 1.5-2 seconds, and after that it takes 50 ms to store the data in the database.
Thanks!
My first guess after reading the question was that maybe you are submitting data so fast that the database server is hitting a write-contention lock (e.g. on AutoNumber fields?).
If your database platform is SQL Server, take a look at http://www.sql-server-performance.com/articles/per/lock_contention_nolock_rowlock_p1.aspx
Anyway, please post more information about the overall architecture of the system: what software/platforms are used in which parts, etc.
Maybe there is some limitation in the connection imposed by the service provider?
What happens if you (for testing) don't write to the database and just log the page hits in the server log with a timestamp?
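That experiment is easy to instrument. The sketch below (plain Python for illustration, though the asker's service is WCF) logs only inter-arrival gaps: if the gaps stay at 1.5-2 s with the database write removed, the delay is upstream of the database; if they drop to ~250 ms, the write path is the suspect:

```python
import time

_last_arrival = None

def log_arrival():
    # Call this at the very top of the request handler, before any work.
    global _last_arrival
    now = time.monotonic()
    if _last_arrival is not None:
        print(f'gap since previous request: {now - _last_arrival:.3f} s')
    _last_arrival = now
```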
Check that you do not have any tracing running on the web services; this can really kill performance.
