SignalR combined with load balancer missing messages - websocket

I have two web servers (IIS 8.5) behind a hardware firewall, and our application uses SignalR for some real-time updates. We are using SQL Server as the backplane to support this load-balanced environment, and sticky sessions on the load balancer to keep users on the same web server during their session. When we run in this hardware configuration we lose at least a third of our messages. Sometimes we get all the expected messages, but more often than not plenty are missing.
When we are running on a single web server all messages are received. Does anyone have any suggestions for troubleshooting this problem? We've turned on logs (both client & server) and nothing looks like it's missing or broken. We're really stumped.
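For reference, the backplane is wired up at startup roughly like this (simplified; the hub setup and connection string are placeholders, not our exact code):

    // OWIN startup: register SQL Server as the SignalR scale-out backplane
    // before mapping the hubs.
    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var backplaneConnectionString = "<SQL Server connection string>";   // placeholder
            GlobalHost.DependencyResolver.UseSqlServer(backplaneConnectionString);
            app.MapSignalR();
        }
    }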
EDIT---
Some additional details that I hope will shed light on the situation.
Server to Client messages are getting lost. Pretty much all our communication is Server to Client.
We are using sticky sessions based only on IP and limited to 5 minutes, but we are losing messages within that 5-minute window.
This is some old SignalR code that has been only minimally touched since SignalR 1 (or even older). We keep an in-memory list of users along with their connections, and we use that list to send notices back to the client. It seems most likely that this is the cause of the trouble, but with sticky sessions the user should be stuck to the same server for at least those 5 minutes, right?
This list maps username to connection id, which is useful when our backend services (on another machine) send a message back with the username rather than the connection id.
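For illustration, the map is roughly this shape (a simplified sketch with assumed names, not the actual code):

    // Per-server map of username -> connection id, populated from the hub's
    // OnConnected/OnDisconnected. Note it only ever sees connections made to
    // THIS server, which matters behind a load balancer.
    using System.Collections.Concurrent;

    public static class UserConnections
    {
        private static readonly ConcurrentDictionary<string, string> Map =
            new ConcurrentDictionary<string, string>();

        public static void Add(string userName, string connectionId)
        {
            Map[userName] = connectionId;
        }

        public static void Remove(string userName)
        {
            string ignored;
            Map.TryRemove(userName, out ignored);
        }

        // Called when a backend service reports back with a username only.
        public static bool TryGetConnection(string userName, out string connectionId)
        {
            return Map.TryGetValue(userName, out connectionId);
        }
    }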

Finally resolved this. There were really two issues. The first is that we were using an in-memory list of users, as mentioned in the edit above. Once we realized that wasn't going to work across machines, we removed it. That also led us to the second issue, which was how SignalR 2 uses the IUserIdProvider: our call should have been
    Clients.User(userId).send(message)

instead of

    context.Clients.Client(connection)
This code had existed since we first started using SignalR many years ago and was never properly updated as we upgraded SignalR versions.
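For anyone who lands here with the same symptom, the shape of the fix was roughly the following (a sketch; the provider, hub, and method names are assumptions, not our actual code):

    // SignalR 2: map each connection to the authenticated username so messages can
    // be addressed by user (and reach the right client via the backplane) instead
    // of by a connection id that only one server knows about.
    using System;
    using Microsoft.AspNet.SignalR;

    public class NotificationsHub : Hub { }   // hub name assumed

    public class NameUserIdProvider : IUserIdProvider
    {
        public string GetUserId(IRequest request)
        {
            return request.User != null ? request.User.Identity.Name : null;
        }
    }

    public static class Notifier
    {
        // Called once at startup (e.g. in Startup.Configuration):
        public static void Register()
        {
            GlobalHost.DependencyResolver.Register(
                typeof(IUserIdProvider), () => new NameUserIdProvider());
        }

        // When a backend service hands us a username, send by user, not by connection id.
        public static void SendToUser(string userName, string message)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
            hub.Clients.User(userName).send(message);
        }
    }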

Have the same machineKey specified in your web.config on both servers.
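In web.config that looks something like the following; the keys here are placeholders, so generate your own pair and use identical values on both servers:

    <system.web>
      <machineKey validationKey="PASTE-A-GENERATED-VALIDATION-KEY"
                  decryptionKey="PASTE-A-GENERATED-DECRYPTION-KEY"
                  validation="HMACSHA256"
                  decryption="AES" />
    </system.web>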

Related

ASP.NET 5 Web API application intermittently unresponsive

We are working on an ASP.NET 5 Web API project that is in production now but we are experiencing an issue where it becomes unresponsive intermittently throughout the day.
A few notes about the application architecture. It is an ASP.NET Web API project using a MariaDB database on a separate EC2 instance within the same private network. The connection string uses the private IP of the database server to avoid any name resolution issues. The site is hosted via IIS 10.
The application itself has been developed carefully following the best practices provided by Microsoft. Heavy focus on async operations, minimizing query response times and offloading more expensive operations into background services.
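To give a concrete picture of what "offloading into background services" means here, the pattern is roughly the following (a simplified sketch assuming ASP.NET Core-style hosting; the type names are made up):

    // Requests enqueue expensive work and return immediately; a hosted service
    // drains the queue off the request path.
    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Hosting;

    public class WorkQueue
    {
        private readonly ConcurrentQueue<Func<CancellationToken, Task>> _items =
            new ConcurrentQueue<Func<CancellationToken, Task>>();

        public void Enqueue(Func<CancellationToken, Task> work) => _items.Enqueue(work);
        public bool TryDequeue(out Func<CancellationToken, Task> work) => _items.TryDequeue(out work);
    }

    public class QueueWorker : BackgroundService
    {
        private readonly WorkQueue _queue;
        public QueueWorker(WorkQueue queue) => _queue = queue;

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                if (_queue.TryDequeue(out var work))
                    await work(stoppingToken);              // the expensive operation runs here
                else
                    await Task.Delay(100, stoppingToken);   // nothing queued, idle briefly
            }
        }
    }

    // Registered in Startup.ConfigureServices:
    //   services.AddSingleton<WorkQueue>();
    //   services.AddHostedService<QueueWorker>();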
The app is normally extremely responsive: sub-100 ms responses on almost all requests, even the more complicated ones, and that level of performance holds right up until it becomes unresponsive. We see between 10 and 30 requests per second and 300-500 SELECT queries per second at peak usage, so nothing too extreme. However, randomly (2-3 times over a 24-hour period) it will begin hanging on requests and simply not respond. During this time the database is still extremely responsive, and we are never over 300 connections out of our 512-connection limit.
The resources on the application server itself are never really taxed much at all. The CPU never gets above ~20% and the memory usage sits around 20-30%.
If I stop the site in IIS and start it again while this is happening, it quickly comes back online; if I don't, it stays down for a few minutes until IIS finally kills it due to a failed health check. There are no real errors generated in response to the issue, other than the typical errors caused by the hanging process, such as connection-terminated errors. The only thing that has given me pause is that there were a few timeouts when getting a connection from the pool, but as I said, the connections to the database are never close to the limit.
Also, this app and version have been in production for months, and it wasn't until the traffic volume started to grow that we started seeing these issues. At this point I am at a loss for next troubleshooting steps and I'm seeking suggestions.
In the IIS App Pool advanced settings, set Start Mode to AlwaysRunning.
I never found a root cause for this issue; however, after updating to a newer version of .NET MVC the issue went away. My best guess is that changes in Kestrel resolved it, although I have no idea which specific change that might have been. I have gone through the change logs a few times and didn't see anything that specifically jumped out at me.

Troubleshooting MVC4 Web API Performance Issues

I have an asp.net mvc4 web api interface that gets about 54k requests a day.
http://myserv.x.com/api/123/getstuff?whatstuff=thisstuff
I have 3 web servers behind a load balancer that are setup to handle the http requests.
On average, response times are ~300 ms. However, lately something has gone awry (or maybe it has always been there): sporadically, responses come back in 10-20 seconds. This happens even for the same request hitting the same server directly instead of through the load balancer.
GIVEN:
- System has been passed down to me, so there may be gaps in the IIS configuration, etc.
- Database: SQL Server 2008R2
- Web Servers: Windows Server 2008R2 Enterprise SP1
- IIS 7.5
- Using MemoryCache aggressively with Model and Business Objects, with eviction set to 2 hrs (roughly the pattern sketched after this list)
- Looked at the logs but really don't see anything significantly relevant
- One application pool...no other LOB applications running on this server
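The caching pattern referenced above looks roughly like this (a sketch; the type, key, and method names are assumptions):

    // System.Runtime.Caching: business objects cached with a 2-hour absolute
    // expiration, reloaded from the database on a miss.
    using System;
    using System.Runtime.Caching;

    public static class StuffCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static object GetOrAdd(string key, Func<object> load)
        {
            var hit = Cache.Get(key);
            if (hit != null)
                return hit;                         // warm path: the fast ~300 ms responses

            var value = load();                     // cold path: rebuild from SQL Server
            Cache.Set(key, value, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(2)   // the 2 hr eviction
            });
            return value;
        }
    }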
Assumptions & Ask:
Somehow I'm thinking that something is recycling the application pool, or that IIS worker threads are shutting down and restarting, causing each new request to warm up and re-cache everything. It's so sporadic that it's tough to troubleshoot right now. The same request to the same server comes back fast as expected (back-to-back N requests), in about 300 ms, since it was cached... but wait about 5, 10, or 20 minutes and that same request to the same server takes 16 seconds.
I have limited tracing to go by as these are production systems, so I can only expose so much logging detail. Any help, or information from anyone who has run into this or similar behavior, is appreciated. Thanks.
UPDATE:
The w3wp.exe process grows to ~3 GB. Somehow it gets wiped out and the PID changes, so either it or something else is killing it every 3-4 minutes. I see tons of warnings in my web server (IIS) log:
    A process serving application pool 'MyApplication' suffered a fatal
    communication error with the Windows Process Activation Service. The
    process id was '1732'. The data field contains the error number.
After 4-5 days of assessing IIS and configuration versus internal code issues, I finally found the problem, with little to no help from the windbg or debugdiag IIS tools. Those tools contain so much information, even with mini dumps or log trace stacks, that they can be red herrings. The best bet was to reproduce it by setting up a "copy intelligently" instance of a production system, which we did not have at the time and which took a bit for ops to set up.
Needless to say, the problem had to do with over-caching business objects. There was one race condition where updates on a certain table were updating an attribute on the corresponding business object (updates were coming from multiple servers), causing an OOC stack overflow that pretty much made the caching recursively cache itself to death, which in turn caused the w3wp.exe process to die and pseudo-recycle itself. It was one of those edge cases that is incredibly hard to test and reproduce in a non-production environment.

Unable to service 20+ connections MVC3 IIS6 w long polling

It seems this is a common question/problem but despite checking out a number of proposed solutions nothing worked for us so far.
The app
It's a simple chat app that puts a new interface on an existing app's JSON library. We proxy all calls to their app to avoid cross-domain restrictions (IE8).
It's an ASP.NET MVC3 app, hosted in IIS6 on Windows Server 2003 SP2. The DEV server has 1 GB RAM; the TEST server has 4 GB.
The problem
When we approach 20 concurrent users, requests start lagging - no issues in Event Viewer to be found. It looks like calls are just queued. There are NO 503's returned.
What we've tried
We're using AsyncController to long-poll a 3rd party webservice for results
Hosted in IIS6
We're using the TPL to call their service in our AsyncController method
We've modified processModel and set maxWorkerThreads=100.
We've looked at this how-to but the HTTP.SYS config looks to service an infinite number of threads so we haven't bothered adding the reg keys.
The 3rd party service can handle lots of concurrent requests (and is in a web farm, so we're fairly confident we're the weakest link)
What are we missing? Any help is greatly appreciated.
Well... almost four weeks later and I thought I'd update this in case anyone wants to know what helped us overcome these limitations (we're cramming around 100 simultaneous connections onto our DEV server, a 1 GB Xeon box).
AsyncControllers
If you've got a potentially long-waiting request (i.e. long polling), use them.
Feel free to use the TaskFactory, but be sure to mark the task as long-running. If there is a risk your thread could throw an exception, be sure to use ContinueWith so you can decrement the outstanding operations and return the error to your caller.
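The shape we ended up with is roughly the following (a sketch with assumed names, not our production code):

    // MVC3 AsyncController long-polling a downstream service. LongRunning keeps the
    // blocking poll off the ASP.NET worker pool; ContinueWith guarantees the
    // outstanding-operation count is decremented even when the task faults.
    using System;
    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class ChatController : AsyncController
    {
        public void MessagesAsync(string channel)
        {
            AsyncManager.OutstandingOperations.Increment();

            Task.Factory.StartNew(
                    () => PollThirdPartyService(channel),       // assumed blocking call to the 3rd party JSON API
                    TaskCreationOptions.LongRunning)
                .ContinueWith(t =>
                {
                    AsyncManager.Parameters["result"] = t.IsFaulted ? null : t.Result;
                    AsyncManager.Parameters["error"] = t.Exception;   // surface any failure to the caller
                    AsyncManager.OutstandingOperations.Decrement();
                });
        }

        public ActionResult MessagesCompleted(object result, Exception error)
        {
            if (error != null)
                return new HttpStatusCodeResult(502);
            return Json(result, JsonRequestBehavior.AllowGet);
        }

        private static object PollThirdPartyService(string channel)
        {
            // Placeholder for the real long-poll against the existing JSON library.
            return null;
        }
    }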
ServicePointManager
If you're making downstream calls (i.e. WebService/3rd party API) then make sure you have increased DefaultConnectionLimit from the default of 2 simultaneous connections.
A rough guide is 8 * Num cores so you don't starve outgoing connection resources.
See this MSDN article on DefaultConnectionLimit for more info.
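For example, set once at application start, following the rough guide above (the exact multiplier is your call):

    // Global.asax Application_Start: allow more than 2 concurrent outbound
    // connections per host so proxied calls aren't serialized.
    System.Net.ServicePointManager.DefaultConnectionLimit = 8 * Environment.ProcessorCount;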
IOCP vs RestSharp
I love RestSharp's API, it's fantastic, but it's probably meant more for client-side programming, not for proxying requests. My mistake! Use HttpWebRequest and the Begin/End methods to make use of I/O completion ports (IOCP).
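A minimal sketch of that Begin/End pattern (the URL and result handling are placeholders):

    // Async proxying with HttpWebRequest: the wait is serviced by I/O completion
    // ports instead of tying up a worker thread for the duration of the call.
    using System;
    using System.IO;
    using System.Net;

    public static class DownstreamProxy
    {
        public static void Fetch(string url, Action<string> onBody)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);

            request.BeginGetResponse(asyncResult =>
            {
                using (var response = (HttpWebResponse)request.EndGetResponse(asyncResult))
                using (var reader = new StreamReader(response.GetResponseStream()))
                {
                    onBody(reader.ReadToEnd());   // hand the downstream JSON back to the caller
                }
            }, null);
        }
    }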
If you're looking to reverse proxy or rewrite URLs, check out URL Rewriter, a great library available freely on CodePlex.
In the end, our issue wasn't with incoming requests; it was with the requests being proxied to the third party. We weren't supplying enough outgoing connections, so they all queued up and lagged the whole system. Happy to say that after lots of reading, investigation, and coding, we've resolved it.

AppFabric velocity as a state server

Is anybody using Windows AppFabric Server for out of process state management?
Any feedback, advice would be appreciated.
We used AppFabric Caching. We tried this and it appeared to work and was easy to set up. There are some very strange settings about persistence when setting up the cache which need to be read carefully.
Our issue was that we installed IIS and AppFabric Caching on two servers and told the app to try the local one first. When we went into production it just started to fail. It appears that with only two servers there is a lead server, and if it goes down things stop working; we read that we needed to scale to 3 or more servers to get the behaviour we wanted. That was not an option when we had just gone live and things weren't working, so we switched to SQL Server for now while we look at NCache, ScaleOut and Memcached.
The other issue is that caching and session state are not the same animal. If you lose your cache it should not be the end of the world; you just put it back together. We need to keep session state for the allotted time period at all costs.

Handle Unstable Internet Connection in Server-Client App

What technology can I use to manage an unstable internet connection in a server-client app? I know mainly PHP (+Zend Framework) and am learning C# & ASP.NET MVC. I have heard WCF/MSMQ is something that can help, but how? Is there something PHP (which I am more familiar with) can do? It would also be good to know a .NET alternative if it's better.
The background:
Clients (plural) will connect to the server DB to do CRUD operations, but if the internet connection fails this will not be possible. So how do I fix this?
The solution used now is to have local databases on each client. At the end of the day all clients upload to the server, and in the morning they download a "consolidated" DB from the server. This is not foolproof, as the upload/download may still fail, and considering the large amounts of data transferred, it actually increases the chances of failure.
UPDATE: Is there a PHP/Zend Framework/MySQL replacement for MSMQ/WCF?
WCF can help, because it supports various technologies for reliable message transfer.
One thing that might help you is to have the clients make their data changes locally, then upload those changes to a reliable message queue. You would not upload all changes in a single transaction. You might upload 10 at a time, possibly one at a time. As the uploaded messages are processed on the server, the server would write the transaction results to another queue, unique to each client. After the upload (or maybe at the same time), the client would check that queue to see what the result of each upload was. If the result was success, then the client can remove their local database. If the result was a failure, then the client should try uploading it again.
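In .NET terms, the upload side of that could look roughly like this (a sketch using MSMQ via System.Messaging; the queue path and message shape are assumptions, and WCF's queued bindings would play a similar role):

    // Each local change is pushed onto a transactional queue one at a time, so a
    // failed transfer only ever affects the message in flight and can be retried.
    using System.Messaging;

    public static class ChangeUploader
    {
        private const string QueuePath = @".\private$\pending-changes";   // assumed queue name

        public static void Enqueue(object change)
        {
            if (!MessageQueue.Exists(QueuePath))
                MessageQueue.Create(QueuePath, transactional: true);

            using (var queue = new MessageQueue(QueuePath))
            {
                queue.Send(change, MessageQueueTransactionType.Single);
            }
        }
    }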
Of course, you should always be careful that your attempts at error recovery don't make things worse. Too much retry traffic on a bad link may very well cause more traffic, which may itself need recovery, etc.
And, of course, the ultimate solution is to move towards links that are more reliable. Not necessarily faster, but just more reliable.
