SignalR Intermittent Failure to call client method - client

I am currently using SignalR 2.0 and MVC 5.0 in my project. All the SignalR updates work fine until I change the web.config file, which causes the application pool to recycle. That is the point at which I start noticing SignalR behaving inconsistently: sometimes I need to perform an action multiple times before the update is propagated to the client. I have enabled SignalR logging, but nothing shows up in the log when a message fails to reach the client.
Do we need to do something in the hub, such as reconnecting, when the app pool is recycled?
At the moment I haven't changed anything in code; I simply refresh the page, but the problem remains. The only thing that fixes the intermittent behavior completely is rebooting the server.
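For illustration, this is the kind of client-side reconnect logic I have in mind; a minimal sketch using the SignalR 2.x .NET client (the URL, hub name, event name, and retry delay are all placeholders; the JavaScript client has an equivalent disconnected callback):

    using System;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR.Client;

    class Program
    {
        static void Main()
        {
            var connection = new HubConnection("http://myserver/"); // placeholder URL
            var hub = connection.CreateHubProxy("UpdateHub");       // placeholder hub name

            // Placeholder handler for the server-to-client update.
            hub.On<string>("updateReceived", msg => Console.WriteLine(msg));

            // If the connection drops (e.g. because the app pool recycled),
            // wait a few seconds and start it again instead of relying on
            // a manual page refresh.
            connection.Closed += () =>
            {
                Task.Delay(TimeSpan.FromSeconds(5))
                    .ContinueWith(_ => connection.Start());
            };

            connection.Start().Wait();
            Console.ReadLine();
        }
    }

Is something like this required, or should the client recover on its own?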
Has anyone experienced this kind of behavior with SignalR?
Thanks in advance.

Related

Ajax error 401 in production, but works locally

I have a .NET Core application that has started presenting a few problems lately. It was working just fine, but recently my Ajax calls have begun throwing a 401 error.
This only happens on the production server; running on localhost, everything works fine. It also appears to happen randomly: the same Ajax call will sometimes throw this error and sometimes it won't.
After digging a lot I noticed a few differences between the headers of those calls when they run locally and when they run on the production server, but I don't know exactly how to interpret and resolve them.
Could you help me? None of these calls go to an external API/resource; they all call the page the user is currently on in the app itself.
I'll add the screenshot of the console showing the difference between the headers. On the left is the one running locally; I used exactly the same data in both tests.
The production server is running IIS 10, if that's relevant.

SignalR combined with load balancer missing messages

I have 2 web servers (IIS 8.5) behind a hardware firewall, and our application uses SignalR for some real-time updates. We are using SQL Server as the backplane to make this work in a load-balanced environment, and sticky sessions on the load balancer to keep users on the same web server for the duration of their session. When we run in this hardware configuration we lose at least a third of our messages. Sometimes we get all the expected messages, but more often than not we are missing plenty.
When we run on a single web server, all messages are received. Does anyone have any suggestions for troubleshooting this problem? We've turned on logging (both client and server) and nothing looks missing or broken. We're really stumped.
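For reference, the backplane is wired up the standard way at startup; roughly like this (the connection string is a placeholder):

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Register the SQL Server backplane before mapping SignalR so
            // messages published on one web server are relayed to the other.
            GlobalHost.DependencyResolver.UseSqlServer(
                "Data Source=<server>;Initial Catalog=SignalRBackplane;Integrated Security=True");

            app.MapSignalR();
        }
    }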
EDIT---
Some additional details that I hope will shed light on the situation.
Server to Client messages are getting lost. Pretty much all our communication is Server to Client.
We are using sticky sessions based only on IP, limited to 5 minutes, but we're losing messages within that 5-minute window.
This is some old SignalR code that has been only minimally touched since SignalR 1 (or even earlier). We keep an in-memory list of users along with their connections, and we use that list to send notices back to the client. It seems most likely that this is the cause of the trouble, but with sticky sessions the user should be stuck to the same server for at least those 5 minutes, right? A sketch of the pattern follows below.
This list maps username to connection id, which is useful because our backend services (on another machine) send messages back with the username, not the connection id.
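The tracking looks roughly like this (simplified; class and member names are not our real ones):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class NotificationHub : Hub
    {
        // Username -> connection id, held in this server's memory only.
        // Behind a load balancer each server has its own copy, so a user
        // registered on server A is invisible to sends issued from server B.
        private static readonly ConcurrentDictionary<string, string> Users =
            new ConcurrentDictionary<string, string>();

        public override Task OnConnected()
        {
            Users[Context.User.Identity.Name] = Context.ConnectionId;
            return base.OnConnected();
        }
    }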
Finally resolved this. There were really two issues. The first was that we were using an in-memory list of users, as mentioned in the edit above. Once we realized that wasn't going to work across machines, we removed it. That also led us to the second issue, which was how SignalR 2 uses the IUserIdProvider: our call should have been
Clients.User(userId).send(message)
instead of
context.Clients.Client(connection)
This code had existed since we first started using SignalR many years ago and was never properly updated as we upgraded SignalR versions. A minimal sketch of the corrected approach is below.
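The fix boils down to a user ID provider plus Clients.User; roughly like this (class and hub names are illustrative, and NotificationHub refers to the sketch above):

    using Microsoft.AspNet.SignalR;

    // Maps each connection to the authenticated user name, so that
    // Clients.User(...) reaches all of a user's connections on whichever
    // server they are attached to, via the backplane, with no in-memory
    // list required.
    public class NameUserIdProvider : IUserIdProvider
    {
        public string GetUserId(IRequest request)
        {
            return request.User == null ? null : request.User.Identity.Name;
        }
    }

Registered once at startup, before MapSignalR, and used when the backend hands us a username:

    GlobalHost.DependencyResolver.Register(
        typeof(IUserIdProvider), () => new NameUserIdProvider());

    // Sending by username from outside a hub:
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
    hubContext.Clients.User(userName).send(message);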
Have the same machineKey specified in your web.config on both servers.
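Something along these lines in system.web on both machines (the key values are placeholders; generate your own):

    <system.web>
      <machineKey validationKey="[generated validation key]"
                  decryptionKey="[generated decryption key]"
                  validation="HMACSHA256"
                  decryption="AES" />
    </system.web>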

Jetty webserver after idle breaks

I have a webapp deployed successfully in Jetty webserver.
The webserver responds to requests fine.
When I access the app it renders the home page.
Recently I noticed that when I don't use the app for a certain period of time, it somehow breaks. The period is somewhere around 2-3 weeks.
When I access the webapp after 2-3 weeks of idling, I receive this output.
If I try to access any other link, e.g. the login page (/login.faces), I receive:
Problem accessing /error/not-found.faces. Reason:
/error/not-found.xhtml Not Found in ExternalContext as a Resource
which used to work normally before the idle period.
If I restart the webserver, everything returns to normal and works fine. There are scheduled tasks that make the app interact with the database every day (for example, a scheduled task that fetches currency rates via a web service).
Therefore, my question is: what could cause the site to break and become unavailable after idling? Is this a webserver (Jetty) issue? Am I missing a crucial setting?
FYI, the project stack is: Java with Spring, Hibernate, JSF (PrimeFaces) and Jetty.
This occurred due to permissions in CentOS.
If anyone faces the same issue, make sure the log files have the appropriate read and write permissions.

ASP.NET MVC 3 application leads to a browser timeout

I have an ASP.NET MVC 3 application that uses Entity Framework 4.3 code first. The application works fine with the WebDev server in Visual Studio. Once the application is running in IIS 7.5, however, it occasionally happens that the server no longer responds. The browser waits until it times out, and refreshing the page does not help. Only if the browser is closed and restarted does IIS respond to the browser again.
The worker process sits at 0% CPU, which rules out an infinite loop as the cause. When I examine the worker process with the debugger, all threads are in external code. Even with WinDbg I cannot identify the cause.
The application uses the DbContext together with the UnitOfWork pattern. The controllers receive a UnitOfWork object via dependency injection; dependency resolution is done with the UnityDependencyResolver from the Unity.Mvc package. The Entity Framework is also used in my own membership and role providers, but there the DbContext is explicitly created and disposed. The wiring is sketched below.
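For reference, the registration looks roughly like this (a simplified sketch assuming the Unity.Mvc3 flavor of the package; IUnitOfWork and UnitOfWork stand in for my real types):

    using System.Web.Mvc;
    using Microsoft.Practices.Unity;
    using Unity.Mvc3;

    public static class Bootstrapper
    {
        public static void Initialise()
        {
            var container = new UnityContainer();

            // HierarchicalLifetimeManager scopes the UnitOfWork (and its
            // DbContext) to the per-request child container created by
            // UnityDependencyResolver, so instances are not shared
            // across concurrent requests.
            container.RegisterType<IUnitOfWork, UnitOfWork>(
                new HierarchicalLifetimeManager());

            DependencyResolver.SetResolver(new UnityDependencyResolver(container));
        }
    }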
I'm desperate. What could cause this behavior?

Meaning/cause of RPC Exception 'No interfaces have been exported.'

We have a fairly standard client/server application built using MS RPC. Both client and server are implemented in C++. The client establishes a session to the server, then makes repeated calls to it over a period of time before finally closing the session.
Periodically, however, especially under heavy load conditions, we are seeing an RPC exception show up with code 1754: RPC_S_NOTHING_TO_EXPORT.
It appears that this happens in the middle of a session. The user is logged on for a while, making successful calls, then one of the calls inexplicably returns this error. As far as we can tell, the server receives no indication that anything went wrong - and it definitely doesn't see the call the client made.
The error also appears to be permanent: once it occurs, having the client retry the connection doesn't help. However, if the user has multiple sessions active simultaneously between the same client and server, the other connections are unaffected.
In essence, I have two questions:
Does anyone know what RPC_S_NOTHING_TO_EXPORT means? The MSDN documentation simply says: "No interfaces have been exported." ... Huh? The session was working fine for numerous instances of the same call up until this point...
Does anyone have any ideas as to how to identify the real problem? Note: Capturing network traffic is something we would rather avoid, if possible, as the problem is sporadic enough that we would likely go through multiple gigabytes of traffic before running into an occurrence.
Capturing network traffic would be one of the best ways to tackle this issue. If you can't do that, could you dump the client process and debug it with WinDbg or Visual Studio? Perhaps compare a dump taken during normal operation with one taken in the error state?
