Meteor 100% uptime considering sticky sessions

I've been working with Meteor for some time and I'm considering using it for multiple large-scale projects. I love Meteor and I really want to push its adoption in our company, but I have one last reservation before I do so: sticky sessions and what they mean for 100% uptime.
My requirement is 100% uptime for all of our sites. Hot code pushes obviously solve the problem of shipping new features, updates, and bug fixes. However, if a server needs to be taken down for maintenance, all my active users are going to lose their sessions (something I can't let happen).
I was hoping someone may have some insight into the problem and what they've done to overcome it, or whether there's a possible strategy for migrating users from one server to another (session replication), thus preventing users from being kicked.
The reason I ask is that the publish cursor keeps track of whatever collections the client has, so if the server disconnects and the client's connection is directed to another server (because it's behind a load balancer), that server will have no idea what is out of sync on the client, which can create strange behaviour.
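To illustrate, here is a minimal client-side sketch, assuming a standard Meteor setup, that just watches the DDP connection status so the app can at least detect when the load balancer has moved it to another server:

```typescript
// Sketch only: reactively watch Meteor's DDP connection status.
import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';

Tracker.autorun(() => {
  const status = Meteor.status(); // reactive: connected, status, retryCount, ...
  if (!status.connected) {
    // The connection dropped; Meteor keeps retrying, and on reconnect it
    // re-runs subscriptions against whichever server the load balancer picks.
    console.log(`Disconnected (${status.status}), retry #${status.retryCount}`);
  }
});
```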

Related

How to implement synchronization of browser-based online games when users refresh their browser

In implementing a simple browser-based game involving multiple users, I have the server save the game state at certain sync points (not time-based but event-specific). I identify each state by an integer.
When a user refreshes his browser, the server provides the latest state and restores the content in the browser. However, in the few seconds while the browser is loading that state after the refresh, the state could change again. I do not know how to handle this situation, because sending the next state will raise the same issue again.
I want a seamless refresh so none of the other players are impacted when one user refreshes his browser (or for that matter leaves and comes back).
The implementation language is not relevant. I use websockets to communicate between the browser and the server. The server is the intermediary for all communication between users (I am not using WebRTC data channels). What is the best way to sync the application content in multiple browsers?
This is indeed a programming-based question though no code is provided.
Forget the fact that your client exists in a browser. Let's just talk about replication.
The usual approach in databases is to separate snapshots from write-ahead logs (WAL). When you bring a new client up, you select a snapshot and transfer that. Then, when the client is ready, it asks for the WAL entries from that snapshot forward. The same mechanism is used after crashes: the last available snapshot is loaded, then the WAL is replayed, and then the database comes up.
I would suggest the same strategy. This does require efficient storage of snapshots, some kind of log, and some kind of replay mechanism, which is a lot of easy-to-mess-up code. If you can use something existing, that would be good.
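As a rough illustration, here is a minimal TypeScript sketch of that snapshot-plus-log idea; the GameState and Action shapes are hypothetical placeholders for your real game model:

```typescript
// Sketch of snapshot + write-ahead-log replication (hypothetical types).
type Action = { seq: number; payload: unknown };
type GameState = { seq: number; data: unknown };

const wal: Action[] = [];                       // append-only log of actions
let snapshot: GameState = { seq: 0, data: {} }; // periodically refreshed

// Applying an action must be deterministic so every replica converges.
function apply(state: GameState, action: Action): GameState {
  return { seq: action.seq, data: state.data /* ...apply payload here */ };
}

// Server: a client that has state up to clientSeq gets only the tail.
function catchUp(clientSeq: number): Action[] {
  return wal.filter(a => a.seq > clientSeq);
}

// Client: load the snapshot, then replay the log from that point forward.
function restore(snap: GameState, log: Action[]): GameState {
  return log.reduce(apply, snap);
}
```

The same replay path covers both a fresh join and a mid-game refresh: the browser only ever asks for "everything after sequence N", so a state change during page load is just more log to replay.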
The first thing I looked into was using Emscripten to compile Redis to JS, and then trying to use Redis' built-in asynchronous replication to replicate to your browser. That may be possible, but the fact that Redis is single-threaded and wants to be a client-server system is probably a showstopper.
The next best option that I found is https://isomorphic-git.org/. Here is how that could give you what you need. You simply maintain your current state in a git repository and keep a WAL of everything that you've done to it. When a client connects, it clones the repository. Once done, it connects to the websocket, tells you what commit it is at, and you send it the log entries from that point forward. Locally, in the browser, you run those git commands. If the client simply loses its connection and then rejoins, it can do a git pull and then follow the same strategy.
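A hedged sketch of what that client bootstrap could look like; the repository URL and message format are made up, and this assumes LightningFS for the in-browser filesystem:

```typescript
// Sketch: clone the state repo in the browser, then stream the WAL tail.
import git from 'isomorphic-git';
import http from 'isomorphic-git/http/web';
import LightningFS from '@isomorphic-git/lightning-fs';

const fs = new LightningFS('game');
const dir = '/state';

async function join(socket: WebSocket): Promise<void> {
  // 1. Clone the current state repository into the browser.
  await git.clone({ fs, http, dir, url: 'https://example.com/state.git' });
  // 2. Tell the server which commit we are at...
  const head = await git.resolveRef({ fs, dir, ref: 'HEAD' });
  socket.send(JSON.stringify({ type: 'catch-up', from: head }));
  // 3. ...and replay the WAL entries it streams back.
  socket.onmessage = (ev) => applyLogEntry(JSON.parse(ev.data));
}

declare function applyLogEntry(entry: unknown): void; // your replay logic
```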
This will be a bunch of work for you. But a lot less work than implementing everything from scratch.

SignalR combined with load balancer missing messages

I have 2 web servers (IIS 8.5) behind a hardware firewall, and our application uses SignalR for some real-time updates. We are using SQL Server as the backplane to help us work in this load-balanced environment. Additionally, we are using sticky sessions on the load balancer to keep users on the same web server during their session. When we run in this hardware configuration we lose at least a third of our messages. Sometimes we get all the expected messages, but more often than not we are missing plenty.
When we are running on a single web server all messages are received. Does anyone have any suggestions for troubleshooting this problem? We've turned on logs (both client & server) and nothing looks like it's missing or broken. We're really stumped.
EDIT:
Some additional details that I hope will shed light on the situation.
Server to Client messages are getting lost. Pretty much all our communication is Server to Client.
We are using sticky sessions based just on IP and limited to 5 minutes, but we're losing messages within those 5 minutes.
This is some old SignalR code that has been only minimally touched since SignalR 1 (or even older). We keep an in-memory list of users along with their connections, and we use that list to send notices back to the client. It seems most likely that this is the cause of the trouble, but with sticky sessions the user should be stuck to the same server for at least those 5 minutes, right?
This list of users maps username to connection id, which is useful when our backend services (on another machine) send a message back with the username rather than the connection id.
Finally resolved this. There were really 2 issues. The first was that we were using an in-memory list of users, as mentioned in the edit above. Once we realized that wasn't going to work across machines, we removed it. That also led us to the second issue, which was how SignalR 2 uses the IUserIdProvider; our call should have been
Clients.User(userId).send(message)
instead of
context.Clients.Client(connection)
This code had existed since we first started using SignalR many years ago and never got properly updated as we upgraded SignalR versions.
Have the same machineKey specified in your web.config on both servers.

User closes the browser without logging out

I am developing a social network in ASP.NET MVC 3. Every user must have the ability to see which people are connected.
What is the best way to do this?
I added a flag to the Contact table in my database, and I set it to true when the user logs in and to false when he logs out.
But the problem with this solution is that when the user closes the browser without logging out, he will still appear connected.
The only way to truly know that a user is currently connected is to maintain some sort of connection between the user and the server. Two options immediately come to mind:
Use JavaScript to periodically call your server using ajax. You would have a special endpoint on your server used to update a "last connected" time, and a second endpoint for users to poll to see who is online.
Use a websocket to maintain a persistent connection with your server.
Option 1 should be fairly easy to implement. The main thing to keep in mind is that this will increase the number of requests coming into your server, and you will have to plan accordingly in order to handle the traffic this could generate. You have some control over the load on your server by configuring how often the JavaScript timer calls back to it.
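A rough sketch of option 1 (shown in TypeScript for brevity; the /heartbeat endpoint and the 30-second/60-second intervals are assumptions):

```typescript
// Client: ping the server every 30 seconds while the page is open.
setInterval(() => {
  fetch('/heartbeat', { method: 'POST' }).catch(() => { /* ignore outages */ });
}, 30_000);

// Server (Node-style sketch): record last-seen times per user and treat
// anyone seen within the last minute as online.
const lastSeen = new Map<string, number>();

function onHeartbeat(userId: string): void {
  lastSeen.set(userId, Date.now());
}

function onlineUsers(): string[] {
  const cutoff = Date.now() - 60_000;
  return [...lastSeen].filter(([, t]) => t > cutoff).map(([id]) => id);
}
```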
Option 2 could be a little more involved if you did it without library support. Of course, there are libraries out there, such as SignalR, that make this really easy to do. This also has an impact on the performance of your site, since each user maintains a persistent connection. The advantage of this approach is that it removes the polling that option 1 requires. It would also make it very easy to push a message to user A that user B has gone offline.
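A minimal sketch of option 2 using the Node 'ws' package (in an ASP.NET app, SignalR's connect/disconnect events play the same role; the user query parameter is an assumption):

```typescript
import { WebSocketServer } from 'ws';

const online = new Set<string>();
const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket, req) => {
  const userId =
    new URL(req.url ?? '/', 'http://localhost').searchParams.get('user') ?? 'anon';
  online.add(userId);
  socket.on('close', () => {
    online.delete(userId);                  // user went offline...
    broadcast({ type: 'offline', userId }); // ...push that to everyone else
  });
});

function broadcast(msg: object): void {
  const data = JSON.stringify(msg);
  for (const client of wss.clients) client.send(data);
}
```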
I should also mention a really easy third option. If your site is fairly interactive, you could just track the last time each user made a request to it. This may not give you enough accuracy to determine whether a user is truly "connected", though.

What's the recommended solution/technology to this use case?

I'm building a website which offers 1-on-1 coaching on various topics. The coaching is done over the web (video call, document upload, stuff like this), and one of the most important things is that the client pays by the minute. My problem is the following: how will I know when a coaching session ends (so that I can correctly bill the customer)?
I'm planning to store the coaching session in the db roughly like this:
coach_id:integer
client_id:integer
created_at:datetime
updated_at:datetime
in_progress:boolean
At the session's end I will take the difference between updated_at and created_at to get the length of the session.
Here are the potential problems I see:
coach loses internet access => in this case, the client will press a button on the website which notifies us that the session had a problem; the session's updated_at will be updated and in_progress set to false
client loses internet access => same workflow as if coach loses internet access
both lose internet access => this is the trickiest case. I am not sure how to notify the server that the session should be considered finished. I am thinking of having both the client's browser and the coach's browser update the server every minute, as sketched below. Worst case, the error would add one minute to the bill, which is acceptable. The downside is that I think this could load the server a lot, and I don't know if it would still be a viable solution once we have many users.
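A rough sketch of that heartbeat idea (in TypeScript rather than Rails for brevity; the endpoint, findStaleSessions, and endSession are hypothetical):

```typescript
const HEARTBEAT_MS = 60_000;

// Browser side: both coach and client ping while the session is live.
declare const sessionId: string;
setInterval(() => {
  fetch(`/sessions/${sessionId}/heartbeat`, { method: 'POST' })
    .catch(() => { /* connection is down; the server sweep handles it */ });
}, HEARTBEAT_MS);

// Server side: periodically close sessions whose heartbeats went silent,
// billing up to the last heartbeat (at most one minute of error).
async function sweepStaleSessions(): Promise<void> {
  for (const s of await findStaleSessions(Date.now() - 2 * HEARTBEAT_MS)) {
    await endSession(s.id); // sets in_progress = false, updated_at = last heartbeat
  }
}
setInterval(sweepStaleSessions, HEARTBEAT_MS);

declare function findStaleSessions(cutoff: number): Promise<{ id: string }[]>;
declare function endSession(id: string): Promise<void>;
```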
What do you think of this approach? In case it matters, the application will be built on Rails 3.2.
Why don't you look into HTML5 EventSource or WebSockets as a possible means of detecting loss of connectivity?
At least in .NET (and I would guess in all server environments) it is possible to see whether the client is still connected (TCP-wise). EventSource/WebSockets let you establish an always-open connection (as opposed to a request/response connection that is only briefly connected) that you can monitor to see if it is still operational.
So essentially, the solution needs to be implemented at the websocket server.
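For example, a minimal sketch with the Node 'ws' package; the session query parameter, the grace period, and scheduleSessionEnd are hypothetical:

```typescript
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket, req) => {
  const sessionId =
    new URL(req.url ?? '/', 'http://localhost').searchParams.get('session');
  socket.on('close', () => {
    // TCP-level disconnect: if nobody reconnects within the grace period,
    // mark the coaching session finished and stop the billing clock.
    if (sessionId) scheduleSessionEnd(sessionId, 60_000 /* grace ms */);
  });
});

declare function scheduleSessionEnd(sessionId: string, graceMs: number): void;
```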

Handle Unstable Internet Connection in Server-Client App

What technology can I use to manage an unstable internet connection in a server-client app? I know mainly PHP (+ Zend Framework) and am learning C# & ASP.NET MVC. I heard WCF/MSMQ is something that can help... but how ... is there something PHP (which I am more familiar with) can do? It is also good to know a .NET alternative if it's better.
The background:
Clients will connect to the server DB to do CRUD operations, but if the internet connection fails this will not be possible. So how do I fix this?
The solution used now is to have local DBs. At the end of the day, all clients upload to the server, and in the morning they download a "consolidated" DB from it. This is not foolproof, as the upload/download may still fail; and considering the large amounts of data transferred, it actually increases the chances of failure.
UPDATE: Is there a PHP/Zend Framework/MySQL replacement for MSMQ/WCF?
WCF can help, because it supports various technologies for reliable message transfer.
One thing that might help is to have the clients make their data changes locally, then upload those changes to a reliable message queue. You would not upload all changes in a single transaction; you might upload 10 at a time, possibly one at a time. As the uploaded messages are processed on the server, the server would write the transaction results to another queue, unique to each client. After the upload (or maybe at the same time), the client would check that queue to see what the result of each upload was. If the result was success, the client can remove that change from its local database. If the result was a failure, the client should try uploading it again.
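A sketch of that upload loop (a TypeScript stand-in; uploadChange and resultFor are hypothetical wrappers around whatever queue you use, MSMQ or otherwise):

```typescript
type Change = { id: string; payload: unknown };

// Drain the local outbox one change at a time, keeping anything
// that has not been confirmed by the server's result queue.
async function drainOutbox(outbox: Change[]): Promise<void> {
  for (const change of [...outbox]) {
    try {
      await uploadChange(change);                 // one message, not one big batch
      const result = await resultFor(change.id);  // per-client result queue
      if (result === 'ok') {
        outbox.splice(outbox.indexOf(change), 1); // confirmed: safe to drop locally
      }                                           // on failure: keep it, retry later
    } catch {
      break; // link is down; stop now and retry the remainder later
    }
  }
}

declare function uploadChange(c: Change): Promise<void>;
declare function resultFor(id: string): Promise<'ok' | 'failed'>;
```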
Of course, you should always be careful that your attempts at error recovery don't make things worse. Too much retry traffic on a bad link may very well cause more traffic, which may itself need recovery, etc.
And, of course, the ultimate solution is to move towards links that are more reliable. Not necessarily faster, but just more reliable.
