Elixir: Best practice to wrap a timed state inside a GenServer?

I come from a game development background and have little experience with web servers.
I'm developing a web server based on Elixir/Phoenix. Say we have some players that can join guilds; the relationship between Guild and Player is one-to-many. To boost performance, we want to cache as much data in memory as possible, so we cache both player and guild data. But since players may "share" the same guild, we wrap each guild's data in a GenServer, so that every operation on a single guild is performed sequentially.
With this design, we realized that caching for guilds becomes a little tricky. Since every guild is a separate process, which process is in charge of expiring a guild process?
If the guild process is in charge of expiring itself, any cast/call message sent after the expiration will fail.
If we have a GuildManager that manages all the expiration logic for the guild processes, killing a guild process could also crash every process that holds that guild's pid.
So what's the best practice to deal with this kind of problem?

Related

Task.WhenAll with Select is a footgun - but why?

Consider: you have a collection of user ids and want to load the details of each user represented by their id from an API. You want to bag up all of those users into some kind of collection and send it back to the calling code. And you want to use LINQ.
Something like this:
var userTasks = userIds.Select(userId => GetUserDetailsAsync(userId));
var users = await Task.WhenAll(userTasks); // users is User[]
This was fine for my app when I had relatively few users, but there came a point where it didn't scale. When it got to thousands of users, this resulted in thousands of HTTP requests being fired concurrently, and bad things started to happen. Not only did we realise that we were effectively launching a denial-of-service attack on the API we were consuming, we were also bringing our own application to the point of collapse through thread starvation.
Not a proud day.
Once we realised that the cause of our woes was a Task.WhenAll / Select combo, we were able to move away from that pattern. But my question is this:
What is going wrong here?
As I read around on the topic, this scenario seems well described by #6 on Mark Heath's list of Async antipatterns: "Excessive parallelization":
Now, this does "work", but what if there were 10,000 orders? We've flooded the thread pool with thousands of tasks, potentially preventing other useful work from completing. If ProcessOrderAsync makes downstream calls to another service like a database or a microservice, we'll potentially overload that with too high a volume of calls.
Is this actually the reason? I ask as my understanding of async / await becomes less clear the more I read about the topic. It's very clear from many pieces that "threads are not tasks". Which is cool, but my code appears to be exhausting the number of threads that ASP.NET Core can handle.
So is that what it is? Is my Task.WhenAll and Select combo exhausting the thread pool or similar? Or is there another explanation for this that I'm not aware of?
Update:
I turned this question into a blog post with a little more detail / waffle. You can find it here: https://blog.johnnyreilly.com/2020/06/taskwhenall-select-is-footgun.html
N+1 Problem
Putting threads, tasks, async, parallelism to one side, what you describe is an N+1 problem, which is something to avoid for exactly what happened to you. It's all well and good when N (your user count) is small, but it grinds to a halt as the users grow.
You may want to find a different solution. Do you have to do this operation for all users? If so, then maybe switch to a background process and fan-out for each user.
Back to the footgun (I had to look that up BTW 🙂).
Tasks are promises, similar to JavaScript's. In .NET they may complete on a separate thread - usually a thread from the thread pool.
In .NET Core, they usually do complete on a separate thread if they haven't already completed at the point of awaiting; for an HTTP request that is almost certain to be the case.
You may have exhausted the thread pool, but since you're making HTTP requests, I suspect you've exhausted the number of concurrent outbound HTTP requests instead. "The default connection limit is 10 for ASP.NET hosted applications and 2 for all others." See the documentation here.
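If raising that limit turns out to be the right fix, both stacks expose a knob for it. A minimal sketch (on .NET Core the limit lives on the handler and already defaults to int.MaxValue, so the quoted defaults apply to the older HttpWebRequest stack):
using System.Net; // ServicePointManager lives here

// .NET Framework (HttpWebRequest stack): process-wide default.
ServicePointManager.DefaultConnectionLimit = 100;

// .NET Core / .NET 5+: the limit is configured per handler instead.
var handler = new SocketsHttpHandler { MaxConnectionsPerServer = 100 };
var client = new HttpClient(handler);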
Is there a way to achieve some parallelism and not exhaust a resource (threads or HTTP connections)? - Yes.
Here's a pattern I often implement for just this reason, using Batch() from morelinq.
IEnumerable<User> users = Enumerable.Empty<User>();
IEnumerable<IEnumerable<string>> batches = userIds.Batch(10);
foreach (IEnumerable<string> batch in batches)
{
    // Start up to 10 requests, then wait for all of them before the next batch.
    IEnumerable<Task<User>> batchTasks = batch.Select(userId => GetUserDetailsAsync(userId));
    User[] batchUsers = await Task.WhenAll(batchTasks);
    users = users.Concat(batchUsers);
}
You still get ten asynchronous HTTP requests to GetUserDetailsAsync() at a time, and you don't exhaust threads or concurrent HTTP connections (or at least you cap out at 10).
Now, if this is a heavily used operation, or the server behind GetUserDetailsAsync() is heavily used elsewhere in the app, you may still hit the same limits when your system is under load, so this batching is not always a good idea. YMMV.
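Along the same lines, if you're on .NET 6 or later, Parallel.ForEachAsync can cap concurrency without hand-rolled batches. A sketch assuming the same userIds and GetUserDetailsAsync as above; unlike fixed batches, a new call starts as soon as any one finishes, so a single slow request doesn't stall its whole batch:
using System.Collections.Concurrent;

var users = new ConcurrentBag<User>();

// At most 10 calls are in flight at any moment.
await Parallel.ForEachAsync(
    userIds,
    new ParallelOptions { MaxDegreeOfParallelism = 10 },
    async (userId, ct) =>
    {
        users.Add(await GetUserDetailsAsync(userId));
    });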
You already have an excellent answer here, but just to chime in:
There's no problem with creating thousands of tasks. They're not threads.
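A toy sketch makes the point: a hundred thousand simultaneous tasks complete fine on a handful of threads, because Task.Delay (like a pure async HTTP call) blocks no thread while it waits. (ThreadPool.ThreadCount needs .NET Core 3.0+.)
var tasks = Enumerable.Range(0, 100_000).Select(_ => Task.Delay(1000));
await Task.WhenAll(tasks);
Console.WriteLine($"Thread pool threads in use: {ThreadPool.ThreadCount}");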
The core problem is that you're hitting the API way too much. So the best solutions are going to change how you call that API:
Do you really need user details for thousands of users, all at once? If this is for a dashboard display, then change your API to enforce paging; if this is for a batch process, then see if you can access the data directly from the batch process.
Use a batch route for that API if it supports one.
Use caching if possible.
Finally, if none of the above are possible, look into throttling the API calls.
The standard pattern for asynchronous throttling is to use SemaphoreSlim, which looks like this:
using var throttler = new SemaphoreSlim(10);
var userTasks = userIds.Select(async userId =>
{
    // Each task queues on the semaphore; at most 10 requests run at once.
    await throttler.WaitAsync();
    try { return await GetUserDetailsAsync(userId); }
    finally { throttler.Release(); }
});
var users = await Task.WhenAll(userTasks); // users is User[]
Again, this kind of throttling is best only if you can't make the design changes to avoid thousands of API calls in the first place.
While no thread waits on a pure async operation, a thread is still needed for the continuation. Assuming your GetUserDetailsAsync awaits some IO-bound operation, the continuation (parsing the output, returning the result, ...) still has to run on some thread so that the Task created by GetUserDetailsAsync can be completed. So every one of them will wait for a thread from the thread pool in order to finish.
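A quick way to observe this is to print the thread ID on either side of an await. A toy sketch (the IDs will vary; in a console app there's no SynchronizationContext, so the continuation resumes on a thread pool thread):
using var client = new HttpClient();

Console.WriteLine($"Before await: thread {Environment.CurrentManagedThreadId}");
var body = await client.GetStringAsync("https://example.com");
// No thread was blocked while the request was in flight, but this line
// still needs a (thread pool) thread to run on once the IO completes.
Console.WriteLine($"After await: thread {Environment.CurrentManagedThreadId}");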

Live updates on website - 1 ajax per second is bad practice?

I have a website where each user can have several orders, and each order has its own status. A background process keeps updating the status of each order as necessary. I want to inform the user in real time of the status of their orders. As such, I have developed an API endpoint that returns all the orders of a given user.
On the client side, I've developed a React component that displays the orders; every second an AJAX request is made to the API to fetch all the orders and their statuses, and React auto-updates as necessary.
Is making 1 AJAX call per second to get all orders of a user a bad practice? What are other strategies that I can do?
Yes, it is. You can use sockets to accomplish this. Take a look at Socket.IO.
Edit: My point is, why use AJAX to simulate something that can be done with a feature designed exactly for it? Sockets are made for this kind of thing.
Imagine your user loses their internet connection, for example. With Socket.IO you can handle that very nicely; I don't think it would be that easy with AJAX.
And thinking about scalability, Socket.IO is designed to be performant with whatever transport it settles on. The way it gracefully degrades based on what connection is possible is great, and it means your server will be loaded as little as possible while still reaching as wide an audience as it can.
AJAX will do the trick, but it's not the best design.
There is no one-size-fits-all answer to this question.
First off, this is not a chat app; a delay of less than a second doesn't change the user experience much, if at all.
So that leaves technical reasons, and it really depends on many factors:
How many users do you have (overall load)? How many concurrent users are waiting for their orders? What infrastructure are you using? Do you have other important things to build, or do you just want to spend more time coding things for fun?
If you have a handful of users, there is nothing wrong with querying once per second: it's easy, it's less maintenance overhead, and you said you have it coded already.
If you have dozens or more concurrent users waiting for their status, it's probably best to use WebSockets.
In terms of infrastructure, too many WebSockets are expensive (some cloud hosts have limits on the number of open sockets), so keep that in mind if you want to go that route.

Laravel Raffle Project. Is a Queue the best way to achieve this?

I'm creating a raffle site as a small side project. It will handle multiple raffles, each with an end time. At the end of each raffle a single winner is chosen.
Are Laravel Jobs the best way to go with this? Do I just create a single forever-repeating job to check if any raffles have ended and need a winner?
If not, what would be the best way to go?
I don't think that forever-repeating scripts are generally a good idea.
I just create a single forever-repeating job
This is almost never a good idea. It has its applications in legacy code bases, but websockets and events are better suited for this job. Also, you have the benefit of using a really good framework like Laravel, so take advantage of it.
Websockets
If you want people to be notified in real time in the browser.
If you have all your users subscribe to a websocket channel when they load the page, you can easily have the websocket server send a message to all subscribed clients (i.e. browsers) to let them know who the winner is.
Then, in your client-side code (JavaScript), you can parse that message to determine who the winner is and render a pop-up that lets the user know.
Events
If you don't mind a bit of a delay, most definitely use events for this.
At the end of every action that might potentially end a raffle (e.g. a name is chosen at random by a computer, in a function like chooseName()), fire an event that notifies all participants in the raffle.
https://laravel.com/docs/5.2/events
NB: I've listed the above two as separate approaches, but actually they could be used together. For example, in the event that a name is chosen at random, determine if the raffle is over and notify clients via a websocket connection.
Why I wouldn't use delayed Jobs
The crux of the reason: maintainability.
Imagine a scenario where something extends the time of your raffle by a week. This could've happened because a raffle was cheated on or whatever (can't really think of all the use cases in that area).
Now, your job has a set delay in place - is it really a good programming principle to have to change two things when only one scenario changed? Nope. Having something like an event in place - onRaffleEnd - explicitly looks for the occurrence of an event. Laravel doesn't care when that event happens.
Using delayed Jobs can work - it's just not a good programming use case in your scenario and limits what you're able to do in the longer run. It will force you to make more considerations when unforeseen circumstances come along as well as when you want to change things. This also decentralizes the logic related to your raffle. Whilst decoupling code is good practice, having logic sit in completely different places makes maintenance a nightmare.

Is the mux in this golang socket.io example necessary?

In an app that I'm making, a user is always part of a 'game'. I'd like to set up a socket.io server to communicate with users in a game. I'm planning to use go-socket.io (http://godoc.org/github.com/madari/go-socket.io), which defines the newSocketIO function to create a new socket.io instance.
Instead of creating one socket.io instance, I thought it might be possible to create a map from game IDs to socket.io instances, configured so that each instance listens on a URL that represents its game ID.
This way, I can use methods such as broadcast and broadcastExcept to broadcast to all players within a single game. However, I'd have to start a new goroutine for every game, and I don't know enough about their performance characteristics to know if this is scalable: the request rate for a single socket.io instance will be very low, about 1/second at peak times, but the connection might be idle for tens of seconds at other times (except for heartbeats and any other communication specified by the socket.io protocol).
Would I be better off creating 1 socket.io instance, and tracking which connections belong to which games?
I'd have to start a new goroutine for every game, and I don't know enough about their performance characteristics to know if this is scalable
Fire away, the Go scheduler is built to efficiently handle thousands and even millions of goroutines.
For instance, the default net/http server in the Go standard library spawns a goroutine for every client.
Just remember to return from your goroutines once they're done working, else you'll end up with a lot of stale ones.
Would I be better off creating 1 socket.io instance, and tracking which connections belong to which games?
I'm not involved in the project, but if it follows Go's "get sh*t done" philosophy, then it shouldn't matter. You can find out which approach works better by profiling both, though.

How to design and structure a program that uses Actors

From Joe Armstrong's dissertation, he specified that an Actor-based program should be designed by following three steps. The thing is, I don't understand how the steps map to a real world problem or how to apply them. Here's Joe's original suggestion.
We identify all the truly concurrent activities in our real world activity.
We identify all message channels between the concurrent activities.
We write down all the messages which can flow on the different message channels.
Now we write the program. The structure of the program should exactly follow the structure of the problem. Each real world concurrent activity should be mapped onto exactly one concurrent process in our programming language. If there is a 1:1 mapping of the problem onto the program we say that the program is isomorphic to the problem.
It is extremely important that the mapping is exactly 1:1. The reason for this is that it minimizes the conceptual gap between the problem and the solution. If this mapping is not 1:1 the program will quickly degenerate, and become difficult to understand. This degeneration is often observed when non-CO languages are used to solve concurrent problems. Often the only way to get the program to work is to force several independent activities to be controlled by the same language thread or process. This leads to an inevitable loss of clarity, and makes the programs subject to complex and irreproducible interference errors.
I think #1 is fairly easy to figure out. It's #2 (and 3) where I get lost. To illustrate my frustration I stubbed out a small service available in this gist (Ruby service with callbacks).
Looking at that example service I can see how to answer #1. We have 5 concurrent services.
Start
LoginGateway
LogoutGateway
Stop
Subscribe
Some of those services don't work (or shouldn't) depending on the state the service is in. If the service hasn't been Started, then Login/Logout/Subscribe make no sense. Does this kind of state information have any relevance to Joe's 3 steps?
Anyway, given the example/mock service in that gist, I'm wondering how someone would go about designing a program to wrap this service up in an Actory fashion. I would just like to see a list of guidelines on how to apply Joe's 3 steps. Bonus points for writing some code (any language).
Generally, when structuring an application to use actors you have to identify the concurrent features of your application, which can be tricky to get the hang of. You identify 5 concurrent "services":
Start
LoginGateway
LogoutGateway
Stop
Subscribe
1, 4 and 5 seem to be types of messages that can flow through the system, 2 and 3 I'm not sure how to describe. Your gist is rather large and not super clear to me, but it looks like you've got some kind of message queue system. The actions a User can take are:
Log in to the system
Log out of the system
Subscribe to a Queue of messages
I'll assume logging in and out requires some auth step. I'll assume further that if the user fails the auth step their connection is broken but that creating a connection is not sufficient authentication.
The actions the System takes are:
Handling User actions
Routing messages to subscribers of a Queue
If that's not broadly true, let me know and I'll change this answer. (I'll assume that the messages that get sent to users are not generated by users but are an intrinsic part of the System; maybe we're discussing a monitoring service.) Anyhow, what is concurrent here? A few things:
Users act independently of one another
Queues have separate states
An actor-based architecture represents each concurrent entity as its own process. The User is a finite state machine which authenticates, subscribes to a queue, alternately receives messages and subscribes to more queues, and eventually disconnects. In Erlang/OTP we'd represent this with a gen_fsm. The User process carries all the state needed to interact with the client which, if we're exposing a service over a network, would be a socket.
Authentication implies that the System is itself a 'process', though more likely than not it's really a collection of processes, which in Erlang/OTP we call an application. I digress. For simplicity we'll assume that System is a single process with some well-defined protocol and a state that keeps user credentials. User logins are, then, a well-defined message from a User process to the System process, and the response therefrom. If there were no authentication we'd have no need for a System process, as the only state related to a User would be a socket.
The careful reader will ask: where do we accept socket connections for each User? Ah, good question. There's another concurrent entity not yet mentioned, which we'll call the Listener. It's another process that only listens for connections, creates a User for each newly established socket, hands ownership over to the new User process, then loops back to listen.
The Queue is also a finite state machine. From its start state it accepts User subscription requests via a well-defined protocol, broadcasts messages to subscribers, or accepts unsubscribe requests from User processes. This implies that the Queue has an internal store of User processes, the details of which depend heavily on language and need. In Erlang/OTP, for example, each Queue process would be a gen_server which stores User process ids (PIDs) in a list and, for each message to transmit, simply does a multi-send to each User process in the list.
(In Erlang/OTP we'd use supervisors to ensure that processes stay alive and are restarted on death, which greatly simplifies the work an Erlang developer has to do to ensure reliability in an actor-based architecture.)
Basically, to restate what Joe wrote, an actor-based architecture boils down to these points:
identify concurrent entities in the system and represent them in the implementation by processes,
decide how your processes will send messages (a primitive operation in Erlang/OTP, but something that has to be implemented explicitly in C or Ruby) and
create well-defined protocols between entities in the system which hide state modification.
It's been said that the Internet is the world's most successful actor based architecture and, really, that's not far off.
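To make that concrete outside of Erlang, here's a rough C# analog of the Queue process described above, using System.Threading.Channels as mailboxes. All the names here (Subscribe, Publish, QueueActor) are hypothetical, and a library like Akka.NET would add the supervision piece; this sketch only illustrates the mailbox-and-protocol idea.
using System.Threading.Channels;

// The Queue's protocol: the only messages it understands.
record Subscribe(ChannelWriter<string> Mailbox);
record Publish(string Text);

static class QueueActor
{
    // Spawn one "process": an async loop that owns its state (the subscriber
    // list) and mutates it only in response to messages from its mailbox.
    public static ChannelWriter<object> Spawn()
    {
        var mailbox = Channel.CreateUnbounded<object>();
        _ = Task.Run(async () =>
        {
            var subscribers = new List<ChannelWriter<string>>();
            await foreach (var msg in mailbox.Reader.ReadAllAsync())
            {
                switch (msg)
                {
                    case Subscribe s:
                        subscribers.Add(s.Mailbox);
                        break;
                    case Publish p:
                        foreach (var sub in subscribers)
                            await sub.WriteAsync(p.Text); // the multi-send from above
                        break;
                }
            }
        });
        return mailbox.Writer;
    }
}
A User process would be another such loop holding its socket; it talks to the Queue only through the returned writer:
var queue = QueueActor.Spawn();
var userMailbox = Channel.CreateUnbounded<string>();
await queue.WriteAsync(new Subscribe(userMailbox.Writer));
await queue.WriteAsync(new Publish("hello subscribers"));
Console.WriteLine(await userMailbox.Reader.ReadAsync()); // prints "hello subscribers"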
