You have a Node.js server running, accepting connections from the outside (via Socket.io, but that detail is irrelevant). Every time a connection is made, you create an "Instance" structure and then copy the instance into a global Instances array. This part is asynchronous.
For every three instances created, you hand them all off to another process and flag them as "promoted" (or delete them, or do something else; you get the idea).
The problem is that everything goes well only if the connections arrive at discrete intervals, say 100 ms apart. If they arrive almost at the same time, it is quite possible that you'll end up with the famous three instances in the buffer when, in fact, many more have arrived, because the creation and buffering of the instance objects is asynchronous.
Can you think of a way to mitigate this problem?
(I'd buffer the Instances in Redis rather than in memory, but then I'd have a serious serialization problem: each instance object is quite rich in methods.)
One of the benefits of a microservice architecture is that you can scale heavily used parts of the application without scaling the other parts. This supposedly provides cost benefits.
However, my question is: if a heavily used microservice depends on other microservices to do its work, wouldn't you have to scale those other services as well, seemingly defeating the purpose? And if a microservice calls another microservice in real time to do its job, does that mean the microservice boundaries are not established correctly?
There's no rule of thumb for that.
Scaling usually depends on some metrics: when certain thresholds are reached, new instances are created, and the same goes for removing them when they are no longer needed.
Some services do simple, fast tasks, like taking an input and writing it to the database, while others run longer tasks that can take any amount of time.
If a service that needs to scale is calling a service that can already handle heavy loads reliably, then there is no need to scale the called service as well.
The idea behind scaling is to scale up when needed in order to support the load, and then scale down whenever the load returns to its regular range in order to reduce costs.
There are two topics to discuss here.
The first is that it is usually not good practice for two microservices to communicate synchronously, because you are coupling them in time: one service has to wait for the other to finish its task. It is normally a better approach to use a message queue to decouple the producer and the consumer; that way the load on one service doesn't affect the other.
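To make that first point a bit more concrete, here is a minimal sketch of the asynchronous hand-off. Go with NATS is used purely as an example; the broker choice, subject name, payload and timings are all made up, and a real system would add error handling and durable consumers as needed.

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the broker; NATS is used here only as an example,
	// any message queue provides the same time decoupling.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// The "slow" service consumes at its own pace; scaling it means adding
	// more subscribers, independently of the producer.
	if _, err := nc.Subscribe("orders.created", func(m *nats.Msg) {
		log.Printf("processing order: %s", string(m.Data))
		time.Sleep(2 * time.Second) // stand-in for expensive work
	}); err != nil {
		log.Fatal(err)
	}

	// The "fast" service publishes and returns immediately instead of
	// waiting for the other service to finish its task.
	if err := nc.Publish("orders.created", []byte(`{"id":42}`)); err != nil {
		log.Fatal(err)
	}

	time.Sleep(3 * time.Second) // keep the demo alive long enough to see output
}
```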
However, there are situations in which synchronous communication between two services is necessary, and even then it doesn't mean that both have to scale the same way. For example, if one service has to make several calls to other services, run database queries, or do other heavy computational work, while one of the services it calls only sorts an array, the first service will probably have to scale much more than the second to process the same number of requests, because its threads will be occupied for longer.
In the context of a heavily requested web service written in Go, I am considering caching some computations. For that, I am thinking of using Redis.
My application is susceptible to receiving an avalanche of requests containing the same payload, which triggers a costly computation. A cache would therefore pay off and allow the computation to run only once.
Consider the following figure extracted from here
I use this figure because I think it helps illustrate the problem. The figure considers the two general cases: the book is in the cache, or it is not. However, it does not consider the transitory case in which a book is being retrieved from the database while other "get-same-book" requests arrive. In that case, I would like to queue the repeated requests temporarily until the book is retrieved. Then, once the book has arrived, the queued requests are answered with the result, which would remain in the cache for fast retrieval by future requests.
So my question is about approaches to implementing this requirement. I'm considering using a kind of table on the server (repository) that records the status of a database query (computing, ready), but this seems a little complicated, because I would need to handle some race conditions.
So I would like to know whether anyone recognizes this pattern, or whether Redis itself implements it in some way (I have not found it in my research, but I suspect it would be possible using a Redis lock).
You can design it as you have described, but there are some things that are important.
Use a unique key
Use a unique key for each book, and if the book is ever changed, that key should also change. This design makes your step (6), save the book in Redis, an idempotent operation (you can do it many times with the same result), so you avoid any race condition with "get-same-book".
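For example (a minimal sketch; the key scheme and the idea of a version/revision field are assumptions about your data model):

```go
package main

import "fmt"

// bookKey builds a cache key that is unique per book and changes whenever
// the book changes, here by embedding a version number (assumed to exist
// in the data model). Writing the cached value under such a key is
// idempotent: repeating the write produces the same state.
func bookKey(bookID string, version int) string {
	return fmt.Sprintf("book:%s:v%d", bookID, version)
}

func main() {
	fmt.Println(bookKey("dune", 2)) // book:dune:v2
}
```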
Idempotent requests OR asynchronous messages
I would like to queue the repeated requests temporarily until the book is retrieved. Then, once the book has arrived, the queued requests are answered with the result
I would not recommend queuing requests as you describe. If the request is a cache miss, let it retrieve the book from the database - but design that operation to be idempotent. Alternatively, you could handle all requests asynchronously and use a message queue, e.g. NATS or RabbitMQ, but the complexity grows with that solution.
Serializing requests
My problem is that during that second of computation, while the result has not yet arrived, many repeated requests can come in, and because of the cost I need to avoid repeating their computations. I need to find a way of holding them until the result of the first request arrives.
It sounds like you want your computations serialized rather than done concurrently, because you want to avoid doing the same computation twice. To solve this, let the requests initiate the computation, e.g. by putting the input on a queue; then do the computations in serial order (but still possibly concurrently for different keys) and finally notify the client - or better, let the client subscribe for updates.
Redis does have support for Pub/Sub, but whether it fits depends on the requirements you have on the clients. For scalability, I would recommend a solution without locks.
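As a concrete, lock-free sketch of the "compute once, let the duplicates wait" idea in Go: the golang.org/x/sync/singleflight package coalesces concurrent calls with the same key into a single execution. Everything else here (the key, the fake computation, the function name) is illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// getBook runs the expensive computation at most once per key at a time;
// concurrent callers with the same key block and receive the same result.
func getBook(key string) (string, error) {
	v, err, shared := group.Do(key, func() (interface{}, error) {
		time.Sleep(1 * time.Second) // stand-in for the costly computation
		return "book for " + key, nil
	})
	_ = shared // true when the result was shared with other callers
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			res, _ := getBook("book:42")
			fmt.Println(res)
		}()
	}
	wg.Wait() // all five requests get an answer, but the computation ran once
}
```

The first caller would then store the result in Redis under the per-book key described above, so later requests become plain cache hits.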
I have a cluster application, which is divided into a controller and a bunch of workers. The controller runs on a dedicated host, the workers phone in over the network and get handed jobs, so far so normal. (Basically the "divide-and-conquer pipeline" from the zeromq manual, with job-specific wrinkles. That's not important right now.)
The controller's core data structure is unordered_map<string, queue<string>> in pseudo-C++ (the controller is actually implemented in Python, but I am open to the possibility of rewriting it in something else). The strings in the queues define jobs, and the keys of the map are a categorization of the jobs. The controller is seeded with a set of jobs; when a worker starts up, the controller removes one string from one of the queues and hands it out as the worker's first job. The worker may crash during the run, in which case the job gets put back on the appropriate queue (there is an ancillary table of outstanding jobs). If it completes the job successfully, it will send back a list of new job-strings, which the controller will sort into the appropriate queues. Then it will pull another string off some queue and send it to the worker as its next job; usually, but not always, it will pick the same queue as the previous job for that worker.
Now, the question. This data structure currently sits entirely in main memory, which was fine for small-scale test runs, but at full scale is eating all available RAM on the controller, all by itself. And the controller has several other tasks to accomplish, so that's no good.
What approach should I take? So far, I have considered:
a) to convert this to a primarily-on-disk data structure. It could be cached in RAM to some extent for efficiency, but jobs take tens of seconds to complete, so it's okay if it's not that efficient,
b) using a relational database - e.g. SQLite, (but SQL schemas are a very poor fit AFAICT),
c) using a NoSQL database with persistency support, e.g. Redis (data structure maps over trivially, but this still appears too RAM-centric for me to feel confident that the memory-hog problem will actually go away)
Concrete numbers: For a full-scale run, there will be between one and ten million keys in the hash, and less than 100 entries in each queue. String length varies wildly but is unlikely to be more than 250-ish bytes. So, a hypothetical (impossible) zero-overhead data structure would require 2^34 – 2^37 bytes of storage.
Ultimately, it all boils down to how you define the efficiency needed on the part of the controller -- e.g. response times, throughput, memory consumption, disk consumption, scalability... These properties are directly or indirectly related to:
number of requests the controller needs to handle per second (throughput)
acceptable response times
future growth expectations
Here's how I'd evaluate each of your options:
a) to convert this to a primarily-on-disk data structure. It could be
cached in RAM to some extent for efficiency, but jobs take tens of
seconds to complete, so it's okay if it's not that efficient,
Given the current memory consumption, some form of persistent storage seems a reasonable choice. Caching comes into play if there is a repeatable access pattern, say the same queue being accessed over and over again -- otherwise, caching is unlikely to help.
This option makes sense if 1) you cannot find a database that maps trivially to your data structure (unlikely), or 2) for some other reason you want your own on-disk format, e.g. you find that converting to a database adds too much overhead (again, unlikely).
One alternative to databases is to look at persistent queues (e.g. using a RabbitMQ backing store), but I'm not sure what the per-queue or overall size limits are.
b) using a relational database - e.g. SQLite, (but SQL schemas are a
very poor fit AFAICT),
As you mention, SQL is probably not a good fit for your requirements, even though you could surely map your data structure to a relational model somehow.
However, NoSQL databases like MongoDB or CouchDB seem much more appropriate. Either way, a database of some sort seems viable as long as it can meet your throughput requirements. Many if not most NoSQL databases are also a good choice from a scalability perspective, as they include support for sharding data across multiple machines.
c) using a NoSQL database with persistency support, e.g. Redis (data structure maps over trivially, but this still appears too RAM-centric for me to feel confident that the memory-hog problem will actually go away)
An in-memory database like Redis doesn't solve the memory-hog problem unless you set up a cluster of machines, each holding part of the overall data. That makes sense only if keeping all data in memory is required by low response-time requirements. Yet given that your jobs take tens of seconds to complete, response times, from the workers' perspective, hardly matter.
If you find, however, that response times do matter, Redis would be a good choice, as it handles partitioning trivially, either via client-side consistent hashing or at the cluster level, and thus also supports scalability scenarios.
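To illustrate how trivially the structure maps onto Redis, here is a sketch using the go-redis client (Go and the key naming are just illustrative; the address and category name are assumptions): each map key becomes a Redis list, and the queue operations become RPUSH/LPOP.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // assumed local instance

	// One Redis list per category: the map key becomes part of the Redis key.
	queueKey := "jobs:some-category"

	// Seeding or returning jobs: push job strings onto the category's list.
	if err := rdb.RPush(ctx, queueKey, "job-1", "job-2").Err(); err != nil {
		log.Fatal(err)
	}

	// Handing out a job: pop one string off the list.
	job, err := rdb.LPop(ctx, queueKey).Result()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("next job:", job)
}
```

The equivalent commands exist in the Python client as well, so this mapping doesn't force a rewrite; just keep in mind that, as noted above, Redis still holds these lists in RAM unless you partition across machines.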
In any case
Before you choose a solution, be sure to clarify your requirements. You mention you want an efficient solution. Since efficiency can only be gauged against some set of requirements, here's the list of questions I would try to answer first:
Requirements
how many jobs are expected to complete, say per minute or per hour?
how many workers are needed to do so?
concluding from that:
what is the expected load in requests per second, and
what response times are expected on part of the controller (handing out jobs, receiving results)?
And looking into the future:
will the workload increase, i.e. does your solution need to scale up (more jobs per time unit, more data per job)?
will there be a need for persistency of jobs and results, e.g. for auditing purposes?
Again, concluding from that,
how will this influence the number of workers?
what effect will it have on the number of requests/second on part of the controller?
With these answers, you will find yourself in a better position to choose a solution.
I would look into a message queue like RabbitMQ. That way it will first fill up RAM and then spill to disk. I have up to 500,000,000 objects in queues on a single server and it's just plugging away.
RabbitMQ works on Windows and Linux and has simple connectors/SDKs to about any kind of language.
https://www.rabbitmq.com/
I could use either primitive to make it work, but from a performance perspective I wonder which one is more suitable for such a scenario.
I need to synchronize only two processes. There are always two, no more, no less. One writes to a memory-mapped file while the other reads from it in a producer/consumer fashion. I care about performance, and given how simple the scenario is, I think I could use something lightweight, but I don't know for sure which option is fastest while still being adequate for this scenario.
First point: they're all kernel objects so all of them involve a switch from user mode to kernel mode. That imposes enough overhead by itself that you're unlikely to notice any real difference between them in terms of speed or anything like that. Therefore, which one is preferable will depend a great deal upon how you're structuring the data in the shared memory region, and how you use it.
Let's start with what would probably be the simplest case: that the shared memory region forms the bottleneck. All the time that the consumer isn't reading, the producer will be writing and vice versa. At least initially, this seems like a case where we can use a single mutex. The producer waits on the mutex, writes data, releases the mutex. The consumer waits on the mutex, reads data, releases the mutex. This continues until everything is done.
Unfortunately, while this protects against the producer and consumer using the shared region at the same time, it does not ensure proper operation. For example: the producer writes a buffer full of information, then releases the mutex. Then it waits on the mutex again, so when the reader is done it can write more data -- but at that point, there's no guarantee that the consumer will be the next one to get the mutex. The producer might get it back immediately, and write more data over what it just produced, so the consumer will never see the previous data.
One way to prevent that would be to use a couple of events: one from the producer to the consumer to say that there's data waiting to be read, and the other from the consumer to the producer to say all the data in the buffer has been read. In this case, the producer waits on its event, which the consumer will only set when it's done reading data. The producer then writes some data, and signals the consumer's event to say some data is ready. The consumer reads the data, and then signals the producer's event so the cycle can continue.
As long as you only have a single producer and single consumer and treat the entire buffer as a single "chunk" of data that's controlled together, that's adequate. That, however, can lead to a problem. Let's consider, for example, a web server front-end as the producer and back-end as the consumer (and some separate mechanism for passing results back to the web server). If the buffer is small enough to only hold one request, the producer may have to buffer up several incoming requests as the consumer is processing one. Each time the consumer is ready to process a request, the producer has to stop what it's doing, copy a request to the buffer, and let the consumer know it can proceed.
The basic point of separate processes, however, is to let each proceed on its own schedule as much as possible. To allow that, we might make room in our shared buffer for a number of requests. At any given time, some number of those slots will be full (or, looking at it from the other direction, some number will be free). For this case, we just about need a counted semaphore to track those slots. The producer can write something any time at least one slot is free. The consumer can read something anytime at least one slot is filled.
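To make the counting idea concrete, here is a sketch of the slot bookkeeping. It's written in Go purely for illustration, with two buffered channels standing in for the two counting semaphores ("free slots" and "filled slots"); in the actual Win32 version these would be two semaphore handles, and the buffer would live in the memory-mapped file.

```go
package main

import "fmt"

const slots = 4 // number of request slots in the shared buffer

func main() {
	buffer := make([]string, slots)

	// Two counting semaphores, modelled here as buffered channels of tokens.
	free := make(chan struct{}, slots)   // how many slots the producer may write
	filled := make(chan struct{}, slots) // how many slots the consumer may read
	for i := 0; i < slots; i++ {
		free <- struct{}{} // initially every slot is free
	}

	const total = 8
	done := make(chan struct{})

	// Consumer: wait for a filled slot, read in ring order, release a free slot.
	go func() {
		for n := 0; n < total; n++ {
			<-filled
			fmt.Println("consumed:", buffer[n%slots])
			free <- struct{}{}
		}
		close(done)
	}()

	// Producer: wait for a free slot, write in ring order, signal a filled slot.
	for n := 0; n < total; n++ {
		<-free
		buffer[n%slots] = fmt.Sprintf("request %d", n)
		filled <- struct{}{}
	}

	<-done
}
```

The Win32 bookkeeping is the mirror image: the producer waits (WaitForSingleObject) on the "free" semaphore before writing and releases (ReleaseSemaphore) the "filled" one afterwards, and the consumer does the opposite.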
Bottom line: the choice isn't about speed. It's about how you use and structure the data and the processes' access to it. Assuming it's really as simple as you describe, the pair of events is probably the simplest mechanism that will work.
I'm trying to work out in my head the best way to structure a Cocoa app that's essentially a concurrent download manager. There's a server the app talks to, the user makes a big list of things to pull down, and the app processes that list. (It's not using HTTP or FTP, so I can't use the URL-loading system; I'll be talking across socket connections.)
This is basically the classic producer-consumer pattern. The trick is that the number of consumers is fixed, and they're persistent. The server sets a strict limit on the number of simultaneous connections that can be open (though usually at least two), and opening new connections is expensive, so in an ideal world, the same N connections are open for the lifetime of the app.
One way to approach this might be to create N threads, each of which would "own" a connection, and wait on the request queue, blocking if it's empty. Since the number of connections will never be huge, this is not unreasonable in terms of actual system overhead. But conceptually, it seems like Cocoa must offer a more elegant solution.
It seems like I could use an NSOperationQueue, and call setMaxConcurrentOperationCount: with the number of connections. Then I just toss the download requests into that queue. But I'm not sure, in that case, how to manage the connections themselves. (Just put them on a stack, and rely on the queue to ensure I don't over/under-run? Throw in a dispatch semaphore along with the stack?)
Now that we're in the brave new world of Grand Central Dispatch, does that open up any other ways of tackling this? At first blush, it doesn't seem like it, since GCD's flagship ability to dynamically scale concurrency (and mentioned in Apple's recommendations on Changing Producer-Consumer Implementations) doesn't actually help me. But I've just scratched the surface of reading about it.
EDIT:
In case it matters: yes, I am planning on using the asynchronous/non-blocking socket APIs to do the actual communication with the server. So the I/O itself does not have to be on its own thread(s). I'm just concerned with the mechanics of queuing up the work, and (safely) doling it out to the connections, as they become available.
If you're using CFSocket's non-blocking calls for I/O, I agree, that should all happen on the main thread, letting the OS handle the concurrency issues, since you're just copying data and not really doing any computation.
Beyond that, it sounds like the only other work your app needs to do is maintain a queue of items to be downloaded. When any one of the transfers is complete, the CFSocket callback can initiate the transfer of the next item on the queue. (If the queue is empty, decrement your connection count, and if something is added to an empty queue, start a new transfer.) I don't see why you need multiple threads for that.
Maybe you've left out something important, but based on your description the app is I/O bound, not CPU bound, so all of the concurrency stuff is just going to make more complicated code with minimal impact on performance.
Do it all on the main thread.
For posterity's sake, after some discussion elsewhere, the solution I think I'd adopt for this is basically:
Have a queue of pending download operations, initially empty.
Have a set containing all open connections, initially empty.
Have a mutable array (queue, really) of idle open connections, initially empty.
When the user adds a download request:
If the array of idle connections is not empty, remove one and assign the download to it.
If there are no idle connections, but the number of total connections has not reached its limit, open a new connection, add it to the set, and assign the download to it.
Otherwise, enqueue the download for later.
When a download completes: if there are queued requests, dequeue one and give it to the connection; otherwise, add the connection to the idle list.
All of that work would take place on the main thread. The work of decoding the results of each download would be offloaded to GCD, so it can handle throttling the concurrency, and it doesn't clog the main thread.
Opening a new connection might take a while, so the process of creating a new one might be a tad more complicated in actual practice (say, enqueue the download, initiate the connection process, and then dequeue it when the connection is fully established). But I still think my concern about possible race conditions was overstated.
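For what it's worth, the dispatch logic above fits in a few lines. Here is a rough sketch, written in Go purely as language-neutral pseudocode with made-up type and method names; the real version would live on the main thread and use Cocoa collections.

```go
package main

import "fmt"

// Download and Conn are stand-ins for the real request and connection types.
type Download struct{ Name string }

type Conn struct{ ID int }

func (c *Conn) Start(d Download) { fmt.Printf("conn %d downloading %s\n", c.ID, d.Name) }

// Manager holds the three collections described above.
type Manager struct {
	limit   int
	open    []*Conn    // every open connection
	idle    []*Conn    // open connections with nothing to do
	pending []Download // downloads waiting for a connection
}

// Add handles a new download request from the user.
func (m *Manager) Add(d Download) {
	switch {
	case len(m.idle) > 0: // reuse an idle connection
		c := m.idle[len(m.idle)-1]
		m.idle = m.idle[:len(m.idle)-1]
		c.Start(d)
	case len(m.open) < m.limit: // still below the server's connection cap
		c := &Conn{ID: len(m.open) + 1}
		m.open = append(m.open, c)
		c.Start(d)
	default: // all connections busy and at the cap
		m.pending = append(m.pending, d)
	}
}

// Done is called when a connection finishes its current download.
func (m *Manager) Done(c *Conn) {
	if len(m.pending) > 0 {
		d := m.pending[0]
		m.pending = m.pending[1:]
		c.Start(d)
	} else {
		m.idle = append(m.idle, c)
	}
}

func main() {
	m := &Manager{limit: 2}
	m.Add(Download{Name: "a"}) // opens connection 1
	m.Add(Download{Name: "b"}) // opens connection 2
	m.Add(Download{Name: "c"}) // queued: at the connection cap
	m.Done(m.open[0])          // connection 1 frees up and picks up "c"
}
```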