How do I close the loop on batched writes in AWS? - aws-lambda

I have an endpoint in my API that supports writes. The resource in question is collaborative, so it is reasonable to expect concurrent write requests.
If the number of writes is small, then this is relatively straightforward to do with a simple Lambda: read the current state, compute the new state, compare-and-swap, and spin until the swap succeeds or until we give up. In either case, we compute the appropriate HTTP response and return it to the caller.
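Concretely, the current handler looks roughly like this (a sketch, assuming DynamoDB with a numeric version attribute for the compare-and-swap; the table name, attribute names, and apply_change() are illustrative):

```python
# Sketch of the optimistic-concurrency loop described above, assuming a
# DynamoDB table "documents" whose items carry a "state" and a numeric
# "version" attribute. All names here are illustrative, not real ones.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("documents")

def write_with_cas(doc_id, apply_change, max_attempts=5):
    for _ in range(max_attempts):
        item = table.get_item(Key={"id": doc_id})["Item"]   # read current state (assumed to exist)
        new_state = apply_change(item["state"])             # compute new state
        try:
            table.update_item(                              # compare-and-swap
                Key={"id": doc_id},
                UpdateExpression="SET #s = :s, #v = #v + :one",
                ConditionExpression="#v = :v",              # fail if someone else wrote first
                ExpressionAttributeNames={"#s": "state", "#v": "version"},
                ExpressionAttributeValues={":s": new_state, ":v": item["version"], ":one": 1},
            )
            return 200                                      # swap succeeded
        except ClientError as e:
            if e.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise                                       # a real error, not a lost race
    return 409                                              # gave up; caller may retry
```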
If the API is successful, then eventually the work wasted on conflicting writes becomes expensive enough to address.
It looks as though the natural response is to copy the requests into a queue, with a function that consumes batches; within each batch, we process the requests in sequence, storing each new write and computing the appropriate response to each request.
What are the options for getting those computed responses copied into the HTTP responses, and what are the trade-offs to be considered?
My sense is that in handling the HTTP request, after (synchronously) enqueueing the message, I need to block/poll on something that will eventually be populated with the response to the request.
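For concreteness, a rough sketch of the enqueue-then-poll mechanism I have in mind, assuming SQS for the queue and a hypothetical DynamoDB "responses" table that the batch consumer would populate (all names invented):

```python
# Sketch of enqueue-then-poll: send the write to SQS tagged with a request id,
# then poll a "responses" table until the batch consumer writes the computed
# response there. Queue URL, table, and attribute names are hypothetical.
import json, time, uuid
import boto3

sqs = boto3.client("sqs")
responses = boto3.resource("dynamodb").Table("responses")

def handle_write(payload, queue_url, timeout=10.0, interval=0.2):
    request_id = str(uuid.uuid4())
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=json.dumps({"request_id": request_id, "payload": payload}))
    deadline = time.time() + timeout
    while time.time() < deadline:                     # block/poll for the consumer's answer
        item = responses.get_item(Key={"request_id": request_id}).get("Item")
        if item:
            return item["status_code"], item["body"]  # computed by the batch consumer
        time.sleep(interval)
    return 504, "timed out waiting for the batch consumer"
```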

I'm not sure if this will count as an answer, but I do not agree that the natural response is to copy/queue/block; that feels like you're just trading optimistic concurrency control for a kind of pessimistic one (and you'd probably have an easier time just implementing a lock using e.g. Redis - not to mention there are other issues with Lambda itself that would make the approach you describe even more difficult).
Users probably do not want an API like this, as it would have high latency.
In my opinion, an API that is well designed for collaborative modification of some shared state has higher-order constructs that make the API successful. Taking a conversation as an example: you would decompose the chat into individual messages, where each message is in reply to some other message. Concurrent modification of the conversation is then append-only for the most part (you might allow a user to edit an individual message, but that is not a point of resource contention), and you might do things like count the number of messages within the conversation asynchronously, so that it is eventually consistent.
You can look at the domain of your API and see whether there's a way to expose modification such that contention is reduced by making modifications target sub-entities (even if the API represents this as a single resource, the storage engine does not have to).
Another option is looking into a model like event sourcing, where the changes themselves are literally appended and you derive the state from some snapshot plus the recent changes.
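To illustrate, a minimal sketch of the event-sourcing idea; the event shapes and the Conversation type are invented for the example:

```python
# State is never updated in place; changes are appended as events, and state
# is derived from a snapshot plus the events recorded after it.
from dataclasses import dataclass, field

@dataclass
class Conversation:
    messages: list = field(default_factory=list)

def apply(state: Conversation, event: dict) -> Conversation:
    if event["type"] == "message_posted":
        state.messages.append(event["text"])          # append-only: no contention
    elif event["type"] == "message_edited":
        state.messages[event["index"]] = event["text"]
    return state

def load(snapshot: Conversation, events_since_snapshot: list) -> Conversation:
    state = snapshot
    for event in events_since_snapshot:               # replay recent changes
        state = apply(state, event)
    return state
```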

Related

API waiting for a specific record on DynamoDB without polling

I am inheriting a workflow that has a reasonable amount of data stored in DynamoDB. The data is periodically refreshed by Lambdas calling third parties when needed. The Lambdas are triggered by both SQS and DynamoDB Streams and go through four or five steps before the data is updated.
I've been given the task of writing an API that can forcibly update N items and return their status. The obvious way to do this without reinventing the wheel, and honoring DRY, is to trigger an event that spawns off a refresh for each item so that the Lambdas can do their thing.
The trouble is that I'm not sure of the best pub/sub approach for being notified that the end state of each workflow has been reached. Do I read from an update/insert stream on DynamoDB to see if the records are updated? Do I create some sort of pub/sub model, like Redis or SNS, to listen for the end state of each Lambda that gets triggered?
Since I'm writing a REST API, timeouts, if there are failures along the line, are fine. But at the same time I want to make sure I can handle the following.
Be guaranteed that I can be notified that an update occurred for my targets after my call (in the case of multiple forced updates being called at once, I only care about the first one to arrive).
Not be bogged down listening for record updates that are not contextually relevant to the API call in question.
Have an amortized time complexity of O(1).
In other words, in terms of the CAP theorem I care about C & A but not P (because a 502 isn't that big a deal). But getting the timing wrong or missing a subscription is a problem.
I know I can just listen to a DynamoDB event stream, but I'm concerned that when things get noisy there will be more irrelevant stuff slowing me down. And I'm not sure whether having every single record get its own topic is scalable (or how messy that would be).
You can use DynamoDB streams in combination with Lambda Event Filtering so the Lambda function only executes for the relevant change you are interested in. More information is available here:
https://aws.amazon.com/about-aws/whats-new/2021/11/aws-lambda-event-filtering-amazon-sqs-dynamodb-kinesis-sources/
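For example, a filter can be attached when creating the event source mapping; here is a sketch using boto3, where the stream ARN, function name, attribute name, and status value are all hypothetical placeholders for your own:

```python
# Attach a filter so the Lambda only fires for items reaching the terminal
# state. The ARN, function name, and the "status"/"COMPLETE" attribute are
# invented; substitute your table's real stream and attributes.
import json
import boto3

lambda_client = boto3.client("lambda")

pattern = {
    "eventName": ["MODIFY"],
    "dynamodb": {"NewImage": {"status": {"S": ["COMPLETE"]}}},
}

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/items/stream/2024-01-01T00:00:00.000",
    FunctionName="notify-on-complete",
    StartingPosition="LATEST",
    FilterCriteria={"Filters": [{"Pattern": json.dumps(pattern)}]},
)
```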

Microservice Architecture: Can you eliminate the synchronous calls between services completely in a system?

Anywhere you read about microservices, it says microservices should communicate asynchronously. It is understandable why asynchronous communication is preferred, as it removes dependencies, provides low coupling and availability, etc.
Suppose there is a common authorization service that is invoked every time a user calls an API. In this scenario you cannot move forward until you have the response from the authorization service. Although you can call the authorization service asynchronously using async IO, it is still a request/reply pattern.
Questions I have:
Is it possible to get rid of synchronous communication, or more appropriately the request/reply pattern, in a microservices-based system design?
Although it is possible to implement a reply/response pattern asynchronously through messaging and callbacks, this adds significant overhead and latency; is it worth converting every request/reply to asynchronous?
If synchronous calls cannot be eliminated completely, then in which scenarios is it OK to have synchronous calls among microservices?
I think the short answer to your question is: the request/reply pattern doesn't mean synchronous. It can also be asynchronous, which you already mentioned.
Long answer:
Request/reply is just a principle. For example, you send an email to a friend. The message contains data relevant to you, and you are expecting a response, but you didn't say so explicitly. Your friend will see the email when he gets back from work, and then he may or may not reply to you. Only you know that you need an answer from him.
Now there are a few options while waiting for your response: either block your entire life until your friend responds (which would mean synchronous communication), or do something else until the response arrives in your inbox (which is asynchronous).
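To make the analogy concrete, here is a minimal sketch of asynchronous request/reply using correlation IDs, with in-process queues standing in for a real message broker (everything here is invented for illustration):

```python
# Asynchronous request/reply: the requester tags each message with a
# correlation id, carries on with other work, and matches the reply when it
# eventually arrives. Plain queues stand in for a broker.
import queue, threading, uuid

requests_q, replies_q = queue.Queue(), queue.Queue()

def responder():                                  # the "friend" reading email
    while True:
        msg = requests_q.get()
        replies_q.put({"correlation_id": msg["correlation_id"],
                       "body": f"re: {msg['body']}"})

threading.Thread(target=responder, daemon=True).start()

pending = {}                                      # correlation id -> original request

def send_request(body):
    cid = str(uuid.uuid4())
    pending[cid] = body
    requests_q.put({"correlation_id": cid, "body": body})
    return cid

send_request("hello")
# ... do other work here instead of blocking ...
reply = replies_q.get()                           # pick up replies whenever convenient
print(pending.pop(reply["correlation_id"]), "->", reply["body"])
```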
Now, to the point:
Is it possible to get rid of synchronous communication, or more appropriately the request/reply pattern, in a microservices-based system design?
Yes, you already answered that yourself at the second point: even though it is possible, I think it should be used only where it is required.
Although it is possible to implement a reply/response pattern asynchronously through messaging and callbacks, this adds significant overhead and latency; is it worth converting every request/reply to asynchronous?
For the right scenario, yes. Messaging systems have very good performance, so latency should not be an issue, and when a latency problem does occur in a messaging system there are options for improving it.
If synchronous calls cannot be eliminated completely, then in which scenarios is it OK to have synchronous calls among microservices?
Yes: there are scenarios where synchronous calls are fine, such as your authorization example, where the caller cannot proceed without the reply.
There is one more thing that needs to be added: synchronous doesn't always mean blocking. In a reactive world, if you make an HTTP call to another service, the caller sends the request and then awaits the response in a non-blocking manner. When the response arrives, the caller is notified that the response has arrived and the process continues. While "awaiting", the CPU can do other stuff.
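A small illustration of that non-blocking "await"; the HTTP call is simulated with asyncio.sleep so the sketch stays self-contained:

```python
# "Synchronous doesn't mean blocking": the caller awaits the response, but
# the event loop runs other work in the meantime.
import asyncio

async def call_other_service():
    await asyncio.sleep(0.1)           # stands in for a non-blocking HTTP request
    return {"status": 200}

async def other_work():
    for i in range(3):
        await asyncio.sleep(0.03)
        print("doing other work", i)   # runs while the "HTTP call" is in flight

async def main():
    response, _ = await asyncio.gather(call_other_service(), other_work())
    print("response arrived:", response)

asyncio.run(main())
```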

Send Concurrent HTTP Requests From Array In Springboot

I have an array of objects that I need to send to an endpoint. I am currently looping through the array and sending the requests one by one. The issue is that I now have over 35,000 requests to make, and I need to update the database with each response. In my limited knowledge of Spring Boot, I am not aware of any method I can use to send the 35,000 requests at once (without looping through one by one).
Is the best method still to loop but use asynchronous calls, or is there a method I can use to send the 35,000 HTTP requests at once? I just need a pointer, because I am not aware of how threads can be used here, since this is already an array and each element needs to be sent.
Thank you
Well, first off 35,000 at a time of, well, anything, is a bad idea.
However, if you look into the Java ExecutorService, this gives you the ability to fill a queue with tasks, where each task will be performed by a thread taken from a thread pool. As threads complete their tasks, the service pulls another request from the queue and handles that. So you simply provide a Runnable that performs your web request, create an Adequately Sized Thread Pool (which is basically sized through experimentation to give the best throughput), and then let the threads crunch away on the queue of tasks.
You will need a queue large enough to absorb all of your tasks, or you can look at something like the NotifyingBlockingThreadPoolExecutor, which lets you keep feeding the queue and blocks when it gets too full, until all of your tasks are complete.
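Since the idea carries across languages, here is the same queue-plus-thread-pool pattern sketched in Python with concurrent.futures (the Java ExecutorService works analogously); the endpoint, payloads, and pool size are placeholders:

```python
# Queue-plus-thread-pool: fill a queue with tasks, let a fixed pool of worker
# threads drain it, and handle each response as it completes. Size the pool
# by experimentation, as suggested above.
from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

def send_one(payload: bytes):
    req = urllib.request.Request("https://example.com/endpoint",
                                 data=payload, method="POST")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return payload, resp.status

payloads = [f"item-{i}".encode() for i in range(35_000)]

# The executor's internal queue absorbs all 35,000 tasks; a fixed pool of
# worker threads pulls from it, one task per free thread.
with ThreadPoolExecutor(max_workers=32) as pool:
    futures = [pool.submit(send_one, p) for p in payloads]
    for future in as_completed(futures):
        payload, status = future.result()   # update the database with each response here
```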
Addenda:
I don't know enough about Spring Boot to comment about whether a "batch job" would do what you want or not.
However, on that note, as an alternative to creating 35,000 individual entries for the ExecutorService, you could indeed send subsets: for example, 3,500 entries representing 10 items each, or 350 with 100 each. The idea is to leverage any potential gains from reusing HTTP connections and the like, so there's less setup and teardown for each request. Standing up 350 connections is far cheaper than standing up 35,000.
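A sketch of that sub-batching idea, again in Python for consistency; the chunk size, endpoint, and use of the third-party requests library are assumptions:

```python
# Sub-batching: 350 tasks of 100 items each instead of 35,000 tasks of one.
# Each task reuses a single HTTP connection for its whole chunk.
from concurrent.futures import ThreadPoolExecutor
import requests  # third-party: pip install requests

def chunks(seq, size):
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def send_chunk(chunk):
    with requests.Session() as session:   # one connection stood up per chunk, not per item
        return [session.post("https://example.com/endpoint", json=item).status_code
                for item in chunk]

items = [{"id": i} for i in range(35_000)]
with ThreadPoolExecutor(max_workers=32) as pool:
    statuses = list(pool.map(send_chunk, chunks(items, 100)))
```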

Batching generation of http responses

I'm trying to find an architecture for the following scenario. I'm building a REST service that performs a computation that can be batch-computed very efficiently. Let's say that computing 1 "item" takes 50ms, and computing 100 "items" takes 60ms.
However, the nature of the client is that only 1 item needs to be processed at a time. So if I have 100 simultaneous clients, and I write the typical request handler that sends one item and generates a response, I'll end up using 5000ms, but I know I could compute the same in 60ms.
I'm trying to find an architecture that works well in this scenario. I.e., I would like to have something that merges data from many independent requests, processes that batch, and generates the equivalent responses for each individual client.
If you're curious, the service in question is python+django+DRF based, but I'm curious about what kind of architectural solutions/patterns apply here and if anything solving this is already available.
At first you might think of a reverse proxy that detects all pattern-specific queries, collects all these queries, and sends them to your application in an HTTP/1.1 pipeline (pipelining is a way to send a large number of queries one after another, receiving all the HTTP responses in the same order at the end, without waiting for a response after each query).
But:
pipelining is very hard to do well
you would have to code the reverse proxy yourself, as I do not know of one that can do this
one slow response in the pipeline blocks all the other responses
you need an HTTP server able to hand several queries to your application language at once, which never happens unless the HTTP server is coded directly into your application, because HTTP servers usually work on one query at a time (e.g. you never receive 2 queries together in a PHP environment; you receive the first one, send the response, and then receive the next one, even if the connection contains 2 queries)
So the good idea would be to do this on the application side. You could identify matching queries and wait a small amount of time (10ms?) to see whether other matching queries are incoming. You will need a way for several parallel workers to communicate here (say you have 50 application workers and 10 of them have received queries that could be treated in the same batch). This channel of communication could be a database (a very fast one) or some shared memory, depending on the technology used.
Then, when too much time has been spent waiting (10ms?) or a large number of queries has been received, one of the workers collects all the queries, runs the batch, and tells every other worker that a result is there (here again you need a central point of communication, like LISTEN/NOTIFY in PostgreSQL, a shared-memory structure, a message queue service, etc.).
Finally, every worker is responsible for sending the right HTTP response.
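Here is a single-process sketch of that collect-wait-batch-dispatch cycle using futures; the cross-worker coordination (shared memory, LISTEN/NOTIFY, a message queue service) is deliberately left out, and compute_batch is whatever batched computation your service performs:

```python
# Request coalescing: the first request to arrive becomes the "leader", waits
# ~10ms for others to join, runs the whole batch, and resolves every parked
# future so each handler can send its own HTTP response.
import threading
import time
from concurrent.futures import Future

class Coalescer:
    def __init__(self, compute_batch, max_wait=0.010):
        self.compute_batch = compute_batch   # f(list_of_items) -> list_of_results
        self.max_wait = max_wait
        self.lock = threading.Lock()
        self.pending = []                    # (item, future) pairs awaiting the next batch

    def submit(self, item):
        fut = Future()
        with self.lock:
            self.pending.append((item, fut))
            leader = len(self.pending) == 1  # first arrival will flush the batch
        if leader:
            time.sleep(self.max_wait)        # window for other requests to join
            with self.lock:
                batch, self.pending = self.pending, []
            for (_, f), result in zip(batch, self.compute_batch([i for i, _ in batch])):
                f.set_result(result)
        return fut.result()                  # followers block here until the leader flushes

# Each request handler calls submit() and turns the result into its response:
coalescer = Coalescer(lambda items: [i * 2 for i in items])
```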
The key here is having a system where the time you lose coordinating shared request treatment is less than the time saved by batching several queries together, and in the low-traffic case this overhead should stay reasonable (since there you will always lose time waiting for nothing). And of course you are adding complexity to the system, making it harder to maintain, etc.

Large number of concurrent ajax calls and ways to deal with it

I have a web page which, upon loading, needs to do a lot of JSON fetches from the server to populate various things dynamically. In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data.
So it works great in Chrome; however, Safari and Firefox appear to suffer somewhat. While the numerous JSON requests are in flight, those browsers become sluggish and unusable. I am working under the assumption that this is due to the rather expensive iteration of said data structure. Is this a valid assumption?
How can I mitigate this without changing the query language so that it's a single fetch?
I was thinking of applying a queue that could limit the number of concurrent Ajax queries (and hence also limit the number of concurrent updates to the data structure)... Any thoughts? Useful pointers? Other suggestions?
In browser-side JS, create a wrapper around jQuery.post() (or whichever method you are using) that appends the requests to a queue. Also create a function 'queue_send' that will actually call jQuery.post(), passing the entire queue structure.
On the server, create a proxy function called 'queue_receive' that replays the JSON to your server interfaces as though it came from the browser, collects the results into a single response, and sends that back to the browser.
Browser-side, queue_send_success() (the success handler for queue_send) must decode this response and populate your data structure.
With this, you should be able to reduce your initialization traffic to one actual request, and maybe consolidate some other requests on your website as well.
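To sketch the server half in Python (framework-agnostic; the handlers table and the payload shape are assumptions, and the browser half stays as described above):

```python
# Sketch of the server-side 'queue_receive' proxy: take the batched payload
# the browser queued up, replay each sub-request against the existing
# handlers, and return the collected results as one response.
import json

handlers = {
    # "url-or-name": function(params) -> JSON-serializable result (assumed lookup table)
}

def queue_receive(batched_body: str) -> str:
    batch = json.loads(batched_body)          # [{"url": ..., "params": ...}, ...]
    results = []
    for sub in batch:
        handler = handlers[sub["url"]]        # replay as though it came in alone
        results.append(handler(sub["params"]))
    return json.dumps(results)                # queue_send_success() decodes this
```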
In particular, it updates parts of a large-ish data structure from which I derive a graphical representation of the data.
I'd try:
Queuing responses as they come in, then updating the structure once
Keeping the representation hidden until all the responses are in
Magicianeer's answer is also good - but I'm not sure if it fits your definition of "without changing the query language so that it's a single fetch" - though it would avoid re-engineering existing logic.
