I built a flow in which I make two parallel HTTP requests using aggregation.
Depending on the input request, there may be one, two, or no requests at all. The AggregateReply node somehow understands whether I made one request or two: if two parallel requests occur, it waits for both to finish; if only one occurs, it waits for just that one. Please help me and explain how the Aggregate Reply node recognizes these cases. (I read the documentation but did not understand how it works.)
Related
I have an array of objects that I need to send to an endpoint. I am currently looping through the array and sending the requests one by one. The issue is that I now have over 35,000 requests to make, and I need to update the database with each response. In my limited knowledge of Spring Boot, I am not aware of any method I can use to send the 35,000 requests at once (without looping through one by one).
Is the best method still to loop but use asynchronous calls, or is there a way to send the 35,000 HTTP requests at once? I just need a pointer, because I am not aware of how threads can be used here, since this is already an array and each element needs to be sent.
Thank you
Well, first off 35,000 at a time of, well, anything, is a bad idea.
However, if you look into the Java ExecutorService, this gives you the ability to fill a queue with tasks, and then each task will be performed by a thread taken from a thread pool. As a thread completes a task, the service pulls another request from the queue and handles that. So, you simply provide a Runnable that performs your web request, create an Adequately Sized Thread Pool (which is basically sized through experimentation to give the best throughput), and then let the threads crunch away on the queue of tasks.
You will need a queue large enough to absorb all of your tasks, or you can look at something like the NotifyingBlockingThreadPoolExecutor. That will let you keep feeding the queue and block when it gets too full, until all of your tasks are complete.
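A minimal sketch of that approach, using the JDK's own HttpClient; the endpoint URL, pool size, and the loadPayloads/updateDatabase helpers are placeholders you would replace with your own:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkSender {

    public static void main(String[] args) throws InterruptedException {
        List<String> payloads = loadPayloads();          // your 35,000 items
        HttpClient client = HttpClient.newHttpClient();  // thread-safe, reuses connections

        // Pool size is the "adequately sized" knob: tune it experimentally.
        ExecutorService pool = Executors.newFixedThreadPool(20);

        for (String payload : payloads) {
            pool.submit(() -> {
                try {
                    HttpRequest request = HttpRequest.newBuilder()
                            .uri(URI.create("https://example.com/endpoint")) // placeholder URL
                            .POST(HttpRequest.BodyPublishers.ofString(payload))
                            .build();
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    updateDatabase(payload, response.body()); // your persistence call
                } catch (Exception e) {
                    // decide: retry, log, or mark this item as failed
                }
            });
        }

        pool.shutdown();                        // accept no new tasks; queued ones still run
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static List<String> loadPayloads() { return List.of(); }        // stub
    private static void updateDatabase(String in, String out) { /* stub */ }
}
```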
Addenda:
I don't know enough about Spring Boot to comment about whether a "batch job" would do what you want or not.
However, on that note, as an alternative to creating 35,000 individual entries for the ExecutorService, you could indeed submit subsets: for example, 3,500 tasks of 10 items each, or 350 of 100 each. The idea is to leverage any potential gains from reusing HTTP connections and the like, so there's less stand-up and tear-down for each request. Standing up 350 connections is far cheaper than standing up 35,000.
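Continuing the sketch above (it reuses the `pool` and `payloads` variables from it), the chunking could look like this; `sendBatch` is a hypothetical helper that posts a whole chunk in one call or over one connection:

```java
// Split the items into chunks of 100 and submit one task per chunk,
// so each task can reuse a single connection for its whole batch.
int batchSize = 100;
for (int from = 0; from < payloads.size(); from += batchSize) {
    List<String> batch = payloads.subList(from, Math.min(from + batchSize, payloads.size()));
    pool.submit(() -> sendBatch(batch)); // sendBatch: hypothetical helper for one chunk
}
```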
The idea was that there are 2 different HTTP requests to 2 different endpoints. The first one performs a long, expensive calculation and returns. The second request does the exact same expensive calculation, but before returning it does some extra processing with more data reads and calculations. Instead of doing the exact same calculation twice, it would be nice for the first call to write its results to a channel or queue, and the second HTTP endpoint could join that message with the other data reads and processing before returning.
MessageEndpoints and service activators can subscribe to a channel, but I cannot figure out how that would happen in the same thread as the second HTTP call on the second endpoint. To me the mystery is how the second thread on the second endpoint blocks until it receives the message that the first endpoint creates and sends.
Maybe setting up a pollable channel would be the better route: the second endpoint could immediately start polling while doing its other reads and calculations.
Thanks in advance.
Sounds like a task for an Aggregator EI pattern:
http://www.enterpriseintegrationpatterns.com/patterns/messaging/Aggregator.html
https://docs.spring.io/spring-integration/docs/5.0.5.RELEASE/reference/html/messaging-routing-chapter.html#aggregator
Both requests should correlate to the same group.
I believe it doesn't matter for you which request returns first: the only concern is to perform some post-processing when all the data is gathered.
However, I think Scatter-Gather could be a good choice for you as well:
https://docs.spring.io/spring-integration/docs/5.0.5.RELEASE/reference/html/messaging-routing-chapter.html#scatter-gather
There is also a Thread Barrier implementation for your consideration:
https://docs.spring.io/spring-integration/docs/5.0.5.RELEASE/reference/html/messaging-routing-chapter.html#barrier
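To make the aggregator idea concrete, here is a rough Spring Integration Java DSL sketch (5.x style, matching the linked docs). It assumes both endpoints publish their results to the same channel with a shared correlation header; the channel name "resultsChannel", the header "txnId", and the postProcess method are made-up illustrations, not anything from your flow:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class JoinFlowConfig {

    // Both endpoints send their result to "resultsChannel" with the same
    // "txnId" header; the aggregator releases the group once both messages arrive.
    @Bean
    public IntegrationFlow joinFlow() {
        return IntegrationFlows.from("resultsChannel")             // hypothetical channel name
                .aggregate(a -> a
                        .correlationExpression("headers['txnId']")  // hypothetical correlation header
                        .releaseStrategy(group -> group.size() == 2))
                .handle((payload, headers) -> postProcess(payload)) // your post-processing step
                .get();
    }

    private Object postProcess(Object combined) { return combined; } // stub
}
```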
I have an endpoint in my api that supports writes. The resource in question is collaborative, so it is reasonable to expect that there will be parallel write requests arriving concurrently.
If the number of writes is small, then this is relatively straightforward to do with a simple lambda: read the current state, compute the new state, compare and swap, and spin until the swap succeeds or until we give up. In either case, we compute the appropriate HTTP response and return it to the caller.
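For reference, the read/compute/compare-and-swap loop described above looks roughly like this; an in-memory AtomicReference stands in for whatever conditional-write primitive the real data store offers, and the retry limit is arbitrary:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class CasWriter<S> {

    private final AtomicReference<S> state = new AtomicReference<>();
    private static final int MAX_ATTEMPTS = 10; // give up after this many conflicts

    /** Returns the new state on success, or null if we gave up (caller maps that to an error response). */
    public S tryWrite(UnaryOperator<S> computeNewState) {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
            S current = state.get();                   // read the current state
            S next = computeNewState.apply(current);   // compute the new state
            if (state.compareAndSet(current, next)) {  // swap only if nobody wrote in between
                return next;
            }
            // another writer won the race; spin and retry
        }
        return null;
    }
}
```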
If the API is successful, then eventually the waste of conflicting writes becomes expensive enough to address.
It looks as though the natural response is to copy the requests into a queue, with a function that consumes batches; within each batch, we process the requests in sequence, storing the new write, and computing the appropriate response to the request.
What are the options for getting those computed responses copied into the HTTP responses, and what are the trade-offs to be considered?
My sense is that in handling the HTTP request, after (synchronously) enqueuing the message, I need to block/poll on something that will eventually be populated with the response to the request.
I'm not sure if this will count as an answer, but I do not agree that the natural response is to copy/queue/block; that feels like you're just trading optimistic concurrency control for a kind of pessimistic one (and you'd probably have an easier time just implementing a lock using e.g. Redis - not to mention there are other issues with Lambda itself that would make the approach you describe even more difficult).
Users probably do not want an API like this as it would have high latency.
In my opinion, an API that is well designed for collaborative modification of some shared state has higher-order constructs that make the API successful. Taking a conversation as an example: you would decompose the chat into individual messages, where each message is in reply to some other message; the concurrent modification of the conversation is append-only for the most part (you might allow a user to edit an individual message, but that's not a point of resource contention), and you might do things like count the number of messages within the conversation asynchronously so that it is eventually consistent.
You can look at the domain of your API and see if there's a way to expose modification to it in such a way that reduces contention by making modifications target sub-entities (even if the API represents this as a single resource, the storage engine does not have to).
Another option is looking into a model like event sourcing, where the changes themselves are literally appended and you derive the state from some snapshot plus recent changes.
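Loosely, "snapshot plus recent changes" means folding the appended events over the last snapshot; a toy sketch using the conversation example from above (all names are illustrative):

```java
import java.util.List;

// Toy event-sourcing sketch: current state = snapshot + fold over appended events.
record MessagePosted(String author, String text) {}   // an appended change
record ConversationState(int messageCount) {}         // the derived state

class ConversationProjector {
    ConversationState replay(ConversationState snapshot, List<MessagePosted> recentEvents) {
        int count = snapshot.messageCount();
        for (MessagePosted event : recentEvents) {
            count++;                                   // apply each change in order
        }
        return new ConversationState(count);
    }
}
```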
I'm trying to find an architecture for the following scenario. I'm building a REST service that performs some computation that can be quickly batch computed. Let's say that computing 1 "item" takes 50ms, and computing 100 "items" takes 60ms.
However, the nature of the client is that only 1 item needs to be processed at a time. So if I have 100 simultaneous clients, and I write the typical request handler that sends one item and generates a response, I'll end up using 5000ms, but I know I could compute the same in 60ms.
I'm trying to find an architecture that works well in this scenario. I.e., I would like to have something that merges data from many independent requests, processes that batch, and generates the equivalent responses for each individual client.
If you're curious, the service in question is python+django+DRF based, but I'm curious about what kind of architectural solutions/patterns apply here and if anything solving this is already available.
At first you might think of a reverse proxy that detects all pattern-specific queries, collects them, and sends them to your application in an HTTP/1.1 pipeline (pipelining is a way to send a large number of queries one after another and receive all the HTTP responses, in the same order, at the end, without waiting for a response after each query).
But:
Pipelining is very hard to do well
you would have to code the reverse proxy yourself, as I do not know of one that does this
one slow response in the pipeline blocks all the other responses
you need an HTTP server able to hand several queries to your application at once, which essentially never happens unless the HTTP server is coded directly into your application, because HTTP servers usually deliver only one query at a time (for example, you never receive 2 queries at once in a PHP environment: you receive the first one, send the response, and then receive the next one, even if the connection contains 2 queries).
So the better idea would be to do this on the application side. You could identify matching queries and wait a small amount of time (10 ms?) to see whether other such queries are also incoming. You will need a way to communicate between several parallel workers here (say you have 50 application workers and 10 of them have received queries that could be treated in the same batch). This means of communication could be a database (a very fast one) or some shared memory, depending on the technology used.
Then, when too much time has been spent waiting (10 ms?) or when a large number of queries has been received, one of the workers could collect all the queries, run the batch, and tell every other worker that a result is there (here again you need a central point of communication, like LISTEN/NOTIFY in PostgreSQL, a shared-memory area, a message queue service, etc.).
Finally every worker is responsible for sending the right HTTP response.
The key here is having a system where the time you lose coordinating the shared request treatment is smaller than the time saved by batching several queries together, and in the low-traffic case this overhead should stay reasonable (since there you will always lose time waiting for nothing). And of course you are also adding complexity to the system, making it harder to maintain, etc.
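Language aside (the service in the question is Django, but the pattern is not framework-specific), the "wait a few milliseconds, have one worker run the whole batch, then hand each caller its own answer" idea roughly looks like the sketch below. It models the workers as threads sharing one in-process queue; the 10 ms window and the 100-item cap are just the figures from the text, and computeBatch stands in for the real batch computation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Requests park in a queue; a single batching loop collects whatever arrives
// within a ~10 ms window (or up to 100 items), computes all results in one go,
// and completes each caller's future with its own answer.
class BatchingComputeService {

    record Job(String item, CompletableFuture<String> result) {}

    private final BlockingQueue<Job> queue = new LinkedBlockingQueue<>();

    // Called by each request handler: enqueue one item, block until its result arrives.
    String compute(String item) throws Exception {
        CompletableFuture<String> result = new CompletableFuture<>();
        queue.put(new Job(item, result));
        return result.get(1, TimeUnit.SECONDS);
    }

    // Run by one background thread.
    void batchLoop() throws InterruptedException {
        while (true) {
            List<Job> batch = new ArrayList<>();
            batch.add(queue.take());                        // block until the first item shows up
            long deadline = System.nanoTime() + 10_000_000; // then keep collecting for ~10 ms
            while (batch.size() < 100) {
                long waitNanos = deadline - System.nanoTime();
                if (waitNanos <= 0) break;
                Job next = queue.poll(waitNanos, TimeUnit.NANOSECONDS);
                if (next == null) break;
                batch.add(next);
            }
            List<String> answers = computeBatch(batch);     // the cheap N-at-a-time computation
            for (int i = 0; i < batch.size(); i++) {
                batch.get(i).result().complete(answers.get(i));
            }
        }
    }

    private List<String> computeBatch(List<Job> batch) {    // stub for the real batch computation
        return batch.stream().map(j -> "result:" + j.item()).toList();
    }
}
```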
Is there a RESTful way to determine whether a POST (or any other non-idempotent verb) will succeed? This would seem to be useful in cases where you essentially need to do multiple non-idempotent requests against different services, any of which might fail. It would be nice if these requests could be done in a "transaction" (i.e. with support for rollback), but since this is impossible, an alternative is to check whether each of the requests will succeed before actually performing them.
For example suppose I'm building an ecommerce system that allows people to buy t-shirts with custom text printed on them, and this system requires integrating with two different services: a t-shirt printing service, and a payment service. Each of these has a RESTful API, and either might fail. (e.g. the printing company might refuse to print certain words on a t-shirt, say, and the bank might complain if the credit card has expired.) Is there any way to speculatively perform these two requests, so my system will only proceed with them if both requests appear valid?
If not, can this problem be solved in a different way? Creating a resource via a POST with status = pending, and changing this to status = complete if all requests succeed? (DELETE is more tricky...)
HTTP defines the 202 status code for exactly your scenario:
202 Accepted
The request has been accepted for processing, but the processing has not been completed. The request might or might not eventually be acted upon, as it might be disallowed when processing actually takes place. There is no facility for re-sending a status code from an asynchronous operation such as this.
The 202 response is intentionally non-committal. Its purpose is to allow a server to accept a request for some other process (perhaps a batch-oriented process that is only run once per day) without requiring that the user agent's connection to the server persist until the process is completed. The entity returned with this response SHOULD include an indication of the request's current status and either a pointer to a status monitor or some estimate of when the user can expect the request to be fulfilled.
Source: HTTP 1.1 Status Code Definition
This is similar to 201 Created, except that you are indicating that the request has not been completed and the entity has not yet been created. Your response would contain a URL to the resource representing the "order request", so clients can check the status of the order through this URL.
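Just as an illustration (the question is not framework-specific), a 202 response with a pollable status URL might look like this in Spring MVC; the /orders path, the status-URL shape, and enqueueForProcessing are hypothetical:

```java
import java.net.URI;
import java.util.UUID;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class OrderController {

    // Accept the order request, hand it off to asynchronous processing, and
    // point the client at a status resource it can poll.
    @PostMapping("/orders")
    ResponseEntity<String> placeOrder(@RequestBody String order) {
        UUID requestId = UUID.randomUUID();
        enqueueForProcessing(requestId, order);                        // hypothetical async hand-off
        return ResponseEntity
                .accepted()                                            // 202 Accepted
                .location(URI.create("/orders/" + requestId + "/status"))
                .body("{\"status\":\"pending\"}");
    }

    private void enqueueForProcessing(UUID id, String order) { /* stub */ }
}
```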
To answer your question more directly: There is no way to "test" whether a request will succeed before you make it, because you're asking for clairvoyance.
It's not possible to foresee the range of technical problems that could occur when you attempt to make a request in the future. The network may be unavailable, the server may not be able to access its database or external systems it depends on for functioning, there may be a power-cut and the server is offline, a stray neutrino could wander into your memory and bump a 0 to a 1 causing a catastrophic kernel fault.
In order to consume a remote service you need to account for possible failures of any request in isolation of any other processes.
For your specific problem, if the services have no transactional safety, you can't bake any in there and you have to deal with this in a more real-world way. A few options off the top of my head:
Get the T-Shirt company to give you a "test" mechanism, so you can see whether they'll process any given order without actually placing it. It could be that placing an order with them is a two-phase operation, where you construct the order in the first phase (at which time they validate its creation) and then you subsequently ask the order to be processed (after you have taken payment successfully).
Take the credit-card payment first and move your order into a "paid" state. Then attempt to fulfil the order with the T-Shirt service as an asynchronous process. If fulfilment fails and you can identify that the customer tried to get something printed the company is not prepared to produce, you will have to contact them to change their order or produce a refund.
Most organizations will adopt the second approach, due to its technical simplicity and reduced risk to the business. It also has the benefit of being able to cope with the T-Shirt service not being available; the asynchronous process simply waits until the service is available and completes the order at that time.
Exactly. That can be done as you suggest in your last sentence. The idea would be to decouple resource creation (which will always work, barring network failures) from the decision itself: the created resource represents an "ongoing request" for order acceptance that can be decided later. Since the POST returns a "Location" header, you can then retrieve the "status" of your request at any moment.
At some point it may become either accepted or rejected. This may be instantaneous or it may take some time, so you have to design your service with these constraints in mind (i.e. allowing the client to check whether his/her order is accepted, or running some kind of hourly/daily process that collects accepted requests).