Owin WebAPI Max Concurrent Connections - asp.net-web-api

public IHttpActionResult TestSpeed()
{
    Thread.Sleep(10000); // simulate 10 seconds of blocking work
    return Ok();
}
Calling the above API concurrently, I can only manage to squeeze approximately 16 connections through within 12 seconds, which seems quite low.
Is there a hard cap on concurrent connections in OWIN Web API self-host?
async is not an option for now, since it would require changing a lot of existing code.
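For what it's worth, this symptom (a trickle of extra requests completing per second) usually points at CLR thread-pool thread injection rather than a hard OWIN cap: once the pool's minimum worker threads are all blocked in Thread.Sleep, new threads are injected only at a throttled rate (commonly cited as roughly one per 500 ms). A minimal sketch of the usual workaround, raising the pool minimum at startup; the value 100 is illustrative, not a recommendation:
// Call once at process startup (e.g. in Main, before WebApp.Start).
int minWorker, minIo;
ThreadPool.GetMinThreads(out minWorker, out minIo);

// Let up to 100 blocking requests run concurrently without waiting for
// the pool's slow thread-injection heuristic (100 is an arbitrary example).
ThreadPool.SetMinThreads(Math.Max(minWorker, 100), minIo);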

Related

Spring WebFlux: Refactoring blocking API with Reactive API, or should I?

I have a legacy Spring Boot REST app that interacts with downstream services that block. I'm new to reactive programming, and am unsure how to handle these blocking requests. Most Webflux examples I've seen are pretty trivial. Here's the flow-of-control of my app:
1. User queries MyApp at http://myapp.com.
2. MyApp queries the partner REST API, which is BLOCKING.
3. Depending on the account type, data from the blocking call is used to make another call to a second blocking REST application.
4. All data is enriched and rendered by MyApp to the browser.
Where to start? I'm using WebClient already, so that part's done. I know I should perform the blocking steps on a different scheduler (parallel or boundedElastic?). Should I use a Flux or Mono, given that the partner APIs return all the data at once?
Both apps return thousands of rows of data, and the user just waits... Steps 1-2 take about 4 seconds; add in step 3 and we're looking at over 30 seconds due to the inefficiency of the API. Can Flux help my users' wait time at all?
EDIT: Below is a (long) example of what my application is doing. Notice that I block on my first call to the API to get a count of what's being returned, then fetch the rest in batches of TASK_QUERY_LIMIT.
@Bean
public WebClient authWebClient(WebClient.Builder builder) {
    MultiValueMap<String, String> map = new LinkedMultiValueMap<>();
    map.set(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE);
    final int size = 48 * 1024 * 1024;
    final ExchangeStrategies strategies = ExchangeStrategies.builder()
            .codecs(codecs -> codecs.defaultCodecs().maxInMemorySize(size))
            .build();
    return builder.baseUrl(configProperties.getUrl())
            .exchangeStrategies(strategies)
            .defaultHeaders(httpHeaders -> httpHeaders.addAll(map))
            .filters(exchangeFilterFunctions -> {
                exchangeFilterFunctions.add(logResponseStatus());
                exchangeFilterFunctions.add(logRequest());
            })
            .build();
}
public Mono<Response<Task>> getTasksMono() {
    return getAuthWebClient()
            .get()
            .uri("http://MyApp.com")
            .accept(MediaType.APPLICATION_JSON)
            .retrieve()
            .onStatus(HttpStatus::isError, this::onHttpStatusError)
            .bodyToMono(new ParameterizedTypeReference<Response<Task>>() {});
}
// Service method
public List<Task> getTasks() {
    Mono<Response<Task>> monoTasks = getTasksMono();
    Response<Task> tasks = monoTasks.block();
    int taskCount = tasks.getCount();
    List<Task> returnTasks = new ArrayList<>(tasks.getData());
    List<Mono<Response<Task>>> tasksMonoList = new ArrayList<>();
    // query API-ONE for all remaining tasks
    if (taskCount > TASK_QUERY_LIMIT) {
        retrieveAdditionalTasks(key, taskCount, tasksMonoList);
    }
    // Send out all of the calls at once, and subscribe to their results.
    Flux.mergeSequential(tasksMonoList)
            .map(Response::getData)
            .doOnNext(returnTasks::addAll)
            .blockLast();
    return returnTasks.stream()
            .map(this::transform) // performs business logic on the data before returning to the user
            .collect(Collectors.toList());
}
private void retrieveAdditionalTasks(String key, int taskCount,
        List<Mono<Response<Task>>> tasksMonoList) {
    int offset = TASK_QUERY_LIMIT;
    int numRequests = (taskCount - offset) / TASK_QUERY_LIMIT + 1;
    for (int i = 0; i < numRequests; i++) {
        tasksMonoList.add(getTasksMono(processDefinitionKey, encryptedIacToken,
                TASK_QUERY_LIMIT, offset));
        offset += TASK_QUERY_LIMIT;
    }
}
There are multiple questions here. I'll try to highlight the main points.
1. Does it make sense refactoring to Reactive API?
At first glance your application is IO-bound, and reactive applications are typically much more efficient for that, because all IO operations are async and non-blocking. A reactive application will not be faster, but it will need fewer resources to handle the same workload. The only caveat is that to get the full benefit of the reactive API, your app should be reactive end-to-end (reactive drivers for the DB, reactive WebClient, …). All reactive logic is executed on Schedulers.parallel(), and only a small number of threads (by default, the number of CPU cores) is needed to execute the non-blocking logic. It's still possible to use blocking APIs by "offloading" them to Schedulers.boundedElastic(), but that should be the exception (not the rule) if you want your app to stay efficient. For more details, check Flight of the Flux 3 - Hopping Threads and Schedulers.
2. Blocking vs non-blocking.
It looks like there is some misunderstanding of what a blocking API is. It's not about response time but about the underlying API. By default, Spring WebFlux uses Reactor Netty as the underlying HTTP client library, which is a reactive implementation on top of the Netty client that uses an event loop instead of the thread-per-request model. Even if a request takes 30-60 seconds to get a response, no thread is blocked, because all IO operations are async. For such an API a reactive application behaves much better, because a non-reactive (thread-per-request) app would need a large number of threads, and as a result much more memory, to handle the same workload.
To quantify the efficiency, we can apply Little's Law to calculate the required number of threads in a "traditional" thread-per-request model:
workers >= throughput x latency, where workers is the number of threads
For example, to handle 100 QPS with 30-second latency we would need 100 x 30 = 3000 threads. In a reactive app the same workload can be handled by just a few threads and, as a result, much less memory. In terms of scalability this means that IO-bound reactive apps typically scale on CPU usage, while "traditional" apps most likely scale on memory.
Sometimes it's not obvious which code is blocking. One very useful tool for testing reactive code is BlockHound, which you can integrate into your unit tests.
3. How to refactor?
I would migrate layer by layer and block only once. Moving the remote calls to WebClient can be the first step in refactoring the app to a reactive API. I would build all the request/response logic using the reactive API and then block (if required) at the very top level (e.g. in the controller). Do's and Don'ts: Avoiding First-Time Reactive Programmer Mines is a great overview of the common pitfalls and a possible migration strategy.
4. Flux vs Mono.
Flux will not improve performance by itself. It's more about the downstream logic. If you process data record-by-record, use Flux<T>; if you process it in batches, use Mono<List<T>>.
Your current code is not really reactive, and it is very hard to follow because it mixes the reactive API, the Stream API, and blocking multiple times. As a first step, try to rewrite it as a single flow using the reactive API and block only once.
Not really sure about your internal types, but here is a skeleton that should give you an idea of the flow.
// Service method
public Flux<Task> getTasks() {
    return getTasksMono()
            .flatMapMany(response -> {
                List<Mono<Response<Task>>> taskRequests = new ArrayList<>();
                taskRequests.add(Mono.just(response));
                if (response.getCount() > TASK_QUERY_LIMIT) {
                    retrieveAdditionalTasks(key, response.getCount(), taskRequests);
                }
                return Flux.mergeSequential(taskRequests);
            })
            .flatMapIterable(Response::getData)
            .map(this::transform); // use flatMap in case transform is async
}
As I mentioned before, try to keep the internal API reactive, returning Mono or Flux, and block only once in the upper layer.

Aiohttp server max connections

I cannot understand why the aiohttp (and asyncio in general) server implementation does not provide a way to limit the number of concurrent connections (the number of accepted sockets, or the number of running request handlers).
(https://github.com/aio-libs/aiohttp/issues/675). Without this limit, it is easy to run out of memory and/or file descriptors.
At the same time, the aiohttp client by default limits the number of concurrent requests to 100 (https://docs.aiohttp.org/en/stable/client_advanced.html#limiting-connection-pool-size), aiojobs limits the number of running tasks and the size of the pending-task list, nginx has the worker_connections limit, and any sync framework is limited by its number of worker threads by design.
While aiohttp can handle a lot of concurrent requests, this number is still limited. The docs on aiojobs say: "The Scheduler has implied limit for amount of concurrent jobs (100 by default). ... It prevents a program over-flooding by running a billion of jobs at the same time". And still, we can happily spawn a "billion" (well, until we run out of resources) aiohttp handlers.
So the question is, why is it implemented the way it is? Am I missing some important detail? I think we could pause request handlers using a Semaphore, but the socket is still accepted by aiohttp and a coroutine is spawned, in contrast with nginx. Also, when deploying behind nginx, the number of worker_connections and the desired aiohttp limit will certainly differ (because nginx may also serve static files).
Based on the developers' comments on the linked issue, the reasons for this choice are the following:
The application can return a 4xx or 5xx response if it detects that the number of connections is larger than what it can reasonably handle. (This differs from the Semaphore idiom, which would effectively queue the connection.)
Throttling the number of server connections is more complicated than just specifying a number, because the limit might well depend on what your coroutines are doing, i.e. it should at least be path-based. Andrew Svetlov links to NGINX documentation about connection limiting to support this.
It is anyway recommended to put aiohttp behind a specialized front server such as NGINX.
More detail than this can only be provided by the developer(s), who have been known to read this tag.
At this point, it appears that the recommended solution is to either use a reverse proxy for limiting, or an application-based limit like this decorator (untested):
import aiohttp.web

REQUEST_LIMIT = 100

def throttle_handle(real_handle):
    _nrequests = 0
    async def handle(request):
        nonlocal _nrequests
        if _nrequests >= REQUEST_LIMIT:
            return aiohttp.web.Response(
                status=429, text="Too many connections")
        _nrequests += 1
        try:
            return await real_handle(request)
        finally:
            _nrequests -= 1
    return handle

@throttle_handle
async def handle(request):
    ... your handler here ...
To limit concurrent connections you can use aiohttp.TCPConnector, or aiohttp.ProxyConnector if you are using a proxy. Just create it in a session instead of using the default.
aiohttp.ClientSession(
    connector=aiohttp.TCPConnector(limit=1)
)

aiohttp.ClientSession(
    connector=aiohttp.ProxyConnector.from_url(proxy_url, limit=1)
)

Performance Azure function with multiple output bindings

Hello all who read this,
We have written a router function in Azure, in an App Service plan, that receives messages from IoT Hub and, depending on the message type, routes the message to another event hub.
Previously we had six output bindings to event hubs in this function.
Recently we added three more message types, so three more output bindings to three more event hubs.
No processing of the messages happens in this function, but what we see now is that we spend 16 times more time in the routing function.
Is there a performance issue with having multiple output bindings?
We don't see an increase in the load of incoming messages.
We are running on Azure Functions 1.0 (Runtime version: 1.0.12205.0 (~1)).
Regards, Ben
Simplified sample code of the routing function:
public static class IotHubRouterFunction
{
    [FunctionName("IotHubRouterFunction")]
    public static void Run([EventHubTrigger("%iothub%", Connection = "IothubRouterListen")] EventData myEventHubData,
        [EventHub("%msg1-eventhub%", Connection = "msg1event")] ICollector<EventData> eventHub4Dmsg1Event,
        [EventHub("%msg2-eventhub%", Connection = "msg2event")] ICollector<EventData> eventHub4Dmsg2Event,
        [EventHub("%msg3-eventhub%", Connection = "msg3event")] ICollector<EventData> eventHub4Dmsg3Event,
        //... like 6 more bindings like this
        ILogger logger
        )
    {
        try
        {
            var messageType = GetValue(myEventHubData.Properties, "type");

            // routing: forward the incoming event to the output binding matching its type
            switch (messageType)
            {
                case "msg1event":
                    eventHub4Dmsg1Event.Add(myEventHubData);
                    break;
                case "msg2event":
                    eventHub4Dmsg2Event.Add(myEventHubData);
                    break;
                case "msg3event":
                    eventHub4Dmsg3Event.Add(myEventHubData);
                    break;
                // 6 more cases like this
                default:
                    logger.LogError("Unrouteable message of type: {messageType}", messageType);
                    break;
            }
        }
        catch (Exception ex)
        {
            //removed
        }
    }
}
With 6 bindings, messages fly through the router function in about 50 ms.
With 9 bindings, messages crawl through the router function in about 800 ms.
CPU also rose by 30% on the App Service plan (we scaled out, so we have it under control, but why so much? What is causing this?)
A little late with the follow-up of what happened.
In the end we found out what was going on.
We have several instances in our App Service plan, but the old monitoring solution showed the average CPU and memory across all instances of the plan.
By switching to the newer metrics and Azure monitoring, we were able to drill down into the separate instances of the App Service plan and the instances of the functions.
We found that one of the functions was running on three instances: two of them behaved normally, but the third had crashed its internal app pool, consumed all the CPU power it could get hold of, and did absolutely nothing.
We restarted the function and all the issues were gone.
Still wondering whether it was something in our code that made it go through the roof, or something happened in Azure that made it go crazy. :-s
When you are using an Azure Function under an App Service plan, you have to watch out for performance parameters like scaling. Have you checked whether your function is getting overloaded?
On the other hand, as a matter of design, this approach looks wrong to me. With this many bindings there could be potential performance issues, and what if you have to add even more bindings in the future? If you are not performing any processing on the messages, you shouldn't be taking on the overhead of redirecting them.
Event Grid
You can use Event Grid for this: IoT Hub publishes events to a topic, and the events are consumed by subscribers, in your case the other event hubs. You also get the advantages of micro-billing (serverless) and auto-scaling: https://learn.microsoft.com/en-us/azure/event-grid/overview
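As a rough illustration of that suggestion (not from the original answer): publishing with the message type as the event type lets Event Grid subscriptions do the filtering, so the router's switch statement disappears. The endpoint, key, and payload below are hypothetical, and the API shape assumes the Azure.Messaging.EventGrid package; treat this as a sketch rather than a drop-in sample.
using Azure;
using Azure.Messaging.EventGrid;

// Hypothetical topic endpoint and key; in practice these come from configuration.
var client = new EventGridPublisherClient(
    new Uri("https://my-topic.westeurope-1.eventgrid.azure.net/api/events"),
    new AzureKeyCredential("<topic-key>"));

// Constructor arguments: subject, eventType, dataVersion, data.
// Subscribers (e.g. the downstream event hubs) filter on eventType,
// so no routing code is needed here at all.
await client.SendEventAsync(new EventGridEvent(
    "devices/device-42", "msg1event", "1.0", new { temperature = 21.5 }));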

SignalR client misses some events at a regular interval

I have a straightforward SignalR setup: OWIN-hosted .NET server and JavaScript client (both # v2.1.1). The client uses SignalR to synchronize its copy of an ordered event stream maintained in an Rx ReplaySubject on the server. When a client connects, it provides a startAfter query parameter that is used to initialize an IObserver against the ReplaySubject, and this observer then sends each event in the observed sequence to the client. Each event has a sequence number, and the client can tell, based on the event sequence number, if any event is missing in the sequence. (Which would be a serious problem in this application.)
The problem is that the client regularly receives only portions of the event sequence. In fact, there is a regular pattern to this. For every 250 events there is a large gap. So for example, each test shows that the first gap was from somewhere between 70 and 80 to 250. Why always 250? And from there on, the "skip-to" point is always in intervals of 250; e.g., a gap from 263 to 500, then one from 511 to 750, etc.. I have to assume that this is some kind of default buffer size.
Also, the first time a client connects to the server it always receives the entire sequence just fine. It's the subsequent connections that exhibit the regular skipping problem. So it seems like it's a server-side problem, and not a client problem at all.
I then added some checks to the server to ensure that the IObserver for each client is seeing all of the events in the correct order. It is. So it seems almost certain that the problem is on the SignalR server side and has nothing to do with Rx.
And finally, I checked to see if the dropped messages were perhaps just being delivered out of order (which I could live with, although I assumed SignalR provides an ordered-delivery guarantee). They are not - the messages just disappear into a void.
If it helps, I'm currently running locally, with IIS Express on Win 8.1 x64 and testing on IE Developer Channel as well as Chrome 36. The connection is using WebSockets. I couldn't find any reference to 250 as a special quantity in either the SignalR source (client or server) or the Rx.Net source.
Any suggestions on troubleshooting? I'd love to find a stable solution before I start building a complicated workaround.
Here's the relevant server-side code:
public class AllEventsReplaySource
{
    private readonly IHubConnectionContext<dynamic> clients;
    private readonly ReplaySubject<dynamic> allEvents;

    private AllEventsReplaySource(IHubConnectionContext<dynamic> clients)
    {
        this.clients = clients;
        this.allEvents = new ReplaySubject<dynamic>();
        // (Not shown: code that generates the input to the ReplaySubject.)
    }

    public void SubscribeClient(string connectionId, int startAfter)
    {
        this.allEvents.Skip(startAfter).Subscribe(e =>
        {
            // (Not shown: code that verifies no skips are occurring at this point for a client.)
            clients.Client(connectionId).notifyEvent(e);
        });
    }

    private readonly static Lazy<AllEventsReplaySource> instance =
        new Lazy<AllEventsReplaySource>(() => new AllEventsReplaySource(
            GlobalHost.ConnectionManager.GetHubContext<AllEventsReplayHub>().Clients));

    public static AllEventsReplaySource Instance
    {
        get { return instance.Value; }
    }
}
[HubName("allEventsReplayHub")]
public class AllEventsReplayHub : Hub
{
    private readonly AllEventsReplaySource source;

    public AllEventsReplayHub()
        : this(AllEventsReplaySource.Instance)
    { }

    public AllEventsReplayHub(AllEventsReplaySource source)
    {
        this.source = source;
    }

    public override Task OnConnected()
    {
        var previousSequenceNumber = Int32.Parse(Context.QueryString["startAfter"]);
        var connectionId = this.Context.ConnectionId;
        this.source.SubscribeClient(connectionId, previousSequenceNumber);
        return base.OnConnected();
    }
}
The issue you are experiencing seems consistent with a message buffer overflow. When SignalR releases messages from its buffer, it does so in 250 message fragments by default.
SignalR will buffer at least the last 1000 messages sent to a given connectionId. This means that when you send the 1251st message, the first 250 get dereferenced by the buffer. This explains why when a client first connects to the server, it receives the entire sequence of messages. You have to send at least 1251 messages to a given client before the buffer will drop fragments. Again, this is all assuming default settings.
While you could increase the DefaultMessageBufferSize, that probably will not fix your root problem. It seems that you are trying to send messages faster than the server can send them to the client. If you do that continuously, you will run out of buffer space no matter the size.
It's more common to reduce the DefaultMessageBufferSize rather than increase it, since the buffers can consume a lot of memory, especially if you are sending a lot of large unique messages to many different clients.
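If you do decide to tune it, the buffer size is a single global setting applied at startup, for example in Application_Start before the SignalR pipeline is mapped (500 here is just an example value):
// Global.asax, Application_Start — must run before MapSignalR.
GlobalHost.Configuration.DefaultMessageBufferSize = 500;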
Your best bet to avoid overrunning the buffer is to have the client send an ACK at least every 1000 messages. Given this, it might be possible to avoid sending over 1000 unACKed messages thereby avoiding this problem altogether.
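One possible shape for that, sketched against the hub above; the Ack method, the window size, and the OnClientAck bookkeeping are all hypothetical names, and the pause/resume logic inside the replay source is not shown:
// Hypothetical addition to AllEventsReplayHub: the client calls Ack after
// every batch of events, and the replay source caps unACKed messages in flight.
public void Ack(int lastSequenceNumberSeen)
{
    // (Hypothetical method) resume the paused observer for this connection
    // once the client has confirmed receipt up to lastSequenceNumberSeen.
    AllEventsReplaySource.Instance.OnClientAck(Context.ConnectionId, lastSequenceNumberSeen);
}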
By the way, you can take a look at SignalR's message buffer implementation yourself if you feel so inclined. Note that the capacity constructor argument is the DefaultMessageBufferSize.

async and await: are they bad?

We recently developed a site based on SOA, but it ended up having terrible load and performance issues when it went under load. I posted a question related to this issue here:
ASP.NET website becomes unresponsive under load
The site is made up of an API (Web API) site, hosted on a 4-node cluster, and a web site, hosted on another 4-node cluster, which makes calls to the API. Both are developed using ASP.NET MVC 5, and all actions/methods are based on the async-await pattern.
After running the site under monitoring tools such as New Relic, investigating several dump files, and profiling the worker process, it turned out that under a very light load (e.g. 16 concurrent users) we ended up with around 900 threads which utilized 100% of the CPU and filled up the IIS thread queue!
Even though we managed to deploy the site to the production environment by introducing heaps of caching and performance amendments, many developers on our team believe that we have to remove all the async methods and convert both the API and the web site to normal Web API and action methods which simply return an ActionResult.
I personally am not happy with that approach, because my gut feeling is that we have not used the async methods properly; otherwise it would mean that Microsoft has introduced a feature that is basically destructive and unusable!
Do you know of any reference that makes clear where and how async methods should/can be used? How should we use them to avoid such dramas? E.g. based on what I read on MSDN, I believe the API layer should be async, but the web site could be a normal non-async ASP.NET MVC site.
Update:
Here is the async method that makes all the communications with the API.
public static async Task<T> GetApiResponse<T>(object parameters, string action, CancellationToken ctk)
{
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(BaseApiAddress);
        var formatter = new JsonMediaTypeFormatter();
        return
            await
                httpClient.PostAsJsonAsync(action, parameters, ctk)
                    .ContinueWith(x => x.Result.Content.ReadAsAsync<T>(new[] { formatter }).Result, ctk);
    }
}
Is there anything silly with this method? Note that when we converted all the methods to non-async methods, we got heaps better performance.
Here is a sample usage (I've cut the other bits of the code, which were related to validation, logging, etc. This code is the body of an MVC action method).
In our service wrapper:
public async static Task<IList<DownloadType>> GetSupportedContentTypes()
{
    string userAgent = Request.UserAgent;
    var parameters = new { Util.AppKey, Util.StoreId, QueryParameters = new { UserAgent = userAgent } };
    var taskResponse = await Util.GetApiResponse<ApiResponse<SearchResponse<ProductItem>>>(
        parameters,
        "api/Content/ContentTypeSummary",
        default(CancellationToken));
    return taskResponse.Data.Groups.Select(x => x.DownloadType()).ToList();
}
And in the action:
public async Task<ActionResult> DownloadTypes()
{
    IList<DownloadType> supportedTypes = await ContentService.GetSupportedContentTypes();
    // ... (rest of the action elided in the original post)
"Is there anything silly with this method? Note that when we converted all the methods to non-async methods, we got heaps better performance."
I can see at least two things going wrong here:
public static async Task<T> GetApiResponse<T>(object parameters, string action, CancellationToken ctk)
{
    using (var httpClient = new HttpClient())
    {
        httpClient.BaseAddress = new Uri(BaseApiAddress);
        var formatter = new JsonMediaTypeFormatter();
        return
            await
                httpClient.PostAsJsonAsync(action, parameters, ctk)
                    .ContinueWith(x => x.Result.Content
                        .ReadAsAsync<T>(new[] { formatter }).Result, ctk);
    }
}
Firstly, the lambda you're passing to ContinueWith is blocking:
x => x.Result.Content.ReadAsAsync<T>(new[] { formatter }).Result
This is equivalent to:
x => {
    var task = x.Result.Content.ReadAsAsync<T>(new[] { formatter });
    task.Wait();
    return task.Result;
};
Thus, you're blocking the pool thread on which the lambda happens to be executed. This effectively kills the advantage of the naturally asynchronous ReadAsAsync API and reduces the scalability of your web app. Watch out for other places like this in your code.
Secondly, an ASP.NET request is handled by a server thread with a special synchronization context installed on it, AspNetSynchronizationContext. When you use await for continuation, the continuation callback will be posted to the same synchronization context, the compiler-generated code will take care of this. OTOH, when you use ContinueWith, this doesn't happen automatically.
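A small illustration of the difference (SomeIoAsync is a hypothetical stand-in for any awaitable IO call):
// Inside an async ASP.NET action: await resumes on AspNetSynchronizationContext,
// so ambient request state is still available after the await.
await SomeIoAsync();
var ctx = System.Web.HttpContext.Current; // still non-null here

// A ContinueWith continuation runs on the default (thread pool) scheduler
// unless a scheduler is passed explicitly, so the request context is not flowed.
await SomeIoAsync().ContinueWith(t =>
{
    var lost = System.Web.HttpContext.Current; // typically null here
});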
Thus, you need to explicitly provide the correct task scheduler, remove the blocking .Result (this will return a task) and Unwrap the nested task:
return
    await
        httpClient.PostAsJsonAsync(action, parameters, ctk).ContinueWith(
            x => x.Result.Content.ReadAsAsync<T>(new[] { formatter }),
            ctk,
            TaskContinuationOptions.None,
            TaskScheduler.FromCurrentSynchronizationContext()).Unwrap();
That said, you really don't need such added complexity of ContinueWith here:
var x = await httpClient.PostAsJsonAsync(action, parameters, ctk);
return await x.Content.ReadAsAsync<T>(new[] { formatter });
The following article by Stephen Toub is highly relevant:
"Async Performance: Understanding the Costs of Async and Await".
"If I have to call an async method in a sync context, where using await is not possible, what is the best way of doing it?"
You should almost never need to mix await and ContinueWith; stick with await. Basically, if you use async, it's got to be async "all the way".
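As a sketch of why blocking on async code is dangerous in ASP.NET (the classic sync-over-async deadlock; MyDto and the route string are hypothetical, but GetApiResponse is the method from the question):
public ActionResult Bad()
{
    // .Result blocks the request thread while it holds AspNetSynchronizationContext;
    // the await inside GetApiResponse needs that same context to resume -> deadlock.
    var data = Util.GetApiResponse<MyDto>(new { }, "api/some/action", CancellationToken.None).Result;
    return View(data);
}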
For the server-side ASP.NET MVC / Web API execution environment, it simply means the controller method should be async and return a Task or Task<>; check this. ASP.NET keeps track of pending tasks for a given HTTP request, and the request is not completed until all of them have completed.
If you really need to call an async method from a synchronous method in ASP.NET, you can use AsyncManager like this to register a pending task. For classic ASP.NET, you can use PageAsyncTask.
In the worst case, you'd call task.Wait() and block, because otherwise your task might continue outside the boundaries of that particular HTTP request.
For client-side UI apps, different scenarios are possible for calling an async method from a synchronous method. For example, you can use ContinueWith(action, TaskScheduler.FromCurrentSynchronizationContext()) and fire a completion event from action (like this).
async and await should not create a large number of threads, particularly not with just 16 users. In fact, they should help you make better use of threads. The purpose of async and await in MVC is to give up the thread-pool thread while it would otherwise sit blocked on IO-bound work. This suggests to me that you are doing something silly somewhere, such as spawning threads and then waiting indefinitely.
Still, 900 threads is not really a lot, and if they're using 100% CPU, then they're not waiting... they're chewing on something. It's this something that you should be looking into. You said you have used tools like New Relic; what did they point to as the source of this CPU usage? Which methods?
If I were you, I would first prove that merely using async and await is not the cause of your problems. Simply create a simple site that mimics the behavior, and then run the same tests on it (see the sketch below).
Second, take a copy of your app and start stripping stuff out, then run the tests against it. See if you can track down where exactly the problem is.
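For instance, a minimal pair of probe actions like this (illustrative only, with a hard-coded 10-second wait) makes it easy to compare await-based waiting with blocking under the same load test:
public class ProbeController : Controller
{
    // Async version: the thread-pool thread is released for the duration of the delay.
    public async Task<ActionResult> AsyncProbe()
    {
        await Task.Delay(10000);
        return Content("done");
    }

    // Sync version: the thread-pool thread is held for the full 10 seconds.
    public ActionResult SyncProbe()
    {
        System.Threading.Thread.Sleep(10000);
        return Content("done");
    }
}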
There is a lot of stuff to discuss here.
First of all, async/await helps naturally when your application has almost no business logic. The point of async/await is to avoid having many threads sleeping while waiting for something, mostly IO, e.g. database queries (and fetching). If your application does heavy business logic that uses the CPU at 100%, async/await does not help you.
The problem with 900 threads is that they are inefficient if they run concurrently. The point is that it's better to have about as many "business" threads as your server has cores/processors. The reason is thread context switching, lock contention, and so on. There are a lot of systems, like the LMAX Disruptor pattern or Redis, which process data on one thread (or one thread per core). It's just better, as you do not have to handle locking.
How do you reach the described approach? Look at the Disruptor: queue incoming requests and process them one by one instead of in parallel.
The opposite situation, when there is almost no business logic and many threads just wait for IO, is a good place to put async/await to work.
How it mostly works: there is a thread that reads bytes from the network, usually only one. Once a request arrives, this thread reads the data. There is also a limited thread pool of workers that processes requests. The point of async is that once a processing thread is waiting for something, mostly IO or the DB, the thread is returned to the pool and can be used for another request. Once the IO response is ready, some thread from the pool is used to finish the processing. This is how you can use a few threads to serve thousands of requests per second.
I would suggest that you draw a picture of how your site works, what each thread does, and how concurrently it all runs. Note that it's necessary to decide whether throughput or latency is important to you.
