In an MVC application, when control goes to the client and comes back via AJAX, is the thread that handles the returning request the same thread as before, or a new thread from the thread pool?
It might be, but there is no guarantee; if that happens, it is mere luck.
From "Performing Asynchronous Work, or Tasks, in ASP.NET Applications":
New requests are received by HTTP.sys, a kernel driver. HTTP.sys posts the request to an I/O completion port on which IIS listens. IIS picks up the request on one of its thread pool threads and calls into ASP.NET where ASP.NET immediately posts the request to the CLR ThreadPool and returns a pending status to IIS.
and it continues:
To execute it, we raise all of the pipeline events and the modules and handlers in the pipeline work on the request, typically while remaining on the same thread, but they can alternatively handle these events asynchronously.
So for one request you can be on the same thread.
As your AJAX request is a brand new call through the HTTP.sys kernel driver, with a new handover to the managed thread pool, it is highly unlikely that the same thread will be reused. If threads were pinned to clients that way, your web application would need far more threads than it can sustain, slowing the web server to a crawl.
If you want to mimic the effect of reusing the same thread, you must hook into one of the state-management-related request lifecycle events.
I'm new to Java reactive development. After spending some time reading the docs, I think I have an idea of how it works:
Instead of blocking a thread (e.g. the thread for a web request) while loading data over the network, a new thread spawns and subscribes to the publisher (network I/O). Is the subscriber thread now BLOCKED or WAITING? Or is its 'state' saved while the thread does other things until the data arrives (and if so: how is this thread reactivated)?
I have come across many blogs saying that using RabbitMQ improves the performance of microservices due to its asynchronous nature.
I don't understand, in that case, how the HTTP response is sent to the end user. I elaborate my question below more clearly:
The user sends an HTTP request to microservice1 (the user-facing service).
microservice1 sends it to RabbitMQ because it needs some service from microservice2.
microservice2 receives the request, processes it, and sends the response to RabbitMQ.
microservice1 receives the response from RabbitMQ.
NOW, how is this response sent to the browser?
Does microservice1 wait until it receives the response from RabbitMQ?
If yes, then how is it asynchronous?
It's a good question. To answer it, picture how the server uses its threads. Making a request to a microservice via RestTemplate is a blocking call. The user clicks a button on the web page, which triggers your Spring Boot method in microservice1. In that method, you make a request to microservice2, and microservice1 does a blocking wait for the response.
That thread is busy waiting for microservice2 to complete the request. Threads are not expensive, but on a very busy server, they can be a limiting factor.
RabbitMQ allows microservice1 to queue up a message to microservice2 and then release the thread. Your message receiver will be triggered by the system (Spring Boot / RabbitMQ) when microservice2 processes the message and provides a response. The original thread in the thread pool can be used to process other users' requests in the meantime. When the RabbitMQ response comes, the thread pool uses an idle thread to process the remainder of the request.
Effectively, you're making the server running microservice1 have more threads available more of the time. Thread availability only becomes a problem when the server is under heavy load.
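To make that concrete, here is a minimal sketch of the idea using Spring AMQP; the queue names, endpoint, and payload type are assumptions for illustration, not a prescribed design:

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class Microservice1Controller {

        private final RabbitTemplate rabbitTemplate;

        public Microservice1Controller(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        @PostMapping("/work")
        public ResponseEntity<String> submitWork(@RequestBody String payload) {
            // Hand the work to microservice2 via the queue and release this
            // thread immediately; there is no blocking wait for the reply.
            rabbitTemplate.convertAndSend("work.requests", payload);
            return ResponseEntity.accepted().body("queued");
        }

        // The reply arrives later on a listener thread managed by Spring/RabbitMQ;
        // the HTTP thread pool was free to serve other users in the meantime.
        @RabbitListener(queues = "work.responses")
        public void onResponse(String response) {
            // e.g. persist the result so a later poll (or push) can deliver it
        }
    }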
Good question, let's discuss it one piece at a time.
Synchronous behavior:
The client sends an HTTP (or any other) request and waits for the HTTP response.
Asynchronous behavior:
The client sends the request, and another thread waits on the socket for the response. Once the response arrives, the original sender is notified (usually using a callback-like structure).
Now we can talk about blocking vs. non-blocking calls.
When you use Spring's RestTemplate, each call occupies a thread that blocks waiting for the response, tying up resources for the duration of the network call. With a non-blocking client, calls are multiplexed over a small number of threads, and the response is pushed back via a callback without any thread blocking on the network.
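As a rough illustration of the difference (the URL is a placeholder), compare RestTemplate with the reactive WebClient:

    import org.springframework.web.client.RestTemplate;
    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.core.publisher.Mono;

    public class BlockingVsNonBlocking {
        public static void main(String[] args) throws InterruptedException {
            // Blocking: the calling thread waits here until the response arrives.
            String blocking = new RestTemplate()
                    .getForObject("http://example.com/api", String.class);
            System.out.println(blocking);

            // Non-blocking: the call returns a Mono immediately; the callback
            // runs on an event-loop thread once the response arrives.
            Mono<String> nonBlocking = WebClient.create("http://example.com")
                    .get().uri("/api")
                    .retrieve()
                    .bodyToMono(String.class);
            nonBlocking.subscribe(System.out::println);

            Thread.sleep(2000); // keep this demo JVM alive until the callback fires
        }
    }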
Now, to your question:
"Using RabbitMQ improves the performance of microservices due to the asynchronous nature of RabbitMQ."
No. Performance depends on your TPS (transactions-per-second) load, and RabbitMQ by itself is not going to improve it.
Messaging gives you two different messaging models:
Synchronous messaging
Asynchronous messaging
With messaging you get loose coupling and fault tolerance.
If your application needs a blocking call (i.e., it cannot proceed until the response arrives), use REST.
If you can proceed without waiting for the response, go with non-blocking calls.
If you want to design your app to be loosely coupled, go with messaging.
In short, these are all architectural styles; the choice is about how you want to architect your application, and performance depends on scalability.
You can also combine approaches: REST with messaging, or non-blocking calls with messaging.
In your scenario, microservice1 could accept a blocking REST call, invoke the other API using RestTemplate or WebClient (or a message queue), and, once it gets the response, return the JSON to your web app.
I would take another look at your architecture. In general with microservices - especially user-facing ones that must be essentially synchronous - it's an anti-pattern for ServiceA to have to call ServiceB (which may, in turn, call ServiceC and so on...) to return a response. That condition indicates the services are tightly coupled, which makes them fragile. For example: if ServiceB goes down or is overloaded in your example, ServiceA also goes offline through no fault of its own. So, probably one or more of the following should occur:
Deploy the related services behind a facade that encloses the entire domain - let the client interact synchronously with the facade and let the facade handle talking to multiple services behind the scenes.
Use MQTT or AMQP to publish data as it gets added/changed in ServiceB and have ServiceA subscribe to pick up what it needs so that it can fulfill the user request without explicitly calling another service
Consider merging ServiceA and ServiceB into a single service that can handle requests without having to make external calls
You can also send the HTTP request from the client to the service, set the application state to "waiting" or similar, and have the consuming application subscribe to an eventSuccess or eventFail integration message from the bus. The main point of this idea is that you let daisy-chained services (which, again, I don't like) take their turns, and whichever service "finishes" the job publishes an integration event to let anyone who's listening know. You can even do things like pass webhook URIs with the initial request so services can call the app back directly on completion (or use SignalR, or gRPC, or...)
The way we use RabbitMQ is to integrate services in real time so that each service always has the info it needs to be responsive all by itself. To use your example, in our world ServiceB publishes events when data changes. ServiceA only cares about, and subscribes to, a small subset of those events (and typically only a field or two of the event data), but it knows within seconds (usually less) when B has changed, and it has all the information it needs to respond to requests. Each service literally has no idea what other services exist; it just knows that events it cares about (and that conform to a contract) arrive from time to time and that it needs to pay attention to them.
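A hedged sketch of what such a subscriber might look like with Spring AMQP; the queue name and event shape are invented for illustration, and a JSON message converter is assumed to be configured:

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.stereotype.Component;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    @Component
    public class ServiceAEventHandler {

        // Local, eventually-consistent copy of just the fields ServiceA needs.
        private final Map<String, String> customerNamesById = new ConcurrentHashMap<>();

        // Queue name and event payload are illustrative assumptions.
        @RabbitListener(queues = "serviceB.customer.changed")
        public void onCustomerChanged(CustomerChangedEvent event) {
            // Store only the subset ServiceA cares about, so it can answer
            // user requests later without calling ServiceB at all.
            customerNamesById.put(event.customerId(), event.displayName());
        }

        // Hypothetical event contract.
        public record CustomerChangedEvent(String customerId, String displayName) {}
    }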
You could also use events and make the whole flow async. In this scenario, microservice1 creates an event representing the user request and then immediately returns a "request created" response to the user. You can then notify the user later, when the request has finished processing.
I recommend the book Designing Event-Driven Systems written by Ben Stopford.
I asked a similar question to Chris Richardson (www.microservices.io). The result was:
Option 1
You use something like WebSockets, so microservice1 can send the response when it's done.
Option 2
microservice1 responds immediately (OK - request accepted). The client then polls the server repeatedly until the state changes. The important part is that microservice1 stores some state about the request (e.g., an initial state of "accepted", so the client can show a spinner), which is modified when the response finally arrives (e.g., the state is updated to "complete").
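A minimal Spring sketch of Option 2, with invented endpoint names; the state store here is an in-memory map, which a real system would replace with a database or cache:

    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RestController;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    @RestController
    public class RequestStatusController {

        private final Map<String, String> stateByRequestId = new ConcurrentHashMap<>();

        @PostMapping("/requests")
        public ResponseEntity<String> accept() {
            String id = UUID.randomUUID().toString();
            stateByRequestId.put(id, "accepted");   // client can show a spinner
            // ...publish the actual work item to the queue here...
            return ResponseEntity.accepted().body(id);
        }

        // Called by whatever consumes the RabbitMQ response.
        public void markComplete(String id) {
            stateByRequestId.put(id, "complete");
        }

        @GetMapping("/requests/{id}")
        public String status(@PathVariable String id) {
            return stateByRequestId.getOrDefault(id, "unknown");
        }
    }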
I have created a REST API - in a few words, my client hits a particular URL and she gets back a JSON response.
Internally, quite a complicated process starts when the URL is hit, and there are various services involved as a microservice architecture is being used.
I was observing some performance bottlenecks and decided to switch to a message queue system. The idea is that now, once the user hits the URL, a request is published on an internal message queue, waiting to be consumed. A consumer processes it and publishes back on a queue, and this happens quite a few times until, finally, the same node servicing the user receives the processed response to deliver to the user.
An asynchronous "fire-and-forget" pattern is now being used. But my question is: how can the node servicing a particular person remember whom it was servicing once the processed result arrives back, and do so without blocking (i.e., so it can handle several requests while a response is pending)? If it makes any difference, my stack looks a little like this: Tomcat, Spring, Kubernetes and RabbitMQ.
In summary, how can the request node (whose job is to push items onto the queue) maintain an open connection with the client who requested a JSON response (i.e., the client is waiting for the JSON response) and receive back the data for the correct client?
You have a few different scenarios, depending on how much control you have over the client.
If the client's behaviour cannot be changed, you will have to keep the session open until the request has been fully processed. This can be achieved with a pool of workers (futures/coroutines, threads, or processes) where each worker keeps the session open for a given request.
This method has a few drawbacks and I would keep it as a last resort. First, you will only be able to serve a limited number of concurrent requests, proportional to your pool size. Second, as your processing sits behind a queue, your front-end won't be able to estimate how long a task will take to complete. This means you will have to deal with long-lasting sessions, which are prone to failure (what if the user gives up?).
If the client's behaviour can be changed, the most common approach is a fully asynchronous flow. When the client initiates a request, it is placed in the queue and a task identifier is returned. The client can use the given TaskId to poll for status updates. Each time the client requests an update on a task, you simply check whether it has completed and respond accordingly. A common pattern while a task is still in progress is to have the front-end tell the client the estimated time to wait before trying again. This lets your server control how frequently clients poll. If your architecture supports it, you can go the extra mile and provide information about progress as well.
Example response when task is in progress:
{"status": "in_progress",
"retry_after_seconds": 30,
"progress": "30%"}
A more complex yet elegant solution would be to use HTTP callbacks. In short, when the client makes a request for a new task, it provides a tuple (URL, method) the server can use to signal that processing is done. It then waits for the server to send the signal to the given URL. You can see a better explanation here. In most cases this solution is overkill, yet I think it's worth mentioning.
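For completeness, a small sketch of the callback side using Java's built-in HttpClient (Java 11+); the callback URL is assumed to come from the client's original request:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CompletionNotifier {

        private final HttpClient http = HttpClient.newHttpClient();

        // Invoked by the worker once the task result is ready.
        public void notifyClient(String callbackUrl, String resultJson) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(callbackUrl))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(resultJson))
                    .build();
            // Fire the signal; we don't care about the response body here.
            http.send(request, HttpResponse.BodyHandlers.discarding());
        }
    }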
One option would be to use DeferredResult, provided by Spring, but that means you need to maintain a pool of threads in the request-serving node, and the maximum number of active threads will determine the throughput of your system. For more details on how to implement DeferredResult, refer to this link: https://www.baeldung.com/spring-deferred-result
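A minimal DeferredResult sketch along those lines; the correlation-id handling and queue publishing are assumed, not shown:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.async.DeferredResult;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    @RestController
    public class DeferredResultController {

        // Pending responses keyed by a correlation id carried through the queue.
        private final Map<String, DeferredResult<String>> pending = new ConcurrentHashMap<>();

        @GetMapping("/process/{id}")
        public DeferredResult<String> process(@PathVariable String id) {
            // 30s timeout; the servlet thread is released as soon as we return.
            DeferredResult<String> result = new DeferredResult<>(30_000L);
            pending.put(id, result);
            // ...publish the work item (with the correlation id) to the queue here...
            return result;
        }

        // Called from the queue-consumer thread when the processed result comes back.
        public void complete(String id, String payload) {
            DeferredResult<String> result = pending.remove(id);
            if (result != null) {
                result.setResult(payload);   // resumes the suspended HTTP request
            }
        }
    }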
I have created a new dedicated HTTP thread pool in my Glassfish v3 instance, along with an associated network listener on a dedicated port. However, the newly created thread pool and network listener sit in the same VIRTUAL server as the existing HTTP thread pool. Essentially this means that a single virtual server has two network listeners and two thread pools.
The reason for this design is that I want the newly created thread pool to cater to longer HTTP requests (like a 50MB file download). The other HTTP thread pool will cater to relatively smaller requests like web-page downloads, diagnostic report stats, etc. The new thread pool makes sense because client requests tie up HTTP worker threads: the longer it takes a client to download a file (50MB), the longer those HTTP resources are tied up, which can lead to other HTTP requests being rejected. I don't expose the port externally; an Apache proxy pass takes care of routing my requests to the appropriate ports.
I wanted to understand if there is any flaw/drawback with this approach.
The Glassfish version I use is 3.1.1 (v3).
EDIT
Adding my comments from the responses below to add more clarity to the question
However, my question is to understand whether there are any issues with creating multiple thread pools under one virtual server. We usually create one thread pool per domain (or virtual server). By creating two thread pools (and listeners) in a single domain, am I violating anything, or is this considered normal practice?
The only caveat I found with this approach: say ports 8080 and 8085 are assigned to the two network listeners. All the requests accessible on port 8080 are also accessible through port 8085. Is this expected? But that is more of a testing concern, since I don't expose the ports externally anyway.
I think your approach makes sense.
As an alternative, you can use Servlet 3.0 asynchronous request processing and decide directly in Glassfish whether to delegate the long/heavy requests to a separate thread, rather than relying on Apache for it.
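For reference, a bare-bones Servlet 3.0 async sketch (the API available on Glassfish 3.x); the servlet path and transfer logic are placeholders:

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/download", asyncSupported = true)
    public class LargeDownloadServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();
            // Run the long transfer off the container's request thread; in
            // production you would use a managed executor, not a raw Runnable.
            ctx.start(() -> {
                try {
                    // ...stream the large file to ctx.getResponse() here...
                } finally {
                    ctx.complete();   // tells the container the response is done
                }
            });
        }
    }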
Consider scenarios below:
Assumption: you have no logic to determine the size of an incoming HTTP request and route it to a specific thread pool.
ThreadpoolA, which serves HTML pages and lightweight requests, has HTTP requests to serve and, let us say, is experiencing some latency due to thread synchronization issues. In this situation, with ThreadpoolA full, requests get routed to ThreadpoolB, which might have some capacity but may take far too long because it is busy completing its large HTTP requests. This leads to blocked threads irrespective of how urgently they need to be served.
Say you use a round-robin approach to route incoming HTTP requests to a specific thread pool. If two or more large requests arrive at the same time, both thread pools end up busy serving large requests, blocking or delaying the lightweight HTTP requests.
Assumption: you have logic to determine the size of an HTTP request and route it to a specific thread pool. This requires capturing statistics on how busy each thread pool is and routing requests based on the amount of work left in each pool. Otherwise, you might have a dedicated thread pool that handles only large HTTP requests and sits idle, with no utilization, until a request of the qualifying size arrives. Also, say you route requests of >=10MB to ThreadpoolB and <10MB to ThreadpoolA: the problem gets worse if you receive a few unexpected requests of, say, 9MB, which are still substantially large yet keep hitting ThreadpoolA while ThreadpoolB sits free, blocking your smaller HTTP requests. So the size threshold at which you switch routing is itself a key determinant of performance, and it depends purely on the workload characteristics of your application.
I am developing a client-side single-page-application (SPA) with AngularJS and ASP.Net WebAPI.
One of the features of the SPA includes uploading large CSV file, processing it on the server, and returning the output to the user.
Obviously, this kind of computation cannot be done synchronously within the request, and therefore I implemented an UploadController in charge of receiving the file, and a PollingController in charge of notifying the user when the computation is complete.
The client side application monitors the PollingController every few seconds.
I have no experience in Message Queues, but my gut tells me that they are required in this situation.
How would you recommend implementing this functionality in a non-blocking, efficient way?
Examples would be highly appreciated.
I've used message based service bus frameworks for this in the past.
You write an application (running as a Windows service) that listens for messages broadcast across an event bus.
Your frontend can publish these messages into the bus.
The most popular framework for this in .NET is NServiceBus, however it recently became commercial. You can also look into MassTransit, though this one has very poor documentation.
The workflow you would do:
MVC App accepts upload and places it into some directory accessible by the windows service
MVC App publishes "UploadReady" message.
Service receives message, processes file, and updates some state for the polling controller.
Polling controller watches for this state to change. Usually a DB record etc.
The nice bit about using a framework like this is that if your service goes down, or you redeploy it, any processing can queue and resume, so you won't have any downtime.
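The answer above is .NET-centric, but the workflow itself is stack-agnostic. Here is a rough Spring/RabbitMQ analogue with invented queue names, just to show the shape:

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class UploadWorkflow {

        private final RabbitTemplate rabbitTemplate;

        public UploadWorkflow(RabbitTemplate rabbitTemplate) {
            this.rabbitTemplate = rabbitTemplate;
        }

        // Step 2: after the upload is saved to shared storage, publish the event.
        public void announceUpload(String filePath) {
            rabbitTemplate.convertAndSend("upload.ready", filePath);
        }

        // Step 3: the worker picks the message up, processes the file, and
        // records completion state (e.g. a DB row) for the polling controller.
        @RabbitListener(queues = "upload.ready")
        public void onUploadReady(String filePath) {
            // ...process the CSV, then update the status record that step 4 polls...
        }
    }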
For long-running operations you need a separate Windows Service application (or Worker Role, if it is on Windows Azure). IIS may kill ASP.NET processes on pool recycling, and your operation would never finish.
A message queue is mostly for communication. You can use it between your web and worker parts, but it is not required unless your data is super critical. You can establish communication using a database, a cache, the file system, or 100 other different ways :)
You can use SignalR to notify your client when processing has finished.