Why isn't an HTTP server consuming a lot of CPU? [closed]

The server should respond as soon as possible, so isn't the server process always polling to see if there are requests?
That would be like a while loop. But why isn't the CPU (a single core) fully consumed when there are no visits?

isn't the server process always polling?
Not if it's a reasonable implementation.
There are many implementations of HTTP servers, and of communication servers in general, and polling is not a suitable architecture for any of them.
For example, some servers rely on asynchronous I/O using events, callbacks, and so on. Others rely on blocking socket APIs while operating in multi-threaded mode, and there are other architectures as well...

isn't the server process always polling to see if there are requests?
No. It is blocking, in either an accept() call or a select() call. No polling.
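A minimal sketch in Java of what "blocking" means here (the port and handler are illustrative): there is a while loop, but it spends nearly all its time suspended inside accept(), which parks the thread in the kernel until a connection arrives, so an idle server burns essentially no CPU.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class BlockingServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    // Blocks here: the OS deschedules the thread until a
                    // client connects. No busy-waiting, no CPU spinning.
                    Socket client = server.accept();
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (client) {
                // Hypothetical minimal response; a real server would
                // parse the request first.
                client.getOutputStream().write("HTTP/1.1 200 OK\r\n\r\n".getBytes());
            } catch (IOException ignored) {
            }
        }
    }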

Related

Reactive Spring, CompletableFuture doubts [closed]

I have a Spring Boot application with a REST controller.
It gets invoked from Angular and immediately responds with a "background work started" message.
The controller has also launched a long-running service method annotated with @Async, which completes after about 10 minutes. When it completes, the service pushes a message over a WebSocket topic that the front end expects.
This lets the front end know that the background processing is done, and it enables a new link that the user can now click.
This is loosely based on https://spring.io/guides/gs/messaging-stomp-websocket/.
Some questions:
Can this objective also be achieved using reactive programming or CompletableFuture, without the WebSocket usage described above?
Does it make sense for a controller to call CompletableFuture.get() or .join() and thereby wait for the asynchronous service to finish? Isn't that blocking as well?
Can reactive streams be used for this? Could we instead return a stream of two elements, where the first is the "background work started" message and the second is "background work done"?
Does this make sense? Is there a non-blocking example of this anywhere?
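For reference, a minimal sketch of the setup the question describes, loosely following the linked Spring guide; the class names, the /start endpoint, and the /topic/done destination are all hypothetical:

    import org.springframework.messaging.simp.SimpMessagingTemplate;
    import org.springframework.scheduling.annotation.Async;
    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    class WorkController {
        private final WorkService workService;

        WorkController(WorkService workService) {
            this.workService = workService;
        }

        @PostMapping("/start")
        String start() {
            workService.runLongTask();        // @Async: returns immediately
            return "background work started"; // immediate response to Angular
        }
    }

    @Service
    class WorkService {
        private final SimpMessagingTemplate messagingTemplate;

        WorkService(SimpMessagingTemplate messagingTemplate) {
            this.messagingTemplate = messagingTemplate;
        }

        @Async // requires @EnableAsync on a configuration class
        public void runLongTask() {
            // ... roughly 10 minutes of work ...
            // Push to the WebSocket topic the front end subscribes to.
            messagingTemplate.convertAndSend("/topic/done", "background work done");
        }
    }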

Concurrency model in gRPC server in Golang

I have created a sample gRPC client and server in Go (using protobufs). I understand Go's concurrency model. However, I am trying to understand the concurrency model of a server accepting parallel requests from the same client (multiple goroutines on the client side) or from multiple clients.
More specifically:
When a new gRPC call comes in, does the server create a new goroutine?
What data is shared by these goroutines? Does grpcServer.Serve set the boundary for data shared across goroutines, i.e. is everything set up before it shared? (I am thinking of threads in Java, where the threads share global data.)
When a new gRPC call comes in, does the server create a new goroutine?
Yes, and it is highly likely that it creates many concurrent goroutines to handle each connection and request (especially streaming requests).
What data is shared by these goroutines?
I think this question is too broad. There is too much code in both the net/http2 and google.golang.org/grpc packages to answer without deep investigation. However, we can be sure that these goroutines share at least the server itself, because ServeConn is not a free function but a method defined on the http2.Server type.
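On the Java analogy in the question: the boundary works the same way there; anything created before the worker threads start is shared by all of them and must be thread-safe, while locals inside each handler are not shared. A minimal Java illustration of that boundary (the names are hypothetical, not grpc-go internals):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicLong;

    public class SharedStateDemo {
        // Created before any handler runs: shared by every handler thread,
        // so it must be thread-safe (here, an AtomicLong).
        private static final AtomicLong requestCount = new AtomicLong();

        public static void main(String[] args) {
            ExecutorService handlers = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10; i++) {
                handlers.submit(() -> {
                    // Every "request handler" sees the same counter ...
                    long n = requestCount.incrementAndGet();
                    // ... but locals like n are per-handler, never shared.
                    System.out.println("handled request #" + n);
                });
            }
            handlers.shutdown();
        }
    }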

Parallel web service calls or Akka actors

I've only just learnt about Akka and actors and am unsure whether to use them for the following use case in a Play Framework application.
What I want to do is have a user make a web request, which then triggers somewhere between 20 and 50 Yelp API calls (in parallel); I then get those responses, do some processing, and combine them into a single result.
What I want to know is whether using Akka actors will improve scalability and decrease the response time for the user. Will using actors give me any benefit over just making the WS calls in my controller code (using a future for each)?
I think I am having trouble understanding what benefit actors might give me in this scenario as opposed to just a bunch of async web requests.
Actors themselves are just a means to achieve and manage parallelism. What you gain by, for example, spinning up one actor per async request is supervision hierarchies. A master actor can decide to respond with a failure message transparently as soon as one of the requests fails, or only after a number of them have failed, etc. You can issue restarts (retries of these requests) and so on. Without actors you would have to implement all this supervision logic yourself, which gets quite repetitive after a while.
To answer your question about scalability and response times: sadly, "it depends" on your exact access patterns, but in general yes, because you can fine-tune actors far better than a single asynchronous HTTP client, and you can implement retries more easily.
If you want to see how to collect responses from many async requests using actors, refer to the Stack Overflow answer on "Waiting on multiple Akka messages".
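For comparison, the plain-futures baseline the question mentions (a future per call, then combine) might look like this in Java; the endpoint list stands in for the hypothetical Yelp URLs:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class FanOut {
        public static void main(String[] args) {
            HttpClient client = HttpClient.newHttpClient();
            List<URI> endpoints = List.of(
                    URI.create("https://api.example.com/a"),
                    URI.create("https://api.example.com/b"));

            // One future per call; all requests are in flight concurrently.
            List<CompletableFuture<String>> futures = endpoints.stream()
                    .map(uri -> client
                            .sendAsync(HttpRequest.newBuilder(uri).build(),
                                       HttpResponse.BodyHandlers.ofString())
                            .thenApply(HttpResponse::body))
                    .collect(Collectors.toList());

            // Combine: completes when every response has arrived.
            String combined = CompletableFuture
                    .allOf(futures.toArray(new CompletableFuture[0]))
                    .thenApply(v -> futures.stream()
                            .map(CompletableFuture::join)
                            .collect(Collectors.joining("\n")))
                    .join();
            System.out.println(combined);
        }
    }

What actors add on top of this is the supervision described above: retries and failure handling for the whole batch live in one place instead of being wired into each future chain.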
Hope this helps.

JMS messages and load balancing of JMS messages [closed]

I have this design problem and am trying to find the best way to implement it using JMS.
The web app has a single Listener, multiple Producers, and multiple WorkerBees. The Listener pushes messages from its queue to the WorkerBees. All of the messages produced are queued in the Listener. I want to implement a process that can send messages from the Listener queue to each WorkerBee using some kind of load balancing. Only a single instance of each message may exist, either in the Listener queue or in a WorkerBee queue.
What's the best way to do it? Does JMS sound like a good choice here? How would I push messages from the Listener queue to a WorkerBee queue?
Open to suggestions :) I appreciate your responses.
Messaging is a good choice for such a task. I think you should use the Apache Camel framework as the routing platform here. It lets you separate business logic from the integration layer in quite a graceful way.
Camel supports plenty of components out of the box, including JMS endpoints.
You can delegate to one of your WorkerBees using the Load Balancer pattern.
JMS sounds like a good option. Since you have multiple instances of WorkerBee, make all of them listen on a single queue, e.g. InWorkBeeQueue.
Now you can publish messages from the web app Listener to InWorkBeeQueue. Write a simple Java JMS producer to publish messages to this queue. Whichever WorkerBee instance is free will read the next message from InWorkBeeQueue and process it.
If you want to avoid writing new JMS producer code, you can map messages from the web app queue to InWorkBeeQueue directly using Apache Camel routes.
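A minimal sketch of that competing-consumers setup in Java (assuming ActiveMQ as the broker; the queue name follows the answer above, everything else is illustrative):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class WorkerBee {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("InWorkBeeQueue");

            // Every WorkerBee instance runs this same consumer. The broker
            // delivers each message to exactly one of them, which gives the
            // load balancing (and the single-instance guarantee) for free.
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(message -> {
                try {
                    System.out.println("Processing: " + ((TextMessage) message).getText());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            connection.start();
        }
    }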

How does Facebook chat avoid continuous polling of the server? [closed]

I am trying to understand how Facebook's chat feature receives messages without continuously polling the server.
Firebug shows me a single GET XmlHttpRequest sitting there, waiting for a response from the server. After 5 minutes it still had not timed out.
How are they preventing the timeout?
Can an AJAX request just sit there like that indefinitely, waiting for a response?
Can I do this with JSONRequest? I see this at json.org:
JSONRequest is designed to support duplex connections. This permits applications in which the server can asynchronously initiate transmissions. This is done by using two simultaneous requests: one to send and the other to receive. By using the timeout parameter, a POST request can be left pending until the server determines that it has timely data to send.
Or is there another way to let an AJAX call just sit there, waiting, besides using JSONRequest?
Facebook uses a technique which is now called Comet to push messages from the server to the client instead of having the client poll the server.
There are many ways this can be implemented, with XMLHttpRequest long polling being just one option. The principle behind this method is that the client sends an ordinary XMLHttpRequest, but the server doesn't respond until some event happens (such as another user sending a message), so the client is forced to wait. When the client receives a response (or when the request times out), it simply creates a new request, so that it always has one open request to the server.
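The browser does this with XMLHttpRequest; the same loop, sketched in Java with java.net.http so it is runnable stand-alone (the /poll endpoint is hypothetical):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.net.http.HttpTimeoutException;
    import java.time.Duration;

    public class LongPollClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/poll"))
                    .timeout(Duration.ofMinutes(5)) // let the request sit and wait
                    .GET()
                    .build();
            while (true) {
                try {
                    // Blocks until the server decides it has data to send.
                    HttpResponse<String> response =
                            client.send(request, HttpResponse.BodyHandlers.ofString());
                    System.out.println("event: " + response.body());
                } catch (HttpTimeoutException e) {
                    // No event before the timeout: fall through and reconnect.
                }
                // Immediately reissue, so there is always one pending request.
            }
        }
    }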
