I have a simple REST endpoint that executes a Postgres procedure.
The procedure returns the current state of a device.
For example, take 20 devices.
A client app connects to the API and makes 20 requests to that endpoint every second.
For x clients there are x*20 requests per second; for 2 clients, 40 requests.
This causes a heavy CPU load on the Postgres server once there are many clients and/or many devices.
I didn’t create it but I need to redesign it.
How can I limit concurrent queries to the database for this endpoint only? That would be a hot fix.
My second idea is to create a background worker that executes the query only once at a time; the endpoint would then fetch the data from memory, roughly as in the sketch below.
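A minimal sketch of that idea, assuming a Spring application with scheduling enabled; DeviceStateRepository and DeviceState are placeholder names, not anything from the real code:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Background worker: exactly one query per second, no matter how many
// clients are polling the endpoint. Requires @EnableScheduling.
@Component
public class DeviceStateCache {

    private final DeviceStateRepository repository; // wraps the Postgres procedure call
    private final AtomicReference<List<DeviceState>> snapshot =
            new AtomicReference<>(List.of());

    public DeviceStateCache(DeviceStateRepository repository) {
        this.repository = repository;
    }

    @Scheduled(fixedDelay = 1000)
    void refresh() {
        snapshot.set(repository.fetchCurrentStates());
    }

    // The REST endpoint reads from memory only.
    public List<DeviceState> currentStates() {
        return snapshot.get();
    }
}
```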
I would try the simple way first. Try to reduce
the number of database connections in the pool, OR
the number of worker threads in the built-in Tomcat.
A more flexible option would be to put the logic behind a thread pool that limits the number of worker threads. This is not trivial if the Spring context and the database are used inside a worker. Take a look at the Spring annotation @Async.
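For example, a minimal sketch of a bounded pool behind @Async; the pool size, the dbExecutor name and the queryDeviceStates service are assumptions:

```java
import java.util.concurrent.CompletableFuture;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {

    // Bounded pool: at most 4 queries hit Postgres at once,
    // the rest wait in the queue.
    @Bean(name = "dbExecutor")
    public ThreadPoolTaskExecutor dbExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(100);
        executor.initialize();
        return executor;
    }
}

@Service
class DeviceStateService {

    // Runs on the bounded pool instead of a Tomcat request thread.
    @Async("dbExecutor")
    public CompletableFuture<String> queryDeviceStates() {
        // call the Postgres procedure here
        return CompletableFuture.completedFuture("...");
    }
}
```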
Off-topic: the solution we are discussing here looks like a workaround. On its own it will most probably increase the throughput only by a factor of 2, maybe 3. It is not JEE-conformant and it will most probably not be very stable. It is better to refactor the application to avoid such a problem in the first place. Another option would be to buy a new database server.
Update: a JEE-compliant solution would be to implement some sort of bulkhead pattern. It limits the number of concurrently running requests and rejects new ones once some critical number is reached. The server application then answers with "503 Service Unavailable"; the client application catches this status and retries a moment later (see "exponential backoff").
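A minimal sketch of such a bulkhead as a servlet filter; the limit of 10 concurrent requests is an arbitrary assumption:

```java
import java.io.IOException;
import java.util.concurrent.Semaphore;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// At most 10 requests pass through concurrently; everything beyond
// that is rejected with 503 so the client can back off and retry.
public class BulkheadFilter implements Filter {

    private final Semaphore permits = new Semaphore(10);

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        if (!permits.tryAcquire()) {
            ((HttpServletResponse) res).sendError(503); // Service Unavailable
            return;
        }
        try {
            chain.doFilter(req, res);
        } finally {
            permits.release();
        }
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}
```

Libraries such as Resilience4j also ship a ready-made Bulkhead implementation, so you don't have to maintain this yourself.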
Oh, the joyous question of HTTP vs. WebSockets is at it again. However, even after quite a bit of reading of the hundreds of versus blog posts, SO questions, etc., I'm still at a complete loss as to what I should be working towards for our application. In this post I will supply information on the application's functionality and the types of requests/responses currently used in our application.
Currently our application is a sloppy piece of work, thrown together using AngularJS and AJAX requests to an Apache server running PHP, namely XAMPP. With the launch of our application I've noticed that we're having problems with response times when the server is under any kind of load. This probably has something to do with the sloppy architecture of our server, the hardware, and the fact that our MySQL database isn't exactly optimized.
However, with such a loyal fanbase, and investors seeing potential in our application and giving us a chance to roll out a 2.0, I've been studying hard into how to turn this application into a powerhouse of low-latency scalability. Honestly, the best option would be to hire someone with experience, but unfortunately I'm a hobbyist and a one-man army without much experience.
After some extensive research, I've decided to write the backend in NodeJS this time. However, I'm having a hard time deciding between HTTP and WebSockets. Here are the types of transactions done between the server and the client.
The client sends a request to the server in JSON format. The request has a few different parts:
a request ID (for processing logic based on the request)
the data associated with the request ID.
The server receives the request, queries the database (if necessary) and then responds to the client in JSON format. Sometimes the server serves files to the client, namely images in Base64 format.
Currently the application (when in use) sends a request to the server every time an interface changes, which for our application averages once every few seconds. Every action on our interfaces sends another request to the server. The application also checks for notifications/messages every 8 seconds (or every two seconds when the user is on the messaging interface).
Here are the benefits I currently see of a stateful connection over a stateless connection for our application.
If the connection is stateful, I can eliminate the polling for notifications and messages, as the server can just tell the client whenever one becomes available. This alone can eliminate x(n)/4 requests per second to the server.
Handling something like a disconnection from the server is as simple as attempting to reconnect, as opposed to handling timeouts/errors per request; it would be handled only on the socket.
Additional security can be obtained by removing the security keys used for database interaction; this should prevent the possibility of hijacking a session_key and using it to manipulate or access another user's data. The session_key is only needed because there is no state in the AJAX setup.
However, I'm someone who started learning programming through TCP game-server emulation, so I understand some benefits of a STATEFUL connection, while I hardly understand the benefits of a STATELESS connection at all. I know they both have their benefits and quirks, but I'm curious which would be the best approach for us.
We're mainly looking for scalability: we had a local application launch and managed to hit a bottleneck at nearly 10,000 users in under 48 hours. Luckily I announced this as a BETA, and the users are cutting me a lot of slack after learning that I did it all on my own as a learning project. I've disabled registrations while looking into improving the application's front end and back end.
IMPORTANT:
If using WebSockets, would we be able to download pictures from the server asynchronously, like we can with AJAX? For example, with AJAX I can make 5 requests to the server for 5 different images and they all start downloading immediately. With a stateful connection, would I have to wait for each photo to finish streaming before moving on to the next request? Would this bottleneck only the single user, or every user that is waiting on a request to be completed?
It all boils down to how your application works and how it needs to scale. I would use bare WebSockets rather than any wrapper, since the API is already easy to use and your hands won't be tied when you need to scale out.
Here are some links that will give you insight, although no concrete answers to your questions, because, as I said, it depends on your expectations:
Hard downsides of long polling?
WebSocket/REST: Client connections?
Websockets, and identifying unique peers[PHP]
How HTML5 Web Sockets Interact With Proxy Servers
If your question is "Should I use WebSockets instead of HTTP?", the answer is: you should not.
Even if it is faster, because you don't lose time opening a connection for every request, you also lose the whole HTTP specification: verbs (GET, POST, PATCH, PUT, ...), paths, bodies, responses and status codes. This seems simple, but you would have to re-implement all or part of these protocol features yourself.
So you should use Ajax, as long as it is a one-off, punctual request.
When you need to make an Ajax request every 2 seconds, what you actually need is for the server to send you data, not for YOU to poll the server to check for API changes. That is a sign that you should implement a WebSocket server, for example along these lines:
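Here is a minimal push sketch. I'm using the Java WebSocket API (JSR 356) for concreteness; the same pattern applies to a Node.js server with the ws library. The /notifications path is an assumption:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Instead of every client polling every 2 seconds, the server pushes
// a notification to all connected clients the moment it happens.
@ServerEndpoint("/notifications")
public class NotificationEndpoint {

    private static final Set<Session> sessions = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }

    // Called by the application whenever a new message or notification exists.
    public static void broadcast(String json) {
        for (Session session : sessions) {
            session.getAsyncRemote().sendText(json);
        }
    }
}
```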
I have a service being load-tested by a third party. A few minutes after starting, we begin to see requests hanging for a very long time, and the caller ultimately times out (after 60 seconds).
They are testing with 15 users, each user using two devices at once, so 30 connections in total.
The service is a simple façade to a more complex operation that calls an external system. Benchmarking our communications to the external system suggests that everything responds in the time we would expect (sub-200 ms).
The IIS logs reveal a bunch of very slow requests (> 200 sec) which ultimately do return a 200 and carry the Win32 error code ERROR_NETNAME_DELETED (error 64). I have checked the service log and can match the response to the request (based on the SOAP message ID), and I can see that we do eventually respond with the correct information (although the client has long since given up).
Any ideas as to what could be causing this behavior? We're hosting in IIS using wsHttpBinding, and we're using WS-Security with X.509 certificates (message & transport encryption).
We don't have benchmark logging inside the service, but the code is a very simple mapping of the WCF request to the server request, making the request, and mapping the response back to the WCF response. We do this manually, and there is no parsing involved (straight assignments).
After a detailed investigation, which included getting Microsoft support involved, it turned out we were hitting the serviceThrottling defaults, specifically maxConcurrentSessions. We determined this from perfmon (there is a counter for it). We were unsure why we saw this, as the service behaved when called from a .NET client.
It turned out that the Java consumer of this application, using CXF, was not respecting the WSDL (specifically the part about WS-SecureConversation) and was not closing its sessions when it closed its connection.
Our solution was to jack maxConcurrentSessions up to a high number and set the inactivityTimeout down low (to one minute) to force session abandonment. In addition, we set establishSecurityContext to false to avoid the WSS negotiation consuming an additional session.
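In configuration terms that looks roughly like the sketch below; the numbers are assumptions and need tuning for your own load:

```xml
<!-- Raise the session throttle (the default is low). -->
<serviceBehaviors>
  <behavior>
    <serviceThrottling maxConcurrentSessions="1000" />
  </behavior>
</serviceBehaviors>

<!-- Stop WS-SecureConversation from consuming an extra session per client. -->
<wsHttpBinding>
  <binding name="secureBinding">
    <security mode="Message">
      <message clientCredentialType="Certificate"
               establishSecurityContext="false" />
    </security>
  </binding>
</wsHttpBinding>
```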
The solution is inelegant, as the service logs are littered with errors about forced session closures, but it fixed the issue we were seeing. Unfortunately we had a requirement for WS-Security, so our solution needed to stick with it.
I hope this helps someone, as this was an interesting and time-consuming problem to pin down.
When I check my Tomcat Application Manager, I see that I have too many sessions (over 4600).
Why is that so? Can this be linked to a deployed application, or is it the fact that it has been up for a long time?
There can be multiple reasons for this, some more exotic than others. Some of them:
you really have a lot of users coming to your site
your app ran out of memory and the thread that invalidates sessions has died
you've been hit by a search-engine crawler or DDoSed (not very likely)
you have some sort of monitoring enabled that works via HTTP, and its sessions are never discarded
Have you looked at the list of these sessions? Do they have similar idle times? What pages are being hit? What do the HTTP logs say? (Google) Analytics?
The fact that the app has been online for a long time has no correlation with the session count; Tomcat invalidates sessions on a regular basis.
To get the most precise answer, take a memory dump and analyze it in a profiler.
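If you first want hard numbers from inside the application, a simple session-counting listener is enough. This is just a diagnostic sketch, not anything Tomcat ships:

```java
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.annotation.WebListener;
import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

// Logs every session creation/destruction together with the live count,
// which helps spot crawlers, monitoring probes or a dead invalidator thread.
@WebListener
public class SessionCounter implements HttpSessionListener {

    private static final AtomicInteger active = new AtomicInteger();

    @Override
    public void sessionCreated(HttpSessionEvent se) {
        System.out.printf("session created, %d active, timeout=%ds%n",
                active.incrementAndGet(),
                se.getSession().getMaxInactiveInterval());
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent se) {
        System.out.printf("session destroyed, %d active%n",
                active.decrementAndGet());
    }
}
```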
Can somebody explain what Ajax push is? From what I understand, it involves leaving HTTP connections open for a long time and reconnecting as needed. It seems to be used a lot in chat systems.
I have also heard that when using Ajax push in Java it is important to use something with NIO connectors or the Grizzly servlet API? Again, I'm just researching what exactly it is.
In normal AJAX (call it pull) you ask the server for something and you get it immediately. This is fine when you want to get some data from the server now. But what if something happens on the server and the server wants to push that event to the client(s)?
Technically this is implemented using so-called long polling: the browser opens an HTTP connection and waits for the response. As long as there is nothing interesting on the server side, it waits. But when something happens, the server sends the response and the client receives it immediately. This is a huge advantage over normal polling, where you ask the server every few seconds: polling generates a lot of traffic and still introduces noticeable latency.
The only problem with this approach is the number of pending HTTP connections. Old-school Java servlet containers aren't capable of handling such an amount of connections due to the one-thread-per-connection limitation; they quickly run out of memory. Even though the HTTP threads aren't doing anything (they are just waiting for some other part of the system to wake them up and hand them the response), they occupy memory.
However there are plenty of solutions nowadays:
Tomcat NIO connectors
Atmosphere Ajax Push/Comet library
Servlet 3.0 asynchronous processing (most portable; see the sketch below)
Container-specific features work too, but Servlet 3.0, if available, should be considered superior.
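As an illustration, here is a minimal long-polling sketch with Servlet 3.0 asynchronous processing; the /events path and the publish hook are assumptions, and timeout/error handling is omitted:

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Parks each request without blocking a container thread until an event arrives.
@WebServlet(urlPatterns = "/events", asyncSupported = true)
public class EventServlet extends HttpServlet {

    private static final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30_000);  // client re-polls after 30s of silence
        waiting.add(ctx);        // the request thread returns to the pool here
    }

    // Called by the application when something interesting happens.
    public static void publish(String message) {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
            }
            ctx.complete();
        }
    }
}
```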