(Similar in spirit to but different in practice from this question.)
Is there any cross-browser-compatible, in-browser technology that allows a high-performance persistent network connection between a server application and a client written in, say, Javascript? Think XMLHttpRequest on caffeine. I am working on a visualisation system that's restricted to at most a few users at once, and the server is pretty robust, so it can handle as much load as it needs to. I would like the client to have access to video streamed from the server at a minimum of about 20 frames per second, regardless of its graphics hardware capabilities.
Simply put: is this doable without resorting to Flash or Java?
I'm not sure what you mean by XMLHttpRequest on caffeine... the performance of a remote polling object like that is subject to the performance of the client and the server, not of the language constructs themselves. Granted, there is HTTP overhead in AJAX, but the only viable alternative is HTTP long polling, which basically keeps the server connection open longer and passes chunks of data down bit by bit in the background. It's literally the same as AJAX, except the connection stays open until something happens (thus moving the HTTP overhead to idle time).
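To illustrate the long-polling idea, here is a minimal client-side sketch; the /poll endpoint and the handleServerEvent handler are hypothetical, not part of any existing API:

```javascript
// Minimal long-polling loop. The request stays open until the server
// answers, then the client processes the data and reconnects immediately.
function handleServerEvent(data) {
  console.log('server pushed:', data);
}

function longPoll() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/poll', true);
  xhr.onload = function () {
    if (xhr.status === 200) {
      handleServerEvent(JSON.parse(xhr.responseText));
    }
    longPoll(); // reconnect right away and keep waiting
  };
  xhr.onerror = function () {
    setTimeout(longPoll, 1000); // back off briefly on network errors
  };
  xhr.send();
}

longPoll();
```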
If I recall correctly, Opera had some kind of sockets implementation a while back, but nobody uses Opera.
I have a browser plugin which will be installed on 40,000 desktops.
This plugin will connect to a backend configuration file available via https, e.g. https://somesite/config_file.js.
The plugin is configured to poll this backend resource once/day.
But there is only one backend server. So if 40,000 endpoints start polling together the server might crash.
I could randomize the polling frequency in the desktop plugins, but randomization still does not guarantee that the server will not be overloaded.
Would using WebSockets in this scenario solve the scalability issue?
Polling once a day is very little.
I don't see any upside to WebSockets unless you switch to push and have more frequent notifications.
However, staggering the polling does make a lot of sense, since syncing all the requests to the same time is like writing a DoS attack against your own server.
Staggering doesn't necessarily have to be random, and IMHO it probably shouldn't be.
You could start with a fixed time and add a second per client ID, allowing for ~86K connections in 24 hours which should be easy for any server to handle.
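A rough sketch of that kind of deterministic staggering, assuming each plugin has been assigned a stable numeric client ID (the ID and the pollConfig body are hypothetical); every client gets a fixed offset of one second per ID from local midnight:

```javascript
// Deterministic staggering: one second of offset per client ID, so
// 40,000 clients spread over ~11 hours of the 86,400-second day.
var SECONDS_PER_DAY = 86400;

function msUntilNextSlot(clientId) {
  var offsetSeconds = clientId % SECONDS_PER_DAY;          // 0 .. 86399
  var now = new Date();
  var midnight = new Date(now.getFullYear(), now.getMonth(), now.getDate());
  var slot = midnight.getTime() + offsetSeconds * 1000;
  if (slot <= now.getTime()) {
    slot += SECONDS_PER_DAY * 1000;   // today's slot already passed, use tomorrow's
  }
  return slot - now.getTime();
}

function pollConfig() {
  // fetch the configuration file here, then schedule tomorrow's poll
  setTimeout(pollConfig, SECONDS_PER_DAY * 1000);
}

// e.g. client #12345 polls 12,345 seconds after local midnight every day
setTimeout(pollConfig, msUntilNextSlot(12345));
```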
As a side note, 40K concurrent connections might not be as hard to achieve as you imagine.
EDIT (relating to the comments)
Websockets vs. Server Sent Events:
IMHO, when pushing data (vs. polling), I would prefer Websockets over Server Sent Events (SSE).
Websockets have a few advantages, such as two-way communication, which allows clients to ping the server and confirm that the connection is still alive.
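As a minimal illustration of such a client-initiated heartbeat (the wss://example.com/updates endpoint and the ping/pong message shape are assumptions, not a standard):

```javascript
// Ping the server every 30 s and drop the connection if no pong
// arrives within 5 s, so a dead connection is detected quickly.
var ws = new WebSocket('wss://example.com/updates');   // hypothetical endpoint
var pongTimer = null;

function handlePush(msg) {
  console.log('pushed data:', msg);
}

ws.onopen = function () {
  setInterval(function () {
    if (ws.readyState !== WebSocket.OPEN) { return; }
    ws.send(JSON.stringify({ type: 'ping' }));
    pongTimer = setTimeout(function () { ws.close(); }, 5000);  // no pong: assume dead
  }, 30000);
};

ws.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'pong') {
    clearTimeout(pongTimer);    // server answered, connection is alive
  } else {
    handlePush(msg);            // regular pushed data
  }
};
```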
The Specific Use-Case:
From the description in the question and the comments it seems that you're using browser clients with a custom plugin and that the updates you wish to install daily might require the browser to be active.
This raises different questions that affect the implementation (are the client browsers open all day? do you have any control over the client browsers and their environment? can you guarantee installation while the browser is closed?).
...
IMHO, you might consider having the client plugins test for an update each morning, when they load for the first time that day (first access).
People arrive at work at different times and open their browsers for the first time on different schedules, so the 40K requests you're expecting will be naturally scattered across that timeline (probably a 20-30 minute timespan).
This approach makes sure that the browsers and computers are actually open (making the update possible) and that the update requests are staggered over a period of time (about 33.3 requests per second, if my assumption is correct).
If you're serving a pre-written static configuration file (perhaps updated by the server daily), avoiding dynamic content and minimizing database calls, then 33 req/sec should be very easy to manage.
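A sketch of that check-on-first-load idea on the plugin side, assuming localStorage is available to the plugin and using a hypothetical lastConfigCheck key; the URL is the one from the question:

```javascript
// On startup, fetch the configuration at most once per calendar day.
function applyConfig(configText) { /* plugin-specific */ }

function checkConfigOncePerDay() {
  var today = new Date().toISOString().slice(0, 10);    // e.g. "2016-05-01"
  if (localStorage.getItem('lastConfigCheck') === today) {
    return;                                             // already checked today
  }
  fetch('https://somesite/config_file.js', { cache: 'no-store' })
    .then(function (res) { return res.text(); })
    .then(function (configText) {
      applyConfig(configText);
      localStorage.setItem('lastConfigCheck', today);
    })
    .catch(function () { /* network error: try again on the next startup */ });
}

checkConfigOncePerDay();
```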
HTTP/2 multiplexing reuses the same TCP connection, thereby removing connection-setup time to the same host.
But with HTTP/2 Server Push, are there any significant performance benefits beyond saving the round-trip time that HTTP/2 multiplexing would otherwise spend requesting each resource?
I gave a presentation about this, that you can find here.
In particular, the demo (starting at 36:37) shows the benefits that you can have with multiplexing alone, and then by adding HTTP/2 Push.
Spoiler: the combination of HTTP/2 multiplexing and Push yields astonishingly better results than HTTP/1.1.
Then again, every case is different, so you have to actually measure your case.
But the potential of HTTP/2 to yield better performance than HTTP/1.1 is really large, and many (most?) cases will benefit from this.
I'm not sure what exactly you're asking here, or if it's a good fit for StackOverflow, but I will attempt to answer nonetheless. If this is not the answer you are looking for, then please rephrase the question so we can understand what exactly it is you are looking for.
You are right that HTTP/2 uses multiplexing, which does negate the need for multiple connections (and the time and resources needed to set them up and manage them). However, it's much more than that: it's not limited (browsers will typically limit connections to 4-6 per host) and it also allows "similar" hosts (same IP and same certificate but different hostname) to share connections. Basically it solves the queuing of resources that HTTP/1's request/response model imposes, and it removes the need for the limited, multiple connections that HTTP/1 requires as a workaround, which in turn reduces the need for other workarounds like sharding, sprite files, concatenation, etc.
And yes, HTTP/2 server push saves a round trip. So when you request a webpage, the server sends both the HTML and the CSS needed to draw the page, because it knows you will need the CSS; it's pointless to send you just the HTML, wait for your web browser to get it, parse it, see it needs CSS, request the CSS file and then wait for it to download.
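To make that concrete, here is a minimal sketch using Node.js's built-in http2 module (the key/cert paths, port and inline HTML/CSS are placeholders): the server pushes /style.css on the same connection as the HTML, before the browser has had a chance to parse anything.

```javascript
const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),    // placeholder paths
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet before the browser even knows it needs it.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' });
      pushStream.end('body { font-family: sans-serif; }');
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>Hello</h1>');
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);
```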
I'm not sure if you're implying that a round trip is so quick that there is little to gain from HTTP/2 server push, now that HTTP/2 multiplexing removes the delay in requesting a file? If so, that is not the case: there are significant gains to be made in pushing resources, particularly blocking resources like CSS which the browser will wait for before drawing a single thing on screen. While multiplexing reduces the delay in sending a request, it does not reduce the latency of the request travelling to the server, nor of the server responding and sending the resource back. While these delays sound small, they are noticeable and make a website feel slow.
So yes, at present, the primary gain for HTTP/2 Server Push is in reducing that round trip time (basically to zero for key resources).
However, we are at the infancy of this and there are other potential uses, for performance or other reasons. For example, you could use it as a way of prioritising content, so an important image could be pushed early when, without this, a browser would likely request CSS and Javascript first and leave images until later. Server Push could also negate the need for inline CSS (which bloats pages with copies of style sheets and may require Javascript to then load the proper CSS file) - another HTTP/1.1 performance workaround. I think it will be very interesting to watch what happens with HTTP/2 Server Push over the coming years.
That said, there are still some significant challenges with HTTP/2 server push. Most importantly, how do you prevent wasting bandwidth by pushing resources that the browser already has cached? A digest HTTP header will likely be added for this, but it is still under discussion. That leads on to the question of how best to implement HTTP/2 Server Push - for web browsers, web servers and web developers. The HTTP/2 spec is a bit vague on how this should be implemented, which leaves it up to individual web servers, each providing different ways to signal that a resource should be pushed.
As I say, I think this is one of the parts of HTTP/2 that could lead to some very interesting applications. We live in interesting times...
I am working on client-server software using Microsoft RPC (over TCP) as the communication method. We sometimes transfer files from the client to the server. This works fine in local networks. Unfortunately, when we have a high latency, even a very wide bandwidth does not give a decent transfer speed.
Based on a WireShark log, the RPC layer sends a bunch of fragments, then waits for an ACK from the server before sending more and this causes the latency to dominate the transfer time. I am looking for a way to tell RPC to send more packets before pausing.
The issue seems to be essentially the same as with a too-small TCP window, but there might be an RPC-specific fragment window at work here, since Wireshark does not show the TCP-level window being full. iPerf connection tests with a small window do give those warnings, and a speed similar to the RPC transfer. With larger window sizes, the iPerf transfer is three times faster than the RPC one, even with a reasonable (40 ms) latency.
I did find some mentions of an RPC fragment window at Microsoft's site (https://msdn.microsoft.com/en-us/library/gg604601.aspx) and in an RPC document (http://pubs.opengroup.org/onlinepubs/9629399/chap12.htm - search for window_size), but these seem to concern only connectionless (UDP) RPC. Additionally, they mention an RPC "fack" message, and I observed only regular TCP-level ACKs in the log.
My conclusion is that either the RPC layer is using a stupidly small TCP window, or it is limiting the number of fragment packets it sends at a time by some internal logic. Either way, I need to make it send more between ACKs. Is there some way to do this?
I could of course just transfer the file over multiple simultaneous connections, but that seems more like a work-around than a solution.
PS. I know RPC is not really designed for file transfer, but this is a legacy application and the RPC pipe deals with authentication and whatnot, so keeping the file transfer there would be best, at least for now.
PPS. I guess that if the answer to this question is a configuration option, this would be better suited for SuperUser, but an API setting would be ideal, which is why I posted this here.
I finally found a way to control this. The Microsoft documentation page Configuring Computers for RPC over HTTP contains registry settings that set the window sizes RPC uses, at least when used in conjunction with RPC over HTTP.
The two most relevant settings were:
HKLM\Software\Microsoft\Rpc\ClientReceiveWindow: DWORD
Making this larger (a few MB, expressed in bytes) on the client machine made the download to the client much faster.
HKLM\Software\Microsoft\Rpc\InProxyReceiveWindow: DWORD
Making this higher on the server machine made the upload faster.
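For example, a .reg sketch setting both values; the 4 MB value (0x00400000) is only an illustration to be tuned to your bandwidth-delay product, and each key really only matters on the machine noted above:

```
Windows Registry Editor Version 5.00

; ClientReceiveWindow matters on the client machine, InProxyReceiveWindow
; on the server; values are in bytes, 0x00400000 = 4 MB (example only).
[HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc]
"ClientReceiveWindow"=dword:00400000
"InProxyReceiveWindow"=dword:00400000
```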
The downside of these options is that they are global. The first one will affect all RPC clients on the client machine and the latter will affect all RPC over HTTP proxying on the server. This may have serious caveats, but a tenfold speed increase is nothing to be scoffed at, either.
Still, setting these on a per-connection basis would be much better.
Oh, the joyous question of HTTP vs WebSockets is at it again. However, even after quite a bit of reading of the hundreds of versus blog posts, SO questions, etc., I'm still at a complete loss as to what I should be working towards for our application. In this post I will supply information on the application's functionality and the types of requests/responses currently used in our application.
Currently our application is a sloppy piece of work, thrown together using AngularJS and AJAX requests to an Apache server running PHP, namely XAMPP. Since the launch of our application I've noticed that we're having problems with response times when the server is under any kind of load. This probably has something to do with the sloppy architecture of our server, the hardware, and the fact that our MySQL database isn't exactly optimized.
However, with such a loyal fanbase and investors seeing potential in our application and giving us a chance to roll out a 2.0, I've been studying hard into how to turn this application into a powerhouse of low-latency scalability. Honestly the best option would be to hire someone with experience, but unfortunately I'm a hobbyist and a one-man army without much experience.
After some extensive research, I've decided on writing the backend using NodeJS this time. However, I'm having a hard time deciding between HTTP and WebSockets. Here are the types of transactions that are done between the server and client.
The client sends a request to the server in JSON format. The request has a few different parts:
A request id (For processing logic based on the request)
The data associated with the request ID.
The server receives the request, queries the database (if necessary) and then responds to the client in JSON format. Sometimes the server serves files to the client, namely images in Base64 format.
Currently the application (When being used) sends a request to the server every time an interface is changed, which on average for our application is once every few seconds. Every action on our interfaces sends another request to the server. The application also sends requests to check for notifications/messages every 8 seconds, (or two seconds depending on if they're on the messaging interface).
Currently, here are the benefits I see of a stateful connection over a stateless connection for our application.
If the connection is stateful, I can eliminate the requests for notifications and messages, as the server can just tell the client whenever one becomes available. This can eliminate x(n)/4 requests per second to the server alone.
Handling something like a disconnection from the server is as simple as attempting to reconnect, as opposed to handling timeouts/errors per request; it would only be handled on the socket.
Additional security can be obtained by removing the security keys used for database interaction; this should prevent the possibility of a session_key being hijacked and used to manipulate or access another user's data. The session_key is only needed because there is no state in the AJAX setup.
However, I'm someone who started learning programming through TCP game server emulation. So I understand some benefits of a STATEFUL connection, while I don't understand the benefits of a STATELESS connection very much at all. I know they both have their benefits and quirks, but I'm curious what would be the best approach for us.
We're mainly looking for Scalability, as we had a local application launch and managed to bottleneck at nearly 10,000 users in under 48 hours. Luckily I announced this as a BETA and the users are cutting me a lot of slack after learning that I did it all on my own as a learning project. I've disabled registrations while looking into improving the application's front and backend.
IMPORTANT:
If using WebSockets, would we be able to asynchronously download pictures from the server like we can with AJAX? For example, I can make 5 requests to the server using AJAX for 5 different images, and they will all start downloading immediately; using a stateful connection, would I have to wait for each photo to be streamed before moving to the next request? Would this bottleneck only a single user, or every user that is waiting on a request to be completed?
It all boils down to how your application works and how it needs to scale. I would use bare WebSockets rather than any wrapper, since the API is already easy to use and your hands won't be tied when you need to scale out.
Here some links that will give you insight, although not concrete answers to your questions because as I said, it depends on your expectations.
Hard downsides of long polling?
WebSocket/REST: Client connections?
Websockets, and identifying unique peers[PHP]
How HTML5 Web Sockets Interact With Proxy Servers
If your question is "Should I drop HTTP in favour of WebSockets?", the answer is: you should not.
Even if it is faster because you don't lose time opening connections, you also lose everything the HTTP specification gives you: verbs (GET, POST, PATCH, PUT, ...), paths, bodies, responses and status codes. It seems simple, but you would have to re-implement all or part of these protocol features yourself.
So you should keep using Ajax as long as a request is a one-off, punctual thing.
When you find yourself making an Ajax request every 2 seconds, what you really need is for the server to send you data when something changes, not for YOU to keep asking the server whether anything has changed. That is a sign that you should implement a WebSocket server.
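A sketch of what that push side can look like in Node.js, assuming the third-party ws package (npm install ws) and a hypothetical auth message and notifyUser helper; the server pushes a notification the moment it exists instead of waiting for the client's next poll:

```javascript
const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

const clients = new Map();   // userId -> socket (real auth omitted)

wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    const msg = JSON.parse(raw);
    if (msg.type === 'auth') {
      clients.set(msg.userId, socket);    // hypothetical auth handshake
    }
  });
  socket.on('close', () => {
    for (const [userId, s] of clients) {
      if (s === socket) clients.delete(userId);
    }
  });
});

// Called by the application the moment a notification/message is created.
function notifyUser(userId, payload) {
  const socket = clients.get(userId);
  if (socket && socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ type: 'notification', data: payload }));
  }
}
```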
Can somebody explain what ajax-push is? From what I understand it involves leaving HTTP connections open for a long time and reconnecting as needed. It seems to be used in chat systems a lot.
I have also heard that when using ajax-push in Java it is important to use something like the NIO connectors or the Grizzly servlet API? Again, I'm just researching what exactly it is.
In normal AJAX (call it pull) you ask the server for something and you get it immediately. This is fine when you want to get some data from the server now. But what if something happens on the server and the server wants to push that event to the client(s)?
Technically this is implemented using so called long polling - the browser opens the HTTP connection and waits for the response. As long as there is nothing interesting on the server side, it waits. But when something happens, the server sends the response and the client receives it immediately. This is a huge advantage over normal polling where you ask the server every few seconds - it generates a lot of traffic and still introduces noticeable latency.
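Here is a minimal illustration of that "hold the response until something happens" idea, sketched with Node's built-in http module rather than the Java stack discussed below; the /poll and /event paths are made up:

```javascript
// Waiting responses are parked until an event arrives or a 30 s timeout fires.
const http = require('http');

let waiting = [];

http.createServer((req, res) => {
  if (req.url === '/poll') {
    waiting.push(res);
    setTimeout(() => {
      const i = waiting.indexOf(res);
      if (i !== -1) {
        waiting.splice(i, 1);
        res.writeHead(204);
        res.end();                      // nothing happened, client reconnects
      }
    }, 30000);
  } else if (req.url === '/event') {
    // Something happened: answer every parked client immediately.
    for (const pending of waiting) {
      pending.writeHead(200, { 'Content-Type': 'application/json' });
      pending.end(JSON.stringify({ message: 'hello from the server' }));
    }
    waiting = [];
    res.end('pushed\n');
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000);
```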
The only problem with this approach is the number of pending HTTP connections. Old-school Java servlet containers aren't capable of handling that many connections due to the one-thread-per-connection limitation - they quickly run out of memory. Even though the HTTP threads aren't doing anything (they're waiting for some other part of the system to wake them up and give them the response), they occupy memory.
However there are plenty of solutions nowadays:
Tomcat NIO connectors
Atmosphere Ajax Push/Comet library
Servlet 3.0 #Async (most portable)
Container-specific features - but Servlet 3.0, if available, should be considered superior.