ListenAndServe vs ListenAndServeTLS Speed - go

I'm building an API that will be hit with many requests/second from a few sources (call it 1000/sec) and will be responding to each quickly with very little information (think < 1k).
When I use HTTP (http.ListenAndServe()), performance is between 1000-2000 req/sec using Siege on my t3.micro, and CPU usage rarely exceeds 30-40%.
With HTTPS (http.ListenAndServeTLS()), I cap at around 450 req/sec with CPU usage at 100%. It seems pretty obvious it's doing a lot of SSL-handshake-type work, but my question is: why would this be the case? Even with few concurrent connections from Siege it is much slower (I also tried connection=keep-alive in the Siege config).
I get that the first connection should be slower, but after that, is this behavior still expected, or is there some issue I'm not aware of? One thing I noticed is that Siege is using HTTP/1.1; I would think it should be using 2.0 when going over HTTPS?
Thanks.

When using SSL/TLS, your program needs to encrypt and decrypt each message sent or received, and this consumes CPU. You could try adding an HTTP/HTTPS reverse proxy in front that terminates the TLS traffic - Apache httpd, nginx, or HAProxy, for example. This can improve the situation.

Related

Performance Issue with Http2 vs Http1.1

I have upgraded a Tomcat 9 application from HTTP/1.1 to HTTP/2. After this upgrade, the REST APIs are slower. I ran a load test with 1000 samples; HTTP/1.1 had better throughput and average response time, although I expected HTTP/2 to win.
Is there any configuration that has to be done, or am I missing something?
PS: The response Content-type is application/json type.
You are comparing HTTP/1.1 without HTTPS with HTTP/2 with HTTPS. That’s not a true comparison of the protocols alone as HTTPS has significant overheads - particularly for just one connection.
Yes, it’s true that HTTP/2 normally requires HTTPS, as no browser supports HTTP/2 without HTTPS - but then again, browsers are increasingly warning against unsecured HTTP-only connections and limiting features like HTTP/2, Brotli, Service Workers, Geo... etc. to HTTPS anyway.
HTTP/2 over HTTPS can beat HTTP/1.1 over HTTP in certain circumstances but in general that’s a heck of a back foot to start on.
Additionally, when making only one request per connection (as you are in JMeter with threads), the expense of HTTPS is a large percentage of the connection, due to the handshake needed to set it up. Future requests on the same connection won’t have the same expense: the encryption/decryption part is relatively fast on modern hardware and doesn’t create a noticeable delay, but the initial setup definitely does.
Finally HTTP/2 is generally faster than HTTP/1.1 over slower, high latency connections due in large part to multiplexing. If testing over the same network, as I suspect you might be doing, where there are basically no network delays, then the benefit of HTTP/2 over HTTP/1.1 may not be apparent.

High Performance Options for Remote services access

I have a service, foo, running on machine A. I need to access that service from machine B. One way is to launch a web server on A and do it via HTTP; code running under web server on A accesses foo and returns the results. Another is to write socket server on A; socket server access service foo and returns the result.
HTTP connection initiation and handshake is expensive; sockets can be written, but I want to avoid that. What other options are available for high performance remote calls?
HTTP is just a protocol over the socket. If you are using TCP/IP networks, you are going to be using a socket either way. The HTTP connection initiation and handshake are not the expensive bits; TCP connection setup is the really expensive part.
If you use HTTP/1.1, you can use persistent connections (Keep-Alive), which drastically reduces this cost, bringing it close to that of keeping a persistent socket open.
It all depends on whether you want/need the higher-level protocol. Using HTTP means you will be able to consume this service from a lot more clients while writing much less documentation (if you write your own protocol, you will have to document it). HTTP servers also support things like authentication, cookies, and logging out of the box. If you don't need these sorts of capabilities, then HTTP might be a waste - but I've seen few projects that don't need at least some of them.
Adding to @Rob's answer: since the question doesn't point to a specific application or performance boundary, it would be good to look at the options available in the broader context of inter-process communication.
The Wikipedia page on inter-process communication cleanly lists the options available and would be a good place to start.
What technology are you going to use? Let me answer for the Java world.
If your request rate is below 100/sec, you should not care about optimizations and should use the most versatile solution - HTTP.
A well-written asynchronous server like Netty can easily handle 1000 HTTP requests per second on mid-level hardware.
If you need more, or have constrained resources, you can switch to a binary format. The most popular combination out there is Google Protobuf (multi-language) + Netty (Java).
What you should know about HTTP performance:
HTTP can use Keep-Alive, which removes the reconnection cost for every request.
HTTP adds traffic overhead to every request and response - around 50-100 bytes.
HTTP clients and servers consume additional CPU parsing HTTP headers - noticeable above the 100 req/sec mentioned earlier.
Be careful when selecting technology. Even in the 21st century it is hard to find a well-written HTTP server and client.

Understanding HTTPS connection setup overhead

I'm building a web-based chat app which will need to make an AJAX request for every message sent or received. I'd like the data to be encrypted and am leaning towards running AJAX (with long-polling) over HTTPS.
However, since the frequency of requests here is a lot higher than with basic web browsing, I'd like to get a better understanding of the overhead (network usage, time, server CPU, client CPU) in setting up the encrypted connection for each HTTPS request.
Aside from any general info/advice, I'm curious about:
As a very rough approximation, how much extra time does an HTTPS request take compared to HTTP? Assume content length of 1 byte and an average PC.
Will every AJAX request after the first have anything significant cached, allowing it to establish the connection quicker? If so, how much quicker?
Thank you in advance :-)
Everything in HTTPS is slower: personal information shouldn't be cached, you have encryption on both ends, and the SSL handshake is relatively slow.
Long-polling will help. Long keep-alives are good. Enabling SSL sessions on your server will avoid a lot of the overhead as well.
The real trick is going to be doing load-balancing or any sort of legitimate caching. Not sure how much that will come into play in your system, being a chat server, but it's something to consider.
You'll get more information from this article.
Most of the overhead is in the handshake (exchanging certificates, checking for their revocation, ...). Session resumption and the recent false start extension helps in that respect.
In my experience, the worst-case scenario happens when using client-certificate authentication and advertising too many CAs (the CertificateRequest message sent by the server can even become too big). This is quite rare, since in practice, when you use client-certificate authentication, you only accept client certificates from a limited number of CAs.
If you configure your server properly (for resources for which it's appropriate), you can also enable browser caching for resources served over HTTPS, using Cache-Control: public.

OpenFire, HTTP-BIND and performance

I'm looking into getting an openfire server started and setting up a strophe.js client to connect to it. My concern is that using http-bind might be costly in terms of performance versus making a straight on XMPP connection.
Can anyone tell me whether my concern is relevant or not? And if so, to what extent?
The alternative would be to use a flash proxy for all communication with OpenFire.
Thank you
BOSH is more verbose than normal XMPP, especially when idle. An idle BOSH connection might be about 2 HTTP requests per minute, while a normal connection can sit idle for hours or even days without sending a single packet (in theory, in practice you'll have pings and keepalives to combat NATs and broken firewalls).
But, the only real way to know is to benchmark. Depending on your use case, and what your clients are (will be) doing, the difference might be negligible, or not.
Basics:
Socket - essentially zero protocol overhead.
HTTP - requests flow even on an idle session.
I doubt that you will have 1M users at once, but if you are aiming for that, then a connectionless protocol like HTTP will scale better, as I'm not sure any OS can support that volume of open sockets.
Also, you can tie your OpenFires together, form a farm, and you'll have nice scalability there.
We used Openfire and BOSH with about 400 concurrent users in the same MUC channel.
What we noticed is that Openfire leaks memory. We had about 1.5-2 GB of memory in use and got constant out-of-memory exceptions.
The BOSH implementation in Openfire is also pretty bad. We then switched to Punjab, which was better but couldn't solve the Openfire issue.
We're now using ejabberd with its built-in http-bind implementation, and it scales pretty well. Load on the server running ejabberd is nearly 0.
At the moment we face the problem that the 5 webservers we use to handle the chat load are sometimes overloaded at about 200 connected users.
I'm trying to use WebSockets now, but it seems they don't work yet.
Maybe routing the http-bind traffic not via an Apache rewrite rule but directly through a load balancer/proxy would solve the issue, but I couldn't find a way to do this atm.
Hope this helps.
I ended up using node.js and http://code.google.com/p/node-xmpp-bosh as I faced some difficulties to connect directly to Openfire via BOSH.
I have a production site running with node.js configured to proxy all BOSH requests, and it works like a charm (around 50 concurrent users). The only downside so far: in the Openfire admin console you will not see the actual IP addresses of the connected clients; only the local server address shows up, as Openfire gets the connection from the node.js server.

What are the most widespread low bandwidth alternatives to HTTPS?

I'm porting a web app to a mobile device and working with the major carriers to minimize our bandwidth use, but need to maintain security.
The SSL handshaking overhead associated with HTTPS is more than 50% of the bandwidth currently. Can someone recommend a lightweight, low bandwidth alternative to HTTPS?
The payload is HTTP/XML, but can be modified to any format. I'm using Ruby on Rails so something with a Ruby library is ideal.
It sounds like your connections are short-lived and your payloads small. Would it be possible to hold the connection open and send multiple "messages" through it? That way, as more responses are sent, the SSL overhead becomes a smaller portion of the cumulative data transfer, and you avoid repeating the handshake. HTTP has keep-alive capabilities; hopefully those can be applied to an SSL connection in Ruby.