I ran some download tests against a Jetty 9 server, performing multiple simultaneous downloads of a single file of roughly 80 MB. With a small number of downloads, where everything finishes within about 55 seconds, all downloads usually complete. However, if any download is still in progress after 55 seconds, the network traffic simply stops and the download never finishes.
I have already tried setting Jetty's timeout and buffer sizes, but that did not work. Has anyone run into this problem, or does anyone have suggestions on how to solve it? The same tests against IIS and Apache work fine. I am using JMeter for the tests.
Marcus, maybe you are just hitting Jetty bug 472621?
Edit: The mentioned bug is a separate timeout in Jetty that applies to the total operation, not just idle time. So by setting the http.timeout property you essentially define a maximum time any download is allowed to take, which in turn may cause timeout errors for slow clients and/or large downloads.
Cheers,
momo
A timeout means your client isn't reading fast enough.
JMeter isn't reading the response data fast enough, so the connection sits idle long enough to hit the idle timeout and get disconnected.
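For reference, the timeout being hit here is the connector idle timeout on the Jetty side. A minimal, hedged sketch of where it is configured in embedded Jetty 9 (the port, the 120-second value, and the handler are illustrative assumptions, not a recommendation):

```java
// Minimal embedded Jetty 9 sketch showing where the idle timeout lives.
// Port, timeout value, and handler are illustrative assumptions.
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.DefaultHandler;

public class JettyIdleTimeout {
    public static void main(String[] args) throws Exception {
        Server server = new Server();
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        connector.setIdleTimeout(120_000); // ms of allowed idleness before the
                                           // connection is closed (default is 30s)
        server.addConnector(connector);
        server.setHandler(new DefaultHandler());
        server.start();
        server.join();
    }
}
```

Raising this value only masks a slow client, though; the real fix is to make the client read faster.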
We test with 800MB and 2GB files regularly.
Using HTTP/1.0, HTTP/1.1, and HTTP/2 protocols.
Using normal (plaintext) connections, and secured TLS connections.
With responses being delivered in as many Transfer-Encodings and Content-Encodings as we can think of (compressed, gzip, chunked, ranged, etc.).
We do all of these tests using our own test infrastructure, often spinning up many Amazon EC2 nodes to perform a load test that can sufficiently exercise the server (a typical test is 20 client nodes to 1 server node).
When testing large responses, you'll need to be aware of the protocol (HTTP/1.x vs HTTP/2) and how that protocol's connection persistence behavior can change the request/response latency. In the real world you won't have multiple large requests issued back to back on the same persisted connection via HTTP/1 (on HTTP/2, the multiple requests would be sent in parallel at the same time).
Be sure you set up JMeter to use HTTP/1.1 and not use persisted connections (see the JMeter documentation for help with that).
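As a quick sanity check outside of JMeter, a plain HTTP/1.1 client that disables keep-alive and drains each response promptly behaves the way this setup recommends. A hedged sketch; the URL, file, and loop count are assumptions:

```java
// Hypothetical standalone probe (not JMeter itself): sequential HTTP/1.1
// downloads with keep-alive disabled, i.e. a fresh connection per request.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/bigfile.bin"); // assumed test URL
        byte[] buf = new byte[64 * 1024];
        for (int i = 0; i < 5; i++) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Connection", "close"); // no persisted connection
            long total = 0;
            try (InputStream in = conn.getInputStream()) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    total += n; // read promptly so the server never idles out
                }
            }
            System.out.println("download " + i + ": " + total + " bytes");
        }
    }
}
```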
Also be aware of the bandwidth available for your testing: it's very common to blame a server (any server) for not performing fast enough when the test itself is sloppily set up and has expectations that far exceed the bandwidth of the network itself.
Next, don't run the test from the same machine; this sort of load test needs multiple machines (1 for the server, and 4+ for the clients).
Lastly, when load testing, you'll want to become intimately aware of the networking configuration on your server (and, to a lesser extent, your client test machines) and tune it for high load. Default OS configurations are rarely sufficient to handle proper load testing.
I have a single-node Elasticsearch server running on EC2. I want to do some load testing using search requests with random search queries. I am using JMeter for load testing with two different approaches -
HTTP Client - When I test using this client with 10k/20k/50k requests, it works fine.
ES Transport Client - This works fine up to approximately 2k requests.
Here are the steps I have followed:
Instantiate the client on every run and close it once the test has finished.
Once the client is instantiated, start the JMeter sampling and send the search requests.
After the run, stop the sampling.
I am getting NoNodeAvailableException after about 2k requests with the transport client.
The ES server is running with 3 GB of memory, and I have given 6 GB of memory to the load tester.
Please let me know if some configuration change is required, or whether I am using the wrong approach to test the load.
Thanks in Advance.
What kind of responses are you getting from the HTTP test? Have you verified you are getting valid responses for all 10k-50k requests? It might be that your cluster cannot take the load you're putting on it in either test. Since TransportClient is more intimately coupled to the ES server, you will explicitly see errors that come back from TransportClient, but if you're simply sending requests via HTTP without validating the response, it's easy to miss any issues.
Although, before taking a stab in the dark like I just did, I would also check to see what kind of QPS you are getting using the HTTP method vs the TC method, what your CPU/memory look like throughout both tests, what the response times look like, etc. It helps to monitor the health of your system throughout the process to detect any symptoms that might help explain the cause.
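For the HTTP approach, a minimal response check along these lines makes silent failures visible (the host, index name, and query are illustrative assumptions; in JMeter itself, a Response Assertion plays the same role):

```java
// Hedged sketch: validate every HTTP search response instead of assuming
// success. The host, index, and query string are assumptions for illustration.
import java.net.HttpURLConnection;
import java.net.URL;

public class SearchCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:9200/myindex/_search?q=field:value");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int status = conn.getResponseCode();
        if (status != 200) {
            // This is the kind of failure that goes unnoticed without validation.
            System.err.println("search failed with HTTP " + status);
        }
        conn.disconnect();
    }
}
```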
I really like Elastic Beanstalk and managed to get my webapp (Spring MVC, Hibernate, ...) up and running using SSL on a Tomcat7 64-bit container.
A major concern to me is performance (I thought using the Amazon cloud would help here).
To benchmark my server performance I am using blitz.io (which uses the amazon cloud to have multiple clients access my webservice simultaneously).
My very first simple performance test already got me wondering:
I benchmarked a health check url (which basically just prints "I'm ok").
Without SSL: Looks fine.
13 Hits/s with a response time of 9ms
230 Hits/s with a response time of 8ms
With SSL: Not so fine.
13 Hits/s with a response time of 44ms (Ok, this should be a bit larger due to encryption overhead)
30 Hits/s with a response time of 3.6s!
Going higher left me with connection timeouts (timeout = 10s).
I tried using a larger EC2 instance in the background with essentially the same result.
If I am not mistaken, the Load Balancer before the EC2 Instances serves as an endpoint for SSL encryption. How do I increase this performance?
Can this be done with elastic beanstalk? Or do I need to setup my own load balancer etc.?
I also did some tests using Heroku (albeit with a slightly different technology stack, Play! vs. Spring MVC). There I also saw the increased response time, but it stayed mostly constant. I assume they are using quite performant SSL endpoints. How do I get that for Elastic Beanstalk?
It seems my testing method was flawed.
Amazon's Elastic Load Balancers seem to go up to 10k SSL requests per second.
See this great writeup:
http://blog.mattheworiordan.com/post/24620577877/part-2-how-elastic-are-amazon-elastic-load-balancers
SSL requires a handshake before a secure transmission channel is opened. Once the handshake, which involves several round trips, is done, the data is transmitted.
When you are just hitting a page using a load tester, it is doing the handshake for each and every hit. It is not reusing an already established session.
That's not how browsers behave. A browser will do the handshake once and then reuse the open encrypted session for all subsequent requests for a certain duration.
So I would not be too worried about the results. I suggest you try a tool like www.browsermob.com to see how long a full page with many images, JS, and CSS files takes to load over SSL vs non-SSL. That will be a fair comparison.
Hope that helps.
Are there any major differences in performance between http and https? I seem to recall reading that HTTPS can be a fifth as fast as HTTP. Is this valid with the current generation webservers/browsers? If so, are there any whitepapers to support it?
There's a very simple answer to this: Profile the performance of your web server to see what the performance penalty is for your particular situation. There are several tools out there to compare the performance of an HTTP vs HTTPS server (JMeter and Visual Studio come to mind) and they are quite easy to use.
No one can give you a meaningful answer without some information about the nature of your web site, hardware, software, and network configuration.
As others have said, there will be some level of overhead due to encryption, but it is highly dependent on:
Hardware
Server software
Ratio of dynamic vs static content
Client distance to server
Typical session length
And (my personal favorite): caching behavior of clients
In my experience, servers that are heavy on dynamic content tend to be impacted less by HTTPS because the time spent encrypting (SSL-overhead) is insignificant compared to content generation time.
Servers that are heavy on serving a fairly small set of static pages that can easily be cached in memory suffer from a much higher overhead (in one case, throughput was halved on an "intranet").
Edit: One point that has been brought up by several others is that SSL handshaking is the major cost of HTTPS. That is correct, which is why "typical session length" and "caching behavior of clients" are important.
Many, very short sessions means that handshaking time will overwhelm any other performance factors. Longer sessions will mean the handshaking cost will be incurred at the start of the session, but subsequent requests will have relatively low overhead.
Client caching can occur at several levels, anywhere from a large-scale proxy server down to the individual browser cache. Generally, HTTPS content will not be cached in a shared cache (though a few proxy servers can exploit man-in-the-middle type behavior to achieve this). Many browsers cache HTTPS content for the current session, and often across sessions. The effect of not caching, or caching less, is that clients retrieve the same content more frequently, which results in more requests and more bandwidth to serve the same number of users.
HTTPS requires an initial handshake which can be very slow. The actual amount of data transferred as part of the handshake isn't huge (under 5 kB typically), but for very small requests, this can be quite a bit of overhead. However, once the handshake is done, a very fast form of symmetric encryption is used, so the overhead there is minimal. Bottom line: making lots of short requests over HTTPS will be quite a bit slower than HTTP, but if you transfer a lot of data in a single request, the difference will be insignificant.
However, keepalive is the default behaviour in HTTP/1.1, so you will do a single handshake and then lots of requests over the same connection. This makes a significant difference for HTTPS. You should probably profile your site (as others have suggested) to make sure, but I suspect that the performance difference will not be noticeable.
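To see the effect of keep-alive yourself, here is a small hedged sketch: Java's HttpURLConnection keeps connections alive by default, so the first request pays for the TCP and TLS handshakes while the follow-ups reuse the connection. The host is a placeholder:

```java
// Hedged sketch: time the first HTTPS request (TCP + TLS handshake) against
// follow-up requests that reuse the kept-alive connection. example.com is a
// placeholder; absolute numbers depend on your network.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class KeepAliveTiming {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");
        byte[] buf = new byte[8192];
        for (int i = 0; i < 3; i++) {
            long start = System.nanoTime();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            try (InputStream in = conn.getInputStream()) {
                while (in.read(buf) != -1) { /* drain so the connection can be reused */ }
            }
            System.out.println("request " + i + ": "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}
```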
To really understand how HTTPS will increase your latency, you have to understand how HTTPS connections are established. Here is a nice diagram. The key is that instead of the client getting the data after 2 "legs" (one round trip: you send a request, the server sends a response), the client won't get data until at least 4 legs (2 round trips). At 100 ms per leg that is already 400 ms, and once TCP connection setup is added, a first HTTPS request can easily take 500 ms or more.
Of course, this can be mitigated by re-using the HTTPS connection (which browsers should do), but it does explain part of that initial stall when loading up an HTTPS web site.
The overhead is NOT due to the encryption. On a modern CPU, the encryption required by SSL is trivial.
The overhead is due to the SSL handshake, which is lengthy and drastically increases the number of round trips required for an HTTPS session compared to an HTTP one.
Measure (using a tool such as Firebug) the page load times while the server is on the end of a simulated high-latency link. Tools exist to simulate a high latency link - for Linux there is "netem". Compare HTTP with HTTPS on the same setup.
The latency can be mitigated to some extent by:
Ensuring that your server is using HTTP keep-alive - this allows the client to reuse the established TLS connection, avoiding the need for another handshake
Reducing the number of requests to as few as possible - by combining resources where possible (e.g. .js include files, CSS) and encouraging client-side caching
Reduce the number of page loads, e.g. by loading data that isn't immediately required into the page (perhaps in a hidden HTML element) and then showing it using client-side script.
December 2014 Update
You can easily test the difference between HTTP and HTTPS performance in your own browser using the HTTP vs HTTPS Test website by AnthumChris: “This page measures its load time over unsecure HTTP and encrypted HTTPS connections. Both pages load 360 unique, non-cached images (2.04 MB total).”
The results may surprise you.
It's important to have up-to-date knowledge about HTTPS performance, because the Let's Encrypt certificate authority will start issuing free, automated, and open SSL certificates in summer 2015, thanks to Mozilla, Akamai, Cisco, the Electronic Frontier Foundation, and IdenTrust.
June 2015 Update
Updates on Let’s Encrypt - Arriving September 2015:
Let's Encrypt Launch Schedule (Jun 16, 2015)
Let's Encrypt Root and Intermediate Certificates (Jun 4, 2015)
Draft Let's Encrypt Subscriber Agreement (May 21, 2015)
More info on Twitter: #letsencrypt
For more info on HTTPS and SSL/TLS performance see:
Is TLS Fast Yet?
High Performance Browser Networking, Chapter 4: Transport Layer Security
Overclocking SSL
Anatomy and Performance of SSL Processing
For more info on the importance of using HTTPS see:
Why HTTPS for Everything? (The HTTPS-Only Standard)
Let’s Encrypt (Internet Security Research Group)
HTTPS Everywhere (Electronic Frontier Foundation)
To sum it up, let me quote Ilya Grigorik: "TLS has exactly one performance problem: it is not used widely enough. Everything else can be optimized."
Thanks to Chris - author of the HTTP vs HTTPS Test benchmark - for his comments below.
The current top answer is not fully correct.
As others have pointed out here, https requires handshaking and therefore does more TCP/IP roundtrips.
In a WAN environment typically then the latency becomes the limiting factor and not the increased CPU usage on the server.
Just keep in mind that the latency from Europe to the US can be around 200 ms (round-trip time).
You can easily measure this (for the single user case) with HTTPWatch.
In addition to everything mentioned so far, please keep in mind that some (all?) web browsers do not store cached content obtained over HTTPS on the local hard-drive for security reasons. This means that from the user's perspective pages with plenty of static content will appear to load slower after the browser is restarted, and from your server's perspective the volume of requests for static content over HTTPS will be higher than would have been over HTTP.
There isn't a single answer for this.
Encryption will always consume more CPU. This can be offloaded to dedicated hardware in many cases, and the cost varies with the algorithm selected. 3DES is more expensive than AES, for example. Some algorithms are more expensive for the encryptor than for the decryptor; some have the opposite cost.
More expensive than the bulk crypto is handshake cost. New connections will consume much more CPU. This can be reduced with session resumption, at the cost of keeping old session secrets around until they expire. This means that small requests from a client that doesn't come back for more are the most expensive.
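For illustration, in Java's JSSE the client-side session cache that makes resumption possible is tunable. A hedged sketch (the cache size and timeout values are arbitrary examples, and actual resumption also depends on server support):

```java
// Hedged sketch: tune JSSE's client-side TLS session cache so repeat
// connections can resume sessions instead of doing full handshakes.
// The size and timeout values below are arbitrary examples.
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

public class SessionCacheTuning {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLSessionContext cache = ctx.getClientSessionContext();
        cache.setSessionCacheSize(1000); // how many session secrets to keep around
        cache.setSessionTimeout(3600);   // seconds before a cached session expires
        System.out.println("cache size: " + cache.getSessionCacheSize());
    }
}
```

This is exactly the trade-off described above: a larger cache and longer timeout mean fewer full handshakes, at the cost of keeping old session secrets in memory until they expire.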
For cross internet traffic you may not notice this cost in your data rate, because the bandwidth available is too low. But you will certainly notice it in CPU usage on a busy server.
I can tell you (as a dialup user) that the same page over SSL is several times slower than via regular HTTP...
In a number of cases the performance impact of SSL handshakes will be mitigated by the fact that the SSL session can be cached on both ends (desktop and server). On Windows machines for example the SSL session can be cached for up to 10 hours. See http://support.microsoft.com/kb/247658/EN-US . Some SSL accelerators will also have parameters allowing you to tune the time the session is cached.
Another impact to consider is that static content served over HTTPS will not be cached by proxies, and this may reduce performance across multiple users accessing the site through the same proxy. This can be mitigated by the fact that static content will be cached at desktops as well; Internet Explorer 6 and 7 cache cacheable HTTPS static content unless instructed otherwise (Tools Menu/Internet Options/Advanced/Security/Do not save encrypted pages to disk).
Here's a great article (a little old, but still great) on SSL handshake latency. It helped me identify SSL as the main cause of slowness for clients who were using my app over slow Internet connections:
http://www.semicomplete.com/blog/geekery/ssl-latency.html
I ran a small experiment and got a 16% time difference for the same image from Flickr (233 kB):
http://farm8.staticflickr.com/7405/13368635263_d792fc1189_b.jpg
https://farm8.staticflickr.com/7405/13368635263_d792fc1189_b.jpg
Of course these numbers depend on many factors, such as computer performance, connection speed, server load, and QoS on the path (the particular network route taken from browser to server), but they show the general idea: HTTPS is slower than HTTP, since it requires more operations to complete (the SSL handshake and encrypting/decrypting the data).
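For anyone who wants to repeat the experiment, a hedged sketch that times the same Flickr image over both schemes (single-sample timings like this are noisy, so treat the output as indicative only):

```java
// Hedged sketch of the experiment above: fetch the same image over HTTP and
// HTTPS and compare wall-clock times. URLs are the ones from the post.
import java.io.InputStream;
import java.net.URL;

public class HttpVsHttps {
    static long fetchMillis(String u) throws Exception {
        long start = System.nanoTime();
        try (InputStream in = new URL(u).openStream()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain the full response body */ }
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        String path = "farm8.staticflickr.com/7405/13368635263_d792fc1189_b.jpg";
        System.out.println("http:  " + fetchMillis("http://" + path) + " ms");
        System.out.println("https: " + fetchMillis("https://" + path) + " ms");
    }
}
```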
Since I am investigating the same problem for my project, I found these slides. Older, but interesting:
http://www.cs.nyu.edu/artg/research/comparison/comparison_slides/sld001.htm
There seems to be a nasty edge case here: Ajax over congested wifi.
Ajax usually means that the keep-alive connection has timed out after, say, 20 seconds. However, Wi-Fi means that the (ideally fast) Ajax connection has to make multiple round trips. Worse, Wi-Fi often loses packets, leading to TCP retransmits. In this case, HTTPS performs really, really badly!
Is TLS fast yet? Yes.
Watch: https://www.youtube.com/watch?v=0EB7zh_7UE4
Read: https://istlsfastyet.com/
There are many projects out there that aim to blur the lines and make HTTPS just as fast, such as SPDY and mod_spdy.
HTTP VS HTTPS PERFORMANCE COMPARISON
I have always associated HTTPS with slower page load times when compared to plain old HTTP. As a web developer, web page performance is important to me and anything that will slow down the performance of my web pages is a no-no.
In order to understand the performance implications involved, the diagram below gives you a basic idea of what happens under the hood when you make a request for a resource using HTTPS.
As you can see from the diagram above, there are a few extra steps that need to take place when using HTTPS compared to using plain HTTP. When you make a request using HTTPS, a handshake needs to occur in order to verify the authenticity of the request. This handshake is an extra step when compared to an HTTP request and does unfortunately incur some overhead.
In order to understand the performance implications and see for myself whether or not the performance impact would be significant, I used this site as a testing platform. I headed over to webpagetest.org and used the visual comparison tool to compare this site loading using HTTPS vs HTTP.
As you can see from the test video results, using HTTPS did have an impact on my page load times; however, the difference is negligible and I only noticed a 300 millisecond difference. It's important to note that these times depend on many factors, such as computer performance, connection speed, server load, and distance from the server.
Your site may be different, and it is important to test your site thoroughly and check the performance impact involved in switching to HTTPS.
HTTPS has encryption/decryption overhead so it will always be slightly slower. SSL termination is very CPU intensive. If you have devices to offload SSL, the difference in latencies might be barely noticeable depending on the load your servers are under.
This is almost certainly going to be true, given that SSL requires an extra encryption step that simply isn't required by non-SSL HTTP.
There is a way to measure this. The Apache tool JMeter will measure throughput. If you set up a large sampling of your service with JMeter, in a controlled environment, with and without SSL, you should get an accurate comparison of the relative cost. I would be interested in your results.
HTTPS does indeed affect page speed...
The quotes above reveal the foolishness of many people about site security and speed. HTTPS/SSL server handshaking creates an initial stall in making Internet connections: there's a delay before anything starts to render on your visitor's browser screen. This delay is measured as Time to First Byte (TTFB).
HTTPS handshake overhead shows up in the TTFB. Common TTFB ranges from under 100 milliseconds (best case) to over 1.5 seconds (worst case), and HTTPS can make it on the order of 500 milliseconds worse.
A round trip over a wireless 3G connection can take 500 milliseconds or more, and the extra trips can double delays to 1 second or more. This is a big negative impact on mobile performance. Very bad news.
My advice: if you're not exchanging sensitive data, you don't need SSL at all; but if you do, as on an e-commerce website, you can enable HTTPS only on the pages where sensitive data is exchanged, such as login and checkout.
Source: Pagepipe
A more important performance difference is that an HTTPS session is kept open while the user is connected. An HTTP 'session' lasts only for a single item request.
If you are running a site with a large number of concurrent users, expect to buy a lot of memory.
Browsers can speak HTTP/1.1 over either HTTP or HTTPS, yet they only speak HTTP/2.0 over HTTPS. The protocol differences between HTTP/1.1 and HTTP/2.0 make HTTP/2.0, on average, 4-5 times faster than HTTP/1.1. Also, of the sites that implement HTTPS, most do so over HTTP/2.0. Therefore, HTTPS is almost always going to be faster than HTTP, simply because of the newer protocol it generally uses. However, if HTTP over HTTP/1.1 is compared with HTTPS over HTTP/1.1, then HTTP is, on average, slightly faster than HTTPS.
Here are some comparisons I ran using Chrome (Ver. 64):
HTTPS over HTTP/1.1:
0.47 seconds average page load time
0.05 seconds slower than HTTP over HTTP/1.1
0.37 seconds slower than HTTPS over HTTP/2.0
HTTP over HTTP/1.1:
0.42 seconds average page load time
0.05 seconds faster than HTTPS over HTTP/1.1
0.32 seconds slower than HTTPS over HTTP/2.0
HTTPS over HTTP/2.0:
0.10 seconds average load time
0.32 seconds faster than HTTP over HTTP/1.1
0.37 seconds faster than HTTPS over HTTP/1.1
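If you want to verify which protocol a given server actually negotiates before comparing numbers like these, the Java 11+ HttpClient reports it. A hedged sketch; example.com is a placeholder:

```java
// Hedged sketch (Java 11+): check whether a server negotiates HTTP/2 over
// HTTPS. example.com is a placeholder host.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProtocolCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2) // prefer h2, fall back to 1.1
                .build();
        HttpResponse<Void> resp = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/")).build(),
                HttpResponse.BodyHandlers.discarding());
        System.out.println("negotiated: " + resp.version()); // HTTP_2 or HTTP_1_1
    }
}
```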