I'm developing a web application using Angular + Java, and I'm trying to improve performance with the HTTP/2 protocol. It works, but I don't see a significant performance improvement, as you can see:
https://i.imgur.com/HXrWGlf.png
The time to load a specific page is approximately the same. The difference is that with HTTP/1.1 the resources are cached, while with HTTP/2 they aren't, as we see below:
https://i.imgur.com/WdSHp8q.png
Can I cache resources over HTTP/2 to load the page faster?
It's a Chrome issue with self-signed SSL certificates: Chrome doesn't cache resources served over a connection secured by a self-signed certificate. Here's the bug report: https://bugs.chromium.org/p/chromium/issues/detail?id=103875
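Once the certificate is trusted, HTTP/2 responses are cached under the normal Cache-Control rules, same as HTTP/1.1. As a minimal sketch on the Java side (URL patterns and lifetime are illustrative), a servlet filter can mark static assets cacheable:

```java
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletResponse;

// Sketch (Servlet 4.0+): mark static assets as cacheable for one year.
// HTTP/2 does not change caching semantics; Cache-Control still applies.
@WebFilter(urlPatterns = {"*.js", "*.css", "*.png"})
public class StaticCacheFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        response.setHeader("Cache-Control", "public, max-age=31536000, immutable");
        chain.doFilter(req, res);
    }
}
```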
Related
I am currently hosting a Laravel project on Elastic Beanstalk. The issue is that requests made over HTTPS are experiencing much slower response times (average of 5 seconds). I have ruled out network issues, and the server's CPU and RAM are far from fully utilized. Additionally, php-fpm (with nginx) is correctly configured with 16 pools on each instance (t3.small).
The problem seems to be with Axios (XHR) requests, but sometimes plain HTML pages experience the same issue. You can test this yourself by visiting https://laafisoft.bf (open the developer tools to check the response times). The configuration that I am using for the Load Balancer can be found in the image below. The certificate that I am using for HTTPS is issued by AWS Certificate Manager (RSA 2048).
When testing, I also noticed that requests over HTTP (port 80) were much faster (average of 200ms), but after some time the response time for HTTP requests increased to the same level as HTTPS requests. I am confident that the issue is not related to my Laravel application or a database problem. For comparison, I have the same version of the website hosted on DigitalOcean without a Load Balancer and it has much faster response times (https://demo.laafisoft.bf).
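One way to break down where the time goes is curl's built-in timing variables, comparing HTTPS against plain HTTP for the site in question, for example:

```sh
# time_appconnect covers the TLS handshake; it reads 0.000 for plain HTTP.
curl -s -o /dev/null \
  -w "dns=%{time_namelookup} tcp=%{time_connect} tls=%{time_appconnect} ttfb=%{time_starttransfer} total=%{time_total}\n" \
  https://laafisoft.bf
```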
Any help is welcome, I'm new to AWS so maybe I'm missing something.
We've recently moved to HTTPS for all requests to our public website and application. We're seeing issues with clients that apparently have TLS 1.0, 1.1 and/or 1.2 disabled in Internet Explorer (and/or other browsers); the net result is that they can no longer access our domain at all.
Our certificate setup uses TLS 1.2, and I'm redirecting HTTP requests to HTTPS in .htaccess. What's the accepted best practice to mitigate issues with misconfigured browsers? Allow access over HTTP? Allow access but display an error? I'd be interested to know what approaches and techniques people are using to work around this issue.
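The forced redirect is the usual mod_rewrite rule, roughly:

```apacheconf
# Redirect all plain-HTTP requests to HTTPS (standard mod_rewrite pattern).
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```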
Given that Google is encouraging the adoption of HTTPS, how should we proceed without alienating users with poorly configured systems?
From watching HTTPS Everywhere on YouTube, they suggest that HTTPS and SPDY combined will be quicker than just serving web pages/assets over HTTP. But having since read SPDY is Dead. Long Live HTTP/2, and with HTTP/2 support being a way off, I am in two minds as to whether to move a large site I'm working on to HTTPS entirely, as ultimately it will be slower: in my performance comparison tests, DOM content loaded took twice the time. I also just read somewhere that browsers are dropping support for SPDY.
What is the state of SPDY, and should I wait for HTTP/2 before advocating a move to HTTPS everywhere? Should I accept the performance hit in the meantime?
SPDY is definitely dying, now that HTTP/2 is an official specification.
Firefox and Chrome already support HTTP/2, and servers are starting to deploy it instead of SPDY (Google, Twitter, etc.). Internet Explorer support will arrive soon with IE 11.
HTTP/2 is definitely gaining momentum, and the future will be on HTTP/2 and TLS.
You should not wait for HTTP/2, because it's already here.
About the performance hit, the usual recommendation is to benchmark, but there is evidence that HTTP/2 over TLS performs much better than HTTP/1.1 over TLS, and can be comparable to, if not better than, cleartext HTTP/1.1, depending on the case.
The reasons are a number of optimizations performed by HTTP/2, such as multiplexing, header compression and resource push, that are simply not possible with HTTP/1.1.
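For Java webapps, resource push is exposed through the Servlet 4.0 PushBuilder API; here is a minimal sketch (the pushed paths are illustrative):

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

// Sketch of HTTP/2 server push via Servlet 4.0 (supported by Jetty, among others).
@WebServlet("/index.html")
public class PushServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        PushBuilder push = req.newPushBuilder(); // null if push is unavailable
        if (push != null) {
            push.path("/css/site.css").push();
            push.path("/js/app.js").push();
        }
        resp.setContentType("text/html");
        resp.getWriter().write("<html>...</html>");
    }
}
```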
See for example the demo video (disclaimer, I am a Jetty committer) we gave in 2012 (about Jetty and SPDY at that time, but HTTP/2 behaves the same), or the Go language HTTP/2 demo, or the Akamai HTTP/2 demo.
With Jetty, for example, you can deploy Java webapps on HTTP/2, but also complete PHP websites on HTTP/2. Our own website, https://webtide.com, is WordPress served by Jetty on HTTP/2.
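For embedded deployments, a minimal sketch of an HTTP/2-over-TLS connector in Jetty looks roughly like this (Jetty 10/11 style APIs; keystore path and password are placeholders):

```java
import org.eclipse.jetty.alpn.server.ALPNServerConnectionFactory;
import org.eclipse.jetty.http2.server.HTTP2ServerConnectionFactory;
import org.eclipse.jetty.server.*;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class Http2Server {
    public static void main(String[] args) throws Exception {
        Server server = new Server();

        // HTTPS configuration shared by HTTP/1.1 and HTTP/2.
        HttpConfiguration httpsConfig = new HttpConfiguration();
        httpsConfig.addCustomizer(new SecureRequestCustomizer());

        HttpConnectionFactory http11 = new HttpConnectionFactory(httpsConfig);
        HTTP2ServerConnectionFactory h2 = new HTTP2ServerConnectionFactory(httpsConfig);

        // ALPN negotiates h2 vs http/1.1 during the TLS handshake.
        ALPNServerConnectionFactory alpn = new ALPNServerConnectionFactory();
        alpn.setDefaultProtocol(http11.getProtocol());

        // Keystore path and password are placeholders.
        SslContextFactory.Server ssl = new SslContextFactory.Server();
        ssl.setKeyStorePath("/path/to/keystore.p12");
        ssl.setKeyStorePassword("changeit");

        SslConnectionFactory tls = new SslConnectionFactory(ssl, alpn.getProtocol());
        ServerConnector connector = new ServerConnector(server, tls, alpn, h2, http11);
        connector.setPort(8443);

        server.addConnector(connector);
        server.start();
    }
}
```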
You can move to TLS and HTTP/2 now.
Is there a performance impact in choosing between HTTP and HTTPS while loading Google Maps API?
Not to any significant degree.
I have a client-server app where the server is a Ruby on Rails app that renders JSON and understands RESTful requests. It's served by nginx+Passenger, and its address is api.whatever.com.
The client is an AngularJS application that consumes these services. It is served by a second nginx server, and its address is whatever.com.
I can either use CORS for cross-subdomain AJAX calls or configure the client's nginx to proxy_pass requests to the Rails application.
Which one is better in terms of performance, and which causes less trouble for developers and server admins?
Unless you're Facebook, you are not going to notice any performance hit from having an extra reverse proxy. The overhead is tiny. It's basically parsing a bunch of bytes and then sending them over a local socket to another process. A reverse proxy in nginx is easy enough to set up; it's unlikely to be an administrative burden.
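As a sketch, the nginx side of that proxy is just a location block (hostnames taken from the question):

```nginx
# Proxy /api/ on whatever.com to the Rails app, so the browser never
# makes a cross-origin request.
location /api/ {
    proxy_pass http://api.whatever.com/;  # trailing slash strips the /api prefix
    proxy_set_header Host api.whatever.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```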
You should worry more about browser support. CORS is supported on almost every browser, except of course for Internet Explorer and some mobile browsers.
Juvia uses CORS but falls back to JSONP. No reverse proxy setup.