High response time for HTTPS requests on Elastic Beanstalk - laravel

I am currently hosting a Laravel project on Elastic Beanstalk. The issue is that requests made over HTTPS have much slower response times (5 seconds on average). I have ruled out internet issues, and the CPU/RAM on the server is nowhere near fully utilized. Additionally, php-fpm (with nginx) is correctly configured with 16 pools on each instance (t3.small).
The problem shows up mostly with Axios (XHR) requests, but sometimes regular HTML pages experience the same issue. You can test this yourself by visiting https://laafisoft.bf (open the developer tools to check the response times). The configuration that I am using for the Load Balancer can be found in the image below. The certificate that I am using for HTTPS is issued by AWS Certificate Manager (RSA 2048).
When testing, I also noticed that requests over HTTP (port 80) were much faster (200ms on average), but after some time the response time for HTTP requests increased to the same level as for HTTPS. I am confident that the issue is not related to my Laravel application or to a database problem. For comparison, I have the same version of the website hosted on DigitalOcean without a Load Balancer and it has much faster response times (https://demo.laafisoft.bf).
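For anyone who wants to reproduce the comparison, this is roughly how I measured it (a Node/TypeScript sketch, assuming Node 18+; the URLs are just the ones above and can be swapped for any endpoint):

```typescript
// Rough comparison of average end-to-end response times over HTTP vs HTTPS.
// Assumes Node 18+ (built-in fetch); the URLs are just the ones from the question.
const targets = ["http://laafisoft.bf", "https://laafisoft.bf"];

async function averageLatency(url: string, runs = 10): Promise<number> {
  let total = 0;
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    // redirect: "manual" so an 80 -> 443 redirect doesn't mix the two timings
    const res = await fetch(url, { redirect: "manual" });
    await res.arrayBuffer(); // drain the body so the timing covers the full response
    total += performance.now() - start;
  }
  return total / runs;
}

(async () => {
  for (const url of targets) {
    console.log(`${url}: ~${(await averageLatency(url)).toFixed(0)} ms average`);
  }
})();
```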
Any help is welcome, I'm new to AWS so maybe I'm missing something.

Related

expensive aws load balancer, perhaps wrong setup

Some time ago, I needed HTTPS support for my express webserver. I found a tutorial that taught me a cool trick to achieve this. It basically explained that an AWS load balancer can terminate HTTPS and forward the traffic to HTTP.
So, I first created a load balancer.
And then I forwarded HTTPS to HTTP. For traditional HTTP, I just forwarded port 80 to 80. And I have a websocket (socket.io) thing going on port 1337 (which I plan to change to port 1338 in the near future).
Just for clarity. I didn't really need a load balancer, since I actually only have 1 AWS instance. But using this setup, I did not have to go through the trouble of messing around with HTTPS certificate files, neither did I have to upgrade my webserver. It saved me a lot of trouble at first.
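For context, the trouble I avoided is basically terminating TLS in the Node process itself, which would look roughly like this sketch (the certificate paths are placeholders, e.g. for a Let's Encrypt certificate):

```typescript
// Sketch: serving HTTPS directly from the Node/express process instead of
// terminating TLS on a load balancer. Certificate paths are placeholders
// (e.g. where certbot/Let's Encrypt would put them).
import express from "express";
import http from "http";
import https from "https";
import fs from "fs";

const app = express();
app.get("/", (_req, res) => res.send("hello over TLS"));

http.createServer(app).listen(80); // plain HTTP
https
  .createServer(
    {
      key: fs.readFileSync("/etc/letsencrypt/live/example.com/privkey.pem"),
      cert: fs.readFileSync("/etc/letsencrypt/live/example.com/fullchain.pem"),
    },
    app
  )
  .listen(443); // TLS handled by the Node process itself
```

As far as I know, socket.io can attach to the same https server instance, so the port 1337 traffic could be served over TLS the same way.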
Then this morning, I received the bill, and discovered that this load-balancing trick has a price tag of roughly 22 USD/month (an expensive port-forwarding trick).
I probably have to get rid of this load balancer. But I am wondering, perhaps I did something wrong in the configuration.
It's strange that charges are so high for a web app that is still in development. So, I am wondering if perhaps there is something wrong with my setup. And that leads me to the following question.
I noticed that I am actually using an old ELB setup: "Classic load balancer". And it actually states that this setup does not support websockets, which is a bit strange.
My web app hosts some static webpages (angular), but once it is downloaded, all traffic uses socket.io websockets. Even though the AWS documentation says that websockets are not supported, it seems to work fine. Unless ...
Now, socket.io is a pretty smart thing. When it can't use modern websockets (e.g. because the web browser does not support them), it falls back to a kind of HTTP polling. I guess that means that, from a load-balancer point of view, it creates hundreds of requests per minute. And right now, I am wondering if that has an influence on the charges.
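If it helps, the socket.io client can be pinned to the websocket transport to make it obvious whether the polling fallback is actually happening. A rough sketch (URL and port are placeholders):

```typescript
// Client-side sketch: pin socket.io to the websocket transport only, so it is
// obvious whether the HTTP long-polling fallback is what generates the extra
// requests. The URL/port is a placeholder.
import { io } from "socket.io-client";

const socket = io("https://example.com:1337", {
  transports: ["websocket"], // skip the initial polling transport entirely
});

socket.on("connect", () => console.log("connected over a real websocket"));

socket.on("connect_error", (err) => {
  // If this fires, websocket upgrades are not getting through (e.g. the
  // Classic ELB is in HTTP mode), and clients would otherwise fall back to polling.
  console.error("websocket transport failed:", err.message);
});
```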
My really long question comes down to a simple one: do you think upgrading my load balancer would decrease the number of counted "loadbalancer hours"?
EDIT
Here are some ELB metrics. They are too complicated for me to draw conclusions from, but perhaps some of you experts can. :)

IE11 is very slow on HTTPS sites

We have an internal web application, and it has recently become very slow in IE11. Chrome and Firefox do not have this problem.
We have an HTTP version of the same website, and this is fast with up to 6 concurrent HTTP sessions to the server and session persistence.
However, when we change to HTTPS, all this changes.
We still have session persistence, but only 2 simultaneous sessions to the servers, and each request seems to take ages. It is not the server response that is slow, but the "start" time before IE11 sends the request (and the next request). The connection diagram changes to a staircase, issuing one request at a time, one after the other. Each request takes 100-200ms even if it returns only a couple of bytes.
The workstation has Symantec Endpoint Protection installed (12.1), and also Digital Guardian software (4.7).

If the number of requests is huge, can the load balancer cause issues when sending responses back to the respective clients?

My architecture is a load balancer in front of two web application servers and a database. I am sending thousands of HTTP requests to the server from a JMeter distributed testing environment.
A few of those requests never get a response back from the server.
I checked the database logs: 100% of the requests were responded to.
I checked the web application servers' access logs: 100% of the requests were responded to.
Could the load balancer be dropping these pending responses on their way back to the respective clients?
Each run, it is different requests that get stuck.
Thanks in Advance!!
If you suspect the load balancer, look at three typical causes first:
The server takes longer to respond than the load balancer is willing to wait.
The client's timeout is shorter than the time the server needs to respond.
Port/thread/connection exhaustion on the load balancer, or other LB configuration problems.
In all three cases, I suggest looking at the load balancer logs. Since you didn't specify which LB you are using, I cannot say exactly what the log looks like, but typically an LB log lets you see:
How long it took for a request to be sent to a web server and for the response from the web server to return to the load balancer. You can then compare those numbers to the timeouts configured on the load balancer and the client (problems 1 and 2).
How long it took for a request from the client to be processed by the LB and how long the LB took to respond to the client. If that takes long, then something is not right with the load balancer (problem 3).
And then of course if you have any errors on load balancer, they may just explain what's going on.
If you cannot review logs for load balancer, I suggest changing your JMeter test temporarily to target servers behind load balancer directly. You can even configure your script to evenly distribute load between all servers (for example by using multiple thread groups). That would allow you to isolate the problem, and get more information on what's going on.
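If changing the JMeter plan is awkward, even a quick standalone probe against each backend can show whether the servers themselves ever drop or time out requests. A rough Node/TypeScript sketch with placeholder hostnames and paths:

```typescript
// Quick probe outside JMeter: hit each backend directly, bypassing the load
// balancer, and count failures/timeouts. Hostnames/paths are placeholders.
// Assumes Node 18+ for built-in fetch and AbortSignal.timeout.
const backends = [
  "http://app-server-1:8080/health",
  "http://app-server-2:8080/health",
];

async function probe(url: string, attempts = 100): Promise<void> {
  let failures = 0;
  for (let i = 0; i < attempts; i++) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
      if (!res.ok) failures++;
    } catch {
      failures++; // connection error or 5s timeout
    }
  }
  console.log(`${url}: ${failures}/${attempts} requests failed`);
}

(async () => {
  for (const url of backends) await probe(url);
})();
```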

HTTP GET requests work but POST requests do not

Our Spring application is running on several different servers. For one of those servers POST requests do not seem to be working. All site functionality that uses GET requests works completely fine; however, as soon as I hit something that uses a POST request (ex. form submit) the site just hangs permanently. The server won't give any response. We can see the requests in Tomcat Manager but they don't time out.
Has anyone ever seen this?
We have found the problem. Our DBA accidentally deleted the MySQL database files on that particular server (/sigh). In our Spring application we use GET requests for record retrieval and the records we were trying to retrieve must have been cached by MySQL. This made it seem as if GET requests were working. When trying to add new data to the database, which we use POST requests to do, Tomcat would wait for a response, which never came, from MySQL.
In my experience, if you're getting a timeout error it's almost always due to not having the correct ports open for your application. For example, go into your virtual machine's firewall rules and ensure ports 8080/8443 or 80/443 are open for HTTP and HTTPS traffic.
In Google Cloud Platform it's under VPC network -> Firewall rules. Azure and AWS are similar.
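A quick way to verify that from outside the machine is a plain TCP connect check, for example a small Node/TypeScript sketch along these lines (the hostname is a placeholder):

```typescript
// Plain TCP connect check against the usual HTTP/HTTPS ports, run from a
// client machine. "example-server" is a placeholder hostname.
import net from "net";

function portOpen(host: string, port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect({ host, port, timeout: 3000 });
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
    socket.once("error", () => { socket.destroy(); resolve(false); });
  });
}

(async () => {
  for (const port of [80, 443, 8080, 8443]) {
    const open = await portOpen("example-server", port);
    console.log(`port ${port}: ${open ? "reachable" : "blocked or closed"}`);
  }
})();
```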

Load time when hitting a cache server (nginx/Squid) is greater than hitting Glassfish directly when I load test with up to 10,000 users in JMeter. Why?

I'm hosting a web service on Glassfish which will probably get hit a million times a day so I thought of having a cache server (reverse proxy) to reduce the load on Glassfish.
I tried implementing them and load tested three setups:
directly on Glassfish,
with nginx in front,
with Squid in front.
The results were not what I expected. The average load time for 10,000 users was 168ms for Glassfish, 245ms for nginx and 198ms for Squid - the reverse of what I thought it would be.
The response contains cache-control headers, and every request that goes through nginx or Squid is a cache HIT, as can be seen from the access logs.
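One way to double-check from the client side, outside JMeter, is to print the per-request time together with the cache-related response headers. A rough Node/TypeScript sketch (the URL is a placeholder; Squid adds an X-Cache header by default, while nginx only exposes its cache status if you configure it to):

```typescript
// Fetch the same URL a few times through the proxy and print the elapsed time
// plus cache-related response headers. The URL is a placeholder. Squid adds an
// X-Cache header by default; for nginx you would have to expose the status with
// something like: add_header X-Cache-Status $upstream_cache_status;
const url = "http://cache-proxy.example.com/service";

(async () => {
  for (let i = 1; i <= 5; i++) {
    const start = performance.now();
    const res = await fetch(url);
    await res.arrayBuffer(); // drain the body so the timing is end-to-end
    const ms = (performance.now() - start).toFixed(0);
    const cacheStatus =
      res.headers.get("x-cache") ?? res.headers.get("x-cache-status") ?? "unknown";
    console.log(`request ${i}: ${ms} ms, cache: ${cacheStatus}, age: ${res.headers.get("age")}`);
  }
})();
```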