Reduce AJAX call response time

I'm trying to reduce the AJAX response time of my website, which is deployed on Hostinger. It takes around 1.3 seconds to get a response on the live site, whereas it takes about 220 ms on my local machine. Is there any way to make it faster on the live site too? Even with null data it takes that long in production.

The speed of an AJAX response depends on several factors:

The speed of the remote machine (the host). Most of the time the different sites are only virtual hosts multiplexed onto a bigger machine, so if you share your space with other high-traffic sites, yours will not respond quickly. You can ask your provider to move you to a different server; that has helped me several times.

Your connection may be slow, or your provider's connection may be. If you have access to another test machine for routing measurements, you can check whether your connection really is the slow part. If it is not, you can ask your provider to re-route the connection if he is able to.

Does your AJAX call make heavy use of databases (e.g. MySQL)? It could be a slow database installation (maybe too many users). Normally you can move the database yourself or ask the provider to move your databases to another virtual machine.

You see: if it runs fast on your computer and slowly at your provider, then most of the time the problem lies with your provider, and you should contact him.
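A quick way to see which of these factors is the culprit is to time a single request from the command line and compare the live host with your local machine. This is only a sketch and the endpoint URL is a placeholder; curl's -w variables split the request into DNS lookup, TCP connect, and time to first byte:

curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  first-byte: %{time_starttransfer}s  total: %{time_total}s\n' https://your-site.example/ajax-endpoint

If the first-byte time dominates while dns and connect stay small, the time is being spent on the server (busy shared host, slow database), not on the network.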

Related

Large time difference for data to reach localhost

In my Laravel application, I am making an HTTP request to a remote server using the Guzzle library. However, it takes a long time for the data to reach localhost.
Here is the response I get in the browser.
However, if I run the command ping server_IP I get roughly 175 ms as the average transmission time.
I also monitored my CPU usage after making requests in an infinite loop, but I couldn't find much usage.
I also tried hosting my Laravel application on an nginx server, but I still observe around 1-1.1 seconds of overhead.
What could be causing this delay and how can I reduce it?
There are a few potential reasons.
Laravel is not the fastest framework. There are a few hundred files that need to be loaded on every single request. If your server doesn't have an SSD, your performance will be terrible. I'd recommend creating a RAM disk and serving the files from there (see the tmpfs sketch at the end of this answer).
Network latency. Open up Wireshark and look at all the requests that need to be done. All of them hurt performance, and some of them you cannot get around (DNS, ...). Try to combine the CSS and JS files so that the number of requests is minimised.
The database connection on the server side may take a long time to set up, or may retrieve lots of data, etc.
Bear in mind that in most situations the latency is due to I/O constraints rather than CPU usage, especially in test/preproduction environments, where the number of requests per second that the server gets rounds to zero.
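As a rough sketch of the RAM disk idea above (the paths are only examples; adjust them to your deployment), you can mount a tmpfs and copy the application files onto it:

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
cp -r /var/www/my-laravel-app /mnt/ramdisk/
# then point the web server's document root at /mnt/ramdisk/my-laravel-app/public

Keep in mind a tmpfs is volatile: everything on it disappears on reboot, so it only suits files that can be re-copied from disk automatically.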

Web app initial load time

I am using a shared hosting plan at Bluehost to host a golf tournament live scoring mobile web app. I am caching everything I can on Cloudflare, and spent quite some time on overall optimization of the initial download & rendering times. There might be more I could do, but without question my single biggest issue is the initial call to my website: www.spanishpointscup.org. Sometimes this seems to be related to DNS lookup and other times related to Waiting(TTFB).
Below are two screenshots of the network calls that show variations in accessing my index.html. Sometimes this initial file load can be even longer. Very rarely do any of the other files create a long delay, so my only focus now is the initial file load. I think that even if I had server-side rendering, I would still have this issue.
Does anyone have specific recommendations that they are confident will help me? Switch to a VPS or another host? Thank you.
This is typical when you use a shared server.
The DNS has nothing to do with the issue. DNS relates to the request, not the response: it is the browser that must resolve the domain name to an IP address.
The delay you are seeing is because the server is busy and your page is sitting in a queue waiting behind other processes. Possibly you have a CPU-grabbing neighbour on your shared server, or Bluehost has some performance issues.
You will likely notice some image files take an excessively long time to transmit. Which image is slow will appear to be random with each fresh (not in cache) page load.
UPDATE
After further review I noticed the "wait" times are excessive. Wait time is in green on your waterfall; notice how the transmit time (blue) is short. The wait time is how long the server takes to retrieve the page from disk and put it into the transmit buffer, and 300-400 milliseconds is excessive.
Find a new service provider.
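One way to confirm that the delay is queueing on a busy shared server rather than anything on your side is to sample the time to first byte a number of times and look at the spread. The loop below is only a sketch, using the site named in the question:

for i in $(seq 1 20); do
  curl -o /dev/null -s -w '%{time_starttransfer}s\n' http://www.spanishpointscup.org/
done

A consistently high wait time points at a slow server; wildly varying times point at a server that is intermittently busy serving other tenants.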

Amazon ec2 instance slow

The instance started to fail on MySQL... MySQL stops out of nowhere. I kept restarting it and it would work again.
But now it has gotten worse. The transfer rate when I download a file from another instance starts at 900 kbps and keeps dropping to 20 kbps. The same happens for external downloads.
I also tested a zip job, zipping a big file... it starts quickly, then slows down and settles at a rate of 10 files zipped per second, which is too slow (other instances get 1000 files per second).
I can't access the hosted websites through HTTP either, because it is too slow.
I have already rebooted and done a stop->start. I also made an image and rebuilt it on a new instance, and the problem continues. I also swapped the volume used by the instance, and it is still slow.
What should I do?
I'll take a stab (based on the lack of details) that your instance is just too small. It sounds like you are running at least a web server and MySQL on it, and I'll also bet that you are using a micro instance, which have notoriously variable performance and really aren't suitable for running a web server and database server with consistent performance (maybe OK for development, but not production, IMO).
Try spinning up a larger instance and see if your problems magically go away; you can test this for a few dollars, and if it does solve the problem you can decide whether to permanently upgrade the instance size.
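Before resizing, it is also worth a quick look at whether the hypervisor is throttling you. On a shared or burstable instance the "steal" column shows CPU time taken away from your VM, and a crude dd run gives a rough idea of volume write throughput. This is only a sketch; the file path and sizes are arbitrary examples:

vmstat 5 5        # watch the 'st' (steal) column; consistently high values mean the hypervisor is throttling you
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 oflag=direct   # crude write-throughput check on the volume
rm -f /tmp/ddtest

If steal time is high even at low load, a larger instance type (or a different host) is the only real fix.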

How many users should an EC2 Micro Instance be able to handle with only an nginx server?

I have an iOS social app.
This app talks to my server to do updates and retrieval fairly often, mostly small text as JSON. Sometimes users will upload pictures that my web server will then upload to an S3 bucket. No pictures or any other type of file will be retrieved from the web server.
The EC2 Micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the database connects to a separate m1.large MongoDB server. The content can be fairly dynamic, as the newsfeed changes often.
I am a total newbie with regard to configuring NGINX for performance, and I am trying to see whether I've configured my server properly or not.
I am using Siege to test my server load, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so that I know whether I've done something right or wrong.
What number of concurrent users / page loads should my server be able to handle?
I guess, if I can't get hold of statistics from experience, what should be considered easy, medium, and extreme for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance for the reasons Jonathan pointed out. If you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold of CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap you are screwed.
Try to use fastcgi_cache if possible. You want to hit php5-fpm only if really needed.
Keep in mind that gzipping on the fly will eat a lot of CPU. I mean a lot of CPU (for an instance that has almost no CPU power). If you can use gzip_static, do it. But I believe you cannot.
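If you do end up enabling gzip_static, the idea is that nginx serves a pre-compressed .gz file instead of compressing on the fly. A rough way to pre-compress the static assets (the path is only an example, and gzip -k needs a reasonably recent gzip) is:

find /var/www/static -type f \( -name '*.css' -o -name '*.js' -o -name '*.html' \) -exec gzip -k -9 {} \;

The -k flag keeps the original file next to the .gz copy, which nginx still needs for clients that don't accept gzip.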
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of just a few KB. Do a siege benchmark run with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://private-ip/test.html'
You will probably see the effects of CPU throttling just by doing that! What you want to keep an eye on is the transactions per second and how much throughput nginx can serve. Keep in mind that the m1.small max is 35mb/s, so the micro will be even less.
Then move to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
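Building on the static-file run above, a JSON run might look like the following; the endpoint path is hypothetical, and siege's --header option is used to send Accept-Encoding so the gzip path is actually exercised:

siege -b -c10 -t600s --header="Accept-Encoding: gzip" 'http://private-ip/api/feed.json'

Compare the transactions per second and throughput with the plain HTML run to see how much the PHP and gzip work is costing you.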
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, after the instance uses its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.

Optimising Magento Loading Speed - Can't Identify Why Initial Receiving Is So Slow

While our website is not yet complete graphically and design-wise, most of the backend operations are near completion.
However, after optimising the MySQL database we still see a significant initial receive period when tested on pingdom.com:
http://tools.pingdom.com/fpt/#!/IuoBna86v/http://foscam-uk.com
According to Pingdom:
The yellow part is the time it takes to resolve the hostname and similar (before the connection is initiated to the web server), the green part is connecting to the web server, and the blue part is the time it takes to retrieve the content from the webserver.
Upon asking our managed VPS support team we got the response : 'Have you tried optimizing your script? I believe that the high wait time on there indicates actual website loading time (meaning for your script to load); not actual connection to the website/server.'
Now, Pingdom shows the JS/CSS loading relatively quickly, and the MySQL database side of things doesn't seem to be slowing anything down either. Does anyone have any suggestions of what this could be or what might be causing it?
Thank you very much for your time and help.
89 requests are too many.
Reduce the number of image requests by creating sprites. This is pretty important given what is shown in Pingdom.
Keep-Alive should be set to On and the keep-alive timeout should be a bit higher (15 seconds or so); see the quick check below.
Use of the compiler plus merging and minifying JS/CSS is recommended.
Change the hosting provider. An 8-second load is very, very slow. It means it is actually around 15-17 seconds for a user who doesn't have parts of your site cached (a first-time visitor). My site www.bebepunk.ro loads in 2.5 seconds according to Pingdom, and users still complain about the slowness of the site. Also check with http://www.webpagetest.org for both values.
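A quick way to verify the Keep-Alive recommendation above has taken effect is to look at the response headers; this is just a sketch using the domain from the Pingdom test in the question:

curl -sI http://foscam-uk.com/ | grep -i '^connection'

With keep-alive enabled on an HTTP/1.1 server the header should read "Connection: keep-alive" (or be absent, since keep-alive is the HTTP/1.1 default) rather than "Connection: close".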

Resources