Why does a deployed Meteor site take so long to load? - performance

For a very simple application, my Meteor site takes 4.1 s before it starts downloading the first byte of data. This is with a very basic setup. The relevant times (taken from http://www.webpagetest.org) are:
IP: 107.22.210.133
Location: Ashburn, VA
Error/Status Code: 200
Start Offset: 0.121 s
DNS Lookup: 64 ms
Initial Connection: 56 ms
Time to First Byte: 4164 ms
Content Download: 247 ms
Bytes In (downloaded): 0.9 KB
Bytes Out (uploaded): 0.4 KB
Is this due to Meteor being slow, or is there likely to be a bottleneck in my code? Is there a way to determine this?
Thanks.

That delay is a function of the time it takes your subscriptions to get data from the server. If any of the document data the client needs on page load is static, store it in unmanaged (unsynchronized) local collections so it is available immediately on initial page load. See collections.meteor.com for a load time comparison of data stored in an unmanaged versus a managed collection.
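For reference, an unmanaged local collection is simply a collection created with a null name. A minimal sketch of the idea (the collection name and document are made up for illustration):

```javascript
// Client code. Passing null instead of a name creates an unmanaged
// (unsynchronized) local collection: no subscription, no server round-trip.
var StaticContent = new Mongo.Collection(null);

Meteor.startup(function () {
  // Seed the static documents on the client; they are available
  // immediately on initial page load, with no subscription latency.
  StaticContent.insert({ _id: 'about', title: 'About', body: '...' });
});

// Reads work exactly like a managed collection:
var about = StaticContent.findOne('about');
```

The trade-off is that the data is baked into the client bundle (or seeded at startup) rather than synced from the server, which is exactly why it loads instantly.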

According to webpagetest, that's:
the time needed for the DNS, socket and SSL negotiations + 100ms.
I like ram1's answer, but I would add that it also depends on your server's performance. Delays like this are common on shared hosting. There are two workarounds: change your hosting, or add a CDN service.
It will also help if you have fewer redirects.
You should make better use of caching and, for Chrome users, you can apply the pre-* features (prefetch, prerender).

Related

Extremely slow response in EC2 (with less than 100 requests)

I have an EC2 instance. Everything was going fine, but for the last few days my website (https://www.cinescondite.com) has had a very slow response. I didn't change anything, and the site is well optimized. I see very high network rates, but there are so many stats that I don't know which ones really matter or which are affecting my site. I have Bitnami WordPress installed on the instance.
AWS Stats:
Chrome console stats:
It looks like you are using one of the T-class instance types, which run on burstable CPU credits, and you ran out. You can see this in the "CPU Credit Balance" graph (the last one).
You need to switch to a fixed-performance instance type or choose a larger one. If you are using a t3.large or larger, try switching to an m5 of the same size; the difference between t3.large and m5.large is only about $10 a month.
If you are using a t3.medium or smaller, try stepping up to the next size.
Another way to speed up performance is to offload content delivery to a CDN (content delivery network). Your server will not have to work as hard, and the user experience improves thanks to the CDN's delivery speed.
One option is to use the W3 Total Cache WordPress plugin.
And here is an article with step-by-step instructions on using Total Cache with AWS S3 and a CDN.
Note: I have not stepped through the instructions to verify them; however, on reading they appear to be complete.

Fast delivery webpages on shared hosting

I have a website (.org) for a project of mine on LAMP hosted on a shared plan.
It started very small, but I have since extended this community to other states (in the US) and it's growing fast.
I had 30,000 or so visits per day about 4 months ago and my site was doing fine; today I reached 100,000 visits.
I want to make sure my site will load fast for everyone and since it's not making any money I can't really move it to a private server. (It's volunteer work).
Here's my setup:
- Apache 2
- PHP 5.1.6
- MySQL 5.5
I have 10 pages PER state, and on each page people can contribute, write articles, like, share, etc. On a few pages I can hit 10,000 visits per hour around lunch time; the rest of the day it's quiet.
All databases are set up properly (I personally paid a DBA expert to build the code), and I'm fairly sure the code is good too. I could make pages faster with memcached, but the problem is I can't use it on shared hosting.
Will MySQL be able to support that many people, with lots of requests per minute? Or should I raise funds to move to a private server and install all the tools I need to make it fast?
Thanks
To be honest, there's not much you can do on shared hosting. There's a reason it's cheap: the host limits your ability to do the kind of things you want to do.
Either move to a VPS that allows memcached (some are cheap) and put up some Google ads, OR keep going on your shared hosting using a pre-generated content system.
A VPS can be very cheap (look for coupons), and you can install whatever you want since you are root.
For example, hostmysite.com with the coupon 50OffForLife costs $20 per month for life, versus $5 per month for shared hosting.
If you want to keep the current hosting, then what you can do is this:
Pages are regenerated by a process (a cron job, or on the fly) every time someone writes a comment or makes an update. The process fetches all the data for the page and saves the result as a static web page.
So, say you have a page with comments: grab the content (meta, h1, p, etc.) and the comments, and save both into the file.
Example (using .htaccess to route /topic/1/ to a PHP script - based on your answer you are familiar with this):

    $id = (int) $_GET['id'];                 // from the rewritten URL
    $cacheFile = "/my/public_html/topic/{$id}/index.html";
    if (file_exists($cacheFile)) {
        readfile($cacheFile);                // cached: no DB calls at all
    } else {
        $page     = $db->query("SELECT * FROM pages WHERE page_id = $id")->fetch();
        $comments = $db->query("SELECT * FROM comments WHERE page_id = $id")->fetchAll();
        $content  = render($page, $comments); // build the full HTML
        file_put_contents($cacheFile, $content);
        echo $content;
    }

Or something along these lines.
Serving the static HTML will be very fast, since you never touch the database; the file is simply read back once it has been generated.
I know I'm stepping on unstable ground by providing an answer to this question, but I think it is very instructive.
Pat R Ellery didn't provide enough details to do any kind of assessment, but the good news is that there can never be enough details. The explanation is simple: anybody can build as many mental models as they want, but the real system will always behave a bit differently.
So Pat, do test your system all the time, as much as you can. What you are trying to do is plan the capacity of your solution.
You need the following:
Capacity test - To determine how many users and/or transactions a given system will support and still meet performance goals.
Stress test - To determine or validate an application’s behavior when it is pushed beyond normal or peak load conditions.
Load test - To verify application behavior under normal and peak load conditions.
Performance test - To determine or validate speed, scalability, and/or stability.
See details here:
Software performance testing
Types of Performance Testing
In other words (and a bit simplistically): if you want to know whether your system can handle N requests per time period, simulate N requests per time period and see the result.
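That simulation can be sketched in a few lines. Here the backend is a stub `time.sleep` call standing in for a real HTTP request, and the function name is made up; a real capacity test would point `request_fn` at your server:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load_test(request_fn, n_requests, concurrency):
    """Fire n_requests calls to request_fn using `concurrency` workers.
    Returns (total_elapsed_seconds, sorted per-request latencies)."""
    def timed_call(_):
        t0 = time.perf_counter()
        request_fn()
        return time.perf_counter() - t0

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(n_requests)))
    return time.perf_counter() - start, latencies

# Simulate 200 requests against a stub backend taking ~5 ms per call.
elapsed, latencies = run_load_test(lambda: time.sleep(0.005), 200, 20)
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"throughput: {200 / elapsed:.0f} req/s, p95 latency: {p95 * 1000:.1f} ms")
```

Dedicated tools (listed below) do the same thing with ramp-up schedules, reporting, and distributed workers, but the principle is exactly this.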
There are a lot of tools available:
Load Tester LITE
Apache JMeter
Apache HTTP server benchmarking tool
See list here

Varnish & ESIs: fetching in parallel and possible workarounds

I'm investigating using Varnish with ESIs to cache page content for a high traffic forum-like website.
Context: I want to cache content for visitors only (logged-in users get a really different display and need absolutely fresh content). Still, for visitors, some parts of a page need to be dynamic:
- not cacheable, for example a visitor-dependent module (think of a 'suggestions' widget fed by real-time analysis of the pages viewed, thanks to a beacon)
- cacheable with a small TTL of 15 min, for example a 'latest posts' widget or fast-changing ad campaigns.
Currently we use Apache/PHP/symfony/memcache to cache pages, with an in-house ESI-like mechanism: a page from cache is parsed and some specific tags are interpreted (including calls to web services and/or databases). This is not performant enough, since server time is around 700 ms.
To replace this solution, we could use Varnish + ESIs. The total number of ESIs included in a page can reach 15. The real number of ESIs to fetch will be lower than that, but not by much, given the ESIs' TTLs. The critical problem is that Varnish fetches ESIs sequentially instead of in parallel, and this is not acceptable for us. That feature is somewhere late on Varnish's roadmap.
So,
What is your experience with Varnish and ESIs? How many ESIs do you use, and what response-time gains have you seen?
Do you know of workarounds, or of other serious, configurable (VCL was nice) reverse proxies with parallel ESI fetching?
If not, what kind of good caching strategy do you use for equivalent use-cases ?
Thanks,
P.
I currently work for a high-traffic site, and performance is everything for us. On several pages we use a lot (20+) of ESIs, for example on our search result list. The result list is a JSON response, and every result block in it is a separate ESI. Admittedly, we do cache warming, but we haven't run into any performance issues with this. The number of ESIs only becomes a problem if the backend requests are really slow.
Parallel ESI fetching is on Varnish's feature request list, but I don't think it made it into version 4.1.
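For what it's worth, the per-fragment policies described in the question map onto VCL fairly directly. A sketch assuming Varnish 4+ (the /esi/ URLs are hypothetical):

```vcl
sub vcl_backend_response {
    # Parse ESI tags in HTML pages so <esi:include> fragments are fetched.
    if (beresp.http.Content-Type ~ "text/html") {
        set beresp.do_esi = true;
    }
    # 'Latest posts'-style fragments: cacheable with a short TTL.
    if (bereq.url ~ "^/esi/latest-posts") {
        set beresp.ttl = 15m;
    }
    # Visitor-dependent fragments (e.g. suggestions): never cached.
    if (bereq.url ~ "^/esi/suggestions") {
        set beresp.uncacheable = true;
        set beresp.ttl = 0s;
    }
}
```

Note this only controls caching per fragment; it does not work around the sequential fetching. With aggressive TTLs on most fragments, however, a typical page miss only fetches one or two ESIs, which keeps the sequential cost low.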

Long time to first byte for static content

I see a large time to first byte for static content on IIS 7.5. It's not specific to one site; it's slow for all sites on that server. runAllManagedModulesForAllRequests is set to false, and gzip is enabled for static content. How can I troubleshoot this problem?
Here are the timings: time to first byte for a fairly complex ASP.NET page is 368 ms, while just grabbing a CSS file takes 617 ms! The time varies from run to run but never drops below 200 ms, which seems far too long for such a task.
Server has plenty free memory (in this moment more than 7 GB).
Does it happen every time?
Is compression turned on for static content? What level is it set to, and is the compressed version being cached on the server?
What's the CPU load like?
Does it happen for all static content, or just CSS?
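While checking those, it can help to measure time to first byte yourself from another machine, so you can separate server time from network setup time. A stdlib-only Python sketch (the function is illustrative, not IIS-specific):

```python
import http.client
import time
from urllib.parse import urlsplit

def time_to_first_byte(url):
    """Seconds from sending the request until the response headers arrive.
    Plain HTTP only; for HTTPS substitute http.client.HTTPSConnection."""
    parts = urlsplit(url)
    conn = http.client.HTTPConnection(parts.hostname, parts.port or 80, timeout=10)
    conn.connect()                 # connect first, so we time only the server
    start = time.perf_counter()
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()      # returns once the status line/headers are in
    ttfb = time.perf_counter() - start
    resp.read()
    conn.close()
    return ttfb

# Example: compare an ASP.NET page against a static asset on the same host.
# print(time_to_first_byte("http://yourserver/style.css"))
```

If the measured value tracks the 200-617 ms you see, the delay is genuinely server-side and you can rule out the browser and CDN layers.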

Web Development: What page load times do you aim for?

Website Page load times on the dev machine are only a rough indicator of performance of course, and there will be many other factors when moving to production, but they're still useful as a yard-stick.
So, I was just wondering what page load times you aim for when you're developing?
I mean page load times on Dev Machine/Server
And, on a page that includes a realistic quantity of DB calls
Please also state the platform/technology you're using.
I know that there could be a big range of performance regarding the actual machines out there, I'm just looking for rough figures.
Thanks
Less than 5 sec.
If it's just on my dev machine I expect it to be basically instant. I'm talking 10s of milliseconds here. Of course, that's just to generate and deliver the HTML.
Do you mean that, or do you mean complete page load/render time (HTML download/parse/render, images downloading/displaying, CSS downloading/parsing/rendering, JavaScript download/execution, Flash download/plugin startup/execution, etc.)? The latter is really hard to quantify, because a good bit of that time is spent on the client machine, in the web browser.
If you're just trying to ballpark decent download + render times with an untaxed server on the local network then I'd shoot for a few seconds... no more than 5-ish (assuming your client machine is decent).
Tricky question.
For a regular web app, you don't want your page load time to exceed 5 seconds.
But let's not forget that:
the 20%-80% rule applies here: if it takes 1 second to load the HTML, total rendering/loading time is probably around 5 seconds (as fiXedd stated).
on a dev server, you're often not dealing with the real deal (traffic, DB load and size - number of entries can make a huge difference)
you want to take into account the way users want your app to behave. 5 seconds load time may be good enough to display preferences, but your basic or killer features should take less.
So in my opinion, here's a simple method to get a rough figures for a simple web app (using for example, Spring/Tapestry):
Sort the pages/actions according to your app's profile (which pages should be lightning fast?) and give each a rough figure for the production environment
Then take into account the browser loading/rendering stuff. Dividing by 5 is a good start, although you can use best practices to reduce that time.
Think about your production environment (DB load, number of entries, traffic...) and take an additional margin.
You've got your target load time on your production server; now it's up to you and your dev server to think about your target load time on your dev platform :-D
One of the most useful benchmarks we use for identifying server-side issues is the "internal" time taken from request-received to response-flushed by the web server itself. This means ignoring network traffic / latency and page render times.
We have some custom components (.net) that measure this time and inject it into the HTTP response header (we set a header called X-Server-Response); we can extract this data using our automated test tools, which means that we can then measure it over time (and between environments).
By measuring this time you get a pretty reliable view into the raw application performance - and if you have slow pages that take a long time to render, but the HTTP response header says it finished its work in 50ms, then you know you have network / browser issues.
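The original components are .NET, but the same idea is easy to sketch in any stack. For example, a hypothetical Python WSGI middleware that injects the internal processing time into an X-Server-Response header (the header name comes from the answer above; everything else is illustrative):

```python
import time

def server_timing_middleware(app, header_name="X-Server-Response"):
    """Wrap a WSGI app so every response reports the server's internal
    time (request received -> response started) in a response header."""
    def wrapped(environ, start_response):
        start = time.perf_counter()

        def timed_start_response(status, headers, exc_info=None):
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            # Append our timing header to whatever the app already set.
            return start_response(
                status,
                list(headers) + [(header_name, "%.1fms" % elapsed_ms)],
                exc_info,
            )

        return app(environ, timed_start_response)
    return wrapped
```

Note this measures time up to the point the response starts streaming, not body generation for lazy iterables; for most apps that is the figure you want to trend over time.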
Once you push your application into production, you (should) have things like caching, static-file subdomains, JS/CSS minification, etc., all of which can offer huge performance gains (especially caching), but which can also mask underlying application issues (like pages that make hundreds of DB calls).
All of which is to say: the value we aim for with this time is sub-1 s.
In terms of what we offer to clients around performance, we usually use 2-3s for read-only pages, and up to 5s for transactional pages (registration, checkout, upload etc.)
