Why is my Gatsby website's main thread so slow? - performance

My main-thread work is 6.1 seconds long according to Lighthouse. My website is https://ali126191.github.io/charity-project-starter/
Lighthouse main-thread results:
Category                        Time Spent
Script Evaluation               2,267 ms
Style & Layout                  1,095 ms
Other                           1,053 ms
Script Parsing & Compilation      757 ms
Parse HTML & CSS                  536 ms
Garbage Collection                273 ms
Rendering                          78 ms
I thought Gatsby did code splitting, so my website should be pretty fast and this would not be an issue. I am a beginner, so I do not understand much about these things.
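Gatsby does split code per page, but everything a page imports at the top level is still parsed and evaluated as soon as that page loads, which is what shows up as Script Evaluation time. A common way to cut that cost is to defer heavy components so their chunk is only fetched and run when actually needed. A minimal sketch using @loadable/component (HeavyChart and its path are hypothetical placeholders, not part of the original question):

    // src/pages/index.tsx -- hypothetical page
    import * as React from "react";
    import loadable from "@loadable/component";

    // The chart's code is split into its own chunk and only fetched,
    // parsed, and evaluated when <HeavyChart /> actually renders.
    const HeavyChart = loadable(() => import("../components/HeavyChart"), {
      fallback: <p>Loading chart…</p>,
    });

    const IndexPage = () => (
      <main>
        <h1>Charity Project</h1>
        <HeavyChart />
      </main>
    );

    export default IndexPage;

Re-running Lighthouse against a production build (gatsby build) will show whether the Script Evaluation number actually drops.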

Related

Failing Core Web Vitals despite 90%+ PSI results

Site: www.purelocal.com.au
We've tested thousands of URLs in Google PSI - all are green, 90%+.
However, in Google webmaster tools, 0 URLs are marked good.
Can someone please explain what Google requires and what we can do to pass Core Web Vitals before June?
We've spent months optimising everything and cannot optimise any further, but Google says that none of our URLs pass Core Web Vitals... it's just ridiculous.
Looking at your website's report in the CrUX Dashboard, there are a couple of things you could optimize more:
First, your site's LCP is right on the edge of having 75% good desktop experiences, and phone experiences are below that at 66% good. https://web.dev/optimize-lcp/ has some great tips for addressing LCP issues.
Second, while your site's desktop FID experiences are overwhelmingly good (98%), you do seem to have a significant issue for phone users (only 44% good). There are similarly great tips in the https://web.dev/optimize-fid/ article.
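For the phone FID issue in particular, the usual culprit is long JavaScript tasks blocking the main thread while the page loads. One pattern from that guide is to break big loops into chunks and yield back to the browser between them so input events can be handled promptly. A rough sketch (processItem and items stand in for your own work; none of these names come from your site):

    // Process a large array without monopolizing the main thread:
    // work for at most ~50 ms, then yield to the event loop so the
    // browser can respond to user input, and continue afterwards.
    async function processInChunks<T>(
      items: T[],
      processItem: (item: T) => void,
      budgetMs = 50
    ): Promise<void> {
      let sliceStart = performance.now();
      for (const item of items) {
        processItem(item);
        if (performance.now() - sliceStart > budgetMs) {
          await new Promise((resolve) => setTimeout(resolve, 0)); // yield
          sliceStart = performance.now();
        }
      }
    }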
While the big green "98" score on PSI makes it look like the page is nearly perfect, what matters most in terms of the user experience is real field data. That information can be found in the "Field Data" and "Origin Summary" sections of the report.
You mentioned in the comments that your server response time is an issue. I can confirm this with lab testing:
https://webpagetest.org/graph_page_data.php?tests=210506_AiDcJ0_0283e8c51814788904bdf19cebe7a5c8&medianMetric=TTFB&fv=1&median_run=1&zero_start=true&control=NOSTAT#TTFB
https://webpagetest.org/result/210506_AiDcJ0_0283e8c51814788904bdf19cebe7a5c8/8/details/#waterfall_view_step1
The long light blue bar on line 1 of the chart above shows how long it takes your server to respond to the request. In this case the time to first byte (TTFB) is 1.132 seconds. This is going to make a fast LCP very hard for most users, because in these tests it takes 1.9 seconds just to get the HTML to the client. No amount of frontend optimization can make the HTML arrive sooner than that; you need to focus on backend optimizations to get the TTFB down.
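If you want to see the TTFB that real visitors experience rather than only lab runs, the browser exposes it through the standard Navigation Timing API. A tiny sketch, with the console output purely for illustration:

    // Log this visitor's time to first byte (TTFB).
    // responseStart is measured from the start of the navigation.
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];
    if (nav) {
      console.log(`TTFB: ${Math.round(nav.responseStart)} ms`);
    }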
I can't give you any specific hosting recommendations but it does seem like the shared hosting is adversely affecting your users' LCP performance.

Why does a deployed Meteor site take so long to load?

For a very simple application, my Meteor site is taking 4.1s to start downloading the first byte of data. This is with a very basic setup. The relevant times etc (taken from http://www.webpagetest.org) are:
IP: 107.22.210.133
Location: Ashburn, VA
Error/Status Code: 200
Start Offset: 0.121 s
DNS Lookup: 64 ms
Initial Connection: 56 ms
Time to First Byte: 4164 ms
Content Download: 247 ms
Bytes In (downloaded): 0.9 KB
Bytes Out (uploaded): 0.4 KB
Is this due to Meteor being slow, or is there likely to be a bottleneck in my code? Is there a way to determine this?
Thanks.
That delay is a function of the time it takes your subscriptions to get data from the server. If any of the document data the client needs on page load is static, store it in unmanaged (unsynchronized) local collections so it is available immediately on initial page load. See collections.meteor.com for a load time comparison of data stored in an unmanaged versus a managed collection.
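A minimal sketch of that unmanaged-collection approach: passing null instead of a name gives you a client-only collection that is never synchronized with the server, so data seeded at startup is available immediately on page load (StaticPages and the seed documents are hypothetical examples, not from the question):

    // client/static-data.ts
    import { Meteor } from "meteor/meteor";
    import { Mongo } from "meteor/mongo";

    // A null name creates a local, unmanaged collection: no
    // subscription and no server round-trip on initial page load.
    export const StaticPages = new Mongo.Collection<{ slug: string; title: string }>(null);

    // Seed it once on the client at startup.
    Meteor.startup(() => {
      StaticPages.insert({ slug: "about", title: "About Us" });
      StaticPages.insert({ slug: "contact", title: "Contact" });
    });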
According to WebPageTest, that's the time needed for the DNS, socket, and SSL negotiations, plus 100 ms.
I liked @ram1's answer, but I would like to add that it's also down to your server's performance. That amount of time is common on shared hosting. There are two workarounds: change your hosting or add a CDN service.
It will also help if you have fewer redirects.
You should make better use of caching and, for Chrome users, you can apply the pre-* party features, as sketched below.
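Those "pre-*" features are resource hints such as dns-prefetch, preconnect, and prefetch. They normally live as <link> tags in the document head, but here is a script-form sketch (the CDN origin and next-page URL are placeholders):

    // Ask the browser to set up connections or fetch resources early.
    // Equivalent to <link rel="..."> hints in the document <head>.
    function addHint(rel: string, href: string): void {
      const link = document.createElement("link");
      link.rel = rel;
      link.href = href;
      document.head.appendChild(link);
    }

    addHint("dns-prefetch", "https://cdn.example.com"); // resolve DNS early
    addHint("preconnect", "https://cdn.example.com");   // DNS + TCP + TLS
    addHint("prefetch", "/next-page.html");             // likely next navigation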

Error page is taking 25 to 35% of CPU in nopCommerce 2.40

I'm using nopCommerce 2.40.
I've run a load test with 5 virtual users for 1 minute on ErrorPage.htm, which is a simple HTML page, and found that it is taking 25 to 35% of the CPU.
I think this is going to be a serious performance problem if a simple HTML page takes that much CPU. There is no need to check other pages, and it does not matter whether you are using output caching or other caching to improve performance.
What could be the reason behind this?
Even though it is a static page, the request still executes several SQL commands. There is a fix available here:
http://nopcommerce.codeplex.com/SourceControl/changeset/changes/f693be2bc2e0
It adds .htm and .html pages to the ignore list, so those requests no longer run through the application.

SSRS 2005 Subreporting Performance Concern

I have a report which, when rendered on its own, has the following performance times in milliseconds (taken from the ExecutionLogStorage table):
TimeDataRetrieval: 6776
TimeProcessing: 142
TimeRendering: 30
When this report is used as a sub-report which is repeated 34 times, the performance of the overall report comes out as follows:
TimeDataRetrieval: 9255
TimeProcessing: 187709
TimeRendering: 35
Furthermore, the memory consumption of my IIS process (using the ReportViewer web control) goes up by several hundred MB.
Are these performance issues inherent to sub-reporting, or is there something wrong with my report?

BIRT: PDF - Generation Time

I have a report where the HTML generation for a preview takes about 39 seconds. When I try to preview the report as PDF, it is still not done after 4 minutes. Is that normal? My other reports show at most about a 50% time difference between output formats.
If it's not normal, how can I speed up PDF report generation?
Thanks!
(BIRT 2.1.3, RCP Designer)
I would say a 6x-plus increase in generation time for PDF over HTML is not to be expected.
Most of my reports take no more than twice as long to export to PDF as they do to HTML; XLS export falls somewhere between HTML and PDF.
I was able to reduce execution time by splitting up some data sets and combining others, so some experimentation may give you good results.
A key thing to note, however, is that my improvement was spread across all export types, not just limited to PDF.
That isn't much help on its own, but it gives you something to try.
