Just like in this topic, I have a performance issue in dev mode when adding a Twig "render" tag to my app (related documentation: Embedding controllers).
Without this render tag, my pages are generated in less than 70 ms.
With the render tag, it takes at least 170 ms.
And each additional render tag increases page generation by 100 ms (which is A LOT: why does a whole normal page run in 60 ms while a single render tag costs 100 ms?).
I may need 4 or 5 of them on every page of my app, so that would mean at least 500 ms per page in dev mode.
I totally understand that this is not a problem in prod mode, but it's clearly not comfortable during development.
So, does somebody know a way to get rid of the useless calls, logging, or code that the "render" tag triggers in dev mode?
I explained this just 10 hours ago. Long story short: migrate to Twig extensions.
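A minimal sketch of what that migration can look like, assuming a hypothetical SidebarBuilder service that holds the logic previously reached through the embedded controller (Twig 1.12+ syntax):

<?php
// src/Acme/DemoBundle/Twig/SidebarExtension.php
// Sketch: SidebarBuilder is a hypothetical service holding the logic
// that used to live in the embedded controller.
class SidebarExtension extends \Twig_Extension
{
    private $builder;

    public function __construct(SidebarBuilder $builder)
    {
        $this->builder = $builder;
    }

    public function getFunctions()
    {
        return array(
            // {{ render_sidebar() }} now runs inside the master request:
            // no sub-request, no extra profiler collection.
            new \Twig_SimpleFunction('render_sidebar', array($this, 'renderSidebar'), array('is_safe' => array('html'))),
        );
    }

    public function renderSidebar()
    {
        return $this->builder->buildHtml();
    }

    public function getName()
    {
        return 'sidebar_extension';
    }
}

Register the extension as a service tagged with twig.extension, then replace the {% render ... %} call in the template with {{ render_sidebar() }}.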
One of my favorite features in Symfony is the render tag for embedding controllers. The profiler adds a lot of overhead to every controller call though; it doesn't just cost speed, it also uses a lot of memory. You have a few options to speed it up.
The profiler writes all of its data to an SQLite database by default. IIRC SQLite doesn't allow parallel inserts, so every request has to wait for its turn to access the db to flush its data collectors. You can use your development db (MySQL or whatever you use) to persist profiler data instead. A year ago I gained a lot of speed with this.
You can also disable the profiler for sub-requests, or only use the profiler when an exception happens. See the framework config reference for the full details.
# config_dev.yaml
framework:
    profiler:
        only_exceptions: false
        only_master_requests: false
        dsn: sqlite:%kernel.cache_dir%/profiler.db
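For example, pointing the profiler at a local MySQL database instead looked roughly like this on Symfony 2.x (the db name and credentials are placeholders):

# config_dev.yaml
framework:
    profiler:
        dsn: mysql:host=localhost;dbname=profiler
        username: dev
        password: dev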
You can also move your controller logic into a service, register the service as a Twig global variable, and then directly include the template that the controller used to render.
See https://stackoverflow.com/a/13245994/982075 for instructions.
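A rough sketch of that approach (the service id, bundle, and template names are hypothetical):

# app/config/config.yml
twig:
    globals:
        sidebar: "@acme.sidebar_builder"

{# in the page template, instead of {% render 'AcmeDemoBundle:Sidebar:show' %} #}
{% include 'AcmeDemoBundle:Sidebar:show.html.twig' with { 'items': sidebar.items } %}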
The choice depends on your application. I believe the most practical options are:
1) Use the render tag for heavy templates and use the hinclude.js library to load them asynchronously (see the sketch after this list). This is very helpful when each rendered template is "slow" by itself (e.g. many db connections, large texts, etc.).
2) Do as m2mdas proposed. This is a very fast solution for common cases.
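For option 1, the Twig side is a one-liner on Symfony 2.2+ (the controller reference is hypothetical); hinclude.js then fetches the fragment via Ajax instead of blocking the page:

{{ render_hinclude(controller('AcmeDemoBundle:Feed:latest')) }}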
I'd also follow Elnur's suggestion of Twig extensions. An alternative is the Sonata Block Bundle: http://sonata-project.org/bundles/block/master/doc/index.html . The overhead of a sub-request with Sonata Block is about 7 ms, AFAIR.
I'm working on a site where we are using the slide function from jQuery UI.
The Google-hosted minified version of jQuery UI weighs 63 KB - this is for the whole library. A custom download of just the slide function weighs 14 KB.
Obviously, if a user has already cached the Google-hosted version it's a no-brainer, but if they haven't, it will take longer to load than if I just lump the custom jQuery UI slide function into my main.js file.
I guess it comes down to how many other sites use jQuery UI (if this were plain jQuery, the above would be a no-brainer since loads of sites use jQuery, but I'm unsure about the usage of jQuery UI)...
I can't work out what the best thing to do is in the above scenario.
I'd say if the custom selective build is that small, both absolutely and relatively, there's good reason to choose that path.
Loading a JavaScript resource has several implications, in the following order of events:
Loading: request/response communication or, in case of a cache hit, fetching. Keep in mind that CDN or not, the communication only affects the first page. If your site is built in the traditional "full page request" style (as opposed to SPAs and the like), this literally becomes a non-issue.
Parsing: the JS engine needs to parse the entire resource.
Executing: the JS engine executes the entire resource. That means any initialization/loading code runs, even initialization for features that aren't used on the hosting page.
Memory usage: memory usage depends on the entire resource. That includes static objects as well as functions (which are also objects).
With that in mind, having a smaller resource is advantageous in ways beyond simple loading. More so, a request for such a small resource is negligible in terms of communication. You wouldn't think twice about it had it been a mini version of the company logo somewhere at the bottom of the screen where nobody even notices.
As a side note and potential optimization, if your site serves any proprietary library, or a group of less common libraries, you can bundle all of these together, including the jQuery UI subset, and your users will only have a single request, again making this advantageous.
Go with the Google hosted version
It is likely that the user will have recently visited another website that loads jQuery UI from Google's servers.
It will take load off your server and make other elements load faster.
Browsers only download a fixed number of resources concurrently from one domain. Loading jQuery UI from Google's servers ensures it is downloaded in parallel with the resources that reside on your own servers.
The Yahoo Developer Network recommends using a CDN. Their full reasoning is posted here:
https://developer.yahoo.com/performance/rules.html
This quote from their site really seals it in my mind.
"Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective."
I am not an expert, but here are my two cents anyway. With a CDN you can be sure there is reduced latency, plus, as mentioned, the user has most likely already picked it up from some other website that uses the Google-hosted copy. Also, the thing I always care about: it saves bandwidth.
I'm using Laravel 4 to develop my site, but I'm getting increasingly concerned about speed issues.
I'm developing on a local machine: Windows 7, Apache and PHP 5.4.
One of my pages creates an article. The view relies on a simple Eloquent query to create a select field.
Using the profiler, the query is timed at 20 ms, but the page takes over 4500 ms to complete. Other pages that rely on a record query, e.g. edit article, take over 7000 ms.
The pages also pull details from the authorised user.
In other threads I've followed suggestions to use 127.0.0.1 as the database IP, but this hasn't made a difference.
My pages use jQuery, but nothing heavy, just small features.
I use Blade templating.
Are my times realistic?
How can I profile to see where the bottlenecks are? I've tried adding timers using the profiler but haven't had much success getting them to display.
Are there any standards I should apply?
Does Blade add a big overhead?
I've not posted the code as it would take up all of the room.
I am diving into JavaScript MVC with Angular, and as I understand it, along with the initial shell page, all your scripts are loaded on the initial page load. However, and correct me if I'm wrong, that would mean that a majority of the scripts being loaded could be entirely useless at that point (i.e. you have view #1 showing while the scripts for views #2-#10 aren't needed yet)?
In my case, I have a fairly large web app with a feed page, results page, product page, profile page, among others. It amounts to 10+ pages, and my current (the traditional) approach is loading the scripts specific to each page on load. Now each page is a partial, and I don't believe it's possible to load specific scripts with partials?
So, part of my question is whether my statements are accurate. The other is whether my fear of suffering on the initial page load is justified (especially on mobile devices, for instance).
I really got into Angular hoping to clean up my JavaScript with the MVC approach, and I did not plan on taking advantage of it as a single-page application (I can forgo the use of routing different partials into my view, right?). But now I'm not sure. I just want to get a better understanding of how it works before making the leap.
Any help appreciated. Thanks!
Take a look at the AMD pattern with RequireJS (it works with any type of JS framework). There is a seed project combining AngularJS and RequireJS.
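A minimal sketch of the idea (the file paths and the 'myApp' module name are hypothetical): the page only requests the modules it actually needs, and scripts for the remaining views are fetched on demand.

// main.js
require.config({
    paths: {
        angular: 'lib/angular.min'
    },
    shim: {
        // Angular is not an AMD module, so expose its global via a shim.
        angular: { exports: 'angular' }
    }
});

// Load only what the current view needs; other view modules are
// pulled in by later require() calls.
require(['angular', 'controllers/view1'], function (angular) {
    angular.bootstrap(document, ['myApp']);
});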
I have built a test website using nopCommerce (open source), and everything is working fine. I need to know why my website's loading time is greater than 6 seconds: the homepage works fine, but the category pages, when clicked, take 6-10 seconds to load. How can I check the HTTP requests and calls to the db so that I can track which function is taking a long time?
Test website is test website
Thanks
Things I would try, in that order:
MvcMiniProfiler (see the sketch after this list).
Analyze my code for possible performance bottlenecks using a .NET profiler.
Finally, submit bugs to nopCommerce support if the previous approaches didn't yield anything fruitful that would implicate my own code.
In between I might also check with my hosting provider whether they are the cause of the slowness.
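For the first step, wiring up MvcMiniProfiler is only a few lines; a sketch (in later releases the namespace is StackExchange.Profiling):

// Global.asax.cs
using StackExchange.Profiling;

protected void Application_BeginRequest()
{
    // Profile only local requests so normal traffic is unaffected.
    if (Request.IsLocal)
    {
        MiniProfiler.Start();
    }
}

protected void Application_EndRequest()
{
    MiniProfiler.Stop();
}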
As a quick and dirty check, you can add the time taken to generate the response as a column in the IIS logs - that will give you some idea as to whether the server is slow to serve the pages or whether you need to do some front-end optimisation work.
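If the time-taken field isn't already in your logs, it can be enabled in IIS Manager under Logging > Select Fields, or via appcmd, roughly like this (a sketch; run from an elevated prompt):

%windir%\system32\inetsrv\appcmd.exe set config -section:system.applicationHost/sites /siteDefaults.logFile.logExtFileFlags:"Date, Time, ClientIP, Method, UriStem, HttpStatus, TimeTaken" /commit:apphost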
On the front-end side, the first thing you need to do is merge all the CSS files for a theme into one to save on round trips - the browser can't render the page until it has the CSS.
All the .js files you have in the head will also block the page; can you merge them and load them later?
The performance of imagegen.ashx looks to be on the slow side - do you need to generate the banners on the fly, or could they be pre-generated?
If the back-end side of generating the page is slow, there are scripts around the web that show which queries are using the most CPU, performing the most IO ops, etc.
Below is a list of things you can improve:
1. Combine your js.
There are a few tools you can use, for example jsMin; you can read this post: http://encosia.com/automatically-minify-and-combine-javascript-in-visual-studio/ . However, jsMin doesn't seem to compress the combined js.
Another option is jMerge: http://demo.lateralcode.com/jmerge/ . It kind of does it after the fact, in the sense that you need to have the site up before you can combine the files with jMerge, since it only takes an HTTP link.
The best one I've seen so far is the bundling and minification feature of MVC 4 (a sketch follows below). It's part of MVC 4; however, you can get a NuGet package for your MVC 3 app.
Word of advice: bundling every js file you have is not necessarily a good idea; it even backfires sometimes, since you end up with one big js file the browser has to download sequentially, instead of several smaller ones it could download in parallel (you might want to look into head.js to parallelize js downloads). So the trick here is to keep a balance. I ended up with jQuery from the Google CDN and the rest of my js bundled into one file.
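A sketch of the MVC 4 bundling setup (the bundle and file names are hypothetical):

// App_Start/BundleConfig.cs
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One bundle for everything except the CDN-hosted jQuery.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/plugins.js",
            "~/Scripts/site.js"));

        // Minification only kicks in with <compilation debug="false" />,
        // or when forced via BundleTable.EnableOptimizations = true;
    }
}

Reference the bundle in the layout with @Scripts.Render("~/bundles/site").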
2. Put js at the bottom of the page, so the browser doesn't have to load the js before it starts to render the page. You need to be careful with this one though: normally you will have jQuery functions doing stuff on document.ready() in the header of the page, and I advise moving those to the bottom of the page as well, if possible.
If you move the js references and script blocks in your layout page to the bottom, you will most likely run into problems with the js references and script blocks nested in your individual views. No worries: you then need to look into using @section in your views (probably suitable for a discussion in another thread) and rendering it in your layout page, so that the references and script blocks inside your views get rendered at the bottom of the page at run time. A sketch follows below.
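A sketch of the @section wiring (Razor; the bundle name matches the example above):

@* _Layout.cshtml: all scripts rendered at the bottom *@
<body>
    @RenderBody()
    @Scripts.Render("~/bundles/site")
    @RenderSection("scripts", required: false)
</body>

@* SomeView.cshtml: its script block still ends up at the bottom *@
@section scripts {
    <script>
        $(function () { /* document.ready work for this page */ });
    </script>
}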
3. Use a CDN.
Pretty straightforward.
4. Combine your CSS.
Combine the CSS files into one with the same tool you use for combining js, but reference the result in the page header instead of at the bottom.
5. Enable static content caching, with something like this in your web.config file:
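A sketch (the one-year max-age is just an example value):

<!-- web.config -->
<system.webServer>
  <staticContent>
    <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365.00:00:00" />
  </staticContent>
</system.webServer>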
It won't help with the first-time load, but it will definitely make things a lot faster for returning users.
6. Enable URL compression.
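A sketch (IIS 7.5+; dynamic compression requires the corresponding IIS module to be installed):

<!-- web.config -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>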
Time to first byte
This is one of the metrics used by webpagetest.org, but don't bang your head against this one too much, as it basically says how fast your web server can serve the content. There's probably not much you can do here from the software end.
Hope that helps!
nopCommerce is deadly slow, and the developers don't take the performance issues seriously. I have seen a lot of performance-related forum threads left unanswered. So best of luck.
Well... we've developed a J2EE application using Struts2's Ajax capabilities. We find that the Dojo implementation is quite slow. We did the following things:
1. A custom build of the Dojo library (increased dojo.js from 240 KB to 350 KB).
2. Took all the static stuff out of the Struts jar and kept it outside.
The performance was significantly improved, but it is still quite heavy, as you can guess from the 350 KB size.
Is Struts2 Ajax supposed to be this heavy? Or is there any lighter implementation available?
Edit: I used Firebug and YSlow with my application. A couple of changes that improved my situation hugely are mentioned below:
Custom build of Dojo (reduced the number of I/Os).
Moved the static files out of the Struts jar (helped a great deal).
Tuned the server to gzip the response (reduced the response size to 1/3).
Reduced the number of images on the site (this is obvious).
Will keep updating on further changes...
First of all, check that you did everything on the server to facilitate caching (e.g., setting the right HTTP headers, compression, server-side caching, upstream caches, and so on). See Improving performance… for more details.
The goal is to reduce I/O as much as possible — use Firebug or any other network traffic monitoring tool to see how much is sent back and forth. Try to minimize the number of I/O requests and the total number of bytes.
Don't forget that this applies to your dynamic data too — choose efficient formats, bundle several related requests together, and remove all deadwood that is being sent over and over unchanged.
If the custom build and server-side tweaks didn't help, consider restructuring your web app to be more light-weight. Examples:
Evaluate the splash screen technique discussed in the link above.
If you use a lot of different form widgets, see if it is really necessary, and fall back on regular DOM elements like "input", "button", "textarea", "select".
The same goes for layout widgets. See if simple CSS can help you out.
Evaluate building Dojo in layers instead of one monolithic dojo.js, so only the necessary subset is loaded by each web page (see the sketch after this list). Details are in The Package System and Custom Builds.
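A rough sketch of a layered profile for the legacy Dojo build system (the layer and module names are hypothetical):

// app.profile.js, fed to the Dojo build scripts
dependencies = {
    layers: [
        {
            // lightweight pages load just this layer
            name: "../app/core.js",
            dependencies: [
                "app.main"
            ]
        },
        {
            // widget-heavy pages add this layer on top
            name: "../app/forms.js",
            dependencies: [
                "dijit.form.Button",
                "dijit.form.ValidationTextBox"
            ]
        }
    ],
    prefixes: [
        ["dijit", "../dijit"],
        ["app", "../../app"]
    ]
};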
Having built web applications with Dojo for a living for the last 2 years, I have yet to see one that cannot be optimized properly, to the point where end users perceive it as "fast", "nimble", and "light-weight".
Make sure you follow this FAQ first:
http://struts.apache.org/2.x/docs/performance-tuning.html
I usually rewrite my own theme instead of using the Struts2 Ajax theme, which has Dojo built in. That way I can use whatever toolkit I want (jQuery). I saw the biggest performance improvement when I copied the templates folder from the jar to the root web directory of the webapp.
Last I checked, Struts was shipping a release of Dojo (0.4) that's going on 2 years old. Dojo was rewritten for version 0.9/1.0, with significant performance gains and reduced code size. Make sure you're running a recent version of Dojo (the current version is 1.2.3) and use the build and the tips from Eugene, above.