Say that on page1.html I need mootools.js and main.js... I guess that these tools should generate one minified js file (say min1.js).
Then on page2.html I need mootools.js, main.js AND page2.js... Do those tools serve min1.js (already cached by the browser) plus page2.js? Or do they combine all three .js files and serve the resulting minified file, which then needs to be fully cached again by the browser?
Thank you
Assuming you are using the Apache module mod_pagespeed, since you tagged the question with it but didn't say so explicitly...
If you turn on ModPagespeedEnableFilters combine_javascript (which is disabled by default), it operates on the whole page. According to the documentation:
This filter generates URLs that are essentially the concatenation of
the URLs of all the script files being combined.
page1.html would combine mootools.js and main.js; page2.html would combine mootools.js, main.js, and page2.js.
To answer your question: yes, the browser will end up caching several copies of the repeated JavaScript.
However,
By default, the filter will combine together script files from
different paths, placing the combined element at the lowest level
common to both origins. In some cases, this may be undesirable. You
can turn off the behavior with: ModPagespeedCombineAcrossPaths off
If you turn this behavior off and spread your files across paths deliberately, common scripts can be combined as one unit and page-specific scripts combined on their own. This keeps the duplication of large, common libraries down.
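The relevant directives might look like this in your pagespeed configuration (a sketch using the legacy ModPagespeed* directive names from that era; adjust for your Apache setup):

```apache
# Enable whole-page JavaScript combining (off by default):
ModPagespeedEnableFilters combine_javascript

# Keep combinations from spanning directories, so shared libraries
# in one path combine separately from page-specific scripts:
ModPagespeedCombineAcrossPaths off
```

With that, putting mootools.js and main.js in one directory and the per-page scripts elsewhere would let the common combination stay cached across pages.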
In terms of performance/speed of the final product, and according to best practices: should Normalize.css be kept as a separate file (linked from the HTML head), or is it better to compile it into the final .css file?
I was searching here and on many other websites but couldn't find an answer. Hopefully, you'll understand my dilemma with this:
1. Leave normalize.css in the node_modules folder and link to it from our HTML.
I'm still fresh into coding, but if I understand correctly, this approach adds one more (maybe unnecessary?) request to the server on top of our main.css file? How bad is that, and how taxing is it on the performance/loading time of the website?
<link rel="stylesheet" href="../node_modules/normalize.css/normalize.css">
<link rel="stylesheet" href="temp/styles/styles.css">
On the other hand, we can:
2. Use 'postcss-import' to import normalize.css along with the other modules and compile them all together into one final .css file.
Ok, now we have everything in one place, but we have just added 451 lines of code (and comments) before the first line of our actual CSS. In terms of readability it doesn't seem like the best solution to me, but is the website going to load a bit faster now?
Disclaimer: I've been using the second approach so far, but I started asking myself if that is the optimal solution.
Thank you in advance.
You are quite correct in stating that a web page will load faster if it makes fewer requests to the server when loading. You are also correct in stating that the combined file is less readable than the individual files loaded separately.
Which is more important to you in your situation is a question only you can answer. That is why you are having a hard time finding definitive advice.
Personally I use the separate file option in development so that the files are easy to read and debug. Speed of loading isn't as important on a development machine.
In production websites I use the combined file option. In fact, I use combine and minify to reduce the number of files loaded and keep the size of those files as small as possible. Readability is less important in this situation.
Ideally adding normalize.css to your final css would be done in a post processing step that combines all of your source files into one file and minifies the whole thing. That way your source is still readable but you end up only loading one file.
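With the build-step approach, the source stays readable because the import happens at compile time; a src/main.css might look like this (a sketch, assuming postcss-import is configured to resolve packages from node_modules):

```css
/* src/main.css — postcss-import inlines the import below at build
   time, so the browser only ever fetches one compiled stylesheet. */
@import "normalize.css"; /* resolved from node_modules */

body {
  margin: 0;
}
```

Your editable source file remains a few lines long; only the generated output carries the extra 451 lines, and a minifier will strip the comments from those anyway.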
I am rebuilding a site with DocPad, and it's very liberating to build a folder structure that matches my content-creation workflow. But I'm running into a problem with DocPad's hard division between content to be rendered and 'static' content.
Docpad recommends that you put things like images in /files instead of /documents, and the documentation makes it sound as if otherwise there will be some processing overhead incurred.
First, I'd like an explanation if anyone has it of why a file with a
single extension (therefore no rendering) and no YAML front-matter,
such as a .jpg, would impact site-regeneration time when placed
within /documents.
Second, the real issue: is there a way, if it does indeed create a
performance hit, to mitigate it? For example, to specify an 'ignore'
list with regex, etc...
My use case
I would like to do this for posts and their associated images to make authoring a post more natural. I can easily see the images I have to work with and all the related files are in one place.
I am also doing this for artwork I am displaying. In this case it's an even stronger use case: the only data in my html.eco file is YAML front matter of various metadata, and my layout automatically generates the gallery from the images located in a folder with the same name as the post. I can match the relative output path in my /files directory, but it's error-prone, because you're in one folder (src/files/artworks/) when creating the folder of images and another (src/documents/artworks/) when creating the HTML file. Typos are far more likely, since you can never see the folder and the HTML file side by side...
Even without justifying a use case, I can't see why DocPad should impose such a hard division. A performance consideration should not be passed on to the end user like that if it can be avoided in any way. Since with DocPad I am likely to be managing my blog through the file system, I ought to have full control over that structure, and I certainly don't want my content divided up based on some framework limitation or performance concern instead of on logical content divisions.
I think the key is the line about "metadata". Even though a file does NOT have a double extension, it can still have metadata at the top of the file, which needs to be scanned and read. The double extension really just tells DocPad to convert the file from one format and output it as another. If I create a plain HTML file in the documents folder, I can still include the metadata header in the form:
---
tags: ['tag1','tag2','tag3']
title: 'Some title'
---
When the file is copied to the out directory, this metadata will be removed. If I do the same thing to an HTML file in the files directory, the file will be copied to the out directory with the metadata header intact. So the answer to your question is that even though your file has a single extension and is not "rendered" as such, it still needs to be opened and processed.
The point you make, however, is a good one. Keeping images and documents together. I can see a good argument for excluding certain file extensions (like image files) from being processed. Or perhaps, only including certain file extensions.
I will potentially have many partial views for my application, which can be grouped in a folder structure. It seems I ought to do this, otherwise I will have a Views folder with loads of files. So I assume I should have something like:
Views ->
Group1 ->
PartialView1
PartialView2
What would the HTML.Partial call look like?
@Html.Partial("~/Views/Group1/PartialView1.cshtml", Model)
Another idea I had was to have one partial view file with conditional code blocks, but I suspect this goes against everything that partial views are about.
Finally, is there any difference in performance between many small partial views versus one large partial view with multiple conditional components? I am thinking of one file loaded into memory and compiled to code, as opposed to multiple small file loads.
Thanks.
EDIT: More Info.
I have a generic controller that I am using to render different parts of a report, so all sections for an "Introduction" chapter would be rendered using "Introduction" partials, e.g. "Introduction.Section1", "Introduction.Section2". In my scenario I do not believe I have common sections across chapters, so I could go with the "file." naming idea, but the Views folder would be large, hence why I am considering the use of subfolders.
EDIT: Thanks all. Some tremendous ideas here. I went with the folder idea in the end since I use this approach elsewhere. However I do realise I need to use absolute pathing, but this is not an issue.
As long as they're in the Views directory somewhere, it shouldn't really matter. If you put it in a location other than Views/{controller} or Views/Shared, then you'll need the fully qualified location, including Views and the extension, so @Html.Partial("~/Views/Group1/PartialView1.cshtml").
Personally, if you have a lot of partials that are used in a single controller, I'd leave them in the {controller-name} directory (with a leading underscore as @IyaTaisho suggested). But if they're used in multiple controllers, and you need to group them, I'd group them under Views/Shared/{groupName}.
Regarding one big vs. many small partials, I'd say go with many small ones. There might be a reason to do one big one now and then, but in general, I believe a partial should be as simple as possible. Remember you can always have nested partials, so if you have shared functionality or layout among many partials, you can break it into a parent partial and many child partials underneath.
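For example, a report view might pull in grouped partials like this (paths and names are illustrative, not from your project):

```cshtml
@* Fully qualified path is required once the partial leaves
   Views/{controller} or Views/Shared: *@
@Html.Partial("~/Views/Shared/Reports/_IntroductionSection.cshtml", Model)

@* Inside a partial you can nest further partials, passing
   down just the piece of the model each one needs: *@
@Html.Partial("~/Views/Shared/Reports/_SectionHeader.cshtml", Model.Header)
```

Keeping each partial small and nesting them keeps the per-file markup easy to follow, and the Razor view engine caches compiled views, so the many-small-files approach costs little at runtime.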
You could use a parent child file naming convention like:
header.html
header.login.html
header.searchbar.html
You could even take it a step further:
contact.helpdesk.html
contact.office.html
Re-using partials is much less frequent than unique partials, so you could use a convention for re-usable partials like:
global.partial1.html
global.partial2.html
The limitation is a large, flat file directory.
The benefits are that it's easy to skim and easy to sort.
I usually add an _ in front of a partial. For example, you might have a main view called Home.cshtml, and the pieces (partials) on the page would be named something like _header.cshtml, _footer.cshtml, etc.
I've read that Firefox 3.5 has a new feature in its parser:
Improvements to the Gecko layout
engine, including speculative parsing
for faster content rendering.
Could you explain that in simple terms?
It's all to do with this entry in bugzilla: https://bugzilla.mozilla.org/show_bug.cgi?id=364315
In that entry, Anders Holbøll suggested:
It seems that when encountering a script-tag, that references an external file,
the browser does not attempt to load any elements after the script-tag until
the external script files is loaded. This makes sites, that references several
or large javascript files, slow.
...
Here file1.js will be loaded first, followed sequentially by file2.js. Then
img1.gif, img2.gif and file3.js will be loaded concurrently. When file3.js has
loaded completely, img3.gif will be loaded.
One might argue that since the js-files could contain for instance a line like
"document.write('<!--');", there is no way of knowing if any of the content
following a script-tag will ever be show, before the script has been executed.
But I would assume that it is far more probable that the content would be shown
than not. And in these days it is quite common for pages to reference many
external javascript files (ajax-libraries, statistics and advertising), which
with the current behavior causes the page load to be serialized.
So essentially, the html parser continues reading through the html file and loading referenced links, even if it is blocked from rendering due to a script.
It's called "speculative" because the script might do things like setting CSS properties such as "display: none" or commenting out sections of the following HTML, thereby making certain loads unnecessary... However, in the 95% use case, most of the references will be loaded, so the parser is usually guessing correctly.
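The loading order described in the quoted bug report corresponds to markup along these lines (file names taken from the quote; the snippet is purely illustrative):

```html
<!-- Without speculative parsing: file1.js blocks, then file2.js blocks.
     Only once the parser passes file2.js are img1.gif and img2.gif
     discovered; they fetch alongside file3.js, which again blocks the
     parser, so img3.gif isn't even requested until file3.js is done. -->
<script src="file1.js"></script>
<script src="file2.js"></script>
<img src="img1.gif">
<img src="img2.gif">
<script src="file3.js"></script>
<img src="img3.gif">
```

With speculative parsing, the scanner keeps reading past each blocking script and starts fetching img3.gif (and anything else it finds) immediately, on the bet that the scripts won't hide or rewrite that content.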
I think it means that when the browser would normally block (for example for a script tag), it will continue to parse the HTML. It will not create an actual DOM until the missing pieces are loaded, but it will start fetching script files and stylesheets in the background.
I am very new to Ruby, so could you please suggest best practices for separating files and including them?
What is the preferred structure of the file layout? When do you decide to separate an algorithm into a new file?
When do you use load to include other files and when do you use require?
And is there a performance hit when you include files?
Thanks.
I make one file per class, except for small helper classes that are not needed by other files. I also separate my different modules into subdirectories.
The difference between load and require is that require will only load the file once, even if it's called multiple times, while load will load it again regardless of whether it's been loaded before. You'll almost always want to use require, except maybe in irb when you want to manually reload a file.
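A quick sketch of the difference, using a throwaway file that counts how many times it is actually executed (a minimal illustration, not production code):

```ruby
require 'tmpdir'

Dir.mktmpdir do |dir|
  path = File.join(dir, "counter.rb")
  # The file bumps a global counter each time it is *executed*:
  File.write(path, "$executions = ($executions || 0) + 1\n")

  $first_require  = require path  # true: file executed (count = 1)
  $second_require = require path  # false: already loaded, not re-run
  load path                       # load always re-executes (count = 2)
end

puts $executions        # 2
puts $first_require     # true
puts $second_require    # false
```

Note that require returns true only when it actually loaded the file, which is a handy way to see the caching in action.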
I'm not sure about the performance hit. When you load or require a file, the interpreter has to parse it; most Ruby implementations then compile it to virtual-machine code. Obviously, require is more efficient when the file may already have been included, because it skips loading it again.