I use the KendoUI library in my project; it's already minified but incredibly big.
Is it possible to exclude it from being uglified when using grunt-usemin?
Thanks!
In your grunt configuration, use an exclamation point to make an exclusion. Place those at the end of your src array.
e.g., add this to the end of the src array:
'!htdocs/js/kendo.all.min.js'
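A minimal sketch of how that could look in an Uglify target (the task layout and destination path here are assumptions, not taken from your setup):

uglify: {
  dist: {
    files: {
      'dist/app.min.js': [
        'htdocs/js/**/*.js',
        '!htdocs/js/kendo.all.min.js' // the '!' negation keeps the pre-minified Kendo bundle out
      ]
    }
  }
}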
You'll have to modify your flow for js and use a custom post-processor. That basically consists of adding a flow property to your useminPrepare.options (follow the basic structure in the usemin README file), but instead of just adding a step (e.g. 'uglify'), plug in a custom post-processor (a fuller sketch follows below):
{
  name: 'uglify',
  createConfig: function (context, block) {
    // ...
  }
}
To customize how it will handle files, copy the createConfig from the example file you find most useful (see the files in grunt-usemin/lib/config/) and modify it as you need (i.e. excluding the file you want).
I used a custom post-processor to add ngAnnotate to the usemin flow for js, just changing name to ngAnnotate and copying the createConfig from uglify.
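Put together, it could look roughly like this; a sketch only, with the HTML path and the other steps assumed rather than taken from your Gruntfile:

useminPrepare: {
  html: 'app/index.html',
  options: {
    dest: 'dist',
    flow: {
      html: {
        steps: {
          js: ['concat', {
            name: 'uglify',
            createConfig: function (context, block) {
              // start from grunt-usemin/lib/config/uglify.js and adjust the
              // generated src list here, e.g. to exclude kendo.all.min.js
            }
          }],
          css: ['concat', 'cssmin']
        },
        post: {}
      }
    }
  }
}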
Related
I am trying to optimize a RequireJS project first for HTTP/2 and SPDY, and secondly for HTTP/1.x. Rather than concatenating all of my RequireJS modules into one big file, I am hacking r.js to resave the modules that would have been concatenated as separate files. This will be better for the new protocols, because each file will be cached individually, and a change to one file won't invalidate the caches of all the other files. For the old protocols, I will pay a penalty in additional HTTP requests, but I can at least avoid subsequent rounds of requests by appending revision hashes to my files (i.e. "module.js" becomes "module.89abcdef.js") and setting a long max-age on them, so I want to do that.
r.js adds names to previously-anonymous modules. That means that the file "module.js" with this definition:
define(function () { ... })
becomes this:
define("module.js", function () { ... })
I want this, because I will be loading some of my modules via <script> elements (to prevent RequireJS HTTP requests acting as a bottleneck to dependency resolution; I can instead utilize r.js's topological sorting to generate <script> elements which the browser can start loading as soon as it parses them). However, I also plan to load some modules dynamically (i.e., via RequireJS's programmatic insertion of <script> elements). This means RequireJS may at some point request a module via a URL.
Here is the problem I have run into: I am generating a RequireJS map configuration object to map my modules' original module IDs to their revision-hashed module IDs, so that when RequireJS loads a module dynamically, it requests the revision-hashed file name. But the modules it loads, which are now named modules thanks to r.js, have the un-revisioned name of the module, because I don't revision the files until after I minify them, which I only do after r.js generates them (thus r.js doesn't know about the revisions).
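For concreteness, the generated map configuration looks roughly like this (the module IDs are illustrative, not real ones from my project):

// illustrative only: map unrevisioned module IDs to their revision-hashed files
require.config({
  map: {
    '*': {
      'module.js': 'module.89abcdef.js'
    }
  }
});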
So the file "module.89abcdef.js" contains the module:
define("module.js", function () { ... })
When these named modules execute, define will register a module with the wrong name. When other modules require "module.js", that name is mapped to "module.89abcdef.js" (thanks to the map configuration), but since no module with that name was actually defined by the time the onload event of the <script> with src "module.89abcdef.js" fired, the module they receive is undefined.
To be faithful to the new file name, the file "module.89abcdef.js" should probably instead contain the definition:
define("module.89abcdef.js", function () { ... })
So, I believe the solution to my problem will be to replace the unrevisioned name of the module in the r.js-generated file with the revisioned name. However, this means the hash of the file is now invalidated. My question: In this situation, does it matter that the hash of the file is invalid?
I think it's impossible for the module name in the source code to be faithful to the file name, because if the module name included the hash, the hash of the file would change, so the module name would have to become the new hash, and so on recursively.
I fear there might be a case where the unfaithful hash would cause the file to still be cached even when it'd actually changed, but I cannot think of one. It seems that appending the same hash for the file's original contents to the file's name and the module's name should always still result in a unique file.
(My lengthy explanation is an attempt to eliminate certain "workaround"-type answers. If I weren't loading modules dynamically, then I wouldn't need to use map and I wouldn't have a file-name-vs-module-name mismatch; but I do load modules dynamically, so that workaround won't work. I also believe I would not need to revision files if I were only serving clients on newer protocols [I could just use ETags as there would be no additional overhead to make the comparison], but I do serve clients on the older protocols, so that workaround doesn't work either. And finally, I don't want to run r.js after revisioning files, because I only want to revision after minifying the files, in case minification maps two sources to the same output, in which case I could avoid an unnecessary cache invalidation.)
Problem
This is how the app behaves:
every folder corresponds to a dynamic "feature" in the app.
features are independent of each other. (Inside each folder, none of the files have dependencies on other folders. However, each folder may use the same common libraries like jquery.)
not all features are needed all the time
within each folder, the files are not necessarily dependent on each other (but it makes sense to bundle them together because that's how a "feature" works)
the number of features is not fixed, so a process where you have to manually update various configurations/files each time you add or remove a feature is NOT ideal
Desired Solution
Given what I described, what seems to make sense is to create chunks for each feature/folder and then load each chunk asynchronously (on demand) as needed by the app.
The problem is that this doesn't seem easy with webpack. I'd like to be able to do this:
require.ensure([], function() {
    var implementation = require('features/*/' + feature);
    //...
});
where each feature (folder) goes in one chunk.
Unfortunately, if I'm understanding require.context correctly, that's not possible.
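As far as I can tell, the closest thing is a context over the whole features directory, which pulls every matching file into whatever chunk the context lives in rather than producing one chunk per folder. A rough sketch of what I mean (the file variable is hypothetical):

// rough sketch: a context like this bundles all matches together,
// not one chunk per feature folder
var context = require.context('./features', true, /\.js$/);
var implementation = context('./' + feature + '/' + file);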
An alternative?
A possible alternative is the following:
use a single config file that lists each feature (we actually already use this),
use the config file to auto-generate a module that requires each folder asynchronously. (Basically, the "module" is a factory that the app can use to load each feature.)
add the auto-generated module as an entry point (to force the chunks to be built; sketched below).
So to summarize: add a build step that generates a factory that asynchronously loads features. The problem is that this seems unnecessarily complex, and I'm wondering if there's a better way to implement the solution. Or am I on the wrong track? Is there a completely different solution?
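For what it's worth, step 3 would amount to something like this in the webpack config (the file names are assumptions):

// webpack.config.js (sketch): the generated factory is added as an entry,
// so webpack sees its require.ensure calls and emits one chunk per feature
module.exports = {
    entry: {
        app: './src/main.js',
        features: './build/feature-factory.js' // auto-generated
    },
    output: {
        path: __dirname + '/dist',
        filename: '[name].js',
        chunkFilename: '[id].chunk.js'
    }
};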
Code
Folder structure
features
    sl.graded.equation
        graded.equations.js
        graded.equation.edit.js
        graded.equation.support.js
    sl.griddable
        griddable.js
        griddable.edit.js
    //...
Config file
{
    "sl.graded.equation": [ //basically lists the feature "entry points" (not every file)
        "sl.graded.equation",
        "sl.graded.equation.edit"
    ],
    "sl.griddable": [
        "sl.griddable",
        "sl.griddable.edit"
    ]
    //...
}
Auto-generated factory
module.exports = function(feature, callback) {
    switch(feature) {
        //we can get fancy and return a promise/stream
        case 'sl.graded.equation':
            require.ensure([], function() {
                var a = require('sl.graded.equation');
                var b = require('sl.graded.equation.edit');
                callback(a, b);
            });
            break;
        //...
    }
};
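A hypothetical call site for the generated factory, just to illustrate the intended usage (the names are made up):

// hypothetical usage of the generated factory
var loadFeature = require('./feature-factory');

loadFeature('sl.graded.equation', function(equation, equationEdit) {
    // the feature's chunk has been fetched at this point;
    // wire the feature into the app here
});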
I'm creating a widget where there will be 2 types of params:
- The ones that can change depending on where we call the widget; those will be defined in the widget call:
<?php $this->widget('ext.myWidget', array(
    'someParams' => 'someValues',
)); ?>
- The ones that are the same for every call to the widget (a path to a library, a place to save an image, ...)
I would like to know the best way to define the second type of parameters.
I'm hesitating between making the user define them in the params array in the config file, or defining them in an array in the widget file.
The main advantage of the first option is that the user won't have to modify the widget file, so his modifications won't be overwritten when he updates; but these are not user-specific params, so putting them in the params array in the config file might seem strange.
So what would be the best solution? If there is a better one than the two above, please tell me!
Edit:
To clarify my thought:
My widget will generate some images that can be stored in a configurable directory. But since this directory has to be the same each time the widget is called, I don't see the point of putting this configuration into the widget call!
This is why I was thinking about putting some params into the config file, in the params array, like:
'params' => array(
    'myWidget' => array(
        'imageDir' => 'images',
    ),
),
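Inside the widget, the value would then be read with something like this (the key names are just my example):

// read the shared setting from the application params
$imageDir = Yii::app()->params['myWidget']['imageDir'];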
But I don't know if it is a good practice that an extension has some configuration values in the params array.
Another solution would be to have a config.php file in my extension directory where the user can set his own values, so he won't have to modify his main config file for the plugin. But the main drawback of this alternative is that if the user updates the extension, he'll lose his configuration.
This is why I'm looking for the best practice concerning the configuration of a widget.
Maybe what you're looking for is more of an application component than a widget. You've got a lot more power within a component than you have with a widget. It can all still live in your extensions directory, under a folder with all the relevant files, and still be easily called from anywhere, but the configuration can then be defined in the configuration files for each environment.
When you're setting up the component in your configs, just point the class parameter of the component's config array to the extensions folder instead of the components folder.
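A rough sketch of what that could look like in protected/config/main.php (the component and class names here are made up):

'components' => array(
    'myWidgetManager' => array(
        // class alias pointing into the extensions folder
        'class' => 'ext.myWidget.MyWidgetManager',
        // shared settings live here, per environment
        'imageDir' => 'images',
    ),
),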
Update:
If you do want to use a widget because there's not a lot more complexity, I believe you can provide some defaults for widgets within the application configuration; I've never used it myself, see here: http://www.yiiframework.com/doc/guide/1.1/en/topics.theming#customizing-widgets-globally.
But I've found with more complex widgets that a component serves me better in the long run, as I can call methods and retrieve options on it much more easily and cleanly, while still having everything related to the extension within one folder.
Every time a tpl file is included, the system first looks for a site-specific version of the file and falls back to a standard one if the site-specific one does not exist. So perhaps I put include "customer/main/test1.tpl". If our site is google, the system would first look for "customer/main/google_test1.tpl" and fall back to "customer/main/test1.tpl" if that file doesn't exist.
Note: Smarty 2.6.x
Did you know about the built-in template directory cascade? addTemplateDir and setTemplateDir allow you to specify multiple directories:
$smarty->setTemplateDir(array(
    'google'  => 'my-templates/google/',
    'default' => 'my-templates/default/',
));
$smarty->display('foobar.tpl');
Smarty will first try to find my-templates/google/foobar.tpl, if not found try my-templates/default/foobar.tpl. Using this, you can build a complete templating cascade.
That won't be too helpful if you have lots of elements on the same cascade-level. Say you had specific templates for google, yahoo and bing besides your default.tpl. A solution could involve the default template handler function. Whenever Smarty comes across a template it can't find, this callback is executed as a measure of last resort. It allows you to specify a template file (or template resource) to use as a fallback.
So you could {include}, {extend}, ->fetch(), or ->display() site_google.tpl. If the file exists, everything is fine. If it doesn't, your callback could replace _google with _default to fall back to the default template.
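A rough sketch of such a callback, assuming Smarty 3's registerDefaultTemplateHandler (Smarty 2.6 exposes the equivalent $default_template_handler_func property, with a slightly different signature):

$smarty->registerDefaultTemplateHandler(
    function ($type, $name, &$content, &$modified, Smarty $smarty) {
        if ($type === 'file') {
            // e.g. site_google.tpl was not found: retry the default variant
            $fallback = str_replace('_google', '_default', $name);
            foreach ((array) $smarty->getTemplateDir() as $dir) {
                if (is_file($dir . $fallback)) {
                    return $dir . $fallback; // hand Smarty the fallback's path
                }
            }
        }
        return false; // give up and let Smarty raise its usual error
    }
);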
If neither template_dir cascade nor default template handler function seems applicable, you'll want to dig into custom template resources.
I'm looking for a way to copy the order history page that we can access from My Account / My Orders, which lives at template/sales/order/history.phtml, and use its content in my own way without affecting the original. I've tried many things, such as copying the whole directory and editing the XML files related to it in order to set up the right path and make it work; unfortunately, it was a failure. I would like to know if you could give me a solution for this.
Thanks.
To use the functions of a block inside another .phtml, I'm quite sure you can use getBlock:
// grab the order history block from the current layout by its layout name
$historyBlock = $this->getLayout()->getBlock('sales.order.history');
// its order collection, as used by the core history.phtml
$orders = $historyBlock->getOrders();
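For example, in your own .phtml that collection could then be rendered roughly like this (using the order methods the core template itself uses):

<?php /* sketch: render the fetched collection in a custom template */ ?>
<?php if ($orders): ?>
    <ul>
    <?php foreach ($orders as $order): ?>
        <li><?php echo $order->getRealOrderId() . ' - ' . $order->getStatusLabel(); ?></li>
    <?php endforeach; ?>
    </ul>
<?php endif; ?>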
And to add a block in your custom module, you'll need to create an .xml file for your block and add it to your template; you'll also have to add the actual .phtml file. Take a look at the moduleCreator (http://www.magentocommerce.com/magento-connect/danieln/extension/1108/modulecreator), which handles most of this quite well.
This is by no means thorough; it's just a rough guide.