Dynamically require one chunk per directory for on-demand (async) files in webpack?

Problem
This is how the app behaves:
every folder corresponds to a dynamic "feature" in the app.
features are independent of each other. (Inside each folder, none of the files have dependencies on other folders. However, each folder may use the same common libraries like jquery.)
not all features are needed all the time
within each folder, the files are not necessarily dependent on each other (but it makes sense to bundle them together because that's how a "feature" works)
the number of features is not fixed ... so a manual process where you have to manually update various configurations/files each time you add/remove a feature is NOT ideal
Desired Solution
Given what I described, what seems to make sense is to create chunks for each feature/folder and then load each chunk asynchronously (on demand) as needed by the app.
The problem is that this doesn't seem easy with webpack. I'd like to be able to do this
require.ensure([], function () {
    var implementation = require('features/*/' + feature);
    //...
});
where each feature (folder) goes in one chunk.
Unfortunately, if I'm understanding require.context correctly, that's not possible.
An alternative?
A possible alternative is the following:
use a single config file that lists each feature (we actually already use this),
use the config file to auto-generate a module that requires each folder asynchronously. (Basically, the "module" is a factory that the app can use to load each feature.)
add an entry point to the auto-generated module (force the chunks to be built).
So to summarize: add a build step that generates a factory that asynchronously loads features. The problem is that this seems unnecessarily complex, and I'm wondering if there's a better way to implement the solution. Or am I on the wrong track? Is there a completely different solution?
Code
Folder structure
features
sl.graded.equation
graded.equations.js
graded.equation.edit.js
graded.equation.support.js
sl.griddable
griddable.js
griddable.edit.js
//...
Config file
{
"sl.graded.equation": [ //basically lists the feature "entry points" (not every file)
"sl.graded.equation",
"sl.graded.equation.edit"
],
"sl.griddable": [
"sl.griddable",
"sl.griddable.edit"
]
//...
}
Auto-generated factory
module.exports = function (feature, callback) {
    switch (feature) {
        //we can get fancy and return a promise/stream
        case 'sl.graded.equation':
            require.ensure([], function () {
                var a = require('sl.graded.equation');
                var b = require('sl.graded.equation.edit');
                callback(a, b);
            });
            break;
        //...
    }
};
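Since the factory is auto-generated, the build step itself can be a small Node script that reads the config file and emits the factory source. The sketch below is an assumption about what that generator could look like, not part of the actual build; writing the output file is left to the caller:

```javascript
// Sketch of the generation step: turn the feature config (the JSON
// shown above) into the source of the factory module. Nothing
// webpack-specific runs at build time; this is plain string assembly.
function generateFactory(config) {
  var cases = Object.keys(config).map(function (feature) {
    var requires = config[feature].map(function (mod, i) {
      return "        var m" + i + " = require('" + mod + "');";
    }).join("\n");
    var args = config[feature].map(function (_, i) { return "m" + i; }).join(", ");
    return "    case '" + feature + "':\n" +
           "      require.ensure([], function () {\n" +
           requires + "\n" +
           "        callback(" + args + ");\n" +
           "      });\n" +
           "      break;";
  }).join("\n");
  return "module.exports = function (feature, callback) {\n" +
         "  switch (feature) {\n" + cases + "\n  }\n};\n";
}

var config = {
  "sl.graded.equation": ["sl.graded.equation", "sl.graded.equation.edit"],
  "sl.griddable": ["sl.griddable", "sl.griddable.edit"]
};
console.log(generateFactory(config));
```

Running this against the config file would regenerate the factory whenever a feature is added or removed, which is exactly the manual step the question is trying to avoid.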

Related

Unable to get webpack to fully ignore unnecessary JSX components

After spending a few years locked into an ancient tech stack for a project, I'm finally getting a chance to explore more current frameworks and dip a toe into React and Webpack. So far, it's for the most part been a refreshing and enjoyable experience, but I've run across some difficulty with Webpack that I'm hoping the hive-mind can help resolve.
I've been poring over the Webpack 2.0 docs and have searched SO pretty exhaustively, and come up short (I've only turned up this question, which is close, but I'm not sure it applies). This lack of information out there makes me think that I'm looking at one of two scenarios:
What I'm trying to do is insane.
What I'm trying to do is elementary to the point that it should be a no-brainer.
...and yet, here I am.
The short version of what I'm looking for is a way to exclude certain JSX components from being included in the Webpack build/bundle, based on environment variables. At present, regardless of what the dependency tree should look like based on what's being imported from the entry js forward, Webpack appears to be wanting to bundle everything in the project folder.
I kind of assume this is default behavior, because it makes some sense -- an application may need components in states other than the initial state. In this case, though, our 'application' is completely stateless.
To give some background, here are some cursory requirements for the project:
Project contains a bucket of components which can be assembled into a page and given a theme based on a config file.
Each of these components contains its own modular CSS, written in SASS.
Each exported page is static, and the bundle actually gets removed from the export directory. (Yes, React is a bit of overkill if the project ends at simply rendering a series of single, static, pages. A future state of the project will include a UI on top of this for which React should be a pretty good fit, however -- so best to have these components written in JSX now.)
CSS is pulled from the bundle using extract-text-webpack-plugin - no styles are inlined directly on elements in the final export, and this is not a requirement that can change.
As part of the page theme information, SASS variables are set which are then used by the SASS for each of the JSX components.
Only SASS variables for the JSX components referenced in a given page's config file are compiled and passed through sass-loader for use when compiling the CSS.
Here's where things break down:
Let's say I have 5 JSX components, conveniently titled component_1, component_2, component_3, and so on. Each of these has a matching .scss file with which it is associated, included in the following manner:
import React from 'react';
import styles from './styles.scss';

module.exports = React.createClass({
    propTypes: {
        foo: React.PropTypes.string.isRequired,
    },
    render: function () {
        return (
            <section className='myClass'>
                <Stuff {...this.props} />
            </section>
        );
    },
});
Now let's say that we have two pages:
page_1 contains component_1 and component_2
page_2 contains component_3, component_4, and component_5
These pages both are built using a common layout component that decides which blocks to use in a manner like this:
return (
    <html lang='en'>
        <body>
            {this.props.components.map((object, i) => {
                const Block = component_templates[object.component_name];
                return <Block key={i} {...PAGE_DATA.components[i]} />;
            })}
        </body>
    </html>
);
The above iterates through an object containing my required JSX components (for a given page), which is for now created in the following manner:
load_components() {
    let component_templates = {};
    let get_component = function (component_name) {
        return require(`../path/to/components/${component_name}/template.jsx`);
    };
    for (let i = 0; i < this.included_components.length; i++) {
        let component = this.included_components[i];
        component_templates[component] = get_component(component);
    }
    return component_templates;
}
So the general flow is:
Collect a list of included components from the page config.
Create an object that performs a require for each such component, and stores the result using the component name as a key.
Step through this object and include each of these JSX components into the layout.
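Stripped of webpack, the three-step flow above is just a keyed lookup. The sketch below simulates it with a plain function standing in for the dynamic require (purely illustrative; the real code requires actual JSX modules):

```javascript
// `fake_require` stands in for the dynamic require call and returns a
// placeholder object instead of a real component.
function fake_require(component_name) {
  return { name: component_name };
}

function load_components(included_components) {
  var component_templates = {};
  included_components.forEach(function (component) {
    component_templates[component] = fake_require(component);
  });
  return component_templates;
}

// At run-time, only the components listed in the page config end up in
// the object -- but webpack must still bundle every possible match.
var templates = load_components(['component_1', 'component_2']);
console.log(Object.keys(templates)); // [ 'component_1', 'component_2' ]
```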
So, if I just follow the dependency tree, this should all work fine. However, Webpack is attempting to include all of our components, regardless of what the layout is actually loading. Due to the fact that I'm only loading SASS variables for the components that are actually used on a page, this leads to Webpack throwing undefined variable errors when the sass-loader attempts to process the SASS files associated with the unused modules.
Something to note here: The static page renders just fine, and even works in Webpack's dev server... I just get a load of errors and Webpack throws a fit.
My initial thought is that the solution for this is to be found in the configuration for the loader I'm using for the JSX files; maybe I just need to tell the loader what not to load, if webpack is trying to load everything. I'm using `babel-loader` for that:
{
    test: /\.jsx?$/,
    loader: 'babel-loader',
    exclude: [
        './path/to/components/component_1/',
    ],
    query: { presets: ['es2015', 'react'] }
},
The exclude entry there is new, and does not appear to work.
So that's kind of a novella, but I wanted to err on the side of too much information, as opposed to being scant. Am I missing something simple, or trying to do something crazy? Both?
How can I get unused components to not be processed by Webpack?
TL;DR
Don't use an expression in require and use multiple entry points, one for each page.
Detailed answer
Webpack appears to be wanting to bundle everything in the project folder.
That is not the case; webpack only includes what you import, which is determined statically. In fact, many people who are new to webpack are confused at first that it doesn't just include the entire project. But in your case you have hit kind of an edge case. The problem lies in the way you're using require, specifically this function:
let get_component = function (component_name) {
    return require(`../path/to/components/${component_name}/template.jsx`);
};
You're trying to import a component based on the argument to the function. How is webpack supposed to know at compile time which components should be included? Well, webpack doesn't do any program flow analysis, and therefore the only possibility is to include all possible components that match the expression.
You're sort of lucky that webpack allows you to do that in the first place, because passing just a variable to require will fail. For example:
function requireExpression(component) {
    return require(component);
}
requireExpression('./components/a/template.jsx');
This will give you the following warning at compile time, even though it is easy to see which component should be required (and it later fails at run-time):
Critical dependency: the request of a dependency is an expression
But because webpack doesn't do program flow analysis it doesn't see that, and even if it did, you could potentially use that function anywhere even with user input and that's essentially a lost cause.
For more details see require with expression of the official docs.
Webpack is attempting to include all of our components, regardless of what the layout is actually loading.
Now that you know why webpack requires all the components, it's also important to understand why your idea is not going to work out and is, frankly, overly complicated.
If I understood you correctly, you have a multi-page application where each page should get a separate bundle, which only contains the necessary components. But what you're really doing is using only the needed components at run-time, so technically you have a bundle with all the pages in it but you're only using one of them, which would be perfectly fine for a single page application (SPA). And somehow you want webpack to know which one you're using, which it could only know by running it, so it definitely can't know that at compile time.
The root of the problem is that you're making a decision at run-time (in the execution of the program), instead this should be done at compile time. The solution is to use multiple entry points as shown in Entry Points - Multi Page Application. All you need to do is create each page individually and import what you actually need for it, no fancy dynamic imports. So you need to configure webpack that each entry point is a standalone bundle/page and it will generate them with their associated name (see also output.filename):
entry: {
    pageOne: './src/pageOne.jsx',
    pageTwo: './src/pageTwo.jsx',
    // ...
},
output: {
    filename: '[name].bundle.js'
}
It's that simple, and it even makes the build process easier, as it automatically generates pageOne.bundle.js, pageTwo.bundle.js and so on.
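Purely to illustrate the `[name]` substitution described above, the mapping from entry keys to bundle file names can be sketched like this (this is not webpack internals, just the observable naming rule):

```javascript
// Each key of the entry object yields one bundle file; [name] in
// output.filename is replaced by that key.
var entry = {
  pageOne: './src/pageOne.jsx',
  pageTwo: './src/pageTwo.jsx'
};
var filenameTemplate = '[name].bundle.js';

function bundleNames(entryConfig, template) {
  return Object.keys(entryConfig).map(function (name) {
    return template.replace('[name]', name);
  });
}

console.log(bundleNames(entry, filenameTemplate));
// [ 'pageOne.bundle.js', 'pageTwo.bundle.js' ]
```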
As a final remark: Try not to be too smart with dynamic imports (pretty much avoid using expressions in imports completely), but if you decide to make a single page application and you only want to load what's necessary for the current page you should read Code Splitting - Using import() and Code Splitting - Using require.ensure, and use something like react-router.

include config file from settings.gradle

I have two separate projects which I want to keep separate. However, sometimes I want to be able to combine them, briefly, into a composite build. Sometimes it's nice if I can do that for a while without affecting other devs. So, I want something like this:
My main settings.gradle, which would be checked into version control, would look like this:
// normal stuff
if (new File('extra-settings.gradle').exists()) {
    // This is what I don't know how to do
    includeOtherSettingsFile('extra-settings.gradle')
}
Then extra-settings.gradle, which is not checked into source control, might look like this:
includeBuild('../anxml') {
    dependencySubstitution {
        substitute module('com.analyticspot.ml:framework') with project(':framework')
    }
}
This way I could add an extra-settings.gradle file to make a temporary composite build, keep it that way for several commits without affecting other programmers or worrying that I'd accidentally commit my temporary changes to settings.gradle, and then, when I'm done, just delete it.
I know about Prezi Pride and it seems great but won't work for our current build (we use buildSrc, rootDir, etc.)
Can it be done?
settings.gradle is executed against a Settings instance which has an apply(Map) method so I'm guessing you can do:
// use Settings.getRootDir() so that it doesn't matter which directory you are executing from
File extraSettings = new File(rootDir, 'extra-settings.gradle')
if (extraSettings.exists()) {
    apply from: extraSettings
}

Modular design to reduce coupling in go

I am building a RESTful API using golang.
I am using gin-gonic framework and mgo for mongodb connection.
I am trying to build this in a modular way... (I don't think I know exactly what I am talking about and trying to do. XD LOL)
Like all other software, my project has multiple collections, and the number of collections will increase in the future.
And I would like to separate each module as much as possible from other modules so that I could
incrementally add modules
reuse the code for future projects and simply add files to add feature (in this case, I probably need to make each module into a separate package)
Here is my current project structure.
main.go - contains main function
database.go - small function that opens connection with Db
users.go - users collection related file
users-controller.go - contains gin.HandlerFunc for users collection
contents.go - contents collection related file
contents-controller.go - contains gin.HandlerFunc for contents collection
Code
https://gist.github.com/letsdevus/5121381be7cb1065e62ae403f14cd562
users and contents modules are totally independent.
Removing contents*.go files from project will cleanly disable contents module in the project.
Adding some non-dependent modules is very easy as well. No code to touch but simply adding files is all I need to do.
I would give myself kudos for that.
But now I am trying to add a module that has to interact with other modules.
It's called 'activities' module.
This module has to interact with other modules to track of what activities are occurring.
For example, when insert happens on users collection, activities module needs to know it happened so that it can record the activity.
A simple way of implementing the activities module is to add a line, activities.insert("NewUser", Id), after users.Insert() is called.
The problem of this approach is very obvious.
I have to do this for all... the collections.
And more importantly, when I remove activities files from the project, I need to comment out all the activities related lines from other modules.
I want you to hear my plan to tackle this problem and advise me if it's viable or if there is any other better way.
My current possible solution: event manager
eventmanager.go will be added in the project as one of the base files like main.go and database.go.
eventmanager.go
package main
type EventFunc func()

type EventManager struct {
    events map[string][]EventFunc
}

func (i *EventManager) AddEvent(eventName string, fn EventFunc) {
    if i.events == nil { // lazy init: writing to a nil map panics
        i.events = make(map[string][]EventFunc)
    }
    i.events[eventName] = append(i.events[eventName], fn)
}

func (i *EventManager) RunEvents(eventName string) {
    for _, fn := range i.events[eventName] {
        fn()
    }
}

var eventManager EventManager // Globally available
And in users.go Insert() function, I add
eventManager.RunEvents("usersPostInsert")
after dbcUsers.Insert(&user).
And in init() in activities.go file, I add
eventManager.AddEvent("usersPostInsert", newUserActivity)
At the end of activities.go:
func newUserActivity() {
    // Do something. For now just print a message
    log.Print("New User is Inserted!")
}
This way when I take off activities module, no problem occurs.
I hope you get the idea of what I am trying to do.
I just really want to know if this is a more than an OK way of tackling the problem.
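Put together, the event-manager idea can be exercised as a self-contained sketch. This follows the snippets above but simulates the database insert hook with a print; the registration that would live in activities.go's init() is done explicitly in main:

```go
package main

import "fmt"

type EventFunc func()

type EventManager struct {
	events map[string][]EventFunc
}

func (m *EventManager) AddEvent(eventName string, fn EventFunc) {
	if m.events == nil { // lazy init: writing to a nil map panics
		m.events = make(map[string][]EventFunc)
	}
	m.events[eventName] = append(m.events[eventName], fn)
}

func (m *EventManager) RunEvents(eventName string) {
	for _, fn := range m.events[eventName] {
		fn()
	}
}

var eventManager EventManager // globally available

// What activities.go would register in its init().
func newUserActivity() {
	fmt.Println("New User is Inserted!")
}

func main() {
	eventManager.AddEvent("usersPostInsert", newUserActivity)
	// What users.go would call after dbcUsers.Insert(&user).
	eventManager.RunEvents("usersPostInsert")
	// With nothing registered (activities module removed),
	// RunEvents is a harmless no-op:
	eventManager.RunEvents("contentsPostInsert")
}
```

Note the nil check in AddEvent: reading from a nil map is fine in Go, but assigning into one panics, so the map must be initialized somewhere (lazily here, or in a constructor).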
p.s. If you are about to down vote this question because it's more like a codereview, just leave a comment instead. (somehow down vote makes me really uncomfortable lol)

Optimizing RequireJS for HTTP/2: Does modifying a hashed file matter here?

I am trying to optimize a RequireJS project first for HTTP/2 and SPDY, and secondly for HTTP/1.x. Rather than concatenating all of my RequireJS modules into one big file, I am hacking r.js to resave the modules that would have been concatenated as separate files. This will be better for the new protocols, because each file will be cached individually, and a change to one file won't invalidate the caches of all the other files. For the old protocols, I will pay a penalty in additional HTTP requests, but I can at least avoid subsequent rounds of requests by appending revision hashes to my files (i.e. "module.js" becomes "module.89abcdef.js") and setting a long max-age on them, so I want to do that.
r.js adds names to previously-anonymous modules. That means that the file "module.js" with this definition:
define(function () { ... })
Becomes this:
define("module.js", function () { ... })
I want this, because I will be loading some of my modules via <script> elements (to prevent RequireJS HTTP requests acting as a bottleneck to dependency resolution; I can instead utilize r.js's topological sorting to generate <script> elements which the browser can start loading as soon as it parses them). However, I also plan to load some modules dynamically (i.e., via RequireJS's programmatic insertion of <script> elements). This means RequireJS may at some point request a module via a URL.
Here is the problem I have run into: I am generating a RequireJS map configuration object to map my modules' original module IDs to their revision-hashed module IDs, so that when RequireJS loads a module dynamically, it requests the revision-hashed file name. But the modules it loads, which are now named modules thanks to r.js, have the un-revisioned name of the module, because I don't revision the files until after I minify them, which I only do after r.js generates them (thus r.js doesn't know about the revisions).
So the file "module.89abcdef.js" contains the module:
define("module.js", function () { ... })
When these named modules execute, define will register a module with the wrong name. When other modules require "module.js", this name will be mapped to "module.89abcdef.js" (thanks to the map configuration); but since no module with that name was actually defined by the time the onload event fired for the <script> whose src is "module.89abcdef.js", the module they receive is undefined.
To be faithful to the new file name, the file "module.89abcdef.js" should probably instead contain the definition:
define("module.89abcdef.js", function () { ... })
So, I believe the solution to my problem will be to replace the unrevisioned name of the module in the r.js-generated file with the revisioned name. However, this means the hash of the file is now invalidated. My question: In this situation, does it matter that the hash of the file is invalid?
I think it's impossible that the module name in the source code could be faithful to the file name, because if the module name had a hash, well, then the hash of the file would be different, so the module name would become the new hash, and so on recursively.
I fear there might be a case where the unfaithful hash would cause the file to still be cached even when it'd actually changed, but I cannot think of one. It seems that appending the same hash for the file's original contents to the file's name and the module's name should always still result in a unique file.
(My lengthy explanation is an attempt to eliminate certain "workaround"-type answers. If I weren't loading modules dynamically, then I wouldn't need to use map and I wouldn't have a file-name-vs-module-name mismatch; but I do load modules dynamically, so that workaround won't work. I also believe I would not need to revision files if I were only serving clients on newer protocols [I could just use ETags as there would be no additional overhead to make the comparison], but I do serve clients on the older protocols, so that workaround doesn't work either. And finally, I don't want to run r.js after revisioning files, because I only want to revision after minifying the files, in case minification maps two sources to the same output, in which case I could avoid an unnecessary cache invalidation.)

Best Way To Define Params For A Widget?

I'm creating a widget where there will be 2 types of params:
- The ones that can change depending on where we call the widget; those will be defined in the widget call:
<?php $this->widget('ext.myWidget', array(
'someParams' => 'someValues',
)); ?>
- The ones that are the same for every call to the widget (a path to a library, a place to save an image, ...)
I would like to know what is the best way to define the second type of parameters.
I'm hesitating between making the user define it in the params array in the config file and defining it in an array in the widget file.
The main advantage of the first option is that the user won't have to modify the widget file, so his modifications won't be overwritten in case of an update; but these are not user-specific params, so putting them in the params array in the config file might seem strange.
So what would be the best solution? If there is another one better than the two above, please tell me!
Edit:
To clarify my thought:
My widget will generate some images that can be stored in a configurable directory. But since this directory has to be the same each time the widget is called, I don't see the point of putting this configuration into the widget call!
This is why I was thinking about putting some params into the config file, in the params array like:
'params' => array(
    'myWidget' => array(
        'imageDir' => 'images',
    ),
)
But I don't know if it is a good practice that an extension has some configuration values in the params array.
Another solution would be to have a config.php file in my extension directory where the user can set his own values, so he won't have to modify his main config file for the plugin. But the main drawback of this alternative is that if the user updates the extension, he'll lose his configuration.
This is why I'm looking for the best practice concerning the configuration of a widget
Maybe what you're looking for is more of an application component than a widget. You've got a lot more power within a component than you have with a widget. It can all still live in your extensions directory, under a folder with all the relevant files, and still be easily called from anywhere, but the configuration can then be defined in the configuration files for each environment.
When you're setting up the component in your configs, just point the class parameter to the extensions folder instead of the components folder.
Update:
If you do want to use a widget because there's not a lot more complexity, I believe you can provide some defaults for widgets within the application configuration; I've never used it myself, but see here: http://www.yiiframework.com/doc/guide/1.1/en/topics.theming#customizing-widgets-globally.
But I've found with more complex widgets that a component serves me better in the long run as I can call methods and retrieve options on it much easier and cleaner, but still have everything related to the extension within one folder.
