Webpack 4.4.1: performance issues with splitChunks

I'm working on an old project with a lot of code. It uses Webpack 3.8.1 and I'm trying to update to 4.4.1, and it's a real obstacle course!
The main pain point is that the project uses the CommonsChunkPlugin:
new CommonsChunkPlugin({
  name: 'common',
  minChunks: 3,
  chunks: _.without(_.keys(entry), 'ace-iframe', 'custom-theme-ace'),
}),
new CommonsChunkPlugin({
  name: 'vendors',
  minChunks(module, count) {
    return isVendorModule(module) && count >= 2;
  },
  chunks: _.without(_.keys(entry), 'ace-iframe', 'custom-theme-ace'),
})
I know that Webpack 4 no longer provides CommonsChunkPlugin. Big thanks to the articles below; they saved me hours of research:
https://gist.github.com/sokra/1522d586b8e5c0f5072d7565c2bee693
https://medium.com/webpack/webpack-4-code-splitting-chunk-graph-and-the-splitchunks-optimization-be739a861366
Thanks to these amazing links, I've replaced CommonsChunkPlugin with these lines:
optimization: {
  splitChunks: {
    cacheGroups: {
      vendors: {
        priority: 50,
        name: 'vendors',
        chunks: 'async',
        reuseExistingChunk: true,
        minChunks: 2,
        enforce: true,
        test: /node_modules/,
      },
      common: {
        name: 'common',
        priority: 10,
        chunks: 'async',
        reuseExistingChunk: true,
        minChunks: 2,
        enforce: true,
      },
    },
  },
},
With this config, the application builds correctly: chunks are created and the app runs as expected.
But the build time is really slow: more than 7 minutes!
Funny thing: if I completely remove the optimization.splitChunks configuration, the application still works perfectly, and the build time is still around 7 minutes. It's as if everything I did in optimization.splitChunks were useless.
I've tried changing the chunks property; to be honest, I don't really understand its role...
If I set it to all, the build is much quicker: around 1 minute.
But unfortunately, the files generated from my entry points don't run correctly: Webpack seems to wait for the chunks to load before executing my own code:
// Code from webpack
function checkDeferredModules() {
  var result;
  for(var i = 0; i < deferredModules.length; i++) {
    var deferredModule = deferredModules[i];
    var fulfilled = true;
    for(var j = 1; j < deferredModule.length; j++) {
      var depId = deferredModule[j];
      if(installedChunks[depId] !== 0) fulfilled = false;
    }
    // If I understand correctly, Webpack checks that deferred modules are loaded
    if(fulfilled) {
      // If so, it runs the code of my entry point
      deferredModules.splice(i--, 1);
      result = __webpack_require__(__webpack_require__.s = deferredModule[0]);
    }
  }
  return result;
}
Please tell me I'm not wrong here: Webpack seems to wait for the deferred modules to be loaded, but it never runs the code that actually loads them... How am I supposed to make this work?
In brief:
with chunks set to async: everything works, but the build time is not viable (more than 7 minutes)
with chunks set to all: the build time is fine (around 1 minute), but my code does not run ¯\_(ツ)_/¯
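If I understand correctly, with chunks set to all, every page would also need to load the generated vendors/common chunks before (or alongside) its entry chunk, otherwise the runtime sits in checkDeferredModules forever. A minimal sketch of what I think that implies (html-webpack-plugin and the file names are just an illustration, not my real config):

```javascript
// Hedged sketch, not my actual config: with chunks: 'all', each HTML page
// must include the split chunks in addition to its entry chunk.
// html-webpack-plugin (assumed installed) injects the right script tags.
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  entry: { app: './src/app.js' }, // one of the 20 entry points (illustrative)
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: { test: /node_modules/, priority: 50, name: 'vendors' },
        common: { priority: 10, name: 'common', minChunks: 2 },
      },
    },
  },
  plugins: [
    // Emits app.html with <script> tags for vendors, common and app,
    // in dependency order, so checkDeferredModules can complete.
    new HtmlWebpackPlugin({ filename: 'app.html', chunks: ['vendors', 'common', 'app'] }),
  ],
};
```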
Sorry for the long post, but if someone can help me make all of this work with a reasonable build time, that would be perfect.
Or at least help me understand how all of this is supposed to work; the official documentation is not very helpful :(
Thanks in advance!
EDIT: I've tried to continue with chunks set to async, despite the 7-minute build time.
I have 20 entry points, and if I add an import statement pulling in jQuery and jQuery-UI in one of them, the build time doubles.
If I add it to 5 files, the build crashes:
<--- Last few GCs --->
[15623:0x103000000]   222145 ms: Mark-sweep 1405.0 (1717.4) -> 1405.2 (1717.4) MB, 671.3 / 0.0 ms  allocation failure GC in old space requested
[15623:0x103000000]   222807 ms: Mark-sweep 1405.2 (1717.4) -> 1405.0 (1667.9) MB, 662.4 / 0.0 ms  last resort GC in old space requested
[15623:0x103000000]   223475 ms: Mark-sweep 1405.0 (1667.9) -> 1405.1 (1645.4) MB, 667.1 / 0.0 ms  last resort GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x1b6415c25ee1
1: fromString(aka fromString) [buffer.js:~298] [pc=0x1973a88756aa](this=0x1b6462b82311, string=0x1b642d3fe779, encoding=0x1b6462b82311)
3: from [buffer.js:177] [bytecode=0x1b6488c3b7c1 offset=11](this=0x1b644b936599, value=0x1b642d3fe779, encodingOrOffset=0x1b6462b82311)
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Out of memory... I think setting chunks to async is not the right way to solve this issue :/

Related

Always some test cases getting jasmine.DEFAULT_TIMEOUT_INTERVAL

I am creating end-to-end (e2e) tests using Protractor with Jasmine and Angular 6. I have written almost 10 test cases. They all work, but some cases always fail because of a Jasmine timeout. I have configured the timeout value as below, but I am not getting consistent results: sometimes a test case succeeds, and on the next run it fails. I have searched on Google but have not found any useful solution.
I have defined some common helpers for waiting:
waitForElement(element: ElementFinder){
  browser.waitForAngularEnabled(false);
  browser.wait(() => element.isPresent(), 100000, 'timeout: ');
}
waitForUrl(url: string){
  browser.wait(() => protractor.ExpectedConditions.urlContains(url), 100000, 'timeout')
}
And in the protractor.conf.js file I have defined:
jasmineNodeOpts: {
  showColors: true,
  includeStackTrace: true,
  defaultTimeoutInterval: 20000,
  print: function () {
  }
}
I am getting the errors below:
- Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
- Failed: stale element reference: element is not attached to the page document
(Session info: chrome=76.0.3809.100)
(Driver info: chromedriver=76.0.3809.12 (220b19a666554bdcac56dff9ffd44c300842c933-refs/branch-heads/3809#{#83}),platform=Windows NT 10.0.17134 x86_64)
I found the solution:
I had configured a 100000 ms wait timeout for each individual element lookup, while the whole-script timeout was only 20000 ms. So I followed this process:
Keep the full spec timeout greater than the sum of all the element-find timeouts. I configured defaultTimeoutInterval in jasmineNodeOpts to be greater than the sum of all the element-wait timeouts a test case performs, and then set a large value, allScriptsTimeout: 2000000, inside exports.config. That resolved my problem.
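The timeout hierarchy described above can be sketched in protractor.conf.js terms (the values are illustrative, not prescriptive):

```javascript
// Hedged sketch: the outermost timeout should be the largest, and the
// Jasmine spec timeout must exceed the sum of the element waits a spec performs.
exports.config = {
  allScriptsTimeout: 2000000, // whole-script timeout: largest of all
  jasmineNodeOpts: {
    showColors: true,
    // greater than the sum of the browser.wait(..., 100000, ...) calls
    // a single spec may perform
    defaultTimeoutInterval: 300000,
  },
};
```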
NB: I gave this answer because I think it may help others facing this kind of problem.

Increase pageSize of Generic Test Data results in SonarQube 6.2

I have imported test results into SonarQube 6.2 according to https://docs.sonarqube.org/display/SONAR/Generic+Test+Data.
I can look at the detailed test results in Sonar by navigating to the test file and clicking the "Show Measures" menu. The opened page shows the correct total of 293 tests, of which 31 failed. The test result details section, however, only shows 100 results.
This page seems to get its data through a request like: http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m
with a result of:
{
  "paging": {
    "pageIndex": 1,
    "pageSize": 100,
    "total": 293
  },
  "tests": [
    {
      "id": "AVpDK1X_-2ky3xCh91QQ",
      "name": "GuiButton:Type Checks->disabledBackgroundColor",
      "fileId": "AVpC5Jod-2ky3xCh908m",
      "fileKey": "org.sonarqube:Scripting-Tests-Publishing:dummytests/ScriptingEngine.Objects.GuiButtonTest.js",
      "fileName": "dummytests/ScriptingEngine.Objects.GuiButtonTest.js",
      "status": "OK",
      "durationInMs": 8
    },
    ...
  ]
}
From this I gather that the page size is set to 100 in the backend. Is there a way to increase it so that I can see all test results?
You can certainly call the web service with a larger page size parameter value, but you cannot change the page size requested by the UI
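A rough sketch of paging through the web service yourself (the ps/p parameter names for page size and page index are my assumption about the SonarQube web API; verify them against your server's web API documentation):

```javascript
// Hedged sketch: compute how many pages a paged web service returns,
// then fetch them all. Parameter names (ps, p) are assumptions.
function pageCount(total, pageSize) {
  return Math.ceil(total / pageSize);
}

async function fetchAllTests(baseUrl, testFileId, pageSize = 500) {
  const url = p =>
    `${baseUrl}/api/tests/list?testFileId=${testFileId}&ps=${pageSize}&p=${p}`;
  const first = await (await fetch(url(1))).json();
  const tests = [...first.tests];
  // Use the paging info the server actually applied, not the requested size.
  const pages = pageCount(first.paging.total, first.paging.pageSize);
  for (let p = 2; p <= pages; p++) {
    const next = await (await fetch(url(p))).json();
    tests.push(...next.tests);
  }
  return tests;
}
```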

Can you run multiple instances of webpack at once?

Let's say I have an array of components that need to emit their own source; since they won't always be included together, each needs to be its own bundle. The idea is that there will eventually be hundreds of components, and they can be cherry-picked whenever.
However, when using webpack-stream with gulp, even though I'm dynamically registering the tasks and I can see they run sequentially, it only runs one webpack stream by the looks of it, and seems to output the bundle from the SECOND component into the first component's directory.
It's a pretty simple build process, it's an array of components like so:
var components = [
  { name : 'a', src : './foo/bar/entrya.js', dest : '/dir/a' },
  { name : 'b', src : './foo/bar/entryb.js', dest : '/dir/b' },
];
Relatively simple right? Then to register the tasks, it's something like this:
components.forEach(component => {
  gulp.task(component.name, cb => {
    function task(component) {
      return gulp.src(component.src)
        .pipe($.webpackStream(webpackConfig))
        .pipe(gulp.dest(component.dest));
    }
    return task.apply(this, [component, cb]);
  });
});
This is an incredibly dumbed-down version of what I have, but it's pretty much the same thing: dynamically generate the tasks, then later run them sequentially.
webpack-stream can handle multiple entry points and multiple builds per multiple entry points.
var gulp = require('gulp');
var webpack = require('webpack-stream');
gulp.task('build', function() {
  return gulp.src(['src/entry.js']) // entry.js file doesn't need to exist
    .pipe(webpack({
      entry: {
        a : __dirname + "/foo/bar/entrya.js",
        b : __dirname + "/foo/bar/entryb.js"
      },
      output: {
        filename: '[name].js'
      }
    }))
    .pipe(gulp.dest('dir/'));
});
Your build is not running in parallel; you are just registering tasks sequentially. However, you can run those tasks in parallel as child processes. One good option is parallel-webpack.

Jest silently ignores errors

Given the following test.js
var someCriticalFunction = function() {
  throw 'up';
};
describe('test 1', () => {
  it('is true', () => {
    expect(true).toBe(true);
  })
});
describe('test 2', () => {
  it('is ignored?', () => {
    someCriticalFunction();
    expect(true).toBe(false);
  })
});
Running Jest will output
Using Jest CLI v0.9.2, jasmine2
PASS __tests__/test.js (0.037s)
1 test passed (1 total in 1 test suite, run time 0.795s)
Which pretty much gives you the impression that everything is great. So how do I avoid shooting myself in the foot while writing tests? I just spent an hour writing tests, and now I wonder how many of them actually ran; I will certainly not count all of them and compare with the number in the report.
Is there a way to make Jest fail as loudly as possible when an error is thrown as part of the test suite, rather than in the tests themselves (like preparation code)? Putting it inside beforeEach doesn't change anything.
This is fixed in jest-cli 0.9.3: https://github.com/facebook/jest/pull/812
Which for some reason is unpublished again (I had it installed minutes ago). Anyway, thrown strings are now caught. You should never throw strings anyway, and thanks to my test suite I found one of the last places where I actually still threw strings...
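As a general guard, throwing Error objects instead of bare strings gives any runner a stack trace to report, so the failure is much harder to swallow silently; a minimal illustration:

```javascript
// Throwing an Error (not a string) preserves the stack trace,
// which test runners rely on to report the failure loudly.
function someCriticalFunction() {
  throw new Error('up'); // instead of: throw 'up'
}

let caught;
try {
  someCriticalFunction();
} catch (e) {
  caught = e;
}
// caught is an Error carrying both a message and a stack trace
```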

Calling Win32 functions returning strings with alien in Lua

I'm trying to use alien to call Win32 functions. I tried this code, but it crashes and I don't understand why.
require( "alien" )
local f = alien.Kernel32.ExpandEnvironmentStringsA
f:types( "int", "string", "pointer", "int" )
local buffer = alien.buffer( 512 )
f( "%USERPROFILE%", 0, 512 )
It is a good question, as it is for me an opportunity to test out Alien...
If you don't mind, I'll take the opportunity to explain how to use Alien, so people like me (not very used to require) who stumble upon this thread can get started...
You give the link to the LuaForge page; I went there and saw I needed LuaRocks to get it. :-(
I should install the latter someday, but I chose to skip that for now. So I went to the repository and downloaded alien-0.4.1-1.win32-x86.rock.
I found out it was a plain Zip file, which I could unzip as usual.
After fumbling a bit with require, I ended up hacking the paths in the Lua script for a quick test. I should create LUA_PATH and LUA_CPATH in my environment instead; I will do that later.
So I took alien.lua, core.dll and struct.dll from the unzipped folders and put them in a directory named Alien in a common library repository.
And I added the following lines to the start of my script (bad hack warning!):
package.path = 'C:/PrgCmdLine/Tecgraf/lib/?.lua;' .. package.path
package.cpath = 'C:/PrgCmdLine/Tecgraf/lib/?.dll;' .. package.cpath
require[[Alien/alien]]
Then I tried it with a simple, no-frills function with immediate visual result: MessageBox.
local mb = alien.User32.MessageBoxA
mb:types{ 'long', 'long', 'string', 'string', 'long' }
print(mb(0, "Hello World!", "Cliché", 64))
Yes, I got the message box! But upon clicking OK, Lua crashed, probably like it does for you.
After a quick scan of the Alien docs, I found the (unnamed) culprit: we need to use the stdcall calling convention for the Windows API:
mb:types{ ret = 'long', abi = 'stdcall', 'long', 'string', 'string', 'long' }
So it was trivial to make your call work:
local eev = alien.Kernel32.ExpandEnvironmentStringsA
eev:types{ ret = "long", abi = 'stdcall', "string", "pointer", "long" }
local buffer = alien.buffer(512)
eev("%USERPROFILE%", buffer, 512)
print(tostring(buffer))
Note I put the buffer parameter in the eev call...
