I am running 7 npm commands in parallel using npm-run-all.
Running each command individually takes around 3 minutes. However, when I run all the commands in parallel, it takes 8 minutes and sometimes more than that.
How can I execute these npm scripts in parallel, utilizing all the processor cores?
I tried using concurrently as well. I am also considering a shell script, but whatever I use has to work on Windows too.
You could try worker-farm, parallel-webpack, or concurrently:
https://blog.box.com/blog/how-we-improved-webpack-build-performance-95
https://github.com/rvagg/node-worker-farm
https://github.com/trivago/parallel-webpack
https://www.npmjs.com/package/concurrently
I am not an expert, just putting thoughts together; maybe an expert can comment on these.
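For what it's worth, a minimal sketch of how concurrently could be invoked from the command line (the build:a and build:b script names are hypothetical):

# run two npm scripts at the same time; output from each is prefixed and interleaved
npx concurrently "npm run build:a" "npm run build:b"

Newer versions also understand the shorthand npx concurrently "npm:build:*" to run every script whose name matches the pattern.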
If you want to run multiple npm commands in parallel you can use npm-run-all. It installs into your project and is available after your dependencies have been restored (unless, of course, you install it globally).
If you want to run a command against multiple files in parallel you can use glob-exec. You supply it with a glob pattern to match one or more files, and glob-exec will execute the command against each file.
Both packages have the option to run in parallel or in sequence. I've used the latter to shave multiple minutes off my build times.
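A minimal package.json sketch of the npm-run-all approach (the build:* script names and webpack configs are hypothetical; run-p is the parallel runner that ships with npm-run-all):

{
  "scripts": {
    "build:app": "webpack --config app.config.js",
    "build:vendor": "webpack --config vendor.config.js",
    "build": "run-p build:*"
  }
}

Running npm run build then executes every build:* script in parallel; run-s would run them in sequence instead.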
I have set up multiple Python environments using Anaconda.
Usually, to run a script "manually", I would open a command line and then type:
activate my-env
python path/to/my/script.py
Fine.
Now I am trying to run a script automatically using a scheduler, and I was wondering what the difference is between:
Writing a batch file which activates the environment and then executes the script (like in the snippet above)
Calling the Python executable from the environment directly (within the envs/my-env/ directory), like below:
/path/to/envs/my-env/python.exe path/to/my/script.py
Both seem to work fine. Is there any difference?
I don't claim to be an expert, but here's my 2 cents.
For small scripts, no, there isn't a difference.
You should notice a difference when calling external modules / packages. conda activate alters the system path to change how the command shell searches for the appropriate capabilities.
If you supply a full path to an interpreter and the full path to an isolated script, then the shell doesn't need to do a lookup, as an explicit path takes priority over the PATH search. This means you could be in a situation where the interpreter can see the script but cannot see its dependencies.
If you follow the conda activate process, and the environment is correctly packaged, then the shell will be able to trace any additional resources.
EDIT: The idea behind this is portability. If an admin has been careful in setting up a system, then scripts should have the appropriate visibility, i.e. see everything in its environment plus everything in the main system installation.
It's possible to full-path every call to an interpreter and a script or package location, but then what happens when you need to move it to another machine? You would need to spend a lot of time setting everything up exactly as it was before. On the other hand, if you follow the packaging process, the system path will trace everything for you.
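As a concrete illustration, a scheduler-friendly batch file might take either of these two forms (all paths and the environment name my-env are hypothetical; activate.bat lives under the Anaconda Scripts directory):

:: Option 1: activate the environment, then run the script
call C:\path\to\Anaconda3\Scripts\activate.bat my-env
python C:\path\to\my\script.py

:: Option 2: call the environment's interpreter directly (no PATH changes)
C:\path\to\envs\my-env\python.exe C:\path\to\my\script.py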
Simply check the PATH variable in your environment. After conda activation it has been extended by:
\Anaconda3;
\Anaconda3\Library\mingw-w64\bin;
\Anaconda3\Library\usr\bin;
\Anaconda3\Library\bin;
\Anaconda3\Scripts;
\Anaconda3\bin;
This doesn't make much of a difference if you are just using the standard library in your code. However, if you rely on external packages like pandas, it's a prerequisite for the modules to be found.
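You can observe this yourself from a Windows command prompt (a quick sketch; the environment name is hypothetical):

:: before activation
echo %PATH%
:: activate, then look again
call activate my-env
echo %PATH%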
I have a makefile that has a ton of subsystems, and I am able to build it with the -j flag so that it goes much faster and builds the different recipes in parallel.
This seems to be working fine for now, but I am not sure if I am missing some needed dependencies and am just "getting lucky" with the order in which these are being built.
Is there a way I can randomize the order in which recipes are run, while still following all the dependencies that I have defined?
You can control the number of jobs Make is allowed to run in parallel with the -j command line option. By varying it you can "randomize" which recipes execute simultaneously and catch some issues in your makefiles.
I'll duplicate the answer from https://stackoverflow.com/a/72722756/5610270 here:
Next release of GNU make will have --shuffle mode. It will allow you to execute prerequisites in random order to shake out missing dependencies by running $ make --shuffle.
The feature was recently added in https://savannah.gnu.org/bugs/index.php?62100 and so far is available only in GNU make's git tree.
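A quick sketch of how this could be combined with a parallel build (assuming a make new enough to support --shuffle; the -j8 job count is arbitrary):

# parallel build with prerequisites visited in random order
make -j8 --shuffle
# use a fixed seed so a failing order can be reproduced later
make -j8 --shuffle=12345
# reverse prerequisite order, another quick way to flush out missing deps
make -j8 --shuffle=reverse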
I was wondering: is it possible to run a configure script from another one? What I have is a situation where my own project uses autotools for configuration and make, so before any build a configure script is run (as usual). But now I want to add another lib to my project which uses the same build principle (it is necessary to run a configure script before building the project). So instead of making my future users run two configure scripts, is there a way to automate this (but without using a shell script: bash, Perl, etc.)?
Can this be done, and if so, how?
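Autoconf has a built-in hook for this situation, AC_CONFIG_SUBDIRS, which makes the generated configure script run the configure script of a bundled subdirectory. A minimal configure.ac sketch (the project name, version, and the libs/mylib path are all hypothetical):

AC_INIT([myproject], [1.0])
# run the bundled library's own configure script as part of ours
AC_CONFIG_SUBDIRS([libs/mylib])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT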
During my attempts to automate some tasks with rake on Ubuntu, I've encountered scenarios that require packages that might not already exist on the target machine. What is a good way to check whether a certain package is already installed on the system and respond accordingly?
For example, I'd like to run 'npm start' within a certain task, but I'd want to know whether npm has already been installed on the system, so I can give the user the correct error message. I'm also fine doing it with thor, if that's possible at all.
You can run system commands from Ruby scripts using the Kernel.system method. Consider something like the following:
fail unless system('which npm')
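A slightly fuller sketch of what this could look like inside a Rake task (the task name is hypothetical, and `which` assumes a Unix-like system such as Ubuntu):

# Rakefile
desc 'Start the app, but fail early with a clear message if npm is missing'
task :start do
  # Kernel.system returns true only if the command exits with status 0
  unless system('which npm > /dev/null 2>&1')
    abort 'npm is not installed; please install Node.js/npm first'
  end
  sh 'npm start'
end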
Recently, while getting acquainted with the Mocha javascript testing framework, I came across this section that I didn't understand:
Makefiles
Be kind and don't make developers hunt around in your docs to figure out how to run the tests; add a make test target to your Makefile:
test:
	./node_modules/.bin/mocha --reporter list

.PHONY: test
This is hardly descriptive, and not very helpful if you don't know what a Makefile is.
So, what is a Makefile? And how is it different from a Gruntfile, or from using npm run?
Makefile
A Makefile (usually with no file extension) is a configuration file used by the Unix make tool.
Quoted from one of the best introductions to Make I have found, one that I highly recommend you read if you are interested in knowing more about make specifically, and task runners in general:
Make is the original UNIX build tool. It existed a long time before gulp or grunt. Loved by some, loathed by others, it is at least worth being aware of how make works, what its strengths are, and where it falls short.
Make is available on UNIX-like systems. That means OSX, BSD and Linux. It exposes any system command, meaning that you can run system commands and execute arbitrary scripts. There is no doubt that tools like gulp and grunt are inspired by, or at least a reaction to, make.
To use make you create a Makefile and then run make [what to run] from the terminal.
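To make that concrete, here is a small, hypothetical Makefile with two targets, invoked as make test or make lint (the eslint path is an assumption, following the mocha example above):

# Note: recipe lines in a Makefile must be indented with a tab, not spaces.
test:
	./node_modules/.bin/mocha --reporter list

lint:
	./node_modules/.bin/eslint .

# .PHONY marks targets that don't correspond to files of the same name
.PHONY: test lint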
Gruntfile.js
A Gruntfile.js is a JavaScript configuration file used by the Grunt.js tool.
The newer Node.js version of make, if you will, is Grunt.js, which is cross-platform (it works on Windows) and written in JavaScript. Both can do similar things, like concatenating files, minifying CSS, and running tests, and there is a lot of information on the web about Grunt.
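A minimal, hypothetical Gruntfile.js, just to show the shape of the configuration (the hello task is made up for illustration):

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    // plugin configuration (concat, uglify, etc.) would go here
  });

  // a trivial custom task; run it with: grunt hello
  grunt.registerTask('hello', function () {
    grunt.log.writeln('Hello from Grunt!');
  });

  // the task that runs when you type just: grunt
  grunt.registerTask('default', ['hello']);
};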
'npm run'
Another option that some developers prefer to use is npm itself, via the npm run command, as described in this informative post on how to use npm run for running tasks:
There are some fancy tools [Grunt] for doing build automation on JavaScript projects that I've never felt the appeal of, because the lesser-known npm run command has been perfectly adequate for everything I've needed to do while maintaining a very tiny configuration footprint.
If you haven't seen it before, npm looks at a field called scripts in the package.json of a project in order to make things like npm test (from the scripts.test field) and npm start (from the scripts.start field) work.
npm test and npm start are just shortcuts for npm run test and npm run start, and you can use npm run to run whichever entries in the scripts field you want!
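As a concrete illustration, the scripts field in a package.json might look like this (the individual commands are hypothetical):

{
  "scripts": {
    "start": "node server.js",
    "test": "mocha --reporter list",
    "lint": "eslint ."
  }
}

With this in place, npm start and npm test work directly, while the custom entry is run as npm run lint.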
Other good introductory resources:
Introduction to grunt.js and npm scripts, and choosing between the two.
Cross platform JavaScript.
Package Managers: An Introductory Guide For The Uninitiated Front-End Developer.