I want to do an npm install in a ton of directories.
Can I create a shell script that will run npm install in all of them asynchronously, so I don't have to wait as long for all of them to finish?
I.e.
cd foo; npm install; cd ..;
cd bar; npm install; cd ..;
etc.
You can run them in the background using & at the end:
cd foo && npm install &
cd bar && npm install &
There's no need for cd .. here, because each backgrounded line runs in its own child process and can't change the script's working directory.
Also, I'm using && here instead of ;. If you used ;, you'd need to add ( ) to group the commands so the whole list goes to the background:
( cd foo; npm install ) &
( cd bar; npm install ) &
As a plus, && will not execute the command to its right if the command to its left fails.
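If there are a lot of directories, a loop keeps this manageable. A minimal sketch (the directory names and the final wait are assumptions, not part of the question):
for dir in foo bar baz; do
  ( cd "$dir" && npm install ) &   # each install runs in its own subshell, in the background
done
wait   # block until every background install has finished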
Related
I have a set of directories, 10 at the moment, named client-1, client-2, ..., client-10, plus one directory named nestjs-wrapper.
I want to iterate over the client directories, enter each of them, and run npm install and node index.js in every one.
I could do it by hand, but the number of clients may grow in the future, so I would like to automate this process.
So the flow would be something like this:
in the parent directory I would like to fire nvm use to make sure I have the desired node version
then cd into each directory, fire npm install & node index.js
cd back to parent directory
repeat this until packages are installed in every client directory
run docker-compose up in a detached terminal
cd from parent directory into a nestjs-wrapper and start it in watch mode with npm run start:dev
This is the start of my attempt; it installs the packages in the client directories. Now I would somehow need to do the rest of the flow:
pattern="/home/dario/my-folder/client"
for _dir in "${pattern}-"[[:digit:]]*; do
  [ -d "$_dir" ] || continue;
  pushd "$_dir" && npm install;
done
I would like to start docker-compose from the parent directory in a detached terminal.
To do this, I just created a new script named start-docker.sh in which I only have docker-compose up.
And after that, enter a separate directory in the parent directory (one that is not named client-*) and run npm run start:dev in it.
So it would go something like:
pattern="client"
for _dir in "${pattern}-"[[:digit:]]*; do
  [ -d "$_dir" ] || continue;
  pushd "$_dir" && npm install && node index.js;
  popd;
done
gnome-terminal -- ./start-docker.sh;
pushd nestjs_wrapper && npm run start:dev;
This does the trick; I switched back to relative path names. First I iterate over all the client directories and install the packages, then I bring up docker-compose and start the wrapper in watch mode.
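The question also mentions running nvm use in the parent directory first. A hedged sketch of that step, assuming nvm is installed in its default location ($HOME/.nvm) and a .nvmrc file exists in the parent directory, would go at the top of the script:
# nvm is a shell function, so a non-interactive script has to source it first
source "$HOME/.nvm/nvm.sh"
nvm use   # picks the version from .nvmrc in the current directory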
I'm launching a bash script inside a container to do npm install but I can't see the progress bar
here's the docker command inside a bash script:
docker-compose run --rm my-docker-node-service \
bash < npm-install-empty-node_modules
Here's the code for npm-install-empty-node_modules
if [ ! -d "node_modules" ]; then
  echo "node_modules folder doesn't exist"
  npm install
  exit 0
fi
It prints "node_modules folder doesn't exist" as well as everything else, just not the npm install progress bar, which makes it seem like it has frozen. But if you wait long enough, it finishes installing.
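npm only draws its progress bar when it detects a terminal (TTY). Redirecting the script into bash with < means stdin is a file rather than a terminal, so docker-compose run cannot allocate an interactive TTY for the command. A hedged workaround, assuming the script is available inside the container (for example through the service's volume mount), is to run it by path instead of piping it in:
docker-compose run --rm my-docker-node-service \
  bash ./npm-install-empty-node_modules   # stdin stays attached to the terminal, so a pseudo-TTY can be allocated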
I am creating a bash script which runs through each of my projects and runs npm run test if the test script exists.
I know that if I get into a project and run npm run it will give me the list of available scripts as follows:
Lifecycle scripts included in www:
  start
    node server.js
  test
    mocha --require @babel/register --require dotenv/config --watch-extensions js **/*.test.js
available via `npm run-script`:
  dev
    node -r dotenv/config server.js
  dev:watch
    nodemon -r dotenv/config server.js
  build
    next build
However, I have no idea how to grab that information, see if test is available and then run it.
Here is my current code:
#!/bin/bash
ROOT_PATH="$(cd "$(dirname "$0")" && pwd)"
BASE_PATH="${ROOT_PATH}/../.."
while read MYAPP; do # reads from a list of projects
  PROJECT="${MYAPP}"
  FOLDER="${BASE_PATH}/${PROJECT}"
  cd "$FOLDER"
  if [ check here if the command exists ]; then
    npm run test
    echo ""
  fi
done < "${ROOT_PATH}/../assets/apps-manifest"
EDIT:
As mentioned by Marie and James, if you only want to run the command when it exists, npm has an option for that:
npm run test --if-present
This way you can have a generic script that works with multiple projects (which may or may not have a specific task) without the risk of getting an error.
Source: https://docs.npmjs.com/cli/run-script
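In the loop from the question, that flag removes the need for any explicit check; the body could be reduced to something like this (same variables as in the script above):
cd "$FOLDER"
npm run test --if-present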
EDIT
You could do a grep to check for the word test:
npm run | grep -q test
This returns true (exit status 0) if the output of npm run contains the word test.
In your script it would look like this:
#!/bin/bash
ROOT_PATH="$(cd "$(dirname "$0")" && pwd)"
BASE_PATH="${ROOT_PATH}/../.."
while read MYAPP; do # reads from a list of projects
  PROJECT="${MYAPP}"
  FOLDER="${BASE_PATH}/${PROJECT}"
  cd "$FOLDER"
  if npm run | grep -q test; then
    npm run test
    echo ""
  fi
done < "${ROOT_PATH}/../assets/apps-manifest"
It would only be a problem if the word test appears in that output with another meaning.
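If that is a concern, the match can be tightened. A sketch, assuming npm run indents script names with two spaces as in the output shown above:
npm run | grep -qE '^  test$'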
Hope it helps
The right solution is to use the --if-present flag:
npm run test --if-present
--if-present doesn't let you "check if an npm script exists"; it just runs the script if it exists. If you have fallback logic, this won't suffice. In my case, I want to run npm run test:ci if it exists, and if not, check for and run npm run test. Using --if-present would run both the test:ci AND test scripts if both exist. By checking whether one exists first, we can decide which to run.
Because I have both "test" and "test:ci" scripts, the npm run | grep approach wasn't sufficient. As much as I'd like to do this with strictly npm, I have jq in my environments so I decided to go that route to have precision.
To check for a script named "test:ci":
if [[ $(jq '.scripts["test:ci"]' < package.json) != null ]]; then
  # script exists
fi
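Putting the fallback together, a minimal sketch using jq's -e flag (which sets a non-zero exit status when the output is null):
if jq -e '.scripts["test:ci"]' package.json > /dev/null; then
  npm run test:ci
elif jq -e '.scripts["test"]' package.json > /dev/null; then
  npm run test
fi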
I have inline checks to detect whether CLI packages are already installed, to save time on reinstalling existing packages, but I find them tedious and not very readable as the list gets long.
For example:
which -s redis-cli || brew install redis
which -s java || brew cask install java
which -s yarn || npm install -g yarn
Is there a way to write a function to make this nicer looking? For example:
function npmInstall(name) {
if (which -s name) {
return;
}
npm install -g name;
}
Thanks a lot!
You can pass the CLI packages as parameters. For example, script.sh:
for cli in "$@"; do
  which "$cli" || npm install -g "$cli"
done
invoked with ./script.sh java yarn
Update:
As package names may differ from executable names, you can handle these differences with a Bash associative array. The package name passed as a parameter is used as the executable to check for only if no mapping is found in the array for that package:
declare -A exe=([redis]="redis-cli" [otherpkg]="otherpkg-cli")
for pkg in "$@"; do
  cmd=${exe[$pkg]:-$pkg}   # executable to look for, falling back to the package name
  which "$cmd" || npm install -g "$pkg"
done
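For example (the package names here are just illustrative):
./script.sh redis yarn   # looks for redis-cli and yarn, installs only what is missing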
I am trying to create an alias to reach our shared server at work. To find the server using manual terminal inputs I type the following separate inputs:
cd ..
cd ..
cd Volumes
cd Production
I want all of this to be one single alias in my .bash_profile
I tried the following which didn't work:
alias work="cd ~ cd .. cd .. cd Volumes cd Production"
Your alias needs to be a valid command. You forgot && between your commands:
alias work="cd ~ && cd .. && cd .. && cd Volumes && cd Production"
Or you could shorten it into one command using relative paths:
alias work="cd ~/../../Volumes/Production"
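Since cd ~ followed by two cd .. typically lands at the filesystem root (for example /Users/you -> /Users -> /), this usually resolves to the same place as the absolute path:
alias work="cd /Volumes/Production"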