Bash script behaving differently than manually executing the same commands on a remote server - bash

I have a remote server; I can ssh into it, and when I execute npm install it works just fine. I can see that it's installed by calling which npm, and I see:
/home/ubuntu/.nvm/versions/node/v15.11.0/bin/npm
Great.
However, when I do this via a bash script, it says
bash: line 1: npm: command not found
My script:
#!/bin/bash
ssh he-int 'npm install'
Why the discrepancy between the two? They're the same commands...

Sounds like npm is added to your PATH in one of the login scripts (.bashrc, .profile, etc.). I especially think this is true because the npm path indicates you are using nvm to manage your npm environment; there may be a call to nvm that adds npm to the PATH in one of those login scripts.
When you run an ssh command like your second command, it doesn't run as a login shell, so it doesn't run the login scripts. You can either:
Try and force it to run as a login shell (ssh -t maybe? not sure).
Try to initialize your npm environment inside your script, probably by calling nvm (see the sketch below).
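For example, either of these should work (a sketch: ~/.nvm/nvm.sh is nvm's default location, so adjust the path if your install differs, and the bash -lc variant only helps if your login scripts actually set up nvm):
ssh he-int 'source ~/.nvm/nvm.sh && npm install'
ssh he-int 'bash -lc "npm install"'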

npm may not be in the PATH of the user you're sshing in as when using the bash script.
Use this to see what's in your PATH variable:
ssh he-int 'echo $PATH'
You can also just try to use the path directly and see if it works.
#!/bin/bash
ssh he-int '/home/ubuntu/.nvm/versions/node/v15.11.0/bin/npm install'

Related

Pass environment variable from command line to yarn

I have code that reads the port number from an environment variable or from config. The code looks like this:
const port = process.env.PORT || serverConfig.port;
await app.listen(port);
To run the app without defining the environment variable, I run the following yarn command:
yarn start:dev
This command works successfully in a Linux shell and in the Windows command line.
Now I want to pass an environment variable. I tried the following:
PORT=2344 yarn start:dev
This command works successfully in a Linux shell but fails in the Windows command line. I tried the following ways but couldn't get it to work.
Tried: PORT=2344 yarn start:dev
I got error: 'PORT' is not recognized as an internal or external command,
operable program or batch file.
Tried: yarn PORT=2344 start:dev
I got error: yarn run v1.17.3
error Command "PORT=2344" not found.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Any ideas? I know I can define environment variables from System Properties in Windows, but is there any way to do it from the command line?
I'd suggest you use the npm module called cross-env. It allows setting particular env variables on the command line regardless of platform. With that said, you may try:
$ cross-env PORT=2344 yarn start:dev
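cross-env needs to be installed first, and it normally goes inside the package.json script itself rather than being typed at the prompt. A sketch (the node src/main.js part is just a placeholder for whatever start:dev really runs):
$ yarn add --dev cross-env
"scripts": {
  "start:dev": "cross-env PORT=2344 node src/main.js"
}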
You can chain commands on the Windows command prompt with & (or &&). To set an environment variable you need to use the set command.
The result should look like this: set PORT=1234 && yarn start:dev. (Note that cmd keeps the space before && as part of the value, so PORT becomes "1234 " with a trailing space; write set PORT=1234&& yarn start:dev if that matters.)
Found a solution for this problem in the Windows command prompt.
Create a .env file in the project root folder (outside the src folder).
Define PORT in it. In my case, the contents of the .env file are:
PORT=2344
Run yarn start:dev
The application will use the port number that you specified in the .env file.
Put the .env file at the project root. Then either of the following commands will expose the contents of the .env file and run yarn start:
$ source .env && yarn start
or
$ export $(cat .env) && yarn start
If you update any variable in .env, close the terminal, open a new terminal window, and run the command again. Or run unset to remove an existing variable:
unset VAR_NAME
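One caveat with the source .env variant: a plain PORT=2344 line sets a shell variable but does not export it to child processes like yarn/node. Either write export PORT=2344 in the file, or wrap the source in set -a, which marks every assignment for export:
$ set -a; source .env; set +a
$ yarn start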
You can use the popular package dotenv:
Create a .env file in the root directory
Put all your env vars in it, e.g.:
ENV=DEVELOPMENT
Run your code like this:
$ node -r dotenv/config your_script.js
The explanation is here:
https://github.com/motdotla/dotenv#preload
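Alternatively, instead of preloading with -r, dotenv's standard API lets you load the file explicitly at the top of your entry script:
require('dotenv').config();
console.log(process.env.ENV); // "DEVELOPMENT", read from .env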
To define environment variables in the Windows command prompt, use the set command; you can then split your call into two lines.
set PORT=2344
yarn start:dev
The set command persists within the current command prompt, so you only need to run it once.
The equivalent command in bash is 'export'.
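So the bash version of the same two-line approach is:
export PORT=2344
yarn start:dev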
FYI (not a direct answer). I was attempting this in VS Code - passing .env variables through yarn to a JavaScript app. Google had very few examples so I'm sharing this for posterity as it's somewhat related.
The following simply substitutes text normally placed directly into the package.json or script file. Use this to quickly obfuscate or externalize your delivery configurations.
In Environment Variable File (.env)
PORT=2344
In Yarn File (package.json)
source .env; yarn ./start.sh --port $PORT
In Yarn Script (start.sh)
#!/bin/bash
node dist/src/index.js $1   # initial run; replace with your app call
while [ $? -ne 0 ]; do      # restart as long as the app exits non-zero
node dist/src/index.js $1
done
The app then accepts port as a variable. Great for multi-tenant deployments.

Run node server in background and start karma from npm script

I'd like to run a node server in the background and start karma (on Win7). Writing a bash script like the following (and running it with git bash) appears to work, but it reports to a separate window instead of the WebStorm terminal:
#!/bin/bash
node test/server/index.js &
karma start karma.conf.js
package.json
"scripts": {
"test": "test.sh"
},
If I run it from git bash with bash test.sh, then it reports to the same window.
I tried to do something similar in npm, but it cannot run background processes.
"scripts": {
"test": "node test/server/index.js & karma start karma.conf.js"
},
No matter what I try, it runs things only in a single process, so it waits for the node server to exit, and thus the karma server never starts.
Any idea how to solve the bash reporting to WebStorm terminal or the npm parallelization?
update:
I think I have found the reason: https://github.com/npm/npm/issues/8358 This seems to be a Windows-related issue; on Linux it would work properly, so it is not possible to fix the npm script. Instead of bash, I think I'll move the karma server and the node server to a node script and create a child process for the node server, to be Windows compatible. I hope that way the karma logs will show up in the WebStorm terminal.
Cross-platform shell parallelization solution
I had a little time to search more in the topic. Actually there are parallelization tools available for npm and shell scripts, which are cross-platform:
https://github.com/mysticatea/npm-run-all
https://github.com/kimmobrunfeldt/concurrently
https://github.com/royriojas/shell-executor
There was an initiative to merge all of these projects along with others, which was more or less successful: https://github.com/mysticatea/npm-run-all/issues/10. According to one of the contributors npm-run-all is great now; on the other hand, the npm-run-all repo does not seem very active nowadays, so it is probably better to use concurrently or shell-executor instead.
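With concurrently, for instance, the npm script for this case could look like the following (a sketch, matching concurrently's quoted-command CLI; not tested here):
"scripts": {
  "test": "concurrently \"node test/server/index.js\" \"karma start karma.conf.js\""
}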
WebStorm settings / Git bash solution
I set the WebStorm terminal to git bash instead of cmd.exe:
File/Settings > Tools/Terminal > Shell path: "C:\Program Files\Git\bin\bash.exe" > Ok
And I changed the npm script to run with bash:
"scripts": {
"test": "bash -c \"node test/server/index.js & karma start karma.conf.js\""
},
Hopefully the bash commands work the same on Linux too; I have to check with Travis, but there is a very good chance.
Using the bash command for the sh file works too:
"scripts": {
"test": "bash test.sh"
},
Is npm shell configuration a possible solution?
It is interesting that without the bash command the above solution did not work. Probably npm started the script with cmd.exe, which opened bash.exe in a new window once it checked the header and realized it is a bash script. And yes, I checked, and npm uses cmd.exe by default:
$ npm config ls -l | grep shell
shell = "C:\\Windows\\system32\\cmd.exe"
So another option might be to set the npm shell to git bash and after that I don't have to use the bash in my scripts.
npm config set shell "C:\Program Files\Git\bin\bash.exe"
Well, I did exactly that, but nothing changed. I still have to use bash in my scripts and the sh file still opens in a new window. It does not make a real difference; we still need the WebStorm settings to run the script with bash, so it is not a solution.
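One thing worth checking, though: on newer npm versions the config key that controls the shell used for run-scripts is script-shell, not shell, so the variant below may be what actually takes effect (untested here):
npm config set script-shell "C:\Program Files\Git\bin\bash.exe"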

Why Can't I Set Env Variables By Running A BASH Script From An Npm Script?

I have a Node.js JavaScript project, but I would like to set a bunch of environment variables locally. I created a bash file that just exports some variables:
#!/usr/bin/env bash
export waka=flaka
export fat=booty
When I use the dot to source and run the file from the command line it works fine:
. ./env.sh
And I can see the variable has been set
echo $waka # prints "flaka"
But then I try to take this command and make it an npm script by adding it to my package.json
"scripts": {
  "set-env": ". ./env.sh",
  ...
}
and then run it:
npm run set-env
The script is run but the environment variables are not saved:
echo $waka # prints an empty line (assuming you didn't already run it from the command line)
So I'm wondering why it doesn't save the environment variables when run as an npm script, and whether it's possible to run the bash script from an npm script in such a way that the environment variables persist for the rest of the command prompt session. Thanks!
npm is not a shell built-in; it runs in a separate process that forks another shell in order to run the command specified by set-env. env.sh is executed in that shell, but the shell immediately exits, at which point the changes are gone (and then npm itself exits).
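You can reproduce the effect without npm at all; any child shell takes its exported variables down with it:
bash -c '. ./env.sh'   # a child shell sources env.sh, then exits
echo $waka             # prints a blank line: the export died with the child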

Running Docker commands included in a shell script alongside other Linux commands and switching users

Using the Linux terminal, I run bash scripts (.sh files) containing sequences of commands I want to execute.
The issue is that I am unable to run a Docker command from within my shell script. I can run this Docker command when it's typed directly at the terminal with root privileges but not when I include it in the shell script file.
My script, executed as a general user from the command line, looks like this:
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. my system is set to only run Docker as root
su
# Copy a folder from Docker container to host OS
docker cp <container-name>:/home/user/data /home/user/docker_backup
# More general user commands
cd ..
My code only runs until the su line above. After I enter the root password, nothing happens. If I type exit, I get permission errors, meaning the docker cp command failed.
This is my desired solution
After thorough research, as I wanted to run my script as a general user and only run certain commands as root when necessary, I came up with a solution that works.
My script now looks like this (run with $ sh script_name.sh):
#!/usr/bin/env bash
cd /home/user/docker_backup
# remove /home/user/docker_backup/data
rm -rf data
# Switch to root privileges. my system is set to only run Docker as root
su - root -c "docker cp <container-name>:/home/user/data /home/user/docker_backup"
# More general user commands
cd ..
I run the shell script as a general user. For commands that require root privileges, I use su - root -c "<command>". The terminal prompts for the root password, executes the quoted command as root, and then the shell proceeds as the general user.
Actually posting this as an answer:
You switch to root during the script, but su starts a new interactive shell, and the rest of the script does not run inside it. Only after you exit that shell do the remaining lines, including docker cp, execute, still as your own user.
That is why you never saw the output of docker cp (which might have given you insight into why it was not working; I suspect insufficient privileges).
A solution to this is either using sudo before docker cp, starting the script as root, or adding your user to the "docker" group, which authorizes your user to use docker commands.
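Concretely, that means one of these (the usermod change only takes effect after logging out and back in; the container name is the placeholder from the question):
sudo docker cp <container-name>:/home/user/data /home/user/docker_backup
sudo usermod -aG docker "$USER"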
I had a similar issue where the docker commands ran fine in the terminal but not when I compiled them into a bash script, and it basically came down to two reasons.
Docker commands need to be run with elevated privileges, that is, with sudo (e.g. sudo docker ps works but docker ps won't). You can add the current user to the docker group so that sudo is not needed with each docker command; please visit this link and follow section 2 to do the same.
Run the script in the correct way:
The script should start with #!/bin/bash. That shebang line tells the system which interpreter to run the script with.
Save the file without the .sh extension.
Give the script execute permission with chmod 777 script_name (or, more restrictively, chmod +x script_name).
Run the script with bash script_name.

sbt (Scala) via SSH results in command not found, but works if I do it myself

So I'm trying to do something that involves running sbt over an SSH command, and this is what I'm trying:
ssh my_username@<server ip> "cd <project folder>; sbt 'run-main Foo'"
When I do that however, I get an error message: bash: sbt: command not found
Then I SSH into the server myself, cd to the project folder, and run sbt 'run-main Foo', and everything works nicely. I have checked to make sure sbt is on the $PATH variable on the remote server via ssh my_username@<server ip> "echo $PATH" and it shows the correct value.
I feel like this is a simple fix, but cannot figure it out... help?
Thanks!
-kstruct
When you log in, bash is run as an interactive shell. When you run commands directly through ssh, bash is run as a non-interactive shell, and therefore different initialization files are sourced (see the bash manual pages for which exactly). There are a number of ways to fix this, e.g.:
Use the full path to sbt when calling it directly through ssh (see the example after this list)
Edit .bashrc and add the missing directories to the PATH environment variable
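For example (the /usr/local/bin/sbt path is hypothetical; run which sbt in an interactive session to find the real one):
ssh my_username@<server ip> 'cd <project folder>; /usr/local/bin/sbt "run-main Foo"'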
Note that your test ssh my_username@<server ip> "echo $PATH" actually prints PATH on your client, not your server, because of the double quotes. Use ssh my_username@<server ip> 'echo $PATH' or ssh my_username@<server ip> env to print PATH from the server's environment. When checking using env, you will see that PS1 is only set in interactive shells.
