Bash script doesn't set environment variable correctly [duplicate] - bash

This question already has answers here:
Are shell scripts sensitive to encoding and line endings?
(14 answers)
Closed 1 year ago.
I am trying to set environment variables in a bash script to be read by another bash script, but they are not getting set properly. I am on Ubuntu 20.04.
setting environment variables in a script:
setenv.env
export DB1_IMAGE="postgres:latest"
run it: . setenv.env
test it: echo $DB1_IMAGE
result: postgres:latest
script to test the environment variable value:
test.sh
#!/bin/bash
echo $DB1_IMAGE
if [[ $DB1_IMAGE == "postgres:latest" ]]
then
echo "equals"
else
echo "not equals"
fi
run the test script: . test.sh
result:
postgres:latest
not equals
now set the environment variable with command line:
export DB1_IMAGE="postgres:latest"
now run the test script again: . test.sh
result:
postgres:latest
equals
Summary: when an environment variable is set by a bash script, its value fails an equality comparison in another bash script, but when the same variable is set on the command line, it passes. I can't explain why. I feel like I'm missing something obvious. How could the == test fail? Are unprintable characters being inserted somehow? Please help.

Thanks to @glennjackman, the cause of this was that the script file (setenv.env) was DOS-formatted rather than UNIX-formatted. It had \r\n line endings, so the trailing carriage return became part of the variable's value (postgres:latest\r), which is why the comparison failed even though the echoed output looked identical. The fix is to convert the file with dos2unix (sudo apt install dos2unix).
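A quick way to confirm the hidden character (a sketch using the file and variable names from the question):
cat -A setenv.env
# a DOS-formatted file shows the carriage return as ^M before the end-of-line $:
# export DB1_IMAGE="postgres:latest"^M$
. setenv.env
printf '%q\n' "$DB1_IMAGE"
# $'postgres:latest\r'   <- the stray \r that breaks the == comparison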

Related

Cron removing $ character from environment variable in shell script

I have an environment variable in a docker container that stores a password with special characters. This particular password contains a $ in it. I output this environment variable in a shell script. If I run the script manually, everything is fine. When the cron runs the script, the $ and the following 2 characters are removed. I have tried escaping the special characters in several ways, the latest of which is below, but the outcome is the same (fine manually, missing with the cron). For this example, assume the password is blahblah$xy*blahblah, which is what I would see when running the script. If the cron runs the script, I would get blahblah*blahblah.
My script (testVars.sh):
#!/bin/bash
echo "Testing variables"
MY_PASS=$MY_PASSWORD
TEST_PASS=$(sed -e 's/[^a-zA-Z0-9,._+#%/-]/\\&/g; 1{$s/^$/""/}; 1!s/^/"/; $!s/$/"/' <<< $MY_PASSWORD)
echo ${MY_PASS}
echo ${TEST_PASS}
My cron:
BASH_ENV=/root/env_vars.sh
33 13 * * * root /opt/testVars.sh >> /opt/cron.log
I am assuming that it is actually possible to have a $ sign in a string in this way.
I solved it by adding the following into the docker-entrypoint.sh file, just before my printenv command:
export MY_PASSWORD=$(sed -e 's/[^a-zA-Z0-9,._+#%/-]/\\&/g; 1{$s/^$/""/}; 1!s/^/"/; $!s/$/"/' <<<"$MY_PASSWORD")
Thanks to Ture Pålsson for pointing me in the right direction.
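For context, here is a minimal sketch of why the unescaped $ disappears (an assumption: the file pointed to by BASH_ENV ends up containing the raw, unescaped value):
# Hypothetical /root/env_vars.sh written without escaping:
export MY_PASSWORD="blahblah$xy*blahblah"
# When cron sources this file, the shell expands $xy; if that variable is
# unset, the value collapses to blahblah*blahblah.
# Single quotes (or escaping, as the sed command above does) keep the literal text:
export MY_PASSWORD='blahblah$xy*blahblah'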

Send rm-command with $variable filename via ssh [duplicate]

This question already has answers here:
is it possible to use variables in remote ssh command?
(2 answers)
Closed 4 years ago.
In a bash script I try to do:
ssh -n $username@server2 "rm ${delete_file}"
but always get the error:
rm: missing operand
when I run
echo $delete_file
I get the correct path:
/var/www/site/myfile.txt
What am I doing wrong?
Could it be that in your case, $delete_file is set on the remote host and not on your current machine?
If you want $delete_file to be expanded on the remote side (i.e., after ssh'ing into server2), you have to use single quotes:
ssh -n $username@server2 'rm ${delete_file}'
Other than that, do you set the value of delete_file in the same script (before ssh'ing), or before invoking your script? If the latter is the case, it can't work: plain (unexported) variables are not propagated to scripts called from the current shell/session.
You could do the following about it:
delete_file=<your-value> ./ssh-script
or:
delete_file=<your-value>
export delete_file
./ssh-script
As it turns out, this last option was the problem. Let me elaborate on best practices:
Better than setting environment variables would be the usage of positional parameters.
#!/bin/bash
# $1: file to delete
delete_file=${1:?Missing parameter: which file for deletion?}
ssh -n $username@server2 "rm ${delete_file}"
Usage of the script is now as simple as:
./ssh-script <your-file-for-deletion>
This way, you don't have to remember which variable is exactly expected by the script when calling it - simply call the script with a positional parameter.
As a bonus, the example uses parameter expansion to check for not-set or empty parameters:
delete_file=${1:?Missing parameter: which file for deletion?}
Whenever $1 happens to be unset or empty, the scripts exits immediately with exit code 1 and prints given message to stderr.
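A quick illustration of that behavior (a sketch; the exact error text bash prints may vary slightly):
./ssh-script
# ./ssh-script: line 3: 1: Missing parameter: which file for deletion?
echo $?
# 1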

Bash, double quotes and "reboot" command

Assume you have those two statements in your bash script:
# No. 1
MSG="Automatic reboot now."
echo $MSG
# No. 2
MSG=""Automatic reboot now.""
echo $MSG
The output of statement number 1 is as expected (it is simply printed). If bash runs statement two, the machine is rebooted (any valid bash command will be executed).
But why?
That's because the meaning of MSG=""Automatic reboot now."" is the following:
Execute the command reboot with the argument now., with the environment variable MSG set to Automatic.
It's equivalent to:
MSG=Automatic reboot now.
A lesser known shell feature is the ability to set environment variables for the duration of a single command. This is done by prepending a command with one or more assignments, as in: var1=foo var2=bar command.
Here's a demonstration. Notice how the original value of $MSG is preserved.
$ export MSG=Hello
$ bash -c 'echo $MSG'
Hello
$ MSG=Goodbye bash -c 'echo $MSG'
Goodbye
$ bash -c 'echo $MSG'
Hello
Now on to your question:
MSG=""Automatic reboot now.""
The pairs of double quotes nullify each other, and might as well not be there. It's equivalent to:
MSG=Automatic reboot now.
which executes reboot with an argument of now. and the $MSG environment variable set to Automatic.
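If you want to see the same parsing without actually rebooting, here is a harmless sketch that substitutes printenv for reboot:
MSG=""Automatic printenv MSG""
# Parsed as:  MSG=Automatic printenv MSG
# printenv runs with MSG set to Automatic for that one command and prints:
# Automatic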

Variable is not getting exported [duplicate]

This question already has an answer here:
bash - export doesn't work
(1 answer)
Closed 7 years ago.
I am running the following simple code in a shell script, but it seems like it can't export the variable:
#!/bin/bash
echo -n "Enter AWS_ACCESS_KEY_ID: "
read aws_access_key
export AWS_ACCESS_KEY_ID=$aws_access_key
After that I take the input from the user, but when I run echo $AWS_ACCESS_KEY_ID I get a blank value.
Run your script in the current shell by using:
source your-script # this runs your-script in the existing shell
...or, if using a POSIX shell...
. your-script # likewise; that space is intentional!
not
./your-script # this starts a new shell just for `your-script`; its variables
# are lost when it exits!
...if you want variables it sets to be available to the shell that calls it.
To be clear, export puts a variable in the current process's environment -- but environment variables are propagated down to child processes, not up to parent processes.
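A short illustration of the difference (a sketch with a hypothetical script.sh that contains only export FOO=bar):
./script.sh        # runs in a child shell; its environment dies with it
echo "$FOO"        # prints an empty line
. ./script.sh      # sourced: the export happens in the current shell
echo "$FOO"        # prints bar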
Now, if your goal is to define an interactive command that's easy to call, you might want to consider a different approach altogether: putting a function in your .bashrc:
awsSetup() {
echo -n "Enter AWS_ACCESS_KEY_ID: "
read && [[ $REPLY ]] && export AWS_ACCESS_KEY_ID=$REPLY
}
...after which the user with this in their .bashrc can run awsSetup, which will run in the current shell.

Check if script was started in current shell [duplicate]

This question already has answers here:
How to detect if a script is being sourced
(22 answers)
Closed 8 years ago.
Is there a way to check within a shell script (ksh) whether or not the script was started in the current shell?
Example
Start script in current shell with . (dot/source) command
$ . ./myscript
$ I run in the current environment!
Start script in own process
$ ./myscript
$ I run in my own process!
This is a simple trick you can use.
#!/bin/ksh
if [ ${.sh.file} != ${0} ]; then
echo I run in the current environment
else
echo I run in my own process
fi
Every shell has its own PID, so you can put echo "$$" in your script to see which shell it is running in. A PID that differs from your interactive shell's PID means the script was run in its own process; the same PID means it was sourced into the current shell.
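For example (a sketch of that PID check; 12345 is just a placeholder):
echo $$            # PID of the interactive shell, e.g. 12345
. ./myscript       # sourced: echo "$$" inside the script also prints 12345
./myscript         # executed: echo "$$" inside the script prints a different PID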
