Bash scripting with make - bash

I have a makefile which I use to run simple bash scripts inside of a repo I am working on. I am attempting to add a new make target which will log me into my MySQL database automatically. I have my username and password stored in a .zinfo file at a known location (such as "/u/r/z/myid/.zinfo"). I am attempting to read the lines of this file to get my password. The file has this format:
user = csj483
database = cs495z_csj483
password = fjlID9dD923
Here is the code I am trying to run from the makefile, but it is not working. If I run the same code directly in the shell, it seems to work ok. Note that I had to use double $$s because make doesn't like single ones.
login:
	for line in $$(cat /u/z/users/cs327e/$$USER/.zinfo); \
	do \
	PASSWORD=$$line; \
	echo $$PASSWORD; \
	done
	echo $$PASSWORD
At this point, I am just trying to get the password, which should be the last value that PASSWORD is set to in the loop. If anyone can suggest an easier way to retrieve the password, or point out the error in my code, I would appreciate it. Eventually I will also want to retrieve the database name; any help with that would be appreciated as well. I am new to bash, but experienced in numerous other languages.

You didn't specify what you meant by "not working"; when asking questions please always be very clear about the command you typed, the result you got (cut and paste is best), and why that's not what you expected.
Anyway, most likely the behavior you're seeing is that the first echo shows the output you expect, but the second doesn't. That's because make will invoke each logical line in the recipe in a separate shell. So, the for loop is run in one shell and it sets the environment variable PASSWORD, then that shell exits and the last echo is run in a new shell... where PASSWORD is no longer set.
You need to put the entirety of the command line in a single logical line in the recipe:
login:
	for line in $$(cat /u/z/users/cs327e/$$USER/.zinfo); do \
	  PASSWORD=$$line; \
	  echo $$PASSWORD; \
	done \
	&& echo $$PASSWORD
One last thing to remember: you say you're running bash scripts, but make does not run bash. It runs /bin/sh, regardless of what shell you personally use (imagine the havoc if makefiles used whatever shell the user happened to be using!). Your best option is to write recipes in portable shell syntax. If you really can't do that, be sure to set SHELL := /bin/bash in your Makefile to force make to use bash.
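For instance, a minimal sketch of that setting (the echo body is just a placeholder to show that bash is now in effect):
SHELL := /bin/bash

login:
	echo "running under bash $$BASH_VERSION"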
ETA:
Regarding your larger question, you have a lot of options. If you have control over the format of the zinfo file at all, then I urge you to define it to use the same syntax as the shell for defining variables. In the example above if you removed the whitespace around the = sign, like this:
user=csj483
database=cs495z_csj483
password=fjlID9dD923
Then you have a valid shell script with variable assignments. Now you can source this script in your makefile and your life is VERY easy:
login:
	. /u/z/users/cs327e/$$USER/.zinfo \
	&& echo user is $$user \
	&& echo database is $$database \
	&& echo password is $$password
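Once the file is sourced like that, the MySQL login the question is ultimately after could look something like this (a sketch assuming the stock mysql client; note that a password passed with -p on the command line can show up in ps output):
login:
	. /u/z/users/cs327e/$$USER/.zinfo \
	&& mysql -u "$$user" -p"$$password" "$$database"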
If you don't have control over the syntax of the zinfo file then life is harder. You could use eval, something like this:
login:
	while read var eq value; do \
	  eval $$var="$$value"; \
	done < /u/z/users/cs327e/$$USER/.zinfo \
	&& echo user is $$user \
	&& echo database is $$database \
	&& echo password is $$password
This version will only work if there ARE spaces around the "=". If you want to support both, that's doable as well; one way is sketched below.
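A possible sketch that accepts both forms, by letting read split on spaces and on "=" (my addition, not part of the original answer):
login:
	while IFS=' =' read -r var value; do \
	  eval "$$var=\$$value"; \
	done < /u/z/users/cs327e/$$USER/.zinfo \
	&& echo user is $$user \
	&& echo database is $$database \
	&& echo password is $$password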

Related

Remote script not recognizing environment variable set on host machine in screen

I have a bash script that runs another script in a screen on a remote computer. The environment variable GITLAB_CI_TOKEN is set on the host machine and is defined properly. However, the script configure.sh on the remote machine reports that this environment variable is empty when it is executed, even though it is defined on the same line as the script...
Here is the command I am using:
ssh -o "StrictHostKeyChecking=accept-new" "${COMPUTERS_IPS[i]}" \
screen -S "deploy_${COMPUTERS_IPS[i]}" -dm " \
GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN} \
bash \"${REMOTE_FOLDER}/configure.sh\" \"${REMOTE_FOLDER}\" > ${LOG_FILE} 2>&1;
"
Additionally, the logs are not being written to LOG_FILE, but are being displayed on the console of the screen. I have been pulling my hair out over this for the past two days... Any help or guidance would be greatly appreciated :)
Why GITLAB_CI_TOKEN is "empty":
Passing a command to a remote host over ssh is very similar to running it through eval. For example in your case, escaped newlines on the first evaluation become unescaped newlines on a subsequent evaluation. Consider this very simple program named args (place it in bin or somewhere else on your path to demo):
#!/bin/bash
for arg ; do
echo "|$arg|"
done
And these two use cases:
args "\
Hello \
World"
# prints:
# |Hello World|
ssh host args "\
Hello \
World"
# prints:
# |Hello|
# |World|
As you can see, when we run this via ssh the newline we attempted to escape splits our data into two separate lines even though we tried to keep it all on one line. This means your assignment of GITLAB_CI_TOKEN is just a regular shell variable instead of a scoped environment variable for your bash command. A scoped environment variable requires the declaration and the command to happen on the same line.
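A quick illustration of that rule (my example, not from the original answer):
# on one line, FOO is placed only in the child command's environment
FOO=bar bash -c 'echo "FOO=$FOO"'   # prints FOO=bar
# split across lines, FOO is an ordinary shell variable the child never sees
FOO=bar
bash -c 'echo "FOO=$FOO"'           # prints FOO=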
The easiest thing to do is likely to just export the variable explicitly with export GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN}.
For similar reasons, the output of your command is going to the screen and not the logfile: the outer quotes of screen -dm "commands >output" are stripped on the first evaluation, so the remote host parses screen -dm commands >output and attaches the output redirection to screen itself instead of to commands. That means your configure.sh is writing to the screen session, and it is screen's own output that ends up in the logfile.
To send complex commands to a remote host, you may want to look into tools like printf %q which can produce escaped output suitable for being safely evaluated in an eval-like context. Take a look at BashFAQ/096 for an example.
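A rough sketch of that idea applied here (same variables as in the question; the session name is simplified to "deploy", and this is only one way to layer the quoting):
# build the remote command locally, quoting each word with printf %q
remote_cmd=$(printf '%q ' env GITLAB_CI_TOKEN="${GITLAB_CI_TOKEN}" \
    bash "${REMOTE_FOLDER}/configure.sh" "${REMOTE_FOLDER}")
remote_cmd="${remote_cmd} > ${LOG_FILE} 2>&1"
# quote the whole command once more so it survives the remote shell's evaluation
ssh -o "StrictHostKeyChecking=accept-new" "${COMPUTERS_IPS[i]}" \
    "screen -S deploy -dm bash -c $(printf '%q' "${remote_cmd}")"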

Using Expect to fill a password in a bash script

I am relatively new to working in bash, and one of the biggest pains with the script I have to run is that I get prompted for passwords repeatedly. I am unable to use ssh keys or any option other than expect due to security restrictions, but I am struggling to understand how to use expect.
Does Expect require a separate file from this script to call itself? It seems that way looking at tutorials, but they seem rather complex and confusing for a new user. Also, how do I tell my script to automatically fill in any prompt that says Password:? This script also runs with 3 separate unique variables every time it is called. How do I make sure those are still gathered while the password is filled in automatically?
Any assistance is greatly appreciated.
#!/bin/bash
zero=`echo $2`
TMPIP=`python bin/dgip.py $zero`
IP=`echo $TMPIP`
folder1=`echo $zero | cut -c 1-6`
folder2=`echo $zero`
mkdir $folder1
cd $folder1
mkdir $folder2
cd $folder2
scp $1@`echo $IP`:$3 .
Embedding expect code in a shell script is not too difficult. We have to be careful to get the quoting correct. You'll do something like this:
#!/usr/bin/env bash
user=$1
zero=$2
files=$3
IP=$(python bin/dgip.py "$zero")
mkdir -p "${zero:0:6}/$zero"
cd "${zero:0:6}/$zero"
export user IP files
expect <<'END_EXPECT' # note the single quotes here!
set timeout -1
spawn scp $env(user)@$env(IP):$env(files) .
expect {assword:}
send "$env(my_password)\r"
expect eof
END_EXPECT
Before you run this, put your password into your shell's exported environment variables:
export my_password=abc123
bash script.sh joe zero bigfile1.tgz
bash script.sh joe zero bigfile2.tgz
...
Having said all that, public key authentication is much more secure. Use that, or get your sysadmins to enable it, if at all possible.
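For reference, key-based login is usually a one-time setup along these lines (joe@remote.host is a placeholder):
ssh-keygen -t ed25519          # generate a key pair once; add a passphrase if you like
ssh-copy-id joe@remote.host    # install the public key in ~/.ssh/authorized_keys on the remote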

Check if possible to run command as sudo in Bourne shell?

I'm writing a Bourne shell deployment script, which runs some commands as root and some as the current user. I want to not run all commands as root, and check upfront if the commands I'll need are available to root (to prevent aborted half-done deployments).
In order to do this, I want to make a function that checks if a command can be run as root. My idea was to do this:
sudo_command() {
sudo sh -c 'type "$1"'
}
And then to use it like so:
required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
    sudo_command "$command" || (
        echo "missing required command: $command";
        exit 1;
    )
done
As you might guess by my question here: it doesn't work. Does any of you see what I'm doing wrong here?
I tried running the command inside sudo_command by itself, but that miraculously (to me) did work. But when I put the command into a separate file, it didn't work.
There are two immediate problems:
The $1 not expanding in single quotes.
You can semi-fix this by expanding it in double quotes instead: sudo sh -c "type '$1'"
Your command not exiting. That's easily fixed by replacing your || (..) with || {..}.
(..) creates a subshell that limits the scope of everything inside it including exit. To group commands, use {..}
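A tiny demonstration of the difference (my illustration, not part of the original answer):
false || ( echo "in a subshell"; exit 1 )          # exit only leaves the subshell
echo "still running"
false || { echo "in the current shell"; exit 1; }  # exit terminates the script here
echo "never reached"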
However, there is also the fundamental problem of trying to use sh -c 'type "$1"' to do anything.
One of the major points of sudo is the ability to limit what a user can and can't do. You're assuming that a user has complete, unrestricted access to run arbitrary commands as root, and that any problems are due to root not having these commands available.
That may be a valid assumption for you, but you may want to instead run e.g. sudo apt --version to get a better (but still incomplete) picture of whether you're allowed and able to run apt with sudo without requiring complete and unrestricted access.
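Putting the two immediate fixes together (double quotes around the sh -c string, and { } for grouping), a sketch of the checker might look like this; it still assumes unrestricted sudo, as discussed:
sudo_command() {
    # run type as root; discard its output, keep only the exit status
    sudo sh -c "type '$1'" >/dev/null 2>&1
}

required_sudo_commands="cp rm apt"
for command in $required_sudo_commands; do
    sudo_command "$command" || {
        echo "missing required command: $command" >&2
        exit 1
    }
done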

Bash commands as variables failing when joining to form a single command

ssh="ssh user#host"
dumpstructure="mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database"
mysql=$ssh "$dumpstructure"
$mysql | gzip -c9 | cat > db_structure.sql.gz
This is failing on the third line with:
mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database: command not found
I've simplified my actual script for the purpose of debugging this specific error. $ssh and $dumpstructure aren't always being joined together in the real script.
Variables are meant to hold data, not commands. Use a function.
mysql () {
    ssh user@host mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database
}
mysql | gzip -c9 > db_structure.sql.gz
Arguments to a command can be stored in an array.
# Although mysqldump is the name of a command, it is used here as an
# argument to ssh, indicating the command to run on a remote host
args=(mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database)
ssh user@host "${args[@]}" | gzip -c9 > db_structure.sql.gz
Chepner's answer is correct about the best way to do things like this, but the reason you're getting that error is actually even more basic. The line:
mysql=$ssh "$dumpstructure"
doesn't do anything like what you want. Because of the space between $ssh and "$dumpstructure", it'll parse this as VAR=value command, which means it should execute the "mysqldump..." part with the environment variable mysql set to ssh user@host. But it's worse than that, since the double-quotes around "$dumpstructure" mean that it won't be split into words, and so the entire string gets treated as the command name (rather than mysqldump being the command name, and the rest being arguments to it).
If this had been the right way to go about building the command, the right way to stick the parts together would be:
mysql="$ssh $dumpstructure"
...so that the whole combined string gets treated as part of the value to assign to mysql. But as I said, you really should use Chepner's approach instead.
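For completeness, a sketch of that variable-joining fix applied to the original script; word splitting on the unquoted $mysql is what breaks the line back into command and arguments, and it only works because none of the arguments contain spaces, which is one more reason the function or array approaches above are preferable:
ssh="ssh user@host"
dumpstructure="mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database"
mysql="$ssh $dumpstructure"
$mysql | gzip -c9 > db_structure.sql.gz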
Actually, commands in variables should also work, and can be used as `$var` or $($var). If it says command not found, it could be because the command is not in your PATH, or you may need to give the full path to the command.
So let's set the downvotes aside and talk about this question.
The real problem is mysql=$ssh "$dumpstructure". This means you execute $dumpstructure with the additional environment variable mysql set to $ssh, which is why you get the command-not-found error. It is also the case that mysqldump lives on the remote server, not on this host, so it is no surprise the command is not found locally.
With that understood, let's see how to fix it.
The OP wants to dump MySQL data from a remote server, which means $dumpstructure should be executed remotely. Look at the third line again: mysql=$ssh "$dumpstructure". We have already established that this is where things go wrong. So what would the correct command be? The simplest fix is mysql="$ssh $dumpstructure", which joins $ssh and $dumpstructure into a single command line stored in the variable $mysql.
Finally, let's talk about the last command line. I do not agree that variables are meant to hold data, not commands; a command is also a kind of data. The real problem is how to use it correctly.
The OP's command is also supported, at least on bash 4.2.46.
So the real question is how to use a variable to hold commands, not whether to introduce a new mechanism such as wrapping them in a bash function.
So can anyone tell me why this answer has gone unnoticed and been voted down?

Pass a variable in a shell script

I'm new to Unix... I have a shell script that calls sqlplus. I have some variables that are defined within the code. However, I do not feel comfortable having the password displayed within the script. I would appreciate it if someone could show me ways to hide my password.
1. One approach I know of is to omit the password, so that sqlplus prompts for it.
2. An approach that I would be very interested in is a Linux command whose output can be passed into the password variable. That way, I can easily replace "test" with some parameter.
3. Any other approach.
Thanks
#!/bin/sh
# This is test.sh. It executes sqlplus.
export user=TestUser
export password=test
# Other variables have been omitted
echo ----------------------------------------
echo Starting ...
echo ----------------------------------------
echo
sqlplus $user/$password
echo
echo ----------------------------------------
echo finish ...
echo ----------------------------------------
You can pipe the password to the sqlplus command:
echo ${password} | sqlplus ${user}
tl;dr: passwords on the command line are prone to exposure to hostile code and users. don't do it. you have better options.
the command line is accessible using $0 (the command itself) through ${!#} ($# is the number of arguments and ${!name} dereferences the value of $name, in this case $#).
you may simply provide the password as a positional argument (say, first, or $1), or use getopts(1), but the thing is, putting passwords in the arguments array is a bad idea. Consider the case of ps auxww (it displays the full command lines of all processes, including those of other users).
prefer getting the password interactively (stdin) or from a configuration file. these solutions have different strengths and weaknesses, so choose according to the constraints of your situation. make sure the config file is not readable by unauthorized users if you go that way. it's not enough to make the file hard to find btw.
the interactive thing can be done with the shell builtin command read.
its description in the Shell Builtin Commands section in bash(1) includes
-s Silent mode. If input is coming from a terminal, characters are not echoed.
#!/usr/bin/env bash
INTERACTIVE=$([[ -t 0 ]] && echo yes)
if ! IFS= read -rs ${INTERACTIVE+-p 'Enter password: '} password; then
echo 'received ^D, quitting.'
exit 1
fi
echo password="'$password'"
read the bash manual for explanations of other constructs used in the snippet.
configuration files for shell scripts are extremely easy, just source ~/.mystuffrc in your script. the configuration file is a normal shell script, and if you limit yourself to setting variables there, it will be very simple.
for the description of source, again see Shell Builtin Commands.
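as a concrete sketch of that config-file route applied to test.sh (~/.sqlplusrc is a made-up name; keep it chmod 600 so only you can read it):
#!/bin/sh
# ~/.sqlplusrc contains lines like:
#   user=TestUser
#   password=test
. "$HOME/.sqlplusrc"
sqlplus "$user/$password"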
