How to update a makefile variable based on command line input? - makefile

Say I have defined a variable 'ABC = 0' in the makefile. Now, when the makefile is run, the user is asked 'What type of program is it?'. If the user enters 'pq', the makefile variable 'ABC' should be updated to 1; otherwise it should keep its value.

Make is not designed for user interaction, but it can delegate that task to a script.
Consider this script:
#!/bin/bash
echo "What type of program is it?"
read
if [ "$REPLY" = pq ]; then
    echo 1
else
    echo 0
fi
Make can execute this script, collect the output and assign it to a variable. The only trick is that it will collect all of the output, so whatever doesn't belong (e.g. "What type of program is it") should be removed from the script:
#!/bin/bash
read
if [ "$REPLY" = pq ]; then
    echo 1
else
    echo 0
fi
and handled by the makefile itself (here the script is saved as ask.bsh):
$(info What type of program is it?)
ABC := $(shell ./ask.bsh)
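For illustration, here is a sketch of how the collected value could then drive a conditional later in the same makefile (the CFLAGS flag is purely hypothetical):
ifeq ($(ABC),1)
    CFLAGS += -DPQ_PROGRAM
endif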

Related

why is a bash read prompt not appearing when a script is run?

I have a bash script that prompts the user for different information based on what they're trying to do. The prompts are usually done with read -p. Most of the time it works just fine: the user sees what is being asked, enters what they need to enter, and everything does what it needs to do.
See the following (sanitized) snippet of a function in the script:
#!/bin/bash
function_name() {
    if [ "$this_value" == "default" ]; then
        echo "Value set to default."
        read -p "Enter desired value here: " desired_value
        desired_value=${desired_value^^}
        if [ "${#desired_value}" != 3 ]; then
            echo "$desired_value is an invalid entry."
            exit 1
        fi
        if [ "$desired_value" != "$(some command that returns something to compare against)" ]; then
            echo "$desired_value is an invalid entry."
            exit 1
        fi
        read -p "You entered $desired_value. Is this correct? [y/N] " reply
        reply=${reply,,}
        case "$reply" in
            y|yes)
                $some command that does what I want it to do
                ;;
            *)
                echo "User did not enter yes"
                exit 1
                ;;
        esac
    fi
}
Usually the "Enter desired value here" and "Is this correct?" prompts appear just fine. But in a few instances I've seen, for some reason the read prompt is just blank. A user will see the following:
./script.bash
##unrelated script stuff
##unrelated script stuff
Value set to default.
user_entered_value_here
User did not enter yes. Exiting.
This is a real example that just happened that finally made me come here to ask what is going on (and I modified appropriately to make it an SO post).
What's happening is these two blank lines appear instead of the read -p text. For the first one, the user entered user_entered_value_here because they already know what is supposed to be entered there even without the read prompt. The second one, the Y/N prompt, they don't know, so they see it apparently hanging, and hit Enter instead of y, causing it to trigger the * case option.
I don't understand why the read -p text is not appearing, and especially why it's appearing for most users but not all users. I suspect there's some kind of environmental setting that causes this, but for the life of me I can't figure out what. This is being run only on RHEL 6.2, under bash 4.1.2.
I looked at the bash man page to check the details of the read built-in. It specifies that the -p option displays the "prompt on standard error, without a trailing newline, before attempting to read any input. The prompt is displayed only if input is coming from a terminal".
Let's consider the simple script input.sh:
#!/bin/bash
read -p "Prompt : " value
echo The user entered: "$value"
Example of execution:
$ ./input.sh
Prompt : foo
The user entered: foo
If stderr is redirected:
$ ./input.sh 2>/dev/null
foo
The user entered: foo
If the input is a pipe:
$ echo foo | ./input.sh
The user entered: foo
If the input is a heredoc:
$ ./input.sh <<EOF
> foo
> EOF
The user entered: foo
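If you need to find out why the prompt is missing for a particular user, a quick diagnostic (this sketch is mine, not part of the original scripts) is to test whether the relevant file descriptors are attached to a terminal:
#!/bin/bash
# [ -t FD ] is true when file descriptor FD is attached to a terminal
if [ -t 0 ]; then echo "stdin is a terminal"; else echo "stdin is redirected or piped"; fi
if [ -t 2 ]; then echo "stderr is a terminal"; else echo "stderr is redirected"; fi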
I rewrote your script with shell-agnostic syntax and fixed a few errors, such as comparing the string length with the string comparators != / = rather than the numerical comparators -ne / -eq:
#!/usr/bin/env sh

this_value=default

toupper() {
    echo "$1" | tr '[:lower:]' '[:upper:]'
}

function_name() {
    if [ "$this_value" = "default" ]; then
        echo "Value set to default."
        printf "Enter desired value here: "
        read -r desired_value
        desired_value=$(toupper "$desired_value")
        if [ "${#desired_value}" -ne 3 ]; then
            printf '%s is an invalid entry.\n' "$desired_value"
            exit 1
        fi
        if [ "$desired_value" != "$(
            echo ABC
            : some command that returns something to compare against
        )" ]; then
            echo "$desired_value is an invalid entry."
            exit 1
        fi
        printf 'You entered %s. Is this correct? [y/N] ' "$desired_value"
        read -r reply
        reply=$(toupper "$reply")
        # the reply has been uppercased, so match uppercase patterns
        case $reply in
            'Y' | 'YES')
                : "Some command that does what I want it to do"
                ;;
            *)
                echo "User did not enter yes"
                exit 1
                ;;
        esac
    fi
}
function_name
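A sample run (illustrative; the placeholder comparison above simply echoes ABC, so any three-letter input other than abc/ABC is rejected, and the final placeholder command is a silent no-op):
$ ./script.sh
Value set to default.
Enter desired value here: abc
You entered ABC. Is this correct? [y/N] y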

bash: is there a flag to make a function to return in case of error?

I am using "set -e" to make the script exit in case of an error, but I don't want it to exit if the error happens inside a function; I would like the function to return an error instead.
For example:
#!/bin/bash
set -e

func() {
    echo 1
    # code ...
    cause_error
    echo This should not print
}

func
if [ $? -ne 0 ]; then
    echo I want this print
else
    echo This should not print either
fi
The output of this script is:
$ /tmp/test.sh
1
But I would like it to be:
1
I want this print
Is this possible? Or do I have to test the exit status of every command executed inside the function?
Explicitly comparing $? is an antipattern; getting rid of it also sidesteps set -e, because errexit does not trigger when the failure happens inside a condition.
The proper syntax for what you are trying to do, regardless of set -e, is
if func; then
    echo I want this print
else
    echo This should not print either
fi
The failure inside the function will cause the function to report an error, like you are saying in the prose description of your requirements, but that also means that the conditional will print This should not print either. If you don't want that, maybe edit your question to clarify this paradoxical requirement.
You can do this:
set -e

function foo() {
    echo Foo
    cause_error
}

echo Before
set +o errexit; foo; set -o errexit
echo Will get here
foo
echo "Won't get here"
The behavior you are asking for is not possible. You want the function to return on error and the script not to exit, even with the set -e option. That would require the bash interpreter to check the exit status of each line and, when it is inside a function, return a non-zero code instead of exiting.
You cannot get exactly what you are asking for, but you can disable set -e for the execution of the function. The way to do that is to use set +e, or to use the && : trick.
Here is the example code updated to use the && : trick:
#!/bin/bash
set -e

func() {
    echo 1
    # code ...
    return 1
    echo This should not print
}

func && :
if [ $? -ne 0 ]; then
    echo I want this print
else
    echo This should not print either
fi
Output:
1
I want this print
Credit: https://stackoverflow.com/a/27793459/2032943
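For completeness, another common pattern (my sketch, not from the linked answer) is to run the function in an || list so errexit is suppressed, while capturing its status explicitly:
#!/bin/bash
set -e

func() {
    echo 1
    return 1
}

rc=0
func || rc=$?   # the || list keeps set -e from aborting; rc records the exit status
if [ "$rc" -ne 0 ]; then
    echo I want this print
fi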

How to run my bash functions in terminal using a parent name?

* the wording of the question is terrible, sorry!
I have some bash functions I created:
test() { echo "hello world"; }
test2() { echo "hello world"; }
Then in my .bashrc I source the file that has the above functions: . ~/my_bash_scripts/testFile
In the terminal I can run test and get hello world.
Is there a way for me to add a parent name that groups all my functions together? For example, personal test, personal test2.
Similar to every other gem out there, I downloaded a Twitter one. All its commands are prefixed with the letter t, as in t status to write a status, instead of just status.
You are asking about writing a command-line program. Here is a simple one:
#!/usr/bin/env bash

if [[ $# -eq 0 ]]; then
    echo "no command specified"
    exit
elif [[ $# -gt 1 ]]; then
    echo "only one argument expected"
    exit
fi

case "$1" in
    test)
        echo "hello, this is test1"
        ;;
    test2)
        echo "hello, this is test2"
        ;;
    *)
        echo "unknown command: $1"
        ;;
esac
Then save it and make it executable by running chmod +x script.sh, and in your .bashrc file, add alias personal="/fullpath/to/the/script.sh".
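A quick usage sketch (the output follows directly from the case statement above):
$ personal test
hello, this is test1
$ personal something_else
unknown command: something_else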
This is just a very basic and simple example using bash; of course you can use any language you like, e.g. Python, Ruby, Node, etc.
Use arguments to determine the final output.
You can use "$#" for the number of arguments.
For example,
if [ $# -ne 2 ]; then
    # TODO: print usage
    exit 1
fi
The above code exits if the number of arguments is not equal to 2.
So the bash program below,
echo $#
invoked as
thatscript foo bar baz quux
will output 4.
Finally, you can combine the arguments to determine what to print to stdout.
If you want to flag some functions as your personal functions: no, there is no explicit way to do that; essentially, all shell functions belong to you (although some may be defined by your distro maintainer or system administrator as system-wide defaults).
What you could do is collect the output from declare -F at the very top of your personal shell startup file; any function not in that list is your personal function.
SYSFNS=$(declare -F | awk '{ a[++i] = $3 }
END { for (n=1; n<=i; n++) printf "%s%s", (n>1? ":" : ""), a[n] }')
This generates a variable SYSFNS which contains a colon-separated list of system-declared functions.
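For reference, declare -F prints one line per defined function in the form declare -f name, which is why the awk script picks up field $3 as the function name. Illustrative output:
declare -f test
declare -f test2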
With that defined, you can check out which functions are yours:
myfns () {
    local fun
    declare -F |
        while read -r _ _ fun; do
            case :$SYSFNS: in *:"$fun":*) continue;; esac
            echo "$fun"
        done
}
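A hedged usage sketch, assuming the SYSFNS snippet ran at the top of your startup file before your own functions (here the test and test2 from the question) were defined:
$ myfns
test
test2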

How to execute a file that is located in $PATH

I am trying to execute a hallo_word.sh that is stored at ~/bin from this script that is stored on my ~/Desktop. I have made both scripts executable, but every time I get the "Problem" message. Any ideas?
#!/bin/sh
clear
dir="$PATH"
read -p "which file you want to execute" fl
echo ""
for fl in $dir
do
    if [ -x "$fl" ]
    then
        echo "executing=====>"
        ./$fl
    else
        echo "Problem"
    fi
done
This line has two problems:
for fl in $dir
$PATH is colon-separated, but for iterates over whitespace-separated words. You can change that by setting the IFS variable, which controls the field separator the shell uses when splitting unquoted expansions such as $dir (awk has an analogous FS variable).
$fl contains the name of the file you want to execute, but you overwrite its value with the contents of $dir.
Fixed:
#!/bin/sh
clear
read -p "which file you want to execute" file
echo
IFS=:
for dir in $PATH ; do
    if [ -x "$dir/$file" ]
    then
        echo "executing $dir/$file"
        exec "$dir/$file"
    fi
done
echo "Problem"
You could also be lazy and let a subshell handle it.
PATH="whatever" bash -c 'command -v my_command' >/dev/null
if [ $? -ne 0 ]; then
    # Problem, could not be found.
else
    # No problem
fi
There is no need to over-complicate things.
command(1) is a builtin command that allows you to check if a command exists.
The PATH value contains all the directories in which executable files can be run without explicit qualification. So you can just call the command directly.
#!/bin/bash
clear
# r for raw input, e to use readline, add a space for clarity
read -rep "Which file you want to execute? " fl || exit 1
echo ""
"$fl" || { echo "Problem" ; exit 1 ; }
I quote the name as it could have spaces.
To test if the command exists before execution, use type -p:
#!/bin/bash
clear
# r for raw input, e to use readline, add a space for clarity
read -rep "Which file you want to execute? " fl || exit 1
echo ""
type -p "$fl" >/dev/null || exit 1
"$fl" || { echo "Problem" ; exit 1 ; }

Bash Programming Passing Argument

I am currently learning bash programming and don't really understand why passing arguments is not working for me.
I have a script like this:
#!/bin/bash
# the following environment variables must be set before running this script
#   SIM_DIR         name of directory containing armsim
#   TEST_DIR        name of the directory containing this script and the expected outputs
#   LOG_DIR         name of the directory that your output is written to by the run_test2 script
#   ARMSIM_VERBOSE  set to "-v" for verbose logging or leave unset

# First check the environment variables are set
giveup=0
if [[ ${#SIM_DIR} -eq 0 || ${#TEST_DIR} -eq 0 || ${#LOG_DIR} -eq 0 ]] ; then
    echo One or more of the following environment variables must be set:
    echo SIM_DIR, TEST_DIR, LOG_DIR
    giveup=1
fi

# Now check the verbose flag
if [[ ${#ARMSIM_VERBOSE} != 0 && "x${ARMSIM_VERBOSE}" != "x-v" ]] ; then
    echo ARMSIM_VERBOSE must be unset, empty or set to -v
    giveup=1
fi

# Stop if environment is not set up
if [ ${giveup} -eq 1 ] ; then
    exit 0
fi

cd ${TEST_DIR}
for i in test2-*.sh; do
    echo "**** Running test ${i%.sh} *****"
    ./$i > ${LOG_DIR}/${i%.sh}.log
done
When I run the .sh file and pass in three example arguments as below:
$ ./run_test2 SIM_DIR TEST_DIR LOG_DIR
it still shows: One or more of the following environment variables must be set:
SIM_DIR, TEST_DIR, LOG_DIR
Can anyone guide me on this? Thank you.
That's not how it's intended to work. The environment variables must be set beforehand, either in the script or in the terminal, like this:
export SIM_DIR=/home/someone/simulations
export TEST_DIR=/home/someone/tests
export LOG_DIR=/home/someone/logs
./run_test2
If you use these variables frequently, you might want to export them in ~/.bashrc. The syntax is identical to the exports in the above example.
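Alternatively (a sketch reusing the same placeholder paths), you can set the variables just for a single invocation by prefixing them on the command line:
SIM_DIR=/home/someone/simulations TEST_DIR=/home/someone/tests LOG_DIR=/home/someone/logs ./run_test2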
Environment variables aren't really arguments in the sense I understand from your question/example. It sounds to me like you want to pass arguments to a function/script. If you do that, you can find your arguments in $1, $2, and so on (beyond $9 you need braces, e.g. ${10}); the number of arguments is stored in $#.
Example function that expects two arguments:
my_func() {
    if [ $# -ne 2 ]; then
        printf "You need to give 2 arguments\n"
        return
    fi

    printf "Your first argument: %s\n" "$1"
    printf "Your second argument: %s\n" "$2"
}

# Call the function like this
my_func arg1 arg2
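Calling it produces output along these lines (illustrative):
$ my_func only_one
You need to give 2 arguments
$ my_func one two
Your first argument: one
Your second argument: two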
