If I have a bash script that is purely made of functions, how do I get things like prompts to show up in the terminal? For example, consider the following:
prompt() {
  read -p "This is an example prompt. [Y/n]"
}
main() {
  prompt
}
main "$@"
How do I get that prompt message to show up in the terminal? When I just call prompt() from main(), it blackboxes the whole prompt() function. Do I have to return something from prompt()? What if I want to echo a bunch of messages after the read in prompt()? How do I get those to show up in the terminal?
I think I'm missing a basic programming concept here.
If your script only defines these functions and nothing ever calls main, then main will not be executed. An alternative is to leave prompt() as a function, and move the functionality of main() outside of any function:
prompt() {
  read -p "This is an example prompt. [Y/n]"
}
prompt
If you would like to keep main as a function, then you will have to source the file, and then call main.
$ source file.sh
$ main
where file.sh is the file with your code, and $ denotes the terminal prompt.
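If you would rather keep the main-style structure when running the script directly, the usual idiom is to call main at the bottom of the file and forward the script's arguments to it. A minimal sketch (the "You answered" echo is added here just for illustration):

```shell
#!/usr/bin/env bash

prompt() {
  # read prints its -p prompt to stderr and waits for a line of input
  read -p "This is an example prompt. [Y/n] "
}

main() {
  prompt
  echo "You answered: $REPLY"
}

# The crucial line: actually call main, forwarding the script's arguments.
main "$@"
```

Run as `bash script.sh` (or `echo Y | bash script.sh` non-interactively), the prompt appears in the terminal because main is now actually invoked.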
Answer to your last question first: yes, shell scripts are executed linearly (functions must appear before they are called and only run when called), and main is not automatically called.
The default variable for a read is $REPLY unless you supply your own variable(s), so:
echo $REPLY
or
read -p "Enter data?" MYVAR; echo $MYVAR
or
read -p "Enter data?" FIRSTVAR RESTOFSENTENCE; echo $FIRSTVAR:$RESTOFSENTENCE
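Putting those pieces together, a common pattern (a sketch, not from the original post; the function name confirm is made up) is to read into $REPLY and branch on it:

```shell
confirm() {
  read -p "${1:-Are you sure?} [Y/n] "
  case "$REPLY" in
    [Nn]*) return 1 ;;   # anything starting with n/N counts as "no"
    *)     return 0 ;;   # anything else (including just Enter) counts as "yes"
  esac
}

# usage: feed an answer on stdin; 2>/dev/null hides the prompt text
if echo "y" | confirm "Proceed?" 2>/dev/null; then
  echo "confirmed"    # prints: confirmed
else
  echo "aborted"
fi
```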
If you want to do more stuff after your "main", you can just add those commands, in the sequence you want them to occur, either inside main (provided that main is called) or after the call to main.
If you want them to remain just functions, you can save them to a file called myfuncs.sh and then, from a shell, source that file and run a function:
. myfuncs.sh && main
How are you executing the script, and what's calling main()?
Bash scripts aren't like C programs and don't require a main() function. Bash will "remember" any functions it sees, but you actually need to call them directly from the script.
Try this:
# Start of script
prompt() {
  read -p "This is an example prompt. [Y/n]"
}
prompt
# End of script
bash$ chmod +x script_name
bash$ ./script_name
Note that you call the function by typing prompt, not prompt(). Better still:
function prompt() {
  read -p "$* [Y/n]"
}
then
prompt "This is an example prompt."
My understanding of your question is this:
You have a set of bash shell functions, and you want them to return values to the calling points.
Example in C++:
#include <iostream>
#include <string>

std::string prompt() {
    std::string value;
    std::cout << "Some message: ";
    std::cin >> value;
    return value;
}
This kind of return value is impossible in bash (or other Bourne-descendant shells). Functions only have an exit status (similar to commands); for example, an implementation of the false command:
myfalse() { return 1; }
Thus return only sets a zero or nonzero exit status.
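That exit status is still useful for signalling success or failure, because if and && / || test it directly. A minimal sketch (is_even is a made-up example function):

```shell
is_even() {
  # Succeed (status 0) for even numbers, fail (status 1) otherwise.
  [ $(( $1 % 2 )) -eq 0 ]
}

if is_even 4; then
  echo "4 is even"
fi
is_even 3 || echo "3 is odd"
```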
To use a value from a function, the function needs to write the value to standard output and have that output captured via command substitution. For example:
toupper() {
  val="$(echo "$1" | tr 'a-z' 'A-Z')"
  echo "$val"
}
ucasestr="$(toupper "Hello, World")"
echo $ucasestr
HELLO, WORLD
My goal is to take a variable that is in my Tcl file and pass it to my shell script so the script can use it. Right now I am able to echo the result of my variable, but for some reason I cannot assign that result to a variable in my bash script.
Here is an example of my TCL script:
set file_status "C"
Here is what I have for my bash script:
echo 'source example.tcl; foreach _v {file_status } {puts "\"[set $_v]\""}' | tclsh
file_status='source example.tcl; foreach _v {file_status } {puts "\"[set $_v]\""}' | tclsh
echo $file_status
So the first echo statement above works but after I set the file_status variable for some reason the second echo statement doesn't work.
Doing it in general requires very complex code; Tcl variables are capable of holding arbitrary data (including full binary data) and don't have length restrictions, whereas Shell is a lot more restricted. But it's possible to do something for the common cases where the values are plain text without NULs. (C would be an excellent example of such a value.)
When passing to a subprocess, by far the easiest way is to use an environment variable:
set the_status "C"
set env(file_status) $the_status
exec bash -c {echo "file status is $file_status"} >@stdout
That has length restrictions, but it's extremely easy.
If you're sending the variable to some other process, your best bet is to write a little script (here, I'm sending it to stdout):
puts [format {file_status='%s'} [string map {' {'\''}} $the_status]]
That produces a script that just sets the variable. (string map turns each single quote into a sequence that is safe inside single quotes; nothing else needs conversion like that.) You run the generated script in the shell with eval or source/. (depending on whether it is in a string or in a file).
Very large data should be moved around inside a file or via a pipe. It needs much more thought in general.
I would output shell syntax from Tcl and source it into your running shell:
Given
$ echo 'source example.tcl; foreach var {file_status} {puts "$var=\"[set $var]\""}' | tclsh
file_status="C"
then
source <(echo 'source example.tcl; foreach var {file_status} {puts "$var=\"[set $var]\""}' | tclsh)
declare -p file_status
outputs
declare -- file_status="C"
Using /bin/sh, you could:
var_file=$(mktemp)
echo ... | tclsh > "$var_file"
. "$var_file"
rm "$var_file"
I have two shell scripts:
test.sh
function func() {
  echo $1
  exit 1
}
run.sh
source ./test.sh
func "Hello"
exitCode=$?
echo "Output: ${exitCode}"
Result:
Hello
The problem I'm facing is that when the function func returns 1, my run.sh script breaks and nothing after the call gets executed. So, is there any way I can effectively capture the exit code without breaking run.sh? I know there is a way to invoke a subshell using ( func "Hello" ), but I want to do it without invoking a subshell, since I am using flock. I looked for a reference example but couldn't find anything close to it.
Two ideas that are "pushing the boundary". Use with care, as future changes to the sourced script(s) might break the logic. Recommended only if there is a way to monitor that the script is working OK, e.g. when executing interactively.
I would not use this kind of solutions in any production/critical system.
Option 1: Alias 'exit' to 'return' when sourcing the file.
Assuming that ALL 'exit' statements in test.sh are to be replaced with 'return', and assuming that you are willing to take the risk of future changes to test.sh, consider using an alias before sourcing. Note that in a non-interactive shell, aliases only expand if expand_aliases is enabled:
shopt -s expand_aliases
alias exit=return
source test.sh
unalias exit
func "foo"
Option 2: automatically update the function that is using 'exit' to use return.
source test.sh
body=$(type func | tail -n +2 | sed -e 's/exit/return/g')
eval "$body"
I'm trying to learn how to write some basic functions in Ubuntu, and I've found that some of them work, and some do not, and I can't figure out why.
Specifically, the following function addseq2.sh will work when I source it, but when I just try to run it with bash addseq2.sh it doesn't work. When I check with $?, I get 0: command not found. Does anyone have an idea why this might be the case? Thanks for any suggestions!
Here's the code for addseq2.sh:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
  local sum=0
  for element in "$@"
  do
    let sum=sum+$element
  done
  echo $sum
}
Thanks everyone for all the useful advice and help!
To expand on my original question, I have two simple functions already written. The first one, hello.sh looks like this:
#!/usr/bin/env bash
# File: hello.sh
function hello {
echo "Hello"
}
hello
hello
hello
When I call this function, without having done anything else, I would type:
$ bash hello.sh
Which seems to work fine. After I source it with $ source hello.sh, I'm then able to just type hello and it also runs as expected.
So what has been driving me crazy is the first function I mentioned here, addseq2.sh. If I try to repeat the same steps, calling it with $ bash addseq2.sh 1 2 3, I don't see any result. After checking as you suggested with $ echo $?, I can see that I get a 0 and it executed correctly, but nothing prints to the screen.
After I source it with $ source addseq2.sh, then I call it just by typing $ addseq2 1 2 3 it returns 6 as expected.
I don't understand why the two functions are behaving differently.
When you do bash foo.sh, it spawns a new instance of bash, which then reads and executes every command in foo.sh.
In the case of hello.sh, the commands are:
function hello {
echo "Hello"
}
This command has no visible effects, but it defines a function named hello.
hello
hello
hello
These commands call the hello function three times, each printing Hello to stdout.
Upon reaching the end of the script, bash exits with a status of 0. The hello function is gone (it was only defined within the bash process that just stopped running).
In the case of addseq2.sh, the commands are:
function addseq2 {
  local sum=0
  for element in "$@"
  do
    let sum=sum+$element
  done
  echo $sum
}
This command has no visible effects, but it defines a function named addseq2.
Upon reaching the end of the script, bash exits with a status of 0. The addseq2 function is gone (it was only defined within the bash process that just stopped running).
That's why bash addseq2.sh does nothing: It simply defines (and immediately forgets) a function without ever calling it.
The source command is different. It tells the currently running shell to execute commands from a file as if you had typed them on the command line. The commands themselves still execute as before, but now the functions persist because the bash process they were defined in is still alive.
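You can demonstrate the difference in one sitting; this sketch (using a throwaway temp file and a made-up greet function, and assuming your interactive shell is bash) shows that bash file.sh leaves no function behind while source file.sh does:

```shell
# A script that only *defines* a function (the file itself is just an example).
defs=$(mktemp)
cat > "$defs" <<'EOF'
greet() { echo "hi"; }
EOF

bash "$defs"                    # child bash defines greet, then exits with it
type greet >/dev/null 2>&1 || echo "greet is not defined here"

source "$defs"                  # same commands, but run in *this* shell
greet                           # prints: hi
rm -f "$defs"
```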
If you want bash addseq2.sh 1 2 3 to automatically call the addseq2 function and pass it the list of command line arguments, you have to say so explicitly: Add
addseq2 "$@"
at the end of addseq2.sh.
When I check with $? I get a 0: command not found
This is because of the way you are checking it, for example:
(the leading $ is the convention for showing the command-line prompt)
$ $?
-bash: 0: command not found
Instead you could do this:
$ echo $?
0
By convention, 0 indicates success. A better way to test in a script is something like this:
if ./addseq2.sh
then
echo 'script worked'
else
# Redirect error message to stderr
echo 'script failed' >&2
fi
Now, why might your script not "work" even though it returned 0? You have a function but you are not calling it. With your code I appended a call:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
  local sum=0
  for element in "$@"
  do
    let sum=sum+$element
  done
  echo $sum
}
addseq2 1 2 3 4 # <<<<<<<
and I got:
10
By the way, an alternative way of saying:
let sum=sum+$element
is:
sum=$((sum + element))
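As a quick illustration of that arithmetic form (a sketch, not part of the original script):

```shell
sum=0
for element in 3 4 5
do
  sum=$((sum + element))   # POSIX arithmetic expansion; no let needed
done
echo "$sum"                # prints 12
```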
In bash I am able to write a script that contains something like this:
{ time {
#series of commands
echo "something"
echo "another command"
echo "blah blah blah"
} } 2> $LOGFILE
In ZSH the equivalent code does not work and I can not figure out how to make it work for me. This code works but I don't exactly know how to get it to wrap multiple commands.
{ time echo "something" } 2>&1
I know I can create a new script and put the commands in there then time the execution properly, but is there a way to do it either using functions or a similar method to the bash above?
Try the following instead:
{ time ( echo hello ; sleep 10s; echo hola ; ) } 2>&1
If you want to profile your code you have a few alternatives:
Time subshell execution like:
time ( commands ... )
Use REPORTTIME to check for slow commands:
export REPORTTIME=3 # display commands with execution time >= 3 seconds
setopt xtrace as explained here
The zprof module
Try replacing { with (.
I think this should help
You can also use the times POSIX shell builtin in conjunction with functions.
It will report the user and system time used by the shell and its children. See
http://pubs.opengroup.org/onlinepubs/009695399/utilities/times.html
Example:
somefunc() {
  # code you want to time here
  times
}
The reason for using a shell function is that it creates a new shell context, at the start of which times is all zeros (try it). Otherwise the result contains the contribution of the current shell as well. If that is what you want, forget about the function and put times last in your script.
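A minimal sketch of that pattern (the spin function and its busy loop are made up just to have something to measure):

```shell
spin() {
  # burn a little CPU so there is something to measure
  i=0
  while [ "$i" -lt 100000 ]; do
    i=$((i + 1))
  done
  times   # prints user/system time for the shell and its children
}
spin
```

The output is two lines of user and system times in the form `0m0.05s 0m0.01s`; exact values depend on your machine.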
In my bash script, I execute some commands as another user. I want to call a bash function using su.
my_function()
{
  do_something
}
su username -c "my_function"
The above script doesn't work. Of course, my_function is not defined inside su. One idea I have is to put the function into a separate file. Do you have a better idea that avoids making another file?
You can export the function to make it available to the subshell:
export -f my_function
su username -c "my_function"
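You can verify the mechanism without actually switching users, since export -f makes the function visible to any child bash process; this sketch substitutes bash -c for su:

```shell
my_function() {
  echo "hello from an exported function"
}

# export -f is bash-specific: it passes the function definition
# to child bash processes through the environment.
export -f my_function

bash -c 'my_function'    # prints: hello from an exported function
```

This is also why the su approach only works when the target user's shell (or the command su runs) is bash.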
You could enable 'sudo' in your system, and use that instead.
You must have the function in the same scope where you use it. So either place the function inside the quotes, or put the function to a separate script, which you then run with su -c.
Another way is to use a case statement and pass a parameter to the executed script.
Example could be:
First make a file called "script.sh".
Then insert this code in it:
#!/bin/sh
my_function() {
  echo "this is my function."
}
my_second_function() {
  echo "this is my second function."
}
case "$1" in
  'do_my_function')
    my_function
    ;;
  'do_my_second_function')
    my_second_function
    ;;
  *) # default: execute
    my_function
esac
After adding the above code run these commands to see it in action:
root@shell:/# chmod +x script.sh    # make the file executable
root@shell:/# ./script.sh           # run without parameters, triggering the default action
this is my function.
root@shell:/# ./script.sh do_my_second_function    # run with a parameter
this is my second function.
root@shell:/#
To make this work as you required you'll just need to run
su username -c '/path/to/script.sh do_my_second_function'
and everything should be working fine.
Hope this helps :)