In bash I am able to write a script that contains something like this:
{ time {
#series of commands
echo "something"
echo "another command"
echo "blah blah blah"
} } 2> $LOGFILE
In zsh the equivalent code does not work, and I cannot figure out how to make it work for me. This code works, but I don't know how to get it to wrap multiple commands:
{ time echo "something" } 2>&1
I know I can create a new script and put the commands in there then time the execution properly, but is there a way to do it either using functions or a similar method to the bash above?
Try the following instead:
{ time ( echo hello ; sleep 10s; echo hola ; ) } 2>&1
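Applied to the snippet from the question, keeping its $LOGFILE redirection, that same form looks like this:
{ time (
echo "something"
echo "another command"
echo "blah blah blah"
) } 2> $LOGFILE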
If you want to profile your code you have a few alternatives:
Time subshell execution like:
time ( commands ... )
Use REPORTTIME to check for slow commands:
export REPORTTIME=3 # display commands with execution time >= 3 seconds
setopt xtrace, as explained here
The zprof module (a minimal sketch follows this list)
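For the zprof option, a minimal sketch might look like this (zprof reports per-function timings, so put the code you care about in functions):
zmodload zsh/zprof # load the profiler before running the code of interest
# ... run the functions you want to profile ...
zprof # print the per-function profiling report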
Try replacing { with ( ?
I think this should help.
You can also use the times POSIX shell builtin in conjunction with functions.
It will report the user and system time used by the shell and its children. See
http://pubs.opengroup.org/onlinepubs/009695399/utilities/times.html
Example:
somefunc() {
# code you want to time here
times
}
The reason for using a shell function is that it creates a new shell context, at the start of which times is all zeros (try it). Otherwise the result contains the contribution of the current shell as well. If that is what you want, forget about the function and put times last in your script.
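A runnable sketch of this pattern (the busy loop is just stand-in work, and the count is arbitrary):
#!/bin/sh
somefunc() {
# stand-in for the code you want to time
i=0
while [ "$i" -lt 100000 ]; do i=$((i+1)); done
times
}
somefunc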
I'm trying to learn how to write some basic functions in Ubuntu, and I've found that some of them work, and some do not, and I can't figure out why.
Specifically, the following function addseq2.sh will work when I source it, but when I just try to run it with bash addseq2.sh it doesn't work. When I check with $? I get 0: command not found. Does anyone have an idea why this might be the case? Thanks for any suggestions!
Here's the code for addseq2.sh:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
local sum=0
for element in "$@"
do
let sum=sum+$element
done
echo $sum
}
Thanks everyone for all the useful advice and help!
To expand on my original question, I have two simple functions already written. The first one, hello.sh looks like this:
#!/usr/bin/env bash
# File: hello.sh
function hello {
echo "Hello"
}
hello
hello
hello
When I call this script, without having done anything else, I type:
$ bash hello.sh
Which seems to work fine. After I source it with $ source hello.sh, I'm then able to just type hello and it also runs as expected.
So what has been driving me crazy is the first function I mentioned here, addseq2.sh. If I try to repeat the same steps, calling it with $ bash addseq2.sh 1 2 3, I don't see any result. I can see, after checking as you suggested with $ echo $?, that I get a 0 and it executed correctly, but nothing prints to the screen.
After I source it with $ source addseq2.sh, I can call it just by typing $ addseq2 1 2 3 and it returns 6 as expected.
I don't understand why the two functions are behaving differently.
When you do bash foo.sh, it spawns a new instance of bash, which then reads and executes every command in foo.sh.
In the case of hello.sh, the commands are:
function hello {
echo "Hello"
}
This command has no visible effects, but it defines a function named hello.
hello
hello
hello
These commands call the hello function three times, each printing Hello to stdout.
Upon reaching the end of the script, bash exits with a status of 0. The hello function is gone (it was only defined within the bash process that just stopped running).
In the case of addseq2.sh, the commands are:
function addseq2 {
local sum=0
for element in "$@"
do
let sum=sum+$element
done
echo $sum
}
This command has no visible effects, but it defines a function named addseq2.
Upon reaching the end of the script, bash exits with a status of 0. The addseq2 function is gone (it was only defined within the bash process that just stopped running).
That's why bash addseq2.sh does nothing: It simply defines (and immediately forgets) a function without ever calling it.
The source command is different. It tells the currently running shell to execute commands from a file as if you had typed them on the command line. The commands themselves still execute as before, but now the functions persist because the bash process they were defined in is still alive.
If you want bash addseq2.sh 1 2 3 to automatically call the addseq2 function and pass it the list of command line arguments, you have to say so explicitly: Add
addseq2 "$#"
at the end of addseq2.sh.
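With that line in place, running the script directly now prints the sum:
$ bash addseq2.sh 1 2 3
6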
When I check with $? I get a 0: command not found
This is because of the way you are checking it, for example:
(the leading $ is the convention for showing the command-line prompt)
$ $?
-bash: 0: command not found
Instead you could do this:
$ echo $?
0
By convention, 0 indicates success. A better way to test in a script is something like this:
if bash addseq2.sh
then
echo 'script worked'
else
# Redirect error message to stderr
echo 'script failed' >&2
fi
Now, why might your script not "work" even though it returned 0? You have a function, but you are not calling it. To your code, I appended a call:
#!/usr/bin/env bash
# File: addseq2.sh
function addseq2 {
local sum=0
for element in "$@"
do
let sum=sum+$element
done
echo $sum
}
addseq2 1 2 3 4 # <<<<<<<
and I got:
10
By the way, an alternative way of saying:
let sum=sum+$element
is:
sum=$((sum + element))
Sorry I cannot give a clear title for what's happening, but here is the simplified problem code.
#!/bin/bash
# get the absolute path of .conf directory
get_conf_dir() {
local path=$(some_command) || { echo "please install some_command first."; exit 100; }
echo "$path"
}
# process the configuration
read_conf() {
local conf_path="$(get_conf_dir)/foo.conf"
[ -r "$conf_path" ] || { echo "conf file not found"; exit 200; }
# more code ...
}
read_conf
So basically, what I am trying to do here is read a simple configuration file in a bash script, and I am having some trouble with error handling.
some_command is a command that comes from a 3rd party package (e.g. greadlink from coreutils), required to obtain the path.
When running the code above, I expect it to output "please install some_command first", because that's where the FIRST error occurs, but it actually always prints "conf file not found".
I am very confused by this behavior; I assume bash intends to handle things this way, but I don't know why. And most importantly, how do I fix it?
Any idea would be greatly appreciated.
Do you see your please install some_command first message anywhere? Is it in $conf_path from the local conf_path="$(get_conf_dir)/foo.conf" line? Do you have a $conf_path value of please install some_command first/foo.conf? Which then fails the -r test?
No, you don't. (But feel free to echo the value of $conf_path in that exit 200 block to confirm this fact.) (Also, error messages should, in general, be sent to standard error and not standard output. So they should be echo "..." >&2. That way they won't be caught by the command substitution at all.)
The reason you don't is because that exit 100 block is never happening.
You can see this with set -x at the top of your script also. Go try it.
See what I mean?
The reason it isn't happening is that the failure return of some_command is being swallowed by the local path=$(some_command) assignment statement.
Try running this command:
f() { local a=$(false); echo "Returned: $?"; }; f
Do you expect to see Returned: 1? You might, but you won't.
What you will see is Returned: 0.
Now try either of these versions:
f() { a=$(false); echo "Returned: $?"; }; f
f() { local a; a=$(false); echo "Returned: $?"; }; f
Get the output you expected in the first place?
Right. local and export and declare and typeset are statements on their own. They have their own return values. They ignore (and replace) the return value of the commands that execute in their contexts.
The solution to your problem is to split the local path declaration and the path=$(some_command) assignment into two separate statements.
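Applied to the get_conf_dir function from the question (with the error message also sent to standard error, per the note above), the split looks like this:
get_conf_dir() {
local path
path=$(some_command) || { echo "please install some_command first." >&2; exit 100; }
echo "$path"
}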
http://www.shellcheck.net/ catches this (and many other common errors). You should make it your friend.
In addition to the above (if you've managed to follow along this far), even with the changes mentioned so far, your exit 100 won't exit the main script, since it only exits the sub-shell spawned by the command substitution in the assignment.
If you want that exit 100 to exit your script, then you either need to notice the failure and re-exit with it (check for get_conf_dir failure after the conf_path assignment and exit with the previous exit code), or drop the get_conf_dir function entirely and do that work inline in read_conf.
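A sketch of the first option, checking the assignment's exit status and re-exiting with it:
read_conf() {
local conf_path
conf_path=$(get_conf_dir) || exit $? # propagate the sub-shell's exit code (100)
conf_path="$conf_path/foo.conf"
[ -r "$conf_path" ] || { echo "conf file not found" >&2; exit 200; }
# more code ...
}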
I'm looking for a way to hook in a custom bash completion function. Problem is, I want this completion function not just for a specific command, but for all commands.
Is this even possible? Having looked around for a while, I couldn't find any resources online.
To reduce the problem to the most trivial case: would it be possible to always have tab-completion for the string 'foo'?
Meaning echo f<tab> would expand into echo foo, and ls fo<tab> would expand into ls foo.
For context: I'm trying to implement something similar to http://blog.plenz.com/2012-01/zsh-complete-words-from-tmux-pane.html in bash, but I'm starting to fear it's not possible.
You can do that with the -D option of the complete command:
suggest_hello()
{
COMPREPLY=( hello )
return 0
}
complete -D -F suggest_hello
Now whenever I type echo h<Tab>, I get echo hello.
$ help complete
complete: ...
...
-D apply the completions and actions as the default for commands
without any specific completion defined
...
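Building on this, a rough (untested) sketch of the tmux-pane idea from the question might look like the following; _tmux_pane_words is a made-up name, and tmux capture-pane -p prints the current pane's contents to stdout:
_tmux_pane_words()
{
local cur=${COMP_WORDS[COMP_CWORD]}
# offer every unique word currently visible in the pane
local words=$(tmux capture-pane -p 2>/dev/null | tr -s '[:space:]' '\n' | sort -u)
COMPREPLY=( $(compgen -W "$words" -- "$cur") )
return 0
}
complete -D -F _tmux_pane_words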
Is there a way to use the time reserved word in zsh to time multiple commands, without starting a subshell?
I know that this works:
{ time (
sleep 5
sleep 3
PROMPT='foobar> '
) }
However the parentheses mean that a subshell is created, and variables assigned inside it don't persist in the parent shell.
I know I can capture the times before and after, like
start=$(time)
# do something
end=$(time)
echo "$end - $start" | bc
Though for ad hoc timing this is a little cumbersome.
No, time can only work on a different process. So, it won't work with { ... } or with a builtin, like:
time { ls }
time echo
Note that your method of capturing the time output won't work if there are already children (as their times while running the commands will also be taken into account). Ditto if you have traps and the corresponding signals occur.
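One more thing that may be worth testing, though I'm not certain every zsh version supports it: zsh can reportedly time a function call directly, and since the function body runs in the current shell, the variable assignment would persist:
f() {
sleep 5
sleep 3
PROMPT='foobar> '
}
time f # if your zsh reports timing for functions, PROMPT survives afterwards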
If I have a bash script that is purely made of functions, how do I get things like prompts to show up in the terminal? For example, consider the following:
prompt() {
read -p "This is an example prompt. [Y/n]"
}
main() {
prompt
}
main "$#"
How do I get that prompt message to show up in the terminal? When I just call prompt() from main(), it blackboxes the whole prompt() function. Do I have to return something from prompt()? What if I want to echo a bunch of messages after the read in prompt()? How do I get those to show up in the terminal?
I think I'm missing a basic programming concept here.
If you run a script with that code, main will not be executed. An alternative is to leave prompt() as a function, and move the functionality of main() outside of the function.
prompt() {
read -p "This is an example prompt. [Y/n]"
}
prompt
If you would like to keep main as a function, then you will have to source the file, and then call main.
$ source file.sh
$ main
where file.sh is the file with your code, and $ denotes the terminal prompt.
To answer the last question first: yes, shell scripts are executed linearly (functions must appear before they are called and only run when called), and main is not automatically called.
The default variable for a read is $REPLY unless you supply your own variable(s), so:
echo $REPLY
or
read -p "Enter data?" MYVAR; echo $MYVAR
or
read -p "Enter data?" FIRSTVAR RESTOFSENTENCE; echo $FIRSTVAR:$RESTOFSENTENCE
If you want to do more stuff after your "main", you can just add those commands, in the sequence you want them to occur, either inside main (provided that main is called) or after the call to main.
If you want them to just be functions, you can save them to a file called myfuncs.sh and then, from a shell, source that file and run a function:
. myfuncs.sh && main
How are you executing the script, and what's calling main()?
Bash scripts aren't like C programs and don't require a main() function. Bash will "remember" any functions it sees, but you actually need to call them directly from the script.
Try this:
# Start of script
prompt() {
read -p "This is an example prompt. [Y/n]"
}
prompt
# End of script
bash$ chmod +x script_name
bash$ ./script_name
You type prompt instead of prompt().
better is...
function prompt() {
read -p "$* [Y/n]"
}
then
prompt "This is an example prompt."
My understanding of your question is this:
You have a set of functions (of bash shell scripting) and you want them to return values to the calling points.
Example in C++:
std::string prompt() {
std::string value;
std::cout << "Some message:";
std::cin >> value;
return value;
}
This kind of return value is impossible in bash (or other Bourne-descendant shells) scripting. Functions only have an exit status (similar to commands); for example, an implementation of the false command:
myfalse() { return 1; }
Thus return only sets a zero or nonzero exit status.
To use a value from a function, the function needs to put the value to standard output and that output captured via command substitution. For example:
toupper() {
val="$(echo $1| tr 'a-z' 'A-Z')"
echo $val
}
ucasestr="$(toupper "Hello, World")"
echo $ucasestr
HELLO, WORLD