I need a global counter and a function which returns numbers one by one. For example, I want this script to echo 6, 7, 8 (but it echoes 6, 6, 6):
#!/bin/bash
port_counter=5
function get_free_port {
port_counter=$((port_counter + 1))
echo ${port_counter}
}
function foo {
echo $(get_free_port)
}
foo
foo
(foo;)&
How can I obtain 6,7,8?
UPDATE:
OK, after chepner's answer I need to clarify my question a little.
If I need to capture the output of get_free_port into a variable inside foo, I can't use this approach, can I?
So I can't write
function foo {
variable=get_free_port # variable=$(get_free_port) was ok, but returns 6,6,6
echo ${variable}
}
Also, foo &-style usage is hardly desirable.
You can't modify the parent shell's variables from a subprocess (which is what $(...) runs in). You don't need one in this case:
function foo {
get_free_port
}
However, for the same reason, you cannot call foo from a subshell or background job, either. Neither foo &, (foo), nor (foo)& will update the value of port_counter in the current shell.
If you really need to call get_free_port and capture its output, you'll need to use a temporary file. For example:
foo () {
get_free_port > some_temp_file
cat some_temp_file
}
If this is not suitable, you may need to rethink your script's design.
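To tie this back to the update: below is a minimal, self-contained sketch (the file name /tmp/port.$$ is just an example) where foo still stores the port in a variable. get_free_port runs in the current shell, so the counter keeps advancing, and the only command substitution merely reads the temporary file:
#!/bin/bash
port_counter=5
get_free_port() {
    port_counter=$((port_counter + 1))
    echo "${port_counter}"
}
foo() {
    local variable
    get_free_port > "/tmp/port.$$"   # runs in the current shell, so the counter survives
    variable=$(< "/tmp/port.$$")     # the subshell here only reads the file
    echo "${variable}"
}
foo   # 6
foo   # 7
foo   # 8
rm -f "/tmp/port.$$"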
The below code would give you the desired behavior:
#!/bin/bash
port_counter=5
function get_free_port {
port_counter=$(( port_counter + 1 ))
echo ${port_counter}
}
function foo {
get_free_port
# $(get_free_port) would spawn a subshell; the increment made to port_counter
# there would not be visible in the parent shell.
}
foo #runs fine
foo #runs fine
foo # (foo;)& would again spawn a subshell, so the increment to port_counter would be lost in the parent shell.
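If calls from a subshell or background job, like the (foo;)& above, really are needed, one possible workaround is to keep the counter in a file rather than in a shell variable, since a subshell can read and rewrite a file even though it cannot write back to the parent's variables. A sketch (the file name is just an example, and there is no locking, so it is not safe for concurrent callers):
#!/bin/bash
counter_file="/tmp/port_counter.$$"   # example path
echo 5 > "$counter_file"
get_free_port() {
    local port
    port=$(( $(< "$counter_file") + 1 ))
    echo "$port" > "$counter_file"
    echo "$port"
}
foo() {
    get_free_port
}
foo        # 6
foo        # 7
(foo;)&    # 8 -- works because the state lives in the file, not in the shell
wait
rm -f "$counter_file"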
I hope that I can do something like this, and the output would be "hello"
#!/bin/bash
foo="hello"
dummy() {
local local_foo=`echo $foo`
echo $local_foo
}
foo=''
dummy
What I mean is that I would like to capture the value of some global variable at definition time. The file is usually loaded via source blablabla.bash, and I would like the function it defines to capture the variable's value as it is at that moment.
The Sane Way
Functions are evaluated when they're run, not when they're defined. Since you want to capture a variable as it exists at definition time, you'll need a separate variable assigned at that time.
foo="hello"
# By convention, global variables prefixed by a function name and double underscore are for
# the exclusive use of that function.
readonly dummy__foo="$foo" # capture foo as of dummy definition time, and prevent changes
dummy() {
local local_foo=$dummy__foo # ...and refer to that captured copy
echo "$local_foo"
}
foo=""
dummy
The Insane Way
If you're willing to commit crimes against humanity, however, it is possible to do code generation to capture a value. For instance:
# usage: with_locals functionname k1=v1 [k2=v2 [...]]
with_locals() {
local func_name func_text assignments arg_q
func_name=$1; shift || return ## fail if out of arguments
(( $# )) || return ## noop if not given at least one assignment
func_text=$(declare -f "$func_name")
for arg; do
if [[ $arg = *=* ]]; then ## if we already look like an assignment, leave be
printf -v arg_q 'local %q; ' "$arg"
else ## otherwise, assume we're a bare name and run a lookup
printf -v arg_q 'local %q=%q; ' "$arg" "${!arg}"
fi
assignments+="$arg_q"
done
# suffix first instance of { in the function definition with our assignments
eval "${func_text/{/{ $assignments}"
}
...thereafter:
foo=hello
dummy() {
local local_foo="$foo"
echo "$local_foo"
}
with_locals dummy foo ## redefine dummy to always use the current value of "foo"
foo=''
dummy
Well, you can comment out or remove the foo='' line, and that will do it. The function dummy does not execute until you call it, which is after you've blanked out the foo value, so it makes sense that you would get a blank line echoed. Hope this helps.
There is no way to execute the code inside a function until that function is called by bash. The only alternative is to call some other function that in turn defines the function you want to call later; that is what a dynamic function definition is, and I don't believe that is what you want.
An alternative is to store the value of foo when the function is first called and then compare it on later calls, after the value has changed. Something hack-sh like this:
#!/bin/bash
foo="hello"
dummy() {
${global_foo+false} &&
global_foo="$foo" ||
echo "old_foo=$global_foo new_foo=$foo"
}
dummy
foo='new'
dummy
foo="a whole new foo"
dummy
Calling it will print:
$ ./script
old_foo=hello new_foo=new
old_foo=hello new_foo=a whole new foo
As I am not sure this addresses your real problem, just: hope this helps.
Inspired by @CharlesDuffy, I think using eval might solve some of the problems, and the example can be modified as follows:
#!/bin/bash
foo="hello"
eval "
dummy() {
local local_foo=$foo
echo \$local_foo
}
"
foo=''
dummy
Which will give the result 'hello' instead of nothing.
@CharlesDuffy pointed out that such a solution is quite dangerous:
local local_foo=$foo is dangerously buggy: If your foo value contains
an expansion such as $(rm -rf $HOME), it'll be executed
Using eval is good for performance but bad for security, and therefore I'd suggest @CharlesDuffy's answer.
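For completeness, a sketch of a safer variant of the eval approach: escaping the value with printf %q before splicing it into the eval text should keep a hostile foo value (such as $(rm -rf $HOME)) from ever being expanded:
#!/bin/bash
foo="hello"
printf -v foo_q '%q' "$foo"   # quote the current value so eval treats it as literal text
eval "
dummy() {
    local local_foo=$foo_q
    echo \"\$local_foo\"
}
"
foo=''
dummy   # prints: hello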
I have a script a.sh which has :
a() {
echo "123"
}
echo "dont"
Then I have other script b.sh which has :
b() {
echo "345"
}
All I want to do is use a in b, but when I source a.sh I don't want it to print whatever is in a() or the echo "dont" line.
I just want to source it for now.
So I did source a.sh in b.sh.
But it doesn't work.
The reason for sourcing is so that I can call any of the functions whenever I want to.
If I do . ./a.sh in b.sh, it prints everything in a.sh.
One approach which will work on any POSIX-compliant shell is this:
# put function definitions at the top
a() {
echo "123"
}
# divide them with a conditional return
[ -n "$FUNCTIONS_ONLY" ] && return
# put direct code to execute below
echo "dont"
...and in your other script:
FUNCTIONS_ONLY=1 . other.sh
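Put together with the file names from the question (and assuming a.sh has been given the conditional return shown above), b.sh might look like this sketch:
#!/bin/bash
# b.sh -- pull in a.sh for its functions only, then define and run b
FUNCTIONS_ONLY=1 . ./a.sh
b() {
    echo "345"
    a            # reuse a() from a.sh; prints "123"
}
b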
Make a library of common functions in a file called functionLib.sh like this:
#!/bin/sh
a(){
echo Inside a, with $1
}
b(){
echo Inside b, with $1
}
Then in script1, do this:
#!/bin/sh
. functionLib.sh # Source in the functions
a 42 # Use one
b 37 # Use another
and in another script, script2 re-use the functions:
#!/bin/sh
. functionLib.sh # Source in the functions
a 23 # Re-use one
b 24 # Re-use another
I have adopted a style in my shell scripts that allows me to design every script as a potential library, making it behave differently when it is sourced (with . .../path/script) and when it is executed directly. You can compare this to the python if __name__ == '__main__': trick.
I have not found a method that is portable across all Bourne shell descendants without explicitly referring to the script's name, but this is what I use:
a() {
echo a
}
b() {
echo b
}
(program=xyzzy
set -u -e
case $0 in
*${program}) : ;;
*) exit;;
esac
# main
a
b
)
The rules for this method are strictly:
Start the file with nothing but function definitions: no variable assignments or any other activity.
Then, at the very end, create a subshell ( ... ).
The first action inside the subshell tests whether the script is being sourced. If so, exit from the subshell; if not, run the main commands.
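If portability beyond bash is not a concern, a common bash-only sketch of the same idea compares BASH_SOURCE to $0: BASH_SOURCE[0] names this file even when it is sourced, while $0 names the caller.
#!/bin/bash
a() {
    echo a
}
b() {
    echo b
}
# Only reached when the script is executed directly, not when it is sourced.
if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
    a
    b
fi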
If I define a function in a file, say test1.sh:
#!/bin/bash
foo() {
echo "foo"
}
And in a second, test2.sh, I try to redefine foo:
#!/bin/bash
source /path/to/test1.sh
...
foo() {
...
echo "bar"
}
foo
Is there a way to change test2.sh to produce:
foo
bar
I know it is possible to do this with Bash builtins using command, but I want to know if it is possible to extend a user-defined function.
I don't know of a nice way of doing it (but I'd love to be proven wrong).
Here's an ugly way:
# test2.sh
# ..
eval 'foo() {
'"$(declare -f foo | tail -n+2)"'
echo bar
}'
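To see why this works: declare -f foo prints the function name on its first line, then the braces and body, so dropping that first line with tail -n+2 and wrapping it as above builds roughly the following redefinition, in which the original body survives as a nested brace group:
foo() {
    {
        echo "foo"
    }
    echo bar
}
foo   # prints "foo" then "bar"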
I'm not sure I see the need to do something like this. You can use functions inside of functions, so why reuse the name when you can just call the original sourced function in a newly created function like this:
AirBoxOmega:stack d$ cat source.file
#!/usr/local/bin/bash
foo() {
echo "foo"
}
AirBoxOmega:stack d$ cat subfoo.sh
#!/usr/local/bin/bash
source /Users/d/stack/source.file
sub_foo() {
foo
echo "bar"
}
sub_foo
AirBoxOmega:stack d$ ./subfoo.sh
foo
bar
Of course if you REALLY have your heart set on modifying it, you could source your function inside the new function, call it, and then do something else after, like this:
AirBoxOmega:stack d$ cat source.file
#!/usr/local/bin/bash
foo() {
echo "foo"
}
AirBoxOmega:stack d$ cat modify.sh
#!/usr/local/bin/bash
foo() {
source /Users/d/stack/source.file
foo
echo "bar"
}
foo
AirBoxOmega:stack d$ ./modify.sh
foo
bar
No, it's not possible: a new declaration simply overrides the previous instance of a function. But even without that capability, redefinition is still helpful when you want to disable a function without having to unset it, like:
foo() {
: ## Do nothing.
}
It's also helpful with lazy initializations:
foo() {
# Do initializations.
...
# New declaration.
if <something>; then
foo() {
....
}
else
foo() {
....
}
fi
# Call normally.
foo "$@"
}
And if you're brave and capable enough to use eval, you can even optimize your function so that later calls act on the condition without additional ifs.
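A concrete, hypothetical version of that lazy-initialization pattern (the platform check is just an example of a one-time decision):
#!/bin/bash
greet() {
    # One-time initialization: runs only on the first call.
    local os
    os=$(uname -s)
    # New declaration, chosen once based on the condition.
    if [ "$os" = Linux ]; then
        greet() { echo "hello from Linux: $*"; }
    else
        greet() { echo "hello from a non-Linux system: $*"; }
    fi
    # Call normally, forwarding the original arguments.
    greet "$@"
}
greet world    # first call initializes, then prints
greet again    # later calls go straight to the chosen definition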
Yes you can; see this page: https://mharrison.org/post/bashfunctionoverride/
save_function() {
local ORIG_FUNC=$(declare -f $1)
local NEWNAME_FUNC="$2${ORIG_FUNC#$1}"
eval "$NEWNAME_FUNC"
}
save_function foo old_foo
foo() {
initialization_code
old_foo
cleanup_code
}
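Applied to the test1.sh / test2.sh example from the question, a sketch of test2.sh using that helper would be:
#!/bin/bash
source /path/to/test1.sh
save_function() {
    local ORIG_FUNC=$(declare -f "$1")
    local NEWNAME_FUNC="$2${ORIG_FUNC#"$1"}"
    eval "$NEWNAME_FUNC"
}
save_function foo old_foo   # keep the original foo reachable as old_foo
foo() {
    old_foo          # prints "foo"
    echo "bar"
}
foo                  # prints "foo" then "bar"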
I'd like to get the function name from within the function, for logging purposes.
KornShell (ksh) function:
foo ()
{
echo "get_function_name some useful output"
}
Is there anything similar to $0, which returns the script name within scripts, but which instead provides a function's name?
If you define the function with the function keyword, then $0 is the function name:
$ function foo {
> echo "$0"
> }
$ foo
foo
(Tested in pdksh.)
[...] what are the main pros/cons of using keyword function?
Main pro is that "typeset myvar=abc" inside the function is now a local variable, with no possible side effects outside the function. This makes KSH noticeably safer for large shell scripts. Main con is, perhaps, the non-POSIX syntax.
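A small sketch of that scoping difference (this reflects ksh93 behavior; in the POSIX-style name() definition, typeset does not create a local variable):
#!/bin/ksh
myvar=global
function f1 {
    typeset myvar=local_to_f1    # function keyword: typeset is local
    echo "inside f1: $myvar"
}
f2() {
    typeset myvar=local_to_f2    # POSIX-style definition: not local in ksh
    echo "inside f2: $myvar"
}
f1; echo "after f1: $myvar"      # the global value is untouched
f2; echo "after f2: $myvar"      # the global value has been overwritten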
Use the ksh "function foo ..." form:
$ cat foo1
#!/bin/ksh
foo3() { echo "\$0=$0"; }
function foo2 { echo "\$0=$0"; }
foo2
foo3
$ ./foo1
$0=foo2
$0=./foo1
The function below seems to get its name in both Bash and ksh:
# ksh or bash
function foo {
local myname="${FUNCNAME[0]:-$0}"
echo "$myname"
}
# test
foo
# ...
I need to pass a function as a parameter in Bash. For example, the following code:
function x() {
echo "Hello world"
}
function around() {
echo "before"
eval $1
echo "after"
}
around x
Should output:
before
Hello world
after
I know eval is not correct in that context but that's just an example :)
Any idea?
If you don't need anything fancy like delaying the evaluation of the function name or its arguments, you don't need eval:
function x() { echo "Hello world"; }
function around() { echo before; $1; echo after; }
around x
does what you want. You can even pass the function and its arguments this way:
function x() { echo "x(): Passed $1 and $2"; }
function around() { echo before; "$#"; echo after; }
around x 1st 2nd
prints
before
x(): Passed 1st and 2nd
after
I don't think anyone quite answered the question. The author didn't ask whether he could echo strings in order; rather, he wants to know whether he can simulate function pointer behavior.
There are a couple of answers that are much like what I'd do, and I want to expand it with another example.
From the author:
function x() {
echo "Hello world"
}
function around() {
echo "before"
($1)          # <------ Only change
echo "after"
}
around x
To expand this, we will have function x echo "Hello world:$1" to show when the function execution really occurs. We will pass a string that is the name of the function "x":
function x() {
echo "Hello world:$1"
}
function around() {
echo "before"
($1 HERE)     # <------ Only change
echo "after"
}
around x
To describe this: the string "x" is passed to the function around(), which echoes "before", calls the function x (via the variable $1, the first parameter passed to around) passing it the argument "HERE", and finally echoes "after".
As another aside, this is the methodology to use variables as function names. The variables actually hold the string that is the name of the function and ($variable arg1 arg2 ...) calls the function passing the arguments. See below:
function x(){
echo $3 $1 $2    # just rearrange the order of the passed params
}
Z="x" # or just Z=x
($Z 10 20 30)
gives: 30 10 20, where we executed the function named "x" stored in variable Z and passed parameters 10 20 and 30.
Above, we reference functions by assigning their names to variables, so we can use the variable in place of actually knowing the function name. This is similar to what you might do in a very classic function pointer situation in C: generalizing program flow while pre-selecting, based on command line arguments, the function calls you will be making.
In bash these are not function pointers, but variables that refer to names of functions that you later use.
There's no need to use eval:
function x() {
echo "Hello world"
}
function around() {
echo "before"
var=$($1)
echo "after $var"
}
around x
You can't pass anything to a function other than strings. Process substitutions can sort of fake it; Bash tends to hold the FIFO open until the command it is expanded for completes.
Here's a quick, silly one:
foldl() {
echo $(($(</dev/stdin)$2))
} < <(tr '\n' "$1" <$3)
# Sum 20 random ints from 0-999
foldl + 0 <(while ((n=RANDOM%999,x++<20)); do echo $n; done)
Functions can be exported, but this isn't as interesting as it first appears. I find it's mainly useful for making debugging functions accessible to scripts or other programs that run scripts.
(
id() {
"$#"
}
export -f id
exec bash -c 'echowrap() { echo "$1"; }; id echowrap hi'
)
id still only gets a string that happens to be the name of a function (automatically imported from a serialization in the environment) and its args.
Pumbaa80's comment on another answer is also good (eval $(declare -F "$1")), but it's mainly useful for arrays, not functions, since functions are always global. If you were to run it within a function, all it would do is redefine it, so there's no effect. It can't be used to create closures or partial functions or "function instances" dependent on whatever happens to be bound in the current scope. At best it can be used to store a function definition in a string which gets redefined elsewhere, but those functions can also only be hardcoded, unless of course eval is used.
Basically Bash can't be used like this.
A better approach is to use local variables in your functions. The problem then becomes how do you get the result to the caller. One mechanism is to use command substitution:
function myfunc()
{
local myresult='some value'
echo "$myresult"
}
result=$(myfunc) # or result=`myfunc`
echo $result
Here the result is output to the stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.
You should have something along the lines of:
function around()
{
echo 'before';
echo `$1`;
echo 'after';
}
You can then call around x
eval is likely the only way to accomplish it. The only real downside is the security aspect of it, as you need to make sure that nothing malicious gets passed in and only functions you want to get called will be called (along with checking that it doesn't have nasty characters like ';' in it as well).
So if you're the one calling the code, then eval is likely the only way to do it. Note that there are other forms of eval that would likely work too involving subcommands ($() and ``), but they're not safer and are more expensive.
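As a sketch of the kind of check meant here (invoking the argument directly as "$1" rather than through eval): before running it, verify that the argument names a function that is actually defined, so arbitrary strings, including ones containing ';', are rejected rather than executed.
function x() { echo "Hello world"; }
function around() {
    echo "before"
    if declare -F -- "$1" > /dev/null 2>&1; then
        "$1"                                   # run only known, defined functions
    else
        echo "around: '$1' is not a defined function" >&2
    fi
    echo "after"
}
around x                    # runs x
around 'rm -rf /tmp/junk'   # refused: not a defined function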