Error converting a bash function to a csh alias

I am writing a csh alias so that I can use the following bash function in my csh:
function up()
{
    LIMIT=$1
    P=$PWD
    for ((i=1; i <= LIMIT; i++))
    do
        P=$P/..
    done
    cd $P
    export MPWD=$P
}
(I stole the above bash function from here)
I have written this:
alias up 'set LIMIT=$1; set P=$PWD; set counter = LIMIT; while[counter!=0] set counter = counter-1; P=$P/.. ; end cd $P; setenv MPWD=$P'
However, I am getting the following error:
while[counter!=0]: No match.
P=/net/devstorage/home/rghosh/..: Command not found.
end: Too many arguments.
and my script is not working as intended. I have been reading up on csh from here.
I am not an expert in csh and what I have written above is my first csh script. Please let me know what I am doing wrong.

You can also do this
alias up 'cd `yes ".." | head -n\!* | tr "\n" "\/"`'
yes ".." will repeat the string .. indefinitely; head will truncate it to the number passed as argument while calling the alias ( !* expands to the arguments passed; similar to $# ) and tr will convert the newlines to /.
radical7's answer is neater, but it only works in tcsh (which is exactly what you wanted). This one should work irrespective of the shell.
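To see what the backquoted pipeline produces, you can run it by hand (shown here for an argument of 3; a quick sketch, not part of the original answer):
% yes ".." | head -n 3 | tr "\n" "/"
../../../
so the alias ends up running cd ../../../.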

You can use the csh's repeat function
alias up 'cd `pwd``repeat \!^ echo -n /..`'
No loops needed (which is handy, because while constructs in tcsh seem very finicky)
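For example, with an argument of 3 the backquotes expand like this (a quick check you can run in tcsh):
% repeat 3 echo -n /..
/../../..
so the alias ends up running cd `pwd`/../../.. with no loop at all.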

For multiple lines of code, aliases must be within single quotes, and every line except the last must end with a backslash. The last line ends with the single quote that closes and delimits the alias:
alias up 'set counter = $1\
set p = $cwd\
while $counter != 0\
@ counter = $counter - 1\
set p = $p/..\
end\
cd $p\
setenv mpwd $p'
By the way: variables set with set are better with the equal sign separated by spaces from the variable name and the value; setenv doesn't use an equal sign at all; math functionality is provided by @; control structures make use of parentheses (though they aren't required for simple tests); and $cwd holds the current working directory.
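With the alias loaded, the intent is that up 2 behaves like cd ../.. (a sketch; the paths are illustrative, and depending on your csh you may need \!:1 in place of $1 to pick up the alias argument):
% cd /net/devstorage/home/rghosh
% up 2
% pwd
/net/devstorage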

Related

Bash Automatically replacing [0:100] with 1

I'm writing a simple graphing script that uses gnuplot, and I use a helper function to construct a .gscript file.
add_gscript() {
    echo "->>"
    echo $1
    echo $1 >> plot.gscript
    cat plot.gscript
}
However after passing the following argument into the function
echo "--"
echo 'set xrange [0:$RANGE]'
echo "--"
add_gscript "set xrange [0:100]"
where $RANGE has been defined beforehand, I get the following output
--
set xrange [0:$RANGE]
--
->>
set xrange 1
set datafile separator ","
set term qt size 800,640
set size ratio .618
set xrange 1
Is bash evaluating [0:100] to 1 somehow?
A fix was accurately described in comments on the question:
Always double-quote variable references to have their values treated as literals; without quoting, the values are (in most contexts) subject to shell expansions, including word splitting and pathname expansion. [0:100] happens to be a valid globbing pattern that matches any file in the current dir. named either 0 or : or 1. – mklement0
so echo "$1"; echo "$1" >> plot.gscript and any other unquoted vars. Good luck. – shellter
Double quoting the variables did indeed fix my issue. Thanks!
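If you want to reproduce the expansion in isolation, here is a minimal sketch (the scratch directory and file name are illustrative):
$ mkdir /tmp/globdemo && cd /tmp/globdemo
$ touch 1                      # a file whose name matches the pattern
$ echo set xrange [0:100]
set xrange 1
The bracket expression [0:100] matches one character from the set {0, :, 1}, so the file named 1 matches and replaces the pattern.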

Detect empty command

Consider this PS1
PS1='\n${_:+$? }$ '
Here is the result of a few commands
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
1 $
The first line shows no status as expected, and the next two lines show the
correct exit code. However on line 3 only Enter was pressed, so I would like the
status to go away, like line 1. How can I do this?
Here's a funny, very simple possibility: it uses the \# escape sequence of PS1 together with parameter expansions (and the way Bash expands its prompt).
The escape sequence \# expands to the command number of the command to be executed. This is incremented each time a command has actually been executed. Try it:
$ PS1='\# $ '
2 $ echo hello
hello
3 $ # this is a comment
3 $
3 $ echo hello
hello
4 $
Now, each time a prompt is to be displayed, Bash first expands the escape sequences found in PS1, then (provided the shell option promptvars is set, which is the default), this string is expanded via parameter expansion, command substitution, arithmetic expansion, and quote removal.
The trick is then to have an array that will have the k-th field set (to the empty string) whenever the (k-1)-th command is executed. Then, using appropriate parameter expansions, we'll be able to detect when these fields are set and to display the return code of the previous command if the field isn't set. If you want to call this array __cmdnbary, just do:
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Look:
$ PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
0 $ [ 2 = 3 ]
1 $
$ # it seems that it works
$ echo "it works"
it works
0 $
To qualify for the shortest answer challenge:
PS1='\n${a[\#]-$? }${a[\#]=}$ '
that's 31 characters.
Don't use this, of course, as a is too trivial a name; also, \$ might be better than $.
Seems you don't like that the initial prompt is 0 $; you can very easily modify this by initializing the array __cmdnbary appropriately: you'll put this somewhere in your configuration file:
__cmdnbary=( '' '' ) # Initialize the field 1!
PS1='\n${__cmdnbary[\#]-$? }${__cmdnbary[\#]=}\$ '
Got some time to play around this weekend. Looking at my earlier (not-so-good) answer and the other answers, I think this is probably the smallest answer.
Place these lines at the end of your ~/.bash_profile:
PS1='$_ret$ '
trapDbg() {
    local c="$BASH_COMMAND"
    [[ "$c" != "pc" ]] && export _cmd="$c"
}
pc() {
    local r=$?
    trap "" DEBUG
    [[ -n "$_cmd" ]] && _ret="$r " || _ret=""
    export _ret
    export _cmd=
    trap 'trapDbg' DEBUG
}
export PROMPT_COMMAND=pc
trap 'trapDbg' DEBUG
Then open a new terminal and note this desired behavior at the Bash prompt:
$ uname
Darwin
0 $
$
$
$ date
Sun Dec 14 05:59:03 EST 2014
0 $
$
$ [ 1 = 2 ]
1 $
$
$ ls 123
ls: cannot access 123: No such file or directory
2 $
$
Explanation:
This is based on trap 'handler' DEBUG and PROMPT_COMMAND hooks.
PS1 is using a variable _ret i.e. PS1='$_ret$ '.
The DEBUG trap runs only when a command is actually executed, but PROMPT_COMMAND runs even when Enter is pressed on an empty line.
The trap handler sets a variable _cmd to the command actually being executed, using the Bash internal variable BASH_COMMAND.
The PROMPT_COMMAND hook sets _ret to "$? " if _cmd is non-empty, and to "" otherwise; finally, it resets _cmd to the empty state.
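To watch the two hooks fire independently, here is a minimal experiment (illustrative and abridged; try it in a throwaway shell):
$ trap 'echo "debug: $BASH_COMMAND"' DEBUG
$ PROMPT_COMMAND='echo prompt-hook'
$ :
debug: :
debug: echo prompt-hook
prompt-hook
$                  <-- Enter on an empty line
debug: echo prompt-hook
prompt-hook
An empty Enter triggers only PROMPT_COMMAND (plus the DEBUG trap firing for the hook's own command), which is exactly why pc filters out its own name via BASH_COMMAND.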
The variable HISTCMD is updated every time a new command is executed. Unfortunately, the value is masked during the execution of PROMPT_COMMAND (I suppose for reasons related to not having history messed up with things which happen in the prompt command). The workaround I came up with is kind of messy, but it seems to work in my limited testing.
# This only works if the prompt has a prefix
# which is displayed before the status code field.
# Fortunately, in this case, there is one.
# Maybe use a no-op prefix in the worst case (!)
PS1_base=$'\n'
# Functions for PROMPT_COMMAND
PS1_update_HISTCMD () {
    # If HISTCONTROL contains "ignoredups" or "ignoreboth", this breaks.
    # We should not change it programmatically
    # (think principle of least astonishment etc)
    # but we can always gripe.
    case :$HISTCONTROL: in
        *:ignoredups:* | *:ignoreboth:* )
            echo "PS1_update_HISTCMD(): HISTCONTROL contains 'ignoredups' or 'ignoreboth'" >&2
            echo "PS1_update_HISTCMD(): Warning: Please remove this setting." >&2 ;;
    esac
    # PS1_HISTCMD needs to contain the old value of PS1_HISTCMD2 (a copy of HISTCMD)
    PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}
    # PS1_HISTCMD2 needs to be unset for the next prompt to trigger properly
    unset PS1_HISTCMD2
}
PROMPT_COMMAND=PS1_update_HISTCMD
# Finally, the actual prompt:
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
The logic in the prompt is roughly as follows:
${PS1_base#foo...}
This displays the prefix. The stuff in #... is useful only for its side effects. We want to do some variable manipulation without having the values of the variables display, so we hide them in a string substitution. (This will display odd and possibly spectacular things if the value of PS1_base ever happens to begin with foo followed by the current command history index.)
${PS1_HISTCMD2:=...}
This assigns a value to PS1_HISTCMD2 (if it is unset, which we have made sure it is). The substitution would nominally also expand to the new value, but we have hidden it in a ${var#subst} as explained above.
${HISTCMD%$PS1_HISTCMD}
We assign either the value of HISTCMD (when a new entry in the command history is being made, i.e. we are executing a new command) or an empty string (when the command is empty) to PS1_HISTCMD2. This works by trimming any match on PS1_HISTCMD off the value of HISTCMD (using the ${var%subst} suffix-removal syntax).
${_:+...}
This is from the question. It will expand to ... something if the value of $_ is set and nonempty (which it is when a command is being executed, but not e.g. if we are performing a variable assignment). The "something" should be the status code (and a space, for legibility) if PS1_HISTCMD2 is nonempty.
${PS1_HISTCMD2:+$? }
There.
'$ '
This is just the actual prompt suffix, as in the original question.
So the key parts are the variables PS1_HISTCMD which remembers the previous value of HISTCMD, and the variable PS1_HISTCMD2 which captures the value of HISTCMD so it can be accessed from within PROMPT_COMMAND, but needs to be unset in the PROMPT_COMMAND so that the ${PS1_HISTCMD2:=...} assignment will fire again the next time the prompt is displayed.
I fiddled for a bit with trying to hide the output from ${PS1_HISTCMD2:=...} but then realized that there is in fact something we want to display anyhow, so just piggyback on that. You can't have a completely empty PS1_base because the shell apparently notices, and does not even attempt to perform a substitution when there is no value; but perhaps you can come up with a dummy value (a no-op escape sequence, perhaps?) if you have nothing else you want to display. Or maybe this could be refactored to run with a suffix instead; but that is probably going to be trickier still.
In response to Anubhava's "smallest answer" challenge, here is the code without comments or error checking.
PS1_base=$'\n'
PS1_update_HISTCMD () { PS1_HISTCMD=${PS1_HISTCMD2:-$PS1_HISTCMD}; unset PS1_HISTCMD2; }
PROMPT_COMMAND=PS1_update_HISTCMD
PS1='${PS1_base#foo${PS1_HISTCMD2:=${HISTCMD%$PS1_HISTCMD}}}${_:+${PS1_HISTCMD2:+$? }}$ '
This is probably not the best way to do this, but it seems to be working
function pc {
    foo=$_
    fc -l > /tmp/new
    if cmp -s /tmp/{new,old} || test -z "$foo"
    then
        PS1='\n$ '
    else
        PS1='\n$? $ '
    fi
    cp /tmp/{new,old}
}
PROMPT_COMMAND=pc
Result
$ [ 2 = 2 ]
0 $ [ 2 = 3 ]
1 $
$
I needed to use the great script bash-preexec.sh.
Although I don't like external dependencies, this was the only thing that helped me avoid getting 1 in $? after just pressing Enter without running any command.
This goes to your ~/.bashrc:
__prompt_command() {
    local exit="$?"
    PS1='\u@\h: \w \$ '
    [ -n "$LASTCMD" -a "$exit" != "0" ] && PS1='['${red}$exit$clear"] $PS1"
}
PROMPT_COMMAND=__prompt_command
[ -f ~/.bash-preexec.sh ] && . ~/.bash-preexec.sh
preexec() { LASTCMD="$1"; }
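Note that $red and $clear are not defined in the snippet; presumably they are ANSI color variables set earlier in the .bashrc, something like:
red=$'\033[0;31m'    # assumed: red foreground
clear=$'\033[0m'     # assumed: reset attributes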
UPDATE: later I was able to find a solution without dependency on .bash-preexec.sh.

Is the behavior behind the Shellshock vulnerability in Bash documented or at all intentional?

A recent vulnerability, CVE-2014-6271, in how Bash interprets environment variables was disclosed. The exploit relies on Bash parsing some environment variable declarations as function definitions, but then continuing to execute code following the definition:
$ x='() { echo i do nothing; }; echo vulnerable' bash -c ':'
vulnerable
But I don't get it. There's nothing I've been able to find in the Bash manual about interpreting environment variables as functions at all (except for inheriting functions, which is different). Indeed, a properly named function definition is just treated as a value:
$ x='y() { :; }' bash -c 'echo $x'
y() { :; }
But a corrupt one prints nothing:
$ x='() { :; }' bash -c 'echo $x'
$ # Nothing but newline
The corrupt function is unnamed, and so I can't just call it. Is this vulnerability a pure implementation bug, or is there an intended feature here that I just can't see?
Update
Per Barmar's comment, I hypothesized the name of the function was the parameter name:
$ n='() { echo wat; }' bash -c 'n'
wat
Which I could swear I tried before, but I guess I didn't try hard enough. It's repeatable now. Here's a little more testing:
$ env n='() { echo wat; }; echo vuln' bash -c 'n'
vuln
wat
$ env n='() { echo wat; }; echo $1' bash -c 'n 2' 3 -- 4
wat
…so apparently the args are not set at the time the exploit executes.
Anyway, the basic answer to my question is, yes, this is how Bash implements inherited functions.
This seems like an implementation bug.
Apparently, the way exported functions work in bash is that they use specially-formatted environment variables. If you export a function:
f() { ... }
it defines an environment variable like:
f='() { ... }'
What's probably happening is that when the new shell sees an environment variable whose value begins with (), it prepends the variable name and executes the resulting string. The bug is that this includes executing anything after the function definition as well.
The fix described is apparently to parse the result to see if it's a valid function definition. If not, it prints the warning about the invalid function definition attempt.
This article confirms my explanation of the cause of the bug. It also goes into a little more detail about how the fix resolves it: not only do they parse the values more carefully, but variables that are used to pass exported functions follow a special naming convention. This naming convention is different from that used for the environment variables created for CGI scripts, so an HTTP client should never be able to get its foot into this door.
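You can see the post-fix naming convention for yourself on a patched Bash (the exact variable format varies between vendor patches; this is the upstream BASH_FUNC_name%% form):
$ f() { echo hello; }
$ export -f f
$ env | grep -A1 BASH_FUNC
BASH_FUNC_f%%=() {  echo hello
}
An HTTP header turned into an environment variable can only produce ordinary names such as HTTP_USER_AGENT, which can never collide with this form.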
The following:
x='() { echo I do nothing; }; echo vulnerable' bash -c 'typeset -f'
prints
vulnerable
x ()
{
echo I do nothing
}
declare -fx x
It seems that Bash, after having parsed the x=..., discovered it was a function, exported it (note the declare -fx x), and allowed the execution of the command that followed the declaration:
echo vulnerable
x='() { x; }; echo vulnerable' bash -c 'typeset -f'
prints:
vulnerable
x ()
{
x
}
declare -fx x
and running x
x='() { x; }; echo Vulnerable' bash -c 'x'
prints
Vulnerable
Segmentation fault: 11
i.e. it segfaults from infinite recursive calls.
It doesn't override an already defined function:
$ x() { echo Something; }
$ declare -fx x
$ x='() { x; }; echo Vulnerable' bash -c 'typeset -f'
prints:
x ()
{
echo Something
}
declare -fx x
i.e., x remains the previously (correctly) defined function.
For the Bash 4.3.25(1)-release the vulnerability is closed, so
x='() { echo I do nothing; }; echo Vulnerable' bash -c ':'
prints
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
but, what is strange (at least to me),
x='() { x; };' bash -c 'typeset -f'
STILL PRINTS
x ()
{
x
}
declare -fx x
and the
x='() { x; };' bash -c 'x'
segfaults too, so it STILL accepts the strange function definition...
I think it's worth looking at the Bash code itself. The patch gives a bit of insight as to the problem. In particular,
*** ../bash-4.3-patched/variables.c 2014-05-15 08:26:50.000000000 -0400
--- variables.c 2014-09-14 14:23:35.000000000 -0400
***************
*** 359,369 ****
strcpy (temp_string + char_index + 1, string);
! if (posixly_correct == 0 || legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST);
!
! /* Ancient backwards compatibility. Old versions of bash exported
! functions like name()=() {...} */
! if (name[char_index - 1] == ')' && name[char_index - 2] == '(')
! name[char_index - 2] = '\0';
if (temp_var = find_function (name))
--- 364,372 ----
strcpy (temp_string + char_index + 1, string);
! /* Don't import function names that are invalid identifiers from the
! environment, though we still allow them to be defined as shell
! variables. */
! if (legal_identifier (name))
! parse_and_execute (temp_string, name, SEVAL_NONINT|SEVAL_NOHIST|SEVAL_FUNCDEF|SEVAL_ONECMD);
if (temp_var = find_function (name))
When Bash exports a function, it shows up as an environment variable, for example:
$ foo() { echo 'hello world'; }
$ export -f foo
$ cat /proc/self/environ | tr '\0' '\n' | grep -A1 foo
foo=() { echo 'hello world'
}
When a new Bash process finds a function defined this way in its environment, it evaluates the code in the variable using parse_and_execute(). For normal, non-malicious code, executing it simply defines the function in Bash and moves on. However, because it's passed to a generic execution function, Bash will correctly parse and execute additional code defined in that variable after the function definition.
You can see that in the new code, a flag called SEVAL_ONECMD has been added that tells Bash to only evaluate the first command (that is, the function definition), and SEVAL_FUNCDEF to only allow function definitions.
In regard to your question about documentation, notice in the command-line documentation for the env command (reproduced below) that a study of the syntax shows env is working as documented.
There are, optionally, four parts:
An optional hyphen as a synonym for -i (for backward compatibility I assume)
Zero or more NAME=VALUE pairs. These are the variable assignment(s) which could include function definitions.
Note that no semicolon (;) is required between or following the assignments.
The last argument(s) can be a single command followed by its argument(s). It will run with whatever permissions have been granted to the login being used. Security is controlled by restricting permissions on the login user and setting permissions on user-accessible executables such that users other than the executable's owner can only read and execute the program, not alter it.
[ spot@LX03:~ ] env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.
-i, --ignore-environment start with an empty environment
-u, --unset=NAME remove variable from the environment
--help display this help and exit
--version output version information and exit
A mere - implies -i. If no COMMAND, print the resulting environment.
Report env bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
Report env translation bugs to <http://translationproject.org/team/>

using 'eval' in a function produces different results than when using it in a script

Say in my current directory I have files: a.1, a.2, b.1, b.2.
I have this script in file 'x':
echo `eval echo "$1"`
echo `eval echo "$2"`
If I do:
> set -f; x a* b*
I get:
> a.1 a.2
> b.1 b.2
... Which is very nice, I can access and expand any of the command line arguments (as typed) and expand them at my pleasure.
Alas! If I put the contents of my script 'x' inside a function it no longer works: the wildcards refuse to expand. Of course I can remove the 'set -f', but then the list of arguments expands 'out of control', which is to say that I don't know where the 'b*' arguments start, since I don't know how many arguments 'a*' will expand to.
Can I make the above work inside a function? And, for that matter, why does it behave differently than the same code in a script?
Thanks
The reason for the difference is that when you put it in a separate script x, and call that script using
x ...
then you're launching a separate instance of Bash to run the script (that is, it's equivalent to
bash x ...
), and that separate instance doesn't inherit the set -f setting, whereas when you put it in a shell function, it runs within the same instance.
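You can confirm the inheritance behaviour directly (assuming the files a.1 and a.2 from the question exist):
$ set -f
$ echo a*             # globbing is off in this shell
a*
$ bash -c 'echo a*'   # a fresh instance: globbing is on again
a.1 a.2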
One solution is to drop the whole set -f approach, and just quote your arguments when you call the function:
function x () { for arg in "$@" ; do echo $arg ; done ; }
x 'a.*' 'b.*' # prints a.1 a.2 on one line, b.1 b.2 on the next.
Another solution is to define the function to run in a subshell (but still within the same instance of Bash), and then cancel the set -f within that function by using set +f:
set -f
function x () ( set +f ; for arg in "$@" ; do echo $arg ; done )
x a.* b.* # prints a.1 a.2 on one line, b.1 b.2 on the next.
(The reason for using the subshell, i.e. for using (...) instead of {...} is that otherwise, the set +f would "leak out" of the function call. But maybe that's O.K.?)
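To see the leak the subshell avoids, compare with a hypothetical brace-bodied version (same a.1/a.2 files assumed):
$ set -f
$ y() { set +f ; echo "$1" ; }    # braces instead of ( ... )
$ y a.*
a.*
$ echo a*    # set +f leaked out of the call: globbing is back on
a.1 a.2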
Yet another solution is to have the function invoke a new instance of Bash:
set -f
function x () { for arg in "$@" ; do bash -c "echo $arg" ; done ; }
x a.* b.* # prints a.1 a.2 on one line, b.1 b.2 on the next.
Your initial set -f affects only the current shell instance. When you launch your script, a new shell is started, and that shell has globbing enabled. If you replace the script with the function, the function runs in the context of the shell where set -f has effect, and so no globbing takes place.
You actually don't need to disable globbing to get the behaviour you want. You need only quote the arguments to your function:
myfunc()
{
    echo $1
    echo $2
}
myfunc "a.*" "b.*"

How to preserve single and double quotes in shell script arguments WITHOUT the ability to control how they pass

I need to receive arguments I have no control over into a shell script, and preserve any single or double quotes. For instance, a script that simply outputs the given arguments should act as follows:
> my_script.sh "double" 'single' none
"double" 'single' none
I don't have the privilege of augmenting the arguments such as in:
> my_script.sh \"double\" \'single\' none
or
> my_script.sh '"double"' "'single'" none
And neither "$@" nor "$*" work.
I also thought of reading from STDIN and try something like:
> echo "double" 'single' none | my_script.sh
thinking it may help, but no breakthrough so far.
Any suggestions?
CSH / Perl solutions are welcome.
This is not possible (without escaping), because the shell processes the arguments and removes the quotes before your script is called. As a result, your script does not know about the quotes specified on the command line.
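You can verify that the quotes are gone before the script even starts; here my_script.sh is a stand-in that just prints each argument it receives on its own line:
#!/bin/sh
# my_script.sh - show exactly what arguments arrive
for arg in "$@"; do printf '<%s>\n' "$arg"; done
Calling it shows the quote removal has already happened:
$ ./my_script.sh "double" 'single' none
<double>
<single>
<none>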
You cannot recover the single/double quotes exactly as they were, because the shell 'eats' them. If you need to call some other script from your script, you can e.g. single quote the arguments again. Here is a PERL solution I use:
sub args2shell
{
    local (@argv) = @_;
    local $" = '\' \'';
    local (@margv);
    @margv = map { s/'/'\\''/g; $_ } @argv;
    return "\'@margv\'" if @margv;
    return undef;
}
Example usage:
$args = args2shell @ARGV;
open F, "find -follow $args ! -type d |";
...
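A bash counterpart of the same requoting idea is printf's %q format, which quotes each argument so it can be reused as shell input (a sketch, not part of the original answer):
args=$(printf '%q ' "$@")    # e.g. turns the two arguments a and b'c into: a b\'c
eval "find . -follow $args ! -type d"
As with the Perl helper, this doesn't recover the original quoting; it just rebuilds an equivalent, safely quoted command line.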
