TCSH script full path - shell

In bash shell I can get a full path of a script even if the script is called by source, link, ./..., etc.
These magic bash lines:
#Next lines just find the path of the file.
#Works for all scenarios including:
#when called via multiple soft links.
#when script called by command "source" aka . (dot) operator.
#when arg $0 is modified from caller.
#"./script" "/full/path/to/script" "/some/path/../../another/path/script" "./some/folder/script"
#SCRIPT_PATH is given in full path, no matter how it is called.
#Just make sure you locate this at start of the script.
SCRIPT_PATH="${BASH_SOURCE[0]}";
if [ -h "${SCRIPT_PATH}" ]; then
while [ -h "${SCRIPT_PATH}" ]; do SCRIPT_PATH=`readlink "${SCRIPT_PATH}"`; done
fi
pushd `dirname ${SCRIPT_PATH}` > /dev/null
SCRIPT_PATH=`pwd`;
popd > /dev/null
How can you get the script path under the same conditions in TCSH shell? What are these 'magic lines'?
P.S. It is not a duplicate of this and similar questions. I'm aware of $0.

If your csh script is named test.csh, then this will work:
/usr/sbin/lsof +p $$ | \grep -oE '/.*test\.csh'
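To capture that inside the script itself, one might assign it to a variable (a sketch, assuming the script is named test.csh and that the shell still holds the file open, which is what the lsof trick relies on):
set script_path = `/usr/sbin/lsof +p $$ | \grep -oE '/.*test\.csh'`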

I don't use tcsh and do not claim guru status in it, or any other variant of C shell. I also firmly believe that Csh Programming Considered Harmful contains much truth; I use Korn shell or Bash.
However, I can look at manual pages, and I used the man page for tcsh (tcsh 6.17.00 (Astron) 2009-07-10 (x86_64-apple-darwin) on MacOS 10.7.1 Lion).
As far as I can see, there is no analogue to the variable ${BASH_SOURCE[0]} in tcsh, so the starting point for the script fragment in the question is missing. Thus, unless I missed something in the manual, or the manual is incomplete, there is no easy way to achieve the same result in tcsh.
The original script fragment has some problems, too, as noted in comments. If the script is invoked with current directory /home/user1 using the name /usr/local/bin/xyz, but that is a symlink containing ../libexec/someprog/executable, then the code snippet is going to produce the wrong answer (it will likely say /home/user1 because the directory /home/libexec/someprog does not exist).
Also, wrapping the while loop in an if is pointless; the code should simply contain the while loop.
SCRIPT_PATH="${BASH_SOURCE[0]}";
while [ -h "${SCRIPT_PATH}" ]; do SCRIPT_PATH=`readlink "${SCRIPT_PATH}"`; done
You should look up the realpath() function; there may even be a command that uses it already available. It certainly is not hard to write a command that does use realpath(). However, as far as I can tell, none of the standard Linux commands wrap the realpath() function, which is a pity as it would help you solve the problem. (The stat and readlink commands do not help, specifically.)
At its simplest, you could write a program that uses realpath() like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

int main(int argc, char **argv)
{
    int rc = EXIT_SUCCESS;
    for (int i = 1; i < argc; i++)
    {
        /* realpath(path, NULL) allocates the resolved path */
        char *rn = realpath(argv[i], 0);
        if (rn != 0)
        {
            printf("%s\n", rn);
            free(rn);
        }
        else
        {
            fprintf(stderr, "%s: failed to resolve the path for %s\n%d: %s\n",
                    argv[0], argv[i], errno, strerror(errno));
            rc = EXIT_FAILURE;
        }
    }
    return rc;
}
If that program is called realpath, then the Bash script fragment reduces to:
SCRIPT_PATH=$(realpath "${BASH_SOURCE[0]}")
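Note that realpath resolves to the file itself, whereas the original fragment left the script's directory in SCRIPT_PATH; for parity, a sketch:
SCRIPT_PATH=$(realpath "${BASH_SOURCE[0]}")
SCRIPT_DIR=$(dirname "$SCRIPT_PATH")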

Related

Bash completion scripting - getting a "transparent proxy"-like behaviour

I am trying to write a simple Bash completion script for a program that runs its arguments as a command. A good example of this kind of program is the prime-run script provided by the nvidia-prime package:
#!/bin/bash
__NV_PRIME_RENDER_OFFLOAD=1 __VK_LAYER_NV_optimus=NVIDIA_only __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
This script sets a few environment variables, which instructs the prime driver to use the Nvidia dGPU on a hybrid system. The first argument is treated as the command, and all trailing arguments are passed through. So for example you can run prime-run code . and VSCode will start in the current directory using the dGPU.
Therefore from a completion-script POV, what we want is to basically try to complete as if the prime-run token isn't there (hence "transparent proxy"-like behaviour). To give a rather contrived example:
> prime-run journalc<TAB>
(completes journalctl)
> prime-run journalctl --us<TAB>
(completes --user)
However I am finding this surprisingly difficult in Bash (not that I know how in other shells). So the question is simple: is it possible and if so how?
Ideas I've (hopelessly) had
The simple complete -A command prime-run: the first argument gets completed as a command as expected (let's call it foo), but the following arguments are also completed as commands rather than as arguments to foo
Use some combination of compgen and complete -p to invoke the completion function of foo, but AFAIK the completion function for all foo is locally defined and thus uncallable
TL;DR
bash-completion provides a function named _command_offset (permalink), which is exactly what I need.
# A meta-command completion function for commands like sudo(8), which need to
# first complete on a command, then complete according to that command's own
# completion definition.
Keep reading if you are interested in how I got here.
So I was daydreaming the other day, when it hit me - doesn't sudo basically have the exact same behaviour I want? So the task became simple - reverse engineer the completion script for sudo. Source available here: permalink.
Turns out, most of the code has to do with completing the various options, so it's safe to simply throw most of it out:
L 8-11, 50-52: Related to sudo's edit mode. Safe to ditch.
L 19-24, 27-39, 43-49: These complete sudo's options. Safe to ditch.
So we're left with this:
_sudo()
{
    local cur prev words cword split
    _init_completion -s || return

    for ((i = 1; i <= cword; i++)); do
        if [[ ${words[i]} != -* ]]; then
            local PATH=$PATH:/sbin:/usr/sbin:/usr/local/sbin
            local root_command=${words[i]}
            _command_offset $i
            return
        fi
    done

    $split && return
} &&
    complete -F _sudo sudo sudoedit
The for and if block are there to deal with sudo's options that precede the "guest command". Safe to ditch (after replacing all $i with 1).
The variable $split is only referenced in _init_completion (permalink), and it seems to be used for handling different argument styles (--foo=bar vs. --foo bar). Same with the -s flag. Irrelevant.
Appending to $PATH and setting $root_command have to do with privilege escalation. Only relevant to sudo.
So after the dust has cleared, by process of elimination, I ended up with this simple chunk of code:
_my-script()
{
    local cur prev words cword
    _init_completion || return

    _command_offset 1
} && complete -F _my-script my-script
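To try this out, one can save the snippet as a file and source it (the file name here is mine, not from bash-completion):
$ source my-script-completion.bash
$ my-script journalc<TAB>   # now completes to journalctl, as in the example above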
Declaring these four local variables and calling _init_completion is standard for all completion scripts, so really it's as simple as one command. Of course someone had to write the massively-complex _command_offset function, so lucky me I guess?
Anyways, thank you for reading the story of me messing around and hopefully this will be helpful to some other person in the future.

Linux: command passed to tar in backticks is replaced on one system but not the other: what could cause that difference?

I am debugging a shell-script that works fine on one system but fails on another.
The script essentially unzips and untars archived log-files and then greps for a given substring in the contained log-files.
After some analysis and debugging I found that on one system an embedded `basename $TAR_FILENAME` command is properly executed (i.e. the command between the back-ticks is executed and the result replaces the part of the string between the back-ticks), while on the other system that replacement does NOT happen: instead the string `basename <filename-here>` (including the back-ticks) is inserted literally. This of course derails the further processing of that string, and the greps don't work.
What could cause this? Can one enable/disable the back-tick feature in Bash?
I am not aware of any setting or switch that allows toggling that feature on or off. Or is there?
Later addition:
This is the script:
#!/bin/bash
pattern=$1
for f in *.tar.gz; do
    echo "$f:"
    tar -xzf "$f" --to-command 'echo "f:`basename $TAR_FILENAME` s:'"$pattern\""
done
On one system this yields lines like:
f:localhost_access_log.2021-07-29.txt s:pattern
On the second this yields lines like:
f:`basename ./localhost_access_log.2021-07-29.txt` s:pattern
Both systems are on SLES-11 (very old, indeed...).
tar 1.26 passed the command to a shell (source):
argv[0] = "/bin/sh";
argv[1] = "-c";
argv[2] = to_command_option;
argv[3] = NULL;
priv_set_restore_linkdir ();
execv ("/bin/sh", argv);
tar 1.27 changed this to skip the shell as part of another fix (source):
if (wordsplit (cmd, &ws, (WRDSF_DEFFLAGS | WRDSF_ENV) & ~WRDSF_NOVAR))
FATAL_ERROR ((0, 0, _("cannot split string '%s': %s"),
cmd, wordsplit_strerror (&ws)));
execvp (ws.ws_wordv[0], ws.ws_wordv);
Since the shell is responsible for handling backticks, they will be interpreted in 1.26 but not in 1.27.
The behavior was changed back for tar 1.29.
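One way to sidestep the difference entirely is to avoid shell syntax in the --to-command string and pass a helper script instead, which both the shell-invoking and execvp-style tar versions can run. A sketch (the helper path and the PATTERN variable are mine, not from the original post; tar exports TAR_FILENAME to the child, and PATTERN is inherited from our environment):
#!/bin/bash
pattern=$1
# write the per-file command to a helper script, so tar never has to interpret shell syntax
cat > /tmp/show-name.sh <<'EOF'
#!/bin/sh
echo "f:$(basename "$TAR_FILENAME") s:$PATTERN"
EOF
chmod +x /tmp/show-name.sh
for f in *.tar.gz; do
    echo "$f:"
    PATTERN=$pattern tar -xzf "$f" --to-command /tmp/show-name.sh
done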

Nemiver - Unable to direct standard input

I like the idea of Nemiver, but I can't get this basic function of Nemiver to work: redirecting standard input to the program. Since my program needs file input instead of manual input, it usually takes the form:
./program < list.txt
But apparently Nemiver does not recognize this simple redirection and treats "<" and "list.txt" as separate arguments. This frustrates me greatly. Is there a solution to this? Thank you guys so much!
If you just want to do a postmortem analysis (and don't need to single-step), you can load a crash-dump (core) file which can be generated under any kind of pipelining/redirection scenario.
( ulimit -c unlimited && ./yourapp <in.txt >out.txt ) || nemiver --load-core=core ./yourapp
This only launches the debugger if a crash or assert occurs.
As a quick-and-dirty solution, you can add something like the following at the beginning of main()...
{
    // programmatically redirect stdio
    const char *stdin_filename = "input.txt", *stdout_filename = "output.txt";
    assert( dup2(open(stdin_filename, O_RDONLY), 0) != -1 );
    // O_CREAT|O_TRUNC added so the output file need not already exist
    assert( dup2(open(stdout_filename, O_WRONLY | O_CREAT | O_TRUNC, 0644), 1) != -1 );
    asm(" int3"); // optional breakpoint -- kills program when not debugging
}
Just make sure that you disable this when not debugging (since presumably you want to use the normal method of redirection in that case).
You also need the following...
#include <unistd.h>
#include <fcntl.h>
#include <assert.h>

What is the purpose of the : (colon) GNU Bash builtin?

What is the purpose of a command that does nothing, being little more than a comment leader, but is actually a shell builtin in and of itself?
It's slower than inserting a comment into your scripts by about 40% per call, which probably varies greatly depending on the size of the comment. The only possible reasons I can see for it are these:
# poor man's delay function
for ((x=0;x<100000;++x)) ; do : ; done
# inserting comments into string of commands
command ; command ; : we need a comment in here for some reason ; command
# an alias for `true'
while : ; do command ; done
I guess what I'm really looking for is what historical application it might have had.
Historically, Bourne shells didn't have true and false as built-in commands. true was instead simply aliased to :, and false to something like let 0.
: is slightly better than true for portability to ancient Bourne-derived shells. As a simple example, consider having neither the ! pipeline operator nor the || list operator (as was the case for some ancient Bourne shells). This leaves the else clause of the if statement as the only means for branching based on exit status:
if command; then :; else ...; fi
Since if requires a non-empty then clause and comments don't count as non-empty, : serves as a no-op.
Nowadays (that is: in a modern context) you can usually use either : or true. Both are specified by POSIX, and some find true easier to read. However there is one interesting difference: : is a so-called POSIX special built-in, whereas true is a regular built-in.
Special built-ins are required to be built into the shell; regular built-ins are only "typically" built in, but it isn't strictly guaranteed. There also usually isn't a regular program named : with the function of true in the PATH of most systems.
Probably the most crucial difference is that with special built-ins, any variable set by the built-in - even in the environment during simple command evaluation - persists after the command completes, as demonstrated here using ksh93:
$ unset x; ( x=hi :; echo "$x" )
hi
$ ( x=hi true; echo "$x" )
$
Note that Zsh ignores this requirement, as does GNU Bash except when operating in POSIX compatibility mode, but all other major "POSIX sh derived" shells observe this including dash, ksh93, and mksh.
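For example, Bash's behaviour flips with POSIX mode (a quick check; this is the output I would expect per the documentation):
$ bash -c 'x=hi :; echo "$x"'

$ bash --posix -c 'x=hi :; echo "$x"'
hi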
Another difference is that regular built-ins must be compatible with exec - demonstrated here using Bash:
$ ( exec : )
-bash: exec: :: not found
$ ( exec true )
$
POSIX also explicitly notes that : may be faster than true, though this is of course an implementation-specific detail.
I use it to easily enable/disable variable commands:
#!/bin/bash
if [[ "$VERBOSE" == "" || "$VERBOSE" == "0" ]]; then
    vecho=":" # no "verbose echo"
else
    vecho=echo # enable "verbose echo"
fi
$vecho "Verbose echo is ON"
Thus
$ ./vecho
$ VERBOSE=1 ./vecho
Verbose echo is ON
This makes for a clean script. This cannot be done with '#'.
Also,
: >afile
is one of the simplest ways to guarantee that 'afile' exists but is 0 length.
A useful application for : is if you're only interested in using parameter expansions for their side-effects rather than actually passing their result to a command.
In that case, you use the parameter expansion as an argument to either : or false depending upon whether you want an exit status of 0 or 1. An example might be
: "${var:=$1}"
Since : is a builtin, it should be pretty fast.
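A small illustration of the := expansion doing its work purely as a side-effect:
$ set -- default_value
$ unset var
$ : "${var:=$1}"
$ echo "$var"
default_value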
: can also be used for block comments (similar to /* */ in the C language). For example, if you want to skip a block of code in your script, you can do this:
: << 'SKIP'
your code block here
SKIP
Two more uses not mentioned in other answers:
Logging
Take this example script:
set -x
: Logging message here
example_command
The first line, set -x, makes the shell print out the command before running it. It's quite a useful construct. The downside is that the usual echo Log message type of statement now prints the message twice. The colon method gets round that. Note that you'll still have to escape special characters just like you would for echo.
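For example, with tracing on, the message appears exactly once, as part of the trace:
$ set -x
$ : Logging message here
+ : Logging message here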
Cron job titles
I've seen it being used in cron jobs, like this:
45 10 * * * : Backup for database ; /opt/backup.sh
This is a cron job that runs the script /opt/backup.sh every day at 10:45. The advantage of this technique is that it makes for better looking email subjects when the /opt/backup.sh prints some output.
It's similar to pass in Python.
One use would be to stub out a function until it gets written:
future_function () { :; }
If you'd like to truncate a file to zero bytes, useful for clearing logs, try this:
:> file.log
You could use it in conjunction with backticks (``) to execute a command without displaying its output, like this:
: `some_command`
Of course you could just do some_command > /dev/null, but the :-version is somewhat shorter.
That being said I wouldn't recommend actually doing that as it would just confuse people. It just came to mind as a possible use-case.
It's also useful for polyglot programs:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
~function(){ ... }
This is now both an executable shell-script and a JavaScript program: meaning ./filename.js, sh filename.js, and node filename.js all work.
(Definitely a little bit of a strange usage, but effective nonetheless.)
Some explication, as requested:
Shell-scripts are evaluated line-by-line; and the exec command, when run, terminates the shell and replaces its process with the resultant command. This means that to the shell, the program looks like this:
#!/usr/bin/env sh
':' //; exec "$(command -v node)" "$0" "$@"
As long as no parameter expansion or aliasing is occurring in the word, any word in a shell-script can be wrapped in quotes without changing its meaning; this means that ':' is equivalent to : (we've only wrapped it in quotes here to achieve the JavaScript semantics described below)
... and as described above, the first command on the first line is a no-op (it translates to : //, or if you prefer to quote the words, ':' '//'. Notice that the // carries no special meaning here, as it does in JavaScript; it's just a meaningless word that's being thrown away.)
Finally, the second command on the first line (after the semicolon), is the real meat of the program: it's the exec call which replaces the shell-script being invoked, with a Node.js process invoked to evaluate the rest of the script.
Meanwhile, the first line, in JavaScript, parses as a string-literal (':'), and then a comment, which is deleted; thus, to JavaScript, the program looks like this:
':'
~function(){ ... }
Since the string-literal is on a line by itself, it is a no-op statement, and is thus stripped from the program; that means that the entire line is removed, leaving only your program-code (in this example, the function(){ ... } body.)
Self-documenting functions
You can also use : to embed documentation in a function.
Assume you have a library script mylib.sh, providing a variety of functions. You could either source the library (. mylib.sh) and call the functions directly after that (lib_function1 arg1 arg2), or avoid cluttering your namespace and invoke the library with a function argument (mylib.sh lib_function1 arg1 arg2).
Wouldn't it be nice if you could also type mylib.sh --help and get a list of available functions and their usage, without having to manually maintain the function list in the help text?
#!/bin/bash
# all "public" functions must start with this prefix
LIB_PREFIX='lib_'
# "public" library functions
lib_function1() {
    : This function does something complicated with two arguments.
    :
    : Parameters:
    : ' arg1 - first argument ($1)'
    : ' arg2 - second argument'
    :
    : Result:
    : " it's complicated"

    # actual function code starts here
}

lib_function2() {
    : Function documentation
    # function code here
}
# help function
--help() {
    echo MyLib v0.0.1
    echo
    echo Usage: mylib.sh [function_name [args]]
    echo
    echo Available functions:
    declare -f | sed -n -e '/^'$LIB_PREFIX'/,/^}$/{/\(^'$LIB_PREFIX'\)\|\(^[ \t]*:\)/{
        s/^\('$LIB_PREFIX'.*\) ()/\n=== \1 ===/;s/^[ \t]*: \?['\''"]\?/ /;s/['\''"]\?;\?$//;p}}'
}
# main code
if [ "${BASH_SOURCE[0]}" = "${0}" ]; then
    # the script was executed instead of sourced
    # invoke requested function or display help
    if [ "$(type -t - "$1" 2>/dev/null)" = function ]; then
        "$@"
    else
        --help
    fi
fi
A few comments about the code:
All "public" functions have the same prefix. Only these are meant to be invoked by the user, and to be listed in the help text.
The self-documenting feature relies on the previous point, and uses declare -f to enumerate all available functions, then filters them through sed to only display functions with the appropriate prefix.
It is a good idea to enclose the documentation in single quotes, to prevent undesired expansion and whitespace removal. You'll also need to be careful when using apostrophes/quotes in the text.
You could write code to internalize the library prefix, i.e. the user only has to type mylib.sh function1 and it gets translated internally to lib_function1. This is an exercise left to the reader.
The help function is named "--help". This is a convenient (i.e. lazy) approach that uses the library invoke mechanism to display the help itself, without having to code an extra check for $1. At the same time, it will clutter your namespace if you source the library. If you don't like that, you can either change the name to something like lib_help or actually check the args for --help in the main code and invoke the help function manually.
I saw this usage in a script and thought it was a good substitute for invoking basename within a script.
oldIFS=$IFS
IFS=/
for basetool in $0 ; do : ; done
IFS=$oldIFS
...
This is a replacement for the code: basetool=$(basename $0)
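A quick demonstration of the trick in one-liner form (the sample path is mine; with IFS set to /, the unquoted $0 splits into path components and the loop leaves the last one in basetool):
$ bash -c 'oldIFS=$IFS; IFS=/; for basetool in $0; do :; done; IFS=$oldIFS; echo "$basetool"' /usr/local/bin/mytool
mytool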
Another way, not yet mentioned here, is the initialisation of parameters in infinite while-loops. Below is not the cleanest example, but it serves its purpose.
#!/usr/bin/env bash
[ "$1" ] && foo=0 && bar="baz"
while : "${foo=2}" "${bar:=qux}"; do
    echo "$foo"
    (( foo == 3 )) && echo "$bar" && break
    (( foo=foo+1 ))
done
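Assuming the script is saved as ./init-loop.sh (the name is mine), the two code paths look like this: without an argument, ${foo=2} and ${bar:=qux} do the initialising; with an argument, the values set before the loop stick.
$ ./init-loop.sh
2
3
qux
$ ./init-loop.sh somearg
0
1
2
3
baz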

Bash file descriptor leak

I get a file descriptor leak when running the following code:
function get_fd_count() {
    local fds
    cd /proc/$$/fd; fds=( * ) # avoid a StackOverflow source colorizer bug
    echo "${#fds[@]}"
}

function fd_leak_func() {
    while : ; do
        echo ">> Current FDs: $(get_fd_count)"
        read retval new_state < <(set +e; new_state=$(echo foo); retval=$?; printf "%d %s\n" $retval $new_state)
    done
}

fd_leak_func
Tested on both 3.2.25 and 4.0.28.
This only happens when the loop is happening within a function; every time we return to top-level context, the extra file descriptors are closed.
Is this intended behavior? More to the point, are workarounds available?
Followup: After reporting to the bash-bug mailing list, this was confirmed as a bug. Chet indicated that a fix will be included in the next release (as of 4/17/2010).
Here's a simplified example:
$ fd_leaker() { while :; do read a < <(pwd); c=(/proc/$$/fd/*); c=${#c[@]}; echo $c; done; }
$ fd_leaker
This one is not fixed by using /bin/true but it's mostly fixed by using (exit 0) But I get "bash: echo: write error: Interrupted system call" errors using that "fix" or if I use /bin/pwd instead of the builtin pwd.
It also seems to be specific to read. I tried grep . < <(pwd) > /dev/null and it worked properly. When I tried while read a; do :; done < <( pwd)
The extra file descriptors are of the form:
lr-x------ 1 user user 64 2010-04-15 19:26 39 -> pipe:[8357879]
I really don't think the runaway creation of them is intended; after all, there's nothing recursive going on. I really don't see how adding something in the loop should fix things.
Putting a /bin/true at the end of the loop fixes it, but I don't know why or how, or why it happens in the first place.
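If a workaround is needed on affected versions, one option (a sketch, untested on those versions, but it removes the leaking pipe FD from the picture) is to drop the process substitution and feed read from a here-string instead:
out=$(set +e; new_state=$(echo foo); retval=$?; printf "%d %s\n" $retval $new_state)
read retval new_state <<< "$out"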
It seems to be fixed in bash-4.2. I tested with bash-4.2.28 in particular.
