Get command used to start a script - shell

How can I get the command that was previously used to start a shell script?
For example:
nohup /script_name.sh &
Inside the script itself, how can I check whether "nohup" was used?
Thanks.

You want to use the $_ parameter in your script.
Example: shell.sh
#!/bin/bash
echo $_;
user@server [~]# sh shell.sh
/usr/bin/sh
user@server [~]#
Additionally:
If you want to get rid of the full path (/usr/bin/sh), use the basename command.
#!/bin/bash
echo `basename $_`;
user@server [~]# sh shell.sh
sh
user@server [~]#
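If the goal is specifically to detect nohup rather than to recover the invoking command, one common heuristic is to test whether stdout is still attached to a terminal, since nohup redirects it (to nohup.out by default). This is only a sketch of that heuristic, not a definitive check; any other redirection or pipe will trigger it too:
#!/bin/bash
# Heuristic only: nohup redirects stdout away from the terminal,
# but so does any other redirection or pipe.
if [ -t 1 ]; then
  echo "stdout is a terminal; probably not started with nohup"
else
  echo "stdout is not a terminal; possibly started with nohup (or just redirected)"
fi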

Well, that depends on the script in question. There are many ways to execute a script, for example:
./<scriptname> # chmod 700 <scriptname> should be done before executing the script this way
bash <scriptname> # provided bash is used for executing the script.
Or, if you just want to get the name of script2 inside script1, use sed or awk to parse script1 with a regular expression such as /script2/.
Try this:
cat <script1> | awk '{ if( $0 ~ /^[^#].* \/scriptname.sh/ ){ print $1}}'

@codebaus thanks, doing something like this works, but using strace definitely does not.
#!/bin/bash
# echo $_
# echo $0
if grep "sh" $_ >/dev/null ; then
  exit 1
fi ;
echo "string" ;

I believe you want to run this?:
#!/bin/bash
# echo $_
# echo $0
if grep "sh" $_ 2> /dev/null ; then
  exit 1
fi ;
echo "string";
user@server [~]# sh shell.sh
Binary file /usr/bin/sh matches
user@server [~]#
Not sure what you are trying to accomplish in the end game. But $_ should give you what you need based on your initial question.
Additionally:
Apologies, as I did not answer your strace comment. Based on the previous code above:
strace sh shell.sh
wait4(-1, Binary file /usr/bin/strace matches
[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 874
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGCHLD (Child exited) @ 0 (0) ---

Related

bash script: how to "exit" from sourced script, and allow to work non sourced?

I have a script that I'd like people to source, but only optionally: they can run it with or without sourcing it, it's up to them.
e.g. The following should both work:
$ . test.sh
$ test.sh
The problem is, test.sh contains exit statements if correct args aren't passed in. If someone sources the script, then the exit commands exit the terminal!
I've done a bit of research and see from this StackOverflow post that I could detect if it's being sourced, and do something different, but what would that something different be?
The normal way to exit from a sourced script is simply to return (optionally adding the desired exit code) outside of any function. Assuming that when run as a command we have the -e flag set, this will also exit from a shell program:
#!/bin/sh -eu
if [ $# = 0 ]
then
  echo "Usage $0 <argument>" >&2
  return 1
fi
If we're running without -e, we might be able to return || exit instead.
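As a minimal sketch of that idea in bash (the message and argument handling here are mine): when the file is sourced, return stops it; when it is executed directly, return fails because we are not in a function, and the failure falls through to exit.
#!/bin/bash
if [ $# -eq 0 ]; then
  echo "Usage: $0 <argument>" >&2
  # When sourced, return works and stops here; when executed,
  # return fails (not in a function), so exit runs instead.
  return 1 2>/dev/null || exit 1
fi
echo "got argument: $1"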
There may be better ways to do this, but here's a sample script showing how I got this to work:
bparks@home
$ set | grep TESTVAR
bparks@home
$ ./test.sh
Outputs some useful information to the console. Please pass one arg.
bparks@home
$ set | grep TESTVAR
bparks@home
$ . ./test.sh
Outputs some useful information to the console. Please pass one arg.
bparks@home
$ set | grep TESTVAR
bparks@home
$ ./test.sh asdf
export TESTVAR=me
bparks@home
$ set | grep TESTVAR
bparks@home
$ . ./test.sh asdf
bparks@home
$ set | grep TESTVAR
TESTVAR=me
bparks@home
$
test.sh
#!/usr/bin/env bash
# store if we're sourced or not in a variable
(return 0 2>/dev/null) && SOURCED=1 || SOURCED=0

exitIfNotSourced(){
  [[ "$SOURCED" != "0" ]] || exit;
}

showHelp(){
  IT=$(cat <<EOF
Outputs some useful information to the console. Please pass one arg.
EOF
)
  echo "$IT"
}

# Show help if no args supplied - works if sourced or not sourced
if [ -z "$1" ]
then
  showHelp
  exitIfNotSourced;
  return;
fi

# your main script follows
# this sample shows exporting a variable if sourced,
# and outputting this to stdout if not sourced
if [ "$SOURCED" == "1" ]
then
  export TESTVAR=me
else
  echo "export TESTVAR=me"
fi
Check out this answer for a better description and a proper solution.
And here is how it is done in docker-entrypoint.sh in the official MySQL image:
# check to see if this file is being run or sourced from another script
_is_sourced() {
  # https://unix.stackexchange.com/a/215279
  [ "${#FUNCNAME[@]}" -ge 2 ] \
    && [ "${FUNCNAME[0]}" = '_is_sourced' ] \
    && [ "${FUNCNAME[1]}" = 'source' ]
}
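For context, here is a sketch of how a check like this is typically wired up at the bottom of such an entrypoint (the _main name is illustrative, not taken from the question):
_main() {
  # ... normal entrypoint logic goes here ...
  exec "$@"
}
# If the script is executed, run _main; if it is sourced, do nothing,
# so the caller can reuse the functions defined above.
if ! _is_sourced; then
  _main "$@"
fi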

Bash: Elegant way to run a command and exit if a process exited with error

I have this logic in a Bash script:
Run something;
If it failed, print something and stop;
If not, run something else.
And all this runs in a ssh session.
This is probably trivial if I used $? and if / else.
But because of the script maintainability, I am looking for some elegant 2 lines solution.
This is what I have so far
ssh ... '
ls attributes/*'$CONF_FILE'.rb || ls -l attributes/ && exit 1;
'$EDITOR' attributes/*'$CONF_FILE'.rb '$PART_VER';'
However, this exits no matter what. So I tried:
ssh ... '
ls attributes/*'$CONF_FILE'.rb || (ls -l attributes/ && exit 1);
'$EDITOR' attributes/*'$CONF_FILE'.rb '$PART_VER';'
However, exit only exits the subshell. And exiting a script from within a subshell is not elegant at all.
Is there a simple 2-lines solution? Perhaps other operators precedence?
Written for clarity, correctness, and maintainability -- not terseness:
# store your remote script as a string. Because of the quoted heredoc, no variables are
# evaluated at this time; $1, $2 and $3 are expanded only after the code is sent to the
# remote system.
script_text=$(cat <<'EOF'
CONF_FILE=$1; PART_VER=$2; EDITOR=$3
shopt -s nullglob                   # Return an empty list for any failed glob
set -- attributes/*"$CONF_FILE".rb  # Replace our argument list with a glob result
if (( $# )); then                   # Check length of that result...
  "$EDITOR" "$@" "$PART_VER"        # ...if it's nonzero, run the editor w/ that list
else
  ls attributes                     # otherwise, run ls and fail
  exit 1
fi
EOF
)
# generate a single string to pass to the remote shell which passes the script text
# ...and the arguments to place in $0, $1, etc while that script is running
printf -v ssh_cmd_str '%q ' \
bash -c "$script_text" '_' "$CONF_FILE" "$PART_VER" "$EDITOR"
# ...thereafter, that command can be run as follows:
ssh -tt ... "$ssh_cmd_str"
Don't use a subshell; use a command group.
ssh ... "
ls attributes/*'$CONF_FILE'.rb || { ls -l attributes/ && exit 1; };
'$EDITOR' attributes/*'$CONF_FILE'.rb '$PART_VER';"
(Note the change in quotes; this better ensures that the results of the local parameter expansions are properly quoted on the remote end, although there will still be problems if the parameter expansions themselves contain single quotes. A proper solution would run an explicit shell on the remote end, taking your local parameters as arguments, instead of using interpolation to build the script. The following is untested, but I think I quoted everything correctly.
ssh ... sh -c '
ls attributes/*"$1.rb" || { ls -l attributes/ && exit 1; };
"$EDITOR" attributes/*"\$1.rb" "$2";
' _ "$CONF_FILE" "$PART_VER"
)
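To see the difference locally (a toy sketch, separate from the remote command in the question): exit inside parentheses only leaves the subshell, while exit inside a brace group ends the enclosing script.
#!/bin/bash
# Subshell: exit 1 only terminates the ( ... ) environment.
( false || exit 1 )
echo "still running after the subshell"
# Command group: exit 1 terminates the whole script.
{ false || exit 1; }
echo "never printed"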
For now, I resorted to duplicating the condition, like this:
ssh ... '
ls attributes/*'$CONF_FILE'.rb > /dev/null || ls -l --color=always attributes/
ls attributes/*'$CONF_FILE'.rb > /dev/null || exit 1;
'$EDITOR' attributes/*'$CONF_FILE'.rb '$PART_VER';'
(I used &>- as a shorthand for redirecting stdout and stderr to /dev/null, but that is probably not correct; see the comment.)

How to change argv[0] value in shell / bash script?

The set command can be used to change values of the positional arguments $1 $2 ...
But, is there any way to change $0 ?
In Bash 5 or later you can change $0 like this:
$ cat bar.sh
#!/bin/bash
echo $0
BASH_ARGV0=lol
echo $0
$ ./bar.sh
./bar.sh
lol
ZSH even supports assigning directly to 0:
$ cat foo.zsh
#!/bin/zsh
echo $0
0=lol
echo $0
$ ./foo.zsh
./foo.zsh
lol
Here is another method. It is implemented through direct command execution, which is somewhat better than sourcing (the dot command). However, this method works only with the sh interpreter, not bash, since it relies on sh accepting the -s and -c options passed together:
#! /bin/sh
# try executing this script with several arguments to see the effect
test ".$INNERCALL" = .YES || {
export INNERCALL=YES
cat "$0" | /bin/sh -s -c : argv0new "$#"
exit $?
}
printf "argv[0]=$0\n"
i=1 ; for arg in "$#" ; do printf "argv[$i]=$arg\n" ; i=`expr $i + 1` ; done
The expected output of both examples, when run as ./the_example.sh 1 2 3, should be:
argv[0]=argv0new
argv[1]=1
argv[2]=2
argv[3]=3
#! /bin/sh
# try executing this script with several arguments to see the effect
test ".$INNERCALL" = .YES || {
export INNERCALL=YES
# this method works both for shell and bash interpreters
sh -c ". '$0'" argv0new "$#"
exit $?
}
printf "argv[0]=$0\n"
i=1 ; for arg in "$#" ; do printf "argv[$i]=$arg\n" ; i=`expr $i + 1` ; done

Checking in bash and csh if a command is builtin

How can I check in bash and csh if commands are builtin? Is there a method compatible with most shells?
You can try using which in csh or type in bash. If something is a built-in command, it will say so; otherwise, you get the location of the command in your PATH.
In csh:
# which echo
echo: shell built-in command.
# which parted
/sbin/parted
In bash:
# type echo
echo is a shell builtin
# type parted
parted is /sbin/parted
type might also show something like this:
# type clear
clear is hashed (/usr/bin/clear)
...which means that it's not a built-in, but that bash has stored its location in a hashtable to speed up access to it; (a little bit) more in this post on Unix & Linux.
In bash, you can use the type command with the -t option. Full details can be found in the bash-builtins man page but the relevant bit is:
type -t name
If the -t option is used, type prints a string which is one of alias, keyword, function, builtin, or file if name is an alias, shell reserved word, function, builtin, or disk file, respectively. If the name is not found, then nothing is printed, and an exit status of false is returned.
Hence you can use a check such as:
if [[ "$(type -t read)" == "builtin" ]] ; then echo read ; fi
if [[ "$(type -t cd)" == "builtin" ]] ; then echo cd ; fi
if [[ "$(type -t ls)" == "builtin" ]] ; then echo ls ; fi
which would result in the output:
read
cd
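Wrapped up as a tiny helper (a sketch of my own, not from the man page; note that an alias or function with the same name will mask the builtin here, which is what the later answers address):
is_builtin() {
  # Succeeds only if bash classifies the name as a builtin.
  [ "$(type -t "$1")" = "builtin" ]
}
is_builtin cd && echo "cd is a built-in"
is_builtin ls || echo "ls is not a built-in"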
For bash, use the type command.
For csh, you can use:
which command-name
If it's a built-in, it will say so.
Not sure if it works the same for bash.
Be careful with aliases, though. There may be options for that.
The other answers here are close, but they all fail if there is an alias or function with the same name as the command you're checking.
Here's my solution:
In tcsh
Use the where command, which gives all occurrences of the command name, including whether it's a built-in. Then grep to see if one of the lines says that it's a built-in.
alias isbuiltin 'test \!:1 != "builtin" && where \!:1 | egrep "built-?in" > /dev/null || echo \!:1" is not a built-in"'
In bash/zsh
Use type -a, which gives all occurrences of the command name, including whether it's a built-in. Then grep to see if one of the lines says that it's a built-in.
isbuiltin() {
  if [[ $# -ne 1 ]]; then
    echo "Usage: $0 command"
    return 1
  fi
  cmd=$1
  if ! type -a $cmd 2> /dev/null | egrep '\<built-?in\>' > /dev/null
  then
    printf "$cmd is not a built-in\n" >&2
    return 1
  fi
  return 0
}
In ksh88/ksh93
Open a sub-shell so that you can remove any aliases or command names of the same name. Then in the subshell, use whence -v. There's also some extra archaic syntax in this solution to support ksh88.
isbuiltin() {
  if [[ $# -ne 1 ]]; then
    echo "Usage: $0 command"
    return 1
  fi
  cmd=$1
  if (
    #Open a subshell so that aliases and functions can be safely removed,
    # allowing `whence -v` to see the built-in command if there is one.
    unalias "$cmd";
    if [[ "$cmd" != '.' ]] && typeset -f | egrep "^(function *$cmd|$cmd\(\))" > /dev/null 2>&1
    then
      #Remove the function iff it exists.
      #Since `unset` is a special built-in, the subshell dies if it fails
      unset -f "$cmd";
    fi
    PATH='/no';
    #NOTE: we can't use `whence -a` because it's not supported in older versions of ksh
    whence -v "$cmd" 2>&1
  ) 2> /dev/null | grep -v 'not found' | grep 'builtin' > /dev/null 2>&1
  then
    #No-op
    :
  else
    printf "$cmd is not a built-in\n" >&2
    return 1
  fi
}
Using the Solution
Once you have applied the aforementioned solution in the shell of your choice, you can use it like this...
At the command line:
$ isbuiltin command
If the command is a built-in, it prints nothing; otherwise, it prints a message to stderr.
Or you can use it like this in a script:
if isbuiltin $cmd 2> /dev/null
then
  echo "$cmd is a built-in"
else
  echo "$cmd is NOT a built-in"
fi

How to trick an application into thinking its stdout is a terminal, not a pipe

I'm trying to do the opposite of "Detect if stdin is a terminal or pipe?".
I'm running an application that's changing its output format because it detects a pipe on STDOUT, and I want it to think that it's an interactive terminal so that I get the same output when redirecting.
I was thinking that wrapping it in an expect script or using a proc_open() in PHP would do it, but it doesn't.
Any ideas out there?
Aha!
The script command does what we want...
script --return --quiet -c "[executable string]" /dev/null
Does the trick!
Usage:
script [options] [file]
Make a typescript of a terminal session.
Options:
-a, --append append the output
-c, --command <command> run command rather than interactive shell
-e, --return return exit code of the child process
-f, --flush run flush after each write
--force use output file even when it is a link
-q, --quiet be quiet
-t[<file>], --timing[=<file>] output timing data to stderr or to FILE
-h, --help display this help
-V, --version display version
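As a concrete (hypothetical) example, a command that normally drops its colours when piped can be wrapped like this:
# ls usually disables colour output when stdout is a pipe;
# running it under script(1) makes it see a terminal again.
script --return --quiet -c "ls --color=auto" /dev/null | less -R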
Based on Chris' solution, I came up with the following little helper function:
faketty() {
  script -qfc "$(printf "%q " "$@")" /dev/null
}
The quirky looking printf is necessary to correctly expand the script's arguments in $@ while protecting possibly quoted parts of the command (see example below).
Usage:
faketty <command> <args>
Example:
$ python -c "import sys; print(sys.stdout.isatty())"
True
$ python -c "import sys; print(sys.stdout.isatty())" | cat
False
$ faketty python -c "import sys; print(sys.stdout.isatty())" | cat
True
The unbuffer script that comes with Expect should handle this OK. If not, the application may be looking at something other than what its output is connected to, e.g. what the TERM environment variable is set to.
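For instance (assuming the expect package, which provides unbuffer, is installed):
# unbuffer runs the command with its stdout attached to a pty,
# so it behaves as if it were writing to a terminal.
unbuffer ls --color=auto | less -R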
Referring to the previous answer: on Mac OS X, "script" can be used like below...
script -q /dev/null commands...
But, because it may replace "\n" with "\r\n" on stdout, you may also need a script like this:
script -q /dev/null commands... | perl -pe 's/\r\n/\n/g'
If there are pipes between these commands, you need to flush stdout. For example:
script -q /dev/null commands... | ruby -ne 'print "....\n";STDOUT.flush' | perl -pe 's/\r\n/\n/g'
I don't know if it's doable from PHP, but if you really need the child process to see a TTY, you can create a PTY.
In C:
#include <stdio.h>
#include <stdlib.h>
#include <sysexits.h>
#include <unistd.h>
#include <pty.h>
int main(int argc, char **argv) {
    int master;
    struct winsize win = {
        .ws_col = 80, .ws_row = 24,
        .ws_xpixel = 480, .ws_ypixel = 192,
    };
    pid_t child;

    if (argc < 2) {
        printf("Usage: %s cmd [args...]\n", argv[0]);
        exit(EX_USAGE);
    }

    child = forkpty(&master, NULL, NULL, &win);
    if (child == -1) {
        perror("forkpty failed");
        exit(EX_OSERR);
    }
    if (child == 0) {
        execvp(argv[1], argv + 1);
        perror("exec failed");
        exit(EX_OSERR);
    }

    /* now the child is attached to a real pseudo-TTY instead of a pipe,
     * while the parent can use "master" much like a normal pipe */
}
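On Linux with glibc, forkpty lives in libutil, so a sketch like this would be compiled and run roughly as follows (the file name is mine):
gcc -o ptyrun ptyrun.c -lutil
./ptyrun ls --color=auto | cat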
I was actually under the impression that expect itself creates a PTY, though.
Updating @A-Ron's answer to
a) work on both Linux & macOS
b) propagate the status code indirectly (since the macOS script does not support it)
faketty () {
  # Create a temporary file for storing the status code
  tmp=$(mktemp)
  # Ensure it worked or fail with status 99
  [ "$tmp" ] || return 99
  # Produce a script that runs the command provided to faketty as
  # arguments and stores the status code in the temporary file
  cmd="$(printf '%q ' "$@")"'; echo $? > '$tmp
  # Run the script through /bin/sh with fake tty
  if [ "$(uname)" = "Darwin" ]; then
    # MacOS
    script -Fq /dev/null /bin/sh -c "$cmd"
  else
    script -qfc "/bin/sh -c $(printf "%q " "$cmd")" /dev/null
  fi
  # Ensure that the status code was written to the temporary file or
  # fail with status 99
  [ -s $tmp ] || return 99
  # Collect the status code from the temporary file
  err=$(cat $tmp)
  # Remove the temporary file
  rm -f $tmp
  # Return the status code
  return $err
}
Examples:
$ faketty false ; echo $?
1
$ faketty echo '$HOME' ; echo $?
$HOME
0
embedded_example () {
  faketty perl -e 'sleep(5); print "Hello world\n"; exit(3);' > LOGFILE 2>&1 </dev/null &
  pid=$!
  # do something else
  echo 0..
  sleep 2
  echo 2..
  echo wait
  wait $pid
  status=$?
  cat LOGFILE
  echo Exit status: $status
}
$ embedded_example
0..
2..
wait
Hello world
Exit status: 3
Too new to comment on the specific answer, but I thought I'd follow up on the faketty function posted by ingomueller-net above since it recently helped me out.
I found that this was creating a typescript file that I didn't want/need, so I added /dev/null as the script target file:
function faketty { script -qfc "$(printf "%q " "$@")" /dev/null ; }
There's also a pty program included in the sample code of the book "Advanced Programming in the UNIX Environment, Second Edition"!
Here's how to compile pty on Mac OS X:
man 4 pty # pty -- pseudo terminal driver
open http://en.wikipedia.org/wiki/Pseudo_terminal
# Advanced Programming in the UNIX Environment, Second Edition
open http://www.apuebook.com
cd ~/Desktop
curl -L -O http://www.apuebook.com/src.tar.gz
tar -xzf src.tar.gz
cd apue.2e
wkdir="${HOME}/Desktop/apue.2e"
sed -E -i "" "s|^WKDIR=.*|WKDIR=${wkdir}|" ~/Desktop/apue.2e/Make.defines.macos
echo '#undef _POSIX_C_SOURCE' >> ~/Desktop/apue.2e/include/apue.h
str='#include <sys/select.h>'
printf '%s\n' H 1i "$str" . wq | ed -s calld/loop.c
str='
#undef _POSIX_C_SOURCE
#include <sys/types.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s file/devrdev.c
str='
#include <sys/signal.h>
#include <sys/ioctl.h>
'
printf '%s\n' H 1i "$str" . wq | ed -s termios/winch.c
make
~/Desktop/apue.2e/pty/pty ls -ld *
I was trying to get colors when running shellcheck <file> | less on Linux, so I tried the above answers, but they produce this bizarre effect where text is horizontally offset from where it should be:
In ./all/update.sh line 6:
for repo in $(cat repos); do
^-- SC2013: To read lines rather than words, pipe/redirect to a 'while read' loop.
(For those unfamiliar with shellcheck, the line with the warning is supposed to line up with where the problem is.)
In order for the answers above to work with shellcheck, I tried one of the options from the comments:
faketty() {
  0</dev/null script -qfc "$(printf "%q " "$@")" /dev/null
}
This works. I also added --return and used long options, to make this command a little less inscrutable:
faketty() {
  0</dev/null script --quiet --flush --return --command "$(printf "%q " "$@")" /dev/null
}
Works in Bash and Zsh.
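With that in place, the original use case looks like this (the file name is just an example):
# shellcheck keeps its colour output because it believes it is
# writing to a terminal; less -R renders the escape codes.
faketty shellcheck myscript.sh | less -R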
