pass 0 argument (executable filename by default) to called programs [duplicate] - bash

This question already has answers here:
How to change argv0 in bash so command shows up with different name in ps?
(8 answers)
Closed 8 years ago.
By default, bash passes the executable filename as the first argument (argument 0, to be precise) when invoking programs.
Is there any special form for calling programs that can be used to pass a different argument 0?
It would be useful for the bunch of programs that behave differently depending on the name or path they were invoked as.

I think the only way to set argument 0 is to change the name of the executable. For example:
$ echo 'echo $0' > foo.sh
$ ln foo.sh bar.sh
$ sh foo.sh
foo.sh
$ sh bar.sh
bar.sh
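For illustration (a sketch, not part of the original answer), a single script can branch on the name it was invoked under, busybox-style; the names here are just the placeholders from above:
#!/bin/sh
# Dispatch on $0 -- the same file can be hard-linked as foo.sh and bar.sh
case "$(basename "$0")" in
  foo.sh) echo "running as foo" ;;
  bar.sh) echo "running as bar" ;;
  *)      echo "unknown invocation name: $0" ;;
esac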
Some shells have a non-POSIX extension to the exec command that allow you to specify an alternate value:
$ exec -a specialshell bash
$ echo $0
specialshell
I'm not aware of a similar technique for changing the name of a child process like this, other than to run in a subshell
$ ( exec -a subshell-bash bash )
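A quick way to verify the effect (a sketch, assuming a Linux-style ps): background the subshell, which exec replaces in place, then inspect it by PID.
$ ( exec -a my-sleeper sleep 5 ) &
$ ps -o pid,args -p $!   # the COMMAND column should read "my-sleeper 5"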
Update: three seconds later, I find the argv0 command at http://cr.yp.to/ucspi-tcp/argv0.html.

Related

Bash pass argument to --init-file script

I'm running a shell script using bash --init-file script.sh that runs some commands, then leaves an interactive session open. How can I pass arguments to this init file from the process that runs the initial bash command? bash --init-file 'script.sh arg' doesn't work.
Interestingly, if the script contains echo "$# $*", passing an argument as I did above causes it to print nothing, while not passing an argument prints '0'.
Create a file with the content:
#!/bin/bash
script.sh arg
Pass that file to bash: bash --init-file thatfile
I'd like the arg to come from the command that runs bash with the --init-file option.
Create a file from the command line and pass it:
arg="$1"
cat >thatfile <<EOF
$(declare -p arg)
script.sh \"\$arg\"
EOF
bash --init-file thatfile
You might also be interested in researching process substitution in bash.
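For example (a sketch building on that hint; script.sh and the argument are placeholders), process substitution avoids the temporary file entirely:
arg="$1"
# %q re-quotes the value so it survives being written into the init file
bash --init-file <(printf 'script.sh %q\n' "$arg")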

Difference between bash script.sh and ./script.sh [duplicate]

This question already has answers here:
History command works in a terminal, but doesn't when written as a bash script
(3 answers)
Closed 2 years ago.
Suppose we have env.sh file that contains:
echo $(history | tail -n2 | head -n1) | sed 's/[0-9]* //' #looking for the last typed command
When executing this script with bash env.sh, the output is empty,
but when we execute the script with ./env.sh, we get the last typed command.
I just want to know the difference between them.
Notice that if we add #!/bin/bash at the beginning of the script, ./env.sh no longer outputs anything either.
History is disabled by Bash in non-interactive shells by default. If you want to enable it, however, you can do so like this:
#!/bin/bash
echo $HISTFILE # will be empty in a non-interactive shell
HISTFILE=~/.bash_history # set it again
set -o history
# the command will work now
history
The reason this is done is to avoid cluttering the history with commands run by shell scripts.
Adding a hashbang (meaning the file is to be interpreted as a script by the program specified in the hashbang) means that running it via ./env.sh invokes the script with the /bin/bash binary, i.e. as a non-interactive bash, thus again printing no history.
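To see the distinction directly, here is a small sketch (not from the original answer) that reports whether the shell is interactive and whether a history file is set:
#!/bin/bash
# $- contains "i" only in interactive shells
case $- in
  *i*) echo "interactive shell" ;;
  *)   echo "non-interactive shell (history off by default)" ;;
esac
echo "HISTFILE=${HISTFILE:-<unset>}"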

using alias in shell script? [duplicate]

This question already has answers here:
How to use aliases defined in .bashrc in other scripts?
(6 answers)
Closed 2 years ago.
My alias defined in a sample shell script is not working, and I am new to Linux shell scripting.
Below is the sample shell file
#!/bin/sh
echo "Setting Sample aliases ..."
alias xyz="cd /home/usr/src/xyz"
echo "Setting done ..."
On executing this script, I can see the echo messages. But if I execute the alias command, I see the below error
xyz: command not found
Am I missing something?
Source your script, don't execute it like ./foo.sh or sh foo.sh.
If you execute your script like that, it runs in a sub-shell, not your current shell.
source foo.sh
would work for you.
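For illustration, a hypothetical transcript (assuming the script above is saved as sample.sh):
$ ./sample.sh            # runs in a child shell; the alias does not survive
Setting Sample aliases ...
Setting done ...
$ xyz
xyz: command not found
$ source sample.sh       # runs in the current shell; the alias persists
Setting Sample aliases ...
Setting done ...
$ xyz                    # now changes directory to /home/usr/src/xyz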
You need to set a specific option to do so, expand_aliases:
shopt -s expand_aliases
Example:
# With option
$ cat a
#!/bin/bash
shopt -s expand_aliases
alias a="echo b"
type a
a
$ ./a
a is aliased to `echo b'
b
# Without option
$ cat a
#!/bin/bash
alias a="echo b"
type a
a
$ ./a
./a: line 3: type: a: not found
./a: line 4: a: command not found
reference: https://unix.stackexchange.com/a/1498/27031 and https://askubuntu.com/a/98786/127746
Source the script: source script.sh
./script.sh is executed in a sub-shell, and the changes made apply only to that sub-shell. Once the command terminates, the sub-shell goes away and so do the changes.
OR
HACK: Simply run the following command in your shell and then execute the script.
alias xyz="cd /home/usr/src/xyz"
./script.sh
To unalias, use the following at the shell prompt:
unalias xyz
If you define the alias in a script, it will be gone by the time the script finishes executing.
In case you want it to be permanent:
Your alias is well defined, but you have to store it in ~/.bashrc, not in a shell script.
Add it to that file and then source it with . ~/.bashrc; this reloads the file so the alias becomes available.
In case you want it to be used just in current session:
Just write it in your console prompt.
$ aa
The program 'aa' is currently not installed. ...
$
$ alias aa="echo hello"
$
$ aa
hello
$
Also, from Kent's answer we can see that you can source it with source your_file. In that case you do not need a shell script; a normal file will do.
You may use the commands below:
shopt -s expand_aliases
source ~/.bashrc
eval $command
Your alias has to be in your .profile file, not in your script, if you are calling it at the prompt.
If you put an alias in your script then you have to call it within your script.
Sourcing the file is the correct answer when trying to run a script that defines an alias.
source yourscript.sh
Put your alias in a file called ~/.bash_aliases and then, on many distributions, it will get loaded automatically; there is no need to manually run the source command to load it.
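This works because many distributions ship a default ~/.bashrc containing a snippet along these lines (the exact wording may differ on your system):
# load user-defined aliases if the file exists
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi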

How to change argv0 in bash so command shows up with different name in ps?

In a C program I can write to argv[0] and the new name shows up in a ps listing.
How can I do this in bash?
You can do it when running a new program via exec -a <newname>.
Just for the record, even though it does not exactly answer the original poster's question, this is something trivial to do with zsh:
ARGV0=emacs nethack
I've had a chance to go through the source for bash and it does not look like there is any support for writing to argv[0].
I'm assuming you've got a shell script that you wish to execute such that the script process itself has a new argv[0]. For example (I've only tested this in bash, so I'm using that, but this may work elsewhere).
#!/bin/bash
echo "process $$ here, first arg was $1"
ps -p $$
The output will be something like this:
$ ./script arg1
process 70637 here, first arg was arg1
PID TTY TIME CMD
70637 ttys003 0:00.00 /bin/bash ./script arg1
So ps shows the shell, /bin/bash in this case. Now try your interactive shell's exec -a, but in a subshell so you don't blow away the interactive shell:
$ (exec -a MyScript ./script arg1)
process 70936 here, first arg was arg1
PID TTY TIME CMD
70936 ttys008 0:00.00 /bin/bash /path/to/script arg1
Whoops, still showing /bin/bash. What happened? The exec -a probably did set argv[0], but then a new instance of bash started because the operating system read #!/bin/bash at the top of the script. OK, what if we perform the exec'ing inside the script somehow? First, we need some way of detecting whether this is the "first" execution of the script or the second, exec'ed instance; otherwise the second instance will exec again, and on and on in an infinite loop. Next, the executable we exec must not be a file with a #!/bin/bash line at the top, to prevent the OS from overriding our desired argv[0]. Here's my attempt:
$ cat ./script
#!/bin/bash
__second_instance="__second_instance_$$"
[[ -z ${!__second_instance} ]] && {
    declare -x "__second_instance_$$=true"
    exec -a MyScript "$SHELL" "$0" "$@"
}
echo "process $$ here, first arg was $1"
ps -p $$
Thanks to this answer, I first test for the environment variable __second_instance_$$, based on the PID (which does not change through exec) so that it won't collide with other scripts using this technique. If it's empty, I assume this is the first instance, and I export that environment variable, then exec. But, importantly, I do not exec this script; I exec the shell binary directly, with this script ($0) as an argument, passing along all the other arguments as well ($@). The environment variable is a bit of a hack.
Now the output is this:
$ ./script arg1
process 71143 here, first arg was arg1
PID TTY TIME CMD
71143 ttys008 0:00.01 MyScript ./script arg1
That's almost there. The argv[0] is MyScript like I want, but there's that extra arg ./script in there which is a consequence of executing the shell directly (rather than via the OS's #! processing). Unfortunately, I don't know how to get any better than this.
Update for Bash 5.0
Looks like Bash 5.0 adds support for writing to special variable BASH_ARGV0, so this should become far simpler to accomplish.
(see release announcement)
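A minimal sketch of what that looks like (assuming bash >= 5.0; as discussed below, this changes what the process sees as its own name, not necessarily what ps displays):
#!/bin/bash
# BASH_ARGV0 mirrors $0; assigning to it also assigns to the shell's argv[0]
echo "before: $0"
BASH_ARGV0="MyScript"
echo "after:  $0"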
( exec -a foo bash -c 'echo $0' )
ps and others inspect two things, neither of which is argv0: /proc/PID/comm (for the "process name") and /proc/PID/cmdline (for the command line). Assigning to argv0 will not change what ps shows in the CMD column, but it will change what the process usually sees as its own name (in output messages, for example).
To change the CMD column, write to /proc/PID/comm:
echo -n mynewname >/proc/$$/comm; ps
You cannot write to or modify /proc/PID/cmdline in any way.
Processes can set their own "title" by writing to the memory area in which argv and envp are located (note that this is different from setting BASH_ARGV0). This has the side effect of changing /proc/PID/cmdline as well, which is what some daemons do in order to prettify (hide?) their command lines. libbsd's setproctitle() does exactly that, but you cannot do that in Bash without the support of external tools.
I will just add that this must be possible at runtime, at least in some environments. Assigning $0 in perl on Linux does change what shows up in ps. I do not know how that is implemented, however. If I can find out, I'll update this.
Edit:
Based on how perl does it, it is non-trivial. I doubt there is any bash built-in way to do it at runtime, but I don't know for sure. You can see how perl sets the process name at runtime.
Copy the bash executable to a different name.
You can do this in the script itself...
cp /bin/bash ./new-name
PATH=$PATH:.
exec new-name $0
If you are trying to pretend you are not a shell script, you can rename the script itself to something cool or even " " (a single space), so that
exec new-name " "
will execute your script with bash and appear in the ps list as just new-name.
OK so calling a script " " is a very bad idea :)
Basically, to change the name shown for
bash script
rename bash and rename the script.
If you are worried, as Mr McDoom apparently is, about copying a binary to a new name (which is entirely safe), you could also create a symlink:
ln -s /bin/bash ./MyFunkyName
./MyFunkyName
This way, the symlink is what appears in the ps list. (Again, use PATH=$PATH:. if you don't want the ./)

Using bash shell inside Matlab

I'm trying to put a large set of bash commands into a MATLAB script and manage my variables (like file paths, parameters, etc.) from there. This is also needed because the workflow requires manual intervention at certain steps, and I would like to use the step debugger for that.
The problem is, I don't understand how MATLAB interfaces with the bash shell.
I can't do system('source .bash_profile') to define my bash variables. Similarly, I can't define them by hand and read them back either; e.g. system('export var=somepath') followed by system('echo $var') returns nothing.
What is the correct way of defining variables in bash inside MATLAB's command window? How can I construct a workflow of commands that uses the variables I defined as well as those in my .bash_profile?
Each system() call spawns its own shell, so a variable exported in one call is gone by the next. If all you need to do is set environment variables, do this in MATLAB instead:
>> setenv('var','somepath')
>> system('echo $var')
Invoke Bash as a login shell to get your ~/.bash_profile sourced and use the -c option to execute a group of shell commands in one go.
# in Terminal.app
man bash | less -p 'the --login option'
man bash | less -p '-c string'
echo 'export profilevar=myProfileVar' >> ~/.bash_profile
# test in Terminal.app
/bin/bash --login -c '
echo "$0"
echo "$3"
echo "$#"
export var=somepath
echo "$var"
echo "$profilevar"
ps
export | nl
' zero 1 2 3 4 5
# in Matlab
cmd=sprintf('/bin/bash --login -c ''echo "$profilevar"; ps''');
[r,s]=system(cmd);
disp(s);
