This question already has answers here:
History command works in a terminal, but doesn't when written as a bash script
(3 answers)
Closed 2 years ago.
Suppose we have an env.sh file that contains:
echo $(history | tail -n2 | head -n1) | sed 's/[0-9]* //' # looking for the last typed command
When executing this script with bash env.sh, the output is empty, but when we execute it with ./env.sh, we get the last typed command.
I just want to know the difference between them.
Notice that if we add #!/bin/bash at the beginning of the script, ./env.sh will no longer output anything.
Bash disables history in non-interactive shells by default. If you want to enable it anyway, you can do so like this:
#!/bin/bash
echo $HISTFILE # will be empty in a non-interactive shell
HISTFILE=~/.bash_history # set it again
set -o history
# the command will work now
history
This is done to avoid cluttering the history with commands run from shell scripts.
Adding a hashbang (which names the program that should interpret the file) means that running the script via ./env.sh now invokes it with the binary /bin/bash, i.e. as a fresh non-interactive bash, thus again printing no history.
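If the goal is simply to read the last command typed in the interactive shell, a minimal sketch that reads the saved history file directly instead of using the history builtin (assuming the default ~/.bash_history location; note an interactive shell normally flushes its history only on exit or after history -a, so the file can lag):
#!/bin/bash
# Read the last saved interactive command straight from the history file.
# HISTFILE is unset in non-interactive shells, so fall back to the default.
tail -n 1 "${HISTFILE:-$HOME/.bash_history}"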
This question already has answers here:
SFTP bash shell script to copy the file from source to destination
(3 answers)
Closed 1 year ago.
I am trying to run the cd command in a bash script inside an SFTP session. My code currently looks like
#!/bin/bash
sftp $1@$2
and then I want to use cd within the SFTP session, and other commands too, but cd is fine for now. How can I do this?
Try the batch mode. Quoting the man page:
-b batchfile
Batch mode reads a series of commands from an input
batchfile instead of stdin. [...] A batchfile of ‘-’ may be used to indicate standard input.
You can use batch mode to run commands in sequence, e.g.
#!/bin/bash
cat << EOF > sftp-commands-to-run.txt
ls
put myfile.txt
... more commands ...
EOF
sftp -b sftp-commands-to-run.txt $1@$2
You can also pass the commands to run via stdin, e.g.
#!/bin/bash
echo ls myfile.txt | sftp -b - $1@$2
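You can also keep everything in one file by feeding a heredoc to sftp's stdin. A minimal sketch, assuming the same user/host positional parameters; /some/remote/dir is a placeholder:
#!/bin/bash
# Run several commands in one sftp session; -b - reads the batch from stdin.
sftp -b - "$1@$2" << 'EOF'
cd /some/remote/dir
ls
EOF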
This question already has answers here:
Run Perl Script From Unix Shell Script
(2 answers)
Closed 3 years ago.
How do I replace [RUN_ABOVE_PERL_SORTING_SCRIPT_HERE] with something that runs this perl script stored in a bash variable?
#!/usr/bin/env bash
# The perl script to sort getfacl output:
# https://github.com/philips/acl/blob/master/test/sort-getfacl-output
find /etc -name .git -prune -o -print | xargs getfacl -peL | [RUN_ABOVE_PERL_SORTING_SCRIPT_HERE] > /etc/.facl.nogit.txt
Notes:
I do not want to employ 2 files (a bash script and a perl script) to solve this problem; I want the functionality to be stored all in one bash script file.
I do not want to immediately run the perl script when storing the perl-script variable, because I want to run it later in the getfacl(1) bash pipeline shown above.
There are many similar Stack Overflow questions and answers, but none that I can find (with clean-reading code, anyway) that solve both a) the multi-line and b) the delayed-execution portions of this problem.
And to clarify: this problem is not specifically about getfacl(1), which is simply a catalyst to explore how to embed perl scripts (and possibly other scripting languages like python) into bash variables for delayed execution in a bash script.
Employ the bash read command, which reads the perl script into a variable that's executed later in the bash script.
#!/usr/bin/env bash
# sort getfacl output: the following code is copied from:
# https://github.com/philips/acl/blob/master/test/sort-getfacl-output
read -r -d '' SCRIPT <<'EOS'
#!/usr/bin/env perl -w
undef $/;
print join("\n\n", sort split(/\n\n/, <>)), "\n\n";
EOS
find /etc -name .git -prune -o -print | xargs getfacl -peL | perl -e "$SCRIPT" > /etc/.facl.nogit.txt
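One note on read here: with -d '' it exits nonzero when it reaches end of input, which can trip up scripts running under set -e. A sketch of an equivalent using plain command substitution (the only difference is that $(...) strips trailing newlines, which is harmless for this snippet):
#!/usr/bin/env bash
# Same idea via command substitution instead of read.
SCRIPT=$(cat <<'EOS'
undef $/;
print join("\n\n", sort split(/\n\n/, <>)), "\n\n";
EOS
)
find /etc -name .git -prune -o -print | xargs getfacl -peL | perl -e "$SCRIPT" > /etc/.facl.nogit.txt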
This is covered by Run Perl Script From Unix Shell Script.
As they apply here:
You can pass the code to Perl using -e/-E.
perl -e"$script"
or
perl -e"$( curl "$url" )"
You can pass the code via STDIN.
printf %s "$script" | perl
or
curl "$url" | perl
(This won't work for you because your pipeline already uses STDIN for the data.)
You can create a virtual file.
perl <( printf %s "$script" )
or
perl <( curl "$url" )
You can take advantage of perl's -x option.
(Not applicable if you want to download the script dynamically.)
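A sketch of that last route, embedding the Perl code in the bash file itself: perl -x "$0" makes perl re-open the running script, skip to the first line starting with #! and containing "perl", and execute from there; the exit 0 keeps bash from reading the Perl code:
#!/usr/bin/env bash
find /etc -name .git -prune -o -print | xargs getfacl -peL | perl -x "$0" > /etc/.facl.nogit.txt
exit 0
#!perl
undef $/;
print join("\n\n", sort split(/\n\n/, <>)), "\n\n";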
All of the above assume the following command has already been executed:
url='https://github.com/philips/acl/blob/master/test/sort-getfacl-output'
Some of the above assume the following command has already been executed:
script="$( curl "$url" )"
This question already has answers here:
How to trick an application into thinking its stdout is a terminal, not a pipe
(9 answers)
Closed 5 years ago.
Whenever I use grep and pipe it to another program, the --color option is not respected. I know I could use --color=always, but the same issue comes up with other commands, where I would like the command's output to be exactly what I would get in a tty.
So my question is: is it possible to trick a command into thinking that it is being run inside a tty?
For example, running
grep --color word file # Outputs some colors
grep --color word file | cat # Doesn't output any colors
I'd like to be able to write something like :
IS_TTY=TRUE grep --color word file | cat # Outputs some colors
This question seems to have a tool that might do what I want: empty - run processes and applications under a pseudo-terminal (PTY). But from what I could read in the docs, I'm not sure it can help with my problem.
There are a number of options, as outlined by several other Stack Overflow answers (see Caarlos's comment). I'll summarize them here though:
Use script + printf, requires no extra dependencies:
0<&- script -qefc "ls --color=auto" /dev/null | cat
Or make a bash function faketty to encapsulate it:
faketty () {
script -qefc "$(printf "%q " "$@")" /dev/null
}
faketty ls --color=auto | cat
Or in the fish shell:
function faketty
script -qefc "(printf "%q " "$argv")" /dev/null
end
faketty ls --color=auto | cat
(credit goes to this answer)
http://linux.die.net/man/1/script
Use the unbuffer command (part of the expect suite of commands). Unfortunately this requires an extra package install, but it's the easiest solution:
sudo apt-get install expect-dev # or brew install expect
unbuffer -p ls --color=auto | cat
Or if you use the fish shell:
function faketty
unbuffer -p $argv
end
faketty ls --color=auto | cat
http://linux.die.net/man/1/unbuffer
This is a great article on how TTYs work and what pseudo-TTYs (PTYs) are; it's worth a look if you want to understand how the Linux shell uses file descriptors to pass around input, output, and signals. http://www.linusakesson.net/programming/tty/index.php
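For context, these tricks work because programs decide whether to colorize by asking whether stdout is a terminal (isatty(3)). A minimal sketch of the same check in bash, using the [ -t 1 ] test on file descriptor 1:
#!/bin/bash
# Report whether stdout is a terminal. Pipe this script through cat
# (or run it under faketty from above) to watch the answer change.
if [ -t 1 ]; then
    echo "stdout is a tty"
else
    echo "stdout is a pipe or file"
fi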
This question already has answers here:
How to use aliases defined in .bashrc in other scripts?
(6 answers)
Closed 2 years ago.
An alias defined in a sample shell script is not working, and I am new to Linux shell scripting.
Below is the sample shell file:
#!/bin/sh
echo "Setting Sample aliases ..."
alias xyz="cd /home/usr/src/xyz"
echo "Setting done ..."
On executing this script, I can see the echo messages. But if I then run the alias, I see the error below:
xyz: command not found
Am I missing something?
Source your script; don't execute it like ./foo.sh or sh foo.sh.
If you execute it that way, it runs in a sub-shell, not in your current shell.
source foo.sh
would work for you.
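A quick transcript to see the difference (a sketch, using the script from the question saved as foo.sh):
$ ./foo.sh          # runs in a child shell; the alias dies with it
$ xyz
xyz: command not found
$ source foo.sh     # runs in the current shell; the alias persists
$ xyz               # now changes directory as expected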
You need to set a specific option to do so, expand_aliases:
shopt -s expand_aliases
Example:
# With option
$ cat a
#!/bin/bash
shopt -s expand_aliases
alias a="echo b"
type a
a
$ ./a
a is aliased to `echo b'
b
# Without option
$ cat a
#!/bin/bash
alias a="echo b"
type a
a
$ ./a
./a: line 3: type: a: not found
./a: line 4: a: command not found
reference: https://unix.stackexchange.com/a/1498/27031 and https://askubuntu.com/a/98786/127746
Source the script: source script.sh
./script.sh is executed in a sub-shell, and the changes apply only to that sub-shell. Once the command terminates, the sub-shell goes away, and so do the changes.
OR
HACK: Simply run the following commands in your shell and then execute the script.
alias xyz="cd /home/usr/src/xyz"
./script.sh
To unalias, use the following at the shell prompt:
unalias xyz
If the alias is defined in a script, it will be gone by the time the script finishes executing.
In case you want it to be permanent:
Your alias is well defined, but you have to store it in ~/.bashrc, not in a shell script.
Add it to that file, then source it with . ~/.bashrc; this loads the file so the alias becomes available.
In case you want it to be used just in current session:
Just write it in your console prompt.
$ aa
The program 'aa' is currently not installed. ...
$
$ alias aa="echo hello"
$
$ aa
hello
$
Also, from Kent's answer we can see that you can source any file with source your_file. In that case you do not need a shell script; a normal file will do.
You may use the commands below:
shopt -s expand_aliases
source ~/.bashrc
eval "$command"
Your alias has to be in your .profile file, not in your script, if you want to call it at the prompt.
If you put an alias in your script, then you have to call it within your script.
Sourcing the file is the correct answer when trying to run a script that defines an alias inside:
source yourscript.sh
Put your alias in a file called ~/.bash_aliases and then, on many distributions, it will be loaded automatically, with no need to run the source command manually.
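For reference, the snippet that loads ~/.bash_aliases ships in the default ~/.bashrc on Debian and Ubuntu; a sketch of what it looks like (check your own ~/.bashrc, since distributions vary):
# In ~/.bashrc (present by default on many distributions):
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi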
This question already has answers here:
How to change argv0 in bash so command shows up with different name in ps?
(8 answers)
Closed 8 years ago.
By default, bash passes the executable's filename as the first (0th, to be more precise) argument when invoking programs.
Is there any special form for calling programs that can be used to pass a different argument 0?
It is useful for programs that behave in different ways depending on the name or path they were invoked with.
I think the only way to set argument 0 is to change the name of the executable. For example:
$ echo 'echo $0' > foo.sh
$ ln foo.sh bar.sh
$ sh foo.sh
foo.sh
$ sh bar.sh
bar.sh
Some shells have a non-POSIX extension to the exec command that allow you to specify an alternate value:
$ exec -a specialshell bash
$ echo $0
specialshell
I'm not aware of a similar technique for changing the name of a child process like this, other than running it in a subshell:
$ ( exec -a subshell-bash bash )
Update: three seconds later, I find the argv0 command at http://cr.yp.to/ucspi-tcp/argv0.html.
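To confirm the subshell trick changes what other tools see, a minimal sketch (specialname is an arbitrary label; ps prints the child's argv):
# Launch a child whose argv[0] is overridden, then inspect it.
( exec -a specialname sleep 30 ) &
ps -p $! -o args=    # prints: specialname 30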