Obtain the last command executed by bash, inside a perl script [duplicate]

This question already has answers here:
What's the difference between Perl's backticks, system, and exec?
(5 answers)
How can I store Perl's system function output to a variable?
(8 answers)
Closed 5 years ago.
I would like to write a Perl script that does something similar to what fc does. Essentially, I want to programmatically edit the last command executed.
Something like this should work
perl -wle 'print system("/bin/bash", "-i", "-c", "history | tail -n 1")'
but system returns the exit status of the executed command, not its stdout, which is what I want; meanwhile, I'm not able to activate the history (the -i flag) using backticks, qx//, or by opening a pipe.
I know that with backticks, qx//, or a pipe it is simple to read a command's stdout, but in this case how do I use the bash builtin history command properly?
Even using system and passing -i to bash, I'm not able to get the expected output from history | tail -n 1. Redirecting the output to a file, I found the file empty.
perl -wle 'print system("/bin/bash", "-i", "-c", "history | tail -n 1 > /tmp/pippo")'
So am I forced to write the bash history to a file with history -w and read that file inside Perl?

Use qx to capture shell command output. Read the ~/.bash_history file directly, since you cannot capture the output of the shell builtin history.
You may need to add history -a to your .bashrc file to get bash to write the history file after every command, or add history -w to your bash PROMPT_COMMAND. Other settings are covered nicely over at unix.stackexchange.com.
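For instance, a minimal ~/.bashrc setup along those lines might look like this (a sketch; the PROMPT_COMMAND wiring shown is just one of several workable variants):
shopt -s histappend           # append to ~/.bash_history instead of overwriting it on exit
PROMPT_COMMAND='history -a'   # flush each command to the history file as soon as it finishes
With that in place, the last command is available straight from the file: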
$ echo "foo"
foo
$ perl -e 'chomp(my $cmd = qx(tail -n 1 ~/.bash_history)); print "$cmd bar\n";'
echo "foo" bar

Related

How to read user's input from bash (when catting a script) [duplicate]

I have a simple Bash script:
#!/usr/bin/env bash
read X
echo "X=$X"
When I execute it with ./myscript.sh it works. But when I execute it with cat myscript.sh | bash it actually puts echo "X=$X" into $X.
So this script prints hello world when executed with cat myscript.sh | bash:
#!/usr/bin/env bash
read X
hello world
echo "$X"
What's the benefit of executing a script with cat myscript.sh | bash? Why doesn't it do the same thing as when I execute it with ./myscript.sh?
How can I keep Bash from executing line by line, and instead have it execute all the lines only after stdin has reached its end?
Instead of just running
read X
...replace it with...
read X </dev/tty || {
X="some default because we can't read from the TTY here"
}
...if you want to read from the console. Of course, this only works if you have a /dev/tty, but if you wanted to do something robust, you wouldn't be piping from curl into a shell. :)
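With that change, the piped invocation behaves as intended; a sketched session, where hello is whatever you type at the terminal:
$ cat myscript.sh | bash
hello
X=hello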
Another alternative, of course, is to pass in your value of X on the command line.
curl https://some.place/with-untrusted-code-only-idiots-will-run-without-reading \
| bash -s "value of X here"
...and refer to "$1" in your script when you want X.
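On the receiving side, a minimal sketch of such a script (the fallback text is just an illustration):
#!/usr/bin/env bash
X="${1:-no value passed}"   # take X from the first argument, with a fallback default
echo "X=$X"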
(By the way, I sure hope you're at least using SSL for this, rather than advising people to run code they download over plain HTTP with no out-of-band validation step. Lots of people do it, sure, but that's making sites they download from -- like rvm.io -- big targets. Big, easy-to-man-in-the-middle-or-DNS-hijack targets).
When you cat a script to bash, the code to execute comes from standard input.
Where does read read from? That's right: also standard input. This is why you can cat input to programs that take standard input (like sed, awk, etc.).
So you are not running "a script" per se when you do this. You are running a series of input lines.
Where would you like read to read data from in this setup?
You can manually do that (if you can define such a place). Alternatively you can stop running your script like this.
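One more workaround worth mentioning (my addition, not from the answer above): let bash read the whole script via command substitution instead of a pipe, which leaves stdin free for read. The URL below is a placeholder:
bash -c "$(cat myscript.sh)"
# or, for the download-and-run case:
bash -c "$(curl -fsSL https://example.com/myscript.sh)"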

Bash get the command that is piping into a script

Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's no guarantee that a string representing a pipeline of programs generating stdin exists at all, even if your stdin really is a pipe connected to another program's stdout; the pipeline could have been set up by programs calling those syscalls directly.
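You can see this from inside a script: all it receives is an anonymous pipe, with no record of who is writing to it. A quick check (Linux-specific; the inode number will vary):
$ echo hi | bash -c 'readlink /proc/$$/fd/0'
pipe:[123456]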
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"   # generate a shell command representing our arguments
while IFS= read -r line; do
  printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                 # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
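For instance, with hypothetical files a.txt and b.txt in the current directory, a run might look like this (note the trailing space %q leaves in cmd_str):
$ ./yourscript ls
Output from ls : a.txt
Output from ls : b.txt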
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running or accepting new commands (or, failing that, running myscript.sh) since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is first run, do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
    "$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it in as a single string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible; have a look at this example:
#!/bin/bash
result=""
while read -r line; do
  result="$result$line"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)

awk and bash script? [duplicate]

This question already has answers here:
Syntax error in shell script with process substitution
(4 answers)
Closed 3 years ago.
I wonder why it doesn't work.
Please advise me.
1. working
$ nu=`awk '/^Mem/ {printf($2*0.7);}' <(free -m)`
$ echo $nu
1291.5
2. not working
$ cat test.sh
#!/bin/bash
nu=`awk '/^Mem/ {printf($2*0.7);}' <(free -m)`
echo $nu
$ sh test.sh
test.sh: command substitution: line 2: syntax error near unexpected token `('
test.sh: command substitution: line 2: `awk '/^Mem/ {printf($2*0.7);}' <(free -m)'
Could you please try the following.
nu=$(free -m | awk '/^Mem/ {print $2*0.7}')
echo "$nu"
Things taken care of here:
The use of backticks is deprecated, so use $(...) to capture a command's output into a variable.
Also, run the free command first and pass its standard output as standard input to awk using | (which should be the ideal way of sending a command's output to awk in this scenario), then save awk's output into the variable named nu.
Finally, print the variable nu with echo.
Since <(...) process substitution is supported by bash but not by sh, I am offering a solution that works without process substitution (as I mentioned a bit earlier).
The <( ) construct ("process substitution") is not available in all shells, or even in bash when it's invoked with the name "sh". When you run the script with sh test.sh, that overrides the shebang (which specifies bash), so that feature is not available. You need to either run the script explicitly with bash, or (better) just run it as ./test.sh and let the shebang line do its job.
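A quick way to see the difference, assuming the script above is saved as test.sh (1291.5 is the sample figure from earlier):
$ bash test.sh
1291.5
$ sh test.sh
test.sh: command substitution: line 2: syntax error near unexpected token `('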
The reason to add a shebang to a script is to define an interpreter directive, which is honored when the file has execute permission and is run directly.
Then, you should invoke it by, for example
$ ./test.sh
once you have set the permission
$ chmod +x test.sh

Bash process substitution: what does `echo >(ls)` do?

Here is an example of Bash's process substitution:
zjhui@ubuntu:~/Desktop$ echo >(ls)
/dev/fd/63
zjhui@ubuntu:~/Desktop$ abs-guide.pdf
Then I get a cursor waiting for a command.
/dev/fd/63 doesn't exist. I think what happens is:
Output the filename used in /dev/fd
Execute the ls in >(ls)
Is this right? Why is there a cursor waiting for input?
When you execute echo >(ls), bash replaces this with echo /dev/fd/63 and runs ls with /dev/fd/63 connected to its standard input. echo does not use its arguments as file names, and ls does not use standard input. Bash will write your standard prompt but the output of ls comes after it. You can type in any Bash command there, you just lost the visual cue from the prompt, which is further up the screen.
echo >(ls) is not something that is likely to ever be useful.
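For contrast, a case where >( ) genuinely helps (a generic illustration, not from the question): handing a writable "file name" to tee so one stream feeds two consumers at once:
$ printf '%s\n' one two three | tee >(wc -l > /tmp/count) | grep two
two
$ cat /tmp/count
3
Keep in mind the >( ) command runs asynchronously, so /tmp/count may lag behind by a moment.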
