I am pretty inexperienced with bash. I am trying to save the last run command in a variable; this is what I have:
#!/bin/bash
prev=$(fc -ln -1)
echo $prev
This doesn't print anything. Once I save the last command, I plan on coding this line:
valgrind --leak-check=full $prev
So what am I doing wrong?
fc references the history of the current shell. When run inside a shell script, it refers to the history of that new, non-interactive shell, which is empty; that is why nothing is printed. If you use an alias or shell function, fc will operate within the current shell. You can also source a script file for the same effect.
$ cat go
#!/bin/bash
set -o history
echo a b c
fc -nl -1
$ ./go
a b c
echo a b c
$ alias zz='fc -nl -1 | tr a-z A-Z'
$ zz
ALIAS ZZ='FC -NL -1 | TR A-Z A-Z'
I'd use aliases rather than a script, something like:
alias val='valgrind --leak-check=full'
alias vallast='val $(history 2 | head -1 | cut -d" " -f3-)'
As explained in the link, you can add these lines to your .bashrc.
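For example, a hypothetical session (./mytest stands in for whatever program you last ran):
$ ./mytest
$ vallast    # should run roughly: valgrind --leak-check=full ./mytest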
BTW, the latter can also be executed as:
val !!
val !-1 #same
Or if you want to valgrind the program you ran 2 commands ago:
val !-2
These history commands are explained here.
From Ch. 2 of "Learning the Bash Shell", 3rd edition, 'The fc Command':
"Remember that fc actually runs the command(s) after you edit them.
Therefore, the last-named choice can be dangerous. bash will attempt
to execute all commands in the range you specify when you exit your
editor. If you have typed in any multi-line constructs (like those we
will cover in Chapter 5), the results could be even more dangerous.
Although these might seem like valid ways of generating “instant shell
programs,” a far better strategy would be to direct the output of fc
-ln with the same arguments to a file; then edit that file and execute the commands when you’re satisfied with them:
$ fc -l cp > lastcommands
$ vi lastcommands
$ source lastcommands
In this case, the shell will not try to execute the file when you
leave the editor!"
I did:
$ fc -l
Which gave the output:
1 echo test1
2 echo test2
3 echo test3
4 echo test4
So I tried:
$ fc -l cp > lastcommands
But got:
-bash: fc: history specification out of range
Wondering why I get that out of range error. The file 'lastcommands' is created but it is empty.
Reviewing help fc (truncated)
$ help fc
fc: fc [-e ename] [-lnr] [first] [last]
Display or execute commands from the history list.
… FIRST can be a
string, which means the most recent command beginning with that
string.
…
-l list lines instead of editing
You passed FIRST as the string cp. It appears there is no command in your history beginning with cp.
Try it with a unique string that will definitely not be a command in your bash history:
fc -l NOTINYOURHISTORY
You'll get the same error.
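Conversely, a prefix that does occur in your history works. With the echo history shown above (numbering illustrative), fc lists from the most recent match to the end of the history:
$ fc -l ec
4        echo test4
$ fc -l ec > lastcommands    # and this variant of the original command succeeds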
Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs generating stdin at all, even if your stdin really is a pipe connected to another program's stdout; the pipeline could have been set up by programs calling execve() and friends directly, with no shell involved.
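You can see this for yourself on Linux: the only thing a process can discover about its stdin is that it's an anonymous pipe (the /proc lookup is Linux-specific, and the inode number below is illustrative):
$ ls -l | grep -i readme | readlink /proc/self/fd/0
pipe:[123456]
There is no record anywhere of which commands feed that pipe.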
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"   # generate a shell command representing our arguments
while IFS= read -r line; do
    printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                 # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running or accepting new commands (or, failing that, running myscript.sh) since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run, first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
       "$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
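With that calling convention, myscript.sh receives the pipeline's output (not the command string) as its first argument; a minimal sketch of the receiving side:
#!/bin/bash
# $1 holds the output of: ls -l | grep -i readme
printf 'received: %s\n' "$1"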
I think that it is possible, have a look at this example:
#!/bin/bash
result=""
while IFS= read -r line; do
    result="$result$line "   # append each line of stdin (trailing space keeps lines apart)
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)
I guess this is easy but I couldn't figure it out. Basically I have a command history file and I'd like to grep a line and execute it. How do I do it?
For example: in the file command.txt file, there is:
wc -l *txt| awk '{OFS="\t";print $2,$1}' > test.log
Suppose the above line is the last line, what I want to do is something like this
tail -1 command.txt | "execute"
I tried to use
tail -1 command.txt | echo
but no luck.
Can someone tell me how to make it work?
Thanks!
You can load an arbitrary file into your shell's history list with
history -r command.txt
at which point you can use all the normal history expansion commands to find the command you wish to execute. For example, after executing the above command, the last line in the file would be available to run with
!!
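History expansion also lets you pick out a specific line: !string re-runs the most recent command beginning with string. With the question's file loaded:
$ history -r command.txt
$ !wc    # re-runs the wc -l *txt | awk ... line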
Just use command substitution:
bash -c "$(tail -1 command.txt)"
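To combine this with the grep step from the question (assuming the pattern matches exactly the line you want to run):
bash -c "$(grep 'wc -l' command.txt | tail -1)"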
Something like that?
echo "ls -l" | xargs sh -c
This answer addresses just the execution part. It assumes you have a method to extract the line you want from the file, perhaps a loop on each line. Each line of the file would be the argument for echo.
You can use eval.
eval "$(tail -n 1 ~/.bash_history)"
or if you want it to execute some other line:
while read -r; do
    if some condition; then
        eval "$REPLY"
    fi
done < ~/.bash_history
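For instance, a concrete condition that executes the last line of the question's command.txt starting with wc (the pattern is illustrative):
match=""
while IFS= read -r line; do
    [[ $line == wc* ]] && match=$line   # remember the most recent matching line
done < command.txt
[[ -n $match ]] && eval "$match"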
Can you explain the output of the following test script to me:
# prepare test data
echo "any content" > myfile
# set bash to inform me about the commands used
set -x
cat < myfile
output:
+ cat
any content
Namely, why does the line starting with + not show the "< myfile" bit?
How can I force bash to show it? I need to inform the user of my script's doings, as in:
mysql -uroot < the_new_file_with_a_telling_name.sql
and I can't.
EDIT: additional context: I use variables. Original code:
SQL_FILE=`ls -t $BACKUP_DIR/default_db* | head -n 1` # get latest db
mysql -uroot mydatabase < ${SQL_FILE}
-v won't expand variables, and cat file.sql | mysql will produce two lines:
+ mysql
+ cat file.sql
so neither does the trick.
You could try set -v or set -o verbose instead, which enables command echoing.
Example run on my machine:
[me#home]$ cat x.sh
echo "any content" > myfile
set -v
cat < myfile
[me#home]$ bash x.sh
cat < myfile
any content
The caveat here is that set -v simply echoes the command literally and does not do any shell expansion or interpolation. As pointed out by Jonathan in the comments, this can be a problem if the filename is defined in a variable (e.g. command < $somefile), making it difficult to identify what $somefile refers to.
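One workaround for the variable case is to print the trace line yourself, after expansion; a sketch based on the question's code:
SQL_FILE=$(ls -t $BACKUP_DIR/default_db* | head -n 1)   # get latest db
printf '+ mysql -uroot mydatabase < %s\n' "$SQL_FILE" >&2   # hand-rolled trace, variable already expanded
mysql -uroot mydatabase < "$SQL_FILE"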
The difference there is quite simple:
in the first case, you're using the program cat, and you're redirecting the contents of myfile to the standard input of cat. This means you're executing cat, and that's what bash shows you when you have set -x;
in a possible second case, you could use cat myfile, as pointed out by @Jonathan Leffler, and you'd see + cat myfile, which is what you're executing: the program cat with the parameter myfile.
From man bash:
-x After expanding each simple command, for command, case command,
select command, or arithmetic for command, display the expanded
value of PS4, followed by the command and its expanded arguments or
associated word list.
As you can see, it simply displays the command line expanded, and its argument list -- redirections are neither part of the expanded command cat nor part of its argument list.
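For example (an illustrative interactive session, reusing the myfile created earlier):
$ f=myfile
$ set -x
$ cat < "$f"
+ cat
any content
The trace shows neither the redirection nor the variable it came from.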
As pointed out by @Shawn Chin, you may use set -v which, from man bash:
-v Print shell input lines as they are read.
Basically, that's the way bash works with its -x option. I checked on a Solaris 5.10 box, and the /bin/sh there (which is close to a genuine Bourne shell) also omits I/O redirection.
Given the command file (x3.sh):
echo "Hi" > Myfile
cat < Myfile
rm -f Myfile
The trace output on the Solaris machine was:
$ sh -x x3.sh
+ echo Hi
+ cat
Hi
+ rm -f Myfile
$ /bin/ksh -x x3.sh
+ echo Hi
+ 1> Myfile
+ cat
+ 0< Myfile
Hi
+ rm -f Myfile
$ bash -x x3.sh
+ echo Hi
+ cat
Hi
+ rm -f Myfile
$
Note that bash and sh (which are definitely different executables) produce the same output. The ksh output includes the I/O redirection information — score 1 for the Korn shell.
In this specific example, you can use:
cat myfile
to see the name of the file. In the general case, it is hard, but consider using ksh instead of bash to get the I/O redirection reported.
I want to do a very simple script: just want to find the newest version of a program, say svn, on my computer. I want to load the result into a variable, say mysvn
So I make this script:
#!/bin/sh
mysvn="foobar"
best_ver=0
which -a svn | while read p
do
    version=$("$p" --version | grep 'version ' | grep -oE '[0-9.]+' | head -1)
    if [[ "$version" > "$best_ver" ]]
    then
        best_ver=$version
        mysvn="$p"
    fi
    echo $mysvn
done
echo $mysvn
Very simple in fact ... but it does not work under rxvt (my pseudo-Linux terminal), version 2.7.10, running under XP: the final output string is foobar.
Does anybody know why I have this problem?
I have been writing some scripts for the past few months, it is the first time I encounter such a behaviour.
Note: I know how to make it work, with a few changes (just put the main lines into $() )
The reason this occurs is that the while loop is part of a pipeline, and (at least in bash) any shell commands in a pipeline always get executed in a subshell. When you set mysvn inside the loop, it gets set in the subshell; when the loop ends, the subshell exits and this value is lost. The parent shell's value of mysvn never gets changed. See BashFAQ #024 and this previous question.
The standard solution in bash is to use process substitution rather than a pipeline:
while
...
done < <(which -a svn)
But note that this is a bash-only feature, and you must use #!/bin/bash as your shebang for this to work.
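Applied to the script from the question, the fix might look like this (same logic; mysvn is now set in the same shell that later echoes it, so the value survives the loop):
#!/bin/bash
mysvn="foobar"
best_ver=0
while read -r p
do
    version=$("$p" --version | grep 'version ' | grep -oE '[0-9.]+' | head -1)
    if [[ "$version" > "$best_ver" ]]
    then
        best_ver=$version
        mysvn="$p"
    fi
done < <(which -a svn)
echo "$mysvn"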
Here on Ubuntu:
:~$ which -a svn | while read p
> do
> version=$("$p" --version | grep 'version ' | grep -oE '[0-9.]+' | head -1)
> echo $version
> done
.
So your version is just ., which is not very nice.
I tried this, and I think it's what you're looking for:
:~$ which -a svn | while read p
> do
> version=$("$p" --version | grep -oE '[0-9.]+' | head -1)
> echo $version
> done
1.7.5