bash: how to execute a line from a file?

I guess this is easy but I couldn't figure it out. Basically I have a command history file and I'd like to grep a line and execute it. How do I do it?
For example, in the file command.txt there is:
wc -l *txt | awk '{OFS="\t";print $2,$1}' > test.log
Suppose the above line is the last line; what I want to do is something like this:
tail -1 command.txt | "execute"
I tried to use
tail -1 command.txt | echo
but no luck.
Can someone tell me how to make it work?
Thanks!

You can load an arbitrary file into your shell's history list with
history -r command.txt
at which point you can use all the normal history expansion commands to find the command you wish to execute. For example, after executing the above command, the last line in the file would be available to run with
!!
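For instance, once the file is loaded you can also recall a line by prefix rather than by position; at an interactive prompt, !wc re-runs the most recent history entry beginning with wc:
history -r command.txt
!wc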

Just use command substitution
bash -c "$(tail -1 command.txt)"

Something like this?
echo "ls -l" | xargs -I{} sh -c '{}'
This answer addresses just the execution part. It assumes you have a way to extract the line you want from the file, perhaps by looping over each line; here echo stands in for whatever produces that line.
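As a minimal sketch of such a loop, assuming command.txt holds one command per line:
while IFS= read -r cmd; do
    sh -c "$cmd"    # each line of the file becomes the command string
done < command.txt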

You can use eval.
eval "$(tail -n 1 ~/.bash_history)"
or if you want it to execute some other line:
while read -r; do
    if some_condition; then    # placeholder for whatever test selects your line
        eval "$REPLY"
    fi
done < ~/.bash_history
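As a concrete (hypothetical) condition, this variant would execute only the history lines starting with wc:
while read -r; do
    if [[ $REPLY == wc* ]]; then    # hypothetical test: lines beginning with "wc"
        eval "$REPLY"
    fi
done < ~/.bash_history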

Related

Shell read from files with tail

I am currently trying to read from files with shell. However, I ran into a syntax issue. My code is below:
while read -r line;do
echo $line
done < <(tail -n +2 /pathToTheFile | cut -f5,6,7,8 | sort | uniq )
However, it returns the error: syntax error near unexpected token `('
I tried following How to use while read line with tail -n but still cannot see the error.
The tail command works properly on its own.
Any help will be appreciated.
Process substitution isn't supported by the POSIX shell /bin/sh; it is a feature specific to bash (and other non-POSIX shells). Are you running this with /bin/bash?
Anyhow, process substitution isn't needed here; you could simply use a pipe, like this:
tail -n +2 /pathToTheFile | cut -f5,6,7,8 | sort -u | while read -r line; do
    echo "${line}"
done
Your interpreter must be #!/bin/bash not #!/bin/sh and/or you must run the script with bash scriptname instead of sh scriptname.
Why?
Process substitution (e.g. < <(...)) is a bashism; POSIX shell doesn't provide it. So the error:
syntax error near unexpected token `('
is telling you that when the script reaches your done statement and looks for the file being redirected into the loop, it finds ( instead and chokes. (That also tells us you are invoking your script with a POSIX shell instead of bash, and now you know why.)
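There is also a behavioral difference between the two fixes: in bash, piping into while runs the loop body in a subshell, whereas process substitution keeps it in the current shell. A small sketch of why you might still prefer < <(...) once you are under bash:
#!/bin/bash
count=0
while read -r line; do
    count=$((count + 1))
done < <(tail -n +2 /pathToTheFile | cut -f5,6,7,8 | sort -u)
echo "$count"    # works: the loop ran in the current shell, so count persists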

Bash get the command that is piping into a script

Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's no guarantee that any string representing a pipeline of programs exists at all, even if your stdin really is a pipe connected to another program's stdout; that pipeline could have been set up by programs calling execve() and friends directly.
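On Linux you can observe this directly: the script's stdin is just an anonymous pipe, with no record of what is writing into it (the inode number is illustrative):
ls -l | readlink /proc/self/fd/0    # prints something like: pipe:[123456]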
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"    # generate a shell command string representing our arguments
while IFS= read -r line; do
    printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                  # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running, or accepting new commands, (or failing that, running myscript.sh), since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run, do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
    "$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output (minus the "&"-associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter like this
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible; have a look at this example:
#!/bin/bash
result=""
while read -r line; do
    result="$result${line}"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)

Create temporary file and redirect output to it in one command

I designed a custom script to grep a concatenated list of .bash_history backup files. In my script, I am creating a temporary file with mktemp and saving it to a variable temp. Next, I am redirecting output to that file using the cat command.
Is there a means to create a temporary file (using mktemp), redirect output to it, then store it in a variable in one command, while preserving newline characters?
The below snippet of code works just fine, but I have a feeling there is a more terse and canonical way to achieve this in one line, perhaps using process substitution or something similar.
# Concatenate all .bash_history files into a temporary file `temp`.
temp="$(mktemp)"
cat "$HOME/.bash_history."* > $temp
trap 'rm -f $temp' 0
# Set `HISTFILE` shell variable to the `temp` file.
HISTFILE="$temp"
keyword="$1"
# Search for `keyword` using the `history` command
if [[ "$keyword" ]]; then
# Enable history
set -o history
history | grep "$keyword"
# Disable history
set +o history
else
echo -e "usage: search <keyword>"
exit 0
fi
If you're comfortable with the side effect of making the assignment conditional on tempfile not previously having a nonempty value, this is straightforward via the ${var:=value} expansion:
cat "$HOME/.bash_history" >"${tempfile:=$(mktemp)}"
f=$(mktemp) && cat myfile.txt > "$f"
I guess there is more than one way to do it. I found the following to work for me:
cat myfile.txt > "$(mktemp)"
Also don't forget about tee:
cat myfile.txt | tee "$(mktemp)" > /dev/null

Loop for deleting first line of Multiple Files using Bash Script

I'm new to Bash scripting and programming in general. I would like to automate deleting the first line of multiple .data files in a directory. My script is as follows:
#!/bin/bash
for f in *.data ;
do tail -n +2 $f | echo "processing $f";
done
I get the echo message but when I cat the file nothing has changed. Any ideas?
Thanks in advance
I get the echo message but when I cat the file nothing has changed.
Because simply tailing wouldn't change the file.
You could use sed to modify the files in-place with the first line excluded. Saying
sed -i '1d' *.data
would delete the first line from all .data files.
EDIT: BSD sed (on OS X) expects an argument to -i, so you can either specify an extension to back up the original files, or pass an empty extension to edit the files in-place:
sed -i '' '1d' *.data
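If you want a single invocation that works with both GNU and BSD sed, supplying an attached backup extension is the usual portable idiom (it leaves a .bak copy of each original behind):
sed -i.bak '1d' *.data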
You are not changing the file itself. By using tail you simply read the file and print parts of it to stdout (the terminal); you have to redirect that output to a temporary file and then overwrite the original file with the temporary one.
#!/usr/bin/env bash
for f in *.data; do
    tail -n +2 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    echo "Processing $f"
done
Moreover it's not clear what you'd like to achieve with the echo command. Why do you use a pipe (|) there?
sed will give you an easier way to achieve this. See devnull's answer.
I'd do it this way:
#!/usr/bin/env bash
set -eu
for f in *.data; do
    echo "processing $f"
    tail -n +2 "$f" | sponge "$f"
done
If you don't have sponge you can get it in the moreutils package.
The quotes around the filename are important--they will make it work with filenames containing spaces. And the env thing at the top is so that people can set which Bash interpreter they want to use via their PATH, in case someone has a non-default one. The set -eu makes Bash exit if an error occurs, which is usually safer.
ed is the standard editor:
shopt -s nullglob
for f in *.data; do
    echo "Processing file \`$f'"
    ed -s -- "$f" < <( printf '%s\n' "1d" "wq" )
done
The shopt -s nullglob is here just because you should always use this when using globs, especially in a script: it will make globs expand to nothing if there are no matches; you don't want to run commands with uncontrolled arguments.
Next, we loop on all your files, and use ed with the commands:
1: go to first line
d: delete that line
wq: write and quit
Options for ed:
-s: tells ed to shut up! We don't want ed to print its junk on our screen.
--: end of options; this makes your script much more robust in case a file name starts with a hyphen: the hyphen would otherwise confuse ed into treating the name as an option. With --, ed knows there are no more options and will happily process any file, even one whose name starts with a hyphen.
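Equivalently, inside the same loop, you could feed ed those commands with a here-document instead of process substitution:
ed -s -- "$f" <<'EOF'
1d
wq
EOF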

How can I write an Automator action in bash with files and folders as arguments?

When I create an Automator action in Xcode using Bash, all file and folder paths are passed on stdin.
How do I get the content of those files?
Whatever I try, I only get the filenames in the output.
If I just select "Run Shell Script" I can choose whether to receive everything on stdin or as arguments. Can this be done for an Xcode project too?
It's almost easier to use AppleScript and let that run the bash script.
I tried something like
xargs | cat | MyCommand
What's the pipe between xargs and cat doing there? Try
xargs cat | MyCommand
or, better,
xargs -R -1 -I file cat "file" | MyCommand
to properly handle file names with spaces etc.
If, instead, you want MyCommand invoked on each file:
IFS=$'\n'
while read -r filename; do
    MyCommand < "$filename"
done
may also be useful.
read will read lines from the script's stdin; just make sure to set $IFS to something that won't interfere if the pathnames are sent without backslashes escaping any spaces:
OLDIFS="$IFS"
IFS=$'\n'
while read filename ; do
echo "*** $filename:"
cat -n "$filename"
done
IFS="$OLDIFS"
