The inspiration here is a prank idea, so try to look past the fact that it's not really useful...
Let's say I wanted to set up an alias in bash that would subtly change any command entered at the prompt into the same command, but ultimately piped through tac to reverse the final output. A few examples of what I'd try to do:
ls ---> ls | tac
ls -la ---> ls -la | tac
tail ./foo | grep 'bar' ---> tail ./foo | grep 'bar' | tac
Is there a way to set up an alias, or some other means, that will append | tac to the end of each/every command entered without further intervention? Extra consideration given to ideas that are easy to hide in a bashrc. ;)
This isn't guaranteed to be side-effect-free, but it's probably a sane first cut:
reverse_command() {
    # Check the number of entries in the `BASH_SOURCE` array to ensure that it's empty
    # ...(meaning an interactive command).
    if (( ${#BASH_SOURCE[@]} <= 1 )); then
        # For an interactive command, take its text, tack on `| tac`, and evaluate...
        eval "${BASH_COMMAND} | tac"
        # ...then return false to suppress the non-reversed version.
        false
    else
        # For a noninteractive command, return true to run the original unmodified.
        true
    fi
}
# turn on extended DEBUG hook behavior (necessary to suppress original commands).
shopt -s extdebug
# install our trap
trap reverse_command DEBUG
bash doesn't support modifying commands in this fashion. It does, however, let you redirect standard output for the shell itself, which every command will then inherit. Add this to .bashrc:
exec > >( tac )
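A quick sketch of the effect (the `exec 1>&-` and `sleep` are only there so the demo flushes deterministically; in a real `.bashrc` you'd use the bare `exec` line above):

```shell
#!/usr/bin/env bash
# Sketch: route this shell's own stdout through tac, so everything
# printed afterwards comes out line-reversed.
exec > >(tac)
echo first
echo second
echo third
# Close stdout so tac sees EOF and flushes; otherwise the script can
# exit before tac prints anything.
exec 1>&-
sleep 0.3   # give the process substitution a moment to finish
```

Run as a script, this prints the three lines in reverse order.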
Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the mkfifo(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs being used to generate stdin at all, even if your stdin really is a FIFO connected to another program's stdout; it could be that that pipeline was generated by programs calling execve() and friends directly.
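To make that concrete: a consumer can classify *what kind* of thing its stdin is, but nothing more. This sketch assumes a system where /dev/stdin exists (Linux, macOS):

```shell
#!/usr/bin/env bash
# A consumer can classify its stdin, but it cannot recover the command
# line of whatever produced it -- that string doesn't exist at the OS level.
if [ -p /dev/stdin ]; then
    echo "stdin is a pipe (producer unknown)"
elif [ -t 0 ]; then
    echo "stdin is a terminal"
else
    echo "stdin is a regular file or other redirection"
fi
```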
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"   # generate a shell command representing our arguments
while IFS= read -r line; do
    printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                 # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell can have been running or accepting new commands (or, failing that, running myscript.sh) since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script, (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
       "$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter like this
./myscript.sh "$(ls -l | grep -i readme)"
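With that calling convention, the upstream output arrives as an ordinary string argument. A minimal sketch of what myscript.sh could then look like (the line-counting is just illustration):

```shell
#!/bin/bash
# Hypothetical myscript.sh: the pipeline's *output* arrives in $1,
# so no stdin plumbing is involved.
input=$1
printf 'received %d line(s):\n%s\n' "$(printf '%s\n' "$input" | wc -l)" "$input"
```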
I think that it is possible, have a look at this example:
#!/bin/bash
result=""
while IFS= read -r line; do
    result+="$line"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)
I would like to use an alias command in bash in combination with the watch command. The watch command is multiple chained commands.
A very simple example how I would think it would work (but it does not):
alias foo=some_command # a more complicated command
watch -n 1 "$(foo) | grep bar" # foo is not interpreted as the alias :(
watch -n 1 "$(foo) | grep sh" is wrong for two reasons.
When watch "$(cmdA) | cmdB" is executed by the shell, $(cmdA) gets expanded before running watch. Then watch would execute the output of cmdA as a command (which should fail in most cases) and pipe the output of that to cmdB. You probably meant watch 'cmdA | cmdB'.
The alias foo is defined in the current shell only. watch is not a built-in command and therefore has to execute its command in another shell, which does not know the alias foo. There is a small trick presented in this answer; however, we have to make some adjustments to make it work with pipes and options:
alias foo=some_command
alias watch='watch -n 1 ' # trailing space treats next word as an alias
watch foo '| grep sh'
Note that the options for watch have to be specified inside the watch alias. The trailing space causes only the next word to be treated as an alias. With watch -n 1 foo bash would try to expand -n as an alias, but not foo.
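The trailing-space rule can be demonstrated in isolation. A sketch with hypothetical aliases (note that expand_aliases must be set explicitly in non-interactive shells):

```shell
#!/usr/bin/env bash
shopt -s expand_aliases           # alias expansion is off in scripts by default
alias who=world
alias with_space='echo prefix '   # note the trailing space
alias no_space='echo prefix'
with_space who    # "who" is alias-checked too: prints "prefix world"
no_space who      # prints "prefix who"
```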
I made a function that uses the --color option and allows you to use -n to specify a refresh interval.
swatch_usage() {
    cat <<EOF >&2
NAME
    swatch - execute a program periodically with "watch". Supports aliases.

SYNOPSIS
    swatch [options] command

OPTIONS
    -n, --interval seconds (default: 1)
        Specify update interval. The command will not allow quicker than 0.1 second interval.
EOF
}
swatch() {
    if [ $# -eq 0 ]; then
        swatch_usage
        return 1
    fi

    local seconds args
    seconds=1
    case "$1" in
        -n)
            seconds="$2"
            args=${*:3}
            ;;
        -h)
            swatch_usage
            return 0
            ;;
        *)
            args=${*:1}
            ;;
    esac

    watch --color -n "$seconds" --exec bash -ic "$args || true"
}
I only needed color and timing support, but I'm sure you could add more if you wanted.
The meat of the function is that it executes your command with bash directly in interactive mode and can thus use any aliases or commands that are normally available to you in bash.
I'm not that experienced with scripting, so fair warning: your mileage may vary. Sometimes I have to press Ctrl+C a few times to get it to stop, but for what it's worth, I've been using it frequently for 6 months without issue.
Gist form: https://gist.github.com/ablacklama/550420c597f9599cf804d57dd6aad131
I just modified ~/.bash_profile to include the following alias:
alias ngrep='grep -v grep'
I then went to an already-open terminal session and ran the following:
source ~/.bash_profile && ps aux | grep mysql | ngrep
The output was:
-bash: ngrep: command not found
However, I then immediately ran ngrep and it ran without errors.
I'm looking to understand Terminal better. Why can I not chain an alias I just added after sourcing the bash profile using &&?
On a Mac running Mojave, with the standard terminal and bash.
Aliases are simple prefix substitutions that take place before syntax is parsed. This gives them powers other constructs don't have (albeit powers which are rarely needed or appropriate) -- you can alias something to content that's subsequently parsed as syntax -- but it also constrains them: Because a compound command needs to be parsed before it can be executed, the ngrep command is parsed before the source command is executed, so the alias is not yet loaded at the point in time when it would need to be to take effect.
As a simple demonstration (thanks to a comment by @chepner):
alias foo=echo; foo hi
foo bye
...will emit:
-bash: foo: command not found
bye
...because the alias was not in place when the first line (alias foo=echo; foo hi) was parsed, but is in place for the line foo bye. (The alias is in place when foo hi is run, but the command has already been split out into the command foo with the argument hi; there's no remaining opportunity to change foo to echo, so the fact that the alias is defined at this time has no impact on execution).
You wouldn't have this problem with a function:
# note that you can't run this in a shell that previously had ngrep defined as an alias
# ...unless you unalias it first!
ngrep() { grep -v grep "$@"; }
...doesn't require recognition at parse time, so you can use it in a one-liner as shown in the question.
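Because the function name is looked up when the command *runs*, not when the line is parsed, even defining and calling it within one compound command works. A quick sketch:

```shell
# Unlike an alias, a function defined earlier in the same compound
# command is visible by the time the later command actually executes:
ngrep() { grep -v grep "$@"; } && printf 'foo grep bar\nfoo mysql bar\n' | ngrep
```

This prints only the line not containing "grep".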
As an indirect solution, consider editing your match pattern. e.g.:
$: ps -fu $LOGNAME
     UID       PID    PPID  TTY        STIME  COMMAND
P2759474      6704   10104  pty0    14:26:54  /usr/bin/ps
P2759474     10104    9968  pty0    07:59:11  /usr/bin/bash
P2759474      9968       1  ?       07:59:10  /usr/bin/mintty
$: ps -fu $LOGNAME | grep '/mintty$'
P2759474      9968       1  ?       07:59:10  /usr/bin/mintty
You don't have to grep -v grep if your grep already is specific enough to exclude itself.
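Another common self-excluding pattern is the bracket trick: wrap one character of the pattern in a bracket expression, so grep's own command line no longer matches the pattern it is searching for.

```shell
# '[m]ysql' still matches "mysql", but the literal text "[m]ysql" on
# grep's own command line does not, so the grep process never matches
# itself in the ps listing.
ps aux | grep '[m]ysql'
```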
I want a grep pipeline to succeed if no lines are selected:
set -o errexit
echo foo | grep -v foo
# no output, pipeline returns 1, shell exits
! echo foo | grep -v foo
# no output, pipeline returns 1 reversed by !, shell continues
Similarly, I want the pipeline to fail (and the enclosing shell to exit, when errexit is set) if any lines come out the end. The above approach does not work:
echo foo | grep -v bar
# output, pipeline returns 0, shell continues
! echo foo | grep -v bar
# output, ???, shell continues
! (echo foo | grep -v bar)
# output, ???, shell continues
I finally found a method, but it seems ugly, and I'm looking for a better way.
echo foo | grep -v bar | (! grep '.*')
# output, pipeline returns 1, shell exits
Any explanation of the behavior above would be appreciated as well.
set -e, aka set -o errexit, only handles unchecked commands; its intent is to prevent errors from going unnoticed, not to be an explicit flow control mechanism. It makes sense to check explicitly here, since it's a case you actively care about as opposed to an error that could happen as a surprise.
echo foo | grep -v bar && exit
More to the point, several of your commands make it explicit that you care about (and thus are presumably already manually handling) exit status -- thus setting the "checked" flag, making errexit have no effect.
Running ! pipeline, in particular, sets the checked flag for the pipeline -- it means that you're explicitly doing something with its exit status, thus implying that automatic failure-case handling need not apply.
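A minimal sketch of that "checked" behavior: prefixing a failing pipeline with ! keeps errexit from firing, even though the pipeline's own status is still nonzero.

```shell
#!/usr/bin/env bash
set -o errexit
# '!' marks the pipeline as "checked": its status is 1, but errexit
# does not fire and execution continues.
! echo foo | grep -v foo
echo "still alive"
```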
This is not a one-liner, but it works:
if echo foo | grep foo ; then
exit 1
fi
Meaning the shell exits with 1 if grep finds something. Also I don't think that set -e or set -o errexit should be used for logic, it should be used for what it is meant for, for stopping your script if an unexpected error occurs.
I'd like to run the following shell command from Ruby, which copies a string into the clipboard (on OS X), 'n' is suppressing the line break after the string caused by echo:
echo -n foobar | pbcopy
—> works, fine, now the clipboard contains "foobar"
I've tried the following, but all of them always copy the option '-n' as well into the clipboard:
%x[echo -n 'foobar' | pbcopy]
%x[echo -n foobar | pbcopy]
system "echo -n 'foobar' | pbcopy"
system "echo -n foobar | pbcopy"
exec 'echo -n "foobar" | pbcopy'
`echo -n "foobar" | pbcopy`
IO.popen "echo -n 'foobar' | pbcopy"
What is the proper way to achieve this?
Your problem is that -n is only understood by the bash built-in echo command; when you say %x[...] (or any of your other variations on it), the command is fed to /bin/sh which will act like a POSIX shell even if it really is /bin/bash. The solution is to explicitly feed your shell commands to bash:
%x[/bin/bash -c 'echo -n foobar' | pbcopy]
You will, of course, need to be careful with your quoting on whatever foobar really is. The -c switch essentially tells /bin/bash that you're giving it an inlined script:
-c string
If the -c option is present, then commands are read from string.
If there are arguments after the string, they are assigned to the positional
parameters, starting with $0.
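The positional-parameter detail is easy to trip over, since the first argument after the string becomes $0 (the script name), not $1. A quick sketch:

```shell
# Arguments after the -c string land in $0, $1, $2, ...
bash -c 'printf "%s then %s\n" "$1" "$2"' myscript first second
```

This prints "first then second"; "myscript" is only visible as $0.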
Because echo behaves differently in different shells and in /bin/echo, it's recommended that you use printf instead.
No newline:
%x[printf '%s' 'foobar' | pbcopy]
With a newline:
%x[printf '%s\n' 'foobar' | pbcopy]
You might be reinventing a wheel.
IRB_Tools and Utility_Belt, which are both used to tweak IRB, provide an ability to use the clipboard. Both are collections of existing gems, so I did a quick search using gem search clipboard -r and came up with:
clipboard (0.9.7)
win32-clipboard (0.5.2)
Looking at RubyDoc.info for clipboard reveals:
clipboard
Access the clipboard and do not care if the OS is Linux, MacOS or Windows.
Usage
You have Clipboard.copy,
Clipboard.paste and
Clipboard.clear
Have fun ;)
EDIT: If you check the source on the linked page, for the Mac you'll see for copy:
def copy(data)
Open3.popen3( 'pbcopy' ){ |input,_,_| input << data }
paste
end
and for paste you'll see:
def paste(_ = nil)
`pbpaste`
end
and clear is simply:
def clear
copy ''
end
Those should get you pointed in the right direction.
This might look like an ugly workaround, but I'm pretty sure it'll work:
Create an executable file called myfoobar.sh containing the line you want to execute.
#! /bin/sh
echo -n foobar | pbcopy
Then invoke that file from ruby.