Here is a part of my .fluxbox/startup file
(a=($(grep "^1 " $HOME/Documents/4.n3u|awk '{print "/home/g" $2}'|sort -R|head -20)); \
xterm -e mpg123 -C ${a[@]} &>$HOME/Documents/mpg123.dat &)
As written, the redirection fails, all such output appearing in the xterm instead. The man page for xterm reads, in part,
-e program [ arguments ... ]
This option specifies the program (and its command line arguments)
to be run in the xterm window. It also sets the window title and
icon name to be the basename of the program being executed if
neither -T nor -n are given on the command line.
This must be the last option on the command line.
mpg123 plays the content of array a as desired, and can be controlled through the keyboard as option -C specifies, but xterm seems to frustrate the redirect to file. Is that redirection possible in this context?
Alternatively, I can run it without the xterm to contain mpg123, in which case I get the redirect, but I cannot control mpg123 through the keyboard because it is running in some background subshell with no connection to the keyboard. Is there any way to establish that connection?
You have redirected the stdout and stderr of the xterm process, but xterm does not normally print anything on its own stdout and stderr. The only things that would show up there would be errors related to xterm itself (like if it unexpectedly lost its connection to the X server).
xterm creates a tty and runs the child process (-e command or a shell) with stdin, stdout, and stderr attached to that tty. You need to put the redirection inside the -e to have it apply in the child process, like this:
xterm -e 'your command > whatever'
SECOND ATTEMPT
To keep the ${a[@]} argument list intact but also use the shell redirection operator, you're going to have to explicitly invoke a shell with -c. Like this:
xterm -e sh -c 'your command "$@" > whatever' dummy "${a[@]}"
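Applied to the startup snippet from the question, that would look something like this (a sketch reusing the asker's paths; the redirection now happens inside the child shell, while mpg123 still reads the xterm's tty for its -C keyboard controls):
(a=($(grep "^1 " $HOME/Documents/4.n3u | awk '{print "/home/g" $2}' | sort -R | head -20)); \
xterm -e sh -c 'mpg123 -C "$@" > "$HOME/Documents/mpg123.dat" 2>&1' dummy "${a[@]}" &)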
Related
I have two simple scripts:
./cpthat
It uses BlueM/cliclick to type Shift+Cmd+A, then Cmd+C, into the active iTerm terminal:
#!/bin/zsh
cliclick kd:shift,cmd t:a ku:shift t:c ku:cmd
pbpaste > "$THATF"
Shift+Cmd+A selects the output from the previous command, and
Cmd+C copies "that" to the clipboard.
pbpaste then writes that to the file $THATF defined system-wide.
./that
#!/bin/zsh
cat "$THATF"
This prints out the output of the last command as stored by cpthat.
(I know I can run $ command > $THATF directly but for other reasons I need to act retroactively on the command output. Also, not thread safe.)
The challenge:
I'm trying to get to where I can start a zsh or bash command with a pipe:
$ |grep -i sometext
Where, in effect, this happens:
$ that|grep -i sometext
Would this be possible somehow?
Overriding the pipe operator?
zsh config magic?
I'm using zsh heavily but am open for any solution.
You don't need to start with a |. The grep utility naturally reads STDIN.
Here's a contrived example:
#!/bin/sh
# count_matches
grep "$1" | wc -l
$ cat file | count_matches thing
You can see the | you're looking for is on the command line itself, not within the script.
Similarly this works:
$ count_matches thing < file
In the first example, the STDIN is connected (via the pipe) to the output of the first command (trivially cat). In the second, it's from the actual file via redirection.
So, just get rid of the | and you should be good to go.
You can make a | alone at the beginning of a command be replaced automatically by the output of the previous command:
Edit ~/.zshrc to override zsh's zle accept-line widget:
readonly THATF="path/to/your/temporary/file"
my-accept-line () {
    # If the command line starts with '|', capture the previous
    # command's output and prepend 'cat ${THATF}' to the buffer.
    if [[ "${BUFFER:0:1}" == '|' ]]; then
        /usr/local/bin/cliclick kd:shift,cmd t:a ku:shift w:100 t:c ku:cmd
        pbpaste > "${THATF}"
        BUFFER='cat ${THATF} '${BUFFER}
    fi
    # Hand off to the original (factory) widget.
    zle .accept-line
}
zle -N accept-line my-accept-line
Explanation
When you hit enter after entering a command, zsh runs the accept-line widget.
We override that widget, but before exiting we remember to call the original widget with zle .accept-line. With the dot prefix, the factory widget is run.
In iTerm2, shift+cmd+a selects all the output from the previous command, and cmd+c copies that to the system pasteboard.
We paste the contents of the pasteboard and redirect that to the temporary file declared earlier, pointed to by ${THATF}.
We prefix $BUFFER, the zsh special variable available within zle widget code, with a command that reads the previous command's output.
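For example, with the widget installed, typing
|grep -i sometext
and pressing enter effectively executes
cat ${THATF} |grep -i sometext
where ${THATF} holds the just-captured output of the previous command.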
Dependencies, caveats:
This particular solution depends on:
cliclick for dispatching macOS keyboard events. Perhaps a native solution exists, e.g. an ANSI escape sequence.
iTerm to handle the keybind for copying the last command's output.
zsh for the zle widget.
The code snippet above is proof-of-concept only and is wildly insecure.
My intent was to have all the output of my bash script displayed on the console and logged to a file.
Here is my script that works as expected.
#!/bin/bash
LOG_FILE="test_log.log"
touch $LOG_FILE
# output to console and to logfile
exec > >(tee $LOG_FILE) 2>&1
echo "Starting command ls"
ls -al
echo "End of script"
However I do not understand why it works that way.
I expected to have exec >>(tee $LOG_FILE) 2>&1 work but it fails although exec >>$LOG_FILE 2>&1 indeed works.
I could not find the reason for the construction exec > >(command ) in the bash manual nor in advanced bash scripting. Can you explain the logic behind it ?
The >(tee $LOG_FILE) is an example of process substitution; you might wish to search for that term in the Advanced Bash-Scripting Guide and the Bash manual.
Using the syntax <(program) for capturing output and >(program) for feeding input, we can pass data one record at a time. It is more powerful than command substitution (backticks, or $( )) because it substitutes for a filename, not text. Therefore, anywhere a file is normally specified, we can substitute a program's standard output or input (although process substitution on input is not all that common).
This is particularly useful where a program does not use standard streams for what you want.
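A classic illustration (generic, not tied to the question; file1 and file2 are hypothetical): diff expects two filenames, but process substitution lets it compare the outputs of two commands directly:
diff <(sort file1) <(sort file2)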
Note that in your example you are missing a space: exec >>(tee $LOG_FILE) 2>&1 is wrong (you will get a syntax error). Rather,
exec > >(tee $LOG_FILE) 2>&1
is correct, that space is critical.
So, the exec > part changes file descriptor 1 (the default), also known as stdout or standard output, to refer to "whatever comes next"; in this case that is the process substitution, although normally it would be a filename.
2>&1 redirects file descriptor 2 (stderr or standard error) to refer to the same place as file descriptor 1 (stdout or standard output). Important: if you omit the & you end up with a file called 1 rather than a successful redirection.
Once you have called the exec line above, then you have changed the current process's standard output, so output from the commands which follow go to that tee process instead of to regular stdout.
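A quick way to convince yourself of the difference the & makes (a minimal sketch):
exec 2>&1   # duplicates file descriptor 1 onto file descriptor 2
exec 2>1    # creates (or truncates) a file literally named "1" in the current directory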
Why doesn't
which myscript | xargs vim
work nicely? My terminal (Ubuntu 14.04) freezes when I exit vim.
Or, is there an a nice clean alternative?
Why The Original Doesn't Work
You can't meaningfully pipe anything into vim if you're going to use it as an interactive editor: a pipeline overrides stdin, and an editor needs access to your terminal on stdin (unless it's, say, interacting via X11 -- but that would be gvim).
To go into a little more detail: foo | bar runs both foo and bar at the same time, with the stdout of foo connected to the stdin of bar. Thus, which myscript | xargs vim has the shell originally starting two processes -- which myscript and xargs vim -- with the stdout of which myscript connected to the stdin of xargs vim.
However, this means that xargs vim is getting its input from which, and not from the terminal/console that the user was typing at. Thus, when xargs vim starts vim, the stdin which vim inherits isn't connected to the terminal either -- and vim, being an interactive editor built to get input from the user at a terminal, fails (perhaps spectacularly or entertainingly).
What To Do Instead
vim "$(which myscript)"
The $() syntax above is a command substitution, which is replaced with the stdout of the command which it runs. As such, while this overrides the stdout of which (directed into a FIFO which the shell reads from for purposes of that substitution), it does not in any respect redirect the input and output handed to vim.
Alternately, if you really want to use xargs (note the following uses -d, a GNUism, to ensure that it works correctly when passed filenames with spaces -- though not filenames with newlines):
which myscript | xargs -d $'\n' sh -c 'exec vim "$@" <&2' dummy
The above has xargs, instead of directly running vim, start a shell which copies stderr (file descriptor 2) to stdin (file descriptor 0, the default target of redirection with <), and then starts vim, so as to provide that copy of vim a file descriptor for stdin that's attached to your terminal -- if your stderr isn't open to your TTY, replace <&2 with </dev/tty instead.
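For instance, the /dev/tty variant would look like this (with the same dummy placeholder filling the $0 slot):
which myscript | xargs -d $'\n' sh -c 'exec vim "$@" </dev/tty' dummy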
I need a way to make a process keep a certain file open forever. Here's an example of what I have so far:
sleep 1000 > myfile &
It works for a thousand seconds, but I really don't want to make some complicated sleep/loop statement. This post suggested that cat is effectively an infinite sleep. So I tried this:
cat > myfile &
It almost looks like a mistake, doesn't it? It seemed to work from the command line, but in a script the file connection did not stay open. Any other ideas?
Rather than using a background process, you can also just use bash to open one of its file descriptors:
exec 5>myfile
(The special use of exec here allows changing the current shell's file descriptor redirections; see man bash for details.) This will open file descriptor 5 on "myfile" (use >> if you don't want to truncate the file).
You can later close the file again with:
exec 5>&-
One possible downside of this is that the FD gets inherited by every program that the shell runs in the meantime. Mostly this is harmless (e.g. your greps and seds will generally ignore the extra FD), but it could be annoying in some cases, especially if you spawn processes that stay around, because they will then keep the FD open.
Note: If you are using a newer version of bash (>4.1) you can use a slightly different syntax:
exec {fd}>myfile
This allocates a new file descriptor, and puts it in the variable fd. This can help ensure that scripts don't accidentally overwrite each other's file descriptors. To close the file later, use
exec {fd}>&-
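Putting the pieces together, a minimal sketch (using myfile from above):
exec {fd}>myfile       # bash allocates a free descriptor and stores its number in $fd
echo "hello" >&"$fd"   # write to the file through the named descriptor
exec {fd}>&-           # close it again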
The reason that cat > myfile & works is that cat copies its standard input into the file.
If you launch it in the background from an interactive shell, it never receives ANY input, not even end-of-file, which means it will wait forever and print nothing to the output file.
You can get an equivalent effect, except WITHOUT dependency on standard input (the latter is what makes it not work in your script), with this command:
tail -f /dev/null > myfile &
On the question of cat > myfile & working in a terminal but not as part of a script: in a non-interactive shell, the stdin of a backgrounded command (&) gets implicitly redirected from /dev/null.
So, cat > myfile & in a script actually gets translated into cat </dev/null > myfile &, which terminates cat immediately.
See the POSIX standard on the Shell Command Language & Asynchronous Lists:
The standard input for an asynchronous list, before any explicit redirections are
performed, shall be considered to be assigned to a file that has the same
properties as /dev/null. If it is an interactive shell, this need not happen.
In all cases, explicit redirection of standard input shall override this activity.
# some tests
sh -c 'sleep 10 & lsof -p ${!}'
sh -c 'sleep 10 0<&0 & lsof -p ${!}'
sh -ic 'sleep 10 & lsof -p ${!}'
# in a script
- cat > myfile &
+ cat 0<&0 > myfile &
tail -f myfile
This 'follows' the file, and outputs any changes to the file. If you don't want to see the output of tail, redirect output to /dev/null or something:
tail -f myfile > /dev/null
You may want to use the --retry option, depending on your specific case. See man tail for more information.
It's really annoying to type this whenever I don't want to see a program's output. I'd love to know if there is a shorter way to write:
$ program >/dev/null 2>&1
Generic shell is the best, but other shells would be interesting to know about too, especially bash or dash.
In bash and zsh:
$ program >& /dev/null
You can write a function for this:
function nullify() {
"$#" >/dev/null 2>&1
}
To use this function:
nullify program arg1 arg2 ...
Of course, you can name the function whatever you want. It can be a single character for example.
By the way, you can use exec to redirect stdout and stderr to /dev/null temporarily. I don't know if this is helpful in your case, but I thought of sharing it.
# Save stdout, stderr to file descriptors 6, 7 respectively.
exec 6>&1 7>&2
# Redirect stdout, stderr to /dev/null
exec 1>/dev/null 2>/dev/null
# Run program.
program arg1 arg2 ...
# Restore stdout, stderr.
exec 1>&6 2>&7
In bash, zsh, and dash:
$ program >&- 2>&-
It may also appear to work in other shells because &- is a bad file descriptor.
Note that this solution closes the file descriptors rather than redirecting them to /dev/null, which could potentially cause programs to abort.
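You can see the effect with a trivial command (output from bash; the exact message varies by shell):
$ echo hi >&-
bash: echo: write error: Bad file descriptor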
Most shells support aliases, and zsh additionally supports global aliases (alias -g), which expand anywhere on the command line, not just in command position. For instance, in my .zshrc I have things like:
alias -g no='2> /dev/null > /dev/null'
Then I just type
program no
If /dev/null is too much to type, you could (as root) do something like:
ln -s /dev/null /n
Then you could just do:
program >/n 2>&1
But of course, scripts you write in this way won't be portable to other systems without setting up that symlink first.
It's also worth noting that redirecting output is often not really necessary. Many Unix and Linux programs accept a "silent" or "quiet" flag, usually -q (quiet) or -s (silent), that suppresses output so that only the exit status reports success or failure.
For example
grep foo bar.txt >/dev/null 2>&1
if [ $? -eq 0 ]; then
do_something
fi
Can be rewritten as
grep -q foo bar.txt
if [ $? -eq 0 ]; then
do_something
fi
Edit: the (:) or |: based solutions might cause an error because : doesn't read stdin. Though it might not be as bad as closing the file descriptor, as proposed in Zaz's answer.
For bash and bash-compliant shells (zsh...):
$ program &>/dev/null
OR
$ program &> >(:) # May actually cause an error or abort
For all shells:
$ program >/dev/null 2>&1
OR
$ program 2>&1 | : # May actually cause an error or abort
$ program 2>&1 > >(:) does not work in dash, because dash does not support process substitution.
Explanations:
2>&1 redirects stderr (file descriptor 2) to stdout (file descriptor 1).
| is the regular piping of stdout to the stdin of another command.
: is a shell builtin which does nothing (it is equivalent to true).
&> redirects both stdout and stderr outputs to a file.
>(your-command) is process substitution. It is replaced with a path to a special file, for instance /proc/self/fd/6. Whatever is written to that file becomes the standard input of your-command.
Note: a process trying to write to a closed file descriptor gets an EBADF (bad file descriptor) error, which is more likely to abort the program than writing to | true, which would cause an EPIPE (broken pipe) error; see Charles Duffy's comment.
Ayman Hourieh's solution works well for one-off invocations of overly chatty programs. But if there's only a small set of commonly called programs for which you want to suppress output, consider silencing them by adding the following to your .bashrc file (or the equivalent, if you use another shell):
CHATTY_PROGRAMS=(okular firefox libreoffice kwrite)
for PROGRAM in "${CHATTY_PROGRAMS[@]}"
do
    printf -v eval_str '%q() { command %q "$@" &>/dev/null; }' "$PROGRAM" "$PROGRAM"
    eval "$eval_str"
done
This way you can continue to invoke programs using their usual names, but their stdout and stderr output will disappear into the bit bucket.
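For instance, after the loop runs, the shell has defined a function like the following for each entry (shown here for okular):
okular() { command okular "$@" &>/dev/null; }
The command builtin ensures the real program is invoked rather than the function calling itself recursively.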
Note also that certain programs allow you to configure how much logging/debugging output they spew. For KDE applications, you can run kdebugdialog and selectively or globally disable debugging output.
It seems to me that the most portable solution, and best answer, would be a macro on your terminal (PC).
That way, no matter what server you log in to, it will always be there.
If you happen to run Windows, you can get the desired outcome with AHK (Google it; it's open source) in two tiny lines of code, which can translate any string of keys into any other string of keys, in situ.
You type "ugly.sh >>NULL" and it will rewrite it as "ugly.sh >/dev/null 2>&1" or whatnot.
Solutions for other platforms are somewhat more difficult. AppleScript can paste in keyboard presses, but can't be triggered that easily.