sort, mplayer and xargs

I have a directory full of music files that I want to play in mplayer. I want to play these files in order of their track number, which is the fourth field (space-separated) in their filename. I know I could do something like this:
ls | sort -nk4 > playlist
and then
mplayer -playlist playlist
but I would like to be able to do it without creating a playlist file. The best I have so far is
ls | sort -nk4 | xargs -I{} mplayer {}
This seems to work, but I am unable to use any of the normal mplayer controls. I am curious whether this is possible. It seems it should be, since you can type
mplayer songA.flac songB.flac songC.flac...
and it works fine.

Once mplayer is downstream of a pipe, its standard input is connected to the pipe and not your keyboard, so mplayer's keyboard controls stop working. Try this instead:
eval mplayer $( printf "%q\n" * | sort -n -k4 )
or if your ls has -Q (quote) option:
eval mplayer $( ls -Q | sort -n -k4 )
However, note that the best approach is to use temp files, as suggested. They offer more flexibility, avoid the quoting issues, and you can remove them when you're done. Place them under /tmp.
The %q quoting specifier of printf quotes the filenames; song names can have all kinds of characters in them.
eval is needed to strip the extra layer of quoting, so mplayer sees just the properly quoted filenames.
Kinda messy, as you can see, so I'd again recommend using temp files (been there, done that :).
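For completeness, here is a minimal sketch of that temp-file approach, assuming mktemp is available and the filenames contain no newlines:
playlist=$(mktemp /tmp/playlist.XXXXXX)
printf '%s\n' * | sort -n -k4 > "$playlist"
mplayer -playlist "$playlist"
rm -f "$playlist"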

With GNU parallel you would do this:
ls | sort -nk4 | parallel --tty -Xj1 mplayer
This will work even if your file names contain spaces.
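If the names might even contain newlines, a null-delimited variant should also work; this is a sketch assuming GNU sort (for -z) and that your parallel accepts -0 for NUL-separated input:
printf '%s\0' * | sort -z -n -k4 | parallel -0 --tty -Xj1 mplayer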


How to properly pass filenames with spaces with $* from xargs to sed via sh?

Disclaimer: this happens on macOS (Big Sur); more info about the context below.
I have to write (almost did) a script which will replace image URLs in big text (XML) files with their Base64-encoded values.
The script should run the same way with single filenames or patterns, or both, e.g.:
./replace-encode single.xml
./replace-encode pattern*.xml
./replace-encode single.xml pattern*.xml
./replace-encode folder/*.xml
Note: it should properly handle files\ with\ spaces.xml
So I ended up with this script:
#!/bin/bash
#needed for `ls` command
IFS=$'\n'
ls -1 $* | xargs -I % sed -nr 's/.*>(https?:\/\/[^<]+)<.*/\1/p' % | xargs -tI % sh -c 'sed -i "" "s#%#`curl -s % | base64`#" $0' "$*"
What it does: ls all the files, pipe the list to xargs, then search for all URLs surrounded by anchors (hence the > and < in the search expression; I also had to use sed because grep is limited on macOS), then pipe again to an sh script which runs the sed search & replace, where the replacement is the big Base64 string.
This works perfectly fine... but only for fileswithoutspaces.xml
I tried to play with $0 vs $1, $* vs $#, w/ or w/o " but to no avail.
I don't understand exactly how the variable substitution (is that what it's called? Not a native English speaker and, above all, not a script writer at all, just a Java dev all day long...) works between xargs, sh, or even bash with arguments like filenames.
The xargs -t is there to let me check how the substitution works, and that's how I noticed that using a pattern worked, but I have to leave the " around the last $*, otherwise only the 1st file is searched & replaced; the output looks like:
user#host % ./replace-encode pattern*.xml
sh -c sed -i "" "s#https://www.some.com/public/123456.jpg#`curl -s https://www.some.com/public/123456.jpg | base64`#" $0 pattern_123.xml
pattern_456.xml
Both pattern_123.xml and pattern_456.xml are handled here; with $* instead of "$*" at the end of the command, only pattern_123.xml is handled.
So is there a simple way to "fix" this?
Thank you.
Note: macOS commands have some limitations (I know), but as this script is intended for non-technical users, I can't ask them to install (or have the IT team install on their behalf) alternate GNU versions, e.g. pcregrep or 'ggrep', as I've read suggested many times...
Also: I don't intend to switch from xargs to for loops or the like because 1/ I don't have the time, and 2/ I might want to optimize the 2nd step, where some URLs might be duplicated.
There's no reason for your software to use ls or xargs, and certainly not $*.
./replace-encode single.xml
./replace-encode pattern*.xml
./replace-encode single.xml pattern*.xml
./replace-encode folder/*.xml
...will all work fine with:
#!/usr/bin/env bash
while IFS= read -r line; do
  replacement=$(curl -s "$line" | base64)
  in="$line" out="$replacement" perl -pi -e 's/\Q$ENV{"in"}/$ENV{"out"}/g' "$@"
done < <(sed -nr 's/.*>(https?:\/\/[^<]+)<.*/\1/p' "$@" | sort | uniq)
Finally ended up with this single-line script:
sed -nr 's/.*>(https?:\/\/[^<]+)<.*/\1/p' "$@" | xargs -I% sh -c 'sed -i "" "s#%#`curl -s % | base64`#" "$@"' _ "$@"
which does properly support filenames with or without spaces.
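The trailing _ "$@" is the key: with sh -c, the first argument after the inline script fills $0 and the rest become the positional parameters, so "$@" inside the quoted script expands to the filenames, each individually quoted. A quick illustration (the filenames here are made up):
sh -c 'echo "script name: $0"; printf "file: %s\n" "$@"' _ "file one.xml" "file two.xml"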

Using Bash Less and Grep together [duplicate]

Is it possible to use grep on a continuous stream?
What I mean is sort of a tail -f <file> command, but with grep on the output in order to keep only the lines that interest me.
I've tried tail -f <file> | grep pattern but it seems that grep can only be executed once tail finishes, that is to say never.
Turn on grep's line buffering mode when using BSD grep (FreeBSD, Mac OS X etc.)
tail -f file | grep --line-buffered my_pattern
It looks like a while ago --line-buffered didn't matter for GNU grep (used on pretty much any Linux) as it flushed by default (YMMV for other Unix-likes such as SmartOS, AIX or QNX). However, as of November 2020, --line-buffered is needed (at least with GNU grep 3.5 in openSUSE, but it seems generally needed based on comments below).
I use the tail -f <file> | grep <pattern> all the time.
It will wait till grep flushes, not till it finishes (I'm using Ubuntu).
I think that your problem is that grep uses some output buffering. Try
tail -f file | stdbuf -o0 grep my_pattern
This will set grep's output buffering mode to unbuffered.
If you want to find matches in the entire file (not just the tail), and you want it to sit and wait for any new matches, this works nicely:
tail -c +0 -f <file> | grep --line-buffered <pattern>
The -c +0 flag says that the output should start 0 bytes (-c) from the beginning (+) of the file.
In most cases, you can tail -f /var/log/some.log |grep foo and it will work just fine.
If you need to use multiple greps on a running log file and you find that you get no output, you may need to stick the --line-buffered switch into your middle grep(s), like so:
tail -f /var/log/some.log | grep --line-buffered foo | grep bar
You may consider this answer an enhancement. Usually I am using
tail -F <fileName> | grep --line-buffered <pattern> -A 3 -B 5
-F is better in case of file rotation (-f will not work properly if the file is rotated).
-A and -B are useful to get the lines just before and after the pattern occurrence; these blocks will appear between dashed-line separators.
But for me I prefer doing the following:
tail -F <file> | less
This is very useful if you want to search inside streamed logs, i.e. to go back and forward and look deeply.
Didn't see anyone offer my usual go-to for this:
less +F <file>
ctrl + c
/<search term>
<enter>
shift + f
I prefer this, because you can use ctrl + c to stop and navigate through the file whenever, and then just hit shift + f to return to the live, streaming search.
sed would be a better choice (stream editor)
tail -n0 -f <file> | sed -n '/search string/p'
and then if you wanted the tail command to exit once you found a particular string:
tail --pid=$(($BASHPID+1)) -n0 -f <file> | sed -n '/search string/{p; q}'
Obviously a bashism: $BASHPID will be the process id of the tail command. The sed command is next after tail in the pipe, so the sed process id will be $BASHPID+1.
Yes, this will actually work just fine. Grep and most Unix commands operate on streams one line at a time. Each line that comes out of tail will be analyzed and passed on if it matches.
This one command worked for me (SUSE):
mail-srv:/var/log # tail -f /var/log/mail.info |grep --line-buffered LOGIN >> logins_to_mail
collecting logins to mail service
Coming somewhat late to this question, and considering this kind of work an important part of any monitoring job, here is my (not so short) answer...
Following logs using bash
1. Command tail
This command is a little more powerful than the already published answers suggest.
Difference between follow option tail -f and tail -F, from manpage:
-f, --follow[={name|descriptor}]
output appended data as the file grows;
...
-F same as --follow=name --retry
...
--retry
keep trying to open a file if it is inaccessible
This means: by using -F instead of -f, tail will re-open the file(s) when they are removed (on log rotation, for example).
This is useful for watching log files over many days.
The ability to follow more than one file simultaneously
I've already used:
tail -F /var/www/clients/client*/web*/log/{error,access}.log /var/log/{mail,auth}.log \
/var/log/apache2/{,ssl_,other_vhosts_}access.log \
/var/log/pure-ftpd/transfer.log
For following events through hundreds of files... (consider the rest of this answer to understand how to make it readable ;)
Use the -n switch (don't use -c, which counts bytes rather than lines!). By default tail will show the 10 last lines. This can be tuned:
tail -n 0 -F file
Will follow the file, but only new lines will be printed.
tail -n +0 -F file
Will print the whole file before following its progression.
2. Buffer issues when piping:
If you plan to filter the output, consider buffering! See the -u option for sed, --line-buffered for grep, or the stdbuf command:
tail -F /some/files | sed -une '/Regular Expression/p'
Is (a lot more efficient than using grep) a lot more reactive than if you don't use the -u switch in the sed command.
tail -F /some/files |
sed -une '/Regular Expression/p' |
stdbuf -i0 -o0 tee /some/resultfile
3. Recent journaling system
On recent systems, instead of tail -f /var/log/syslog you have to run journalctl -xf, in much the same way...
journalctl -axf | sed -une '/Regular Expression/p'
But read the man page: this tool was built for log analysis!
4. Integrating this in a bash script
Colored output of two files (or more)
Here is a sample script watching many files, coloring the output of the 1st file differently from the others:
#!/bin/bash
tail -F "$#" |
sed -une "
/^==> /{h;};
//!{
G;
s/^\\(.*\\)\\n==>.*${1//\//\\\/}.*<==/\\o33[47m\\1\\o33[0m/;
s/^\\(.*\\)\\n==> .* <==/\\o33[47;31m\\1\\o33[0m/;
p;}"
This works fine on my host, running:
sudo ./myColoredTail /var/log/{kern.,sys}log
Interactive script
Maybe you're watching logs in order to react to events?
Here is a little script that plays a sound when some USB device appears or disappears; the same script could send mail, or perform any other interaction, like powering on the coffee machine...
#!/bin/bash
exec {tailF}< <(tail -F /var/log/kern.log)
tailPid=$!
while :; do
    read -rsn 1 -t .3 keyboard
    [ "${keyboard,}" = "q" ] && break
    if read -ru $tailF -t 0 _; then
        read -ru $tailF line
        case $line in
            *New\ USB\ device\ found* ) play /some/sound.ogg ;;
            *USB\ disconnect* )         play /some/othersound.ogg ;;
        esac
        printf "\r%s\e[K" "$line"
    fi
done
echo
exec {tailF}<&-
kill $tailPid
You can quit by pressing the Q key.
You certainly won't succeed with
tail -f /var/log/foo.log | grep --line-buffered string2search
when you use "colortail" as an alias for tail, e.g. in bash:
alias tail='colortail -n 30'
You can check with
type tail
If this outputs something like
tail is an alias of colortail -n 30
then you have your culprit :)
Solution:
remove the alias with
unalias tail
ensure that you're using the 'real' tail binary with this command
type tail
which should output something like:
tail is /usr/bin/tail
and then you can run your command
tail -f foo.log |grep --line-buffered something
Good luck.
Use awk (another great utility) instead of grep when you don't have the line-buffered option! It will continuously stream your data from tail.
This is how you use grep:
tail -f <file> | grep pattern
This is how you would use awk
tail -f <file> | awk '/pattern/{print $0}'
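One caveat: awk itself may block-buffer its output when writing to a pipe rather than a terminal. If you pipe awk's output onward, a variant that flushes after each match (fflush() is supported by gawk, mawk, and BSD awk) would be:
tail -f <file> | awk '/pattern/{print $0; fflush()}'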

Passing filepaths containing spaces with xargs

I'm trying to use xargs to pass the contents of a variable containing zero or more filepaths separated by newlines to another command and have been having inconsistent success.
My input is the output of this:
newHTK=`grep -Fxv -f $TMPFILE /Users/foo/.htk`
Which generates the aforementioned list of filenames. Here's where things go wrong (or sometimes inexplicably right):
echo "$newHTK" | xargs -L 1 xattr -w com.apple.metadata:kMDItemFinderComment htk
The intention is for it to use each line in $newHTK as a filename argument for xattr. What usually happens is that xargs splits the input at the spaces. I think I might need to escape the filenames coming out of the echo command or somehow enclose them in double quotation marks (any advice on an easy way to do this would be appreciated). But if that's the case, why did it work for some of the files?
You can use the xargs -I flag (if you have it; I don't know what its portability is) to do this:
grep -Fxv -f $TMPFILE /Users/foo/.htk | xargs -I % xattr -w com.apple.metadata:kMDItemFinderComment htk %
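If your xargs supports -0 (both the BSD and GNU versions do), a null-delimited variant sidesteps the space-splitting entirely; a sketch based on the same pipeline:
grep -Fxv -f "$TMPFILE" /Users/foo/.htk | tr '\n' '\0' | xargs -0 -n 1 xattr -w com.apple.metadata:kMDItemFinderComment htk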

How to execute the output of a command within the current shell?

I'm well aware of the source (aka .) utility, which will take the contents from a file and execute them within the current shell.
Now, I'm transforming some text into shell commands, and then running them, as follows:
$ ls | sed ... | sh
ls is just a random example, the original text can be anything. sed too, just an example for transforming text. The interesting bit is sh. I pipe whatever I got to sh and it runs it.
My problem is that this means starting a new subshell. I'd rather have the commands run within my current shell, like I would be able to do with source some-file if I had the commands in a text file.
I don't want to create a temp file because that feels dirty.
Alternatively, I'd like to start my subshell with the exact same characteristics as my current shell.
update
Ok, the solutions using backticks certainly work, but I often need to do this while I'm checking and changing the output, so I'd much prefer if there was a way to pipe the result into something at the end.
sad update
Ah, the /dev/stdin thing looked so pretty, but, in a more complex case, it didn't work.
So, I have this:
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/git mv -f $1 $2.doc/i' | source /dev/stdin
Which ensures all .doc files have their extension lowercased.
And which, incidentally, can be handled with xargs, but that's beside the point.
find . -type f -iname '*.doc' | ack -v '\.doc$' | perl -pe 's/^((.*)\.doc)$/$1 $2.doc/i' | xargs -L1 git mv
So, when I run the former, it'll exit right away, nothing happens.
The eval command exists for this very purpose.
eval "$( ls | sed... )"
More from the bash manual:
eval [arguments]
    The arguments are concatenated together into a single command, which is then read and executed, and its exit status is returned as the exit status of eval. If there are no arguments or only empty arguments, the return status is zero.
$ ls | sed ... | source /dev/stdin
UPDATE: This works in bash 4.0, as well as tcsh, and dash (if you change source to .). Apparently this was buggy in bash 3.2. From the bash 4.0 release notes:
Fixed a bug that caused `.' to fail to read and execute commands from non-regular files such as devices or named pipes.
Try using process substitution, which replaces output of a command with a temporary file which can then be sourced:
source <(echo id)
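A quick way to convince yourself that this really runs in the current shell (unlike piping to sh): variables set by the sourced output persist afterwards. For example:
source <(echo 'foo=bar')
echo "$foo"    # prints bar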
Wow, I know this is an old question, but I've found myself with the same exact problem recently (that's how I got here).
Anyway - I don't like the source /dev/stdin answer, but I think I found a better one. It's deceptively simple actually:
echo ls -la | xargs xargs
Nice, right? Actually, this still doesn't do what you want, because if you have multiple lines it will concatenate them into a single command instead of running each command separately. So the solution I found is:
ls | ... | xargs -L 1 xargs
The -L 1 option means you use (at most) one line per command execution. Note: if your line ends with a trailing space, it will be concatenated with the next line! So make sure each line ends with a non-space character.
Finally, you can do
ls | ... | xargs -L 1 xargs -t
to see what commands are executed (-t is verbose).
Hope someone reads this!
`ls | sed ...`
I sort of feel like ls | sed ... | source - would be prettier, but unfortunately source doesn't understand - to mean stdin.
I believe this is "the right answer" to the question:
ls | sed ... | while read line; do $line; done
That is, one can pipe into a while loop; the read command takes one line from its stdin and assigns it to the variable $line. $line then becomes the command executed within the loop, and this continues until there are no further lines in the input.
This still won't work with some control structures (like another loop), but it fits the bill in this case.
To use mark4o's solution on bash 3.2 (macOS), a here string can be used instead of a pipeline, as in this example:
. /dev/stdin <<< "$(grep '^alias' ~/.profile)"
I think your solution is command substitution with backticks: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_03_04.html
See section 3.4.5
Why not use source then?
$ ls | sed ... > out.sh ; source out.sh

Use lines in a file as filenames for grep?

I have a file which contains filenames (and the full path to them) and I want to search for a word within all of them.
some pseudo-code to explain:
grep keyword <all files specified in files.txt>
or
cat files.txt > grep keyword
cat files.txt | grep keyword
the problem is that I can only get grep to search the filenames, not the contents of the actual files.
cat files.txt | xargs grep keyword
or
grep keyword `cat files.txt`
or (equivalent to previous but harder to mis-read)
grep keyword $(cat files.txt)
should do the trick.
Pitfalls:
If files.txt contains file names with spaces, either solution will malfunction, because "This is a filename.txt" will be interpreted as four files, "This", "is", "a", and "filename.txt". A good reason why you shouldn't have spaces in your filenames, ever.
There are ways around this, but none of them is trivial. (find ... -print0 / xargs -0 is one of them; see the sketch after these notes.)
The second (cat) version can result in a very long command line (which might fail when exceeding the limits of your environment). The first (xargs) version handles long input automatically; xargs offers several options to control the details.
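A sketch of that find -print0 / xargs -0 workaround; it only applies if the list in files.txt could be regenerated by a find expression (the pattern here is made up), since it bypasses the file list entirely:
find . -name '*.conf' -print0 | xargs -0 grep keyword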
Both of the answers from DevSolar work (tested on Linux Ubuntu), but the xargs version is preferable if there may be many files, since it will avoid running into command line length limits.
so:
cat files.txt | xargs grep keyword
is the way to go
tr '\n' '\0' <files.txt | LANG=C xargs -r0 grep -F keyword
tr will delimit the names with the NUL character so that spaces are not significant (note the corresponding -0 option to xargs).
xargs -r will start a single grep process for a "large" number of files, but not start any grep process if there are no files.
LANG=C means use quick routines for matching, rather than slow locale ones
grep -F means use quick string matching rather than slow regular expression matching
bash, ksh & zsh version:
grep keyword $(<files.txt)
It's been a long time since I last created a bash shell script, but you could store the result of the first grep (the one finding all the filenames) in an array and iterate over it, issuing even more grep commands, as sketched below.
A good starting point should be the bash scripting guide.
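A minimal sketch of that array approach, assuming bash 4+ for mapfile (quoting the expansion keeps filenames with spaces intact):
mapfile -t files < files.txt
for f in "${files[@]}"; do
    grep keyword "$f"
done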
