Unable to print newline inside xargs - bash

I'm working on some script to help organize some stuff. I have simplified my code down to this from my original:
$ echo $(find -type d -maxdepth 5 | grep -E "\./([^/])*/([^/])*/([^/])*/([^/])*"
| cut -d'/' -f 3 | sort | uniq -d | xargs -I R sh -c 'echo -e "alpha \n"')
(R actually doesn't do anything here but it's used in the original)
Particularly, I think something is wrong with my xargs
xargs -I R sh -c 'echo -e "alpha \n"'
What I would like to see happen is alpha printed several times, each on its own line. However, my output looks like
alpha alpha alpha...
I've been scouring around the internet trying to find how to fix this but it's no use. I've just started experimenting with bash, can someone please point out what I'm doing wrong?

This is a special case of "I just assigned a variable, but echo $variable shows something else".
Just running a command, no echo:
$ printf '%s\n' "first line" "second line"
first line
second line
Running a command in an unquoted command substitution, as your code is currently written:
$ echo $(printf '%s\n' "first line" "second line")
first line second line
Running a command in a quoted command substitution:
$ echo "$(printf '%s\n' "first line" "second line")"
first line
second line
...so: If you don't have a reason for the outer echo $(...), just remove it; and if you do have a reason, add quotes.
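Applied to the question, a minimal sketch (with a hypothetical three-line input standing in for the whole find | grep | cut | sort | uniq pipeline) shows that dropping the outer echo lets each newline reach the terminal intact:

```shell
# Hypothetical stand-in for the find pipeline: three names on stdin.
# xargs -I R runs the command once per input line; with no outer
# echo $(...) to word-split the result, each "alpha" keeps its newline.
printf '%s\n' dirA dirB dirC | xargs -I R sh -c 'echo "alpha"'
```

This prints alpha three times, one per line.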

Related

Pipe filepath to ImageJ

I have a little command line utility rjp2tif that extracts radiometric data from a jpeg file into a tiff file. I was hoping to be able to pipe the filepath to ImageJ on the command line and have ImageJ open the tiff file. To this end, rjp2tif writes the filepath of the tiff file to standard output. I tried the following in bash:
$ rjp2tif /path/to/rjpeg | open -a imagej
and
$ rjp2tif /path/to/rjpeg | open -a imagej -f
The first opens ImageJ but doesn't open the file.
The second opens ImageJ with a text window with the filepath in it.
This is on macOS Monterey, but I don't think that matters.
Anyone tried to do this and been successful? TIA.
Assuming the rjp2tif command returns a file-path on standard output, and you want to pass this output as a regular CLI argument to another command, you may be interested in the xargs command. But note that in the general case, you may hit issues if the file-path contains spaces or the like:
Read space, tab, newline and end-of-file delimited arguments from standard input and execute the specified utility with them as arguments.
The arguments are typically a long list of filenames (generated by ls or find, for example) that get passed to xargs via a pipe.
So in this case, assuming each file-path takes only one line (which is obviously the case if there's only one line overall), you can use the following NUL-based tip relying on the tr command.
Here is the command you'd obtain:
rjp2tif /path/to/rjpeg | tr '\n' '\0' | xargs -0 open -a imagej
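To see why the NUL trick matters, here is a sketch with a hypothetical path (echo stands in for open -a imagej, which only exists on macOS): a path containing a space survives as a single argument.

```shell
# Plain line-based xargs would split this path into two arguments at the
# space; after tr '\n' '\0', the NUL-aware xargs -0 passes it whole.
printf '%s\n' '/tmp/radiometric data.tif' | tr '\n' '\0' | xargs -0 echo open -a imagej
```

This prints `open -a imagej /tmp/radiometric data.tif`, i.e. the path arrives as one argument.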
Note: I have a GNU/Linux OS, so can you please confirm it does work under macOS?
FTR, below is some shell code allowing one to test two different modes of xargs: generating one command per line-argument (-n1), or a single command with all line-arguments in one go:
$ printf 'one \ntwo\nthree and four' | tr '\n' '\0' | xargs -0 -n1 \
bash -c 'printf "Run "; for a; do printf "\"$a\" "; done; echo' bash
Run "one "
Run "two"
Run "three and four"
$ printf 'one \ntwo\nthree and four' | tr '\n' '\0' | xargs -0 \
bash -c 'printf "Run "; for a; do printf "\"$a\" "; done; echo' bash
Run "one " "two" "three and four"
######################################
# or alternatively (with no for loop):
######################################
$ printf 'one \ntwo\nthree and four' | tr '\n' '\0' | xargs -0 -n1 \
bash -c 'printf "Run "; printf "\"%s\" " "$@"; echo' bash
Run "one "
Run "two"
Run "three and four"
$ printf 'one \ntwo\nthree and four' | tr '\n' '\0' | xargs -0 \
bash -c 'printf "Run "; printf "\"%s\" " "$@"; echo' bash
Run "one " "two" "three and four"

xargs adds whitespace after echo statement when outputting on to new line

using xargs and echo to output result of samtools to new line in output.txt file
samtools view $SAMPLE\.bam | cut -f3 | uniq -c | sort -n | \
xargs -r0 -n1 echo -e "Summarise mapping...\n" >> ../output.txt
This adds the result on a new line after the echo, but it also adds a space before the result on the first new line. How can I stop this?
It's not xargs which is adding the space. It's the echo command:
The echo utility arguments shall be separated by single <space> characters and a <newline> character shall follow the last argument. (Text from Posix standard; emphasis added.)
If you want more control, use printf:
...
xargs -r0 -n1 printf "Summarise mapping...\n%s\n" >> ../output.txt
Unlike echo, printf does not automatically add a newline at the end, so it needs to be included in the format string.
Note that printf automatically interprets escape sequence like \n in the format string (but not in interpolated arguments). As an additional bonus for using printf, you could leave out the -n1 option since printf automatically repeats the format until all arguments are consumed.
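To illustrate that bonus, here is a sketch with made-up NUL-delimited records standing in for the samtools output: a single printf repeats its format once per argument, so -n1 is unnecessary.

```shell
# Two NUL-delimited records; one printf invocation reuses the format
# string for each argument, printing the banner before every record.
printf 'chr1 10\0chr2 20\0' | xargs -0 printf "Summarise mapping...\n%s\n"
```

Output:
Summarise mapping...
chr1 10
Summarise mapping...
chr2 20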

Tailing less with a custom LESSOPEN colorize script

I've written the following script to pick out keywords from a log file and highlight terms:
#!/bin/bash
case "$1" in
*.log) sed -e "s/\(.*\[Error\ \].*\)/\x1B[31m&\x1b[0m/" "$1" \
| sed -e "s/\(.*\[Warn\ \ \].*\)/\x1B[33m&\x1b[0m/" \
| sed -e "s/\(.*\[Info\ \ \].*\)/\x1B[32m&\x1b[0m/" \
| sed -e "s/\(.*\[Debug\ \].*\)/\x1B[32m&\x1b[0m/"
;;
esac
It works OK until I try and follow/tail less (Shift+F) at which point it fails to tail any new log lines. Any ideas why?
That colorizes what you pass as an argument to the script. What you want instead is to read from stdin. Wrap your case statement in the following loop:
while read LINE; do
case "$LINE" in
# ... rest of your code here
esac
done
Now you can pipe it into your script:
tail -f somefile | colorize_script.sh
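Alternatively, a minimal sketch that skips the read loop entirely: since sed reads stdin when given no filename, the script body can be one multi-expression sed (assuming GNU sed; -u disables output buffering so lines piped from tail -f appear immediately).

```shell
#!/bin/bash
# colorize_stdin.sh (hypothetical name): the same substitutions as the
# question's script, but reading stdin so `tail -f x.log | ./colorize_stdin.sh`
# follows live output. -u keeps sed unbuffered (GNU sed).
sed -u -e "s/\(.*\[Error \].*\)/\x1B[31m&\x1B[0m/" \
       -e "s/\(.*\[Warn  \].*\)/\x1B[33m&\x1B[0m/" \
       -e "s/\(.*\[Info  \].*\)/\x1B[32m&\x1B[0m/" \
       -e "s/\(.*\[Debug \].*\)/\x1B[32m&\x1B[0m/"
```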
Additional answer:
I had this same need several years ago so I wrote a script that works like grep but colorizes the matching text instead of hiding non-matching text. If you have tcl on your system you can grab my script from here: http://wiki.tcl.tk/38096
Just copy/paste the code (it's only 200 lines) into an empty file and chmod it to make it executable. Name it cgrep (for color-grep) and put it somewhere in your executable path. Now you can do something like this:
tail -f somefile | cgrep '.*\[Error\s*\].*' -fg yellow -bg red

Is there any way to make gcc print offending lines when it emits an error?

I have a large codebase that I've been tasked with porting to 64 bits. The code compiles, but it prints a very large amount of incompatible pointer warnings (as is to be expected.) Is there any way I can have gcc print the line on which the error occurs? At this point I'm just using gcc's error messages to try to track down assumptions that need to be modified, and having to look up every one is not fun.
I've blatantly stolen Joseph Quinsey's answer for this. The only difference is I've attempted to make the code easier to understand:
For bash, use make 2>&1 | show_gcc_line with show_gcc_line the following script:
#!/bin/bash
# Read and echo each line only if it is an error or warning message
# The lines printed will start something like "foobar:123:" so that
# line 123 of file foobar will be printed.
while read -r input
do
    loc=$(echo "$input" | sed -n 's/^\([^ :]*\):\([0-9]*\):.*/\1 \2/p')
    len=${#loc}
    file=${loc% *}
    line=${loc#* }
    if [ "$len" -gt 0 ]
    then
        echo "$input"
        sed -n "${line}p" "$file"
        echo
    fi
done
This was partly because I did not like the formatting of the original. This only prints the warnings/errors, followed by the line of code causing the problem, followed by a blank line. I've removed the string of hyphens too.
Perhaps a script to print the desired lines would help. If you are using csh (unlikely!) use:
make ... |& show_gcc_line
with show_gcc_line the following script:
#!/bin/csh
# Read and echo each line. And, if it starts with "foobar:123:", print line 123
# of foobar, using find(1) to find it, prefaced by ---------------.
set input="$<"
while ( "$input" )
echo "$input"
set loc=`echo "$input" | sed -n 's/^\([^ :]*\):\([0-9]*\):.*/\1 \2/p'`
if ( $#loc ) then
find . -name $loc[1] | xargs sed -n $loc[2]s/^/---------------/p
endif
set input="$<"
end
And for bash, use make ... 2>&1 | show_gcc_line with:
#!/bin/bash
# Read and echo each line. And, if it starts with "foobar:123:", print line 123
# of foobar, using find(1) to find it, prefaced by ---------------.
while read -r input
do
    echo "$input"
    loc=$(echo "$input" | sed -n 's/^\([^ :]*\):\([0-9]*\):.*/\1 \2/p')
    if [ ${#loc} -gt 0 ]
    then
        find . -name "${loc% *}" | xargs sed -n "${loc#* }s/^/---------------/p"
    fi
done
Use the -W options to control which warnings you want displayed; these flags are explained in the gcc documentation.
Also you can use this trick to suppress the regular (non-error) output:
gcc ... 1>/dev/null
By the time the compiler emits an error message, the actual source line is long gone, (particularly in C) - it has been transformed to a token stream, then to an abstract syntax tree, then to a decorated syntax tree... gcc has enough on its plate with the many steps of compiling, so it intentionally does not include functionality to reopen the file and re-retrieve the original source. That's what editors are for, and virtually all of them have commands to start a compilation and jump to the next error at a keypress. Do yourself a favor and use a modern editor to browse errors (and maybe even fix them semi-automatically).
This little script should work, but I can't test it right now; apologies if it needs editing.
LAST_ERROR_LINE=$( (gcc ... 2>&1 >/dev/null) | grep error | tail -n1)
FILE=$(echo "$LAST_ERROR_LINE" | cut -f1 -d':')
LINE=$(echo "$LAST_ERROR_LINE" | cut -f2 -d':')
sed -n "${LINE}p" "$FILE"
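The file/line extraction can be exercised without running a compiler; here is a sketch with a made-up gcc-style diagnostic line:

```shell
# Hypothetical diagnostic in gcc's file:line: format; cut splits on ':'
# to pull out the filename (field 1) and line number (field 2).
LAST_ERROR_LINE='main.c:42: error: something went wrong'
FILE=$(echo "$LAST_ERROR_LINE" | cut -f1 -d':')   # main.c
LINE=$(echo "$LAST_ERROR_LINE" | cut -f2 -d':')   # 42
echo "$FILE $LINE"
```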
For me, the redirection syntax for error messages is hard to remember, so here is my version for printing out gcc error messages:
$ ee make
and for both error and warning messages:
$ ee2 make
How: add these functions to your .bashrc:
function ee() {
    "$@" 2>&1 | grep error
}
function ee2() {
    "$@" 2> ha   # stderr goes to a scratch file named "ha"
    echo "-----"
    echo "Error"
    echo "-----"
    grep error ha
    echo "-------"
    echo "Warning"
    echo "-------"
    grep warning ha
}

BASH: Strip new-line character from string (read line)

I bumped into the following problem: I'm writing a Linux bash script which does the following:
Read line from file
Strip the \n character from the end of the line just read
Execute the command that's in there
Example:
commands.txt
ls
ls -l
ls -ltra
ps as
The execution of the bash file should get the first line and execute it, but with the \n present, the shell just outputs "command not found: ls".
That part of the script looks like this
read line
if [ -n "$line" ]; then #if not empty line
#myline=`echo -n $line | tr -d '\n'`
#myline=`echo -e $line | sed ':start /^.*$/N;s/\n//g; t start'`
myline=`echo -n $line | tr -d "\n"`
$myline #execute it
cat $fname | tail -n+2 > $fname.txt
mv $fname.txt $fname
fi
Commented out are the things I tried before asking SO. Any solutions? I've been smashing my brains over this for the last couple of hours...
I always like perl -ne 'chomp and print' for trimming newlines. Nice and easy to remember.
e.g. ls -l | perl -ne 'chomp and print'
However
I don't think that is your problem here though. Although I'm not sure I understand how you're passing the commands in the file through to the 'read' in your shell script.
With a test script of my own like this (test.sh)
read line
if [ -n "$line" ]; then
$line
fi
and a sample input file like this (test.cmds)
ls
ls -l
ls -ltra
If I run it like this ./test.sh < test.cmds, I see the expected result, which is to run the first command 'ls' on the current working directory.
Perhaps your input file has additional non-printable characters in it ?
mine looks like this
od -c test.cmds
0000000 l s \n l s - l \n l s - l t
0000020 r a \n
0000023
From your comments below, I suspect you may have carriage returns ( "\r" ) in your input file, which is not the same thing as a newline. Is the input file originally in DOS format ? If so, then you need to convert the 2 byte DOS line ending "\r\n" to the single byte UNIX one, "\n" to achieve the expected results.
You should be able to do this by swapping the "\n" for "\r" in any of your commented out lines.
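A quick sketch of that swap, simulating the DOS-style line that read would deliver from a CRLF file:

```shell
# "ls" plus a stray carriage return, as read from a DOS-format file.
line=$(printf 'ls\r')
# The question's commented-out tr line, adjusted to delete \r instead of \n.
myline=$(printf '%s' "$line" | tr -d '\r')
# $myline is now plain "ls" and can be executed.
echo "$myline"
```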
Someone already wrote a program which executes shell commands: sh file
If you really only want to execute the first line of a file: head -n 1 file |sh
If your problem is carriage-returns: tr -d '\r' <file |sh
I tried this:
read line
echo -n $line | od -x
For the input 'xxxx', I get:
0000000 7878 7878
As you can see, there is no \n at the end of the contents of the variable. I suggest to run the script with the option -x (bash -x script). This will print all commands as they are executed.
[EDIT] Your problem is that you edited commands.txt on Windows. Now, the file contains CRLF (0d0a) as line delimiters which confuses read (and ls\r is not a known command). Use dos2unix or similar to turn it into a Unix file.
You may also try to replace carriage returns with newlines only using Bash builtins:
line=$'a line\r'
line="${line//$'\r'/$'\n'}"
#line="${line/%$'\r'/$'\n'}" # replace only at line end
printf "%s" "$line" | ruby -0777 -n -e 'p $_.to_s'
You need the eval command:
#!/bin/bash -x
while read cmd
do
if [ "$cmd" ]
then
eval "$cmd"
fi
done
I ran it as
./script.sh < file.txt
And file.txt was:
ls
ls -l
ls -ltra
ps as
Though not a fix for the ls problem itself, I recommend having a look at find's -print0 option.
The following script works (at least for me):
#!/bin/bash
while read I ; do if [ "$I" ] ; then $I ; fi ; done ;
