BASH: Strip new-line character from string (read line) - Windows

I bumped into the following problem: I'm writing a Linux bash script which does the following:
Read line from file
Strip the \n character from the end of the line just read
Execute the command that's in there
Example:
commands.txt
ls
ls -l
ls -ltra
ps as
The execution of the bash file should get the first line and execute it, but while the \n is present, the shell just outputs "command not found: ls"
That part of the script looks like this
read line
if [ -n "$line" ]; then #if not empty line
#myline=`echo -n $line | tr -d '\n'`
#myline=`echo -e $line | sed ':start /^.*$/N;s/\n//g; t start'`
myline=`echo -n $line | tr -d "\n"`
$myline #execute it
cat $fname | tail -n+2 > $fname.txt
mv $fname.txt $fname
fi
Commented out, you have the things I tried before asking on SO. Any solutions? I've been smashing my brains over this for the last couple of hours...

I always like perl -ne 'chomp and print' for trimming newlines. Nice and easy to remember.
e.g. ls -l | perl -ne 'chomp and print'
However
I don't think that is your problem here, though I'm not sure I understand how you're passing the commands in the file through to the read in your shell script.
With a test script of my own like this (test.sh)
read line
if [ -n "$line" ]; then
$line
fi
and a sample input file like this (test.cmds)
ls
ls -l
ls -ltra
If I run it like this ./test.sh < test.cmds, I see the expected result, which is to run the first command 'ls' on the current working directory.
Perhaps your input file has additional non-printable characters in it? Mine looks like this:
od -c test.cmds
0000000 l s \n l s - l \n l s - l t
0000020 r a \n
0000023
From your comments below, I suspect you may have carriage returns ("\r") in your input file, which are not the same thing as newlines. Is the input file originally in DOS format? If so, then you need to convert the two-byte DOS line ending "\r\n" to the single-byte UNIX one, "\n", to achieve the expected results.
You should be able to do this by swapping the "\n" for "\r" in any of your commented-out lines.
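For example, applied to the line from the question (deleting \r instead of \n):
myline=`echo -n "$line" | tr -d '\r'`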

Someone already wrote a program which executes shell commands: sh file
If you really only want to execute the first line of a file: head -n 1 file |sh
If your problem is carriage-returns: tr -d '\r' <file |sh

I tried this:
read line
echo -n $line | od -x
For the input 'xxxx', I get:
0000000 7878 7878
As you can see, there is no \n at the end of the contents of the variable. I suggest running the script with the option -x (bash -x script). This will print all commands as they are executed.
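For example, assuming the script is saved as script.sh:
bash -x ./script.sh < commands.txt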
[EDIT] Your problem is that you edited commands.txt on Windows. Now, the file contains CRLF (0d0a) as line delimiters which confuses read (and ls\r is not a known command). Use dos2unix or similar to turn it into a Unix file.
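A minimal sketch, with tr as an alternative if dos2unix is not installed:
dos2unix commands.txt
# or equivalently:
tr -d '\r' < commands.txt > commands.tmp && mv commands.tmp commands.txt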

You may also try to replace carriage returns with newlines only using Bash builtins:
line=$'a line\r'
line="${line//$'\r'/$'\n'}"
#line="${line/%$'\r'/$'\n'}" # replace only at line end
printf "%s" "$line" | ruby -0777 -n -e 'p $_.to_s'

You need the eval command:
#!/bin/bash -x
while read cmd
do
if [ "$cmd" ]
then
eval "$cmd"
fi
done
I ran it as
./script.sh < file.txt
And file.txt was:
ls
ls -l
ls -ltra
ps as

Though it won't help with commands like ls, I recommend having a look at find's -print0 option.
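For example (a sketch; the name pattern is arbitrary): -print0 terminates each filename with a NUL, so xargs -0 can handle names containing spaces or newlines:
find . -type f -name '*.txt' -print0 | xargs -0 ls -l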

The following script works (at least for me):
#!/bin/bash
while read I ; do if [ "$I" ] ; then $I ; fi ; done ;


Print lines where the third character is a digit

For example, our bash script's name is masodik and there is a text.txt with these lines:
qwer
qw2qw
12345
qwert432
Then I run ./masodik text.txt and I get:
qw2qw
12345
I tried it many ways and I don't know why this is not working:
#!/bin/bash
for i in read u ; do
echo $i $u | grep '^[a-zA-Z0-9][a-zA-Z0-9][0-9]'
done
$ grep -E '^.{2}[0-9]' text.txt
qw2qw
12345
and in a script it could be something like:
#!/bin/sh
grep -E '^.{2}[0-9]' "$1"
To print lines whose third character is a digit:
grep ^..[0-9] text.txt
^ matches the start of the line. The dot . matches any character. [0-9] matches any digit.
You can do it with awk quite easily as well:
awk '/^..[0-9]/' file
Result
With your input in file:
$ awk '/^..[0-9]/' file
qw2qw
12345
(sed works as well, sed -n '/^..[0-9]/p' file)
The problem with the code here:
#!/bin/bash
for i in read u ; do
echo $i $u | grep '^[a-zA-Z0-9][a-zA-Z0-9][0-9]'
done
...is that the for syntax is wrong:
read u is treated as a word list, not a command, so read never executes and the $u variable stays empty.
The for loop will run twice: the first time $i is set to the string "read", the second time to the string "u". Since neither string contains a digit, the grep matches nothing.
The code never reads text.txt.
See Sasha Khapyorsky's answer for actual working code.
If for some odd reason all external utils, (grep, awk, etc.), are forbidden, this pure POSIX code would work:
#!/bin/sh
while read u ; do
case "$u" in
[a-zA-Z0-9][a-zA-Z0-9][0-9]*) echo "$u" ;;
esac
done
If Perl is installed on the system, the shell script can look like this:
#!/bin/bash
perl -e 'print if /^.{2}\d/' text.txt

How to decode \u003d escape in bash?

I have some strings like:
dimension\u003d1920x1024:format\u003djpg
In a file. I want to decode them so they will look like:
dimension=1920x1024:format=jpg
I know that:
$ echo -e dimension\u003d1920x1024:format\u003djpg
dimensionu003d1920x1024:formatu003djpg
$ echo -e 'dimension\u003d1920x1024:format\u003djpg'
dimension=1920x1024:format=jpg
$ echo -e "dimension\u003d1920x1024:format\u003djpg"
dimension=1920x1024:format=jpg
So I tried this to get what I want:
$ cat file | xargs -L1 echo -e
dimensionu003d1920x1024:formatu003djpg
But as you can see it doesn't work. How can I get this to work? How can I make xargs pass parameters to echo as if they were quoted?
You are actually asking how to convert the sequence \uXXXX into the corresponding Unicode code point. That's quite different from other backslash escapes, or handling backslashes in general. Neither echo -e nor xargs is particularly suited for this task.
Here is one way:
perl -CSD -pe 's/\\u([0-9a-fA-F]{4})/chr(oct("0x$1"))/ge' <<<"string"
Obscurely, oct("0xff") actually performs hex decoding, because of the "0x" prefix.
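A quick sanity check:
perl -e 'print oct("0xff"), "\n"'   # prints 255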
Obviously, if your input is the text in a file rather than just a string in the shell, simply pass that as the argument to Perl.
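For example (file is a placeholder name):
perl -CSD -pe 's/\\u([0-9a-fA-F]{4})/chr(oct("0x$1"))/ge' file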
For small files:
Bash:
cat file | echo -e "$(cat -)"
Zsh:
cat file | { echo -e "$(cat -)"; }
For large files in both bash and zsh:
cat file | while read -r LINE; do echo -e "$LINE"; done
(loses spaces at the beginning of the line)
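A variant that keeps the leading spaces (a sketch; IFS is cleared for the read):
cat file | while IFS= read -r LINE; do echo -e "$LINE"; done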
Here is an attempt with Ruby, where the changes are written back to the file:
$ cat ./file
dimension\u003d1920x1024:format\u003djpg
dimension=800x600:format\u003djpg
The example above is made a bit more real-world: the second line is already partially decoded.
$ cat ./script.rb
#!/usr/bin/ruby
contents=File.read("#{ARGV[0]}")
file=File.open("#{ARGV[0]}","w")
if file
file.syswrite(contents.gsub(/\\[uU]\{?([0-9A-F]{4})\}?/i) { $1.hex.chr(Encoding::UTF_8) })
file.close()
else
puts "No file with name #{ARGV[0]} present, Usage script <filename>"
end
$ ./script file
# The changes are written to the file with nothing printed to stdout
$ cat ./file
dimension=1920x1024:format=jpg
dimension=800x600:format=jpg

Extracting a pattern (grep output) in Linux from shell?

Grep output is usually like this:
after/ftplugin/python.vim:49: setlocal number
Is it possible for me to extract the file name and line number from this result using standard Linux utilities? Looking for a generic solution that works pretty well.
I can think of using awk to get the first string like :
Input
echo 'after/ftplugin/python.vim:49: setlocal number' | awk '{print $1}'
after/ftplugin/python.vim:49:
Expected
after/ftplugin/python.vim and 49
Goal : Open in Vim
I am writing a small function that transforms the grep output to something vim can understand, mostly for academic purposes. I know there are things like Ack.vim out there which do something similar. What are the standard lightweight utils out there?
Edit: grep -n "text to find" file.ext | cut -f1 -d: seems to do it, if you don't mind parsing the string twice. sed needs to be used, though!
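For the sed version, something like this sketch works:
echo 'after/ftplugin/python.vim:49: setlocal number' | sed 's/^\([^:]*\):\([0-9]*\):.*/\1 and \2/'
after/ftplugin/python.vim and 49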
If you're using Bash you can do it this way:
IFS=: read FILE NUM __ < <(exec grep -Hn "string to find" file)
vim "+$NUM" "$FILE"
Or POSIX:
IFS=: read FILE NUM __ <<EOD
$(grep -Hn "string to find" file)
EOD
vim "+$NUM" "$FILE"
Style © konsolebox :)
This will do:
echo 'after/ftplugin/python.vim:49: setlocal number' | awk -F: '{print $1,"and",$2}'
after/ftplugin/python.vim and 49
But give us the data before grep. It may be that we can cut it down further. There is no need for both grep and awk.
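For example, assuming "setlocal" is the text being searched for, awk can do the matching and the splitting in one pass:
awk -F: '/setlocal/{print $1,"and",$2}' file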
If by "reverse parse" you mean you want to start from the end (and can safely assume that the file content contains no colons), parameter expansion makes that easy:
line='after/ftplugin/python.vim:49: setlocal number'
name_and_lineno=${line%:*}
name=${name_and_lineno%:*}
lineno=${name_and_lineno##*:}
Being all in-process (using shell built-in functionality), this is much faster than using external tools such as sed, awk, etc.
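With the sample line above, the pieces come out as:
echo "$name"    # after/ftplugin/python.vim
echo "$lineno"  # 49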
To connect it all together, consider a loop such as the following:
while read -r line; do
...
done < <(grep ...)
Now, to handle all possible filenames (including ones with colons) and all possible content (including strings with colons), you need a grep with GNU extensions:
while IFS='' read -u 4 -r -d '' file \
&& read -u 4 -r -d ':' lineno \
&& read -u 4 -r line; do
vim "+$lineno" "$file"
done 4< <(grep -HnZ -e "string to find" /dev/null file)
This works as follows:
Use grep -Z (a GNU extension) to terminate each filename with a NUL rather than a :
Use IFS='' read -r -d '' to read until the first NUL when reading filenames
Use read -r -d ':' lineno to read until a colon when reading line numbers
Read until the next newline when reading lines
Redirect contents on FD #4 to avoid overriding stdin, stdout or stderr (so vim will still work properly)
Use the -u 4 argument on all calls to read to handle contents from FD #4
How about this?
echo 'after/ftplugin/python.vim:49: setlocal number' | cut -d: -f1-2 | sed -e 's/:/ and /'
Result:
after/ftplugin/python.vim and 49

Use sed to extract an ASCII hex string from a single line in a file

I have a file that looks like this:
some random
text
00ab46f891c2emore random
text
234324fc234ba253069
and yet more text
Only one line in the file contains nothing but hex characters (234324fc234ba253069); how do I extract that? I tried sed -ne 's/^\([a-f0-9]*\)$/\1/p' file. I used line start and line end (^ and $) as anchors, but I am obviously missing something...
grep does the job:
$ grep '^[a-f0-9]\+$' file
234324fc234ba253069
With awk:
$ awk '/^[a-f0-9]+$/{print}' file
234324fc234ba253069
Based on the search pattern given, awk and grep print the matched line.
^ # start
[a-f0-9]\+ # lowercase hex characters (no capital A-F), one or more times
$ # End
sed can do it:
sed -n '/^[a-f0-9]*$/p' file
234324fc234ba253069
By the way, your command sed -ne 's/^\([a-f0-9]*\)$/\1/p' file works for me, although note that, since * also matches zero characters, it prints empty lines as well. Note, also, that it is not necessary to use \1 to print the line back. It is handy in many cases, but here it is too much, because you want to print the whole line. Just sed -n '/pattern/p' does the job, as I indicate above.
As there is just one match in the whole file, you may want to exit once it is found (thanks NeronLeVelu!):
sed -n '/^[a-f0-9]*$/{p;q}' file
Another approach is to let printf decide when the line is hexadecimal:
while read line
do
printf "%f\n" "0x"$line >/dev/null 2>&1 && echo "$line"
done < file
Based on Hexadecimal To Decimal in Shell Script, printf "%f" 0xNUMBER executes successfully if the number is indeed hexadecimal. Otherwise, it returns an error.
Hence, using printf ... >/dev/null 2>&1 && echo "$line" does not let printf print anything (redirects to /dev/null) but then prints the line if it was hexadecimal.
For your given file, it returns:
$ while read line; do printf "%f\n" "0x"$line >/dev/null 2>&1 && echo "$line"; done < file
234324fc234ba253069
Using egrep you can restrict your regex to select lines that only match valid hex characters, i.e. [a-fA-F0-9]:
egrep '^[a-fA-F0-9]+$' file
234324fc234ba253069

Help with Bash script

I'm trying to get this script to read input from a file given on the command line, match the user id in the file using grep, and output the matching lines with line numbers starting from 1) ... n) in a new file.
so far my script looks like this
#!/bin/bash
linenum=1
grep $USER $1 |
while [ read LINE ]
do
echo $linenum ")" $LINE >> usrout
$linenum+=1
done
When I run it as ./username file, I get
line 4: [: read: unary operator expected
Could anyone explain the problem to me?
Thanks
Just remove the [ ] around read LINE; square brackets are for performing tests (file exists, string is empty, etc.).
How about the following?
$ grep $USER file | cat -n >usrout
Leave off the square brackets.
while read LINE; do
echo "$linenum) $LINE"
linenum=$((linenum + 1))
done >> usrout
Just use awk:
awk -vu="$USER" '$0~u{print ++d") "$0}' file
or
grep $USER file |nl
or with the shell (no need to use grep):
i=1
while read -r line
do
case "$line" in
*"$USER"*) echo $((i++)) $line >> newfile;;
esac
done <"file"
Why not just use grep with the -n (or --line-number) switch?
$ grep -n ${USERNAME} ${FILE}
The -n switch gives the line number that the match was found on in the file. From grep's man page:
-n, --line-number
Prefix each line of output with the 1-based line number
within its input file.
So, running this against the /etc/passwd file in Linux for user test_user gives:
31:test_user:x:5000:5000:Test User,,,:/home/test_user:/bin/bash
This shows that the test_user account appears on line 31 of the /etc/passwd file.
Also, instead of $foo+=1, you should write foo=$(($foo+1)).
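For example:
foo=$((foo+1))    # POSIX arithmetic expansion
((foo+=1))        # bash-only shorthand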
