Disable escaping single quotes within string from bash read builtin command - bash

I want to process files from a text file containing single-quoted file names, like
'new'$'\n''line'
'tab'$'\t''ulator'
Copy & paste for manually processing these files works fine:
test -f 'tab'$'\t''ulator'
Now, reading from the file with the bash read builtin command
while IFS="" read -r myfile; do
    line=$myfile
    ...
done < text.txt
gives strings containing escaped single quotes, like
'\''new'\''$'\''\n'\'''\''line'\'''
'\''tab'\''$'\''\t'\'''\''ulator'\'''
However, processing these file names in a bash script does not work:
test -f "$myfile"
test -f ${myfile}
How can I disable/undo the escaping of single quotes and process the raw file name within bash?

Using eval
Many people quite reasonably regard eval as a mis-spelling of evil.
So, I would regard this solution as a last choice, to be used only if all else fails.
Let's take this sample file:
$ cat badformat
'new'$'\n''line'
'tab'$'\t''ulator'
We can read and interpret these file names as in the following example:
while read -r f; do
    eval "f=$f"; [ -f "$f" ] || echo "file not found"
done <badformat
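To see why this is dangerous, consider a hostile entry in the list. The following is a hypothetical demonstration (the file name and the /tmp/pwned side effect are invented for illustration): a command substitution smuggled into the quoted name gets executed by eval.
printf '%s\n' "'owned'\$(touch /tmp/pwned)''" > badformat
while read -r f; do
    eval "f=$f"    # eval re-parses the line and runs the command substitution
done < badformat
ls -l /tmp/pwned   # this file now exists: arbitrary code ran while "reading a name"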
Using NUL-separated lists of file names
The only character that cannot be in a Unix file name is NUL (hex 00). Consequently, many Unix tools are designed to be able to handle NUL-separated lists.
Thus, when creating the file, replace:
stat -c %N * >badformat
with:
printf '%s\0' * >safeformat
This latter file can be read into shell scripts via a while-read loop. For example:
while IFS= read -r -d $'\0' f; do
[ -f "$f" ] || echo "file not found"
done <safeformat
In addition to shell while-read loops, note that grep, find, sort, xargs, as well as GNU sed and GNU awk, all have the native ability to handle NUL-separated lists. Thus, the NUL-separated list approach is both safe and well-supported.
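For instance, here is a sketch of producing and consuming such a list with find, xargs, and GNU sort (the paths and commands are illustrative):
find . -maxdepth 1 -type f -print0 > safeformat   # emit NUL-separated names
xargs -0 ls -ld < safeformat                      # hand them to a command safely
sort -z safeformat | xargs -0 -n 1 printf '%s\n'  # GNU sort reorders the list intact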

Found the solution with string manipulation:
${filename//$'\047'\\$'\047'$'\047'/$'\047'}
As you mentioned above, using eval is very dangerous for file names like 'rm -rf'. Regarding stat -c %N (which only escapes single quotes, linefeeds and tabs), there is another solution:
while IFS="" read -r myfile; do
    filename="$myfile"
    filename="${filename#?}"                                  # strip the leading quote
    filename="${filename%?}"                                  # strip the trailing quote
    filename="${filename//"'$'\t''"/$'\011'}"                 # replace the quoted \t escape with a real tab
    filename="${filename//"'$'\n''"/$'\012'}"                 # replace the quoted \n escape with a real linefeed
    filename="${filename//$'\047'\\$'\047'$'\047'/$'\047'}"   # replace the '\'' escape with a single quote
    test -f "$filename" && echo "$myfile exists"
done < text.txt
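For reference, the quoted list from the question can be reproduced like this (a sketch; the exact quoting emitted by stat -c %N depends on your coreutils version):
touch 'new
line' "tab"$'\t'"ulator"   # create the two awkward test files
stat -c %N * > text.txt    # GNU stat writes the quoted names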

Related

How to recall a string in shell script

I made a script like this:
#! /usr/bin/bash
a=`ls ../wrfprd/wrfout_d0${i}* | cut -c22-25`
b=`ls ../wrfprd/wrfout_d0${i}* | cut -c27-28`
c=`ls ../wrfprd/wrfout_d0${i}* | cut -c30-31`
d=`ls ../wrfprd/wrfout_d0${i}* | cut -c33-34`
f=$a$b$c$d
echo $f
sed "s/.* startdate=.*/export startdate=${f}/g" ./post_process > post_process2
The echo command works and gives 2008042118, which is what I want, but in the file post_process2 the line comes out as export startdate= and the variable f is not recalled. I want to produce a line like export startdate=2008042118.
First -- don't use ls here -- it's both expensive in terms of performance (compared to globbing, which is performed internally by the shell without starting any external programs), and doesn't guarantee useful output for the full range of possible filenames, making its use in this context inherently bug-prone. A better way to retrieve pieces from a filename, assuming a ksh-derived shell such as bash or zsh, would look like this:
#!/bin/bash
# this is an array, but we're only going to use the first element
file=( "../wrfprd/wrfout_d0${i}"* )
[[ -e $file ]] || { echo "No file found" >&2; exit 1; }
# bash substring offsets are zero-based, so cut -c22-25 becomes ${file:21:4}
f=${file:21:4}${file:26:2}${file:29:2}${file:32:2}
Second, don't use sed to modify code -- doing so requires that your runtime user have permission to modify its own code, and moreover invites injection vulnerabilities. Just write your content out to a data file:
printf '%s\n' "$f" >startdate.txt
...and, in your second script, to read in the value from that file:
# if the shebang is #!/bin/bash
startdate=$(<startdate.txt)
# if the shebang is #!/bin/sh
startdate=$(cat startdate.txt)

Read an input file in shell script and store its lines in a variable

I'm new to UNIX and have this really simple problem:
I have a text-file (input.txt) containing a string in each line. It looks like this:
House
Monkey
Car
And inside my shell script I need to read this input file line by line to get to a variable like this:
things="House,Monkey,Car"
I know this sounds easy, but I just couldn't find any simple solution for this. My closest attempt so far:
#!/bin/sh
things=""
addToString() {
    things="${things},$1"
}
while read line; do addToString $line ;done <input.txt
echo $things
But this won't work. According to my Google research, I thought the while loop would create a new subshell, but I was wrong there (see the comment section). Nevertheless, the variable "things" was still not available in the echo later on. (I cannot just write the echo inside the while loop, because I need to work with that string later on.)
Could you please help me out here? Any help will be appreciated, thank you!
What you proposed works fine! I've only made two changes here: adding missing quotes, and handling the empty-string case.
things=""
addToString() {
if [ -n "$things" ]; then
things="${things},$1"
else
things="$1"
fi
}
while read -r line; do addToString "$line"; done <input.txt
echo "$things"
If you were piping into while read, this would create a subshell, and that would eat your variables. You aren't piping -- you're doing a <input.txt redirection. No subshell, code works without changes.
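A minimal sketch that makes the difference visible:
n=0
printf 'a\nb\n' | while read -r _; do n=$((n+1)); done
echo "$n"   # prints 0: the piped loop ran in a subshell
n=0
while read -r _; do n=$((n+1)); done < <(printf 'a\nb\n')
echo "$n"   # prints 2: the redirected loop ran in the current shell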
That said, there are better ways to read lists of items into shell variables. On any version of bash after 3.0:
IFS=$'\n' read -r -d '' -a things <input.txt   # read into an array
printf -v things_str '%s,' "${things[@]}"      # write array to a comma-separated string
echo "${things_str%,}"                         # print that string w/o trailing comma
...on bash 4, that first line can be:
readarray -t things <input.txt # read into an array
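As a side note, once the lines are in an array, the join can also be done with IFS and "${things[*]}" (a sketch; the subshell keeps the IFS change local):
readarray -t things <input.txt
(IFS=,; printf '%s\n' "${things[*]}")   # joins the elements with commas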
This is not a shell solution, but the truth is that solutions in pure shell are often excessively long and verbose. So e.g. to do string processing it is better to use special tools that are part of the “default” Unix environment.
sed ':b;N;$!bb;s/\n/,/g' < input.txt
If you want to omit empty lines, then:
sed ':b;N;$!bb;s/\n\n*/,/g' < input.txt
Speaking about your solution, it should work, but you should really always use quotes where applicable. E.g. this works for me:
things=""
while read line; do things="$things,$line"; done < input.txt
echo "$things"
(Of course, there is an issue with this code, as it outputs a leading comma. If you want to skip empty lines, just add an if check.)
This might/might not work, depending on the shell you are using. On my Ubuntu 14.04/x64, it works with both bash and dash.
To make it more reliable and independent of the shell's behavior, you can try to put the whole block into an explicit subshell, using parentheses ( ). For example:
(
    things=""
    addToString() {
        things="${things},$1"
    }
    while read line; do addToString $line; done
    echo $things
) < input.txt
P.S. You can use something like this to avoid the initial comma. Without bash extensions (using short-circuit logical operators instead of if, for brevity):
test -z "$things" && things="$1" || things="${things},${1}"
Or with bash extensions:
things="${things}${things:+,}${1}"
P.P.S. How I would have done it:
tr '\n' ',' < input.txt | sed 's!,$!\n!'
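The same serialization can also be done with paste alone (a sketch):
paste -s -d, input.txt   # join all lines with commas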
You can do this too:
#!/bin/bash
while read -r i
do
    [[ $things == "" ]] && things="$i" || things="$things","$i"
done < <(grep . input.txt)
echo "$things"
Output:
House,Monkey,Car
N.B.:
I used grep to tackle empty lines and the possibility that the file has no newline at the end. (A plain while read will fail to read the last line if there is no newline at the end of the file.)
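An alternative to the grep trick is to handle the missing final newline in the read loop itself, a sketch:
while IFS= read -r i || [ -n "$i" ]; do   # the || clause catches a last line
    [ -n "$i" ] || continue               # with no trailing newline; skip empties
    [ -z "$things" ] && things=$i || things=$things,$i
done < input.txt
echo "$things"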

Loop for deleting first line of Multiple Files using Bash Script

Just new to Bash scripting and programming in general. I would like to automate the deletion of the first line of multiple .data files in a directory. My script is as follows:
#!/bin/bash
for f in *.data ;
do tail -n +2 $f | echo "processing $f";
done
I get the echo message but when I cat the file nothing has changed. Any ideas?
Thanks in advance
I get the echo message but when I cat the file nothing has changed.
Because simply tailing wouldn't change the file.
You could use sed to modify the files in-place with the first line excluded. Saying
sed -i '1d' *.data
would delete the first line from all .data files.
EDIT: BSD sed (on OS X) expects an argument to -i, so you can either specify an extension to back up the old files, or pass an empty string to edit the files in place:
sed -i '' '1d' *.data
You are not changing the file itself. By using tail you simply read the file and print parts of it to stdout (the terminal), you have to redirect that output to a temporary file and then overwrite the original file with the temporary one.
#!/usr/bin/env bash
for f in *.data; do
    tail -n +2 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    echo "Processing $f"
done
Moreover, it's not clear what you'd like to achieve with the echo command. Why do you use a pipe (|) there?
sed will give you an easier way to achieve this. See devnull's answer.
I'd do it this way:
#!/usr/bin/env bash
set -eu
for f in *.data; do
    echo "processing $f"
    tail -n +2 "$f" | sponge "$f"
done
If you don't have sponge you can get it in the moreutils package.
The quotes around the filename are important--they will make it work with filenames containing spaces. And the env thing at the top is so that people can set which Bash interpreter they want to use via their PATH, in case someone has a non-default one. The set -eu makes Bash exit if an error occurs, which is usually safer.
ed is the standard editor:
shopt -s nullglob
for f in *.data; do
    echo "Processing file \`$f'"
    ed -s -- "$f" < <( printf '%s\n' "1d" "wq" )
done
The shopt -s nullglob is here just because you should always use this when using globs, especially in a script: it will make globs expand to nothing if there are no matches; you don't want to run commands with uncontrolled arguments.
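A quick illustration, assuming no files match the (made-up) pattern:
shopt -u nullglob; echo *.data_none   # prints the literal string *.data_none
shopt -s nullglob; echo *.data_none   # prints an empty line: the glob vanished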
Next, we loop on all your files, and use ed with the commands:
1: go to first line
d: delete that line
wq: write and quit
Options for ed:
-s: tells ed to shut up! We don't want ed to print its junk on our screen.
--: end of options: this will make your script much more robust, in case a file name starts with a hyphen: in that case, the hyphen would confuse ed, which would try to process it as an option. With --, ed knows that there are no more options after that and will happily process any files, even those starting with a hyphen.
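Equivalently, the ed commands can be fed from a here-document instead of process substitution (a sketch):
ed -s -- "$f" <<'EOF'
1d
wq
EOF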

Check execute command after checking file type

I am working on a bash script which executes a command depending on the file type. I want to use the file command, not the file extension, to determine the type, but I am bloody new to this scripting stuff, so if someone can help me I would be very thankful! Thanks!
Here the script I want to include the function:
#!/bin/bash
export PrintQueue="/root/xxx";
IFS=$'\n'
for PrintFile in $(/bin/ls -1 ${PrintQueue}); do
    lpr -r ${PrintQueue}/${PrintFile};
done
The point is, all files which are PDFs should be printed with the lpr command, all others with ooffice -p
You are going through a lot of extra work. Here's the idiomatic code, I'll let the man page provide the explanation of the pieces:
#!/bin/sh
for path in /root/xxx/* ; do
    case `file --brief $path` in
        PDF*) cmd="lpr -r" ;;
        *)    cmd="ooffice -p" ;;
    esac
    eval $cmd \"$path\"
done
Some notable points:
using sh instead of bash increases portability and narrows the choices of how to do things
don't use ls when a glob pattern will do the same job with less hassle
the case statement has surprising power
First, two general shell programming issues:
Do not parse the output of ls. It's unreliable and completely useless. Use wildcards, they're easy and robust.
Always put double quotes around variable substitutions, e.g. "$PrintQueue/$PrintFile", not $PrintQueue/$PrintFile. If you leave the double quotes out, the shell performs wildcard expansion and word splitting on the value of the variable. Unless you know that's what you want, use double quotes. The same goes for command substitutions $(command).
Historically, implementations of file have had different output formats, intended for humans rather than parsing. Most modern implementations have an option to output a MIME type, which is easily parseable.
#!/bin/bash
print_queue="/root/xxx"
for file_to_print in "$print_queue"/*; do
    case "$(file -bi "$file_to_print")" in
        application/pdf\;*|application/postscript\;*)
            lpr -r "$file_to_print";;
        application/vnd.oasis.opendocument.*)
            ooffice -p "$file_to_print" &&
            rm "$file_to_print";;
        # and so on
        *) echo 1>&2 "Warning: $file_to_print has an unrecognized format and was not printed";;
    esac
done
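For reference, this is roughly what the case statement matches against (sample outputs; the exact strings vary across file versions):
file -bi document.pdf   # application/pdf; charset=binary
file -bi letter.odt     # application/vnd.oasis.opendocument.text; charset=binary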
#!/bin/bash
PRINTQ="/root/docs"
OLDIFS=$IFS
IFS=$(echo -en "\n\b")
for file in $(ls -1 "$PRINTQ")
do
    type=$(file --brief "$file" | awk '{print $1}')
    if [ "$type" == "PDF" ]
    then
        echo "[*] printing $file with LPR"
        lpr "$file"
    else
        echo "[*] printing $file with OPEN-OFFICE"
        ooffice -p "$file"
    fi
done
IFS=$OLDIFS

Bash Script using Grep to search for a pattern in a file

I am writing a bash script to search for a pattern in a file using grep. I am clueless as to why it isn't working. This is the program:
echo "Enter file name...";
read fname;
echo "Enter the search pattern";
read pattern
if [ -f $fname ]; then
    result=`grep -i '$pattern' $fname`
    echo $result;
fi
Or is there a different approach to do this?
Thanks
(contents of file)
Welcome to UNIX
The shell is a command programming language that provides an interface to the UNIX operating system.
The shell can modify the environment in which commands run.
Simple UNIX commands consist of one or more words separated by blanks.
Most commands produce output on the standard output that is initially connected to the terminal. This output may be sent to a file by writing.
The standard output of one UNIX command may be connected to the standard input of another UNIX Command by writing the `pipe' operator, indicated by |
(pattern)
`UNIX` or `unix`
The single quotes around $pattern in the grep statement prevent the shell from expanding the variable, so you should use double quotes instead.
Only one of those semicolons is necessary (the one before then), but I usually omit it and put then on a line by itself. You should put double quotes around the variable that you're echoing and around the variable holding your grep pattern. Variables that hold filenames should be quoted, also. You can have read display your prompt. You should use $() instead of backticks.
read -p "Enter file name..." fname
read -p "Enter the search pattern" pattern
if [ -f "$fname" ]
then
result=$(grep -i "$pattern" "$fname")
echo "$result"
fi
read -p "Enter file name..." fname
read -p "Enter the search pattern" pattern
if [ -f "$fname" ]
then
result=$(grep -i -v -e $pattern -e "$fname")
echo "$result"
fi
