I'm writing a script in bash and i would like to know if there is an alternative way to write these sed commands (without using sed):
sed '1,11d;$d' "${SOTTOCARTELLA}"/file
sed '1,11!d' "${SOTTOCARTELLA}"/file
sed '1,11d' -i "${SOTTOCARTELLA}"/file1
With sed '1,11d;$d' "${SOTTOCARTELLA}"/file you are asking for everything except the first 11 lines and the last line;
With sed '1,11!d' "${SOTTOCARTELLA}"/file you are asking for the first 11 lines of the file;
With sed '1,11d' -i "${SOTTOCARTELLA}"/file1 you are asking for the entire file except the first 11 lines, edited in place.
If you don't want to use head, tail or other binaries as suggested, you can achieve the same options using read and some support variables.
For example, let's try sed '1,11!d' "${SOTTOCARTELLA}"/file.
You will need a start point and an end point (and of course, the file).
start=1
end=11
counter="$((start - 1))"
file="${SOTTOCARTELLA}/file"
exec 3<"${file}"                               ### Create file descriptor 3
while IFS= read -r line <&3; do                ### Read the file line by line
    if [ "${counter}" -lt "${end}" ]; then     ### If I'm within my boundaries
        printf "%s\n" "${line}"                ### Print the line
    fi
    counter="$((counter + 1))"
done
exec 3>&-                                      ### Close file descriptor 3
Note that this piece of code could be much better (e.g. by adding a check on the counter in the while condition so the loop can stop early), but it is the minimum you need in order to understand two things:
sed, head, tail, awk, etc. exist so that you do not have to rewrite the same routines over and over again, and also for performance reasons; this is why everyone, including me, will tell you to use them.
This kind of code is useful only for portability concerns, which is why I wrote this piece of code in a POSIX-compliant way.
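For reference, the usual utilities would express the same three commands as follows; this is only a sketch, a negative count to head is a GNU extension, and the in-place -i behaviour of the third sed command would additionally need a redirect to a temporary file:
head -n 11 "${SOTTOCARTELLA}/file"                 # sed '1,11!d': first 11 lines
tail -n +12 "${SOTTOCARTELLA}/file"                # sed '1,11d': all but the first 11 lines
tail -n +12 "${SOTTOCARTELLA}/file" | head -n -1   # sed '1,11d;$d': also drop the last line (GNU head)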
Here is my code:
ls | grep -E '^application--[0-9]{4}-[0-9]{2}.tar.gz$' | awk '{if($1<"application--'"${CLEAR_DATE_LEVEL0}"'.tar.gz") print $1}' | xargs -r echo
ls | grep -E '^application--[0-9]{4}-[0-9]{2}.tar.gz$' | awk '{if($1<"application--'"${CLEAR_DATE_LEVEL0}"'.tar.gz") print $1}' | xargs -r rm
As you can see it will get a list of files, show it on screen (for logging purposes) and then delete it.
The issue is that if a file is created between the execution of the first and the second line, I will delete a file without ever logging that fact.
Is there a way to create a script that will read the same pipe twice, so the awk result will be piped to both xargs echo and xargs rm commands?
I know I can use a file as a temporary buffer, but I would like to avoid that.
You can change your command to something like
touch example
ls example* | tee >(xargs rm)
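Applied to the pipeline from the question, that would look something like the following sketch: tee's copy to stdout serves as the on-screen log, while the process substitution does the deleting.
ls | grep -E '^application--[0-9]{4}-[0-9]{2}.tar.gz$' \
    | awk '{if($1<"application--'"${CLEAR_DATE_LEVEL0}"'.tar.gz") print $1}' \
    | tee >(xargs -r rm)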
I would prefer to avoid parsing ls:
while IFS= read -r file; do
    if [[ "$file" < "application--${CLEAR_DATE_LEVEL0}.tar.gz" ]]; then
        echo "Removing ${file}"
        rm "${file}"
    fi
done < <(find . -regextype egrep -regex "./application--[0-9]{4}-[0-9]{2}.tar.gz")
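If the file names could ever contain newlines, a NUL-delimited variant of the same loop is more robust (a sketch; -print0 is a GNU find extension):
while IFS= read -r -d '' file; do
    echo "Removing ${file}"
    rm "${file}"
done < <(find . -regextype egrep -regex "./application--[0-9]{4}-[0-9]{2}.tar.gz" -print0)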
EDIT: An improvement:
As @tripleee mentioned in their answer, using rm -v avoids the additional echo and will also avoid echoing when removing a file fails.
For your specific case, you don't need to read the pipe twice; you can just use rm -v to have rm itself also "echo" each file.
Also, in cases like this, it is better for shell scripts to use globs instead of grep ..., both for robustness and performance reasons.
And once you do that, even better: you can loop over the glob and not go through any pipes at all. That is even more robust in the general case, because there are fewer places to worry "could a character in this be special to that program?", and it might perform better because everything stays in one process:
for file in application--[0-9][0-9][0-9][0-9]-[0-9][0-9].tar.gz
do
    if [[ "$file" < "application--${CLEAR_DATE_LEVEL0}.tar.gz" ]]
    then
        # echo "$file"
        # rm "$file"
        rm -v "$file"
    fi
done
But if you find yourself in a situation where you really do need to get data from a pipe and a glob won't work, there are a couple ways:
One neat trick in the shell is that loops and other compound commands can sit in a pipeline - so a loop can read from a pipe, and the inside of the loop can have all the commands you wanted to feed from the pipe:
ls ... | awk ... | while IFS="" read -r file
do
    # echo "$file"
    # rm "$file"
    rm -v "$file"
done
(As a general best practice, you'd want to set IFS to the empty string for the read command so that read doesn't split the input on characters like spaces, and give read the -r argument to tell it not to interpret special characters like backslashes. In your specific case it doesn't matter.)
But if a loop doesn't work for what you need, then in the general case, you can catch the result of a pipe in a shell variable:
pipe_contents="$(ls application--[0-9][0-9][0-9][0-9]-[0-9][0-9].tar.gz | awk '{if($1<"application--'"${CLEAR_DATE_LEVEL0}"'.tar.gz") print $1}')"
echo "$pipe_contents"
rm $pipe_contents
(This works fine unless your pipe output contains characters that would be special to the shell at the point that the pipe output has to be unquoted - in this case, it needs to be unquoted for the rm, because if it's quoted then the shell won't split the captured pipe output on whitespace, and rm will end up looking for one big file name that looks like the entire pipe output. Part of why looping on a glob is more robust is that it doesn't have these kinds of problems: the pipe combines all file names into one big text that needs to be re-split on whitespace. Luckily in your case, your file names don't have whitespace nor globbing characters, so leaving the pipe output unquoted ends up being fine.)
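A tiny illustration of that trade-off, using a hypothetical file name containing a space:
pipe_contents='two words.tar.gz'
rm "$pipe_contents"   # one argument: the single file 'two words.tar.gz'
rm $pipe_contents     # two arguments: 'two' and 'words.tar.gz'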
Also, since you're using bash and your pipe data is multiple separate things, you can use an array variable (bash extension, also found in shells like zsh) instead of a regular variable:
files=($(ls application--[0-9][0-9][0-9][0-9]-[0-9][0-9].tar.gz | awk '{if($1<"application--'"${CLEAR_DATE_LEVEL0}"'.tar.gz") print $1}'))
echo "${files[@]}"
rm "${files[@]}"
(Note that an unquoted expansion still happens with the array, it just happens when defining the array instead of when passing the pipe contents to rm. A small advantage is that if you had multiple commands which needed the unquoted contents, using an array does the splitting only once. A big advantage is that once you recognize array syntax, it does a better job of expressing your big-picture intent through the code itself.)
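Bash's mapfile builtin (also known as readarray) offers yet another way to capture newline-delimited pipe output into an array, skipping the unquoted-expansion step entirely; a quick sketch:
mapfile -t files < <(ls application--[0-9][0-9][0-9][0-9]-[0-9][0-9].tar.gz \
    | awk '{if($1<"application--'"${CLEAR_DATE_LEVEL0}"'.tar.gz") print $1}')
echo "${files[@]}"
rm "${files[@]}"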
You can also use a temporary file instead of a shell variable, but you said you want to avoid that. I also prefer a variable when the data fits in memory because Linux/UNIX does not give shell scripts a reliable way to clean up external resources (you can use trap but for example traps can't run on uncatchable signals).
P.S. ideally, as a general habit, you should use printf '%s\n' "$foo" instead of echo "$foo", because echo has various special cases (and portability inconsistencies, though those matter less if you always use bash, until you need to care about portable sh). In modern featureful shells like bash, you can also use %q instead of %s in printf, which is great because, for example, printf '%q\n' "${files[@]}" will print each file with any special characters properly quoted or escaped, which can help with debugging if you are ever dealing with files that have special whitespace or globbing characters in them.
No, a pipe is a stream - once you read something from it, it is forever gone from the pipe.
A good general solution is to use a temporary file; this lets you rewind and replay it. Just take care to remove it when you're done.
temp=$(mktemp -t) || exit
trap 'rm -f "$temp"' ERR EXIT
cat >"$temp"
cat "$temp"
xargs rm <"$temp"
The ERR and EXIT pseudo-signals are Bash extensions. For POSIX portability, you need a somewhat more involved set of trap commands.
Properly speaking, mktemp should receive an argument which is used as a template for the temporary file's name, so that the user can see which temporary file belongs to which tool. For example, if this script was called rmsponge, you could use mktemp rmspongeXXXXXXXXX to have mktemp generate a temporary file name which begins with rmsponge.
If you only expect a limited amount of input, perhaps just capture the input in a variable. However, this scales poorly, and could have rather unfortunate problems if the input data exceeds available memory:
# XXX avoid: scales poorly
values=$(cat)
xargs printf "%s\n" <<<"$values"
xargs rm <<<"$values"
The <<< "here string" syntax is also a Bash extension. This also suffers from the various issues described at https://mywiki.wooledge.org/BashFAQ/020, but those are inherent in your problem statement.
Of course, in this individual case, just use rm -v to see which files rm removes.
I am having trouble in Bash looping through a text file of ~20k lines.
Here is my (minimised) code:
LINE_NB=0
while IFS= read -r LINE; do
    LINE_NB=$((LINE_NB+1))
    CMD=$(sed "s/\([^ ]*\) .*/\1/" <<< ${LINE})
    echo "[${LINE_NB}] ${LINE}: CMD='${CMD}'"
done <"${FILE}"
The while loop ends prematurely after a few hundreds iterations. However, the loop works correctly if I remove the CMD=$(sed...) part. So, evidently, there is some interference I cannot spot.
As I read here, I also tried:
LINE_NB=0
while IFS= read -r -u4 LINE; do
    LINE_NB=$((LINE_NB+1))
    CMD=$(sed "s/\([^ ]*\) .*/\1/" <<< ${LINE})
    echo "[${LINE_NB}] ${LINE}: CMD='${CMD}'"
done 4<"${FILE}"
but nothing changes. Any explanation for this behaviour, and help on how I can solve it?
Thanks!
To clarify the situation for user1934428 (thanks for your interest!), I now have created a minimal script and added "set -x". The full script is as follows:
#!/usr/bin/env bash
set -x
FILE="$1"
LINE_NB=0
while IFS= read -u "$file_fd" -r LINE; do
    LINE_NB=$((LINE_NB+1))
    CMD=$(sed "s/\([^ ]*\) .*/\1/" <<< "${LINE}")
    echo "[${LINE_NB}] ${LINE}: CMD='${CMD}'" #, TIME='${TIME}' "
done {file_fd}<"${FILE}"
echo "Done."
The input file is a list of ~20k lines of the form:
S1 0.018206
L1 0.018966
F1 0.006833
S2 0.004212
L2 0.008005
I8R190 18.3791
I4R349 18.5935
...
The while loop ends prematurely at (seemingly) random points. One possible output is:
+ FILE=20k/ir-collapsed.txt
+ LINE_NB=0
+ IFS=
+ read -u 10 -r LINE
+ LINE_NB=1
++ sed 's/\([^ ]*\) .*/\1/'
+ CMD=S1
+ echo '[1] S1 0.018206: CMD='\''S1'\'''
[1] S1 0.018206: CMD='S1'
+ echo '[6510] S1514 0.185504: CMD='\''S1514'\'''
...[snip]...
[6510] S1514 0.185504: CMD='S1514'
+ IFS=
+ read -u 10 -r LINE
+ echo Done.
Done.
As you can see, the loop ends prematurely after line 6510, while the input file is ~20k lines long.
Yes, making a stable file copy is a best start.
Learning awk and/or perl is still well worth your time. It's not as hard as it looks. :)
Aside from that, a couple of optimizations - try to never run any program inside a loop when you can avoid it. For a 20k line file, that's 20k invocations of sed, which really adds up unnecessarily. Instead you could just use parameter expansion for this one.
# don't use all caps.
# cmd=$(sed "s/\([^ ]*\) .*/\1/" <<< "${line}") becomes
cmd="${cmd%% *}" # strip everything from the first space
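For instance, a quick sketch with one sample line from the file:
line='S1 0.018206'
cmd=${line%% *}   # strips from the first space onward, leaving 'S1'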
Using read to handle that is even better, since you were already using it anyway - but don't spawn another process if you can avoid it. As much as I love it, read is pretty inefficient; it has to do a lot of fiddling to handle all its options.
while read -r -u "$file_fd" cmd timeval; do
    echo "[$((++line_nb))] CMD='${cmd}' TIME='${timeval}'"
done {file_fd}<"${file}"
or
while read -u "$file_fd" -r -a tok; do
    echo "[$((++line_nb))] LINE='${tok[*]}' CMD='${tok[0]}' TIME='${tok[1]}'"
done {file_fd}<"${file}"
(This will sort of rebuild the line, but if there were tabs or extra spaces, etc, it will only pad with the 1st char of $IFS, which is a space by default. Shouldn't matter here.)
awk would have made short work of this, though, and been a lot faster, with better tools already built in.
awk '{printf "NR=[%d] LINE=[%s] CMD=[%s] TIME=[%s]\n",NR,$0,$1,$2 }' 20k/ir-collapsed.txt
Run some time comparisons - with and without the sed, with one read vs two, and then compare each against the awk. :)
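Something as simple as the following works for a rough comparison (the loop script names here are hypothetical stand-ins for the variants discussed above):
time ./loop-with-sed.sh 20k/ir-collapsed.txt > /dev/null
time ./loop-read-only.sh 20k/ir-collapsed.txt > /dev/null
time awk '{printf "NR=[%d] LINE=[%s] CMD=[%s] TIME=[%s]\n",NR,$0,$1,$2 }' 20k/ir-collapsed.txt > /dev/null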
The more things you have to do with each line, and the more lines there are in the file, the more it will matter. Make it a habit to do even small things as neatly as you can - it will pay off well in the long run.
I have some text files $f resembling the following
function
%blah
%blah

%blah
code here
I want to append the following text before the first empty line:
%
%This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike
%3.0 Unported License. See notes at the end of this file for more information.
I tried the following:
top=$(cat ./PATH/text.txt)
top="${top//$'\n'/\\n}"
sed -i.bak 's#^$#'"$top"'\\n#' $f
where the second line (I think) preserves the new line in the text and the third line (I think) substitutes the first empty line with the text plus a new empty line.
Two problems:
1- My code appends the following text:
%n%This work is licensed under the Creative Commons
Attribution-NonCommercial-ShareAlike n%3.0 Unported License. See notes
at the end of this file for more information.\n
2- It appends it at the end of the file.
Can someone please help me understand the problems with my code?
If you are using GNU sed, the following would work.
Use ^$ to find the empty line and then use sed to replace/put the text that you want.
# Define your replacement text in a variable
a="%\n%This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike\n%3.0 Unported License. See notes at the end of this file for more information."
Note that $a should include those \n sequences, which sed will interpret directly as newlines.
$ sed "0,/^$/s//$a/" inputfile.txt
In the above syntax, the 0 in the 0,/^$/ range makes the substitution apply only to the first occurrence of an empty line (the 0 address is a GNU extension).
Output:
function
%blah
%blah
%
%This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike
%3.0 Unported License. See notes at the end of this file for more information.
%blah
code here
You've included bash and sed tags in your question. Since I can't seem to come up with a way of doing this in sed, here's a bash-only solution. It's likely to perform the worst of all working solutions you might find.
The following works with your sample input:
$ while read -r x; do [[ -z "$x" ]] && cat boilerplate; printf '%s\n' "$x"; done < src
This will however insert the boilerplate before EVERY blank line, which is probably not what you're after. Instead, we should probably make this more than a one-liner:
#!/usr/bin/env bash
y=true
while read -r x; do
    if [[ -z "$x" ]] && $y; then
        cat boilerplate
        y=false
    fi
    printf '%s\n' "$x"
done < src
Note that unlike the code in your question, this doesn't store your boilerplate in a variable, it just cats it "at the right time".
Note that this sends the combined output to stdout. If your goal is to modify the original file, you'll need to wrap this in something that moves around temporary files. (Note that sed's -i option also doesn't really edit files in place, it only hides the moving-around-temp-files from you.)
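A minimal sketch of that wrapping, assuming the loop above is saved as insert-boilerplate.sh (a hypothetical name):
tmp=$(mktemp) || exit
trap 'rm -f "$tmp"' EXIT
./insert-boilerplate.sh < src > "$tmp" && mv "$tmp" src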
The following alternatives are probably a better idea.
A similar solution to the bash one might be achieved with better performance using awk:
awk 'NR==FNR{b=b $0 ORS;next} /^$/&&!y{printf "%s",b;y++} 1' boilerplate src
This awk solution obviously reads your boilerplate into a variable, though it's not a shell variable.
Notwithstanding non-standard platform-specific extensions, awk does not have any facility for editing files "in place" either. A portable solution using awk would still need to push temp files around.
And of course, the following old standard of ed is great to keep in your back pocket:
printf 'H\n/^$/\n-\n.r boilerplate\nw\nq\n' | ed src
In bash, of course, you could always use heretext, which might be clearer:
$ ed src <<< $'H\n/^$/\n-\n.r boilerplate\nw\nq\n'
The ed command is the non-stream version of sed. Or rather, sed is the stream version of ed, which has been around since before the dinosaurs and is still going strong.
The commands we're using are separated by newlines and fed to ed's standard input. You can discard stdout if you feel the urge. The commands shown here are:
H - instruct ed to print more useful errors, if it gets any.
/^$/ - search for the first occurrence of a blank line.
- - GO BACK ONE LINE. Awesome, right?
.r boilerplate - Read your boilerplate at the current line,
w - and write the file.
q - Quit.
Note that this does not keep a .bak file. You'll need to do that yourself if you really want one.
And if, as you suggested in comments, the filename you're reading is to be constructed from a variable, note that variable expansion does not happen inside format quoting ($' .. '). You can either switch quoting mechanisms mid-script:
ed "$file" <<< $'H\n/^$/\n-\n.r ./TATTOO_'"$currn"$'/top.txt\nw\nq\n'
Or you could put the ed script in a variable constructed with printf:
printf -v scr 'H\n/^$/\n-\n.r ./TATTOO_%s/top.txt\nw\nq\n' "$currn"
ed "$file" <<< "$scr"
Adding the text to a variable so you can interpolate the variable is wasteful and an unnecessary complication. sed can easily read the contents of a file by itself.
sed -i.bak '1r./PATH/text.txt' "$f"
Unfortunately, this part of sed is poorly standardized, so you may have to experiment a little bit. Some dialects require a newline (perhaps, or perhaps not, preceded by a backslash) before the filename.
sed -i.bak '1r\
./PATH/text.txt' "$f"
(Notice also the double quotes around the file name. You generally always want double quotes around variables which contain file names.)
Adapting the recipe from here we can extend this to apply to the first empty line instead of the first line.
sed -i.bak -e '/^$/!b' -e 'r./PATH/text.txt' -e :a -e '$!{' -e n -e ba -e } "$f"
This adds the boilerplate after the first empty line but perhaps that's acceptable. Refactoring it to replace it or add an empty line after should not be too challenging anyway. (Maybe use sed -n and instead explicitly print everything except the empty line.)
In brief terms, this skips to the end (simply prints) up until we find the first empty line. Then, we read and print the file, and go into a loop which prints the remainder of the file without returning to the beginning of the script.
Here is a sed approach that I think works. It uses a shell variable for the extra bit to be inserted.
b='##\n## comment piece\n##'
sed --posix -ne '
1,/^$/ {
/^$/ {
x;
/^true$/ !{
x
s/^$/true/
i\
'"$b"'
};
x;
s/^.*$//
}
}
p
' file1
With the examples using ranges of 1,/^$/, an empty first line would result in the disclaimer being printed twice. To avoid this, I've set it up to put a flag in the hold space (x; s/^$/true/) that I can swap into the pattern space to check whether it's the first blank line. Once there's a match for a blank line, i\ inserts the comment ($b) in front of the pattern space.
Thanks to ghoti for the initial plan.
I'm trying to use enscript to print PDFs from Mutt, and hitting character encoding issues. One way around them seems to be to just use sed to replace the problem characters: sed -ir 's/[“”]/"/g' {input}
My test input file is this:
“very dirty”
we’re
I'm hoping to get "very dirty" and we're but instead I'm still getting
â\200\234very dirtyâ\200\235
weâ\200\231re
I found a nice little post on printing to PDFs from Mutt that I used as a starting point. I have a bash script that I point to from my .muttrc with set print_command="$HOME/.mutt/print.sh" -- the script currently reads about like this:
#!/bin/bash
input="$1" pdir="$HOME/Desktop" open_pdf=evince
# Straighten out curly quotes
sed -ir 's/[“”]/"/g' $input
sed -ir "s/[’]/'/g" $input
tmpfile="`mktemp $pdir/mutt_XXXXXXXX.pdf`"
enscript --font=Courier8 $input -2r --word-wrap --fancy-header=mutt -p - 2>/dev/null | ps2pdf - $tmpfile
$open_pdf $tmpfile >/dev/null 2>&1 &
sleep 1
rm $tmpfile
It does a fine job of creating a PDF (and works fine if you give it a file as an argument) but I can't figure out how to fix the curly quotes.
I've tried a bunch of variations on the sed line:
input=sed -r 's/[“”]/"/g' $input
$input=sed -ir "s/[’]/'/g" $input
Per the suggestion at Can I use sed to manipulate a variable in bash? I also tried input=$(sed -r 's/[“”]/"/g' <<< $input) and I get an error: "Syntax error: redirection unexpected"
But none manages to actually change $input -- what is the correct syntax to change $input with sed?
Note: I accepted an answer that resolved the question I asked, but as you can see from the comments there are a couple of other issues here. enscript is taking in a whole file as a variable, not just the text of the file. So trying to tweak the text inside the file is going to take a few extra steps. I'm still learning.
On Editing Variables In General
BashFAQ #21 is a comprehensive reference on performing search-and-replace operations in bash, including within variables, and is thus recommended reading. On this particular case:
Use the shell's native string manipulation instead; this is far higher performance than forking off a subshell, launching an external process inside it, and reading that external process's output. BashFAQ #100 covers this topic in detail, and is well worth reading.
Depending on your version of bash and configured locale, it might be possible to use a bracket expression (i.e. [“”], as your original code did). However, the most portable thing is to treat “ and ” separately, which will work even without multi-byte character support available.
input='“hello ’cruel’ world”'
input=${input//'“'/'"'}
input=${input//'”'/'"'}
input=${input//'’'/"'"}
printf '%s\n' "$input"
...correctly outputs:
"hello 'cruel' world"
On Using sed
To provide a literal answer -- you almost had a working sed-based approach in your question.
input=$(sed -r 's/[“”]/"/g' <<<"$input")
...adds the missing syntactic double quotes around the parameter expansion of $input, ensuring that it's treated as a single token regardless of how it might be string-split or glob-expanded.
But All That May Not Help...
The below is mentioned because your test script is manipulating content passed on the command line; if that's not the case in production, you can probably disregard the below.
If your script is invoked as ./yourscript “hello * ’cruel’ * world”, then information about exactly what the user entered is lost before the script is started, and nothing you can do here will fix that.
This is because $1, in that scenario, will only contain “hello; ’cruel’ and world” are in their own argv locations, and the *s will have been replaced with lists of files in the current directory (each such file substituted as a separate argument) before the script was even started. Because the shell responsible for parsing the user's command line (which is not the same shell running your script!) did not recognize the quotes as valid at the time when it ran this parsing, by the time the script is running, there's nothing you can do to recover the original data.
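You can see this for yourself with a stub script that just prints its argv, one argument per line; a sketch:
#!/usr/bin/env bash
# Hypothetical debugging stub: show exactly what arguments the script received
printf '<%s>\n' "$@"
Invoked as ./yourscript “hello * ’cruel’ * world”, it would print “hello, each file matched by the globs, ’cruel’, and world” as separate bracketed lines, showing that the original quoting is already gone by the time the script runs.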
Abstract: The way to use sed to change a variable is explored, but what you really need is a way to use and edit a file. It is covered ahead.
Sed
The two sed lines could be replaced with this (note that -i is not used, since the input is not a file but a value):
input='“very dirty”
we’re'
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
But it should be faster (for small strings) to use the internals of the shell:
input='“very dirty”
we’re'
input=${input//[“”]/\"}
input=${input//[’]/\'}
printf '%s\n' "$input"
$1
But there is an underlying problem with your script, you are trying to clean an input received from the command line. You are using $1 as the source of the string. Once somebody writes:
./script “very dirty”
we’re
That input is lost. It is broken into the shell's tokens, and "$1" will be “very only.
But I do not believe that is what you really have.
file
However, you are also saying that the input comes from a file. If that is the case, then read it in with:
input="$(<infile)" # not $1
sed 's/[“”]/\"/g;s/’/'\''/g' <<<"$input"
Or, if you don't mind to edit (change) the file, do this instead:
sed -i 's/[“”]/\"/g;s/’/'\''/g' infile
input="$(<infile)"
Or, if you are clear and certain that what is being given to the script is a filename, like:
./script infile
You can use:
infile="$1"
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
input="$(<"$infile")"
Other comments:
Quote your variables.
Do not use the very old `…` syntax, use $(…) instead.
Do not use variables in UPPER case, those are reserved for environment variables.
And (unless you actually meant sh) use a shebang (first line) that targets bash.
The command enscript most definitely requires a file, not a variable.
Maybe you should use evince to open the PS file; there is no need for the step that makes a PDF, unless you know you really need it.
I believe it is better to use a file to store the output of enscript and ps2pdf.
Do not hide the errors printed by the commands until everything is working as desired; then just call the script as:
./script infile 2>/dev/null
Or as required to make it less verbose.
Final script.
If you call the script with the name of the file that enscript is going to use, something like:
./script infile
Then, the whole script will look like this (it runs in both bash and sh):
#!/usr/bin/env bash
Usage(){ echo "$0; This script requires a source file"; exit 1; }
[ $# -lt 1 ] && Usage
[ ! -e "$1" ] && Usage
infile="$1"
pdir="$HOME/Desktop"
open_pdf=evince
# Straighten out curly quotes
sed -i 's/[“”]/\"/g;s/’/'\''/g' "$infile"
tmpfile="$(mktemp "$pdir"/mutt_XXXXXXXX.pdf)"
outfile="${tmpfile%.*}.ps"
enscript --font=Courier10 "$infile" -2r \
--word-wrap --fancy-header=mutt -p "$outfile"
ps2pdf "$outfile" "$tmpfile"
"$open_pdf" "$tmpfile" >/dev/null 2>&1 &
sleep 5
rm "$tmpfile" "$outfile"
I have a file which has very long rows of data. When I try to read it using a shell script, the data comes out on multiple lines, i.e., it breaks at certain points.
Example row:
B_18453583||Active|917396140129|405819121107402|Active|7396140129||7396140129|||||||||18-MAY-10|||||18-MAY-10|405819121107402|Outgoing International Calls,Outgoing Calls,WAP,Call Waiting,MMS,Data Service,National Roaming-Voice,Outgoing International Calls except home country,Conference Call,STD,Call Forwarding-Barr,CLIP,Incoming Calls,INTSNS,WAPSNS,International Roaming-Voice,ISD,Incoming Calls When Roaming Internationally,INTERNET||For You Plan||||||||||||||||||
All this is the content of a single line.
I use a normal read like this:
var=`cat pranay.psv`
for i in $var; do
    echo $i
done
The output comes as:
B_18453583||Active|917396140129|405819121107402|Active|7396140129||7396140129|||||||||18- MAY-10|||||18-MAY-10|405819121107402|Outgoing
International
Calls,Outgoing
Calls,WAP,Call
Waiting,MMS,Data
Service,National
Roaming-Voice,Outgoing
International
Calls
except
home
country,Conference
Call,STD,Call
Forwarding-Barr,CLIP,Incoming
Calls,INTSNS,WAPSNS,International
Roaming-Voice,ISD,Incoming
Calls
When
Roaming
Internationally,INTERNET||For
You
Plan||||||||||||||||||
How do I print it all on a single line?
Please help.
Thanks
This is because of word splitting. An easier way to do this (which also does away with the useless use of cat) is this:
while IFS= read -r -d $'\n' -u 9
do
    echo "$REPLY"
done 9< pranay.psv
To explain in detail:
$'...' can be used to create human readable strings with escape sequences. See man bash.
IFS= is necessary to avoid that any characters in IFS are stripped from the start and end of $REPLY.
-r avoids interpreting backslash in text specially.
-d $'\n' splits lines by the newline character.
Use file descriptor 9 for data storage instead of standard input to avoid greedy commands like cat eating all of it.
You need proper quoting. In your case, you should use the command read:
while IFS= read -r line; do
    echo "$line"
done < pranay.psv
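The difference quoting makes is easy to see with a short sketch:
var='a b   c'
echo $var     # unquoted: split into words and rejoined with single spaces -> a b c
echo "$var"   # quoted: original spacing preserved -> a b   c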