I have a file parse.txt
parse.txt contains the following
remo/hello/1.0,remo/hello2/2.0,remo/hello3/3.0,whitney/hello/1.0,julie/hello/2.0,julie/hello/3.0
and I want output.txt to contain the fields in reverse order (from last to first), generated from parse.txt:
julie/hello/3.0,julie/hello/2.0,whitney/hello/1.0,remo/hello3/3.0,remo/hello2/2.0,remo/hello/1.0
I have tried the following code:
tail -r parse.txt
You can use the surprisingly helpful tac from GNU Coreutils.
tac -s "," parse.txt > newparse.txt
tac by default will "cat" the file to standard out, reversing the lines. By specifying the separator using the -s flag, you can simply reverse your fields as desired.
(You may need to do a post-processing step to get the commas to work out correctly, which can be another step in your pipeline.)
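For example, one possible post-processing pipeline (a sketch assuming GNU sed; tac leaves the file's original trailing newline in the middle of the reversed output):
tac -s ',' parse.txt | tr '\n' ',' | sed 's/,$/\n/' > output.txt
Here tr folds the stray newline into a comma, and sed turns the now-trailing comma back into a final newline.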
I like the tac solution; it's tight and elegant, but as Micah pointed out, tac is part of GNU Coreutils, which means that it's not available by default on FreeBSD, OS X, Solaris, etc.
This can be done in pure bash, no external tools required.
#!/usr/bin/env bash
unset comma
read foo < parse.txt
bar=(${foo//,/ })
for (( count="${#bar[@]}"; --count >= 0; )); do
printf "%s%s" "$comma" "${bar[$count]}"
comma=","
done
This obviously only handles one line, per your sample input. You can wrap it in something if you need to handle multiple lines of input.
The logic here is that we can convert the input into an array by replacing commas with spaces. Of course, if our input data included spaces, this would have to be adjusted. Once we have the array, we simply step backwards through it, printing each record.
Note that this does not include a terminating newline. If you want one, you can add it with:
printf '\n'
as a final line.
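If you need to handle several lines, or fields that might contain spaces (the two caveats above), a minimal sketch using read -a (so the shell splits on commas only) could look like this:
while IFS=, read -r -a bar; do
    comma=
    for (( count=${#bar[@]}; --count >= 0; )); do
        printf '%s%s' "$comma" "${bar[count]}"
        comma=,
    done
    printf '\n'
done < parse.txt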
perl -F, -lane 'print join ",", reverse @F' parse.txt > output.txt
You can use this awk command:
awk -v RS=, '{a[++i]=$1} END{for (k=i; k>=1; k--) printf a[k] (k>1?RS:ORS)}' parse.txt
julie/hello/3.0,julie/hello/2.0,whitney/hello/1.0,remo/hello3/3.0,remo/hello2/2.0,remo/hello/1.0
The question is tagged unix and you mention tail -r, which suggests you might not be using Linux (with the full GNU toolchain) but some "real" Unix (a BSD variant), e.g. OS X.
As such, the tac command is not available, but, as mentioned in the question, tail -r is. So you can use the following:
$ tr ',' '\n' < parse.txt | tail -r | tr '\n' ',' | sed 's/,$//'
julie/hello/3.0,julie/hello/2.0,whitney/hello/1.0,remo/hello3/3.0,remo/hello2/2.0,remo/hello/1.0
$
Notes:
This only works for files that have one line, as we rely on converting commas to newlines and back. If there is more than one line, the newlines in between will get converted to commas by the second tr (see the per-line sketch after these notes).
The final sed removes the trailing comma that the second tr converted from the file's trailing newline.
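If you do have to handle several lines, one hedged workaround (assuming BSD tail -r and a paste that accepts - for standard input) is to apply the pipeline one line at a time:
while IFS= read -r line; do
    printf '%s\n' "$line" | tr ',' '\n' | tail -r | paste -s -d, -
done < parse.txt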
Emulating tac with sed:
tr , '\n' <parse.txt | sed '1!G; h; $!d' | paste -sd ,
Alternatively, if you don't have paste:
tr , '\n' <parse.txt | sed '1!G; h; $!d' | tr '\n' , | sed 's/,$//'
Output:
julie/hello/3.0,julie/hello/2.0,whitney/hello/1.0,remo/hello3/3.0,remo/hello2/2.0,remo/hello/1.0
You can do this in any scripting language; for example, with Ruby:
xargs ruby -e "puts ARGV[0].split(',').reverse.join(',')" < parse.txt
Reversing can be done with tac (cat spelled backwards). As noted in the comments, by itself this reverses the lines, which is not what the OP asked for.
tac filename
You can still use tac if you feed it one line at a time and reverse on the field separator (here ,) rather than on the newline delimiter:
echo "a,b,c" | tr '\n' ',' | tac -s "," | sed 's/,$/\n/'
I have one file suffix.txt which contains some strings, one per line, for example:
ing
ness
es
ed
tion
Also, I have a text file text.txt which contains some text;
it is given that text.txt consists only of lowercase letters, without any punctuation, for example:
the raining cloud answered the man all his interrogation and with all
questioned mind the princess responded
harness all goodness without getting irritated
I want to remove the suffixes from the original words in text.txt, only once for every suffix. Thus I expect the following output:
the rain cloud answer the man all his interroga and with all
question mind the princess respond
har all good without gett irritat
Note that tion was not removed from questioned since the original word didn't contain tion as a suffix. It would be really helpful if someone could answer this with sed commands.
I was using a naive script that doesn't seem to do the job:
#!/bin/bash
while read p; do
sed -i "s/$p / /g" text.txt;
sed -i "s/$p$//g" text.txt;
done <suffix.txt
An awk:
$ awk '
NR==FNR { # generate a regex of suffixes
s=s (s==""?"(":"|") $0 # (ing|ness|es|ed|tion)$
next
}
FNR==1 {
s=s ")$" # well, above )$ is inserted here
}
{
for(i=1;i<=NF;i++) # iterate all the words and
sub(s,"",$i) # apply regex to each of them
}1' suffix.txt text.txt # output
Output:
the rain cloud answer the man all his interroga and with all
question mind the princess respond
har all good without gett irritat
Kinda hairy but sed and unix tools only:
sed -E -f <(tr '\n' '|' <suffix.txt | sed 's/\|$//; s/\|/\\\\b|/g; s/$/\\\\b/' | xargs printf 's/%s//g') text.txt
The
tr '\n' '|' <suffix.txt | sed 's/\|$//; s/\|/\\\\b|/g; s/$/\\\\b/' | xargs printf 's/%s//g'
generates the substitution script of
s/ing\b|ness\b|es\b|ed\b|tion\b//g
This requires GNU sed for \b.
It would be easier with perl, ruby, awk, etc.
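For instance, a hedged perl sketch along those lines (it builds one alternation from suffix.txt; the lookbehind assumes a suffix never makes up a whole word by itself, and that the suffixes contain no regex metacharacters):
perl -pe 'BEGIN { open my $fh, "<", "suffix.txt" or die; chomp(@s = <$fh>); $re = join "|", @s } s/(?<=\w)(?:$re)\b//g' text.txt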
Here is a GNU awk version:
gawk -i join 'FNR==NR {arr[FNR]=$1; next}
FNR==1{re=join(arr,1,length(arr),"\\>|"); re=re "\\>"}
{gsub(re,"")}
1
' suffix.txt text.txt
Both produce:
the rain cloud answer the man all his interroga and with all
question mind the princess respond
har all good without gett irritat
This might work for you (GNU sed):
sed -z 'y/\n/|/;s/|$//;s#.*#s/\\B(&)\\b//g#' suffixFile | sed -Ef - textFile
Convert suffixFile into a sed script and pipe it to a second invocation of sed, which applies it to textFile.
N.B. The generated sed command uses \B and \b to match each suffix only at the end of a word.
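For the example suffix file, the script generated by the first sed is:
s/\B(ing|ness|es|ed|tion)\b//g
which the second invocation (sed -Ef -) reads from its standard input and applies to textFile.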
I have a file as shown below:
1.2.3.4.ask
sanma.nam.sam
c.d.b.test
I want to remove the last field from each line; the delimiter is . and the number of fields is not constant.
Can anybody help me with an awk or sed solution? I can't use perl here.
Both these sed and awk solutions work regardless of the number of fields.
Using sed:
$ sed -r 's/(.*)\..*/\1/' file
1.2.3.4
sanma.nam
c.d.b
Note: -r is the flag for extended regexps; on some systems it is -E, so check man sed. If your version of sed doesn't have such a flag, just escape the parentheses:
sed 's/\(.*\)\..*/\1/' file
1.2.3.4
sanma.nam
c.d.b
The sed solution does a greedy match up to the last . and captures everything before it; it then replaces the whole line with only the captured part (the first n-1 fields). Use the -i option if you want the changes stored back to the file.
Using awk:
$ awk 'BEGIN{FS=OFS="."}{NF--; print}' file
1.2.3.4
sanma.nam
c.d.b
The awk solution simply prints n-1 fields; to store the changes back to the file, use redirection:
$ awk 'BEGIN{FS=OFS="."}{NF--; print}' file > tmp && mv tmp file
Reverse, cut, reverse back.
rev file | cut -d. -f2- | rev >newfile
Or, replace from last dot to end with nothing:
sed 's/\.[^.]*$//' file >newfile
The regex [^.] matches one character which is not dot (or newline). You need to exclude the dot because the repetition operator * is "greedy"; it will select the leftmost, longest possible match.
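A quick illustration of the difference:
$ echo '1.2.3.4.ask' | sed 's/\..*$//'
1
$ echo '1.2.3.4.ask' | sed 's/\.[^.]*$//'
1.2.3.4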
With cut on the reversed string:
cat yourFile | rev | cut -d "." -f 2- | rev
If you want to keep the trailing ".", use:
awk '{gsub(/[^\.]*$/,"");print}' your_file
How can I join multiple lines into one line, with a separator where the new-line characters were, and avoiding a trailing separator and, optionally, ignoring empty lines?
Example. Consider a text file, foo.txt, with three lines:
foo
bar
baz
The desired output is:
foo,bar,baz
The command I'm using now:
tr '\n' ',' <foo.txt |sed 's/,$//g'
Ideally it would be something like this:
cat foo.txt |join ,
What's:
the most portable, concise, readable way?
the most concise way using non-standard unix tools?
Of course I could write something, or just use an alias. But I'm interested to know the options.
Perhaps a little surprisingly, paste is a good way to do this:
paste -s -d","
This won't deal with the empty lines you mentioned. For that, pipe your text through grep, first:
grep -v '^$' | paste -s -d"," -
This sed one-liner should work:
sed -e :a -e 'N;s/\n/,/;ba' file
Test:
[jaypal:~/Temp] cat file
foo
bar
baz
[jaypal:~/Temp] sed -e :a -e 'N;s/\n/,/;ba' file
foo,bar,baz
To handle empty lines, you can remove the empty lines and pipe it to the above one-liner.
sed -e '/^$/d' file | sed -e :a -e 'N;s/\n/,/;ba'
How about using xargs?
For your case:
$ cat foo.txt | sed 's/$/, /' | xargs
Be careful about the length limit on xargs input. (This means a very long input file cannot be handled this way.)
Perl:
cat data.txt | perl -pe 'if(!eof){chomp;$_.=","}'
or yet shorter and faster, surprisingly:
cat data.txt | perl -pe 'if(!eof){s/\n/,/}'
or, if you want:
cat data.txt | perl -pe 's/\n/,/ unless eof'
Just for fun, here's an all-builtins solution:
IFS=$'\n' read -r -d '' -a data < foo.txt ; ( IFS=, ; echo "${data[*]}" ; )
You can use printf instead of echo if the trailing newline is a problem.
This works by setting IFS, the delimiters that read splits on, to just newline and not other whitespace, then telling read not to stop until it reaches a NUL (instead of the newline it usually uses) and to add each item read into the array (-a) data. Then, in a subshell, so as not to clobber the IFS of the interactive shell, we set IFS to , and expand the array with *, which delimits each item with the first character of IFS.
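The printf variant mentioned above might look like this (a sketch; printf '%s' omits the newline that echo would add):
IFS=$'\n' read -r -d '' -a data < foo.txt ; ( IFS=, ; printf '%s' "${data[*]}" ; )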
I needed to accomplish something similar, printing a comma-separated list of fields from a file, and was happy with piping STDOUT to xargs and ruby, like so:
cat data.txt | cut -f 16 -d ' ' | grep -o "\d\+" | xargs ruby -e "puts ARGV.join(', ')"
I had a log file where some data was broken into multiple lines. When this occurred, the last character of the first line was the semi-colon (;). I joined these lines by using the following commands:
for LINE in $(cat $FILE | tr -s " " "|")
do
if [ $(echo $LINE | egrep ";$") ]
then
echo "$LINE\c" | tr -s "|" " " >> $MYFILE
else
echo "$LINE" | tr -s "|" " " >> $MYFILE
fi
done
The result is a file where lines that were split in the log file were one line in my new file.
A simple way to join the lines with a space, in place, using ex (also ignoring blank lines):
ex +%j -cwq foo.txt
If you want to print the results to the standard output, try:
ex +%j +%p -scq! foo.txt
To join lines without spaces, use +%j! instead of +%j.
To use different delimiter, it's a bit more tricky:
ex +"g/^$/d" +"%s/\n/_/e" +%p -scq! foo.txt
where g/^$/d (or v/\S/d) removes blank lines and s/\n/_/ is a substitution that works basically like sed's, but applied to all lines (%). When that is done, the buffer is printed (%p). Finally, -cq! executes the vi command q!, which quits without saving (-s silences the output).
Please note that ex is equivalent to vi -e.
This method is quite portable, as most Linux/Unix systems ship with ex/vi by default. It's also more compatible than using sed, whose in-place parameter (-i) is not a standard extension; sed itself is more stream oriented and therefore less portable for this.
POSIX shell:
( set -- $(cat foo.txt) ; IFS=+ ; printf '%s\n' "$*" )
My answer is:
awk '{printf "%s%s", sep, $0; sep=","}' foo.txt
printf is enough; we don't need -F"\n" to change the field separator, and the sep variable avoids a stray leading comma.
How can I replace a column with its hash value (like MD5) in awk or sed?
The original file is super huge, so I need this to be really efficient.
So, you don't really want to be doing this with awk. Any of the popular high-level scripting languages -- Perl, Python, Ruby, etc. -- would do this in a way that was simpler and more robust. Having said that, something like this will work.
Given input like this:
this is a test
(i.e., a row with four columns), we can replace a given column with its md5 checksum like this:
awk '{
tmp="echo " $2 " | openssl md5 | cut -f2 -d\" \""
tmp | getline cksum
$2=cksum
print
}' < sample
This relies on awk's ability to pipe into getline (present in GNU awk, which you'll probably have by default on a Linux system) and uses openssl to generate the md5 checksum. We first build a shell command line in tmp that passes the selected column to the md5 command. Then we pipe its output into the cksum variable and replace column 2 with the checksum. Given the sample input above, the output of this awk script would be:
this 7e1b6dbfa824d5d114e96981cededd00 a test
I copy-pasted larsks's response, but added the close(tmp) line to avoid the problem described in this post: gawk / awk: piping date to getline *sometimes* won't work
awk '{
tmp="echo " $2 " | openssl md5 | cut -f2 -d\" \""
tmp | getline cksum
close(tmp)
$2=cksum
print
}' < sample
This might work using Bash/GNU sed:
<<<"this is a test" sed -r 's/(\S+\s)(\S+)(.*)/echo "\1 $(md5sum <<<"\2") \3"/e;s/ - //'
this 7e1b6dbfa824d5d114e96981cededd00 a test
or a mostly sed solution:
<<<"this is a test" sed -r 'h;s/^\S+\s(\S+).*/md5sum <<<"\1"/e;G;s/^(\S+).*\n(\S+)\s\S+\s(.*)/\2 \1 \3/'
this 7e1b6dbfa824d5d114e96981cededd00 a test
This replaces the second field (is) of this is a test with its md5sum.
Explanation:
In the first: identify the columns and use back references as parameters in the Bash command, which is substituted and evaluated; then make a cosmetic change to lose the file name ("-", standing for standard input) that md5sum appends.
In the second: similar to the first, but the input string is hived off into the hold space; after the md5sum command is evaluated, G appends the hold space to the pattern space (below the md5sum result), and a final substitution rearranges the pieces to suit.
You can also do that with perl:
echo "aze qsd wxc" | perl -MDigest::MD5 -ne 'print "$1 ".Digest::MD5::md5_hex($2)." $3" if /([^ ]+) ([^ ]+) ([^ ]+)/'
aze 511e33b4b0fe4bf75aa3bbac63311e5a wxc
If you want to obfuscate a large amount of data, it might be faster than the sed and awk approaches, which need to fork an md5sum process for each line.
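A more general sketch of the same idea, hashing an arbitrary whitespace-separated column (column 2 here; the column index and the input.txt name are placeholders to adapt):
perl -MDigest::MD5=md5_hex -lane '$F[1] = md5_hex($F[1]); print "@F"' input.txt
Note that print "@F" rejoins the fields with single spaces, so runs of whitespace in the input are normalized.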
You might have a better time with read than awk, though I haven't done any benchmarking.
the input (scratch001.txt):
foo|bar|foobar|baz|bang|bazbang
baz|bang|bazbang|foo|bar|foobar
transformed using read:
while IFS="|" read -r one fish twofish red fishy bluefishy; do
twofish=`echo -n $twofish | md5sum | tr -d " -"`
echo "$one|$fish|$twofish|$red|$fishy|$bluefishy"
done < scratch001.txt
produces the output:
foo|bar|3858f62230ac3c915f300c664312c63f|baz|bang|bazbang
baz|bang|19e737ea1f14d36fc0a85fbe0c3e76f9|foo|bar|foobar
I have input.txt
1
2
3
4
5
and I need output.txt to contain:
1,2,3,4,5
How to do it?
Try this:
tr '\n' ',' < input.txt > output.txt
With sed, you could use:
sed -e 'H;${x;s/\n/,/g;s/^,//;p;};d'
The H appends the pattern space to the hold space (saving the current line in the hold space). The ${...} surrounds actions that apply to the last line only. Those actions are: x swap hold and pattern space; s/\n/,/g substitute embedded newlines with commas; s/^,// delete the leading comma (there's a newline at the start of the hold space); and p print. The d deletes the pattern space - no printing.
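A quick check with the sample numbers:
$ printf '%s\n' 1 2 3 4 5 | sed -e 'H;${x;s/\n/,/g;s/^,//;p;};d'
1,2,3,4,5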
You could also use, therefore:
sed -n -e 'H;${x;s/\n/,/g;s/^,//;p;}'
The -n suppresses default printing so the final d is no longer needed.
This solution assumes that the CRLF line endings are the local native line ending (so you are working on DOS) and that sed will therefore generate the local native line ending in the print operation. If you have DOS-format input but want Unix-format (LF only) output, then you have to work a bit harder - but you also need to stipulate this explicitly in the question.
It worked OK for me on MacOS X 10.6.5 with the numbers 1..5, and 1..50, and 1..5000 (23,893 characters in the single line of output); I'm not sure that I'd want to push it any harder than that.
In response to @Jonathan's comment on @eumiro's answer:
tr -s '\r\n' ',' < input.txt | sed -e 's/,$/\n/' > output.txt
tr and sed used to be very good, but when it comes to file parsing and regexes you can't beat perl.
(Not sure why people think that sed and tr are closer to the shell than perl...)
perl -pe 's/\n/,/' your_file
If you want pure shell to do it, look at string matching:
${string/#substring/replacement}
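For example, a minimal pure-shell sketch along those lines:
out=
while IFS= read -r line; do
    out=${out:+$out,}$line
done < input.txt
printf '%s\n' "$out" > output.txt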
Use the paste command. Here it is reading from a pipe:
printf '1\n2\n3\n4\n5\n' | paste -s -d, /dev/stdin
Here it is using a file:
printf '1\n2\n3\n4\n5\n' > /tmp/input.txt
paste -s -d, /tmp/input.txt
Per the man page, -s concatenates all lines and -d defines the delimiter character.
Awk versions:
awk '{printf("%s,",$0)}' input.txt
awk 'BEGIN{ORS=","} {print $0}' input.txt
Output: 1,2,3,4,5,
Since you asked for 1,2,3,4,5 rather than 1,2,3,4,5, (note the trailing comma after the 5; most of the solutions above include it), here are two more Awk versions (with wc and sed) that get rid of the last comma:
i='input.txt'; awk -v c=$(wc -l $i | cut -d' ' -f1) '{printf("%s",$0);if(NR<c){printf(",")}}' $i
awk '{printf("%s,",$0)}' input.txt | sed 's/,\s*$//'
printf "1\n2\n3" | tr '\n' ','
If you want to output that to a file, just do:
printf "1\n2\n3" | tr '\n' ',' > myFile
If you have the content in a file, do:
cat myInput.txt | tr '\n' ',' > myOutput.txt
python version:
python -c 'import sys; print(",".join(sys.stdin.read().splitlines()))'
Doesn't have the trailing comma problem (because join works that way), and splitlines splits data on native line endings (and removes them).
cat input.txt | sed -e 's|$|,|' | xargs | sed -e 's|, |,|g' -e 's|,$||'