I'm trying to prepend and append a header and trailer to a very large file.
So far I have tried sed and awk. I can't get sed to work at all on the Mac with the examples I found online. I have got awk to work, but it only prints to the screen.
I am using this site as a reference:
http://www.theunixschool.com/2011/03/different-ways-to-add-header-and.html
Using awk, how do I actually get this to update my file? I'm open to other suggestions too.
cat header big_file footer > tmp_file && mv tmp_file big_file
This works on unix/linux:
cat headerfile myfile trailerfile > newfile
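If the header and trailer are short literal strings rather than files, a minimal sketch of the same rewrite-to-a-temp-file idea (big_file and tmp_file are just placeholder names):
{ printf 'HEADER LINE\n'; cat big_file; printf 'TRAILER LINE\n'; } > tmp_file && mv tmp_file big_file
Note that the trailer alone can be appended in place with cat trailerfile >> big_file; it is only the header that forces rewriting the whole file.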
I am using
sed -s n v
Nothing works for me
The -i flag works differently between sed implementations: GNU sed takes an optional backup suffix after -i, while BSD (macOS) sed requires a suffix argument, even if empty.
tr ']' '[' < file > temp
mv temp file
The above should work for you.
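If in-place editing is what you were after, a small sketch of the two -i syntaxes (assuming the goal is turning ] into [, as the tr answer suggests):
sed -i 's/]/[/g' file       # GNU sed: the backup suffix after -i is optional
sed -i '' 's/]/[/g' file    # BSD/macOS sed: a suffix argument is required; '' means no backup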
I have output from a program that I would like to process. If I pipe it to a file I get:
file/path#backup2018
file2/path/more/path/path#backup2019
file3/path#backup2017
And I want to process it so it looks like this:
file/path file.path
file2/path/more/path/path file.path.more.path.path
file3/path file.path
I have figured out how to do it with separate commands but would like a one-liner.
$ awk -F# '{s=$1; gsub("/", ".", s); print $1, s}' file | column -t
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
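The same one-liner spread out with comments, to show what each step does:
awk -F'#' '{
    s = $1              # copy the part before the #
    gsub("/", ".", s)   # turn every / in the copy into a .
    print $1, s         # print the original path and the dotted copy
}' file | column -t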
Using sed (GNU sed, because of the branch label in a one-liner):
sed 's/\([^#]*\)#.*/\1 \1/; :a; s/\( .*\)\//\1./; ta' file | column -t
The first substitution duplicates the part before the #; the :a ... ta loop then turns each / in the second copy into a dot.
I'd like to find duplicates in a CSV file using bash, with a pipe (|) as the field separator.
Let's take an example:
Input:
W14|E75
Z20|K60
R59|R59
K60|O74
A08|M10
Expected output:
Z20|K60
R59|R59
K60|O74
Otherwise, another acceptable output:
Z20|K60
R59|R59
I mean: when a value already exists in the first column, keep the line, and the same for the second column; otherwise I can accept keeping only the first line.
What I tried is:
awk -F "|" 'FNR==NR { x[$1,$2]++; next } x[$1,$2] > 1' file.csv file.csv
I thought about using grep but I'm not quite sure how to do it.
Sorry for the bad English, and thank you in advance.
Based on the output, I think you want the non-unique entries regardless of their position in the lines:
$ awk -F'|' 'NR==FNR{a[$1]++;a[$2]++;next} a[$1]*a[$2]>1' file{,}
should give you your first output.
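For reference, a commented version of the same command (file{,} is brace expansion for "file file", so awk reads the file twice):
awk -F'|' '
    NR == FNR { a[$1]++; a[$2]++; next }   # first pass: count every value seen in either column
    a[$1] * a[$2] > 1                      # second pass: print a line if at least one of its values occurs more than once overall
' file file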
Is there a way to change multiple (up to ten) patterns in a single FASTA file using sed?
For instance, I want to change X to Y:
sed "s/X/Y/g" < file1.fasta > output.fasta
How do I add sed "s/\s/_/g" and 8 more commands to the same one-liner?
You can separate the commands with semicolons:
sed 's/a/b/;s/c/d/'
(you can also use newlines instead of semicolons)
or you can use multiple -e options:
sed -e 's/a/b/' -e 's/c/d/'
See this example (tested with GNU sed):
kent$ echo 'abcd'|sed 's/a/1/;s/b/2/;s/c/3/;s/d/4/'
1234
kent$ echo 'abcd'|sed 'y/abcd/1234/'
1234
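If the list of substitutions keeps growing, another option is to keep one sed command per line in a script file and apply it with -f (the file name subs.sed is just an example; \s, as in your question, is a GNU sed extension):
cat > subs.sed <<'EOF'
s/X/Y/g
s/\s/_/g
EOF
sed -f subs.sed file1.fasta > output.fasta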
I want to find a pattern starting from a specific line rather than from the beginning, and then delete all the lines from that position up to the line where the pattern is first matched.
This will delete starting from line 10 until the pattern is matched:
sed '10,/pattern/d' file > newfile
What about this:
sed -e "$lineno,/$pattern/d" $file
where
$lineno is the line number where deletion starts,
$pattern is your pattern, and
$file is your file.
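A small usage sketch with made-up values, writing the result to a new file:
lineno=10
pattern='END'
file=input.txt
sed -e "${lineno},/${pattern}/d" "$file" > newfile
Remember that $pattern is treated as a regular expression, so characters like / or . in it need escaping.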