Is there a way to change multiple (up to ten) patterns in a single fasta file using sed?
For instance, I want to change X to Y:
sed "s/X/Y/g" < file1.fasta > output.fasta
How do I add sed "s/\s/_/g" and eight more commands to the same one-liner?
You can separate the commands with semicolons:
sed 's/a/b/;s/c/d/'
(you can also use newlines instead of semicolons)
or you can use multiple -e options:
sed -e 's/a/b/' -e 's/c/d/'
See these examples (tested with GNU sed):
kent$ echo 'abcd'|sed 's/a/1/;s/b/2/;s/c/3/;s/d/4/'
1234
kent$ echo 'abcd'|sed 'y/abcd/1234/'
1234
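(The second example uses sed's y command, which transliterates character for character, much like tr.)
Applied to the FASTA question, a minimal sketch chaining the substitutions from the question with a placeholder for the rest (note that \s is a GNU sed extension; [[:space:]] is the portable spelling):
# X->Y and whitespace->_ come from the question; s/A/B/g is a placeholder for further patterns
sed 's/X/Y/g; s/[[:space:]]/_/g; s/A/B/g' file1.fasta > output.fasta
Up to ten s/// commands can be added the same way, each separated by a semicolon.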
I have 20 text files named as follows:
samp_100.out, samp_200.out, samp_300.out, ..., samp_2000.out.
The naming is consistent and the numbering increases by 100 up to 2000. I want to make a short script to (1) delete the first line of each file, and (2) apply the following command to each of the files:
sed 's/ \+/,/g' ifile.txt > ofile.csv
while keeping each name the same apart from the .csv extension.
I am assuming I need to use a for loop, but I am not sure how to iterate through the file names.
This might work for you (GNU sed and parallel):
parallel "sed '1d;s/ \+/,/g' samp_{}.out > samp_{}.csv" ::: {100..2000..100}
This uses GNU sed, GNU parallel, and brace expansion to delete the first line of each file, replace each run of spaces with a comma, and write the result to a matching .csv file.
Alternative:
for i in {100..2000..100}
do sed '1d;s/ \+/,/g' samp_${i}.out > samp_${i}.csv
done
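The {100..2000..100} step form requires bash 4 or later. A glob-based sketch that avoids hard-coding the numbers (assuming every samp_*.out file in the directory should be converted):
for f in samp_*.out    # assumes all samp_*.out files are wanted
do sed '1d;s/ \+/,/g' "$f" > "${f%.out}.csv"
done
Here ${f%.out} strips the .out suffix, so each .csv file keeps the same base name.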
I am using
sed -s n v
but nothing works for me.
Note that sed's -i flag without an argument is a GNU extension; BSD/macOS sed requires a backup suffix (which may be empty), and POSIX sed has no -i at all.
tr ']' '[' < file > temp
mv temp file
The above should work for you.
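If in-place editing is available, an equivalent sketch (assuming, as in the tr answer above, the goal is to turn every ] into [):
sed -i 'y/]/[/' file       # GNU sed
sed -i '' 'y/]/[/' file    # BSD/macOS sed needs an explicit (empty) backup suffix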
I have two tab-delimited files:
File1.tab
100 ABC
300 CDE
File2.tab
399 GSA
300 CDE
I want an awk command to return 1, because the row '300 CDE' is common to both files.
I almost hate to encourage laziness by answering a question with so little effort put into it, but did you try grep?
$: grep -c -f File1.tab File2.tab
1
If lines are unique within each file, you can use grep:
grep -f File1.tab File2.tab | wc -l
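If awk specifically is required, a minimal sketch using the standard two-file idiom (FNR==NR is true only while the first file is being read):
awk 'FNR==NR {seen[$0]; next} $0 in seen {c++} END {print c+0}' File1.tab File2.tab
This stores each line of File1.tab as an array key, counts the lines of File2.tab that match, and prints the count (c+0 prints 0 when there are no matches).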
I have output from a program that I would like to process; if I pipe it to a file I get:
file/path#backup2018
file2/path/more/path/path#backup2019
file3/path#backup2017
And I want to process it so it looks like this:
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
I have figured out how to do it with separate commands but would like a one-liner.
$ awk -F# '{s=$1; gsub("/", ".", s); print $1, s}' file | column -t
file/path file.path
file2/path/more/path/path file2.path.more.path.path
file3/path file3.path
Using sed:
sed 's|\([^#]*\)#.*|\1 \1|;:a;s|\(.* .*\)/|\1.|;ta' file | column -t
The first substitution duplicates everything before the #; the :a/ta loop then keeps replacing the last remaining slash after the space with a dot until none are left.
I have a txt file like this:
cat fruits.txt
apple
banana
mango
I need to put them into a bash array:
fruit[0]='apple'
fruit[1]='banana'
fruit[2]='mango'
You can do this (note that it relies on word splitting, so it is only safe when the entries contain no whitespace or glob characters):
fruit=( $(<fruits.txt) )
set | grep fruit
fruit=([0]="apple" [1]="banana" [2]="mango")
In bash 4 and later:
mapfile fruit < fruits.txt
To strip the trailing newline from each line, use -t:
mapfile -t fruit < fruits.txt
The command readarray is a synonym for mapfile.
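A quick sketch to verify the result, using the file from the question:
readarray -t fruit < fruits.txt
declare -p fruit      # declare -a fruit=([0]="apple" [1]="banana" [2]="mango")
echo "${fruit[1]}"    # banana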