Remove a substring from lines starting with a specific character - bash

I am trying to change long names in rows starting with >, so that I only keep the part up to Stage_V_sporulation_protein...:
>tr_A0A024P1W8_A0A024P1W8_9BACI_Stage_V_sporulation_protein_AE_OS=Halobacillus_karajensis_OX=195088_GN=BN983_00096_PE=4_SV=1
MTFLWAFLVGGGICVIGQILLDVFKLTPAHVMSSFVVAGAVLDAFDLYDNLIRFAGGGATVPITSFGHSLLHGAMEQADEHGVIGVAIGIFELTSAGIASAILFGFIVAVIFKPKG
>tr_A0A060LWV2_A0A060LWV2_9BACI_SpoIVAD_sporulation_protein_AEB_OS=Alkalihalobacillus_lehensis_G1_OX=1246626_GN=BleG1_2089_PE=4_SV=1
MIFLWAFLVGGVICVIGQLLMDVVKLTPAHTMSTLVVSGAVLAGFGLYEPLVDFAGAGATVPITSFGNSLVQGAMEEANQVGLIGIITGIFEITSAGISAAIIFGFIAALIFKPKG
I am doing a loop:
cat file.txt | while read line; do
if [[ $line = \>* ]] ; then
cut -d_ -f1-4 $line;
fi;
done
but cut addresses files, not rows in the file (I get cut: >tr_A0A024P1W8_A0A024P1W8_9BACI_Stage_V_sporulation_protein_AE_OS=Halobacillus_karajensis_OX=195088_GN=BN983_00096_PE=4_SV=1: No such file or directory).
My desired output is:
>tr_A0A024P1W8_A0A024P1W8_9BACI
MTFLWAFLVGGGICVIGQILLDVFKLTPAHVMSSFVVAGAVLDAFDLYDNLIRFAGGGATVPITSFGHSLLHGAMEQADEHGVIGVAIGIFELTSAGIASAILFGFIVAVIFKPKG
>tr_A0A060LWV2_A0A060LWV2_9BACI
MIFLWAFLVGGVICVIGQLLMDVVKLTPAHTMSTLVVSGAVLAGFGLYEPLVDFAGAGATVPITSFGNSLVQGAMEEANQVGLIGIITGIFEITSAGISAAIIFGFIAALIFKPKG
How do I change actual rows?

With the current state of the question, it seems easiest to do:
awk '/^>/ {print $1,$2,$3,$4; next}1' FS=_ OFS=_ file.txt
Lines that match the > at the beginning of the line get only the first four fields printed, separated by _ (the value of OFS). Lines that do not match are printed unchanged.
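Run on the sample file above, this produces the desired output:
$ awk '/^>/ {print $1,$2,$3,$4; next}1' FS=_ OFS=_ file.txt
>tr_A0A024P1W8_A0A024P1W8_9BACI
MTFLWAFLVGGGICVIGQILLDVFKLTPAHVMSSFVVAGAVLDAFDLYDNLIRFAGGGATVPITSFGHSLLHGAMEQADEHGVIGVAIGIFELTSAGIASAILFGFIVAVIFKPKG
>tr_A0A060LWV2_A0A060LWV2_9BACI
MIFLWAFLVGGVICVIGQLLMDVVKLTPAHTMSTLVVSGAVLAGFGLYEPLVDFAGAGATVPITSFGNSLVQGAMEEANQVGLIGIITGIFEITSAGISAAIIFGFIAALIFKPKG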

One way using sed:
sed -E '/^>/s/(.*)_Stage_V_sporulation_protein.*/\1/' file
Note that this only rewrites headers that actually contain _Stage_V_sporulation_protein; the second sample header (SpoIVAD_sporulation_protein_AEB) would pass through unchanged.

A sed one-liner would be:
sed '/^>/s/^\(\([^_]*_\)\{3\}[^_]*\).*/\1/' file
On lines starting with >, it captures the first four _-separated fields (three fields each followed by a _, then a fourth) and discards the rest of the line.

Use this Perl one-liner to process the headers in your FASTA file:
perl -lpe 'if ( m{^>} ) { @f = split m{_}, $_; splice @f, 4; $_ = join "_", @f; }' file.txt > out.txt
The Perl one-liner uses these command line flags:
-e : Tells Perl to look for code in-line, instead of in a file.
-p : Loop over the input one line at a time, assigning it to $_ by default. Add print $_ after each loop iteration.
-l : Strip the input line separator ("\n" on *NIX by default) before executing the code in-line, and append it when printing.
The one-liner uses split to split the input string on underscore into the array @f.
Then splice is used to remove from the array all elements except for the first 4 elements.
Finally, join joins these elements on an underscore.
All of the above is wrapped inside if ( m{^>} ) { ... } in order to limit the costly string manipulations only to the FASTA headers (the lines that start with >).
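A quick check on a single header:
$ echo '>tr_A0A024P1W8_A0A024P1W8_9BACI_Stage_V_sporulation_protein_AE_OS=Halobacillus_karajensis_OX=195088_GN=BN983_00096_PE=4_SV=1' | perl -lpe 'if ( m{^>} ) { @f = split m{_}, $_; splice @f, 4; $_ = join "_", @f; }'
>tr_A0A024P1W8_A0A024P1W8_9BACI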
SEE ALSO:
perldoc perlrun: how to execute the Perl interpreter: command line switches

Related

Replace every 4th occurrence of char "_" with "#" in multiple files

I am trying to replace every 4th occurrence of "_" with "#" in multiple files with bash.
E.g.
foo_foo_foo_foo_foo_foo_foo_foo_foo_foo..
would become
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo...
perl -pe 's{_}{++$n % 4 ? $& : "#"}ge' *.txt
I have tried perl, but the problem is that this replaces every 4th _ carrying on from the last file. So, for example, in some files the first _ is replaced because the count is not starting at 0 for each new file; it carries on from the previous file.
I have tried:
awk '{for(i=1; i<=NF; i++) if($i=="_") if(++count%4==0) $i="#"}1' *.txt
but this also does not work.
Using sed I cannot find a way to keep replacing every 4th occurrence, as there are different numbers of _ in each file. Some files have 20 _, some have 200 _. Therefore, I can't specify a range.
I am really lost what to do, can anybody help?
You just need to reset the counter in the perl one using eof to tell when it's done reading each file:
perl -pe 's{_}{++$n % 4 ? "_" : "#"}ge; $n = 0 if eof' *.txt
This MAY be what you want, using GNU awk for RT:
$ awk -v RS='_' '{ORS=(FNR%4 ? RT : "#")} 1' file
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo..
It only reads each _-separated string into memory one at a time, so it should work no matter how large your input file is, assuming there are _s in it.
It assumes you want to replace every 4th _ across the whole file as opposed to within individual lines.
A simple sed would handle this:
s='foo_foo_foo_foo_foo_foo_foo_foo_foo_foo'
sed -E 's/(([^_]+_){3}[^_]+)_/\1#/g' <<< "$s"
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
Explanation:
(: Start capture group #1
([^_]+_){3}: Match 1+ of non-_ characters followed by a _. Repeat this group 3 times to match 3 such words separated by _
[^_]+: Match 1+ of non-_ characters
): End capture group #1
_: Match a _
Replacement is \1# to replace the 4th _ with a #
With GNU sed:
sed -nsE ':a;${s/(([^_]*_){3}[^_]*)_/\1#/g;p};N;ba' *.txt
-n suppresses the automatic printing, -s processes each file separately, -E uses extended regular expressions.
The script is a loop between label a (:a) and the branch-to-label-a command (ba). Each iteration appends the next line of input to the pattern space (N). This way, after the last line has been read, the pattern space contains the whole file(*). During the last iteration, when the last line has been read ($), a substitute command (s) replaces every 4th _ in the pattern space by a # (s/(([^_]*_){3}[^_]*)_/\1#/g) and prints (p) the result.
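For example (hypothetical file name f1.txt, containing the sample string):
$ sed -nsE ':a;${s/(([^_]*_){3}[^_]*)_/\1#/g;p};N;ba' f1.txt
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo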
When you are satisfied with the result you can change the options:
sed -i -nE ':a;${s/(([^_]*_){3}[^_]*)_/\1#/g;p};N;ba' *.txt
to modify the files in-place, or:
sed -i.bkp -nE ':a;${s/(([^_]*_){3}[^_]*)_/\1#/g;p};N;ba' *.txt
to modify the files in-place, but keep a *.txt.bkp backup of each file.
(*) Note that if you have very large files this could cause memory overflows.
With your shown samples, please try the following awk program. It creates an awk variable named fieldNum with 4 assigned to it, since a # is needed after every 4th _; you can change it as per your need.
awk -v fieldNum="4" '
BEGIN{ FS=OFS="_" }
{
val=""
for(i=1;i<=NF;i++){
val=(val?val:"") $i (i%fieldNum==0?"#":(i<NF?OFS:""))
}
print val
}
' Input_file
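Run against the sample line from the question, this program would print:
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo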
With GNU awk
$ cat ip.txt
foo_foo_foo_foo_foo_foo_foo_foo_foo_foo
123_45678_90
_
$ awk -v RS='(_[^_]+){3}_' -v ORS= '{sub(/_$/, "#", RT); print $0 RT}' ip.txt
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
123_45678_90
#
-v RS='(_[^_]+){3}_' set input record separator to cover sequence of four _ (text matched by this separator will be available via RT)
-v ORS= empty output record separator
sub(/_$/, "#", RT) change last _ to #
Use -i inplace for inplace editing.
If the count should reset for each line:
perl -pe's/(?:_[^_]*){3}\K_/\#/g'
$ cat a.txt
foo_foo_foo_foo_foo_foo_foo_foo_foo_foo
foo_foo_foo_foo_foo_foo_foo_foo_foo_foo
$ perl -pe's/(?:_[^_]*){3}\K_/\#/g' a.txt a.txt
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
If the count shouldn't reset for each line, but should reset for each file:
perl -0777pe's/(?:_[^_]*){3}\K_/\#/g'
The -0777 causes the whole file to be treated as one line. This causes the count to work properly across lines.
But since a new match is used for each file, the count is reset between files.
$ cat a.txt
foo_foo_foo_foo_foo_foo_foo_foo_foo_foo
foo_foo_foo_foo_foo_foo_foo_foo_foo_foo
$ perl -0777pe's/(?:_[^_]*){3}\K_/\#/g' a.txt a.txt
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
foo_foo_foo#foo_foo_foo_foo#foo_foo_foo
foo_foo_foo_foo#foo_foo_foo_foo#foo_foo
foo_foo_foo#foo_foo_foo_foo#foo_foo_foo
To avoid reading the entire file at once, you could continue using the same approach, but with the following added:
$n = 0 if eof;
Note that eof is not the same thing as eof()! See eof.
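Assembled, that gives the same counter-reset one-liner shown at the top of this thread:
perl -pe's{_}{++$n % 4 ? $& : "#"}ge; $n = 0 if eof' *.txt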

grep line and line after into one line

I have a text file with lots of information. I'm interested in getting only the aliases.
The alias and the port information are separated by a space; each port is separated by a semicolon.
This command:
cat ~/Desktop/brocade_output.txt | grep -A1 alias
gives me this output (all aliases start with the a_ prefix):
alias: a_computer_1
40:01:00:00:ab:00:00:aj; 60:01:00:0e:1e:d0:b5:fd
--
alias: a_helpdesk
41:00:00:24:fh:5c:99:9e; 81:00:00:24:ff:5c:48:9f
--
alias: a_library
91:00:00:24:fh:5c:99:9g; 91:00:00:24:ff:5c:48:9g
--
Desired output
a_computer_1 40:01:00:00:ab:00:00:aj 60:01:00:0e:1e:d0:b5:fd
a_helpdesk 41:00:00:24:fh:5c:99:9e 81:00:00:24:ff:5c:48:9f
a_library 91:00:00:24:fh:5c:99:9g 91:00:00:24:ff:5c:48:9g
In awk:
$ awk '/alias/ {f=$2;next} f{$1=$1; print f, $0; f=0 }' file
a_computer_1 40:01:00:00:ab:00:00:aj; 60:01:00:0e:1e:d0:b5:fd
a_helpdesk 41:00:00:24:fh:5c:99:9e; 81:00:00:24:ff:5c:48:9f
a_library 91:00:00:24:fh:5c:99:9g; 91:00:00:24:ff:5c:48:9g
Explained:
/alias/ {f=$2;next} when alias in record, f equals second field
f{$1=$1; print f, $0; f=0 } when f is set, print it and the "next record", then reset f
Try this:
grep -A1 alias ~/Desktop/brocade_output.txt | cut -f2- -d":"
My understanding is you want the part of the line that follows the first :, so using : as the delimiter for cut and taking all fields starting at 2 should do the trick.
This can also be done in Perl with a few more characters:
perl -ane 'if (/alias/){$_=<>; s/^\s*//; print "$F[1] $_"}' file
These command-line options are used:
- -n loop around each line of the input file
- -a autosplit mode – split input lines into the @F array
- -e execute the perl code
/alias/ matches the current line
$F[1] is the second element in @F
$_=<> assigns the next line from the input file to the default variable $_
s/^\s*// removes the leading spaces from $_
With sed:
sed -n '/alias:/N;{s/.*\(a_[^ ]* \)[[:space:]]*\(.*\)/\1\2/p;}' file
Add the -i flag to edit the file in place.

Replacing/removing excess white space between columns in a file

I am trying to parse a file with similar contents:
I am a string        12831928
I am another string  41327318
A set of strings     39842938
Another string       3242342
I want the out file to be tab delimited:
I am a string\t12831928
I am another string\t41327318
A set of strings\t39842938
Another string\t3242342
I have tried the following:
sed 's/\s+/\t/g' filename > outfile
I have also tried cut, and awk.
Just use awk:
$ awk -F'  +' -v OFS='\t' '{sub(/ +$/,""); $1=$1}1' file
I am a string 12831928
I am another string 41327318
A set of strings 39842938
Another string 3242342
Breakdown:
-F'  +' # tell awk that input fields (FS) are separated by 2 or more blanks
-v OFS='\t' # tell awk that output fields are separated by tabs
'{sub(/ +$/,""); # remove all trailing blank spaces from the current record (line)
$1=$1} # recompile the current record (line) replacing FSs by OFSs
1' # idiomatic: any true condition invokes the default action of "print"
I highly recommend the book Effective Awk Programming, 4th Edition, by Arnold Robbins.
The difficulty comes in the varying number of words per-line. While you can handle this with awk, a simple script reading each word in a line into an array and then tab-delimiting the last word in each line will work as well:
#!/bin/bash
fn="${1:-/dev/stdin}"
while read -r line || test -n "$line"; do
arr=( $(echo "$line") )
nword=${#arr[@]}
for ((i = 0; i < nword - 1; i++)); do
test "$i" -eq '0' && word="${arr[i]}" || word=" ${arr[i]}"
printf "%s" "$word"
done
printf "\t%s\n" "${arr[i]}"
done < "$fn"
Example Use/Output
(using your input file)
$ bash rfmttab.sh < dat/tabfile.txt
I am a string 12831928
I am another string 41327318
A set of strings 39842938
Another string 3242342
Each number is tab-delimited from the rest of the string. Look it over and let me know if you have any questions.
sed -E 's/[ ][ ]+/\\t/g' filename > outfile
NOTE: the [ ] is openBracket Space closeBracket
-E for extended regular expression support.
The double brackets [ ][ ]+ make sure a tab is only substituted for runs of more than 1 consecutive space.
Tested on MacOS and Ubuntu versions of sed.
Your input has spaces at the end of each line, which makes things a little more difficult than without. This sed command would replace the spaces before that last column with a tab:
$ sed 's/[[:blank:]]*\([^[:blank:]]*[[:blank:]]*\)$/\t\1/' infile | cat -A
I am a string^I12831928 $
I am another string^I41327318 $
A set of strings^I39842938 $
Another string^I3242342 $
This matches – anchored at the end of the line – blanks, non-blanks and again blanks, zero or more of each. The last column and the optional blanks after it are captured.
The blanks before the last column are then replaced by a single tab, and the rest stays the same – see output piped to cat -A to show explicit line endings and ^I for tab characters.
If there are no blanks at the end of each line, this simplifies to
sed 's/[[:blank:]]*\([^[:blank:]]*\)$/\t\1/' infile
Notice that some seds, notably BSD sed as found in MacOS, can't use \t for tab in a substitution. In that case, you have to use either '$'\t'' or '"$(printf '\t')"' instead.
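For example, with bash's ANSI-C quoting (a sketch; note the doubled backslashes so that \( and \1 reach sed intact, since $'...' would otherwise treat \1 as an octal escape):
sed $'s/[[:blank:]]*\\([^[:blank:]]*\\)$/\t\\1/' infile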
another approach, with gnu sed and rev
$ rev file | sed -r 's/ +/\t/1' | rev
You have trailing spaces on each line. So you can do two sed expressions in one go like so:
$ sed -E -e 's/ +$//' -e $'s/ +/\t/' /tmp/file
I am a string 12831928
I am another string 41327318
A set of strings 39842938
Another string 3242342
Note the $'s/ +/\t/': This tells bash to replace \t with an actual tab character prior to invoking sed.
To show that these deletions and \t insertions are in the right place you can do:
$ sed -E -e 's/ +$/X/' -e $'s/ +/Y/' /tmp/file
I am a stringY12831928X
I am another stringY41327318X
A set of stringsY39842938X
Another stringY3242342X
Simple and without invisible semantic characters in the code:
perl -lpe 's/\s+$//; s/\s\s+/\t/' filename
Explanation:
Options:
-l: remove LF during processing (in this case)
-p: loop over records (like awk) and print
-e: code follows
Code:
remove trailing whitespace
change two or more whitespace characters to a tab
Tested on OP data. The trailing spaces are removed for consistency.
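A quick check, piping into cat -A (GNU cat assumed) to make the tabs visible as ^I:
$ perl -lpe 's/\s+$//; s/\s\s+/\t/' filename | cat -A
I am a string^I12831928$
I am another string^I41327318$
A set of strings^I39842938$
Another string^I3242342$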

Bash split file on double newline

I have some files with content that changes from file to file.
Each file has 2 sections of lines separated by a blank line.
I never know how many lines or characters there are in either section.
The file can look something like this.
This is a file
with some text

and some more text
This code only gives the first line from each section.
awk 'BEGIN {RS="\n\n"; FS="\n";} {print $1 }' file
I need each section split up to work with.
Prints first part:
sed '/^$/q' test.txt
Prints second part:
sed '1,/^$/d' test.txt
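Assuming the two-section sample above is saved as test.txt:
$ sed '/^$/q' test.txt
This is a file
with some text

$ sed '1,/^$/d' test.txt
and some more text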
Since this is tagged bash, might as well have a native-bash solution.
sections=( )
current_section=
while REPLY=; IFS= read -r || [[ $REPLY ]]; do
if [[ $REPLY ]]; then
# preserve newlines within the sections
if [[ $current_section ]]; then
current_section+=$'\n'"$REPLY"
else
current_section+=$REPLY
fi
else
sections+=( "$current_section" )
current_section=
fi
done <file
[[ $current_section ]] && sections+=( "$current_section" )  # keep the last section if the file doesn't end with a blank line
This will put your file's individual sections into a bash array called sections.
You can print the contents of that array like so:
printf -- '---\n%s\n---\n' "${sections[@]}"
...or iterate over it to do as you please:
for section in "${sections[@]}"; do
: do something with "$section" here
done
Set RS to a null/blank value to get awk to operate on records separated by sequences of blank lines.
From the POSIX specification for awk:
RS
The first character of the string value of RS shall be the input record separator; a <newline> by default. If RS contains more than one character, the results are unspecified. If RS is null, then records are separated by sequences consisting of a <newline> plus one or more blank lines, leading or trailing blank lines shall not result in empty records at the beginning or end of the input, and a <newline> shall always be a field separator, no matter what the value of FS is.
IFS=";"
sections=($(awk -v RS= '{print $0 ";"}' filename))
IFS=";" sets the internal field separator from space (the default) to a semicolon
$(awk -v RS= '{print $0 ";"}' filename) prints out sections separated by double new lines and outputs a ; after each section. Another way to achieve the same result is awk -v ORS=';' -v RS= '{print}'
sections=(awk_output_here) splits the awk output based on the IFS and assigns each section to an array element. Hence, the sections array contains the sections, split on ;. An individual section may be accessed with ${sections[0]}, where 0 is an index.
The above awk trick works because of a convention documented in the GNU awk manual:
Another technique is to have blank lines separate records. By a special dispensation, an empty string as the value of RS indicates that records are separated by one or more blank lines. When RS is set to the empty string, each record always ends at the first blank line encountered.
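A minimal illustration of paragraph mode, using the same two-section sample (file name test.txt assumed):
$ awk -v RS= '{print "section " NR ":"; print}' test.txt
section 1:
This is a file
with some text
section 2:
and some more text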
Supposing that there are precisely two parts, this very simple Perl trick will print to standard output until it encounters an empty line, and then print the remainder to standard error;
perl -ne 'if (1../^$/) { print STDOUT } else { print STDERR }'
e.g.
cat tmp0 | perl -ne 'if (1../^$/) { print STDOUT } else { print STDERR }' > tmp1 2> tmp2
tmp1:
This is a file
with some text
tmp2:
and some more text

output csv with lines that contains only one column

with input csv file
sid,storeNo,latitude,longitude
2,1,-28.03720000,153.42921670
9
I wish to output only the lines with one column, in this example it's line 3.
how can this be done in bash shell script?
Using awk
The following awk would be useful:
$ awk -F, 'NF==1' inputFile
9
What it does?
-F, sets the field separator as ,
NF==1 matches lines whose number of fields, NF, is 1. No action is provided, hence the default action of printing the entire record is taken; it is equivalent to NF==1{print $0}
inputFile input csv file to the awk script
Using grep
The same function can also be done using grep
$ grep -v ',' inputFile
9
-v option prints lines that do not match the pattern
, along with -v makes grep print the lines that do not contain the , field separator
Using sed
$ sed -n '/^[^,]*$/p' inputFile
9
what it does?
-n suppresses normal printing of pattern space
/^[^,]*$/ selects lines that match the pattern, i.e. lines without any ,
^ anchors the regex at the start of the string
[^,]* matches anything other than ,
$ anchors string at the end of string
p makes sed print the current pattern space, that is, the matching lines
try this bash script
#!/bin/bash
while read -r line
do
IFS=","           # split fields on commas
set -- $line      # load the CSV fields into the positional parameters
case ${#} in      # ${#} is the number of fields
1) echo $line;;   # exactly one field: print the line
*) continue;;     # otherwise skip it
esac
done < file
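For example, assuming the script is saved as onecol.sh (hypothetical name) and the sample CSV as file:
$ bash onecol.sh
9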
