Add alphabet counter to beginning of each line - bash

I would like to add a counter to each line in a text file, indexed over the alphabet. Are there any straightforward ways of doing this? For instance, if my text file looks like this
textAblabla
textBblabla
textCblabla
...
textABblabla
textACblabla
it should be converted into
A textAblabla
B textBblabla
C textCblabla
...
AB textABblabla
AC textACblabla
EDIT: It should be noted that each line may very well be distinct. I now highlight this since my original question seems to have caused some confusion.
EDIT 2: I do not want to print the list, I want to modify (overwrite) the original file with the enumeration.

With Perl:
perl -e '$x=A; while (<>) {print $x++." $_";}' file
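This works because of Perl's magic string increment: applying ++ to a purely alphabetic value steps through A, B, ..., Z, AA, AB, and so on, so the counter never runs out. You can see the sequence for yourself with something like:
perl -le '$x = "A"; print $x++ for 1 .. 30'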

Adapting Cyrus' answer to add the -p command-line option and changing the code to overwrite the original file:
$ perl -pi -e 'BEGIN { $x="A" }; $_ = $x++ . " $_"' your_file_here
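A quick sanity check on a two-line copy of the sample (your_file_here is of course just a placeholder name):
$ cat your_file_here
textAblabla
textBblabla
$ perl -pi -e 'BEGIN { $x="A" }; $_ = $x++ . " $_"' your_file_here
$ cat your_file_here
A textAblabla
B textBblabla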

With sed:
sed -E 's/([^A-Z]+)([A-Z]+)(.*)/\2 \1\2\3/' infile
Add -i to overwrite the file in place. Note that this does not generate a counter; it copies the uppercase letters already embedded in each line to the front, so it only works for input shaped like the sample.
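For example, feeding it two of the sample lines (printf here just stands in for the real file):
$ printf 'textAblabla\ntextABblabla\n' | sed -E 's/([^A-Z]+)([A-Z]+)(.*)/\2 \1\2\3/'
A textAblabla
AB textABblabla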

Unix sed command - global replacement is not working

I have a scenario where we want to replace runs of multiple double quotes inside the data with a single double quote. Since the input data is comma-delimited and every column value is enclosed in double quotes, I ran into an issue, explained below:
The sample data looks like this:
"int","","123","abd"""sf123","top"
So, the output would be:
"int","","123","abd"sf123","top"
I tried the approach below, but only the first occurrence is handled, and I'm not sure what the issue is:
sed -ie 's/,"",/,"NULL",/g;s/""/"/g;s/,"NULL",/,"",/g' inputfile.txt
replacing all ,"", with ,"NULL",
replacing all runs of multiple quotes (""" or "" or """") with a single "
reverting step 1, changing ,"NULL", back to ,"",
But only the first occurrence gets handled correctly, and the remaining ones come out wrong, as shown below:
If input is :
"int","","","123","abd"""sf123","top"
the output is coming as:
"int","","NULL","123","abd"sf123","top"
But, the output should be:
"int","","","123","abd"sf123","top"
You may try this perl with a lookahead:
perl -pe 's/("")+(?=")//g' file
"int","","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
"123"abcs"
Where input is:
cat file
"int","","123","abd"""sf123","top"
"int","","","123","abd"""sf123","top"
"123"""""abcs"
Breakup:
("")+: Match 1+ pairs of double quotes
(?="): If those pairs are followed by a single "
Using sed
$ sed -E 's/(,"",)?"+(",)?/\1"\2/g' input_file
"int","","123","abd"sf123","top"
"int","","NULL","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
With awk, and with your shown samples, please try the following code. It was written and tested in GNU awk and should work in any version of awk.
awk '
BEGIN{ FS=OFS="," }
{
  for(i=1;i<=NF;i++){
    if($i!~/^""$/){
      gsub(/"+/,"\"",$i)
    }
  }
}
1
' Input_file
Explanation: set the field separator and output field separator to , for all lines of Input_file. Then traverse each field of the line; if the field is not an empty quoted field (""), globally replace every run of one or more " characters with a single ". Finally, print the line.
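For instance, condensed to one line and run on the same sample file shown above, it gives:
$ awk 'BEGIN{ FS=OFS="," } { for(i=1;i<=NF;i++) if($i!~/^""$/) gsub(/"+/,"\"",$i) } 1' file
"int","","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
"123"abcs"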
With sed you can match one or more pairs of "" using a group, followed by a single ", and replace the whole run with a single ":
sed -E 's/("")+"/"/g' file
For this content
$ cat file
"int","","123","abd"""sf123","top"
"int","","","123","abd"""sf123","top"
"123"""""abcs"
The output is
"int","","123","abd"sf123","top"
"int","","","123","abd"sf123","top"
"123"abcs"
sed s'#"""#"#' file
That works. I will demonstrate another method though, which you may also find useful in other situations.
#!/bin/sh -x
# Create an ed script: on line 4 of the split-out fields, collapse """ to "
cat > ed1 <<EOF
4s/"""/"/
wq
EOF
cp file stack
tr ',' '\n' < stack > f2   # one field per line
ed -s f2 < ed1             # edit the target field by its line number
tr '\n' ',' < f2 > stack   # join the fields back with commas
rm -v ./f2
rm -v ./ed1
The point of this is that if you have a big CSV record all on one line and you want to edit a specific field, then if you know the field number, you can convert all the commas to newlines, use the field number as a line number to substitute, append after, or insert before that field with ed, and then re-convert back to CSV.
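One caveat with the tr-based rejoin: it also turns the file's final newline into a comma, leaving a trailing comma on the record. If that matters, a paste-based rejoin (a small variation, not part of the original script) avoids it:
paste -s -d, f2 > stack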

Update version number in property file using bash

I am new to bash scripting and I need help with awk. I have a property file with a version inside and I want to update it.
version=1.1.1.0
and I use awk to do that
file="version.properties"
awk -F'["]' -v OFS='"' '/version=/{
split($4,a,".");
$4=a[1]"."a[2]"."a[3]"."a[4]+1
}
;1' $file > newFile && mv newFile $file
but I am getting a strange result: version="1.1.1.0""...1
Could someone please help me with this?
You mentioned in your comment you want to update the file in place. You can do that in a one-liner with perl:
perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i version.properties
Explanation
-e is followed by a script to run. With -p and -i, the effect is to run that script on each line, and modify the file in place if the script changes anything.
The script itself, broken down for explanation, is:
/^version=/ and # Do the following on lines starting with `version=`
s/ # Make a replacement on those lines
(\d+\.\d+\.\d+\.)(\d+)/ # Match x.y.z.w, and set $1 = `x.y.z.` and $2 = `w`
$1 . ($2+1)/ # Replace x.y.z.w with a copy of $1, followed by w+1
e # This tells Perl the replacement is Perl code rather
# than a text string.
Example run
$ cat foo.txt
version=1.1.1.2
$ perl -pe '/^version=/ and s/(\d+\.\d+\.\d+\.)(\d+)/$1 . ($2+1)/e' -i foo.txt
$ cat foo.txt
version=1.1.1.3
This is not the best way, but here's one fix.
Test case
I am assuming the input file has at least one line that is exactly version=1.1.1.0.
$ awk -F'["]' -v OFS='"' '/version=/{
> split($4,a,".");
> $4=a[1]"."a[2]"."a[3]"."a[4]+1
> }
> ;1' <<<'version=1.1.1.0'
Output:
version=1.1.1.0"""...1
The """ is because you are assigning to field 4 ($4). When you do that, awk adds field separators (OFS) between fields 1 and 2, 2 and 3, and 3 and 4. Three OFS => """, in your example.
Minimal change
$ awk -F'["]' -v OFS='"' '/version=/{
split($1,a,".");
$1=a[1]"."a[2]"."a[3]"."a[4]+1;
print
}
' <<<'version=1.1.1.0'
version=1.1.1.1
Two changes:
Change $4 to $1
Since the input field separator (-F) is ["], $4 is whatever would be after the third " (if there were any in the input). Therefore, split($4, ...) splits an empty field. The contents of the line, before the first " (if any), are in $1.
print at the end instead of ;1
The 1 after the closing curly brace is the next condition, and there is no action specified. The default action is to print the current line, as modified, so the 1 triggers printing. Instead, just print within your action when you are done processing. That way your action is self-contained. (Of course, if you needed to do other processing, you might want to print later, after that processing.)
You can use = as the delimiter, like this:
awk -F= -v v=1.0.1 '$1=="version"{printf "version=\"%s\"\n", v}' file.properties
Note that this only prints the matching line, with a version you supply via -v and wrapped in double quotes; it does not increment the existing value.
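If you want an awk that actually increments the last component rather than taking a hard-coded value, here is a sketch along the same lines (it assumes the version is always the last dot-separated number on the line and that the value is unquoted, as in the question's sample):
awk -F. -v OFS=. '/^version=/{ $NF++ } 1' version.properties > newFile && mv newFile version.properties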

Duplicate and modify words in bash with sed

I'm on Linux and have a file to modify in my bash script.
My original file looks like this:
...
ERIC-1898
HELENE-5456
THOMAS-54565
IRON-06516
...
And I'd like to modify this file by duplicating each word (inserting -SYSTEM- into the second copy) and adding double quotes.
So the result has to look like this:
...
"ERIC-1898" "ERIC-SYSTEM-1898"
"HELENE-5456" "HELENE-SYSTEM-5456"
"THOMAS-54565" "THOMAS-SYSTEM-54565"
"IRON-06516" "IRON-SYSTEM-06516"
...
How can I do that, for example with sed?
With sed and two capture groups:
$ sed 's/\(.*-\)\(.*\)/"&" "\1SYSTEM-\2"/' infile
"ERIC-1898" "ERIC-SYSTEM-1898"
"HELENE-5456" "HELENE-SYSTEM-5456"
"THOMAS-54565" "THOMAS-SYSTEM-54565"
"IRON-06516" "IRON-SYSTEM-06516"
Assuming that there is exactly one hyphen per input line.
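If the name part can itself contain a hyphen, a variant that splits on the last hyphen instead may be closer to what you want (a sketch, assuming -SYSTEM- should go before the trailing number):
sed 's/\(.*\)-\([^-]*\)$/"\1-\2" "\1-SYSTEM-\2"/' infile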
awk solution:
awk -F'-' '{printf("\"%s\" \"%s-SYSTEM-%s\"\n", $1FS$2,$1,$2)}' file
The output would be like:
"ERIC-1898" "ERIC-SYSTEM-1898"
"HELENE-5456" "HELENE-SYSTEM-5456"
"THOMAS-54565" "THOMAS-SYSTEM-54565"
"IRON-06516" "IRON-SYSTEM-06516"
Without using an external program:
#!/bin/bash
IFS=$'-'
while read -r first second;do
echo "\"$first-$second\" \"$first-SYSTEM-$second\""
done <infile
awk '{s=$0; sub(/-/,"-SYSTEM-",s); print "\042" $0 "\042 \042" s "\042"}' file
"ERIC-1898" "ERIC-SYSTEM-1898"
"HELENE-5456" "HELENE-SYSTEM-5456"
"THOMAS-54565" "THOMAS-SYSTEM-54565"
"IRON-06516" "IRON-SYSTEM-06516"

sed: Replacing a range of text with contents of a file

There are many examples here and elsewhere on the interwebs of using sed's 'r' to replace a pattern, but it does not seem to work on a range; maybe I'm just not holding it right.
The following works as expected, deleting BEGIN PATTERN and replacing it with the contents of /tmp/somefile.
sed -n "/BEGIN PATTERN/{ r /tmp/somefile d }" TARGET_FILE
This, however, only replaces END PATTERN with the contents of /tmp/somefile.
sed -n "/BEGIN PATTERN/,/END PATTERN/ { r /tmp/somefile d }" TARGET_FILE
I suppose I could try perl or awk to do this as well, but it seems like sed should be able to do this.
I believe that this does what you want:
sed $'/BEGIN PATTERN/r somefile\n /BEGIN PATTERN/,/END PATTERN/d' file
Or:
sed -e '/BEGIN PATTERN/r somefile' -e '/BEGIN PATTERN/,/END PATTERN/d' file
How it works
/BEGIN PATTERN/r somefile
Whenever BEGIN PATTERN is found, this inserts the contents of somefile.
/BEGIN PATTERN/,/END PATTERN/d
Whenever we are in the range from a line with /BEGIN PATTERN/ to a line with /END PATTERN/, we delete (d) the contents of the pattern space.
Example
Let's consider these two test files:
$ cat file
prelude
BEGIN PATTERN
middle
END PATTERN
afterthought
and:
$ cat somefile
This is
New.
Our command produces:
$ sed $'/BEGIN PATTERN/r somefile\n /BEGIN PATTERN/,/END PATTERN/d' file
prelude
This is
New.
afterthought
This might work for you (GNU sed):
sed -e '/BEGIN PATTERN/,/END PATTERN/{/END PATTERN/!d;r somefile' -e 'd}' file
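On the same two test files as above this gives the same result; the -e split is needed because r treats the rest of its script line as the filename:
$ sed -e '/BEGIN PATTERN/,/END PATTERN/{/END PATTERN/!d;r somefile' -e 'd}' file
prelude
This is
New.
afterthought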
John1024's answer works if BEGIN PATTERN and END PATTERN are different. If this is not the case, the following works:
sed $'1,/PATTERN/ { /PATTERN/r somefile\n }; /PATTERN/,/PATTERN/d' file
(The r block has to come before the d range; otherwise d ends the cycle on the PATTERN line before the file is queued for reading.)
By preserving the pattern:
sed $'/PATTERN/,/PATTERN/ { /PATTERN/!d; }; 1,/PATTERN/ { /PATTERN/r somefile\n }' file
As potong pointed out, this solution can yield false positives if the patterns are not properly paired.

Convert multiple lines between patterns to a comma-separated string

I need help in processing data from STDIN (data is taken from another file with 'tail -f' plus grepped to filter out garbage). There are several lines between patterns:
<DN> 589</DN>
<DD>03.12.2014</DD>
<ST> </ST>
<STC>0</STC>
<STT>0</STT>
<PU>5</PU>
<OT>01</OT>
<DSN></DSN>
<NRA>40807,40820,426,30231,40818,30230</NRA>
<GR>300 000-00
&#10</GR>
then the next block with DN/GR starts.
I need to convert the lines between <DN> and </GR> into a single comma-separated line:
<DN> 589</DN>,<DD>03.12.2014</DD>,<ST> </ST>,<STC>0</STC>,<STT>0</STT>,<PU>5</PU>,<OT>01</OT>,<DSN></DSN>,<NRA>40807,40820,426,30231,40818,30230</NRA>,<GR>300 000-00
&#10</GR>
I need a one-liner with awk, sed, or perl that does this and writes the result to STDOUT.
I've tried to do it myself, but failed due to lack of experience. I also tried to google it and didn't find a working solution.
whatever..| awk '{sub(/^\s*/,"");printf "%s%s",$0,(/\/GR>\s*$/?"\n":",")}'
this one-liner does the following:
removes the leading spaces from each line
joins all lines with a , separator until the end of the block (a line ending in /GR>)
if you have x data blocks, it gives you x long lines.
sed -nr '/<DN>/,/<GR>/{ H; /<GR>/{ g; s%\n%,%g; s%^,%%; p; s%.*%%; h }; }' <<'EOSEQ'
<DN> 589</DN>
<DD>03.12.2014</DD>
<STC>0</STC>
<GR>300 000-00
&#10</GR>
<DN>900</DN>
<DD>20.11.2014</DD>
<OT>01</OT>
<NRA>40807,40820,426,30231,40818,30230</NRA>
<GR>300 000-00
&#10</GR>
EOSEQ
SED one-liner, as you wish :)
Using awk you could do the following:
awk '{printf ("%s,", $NF)}' test.txt ##Will have comma at the end which may/may not be ok for you.
You can use the following in sed:
sed -r ':loop ;N;s/(.*)\n(.*)/\1,\2/ ; t loop ' filename
Note that this joins every line of the input into one comma-separated line, not just the lines of a single block.
