Expand range of numbers in file - bash

I have a file of comma-delimited integers, extracted from elsewhere. Some lines contain a range, as below:
Files 1,2,3,4,5,6,7,8,9,10 are OK
Users 1,2,3-9,10 have problems
Cars 1-5,5-10 are in the depot
Trains 1-10 are on time
Is there any way to expand the ranges in the text file so that each range is replaced by the individual numbers, with the comma delimiter preserved? The text on either side of the integers could be anything, and it must be preserved.
Files 1,2,3,4,5,6,7,8,9,10 are OK
Users 1,2,3,4,5,6,7,8,9,10 have problems
Cars 1,2,3,4,5,6,7,8,9,10 are in the depot
Trains 1,2,3,4,5,6,7,8,9,10 are on time
I guess this can be done relatively easily with awk or any other scripting language. Any help is very much appreciated.

You haven't tagged the question with perl, but I'd recommend it in this case:
perl -pe 's/(\d+)-(\d+)/join(",", $1..$2)/ge' file
This substitutes all occurrences of one or more digits, followed by a hyphen, followed by one or more digits. It uses the numbers it has captured to create a list from the first number to the second and joins the list on a comma.
The e modifier is needed here so that an expression can be evaluated in the replacement part of the substitution.
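Run against the sample input (saved here as file), the one-liner expands every range in place; note that the overlapping ranges on the Cars line leave a duplicate 5:
$ perl -pe 's/(\d+)-(\d+)/join(",", $1..$2)/ge' file
Files 1,2,3,4,5,6,7,8,9,10 are OK
Users 1,2,3,4,5,6,7,8,9,10 have problems
Cars 1,2,3,4,5,5,6,7,8,9,10 are in the depot
Trains 1,2,3,4,5,6,7,8,9,10 are on time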
To avoid repeated values and to sort the list, things get a little more complicated. At this point, I'd recommend using a script, rather than a one-liner:
use strict;
use warnings;

use List::MoreUtils qw(uniq);

while (<>) {
    s/(\d+)-(\d+)/join(",", $1..$2)/ge;
    if (/(.*\s)((\d+,)+\d+)(.*)/) {
        my @list = sort { $a <=> $b } uniq split(",", $2);
        $_ = $1 . join(",", @list) . $4 . "\n";
    }
} continue {
    print;
}
After expanding the ranges (as in the one-liner), I re-parse the line to extract the list of values. I use uniq from List::MoreUtils (a CPAN module) to remove any duplicates, and I sort the values numerically.
Call the script like perl script.pl file.
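With the sample input in file, the script prints the fully expanded, deduplicated and sorted lists:
$ perl script.pl file
Files 1,2,3,4,5,6,7,8,9,10 are OK
Users 1,2,3,4,5,6,7,8,9,10 have problems
Cars 1,2,3,4,5,6,7,8,9,10 are in the depot
Trains 1,2,3,4,5,6,7,8,9,10 are on time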

A solution using GNU awk (the four-argument split and asort are gawk extensions):
{
    result = "";
    delete numbers;
    count = split($0, fields, /[ ,-]+/, seps);
    for (i = 1; i <= count; i++) {
        if (fields[i] ~ /^[0-9]+$/) {
            if (seps[i] == ",") {
                numbers[fields[i]] = fields[i];
            } else if (seps[i] == "-") {
                # store every value of the range, including its start
                for (j = fields[i]; j <= fields[i+1]; j++) {
                    numbers[j] = j;
                }
            } else if (seps[i] == " ") {
                # a space ends the numeric run: sort and flush the collected numbers
                numbers[fields[i]] = fields[i];
                c = asort(numbers);
                for (r = 1; r < c; r++) {
                    result = result numbers[r] ",";
                }
                result = result numbers[c] " ";
            }
        } else {
            result = result fields[i] seps[i];
        }
    }
    print result;
}

$ cat tst.awk
match($0,/[0-9,-]+/) {
    split(substr($0,RSTART,RLENGTH),numsIn,/,/)
    numsOut = ""
    delete seen
    for (i=1; i in numsIn; i++) {
        n = split(numsIn[i],range,/-/)
        for (j=range[1]; j<=range[n]; j++) {
            if ( !seen[j]++ ) {
                numsOut = (numsOut=="" ? "" : numsOut ",") j
            }
        }
    }
    print substr($0,1,RSTART-1) numsOut substr($0,RSTART+RLENGTH)
}
$ awk -f tst.awk file
Files 1,2,3,4,5,6,7,8,9,10 are OK
Users 1,2,3,4,5,6,7,8,9,10 have problems
Cars 1,2,3,4,5,6,7,8,9,10 are in the depot
Trains 1,2,3,4,5,6,7,8,9,10 are on time

Another awk:
$ awk '{while(match($0, /[0-9]+-[0-9]+/)) {
           k=substr($0, RSTART, RLENGTH);
           split(k,a,"-");
           f=a[1];
           for(j=a[1]+1; j<=a[2]; j++) f=f","j;
           sub(k,f)
       }}1' file
Files 1,2,3,4,5,6,7,8,9,10 are OK
Users 1,2,3,4,5,6,7,8,9,10 have problems
Cars 1,2,3,4,5,5,6,7,8,9,10 are in the depot
Trains 1,2,3,4,5,6,7,8,9,10 are on time
Note that Cars 1-5,5-10 ends up with two 5 values when expanded, due to the overlapping ranges.
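If you want those duplicates collapsed as well, one small follow-up (a sketch, assuming GNU sed for \b) is to squeeze repeated adjacent values out of the expanded output; note it only handles duplicates that end up next to each other, and a run of three identical values would need a second pass:
$ echo 'Cars 1,2,3,4,5,5,6,7,8,9,10 are in the depot' | sed -E 's/\b([0-9]+),\1\b/\1/g'
Cars 1,2,3,4,5,6,7,8,9,10 are in the depot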

Related

I have a file in Unix with a data set as below. I want to generate more data like this, but with no duplicates. I am looking for Unix shell code. Below is a sample.

I want to generate more data based on some sample data I already have in a file stored in a Unix location. I am looking for Unix shell code.
ID,FN,LN,Gender
1,John,hopkins,M
2,Andrew,Singh,M
3,Ram,Lakshman,M
4,ABC,DEF,F
5,Virendra,Sehwag,F
6,Sachin,Tendulkar,F
You could use awk to read the existing data into an array and then keep printing it over and over with new IDs:
awk -F, -v OFS=, -v n=100 '
BEGIN {
    l = 0;
}
/^[0-9]/ {
    a[l] = $2 "," $3 "," $4;
    l++;
}
{ print }
END {
    for ( i = l + 1; i <= n; i++ ) {
        printf "%d,%s\n", i, a[i % l];
    }
}
'
n is the number of IDs you want (existing IDs + generated).
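For example, with the sample rows saved as sample.csv and the program body saved as generate.awk (both names assumed), n=10 reprints the file and then appends four generated rows, cycling through the stored names:
$ awk -F, -v OFS=, -v n=10 -f generate.awk sample.csv
ID,FN,LN,Gender
1,John,hopkins,M
2,Andrew,Singh,M
3,Ram,Lakshman,M
4,ABC,DEF,F
5,Virendra,Sehwag,F
6,Sachin,Tendulkar,F
7,Andrew,Singh,M
8,Ram,Lakshman,M
9,ABC,DEF,F
10,Virendra,Sehwag,F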

Compress ranges of numbers in bash

I have a csv file named "ranges.csv", which contains:
start_range,stop_range
9702220000,9702220999
9702222000,9702222999
9702223000,9702223999
9750000000,9750000999
9750001000,9750001999
9750002000,9750002999
I am trying to combine consecutive ranges, where one range's stop_range equals the next range's start_range - 1, and output the result in another csv file named "ranges2.csv". So the output will be:
9702220000,9702220999
9702222000,9702223999
9750000000,9750002999
Moreover, I need to know how many original ranges each compressed range contains (example: for the new range 9750000000,9750002999 I need to know that before the compression there were 3 ranges). This information will help me to create a new csv file named "ranges3.csv" which should contain only the range with the most ranges inside it (the most comprehensive area):
9750000000,9750002999
I was thinking about something like this:
if (stop_range = start_range-1)
new_stop_range = start_range-1
But I am new to bash scripting. I know how to output the results to another file, but the logic for this gives me headaches.
I think this does the trick:
#!/bin/bash
awk '
BEGIN { FS = OFS = "," }
NR == 2 {
    start = $1; stop = $2; i = 1
}
NR > 2 {
    if ($1 == stop + 1) {
        # this range continues the previous one
        i++
        stop = $2
    } else {
        # the run ended: remember it if it merged the most ranges so far
        if (i > max) {
            maxr = start OFS stop
            max = i
        }
        start = $1; stop = $2
        i = 1
    }
}
END {
    if (i > max) maxr = start OFS stop
    print maxr
}
' ranges.csv
Assuming your ranges are sorted, this code gives you just the merged ranges:
awk 'BEGIN{FS=OFS=","}
FNR==1 { next }
(b!="") && ($1!=e+1) { print b, e; b=e="" }
($1==e+1) { e=$2; next }
{ b=$1; e=$2 }
END { print b, e }' file
Below you get the same, but with the count of merged ranges as a third column:
awk 'BEGIN{FS=OFS=","}
FNR==1 { next }
(b!="") && ($1!=e+1) { print b, e, c; b=e=c="" }
($1==e+1) { e=$2; c++; next }
{ b=$1; e=$2; c=1 }
END { print b, e, c }' file
If you want the largest one, you can sort on the third column. I don't want to hard-code a rule that returns a single range with the most counts, as there might be multiple.
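For example, assuming the counting version above is saved as count_ranges.awk (name hypothetical), a sort-based selection could look like this; head -n 1 arbitrarily keeps a single range even if several share the maximum count:
awk -f count_ranges.awk ranges.csv | sort -t, -k3,3nr | head -n 1 | cut -d, -f1,2 > ranges3.csv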
If you really only want the ranges with the maximum merge count:
awk 'BEGIN{FS=OFS=","}
FNR==1 { next }
(b!="") && ($1!=e+1) {
    a[c] = a[c] (a[c] ? ORS : "") b OFS e
    m = (c > m ? c : m)
    b=e=c=""
}
($1==e+1) { e=$2; c++; next }
{ b=$1; e=$2; c=1 }
END {
    a[c] = a[c] (a[c] ? ORS : "") b OFS e
    m = (c > m ? c : m)
    print a[m]
}' file
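On the sample ranges.csv, this should print only the range built from the most merges:
9750000000,9750002999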

How do I make a list of missing integers from a sequence using bash

I have a file, let's say files_190911.csv, whose contents are as follows.
EDR_MPU023_09_20190911080534.csv.gz
EDR_MPU023_10_20190911081301.csv.gz
EDR_MPU023_11_20190911083544.csv.gz
EDR_MPU023_14_20190911091405.csv.gz
EDR_MPU023_15_20190911105513.csv.gz
EDR_MPU023_16_20190911105911.csv.gz
EDR_MPU024_50_20190911235332.csv.gz
EDR_MPU024_51_20190911235400.csv.gz
EDR_MPU024_52_20190911235501.csv.gz
EDR_MPU024_54_20190911235805.csv.gz
EDR_MPU024_55_20190911235937.csv.gz
EDR_MPU025_24_20190911000050.csv.gz
EDR_MPU025_25_20190911000155.csv.gz
EDR_MPU025_26_20190911000302.csv.gz
EDR_MPU025_29_20190911000624.csv.gz
I want to make a list of the missing sequence numbers using a bash script.
Every MPUXXX has its own sequence, so there are multiple series of sequences in that file.
The datetime for each missing file should be taken from the previous file in its sequence.
From the sample above, the result will be like this:
EDR_MPU023_12_20190911083544.csv.gz
EDR_MPU023_13_20190911083544.csv.gz
EDR_MPU024_53_20190911235501.csv.gz
EDR_MPU025_27_20190911000302.csv.gz
EDR_MPU025_28_20190911000302.csv.gz
It would be simpler if there were only a single sequence; then I could use something like this:
awk '{for(i=p+1; i<$1; i++) print i} {p=$1}'
But I know this can't be used for multiple sequences.
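To see what that snippet does, here is a quick demonstration on a toy single-sequence input:
$ printf '1\n2\n5\n7\n' | awk '{for(i=p+1; i<$1; i++) print i} {p=$1}'
3
4
6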
EDITED (Thanks @Cyrus!)
AWK is your friend:
#!/usr/bin/awk -f
BEGIN {
    # split on runs of non-digits: $2 = unit number, $3 = serial, $4 = timestamp
    FS = "[^0-9]*";
    last_seq = 0;
    next_serial = 0;
}
{
    cur_seq = $2;
    cur_serial = $3;
    if (cur_seq != last_seq) {
        # a new MPU unit starts its own sequence
        last_seq = cur_seq;
    } else if (cur_serial != next_serial) {
        # a gap: print the missing names with the previous file's timestamp
        # (serials below 10 would print without their leading zero)
        for (i = next_serial; i < cur_serial; i++) {
            print "EDR_MPU" last_seq "_" i "_" ts ".csv.gz"
        }
    }
    ts = $4;
    next_serial = cur_serial + 1;
}
And then you do:
$ < files_190911.csv awk -f script.awk
EDR_MPU023_12_20190911083544.csv.gz
EDR_MPU023_13_20190911083544.csv.gz
EDR_MPU024_53_20190911235501.csv.gz
EDR_MPU025_27_20190911000302.csv.gz
EDR_MPU025_28_20190911000302.csv.gz
The assignment to FS splits each line on runs of non-digit characters, so $2 is the unit number, $3 the serial, and $4 the timestamp. The rest of the program detects holes in each serial sequence and prints the missing names with the appropriate timestamp.
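To see how that FS value carves up a filename, here is a quick check of the three fields the script relies on (GNU awk):
$ echo 'EDR_MPU023_09_20190911080534.csv.gz' | awk 'BEGIN{FS="[^0-9]*"} {print $2, $3, $4}'
023 09 20190911080534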

How to transpose a list to a table in bash

I would like to transpose a list of items (key/value pairs) into a table format. The solution can be a bash script, awk, sed, or some other method.
Suppose I have a long list, such as this:
date and time: 2013-02-21 18:18 PM
file size: 1283483 bytes
key1: value
key2: value

date and time: 2013-02-21 18:19 PM
file size: 1283493 bytes
key2: value
...
I would like to transpose into a table format with tab or some other separator to look like this:
date and time file size key1 key2
2013-02-21 18:18 PM 1283483 bytes value value
2013-02-21 18:19 PM 1283493 bytes value
...
or like this:
date and time|file size|key1|key2
2013-02-21 18:18 PM|1283483 bytes|value|value
2013-02-21 18:19 PM|1283493 bytes||value
...
I have looked at solutions such as this An efficient way to transpose a file in Bash, but it seems like I have a different case here. The awk solution there works partially for me: it keeps outputting all the rows as one long list of columns, but I need the columns constrained to a unique set of keys.
awk -F': ' '
{
    for (i=1; i<=NF; i++) {
        a[NR,i] = $i
    }
}
NF>p { p = NF }
END {
    for (j=1; j<=p; j++) {
        str = a[1,j]
        for (i=2; i<=NR; i++) {
            str = str " " a[i,j]
        }
        print str
    }
}' filename
UPDATE
Thanks to all of you who provided solutions. Some of them look very promising, but I think my versions of the tools might be outdated and I am getting some syntax errors. What I see now is that I did not start off with very clear requirements. Kudos to sputnick for being the first to offer a solution before I spelled out the full requirements. I had had a long day when I wrote the question, and it was not very clear.
My goal is to come up with a very generic solution for parsing multiple lists of items into a column format. I am thinking the solution does not need to support more than 255 columns. Column names are not known ahead of time; this way the solution will work for anyone, not just me. The two known things are the separator between key/value pairs (": ") and the separator between lists (an empty line). It would be nice to have variables for those, so they are configurable for others to reuse.
From looking at the proposed solutions, I realize that a good approach is to make two passes over the input file: a first pass to gather all the column names, optionally sort them, and print the header, and a second pass to grab the values of the columns and print them.
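Here is a minimal sketch of that two-pass approach (GNU awk assumed, for delete on a whole array and length() on an array; the kvsep and ofs variable names are mine, and columns come out in first-seen rather than sorted order):
awk -v kvsep=': ' -v ofs='|' '
BEGIN { FS = kvsep }
NR == FNR {
    # pass 1: collect unique column names in first-seen order
    if (NF >= 2 && !($1 in seen)) { seen[$1]; hdr[++n] = $1 }
    next
}
FNR == 1 {
    # start of pass 2: print the header line
    for (i = 1; i <= n; i++) printf "%s%s", hdr[i], (i < n ? ofs : "\n")
}
/^[[:space:]]*$/ {
    # a blank line ends a record: print it and reset
    for (i = 1; i <= n; i++) printf "%s%s", row[hdr[i]], (i < n ? ofs : "\n")
    delete row
    next
}
{
    # key from $1, value from everything after the first separator
    row[$1] = substr($0, index($0, kvsep) + length(kvsep))
}
END {
    # flush the last record if the file does not end with a blank line
    if (length(row) > 0)
        for (i = 1; i <= n; i++) printf "%s%s", row[hdr[i]], (i < n ? ofs : "\n")
}
' file file
The input file is named twice so awk reads it once per pass; records are assumed to be separated by blank lines, as in the samples above.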
Here's one way using GNU awk. Run like:
awk -f script.awk file
Contents of script.awk:
BEGIN {
    # change this to OFS="\t" for tab-delimited output
    OFS="|"
    # treat each blank-line-separated record as a set of lines
    RS=""
    FS="\n"
}
{
    # keep a count of the records
    ++i
    # loop through each line in the record
    for (j=1; j<=NF; j++) {
        # split each line in two
        split($j,a,": ")
        # just holders for the first two lines in the record
        if (j==1) { date = a[1] }
        if (j==2) { size = a[1] }
        # keep a tally of the unique key names
        if (j>=3) { !x[a[1]] }
        # store the data in a multidimensional array:
        # record number . key = value
        b[i][a[1]]=a[2]
    }
}
END {
    # sort the unique keys
    m = asorti(x,y)
    # add the two header strings to a numerically indexed array
    c[1] = date
    c[2] = size
    # set a variable to continue from
    f=2
    # loop through the sorted array of unique keys
    for (j=1; j<=m; j++) {
        # build the header line by adding the sorted keys
        r = (r ? r : date OFS size) OFS y[j]
        # continue to add the sorted keys to the numerically indexed array
        c[++f] = y[j]
    }
    # print the header, then empty the buffer
    print r
    r = ""
    # loop through the records ('i' is the number of records)
    for (j=1; j<=i; j++) {
        # loop through the columns ('f' is the number of columns)
        for (k=1; k<=f; k++) {
            # build the output line
            r = (r ? r OFS : "") b[j][c[k]]
        }
        # print and empty it, ready for the next record
        print r
        r = ""
    }
}
Here's the contents of a test file, called file:
date and time: 2013-02-21 18:18 PM
file size: 1283483 bytes
key1: value1
key2: value2

date and time: 2013-02-21 18:19 PM
file size: 1283493 bytes
key2: value2
key1: value1
key3: value3

date and time: 2013-02-21 18:20 PM
file size: 1283494 bytes
key3: value3
key4: value4

date and time: 2013-02-21 18:21 PM
file size: 1283495 bytes
key5: value5
key6: value6
Results:
date and time|file size|key1|key2|key3|key4|key5|key6
2013-02-21 18:18 PM|1283483 bytes|value1|value2||||
2013-02-21 18:19 PM|1283493 bytes|value1|value2|value3|||
2013-02-21 18:20 PM|1283494 bytes|||value3|value4||
2013-02-21 18:21 PM|1283495 bytes|||||value5|value6
Here's a pure awk solution:
# split lines on ": " and use "|" for output field separator
BEGIN { FS = ": "; i = 0; h = 0; ofs = "|" }

# empty line - increment item count and skip it
/^\s*$/ { i++; next }

# normal line - add the item to the object and the header to the header list,
# keeping track of the first-seen order of headers
{
    current[i, $1] = $2
    if (!($1 in headers)) { headers_ordered[h++] = $1 }
    headers[$1]
}

END {
    h--
    # print headers
    for (k = 0; k <= h; k++) {
        printf "%s", headers_ordered[k]
        if (k != h) { printf "%s", ofs }
    }
    print ""
    # print the items for each object
    for (j = 0; j <= i; j++) {
        for (k = 0; k <= h; k++) {
            printf "%s", current[j, headers_ordered[k]]
            if (k != h) { printf "%s", ofs }
        }
        print ""
    }
}
Example input (note that there should be a newline after the last item):
foo: bar
foo2: bar2
foo1: bar

foo: bar3
foo3: bar3
foo2: bar3
Example output:
foo|foo2|foo1|foo3
bar|bar2|bar|
bar3|bar3||bar3
Note: you will probably need to alter this if your data has ": " embedded in it.
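For example, one way around that (a sketch) is to take the key from $1 as before, but recover the value from the raw line, splitting only on the first occurrence of the separator:
$ echo 'note: contains: colons' | awk -F': ' '{ print substr($0, index($0, FS) + length(FS)) }'
contains: colons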
This does not make any assumptions about the column structure, so it does not try to order them; however, all fields are printed in the same order for all records:
use strict;
use warnings;

my (@db, %f, %fields);
my $counter = 1;

while (<>) {
    my ($field, $value) = (/([^:]*):\s*(.*)\s*$/);
    if (not defined $field) {
        # a non-matching (blank) line ends the current record
        push @db, { %f };
        %f = ();
    } else {
        $f{$field} = $value;
        $fields{$field} = $counter++ if not defined $fields{$field};
    }
}
push @db, \%f;

#my @fields = sort keys %fields; # alphabetical order
my @fields = sort { $fields{$a} <=> $fields{$b} } keys %fields; # first-seen order

# print header
print join("|", @fields), "\n";

# print rows
for my $row (@db) {
    print join("|", map { defined $row->{$_} ? $row->{$_} : "" } @fields), "\n";
}
Using perl:
use strict;
use warnings;

# read the file paragraph by paragraph
$/ = "\n\n";

print "date and time|file size|key1|key2\n";

# parse each record with the magic diamond operator
while (<>) {
    if (/^date and time:\s+(.*)/m) {
        print "$1|";
    }
    if (/^file size:(.*)/m) {
        print "$1|";
    }
    if (/^key1:(.*)/m) {
        print "$1|";
    }
    else {
        print "|";
    }
    if (/^key2:(.*)/m) {
        print "$1\n";
    }
    else {
        print "\n";
    }
}
Usage
perl script.pl file
Output
date and time|file size|key1|key2
2013-02-21 18:18 PM| 1283483 bytes| value| value
2013-02-21 18:19 PM| 1283493 bytes|| value
Example, using xargs and column to reflow a plain list into aligned columns:
> ls -aFd * | xargs -L 5 echo | column -t
bras.tcl# Bras.tpkg/ CctCc.tcl# Cct.cfg consider.tcl#
cvsknown.tcl# docs/ evalCmds.tcl# export/ exported.tcl#
IBras.tcl# lastMinuteRule.tcl# main.tcl# Makefile Makefile.am
Makefile.in makeRule.tcl# predicates.tcl# project.cct sourceDeps.tcl#
tclIndex

How to remove duplicate entries from a file using shell

I have a file that is in the format:
0000000540|Q1.1|margi|Q1.1|margi|Q1.1|margi
0099940598|Q1.2|8888|Q1.3|5454|Q1.2|8888
0000234223|Q2.10|saigon|Q3.9|tango|Q1.1|money
I am trying to remove the duplicates that appear on the same line.
So, if a line has
0000000540|Q1.1|margi|Q1.1|margi|Q1.1|margi
I'd like it to be
0000000540|Q1.1|margi
If the line has
0099940598|Q1.2|8888|Q1.3|5454|Q1.2|8888
I'd like it to be
0099940598|Q1.2|8888|Q1.3|5454
I would like to do this in a shell script that takes an input file and outputs the file without the duplicates.
Thanks in advance to anyone who can help.
This should do it but may not be efficient for large files.
awk '
{
    # start each line with a fresh set of seen fields
    delete p;
    n = split($0, a, "|");
    # always print the first field (the ID)
    printf("%s", a[1]);
    for (i = 2; i <= n; i++) {
        # print a field only the first time it appears on the line
        if (!(a[i] in p)) {
            printf("|%s", a[i]);
            p[a[i]] = "";
        }
    }
    printf "\n";
}
' YourFileName
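Run against the three sample lines above (saved as YourFileName), it produces:
0000000540|Q1.1|margi
0099940598|Q1.2|8888|Q1.3|5454
0000234223|Q2.10|saigon|Q3.9|tango|Q1.1|money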
