Calculating sum of days in a LINQ query

The following is my DataTable:
ResourceName ContentName ProjectName PlanndStartDate PlannedEndDate ActualStartDate ActualEndDate DesiredDate
ANIL C-1479.doc HP_WI_4141 2/24/2014 2/25/2014 2/24/2014 2/24/2014 2/23/2014
ANIL C-1234.docx HP_WI_3131 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/11/2014
CHETNA C-1479.doc HP_WI_4141 2/24/2014 2/25/2014 2/24/2014 2/24/2014 2/26/2014
CHETNA C-1479.doc HP_WI_4141 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/10/2014
CHETNA C-14085.xlsx HP_WI_5151 2/14/2014 2/28/2014 2/14/2014 2/26/2014 2/26/2014
GAURAV YADAV C-1479.doc HP_WI_4141 2/24/2014 2/25/2014 2/24/2014 2/24/2014 2/25/2014
GAURAV YADAV C-1479.doc HP_WI_4141 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/15/2014
GAURAV YADAV C-14085.xlsx HP_WI_5151 2/14/2014 2/28/2014 2/14/2014 2/26/2014 2/28/2014
NITIN C-14077.pdf HP_WI_2121 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/13/2014
SRINIVAS C-14085.xlsx HP_WI_5151 2/14/2014 2/28/2014 2/14/2014 2/26/2014 2/25/2014
Now, using a LINQ query, I have to generate the following result.
ResourceName ContentName ProjectName PlanndStartDate PlannedEndDate ActualStartDate ActualEndDate DesiredDate TotalDays GroupDays
ANIL C-1479.doc HP_WI_4141 2/24/2014 2/25/2014 2/24/2014 2/24/2014 2/23/2014 -1 -1
ANIL C-1234.docx HP_WI_3131 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/11/2014 -2 -2
CHETNA C-1479.doc HP_WI_4141 2/24/2014 2/25/2014 2/24/2014 2/24/2014 2/26/2014 2 -1
CHETNA C-1479.doc HP_WI_4141 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/10/2014 -3 -1
CHETNA C-14085.xlsx HP_WI_5151 2/14/2014 2/28/2014 2/14/2014 2/26/2014 2/26/2014 0 0
GAURAV YADAV C-1479.doc HP_WI_4141 2/24/2014 2/25/2014 2/24/2014 2/24/2014 2/25/2014 1 3
GAURAV YADAV C-1479.doc HP_WI_4141 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/15/2014 2 3
GAURAV YADAV C-14085.xlsx HP_WI_5151 2/14/2014 2/28/2014 2/14/2014 2/26/2014 2/28/2014 2 2
NITIN C-14077.pdf HP_WI_2121 2/1/2014 2/12/2014 2/1/2014 2/13/2014 2/13/2014 0 0
SRINIVAS C-14085.xlsx HP_WI_5151 2/14/2014 2/28/2014 2/14/2014 2/26/2014 2/25/2014 -1 -1
Now, the result should be grouped by "ResourceName", "ContentName" and "ProjectName"; column "TotalDays" is the difference between "DesiredDate" and "ActualEndDate", and column "GroupDays" is the sum of "TotalDays" within each group.
Please suggest how to do it.
Thanks.

GroupBy and a loop:
var groups = from row in table.AsEnumerable()
             let ResourceName = row.Field<string>("ResourceName")
             let ContentName = row.Field<string>("ContentName")
             let ProjectName = row.Field<string>("ProjectName")
             group row by new { ResourceName, ContentName, ProjectName } into Group
             select Group;

// Clone copies only the schema, so the two extra columns can be added
var tblResult = table.Clone();
tblResult.Columns.Add("TotalDays", typeof(int));
tblResult.Columns.Add("GroupDays", typeof(int));

foreach (var group in groups)
{
    // GroupDays: sum of (DesiredDate - ActualEndDate) in days over the whole group
    int GroupDays = group.Sum(r => (r.Field<DateTime>("DesiredDate") - r.Field<DateTime>("ActualEndDate")).Days);
    foreach (DataRow row in group)
    {
        DateTime PlanndStartDate = row.Field<DateTime>("PlanndStartDate");
        DateTime PlannedEndDate = row.Field<DateTime>("PlannedEndDate");
        DateTime ActualStartDate = row.Field<DateTime>("ActualStartDate");
        DateTime ActualEndDate = row.Field<DateTime>("ActualEndDate");
        DateTime DesiredDate = row.Field<DateTime>("DesiredDate");
        // TotalDays: per-row difference between DesiredDate and ActualEndDate
        TimeSpan Total = DesiredDate - ActualEndDate;
        tblResult.Rows.Add(group.Key.ResourceName, group.Key.ContentName, group.Key.ProjectName,
            PlanndStartDate, PlannedEndDate, ActualStartDate, ActualEndDate, DesiredDate,
            Total.Days, GroupDays);
    }
}
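If you prefer to stay in a single query, the same per-group sum can be computed with a query continuation and the group flattened back into rows, so the explicit outer loop disappears. This is only a sketch of the same technique; it yields anonymous-type rows, which you would still copy into tblResult if a DataTable is required:

var result = from row in table.AsEnumerable()
             group row by new
             {
                 ResourceName = row.Field<string>("ResourceName"),
                 ContentName = row.Field<string>("ContentName"),
                 ProjectName = row.Field<string>("ProjectName")
             } into g
             // per-group sum, reused on every row of the group
             let groupDays = g.Sum(r => (r.Field<DateTime>("DesiredDate") - r.Field<DateTime>("ActualEndDate")).Days)
             from r in g
             select new
             {
                 g.Key.ResourceName,
                 g.Key.ContentName,
                 g.Key.ProjectName,
                 TotalDays = (r.Field<DateTime>("DesiredDate") - r.Field<DateTime>("ActualEndDate")).Days,
                 GroupDays = groupDays
             };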

Related

Spring Boot API to download an SVG file

I want to provide a simple Spring Boot API where I have an SVG file as a String. Whenever my GET API (/downloadsvg) is called, I should be able to download the SVG file.
I have tried the following:
• I formed an SVG string
• I tried converting the SVG string to an InputStream:
InputStream stream = new ByteArrayInputStream(AUSTRALIAN_BADGE.getBytes(StandardCharsets.UTF_8));
• I used produces = MediaType.APPLICATION_OCTET_STREAM_VALUE
Somehow I wasn't able to download it and ended up with an error. It would be great if someone could give some pointers.
My GET API call looks like this (don't mind the code standard; I want to improve it once I finish the logic):
@GetMapping(value = "/downloadsvg", produces = MediaType.APPLICATION_OCTET_STREAM_VALUE)
public ResponseEntity<InputStream> downloadSvgImage() {
final String AUSTRALIAN_BADGE= "<svg x=\"0\" y=\"0\" width=\"230\" height=\"84\" overflow=\"hidden\" preserveAspectRatio=\"xMidYMid\" xml:space=\"default\" viewbox=\"0 0 230 84\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\" xmlns:xml=\"http://www.w3.org/XML/1998/namespace\" version=\"1.1\">\n" +
" <rect x=\"1\" y=\"1\" width=\"228px\" height=\"82px\" rx=\"5\" ry=\"5\" stroke=\"#125430\" stroke-width=\"2\" xml:space=\"default\" style=\"fill:#FFFFFF;\" />\n" +
" <text id=\"FirstLineBadge\" xml:space=\"default\" x=\"66px\" y=\"18.18182px\" style=\"fill:Color [A=255, R=58, G=71, B=78];\" font-family=\"Fresh Sans Md\" font-size=\"14\">Made in Australia from</text>\n" +
" <text id=\"SecondLineBadge\" xml:space=\"default\" x=\"66px\" y=\"36.36364px\" style=\"fill:Color [A=255, R=58, G=71, B=78];\" font-family=\"Fresh Sans Md\" font-size=\"14\">at least 98% Australian</text>\n" +
" <text id=\"ThirdLineBadge\" xml:space=\"default\" x=\"66px\" y=\"54.54546px\" style=\"fill:Color [A=255, R=58, G=71, B=78];\" font-family=\"Fresh Sans Md\" font-size=\"14\">ingredients</text>\n" +
" <svg id=\"kangaroo\" xml:space=\"default\" width=\"48\" height=\"42\" viewbox=\"0 0 48 42\" x=\"10\" y=\"10\">\n" +
" <path d=\"M47.7 39.1 L25.6 1 C25.3 0.4 24.6 0 23.9 0 C23.2 0 22.6 0.4 22.2 1 L0.1999989 39.2 C0.1 39.5 0 39.8 0 40.1 C0 41.1 0.9 42 1.9 42 L46.2 42 C47.2 42 48.1 41.1 48.1 40.1 C48.1 39.8 48 39.4 47.7 39.1 L47.7 39.1 z\" xml:space=\"default\" class=\"logo\" style=\"fill:#F7A30A;\" />\n" +
" <path d=\"M47.7 39.1 L25.6 1 C25.3 0.4 24.6 0 23.9 0 C23.2 0 22.6 0.4 22.2 1 L6.7 28 C11.6 28.3 14 26 16.3 23.9 C18.3 22 20.5 20 24 20 C28.6 20 30.9 24.2 31 24.4 C31.3 25.1 31 25.8 30.4 26.1 C29.8 26.4 29 26.1 28.7 25.5 C28.6 25.3 26.8 22.5 24 22.5 C21.5 22.5 20 23.9 18.1 25.7 C14.8 28.9 10.2 30.4 5.700001 29.7 L0.3000007 39.1 C0.1 39.5 0 39.8 0 40.1 C0 41.1 0.9 42 1.9 42 L24.2 42 L21.3 37.8 C20.6 36.7 20.6 35.2 21.4 34.1 L24.7 28.9 C24.9 28.6 25.2 28.4 25.5 28.3 C25.8 28.2 26.2 28.3 26.5 28.5 C27.1 28.9 27.3 29.7 26.9 30.3 L23.6 35.6 C23.4 36 23.3 36.1 23.5 36.5 C24.1 37.5 27.1 41.9 27.2 42 L46.2 42 C47.2 42 48.1 41.1 48.1 40.1 C48.1 39.8 48 39.4 47.7 39.1 L47.7 39.1 z M38.8 28.6 C38.5 29 38 29.1 37.6 28.8 C37 28.5 36.3 28.3 35.6 28.3 C34.4 28.3 33.2 28.8 32.3 29.7 L27.8 35.3 C27.6 35.5 27.4 35.6 27.1 35.6 C26.9 35.6 26.6 35.5 26.5 35.3 C26.4 35.1 26.3 34.9 26.3 34.7 C26.3 34.5 26.4 34.4 26.4 34.2 L29.4 29.2 C30.5 27.2 32.5 25.9 34.8 25.8 L34 22.8 C33.9 22.4 34 22 34.3 21.8 C34.7 21.5 35.2 21.6 35.4 22 C35.4 22 35.4 22 35.4 22.1 L38.9 27.5 C39 27.9 39 28.3 38.8 28.6 z\" xml:space=\"default\" class=\"logo\" style=\"fill:#125430;\" />\n" +
" <path d=\"M35.4 22.2 C35.2 21.8 34.7 21.7 34.3 21.9 C34.3 21.9 34.3 21.9 34.2 21.9 C33.90001 22.1 33.8 22.5 33.90001 22.9 L34.7 25.9 C32.40001 26 30.3 27.3 29.3 29.3 L26.3 34.3 C26.2 34.5 26.2 34.6 26.2 34.8 C26.2 35 26.3 35.2 26.40001 35.4 C26.60001 35.6 26.8 35.7 27.00001 35.7 C27.3 35.7 27.50001 35.6 27.70001 35.4 L32.2 29.8 C33 28.9 34.2 28.3 35.5 28.4 C36.2 28.4 36.90001 28.6 37.5 28.9 C37.90001 29.1 38.40001 29 38.7 28.7 C39 28.4 39 27.9 38.8 27.6 L35.4 22.2 z\" xml:space=\"default\" class=\"logo\" style=\"fill:#F7A30A;\" />\n" +
" <path d=\"M6.6 28 L5.6 29.8 L5.6 29.8 L6.6 28 L6.6 28 z\" xml:space=\"default\" class=\"logo\" style=\"fill:#F7A30A;\" />\n" +
" <path d=\"M18 25.8 C19.9 24 21.4 22.6 23.9 22.6 C26.7 22.6 28.4 25.4 28.6 25.6 C28.9 26.2 29.7 26.5 30.4 26.1 C31 25.8 31.2 25 31 24.4 C30.9 24.2 28.6 20 24 20 C20.5 20 18.4 22 16.3 23.9 C14.1 26.1 11.7 28.3 6.699999 28 L5.699999 29.8 C10.2 30.5 14.7 29 18 25.8 z\" xml:space=\"default\" class=\"logo\" style=\"fill:#F7A30A;\" />\n" +
" <polygon xml:space=\"default\" class=\"logo\" points=\"24.2,42 27.1,42 27.1,42 24.2,42 \" style=\"fill:#F7A30A;\" />\n" +
" <path d=\"M23.5 35.6 L26.8 30.4 C27.2 29.8 27 29 26.4 28.6 C26.1 28.4 25.7 28.3 25.4 28.4 C25.1 28.5 24.8 28.7 24.6 29 L21.4 34.2 C20.6 35.3 20.6 36.7 21.3 37.9 L24.3 42.1 L27.2 42.1 C27.1 42 24.2 37.6 23.5 36.6 C23.2 36.1 23.3 36 23.5 35.6 z\" xml:space=\"default\" class=\"logo\" style=\"fill:#F7A30A;\" />\n" +
" </svg>\n" +
"</svg>";
HttpHeaders header = new HttpHeaders();
header.add(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=img.svg");
header.add("Cache-Control", "no-cache, no-store, must-revalidate");
header.add("Pragma", "no-cache");
header.add("Expires", "0");
InputStream stream = new ByteArrayInputStream(AUSTRALIAN_BADGE.getBytes(StandardCharsets.UTF_8));
return ResponseEntity.ok()
.headers(header)
.contentLength(AUSTRALIAN_BADGE.length())
.contentType(MediaType.APPLICATION_OCTET_STREAM)
.body(stream);
}
Just return the byte[] array with a proper header:
@GetMapping(value = "/downloadsvg")
public ResponseEntity<byte[]> downloadSvgImage() {
final String AUSTRALIAN_BADGE= "<svg x=\"0\" y=\"0
...
...</svg>";
HttpHeaders header = new HttpHeaders();
header.add("Content-Type","image/svg+xml");
return ResponseEntity.ok()
.headers(header)
.body(AUSTRALIAN_BADGE.getBytes());
}
The image I get when I try this code is:
https://i.stack.imgur.com/B5w0l.png
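If the goal is for the browser to actually download the file rather than render it inline, you can keep the Content-Disposition header from the original attempt alongside the corrected content type. A small variation on the answer above:

HttpHeaders header = new HttpHeaders();
// tell the client what the bytes are
header.add(HttpHeaders.CONTENT_TYPE, "image/svg+xml");
// ask the browser to save the response as img.svg instead of displaying it
header.add(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=img.svg");
return ResponseEntity.ok()
        .headers(header)
        .body(AUSTRALIAN_BADGE.getBytes(StandardCharsets.UTF_8));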

How do I sort 2 columns in a shell script?

I have data like this:
Jul29 16:52
Jul30 19:06
Jul31 17:04
Aug1 17:22
Aug2 18:53
Aug3 21:44
Aug4 22:56
Aug6 17:01
Aug8 02:19
Aug8 16:49
Aug9 16:37
Aug10 21:09
Aug12 05:24
Aug12 17:09
Aug14 16:39
Aug16 16:41
Aug4 22:56
Aug6 17:01
Aug8 02:19
Aug8 16:49
Aug9 16:37
Aug10 21:09
Aug12 05:24
Aug12 17:09
Aug14 16:39
Aug16 16:41
Aug4 22:56
Aug6 17:01
Aug8 02:19
Aug8 16:49
Aug9 16:37
Aug10 21:09
Aug16 20:24
Aug16 19:09
Aug16 18:39
Aug16 16:41
I want to take out the duplicates, sort by the first column, then maintain that order and sort by the second column. Like the following:
Jul01 11:00
Aug01 12:00
Aug02 12:40
Aug03 10:00
Aug03 11:00
Aug03 13:00
I have this code:
cat filename | awk '!a[$0]++'
This only sorts the first column and something random happens to the second column. Any ideas?
When I tried cat ming | sort -k1M -k1d -k2V, I get this:
Jul29 16:52
Jul30 19:06
Jul31 17:04
Aug10 21:09
Aug10 21:09
Aug10 21:09
Aug1 17:22
Aug12 05:24
Aug12 05:24
Aug12 17:09
Aug12 17:09
Aug14 16:39
Aug14 16:39
Aug16 16:41
Aug16 16:41
Aug16 16:41
Aug16 18:39
Aug16 19:09
Aug16 20:24
Aug2 18:53
Aug3 21:44
Aug4 22:56
Aug4 22:56
Aug4 22:56
Aug6 17:01
Aug6 17:01
Aug6 17:01
Aug8 02:19
Aug8 02:19
Aug8 02:19
Aug8 16:49
Aug8 16:49
Aug8 16:49
Aug9 16:37
Aug9 16:37
Aug9 16:37
sort -u -k1.1,1.3M -k1.4n -k2V filename
-u : delete duplicate lines
-k1.1,1.3M : sort each line from word 1, character 1 to word 1, character 3 in month mode
-k1.4n : sort each line from word 1, character 4 to the end of word 1 by numeric value
-k2V : sort the second word in "version number" mode, which works well for the timestamp
You can use the following:
sort -k1M -k1.4n -k2V abcss | uniq
Explanation:
-k1M : does a month sort on the 1st column
-k1.4n : does a numeric sort on the day part of the 1st column (character 4 onward) so the days come out in order
-k2V : does a version sort on the second column to get the timestamp right
The output will be:
Jul29 16:52
Jul30 19:06
Jul31 17:04
Aug1 17:22
Aug2 18:53
Aug3 21:44
Aug4 22:56
Aug6 17:01
Aug8 02:19
Aug8 16:49
Aug9 16:37
Aug10 21:09
Aug12 05:24
Aug12 17:09
Aug14 16:39
Aug16 16:41
Aug16 18:39
Aug16 19:09
Aug16 20:24

Method for initial guess of standard deviation of 2D Gaussian/Gabor?

I'm working on curve-fitting software in Matlab. So far it's going pretty well, but I need a method for producing an initial guess to feed into the fitter. I'm given a selection of points and need an initial guess for SDx and SDy, but I don't know how to find one. Is there anywhere I can learn a good approach to this? Thank you so much!
My data is a 32x32 matrix that looks something like the following:
-0.0027 -0.0034 -0.0034 0.0003 0.0018 0.0028 0.0058 0.0057 0.0008 -0.0053
-0.0023 -0.0008 -0.0007 0.0005 0.0015 0.0033 0.0062 0.0054 0.0029 -0.0029
-0.0018 0.0004 0.0014 0.0009 0.0006 0.0024 0.0047 0.0045 0.0041 0.0009
-0.0034 -0.0020 0.0022 0.0022 -0.0007 0.0003 0.0012 0.0024 0.0022 0.0015
-0.0053 -0.0042 -0.0004 0.0010 -0.0014 -0.0020 -0.0021 -0.0003 0.0002 -0.0014
-0.0070 -0.0034 -0.0008 0.0000 0.0004 0.0032 0.0011 0.0019 0.0026 0.0006
-0.0054 -0.0016 0.0005 0.0012 0.0000 0.0045 0.0033 0.0035 0.0039 0.0013
-0.0050 -0.0015 -0.0009 0.0001 0.0001 0.0013 -0.0022 -0.0010 0.0012 -0.0024
-0.0044 -0.0028 -0.0019 0.0016 0.0026 -0.0005 -0.0057 -0.0057 -0.0042 -0.0057
-0.0037 -0.0022 -0.0024 0.0003 0.0036 0.0002 -0.0045 -0.0055 -0.0039 -0.0032
-0.0045 -0.0012 -0.0016 -0.0016 0.0000 0.0003 -0.0018 -0.0014 0.0025 -0.0015
-0.0047 -0.0028 -0.0028 -0.0021 -0.0041 -0.0025 -0.0008 0.0011 0.0020 -0.0029
-0.0028 -0.0020 -0.0024 -0.0024 -0.0044 -0.0060 -0.0032 0.0009 0.0018 -0.0008
-0.0005 -0.0017 0.0007 0.0025 -0.0020 -0.0030 -0.0010 -0.0011 -0.0004 0.0014
-0.0011 -0.0006 -0.0001 0.0003 -0.0002 0.0012 0.0033 0.0010 -0.0025 -0.0001
-0.0032 -0.0008 0.0001 -0.0039 -0.0022 0.0003 0.0016 0.0016 -0.0009 -0.0008
-0.0060 -0.0019 -0.0005 -0.0033 -0.0039 -0.0032 -0.0018 -0.0004 -0.0012 -0.0004
-0.0077 -0.0049 -0.0039 -0.0039 -0.0049 -0.0044 -0.0039 -0.0047 -0.0034 -0.0031
-0.0054 -0.0026 -0.0030 -0.0046 -0.0071 -0.0048 -0.0028 -0.0051 -0.0046 -0.0042
-0.0049 0.0002 0.0009 -0.0017 -0.0041 -0.0031 -0.0018 -0.0024 -0.0029 -0.0015
-0.0032 -0.0007 0.0021 0.0012 -0.0006 -0.0013 -0.0008
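One common way to get a starting value is the method of moments: subtract the background so the values are non-negative, treat the image as a weight map, and take the weighted standard deviation along each axis. A minimal Matlab sketch, assuming Z is the 32x32 matrix and the blob shows up as a positive peak (for a Gabor patch you might use abs(Z) or its envelope instead):

% Z is the 32x32 data matrix
W = Z - min(Z(:));                             % shift so all weights are non-negative
W = W / sum(W(:));                             % normalise weights to sum to 1

[X, Y] = meshgrid(1:size(Z,2), 1:size(Z,1));   % pixel coordinates

mx = sum(W(:) .* X(:));                        % weighted centroid, x
my = sum(W(:) .* Y(:));                        % weighted centroid, y

SDx = sqrt(sum(W(:) .* (X(:) - mx).^2));       % weighted std along x
SDy = sqrt(sum(W(:) .* (Y(:) - my).^2));       % weighted std along y

These SDx/SDy values (with mx, my as the centre) are usually close enough for a least-squares fit to converge.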

Extract rows where value in one column equals value in another column +1?

I have a tab delimited text file that looks like this:
1 10019 10020 rs775809821
1 10055 10055 rs768019142
1 10107 10108 rs62651026
1 10108 10109 rs376007522
1 10128 10128 rs796688738
1 10138 10139 rs368469931
1 10144 10145 rs144773400
1 10146 10147 rs779258992
1 10149 10150 rs371194064
1 10165 10165 rs796884232
I want to extract the rows in which the value in column 3 is equal to the value in column 2 + 1 and direct them to a new file. So for the above example, the desired output would be:
1 10019 10020 rs775809821
1 10107 10108 rs62651026
1 10108 10109 rs376007522
1 10138 10139 rs368469931
1 10144 10145 rs144773400
1 10146 10147 rs779258992
1 10149 10150 rs371194064
I think this can be accomplished using awk, but I'm not sure where to start. Any input would be greatly appreciated.
awk evaluates the condition for every input line and, since no action block is given, prints the lines where it is true:
awk '$3 == $2 + 1' < input > output

Sort the numbers in multiple lines in Vim

I have a file formatted as such:
...
[ strNADPplus ]
3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457
3458 3459 3460 3461 3462 3463 3464 11153 11154 11155 11156 11157 11158 11159 11160
5255 5256 5257 5258 5259 5260 5261 5262 5263 5264 5265 5266 5267 5268 5269
5270 5271 5272 5273 5274 5275 5276 5277 12964 12965 12966 12967 12968 12969 12970
5360 13057 13058 13059 13060 13061 13062 13063 13064 13065 13066 13067 13068 13069 13070
5361 5362 5363 5364 5365 5366 5367 5368 5369 5370 5371 5372 5373 5374 5375
5400 5401 5402 5403 5404 5405 5406 5407 5408 5409 5410 5411 5412 5413 5414
5415 5416 5417 5418 5419 5420 5421 13110 13111 13112 13113 13114 13115 13116 13117
5464 5465 5466 5467 5468 5469 5470 5471 5472 5473 5474 5475 5476 5477 5478
5479 5480 5481 5482 5483 5484 5485 5486 13173 13174 13175 13176 13177 13178 13179
5860 5861 5862 5863 5864 5865 5866 5867 5868 5869 5870 13557 13558 13559 13560
5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 13683 13684 13685 13686
6021 6022 6023 6024 6025 6026 6027 6028 6029 13718 13719 13720 13721 13722 13723
6339 6340 6341 6342 6343 6344 6345 6346 6347 14044 14045 14046 14047 14048 14049
...
I want to sort the numbers in that block of lines to have something that looks like:
1 2 3 4
7 8 9 100
101 121 345
346 348 10232
16654 ...
I first tried with :4707,4743%sort n (4707 and 4743 are the lines of that block), but I was only able to sort the first values of each line.
I then tried to join the selection and sort the line: visual mode + J and :'<,'>sort n.
But it doesn't sort correctly.
3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 11153 11154 11155 11156 11157 11158 11159 11160 5255 5256 5257 5258 5259 5260 5261 5262 5263 5264 5265 5266 5267 5268 5269 5270 5271 5272 5273 5274 5275 5276 5277 12964 12965 12966 12967 12968 12969 12970 5360 13057 13058 13059 13060 13061 13062 13063 13064 13065 13066 13067 13068 13069 13070 5361 5362 5363 5364 5365 5366 5367 5368 5369 5370 5371 5372 5373 5374 5375 5400 5401 5402 5403 5404 5405 5406 5407 5408 5409 5410 5411 5412 5413 5414 5415 5416 5417 5418 5419 5420 5421 13110 13111 13112 13113 13114 13115 13116 13117 5464 5465 5466 5467 5468 5469 5470 5471 5472 5473 5474 5475 5476 5477 5478 5479 5480 5481 5482 5483 5484 5485 5486 13173 13174 13175 13176 13177 13178 13179 5860 5861 5862 5863 5864 5865 5866 5867 5868 5869 5870 13557 13558 13559 13560 5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 13683 13684 13685 13686 6021 6022 6023 6024 6025 6026 6027 6028 6029 13718 13719 13720 13721 13722 13723 6339 6340 6341 6342 6343 6344 6345 6346 6347 14044 14045 14046 14047 14048 14049 7421 7422 7423 7424 7425 7426 7427 7428 7429 7430 7431 7432 7433 7434 7435 7436 7437 15124 15125 15126 15127 15128 15129 15130 15131 15132 15133 15134 15135 15136 7502 7503 7504 7505 7506 7507 7508 7509 15208 15209 15210 15211 15212 15213 15214 7677 7678 7679 7680 7681 7682 7683 7684 7685 7686 7687 15377 15378 15379 15380 11161 11162 11163 11164 11165 11166 11167 11168 11169 11170 11171 11172 11173 11174 5254 12971 12972 12973 12974 12975 12976 12977 12978 12979 12980 12981 12982 12983 12984 12985 12986 12987 5347 5348 5349 5350 5351 5352 5353 5354 5355 5356 5357 5358 5359 13071 13072 13073 13074 13075 13076 13077 13078 13079 13080 13081 13082 13083 13084 13085 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 5463 13180 13181 13182 13183 13184 13185 13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 5847 5848 5849 5850 5851 5852 5853 5854 5855 5856 5857 5858 5859 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 5973 5974 5975 5976 5977 5978 5979 5980 5981 5982 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 13697 13698 13699 13700 13701 13702 13703 6008 6009 6010 6011 6012 6013 6014 6015 6016 6017 6018 6019 6020 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 6303 6304 6305 6306 6307 6308 6309 6310 6311 6312 6313 6314 14013 14014 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 6334 6335 6336 6337 6338 14050 14051 14052 14053 14054 14055 14056 14057 7414 7415 7416 7417 7418 7419 7420 15137 15138 15139 15140 15141 15142 15143 15144 15145 15146 15147 7498 7499 7500 7501 15215 15216 15217 15218 15219 7667 7668 7669 7670 7671 7672 7673 7674 7675 7676 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397
How do I sort everything and keep that layout?
I would simply use standard external unix tools:
:'<,'>!tr ' ' '\n' | sort -n | tr '\n' ' ' | fold -w 15 -s
This wraps the output at a width of 15 characters.
:'<,'>!tr ' ' '\n' | sort -n | paste -d' ' - - -
This wraps to 3 numbers per line.
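If you want to keep the original layout of roughly 15 numbers per line, the same filter idea works with xargs, whose default echo prints a fixed number of arguments per line:

:'<,'>!tr ' ' '\n' | sort -n | xargs -n15

This wraps to 15 numbers per line.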
