I am using Google Sheets to look at some GPS speed and distance data, and I want to sum up all of the distance covered above 75% of max speed. Column D has 75% of max speed. Columns Q-X have distance at 2 m/s, 3 m/s, 4 m/s, etc. Other than a massively nested IF statement, is there an easier way to do this?
Otherwise I am thinking it will look something like:
=IF(D3>9,SUM(Q3:X3),IF(D3>8,SUM(Q3:W3),IF(D3>7,SUM(Q3:V3),IF(D3>6,SUM(Q3:U3),IF(D3>5,SUM(Q3:T3),IF(D3>4,SUM(Q3:S3),IF(D3>3,SUM(Q3:R3),IF(D3>2,Q3,0))))))))
Here is a link to the data: https://docs.google.com/spreadsheets/d/1eRCv4paCEAAufegmzbPCchIErteA-CYYmXp5lNizpO0/edit?usp=sharing
I also need to be able to match the Name column A and the Date in D1.
It seems like there should be a simpler way, but I don't know how to do it.
B4:
=INDEX(SORTN(SORT({data!F2:F, data!A2:A}, data!F2:F, 0), 9^9, 2, 2, 1),,1)
C4:
=INDEX(SORTN(SORT({(data!F2:F*0.75), data!A2:A}, data!F2:F, 0), 9^9, 2, 2, 1),,1)
D4:
=INDEX(SORTN(SORT({(data!F2:F*0.75)/2.237, data!A2:A}, data!F2:F, 0), 9^9, 2, 2, 1),,1)
E4 for all time top:
=ARRAYFORMULA(IF(A4:A="",,MMULT(IFERROR(VLOOKUP(A4:A,
SORTN(SORT({data!A2:A, data!Q2:X}, data!F2:F, 0), 9^9, 2, 1, 1),
IF(INDEX(SORTN(SORT({(data!F2:F*0.75)/2.237, data!A2:A}, data!F2:F, 0), 9^9, 2, 2, 1),,1)>
SEQUENCE(1, 8)+1, SEQUENCE(1, 8)+1, 0), 0), 0), SEQUENCE(8)^0)))
or E4 for date selected top:
=ARRAYFORMULA(IF(A4:A="",,MMULT(IFERROR(VLOOKUP(A4:A,
SORTN(SORT(FILTER({data!A2:A, data!Q2:X}, data!B2:B=TEXT(D1, "mm/dd/yyyy")),
FILTER(data!F2:F, data!B2:B=TEXT(D1, "mm/dd/yyyy")), 0), 9^9, 2, 1, 1),
IF(INDEX(SORTN(SORT({(data!F2:F*0.75)/2.237, data!A2:A}, data!F2:F, 0), 9^9, 2, 2, 1),,1)>
SEQUENCE(1, 8)+1, SEQUENCE(1, 8)+1, 0), 0), 0), SEQUENCE(8)^0)))
demo sheet
The first part of your question should just be a sumif (although you have to extract the numbers from your headers in the data sheet):
=ArrayFormula(sumif(--regexextract(data!Q1:W1,"\d"),"<"&D4,data!Q2:W2))
You could use index/match to get the correct row from the data tab (here demonstrated using sumifs):
=ArrayFormula(sumifs(index(data!Q2:W,match(A4&$D$1,data!A2:A&data!B2:B,0),0),--regexextract(data!Q1:W1,"\d"),"<"&D4))
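Outside of Sheets, the same SUMIF logic is easy to sanity-check in plain Python (the bin speeds and distances below are made-up sample values, not taken from the linked sheet):

```python
def sum_below_threshold(bin_speeds, distances, threshold):
    """Mirror of SUMIF(bins, "<" & threshold, distances): sum the
    distance of every speed bin strictly below the threshold."""
    return sum(d for s, d in zip(bin_speeds, distances) if s < threshold)

# Hypothetical row: speed bins for columns Q:W and one row of distances.
bins = [2, 3, 4, 5, 6, 7, 8]
row = [120, 95, 60, 41, 20, 8, 3]
print(sum_below_threshold(bins, row, 6.5))  # 120+95+60+41+20 = 336
```

One caveat on the formula itself: `REGEXEXTRACT(..., "\d")` grabs only the first digit of each header, so it would misread a header like "10 m/s"; `"\d+"` is safer if the bins ever reach double digits.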
I have a series of ordered geometries (lines) of type:
MDSYS.SDO_GEOMETRY(4402, 4326, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1), MDSYS.SDO_ORDINATE_ARRAY(-87.5652173103127, 41.6985300456929, 0, 510.1408, -87.5652362658404, 41.6985530209061, 0, 510.14287, -87.5652682628194, 41.6985911197852, 0, 510.14632, ...)
I would like to join these into a "single" line of the same type, with all the vertices merged into one line, i.e. another geometry (line) of type:
MDSYS.SDO_GEOMETRY(4402, 4326, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1), MDSYS.SDO_ORDINATE_ARRAY(-87.5652173103127, 41.6985300456929, 0, 510.1408, -87.5652362658404, 41.6985530209061, 0, 510.14287, -87.5652682628194, 41.6985911197852, 0, 510.14632, ...)
Tried:
SDO_UTIL.APPEND to incrementally join pairs of lines, but this resulted in a "multipart" polyline, not a "single" polyline, i.e.:
MDSYS.SDO_GEOMETRY(4406, 4326, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1, 241, 2, 1, 377, 2, 1, 465, 2, 1, 733, 2, 1, 865, 2, 1, 1365, 2, 1), MDSYS.SDO_ORDINATE_ARRAY(-89.7856903197518,...)
The same issue occurs with SDO_AGGR_LRS_CONCAT.
SDO_UTIL.CONCAT_LINES came closest, producing a single line, but it seems some of the vertices in the SDO_ORDINATE_ARRAY were not correct...
Either there must be another function that does this easily, or perhaps I was not using one of the above correctly... or perhaps I have to write a custom function to go into each line's SDO_ORDINATE_ARRAY and join those individually (?).
I am new to Oracle Spatial (spatial queries of any type) and the documentation out there seems sparse. Any input would be appreciated.
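If it does come down to a custom function, the idea of joining each line's SDO_ORDINATE_ARRAY individually might look like the sketch below. This is Python used only to illustrate the logic (a real version would live in PL/SQL); the function name, tolerance, and the assumptions of 4 ordinates per vertex and consecutive lines sharing an endpoint are all mine:

```python
def merge_ordinates(lines, dim=4, tol=1e-9):
    """Concatenate the ordinate arrays of consecutive lines into one flat
    vertex list, dropping the duplicated shared endpoint at each join."""
    merged = list(lines[0])
    for line in lines[1:]:
        # If this line starts where the merged line ends, skip the
        # repeated vertex so the result stays a single simple polyline.
        if all(abs(a - b) <= tol for a, b in zip(merged[-dim:], line[:dim])):
            merged.extend(line[dim:])
        else:
            merged.extend(line)
    return merged

# Two tiny 4D (x, y, z, measure) segments sharing the vertex (1, 1, 0, 1.5):
a = [0.0, 0.0, 0, 0.0, 1.0, 1.0, 0, 1.5]
b = [1.0, 1.0, 0, 1.5, 2.0, 2.0, 0, 3.0]
print(merge_ordinates([a, b]))
```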
Given two integers n and r, I want to generate all possible combinations with the following rules:
There are n distinct numbers to choose from, 1, 2, ..., n;
Each combination should have r elements;
A combination may contain the same element more than once; for instance, (1,2,2) is valid;
Order matters, i.e. (1,2,3) and (1,3,2) are considered distinct;
However, two combinations are considered equivalent if one is a cyclic permutation of the other; for instance, (1,2,3) and (2,3,1) are considered duplicates.
Examples:
n=3, r=3
11 distinct combinations
(1,1,1), (1,1,2), (1,1,3), (1,2,2), (1,2,3), (1,3,2), (1,3,3), (2,2,2), (2,2,3), (2,3,3) and (3,3,3)
n=2, r=4
6 distinct combinations
(1,1,1,1), (1,1,1,2), (1,1,2,2), (1,2,1,2), (1,2,2,2), (2,2,2,2)
What is an algorithm for this, and how can it be implemented in C++?
Thank you in advance for any advice.
Here is a naive solution in Python:
Generate all combinations from the Cartesian product of {1, 2, ..., n} with itself r times;
Only keep one representative combination for each equivalence class; drop all other combinations that are equivalent to this representative.
This means we must have some way to compare combinations; for instance, we can keep only the smallest combination (in lexicographic order) of every equivalence class.
from itertools import product
def is_representative(comb):
    return all(comb[i:] + comb[:i] >= comb
               for i in range(1, len(comb)))

def cartesian_product_up_to_cyclic_permutations(n, r):
    return filter(is_representative,
                  product(range(n), repeat=r))
print(list(cartesian_product_up_to_cyclic_permutations(3, 3)))
# [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 1), (0, 1, 2), (0, 2, 1), (0, 2, 2), (1, 1, 1), (1, 1, 2), (1, 2, 2), (2, 2, 2)]
print(list(cartesian_product_up_to_cyclic_permutations(2, 4)))
# [(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 1), (0, 1, 0, 1), (0, 1, 1, 1), (1, 1, 1, 1)]
You mentioned that you wanted to implement the algorithm in C++. The product function in the Python code behaves just like a big for-loop that generates all the combinations of the Cartesian product. See this related question for implementing the Cartesian product in C++: Is it possible to execute n number of nested "loops(any)" where n is given?
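To make the translation concrete, here is a sketch of that "big for-loop" as an explicit odometer over an index array, which carries over to C++ almost line for line (the helper name is mine):

```python
def cartesian_product(n, r):
    """Enumerate {0, ..., n-1}^r without itertools, odometer-style:
    increment the rightmost digit and carry left on overflow."""
    comb = [0] * r
    while True:
        yield tuple(comb)
        i = r - 1
        while i >= 0 and comb[i] == n - 1:
            comb[i] = 0  # digit overflows: reset and carry left
            i -= 1
        if i < 0:
            return  # odometer rolled over: all tuples emitted
        comb[i] += 1

print(list(cartesian_product(2, 2)))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Filtering each generated tuple with the same smallest-rotation test as is_representative then yields the combinations up to cyclic permutation.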
I generated a large file of paths via ECLiPSe. Each line contains a clause for a list of 27 points.
$ wc -l snake_points.pl
240917 snake_points.pl
$ ls -lh snake_points.pl
-rw-rw-r-- 1 carl carl 72M Sep 6 02:39 snake_points.pl
$ head -n 1 snake_points.pl
snake_points([(2, 0, 0), (2, 0, 1), (2, 0, 2), (2, 1, 2), (2, 1, 1), (2, 1, 0), (2, 2, 0), (2, 2, 1), (2, 2, 2), (1, 2, 2), (0, 2, 2), (0, 1, 2), (0, 0, 2), (0, 0, 1), (0, 1, 1), (0, 1, 0), (0, 2, 0), (0, 2, 1), (1, 2, 1), (1, 2, 0), (1, 1, 0), (1, 1, 1), (1, 1, 2), (1, 0, 2), (1, 0, 1), (1, 0, 0), (0, 0, 0)]).
However, I am unable to load the file into memory (even with 8G of heap):
$ time eclipse -f snake_points.ecl -e 'halt.'
*** Overflow of the global/trail stack in spite of garbage collection!
You can use the "-g kBytes" (GLOBALSIZE) option to have a larger stack.
Peak sizes were: global stack 8388576 kbytes, trail stack 59904 kbytes
________________________________________________________
Executed in 128.05 secs fish external
usr time 122.92 secs 297.00 micros 122.92 secs
sys time 5.01 secs 37.00 micros 5.01 secs
Compare this to swipl:
$ time swipl -f snake_points.pl -g 'halt.'
________________________________________________________
Executed in 53.56 secs fish external
usr time 53.27 secs 272.00 micros 53.27 secs
sys time 0.28 secs 41.00 micros 0.28 secs
Neither is impressive, but I'd expect ECLiPSe to complete with a reasonable amount of memory.
Is this expected behavior? What can be done?
I understand the solution may be "use a database" or EXDR, but shouldn't this be doable efficiently?
The problem is that you are not only reading the data, you are trying to compile it as a single predicate with 240917 clauses, and the compiler is indeed not built for this kind of usage.
You can instead read and assert the clauses from the data file one-by-one, like this:
assert_from_file(File) :-
    open(File, read, S),
    repeat,
    read(S, Term),
    ( Term == end_of_file ->
        true
    ;
        assert(Term),
        fail
    ),
    !, close(S).
This loads your data in finite time
?- assert_from_file("snake_points.pl").
Yes (19.38s cpu)
and you can then call the resulting predicate as expected
?- snake_points(X).
X = [(2, 0, 0), (2, 0, 1), (2, 0, 2), (2, 1, 2), (2, 1, 1), (2, 1, 0), (2, 2, 0), (2, 2, 1), (2, 2, 2), (1, 2, 2), (0, 2, 2), (0, 1, 2), (0, 0, 2), (0, 0, 1), (0, 1, 1), (0, 1, 0), (0, 2, 0), (0, ..., ...), (..., ...), ...]
Yes (0.04s cpu, solution 1, maybe more) ? ;
But whatever problem you are trying to solve, this doesn't look like the most promising approach...
Did you try changing the point representation? Using ','/2 to represent tuples with more than two elements per tuple is a bad idea. Note that:
[eclipse 30]: write_canonical((1,2,3)).
','(1, ','(2, 3))
Yes (0.00s cpu)
Try instead e.g. p(1,2,3). This should lower the memory requirements a bit and may make a difference.
I'm quite new to Power Query. I have a column for the date, called MyDate, with format (dd/mm/yy), and another column called TotalSales. Is there any way of obtaining a column TotalSalesYTD, with the year-to-date sum of TotalSales for each row? I've seen you can do that in Power Pivot or Power BI, but I didn't find anything for Power Query.
Alternatively, is there a way of creating a variable TotalSales12M, for the rolling sum of the last 12 months of TotalSales?
I wasn't able to test this properly, but the following code gave me your expected result:
let
initialTable = Table.FromRows({
{#date(2020, 5, 1), 150},
{#date(2020, 4, 1), 20},
{#date(2020, 3, 1), 54},
{#date(2020, 2, 1), 84},
{#date(2020, 1, 1), 564},
{#date(2019, 12, 1), 54},
{#date(2019, 11, 1), 678},
{#date(2019, 10, 1), 885},
{#date(2019, 9, 1), 54},
{#date(2019, 8, 1), 98},
{#date(2019, 7, 1), 654},
{#date(2019, 6, 1), 45},
{#date(2019, 5, 1), 64},
{#date(2019, 4, 1), 68},
{#date(2019, 3, 1), 52},
{#date(2019, 2, 1), 549},
{#date(2019, 1, 1), 463},
{#date(2018, 12, 1), 65},
{#date(2018, 11, 1), 45},
{#date(2018, 10, 1), 68},
{#date(2018, 9, 1), 65},
{#date(2018, 8, 1), 564},
{#date(2018, 7, 1), 16},
{#date(2018, 6, 1), 469},
{#date(2018, 5, 1), 4}
}, type table [MyDate = date, TotalSales = Int64.Type]),
ListCumulativeSum = (numbers as list) as list =>
let
accumulator = (listState as list, toAdd as nullable number) as list =>
let
previousTotal = List.Last(listState, 0),
combined = listState & {List.Sum({previousTotal, toAdd})}
in combined,
accumulated = List.Accumulate(numbers, {}, accumulator)
in accumulated,
TableCumulativeSum = (someTable as table, columnToSum as text, newColumnName as text) as table =>
let
values = Table.Column(someTable, columnToSum),
cumulative = ListCumulativeSum(values),
columns = Table.ToColumns(someTable) & {cumulative},
toTable = Table.FromColumns(columns, Table.ColumnNames(someTable) & {newColumnName})
in toTable,
yearToDateColumn =
let
groupKey = Table.AddColumn(initialTable, "$groupKey", each Date.Year([MyDate]), Int64.Type),
grouped = Table.Group(groupKey, "$groupKey", {"toCombine", each
let
sorted = Table.Sort(_, {"MyDate", Order.Ascending}),
cumulative = TableCumulativeSum(sorted, "TotalSales", "TotalSalesYTD")
in cumulative
}),
combined = Table.Combine(grouped[toCombine]),
removeGroupKey = Table.RemoveColumns(combined, "$groupKey")
in removeGroupKey,
rolling = Table.AddColumn(yearToDateColumn, "TotalSales12M", each
let
inclusiveEnd = [MyDate],
exclusiveStart = Date.AddMonths(inclusiveEnd, -12),
filtered = Table.SelectRows(yearToDateColumn, each [MyDate] > exclusiveStart and [MyDate] <= inclusiveEnd),
sum = List.Sum(filtered[TotalSales])
in sum
),
sortedRows = Table.Sort(rolling, {{"MyDate", Order.Descending}})
in
sortedRows
There might be more efficient ways to do what this code does, but if the size of your data is relatively small, then this approach should be okay.
For the year to date cumulative, the data is grouped by year, then sorted ascendingly, then a running total column is added.
For the rolling 12-month total, the data is grouped into 12-month windows and then the sales are totaled within each window. The totaling is a bit inefficient (since all rows are re-processed as opposed to only those which have entered/left the window), but you might not notice it.
Table.Range could have been used instead of Table.SelectRows when creating the 12-month windows, but I figured Table.SelectRows makes fewer assumptions about the input data (i.e. whether it's sorted, whether any months are missing, etc.) and is therefore safer/more robust.
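For what it's worth, the two calculations can be cross-checked in plain Python (not M); the sample rows below are a subset of the table in the query above:

```python
from datetime import date

rows = [  # (MyDate, TotalSales), a few rows from the sample table
    (date(2020, 2, 1), 84),
    (date(2020, 1, 1), 564),
    (date(2019, 12, 1), 54),
    (date(2019, 11, 1), 678),
]

def ytd(rows):
    """Running total within each calendar year, ascending by date."""
    out, totals = {}, {}
    for d, v in sorted(rows):
        totals[d.year] = totals.get(d.year, 0) + v
        out[d] = totals[d.year]
    return out

def rolling_12m(rows, end):
    """Sum of sales in the 12 months ending at (and including) `end`.
    First-of-month dates assumed, so the year subtraction is safe."""
    exclusive_start = date(end.year - 1, end.month, end.day)
    return sum(v for d, v in rows if exclusive_start < d <= end)

print(ytd(rows)[date(2020, 2, 1)])          # 564 + 84 = 648
print(rolling_12m(rows, date(2020, 2, 1)))  # 84 + 564 + 54 + 678 = 1380
```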
A coworker came to me with an interesting problem, a practical one having to do with a "new people in town" group she's a part of.
18 friends want to have dinner in groups for each of the next 4 days. The rules are as follows:
Each day the group will split into 4 groups of 4, and a group of 2.
Any given pair of people will only see each other at most once over the course of the 4 days.
Any given person will only be part of the size 2 group at most once.
A brute force recursive search for a valid set of group assignments is obviously impractical. I've thrown in some simple logic for pruning parts of the tree as soon as possible, but not enough to make it practical.
Actually, I'm starting to suspect that it might be impossible to follow all the rules, but I can't come up with a combinatorial argument for why that would be.
Any thoughts?
16 friends can be scheduled 4x4 for 4 nights using two mutually orthogonal latin squares of order 4. Assign each friend to a distinct position in the 4x4 grid. On the first night, group by row. On the second, group by column. On the third, group by similar entry in latin square #1 (card rank in the 4x4 example). On the fourth, group by similar entry in latin square #2 (card suit in the 4x4 example). Actually, the affine plane construction gives rise to three mutually orthogonal latin squares, so a fifth night could be scheduled, ensuring that each pair of friends meets exactly once.
Perhaps the schedule for 16 could be extended, using the freedom of the unused fifth night.
EDIT: here's the schedule for 16 people over 5 nights. Each row is a night. Each column is a person. The entry is the group to which they're assigned.
[0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
[0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]
[0, 1, 2, 3, 1, 0, 3, 2, 2, 3, 0, 1, 3, 2, 1, 0]
[0, 2, 3, 1, 1, 3, 2, 0, 2, 0, 1, 3, 3, 1, 0, 2]
[0, 3, 1, 2, 1, 2, 0, 3, 2, 1, 3, 0, 3, 0, 2, 1]
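The "meets exactly once" claim is easy to machine-check; here is a small Python verifier over the schedule above (the helper name is mine):

```python
from itertools import combinations

schedule = [  # one row per night, one column per person, entry = group
    [0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],
    [0, 1, 2, 3, 1, 0, 3, 2, 2, 3, 0, 1, 3, 2, 1, 0],
    [0, 2, 3, 1, 1, 3, 2, 0, 2, 0, 1, 3, 3, 1, 0, 2],
    [0, 3, 1, 2, 1, 2, 0, 3, 2, 1, 3, 0, 3, 0, 2, 1],
]

def meeting_counts(schedule):
    """Count, for every pair of people, how many nights they share a group."""
    n = len(schedule[0])
    counts = {pair: 0 for pair in combinations(range(n), 2)}
    for night in schedule:
        for a, b in counts:
            if night[a] == night[b]:
                counts[(a, b)] += 1
    return counts

print(all(c == 1 for c in meeting_counts(schedule).values()))  # True
```

With 5 nights of 4 groups of 4, there are 5 × 4 × 6 = 120 pairwise meetings, exactly the number of pairs among 16 people, so "at most once" forces "exactly once".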