I’ve got a really large file, circa 10 million rows, in which I’m trying to populate a column based on conditions on another column via an EmEditor JavaScript (jsee) macro. While it is quite quick for small files, it takes a long time on the large one.
//pseudocode
//No sorting on Col1, which can have empty cells too
For all lines in file
IF (cell in Col2 IS empty) AND (cell in Col1 IS NOT empty) AND (cell in Col1 = previous cell in Col1)
THEN cell in Col2 = previous cell in Col2
//jsee code
document.CellMode = true; // Must be cell selection mode
totalLines = document.GetLines();
for( i = 1; i < totalLines; i++ ) {
    nref = document.GetCell( i, 1, eeCellIncludeNone );
    gsize = document.GetCell( i, 2, eeCellIncludeNone );
    if (gsize == "" && nref != "" && nref == document.GetCell( i-1, 1, eeCellIncludeNone ) ) {
        document.SetCell( i, 2, document.GetCell( i-1, 2, eeCellIncludeNone ), eeAutoQuote );
    }
}
Input File:
Reference      Group Size
14/12/01819    1
14/12/01820    1
15/01/00191    4
15/01/00191
15/01/00191
15/01/00198
15/01/00292    3
15/01/00292
15/01/00292
15/01/00401    5
15/01/00401
15/01/00402    1
15/01/00403    2
15/01/00403
15/01/00403
15/01/00403
15/01/00404
20/01/01400    1
Output File:
Reference      Group Size
14/12/01819    1
14/12/01820    1
15/01/00191    4
15/01/00191    4
15/01/00191    4
15/01/00198
15/01/00292    3
15/01/00292    3
15/01/00292    3
15/01/00401    5
15/01/00401    5
15/01/00402    1
15/01/00403    2
15/01/00403    2
15/01/00403    2
15/01/00403    2
15/01/00404
20/01/01400    1
Any ideas on how to optimise this and make it run even faster?
I wrote a JavaScript for EmEditor macro for you. You might need to set the correct numbers in the first 2 lines for iColReference and iColGroupSize.
iColReference = 1; // the column index of "Reference"
iColGroupSize = 2; // the column index of "Group Size"
document.CellMode = true;                  // Must be cell selection mode
sDelimiter = document.Csv.Delimiter;       // retrieve the delimiter
nOldHeadingLines = document.HeadingLines;  // retrieve the old number of heading lines
document.HeadingLines = 0;                 // set No Headings
yBottom = document.GetLines();             // retrieve the number of lines
if( document.GetLine( yBottom ).length == 0 ) { // -1 if the last line is empty
    --yBottom;
}
str = document.GetColumn( iColReference, sDelimiter, eeCellIncludeQuotes, 1, yBottom ); // get the whole 1st column from top to bottom, joined by the delimiter
sCol1 = str.split( sDelimiter );
str = document.GetColumn( iColGroupSize, sDelimiter, eeCellIncludeQuotes, 1, yBottom ); // get the whole 2nd column from top to bottom, joined by the delimiter
sCol2 = str.split( sDelimiter );
s1 = "";
s2 = "";
for( i = 0; i < yBottom; ++i ) { // loop through all lines
    if( sCol2[i].length != 0 ) {
        s1 = sCol1[i];
        s2 = sCol2[i];
    }
    else {
        if( s1.length != 0 && sCol1[i] == s1 ) { // same value as the previous line, copy s2
            if( s2.length != 0 ) {
                sCol2[i] = s2;
            }
        }
        else { // different value, empty s1 and s2
            s1 = "";
            s2 = "";
        }
    }
}
str = sCol2.join( sDelimiter );
document.SetColumn( iColGroupSize, str, sDelimiter, eeDontQuote ); // set the whole 2nd column from top to bottom with the new values
document.HeadingLines = nOldHeadingLines; // restore the original number of headings
To run this, save this code as, for instance, Macro.jsee, and then select this file from Select... in the Macros menu. Finally, select Run Macro.jsee in the Macros menu.
Given a sequence of N integers, where 1 <= N <= 500 and the numbers are between 1 and 50. In a step, any two adjacent equal numbers x x can be replaced with a single x + 1. What is the maximum number achievable by such steps?
For example, if given 2 3 1 1 2 2, then the maximum possible is 4:
2 3 1 1 2 2 ---> 2 3 2 2 2 ---> 2 3 3 2 ---> 2 4 2.
It is evident that I should try to do better than the maximum number available in the sequence. But I can't figure out a good algorithm.
Each substring of the input can be reduced to at most one single number (the invariant: the sum of two to the power of each entry is preserved, so a substring can only reduce to the single value x for which 2^x equals that sum). For every x, we can find the set of substrings that can make x: it consists of (1) every single occurrence of x, and (2) every union of two contiguous substrings that can each make x - 1. The resulting algorithm is O(N^2)-time.
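To make that recurrence concrete, here is a minimal sketch in JavaScript (my own illustration and naming, not code from the answer): for the current value x, cur[i] holds the exclusive end index j such that a[i..j) reduces to the single value x, and x is bounded by max(a) + floor(log2(N)).
// Sketch of the O(N^2) recurrence described above; maxReachable is a made-up name.
function maxReachable(a) {
    var n = a.length;
    if (n === 0) return 0;
    var maxX = Math.max.apply(null, a) + Math.floor(Math.log2(n)) + 1;
    var best = Math.max.apply(null, a);
    var prev = null;                          // reach table for x - 1
    for (var x = 1; x <= maxX; x++) {
        var cur = new Array(n).fill(-1);      // reach table for x
        for (var i = 0; i < n; i++) {
            if (a[i] === x) {
                cur[i] = i + 1;               // (1) a single occurrence of x
            } else if (prev && prev[i] !== -1) {
                var mid = prev[i];            // a[i..mid) makes x - 1
                if (mid < n && prev[mid] !== -1) {
                    cur[i] = prev[mid];       // (2) two adjacent substrings that each make x - 1
                }
            }
            if (cur[i] !== -1) best = Math.max(best, x);
        }
        prev = cur;
    }
    return best;
}
console.log(maxReachable([2, 3, 1, 1, 2, 2])); // 4, matching the example above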
An algorithm could work like this:
Convert the input to an array where every element has a frequency attribute, collapsing repeated consecutive values in the input into one single node. For example, this input:
1 2 2 4 3 3 3 3
Would be represented like this:
{val: 1, freq: 1} {val: 2, freq: 2} {val: 4, freq: 1} {val: 3, freq: 4}
Then find local minima nodes, like the node (3 3 3 3) in 1 (2 2) 4 (3 3 3 3) 4, i.e. nodes whose neighbours both have higher values. For those local minima that have an even frequency, "lift" those by applying the step. Repeat this until no such local minima (with even frequency) exist any more.
Start of the recursive part of the algorithm:
At both ends of the array, work inwards to "lift" values as long as the more inner neighbour has a higher value. With this rule, the following:
1 2 2 3 5 4 3 3 3 1 1
will completely resolve. First from the left side inward:
1 4 5 4 3 3 3 1 1
Then from the right side:
1 4 6 3 2
Note that when there is an odd frequency (like for the 3s above), there will be a "remainder" that cannot be incremented. With this rule, the remainder should always be left on the outward side, so as to maximise the potential towards the inner part of the array.
At this point the remaining local minima have odd frequencies. Applying the step to such a node will always leave a "remainder" (like above) with the original value. This remaining node can appear anywhere, but it only makes sense to look at solutions where this remainder is on the left side or the right side of the lift (not in the middle). So for example:
4 1 1 1 1 1 2 3 4
Can resolve to one of these:
4 2 2 1 2 3 4
Or:
4 1 2 2 2 3 4
The 1 in either the second or fourth position is the above-mentioned "remainder". Obviously, the second way of resolving is more promising in this example. In general, the choice is obvious when on one side there is a value that is too high to merge with: here the left-most 4 is too high for five 1 values to ever reach. The 4 acts like a wall.
When the frequency of the local minimum is one, there is nothing we can do with it. It actually separates the array in a left and right side that do not influence each other. The same is true for the remainder element discussed above: it separates the array into two parts that do not influence each other.
So the next step in the algorithm is to find such minima (where the choice is obvious), apply that kind of step and separate the problem into two distinct problems which should be solved recursively (from the top). So in the last example, the following two problems would be solved separately:
4
2 2 3 4
Then the best of both solutions will count as the overall solution. In this case that is 5.
The most challenging part of the algorithm is to deal with those local minima for which the choice of where to put the remainder is not obvious. For instance;
3 3 1 1 1 1 1 2 3
This can go to either:
3 3 2 2 1 2 3
3 3 1 2 2 2 3
In this example the end result is the same for both options, but in bigger arrays it would be less and less obvious. So here both options have to be investigated. In general you can have many such minima, like the two in this example:
3 1 1 1 2 3 1 1 1 1 1 3
Each of these two minima has two options. This might seem to explode into too many possibilities for larger arrays, but it is not that bad. The algorithm can take opposite choices in neighbouring minima, and go alternating like this through the whole array. This way alternating sections are favoured and get the most possible value drawn into them, while the other sections are deprived of value. Then the algorithm turns the tables and toggles all choices, so that the sections that were previously favoured are now deprived, and vice versa. The solution of both these alternatives is derived by resolving each section recursively, and then the two "grand" solutions are compared to pick the best one.
Snippet
Here is a live JavaScript implementation of the above algorithm.
Comments are provided which hopefully should make it readable.
"use strict";
function Node(val, freq) {
// Immutable plain object
return Object.freeze({
val: val,
freq: freq || 1, // Default frequency is 1.
// Max attainable value when merged:
reduced: val + (freq || 1).toString(2).length - 1
});
}
function compress(a) {
// Put repeated elements in a single node
var result = [], i, j;
for (i = 0; i < a.length; i = j) {
for (j = i + 1; j < a.length && a[j] == a[i]; j++);
result.push(Node(a[i], j - i));
}
return result;
}
function decompress(a) {
// Expand nodes into separate, repeated elements
var result = [], i, j;
for (i = 0; i < a.length; i++) {
for (j = 0; j < a[i].freq; j++) {
result.push(a[i].val);
}
}
return result;
}
function str(a) {
return decompress(a).join(' ');
}
function unstr(s) {
s = s.replace(/\D+/g, ' ').trim();
return s.length ? compress(s.split(/\s+/).map(Number)) : [];
}
/*
    The function merge modifies an array in-place, performing a "step" on
    the indicated element.
    The array will get an element with an incremented value
    and decreased frequency, unless a join occurs with neighboring
    elements with the same value: then the frequencies are accumulated
    into one element. When the original frequency was odd there will
    be a "remainder" element in the modified array as well.
*/
function merge(a, i, leftWards, stats) {
    var val = a[i].val + 1,
        odd = a[i].freq % 2,
        newFreq = a[i].freq >> 1,
        last = i;
    // Merge with neighbouring nodes of same value:
    if ((!odd || !leftWards) && a[i+1] && a[i+1].val === val) {
        newFreq += a[++last].freq;
    }
    if ((!odd || leftWards) && i && a[i-1].val === val) {
        newFreq += a[--i].freq;
    }
    // Replace nodes
    a.splice(i, last - i + 1, Node(val, newFreq));
    if (odd) a.splice(i + leftWards, 0, Node(val - 1));
    // Update statistics and trace: this is not essential to the algorithm
    if (stats) {
        stats.total_applied_merges++;
        if (stats.trace) stats.trace.push(str(a));
    }
    return i;
}
/* Function solve
   Parameters:
   a:     The compressed array to be reduced via merges. It is changed in-place
          and should not be relied on after the call.
   stats: Optional plain object that will be populated with execution statistics.
   Return value:
   The array after the best merges were applied to achieve the highest
   value, which is stored in the maxValue custom property of the array.
*/
function solve(a, stats) {
    var maxValue, i, j, traceOrig, skipLeft, skipRight, sections, goLeft,
        b, choice, alternate;

    if (!a.length) return a;

    if (stats && stats.trace) {
        traceOrig = stats.trace;
        traceOrig.push(stats.trace = [str(a)]);
    }
    // Look for valleys of even size, and "lift" them
    for (i = 1; i < a.length - 1; i++) {
        if (a[i-1].val > a[i].val && a[i].val < a[i+1].val && (a[i].freq % 2) < 1) {
            // Found an even valley
            i = merge(a, i, false, stats);
            if (i) i--;
        }
    }
    // Check left-side elements with always increasing values
    for (i = 0; i < a.length - 1 && a[i].val < a[i+1].val; i++) {
        if (a[i].freq > 1) i = merge(a, i, false, stats) - 1;
    }
    // Check right-side elements with always increasing values, right-to-left
    for (j = a.length - 1; j > 0 && a[j-1].val > a[j].val; j--) {
        if (a[j].freq > 1) j = merge(a, j, true, stats) + 1;
    }
    // All resolved?
    if (i == j) {
        while (a[i].freq > 1) merge(a, i, true, stats);
        a.maxValue = a[i].val;
    } else {
        skipLeft = i;
        skipRight = a.length - 1 - j;
        // Look for other valleys (odd sized): they will lead to a split into sections
        sections = [];
        for (i = a.length - 2 - skipRight; i > skipLeft; i--) {
            if (a[i-1].val > a[i].val && a[i].val < a[i+1].val) {
                // Odd number of elements: if more than one, there
                // are two ways to merge them, but maybe
                // one of both possibilities can be excluded.
                goLeft = a[i+1].val > a[i].reduced;
                if (a[i-1].val > a[i].reduced || goLeft) {
                    if (a[i].freq > 1) i = merge(a, i, goLeft, stats) + goLeft;
                    // i is the index of the element which has become a 1-sized valley.
                    // Split off the right part of the array, and store the solution
                    sections.push(solve(a.splice(i--), stats));
                }
            }
        }
        if (sections.length) {
            // Solve last remaining section
            sections.push(solve(a, stats));
            sections.reverse();
            // Combine the solutions of all sections into one
            maxValue = sections[0].maxValue;
            for (i = sections.length - 1; i >= 0; i--) {
                maxValue = Math.max(sections[i].maxValue, maxValue);
            }
        } else {
            // There is no more valley that can be resolved without branching into two
            // directions. Look for the remaining valleys.
            sections = [];
            b = a.slice(0); // take copy
            for (choice = 0; choice < 2; choice++) {
                if (choice) a = b; // restore from copy on second iteration
                alternate = choice;
                for (i = a.length - 2 - skipRight; i > skipLeft; i--) {
                    if (a[i-1].val > a[i].val && a[i].val < a[i+1].val) {
                        // Odd number of elements
                        alternate = !alternate;
                        i = merge(a, i, alternate, stats) + alternate;
                        sections.push(solve(a.splice(i--), stats));
                    }
                }
                // Solve last remaining section
                sections.push(solve(a, stats));
            }
            sections.reverse(); // put in logical order
            // Find best section:
            maxValue = sections[0].maxValue;
            for (i = sections.length - 1; i >= 0; i--) {
                maxValue = Math.max(sections[i].maxValue, maxValue);
            }
            for (i = sections.length - 1; i >= 0 && sections[i].maxValue < maxValue; i--);
            // Which choice led to the highest value (choice = 0 or 1)?
            choice = (i >= sections.length / 2);
            // Discard the not-chosen version
            sections = sections.slice(choice * sections.length / 2);
        }
        // Reconstruct the solution from the sections.
        a = [].concat.apply([], sections);
        a.maxValue = maxValue;
    }
    if (traceOrig) stats.trace = traceOrig;
    return a;
}
function randomValues(len) {
    var a = [];
    for (var i = 0; i < len; i++) {
        // 50% chance for a 1, 25% for a 2, ... etc.
        a.push(Math.min(/\.1*/.exec(Math.random().toString(2))[0].length, 5));
    }
    return a;
}
// I/O
var inputEl = document.querySelector('#inp');
var randEl = document.querySelector('#rand');
var lenEl = document.querySelector('#len');
var goEl = document.querySelector('#go');
var outEl = document.querySelector('#out');

goEl.onclick = function() {
    // Get the input and structure it
    var a = unstr(inputEl.value),
        stats = {
            total_applied_merges: 0,
            trace: a.length < 100 ? [] : undefined
        };
    // Apply algorithm
    a = solve(a, stats);
    // Output results
    var output = {
        value: a.maxValue,
        compact: str(a),
        total_applied_merges: stats.total_applied_merges,
        trace: stats.trace || 'no trace produced (input too large)'
    };
    outEl.textContent = JSON.stringify(output, null, 4);
};

randEl.onclick = function() {
    // Get input (count of numbers to generate):
    var len = lenEl.value;
    // Generate
    var a = randomValues(len);
    // Output
    inputEl.value = a.join(' ');
    // Simulate click to find the solution immediately.
    goEl.click();
};
// Tests
var tests = [
    ' ', '',
    '1', '1',
    '1 1', '2',
    '2 2 1 2 2', '3 1 3',
    '3 2 1 1 2 2 3', '5',
    '3 2 1 1 2 2 3 1 1 1 1 3 2 2 1 1 2', '6',
    '3 1 1 1 3', '3 2 1 3',
    '2 1 1 1 2 1 1 1 2 1 1 1 1 1 2', '3 1 2 1 4 1 2',
    '3 1 1 2 1 1 1 2 3', '4 2 1 2 3',
    '1 4 2 1 1 1 1 1 1 1', '1 5 1',
];
var res;
for (var i = 0; i < tests.length; i += 2) {
    res = str(solve(unstr(tests[i])));
    if (res !== tests[i+1]) throw 'Test failed: ' + tests[i] + ' returned ' + res + ' instead of ' + tests[i+1];
}
Enter series (space separated):<br>
<input id="inp" size="60" value="2 3 1 1 2 2"><button id="go">Solve</button>
<br>
<input id="len" size="4" value="30"><button id="rand">Produce random series of this size and solve</button>
<pre id="out"></pre>
As you can see the program produces a reduced array with the maximum value included. In general there can be many derived arrays that have this maximum; only one is given.
An O(n*m) time and space algorithm is possible, where, with your stated limits, n <= 500 and m <= 58. Here m represents the range of reachable values, 50 + floor(log2(500)); even for a billion elements, m need only be about 60 (the largest element plus log2(n)):
Consider the condensed sequence, s = {[x, number of x's]}.
Let M[i][j] = [num_j, start_idx], where num_j represents the maximum number of contiguous j's ending at index i of the condensed sequence, and start_idx is the index where that sequence starts, or -1 if it cannot join earlier sequences. Then we have the following relationship:
M[i][j] = [s[i][1] + M[i-1][j][0], M[i-1][j][1]]
when j equals s[i][0]
j's greater than s[i][0] but smaller than or equal to s[i][0] + floor(log2(s[i][1])) represent converting pairs and merging with an earlier sequence if applicable, with a special case when the new count is odd:
When M[i][j][0] is odd, we do two things: first calculate the best so far by looking back in the matrix to a sequence that could merge with M[i][j] or its paired descendants, and then set a lower bound in the next applicable cells in the row (meaning a merge with an earlier sequence cannot happen via this cell). The reason this works is that:
if s[i + 1][0] > s[i][0], then s[i + 1] could only possibly pair with the new split section of s[i]; and
if s[i + 1][0] < s[i][0], then s[i + 1] might generate a lower j that would combine with the odd j from M[i], potentially making a longer sequence.
At the end, return the largest entry in the matrix, max(j + floor(log2(num_j))), for all j.
JavaScript code (counterexamples would be welcome; the limit on the answer is set at 7 for convenient visualization of the matrix):
function f(str){
    var arr = str.split(/\s+/).map(Number);
    var s = [,[arr[0],0]];
    for (var i=0; i<arr.length; i++){
        if (s[s.length - 1][0] == arr[i]){
            s[s.length - 1][1]++;
        } else {
            s.push([arr[i],1]);
        }
    }

    var M = [new Array(8).fill([0,0])],
        best = 0;

    for (var i=1; i<s.length; i++){
        M[i] = new Array(8).fill([0,i]);
        var temp = s[i][1],
            temp_odd,
            temp_start,
            odd = false;
        for (var j=s[i][0]; temp>0; j++){
            var start_idx = odd ? temp_start : M[i][j-1][1];
            if (start_idx != -1 && M[start_idx - 1][j][0]){
                temp += M[start_idx - 1][j][0];
                start_idx = M[start_idx - 1][j][1];
            }
            if (!odd){
                M[i][j] = [temp,start_idx];
                temp_odd = temp;
            } else {
                M[i][j] = [temp_odd,-1];
                temp_start = start_idx;
            }
            if (!odd && temp & 1 && temp > 1){
                odd = true;
                temp_start = start_idx;
            }
            best = Math.max(best,j + Math.floor(Math.log2(temp)));
            temp >>= 1;
            temp_odd >>= 1;
        }
    }
    return [arr, s, best, M];
}

// I/O
var button = document.querySelector('button');
var input = document.querySelector('input');
var pre = document.querySelector('pre');

button.onclick = function() {
    var val = input.value;
    var result = f(val);
    var text = '';
    for (var i=0; i<3; i++){
        text += JSON.stringify(result[i]) + '\n\n';
    }
    for (var i in result[3]){
        text += JSON.stringify(result[3][i]) + '\n';
    }
    pre.textContent = text;
};
<input value ="2 2 3 3 2 2 3 3 5">
<button>Solve</button>
<pre></pre>
Here's a brute force solution:
function findMax(array A, int currentMax)
    for each pair (i, i+1) of indices for which A[i]==A[i+1] do
        currentMax = max(A[i]+1, currentMax)
        replace A[i], A[i+1] by the single number A[i]+1
        currentMax = max(currentMax, findMax(A, currentMax))
        undo the replacement (restore the original A[i], A[i+1])
    end for
    return currentMax

Given the array A, let currentMax = max(A[0], ..., A[n-1])
print findMax(A, currentMax)
The algorithm terminates because in each recursive call the array shrinks by 1.
It's also clear that it is correct: we try out all possible replacement sequences.
The code is extremely slow when the array is large and there are lots of options regarding replacements, but it actually works reasonably fast on arrays with a small number of replaceable pairs. (I'll try to quantify the running time in terms of the number of replaceable pairs.)
A naive working code in Python:
def findMax(L, currMax):
    for i in range(len(L) - 1):
        if L[i] == L[i+1]:
            L[i] += 1
            del L[i+1]
            currMax = max(currMax, L[i])
            currMax = max(currMax, findMax(L, currMax))
            L[i] -= 1
            L.insert(i+1, L[i])
    return currMax

# entry point
if __name__ == '__main__':
    L1 = [2, 3, 1, 1, 2, 2]
    L2 = [2, 3, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2]
    print(findMax(L1, max(L1)))
    print(findMax(L2, max(L2)))
The result of the first call is 4, as expected.
The result of the second call is 5, as expected; a sequence that gives the result: 2,3,1,1,2,2,2,2,2,2,2,2 -> 2,3,1,1,3,2,2,2,2,2,2 -> 2,3,1,1,3,3,2,2,2,2 -> 2,3,1,1,3,3,3,2,2 -> 2,3,1,1,3,3,3,3 -> 2,3,1,1,4,3,3 -> 2,3,1,1,4,4 -> 2,3,1,1,5
So, I'm trying to do something similar to a paginator (a list of page numbers) where the current number is in the middle, or as close to the middle as it can be.
Every way I solve it turns out hard and weird; I'm just wondering if there is a nice mathy way to do it :)
given:
a: current page number
x: first page number
y: last page number
n: number required
I want to generate a list of numbers where a is as close to the center as can be, while staying within x and y
so f(5, 1, 10, 5) would return [3, 4, 5, 6, 7]
but f(1, 1, 10, 5) would return [1, 2, 3, 4, 5]
and f(9, 1, 10, 5) would return [6, 7, 8, 9, 10]
Can anyone think of a nice way of getting that kind of thing?
Implemented in a probably over-complicated way in Ruby; can it be done more simply?
def numbers_around(current:, total:, required: 5)
  required_before = (required - 1) / 2
  required_after = (required - 1) / 2

  before_x = current - required_before
  after_x = current + required_after

  if before_x < 1
    after_x += before_x.abs + 1
    before_x = 1
  end

  if after_x > total
    before_x -= (after_x - total)
    after_x = total
  end

  (before_x..after_x)
end
Here's something kind of mathy that returns the first number in the list (JavaScript code):
function f(a, x, y, n) {
    var m = n >> 1;
    return x * (n > y - x) || a - m
        + Math.max(0, m - a + x)
        - Math.max(0, m - y + a);
}
Output:
console.log(f(5,1,10,5)); // 3
console.log(f(1,1,10,5)); // 1
console.log(f(9,1,10,5)); // 6
console.log(f(2,1,10,5)); // 1
console.log(f(11,1,10,5)); // 6
console.log(f(7,3,12,10)); // 3
As you didn't mention the language you want this in, here is some explained code I put together in C++:
#include <iostream>
#include <vector>

std::vector<int> getPageNumbers(int first, int last, int page, int count) {
    int begin = page - (count / 2);
    if (begin < first) {
        begin = first;
    }

    int cur = 0;
    std::vector<int> result;
    while (begin + cur <= last && cur < count) {
        result.push_back(begin + cur);
        ++cur;
    }

    cur = 0;
    while (begin - cur >= first && (int)result.size() < count) {
        ++cur;
        result.insert(result.begin(), begin - cur);
    }

    return result;
}

int main() {
    std::vector<int> foo = getPageNumbers(1, 10, 10, 4);
    std::vector<int>::iterator it;
    for (it = foo.begin(); it != foo.end(); ++it) {
        std::cout << *it << " " << std::endl;
    }
    return 0;
}
What it does is basically:
Start at the element page - (count / 2) (integer division is fine here, you don't need to subtract anything extra; e.g. 2.5 gets rounded down to 2).
If the start element is below first, start at first.
Keep adding elements to the result as long as the current page number is smaller than or equal to the last page, or until enough elements are inserted.
Keep inserting elements at the beginning as long as there are fewer than count elements in the result vector and the current element is not smaller than the first page.
That is my basic attempt now. The code is executable.
After writing this, I realized it's very similar to #Nidhoegger's answer but maybe it will help? PHP
<?php
// Assume 0-indexed pages
$current = 2;
$first = 0;
$last = 10;
$limit = 5;

// Start at half the limit below the current page, so if the limit is 5,
// start at current - 2 and move up.
$page_counter = -floor($limit / 2);

$pages = array();
for ($i = 0; $i < $limit; ) {
    $page_to_add = $current + $page_counter;
    $page_counter++;
    if ($page_to_add > $last)
        break;
    if ($page_to_add >= $first) {
        $i++;
        $pages[] = $page_to_add;
    }
}
?>
I think it's just one of those problems with a lot of annoying corner cases.
start = a - (n / 2);
if (start < x) start = x;      // don't go past first page.
end = start + (n - 1);         // wherever you start, proceed n pages
if (end > y) {                 // also don't go past last page.
    end = y;
    start = end - (n - 1);     // if you hit the end, go back n pages
    if (start < x) start = x;  // but _still_ don't go past first page (fewer than n pages)
}
// make some kind of vector [start..end] inclusive.
or, assuming higher-level primitives, if you prefer:
start = max(x, a - (n / 2)) // (n/2) pages before but don't pass x
end = min(start + (n - 1), y) // n pages long, but don't pass y
start = max(x, end - (n - 1)) // really n pages long, but really don't pass x
// make some kind of vector [start..end] inclusive.
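For what it's worth, here is a small JavaScript sketch of that max/min version (pageRange is a made-up name; it assumes integer inputs with x <= y and n >= 1):
function pageRange(a, x, y, n) {
    var start = Math.max(x, a - Math.floor(n / 2)); // n/2 pages before, but don't pass x
    var end = Math.min(start + (n - 1), y);         // n pages long, but don't pass y
    start = Math.max(x, end - (n - 1));             // really n pages long, but really don't pass x
    var pages = [];
    for (var p = start; p <= end; p++) pages.push(p);
    return pages;
}
console.log(pageRange(5, 1, 10, 5)); // [3, 4, 5, 6, 7]
console.log(pageRange(1, 1, 10, 5)); // [1, 2, 3, 4, 5]
console.log(pageRange(9, 1, 10, 5)); // [6, 7, 8, 9, 10]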
Here's what seems to be the most efficient way to me. Using an array from 1 to n, find the index for the a value. First find the center point of the indexes of the array, then check to see if the number is close to one end or the other, and modify it by the difference. Then fill in the values.
It should be quick since, instead of iterating, it uses arithmetic to arrive at the index numbers.
Pseudocode:
centerindex = Ceiling(n/2, 1)
If (y-a) < (n - centerindex) Then centerindex = 2 * centerindex - (y - a) - 1
If (a-x) < (n - centerindex) Then centerindex = (a - x) + 1
For i = 1 to n
pages(i) = a - (centerindex - i)
Next i
I have an array (of 9 elements, say) which I must treat as a (3 by 3) square.
For the sake of simplifying the question, this is a one-based array (ie, indexing starts at 1 instead of 0).
My goal is to determine valid adjacent squares relative to a starting point.
In other words, how it's stored in memory: 1 2 3 4 5 6 7 8 9
How I'm treating it:
7 8 9
4 5 6
1 2 3
I already know how to move up and down and test for going out of bounds (1 <= current_index <= 9).
edit: I know the above test is overly general but it's simple and works.
// row_size = 3; row_step is -1, 0 or 1 depending on whether we're moving
// down a row, staying put, or moving up a row respectively.
current_index += (row_size * row_step);
How do I test for an out of bounds condition when going left or right? Conceptually I know it involves determining if 3 (for example) is on the same row as 4 (or if 10 is even within the same square as 9, as an alternate example, given that multiple squares are in the same array back to back), but I can't figure out how to determine that. I imagine there's a modulo in there somewhere, but where?
Thanks very much,
Geoff
Addendum:
Here's the resulting code, altered for use with a zero-based array (I cleaned up the offset code present in the project) which walks adjacent squares.
bool IsSameSquare(int index0, int index1, int square_size) {
    // Assert for square_size != 0 here
    return (!((index0 < 0) || (index1 < 0))
        && ((index0 < square_size) && (index1 < square_size)))
        && (index0 / square_size == index1 / square_size);
}

bool IsSameRow(int index0, int index1, int row_size) {
    // Assert for row_size != 0 here
    return IsSameSquare(index0, index1, row_size * row_size)
        && (index0 / row_size == index1 / row_size);
}

bool IsSameColumn(int index0, int index1, int row_size) {
    // Assert for row_size != 0 here
    return IsSameSquare(index0, index1, row_size * row_size)
        && (index0 % row_size == index1 % row_size);
}

// for all possible adjacent positions
for (int row_step = -1; row_step < 2; ++row_step) {
    // move up, down or stay put.
    int row_adjusted_position = original_position + (row_size * row_step);
    if (!IsSameSquare(original_position, row_adjusted_position, square_size)) {
        continue;
    }

    for (int column_step = -1; column_step < 2; ++column_step) {
        if ((row_step == 0) && (column_step == 0)) { continue; }

        // hold on to the position that has had its row position adjusted.
        int new_position = row_adjusted_position;

        if (column_step != 0) {
            // move left or right
            int column_adjusted_position = new_position + column_step;

            // if we've gone out of bounds again for the column.
            if (IsSameRow(column_adjusted_position, new_position, row_size)) {
                new_position = column_adjusted_position;
            } else {
                continue;
            }
        } // if (column_step != 0)

        // if we get here we know it's safe, do something with new_position
        // ...
    } // for each column_step
} // for each row_step
This is easier if you use 0-based indexing. These rules work if you subtract 1 from all your indexes:
Two indexes are in the same square if (a/9) == (b/9) and a >= 0 and b >= 0.
Two indexes are in the same row if they are in the same square and (a/3) == (b/3).
Two indexes are in the same column if they are in the same square and (a%3) == (b%3).
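As an illustration only, here is a quick JavaScript sketch of those three rules (the helper names are mine), with 0-based indexes a and b and rowSize = 3 for a 3-by-3 square:
function sameSquare(a, b, rowSize) {
    var squareSize = rowSize * rowSize;
    return a >= 0 && b >= 0 &&
        Math.floor(a / squareSize) === Math.floor(b / squareSize);
}
function sameRow(a, b, rowSize) {
    return sameSquare(a, b, rowSize) &&
        Math.floor(a / rowSize) === Math.floor(b / rowSize);
}
function sameColumn(a, b, rowSize) {
    return sameSquare(a, b, rowSize) && (a % rowSize === b % rowSize);
}
console.log(sameRow(3, 4, 3));    // true:  1-based cells 4 and 5 share a row
console.log(sameRow(2, 3, 3));    // false: 1-based cells 3 and 4 do not
console.log(sameColumn(0, 3, 3)); // true:  1-based cells 1 and 4 share a column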
There are several ways to do this; I'm choosing a weird one just for fun: use modulus.
As your rows are size 3, just use modulus of 3 and two simple rules.
If currPos mod 3 = 0 and (currPos+move) mod 3 = 1 then invalid
If currPos mod 3 = 1 and (currPos+move) mod 3 = 0 then invalid
This checks for jumping to a new row. You could also do it with one rule (note the shift by 1 because the indexing here is 1-based):
if abs(((currPos - 1) mod 3) - ((currPos + move - 1) mod 3)) > 1 then invalid
Cheers
You should be using a multidimensional array for this.
If your array class doesn't support multidimensional stuff, you should write up a quick wrapper that does.