How can I make this Fibonacci function cleaner, and possibly improve its performance?
function fibonacci(n) {
    var array = [];
    if (n === 1) {
        array.push(0);
        return array;
    } else if (n === 2) {
        array.push(0, 1);
        return array;
    } else {
        array = [0, 1];
        for (var i = 2; i < n; i++) {
            var sum = array[array.length - 2] + array[array.length - 1];
            array.push(sum);
        }
        return array;
    }
}
If you want to optimize your Fibonacci then pre-calculate a bunch of values (say up to 64 or even higher depending on your use-case) and have those pre-calculated values as a constant array that your function can use.
const precalcFibonacci = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, 75025, 121393, 196418, 317811, 514229, 832040, 1346269, 2178309, 3524578, 5702887, 9227465, 14930352, 24157817, 39088169, 63245986, 102334155, 165580141, 267914296, 433494437, 701408733, 1134903170, 1836311903, 2971215073, 4807526976, 7778742049, 12586269025, 20365011074, 32951280099, 53316291173, 86267571272, 139583862445, 225851433717, 365435296162, 591286729879, 956722026041, 1548008755920, 2504730781961, 4052739537881, 6557470319842];
function fibonacci(n) {
    if (n <= 0) return [];
    if (n < 65) return precalcFibonacci.slice(0, n);
    let array = precalcFibonacci.slice();
    for (let i = 64, a = precalcFibonacci[62], b = precalcFibonacci[63]; i < n; i++) {
        // note: from index 79 on the values exceed Number.MAX_SAFE_INTEGER,
        // so they lose precision as JS numbers
        array[i] = a + b;
        a = b;
        b = array[i];
    }
    return array;
}
There is a way to get the N-th Fibonacci number in O(log N) time.
All you need is to raise the matrix
| 0 1 |
| 1 1 |
to the power N using binary (square-and-multiply) matrix exponentiation.
This is really useful for very large N, where the traditional linear algorithm is slow.
Links to materials:
https://kukuruku.co/post/the-nth-fibonacci-number-in-olog-n/
https://www.youtube.com/watch?v=eMXNWcbw75E
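As a sketch of that idea in Python (my own illustration, not the code from the linked materials), exponentiation by squaring on the 2x2 matrix gives the N-th number in O(log N) matrix multiplications:

```python
def mat_mult(a, b):
    """Multiply two 2x2 matrices."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, n):
    """Raise a 2x2 matrix to the n-th power by repeated squaring."""
    result = [[1, 0], [0, 1]]  # identity matrix
    while n > 0:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

def fib(n):
    """N-th Fibonacci number via [[0,1],[1,1]]^n, with fib(0) = 0, fib(1) = 1."""
    return mat_pow([[0, 1], [1, 1]], n)[0][1]
```

With Python's arbitrary-precision integers this stays exact for large N, unlike the float-limited JS version above.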
Say you are given a set of coins, such as four 10¢, four 5¢, and four 1¢ coins.
You are asked to place these coins on a 12-hour analog clock face, where the next coin you place must be placed at X hours after the previous coin, where X is the value of the previous coin.
So if you place a 1¢ on 12, the next coin you place goes at 1. If you place a 5¢ on 1, the next coin you place goes at 6. And so on.
How can you maximize the number of coins that can be placed on the clock before the next coin would have to be placed in a slot that is already taken?
This is a problem I came across which I have been unable to solve except via exhaustive search. Exhaustive search fails quickly once the inputs are made arbitrary: say, an arbitrary number of coins of various known denominations, on a clock with an arbitrary number of hours. Then exhaustive search becomes factorial-time and fails on the basis of excessive computational requirements.
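To make the placement rule concrete, here is a minimal Python sketch (names are my own) that plays out a given coin order and counts how many coins land before a collision:

```python
def place_coins(order, hours=12):
    """Place coins in the given order, starting at position 0 (12 o'clock).
    Each coin advances the pointer by its own value; stop as soon as the
    next coin would land on an occupied slot. Returns the number placed."""
    occupied = set()
    pos = 0
    placed = 0
    for coin in order:
        if pos in occupied:
            break  # next coin would land on a taken slot
        occupied.add(pos)
        placed += 1
        pos = (pos + coin) % hours
    return placed
```

For example, the order 1, 5, 10, 10, 1, 5, 1, 10, 10, 5, 1, 5 covers all 12 positions, so place_coins returns 12 for it.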
As maraca mentioned, there probably isn't a much better solution than backtracking without more restrictions. Maybe with a larger number of coins of given denominations the space can be covered with 'patterns'. For example, the coins [5, 10, 10, 5, 10, 10, 5, x] cover the first 8 places and the next coin lands in a similar location to the first one, so the process can be repeated if there are enough coins.
The number of possible coin combinations in this case is not large at all: 12! / (4! * 4! * 4!) = 34650. That number certainly explodes with larger parameters. Here is simple Python code that solves a 3-times-larger problem, which has about 3*10^15 possible coin combinations.
max_positions = []
max_order = None

def add_coin(coins, position, coin_order, occupied_positions, num_hours):
    global max_positions, max_order
    if position in occupied_positions or not coins:
        # Can't place on that position, or there is nothing more to place
        if len(occupied_positions) > len(max_positions):
            max_positions = occupied_positions
            max_order = coin_order
        return not coins  # if everything is placed, return True to stop the search
    for c, num_coins in coins:  # Try each coin
        # Copy coins to a new list and remove the one used
        c_coins = [x for x in coins if x[0] != c]
        if num_coins > 1:
            c_coins.append((c, num_coins - 1))
        # Next iteration
        if add_coin(c_coins,
                    (position + c) % num_hours,
                    coin_order + [c],
                    occupied_positions + [position],
                    num_hours):
            return True

def solve_coins(coins, num_hours):
    global max_positions, max_order
    max_positions = []
    max_order = None
    add_coin(coins, 0, [], [], num_hours)
    print(len(max_positions), max_positions, max_order)

solve_coins([(1, 4), (5, 4), (10, 4)], 12)
solve_coins([(1, 8), (5, 8), (10, 8)], 24)
solve_coins([(1, 12), (5, 12), (10, 12)], 36)
output:
12 [0, 1, 6, 4, 2, 3, 8, 9, 7, 5, 10, 11] [1, 5, 10, 10, 1, 5, 1, 10, 10, 5, 1, 5]
24 [0, 1, 6, 16, 17, 3, 4, 14, 19, 5, 15, 20, 21, 2, 7, 8, 13, 18, 23, 9, 10, 11, 12, 22] [1, 5, 10, 1, 10, 1, 10, 5, 10, 10, 5, 1, 5, 5, 1, 5, 5, 5, 10, 1, 1, 1, 10, 10]
36 [0, 1, 6, 16, 17, 22, 23, 28, 2, 12, 13, 18, 19, 29, 34, 3, 8, 9, 10, 11, 21, 31, 5, 15, 20, 30, 35, 4, 14, 24, 25, 26, 27, 32, 33, 7] [1, 5, 10, 1, 5, 1, 5, 10, 10, 1, 5, 1, 10, 5, 5, 5, 1, 1, 1, 10, 10, 10, 10, 5, 10, 5, 5, 10, 10, 1, 1, 1, 5, 1, 10, 5]
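The combination counts quoted above can be sanity-checked with a multinomial coefficient; a small sketch (the ~3*10^15 figure corresponds to the 36-hour instance):

```python
from math import factorial

def multinomial(total, parts):
    """Number of distinct orderings of `total` items split into
    groups of identical items of the given sizes."""
    assert sum(parts) == total
    n = factorial(total)
    for p in parts:
        n //= factorial(p)
    return n

print(multinomial(12, [4, 4, 4]))    # 34650, the original problem
print(multinomial(36, [12, 12, 12])) # roughly 3.4e15, the 3x larger instance
```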
// Expressing the coins as a list of buckets with the same modulo allows
// you to efficiently find the next coin to test, and you don't start to
// calculate with the first penny and then do the same again starting
// with the second penny (or a 13-coin on a 12-clock); it is basically the same.
// Additionally it allows removing and inserting items at the current position in O(1).
// Also reverting is much better than copying whole states on each recursive call.
private class Bucket {
    public int number;
    public LinkedList<Integer> numbers = new LinkedList<>();

    public Bucket(int number, int hours) {
        this.number = number % hours;
        numbers.add(number);
    }
}

private LinkedList<Bucket> coins; // not using interface List as you are supposed to
private LinkedList<Integer> best, current; // because of removeLast()
private boolean[] occupied;
private int hours, limit;

public List<Integer> findBest(int[] coins, int hours) {
    this.hours = hours;
    // create buckets of coins with the same modulo
    Integer[] c = Arrays.stream(coins).boxed().toArray(Integer[]::new);
    // sort descending because a lot of small coins in a row are more likely to create
    // an impassable area on the next pass around the clock
    Arrays.sort(c, new Comparator<Integer>() {
        public int compare(Integer a, Integer b) {
            return Integer.compare(b.intValue() % hours, a.intValue() % hours);
        }
    });
    this.coins = new LinkedList<>();
    Bucket b = new Bucket(c[0].intValue(), hours);
    this.coins.add(b);
    int mod = c[0].intValue() % hours, coinCount = 1;
    for (int i = 1; i < c.length; i++) {
        int m = c[i].intValue() % hours;
        if (m == mod) { // same bucket
            b.numbers.add(c[i]);
        } else { // new bucket
            b = new Bucket(c[i].intValue(), hours);
            this.coins.add(b);
            mod = m;
        }
        coinCount++;
        if (mod == 0) // don't need more than one 0 value
            break;
    }
    best = new LinkedList<>();
    current = new LinkedList<>();
    occupied = new boolean[hours];
    limit = coinCount < hours ? coinCount : hours; // max coins that can be placed
    findBest(0);
    return best;
}

private void findBest(int pos) {
    if (best.size() == limit) // already found optimal solution
        return;
    if (occupied[pos] || current.size() == limit) {
        if (current.size() > best.size())
            best = (LinkedList<Integer>) current.clone();
        return;
    }
    occupied[pos] = true;
    for (int i = 0; i < coins.size(); i++) {
        Bucket b = coins.get(i);
        current.add(b.numbers.removeLast());
        boolean removed = false;
        if (b.numbers.size() == 0) { // bucket empty
            coins.remove(i);
            removed = true;
        }
        findBest((pos + b.number) % hours);
        if (removed)
            coins.add(i, b);
        b.numbers.add(current.removeLast());
    }
    occupied[pos] = false;
}
Output for the given example: 10 10 5 1 1 1 5 10 10 1 5 5
Here is a slightly more optimized version in JavaScript where the list is implemented manually, so that you can really see why removing and adding the current bucket is O(1). Because the list is always read in order, it is superior to an array in this case: with an array you need to shift many elements or skip a lot of empty ones, depending on how you implement it, but not with a list of buckets. It should be a little easier to understand than the Java code.
var head, occupied, current, best, h, limit;

document.body.innerHTML = solve([1, 1, 1, 1, 5, 5, 5, 5, 10, 10, 10, 10], 12);

function solve(coins, hours) {
    h = hours;
    coins.sort(function(a, b) {
        let x = a % hours, y = b % hours;
        if (x > y)
            return -1;
        if (x < y)
            return 1;
        return 0;
    });
    let mod = coins[0] % hours;
    head = {num: mod, vals: [coins[0]], next: null};
    let b = head, coinCount = 1;
    for (let i = 1; i < coins.length && mod != 0; i++) {
        let m = coins[i] % hours;
        if (m == mod) {
            b.vals.push(coins[i]);
        } else {
            b.next = {num: m, vals: [coins[i]], next: null};
            b = b.next;
            mod = m;
        }
        coinCount++;
    }
    limit = coinCount < hours ? coinCount : hours;
    occupied = [];
    for (let i = 0; i < hours; i++)
        occupied.push(false);
    best = [];
    current = [];
    solveRec(0);
    return JSON.stringify(best);
}

function solveRec(pos) {
    occupied[pos] = true;
    let b = head, prev = null;
    while (b !== null) {
        let m = (pos + b.num) % h;
        if (!occupied[m]) {
            current.push(b.vals.pop());
            let rem = false;
            if (b.vals.length == 0) {
                if (prev == null)
                    head = b.next;
                else
                    prev.next = b.next;
                rem = true;
            }
            solveRec(m);
            if (current.length > best.length)
                best = current.slice();
            if (best.length == limit)
                return;
            if (rem) {
                if (prev == null)
                    head = b;
                else
                    prev.next = b;
            }
            b.vals.push(current.pop());
        } else if (current.length + 1 > best.length) {
            best = current.slice();
            best.push(b.vals[b.vals.length - 1]);
        }
        prev = b;
        b = b.next;
    }
    occupied[pos] = false;
}
I am still working on this, but it is already much better than O(n!). I will try to fit a new O() on it soon.
The concept is pretty simple: you create the smallest combos of numbers and then link them together into longer and longer strings of numbers until the next step is not possible.
A key to this working is that you don't track the front or end of a list of numbers, only the sum (and the inner sums, since they were calculated at earlier steps). As long as that sum is never divisible by the clock size, it remains a clean solution.
Each step attempts to "splice" the smaller combos into the next size bigger:
(1,3), (3,1) -> (1,3,1), (3,1,3)
Here is a brief example (simplified) of what the algo is doing:
clock: 4
coin count: 4
coins: 1, 2, 3, 3
pairs (passing pairs are kept; the others are skipped for 1 of 3 reasons: not enough in the population to build the combo, sum divisible by the clock, or (in the actual algo) duplicate prevention): (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3)
triples: (1,2,1), (2,1,2), (2,3,2), (3,2,3), (1,2,3), (2,3,3), (3,2,1), (3,3,2)
final combos:(1,2,3,x), (3,2,1,x)
This code is runnable standalone in C++ (placeCoins is the algo).
I assume you will adapt the algo to your purposes, but for anyone who wishes to run this cpp file: it asks for the clock size and the coin count, then reads that many coin values as input. The output shows the best count and all orders at that count; during the run it also prints the number of combos at each step, from which you can estimate the complexity (the number of combos checked) and see how much faster this is than any kind of exhaustive search.
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <map>
using namespace std;

// min clock size 3
vector<vector<int>> placeCoins(int _clockSize, vector<int> _coins)
{
    int totalCheckedCombos = 0;
    vector<vector<int>> coinGroups;
    vector<int> coinSet = _coins;
    sort(coinSet.begin(), coinSet.end());
    coinSet.erase(unique(coinSet.begin(), coinSet.end()), coinSet.end());
    map<int, int> coinCounts;
    for (int i = 0; i < coinSet.size(); i++)
    {
        coinCounts[coinSet.at(i)] = count(_coins.begin(), _coins.end(), coinSet.at(i));
    }
    cout << "pairs" << endl;
    // generate fair pairs of coins
    for (int i = 0; i < coinSet.size(); i++)
    {
        for (int ii = 0; ii < coinSet.size(); ii++)
        {
            if ((coinSet.at(i) + coinSet.at(ii)) % _clockSize != 0)
            {
                if (i == ii)
                {
                    if (coinCounts[coinSet.at(i)] > 1)
                    {
                        coinGroups.push_back({ coinSet.at(i), coinSet.at(ii) });
                    }
                }
                else
                {
                    coinGroups.push_back({ coinSet.at(i), coinSet.at(ii) });
                }
            }
        }
    }
    cout << "combine" << endl;
    // iteratively combine groups of coins
    for (int comboSize = 3; comboSize < _clockSize; comboSize++)
    {
        totalCheckedCombos += coinGroups.size();
        vector<vector<int>> nextSizeCombos;
        for (int i = 0; i < coinGroups.size(); i++)
        {
            for (int ii = 0; ii < coinGroups.size(); ii++)
            {
                // check combo to match
                bool match = true;
                for (int a = 0; a < comboSize - 2; a++)
                {
                    if (coinGroups.at(i).at(a + 1) != coinGroups.at(ii).at(a))
                    {
                        match = false;
                        break;
                    }
                }
                // check sum
                if (match)
                {
                    vector<int> tempCombo = coinGroups.at(i);
                    int newVal = coinGroups.at(ii).at(coinGroups.at(ii).size() - 1);
                    tempCombo.push_back(newVal);
                    if (coinCounts[newVal] >= count(tempCombo.begin(), tempCombo.end(), newVal))
                    {
                        if (accumulate(tempCombo.begin(), tempCombo.end(), 0) % _clockSize != 0)
                        {
                            nextSizeCombos.push_back(tempCombo);
                        }
                    }
                }
            }
        }
        if (nextSizeCombos.size() == 0)
        {
            // finished, no next size combos found
            break;
        }
        else
        {
            cout << nextSizeCombos.size() << endl;
            coinGroups = nextSizeCombos;
        }
    }
    cout << "total combos checked: " << totalCheckedCombos << endl;
    return coinGroups;
}

int main(int argc, char** argv)
{
    int clockSize;
    int coinCount;
    vector<int> coins = {};
    cout << "enter clock size: " << endl;
    cin >> clockSize;
    cout << "count number: " << endl;
    cin >> coinCount;
    for (int i = 0; i < coinCount; i++)
    {
        int tempCoin;
        cin >> tempCoin;
        coins.push_back(tempCoin);
    }
    cout << "press enter to compute combos: " << endl;
    cin.get();
    cin.get();
    vector<vector<int>> resultOrders = placeCoins(clockSize, coins);
    for (int i = 0; i < resultOrders.size(); i++)
    {
        cout << resultOrders.at(0).size() + 1 << endl;
        for (int ii = 0; ii < resultOrders.at(i).size(); ii++)
        {
            cout << resultOrders.at(i).at(ii) << " , ";
        }
        cout << "x" << endl;
    }
    cin.get();
    cin.get();
}
PS: although I debugged this to a stable state, it could definitely use fine tuning and optimization, but that varies between languages, so I just got the algo to work and called it good enough. If you see something glaringly wrong or in poor form, feel free to comment and I'll fix it (or edit it directly if you want).
Instead of a greedy approach, try taking the maximum over choosing a coin vs. not choosing it.
def valueOfClock(capacity, coins, n, hour):
    if n == 0 or capacity == 0 or hour > 12:
        return 0
    # Skip the next coin if its value exceeds the remaining capacity
    if coins[n-1] > capacity:
        return valueOfClock(capacity, coins, n-1, hour)
    # Otherwise take the max of skipping the current coin
    # or choosing it
    return max(valueOfClock(capacity, coins, n-1, hour),
               valueOfClock(capacity - coins[n-1], coins, n-1, hour + coins[n-1]))
I have this method
public static void primeSort(String[] list, HashMap<Integer, ArrayList<String>> hm) {
    for (int x = 0; x < list.length; x++) {
        if (list[x] == null) continue;
        String curX = list[x];
        int hashX = primeHash(curX);
        hm.put(hashX, new ArrayList<>(Arrays.asList(curX)));
        for (int y = x + 1; y < list.length; y++) {
            String curY = list[y];
            // check for null before hashing, to avoid a NullPointerException
            if (curY == null || curY.length() != curX.length()) continue;
            int hashY = primeHash(curY);
            if (hashX == hashY) {
                hm.get(hashX).add(curY);
                list[y] = null; // if it's an anagram, null it out to avoid checking again
            }
        }
    }
}
which calls this method:
public static int primeHash(String word) {
    int productOfPrimes = 1;
    int[] prime = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31,
                    37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101 };
    for (char ch : word.toCharArray()) {
        productOfPrimes *= prime[ch - 'a'];
    }
    // note: the product overflows int for longer words, which can
    // introduce spurious collisions between non-anagrams
    return productOfPrimes;
}
The objective is to take a list of strings and sort them into anagrams, returning a map that groups the anagrams into lists.
I'm trying to determine the time complexity, but it's a bit tricky.
primeSort will be worst case O(n^2) and best case O(n)
primeHash will run m times each time it's called where m is the length of the current string. I'm not sure how this will be analyzed and how it can be combined with the analysis for primeSort.
Any help is appreciated, thanks :)
The nested loop makes primeSort O(n^2) in the number of strings. Each iteration also calls primeHash, which loops once over the word, so a call costs O(m), where m is the length of that string. The lengths will differ, so take the longest one, say M, for the asymptotic analysis. That gives
WC: O(n^2 * M), which becomes O(n^3) if M can be close to n.
In the best case all strings are anagrams of the first one: the first outer pass hashes each of the n strings once and nulls them all out, which is O(n * M), and every later pass just skips. If the words have constant length, primeHash is O(1), so
BC: O(n).
I have some boolean arrays whose sizes are not constant, and I need a strong and fast hash algorithm that gives a minimum chance of collision for them.
My own idea was calculating the integer value of each boolean array, but for example these 2 arrays will give the same hash of 3:
[0, 1, 1] and [1, 1]
I thought of multiplying by the size of the array after calculating the integer value, but that idea also fails, because there is still a high chance of hash collision.
Does anyone have a good idea?
You can insert a sentinel true element at the start of the array, then interpret the array as a binary number. This is a perfect hash (no collisions) for arrays with fewer than 32 elements. For larger arrays I suggest doing the arithmetic modulo a large prime less than 2^31.
Examples:
Array | Binary | Decimal
------------+--------+---------
[ 0, 1, 1 ] | 1011 | 11
[ 1, 1 ] | 111 | 7
This is the same as interpreting the array as a binary number, and then taking the bitwise OR with 1 << n where n is the size of the array.
Implementation:
int hash(int[] array)
{
    int h = 1;
    for (int i = 0; i < array.length; i++)
    {
        h = (h << 1) | array[i];
    }
    return h;
}
Note: This implementation only works well for arrays with fewer than 32 elements, because for larger arrays the calculation will overflow (assuming int is 32 bits) and the most significant bits will be completely discarded. This can be fixed by inserting h = h % ((1 << 31) - 1); before the end of the for-loop (the expression "(1 << 31) - 1" computes 2^31 - 1, which is prime).
My ideas:
Approach #1:
Calculate the first 2n prime numbers, where n is the length of the array.
Let hash = 1.
For i = 0 to n - 1: if the bit at position i is 1, multiply hash by the (2i)-th and the (2i+1)-st prime. If it's 0, multiply it by the (2i)-th one only.
Approach #2:
Treat the binary arrays as ternary. Bit is 0 => ternary digit is 0; bit is 1 => ternary digit is 1; bit is not present => ternary digit is 2 (this works because the array has a maximal possible length).
Calculate the ternary number using this substitution - the result will be unique.
Here's some code showing the implementation of these algorithms in C++, and a test program which generates hashes for every boolean array of length 0...18. I use the C++11 class std::unordered_set so that each hash is stored only once. Thus, if we don't have any duplicates (i.e. if the hash function is perfect), we should get 2^19 - 1 elements in each set, which we do. (I had to change the integers to unsigned long long on IDEone, else the hashes weren't perfect; I suspect this has to do with 32 vs. 64 bit architectures.)
#include <unordered_set>
#include <iostream>

#define MAX_LEN 18

unsigned long prime_hash(const unsigned int *arr, size_t len)
{
    /* first 2 * MAX_LEN primes */
    static const unsigned long p[2 * MAX_LEN] = {
          2,   3,   5,   7,  11,  13,  17,  19,  23,
         29,  31,  37,  41,  43,  47,  53,  59,  61,
         67,  71,  73,  79,  83,  89,  97, 101, 103,
        107, 109, 113, 127, 131, 137, 139, 149, 151
    };
    unsigned long h = 1;
    for (size_t i = 0; i < len; i++)
        h *= p[2 * i] * (arr[i] ? p[2 * i + 1] : 1);
    return h;
}

unsigned long ternary_hash(const unsigned int *arr, size_t len)
{
    static const unsigned long p3[MAX_LEN] = {
        1, 3, 9, 27,
        81, 243, 729, 2187,
        6561, 19683, 59049, 177147,
        531441, 1594323, 4782969, 14348907,
        43046721, 129140163
    };
    unsigned long h = 0;
    for (size_t i = 0; i < len; i++)
        if (arr[i])
            h += p3[i];
    for (size_t i = len; i < MAX_LEN; i++)
        h += 2 * p3[i];
    return h;
}

void int2barr(unsigned int *dst, unsigned long n, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        dst[i] = n & 1;
        n >>= 1;
    }
}

int main()
{
    std::unordered_set<unsigned long> phashes, thashes;
    /* generate all possible bool-arrays from length 0 to length 18 */
    /* first, we checksum the only 0-element array */
    phashes.insert(prime_hash(NULL, 0));
    thashes.insert(ternary_hash(NULL, 0));
    /* then we checksum the arrays of length 1...18 */
    for (size_t len = 1; len <= MAX_LEN; len++) {
        unsigned int bits[len];
        for (unsigned long i = 0; i < (1 << len); i++) {
            int2barr(bits, i, len);
            phashes.insert(prime_hash(bits, len));
            thashes.insert(ternary_hash(bits, len));
        }
    }
    std::cout << "prime hashes: " << phashes.size() << std::endl;
    std::cout << "ternary hashes: " << thashes.size() << std::endl;
    return 0;
}
A simple and efficient hashcode is replacing 0 and 1 with small primes and doing the usual shift-accumulator loop:
hash = 0
for bit in list:
    hash = hash * 31 + 2 * bit + 3
return hash
Here 0 is treated as 3 and 1 is treated as 5, so that leading zeros are not ignored. The multiplication by 31 makes sure that order matters. This isn't cryptographically strong though: given a hash code for a short sequence, it's simple arithmetic to reverse it.
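For reference, the loop above as runnable Python (using plain unbounded ints; a Java-style hashCode would additionally wrap modulo 2^32):

```python
def bool_hash(bits):
    """Shift-accumulator hash: maps bit 0 to 3 and bit 1 to 5,
    combined with multiplier 31 so that order matters."""
    h = 0
    for bit in bits:
        h = h * 31 + 2 * bit + 3  # 2*0+3 = 3, 2*1+3 = 5
    return h
```

Unlike the plain binary interpretation, this distinguishes [0, 1, 1] from [1, 1], because the leading zero contributes a factor.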
Find ten integers > 0 that sum to 2011 while their reciprocals sum to 1,
e.g.
x1+x2+..+x10 = 2011
1/x1+1/x2+..+1/x10 = 1
I found this problem here http://blog.computationalcomplexity.org/2011/12/is-this-problem-too-hard-for-hs-math.html
I was wondering what the computation complexity was, and what types of algorithms can solve it.
EDIT 2:
I wrote the following brute-force code, which is fast enough. I didn't find any solutions though, so I need to tweak my assumptions slightly. I'm now confident I will find the solution.
from fractions import Fraction

pairs = [(i, j) for i in range(2, 30) for j in range(2, 30)]
x1x2 = set((i + j, Fraction(1, i) + Fraction(1, j)) for i, j in pairs)
print('x1x2', len(x1x2))
x1x2x3x4 = set((s1 + s2, f1 + f2) for s1, f1 in x1x2 for s2, f2 in x1x2 if f1 + f2 < 1)
print('x1x2x3x4', len(x1x2x3x4))
count = 0
for s, f in x1x2x3x4:
    count += 1
    if count % 1000 == 0:
        print('count', count)
    s2 = 2011 - s
    f2 = 1 - f
    for s3, f3 in x1x2:
        s4 = s2 - s3
        if s4 > 0:
            f4 = f2 - f3
            if f4 > 0:
                if (s4, f4) in x1x2x3x4:
                    print('s3f3', s3, f3)
                    print('sf', s, f)
Note that you cannot define computational complexity for a single problem instance, as once you know the answer the computational complexity is O(1), i.e. constant-time. Computational complexity can only be defined for an infinite family of problems.
One approach for solving this type of a problem would be to use backtracking search. Your algorithm spends too much time in searching parts of the 10-dimensional space that can't contain solutions. An efficient backtracking algorithm would
assign the variables in the order x1, x2, ..., x10
maintain the constraint x1 <= x2 <= ... <= x10
during the search, whenever a number xi has been assigned:
let S = x1 + ... + xi
let R = 1/x1 + ... + 1/xi
always check that S <= 2011 - (10 - i) * xi
always check that R <= 1 - (1 / [(2011 - S) / (10 - i)])
If these two constraints are not fulfilled during the search, there can't be a solution any more and the algorithm should backtrack immediately. Note that the constraints are based on the fact that the numbers are assigned in increasing order, i.e. xi <= x(i+1) in all cases.
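Here is a simplified Python sketch of such a backtracking search (my own illustration: it keeps the ordering constraint, the minimum-sum prune, and a reciprocal-sum-at-most-1 prune, but not the exact R bound above), demonstrated on a smaller target so it runs instantly:

```python
def reciprocal_search(n, total, xs=(), s=0, r_num=0, r_den=1):
    """Yield nondecreasing n-tuples of positive ints with sum `total`
    and reciprocal sum exactly 1, tracked as the fraction r_num/r_den."""
    depth = len(xs)
    if depth == n:
        if s == total and r_num == r_den:
            yield xs
        return
    lo = xs[-1] if xs else 1  # keep the sequence nondecreasing
    for x in range(lo, total - s + 1):
        if s + (n - depth) * x > total:
            break  # even the smallest completion overshoots the sum
        num = r_num * x + r_den  # reciprocal sum plus 1/x, kept exact
        den = r_den * x
        if num > den:
            continue  # reciprocal sum would exceed 1; larger x shrinks it
        yield from reciprocal_search(n, total, xs + (x,), s + x, num, den)
```

For instance, three numbers summing to 11 with reciprocals summing to 1 gives only (2, 3, 6): 2 + 3 + 6 = 11 and 1/2 + 1/3 + 1/6 = 1.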
Note: you can speed up search, limiting the search space and making calculations faster, by assuming that all x1, ..., x10 divide a given number evenly, e.g. 960. That is, you only consider such xi that 960 divided by xi is an integer. This makes calculating the fractional part much easier, as instead of checking that 1/x1 + ... equals 1 you can check that 960/x1 + ... equals 960. Because all the divisions are even and return integers, you don't need to use floating or rational arithmetics at all but everything works with integers only. Of course, the smaller the fixed modulus is the less solutions you can find, but this also makes the search faster.
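To illustrate the integer-only arithmetic, here is a small checker (my own sketch; math.lcm needs Python 3.9+). With a modulus divisible by every term, the condition 1/x1 + ... = 1 becomes modulus/x1 + ... = modulus, with no fractions involved:

```python
from math import lcm  # Python 3.9+

def check_solution(xs, total=2011, modulus=None):
    """Integer-only check that sum(xs) == total and the reciprocals sum to 1.
    If no modulus is given, use lcm(xs); every x must divide the modulus."""
    if modulus is None:
        modulus = lcm(*xs)
    if any(modulus % x for x in xs):
        return False  # modulus must be divisible by every term
    return sum(xs) == total and sum(modulus // x for x in xs) == modulus

# One known 10-number solution (it appears later in this thread);
# all of its terms divide 640, so modulus=640 works:
print(check_solution([2, 4, 5, 80, 80, 80, 160, 320, 640, 640], modulus=640))
```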
I note that one of the things on the next blog in the series, http://blog.computationalcomplexity.org/2011/12/solution-to-reciprocals-problem.html, is a paper on the problem, and a suggested dynamic programming approach to counting the number of answers. Since it is a dynamic programming approach, you should be able to turn that into a dynamic program to find those answers.
Dynamic programming solution (C#) based on the Bill Gasarch paper someone posted. However, this does not necessarily find the optimal (minimum number of numbers used) solution. It is only guaranteed to find a solution if allowed to go high enough, but that solution doesn't have to use the desired N. Basically, I feel like it "accidentally" works for (10, 2011).
Some example solutions for 2011:
10 numbers: 2, 4, 5, 80, 80, 80, 160, 320, 640, 640
11 numbers: 3, 6, 4, 12, 12, 24, 30, 480, 480, 480, 480
13 numbers: 2, 4, 5, 200, 200, 200, 200, 200, 200, 200, 200, 200, 200
15 numbers: 3, 6, 6, 12, 16, 16, 32, 32, 32, 64, 256, 256, 256, 512, 512
Anyone have an idea how to fix it to work in general?
using System;
using System.Collections.Generic;

namespace Recip
{
    class Program
    {
        static void Main(string[] args)
        {
            int year = 2011;
            int numbers = 20;
            int[,,] c = new int[year + 1, numbers + 1, numbers];
            List<int> queue = new List<int>();

            // need some initial guesses to expand on - use squares because 1/y * y = 1
            int num = 1;
            do
            {
                for (int i = 0; i < num; i++)
                    c[num * num, num, i] = num;
                queue.Add(num * num);
                num++;
            } while (num <= numbers && num * num <= year);

            // expand
            while (queue.Count > 0)
            {
                int x0 = queue[0];
                queue.RemoveAt(0);
                for (int i = 0; i <= numbers; i++)
                {
                    if (c[x0, i, 0] > 0)
                    {
                        int[] coefs = { 20, 4, 2, 2, 3, 3 };
                        int[] cons  = { 11, 6, 8, 9, 6, 8 };
                        int[] cool  = {  3, 2, 2, 2, 2, 2 };
                        int[] k1    = {  2, 2, 4, 3, 3, 2 };
                        int[] k2    = {  4, 4, 4, 6, 3, 6 };
                        int[] k3    = {  5, 0, 0, 0, 0, 0 };
                        int[] mul   = { 20, 4, 2, 2, 3, 3 };
                        for (int k = 0; k < 6; k++)
                        {
                            int x1 = x0 * coefs[k] + cons[k];
                            int c1 = i + cool[k];
                            if (x1 <= year && c1 <= numbers && c[x1, c1, 0] == 0)
                            {
                                queue.Add(x1);
                                c[x1, c1, 0] = k1[k];
                                c[x1, c1, 1] = k2[k];
                                int index = 2;
                                if (k == 0)
                                {
                                    c[x1, c1, index] = k3[k];
                                    index++;
                                }
                                int diff = index;
                                while (c[x0, i, index - diff] > 0)
                                {
                                    c[x1, c1, index] = c[x0, i, index - diff] * mul[k];
                                    index++;
                                }
                            }
                        }
                    }
                }
            }

            for (int n = 1; n < numbers; n++)
            {
                if (c[year, n, 0] == 0) continue;
                int ind = 0;
                while (ind < n && c[year, n, ind] > 0)
                {
                    Console.Write(c[year, n, ind] + ", ");
                    ind++;
                }
                Console.WriteLine();
            }
            Console.ReadLine();
        }
    }
}
There are Choose(2011,10) or about 10^26 sets of 10 numbers that add up to 2011. So, in order for a brute force approach to work, the search tree would have to be trimmed significantly.
Fortunately, there are a few ways to do that.
The first obvious way is to require that the numbers are ordered. This reduces the number of options by a factor of around 10^7.
The second is that we can detect early if our current partial solution can never lead to a complete solution. Since our values are ordered, the remaining numbers in the set are at least as large as the current number. Note that the sum of the numbers increases as the numbers get larger, while the sum of the reciprocals decreases.
There are two sure ways we can tell we're at a dead end:
We get the smallest possible total from where we are when we take all remaining numbers to be the same as the current number. If this smallest sum is too big, we'll never get less.
We get the largest possible sum of reciprocals when we take all remaining numbers to be the same as the current number. If this largest sum is less than 1, we'll never get to 1.
These two conditions set an upper bound on the next xi.
Thirdly, we can stop looking if our partial sum of reciprocals is greater than or equal to 1.
Putting all this together, here is a solution in C#:
static int[] x = new int[10];

static void Search(int depth, int xi, int sum, double rsum) {
    if (depth == 9) {
        // We know exactly what the last number should be
        // to make the sum 2011:
        xi = 2011 - sum;
        // Now check if the sum of reciprocals adds up as well
        if (Math.Abs(rsum + 1.0 / xi - 1.0) < 1e-12) {
            // We have a winner!
            x[depth] = xi;
            var s = string.Join(" ", Array.ConvertAll(x, n => n.ToString()));
            Console.WriteLine(s);
        }
    } else {
        int lastxi = xi;
        // The part of the reciprocal sum that is still missing:
        double remainder = 1.0 - rsum;
        // There are 2 ways xi can be too large:
        xi = Math.Min(
            // 1. If adding it (10 - depth) times to the sum
            // is greater than our total:
            (2011 - sum) / (10 - depth),
            // 2. If adding (10 - depth) times its reciprocal
            // still falls short of the missing remainder:
            (int)((10 - depth) / remainder));
        // We iterate towards smaller xi so we can stop
        // when the reciprocal sum is too large:
        while (xi >= lastxi) {
            double newRSum = rsum + 1.0 / xi;
            if (newRSum >= 1.0)
                break;
            x[depth] = xi;
            Search(depth + 1, xi, sum + xi, newRSum);
            xi--;
        }
    }
}

Search(0, 1, 0, 0);
If you used a brute force algorithm to iterate through all the combinations, you'd end up with the answers. But I don't think it's quite as big as 10*2011*2011, since you can arbitrarily postulate that x1 <= x2 <= ... <= x10.
I think a brute force approach would easily get the answer. However, I would imagine that the instructor is looking for a mathematical approach. I'm thinking the '1' must have some significance with regard to finding how to manipulate the equations to the answer. The '2011' seems arbitrary.