Google Apps Script - sorting without being case sensitive

I am writing a Google Apps Script attached to a Google Form and a Google Sheet. I have a list of names where some entries were typed in lower case and some in upper case. I am trying to sort them; however, the standard .sort() puts all the upper-case entries before the lower-case ones, which is very confusing.
Could you suggest how I can sort the data so that case is ignored for sorting, while the original upper and lower case of each entry is retained?
For example: var a = ["Charlie", "alpha", "delta", "Bravo"];
Desired output: ["alpha", "Bravo", "Charlie", "delta"].
Thank you.
Regards,
ray

You can pass a custom comparator function to JavaScript's sort, e.g.:
var a = ["Charlie", "alpha", "delta", "Bravo"];
a = a.sort(function(x, y){
x = x.toLowerCase()
y = y.toLowerCase()
if (x < y) {
return -1;
}
if (x > y) {
return 1;
}
return 0;
})
// Outputs [ "alpha", "Bravo", "Charlie", "delta" ]


Converting a math problem to dynamic programming

Currently I'm working on an optimization problem for a course I'm doing with a fellow student. It's basically described by three equations, where n is an index taking values between 1 and 1180, Pr is a known vector (all of its values are known and constant), and we have to find the vector Ps that results in the minimum value of Ef[1180].
Logically, the answer would be to set all values of Ps[n] to infinity. However, there are a few constraints. Furthermore, the values of Es and Ps must always be multiples of 1,000 to keep the state space small.
The above is what we figured out from the assignment description. However, we can't seem to figure out how to solve this as a dynamic programming problem. There are lots of examples around for going from a set of equations to a dynamic programming problem, but those examples all have two or three inputs and use a 2- or 3-dimensional dictionary, respectively, to facilitate data reuse. We essentially have 1180 inputs, and creating a 1180-dimensional dictionary is not feasible.
We tried formulating Bellman equations for this problem, but the professor told us this is wrong. Then we considered brute-forcing the state space, but this is an insane job since there are 43^1180 possible combinations of the input vector P_s. Some of our fellow students advised us to check out the checkerboard example on this Wikipedia page:
Wikipedia page on dynamic programming
However, this example seems to traverse the checkerboard only once. Using a cost function would always pick the highest possible value for Ps[n] to minimize Ef[n]. However, to pick such a positive value we must have Es[n] > 0, which can only happen when previous elements Ps[i] for i < n take negative values. But the cost function will prevent Ps from having negative values. Since the cost function does not allow negative values and the Es[n] >= 0 constraint does not allow them either, this results in a Ps containing only zeroes, which certainly does not give the lowest value of Ef[1180].
Any hints on how to continue would be nice. We have been staring at this problem for days now and we are completely lost at this point.
You want to minimize Ef[1180], which is equivalent to maximizing f defined below:
f(P) = \sum_{i=1}^{1180} P_i
under the constraint:
for all n <= 1180:
-6.5*10^5 <= \sum_{i=1}^{n} P_i <= 0
The recurrence formula looks like:
f(i, sumPs, v) {
    if i == 1180
        return { s: sumPs, solution: v }
    res = { s: -infinity, solution: [] }
    # Pr(i) > Ps(i)
    for psi in -21:min(Pr[i], 21)
        # Es(n-1) = -sumPs
        if psi <= -sumPs
            tmp = f(i+1, sumPs + psi, v + [psi])
            if tmp.s > res.s
                res.s = tmp.s
                res.v = tmp.v
    return res
}
f(0, 0, [])
The dynamic-programming approach is similar:
Initialize the first layer: an associative array with sumPs as the key and {s: sum, v: optional} as the value.
We could actually store nothing as the value and use a set (storing only the sums), but keeping v is convenient for debugging purposes.
Initialization:
    for psi in -21:min(Pr[0], 0)
        layer[psi] = {s: psi, v: [psi]}
To build layer i+1, you only need layer i:
    for i = 2:1180
        nextLayer = []
        for psi in range(-21, min(Pr[i-1], 21))
            for candidate in layer:
                if psi <= -candidate.s
                    maybe = candidate.s + psi
                    if !nextLayer[maybe]
                        nextLayer[maybe] = {s: maybe, v: candidate.v + [psi]}
        layer = nextLayer
NB: I have not handled the 1000 factor, but that should not be a problem
const Pr = [-10, 2, -2, 4, -1]

function f(i, sumPs, v) {
  if (i == 5) {
    return { s: sumPs, v }
  }
  let res = { s: -1e12, v: [] }
  for (let psi = -21; psi <= Math.min(Pr[i], 21); ++psi) {
    if (psi <= -sumPs) {
      let tmp = f(i + 1, sumPs + psi, v.concat(psi))
      if (tmp.s > res.s) {
        res.s = tmp.s
        res.v = tmp.v
      }
    }
  }
  return res
}

function dp(n, pr) {
  let layer = new Map
  for (let psi = -21; psi <= Math.min(pr[0], 0); ++psi) {
    layer.set(psi, { s: psi, v: [psi] })
  }
  for (let i = 2; i <= n; ++i) {
    let nextLayer = new Map
    for (let psi = -21; psi <= Math.min(pr[i - 1], 21); ++psi) {
      for (let [k, candidate] of layer) {
        if (psi <= -candidate.s) {
          const maybe = candidate.s + psi
          if (!nextLayer.has(maybe)) {
            nextLayer.set(maybe, { s: maybe, v: candidate.v.concat(psi) })
          }
        }
      }
    }
    layer = nextLayer
  }
  return [...layer.entries()].sort((a, b) => b[0] - a[0])[0][1]
}

console.log(f(0, 0, []))
console.log(dp(5, Pr))

Fuzzy string record search algorithm (supporting word transpose and character transpose)

I am trying to find the best algorithm for my particular application. I have searched around on SO, Google, read various articles about Levenshtein distances, etc. but honestly it's a bit out of my area of expertise. And most seem to find how similar two input strings are, like a Hamming distance between strings.
What I'm looking for is different, more of a fuzzy record search (and I'm sure there is a name for it, that I don't know to Google). I am sure someone has solved this problem before and I'm looking for a recommendation to point me in the right direction for my further research.
In my case I am needing a fuzzy search of a database of entries of music artists and their albums. As you can imagine, the database will have millions of entries so an algorithm that scales well is crucial. It's not important to my question that Artist and Album are in different columns, the database could just store all words in one column if that helped the search.
The database to search:
|-------------------|---------------------|
| Artist | Album |
|-------------------|---------------------|
| Alanis Morissette | Jagged Little Pill |
| Moby | Everything is Wrong |
| Air | Moon Safari |
| Pearl Jam | Ten |
| Nirvana | Nevermind |
| Radiohead | OK Computer |
| Beck | Odelay |
|-------------------|---------------------|
The query text will contain from just one word in the entire Artist_Album concatenation up to the entire thing. The query text is coming from OCR and is likely to have single character transpositions but the most likely thing is the words are not guaranteed to have the right order. Additionally, there could be extra words in the search that aren't a part of the album (like cover art text). For example, "OK Computer" might be at the top of the album and "Radiohead" below it, or some albums have text arranged in columns which intermixes the word orders.
Possible search strings:
C0mputer Rad1ohead
Pearl Ten Jan
Alanis Jagged Morisse11e Litt1e Pi11
Air Moon Virgin Records
Moby Everything
Note that with OCR, some letters will look like numbers, or the wrong letter completely (Jan instead of Jam). And in the case of Radiohead's OK Computer and Moby's Everything Is Wrong, the query text doesn't even have all of the words. In the case of Air's Moon Safari, the extra words Virgin Records are searched, but Safari is missing.
Is there a general algorithm that could return the single likeliest result from the database, and if none meet some "likeliness" score threshold, it returns nothing? I'm actually developing this in Python, but that's just a bonus, I'm looking more for where to get started researching.
Let's break the problem down in two parts.
First, you want to define some measure of likeness (this is called a metric). This metric should return a small number if the query text closely matches the album/artist cover, and return a larger number otherwise.
Second, you want a data structure that speeds this process up. Obviously, you don't want to calculate this metric every time a query is run.
part 1: the metric
You already mentioned Levenshtein distance, which is a great place to start.
Think outside the box though.
LD makes certain assumptions (each character replacement is equally likely, a deletion is as likely as an insertion, etc.). You can obviously improve the quality of this metric by taking into account what faults OCR is likely to introduce.
E.g. turning a '1' into an 'i' should not be penalized as harshly as turning a '0' into an '_'.
I would implement the metric in two stages. For any given two strings:
split both strings in tokens (assume space as the separator)
look for the most similar words (using a modified version of LD)
assign a final score based on 'matching words', 'missing words' and 'added words' (preferably weighted)
This is an example implementation (fiddle around with the constants):
static double m(String a, String b) {
    String[] aParts = a.split(" ");
    String[] bParts = b.split(" ");
    boolean[] bUsed = new boolean[bParts.length];

    int matchedTokens = 0;
    int tokensInANotInB = 0;
    int tokensInBNotInA = 0;

    for (int i = 0; i < aParts.length; i++) {
        String a0 = aParts[i];
        boolean wasMatched = false;
        for (int j = 0; j < bParts.length; j++) {
            String b0 = bParts[j];
            double d = levenshtein(a0, b0);
            /* If we can match the token a0 with a token from b,
             * update the number of matchedTokens
             * and escape the loop.
             */
            if (d < 2) {
                bUsed[j] = true;
                wasMatched = true;
                matchedTokens++;
                break;
            }
        }
        if (!wasMatched) {
            tokensInANotInB++;
        }
    }
    for (boolean partUsed : bUsed) {
        if (!partUsed) {
            tokensInBNotInA++;
        }
    }
    return (matchedTokens
            + tokensInANotInB * -0.3 // the query is allowed to contain extra words at minimal cost
            + tokensInBNotInA * -0.5 // the album title should not contain too many extra words
            ) / java.lang.Math.max(aParts.length, bParts.length);
}
This function uses a modified levenshtein function:
static double levenshtein(String x, String y) {
    double[][] dp = new double[x.length() + 1][y.length() + 1];
    for (int i = 0; i <= x.length(); i++) {
        for (int j = 0; j <= y.length(); j++) {
            if (i == 0) {
                dp[i][j] = j;
            }
            else if (j == 0) {
                dp[i][j] = i;
            }
            else {
                dp[i][j] = min(dp[i - 1][j - 1]
                        + costOfSubstitution(x.charAt(i - 1), y.charAt(j - 1)),
                        dp[i - 1][j] + 1,
                        dp[i][j - 1] + 1);
            }
        }
    }
    return dp[x.length()][y.length()];
}

// three-argument min used above (java.lang.Math.min only takes two arguments)
static double min(double a, double b, double c) {
    return Math.min(a, Math.min(b, c));
}

It uses the costOfSubstitution function (which works as explained):
static double costOfSubstitution(char a, char b) {
    if (a == b) {
        return 0.0;
    } else {
        // 1 and i
        if (a == '1' && b == 'i')
            return 0.5;
        if (a == 'i' && b == '1')
            return 0.5;
        // 0 and O
        if (a == '0' && b == 'o')
            return 0.5;
        if (a == 'o' && b == '0')
            return 0.5;
        if (a == '0' && b == 'O')
            return 0.5;
        if (a == 'O' && b == '0')
            return 0.5;
        // default
        return 1.0;
    }
}
I only included a couple of examples (turning '1' into 'i' or '0' into 'o').
But I'm sure you get the idea.
part 2: the data structure
Look into BK-trees. They are a data structure specifically designed to hold metric information. Your metric needs to be a genuine metric (in the mathematical sense of the word), but that's easily arranged.
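For a concrete picture, here is a minimal BK-tree sketch in Python (the asker's language). The class and helper names are mine, and it uses plain integer Levenshtein distance for illustration; the weighted metric above would have to be scaled and rounded to integer values, and must satisfy the triangle inequality, before it could be used this way.

def levenshtein(a, b):
    # classic dynamic-programming edit distance, kept integer-valued
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

class BKTree:
    def __init__(self, distance, items=()):
        self.distance = distance
        self.root = None  # each node is [item, {edge_distance: child_node}]
        for item in items:
            self.add(item)

    def add(self, item):
        if self.root is None:
            self.root = [item, {}]
            return
        node = self.root
        while True:
            d = self.distance(item, node[0])
            child = node[1].get(d)
            if child is None:
                node[1][d] = [item, {}]
                return
            node = child

    def search(self, query, radius):
        # triangle inequality: only follow edges whose distance lies within
        # d(query, node) +/- radius
        results, stack = [], [self.root] if self.root else []
        while stack:
            item, children = stack.pop()
            d = self.distance(query, item)
            if d <= radius:
                results.append((d, item))
            for edge, child in children.items():
                if d - radius <= edge <= d + radius:
                    stack.append(child)
        return results

tree = BKTree(levenshtein, ["Radiohead", "Nirvana", "Beck", "Moby", "Pearl Jam"])
print(tree.search("Rad1ohead", radius=2))  # [(1, 'Radiohead')]

Indexing individual artist and album tokens this way lets you find the closest database words for each OCR token without scanning millions of rows on every query.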

How do I convert a STAN model file to a graphviz DOT file or another graphical representation?

I have a STAN file describing a hierarchical model. I would like to visualize this hierarchy with all parameters by converting the STAN code to a Graphviz DOT file. Another graphical representation would do fine as well.
Consider the following small example:
data {
  int<lower=0> J;                     // number of items
  int<lower=0> y[J];                  // number of successes for j
  int<lower=0> n[J];                  // number of trials for j
}
parameters {
  real<lower=0,upper=1> theta[J];     // chance of success for j
  real<lower=0,upper=1> lambda;       // prior mean chance of success
  real<lower=0.1> kappa;              // prior count
}
transformed parameters {
  real<lower=0> alpha;                // prior success count
  real<lower=0> beta;                 // prior failure count
  alpha <- lambda * kappa;
  beta <- (1 - lambda) * kappa;
}
model {
  lambda ~ uniform(0,1);              // hyperprior
  kappa ~ pareto(0.1,1.5);            // hyperprior
  theta ~ beta(alpha,beta);           // prior
  y ~ binomial(n,theta);              // likelihood
}
generated quantities {
  real<lower=0,upper=1> avg;          // avg success
  int<lower=0,upper=1> above_avg[J];  // true if j is above avg
  int<lower=1,upper=J> rnk[J];        // rank of j
  int<lower=0,upper=1> highest[J];    // true if j is highest rank
  avg <- mean(theta);
  for (j in 1:J)
    above_avg[j] <- (theta[j] > avg);
  for (j in 1:J) {
    rnk[j] <- rank(theta,j) + 1;
    highest[j] <- rnk[j] == 1;
  }
}
Is there a way to parse this and convert it into a DOT language like file that I can draw to visualize the hierarchy?
I googled around a lot and the closest thing I could find to a parser was inside the http://gephi.github.io/ project. Not sure if that helps.
What I want to end up with is something similar to this:
There is no tool for that in the Stan repository. Part of the reason is, unlike the BUGS family, such a graph is not necessary for Stan to operate. But they are nice visualization tools, so if you wrote a converter I'm sure there would be interest in using it. My guess is the path of least resistance would involve converting the .stan file to the format expected by PyMC and using their graphing capabilities.
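As a starting point before writing a real converter, the graph for one specific model can simply be emitted by hand. A minimal sketch with the Python graphviz package follows; the edge list is my manual reading of the example model above, not something generated from the .stan file.

from graphviz import Digraph  # pip install graphviz

# dependencies read off the example model by hand:
# lambda, kappa -> alpha, beta -> theta -> y (J and n are data)
edges = [
    ("lambda", "alpha"), ("kappa", "alpha"),
    ("lambda", "beta"),  ("kappa", "beta"),
    ("alpha", "theta"),  ("beta", "theta"),
    ("theta", "y"),      ("n", "y"),
]

g = Digraph("hierarchical_model")
g.node("y", shape="box")  # observed data
g.node("n", shape="box")
for parent, child in edges:
    g.edge(parent, child)
print(g.source)  # DOT text; g.render("model") writes/renders it if the Graphviz binaries are installed

A real converter would have to parse the model and transformed parameters blocks to extract these edges automatically, which is the part that does not exist yet.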

Algorithm: Determine if a combination of min/max values fall within a given range

Imagine you have 3 buckets, but each of them has a hole in it, and you are trying to fill a bathtub. The bathtub has a minimum level of water it needs and a maximum level of water it can contain. By the time you reach the tub with a bucket it is not clear how much water will be in it, but you have a range of possible values.
Is it possible to adequately fill the tub with water?
Pretty much: you have 3 ranges (min, max); is there some sum of them that will fall within a 4th range?
For example:
Bucket 1 : 5-10L
Bucket 2 : 15-25L
Bucket 3 : 10-50L
Bathtub 100-150L
Is there some guaranteed combination of 1 2 and 3 that will fill the bathtub within the requisite range? Multiples of each bucket can be used.
EDIT: Now imagine there are 50 different buckets?
If the capacity of the tub is not very large (not greater than 10^6, for example), we can solve it using dynamic programming.
Approach:
Initialization: memo[X][Y] is an array used to memoize results, where X = number of buckets and Y = maximum capacity of the tub. Initialize memo[][] with -1.
Code:
bool dp(int bucketNum, int curVolume){
    if(curVolume > maxCap) return false;               // pruning extra branches
    if(curVolume >= minCap && curVolume <= maxCap){    // base case on success
        return true;
    }
    int &ret = memo[bucketNum][curVolume];
    if(ret != -1){                                     // this state has been visited earlier
        return ret;
    }
    ret = false;
    for(int i = minC[bucketNum]; i <= maxC[bucketNum]; i++){
        int newVolume = curVolume + i;
        for(int j = bucketNum; j <= 3; j++){
            ret |= dp(j, newVolume);
            if(ret == true) return ret;
        }
    }
    return ret;
}
Warning: Code not tested
Here's a naïve recursive solution in python that works just fine (although it doesn't find an optimal solution):
def match_helper(lower, upper, units, least_difference, fail=dict()):
    if upper < lower + least_difference:
        return None
    if fail.get((lower, upper)):
        return None
    exact_match = [u for u in units if u['lower'] >= lower and u['upper'] <= upper]
    if exact_match:
        return [exact_match[0]]
    for unit in units:
        if unit['upper'] > upper:
            continue
        recursive_match = match_helper(lower - unit['lower'], upper - unit['upper'], units, least_difference)
        if recursive_match:
            return [unit] + recursive_match
        else:
            fail[(lower, upper)] = 1
    return None

def match(lower, upper):
    units = [
        {'name': 'Bucket 1', 'lower': 5, 'upper': 10},
        {'name': 'Bucket 2', 'lower': 15, 'upper': 25},
        {'name': 'Bucket 3', 'lower': 10, 'upper': 50},
    ]
    least_difference = min([u['upper'] - u['lower'] for u in units])
    return match_helper(
        lower=lower,
        upper=upper,
        units=sorted(units, key=lambda u: u['upper']),
        least_difference=least_difference,
    )

result = match(100, 175)
if result:
    lower = sum([u['lower'] for u in result])
    upper = sum([u['upper'] for u in result])
    names = [u['name'] for u in result]
    print(lower, "-", upper)
    print(names)
else:
    print("No solution")
It prints "No solution" for 100-150, but for 100-175 it comes up with a solution of 5x bucket 1, 5x bucket 2.
Assuming you are saying that the "range" for each bucket is the amount of water that it may have when it reaches the tub, and all you care about is if they could possibly fill the tub...
Just take the "max" of each bucket and sum them. If that is in the range of what you consider the tub to be "filled" then it can.
Updated:
Given that buckets can be used multiple times, this seems to me like we're looking for solutions to a pair of equations.
Given buckets x, y and z we want to find a, b and c:
a*x.min + b*y.min + c*z.min >= bathtub.min
and
a*x.max + b*y.max + c*z.max <= bathtub.max
Re: http://en.wikipedia.org/wiki/Diophantine_equation
If bathtub.min and bathtub.max are both multiples of the greatest common divisor of the coefficients (the bucket minimums and maximums, respectively), then there are infinitely many solutions (i.e. we can fill the tub); otherwise there are no solutions (i.e. we can never fill the tub).
This can be solved with multiple applications of the change making problem.
Each Bucket.Min value is a currency denomination, and Bathtub.Min is the target value.
When you find a solution via a change-making algorithm, then apply one more constraint:
sum(each Bucket.Max in your solution) <= Bathtub.max
If this constraint is not met, throw out this solution and look for another. This will probably require a change to a standard change-making algorithm that allows you to try other solutions when one is found to not be suitable.
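A sketch of that idea in Python (function and variable names are mine): treat the bucket minimums as denominations and run an unbounded change-making DP that, for every reachable sum of minimums, tracks the smallest sum of maximums that can accompany it; the extra constraint then becomes a simple lookup.

def can_guarantee_fill(buckets, tub_min, tub_max):
    # best_max[s] = smallest possible sum of bucket maximums over all
    # multisets of buckets whose minimums add up to exactly s
    INF = float("inf")
    best_max = [INF] * (tub_max + 1)
    best_max[0] = 0
    for s in range(1, tub_max + 1):
        for b_min, b_max in buckets:
            if s >= b_min and best_max[s - b_min] + b_max < best_max[s]:
                best_max[s] = best_max[s - b_min] + b_max
    # a guaranteed fill exists if some minimum total >= tub_min can be
    # reached while the accompanying maximum total stays <= tub_max
    return any(best_max[s] <= tub_max for s in range(tub_min, tub_max + 1))

buckets = [(5, 10), (15, 25), (10, 50)]
print(can_guarantee_fill(buckets, 100, 150))  # False
print(can_guarantee_fill(buckets, 100, 175))  # True (e.g. 7 of bucket 2)

Unlike trying all combinations, this stays polynomial in tub_max times the number of bucket types, so the 50-bucket case from the edit is no problem.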
Initially, your target range is Bathtub.Range.
Each time you add an instance of a bucket to the solution, you reduce the target range for the remaining buckets.
For example, using your example buckets and tub:
Target Range = 100..150
Let's say we want to add a Bucket1 to the candidate solution. That then gives us
Target Range = 95..140
because if the rest of the buckets in the solution total < 95, then this Bucket1 might not be sufficient to fill the tub to 100, and if the rest of the buckets in the solution total > 140, then this Bucket1 might fill the tub over 150.
So, this gives you a quick way to check if a candidate solution is valid:
TargetRange = Bathtub.Range
foreach Bucket in CandidateSolution
    TargetRange.Min -= Bucket.Min
    TargetRange.Max -= Bucket.Max
if TargetRange.Min <= 0 AND TargetRange.Max >= 0 then solution found
if TargetRange.Max < 0 then solution is invalid (the buckets could overfill the tub)
This still leaves the question - How do you come up with the set of candidate solutions?
Brute force would try all possible combinations of buckets.
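For completeness, that brute force is short to write down. A Python sketch (names are mine), using the fact that a multiset of buckets is a guaranteed fill exactly when its minimums reach Bathtub.Min while its maximums stay at or below Bathtub.Max:

from itertools import product

def guaranteed_fill(buckets, tub_min, tub_max):
    # no bucket can appear more often than its own maximum alone allows
    bounds = [range(tub_max // b_max + 1) for (_, b_max) in buckets]
    for counts in product(*bounds):
        total_min = sum(c * b_min for c, (b_min, _) in zip(counts, buckets))
        total_max = sum(c * b_max for c, (_, b_max) in zip(counts, buckets))
        if total_min >= tub_min and total_max <= tub_max:
            return counts
    return None

buckets = [(5, 10), (15, 25), (10, 50)]
print(guaranteed_fill(buckets, 100, 150))  # None: no guaranteed fill exists
print(guaranteed_fill(buckets, 100, 175))  # (0, 7, 0): seven uses of bucket 2

This is fine for three bucket types but grows exponentially with their number, which is why the 50-bucket case calls for one of the dynamic-programming formulations above.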
Here is my solution for finding the optimal solution (least number of buckets). It compares the ratio of the maximums to the ratio of the minimums, to figure out the optimal number of buckets to fill the tub.
private static void BucketProblem()
{
    Range bathTub = new Range(100, 175);
    List<Range> buckets = new List<Range> {new Range(5, 10), new Range(15, 25), new Range(10, 50)};
    Dictionary<Range, int> result;
    bool canBeFilled = SolveBuckets(bathTub, buckets, out result);
}

private static bool BucketHelper(Range tub, List<Range> buckets, Dictionary<Range, int> results)
{
    Range bucket;
    int startBucket = -1;
    int fills = -1;

    for (int i = buckets.Count - 1; i >= 0; i--)
    {
        bucket = buckets[i];
        double maxRatio = (double)tub.Maximum / bucket.Maximum;
        double minRatio = (double)tub.Minimum / bucket.Minimum;
        if (maxRatio >= minRatio)
        {
            startBucket = i;
            if (maxRatio - minRatio > 1)
                fills = (int)minRatio + 1;
            else
                fills = (int)maxRatio;
            break;
        }
    }

    if (startBucket < 0)
        return false;

    bucket = buckets[startBucket];
    tub.Maximum -= bucket.Maximum * fills;
    tub.Minimum -= bucket.Minimum * fills;
    results.Add(bucket, fills);

    return tub.Maximum == 0 || tub.Minimum <= 0 || startBucket == 0 || BucketHelper(tub, buckets.GetRange(0, startBucket), results);
}

public static bool SolveBuckets(Range tub, List<Range> buckets, out Dictionary<Range, int> results)
{
    results = new Dictionary<Range, int>();
    buckets = buckets.OrderBy(b => b.Minimum).ToList();
    return BucketHelper(new Range(tub.Minimum, tub.Maximum), buckets, results);
}

How to design an algorithm to calculate countdown style maths number puzzle

I have always wanted to do this but every time I start thinking about the problem it blows my mind because of its exponential nature.
The problem solver I want to be able to understand and code is for the countdown maths problem:
Given a set of numbers X1 to X5, calculate how they can be combined using mathematical operations to make Y.
You can apply multiplication, division, addition and subtraction.
So how does 1,3,7,6,8,3 make 348?
Answer: (((8 * 7) + 3) -1) *6 = 348.
How to write an algorithm that can solve this problem? Where do you begin when trying to solve a problem like this? What important considerations do you have to think about when designing such an algorithm?
Very quick and dirty solution in Java:
public class JavaApplication1
{
public static void main(String[] args)
{
List<Integer> list = Arrays.asList(1, 3, 7, 6, 8, 3);
for (Integer integer : list) {
List<Integer> runList = new ArrayList<>(list);
runList.remove(integer);
Result result = getOperations(runList, integer, 348);
if (result.success) {
System.out.println(integer + result.output);
return;
}
}
}
public static class Result
{
public String output;
public boolean success;
}
public static Result getOperations(List<Integer> numbers, int midNumber, int target)
{
Result midResult = new Result();
if (midNumber == target) {
midResult.success = true;
midResult.output = "";
return midResult;
}
for (Integer number : numbers) {
List<Integer> newList = new ArrayList<Integer>(numbers);
newList.remove(number);
if (newList.isEmpty()) {
if (midNumber - number == target) {
midResult.success = true;
midResult.output = "-" + number;
return midResult;
}
if (midNumber + number == target) {
midResult.success = true;
midResult.output = "+" + number;
return midResult;
}
if (midNumber * number == target) {
midResult.success = true;
midResult.output = "*" + number;
return midResult;
}
if (midNumber / number == target) {
midResult.success = true;
midResult.output = "/" + number;
return midResult;
}
midResult.success = false;
midResult.output = "f" + number;
return midResult;
} else {
midResult = getOperations(newList, midNumber - number, target);
if (midResult.success) {
midResult.output = "-" + number + midResult.output;
return midResult;
}
midResult = getOperations(newList, midNumber + number, target);
if (midResult.success) {
midResult.output = "+" + number + midResult.output;
return midResult;
}
midResult = getOperations(newList, midNumber * number, target);
if (midResult.success) {
midResult.output = "*" + number + midResult.output;
return midResult;
}
midResult = getOperations(newList, midNumber / number, target);
if (midResult.success) {
midResult.output = "/" + number + midResult.output;
return midResult;
}
}
}
return midResult;
}
}
UPDATE
It's basically just a simple brute-force algorithm with exponential complexity.
However, you can gain some improvement by leveraging a heuristic function which helps you order the sequence of numbers or (and) operations processed at each level of the getOperations() recursion.
An example of such a heuristic function is the difference between the intermediate result and the target result.
This way, however, only the best-case and average-case complexities get improved. The worst-case complexity remains untouched.
Worst-case complexity can be improved by some kind of branch cutting. I'm not sure if it's possible in this case.
Sure it's exponential but it's tiny so a good (enough) naive implementation would be a good start. I suggest you drop the usual infix notation with bracketing, and use postfix, it's easier to program. You can always prettify the outputs as a separate stage.
Start by listing and evaluating all the (valid) sequences of numbers and operators. For example (in postfix):
1 3 7 6 8 3 + + + + + -> 28
1 3 7 6 8 3 + + + + - -> 26
My Java is laughable, I don't come here to be laughed at so I'll leave coding this up to you.
To all the smart people reading this: yes, I know that for even a small problem like this there are smarter approaches which are likely to be faster, I'm just pointing OP towards an initial working solution. Someone else can write the answer with the smarter solution(s).
So, to answer your questions:
I begin with an algorithm that I think will lead me quickly to a working solution. In this case the obvious (to me) choice is exhaustive enumeration and testing of all possible calculations.
If the obvious algorithm looks unappealing for performance reasons I'll start thinking more deeply about it, recalling other algorithms that I know about which are likely to deliver better performance. I may start coding one of those first instead.
If I stick with the exhaustive algorithm and find that the run-time is, in practice, too long, then I might go back to the previous step and code again. But it has to be worth my while, there's a cost/benefit assessment to be made -- as long as my code can outperform Rachel Riley I'd be satisfied.
Important considerations include my time vs computer time, mine costs a helluva lot more.
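To make the postfix suggestion concrete, here is a tiny evaluator sketch in Python rather than Java (my code, not the answerer's). The search is then just a matter of feeding it every permutation of numbers, every operator choice and every valid interleaving:

def eval_postfix(tokens):
    # returns None for sequences that are not valid postfix, or that would
    # need a division with a remainder (not allowed in the game)
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b,
           '/': lambda a, b: a // b if b != 0 and a % b == 0 else None}
    stack = []
    for t in tokens:
        if isinstance(t, int):
            stack.append(t)
            continue
        if len(stack) < 2:
            return None
        b, a = stack.pop(), stack.pop()
        value = ops[t](a, b)
        if value is None:
            return None
        stack.append(value)
    return stack[0] if len(stack) == 1 else None

print(eval_postfix([1, 3, 7, 6, 8, 3, '+', '+', '+', '+', '+']))  # 28
print(eval_postfix([8, 7, '*', 3, '+', 1, '-', 6, '*']))          # 348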
A working solution in c++11 below.
The basic idea is to use a stack-based evaluation (see RPN) and convert the viable solutions to infix notation for display purposes only.
If we have N input digits, we'll use (N-1) operators, as each operator is binary.
First we create valid permutations of operands and operators (the selector_ array). A valid permutation is one that can be evaluated without stack underflow and which ends with exactly one value (the result) on the stack. Thus 1 1 + is valid, but 1 + 1 is not.
We test each such operand-operator permutation with every permutation of operands (the values_ array) and every combination of operators (the ops_ array). Matching results are pretty-printed.
Arguments are taken from command line as [-s] <target> <digit>[ <digit>...]. The -s switch prevents exhaustive search, only the first matching result is printed.
(use ./mathpuzzle 348 1 3 7 6 8 3 to get the answer for the original question)
This solution doesn't allow concatenating the input digits to form numbers. That could be added as an additional outer loop.
The working code can be downloaded from here. (Note: I updated that code with support for concatenating input digits to form a solution)
See code comments for additional explanation.
#include <iostream>
#include <vector>
#include <algorithm>
#include <stack>
#include <iterator>
#include <string>
#include <limits>   // std::numeric_limits
#include <cstdlib>  // std::exit
namespace {
enum class Op {
Add,
Sub,
Mul,
Div,
};
const std::size_t NumOps = static_cast<std::size_t>(Op::Div) + 1;
const Op FirstOp = Op::Add;
using Number = int;
class Evaluator {
std::vector<Number> values_; // stores our digits/number we can use
std::vector<Op> ops_; // stores the operators
std::vector<char> selector_; // used to select digit (0) or operator (1) when evaluating. should be std::vector<bool>, but that's broken
template <typename T>
using Stack = std::stack<T, std::vector<T>>;
// checks if a given number/operator order can be evaluated or not
bool isSelectorValid() const {
int numValues = 0;
for (auto s : selector_) {
if (s) {
if (--numValues <= 0) {
return false;
}
}
else {
++numValues;
}
}
return (numValues == 1);
}
// evaluates the current values_ and ops_ based on selector_
Number eval(Stack<Number> &stack) const {
auto vi = values_.cbegin();
auto oi = ops_.cbegin();
for (auto s : selector_) {
if (!s) {
stack.push(*(vi++));
continue;
}
Number top = stack.top();
stack.pop();
switch (*(oi++)) {
case Op::Add:
stack.top() += top;
break;
case Op::Sub:
stack.top() -= top;
break;
case Op::Mul:
stack.top() *= top;
break;
case Op::Div:
if (top == 0) {
return std::numeric_limits<Number>::max();
}
Number res = stack.top() / top;
if (res * top != stack.top()) {
return std::numeric_limits<Number>::max();
}
stack.top() = res;
break;
}
}
Number res = stack.top();
stack.pop();
return res;
}
bool nextValuesPermutation() {
return std::next_permutation(values_.begin(), values_.end());
}
bool nextOps() {
for (auto i = ops_.rbegin(), end = ops_.rend(); i != end; ++i) {
std::size_t next = static_cast<std::size_t>(*i) + 1;
if (next < NumOps) {
*i = static_cast<Op>(next);
return true;
}
*i = FirstOp;
}
return false;
}
bool nextSelectorPermutation() {
// the start permutation is always valid
do {
if (!std::next_permutation(selector_.begin(), selector_.end())) {
return false;
}
} while (!isSelectorValid());
return true;
}
static std::string buildExpr(const std::string& left, char op, const std::string &right) {
return std::string("(") + left + ' ' + op + ' ' + right + ')';
}
std::string toString() const {
Stack<std::string> stack;
auto vi = values_.cbegin();
auto oi = ops_.cbegin();
for (auto s : selector_) {
if (!s) {
stack.push(std::to_string(*(vi++)));
continue;
}
std::string top = stack.top();
stack.pop();
switch (*(oi++)) {
case Op::Add:
stack.top() = buildExpr(stack.top(), '+', top);
break;
case Op::Sub:
stack.top() = buildExpr(stack.top(), '-', top);
break;
case Op::Mul:
stack.top() = buildExpr(stack.top(), '*', top);
break;
case Op::Div:
stack.top() = buildExpr(stack.top(), '/', top);
break;
}
}
return stack.top();
}
public:
Evaluator(const std::vector<Number>& values) :
values_(values),
ops_(values.size() - 1, FirstOp),
selector_(2 * values.size() - 1, 0) {
std::fill(selector_.begin() + values_.size(), selector_.end(), 1);
std::sort(values_.begin(), values_.end());
}
// check for solutions
// 1) we create valid permutations of our selector_ array (eg: "1 1 + 1 +",
// "1 1 1 + +", but skip "1 + 1 1 +" as that cannot be evaluated
// 2) for each evaluation order, we permutate our values
// 3) for each value permutation we check with each combination of
// operators
//
// In the first version I used a local stack in eval() (see toString()) but
// it turned out to be a performance bottleneck, so now I use a cached
// stack. Reusing the stack gives an order of magnitude speed-up (from
// 4.3sec to 0.7sec) due to avoiding repeated allocations. Using
// std::vector as a backing store also gives a slight performance boost
// over the default std::deque.
std::size_t check(Number target, bool singleResult = false) {
Stack<Number> stack;
std::size_t res = 0;
do {
do {
do {
Number value = eval(stack);
if (value == target) {
++res;
std::cout << target << " = " << toString() << "\n";
if (singleResult) {
return res;
}
}
} while (nextOps());
} while (nextValuesPermutation());
} while (nextSelectorPermutation());
return res;
}
};
} // namespace
int main(int argc, const char **argv) {
int i = 1;
bool singleResult = false;
if (argc > 1 && std::string("-s") == argv[1]) {
singleResult = true;
++i;
}
if (argc < i + 2) {
std::cerr << argv[0] << " [-s] <target> <digit>[ <digit>]...\n";
std::exit(1);
}
Number target = std::stoi(argv[i]);
std::vector<Number> values;
while (++i < argc) {
values.push_back(std::stoi(argv[i]));
}
Evaluator evaluator{values};
std::size_t res = evaluator.check(target, singleResult);
if (!singleResult) {
std::cout << "Number of solutions: " << res << "\n";
}
return 0;
}
Input is obviously a set of digits and operators: D={1,3,7,6,8,3} and Op={+,-,*,/}. The most straightforward algorithm would be a brute-force solver, which enumerates all possible combinations of these sets, where the elements of Op can be used as often as wanted but elements from D are used exactly once. Pseudo code:
D={1,3,7,6,8,3}
Op={+,-,*,/}
Solution=348
for each permutation D_ of D:
    for each binary tree T with D_ as its leafs:
        for each sequence of operators Op_ from Op with length |D_|-1:
            label each inner tree node with operators from Op_
            result = compute T using infix traversal
            if result == Solution
                return T
return nil
Other than that: read jedrus07's and HPM's answers.
By far the easiest approach is to intelligently brute force it. There is only a finite number of expressions you can build out of 6 numbers and 4 operators; simply go through all of them.
How many? Since you don't have to use all numbers and may use the same operator multiple times, this problem is equivalent to: how many labeled strictly binary trees (aka full binary trees) can you make with at most 6 leaves and four possible labels for each non-leaf node?
The amount of full binary trees with n leaves is equal to catalan(n-1). You can see this as follows:
Every full binary tree with n leaves has n-1 internal nodes and corresponds to a non-full binary tree with n-1 nodes in a unique way (just delete all the leaves from the full one to get it). There happen to be catalan(n) possible binary trees with n nodes, so we can say that a strictly binary tree with n leaves has catalan(n-1) possible different structures.
There are 4 possible operators for each non-leaf node: 4^(n-1) possibilities
The leaves can be numbered in n! * (6 choose n) different ways. (Divide this by k! for each number that occurs k times, or just make sure all numbers are different.)
So for 6 different numbers and 4 possible operators you get Sum(n=1...6) [ Catalan(n-1) * 6!/(6-n)! * 4^(n-1) ] possible expressions for a total of 33,665,406. Not a lot.
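As a quick sanity check of that total, the sum can be evaluated directly (a sketch using math.comb):

from math import comb, factorial

def catalan(n):
    return comb(2 * n, n) // (n + 1)

total = sum(catalan(n - 1) * factorial(6) // factorial(6 - n) * 4 ** (n - 1)
            for n in range(1, 7))
print(total)  # 33665406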
How do you enumerate these trees?
Given a collection of all trees with n-1 or fewer nodes, you can create all trees with n nodes by systematically pairing all of the n-1 node trees with the empty tree, all n-2 node trees with the 1 node tree, all n-3 node trees with all 2 node trees, etc., and using them as the left and right subtrees of a newly formed tree.
So starting with an empty set you first generate the tree that has just a root node; then, from a new root, you can use that either as a left or a right subtree, which yields the two possible two-node trees. And so on.
You can turn them into a set of expressions on the fly (just loop over the operators and numbers) and evaluate them as you go until one yields the target number.
I've written my own countdown solver, in Python.
Here's the code; it is also available on GitHub:
#!/usr/bin/env python3
import sys
from itertools import combinations, product, zip_longest
from functools import lru_cache

assert sys.version_info >= (3, 6)


class Solutions:

    def __init__(self, numbers):
        self.all_numbers = numbers
        self.size = len(numbers)
        self.all_groups = self.unique_groups()

    def unique_groups(self):
        all_groups = {}
        all_numbers, size = self.all_numbers, self.size
        for m in range(1, size+1):
            for numbers in combinations(all_numbers, m):
                if numbers in all_groups:
                    continue
                all_groups[numbers] = Group(numbers, all_groups)
        return all_groups

    def walk(self):
        for group in self.all_groups.values():
            yield from group.calculations


class Group:

    def __init__(self, numbers, all_groups):
        self.numbers = numbers
        self.size = len(numbers)
        self.partitions = list(self.partition_into_unique_pairs(all_groups))
        self.calculations = list(self.perform_calculations())

    def __repr__(self):
        return str(self.numbers)

    def partition_into_unique_pairs(self, all_groups):
        # The pairs are unordered: a pair (a, b) is equivalent to (b, a).
        # Therefore, for pairs of equal length only half of all combinations
        # need to be generated to obtain all pairs; this is set by the limit.
        if self.size == 1:
            return
        numbers, size = self.numbers, self.size
        limits = (self.halfbinom(size, size//2), )
        unique_numbers = set()
        for m, limit in zip_longest(range((size+1)//2, size), limits):
            for numbers1, numbers2 in self.paired_combinations(numbers, m, limit):
                if numbers1 in unique_numbers:
                    continue
                unique_numbers.add(numbers1)
                group1, group2 = all_groups[numbers1], all_groups[numbers2]
                yield (group1, group2)

    def perform_calculations(self):
        if self.size == 1:
            yield Calculation.singleton(self.numbers[0])
            return
        for group1, group2 in self.partitions:
            for calc1, calc2 in product(group1.calculations, group2.calculations):
                yield from Calculation.generate(calc1, calc2)

    @classmethod
    def paired_combinations(cls, numbers, m, limit):
        for cnt, numbers1 in enumerate(combinations(numbers, m), 1):
            numbers2 = tuple(cls.filtering(numbers, numbers1))
            yield (numbers1, numbers2)
            if cnt == limit:
                return

    @staticmethod
    def filtering(iterable, elements):
        # filter elements out of an iterable, return the remaining elements
        elems = iter(elements)
        k = next(elems, None)
        for n in iterable:
            if n == k:
                k = next(elems, None)
            else:
                yield n

    @staticmethod
    @lru_cache()
    def halfbinom(n, k):
        if n % 2 == 1:
            return None
        prod = 1
        for m, l in zip(reversed(range(n+1-k, n+1)), range(1, k+1)):
            prod = (prod*m)//l
        return prod//2


class Calculation:

    def __init__(self, expression, result, is_singleton=False):
        self.expr = expression
        self.result = result
        self.is_singleton = is_singleton

    def __repr__(self):
        return self.expr

    @classmethod
    def singleton(cls, n):
        return cls(f"{n}", n, is_singleton=True)

    @classmethod
    def generate(cls, calca, calcb):
        if calca.result < calcb.result:
            calca, calcb = calcb, calca
        for result, op in cls.operations(calca.result, calcb.result):
            expr1 = f"{calca.expr}" if calca.is_singleton else f"({calca.expr})"
            expr2 = f"{calcb.expr}" if calcb.is_singleton else f"({calcb.expr})"
            yield cls(f"{expr1} {op} {expr2}", result)

    @staticmethod
    def operations(x, y):
        yield (x + y, '+')
        if x > y:                   # exclude non-positive results
            yield (x - y, '-')
        if y > 1 and x > 1:         # exclude trivial results
            yield (x * y, 'x')
        if y > 1 and x % y == 0:    # exclude trivial and non-integer results
            yield (x // y, '/')


def countdown_solver():
    # input: target and numbers. If you want to play with more or less than
    # 6 numbers, use the second version of 'unsorted_numbers'.
    try:
        target = int(sys.argv[1])
        unsorted_numbers = (int(sys.argv[n+2]) for n in range(6))  # for 6 numbers
        # unsorted_numbers = (int(n) for n in sys.argv[2:])        # for any numbers
        numbers = tuple(sorted(unsorted_numbers, reverse=True))
    except (IndexError, ValueError):
        print("You must provide a target and numbers!")
        return

    solutions = Solutions(numbers)
    smallest_difference = target
    bestresults = []
    for calculation in solutions.walk():
        diff = abs(calculation.result - target)
        if diff <= smallest_difference:
            if diff < smallest_difference:
                bestresults = [calculation]
                smallest_difference = diff
            else:
                bestresults.append(calculation)
    output(target, smallest_difference, bestresults)


def output(target, diff, results):
    print(f"\nThe closest results differ from {target} by {diff}. They are:\n")
    for calculation in results:
        print(f"{calculation.result} = {calculation.expr}")


if __name__ == "__main__":
    countdown_solver()
The algorithm works as follows:
The numbers are put into a tuple of length 6 in descending order. Then, all unique subgroups of lengths 1 to 6 are created, the smallest groups first.
Example: (75, 50, 5, 9, 1, 1) -> {(75), (50), (9), (5), (1), (75, 50), (75, 9), (75, 5), ..., (75, 50, 9, 5, 1, 1)}.
Next, the groups are organised into a hierarchical tree: every group is partitioned into all unique unordered pairs of its non-empty subgroups.
Example: (9, 5, 1, 1) -> [(9, 5, 1) + (1), (9, 1, 1) + (5), (5, 1, 1) + (9), (9, 5) + (1, 1), (9, 1) + (5, 1)].
Within each group of numbers, the calculations are performed and the results are stored. For groups of length 1, the result is simply the number itself. For larger groups, the calculations are carried out on every pair of subgroups: in each pair, all results of the first subgroup are combined with all results of the second subgroup using +, -, x and /, and the valid outcomes are stored.
Example: (75, 5) consists of the pair ((75), (5)). The result of (75) is 75; the result of (5) is 5; the results of (75, 5) are [75+5=80, 75-5=70, 75*5=375, 75/5=15].
In this manner, all results are generated, from the smallest groups to the largest. Finally, the algorithm iterates through all results and selects the ones that are the closest match to the target number.
For a group of m numbers, the maximum number of arithmetic computations is
comps[m] = 4*sum(binom(m, k)*comps[k]*comps[m-k]//(1 + (2*k)//m) for k in range(1, m//2+1))
For all groups of length 1 to 6, the maximum total number of computations is then
total = sum(binom(n, m)*comps[m] for m in range(1, n+1))
which is 1144386. In practice, it will be much less, because the algorithm reuses the results of duplicate groups, ignores trivial operations (adding 0, multiplying by 1, etc), and because the rules of the game dictate that intermediate results must be positive integers (which limits the use of the division operator).
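Taking comps[1] = 1 (a group of one number is just yielded, with no arithmetic), those two formulas can be evaluated directly; a quick check, with math.comb standing in for binom:

from math import comb

comps = {1: 1}
for m in range(2, 7):
    comps[m] = 4 * sum(comb(m, k) * comps[k] * comps[m - k] // (1 + (2 * k) // m)
                       for k in range(1, m // 2 + 1))

total = sum(comb(6, m) * comps[m] for m in range(1, 7))
print(total)  # 1144386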
I think you need to strictly define the problem first: what you are allowed to do and what you are not. You can start by making it simple and only allowing multiplication, division, subtraction and addition.
Now you know your problem space: the set of inputs, the set of available operations and the desired output. If you have only 4 operations and x inputs, the number of combinations is less than:
the number of orders in which you can carry out the operations (x!) times the possible choices of operations at every step: 4^x. As you can see, for 6 numbers this gives a reasonable 2,949,120 operations. This means that this may be your limit for a brute-force algorithm.
Once you have brute force and you know it works, you can start improving your algorithm with some sort of A* algorithm, which would require you to define heuristic functions.
In my opinion the best way to think about it is as a search problem. The main difficulty will be finding good heuristics, or ways to reduce your problem space (if you have numbers that do not add up to the answer, you will need at least one multiplication, etc.). Start small, build on that, and ask follow-up questions once you have some code.
I wrote a terminal application to do this:
https://github.com/pg328/CountdownNumbersGame/tree/main
Inside, I've included an illustration of the calculation of the size of the solution space (it's n*((n-1)!^2)*2^(n-1), so n=6 -> 2,764,800. I know, gross), and more importantly why that is. My implementation is there if you care to check it out, but in case you don't I'll explain here.
Essentially, at worst it is brute force, because as far as I know it's impossible to determine whether any specific branch will result in a valid answer without explicitly checking. Having said that, the average case is some fraction of that: it's {that number} divided by the number of valid solutions (I tend to see around 1000 on my program, where 10 or so are unique and the rest are permutations of those 10). If I handwaved a number, I'd say roughly 2,765 branches to check, which takes like no time. (Yes, even in Python.)
TL;DR: Even though the solution space is huge and it takes a couple million operations to find all solutions, only one answer is needed. Best route is brute force til you find one and spit it out.
I wrote a slightly simpler version:
for every combination of 2 (distinct) elements from the list, combine them using +, -, *, / (note that since a > b, only a - b is needed, and a / b only if a % b == 0)
if the combination equals the target then record the solution
recursively call on the reduced lists
import sys


def driver():
    try:
        target = int(sys.argv[1])
        nums = list((int(sys.argv[i+2]) for i in range(6)))
    except (IndexError, ValueError):
        print("Provide a target followed by 6 numbers")
        return
    solutions = list()
    solve(target, nums, list(), solutions)
    unique = set()
    final = list()
    for s in solutions:
        a = '-'.join(sorted(s))
        if a not in unique:
            unique.add(a)
            final.append(s)
    for s in final:  # print them out
        print(s)


def solve(target, nums, path, solutions):
    if len(nums) == 1:
        return
    distinct = sorted(list(set(nums)), reverse=True)
    rem1 = list(distinct)
    for n1 in distinct:  # reduce list by combining a pair
        rem1.remove(n1)
        for n2 in rem1:
            rem2 = list(nums)  # in case of duplicates we need to start with the full list and take out the n1,n2 pair of elements
            rem2.remove(n1)
            rem2.remove(n2)
            combine(target, solutions, path, rem2, n1, n2, '+')
            combine(target, solutions, path, rem2, n1, n2, '-')
            if n2 > 1:
                combine(target, solutions, path, rem2, n1, n2, '*')
                if not n1 % n2:
                    combine(target, solutions, path, rem2, n1, n2, '//')


def combine(target, solutions, path, rem2, n1, n2, symb):
    lst = list(rem2)
    ans = eval("{0}{2}{1}".format(n1, n2, symb))
    newpath = path + ["{0}{3}{1}={2}".format(n1, n2, ans, symb[0])]
    if ans == target:
        solutions += [newpath]
    else:
        lst.append(ans)
        solve(target, lst, newpath, solutions)


if __name__ == "__main__":
    driver()
