How do you find the definition of a function when all you have is a huge set of input/output pairs? - algorithm

Suppose that you were given a list of input/output pairs:
f 0 = 0
f 1 = 2
f 2 = 1
f 3 = -1
f 4 = 0
f 5 = 0
f 6 = -76
f 7 = -3
f 8 = 3
f 9 = -1
f 10 = -1
f 11 = -6
f 12 = -1
f 13 = -1
f 14 = 4
f 15 = -2
f 16 = -10
f 17 = 0
f 18 = 0
f 19 = -1
f 20 = 2
f 21 = 3
f 22 = 0
f 23 = 4
f 24 = 2
f 25 = -1
f 26 = 0
f 27 = 0
f 28 = -4
f 29 = -2
f 30 = -14
Now suppose you were asked to find the definition of f using a proper, small mathematical formula instead of an enumeration of values. That is, the answer should be f x = floor(tan(x*x-3)) (or similar), because that is a small formula that is correct for every input. How would you do it?

So let's simplify. You want a function such that
f 1 = 10
f 2 = 3
f 3 = 8
There exists a formula for immediately finding a polynomial function which meets these demands. In particular
f x = 6 * x * x - 25 * x + 29
works. It turns out to be the case that if you have the graph of any function
{ (x_1, y_1), (x_2, y_2), ..., (x_i, y_i) }
you can immediately build a polynomial which exactly matches those inputs and outputs.
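For concreteness, here is a minimal Haskell sketch of one standard way to do this, Lagrange interpolation (the name lagrange and the use of Double are my own choices, not part of the original answer):

-- p(x) = sum_i y_i * prod_{j /= i} (x - x_j) / (x_i - x_j)
-- passes through every given point, provided the x_i are distinct.
lagrange :: [(Double, Double)] -> Double -> Double
lagrange pts x = sum [ yi * basis xi | (xi, yi) <- pts ]
  where basis xi = product [ (x - xj) / (xi - xj) | (xj, _) <- pts, xj /= xi ]

-- e.g. lagrange [(1,10),(2,3),(3,8)] 2 == 3.0, matching the quadratic above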
So, given that polynomials like this exist, you're never going to solve your problem (finding a particular solution like floor(tan(x*x-3))) without enforcing more constraints. In particular, if you don't somehow outlaw or penalize polynomials, then I'm always going to deliver them to you.
In general, what you'd like to do is (a) define a search space and (b) define a metric of fitness, also known as a loss function. If your search space is finite then you have yourself a solution immediately: rank every element of your search space according to your loss function and select randomly from the set of solutions which tie for best.
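As an illustration, here is a minimal sketch of that finite-space procedure (the loss argument and the name best are placeholders of my own):

-- Rank every candidate by its loss and keep the ones that tie for the minimum;
-- picking randomly among `best loss cs` is then trivial.
-- Assumes a non-empty candidate list.
best :: Ord s => (c -> s) -> [c] -> [c]
best loss cs = [ c | c <- cs, loss c == m ]
  where m = minimum (map loss cs)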
What it sounds like you're asking for is much harder, though: if you're looking through the space of all possible programs, that space is unbelievably large. Searching it exhaustively is impossible unless we constrain ourselves heavily or accept approximation. Second, we need a very good understanding of your loss function and how it interacts with the search space, since we'll want to make intelligent guesses to move forward through this vast space.
You mention genetic algorithms—they're often lauded for this kind of work and indeed they can be a method of driving search through a large space with an uncertain loss function, but they also fail as often as they succeed. Someone who is genuinely skilled at using genetic algorithms to solve problems will spend all of their time crafting the search space and the loss function to direct the algorithm toward meaningful answers.
Now this can be done for general programs if you're careful. In fact, this was the subject of last year's ICFP programming contest. In particular, search on this page for "Rules of the ICFP Contest 2013" to see the set up.

I think feed-forward neural networks (FFNN) and genetic programming (GP) are good techniques for simulating complicated functions.
If you need the function as a polynomial, use GP; otherwise an FFNN is very simple, and MATLAB has a library for it.

I think the "interpolation" answers don't address what I am asking. Maybe I was not clear enough, but fortunately I've managed to get a semi-satisfactory answer to my question using a brute-force search algorithm I wrote myself. Using only a list of input/output pairs, as presented in the question, I was able to recover the original function. The comments on this snippet should explain it:
import Control.Monad.Omega

{- First we define a simple evaluator for mathematical expressions -}
data A = Add A A | Mul A A | Div A A | Sub A A | Pow A A |
         Sqrt A | Tan A | Sin A | Cos A |
         Num Float | X deriving (Show)

eval :: A -> Float -> Float
eval (Add a b) x = eval a x + eval b x
eval (Mul a b) x = eval a x * eval b x
eval (Div a b) x = eval a x / eval b x
eval (Sub a b) x = eval a x - eval b x
eval (Pow a b) x = eval a x ** eval b x
eval (Sqrt a)  x = sqrt (eval a x)
eval (Tan a)   x = tan (eval a x)
eval (Sin a)   x = sin (eval a x)
eval (Cos a)   x = cos (eval a x)
eval (Num a)   _ = a
eval X         x = x

{- Now we enumerate all possible terms of that grammar.
   The Omega monad interleaves the infinite subterm enumerations fairly,
   so every term is eventually reached. -}
allTerms = do
    which <- each [1..15]
    if which == 1       then return X
      else if which == 2  then do { x <- allTerms; y <- allTerms; return (Add x y) }
      else if which == 3  then do { x <- allTerms; y <- allTerms; return (Mul x y) }
      else if which == 4  then do { x <- allTerms; y <- allTerms; return (Div x y) }
      else if which == 5  then do { x <- allTerms; y <- allTerms; return (Sub x y) }
      else if which == 6  then do { x <- allTerms; y <- allTerms; return (Pow x y) }
      -- the unary cases still draw a second (unused) term, which keeps the
      -- enumeration order of the original program
      else if which == 7  then do { x <- allTerms; _ <- allTerms; return (Sqrt x) }
      else if which == 8  then do { x <- allTerms; _ <- allTerms; return (Tan x) }
      else if which == 9  then do { x <- allTerms; _ <- allTerms; return (Sin x) }
      else if which == 10 then do { x <- allTerms; _ <- allTerms; return (Cos x) }
      else return (Num (which - 10))

{- Then we create 20 input/output pairs of a random function -}
fun x = x + tan (x * x)
maps = let n = 20 in zip [1..n] (map fun [1..n])

{- This tests a function in our language against a map of in/out pairs -}
check maps f = all test maps
    where test (a, b) = eval f a == b

{- Now let's see if a brute-force search can recover the original program
   from the list of input/output pairs alone! -}
main = print $ take 1 $ filter (check maps) (runOmega allTerms)

{- Output: [Add X (Tan (Mul X X))]
   Yay! Even though there are infinitely many possible solutions,
   the first solution found is actually our initial program.
-}

One possible definition goes like this:
f 0 = 0
f 1 = 2
f 2 = 1
f 3 = -1
f 4 = 0
f 5 = 0
f 6 = -76
f 7 = -3
f 8 = 3
f 9 = -1
f 10 = -1
f 11 = -6
f 12 = -1
f 13 = -1
f 14 = 4
f 15 = -2
f 16 = -10
f 17 = 0
f 18 = 0
f 19 = -1
f 20 = 2
f 21 = 3
f 22 = 0
f 23 = 4
f 24 = 2
f 25 = -1
f 26 = 0
f 27 = 0
f 28 = -4
f 29 = -2
f 30 = -14

Related

Change a pseudo-code such that we get rid of a loop

I have the following pseudo-code:
read n
p <- 1;
m <- 0;
k <- 0;
while ( n != 0 )
    read x
    for ( i <- 1, k )
        x <- [x / 10]
    if ( x != 0 )
        c <- x % 10
    else
        c <- n % 10
    m <- c * p + m
    n <- [n / 10]
    p <- p * 10
    k <- k + 1
write m
and I have to transform this code so that it has just 1 loop. I went over examples over and over again and I don't see what I should do. I think we need the outer while loop, so I kept trying to get rid of the inner for loop, but I don't see how I could get the same behavior with just one loop.
(Excuse the terrible style of pseudo-code)
If you are talking about the
for ( i <- 1, k )
    x <- [x / 10]
loop, it divides x by 10 ** k, and
x <- [x / 10**k]
accomplishes exactly what the loop does. If you have a feeling that raising to a power is a loop in disguise, consider
power_of_ten = 1
while ( n != 0 )
    read x
    x <- [x / power_of_ten]
    ....
    power_of_ten <- power_of_ten * 10

Finding the largest power of a number that divides a factorial in haskell

So I am writing a Haskell program to calculate the largest power of a number that divides a factorial.
largestPower :: Int -> Int -> Int
Here largestPower a b has to find the largest power of b that divides a!.
Now I understand the math behind it: the way to find the answer is to repeatedly divide a (just a) by b, ignore the remainder, and finally add up all the quotients. So if we have something like
largestPower 10 2
we should get 8, because 10/2 = 5, 5/2 = 2, 2/2 = 1, and 5 + 2 + 1 = 8.
However, I am unable to figure out how to implement this as a function: do I use arrays or just a simple recursive function?
I am gravitating towards it being just a normal function, though I guess it could also be done by storing the quotients in an array and adding them.
Recursion without an accumulator
You can simply write a recursive algorithm and sum up the result of each call. Here we have two cases:
a is less than b, in which case the largest power is 0. So:
largestPower a b | a < b = 0
a is greater than or equal to b; in that case we divide a by b, calculate largestPower for that quotient, and add the quotient to the result. Like:
    | otherwise = d + largestPower d b
    where d = (div a b)
Or putting it together:
largestPower a b | a < b = 0
                 | otherwise = d + largestPower d b
  where d = (div a b)
Recursion with an accumulator
You can also use recursion with an accumulator: a variable you pass through the recursion, and update accordingly. At the end, you return that accumulator (or a function called on that accumulator).
Here the accumulator would of course be the running sum of the quotients, so:
largestPower = largestPower' 0
So we will define a function largestPower' (mind the accent) with an accumulator as its first argument, initialized to 0.
Now in the recursion, there are two cases:
a is less than b, we simply return the accumulator:
largestPower' r a b | a < b = r
otherwise we add the quotient to our accumulator, and pass the quotient to largestPower' in a recursive call:
    | otherwise = largestPower' (d+r) d b
    where d = (div a b)
Or the full version:
largestPower = largestPower' 0

largestPower' r a b | a < b = r
                    | otherwise = largestPower' (d+r) d b
  where d = (div a b)
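As a quick sanity check against the worked example from the question (my own test snippet, assuming either definition above is in scope):

main :: IO ()
main = print (largestPower 10 2)   -- expected output: 8, since 5 + 2 + 1 = 8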
Naive correct algorithm
The algorithm is not correct. A "naive" algorithm would be to simply divide every item and keep decrementing until you reach 1, like:
largestPower 1 _ = 0
largestPower a b = sumPower a + largestPower (a-1) b
  where
    sumPower n | n `mod` b == 0 = 1 + sumPower (div n b)
               | otherwise      = 0
So this means that largestPower 4 2 can be written as:
largestPower 4 2 = sumPower 4 + sumPower 3 + sumPower 2
and:
sumPower 4 = 1 + sumPower 2
           = 1 + 1 + sumPower 1
           = 1 + 1 + 0
           = 2
sumPower 3 = 0
sumPower 2 = 1 + sumPower 1
           = 1 + 0
           = 1
So 3.
The algorithm as stated can be implemented quite simply:
largestPower :: Int -> Int -> Int
largestPower 0 b = 0
largestPower a b = d + largestPower d b where d = a `div` b
However, the algorithm is not correct for composite b. For example, largestPower 10 6 with this algorithm yields 1, but in fact the correct answer is 4. The problem is that this algorithm ignores multiples of 2 and 3 that are not multiples of 6. How you fix the algorithm is a completely separate question, though.
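To double-check that 4 is indeed the right count for largestPower 10 6, here is a small verification of my own (not part of the original answer): 6^4 = 1296 divides 10! = 3628800 (the quotient is 2800), while 6^5 = 7776 does not.

claimHolds :: Bool
claimHolds = fact10 `mod` 6^4 == 0 && fact10 `mod` 6^5 /= 0   -- evaluates to True
  where fact10 = product [1..10]   -- 10! = 3628800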

Haskell: How to change algorithm to work on any size of list?

I have this code:
project = [ [(a,b),(c,d),(e,f)]
          | a <- [1..5], b <- [1..3]
          , c <- [1..5], d <- [1..3]
          , e <- [1..5], f <- [1..3]
          , a*b + c*d + e*f < 6
          , a + c + e == 5
          , b == 3 || d == 3 || f == 3
          ]
x=take 1 project
main = print $ x
It returns a list of 3 pairs, [(x,y),(x,y),(x,y)].
There are 3 conditions:
If you sum all the x you must get 5.
If you sum all the x*y you will get less than 6.
There is at least one y that equal to 3.
Now, I want exactly the same algorithm to work for any longer list, for example 10 pairs. How should I do that?
Here:
import Control.Monad (replicateM)
import Control.Applicative (liftA2)

project n =
    [ x
    | x <- replicateM n $ liftA2 (,) [1..5] [1..3]
    , sum (map (uncurry (*)) x) < 6
    , sum (map fst x) == 5
    , any ((==3) . snd) x
    ]

main = print $ take 1 $ project 3
Or like so:
project n
    = filter (any ((==3) . snd))
    $ filter ((==5) . sum . map fst)
    $ filter ((<6) . sum . map (uncurry (*)))
    $ replicateM n
    $ liftA2 (,) [1..5] [1..3]

Trying to create an efficient algorithm for a function in Haskell

I'm looking for an efficient polynomial-time solution to the following problem:
Implement a recursive function node x y for calculating the (x,y)-th number in a number triangle defined as
g(x,y) = 0                           if |x| > y
       = 1                           if (x,y) = (0,0)
       = sum of all incoming paths   otherwise
The sum of all incoming paths to a node is defined as the sum of the values of all possible paths from the root node (x, y) = (0, 0) to the node under consideration, where at each node (x,y) a path can either continue diagonally down and left (x−1,y+1), straight down (x,y+1), or diagonally down and right (x+1,y+1). The value of a path to a node is defined as the sum of all the nodes along that path up to, but not including, the node under consideration.
The first few entries in the number triangle are given in the table:
 \ x  -3  -2  -1   0   1   2   3
y \  ____________________________
 0 |   0   0   0   1   0   0   0
 1 |   0   0   1   1   1   0   0
 2 |   0   2   4   6   4   2   0
 3 |   4  16  40  48  40  16   4
I am trying to work out a naive solution first, here is what I have:
node x y | y < 0 = error "number cannot be negative"
         | (abs x) > y = 0
         | (x == 0) && (y == 0) = 1
         | otherwise = node (x+1) (y-1) + node x (y-1) + node (x-1) (y-1)
Whenever I run this I get "*** Exception: stack overflow". Why?
I believe your problem is a bit more complicated than your example code suggests. First, let's be clear about some definitions here:
Let pathCount x y be the number of paths that end at (x, y). We have
pathCount :: Int -> Int -> Integer
pathCount x y
    | y == 0    = if x == 0 then 1 else 0
    | otherwise = sum [ pathCount (x + d) (y - 1) | d <- [-1..1] ]
Now let pathSum x y be the sum of all paths that end in (x, y). We have:
pathSum :: Int -> Int -> Integer
pathSum x y
    | y == 0    = if x == 0 then 1 else 0
    | otherwise = sum [ pathSum (x + d) (y - 1) + node x y * pathCount (x + d) (y - 1)
                      | d <- [-1..1] ]
With this helper, we can finally define node x y properly:
node :: Int -> Int -> Integer
node x y
    | y == 0    = if x == 0 then 1 else 0
    | otherwise = sum [ pathSum (x + d) (y - 1) | d <- [-1..1] ]
As written, this algorithm takes exponential time. We can however add memoization to bring the number of additions down to quadratic. The memoize package on Hackage makes this easy as pie. Full example:
import Control.Monad
import Data.List (intercalate)
import Data.Function.Memoize (memoize2)

node' :: Int -> Int -> Integer
node' x y
    | y == 0    = if x == 0 then 1 else 0
    | otherwise = sum [ pathSum (x + d) (y - 1) | d <- [-1..1] ]

node = memoize2 node'

pathCount' :: Int -> Int -> Integer
pathCount' x y
    | y == 0    = if x == 0 then 1 else 0
    | otherwise = sum [ pathCount (x + d) (y - 1) | d <- [-1..1] ]

pathCount = memoize2 pathCount'

pathSum' :: Int -> Int -> Integer
pathSum' x y
    | y == 0    = if x == 0 then 1 else 0
    | otherwise = sum [ pathSum (x + d) (y - 1) + node x y * pathCount (x + d) (y - 1)
                      | d <- [-1..1] ]

pathSum = memoize2 pathSum'

main =
    forM_ [0..n] $ \y ->
        putStrLn $ intercalate " " $ map (show . flip node y) [-n..n]
  where
    n = 5
Output:
0 0 0 0 0 1 0 0 0 0 0
0 0 0 0 1 1 1 0 0 0 0
0 0 0 2 4 6 4 2 0 0 0
0 0 4 16 40 48 40 16 4 0 0
0 8 72 352 728 944 728 352 72 8 0
16 376 4248 16608 35128 43632 35128 16608 4248 376 16
As you can see, the size of the numbers gets out of hand rather quickly, so the runtime is not O(n^2), even though the number of arithmetic operations is.
You're thinking in terms of outgoing paths, when you should be thinking in terms of incoming paths. Your recursive step is currently looking for nodes from below, instead of above.
First of all, sorry if this is long. I wanted to explain the step by step thought process.
To start off with, you need one crucial fact: you can represent the "answer" at each "index" by a list of paths. For all the zeros this is [], for your base case it is [[1]], and, for example, for (0,2) it is [[6,1,1],[6,1,1],[6,1,1]]. This may seem redundant, but it simplifies things down the road. Extracting the answer is then head . head if the list is non-empty, or 0 if it is empty.
This is very useful because you can store the answer as a list of rows (the first row would be [[1]], [], [], ...) and the results of any given row depend only on the previous row.
Secondly, this problem is symmetrical. This is pretty obvious.
The first thing we will do will mirror the definition of fib very closely:
type Path = [[Integer]]
triangle' :: [[Path]]
triangle' = ([[1]] : repeat []) : map f triangle'
We know this must be close to correct, since the 2nd row will depend on the first row only, the third on the 2nd only, etc. So the result will be
([[1]] : repeat []) : f ([[1]] : repeat []) : f ....
Now we just need to know what f is. Firstly, its type: [Path] -> [Path]. Quite simply, given the previous row, return the next row.
Now you may see another problem arising. Each invocation of f needs to know how many columns in the current row. We could actually count the length of non-null elements in the previous row, but it is simpler to pass the parameter directly, so we change map f triangle' to zipWith f [1..] triangle', giving f the type Int -> [Path] -> [Path].
f needs to handle one special case and one general case. The special case is x = 0; in this case we simply treat the (x+1, y-1) and (x-1, y-1) recursions the same, and otherwise it is identical to the general case. Let's make two functions, g0 and gn, which handle these two cases.
The actual computation of gn is easy. We know that for some x we need the elements x-1, x, x+1 of the previous row. So if we drop x-1 elements before giving the previous row to the xth invocation of gn, gn can just take the first 3 elements and it will have what it needs. We write this as follows:
f :: Int -> [Path] -> [Path]
f n ps = g0 ps : map (gn . flip drop ps) [0..n-1] ++ repeat []
The repeat [] at the end should be obvious: for indices outside the triangle, the result is 0.
Now writing g0 and gn is really quite simple:
g0 :: [Path] -> Path
g0 (a:b:_) = map (s:) q
  where
    s = sum . concat $ q
    q = b ++ a ++ b

gn :: [Path] -> Path
gn (a:b:c:_) = map (s:) q
  where
    s = sum . concat $ q
    q = a ++ b ++ c
On my machine this version is about 3-4 times faster than the fastest version I could write with normal recursion and memoization.
The rest is just printing or pulling out the number you want.
triangle :: Int -> Int -> Integer
triangle x y = case (triangle' !! y) !! (abs x) of
    [] -> 0
    xs -> head $ head xs

triList :: Int -> Int -> Path
triList x y = (triangle' !! y) !! (abs x)

-- printTri needs: import Text.Printf (printf)
printTri :: Int -> Int -> IO ()
printTri width height =
    putStrLn $ unlines $ map unwords
        [ [ p $ triangle x y | x <- [-x0..x0] ] | y <- [0..height] ]
  where
    maxLen = length $ show $ triangle 0 height
    x0 = width `div` 2
    p = printf $ "%" ++ show maxLen ++ "d "
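A hedged usage sketch (the main below is my own, assuming the snippets in this answer are collected into a single module together with import Text.Printf (printf)):

main :: IO ()
main = printTri 11 5   -- rows y = 0..5 for columns x = -5..5, comparable to the table in the previous answer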

How to find multiplicative partitions of any integer?

I'm looking for an efficient algorithm for computing the multiplicative partitions for any given integer. For example, the number of such partitions for 12 is 4, which are
12 = 12 x 1 = 4 x 3 = 2 x 2 x 3 = 2 x 6
I've read the Wikipedia article on this, but it doesn't really give me an algorithm for generating the partitions (it only talks about the number of such partitions, and to be honest, even that is not very clear to me!).
The problem I'm looking at requires me to compute multiplicative partitions for very large numbers (> 1 billion), so I was trying to come up with a dynamic programming approach for it (so that finding all possible partitions for a smaller number can be re-used when that smaller number is itself a factor of a bigger number), but so far, I don't know where to begin!
Any ideas/hints would be appreciated - this is not a homework problem, merely something I'm trying to solve because it seems so interesting!
The first thing I would do is get the prime factorization of the number.
From there, I can make a permutation of each subset of the factors, multiplied by the remaining factors at that iteration.
So if you take a number like 24, you get
2 * 2 * 2 * 3   // prime factorization
a   b   c   d

// round 1
2 * (2 * 2 * 3)   a * bcd
2 * (2 * 2 * 3)   b * acd   (removed for being a dup)
2 * (2 * 2 * 3)   c * abd   (removed for being a dup)
3 * (2 * 2 * 2)   d * abc
Repeat for all "rounds" (round being the number of factors in the first number of the multiplication), removing duplicates as they come up.
So you end up with something like
// assume we have the prime factorization
// and a partition set to add to
for (int i = 1; i < factors.size; i++) {
    // each "round" splits off a subset of i factors
    for (List<int> subset : factors.permutate(i)) {
        List<int> otherSubset = factors.copy().remove(subset);
        int subsetTotal = 1;
        for (int p : subset) subsetTotal *= p;
        int otherSubsetTotal = 1;
        for (int p : otherSubset) otherSubsetTotal *= p;
        // assume your partition set excludes it if it's a duplicate
        partition.add(new FactorSet(subsetTotal, otherSubsetTotal));
    }
}
Of course, the first thing to do is find the prime factorisation of the number, like glowcoder said. Say
n = p^a * q^b * r^c * ...
Then
1. find the multiplicative partitions of m = n / p^a
2. for 0 <= k <= a, find the multiplicative partitions of p^k, which is equivalent to finding the additive partitions of k
3. for each multiplicative partition of m, find all distinct ways to distribute a-k factors p among the factors
4. combine the results of 2. and 3.
It is convenient to treat the multiplicative partitions as lists (or sets) of (divisor, multiplicity) pairs to avoid producing duplicates.
I've written the code in Haskell because it's the most convenient and concise of the languages I know for this sort of thing:
module MultiPart (multiplicativePartitions) where

import Data.List (sort)
import Math.NumberTheory.Primes (factorise)
import Control.Arrow (first)

multiplicativePartitions :: Integer -> [[Integer]]
multiplicativePartitions n
    | n < 1     = []
    | n == 1    = [[]]
    | otherwise = map ((>>= uncurry (flip replicate)) . sort) . pfPartitions $ factorise n

additivePartitions :: Int -> [[(Int,Int)]]
additivePartitions 0 = [[]]
additivePartitions n
    | n < 0     = []
    | otherwise = aParts n n
  where
    aParts :: Int -> Int -> [[(Int,Int)]]
    aParts 0 _ = [[]]
    aParts 1 m = [[(1,m)]]
    aParts k m = withK ++ aParts (k-1) m
      where
        withK = do
            let q = m `quot` k
            j <- [q,q-1 .. 1]
            [(k,j):prt | let r = m - j*k, prt <- aParts (min (k-1) r) r]

countedPartitions :: Int -> Int -> [[(Int,Int)]]
countedPartitions 0     count = [[(0,count)]]
countedPartitions quant count = cbParts quant quant count
  where
    prep _ 0 = id
    prep m j = ((m,j):)
    cbParts :: Int -> Int -> Int -> [[(Int,Int)]]
    cbParts q 0 c
        | q == 0    = if c == 0 then [[]] else [[(0,c)]]
        | otherwise = error "Oops"
    cbParts q 1 c
        | c < q     = []    -- should never happen
        | c == q    = [[(1,c)]]
        | otherwise = [[(1,q),(0,c-q)]]
    cbParts q m c = do
        let lo = max 0 $ q - c*(m-1)
            hi = q `quot` m
        j <- [lo .. hi]
        let r = q - j*m
            m' = min (m-1) r
        map (prep m j) $ cbParts r m' (c-j)

primePowerPartitions :: Integer -> Int -> [[(Integer,Int)]]
primePowerPartitions p e = map (map (first (p^))) $ additivePartitions e

distOne :: Integer -> Int -> Integer -> Int -> [[(Integer,Int)]]
distOne _ 0 d k = [[(d,k)]]
distOne p e d k = do
    cap <- countedPartitions e k
    return $ [(p^i*d,m) | (i,m) <- cap]

distribute :: Integer -> Int -> [(Integer,Int)] -> [[(Integer,Int)]]
distribute _ 0 xs = [xs]
distribute p e [(d,k)] = distOne p e d k
distribute p e ((d,k):dks) = do
    j <- [0 .. e]
    dps <- distOne p j d k
    ys <- distribute p (e-j) dks
    return $ dps ++ ys
distribute _ _ [] = []

pfPartitions :: [(Integer,Int)] -> [[(Integer,Int)]]
pfPartitions [] = [[]]
pfPartitions [(p,e)] = primePowerPartitions p e
pfPartitions ((p,e):pps) = do
    cop <- pfPartitions pps
    k <- [0 .. e]
    ppp <- primePowerPartitions p k
    mix <- distribute p (e-k) cop
    return (ppp ++ mix)
It's not particularly optimised, but it does the job.
Some times and results:
Prelude MultiPart> length $ multiplicativePartitions $ 10^10
59521
(0.03 secs, 53535264 bytes)
Prelude MultiPart> length $ multiplicativePartitions $ 10^11
151958
(0.11 secs, 125850200 bytes)
Prelude MultiPart> length $ multiplicativePartitions $ 10^12
379693
(0.26 secs, 296844616 bytes)
Prelude MultiPart> length $ multiplicativePartitions $ product [2 .. 10]
70520
(0.07 secs, 72786128 bytes)
Prelude MultiPart> length $ multiplicativePartitions $ product [2 .. 11]
425240
(0.36 secs, 460094808 bytes)
Prelude MultiPart> length $ multiplicativePartitions $ product [2 .. 12]
2787810
(2.06 secs, 2572962320 bytes)
The 10^k are of course particularly easy because there are only two primes involved (but squarefree numbers are still easier), the factorials get slow earlier. I think by careful organisation of the order and choice of better data structures than lists, there's quite a bit to be gained (probably one should sort the prime factors by exponent, but I don't know whether one should start with the highest exponents or the lowest).
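A small spot-check one can run against the example at the top of the question (my own suggestion, not from the answer):

spotCheck :: Bool
spotCheck = length (multiplicativePartitions 12) == 4   -- the factorizations 12, 4*3, 2*2*3 and 2*6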
Why don't you find all the numbers that can divide the number, and then find the combinations of those divisors whose product equals the number?
Finding all numbers that can divide your number takes O(n).
Then you can search this set for all the sub-multisets whose product gives you the number.
Once you have the set of all numbers that divide the original number, you can do dynamic programming on them to find the sets of divisors whose product gives you the original number.
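A minimal Haskell sketch of this divisor-based idea (the names divisors and multPartitions are mine, and emitting factors in non-decreasing order is one simple way to avoid the duplicate orderings mentioned above):

-- All divisors d of n with 2 <= d <= n (the O(n) scan described above).
divisors :: Integer -> [Integer]
divisors n = [ d | d <- [2 .. n], n `mod` d == 0 ]

-- Multiplicative partitions of n, each written with factors in
-- non-decreasing order so every partition appears exactly once.
multPartitions :: Integer -> [[Integer]]
multPartitions = go 2
  where
    go _  1 = [[]]
    go lo n = [ d : rest | d <- divisors n, d >= lo, rest <- go d (n `div` d) ]

-- multPartitions 12 == [[2,2,3],[2,6],[3,4],[12]]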
