As part of a program I'm writing I need to make sure a variable does not equal any number that is the result of multiplying 2 numbers in a given list. For example: I've got a list Primes = [2, 3, 5, 7, 11] and I need to make sure that X does not equal any two of those numbers multiplied together such as 6 (2*3) or 55 (5*11) etc...
The code I have is as follows:
list(Numbers) :-
    Numbers = [X, Y, Sum],
    between(3, 6, Y),
    between(3, 6, X),
    Primes = [2, 3, 5, 7, 11],
    Sum is X + Y,
    (Code I need help with)
The above code will print results like [3,3,6], [4,3,7], [5,3,8] and so on. Now what I want is to be able to identify when Sum is equal to a prime * prime and exclude that from the results. Something like Sum \= prime * prime. However, I don't know how to loop through the elements in Primes in order to multiply two elements together, and then do that for every element in the list.
Hope this makes sense; I'm not great at explaining things.
Thanks in advance.
This is inefficient, but easy to code:
...
forall((nth(I,Primes,X),nth(J,Primes,Y),J>I), Sum =\= X*Y).
I think you could use that loop to initialize a list of precomputed factors, then use memberchk/2.
In SWI-Prolog use nth1/3 instead of nth/3
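To make the precompute-then-check idea concrete, here is a minimal sketch in Python (purely my illustration; the variable names are mine, not from the question): build the set of pairwise products once, then test membership for each candidate Sum.

from itertools import combinations

primes = [2, 3, 5, 7, 11]
# every product of two distinct primes from the list, e.g. 6 (2*3) or 55 (5*11)
forbidden = {p * q for p, q in combinations(primes, 2)}

results = []
for y in range(3, 7):           # between(3,6,Y)
    for x in range(3, 7):       # between(3,6,X)
        s = x + y
        if s not in forbidden:  # keep only sums that are not prime*prime
            results.append([x, y, s])
print(results)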
Hackerrank has a problem called Decibinary numbers, which are essentially numbers whose digits are 0-9 but whose place values are powers of 2. The question asks us to display the xth decibinary number. There is another twist to the problem: multiple decibinary numbers can equal the same decimal number. For example, 4 in decimal can be 100, 20, 12, and 4 in decibinary.
At first, I thought that finding how many decibinary numbers there are for a given decimal number would be helpful.
I consulted this post for a bit of help ( https://math.stackexchange.com/questions/3540243/whats-the-number-of-decibinary-numbers-that-evaluate-to-given-decimal-number ). The post was a bit too hard to understand, but then I also realized that even though we can count how many decibinary numbers a decimal number has, this doesn't help with FINDING them (at least to my knowledge), which is the original goal of the question.
I do realize that for any decimal number, the largest decibinary number for it will simply be its binary representation. For ex, for 4 it is 100. So the brute force approach would be to check all numbers in this range for each decimal number and see if their decibinary representation evaluates to the given decimal number, but it is clearly evident that this approach will never pass since the input constraints define x to be from 1 to 10^16. Not only that, we have to find the xth decibinary number for a q amount of queries where q is from 1 to 10^5.
This question falls under the dynamic programming (DP) section, but I am confused about how DP would be used here, or how it is even possible. For calculating the xth decibinary number q times (as described in the brute force method above) it would be better to use a table (like the problem suggests). But for that, we would need to store and calculate 10^16 integers, since that is how big x can be. Assuming an integer is 4 bytes, 4B * 10^16 ~= 4B * (2^3)^16 = 2^50 bytes.
Can someone please explain how this problem is solved optimally. I am still new to CP so if I have made an error in something, please let me know.
(see link below for full problem statement):
https://www.hackerrank.com/challenges/decibinary-numbers/problem
This is solvable with about 80 MB of data. I won't give code, but I will explain the strategy.
Build a lookup count[n][i] that gives you the number of ways to get the decimal number n using the first i digits. You start by inserting 0 everywhere, and then put a 1 in count[0][0]. Now start filling in using the rule:
count[n][i] = count[n][i-1] + count[n - 2**(i-1)][i-1] + count[n - 2*2**(i-1)][i-1] + ... + count[n - 9*2**(i-1)][i-1]
(Here 2**(i-1) is the place value of the digit added in column i, and any term whose first index is negative counts as 0.)
It turns out that you only need the first 19 digits, and you only need counts of n up to 2**19-1. And the counts all fit in 8 byte longs.
Once you have that, create a second data structure count_below[n] which is the count of how many decibinary numbers will give a value less than n. Use the same range of n as before.
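To make the table construction concrete, here is a small sketch in Python (my own illustration, not the answerer's code), sized to the 3-digit example worked through below; for the real problem you would use 19 digits and 64-bit counts.

DIGITS = 3
MAX_N = 2 ** DIGITS                  # counts of n up to 2**DIGITS - 1

# count[n][i] = number of decibinary numbers of at most i digits whose value is n
count = [[0] * (DIGITS + 1) for _ in range(MAX_N)]
count[0][0] = 1
for i in range(1, DIGITS + 1):
    place = 2 ** (i - 1)             # place value of the digit added in column i
    for n in range(MAX_N):
        count[n][i] = sum(count[n - d * place][i - 1]
                          for d in range(10) if n - d * place >= 0)

# count_below[n] = how many decibinary numbers have a value strictly less than n
count_below = [0] * (MAX_N + 1)
for n in range(MAX_N):
    count_below[n + 1] = count_below[n] + count[n][DIGITS]

print(count)                         # matches the count table shown below
print(count_below)                   # [0, 1, 2, 4, 6, 10, 14, 20, 26]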
And now a lookup proceeds as follows. First you do a binary search on count_below to find the last value that has less than your target number below it. Subtracting count_below from your query, you know which decibinary number of that value you want.
Next, search through count[n][i] to find the i such that you get your target query with i digits, and not with fewer. This will be the position of the leading digit of your answer. You then subtract off count[n][i-1] from your query (all the decibinaries with fewer digits). Then subtract off count[n - 2**(i-1)][i-1], count[n - 2*2**(i-1)][i-1], ... count[n - 8*2**(i-1)][i-1] until you find what that leading digit is. Now you subtract the contribution of that digit from the value, and repeat the logic for finding the correct decibinary for that smaller value with fewer digits.
Here is a worked example to clarify. First the data structures for the first 3 digits and up to 2**3 - 1:
count = [
[1, 1, 1, 1], # sum 0
[0, 1, 1, 1], # sum 1
[0, 1, 2, 2], # sum 2
[0, 1, 2, 2], # sum 3
[0, 1, 3, 4], # sum 4
[0, 1, 3, 4], # sum 5
[0, 1, 4, 6], # sum 6
[0, 1, 4, 6], # sum 7
]
count_below = [
0, 1, 2, 4, 6, 10, 14, 20, 26, ...
]
Let's find the 20th.
count_below[6] is 14 and count_below[7] is 20 so our decimal sum is 6.
We want the 20 - count_below[6] = 6th decibinary with decimal sum 6.
count[6][2] is 4 while count[6][3] is 6 so we have a non-zero third digit.
We want the 6 - count[6][2] = 2nd of those with a non-zero third digit.
count[6 - 2**2][2] is 2, so 2 of them have third digit 1.
The third digit is 1
We are now looking for the second decibinary whose decimal sum is 2.
count[2][1] is 1 and count[2][2] is 2 so it has a non-zero second digit.
We want the 2 - count[2][1] = 1st of those with a non-zero second digit.
The second digit is 1
The rest is 0 because 2 - 2**1 = 0.
And thus you find that the answer is 110.
Now for such a small number, this was a lot of work. But even for your hardest lookup you'll only need about 20 steps of a binary search to find your decimal sum, another 20 steps to find the position of the first non-zero digit, and for each of those digits, you'll have to do 1-9 different calculations to find what that digit is. Which means only hundreds of calculations to find the number.
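For completeness, here is a sketch of that lookup in Python (again my own illustration, not the answerer's code); it expects the count and count_below tables built as in the earlier sketch and a 1-indexed query x.

from bisect import bisect_left

def xth_decibinary(x, count, count_below, digits):
    # Which decimal value n does the xth decibinary number have?
    n = bisect_left(count_below, x) - 1   # last n with count_below[n] < x
    k = x - count_below[n]                # we want the kth decibinary with value n

    out = []
    for i in range(digits, 1, -1):        # every position except the last
        place = 2 ** (i - 1)
        if k <= count[n][i - 1]:          # digit 0 here; k is unchanged
            out.append(0)
            continue
        k -= count[n][i - 1]
        d = 1                             # peel off blocks for d = 1, 2, ...
        while k > count[n - d * place][i - 1]:
            k -= count[n - d * place][i - 1]
            d += 1
        out.append(d)
        n -= d * place
    out.append(n)                         # whatever value remains is the last digit
    return int("".join(map(str, out)))

# With the 3-digit tables above: the 20th decibinary number is 110.
# print(xth_decibinary(20, count, count_below, 3))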
I am not understanding the following question. I mean, I want to know sample input and output for this problem: "The pigeonhole principle states that if a function f has n distinct inputs but fewer than n distinct outputs, then there exist two inputs a and b such that a != b and f(a) = f(b). Present an algorithm to find a and b such that f(a) = f(b). Assume that the function inputs are 1, 2, ..., and n."
I am unable to solve this problem as I do not understand the question clearly. I am looking for your help.
The pigeonhole principle says that if you have more items than boxes, at least one of the boxes must have multiple items in it.
If you want to find which items a != b have the property f(a) == f(b), a straightforward approach is to use a hashmap data structure. Use the function value f(x) as key to store the item value x. Iterate through the items, x=1,...,n. If there is no entry at f(x), store x. If there is, the current value of x and the value stored at f(x) are a pair of the type you're seeking.
In pseudocode:
h = {} # initialize an empty hashmap
for x in 1, ..., n
    if h[f(x)] is empty
        h[f(x)] <- x # store x in the hashmap indexed by f(x)
    else
        (x, h[f(x)]) qualify as a match # do what you want with them
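A runnable version of the pseudocode above, in Python just for illustration (f stands for whatever function you are given):

def find_collision(f, n):
    seen = {}                      # maps f(x) -> the x that produced it
    for x in range(1, n + 1):
        if f(x) in seen:
            return seen[f(x)], x   # a pair a != b with f(a) == f(b)
        seen[f(x)] = x
    return None                    # only reachable if f has n distinct outputs

# Example: 6 inputs but at most 5 distinct outputs, so a collision must exist.
print(find_collision(lambda x: x % 5, 6))   # -> (1, 6)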
If you want to identify all pigeons who have roommates, initialize the hashmap with empty sets. Then iterate through the values and append the current value x to the set indexed by f(x). Finally, iterate through the hashmap and pick out all sets with more than one element.
Since you didn't specify a language, for the fun of it I decided to implement the latter algorithm in Ruby:
N = 10 # number of pigeons
# Create an array of value/function pairs.
# Using N-1 for range of rand guarantees at least one duplicate random
# number, and with the nature of randomness, quite likely more.
value_and_f = Array.new(N) { |index| [index, rand(N-1)]}
h = {} # new hash
puts "Value/function pairs..."
p value_and_f # print the value/function pairs
value_and_f.each do |x, key|
h[key] = [] unless h[key] # create an array if none exists for this key
h[key] << x # append the x to the array associated with this key
end
puts "\nConfirm which values share function mapping"
h.keys.each { |key| p h[key] if h[key].length > 1 }
Which produces the following output, for example:
Value/function pairs...
[[0, 0], [1, 3], [2, 1], [3, 6], [4, 7], [5, 4], [6, 0], [7, 1], [8, 0], [9, 3]]
Confirm which values share function mapping
[0, 6, 8]
[1, 9]
[2, 7]
Since this implementation uses randomness, it will produce different results each time you run it.
Well let's go step by step.
I have 2 boxes. My father gave me 3 chocolates....
And I want to put those chocolates in 2 boxes. For our benefit let's name the chocolate a,b,c.
So how many ways can we put them?
[ab][c]
[abc][]
[a][bc]
And do you see something strange? There is at least one box with more than 1 chocolate.
So what do you think?
You can try this with any number of boxes and chocolates (more chocolates than boxes). You will see that it's right.
Well, let's make it easier:
I have 5 friends and 3 rooms. We are having a party. Now let's see what happens. (All my friends will sit in one of the rooms.)
I am claiming that there will be at least one room where there is more than 1 friend.
My friends are quite mischievous, and knowing this, they tried to prove me wrong.
Friend-1 selects room-1.
Friend-2 thinks: why room-1? Then I will be correct, so he selects room-2.
Friend-3 also thinks the same... he avoids rooms 1 and 2 and gets into room-3.
Friend-4 now comes, and he understands that there is no empty room left, so he has to enter a room that is already taken. And thus I become correct.
So you understand the situation?
There are n friends (function inputs) but unfortunately (or fortunately) the number of rooms (output values) is less than n. So of course there exist 2 friends of mine, a and b, who share the same room (the same value, f(a) = f(b)).
Continuing from what https://stackoverflow.com/a/42254627/7256243 said.
Let's say that you map an array A of length N to an array B with length N-1.
Then the result would be an array B where, for one index, you would have 2 elements.
A = {1,2,3,4,5,6}
map A -> B
where a possible result could be
B = {1,2,{3,4},5,6}
The mapping A -> B could be done in any number of ways.
Here in this example, both the inputs 3 and 4 in array A end up at the same index in array B.
I hope this is useful.
I need to generate a list of numbers (about 120). The numbers range from 1 to X (max 10), both included. The algorithm should use every number an equal number of times, or at least try to; if some numbers are used once less, that's OK.
This is the first time I have to make this kind of algorithm. I've created very simple ones before, but I'm stumped on how to do this. I tried googling first, though I don't really know what to call this kind of algorithm, so I couldn't find anything.
Thanks a lot!
It sounds like what you want to do is first fill a list with the numbers you want and then shuffle that list. One way to do this would be to add each of your numbers to the list and then repeat that process until the list has as many items as you want. After that, randomly shuffle the list.
In pseudo-code, generating the initial list might look something like this:
list = []
while length(list) < N
    for i in 1, 2, ..., X
        if length(list) >= N
            break
        end if
        list.append(i)
    end for
end while
I leave the shuffling part as an exercise to the reader.
EDIT:
As pointed out in the comments the above will always put more smaller numbers than larger numbers. If this isn't what's desired, you could iterate over the possible numbers in a random order. For example:
list = []
numbers = shuffle( [1, 2, ..., X] )
while length(list) < N
    for i in 1, 2, ..., X
        if length(list) >= N
            break
        end if
        list.append( numbers[i] )
    end for
end while
I think this should remove that bias.
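For reference, a concrete version of the fill-then-shuffle idea in Python (my own sketch): shuffle the values first so any "extra" copies land on random numbers, fill the list round-robin, then shuffle the result.

import random

def balanced_random_list(n, x):
    values = list(range(1, x + 1))
    random.shuffle(values)                        # randomize which numbers get the extra copies
    result = [values[i % x] for i in range(n)]    # counts differ by at most one
    random.shuffle(result)                        # randomize the order
    return result

print(balanced_random_list(120, 10))   # 120 numbers, each of 1..10 appearing 12 times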
What you want is a uniformly distributed random number (wiki). It means that if you generate many numbers between 1 and 10, each of the numbers 1 up to 10 should appear in the list roughly equally often.
The Random class in Java gives a fairly uniform distribution, so just go for it. To test, just check this:
Random rand = new Random();
for (int i = 0; i < 100; i++) {
    int rNum = rand.nextInt(10) + 1;   // uniform over 1..10
    System.out.println(rNum);
}
And see in the result whether you get all the numbers between 1 and 10, roughly equally often.
One more similar discussion that might help: Uniform distribution with Random class
If I have an array:
a = [1,2,3]
How do I randomly select subsets of the array, such that the elements of each subset are unique? That is, for a, the possible subsets would be:
[]
[1]
[2]
[3]
[1,2]
[1,3]
[2,3]
[1,2,3]
I can't generate all of the possible subsets as the real size of a is very big, so there are many, many subsets. At the moment, I am using a 'random walk' idea - for each element of a, I 'flip a coin' and include it if the coin comes up heads - but I am not sure if this actually uniformly samples the space. It feels like it biases towards the middle, but this might just be my mind doing pattern-matching, as there will be more middle-sized possibilities.
Am I using the right approach, or how should I be randomly sampling?
(I am aware that this is more of a language agnostic and 'mathsy' question, but I felt it wasn't really Mathoverflow material - I just need a practical answer.)
Just go ahead with your original "coin flipping" idea. It uniformly samples the space of possibilities.
It feels to you like it's biased towards the "middle", but that's because the number of possibilities is largest in the "middle". Think about it: there is only 1 possibility with no elements, and only 1 with all elements. There are N possibilities with 1 element, and N possibilities with (N-1) elements. As the number of elements chosen gets closer to (N/2), the number of possibilities grows very quickly.
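In code, the coin-flip sampling is just the following (Python used for illustration): keep each element independently with probability 1/2, which gives every one of the 2**len(a) subsets the same probability.

import random

a = [1, 2, 3]
subset = [x for x in a if random.random() < 0.5]   # one fair coin flip per element
print(subset)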
You could generate random numbers, convert them to binary and choose the elements from your original array where the bits were 1. Here is an implementation of this as a monkey-patch for the Array class:
class Array
  def random_subset(n=1)
    raise ArgumentError, "negative argument" if n < 0
    (1..n).map do
      r = rand(2**self.size)
      self.select.with_index { |el, i| r[i] == 1 }
    end
  end
end
Usage:
a.random_subset(3)
#=> [[3, 6, 9], [4, 5, 7, 8, 10], [1, 2, 3, 4, 6, 9]]
Generally this doesn't perform too badly; it's O(n*m), where n is the number of subsets you want and m is the length of the array.
I think the coin flipping is fine.
ar = ('a'..'j').to_a
p ar.select{ rand(2) == 0 }
An array with 10 elements has 2**10 possible combinations (including [ ] and all 10 elements), which is nothing more than 10 independent (1 or 0) choices. It does output more arrays of four, five and six elements, because there are a lot more of those in the power set.
A way to select a random element from the power set is the following:
my_array = ('a'..'z').to_a
power_set_size = 2 ** my_array.length
random_subset = rand(power_set_size)
subset = []
# binary string of the chosen integer, padded to one character per element
bits = random_subset.to_s(2).rjust(my_array.length, "0")
bits.chars.each_with_index do |bit, corresponding_element|
  subset << my_array[corresponding_element] if bit == "1"
end
This makes use of string functions instead of working with real "bits" and bitwise operations, just for my convenience. You can turn it into a faster (I guess) algorithm by using real bits.
What it does is encode each subset of the array as an integer between 0 and 2 ** array.length - 1, then pick one of those integers at random (uniformly at random, indeed). Then it decodes the integer back into a particular subset of the array by using it as a bitmask (1 = the element is in the subset, 0 = it is not).
In this way you have a uniform distribution over the power set of your array.
a.select {|element| rand(2) == 0 }
For each element, a coin is flipped. If heads ( == 0), then it is selected.
Stepping back from the following question:
Selecting with Cases
I need to generate a random Set (1 000 000 items would be enough)
Subsets[Flatten[ParallelTable[{i, j}, {i, 1, 96}, {j, 1, 4}], 1], {4}]
Further, I need to reject any quadruples with non-unique first elements, such as {{1,1},{1,2},{2,3},{6,1}}.
But the above is impossible on a laptop. How could I just draw one million sets uniformly without killing my machine?
Provided you have a base set you need to generate 4-element subsets of,
baseSet = Flatten[Table[{i, j}, {i, 1, 96}, {j, 1, 4}], 1];
you can use RandomSample as follows:
RandomSample[baseSet, 4]
This gives you a length-4 random subset of baseSet. Generating a million of them takes 2.5 seconds on my very old machine:
Timing[subsets = Table[RandomSample[baseSet, 4], {1000000}];]
Not all of what we get are going to be different subsets, so we need to remove duplicates using Union:
subsets = Union[subsets];
After this I'm still left with 999 971 items in a sample run, thanks to the much larger number of possible subsets (Binomial[Length[baseSet], 4] == 891 881 376).
This should also do the trick, and it runs faster than Szabolcs' proposal.
(t=Table[{RandomInteger[{1, 96}], RandomInteger[{1, 4}]}, {10^6}, {4}]); //Timing
I saw no need to remove duplicate subsets since we're sampling, not trying to produce the entire population. (But you can easily remove duplicates if you so wish.)
BTW, for this case, Table runs faster than ParallelTable.
I believe a slight variation of David's method will produce the duplicate-free form requested in the original post.
set =
With[{r = Range@96},
{RandomSample[r, 4], RandomInteger[{1, 4}, 4]}\[Transpose] ~Table~ {1*^6}
];
This of course does not produce 10^6 unique samples, but Szabolcs showed how that may be done, and the cost is not great.
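Outside Mathematica, the same idea can be sketched in Python (my own illustration): draw 4 distinct first coordinates and pair each with a random second coordinate, so a quadruple never repeats a first element.

import random

def random_quadruple():
    firsts = random.sample(range(1, 97), 4)              # 4 distinct values from 1..96
    return [(i, random.randint(1, 4)) for i in firsts]   # attach a random second coordinate

quadruples = [random_quadruple() for _ in range(10**6)]
print(quadruples[0])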