Make values in tensor fit in given range - performance

I have a 3d tensor and would like to ensure that all values fall within a given range (0-1 in this case). In order to do this I have already written the following code:
function capTo1or0(Tensor3d)
    tensor_width = Tensor3d:size()[2]
    tensor_height = Tensor3d:size()[3]
    tensor_depth = Tensor3d:size()[1]
    for i = 1, tensor_width, 1 do
        for j = 1, tensor_height, 1 do
            for k = 1, tensor_depth, 1 do
                if Tensor3d[k][i][j] > 1 then
                    Tensor3d[k][i][j] = 1
                end
                if Tensor3d[k][i][j] < 0.0 then
                    Tensor3d[k][i][j] = 0.0
                end
            end
        end
    end
    return Tensor3d
end
and it works; there is just one problem: performance is terrible. I know there has to be a better way of doing this than looping over the entire array, given that most tensor operations that do not involve manually looping over an array are much faster. Does anybody know how to make this faster?
As an example, say that I have a `2x3x3` tensor with the values
[1, 2, 0.5][0.5,0.2,-0.2]
[0.1,0.2,0.3][1, 1, 1 ]
[-2, -1, 2 ][0.2,-5,-1 ]
then I expect an outcome of
[1, 1, 0.5][0.5,0.2,0]
[0.1,0.2,0.3][1, 1, 1 ]
[0, 0, 1 ][0.2, 0, 0 ]
replacing every value under the lower bound of 0 with 0 and every value over the upper bound of 1 with 1.
Anybody know how to do this fast?

I have never used Torch, but its documentation says:
http://torch7.readthedocs.io/en/rtd/maths/#torch.clamp
[res] torch.clamp([res,] tensor1, min_value, max_value)
Clamp all elements in the tensor into the range [min_value,
max_value]. ie:
y_i = x_i, if min_value <= x_i <= max_value
= min_value, if x_i < min_value
= max_value, if x_i > max_value
z=torch.clamp(x,0,1) will return a new tensor with the result of x
bounded between 0 and 1.
torch.clamp(z,x,0,1) will put the result in z.
x:clamp(0,1) will perform the clamp operation in place (putting the
result in x).
z:clamp(x,0,1) will put the result in z.
I guess that is what you are looking for?

Related

What's wrong with this implementation of 'Highest product of consecutive elements'

I am trying to write a method that returns the highest product from adjacent values within an array. Below is my attempt; however, it fails to return the highest product in some instances (and I am unclear on the pattern of the failures), and I cannot see what is wrong with this code:
def adjacentElementsProduct(inputArray)
  inputArray.each_with_index do |value, index|
    if inputArray[index + 1]
      products = [] << value * inputArray[index + 1]
      return products.max
    end
  end
end
a) Can anyone help me understand what is wrong with the above implementation?
b) Can anyone suggest a less verbose and simpler way of achieving the desired result?
Here are the fails and passes (this is from codefights.com, 1st question, 2nd chapter 'Edge of the Ocean'):
First of all, you're returning from the method after calculating only the first product. So all your answers are simply the product of the first two numbers. To fix it, initialize your products variable before the loop and put your return after the loop.
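To illustrate that fix, here is a minimal sketch in Python rather than Ruby, purely for brevity (the names are mine, not from the question; the corrected Ruby follows the same control flow):
def adjacent_elements_product(input_array):
    products = []                                # initialize before the loop
    for index in range(len(input_array) - 1):
        products.append(input_array[index] * input_array[index + 1])
    return max(products)                         # return only after the loop

print(adjacent_elements_product([3, 6, -2, -5, 7, 3]))  # => 21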
As for a cleaner implementation, take a look at Enumerable's each_cons, which returns consecutive members of an array (for example, each_cons(2) returns consecutive pairs). Then you can multiply each pair in one fell swoop via map and return the maximum.
def adjacentElementsProduct(inputArray)
  inputArray.each_cons(2).map { |a, b| a * b }.max
end
I presume that, for an array arr, you want the largest product
arr[i] * arr[i+1] * arr[i+2] *...* arr[j-1] * arr[j]
where 0 <= i <= j <= arr.size-1. We can do that in a Ruby-like way using Enumerable#each_cons, as #Mark suggested.
def max_adjacent_product(arr)
  (1..arr.size).each_with_object({prod: -Float::INFINITY, adj: []}) do |len, best|
    arr.each_cons(len) do |a|
      pr = a.reduce(:*)
      best.replace({prod: pr, adj: a}) if pr > best[:prod]
    end
  end
end
max_adjacent_product [2, -4, 3, -5, -10]
#=> {:prod=>150, :adj=>[3, -5, -10]}
I loop through the number of adjacent elements to consider. In the example, that would be from 1 to 5. For each number of adjacent elements, len, I then loop through each subarray a of adjacent elements of arr for which a.size equals len. In the example, for len = 3, that would be [2, -4, 3], [-4, 3, -5] and [3, -5, -10]. For each of those subarrays of adjacent elements I then compute the product of its elements (-24, 60 and 150 for the example just given). If a product is greater than the best known product so far I make it the best subarray so far. The final value of best is returned by the method.

Pseudo-Random Variable

I have a variable, between 0 and 1, which should dictate the likelihood that a second variable, a random number between 0 and 1, is greater than 0.5. In other words, if I were to generate the second variable 1000 times, the average should be approximately equal to the first variable's value. How do I write this code?
Oh, and the second variable should always be capable of producing either 0 or 1 in any condition, just more or less likely depending on the value of the first variable. Here is a link to a graph which models approximately how I would like the program to behave. Each equation represents a separate value for the first variable.
You have a variable p and you are looking for a mapping function f(x) that maps uniform random rolls x in [0, 1] to the same interval [0, 1] such that the expected value, i.e. the average of all rolls, is p.
You have chosen the function prototype
f(x) = pow(x, c)
where c must be chosen appropriately. If x is uniformly distributed in [0, 1], the average value is
int(f(x) dx, [0, 1]) == p
Using the antiderivative
int(pow(x, c) dx) == pow(x, c + 1) / (c + 1) + K
the integral over [0, 1] evaluates to 1 / (c + 1) == p, which gives:
c = 1/p - 1
A different approach is to make p the median value of the distribution, such that half of the rolls fall below p, the other half above p. This yields a different distribution. (I am aware that you didn't ask for that.) Now, we have to satisfy the condition:
f(0.5) == pow(0.5, c) == p
which yields:
c = log(p) / log(0.5)
With the current function prototype, you cannot satisfy both requirements. Your function is also asymmetric (f(x, p) != f(1-x, 1-p)).
Python functions below:
import math
import random

def medianrand(p):
    """Random number between 0 and 1 whose median is p"""
    c = math.log(p) / math.log(0.5)
    return math.pow(random.random(), c)

def averagerand(p):
    """Random number between 0 and 1 whose expected value is p"""
    c = 1/p - 1
    return math.pow(random.random(), c)
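As a quick sanity check (not part of the original answer, and assuming the functions above are defined), the empirical mean of many draws from averagerand should land near p:
p = 0.3
samples = [averagerand(p) for _ in range(100000)]
print(sum(samples) / len(samples))  # should come out close to 0.3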
You can do this by using a dummy. First set the first variable to a value between 0 and 1. Then create a random number in the dummy between 0 and 1. If this dummy is bigger than the first variable, you generate a random number between 0 and 0.5, and otherwise you generate a number between 0.5 and 1.
In pseudocode:
real a = 0.7
real total = 0.0
for i from 1 to 1000 begin
    real dummy = rand(0, 1)
    real b
    if dummy > a then
        b = rand(0, 0.5)
    else
        b = rand(0.5, 1)
    end if
    total = total + b
end for
real avg = total / 1000
Please note that this algorithm will generate average values between 0.25 and 0.75. For a = 1 it will only generate random values between 0.5 and 1, which should average to 0.75. For a=0 it will generate only random numbers between 0 and 0.5, which should average to 0.25.
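For reference, here is a direct Python translation of the pseudocode above (same logic, my wording in the comments):
import random

a = 0.7                                   # the first variable
total = 0.0
for _ in range(1000):
    dummy = random.random()               # uniform roll in [0, 1)
    if dummy > a:
        b = random.uniform(0.0, 0.5)
    else:
        b = random.uniform(0.5, 1.0)
    total += b
print(total / 1000)                       # roughly 0.25 + 0.5 * a, so about 0.6 here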
I've made a sort of pseudo-solution to this problem, which I think is acceptable.
Here is the algorithm I made:
a = 0.2                    # variable one
b = random.random()        # variable two
b = b**(1/(2**(4*a - 1)))
It doesn't actually produce the average results that I wanted, but it's close enough for my purposes.
Edit: Here's a graph I made from a large number of data points generated with a Python script using this algorithm:
import random
mod = 6
div = 100
for z in xrange(div):
    s = 0
    for i in xrange(100000):
        a = (z+1)/float(div)  # variable one
        b = random.random()   # variable two
        c = b**(1/(2**((mod*a*2)-mod)))
        s += c
    print str((z+1)/float(div)) + "\t" + str(round(s/100000.0, 3))
Each point in the table is the result of 100000 randomly generated points from the algorithm; their x positions being the a value given, and their y positions being their average. Ideally they would fit to a straight line of y = x, but as you can see they fit closer to an arctan equation. I'm trying to mess around with the algorithm so that the averages fit the line, but I haven't had much luck as of yet.

Allocate an array of integers proportionally compensating for rounding errors

I have an array of non-negative values. I want to build an array of values whose sum is 20 so that they are proportional to the first array.
This would be an easy problem, except that I want the proportional array to sum to exactly 20, compensating for any rounding error.
For example, the array
input = [400, 400, 0, 0, 100, 50, 50]
would yield
output = [8, 8, 0, 0, 2, 1, 1]
sum(output) = 20
However, most cases are going to have a lot of rounding errors, like
input = [3, 3, 3, 3, 3, 3, 18]
naively yields
output = [1, 1, 1, 1, 1, 1, 10]
sum(output) = 16 (ouch)
Is there a good way to apportion the output array so that it adds up to 20 every time?
There's a very simple answer to this question: I've done it many times. After each assignment into the new array, you reduce the values you're working with as follows:
Call the first array A, and the new, proportional array B (which starts out empty).
Call the sum of A elements T
Call the desired sum S.
For each element of the array (i) do the following:
a. B[i] = round(A[i] / T * S). (rounding to nearest integer, penny or whatever is required)
b. T = T - A[i]
c. S = S - B[i]
That's it! Easy to implement in any programming language or in a spreadsheet.
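Here is a minimal Python sketch of those steps, with my own (illustrative) names standing in for A, B, S and T:
def apportion(values, target):
    remaining_total = sum(values)                 # T
    remaining_target = target                     # S
    result = []                                   # B
    for v in values:
        if remaining_total == 0:                  # only zeros are left
            share = 0
        else:
            share = int(round(float(v) / remaining_total * remaining_target))
        result.append(share)
        remaining_total -= v                      # T = T - A[i]
        remaining_target -= share                 # S = S - B[i]
    return result

print(apportion([3, 3, 3, 3, 3, 3, 18], 20))      # [2, 2, 2, 2, 2, 1, 9], sums to 20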
The solution is optimal in that the resulting array's elements will never be more than 1 away from their ideal, non-rounded values. Let's demonstrate with your example:
T = 36, S = 20. B[1] = round(A[1] / T * S) = 2. (ideally, 1.666....)
T = 33, S = 18. B[2] = round(A[2] / T * S) = 2. (ideally, 1.666....)
T = 30, S = 16. B[3] = round(A[3] / T * S) = 2. (ideally, 1.666....)
T = 27, S = 14. B[4] = round(A[4] / T * S) = 2. (ideally, 1.666....)
T = 24, S = 12. B[5] = round(A[5] / T * S) = 2. (ideally, 1.666....)
T = 21, S = 10. B[6] = round(A[6] / T * S) = 1. (ideally, 1.666....)
T = 18, S = 9. B[7] = round(A[7] / T * S) = 9. (ideally, 10)
Notice that, comparing every value in B with its ideal value in parentheses, the difference is never more than 1.
It's also interesting to note that rearranging the elements in the array can result in different corresponding values in the resulting array. I've found that arranging the elements in ascending order is best, because it results in the smallest average percentage difference between actual and ideal.
Your problem is similar to proportional representation, where you want to share N seats (in your case 20) among parties proportionally to the votes they obtain, in your case [3, 3, 3, 3, 3, 3, 18].
There are several methods used in different countries to handle the rounding problem. My code below uses the Hagenbach-Bischoff quota method used in Switzerland, which basically allocates the seats remaining after an integer division by (N+1) to parties which have the highest remainder:
def proportional(nseats, votes):
    """assign n seats proportionally to votes using the Hagenbach-Bischoff quota
    :param nseats: int number of seats to assign
    :param votes: iterable of int or float weighting each party
    :result: list of ints seats allocated to each party
    """
    quota = sum(votes) / (1. + nseats)  # force float
    frac = [vote / quota for vote in votes]
    res = [int(f) for f in frac]
    n = nseats - sum(res)  # number of seats remaining to allocate
    if n == 0: return res  # done
    if n < 0: return [min(x, nseats) for x in res]  # see siamii's comment
    # give the remaining seats to the n parties with the largest remainder
    remainders = [ai - bi for ai, bi in zip(frac, res)]
    limit = sorted(remainders, reverse=True)[n - 1]
    # the n parties with a remainder larger than limit get an extra seat
    for i, r in enumerate(remainders):
        if r >= limit:
            res[i] += 1
            n -= 1  # attempt to handle perfect equality
            if n == 0: return res  # done
    raise  # should never happen
However this method doesn't always give the same number of seats to parties with perfect equality as in your case:
proportional(20,[3, 3, 3, 3, 3, 3, 18])
[2,2,2,2,1,1,10]
You have set 3 incompatible requirements. An integer-valued array proportional to [1,1,1] cannot be made to sum to exactly 20. You must choose to break one of the "sum to exactly 20", "proportional to input", and "integer values" requirements.
If you choose to break the requirement for integer values, then use floating point or rational numbers. If you choose to break the exact sum requirement, then you've already solved the problem. Choosing to break proportionality is a little trickier. One approach you might take is to figure out how far off your sum is, and then distribute corrections randomly through the output array. For example, if your input is:
[1, 1, 1]
then you could first make it sum as well as possible while still being proportional:
[7, 7, 7]
and since 20 - (7+7+7) = -1, choose one element to decrement at random:
[7, 6, 7]
If the error was 4, you would choose four elements to increment.
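A rough Python sketch of that last option, naive rounding followed by randomly distributed corrections (function and variable names are illustrative, not from this answer):
import random

def proportional_ish(values, target=20):
    scale = float(target) / sum(values)
    out = [int(round(v * scale)) for v in values]
    error = target - sum(out)                     # how far off the naive rounding is
    step = 1 if error > 0 else -1
    for _ in range(abs(error)):
        out[random.randrange(len(out))] += step   # nudge a randomly chosen element
    return out

print(proportional_ish([1, 1, 1]))                # e.g. [7, 6, 7]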
A naïve solution that doesn't perform well, but will provide the right result...
Write a function that, given an array of eight integers (candidate) and the input array, outputs the index of the element that is farthest from being proportional to the others (pseudocode):
function next_index(candidate, input)
    // Calculate weights
    for i in 1 .. 8
        w[i] = candidate[i] / input[i]
    end for
    // find the smallest weight
    min = infinity
    min_index = 0
    for i in 1 .. 8
        if w[i] < min then
            min = w[i]
            min_index = i
        end if
    end for
    return min_index
end function
Then just do this
result = [0, 0, 0, 0, 0, 0, 0, 0]
result[next_index(result, input)]++ for 1 .. 20
If there is no optimal solution, it'll skew towards the beginning of the array.
Using the approach above, you can reduce the number of iterations by rounding down (as you did in your example) and then just use the approach above to add what has been left out due to rounding errors:
result = <<approach using rounding down>>
while sum(result) < 20
    result[next_index(result, input)]++
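A hedged Python sketch of that combination, rounding down first and then topping up via the "farthest from proportional" rule (helper names are mine; zero inputs are skipped to avoid dividing by zero):
def next_index(candidate, input_values):
    # index whose current allocation is furthest below its proportional share
    ratios = [(c / float(v), i)
              for i, (c, v) in enumerate(zip(candidate, input_values)) if v > 0]
    return min(ratios)[1]

def allocate(input_values, target=20):
    total = float(sum(input_values))
    result = [int(v * target / total) for v in input_values]  # round down
    while sum(result) < target:
        result[next_index(result, input_values)] += 1
    return result

print(allocate([3, 3, 3, 3, 3, 3, 18]))                       # sums to exactly 20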
So the answers and comments above were helpful, particularly the decreasing-sum comment from #Frederik.
The solution I came up with takes advantage of the fact that, for an input array v, sum(v_i * 20) is divisible by sum(v). So for each value in v, I multiply by 20 and divide by the sum. I keep the quotient and accumulate the remainder. Whenever the accumulator is greater than sum(v), I add one to the value. That way I'm guaranteed that all the remainders get rolled into the results.
Is that legible? Here's the implementation in Python:
def proportion(values, total):
    # set up by getting the sum of the values and starting
    # with an empty result list and accumulator
    sum_values = sum(values)
    new_values = []
    acc = 0
    for v in values:
        # for each value, find quotient and remainder
        q, r = divmod(v * total, sum_values)
        if acc + r < sum_values:
            # if the accumulator plus remainder is too small, just add and move on
            acc += r
        else:
            # we've accumulated enough to go over sum(values), so add 1 to result
            if acc > r:
                # add to previous
                new_values[-1] += 1
            else:
                # add to current
                q += 1
            acc -= sum_values - r
        # save the new value
        new_values.append(q)
    # accumulator is guaranteed to be zero at the end
    print new_values, sum_values, acc
    return new_values
(I added an enhancement that if the accumulator > remainder, I increment the previous value instead of the current value)

How do I get values from an array so that items at the beginning are returned more often?

Given an array like [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], I want to get a random value that takes into consideration the position.
I want the likelihood of 1 popping up to be way bigger than 10.
Is something like this possible?
For the sake of simplicity let's assume an array arr = [x, y, z] from which we will be sampling values. We'd like to see following relative frequencies of x, y and z:
frequencies = [5, 2, 1]
Preprocess these frequencies to calculate margins for our subsequent dice roll:
thresholds = frequencies.clone
1.upto(frequencies.count - 1).each { |i| thresholds[i] += thresholds[i - 1] }
Let's sum them up.
max = frequencies.reduce :+
Now choose a random number
roll = 1 + rand max
index = thresholds.find_index { |x| roll <= x }
Return arr[index] as a result. To sum up:
def sample arr, frequencies
  # assert arr.count == frequencies.count
  thresholds = frequencies.clone
  1.upto(frequencies.count - 1).each { |i| thresholds[i] += thresholds[i - 1] }
  max = frequencies.reduce :+
  roll = 1 + rand(max)
  index = thresholds.find_index { |x| roll <= x }
  arr[index]
end
Let's see how it works.
data = 80_000.times.map { sample [:x, :y, :z], [5, 2, 1] }
A histogram for data shows that sample works as we've intended.
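For comparison, the same cumulative-threshold idea sketched in Python (names are mine; bisect finds the first threshold the roll does not exceed):
import bisect
import random

def weighted_sample(arr, frequencies):
    thresholds = []
    running = 0
    for f in frequencies:                      # running totals, e.g. [5, 7, 8]
        running += f
        thresholds.append(running)
    roll = random.randint(1, thresholds[-1])   # 1 .. sum(frequencies)
    return arr[bisect.bisect_left(thresholds, roll)]

data = [weighted_sample(['x', 'y', 'z'], [5, 2, 1]) for _ in range(80000)]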
def coin_toss( arr )
  arr.detect{ rand(2) == 0 } || arr.last
end
a = (1..10).to_a
10.times{ print coin_toss( a ), ' ' } #=> 1 1 1 9 1 5 4 1 1 3
This takes the first element of the array, flips a coin, returns the element and stops if the coinflip is 'tails'; the same with the next element otherwise. If it is 'heads' all the way, return the last element.
A simple way to implement this with an exponentially decreasing probability of being selected is to simulate coin flips. Generate random integers of 0 or 1; the index into the array is the number of consecutive 1s you get. With this method, 2 is half as likely to be chosen as 1, 3 is a quarter as likely, and so on. You can vary the probabilities slightly, say by generating random numbers between 1 and 5 and counting the number of consecutive rounds above 1, which makes each number in the array 4/5 as likely to appear as the one before.
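A small Python sketch of that coin-flip scheme (capping at the last element, just as the Ruby version above does):
import random

def coin_flip_pick(arr):
    index = 0
    while index < len(arr) - 1 and random.randint(0, 1) == 1:
        index += 1                 # each consecutive "1" moves one slot further in
    return arr[index]

print([coin_flip_pick(list(range(1, 11))) for _ in range(10)])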
A better and more general way to solve this problem is to use the alias method. See the answer to this question for more information:
Data structure for loaded dice?

Create many constrained, random permutation of a list

I need to make a random list of permutations. The elements can be anything, but assume that they are the integers 0 through x-1. I want to make y lists, each containing z elements. The rules are that no list may contain the same element twice and that, over all the lists, the number of times each element is used is the same (or as close as possible). For instance, if my elements are 0, 1, 2, 3, y is 6, and z is 2, then one possible solution is:
0,3
1,2
3,0
2,1
0,1
2,3
Each row has only unique elements and no element has been used more than 3 times. If y were 7, then 2 elements would be used 4 times, the rest 3.
This could be improved, but it seems to do the job (Python):
import math, random

def get_pool(items, y, z):
    slots = y*z
    use_each_times = slots/len(items)
    exceptions = slots - use_each_times*len(items)
    if (use_each_times > y or
            exceptions > 0 and use_each_times+1 > y):
        raise Exception("Impossible.")
    pool = {}
    for n in items:
        pool[n] = use_each_times
    for n in random.sample(items, exceptions):
        pool[n] += 1
    return pool

def rebalance(ret, pool, z):
    max_item = None
    max_times = None
    for item, times in pool.items():
        if times > max_times:
            max_item = item
            max_times = times
    next, times = max_item, max_times
    candidates = []
    for i in range(len(ret)):
        item = ret[i]
        if next not in item:
            candidates.append( (item, i) )
    swap, swap_index = random.choice(candidates)
    swapi = []
    for i in range(len(swap)):
        if swap[i] not in pool:
            swapi.append( (swap[i], i) )
    which, i = random.choice(swapi)
    pool[next] -= 1
    pool[swap[i]] = 1
    swap[i] = next
    ret[swap_index] = swap

def plist(items, y, z):
    pool = get_pool(items, y, z)
    ret = []
    while len(pool.keys()) > 0:
        while len(pool.keys()) < z:
            rebalance(ret, pool, z)
        selections = random.sample(pool.keys(), z)
        for i in selections:
            pool[i] -= 1
            if pool[i] == 0:
                del pool[i]
        ret.append( selections )
    return ret

print plist([0,1,2,3], 6, 2)
Ok, one way to approximate that:
1 - shuffle your list
2 - take the first z elements to form the next row
3 - repeat (2) as long as you have numbers left in the list
4 - if you don't have enough numbers left to finish a row, reshuffle the original list and take the missing elements, making sure you don't retake numbers already in that row
5 - start over at step (2) as long as you need rows
I think this should be as random as you can make it and it will certainly follow your criteria. Plus, you need very few tests for duplicate elements.
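A rough Python sketch of that procedure (names are mine; when a refilled deck offers a number already in the current row, it is simply skipped):
import random

def constrained_rows(items, y, z):
    deck = list(items)
    random.shuffle(deck)
    rows = []
    for _ in range(y):
        row = []
        while len(row) < z:
            if not deck:                  # ran out of numbers: reshuffle a fresh copy
                deck = list(items)
                random.shuffle(deck)
            v = deck.pop()
            if v not in row:              # never retake a number within the same row
                row.append(v)
        rows.append(row)
    return rows

print(constrained_rows([0, 1, 2, 3], 6, 2))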
First, you can always randomly sort the list in the end, so let's not worry about making "random permutations" (hard); and just worry about 1) making permutations (easy) and 2) randomizing them (easy).
If you want "truly" random groups, you have to accept that randomization by nature doesn't really allow for the constraint of "even distribution" of results -- you may get that or you may get a run of similar-looking ones. If you really want even distribution, first make the sets evenly distributed, and then randomize them as a group.
Do you have to use each element in the set x evenly? It's not clear from the rules that I couldn't just make the following interpretation:
Note the following: "over all the lists, the number of times each elements is used is the same (or as close as possible)"
Based on this criterion, and the rule that z < x*, I postulate that you can simply enumerate the items over all the lists. So you automatically make y lists of the items, enumerated up to position z. Your example doesn't fulfill the rule above as closely as my version will. Using your example of x={0,1,2,3}, y=6 and z=2, I get:
0,1 0,1 0,1 0,1 0,1 0,1
Now I didn't use 2 or 3, but you didn't say I had to use them all. If I had to use them all and I don't care to be able to prove that I am "as close as possible" to even usage, I would just enumerate across all the items through the lists, like this:
0,1 2,3 0,1 2,3 0,1 2,3
Finally, suppose I really do have to use all the elements. To calculate how many times each element can repeat, I just take (y*z)/(count of x). That way, I don't have to sit and worry about how to divide up the items in the list. If there is a remainder, or the result is less than 1, then I know that I will not get an exact number of repeats, so in those cases, it doesn't much matter to try to waste computational energy to make it perfect. I contend that the fastest result is still to just enumerate as above, and use the calculation here to show why either a perfect result was or wasn't achieved. A fancy algorithm to extract from this calculation how many positions will be duplicates could be achieved, but "it's too long to fit here in the margin".
*Each list has the same z number of elements, so it will be impossible to make lists where z is greater than x and still fulfill the rule that no list may contain the same element twice. Therefore, this rule demands that z cannot be greater than x.
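A literal sketch of that plain enumeration in Python (illustrative only):
def enumerate_rows(items, y, z):
    stream = [items[i % len(items)] for i in range(y * z)]  # walk the items in order, wrapping
    return [stream[r * z:(r + 1) * z] for r in range(y)]

print(enumerate_rows([0, 1, 2, 3], 6, 2))
# [[0, 1], [2, 3], [0, 1], [2, 3], [0, 1], [2, 3]]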
Based on new details in the comments, the solution may simply be an implementation of a standard random permutation generation algorithm. There is a lengthy discussion of random permutation generation algorithms here:
http://www.techuser.net/randpermgen.html
(From Google search: random permutation generation)
This works in Ruby:
# list is the elements to be permuted
# y is the number of results desired
# z is the number of elements per result
# equalizer keeps track of who got used how many times
def constrained_permutations list, y, z
list.uniq! # Never trust the user. We want no repetitions.
equalizer = {}
list.each { |element| equalizer[element] = 0 }
results = []
# Do this until we get as many results as desired
while results.size < y
pool = []
puts pool
least_used = equalizer.each_value.min
# Find how used the least used element was
while pool.size < z
# Do this until we have enough elements in this resultset
element = nil
while element.nil?
# If we run out of "least used elements", then we need to increment
# our definition of "least used" by 1 and keep going.
element = list.shuffle.find do |x|
!pool.include?(x) && equalizer[x] == least_used
end
least_used += 1 if element.nil?
end
equalizer[element] += 1
# This element has now been used one more time.
pool << element
end
results << pool
end
return results
end
Sample usage:
constrained_permutations [0,1,2,3,4,5,6], 6, 2
=> [[4, 0], [1, 3], [2, 5], [6, 0], [2, 5], [3, 6]]
constrained_permutations [0,1,2,3,4,5,6], 6, 2
=> [[4, 5], [6, 3], [0, 2], [1, 6], [5, 4], [3, 0]]
http://en.wikipedia.org/wiki/Fisher-Yates_shuffle
