Elasticsearch aggregator price range from 0 to 0

I am using an Elasticsearch aggregation query to get a list of available products based on price range.
This is what my aggregation query looks like:
'aggs': {
    'prices': {
        'range': {
            'field': 'price',
            'ranges': [
                {'from': 0, 'to': 0},
                {'to': 4.99},
                {'from': 5, 'to': 9.99},
                {'from': 10}
            ]
        }
    }
}
I want to get the number of products that are free, so I added the range from 0 to 0, but that didn't work. The rest of the ranges are working fine. How can I get an aggregation bucket for price 0?

Quoting from the Range Aggregation documentation:
Note that this aggregation includes the from value and excludes the to value for each range.
So the range aggregation excludes the to value you have entered. That is why you didn't get any documents in the 0-0 bucket.
In other words, from: 0, to: 1 means the bucket 0 ≤ value < 1, and from: 0, to: 0 means the bucket 0 ≤ value < 0, which is empty and therefore doesn't include 0.
Solution:
If you want a bucket containing only the value 0 with the range aggregation, you can set the range to from: 0, to: 0.000000001. Here the to value is just a tiny value greater than 0 (pick whatever epsilon suits your application).
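For example, the ranges from the question could be rewritten like this (just a sketch; 0.000000001 is an arbitrary epsilon, and any value smaller than your smallest non-zero price will do):
'aggs': {
    'prices': {
        'range': {
            'field': 'price',
            'ranges': [
                {'from': 0, 'to': 0.000000001},
                {'to': 4.99},
                {'from': 5, 'to': 9.99},
                {'from': 10}
            ]
        }
    }
}
The first bucket then contains only documents whose price is effectively 0, i.e. the free products.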

Related

How to map the results obtained after multiclass classification to 1 and 0

I am working on image classification for the CIFAR data set. I obtained the predicted labels as outputs mapped between 0 and 1 for 10 different classes. Is there any way to find the class a predicted label belongs to?
# sample output obtained
array([3.3655483e-04, 9.4402254e-01, 1.1646092e-03, 2.8560971e-04,
1.4086446e-04, 7.1564602e-05, 2.4985364e-03, 6.5030693e-04,
3.4783698e-05, 5.0794542e-02], dtype=float32)
One way is to find the max and set that index to 1 and the rest to 0.
# for the above case it should look like this
array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0])
Can anybody tell me how to do this? If you have any better methods, please suggest them. Thanks.
It is as simple as
>>> data = np.array([3.3655483e-04, 9.4402254e-01, 1.1646092e-03, 2.8560971e-04,
... 1.4086446e-04, 7.1564602e-05, 2.4985364e-03, 6.5030693e-04,
... 3.4783698e-05, 5.0794542e-02], dtype=np.float32)
>>>
>>> (data == data.max()).view(np.int8)
array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int8)
Explanation: data.max() finds the largest value. We compare that with each individual element to get a vector of truth values. We then cast that to integers, taking advantage of the fact that True maps to 1 and False maps to 0.
Please note that this will return multiple ones if the maximum is not unique.
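If you want a result with exactly one 1 even when the maximum value occurs more than once, a small alternative sketch using np.argmax (which returns the index of the first occurrence of the maximum) could look like this:
import numpy as np

data = np.array([3.3655483e-04, 9.4402254e-01, 1.1646092e-03, 2.8560971e-04,
                 1.4086446e-04, 7.1564602e-05, 2.4985364e-03, 6.5030693e-04,
                 3.4783698e-05, 5.0794542e-02], dtype=np.float32)

# Start from an all-zero vector and set only the position of the (first) maximum to 1
one_hot = np.zeros(data.shape, dtype=np.int8)
one_hot[data.argmax()] = 1
print(one_hot)  # [0 1 0 0 0 0 0 0 0 0]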

Scala: What would be the idiomatic way to test equally spaced frequencies in an array?

I have a Scala application with the following use case. Given a numberOfDates: Int and an optimalFrequencyInDays: Int, I need to find the frequency in days, closest to the optimal frequency, that gives me evenly spaced triggers within that number of days. As extra conditions, a trigger also has to happen at the beginning and at the end; furthermore, the number of days between any two triggers cannot be smaller than the optimal frequency, e.g.
val numberOfDays = 260
val optimalFrequencyInDays = 2
// equally spaced answer is 3 i.e. 87 triggers Seq(0, 3, 6, 9, .. , 255, 259)
val numberOfDays = 260
val optimalFrequencyInDays = 124
// equally spaced answer is 130 i.e. 3 triggers Seq(0, 130, 259)
I think the rule to solve this is:
val solution = (numberOfDates % optimalFrequencyInDays) match {
  case 0 => numberOfDates / (((numberOfDates / optimalFrequencyInDays) / 2) + 1)
  case _ => numberOfDates / (((numberOfDates / optimalFrequencyInDays + 1) / 2) + 1)
}
In words, the formula (length / 2 + 1) gives me the range of odd numbers that will produce the number of triggers I need for an evenly spaced solution, e.g. for 20 it would be 20 / 2 + 1 = 11, 9, 7, 5, 3, 2. If I divide the length by the result of that formula, I get the evenly spaced frequency I need.
The output of this use case is encoded in an array of Booleans of the form Array(1, 0, 0, 1, ..., 1, 0, 0, 1), indicating whether there was a trigger on the day at that index. What is the idiomatic Scala way to test that the triggers are equally spaced, except for the last spacing, which can be off by +/- 1 because there is no perfect fit?
You have a collection of 1s and 0s, and you want to test if the 1s are evenly spaced except for the final spacing which could be an outlier.
triggers.mkString        // one long string of 0's and 1's
  .split("(?=1)")        // multiple strings, all starting with '1'
  .dropRight(2)          // drop the final `1` and the possible outlier
  .sliding(2)            // pair up the rest
  .forall {              // all the same?
    case Array(a, b) => a == b
    case Array(_)    => true  // too few to matter
  }
This will handle an empty triggers collection as well as a collection of one or more 1s (no 0s).
update
This will work with an Array[Boolean], either by mapping it to an Array[Int] or by changing the split() pattern to split("(?=true)").
You can test the "outlier" for its off-by-one condition by saving the intermediate collection after the split() and testing its head against its init.last.
You can take the indexes of the elements containing ones, and then calculate the difference between each consecutive pair of indexes to get the intervals. Then you just have to check whether all intervals but the last are equal, and whether the last interval is equal +/- 1.
val triggers = Vector(1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1)
val intervals = triggers.zipWithIndex
  .filter(_._1 == 1)
  .map(_._2)
  .sliding(2)
  .map { case Vector(a, b) => b - a }
  .toVector
val allButLastAreEqual = intervals.init.forall(_ == intervals.head)
val lastIsEqualToAllPlusMinusOne = (intervals.last - intervals.head).abs <= 1

Coin change with limited coins complexity

If there is an unlimited number of every coin, then the complexity is O(n*m), where n is the total change and m is the number of coin types. Now, when the coins of every type are limited, we have to take into account the remaining coins. I managed to make it work with a complexity of O(n*m^2) using another for loop of size n so I can track the remaining coins for each type. Is there a way or trick to make the complexity better? EDIT: The problem is to compute the least amount of coins required to make the exact given change, and the number of times that we used each coin type.
There is no need for an extra loop. You need to:
recurse with a depth of at most m (the number of coin types) levels, dealing with one specific coin per recursion level, and
loop at most n times at each recursion level in order to decide how many you will take of a given coin.
Here is how the code would look in Python 3:
def getChange(coins, amount, coinIndex = 0):
    if amount == 0:
        return []  # success
    if coinIndex >= len(coins):
        return None  # failure
    coin = coins[coinIndex]
    coinIndex += 1
    # Start by taking as many as possible from this coin
    canTake = min(amount // coin["value"], coin["count"])
    # Reduce the number taken from this coin until success
    for count in range(canTake, -1, -1):  # count will go down to zero
        # Recurse to decide how many to take from the next coins
        change = getChange(coins, amount - coin["value"] * count, coinIndex)
        if change != None:  # We had success
            if count:  # Register this number for this coin:
                return change + [{ "value": coin["value"], "count": count }]
            return change

# Example data and call:
coins = [
    { "value": 20, "count": 2 },
    { "value": 10, "count": 2 },
    { "value": 5, "count": 3 },
    { "value": 2, "count": 2 },
    { "value": 1, "count": 10 }
]
result = getChange(coins, 84)
print(result)
Output for the given example:
[
{'value': 1, 'count': 5},
{'value': 2, 'count': 2},
{'value': 5, 'count': 3},
{'value': 10, 'count': 2},
{'value': 20, 'count': 2}
]
Minimising the number of coins used
As stated in the comments, the above algorithm returns the first solution it finds. If there is a requirement that the number of individual coins must be minimised when there are multiple solutions, then you cannot return halfway through the loop, but must retain the "best" solution found so far.
Here is the modified code to achieve that:
def getchange(coins, amount):
    minCount = None

    def recurse(amount, coinIndex, coinCount):
        nonlocal minCount
        if amount == 0:
            if minCount == None or coinCount < minCount:
                minCount = coinCount
                return []  # success
            return None  # not optimal
        if coinIndex >= len(coins):
            return None  # failure
        bestChange = None
        coin = coins[coinIndex]
        # Start by taking as many as possible from this coin
        cantake = min(amount // coin["value"], coin["count"])
        # Reduce the number taken from this coin until 0
        for count in range(cantake, -1, -1):
            # Recurse, taking out this coin as a possible choice
            change = recurse(amount - coin["value"] * count, coinIndex + 1,
                             coinCount + count)
            # Do we have a solution that is better than the best so far?
            if change != None:
                if count:  # Does it involve this coin?
                    change.append({ "value": coin["value"], "count": count })
                bestChange = change  # register this as the best so far
        return bestChange

    return recurse(amount, 0, 0)

coins = [{ "value": 10, "count": 2 },
         { "value": 8, "count": 2 },
         { "value": 3, "count": 10 }]
result = getchange(coins, 26)
print(result)
Output:
[
{'value': 8, 'count': 2},
{'value': 10, 'count': 1}
]
Here's an implementation of an O(nm) solution in Python.
If one defines C(c, k) = 1 + x^c + x^(2c) + ... + x^(kc), then the program calculates the first n+1 coefficients of the polynomial product(C(c[i], k[i]), i = 1...ncoins). The j'th coefficient of this polynomial is the number of ways of making change for j.
When all the ks are unlimited, this polynomial product is easy to calculate (see, for example: https://stackoverflow.com/a/20743780/1400793). When limited, one needs to be able to calculate running sums of k terms efficiently, which is done in the program using the rs array.
# cs is a list of pairs (c, k) where there's k
# coins of value c.
def limited_coins(cs, n):
    r = [1] + [0] * n
    for c, k in cs:
        # rs[i] will contain the sum r[i] + r[i-c] + r[i-2c] + ...
        rs = r[:]
        for i in range(c, n+1):
            rs[i] += rs[i-c]
            # This line effectively performs:
            #   r'[i] = sum(r[i-j*c] for j=0...k)
            # but using rs[] so that the computation is O(1)
            # and in place.
            r[i] += rs[i-c] - (0 if i < c*(k+1) else rs[i-c*(k+1)])
    return r[n]

for n in range(50):
    print(n, limited_coins([(1, 3), (2, 2), (5, 3), (10, 2)], n))

How to test if all items in a column are identical

I am writing a program that needs to check if the values in each column of a two-dimensional array are equal. The number of columns is also static at five.
Currently I have an if statement that iterates from column to column and compares all of the values in that column in one giant check:
if column[0][i] == column[1][i] && column[0][i] == column[2][i]
Edit: Sorry, I didn't intend to cause confusion. The array represents a 5x5 game board. The rows refer to each individual array and the columns refer to the nth digit in each of the arrays.
Your question is somewhat confusing, I think because in most code I've come across that represents a structure with rows and columns using arrays, the "outer" array represents the rows and the "inner" arrays represent the columns. For example:
arr = [ [ a, b ],
        [ x, y ] ]
In the usual model, (a, b) is "row" 0, and (x, y) is row 1. That makes (a, x) column 0 and (b, y) column 1.
But your code suggests that your structure is inverted, with row 0 being (a, x) and row 1 being (b, y), which makes (a, b) column 0 and (x, y) column 1, so I'll answer it that way. If we want every value in a column to be equal to every value in the same column (i.e. a == b && x == y), then it's pretty easy. Suppose we have the following data:
arr = [ [ 10, 10, 10, 10 ],  # <-- Column 0
        [ 11, 11, 11, 11 ],  # <-- Column 1
        [ 12,  0, 12, 12 ] ] # <-- Column 2
To check if every value in "column" 0 is equal to every other value in column 0, we could do this:
arr[0].all? {|item| item == arr[0][0] } # => true
This just compares every item in the column to the first item arr[0][0] and returns false as soon as it finds one that isn't equal (or true if it doesn't).
In order to do this for every "row", we can wrap the first all? in another:
arr.all? do |sub_arr|
  sub_arr.all? {|item| item == sub_arr.first }
end
# => false
Edit: If your array looks instead like this:
arr = [ [ 10, 11, 12 ],
        [ 10, 11,  0 ],
        [ 10, 11, 12 ],
        [ 10, 11, 12 ] ]
#         │   │   └─ Column 2
#         │   └─ Column 1
#         └─ Column 0
One way to solve it would be this:
first_row, *rest = arr
rest.all? do |row|
  row.each_with_index.all? do |item, col_idx|
    row[col_idx] == first_row[col_idx]
  end
end
The first line assigns the first row to first_row and the rest of the rows to rest. Then for each row in rest we use all? to compare each item to the corresponding item in first_row.
P.S. Another way to solve it would be this:
arr.transpose.all? {|row| row.uniq.size == 1 }
Array#transpose just swaps the rows and columns (i.e. turning [[a,b],[x,y]] into [[a,x],[b,y]]), and then in all? we count the unique values in each "column" (which is now a row). If there's more than one unique value we know they're not all equal. Of course, this has a lot more overhead: Both transpose and uniq iterate over every value and return a new array, whereas the method above stops as soon as it finds any value that doesn't match. But given only 25 items it might not be so bad, depending on how often you need it to run.
P.P.S. I was curious how much better the first method performs than the second. You can see the code and the result here: https://gist.github.com/jrunning/7168af45c5fa5fb4ddd3 Because the first method "short-circuits"—i.e. it stops as soon as it finds a "wrong" value—it gets faster as the probability of a "wrong" value increases. With a 33% chance of any row having a wrong value, the first method performs 33% faster than the second. With a 75% chance, the first performs 80% faster than the second. I realize that's more information than you require, but I found it interesting.

Is there a way to find out where a number lies in a range in Ruby?

Let's say I have a min and a max number. max can be anything, but min will always be greater than zero.
I can get the range min..max, and let's say I have a third number, count. I want to divide the range by 10 (or some other number) to get a new scale. So, if the range is 1000, it would increment in steps of 100 (100, 200, 300, ...), and I want to find out where count lies within the range, based on my new scale. So, if count is 235, it would return 2 because that's where it lies on the range scale.
Am I making any sense? I'm trying to create a heat map based on a range of values, basically ... so I need to create the scale based on the range and find out where the value I'm testing lies on that new scale.
I was working with something like this, but it didn't work:
def heat_map(project, word_count, division)
  unless word_count == 0
    max = project.words.maximum('quantity')
    min = project.words.minimum('quantity')
    range = min..max
    total = range.count
    break_point = total / division
    heat_index = total.to_f / word_count.to_f
    heat_index.round
  else
    "freezing"
  end
end
I figured there's probably an easier Ruby way I'm missing.
Why not just use arithmetic and rounding? Assuming that number is between min and max and you want the range split into n_div divisions and x is the number you want to find the index of (according to above it looks like min = 0, max = 1000, n_div = 10, and x = 235):
def heat_index(x, min, max, n_div)
  break_point = (max - min).to_f / n_div.to_f
  heat_index = (((x - min).to_f) / break_point).to_i
end
Then heat_index(235, 0, 1000, 10) gives 2.
I'm just quickly brainstorming an idea, but would something like this help?
>> (1..100).each_slice(10).to_a.index { |subrange| subrange.include? 34 }
=> 3
>> (1..100).each_slice(5).to_a.index { |subrange| subrange.include? 34 }
=> 6
This tells you in which subrange (the subrange size is determined by the argument to each_slice) the value (the argument to subrange.include?) lies.
>> (1..1000).each_slice(100).to_a.index { |subrange| subrange.include? 235 }
=> 2
Note that the indices for the subranges start from 0, so you may want to add 1 to them depending on what you need. Also this isn't ready as is, but should be easy to wrap up in a method.
How's this? It makes an array of range boundaries and then checks if the number lies between them.
def find_range(min, max, query, increment)
  values = []
  (min..max).step(increment) { |value| values << value }
  values.each_with_index do |value, index|
    break if values[index + 1].nil?
    if query > value && query < values[index + 1]
      return index
    end
  end
end
EDIT: removed redundant variable
