Combining Observables in RxScala

I was wondering if someone can give me some hints here. I am learning RxScala and I have the following exercise to do:
- Implement an observable object that emits an event every 5 seconds and every 12 seconds
Does the following code do it? I haven't managed to find much documentation on the Observable combinators.
val evens = Observable.interval(5.second).filter(_ % 6 != 0)
val odds = Observable.interval(12.second).filter(_ % 5 != 0)
val merged = evens.merge(odds)
merged.subscribe(it => println(it))
Thanks and regards
Marco

Related

Is there a way to use range with Z3 ints in z3py?

I'm relatively new to Z3 and experimenting with it in Python. I've coded a program which returns the order in which different actions are performed, each represented with a number. Z3 returns an integer representing the second at which each action starts.
Now I want to look at the model and see if there is an instant of time where nothing happens. To do this I made a list of only 0's, and I want to set the index to 1 at the times where each action is being executed. For instance, if an action starts at the 5th second and takes 8 seconds to execute, the indexes 5 to 12 would be set to 1. Doing this for all the actions and then looking for 0's in the list would hopefully give me the instants where nothing happens.
The problem is: I would like to write something like this for coding the problem
list_for_check = [0]*total_time
m = s.model()
for action in actions:
    for index in range(m.evaluate(action.number), m.evaluate(action.number) + action.time_it_takes):
        list_for_check[index] = 1
But I get the error:
'IntNumRef' object cannot be interpreted as an integer
I've understood that Z3 isn't returning normal ints or bools in its models, but writing
if m.evaluate(action.boolean):
works, so I'm assuming the if check is overridden in some way, but this doesn't seem to be the case with range. So my question is: is there a way to use range with Z3 ints? Or is there another way to do this?
The problem might also be that action.time_it_takes is an integer, and adding a Z3 int to a "normal" int doesn't work (this happens in the second argument of the range).
I've also tried using int(m.evaluate(action.number)), but it doesn't work.
Thanks in advance :)
When you call evaluate, it returns an IntNumRef, which is z3's internal representation of an integer value. You need to call its as_long() method to convert it to a Python number. Here's an example:
from z3 import *

s = Solver()
a = Int('a')
s.add(a > 4)
s.add(a < 7)

if s.check() == sat:
    m = s.model()
    print("a is %s" % m.evaluate(a))
    print("Iterating from a to a+5:")
    av = m.evaluate(a).as_long()
    for index in range(av, av + 5):
        print(index)
When I run this, I get:
a is 5
Iterating from a to a+5:
5
6
7
8
9
which is exactly what you're trying to achieve.
The as_long() method is part of the z3py API; note that there are similar conversion functions for bit-vectors and rationals as well. You can search the z3py API using the interface at: https://z3prover.github.io/api/html/namespacez3py.html
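For instance, here is a minimal sketch of the analogous conversions for a bit-vector and a real (the variable names and constraints below are purely illustrative, not from the original question):

from z3 import *

s = Solver()
x = BitVec('x', 8)        # an 8-bit bit-vector
r = Real('r')
s.add(x > 10, r == Q(1, 3))
if s.check() == sat:
    m = s.model()
    xv = m.evaluate(x).as_long()        # BitVecNumRef -> Python int
    rv = m.evaluate(r).as_fraction()    # RatNumRef -> fractions.Fraction
    print(xv, rv)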

The Odin Project Stock Picker: struggling to begin solving it in Ruby

I am currently doing the stock picker problem on The Odin Project and I am struggling to even begin to tackle it. I spent a good while trying to turn my thoughts into code, but to no avail, so I looked at another solution for inspiration. What is best_sell = j + (i + 1) doing? I cannot figure out how that chooses the highest sell date after the purchase date.
http://www.theodinproject.com/courses/ruby-programming/lessons/building-blocks?ref=lnav
def stock_picker(arr)
  best_buy = 0
  best_sell = 0
  max_profit = 0
  arr[0..-2].each_with_index do |buy, i|
    arr[(i+1)..-1].each_with_index do |sell, j|
      if (sell - buy) > max_profit
        best_sell = j + (i + 1)
        best_buy = i
        max_profit = sell - buy
      end
    end
  end
  [best_buy, best_sell]
end

puts stock_picker([17,3,6,9,15,8,6,1,10]).inspect
i and j represent indexes in the array. Does that help you understand what the function returns when it completes? (Does it return values or indexes?)
Hint: how might it help you to keep track of the lowest purchase price seen so far as you traverse the array in constructing a more efficient solution?
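To make that hint concrete, here is a rough sketch of the single-pass idea (written in Python purely to illustrate the algorithm; it is not from the thread and would need translating into Ruby for the exercise):

def stock_picker(prices):
    best_buy, best_sell, max_profit = 0, 0, 0
    lowest = 0                              # index of the lowest price seen so far
    for i in range(1, len(prices)):
        if prices[i] < prices[lowest]:
            lowest = i                      # a new cheapest day to buy
        elif prices[i] - prices[lowest] > max_profit:
            best_buy, best_sell = lowest, i
            max_profit = prices[i] - prices[lowest]
    return [best_buy, best_sell]

print(stock_picker([17, 3, 6, 9, 15, 8, 6, 1, 10]))   # [1, 4], the same answer as the nested-loop version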

In Scala, does using variables in a map reduce performance?

Maybe it is a stupid question, but I have this doubt and I cannot find an answer.
If I have a map operation on a list of complex objects and, to make the code more readable, I use intermediate variables inside the map, can the performance change?
For example, these are two versions of the same code:
profilesGroupedWithIds map {
  c =>
    val blockId = c._2
    val entityIds = c._1._2
    val entropy = c._1._1
    if (separatorID < 0) BlockDirty(blockId, entityIds.swap, entropy)
    else BlockClean(blockId, entityIds, entropy)
}
..
profilesGroupedWithIds map {
  c =>
    if (separatorID < 0) BlockDirty(c._2, c._1._2.swap, c._1._1)
    else BlockClean(c._2, c._1._2, c._1._1)
}
As you can see, the first version is more readable than the second one.
But is the efficiency the same? Or do the three variables that I create inside the map have to be removed by the garbage collector, reducing performance (suppose that 'profilesGroupedWithIds' is a very big list)?
Thanks
Regards
Luca
The vals are just aliases for the tuple elements, so the generated Java bytecode will be identical in both cases, and so will the performance.
More importantly, the first variant is much better code since it clearly conveys the intent.
Here is a third variant that avoids accessing the tuple elements _1 and _2 entirely:
profilesGroupedWithIds map {
  case ((entropy, entityIds), blockId) =>
    if (separatorID < 0) BlockDirty(blockId, entityIds.swap, entropy)
    else BlockClean(blockId, entityIds, entropy)
}

Calculate difference or delta between different events in logstash

Say I have a log file looking like this:
# time, count
2016-09-07 23:00:00, 1108731
2016-09-07 23:00:02, 1108733
2016-09-07 23:00:03, 1108734
Now, every row contains the cumulative count of all events that have occurred so far. I would like to use it in Kibana, and the natural way would be to have the count as a deltafied number.
So I expect an effect of:
# time, count, deltaCount
2016-09-07 23:00:00, 1108731, 0
2016-09-07 23:00:02, 1108733, 2
2016-09-07 23:00:03, 1108734, 1
How can I achieve this in Logstash? I know I could edit the files beforehand.
Thanks!
Solution #1: Write your plugin
One way to do it would be to create a plugin; the same problem has been solved that way before. However, the filter that was posted there is not publicly available and, what is worse, it is really only about 5 lines of code.
Solution #2: Ruby code snippet
I have found a solution in this thread on elastic forums: Keeping global variables in LS?!. The title says it all.
To cut a long story short, the solution goes as follows:
filter {
  ...
  ruby {
    init => "@@previous_count = -1"
    code => "
      if (@@previous_count == -1)
        delta = 0
      else
        delta = event.get('count') - @@previous_count
      end
      event.set('requests', delta)
      # remember event for next time
      @@previous_count = event.get('count')
    "
  }
}
It was not that hard after all.

PyMC3: How can I code my custom distribution with observed data better for Theano?

I am attempting to implement a fairly simple model in pymc3. The gist is that I have some data that is generated from a sequence of random choices. The choices can be thought of as a multinomial, and the process selects choices as a function of previous choices.
The overall probability of the categories is modeled with a Dirichlet prior.
The likelihood function must be customized for the data at hand. The data are lists of 0's and 1's that are output from the process. I have successfully made the model in pymc2, which you can find at this blog post. Here is a python function that generates test data for this problem:
ps = [0.2,0.35,0.25,0.15,0.0498,1/5000]

def make(ps):
    out = []
    while len(out) < 5:
        n_spots = 5-len(out)
        sp = sum(ps[:n_spots+1])
        P = [x/sp for x in ps[:n_spots+1]]
        l = np.argwhere(np.random.multinomial(1,P)==1).ravel()[0]
        #if len(out) == 4:
        #    l = np.argwhere(np.random.multinomial(1,ps[:2])==1).ravel()[0]
        out.extend([1]*l)
        if (out and out[-1] == 1 and len(out) < 5) or l == 0:
            out.append(0)
        #print n_spots, l, len(out)
    assert len(out) == 5
    return out
As I'm learning/moving to pymc3, I'm trying to input my data as observed into a custom likelihood function, and I'm running into several issues along the way. It's probably because this is my first experience with Theano, but I'm hoping that someone can give some advice.
Here is my code (using the make function above):
import numpy as np
import pymc3 as pm
from scipy import optimize
import theano.tensor as T
from theano.compile.ops import as_op
from collections import Counter

# This function gets the attributes of the data that are relevant for calculating the likelihood
def scan(value):
    groups = []
    prev = False
    s = 0
    for i in xrange(5):
        if value[i] == 0:
            if prev:
                groups.append((s,5-(i-s)))
                prev = False
                s = 0
            else:
                groups.append((0,5-i))
        else:
            prev = True
            s += 1
    if prev:
        groups.append((s,4-(i-s)))
    return groups

# The likelihood calculation for a single data point
def like1(v,p):
    l = 1
    groups = scan(v)
    for n, s in groups:
        l *= p[n]/p[:s+1].sum()
    return T.log(l)

# my custom likelihood class
class CustomDist(pm.distributions.Discrete):
    def __init__(self, ps, data, *args, **kwargs):
        super(CustomDist, self).__init__(*args, **kwargs)
        self.ps = ps
        self.data = data

    def logp(self,v):
        all_l = 0
        for v, k in self.data.items():
            l = like1(v,self.ps)
            all_l += l*k
        return all_l

# model creation
model = pm.Model()
with model:
    probs = pm.Dirichlet('probs',a=np.array([0.5]*6),shape=6,testval=np.array([1/6.0]*6))
    output = CustomDist("rolls",ps=probs,data=data,observed=True)
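(For reference, the data object used above is never defined in the post; judging from the Counter import and the (v, k) pairs iterated in CustomDist.logp, it is presumably a tally of generated sequences, built with something like the following hypothetical snippet:)

from collections import Counter

# Tally simulated sequences: keys are tuples such as (1, 1, 0, 1, 0) and
# values are how often each sequence occurred, matching the (v, k) pairs
# iterated in CustomDist.logp above.
data = Counter(tuple(make(ps)) for _ in range(10000))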
I am able to find the MAP in about a minute or so (my machine is a Windows 7 box with an i7-4790 @ 3.6 GHz). The MAP matches the input probability vector well, which at least means the model is linked properly.
When I try to sample traces, though, my memory usage skyrockets (up to several GB) and I haven't been patient enough for the model to finish compiling; I've waited 10+ minutes for NUTS or HMC to compile before even starting to trace. The Metropolis stepping works just fine, though (and is much faster than with pymc2).
Am I just being too hopeful for Theano to be able to handle for-loops of non-theano data well? Is there a better way to write this code so that Theano plays well with it, or am I limited because my data is a custom python type and can't be analyzed with array/matrix operations?
Thanks in advance for your advice and feedback. Please let me know what might need clarification!
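One pattern that is often suggested for situations like this (offered here only as a hedged sketch, not as an answer from the original thread) is to do all of the pure-Python looping once, up front, flattening every (n, s) group into plain integer arrays so that logp reduces to a few Theano array operations:

import numpy as np
import theano.tensor as T
import pymc3 as pm

# Flatten every (n, s) group of every observed sequence into integer arrays,
# weighting each group by how often its sequence occurs in the data tally.
n_idx, s_idx, weights = [], [], []
for v, k in data.items():
    for n, s in scan(v):
        n_idx.append(n)
        s_idx.append(s)
        weights.append(k)
n_idx = np.array(n_idx)
s_idx = np.array(s_idx)
weights = np.array(weights, dtype='float64')

with pm.Model() as model:
    probs = pm.Dirichlet('probs', a=np.array([0.5] * 6), shape=6)
    # log(p[n] / p[:s+1].sum()) for every group, expressed as array ops
    cumsums = T.cumsum(probs)
    loglike = T.sum(weights * (T.log(probs[n_idx]) - T.log(cumsums[s_idx])))
    # pm.Potential adds the custom log-likelihood term to the model's joint density
    pm.Potential('rolls', loglike)

Whether this is actually faster would have to be measured, but it avoids building a large symbolic graph out of a Python-level loop over the data.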
