Reset SignalProducer state based on timer and result - frp

I have a SignalProducer producer that asynchronously sends Ints. I can sum the values with
producer.scan(0, +)
Suppose I want to reset the sum to 0 if it is > 10 and no other values have been sent for 1 second. My first attempt looked like this:
producer
    .scan(0, +)
    .flatMap(.latest) { n -> SignalProducer<Int, NoError> in
        if n <= 10 {
            return SignalProducer(value: n)
        } else {
            return SignalProducer.merge([
                SignalProducer(value: n),
                SignalProducer(value: 0).delay(1, on: QueueScheduler.main)
            ])
        }
    }
While this correctly sends 0, it doesn't reset the state in scan. That is, a sequence of 9, 8, long pause, 7 sends 9, 17, 0, 24.
Is there a way to combine these two concepts in a way that correctly resets the state?

I would use startWithSignal to gain access to the produced Signal in order to create a reset trigger to merge with the incoming values. I'm using an enum as my value to indicate whether the value coming into the scan is a reset trigger or an Int to accumulate.
enum ValOrReset {
    case val(Int)
    case reset
}

producer.startWithSignal { signal, _ in
    // Fires whenever the signal has been quiet for 1 second.
    let resetTrigger = signal
        .debounce(TimeInterval(1.0), on: QueueScheduler.main)
        .map { _ in ValOrReset.reset }

    signal
        .map { ValOrReset.val($0) }
        .merge(with: resetTrigger)
        .scan(0) { state, next in
            switch next {
            case .val(let val):
                return state + val
            case .reset:
                return state > 10 ? 0 : state
            }
        }
        .observeValues { val in
            print(val)
        }
}
startWithSignal guarantees that no values are delivered until the closure has returned, which means you can create multiple downstream signals and wire them together without missing any values, even if the producer sends values synchronously as soon as it is started.
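For example, here is a minimal sketch of that guarantee (assuming a ReactiveSwift version that still uses NoError, as in the question; the producer and its values are hypothetical): a producer that emits synchronously on start still delivers every value to the observer wired up inside the closure.

import ReactiveSwift
import Result // NoError lives in the Result module in those versions

// Hypothetical producer that sends its values synchronously when started.
let syncProducer = SignalProducer<Int, NoError> { observer, _ in
    observer.send(value: 1)
    observer.send(value: 2)
    observer.sendCompleted()
}

syncProducer.startWithSignal { signal, _ in
    // Nothing has been delivered yet, so this observer sees both values.
    signal.observeValues { print("received \($0)") } // prints 1, then 2
}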

Related

Solving problem using Zoho Deluge script in custom function

I need an output as follows for 1200+ accounts:
accounts_count = {
"a": 120,
"b": 45,
"z": 220
}
It should take the first letter of each account name and count how many accounts start with that letter.
How can I solve it using Zoho Deluge script in a custom function?
The hard part is getting all of the records, because of the 200-record-per-call limit in Zoho.
I'm not going to provide the full solution here, just the harder parts.
// this is the maximum number of records that you can query at once
max = 200;
// set to a value slightly higher than you need
max_contacts = 1500;
// round up one
n = max_contacts / max + 1;
// This is how many times you will loop (convert to int)
iterations = n.toNumber();
// this is a zoho hack to create a range of the length we want
counter = leftpad("1",iterations).replaceAll(" ","1,").toList();
// set paging
page = 1;
// TODO: initialize an alphabet map
//result = Map( {a: 0, b:0....});
// set to true when done so you're not wasting API Calls
done = false;
for each i in counter
{
    // prevent extra crm calls
    if(!done)
    {
        response = zoho.crm.getRecords("Contacts",page,max);
        if(!isEmpty(response))
        {
            if(isNull(response.get(0)) && !isNull(response.get("code")))
            {
                info "Error:";
                info response;
            }
            else
            {
                for each user in response
                {
                    // this is the easy part: get the first letter (lowercase)
                    // get the current value and add one
                    // set the value again
                    // set in result map
                }
            }
        }
        else
        {
            done = true;
        }
    }
    // increment the page
    page = page + 1;
}

How do I loop through an array sequentially (not in parallel) in RxSwift?

I have a list of objects I need to send to a server, and I would like to do this one after the other (not in parallel). After all objects have been sent without an error, I want to run additional Observables which do different things.
let objects = [1, 2, 3]
let _ = Observable.from(objects)
    .flatMap { object -> Observable<Void> in
        return Observable.create { observer in
            print("Starting request \(object)")
            DispatchQueue.main.asyncAfter(deadline: .now() + 2) { // one request takes ~2sec
                print("Request \(object) finished")
                observer.onNext(Void())
                observer.onCompleted()
            }
            return Disposables.create()
        }
    }
    .flatMap { result -> Observable<Void> in
        print("Do something else (but only once)")
        return Observable.just(Void())
    }
    .subscribe(
        onNext: {
            print("Next")
        },
        onCompleted: {
            print("Done")
        }
    )
What I get is:
Starting request 1
Starting request 2
Starting request 3
Request 1 finished
Do something else (but only once)
Next
Request 2 finished
Do something else (but only once)
Next
Request 3 finished
Do something else (but only once)
Next
Done
The whole process ends after 2 seconds. What I want is:
Starting request 1
Request 1 finished
Starting request 2
Request 2 finished
Starting request 3
Request 3 finished
Do something else (but only once)
Next
Done
The whole sequence should end after 6 seconds (because the requests are not executed in parallel).
I got this to work with a recursive function, but with lots of requests that ends in a deep recursion stack, which I would like to avoid.
Use concatMap instead of flatMap in order to send them one at a time instead of all at once. Learn more here:
RxSwift’s Many Faces of FlatMap
Then to do something just once afterwards, use toArray(). Here is a complete example:
let objects = [1, 2, 3]
_ = Observable.from(objects)
    .concatMap { object -> Observable<Void> in
        return Observable.just(())
            .debug("Starting Request \(object)")
            .delay(.seconds(2), scheduler: MainScheduler.instance)
            .debug("Request \(object) finished")
    }
    .toArray()
    .flatMap { results -> Single<Void> in
        print("Do something else (but only once)")
        return Single.just(())
    }
    .subscribe(
        onSuccess: { print("done") },
        onError: { print("error", $0) }
    )
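If you would rather keep the Observable.create-based request from the question, a sketch of the same pattern (the 2-second asyncAfter stands in for the real network call, exactly as in the question) could look like this:

import Foundation
import RxSwift

let objects = [1, 2, 3]

_ = Observable.from(objects)
    .concatMap { object -> Observable<Void> in
        // concatMap subscribes to the next inner observable only after the
        // previous one completes, so the requests run strictly one at a time.
        Observable.create { observer in
            print("Starting request \(object)")
            DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
                print("Request \(object) finished")
                observer.onNext(())
                observer.onCompleted()
            }
            return Disposables.create()
        }
    }
    .toArray() // emits once, after every request has completed
    .subscribe(
        onSuccess: { _ in
            print("Do something else (but only once)")
            print("Done")
        },
        onError: { print("error", $0) }
    )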

Kotlin: how to dynamically change the for loop pace

Sometimes, based on some condition, it may want to jump (or move forward) a few steps inside the for loop.
How do I do that in Kotlin?
A simplified use case:
val dataArray = arrayOf(1, 2, 3......)

/**
 * Start from the index to process some data; return how many items have been
 * consumed.
 */
fun processData_1(startIndex: Int): Int {
    // process dataArray starting from the index of startIndex
    // return how many items it has processed
}

fun processData_2(startIndex: Int): Int {
    // process dataArray starting from the index of startIndex
    // return how many items it has processed
}

In Java it could be:

for (int i = 0; i < dataArray.length - 1; i++) {
    int processed = processData_1(i);
    i += processed; // jump a few steps past what has been processed, then start the 2nd process
    if (i < dataArray.length - 1) {
        processed = processData_2(i);
        i += processed;
    }
}
How do I do it in Kotlin?

for (i in array.indices) {
    val processed = processData(i)
    // todo
}
With while:

var i = 0
while (i < dataArray.size - 1) {
    var processed = processData_1(i)
    i += processed // jump a few steps past what has been processed, then start the 2nd process
    if (i < dataArray.size - 1) {
        processed = processData_2(i)
        i += processed
    }
    i++
}
You can do that with continue as stated in the Kotlin docs here: https://kotlinlang.org/docs/reference/returns.html
Example:
val names = arrayOf("james", "john", "jim", "jacob", "johan")
for (name in names) {
    if (name.length <= 4) continue
    println(name)
}
This would only print names longer than 4 characters (as it skips names with a length of 4 and below)
Edit: continue only skips one iteration at a time. So if you want to skip multiple iterations, you could store the process state somewhere else and check it on each iteration.
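For example, a minimal sketch of that approach (processData_1 is stubbed out here, and the skip counter is only an illustration, not part of the original code):

// Stub standing in for the asker's processData_1: returns how many further
// elements it consumed starting at startIndex.
fun processData_1(startIndex: Int): Int = 1

fun processAll(dataArray: IntArray) {
    var skip = 0
    for (i in dataArray.indices) {
        if (skip > 0) {
            // this index was already consumed by an earlier call, so skip it
            skip--
            continue
        }
        skip = processData_1(i)
    }
}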

rw_semaphore's negative count value

I am debugging a kernel crash dump. There seems to be a problem with one process that was trying to memory-map a new region: it was not able to acquire the memory map semaphore.
When I looked into the process's mm_struct and printed its contents, I saw that its struct rw_semaphore mmap_sem looked as shown below. Now, does the value of count seem suspicious? It is negative, as if there were a race condition where it was decremented twice by two different threads after checking for zero.
mmap_sem = {
  count = -4294967295,
  wait_lock = {
    {
      rlock = {
        raw_lock = {
          slock = 262148
        }
      }
    }
  },
  wait_list = {
    next = 0xffff8801f0113e48,
    prev = 0xffff8801f0113e48
  }
},
Sorry for the confusion. I thought crash pulled in the correct data types and used them when printing out all the values.
It looks like the crash utility does not read the count member as an int.
When I print it as an int, I get the correct value.
crash> p (int) (((struct mm_struct *) 0xffff8801f15fa540)->mmap_sem).count
$13 = 1
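To see why the two printouts are consistent, here is a small stand-alone illustration (plain user-space C, not kernel code): the 64-bit value that crash printed is 0xFFFFFFFF00000001 in two's complement, and its low 32 bits are 1.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int64_t raw = -4294967295LL;  /* what crash printed: 0xFFFFFFFF00000001 */
    int32_t low = (int32_t)raw;   /* keep only the low 32 bits, as "p (int)" does */
    printf("raw = %lld, low 32 bits = %d\n", (long long)raw, low);  /* low = 1 */
    return 0;
}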

Scala stateful actor, recursive calling faster than using vars?

Sample code below. I'm a little curious why MyActor is faster than MyActor2. MyActor recursively calls process/react and keeps state in the function parameters, whereas MyActor2 keeps state in vars. MyActor even has the extra overhead of tupling the state, but still runs faster. I'm wondering if there is a good explanation for this, or if maybe I'm doing something "wrong".
I realize the performance difference is not significant, but the fact that it is there and consistent makes me curious what's going on here.
Ignoring the first two runs as warmup, I get:
MyActor:
559
511
544
529
vs.
MyActor2:
647
613
654
610
import scala.actors._

object Const {
  val NUM = 100000
  val NM1 = NUM - 1
}

trait Send[MessageType] {
  def send(msg: MessageType)
}

// Test 1 using recursive calls to maintain state
abstract class StatefulTypedActor[MessageType, StateType](val initialState: StateType) extends Actor with Send[MessageType] {
  def process(state: StateType, message: MessageType): StateType

  def act = proc(initialState)

  def send(message: MessageType) = {
    this ! message
  }

  private def proc(state: StateType) {
    react {
      case msg: MessageType => proc(process(state, msg))
    }
  }
}

object MyActor extends StatefulTypedActor[Int, (Int, Long)]((0, 0)) {
  override def process(state: (Int, Long), input: Int) = input match {
    case 0 =>
      (1, System.currentTimeMillis())
    case input: Int =>
      state match {
        case (Const.NM1, start) =>
          println((System.currentTimeMillis() - start))
          (Const.NUM, start)
        case (s, start) =>
          (s + 1, start)
      }
  }
}

// Test 2 using vars to maintain state
object MyActor2 extends Actor with Send[Int] {
  private var state = 0
  private var strt = 0: Long

  def send(message: Int) = {
    this ! message
  }

  def act =
    loop {
      react {
        case 0 =>
          state = 1
          strt = System.currentTimeMillis()
        case input: Int =>
          state match {
            case Const.NM1 =>
              println((System.currentTimeMillis() - strt))
              state += 1
            case s =>
              state += 1
          }
      }
    }
}

// main: Run testing
object TestActors {
  def main(args: Array[String]): Unit = {
    val a = MyActor
    // val a = MyActor2
    a.start()
    testIt(a)
  }

  def testIt(a: Send[Int]) {
    for (_ <- 0 to 5) {
      for (i <- 0 to Const.NUM) {
        a send i
      }
    }
  }
}
EDIT: Based on Vasil's response, I removed the loop and tried it again. And then MyActor2, based on vars, leapfrogged and is now maybe around 10% faster. So... the lesson is: if you are confident that you won't end up with a stack-overflowing backlog of messages, and you want to squeeze out every last bit of performance, don't use loop and just call the act() method recursively.
Change for MyActor2:

override def act() =
  react {
    case 0 =>
      state = 1
      strt = System.currentTimeMillis()
      act()
    case input: Int =>
      state match {
        case Const.NM1 =>
          println((System.currentTimeMillis() - strt))
          state += 1
        case s =>
          state += 1
      }
      act()
  }
Such results are caused by the specifics of your benchmark (a lot of small messages that fill the actor's mailbox quicker than it can handle them).
Generally, the workflow of react is the following:
The actor scans the mailbox;
if it finds a message, it schedules the execution;
when the scheduling completes, or when there are no messages in the mailbox, the actor suspends (Actor.suspendException is thrown).
In the first case, when the handler finishes processing the message, execution proceeds straight to the react method, and, as long as there are lots of messages in the mailbox, the actor immediately schedules the next message to execute, and only after that suspends.
In the second case, loop schedules the execution of react in order to prevent a stack overflow (which might be your case with Actor #1, because tail recursion in process is not optimized), and thus execution doesn't proceed to react immediately, as in the first case. That's where the millis are lost.
UPDATE (taken from here):
Using loop instead of recursive react effectively doubles the number of tasks that the thread pool has to execute in order to accomplish the same amount of work, which in turn makes it so any overhead in the scheduler is far more pronounced when using loop.
Just a wild stab in the dark: it might be due to the exception thrown by react in order to escape the loop. Exception creation is quite heavy. However, I don't know how often it does that, but that should be possible to check with a catch and a counter.
The overhead in your test depends heavily on the number of threads that are present (try using only one thread with scala -Dactors.corePoolSize=1!). I'm finding it difficult to figure out exactly where the difference arises; the only real difference is that in one case you use loop and in the other you do not. loop does do a fair bit of work, since it repeatedly creates function objects using "andThen" rather than iterating. I'm not sure whether this is enough to explain the difference, especially in light of the heavy usage by scala.actors.Scheduler$.impl and ExceptionBlob.
