RxJS: How to create an event manager with a buffer that flushes based on multiple conditions

I have a requirement to create an event manager with a buffer that flushes if one of 3 criteria is met:
2 seconds pass
50 events are received
Flush on demand, if requested by the user
All criteria reset when the buffer flushes (reset the 2-second timer, reset the 50-event count, etc.).
This is what I've implemented so far, and it seems to be working, but I'm wondering if there's a better way to achieve this requirement.
import { interval, merge, Subject, Subscription } from "rxjs";
import { bufferWhen, filter, tap } from "rxjs/operators";

class Foo {
  private eventListener: Subject<string> = new Subject();
  private eventForceFlushListener: Subject<void> = new Subject();
  private eventBufferSizeListener: Subject<void> = new Subject();
  private maxBufferSize = 50;
  private currentBufferSize = 0;

  /**
   * Buffer that will flush if one of the 3 criteria is met:
   * - 50 texts are received
   * - 2 seconds pass
   * - Force flush by the user
   */
  private eventBufferOperator = () =>
    merge(interval(2 * 1000), this.eventForceFlushListener, this.eventBufferSizeListener);

  /**
   * Flush the buffer if requested by the user (for example, flush the buffer before the app
   * closes so we don't lose buffered texts).
   */
  public forceFlush() {
    this.eventForceFlushListener.next();
  }

  /**
   * Method used by users to emit texts to the listener.
   */
  public emitText(text: string) {
    this.eventListener.next(text);
    this.currentBufferSize = this.currentBufferSize + 1;
    if (this.currentBufferSize == this.maxBufferSize) {
      // flush all events when maxBufferSize is reached
      this.eventBufferSizeListener.next();
      // the buffer size counter is reset below, in the subscribe callback
    }
  }

  public subscribeToEventListener() {
    const eventListenerSubscription = this.eventListener
      .pipe(
        tap((text) => text.trim()),
        filter((text) => true),
        bufferWhen(this.eventBufferOperator),
        filter((events) => !!events.length)
      )
      .subscribe((x) => {
        console.log(x);
        this.currentBufferSize = 0; // reset the buffer size counter
      });
    return eventListenerSubscription;
  }
}
Users can then use this event manager as follows:
const eventManager = new Foo();
eventManager.subscribeToEventListener();
eventManager.emitText('message1');
eventManager.emitText('message2');

Here is a demo where the buffer flushes if one of these criteria is met:
5 seconds pass
5 events are received
Flush on demand if requested by the user
const { race, Subject, take, share, buffer, tap, bufferCount, bufferTime, startWith, exhaustMap, map } = rxjs;

const observer = (str) => ({
  subscribe: () => console.log(`${str} -> subscribe`),
  next: () => console.log(`${str} -> next`),
  unsubscribe: () => console.log(`${str} -> unsubscribe`),
});

const event$ = new Subject();
const share$ = event$.pipe(map((_, i) => i + 1), share());
const flush$ = new Subject();

const trigger$ = flush$.pipe(tap(observer('flush$')));
const bufferSize$ = share$.pipe(startWith(null), bufferCount(5), tap(observer('BufferSize 5')));
const bufferTime$ = share$.pipe(bufferTime(5000), tap(observer('5 Sec')));

const race$ = race(bufferTime$, trigger$, bufferSize$).pipe(take(1));
const buffer$ = share$.pipe(exhaustMap(() => race$));

share$.pipe(buffer(buffer$)).subscribe((x) => console.log(x));
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/7.5.6/rxjs.umd.min.js"></script>
<button onclick="event$.next()">event</button>
<button onclick="flush$.next()">flush</button>
https://stackblitz.com/edit/rxjs-5qgaeq

My final answer: combine the window and bufferTime operators.
const bufferBy$ = new Subject<void>();
const maxBufferSize = 3;
const bufferTimeSpan = 5000;
source$.pipe(
  window(bufferBy$),
  mergeMap(bufferTime(bufferTimeSpan, null, maxBufferSize)),
).subscribe(...);
https://stackblitz.com/edit/rxjs-hoafkr
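
For completeness, here is a minimal self-contained sketch of that window + bufferTime combination applied to the original requirement. It assumes a source$ subject for the texts; the empty-batch filter and the demo emissions at the end are my own additions for illustration, not part of the answer above.

import { Subject } from "rxjs";
import { window, mergeMap, bufferTime, filter } from "rxjs/operators";

const source$ = new Subject<string>();
const bufferBy$ = new Subject<void>(); // force-flush trigger
const maxBufferSize = 3;
const bufferTimeSpan = 5000;

source$
  .pipe(
    // start a new window whenever the user forces a flush
    window(bufferBy$),
    // within each window, flush every 5 seconds or every 3 items, whichever comes first
    mergeMap(bufferTime(bufferTimeSpan, null, maxBufferSize)),
    // drop the empty batches that bufferTime emits while the source is idle
    filter((batch) => batch.length > 0)
  )
  .subscribe((batch) => console.log(batch));

source$.next("message1");
source$.next("message2");
bufferBy$.next(); // force flush: logs ["message1", "message2"]

Closing the current window completes its inner bufferTime, which is what makes the on-demand flush emit immediately.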

Related

How to seek to a position in a song Discord.js?

I am facing some difficulty with seeking to a specified timestamp in the current song. I have separate files for all my commands. I want to create a seek.js file which takes a specified time as input and then passes it to the play.js file (which plays the current song in the queue), but I can't seem to find a way to do this.
This is my play command.
const { Collector } = require("discord.js");
const ytdlDiscord = require("ytdl-core-discord");
//const play = require("../commands/play");

module.exports = {
  async play(song, message) {
    const queue = message.client.queue.get(message.guild.id);
    if (!song) {
      setTimeout(function () {
        if (!queue.connection.dispatcher && message.guild.me.voice.channel) {
          queue.channel.leave();
          queue.textChannel.send(`**Cadenza** left successfully`).catch(console.error);
        } else return;
      }, 120000);
      message.client.queue.delete(message.guild.id);
      return queue.textChannel.send(`**Music Queue Ended**`);
    }

    let stream = await ytdlDiscord(song.url, { filter: 'audioonly', quality: 'highestaudio', highWaterMark: 1 << 25 });
    let streamType = song.url.includes("youtube.com") ? "opus" : "ogg/opus";

    queue.connection.on("disconnect", () => message.client.queue.delete(message.guild.id));

    const dispatcher = queue.connection
      .play(stream, { type: streamType, highWaterMark: 1 })
      .on("finish", () => {
        if (queue.loop) {
          let last = queue.songs.shift();
          queue.songs.push(last);
          module.exports.play(queue.songs[0], message);
        } else {
          queue.songs.shift();
          module.exports.play(queue.songs[0], message);
        }
      })
      .on("error", (err) => {
        console.error(err);
        queue.songs.shift();
        module.exports.play(queue.songs[0], message);
      });

    dispatcher.setVolumeLogarithmic(queue.volume / 100);
    queue.textChannel.send(`Started Playing **${song.title}**`);
  }
};
And this is my seek command:
const { play } = require("../include/play");

function timeConvert(str) {
  const t = str.split(':');
  let s = 0, m = 1;
  while (t.length > 0) {
    s += m * parseInt(t.pop(), 10);
    m = m * 60;
  }
  return s;
}

module.exports = {
  name: 'seek',
  description: 'Seeks to a certain point in the current track.',
  execute(message, args) {
    const queue = message.client.queue.get(message.guild.id);
    if (!queue) return message.channel.send("There is no song playing.").catch(console.error);
    queue.playing = true;
    let time = timeConvert(args[0]);
    if (time > queue.songs[0].duration)
      return message.channel.send(`**Input a valid time**`);
    else {
      let time = timeConvert(args[0]) * 1000;
      // main code here
    }
  }
}
How can I pass the time variable to play() so that the current song seeks to that amount?

Calling n times the same observable

I have an HTTP GET web service which I need to call n times, feeding in the return value of my last call each time (the first time there is a default value). How can I do it?
You can use the expand operator from RxJS. It will loop until it is supplied with an empty() observable. Here is an example:
import { empty, Observable } from 'rxjs';
import { expand } from 'rxjs/operators';

private service; // <-- service that gives us the observable for some value
private initialValue: number = 5;
private counter: number = 0;
private source$: Observable<number> = this.service.getSourceWithValue(this.initialValue);

this.source$.pipe(
  expand(value => this.isCounterExceeded()
    ? empty()
    : this.incrementCounterAndGetNextSourceObservableWithValue(value))
);
// if the counter is not exceeded, we increment it and create another
// observable based on the current value. If it is exceeded, we stop the loop by
// returning the empty() observable

private incrementCounterAndGetNextSourceObservableWithValue(value: number): Observable<number> {
  this.counter++;
  return this.service.getSourceWithValue(value);
}

private isCounterExceeded() {
  return this.counter >= 4;
}
This sounds like you could use expand:
const N = 4;
const source = of(1).pipe(
  expand((previous, index) => index === N ? EMPTY : of(previous * 2))
);
source.subscribe(console.log);
Live demo: https://stackblitz.com/edit/rxjs-fcpin2
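
To tie this back to the HTTP use case, here is a hedged sketch that simulates the web service with a delayed observable (getNext is a hypothetical stand-in, not a real API) and collects every response, starting from a default value:

import { of, EMPTY, Observable } from "rxjs";
import { expand, delay, reduce } from "rxjs/operators";

// Hypothetical stand-in for the HTTP GET: the next call takes the previous response as input.
const getNext = (value: number): Observable<number> => of(value + 10).pipe(delay(100));

const totalCalls = 4;

getNext(5) // first call with the default value
  .pipe(
    // keep calling the service, feeding each response back in, until totalCalls calls have been made
    expand((previous, index) => (index < totalCalls - 1 ? getNext(previous) : EMPTY)),
    // collect every response into a single array
    reduce((acc: number[], v) => [...acc, v], [])
  )
  .subscribe((all) => console.log(all)); // [15, 25, 35, 45]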

Play and Pause interval rxjs

I'm trying to implement a play and pause button using the RxJS library.
const play$ = fromEvent(playerImplementation, PLAYER_EVENTS.PLAY).pipe(mapTo(true));
const pause$ = fromEvent(playerImplementation, PLAYER_EVENTS.PAUSE).pipe(mapTo(false));
const waiting$ = fromEvent(playerImplementation, PLAYER_EVENTS.WAITING).pipe(mapTo(false));
let counterTime = 0;
const currentTime$ = interval(30).pipe(
  map(() => counterTime += 30));
const player$ = merge(play$, pause$, waiting$).pipe(
  switchMap(value => (value ? currentTime$ : EMPTY)));

// DIFFERENCE IN RESULTS
currentTime$.subscribe((v) => console.log("Regular Count " + v)); // correctly gets 30, 60, 90, 120...
player$.subscribe((v) => console.log("Condition Count " + v)); // wrongly gets 30, 150, 270, 390

Can anyone help me understand why there is a difference between the results?
This happened because I used several subscribers for one observable (the player$ observable). I solved it by using a ReplaySubject instead of a plain Observable and by using multicasting, so several subscribers can handle the events without changing the value.
const play$ = fromEvent(playerImplementation, PLAYER_EVENTS.PLAY).pipe(mapTo(true));
const pause$ = fromEvent(playerImplementation, PLAYER_EVENTS.PAUSE).pipe(mapTo(false));
const waiting$ = fromEvent(playerImplementation, PLAYER_EVENTS.WAITING).pipe(mapTo(false));
let timeCounter = 0;
const source = Observable.create((obs: Observer<number>) => {
  interval(30).pipe(
    map(() => timeCounter += 30)).subscribe(obs);
  return () => {};
});

// Cast the observable to a subject for distributing to several subscribers
const currentTime$ = source.pipe(multicast(() => new ReplaySubject(5))).refCount();

const player$ = merge(play$, pause$, waiting$).pipe(
  switchMap(value => value ? currentTime$ : EMPTY));
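
As a side note, a variation of the same idea (my own sketch, not part of the answer above) is to keep the elapsed time inside the stream with scan instead of a module-level counter variable, so additional subscriptions cannot distort the count. The playClick$/pauseClick$ subjects stand in for the real player events:

import { interval, merge, Subject, EMPTY } from "rxjs";
import { mapTo, switchMap, scan } from "rxjs/operators";

// Hypothetical play/pause triggers standing in for the player events.
const playClick$ = new Subject<void>();
const pauseClick$ = new Subject<void>();

const elapsed$ = merge(playClick$.pipe(mapTo(true)), pauseClick$.pipe(mapTo(false))).pipe(
  // while playing, tick every 30 ms; while paused, emit nothing
  switchMap((playing) => (playing ? interval(30) : EMPTY)),
  // accumulate elapsed time in the stream itself rather than in a shared mutable variable
  scan((total) => total + 30, 0)
);

elapsed$.subscribe((t) => console.log("Elapsed " + t)); // 30, 60, 90, ... and it resumes after a pause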

In Rx instead of only getting the last debounced object, can I get the complete sequence?

I want to know if one of the debounced objects was a green ball. Filtering for only green balls before or after the debounce leads to incorrect behavior.
You can use the buffer operator together with the debounce operator. Here is a very basic example:
// This is our event stream. In this example we only track mouseup events on the document
const move$ = Observable.fromEvent(document, 'mouseup');
// We want to create a debounced version of the initial stream
const debounce$ = move$.debounceTime(1000);
// Now create the buffered stream from the initial move$ stream.
// The debounce$ stream can be used to emit the values that are in the buffer
const buffered$ = move$.buffer(debounce$);
// Subscribe to your buffered stream
buffered$.subscribe(res => console.log('Buffered Result: ', res));
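
Applied to the green-ball scenario, the same buffer-plus-debounce idea might look like this in pipe syntax (the ball$ subject and the colour codes are illustrative additions, not from the answer above):

import { Subject } from "rxjs";
import { buffer, debounceTime, map } from "rxjs/operators";

// Hypothetical ball stream emitting 'B' (blue), 'G' (green) or 'R' (red)
const ball$ = new Subject<string>();

// Flush the buffer once the stream has been quiet for one second
const debounced$ = ball$.pipe(debounceTime(1000));

ball$
  .pipe(
    buffer(debounced$), // every ball seen since the previous flush
    map((batch) => ({ batch, hasGreen: batch.includes("G") }))
  )
  .subscribe((result) => console.log(result));

ball$.next("B");
ball$.next("G"); // after 1 s of silence: { batch: ["B", "G"], hasGreen: true }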
If I understand correctly what you want to achieve, you probably need to build an Observable which emits some sort of object which contains both the source value (i.e. blue, red, green in your case) as well as a flag that indicates whether or not there was a green in the debounced values.
If this is true, you can try to code along these lines
const s = new Subject<string>();
setTimeout(() => s.next('B'), 100);
setTimeout(() => s.next('G'), 1100);
setTimeout(() => s.next('B'), 1200);
setTimeout(() => s.next('G'), 1300);
setTimeout(() => s.next('R'), 1400);
setTimeout(() => s.next('B'), 2400);
let hasGreen = false;
s
  .do(data => hasGreen = hasGreen || data === 'G')
  .debounceTime(500)
  .map(data => ({data, hasGreen})) // this map has to come before the following do
  .do(() => hasGreen = false)
  .subscribe(data => console.log(data));
Be careful about the sequence. In particular, you have to put the map operator, which creates the object you want to emit, before the do that resets your variable.
This could be done with a non-trivial set of operators and side-effecting a flow by introducing extra channels:
import java.util.Queue;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

import org.junit.Test;

import io.reactivex.*;
import io.reactivex.functions.Consumer;
import io.reactivex.schedulers.*;
import io.reactivex.subjects.PublishSubject;

public class DebounceTimeDrop {

    @Test
    public void test() {
        PublishSubject<Integer> source = PublishSubject.create();
        TestScheduler scheduler = new TestScheduler();

        source.compose(debounceTime(10, TimeUnit.MILLISECONDS, scheduler, v -> {
            System.out.println(
                "Dropped: " + v + " # T=" + scheduler.now(TimeUnit.MILLISECONDS));
        }))
        .subscribe(v -> System.out.println(
                "Passed: " + v + " # T=" + scheduler.now(TimeUnit.MILLISECONDS)),
            Throwable::printStackTrace,
            () -> System.out.println(
                "Done " + " # T=" + scheduler.now(TimeUnit.MILLISECONDS)));

        source.onNext(1);
        scheduler.advanceTimeBy(10, TimeUnit.MILLISECONDS);
        scheduler.advanceTimeBy(20, TimeUnit.MILLISECONDS);

        source.onNext(2);
        scheduler.advanceTimeBy(1, TimeUnit.MILLISECONDS);
        source.onNext(3);
        scheduler.advanceTimeBy(1, TimeUnit.MILLISECONDS);
        source.onNext(4);
        scheduler.advanceTimeBy(1, TimeUnit.MILLISECONDS);
        source.onNext(5);
        scheduler.advanceTimeBy(10, TimeUnit.MILLISECONDS);
        scheduler.advanceTimeBy(20, TimeUnit.MILLISECONDS);

        source.onNext(6);
        scheduler.advanceTimeBy(10, TimeUnit.MILLISECONDS);
        scheduler.advanceTimeBy(20, TimeUnit.MILLISECONDS);

        source.onComplete();
    }

    public static <T> ObservableTransformer<T, T> debounceTime(
            long time, TimeUnit unit, Scheduler scheduler,
            Consumer<? super T> dropped) {
        return o -> Observable.<T>defer(() -> {
            AtomicLong index = new AtomicLong();
            Queue<Timed<T>> queue = new ConcurrentLinkedQueue<>();
            return o.map(v -> {
                Timed<T> t = new Timed<>(v,
                    index.getAndIncrement(), TimeUnit.NANOSECONDS);
                queue.offer(t);
                return t;
            })
            .debounce(time, unit, scheduler)
            .map(v -> {
                while (!queue.isEmpty()) {
                    Timed<T> t = queue.peek();
                    if (t.time() < v.time()) {
                        queue.poll();
                        dropped.accept(t.value());
                    } else if (t == v) {
                        queue.poll();
                        break;
                    }
                }
                return v.value();
            })
            .doOnComplete(() -> {
                while (!queue.isEmpty()) {
                    dropped.accept(queue.poll().value());
                }
            });
        });
    }
}
prints
Passed: 1 # T=10
Dropped: 2 # T=43
Dropped: 3 # T=43
Dropped: 4 # T=43
Passed: 5 # T=43
Passed: 6 # T=73
Done # T=93

How to route, group, or otherwise split up messages into consistent sets using TPL Dataflow

I'm new to TPL Dataflow and I'm looking for a construct which will allow splitting up a list of source messages for evenly distributed parallel processing, while maintaining the order of the messages through the individual pipelines. Is there a specific Block or concept within the Dataflow API that can be used to accomplish this, or is it more a matter of providing glue code or custom Blocks between existing Blocks?
For those familiar with Akka.NET I'm looking for functionality similar to the ConsistentHashing router which allow sending messages to a single router which then forwards these messages on to individual routees to be handled.
Synchronous example:
var count = 100000;
var processingGroups = 5;
var source = Enumerable.Range(1, count);

// Distribute source elements consistently and evenly into a specified set of groups (ex. 5)
var distributed = source.GroupBy(s => s % processingGroups);

// Within each of the 5 processing groups, go through each item and add 3 to it
var transformed = distributed.Select(d => d.Select(i => i + 3).ToArray());
List<int[]> result = transformed.ToList();

Check.That(result.Count).IsEqualTo(processingGroups);
for (int i = 0; i < result.Count; i++)
{
    var outputGroup = result[i];
    var expectedRange = Enumerable.Range(i + 1, count / processingGroups).Select((e, index) => e + (index * (processingGroups - 1)) + 3);
    Check.That(outputGroup).ContainsExactly(expectedRange);
}
In general I don't think what you're looking for is pre-made in Dataflow as it may be with a ConsistentHashing router. However, by adding an id to the pieces of data you wish to flow, you can process them in any order, in parallel, and reorder them when the processing finishes.
public class Message
{
    public int MessageId { get; set; }
    public int GroupId { get; set; }
    public int Value { get; set; }
}

public class MessageProcessing
{
    public void abc()
    {
        var count = 10000;
        var groups = 5;
        var source = Enumerable.Range(0, count);

        //buffer all input
        var buffer = new BufferBlock<IEnumerable<int>>();

        //split each input enumerable into processing groups
        var messageProducer = new TransformManyBlock<IEnumerable<int>, Message>(ints =>
            ints.Select((i, index) => new Message() { MessageId = index, GroupId = index % groups, Value = i }).ToList());

        //process each message; one action block may process any group id in any order
        var processMessage = new TransformBlock<Message, Message>(msg =>
        {
            msg.Value++;
            return msg;
        }, new ExecutionDataflowBlockOptions()
        {
            MaxDegreeOfParallelism = groups
        });

        //output of processed message values
        int[] output = new int[count];

        //insert messages into the array in the order they started in
        var regroup = new ActionBlock<Message>(msg => output[msg.MessageId] = msg.Value,
            new ExecutionDataflowBlockOptions()
            {
                MaxDegreeOfParallelism = 1
            });

        //link the blocks so data flows buffer -> producer -> processor -> regroup
        buffer.LinkTo(messageProducer, new DataflowLinkOptions() { PropagateCompletion = true });
        messageProducer.LinkTo(processMessage, new DataflowLinkOptions() { PropagateCompletion = true });
        processMessage.LinkTo(regroup, new DataflowLinkOptions() { PropagateCompletion = true });
    }
}
In the example the GroupId of a message isn't used, but it could be used in a more complete example for coordinating groups of messages. Also, handling follow-up posts to the BufferBlock could be done by changing the output array to a List and setting up a corresponding list element each time an enumerable of integers is posted to the buffer block. Depending on your exact use, you may need to support multiple users of the output, and this can be folded back into the flow.
You can dynamically create a pipeline by linking the blocks to each other based on a predicate:
var count = 100;
var processingGroups = 5;
var source = Enumerable.Range(1, count);
var buffer = new BufferBlock<int>();
var consumer1 = new ActionBlock<int>(i => { });
var consumer2 = new ActionBlock<int>(i => { });
var consumer3 = new ActionBlock<int>(i => { });
var consumer4 = new ActionBlock<int>(i => { Console.WriteLine(i); });
var consumer5 = new ActionBlock<int>(i => { });
buffer.LinkTo(consumer1, i => i % 5 == 1);
buffer.LinkTo(consumer2, i => i % 5 == 2);
buffer.LinkTo(consumer3, i => i % 5 == 3);
buffer.LinkTo(consumer4, i => i % 5 == 4);
buffer.LinkTo(consumer5);
foreach (var i in source)
{
    buffer.Post(i);
    // consider the async option if you are able to use it
    // await buffer.SendAsync(i);
}
buffer.Complete();
Console.ReadLine();
The code above will write out only the numbers from the 4th group, processing the other groups silently, but I hope you get the idea. It is general practice to link the block to at least one consumer without a filter, so that messages aren't dropped when no consumer accepts them; you can do this if you don't have a default handler (NullTarget<int> simply ignores all the messages it receives):
buffer.LinkTo(DataflowBlock.NullTarget<int>());
The downside of this is the flip side of its advantage: you have to provide the predicates yourself, as there are no built-in structures for this. However, it can still be done.
