Delay release from buffer until promises after it are done - rxjs

Here is my pseudo-code:
const s = new Subject();

s.pipe(
  bufferCount(1),
  concatMap(() => new Promise(/* step 1 for the buffered value */)),
  concatMap(() => new Promise(/* step 2 */)),
  concatMap(() => new Promise(/* step 3 */))
);

s.next('a');
s.next('b');
s.next('c');
I want "b" and "c" held in the buffer UNTIL "a" is done processing.
Is this possible?

I suppose you want a source Observable to trigger some task, and only have the next value from the source trigger the next task once the previous task has completed. You can achieve this by zipping your source with a second trigger (startNext) that indicates the previous task is done, so the next value from the source can be emitted and start the next task.
import { Subject, zip, of, BehaviorSubject } from 'rxjs';
import { map, tap, delay, concatMap } from 'rxjs/operators';

const source = new Subject();
const startNext = new BehaviorSubject(null);

zip(source, startNext)
  .pipe(
    map(([s, n]) => s), // discard the 'startNext' trigger
    concatMap(s => of(s).pipe(delay(1000))),
    concatMap(s => of(s).pipe(delay(200))),
    concatMap(s => of(s).pipe(delay(3000))),
    tap(_ => startNext.next(null))
  ).subscribe(s => console.log('result for', s));

source.next('a');
source.next('b');
source.next('c');
https://stackblitz.com/edit/rxjs-g5efuc
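For comparison, here is a minimal sketch (assuming RxJS 6+, with hypothetical stepOne/stepTwo/stepThree helpers standing in for the bare Promises) of the nesting the question is reaching for: a single top-level concatMap whose inner pipe chains all the per-value work also keeps 'b' and 'c' queued until the whole chain for 'a' completes, because concatMap only subscribes to the next inner Observable once the previous one finishes.
import { Subject, from } from 'rxjs';
import { concatMap } from 'rxjs/operators';

// Hypothetical async steps standing in for the question's Promises.
const stepOne = (v: string) => Promise.resolve(`${v}:1`);
const stepTwo = (v: string) => Promise.resolve(`${v}:2`);
const stepThree = (v: string) => Promise.resolve(`${v}:3`);

const s = new Subject<string>();

s.pipe(
  // 'b' and 'c' wait here until the inner chain for 'a' has completed.
  concatMap(v =>
    from(stepOne(v)).pipe(
      concatMap(r => stepTwo(r)),
      concatMap(r => stepThree(r))
    )
  )
).subscribe(result => console.log(result));

s.next('a');
s.next('b');
s.next('c');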

Related

Flutter: how to test Either<Failure, List<Object>>

It seems to have the same value as the matcher, but the test still doesn't pass, probably because of the memory-address (identity) comparison. Can anyone let me know how I can test the result when a list is inside the Right of an Either?
test('get board list from remote data source', () async {
  when(mockBoardRemoteDataSource.getBoards())
      .thenAnswer((_) async => tBoardModels);

  final result = await repository.getBoards();

  verify(mockBoardRemoteDataSource.getBoards());
  expect(result, equals(Right(toBoards)));
  // Either<Failure, List<BoardInfo>> result;
  // (new) Right<dynamic, List<BoardInfo>> Right(List<BoardInfo> _r)
});
//console result
Expected: Right<dynamic, List<BoardInfo>>:<Right([_$_BoardInfo(1, name1, address1), _$_BoardInfo(2, name2, address2)])>
Actual: Right<Failure, List<BoardInfo>>:<Right([_$_BoardInfo(1, name1, address1), _$_BoardInfo(2, name2, address2)])>
package:test_api expect
package:flutter_test/src/widget_tester.dart 455:16 expect
test\features\nurban_honey\data\repositories\board_repository_impl_test.dart 58:9 main.<fn>.<fn>.<fn>
// BoardInfo implementation
import 'package:equatable/equatable.dart';
import 'package:freezed_annotation/freezed_annotation.dart';

part 'board_info.freezed.dart';

@freezed
class BoardInfo extends Equatable with _$BoardInfo {
  BoardInfo._();

  factory BoardInfo(int id, String name, String address) = _BoardInfo;

  @override
  List<Object?> get props => [id, name, address];
}
Thanks to Jay; his answer helped me arrive at my solution.
This is how I wrote the Act step of the test:
// Act
var results = (await repository.fetch()).fold(
  (failure) => failure,
  (response) => response,
);
Then I made a type expectation, matching the type declared in my use case.
In the case of successful execution:
// Assert
expect(results, isA<ResultType>());
In the case of an expected failure:
// Assert
expect(results, isA<FailureType>());
The entire test case looks like the following:
test('On successful execution, should return a SuccessResultType', () async {
  // Arrange
  repository = MyUseCaseRepository();

  // Act
  var results = (await repository.fetch()).fold(
    (failure) => failure,
    (response) => response,
  );

  // Assert
  expect(results, isA<SuccessResultType>());
});
I hope this approach can help anybody!
I've made the test pass by doing this:
result.fold(
    (l) => null,
    (resultR) => Right(toBoards)
        .fold((l) => null, (matcherR) => expect(resultR, matcherR)));
Is there a better way to do it?

How to run some code in an RxJS chain given there were no errors

I am trying to find a way to run some code only if there was no error in a given RxJS chain. Consider the following; is there something like the artificial NO_ERROR_OCCURED_RUN_HAPPY_PATH_CODE operator in RxJS?
private wrap(obs: Observable<any>): Observable<any> {
  return of(1).pipe(
    tap(() => this.spinner.startSpinner()),
    mergeMap(() =>
      obs.pipe(
        NO_ERROR_OCCURED_RUN_HAPPY_PATH_CODE(() => this.generic_success_popup()),
        catchError(this.handleError),
      )
    ),
    finalize(() => this.spinner.stopSpinner())
  );
}
Basically, almost every operator is only invoked if no error has been thrown further up the pipe; finalize is the exception, since it runs on both error and completion:
obs.pipe(
  tap(_ => console.log('no error, will run')),
  // throw some error
  mergeMap(_ => throwError(new Error('some error'))),
  finalize(() => console.log('will be called when there is an error or when the observable completes')),
  tap(_ => console.log('this will not run')),
  catchError(this.handleError),
)
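Applied to the wrap() method from the question, the artificial operator can simply be a tap whose complete callback shows the popup: complete only fires when obs finishes without an error, and catchError stays below it. A minimal sketch, assuming the same class members (spinner, generic_success_popup, handleError) as in the question:
private wrap(obs: Observable<any>): Observable<any> {
  return of(1).pipe(
    tap(() => this.spinner.startSpinner()),
    mergeMap(() =>
      obs.pipe(
        // the complete callback runs only when obs completes without erroring
        tap({ complete: () => this.generic_success_popup() }),
        catchError(this.handleError)
      )
    ),
    finalize(() => this.spinner.stopSpinner())
  );
}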

How can I rewrite the following code with switchMap

Hi, I have a small code snippet using the RxJS library, as follows. It works fine; however, I would like to rewrite it using switchMap. I tried every option I could think of, but I keep getting an error, and I was wondering if someone could help me out.
this.campaignControl.valueChanges.subscribe(
  (value) => {
    this.flightsService.getUnpaginatedFlightsWithinRange(
      {
        campaignId: value,
        jobStatuses: this.flightStatusIds,
        rangeStartDate: (this.rangeControl.value[0]).toISOString(),
        rangeEndDate: (this.rangeControl.value[1]).toISOString()
      }
    ).subscribe(
      (flightsUnpaginated) => {
        this.flights = flightsUnpaginated;
      }
    );
  }
);
thank you
I believe you're looking for something like this:
this.campaignControl.valueChanges.pipe(
  switchMap(value => this.flightsService.getUnpaginatedFlightsWithinRange({
    campaignId: value,
    jobStatuses: this.flightStatusIds,
    rangeStartDate: (this.rangeControl.value[0]).toISOString(),
    rangeEndDate: (this.rangeControl.value[1]).toISOString()
  })),
  tap(flightsUnpaginated => this.flights = flightsUnpaginated)
).subscribe();
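If this lives in an Angular component (which the valueChanges and injected services suggest), the surrounding wiring could look like the sketch below; the flightsSub field and the lifecycle hooks are assumptions about your component, not part of the original snippet. Note that switchMap cancels a still-running request when the campaign value changes again; if every response must be kept, concatMap or mergeMap would be the alternative.
import { Subscription } from 'rxjs';
import { switchMap, tap } from 'rxjs/operators';

// Inside the component class:
private flightsSub: Subscription;

ngOnInit(): void {
  this.flightsSub = this.campaignControl.valueChanges.pipe(
    switchMap(value => this.flightsService.getUnpaginatedFlightsWithinRange({
      campaignId: value,
      jobStatuses: this.flightStatusIds,
      rangeStartDate: this.rangeControl.value[0].toISOString(),
      rangeEndDate: this.rangeControl.value[1].toISOString()
    })),
    tap(flightsUnpaginated => this.flights = flightsUnpaginated)
  ).subscribe();
}

ngOnDestroy(): void {
  // avoid leaking the valueChanges subscription when the component is destroyed
  this.flightsSub.unsubscribe();
}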

Implementing debouncing batching queue with RxJS

I was trying to understand whether RxJS would be a good fit for solving the problem that this Node module addresses: https://github.com/ericdolson/debouncing-batch-queue
Its description says: "A queue which will emit and clear its contents when its size or timeout is reached. Ideal for aggregating data for bulk apis where batching in a timely manner is best. Or anything really where batching data is needed."
If so, could someone walk me through how to implement the simple example in this npm module with RxJS? Ideally with ES5 if possible.
There's an operator for that™: bufferWithTimeOrCount. If you need it to be truly equivalent, the input stream would be a Subject, with groupBy for the namespaces, like the following:
var dbbq$ = new Subject();

dbbq$.groupBy(function (v_ns) { return v_ns[1]; })
  .flatMap(function (S) {
    return S.bufferWithTimeOrCount(1000, 2);
  });

dbbq$.next(['ribs 0']);
dbbq$.next(['more ribs', 'bbq1']);

// is analogous to
var dbbq = new DBBQ(1000, 2);
dbbq.add('ribs 0');
dbbq.add('more ribs', 'bbq1');
No way I'm doing this with ES5 :)
const dataWithNamespace = (data, namespace) => ({data, namespace});

const source = [
  dataWithNamespace('ribs 0'),
  dataWithNamespace('ribs 1'),
  dataWithNamespace('ribs 2'),
  dataWithNamespace('ribs 3'),
  dataWithNamespace('ribs 4'),
  dataWithNamespace('more ribs', 'bbq1'),
  dataWithNamespace('more ribs', 'bbq1'),
  dataWithNamespace('brisket', 'best bbq namespace')
];

const DBBQ = (debounceTimeout, maxBatchSize) =>
  source$ => source$
    .groupBy(x => x.namespace)
    .mergeMap(grouped$ => grouped$
      .switchMap(x =>
        Rx.Observable.of(x.data)
          .concat(Rx.Observable.of(undefined)
            .delay(debounceTimeout)
          )
      )
      .bufferCount(maxBatchSize)
      .filter(x => x.length == maxBatchSize)
      .map(x => x.filter(x => x !== undefined))
    );

const source$ = Rx.Observable.from(source);

DBBQ(1000, 2)(source$).subscribe(console.log)
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.6/Rx.js"></script>
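For anyone on RxJS 6+ with pipeable operators, here is a roughly equivalent sketch of the same idea (groupBy per namespace, then bufferTime with a max buffer size in place of bufferWithTimeOrCount); the timing semantics approximate, rather than exactly reproduce, the original module:
import { Subject } from 'rxjs';
import { groupBy, mergeMap, bufferTime, filter } from 'rxjs/operators';

const dbbq$ = new Subject<{ data: string; namespace?: string }>();

dbbq$.pipe(
  groupBy(x => x.namespace),
  mergeMap(group$ => group$.pipe(
    // emit a batch after 1000 ms or as soon as 2 items have accumulated
    bufferTime(1000, null, 2),
    // bufferTime emits empty arrays on idle ticks, so drop those
    filter(batch => batch.length > 0)
  ))
).subscribe(batch => console.log(batch.map(x => x.data)));

dbbq$.next({ data: 'ribs 0' });
dbbq$.next({ data: 'more ribs', namespace: 'bbq1' });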

Using side effects in Akka Streams to implement commands received from a websocket

I want to be able to click a button on a website, have it represent a command, issue that command to my program via a websocket, have my program process that command (which will produce a side effect), and then return the results of that command to the website to be rendered.
The websocket would be responsible for pushing state changes, applied by different actors, that fall within the user's view.
Example: Changing AI instructions via the website. This modifies some values, which would get reported back to the website. Other users might change other AI instructions, or the AI would react to current conditions changing position, requiring the client to update the screen.
I was thinking I could have an actor responsible for updating the client with changed information, and just have the receiving stream update the state with the changes?
Is this the right library to use? Is there a better method to achieve what I want?
You can use akka-streams and akka-http for this just fine. Here is an example using an actor as the handler:
package test

import akka.actor.{Actor, ActorRef, ActorSystem, Props, Stash, Status}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.{Flow, Sink, Source, SourceQueueWithComplete}
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.pattern.pipe

import scala.concurrent.{ExecutionContext, Future}
import scala.io.StdIn

object Test extends App {
  implicit val actorSystem = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit def executionContext: ExecutionContext = actorSystem.dispatcher

  val routes =
    path("talk") {
      get {
        val handler = actorSystem.actorOf(Props[Handler])
        val flow = Flow.fromSinkAndSource(
          Flow[Message]
            .filter(_.isText)
            .mapAsync(4) {
              case TextMessage.Strict(text) => Future.successful(text)
              case TextMessage.Streamed(textStream) => textStream.runReduce(_ + _)
            }
            .to(Sink.actorRefWithAck[String](handler, Handler.Started, Handler.Ack, Handler.Completed)),
          Source.queue[String](16, OverflowStrategy.backpressure)
            .map(TextMessage.Strict)
            .mapMaterializedValue { queue =>
              handler ! Handler.OutputQueue(queue)
              queue
            }
        )
        handleWebSocketMessages(flow)
      }
    }

  val bindingFuture = Http().bindAndHandle(routes, "localhost", 8080)

  println("Started the server, press enter to shutdown")
  StdIn.readLine()

  bindingFuture
    .flatMap(_.unbind())
    .onComplete(_ => actorSystem.terminate())
}

object Handler {
  case object Started
  case object Completed
  case object Ack
  case class OutputQueue(queue: SourceQueueWithComplete[String])
}

class Handler extends Actor with Stash {
  import context.dispatcher

  override def receive: Receive = initialReceive

  def initialReceive: Receive = {
    case Handler.Started =>
      println("Client has connected, waiting for queue")
      context.become(waitQueue)
      sender() ! Handler.Ack
    case Handler.OutputQueue(queue) =>
      println("Queue received, waiting for client")
      context.become(waitClient(queue))
  }

  def waitQueue: Receive = {
    case Handler.OutputQueue(queue) =>
      println("Queue received, starting")
      context.become(running(queue))
      unstashAll()
    case _ =>
      stash()
  }

  def waitClient(queue: SourceQueueWithComplete[String]): Receive = {
    case Handler.Started =>
      println("Client has connected, starting")
      context.become(running(queue))
      sender() ! Handler.Ack
      unstashAll()
    case _ =>
      stash()
  }

  case class ResultWithSender(originalSender: ActorRef, result: QueueOfferResult)

  def running(queue: SourceQueueWithComplete[String]): Receive = {
    case s: String =>
      // do whatever you want here with the received message
      println(s"Received text: $s")
      val originalSender = sender()
      queue
        .offer("some response to the client")
        .map(ResultWithSender(originalSender, _))
        .pipeTo(self)
    case ResultWithSender(originalSender, result) =>
      result match {
        case QueueOfferResult.Enqueued => // okay
          originalSender ! Handler.Ack
        case QueueOfferResult.Dropped => // due to the OverflowStrategy.backpressure this should not happen
          println("Could not send the response to the client")
          originalSender ! Handler.Ack
        case QueueOfferResult.Failure(e) =>
          println(s"Could not send the response to the client: $e")
          context.stop(self)
        case QueueOfferResult.QueueClosed =>
          println("Outgoing connection to the client has closed")
          context.stop(self)
      }
    case Handler.Completed =>
      println("Client has disconnected")
      queue.complete()
      context.stop(self)
    case Status.Failure(e) =>
      println(s"Client connection has failed: $e")
      e.printStackTrace()
      queue.fail(new RuntimeException("Upstream has failed", e))
      context.stop(self)
  }
}
There are lots of places here that could be tweaked, but the basic idea remains the same. Alternatively, you could implement the Flow[Message, Message, _] required by the handleWebSocketMessages() method using a GraphStage. Everything used above is also described in detail in the Akka Streams documentation.
