How would MVC cause a non-deterministic UI and Redux not? (model-view-controller)

I've read several articles as well as the official Redux docs, all of which mention that MVC leads to a non-deterministic UI while Redux does not, because Redux uses pure functions. I know that a pure function produces the same output for the same input. But why doesn't mutation? It would be nice to have an example.

Mutation + asynchronous code can easily lead to functions that don't return the same result given the same input. This is a (very) simplified example with some comments.
// this could be a function in your controller
function delayedAddition(valuePair) {
  console.log(
    `Getting ready to calculate ${valuePair.x} + ${valuePair.y}`
  );
  return new Promise((resolve, reject) => {
    setTimeout(() => resolve(valuePair.x + valuePair.y), 500);
  });
}
const printWithMessage = message => printMe => console.log(message, printMe);
let mutableValuePair = { x: 5, y: 10 };
// this could be a call your view depends on
delayedAddition(mutableValuePair)
  .then(printWithMessage('Result is: '));

// MUTATION!
// This could happen in another controller,
// or wherever
mutableValuePair.x = 32;

// Expected result: 15 (5 + 10).
// Actual output: Result is: 42 (x was mutated to 32 before the timeout fired)
// So your view is no longer a function of
// what arguments you pass to your controllers.
If we were using an immutable data structure for valuePair, then something like valuePair.setX(32) would not change the original object. Instead we'd get back a new (independent) copy, so you would use it like this instead: const modifiedValuePair = valuePair.setX(32). That way, the ongoing calculation (which used the unaffected valuePair) would still give the expected result that 5 + 10 = 15.
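As a concrete plain-JavaScript sketch of mine (reusing the delayedAddition and printWithMessage helpers from above, no immutability library needed): deriving a new object with the spread operator instead of mutating keeps the in-flight calculation deterministic.

const valuePair = Object.freeze({ x: 5, y: 10 }); // freeze so accidental mutation throws in strict mode

delayedAddition(valuePair)
  .then(printWithMessage('Result is: '));

// Instead of mutating, derive an independent copy with the new x.
const modifiedValuePair = { ...valuePair, x: 32 };

// The pending calculation still sees { x: 5, y: 10 }.
// Result is: 15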

Related

How to model asynchronous callback in functional reactive programming?

As I understand, in FRP (Functional Reactive Programming), we model the system as a component which receives some input signals and generates some output signals:
                ,------------.
--- input1$ --> |            | -- output1$ -->
                |   System   | -- output2$ -->
--- input2$ --> |            | -- output3$ -->
                `------------'
In this way, if we have multiple subsystems, we can plumb them together, as long as we can provide operators that pipe outputs into inputs.
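As a minimal sketch of that idea (my own example, not from the question), each subsystem can be modelled as a function from an input stream to an output stream, and plumbing them together is just function composition over RxJS observables:

import { Observable, of, map } from 'rxjs';

// A "system" takes input signals and produces output signals.
type System<I, O> = (input$: Observable<I>) => Observable<O>;

const double: System<number, number> = in$ => in$.pipe(map(x => x * 2));
const label: System<number, string> = in$ => in$.pipe(map(x => `value: ${x}`));

// Plumbing: the output stream of `double` becomes the input stream of `label`.
const composed: System<number, string> = in$ => label(double(in$));

composed(of(1, 2, 3)).subscribe(console.log); // value: 2, value: 4, value: 6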
Now, I'm building an app which processes video frames asynchronously. The actual processing logic is abstracted and can be provided as an argument. In a non-FRP way of thinking, I can construct the app as
new App(async (frame) => {
  return await processFrame(frame)
})
The App is responsible for establishing communication with the underlying video pipeline, repeatedly getting video frames and passing each frame to the given callback; once the callback resolves, App sends back the processed frame.
Now I want to model the App in a FRP way so I can flexibly design the frame processing.
const processedFrameSubject = new Subject()
const { frame$ } = createApp(processedFrameSubject)

frame$.pipe(
  map(toRGB),
  mergeMap(processRGBFrame),
  map(toYUV)
).subscribe(processedFrameSubject)
The benefit is that it enables the consumer of createApp to define the processing pipeline declaratively.
However, in createApp, given a processedFrame, I need to reason about which frame it relates to. Since frame$ and processedFrameSubject are now separated, it's really hard for createApp to tell which frame a processedFrame belongs to, which was easy in the non-FRP implementation because frame and processedFrame lived in the same closure.
In functional reactive programming, you would avoid side effects as much as possible; this means avoiding .subscribe(...), tap(() => subject.next()), etc. With FRP, you declare what your state is and how it's wired up, but nothing executes until someone needs it and performs the side effect.
So I think that the following API would still be considered FRP:
function createApp(
  processFrame: (frame: Frame) => Observable<ProcessedFrame>
): Observable<void>

const app$ = createApp(frame => of(frame).pipe(
  map(toRGB),
  mergeMap(processRGBFrame),
  map(toYUV)
));

// `app$` is an Observable that can be consumed by composing it with other
// observables, or by "executing the side effect" by calling .subscribe() on it
// possible implementation of createApp for this API
function createApp(
  processFrame: (frame: Frame) => Observable<ProcessedFrame>
) {
  return new Observable<void>(() => {
    const stopVideoHandler = registerVideoFrameHandler(
      (frame: Frame) => firstValueFrom(processFrame(frame))
    );
    return () => {
      // teardown
      stopVideoHandler()
    }
  });
}
Something worth noting is that createApp returns a new Observable. Inside new Observable(...) we can escape from FRP, because that's the only way we can integrate with external parties, and none of the side effects we have written will run until someone .subscribe()s to the observable.
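To illustrate that last point (my own minimal sketch, not part of the answer): nothing inside new Observable(...) runs at creation time; the setup and teardown only execute per subscription.

import { Observable } from 'rxjs';

const lazy$ = new Observable<number>(subscriber => {
  console.log('side effect: wiring up'); // runs on subscribe, not on creation
  subscriber.next(1);
  return () => console.log('side effect: teardown');
});

console.log('observable created, nothing has run yet');
const sub = lazy$.subscribe(value => console.log('received', value));
sub.unsubscribe(); // runs the teardown function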
This API is simple and would still be FRP, but it has one limitation: the processFrame callback can only process frames independently of the others.
If you need an API that supports that, then you need to expose frame$, but again as a projection function passed to createApp:
function createApp(
  projectFn: (frame$: Observable<Frame>) => Observable<ProcessedFrame>
): Observable<void>

const app$ = createApp(frame$ => frame$.pipe(
  map(toRGB),
  mergeMap(processRGBFrame),
  map(toYUV)
));

// possible declaration of createApp
function createApp(
  projectFn: (frame$: Observable<Frame>) => Observable<ProcessedFrame>
) {
  return new Observable<void>(() => {
    const frame$ = new Subject<Frame>();
    const processedFrame$ = connectable(frame$.pipe(projectFn));
    const processedSub = processedFrame$.connect();
    const stopVideoHandler = registerVideoFrameHandler(
      (frame: Frame) => {
        // We need to create the promise _before_ we send in the next `frame$`,
        // in case it's processed synchronously
        const resultFrame = firstValueFrom(processedFrame$);
        frame$.next(frame);
        return resultFrame;
      }
    );
    return () => {
      // teardown
      stopVideoHandler()
      processedSub.unsubscribe();
    }
  });
}
I'm guessing that registerVideoFrameHandler will call the handler one-by-one, without overlap? If there is overlap, then you'd need to track the frame number in some way; if the SDK doesn't give you any option, then try something like:
// Assuming `projectFn` will emit frames in order. If not, then the API
// should change to be able to match them
const processedFrame$ = connectable(frame$.pipe(
  projectFn,
  map((result, index) => ({ result, index }))
));
const processedSub = processedFrame$.connect();

let frameIdx = 0;
const stopVideoHandler = registerVideoFrameHandler(
  (frame: Frame) => {
    const thisIdx = frameIdx;
    frameIdx++;
    const resultFrame = firstValueFrom(processedFrame$.pipe(
      filter(({ index }) => index === thisIdx),
      map(({ result }) => result)
    ));
    frame$.next(frame);
    return resultFrame;
  }
);

How do I append to an observable inside the observable itself

My situation is as follows: I am performing sequential HTTP requests, where one HTTP request depends on the response of the previous. I would like to combine the response data of all these HTTP requests into one observable. I have implemented this before using an async generator. The code for this was relatively simple:
async function* AsyncGeneratorVersion() {
  let moreItems = true; // whether there is a next page
  let lastAssetId: string | undefined = undefined; // used for pagination
  while (moreItems) {
    // fetch current batch (this performs the HTTP request)
    const batch = await this.getBatch(/* arguments */, lastAssetId);
    moreItems = batch.more_items;
    lastAssetId = batch.last_assetid;
    yield* batch.getSteamItemsWithDescription();
  }
}
I am trying to move away from async generators, and towards RxJs Observables. My best (and working) attempt is as follows:
const observerVersion = new Observable<SteamItem>((subscriber) => {
  (async () => {
    let moreItems = true;
    let lastAssetId: string | undefined = undefined;
    while (moreItems) {
      // fetch current batch (this performs the HTTP request)
      const batch = await this.getBatch(/* arguments */, lastAssetId);
      moreItems = batch.more_items;
      lastAssetId = batch.last_assetid;
      const items = batch.getSteamItemsWithDescription();
      for (const item of items) subscriber.next(item);
    }
    subscriber.complete();
  })();
});
Now, I believe that there must be some way of improving this Observable variant; this code does not seem very reactive to me. I have tried several things using pipe, but unfortunately these were all unsuccessful.
I found concatMap to come close to a solution. It allowed me to concatenate the next HTTP request as an observable (created with the this.getBatch method); however, I could not find a good way to do that without abandoning the response of the current HTTP request.
How can this be achieved? In short I believe this problem could be described as appending data to an observable inside the observable itself. (But perhaps this is not a good way of handling this situation)
TLDR;
Here's a working StackBlitz demo.
Explanation
Here would be my approach:
// Faking an actual request
const makeReq = (prevArg, response) =>
  new Promise((r) => {
    console.log(`Running promise with the prev arg as: ${prevArg}!`);
    setTimeout(r, 1000, { prevArg, response });
  });

// Preparing the sequential requests.
const args = [1, 2, 3, 4, 5];

from(args)
  .pipe(
    // Running the requests sequentially.
    mergeScan(
      (acc, crtVal) => {
        // `acc?.response` will refer to the previous response
        // and we're using it for the next request.
        return makeReq(acc?.response, crtVal);
      },
      // The seed (works the same as in `reduce`).
      null,
      // Making sure that only one request is run at a time.
      1
    ),
    // Combining all the responses into one object
    // and emitting it after all the requests are done.
    reduce((acc, val, idx) => ({ ...acc, [`request${idx + 1}`]: val }), {})
  )
  .subscribe(console.warn);
Firstly, from(array) will emit each item from the array, synchronously and one by one.
Then, there is mergeScan. It is exactly the result of combining scan and merge. With scan, we can accumulate values (in this case we're using it to get access to the response of the previous request), and what merge does is allow us to use observables.
To make things a bit easier to understand, think of the Array.prototype.reduce function. It looks something like this:
[].reduce((acc, value) => { return { ...acc }}, /* Seed value */{});
What merge brings to mergeScan is that the accumulator can return an observable (or promise), e.g. (acc, value) => new Observable(...), instead of return { ...acc }. The latter implies synchronous behavior, whereas with the former we can have asynchronous behavior (see the sketch after the step-by-step list below).
Let's go a bit step by step:
when 1 is emitted, makeReq(undefined, 1) will be invoked
after the first makeReq(from above) resolves, makeReq(1, 2) will be invoked
after makeReq(1, 2) resolves, makeReq(2, 3) will be invoked and so on...
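For contrast, here is a tiny sketch of my own (with plain numbers instead of requests, RxJS 7-style imports) showing the difference: scan's accumulator must return a plain value, while mergeScan's may return a promise or observable, and a concurrency of 1 keeps the steps sequential.

import { of, scan, mergeScan } from 'rxjs';

// scan: synchronous accumulator
of(1, 2, 3).pipe(
  scan((acc, value) => acc + value, 0)
).subscribe(console.log); // 1, 3, 6

// mergeScan: the accumulator returns a promise (or observable);
// concurrency = 1 means each step waits for the previous async result
of(1, 2, 3).pipe(
  mergeScan((acc, value) => Promise.resolve(acc + value), 0, 1)
).subscribe(console.log); // 1, 3, 6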
Somebody I consulted regarding this matter came up with this solution, which I think is quite elegant:
defer(() => this.getBatch(options)).pipe(
  expand(({ more_items, last_assetid }) =>
    more_items
      ? this.getBatch({ ...options, startAssetId: last_assetid })
      : EMPTY,
  ),
  concatMap((batch) => batch.getSteamItemsWithDescription()),
);
From my understanding, the use of expand here is very similar to the use of mergeScan in #Andrei's answer.
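As a minimal illustration of my own of what expand does here: every value the stream emits is fed back into the projection function, and the recursion stops once the projection returns EMPTY, mirroring the more_items check above.

import { of, EMPTY, expand } from 'rxjs';

// Each emitted value is re-fed into the projection until it returns EMPTY,
// just like "fetch the next page while more_items is true".
of(1).pipe(
  expand(page => (page < 5 ? of(page + 1) : EMPTY))
).subscribe(console.log); // 1, 2, 3, 4, 5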

Filtered send queue in rxjs

So I'm relatively inexperienced with rxjs, so if this is something that would be a pain or really awkward to do, please tell me and I'll go a different route. In this particular use case, I want to queue up updates to send to the server, but if there's an update "in flight" I want to keep only the latest item, which will be sent when the current in-flight request completes.
I am kind of at a loss of where to start, honestly. It seems like this would need either a buffer-type operator and/or concatMap.
Here's what I would expect to happen:
const updateQueue$ = new Subject<ISettings>()
function sendToServer (settings: ISettings): Observable {...}
...
// we should send this immediately because there's nothing in-flight
updateQueue$.next({ volume: 25 });
updateQueue$.next({ volume: 30 });
updateQueue$.next({ volume: 50 });
updateQueue$.next({ volume: 65 });
// let's assume that our original update just completed
// I would now expect a new request to go out with `{ volume: 65 }` and the previous two to be ignored.
I think you can achieve what you want with this:
const allowNext$ = new Subject<boolean>()
const updateQueue$ = new Subject<ISettings>()

function sendToServer (settings: ISettings): Observable { ... }

updateQueue$
  .pipe(
    // Pass along flag to mark the first emitted value
    map((value, index) => {
      const isFirstValue = index === 0
      return { value, isFirstValue }
    }),
    // Allow the first value through immediately
    // Debounce the rest until subject emits
    debounce(({ isFirstValue }) => isFirstValue ? of(true) : allowNext$),
    // Send network request
    switchMap(({ value }) => sendToServer(value)),
    // Push to subject to allow next debounced value through
    tap(() => allowNext$.next(true))
  )
  .subscribe(response => {
    ...
  })
This is a pretty interesting question.
If you did not have the requirement of issuing the last item in the queue, but simply wanted to ignore all update requests until the one in flight completes, then you would simply use the exhaustMap operator.
But the fact that you want to ignore all BUT the last request for update makes the potential solution a bit more complex.
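To make that concrete, here is a small sketch of my own of exhaustMap on its own, showing why it is not enough by itself: every value that arrives while a request is in flight is dropped, including the latest one that you actually want to keep.

import { Subject, timer, exhaustMap, map } from 'rxjs';

const updates$ = new Subject<number>();

updates$.pipe(
  // the inner observable stands in for an in-flight server request
  exhaustMap(volume => timer(1000).pipe(map(() => `sent volume ${volume}`)))
).subscribe(console.log);

updates$.next(25); // starts a "request"
updates$.next(30); // dropped: a request is already in flight
updates$.next(65); // dropped as well, even though it is the latest value
// logs only: "sent volume 25"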
If I understand the problem well, I would proceed as follows.
First of all, I would define 2 Subjects: one that emits the values for the update operation (i.e. the one you have already defined) and one dedicated to emitting only the last item in the queue, if there is one.
The code would look like this
let lastUpdate: ISettings;

const _updateQueue$ = new Subject<ISettings>();
const updateQueue$ = _updateQueue$
  .asObservable()
  .pipe(tap(settings => (lastUpdate = settings)));

const _lastUpdate$ = new Subject<ISettings>();
const lastUpdate$ = _lastUpdate$.asObservable().pipe(
  tap(() => (lastUpdate = null)),
  delay(0)
);
Then I would merge the 2 Observables to obtain the stream you are looking for, like this
merge(updateQueue$, lastUpdate$)
  .pipe(
    exhaustMap(settings => sendToServer(settings))
  )
  .subscribe({
    next: res => {
      // do something with the response
      if (lastUpdate) {
        // emit only if there is a new "last one" in the queue
        _lastUpdate$.next(lastUpdate);
      }
    },
  });
You may notice that the variable lastUpdate is used to ensure that the last update in the queue is used only once.

Make an HTTP request observable and return an Observable with the result

I have the following scenario: there is a service called "ContextProvider" that holds information regarding the context of the application (logged-in user, things they can access, etc). Right now I am observing this as follows:
this.contextProvider.Context.subscribe(context => {
  //Do Something
})
Now I have a service that will also be observable. I want this service to observe the context and return an observable. This would be easy with the map function:
let observable = this.contextProvider.Context.pipe(map(context => {
  let aux: number = somevar + someothervar;
  return aux;
})) //observable variable now holds the type Observable<number>
My scenario is a little bit more complex, because in order to fetch the result, I have to make an Http call, which is also an observable/promise:
let observable = this.contextProvider.Context.pipe(map(context => {
  return this.httpClient.get<number>("Some URL").pipe(take(1));
})); //observable var now holds Observable<Observable<number>>
How can I make the "observable" var hold Observable<number>?
EDIT: The URL value depends on the some values of the "context" variable
If I understand your problem correctly, you need to use concatMap for this case, like this:
this.contextProvider.Context.pipe(
  concatMap(context => {
    return this.httpClient.get<number>("Some URL" + context.someData);
  }));
You can find more patterns around the use of Observables with http calls in this article
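A side note from me (not part of the answer above): if the context can emit again while a request is still pending and only the result for the latest context matters, switchMap is a common alternative to concatMap, since it cancels the previous in-flight HTTP request. This assumes the same component context as above and switchMap imported from rxjs.

this.contextProvider.Context.pipe(
  // switchMap also flattens Observable<Observable<number>> into Observable<number>,
  // but cancels the previous request whenever a new context arrives
  switchMap(context => {
    return this.httpClient.get<number>("Some URL" + context.someData);
  }));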

How to stub Fluture?

Background
I am trying to convert a code snippet from good old Promises into something using Flutures and Sanctuary:
https://codesandbox.io/embed/q3z3p17rpj?codemirror=1
Problem
Now, usually, with Promises, I can use a library like sinonjs to stub the promises, i.e. to fake their results, force them to resolve, to reject, etc.
This is fundamental, as it helps one test several branch directions and make sure everything works as it is supposed to.
With Flutures however, it is different. One cannot simply stub a Fluture and I didn't find any sinon-esque libraries that could help either.
Questions
How do you stub Flutures?
Is there any specific recommendation to doing TDD with Flutures/Sanctuary?
I'm not sure, but those Flutures (this name! ... nevermind, the API looks cool) are plain objects, just like promises. They just have a more elaborate API and different behavior.
Moreover, you can easily create "mock" flutures with Future.of, Future.reject instead of doing some real API calls.
Yes, sinon contains sugar helpers like resolves, rejects but they are just wrappers that can be implemented with callsFake.
So, you can easily create a stub that creates a Fluture like this:
someApi.someFun = sinon.stub().callsFake((arg) => {
  assert.equals(arg, 'spam');
  return Future.of('bar');
});
Then you can test it like any other API.
The only problem is "asynchronicity", but that can be solved as proposed below.
// with async/await
it('spams with async', async () => {
  const result = await someApi.someFun('spam').promise();
  assert.equals(result, 'bar');
});

// or leveraging mocha's ability to wait for returned thenables
it('spams', () => {
  return someApi.someFun('spam')
    .promise()
    .then(
      (result) => { assert.equals(result, 'bar'); },
      (error) => { /* ???? */ }
    );
});
As Zbigniew suggested, Future.of and Future.reject are great candidates for mocking using plain old javascript or whatever tools or framework you like.
To answer part 2 of your question, about specific recommendations for doing TDD with Fluture: there is of course no one true way it should be done. However, I do recommend you invest a little time in readability and ease of writing tests if you plan on using Futures all across your application.
This applies to anything you frequently include in tests though, not just Futures.
The idea is that when you are skimming over test cases, you will see developer intention, rather than boilerplate to get your tests to do what you need them to.
In my case I use mocha & chai in the BDD style (given when then).
And for readability I created these helper functions.
const {expect} = require('chai');

exports.expectRejection = (f, onReject) =>
  f.fork(
    onReject,
    value => expect.fail(
      `Expected Future to reject, but was ` +
      `resolved with value: ${value}`
    )
  );

exports.expectResolve = (f, onResolve) =>
  f.fork(
    error => expect.fail(
      `Expected Future to resolve, but was ` +
      `rejected with value: ${error}`
    ),
    onResolve
  );
As you can see, there is nothing magical going on: I simply fail on the unexpected result and let you handle the expected path, to do more assertions with that.
Now some tests would look like this:
const Future = require('fluture');
const {expect} = require('chai');
const {expectRejection, expectResolve} = require('../util/futures');

describe('Resolving function', () => {
  it('should resolve with the given value', done => {
    // Given
    const value = 42;

    // When
    const f = Future.of(value);

    // Then
    expectResolve(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});

describe('Rejecting function', () => {
  it('should reject with the given value', done => {
    // Given
    const value = 666;

    // When
    const f = Future.of(value); // deliberately resolving, to demonstrate the failure output below

    // Then
    expectRejection(f, out => {
      expect(out).to.equal(value);
      done();
    });
  });
});
And running the tests should give one pass and one failure:
  ✓ Resolving function should resolve with the given value: 1ms
  1) Rejecting function should reject with the given value

  1 passing (6ms)
  1 failing

  1) Rejecting function
       should reject with the given value:
     AssertionError: Expected Future to reject, but was resolved with value: 666
Do keep in mind that this should be treated as asynchronous code, which is why I always accept the done function as an argument in it() and call it at the end of my expected results. Alternatively, you could change the helper functions to return a promise and let mocha handle that.
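For completeness, a possible promise-based variant of those helpers (my own sketch, assuming the same instance-method Fluture API used above, where a Future exposes .promise() and .swap() to exchange its rejection and resolution branches):

const Future = require('fluture');
const {expect} = require('chai');

// resolves the test's promise with the Future's value, or rejects it if the Future rejects
const expectResolveP = f => f.promise();
// swap() turns a rejection into a resolution, so a rejected Future yields a resolved promise
const expectRejectionP = f => f.swap().promise();

it('should resolve with the given value', async () => {
  const out = await expectResolveP(Future.of(42));
  expect(out).to.equal(42);
});

it('should reject with the given value', async () => {
  const out = await expectRejectionP(Future.reject(666));
  expect(out).to.equal(666);
});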
