Scope management in RxJS

Here, I'm mixing RxJS and promise-mysql:
const mysql = require('promise-mysql');
const Rx = require('rxjs/Rx');
I establish a connection with this:
var conn;
var queryString = 'select product_id, set_complete_in from mi_product limit 2';

const obs = Rx.Observable.fromPromise(mysql.createConnection({
  host: 'somehost',
  user: 'someuser',
  password: 'some password',
  database: 'someday'
}));
At this point, obs wraps a promise that may or may not be resolved yet.
Here I set up a series of queries (basically a fan-out of queries run in parallel):
const responseStream = obs
  .flatMap(connection => {
    conn = connection;
    // submitArray is an array of IDs
    var requestStream = submitArray.map(id =>
      Rx.Observable.fromPromise(connection.query(queryString, [id])));
    // run these in parallel
    return Rx.Observable.forkJoin(requestStream);
  });
// it's an array of arrays, so flatten..
responseStream.subscribe(
  resp => {
    returnValue = resp.reduce((accum, arr) => accum.concat(arr), []);
    console.log('returnValue is ', JSON.stringify(returnValue));
    // call back here to return the data
  },
  err => { console.log('!!!!!!err is ', err); },
  () => { console.log('connection end!'); conn.end(); });
This all works, but my question is how to handle the connection. As you can see, I define the connection at a higher scope so that it's available in the subscribe callback. It doesn't seem very functional to call conn.end() in the subscription; it seems like I should be handling it within the responseStream definition. Does this seem correct?

Your code is a little hard to follow, but I think your problem is that you need access to the connection created by your promise, yet you won't have access to it in a completion callback without mixing up scopes. If I were implementing this feature, I would solve it by ditching the fromPromise usage and creating my own stream. When subscribers were done with my stream, I'd clean up the connection.
Rx.Observable.create(observer => {
  const createConnection = mysql.createConnection({
    host: 'somehost',
    user: 'someuser',
    password: 'some password',
    database: 'someday'
  });
  /* forward the connection and any errors to the observer
     (wrapped in arrow functions so the observer methods keep their `this`) */
  createConnection.then(conn => observer.next(conn), err => observer.error(err));
  return function cleanUp() {
    /* end the connection when we are done */
    createConnection.then(connection => connection.end());
  };
});
This ensures that any "connection close" operation is handled by the source stream, a pattern that is common and advantageous when working with observables.
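To make that concrete, here's a hedged sketch (not from the original post) of how the fan-out could sit on top of such a stream; connection$ stands for the Observable built above, and take(1) makes the stream complete so the cleanUp() teardown runs and ends the connection:

const responseStream = connection$
  .flatMap(connection => {
    const requestStream = submitArray.map(id =>
      Rx.Observable.fromPromise(connection.query(queryString, [id])));
    return Rx.Observable.forkJoin(requestStream);
  })
  .take(1); // completing the stream triggers the source's cleanUp()

responseStream.subscribe(
  resp => console.log(resp.reduce((accum, arr) => accum.concat(arr), [])),
  err => console.log('err is ', err));
  // no conn.end() here: the source observable owns the connection now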

Related

IndexedDB breaks in Firefox after trying to save autoIncremented Blob

I am trying to implement Blob storage via IndexedDB for long Media recordings.
My code works fine in Chrome and Edge (not tested in Safari yet) - but won't do anything in Firefox. There are no errors, it just doesn't try to fulfill my requests past the initial DB Connection (which is successful). Intuitively, it seems that the processing is blocked by something. But I don't have anything in my code which would be blocking.
Simplified version of the code (without heavy logging and excessive error checks which I have added trying to debug):
const dbName = 'recording'
const storeValue = 'blobs'
let connection = null
const handler = window.indexedDB || window.mozIndexedDB || window.webkitIndexedDB

function connect() {
  return new Promise((resolve, reject) => {
    const request = handler.open(dbName)
    request.onupgradeneeded = (event) => {
      const db = event.target.result
      if (db.objectStoreNames.contains(storeValue)) {
        db.deleteObjectStore(storeValue)
      }
      db.createObjectStore(storeValue, {
        keyPath: 'id',
        autoIncrement: true,
      })
    }
    request.onerror = () => {
      reject()
    }
    request.onsuccess = () => {
      connection = request.result
      connection.onerror = () => {
        connection = null
      }
      connection.onclose = () => {
        connection = null
      }
      resolve()
    }
  })
}

async function saveChunk(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection.transaction(
      storeValue,
      'readwrite'
    ).objectStore(storeValue)
    const req = store.add(chunk)
    req.onsuccess = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
      resolve(req.result)
    }
    req.onerror = () => {
      reject()
    }
    req.transaction.oncomplete = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
    }
  })
}

// ... on blob available
await saveChunk(blob)
What I tried so far:
close any other browser windows, or anything that could count as an "open connection" that might be blocking execution
refresh Firefox profile
let my colleague test the code on his own machine => same result
Additional information that might be useful:
Running in a Nuxt 2.15.8 dev environment (localhost:3000). The code is used in the component as a mixin. The project is rather large and uses a bunch of different browser APIs, so there might be some kind of collision?! This is the only place where we use IndexedDB, though, so getting to the bottom of this without any errors being thrown seems almost impossible.
Edit:
When I create a brand new database, there is a brief window in which transactions complete fine, but after some time has passed or something has triggered, requests go back to being queued indefinitely.
I found out this morning when I had this structure:
...
clearDatabase() {
  // get the store
  const req = store.clear()
  req.transaction.oncomplete = () => console.log('all good!')
}

await this.connect()
await this.clearDatabase()
'All good!' fired, but any subsequent requests were broken, same as before.
On page reload, even the clearDatabase request was broken again.
Something breaks with ongoing usage.
Edit2:
It's clearly connected to saving a Blob instance without an id while the autoIncrement option is on. Not only does it fail silently, it basically completely corrupts the DB. If I manually assign an incrementing ID to a Blob object, it works! If I leave out the id field for a regular simple object, it also works! Does anyone know about this? I feel like saving Blobs is a common use case, so this should have been found already?!
I've concluded, unless proven otherwise, that it's a Firefox bug and opened a ticket on Bugzilla.
This happens with Blobs but might also be true for other instances. If you find yourself in the same situation, there is a workaround: don't rely on autoIncrement, and assign IDs manually before trying to save them to the DB.
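A minimal sketch of that workaround, assuming the connect()/store setup from the question (the nextId counter and the wrapper object are illustrative, not from the original code):

let nextId = 1

async function saveChunkWithManualId(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection
      .transaction(storeValue, 'readwrite')
      .objectStore(storeValue)
    // Blobs can't carry the keyPath property themselves, so wrap them
    // in a plain object and assign the id by hand.
    const req = store.add({ id: nextId++, blob: chunk })
    req.onsuccess = () => resolve(req.result)
    req.onerror = () => reject(req.error)
  })
}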

Unit testing NestJS Observable Http Retry

I'm making a request to a 3rd party API via NestJS's built-in HttpService. I'm trying to simulate a scenario where the initial call to one of this API's endpoints might return an empty array on the first try. I'd like to use RxJS's retryWhen to hit the API again after a delay of 1 second. However, I'm currently unable to get the unit test to mock the second response:
it('Retries view account status if needed', (done) => {
  jest.spyOn(httpService, 'post')
    .mockReturnValueOnce(of(failView)) // mock gets stuck on returning this value
    .mockReturnValueOnce(of(successfulView));
  const accountId = '0812081208';
  const batchNo = '39cba402-bfa9-424c-b265-1c98204df7ea';
  const response = client.viewAccountStatus(accountId, batchNo);
  response.subscribe(
    data => {
      expect(data[0].accountNo).toBe('0812081208');
      expect(data[0].companyName).toBe('Some company name');
      done();
    },
  );
});
My implementation is:
viewAccountStatus(accountId: string, batchNo: string): Observable<any> {
  const verificationRequest = new VerificationRequest();
  verificationRequest.accountNo = accountId;
  verificationRequest.batchNo = batchNo;
  this.logger.debug(`Calling 3rd party service with batchNo: ${batchNo}`);
  const config = {
    headers: {
      'Content-Type': 'application/json',
    },
  };
  const response = this.httpService.post(url, verificationRequest, config)
    .pipe(
      map(res => {
        console.log(res.data); // always empty
        if (res.status >= 400) {
          throw new HttpException(res.statusText, res.status);
        }
        if (!res.data.length) {
          this.logger.debug('Response was empty');
          throw new HttpException('Account not found', 404);
        }
        return res.data;
      }),
      retryWhen(errors => {
        this.logger.debug(`Retrying accountId: ${accountId}`);
        // It's entirely possible the first call will return an empty array,
        // so we retry with a backoff
        return errors.pipe(
          delayWhen(() => timer(1000)),
          take(1),
        );
      }),
    );
  return response;
}
When logging from inside the initial map, I can see that the array is always empty. It's as if the second mocked value never happens. Perhaps I also have a solid misunderstanding of how observables work and I should somehow be trying to assert against the SECOND value that gets emitted? Regardless, when the observable retries, we should be seeing that second mocked value, right?
I'm also getting
Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout.
On each run... so I'm guessing I'm not calling done() in the right place.
I think the problem is that retryWhen(notifier) will resubscribe to the same source when its notifier emits.
Meaning that if you have
new Observable(s => {
  s.next(1);
  s.next(2);
  s.error(new Error('err!'));
}).pipe(
  retryWhen(/* ... */)
)
The callback will be invoked every time the source is re-subscribed. In your example, it will call the logic which is responsible for sending the request, but it won't call the post method again.
The source could be thought of as the Observable's callback: s => { ... }.
What I think you'll have to do is to conditionally choose the source, based on whether the error took place or not.
Maybe you could use mockImplementation:
let hasErr = false;
jest.spyOn(httpService, 'post')
.mockImplementation(
() => hasErr ? of(successView) : (hasErr = true, of(failView))
)
Edit
I think the above does not do anything different; here's what I think mockImplementation should look like:
let err = false;
mockImplementation(
  () => new Observable(s => {
    if (err) {
      s.next(success)
    } else {
      err = true;
      s.next(fail)
    }
  })
)
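To see why the second version behaves differently, here is a small self-contained sketch (illustrative only, RxJS 6 style): the subscribe function runs again on every retry, so a flag can flip the outcome between attempts.

const { Observable, timer } = require('rxjs');
const { retryWhen, delayWhen, take } = require('rxjs/operators');

let failedOnce = false;
const source = new Observable(subscriber => {
  // this function runs again on every (re)subscription,
  // which is what lets the mock switch responses between attempts
  if (failedOnce) {
    subscriber.next('success');
    subscriber.complete();
  } else {
    failedOnce = true;
    subscriber.error(new Error('empty response'));
  }
});

source.pipe(
  retryWhen(errors => errors.pipe(delayWhen(() => timer(1000)), take(1)))
).subscribe(v => console.log(v)); // logs 'success' after roughly one second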

Upload multiple files to an SFTP server using RxJS

Good day! I would like to implement a convenient method for uploading multiple files to an SFTP server, with a callback for each file uploaded.
I have already implemented some code that works, but I noticed a memory leak that prevents the connection to the SFTP server from closing successfully after all uploads finish.
It is absolutely not critical for me to keep a connection open; repeatedly opening and closing it is fine.
I tweaked the code a little bit from here: how do I send (put) multiple files using nodejs ssh2-sftp-client?
code:
function sftpPutFiles(config, files, pathToDir, callbackStep, callbackFinish, callbackError) {
  let Client = require('ssh2-sftp-client');
  let PromisePool = require('es6-promise-pool');

  const sendFile = (config, pathFrom, pathTo) => {
    return new Promise(function (resolve, reject) {
      let sftp = new Client();
      console.log(pathFrom, pathTo);
      sftp.on('keyboard-interactive', (name, instructions, instructionsLang, prompts, finish) => {
        finish([config.password]);
      });
      sftp.connect(config).then(() => {
        return sftp.put(pathFrom, pathTo);
      }).then(() => {
        console.log('finish ' + pathTo);
        callbackStep(pathTo);
        sftp.end();
        resolve(pathTo);
      }).catch((err) => {
        console.log(err, 'catch error');
        callbackError(err);
      });
    });
  };

  // Create a pool.
  let indexFile = 0;
  let pool = new PromisePool(() => {
    while (indexFile < files.length) {
      let file = files[indexFile];
      indexFile++;
      return sendFile(config, file.path, `${pathToDir}/${file.name}`);
    }
    return null;
  }, 10);

  pool.start().then(function () {
    console.log({"message": "OK"}); // res.send('{"message":"OK"}');
    callbackFinish();
  });
}
Usage:
input.addEventListener('change', function (e) {
  e.preventDefault();
  sftpPutFiles(
    {host: '192.168.2.201', username: 'crestron', password: 'ehAdmin'},
    this.files,
    `./Program01/test/`,
    pathTo => {
      let tr = document.createElement('tr');
      let bodyTable = document.querySelector('.body');
      tr.innerHTML = `<td>${bodyTable.children.length + 1}</td><td>${pathTo}</td><td>OK</td>`;
      bodyTable.appendChild(tr);
    }, () => {
      alert('All files uploaded');
    },
    err => {
      alert('Error: ' + err);
    }
  );
});
If there is an error uploading a file to the SFTP server, the connection does not close and I cannot reconnect when I open the custom console. I would like to rewrite the code with RxJS for better maintainability, and I think that would let me solve the connection-closing problem and keep the application responsive.
Make sure you're using the latest version of ssh2-sftp-client - there have been a fair number of updates recently, including fixes to handle errors more consistently and to ensure connections are closed correctly (v4.1.0).
You are using sftp.on('keyboard-interactive', ...). Nothing in the module emits events of this type, so this listener will never fire.
If you just want to upload files, use the fastPut() method. It is much faster. Make sure the destination path includes the remote file name and not just the remote directory.
Have a look at Promise.all(). You could use it instead of the promise-pool, and I think it would be a lot cleaner. Something like (untested):
const path = require('path');
const Client = require('ssh2-sftp-client');

let localPath = '/path/to/src-dir';
let remotePath = '/path/to/dst-dir';
let files = ['file1.txt', 'file2.txt', 'file3.txt'];
let client = new Client();

client.connect(config)
  .then(() => {
    let promises = [];
    files.forEach(f => {
      let from = path.join(localPath, f);
      let to = path.join(remotePath, f);
      promises.push(client.fastPut(from, to));
    });
    return Promise.all(promises);
  }).then(res => { // res is an array of resolved promise results
    client.end();
  }).catch(err => {
    // deal with error
  });
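Since the question asks about RxJS specifically, here is an untested sketch of the same idea in RxJS 6: one shared connection, bounded concurrency, and a finalize that ends the connection whether the stream completes or errors. sftpPutFiles$ and the concurrency default are my own names; the files array shape (file.path, file.name) follows the question.

const Client = require('ssh2-sftp-client');
const { defer, from } = require('rxjs');
const { mergeMap, finalize } = require('rxjs/operators');

function sftpPutFiles$(config, files, pathToDir, concurrency = 4) {
  const sftp = new Client();
  return defer(() => sftp.connect(config)).pipe(
    mergeMap(() => from(files)),
    mergeMap(file => {
      const to = `${pathToDir}/${file.name}`;
      // upload one file, then emit its remote path for the subscriber
      return defer(() => sftp.fastPut(file.path, to).then(() => to));
    }, concurrency),
    // end the connection on completion, error, or unsubscribe
    finalize(() => sftp.end())
  );
}

sftpPutFiles$(config, files, './Program01/test').subscribe(
  pathTo => console.log('finished', pathTo), // per-file callback
  err => console.error('upload error', err), // connection already closed by finalize
  () => console.log('all files uploaded')
);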

.Net Core SignalR - connection timeout - heartbeat timer - connection state change handling

Just to be clear up-front: this question is about .Net Core SignalR, not the previous version.
The new SignalR has an issue with WebSockets behind IIS (I can't get them to work on Chrome/Win7/IIS express). So instead I'm using Server Sent Events (SSE).
However, the problem is that those time out after about 2 minutes; the connection state goes from 2 to 3. Automatic reconnect has been removed (apparently it wasn't working really well anyway in previous versions).
I'd like to implement a heartbeat timer now to stop clients from timing out; a tick every 30 seconds may well do the job.
Update 10 November
I have now managed to implement the server side Heartbeat, essentially taken from Ricardo Peres' https://weblogs.asp.net/ricardoperes/signalr-in-asp-net-core
In Startup.cs, add the following to public void Configure(IApplicationBuilder app, IHostingEnvironment env, IServiceProvider serviceProvider):
app.UseSignalR(routes =>
{
    routes.MapHub<TheHubClass>("signalr");
});

TimerCallback SignalRHeartBeat = async (x) =>
{
    await serviceProvider.GetService<IHubContext<TheHubClass>>()
        .Clients.All.InvokeAsync("Heartbeat", DateTime.Now);
};
var timer = new Timer(SignalRHeartBeat).Change(TimeSpan.FromSeconds(0), TimeSpan.FromSeconds(30));
HubClass
For the HubClass, I have added public async Task HeartBeat(DateTime now) => await Clients.All.InvokeAsync("Heartbeat", now);
Obviously, both the timer, the data being sent (I'm just sending a DateTime) and the client method name can be different.
Update .Net Core 2.1+
See the comment below; the timer callback should no longer be used. I've now implemented an IHostedService (or rather the abstract BackgroundService) to do that:
public class HeartBeat : BackgroundService
{
    private readonly IHubContext<SignalRHub> _hubContext;

    public HeartBeat(IHubContext<SignalRHub> hubContext)
    {
        _hubContext = hubContext;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await _hubContext.Clients.All.SendAsync("Heartbeat", DateTime.Now, stoppingToken);
            await Task.Delay(30000, stoppingToken);
        }
    }
}
In your startup class, wire it in after services.AddSignalR();:
services.AddHostedService<HeartBeat>();
Client
var connection = new signalR.HubConnection("/signalr", { transport: signalR.TransportType.ServerSentEvents });
connection.on("Heartbeat", serverTime => { console.log(serverTime); });
Remaining pieces of the initial question
What is left is how to properly reconnect the client, e.g. after IO was suspended (the browser's computer went to sleep, lost its connection, changed Wi-Fi networks, or whatever).
I have implemented a client side Heartbeat that is working properly, at least until the connection breaks:
Hub class: public async Task HeartBeatTock() => await Task.CompletedTask;
Client:
var heartBeatTockTimer;

function sendHeartBeatTock() {
  connection.invoke("HeartBeatTock");
}

connection.start().then(args => {
  heartBeatTockTimer = setInterval(sendHeartBeatTock, 10000);
});
After the browser suspends IO, for example, the invoke method throws an exception - which cannot be caught by a simple try/catch because it runs asynchronously.
What I tried to do for my HeartBeatTock was something like (pseudo-code):
function sendHeartBeatTock
  try connection.invoke("HeartBeatTock")
  catch exception
    try connection.stop()
    catch exception (and ignore it)
    finally
      connection = new HubConnection().start()
      repeat try connection.invoke("HeartBeatTock")
      catch exception
        log("restart did not work")
        clearInterval(heartBeatTockTimer)
        informUserToRefreshBrowser()
Now, this does not work, for a few reasons. invoke throws its exception after the code block executes, because it runs asynchronously. It looks as though it exposes a .catch() method, but I'm not sure how to implement my thoughts there properly.
The other reason is that starting a new connection would require me to re-register all server callbacks like connection.on("send", ...) - which seems silly.
Any hints as to how to properly implement a reconnecting client would be much appreciated.
This is an issue when running SignalR Core behind IIS. IIS will close idle connections after 2 minutes. The long term plan is to add keep alive messages which, as a side effect, will prevent IIS from closing the connection. To work around the problem for now you can:
periodically send a message to the clients
change the idle-timeout setting in IIS as described here
restart the connection on the client side if it gets closed (a minimal sketch follows this list)
use a different transport (e.g. long polling, since you cannot use WebSockets on Win7/Win2008 R2 behind IIS)
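For the client-side restart option, a minimal sketch, assuming the @aspnet/signalr JavaScript client (its HubConnection exposes an onclose callback):

function start() {
  connection.start().catch(() => {
    // simple fixed back-off before the next attempt; tune as needed
    setTimeout(start, 5000);
  });
}
// re-establish the connection whenever the server or network drops it
connection.onclose(() => start());
start();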
I've got a working solution now (tested in Chrome and FF so far). In the hope of either motivating you to come up with something better, or saving you some time coming up with something like this yourselves, I'm posting my solution here:
The Heartbeat-"Tick" message (the server routinely pinging the clients) is described in the question above.
The client ("Tock" part) now has:
a function to register the connection, so that the callback methods (connection.on()) can be repeated; they'd otherwise be lost after just restarting with a new HubConnection
a function to register the TockTimer
and a function to actually send Tock pings
The tock method catches errors on send and tries to initiate a new connection. Since the timer keeps running, I register the new connection and then simply sit back and wait for the next invocation.
Putting the client together:
// keeps the connection object
var connection = null;
// stores the ID from setInterval
var heartBeatTockTimer = 0;
// how often should I "tock" the server
var heartBeatTockTimerSeconds = 10;
// how often should I retry after connection loss?
var maxRetryAttempt = 5;
// the retry should wait less long than the TockTimer, or calls may overlap
var retryWaitSeconds = heartBeatTockTimerSeconds / 2;
// how many retry attempts did we have?
var currentRetryAttempt = 0;

// helper function to wait a few seconds
$.wait = function(milliseconds) {
  var defer = $.Deferred();
  setTimeout(function() { defer.resolve(); }, milliseconds);
  return defer;
};

// first routine start of the connection
registerSignalRConnection();

function registerSignalRConnection() {
  ++currentRetryAttempt;
  if (currentRetryAttempt > maxRetryAttempt) {
    console.log("Clearing registerHeartBeatTockTimer");
    clearInterval(heartBeatTockTimer);
    heartBeatTockTimer = 0;
    throw "Retry attempts exceeded.";
  }
  if (connection !== null) {
    console.log("registerSignalRConnection was not null", connection);
    connection.stop().catch(err => console.log(err));
  }
  console.log("Creating new connection");
  connection = new signalR.HubConnection("/signalr", { transport: signalR.TransportType.ServerSentEvents });
  connection.on("Heartbeat", serverTime => { console.log(serverTime); });
  connection.start().then(() => {
    console.log("Connection started, starting timer.");
    registerHeartBeatTockTimer();
  }).catch(exception => {
    console.log("Error connecting", exception, connection);
  });
}

function registerHeartBeatTockTimer() {
  // make sure we're registered only once
  if (heartBeatTockTimer !== 0) return;
  console.log("Registering registerHeartBeatTockTimer");
  if (connection !== null)
    heartBeatTockTimer = setInterval(sendHeartBeatTock, heartBeatTockTimerSeconds * 1000);
  else
    console.log("Connection didn't allow registry");
}

function sendHeartBeatTock() {
  console.log("Standard attempt HeartBeatTock");
  connection.invoke("HeartBeatTock").then(() => {
    console.log("HeartbeatTock worked.");
  }).catch(err => {
    console.log("HeartbeatTock Standard Error", err);
    $.wait(retryWaitSeconds * 1000).then(function() {
      console.log("executing attempt #" + currentRetryAttempt.toString());
      registerSignalRConnection();
    });
    console.log("Current retry attempt: ", currentRetryAttempt);
  });
}
Client version based on ExternalUse's answer...
import * as signalR from '@aspnet/signalr'
import _ from 'lodash'

var connection = null;
var sendHandlers = [];
var addListener = f => sendHandlers.push(f);

function registerSignalRConnection() {
  if (connection !== null) {
    connection.stop().catch(err => console.log(err));
  }
  connection = new signalR.HubConnectionBuilder()
    .withUrl('myHub')
    .build();
  connection.on("Heartbeat", serverTime =>
    console.log("Server heartbeat: " + serverTime));
  connection.on("Send", data =>
    _.each(sendHandlers, value => value(data)));
  connection.start()
    .catch(exception =>
      console.log("Error connecting", exception, connection));
}

registerSignalRConnection();

setInterval(() =>
  connection.invoke("HeartBeatTock")
    .then(() => console.log("Client heartbeat."))
    .catch(err => {
      registerSignalRConnection();
    }), 10 * 1000);

export { addListener };

How to Mock and test using an RxJS subject?

I have some functions that accept an RxJS subject (backed by a socket) that I want to test. I'd like to mock the subject in a request-reply fashion. Since I'm unsure of a clean Rx way to do this, I'm tempted to use an EventEmitter to build my fake socket.
Generally, I want to:
check that the message received on my "socket" matches expectations
respond to that message on the same subject: observer.next(resp)
I do need to be able to use data from the message to form the response as well.
The code being tested is
export function acquireKernelInfo(sock) {
  // set up our JSON payload
  const message = createMessage('kernel_info_request');
  const obs = shell
    .childOf(message)
    .ofMessageType('kernel_info_reply')
    .first()
    .pluck('content', 'language_info')
    .map(setLanguageInfo)
    .publishReplay(1)
    .refCount();
  sock.next(message);
  return obs;
}
You could manually create two subjects and "glue them together" as one Subject with Subject.create:
const sentMsgs = [];
const receivedMsgs = [];

const sent = new Rx.Subject();
const received = new Rx.Subject();
const mockWebSocketSubject = Subject.create(sent, received);

const s1 = sent.subscribe(
  (msg) => sentMsgs.push({ next: msg }),
  (err) => sentMsgs.push({ error: err }),
  () => sentMsgs.push({ complete: true })
);
const s2 = received.subscribe(
  (msg) => receivedMsgs.push({ next: msg }),
  (err) => receivedMsgs.push({ error: err }),
  () => receivedMsgs.push({ complete: true })
);

// to send a message
// (presumably whatever system you're injecting this into is doing the sending)
sent.next('weee');

// to mock a received message
received.next('blarg');

s1.unsubscribe();
s2.unsubscribe();
That said, it's really a matter of what you're testing, how it's structured, and what the API is.
Ideally you'd be able to run your whole test synchronously. If you can't for some Rx-related reason, you should look into the TestScheduler, which has facilities to run tests in virtualized time.
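For the request-reply flow specifically, here is a hedged sketch of driving acquireKernelInfo with the mocked subject. The message shape (header, parent_header, content) and the assumption that shell observes the received side of the socket come from the question's createMessage/childOf helpers and are not verified:

const sent = new Rx.Subject();
const received = new Rx.Subject();
const mockSock = Rx.Subject.create(sent, received);

// reply to the request the code under test writes to the socket
const replier = sent.subscribe(msg => {
  received.next({
    parent_header: msg.header, // assumed: childOf matches on this
    header: { msg_type: 'kernel_info_reply' },
    content: { language_info: { name: 'python' } },
  });
});

acquireKernelInfo(mockSock).subscribe(languageInfo => {
  // assert on languageInfo here
  replier.unsubscribe();
});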
