Promise.all(array.map(...)) doesn't work in parallel with page.goto() - promise

I am using the puppeteer library for my bot and I would like to perform some operations in parallel.
In many articles, it is advised to use this syntax:
await Promise.all(array.map(async data => //..some operations))
I've tested this on several operations and it works, but when I embed the code below in my .map callback
await page.goto(..
the calls no longer run in parallel inside the Promise.all; page.goto behaves as if it were a sequential, synchronous operation.
I would like to know why it reacts like this.

I believe your error comes from the fact that you're using the same page object.
The following should work:
const currentPage = await browser.pages().then(allPages => allPages[0]);
const anotherPage = await browser.newPage();
const bothPages = [currentPage, anotherPage];
await Promise.all(
  bothPages.map(page => page.goto("https://stackoverflow.com"))
);
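If you are mapping over arbitrary data rather than over existing pages, the same idea applies: give each mapped item its own page so the navigations can actually overlap. A minimal sketch, assuming browser and array are already defined as in your code, and using a hypothetical data.url field as a stand-in for whatever your items hold:
await Promise.all(array.map(async data => {
  // one Page per item, so the goto calls run concurrently
  const page = await browser.newPage();
  try {
    await page.goto(data.url); // data.url is a placeholder, not from your original code
    // ..some operations
  } finally {
    await page.close(); // always release the tab, even if goto throws
  }
}));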

Related

How to use await on getAll method for IndexedDB?

I am trying to get the data stored in my object store, and instead of using the onsuccess callback I want to use async/await.
I have implemented the code below, but somehow it's not returning the data.
async function viewNotes() {
    const tx = db.transaction("personal_notes","readonly")
    const pNotes = tx.objectStore("personal_notes")
    const items = await db.transaction("personal_notes").objectStore("personal_notes").getAllKeys()
    console.log("And the Items are ", items.result)
    let NotesHere = await pNotes.getAll().onsuccess
    console.log("Ans this are the logs", NotesHere)
}
I am getting the data neither through items.result nor from NotesHere.
When I inspect it in debug mode, the request's readyState is still "pending" even after using await.
What am I missing?
The IndexedDB API does not natively support async/await. You need to either manually wrap the event handlers in promises, or (much better solution) use a library like https://github.com/jakearchibald/idb that does it for you.
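For example, a minimal sketch of the manual wrapping approach (assuming db is an already-open IDBDatabase, as in your snippet):
// Turn an IDBRequest into a Promise so it can be awaited
function promisifyRequest(request) {
  return new Promise((resolve, reject) => {
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function viewNotes() {
  const tx = db.transaction("personal_notes", "readonly");
  const pNotes = tx.objectStore("personal_notes");
  const notes = await promisifyRequest(pNotes.getAll());
  console.log("And the notes are", notes);
}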

C# async calls and realm instances

I am using Realm with a Xamarin Forms project, and I have read about how realm entity instances can't be shared across threads.
Given the following code, is using the route obtained in line 100, and then accessed again on line 109 after the awaited call on 104, dangerous?
I am new to using Realm, but if this is true, then one must get a new instance of the Realm and any object being worked with after any/every awaited call. Seems onerous...
is using the route obtained in line 100, and then accessed again on line 109 after the awaited call on 104, dangerous?
Yes, on the next foreach iteration you will end up on a different managed thread, and Realm will throw a different-thread access exception.
The key is to use a SynchronizationContext so your await continuations are on the same thread (and, of course, since you will be in a different thread, skip the use of the Realm-based async methods)
Using Stephen Cleary's Nito.AsyncEx (he is the king of sync contexts 😜)
re: How can I force await to continue on the same thread?
var yourRealmInstanceThread = new AsyncContextThread();
await yourRealmInstanceThread.Factory.Run(async () =>
{
    var asyncExBasedRealm = Realm.GetInstance();
    var routes = asyncExBasedRealm.All<UserModel>();
    foreach (var route in routes)
    {
        // map it
        // post it
        await Task.Delay(TimeSpan.FromMilliseconds(1)); // Simulate some Task, i.e. a httpclient request....
        // The following continuations will be executed on the proper thread
        asyncExBasedRealm.Write(() => route.Uploaded = true);
    }
});
Using SushiHangover.RealmThread
I wrote a simple SynchronizationContext for Realm a while back; it works for my needs and has a specialized API for Realm.
using (var realmThread = new RealmThread(realm.Config))
{
    await realmThread.InvokeAsync(async myRealm =>
    {
        var routes = myRealm.All<UserModel>();
        foreach (var route in routes)
        {
            // map it
            // post it
            await Task.Delay(TimeSpan.FromMilliseconds(1));
            // The following continuations will be executed on the proper thread
            myRealm.Write(() => route.Uploaded = true);
        }
    });
}
Note: For someone who does not understand SynchronizationContext well, I would highly recommend Nito.AsyncEx as a generic solution: it is well supported, and it is from Stephen Cleary... I use it in the vast majority of my projects.

rxjs switchMap caches the obsolete result and does not create a new stream

const s1$ = of(Math.random())
const s2$ = ajax.getJSON(`https://api.github.com/users?per_page=5`)
const s3$ = from(fetch(`https://api.github.com/users?per_page=5`))
const click$ = fromEvent(document, 'click')
click$.pipe(
  switchMap(() => s1$)
).subscribe(e => {
  console.log(e)
})
I am confused by the code above and cannot reason about it properly.
In the first case (s1$), the same result is received every time. It LOOKS fine to me, even though I cannot understand why switchMap does not start a new stream each time. OK, that part is fine.
The really weird thing happens when you run s2$ and s3$. They look equivalent, right? WRONG!!! The behaviours are completely different if you try them out!
The result of s3$ is cached somehow, i.e. if you open the network panel, you will see the HTTP request is sent only ONCE. In comparison, the HTTP request is sent each time for s2$.
My problem is that I cannot use something like ajax from rx directly, because the HTTP request is hidden inside a third-party library. The solution I can come up with is to create a new, inline stream every time:
click$.pipe(
  switchMap(() => from(fetch(`https://api.github.com/users?per_page=5`)))
).subscribe(e => {
  console.log(e)
})
So, how exactly can I explain this behaviour, and what is the correct way to handle this situation?
One problem is that you actually execute Math.random and fetch while setting up your test case.
// calling Math.random() => using the return value
const s1$ = of(Math.random())
// calling fetch => using the return value (a promise)
const s3$ = from(fetch(`https://api.github.com/users?per_page=5`))
Another is that fetch returns a promise, which resolves only once. from(<promise>) then does not need to re-execute the ajax call; it will simply emit the resolved value.
Whereas ajax.getJSON returns a stream which re-executes every time.
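To see the two behaviours in isolation, here is a small sketch (assuming from and defer are in scope, imported from rxjs); only the deferred stream issues a new request per subscription:
// The promise is created (and the request sent) right here, exactly once.
const eager$ = from(fetch(`https://api.github.com/users?per_page=5`));
eager$.subscribe(); // no new request: the same promise is reused
eager$.subscribe(); // still no new request
// The factory is re-invoked for every subscriber, so each subscription fetches again.
const lazy$ = defer(() => fetch(`https://api.github.com/users?per_page=5`));
lazy$.subscribe(); // request #1
lazy$.subscribe(); // request #2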
If you wrap the test-streams with defer you get more intuitive behavior.
const { of, defer, fromEvent } = rxjs;
const { ajax } = rxjs.ajax;
const { switchMap } = rxjs.operators;
// defer Math.random()
const s1$ = defer(() => of(Math.random()));
// no defer needed here (already a stream)
const s2$ = ajax.getJSON('https://api.github.com/users?per_page=5');
// defer `fetch`, but `from` is not needed, as a promise is sufficient
const s3$ = defer(() => fetch('https://api.github.com/users?per_page=5'));
const t1$ = fromEvent(document.getElementById('s1'), 'click').pipe(switchMap(() => s1$));
const t2$ = fromEvent(document.getElementById('s2'), 'click').pipe(switchMap(() => s2$));
const t3$ = fromEvent(document.getElementById('s3'), 'click').pipe(switchMap(() => s3$));
t1$.subscribe(console.log);
t2$.subscribe(console.log);
t3$.subscribe(console.log);
<script src="https://unpkg.com/@reactivex/rxjs@6/dist/global/rxjs.umd.js"></script>
<button id="s1">test random</button>
<button id="s2">test ajax</button>
<button id="s3">test fetch</button>

Do I have to use ContinueWith with HttpClient?

When calling REST services using System.Net.Http.HttpClient, I have code like
var response = client.GetAsync("api/MyController").Result;
if (response.IsSuccessStatusCode)
    ...
Is that proper, or should I be doing
client.GetAsync("api/MyController").ContinueWith(task => { var response = task.Result; ... });
It is much safer to do the second. There are a variety of scenarios where the first option can cause a deadlock: blocking on .Result ties up the calling thread, and if the request's continuation needs to resume on that same thread's synchronization context (as in UI or classic ASP.NET code), neither side can ever proceed.

can't seem to get progress events from node-formidable to send to the correct client over socket.io

So I'm building a multipart form uploader over ajax on node.js, and sending progress events back to the client over socket.io to show the status of their upload. Everything works just fine until I have multiple clients trying to upload at the same time. Originally what would happen is while one upload is going, when a second one starts up it begins receiving progress events from both of the forms being parsed. The original form does not get affected and it only receives progress updates for itself. I tried creating a new formidable form object and storing it in an array along with the socket's session id to try to fix this, but now the first form stops receiving events while the second form gets processed. Here is my server code:
var http = require('http'),
    formidable = require('formidable'),
    fs = require('fs'),
    io = require('socket.io'),
    mime = require('mime'),
    forms = {};
var server = http.createServer(function (req, res) {
    if (req.url.split("?")[0] == "/upload") {
        console.log("hit upload");
        if (req.method.toLowerCase() === 'post') {
            socket_id = req.url.split("sid=")[1];
            forms[socket_id] = new formidable.IncomingForm();
            form = forms[socket_id];
            form.addListener('progress', function (bytesReceived, bytesExpected) {
                progress = (bytesReceived / bytesExpected * 100).toFixed(0);
                socket.sockets.socket(socket_id).send(progress);
            });
            form.parse(req, function (err, fields, files) {
                file_name = escape(files.upload.name);
                fs.writeFile(file_name, files.upload, 'utf8', function (err) {
                    if (err) throw err;
                    console.log(file_name);
                })
            });
        }
    }
});
var socket = io.listen(server);
server.listen(8000);
If anyone could be any help on this I would greatly appreciate it. I've been banging my head against my desk for a few days trying to figure this one out, and would really just like to get this solved so that I can move on. Thank you so much in advance!
Can you try putting console.log(socket_id);
after form = forms[socket_id]; and
after progress = (bytesReceived / bytesExpected * 100).toFixed(0);, please?
I get the feeling that you might have to wrap that socket_id in a closure, like this:
form.addListener(
    'progress',
    (function(socket_id) {
        return function (bytesReceived, bytesExpected) {
            progress = (bytesReceived / bytesExpected * 100).toFixed(0);
            socket.sockets.socket(socket_id).send(progress);
        };
    })(socket_id)
);
The problem is that you aren't declaring socket_id and form with var, so they're actually global.socket_id and global.form rather than local variables of your request handler. Consequently, separate requests step over each other since the callbacks are referring to the globals rather than being proper closures.
rdrey's solution works because it bypasses that problem (though only for socket_id; if you were to change the code in such a way that one of the callbacks referenced form you'd get in trouble). Normally you only need to use his technique if the variable in question is something that changes in the course of executing the outer function (e.g. if you're creating closures within a loop).
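A sketch of the request handler with those variables declared locally; the logic is unchanged from the code in the question, only the declarations move inside the handler:
var server = http.createServer(function (req, res) {
    if (req.url.split("?")[0] == "/upload" && req.method.toLowerCase() === 'post') {
        // local to this request, so concurrent uploads no longer overwrite each other
        var socket_id = req.url.split("sid=")[1];
        var form = new formidable.IncomingForm();
        forms[socket_id] = form;
        form.addListener('progress', function (bytesReceived, bytesExpected) {
            var progress = (bytesReceived / bytesExpected * 100).toFixed(0);
            // this callback closes over the per-request socket_id
            socket.sockets.socket(socket_id).send(progress);
        });
        form.parse(req, function (err, fields, files) {
            var file_name = escape(files.upload.name);
            fs.writeFile(file_name, files.upload, 'utf8', function (err) {
                if (err) throw err;
                console.log(file_name);
            });
        });
    }
});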
