I created a super simple application using the Nexus GraphQL framework, following the first step of the tutorial:
https://www.nexusjs.org/#/tutorial/chapter-1-setup-and-first-query
In api/app.ts I typed this code:
const uid = require('uid')

let i = 0;

// Resolve after the given number of milliseconds.
const sleep = async (ms: number) => {
  return new Promise((resolve) => {
    setTimeout(resolve, ms)
  })
}

// Log the queue id and a counter, then schedule the next tick.
const recurrenceFunction = async (id: string): Promise<void> => {
  i++;
  console.log(id, i);
  await sleep(2000);
  return recurrenceFunction(id)
}

const startQueue = () => {
  const id: string = uid();
  return recurrenceFunction(id)
}; // semicolon needed: otherwise the IIFE below is parsed as a call of this arrow function

(async () => {
  return startQueue()
})()
The only dependency is uid, which generates unique keys.
When I run it with ts-node, the log shows one id with a steadily increasing counter. So we can see:
The queue started once.
There is a single queue.
It works as it should.
But when I run this code with Nexus via the command nexus dev, the log shows two different ids, each with its own counter. So:
Two queues started.
They are running at the same time.
Each has its own independently created variable i.
Questions:
Has anyone met this problem before?
How should I change my code to get a single queue, as with ts-node?
Or is this a bug in Nexus?
Update:
I checked that this problem exists for versions >= 0.21.1-next.2.
In 0.21.1-next.1 the application runs a single time.
Steps to reproduce:
nexus dev
npm i nexus@0.21.1-next.1
nexus dev
npm i nexus@0.21.1-next.2
There is a commit that introduced this behavior:
https://github.com/graphql-nexus/nexus/commit/ce1d45359e33af81169b7ebdc7bee6718fe313a8
It contains variables like onMainThread and REFLECTION_ENV_VAR, but without references to documentation I can't understand what this code is doing.
This will probably be documented in the future at:
https://www.nexusjs.org/#/architecture?id=build-flow
but for now there is nothing there.
Update 2:
I found a workaround:
const xid = uid();
let limit = 10;

// Poll until the Nexus app reports it is running, then invoke cb once.
// app is the Nexus app singleton used elsewhere in api/app.ts.
const onApplicationStart = async (cb: () => any): Promise<any> => {
  console.log("can i start?", xid, limit, app.private.state.running);
  if (app.private.state.running) {
    await cb()
    return;
  }
  limit--;
  if (limit <= 0) return;
  await sleep(100);
  return onApplicationStart(cb);
}; // semicolon needed before the IIFE, as above

(async () => {
  return onApplicationStart(async () => {
    return startQueue()
  })
})()
but this is a temporary hack rather than a solution. Full example:
https://github.com/graphql-nexus/nexus/discussions/983
Eager module code relying on side-effects is not supported by Nexus.
See:
https://www.nexusjs.org/#/tutorial/chapter-2-writing-your-first-schema?id=reflection
https://github.com/graphql-nexus/nexus/issues/758
As a workaround for now, wrap your code like this:
// app.ts
if (!process.env.NEXUS_REFLECTION) {
yourHeavyAsyncCode()
}
Reference https://github.com/graphql-nexus/nexus/issues/732#issuecomment-626586244
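Applied to the queue from the question, the guard would look like this minimal sketch (startQueue is the function defined in the question's api/app.ts):

// app.ts - skip the eager queue during the Nexus reflection pass
if (!process.env.NEXUS_REFLECTION) {
  startQueue()
}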
I am trying to implement Blob storage via IndexedDB for long media recordings.
My code works fine in Chrome and Edge (not tested in Safari yet), but it won't do anything in Firefox. There are no errors; it just doesn't try to fulfill my requests past the initial DB connection (which is successful). Intuitively, it seems that the processing is blocked by something, but I don't have anything in my code that would be blocking.
Simplified version of the code (without the heavy logging and extra error checks I added while trying to debug):
const dbName = 'recording'
const storeValue = 'blobs'
let connection = null
const handler = window.indexedDB || window.mozIndexedDB || window.webkitIndexedDB

function connect() {
  return new Promise((resolve, reject) => {
    const request = handler.open(dbName)
    // Recreate the object store on upgrade
    request.onupgradeneeded = (event) => {
      const db = event.target.result
      if (db.objectStoreNames.contains(storeValue)) {
        db.deleteObjectStore(storeValue)
      }
      db.createObjectStore(storeValue, {
        keyPath: 'id',
        autoIncrement: true,
      })
    }
    request.onerror = () => {
      reject(request.error)
    }
    request.onsuccess = () => {
      connection = request.result
      // Drop the cached connection so the next call reconnects
      connection.onerror = () => {
        connection = null
      }
      connection.onclose = () => {
        connection = null
      }
      resolve()
    }
  })
}
async function saveChunk(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection.transaction(
      storeValue,
      'readwrite'
    ).objectStore(storeValue)
    const req = store.add(chunk)
    req.onsuccess = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
      resolve(req.result)
    }
    req.onerror = () => {
      reject(req.error)
    }
    req.transaction.oncomplete = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
    }
  })
}
// ... on blob available
await saveChunk(blob)
What I tried so far:
close any other browser windows, or anything else that could count as an "open connection" that might be blocking execution
refresh the Firefox profile
let my colleague test the code on his own machine => same result
Additional information that might be useful:
The code runs in a Nuxt 2.15.8 dev environment (localhost:3000) and is used in a component as a mixin. The project is rather large and uses a bunch of different browser APIs, so there might be some kind of collision. This is the only place where we use IndexedDB, though, so getting to the bottom of this without any errors being thrown seems almost impossible.
Edit:
When I create a brand new database, there is a brief window in which transactions complete fine, but after some time has passed or something has been triggered, requests go back to being queued indefinitely.
I found this out this morning when I had this structure:
...
clearDatabase() {
// get the store
const req = store.clear()
req.transaction.oncomplete = () => console.log('all good!')
}
await this.connect()
await this.clearDatabase()
'All good!' fired, but any subsequent requests were broken, same as before.
On page reload, even the clearDatabase request was broken again.
Something breaks with ongoing usage.
Edit 2:
It's clearly connected to saving a Blob instance without an id when using the autoIncrement option. Not only does it fail silently, it basically corrupts the DB completely. If I manually assign an incrementing id to the Blob object, it works! If I leave out the id field for a regular simple object, it also works! Does anyone know about this? Saving Blobs feels like a common use case, so I'd expect this to have been found already.
I've concluded, unless proven otherwise, that it's a Firefox bug and opened a ticket on Bugzilla.
This happens with Blobs but might also be true for other object types. If you find yourself in the same situation, there is a workaround: don't rely on autoIncrement, and assign ids manually before saving objects to the DB.
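For illustration, here is a minimal sketch of that workaround (the wrapper object and the nextId counter are my own naming, not from the original code): instead of handing the Blob straight to store.add() and letting autoIncrement extract the key, assign the id yourself.

let nextId = 1 // could also be derived from the store's current max key

async function saveChunkWithManualId(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection
      .transaction(storeValue, 'readwrite')
      .objectStore(storeValue)
    // Wrap the Blob and set the key explicitly instead of relying on autoIncrement
    const req = store.add({ id: nextId++, blob: chunk })
    req.onsuccess = () => resolve(req.result)
    req.onerror = () => reject(req.error)
  })
}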
Good day! I would like to implement a convenient method for uploading multiple files to an SFTP server, with a callback fired for each file that finishes uploading.
I have already implemented some code that works, but I noticed a memory leak that prevents the connection to the SFTP server from being closed successfully after all the uploads.
It is absolutely not critical for me whether the connection is opened and closed repeatedly.
I tweaked the code a little bit from here: how do I send (put) multiple files using nodejs ssh2-sftp-client?
Code:
function sftpPutFiles(config, files, pathToDir, callbackStep, callbackFinish, callbackError) {
  let Client = require('ssh2-sftp-client');
  let PromisePool = require('es6-promise-pool');

  // Open a fresh connection, upload one file, then close the connection.
  const sendFile = (config, pathFrom, pathTo) => {
    return new Promise(function (resolve, reject) {
      let sftp = new Client();
      console.log(pathFrom, pathTo);
      sftp.on('keyboard-interactive', (name, instructions, instructionsLang, prompts, finish) => { finish([config.password]); });
      sftp.connect(config).then(() => {
        return sftp.put(pathFrom, pathTo);
      }).then(() => {
        console.log('finish ' + pathTo);
        callbackStep(pathTo);
        sftp.end();
        resolve(pathTo);
      }).catch((err) => {
        // NOTE: on error the connection is never closed and this promise never settles
        console.log(err, 'catch error');
        callbackError(err);
      });
    });
  };

  // Create a pool that uploads up to 10 files concurrently.
  let indexFile = 0;
  let pool = new PromisePool(() => {
    while (indexFile < files.length) {
      let file = files[indexFile];
      indexFile++;
      return sendFile(config, file.path, `${pathToDir}/${file.name}`);
    }
    return null;
  }, 10);

  pool.start().then(function () {
    console.log({"message": "OK"}); // res.send('{"message":"OK"}');
    callbackFinish();
  });
}
Usage:
input.addEventListener('change', function (e) {
  e.preventDefault();
  sftpPutFiles(
    {host: '192.168.2.201', username: 'crestron', password: 'ehAdmin'},
    this.files,
    `./Program01/test/`,
    pathTo => {
      // Per-file callback: append a row to the results table
      let tr = document.createElement('tr');
      let bodyTable = document.querySelector('.body');
      tr.innerHTML = `<td>${bodyTable.children.length + 1}</td><td>${pathTo}</td><td>OK</td>`;
      bodyTable.appendChild(tr);
    }, () => {
      alert('All files have been uploaded');
    },
    err => {
      alert('Error: ' + err);
    }
  );
});
If there is an error while uploading a file to the SFTP server, the connection does not close, and I cannot reconnect when I reopen the custom console. I would like to port the code to RxJS for better maintainability; I think that would let me solve the problem of closing the connection and keep the application responsive.
Make sure you're using the latest version of ssh2-sftp-client - there have been a fair number of updates recently, including fixes to handle errors more consistently and ensure connections are closed correctly (v4.1.0 at the time of writing).
You are using sftp.on('keyboard-interactive', ...). Nothing in the module emits events of this type, so this listener will never fire.
If you just want to upload files, use the fastPut() method; it is much faster. Make sure the destination path includes the remote file name and not just the remote directory.
Have a look at Promise.all(). You could use it instead of the promise pool, and I think it would be a lot cleaner. Something like (untested):
const path = require('path');
const Client = require('ssh2-sftp-client');

let localPath = '/path/to/src-dir';
let remotePath = '/path/to/dst-dir';
let files = ['file1.txt', 'file2.txt', 'file3.txt'];
let client = new Client();

// config: connection settings as in the question
client.connect(config)
  .then(() => {
    let promises = [];
    files.forEach(f => {
      let from = path.join(localPath, f);
      let to = path.join(remotePath, f);
      promises.push(client.fastPut(from, to));
    });
    return Promise.all(promises);
  }).then(res => { // res is an array of resolved promise results
    client.end();
  }).catch(err => {
    // deal with error
  });
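Since the original problem was connections staying open after a failed upload, it may also help to close the client on the error path. A small sketch building on the code above (closing the connection in the catch is my addition, not from the original answer):

client.connect(config)
  .then(() => Promise.all(files.map(f =>
    client.fastPut(path.join(localPath, f), path.join(remotePath, f)))))
  .then(() => client.end())
  .catch(err => {
    console.error('upload failed:', err);
    return client.end(); // close the connection even when an upload fails
  });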
I am trying to cover a redux-saga that gets data from RxDB with Jest tests.
export function* checkUnsavedData(action) {
const { tab } = action;
try {
const db = yield getDB().catch(e => {
throw new Error(e);
});
const currentUser = yield select(makeSelectCurrentUser());
const unsavedData = yield db[USER_COLLECTION].findOne(currentUser)
.exec()
.then(data => data && data.unsavedData)
.catch(e => {
throw new Error(e);
});
} catch (error) {
yield showError(error);
}
}
Everything is fine in a live run. But when testing the generator I get:
UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 2): Error: Error: RxError:
RxDatabase.create(): Adapter not added. Use RxDB.plugin(require('pouchdb-adapter-[adaptername]');
Given parameters: {
adapter:"idb"}
If anyone has done this, please tell me how to test such cases with RxDB in redux-saga with Jest.
It looks like you did not add the adapter to RxDB. Can you paste the code where you create the database? That would help in finding the error.
When running tests, you should not use the idb adapter. Use the in-memory adapter: it's faster, and you can be sure that you start from a clean state on each test run.
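Based on the hint in the error message above, the test setup would look something like this minimal sketch (the database name is hypothetical, and the RxDB.create() API shape is assumed from the error text):

const RxDB = require('rxdb');
// Register the in-memory adapter for tests instead of 'idb'
RxDB.plugin(require('pouchdb-adapter-memory'));

async function getTestDB() {
  return RxDB.create({
    name: 'testdb',    // hypothetical name
    adapter: 'memory', // instead of adapter: 'idb'
  });
}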
As far as I can tell, using promises or callbacks in the After hook prevents the Command Queue from executing. I'm trying to figure out why; any help or suggestions are appreciated. The closest issue I could find on GitHub is: https://github.com/nightwatchjs/nightwatch/issues/341
which states: "finding that trying to make browser calls in the after hook is too late; it appears that the session is closed before after is run" (exactly my problem). But no solution is provided there. I need to run cleanup steps after my scenarios run, and those cleanup steps need to be able to interact with the browser.
https://github.com/nightwatchjs/nightwatch/wiki/Understanding-the-Command-Queue
In the snippet below, bar is never logged, just foo.
const { After } = require('cucumber');
const { client } = require('nightwatch-cucumber');

After(() => new Promise((resolve) => {
  console.log('foo')
  client.perform(() => {
    console.log('bar') // never runs: the command queue has already stopped
    resolve()
  });
}));
I also tried the callback approach:
After((browser, done) => {
console.log('foo');
client.perform(() => {
console.log('bar');
done();
});
});
But as in the first example, bar is never logged, just foo.
You can instead use something like:
const moreWork = async () => {
console.log('bar');
await new Promise((resolve) => {
setTimeout(resolve, 10000);
})
}
After(() => client.perform(async () => {
console.log('foo');
moreWork();
}));
But the asynchronous nature of moreWork means that the client terminates before my work is finished, so this isn't really working for me. You can't use an await in the perform callback since they are in different execution contexts.
Basically the only way to get client commands to execute in the After hook is my third example, but it prevents me from using async.
The first and second examples would be great if the command queue didn't freeze and prevent execution.
Edit: I'm finding more issues on GitHub stating that the browser is not available in before/after hooks: https://github.com/nightwatchjs/nightwatch/issues/575
What are you supposed to do if you want to clean up using the browser after all features have run?
Try the following:
After(async () => {
await client.perform(() => {
...
});
await moreWork();
})
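For instance, using the moreWork() helper from the question, a concrete sketch looks like this: awaiting the perform keeps the session open until the queued command runs, and awaiting moreWork() keeps it open until the async cleanup finishes.

After(async () => {
  await client.perform(() => {
    console.log('foo'); // queued browser command
  });
  await moreWork();     // async cleanup from the question, awaited before the session closes
})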
I'm running into an issue with some code for an Ionic 3 app.
Basically, I have a list of objects that each have a unique id. The unique id of each object must be sent in a GET request so that we can get the appropriate data back from the server for that object. This has to be done per object; I can't bundle them into one request because there is no API endpoint for that.
The objects are all stored in an array, so I've been trying to loop through the array and call the provider for each one. Note that the provider returns an observable.
Since the provider call is asynchronous, the promise resolves before the loop is finished unless I delay its resolution with a timeout, which defeats the whole point of the promise.
What is the correct way to ensure the looping provider calls are done before the promise resolves?
If I use an inner promise that resolves when the looping is done, won't it also resolve prematurely?
I also read that it is bad to have a bunch of observables open. Should I instead convert each observable to a promise using toPromise()?
Here is the code to build the data:
asyncBuildData() {
  var promise = new Promise((resolve, reject) => {
    let completedRequests = 0;
    for (let i = 0; i < 10; i++) {
      this.provider.getStuffById(listOfStuff[i].id).subscribe(val => {
        list.push(val)
        completedRequests++;
      })
    }
    console.log('cp', completedRequests); // completedRequests = 0
    // Hack: wait long enough for all requests to (hopefully) finish
    setTimeout(() => {
      console.log('cp', completedRequests); // completedRequests = 10
      let done = true;
      if (done) {
        resolve('Done');
      } else {
        reject('Not done');
      }
    }, 1500)
  })
  return promise;
}
Code from the provider:

getStuffById(stuffId) {
  // 'url' cannot reference itself in its own initializer, so build from a base URL
  const requestUrl = baseUrl + stuffId; // baseUrl: endpoint prefix, assumed defined elsewhere
  return this.http.get(requestUrl)
    .map(res => res.json());
}
Even though you can't bundle them into one request, you can still bundle them into one observable in which those requests are fired in parallel, using forkJoin():
buildData$() {
  let parallelRequests = listOfStuff.map(stuff => this.provider.getStuffById(stuff.id));
  return Observable.forkJoin([...parallelRequests]);
}
and then in your component you can just call:

this.buildData$().subscribe(val => {
  // val is an array of all the results of this.provider.getStuffById()
  list = val;
})
Note that Observable.forkJoin() will wait for all requests to complete before emitting any values.
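Since the question mentioned toPromise(): the joined observable can also be converted into a promise, giving a drop-in replacement for asyncBuildData() above (a sketch reusing the names from the question):

asyncBuildData() {
  const parallelRequests = listOfStuff.map(stuff => this.provider.getStuffById(stuff.id));
  // Resolves once every request has completed, with an array of all results
  return Observable.forkJoin(parallelRequests).toPromise();
}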
If I understand correctly, the following code should get you on your way. It executes one request at a time, one for each element in the array.
var ids = [1, 2, 3, 4, 5];
ids.reduce((promise, id) => {
  return promise.then(() => {
    const requestUrl = baseUrl + id; // baseUrl: endpoint prefix, assumed defined elsewhere
    return this.http.get(requestUrl)
      .map(res => res.json())
      .toPromise(); // convert the observable so the promise chain waits for it
  });
}, Promise.resolve()).then((last) => {
  // handle last result
}, (err) => {
  // handle errors
});
I tested this with a jQuery post and then swapped in your Ionic call. If it fails, let me know.