ExecutionError: "Exceeded the prepaid gas" when called from front end (NEAR Protocol)

The transfer() function works perfectly fine when testing and through the CLI. However, when I try to call it from the front end, it returns
Uncaught (in promise) Error: {"index":0,"kind":{"ExecutionError":"Exceeded the prepaid gas."}}
It is not a complex call: it only involves (1) transferring tokens and (2) updating a value in storage. Can anyone give me pointers as to why this might be happening?
document.querySelector('#transfer-to-owner').onclick = () => {
  console.log("Transfer about to begin")
  try {
    window.contract.transfer({})
  } catch (e) {
    console.error(
      'Something went wrong! ' +
      'Check your browser console for more info.',
      e
    )
  }
}
The contract is from this repo:
const XCC_GAS: Gas = 20_000_000_000_000;

transfer(): void {
  this.assert_owner()
  assert(this.contributions.received > u128.Zero, "No received (pending) funds to be transferred")
  const to_self = Context.contractName
  const to_owner = ContractPromiseBatch.create(this.owner)

  // transfer earnings to owner then confirm transfer complete
  const promise = to_owner.transfer(this.contributions.received)
  promise.then(to_self).function_call("on_transfer_complete", '{}', u128.Zero, XCC_GAS)
}
@mutateState()
on_transfer_complete(): void {
  assert_self()
  assert_single_promise_success()

  logging.log("transfer complete")
  // reset contribution tracker
  this.contributions.record_transfer()
}

near-api-js and near-shell use different default values for gas.
near-api-js:
const DEFAULT_FUNC_CALL_GAS = new BN('30_000_000_000_000');
near-shell:
.option('gas', {
  desc: 'Max amount of gas this call can use (in gas units)',
  type: 'string',
  default: '100_000_000_000_000'
})
I added the _ separators to make it clearer that near-shell attaches more than 3 times as much gas by default.
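Assuming the default gas is the culprit, the fix on the front end is to attach more gas explicitly. Here's a minimal sketch using the positional (args, gas) call style of near-api-js contract methods; the exact signature depends on your near-api-js version, and the 100 Tgas figure simply mirrors near-shell's default rather than a tuned value:

const BOATLOAD_OF_GAS = '100000000000000'; // 100 Tgas, mirroring near-shell's default

document.querySelector('#transfer-to-owner').onclick = async () => {
  try {
    // The second positional argument raises the attached gas above the
    // 30 Tgas default that near-api-js would otherwise use.
    await window.contract.transfer({}, BOATLOAD_OF_GAS);
  } catch (e) {
    console.error('Transfer failed:', e);
  }
};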

Related

IndexedDB breaks in Firefox after trying to save autoIncremented Blob

I am trying to implement Blob storage via IndexedDB for long media recordings.
My code works fine in Chrome and Edge (not tested in Safari yet), but it won't do anything in Firefox. There are no errors; it just doesn't fulfill my requests past the initial DB connection (which is successful). Intuitively, it seems that the processing is blocked by something, but I don't have anything in my code that would block.
Simplified version of the code (without the heavy logging and excessive error checks I added while trying to debug):
const dbName = 'recording'
const storeValue = 'blobs'
let connection = null
const handler = window.indexedDB || window.mozIndexedDB || window.webkitIndexedDB

function connect() {
  return new Promise((resolve, reject) => {
    const request = handler.open(dbName)

    request.onupgradeneeded = (event) => {
      const db = event.target.result
      if (db.objectStoreNames.contains(storeValue)) {
        db.deleteObjectStore(storeValue)
      }
      db.createObjectStore(storeValue, {
        keyPath: 'id',
        autoIncrement: true,
      })
    }

    request.onerror = () => {
      reject()
    }

    request.onsuccess = () => {
      connection = request.result
      connection.onerror = () => {
        connection = null
      }
      connection.onclose = () => {
        connection = null
      }
      resolve()
    }
  })
}

async function saveChunk(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection.transaction(
      storeValue,
      'readwrite'
    ).objectStore(storeValue)

    const req = store.add(chunk)
    req.onsuccess = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
      resolve(req.result)
    }
    req.onerror = () => {
      reject()
    }
    req.transaction.oncomplete = () => {
      console.warn('DONE!') // Fires in Chrome and Edge - not in Firefox
    }
  })
}

// ... on blob available
await saveChunk(blob)
What I tried so far:
- close any other browser windows, anything that could count as an "open connection" that might be blocking execution
- refresh the Firefox profile
- let my colleague test the code on his own machine => same result
Additional information that might be useful:
Running in a Nuxt 2.15.8 dev environment (localhost:3000). The code is used in the component as a mixin. The project is rather large and uses a bunch of different browser APIs, so there might be some kind of collision?! This is the only place where we use IndexedDB, though, so getting to the bottom of this without any errors being thrown seems almost impossible.
Edit:
When I create a brand new database, there is a brief window in which transactions complete fine, but after some time has passed or something has been triggered, requests go back to being queued indefinitely.
I found out this morning when I had this structure:
...
clearDatabase() {
  // get the store
  const req = store.clear()
  req.transaction.oncomplete = () => console.log('all good!')
}
await this.connect()
await this.clearDatabase()
'All good' fired. But any subsequent requests were broken, same as before.
On page reload, even the clearDatabase request was broken again.
Something breaks with ongoing usage.
Edit2:
It's clearly connected to saving a Blob instance without an id while the autoIncrement option is on. Not only does it fail silently, it basically corrupts the DB completely. If I manually assign an incrementing ID to the Blob object, it works! If I leave out the id field for a regular simple object, it also works! Does anyone know about this? I feel like saving Blobs is a common use case, so this should have been found already?!
I've concluded, unless proven otherwise, that it's a Firefox bug and opened a ticket on Bugzilla.
This happens with Blobs but might also be true for other instances. If you find yourself in the same situation, there is a workaround: don't rely on autoIncrement, and assign IDs manually before trying to save the records to the DB.
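A minimal sketch of that workaround, reusing the question's connect() helper and store constants (the in-memory counter is an assumption; derive or persist it if keys must survive reloads):

let nextId = 1 // assumed counter; not part of the original code

async function saveChunkManualId(chunk) {
  if (!connection) await connect()
  return new Promise((resolve, reject) => {
    const store = connection
      .transaction(storeValue, 'readwrite')
      .objectStore(storeValue)
    // Assign the key ourselves instead of relying on autoIncrement.
    // Wrapping the Blob in a plain object lets the 'id' keyPath resolve.
    const req = store.add({ id: nextId++, blob: chunk })
    req.onsuccess = () => resolve(req.result)
    req.onerror = () => reject(req.error)
  })
}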

Mysterious timeout when connecting to Neptune DB

I'm getting this error message when trying to connect to an AWS Neptune DB from a Lambda:
2022-05-05T18:36:04.114Z e0c9ee4c-0e1d-49c7-ad05-d8bab79d3ea6 WARN Determining whether retriable error: Server error: {
  "requestId": "some value",
  "code": "TimeLimitExceededException",
  "detailedMessage": "A timeout occurred within the script or was otherwise cancelled directly during evaluation of [some value]"
} (598)
The timeout happens consistently after 20s.
It's not clear what's causing this. Things I've tried:
- increasing the Lambda memory, in case it's just a hardware problem, but no luck
- increasing the Neptune query timeout from 20s to 60s, but the request still times out at 20s
This is the code of the lambda that tries to initialize the connection:
import { driver, structure } from 'gremlin';
import { getUrlAndHeaders } from 'gremlin-aws-sigv4/lib/utils';

const getConnectionDetails = () => {
  if (process.env['USE_IAM'] == 'true') {
    return getUrlAndHeaders(
      process.env['CLUSTER_ENDPOINT'],
      process.env['CLUSTER_PORT'],
      {},
      '/gremlin',
      'wss'
    );
  } else {
    const database_url =
      'wss://' +
      process.env['CLUSTER_ENDPOINT'] +
      ':' +
      process.env['CLUSTER_PORT'] +
      '/gremlin';
    return { url: database_url, headers: {} };
  }
};

const getConnection = () => {
  const { url, headers } = getConnectionDetails();
  const c = new driver.DriverRemoteConnection(url, {
    mimeType: 'application/vnd.gremlin-v2.0+json',
    headers: headers,
  });
  c._client._connection.on('close', (code, message) => {
    console.info(`close - ${code} ${message}`);
    if (code == 1006) {
      console.error('Connection closed prematurely');
      throw new Error('Connection closed prematurely');
    }
  });
  return c;
};
This was working previously using more powerful hardware (r4.2xlarge) for the Neptune DB, but I changed that to a t3.medium to minimize cost, and it seems that's when the problem started. I find it hard to believe that this hardware change alone would cause the connection to time out, and it's odd that it continues to time out at exactly 20s. Any ideas?
Once parameter group values are changed, the instance you are connecting to still needs to be restarted for them to take effect. You can do this:
- from the AWS Console (web page) for Neptune
- from the CLI, using aws neptune reboot-db-instance
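For example, from the CLI (the instance identifier below is a placeholder; substitute your own):

aws neptune reboot-db-instance --db-instance-identifier my-neptune-instance

Once the instance has rebooted, the updated query timeout from the parameter group takes effect, so the request should no longer be cut off at the old 20s limit.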

Expecting a Promise *not* to complete, in Jest

I need to test that something does not happen. While how to test something like that may be worth a discussion of its own (how long a wait is long enough?), I hope there would be a better way in Jest to integrate with test timeouts. So far I haven't found one, but let's begin with the test.
test('User information is not distributed to a project where the user is not a member', async () => {
  // Write in 'userInfo' -> should NOT turn up in project 1.
  //
  await collection("userInfo").doc("xyz").set({ displayName: "blah", photoURL: "https://no-such.png" });

  // (firebase-jest-testing 0.0.3-beta.3)
  await expect( eventually("projects/1/userInfo/xyz", o => !!o, 800 /*ms*/) ).resolves.toBeUndefined();

  // ideally:
  //await expect(prom).not.toComplete;  // ..but with cancelling such a promise
}, 9999 /*ms*/ );
eventually returns a Promise, and I'd like to check that, within the test's normal timeout, such a Promise does not complete (resolve or reject).
Jest provides .resolves and .rejects, but nothing that would combine the two.
1. Can I create the anticipated .not.toComplete using some Jest extension mechanism?
2. Can I create a "run just before the test would time out" trigger (with the ability to make the test pass or fail)?
I think suggestion 2 might come in handy, and I could create a feature request for it, but let's see what comments this gets.
Edit: There's a further complexity in that JS Promises cannot be cancelled from outside (but they can time out, from within).
I eventually solved this with a custom matcher:
/*
 * test-fns/matchers/timesOut.js
 *
 * Usage:
 *   expect(prom).timesOut(500);
 */
import { expect } from '@jest/globals'

expect.extend({
  async timesOut(prom, ms) {  // (Promise of any, number) => { message: () => string, pass: boolean }
    // Wait for either 'prom' to complete, or a timeout.
    //
    const [resolved, error] = await Promise.race([ prom, timeoutMs(ms) ])
      .then(x => [x])
      .catch(err => [undefined, err]);

    const pass = (resolved === TIMED_OUT);
    return pass ? {
      message: () => `expected not to time out in ${ms}ms`,
      pass: true
    } : {
      message: () => `expected to time out in ${ms}ms, but ${ error ? `rejected with ${error}` : `resolved with ${resolved}` }`,
      pass: false
    }
  }
})

const timeoutMs = (ms) => new Promise((resolve) => { setTimeout(resolve, ms); })
  .then( _ => TIMED_OUT);

const TIMED_OUT = Symbol()
source
The good side is that this can be added to any Jest project.
The downside is that one needs to mention the delay separately (and guarantee that Jest's timeout does not happen first).
This makes the question's code become:
await expect( eventually("projects/1/userInfo/xyz") ).timesOut(300)
Note for Firebase users:
Jest does not exit to the OS level if Firestore JS SDK client listeners are still active. You can prevent this by unsubscribing from them in afterAll, but that means keeping track of which listeners are alive and which are not. The firebase-jest-testing library does this for you, under the hood. Also, this will eventually ;) get fixed by Firebase.
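If you manage listeners yourself, the bookkeeping could look roughly like this (the track helper and the collection path are assumptions, not part of firebase-jest-testing):

const unsubscribers = [];

// Wrap each listener registration so its unsubscribe function is retained.
function track(unsubscribe) {
  unsubscribers.push(unsubscribe);
  return unsubscribe;
}

// e.g. in a test:
// track( db.collection('projects').onSnapshot(snap => { /* ... */ }) );

afterAll(() => {
  // Detach every listener so Jest can exit to the OS level.
  unsubscribers.forEach(unsub => unsub());
});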

Unit testing NestJS Observable Http Retry

I'm making a request to a 3rd party API via NestJS's built-in HttpService. I'm trying to simulate a scenario where the initial call to one of this API's endpoints might return an empty array on the first try. I'd like to use RxJS's retryWhen to hit the API again after a delay of 1 second. I'm currently unable to get the unit test to mock the second response, however:
it('Retries view account status if needed', (done) => {
  jest.spyOn(httpService, 'post')
    .mockReturnValueOnce(of(failView))  // mock gets stuck on returning this value
    .mockReturnValueOnce(of(successfulView));

  const accountId = '0812081208';
  const batchNo = '39cba402-bfa9-424c-b265-1c98204df7ea';
  const response = client.viewAccountStatus(accountId, batchNo);

  response.subscribe(
    data => {
      expect(data[0].accountNo).toBe('0812081208');
      expect(data[0].companyName).toBe('Some company name');
      done();
    },
  )
});
My implementation is:
viewAccountStatus(accountId: string, batchNo: string): Observable<any> {
  const verificationRequest = new VerificationRequest();
  verificationRequest.accountNo = accountId;
  verificationRequest.batchNo = batchNo;

  this.logger.debug(`Calling 3rd party service with batchNo: ${batchNo}`);

  const config = {
    headers: {
      'Content-Type': 'application/json',
    },
  };

  const response = this.httpService.post(url, verificationRequest, config)
    .pipe(
      map(res => {
        console.log(res.data); // always empty
        if (res.status >= 400) {
          throw new HttpException(res.statusText, res.status);
        }
        if (!res.data.length) {
          this.logger.debug('Response was empty');
          throw new HttpException('Account not found', 404);
        }
        return res.data;
      }),
      retryWhen(errors => {
        this.logger.debug(`Retrying accountId: ${accountId}`);
        // It's entirely possible the first call will return an empty array,
        // so we retry with a backoff
        return errors.pipe(
          delayWhen(() => timer(1000)),
          take(1),
        );
      }),
    );
  return response;
}
When logging from inside the initial map, I can see that the array is always empty. It's as if the second mocked value never happens. Perhaps I also have a solid misunderstanding of how observables work and I should somehow be asserting against the SECOND value that gets emitted? Regardless, when the observable retries, we should be seeing that second mocked value, right?
I'm also getting
Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout.
on each run... so I'm guessing I'm not calling done() in the right place.
I think the problem is that retryWhen(notifier) will resubscribe to the same source when its notifier emits.
Meaning that if you have
new Observable(s => {
  s.next(1);
  s.next(2);
  s.error(new Error('err!'));
}).pipe(
  retryWhen(/* ... */)
)
The subscriber callback will be invoked every time the source is re-subscribed. In your example, re-subscription re-runs the observable that was returned from the first call, but it won't call the post method again, so the second mocked value never comes into play.
The source can be thought of as the Observable's callback: s => { ... }.
What I think you'll have to do is conditionally choose what the source emits, based on whether the error has taken place or not.
Maybe you could use mockImplementation:
let hasErr = false;

jest.spyOn(httpService, 'post')
  .mockImplementation(
    () => hasErr ? of(successView) : (hasErr = true, of(failView))
  )
Edit
I think the above does not actually do anything different, since post is still only invoked once. Here's what I think mockImplementation should look like, emitting from a single Observable whose subscriber callback branches on each re-subscription:

let err = false;

jest.spyOn(httpService, 'post').mockImplementation(
  () => new Observable(s => {
    if (err) {
      s.next(success)
    } else {
      err = true;
      s.next(fail)
    }
  })
)

Apollo server subscription not recognizing Async Iterable

I'm having an issue with Apollo GraphQL's subscription. When attempting to start the subscription I'm getting this in return:
"Subscription field must return Async Iterable. Received: { pubsub: { ee: [EventEmitter], subscriptions: {}, subIdCounter: 0 }, pullQueue: [], pushQueue: [], running: true, allSubscribed: null, eventsArray: [\"H-f_mUvS\"], return: [function return] }"
I have other subscriptions set up that are completely functional, so I can confirm the web server is set up correctly.
I'm just curious if anyone else has ever run into this issue before.
Source code in PR diff (it's an open source project):
https://github.com/astronomer/houston-api/pull/165/files
[Screenshot: error in playground]
I don't think this is an issue specific to the PR you posted. I'd be surprised if any of the subscriptions were working as is.
Your subscribe function should return an AsyncIterable, as the error states. Since it returns a call to createPoller, createPoller should return an AsyncIterable. But here's what that function looks like:
export default function createPoller(
  func,
  pubsub,
  interval = 5000,   // Poll every 5 seconds
  timeout = 3600000  // Kill after 1 hour
) {
  // Generate a random internal topic.
  const topic = shortid.generate();

  // Create an async iterator. This is what a subscription resolver expects to be returned.
  const iterator = pubsub.asyncIterator(topic);

  // Wrap the publish function on the pubsub object, pre-populating the topic.
  const publish = bind(curry(pubsub.publish, 2)(topic), pubsub);

  // Call the function once to get initial dataset.
  func(publish);

  // Then set up a timer to call the passed function. This is the poller.
  const poll = setInterval(partial(func, publish), interval);

  // If we are passed a timeout, kill subscription after that interval has passed.
  const kill = setTimeout(iterator.return, timeout);

  // Create a typical async iterator, but overwrite the return function
  // and cancel the timer. The return function gets called by the apollo server
  // when a subscription is cancelled.
  return {
    ...iterator,
    return: () => {
      log.info(`Disconnecting subscription ${topic}`);
      clearInterval(poll);
      clearTimeout(kill);
      return iterator.return();
    }
  };
}
So createPoller creates an AsyncIterable, but then creates a shallow copy of it and returns that. graphql-subscriptions uses iterall's isAsyncIterable for the check that's producing the error you're seeing. Because of the way isAsyncIterable works, a shallow copy won't fly. You can see this for yourself:
const { PubSub } = require('graphql-subscriptions')
const { isAsyncIterable } = require('iterall')

const pubSub = new PubSub()
const iterable = pubSub.asyncIterator('test')
const copy = { ...iterable }

console.log(isAsyncIterable(iterable)) // true
console.log(isAsyncIterable(copy))     // false
So, instead of returning a shallow copy, createPoller should just mutate the return method directly:
export default function createPoller(...) {
  ...
  iterator.return = () => { ... }
  return iterator
}
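Spelled out against the original function, a sketch of that change (only the tail differs; the original return method is captured before being overwritten so the cleanup can still delegate to it):

export default function createPoller(
  func,
  pubsub,
  interval = 5000,
  timeout = 3600000
) {
  const topic = shortid.generate();
  const iterator = pubsub.asyncIterator(topic);
  const publish = bind(curry(pubsub.publish, 2)(topic), pubsub);
  func(publish);
  const poll = setInterval(partial(func, publish), interval);
  const kill = setTimeout(() => iterator.return(), timeout);

  // Capture the original return, then overwrite it in place. The object
  // identity (and its Symbol.asyncIterator method) is preserved, so
  // iterall's isAsyncIterable check passes.
  const originalReturn = iterator.return.bind(iterator);
  iterator.return = () => {
    log.info(`Disconnecting subscription ${topic}`);
    clearInterval(poll);
    clearTimeout(kill);
    return originalReturn();
  };
  return iterator;
}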
