What does contract.filters return in ethers.js?

I came across the ethers.js library's event filters function contract.filters. See the code below from the ethers.js documentation.
https://docs.ethers.org/v5/concepts/events/
abi = [
  "event Transfer(address indexed src, address indexed dst, uint val)"
];
contract = new Contract(tokenAddress, abi, provider);

// List all token transfers *from* myAddress
contract.filters.Transfer(myAddress)
// {
//   address: '0x6B175474E89094C44Da98b954EedeAC495271d0F',
//   topics: [
//     '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef',
//     '0x0000000000000000000000008ba1f109551bd432803012645ac136ddd64dba72'
//   ]
// }
What is in the returned value inside topics: [], and what does each string value represent? Is each event represented by its transaction hash?
And also, what is the sort order of each event? Would the latest event be the first in the topics array?
Thank you!
I also looked at solidity's documentation on events but still not sure.

What is in the returned value inside topics: [], and what does each string value represent? Is each event represented by its transaction hash?
Since this event is non-anonymous (most events are, i.e. they lack the anonymous keyword at the end of the event definition), topics[0] represents the event signature.
The event signature is a keccak256 hash of the event name and argument types.
In this case, it's the hash of the string Transfer(address,address,uint256) (uint is an alias for uint256, and the longer form is used in signatures).
The following topics items are the indexed params. Since there are 2 indexed params in your event, the array can hold up to 3 items (1 signature + 2 indexed params). The 3rd item is missing from the snippet because the filter only constrains src; a trailing topic that is left unspecified matches any value, so ethers omits it.
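As a sanity check, you can rebuild both topic values yourself with ethers v5 utilities (a minimal sketch; myAddress is the address from the question):
const { utils } = require("ethers");

// topics[0]: keccak256 of the canonical signature string
const topic0 = utils.id("Transfer(address,address,uint256)");
// '0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef'

// topics[1]: the indexed src address, left-padded to 32 bytes
const topic1 = utils.hexZeroPad(myAddress, 32);
// '0x0000000000000000000000008ba1f109551bd432803012645ac136ddd64dba72'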
And also, what is the sort order of each event? Would the latest event be the first in the topics array?
They are sorted by the order of their execution, oldest first. So the following code emits EventB, then EventA, and finally EventB.
pragma solidity ^0.8;

contract MyContract {
    event EventA();
    event EventB();

    function foo() external {
        emit EventB();
        emit EventA();
        emit EventB();
    }
}
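That order carries over to queries: logs come back sorted by block number, then by log index within the block. A minimal sketch with ethers v5 (fromBlock/toBlock are placeholders):
const filter = contract.filters.Transfer(myAddress);
const events = await contract.queryFilter(filter, fromBlock, toBlock);
// events[0] is the oldest match; events[events.length - 1] the newest
console.log(events.map(e => `${e.blockNumber}:${e.logIndex}`));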

As @Petr Hejda said.
You can see information about topics here:
http://solidity.readthedocs.io/en/develop/contracts.html?highlight=topic#events
https://medium.com/mycrypto/understanding-event-logs-on-the-ethereum-blockchain-f4ae7ba50378
And use the keccak256 hash function like:
ethers.utils.keccak256(ethers.utils.toUtf8Bytes("Transfer(address,address,uint256)"))
to calculate the event topic you wish to filter on. Note that keccak256 expects bytes rather than a plain string, hence the toUtf8Bytes call (ethers.utils.id is a shorthand for exactly this).
https://docs.ethers.org/v5/api/utils/hashing/#utils-keccak256
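The computed topic can then go straight into a raw provider.getLogs filter, for example (block range is illustrative):
const logs = await provider.getLogs({
  address: tokenAddress,
  fromBlock: 0,
  toBlock: "latest",
  topics: [ethers.utils.id("Transfer(address,address,uint256)")]
});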


How to query substrate storage via `.entries` for partial items

How can I query the storage via .entries when I know the list of IDs under which the data is stored?
snippet from decl_storage
/// PoE Proofs
Proofs get(fn proofs): map hasher(blake2_128_concat) GenericId=> ProofInfo<Proof, T::AccountId, T::BlockNumber>;
TypeScript code where I am trying to get only a few entries:
type IncomingParam = [StorageKey, ProofInfo]
type SnGenericIds = GenericId[]
export async function getAll (
  items: SnGenericIds = []
): Promise<IncomingParam[]> {
  const api = getApi()
  return await api.query.poe.proofs.entries(items)
}
// items is [ '0x6261666b313332313365616465617364' ]
When I use the Polkadot.js app in the browser and pass that ID, I get the record, and only that one. The above TS code returns ALL the records. I've checked https://polkadot.js.org/api/start/api.query.other.html#map-keys-entries and, if I understand correctly, the above code should work.
I know of .multi, but I'd like to use this method to get all or some entries. Is that even possible?
Filtering via .entries(arg) is only possible with a double_map, where arg is a single value that matches the first key of the double_map.
The Rust code above declares a plain map with no such key to filter on, thus the RPC and the API will retrieve all entries.
So the Rust code would need to be the following:
/// PoE Proofs
pub Proofs get(fn proofs): double_map hasher(blake2_128_concat) GenericId, hasher(twox_64_concat) T::AccountId => ProofInfo<Proof, T::AccountId, T::BlockNumber>;
This allows filtering using the following approach:
api.query.poe.proofs.entries('0x1231233132312')
Additional info can be found here
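To make that concrete, a minimal sketch of both options (assuming an initialized ApiPromise api; the hex IDs are illustrative):
// with the double_map: all proofs stored under one GenericId (the first key)
const byId = await api.query.poe.proofs.entries('0x6261666b313332313365616465617364');

// with the original plain map: fetch a known list of IDs in one call via .multi
const values = await api.query.poe.proofs.multi([
  '0x6261666b313332313365616465617364',
  '0x6261666b313332313365616465617365'
]);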

Firestore transaction produces console error: FAILED_PRECONDITION: the stored version does not match the required base version

I have written a bit of code that allows a user to upvote / downvote recipes in a manner similar to Reddit.
Each individual vote is stored in a Firestore collection named votes, with a structure like this:
{username,recipeId,value} (where value is either -1 or 1)
The recipes are stored in the recipes collection, with a structure somewhat like this:
{title,username,ingredients,instructions,score}
Each time a user votes on a recipe, I need to record their vote in the votes collection, and update the score on the recipe. I want to do this as an atomic operation using a transaction, so there is no chance the two values can ever become out of sync.
Following is the code I have so far. I am using Angular 6; however, I couldn't find any TypeScript examples showing how to handle multiple get()s in a single transaction, so I ended up adapting some Promise-based JavaScript code that I found.
The code seems to work, but there is something happening that is concerning. When I click the upvote/downvote buttons in rapid succession, some console errors occasionally appear. These read POST https://firestore.googleapis.com/v1beta1/projects/myprojectname/databases/(default)/documents:commit 400 (). When I look at the actual response from the server, I see this:
{
  "error": {
    "code": 400,
    "message": "the stored version (1534122723779132) does not match the required base version (0)",
    "status": "FAILED_PRECONDITION"
  }
}
Note that the errors do not appear when I click the buttons slowly.
Should I worry about this error, or is it just a normal result of the transaction retrying? As noted in the Firestore documentation, a "function calling a transaction (transaction function) might run more than once if a concurrent edit affects a document that the transaction reads."
Note that I have tried wrapping try/catch blocks around every single operation below, and there are no errors thrown. I removed them before posting for the sake of making the code easier to follow.
Very interested in hearing any suggestions for improving my code, regardless of whether they're related to the HTTP 400 error.
async vote(username, recipeId, direction) {
  let value;
  if (direction == 'up') {
    value = 1;
  }
  if (direction == 'down') {
    value = -1;
  }
  // assemble vote object to be recorded in votes collection
  const voteObj: Vote = { username: username, recipeId: recipeId, value: value };
  // get references to both vote and recipe documents
  const voteDocRef = this.afs.doc(`votes/${username}_${recipeId}`).ref;
  const recipeDocRef = this.afs.doc(`recipes/${recipeId}`).ref;
  await this.afs.firestore.runTransaction(async t => {
    const voteDoc = await t.get(voteDocRef);
    const recipeDoc = await t.get(recipeDocRef);
    // DocumentSnapshot.get() is synchronous, so no await is needed here
    const currentRecipeScore = recipeDoc.get('score');
    if (!voteDoc.exists) {
      // This is a new vote, so add it to the votes collection
      // and apply its value to the recipe's score
      t.set(voteDocRef, voteObj);
      t.update(recipeDocRef, { score: currentRecipeScore + value });
    } else {
      const voteData = voteDoc.data();
      if (voteData.value == value) {
        // existing vote is the same as the button that was pressed, so delete
        // the vote document and revert the vote from the recipe's score
        t.delete(voteDocRef);
        t.update(recipeDocRef, { score: currentRecipeScore - value });
      } else {
        // existing vote is the opposite of the one pressed, so update the
        // vote doc, then apply it to the recipe's score by doubling it.
        // For example, if the current score is 1 and the user reverses their
        // +1 vote by pressing -1, we apply -2 so the score will become -1.
        t.set(voteDocRef, voteObj);
        t.update(recipeDocRef, { score: currentRecipeScore + (value * 2) });
      }
    }
    return true;
  });
}
According to Firebase developer Nicolas Garnier, "What you are experiencing here is how Transactions work in Firestore: one of the transactions failed to write because the data has changed in the mean time, in this case Firestore re-runs the transaction again, until it succeeds. In the case of multiple Reviews being written at the same time some of them might need to be ran again after the first transaction because the data has changed. This is expected behavior and these errors should be taken more as warnings."
In other words, this is a normal result of the transaction retrying.
I used RxJS throttleTime to prevent the user from flooding the Firestore server with transactions by clicking the upvote/downvote buttons in rapid succession, and that greatly reduced the occurrences of this 400 error. In my app, there's no legitimate reason someone would need to click upvote/downvote dozens of times per second. It's not a video game.
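For reference, the throttling itself is only a few lines (a sketch with RxJS 6 as used by Angular 6; the Subject, the 500 ms window, and this.username / this.recipeId are illustrative):
import { Subject } from 'rxjs';
import { throttleTime } from 'rxjs/operators';

// inside the component:
private voteClicks$ = new Subject<'up' | 'down'>();

ngOnInit() {
  // let at most one vote through per 500 ms; drop the rest
  this.voteClicks$.pipe(throttleTime(500))
    .subscribe(direction => this.vote(this.username, this.recipeId, direction));
}

upvote() { this.voteClicks$.next('up'); }
downvote() { this.voteClicks$.next('down'); }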

How can I test a trigger function in GAS?

Google Apps Script supports Triggers, which pass Events to trigger functions. Unfortunately, the development environment only lets you run functions without passing parameters, so you cannot simulate an event that way. If you try, you get an error like:
ReferenceError: 'e' is not defined.
Or
TypeError: Cannot read property *...* from undefined
(where e is undefined)
One could treat the event like an optional parameter, and insert a default value into the trigger function using any of the techniques from Is there a better way to do optional function parameters in JavaScript?. But that introduces the risk that a lazy programmer (hands up if that's you!) will leave that code behind, with unintended side effects.
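For illustration, that workaround might look like this (the fallback event object here is invented, and is exactly the kind of thing that gets left behind):
function onEdit(e) {
  // DANGER: test-only fallback; remove before deploying
  e = e || {
    range: SpreadsheetApp.getActiveSheet().getActiveCell(),
    value: 'test'
  };
  // ... actual trigger logic ...
}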
Surely there are better ways?
You can write a test function that passes a simulated event to your trigger function. Here's an example that tests an onEdit() trigger function. It passes an event object with all the information described for "Spreadsheet Edit Events" in Understanding Events.
To use it, set your breakpoint in your target onEdit function, select function test_onEdit and hit Debug.
/**
 * Test function for onEdit. Passes an event object to simulate an edit to
 * a cell in a spreadsheet.
 *
 * Check for updates: https://stackoverflow.com/a/16089067/1677912
 *
 * See https://developers.google.com/apps-script/guides/triggers/events#google_sheets_events
 */
function test_onEdit() {
  onEdit({
    user: Session.getActiveUser().getEmail(),
    source: SpreadsheetApp.getActiveSpreadsheet(),
    range: SpreadsheetApp.getActiveSpreadsheet().getActiveCell(),
    value: SpreadsheetApp.getActiveSpreadsheet().getActiveCell().getValue(),
    authMode: "LIMITED"
  });
}
If you're curious, this was written to test the onEdit function for Google Spreadsheet conditional on three cells.
Here's a test function for Spreadsheet Form Submission events. It builds its simulated event by reading form submission data. This was originally written for Getting TypeError in onFormSubmit trigger?.
/**
 * Test function for Spreadsheet Form Submit trigger functions.
 * Loops through content of sheet, creating simulated Form Submit Events.
 *
 * Check for updates: https://stackoverflow.com/a/16089067/1677912
 *
 * See https://developers.google.com/apps-script/guides/triggers/events#google_sheets_events
 */
function test_onFormSubmit() {
  var dataRange = SpreadsheetApp.getActiveSheet().getDataRange();
  var data = dataRange.getValues();
  var headers = data[0];
  // Start at row 1, skipping headers in row 0
  for (var row = 1; row < data.length; row++) {
    var e = {};
    e.values = data[row].filter(Boolean);  // filter: https://stackoverflow.com/a/19888749
    e.range = dataRange.offset(row, 0, 1, data[0].length);
    e.namedValues = {};
    // Loop through headers to create namedValues object
    // NOTE: all namedValues are arrays.
    for (var col = 0; col < headers.length; col++) {
      e.namedValues[headers[col]] = [data[row][col]];
    }
    // Pass the simulated event to onFormSubmit
    onFormSubmit(e);
  }
}
Tips
When simulating events, take care to match the documented event objects as closely as possible.
If you wish to validate the documentation, you can log the received event from your trigger function.
Logger.log( JSON.stringify( e , null, 2 ) );
In Spreadsheet form submission events:
all namedValues values are arrays.
Timestamps are Strings, and their format will be localized to the Form's locale. If read from a spreadsheet with default formatting*, they are Date objects. If your trigger function relies on the string format of the timestamp (which is a Bad Idea), take care to ensure you simulate the value appropriately.
If you've got columns in your spreadsheet that are not in your form, the technique in this script will simulate an "event" with those additional values included, which is not what you'll receive from a form submission.
As reported in Issue 4335, the values array skips over blank answers (in "new Forms" + "new Sheets"). The filter(Boolean) method is used to simulate this behavior.
*A cell formatted "plain text" will preserve the date as a string, and is not a Good Idea.
Update 2020-2021:
You don't need to use any kind of mock events as suggested in the previous answers.
As said in the question, if you directly "run" the function in the script editor, errors like
TypeError: Cannot read property ... from undefined
are thrown. These are not the real errors. This error appears only because you ran the function without an event. If your function isn't behaving as expected, you need to figure out the actual error:
To test a trigger function:
Trigger the corresponding event manually: i.e., to test onEdit, edit a cell in the sheet; to test onFormSubmit, submit a dummy form response; to test doGet, navigate your browser to the published web app /exec URL.
If there are any errors, they are logged to Stackdriver. To view those logs:
In the script editor, click the Executions icon on the left bar (legacy editor: View > Executions).
Alternatively, click here > click the project you're interested in > click the "Executions" icon on the left bar (the 4th one).
You'll find a list of executions on the executions page. Make sure to clear out any filters like "Ran as: Me" at the top left to show all executions. Click the execution you're interested in, and it'll show the error that caused the trigger to fail in red.
Note: Sometimes the logs are not visible due to bugs. This is especially true when a web app is run by anonymous users. In such cases, it is recommended to switch the default Google Cloud project to a standard Google Cloud project and use View > Stackdriver Logging directly. See here for more information.
For further debugging, you can edit the code to add console.log(/*object you're interested in*/) after any line you're interested in to see details of that object. It is highly recommended that you stringify the object you're looking for: console.log(JSON.stringify(e)), as the log viewer has idiosyncrasies. After adding console.log(), repeat from step 1. Repeat this cycle until you've narrowed down the problem.
Congrats! You've successfully figured out the problem and crossed the first obstacle.
2017 Update:
Debug the Event objects with Stackdriver Logging for Google Apps Script. From the menu bar in the script editor, go to:
View > Stackdriver Logging to view or stream the logs.
console.log() will write DEBUG level messages
Example onEdit():
function onEdit(e) {
  var debug_e = {
    authMode: e.authMode,
    range: e.range.getA1Notation(),
    source: e.source.getId(),
    user: e.user,
    value: e.value,
    oldValue: e.oldValue
  };
  console.log({message: 'onEdit() Event Object', eventObject: debug_e});
}
Example onFormSubmit():
function onFormSubmit(e) {
  var debug_e = {
    authMode: e.authMode,
    namedValues: e.namedValues,
    range: e.range.getA1Notation(),
    value: e.value
  };
  console.log({message: 'onFormSubmit() Event Object', eventObject: debug_e});
}
Example onChange():
function onChange(e) {
  var debug_e = {
    authMode: e.authMode,
    changeType: e.changeType,
    user: e.user
  };
  console.log({message: 'onChange() Event Object', eventObject: debug_e});
}
Then check the logs in the Stackdriver UI, labeled with the message string, to see the output.
As an addition to point 4 of the method mentioned above (Update 2020-2021):
Here is a small routine which I use to trace triggered code, and which has saved me a lot of time already. I also keep two windows open: one with Stackdriver (executions), and one with the code (which mostly resides in a library), so I can easily spot the culprit.
/**
 * Like Logger.log: %s in text is replaced by subsequent (stringified) elements of array A.
 * @param {string | object} text - %s in text is replaced by elements of A[]; if text is not a string, it is stringified and A is ignored
 * @param {object[]} A - array of objects to insert in text, replacing %s
 * @returns {string} text with objects from A inserted
 */
function Stringify(text, A) {
  var i = 0;
  return (typeof text == 'string') ?
    text.replace(
      /%s/g,
      function(m) {
        if (i >= A.length) return m;
        var a = A[i++];
        return (typeof a == 'string') ? a : JSON.stringify(a);
      })
    : (typeof text == 'object') ? JSON.stringify(text) : text;
}

/* use Logger (or console) to display text and variables. */
function T(text) {
  Logger.log.apply(Logger, arguments);
  var Content = Stringify(text, Array.prototype.slice.call(arguments, 1));
  return Content;
}

/**** EXAMPLE OF USE ***/
function onSubmitForm(e) {
  T("responses:\n%s", e.response.getItemResponses().map(r => r.getResponse()));
}

LevelDB key,value from csv

I have a huge CSV database of ~5M rows with the fields below:
start_ip,end_ip,country,city,lat,long
I am storing these in LevelDB using start_ip as the key and the rest as the value.
How can I retrieve records for keys where
( ip_key > start_ip and ip_key < end_ip )
Any alternative solutions are welcome too.
I assume that your keys are the hash values of the IP and the hashes are 64-bit unsigned integers, but if that's not the case then just modify the code below to account for the proper keys.
void MyClass::ReadRecordRange(const uint64 startRange, const uint64 endRange)
{
    // Get the start slice and the end slice
    leveldb::Slice startSlice(static_cast<const char*>(static_cast<const void*>(&startRange)), sizeof(startRange));
    leveldb::Slice endSlice(static_cast<const char*>(static_cast<const void*>(&endRange)), sizeof(endRange));
    // Get a database iterator
    shared_ptr<leveldb::Iterator> dbIter(_database->NewIterator(leveldb::ReadOptions()));
    // Seek to the first key >= startSlice, then scan forward until
    // the current key compares greater than endSlice
    for (dbIter->Seek(startSlice); dbIter->Valid() && _options.comparator->Compare(dbIter->key(), endSlice) <= 0; dbIter->Next())
    {
        // get the key
        dbIter->key().data();
        // get the value
        dbIter->value().data();
        // TODO do whatever you need to do with the key/value you read
    }
}
Note that _options is the same leveldb::Options object with which you opened the database instance. You want to use the comparator specified in the options so that the order in which you read the records matches the order in the database.
If you're not using boost or tr1, then you can either use something else similar to the shared_ptr or just delete the leveldb::Iterator by yourself. If you don't delete the iterator, then you'll leak memory and get asserts in debug mode.
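If you happen to be in Node.js rather than C++, the equivalent range scan with the level package (v8-style API) is a short sketch; it assumes keys were written in a fixed-width, big-endian form so that bytewise order matches numeric order:
const { Level } = require('level');

const db = new Level('./ipdb', { valueEncoding: 'json' });

async function readRange(startKey, endKey) {
  const results = [];
  // iterator({ gte, lte }) walks keys in sorted (bytewise) order
  for await (const [key, value] of db.iterator({ gte: startKey, lte: endKey })) {
    results.push({ key, value });
  }
  return results;
}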

Using Reactives to Merge Chunked Messages

So I'm attempting to use reactives to recompose chunked messages identified by ID, and am having a problem terminating the final observable. I have a Message class which consists of Id, Total Size, Payload, Chunk Number, and Type, and I have the following client-side code:
I need to calculate the number of messages to Take at runtime
(from messages in
    (from messageArgs in Receive
     select Serializer.Deserialize<Message>(new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
 group messages by messages.Id into grouped
 select grouped)
.Subscribe(g =>
{
    var cache = new List<Message>();
    g.TakeWhile((int) Math.Ceiling(MaxPayload / g.First().Size) < cache.Count)
     .Subscribe(cache.Add,
        _ => { /* Rebuild Message Parts From Cache */ });
});
First I create a grouped observable, grouping messages by their unique ID, and then I try to cache all messages in each group until I have collected them all, then sort them and put them together. The above seems to block on g.First().
I need a way to calculate the number to take from the first (or any) of the messages that come through; however, I am having difficulty doing so. Any help?
First is a blocking operator (how else can it return T and not IObservable<T>?)
I think using Scan (which builds an aggregate over time) could be what you need. Using Scan, you can hide the "state" of your message re-construction in a "builder" object.
MessageBuilder.IsComplete returns true when the total size of the messages it has received reaches MaxPayload (or whatever your requirements are). MessageBuilder.Build() then returns the reconstructed message.
I've also moved your "message building" code into a SelectMany, which keeps the built messages within the monad.
(Apologies for reformatting the code into extension methods; I find it difficult to read/write mixed LINQ syntax.)
Receive
    .Select(messageArgs => Serializer.Deserialize<Message>(
        new MemoryStream(Encoding.UTF8.GetBytes(messageArgs.Message))))
    .GroupBy(message => message.Id)
    .SelectMany(group =>
    {
        // Use the builder to "add" message parts to the aggregate
        return group.Scan(new MessageBuilder(), (builder, messagePart) =>
        {
            builder.AddPart(messagePart);
            return builder;
        })
        .SkipWhile(builder => !builder.IsComplete)
        .Select(builder => builder.Build());
    })
    .Subscribe(OnMessageReceived);
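For anyone doing the same thing in RxJS, the operators map across almost one-to-one. A TypeScript sketch, assuming receive$, Message, and a MessageBuilder mirroring the C# one:
import { groupBy, mergeMap, scan, skipWhile, take, map } from 'rxjs/operators';

receive$.pipe(
  groupBy((message: Message) => message.id),
  mergeMap(group$ => group$.pipe(
    // accumulate parts into a builder, like Scan in Rx.NET
    scan((builder, part) => { builder.addPart(part); return builder; }, new MessageBuilder()),
    skipWhile(builder => !builder.isComplete),
    take(1),  // complete the inner stream once the message is rebuilt
    map(builder => builder.build())
  ))
).subscribe(onMessageReceived);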
