I'm new to Go-EVM, and some time ago I got some source code for transaction tracking. But some functions in that source code have since been updated and changed, so here are some questions I want to ask:
The first one is: How to get *snapshot.Tree?
stateDB, err := state.New(block.Root(), state.NewDatabase(db))
Now this statement needs three parameters, and the missing parameter's type is *snapshot.Tree. It is a struct; here is the link to its source code, at line 164.
The second one is: What are AccessList and GasTipFee?
message := types.NewMessage(from, tx.To(), 0, tx.Value(), tx.Gas(), tx.GasPrice(), gasFeeCap, gasTipFee, tx.Data(), accessList, false)
AccessList is also a struct. You can see its definition here. What should I pass in for AccessList and GasTipFee?
I'd really appreciate it if you can help me solve these questions.
In your case you do not need to pass a tree snapshot if you do not have one. The purpose of the tree snapshot is that, if the snapshot matches the given root of the block's trie, it is used to run what they call a pre-fetcher routine, which is in charge of preloading nodes into memory so that the state is more performant when it reaches the commit phase, because it already has most of the required nodes in memory. So in your case, you should be perfectly fine passing nil to that constructor.
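For example, a minimal sketch, assuming a go-ethereum version where state.New takes the snapshot tree as its third argument:

stateDB, err := state.New(block.Root(), state.NewDatabase(db), nil) // nil *snapshot.Tree: no pre-fetching, otherwise identical behaviour
if err != nil {
    return err
}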
As for the AccessList and GasTipFee parameters:
AccessList is an EIP-2930 access list. It's again something optional that transactions can provide, to specify the addresses and storage keys they need access to. You can simply provide a nil slice here.
What you have called GasTipFee is called GasTipCap on master and is basically the limit value of a gas tip, as far as I understand. You can find more information on gas fees in the official documentation.
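Putting it together, a hedged sketch of the message construction (assuming a legacy, pre-EIP-1559 transaction, where the gas price can stand in for both the fee cap and the tip cap; the variable names are only illustrative):

msg := types.NewMessage(
    from,          // sender address
    tx.To(),       // recipient, nil for contract creation
    tx.Nonce(),
    tx.Value(),
    tx.Gas(),
    tx.GasPrice(), // gasPrice
    tx.GasPrice(), // gasFeeCap: equal to the gas price for a legacy tx
    tx.GasPrice(), // gasTipCap: equal to the gas price for a legacy tx
    tx.Data(),
    nil,           // accessList: optional EIP-2930 list, nil is fine
    false,         // isFake: keep sender checks enabled
)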
My goal is to get to the raw driver.Value values as deserialized by a sql driver in its implementation of driver.Rows.Next(). I want to handle the conversion from the values returned by the driver to the needed target types, instead of relying on the automatic conversions built in to Rows.Scan. Note this question does not ask your opinion on whether Rows.Scan "should" be used. I don't want to use it, and I am asking if there is any way to avoid it.
A meaningful answer does not use Rows.Scan at all. The dynamic approach illustrated in Working with Unknown Columns is awful: it invokes all the overhead of Scan and destroys the type information of the source columns, instead shredding the actual driver.Values into sql.RawBytes.
The following hack works, but relies on the internal implementation detail that sql.Rows.Next() populates the internal field lastcols with exactly the unconverted values which I want:
import (
    "database/sql/driver"
    "fmt"
    "reflect"
    "unsafe"
)

vpRows := reflect.ValueOf(rows)                    // rows is a *sql.Rows
vRows := reflect.Indirect(vpRows)                  // now we have the sql.Rows struct
mem := vRows.FieldByName("lastcols")               // unexported field lastcols
unsafeLastCols := unsafe.Pointer(mem.UnsafeAddr()) // Evil
plastCols := (*[]driver.Value)(unsafeLastCols)     // But effective
for rows.Next() {
    rowVals := *plastCols // the raw, unconverted driver.Values for this row
    fmt.Println(rowVals)
}
The normal solution is to implement your own sql.Scanner. But this does use rows.Scan, so it violates your mysterious requirement not to use rows.Scan.
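For completeness, a minimal sketch of that normal approach; the RawValue name is made up, and its Scan method simply keeps whatever value the driver produced:

import "database/sql/driver"

// RawValue captures the driver's value without any conversion.
type RawValue struct {
    Value driver.Value
}

// Scan implements sql.Scanner; src is the value produced by the driver.
// Caution: []byte values are only valid until the next Next/Scan call,
// so a robust version would copy them.
func (r *RawValue) Scan(src interface{}) error {
    r.Value = src
    return nil
}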
If you truly must avoid rows.Scan, you'll need to write your own driver implementation (possibly wrapping an existing driver) which provides access to the driver.Value values without rows.Scan.
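The core of such a wrapper is small; a hedged sketch (recordingRows is a made-up name, and a complete driver would also need to wrap Conn and Stmt so that this type is actually returned from queries):

import "database/sql/driver"

// recordingRows decorates another driver.Rows and keeps a copy of the raw
// values produced by the most recent call to Next.
type recordingRows struct {
    driver.Rows
    Last []driver.Value
}

func (r *recordingRows) Next(dest []driver.Value) error {
    if err := r.Rows.Next(dest); err != nil {
        return err
    }
    r.Last = append(r.Last[:0], dest...) // snapshot the unconverted values
    return nil
}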
I am working on a service that provides information about a few related entities, somewhat like a database. Suppose there are calls to retrieve information about a school:
service MySchool {
  rpc GetClassRoom (ClassRoomRequest) returns (ClassRoom);
  rpc GetStudent (StudentRequest) returns (Student);
}
Now, suppose that I want to find out a class room's information, I'd receive a proto that looks like so:
message ClassRoom {
  string id = 1;
  string address = 2;
  string teacher = 3;
}
Sometimes I also want to know all of the students of the classroom. I am struggling to decide which is the better design pattern.
Option A) Add an extra rpc like so: rpc GetClassRoomStudents (ClassRoomRequest) returns (ClassRoomStudents), where ClassRoomStudents has a single field repeated Student students. This technique requires more than one call to get all the information that we want (and many if we wanted to know information for more than one classroom).
Option B) Add an extra repeated Student students field to the ClassRoom proto, and B') fill it up only when necessary, or B") fill it up whenever the server receives a GetClassRoom call. This may sometimes fetch extra information, or lead to ambiguity about which fields are filled up.
I am not sure what's the best / most conventional way of dealing with this. How have some of you dealt with this?
There is no simple answer. It's a tradeoff between simplicity (option A) and performance (option B), and which solution is best depends on the situation.
In general, I'd recommend to go with the simple solution first, unless your measurements demonstrate that it leads to performance issues. At that point, it's easy to add repeated Student students to ClassRoom and a field bool fetch_students [default=false] to ClassRoomRequest. Then clients are free to continue using the simple API, or choose to upgrade to the more performant API if they need to.
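For instance, a sketch of that upgrade path (field numbers are illustrative; in proto3 a bool field simply defaults to false):

message ClassRoomRequest {
  string id = 1;
  bool fetch_students = 2; // false by default: keep the cheap behaviour
}

message ClassRoom {
  string id = 1;
  string address = 2;
  string teacher = 3;
  repeated Student students = 4; // populated only when fetch_students is set
}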
Note that this isn't specific to gRPC; the same issue is seen in REST APIs, and basically almost any request/response model.
I'm writing a service to learn Go. My main function can be found below. It starts with reading an XML file and storing the items in a slice. I have a /rss endpoint which outputs an RSS feed from the items stored in the "database". This is working fine. I also have an endpoint (/add/{base64}) which is used to add a new item to that slice. Unfortunately I don't know how to do this. Somehow I need to return the new database with the added record, so it becomes available to /rss. But how?
My concrete problem is:
I know how to add a record to the database
But I don't know how to return the full database (including the added record) so the /rss endpoint is able to use it. So I want rest.AddArticle to return the new database, so that the /rss endpoint knows about the added item.
func main() {
    defer glog.Flush()
    // read database
    database := model.ReadFileIntoSlice()
    // initialise mux router
    r := mux.NewRouter()
    // http handles
    r.HandleFunc("/add/{base64url}", rest.AddArticle(database))
    r.HandleFunc("/rss", rest.GenerateRSS(database))
    // start server
    http.Handle("/", r)
    glog.Infof("running on port %d", *port)
    http.ListenAndServe(":"+strconv.Itoa(*port), nil)
}
Or is there some other solution which does the job? I just want database to be available across all packages.
From what I can tell, the problem is that you're writing to your db but reading from the cached version, so the response of /rss just doesn't reflect the model at the time the request is made. If you look at this code:
database := model.ReadFileIntoSlice()
// initialise mux router
r := mux.NewRouter()
// http handles
r.HandleFunc("/add/{base64url}", rest.AddArticle(database))
you somehow need to modify the value of database. There are many ways you could do this; a few options would be:
1) Define it on some object, or at package level, and then modify it directly from wherever AddArticle is defined.
2) Refresh your in-memory version: before you return the results, read from the db again so you're assured of having the latest (with obvious performance implications).
3) Don't pass database by value; make the argument a pointer instead. AddArticle is getting a copy of database rather than the address of the version you're reading from in your rss call. You could instead pass a pointer into that method so that the original copy is modified (it also performs substantially better as your model gets larger).
Based on the simplicity of your program I'd probably do 3), as sketched below. Realistically, 2) is the more robust solution, and serious enterprise software would probably require something along those lines (your model doesn't work if your app is load balanced, or something like that).
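A minimal sketch of option 3), assuming a hypothetical Article type for the slice elements (the decode step is a placeholder for however you already parse the /add/{base64url} payload):

package rest

import "net/http"

type Article struct {
    Title string
}

// AddArticle appends through the pointer, so the grown slice is visible to
// every handler sharing *db. Handlers run concurrently, so a real version
// should also guard db with a sync.Mutex.
func AddArticle(db *[]Article) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        article := Article{Title: r.URL.Query().Get("title")} // placeholder decode
        *db = append(*db, article)
    }
}

// GenerateRSS dereferences the same pointer, so it sees items added later.
func GenerateRSS(db *[]Article) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        for _, a := range *db {
            w.Write([]byte(a.Title + "\n")) // stand-in for real RSS rendering
        }
    }
}

In main, the only change is to pass &database instead of database.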
I never quite understood the "if needed" part of the description.
.fetchAll()
Fetches the given list of Parse.Object.
.fetchAllIfNeeded()
Fetches the given list of Parse.Object if needed.
What is the situation where I might use this and what exactly determines the need? I feel like it's something super elementary but I haven't been able to find a satisfactory and clear definition.
In the example in the API, I notice that the fetchAllIfNeeded() has:
// Objects were fetched and updated.
In the success callback, while the fetchAll only has:
// All the objects were fetched.
So does fetchAllIfNeeded() also save stuff? I'm very confused here.
UPDATES
TEST 1
Going on some of the hints @danh left in the comments, I tried the following things.
var todos = [];
var x = new Todo({content:'Test A'}); // Parse.Object
todos.push(x);
x.save();
// So now we have a todo saved to parse and x has an id. Async assumed.
x.set({content:'Test B'});
Parse.Object.fetchAllIfNeeded(todos);
So in this scenario, my client-side x is different from the server. But x.hasChanged() is false, since we used the set function and the change event was triggered. fetchAllIfNeeded returns no results. So it isn't simply comparing this outright to what is on the server in order to sync and fetch.
I notice that, in the request payload, running fetchAllIfNeeded sends the following interesting thing.
{where: {objectId: {$in: []}}, _method: "GET",…}
So it seems that on the client side, something determines whether an object isNeeded.
Test 2
So now, based on the comments, I tried manipulating the changed state of the object by setting with silent.
x.set({content:'Test C'}, {silent:true});
x.hasChanged(); // true
Parse.Object.fetchAllIfNeeded(todos);
Still nothing interesting. Clearly the server state ("Test A") is different from the client-side state ("Test C"), and I still get results [] and the request payload is:
{where: {objectId: {$in: []}}, _method: "GET",…}
UPDATE 2
Figured it out by looking at the Parse source. See answer.
After many manipulations and then taking a look at the source, I figured this out. Basically fetchAllIfNeeded will fetch those models in an array that have no data, meaning there are no attribute properties and values.
So the use case would be: you have, let's say, a parent object with an array of nested Parse Objects. When you fetch the parent object, the nested child objects in the array will not be included (unless you have the include query constraint set). Instead, the pointers are sent back to the client side, and in your client those pointers are translated into 'empty' models with no data, basically just blank Parse.Objects with ids.
Specifically, the Parse.Object has an internal Boolean property called _hasData which seems to be toggled true any time stuff like set, or fetch, or whatever gives that model attributes.
So, let's say you need to fetch those child objects. You can just do something like
var childObjects = parent.get('children'); // Array
Parse.Object.fetchAllIfNeeded(childObjects);
And it will search for those children that are currently only represented as empty objects with an id.
It's useful as opposed to fetchAll in that you might go through the children array and lazily load one at a time as needed, then at a later time need to "get the rest". fetchAllIfNeeded essentially just filters "the rest" and sends a whereIn query that limits fetching to those child objects that have no data.
In the Parse documentation, they have a comment in the callback response to fetchAllIfNeeded as:
// Objects were fetched and UPDATED.
I think they mean the client-side objects were updated. fetchAllIfNeeded is definitely sending GET calls, so I doubt anything updates on the server side. So this isn't some sync function. This really confused me, as I instantly thought of server-side updating when they really mean:
// Client objects were fetched and updated.
Just like the title says, I want to ask what the difference is between
fromPin()
and
fromLocalDatastore()
By the way, "pin" and "datastore" are two terminologies. What is the difference between the two of them?
Thanks.
There is a slight difference and you can see it from the docs and from the decompiled code of the Parse library (okay, this last one is more complicated...).
The docs says:
fromLocalDatastore(): Change the source of this query to all pinned objects.
fromPin(): Change the source of this query to the default group of pinned objects.
Here you can see that, internally in Parse, there is a way to get all the objects from the entire set of pinned data, without filters, but also from a so-called "default group". This group is defined in the Parse code with the following string: _default (o'rly?).
When you pin something using pinInBackground, you can do it in different ways:
pinInBackground() [without arguments]: Stores the object and every object it points to in the local datastore.
This is what the docs say, but if you look at the code you'll discover that the pin will actually be performed to the... _default group!
public Task<Void> pinInBackground() {
  return pinAllInBackground("_default", Arrays.asList(new ParseObject[] { this }));
}
On the other hand, you can always call pinInBackground(String group) to specify a precise group.
Conclusion: every time you pin an object, it's guaranteed to be pinned to a certain group. The group is "_default" if you don't specify anything in the parameters. If you pin an object to your custom group "G", then a query using fromPin() will not find it, because you didn't put it in "_default", but in "G".
Instead, using fromLocalDatastore(), the query is guaranteed to find your object, because it will search in "_default", "G", and so on.