How to pre-compute the receipt_id / is the receipt_id pre-computed based on the transaction hash? - nearprotocol

I'm trying to link some receipts to the transactions they came from. I don't see a way in the docs to query a receipt and get the transaction it's associated with (there is only an RPC endpoint for the transaction->receipt direction: https://docs.near.org/api/rpc/transactions#transaction-status-with-receipts)
I also saw this code blob from nearcore, which seems to imply that I could do this without an RPC call by pre-computing it from a transaction.
EDIT: I found out how by digging through nearcore.
The hash is created with this function, which is called by a create_receipt_id_from_transaction function.
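For reference, here is a rough TypeScript sketch of that derivation. It assumes the legacy scheme in nearcore, where the id is the sha256 hash of the parent hash concatenated with a little-endian u64 salt (salt 0 for the receipt created directly from a transaction); newer protocol versions mix block hashes into the input as well, and deriveId is a made-up name, so verify the exact inputs against the nearcore source before relying on this.

```typescript
import { createHash } from "crypto";
import bs58 from "bs58";

// Hypothetical helper mirroring nearcore's legacy create_nonce_with_nonce:
// sha256 over (parent hash || little-endian u64 salt).
function deriveId(baseHash: Uint8Array, salt: bigint): Buffer {
  const saltBytes = Buffer.alloc(8);
  saltBytes.writeBigUInt64LE(salt);
  return createHash("sha256")
    .update(Buffer.concat([Buffer.from(baseHash), saltBytes]))
    .digest();
}

// Assumption: the first receipt of a transaction is derived from the
// transaction hash with salt 0. Newer protocol versions also hash in
// block hashes, so check create_receipt_id_from_transaction in nearcore.
const txHash = bs58.decode("<base58 transaction hash>");
console.log(bs58.encode(deriveId(txHash, 0n)));
```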

Related

How can I get the to-address of a transaction from a transaction hash in Solana

I'm trying to get all the information (from-address, to-address, amount, etc.) in my JS backend to confirm a transaction using its hash. The transaction object returned from the getTransaction RPC call has an array of owner account PublicKeys (the from-address), but I can't find a way to find the to-address. How can I find this?
You need to iterate over the CompiledInstructions (found on TransactionResponse.transaction.message.instructions) and decode them one by one.
Regular SOL transfers would be transfer instructions called on the system program. Web3.js has a helper for that purpose.
SPL transfers would be transfer instructions called on the token program. I did something similar for the initialcapoffering.com order form; this code might be helpful.
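If you don't want to decode the CompiledInstructions by hand, a rough sketch of an alternative: ask the RPC node for the jsonParsed encoding via getParsedTransaction, which decodes system-program transfers for you (SPL transfers would show up under the spl-token program instead):

```typescript
import { Connection, clusterApiUrl } from "@solana/web3.js";

// Sketch: find the destination of a plain SOL transfer given a signature.
// The jsonParsed encoding offloads instruction decoding to the RPC node.
async function findToAddress(signature: string): Promise<string | undefined> {
  const connection = new Connection(clusterApiUrl("mainnet-beta"));
  const tx = await connection.getParsedTransaction(signature, {
    maxSupportedTransactionVersion: 0,
  });
  for (const ix of tx?.transaction.message.instructions ?? []) {
    // System-program SOL transfers parse to
    // { type: "transfer", info: { source, destination, lamports } }.
    if ("parsed" in ix && ix.program === "system" && ix.parsed.type === "transfer") {
      return ix.parsed.info.destination as string;
    }
  }
  return undefined; // no plain SOL transfer found in this transaction
}
```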

Possible to implement a "job" pattern with GraphQL?

Is there a reasonable way to implement a job-based query paradigm in GraphQL?
In particular, something like the following:
Caller submits a search request
Backend returns a job ID
Caller receives status updates on the job as it runs
Caller separately can retrieve pages of data from the job results
I guess the problem I see here is that we are splitting the process into two steps: one is making the request, and the second is retrieving the data. As a result, the fields requested in the first request do not correspond to what is returned (just a job ID). And similarly, a call to retrieve results has the same issue.
Subscriptions don't really solve this problem either, I don't believe. They might help with requesting data that takes a long time to return, but that isn't quite the same as a job-based API.
Maybe this is a niche use case, and I have no doubt that it wasn't what GraphQL was initially built to solve. But, I'm just wondering if this is something doable, or if this is more of trying to fit a square peg into a round hole.
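One way to model the flow described above, as a sketch rather than an established convention: a mutation that returns a job, plus a query for status and paged results. Every type and field name here is made up for illustration:

```typescript
// Hypothetical schema for a job-based search flow (illustrative names only).
const typeDefs = /* GraphQL */ `
  enum JobStatus { PENDING RUNNING DONE FAILED }

  type SearchResult {
    id: ID!
    snippet: String!
  }

  type ResultPage {
    items: [SearchResult!]!
    totalCount: Int!
  }

  type Job {
    id: ID!
    status: JobStatus!
    results(page: Int!, pageSize: Int!): ResultPage  # pages of job output
  }

  type Mutation {
    submitSearch(query: String!): Job!  # returns immediately with a job ID
  }

  type Query {
    job(id: ID!): Job  # poll for status / fetch result pages
  }

  type Subscription {
    jobStatus(id: ID!): JobStatus!  # optional push-based status updates
  }
`;
```

The mismatch noted above (the first request's fields not corresponding to the final results) is inherent to the two-step shape: the mutation's selection set describes the Job, and the later query's selection set describes the results.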

GraphQL Asynchronous query results

I'm trying to implement a batch query interface with GraphQL. I can get a request to work synchronously without issue, but I'm not sure how to approach making the result asynchronous. Basically, I want to be able to kick off the query and return a pointer of sorts to where the results will eventually be when the query is done. I'd like to do this because the queries can sometimes take quite a while.
In REST, this is trivial. You return a 202 and return a Location header pointing to where the client can go to fetch the result. GraphQL as a specification does not seem to have this notion; it appears to always want requests to be handled synchronously.
Is there any convention for doing things like this in GraphQL? I very much like the query specification but I'd prefer to not leave the client HTTP connection open for up to a few minutes while a large query is executed on the backend. If anything happens to kill that connection the entire query would need to be retried, even if the results themselves are durable.
What you're trying to do is not solved easily in a spec-compliant way. Apollo introduced the idea of a @defer directive that does pretty much what you're looking for, but it's still an experimental feature. I believe Relay Modern is trying to do something similar.
The idea is effectively the same -- the client uses a directive to mark a field or fragment as deferrable. The server resolves the request but leaves the deferred field null. It then sends one or more patches to the client with the deferred data. The client is able to apply the initial request and the patches separately to its cache, triggering the appropriate UI changes each time as usual.
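For illustration, a deferred field might look like the following sketch (hedged: the directive was experimental, and the exact syntax varied between Apollo's prototype and later spec proposals):

```typescript
import gql from "graphql-tag";

// Sketch of the @defer flow: the server answers immediately with `title`
// and a null `comments`, then streams a patch containing the comments.
const STORY_QUERY = gql`
  query Story($id: ID!) {
    story(id: $id) {
      title
      comments @defer {
        text
      }
    }
  }
`;
```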
I was working on a similar issue recently. My use case was to submit a job to create a report and provide the result back to the user. Creating a report takes a couple of minutes, which makes it an asynchronous operation. I created a mutation which submitted the job to the backend processing system and returned a job ID. Then I periodically poll the jobs field using a query to find out about the state of the job and, eventually, the results. As the result is a file, I return a link to a different endpoint where it can be downloaded (a similar approach to the one GitHub uses).
Polling for actual results works as expected, but I guess this might be better solved by subscriptions.
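A minimal sketch of that mutation-plus-polling shape (all field names here are illustrative, not from a real schema):

```typescript
import gql from "graphql-tag";

// Kick off the long-running job; only a job ID comes back.
const SUBMIT_REPORT = gql`
  mutation {
    submitReportJob(reportType: SALES) {
      jobId
    }
  }
`;

// Poll until status is DONE, then follow downloadUrl, which points at a
// separate download endpoint (the GitHub-style approach mentioned above).
const POLL_JOB = gql`
  query Job($jobId: ID!) {
    job(id: $jobId) {
      status      # e.g. PENDING | RUNNING | DONE | FAILED
      downloadUrl # populated once the report file is ready
    }
  }
`;
```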

Am I misusing GraphQL if I must decompose REST data, then re-aggregate it?

We are considering using GraphQL on top of a REST service (using the FHIR standard for medical records).
I understand that the pattern with GraphQL is to aggregate the results of multiple, independent resolvers into the final result. But a FHIR-compliant REST server offers batch endpoints that already aggregate data. Sometimes we'll need à la carte data (a patient's age or address only, for example), but quite often we'll need most or all of the data available about a particular patient.
So although we can get that kind of plenary data from a single REST call that knits together multiple associations, it seems we will need to fetch it piecewise to do things the GraphQL way.
An optimization could be to eager-load and memoize all the associated data anytime any resolver asks for any data. In some cases this would be appropriate, while in other cases it would be serious overkill. But discerning when it would be overkill seems impossible given that resolvers should be independent. Also, it seems bloody-minded to undo and then redo something that the REST service is already perfectly capable of doing efficiently.
So:
Is GraphQL the wrong tool when it sits on top of a REST API that can efficiently aggregate data?
If GraphQL is the right tool in this situation, is eager-loading and memoization of associated data appropriate?
If eager-loading and memoization is not the right solution, is there an alternative way to take advantage of the REST service's ability to aggregate data?
My question is different from this question and this question because neither touches on how to take advantage of another service's ability to aggregate data.
An alternative approach would be to parse the request inside the resolver for a particular query. The fourth parameter passed to a resolver is an object containing extensive information about the request, including the selection set. You could then await the batched request to your API endpoint based on the requested fields, return the result of the REST call, and let your lower-level resolvers handle parsing it into the shape the data was requested in.
Parsing the info object can be a PITA, although there are libraries out there for that, at least in the Node ecosystem.
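A sketch of that approach, using graphql-parse-resolve-info (one of the Node libraries alluded to) to read the selection set; fetchPatientBundle is a hypothetical wrapper around the REST service's aggregating endpoint:

```typescript
import { GraphQLResolveInfo } from "graphql";
import { parseResolveInfo, ResolveTree } from "graphql-parse-resolve-info";

// Hypothetical wrapper around the FHIR batch endpoint.
declare function fetchPatientBundle(
  id: string,
  fields: string[]
): Promise<unknown>;

const resolvers = {
  Query: {
    async patient(
      _parent: unknown,
      args: { id: string },
      _context: unknown,
      info: GraphQLResolveInfo // 4th argument: request info, incl. selection set
    ) {
      const tree = parseResolveInfo(info) as ResolveTree;
      // Top-level fields requested on Patient, e.g. ["age", "address"].
      const requested = Object.keys(tree.fieldsByTypeName.Patient ?? {});
      // One aggregated REST call; lower-level resolvers then just read
      // from the returned blob instead of fetching piecewise.
      return fetchPatientBundle(args.id, requested);
    },
  },
};
```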

Parse.com. Execute backend code before response

I need to know the relative position of an object in a list. Let's say I need to know the position of a certain wine among all the wines added to the database, based on the votes received from users. The app should be able to receive the ranking position as an object property when retrieving a "wine" class object.
This should be easy to do on the backend side, but I've looked at Cloud Code and it seems it is only able to execute code before or after saving or deleting, not before reading and returning a response.
Is there any way to do this task? Any workaround?
Thanks.
I think you would have to write a Cloud function to perform this calculation for a particular wine.
https://www.parse.com/docs/cloud_code_guide#functions
This would be a function you would call manually. You would have to provide the "wine" object or objectId as a parameter and then have your Cloud Function return the value you need. Keep in mind there are limitations on Cloud Functions; read the documentation about time limits. You also don't want to make too many API calls every time you run this. It sounds like your computation could be fairly heavy if your dataset is large and you aren't caching at least some of the information.
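A sketch of such a Cloud Function in the modern Parse Server style (the original Parse.com API used a request/response callback instead); the "Wine" class, its "votes" field, and the wineId parameter are assumed names:

```typescript
declare const Parse: any; // provided as a global inside Cloud Code

// Rank = 1 + the number of wines with strictly more votes.
Parse.Cloud.define("wineRank", async (request: any) => {
  const wine = await new Parse.Query("Wine").get(request.params.wineId);

  const ahead = new Parse.Query("Wine");
  ahead.greaterThan("votes", wine.get("votes"));

  // count() is a single API call but still scans the class; consider
  // caching ranks if the dataset is large or this is called often.
  return (await ahead.count()) + 1;
});
```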
