How can I send multiple types of FT from a single contract? - nearprotocol

At the moment, this contract can create a single token, but what I am looking for is a contract that can create, receive and send multiple tokens.
Is it possible?
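For concreteness, the kind of contract being asked about would keep balances per token as well as per account, instead of a single balance map. A minimal, SDK-agnostic TypeScript sketch of that storage shape (every name below is illustrative, not taken from the question or from any NEAR standard):

type TokenId = string;
type AccountId = string;

// Sketch only: an in-memory stand-in for contract storage that manages
// several fungible tokens, keyed by (tokenId, accountId).
class MultiTokenSketch {
  private balances: Map<string, bigint> = new Map();      // "<tokenId>:<accountId>" -> balance
  private totalSupply: Map<TokenId, bigint> = new Map();  // per-token total supply

  private key(token: TokenId, account: AccountId): string {
    return `${token}:${account}`;
  }

  // "Create" a token by minting an initial supply to an owner.
  mint(token: TokenId, owner: AccountId, amount: bigint): void {
    const k = this.key(token, owner);
    this.balances.set(k, (this.balances.get(k) ?? 0n) + amount);
    this.totalSupply.set(token, (this.totalSupply.get(token) ?? 0n) + amount);
  }

  // Transfer any of the managed tokens between accounts.
  transfer(token: TokenId, from: AccountId, to: AccountId, amount: bigint): void {
    const fromKey = this.key(token, from);
    const fromBalance = this.balances.get(fromKey) ?? 0n;
    if (fromBalance < amount) throw new Error("insufficient balance");
    this.balances.set(fromKey, fromBalance - amount);
    const toKey = this.key(token, to);
    this.balances.set(toKey, (this.balances.get(toKey) ?? 0n) + amount);
  }

  balanceOf(token: TokenId, account: AccountId): bigint {
    return this.balances.get(this.key(token, account)) ?? 0n;
  }
}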

Related

Why would you delete the access keys to your NEAR account containing a smart contract?

This answer about upgradability suggests that at some point you should delete the access keys to the account containing a smart contract: How do you upgrade NEAR smart contracts?
It makes sense that a smart contract should be "frozen" at some point, and you want to give its users confidence that it will not be changed. But what about contract rewards and other funds belonging to the contract account? How would the original owner get access to that if keys are deleted?
The contract should be implemented in such a way that it still allows certain operations.
Let's take a lockup contract as an example. This contract has a single owner, the funds are locked for a certain amount of time, and the contract only exposes specific methods, each guarded by its own logic:
As the owner, I can delegate (stake) my tokens to staking pools, while I still cannot arbitrarily transfer the tokens
As the owner, I can withdraw the rewards from the staking pool through the lockup contract and transfer them to an arbitrary account
Once the lockup time is over, as the owner, I can call the add_full_access_key function (sketched below) and thus gain full access to the account, and even delete it after that (transferring all the tokens to some other account).
All of that is implemented explicitly at the contract level and is easy to review, and given there is no other AccessKey on the lockup contract account, we can be sure that there is no other way to interfere with the contract logic.
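A hedged sketch of such a guarded method in TypeScript follows. The ContractEnv interface stands in for the host/runtime calls (caller account, block time, key management) that a real NEAR contract would get from its SDK; none of these names come from the actual lockup contract.

interface ContractEnv {
  callerAccountId(): string;
  blockTimestamp(): bigint; // e.g. nanoseconds since the epoch
  addFullAccessKey(publicKey: string): void;
}

class LockupSketch {
  constructor(
    private env: ContractEnv,
    private owner: string,
    private lockupExpiresAt: bigint,
  ) {}

  // Only the owner, and only after the lockup period, may regain full access.
  addFullAccessKey(publicKey: string): void {
    if (this.env.callerAccountId() !== this.owner) {
      throw new Error("only the owner can call this method");
    }
    if (this.env.blockTimestamp() < this.lockupExpiresAt) {
      throw new Error("lockup period has not ended yet");
    }
    this.env.addFullAccessKey(publicKey);
  }
}

The point of the sketch is only that the guards are plain, reviewable code: anyone can read them and confirm there is no other path to a full access key.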

Should we create different endpoints in microservices to get a single item and to get a list, or should I call the single-item endpoint multiple times?

Say we have a gRPC microservice which has an endpoint GetDataById(int id). If I come across a use case where I want the data for multiple ids, should I create a separate endpoint like GetDataByIds(int[] ids), or should I call GetDataById(int id) multiple times in parallel?
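For illustration only (the client type and method names below are hypothetical stand-ins for generated gRPC stubs, not from the question), the two options look roughly like this from the caller's side:

interface Data { id: number; payload: string; }

interface DataClient {
  getDataById(id: number): Promise<Data>;        // existing single-item endpoint
  getDataByIds(ids: number[]): Promise<Data[]>;  // candidate batch endpoint
}

// Option 1: reuse the single-item endpoint, fanning out calls in parallel.
async function fetchViaManyCalls(client: DataClient, ids: number[]): Promise<Data[]> {
  return Promise.all(ids.map((id) => client.getDataById(id)));
}

// Option 2: one round trip to a dedicated batch endpoint.
async function fetchViaBatchCall(client: DataClient, ids: number[]): Promise<Data[]> {
  return client.getDataByIds(ids);
}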

How to process concurrent requests coming from a certain user to a specific endpoint sequentially

I am having trouble handling concurrent requests coming from a user to a specific endpoint. The problem I am encountering is that when the user makes a request to a certain endpoint with a certain parameter (specifically a uuid), I pass that parameter to a stored procedure and query the database, and the db returns an error because the first transaction is not yet complete. I want subsequent requests to wait until the previous one has been processed, if the request comes from the same user to that specific endpoint. How do I do that?
I tried to implement it using mutexes, but that didn't seem to work.
I want to solve this problem on the server side only, without touching the database.
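One way to sketch the server-side serialization being asked about, in TypeScript (the per-uuid promise chain below is illustrative, not taken from the question): requests that share a uuid are chained so they run one at a time, while requests for different uuids still run concurrently.

// Per-key serialization: the last pending promise for each uuid is remembered,
// and the next request for that uuid is chained behind it.
const chains = new Map<string, Promise<unknown>>();

function runSequentially<T>(uuid: string, task: () => Promise<T>): Promise<T> {
  const previous = chains.get(uuid) ?? Promise.resolve();
  // Chain after the previous request for this uuid, even if it failed.
  const next = previous.catch(() => undefined).then(task);
  chains.set(uuid, next);
  // Clean up once this chain link settles, so the map does not grow forever.
  const cleanup = () => {
    if (chains.get(uuid) === next) chains.delete(uuid);
  };
  next.then(cleanup, cleanup);
  return next;
}

// Usage in a request handler (the body is a placeholder for the stored-procedure call):
async function handleRequest(uuid: string): Promise<string> {
  return runSequentially(uuid, async () => {
    // By the time this runs, the previous request for the same uuid has finished.
    return `processed ${uuid}`;
  });
}

Note that this only serializes requests within a single server process; if the endpoint runs on several instances, some shared coordination would still be needed.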

GraphQL: Many small mutations, or one bulk mutation?

Let's say I am a user and I am editing my profile on some arbitrary app. The app lets me make a bunch of changes, and when I'm done, I click on "Save" and my profile gets updated.
What is the recommended best practice in GraphQL to handle a large update like this? As I see it, there are a few options:
A) Many small mutations. If the user changed 5 things (i.e., name, email, username, image, bio) the client could fire off 5 mutations to the server.
Pros: smaller, more isolated operations.
Cons: Doesn't this defeat the purpose of "one round trip to the server" in GraphQL, as it would require... 5?
B) Many small mutations, called server-side. Rather than calling 5 mutations from the client, requiring 5 round trips, you could post a data blob to the server and have a function that parses it, and runs individual mutations on the data it finds.
Pros: One round trip
Cons: We have to add another layer to the app to handle this. The new function would get messy, be hard to test, and hard to maintain over time.
C) One large mutation. The user sends the data blob to the server via a single mutation, which sets the new data in bulk on the document rather than running individual mutations on each field.
Pros: good DX (developer experience); one round trip.
Cons: Since fields are being passed in as arguments, this opens the application to attack. A malicious user could try passing in arbitrary fields, setting fields that shouldn't be changed (e.g. an isAdmin field), etc. The mutation would have to be smart enough to know which fields are allowed to be updated, and reject / ignore the rest.
I can't find much on the web about which way is the "right way" to do this kind of thing in GraphQL. Hoping to find some answers / feedback here. Thanks!
A) Many small mutations. If the user changed 5 things (i.e., name, email, username, image, bio) the client could fire off 5 mutations to the server.
You can execute multiple mutations in a single request. No need to call the server multiple times.
Here's an example:
mutation {
  setUserName(name: "new_name") { ok }
  setUserEmail(email: "new_email") { ok }
}
B) Many small mutations, called server-side. Rather than calling 5 mutations from the client, requiring 5 round trips, you could post a data blob to the server and have a function that parses it, and runs individual mutations on the data it finds.
This is exactly what GraphQL does for you when you mutate multiple fields or execute multiple queries at once. Top-level mutation fields are executed serially, in the order they appear, so it behaves like running the individual mutations one after another within a single round trip.
C) One large mutation. The user sends the data blob to the server via a single mutation, which sets the new data in bulk on the document rather than running individual mutations on each field.
You can still set data in bulk even if you are using multiple fields.
This would require you to pass the request to a middleware that builds and executes a single query once all the mutation resolvers have run, instead of updating the database directly.
Cons: Since fields are being passed in as arguments, this opens the application to attack. A malicious user could try passing in arbitrary fields, setting fields that shouldn't be changed (e.g. an isAdmin field), etc. The mutation would have to be smart enough to know which fields are allowed to be updated, and reject / ignore the rest.
You shouldn't accept arbitrary variables; instead, list all the allowed properties explicitly as arguments.
I'd go with the third solution, one large mutation. I'm not sure I understand your point about malicious users passing arbitrary fields: they wouldn't be able to pass fields that are not defined in your schema.
As for the server-side logic, you'd have to put those smart checks in anyway: you can never trust the client!
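As a sketch of that last point in TypeScript with graphql-js (the updateProfile field and its arguments are illustrative, not from the question): only the editable fields are declared as arguments, so an undeclared field such as isAdmin fails GraphQL validation before any resolver runs.

import { buildSchema, graphql } from "graphql";

// Only the fields a user may edit are exposed as arguments.
const schema = buildSchema(`
  type Query { _noop: Boolean }
  type Mutation {
    updateProfile(name: String, email: String, bio: String): Boolean
  }
`);

const rootValue = {
  // The resolver only ever receives the declared arguments; persist them in one write.
  updateProfile: (args: { name?: string; email?: string; bio?: string }): boolean => {
    // saveProfile(args) would go here (hypothetical persistence call)
    return true;
  },
};

// One round trip carries all the changed fields at once.
graphql({
  schema,
  rootValue,
  source: `mutation { updateProfile(name: "new_name", email: "new_email") }`,
}).then((result) => console.log(result));

Passing isAdmin: true in that mutation would be rejected as an unknown argument during validation rather than reaching the resolver.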

microservice messaging db-assigned identifiers

The company I work for is investigating moving from our current monolithic API to microservices. Our current API is heavily dependent on spring and we use SQL server for most persistence. Our microservice investigation is leaning toward spring-cloud, spring-cloud-stream, kafka, and polyglot persistence (isolated database per microservice).
I have a question about how messaging via kafka is typically done in a microservice architecture. We're planning to have a coordination layer between the set of microservices and our client applications, which will coordinate activities across different microservices and isolate clients from changes to microservice APIs. Most of the material we've read about using spring-cloud-stream and kafka indicates that we should use streams at the coordination layer (source) for resource change operations (inserts, updates, deletes), with the microservice being one consumer of the messages.
Where I've been having trouble with this is inserts. We make heavy use of database-assigned identifiers (identity columns/auto-increment columns/sequences/surrogate keys), and they're usually assigned as part of a post request and returned to the caller. The coordination layer may be saving multiple things using different microservices and often needs the assigned identifier from one insert before it can move on to the next operation. Using messaging between the coordination layer and microservices for inserts makes it so the coordination layer can't get a response from the insert operation, so it can't get the assigned identifier that it needs. Additionally, other consumers on the stream (i.e. consumers that publish the data to a data warehouse) really need the message to contain the assigned identifier.
How are people dealing with this problem? Are database-assigned identifiers an anti-pattern in microservices? Should we expose separate microservice endpoints that return database-assigned identifiers so that the coordination layer can make a synchronous call to get an identifier before calling the asynchronous insert? We could use UUIDs but our DBAs hate those as primary keys, and they couldn't be used as an order number or other user-facing generated ids.
If you can create the identifier programmatically up front, when the request is received from the message source, you can embed that identifier in the message header and subsequently use the header information during the database insert and in any other consumers.
But this approach requires a separate verification by the other consumers against the database, so that only committed transactions are processed (if you are concerned about processing only successful inserts).
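A hedged sketch of that idea in TypeScript with kafkajs (topic name, header name, and payload shape are all illustrative): the coordination layer generates the identifier up front, sends it in both the payload and a message header, and can use it immediately for dependent operations and downstream consumers.

import { Kafka } from "kafkajs";
import { randomUUID } from "crypto";

const kafka = new Kafka({ clientId: "coordination-layer", brokers: ["localhost:9092"] });
const producer = kafka.producer();

async function publishUserInsert(user: { name: string; email: string }): Promise<string> {
  // Generate the identifier before publishing, so this layer and every other
  // consumer (e.g. the data-warehouse feed) see the same id in the message.
  const userId = randomUUID();

  await producer.connect();
  await producer.send({
    topic: "user-inserts",                  // illustrative topic name
    messages: [
      {
        key: userId,
        value: JSON.stringify({ id: userId, ...user }),
        headers: { "entity-id": userId },   // the id also travels as a header
      },
    ],
  });

  // The id is available right away for follow-up operations (e.g. inserting the
  // user's address with a foreign reference), without waiting for the consumer.
  return userId;
}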
At our company, we built a dedicated service responsible for unique id generation, and every other service grabs the ids it needs from there.
These generated ids couldn't be used as an order number, but I think they shouldn't be used for that job anyway. If you need to sort by creation date, it's better to have a created_date field.
One more thing that used to bug me about this approach is that the primary resource might be persisted after another resource that references it by id. For example, an insert-user payload and an insert-user-address payload are sent asynchronously. The insert-user payload contains a generated unique id, and the user-address payload contains that id as a foreign reference back to the user. The insert of the user address might be processed before the insert of the user, but that's totally fine. I think this is called eventual consistency.
