What is the alternative for get_esdt_balance when the address is on a different shard? - elrond

As the documentation states
get_esdt_balance(address: &ManagedAddress, token_id: &TokenIdentifier, nonce: u64) -> BigUint
will return the balance, but it only works if the address is in the same shard as the smart contract. Is there a known alternative, or a smart way to make this work when the address is in a different shard?

I believe there is currently no way to do what you want. You should probably rethink the approach: maybe have the user send the ESDT to your contract, do the check, and send the ESDT back?

There is no way to retrieve the balance of the user without doing a cross-shard call. I would also see it as bad practice to rely on the user having a certain balance since tokens are easily transferable and you won't get any security from doing so.
However, if you really want to do that (without having to rely on snapshots or similar mechanics) you would have to deploy a smart contract on each shard that has an endpoint to check for the user's balance and call that using an async call.
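For illustration, the per-shard helper contract can be as small as a single view endpoint wrapping get_esdt_balance. A rough sketch using the elrond-wasm contract macros follows; the contract and endpoint names (BalanceChecker, getEsdtBalanceOf) are made up, and the main contract would still have to reach it through an async call and pick up the result in a callback.

```rust
// Hypothetical per-shard helper contract: deployed on the user's shard, it can
// read the local ESDT balance and return it to a cross-shard async caller.
#![no_std]

elrond_wasm::imports!();

#[elrond_wasm::contract]
pub trait BalanceChecker {
    #[init]
    fn init(&self) {}

    // View endpoint the main contract calls asynchronously from another shard.
    #[view(getEsdtBalanceOf)]
    fn get_esdt_balance_of(
        &self,
        address: ManagedAddress,
        token_id: TokenIdentifier,
        nonce: u64,
    ) -> BigUint {
        self.blockchain().get_esdt_balance(&address, &token_id, nonce)
    }
}
```

Keep in mind the caveat from the answer above: by the time the callback fires, the balance may already have changed, so this is only a snapshot, not a guarantee.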

Related

How to structure a microservice management layer calls

I am building a microservice that will have a management layer, and another corresponding microservice for a public api. The management layer's client will be imported into the api service as well as a few other apps. What I am wondering is what the proper way to structure the mgmt layer's calls would be. I'm using rpc.
For example, let's say the service contains Human, Dog, Food. Food has to be owned by a Human OR Dog, not both and not neither.
Now, since the mgmt service will only be accessed through the client and not by URL, would it be better to define the specs like:
POST: /humans/{id}/food - Client call: mgmtService.createFoodForHuman(humanId, Food)
POST: /dogs/{id}/food - Client call: mgmtService.createFoodForDog(dogId, Food)
or
POST: /food Client call: mgmtService.createFood(Food)
where in the second case Food will require the user to pass in either a human id or a dog id.
To me, the first example seems more straightforward from a code-client perspective, since the methods are more clearly defined, while the second example doesn't give any info about what kind of id is needed. But I am interested in any thoughts on which one is best practice. Thanks!
It actually depends. At larger scale you would want to apply the first method. Sometimes, when you work with other teams to build a complex service, another team may have to fix your bug because you are absent, or for whatever reason can't fix it at that time; if the bug is really urgent, you wouldn't want it to become a blocker just because they did not understand how your code works.
Well, you can always provide documentation for every API detail on a platform such as Confluence, but it still takes time to go looking for it.
On the other hand, with the second method you get one API that covers all cases.
TL;DR -> Large scale 1st method, small scale 2nd method.
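To make the trade-off concrete, here is a rough Rust-flavoured sketch of the two client interfaces; all type and method names are hypothetical:

```rust
// Hypothetical domain types for the example.
struct HumanId(u64);
struct DogId(u64);
struct Food {
    name: String,
}

// Option 1: one explicit method per owner type; the caller cannot pass the wrong kind of id.
trait MgmtServiceExplicit {
    fn create_food_for_human(&self, human_id: HumanId, food: Food) -> Result<Food, String>;
    fn create_food_for_dog(&self, dog_id: DogId, food: Food) -> Result<Food, String>;
}

// Option 2: one generic method; the "human OR dog, not both" rule moves into the payload.
enum Owner {
    Human(HumanId),
    Dog(DogId),
}

trait MgmtServiceGeneric {
    fn create_food(&self, owner: Owner, food: Food) -> Result<Food, String>;
}
```

Note that option 2 only stays self-explanatory if the owner is modelled as an explicit either-or type like Owner above, rather than as two optional id fields.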

Eventually consistent DB: How to deal with relational data?

So let's say we have microservices that use an event broker to communicate with each other.
To preserve sovereignty over its data, each microservice keeps denormalized documents.
So whenever the data changes, the service that changed it fires a 'DataAHasChanged' event. Then all the microservices subscribed to this event update the documents they hold, to keep data A consistent. (A here is not a foreign key but the actual data, since it's denormalized.)
This seems really bad to me if services have many documents containing data A, and if data A changes often. I would rather just make an API call to the other services, using data A's ID as a foreign key.
Real world use case would be:
A user creates 'contract requests', each containing information about multiple vendors.
Vendor information changes often.
So if there are 2000 contract requests, then whenever a vendor changes their information, we have to go through every contract request and update the denormalized document.
Is eventual consistency still the best practice in this case, or should I just use a synchronous call to read the data from the vendor service?
Thank you.
I would revisit the microservices decoupling and ask a question: who is the source of truth for each type of data? You'll probably arrive at one service owning the documents, and that service will be responsible for updating those documents as well.
Even with a dedicated service owning the documents, you still have to decide what consistency guarantees you need. Usually you start with SLAs: how available should your service be? How is the data stored? Often the underlying data storage will dictate those.
Also, I would like to note that even with synchronous calls your system will be eventually consistent: since it takes time to execute all those calls, there will be a period when the system as a whole might see stale data.
If you really need true strong consistency, you will have to pick the right storage for it. I would go with a strongly consistent option as long as my performance and availability goals are met. And the reason for strong consistency: it is much easier to reason about, hence the system gets simpler.
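As a concrete illustration of the fan-out described in the question (all names are hypothetical), the vendor service publishes a VendorUpdated event and the contract-request service rewrites every denormalized copy it holds; a minimal in-memory sketch:

```rust
use std::collections::HashMap;

// Hypothetical event published by the vendor service.
struct VendorUpdated {
    vendor_id: u64,
    name: String,
    phone: String,
}

// Denormalized vendor data embedded in each contract request document.
struct EmbeddedVendor {
    vendor_id: u64,
    name: String,
    phone: String,
}

struct ContractRequest {
    vendors: Vec<EmbeddedVendor>,
}

// The contract-request service's event handler: every document embedding the
// vendor has to be rewritten, which is exactly the fan-out the question worries about.
fn on_vendor_updated(store: &mut HashMap<u64, ContractRequest>, event: &VendorUpdated) {
    for request in store.values_mut() {
        for vendor in request.vendors.iter_mut() {
            if vendor.vendor_id == event.vendor_id {
                vendor.name = event.name.clone();
                vendor.phone = event.phone.clone();
            }
        }
    }
}
```

The alternative mentioned in the question is to store only the vendor_id inside each contract request and call the vendor service, the source of truth, whenever the vendor details are actually needed.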

Compensating Events on CQRS/ES Architecture

So, I'm working on a CQRS/ES project in which we have some doubts about how to handle trivial problems that would be easy to handle in other architectures.
My scenario is the following:
I have a customer CRUD REST API, and each customer has a unique document (number), so when I'm registering a new customer I have to verify whether another customer with that document already exists, to avoid duplicates. But in a CQRS/ES architecture, where we have eventual consistency, I found that this kind of validation can be very hard to address.
It is important to notice that my problem is not across microservices, but between the command application and the query application of the same microservice.
Also, we are using EventStore.
My current solution:
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%. That's when my second validation kicks in: when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document, and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on EventStore.
Although this works, there are two things that bother me here. The first is my command application relying on the query application: if my query application is down, my command side is affected (today I just return false from my validation if the query side is down, but still...). The second is: should a query/read model really be able to emit events? And if so, what is the correct way of doing it? Should the command side expose some kind of API for that? Or should the query side emit the event directly to EventStore using some common shared library? And if I have more than one view/read model, which one should handle this?
I really hope someone can shed some light on these questions and help me with these matters.
For reference, you may want to review what Greg Young has written about set validation.
I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right?
That's exactly right - your read model is a stale copy, and may not have all of the information collected by the write model.
That's when my second validation kicks in: when my query application is processing the events and saving them to my PostgreSQL, I check again if there is a customer with that document, and if there is, I reject that event and emit a compensating event to undo/cancel/inactivate the customer with the duplicated document, therefore finishing that customer stream on EventStore.
This spelling doesn't quite match the usual designs. The more common implementation is that, if we detect a problem when reading data, we send a command message to the write model, telling it to straighten things out.
This is commonly referred to as a process manager, but you can think of it as the automation of a human supervisor of the system. Conceptually, a process manager is an event sourced collection of messages to be sent to the command model.
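A rough sketch of that idea under hypothetical names: the process manager consumes events as they reach the read side, keeps its own small projection of the documents it has seen, and turns a detected duplicate into a command for the write model instead of emitting an event itself.

```rust
use std::collections::HashMap;

// Hypothetical event and command types.
struct CustomerRegistered {
    customer_id: u64,
    document: String,
}

enum Command {
    DeactivateCustomer { customer_id: u64, reason: String },
}

// The process manager owns a small projection (document -> first customer seen)
// and turns detected duplicates into commands addressed to the write model.
struct UniqueDocumentProcessManager {
    seen: HashMap<String, u64>,
}

impl UniqueDocumentProcessManager {
    fn new() -> Self {
        Self { seen: HashMap::new() }
    }

    fn handle(&mut self, event: &CustomerRegistered) -> Option<Command> {
        match self.seen.get(&event.document) {
            // Duplicate detected: tell the write model to straighten things out.
            Some(_) => Some(Command::DeactivateCustomer {
                customer_id: event.customer_id,
                reason: format!("document {} already registered", event.document),
            }),
            None => {
                self.seen.insert(event.document.clone(), event.customer_id);
                None
            }
        }
    }
}
```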
You might also want to consider whether you are modeling your domain correctly. If documents are supposed to be unique, then maybe the command model should be using the document number as a key in the book of record, rather than using the customer. Or perhaps the document id should be a function of the customer data, rather than being an arbitrary input.
As far as I know, EventStore doesn't have transactions across different streams.
Right - one of the things you really need to be thinking about in general is where your stream boundaries lie. If set validation has significant business value, then you really need to be thinking about getting the entire set into a single stream (or by finding a way to constrain uniqueness without using a set).
How should I send a command message to the write model? via API? via a message broker like Kafka?
That's plumbing; it doesn't really matter how you do it, so long as you are sure that the command runs within its own transaction/unit of work.
So what I do today is, in my command application, before saving the CustomerCreated event, I ask the query application (using PostgreSQL) if there is a customer with that document, and if not, I allow the event to go on. But that doesn't guarantee 100%, right? Because my query can be desynchronized, so I cannot trust it 100%.
No, you cannot safely rely on the query side, which is eventually consistent, to prevent the system from stepping into an invalid state.
You have two options:
You permit the system to enter a temporary, pending state and then, eventually, bring it into a valid permanent state. For this you could allow the command to pass and yield a CustomerRegistered event, and then, using a Saga/Process manager, verify against a collection uniquely indexed by document and issue a compensating command (not an event!), e.g. UnregisterCustomer.
Instead of sending the command directly, you create and start a Saga/Process that pre-allocates the document in a collection uniquely indexed by document and, only if that succeeds, sends the RegisterCustomer command. You can model the Saga as an entity.
So, in both solutions you use a Saga/Process manager. For the system to be resilient, you should make sure that the RegisterCustomer command is idempotent (so you can resend it if the Saga fails or is restarted).
You've butted up against a fairly common problem. I think the other answer by VoiceOfUnreason is worth reading. I just wanted to make you aware of a few more options.
A simple approach I have used in the past is to create a lookup table. Your command tries to register the key in a table with a unique constraint. If it can reserve the key, the command can go ahead.
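A rough sketch of that reservation step, using hypothetical names; in a real system the store would be a database table with a unique constraint (e.g. a unique index in PostgreSQL) checked inside the command handler's unit of work, not an in-memory set:

```rust
use std::collections::HashSet;

// Stand-in for a table with a UNIQUE constraint on the document number.
// In production this would be an INSERT that fails on the unique index.
struct DocumentReservations {
    taken: HashSet<String>,
}

impl DocumentReservations {
    fn new() -> Self {
        Self { taken: HashSet::new() }
    }

    // Returns true only for the first caller to reserve a given document number.
    fn try_reserve(&mut self, document: &str) -> bool {
        self.taken.insert(document.to_string())
    }
}

fn handle_register_customer(
    reservations: &mut DocumentReservations,
    document: &str,
) -> Result<(), String> {
    if !reservations.try_reserve(document) {
        // Reject the command up front instead of compensating after the fact.
        return Err(format!("document {} is already registered", document));
    }
    // ...append CustomerRegistered to the event stream here...
    Ok(())
}
```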
Depending on the nature of the data and the domain, you could let this 'problem' occur and raise additional events to mark it. If it is something that's important to the business or to the way the application works, then you can deal with it either manually or at the time via compensating commands. If the latter, it would make sense to use a process manager.
In some (rare) cases where speed/capacity is less of an issue, you could consider old-fashioned locking and transactions. Admittedly these are much better suited to CRUD-style implementations, but they can be used in CQRS/ES.
I have more detail on this in my blog post: How to Handle Set Based Consistency Validation in CQRS
I hope you find it helpful.

How to avoid time conflict or overlap for CalDAV?

I am studying the CalDAV protocol.
I have a question about time conflicts or overlaps in CalDAV.
Let me explain with an example scenario.
I made an event from 1 PM to 6 PM in a calendar. Then I try to make another event from 2 PM to 7 PM in the same calendar. That is a time conflict or overlap.
How does a CalDAV server resolve this conflict? Does the server return an error when the second event is created?
I searched RFC 6638, but I could not find a solution.
Please help with my question.
Thanks for reading.
It is up to the CalDAV client to decide how to behave when overlap is involved.
If the client decides to write an event that overlaps another, the server will write the overlapping event.
When scheduling is involved (userA wants to invite userB to a meeting but would like to avoid picking a time slot that is already busy in userB's calendar), the CalDAV client can query the FREEBUSY status for a user (see RFC 4791). There's also availability, which allows a CalDAV client to retrieve a user's availability (think business hours).
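For reference, such a free-busy check is a REPORT request against the calendar collection with a CALDAV:free-busy-query body, roughly along the lines of RFC 4791 (the time range below is just a placeholder):

```rust
// CALDAV:free-busy-query REPORT body (see RFC 4791). A client sends this with the
// HTTP REPORT method against the calendar collection whose busy time it wants to
// inspect; the response contains a VFREEBUSY component listing busy periods.
const FREE_BUSY_QUERY: &str = r#"<?xml version="1.0" encoding="utf-8" ?>
<C:free-busy-query xmlns:C="urn:ietf:params:xml:ns:caldav">
  <C:time-range start="20240105T130000Z" end="20240105T190000Z"/>
</C:free-busy-query>"#;

fn main() {
    // Printed here for illustration; a real client would put this in the REPORT request body.
    println!("{}", FREE_BUSY_QUERY);
}
```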
The functionality Kim is asking about is a very common one for business calendaring systems (not having the same person booked twice, etc.).
I think in the CalDAV world there are two parts to this:
a) First, the client is supposed to perform a freebusy query to check whether a user is available, and then show a conflict warning or whatever seems appropriate.
This is how many systems, including Exchange, work. Siri also does this kind of conflict detection (“hey, you already have an event at that time, shall I still create the conflicting one, master?”).
b) But in a reasonable system you actually need to guarantee that the information isn’t outdated at PUT time, i.e. that no second client has scheduled the same attendee/resource.
I think in CalDAV you can accomplish that by testing the sync-token or the CTag using an If header on the PUT. I.e. let the PUT only succeed if the whole underlying collection didn’t change. And if it did (the PUT will fail with a conflict), redo the freebusy, then try again.
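A rough sketch of such a conditional PUT, assuming the server accepts the collection's sync token as a state token in a WebDAV If header (server support for this varies, and every URL, credential and token below is a placeholder); it uses the reqwest and tokio crates:

```rust
use reqwest::{Client, StatusCode};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Obtained earlier from a sync-collection REPORT on the calendar; placeholder value.
    let sync_token = "http://example.com/ns/sync/1234";
    // Minimal placeholder iCalendar payload for the new event.
    let ics_body = "BEGIN:VCALENDAR\r\n...\r\nEND:VCALENDAR\r\n";

    let response = Client::new()
        .put("https://caldav.example.com/calendars/userA/default/new-event.ics")
        .basic_auth("userA", Some("secret"))
        .header("Content-Type", "text/calendar; charset=utf-8")
        // Tie the PUT to the collection state our free-busy check was based on.
        .header(
            "If",
            format!(
                "<https://caldav.example.com/calendars/userA/default/> (<{}>)",
                sync_token
            ),
        )
        .body(ics_body)
        .send()
        .await?;

    if response.status() == StatusCode::PRECONDITION_FAILED {
        // The collection changed underneath us: redo the free-busy check, then retry the PUT.
        println!("calendar changed, re-run free-busy and retry");
    } else {
        println!("event stored: {}", response.status());
    }
    Ok(())
}
```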
I don't think there is a reliable way to do this in CalDAV across collections (calendars); that is, if the availability of a resource changed because it got booked in a different calendar, the targeted sync collection won't usually change its sync tag and the PUT would run through.
The bad thing about CalDAV (w/ scheduling) is that PUTs are not idempotent anymore. Otherwise you could do the PUT, recheck whether it still has no conflicts, and if so drop it after the fact.

Additional validation logic before sending a request to the external system

I'm publishing a comment on Instagram using their API. In the documentation they describe the rules that the message being sent has to satisfy. So far my approach has always been to add a validation layer just before the message is sent to the service, checking whether it satisfies all the requirements. I preferred to get back to the user faster with the proper error, without sending any requests to the social network.
It requires maintaining additional logic in my application, and in the case of Instagram, where the rules are not so simple (not just, e.g., limiting the length of the message), I started wondering whether that's the optimal approach.
For example, one of the requirements on comments is that they cannot contain more than 4 hashtags, which forces me to keep logic that can count how many hashtags are in a string.
Would you think that the effort put into keeping that validation is worth it? I always thought so, but am not so sure any more.
Would you think that the effort put into keeping that validation is worth it? I always thought so, but am not so sure any more.
Absolutely yes, unless you don't care about the user at all.
The user is your primary value and their comfort comes above all else. So double validation is a must for good software.
I think you shouldn't struggle to implement absolutely ALL of Instagram's checks, but at least the ones users fail most often.
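For what it's worth, a pre-check like the hashtag rule from the question can stay very small. A rough sketch, assuming a hashtag is simply a whitespace-separated word starting with '#' (which may not match Instagram's exact definition):

```rust
// Counts hashtags as whitespace-separated words starting with '#'.
// This is an assumed definition; Instagram's own rules may differ.
fn count_hashtags(comment: &str) -> usize {
    comment
        .split_whitespace()
        .filter(|word| word.starts_with('#') && word.len() > 1)
        .count()
}

// Local pre-validation so the user gets an error without a round trip to the API.
fn validate_comment(comment: &str) -> Result<(), String> {
    const MAX_HASHTAGS: usize = 4;
    if count_hashtags(comment) > MAX_HASHTAGS {
        return Err(format!(
            "comment contains more than {} hashtags",
            MAX_HASHTAGS
        ));
    }
    Ok(())
}

fn main() {
    assert!(validate_comment("great shot! #sunset #beach").is_ok());
    assert!(validate_comment("#a #b #c #d #e too many tags").is_err());
}
```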
