What are Solidity Events?

I have been struggling for quite some time to find an explanation of what events are in Solidity (or in the blockchain context). As far as I understand, they are a way of storing (or logging) information on a particular contract that can then be updated throughout the life of that contract. But how is this different from a plain ol' variable? Why can't I just create a variable that is simply updated with new information?

Events in Solidity are a way to write information to the EVM's log facility. They are useful when clients need to be notified of a change or an occurrence in the contract, or when you later need to search for something that has happened and want to go through the logs. These logs are stored on the blockchain as part of the transaction receipts. Logs cannot be accessed from within contracts; they are purely a mechanism to notify the outside world of a change of state or the occurrence of an event in the contract, and they help us write asynchronous applications.
Note that the events themselves are not stored in the block header: each header contains a logsBloom, a Bloom filter over the logs in that block, which lets clients quickly check whether a block might contain logs they care about; the log data itself lives in the transaction receipts.
Events are pieces of data emitted during execution and recorded on the blockchain, but they are not accessible from any smart contract. In that sense they are a bit like console.log in JavaScript or print in Python: output for the outside world, not state the contract can read back.
Emitting an event is also much more gas efficient than writing to a storage variable.
Events are useful for testing the contract as well. If you interact with oracles, you sometimes want to check whether the callback made by the oracle service has actually run; emitting the result of that function (or one of its properties) lets you verify this from outside.
Events are useful if you want to maintain the history/log of every change that happens in a mapping.
A concrete example: the deposit contract created on the Ethereum 1.0 chain is used for depositing ETH for the beacon chain, and an event is emitted every time a deposit is made.
There are two events that must be present in an ERC-20-compliant token:
Transfer: This event must trigger when tokens are transferred, including any zero-value transfers. The event is defined as follows:
event Transfer(address indexed _from, address indexed _to, uint256 _value)
Approval: This event must trigger when a successful call is made to the approve function. The event is defined as follows:
event Approval(address indexed _owner, address indexed _spender, uint256 _value)
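As a side note on what indexed means here: each (non-anonymous) log entry carries up to four 32-byte topics. The first topic is the keccak256 hash of the canonical event signature, and each indexed parameter occupies one further topic, which is what makes filtering by sender or recipient possible. A rough TypeScript sketch (assuming ethers.js v5 is available) of computing the Transfer topic:
// Computes topic[0] for the ERC-20 Transfer event; nodes and explorers use this
// value to filter Transfer logs out of blocks and receipts.
// Assumes ethers.js v5 ("npm install ethers@5").
import { ethers } from "ethers";

const transferTopic = ethers.utils.id("Transfer(address,address,uint256)");
console.log(transferTopic); // keccak256 hash of the canonical event signature

// The two indexed parameters (_from, _to) each occupy one further topic,
// which is what allows filtering transfers by sender or recipient.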
You can read this article for a deeper dive: transaction-receipt-trie-and-logs-simplified

From the docs:
Solidity events give an abstraction on top of the EVM’s logging functionality. Applications can subscribe and listen to these events through the RPC interface of an Ethereum client.
It's easier for an off-chain app to subscribe to new event logs than to a variable change. Especially when the variable is not public.
Same goes for querying historical event logs (easy through the JSON-RPC API and its wrappers such as Web3 or Ethers.js), vs. historical changes of the variable (complicated, would need to query a node for each block and look for the changes proactively).
Example: the ERC-20 token standard defines the Transfer() event. A token contract emits this event each time a transfer of its tokens occurs. This allows a blockchain explorer (or any other off-chain app) to react to it, for example by updating its own database of token holders. Without the event, it would have no way (or at least a very complicated way) to learn about the transfer.
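As a rough illustration of that historical query path (a sketch assuming ethers.js v5, with placeholder RPC URL and token address), an off-chain app can pull past Transfer logs directly through the JSON-RPC API:
// Sketch: querying historical Transfer logs with ethers.js v5.
// The RPC URL and token address below are placeholders for illustration only.
import { ethers } from "ethers";

const provider = new ethers.providers.JsonRpcProvider("https://example-rpc.invalid");
const tokenAddress = "0x0000000000000000000000000000000000000000";
const abi = ["event Transfer(address indexed from, address indexed to, uint256 value)"];

async function listRecentTransfers(): Promise<void> {
  const token = new ethers.Contract(tokenAddress, abi, provider);
  const latest = await provider.getBlockNumber();
  // Fetch every Transfer log emitted in roughly the last 1000 blocks.
  const logs = await token.queryFilter(token.filters.Transfer(), latest - 1000, latest);
  for (const log of logs) {
    console.log(log.args?.from, "->", log.args?.to, log.args?.value.toString());
  }
}

listRecentTransfers().catch(console.error);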

Solidity events are pieces of data that are emitted and recorded on the blockchain. When you emit an event, it creates a log entry that front-end applications can use to trigger changes in the UI.
It's also a cheap form of storage, with the caveat that the contract itself can never read it back.
You can define an event like this:
event Message(address indexed sender, address indexed recipient, string message);
and emit an event like this:
emit Message(msg.sender, _recipient, "Hello World!");
Read this article for more information and this one to learn how to get events in JavaScript using Ethers.js
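As a small sketch of what that looks like on the front end (assuming ethers.js v5 and a placeholder RPC URL and contract address), subscribing to the Message event defined above:
// Sketch: listening for the Message event using ethers.js v5.
// The RPC URL and contract address below are placeholders for illustration only.
import { ethers } from "ethers";

const abi = ["event Message(address indexed sender, address indexed recipient, string message)"];
const provider = new ethers.providers.JsonRpcProvider("https://example-rpc.invalid");
const contract = new ethers.Contract("0x0000000000000000000000000000000000000000", abi, provider);

// Runs every time the contract emits Message; a UI would update its state here.
contract.on("Message", (sender: string, recipient: string, message: string) => {
  console.log(`New message from ${sender} to ${recipient}: ${message}`);
});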

Related

Is there a hook for reacting to a catching up Axon projection?

I want to implement an Axon projection providing a subscription query. The projection persists entities of a specific type based on the events (CRUD). The following should happen when a replay of the projection is executed:
Inform about an empty collection via the subscription query.
Process the events until the projection catches up with the event store.
Inform about the current state of the entities via the subscription query.
My intention is that the subscription query should not emit many updates in a very short time, and I want to avoid reporting intermediate states (e.g. if a specific entity is created and then deleted during the replay, it should not show up in the subscription query at all, because it no longer exists once the projection has caught up).
Currently I cannot implement the third step, because I am missing a hook for reacting to the moment when the Axon projection has caught up again.
I use Axon v4.5 and Spring Boot v2.4.5.
At the moment, the Reference Guide is sadly not extremely specific on how to achieve this. I can assure you though that work is underway to improve the entire Event Processor section, including a clearer explanation of the Replay process and the hooks you have.
However, the hooks that are currently available are documented as they are (you can find them here).
What you can do to know whether your event handlers are replaying or not is to utilize the ReplayStatus enumeration. This enum can be used as an additional parameter on your @EventHandler annotated method, and it holds just two states:
REPLAY
REGULAR
This enumeration allows for conditional logic within an event handler on what to do during replays. If an event handler for example not only updates a projection but also sends an email, you'd want to make sure the email isn't sent again when replaying.
To further clarify how to use this, consider the following snippet:
@EventHandler
public void on(SomeEvent event, ReplayStatus replayStatus) {
    if (replayStatus == ReplayStatus.REGULAR) {
        // Tasks that may only happen when the processor is *not* replaying,
        // e.g. sending emails or emitting subscription query updates.
    }
    // Tasks that can happen during both regular processing and replaying.
}
It's within that if-block that you could invoke the QueryUpdateEmitter, so that updates are only emitted while the event handler is performing the regular (non-replay) event handling process.

How to get newly created resource to client with CQRS and event sourcing based microservices

I'm experimenting with microservices, event sourcing and CQRS. However, I'm a little bit confused about how I go from issuing a command to performing a query to return the new state, specifically with regard to interactions with a web API gateway.
As an example, the simple application I am attempting to write (which probably doesn't actually need any of this; it is just something to aid my learning) creates a random graph and then performs some long-running calculations on it. I've modelled this as two separate services: the GraphService and the ComputationService. The imagined process flow is as follows:
User requests a new random graph.
API gateway constructs a CreateGraph command and sends it to the GraphService.
GraphService command handler creates a graph and publishes a GraphCreated event.
GraphService event handler subscribes to the topic for graph events, processes the GraphCreated event and stores the graph in persistent read storage.
Client somehow gets the newly created graph.
ComputationService event handler subscribes to the topic for graph events, processes the GraphCreated event and begins a potentially long-running computation, e.g. calculating the diameter.
ComputationService publishes a DiameterComputed event.
GraphService event handler subscribes to the topic for computation events, processes the DiameterComputed event and updates the graph in persistent read storage.
Client somehow gets updated - easier than getting the new graph, since the client already has an ID and can poll for changes / use websockets / SSE, etc.
That seems relatively simple. However, my confusion lies in how to go about informing the API gateway, and thus the web client, of the new graph (the "Client somehow gets the newly created graph" step above). In a typical CRUD process, the result of the POST request to create a new graph would be to return the URL of the new resource, for instance. However, with CQRS, commands should return nothing or an exception.
How do I pass information back to the client of the service (in this case the API gateway) about the ID of the new graph so that it can perform a query to get the representation of the new resource and send it to the user? Or at least get an ID so that the web client can ask the API gateway, etc?
As I see it at the moment, after sending a command, everyone is just left hanging. There needs to be some sort of subscription model that can be interrogated for the status of the graph creation. I considered having the API gateway generate a request ID which gets embedded with the CreateGraph command, but this then couples the service to the API.
I'm obviously missing something, but have no idea what. None of the examples I've looked at or discussions I've read address this issue and assume that the ID of whatever resource is already known. I couldn't find any discussions here addressing this issue, but if I've just missed them, please point me there rather than duplicating questions. Any pointers would be hugely welcomed.
How do I pass information back to the client of the service (in this case the API gateway) about the ID of the new graph so that it can perform a query to get the representation of the new resource and send it to the user? Or at least get an ID so that the web client can ask the API gateway, etc?
By listening for the echo.
The basic idea behind at least once delivery is that I'm going to send you a message, and keep sending it over and over until I receive a message that proves you've received at least one copy of my message.
Therefore, my protocol looks something like:
Establish a mailbox where I can collect messages.
Encode into the message the instructions for delivering to my mailbox.
Send the message to you.
Check my mailbox:
if the answer is there, I'm done;
otherwise, I send you another copy of the message.
The mailbox could be implemented any number of ways -- it could be a callback, it could be a promise, it could be a correlation identifier. You could have the signal dispatched by the command handler, when it gets acknowledgement of the write by the book of record, or by the "read model" when the new resource is available.
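A minimal TypeScript sketch of the correlation-identifier flavour of that mailbox (all names are hypothetical, and the broker round trip is faked with a timer just to keep it self-contained):
// Sketch: the API gateway registers a "mailbox" entry under a correlation id,
// sends the CreateGraph command, and resolves once the GraphCreated echo arrives.
import { randomUUID } from "crypto";

interface CreateGraph { correlationId: string; nodeCount: number; }
interface GraphCreated { correlationId: string; graphId: string; }

// The mailbox: pending resolvers keyed by correlation id.
const mailbox = new Map<string, (event: GraphCreated) => void>();

// Invoked by whatever consumes GraphCreated events (queue subscriber, webhook, ...).
function onGraphCreated(event: GraphCreated): void {
  mailbox.get(event.correlationId)?.(event);
  mailbox.delete(event.correlationId);
}

function sendCommand(command: CreateGraph): void {
  // In reality: publish to the message broker. Here we fake the echo coming back.
  setTimeout(() => onGraphCreated({ correlationId: command.correlationId, graphId: randomUUID() }), 50);
}

// Gateway-side helper: send the command, await the echo, then query the read side.
async function createGraph(nodeCount: number): Promise<string> {
  const correlationId = randomUUID();
  const echo = new Promise<GraphCreated>((resolve) => mailbox.set(correlationId, resolve));
  sendCommand({ correlationId, nodeCount });
  const event = await echo; // a real gateway would add a timeout and resend the command here
  return event.graphId;
}

createGraph(10).then((id) => console.log("new graph id:", id));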

ES,CQRS messaging flow

I was trying to understand ES+CQRS and which tech stack can be used.
As per my understanding, the flow should be as below.
UI sends a request to the Controller (HTTP adapter).
Controller calls the application service, passing the Request Object as a parameter.
Application Service creates a Command from the Request Object passed from the controller.
Application Service passes this Command to the Message Consumer.
Message Consumer publishes the Command to the message broker (RabbitMQ).
Two subscribers will be listening for the above command:
a. One subscriber will rebuild the Aggregate from the event store using the command, will apply the command, and the generated event will be stored in the event store.
b. Another subscriber will be at the VIEW end, and will populate the data in the view database/cache.
Kindly suggest whether my understanding is correct.
Kindly suggest whether my understanding is correct
I think you've gotten a bit tangled in your middleware.
As a rule, CQRS means that the writes happen to one data model, and reads in another. So the views aren't watching commands, they are watching the book of record.
So in the subscriber that actually processes the command, the command handler will load the current state from the book of record into memory, update the copy in memory according to the domain model, and then replace the state in the book of record with the updated version.
Having updated the book of record, we can now trigger a refresh of the data model that backs the view; no business logic runs here, this is purely a transform of the data from the model we use for writes to the model we use for reads.
When we add event sourcing, this pattern is the same -- the distinction is that the data model we use for writes is a history of events.
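As a minimal, framework-free TypeScript sketch of that shape (all type and function names here are hypothetical), with the write side as a history of events and the view as a pure transform of it:
// Sketch: the write model is an append-only history of events per stream;
// the read model is a transform of that history.

interface DepositMade { type: "DepositMade"; accountId: string; amount: number; }
type DomainEvent = DepositMade;

// Book of record: per-account event streams.
const eventStore = new Map<string, DomainEvent[]>();

// Command handler: load history, fold to current state, apply domain rules, append.
function handleDeposit(accountId: string, amount: number): void {
  const history = eventStore.get(accountId) ?? [];
  const balance = history.reduce((sum, e) => sum + e.amount, 0); // current state from history
  if (amount <= 0) throw new Error("deposit must be positive");
  if (balance + amount > 1_000_000) throw new Error("account limit exceeded"); // example rule
  history.push({ type: "DepositMade", accountId, amount });
  eventStore.set(accountId, history);
}

// Projection: no business logic, purely a transform from the write model to the read model.
function projectBalances(): Map<string, number> {
  const view = new Map<string, number>();
  eventStore.forEach((history, accountId) => {
    view.set(accountId, history.reduce((sum, e) => sum + e.amount, 0));
  });
  return view;
}

handleDeposit("acc-1", 100);
handleDeposit("acc-1", 50);
console.log(projectBalances().get("acc-1")); // 150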
How is atomicity achieved between writing data to the event store and writing data to the VIEW model?
It's not -- we don't try to make those two actions atomic.
How do we handle the case where the event is stored in the event store but the system crashed before we sent the event to the message queue?
The key idea is to realize that we typically build new views by reading events out of the event store; not by reading the events out of the message queue. The events in the queue just tell us that an update is available. In the absence of events appearing in the message queue, we can still poll the event store watching for updates.
Therefore, if the event store is unreachable, you just leave the stale copy of the view in place, and wait for the system to recover.
If the event store is reachable, but the message queue isn't, then you update the view (if necessary) on some predetermined schedule.
This is where the eventual consistency part comes in. Given a successful write into the event store, we are promising that the effects of that write will be visible in a finite amount of time.
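A small sketch of that catch-up behaviour in TypeScript (the event store client here is a hypothetical in-memory stand-in): the view keeps its own checkpoint, the timer guarantees eventual consistency, and a queue message would merely trigger the same catchUp early:
// Sketch: the view tracks its own position in the event store and catches up from there.
// The message queue is only a nudge; catchUp() also runs on a schedule as a fallback.

interface StoredEvent { position: number; type: string; payload: unknown; }

const storedEvents: StoredEvent[] = []; // stand-in for the real event store

async function readEventsAfter(position: number): Promise<StoredEvent[]> {
  return storedEvents.filter((e) => e.position > position);
}

let lastSeenPosition = 0;

async function catchUp(applyToView: (e: StoredEvent) => void): Promise<void> {
  const newEvents = await readEventsAfter(lastSeenPosition);
  for (const event of newEvents) {
    applyToView(event);                // pure transform into the read model
    lastSeenPosition = event.position; // checkpoint, so a crash only means re-reading
  }
}

// Fallback schedule: even if the queue is down, the view converges in finite time.
setInterval(() => catchUp((e) => console.log("projected", e.type)), 10_000);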

CQRS+ES: Client log as event

I'm developing a small CQRS+ES framework and developing applications with it. In my system, I need to log some client actions and use them for analytics, statistics and maybe, in the future, do something in the domain with them. For example, a (web) client downloads some resource(s) and I need to save the date, time, type (download, partial, ...), region or country (maybe from the IP), etc.; after that, in some view, the client can see the download count or some more complex report. I'm not sure how to implement this feature.
The first solution is to create an analytics context and some aggregate; on each client action I send a command like IncreaseDownloadCounter(resource), then handle the command, raise domain events and update the view. But in this scenario the download has already occurred before I send the command, so it is not really a command, and on the other side version conflicts increase.
The second solution is to raise an event from the client side and update the view model based on it. But with this type of handling my event is not stored in the event store, because it is not raised by a command and never changes any domain context. And if I do store it in the event store, there is no aggregate to handle it when it is fetched later for some other use.
The third solution is to raise an event from the client side and store it in another database, maybe with a special table for each type of event. But this way I have multiple event stores with different schemas, which makes it difficult to recreate view models and to trace events for recreating context state; so if in the future I add some domain that uses this type of event, the events are difficult to use.
What is the best approach and solution for this scenario?
The first solution is to create an analytics context and some aggregate
Unquestionably the wrong answer; the event has already happened, so it is too late for the domain model to complain.
What you have is a stream of events. Putting them in the same event store that you use for your aggregate event streams is fine. Putting them in a separate store is also fine. So you are going to need some other constraint to make a good choice.
Typically, reads vastly outnumber writes, so one concern might be that these events are going to saturate the domain store. That might push you towards storing these events separately from your data model (prior art: we typically keep the business data in our persistent book of record, but the sequence of http requests received by the server is typically written instead to a log...)
If you are supporting an operational view, push back on the requirement that the state be recoverable after a restart. You might be able to get by with building your view off an in-memory model of the event counts, and use something more practical for the representation of the events.
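For the in-memory model of the event counts, a tiny TypeScript sketch (the event shape and names are hypothetical) might look like this:
// Sketch: client-reported download events feed an in-memory counter view.

interface ResourceDownloaded { resourceId: string; country: string; at: Date; }

const downloadCounts = new Map<string, number>();

// Applied to each client event, whether it arrives live from the web tier
// or is replayed from wherever these events are persisted.
function apply(event: ResourceDownloaded): void {
  downloadCounts.set(event.resourceId, (downloadCounts.get(event.resourceId) ?? 0) + 1);
}

apply({ resourceId: "video-42", country: "DE", at: new Date() });
apply({ resourceId: "video-42", country: "FR", at: new Date() });
console.log(downloadCounts.get("video-42")); // 2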
Thanks for your complete answer. So I should create something like the ES schema, without some fields (aggregate name or type, version, etc.), and collect the client events in that repository; then some offline process reads them and updates the read model, or creates a command to do something in the domain space?
Something like that, yes. If the view for the client doesn't actually require any validation by your model at all, then building the read model from the externally provided events is fine.
Are you recommending saving some claim or authorization token of the user and the sending app, for validation in another process?
Maybe, maybe not. The token describes the authority of the event; our own event handler is the authority for the command(s) that is/are derived from the events. It's an interesting question that probably requires more context -- I'd suggest you open a new question on that point.

Events changing state in CQRS

This should be easy to follow, but after some reading I still can't find an answer.
So, say that the user needs to change his mobile number. To accomplish that, we might have a command such as ChangedUserMobileNumber, holding the new number. The domain responsible for handling the command will perform the change in the aggregate and publish an event: UserMobilePhoneChanged.
There is a subscriber for that event in another domain, which also holds the user's mobile number in its aggregate. But according to our software architect, events cannot hold any data, so what we end up with is rather stupid, to say the least:
Domain 1 receives the command to update the mobile number, the number is updated and an event is published. Also, because the event cannot hold data, the command handler in Domain 1 issues yet another command, which is sent to Domain 2. The subscriber of that event lives in Domain 2 too, so we then have a Saga to handle both the event and the command.
In terms of implementation we are using NServiceBus, so we have a saga to handle these messages, and in it we have this line of code, where the entity.IsMobilePhoneUpdated field stored in the saga entity is changed when the event is handled:
bool isReady = (entity.IsMobilePhoneUpdated && entity.MobilePhoneNumber != null);
Effectively the Saga is started by both the command and the event raised in Domain 1, and until this condition is met, the saga is kept alive.
If it was up to me, I would be sending the mobile number in the event itself, I just want to get a few other opinions on this.
Thanks
I'm not sure how a UserMobilePhoneChanged event could be useful in any way unless it contained the new phone number. User asks to change a number, the event shoots out that it has. Should be very simple indeed. Why does your architect say that events shouldn't contain any information?
In the first event-based system I designed, events also had no data, and I enforced that rule. At the time it sounded like a clever decision. After a while I realised that it was dumb, and that I was building a lot of workarounds because of it. It also caused a lot of querying from the event subscribers, even for trivial data. I had no problem changing this "rule" after I realised I was doing it wrong.
Events should have all the data required to make them meaningful, and only the data that makes sense for that event (there is no point in having the user's address in a ChangePhoneNumber message).
If your architect imposes such a restriction, it's not going to be easy to develop a CQRS system. How are the read models updated? Since the events have no data, you either query something to get the data (the write side?) or find some way of sending a command to the read model (and then what's the point of publishing events?). To fix your problem you should try to have a professional discussion with this architect, preferably including other tech leads, and without offending anybody try to get him to relax this constraint.
One argument you could use is Event Sourcing. Event Sourcing is complementary to CQRS and would not make sense without events that carry data; when using event sourcing, the only data you have is the data stored in the events. Even if you don't actually implement event sourcing, you can use its existence as a reason for events to have data.
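To make the contrast concrete, here is a small TypeScript sketch (types and handler names are hypothetical) of the event carrying the new number, so the subscriber in the other domain can update its copy without any extra command, saga or lookup:
// Sketch: the event carries the data it describes, so subscribers need nothing else.

interface UserMobilePhoneChanged { userId: string; newMobileNumber: string; occurredAt: Date; }

// The other domain's copy of the number (its read model / aggregate state).
const phoneNumbersByUser = new Map<string, string>();

function onUserMobilePhoneChanged(event: UserMobilePhoneChanged): void {
  phoneNumbersByUser.set(event.userId, event.newMobileNumber);
}

onUserMobilePhoneChanged({ userId: "user-1", newMobileNumber: "+49 170 0000000", occurredAt: new Date() });
console.log(phoneNumbersByUser.get("user-1")); // +49 170 0000000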
There is little point in finding a technical solution to a people problem.
