SQLAlchemy operation logging - events

I would like to log SQLAlchemy INSERT/UPDATE/DELETE operations after they have been committed to the database. Is there a way to do this based on SQLAlchemy events?
Events such as after_update and after_insert refer to Python objects and fire during the flush, before the commit, so they are not really helpful in the case of a rollback, for example.
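One possible approach, shown here only as a minimal sketch and not an endorsed recipe, is to buffer changes at flush time via session-level events and emit the log only in after_commit, dropping the buffer on rollback. The pending_log key and logger name are made up for illustration.

```python
# Sketch: collect changes during flush, log them only once the transaction commits.
# The helper names (pending_log, collect_changes, ...) are illustrative.
import logging

from sqlalchemy import event
from sqlalchemy.orm import Session

log = logging.getLogger("sql_audit")


@event.listens_for(Session, "after_flush")
def collect_changes(session, flush_context):
    # session.new / session.dirty / session.deleted are only meaningful
    # around the flush, not after the commit, so record them here.
    pending = session.info.setdefault("pending_log", [])
    pending.extend(("INSERT", obj) for obj in session.new)
    pending.extend(("UPDATE", obj) for obj in session.dirty)
    pending.extend(("DELETE", obj) for obj in session.deleted)


@event.listens_for(Session, "after_commit")
def log_changes(session):
    # Only now do we know the rows really made it into the database.
    for operation, obj in session.info.pop("pending_log", []):
        log.info("%s %r", operation, obj)


@event.listens_for(Session, "after_rollback")
def discard_changes(session):
    # Nothing was persisted, so drop the buffered entries.
    session.info.pop("pending_log", None)
```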

Related

How to implement Event sourcing and a database in a microservice architecture?

I have been learning lately about microservices architecture and its features.
In this source it appears that event sourcing replaces the database; however, it is later stated:
The event store is difficult to query since it requires typical queries to reconstruct the state of the business entities. That is likely to be complex and inefficient. As a result, the application must use Command Query Responsibility Segregation (CQRS) to implement queries.
In the CQRS Page the author seems to describe a singular database that listens to all events and reconstructs itself.
My questions are:
What is actually needed to implement event sourcing with a queryable database? Particularly:
Where is the events database? Where is the queryable database? Do I need to have a separate event store for every service, or can I store events in a message broker like Kafka? Is the CQRS database actually one "whole" database that collects all the events? And how can all of this scale?
I'm sorry if I'm not clear with my question; I am very confused myself. I guess I'm looking for a full example architecture of how things will look in the grand picture.
Where is the queryable database?
I'm guessing this is the most useful starting point, because it will be most familiar. The queryable database is in the same place that your this-is-the-entire-database was when you weren't doing event sourcing.
That could be a database exclusively to support this microservice, or it could be a database that is shared by several microservices, with some part of the schema where this microservice has exclusive write authority. Another way of thinking about this: the microservices are using different logical databases, which might be physically deployed together.
Where is the events database?
Same general idea - you can have one events database per microservice; or you could have several different microservices sharing the same database. Again, you have partitioning of authority, and the same logical vs physical separation to consider.
What changes with the introduction of events and CQRS is that the query/reporting database no longer stores the authoritative copy of the information that is used by the microservice. The authoritative information lives in the event store, and the query/reporting database acts more like a cache.
Our command handlers will typically load information only from the authoritative store (aka the events); that's the data that we lock if we are processing commands concurrently.
We copy information that is stored in the events into the query/reporting database(s). Depending on our needs, that can be done synchronously by the command handlers, but it is more common to use background batch processing to do that work, meaning that the data in the reporting database will often be a little bit stale.
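To make the split concrete, here is a purely illustrative sketch with in-memory stand-ins and made-up names (EventStore, project): the command handler consults only the authoritative events, and a separate projection step folds those events into the queryable read model.

```python
# Illustrative only: the event store is authoritative, the read model is a
# rebuildable cache of it. Names and event kinds are made up.
from dataclasses import dataclass, field


@dataclass
class Event:
    entity_id: str
    kind: str          # e.g. "OrderPlaced", "OrderShipped"
    payload: dict


@dataclass
class EventStore:      # authoritative store, one stream per entity
    streams: dict = field(default_factory=dict)

    def append(self, event: Event) -> None:
        self.streams.setdefault(event.entity_id, []).append(event)

    def load(self, entity_id: str) -> list:
        return list(self.streams.get(entity_id, []))


def handle_place_order(store: EventStore, order_id: str, total: int) -> None:
    # Command handler: decide based on the authoritative events only.
    if any(e.kind == "OrderPlaced" for e in store.load(order_id)):
        raise ValueError("order already placed")
    store.append(Event(order_id, "OrderPlaced", {"total": total}))


def project(store: EventStore, read_model: dict) -> None:
    # Projection: rebuild (or incrementally update) the queryable read model.
    # Can run synchronously, but is more commonly a background batch job.
    read_model.clear()
    for stream in store.streams.values():
        for e in stream:
            if e.kind == "OrderPlaced":
                read_model[e.entity_id] = {"status": "placed", **e.payload}
            elif e.kind == "OrderShipped":
                read_model[e.entity_id]["status"] = "shipped"
```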
can I store events in a message broker like Kafka?
Current consensus is that Kafka cannot reliably be used for event sourcing as understood by the CQRS community.
https://issues.apache.org/jira/browse/KAFKA-2260
https://cwiki.apache.org/confluence/display/KAFKA/KIP-27+-+Conditional+Publish
Roughly, the problem is this: when you have two processes with the authority to write events, how do you ensure that they don't introduce inconsistencies? With event stores we can use locks, or conditional writes (aka compare and swap), to ensure that nobody came along and snuck in a few extra events that might change the events we are writing.
With Kafka, there doesn't seem to be a mechanism that supports that kind of prevention, so you need to lean more on detecting conflicts after the fact and compensating for them ("apologies").
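To illustrate what a conditional write buys you, here is a hedged sketch of an expected-version append. The Stream and ConcurrencyError names are made up, and a real event store would perform the check-and-write atomically (a lock or a conditional write) rather than with a plain comparison.

```python
# Sketch of a compare-and-swap style append: the writer states which version
# of the stream it based its decision on, and the append fails if someone
# else has written in the meantime.
class ConcurrencyError(Exception):
    pass


class Stream:
    def __init__(self):
        self.events = []

    @property
    def version(self) -> int:
        return len(self.events)

    def append(self, new_events, expected_version: int) -> None:
        # In a real event store this check-and-write is atomic.
        if self.version != expected_version:
            raise ConcurrencyError(
                f"expected version {expected_version}, stream is at {self.version}"
            )
        self.events.extend(new_events)


# Usage: read, decide, then append against the version you read.
stream = Stream()
seen = stream.version
stream.append(["AccountOpened"], expected_version=seen)      # succeeds
try:
    stream.append(["AccountClosed"], expected_version=seen)  # stale: raises
except ConcurrencyError as err:
    print(err)
```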
Is the CQRS database actually one "whole" database that collects all the events?
Logically? No. But you certainly can combine them physically in the same appliance. For example, message-db is "just" a Postgres schema with some tables, functions, and so on. You certainly could combine that with the tables you use for queries and reports.
I'm looking for a full example architecture of how things will look in the grand picture.
The materials published by Greg Young in 2010 might be a decent starting point.
Event sourcing does not replace the DB. It has some benefits and some challenges, so choose it wisely; if you are not comfortable with it, don't choose it. You can implement a microservice style without event sourcing.
Queryable DB - the simple solution is to implement the CQRS pattern and keep your query DB in sync with the event store.
The event DB should live with the owning service; for example, if you are keeping events about orders, it should be in the Order service. (Yes, other services can have a replica of the same data.)
You may use Kafka as intermediate storage for events, but not as the final one.
CQRS is not about one DB. It is a pattern where we use two DB models, one for commands and another one for queries.
If you work in Java, please refer to the book "Microservice Patterns" by Chris Richardson; if you are on the C# or Microsoft technology stack, you may refer to "https://github.com/dotnet-architecture/eShopOnAzure".

Cache Coherency with Tarantool

I understand that Tarantool has ACID transactions within a stored procedure. My question is: does it also make sure the in-memory data is in sync with the persistent file-system data? For example, if I change 5 records using a stored procedure and something goes wrong while writing the changes to the WAL file, will the in-memory cache roll back to the original values for ALL 5 records?
Also, while an update transaction is in progress, will other readers see 'dirty' uncommitted records, or a consistent view of the records as they existed before the transaction started?
Thanks
Tarantool has special functions for transaction control[1] inside stored procedures. But there are some limitations; for instance, you can't call fiber.yield()[2] (including calls that yield under the hood, e.g. fio, sockets and so on) inside a box.begin() ... box.commit() section. You can find more about transaction control here: https://tarantool.io/en/doc/1.9/book/box/atomic.html?highlight=yield.
Tarantool also supports fsync[3].
[1] https://tarantool.io/en/doc/1.9/book/box/box_txn_management.html?highlight=commit#lua-function.box.commit
[2] https://tarantool.io/en/doc/1.9/reference/reference_lua/fiber.html?highlight=yield#lua-function.fiber.yield
[3] https://tarantool.io/en/doc/1.9/reference/configuration/index.html?highlight=fsync#confval-wal_mode
Seeing dirty, uncommitted records is possible only if the user uses fibers and doesn't control their own code. That means it is possible only if a logical error exists inside the user's code.
You are welcome.

Commits in the absence of locks in CockroachDB

I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
When two users update the same database field (e.g. a general ledger account total field) at the same time what does CockroachDB do? Assuming each is updating many other non-overlapping fields at the same time as part of the respective transactions.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Do we need to take care of additional possibilities, compared with, for example, ACID/locking PostgreSQL, when we write the database access code in our application?
Or is writing code for accessing CockroachDB, for all practical purposes, the same as for accessing a standard RDBMS with respect to commits and in general?
Of course, ignoring performance issues / joins, etc.
I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
CockroachDB does have locks, but uses different terminology. Some of the existing documentation that talks about optimistic concurrency control is currently being updated.
When two users update the same database field (e.g. a general ledger account total field) at the same time what does CockroachDB do? Assuming each is updating many other non-overlapping fields at the same time as part of the respective transactions.
One of the transactions will block waiting for the other to commit. If a deadlock between the transactions is detected, one of the two transactions involved in the deadlock will be aborted.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Yes.
Do we need to take care of additional possibilities, compared with, for example, ACID/locking PostgreSQL, when we write the database access code in our application?
Or is writing code for accessing CockroachDB, for all practical purposes, the same as for accessing a standard RDBMS with respect to commits and in general?
At a high level there is nothing additional for you to do. CockroachDB defaults to serializable isolation, which can result in more transaction restarts than weaker isolation levels, but comes with the advantage that the application programmer doesn't have to worry about anomalies.
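One practical habit worth adopting (not specific to this answer, and equally relevant to serializable PostgreSQL) is a client-side retry loop for serialization failures (SQLSTATE 40001), which is how an aborted transaction surfaces to the application. A rough sketch using psycopg2 over CockroachDB's PostgreSQL wire protocol; the DSN, table and SQL are placeholders.

```python
# Rough sketch: retry a transaction when the database asks for a restart
# (serialization failure, SQLSTATE 40001). DSN and SQL are placeholders.
import time

import psycopg2
from psycopg2.errors import SerializationFailure


def transfer(conn, src, dst, amount, max_retries=5):
    for attempt in range(max_retries):
        try:
            with conn.cursor() as cur:
                cur.execute(
                    "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, src),
                )
                cur.execute(
                    "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, dst),
                )
            conn.commit()          # an abort, if any, surfaces here or earlier
            return
        except SerializationFailure:
            conn.rollback()        # transaction was aborted; try again
            time.sleep(0.1 * 2 ** attempt)
    raise RuntimeError("transaction did not commit after retries")


conn = psycopg2.connect("postgresql://user@localhost:26257/bank")  # placeholder DSN
transfer(conn, src=1, dst=2, amount=100)
```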

neo4j slows down after lots of inserts

I'm the owner of the Blockchain2graph project, which reads data from the Bitcoin Core REST API and inserts blocks, addresses and transactions as graph objects in Neo4j.
After some imports, the process slows down until memory is full. I don't want to use CSV imports. My problem is not performance; my goal is to insert things without the application stopping because of memory (even if it takes quite a lot of time).
I'm using spring-boot-starter-data-neo4j.
In my code, I call session.clear() from time to time, but it doesn't seem to have an impact. After restarting Tomcat 8, things go fast again.
As your project is about mass inserts, I wouldn't use an OGM like Spring Data Neo4j for writing the data.
You don't want a session to keep your data around on the client.
Instead, use Cypher directly, sending the updates you get from the blockchain API as a batch per request; see my blog post for some examples (some of which we also use in SDN/Neo4j-OGM under the hood). A rough sketch of the batching pattern follows after this answer.
You can still use SDN for individual entity handling (CRUD); that's what OGMs are good for, in my book, to reduce the boilerplate.
But for more complex read operations that have aggregation, filtering, projection and path matches I'd still use Cypher on an annotated repository method, returning rows that can be mapped to a list of DTOs.
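The batch-per-request idea looks roughly like this. It is shown with the Neo4j Python driver purely for brevity (the same Cypher applies from Java), and the labels, properties and connection details are made up.

```python
# Sketch of batched writes: one Cypher request per batch of rows via UNWIND,
# instead of one OGM save per entity. Labels/properties here are made up.
from neo4j import GraphDatabase

CYPHER = """
UNWIND $batch AS row
MERGE (b:Block {hash: row.hash})
SET b.height = row.height
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))


def write_blocks(blocks, batch_size=1000):
    # blocks is a list of dicts like {"hash": "...", "height": 123}
    with driver.session() as session:
        for i in range(0, len(blocks), batch_size):
            session.run(CYPHER, batch=blocks[i:i + batch_size]).consume()
```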

Using Oracle Streams to implement audit trails

I'm going to implement asynchronous audit trail functionality for a highly loaded system using Oracle Streams (log mining on redo and archive logs). In my case the audit trail mustn't slow down any DML operations on my set of tables. The audit must also contain additional information about the end user's identity and the date and time of modification.
Does anyone have experience implementing audit trails using Oracle Streams? Is it a good idea to go this way?
Are there any tutorials with tips and tricks about implementing audit trails using Oracle Streams?
Oracle claims the auditing features in the database create an insignificant amount of overhead. Have you tried those as a test case to see how it performs? It doesn't require any DML triggers on the tables. I've used them and got no noticeable difference but the system resources weren't maxed out either.
Using Streams for auditing sounds possible, but I think it's an overly complicated solution. I suppose you could use Streams to replicate transactions to another database and then use the auditing in that database. You're still going to add I/O load to wherever you store your redo logs.
Agree with #JOTN. One more thing to add with regard to Oracle Streams: it is deprecated in 12c and is being packaged/offered as GoldenGate with a separate license cost.

Resources