EventStore 3.1 - SQL Persistence - How can I use a non-dbo schema? - event-sourcing

For the project I'm working on, I can't use the [dbo] schema. From looking at the EventStore source, it doesn't look trivial to use a non-dbo schema.
So far, the best I've come up with is to use a custom dialect like this:
Subclass CommonSqlDialect
Add a private instance of MsSqlDialect
Then override all the virtual properties of CommonSqlDialect to do something like this:
public override string AppendSnapshotToCommit
{
    get { return customizeSchema(_msSqlDialect.AppendSnapshotToCommit); }
}

private string customizeSchema(string dboStatement)
{
    // Redirect the default schema and the unqualified table names to [notdbo].
    return dboStatement
        .Replace("[dbo]", "[notdbo]")
        .Replace(" Commits", " [notdbo].Commits")
        .Replace(" Snapshots", " [notdbo].Snapshots");
}
I also have to customize the InitializeStorage property to replace "sysobjects" with "sys.objects" so I can add an additional constraint on the schema name.
This works, but it seems like there should be wireup options for customizing the schema and table names.
UsingSqlPersistence(...)
    .WithSchema(...)
    .WithCommitsTable(...)
    .WithSnapshotsTable(...)
Is there a clearly better way to handle this that I've missed?

While I can see a potential need to customize the table names, the existing version does not support that. All you need to do is subclass MsSqlDialect and provide your custom version to the wireup, like so:
UsingSqlPersistence(...)
    .WithDialect(new MsSqlDialectWithCustomTableNames());

A solution that doesn't require any code changes, and is good for security:
Create a new user in the database and give it read/write access only to the new schema.
Set the new schema as the user's default schema.
Add a new 'EventStore' connection string to the config file.
Pass this new connection string to the UsingSqlPersistence wireup.
None of the queries in the event store are prefixed with a schema name, so changing the user's default schema effectively 'redirects' all the calls to the new schema.
What's more, having a dedicated event store user with limited permissions is a good thing anyway. Make sure that other database users don't have access to your new schema.

Related

How can I have custom identifiers/primary keys for my resources?

My table has a primary key other than id, and react-admin requires the DataProvider to return an id field in the response. Can I configure different primary keys/identifiers for my resources?
I am using this library - https://github.com/Steams/ra-data-hasura-graphql
Right now I have made a few changes to the library code to make this work, but I need a way to implement it so that anyone using this library doesn't have to go through the whole codebase to make it work.
I was thinking of passing a configuration like this:
const config = {
  'primaryKey': {
    'tableName': 'primaryKey1',
    'tableName2': 'primaryKey2'
  }
};
Thanks.
I don't believe it is currently possible with the 0.1.0 release. However, in Hasura you can expose the column under a different name:
(Screenshot: the Hasura console option for setting a custom exposed column name.)

How to check if exists when using CrudRepository#getOne()

I have a Document entity. To edit a document you must acquire a DocumentLock.
The API request to edit a document looks like the one below; edit=true allows fetching the record from the database with FOR UPDATE.
GET /api/document/123?edit=true
On the back-end side we do something like this (oversimplified, of course):
Document document = documentRepository.getOne(documentId); // <--- (1)
if (document == null) { // <--- (2) This is always false
    // Throw exception
}
DocumentLock dl = DocumentLock.builder()
        .lockedBy(user)
        .lockedAt(NOW)
        .document(document)
        .build();
documentLockRepository.save(dl);
We are using getOne() because we do not need to fetch the whole document from the database; we are just creating a relation with the DocumentLock object.
The question is: how do we ensure the document actually exists? getOne() always returns a proxy, and I do not see any obvious way to check whether the record exists in the database.
Versions:
spring-data-jpa: 2.2.6.RELEASE
hibernate: 5.4.12
Update: removed additional questions.
Use findById or existsById. Yes, they do access the database; that is the point, isn't it?
getOne is explicitly for use cases where you already know the entity exists and only need the id wrapped in something that looks like the entity.
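For example, a minimal sketch of that flow with an existence check first (DocumentNotFoundException is a placeholder for whatever you throw in this situation):

if (!documentRepository.existsById(documentId)) {        // lightweight exists query
    throw new DocumentNotFoundException(documentId);      // placeholder exception type
}
Document document = documentRepository.getOne(documentId); // still only a lazy proxy
DocumentLock dl = DocumentLock.builder()
        .lockedBy(user)
        .lockedAt(NOW)
        .document(document)
        .build();
documentLockRepository.save(dl);

In more recent Spring Data JPA versions, getOne has been deprecated in favour of getReferenceById, which serves the same purpose.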
If the repository's return value or its transactional behaviour is confusing, check your repository code, your entity class design, the session configuration used for querying, or even your datastore design, because everything looks fine in this code fragment.

How to ensure data integrity with a domain that changes

I'm working on a project where I applied DDD principles.
In order to ensure domain integrity I validate each domain model (entities or value objects) on creation.
Example of the user entity:
class User {
  constructor(opts) {
    this.email = opts.email;
    this.password = opts.password;
    this.validate();
  }

  validate() {
    if (typeof this.email !== 'string') {
      throw new Error('email is invalid');
    }
    if (typeof this.password !== 'string') {
      throw new Error('password is invalid');
    }
  }
}
The validate method is a naive implementation of validation (I know I should verify the email with a regex and handle the error in a more effective way).
This model is then persisted using the userRepository module.
Now, imagine I want to add a new property username to my user model; my validate method will then look like this:
validate() {
  if (typeof this.email !== 'string') {
    throw new Error('email is invalid');
  }
  if (typeof this.password !== 'string') {
    throw new Error('password is invalid');
  }
  if (typeof this.username !== 'string') {
    throw new Error('username is invalid');
  }
}
The problem is that previously stored user models will not have the username property, which is now required. Therefore, when I fetch data from the database and try to construct the model, it will throw an error.
To fix this problem I see multiple solutions (but none seems good to me):
Create an anti-corruption layer in the user repository (create a default username if it is not defined)
Relax the invariant in my domain model (username is not required)
Use cron services that update database entities based on the domain change (again, setting a default username)
The problem is that old user models stored will not have the username property which is now required.
Yup, that's a problem.
Here's how I think about it -- the persisted copy of your domain model is a message, sent by an instance of your domain model running in the past to an instance of your domain model running in the future.
If you want those messages to be compatible, then you need to accept certain constraints in the design of your message schema.
One of those constraints is that you don't add new required fields to an existing message type.
Adding optional fields is fine, because systems that don't care can ignore the optional fields, and systems that do care can provide a default value when the field is missing.
But if you need to add a new required field, then you create a new message.
The event sourcing community has to worry about this sort of thing a lot (events are messages); Greg Young wrote Versioning in an Event Sourced System, which has good lessons on the versioning of messages.
To fix this problem I see multiple solutions (but none seems good to me)
I agree, these are all kind of lousy - in the sense that they are all introducing a mechanism for deriving a "default" user name where none exists. That being the case, the field is effectively optional; so why claim that it is required?
In a situation where the field isn't required, but you want to stop accepting new data that doesn't include this field -- you probably want to put new validation on the data input code path. That is to say, you can create a new API with messages that require the field, validate those messages, and then use the domain model with the optional field to store and fetch the data.
So adding a new required field is an anti-pattern in DDD?
Adding new required fields is an anti-pattern in messaging; DDD has little to do with it.
You shouldn't be expecting to be able to add required fields to existing schema in a backwards compatible way. Instead, you extend the message schema by introducing a new message in which the field is required.
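For instance, a minimal sketch of that split (shown in Java with invented names; the exact shape depends on your stack): the new message type requires username, while the domain model keeps it optional so that old persisted records still load.

import java.util.Optional;

// New input message: username is required at this boundary.
class RegisterUserV2 {
    final String email;
    final String password;
    final String username;

    RegisterUserV2(String email, String password, String username) {
        if (username == null || username.isEmpty()) {
            throw new IllegalArgumentException("username is required");
        }
        this.email = email;
        this.password = password;
        this.username = username;
    }
}

// Domain model: username stays optional, so old records need no migration.
class User {
    final String email;
    final String password;
    final Optional<String> username;

    User(String email, String password, Optional<String> username) {
        this.email = email;
        this.password = password;
        this.username = username;
    }
}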
I thought applying DDD principles was supposed to help handle business logic complexity and to help design evolving software and evolving domain models.
It does, but it isn't magic. If your new models aren't backward compatible with the old models, then you are going to have to manage that change in some way:
You might declare bankruptcy, and simply forget all previous history.
You might migrate your existing data to the new data model.
You might maintain the two different data models in parallel.
In other words, backwards compatibility is a long term concern that you should be thinking about as you design your solution.

Make the entire Symfony app read-only

I need to set up a live demo of a Symfony app.
How can I make everything read-only? The users should be able to try all the features but not make any persistent change visible to others.
I could remove the INSERT and UPDATE privileges from the MySQL user, but that would result in an ugly 500 error when they try to save something...
A quick and dirty way to make your entire app read-only:
AppBundle/EventSubscriber/EntitySubscriber.php
namespace AppBundle\EventSubscriber;

use Doctrine\Common\EventSubscriber;
use Doctrine\ORM\Event\PreFlushEventArgs;

class EntitySubscriber implements EventSubscriber
{
    public function getSubscribedEvents()
    {
        return [
            'preFlush'
        ];
    }

    public function preFlush(PreFlushEventArgs $args)
    {
        $entityManager = $args->getEntityManager();
        $entityManager->clear();
    }
}
services.yml
services:
    app.entity_subscriber:
        class: AppBundle\EventSubscriber\EntitySubscriber
        tags:
            - { name: doctrine.event_subscriber, connection: default }
I suppose you've already sorted this out by now. But if not:
Use a dummy database. Copy it from the original DB, let them play, and drop it when you no longer need it.
If you can't create and drop databases, you can still do the trick: just add temporary prefixes to the table names in your Doctrine entities. There's no need to rewrite the entire app, only a few lines. Run migrations to create the new tables, and drop them whenever you want later.
Use a virtual machine. Make a snapshot before the show and roll back to it afterwards.
These are all fairly easy approaches, and they are platform independent.
Handling this at the Symfony application level has one of two disadvantages: either you don't save anything, so the demo doesn't look very convincing to the customer, or you have to manipulate the code so much that you throw a lot of work away right after the show.
Maybe you can use the session for this, or Memcache, which you can implement in Symfony (some examples are available on the web). Hope this helps.

Getting old values of updated domain properties in Grails

I am trying to implement the beforeUpdate event in Grails domain classes. I need to audit-log both the old and new values of the domain's attributes. I see that we can use the isDirty check, or Domain.dirtyPropertyNames, which returns the list of properties that are dirty on the domain instance, while getPersistentValue returns the old value from the table, so I can have both values.
To implement this I will use the beforeUpdate event in the domain class and call a logging service from there, passing it the id of the User domain. Using this id I can get the user instance in the service and then check whether any fields are dirty using the methods above. Or do I need to log the audit entry when I am actually doing the update in the UserController's update action?
Which is the better approach? I want to confirm whether this is the right one.
Also, what other things do I need to take care of? For example:
1) What if the attributes are domain object references and not simple types?
2) Anything else I need to watch out for, like not flushing the Hibernate session, since I'm thinking of implementing this as a call to a service from the domain class.
Regards,
Priyank
Edit: I tried this in the beforeUpdate event of the User domain that I want to audit-log for update activity:
def beforeUpdate = {
    GraauditService service = AH.getApplication().getMainContext().getBean('graauditService')
    service.saveUserUpdateEntry(this.id) // id property of the User domain
}
In the service method I do:
def saveUserUpdateEntry(Long id) {
    User grauser = User.get(id)
    println("user=" + grauser)
    println "Dirty Properties -: ${grauser.dirtyPropertyNames}"
    println "Changed value for firstName = -: ${grauser.firstName}"
    println "Database value for firstName = -: ${grauser.getPersistentValue('firstName')}"
}
I try to do the update from the UI for email, first name and last name, and I get the following on the console:
user=com.gra.register.User : 1
Dirty Properties -: [eMail, firstName, lastName]
Changed value for firstName = -: sefser
Database value for firstName = -: administer
user=com.gra.register.User : 1
Dirty Properties -: []
Changed value for firstName = -: sefser
Database value for firstName = -: sefser
possible nonthreadsafe access to session
I am not able to figure out:
1) Why am I getting two sets of output? Is the event called twice, once before the commit and once after it?
2) How do I remove or handle the 'possible nonthreadsafe access to session' message? (I tried withNewSession in the function, but it made no difference.)
Thanks in advance.
Rather than using GORM event handlers for audit logging, use the audit-logging plugin. It will take away a lot of your pain.
Hope this helps.
Alternatively, if you want much finer control over what you are doing, consider using a subclass of Hibernate's EmptyInterceptor. This serves two purposes:
It gives you much finer control over what you audit-log and how.
It keeps all your audit-logging logic in one place, which helps you maintain your code.
See the Hibernate API documentation for EmptyInterceptor.
Note: Hibernate does not ship an implementation that does this for you, nor does it provide a subclass with the default behaviour you need, so you will have to write a custom implementation.
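For reference, a minimal sketch of such a subclass (the auditLog helper, the filtering, and the registration of the interceptor with the session factory are assumptions, not something Hibernate provides):

import java.io.Serializable;
import java.util.Objects;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

// Sketch only: compares the previous and current state of every dirty entity
// during flush and hands changed properties to a hypothetical auditLog() helper.
// You could narrow it to your User domain class with an instanceof check, and
// you still need to register the interceptor with the session factory.
public class AuditInterceptor extends EmptyInterceptor {

    @Override
    public boolean onFlushDirty(Object entity, Serializable id,
                                Object[] currentState, Object[] previousState,
                                String[] propertyNames, Type[] types) {
        if (previousState != null) {
            for (int i = 0; i < propertyNames.length; i++) {
                if (!Objects.equals(previousState[i], currentState[i])) {
                    auditLog(entity.getClass().getSimpleName(), id,
                             propertyNames[i], previousState[i], currentState[i]);
                }
            }
        }
        return false; // the interceptor did not modify the state
    }

    private void auditLog(String entityName, Serializable id, String property,
                          Object oldValue, Object newValue) {
        // Hypothetical sink: replace with writes to your audit table or logger.
        System.out.println(entityName + "#" + id + ": " + property
                + " changed from " + oldValue + " to " + newValue);
    }
}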
