I am wondering how to retrieve a deleted record that is audited with JaVers. I know how to retrieve the version changelog, but once the record has been deleted from the database table, there is no ID left to retrieve the audit record with.
There is no dedicated filter for selecting snapshots of deleted objects.
You can use general purpose queries like byClass() and filter selected snapshots by SnapshotType on your own, although it will not be very efficient.
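For example, a minimal sketch of that manual filtering, assuming a hypothetical Employee entity registered with your Javers instance:

```java
import org.javers.core.Javers;
import org.javers.core.metamodel.object.CdoSnapshot;
import org.javers.core.metamodel.object.SnapshotType;
import org.javers.repository.jql.QueryBuilder;

import java.util.List;
import java.util.stream.Collectors;

// Load all snapshots of the class and keep only the TERMINAL ones,
// i.e. the snapshots written when an object was deleted.
// Employee is a placeholder for your own audited entity.
static List<CdoSnapshot> deletedSnapshots(Javers javers) {
    return javers.findSnapshots(QueryBuilder.byClass(Employee.class).build())
            .stream()
            .filter(snapshot -> snapshot.getType() == SnapshotType.TERMINAL)
            .collect(Collectors.toList());
}
```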
A new SnapshotType filter could easily be added to JaVers JQL; consider contributing a PR.
ClickHouse recently released the new DELETE query, which allows you to quickly mark data as deleted (https://clickhouse.com/docs/en/sql-reference/statements/delete/).
The actual data deletion happens in the background afterwards, as described in the docs.
My question is: is there some indication of when the data will actually be deleted?
Or is there a way to make sure it gets deleted?
I'm asking for GDPR compliance purposes.
Thanks
For GDPR, better to use
ALTER TABLE ... DELETE WHERE ...
and poll
SELECT * FROM system.mutations WHERE is_done=0
When the mutation completes (is_done=1), that means the data has been physically deleted.
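As a rough illustration only, a polling loop over JDBC might look like the sketch below; the JDBC URL, the users table and the user_id predicate are made-up placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Issue the mutation, then poll system.mutations until no unfinished
// mutation remains for the table, i.e. the rows are physically gone.
static void deleteUserAndWait(String jdbcUrl) throws Exception {
    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         Statement stmt = conn.createStatement()) {

        // Hypothetical table and predicate; substitute your own.
        stmt.execute("ALTER TABLE users DELETE WHERE user_id = 42");

        boolean pending = true;
        while (pending) {
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT count() FROM system.mutations WHERE table = 'users' AND is_done = 0")) {
                rs.next();
                pending = rs.getLong(1) > 0;
            }
            if (pending) {
                Thread.sleep(1000); // back off before polling again
            }
        }
    }
}
```

You would call it with something like deleteUserAndWait("jdbc:clickhouse://localhost:8123/default"), with the ClickHouse JDBC driver on the classpath.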
According to ClickHouse support on Slack:
If using the DELETE query, the only way to make sure data is deleted is to use the OPTIMIZE FINAL query.
This triggers a merge, and the deleted data won't be included in the outcome.
I'm using hibernate-envers for audit purposes in an application. I'm also using hibernate-search in order to search/read the information of JPA entities in the application.
I was wondering if there is any kind of configuration/integration that can make hibernate-envers work with the audit entities/tables over indexes too, so that information can be read from the indexes with hibernate-search.
I would like to avoid doing it "manually", for example by using Envers event listeners to create/manipulate a new index for the audited entity, or by using a new JPA entity modelling the audit entity information (including the @Indexed annotation, fields, etc.).
Ideally, I was wondering if there is out-of-the-box support for an envers/search integration, without custom development, to store all audit information in new _aud indexes.
Thanks in advance, any piece of advice is appreciated.
It's certainly not possible out of the box.
If it ever becomes possible, you won't benefit from all the Envers features such as "get me this entity at this revision". You will simply index all the revisions of each entity, and you will only be able to query (and retrieve) these revisions. That would be queries such as "get all revisions of the entity with id 1 where the name contains 'some text'".
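For reference, this is the kind of revision-oriented lookup that Envers itself provides and that indexing revisions into Hibernate Search would not replace; MyEntity and the injected EntityManager are placeholders:

```java
import org.hibernate.envers.AuditReader;
import org.hibernate.envers.AuditReaderFactory;

import javax.persistence.EntityManager;
import java.util.List;

// "Get me this entity at this revision" - the classic Envers lookup.
static MyEntity entityAtRevision(EntityManager entityManager, long id, int revision) {
    AuditReader reader = AuditReaderFactory.get(entityManager);
    return reader.find(MyEntity.class, id, revision);
}

// All revision numbers recorded for a given entity id.
static List<Number> revisionsOf(EntityManager entityManager, long id) {
    return AuditReaderFactory.get(entityManager).getRevisions(MyEntity.class, id);
}
```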
Also, this will not remove the need for audit tables. The indexes will exist in addition to the audit tables.
That being said, I just gave it a try and we could make it possible in Hibernate Search 6 with just a few changes. If you're still interested, you can have a look there: https://hibernate.atlassian.net/browse/HSEARCH-4238
In our application, we have a requirement to audit "viewed by" events. Currently, we implemented this functionality by using an Audit table and manually logging it to the table during "GET" calls. I am trying to understand how to accomplish this in Javers.
In our current application, to find changes, we use a Hibernate interceptor and manually add the changes to the audit table.
I thought the easiest way to accomplish the "viewed" audit functionality in JaVers would be to add a "viewedBy" field to the entity being audited and manually update it in "GET" calls. But I am concerned about this approach because each time there is a view, we are changing the version of the object (by physically updating it) and the state is saved to the jv_snapshot table.
I expect the "viewed by" audit entries to be part of the javers.findChanges() results, so that the changes are tracked in chronological order and can also be paginated.
I am new to Cassandra. I am looking at many examples online. Here is one from JHipster Cassandra examples on GitHub:
https://gist.github.com/jdubois/c3d3bedb869466731316
The repository save(user) method does a read (to check for existence), then a delete and re-insert of the existing user across all the denormalized tables whenever the user data changes.
Is this best practice?
Is this only because of how the data model for this sample is designed?
Is this sample's design a result of twisting a POJO framework into a NoSQL database design?
When would I want to just do a update in Cassandra? It supports updates at the field-level, so it seems like that would be preferred.
First of all, the delete operations should be part of the batch for more robust error handling. But it looks like there are also some concurrency issues with the code. It will update the user based on the current user value read before. It's not safe to assume this will still be the latest value while save() is actually executed. It will also just overwrite any keys in the lookup table that might be in use for a different user at that point. E.g. the login could already exist for another user while executing insertByLoginStmt.
It is not necessary to delete a row before inserting a new one.
But if you are replacing rows and the new columns are different from the existing columns, then you need to delete all existing columns and insert the new ones. Whether you insert the new ones first and then delete the old, or the other way around, does not matter as long as it happens in a batch.
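As a sketch only, a delete plus re-insert in one logged batch with the DataStax Java driver could look like this; the prepared-statement names echo the linked gist, but the signatures and bind parameters here are assumptions:

```java
import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Remove the old lookup key and write the new one in a single logged batch,
// so that either both statements eventually apply or neither does.
static void replaceLogin(Session session,
                         PreparedStatement deleteByLoginStmt,
                         PreparedStatement insertByLoginStmt,
                         String oldLogin, String newLogin, String userId) {
    BatchStatement batch = new BatchStatement(); // LOGGED by default
    batch.add(deleteByLoginStmt.bind(oldLogin));
    batch.add(insertByLoginStmt.bind(newLogin, userId));
    session.execute(batch);
}
```

Note that a logged batch guarantees that all statements eventually apply, but it gives no isolation and does not solve the concurrent-update race described above.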
I am using an H2 database with ORMLite. We have 60 tables, all created with ORMLite's "create if not exists". Now we are going to ship a major release, and the requirement is to upgrade the old version of the database. I need to know how to do this with ORMLite, as in the new version some of the tables will be new and some are existing tables with modifications, e.g. we have a job table in the previous version of the database, and in this release we added 2 more columns and changed the datatype of one column. Any suggestions? I have seen some other posts regarding ORMLite for Android SQLite. How can that approach be used for other databases? E.g. like this post:
ORMLite update of the database
But I need to know how to do this with ORMLite, as in the new version some of the tables will be new and some are existing tables with modifications, e.g. we have a job table in the previous version of the database, and in this release we added 2 more columns and changed the datatype of one column.
I'm not sure there is any easy answer here. ORMLite doesn't directly provide any magic capabilities to make the migration of data any easier. Here are some thoughts, however:
You will need to use some sort of SQL logic to determine whether your application has the "old" or "new" schema installed. You could use raw SQL to look for the existence of particular tables or columns. Going forward, it might be a good idea to store a meta table with the database version, something Android gives you for free.
You can create new and old versions of each of your entities (OldAccount versus Account) and map them both to the same table with @DatabaseTable(tableName = "accounts"). Then you can read the old entities using oldAccountDao.iterator(), convert them to new entities and (as long as you aren't mucking with the primary key) update them using the new accountDao.update(...); see the sketch after these suggestions.
You can certainly come up with a series of SQL statements that will need to be performed in the proper order to change the schema. Then call dao.executeRaw(...) with them in order.
Obviously the new entities will just be created.
You might want to consider dumping a backup file of all tables somewhere before the conversion process and telling the user about it, so that if there is some failure your users can revert and run the old version of your application.
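As a rough sketch of the old/new entity idea combined with a raw ALTER TABLE (the Account classes, the email column and the connectionSource are invented for illustration):

```java
import com.j256.ormlite.dao.Dao;
import com.j256.ormlite.dao.DaoManager;
import com.j256.ormlite.field.DatabaseField;
import com.j256.ormlite.support.ConnectionSource;
import com.j256.ormlite.table.DatabaseTable;

import java.sql.SQLException;

// Old shape of the row, mapped to the same physical table.
@DatabaseTable(tableName = "accounts")
class OldAccount {
    @DatabaseField(id = true) String name;
    @DatabaseField String password;
}

// New shape with the extra column.
@DatabaseTable(tableName = "accounts")
class Account {
    @DatabaseField(id = true) String name;
    @DatabaseField String password;
    @DatabaseField String email; // added in the new schema
}

class AccountMigration {
    static void migrate(ConnectionSource connectionSource) throws SQLException {
        Dao<OldAccount, String> oldDao = DaoManager.createDao(connectionSource, OldAccount.class);
        Dao<Account, String> newDao = DaoManager.createDao(connectionSource, Account.class);

        // Add the new column with raw SQL first, then rewrite the rows through the new entity.
        newDao.executeRaw("ALTER TABLE accounts ADD COLUMN email VARCHAR");
        for (OldAccount old : oldDao) {
            Account migrated = new Account();
            migrated.name = old.name;         // primary key unchanged
            migrated.password = old.password;
            migrated.email = "";              // fill or convert as needed
            newDao.update(migrated);          // same id, so this is an UPDATE
        }
    }
}
```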
Hopefully something here is helpful.