I use SQLAlchemy hooks to check user permissions for some operations (like querying or inserting data into the DB).
To prevent inaccessible records from being saved, I set a before_commit hook, which receives a session object. I want to get all instances that will be persisted by this commit, so I can check whether each instance may be saved or not (and expunge the wrong records).
For adding new records this works with after_attach plus before_commit (expunging inside after_attach doesn't work for some reason). But in the query->update->commit case, after_attach is never called, so all permission checks have to be moved into before_commit.
By the time before_commit is emitted, everything is already "saved" from the session's perspective; this is because changes are emitted to the database within the flush(), not the commit(). The commit just calls upon the flush(), but that's not the only time flush() happens.
If you want to prevent something from happening in the flush, use the before_flush event for that: http://docs.sqlalchemy.org/en/rel_0_8/orm/events.html?highlight=before_flush#sqlalchemy.orm.events.SessionEvents.before_flush
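For reference, a minimal sketch of that approach; the has_permission helper and current_user are assumptions standing in for your own permission logic:

```python
from sqlalchemy import event
from sqlalchemy.orm import Session

current_user = None  # placeholder; supply your real user object


def has_permission(user, instance):
    # Hypothetical permission check; replace with your real logic.
    return True


@event.listens_for(Session, "before_flush")
def check_permissions(session, flush_context, instances):
    # session.new and session.dirty hold everything this flush is about
    # to INSERT or UPDATE; session.deleted would hold pending deletes.
    for instance in list(session.new) + list(session.dirty):
        if not has_permission(current_user, instance):
            session.expunge(instance)  # keep it out of this flush
```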
I have a third-party Java library that, at some point, gets a JDBC connection, starts a transaction, does several batch updates with PreparedStatement.addBatch(), executes the batch, commits the transaction, and closes the connection. Almost immediately afterwards (within <10 milliseconds), the library gets another connection and queries one of the records affected by the update.
For the proper functioning of the library, that query should return the updated record. However, in some rare cases I can see (using P6Spy) that the query returns the record with its values from before the update (and the library fails at some point further on due to the unexpected data).
I'm trying to understand why this happens. I found that in my database (Oracle 19c) there is a parameter COMMIT_WAIT that basically allows a commit call to return without blocking until the commit has finished, i.e. an asynchronous commit. So I used SHOW PARAMETERS to check the value of that parameter and found that COMMIT_WAIT is set to NOWAIT (and COMMIT_LOGGING is set to BATCH).
I began to speculate that the call to commit() merely starts the operation (without waiting for it to finish), and that perhaps the next query runs while the operation is still in progress, returning the record's value from before the transaction. (The isolation level for all connections is Connection.TRANSACTION_READ_COMMITTED.)
Can COMMIT_WAIT set to NOWAIT cause that kind of scenario? I have read that using NOWAIT carries a lot of risks, but they mostly concern things like loss of durability if the database crashes.
Changing the commit behavior should not affect database consistency and should not cause wrong results to be returned.
A little background - Oracle uses REDO for durability (recovering data after an error) and UNDO for consistency (making sure the correct results are always returned for any point in time). To improve performance, there are many tricks to reduce REDO and UNDO. But changing the commit behavior doesn't reduce the amount of logical REDO and UNDO; it only delays and optimizes the physical REDO writes.
Before a commit happens, and even before your statements return, the UNDO data used for consistency has been written to memory. Changing the commit behavior won't stop the changes from making their way to the UNDO tablespace.
Per the Database Reference for COMMIT_WAIT, "Also, [the parameter] can violate the durability of ACID (Atomicity, Consistency, Isolation, Durability) transactions if the database shuts down unexpectedly." Since the manual is already talking about the "D" in ACID, I assume it would also explicitly mention if the parameter affects the "C".
On the other hand, the above statements are all just theory. It's possible that there's some UNDO optimization bug that's causing the parameter to break something. But I think that would be extremely unlikely. Oracle goes out of its way to make sure that data is never lost or incorrect. (I know because even when I don't want REDO or UNDO it's hard to turn them off.)
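That said, if you want to rule the parameter out for this particular code path, COMMIT_WAIT can be overridden per session (or per statement, via COMMIT WRITE WAIT). A minimal JDBC sketch, with the table and column names as placeholders:

```java
import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class SyncCommitExample {
    // dataSource is assumed to be configured elsewhere.
    static void updateWithSynchronousCommit(DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // Force this session to wait until redo is on disk at commit,
                // overriding a system-level COMMIT_WAIT=NOWAIT.
                stmt.execute("ALTER SESSION SET COMMIT_WAIT = 'WAIT'");
                stmt.executeUpdate("UPDATE my_table SET my_col = 1 WHERE id = 42");
            }
            conn.commit();
        }
    }
}
```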
I have two databases, DBa and DBb, and two record sets, RecordsA and RecordsB. The concept is that in our app you can add records from A to B. I'm having an issue where I add a record from A to B and then try to query the records again: a particular property on the added record is stale/incorrect.
RecordsA lives on DBa and RecordsB lives on DBb. I make a stored-proc call to add the record to the B side and modify a column's value on DBa, which performs the insert/update over a dblink on DBb. The problem is that when I do an insert/update followed by an immediate get call on DBa (calling DBb), the modified property is incorrect: it's null, as if the insert never went through. However, if I put a breakpoint before the pull call and wait about one second, the correct data is returned. This makes me wonder whether there are latency issues with dblinks.
This seems like an async issue, but we verified that no async calls are being made and everything runs on the same thread. Would this type of behavior be likely with a DB link? That is, could inserting/updating a record on a remote server and retrieving it right away hit some latency where the record wasn't quite updated yet at the time of the re-pull?
I'm wondering whether it's possible to touch/update all records of some class so that they trigger before- and after-save hooks. I have a lot of records in the database, and it takes time to update them all manually via the Parse control panel.
You could write a cloud job which iterates through everything, but it would need to make an actual change to each object or it won't save (because the objects won't be dirty). You're also limited on runtime, so you should sort by updated date and run the job repeatedly until nothing is left to do.
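A rough sketch of such a job for the old Parse.com Cloud Code runtime; the class name MyClass, the touchedAt field, and the cutoff handling are assumptions for illustration:

```javascript
Parse.Cloud.job("touchAll", function(request, status) {
  var cutoff = new Date(); // ideally a fixed timestamp shared across reruns
  var query = new Parse.Query("MyClass");
  query.ascending("updatedAt");        // oldest first, so reruns make progress
  query.lessThan("updatedAt", cutoff); // already-touched rows drop out
  query.limit(500);
  query.find().then(function(objects) {
    var saves = objects.map(function(obj) {
      // Make a real change so the object is dirty and the save hooks fire.
      obj.set("touchedAt", new Date());
      return obj.save();
    });
    return Parse.Promise.when(saves);
  }).then(function() {
    status.success("Batch done; run again until no objects are left.");
  }, function(error) {
    status.error(String(error));
  });
});
```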
I'm trying to initialize the data in my Azure Data Tables, but I only want this to happen once on the server at startup (i.e. via the WebRole RoleEntryPoint.OnStart routine). The problem is that if multiple instances start up at the same time, any of them can potentially add records to the same table at the same time, duplicating the data at runtime.
Is there an overarching routine for all instances? An application object into which I can shove a value and check it in each of the instances to see whether the tables have been created or not? A singleton of some sort that Azure exposes?
Cheers
Rob
No, but you could use a Blob lease as a mutex. You could also use a table lock in SQL Azure, if you're using that.
You could also use a Queue: drop a message in there, and just one role instance would pick up the message and process it.
You could create a new single instance role that does this job on role start.
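For the Blob-lease idea, a rough sketch with the (much newer) Azure Storage SDK for Java (azure-storage-blob); the container and blob names are placeholders, and real code should record a "done" marker before releasing the lease so late starters can skip seeding:

```java
import com.azure.core.util.BinaryData;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClientBuilder;
import com.azure.storage.blob.specialized.BlobLeaseClient;
import com.azure.storage.blob.specialized.BlobLeaseClientBuilder;

public class StartupInitLock {
    static void initTablesOnce(String connectionString) {
        BlobContainerClient container = new BlobServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient()
                .getBlobContainerClient("locks");       // placeholder name
        container.createIfNotExists();

        BlobClient blob = container.getBlobClient("table-init-lock");
        if (!blob.exists()) {
            blob.upload(BinaryData.fromString("lock"), true);
        }

        BlobLeaseClient lease = new BlobLeaseClientBuilder()
                .blobClient(blob)
                .buildClient();
        try {
            // Only one instance can hold the lease; the rest get a 409 and skip.
            lease.acquireLease(60);
            seedTables();
            lease.releaseLease();
        } catch (RuntimeException alreadyLeased) {
            // Another instance is (or was) doing the initialization.
        }
    }

    static void seedTables() { /* hypothetical table-seeding code */ }
}
```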
To be really paranoid about this and handle a failure in the middle of writing the data, you can do something even more complex. A queue message is a great way to get transactional behavior, as long as the work you are doing is idempotent. The steps are as follows (a code sketch follows the list):
- Each instance adds a message to a queue.
- Each instance polls the queue and, on receiving a message, reads the locking row from the table:
  - If the 'create data state' value is 'unclaimed':
    - Attempt to update the row with an 'in process' value and a timeout expiration timestamp based on the amount of time needed to create the data.
    - If the update is successful, the instance owns the task of creating the data, so: create the data, update the 'create data state' to 'committed', and delete the message.
    - Else, if the update is unsuccessful, the instance does not own the task, so just delete the message.
  - Else, if the 'create data state' value is 'in process', check whether the current time is past the expiration timestamp. If it is, the 'in process' attempt presumably failed, so: try all over again to set the state to 'in process', delete the incompletely written rows, and try recreating the data, updating the state, and deleting the message.
  - Else, if the 'create data state' value is 'committed', just delete the queue message, since the work has already been done.
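As a sketch, here is the same flow in code form; every helper (readLockRow, tryMarkInProcess, createData, and so on) is a hypothetical stand-in for your own table and queue access code:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the locking-row protocol above; not tied to any real SDK.
public class CreateDataCoordinator {
    enum State { UNCLAIMED, IN_PROCESS, COMMITTED }

    static class LockRow {
        State state = State.UNCLAIMED;
        Instant expiration = Instant.MAX;
    }

    static final Duration CREATE_TIMEOUT = Duration.ofMinutes(5);

    void onQueueMessage(Object msg) {
        LockRow row = readLockRow();                // the single well-known row
        switch (row.state) {
            case UNCLAIMED:
                // Optimistic update (e.g., ETag/if-match): exactly one instance wins.
                if (tryMarkInProcess(row, Instant.now().plus(CREATE_TIMEOUT))) {
                    createData();                   // we own the task
                    markCommitted(row);
                }
                deleteMessage(msg);                 // winner and losers both delete
                break;
            case IN_PROCESS:
                if (Instant.now().isAfter(row.expiration)) {
                    // The previous owner presumably died mid-write: reclaim and redo.
                    if (tryMarkInProcess(row, Instant.now().plus(CREATE_TIMEOUT))) {
                        deleteIncompleteRows();
                        createData();
                        markCommitted(row);
                    }
                    deleteMessage(msg);
                }
                // else: leave the message; its visibility timeout re-delivers it.
                break;
            case COMMITTED:
                deleteMessage(msg);                 // work is already done
                break;
        }
    }

    // Hypothetical helpers -- wire these to your storage and queue SDKs.
    LockRow readLockRow() { return new LockRow(); }
    boolean tryMarkInProcess(LockRow row, Instant expiration) { return false; }
    void markCommitted(LockRow row) { }
    void createData() { }
    void deleteIncompleteRows() { }
    void deleteMessage(Object msg) { }
}
```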
I have two databases that contain the same table, but in one database the table is a symlink to the other, and only reads are permitted on it.
I have mapped the table to Java using Hibernate, and I use Spring to set the entity manager's data source to one of the two databases based on some input criteria.
I only perform read-only operations (selects) when connected to the second database, but it seems Hibernate tries to flush something back to the database, and it fails, saying updates are not allowed on this view.
How do I disable this update only for the second datasource and keep it normal for the first one?
Update:
Looking at the stack trace, the flush seems to be started here:
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504)
... 55 more
Is this related to the hibernate.transaction.flush_before_completion property? Can I set it to false for the second data source?
Most probably your entities become "dirty" the very moment they are loaded from the database, so Hibernate thinks it needs to store the changes. This happens if your accessors (get and set methods) do not return the exact same value or reference that Hibernate set.
In our code, this happened with lists: developers created new list instances because they didn't like the type they got in the setter.
If you don't want to change the code, change the mapping to field access.
You can also prevent Hibernate from storing changes by setting the session's FlushMode to NEVER, but this only hides the real problem, which will still occur in other situations and will lead to unnecessary updates.
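If you do take that route despite the caveat, a minimal sketch against the Hibernate 3.x Session API (FlushMode.NEVER was renamed FlushMode.MANUAL in later 3.x; setDefaultReadOnly needs 3.5+):

```java
import org.hibernate.FlushMode;
import org.hibernate.Session;

public class ReadOnlySessionConfig {
    static void makeReadOnly(Session session) {
        // No automatic flushing: dirty state is only written to the
        // database if you call session.flush() yourself.
        session.setFlushMode(FlushMode.MANUAL); // FlushMode.NEVER on older 3.x

        // Hibernate 3.5+: entities load read-only, skipping dirty-checking
        // (and its snapshot overhead) entirely.
        session.setDefaultReadOnly(true);
    }
}
```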
First you need to determine whether this is DDL or DML. If you don't know, I recommend setting hibernate.show_sql=true to capture the offending statement.
If it is DDL, then it's most likely Hibernate updating the schema for you, and you'll additionally want to configure the hibernate.hbm2ddl.auto setting to either "update" or "none", depending on whether you're using the actual db or the symlinked (read-only) one, respectively. You can use "validate" instead of "none", too.
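For example, in hibernate.properties (or the equivalent entries in hibernate.cfg.xml / persistence.xml):

```properties
# Log every SQL statement so the offending one can be spotted
hibernate.show_sql=true

# "update" for the writable database; "none" or "validate" for the
# symlinked read-only one
hibernate.hbm2ddl.auto=validate
```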
If it is DML, then I would first determine whether your code is, for some reason, making a change to an instance that is still attached to an active Hibernate Session. If so, a subsequent read may flush those changes without the object ever being explicitly saved (Grails?). If this is the case, consider evicting the instance causing the flush (or using transport objects instead).
Are you perhaps using any aspects or Hibernate lifecycle events to provide auditing of the objects? This, too, could cause access to a read-only entity to result in an insert or update being run.
It may turn out that you need to provide alternative mappings for the offending class should the updatability of a field come into play, even though the code is doing everything exactly as you'd like (this is unlikely ;0). If you are in an all-annotation world, this may be tricky. If you're working with hbm.xml, providing an alternative mapping is easier.