Disable Hibernate auto update on flush on read-only synonyms - Oracle

I have two databases that contain the same table, but in one of them the table is only a read-only synonym of the other, so only reads are permitted on it.
I have mapped the table to Java using Hibernate, and I use Spring to set the EntityManager's data source to one of the two databases based on some input criteria.
I call only read-only operations (selects) when I am connected to the second database, but it seems Hibernate tries to flush something back to the database, and it fails, saying that update is not allowed on this view.
How do I disable this update only for the second data source and keep the normal behaviour for the first one?
Update:
Looking at the stack trace, the flush seems to be started here:
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1027)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:365)
at org.hibernate.ejb.AbstractEntityManagerImpl$1.beforeCompletion(AbstractEntityManagerImpl.java:504)
... 55 more
Is this related to the hibernate.transaction.flush_before_completion property? Can I set it to false for the second data source?

Most probably your entities become "dirty" the very moment they are loaded from the database, and Hibernate thinks that it needs to store the changes. This happens if your accessors (get and set methods) do not return the exact same value or reference that had been set by Hibernate.
In our code this happened with lists: developers created new list instances because they didn't like the type they got in the setter.
If you don't want to change the code, change the mapping to field access.
You can also prevent Hibernate from storing changes by setting the FlushMode to NEVER (called MANUAL in later versions) on the session, but this only hides the real problem, which will still occur in other situations and will lead to unnecessary updates.
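For illustration, a minimal sketch (assuming a JPA EntityManager backed by Hibernate; the helper name is hypothetical) of how the session behind the read-only data source could be switched to manual flushing and read-only loading:

import javax.persistence.EntityManager;
import org.hibernate.FlushMode;
import org.hibernate.Session;

public class ReadOnlySessionConfigurer {
    /** Suppress automatic dirty-check flushes on the Hibernate session behind this EntityManager. */
    public static void makeReadOnly(EntityManager entityManager) {
        Session session = entityManager.unwrap(Session.class);
        session.setFlushMode(FlushMode.MANUAL); // flush only when flush() is called explicitly
        session.setDefaultReadOnly(true);       // entities loaded from now on are not dirty-checked
    }
}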

First you need to determine if this is DDL or DML. If you don't know, then I recommend you set hibernate.show_sql=true to capture the offending statement.
If it is DDL, then it's most likely Hibernate updating the schema for you, and you'd want to additionally configure the hibernate.hbm2ddl.auto setting to be either "update" or "none", depending on whether you're using the actual db or the synonym-based (read-only) one, respectively. You can use "validate" instead of none, too.
If it is DML, then I would first determine whether your code is for some reason making a change to an instance which is still attached to an active Hibernate Session. If so, then a subsequent read may cause a flush of these changes without ever explicitly saving the object (Grails?). If this is the case, consider evicting the instance causing the flush (or using transport objects instead).
Are you perhaps using any aspects or Hibernate lifecycle events to provide auditing of the objects? This, too, could cause access of a read-only object to result in an insert or update being run.
It may turn out that you need to provide alternative mappings for the offending class, should the updatability of a field come into play even though the code is doing everything exactly as you'd like (this is unlikely ;0). If you are in an all-annotation world, this may be tricky. If working with hbm.xml, then providing an alternative mapping is easier.
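As an annotation-based sketch (the entity and column names here are made up for illustration), a whole class can be marked immutable, or individual columns can be made non-writable:

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.annotations.Immutable;

@Entity
@Immutable  // Hibernate will never issue UPDATE or DELETE statements for this entity
public class ReportRow {
    @Id
    private Long id;

    // alternatively, make a single column read-only:
    @Column(name = "STATUS", insertable = false, updatable = false)
    private String status;

    // getters and setters omitted
}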

Related

What is the best way of updating data in Spring Boot?

For a PUT request, do I always have to read the old data and copy over only the fields that changed in order to update the existing record? Is checking each field for changes the right way to do it?
I do not know of any project that takes the effort to update only the fields that were actually changed.
Usually you just overwrite all fields in your table with the new values, as this is the easiest and most reliable way of doing it.
Also consider that custom logic which decides what to update has to be maintained and can have bugs. If you end up with a bug in that logic, you will most likely end up with data consistency errors that might be unfixable.
When you use Spring Boot, you will most likely also use Spring Data JPA and Hibernate, which take care of mapping your objects to your database. In that case, Hibernate decides on the update strategy anyway (by default it writes all mapped columns, unless you opt into dynamic updates).
If you are worried about data consistency and concurrent updates to the same record, I would recommend looking into optimistic locking, which is an easy way to handle that issue. It's very easy to set up, just by adding a version column to your table.
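A minimal sketch of both ideas (the entity and its fields are hypothetical): dynamic updates, so only changed columns appear in the UPDATE, and a version column for optimistic locking:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;
import org.hibernate.annotations.DynamicUpdate;

@Entity
@DynamicUpdate  // Hibernate includes only the columns that actually changed in the UPDATE statement
public class Customer {
    @Id
    private Long id;

    private String name;
    private String email;

    @Version    // bumped on every update; a stale concurrent update fails with an optimistic lock exception
    private long version;

    // getters and setters omitted
}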

Truncate all database tables with Hibernate and Spring Boot

I need to truncate all database tables after each test. Is there a way to do so, or at least a database-agnostic way to get all table names so that they can be truncated?
Any other alternatives are welcome, but keep in mind that @Transactional and @Rollback will not help, as I'm dealing with integration tests that fire HTTP requests at the server.
I think you're going to struggle to truncate tables in a simple, database-agnostic way. For example, what do you do about foreign key constraints? Some DBs will let you just truncate the tables in the correct order, leaving you with the problem of how to define that order. But if I recall correctly, some won't let you truncate tables with foreign key constraints at all, even if empty. Then you need to use some DB-specific DDL to disable the constraints, or worse, drop and recreate them.
You are also ruling out parallelising your integration tests if you take this approach.
I've always found a better approach is to make each test clean up just the data that it created. For example, for your create API, you may be able to register a listener that records the IDs of all created entities in your test code; on teardown you can then iterate this list of IDs in reverse, calling your delete API (see the sketch below). The downside of this approach is that you may need to implement APIs that your application doesn't actually need, just to support the tests. However, these can then be disabled by a flag on deployment to production.
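A rough sketch of that bookkeeping (the base class and its method names are made up; assuming JUnit 5):

import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.jupiter.api.AfterEach;

public abstract class CleanupTrackingTest {
    // IDs of entities created through the API during the test, newest first
    private final Deque<Long> createdIds = new ArrayDeque<>();

    /** Call this from the test whenever the create API returns a new ID. */
    protected void trackCreated(Long id) {
        createdIds.push(id);
    }

    @AfterEach
    void deleteCreatedEntities() {
        // delete in reverse creation order so child records go before their parents
        while (!createdIds.isEmpty()) {
            deleteViaApi(createdIds.pop());
        }
    }

    /** Implemented by concrete tests: calls the application's delete endpoint. */
    protected abstract void deleteViaApi(Long id);
}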
I read this property from a text file and add the two Hibernate properties in the if statement below, which rebuilds the database every time I run my project; perhaps this can help you?
if (environment.getProperty("dbReset").compareTo("ON") == 0) {
    // drop and recreate the schema on every startup
    properties.put("hbm2ddl.auto", "create");
    properties.put("hibernate.hbm2ddl.auto", "create");
}

How to pass a String with more than 250 characters as job parameter in Spring Batch?

In the BATCH_JOB_EXECUTION_PARAMS table, the STRING_VAL column is defined as varchar(250). If any string longer than 250 characters is passed as a job parameter, the database complains that the data is too long. I did some research, and what some people did was to manually change the definition of the column to hold more data. Is there any side effect to storing large params in the table? If so, what is the best solution for passing a large job param?
Thanks.
There shouldn't be a side effect, especially if it is a non-identifying parameter.
Even then, the only place this could have a side effect is the generation of the JOB_KEY field in the JOB_INSTANCE table (have a look at JdbcJobInstanceDao).
The content of this field is generated by a JobKeyGenerator, and looking at the default implementation, org.springframework.batch.core.DefaultJobKeyGenerator, I don't see anything that could cause a side effect.
I would not go down that road, since that schema is part of Spring Batch and is developed outside of your control. Even if it is safe to change now, what if a future version relies on the 250-character limit for some important framework functionality? You will either get odd bugs when you upgrade to a new version, or you will be locked to your current version because you changed the library's schema yourself.
I answered a similar question in this post. You can create a new table for holding large parameters next to the Spring Batch metadata (in the same database) and pass just an ID as the job parameter. Inside the Spring Batch job you can then pull whatever you need from that table based on the passed ID (a rough sketch follows).
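Something along these lines (the side table, its columns and the wiring are assumptions for illustration):

import java.util.UUID;
import org.springframework.jdbc.core.JdbcTemplate;

public class LargeParameterStore {

    private final JdbcTemplate jdbcTemplate;

    public LargeParameterStore(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Store the large value in a side table and return the short key to pass as the job parameter. */
    public String store(String largeValue) {
        String id = UUID.randomUUID().toString();
        jdbcTemplate.update(
                "INSERT INTO BATCH_LARGE_PARAMS (PARAM_ID, PARAM_VALUE) VALUES (?, ?)",
                id, largeValue);
        return id;
    }

    /** Inside the job (e.g. in a Tasklet or an ItemReader), resolve the key back to the value. */
    public String load(String id) {
        return jdbcTemplate.queryForObject(
                "SELECT PARAM_VALUE FROM BATCH_LARGE_PARAMS WHERE PARAM_ID = ?",
                String.class, id);
    }
}

// when launching the job, pass only the key:
// new JobParametersBuilder().addString("largeParamId", store.store(veryLongString)).toJobParameters();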

Using PostgreSQL Rules/Triggers for debugging purposes

An application I am trying to support is currently running into unique constraint violations. I haven't been able to reproduce this problem in non-production environments. Is it reasonable, for debugging purposes, to create a rule (trigger?) that will, in effect, copy every insert to a different table? The new table would then effectively be the same as the old table, just without the constraint.
The application is using Spring to manage transactionality, and I haven't been able to find any documentation relating rules to transactions. After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?
This is Postgres 8.3.
After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?
That will roll back everything the rule did as well. You could create a trigger that uses dblink to get some work done outside your current transaction. Another option could be a savepoint, but then you have to change all your current code and transaction handling.
Unique violations are logged in the log files as well; use that information to see what is going wrong. Version 9.0 has a change that will also tell you what the values are:
Improve uniqueness-constraint violation error messages to report the values causing the failure (Itagaki Takahiro). For example, a uniqueness constraint violation might now report: Key (x)=(2) already exists.
You can do almost anything you can imagine with rules and triggers. And then some more. Your exact intent remains somewhat unclear, though.
If the transaction is rolled back anyway, as you hint at the end, then everything will be undone, including all side-effects of any rules or triggers involved. Your plan would be futile.
There is a workaround for that, in case that is in fact what you want to achieve: use dblink to connect back to the same database and INSERT into a table over that connection. Work done over the dblink connection is not rolled back with the main transaction.
However, if it's just for debugging purposes, the database log is a much simpler way to see which duplicates have not been entered. Errors are logged by default. If not, you can set it up as you need it. See about your options in the manual.
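If you also want the offending values on the application side, they can be pulled from the SQLException when the violation happens; a minimal JDBC sketch (the table and columns are made up, and on 9.0+ the error message names the duplicate key values):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UniqueViolationLogger {

    // SQLSTATE used by PostgreSQL for unique_violation
    private static final String UNIQUE_VIOLATION = "23505";

    public void insertLogged(Connection connection, long id, String name) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO my_table (id, name) VALUES (?, ?)")) {
            ps.setLong(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
        } catch (SQLException e) {
            if (UNIQUE_VIOLATION.equals(e.getSQLState())) {
                // e.g. on 9.0+: duplicate key value violates unique constraint ... Key (x)=(2) already exists.
                System.err.println("Duplicate key: " + e.getMessage());
            }
            throw e; // rethrow so the surrounding (Spring-managed) transaction still rolls back
        }
    }
}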
As has been said, rules cannot be used for this purpose: they only rewrite the query, and the rewritten query is, just like the original, still part of the transaction.
Rules can be used to enforce constraints that are impossible to implement using regular constraints, such as a key being unique across several tables, or other multi-table checks (these do have the advantage of the "canary" table name showing up in the logs and error messages). But the OP already has too many constraints firing, it appears...
Tweaking the transaction isolation level also seems worth investigating (are there multiple sessions involved? does the framework use a connection pool?).

Implementing user-defined db parameters/properties in Oracle

OK, the question title probably isn't the best, but I'm looking for a good way to implement an extensible set of parameters for Oracle database applications that "stay with" the host/instance. By "stay with", I mean that I'd like to rule out just having an Oracle table of name/value pairs that would have to be modified if I create a test/QA instance by cloning the production instance. (For example, imagine a parameter called email_error_address that should be set to prod_support@abc.com in production and qa_support@abc.com in testing.)
These parameters need to be accessed from both PL/SQL code running in the database as well as client-side code. I started out doing this by overloading the plsql_cc_flags init parameter (not a solution I'm proud of), but this is getting messy to maintain and parse.
[Edit]
Ideally, the implementation would allow changes to the list without restarting the instance, similar to the dynamically-modifiable init parameters.
You want to have a separate set of values for each environment. You want these values to be independent of the data, so that they don't get overridden if you import data from another instance.
The solution is to use an external table (provided you are on 9i or higher). Because external tables hold the data in an OS file, they are independent of the database. To apply changed values, all you need to do is overwrite the OS file.
All you need to do is ensure that the files for each environment are kept separate. This is easy enough if Test, QA, Production, etc. are on their own servers. If they are on the same server, then you will need to distinguish them by file name or directory path; in either case you may need to issue a bit of DDL to correct the location in the event of a database refresh.
The drawback of using external tables is that they can be a bit of a performance overhead - they are really intended for bulk loading. If this is likely to be a problem, you could use caching with a user-defined namespace, or CONTEXT. Load the values into memory using DBMS_SESSION.SET_CONTEXT(), either on demand or with an ON LOGON trigger. Retrieve the values through wrapper calls to SYS_CONTEXT(). Because the namespace is in session memory, retrieval is quite fast. René Nyffenegger has a simple example of working with CONTEXT: check it out.
While I've been writing this up, I see you have added a requirement to change things on the fly. As I have said already, this is easy with an OS file, but the use of caching makes things slightly more difficult. The solution would be to use a globally accessible CONTEXT. Have a routine which loads all the values at startup and which you can also call whenever you refresh the OS file.
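On the client side, the cached values can then be read with a plain SYS_CONTEXT query over JDBC; a minimal sketch (the APP_PARAMS namespace and the parameter name are assumptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AppParameters {

    /** Read the (hypothetical) EMAIL_ERROR_ADDRESS value from the APP_PARAMS context. */
    public String emailErrorAddress(Connection connection) throws SQLException {
        try (PreparedStatement ps = connection.prepareStatement(
                "SELECT SYS_CONTEXT('APP_PARAMS', 'EMAIL_ERROR_ADDRESS') FROM dual");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getString(1); // null if the value has not been loaded into the context
        }
    }
}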
You could use environment variables that you can set per Oracle user (the account that starts up the Oracle database) or per server. The environment variables can be read with the DBMS_SYSTEM.GET_ENV procedure.
I tend to use a system_parameters table. If you're concerned about it being overwritten, put it in its own schema and create a public synonym.
@APC's answer is clever.
You could address the performance overhead by adding a materialized view on top of the external table(s). You would refresh it after RMAN cloning, and after each update of the config files.
