I am using Spring, Hibernate, and the C3P0 ComboPooledDataSource. I am on a legacy project with Spring 2.5, Hibernate 3.3, and the newest C3P0.
I am using the LocalSessionFactoryBean implementation.
I use Spring's TransactionInterceptor to globally set the transaction attributes.
I am adding a second, replicated database to be used for reporting queries only. It will be read-only, and I would like to set all of its connections to be read-only.
I was trying to create a second instance of LocalSessionFactoryBean whose data source references the secondary database.
However, what I would like to do is set all of these transactions to read-only.
I was hoping there might be a way to do this on the ComboPooledDataSource itself. For instance, the Apache Commons DBCP BasicDataSource has a defaultReadOnly setting; the C3P0 one does not.
Next, I thought there might be a way to do it in the LocalSessionFactoryBean, but no luck finding that. Any ideas?
Create a database account that only has read-only access. That is the best way to guarantee that the user cannot write. Create a second data source for that account and have at it.
Trying to do this with Spring/Hibernate is not the proper way to do it. If you use the right tool for the job, the result will be much cleaner, more understandable, and more maintainable.
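A minimal sketch of what that second pool could look like, assuming a MySQL replica and a report_reader account granted SELECT only (driver, URL, and credentials are placeholders; in a Spring 2.5 setup the same properties would normally go on the bean definition):

import java.beans.PropertyVetoException;

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class ReportingDataSourceFactory {

    // Second C3P0 pool pointed at the replica. Because the account itself is
    // read-only, writes fail at the database regardless of what the
    // application layer does.
    public static ComboPooledDataSource createReportingDataSource() throws PropertyVetoException {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");              // placeholder driver
        ds.setJdbcUrl("jdbc:mysql://replica-host:3306/reports"); // placeholder URL
        ds.setUser("report_reader");                             // SELECT-only account
        ds.setPassword("secret");
        return ds;
    }
}

Wire this data source into the second LocalSessionFactoryBean and the reporting sessions can only ever read.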
Related
I have a Spring, Spring Data, JPA/Hibernate application.
The legacy part of the application uses JdbcTemplate; the new parts use Spring Data/Hibernate, and everything is wrapped in a transaction.
The problem is that when I modify an entity via Hibernate and the legacy part of the system then queries something that has been modified, I don't get the updated values without explicitly flushing the entity manager each time.
Is it possible to execute the JdbcTemplate queries against Hibernate's first-level cache?
What about trying JpaTransactionManager?
Edit: https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/orm/jpa/JpaTransactionManager.html
This transaction manager also supports direct DataSource access within a transaction (i.e. plain JDBC code working with the same DataSource). This allows for mixing services which access JPA and services which use plain JDBC (without being aware of JPA)! Application code needs to stick to the same simple Connection lookup pattern as with DataSourceTransactionManager (i.e. DataSourceUtils.getConnection(javax.sql.DataSource) or going through a TransactionAwareDataSourceProxy). Note that this requires a vendor-specific JpaDialect to be configured.
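A sketch of that wiring under the quoted contract (bean names and the Hibernate dialect choice are assumptions):

import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.vendor.HibernateJpaDialect;

@Configuration
public class SharedTxConfig {

    // One transaction manager for both the JPA side and the legacy JDBC side.
    @Bean
    public JpaTransactionManager transactionManager(EntityManagerFactory emf) {
        JpaTransactionManager tm = new JpaTransactionManager(emf);
        tm.setJpaDialect(new HibernateJpaDialect()); // the vendor-specific JpaDialect the docs require
        return tm;
    }

    // Route the legacy JdbcTemplate through a transaction-aware proxy so it
    // works on the same Connection as JPA within a transaction.
    @Bean
    public JdbcTemplate jdbcTemplate(DataSource dataSource) {
        return new JdbcTemplate(new TransactionAwareDataSourceProxy(dataSource));
    }
}

Note that sharing the connection does not by itself push pending changes out of Hibernate's first-level cache; an explicit flush may still be needed before the JdbcTemplate query runs.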
WSO2 Identity Server 5.0.0 (and some patches ;))
It does not appear that custom JDBC user store managers (children of JDBCUserStoreManager) use a JDBC pool. I'm noticing that I can end up with session-closed errors and SQL exceptions, whereas the Identity Server itself keeps operating OK with its separate database connection (a configured pool).
So I guess I have two questions about this:
Somewhere up the chain, is there a JDBC pool for the JDBCUserStoreManager? If so, are there means to configure that guy more robustly?
Can I create another JDBC datasource in master-datasources.xml which my custom JDBC user store manager could reference?
Instead of using your own datasources/connections, you can import Carbon Datasources and use those (they come with built-in pooling, so there is no need to worry about any pool configuration yourself). You can either access them programmatically by directly calling the ndatasource component or access them via JNDI.
To access them directly from ndatasource component:
The dependency:
<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>org.wso2.carbon.ndatasource.core</artifactId>
    <version>add_correct_version_here</version>
</dependency>
(You can check repository/components/plugins to find out the correct version for the above dependency.)
You can inject DataSourceService as in the sketch below (the @scr.reference tag declares the service you need injected; the Maven SCR plugin parses these dependencies when building the bundle).
Note that when you follow this approach you'll have to build the jar as an OSGi bundle, since it uses declarative services, and place it in repository/components/dropins. Otherwise the dependencies won't be injected at runtime.
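A hypothetical component class (all names here are illustrative):

import org.wso2.carbon.ndatasource.core.DataSourceService;

/**
 * @scr.component name="custom.userstore.component" immediate="true"
 * @scr.reference name="datasource.service"
 *                interface="org.wso2.carbon.ndatasource.core.DataSourceService"
 *                cardinality="1..1" policy="dynamic"
 *                bind="setDataSourceService" unbind="unsetDataSourceService"
 */
public class CustomUserStoreComponent {

    private static DataSourceService dataSourceService;

    // Invoked by the OSGi container when DataSourceService becomes available.
    protected void setDataSourceService(DataSourceService service) {
        dataSourceService = service;
    }

    protected void unsetDataSourceService(DataSourceService service) {
        dataSourceService = null;
    }

    public static DataSourceService getDataSourceService() {
        return dataSourceService;
    }
}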
Next, you can access all the data sources as:
List<CarbonDataSource> dataSources = dataSourceService.getAllDataSources();
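Or, going the JNDI route instead (the JNDI name is a placeholder; it must match what you expose in master-datasources.xml):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/MyCarbonDS"); // hypothetical JNDI name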
Rajeev's answer was really insightful and helped me investigate and evaluate what I should do. But I didn't end up going that route. :)
I ended up looking through the Identity Server and Carbon source code and found out that the JDBCUserStoreManager does end up creating a JDBC pool configured by the properties you set for that manager. I had a class called CustomUserStoreConstants for my custom user store manager which had setMandatoryProperty called by default to set:
JDBCRealmConstants.DRIVER_NAME
JDBCRealmConstants.URL
JDBCRealmConstants.USER_NAME
JDBCRealmConstants.PASSWORD
So the pool was configured with these values, BUT that was it...nothing else. So no wonder it wasn't surviving the night!
It turned out that if the code setting this up found a value for JDBCRealmConstants.DATASOURCE in the config params, it would just load up that datasource and ignore any other params set. Seeing that, I got rid of the four params listed above and forced my custom user store to allow only a DATASOURCE, which I set in code to the default JNDI name I would always give that datasource. With that, I was able to configure my JDBC pool for this datasource with all params such as testOnBorrow, validationQuery, validationInterval, etc. in master-datasources.xml (see the sketch below). Now the only thing that would ever need to change is the datasource's configuration in that file.
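Roughly, the only mandatory connection property my constants class registers now looks like this (the helper's exact signature varies across IS versions, and the JNDI name is a placeholder):

// In CustomUserStoreConstants: DATASOURCE is the only connection property;
// everything else (testOnBorrow, validationQuery, ...) lives on the
// datasource definition in master-datasources.xml.
setMandatoryProperty(JDBCRealmConstants.DATASOURCE, "Datasource",
        "jdbc/CustomUserStoreDS", "JNDI name of the user store datasource");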
The other reason I went with the datasource in master-datasources.xml is that I didn't have to decide in my custom user store's code which parameters to support; I can manage it all easily in the XML file. This really has advantages for portability of configs and for IT involvement in deployments and debugging. I already have other datasources in this file for the IS deployment.
All said, my user store is now living through the night and weekends. :)
I used GWT 2 with the DAO pattern for my apps and it works correctly. Now my database has grown a lot and I want to manage it more easily, so I want to use an ORM. What I want to do is keep my first DAO implementation and use Hibernate for my new classes. But I have read a lot on the internet and I'm very confused about the way to deal with this.
Which solution would be better for me: Hibernate EJB3 + Tomcat + OpenEJB, or Spring + Hibernate?
Also, which one would be the fastest?
Should I change all my DAOs to use Hibernate methods, or should I use both?
NB: I have just started to read the Spring docs, but I have already read the Hibernate docs.
Thanks.
I think the change you need only affects the back-end, hence has nothing to do with the server or container you are using.
Rather, in your DAOs, when saving new POJOs, use HibernateTemplate instead of what you were using, as in the sketch below.
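A minimal sketch of such a DAO (the User entity and the query are illustrative):

import java.util.List;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class UserDao extends HibernateDaoSupport {

    // Insert or update a mapped POJO via the injected SessionFactory.
    public void save(User user) {
        getHibernateTemplate().saveOrUpdate(user);
    }

    @SuppressWarnings("unchecked")
    public List<User> findAll() {
        return getHibernateTemplate().find("from User");
    }
}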
It would be advisable to be consistent: if you are going to use Hibernate, use Hibernate for all your DB manipulation.
Optimization is a whole chapter in itself; I think you should focus on getting your DB changes working for now, and worry about speed once everything works.
I am working on an app that uses SimpleJdbcTemplate as the wrapper to make JDBC calls.
However, instead of a conventional DataSource, I am choosing to use AbstractDataSource so I can choose from multiple data sources.
I am using a ThreadLocal to inject keys that choose the appropriate DataSource.
However, it appears Spring is eagerly creating all my DAOs and my JdbcTemplate, and hence I cannot figure out how to have the JdbcTemplate get a Connection on demand.
Any clues?
Do you mean AbstractRoutingDataSource? If not, you really should be using that, since this is exactly what it's for. Mark Fisher wrote a useful blog about it back when it was added to the framework.
Yes, Spring will create your DAOs and JdbcTemplates eagerly if they're singletons, which is the default, but that doesn't mean they all obtain a connection immediately. A connection is only obtained when you start some kind of operation that uses that data source; typically, that would be starting a transaction. In other words, what you say you want to happen is what already happens.
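For illustration, a minimal routing data source keyed off a ThreadLocal (class and key names are assumptions):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class ThreadLocalRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT_KEY = new ThreadLocal<String>();

    public static void setKey(String key) {
        CURRENT_KEY.set(key);
    }

    public static void clearKey() {
        CURRENT_KEY.remove();
    }

    // Consulted lazily, on each connection lookup, not at bean creation time.
    @Override
    protected Object determineCurrentLookupKey() {
        return CURRENT_KEY.get();
    }
}

The target data sources are registered on the bean via setTargetDataSources, mapping each key to a real DataSource.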
There are two entity manager factory beans in Spring that would work for my application. The org.springframework.orm.jpa.LocalEntityManagerFactoryBean and org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean. I am using Spring 3.0 with EclipseLink JPA 2.2.
What I've read about these two is that they are the same, except that LocalContainerEntityManagerFactoryBean uses weaving. What is that? And why would I want to use it?
"Weaving" is a term for program transformation, usually heard around "aspect oriented programming" area. The transformation is usually not done to the source but the .class (the bytecode) and a techy term for changing bytecode is "bytecode instrumentation".
why would I want to use it?
The JPA implementation you use may rely on such bytecode instrumentation for some features it provides and hence you might be forced into using it.
And for weaving to work correctly, you might need to specify a -javaagent JVM argument. For example, see the section 'Eclipse JUnit' here.
It looks like LocalContainerEntityManagerFactoryBean allows you to configure a weaver implementation (one of DefaultContextLoadTimeWeaver, GlassFishLoadTimeWeaver, InstrumentationLoadTimeWeaver, OC4JLoadTimeWeaver, ReflectiveLoadTimeWeaver, SimpleLoadTimeWeaver, or WebLogicLoadTimeWeaver) in an XML file, instead of relying on a -javaagent runtime argument; see the sketch further below.
This configuration isn't such a big factor, I'd guess. The other features, which the docs explain, sound like the deciding factors.
LocalEntityManagerFactoryBean bootstrap is appropriate for standalone applications which solely use JPA for data access. If you want to set up your persistence provider for an external DataSource and/or for global transactions which span multiple resources, you will need to either deploy it into a full Java EE 5 application server and access the deployed EntityManagerFactory via JNDI, or use Spring's LocalContainerEntityManagerFactoryBean with appropriate configuration for local setup according to JPA's container contract.
If you plan on deploying your application to an application server and letting it manage the entity manager factory and transactions, then LocalContainerEntityManagerFactoryBean might be the better option. If you would rather have the application be more isolated, then LocalEntityManagerFactoryBean would be more appropriate.
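For illustration, a minimal Java sketch of the two bootstraps side by side (the unit name, DataSource, and weaver choice are all assumptions, not a recommendation):

import javax.sql.DataSource;

import org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.LocalEntityManagerFactoryBean;

public class EmfSetup {

    // Standalone-style bootstrap: everything is driven by persistence.xml.
    public static LocalEntityManagerFactoryBean standalone() {
        LocalEntityManagerFactoryBean emf = new LocalEntityManagerFactoryBean();
        emf.setPersistenceUnitName("myUnit"); // hypothetical unit name
        return emf;
    }

    // Container-style bootstrap: Spring supplies the external DataSource and
    // the weaver. InstrumentationLoadTimeWeaver still needs the Spring agent
    // (-javaagent:spring-instrument.jar) on the JVM command line.
    public static LocalContainerEntityManagerFactoryBean container(DataSource ds) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setPersistenceUnitName("myUnit");
        emf.setDataSource(ds);
        emf.setLoadTimeWeaver(new InstrumentationLoadTimeWeaver());
        return emf;
    }
}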
This blog can help provide more insight: http://second-kind-demon.blogspot.com/2011/06/spring-jpa-java-ee-jboss-deployment.html