I am working on a Spring + Hibernate based project. A project was given to me with a simple Spring Web Maven structure (Spring Tool Suite as the IDE).
I have successfully imported the project into my STS IDE and have also changed some of the Hibernate configuration properties so that the application can talk to my local PostgreSQL server.
The changes that I have made are as given below:
jdbc.driverClassName=org.postgresql.Driver
jdbc.dialect=org.hibernate.dialect.PostgreSQLDialect
jdbc.databaseurl=jdbc:postgresql://localhost:5432/schema
jdbc.username=username
jdbc.password=password
The hibernate.hbm2ddl.auto property is already set to update, so I didn't change that.
Then I simply deploy my project to the Pivotal server; Hibernate runs and creates around 36 tables inside my DB schema. Looks fine!
My problem: In my hibernate.cfg.xml file, a total of 100 Java model classes are mapped, and they also have the @Entity annotation. So why is Hibernate not creating all the remaining tables?
For certain reasons I can't post the code of any of the model classes here. I have searched a lot about this problem and applied many different solutions, but none of them worked. Could someone please let me know what could cause Hibernate to behave like this?
Here is one of my model classes whose table is not created in my DB:
@Entity
@Table(name = "fare_master")
public class FareMaster {

    @Id
    @Column(name = "fare_id")
    @GeneratedValue
    private int fareId;

    @Column(name = "base_fare_amount")
    private double baseFareAmount;

    public int getFareId() {
        return fareId;
    }

    public void setFareId(int fareId) {
        this.fareId = fareId;
    }

    public double getBaseFareAmount() {
        return baseFareAmount;
    }

    public void setBaseFareAmount(double baseFareAmount) {
        this.baseFareAmount = baseFareAmount;
    }
}
And the mapping of the class is as follows:
<mapping class="com.mypackage.model.FareMaster" />
Change the hibernate.hbm2ddl.auto property to create-drop if you want to create the tables; setting it to update will just allow you to update existing tables in your DB.
And check your log file to catch errors.
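To make those errors actually show up in the log, a minimal sketch of logging settings could look like the following (assuming Log4j is the logging backend; the logger names are standard Hibernate categories, but the exact file they go into depends on your Log4j configuration):

```properties
# echo the SQL Hibernate generates to the log
hibernate.show_sql=true
# log schema export/update activity and any DDL errors
log4j.logger.org.hibernate.tool.hbm2ddl=DEBUG
# log every SQL statement as it is executed
log4j.logger.org.hibernate.SQL=DEBUG
```

With these in place, a table that fails to be created during an hbm2ddl update will usually leave a visible error message naming the offending DDL statement.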
After a very long effort, I came to the conclusion that for this problem, and for similar problems, we should always follow two basic rules:
1.) First, be sure about the problem; that is, identify the exact issue causing this type of error.
2.) To find the exact issue, use a logger in your application; you will definitely save a lot of time.
In my case this was happening because I had switched my DB from MySQL to PostgreSQL, and some of the syntax in columnDefinition (a parameter of the @Column annotation) was not compatible with the new DB. When I switched back to MySQL, everything worked fine.
If you have a schema.sql file in your project, Hibernate does not create tables.
Please remove it and try again.
I am using Spring Data REST and EclipseLink to create a multi-tenant, single-table application.
But I am not able to create a repository on which I can call custom query parameters.
My Kid class:
@Entity
@Table(name = "kid")
@Multitenant
public class Kid {

    @Id
    private Long id;

    @Column(name = "tenant_id")
    private String tenant_id;

    @Column(name = "mother_id")
    private Long motherId;

    // more attributes, constructor, getters and setters
}
My KidRepository:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long>, QuerydslPredicateExecutor<Kid> {}
When I call localhost/kids I get the following exception:
Exception [EclipseLink-6174] (Eclipse Persistence Services - 2.7.4.v20190115-ad5b7c6b2a):
org.eclipse.persistence.exceptions.QueryException
Exception Description: No value was provided for the session property [eclipselink.tenant-id].
This exception is possible when using additional criteria or tenant discriminator columns without specifying the associated contextual property.
These properties must be set through EntityManager, EntityManagerFactory or persistence unit properties.
If using native EclipseLink, these properties should be set directly on the session.
When I remove the @Multitenant annotation from my entity, everything works fine, so it definitely has something to do with EclipseLink.
When I don't extend QuerydslPredicateExecutor it works too, but then I have to implement every findBy* method myself. And even doing so, it breaks again. Changing my KidRepository to:
@RepositoryRestResource
public interface KidRepository extends PagingAndSortingRepository<Kid, Long> {
    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}
When I now call localhost/kids/search/findByMotherId?motherId=1, I get the same exception as above.
I used this tutorial to set up EclipseLink with JPA: https://blog.marcnuri.com/spring-data-jpa-eclipselink-configuring-spring-boot-to-use-eclipselink-as-the-jpa-provider/, meaning the PlatformTransactionManager, the createJpaVendorAdapter and the getVendorProperties methods are overridden.
The tenant-id comes with a JWT, and everything works fine as long as I don't use the QuerydslPredicateExecutor, which is mandatory for the use case.
It turns out that the wrong JpaTransactionManager is used when I rely on the QuerydslPredicateExecutor. I couldn't find out which one is created, but with multiple breakpoints set inside the EclipseLink framework code, none of them were hit. This is true both when using the QuerydslPredicateExecutor and when using the custom findBy method.
I have googled a lot and tried to override some of the basic EclipseLink methods, but none of that worked. I am running out of options.
Does anyone have any idea how to fix or work around this?
I was looking for a solution to the same issue; what finally helped was adding Spring's @Transactional annotation to either the repository or any place from which the custom query is called. (It even works with javax.transaction.Transactional.) We had the @Transactional annotation on most of our services, so the issue was not obvious and its occurrence seemed rather accidental.
A more detailed explanation of using @Transactional on repositories is here: How to use @Transactional with Spring Data?.
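A minimal sketch of that fix, applied to the KidRepository from the question (the annotation could equally go on a calling service method instead of the repository interface):

```java
import java.util.Collection;

import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;
import org.springframework.transaction.annotation.Transactional;

@RepositoryRestResource
@Transactional // wraps each repository call in a Spring-managed transaction,
               // so the EclipseLink session carrying eclipselink.tenant-id is used
public interface KidRepository extends PagingAndSortingRepository<Kid, Long> {
    Collection<Kid> findByMotherId(@Param("motherId") Long motherId);
}
```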
We have created a Spring Boot project using version 1.3.5. Our application interacts with a MySQL database.
We have created a set of JPA repositories in which we use findAll, findOne and other custom query methods.
We are facing an issue which occurs randomly. The steps to reproduce it are as follows:
1. Fire a read query on the DB using the Spring Boot application.
2. Now manually change the data of the records returned by the above read query, using the MySQL console.
3. Again fire the same read query using the application.
After step 3 we should have received the modified results of step 2, but what we got was the data from before the modification.
Now if we fire the read query again using the application, it gives us the correct values.
This issue occurs randomly. We are not using any kind of cache in our application.
While debugging, I found that the JPA repository code is in fact calling MySQL and it also fetches the latest result; but when this call returns back to our application service, surprisingly the return value has the old data.
Please help us identify the possible cause of this.
JPA/Datasource config:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/dbname?autoReconnect=true
spring.datasource.username=root
spring.datasource.password=xxx
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
spring.datasource.max-wait=15000
spring.datasource.max-active=100
spring.datasource.max-idle=20
spring.datasource.test-on-borrow=true
spring.datasource.remove-abandoned=true
spring.datasource.remove-abandoned-timeout=300
spring.datasource.default-auto-commit=false
spring.datasource.validation-query=SELECT 1
spring.datasource.validation-interval=30000
hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
hibernate.show_sql=false
hibernate.hbm2ddl.auto=update
Service Method:
@Override
@Transactional
public List<Event> getAllEvent() {
    return eventRepository.findAll();
}
JPARepository:
public interface EventRepository extends JpaRepository<Event, Long> {
    List<Event> findAll();
}
Annotate the entity with @Cacheable(false), for example:

@Entity
@Table(name = "table_name")
@Cacheable(false)
public class EntityName {
    // ...
}
This might be because of "dirty reads". I faced a similar issue; try using transactional locks, especially "repeatable reads", which could probably avoid this problem. Correct me if I'm wrong.
You can use entityManager.refresh(entity) to get the latest values of the entity.
You can use:

@Autowired
private EntityManager entityManager;

then, before querying the same entity another time:

entityManager.clear();

then call the query.
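Put together with the service method from the question, that answer could look like the following sketch (the service class name and constructor wiring are assumed, not from the original code):

```java
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.transaction.annotation.Transactional;

public class EventService {

    @PersistenceContext
    private EntityManager entityManager;

    private final EventRepository eventRepository;

    public EventService(EventRepository eventRepository) {
        this.eventRepository = eventRepository;
    }

    @Transactional
    public List<Event> getAllEvent() {
        // Detach all managed entities so the next query hits the database
        // instead of returning stale instances from the persistence context.
        entityManager.clear();
        return eventRepository.findAll();
    }
}
```

Note that clear() detaches every entity in the persistence context, so any unflushed changes to other entities in the same transaction would be lost; refresh(entity) is the narrower alternative when only one entity is stale.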
I've been struggling for the past week to successfully integrate Spring Data MongoDB into our application. We use the fairly common practice of having separate databases for each collection that we rely on. For instance, TenantConfiguration database contains only the TenantConfigurations collection.
I've read through the documentation several times and trawled through the code for a solution but have turned up nothing. Surely such a widely adopted project has some solution for this issue? My current attempt looks like this:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = TenantConfigurationRepository.class,
        mongoTemplateRef = "tenantConfigurationTemplate")
public class TenantConfigurationRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate tenantConfigurationTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }

        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();

        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "TenantConfiguration");
    }
}
Here is one of the other individual repository configurations:
@Configuration
@EnableMongoRepositories(basePackages = "com.whatever.service.repository",
        basePackageClasses = RegisteredCardRepository.class,
        mongoTemplateRef = "registeredCardTemplate")
public class RegisteredCardRepositoryConfig {

    @Value("${mongo.hosts}")
    private List<String> mongoHosts;

    @Bean
    public MongoTemplate registeredCardTemplate() throws Exception {
        final List<ServerAddress> serverAddresses = new ArrayList<>();
        for (String host : mongoHosts) {
            serverAddresses.add(new ServerAddress(host, 27017));
        }

        final MongoClientOptions clientOptions = new MongoClientOptions.Builder()
                .connectTimeout(25000)
                .readPreference(ReadPreference.primaryPreferred())
                .build();

        final MongoClient client = new MongoClient(serverAddresses, clientOptions);
        return new MongoTemplate(client, "RegisteredCard");
    }
}
Now here is the actual repository definition for the RegisteredCard repository:
@Repository
public interface RegisteredCardRepository extends MongoRepository<RegisteredCard, Guid>,
        QueryDslPredicateExecutor<RegisteredCard> { }
This all makes perfect sense to me: the individual configurations uniquely identify the specific repository interfaces they configure and the specific template bean to use with each repository, via the mongoTemplateRef parameter of the annotation. At least, this is how the documentation seems to imply it should work.
In reality, when I start up the application, the RegisteredCard repository resolves to a MongoDB repository instance with an associated MongoDbFactory that is bound to the TenantConfiguration database. In fact, every single repository receives the same, incorrect MongoOperations object. Despite each repository having its own unique configuration, it appears that whatever database is accessed first remains the target database for every repository.
Are there any solutions available to this problem?
It's taken me almost a week, but I've actually found a passable solution to this issue. Here's a quick run-down of facts I've picked up while researching this issue:
@EnableMongoRepositories(basePackageClasses = Whatever.class) simply uses a qualified class name to indicate which package should be scanned for all of your defined data models. This is entirely equivalent to doing @EnableMongoRepositories(basePackages = "com.mypackage.whatevers") if Whatever.class resides in that package.
@EnableMongoRepositories is not repeatable, but it can be used to annotate several classes. This has been covered in other SO conversations but bears repeating here. You will need to define several repository configuration classes: one for each database you intend to interact with.
Each of your individual repository configurations must specify its own MongoTemplate instance in the @EnableMongoRepositories annotation. You can get away with providing only a single Mongo bean, but the MongoTemplate relies on a specific MongoMappingContext.
The @EnableMongoRepositories annotation helps define your mapping context, which understands the structure of your data models and how to serialize them. It also understands the @Document and @Field annotations and does the heavy lifting of persisting your objects. The MongoTemplate instances are where you specify which database you want to interact with. So by providing the @EnableMongoRepositories annotation with both a basePackages attribute and a mongoTemplateRef attribute, you can tell Spring Data MongoDB to "take these models and persist them in this specific database".
The unfortunate requirement of this solution is that you must organize your data models into separate packages depending on which database they belong in. If, like me, you are using a Mongo database structure that allocates a single collection to each database (this is fairly common for heavily accessed collections), this means that each of your data models must reside in its own package. Each of these packages must be pointed to by an @EnableMongoRepositories annotation that also contains a mongoTemplateRef attribute referencing a unique MongoTemplate bean.
I hope this helps someone avoid the trouble I've gone through trying to accomplish what should be a fairly run-of-the-mill Mongo integration.
PS: Abandon all hope, those who seek to combine auditing with this configuration.
I know this is old, but for those who are looking for a short solution like me:

@Autowired
@Qualifier("registeredCardTemplate")
private MongoTemplate template;

The qualifier name is your mongoTemplateRef value.
I have started to use SDN 3.0.0.M1 with Neo4j 2.0 (via the REST interface) and I want to use an existing graph.db with existing data.
I have no problem finding nodes created through SDN via hrRepository.save(myObject);, but I can't fetch any existing node (not created through SDN) via hrRepository.findAll(); or any other method, despite having manually added a __type__ property to these existing nodes.
I use a very simple repository to test this:

@Component
public interface HrRepository extends GraphRepository<Hr> {

    Hr findByName(String name);

    @Query("match (hr:hr) return hr")
    EndResult<Hr> GetAllHrByLabels();
}
And the named query GetAllHrByLabels works perfectly.
Is there a way to use the standard methods (findAll(), findByName()) on existing data without redefining the Cypher queries?
I recently ran into the same problem when upgrading from SDN 2.x to 3.0. I was able to get it working by first following the steps in this article: http://maxdemarzi.com/2013/06/26/neo4j-2-0-is-coming/ to create and enable Neo4j Labels on the existing data.
From there, though, I had to get things working for SDN 3. As you encountered, to do this, you need to set the metadata correctly. Here's how to do that:
Consider a @NodeEntity called Person that inherits from AbstractNodeEntity (imports and extraneous code removed for brevity):
AbstractNodeEntity:
@NodeEntity
public abstract class AbstractNodeEntity {
    @GraphId private Long id;
}
Person:
@NodeEntity
@TypeAlias("Person") // <== This line added for SDN 3.0
public class Person extends AbstractNodeEntity {
    public String name;
}
As you know, in SDN 2.x a __type__ property is created automatically that stores the class name used by SDN to instantiate the node entity when it's read from Neo4j. This is still true, although in SDN 3.0 the value is now specified using the @TypeAlias annotation, as seen in the example above. SDN 3.0 also adds new metadata in the form of Neo4j labels representing the class hierarchy, where the node's class label is prefixed with an underscore (_).
For existing data, you can add these labels in Cypher (I just used the new web-based Browser utility in Neo4j 2.0.1) like this:
MATCH (n {__type__:'Person'}) SET n:`_Person`:`AbstractNodeEntity`;
Just wash/rinse/repeat for any other @NodeEntity types you have.
There is also a Neo4j Label that gets created called SDN_LABEL_STRATEGY but it isn't applied to any nodes, at least in my data. SDN 3 must have created it automatically, as I didn't do so manually.
Hope this helps...
-Chris
Using SDN over REST is probably not the best idea performance-wise, just so you know.
Data not created with SDN won't have the necessary meta information.
You will have to iterate over the nodes manually and call

template.postEntityCreation(Node, Class);

on each of them to add the type information, where Class is your SDN-annotated entity class. Something like:

for (Node n : template.query("match (n) where n.__type__ = 'Hr' return n").to(Node.class)) {
    template.postEntityCreation(n, Hr.class);
}
We have an application where we use Struts, Spring and Hibernate.
Previously, we were using a MySQL database for running test suites with the TestNG framework.
Now we want to use the in-memory mode of HSQLDB.
We have made all the required code changes to use HSQLDB in in-memory mode, for example:

datasource url = jdbc:hsqldb:mem:TEST_DB
username = sa
password =
driver = org.hsqldb.jdbcDriver
hibernate dialect = org.hibernate.dialect.HSQLDialect
hibernate.hbm2ddl.auto = create
@Autowired
private DriverManagerDataSource dataSource;

private static Connection dbConnection;
private static IDatabaseConnection dbUnitConnection;
private static IDataSet dataSet;

private MockeryHelper mockeryHelper;

public void startUp() throws Exception {
    mockeryHelper = new MockeryHelper();

    if (dbConnection == null) {
        dbConnection = dataSource.getConnection();
        dbUnitConnection = new DatabaseConnection(dbConnection);
        dbUnitConnection.getConfig().setProperty(DatabaseConfig.PROPERTY_DATATYPE_FACTORY, new HsqldbDataTypeFactory());
        dataSet = new XmlDataSet(new FileInputStream("src/test/resources/test-data.xml"));
    }

    DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
}
We have made the required code changes to our base class, where we do the startup and teardown of the database before and after each test.
We use a test-data.xml file from which we insert test data into the created database using the TestNG framework. Now my questions are:
1. When I run a test case, the database gets created and the data is also inserted correctly. However, my DAOs return empty object lists when I try to retrieve the data from Struts interceptors.
2. We use HSQLDB version 1.8.0.10. The same configuration is used in another project. In that project, most of the test cases run successfully, but for some of them the sort order of the data is incorrect.
We discovered that HSQLDB is case sensitive when sorting, and that there is a property, sql.ignore_case, which, when set to true, makes sorting case insensitive. But this is not working for us.
Can someone please help with this?
Thanks in advance.
I'm afraid sql.ignore_case is not available in your HSQLDB version, as it's not even in the latest stable release (2.2.9), contrary to what the docs say. However, the latest snapshots, as stated in this thread, do include it. I'm not using 1.8 myself, but executing SET IGNORECASE TRUE before any table creation may work for you; it does in 2.2.9. If you really need 1.8, a third option may be to pick the relevant code from the latest source, add it to the 1.8 source and recompile, though I have no idea how hard or easy that would be.
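A sketch of what that could look like via plain JDBC, using the in-memory URL and credentials from the question (assumes the HSQLDB driver jar is on the classpath; the statement must run before Hibernate or DbUnit create any tables, since IGNORECASE only affects columns created afterwards):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HsqldbCaseSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hsqldb:mem:TEST_DB", "sa", "");
             Statement stmt = conn.createStatement()) {
            // Make VARCHAR columns created from here on compare
            // and sort case-insensitively.
            stmt.execute("SET IGNORECASE TRUE");
            // ... now let hbm2ddl/DbUnit create tables and load test data
        }
    }
}
```

One natural place for this in the test setup above would be right after dataSource.getConnection() in startUp(), before CLEAN_INSERT runs.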