Unitils/DbUnit: How to test in a multi-database environment?

I'm using Unitils (with DbUnit) for my data access layer unit testing, but the need has arisen to test multiple databases. What's the best way to do it?
The databases are different, so some DAOs belong to one database and other DAOs to another.
I see the following alternatives:
Associating each *DaoTest with a separate unitils.properties file that would hold the configuration for that DAO's database. Is that even possible?
Having a separate test project for every database (holding that database's *DaoTests and a unitils.properties file with the database's credentials)
Any other ideas?

Hopefully you found an answer for this in the 6 years since you originally asked :)
I recently found myself with this same issue and resolved it this way:
I used a single unitils.properties file, used by every DAO test, which defines every datasource my project needs to test. In the unitils.properties file, I defined a database.schemaNames=DATABASE_1, DATABASE_2 property.
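For reference, a minimal sketch of what that file might look like (the driver and connection values here are placeholders, not taken from any real setup):
database.driverClassName=com.mysql.jdbc.Driver
database.url=jdbc:mysql://localhost:3306/DATABASE_1
database.userName=test
database.password=test
database.schemaNames=DATABASE_1, DATABASE_2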
Then, you can modify your dataset definition to look something like this:
<?xml version='1.0' encoding='UTF-8'?>
<dataset xmlns="DATABASE_1" xmlns:b="DATABASE_2">
    <some_table />
    <b:some_other_table />
    <some_table attr_1="foo" attr_2="bar" />
    <b:some_other_table other_attr="baz" />
</dataset>
Note that some_table will be assumed to live in DATABASE_1. This is because Unitils treats the first schema listed in the database.schemaNames property as the default. Because of this, you can optionally omit the xmlns="DATABASE_1" declaration from your dataset's XML file.
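To round this out, here is a minimal sketch of a test class using such a dataset (the class and file names are hypothetical):
import org.unitils.UnitilsJUnit4;
import org.unitils.dbunit.annotation.DataSet;

// Loads the multi-schema dataset above into DATABASE_1 and DATABASE_2
// before each test method runs.
@DataSet("MultiSchemaDaoTest.xml")
public class MultiSchemaDaoTest extends UnitilsJUnit4 {
    // test methods exercising the DAOs against both databases...
}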

Related

Basics of adding a custom analyzer to an index built using spring

I think this is a very basic question but I keep going around the houses so any help pointing me in the right direction would be appreciated.
I have inherited a Java application which builds an Elasticsearch index using spring-data-elasticsearch (1.2.1.RELEASE at the moment). I have modified this quite successfully in various trivial ways, but now I want to add a custom analyzer to use on one field (a char mapping to remove /'s).
The index being built is essentially one index with various document types, and it seems to be built pretty much out of the box. I'm fairly new to Java and Spring, and tracking down all the config and auto-wiring can still outfox me sometimes, but as far as I can see the client config in the context XML file points directly to the Spring code and doesn't add much except the custom index name and a location for the repository interfaces and code:
<elasticsearch:node-client id="esClient" local="true" cluster-name="products"/>
<elasticsearch:repositories base-package="com.warehouse.es.repos"/>
<bean name="elasticsearchTemplate" class="org.springframework.data.elasticsearch.core.ElasticsearchTemplate">
    <constructor-arg name="client" ref="esClient"/>
</bean>
The code then seems to use an out-of-the-box client object:
@Autowired
public void setClient(Client client) throws IOException {
    this.client = client;
}
and then goes on to set various type mappings using mapping files along these lines:
createTypeMapping(client, Constants.INDEX_NAME, INDEX_TYPE, "Products.mapping");
Apologies if some of this is either too brief (or too much waffle for this basic question), but I'm trying to work out / find an example of how and where to add my custom analyzer. I have documentation and examples showing me how to create the JSON for the custom analyzer (e.g. https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-mapping-charfilter.html#analysis-mapping-charfilter and some previous Stack Overflow Q&As), but I'm struggling to understand where to add this in my Java code creating the index.
Obviously the more help the better (!), but really at this stage I'm trying to get a grip on whether I could just add the analyzer to the yml file, whether I need to add some code to modify the client in some way, or possibly even just add it to the individual type mappings.
Thanks.
It depends on whether the index/type has been created directly on the cluster (e.g. by running a curl command) or whether the index/type creation is handled by your Spring application. In the latter case, I think you can follow some code samples from this link on github.
If your Products.mapping file has that content, that looks like a type mapping indeed. Any data that you already have in your cluster, though, needs to be re-indexed if you change the type mapping by adding a new analyzer.
EDIT with poster's findings: it is not possible to put the settings in the project's individual mapping files; they must go in a separate file referenced via the @Setting annotation.
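For illustration, a minimal sketch of that approach using spring-data-elasticsearch's @Setting annotation (the class, index, and file names here are hypothetical):
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Setting;

// The custom analyzer (e.g. a "mapping" char_filter that strips slashes)
// is defined in a separate JSON settings file rather than in the
// individual *.mapping files.
@Document(indexName = "products")
@Setting(settingPath = "/es/product-settings.json")
public class Product {
    // fields, getters and setters...
}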

Hibernate and JPA error: duplicate import on dependent Maven project

I have two Maven projects, one called project-data and the other one called project-rest, which has a dependency on the project-data project.
The Maven build is successful in the project-data project but it fails in the project-rest project, with the exception:
Caused by: org.hibernate.DuplicateMappingException: duplicate import: TemplatePageTag refers to both com.thalasoft.learnintouch.data.jpa.domain.TemplatePageTag and com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag (try using auto-import="false")
I could see some explanation here: http://isolasoftware.it/2011/10/14/hibernate-and-jpa-error-duplicate-import-try-using-auto-importfalse/
What I don't understand, is why this message does not occur when building the project-data project and occurs when building the project-rest project.
I tried to look up in the pom.xml files to see if there was something in there that could explain the issue.
I also looked up the way the tests are configured and run on the project-rest project.
But I haven't seen anything yet.
The error is basically due to the fact that the sessionFactory bean manages two entities with the same logical name TemplatePageTag:
One lies under the com.thalasoft.learnintouch.data.jpa.domain package.
The other under the com.thalasoft.learnintouch.data.dao.domain.
Since this is an unusual case, Hibernate complains about it, mostly because you may run into issues when executing HQL queries (which are basically entity-oriented queries) and get inconsistent results.
As a solution, you may need either to:
Rename your entity beans with different names to avoid the confusion, which I assume is not a suitable solution in your case since it may require much refactoring and can hurt your project's compatibility.
Configure your entities to be resolved with different names. As you are configuring one entity using XML-based mapping and the other through annotations, the way to define the entity names is not the same:
For the com.thalasoft.learnintouch.data.jpa.domain.TemplatePageTag entity, you will need to add the name attribute to the @Entity annotation as below:
@Entity(name = "TemplatePageTag_1")
public class TemplatePageTag extends AbstractEntity {
    //...
}
For the com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag, as it is mapped using an hbm.xml declaration, you will need to add the entity-name attribute to your class element as follows:
<hibernate-mapping>
    <class name="com.thalasoft.learnintouch.data.dao.domain.TemplatePageTag"
           table="template_page_tag"
           entity-name="TemplatePageTag_2"
           dynamic-insert="true"
           dynamic-update="true">
        <!-- other attributes declaration -->
    </class>
</hibernate-mapping>
Having taken a deeper look into your project structure, you may also need to fix the entity names of other beans, as you have been following the same pattern for many other classes, such as com.thalasoft.learnintouch.data.jpa.domain.AdminModule and com.thalasoft.learnintouch.data.dao.domain.AdminModule.
This issue could also be fixed by using a combination of the @Entity and @Table annotations. The link below provides a good explanation of the difference between the two:
difference between name-attribute-in-entity-and-table
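For example, a brief sketch of that combination (the logical entity name here is made up; the table name is taken from the mapping above):
import javax.persistence.Entity;
import javax.persistence.Table;

// @Entity sets the logical (HQL) name, while @Table keeps the physical
// table name unchanged, so no schema change is required.
@Entity(name = "JpaTemplatePageTag")
@Table(name = "template_page_tag")
public class TemplatePageTag extends AbstractEntity {
    //...
}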

spring transaction of JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport

How is the transaction controlled when using JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport? I checked the source code but didn't find where the transaction is controlled by JdbcTemplate/HibernateTemplate or HibernateDaoSupport/JdbcDaoSupport.
Also, in the source code HibernateDaoSupport/JdbcDaoSupport uses JdbcTemplate/HibernateTemplate, so what is the role of HibernateDaoSupport/JdbcDaoSupport and what is the role of JdbcTemplate/HibernateTemplate?
Why do we use JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport? It seems all the sample code uses them. What should I use if I don't want to use them, for example when using only Spring + Hibernate?
If I'm using JdbcTemplate/HibernateTemplate and HibernateDaoSupport/JdbcDaoSupport, do I still need to configure a transaction proxy in XML? If so, does that mean it's OK for me to put getHibernateTemplate().saveOrUpdate(user) and getHibernateTemplate().saveOrUpdate(order) together so that they're invoked in the same transaction, is that right?
First of all, please forget about HibernateTemplate and HibernateDaoSupport; these classes should be considered deprecated since the release of Hibernate 3.0.1 (which was somewhere in 2006!). You should be creating DAOs/repositories based on the plain Hibernate API, as explained in the Spring Reference Guide. (The same goes for JpaTemplate and JpaDaoSupport.)
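As a rough sketch of what such a plain-Hibernate-API DAO looks like (UserDao and User are hypothetical names, not from the question):
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

// A DAO built directly on the Hibernate API: no HibernateTemplate or
// HibernateDaoSupport, just the current (transaction-bound) session.
@Repository
public class UserDao {

    @Autowired
    private SessionFactory sessionFactory;

    public void save(User user) {
        sessionFactory.getCurrentSession().saveOrUpdate(user);
    }
}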
The intent of JdbcTemplate (and all the other *Template classes) is to make it easier to work with the underlying technology. Once upon a time this was also needed for Hibernate (< 3.0.1); now it isn't.
JdbcTemplate makes it easier to work with plain JDBC code. You don't have to get a connection, create a (Prepared)Statement, add the parameters, execute the query, iterate over the resultset and convert the ResultSet. With the JdbcTemplate much of this is hidden and most of it can be written in 1 to 3 lines of code, whereas plain JDBC would require a lot more.
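For instance, a short sketch, assuming a hypothetical person table and Person class, of how little code a query takes with JdbcTemplate:
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class PersonDao {

    private final JdbcTemplate jdbcTemplate;

    public PersonDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Connection handling, statement creation and ResultSet iteration are
    // all done by the template; only the SQL and the row mapping remain.
    public List<Person> findByName(String name) {
        return jdbcTemplate.query(
                "SELECT id, name FROM person WHERE name = ?",
                new Object[] { name },
                new RowMapper<Person>() {
                    public Person mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return new Person(rs.getLong("id"), rs.getString("name"));
                    }
                });
    }
}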
The *Support classes make it easier to gain access to a template, but they aren't a must. Creating a JdbcTemplate is quite easy, and you don't really need to extend JdbcDaoSupport, though you can if you want. For more information, a lot is explained in the reference guide.

Hibernate will not persist data after save

Can someone explain why the "lastAccessed" date does not get saved to the database in this example and how I can get it to save to the DB? My understanding is that the DateObject instance is attached after the save() call, and therefore all modifications should be persisted automatically.
Note: "myDate" is persisted correctly, so all other spring configuration seems to be correct.
@Transactional(readOnly = false)
public DateObject getOrCreateDateObject(Date myDate) {
    DateObject dateObject = getCurrentDateObject(); // For my tests, this has been returning null
    if (dateObject == null) {
        // create a new object
        dateObject = new DateObject();
        dateObject.setDate(myDate);
        sessionFactory.getCurrentSession().save(dateObject);
    }
    // This does not persist to the database
    dateObject.setLastAccessed(new Date());
    return dateObject;
}
I have also tried some of the following combinations (and more) after the save() call. None of these work:
sessionFactory.getCurrentSession().merge(dateObject); // tried before and after dateObject.setDate(d2)
sessionFactory.getCurrentSession().update(dateObject);
sessionFactory.getCurrentSession().saveOrUpdate(dateObject);
sessionFactory.getCurrentSession().flush();
DateObject doCopy = (DateObject) sessionFactory.getCurrentSession().load(DateObject.class, dateObject.getId());
sessionFactory.getCurrentSession().merge(doCopy);
doCopy.setLastAccessed(new Date());
I'm hoping this is an easy answer that I'm just not seeing. Thank you for your help!
Edit #1 05/22/2012
As requested, here is the mapping for this entity, specified in src/main/resources/META-INF/dateobject.hbm.xml. I can see that the columns are created in the database using "SELECT * FROM dateObjects" in the MySQL client. MY_DATE is populated correctly, but LAST_ACCESSED is set to NULL.
<class name="com.example.entity.DateObject" table="dateObjects">
    <id name="id" column="DATE_OBJECT_ID">
        <generator class="identity" />
    </id>
    <property name="date" type="date" column="MY_DATE" />
    <property name="lastAccessed" type="date" column="LAST_ACCESSED" />
</class>
Edit #2 05/24/2012
I have a working SSCCE at https://github.com/eschmidt/dateobject. The interesting thing is that the web client (calling localhost:8080/view/test) shows that lastAccessed is set correctly, but when I check the database with the MySQL client, it shows that lastAccessed is NULL. With this complete set of code, can anybody see why the database wouldn't update even though the method is marked #Transactional?
If you're absolutely certain that after running that code the date field is stored in the db and lastAccessed isn't, then your connection and transaction are obviously set up correctly. My first guess would be incorrect mappings, since that's the simplest solution. You don't happen to have an @Transient on the field, the getter, or the setter for lastAccessed, do you? (Assuming, of course, that you're using annotations to map your domain objects.)
If you could provide an SSCCE, I'll bet I or someone else can give you a definitive answer.
Update: It's hard trimming a full application down to the smallest possible code that demonstrates a problem. The upshot is that you'll likely find the answer while you're at it. I have lots of sample projects in github that might help guide you if you just need a few nudges in the right direction. basic-springmvc might be closest to what you're doing, but it uses annotations instead of xml for mappings. It's also a Spring MVC project. It's a lot simpler to start a Spring context manually in a main class than to worry about a whole servlet container and the multiple contexts that Spring MVC wants you to have. spring-method-caching, for one, has an example of doing that.
As for the mapping you posted, it looks fine, though it's been a long while since I touched an XML mapping. Are you using field or property access? That could possibly have a bearing on things. Also, are there any custom listeners or interceptors in the SessionFactory that might be twiddling with your objects?
You are using IDENTITY generation for your identifier generation strategy, so the save() call here immediately translates to the INSERT. Do you see any INSERT/UPDATE/DELETE SQL executed after that? If not, it is most likely that the session is just not being flushed. Flushing can happen at a number of points; read the docs on flushing if you are unfamiliar with it.
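One way to check, assuming a standard Hibernate configuration, is to turn on SQL logging and watch whether an UPDATE for LAST_ACCESSED ever appears:
<!-- In hibernate.cfg.xml, or as hibernateProperties on Spring's
     LocalSessionFactoryBean: print every statement Hibernate issues. -->
<property name="hibernate.show_sql">true</property>
<property name="hibernate.format_sql">true</property>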

JAX-WS RI does not enforce XSD restrictions

I am currently developing a few Web services using the JAX-WS reference implementation (version 2.1.7). They are contract-based, that is, the WSDL and XSD files are not generated by wsgen.
This allows me to freely use XSD restrictions to strengthen validation of values passed to my services through SOAP messages. Here are two examples of such "restricted" XSD elements:
<xsd:element name="maxResults" minOccurs="1">
    <xsd:simpleType>
        <xsd:restriction base="xsd:positiveInteger">
            <xsd:minInclusive value="1"/>
            <xsd:maxInclusive value="1000"/>
        </xsd:restriction>
    </xsd:simpleType>
</xsd:element>

<xsd:element name="lastName" minOccurs="0">
    <xsd:simpleType>
        <xsd:restriction base="xsd:string">
            <xsd:minLength value="1"/>
            <xsd:maxLength value="25"/>
        </xsd:restriction>
    </xsd:simpleType>
</xsd:element>
I added the @SchemaValidation annotation to my service classes to enforce schema validation (a minimal sketch of this setup follows the list below). However, JAX-WS does not enforce the validation rules as expected. The behaviour is as follows:
Missing mandatory elements are correctly reported (e.g., missing maxResults).
Invalid values (e.g., character data in an integer field) are correctly reported too.
Interval restriction violations (e.g., maxResults > 1000 or maxResults < 1) pass through the validation process without being reported and are injected into my JAXB-generated Java structures. Even negative values are considered valid despite the xsd:positiveInteger type!
String length constraint violations (e.g., lastName length over 25 characters) are not reported either.
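For reference, the sketch mentioned above (the class name is hypothetical; @SchemaValidation is the RI's com.sun.xml.ws.developer.SchemaValidation annotation):
import javax.jws.WebService;

import com.sun.xml.ws.developer.SchemaValidation;

// Asks the JAX-WS RI to validate inbound/outbound messages against the
// schema referenced by the WSDL for this endpoint.
@SchemaValidation
@WebService(wsdlLocation = "WEB-INF/wsdl/SearchIndividualsV1_0.wsdl")
public class SearchIndividualsImpl {
    // service operations...
}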
In other words, restrictions that appear in <xsd:element> tags are correctly enforced but <xsd:restriction> elements seem to be totally ignored by JAXB when used in a JAX-WS-based context.
I wrote a test class to check my XSD restrictions using bare JAXB (no JAX-WS). As a result, all restrictions are correctly enforced.
This gives me the feeling that there might be a bug in the usage of JAXB by JAX-WS... unless there is something I am doing incorrectly, of course...
Am I missing something fundamental here?!?
Thanks in advance for any help,
Jeff
I finally found what's wrong...
In order to get my Web services to work in a JUnit context, i.e. published through Endpoint.publish(), I had to remove the wsdlLocation attribute from my @WebService annotations. If I don't, the wsdlLocation = "WEB-INF/wsdl/SearchIndividualsV1_0.wsdl" passed to the @WebService annotation clashes with the URL value passed to the Endpoint.publish() method, http://127.0.0.1:9000/rpe-ws/SearchIndividuals.
After reading Glen Mazza's Weblog (http://www.jroller.com/gmazza/entry/soap_xml_schema_validation), Additional Notes section, I put back the wsdlLocation attribute and all restrictions are now properly enforced.
In other words, removing the wsdlLocation in a @WebService annotation does not prevent the service itself from working, but it prevents restrictions declared in <xsd:restriction> elements from being properly enforced. Restrictions declared in <xsd:element> elements, however, are still correctly enforced.
I am therefore getting back to having to solve that wsdlLocation compatibility problem to make my unit tests work properly, but this is way less critical than non-working validations in a production context...
Just in case... Anyone has an idea about this WSDL location incompatibility when running a Web service in a non-Web context?
Thanks,
Jeff
Oh brother!...
In order to override the wsdlLocation for my JUnit tests, I created derivations of my Web service implementations that override only the @WebService annotation. As a result, I ran into the same problem I finally solved this morning (ref. my first answer above).
After doing plenty of tests, I figured out that it's the presence of a @WebService-annotated class extending my Web service implementation that prevents XSD validation from properly handling <xsd:restriction> tags.
To illustrate this bizarre behaviour, let's suppose I have the following classes:
@WebService(...)
public interface JeffWebService {...}

@WebService(..., wsdlLocation = "path/myWsdl.wsdl", ...)
public class JeffWebServiceImpl implements JeffWebService {...}
where path/myWsdl.wsdl correctly locates the WSDL. Then XSD validation works properly, i.e. the content of my first answer above is totally valid.
I now add the following class that I use in my JUnit-based Endpoint.publish() calls:
@WebService(..., wsdlLocation = "alternatePath/myWsdl.wsdl", ...)
public class TestWebServiceImpl extends JeffWebServiceImpl {}
that overrides nothing but the @WebService annotation. Then XSD validation excludes <xsd:restriction> tags, as it used to do before I specified the wsdlLocation attribute at all, despite the fact that I still use the JeffWebServiceImpl implementation in my non-JUnit code! If I comment out the annotation in TestWebServiceImpl, then everything works properly again, except for the unit tests, of course.
In other words, as soon as there is some class extending my Web service implementation in the classpath, the @WebService annotation of the most specific class overrides all others, regardless of the actual class I use in a regular Web application context. Weird, isn't it?!?
Bottom line: I will disable Endpoint-based unit tests for now. If I (or anyone reading this thread) find a clean, non-bogus way to integrate both production and JUnit configurations, I will consider putting them back in my JUnit test suite.
I hope this thread will help anyone running into the same problem solve it faster than I did...
Jeff
