Is there a generic way of getting columns from a ResultSet in mapRow - jdbc

I am using SimpleJdbcTemplate and for example I have something like this:
@Override
public Variant mapRow(ResultSet rs, int rowNum) throws SQLException
Then I get the values from this result set with lines of code like this:
variant.setName(rs.getString("variant_name"));
So I have to look at my table and see what type I should use for each column (getString for a String in this example), which means I end up with getString, getLong, getInt, and so on.
I was wondering if there is a more generic way of getting these values from the result set, without having to specify the correct type, in the hope that Spring JDBC takes care of the boxing/unboxing of these generic types.

If you want to map JDBC results to your object model, then you're going to have to live with doing that. That's the deal when you use JDBC.
If you want something more high level, including column-to-property mapping, then you need a better tool. You could go the whole hog and use Hibernate, but that carries a whole load of baggage, and presents 10 new problems for every one it solves.
Have a look at MyBatis (formerly known as iBatis). This is a pretty basic framework for mapping JDBC result sets to JavaBeans, with connection/statement management baked in. Spring provides support for iBatis 2, but iBatis 2 itself is no longer supported. The newer MyBatis 3.x isn't supported by Spring out of the box, but the MyBatis project provides its own Spring integration.
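For comparison, here is a minimal sketch of the generic alternatives that exist inside Spring JDBC itself, assuming the Variant bean from the question (with a variantName property for the second option) and using JdbcTemplate in place of the now-deprecated SimpleJdbcTemplate:
import java.sql.ResultSet;
import java.util.List;
import org.springframework.jdbc.core.BeanPropertyRowMapper;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

public class VariantQueries {

    private final JdbcTemplate jdbcTemplate;

    public VariantQueries(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Option 1: stay generic inside mapRow by using getObject instead of the typed getters;
    // the cast (and any boxing/unboxing) then happens on the Java side.
    private final RowMapper<Variant> genericMapper = (ResultSet rs, int rowNum) -> {
        Variant variant = new Variant();
        variant.setName((String) rs.getObject("variant_name"));
        return variant;
    };

    public List<Variant> findAllWithGenericMapper() {
        return jdbcTemplate.query("select * from variant", genericMapper);
    }

    // Option 2: skip the hand-written mapper and let Spring match columns to properties
    // by name (variant_name -> variantName), which only works if the names line up.
    public List<Variant> findAll() {
        return jdbcTemplate.query("select * from variant",
                BeanPropertyRowMapper.newInstance(Variant.class));
    }
}
The point of the answer above still stands: for anything beyond simple name matching, a mapping framework such as MyBatis is the better tool.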

Related

Technical difference between Spring Boot with JOOQ and Spring Data JPA

When would you use Spring Data JPA over Spring Boot with JOOQ and vice versa?
I know that Spring Data JPA can be used for basic CRUD queries, but not really for complex join queries, whereas jOOQ makes those easier?
EDIT: Can you use Spring Data JPA together with jOOQ?
There is no easy answer to your question. I have given a couple of talks on that topic. Sometimes there are good reasons to have both in a project.
Edit: IMHO, abstraction over the database with regard to dialects and datatypes is not the main point here. jOOQ does a pretty good job of generating SQL for a given target dialect, and so does JPA / Hibernate. I would even say that jOOQ goes the extra mile to emulate functions for databases that don't have all the bells and whistles of Postgres or Oracle.
The question here is "Do I want to be able to express a query myself with everything SQL has to offer or am I happy with what JPA can express?"
Here's an example to run both together. I have a Spring Data JPA provided repository here with a custom extension (interface and implementation are necessary). I let the Spring context inject both the JPA EntityManager as well as the jOOQ context. I then use jOOQ to create queries and run them through JPA.
Why? Because expressing the query in question is not possible with JPA ("give me the genres I listened to the most", which is not simply the one with the highest count, since several could be tied).
The reason I run the query through JPA is simple: a downstream use case might require me to pass JPA entities to it. jOOQ can of course run this query itself, and you could work on records or map the results any way you like. But as you specifically asked about using both technologies, I thought this was a good example:
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.Query;
import org.jooq.DSLContext;
import org.jooq.Field;
import org.jooq.Record;
import org.jooq.SelectQuery;
import org.jooq.conf.ParamType;
import org.jooq.impl.DSL;
import org.springframework.data.repository.CrudRepository;
import static ac.simons.bootiful_databases.db.tables.Genres.GENRES;
import static ac.simons.bootiful_databases.db.tables.Plays.PLAYS;
import static ac.simons.bootiful_databases.db.tables.Tracks.TRACKS;
import static org.jooq.impl.DSL.count;
import static org.jooq.impl.DSL.rank;
import static org.jooq.impl.DSL.select;
public interface GenreRepository extends
        CrudRepository<GenreEntity, Integer>, GenreRepositoryExt {

    List<GenreEntity> findAllByOrderByName();
}

interface GenreRepositoryExt {

    List<GenreWithPlaycount> findAllWithPlaycount();

    List<GenreEntity> findWithHighestPlaycount();
}

class GenreRepositoryImpl implements GenreRepositoryExt {

    private final EntityManager entityManager;

    private final DSLContext create;

    public GenreRepositoryImpl(EntityManager entityManager, DSLContext create) {
        this.entityManager = entityManager;
        this.create = create;
    }

    @Override
    public List<GenreWithPlaycount> findAllWithPlaycount() {
        final Field<Integer> cnt = count().as("cnt");
        return this.create
                .select(GENRES.GENRE, cnt)
                .from(PLAYS)
                .join(TRACKS).onKey()
                .join(GENRES).onKey()
                .groupBy(GENRES.GENRE)
                .orderBy(cnt)
                .fetchInto(GenreWithPlaycount.class);
    }

    @Override
    public List<GenreEntity> findWithHighestPlaycount() {
        /*
        select id, genre
        from (
            select g.id, g.genre, rank() over (order by count(*) desc) rnk
            from plays p
            join tracks t on p.track_id = t.id
            join genres g on t.genre_id = g.id
            group by g.id, g.genre
        ) src
        where src.rnk = 1;
        */
        final SelectQuery<Record> sqlGenerator =
                this.create.select()
                        .from(
                            select(
                                GENRES.ID, GENRES.GENRE,
                                rank().over().orderBy(count().desc()).as("rnk")
                            ).from(PLAYS)
                            .join(TRACKS).onKey()
                            .join(GENRES).onKey()
                            .groupBy(GENRES.ID, GENRES.GENRE)
                        ).where(DSL.field("rnk").eq(1)).getQuery();

        // Retrieve the SQL with named parameters
        final String sql = sqlGenerator.getSQL(ParamType.NAMED);
        // and create the actual Hibernate query
        final Query query = this.entityManager.createNativeQuery(sql, GenreEntity.class);
        // fill in the parameters
        sqlGenerator.getParams().forEach((n, v) -> query.setParameter(n, v.getValue()));
        // execute the query
        return query.getResultList();
    }
}
I have spoken about this a couple of times. There is no silver bullet in either technology; sometimes it's a very fine judgement call:
The full talk is here: https://speakerdeck.com/michaelsimons/live-with-your-sql-fetish-and-choose-the-right-tool-for-the-job
As well as the recorded version of it: https://www.youtube.com/watch?v=NJ9ZJstVL9E
The full working example is here https://github.com/michael-simons/bootiful-databases.
IMHO, if you want a performant and maintainable application that uses a database at its core, you don't want to abstract away the fact that you are using a database. jOOQ gives you full control, because you can read and write the actual query in your code, but with type safety.
JPA embraces the OO model, and this simply does not match the way a database works in all cases, which can result in unexpected queries such as N+1 selects because you put the wrong annotation on a field. If you are not paying enough attention, this will lead to performance issues when scaling your application. The JPA Criteria API helps a bit, but it is still much harder to write and read.
As a result, with JPA you first write your query in SQL and then spend half a day translating it to Criteria. After years of working with both frameworks, I would use jOOQ even for a simple CRUD application (because there is no such thing as a simple CRUD application :-)).
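To make the verbosity argument concrete, here is a minimal sketch of a trivial filter written with the Criteria API, assuming a hypothetical Genre entity with a name property; the equivalent SQL is a single line (select * from genres where name = ?):
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

public class GenreCriteriaExample {

    private final EntityManager entityManager;

    public GenreCriteriaExample(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    // Several objects and a string-based property reference just to express "where name = ?".
    public List<Genre> findByName(String name) {
        CriteriaBuilder cb = entityManager.getCriteriaBuilder();
        CriteriaQuery<Genre> query = cb.createQuery(Genre.class);
        Root<Genre> genre = query.from(Genre.class);
        query.select(genre).where(cb.equal(genre.get("name"), name));
        return entityManager.createQuery(query).getResultList();
    }
}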
Edit: I don't think that you can mix JPA with jOOQ; the question is, why would you want to? They each use a different approach, so just choose one. It's difficult enough to learn the intricacies of one framework.
Spring Data JPA gives you the following:
An ORM layer, allowing you to treat database tables as if they were Java objects (see the sketch after this list). It allows you to write code that is largely database-agnostic (you can use MySQL, Oracle, SQL Server, etc.) and that avoids much of the error-prone code you get when writing bare SQL.
The Unit of Work pattern. One reason why you see so many articles on C# explaining what a unit of work is, and practically none for Java, is because of JPA. Java has had this for 15 years; C#, well, you never know.
Domain-driven design repositories. DDD is an approach to object-oriented software that does away with the anaemic domain model you so often see in database-driven applications, where entity objects only have data and accessor methods and all business logic lives in service classes. There's more to it, but this is the most important bit as it pertains to Spring Data JPA.
Integration into the Spring ecosystem, with inversion of control, dependency injection, etc.
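As a minimal sketch of the first point, here is what "a table as a Java object" plus a repository with derived queries looks like; the Customer entity and its columns are made up for illustration:
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import org.springframework.data.repository.CrudRepository;

@Entity
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    protected Customer() {
        // required by JPA
    }

    public Customer(String name) {
        this.name = name;
    }

    public Long getId() {
        return id;
    }

    public String getName() {
        return name;
    }
}

interface CustomerRepository extends CrudRepository<Customer, Long> {

    // Spring Data derives the query from the method name; no SQL is written here.
    Iterable<Customer> findByName(String name);
}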
jOOQ, on the other hand, is a database mapping library that implements the active record pattern. It takes an SQL-centric approach to database operations, and uses a domain-specific language for that purpose.
As happens so often, there is no one correct or superior choice. Spring Data JPA works very well if you don't care about your database. If you're happy not to do any complicated queries, then Spring Data JPA will be enough. However, once you need to do joins between tables, you notice that a Spring Data JPA repository really isn't a very good match for certain operations.
As @michael-simons mentioned, combining the two can sometimes be the best solution.
Here's an official explanation when JOOQ fits:
https://www.jooq.org/doc/latest/manual/getting-started/jooq-and-jpa/
Just because you're using jOOQ doesn't mean you have to use it for everything!
When introducing jOOQ into an existing application that uses JPA, the common question is always: "Should we replace JPA by jOOQ?" and "How do we proceed doing that?"
Beware that jOOQ is not a replacement for JPA. Think of jOOQ as a complement. JPA (and ORMs in general) try to solve the object graph persistence problem. In short, this problem is about:
Loading an entity graph into client memory from a database
Manipulating that graph in the client
Storing the modifications back to the database
As the above graph gets more complex, a lot of tricky questions arise, like:
What's the optimal order of SQL DML operations for loading and storing entities?
How can we batch the commands more efficiently?
How can we keep the transaction footprint as low as possible without compromising on ACID?
How can we implement optimistic locking?
jOOQ only has some of the answers. While jOOQ does offer updatable records that help running simple CRUD, a batch API, and optimistic locking capabilities, jOOQ mainly focuses on executing actual SQL statements.
SQL is the preferred language of database interaction when any of the following are given:
You run reports and analytics on large data sets directly in the database
You import / export data using ETL
You run complex business logic as SQL queries
Whenever SQL is a good fit, jOOQ is a good fit. Whenever you're operating on and persisting the object graph, JPA is a good fit.
And sometimes, it's best to combine both
Spring Data JPA also supports the @Query idiom, with the ability to run native queries (by setting the nativeQuery flag), where we can write and see the query (simple or complex, with joins or otherwise) right there with the repository and reuse it easily.
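A minimal sketch of that idiom, reusing the genres/tracks/plays schema from the answer above; the repository and method names here are made up:
import java.util.List;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;

public interface GenreNativeQueryRepository extends CrudRepository<GenreEntity, Integer> {

    // The native join query lives right next to the repository and can be reused
    // wherever the repository is injected.
    @Query(value = "select g.* from genres g "
            + "join tracks t on t.genre_id = g.id "
            + "join plays p on p.track_id = t.id "
            + "group by g.id, g.genre "
            + "order by count(*) desc", nativeQuery = true)
    List<GenreEntity> findAllOrderedByPlaycount();
}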
Given the above,
When would you use Spring Data JPA over Spring Boot with JOOQ and vice versa?
I would simply use Spring Data JPA, unless I were not using the Spring ecosystem itself. Another reason might be that I prefer the fluent style.
I know that Spring Data JPA can be used for completing basic CRUD queries, but not really for complex join queries
As I noted above, Spring Data JPA does provide the ability to use complex and/or join queries. In addition, the custom repository mechanism (an example that uses jOOQ is already in @Michael Simons' post above) provides even more flexibility. So it is a full-fledged solution by itself.
Can you use both Spring data jpa with jooq?
Already answered wonderfully by @Michael Simons above.

Embedded H2 Database for dynamic files

In our application, we need to load large CSV files and fetch some data out of them. For example, getting the distinct values from the CSV file. For this, we decided to go with an in-memory DB like H2, as there is no need to store the data in persistent storage.
However, the file is so dynamic that the columns may not always be the same. I need to load the file into the H2 database as a table that is temporary for that session.
The tech stack is Spring Boot and H2.
The examples I see on forums use a standard entity that knows what fields the table has. In my case, however, the table columns will be dynamic.
I tried the below in Spring Boot:
public interface ImportCSVRepository extends JpaRepository<Object, String>
with
#Query(value = "CREATE TABLE TEST AS SELECT * FROM CSVREAD('test.csv');", nativeQuery = true)
But this gives an unmanaged entity error. I understand why the error is thrown; however, I am not sure how to achieve this. Also, please clarify whether I should use Spring Batch.
You can use JdbcTemplate to manually create tables and query/update the data in them.
An example of how to create a table with JdbcTemplate
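A minimal sketch of that idea, assuming an H2 in-memory DataSource is already configured (for example by Spring Boot) and that test.csv is readable by the H2 engine; the table and class names are made up:
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;

public class CsvLoader {

    private final JdbcTemplate jdbcTemplate;

    public CsvLoader(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void load() {
        // H2 infers the column names and count from the CSV header line,
        // so no entity or fixed schema is needed.
        jdbcTemplate.execute("CREATE TABLE csv_data AS SELECT * FROM CSVREAD('test.csv')");
    }

    public List<Map<String, Object>> distinctValuesOf(String column) {
        // The column name is concatenated only because it cannot be a bind parameter;
        // validate it against the actual CSV header before doing this in real code.
        return jdbcTemplate.queryForList("SELECT DISTINCT " + column + " FROM csv_data");
    }
}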
Dynamically creating tables and defining new entities (or modifying existing ones) is hardly possible with Spring Data repositories and @Entity classes. You should probably also check out some NoSQL DBs like MongoDB; it's easier to define documents (or key-value objects, as in Redis) with dynamic structures in them.

How should I define non-entity repositories with Spring Data MongoDB?

In my domain I have the usual entities (User, Company, etc.) and also "entities" that don't change; I mean they are fixed values, but stored in the database. My backend is Mongo, so I make use of MongoRepository. I'm also using Spring Data REST.
Let's say I have defined Sector as an entity, which is nothing more than a String wrapped in a Java object.
So this is how I define the repository.
@RepositoryRestResource
public interface SectorRepo extends MongoRepository<Sector,String>{
}
The thing is that this seems inappropriate, as I should not define an object that only wraps a string and treat it as an entity, because it isn't one. The only purpose of the Sector collection is to be loaded into a combo box, nothing more.
The problem gets serious when you have more and more of these non-entity objects.
How should I approach this situation so I can still use MongoRepository + Spring Data REST?
This is similar to a couple of other questions. Please see my answers to both. Hope it helps.
Spring Data MongoDB eliminate POJO's
Storing a JSON schema in mongodb with spring

Spring SimpleJdbcInsert vs JdbcTemplate

I have a requirement where I have to insert a row into the database and get the key (identity) back. I thought of using SimpleJdbcInsert for this. I am passing the JdbcTemplate object to my SimpleJdbcInsert and executing the executeAndReturnKey() method.
The same can be done using the update() method of JdbcTemplate by setting up a PreparedStatement instead of a parameter map.
I just want to know if JdbcTemplate is better in terms of performance and whether I should be using it over SimpleJdbcInsert. If so, what is the reason for its superior performance?
Note: I'm not inserting a batch of records, but a single record only.
Thanks
SimpleJdbcInsert vs JdbcTemplate
From docs.spring.io
JdbcTemplate is the classic Spring JDBC approach and the most popular. This "lowest level" approach and all others use a JdbcTemplate under the covers.
Note: all others use a JdbcTemplate under the covers.
SimpleJdbcInsert optimizes database metadata to limit the amount of necessary configuration. This approach simplifies coding so that you only need to provide the name of the table or procedure and provide a map of parameters matching the column names. This only works if the database provides adequate metadata. If the database doesn't provide this metadata, you will have to provide explicit configuration of the parameters.
If you use a SimpleJdbcInsert, the actual insert is still handled by a JdbcTemplate underneath. So, in terms of performance, SimpleJdbcInsert cannot be better than JdbcTemplate.
So, performance-wise, you do not gain anything by using SimpleJdbcInsert.
But for performing insert operations into multiple tables, SimpleJdbcInsert is definitely more convenient than the JdbcTemplate class. There may be situations in which you want to insert data into a lot of tables and would like to write less code. In these situations, using SimpleJdbcInsert can be a very good option. See this example to understand that.
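For illustration, a minimal sketch of both approaches for a single-row insert with key retrieval, assuming a hypothetical person table with an identity column id:
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.Collections;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.simple.SimpleJdbcInsert;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;

public class PersonDao {

    private final JdbcTemplate jdbcTemplate;
    private final SimpleJdbcInsert insertPerson;

    public PersonDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
        // As in the question: the SimpleJdbcInsert is built on top of the JdbcTemplate.
        this.insertPerson = new SimpleJdbcInsert(jdbcTemplate)
                .withTableName("person")
                .usingGeneratedKeyColumns("id");
    }

    // SimpleJdbcInsert: name the table and columns, get the generated key back directly.
    public Number insertWithSimpleJdbcInsert(String name) {
        return insertPerson.executeAndReturnKey(Collections.singletonMap("name", name));
    }

    // JdbcTemplate: same effect, but you write the SQL and manage the KeyHolder yourself.
    public Number insertWithJdbcTemplate(String name) {
        KeyHolder keyHolder = new GeneratedKeyHolder();
        jdbcTemplate.update(con -> {
            PreparedStatement ps = con.prepareStatement(
                    "insert into person (name) values (?)", Statement.RETURN_GENERATED_KEYS);
            ps.setString(1, name);
            return ps;
        }, keyHolder);
        return keyHolder.getKey();
    }
}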

Converting a J2EE app from SQL to Oracle - suggestions for an efficient approach

We have a J2EE app built on Struts2 + Spring + iBatis; not all DAOs use iBatis... some code still uses the old JDBC approach of interacting with the database. All our DAOs call stored procedures; we do not have any inline SQL. Since Oracle stored procedures return cursors, we have to drastically change our code.
It is fairly easy for us to convert the current iBatis mappings (in SQL) to Oracle (we used a Groovy script to do this); it is also easy to convert the Java code that was calling the old SQL mappings.
Our problem is converting the old DAOs that still use the JDBC approach. Since we will have to modify them anyway (because we are now using Oracle), we are thinking about converting them to iBatis mappings. Is this a good approach? This will be a huge effort on our side...
What do you think will be the best approach to tackle this huge effort?
Should we just get to work and start converting each method in every DAO?
Should we try to make a small script that looks at each method, parses out the relevant information, and generates iBatis mappings from that?
For maintenance and separation purposes, should we have one iBatis mapping per DAO?
I apologize if the question is vague, but I am just looking for someone who has gone through this type of thing before and has some pointers or 'lessons learned'.
The first thing you should do is cover your DAO layer in tests. This way you'll know if you broke something during the conversion. If you are moving a stored procedure from one DBMS to Oracle, you should also write tests for that using a framework like DbUnit.
You should have a TEST DB instance populated with sample data that doesn't change. You should be able to refresh this DB with the same set of sample data after you are done running your tests. This will ensure your TEST DB is in a known state. You will then have your input parameters paired with some expected (correct) result. Your test will read in these pairs, execute them against the test DB instance, and confirm the expected result is returned. Assuming your tests mutate the DB, you'll want to refresh the DB between runs of your test suite.
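A minimal DbUnit sketch of that "refresh to a known state" step; the JDBC URL, credentials, and sample-data.xml dataset file are assumptions for illustration:
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

public class TestDbRefresher {

    public void refresh() throws Exception {
        try (Connection jdbc = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:TESTDB", "test", "test")) {
            IDatabaseConnection connection = new DatabaseConnection(jdbc);
            IDataSet sampleData = new FlatXmlDataSetBuilder()
                    .build(new File("src/test/resources/sample-data.xml"));
            // Delete existing rows in the dataset's tables, then re-insert the sample data,
            // so every run of the test suite starts from the same known state.
            DatabaseOperation.CLEAN_INSERT.execute(connection, sampleData);
        }
    }
}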
Second, if you're already going in and changing some data access implementations for Oracle, why not use this as an opportunity to move some of that business logic out of the DB and into Java? There are many well-documented problems with maintaining large codebases in a DBMS.
Should we try to make a small script that looks at each method, parses out the relevant information, and generates iBatis mappings from that?
I don't recommend this. The time you'd spend tweaking the script for each special case, plus hunting down all the bugs it would introduce, would be better spent doing the conversion by hand, as a thinking human.
For maintenance and separation purposes, should we have one iBatis mapping per DAO?
That's a fine idea. You can then combine them in your sqlMapConfig with
<sqlMap resource="sqlMaps/XXX.xml" />
This will keep your mappings more manageable. Just make sure to specify the namespace attribute in each sqlMap like:
<sqlMap namespace="User">
So that you can reuse mappings between the sqlMaps for instantiating object graphs (example: when loading a User and his Permissions, the User.xml sqlMap calls the Permission.xml mapping).
All our DAOs call stored procedures
I don't see what iBatis is buying you here.
It's also not clear what the migration is. Are you saying that you've decided to move all the code into stored procedures, so there's no more in-line SQL? If that's the case, I'd say don't use iBatis. If you're already using Spring, let it call into Oracle using its StoredProcedure object and map the cursors into objects.
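A minimal sketch of that Spring StoredProcedure approach, mapping an Oracle ref cursor to objects; the procedure name, parameter names, and the User class are made up, and oracle.jdbc.OracleTypes assumes the Oracle JDBC driver is on the classpath:
import java.sql.Types;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import javax.sql.DataSource;
import oracle.jdbc.OracleTypes;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.core.SqlOutParameter;
import org.springframework.jdbc.core.SqlParameter;
import org.springframework.jdbc.object.StoredProcedure;

public class GetUsersProcedure extends StoredProcedure {

    public GetUsersProcedure(DataSource dataSource) {
        super(dataSource, "pkg_users.get_users");
        declareParameter(new SqlParameter("p_status", Types.VARCHAR));
        // Oracle returns result sets as ref cursors; map each cursor row to a User here.
        RowMapper<User> userMapper = (rs, rowNum) -> new User(rs.getLong("id"), rs.getString("name"));
        declareParameter(new SqlOutParameter("p_result", OracleTypes.CURSOR, userMapper));
        compile();
    }

    @SuppressWarnings("unchecked")
    public List<User> findByStatus(String status) {
        Map<String, Object> results = execute(Collections.singletonMap("p_status", status));
        return (List<User>) results.get("p_result");
    }

    public static class User {

        private final long id;
        private final String name;

        public User(long id, String name) {
            this.id = id;
            this.name = name;
        }

        public long getId() {
            return id;
        }

        public String getName() {
            return name;
        }
    }
}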
The recommendation to create JUnit or, better yet, TestNG tests is spot on. Do that before changing anything.

Resources