Is it feasible to write a JDBC driver in Kotlin?

After writing a DB-API interface and a basic SQLAlchemy dialect for our database, I was assigned to pick up Java and write a JDBC driver as well. I gather it'll be a Type 4 driver, talking to the database directly over sockets and such.
I know zero Java. I had intended to pick up Kotlin at some point, so I was wondering whether it'd be feasible to create a JDBC driver in Kotlin.
For example, as far as I know, Kotlin can use Java libraries. I'm not sure about the other way around: would any Java application be able to use a Kotlin JDBC driver, if written properly? What would "properly" mean in this case? Are there other considerations to note?
Any feedback would be appreciated.
I also considered Jython, but I'm less inclined toward it, as I suspect it would be a poorer fit, though I'm not really sure about that either.

You can absolutely do this. JDBC driver vendors need to implement certain Java interfaces in the java.sql package, which is just as feasible in Kotlin as in Java. Just do it :)
You can get started with the java.sql.Driver interface.
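As a concrete illustration, here is a minimal Kotlin sketch of such a driver class. Everything here is hypothetical: the class name, the jdbc:mydb: URL prefix, and the unimplemented connect() body are placeholders, not a real driver.

```kotlin
import java.sql.Connection
import java.sql.Driver
import java.sql.DriverManager
import java.sql.DriverPropertyInfo
import java.sql.SQLException
import java.sql.SQLFeatureNotSupportedException
import java.util.Properties
import java.util.logging.Logger

// Hypothetical skeleton; "jdbc:mydb:" is a made-up URL prefix.
class MyDbDriver : Driver {

    companion object {
        private const val URL_PREFIX = "jdbc:mydb:"

        init {
            // Self-registration, as JDBC drivers conventionally do in a static initializer.
            DriverManager.registerDriver(MyDbDriver())
        }
    }

    override fun acceptsURL(url: String?): Boolean =
        url != null && url.startsWith(URL_PREFIX)

    override fun connect(url: String?, info: Properties?): Connection? {
        // Per the JDBC contract: return null for URLs that belong to other drivers.
        if (!acceptsURL(url)) return null
        // Open the socket and return your java.sql.Connection implementation here.
        throw SQLException("not implemented yet")
    }

    override fun getPropertyInfo(url: String?, info: Properties?): Array<DriverPropertyInfo> =
        emptyArray()

    override fun getMajorVersion(): Int = 0
    override fun getMinorVersion(): Int = 1
    override fun jdbcCompliant(): Boolean = false
    override fun getParentLogger(): Logger = throw SQLFeatureNotSupportedException()
}
```

Since the compiled result is an ordinary JVM class implementing java.sql.Driver, any Java application can load it like any other driver; for automatic discovery via DriverManager, also list the class name in a META-INF/services/java.sql.Driver file.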

Related

Using JOOQ with Reactive SQL Client in Quarkus

I want to use the jOOQ DSL in Quarkus to build my SQL (and hopefully execute it).
Therefore I added the following Quarkus jOOQ extension.
Since I want to use the reactive PG SQL Client in my project, I'm asking myself whether e.g. the fetch() method of jOOQ will block the thread. Is it compatible with the reactive Vert.x client under the hood, or does it use a blocking one? It looks like the latter, since it doesn't return a future or anything like that.
In that case I probably should only use jOOQ for creating the SQL string.
Which parts of the jOOQ API can be used reactively
jOOQ's ResultQuery<R> extends Publisher<R>, so you can just place a jOOQ query in any reactive stream implementation. There are 3 main Publisher subtypes in jOOQ:
ResultQuery<R> extends Publisher<R>
RowCountQuery extends Publisher<Integer>
Batch extends Publisher<Integer>
And starting with jOOQ 3.17, there will also be a way to create transactional Publisher types.
With this in mind, in the reactive world, you will never need to call any of the traditional jOOQ blocking execution methods. You'll always implicitly execute jOOQ queries via some reactive streams integration.
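For example, here is a minimal Kotlin sketch of that: a jOOQ ResultQuery is handed directly to Project Reactor. The author and first_name names are made up, and the DSLContext is assumed to be backed by R2DBC (see below).

```kotlin
import org.jooq.DSLContext
import org.jooq.impl.DSL
import reactor.core.publisher.Flux

// A ResultQuery<R> is a Publisher<R>, so any reactive streams library can subscribe to it.
// "author" and "first_name" are made-up names for illustration.
fun firstNames(ctx: DSLContext): Flux<String> =
    Flux.from(
        ctx.select(DSL.field("first_name", String::class.java))
            .from(DSL.table("author"))
    ).map { it.value1() } // the query only executes when the Flux is subscribed to
```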
Avoiding calls to the blocking API
Starting with jOOQ 3.17, all blocking API methods (e.g. ResultQuery.fetch()) will be annotated with org.jetbrains.annotations.Blocking, so you get IDE support warning you that you're about to do something that might not make sense in your non-blocking context.
Backing implementation
For any of this to work, you need to provide jOOQ with an R2DBC connection. R2DBC is an SPI that enables interoperability between client libraries like jOOQ and R2DBC drivers like r2dbc-postgres. Just like JDBC, it works as an SPI, not strictly an API. It also integrates directly with the reactive streams SPI, which was integrated into the JDK via the Flow API in JDK 9.
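As a rough Kotlin sketch of that wiring (the R2DBC URL is a placeholder; DSL.using(ConnectionFactory) is available in recent jOOQ versions):

```kotlin
import io.r2dbc.spi.ConnectionFactories
import org.jooq.impl.DSL

// Hand jOOQ an R2DBC ConnectionFactory instead of a JDBC Connection;
// the r2dbc:postgresql URL below is a placeholder.
val connectionFactory = ConnectionFactories.get("r2dbc:postgresql://localhost:5432/mydb")
val ctx = DSL.using(connectionFactory)
// Queries built on ctx now execute through the non-blocking R2DBC driver when subscribed.
```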
There might be future work to support alternative non-blocking drivers, but R2DBC seemed to be the most interoperable choice at the time the reactive support was added, and I do hope that the Vert.x and R2DBC teams will find ways to cooperate more tightly in the future. The Vert.x SQL client, for example, does not implement the reactive streams SPI directly, and Red Hat does not seem too interested (yet) in moving forward with this issue: https://github.com/eclipse-vertx/vertx-sql-client/issues/249
So, for now, this means that you have to either:
Use jOOQ with R2DBC, which is what jOOQ supports (not sure if Quarkus will support R2DBC, though I don't see any reason why it shouldn't)
Use jOOQ to generate SQL only and run the SQL with Vert.x; you'll lose a lot of type safety and convenience, as well as access to advanced features like MULTISET, which relies on jOOQ executing your query (a sketch of this option follows below)
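Here is a minimal Kotlin sketch of that second option, with made-up table and column names:

```kotlin
import org.jooq.SQLDialect
import org.jooq.conf.ParamType
import org.jooq.impl.DSL

// Render-only mode: no connection is needed just to build the SQL string.
// "author", "first_name" and "id" are made-up names.
fun renderOnly(): Pair<String, List<Any?>> {
    val create = DSL.using(SQLDialect.POSTGRES)
    val query = create
        .select(DSL.field("first_name"))
        .from(DSL.table("author"))
        .where(DSL.field("id", Long::class.java).eq(1L))
    // getSQL(ParamType.INDEXED) renders '?' placeholders; the Vert.x PG client
    // expects '$1'-style parameters, so some adaptation is needed on your side.
    return query.getSQL(ParamType.INDEXED) to query.bindValues
}
```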
A side note on reactive execution
Of course, it's always important to think about whether you really need to go reactive. In my personal experience, this is mostly a matter of programming style, not of actual performance and/or load requirements. Sticking with the blocking paradigm and JDBC will greatly simplify your everyday work, and I doubt you'll notice a measurable difference in production.
I'm looking for a solution to do the same thing. I haven't tested it yet, but I came across these repos:
https://github.com/jklingsporn/vertx-jooq
https://github.com/jklingsporn/quarkus-jooq-reactive-example
They may help you go fully reactive using Vert.x in Quarkus.

Spring Boot 2.1.5, WebFlux, Reactor: How to deal properly with MDC

Spring Boot 2.1.5
Project Reactor 3.2.9
I am setting up a bunch of reactive REST APIs using the above-mentioned frameworks, and I am running into an annoying problem with MDC (mapped diagnostic context). My applications are in Java.
MDC relies on thread locals to store the current query's mapped context to put in the logs. That system, obviously, is not perfect and contradicts the reactive pattern, since the different steps of your execution will be executed on different threads.
I have run into the same problem with the Play Reactive framework but found a workaround there by copying the mapped context transparently from one actor to another.
For spring and reactor, I could not find a satisfying solution yet.
Some random examples found on the internet:
The first one works, but forces you to use a bunch of utility methods.
Another one does the same thing.
The second one tries to copy the context during the onNext publisher event, but seems to lose some features in the process. The signal context, for example, is lost.
I am in need of a proper solution to deal with this:
A library which would make the link between MDC and Reactor?
A way to tweak Reactor/Spring to achieve it transparently?
Any advice?
"I could not find a satisfying solution yet."
Working with contexts is the only solution for the moment, since, as you said, thread locals go against everything that has to do with reactive programming. Using a thread local as a storage point during a request is a resource-heavy way of solving things and, in my opinion, poor design. Unless logging frameworks themselves come up with a better solution to the problem, we developers must pass the data through the context to accommodate the logging frameworks' thread-bound design.
Reactive programming is a paradigm shift in the programming world. Other things, like database drivers that use thread locals to roll back transactions, are also in big trouble. The JDBC database driver spec is defined as blocking in nature, and at the moment there are efforts by Spring and the R2DBC project to define a new driver spec that is inherently non-blocking. This means that all vendors must rewrite their database driver implementations from scratch.
Reactive programming is so new that lots of libraries need to rewrite entire codebases. The logging frameworks as we know them need to be rewritten from the ground up, which is a huge task. And the context in reactive is actually something that should not even be in reactive programming; it was implemented just to accommodate problems like MDC.
It's actually a lot of overhead needing to pass data from thread to thread.
So what can we do?
Push on logging frameworks, and/or help logging frameworks rewrite their codebases
Accept that there is no "tweak" that will magically fix this
Use the context in the way suggested in the blog posts (a sketch follows below)
See also: the Project Reactor documentation on Context
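To illustrate the context-passing approach, here is a minimal Kotlin sketch. It is not a library solution: the requestId key is made up, and contextWrite/deferContextual are the Reactor 3.4+ operators (on the 3.2.x line from the question, the older subscriberContext API plays the same role).

```kotlin
import org.slf4j.LoggerFactory
import org.slf4j.MDC
import reactor.core.publisher.Mono

private val log = LoggerFactory.getLogger("demo")

// The request id travels in the Reactor Context; it is copied into MDC only
// around the actual log call, then removed so no stale data stays on the
// shared worker thread. "requestId" is a made-up key.
fun handle(): Mono<String> =
    Mono.just("payload")
        .flatMap { body ->
            Mono.deferContextual { ctx ->
                val requestId = ctx.getOrDefault("requestId", "unknown")
                MDC.put("requestId", requestId)
                try {
                    log.info("processing {}", body)
                } finally {
                    MDC.remove("requestId")
                }
                Mono.just(body.uppercase())
            }
        }
        .contextWrite { it.put("requestId", "req-42") } // in practice set by a WebFilter
```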

cassandra-jdbc and Ebean, can they work together?

I admit I am not an Avaje Ebean expert, or a JDBC expert either. I use Play Framework and Ebean in the "normal" use cases (H2 and MySQL, basically) and they perform fine for me.
I recently found out about the cassandra-jdbc driver project and was wondering if I could naively make them work together. So I tried and, once I turned off evolutions, I got a SQLFeatureNotSupportedException because cassandra-jdbc forces autocommit to always be on.
I wanted to know if there is a way to make them work together, since the driver claims to be JDBC compliant and Ebean should be able to work with that. Is there something in the way Ebean uses the drivers that makes this impossible?
Although the cassandra-jdbc driver is JDBC compliant, at the moment it's not possible to use Cassandra as the backend for your Play Framework models.
There are a few projects trying to implement NoSQL support (although not explicitly Cassandra) in the Play Framework; have a look at Siena.
Also this SO question might be a useful reference.

Reasons to use a persistence framework in Java EE?

I'm working on a Java EE application, and I use Spring as the framework.
Now I've seen people talking about ORM frameworks (Hibernate/JPA/iBatis...), but I don't know what the reasons to use those frameworks might be.
I mean, what will those frameworks change in terms of the project's functionality and performance?
If you can give me a clear example, that would be great.
Because you will get bored writing the SQL insert/update/select statements for entire Java objects, and keeping the object <-> SQL code in shape whenever your objects change. JPA is actually a part of the Java EE standard.
However, it will not free you from knowing what you are doing with the database, except in very simple cases. My experience is that any JPA framework adds just another layer of complexity to tracking down and debugging performance issues.
In the end, you might end up needing to understand how JPQL (the SQL-ish query language for JPA) translates into SQL for every combination of JPA provider (OpenJPA, Hibernate, EclipseLink, ...) and database implementation. This is non-trivial.
If you have no specific performance requirements and just want easy object persistence, give it a go! Hibernate seems to be the state of the art at the moment.
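To make the "no hand-written SQL" point concrete, here is a minimal Kotlin sketch using plain JPA. The Customer entity and the "demo-unit" persistence unit are made up, and a real setup needs a persistence.xml and a JPA provider on the classpath.

```kotlin
import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.Id
import javax.persistence.Persistence

// Made-up entity; the provider maps it to a table without any hand-written DDL/DML here.
@Entity
open class Customer {
    @Id @GeneratedValue
    var id: Long? = null
    var name: String = ""
}

fun main() {
    // "demo-unit" is a placeholder persistence unit name from persistence.xml.
    val emf = Persistence.createEntityManagerFactory("demo-unit")
    val em = emf.createEntityManager()
    try {
        em.transaction.begin()
        em.persist(Customer().apply { name = "Alice" }) // the provider emits the INSERT
        em.transaction.commit()
    } finally {
        em.close()
        emf.close()
    }
}
```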
To avoid writing your own SQL for everything, and to [partially] bridge the object-relational gulf ("abyss").
For simple requirements, ORMs are great, and they make some of the DB-related thinking go away, with the caveat that you still need to be aware of what's actually happening on the DB side to prevent what can be serious performance problems.
For complicated requirements, you'll learn to understand why they call ORMs the "Vietnam of computer science"... "We have learned the lessons of Vietnam... do not go to Vietnam."

Migrating Pro*COBOL and Pro*C to Java: Is JDBC the way to go?

I am migrating Pro*COBOL and Pro*C (code with embedded SQL) to Java.
Am I right that I should migrate all of the embedded SQL to JDBC calls?
Or is there a sort of "Pro*Java" way that Oracle would recommend? What is the usual best practice?
Yes.
There was (or is?) SQLJ for embedding SQL into Java, but I have never seen that in use anywhere.
Everything SQL-based in Java goes via JDBC.
A usual practice (not sure if a "best practice") is to abstract even further and use an ORM and some kind of persistence API.
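For a sense of what that looks like, here is a minimal sketch (in Kotlin, though the JDBC calls are identical from Java) of what a single embedded-SQL lookup typically becomes. The URL, credentials, and the classic scott/tiger EMP table are placeholders.

```kotlin
import java.sql.DriverManager

// Roughly what an embedded "EXEC SQL SELECT ename INTO :name FROM emp WHERE empno = :id"
// becomes as a plain JDBC call. URL, credentials, and the demo schema are placeholders.
fun findEmployeeName(empId: Int): String? =
    DriverManager
        .getConnection("jdbc:oracle:thin:@//localhost:1521/ORCL", "scott", "tiger")
        .use { conn ->
            conn.prepareStatement("SELECT ename FROM emp WHERE empno = ?").use { stmt ->
                stmt.setInt(1, empId)
                stmt.executeQuery().use { rs ->
                    if (rs.next()) rs.getString("ename") else null
                }
            }
        }
```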
As there is no easy way to migrate C, or worse, COBOL, to Java, you will be doing a lot of rewriting anyway, so using JDBC with your existing SQL is probably the easiest way to go.
Another poster mentioned SQLJ, which is a possibility, but I don't think it really gains you anything, as you will be doing so much refactoring anyway. However, if you are happy with the whole pre-compiler thing, then it will work! (At least for Oracle or DB2; support is patchy for the freebie databases.)
