This is my current repo structure. I'm looking for a solution that works with both Postgres and OracleDB and preferably does not involve changing my DB schema to accommodate the ORM. Whether Postgres or Oracle is used is defined by the spring.datasource.url in the application.properties file.
data class NewsCover(
    @Id val tenantId: TenantId,
    val openOnStart: Boolean,
    val cycleDelay: Int,
    @MappedCollection(idColumn = "tenant_id", keyColumn = "tenant_id")
    val sections: Set<NewsCoverSection>,
)

data class NewsCoverSection(
    @Id val id: NewsCoverSectionId,
    val title: String,
    val pinnedOnly: Boolean,
    val position: Int,
    val tenantId: TenantId,
    ... some other fields ...
)
interface NewsCoverRepo : CrudRepository<NewsCover, TenantId> { ... }
This works just fine with PostgreSQL, but produces errors when used with Oracle:
SELECT "NEWS_COVER_SECTION"."ID" AS "ID", "NEWS_COVER_SECTION"."TITLE" AS "TITLE", "NEWS_COVER_SECTION"."POSITION" AS "POSITION", "NEWS_COVER_SECTION"."TENANT_ID" AS "TENANT_ID", "NEWS_COVER_SECTION"."PINNED_ONLY" AS "PINNED_ONLY"
FROM "NEWS_COVER_SECTION"
WHERE "NEWS_COVER_SECTION"."tenant_id" = ?
Note the quoted idColumn/keyColumn names from the @MappedCollection. They are lower case, which is fine for Postgres but does not work with Oracle. Changing tenant_id to TENANT_ID fixes the problem for Oracle, but breaks Postgres.
What I tried:
A NamingStrategy override for Oracle, but I can't seem to override those quoted identifiers.
Conditional column names in @MappedCollection, but @MappedCollection only accepts compile-time constants and does not support SpEL, so I can't differentiate based on the spring.datasource.url property.
Any ideas how I can get it to query for "news_cover_section"."tenant_id" when the DB is Postgres and "NEWS_COVER_SECTION"."TENANT_ID" when the DB is Oracle?
As you found out, you can disable the behaviour of quoting all names by setting the forceQuote property of the JdbcMappingContext to false.
Alternatively you can create the schema in a consistent way on both databases by quoting the names in your schema creation script.
The first option allows you not to fiddle with the database schema.
But it makes the application depend on avoiding database keywords such as ORDER or USER.
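For the first option, here is a minimal sketch of how the flag can be set (shown in Java; the configuration class name is illustrative, and the exact jdbcMappingContext(...) signature varies between Spring Data JDBC versions — newer versions take an additional RelationalManagedTypes parameter):

import java.util.Optional;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.jdbc.core.convert.JdbcCustomConversions;
import org.springframework.data.jdbc.core.mapping.JdbcMappingContext;
import org.springframework.data.jdbc.repository.config.AbstractJdbcConfiguration;
import org.springframework.data.relational.core.mapping.NamingStrategy;

@Configuration
class MyJdbcConfiguration extends AbstractJdbcConfiguration {

    // Let Spring Data build the mapping context as usual, then switch off
    // identifier quoting so each database applies its own default case rules
    // (lower case on Postgres, upper case on Oracle).
    @Bean
    @Override
    public JdbcMappingContext jdbcMappingContext(Optional<NamingStrategy> namingStrategy,
                                                 JdbcCustomConversions customConversions) {
        JdbcMappingContext context = super.jdbcMappingContext(namingStrategy, customConversions);
        context.setForceQuote(false);
        return context;
    }
}

With quoting disabled, the explicit "tenant_id" from the @MappedCollection is rendered unquoted, and each database folds it to its own default case.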
The second option is arguably the conceptually cleaner one, because it actually uses the same schema (as far as names are concerned) for both databases, which in itself is certainly valuable. But it comes at the cost of quoting names in the schema, because Postgres doesn't adhere to the SQL standard behaviour of treating unquoted names as uppercase.
Note: There is now an issue for supporting SpEL expressions for table and column names.
I'm developing a service that must support a configurable username column on its User entity, i.e. different columns can be treated by the service as the "username" (e.g. the actual username column, the ID column, etc.). It is a strange requirement to have, but legacy support is a strange thing, as you all know by now :)
I've tried to tackle this in the following way: my configuration file contains the name of the column that will be treated as the username, and that value is then used with JPA specifications to find the User (my repository extends JpaSpecificationExecutor).
The code looks something like this:
public UserEntity getUserByUsername(String username) {
    String columnName = configuration.getUsernameColumn();
    return userRepository.findOne((root, query, builder) ->
            builder.and(builder.equal(root.<String>get(columnName), username)));
}
This should work fine... However, there is a catch: the column name is specified as the actual column name in the database, while the JPA specification expects the entity field name, not the database column. My entity is annotated as:
@Column(name = "USER_NAME", length = 100)
private String userName;
So when I try to find the User by searching for "USER_NAME", my code throws an exception because it expects a field named "USER_NAME" on the entity, not a column in the database.
I know that the obvious solution is putting "userName" in the configuration instead, however that is not an option. Another way to do this is by using reflection, but that would be a last-resort approach. Is there a better way to go about this?
Another solution is a criteria query: pass the column and the search string and build the criteria query dynamically.
Reference : https://eng.zemosolabs.com/dynamic-multi-column-search-with-jpa-criteria-5720fedf13d3
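To make the criteria approach work with the database column name from the configuration, one option is to translate that column name into the entity attribute name first and only then build the specification. This is only a rough sketch (the helper class below is illustrative, not from the linked article, and it does lean on a small amount of reflection over the @Column mapping annotations; it uses javax.persistence, swap for jakarta.persistence on newer stacks):

import java.lang.reflect.Field;
import javax.persistence.Column;
import org.springframework.data.jpa.domain.Specification;

public class UserSpecifications {

    // Find the entity field whose @Column name matches the configured database
    // column; fall back to the field name itself when no @Column is present.
    private static String attributeForColumn(Class<?> entityClass, String columnName) {
        for (Field field : entityClass.getDeclaredFields()) {
            Column column = field.getAnnotation(Column.class);
            String mapped = (column != null && !column.name().isEmpty())
                    ? column.name()
                    : field.getName();
            if (mapped.equalsIgnoreCase(columnName)) {
                return field.getName();
            }
        }
        throw new IllegalArgumentException("No attribute mapped to column " + columnName);
    }

    // Build a specification against the resolved attribute, e.g.
    // userRepository.findOne(UserSpecifications.usernameEquals("USER_NAME", "john"))
    public static Specification<UserEntity> usernameEquals(String columnName, String username) {
        String attribute = attributeForColumn(UserEntity.class, columnName);
        return (root, query, builder) -> builder.equal(root.get(attribute), username);
    }
}

The reflective part stays confined to reading the mapping metadata (and could be cached at startup), rather than being used to access values.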
In a Spring Boot project with Java 8, hibernate-spatial and PostgreSQL 9.4:
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-spatial</artifactId>
    <version>5.2.10.Final</version>
</dependency>
application.properties
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.database-platform=org.hibernate.spatial.dialect.postgis.PostgisPG94Dialect
spring.jpa.properties.hibernate.dialect=org.hibernate.spatial.dialect.postgis.PostgisPG94Dialect
(I also tried PostgisPG9Dialect)
My Entity has a property
...
import com.vividsolutions.jts.geom.Point;
....
@Column(columnDefinition = "Point")
private Point cityLocation;
If I save with a null value it's OK, but if I set a value
setCityLocation(new GeometryFactory().createPoint(new Coordinate(lng, lat)));
I have:
PSQLException: ERROR: column "city_location" is of type point but expression is of type bytea You will need to rewrite or cast the expression.
In my db I can see the column definition as
type: point
column size: 2147483647
data type: 1111
num prec radix: 10
char octet length: 2147483647
I'm going crazy... why doesn't it work?
UPDATE (it still doesn't work; I'm collecting new information)
1) I'm thinking the problem could be the creation of the db.
In my application.properties I also have :
spring.jpa.properties.hibernate.hbm2ddl.auto=update
so the schema is updated 'automatically' by Hibernate.
2) I can successfully run a query directly on the db (I use "SQuirreL SQL" as a client):
update my_table set city_location = POINT(-13,23) where id = 1
and if I
select city_location from my_table where id = 1
the answer is
<Other>
I can't see the value... I get the same answer for the record with a null value in the point column...
3) After setting a value in the 'point' column with a query, I'm no longer able to read from the table; I receive the exception:
org.geolatte.geom.codec.WktDecodeException : Wrong symbol at position: 1 in Wkt: (-13.0,23.0)
4) I looked inside hibernate-spatial-5.2.10.Final.jar and found two Geolatte-related classes in the package org.hibernate.spatial:
GeolatteGeometryJavaTypeDescriptor.class
GeolatteGeometryType.class
5) And also (specifically for SQuirreL SQL client experts):
if I try to change the value of a column in "my_table" (not the 'point' city_location but any of the other columns), I receive an error similar to the one I receive in Java when I try to insert a point value:
Exception seen during check on DB. Exception was:
ERROR: operator does not exist: point = character varying
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
SQuirreL is written in Java, so I can accept this strange behaviour; maybe it composes the query in a 'wrong' way, or maybe it is connected to the value I see when I run a select...
Any ideas?
I found the solution!!
A fix to the code was needed, and a magic trick I read in another Stack Overflow question saved my life.
The problem was that the db column had been created in the wrong way:
in the db the column type should be geometry, NOT point.
I removed columnDefinition = "Point" from the @Column annotation and ran the query
CREATE EXTENSION postgis;
on my db following these instructions:
Postgis installation: type "geometry" does not exist
Krishna Sapkota you are my new super hero!
Just remove columnDefinition = "POINT" from the @Column annotation and use the Point object as-is (i.e. use the default column definition).
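For reference, a minimal sketch of the resulting mapping (the entity and id field here are illustrative, not taken from the original question):

import com.vividsolutions.jts.geom.Coordinate;
import com.vividsolutions.jts.geom.GeometryFactory;
import com.vividsolutions.jts.geom.Point;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class City {

    @Id
    @GeneratedValue
    private Long id;

    // No columnDefinition: with the PostGIS dialect, Hibernate Spatial maps the
    // JTS Point to a PostGIS "geometry" column instead of the native "point" type.
    private Point cityLocation;

    public void setCityLocation(double lng, double lat) {
        this.cityLocation = new GeometryFactory().createPoint(new Coordinate(lng, lat));
    }
}

This assumes the postgis extension has already been created in the target database, as described above.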
I have a problem with a query in Doctrine.
I do a findOneByFieldName on my entity, which is supposed to retrieve an object.
I want to findOneByNameSurname (NameSurname is a string) with a parameter that contains, for example, 'BEART Jean-Francois'.
What I want is for Doctrine to retrieve the user BEART Jean-Francois only if exactly 'BEART Jean-Francois' exists in the table. Nevertheless, if I give Doctrine 'BEART Jean-francois' with a lowercase f, it still retrieves the user whose field holds 'BEART Jean-Francois' with an uppercase F.
I want Doctrine to be sensitive to the whole string; it should not treat the two spellings as the same name, because even though it looks like a duplicate, it is written differently (uppercase vs lowercase f).
I tried running an SQL query directly in SQL Developer to test whether Oracle distinguishes between the lowercase and the uppercase f in 'Jean-Francois', and it DOES.
So what am I missing? How can I tell Doctrine: if the string parameter I give you has a lowercase f and you only find an uppercase F in the DB, please don't return anything, it's not a match...
Thanks anyway for your help.
Solved my problem in my Oracle listener...
I had this:
private static $_SQL_SET_SORT = "ALTER SESSION SET NLS_SORT=Latin_AI";
private static $_SQL_SET_COMP = "ALTER SESSION SET NLS_COMP=LINGUISTIC";
Changed it to this, and now Doctrine is case sensitive:
private static $_SQL_SET_SORT = "ALTER SESSION SET NLS_SORT=BINARY_CI";
private static $_SQL_SET_COMP = "ALTER SESSION SET NLS_COMP=BINARY";
See this post for more info:
Case insensitive searching in Oracle
I have a secondary index on an optional column:
class Sessions extends CassandraTable[ConcreteSessions, Session] {
  object matchId extends LongColumn(this) with PartitionKey[Long]
  object userId extends OptionalLongColumn(this) with Index[Option[Long]]
  ...
}
However, the indexedToQueryColumn implicit conversion is not available for optional columns, so this does not compile:
def getByUserId(userId: Long): Future[Seq[Session]] = {
  select.where(_.userId eqs userId).fetch()
}
Neither does this:
select.where(_.userId eqs Some(userId)).fetch()
Or changing the type of the index:
object userId extends OptionalLongColumn(this) with Index[Long]
Is there a way to perform such a query using phantom?
I know that I could denormalize, but it would involve some very messy housekeeping and triple our (substantial) data size. The query usually returns only a handful of results, so I'd be willing to use a secondary index in this case.
Short answer: you cannot use optional fields to run queries in phantom.
Long detailed answer:
But if you really want to work with a secondary index on an optional column, declare the field as an Option in your entity, while the phantom column representation should not be optional, so that it can be queried:
object userId extends LongColumn(this) with Index[Long]
In the fromRow(r: Row) you can create your object like this:
Sessions(matchId(r), Some(userId(r)))
Then in the service part you could do the following:
.value(_.userId, t.userId.getOrElse(0))
You also have a better option: duplicate the table into a new one queried by user id, such as sessions_by_user_id, where user_id is the partition key and match_id the clustering key.
Since user_id is optional, you would end up with a table that contains only valid user ids, which is easy and fast to look up.
Cassandra modelling is driven by your queries, so use that to your advantage.
Take a look at my GitHub project, which shows how to work with multiple queries over the same data:
https://github.com/iamthiago/cassandra-phantom
I use jOOQ 3.1.0 to generate and execute dynamic queries for Oracle and PostgreSQL with Spring 4. In one scenario I have a partitioned table, which I need to query using jOOQ. I use DSL.tableByName(vblTablename), where vblTablename is the string received by the query generation method, e.g. vbl_default partition(p_04-Dec-14). (The vblTablename pattern differs between databases and is configured in an external property file.) jOOQ generates the SQL, but with double quotes around the table name. The query and error are shown below.
Query
SELECT COUNT(ID) COUNT FROM "vbl_default partition(p_04-Dec-14)"
where (rts between timestamp '2014-12-04 00:00:00.0' and timestamp '2014-12-05 00:00:00.0' and userid in (2))
Error
ORA-00972: identifier is too long
00972. 00000 - "identifier is too long"
*Cause: An identifier with more than 30 characters was specified.
*Action: Specify at most 30 characters.
Error at Line: 4 Column: 29
This happens even though I have set the following settings on the DefaultDSLContext:
Settings settings = new Settings();
settings.setRenderNameStyle(RenderNameStyle.AS_IS);
How do I remove the quotes around the table name? Have I missed any other settings?
The idea behind DSL.tableByName(String...) is that you provide a table ... by name :-)
What you're looking for is a plain SQL table, via DSL.table(String).
You can write:
// Assuming this import
import static org.jooq.impl.DSL.*;
DSL.using(configuration)
.select(count(VBL_DEFAULT.ID))
.from(table("vbl_default partition(p_04-Dec-14)"))
.where(...);
Or by using the convenient overload SelectFromStep.from(String)
DSL.using(configuration)
.select(count(VBL_DEFAULT.ID))
.from("vbl_default partition(p_04-Dec-14)")
.where(...);
More information about plain SQL in jOOQ can be obtained from this manual page:
http://www.jooq.org/doc/latest/manual/sql-building/plain-sql/
Partition support
Note that support for Oracle partitions is on the roadmap: #2775. If in the meantime you wish to use partitioned tables more often, you could also write your own function for that:
// Beware of the risk of SQL injection, though!
public <R extends Record> Table<R> partition(Table<R> table, String partition) {
    return DSL.table("{0} partition(" + partition + ")", table);
}
... and then:
DSL.using(configuration)
.select(count(VBL_DEFAULT.ID))
.from(partition(VBL_DEFAULT, "p_04-Dec-14"))
.where(...);