Hibernate TypeBootstrapContext not found - spring

I am trying to map a Java enum to a PostgreSQL enum in a Spring app. I am doing exactly the same things as Vlad did in his tutorial (section Mapping a Java Enum to a database-specific Enumerated column type).
So I've imported the hibernate-types-55 artifact and added
@TypeDef(
    name = "pgsql_enum",
    typeClass = PostgreSQLEnumType.class
)
above the entity class, then added
@Enumerated(EnumType.STRING)
@Column(name = "column_name",
        columnDefinition = "some_enum",
        nullable = false)
@Type(type = "pgsql_enum")
private SomeEnum someProperty;
and finally added a column of the newly created database enum type, whose values correspond to the values of the Java enum.
But I am getting
java.lang.ClassNotFoundException: org.hibernate.type.spi.TypeBootstrapContext
while trying to start the application on WildFly.
The whole Maven build completes successfully and all tests pass, so everything looks okay except for this exception, which prevents the app from starting on the server.
The Hibernate core version I am using is 5.2.10.Final.

I've imported the hibernate-types-55 artifact
As stated in the documentation, you should use:
<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>2.12.0</version>
</dependency>
for the Hibernate 5.2 branch.
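Put together, a minimal sketch of the mapping with the matching artifact on the classpath (the entity, table, and id field here are illustrative, built around the names from the question; PostgreSQLEnumType ships with hibernate-types-52):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EnumType;
import javax.persistence.Enumerated;
import javax.persistence.Id;
import javax.persistence.Table;
import org.hibernate.annotations.Type;
import org.hibernate.annotations.TypeDef;
import com.vladmihalcea.hibernate.type.basic.PostgreSQLEnumType;

// Hypothetical entity showing the question's mapping with hibernate-types-52.
@Entity
@Table(name = "some_table")
@TypeDef(name = "pgsql_enum", typeClass = PostgreSQLEnumType.class)
public class SomeEntity {

    @Id
    private Long id;

    @Enumerated(EnumType.STRING)
    @Column(name = "column_name", columnDefinition = "some_enum", nullable = false)
    @Type(type = "pgsql_enum")
    private SomeEnum someProperty;
}

The hibernate-types-55 artifact targets newer Hibernate releases that include org.hibernate.type.spi.TypeBootstrapContext, which Hibernate 5.2.10.Final does not have; switching to hibernate-types-52 removes that runtime requirement.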

Related

Apache Ignite repository save method is doing only UPDATE instead of INSERT

I'm developing a Spring Boot + Impala app using Apache Ignite as a cache store.
The problem is that IgniteRepository.save(key, entity) only runs an UPDATE query instead of an INSERT.
pom.xml
<ignite.version>2.14.0</ignite.version>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring-data-ext</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-spring</artifactId>
    <version>${ignite.version}</version>
</dependency>
Ignite configuration:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("springDataNode");
cfg.setPeerClassLoadingEnabled(true);
CacheConfiguration ccfg = new CacheConfiguration("XYZCache");
ccfg.setIndexedTypes(Long.class, XYZ.class);
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
CacheJdbcPojoStoreFactory<Long, XYZ> factory = new CacheJdbcPojoStoreFactory<>();
factory.setDataSourceBean("ImpalaDataSource");
JdbcType jdbcType = new JdbcType();
jdbcType.setCacheName("XYZCache");
jdbcType.setKeyType(Long.class);
jdbcType.setValueType(XYZ.class);
jdbcType.setDatabaseTable("schema.table");
jdbcType.setKeyFields(new JdbcTypeField(Types.BIGINT, "id", Long.class, "id"));
jdbcType.setValueFields(
new JdbcTypeField(Types.VARCHAR, "comments", String.class, "comments"),
new JdbcTypeField(Types.BIGINT, "id", Long.class, "id")
);
factory.setTypes(jdbcType);
ccfg.setCacheStoreFactory(factory);
cfg.setCacheConfiguration(ccfg);
return IgniteSpring.start(cfg, applicationContext);
Ignite repository:
@RepositoryConfig(cacheName = "XYZCache")
public interface XYZRepository extends IgniteRepository<XYZ, Long> {

    @Query("SELECT * FROM XYZ WHERE comments = ?")
    List<XYZ> test(String comments);

    @Query("INSERT INTO XYZ (id, comments) VALUES (?, ?)")
    List<XYZ> customSave(Long id, String comments);
}
POJO:
@Data
public class XYZ implements Serializable {

    private static final long serialVersionUID = -2677636393779376050L;

    @QuerySqlField
    private Long id;

    @QuerySqlField
    private String comments;
}
Calling code:
xyzRepository.save(id, xyz);
xyzRepository.customSave(id, comments);
Both methods throw an error because an UPDATE query is run (instead of an INSERT), which is not supported in Impala and is also not what I intend to do:
Caused by:
org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException:
Failed to update keys (retry update if possible).: [1671548234688] at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1251)
~[ignite-core-2.14.0.jar:2.14.0]
Caused by: org.apache.ignite.IgniteCheckedException: Failed update
entry in database [table=schema.table, entry=Entry [key=1671548234688,
val=pkg.XYZ [idHash=1354181174, hash=991365654, id=1671548234688,
comments=test]]] at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:593)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:6154)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:5918)
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:5603)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4254)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5700(BPlusTree.java:4148)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2226)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:2116)
... 146 common frames omitted
Caused by: javax.cache.integration.CacheWriterException: Failed update entry in database [table=schema.table, entry=Entry
[key=1671548234688, val=pkg.XYZ [idHash=1354181174, hash=991365654,
id=1671548234688, comments=test]]]
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeUpsert(CacheAbstractJdbcStore.java:978)
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.write(CacheAbstractJdbcStore.java:1029)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:585)
... 153 common frames omitted
Caused by: com.cloudera.impala.support.exceptions.GeneralException:
[Cloudera]ImpalaJDBCDriver ERROR processing query/statement.
Error Code: 0, SQL state: TStatus(statusCode:ERROR_STATUS,
sqlState:HY000, errorMessage:AnalysisException: Impala does not
support modifying a non-Kudu table: schema.table ), Query: UPDATE
schema.table SET table.comments = 'test' WHERE (table.id =
1671548234688). ... 163 common frames omitted
What is the issue here? Why is an UPDATE being forced by Apache Ignite? How can I change this behavior?
I also implemented the Persistable interface and overrode isNew() to return true, but it didn't work.
PS: SELECT queries are working fine (findAll, findById, etc.), including the custom test() method, so there is no datasource configuration issue and I am able to connect to Impala.
This is likely because the dialect you are using does not have MERGE support set up. Without MERGE, the default JDBC POJO store's writeUpsert first issues an UPDATE and falls back to an INSERT only when no rows were affected, which is why Impala sees the UPDATE. See here: to understand the flow. This is per the stack trace you posted:
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeUpsert(CacheAbstractJdbcStore.java:978)
at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.write(CacheAbstractJdbcStore.java:1029)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.put(GridCacheStoreManagerAdapter.java:585) ...
Alternatively, you can write your own data store factory.
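A minimal sketch of that alternative, assuming the XYZ POJO from the question and a store constructed with the Impala DataSource; write() issues a plain INSERT so no UPDATE ever reaches Impala (the class name, SQL, and wiring are illustrative, not the only way to do it):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.cache.integration.CacheWriterException;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;

// Hypothetical store that always INSERTs, since Impala cannot UPDATE non-Kudu tables.
public class ImpalaXyzStore extends CacheStoreAdapter<Long, XYZ> {

    private final DataSource dataSource; // e.g. the ImpalaDataSource bean

    public ImpalaXyzStore(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public XYZ load(Long key) throws CacheLoaderException {
        // Read-through load by id, omitted for brevity.
        return null;
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends XYZ> entry) throws CacheWriterException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO schema.table (id, comments) VALUES (?, ?)")) {
            ps.setLong(1, entry.getKey());
            ps.setString(2, entry.getValue().getComments());
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new CacheWriterException("Failed to insert entry " + entry.getKey(), e);
        }
    }

    @Override
    public void delete(Object key) throws CacheWriterException {
        // Impala cannot DELETE from non-Kudu tables either, so deletes are rejected.
        throw new CacheWriterException("delete is not supported for this table");
    }
}

You would then set it on the cache configuration in place of CacheJdbcPojoStoreFactory, e.g. via ccfg.setCacheStoreFactory(...) with a javax.cache.configuration.Factory that constructs the store with the Impala DataSource.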

Can't use the added function in Repository

I'm using Spring Data Neo4j 2.4.4 for my project. These are some of the classes in my project:
User:
UserRepository:
I can still use the built-in repository methods such as save(), findAll(), etc., but when I add my own methods, for example existsByUsername, I get this error:
11:53:26.562 [http-nio-8081-exec-1] WARN o.s.d.n.c.Neo4jPersistenceExceptionTranslator - Don't know how to translate exception of type class org.neo4j.driver.exceptions.NoSuchRecordException
Then I tried adding a query to the method, but the error is still there.
Could you help me determine the cause of this error and suggest a solution? Thank you!
Updated:
When I call the API in Postman, I receive this result, although my DB has only one user:
{
    "error": "Records with more than one value cannot be converted without a mapper.; nested exception is java.lang.IllegalArgumentException: Records with more than one value cannot be converted without a mapper."
}
As your exception states, no record is returned from Neo4j and thus it cannot be mapped to a Boolean.
The best approach would be to use Optional<User> and check with isPresent():
@Query("MATCH (n:User {username: $username}) RETURN n")
Optional<User> existsForUsername(String username);
That said, this is already handled by Spring Data without a custom query:
boolean existsByUsername(String username);
Reference : https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/#appendix.query.method.subject
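For completeness, a minimal sketch of the repository combining both approaches (the User entity and its username property are assumptions, since the question's classes were not shown):

import java.util.Optional;
import org.springframework.data.neo4j.repository.Neo4jRepository;
import org.springframework.data.neo4j.repository.query.Query;

// Hypothetical repository; User is assumed to be a @Node entity with a username property.
public interface UserRepository extends Neo4jRepository<User, Long> {

    // Derived query: Spring Data Neo4j generates the exists check itself.
    boolean existsByUsername(String username);

    // Custom-query alternative that returns the node and lets you test isPresent().
    @Query("MATCH (n:User {username: $username}) RETURN n")
    Optional<User> existsForUsername(String username);
}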
There is a rather obscure set of references on the GitHub site for Spring Data Neo4j: here
It seems that the existsBy subject keyword was omitted. It is being fixed, but whether that has made it into the Spring Data repository is another matter. You don't say which version of spring-boot-starter-data-neo4j you are using, but you may care to try this one and see if it works:
<!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-data-neo4j -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-neo4j</artifactId>
    <version>2.4.4</version>
</dependency>
In general, the other answers here are right. But if you're using Spring Boot 2.4.4, Neo4j-OGM is no longer supported; it now uses the newer SDN model (or at least it did when it was under development). You may just have a dependency issue. Here's part of my Gradle build file; these should be all the dependencies you need:
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    application
    id("org.springframework.boot") version "2.4.4"
    id("io.spring.dependency-management") version "1.0.10.RELEASE"
}

java.sourceCompatibility = JavaVersion.VERSION_11

configurations {
    compileOnly {
        extendsFrom(configurations.annotationProcessor.get())
    }
}

repositories {
    mavenCentral()
}

dependencies {
    implementation("org.springframework.boot:spring-boot-starter-actuator")
    implementation("org.springframework.boot:spring-boot-starter-data-neo4j")
    implementation("org.springframework.boot:spring-boot-starter-security")
    implementation("org.springframework.boot:spring-boot-starter-web")
}

Connecting Springboot application to Azure databricks

I'm trying to connect a Spring Boot application to Azure Databricks.
Below is what I have tried.
application.properties
spring.datasource.url = jdbc:spark://adb-**********.*.azuredatabricks.net:**/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/******/******-*****-abcd341
spring.datasource.username = username
spring.datasource.password = Generated Token
pom.xml
Below are some dependencies I'm using...
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.5.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.10</artifactId>
    <version>1.5.2</version>
</dependency>
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>spark-avro_2.10</artifactId>
    <version>2.0.1</version>
</dependency>
I'm getting the error below:
***************************
APPLICATION FAILED TO START
***************************
Description:
Cannot determine embedded database driver class for database type NONE
Action:
If you want an embedded database please put a supported one on the classpath. If you have database settings to be loaded from a particular profile you may need to activate it (no profiles are currently active).
Please suggest if I'm missing any Maven dependency.
Thanks in advance.
To connect from Spring Boot you need to use the JDBC driver, not the Spark jars (remove them; you don't need them). You can get the JDBC driver as described in the documentation or, as of recently, directly via Maven using the following coordinates:
<dependency>
    <groupId>com.databricks</groupId>
    <artifactId>databricks-jdbc</artifactId>
    <version>2.6.25-1</version>
</dependency>
and then use the standard JDBC APIs exposed by Spring. I have a simple example that uses JdbcTemplate to access data in Databricks; you just need to construct the JDBC URL correctly:
String host = "";
String httpPath = "";
String token = "";

String jdbcUrl = "jdbc:databricks://" + host +
        ":443/default;transportMode=http;ssl=1;httpPath=" +
        httpPath + ";AuthMech=3;UID=token;PWD=" + token;
and then just access data:
import java.util.List;
import java.util.Map;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.SimpleDriverDataSource;
import com.databricks.client.jdbc.Driver;

// define the data source backed by the Databricks JDBC driver
SimpleDriverDataSource ds = new SimpleDriverDataSource();
ds.setDriver(new Driver());
ds.setUrl(jdbcUrl);
JdbcTemplate jdbcTemplate = new JdbcTemplate(ds);

// query data and iterate over the returned rows
List<Map<String, Object>> data = jdbcTemplate.queryForList(query);
for (Map<String, Object> row : data) {
    // ... process each row
}
P.S. You may omit the username, or at least set it to the token value...
Try adding the spring.datasource.driverClassName property, and let me know if that helps you to proceed.
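For example, a sketch of the corresponding application.properties, assuming the databricks-jdbc artifact from the answer above is on the classpath (the class name below is the driver class that ships with it; the placeholders are yours to fill in):

spring.datasource.driver-class-name = com.databricks.client.jdbc.Driver
spring.datasource.url = jdbc:databricks://<host>:443/default;transportMode=http;ssl=1;httpPath=<http-path>;AuthMech=3;UID=token;PWD=<personal-access-token>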

LocalDateTime mapped to Oracle DATE, but not to H2 DATE

Let's say I have a JPA @Embeddable:
@Embeddable
public class SpecificationValidity {

    @Column(name = "VALID_FROM", nullable = false)
    private final LocalDateTime validFrom;

    @Column(name = "VALID_TO")
    private final LocalDateTime validTo;
}
The SQL table contains the columns VALID_FROM and VALID_TO and is declared using a Liquibase changeset as follows:
<column name="VALID_FROM" type="date">
    <constraints nullable="false"/>
</column>
<column name="VALID_TO" type="date"/>
When I run this code against an Oracle database, everything works.
When I run it against an H2 database, I get: Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: wrong column type encountered in column [valid_from] in table [specification]; found [date (Types#DATE)], but expecting [timestamp (Types#TIMESTAMP)]
Why is that?
Is it possible to have consistent mapping for both dbms?
I assume you use Hibernate (judging from your exception message). Since you are using Java 8 or above, you might need to add this dependency for Hibernate 5.0.x:
<!-- https://mvnrepository.com/artifact/org.hibernate/hibernate-java8 -->
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-java8</artifactId>
    <version>5.3.7.Final</version>
</dependency>
This helps convert back and forth between the Java 8 date/time types and the types JPA knows. In this case it covers LocalDateTime, LocalDate, Instant, etc.
I'm including the mapping that comes along with it (referred to in the article as well).
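In outline (per the Hibernate user guide linked below), the Java 8 types map to these JDBC types:

LocalDate      -> DATE
LocalTime      -> TIME
LocalDateTime  -> TIMESTAMP
Instant        -> TIMESTAMP
OffsetDateTime -> TIMESTAMP
ZonedDateTime  -> TIMESTAMP
Duration       -> BIGINT

This is why Hibernate expects TIMESTAMP for LocalDateTime and rejects the DATE column that H2 creates from the Liquibase "date" type.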
A reference article: Hibernate + Java 8 date and time
P.S.: For Hibernate 5.2.x and above there is no need for this explicit dependency.
http://docs.jboss.org/hibernate/orm/5.2/userguide/html_single/Hibernate_User_Guide.html#basic-datetime

Editing the name parameter of @javax.persistence.Entity through an external JAXB binding

I have the following setup:
B1.xsd and B2.xsd both import A.xsd. Using maven-hyperjaxb3-plugin I created Java classes with JPA annotations for both B1.xsd and B2.xsd, so the classes of A.xsd are created in the project of B1.xsd as well as in the project of B2.xsd.
In order to use these two sets of classes in one persistence unit, I set the database schema on each entity through a JAXB external binding, as shown in Editing @javax.persistence.Table in an external JAXB binding.
The problem is that, after deploying to WildFly, it throws org.hibernate.DuplicateMappingException: duplicate import: B1_ClassName refers to both B1_ClassName and B2_ClassName (try using auto-import="false").
So what I need to do is edit the name parameter of the Entity annotation through a JAXB external binding so that
@XmlRootElement(name = "B1_Element1")
@Immutable
@Cacheable(true)
@Entity(name = "B1_Element1")
@Table(name = "B1_Element1")
public class B1_Element1
    implements Serializable, Equals, HashCode, ToString
{
    ...
}
will look like
@XmlRootElement(name = "B1_Element1")
@Immutable
@Cacheable(true)
@Entity(name = "PACKAGE_NAME.B1_Element1")
@Table(name = "B1_Element1")
public class B1_Element1
    implements Serializable, Equals, HashCode, ToString
{
    ...
}
My current bindings-xjc.xjb looks like this:
<jaxb:globalBindings localScoping="toplevel">
    <xjc:serializable />
</jaxb:globalBindings>

<jaxb:bindings schemaLocation="B1.xsd" node="/xs:schema">
    <hj:persistence>
        <hj:default-generated-id name="Hjid">
            <orm:generated-value strategy="IDENTITY" />
        </hj:default-generated-id>
        <hj:default-entity>
            <orm:table schema="B1_database_schema" />
        </hj:default-entity>
    </hj:persistence>
    <jaxb:schemaBindings>
        <jaxb:package name="b1.package.name" />
    </jaxb:schemaBindings>
</jaxb:bindings>
Does anybody have an idea how I can edit the name parameter of @javax.persistence.Entity?
Disclaimer: I am the author of Hyperjaxb.
The answer is that you should not need to customize this; if you need to customize this, something is wrong.
The problem you're facing arises because you generate two sets of classes for your A.xsd schema, probably in different packages. This can be the case if you either have a chameleon schema (A.xsd has no target namespace) or you simply compile it twice because you have B1.xsd and B2.xsd.
The correct solution is not to compile A.xsd twice. I hope you don't have a chameleon schema (this is a very bad design pattern for JAXB). In that case you can either compile A.xsd, B1.xsd and B2.xsd together, or you can compile all of them separately: compile A.xsd first and then use it as an episode in B1 and B2. See Using Episodes for how this works.
In any case you should not produce different packages for A.xsd classes.
To answer your specific question - try customizing your complex types with:
<hj:entity name="MyUniqueName"/>
I think this should override the automatically generated name. However that's not the way to go.
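Applied through the binding file, that customization would look something like this sketch (the complex type name Element1 and the XPath are assumptions based on the question's schema):

<jaxb:bindings schemaLocation="B1.xsd" node="/xs:schema">
    <jaxb:bindings node="//xs:complexType[@name='Element1']">
        <hj:entity name="MyUniqueName"/>
    </jaxb:bindings>
</jaxb:bindings>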
P.S. Here's a test project for episodes:
https://github.com/highsource/hyperjaxb3/tree/master/ejb/tests/episodes