I have a pretty small OSGi application running in Apache Karaf, and it works fine on OrientDB version 2.1.11. But I ran into the following error when I tried to start it on the new 2.2.3 version:
Caused by: com.orientechnologies.orient.core.exception.ODatabaseException: Error on opening database 'remote:/localhost/testdb'
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.<init>(ODatabaseDocumentTx.java:187)
at com.orientechnologies.orient.core.db.OPartitionedDatabasePool$DatabaseDocumentTxPolled.<init>(OPartitionedDatabasePool.java:118)
at com.orientechnologies.orient.core.db.OPartitionedDatabasePool$DatabaseDocumentTxPolled.<init>(OPartitionedDatabasePool.java:114)
at com.orientechnologies.orient.core.db.OPartitionedDatabasePool.initQueue(OPartitionedDatabasePool.java:442)
This error occurs at application install time, when the app tries to initialise the DB connection pool. After some debugging I found that the root problem is in the registerEngines() method of the class com.orientechnologies.orient.core.Orient, where the OrientDB class loader tries to find all implementations of the OEngine interface:
Iterator<OEngine> engines = OClassLoaderHelper.lookupProviderWithOrientClassLoader(OEngine.class, classLoader);
This call returns only 2 implementations, plocal and memory (located in the same bundle as registerEngines() - orientdb-core). The remote engine (com.orientechnologies.orient.client.remote.OEngineRemote) is located in the orientdb-client bundle, and the OrientDB class loader can't find it.
Here is the code fragment that creates the DB connection pool and produces the described error:
log.info("Creating OrientDB connection pool");
OGlobalConfiguration.NETWORK_BINARY_DNS_LOADBALANCING_ENABLED.setValue(true);
OPartitionedDatabasePool connectionPool = new OPartitionedDatabasePool(url, user, password);
log.info(String.format("Created OrientDB connection pool: url=%s", url));
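A workaround that is sometimes suggested for OSGi environments is to register the remote engine explicitly before creating the pool, so you don't depend on the service-loader lookup in registerEngines(). A minimal sketch, assuming Orient.instance().registerEngine(...) is available in your 2.2.x version and the orientdb-client bundle is resolvable:

import com.orientechnologies.orient.client.remote.OEngineRemote;
import com.orientechnologies.orient.core.Orient;
import com.orientechnologies.orient.core.db.OPartitionedDatabasePool;

// Register the remote engine by hand so the pool can resolve the
// "remote:" URL scheme even though the class-loader lookup misses it.
Orient.instance().registerEngine(new OEngineRemote());

OPartitionedDatabasePool connectionPool =
    new OPartitionedDatabasePool(url, user, password);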
I am quite new to Groovy and confused about the java.sql.SQLException that I'm getting.
Here is my code:
// https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc
@Grapes(
    @Grab(group='com.microsoft.sqlserver', module='mssql-jdbc', version='7.2.2.jre8')
)
import groovy.sql.*
def username = 'xxx', password = 'yyy'
// Create connection to MSSQL with classic JDBC DriverManager.
def db = Sql.newInstance("jdbc:sqlserver://localhost:1433;database=tempdb;", username, password, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')
The same JDBC connection string ("jdbc:sqlserver://localhost:1433;database=tempdb;", 'com.microsoft.sqlserver.jdbc.SQLServerDriver') works just fine in my other cases, like Java or JMeter, but does not work with Groovy. This is what I'm getting:
> groovy groovy-sql-test.groovy
Picked up _JAVA_OPTIONS: -Xms512M -Xmx1g
Caught: java.sql.SQLException: No suitable driver found for jdbc:sqlserver://localhost:1433;database=tempdb;
java.sql.SQLException: No suitable driver found for jdbc:sqlserver://localhost:1433;database=tempdb;
at groovy-sql-0.run(groovy-sql-0.groovy:11)
This is running under Win10.
I've also tried
groovy -cp D:\path\to\my\jars groovy-sql-test.groovy
where the D:\path\to\my\jars directory contains both the sqljdbc41.jar and mssql-jdbc-7.2.2.jre8.jar files.
You have to use @GrabConfig(systemClassLoader=true) to use the system class loader and thus get the JDBC driver picked up.
From https://groovy-lang.org/databases.html#_connecting_using_grab
The #GrabConfig statement is necessary to make sure the system
classloader is used. This ensures that the driver classes and system
classes like java.sql.DriverManager are in the same classloader.
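Applied to the script above, that would look roughly like this (a sketch; the quoted credentials are still placeholders):

// https://mvnrepository.com/artifact/com.microsoft.sqlserver/mssql-jdbc
@GrabConfig(systemClassLoader=true) // load the driver on the system class loader
@Grab(group='com.microsoft.sqlserver', module='mssql-jdbc', version='7.2.2.jre8')
import groovy.sql.Sql

def username = 'xxx', password = 'yyy' // placeholders
// DriverManager can now see the driver, since both live in the system class loader.
def db = Sql.newInstance("jdbc:sqlserver://localhost:1433;database=tempdb;", username, password, 'com.microsoft.sqlserver.jdbc.SQLServerDriver')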
According to https://www.testcontainers.org/modules/databases/jdbc/#database-containers-launched-via-jdbc-url-scheme, I'm trying to create an Oracle container with Quarkus using the JDBC URL scheme.
After providing a valid Docker image ("store/oracle/database-instantclient:12.2.0.1") and setting these properties:
"%test":
quarkus:
datasource:
jdbc:
driver: org.testcontainers.jdbc.ContainerDatabaseDriver
url: jdbc:tc:oracle:///databasename
db-kind: other
I get this error:
Container is started (JDBC URL: jdbc:oracle:thin:system/oracle@localhost:32827:xe)
2020-11-09 17:33:06,719 INFO [🐳 .2.0.1]] (Agroal_13889837441) Container store/oracle/database-instantclient:12.2.0.1 started in PT4M7.8483772S
2020-11-09 17:33:06,738 WARN [io.agr.pool] (Agroal_13889837441) Datasource '<default>': Could not create new connection
2020-11-09 17:33:06,805 ERROR [io.qua.application] (main) Failed to start application (with profile test): org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to obtain connection from database: Could not create new connection
--------------------------------------------------------------------------
SQL State : null
Error Code : 0
Message : Could not create new connection
at org.flywaydb.core.internal.jdbc.JdbcUtils.openConnection(JdbcUtils.java:65)
at org.flywaydb.core.internal.jdbc.JdbcConnectionFactory.<init>(JdbcConnectionFactory.java:80)
at org.flywaydb.core.Flyway.execute(Flyway.java:453)
at org.flywaydb.core.Flyway.migrate(Flyway.java:158)
Can someone help me?
If I remember correctly, the Flyway Community edition (which ships with Quarkus) does not support Oracle. You need to use the Enterprise edition.
You need to replace the Quarkus Flyway dependency: just exclude the included flyway-core and add the Enterprise one (in either Maven or Gradle).
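In Maven, that could look roughly like this (a sketch: the Enterprise coordinates and version are assumptions, and the Enterprise artifact requires access to Flyway's commercial repository):

<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-flyway</artifactId>
  <exclusions>
    <!-- exclude the bundled Community edition -->
    <exclusion>
      <groupId>org.flywaydb</groupId>
      <artifactId>flyway-core</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- assumed Enterprise coordinates; check your Flyway license and repository -->
<dependency>
  <groupId>org.flywaydb.enterprise</groupId>
  <artifactId>flyway-core</artifactId>
  <version><!-- your licensed Flyway version --></version>
</dependency>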
I am getting the error "Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1" while creating a state store in my Kafka Streams application. Here is the complete stack trace:
[2016-08-30 12:43:09,408] ERROR [StreamThread-1] User provided listener org.apache.kafka.streams.processor.internals.StreamThread$1 for group string-monitor failed on partition assignment (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:71)
at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:86)
at org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:550)
at org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:577)
at org.apache.kafka.streams.processor.internals.StreamThread.access$000(StreamThread.java:68)
at org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:123)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:222)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:232)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$1.onSuccess(AbstractCoordinator.java:227)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.RequestFuture$2.onSuccess(RequestFuture.java:182)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:436)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupResponseHandler.handle(AbstractCoordinator.java:422)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)
at org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
at org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
at org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:345)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:977)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:937)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)
... 1 more
Caused by: java.io.IOException: Failed to lock the state directory: /tmp/kafka-streams/string-monitor/0_1
at org.apache.kafka.streams.processor.internals.ProcessorStateManager.<init>(ProcessorStateManager.java:95)
at org.apache.kafka.streams.processor.internals.AbstractTask.<init>(AbstractTask.java:69)
... 32 more
And I create a state store as below:
StateStoreSupplier avgStore = Stores.create("avgStore")
.withKeys(Serdes.String())
.withValues(Serdes.String())
.persistent()
.build();
Any idea of how to fix this?
Did you configure multiple threads within your application instance? If so, it may be due to a known issue in older versions of Kafka: the consumer used internally by Kafka Streams can take too long to rebalance, causing it to be kicked out of the consumer group (and hence triggering another group rebalance) while it is still working through the first rebalance.
The following error messages in your stacktrace indicate that you are actually running into the problem I described above:
Exception in thread "StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Failed to rebalance
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:299)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error while creating the state manager
The problem is summarized in this Apache Kafka ticket:
https://issues.apache.org/jira/browse/KAFKA-3758
A recent change to the underlying Kafka consumer client fixes this issue:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-62%3A+Allow+consumer+to+send+heartbeats+from+a+background+thread
However, it is not included in an official Kafka release yet and, as of today, is available only in Apache Kafka trunk. If you are able to run your app against Kafka trunk, you can verify whether this problem has gone away.
I also saw this problem when the user didn't have permission to write to the default state.dir.
When I changed the following property to a directory with good permissions, everything was fine:
property.put(StreamsConfig.STATE_DIR_CONFIG, "{goodDir}");
This was observed in 0.10.2
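For context, here is a minimal configuration sketch (the application id matches the question; the broker address and state directory are placeholders):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "string-monitor");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Point the state directory at a location this process can write to,
// instead of the default under /tmp/kafka-streams.
props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams");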
Just a note for those who receive the same Failed to lock the state directory exception and use Spring for Apache Kafka (spring-kafka) (example from the Spring docs):
It took me a while to figure out from the source code that Spring starts your built Streams automatically, so there is no need for you to manually do something like:
KafkaStreams streams = new KafkaStreams(builder, config);
streams.start();
I did that and faced the aforementioned exception because, not surprisingly, there were two KafkaStreams instances running: one created by Spring and another by me.
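For reference, with spring-kafka you typically just declare the topology as a bean and let the framework manage the KafkaStreams lifecycle; a minimal sketch, assuming an @EnableKafkaStreams configuration and hypothetical topic names:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

@Configuration
@EnableKafkaStreams
public class TopologyConfig {

    // Spring's StreamsBuilderFactoryBean builds and starts the single
    // KafkaStreams instance behind the scenes; do not call start() yourself.
    @Bean
    public KStream<String, String> topology(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("input-topic");
        stream.to("output-topic");
        return stream;
    }
}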
I'm doing some testing and got the following exception:
java.lang.IllegalArgumentException: Invalid connection URL url dbc:h2:db/test
at org.mariadb.jdbc.JDBCUrl.parse(JDBCUrl.java:144)
at org.mariadb.jdbc.Driver.connect(Driver.java:95)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
The code I'm using:
Class.forName("org.h2.Driver"); //load h2 driver
String connectionUrl = "jdbc:h2:db/test";
Connection conn = DriverManager.getConnection(connectionUrl, "sa", "");
I also test something related to MariaDB/MySQL, so the MariaDB driver is on the classpath in addition to the H2 driver (Eclipse project). If I remove the MariaDB driver from the classpath, the connection works.
To my knowledge it should be possible to have multiple JDBC drivers on the classpath, or have I misunderstood something?
(h2 is version 1.3.176 and mariadb-java-client is 1.2.0)
EDIT: Using mariadb-java-client 1.2.2 fixes the problem.
I can confirm that this was a bug in the mariadb-jdbc-driver:
https://mariadb.atlassian.net/plugins/servlet/mobile#issue/CONJ-167
I would assume this was some kind of bug, since I have not seen the problem after updating to mariadb-java-client 1.2.2.
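For background, DriverManager asks each registered driver in turn, and a well-behaved driver declines URLs it does not understand instead of throwing, which is why multiple JDBC drivers can normally coexist on the classpath. A small diagnostic sketch (the class name is mine; it assumes both drivers are on the classpath):

import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class DriverCheck {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:h2:db/test";
        // A driver should return false here for URLs it does not handle;
        // mariadb-java-client 1.2.0 instead failed while parsing foreign URLs.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver d = drivers.nextElement();
            System.out.println(d.getClass().getName()
                + " accepts " + url + ": " + d.acceptsURL(url));
        }
    }
}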
I am using Hadoop-0.20.0 and Hive-0.8.0. Now I have data in a Hive table and I want to generate reports from it. For that I am using iReport-4.5.0, and I also downloaded HivePlugin-0.5.nbm in iReport.
Now I am trying to create a Hive connection in iReport.
Create New Data source --> New --> Hive Connection
JDBC Driver: org.apache.hadoop.hive.jdbc.HiveDriver
JDBC URL: jdbc:hive//localhost:10000/default
Server Address: localhost
Database: default
user name: root
password: somepassword
Then click on Test connection button.
I am getting an error like:
Exception
Message:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
Level:
SEVERE
Stack Trace:
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException:
java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:226)
org.apache.hadoop.hive.jdbc.HiveConnection.<init>(HiveConnection.java:72)
org.apache.hadoop.hive.jdbc.HiveDriver.connect(HiveDriver.java:110)
com.jaspersoft.ireport.designer.connection.JDBCConnection.getConnection(JDBCConnection.java:140)
com.jaspersoft.ireport.hadoop.hive.connection.HiveConnection.getConnection(HiveConnection.java:48)
com.jaspersoft.ireport.designer.connection.JDBCConnection.test(JDBCConnection.java:447)
com.jaspersoft.ireport.designer.connection.gui.ConnectionDialog.jButtonTestActionPerformed(ConnectionDialog.java:335)
com.jaspersoft.ireport.designer.connection.gui.ConnectionDialog.access$300(ConnectionDialog.java:43)
Can anyone help me with this? Where am I going wrong, or what am I missing?
"I also download HivePlugin-0.5.nbm in iReport."
This isn't clear. iReport 4.5 has the Hadoop Hive connector pre-installed. Why did you download the connector separately? Did you install this plugin?
Create New Data source --> New --> Hive Connection
JDBC Driver: org.apache.hadoop.hive.jdbc.HiveDriver
...
This isn't possible with the current Hadoop Hive connector. When you create a new "Hadoop Hive Connection" you are given only one parameter to fill out: the url.
I'm guessing that you created a JDBC connection when you meant to create a Hadoop Hive connection. This is a logical thing to do. Hive is accessed via JDBC. But the Hive JDBC driver is still pretty new. It has a number of shortcomings. That's why the Hive connector was added to iReport. It is based on the Hive JDBC driver, but it includes a wrapper around it to avoid some problems.
Or maybe you installed an old Hive connector over the top of the one that's already included with iReport 4.5. At some point in the past the Hive connector let you fill in extra information like the JDBC Driver.
Start with a fresh iReport installation, and make sure you use the Hadoop Hive Connection. That should clear it up.
The error "java.lang.RuntimeException: Illegal Hadoop Version: Unknown (expected A.B.* format)" happens because the VersionInfo class in hadoop-common.jar attempts to locate the version info using the current thread's class loader.
https://github.com/apache/hadoop/blob/release-2.6.0/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java#L41-L58
The code in question looks like this...
package org.apache.hadoop.util;
...
public class VersionInfo {
  ...
  protected VersionInfo(String component) {
    info = new Properties();
    String versionInfoFile = component + "-version-info.properties";
    InputStream is = null;
    try {
      is = Thread.currentThread().getContextClassLoader()
          .getResourceAsStream(versionInfoFile);
      if (is == null) {
        throw new IOException("Resource not found");
      }
      info.load(is);
    } catch (IOException ex) {
      LogFactory.getLog(getClass()).warn("Could not read '" +
          versionInfoFile + "', " + ex.toString(), ex);
    } finally {
      IOUtils.closeStream(is);
    }
  }
  ...
}
If your tool attempts to connect to the data source from a separate thread whose context class loader cannot see hadoop-common.jar, it will generate this error.
The easiest way to work around the issue is to put the hadoop-common.jar library in $JAVA_HOME/lib/ext, or to use the command-line setting -Djava.endorsed.dirs to point at the hadoop-common.jar library. Then the thread's class loader will always be able to find this information.
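Alternatively, if you control the connecting thread, you can give it a context class loader that can see hadoop-common.jar before opening the connection; a hedged sketch (the connection code itself is elided):

import org.apache.hadoop.util.VersionInfo;

// Give the connecting thread a context class loader that can see
// hadoop-common.jar, so VersionInfo can read *-version-info.properties.
Thread connector = new Thread(new Runnable() {
    public void run() {
        // ... open the Hive JDBC connection here ...
    }
});
connector.setContextClassLoader(VersionInfo.class.getClassLoader());
connector.start();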