DataTorrent: JDBC operator not working - maven

I am replacing the console operator in WordCountDemo, but it gives me an operatorError in STRAM Events. When I click on it, it shows a NullPointerException. I am very new to DataTorrent.
Here is the complete error message:
Abandoning deployment due to setup failure. java.lang.NullPointerException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:195)
at com.datatorrent.lib.db.jdbc.JdbcStore.connect(JdbcStore.java:163)
at com.datatorrent.lib.db.jdbc.JdbcTransactionalStore.connect(JdbcTransactionalStore.java:118)
at com.datatorrent.lib.db.AbstractTransactionableStoreOutputOperator.setup(AbstractTransactionableStoreOutputOperator.java:94)
at com.datatorrent.lib.db.jdbc.AbstractJdbcTransactionableOutputOperator.setup(AbstractJdbcTransactionableOutputOperator.java:81)
at com.datatorrent.lib.db.jdbc.AbstractJdbcTransactionableOutputOperator.setup(AbstractJdbcTransactionableOutputOperator.java:58)
at com.datatorrent.stram.engine.Node.setup(Node.java:182)
at com.datatorrent.stram.engine.StreamingContainer.setupNode(StreamingContainer.java:1290)
at com.datatorrent.stram.engine.StreamingContainer.access$100(StreamingContainer.java:129)
at com.datatorrent.stram.engine.StreamingContainer$2.run(StreamingContainer.java:1369)

I guess you didn't set the properties that the JDBC operator needs. You need to set the database driver, the database URL, the username, and the password on the operator's store. Here is an example (the driver class shown is MySQL's; substitute your own operator name and values):
dt.operator."your operator name".store.databaseDriver=com.mysql.jdbc.Driver
dt.operator."your operator name".store.databaseUrl=....
dt.operator."your operator name".store.userName=....
dt.operator."your operator name".store.password=....
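Alternatively, you can set the same values on the store in code when you build the application. This is only a rough sketch, assuming the Malhar JdbcTransactionalStore setters; the helper method and the jdbcOut parameter are placeholders, not part of the demo:

import com.datatorrent.lib.db.jdbc.AbstractJdbcTransactionableOutputOperator;
import com.datatorrent.lib.db.jdbc.JdbcTransactionalStore;

public class JdbcStoreConfig {

    // Sketch only: attach a configured JDBC store to whatever JDBC output
    // operator you added to the DAG. Driver, URL and credentials are placeholders.
    static void configureStore(AbstractJdbcTransactionableOutputOperator<?> jdbcOut) {
        JdbcTransactionalStore store = new JdbcTransactionalStore();
        store.setDatabaseDriver("com.mysql.jdbc.Driver"); // driver jar must be on the application classpath
        store.setDatabaseUrl("jdbc:mysql://dbhost:3306/wordcount");
        store.setUserName("dbuser");
        store.setPassword("dbpassword");
        jdbcOut.setStore(store);
    }
}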

Related

After the Spring boot source code is compiled, it starts to report an error in debug mode

I am trying to analyze the source code of Spring Boot.
I compiled the source code of Spring Boot 2.2.9.RELEASE, and no errors were reported. I then used this build as a dependency of my test project, and the test project starts normally in IDEA.
But when I start the project in debug mode in IDEA, it produces an error:
Exception in thread "main" java.lang.NoClassDefFoundError: kotlin/collections/AbstractMutableMap
at java.base/java.lang.ClassLoader.defineClass1(Native Method)
at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1016)
at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:151)
at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:825)
at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:723)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:646)
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:604)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:168)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at kotlinx.coroutines.debug.internal.DebugProbesImpl.<clinit>(DebugProbesImpl.kt:30)
at kotlinx.coroutines.debug.AgentPremain.<clinit>(AgentPremain.kt:26)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:513)
at java.instrument/sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:525)
Caused by: java.lang.ClassNotFoundException: kotlin.collections.AbstractMutableMap
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:606)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:168)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 17 more
*** java.lang.instrument ASSERTION FAILED ***: "result" with message agent load/premain call failed at ./open/src/java.instrument/share/native/libinstrument/JPLISAgent.c line: 422
FATAL ERROR in native method: processing of -javaagent failed, processJavaStart failed
Disconnected from the target VM, address: '127.0.0.1:64553', transport: 'socket'
Process finished with exit code 1
I've looked at many resources online, but no one seems to have encountered a similar error.
I've tried multiple variations of this, but none of them seem to work. Any ideas?
Thanks in advance.
This has been fixed in IntelliJ IDEA 2021.3 in the scope of https://youtrack.jetbrains.com/issue/KTIJ-15750
Feel free to upgrade your installation

Quarkus migration, rest endpoints test problem - TestInstantiationException because of IllegalArgumentException

I am migrating an application to Quarkus, so far with success, but I have hit a problem that I currently cannot get past. I have REST endpoints, and when I run the app they work perfectly. But when I try to use the Quarkus test framework to test them (@QuarkusTest), I get a rather nondescriptive error:
org.junit.jupiter.api.extension.TestInstantiationException: TestInstanceFactory [io.quarkus.test.junit.QuarkusTestExtension] failed to instantiate test class [com.daimler.pia.input.globus.resource.rest.impl.GreetingResourceTest]: io.quarkus.builder.BuildException: Build failure: Build failed due to errors
[error]: Build step io.quarkus.deployment.steps.ConfigBuildSteps#generateConfigSources threw an exception: java.lang.IllegalArgumentException
at org.objectweb.asm.ClassVisitor.<init>(ClassVisitor.java:79)
at io.quarkus.gizmo.GizmoClassVisitor.<init>(GizmoClassVisitor.java:22)
at io.quarkus.gizmo.ClassCreator.writeTo(ClassCreator.java:150)
at io.quarkus.gizmo.ClassCreator.close(ClassCreator.java:203)
at io.quarkus.deployment.steps.ConfigBuildSteps.generateConfigSources(ConfigBuildSteps.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at io.quarkus.deployment.ExtensionLoader$2.execute(ExtensionLoader.java:915)
at io.quarkus.builder.BuildContext.run(BuildContext.java:279)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2011)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1535)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1426)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:479)
[...]
[error]: Build step io.quarkus.jaeger.deployment.JaegerProcessor#setupTracer threw an exception: java.lang.IllegalArgumentException
[...]
[error]: Build step io.quarkus.deployment.logging.LoggingResourceProcessor#setupLoggingRuntimeInit threw an exception: java.lang.IllegalArgumentException
[...]
Basically, the stack trace after each [error] is the same.
I found some cases on the web describing problems with IllegalArgumentException and Quarkus, but usually there was some additional information.
I tried to move the example from https://github.com/quarkusio/quarkus-quickstarts/tree/master/getting-started-testing into my project, and the result was the same. Unfortunately, for the time being I do not have time to deal with this, but eventually I will have to come back to it. Therefore I have decided to post my problem here, because I am probably missing some small thing (or not), and maybe someone has already solved it.
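For reference, the failing test follows the quickstart pattern; it looks roughly like this (a sketch based on the quickstart's GreetingResourceTest, with the quickstart's /hello endpoint and expected body rather than my real resource):

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;

@QuarkusTest
public class GreetingResourceTest {

    // The test never runs: QuarkusTestExtension fails while building the test
    // application, before this class is even instantiated.
    @Test
    public void testHelloEndpoint() {
        given()
          .when().get("/hello")
          .then()
             .statusCode(200)
             .body(is("hello"));
    }
}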

Oracle JDBC connection failure

I am experiencing a very strange problem. I have a few JUnit test cases which create a JDBC Oracle connection and close it when they are done. For example, I have these JUnit tests:
FetchTest
InsertTest
UpdateTest
DeleteTest
The first two test cases run perfectly fine, but when the third test case tries to connect to Oracle through JDBC, it throws an exception:
Exception occured while creating connection object using DriverManager
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
at oracle.net.ns.DataPacket.send(DataPacket.java:199)
at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:211)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:227)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:175)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:100)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:85)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:122)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:78)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1179)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1155)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:279)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:186)
at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:366)
at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:752)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:359)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:531)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:221)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:503)
at java.sql.DriverManager.getConnection(DriverManager.java:571)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
My 4th test then also runs fine, getting a connection and deleting data as expected.
I also tried ignoring the 3rd test case, and then the 4th test case gave the same exception.
What are the possible causes of this exception?
Is this an issue with timing? This exception occurs in the daily build in Jenkins:
java.sql.SQLRecoverableException: IO Error: Connection reset by peer: socket write error
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:421)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:531)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:221)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:503)
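For context, each test opens and closes its own connection, roughly like this (a sketch only; the thin-driver URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.Test;

public class UpdateTest {

    // Sketch of the per-test connection handling described above; the JDBC URL
    // and credentials are placeholders for the real ones.
    @Test
    public void updateRow() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password")) {
            // ... perform the update under test and assert on the result ...
        }
    }
}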

PIG Unable to Read Local CSV Leading to Job Failure

I am relatively new to the Pig/Hadoop ecosystem and am encountering a frustrating issue when trying to execute a simple DUMP. I am trying to run the Pig script below (the file is local, not HDFS, so I am opening the Pig shell using pig -x local).
REGISTER utils.py USING jython AS utils;
events = LOAD '../test/events.csv' USING PigStorage(',') AS (patientid:int, eventid:chararray, eventdesc:chararray, timestamp:chararray, value:float);
events = FOREACH events GENERATE patientid, eventid, ToDate(timestamp, 'yyyy-MM-dd') AS etimestamp, value;
DUMP events;
However, when doing this, I receive the following error messages (failed job summary below, full PIG stack trace at bottom):
Input(s): Failed to read data from "file:///bootcamp/test/events.csv"
Output(s): Failed to produce result in "file/tmp/temp/305054006/tmp-908064458"
Pig Stack Trace:
ERROR 1066: Unable to open iterator for alias events. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias events. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.pig.PigServer.openIterator(PigServer.java:925)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:746)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:230)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:205)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:558)
at org.apache.pig.Main.main(Main.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:822)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:452)
at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:280)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1390)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1375)
at org.apache.pig.PigServer.storeEx(PigServer.java:1034)
at org.apache.pig.PigServer.store(PigServer.java:997)
at org.apache.pig.PigServer.openIterator(PigServer.java:910)
... 13 more
Caused by: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
at org.apache.hadoop.mapreduce.Job.ensureState(Job.java:294)
at org.apache.hadoop.mapreduce.Job.getTaskReports(Job.java:540)
at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.getTaskReports(HadoopShims.java:235)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.getStats(MapReduceLauncher.java:801)
...20 more
I have seen similar issues regarding failed jobs, but sadly I haven't managed to hunt down a resolution yet.
EDIT: I should mention that when following the Pig tutorial at the link below, I encountered the same issue.
http://www.sunlab.org/teaching/cse8803/fall2016/lab/hadoop-pig/
So, I found I was able to DUMP the file by doing the following:
tmp = LIMIT events 100000; -- any int larger than the number of rows
DUMP tmp;
I had seen a similar issue on here and was able to resolve it by running as root.

Hadoop/Yarn distributed shell example

I'm trying to run the distributed shell example (using an SVN checkout of Hadoop, which is why the version is set to 3.0.0-SNAPSHOT):
yarn jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.0-SNAPSHOT.jar \
-jar share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.0-SNAPSHOT.jar \
org.apache.hadoop.yarn.applications.distributedshell.Client -shell_command whoami
However, it does not work:
12/09/03 13:44:37 FATAL distributedshell.Client: Error running CLient
java.lang.reflect.UndeclaredThrowableException
at org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:128)
at org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getClusterMetrics(ClientRMProtocolPBClientImpl.java:123)
at org.hadoop.yarn.client.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:163)
at org.apache.hadoop.yarn.applications.distributedshell.Client.run(Client.java:316)
at org.apache.hadoop.yarn.applications.distributedshell.Client.main(Client.java:164)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Unknown protocol: org.apache.hadoop.yarn.api.ClientRMProtocolPB
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.getProtocolImpl(ProtobufRpcEngine.java:398)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:456)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:898)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1732)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1728)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1726)
at org.apache.hadoop.ipc.Client.call(Client.java:1164)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
at $Proxy7.getClusterMetrics(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ClientRMProtocolPBClientImpl.getClusterMetrics(ClientRMProtocolPBClientImpl.java:121)
... 8 more
The essential problem seems to be in the second trace:
Unknown protocol: org.apache.hadoop.yarn.api.ClientRMProtocolPB
Does anyone know how protocol registration for Hadoop's ProtoBufRPC works? Any ideas on how to debug this?
Edit: With Hadoop version 2.0.1-alpha, it works slightly better.
12/09/03 18:43:14 INFO distributedshell.Client: Application did not finish. YarnState=FAILED, DSFinalStatus=FAILED. Breaking monitoring loop
12/09/03 18:43:14 ERROR distributedshell.Client: Application failed to complete successfully
So maybe my build did not work right. Any idea what is causing the problem above? (I'd really like to use HEAD, as I'm planning to do some low-level experiments beyond MapReduce.) Or is HEAD partially broken? Does the distributed shell on HEAD work for you?
My own (not yet working ...) client still fails with the same error:
Caused by: java.io.IOException: Unknown protocol: org.apache.hadoop.yarn.api.ClientRMProtocolPB
It turned out that the main problem with my own code was that I naively instantiated the Configuration class instead of YarnConfiguration. As a result, the YARN config files were not read, and the client tried to contact the servers on their default ports, which don't match my settings.
The same bug seems to be present in the distributedshell example.
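In code, the fix is just to construct the configuration like this (a minimal sketch):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ClientConfig {
    public static void main(String[] args) {
        // YarnConfiguration layers yarn-default.xml and yarn-site.xml on top of
        // the core configuration, so the client contacts the ResourceManager
        // address from the config files instead of the hard-coded default.
        Configuration conf = new YarnConfiguration(); // not: new Configuration()
        System.out.println("RM address: " + conf.get(YarnConfiguration.RM_ADDRESS));
    }
}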
