Corrupted data exception when running a Gremlin script - JanusGraph

JanusGraph 0.3.2 + Scylla as storage.
gremlin> g.V(24624).valueMap(true)
Invalid flag encountered in serialization: -96. Corrupted data.
Type ':help' or ':h' for help.
Display stack trace? [yN]
java.lang.IllegalArgumentException: Invalid flag encountered in serialization: -96. Corrupted data.
at com.google.common.base.Preconditions.checkArgument(Preconditions.java:145)
at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectInternal(StandardSerializer.java:257)
at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObject(StandardSerializer.java:238)
at org.janusgraph.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:205)
at org.janusgraph.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:195)
at org.janusgraph.graphdb.database.EdgeSerializer.parseRelation(EdgeSerializer.java:129)
at org.janusgraph.graphdb.database.EdgeSerializer.readRelation(EdgeSerializer.java:73)
at org.janusgraph.graphdb.transaction.RelationConstructor.readRelation(RelationConstructor.java:70)
at org.janusgraph.graphdb.transaction.RelationConstructor$1.next(RelationConstructor.java:57)
at org.janusgraph.graphdb.transaction.RelationConstructor$1.next(RelationConstructor.java:45)
at org.janusgraph.graphdb.query.LimitAdjustingIterator.next(LimitAdjustingIterator.java:94)
at com.google.common.collect.Iterators$7.computeNext(Iterators.java:651)
at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
at org.janusgraph.graphdb.query.ResultSetIterator.nextInternal(ResultSetIterator.java:54)
at org.janusgraph.graphdb.query.ResultSetIterator.<init>(ResultSetIterator.java:44)

Related

RocksDB exception in Kafka Streams with kafka_2.13-3.2.0 running on Windows

I have Kafka 2.13-3.2.0 running on a Windows machine. I am trying a stream join operation and getting the following error. I can see the same issue was fixed in version 1.0.1 as per https://issues.apache.org/jira/browse/KAFKA-6162, but I am still getting this error with Kafka 2.13-3.2.0.
Error Logs:
Caused by: org.rocksdb.RocksDBException: Failed to create dir: D:\tmp\kafka-streams\join_driver_application\1_0\KSTREAM-JOINTHIS-0000000014-store\KSTREAM-JOINTHIS-0000000014-store:1661385600000: Invalid argument
at org.rocksdb.RocksDB.open(Native Method)
at org.rocksdb.RocksDB.open(RocksDB.java:231)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)
... 23 more
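For context, Kafka Streams builds that directory from the state directory, the application ID, and the task ID; the trailing ":1661385600000" looks like the windowed-store segment suffix discussed in the linked KAFKA-6162, and ":" is not a legal character in Windows file names, which would explain the "Invalid argument". A minimal sketch of the relevant Streams configuration, with values inferred from the path in the error log:
application.id=join_driver_application
state.dir=D:/tmp/kafka-streams
Note that moving state.dir elsewhere only changes the prefix of the path, not the segment naming, so this is context rather than a definitive fix.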

DSE is not starting, stating it is unable to write to the commit log directory

I am getting the below error while starting DSE:
ERROR [main] 2020-02-26 13:08:33,269 DseModule.java:97 - {}. Exiting...
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) An exception was caught and reported. Message: Unable to check disk space available to /u01/dse_ops/logs. Perhaps the Cassandra user does not have the necessary permissions
at com.datastax.bdp.DseModule.configure(Unknown Source)
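The message itself suggests a permissions problem on the log directory. A minimal sketch of the kind of check and fix it implies, assuming DSE runs as the cassandra user (adjust the user and path to your installation):
# inspect ownership of the directory named in the error message
ls -ld /u01/dse_ops/logs
# hand it to the user the DSE process runs as (assumed here: cassandra)
sudo chown -R cassandra:cassandra /u01/dse_ops/logs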

Unable to convert to timestamp using Kafka TimestampConverter

I am using the Kafka JDBC source connector to pull DB events, and I am running Kafka Connect in standalone mode. When I run this file, I get the error shown below. Please help me.
Code:
name=sailpointdb01107
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.password=xxxxx
connection.url=jdbc:oracle:thin:@xxxxx:1521/xxxxx
connection.user=xxxxx
query= SELECT * FROM (SELECT NAME, TO_TIMESTAMP('19700101', 'YYYYMMDD')+ NUMTODSINTERVAL(COMPLETED/1000,'SECOND') AS TASKFAILEDON FROM task WHERE COMPLETION_STATUS='Error')
mode=timestamp
timestamp.column.name=TASKFAILEDON
topic.prefix=testing
validate.non.null=false
transforms=TimestampConverter
transforms.TimestampConverter.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.TimestampConverter.format=yyyy-MM-dd
transforms.TimestampConverter.target.type=Timestamp
transforms.TimestampConverter.target.field=TASKFAILEDON
Error:
[2019-10-01 15:17:45,058] ERROR WorkerSourceTask{id=sailpointdb01107-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
Caused by: org.apache.kafka.connect.errors.ConnectException: Schema Schema{STRUCT} does not correspond to a known timestamp type format
at org.apache.kafka.connect.transforms.TimestampConverter.timestampTypeFromSchema(TimestampConverter.java:406)
at org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:334)
at org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:275)
at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
[2019-10-01 15:17:45,059] ERROR WorkerSourceTask{id=sailpointdb01107-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
Configuring this line on the connector could avoid that issue:
time.precision.mode: "connect"
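In a standalone .properties file like the one above, that setting would be written as a plain key=value line (the property name comes from this answer; verify that your connector version actually supports it):
time.precision.mode=connect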

JanusGraph query exception: org.apache.tinkerpop.gremlin.groovy.plugin.RemoteException: Could not find type for id

I can log in and connect to the TinkerPop server successfully, but when I execute a Gremlin query, a strange exception comes up: org.apache.tinkerpop.gremlin.groovy.plugin.RemoteException: Could not find type for id: 137481. When I use g.V(137481), an exception is also thrown, but when I execute g.V(137481).valueMap(true), it returns a node. Here is the Gremlin
execution result:
[root@docker9 janusgraph-0.2.0-hadoop2]# bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/janusgraph-0.2.0-hadoop2/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/janusgraph-0.2.0-hadoop2/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
09:51:06 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
plugin activated: tinkerpop.hadoop
plugin activated: tinkerpop.spark
plugin activated: tinkerpop.tinkergraph
gremlin> :remote connect tinkerpop.server conf/remote.yaml session
==>Configured cdh-slave1/192.168.66.149:8182-[f699751a-c046-472f-8d84-a22a7897b241]
gremlin> :remote console
==>All scripts will now be sent to Gremlin Server - [cdh-slave1/192.168.66.149:8182]-[f699751a-c046-472f-8d84-a22a7897b241] - type ':remote console' to return to local mode
gremlin> g
==>graphtraversalsource[standardjanusgraph[cassandrathrift:[192.168.66.149]], standard]
gremlin> g.V(137481).valueMap()
==>{}
gremlin> g.V(137481)
Server could not serialize the result requested. Server error - Error during serialization: Could not find type for id: 137481. Note that the class must be serializable by the client and server for proper operation.
Type ':help' or ':h' for help.
Display stack trace? [yN]
gremlin>
gremlin> g.V(137481).valueMap(true)
==>{id=137481, label=vertex}
I'm sure that the vertex whose 'uri'='/0/85' already exists!
gremlin> g.V().has('uri','/0/85').valueMap()
Could not find type for id: 137481
Type ':help' or ':h' for help.
Display stack trace? [yN]y
org.apache.tinkerpop.gremlin.groovy.plugin.RemoteException: Could not find type for id: 137481
at org.apache.tinkerpop.gremlin.console.groovy.plugin.DriverRemoteAcceptor.submit(DriverRemoteAcceptor.java:175)
at org.apache.tinkerpop.gremlin.console.GremlinGroovysh.execute(GremlinGroovysh.groovy:99)
at org.codehaus.groovy.tools.shell.Shell.leftShift(Shell.groovy:122)
at org.codehaus.groovy.tools.shell.ShellRunner.work(ShellRunner.groovy:95)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$work(InteractiveShellRunner.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:152)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.work(InteractiveShellRunner.groovy:124)
at org.codehaus.groovy.tools.shell.ShellRunner.run(ShellRunner.groovy:59)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.super$2$run(InteractiveShellRunner.groovy)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:93)
at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:325)
at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:1213)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuperN(ScriptBytecodeAdapter.java:132)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.invokeMethodOnSuper0(ScriptBytecodeAdapter.java:152)
at org.codehaus.groovy.tools.shell.InteractiveShellRunner.run(InteractiveShellRunner.groovy:83)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
at org.apache.tinkerpop.gremlin.console.Console.<init>(Console.groovy:166)
at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:232)
at org.apache.tinkerpop.gremlin.console.Console.main(Console.groovy:478)
This happens if multiple instances of Gremlin Server are running, for example because a Gremlin Server was not shut down or killed properly, or because the VM on which Gremlin Server is running restarted.
The solution is to log in to the Gremlin Console and run commands appropriate to your backend; in my case that's Cassandra and Elasticsearch.
So, I ran:
Method 1
:remote connect tinkerpop.server conf/remote.yaml session
:remote console session
or
graph=JanusGraphFactory.open('conf/janusgraph-cql-es.properties');
g=graph.traversal()
If you are running containers, then your command should be similar to this:
graph=JanusGraphFactory.open('/etc/opt/janusgraph/janusgraph.properties');
g=graph.traversal()
Now, after running those, you can run:
mgmt = graph.openManagement()
mgmt.getOpenInstances()
It will display all the instances, e.g.:
ac12000231-a9ffbcbb0e921
ac12000230-a9ffbcbb0e921(current)
Except for the current instance, you should close all the other instances:
mgmt.forceCloseInstance('ac12000231-a9ffbcbb0e921')
After closing all the instances, commit the changes:
mgmt.commit()
Now restart your Gremlin Server and run your query; it should work.
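If there are many stale instances, a small Groovy sketch in the Gremlin Console can close everything except the current one; it relies on getOpenInstances() marking the current instance with the '(current)' suffix shown above:
// close every open JanusGraph instance except the one we are using right now
mgmt = graph.openManagement()
mgmt.getOpenInstances().each { instance ->
    if (!instance.endsWith('(current)')) {
        mgmt.forceCloseInstance(instance)
    }
}
// persist the changes
mgmt.commit()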
Method 2
If the problem persists, just kill your Gremlin Server and start it again a few times, and it should work.
Another reason this happens is if the data was not restored properly. If you are using a cluster, take the backup on all the nodes, then restore on your destination node or nodes; see the sketch below.
I used nodetool for backup and sstableloader for restoring data.
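A rough sketch of that flow (the keyspace name and paths are assumptions; adjust them to your cluster):
# on each source node: snapshot the JanusGraph keyspace (name assumed)
nodetool snapshot janusgraph
# on the destination: stream each table's SSTables back in, one table directory at a time
sstableloader -d 192.168.66.149 /path/to/backup/janusgraph/edgestore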

Hive shell exception: java type java.lang.Integer can't be mapped for this datastore

I have Hadoop and HBase installed. When I run the show tables command in the Hive shell, the following error is raised.
Hive version 0.10.0
Hbase version 0.90.6
Hadoop version 1.1.2
hive> show tables;
FAILED: Error in metadata: MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDOFatalInternalException: JDBC type integer declared for field
"org.apache.hadoop.hive.metastore.model.MTable.createTime" of java type java.lang.Integer cant be mapped for this datastore.
NestedThrowables:
org.datanucleus.exceptions.NucleusException: JDBC type integer declared for field "org.apache.hadoop.hive.metastore.model.MTable.createTime" of java type java.lang.Integer cant be mapped for this datastore.)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I found where the problem comes from. The error is related to the language settings of the Linux box. Before launching Hive, export LANG=C is needed, as shown below.
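For example, in the same shell session, before starting the Hive CLI:
export LANG=C
hive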
