Following the official guide of Titan DB here, I tried to run the command:
graph = TitanFactory.open('conf/titan-cassandra-es.properties')
I got this error:
Backend shorthand unknown: conf/titan-cassandra-es.properties
Obviously, the reason is the incorrect path to the
titan-cassandra-es.properties
file. So I changed it to:
graph = TitanFactory.open('../conf/titan-cassandra-es.properties')
and got this error:
Encountered unregistered class ID: 141.
The error happens in the following version:
titan-0.5.4-hadoop2
On titan-1.0.0-hadoop2, instead of this error message, I get this one:
Invalid import definition: 'com.thinkaurelius.titan.hadoop.MapReduceIndexManagement'; reason: startup failed:
script14747941661821834264593.groovy: 1: unable to resolve class com.thinkaurelius.titan.hadoop.MapReduceIndexManagement @ line 1, column 1.
import com.thinkaurelius.titan.hadoop.MapReduceIndexManagement
^
1 error
And also on titan-1.0.0-hadoop2, I get this one:
The input line is too long.
The syntax of the command is incorrect.
Does anyone know how to handle this issue?
It seems like you have not even managed to get Titan 1 to start up yet.
I do not believe Titan 1 supports Windows out of the box, i.e. the downloadable package will not just work on Windows.
That said, I have managed to get Titan DB 1 to work on Windows. To do so, all you have to do is install Cassandra 2.x on Windows; this guide may help you out. Start Cassandra and enable Thrift connections.
With that done, you should be able to get Titan doing basic operations on Windows. From there you may find dealing with your current errors easier.
Side note: Windows support for Titan 0.5.x may be more substantial, so you could look into that as well.
I'm having trouble using the GeoMondrian Workbench on my Ubuntu 18.04 LTS system. I've followed the installation instructions and have installed the following:
Oracle Java 8
PostgreSQL 9.5
PostGIS 2.5
I've downloaded the GeoMondrian Workbench and the "simple_geofoodmart.sql" file, created a database, and passed the necessary parameters to the workbench.
However, when I try to open the "simple_geofoodmart.xml" schema file, I get the following error:
Error: Schema file /home/tarik/workbench/demo/simple_geofoodmart.xml is invalid. org/opengis/referencing/NoSuchAuthorityCodeException java.lang.NoClassDefFoundError: org/opengis/referencing/NoSuchAuthorityCodeException
Additionally, when I try to use an MDX query, I get the following error:
"Exception in thread 'AWT-EventQueue-0' java.lang.NoClassDefFoundError: Could not initialize class mondrian.olap.fun.GlobalFunTable at mondrian.rolap.RolapSchema$RolapSchemaFunctionTable.defineFunctions(RolapSchema.java:1643)"
I've tried to resolve all the dependencies and have made sure that I have the necessary JAR files in my classpath, but I'm still getting the same errors. Can anyone help me figure out what's going wrong and how to fix it? Thank you!
Pybullet error: physics server version mismatch (expected 202010061 got 201902120)
I am trying to connect pybullet with VR, running the example VR code shipped with pybullet, vr_kuka_setup_vrSyncPlugin.py. I first ran the "build_visual_studio_vr_pybullet_double.bat" script and App_PhysicsServer_SharedMemory_VR*.exe, and then tried vr_kuka_setup_vrSyncPlugin.py, but I get this error message:
b3Error[examples/SharedMemory/PhysicsClientSharedMemory.cpp,359]:
Error: physics server version mismatch (expected 202010061 got 201902120)
I am pretty sure that there is only one version of pybullet on my computer, 3.0.9, and the Python file and App_PhysicsServer_SharedMemory_VR*.exe come from the same repo, so I don't know what is causing this error. Is it a version error or something else?
(screenshots of the code and its output)
From the output, I don't think pybullet connects to the shared memory successfully; I think it connects to the GUI instead.
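For reference, here is a minimal sketch (assuming the pip-installed pybullet module) that makes the connection mode explicit, so a failed shared-memory connection shows up clearly instead of silently ending up in GUI mode:

import pybullet as p

# Try to attach to an already running physics server (such as the VR .exe)
# over shared memory; connect() returns -1 when no server can be reached.
cid = p.connect(p.SHARED_MEMORY)
if cid < 0:
    print("no shared-memory server reachable (or version mismatch); using GUI instead")
    cid = p.connect(p.GUI)
print("connected with client id", cid)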
My Python version is 3.7. After I ran pip3 install happybase, I started the Thrift server with hbase thrift start and wrote a brief .py file as follows:
import happybase
connection = happybase.Connection('master')
table = connection.table('jmlr') #'jmlr' is a table in hbase
for i in table.scan():
    print(i)
table.put('001', {'title':'dasds'}) #error here
connection.close()
When it gets to table.put(), it reports this error:
thriftpy2.transport.base.TTransportException: TTransportException(type=4, message='TSocket read 0 bytes')
And at the same time, the Thrift server reported an error:
ERROR [thrift-worker-1] thrift.TBoundedThreadPoolServer: Error occurred during processing of message. java.lang.IllegalArgumentException: Invalid famAndQf provided.
But when I ran this Python file again just now, Thrift gave me a different error:
thrift.TBoundedThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Bad version in readMessageBegin
I have tried adding parameters like protocol='compact' and transport='framed', but that didn't work; even table.scan() failed.
Everything in the HBase shell works, so I can't figure out what went wrong; I'm at my wits' end.
I ran into the same issue and found this solution. You need to include a column qualifier, even an empty one (the ':' symbol is the delimiter between column family and column qualifier), in the put() method:
table.put('001', {'title:':'dasds'})
Also, you got a different error message on the second run of the script because the Thrift server had already failed.
I hope this helps.
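For completeness, a minimal sketch of the corrected script, reusing the host and table name from the question (the 'lang' qualifier is only a hypothetical example of a named qualifier):

import happybase

connection = happybase.Connection('master')  # Thrift server host, as in the question
table = connection.table('jmlr')

# Column keys must have the form 'family:qualifier'; the qualifier may be
# empty, but the ':' delimiter has to be there.
table.put('001', {'title:': 'dasds'})    # empty qualifier
table.put('001', {'title:lang': 'en'})   # named qualifier (hypothetical)

for row_key, data in table.scan():
    print(row_key, data)

connection.close()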
When running my Beam pipeline locally it all works as expected, but when trying to run it on the DataflowRunner I suddenly get the error below. Honestly, I don't even know where to start debugging this because the DataflowRunner seems to be a black box.
Jan 14, 2019 11:26:51 AM org.apache.beam.runners.dataflow.DataflowRunner fromOptions
INFO: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath: will stage 165 files. Enable logging at DEBUG level to see which files will be staged.
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class org.apache.beam.model.pipeline.v1.RunnerApi$StandardPTransforms$Primitives does not implement the requested interface com.google.protobuf.ProtocolMessageEnum
at org.apache.beam.runners.core.construction.BeamUrns.getUrn(BeamUrns.java:27)
at org.apache.beam.runners.core.construction.PTransformTranslation.<clinit>(PTransformTranslation.java:58)
at org.apache.beam.runners.core.construction.UnconsumedReads$1.visitValue(UnconsumedReads.java:49)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:666)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.visit(TransformHierarchy.java:649)
at org.apache.beam.sdk.runners.TransformHierarchy$Node.access$600(TransformHierarchy.java:311)
at org.apache.beam.sdk.runners.TransformHierarchy.visit(TransformHierarchy.java:245)
at org.apache.beam.sdk.Pipeline.traverseTopologically(Pipeline.java:458)
at org.apache.beam.runners.core.construction.UnconsumedReads.ensureAllReadsConsumed(UnconsumedReads.java:40)
at org.apache.beam.runners.dataflow.DataflowRunner.replaceTransforms(DataflowRunner.java:868)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:660)
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:173)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:313)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:299)
at (my code: pipe.run().waitUntilFinish();)
Check the versions of Beam and the rest of your dependencies, and upgrade them where possible.
I had the same error, and seeing you hit it too made me think it must be a dependency conflict, as it didn't exist before.
I'm using Scio to deploy to Dataflow and just referenced the versions they're using: https://github.com/spotify/scio/blob/v0.7.1/build.sbt
I also updated guava and protobuf.
I know you're using Java, but try updating Beam to 2.9.0, and maybe guava and protobuf as well.
Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS, it gives an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command:
ant compile-libhdfs -Dlibhdfs=1
which gives me this error:
target "compile-libhdfs" does not exist in the project "hadoop"
I also tried one more command:
ant compile-c++-libhdfs -Dlibhdfs=1
which gives this error:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check for HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location and <arch> is the machine's architecture (i386-32 or amd64-64).
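If you want to check that quickly, here is a small Python sketch; the install location is taken from the build output in your question, so adjust it to your setup:

import glob
import os

# Install location taken from the question's build output; adjust as needed.
hadoop_location = "/home/hadoop/hadoop-0.20.203.0"

# Look for a prebuilt libhdfs under c++/Linux-<arch>/lib.
pattern = os.path.join(hadoop_location, "c++", "Linux-*", "lib", "libhdfs.so*")
matches = glob.glob(pattern)
if matches:
    for path in matches:
        print("found:", path)
else:
    print("no prebuilt libhdfs found; you may need to build it after all")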
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
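For example, assuming the install location from your build output, the entry would look something like HADOOP_LOCATION=/home/hadoop/hadoop-0.20.203.0.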
good luck.