I am unable to use json4s-jackson 3.2.11 within my Spark 1.4.1 Streaming application.
Thinking that the existing dependency within the spark-core project was causing the problem (as explained in "Is it possible to use json4s 3.2.11 with Spark 1.3.0?"), I built Spark from source with an adjusted core/pom.xml, changing the reference from json4s-jackson_2.10:3.2.10 to 3.2.11, as 3.2.10 does not support extracting to implicit types.
I have replaced the jars referenced in my IntelliJ IDEA project with the rebuilt jars; however, I am still getting the same errors as before. I fear that Spark must still be referencing json4s 3.2.10 somehow.
Here is my simple test:
import org.apache.spark.SparkContext
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.json4s._
import org.json4s.jackson.JsonMethods._

object StreamingPredictor {

  implicit val formats = DefaultFormats

  case class event(Key: String,
                   sensorId: String,
                   sessionId: String,
                   deviceId: String,
                   playerId: String,
                   impressionId: String,
                   time: String,
                   eventName: String,
                   eventProperties: Map[String, Any],
                   dl: Array[List[(String, Any)]],
                   $post: Boolean,
                   $sync: Boolean)

  // parse the incoming JSON string and return the event name
  def parser(json: String): String = {
    val parsedJson = parse(json)
    val foo = parsedJson.extract[event]
    foo.eventName
  }

  def main(args: Array[String]) {
    val zkQuorum = "localhost:2181"
    val group = "myGroup"
    val topic = Map("test" -> 1)
    val sparkContext = new SparkContext("local[4]", "KafkaConsumer")
    val ssc = new StreamingContext(sparkContext, Seconds(1))
    val json = KafkaUtils.createStream(ssc, zkQuorum, group, topic)
    val eventName = json.map(_._2).map(parser)
    eventName.print()
    ssc.start()
    ssc.awaitTermination() // keep the streaming job running
  }
}
The error I get when referencing json4s 3.2.11 in my application pom.xml file:
java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods$.render(Lorg/json4s/JsonAST$JValue;)Lorg/json4s/JsonAST$JValue;
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:143)
at org.apache.spark.scheduler.EventLoggingListener$$anonfun$logEvent$1.apply(EventLoggingListener.scala:143)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.EventLoggingListener.logEvent(EventLoggingListener.scala:143)
at org.apache.spark.scheduler.EventLoggingListener.onBlockManagerAdded(EventLoggingListener.scala:174)
at org.apache.spark.scheduler.SparkListenerBus$class.onPostEvent(SparkListenerBus.scala:46)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.scheduler.LiveListenerBus.onPostEvent(LiveListenerBus.scala:31)
at org.apache.spark.util.ListenerBus$class.postToAll(ListenerBus.scala:56)
at org.apache.spark.util.AsynchronousListenerBus.postToAll(AsynchronousListenerBus.scala:37)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(AsynchronousListenerBus.scala:79)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
at org.apache.spark.util.AsynchronousListenerBus$$anon$1.run(AsynchronousListenerBus.scala:63)
And the error I get when I use json4s-jackson_2.10:3.2.10 in my application pom.xml file:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost): org.json4s.package$MappingException: No usable value for eventProperties
No information known about type
at org.json4s.reflect.package$.fail(package.scala:96)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:443)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:463)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$14.apply(Extraction.scala:463)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:451)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:491)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:488)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:500)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:488)
at org.json4s.Extraction$.extract(Extraction.scala:332)
at org.json4s.Extraction$.extract(Extraction.scala:42)
at org.json4s.ExtractableJsonAstNode.extract(ExtractableJsonAstNode.scala:21)
at com.pca.triggar.Streaming.StreamingPredictor$.parser(StreamingPredictor.scala:38)
at com.pca.triggar.Streaming.StreamingPredictor$$anonfun$2.apply(StreamingPredictor.scala:57)
at com.pca.triggar.Streaming.StreamingPredictor$$anonfun$2.apply(StreamingPredictor.scala:57)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1276)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$28.apply(RDD.scala:1276)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.json4s.package$MappingException: No information known about type
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$instantiate(Extraction.scala:465)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:491)
at org.json4s.Extraction$ClassInstanceBuilder$$anonfun$result$6.apply(Extraction.scala:488)
at org.json4s.Extraction$.org$json4s$Extraction$$customOrElse(Extraction.scala:500)
at org.json4s.Extraction$ClassInstanceBuilder.result(Extraction.scala:488)
at org.json4s.Extraction$.extract(Extraction.scala:332)
at org.json4s.Extraction$$anonfun$extract$5.apply(Extraction.scala:316)
at org.json4s.Extraction$$anonfun$extract$5.apply(Extraction.scala:316)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.json4s.Extraction$.extract(Extraction.scala:316)
at org.json4s.Extraction$ClassInstanceBuilder.org$json4s$Extraction$ClassInstanceBuilder$$buildCtorArg(Extraction.scala:431)
... 42 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
OK, I found the problem. As posted elsewhere, you need to compile against json4s 3.2.10. Apparently, doing so generates a binary which then works with Spark (version 1.5 in my case; the same holds for some earlier versions as well). It has to do with the default parameter in the render() method, which shows up in 3.2.11.
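For reference, pinning that version in an application pom.xml might look like the snippet below; the coordinates are taken from the question itself, so treat this as a sketch rather than a verified build file:
<dependency>
  <groupId>org.json4s</groupId>
  <artifactId>json4s-jackson_2.10</artifactId>
  <version>3.2.10</version>
</dependency>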
I had the same issue with EMR 4.3.0 and Spark 1.6.
I solved it by installing json4s via a bootstrap action:
Download the json4s jar and put it in S3.
Create the following shell script and put it in S3:
#!/bin/bash
set -e
wget -S -T 10 -t 5 https://s3.amazonaws.com/your-bucketname/json4s-native_2.10-3.2.4.jar
mkdir -p /home/hadoop/lib
mv json4s-native_2.10-3.2.4.jar /home/hadoop/lib/
Add it as a bootstrap action in the EMR launch steps (see the sketch below).
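If you launch the cluster from the AWS CLI, registering the bootstrap action could look roughly like this; the bucket, script name, and the other cluster options are placeholders, so adapt them to your setup:
aws emr create-cluster \
  --name "spark-json4s" \
  --release-label emr-4.3.0 \
  --applications Name=Spark \
  --instance-type m3.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --bootstrap-actions Path=s3://your-bucketname/install-json4s.sh,Name=InstallJson4s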
Related
I'm trying to call lapply within a function applied to a Spark DataFrame. According to the documentation, this is possible since Spark 2.0.
wrapper = function(df){
  out = df
  out$len <- unlist(lapply(df$value, function(y) length(y)))
  return(out)
}
# dd is Spark Data Frame with one column (value) of type raw
dapplyCollect(dd, wrapper)
It returns error:
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...): org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 37.0 failed 1 times, most recent failure: Lost task 0.0 in stage 37.0 (TID 37, localhost): org.apache.spark.SparkException: R computation failed with
Error in (function (..., deparse.level = 1, make.row.names = TRUE) :
incompatible types (from raw to logical) in subassignment type fix
The following works fine:
wrapper(collect(dd))
But we want the computation to run on the nodes (not on the driver).
What could be the problem? There is a related question, but it does not help.
Thanks.
You need to add the schema, as it can only be defaulted if the columns of the output have the same mode as the input.
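A minimal sketch of that suggestion, assuming the intent is to switch from dapplyCollect to dapply (which accepts an explicit schema); the column types here are guesses based on the question, so adjust them to your data:
# schema describing the output of wrapper(): the original raw column plus the new length column
schema <- structType(structField("value", "binary"),
                     structField("len", "integer"))

# dapply runs wrapper on the worker nodes and applies the explicit schema to the result
result <- dapply(dd, wrapper, schema)
head(collect(result))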
In Spark 1.6.1 (the code is in Scala 2.10), I am trying to write a DataFrame to a Parquet file:
import sc.implicits._
val triples = file.map(p => _parse(p, " ", true)).toDF()
triples.write.mode(SaveMode.Overwrite).parquet("hdfs://some.external.ip.address:9000/tmp/table.parquet")
When I do it in development mode, everything works fine. It also works fine if I set up a master and one worker in standalone mode in a Docker environment (separate Docker containers) on the same machine. It fails when I try to execute it on a cluster (1 master, 5 workers). If I set it up locally on the master, it also works.
When I try to execute it, I get following stacktrace:
{
"duration": "18.716 secs",
"classPath": "LDFSparkLoaderJobTest2",
"startTime": "2016-07-18T11:41:03.299Z",
"context": "sql-context",
"result": {
"errorClass": "org.apache.spark.SparkException",
"cause": "Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, curry-n3): java.lang.NullPointerException
at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:147)
at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:113)
at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:112)
at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetRelation.scala:101)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.abortTask$1(WriterContainer.scala:294)
at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:271)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)\n\nDriver stacktrace:",
"stack":[
"org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)",
"scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)",
"scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)",
"org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)",
"org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)",
"scala.Option.foreach(Option.scala:236)",
"org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)",
"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)",
"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)",
"org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)",
"org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)",
"org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)",
"org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)",
"org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)",
"org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:150)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)",
"org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)",
"org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)",
"org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)",
"org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)",
"org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)",
"org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)",
"org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)",
"org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)",
"org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)",
"org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)",
"org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)",
"org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)",
"org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)",
"org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:139)",
"org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:334)",
"LDFSparkLoaderJobTest2$.readFile(SparkLoaderJob.scala:55)",
"LDFSparkLoaderJobTest2$.runJob(SparkLoaderJob.scala:48)",
"LDFSparkLoaderJobTest2$.runJob(SparkLoaderJob.scala:18)",
"spark.jobserver.JobManagerActor$$anonfun$spark$jobserver$JobManagerActor$$getJobFuture$4.apply(JobManagerActor.scala:268)",
"scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)",
"scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)",
"java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)",
"java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)",
"java.lang.Thread.run(Thread.java:745)"
],
"causingClass": "org.apache.spark.SparkException",
"message": "Job aborted."
},
"status": "ERROR",
"jobId": "54ad3056-3aaa-415f-8352-ca8c57e02fe9"
}
Notes:
The job is submitted via the Spark Jobserver.
The file that needs to be converted to a Parquet file is 15.1 MB in size.
Question:
Is there something I am doing wrong (I followed the docs)?
Or is there another way I can create the Parquet file, so all my workers have access to it?
In your standalone setup only one worker was working with the ParquetRecordWriter, so it worked fine.
In the case of the real test, i.e. the cluster (1 master, 5 workers), it will fail with the ParquetRecordWriter since you are writing concurrently from multiple workers...
Please try the below.
import sc.implicits._
val triples = file.map(p => _parse(p, " ", true)).toDF()
triples.write.mode(SaveMode.Append).parquet("hdfs://some.external.ip.address:9000/tmp/table.parquet")
Please see SaveMode.Append: "append" means that when saving a DataFrame to a data source, if the data/table already exists, the contents of the DataFrame are expected to be appended to the existing data.
I had not exactly the same, but similar issues writing DataFrames to Parquet files in cluster mode. Those problems disappeared when deleting the file just before writing, using this convenience function write(..):
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.DataFrame
..
def main(arg: Array[String]) {
..
val fs = FileSystem.get(sc.hadoopConfiguration)
..
def write(df: DataFrame, fn: String): Unit = {
  val op1 = s"hdfs:///user/you/$fn"
  // delete any previous output first; the second argument enables recursive deletion
  fs.delete(new Path(op1), true)
  df.write.parquet(op1)
}
Give it a go, tell me if it works for you...
I am trying to use processing-video 2.2.1 as a library from my (Scala) project. I can run the demo capture sketches directly in the Processing IDE, but from my project I get an error that looks like a version mismatch:
Exception in thread "Animation Thread" java.lang.AbstractMethodError: com.sun.jna.Structure.getFieldOrder()Ljava/util/List;
at com.sun.jna.Structure.fieldOrder(Structure.java:868)
at com.sun.jna.Structure.getFields(Structure.java:894)
at com.sun.jna.Structure.deriveLayout(Structure.java:1042)
at com.sun.jna.Structure.calculateSize(Structure.java:966)
at com.sun.jna.Structure.calculateSize(Structure.java:933)
at com.sun.jna.Structure.allocateMemory(Structure.java:360)
at com.sun.jna.Structure.<init>(Structure.java:184)
at com.sun.jna.Structure.<init>(Structure.java:172)
at com.sun.jna.Structure.<init>(Structure.java:159)
at com.sun.jna.Structure.<init>(Structure.java:151)
at org.gstreamer.lowlevel.GObjectAPI$GTypeInstance.<init>(GObjectAPI.java:114)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at java.lang.Class.newInstance(Class.java:442)
at com.sun.jna.Structure.newInstance(Structure.java:1635)
at com.sun.jna.Structure.newInstance(Structure.java:1621)
at com.sun.jna.Structure.size(Structure.java:950)
at com.sun.jna.Native.getNativeSize(Native.java:1076)
at com.sun.jna.Structure.getNativeSize(Structure.java:1927)
at com.sun.jna.Structure.getNativeSize(Structure.java:1920)
at com.sun.jna.Structure.validateField(Structure.java:1018)
at com.sun.jna.Structure.validateFields(Structure.java:1032)
at com.sun.jna.Structure.<init>(Structure.java:179)
at com.sun.jna.Structure.<init>(Structure.java:172)
at com.sun.jna.Structure.<init>(Structure.java:159)
at com.sun.jna.Structure.<init>(Structure.java:151)
at org.gstreamer.lowlevel.GObjectAPI$GParamSpec.<init>(GObjectAPI.java:395)
at org.gstreamer.GObject.findProperty(GObject.java:656)
at org.gstreamer.GObject.set(GObject.java:87)
at processing.video.Capture.initGStreamer(Unknown Source)
at processing.video.Capture.<init>(Unknown Source)
at (my sketch)
The Maven POM file is here. I end up with the following libraries on the class path:
com.googlecode.gstreamer-java:gstreamer-java:1.5
net.java.dev.jna:jna:4.0.0
net.java.dev.jna:platform:3.4.0
org.processing:core:2.2.1
org.processing:video:2.2.1
My intuition says there is a mismatch between jna and platform - should they have the same version? That would indicate that the published POM is wrong. Which version does the Processing standalone use? Unfortunately the jars there are stripped of version information.
Indeed, it seems the processing POM specifies an incompatible JNA version. In sbt, I could fix this with a dependencyOverrides declaration:
def processingVersion = "2.2.1"
def gstreamerVersion = "1.5"
def jnaVersion = "3.4.0"
libraryDependencies ++= Seq(
  "org.processing" % "video" % processingVersion,
  "com.googlecode.gstreamer-java" % "gstreamer-java" % gstreamerVersion
)
dependencyOverrides += "net.java.dev.jna" % "jna" % jnaVersion // !
For Gradle peeps, that's:
implementation ("org.processing:core:3.3.7") {
exclude group: 'net.java.dev.jna'
}
// https://mvnrepository.com/artifact/org.processing/video
implementation ("org.processing:video:3.3.7) {
exclude group: 'net.java.dev.jna'
}
// higher jna versions have abstract Structure.getFieldOrder which gstreamer doesn't implement
implementation "net.java.dev.jna:jna:3.4.0"
Are the properties below in hive-site.xml correct for Hive access to Cassandra?
(I have copied the entire hive-default.xml content but have changed only the properties below; a sketch of how they look as hive-site.xml entries follows the list.)
javax.jdo.option.ConnectionURL: cassandra://localhost:9160
javax.jdo.option.ConnectionDriverName: org.apache.cassandra.cql.jdbc.CassandraDriver
hive.stats.dbclass: jdbc:cassandra
hive.stats.jdbcdriver: org.apache.cassandra.cql.jdbc.CassandraDriver
hive.stats.dbconnectionstring: jdbc:cassandra:;databaseName=TempStatsStore;create=true
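For reference, each of those settings goes into hive-site.xml as a <property> entry. A sketch of the first and last one (values copied from the list above, not verified against a working setup):
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>cassandra://localhost:9160</value>
</property>
<property>
  <name>hive.stats.dbconnectionstring</name>
  <value>jdbc:cassandra:;databaseName=TempStatsStore;create=true</value>
</property>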
I am running a 1-node Cassandra setup, but would later make it a minimum 2-node cluster.
When I run the below table creation command I get an error:
CREATE EXTERNAL TABLE MyHiveTable
(m string, n string, o string, p string)
STORED BY 'org.apache.hadoop.hive.cassandra.cql3.CqlStorageHandler'
TBLPROPERTIES ( "cassandra.ks.name" = "cql3ks",
"cassandra.cf.name" = "test",
"cassandra.cql3.type" = "text, text, text, text");
Error:
FAILED: Error in metadata: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
NestedThrowables:
java.lang.reflect.InvocationTargetException
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
I don't know about the JDO settings, but you could try this link, which is a far better option for integrating Hive with Cassandra:
https://github.com/milliondreams/hive/tree/cas-support-cql/cassandra-handler
I wanted to configure Logback using Groovy DSL. The file is very simple:
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.core.ConsoleAppender
import static ch.qos.logback.classic.Level.DEBUG
import static ch.qos.logback.classic.Level.INFO
appender("stdout", ConsoleAppender) {
encoder(PatternLayoutEncoder) {
pattern = "%d %p [%c] - <%m>%n"
}
}
root(INFO, ["stdout"])
I use Gradle to build my application and run it with jettyRun. I get the following error:
Failed to instantiate [ch.qos.logback.classic.LoggerContext]
Reported exception:
org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'ch.qos.logback.core.ConsoleAppender[null]' with class 'ch.qos.logback.core.ConsoleAppender' to class 'ch.qos.logback.core.Appender'
at org.codehaus.groovy.runtime.typehandling.DefaultTypeTransformation.castToType(DefaultTypeTransformation.java:360)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.castToType(ScriptBytecodeAdapter.java:599)
at ch.qos.logback.classic.gaffer.ConfigurationDelegate.appender(ConfigurationDelegate.groovy:119)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:90)
at org.codehaus.groovy.runtime.metaclass.MixinInstanceMetaMethod.invoke(MixinInstanceMetaMethod.java:53)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoMetaMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:308)
at org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.callCurrent(PogoMetaMethodSite.java:52)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallCurrent(CallSiteArray.java:46)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:133)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.callCurrent(AbstractCallSite.java:149)
However, when I switch to equivalent XML configuration, everything works. What am I doing wrong?
Using Logback 1.0.0. Tried with Logback 1.0.3.
I figured the solution out, but some questions remain open. The problem was that I had no proper Groovy on the classpath. I decided to make an example project to demonstrate this bug. I started with a console application using Gradle's "application" plugin. I didn't include Groovy as a dependency.
apply plugin: 'application'
repositories {
    mavenCentral()
}
ext.logbackVersion = '1.0.3'
ext.slf4jVersion = '1.6.4'
dependencies {
    compile "ch.qos.logback:logback-classic:$ext.logbackVersion"
    compile "org.slf4j:jcl-over-slf4j:$ext.slf4jVersion"
    //runtime "org.codehaus.groovy:groovy:1.8.6" // the problem was here
}
mainClassName = "org.test.Main"
This gave me an error, which is quite straightforward.
|-ERROR in ch.qos.logback.classic.LoggerContext[default] - Groovy classes are not available on the class path. ABORTING INITIALIZATION.
OK, cool. The dependency was missing, which is easy to fix. But why didn't I get the same error when I ran my web application? Adding the Groovy dependency solved the initial problem in the web application. I stripped my project down and will create a corresponding JIRA issue. Perhaps the detection of Groovy on the classpath is not quite correct.
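In other words, the fix for the console project amounts to enabling the dependency that was commented out above (the version shown is simply the one from that snippet; match it to the Groovy version your project actually uses):
dependencies {
    compile "ch.qos.logback:logback-classic:$ext.logbackVersion"
    compile "org.slf4j:jcl-over-slf4j:$ext.slf4jVersion"
    runtime "org.codehaus.groovy:groovy:1.8.6" // Groovy must be on the runtime classpath for logback.groovy to work
}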
The GroovyCastException "cannot cast ConsoleAppender as Appender" has all the hallmarks of a class loader issue. Which version of Groovy is this? Could you open a bug report including a test case for reproducing this issue?
Colleagues, I have faced almost the same trouble today:
When I use logback.xml, everything works fine.
When I use logback.groovy in IntelliJ IDEA, everything works fine too.
When I use logback.groovy and start my script from the command line, I get a lot of errors like:
D:\Projects\PRDMonitoring\sources>groovy tray.groovy PRD
Failed to instantiate [ch.qos.logback.classic.LoggerContext]
Reported exception:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
Script1.groovy: 2: unable to resolve class ch.qos.logback.classic.filter.LevelFilter
@ line 2, column 1.
import ch.qos.logback.classic.filter.LevelFilter
^
Script1.groovy: -1: unable to resolve class ch.qos.logback.classic.encoder.PatternLayoutEncoder
@ line -1, column -1.
Script1.groovy: 3: unable to resolve class ch.qos.logback.core.ConsoleAppender
@ line 3, column 1.
import ch.qos.logback.core.ConsoleAppender
^
Script1.groovy: -1: unable to resolve class ch.qos.logback.classic.Level
@ line -1, column -1.
Script1.groovy: 6: unable to resolve class ch.qos.logback.core.spi.FilterReply
@ line 6, column 1.
import static ch.qos.logback.core.spi.FilterReply.ACCEPT
^
Script1.groovy: 7: unable to resolve class ch.qos.logback.core.spi.FilterReply
@ line 7, column 1.
import static ch.qos.logback.core.spi.FilterReply.DENY
But after a couple of minutes spent looking for a solution, I figured out that adding the following line before the @Grapes annotation fixes the class-loading trouble: @GrabConfig(systemClassLoader=true)
@GrabConfig(systemClassLoader=true)
@Grapes([
  @Grab(group = 'org.codehaus.groovy.modules.http-builder', module = 'http-builder', version = '0.6'),
  @Grab(group = 'org.apache.commons', module = 'commons-lang3', version = '3.0'),
  @Grab(group = 'commons-io', module = 'commons-io', version = '2.4'),
  @Grab(group = 'joda-time', module = 'joda-time', version = '2.9.4'),
  @Grab(group = 'ch.qos.logback', module = 'logback-classic', version = '1.1.7'),
  @Grab(group = 'ch.qos.logback', module = 'logback-core', version = '1.1.7')
])