SonarQube Update Database for Version 7 Fails

We are running the setup to upgrade our SonarQube instance to version 7.0 and we get a database failure (see the stack trace below).
Any idea how we can get past this?
2018.02.07 07:16:47 INFO web[][o.s.s.p.d.m.DatabaseMigrationImpl] Starting DB Migration and container restart
2018.02.07 07:16:47 INFO web[][DbMigrations] Executing DB migrations...
2018.02.07 07:16:47 INFO web[][DbMigrations] #1907 'Populate table live_measures'...
2018.02.07 07:16:48 ERROR web[][DbMigrations] #1907 'Populate table live_measures': failure | time=788ms
2018.02.07 07:16:48 ERROR web[][DbMigrations] Executed DB migrations: failure | time=790ms
2018.02.07 07:16:48 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration failed | time=902ms
2018.02.07 07:16:48 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration ended with an exception
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1907 'Populate table live_measures' failed
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEachOrdered(ReferencePipeline.java:590)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:52)
at org.sonar.server.platform.db.migration.engine.MigrationEngineImpl.execute(MigrationEngineImpl.java:50)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doUpgradeDb(DatabaseMigrationImpl.java:105)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doDatabaseMigration(DatabaseMigrationImpl.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Error during processing of row: [uuid=eea5cd4b-3c1c-4001-bf83-85c1062a1b7c,project_uuid=3dabb938-1a4a-4c82-b0a7-0b20cc419be9,metric_id=10019,value=1,text_value=null,variation_value_1=0,measure_data=null]
at org.sonar.server.platform.db.migration.step.SelectImpl.newExceptionWithRowDetails(SelectImpl.java:89)
at org.sonar.server.platform.db.migration.step.SelectImpl.scroll(SelectImpl.java:81)
at org.sonar.server.platform.db.migration.step.MassUpdate.execute(MassUpdate.java:91)
at org.sonar.server.platform.db.migration.version.v70.PopulateLiveMeasures.execute(PopulateLiveMeasures.java:57)
at org.sonar.server.platform.db.migration.step.DataChange.execute(DataChange.java:44)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:75)
... 11 common frames omitted
Caused by: java.sql.BatchUpdateException: ORA-00001: unique constraint (SONARQUBE_IDM.LIVE_MEASURES_COMPONENT) violated
at oracle.jdbc.driver.OraclePreparedStatement.executeLargeBatch(OraclePreparedStatement.java:10032)
at oracle.jdbc.driver.T4CPreparedStatement.executeLargeBatch(T4CPreparedStatement.java:1364)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:9839)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:234)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
at org.apache.commons.dbcp.DelegatingStatement.executeBatch(DelegatingStatement.java:297)
at org.sonar.server.platform.db.migration.step.UpsertImpl.addBatch(UpsertImpl.java:42)
at org.sonar.server.platform.db.migration.step.MassUpdate.callSingleHandler(MassUpdate.java:118)
at org.sonar.server.platform.db.migration.step.MassUpdate.lambda$execute$0(MassUpdate.java:91)
at org.sonar.server.platform.db.migration.step.SelectImpl.scroll(SelectImpl.java:78)
... 15 common frames omitted

We had the same issue:
Execution of migration step #1907 'Populate table live_measures'
failed; [...] ERROR: duplicate key value violates unique constraint
"live_measures_component"
I checked the entries in our DB that are causing the issue with this query (we use PostgreSQL, so you may have to adapt the syntax for Oracle):
SELECT p.uuid, pm.metric_id, COUNT(1)
FROM project_measures pm
INNER JOIN projects p ON p.uuid = pm.component_uuid
INNER JOIN snapshots s ON s.uuid = pm.analysis_uuid
WHERE s.islast = TRUE
  AND pm.person_id IS NULL
GROUP BY p.uuid, pm.metric_id
HAVING COUNT(1) > 1;
There were more than 3,500 (!) entries with the same uuid and metric_id, so there was no chance to manually adjust individual table entries.
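Since the ORA-00001 in the question comes from Oracle, here is a hedged adaptation of the same duplicate check; it assumes SonarQube's Oracle schema maps boolean columns such as islast to NUMBER(1) with 0/1 values:
SELECT p.uuid, pm.metric_id, COUNT(1)
FROM project_measures pm
INNER JOIN projects p ON p.uuid = pm.component_uuid
INNER JOIN snapshots s ON s.uuid = pm.analysis_uuid
WHERE s.islast = 1 -- assumption: boolean stored as NUMBER(1) on Oracle
  AND pm.person_id IS NULL
GROUP BY p.uuid, pm.metric_id
HAVING COUNT(1) > 1;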
As we did not have enough time to analyze this further and wanted to get past the migration, we decided to drop the index "live_measures_component" on the table live_measures and recreate it without the UNIQUE constraint.
The following statements should work for you as well (on large databases, keep the runtime of these statements in mind):
DROP INDEX "live_measures_component";
CREATE INDEX live_measures_component ON live_measures (component_uuid,metric_id);
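If you are on Oracle like the original poster, the equivalent statements should look like this (a sketch only; Oracle folds unquoted identifiers to upper case, so the quoted lower-case name used above for PostgreSQL is not needed):
-- Oracle variant of the index swap
DROP INDEX live_measures_component;
CREATE INDEX live_measures_component ON live_measures (component_uuid, metric_id);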
This workaround allowed us to finish the database migration. I don't know whether it has side effects (maybe somebody from SonarQube can tell), but with more than 3,500 "problematic" entries in the DB it was our only option at the time.
Hope this helps.

(No rep, so I can't comment.) The previous answer by guenther-s initially worked, but later caused our analysis to fail with SonarQube 9.7:
org.postgresql.util.PSQLException: ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
when inserting into the live_measures table. We fixed ours by dropping the index, removing the duplicates, and re-adding a unique index afterwards:
DROP INDEX "live_measures_component";
DELETE FROM live_measures a
USING live_measures b
WHERE a.updated_at < b.updated_at
  AND a.component_uuid = b.component_uuid
  AND a.metric_uuid = b.metric_uuid;
CREATE UNIQUE INDEX live_measures_component ON live_measures (component_uuid,metric_uuid);
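Before running the CREATE UNIQUE INDEX above, it is worth confirming that the DELETE removed every duplicate: two rows sharing an identical updated_at are not matched by the a.updated_at < b.updated_at condition, and any surviving pair would make the index creation fail. A minimal check (PostgreSQL, same table as above):
SELECT component_uuid, metric_uuid, COUNT(*)
FROM live_measures
GROUP BY component_uuid, metric_uuid
HAVING COUNT(*) > 1; -- must return zero rows before creating the unique index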

Related

Why do I get an ORA-01013 error when executing a batch with HQL, when the query takes less than a second to execute

I am getting the error below for an HQL query. I have set the statement timeout to -1 as well as 0 in the WebLogic console, but still get the same error. The query printed in the logs takes 0.856 seconds to execute in Oracle SQL Developer and fetches 3986 rows.
Query:
select kpiResultDO
from KPIResult kpiResultDO
where kpiResultDO.processedFl = 'NO'
and kpiResultDO.typeCd in ('ABC','XYZ', 'PQR')
order by kpiResultDO.seq
Error:
Caused by: java.sql.SQLTimeoutException: ORA-01013: user requested cancel of current operation
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:494)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:446)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1054)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:623)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:252)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:612)

Migrating SonarQube - Database Upgrade Error

We are trying to migrate SonarQube 6.5 from EC2 to Kubernetes; our database is in AWS RDS.
Steps I followed:
1) Launched a SonarQube 6.7 pod with an empty DB (e.g. sonark8s).
2) Backed up the existing prod DB and restored it into the new DB (sonark8s).
3) Restarted the pod and then executed the upgrade.
But we get the error 'Upgrade Failed: Database connection cannot be established. Please check database status and JDBC settings.'
web.log error:
2019.01.08 12:20:42 ERROR web[][DbMigrations] #1801 'Create table CE task characteristics': failure | time=18ms
2019.01.08 12:20:42 ERROR web[][DbMigrations] Executed DB migrations: failure | time=20ms
2019.01.08 12:20:42 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration failed | time=64ms
2019.01.08 12:20:42 ERROR web[][o.s.s.p.d.m.DatabaseMigrationImpl] DB migration ended with an exception
org.sonar.server.platform.db.migration.step.MigrationStepExecutionException: Execution of migration step #1801 'Create table CE task characteristics' failed
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:79)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:67)
at java.util.Iterator.forEachRemaining(Iterator.java:116)
at java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
at java.util.stream.ReferencePipeline$Head.forEachOrdered(ReferencePipeline.java:590)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:52)
at org.sonar.server.platform.db.migration.engine.MigrationEngineImpl.execute(MigrationEngineImpl.java:50)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doUpgradeDb(DatabaseMigrationImpl.java:105)
at org.sonar.server.platform.db.migration.DatabaseMigrationImpl.doDatabaseMigration(DatabaseMigrationImpl.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: Fail to execute CREATE TABLE ce_task_characteristics (uuid VARCHAR (40) NOT NULL,task_uuid VARCHAR (40) NOT NULL,kee VARCHAR (512) NOT NULL,text_value VARCHAR (512) NULL, CONSTRAINT pk_ce_task_characteristics PRIMARY KEY (uuid)) ENGINE=InnoDB CHARACTER SET utf8 COLLATE utf8_bin
at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:97)
at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:77)
at org.sonar.server.platform.db.migration.step.DdlChange$Context.execute(DdlChange.java:117)
at org.sonar.server.platform.db.migration.version.v66.CreateTableCeTaskCharacteristics.execute(CreateTableCeTaskCharacteristics.java:67)
at org.sonar.server.platform.db.migration.step.DdlChange.execute(DdlChange.java:45)
at org.sonar.server.platform.db.migration.step.MigrationStepsExecutorImpl.execute(MigrationStepsExecutorImpl.java:75)
... 11 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'ce_task_characteristics' already exists
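The exception itself points at the likely cause: the restored database already contains tables created by an earlier (partial) upgrade attempt, so migration step #1801 fails when it tries to create them again. A hedged sketch of one way past this, assuming ce_task_characteristics really is such a leftover and you have a current backup:
-- Confirm the leftover table is empty/expendable before touching it
SELECT COUNT(*) FROM ce_task_characteristics;
-- Drop it so migration step #1801 can recreate it, then restart the upgrade
DROP TABLE ce_task_characteristics;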

Hadoop - timed out when dropping a Hive table

I get an error when trying to drop a table in Hive:
> drop table my_table;
Error:
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask.
org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out
I also don't have the related data on HDFS; what could be the reason for that?
You can try to increase the socket timeout:
set hive.metastore.client.socket.timeout=5000
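A hedged sketch of applying this in a Hive CLI or Beeline session; it assumes your Hive version interprets the bare number as seconds (newer releases also accept an explicit unit such as 5000s), and on some versions the property must be set at startup (hive-site.xml or --hiveconf) rather than in-session:
-- Print the current value (SET with no value echoes the property)
SET hive.metastore.client.socket.timeout;
-- Raise the timeout, then retry the DROP
SET hive.metastore.client.socket.timeout=5000;
DROP TABLE my_table;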

ClassCastException on DROP TABLE query in Apache Spark with Hive

I'm using the following Hive query:
this.queryExecutor.executeQuery("Drop table user")
and am getting the following exception:
java.lang.LinkageError: ClassCastException: attempting to castjar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/javax/ws/rs/ext/RuntimeDelegate.classtojar:file:/usr/hdp/2.4.2.0-258/spark/lib/spark-assembly-1.6.1.2.4.2.0-258-hadoop2.7.1.2.4.2.0-258.jar!/javax/ws/rs/ext/RuntimeDelegate.class
at javax.ws.rs.ext.RuntimeDelegate.findDelegate(RuntimeDelegate.java:116)
at javax.ws.rs.ext.RuntimeDelegate.getInstance(RuntimeDelegate.java:91)
at javax.ws.rs.core.MediaType.<clinit>(MediaType.java:44)
at com.sun.jersey.core.header.MediaTypes.<clinit>(MediaTypes.java:64)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:182)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.initReaders(MessageBodyFactory.java:175)
at com.sun.jersey.core.spi.factory.MessageBodyFactory.init(MessageBodyFactory.java:162)
at com.sun.jersey.api.client.Client.init(Client.java:342)
at com.sun.jersey.api.client.Client.access$000(Client.java:118)
at com.sun.jersey.api.client.Client$1.f(Client.java:191)
at com.sun.jersey.api.client.Client$1.f(Client.java:187)
at com.sun.jersey.spi.inject.Errors.processWithErrors(Errors.java:193)
at com.sun.jersey.api.client.Client.<init>(Client.java:187)
at com.sun.jersey.api.client.Client.<init>(Client.java:170)
at org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.serviceInit(TimelineClientImpl.java:340)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.hive.ql.hooks.ATSHook.<init>(ATSHook.java:67)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at org.apache.hadoop.hive.ql.hooks.HookUtils.getHooks(HookUtils.java:60)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1309)
at org.apache.hadoop.hive.ql.Driver.getHooks(Driver.java:1293)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1347)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:495)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$withHiveState$1.apply(ClientWrapper.scala:290)
at org.apache.spark.sql.hive.client.ClientWrapper.liftedTree1$1(ClientWrapper.scala:237)
at org.apache.spark.sql.hive.client.ClientWrapper.retryLocked(ClientWrapper.scala:236)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:279)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:484)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:474)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:613)
at org.apache.spark.sql.hive.execution.DropTable.run(commands.scala:89)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:52)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
at com.accenture.aa.dmah.spark.core.QueryExecutor.executeQuery(QueryExecutor.scala:35)
at com.accenture.aa.dmah.attribution.transformer.MulltipleUserJourneyTransformer.transform(MulltipleUserJourneyTransformer.scala:32)
at com.accenture.aa.dmah.attribution.userjourney.UserJourneyBuilder$$anonfun$buildUserJourney$1.apply$mcVI$sp(UserJourneyBuilder.scala:31)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at com.accenture.aa.dmah.attribution.userjourney.UserJourneyBuilder.buildUserJourney(UserJourneyBuilder.scala:29)
at com.accenture.aa.dmah.attribution.core.AttributionHub.executeAttribution(AttributionHub.scala:47)
at com.accenture.aa.dmah.attribution.jobs.AttributionJob.process(AttributionJob.scala:33)
at com.accenture.aa.dmah.core.DMAHJob.processJob(DMAHJob.scala:73)
at com.accenture.aa.dmah.core.DMAHJob.execute(DMAHJob.scala:27)
at com.accenture.aa.dmah.core.JobRunner.<init>(JobRunner.scala:17)
at com.accenture.aa.dmah.core.ApplicationInstance.initilize(ApplicationInstance.scala:48)
at com.accenture.aa.dmah.core.Bootstrap.boot(Bootstrap.scala:112)
at com.accenture.aa.dmah.core.BootstrapObj$.main(Bootstrap.scala:134)
at com.accenture.aa.dmah.core.BootstrapObj.main(Bootstrap.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:71)
at scala.tools.nsc.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:139)
at scala.tools.nsc.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:71)
at scala.tools.nsc.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:139)
at scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:28)
at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:45)
at scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:35)
at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:45)
at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:74)
at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:96)
at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:105)
at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
I saw there have been similar posts here and here, but they haven't had any response so far.
I have also looked here, but I don't think that's a valid course of action in my case.
What's intriguing is that this happens specifically when we use a DROP TABLE (or DROP TABLE IF EXISTS) query.
Hoping to find a resolution for the same.
To my knowledge, the above error occurs when the same class, 'javax.ws.rs.ext.RuntimeDelegate', is found in different JARs on the classpath. Class objects are created and cast at run time, so there is every possibility that the code path triggering the DROP syntax uses this class and breaks because it is found more than once on the classpath.
I have tried DROP and DROP IF EXISTS on CDH 5 and both worked without issue; below are the details of my runs:
First run: Hadoop 2.6, Hive 1.1.0 and Spark 1.3.1 (Hive libraries included in the Spark lib directory)
Second run: Hadoop 2.6, Hive 1.1.0 and Spark 1.6.1
Mode of run: CLI
scala> sqlContext.sql("DROP TABLE SAMPLE");
16/08/04 11:31:39 INFO parse.ParseDriver: Parsing command: DROP TABLE SAMPLE
16/08/04 11:31:39 INFO parse.ParseDriver: Parse Completed
......
scala> sqlContext.sql("DROP TABLE IF EXISTS SAMPLE");
16/08/04 11:40:34 INFO parse.ParseDriver: Parsing command: DROP TABLE IF EXISTS SAMPLE
16/08/04 11:40:35 INFO parse.ParseDriver: Parse Completed
.....
If possible, please validate the DROP commands using a different version of the Spark lib to narrow down the problem scope.
Meanwhile, I am analyzing the JARs to find the linkage where two occurrences of the same class 'RuntimeDelegate' exist. I will report back to check whether removing one of the JARs fixes the issue, and whether adding it back reproduces it.

SerDe problems with Hive 0.12 and Hadoop 2.2.0-cdh5.0.0-beta2

The title is a bit weird, as I'm having difficulties narrowing down the problem. I used my solution on Hadoop 2.0.0-cdh4.4.0 and Hive 0.10 without issues.
I can't create a table using this SerDe: https://github.com/rcongiu/Hive-JSON-Serde
first try:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. org.apache.hadoop.hive.serde2.objectinspector.primitive.AbstractPrimitiveJavaObjectInspector.<init>(Lorg/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorUtils$PrimitiveTypeEntry;)V
second try:
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Could not initialize class org.openx.data.jsonserde.objectinspector.JsonObjectInspectorFactory
I can create a table with this SerDe: https://github.com/cloudera/cdh-twitter-example
I create an external table with tweets from Flume. I can't do "SELECT * FROM tweets;":
FAILED: RuntimeException org.apache.hadoop.hive.ql.metadata.HiveException: Failed with exception java.lang.ClassNotFoundException: com.cloudera.hive.serde.JSONSerDejava.lang.RuntimeException: java.lang.ClassNotFoundException: com.cloudera.hive.serde.JSONSerDe
I can do SELECT id, text FROM tweets;
I can do a SELECT COUNT(*) FROM tweets;
I can't self-join this table:
Execution log at: /tmp/jochen.debie/jochen.debie_20140311121313_164611a9-b0d8-4e53-9bda-f9f7ac342aaf.log
2014-03-11 12:13:30 Starting to launch local task to process map join; maximum memory = 257294336
Execution failed with exit status: 2
Obtaining error information
Task failed!
Task ID:
Stage-5
mentioned execution log:
2014-03-11 12:13:30,331 ERROR mr.MapredLocalTask (MapredLocalTask.java:executeFromChildJVM(324)) - Hive Runtime Error: Map local work failed
org.apache.hadoop.hive.ql.metadata.HiveException: Failed with exception java.lang.ClassNotFoundException: com.cloudera.hive.serde.JSONSerDejava.lang.RuntimeException: java.lang.ClassNotFoundException: com.cloudera.hive.serde.JSONSerDe
Does anyone know how to fix this or at least show me where the problem is?
EDIT: Can it be a problem that I built the SerDe against Hadoop 2.0.0-cdh4.4.0 and Hive 0.10?
From what I've seen, Hive 0.11+ has a bug in joins with a custom SerDe.
https://github.com/Esri/gis-tools-for-hadoop/issues/9
You might try the workaround of copying the JAR file containing the SerDe class to $HIVE_HOME/lib.
(I see in your question that you got ClassNotFoundException both in the join and in other cases; so far, the times I have encountered this were all with joins.)
[Edit] Another workaround is to use HADOOP_CLASSPATH:
env HADOOP_CLASSPATH=some.jar:other.jar hive ...
[Edit] The workaround applies to Hive versions 0.11 and 0.12; 0.13 and above contain the fix for HIVE-6670.
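For context, the usual per-session way to register a custom SerDe is ADD JAR; the sketch below uses a hypothetical JAR path. The point of the workarounds above is that on Hive 0.11/0.12 this session-level registration was not reliably visible to the local task spawned for a map join (note the "launch local task to process map join" line in the execution log above), which is why copying the JAR to $HIVE_HOME/lib or using HADOOP_CLASSPATH was needed before the HIVE-6670 fix:
-- Session-level registration of the SerDe JAR (path is hypothetical)
ADD JAR /path/to/hive-json-serde.jar;
-- Simple selects work with session-level registration...
SELECT id, text FROM tweets LIMIT 10;
-- ...but on Hive 0.11/0.12 a self join that spawns a local map-join task
-- could still fail with ClassNotFoundException for the SerDe class.
SELECT a.id FROM tweets a JOIN tweets b ON a.id = b.id LIMIT 10;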
