Hive crashing with java.lang.IncompatibleClassChangeError - hadoop

Running Hive 3.1.1 against Hadoop 3.2.0 crashes when executing 'select * from employee' with:
java.lang.IncompatibleClassChangeError: Class com.google.common.collect.ImmutableSortedMap does not implement the requested interface java.util.NavigableMap
Commands like show tables all run fine, and data is loaded OK from the CLI as well; I checked various other commands with the same result. The setup uses MySQL as the metastore, with MySQL-connector-java-5.1.47.jar. The only other observation is that I sometimes get
WARN DataNucleus.MetaData: Metadata has jdbc-type of null yet this is not valid. Ignored
which other people seem to get as well and which does not seem to be the cause here.
Has anybody seen this as well? Help greatly appreciated ...
2019-04-02 16:24:41,643 INFO metastore.HiveMetaStore: 0: Done cleaning up thread local RawStore
2019-04-02 16:24:41,645 INFO HiveMetaStore.audit: ugi=fdai0145 ip=unknown-ip-addr cmd=Done cleaning up thread local RawStore
Exception in thread "main" java.lang.IncompatibleClassChangeError: Class com.google.common.collect.ImmutableSortedMap does not implement the requested interface java.util.NavigableMap
at org.apache.calcite.schema.Schemas.gatherLattices(Schemas.java:498)
at org.apache.calcite.schema.Schemas.getLatticeEntries(Schemas.java:492)
at org.apache.calcite.jdbc.CalciteConnectionImpl.init(CalciteConnectionImpl.java:153)
at org.apache.calcite.jdbc.Driver$1.onConnectionInit(Driver.java:109)
at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:139)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:150)
at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:111)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1414)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1430)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:450)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12161)
at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:330)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:285)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:659)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1826)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1773)
at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1768)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Perhaps it's a late answer, but I ran into the same issue. In my case, I found that the hive-exec Maven artifact's jar bundles (shades) the Google collections framework. Since other Hadoop/Hive artifacts also use Google Guava (version 11, if I'm not mistaken), there's a good chance that Calcite will find the wrong class definition for ImmutableSortedMap (the one from Guava 11).
For me, excluding Guava from the Hadoop/Hive artifacts that my code uses made Calcite find the correct class version from Google collections:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-minicluster</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-exec</artifactId>
  <version>${hive.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>${hive.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
This is probably an issue that should be reported to the Hive project, since these kinds of classpath collision errors are hard to diagnose. Internally shaded classes should be relocated under the project's own package prefix to make the shading of the external code explicit.
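For illustration, such a relocation with the maven-shade-plugin would look roughly like this (a sketch of the general technique, not Hive's actual build setup; the org.apache.hive.shaded prefix is hypothetical):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <!-- move the bundled Guava classes under the project's own prefix
               so they cannot collide with another Guava on the classpath -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hive.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>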
Oh well. Hope this helps.

Related

Databricks local test fail with java.lang.NoSuchMethodError: org.apache.hadoop.security.HadoopKerberosName.setRuleMechanism

I have a unit test for Databricks code, and I want to run it locally on Windows. Unfortunately, when I run pytest from PyCharm, it throws the following exception:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.security.HadoopKerberosName.setRuleMechanism(Ljava/lang/String;)V
at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:84)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:315)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:300)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:575)
at org.apache.spark.util.Utils$.$anonfun$getCurrentUserName$1(Utils.scala:2747)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2747)
at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:368)
at org.apache.spark.deploy.SparkSubmit.secMgr$1(SparkSubmit.scala:368)
at org.apache.spark.deploy.SparkSubmit.$anonfun$prepareSubmitEnvironment$8(SparkSubmit.scala:376)
at scala.Option.map(Option.scala:230)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:376)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1039)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1048)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
In the source code, the error comes from this initialization:
spark = SparkSession.builder \
    .master("local[2]") \
    .appName("Helper Functions Unit Testing") \
    .getOrCreate()
I searched for the above error, and most of the results relate to Maven configuration, adding a dependency on hadoop-auth. For PySpark, however, I don't know how to deal with it. Does anyone have experience with or insight into this error?
My workaround was to pin the Python version to 3.7 and change the PySpark version to 3.0, and then it seems OK, so the problem is an inconsistency between the environment and the dependencies.
This is limited to my case; from my search on the web, most fixes are Maven-based, adding a hadoop-auth jar dependency to the Hadoop configuration.
I encountered this error in a Maven project written in Scala, not Python. What did it for me was adding not only the hadoop-auth dependency, as the OP mentioned, but also the hadoop-common dependency to my POM file, like so:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.1.2</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-auth</artifactId>
  <version>3.1.2</version>
</dependency>
Replace 3.1.2 with whatever version you're using. However, I also found that I had to track down other dependencies that conflicted with hadoop-common and hadoop-auth and add exclusions to them (inside each conflicting dependency's declaration), like so:
<exclusions>
  <exclusion>
    <artifactId>hadoop-common</artifactId>
    <groupId>org.apache.hadoop</groupId>
  </exclusion>
  <exclusion>
    <artifactId>hadoop-auth</artifactId>
    <groupId>org.apache.hadoop</groupId>
  </exclusion>
</exclusions>
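If hunting down individual exclusions gets tedious, another option (a sketch, reusing the 3.1.2 version from above) is to pin the two artifacts once in dependencyManagement, so every transitive reference resolves to the same version:
<dependencyManagement>
  <dependencies>
    <!-- force a single Hadoop version for all transitive references -->
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>3.1.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-auth</artifactId>
      <version>3.1.2</version>
    </dependency>
  </dependencies>
</dependencyManagement>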

Edit YARN's classpath in Oozie

I am trying to run a Hadoop job through Oozie. The job uploads data to DynamoDB in AWS, so I use AmazonDynamoDBClient. I get the following exception in the reducers:
2016-06-14 10:30:52,997 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z
at com.fasterxml.jackson.databind.ObjectMapper.<init>(ObjectMapper.java:458)
at com.fasterxml.jackson.databind.ObjectMapper.<init>(ObjectMapper.java:379)
at com.amazonaws.util.json.Jackson.<clinit>(Jackson.java:32)
at com.amazonaws.internal.config.InternalConfig.loadfrom(InternalConfig.java:233)
at com.amazonaws.internal.config.InternalConfig.load(InternalConfig.java:251)
at com.amazonaws.internal.config.InternalConfig$Factory.<clinit>(InternalConfig.java:308)
at com.amazonaws.util.VersionInfoUtils.userAgent(VersionInfoUtils.java:139)
at com.amazonaws.util.VersionInfoUtils.initializeUserAgent(VersionInfoUtils.java:134)
at com.amazonaws.util.VersionInfoUtils.getUserAgent(VersionInfoUtils.java:95)
at com.amazonaws.ClientConfiguration.<clinit>(ClientConfiguration.java:42)
at com.amazonaws.PredefinedClientConfigurations.dynamoDefault(PredefinedClientConfigurations.java:38)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.<init>(AmazonDynamoDBClient.java:292)
at com.mypackage.UploadDataToDynamoDBMR$DataUploaderReducer.setup(UploadDataToDynamoDBMR.java:396)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:168)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
I use a fat jar which packages all dependencies, and I copied the jar to Oozie's lib directory.
I have also used dependency management in the POM to pin the FasterXML Jackson dependency to 2.4.1 (the version used by the AWS DynamoDB SDK). However, when execution happens on the reducers, some other version of FasterXML Jackson somehow appears first on the classpath (or so I believe).
I also excluded the Jackson dependency from the DynamoDB and AWS SDKs:
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-dynamodb</artifactId>
  <version>1.10.11</version>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-core</artifactId>
  <version>1.10.11</version>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
How can I make sure that my jar is the first one on the classpath in the mappers and reducers? I tried the suggestion on this page and added the following property to the job's configuration XML:
<property>
  <name>oozie.launcher.mapreduce.user.classpath.first</name>
  <value>true</value>
</property>
But this did not help.
Any suggestions?
Have you copied your jar into the lib folder next to the workflow.xml, or into the sharelib?
Check what version of Jackson your Hadoop distribution is using and try to use that version of Jackson everywhere. It might also be worth checking that no other Jackson jars are on the classpath.
From the exception, it looks like Hadoop tries to call the method
com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering
This method was introduced in Jackson 2.3, so an even older version of Jackson is probably in there somewhere.
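As an illustration of pinning one Jackson version everywhere (a sketch; 2.4.1 is the version already pinned in the POM above, and you should substitute whatever version your distribution actually ships), all three core Jackson artifacts can be fixed in dependencyManagement:
<dependencyManagement>
  <dependencies>
    <!-- keep core, databind and annotations on exactly the same version -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.4.1</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.4.1</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>2.4.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>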

Spring boot and apache spark - container conflict

I am trying to use Spring Boot 1.1.5 and Apache Spark 1.0.2 together in a project. It looks like Apache Spark uses a Jetty container internally, while I have configured Spring Boot to use a Tomcat container. However, application startup fails with a SecurityException as the root cause. The full stack trace suggests that Spring Boot is trying to initialize a "jettyEmbeddedServletContainerFactory", which it shouldn't in the first place; it probably picks Jetty up from the classpath because Spark brings it in. If I exclude Jetty from Spark and run again, I don't see the same error anymore, but then SparkContext initialization fails because Jetty is missing. How do I tell the Spring Boot runtime to look for a "TomcatEmbeddedServletContainerFactory" instead of the Jetty one?
I got "java.lang.SecurityException: class "javax.servlet.http.HttpSessionIdListener"'s signer information does not match signer information of other classes in the same package".
To fix this issue I needed to remove all conflicting javax.servlet dependencies:
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.3.1</version>
  <exclusions>
    <exclusion>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.glassfish</groupId>
      <artifactId>javax.servlet</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.eclipse.jetty.orbit</groupId>
      <artifactId>javax.servlet</artifactId>
    </exclusion>
  </exclusions>
</dependency>
@Joakim Erdfelt, thanks.
I was just waiting to see if someone is familiar with this situation and whether it's just a small configuration change. As it turns out, it is!
@Configuration
@EnableAutoConfiguration(exclude={EmbeddedServletContainerFactory.class})
public class MyConfiguration { }
I defined my own EmbeddedServletContainerFactory bean as an org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory, and it started working as I expected.
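If you happen to use XML bean configuration instead of Java config, the equivalent declaration would look roughly like this (my own sketch, not from the original answer; it assumes the file is pulled in via @ImportResource):
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
  <!-- explicitly declare the Tomcat factory so Boot does not pick Jetty -->
  <bean id="embeddedServletContainerFactory"
        class="org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory" />
</beans>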

How to find which scala version needs to be used for each kafka release

Currently we are using Kafka 0.7.2 (https://clojars.org/org.clojars.paul/core-kafka_2.8.0) with Scala 2.8.0 in production.
We were excited to see Kafka 0.8.1.1, so we followed the producer (https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example) and consumer (https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example) example code.
This code led to an error about a missing scala.serializer class, so I searched and found that Scala 2.10.3 (Eclipse scala.object cannot be resolved) should solve this issue.
When running the code again, I got the following error:
[WARNING]
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:293)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.lang.NoClassDefFoundError: scala/reflect/ClassManifest
at kafka.utils.Log4jController$.<init>(Log4jController.scala:29)
at kafka.utils.Log4jController$.<clinit>(Log4jController.scala)
at kafka.utils.Logging$class.$init$(Logging.scala:29)
will linger despite being asked to die via interruption
[WARNING] NOTE: 3 thread(s) did not finish despite being asked to via interruption. This is not a problem with exec:java, it is a problem with the running code. Although not serious, it should be remedied.
[WARNING] Couldn't destroy threadgroup org.codehaus.mojo.exec.ExecJavaMojo$IsolatedThreadGroup[name=packageName,maxpri=10]
java.lang.IllegalThreadStateException
at java.lang.ThreadGroup.destroy(ThreadGroup.java:775)
at org.codehaus.mojo.exec.ExecJavaMojo.execute(ExecJavaMojo.java:328)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
I don't know why this error occurred. I changed my pom.xml to:
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.1.1</version>
  <scope>compile</scope>
  <exclusions>
    <exclusion>
      <groupId>com.sun.jmx</groupId>
      <artifactId>jmxri</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.sun.jdmk</groupId>
      <artifactId>jmxtools</artifactId>
    </exclusion>
    <exclusion>
      <groupId>javax.jms</groupId>
      <artifactId>jms</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-library</artifactId>
  <version>2.10.3</version>
</dependency>
I also read on one site that Kafka 0.8.1.1 supports different versions of Scala, but I don't know why it did not work with this Scala version (2.10.3).
How can we find out, for a given Kafka version x, which Scala version y will work?
I also noticed that the gradle.properties of kafka-0.8.1.1 contains:
group=org.apache.kafka
version=0.8.1.1
scalaVersion=2.8.0
task=build
What is the scalaVersion of 2.8.0 mentioned here meant for?
Thanks in advance
I removed the explicit Scala dependency, since the Kafka artifact pulls in Scala internally. Now it works properly, after also changing some of the ZooKeeper connection property names.
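On the "which Scala version for which Kafka release" question: for the official org.apache.kafka artifacts, the Scala version the jar was built against is encoded in the artifactId suffix. If you declare scala-library explicitly at all, its version has to match that suffix. A sketch:
<!-- kafka_2.9.2 = Kafka 0.8.1.1 built against Scala 2.9.2 -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.1.1</version>
</dependency>
<!-- normally unnecessary (it arrives transitively); if declared, the version
     must match the _2.9.2 suffix above, not an unrelated one like 2.10.3 -->
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-library</artifactId>
  <version>2.9.2</version>
</dependency>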

Bean Validation with JBoss Errai

I want to build a GWT app with the Errai framework, but I am running into some problems with data binding and validation.
My pom.xml:
<dependency>
  <groupId>org.jboss.errai</groupId>
  <artifactId>errai-validation</artifactId>
  <version>${errai.version}</version>
</dependency>
<dependency>
  <groupId>javax.validation</groupId>
  <artifactId>validation-api</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>javax.validation</groupId>
  <artifactId>validation-api</artifactId>
  <classifier>sources</classifier>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>4.2.0.Final</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>4.2.0.Final</version>
  <scope>provided</scope>
  <classifier>sources</classifier>
</dependency>
My app.gwt.xml includes the Errai-Validation and HibernateValidator modules:
<inherits name="org.jboss.errai.validation.Validation" />
<inherits name="org.hibernate.validator.HibernateValidator" />
There are no unresolved dependencies; I have already double-checked this.
When I try to run the application with mvn gwt:run, I get the following error:
java.util.concurrent.ExecutionException: org.jboss.errai.ioc.rebind.ioc.exception.UnsatisfiedDependenciesException: #> org.jboss.errai.ui.nav.client.local.Navigation
- field org.jboss.errai.codegen.meta.MetaField:org.jboss.errai.ui.nav.client.local.Navigation.stateChangeEvent could not be satisfied for type: org.jboss.errai.ioc.client.lifecycle.api.StateChange
Message: can't resolve bean: org.jboss.errai.ioc.client.lifecycle.api.StateChange<java.lang.Object> ( #Default )
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:188)
at org.jboss.errai.config.rebind.AsyncGenerators$FutureWrapper.get(AsyncGenerators.java:112)
at org.jboss.errai.config.rebind.AsyncGenerators$FutureWrapper.get(AsyncGenerators.java:86)
at org.jboss.errai.config.rebind.AbstractAsyncGenerator.startAsyncGeneratorsAndWaitFor(AbstractAsyncGenerator.java:100)
at org.jboss.errai.ioc.rebind.ioc.bootstrapper.IOCGenerator.generate(IOCGenerator.java:58)
at com.google.gwt.core.ext.IncrementalGenerator.generateNonIncrementally(IncrementalGenerator.java:40)
at com.google.gwt.dev.javac.StandardGeneratorContext.runGeneratorIncrementally(StandardGeneratorContext.java:657)
at com.google.gwt.dev.cfg.RuleGenerateWith.realize(RuleGenerateWith.java:41)
at com.google.gwt.dev.shell.StandardRebindOracle$Rebinder.rebind(StandardRebindOracle.java:79)
at com.google.gwt.dev.shell.StandardRebindOracle.rebind(StandardRebindOracle.java:276)
at com.google.gwt.dev.shell.ShellModuleSpaceHost.rebind(ShellModuleSpaceHost.java:141)
at com.google.gwt.dev.shell.ModuleSpace.rebind(ModuleSpace.java:595)
at com.google.gwt.dev.shell.ModuleSpace.rebindAndCreate(ModuleSpace.java:465)
at com.google.gwt.dev.shell.GWTBridgeImpl.create(GWTBridgeImpl.java:49)
at com.google.gwt.core.shared.GWT.create(GWT.java:57)
at com.google.gwt.core.client.GWT.create(GWT.java:85)
at org.jboss.errai.ioc.client.Container.bootstrapContainer(Container.java:64)
at org.jboss.errai.ioc.client.Container.onModuleLoad(Container.java:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:406)
at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:200)
at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:526)
at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:364)
at java.lang.Thread.run(Thread.java:744)
That's why the bootstrap fails and the application throws an exception in onModuleLoad and does not start.
If I remove the two validation modules, I can start the application without any errors.
I'm using the Errai tutorial with version 3.0.1.Final.
Thanks for your help :)
EDIT:
I resolved the error by adding
<inherits name="org.jboss.errai.ui.nav.Navigation" />
to my app.gwt.xml but now I'm running into the next problem with this exception:
java.lang.RuntimeException: Deferred binding failed for 'org.jboss.errai.validation.client.ValidatorFactoryImpl$GwtValidator' (did you forget to inherit a required module?)
at com.google.gwt.dev.shell.GWTBridgeImpl.create(GWTBridgeImpl.java:53)
at com.google.gwt.core.shared.GWT.create(GWT.java:57)
at com.google.gwt.core.client.GWT.create(GWT.java:85)
at org.jboss.errai.validation.client.ValidatorFactoryImpl.createValidator(ValidatorFactoryImpl.java:11)
at com.google.gwt.validation.client.AbstractGwtValidatorFactory.getValidator(AbstractGwtValidatorFactory.java:90)
at org.jboss.errai.validation.client.ValidatorProvider.get(ValidatorProvider.java:37)
at org.jboss.errai.ioc.client.BootstrapperImpl$28.getInstance(BootstrapperImpl.java:432)
at org.jboss.errai.ioc.client.BootstrapperImpl$28.getInstance(BootstrapperImpl.java:1)
at org.jboss.errai.ioc.client.container.IOCDependentBean.getInstance(IOCDependentBean.java:96)
at org.jboss.errai.ioc.client.container.IOCDependentBean.getInstance(IOCDependentBean.java:87)
at org.jboss.errai.ioc.client.container.SyncToAsyncBeanManagerAdapter$1.getInstance(SyncToAsyncBeanManagerAdapter.java:148)
at org.jboss.errai.ui.nav.client.local.spi.GeneratedNavigationGraph$2.produceContent(GeneratedNavigationGraph.java:69)
at org.jboss.errai.ui.nav.client.local.Navigation.maybeShowPage(Navigation.java:304)
at org.jboss.errai.ui.nav.client.local.Navigation.navigate(Navigation.java:249)
at org.jboss.errai.ui.nav.client.local.Navigation.navigate(Navigation.java:230)
at org.jboss.errai.ui.nav.client.local.Navigation.navigate(Navigation.java:225)
at org.jboss.errai.ui.nav.client.local.Navigation.goTo(Navigation.java:191)
at org.jboss.errai.ui.nav.client.local.DefaultNavigationErrorHandler.handleError(DefaultNavigationErrorHandler.java:27)
at org.jboss.errai.ui.nav.client.local.Navigation.goTo(Navigation.java:193)
Is there another module that is missing?
Am I correct that Errai creates the ValidationFactory and injects the correct instance, so I don't have to create my own ValidationFactory as shown here:
GWT Validation Tutorial
Yes, that's correct. You don't have to create your own ValidationFactory; Errai will do that for you. You can simply @Inject a Validator.
I have prepared a version of the Errai tutorial using 3.0.1.Final that shows exactly that (following the instructions from the reference guide). I've put the project on GitHub.
The last error you pasted doesn't contain enough information to investigate why this is failing for you. However, you should see more error information in the devmode console.
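For anyone assembling this from the thread: after the fix, the app.gwt.xml ends up inheriting all three modules mentioned above (collected here for convenience):
<inherits name="org.jboss.errai.validation.Validation" />
<inherits name="org.hibernate.validator.HibernateValidator" />
<inherits name="org.jboss.errai.ui.nav.Navigation" />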
