With HDP-2.5 on Ubuntu-14.04, I am trying to import raw CSV data into Hive using Kite SDK version 1.1.0. Running this command:
$ ./kite-dataset csv-import ./test.csv test_schema
fails with the following I/O error:
1 job failure(s) occurred: org.kitesdk.tools.CopyTask:
Kite(dataset:file:/tmp/444e6fc4-10e2-407d-afaf-723c408a6d... ID=1
(1/1)(1): java.io.FileNotFoundException: File
file:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:624)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:850)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:614)
at org.apache.hadoop.fs.DelegateToFileSystem.getFileStatus(DelegateToFileSystem.java:125)
at org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:468)
at org.apache.hadoop.fs.FilterFs.resolvePath(FilterFs.java:158)
at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2195)
at org.apache.hadoop.fs.FileContext$25.next(FileContext.java:2191)
at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
at org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2191)
at org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:603)
at org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:457)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchControlledJob.submit(CrunchControlledJob.java:329)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.startReadyJobs(CrunchJobControl.java:204)
at org.apache.crunch.hadoop.mapreduce.lib.jobcontrol.CrunchJobControl.pollJobStatusAndStartNewOnes(CrunchJobControl.java:238)
at org.apache.crunch.impl.mr.exec.MRExecutor.monitorLoop(MRExecutor.java:112)
at org.apache.crunch.impl.mr.exec.MRExecutor.access$000(MRExecutor.java:55)
at org.apache.crunch.impl.mr.exec.MRExecutor$1.run(MRExecutor.java:83)
at java.lang.Thread.run(Thread.java:745)
I've checked that the file "hdfs:/hdp/apps/2.5.0.0-1245/mapreduce/mapreduce.tar.gz"
exists, but I haven't been able to figure out how to resolve this error for quite a while now.
Any help is greatly appreciated.
I was experiencing the same error, and I resolved it by creating /hdp/apps/2.5.0.0-1245/mapreduce and then running:
cp /usr/hdp/current/hadoop-client/mapreduce.tar.gz /hdp/apps/2.5.0.0-1245/mapreduce
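Spelled out, the steps were roughly the following (a sketch from my environment; the sudo, the HDP version string 2.5.0.0-1245, and the hadoop-client path may differ on yours):
# The error says file:/..., i.e. the job submitter is resolving the path on
# the local filesystem, so create the directory locally and copy the archive:
sudo mkdir -p /hdp/apps/2.5.0.0-1245/mapreduce
sudo cp /usr/hdp/current/hadoop-client/mapreduce.tar.gz /hdp/apps/2.5.0.0-1245/mapreduce/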
This then produced a new error: org.kitesdk.tools.CopyTask: Kite(dataset:file:/tmp/413a41a2-8813-4056-9433-3c5e073d80... ID=1 (1/1)(1): java.io.FileNotFoundException: File does not exist: hdfs://sandbox.hortonworks.com:8020/tmp/crunch-283520469/p1/REDUCE
Which I'm still trying to troubleshoot.
I think you are getting this error because you are using Kite SDK version 1.1.0. I got a similar error when doing a csv-import, and when I switched to Kite SDK version 1.0.0 there was no such error.
I would suggest switching to Kite SDK version 1.0.0.
Moreover, there has been no release of the Kite SDK after 1.1.0, and even that release dates back to June 2015.
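For reference, this is roughly how you can fetch the 1.0.0 CLI; a sketch that assumes the binary artifact is still published on Maven Central under the kite-tools coordinates used in the Kite install docs, so verify the URL first:
# Download the self-executing CLI jar and make it runnable
curl -L https://repo1.maven.org/maven2/org/kitesdk/kite-tools/1.0.0/kite-tools-1.0.0-binary.jar -o kite-dataset
chmod +x kite-dataset
./kite-dataset help   # sanity check before re-running the csv-import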
Related
I am trying to run Liquibase to create a DB schema and tables in GCP.
The error below comes up; any idea why?
Class [org.hibernate.boot.jaxb.hbm.spi.package-info] could not be found.
Processing bindings will probably fail.
java.lang.ClassNotFoundException: org.hibernate.boot.jaxb.hbm.spi.package-info
at org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy.loadClass (SelfFirstStrategy.java:50)
at org.codehaus.plexus.classworlds.realm.ClassRealm.unsynchronizedLoadClass (ClassRealm.java:271)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass (ClassRealm.java:247)
at org.codehaus.plexus.classworlds.realm.ClassRealm.loadClass (ClassRealm.java:239)
at java.lang.Class.forName0 (Native Method)
at java.lang.Class.forName (Class.java:315)
I had to downgrade my Liquibase version to 3.6.3 (as suggested by #Jens). Now I am no longer getting the error.
Stack I use: Ubuntu 18.04 / OpenJDK 11
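In case it helps someone: if you run Liquibase through the Maven plugin (which the plexus-classworlds frames in the stack trace suggest), pinning the plugin version was enough for me. A minimal sketch with placeholder connection settings:
# Run the update goal with an explicitly pinned plugin version
mvn org.liquibase:liquibase-maven-plugin:3.6.3:update \
  -Dliquibase.changeLogFile=db/changelog.xml \
  -Dliquibase.url="jdbc:postgresql://<host>:5432/<db>" \
  -Dliquibase.username=<user> -Dliquibase.password=<password>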
We had the same issue, but it was only a warning and not a blocker for us. To remove the warning, we upgraded Liquibase to version 4.7.1, and the warning is gone.
It seems like JDeveloper doesn't work with macOS Sierra. I tried to reinstall it, but that didn't help. It just doesn't launch.
Is there some workaround to solve the problem?
I also tried version 12.2.1.0; the result is the same.
Update: setting the Java home in product.conf doesn't help either.
Oracle JDeveloper 12c Development Build 12.2.1.1.0
Copyright (c) 1997, 2016, Oracle and/or its affiliates. All rights reserved.
java.lang.RuntimeException: Exception in org.eclipse.osgi.framework.internal.core.SystemBundleActivator.start() of bundle org.eclipse.osgi.
at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.resume(InternalSystemBundle.java:233)
at org.eclipse.osgi.framework.internal.core.Framework.launch(Framework.java:664)
at org.eclipse.osgi.framework.internal.core.EquinoxLauncher.internalInit(EquinoxLauncher.java:69)
at org.eclipse.osgi.framework.internal.core.EquinoxLauncher.init(EquinoxLauncher.java:37)
at org.eclipse.osgi.launch.Equinox.init(Equinox.java:178)
at org.netbeans.modules.netbinox.Netbinox.init(Netbinox.java:84)
at org.netbeans.core.netigso.Netigso.prepare(Netigso.java:167)
at org.netbeans.NetigsoHandle.turnOn(NetigsoHandle.java:138)
at org.netbeans.ModuleManager.enable(ModuleManager.java:1346)
at org.netbeans.ModuleManager.enable(ModuleManager.java:1163)
at org.netbeans.core.startup.ModuleList.installNew(ModuleList.java:340)
at org.netbeans.core.startup.ModuleList.trigger(ModuleList.java:276)
at org.netbeans.core.startup.ModuleSystem.restore(ModuleSystem.java:301)
at org.netbeans.core.startup.Main.getModuleSystem(Main.java:181)
at org.netbeans.core.startup.Main.getModuleSystem(Main.java:150)
at org.netbeans.core.startup.Main.start(Main.java:307)
at org.netbeans.core.startup.TopThreadGroup.run(TopThreadGroup.java:123)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.osgi.framework.BundleException: Exception in org.eclipse.osgi.framework.internal.core.SystemBundleActivator.start() of bundle org.eclipse.osgi.
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:734)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.start(BundleContextImpl.java:683)
at org.eclipse.osgi.framework.internal.core.InternalSystemBundle.resume(InternalSystemBundle.java:225)
... 17 more
Caused by: java.lang.NumberFormatException: For input string: "2.0"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Integer.parseInt(Integer.java:580)
at java.lang.Integer.parseInt(Integer.java:615)
at org.eclipse.osgi.internal.resolver.StateBuilder.createBundleDescription(StateBuilder.java:61)
at org.eclipse.osgi.internal.resolver.StateObjectFactoryImpl.createBundleDescription(StateObjectFactoryImpl.java:33)
at org.eclipse.osgi.internal.baseadaptor.BaseStorage.readStateData(BaseStorage.java:853)
at org.eclipse.osgi.internal.baseadaptor.BaseStorage.getStateManager(BaseStorage.java:799)
at org.eclipse.osgi.baseadaptor.BaseAdaptor.getState(BaseAdaptor.java:387)
at org.eclipse.osgi.internal.baseadaptor.BaseStorage.frameworkStart(BaseStorage.java:923)
at org.eclipse.osgi.baseadaptor.BaseAdaptor.frameworkStart(BaseAdaptor.java:250)
at org.eclipse.osgi.framework.internal.core.SystemBundleActivator.start(SystemBundleActivator.java:60)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl$1.run(BundleContextImpl.java:711)
at java.security.AccessController.doPrivileged(Native Method)
at org.eclipse.osgi.framework.internal.core.BundleContextImpl.startActivator(BundleContextImpl.java:702)
... 19 more
Which Java version do you have installed (java -version)?
Try removing the systemxxx directory (usually under the .jdeveloper directory) and restarting JDeveloper.
It works fine for me on El Capitan.
If you did an OS update, the Java home directory probably changed. Check the product.conf file in your .jdeveloper directory and make sure the path to the JDK is correct.
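For reference, the directive JDeveloper reads there is SetJavaHome; a sketch of what the entry might look like (the JDK path is an example from my machine, adjust it to yours):
# ~/.jdeveloper/product.conf (the exact location varies by JDeveloper version)
SetJavaHome /Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home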
You only need to remove the system_cache folder, under the path <USER_HOME>/.jdeveloper/system12.2.x.x.x.x.x/system_cache.
Please refer to https://migrmrz.wordpress.com/2016/08/29/jdeveloper-12c-wont-start-in-mac-os-x-el-capitan/
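In other words, something like this (a sketch; substitute the actual versioned directory name from your install):
# Remove only the cache, not the whole system directory
rm -rf ~/.jdeveloper/system12.2.x.x.x.x.x/system_cache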
I am trying to execute the command below on Hadoop:
hadoop fs -ls /
but it is returning the following error:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/hadoop/fs/FsShell : Unsupported major.minor version 51.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClassCond(ClassLoader.java:632)
at java.lang.ClassLoader.defineClass(ClassLoader.java:616)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:283)
at java.net.URLClassLoader.access$000(URLClassLoader.java:58)
at java.net.URLClassLoader$1.run(URLClassLoader.java:197)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
Could not find the main class: org.apache.hadoop.fs.FsShell. Program will exit.
I have tried updating Java, but it still gives me the same error.
Note: the same command works on the other nodes, but not on 2 of the cluster nodes.
Try updating the JDK to version 1.7. Perhaps you have updated the JRE, not the JDK.
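A quick way to check that (a hedged sketch; the hadoop script launches $JAVA_HOME/bin/java, so check that variable too):
java -version                  # version resolved from the PATH
javac -version                 # JDK compiler; missing or older output means only the JRE was updated
echo $JAVA_HOME && $JAVA_HOME/bin/java -version   # what the hadoop script will actually use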
If you use CDH, you may need to change the Java version to jdk1.7.0_67-cloudera. After I changed JAVA_HOME from /usr/java/jdk1.6.0_31 to /usr/java/jdk1.7.0_67-cloudera, the problem was solved.
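Concretely, the change was something like the following (a sketch; the jdk1.7.0_67-cloudera directory comes from the Cloudera-packaged JDK and its path may differ on your nodes):
# e.g. in /etc/hadoop/conf/hadoop-env.sh or your shell profile
export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera
hadoop fs -ls /   # re-run the failing command to verify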
Unsupported major.minor version 51.0
You need Java 7 (or higher) to run this.
See: https://www.java.com/de/download/faq/java_7.xml
If you have already installed Java 7 (or higher), then execute it with:
C:\Program Files\Java\<Java version (jre7/jre8)>\bin\java.exe -jar <path to .jar>
The command was not working because it was picking up a newer version of the Hadoop jars available on the nodes instead of the jars of the installed Hadoop version.
It was pointing to the jars placed in
/usr/lib/hadoop
Then I tried to execute it from the installation directory, as below:
/opt/cloudera/parcels/CDH/lib/hadoop/bin/hadoop fs -ls /
It worked for me.
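To confirm which hadoop binary (and therefore which jars) your shell resolves, a quick check along these lines may help (a sketch; the parcel path matches the one above):
which hadoop    # the binary your PATH picks up, e.g. something under /usr/lib/hadoop
/opt/cloudera/parcels/CDH/lib/hadoop/bin/hadoop version   # the version shipped in the parcel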
I installed and configured Apache Hive 1.2.1 a long time ago, and it worked fine. Recently I installed Apache Spark 2.7.0 and started using its shells. Now, when I want to work with Hive again, it doesn't start. It shows the following error:
Exception in thread "main" java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log(Lorg/slf4j/Marker;Ljava/lang/String;ILjava/lang/String;[Ljava/lang/Object;Ljava/lang/Throwable;)V
at org.apache.commons.logging.impl.SLF4JLocationAwareLog.debug(SLF4JLocationAwareLog.java:133)
at org.apache.hadoop.hive.common.LogUtils.logConfigLocation(LogUtils.java:147)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jDefault(LogUtils.java:128)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:77)
at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:58)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:637)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
I tried reinstalling Hive, but the same error persists. Is this error due to installing Spark? How can I run Hive normally again?
It seems that you have a conflict in your logging libraries. This question could help you: java.lang.NoSuchMethodError: org.slf4j.spi.LocationAwareLogger.log
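To locate the conflicting jars, a diagnostic along these lines may help (a sketch; it assumes HIVE_HOME and SPARK_HOME are set, and that Spark 2.x keeps its jars under $SPARK_HOME/jars, older releases under lib):
# Look for duplicate or mismatched slf4j jars across the two installs
find "$HIVE_HOME/lib" "$SPARK_HOME/jars" -name 'slf4j*.jar' 2>/dev/null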
We have an Oozie workflow with simple "create" and "alter" statements in a Hive action, the "create" statement using the "RCFILE" file format.
The challenge we are facing is that this Hive action sometimes executes successfully and sometimes fails; we haven't been able to fix this.
It throws a "NoSuchMethodError" exception related to "serde".
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.HiveMain], main() threw exception, org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/typeinfo/TypeInfo;
java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/typeinfo/TypeInfo;
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerNumericType(FunctionRegistry.java:630)
at org.apache.hadoop.hive.ql.exec.FunctionRegistry.<clinit>(FunctionRegistry.java:636)
at org.apache.hadoop.hive.ql.session.SessionState.<init>(SessionState.java:208)
at org.apache.hadoop.hive.cli.CliSessionState.<init>(CliSessionState.java:78)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:645)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at org.apache.oozie.action.hadoop.HiveMain.runHive(HiveMain.java:318)
at org.apache.oozie.action.hadoop.HiveMain.run(HiveMain.java:279)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:38)
at org.apache.oozie.action.hadoop.HiveMain.main(HiveMain.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:226)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Can someone help me fix this?
I had the same problem. It was in a different context, but I'm sure the underlying issue is the same. The return type of the org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getPrimitiveTypeInfo(String) method was narrowed from TypeInfo in prior versions of Hive to PrimitiveTypeInfo in Hive 0.13, which broke Java binary compatibility.
It seems that org.apache.hadoop.hive.ql.exec.FunctionRegistry was compiled against a pre-0.13 hive-serde-x.x.x.jar, but the execution classpath includes a newer hive-serde-0.13.x.jar.
The solution was to make sure that hive-serde-n.n.n.jar and hive-exec-n.n.n.jar (or wherever org.apache.hadoop.hive.ql.exec.FunctionRegistry comes from in your case) have consistent versions on the classpath.
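A quick way to spot such a mismatch in an Oozie setup (a hedged sketch; /user/oozie/share/lib is the default sharelib location and the local Hive path may differ):
# Jars the Hive action actually sees come from the Oozie sharelib on HDFS:
hdfs dfs -ls /user/oozie/share/lib/*/hive/ | grep -E 'hive-(serde|exec)'
# Compare with the jars of the local Hive install:
ls /usr/lib/hive/lib/ | grep -E 'hive-(serde|exec)'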