I am trying to connect to my Accumulo instance remotely. I started a project with Maven and added all the needed libraries. In this code I am setting the connection parameters:
import java.util.HashMap;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;

public class App {
    public static void main(String[] argv) {
        // GeoMesa Accumulo data store connection parameters
        HashMap<String, String> parametres = new HashMap<>();
        parametres.put("accumulo.instance.id", "******");
        parametres.put("accumulo.zookeepers", "accumulo-do");
        parametres.put("accumulo.user", "root");
        parametres.put("accumulo.password", "****");
        parametres.put("accumulo.catalog", "*******");
        try {
            DataStore dataStore = DataStoreFinder.getDataStore(parametres);
            System.out.println("Success");
        } catch (Exception e) {
            System.out.println("Accumulo exception");
            System.out.println(e);
        }
    }
}
But when I run it, I get this error:
Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:355)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:370)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:363)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:116)
at org.apache.hadoop.security.Groups.<init>(Groups.java:93)
at org.apache.hadoop.security.Groups.<init>(Groups.java:73)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:293)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:283)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:260)
at org.apache.hadoop.security.UserGroupInformation.isAuthenticationMethodEnabled(UserGroupInformation.java:337)
at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:331)
at org.locationtech.geomesa.accumulo.data.AccumuloDataStore.liftedTree1$1(AccumuloDataStore.scala:66)
at org.locationtech.geomesa.accumulo.data.AccumuloDataStore.<init>(AccumuloDataStore.scala:65)
at org.locationtech.geomesa.accumulo.data.AccumuloDataStoreFactory.createDataStore(AccumuloDataStoreFactory.scala:50)
at org.locationtech.geomesa.accumulo.data.AccumuloDataStoreFactory.createDataStore(AccumuloDataStoreFactory.scala:37)
at org.geotools.data.DataAccessFinder.getDataStore(DataAccessFinder.java:130)
at org.geotools.data.DataStoreFinder.getDataStore(DataStoreFinder.java:89)
at test.App.main(App.java:48)
Can you tell me what the cause of this error is?
I am not using Hadoop on Windows; my Hadoop cluster is running on Linux.
How can I prevent this?
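One common workaround (a hedged sketch, not part of this thread): on Windows the Hadoop client's Shell class looks for %HADOOP_HOME%\bin\winutils.exe, or for the hadoop.home.dir system property, even when the cluster itself runs on Linux. Pointing hadoop.home.dir at a local folder that contains bin\winutils.exe before the first Hadoop class loads usually silences this IOException; the C:\hadoop path below is just a placeholder.

public static void main(String[] argv) {
    // Assumption: winutils.exe has been downloaded into C:\hadoop\bin (placeholder path).
    // This must run before DataStoreFinder.getDataStore triggers Hadoop's static initialization.
    System.setProperty("hadoop.home.dir", "C:\\hadoop");
    // ... the parameter map and DataStoreFinder call shown above ...
}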
I am trying to do MapReduce with the kite-dataset API.
I have followed the URLs below for reference:
https://community.cloudera.com/t5/Kite-SDK-includes-Morphlines/Map-Reduce-with-Kite/td-p/22165
https://github.com/kite-sdk/kite/blob/master/kite-data/kite-data-mapreduce/src/test/java/org/kitesdk/data/mapreduce/TestMapReduce.java
My code snippet is below:
import java.io.IOException;

import org.apache.avro.generic.GenericData;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.kitesdk.data.Dataset;
import org.kitesdk.data.Datasets;
import org.kitesdk.data.mapreduce.DatasetKeyInputFormat;
import org.kitesdk.data.mapreduce.DatasetKeyOutputFormat;

public class MapReduce {

    private static final String sourceDatasetURI = "dataset:hive:test_avro";
    private static final String destinationDatasetURI = "dataset:hive:test_parquet";

    private static class LineCountMapper
            extends Mapper<GenericData.Record, Void, Text, IntWritable> {
        @Override
        protected void map(GenericData.Record record, Void value, Context context)
                throws IOException, InterruptedException {
            System.out.println("Record is " + record);
            context.write(new Text(record.get("index").toString()), new IntWritable(1));
        }
    }

    private Job createJob() throws Exception {
        System.out.println("Inside Create Job");
        Job job = new Job();
        job.setJarByClass(getClass());
        Dataset<GenericData.Record> inputDataset = Datasets.load(sourceDatasetURI, GenericData.Record.class);
        Dataset<GenericData.Record> outputDataset = Datasets.load(destinationDatasetURI, GenericData.Record.class);
        DatasetKeyInputFormat.configure(job).readFrom(inputDataset).withType(GenericData.Record.class);
        job.setMapperClass(LineCountMapper.class);
        DatasetKeyOutputFormat.configure(job).writeTo(outputDataset).withType(GenericData.Record.class);
        job.waitForCompletion(true);
        return job;
    }

    public static void main(String[] args) throws Exception {
        MapReduce httAvroToParquet = new MapReduce();
        httAvroToParquet.createJob();
    }
}
I am using an HDP 2.3.2 box, creating an assembly jar and submitting my application through spark-submit.
I am getting the below error when I submit my application:
2015-12-18 04:09:07,156 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2015-12-18 04:09:07,282 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
2015-12-18 04:09:07,333 WARN [main] org.kitesdk.data.spi.Registration: Not loading URI patterns in org.kitesdk.data.spi.hive.Loader
2015-12-18 04:09:07,334 INFO [main] org.apache.hadoop.service.AbstractService: Service org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI: hive://{}:9083/default/test_parquet. Check that JARs for hive datasets are on the classpath.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI: hive://{}:9083/default/test_parquet. Check that JARs for hive datasets are on the classpath.
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:478)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:458)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1560)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:458)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:377)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$4.run(MRAppMaster.java:1518)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1515)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1448)
Caused by: org.kitesdk.data.DatasetNotFoundException: Unknown dataset URI: hive://{}:9083/default/test_parquet. Check that JARs for hive datasets are on the classpath.
at org.kitesdk.data.spi.Registration.lookupDatasetUri(Registration.java:109)
at org.kitesdk.data.Datasets.load(Datasets.java:103)
at org.kitesdk.data.Datasets.load(Datasets.java:165)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.load(DatasetKeyOutputFormat.java:510)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat.getOutputCommitter(DatasetKeyOutputFormat.java:473)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$1.call(MRAppMaster.java:476)
... 11 more
I don't see what's wrong. Is there a classpath problem? If yes, where do I set it?
You effectively have a classpath problem.
Your project is missing org.kitesdk:kite-data-hive.
You can either add this jar to your fat jar before submitting it to Spark, or tell Spark to add it to your classpath when you submit (see the sketch below).
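A hedged sketch of both options; the Maven coordinates come from the answer above, while the version and paths are placeholders you should match to the Kite version already in your project:

# Option 1: add org.kitesdk:kite-data-hive (same version as your other Kite modules)
#           to the assembly/fat jar before building it.
# Option 2: hand the jar to Spark explicitly at submit time:
spark-submit \
  --class MapReduce \
  --jars /path/to/kite-data-hive-<version>.jar \
  your-assembly.jar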
I am making a connection with Hive using Java code, but I am getting the below error:
log4j:WARN No appenders could be found for logger (org.apache.thrift.transport.TSaslTransport).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=anonymous, access=WRITE, inode="/":oodles:supergroup:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5904)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5886)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5860)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3793)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3763)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3737)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:778)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:573)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at com.oodles.example.HiveJdbcClient.main(HiveJdbcClient.java:23)
My Java code is below:
package com.oodles.example;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcClient {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        String tableName = "testHiveDriverTable";
        stmt.execute("drop table if exists " + tableName);
        stmt.execute("create table " + tableName + " (key int, value string)");
        System.out.println("success!");
        stmt.close();
        con.close();
    }
}
My other concern is that whenever I make a connection without starting the Hadoop services, it gives this error:
log4j:WARN No appenders could be found for logger (org.apache.thrift.transport.TSaslTransport).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.net.ConnectException Call From oodles-Latitude-3540/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused)
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at com.oodles.example.HiveJdbcClient.main(HiveJdbcClient.java:21)
The latter issue gets resolved if I start the Hadoop services, so I want to ask: is it mandatory to start the Hadoop services in order to make a connection with Hive?
Since you have not mentioned which Hive version you are using, based on the driver name and connection URL I am assuming you are using Hive 0.11 or above.
In Hive 0.11 or above you need to mention a username in the connection URL:
DriverManager.getConnection("jdbc:hive2://localhost:10000/default", <user_name>, "")
NOTE: This user should have read+write permissions in HDFS.
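For example, a minimal sketch of the corrected call in the question's HiveJdbcClient; the user name oodles is only an assumption taken from the inode owner (inode="/":oodles:supergroup) in the stack trace, so substitute a user that actually has write access on your HDFS:

// "oodles" is assumed from the HDFS inode owner shown in the error above
Connection con = DriverManager.getConnection(
        "jdbc:hive2://localhost:10000/default", "oodles", "");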
Regarding your second query:
I am quite sure that the Hadoop services are not required just for the connection, though I have never tried it. My assumption is that, since we need to mention a database in the connection URL, which is a directory in HDFS, the NameNode service might be needed to check the existence of that directory.
Hope it helps...!!!
I'm facing a ClassNotFoundException for the class org.apache.hcatalog.rcfile.RCFileMapReduceOutputFormat when I run my job.
I tried to pass the additional jar files with -libjars, but I am still facing the same issue. Any suggestions would be greatly helpful. Thanks in advance.
Below is the command I am using and the exception I am facing:
hadoop jar MyJob.jar MyDriver -libjars hcatalog-core-0.5.0-cdh4.4.0.jar inputDir OutputDir
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hcatalog/rcfile/RCFileMapReduceOutputFormat
at com.cloudera.sa.omniture.mr.OmnitureToRCFileJob.run(OmnitureToRCFileJob.java:91)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at com.cloudera.sa.omniture.mr.OmnitureToRCFileJob.main(OmnitureToRCFileJob.java:131)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.ClassNotFoundException: org.apache.hcatalog.rcfile.RCFileMapReduceOutputFormat
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
... 8 more
I implemented ToolRunner as well; below is the code which confirms that:
public class OmnitureToRCFileJob extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        OmnitureToRCFileJob processor = new OmnitureToRCFileJob();
        String[] otherArgs = new GenericOptionsParser(processor.getConf(), args).getRemainingArgs();
        System.exit(ToolRunner.run(processor.getConf(), processor, otherArgs));
    }
}
Did you try running it by giving the full path of the "hcatalog-core-0.5.0-cdh4.4.0.jar" jar file in your line below?
hadoop jar MyJob.jar MyDriver -libjars hcatalog-core-0.5.0-cdh4.4.0.jar inputDir OutputDir
or
The below configuration should also work for you:
$ export LIBJARS=<fullpath>/hcatalog-core-0.5.0-cdh4.4.0.jar
$ hadoop jar MyJob.jar MyDriver -libjars ${LIBJARS} inputDir OutputDir
If you look at the hadoop command documentation, you can see that -libjars is a generic option. To get generic options parsed, your driver class has to implement Tool and go through ToolRunner, as follows:

public class TestDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // Job configuration details
        // Job submission
        return 0;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new TestDriver(), args);
        System.exit(exitCode);
    }
}
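Once the driver goes through ToolRunner like this, the generic options are parsed, so an invocation in the same form as the one in the question should then pick up -libjars (a usage example; it assumes the jar sits in the current working directory):

hadoop jar MyJob.jar MyDriver -libjars hcatalog-core-0.5.0-cdh4.4.0.jar inputDir OutputDir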
I think you are getting this exception from your driver code itself. A jar set with the -libjars option (here hcatalog-core*.jar) may not be available in the client JVM (the JVM in which the driver code runs). You are better off setting this jar in the HADOOP_CLASSPATH environment variable before executing it with hadoop jar, as follows:
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:<PATH-TO-HCAT-LIB>/hcatalog-core-0.5.0-cdh4.4.0.jar
hadoop jar MyJob.jar MyDriver -libjars hcatalog-core-0.5.0-cdh4.4.0.jar inputDir OutputDir
I had the same problem but found out that the jar command doesn't accept the --libjars argument.
"Specify comma separated jar files to include in the classpath. Applies only to job." --> Hadoop CLI Generic Options
Instead you should use the environment variables to add additional jars or replace existing ones:
export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CLASSPATH="./lib/*"
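Putting that together, a hedged sketch of the launch; it assumes the hcatalog-core jar has been copied into a ./lib directory next to MyJob.jar:

export HADOOP_USER_CLASSPATH_FIRST=true
export HADOOP_CLASSPATH="./lib/*"
hadoop jar MyJob.jar MyDriver inputDir OutputDir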
I am trying to run a MapReduce job from outside the cluster.
For example, the Hadoop cluster is running on Linux machines, and we have a web application running on a Windows machine.
We want to run the Hadoop job from this remote web application, retrieve the Hadoop output directory, and present it as a graph.
We have written the following piece of code:
Configuration conf = new Configuration();
Job job = new Job(conf);
conf.set("mapred.job.tracker", "192.168.56.101:54311");
conf.set("fs.default.name", "hdfs://192.168.56.101:54310");
job.setJarByClass(Analysis.class);
//job.setOutputKeyClass(Text.class);
//job.setOutputValueClass(IntWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
//job.set
job.setInputFormatClass(CustomFileInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
job.waitForCompletion(true);
And this is the error we get. Even if we shut down the Hadoop 1.1.2 cluster, the error is still the same.
14/03/07 00:23:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/03/07 00:23:37 ERROR security.UserGroupInformation: PriviledgedActionException as:user cause:java.io.IOException: Failed to set permissions of path: \tmp\hadoop-user\mapred\staging\user818037780\.staging to 0700
Exception in thread "main" java.io.IOException: Failed to set permissions of path: \tmp\hadoop-user\mapred\staging\user818037780\.staging to 0700
at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:664)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:514)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:349)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:193)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:126)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:942)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
at LineCounter.main(LineCounter.java:86)
While running from a remote system, you should run as the remote user. You can do it in your main class as follows:
public static void main(String a[]) {
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("root");
    try {
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                Configuration conf = new Configuration();
                Job job = new Job(conf);
                conf.set("hadoop.job.ugi", "root");
                // write your remaining piece of code here.
                return null;
            }
        });
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Also, when submitting a MapReduce job, it should copy your Java classes with their dependent jars to the Hadoop cluster, where it executes the MapReduce job. You can read more here.
So you need to create a runnable jar of your code (with main class Analysis in your case) with all dependent jar files in its manifest classpath. Then run your jar file from your command line using
java -jar job-jar-with-dependencies.jar arguments
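If you build with Maven, one way to produce such a jar is the assembly plugin's jar-with-dependencies descriptor; this is only a sketch and assumes the maven-assembly-plugin is already configured in your pom with Analysis as the main class:

# assumes maven-assembly-plugin with the jar-with-dependencies descriptor in the pom
mvn clean package assembly:single
java -jar target/job-jar-with-dependencies.jar arguments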
HTH!
I'm trying to set up a connection pool in GlassFish for Cassandra using cassandra-jdbc driver. I've put the driver jar (and all of the jars that it depends on) in the ~glassfish-domain/lib/ext folder but I get the following error when I try to ping:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.cassandra.cql.jdbc.CassandraDriver Could not initialize class org.apache.cassandra.cql.jdbc.CassandraDriver
It seems that GlassFish finds the class, but can't load it. As all of the dependencies are satisfied, a possible reason is that there is an exception in a static block. I checked the code of CassandraDriver and it actually has a static block:
static
{
    // Register the CassandraDriver with DriverManager
    try
    {
        CassandraDriver driverInst = new CassandraDriver();
        DriverManager.registerDriver(driverInst);
    }
    catch (SQLException e)
    {
        throw new RuntimeException(e.getMessage());
    }
}
Thanks in advance!
It seems that slf4j wasn't loading correctly because it depends on log4j.jar. So after adding it to the classpath, everything seems to be working fine. Here's the list of all of the jars in my lib:
apache-cassandra-1.1.6.jar
apache-cassandra-clientutil-1.1.6.jar
apache-cassandra-thrift-1.1.6.jar
cassandra-jdbc-1.1.2.jar
commons-lang-2.4.jar
guava-r08.jar
libthrift-0.7.0.jar
log4j-1.2.14.jar
slf4j-api-1.5.8.jar
slf4j-log4j12-1.5.8.jar