Run Hadoop Java code without Hadoop command

public void readFile(String file) throws IOException {
    Configuration conf = new Configuration();
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/mapred-site.xml"));

    FileSystem fileSystem = FileSystem.get(conf);
    System.out.println("DefaultFS: " + conf.get("fs.defaultFS"));
    System.out.println("Home directory: " + fileSystem.getHomeDirectory());

    Path path = new Path(file);
    if (!fileSystem.exists(path)) {
        System.out.println("File " + file + " does not exist");
        return;
    }
}
I am very new to Hadoop and I am wondering whether it is possible to execute this Hadoop Java client code using "java -jar".
My code works with the "hadoop jar" command. However, when I execute it with "java -jar" instead, it can't locate the file in HDFS, and getHomeDirectory() returns a local path that doesn't exist.
Are my configuration files not being added correctly? Why does the code only work when executed with the hadoop command?

Instead of passing a Path object, pass the file path as a String:
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml");
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml");
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/mapred-site.xml");
Or else you could add these files to the classpath and reference them by name:
conf.addResource("core-site.xml");
conf.addResource("hdfs-site.xml");
conf.addResource("mapred-site.xml");
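The classpath variant works because Configuration resolves a plain String resource name through the class loader rather than the file system. A minimal plain-Java sketch of that lookup rule, which you can use to check whether your launch command actually puts the XML files on the classpath (file names are just examples):

```java
import java.io.IOException;
import java.io.InputStream;

public class ClasspathCheck {
    // Mimics how a String resource name is found: it is resolved via the
    // class loader, so the XML file must be on the classpath (inside the
    // jar, or in a directory passed to java -cp), not just anywhere on disk.
    static boolean onClasspath(String name) {
        try (InputStream in = ClasspathCheck.class.getClassLoader().getResourceAsStream(name)) {
            return in != null;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Only true if you launched with core-site.xml on the classpath.
        System.out.println("core-site.xml on classpath: " + onClasspath("core-site.xml"));
    }
}
```

If this prints false when you launch with java -jar, the Configuration silently falls back to its defaults (fs.defaultFS = file:///), which matches the local-path symptom described in the question.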

Related

Can't connect to remote hdfs using spring hadoop

I'm just starting with Hadoop and I'm trying to read from a remote HDFS (running in a Docker container, accessible at localhost:32783) through Spring Hadoop, but I get the following error:
org.springframework.data.hadoop.HadoopException:
Cannot list resources Failed on local exception:
java.io.EOFException; Host Details : local host is: "user/127.0.1.1";
destination host is: "localhost":32783;
I'm trying to read the file using the following code:
HdfsClient hdfsClient = new HdfsClient();
Configuration conf = new Configuration();
conf.set("fs.defaultFS","hdfs://localhost:32783");
FileSystem fs = FileSystem.get(conf);
SimplerFileSystem sFs = new SimplerFileSystem(fs);
hdfsClient.setsFs(sFs);
String filePath = "/tmp/tmpTestReadTest.txt";
String output = hdfsClient.readFile(filePath);
What hdfsClient.readFile(filePath) does is the following:
public class HdfsClient {
    private SimplerFileSystem sFs;

    public String readFile(String filePath) throws IOException {
        FSDataInputStream inputStream = this.sFs.open(filePath);
        String output = getStringFromInputStream(inputStream.getWrappedStream());
        inputStream.close();
        return output;
    }
}
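getStringFromInputStream is not shown in the question; a plain-Java version of such a helper might look like this (a sketch, with the method name taken from the snippet above):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class StreamUtil {
    // Drain an InputStream into a UTF-8 String; stands in for the helper
    // referenced (but not shown) in the snippet above.
    static String getStringFromInputStream(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toString(StandardCharsets.UTF_8.name());
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello hdfs".getBytes(StandardCharsets.UTF_8));
        System.out.println(getStringFromInputStream(in)); // prints "hello hdfs"
    }
}
```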
Any guess why I can't read from the remote HDFS? If I remove the conf.set("fs.defaultFS","hdfs://localhost:32783"); line I can read, but only from a local file path.
I understand that "hdfs://localhost:32783" is correct, because replacing it with a random URI gives a Connection refused error instead.
Might there be something wrong in my Hadoop configuration?
Thank you!

Unable to execute "put" in the map function using HBase and Hadoop

Everybody, I'm using MapReduce to process some log files stored on HDFS. I want to extract some info from the files and store it in HBase.
So I launch the job:
HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar crm_hbase-1.0.jar /datastream/music/useraction/2014-11-30/music_useraction_20141130-230003072+0800.24576015364769354.00018022.lzo
If I just run the job as "hadoop jar xxxx" it shows "cannot find HBaseConfiguration".
My code is quite simple,
public int run(String[] strings) throws Exception {
    Configuration config = HBaseConfiguration.create();
    String kerbConfPrincipal = "ndir@HADOOP.HZ.NETEASE.COM";
    String kerbKeytab = "/srv/zwj/ndir.keytab";
    UserGroupInformation.loginUserFromKeytab(kerbConfPrincipal, kerbKeytab);
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    System.out.println(" auth: " + ugi.getAuthenticationMethod());
    System.out.println(" name: " + ugi.getUserName());
    System.out.println(" using keytab:" + ugi.isFromKeytab());
    HBaseAdmin.checkHBaseAvailable(config);

    // set job name
    Job job = new Job(config, "Import from file ");
    job.setJarByClass(LogRun.class);
    // set map class
    job.setMapperClass(LogMapper.class);
    // set output format and output table name
    job.setOutputFormatClass(TableOutputFormat.class);
    job.getConfiguration().set(TableOutputFormat.OUTPUT_TABLE, "crm_data");
    job.setOutputKeyClass(ImmutableBytesWritable.class);
    job.setOutputValueClass(Put.class);
    job.setNumReduceTasks(0);
    TableMapReduceUtil.addDependencyJars(job);
But when I run this MR job, it cannot execute context.write(null, put); the map seems to halt at that line.
I think it is related to kerbKeytab. Does it mean I need to log in again when the map tasks run?
After adding TableMapReduceUtil it works:
Job job = new Job(config, "Import from file ");
job.setJarByClass(LogRun.class);
//set map class
job.setMapperClass(LogMapper.class);
TableMapReduceUtil.initTableReducerJob(table, null, job);
job.setNumReduceTasks(0);
TableMapReduceUtil.addDependencyJars(job);
FileInputFormat.setInputPaths(job,input);
//FileInputFormat.addInputPath(job, new Path(strings[0]));
int ret = job.waitForCompletion(true) ? 0 : 1;

Reading in a parameter file in Amazon Elastic MapReduce and S3

I am trying to run my Hadoop program on the Amazon Elastic MapReduce system. My program takes an input file from the local filesystem which contains parameters needed for the run. However, since the file is normally read from the local filesystem with FileInputStream, the task fails in the AWS environment with an error saying that the parameter file was not found. Note that I have already uploaded the file to Amazon S3. How can I fix this problem? Thanks. Below is the code that I use to read the parameter file and, consequently, the parameters in it.
FileInputStream fstream = new FileInputStream(path);
DataInputStream datain = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(datain));
String[] args = new String[7];
int i = 0;
String strLine;
while ((strLine = br.readLine()) != null) {
    args[i++] = strLine;
}
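As a side note, the snippet above hard-codes exactly seven parameters; a simpler plain-Java version without the fixed-size array might look like this (names are illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ParamFile {
    // Read every non-empty line of a parameter file into a list,
    // avoiding the fixed-size String[7] of the original snippet.
    static List<String> readParams(Path path) throws IOException {
        List<String> params = new ArrayList<>();
        for (String line : Files.readAllLines(path)) {
            if (!line.trim().isEmpty()) {
                params.add(line);
            }
        }
        return params;
    }

    public static void main(String[] args) throws IOException {
        // Demo with a temporary file; in the real job the path would
        // point at the parameter file fetched from S3.
        Path tmp = Files.createTempFile("params", ".txt");
        Files.write(tmp, Arrays.asList("input=s3://bucket/in", "output=s3://bucket/out"));
        System.out.println(readParams(tmp)); // prints [input=s3://bucket/in, output=s3://bucket/out]
    }
}
```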
If you must read the file from the local file system, you can configure your EMR job to run with a bootstrap action. In that action, simply copy the file from S3 to a local file using s3cmd or similar.
You could also go through the Hadoop FileSystem class to read the file, as I'm pretty sure EMR supports direct access like this. For example:
FileSystem fs = FileSystem.get(new URI("s3://my.bucket.name/"), conf);
DataInputStream in = fs.open(new Path("/my/parameter/file"));
I have not tried Amazon Elastic MapReduce yet, but it looks like a classic application of the distributed cache. You add a file to the cache using the -files option (if you implement Tool/ToolRunner) or the job.addCacheFile(URI uri) method, and access it as if it existed locally.
You can add this file to the distributed cache as follows :
...
String s3FilePath = args[0];
DistributedCache.addCacheFile(new URI(s3FilePath), conf);
...
Later, in configure() of your mapper/reducer, you can do the following:
...
Path s3FilePath;

@Override
public void configure(JobConf job) {
    s3FilePath = DistributedCache.getLocalCacheFiles(job)[0];
    FileInputStream fstream = new FileInputStream(s3FilePath.toString());
    ...
}

Unable to load OpenNLP sentence model in Hadoop map-reduce job

I'm trying to get OpenNLP integrated into a map-reduce job on Hadoop, starting with some basic sentence splitting. Within the map function, the following code is run:
public AnalysisFile analyze(String content) {
    InputStream modelIn = null;
    String[] sentences = null;
    // references an absolute path to en-sent.bin
    logger.info("sentenceModelPath: " + sentenceModelPath);
    try {
        modelIn = getClass().getResourceAsStream(sentenceModelPath);
        SentenceModel model = new SentenceModel(modelIn);
        SentenceDetectorME sentenceBreaker = new SentenceDetectorME(model);
        sentences = sentenceBreaker.sentDetect(content);
    } catch (FileNotFoundException e) {
        logger.error("Unable to locate sentence model.");
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (modelIn != null) {
            try {
                modelIn.close();
            } catch (IOException e) {
            }
        }
    }
    logger.info("number of sentences: " + sentences.length);
    <snip>
}
When I run my job, I'm getting an error in the log saying "in must not be null!", which means that somehow I can't open an InputStream to the model. Other tidbits:
I've verified that the model file exists in the location sentenceModelPath refers to.
I've added Maven dependencies for opennlp-maxent:3.0.2-incubating, opennlp-tools:1.5.2-incubating, and opennlp-uima:1.5.2-incubating.
Hadoop is just running on my local machine.
Most of this is boilerplate from the OpenNLP documentation. Is there something I'm missing, either on the Hadoop side or the OpenNLP side, that would cause me to be unable to read from the model?
Your problem is the getClass().getResourceAsStream(sentenceModelPath) line. This tries to load a file from the classpath; neither a file in HDFS nor one on the client's local file system is part of the classpath at mapper/reducer runtime, which is why you're seeing the "in must not be null!" error (getResourceAsStream() returns null if the resource cannot be found).
To get around this you have a number of options:
Amend your code to load the file from HDFS:
modelIn = FileSystem.get(context.getConfiguration()).open(
new Path("/sandbox/corpus-analysis/nlp/en-sent.bin"));
Amend your code to load the file from the local dir, and use the -files GenericOptionsParser option (which copies the file from the local file system to HDFS, and back down to the local directory of the running mapper/reducer):
modelIn = new FileInputStream("en-sent.bin");
Hard-bake the file into the job jar (in the root dir of the jar), and amend your code to include a leading slash:
modelIn = getClass().getResourceAsStream("/en-sent.bin");
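The leading slash matters because Class.getResourceAsStream resolves a name without one relative to the class's package, and a name with one from the classpath root. A quick stdlib illustration (en-sent.bin here stands in for the baked-in model):

```java
import java.io.IOException;
import java.io.InputStream;

public class ResourcePathDemo {
    // True if 'name' can be opened as a classpath resource.
    // A leading '/' means: resolve from the classpath root.
    static boolean found(String name) {
        try (InputStream in = ResourcePathDemo.class.getResourceAsStream(name)) {
            return in != null;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "/java/lang/String.class" is used only because it is a resource
        // guaranteed to exist on any JVM; "/en-sent.bin" is only found
        // if the model really sits in the root of the job jar.
        System.out.println(found("/java/lang/String.class"));
        System.out.println(found("/en-sent.bin"));
    }
}
```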

How to Read file from Hadoop using Java without command line

I wanted to read a file from the Hadoop file system; I could do that using the code below:
String uri = theFilename;
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
InputStream in = null;
try {
    in = fs.open(new Path(uri));
    IOUtils.copyBytes(in, System.out, 4096, false);
} finally {
    IOUtils.closeStream(in);
}
To run this I have to use: hadoop jar myjar.jar com.mycompany.cloud.CatFile /filepathin_hadoop
That works, but how can I do the same from another program, i.e. without using the hadoop jar command?
You can add your core-site.xml to that Configuration object so it knows the URI of your HDFS instance. This method requires HADOOP_HOME to be set.
Configuration conf = new Configuration();
Path coreSitePath = new Path(System.getenv("HADOOP_HOME"), "conf/core-site.xml");
conf.addResource(coreSitePath);
FileSystem hdfs = FileSystem.get(conf);
// rest of code the same
Now, without using hadoop jar, you can open a connection to your HDFS instance.
Edit: you have to use conf.addResource(Path). If you use a String argument, it looks in the classpath for that filename.
There is another configuration method, set(parameterName, value). If you use it, you don't have to specify the location of core-site.xml, which is useful for accessing HDFS from a remote location such as a web server.
Usage as follows :
String uri = theFilename;
Configuration conf = new Configuration();
conf.set("fs.default.name","hdfs://10.132.100.211:8020/");
FileSystem fs = FileSystem.get(conf);
// Rest of the code
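Either way, the value being supplied is the one a minimal core-site.xml would carry; fs.default.name is the deprecated name of what newer releases call fs.defaultFS (host and port below are placeholders taken from the snippet above):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.132.100.211:8020</value>
  </property>
</configuration>
```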