Can't connect to remote HDFS using Spring Hadoop

I'm just starting with Hadoop and I'm trying to read from a remote HDFS (running in a Docker container, accessible from localhost:32783) through Spring Hadoop, but I get the following error:
org.springframework.data.hadoop.HadoopException:
Cannot list resources Failed on local exception:
java.io.EOFException; Host Details : local host is: "user/127.0.1.1";
destination host is: "localhost":32783;
I'm trying to read the file using the following code:
HdfsClient hdfsClient = new HdfsClient();
Configuration conf = new Configuration();
conf.set("fs.defaultFS","hdfs://localhost:32783");
FileSystem fs = FileSystem.get(conf);
SimplerFileSystem sFs = new SimplerFileSystem(fs);
hdfsClient.setsFs(sFs);
String filePath = "/tmp/tmpTestReadTest.txt";
String output = hdfsClient.readFile(filePath);
What hdfsClient.readFile(filePath) does is the following:
public class HdfsClient {

    private SimplerFileSystem sFs;

    public void setsFs(SimplerFileSystem sFs) {
        this.sFs = sFs;
    }

    public String readFile(String filePath) throws IOException {
        FSDataInputStream inputStream = this.sFs.open(filePath);
        String output = getStringFromInputStream(inputStream.getWrappedStream());
        inputStream.close();
        return output;
    }
}
Any guess why I can't read from the remote HDFS? If I remove the conf.set("fs.defaultFS","hdfs://localhost:32783"); line I can read, but only from a local file path.
I assume that "hdfs://localhost:32783" is correct, because changing it to a random URI gives a Connection refused error instead.
Could there be something wrong in my Hadoop configuration?
Thank you!
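For what it's worth, here is a minimal sanity check (a sketch, assuming the same hdfs://localhost:32783 URI from the question) that takes Spring Hadoop and SimplerFileSystem out of the picture and lists the root directory with the plain Hadoop client. If this also fails with an EOFException, the problem sits between the client and the mapped Docker port rather than in the Spring wrapper, for example because 32783 maps to something other than the NameNode's RPC port, or because of a client/server version mismatch.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSanityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:32783");
        FileSystem fs = FileSystem.get(conf);
        // If this throws the same EOFException, the Spring layer is not the problem.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
    }
}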

Related

Run Hadoop Java code without Hadoop command

public void readFile(String file) throws IOException {
    Configuration conf = new Configuration();
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml"));
    conf.addResource(new Path("/usr/local/hadoop-2.7.3/etc/hadoop/mapred-site.xml"));

    FileSystem fileSystem = FileSystem.get(conf);
    System.out.println("DefaultFS: " + conf.get("fs.defaultFS"));
    System.out.println("Home directory: " + fileSystem.getHomeDirectory());

    Path path = new Path(file);
    if (!fileSystem.exists(path)) {
        System.out.println("File " + file + " does not exist");
        return;
    }
    // ... rest of the method
}
I am very new to Hadoop and I am wondering if it is possible to execute this Hadoop Java Client code using "java -jar".
My code works using the "hadoop jar" command. However, when I try to execute this code using "java -jar" instead of "hadoop jar", it can't locate the file in HDFS and the method getHomeDirectory() returns a local path that doesn't exist.
Are my configuration files not added correctly? Why does the code only work when executed with the hadoop command?
Instead of passing a Path object, pass the file path as a string:
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/core-site.xml");
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/hdfs-site.xml");
conf.addResource("/usr/local/hadoop-2.7.3/etc/hadoop/mapred-site.xml");
Or else you could add these files to the classpath and try:
conf.addResource("core-site.xml");
conf.addResource("hdfs-site.xml");
conf.addResource("mapred-site.xml");

Append to file in HDFS (CDH 5.4.5)

Brand new to HDFS here.
I've got this small section of code to test out appending to a file:
val path: Path = new Path("/tmp", "myFile")
val config = new Configuration()
val fileSystem: FileSystem = FileSystem.get(config)
val outputStream = fileSystem.append(path)
outputStream.writeChars("what's up")
outputStream.close()
It is failing with this message:
Not supported
java.io.IOException: Not supported
at org.apache.hadoop.fs.ChecksumFileSystem.append(ChecksumFileSystem.java:352)
at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1163)
I looked at the source for ChecksumFileSystem.java, and it seems to be hardcoded to not support appending:
@Override
public FSDataOutputStream append(Path f, int bufferSize,
        Progressable progress) throws IOException {
    throw new IOException("Not supported");
}
How to make this work? Is there some way to change the default file system to some other implementation that does support append?
It turned out that I needed to actually run a real Hadoop NameNode and DataNode. I am new to Hadoop and did not realize this. Without them, it falls back to your local filesystem, which is a ChecksumFileSystem and does not support append. So I followed a blog post to get a NameNode and DataNode up and running on my system, and now I am able to append.
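A quick way to confirm which implementation you are getting (a small sketch; nothing here is specific to CDH) is to print the class of the FileSystem that your Configuration resolves to: with an empty Configuration it will be a LocalFileSystem (a ChecksumFileSystem subclass), which is why append() throws, whereas a Configuration pointing at a running cluster yields a DistributedFileSystem, which does support append.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class WhichFileSystem {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Prints org.apache.hadoop.fs.LocalFileSystem with the bare defaults,
        // org.apache.hadoop.hdfs.DistributedFileSystem when fs.defaultFS is an hdfs:// URI.
        System.out.println(fs.getClass().getName());
    }
}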
The data has to be written through the output stream returned by append, not on the filesystem itself; FileSystem.get() is just used to connect to your HDFS. First, set dfs.support.append to true in hdfs-site.xml:
<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>
Stop all your daemon services using stop-all.sh and restart them using start-all.sh. Then put this in your main method:
String fileuri = "hdfs/file/path";
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(fileuri),conf);
FSDataOutputStream out = fs.append(new Path(fileuri));
PrintWriter writer = new PrintWriter(out);
writer.append("I am appending this to my file");
writer.close();
fs.close();

Reading in a parameter file in Amazon Elastic MapReduce and S3

I am trying to run my Hadoop program on Amazon Elastic MapReduce. My program takes an input file from the local filesystem that contains parameters it needs to run. However, since the file is normally read from the local filesystem with FileInputStream, the task fails when executed in the AWS environment with an error saying that the parameter file was not found. Note that I have already uploaded the file to Amazon S3. How can I fix this problem? Thanks.
Below is the code that I use to read the parameter file and then read the parameters from it.
FileInputStream fstream = new FileInputStream(path);
DataInputStream datain = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(datain));
String[] args = new String[7];
int i = 0;
String strLine;
while ((strLine = br.readLine()) != null) {
    args[i++] = strLine;
}
If you must read the file from the local file system, you can configure your EMR job to run with a bootstrap action. In that action, simply copy the file from S3 to a local file using s3cmd or similar.
You could also go through the Hadoop FileSystem class to read the file, as I'm pretty sure EMR supports direct access like this. For example:
FileSystem fs = FileSystem.get(new URI("s3://my.bucket.name/"), conf);
DataInputStream in = fs.open(new Path("/my/parameter/file"));
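Building on that, a small sketch (bucket URI, key, and helper name are illustrative) showing that the asker's line-reading loop carries over almost unchanged once the stream comes from the Hadoop FileSystem instead of FileInputStream:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3ParameterReader {
    // Reads one parameter per line from an object in S3 via the Hadoop FileSystem API.
    public static String[] readParameters(String bucketUri, String key, int count) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(new URI(bucketUri), conf);
        BufferedReader br = new BufferedReader(new InputStreamReader(fs.open(new Path(key))));
        String[] params = new String[count];
        int i = 0;
        String line;
        while (i < count && (line = br.readLine()) != null) {
            params[i++] = line;
        }
        br.close();
        return params;
    }
}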
I have not tried Amazon Elastic MapReduce yet, but this looks like a classic application of the distributed cache. You add the file to the cache using the -files option (if you implement Tool/ToolRunner) or the job.addCacheFile(URI uri) method, and then access it as if it existed locally.
You can add this file to the distributed cache as follows:
...
String s3FilePath = args[0];
DistributedCache.addCacheFile(new URI(s3FilePath), conf);
...
Later, in configure() of your mapper/reducer, you can do the following:
...
Path s3FilePath;
@Override
public void configure(JobConf job) {
    s3FilePath = DistributedCache.getLocalCacheFiles(job)[0];
    FileInputStream fstream = new FileInputStream(s3FilePath.toString());
    ...
}
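And as a hedged sketch of the -files route mentioned earlier (class and file names are placeholders): a driver that implements Tool gets the generic options parsed for free, and the cached file then appears in the task's working directory under its own name, so a mapper or reducer can open it with a plain FileInputStream("params.txt").

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ParamDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // args no longer contains the generic options such as -files here;
        // set up and submit the job using getConf() as usual.
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // Run with: hadoop jar myjar.jar ParamDriver -files s3://my.bucket/params.txt <other args>
        // (assuming the cluster can resolve the s3:// URI).
        System.exit(ToolRunner.run(new Configuration(), new ParamDriver(), args));
    }
}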

Copying files from HDFS to local file system with JAVA

I am trying to copy files from HDFS to the local filesystem for preprocessing. The code below should work according to the documentation. Although it doesn't give any error messages and the MapReduce job runs smoothly, I cannot see any output on my local hard drive. What do you think the problem is? Thanks.
try {
    Path phdfs_input = new Path("hdfs://master:54310/user/hduser/conninput/" + value.toString());
    Path plocal_input = new Path("/home/hduser/Desktop/" + value.toString());
    FileSystem fs = FileSystem.get(context.getConfiguration());
    fs.copyToLocalFile(phdfs_input, plocal_input);
    /* String localoutput_file = "/home/hduser/Desktop/output/" + value.toString();
    String cmd1[] = {"mafia", "-mfi", ".5", "-ascii", "~/Desktop/" + value.toString(), localoutput_file };
    File mafia_dir = new File("/home/hduser/");
    ShellCommandExecutor s = new ShellCommandExecutor(cmd1, mafia_dir); */
} catch (Exception e) {
    e.printStackTrace();
}
Try using /user/hduser/conninput/"+value.toString() in the Path constructor instead of providing the master:54310 part. It should figure out master:54310 from the Configuration.
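A minimal sketch of that suggestion, reusing the paths from the question; the rest of the snippet stays the same:

// Let the Configuration supply the hdfs://master:54310 part via fs.defaultFS.
Path phdfs_input = new Path("/user/hduser/conninput/" + value.toString());
Path plocal_input = new Path("/home/hduser/Desktop/" + value.toString());
FileSystem fs = FileSystem.get(context.getConfiguration());
fs.copyToLocalFile(phdfs_input, plocal_input);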

How to Read file from Hadoop using Java without command line

I wanted to read a file from the Hadoop filesystem, and I could do that using the code below:
String uri = theFilename;
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
InputStream in = null;
try {
    in = fs.open(new Path(uri));
    IOUtils.copyBytes(in, System.out, 4096, false);
} finally {
    IOUtils.closeStream(in);
}
To run this file I have to run hadoop jar myjar.jar com.mycompany.cloud.CatFile /filepathin_hadoop
That works. But how can I do the same from another program, i.e. without using the hadoop jar command?
You can add your core-site.xml to that Configuration object so it knows the URI for your HDFS instance. This method requires HADOOP_HOME to be set.
Configuration conf = new Configuration();
Path coreSitePath = new Path(System.getenv("HADOOP_HOME"), "conf/core-site.xml");
conf.addResource(coreSitePath);
FileSystem hdfs = FileSystem.get(conf);
// rest of code the same
Now, without using hadoop jar you can open a connection to your HDFS instance.
Edit: you have to use conf.addResource(Path). If you use a String argument, it looks in the classpath for that filename.
There is another Configuration method, set(parameterName, value). If you use this method, you don't have to specify the location of core-site.xml. This is useful for accessing HDFS from a remote location such as a web server.
Usage is as follows:
String uri = theFilename;
Configuration conf = new Configuration();
conf.set("fs.default.name","hdfs://10.132.100.211:8020/");
FileSystem fs = FileSystem.get(conf);
// Rest of the code
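Note that fs.default.name is the older name for this property; on Hadoop 2.x and later the equivalent (and preferred) key is fs.defaultFS, so a variant of the same call, keeping the address from the example, would be:

conf.set("fs.defaultFS", "hdfs://10.132.100.211:8020");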
