I have a simple MR job that needs to create a directory in HDFS based on a timestamp. I am having a hard time finding the correct API (in Hadoop 2.0.3) to check the status of a path and create the directory if it doesn't exist. Can someone suggest the right way of doing it? Here is the existing code:
FileSystem fileSystem = FileSystem.get(new Configuration());
Calendar c = Calendar.getInstance();
String basepath = "/dev/group/data/json/";
for (Record record : records) {
    c.setTimeInMillis(record.timestamp);
    Path path = new Path(basepath + c.get(Calendar.YEAR) + "/" + c.get(Calendar.MONTH));
    // Check if the path is valid and create the HDFS folder if not
    FileStatus[] status = fileSystem.???
    context.write(key, new Text(mapper.writeValueAsString(record)));
}
Thx
mkdirs returns false if the directory creation fails and true if it succeeds, so just use that: when it returns false you know the directory wasn't created.
Checking whether it exists first doesn't really help, because that's an extra call to the NameNode. You also have to worry about contention across multiple tasks. Consider the following situation:
Mapper 1 checks to see if dir abc exists -- it doesn't
Mapper 2 checks to see if dir abc exists -- it doesn't
Mapper 1 tries to create dir abc -- it succeeds
Mapper 2 tries to create dir abc -- it fails, because the directory now exists
Long story short: just use mkdirs. It's atomic, it doesn't have the above problem, and it requires less work from the NameNode.
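To make that concrete, here's a minimal sketch of the loop from the question with mkdirs dropped in (records, key and mapper are carried over from the question's snippet):

FileSystem fileSystem = FileSystem.get(new Configuration());
Calendar c = Calendar.getInstance();
String basepath = "/dev/group/data/json/";
for (Record record : records) {
    c.setTimeInMillis(record.timestamp);
    Path path = new Path(basepath + c.get(Calendar.YEAR) + "/" + c.get(Calendar.MONTH));
    // mkdirs behaves like 'mkdir -p': missing parents are created and the call
    // returns true when the directory exists afterwards, false on failure.
    if (!fileSystem.mkdirs(path)) {
        throw new IOException("Could not create " + path);
    }
    context.write(key, new Text(mapper.writeValueAsString(record)));
}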
I want to use Cloudera's MapReduceIndexerTool to understand how morphlines work. I created a basic morphline that just reads lines from the input file, and I tried to run the tool with this command:
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool \
--morphline-file morphline.conf \
--output-dir hdfs:///hostname/dir/ \
--dry-run true
Hadoop is installed on the same machine where I run this command.
The error I'm getting is the following:
net.sourceforge.argparse4j.inf.ArgumentParserException: Cannot write parent of file: hdfs:/hostname/dir
at org.apache.solr.hadoop.PathArgumentType.verifyCanWriteParent(PathArgumentType.java:200)
The /dir directory has 777 permissions on it, so it is definitely allowed to write into it. I don't know what I should do to allow it to write into that output directory.
I'm new to HDFS and I don't know how I should approach this problem. Logs don't offer me any info about that.
What I tried until now (with no result):
created a hierarchy of 2 directories (/dir/dir2) and put 777 permissions on both of them
changed the output-dir schema from hdfs:///... to hdfs://... because all the examples in the --help menu are built that way, but this leads to an invalid schema error
Thank you.
It states 'Cannot write parent of file', and the parent in your case is /. Take a look at the source:
private void verifyCanWriteParent(ArgumentParser parser, Path file) throws ArgumentParserException, IOException {
    Path parent = file.getParent();
    if (parent == null || !fs.exists(parent) || !fs.getFileStatus(parent).getPermission().getUserAction().implies(FsAction.WRITE)) {
        throw new ArgumentParserException("Cannot write parent of file: " + file, parser);
    }
}
The value printed in the message is file, in your case hdfs:/hostname/dir, so file.getParent() will be /.
Additionally, you can check the permissions with the hadoop fs command; for example, you can try to create a zero-length file in the path:
hadoop fs -touchz /test-file
I solved that problem after days of working on it.
The problem is with the option --output-dir hdfs:///hostname/dir/.
First of all, there should not be 3 slashes at the beginning, as I kept writing while trying to make this work; there are only 2 (as in any valid HDFS URI). I had put 3 slashes because otherwise the tool throws an invalid schema exception! You can easily see in the code that the schema check is done before the verifyCanWriteParent check.
I tried to get the hostname by simply running the hostname command on the CentOS machine I was running the tool on. This was the main issue. I looked at the /etc/hosts file and saw that there are 2 hostnames for the same local IP. I took the second one and it worked. (I also appended the port to the hostname, so the final format is: --output-dir hdfs://correct_hostname:8020/path/to/file/from/hdfs.)
This error is very confusing because everywhere you look for the namenode hostname, you will see the same thing that the hostname command returns. Moreover, the errors are not structured in a way that you can diagnose the problem and take a logical path to solve it.
Additional information regarding this tool and debugging it
If you want to see the actual code that runs behind it, check the Cloudera version that you are running and select the same branch in the official repository. The master branch is not up to date.
If you just want to run this tool to play with the morphline (using the --dry-run option) without connecting to Solr, you can't. You have to specify a Zookeeper endpoint and a Solr collection or a Solr config directory, which involves additional research. This is something that could be improved in this tool.
You don't need to run the tool with -u hdfs, it works with a regular user.
I have a folder on my local system. It contains 1000 files, and I would like to move or copy it from my local system to HDFS.
I tried these two commands:
hadoop fs copyFromLocal C:/Users/user/Downloads/ProjectSpark/ling-spam /tmp
And I also tried this command:
hdfs dfs -put /C:/Users/user/Downloads/ProjectSpark/ling-spam
/tmp/ling-spam
It displays an error message saying that my directory is not found, and yet I'm sure the path is correct.
I found getmerge to copy a folder from HDFS to the local system, but I did not find the inverse.
Please, can you help me?
I use a VirtualBox VM on Windows, and I work with HDP 2.3.2 through a secure shell console.
You can't copy files from your Windows machine to HDFS. You have to first SCP the files into the VM (I recommend WinSCP or Filezilla) and only then can you use hadoop fs to put files onto HDFS.
The error was correct in that C:/Users/user/Downloads does not exist on the HDP sandbox because it's a Linux machine.
As noted, you can also try the Ambari HDFS file viewer, but I still stand by my note that SCP is the way to go, because not all Hadoop systems have Ambari (or at least the HDFS Files view for Ambari).
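If you ever want to do the put step programmatically from inside the VM instead of via the shell, here is a hypothetical sketch using the FileSystem API (the local path below is a placeholder for wherever you SCP'd the folder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutFolder {
    public static void main(String[] args) throws Exception {
        // Uses the cluster configuration on the classpath (core-site.xml etc.)
        FileSystem fs = FileSystem.get(new Configuration());
        // Copies the whole local directory (already SCP'd onto the VM) into HDFS.
        fs.copyFromLocalFile(new Path("/home/user/ling-spam"), new Path("/tmp/ling-spam"));
        fs.close();
    }
}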
I want to use Mutual Information to classify a word as spam or ham. The formula is: MI(word) = Σ P(occurrence, class) * log2( P(occurrence, class) / (P(occurrence) * P(class)) ).
I understand the formula: I have to compute the 4 terms (true, ham), (false, ham), (true, spam) and (false, spam).
What I don't understand is what exactly to write; so far I have computed the number of files in which each word occurs, but I don't know what exactly I must write in my function.
Thank you very much!
This is the body of my function:
def computeMutualInformationFactor(
    probaWC: RDD[(String, Double)], // probability of occurrence of the word in a given class
    probaW: RDD[(String, Double)],  // probability of occurrence of the word in either class
    probaC: Double,                 // probability that an email appears in the class (spam or ham)
    probaDefault: Double            // default value when a probability is missing
): RDD[(String, Double)] = {
I put some files into HDFS (/path/to/directory/) which contain data like the following:
63 EB44863EA74AA0C5D3ECF3D678A7DF59
62 FABBC9ED9719A5030B2F6A4591EDB180
59 6BF6D40AF15DE2D7E295EAFB9574BBF8
All of them are named like _user_hive_warehouse_file_name_000XYZ_A. These files had been downloaded from another HDFS.
I'm trying to create an external table via Hive:
CREATE EXTERNAL TABLE users(
id int,
user string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/path/to/directory/';
It says:
OK
Time taken: 0.098 seconds
select * from users; returns empty.
select count(1) from users; returns 0.
Hive creates the table successfully, but it is always empty. If I put in another file, such as another.txt, containing the sample data above, select count(1) from users; returns 3.
What am I missing? Why is the table empty?
Environment:
JDK 7
Hadoop 2.6.0
Hive 0.14.0
Ubuntu 14.04
I think you are encountering an issue that is peripherally discussed in HIVE-6431. In particular, this comment is the important one:
By default, FileInputFormat(which is the super class of various formats) in hadoop ignores file name starts with "_" or ".", and hard to walk around this in hive codebase.
The workaround is probably to avoid using filenames that begin with _ or .
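One possible way to apply that workaround, sketched here as a hypothetical rename pass over the directory (adjust the path and the renaming rule to your layout):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnhideFiles {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        for (FileStatus status : fs.listStatus(new Path("/path/to/directory/"))) {
            String name = status.getPath().getName();
            if (name.startsWith("_") || name.startsWith(".")) {
                // Strip the leading character so FileInputFormat no longer hides the file.
                Path renamed = new Path(status.getPath().getParent(), name.substring(1));
                fs.rename(status.getPath(), renamed);
            }
        }
        fs.close();
    }
}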
When you run any query in Hive, it is executed internally as a MapReduce job over the HDFS path where you stored the files. The job uses FileInputFormat to read the HDFS files, and FileInputFormat has a hiddenFileFilter which ignores any files starting with an underscore ("_") or a dot ("."). You can make it ignore other files as well by passing a custom PathFilter to FileInputFormat.setInputPathFilter. Hadoop uses files with underscores as "special" files for job output and logs, which is probably why they are ignored.
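For illustration, a rough sketch of wiring a custom PathFilter into a job; the filter class and its rule are made up for the example:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

// Hypothetical filter: skip anything ending in ".tmp", in addition to the
// default hidden-file filtering that FileInputFormat always applies.
public class NoTmpFilesFilter implements PathFilter {
    @Override
    public boolean accept(Path path) {
        return !path.getName().endsWith(".tmp");
    }
}

// In the driver:
// Job job = Job.getInstance(conf, "my job");
// FileInputFormat.setInputPathFilter(job, NoTmpFilesFilter.class);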
For a MapReduce job: my input directory contains around 1000 files, and each file contains some GBs of data.
For example, /MyFolder/MyResults/in_data/20140710/ contains 1000 files.
When I give the input path as /MyFolder/MyResults/in_data/20140710, it takes all 1000 files to process.
I would like to run the job on only 200 files at a time. How can we do this?
Here my command to execute:
hadoop jar wholefile.jar com.form1.WholeFileInputDriver -libjars myref.jar -D mapred.reduce.tasks=15 /MyFolder/MyResults/in_data/20140710/ <<Output>>
Can anyone help me with how to run the job with a batch size for the input files?
Thanks in advance
-Vim
A simple way would be to modify your driver to take only 200 files as input out of all the files in that directory. Something like this:
FileSystem fs = FileSystem.get(new Configuration());
FileStatus[] files = fs.globStatus(new Path("/MyFolder/MyResults/in_data/20140710/*"));
// Add only the first 200 matched files as input paths for the job.
for (int i = 0; i < 200 && i < files.length; i++) {
    FileInputFormat.addInputPath(job, files[i].getPath());
}
I need the fastest possible access to a single file, several copies of which are stored in many systems using Hadoop. I also need to find the ping time for each file, in sorted order.
How should I approach learning Hadoop to accomplish this task?
Please help quickly; I have very little time.
If you need faster access to a file, just increase its replication factor with the setrep command. This might not increase the file throughput proportionally, because of your current hardware limitations.
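If you would rather do it from Java than from the shell, here is a small sketch using FileSystem.setReplication (the path and factor are just examples):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RaiseReplication {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Equivalent to: hadoop fs -setrep 5 /user/praveensripati/input/sample.txt
        boolean ok = fs.setReplication(new Path("/user/praveensripati/input/sample.txt"), (short) 5);
        System.out.println("Replication change accepted: " + ok);
        fs.close();
    }
}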
The ls command does not give the access time for directories and files; it shows the modification time only. Use the Offline Image Viewer to dump the contents of HDFS fsimage files into a human-readable format. Below is the command using the Indented option.
bin/hdfs oiv -i fsimagedemo -p Indented -o fsimage.txt
A sample output from fsimage.txt; look for the ACCESS_TIME field.
INODE
  INODE_PATH = /user/praveensripati/input/sample.txt
  REPLICATION = 1
  MODIFICATION_TIME = 2011-10-03 12:53
  ACCESS_TIME = 2011-10-03 16:26
  BLOCK_SIZE = 67108864
  BLOCKS [NUM_BLOCKS = 1]
    BLOCK
      BLOCK_ID = -5226219854944388285
      NUM_BYTES = 529
      GENERATION_STAMP = 1005
  NS_QUOTA = -1
  DS_QUOTA = -1
  PERMISSIONS
    USER_NAME = praveensripati
    GROUP_NAME = supergroup
    PERMISSION_STRING = rw-r--r--
To get the ping time in sorted order, you need to write a shell script or some other program to extract the INODE_PATH and ACCESS_TIME from each INODE section and then sort them by ACCESS_TIME. You can also use Pig as shown here.
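As one possible "other program", here is a hypothetical Java sketch that assumes the Indented dump shown above, with one "KEY = value" pair per line, and prints paths sorted by access time:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// Reads fsimage.txt produced by the oiv Indented processor and prints
// "ACCESS_TIME <tab> INODE_PATH" pairs sorted by access time.
public class SortByAccessTime {
    public static void main(String[] args) throws IOException {
        List<String[]> rows = new ArrayList<>();
        String path = null;
        for (String line : Files.readAllLines(Paths.get("fsimage.txt"))) {
            String trimmed = line.trim();
            if (trimmed.startsWith("INODE_PATH = ")) {
                path = trimmed.substring("INODE_PATH = ".length());
            } else if (trimmed.startsWith("ACCESS_TIME = ") && path != null) {
                rows.add(new String[]{trimmed.substring("ACCESS_TIME = ".length()), path});
                path = null;
            }
        }
        // Timestamps are formatted "yyyy-MM-dd HH:mm", so a plain string sort is chronological.
        rows.sort((a, b) -> a[0].compareTo(b[0]));
        for (String[] row : rows) {
            System.out.println(row[0] + "\t" + row[1]);
        }
    }
}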
How should I approach learning Hadoop to accomplish this task? Please help quickly; I have very little time.
If you want to learn Hadoop in a day or two, that's not possible. Here are some videos and articles to start with.