How to read gz files in Spark using wholeTextFiles - hadoop

I have a folder which contains many small .gz files (compressed CSV text files). I need to read them in my Spark job, but I also need to do some processing based on information in the file name. Therefore, I did not use:
JavaRDD<String> input = sc.textFile(...)
since to my understanding I do not have access to the file name this way. Instead, I used:
JavaPairRDD<String, String> files_and_content = sc.wholeTextFiles(...);
because this way I get a pair of file name and the content.
However, it seems that this way the input reader fails to decompress the text from the gz file and instead reads binary gibberish.
So, I would like to know if I can set it to somehow read the text, or alternatively access the file name when using sc.textFile(...).

You cannot read gzipped files with wholeTextFiles because it uses CombineFileInputFormat, which cannot read gzipped files since they are not splittable (source proving it):
override def createRecordReader(
    split: InputSplit,
    context: TaskAttemptContext): RecordReader[String, String] = {

  new CombineFileRecordReader[String, String](
    split.asInstanceOf[CombineFileSplit],
    context,
    classOf[WholeTextFileRecordReader])
}
You may be able to use newAPIHadoopFile with WholeFileInputFormat (not built into Hadoop, but available all over the internet) to get this to work correctly.
UPDATE 1: I don't think WholeFileInputFormat will work, since it just gets the raw bytes of the file, meaning you may have to write your own class, possibly extending WholeFileInputFormat, to make sure it decompresses the bytes.
Another option would be to decompress the bytes yourself using GZIPInputStream.
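As a hedged sketch of that second option (assuming Spark 1.2+, where JavaSparkContext.binaryFiles is available, and that sc is the JavaSparkContext from the question): read each file as a named byte stream, then decompress it yourself with java.util.zip.GZIPInputStream.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.input.PortableDataStream;

// Each element is (fileName, stream), so the file name stays available.
JavaPairRDD<String, PortableDataStream> rawFiles = sc.binaryFiles("path/to/folder");

JavaPairRDD<String, String> filesAndContent = rawFiles.mapValues(stream -> {
    StringBuilder content = new StringBuilder();
    // Wrap the raw bytes in a GZIPInputStream to decompress them manually.
    try (BufferedReader reader = new BufferedReader(
            new InputStreamReader(new GZIPInputStream(stream.open())))) {
        String line;
        while ((line = reader.readLine()) != null) {
            content.append(line).append('\n');
        }
    }
    return content.toString();
});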
UPDATE 2: If you have access to the directory name like in the OP's comment below you can get all the files like this.
Path path = new Path("");
FileSystem fileSystem = path.getFileSystem(new Configuration()); //just uses the default one
FileStatus [] fileStatuses = fileSystem.listStatus(path);
ArrayList<Path> paths = new ArrayList<>();
for (FileStatus fileStatus : fileStatuses) paths.add(fileStatus.getPath());
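With those paths in hand, one hedged way to tie the content back to the file names (assuming a modest number of files, since each textFile call creates its own RDD) is to let textFile handle the gzip codec per file and tag each line with its file name:
import scala.Tuple2;

JavaPairRDD<String, String> filesAndLines = null;
for (Path p : paths) {
    final String name = p.getName();
    // textFile decompresses .gz transparently via the gzip codec.
    JavaPairRDD<String, String> one = sc.textFile(p.toString())
            .mapToPair(line -> new Tuple2<>(name, line));
    filesAndLines = (filesAndLines == null) ? one : filesAndLines.union(one);
}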

I faced the same issue while using Spark to connect to S3.
My file was a gzipped CSV with no extension.
JavaPairRDD<String, String> fileNameContentsRDD = javaSparkContext.wholeTextFiles(logFile);
This approach returned corrupted values.
I solved it by using the below code:
JavaPairRDD<String, String> fileNameContentsRDD = javaSparkContext.wholeTextFiles(logFile+".gz");
By adding .gz to the S3 URL, Spark automatically picked up the file and read it as a gzipped file. (This seems like a hacky approach, but it solved my problem.)

Related

How to get the file name in Hadoop from the input file path outside the mapper and reducer, i.e. in the driver class

To get the file path in a mapper or reducer, we use:
FileSplit fileSplit = (FileSplit)reporter.getInputSplit();
String filename = fileSplit.getPath().getName();
System.out.println("File name "+filename);
System.out.println("Directory and File name"+fileSplit.getPath().toString());
process(key,value);
But my input folder had five different kinds of files, so I need to get the file name so that I can set different mappers for different files.
For example, in args[0] my input folder /cloudera/test contains test.txt, dev.txt, and rev.txt:
if the file name contains "dev" I should set mapper 1;
if the file name contains "test" I should set mapper 2;
and so on.
You have to use MultipleInputs with different mappers. I think I have a good link for you that helped me too when I practiced this a while back:
MultipleInput Usage
You can use something like this:
FileInputFormat.addInputPaths(job, String.valueOf(args[0]+","+args[1]));
Here, you can mention the paths of individual files in args[0] and args[1].
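As a sketch of routing each file to its own mapper with MultipleInputs (Mapper1 and Mapper2 are hypothetical class names for your own mapper implementations):
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

// Each file gets the mapper that knows how to handle it.
MultipleInputs.addInputPath(job, new Path(args[0] + "/dev.txt"),
        TextInputFormat.class, Mapper1.class);
MultipleInputs.addInputPath(job, new Path(args[0] + "/test.txt"),
        TextInputFormat.class, Mapper2.class);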

Ruby converting String to File for uploading to FTP

Currently we have a method that returns a string with a formatted CSV file.
string = EXPORT.tickets
We need to upload this CSV file to an FTP server, like so:
ftp = Net::FTP.new(server, username, password)
ftp.putbinaryfile(string)
however, the string variable is obviously a string, and not a binary file as the putbinaryfile method expects. I see two ways to do this:
convert the string variable to a file first using File
convert the string directly to a file with something like StringIO
Do these seem like viable options? If so, how would I approach doing this? Thanks in advance!
EDIT:
Since the putbinaryfile method is looking for a file path rather than an actual file, it looks like my best bet will be to create a File from the string variable. Can anyone give an example of how this can be accomplished?
After talking to another developer, he gave me this solution, which I found to be better for my situation since the file did not exist already. It skips writing the string to a Tempfile and uses StringIO to upload it directly. His solution:
The Net::FTP#putbinaryfile method takes the name of a file on the local filesystem to copy to the remote filesystem. Now, if your data is in a string (and wasn't read from a file on the filesystem) then you may want to use Net::FTP#storbinary instead:
require 'stringio'
require 'net/ftp'
BLOCKSIZE = 512
data = StringIO.new("Hello, world!\n")
hostname = "ftp.domain.tld"
username = "username"
password = "password"
remote_filename = "something.txt"
Net::FTP.open(hostname, username, password) do |ftp|
# ...other ftp commands here...
ftp.storbinary("STOR #{remote_filename}", data, BLOCKSIZE)
# ...any other ftp commands...
end
The above avoids writing data that's not on disk to disk, just so you can upload it somewhere. However, if the data is already in a file on disk, you might as well just fix your code to reference its filename instead.
Something like this should cover most of the bases:
require 'tempfile'
temp_file = Tempfile.new('for_you')
temp_file.write(string)
temp_file.close
ftp.putbinaryfile(temp_file)
temp_file.unlink
Using Tempfile relieves you of a lot of issues regarding unique filenames, thread safety, etc. Garbage collection will ensure your file gets deleted, even if putbinaryfile raises an exception or similar perils.
The uploaded file will get a name like for_you.23423423.423.423.4, both locally and on the remote server. If you want it to have a specific name on the remote server like 'daily_log_upload', do this instead:
ftp.putbinaryfile(temp_file, 'daily_log_upload')
It will still have a unique name for the local temp file, but you don't care about that.

DistributedCache Hadoop - FileNotFound

I'm trying to place a file in the distributed cache. In order to do this I invoke my driver class using the -files option, something like:
hadoop jar job.jar my.driver.class -files MYFILE input output
The getCacheFiles() and the getLocalCacheFiles() return arrays of URIs/Paths containing MYFILE.
(E.g.: hdfs://localhost/tmp/hadoopuser/mapred/staging/knappy/.staging/job_201208262359_0005/files/histfile#histfile)
Unfortunately, when trying to retrieve MYFILE in the map task, it throws a FileNotFoundException.
I tried this in standalone (local) mode as well as in pseudo-distributed mode.
Do you know what might be the cause?
UPDATE:
The following three lines:
System.out.println("cache files:"+ctx.getConfiguration().get("mapred.cache.files"));
uris = DistributedCache.getLocalCacheFiles(ctx.getConfiguration());
for(Path uri: uris){
System.out.println(uri.toString());
System.out.println(uri.getName());
if(uri.getName().contains(Constants.PATH_TO_HISTFILE)){
histfileName = uri.getName();
}
}
print out this:
cache files:file:/home/knappy/histfile#histfile
/tmp/hadoop-knappy/mapred/local/archive/-7231_-1351_105/file/home/knappy/histfile
histfile
So, the file seems to be listed in the job.xml mapred.cache.files property and the local file seems to be present. Still, the FileNotFoundException is thrown.
First, check mapred.cache.files in your job's XML to see whether the file is in the cache.
Then you can retrieve it in your mapper:
...
Path[] files = DistributedCache.getLocalCacheFiles(context.getConfiguration());
File myFile = new File(files[0].getName());
//read your file content
...
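A hedged sketch of how that retrieval might look inside your Mapper subclass, using the new mapreduce API: one common cause of the FileNotFoundException above is opening just getName() (the bare file name), which only resolves when the file is symlinked into the task's working directory; using the full local path is the safer bet.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

@Override
protected void setup(Context context) throws IOException, InterruptedException {
    Path[] files = DistributedCache.getLocalCacheFiles(context.getConfiguration());
    if (files != null && files.length > 0) {
        // Use the full local path, not just the file name.
        try (BufferedReader reader = new BufferedReader(new FileReader(files[0].toString()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // ...use each line of the cached file...
            }
        }
    }
}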

Writing output to different folders hadoop

I want to write two different types of output from the same reducer, into two different directories.
I am able to use the MultipleOutputs feature in Hadoop to write to different files, but they both go to the same output folder.
I want to write each file from the same reduce to a different folder.
Is there a way for doing this?
If I try putting, for example, "hello/testfile" as the second argument, it shows an invalid argument error, so I am not able to write to different folders.
If the above case is not possible, then is it possible for the mapper to read only specific files from an input folder?
Please help me.
Thanks in advance!
Thanks for the reply. I am able to read a file successfully using the above method, but in distributed mode I am not able to do so. In the reducer, I have set:
mos.getCollector("data", reporter).collect(new Text(str_key), new Text(str_val));
(using MultipleOutputs). And in the JobConf I tried using:
FileInputFormat.setInputPaths(conf2, "/home/users/mlakshm/opchk285/data-r-00000*");
as well as
FileInputFormat.setInputPaths(conf2, "/home/users/mlakshm/opchk285/data*");
But, it gives the following error:
cause:org.apache.hadoop.mapred.InvalidInputException: Input Pattern hdfs://mentat.cluster:54310/home/users/mlakshm/opchk295/data-r-00000* matches 0 files
Question 1: Writing output files to different directories - you can do it using the following approaches:
1. Using MultipleOutputs class:
It's great that you are able to create multiple named output files using MultipleOutputs. As you know, we need to add this in your driver code:
MultipleOutputs.addNamedOutput(job, "OutputFileName", OutputFormatClass, keyClass, valueClass);
The API provides two overloaded write methods to achieve this.
multipleOutputs.write("OutputFileName", new Text(Key), new Text(Value));
Now, to write the output file to separate output directories, you need to use an overloaded write method with an extra parameter for the base output path.
multipleOutputs.write("OutputFileName", new Text(key), new Text(value), baseOutputPath);
Please remember to change your baseOutputPath in each of your implementations.
2. Rename/Move the file in driver class:
This is probably the easiest hack to write output to multiple directories. Use MultipleOutputs and write all the output files to a single output directory, but the file names need to be different for each category.
Assume that you want to create 3 different sets of output files, the first step is to register named output files in the driver:
MultipleOutputs.addNamedOutput(job, "set1", OutputFormatClass, keyClass, valueClass);
MultipleOutputs.addNamedOutput(job, "set2", OutputFormatClass, keyClass, valueClass);
MultipleOutputs.addNamedOutput(job, "set3", OutputFormatClass, keyClass, valueClass);
Also, create the different output directories or the directory structure you want in the driver code, along with the actual output directory:
Path set1Path = new Path("/hdfsRoot/outputs/set1");
Path set2Path = new Path("/hdfsRoot/outputs/set2");
Path set3Path = new Path("/hdfsRoot/outputs/set3");
The final important step is to rename the output files based on their names, once the job is successful:
FileSystem fileSystem = FileSystem.get(new Configuration());
if (jobStatus == 0) {
    // Get the output files from the actual output path
    FileStatus outputfs[] = fileSystem.listStatus(outputPath);
    // Iterate over all the files in the output path
    for (int fileCounter = 0; fileCounter < outputfs.length; fileCounter++) {
        // Based on each file name, rename the path.
        if (outputfs[fileCounter].getPath().getName().contains("set1")) {
            fileSystem.rename(outputfs[fileCounter].getPath(), new Path(set1Path + "/" + anyNewFileName));
        } else if (outputfs[fileCounter].getPath().getName().contains("set2")) {
            fileSystem.rename(outputfs[fileCounter].getPath(), new Path(set2Path + "/" + anyNewFileName));
        } else if (outputfs[fileCounter].getPath().getName().contains("set3")) {
            fileSystem.rename(outputfs[fileCounter].getPath(), new Path(set3Path + "/" + anyNewFileName));
        }
    }
}
Note: this will not add any significant overhead to the job, because we are only MOVING files from one directory to another. Choosing a particular approach depends on the nature of your implementation.
In summary, this approach basically writes all the output files under different names to the same output directory, and when the job completes successfully, we rename the files and move them to different output directories.
Question 2: Reading specific files from an input folder(s):
You can definitely read specific input files from a directory using the MultipleInputs class.
Based on your input path/file names you can pass the input files to the corresponding Mapper implementation.
Case 1: If all the input files ARE IN a single directory:
FileStatus inputfs[] = fileSystem.listStatus(inputPath);
for (int fileCounter = 0; fileCounter < inputfs.length; fileCounter++) {
    if (inputfs[fileCounter].getPath().getName().contains("set1")) {
        MultipleInputs.addInputPath(job, inputfs[fileCounter].getPath(), TextInputFormat.class, Set1Mapper.class);
    } else if (inputfs[fileCounter].getPath().getName().contains("set2")) {
        MultipleInputs.addInputPath(job, inputfs[fileCounter].getPath(), TextInputFormat.class, Set2Mapper.class);
    } else if (inputfs[fileCounter].getPath().getName().contains("set3")) {
        MultipleInputs.addInputPath(job, inputfs[fileCounter].getPath(), TextInputFormat.class, Set3Mapper.class);
    }
}
Case 2: If all the input files ARE NOT IN a single directory:
We can basically use the same approach as above even if the input files are in different directories: iterate over the base input path and check each file path name against your matching criteria.
Or, if the files are in completely different locations, the simplest way is to add each input path individually:
MultipleInputs.addInputPath(job, Set1_Path, TextInputFormat.class, Set1Mapper.class);
MultipleInputs.addInputPath(job, Set2_Path, TextInputFormat.class, Set2Mapper.class);
MultipleInputs.addInputPath(job, Set3_Path, TextInputFormat.class, Set3Mapper.class);
Hope this helps! Thank you.
Copy the MultipleOutputs code into your code base and loosen the restriction on allowable characters. I can't see any valid reason for the restrictions anyway.
Yes, you can specify that an input format only processes certain files:
FileInputFormat.setInputPaths(job, "/path/to/folder/testfile*");
If you do amend the code, remember the _SUCCESS file should be written to both folders upon successful job completion. While this isn't a requirement, it is a mechanism by which someone can determine whether the output in that folder is complete, and not 'truncated' because of an error.
Yes, you can do this. All you need to do is generate the file name for a particular key/value pair coming out of the reducer.
If you override MultipleTextOutputFormat's generateFileNameForKeyValue method, you can return a file name depending on which key/value pair you get. Here is a link that shows you how to do that:
https://sites.google.com/site/hadoopandhive/home/how-to-write-output-to-multiple-named-files-in-hadoop-using-multipletextoutputformat
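For reference, a minimal sketch of that override (old mapred API; the class name KeyBasedOutput is illustrative):
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

public class KeyBasedOutput extends MultipleTextOutputFormat<Text, Text> {
    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        // e.g. key "set1" and default name "part-00000" -> "set1/part-00000"
        return key.toString() + "/" + name;
    }
}
You would then wire it up on the JobConf with conf.setOutputFormat(KeyBasedOutput.class).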

How to overwrite/reuse the existing output path for Hadoop jobs again and again

I want to overwrite/reuse the existing output directory when I run my Hadoop job daily.
Actually the output directory will store summarized output of each day's job run results.
If I specify the same output directory it gives the error "output directory already exists".
How to bypass this validation?
What about deleting the directory before you run the job?
You can do this via shell:
hadoop fs -rmr /path/to/your/output/
or via the Java API:
// configuration should contain reference to your namenode
FileSystem fs = FileSystem.get(new Configuration());
// true stands for recursively deleting the folder you gave
fs.delete(new Path("/path/to/your/output"), true);
Jungblut's answer is your direct solution. Since I personally never trust automated processes to delete stuff, I'll suggest an alternative:
Instead of trying to overwrite, I suggest you make the output name of your job dynamic, including the time in which it ran.
Something like "/path/to/your/output-2011-10-09-23-04/". This way you can keep around your old job output in case you ever need to revisit in. In my system, which runs 10+ daily jobs, we structure the output to be: /output/job1/2011/10/09/job1out/part-r-xxxxx, /output/job1/2011/10/10/job1out/part-r-xxxxx, etc.
Hadoop's TextOutputFormat (which I guess you are using) does not allow overwriting an existing directory, probably to spare you the pain of finding out you mistakenly deleted something you (and your cluster) worked very hard on.
However, if you are certain you want your output folder to be overwritten by the job, I believe the cleanest way is to change TextOutputFormat a little, like this:
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.ReflectionUtils;

public class OverwriteTextOutputFormat<K, V> extends TextOutputFormat<K, V>
{
    @Override
    public RecordWriter<K, V> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException
    {
        Configuration conf = job.getConfiguration();
        boolean isCompressed = getCompressOutput(job);
        String keyValueSeparator = conf.get("mapred.textoutputformat.separator", "\t");
        CompressionCodec codec = null;
        String extension = "";
        if (isCompressed)
        {
            Class<? extends CompressionCodec> codecClass =
                getOutputCompressorClass(job, GzipCodec.class);
            codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, conf);
            extension = codec.getDefaultExtension();
        }
        Path file = getDefaultWorkFile(job, extension);
        FileSystem fs = file.getFileSystem(conf);
        // The second argument (true) is what allows overwriting an existing file.
        FSDataOutputStream fileOut = fs.create(file, true);
        if (!isCompressed)
        {
            return new LineRecordWriter<K, V>(fileOut, keyValueSeparator);
        }
        else
        {
            return new LineRecordWriter<K, V>(new DataOutputStream(codec.createOutputStream(fileOut)), keyValueSeparator);
        }
    }
}
Now you are creating the FSDataOutputStream (fs.create(file, true)) with overwrite=true.
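To wire it in (a sketch):
job.setOutputFormatClass(OverwriteTextOutputFormat.class);
Note that this only overwrites individual output files; FileOutputFormat's checkOutputSpecs will still reject a pre-existing output directory unless you also relax that check, as a later answer here does.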
Hadoop already supports the effect you seem to be trying to achieve by allowing multiple input paths to a job. Instead of trying to have a single directory of files to which you add more files, have a directory of directories to which you add new directories. To use the aggregate result as input, simply specify the input glob as a wildcard over the subdirectories (e.g., my-aggregate-output/*). To "append" new data to the aggregate as output, simply specify a new unique subdirectory of the aggregate as the output directory, generally using a timestamp or some sequence number derived from your input data (e.g. my-aggregate-output/20140415154424).
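A hedged sketch of that pattern (directory names are illustrative):
import java.text.SimpleDateFormat;
import java.util.Date;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Read every previous run's output via a glob over the subdirectories...
FileInputFormat.addInputPaths(job, "my-aggregate-output/*");
// ...and write this run to a fresh timestamped subdirectory.
String runId = new SimpleDateFormat("yyyyMMddHHmmss").format(new Date());
FileOutputFormat.setOutputPath(job, new Path("my-aggregate-output/" + runId));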
If one is loading the input file (with, e.g., appended entries) from the local file system to the Hadoop distributed file system like so:
hdfs dfs -put /mylocalfile /user/cloudera/purchase
Then one could also overwrite/reuse the existing output directory with -f. No need to delete or re-create the folder:
hdfs dfs -put -f /updated_mylocalfile /user/cloudera/purchase
Hadoop follows the philosophy of Write Once, Read Many times. Thus when you try to write to the directory again, it assumes it has to make a new one (write once), but it already exists, and so it complains. You can delete it via hadoop fs -rmr /path/to/your/output/. It's better to create a dynamic directory (e.g., based on a timestamp or hash value) in order to preserve data.
You can create an output subdirectory for each execution, based on the time. For example, let's say you are expecting the output directory from the user and set it as follows:
FileOutputFormat.setOutputPath(job, new Path(args[1]));
Change this to the following lines:
String timeStamp = new SimpleDateFormat("yyyy.MM.dd.HH.mm.ss", Locale.US).format(new Timestamp(System.currentTimeMillis()));
FileOutputFormat.setOutputPath(job, new Path(args[1] + "/" + timeStamp));
I had a similar use case; I used MultipleOutputs to resolve this.
For example, if I want different MapReduce jobs to write to the same directory /outputDir/: job 1 writes to /outputDir/job1-part1.txt, job 2 writes to /outputDir/job2-part1.txt (without deleting existing files).
In the main, set the output directory to a throwaway one (it can be deleted before a new job runs):
FileOutputFormat.setOutputPath(job, new Path("/randomPath"));
In the reducer/mapper, use MultipleOutputs and set the writer to write to the desired directory:
private MultipleOutputs mos;

public void setup(Context context) {
    mos = new MultipleOutputs(context); // keep as a field so write() can use it later
}
and:
mos.write(key, value, "/outputDir/fileOfJobX.txt");
However, my use case was a bit more complicated than that. If it's just writing to the same flat directory, you can write to a different directory and run a script to migrate the files, like: hadoop fs -mv /tmp/* /outputDir
In my use case, each MapReduce job writes to different sub-directories based on the value of the message being written. The directory structure can be multi-layered, like:
/outputDir/
    messageTypeA/
        messageSubTypeA1/
            job1Output/
                job1-part1.txt
                job1-part2.txt
                ...
            job2Output/
                job2-part1.txt
                ...
        messageSubTypeA2/
            ...
    messageTypeB/
        ...
Each MapReduce job can write to thousands of sub-directories, and the cost of writing to a tmp dir and moving each file to the correct directory is high.
I encountered this exact problem; it stems from the exception raised in checkOutputSpecs in the class FileOutputFormat. In my case, I wanted to have many jobs adding files to directories that already exist, and I guaranteed that the files would have unique names.
I solved it by creating an output format class which overrides only the checkOutputSpecs method and swallows (ignores) the FileAlreadyExistsException that's thrown where it checks whether the directory already exists.
import java.io.IOException;
import org.apache.hadoop.mapred.FileAlreadyExistsException;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class OverwriteTextOutputFormat<K, V> extends TextOutputFormat<K, V> {
    @Override
    public void checkOutputSpecs(JobContext job) throws IOException {
        try {
            super.checkOutputSpecs(job);
        } catch (FileAlreadyExistsException ignored) {
            // Swallow the exception: an existing output directory is fine here.
        }
    }
}
And then in the job configuration, I used LazyOutputFormat and also MultipleOutputs:
LazyOutputFormat.setOutputFormatClass(job, OverwriteTextOutputFormat.class);
You need to add this setting in your main class:
// Configure the output path from the filesystem into the job
Path outputPath = new Path(args[1]);
FileOutputFormat.setOutputPath(job, outputPath);
// Auto-delete the output dir if it already exists (true = recursive)
outputPath.getFileSystem(conf).delete(outputPath, true);
