storing a file in an already occupied location in Pig

It seems that Pig prevents us from reusing an output directory. In that case, I want to write a Pig UDF that accepts a filename as a parameter, opens the file within the UDF, and appends the contents to the file already at that location. Is this possible?
Thanks in advance

It may be possible, but I don't know that it's advisable. Why not just have a new output directory? For example, if ultimately you want all your results in /path/to/results, STORE the output of the first run into /path/to/results/001, the next run into /path/to/results/002, and so on. This way you can easily identify bad data from any failed jobs, and if you want all of it together, you can just do hdfs dfs -cat /path/to/results/*/*.
If you don't actually want to append but instead want to just replace the existing contents, you can use Pig's RMF shell command:
%declare output '/path/to/results';
RMF $output
STORE results INTO '$output';
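For completeness, if you do want to experiment with the append-from-a-UDF idea, a minimal sketch might look like the following (an illustration, not tested code: it assumes a Hadoop version whose FileSystem supports append(), and the class name and argument layout are made up for the example):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Hypothetical UDF: appends a line of text to an existing HDFS file.
public class AppendLine extends EvalFunc<Boolean> {
    @Override
    public Boolean exec(Tuple input) throws IOException {
        if (input == null || input.size() < 2) {
            return false;
        }
        String target = (String) input.get(0); // path of the file to append to
        String line = (String) input.get(1);   // content to append
        FileSystem fs = FileSystem.get(new Configuration());
        // append() only works if the cluster supports appends (dfs.support.append)
        try (FSDataOutputStream out = fs.append(new Path(target))) {
            out.writeBytes(line + "\n");
        }
        return true;
    }
}

Note that concurrent appends from many tasks will collide on the same file, which is one more reason this approach is hard to recommend.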

naming convention of part files in HDFS

When we run an INSERT INTO command in Hive, the execution creates multiple part files in HDFS,
e.g. part-*-***** or 000000_0, 000001_0, etc., or something else.
Is there a configuration/setting that controls the naming of these part files?
The cluster I work on creates 000000_0, 000001_0, 000000_1, etc. I would like to change this to part- or text- etc. so that it's easier for me to pick these files up and merge them if needed.
If there is a setting that can be set in Hive right before executing the HQL, that would be ideal.
Thanks in advance.
I think you should be able to use:
set mapreduce.output.basename=part-;
This won't work. The only way I have found is with a custom file writer.
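If the goal is just to collect the part files into one file afterwards, whatever they are named, one workaround (not from this thread) is hdfs dfs -getmerge, which concatenates every file under a directory into a single local file; the paths here are illustrative:

hdfs dfs -getmerge /user/hive/warehouse/mytable /tmp/mytable_merged.txt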

How can I set the output directory for a Pig STORE command?

I am using Pig via Azure HDInsight. I am able to submit a query that ends with a STORE, something like this:
STORE Ordered INTO 'results' USING PigStorage(',');
That works, storing the output in the directory /user/hdp/results/. However, I would like to control the output directory. I've tried both...
STORE Ordered INTO '/myOutDir/results' USING PigStorage(',');
and
STORE Ordered INTO 'wasb:///myOutDir/results' USING PigStorage(',');
Neither of those works. They both generate this error:
Ordered was unexpected at this time.
My question is, can I control the output directory for a Store command? Or does it have to go in the user directory?
If you want to set the output with a parameter, you can do this:
STORE Ordered INTO '$myOutDir/results' USING...
And then run your script with:
pig -param myOutDir=/blablabla/... myScript.pig
NB: you can also set a default value for your parameter; add this at the top of your script:
%default myOutDir '/blablabla/...'
Hope this helps, good luck :)
Use an output path of the form:
wasb[s]://<BlobStorageContainerName>@<StorageAccountName>.blob.core.windows.net/<path>
If your output path is /example/data/sample.log, then use either
wasb://mycontainer@mystorageaccount.blob.core.windows.net/example/data/sample.log
or
wasb:///example/data/sample.log
I hope this helps. :-)
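Putting that together with the STORE from the question (the container and account names here are placeholders):

STORE Ordered INTO 'wasb://mycontainer@mystorageaccount.blob.core.windows.net/myOutDir/results' USING PigStorage(',');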

PIG - LOAD continue on error

New to Pig.
I'm loading data into a relation like so:
raw_data = LOAD '$input_path/abc/def.*';
It works great, but if it can't find any files matching def.*, the entire script fails.
Is there a way to continue with the rest of the script when there are no matches, and just produce an empty set?
I tried to do:
raw_data = LOAD '$input_path/abc/def.*' ONERROR Ignore();
But that doesn't parse.
You could write a custom load UDF that returns either the file or an empty tuple.
http://wiki.apache.org/pig/UDFManual
No, there is no such feature, at least none that I've heard of.
Also, I would say that "producing an empty set" here amounts to "not running the script at all".
If you don't want to run a Pig script under some circumstances then I recommend using wrapper shell scripts or Pig embedding:
http://pig.apache.org/docs/r0.11.1/cont.html
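As a minimal sketch of the wrapper-script idea (the script and path names are illustrative): test whether any input matches before launching Pig, and skip the run otherwise:

if hadoop fs -ls "$INPUT_PATH/abc/def.*" > /dev/null 2>&1; then
    # at least one file matched the glob, so run the script
    pig -param input_path="$INPUT_PATH" myScript.pig
else
    echo "No input files matched; skipping the Pig run."
fi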

Pig removing parentheses when storing output

I'm new to programming Pig and currently I'm trying to implement my Hadoop jobs with Pig.
So far my Pig programs work. I've got some output files stored as *.txt with a semicolon as the delimiter.
My problem is that Pig adds parentheses around the tuple's...
Is it possible to store the output in a file without these parentheses? Only storing the values? Maybe by overriding PigStorage with a UDF?
Does anyone have a hint for me?
I want to read my output files into a RDBMS (Oracle) without the parentheses.
You probably need to write your own custom Storer. See: http://wiki.apache.org/pig/Pig070LoadStoreHowTo.
Shouldn't be too difficult to just write it as a plain CSV or whatever. There's also a pre-existing DBStorage class that you might be able to use to write directly to Oracle if you want.
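For the DBStorage route, a rough sketch might look like this (not from the original answer; the driver class, JDBC URL, credentials, and INSERT statement are all placeholders, and as far as I know the INTO location string is ignored by DBStorage):

REGISTER /path/to/piggybank.jar;
STORE results INTO 'ignored' USING org.apache.pig.piggybank.storage.DBStorage(
    'oracle.jdbc.OracleDriver',
    'jdbc:oracle:thin:@//dbhost:1521/ORCL',
    'dbuser', 'dbpass',
    'INSERT INTO my_table (col1, col2) VALUES (?, ?)');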
For people who find this topic first, the question is answered here:
Remove brackets and commas in output from Pig
use the FLATTEN operator in your script like this:
flattened = FOREACH [variable] GENERATE FLATTEN(($1, $2, $3));
STORE flattened INTO '[path]' USING PigStorage(',');
notice the second set of parentheses around the output you want to flatten.

Hadoop: Modify output file after it's written

Summary: can I specify some action to be executed on each output file after it's written with Hadoop Streaming?
Basically, this is a follow-up to the question Easiest efficient way to zip output of hadoop mapreduce. I want, for each key X, its values written to a file X.txt, compressed into an X.zip archive. But when we write the zip output stream, it's hard to tell anything about the key or the name of the resulting file, so we end up with an X.zip archive containing default-name.txt.
It'd be a very simple operation to rename the archive contents, but where can I place it? What I don't want to do is download all the zips from S3 and then upload them back.
Consider using a custom MultipleOutputFormat:
Basic use cases:
This class is used for a map reduce job with at least one reducer. The reducer wants to write data to different files depending on the actual keys.
It is assumed that a key (or value) encodes the actual key (value) and the desired location for the actual key (value).
This class is used for a map only job. The job wants to use an output file name that is either a part of the input file name of the input data, or some derivation of it.
This class is used for a map only job. The job wants to use an output file name that depends on both the keys and the input file name.
You may also control which key goes to which reducer (Partitioner).
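As a rough sketch of the first use case (assuming the old mapred API; the class name and the Text/Text type parameters are illustrative), subclass MultipleTextOutputFormat and override generateFileNameForKeyValue:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

public class KeyBasedOutputFormat extends MultipleTextOutputFormat<Text, Text> {
    // Route each record to a file named after its key (e.g. X.txt)
    // instead of the default part-00000 style names.
    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        return key.toString() + ".txt";
    }
}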
