How to overwrite the output directory in Hadoop? - hadoop

I'm reading a JSON file, making some modifications, and then I want to write the result back to the same JSON file. When I do that, MapReduce throws a "FileAlreadyExists" exception. Please give me a solution for overwriting the same file. I'm not interested in deleting the file and creating a new one; I just want to overwrite it.

HDFS does not allow writes to existing files. You have to delete the files first and re-write them. In-place updates to a file are not supported in HDFS: it was designed for high-throughput reads of existing data, so the feature you are expecting is not available.
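If you do end up deleting and re-writing (which is the only option in plain HDFS), the usual workaround is to remove the existing output path programmatically just before the job writes. A minimal sketch using the Hadoop FileSystem API; the path is only an example:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val conf = new Configuration()
val fs = FileSystem.get(conf)

// Example path; use your job's output file/directory here.
val output = new Path("/user/me/output.json")

// Recursive delete (second argument = true) so a directory output also goes away;
// this avoids the FileAlreadyExists exception on the next write.
if (fs.exists(output)) {
  fs.delete(output, true)
}
// ...then run the job (or re-write the modified JSON) against the same path.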

Related

How to read parquet file from s3 bucket in nifi?

I am trying to read Parquet files from an S3 bucket in NiFi.
To read the files I have used the ListS3 and FetchS3Object processors, followed by an ExtractAttribute processor. Up to that point it looked fine.
The files are parquet.gz files, and I have not been able to generate usable flowfiles from them. My final goal is to load the data into NoSQL (Snowflake).
FetchParquet works with HDFS, which we are not using.
My next option is to use the ExecuteScript processor (with Python) to read these Parquet files and write them back out as text.
Can somebody please suggest a workaround?
It depends on what you need to do with the Parquet files.
For example, if you just want to get them onto your local disk, then ListS3 -> FetchS3Object -> PutFile would work fine. That scenario is only moving bytes around, so it doesn't matter whether the content is Parquet or not.
If you need to actually interpret the Parquet data in some way, which it sounds like you do in order to get it into a database, then you need to use FetchParquet to convert from Parquet to some other format like Avro, JSON, or CSV, and then send that to one of the database processors.
You can use the Fetch/PutParquet processors, or any other HDFS processors, with S3 by configuring a core-site.xml with an S3 filesystem.
http://apache-nifi-users-list.2361937.n4.nabble.com/PutParquet-with-S3-td3632.html
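A minimal core-site.xml sketch for that last approach, assuming the s3a connector; the property names are the standard Hadoop s3a settings, and the credential values are placeholders:

<configuration>
  <!-- Use the S3A filesystem implementation for s3a:// URIs -->
  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- Placeholder credentials; prefer IAM roles or a credentials provider in practice -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>

You would then point the processor's Hadoop Configuration Resources property at this file and use s3a:// paths in the Parquet/HDFS processors.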

Multiple Input Paths in newHadoopAPI for spark to read Lzo files

I am working on a Spark application that has to read multiple directories (i.e. multiple paths) from an S3 bucket and HDFS. I read that the newHadoopAPI provides a great way to read Lzo compressed / indexed files in a performant way. But how do we read multiple folder paths / directories that contain several Lzo files and index files into an RDD using the newHadoopAPI?
The folder structure is like a Hive table partitioned on two columns.
Ex: as below, partitioned on date and batch:
/rootDirectory/date=20161002/batch=5678/001_0.lzo
/rootDirectory/date=20161002/batch=5678/001_0.lzo.index
/rootDirectory/date=20161002/batch=5678/002_0.lzo
/rootDirectory/date=20161002/batch=5678/002_0.lzo.index
/rootDirectory/date=20161002/batch=8765/001_0.lzo
/rootDirectory/date=20161002/batch=8765/001_0.lzo.index
/rootDirectory/date=20161002/batch=8765/002_0.lzo
/rootDirectory/date=20161002/batch=8765/002_0.lzo.index
..... and so on.
Now I use the below code to read data from S3. This treats both the .lzo and .lzo.index files as input, which crashes my application, since I don't want to read the .lzo.index files themselves, but just the .lzo files, using the index for speed.
val impInput = sparkSession.sparkContext.newAPIHadoopFile(
  "s3://my-bucket/myfolder/*/*",
  classOf[NonSplittableTextInputFormat],
  classOf[org.apache.hadoop.io.LongWritable],
  classOf[org.apache.hadoop.io.Text])
val impRDD = impInput.map(_._2.toString)
Could anyone please help me understand how I can do the following?
1). Read all (multiple) folders under the root for the Lzo files using the newHadoopAPI, so that I can use the .index files to my benefit.
2). Read the data from HDFS in a similar fashion.
Adding a suffix to your HDFS path may help.
val impInput = sparkSession.sparkContext.newAPIHadoopFile(
  "s3://my-bucket/myfolder/*/*.lzo",
  classOf[NonSplittableTextInputFormat],
  classOf[org.apache.hadoop.io.LongWritable],
  classOf[org.apache.hadoop.io.Text])
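The same approach should carry over to HDFS by swapping the URI scheme; a sketch under that assumption, reusing the NonSplittableTextInputFormat from the question (the root path and glob are just examples):

// The *.lzo glob keeps the .lzo.index files out of the input paths;
// the index files remain on disk next to the .lzo files for the input format to use.
val hdfsInput = sparkSession.sparkContext.newAPIHadoopFile(
  "hdfs:///rootDirectory/date=*/batch=*/*.lzo",
  classOf[NonSplittableTextInputFormat],
  classOf[org.apache.hadoop.io.LongWritable],
  classOf[org.apache.hadoop.io.Text])
val hdfsRDD = hdfsInput.map(_._2.toString)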

appending to ORC file

I'm new to big data and related technologies, so I'm unsure if we can append data to an existing ORC file. I'm writing the ORC file using the Java API, and when I close the Writer, I'm unable to open the file again to write new content to it, basically to append new data.
Is there a way I can append data to an existing ORC file, either using the Java API or Hive or any other means?
One more clarification: when saving a java.util.Date object into an ORC file, the ORC type is stored as:
struct<timestamp:struct<fasttime:bigint,cdate:struct<cachedyear:int,cachedfixeddatejan1:bigint,cachedfixeddatenextjan1:bigint>>,
and for java BigDecimal it's:
<margin:struct<intval:struct<signum:int,mag:struct<>,bitcount:int,bitlength:int,lowestsetbit:int,firstnonzerointnum:int>
Are these correct and is there any info on this?
Update 2017
Yes, now you can! Hive provides new support for ACID, and you can also append data to your table using append mode, mode("append"), with Spark.
Below is an example:
Seq((10, 20)).toDF("a", "b").write.mode("overwrite").saveAsTable("tab1")
Seq((20, 30)).toDF("a", "b").write.mode("append").saveAsTable("tab1")
sql("select * from tab1").show
Or see a more complete example with ORC here; below is an extract:
val command = spark.read.format("jdbc").option("url" .... ).load()
command.write.mode("append").format("orc").option("orc.compression","gzip").save("command.orc")
No, you cannot append directly to an ORC file. Nor to a Parquet file. Nor to any columnar format with a complex internal structure with metadata interleaved with data.
Quoting the official "Apache Parquet" site...
Metadata is written after the data to allow for single pass writing.
Then quoting the official "Apache ORC" site...
Since HDFS does not support changing the data in a file after it is written, ORC stores the top level index at the end of the file (...) The file’s tail consists of 3 parts; the file metadata, file footer and postscript.
Well, technically, nowadays you can append to an HDFS file; you can even truncate it. But these tricks are only useful for some edge cases (e.g. Flume feeding messages into an HDFS "log file", micro-batch-wise, with a flush from time to time).
For Hive transaction support they use a different trick: creating a new ORC file on each transaction (i.e. micro-batch) with periodic compaction jobs running in the background, à la HBase.
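If you are managing the files yourself rather than relying on Hive ACID, the same idea can be approximated with Spark: write each micro-batch as an additional ORC file in the same directory, then compact periodically by rewriting into fewer files. A rough sketch; the paths, the batchDF DataFrame and the coalesce factor are all just examples:

// Each micro-batch lands as a new ORC file alongside the existing ones.
batchDF.write.mode("append").format("orc").save("/data/events_orc")

// Periodic compaction: read the many small files and rewrite them
// as fewer, larger files under a separate path.
val all = spark.read.format("orc").load("/data/events_orc")
all.coalesce(4).write.mode("overwrite").format("orc").save("/data/events_orc_compacted")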
Yes, this is possible through Hive, in which you can basically 'concatenate' newer data. From the official Hive documentation: https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-WhatisACIDandwhyshouldyouuseit?

How do i get generated filename when calling the Spark SaveAsTextFile method

I'm new to Spark, Hadoop and everything that comes with them. My overall goal is to build a real-time application that gets tweets and stores them on HDFS in order to build a report based on HBase.
I'd like to get the generated filename when calling the saveAsTextFile RDD method, in order to import it into Hive.
Feel free to ask for further information, and thanks in advance.
saveAsTextFile will create a directory of part files. So if you give it the path "hdfs://user/NAME/saveLocation", a folder called saveLocation will be created, filled with part files. You should be able to load this into HBase simply by passing the directory name to HBase (this directory-of-files layout is the standard in Hadoop).
I do recommend you look into saving as Parquet though; it is much more useful than plain text files.
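If you need the actual generated file names (for example to point Hive at specific files), one option is to list the output directory afterwards with the Hadoop FileSystem API. A small sketch, assuming a spark-shell sc and using the example save location from above:

import org.apache.hadoop.fs.{FileSystem, Path}

// List the part files that saveAsTextFile produced in the output directory.
val fs = FileSystem.get(sc.hadoopConfiguration)
val partFiles = fs.listStatus(new Path("hdfs:///user/NAME/saveLocation"))
  .map(_.getPath.toString)
  .filter(_.contains("part-"))   // skip _SUCCESS and other marker files
partFiles.foreach(println)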
From what I understand, you saved your tweets to HDFS and now want the file names of those saved files. Correct me if I'm wrong.
val filenames = sc.wholeTextFiles("Your hdfs location where you saved your tweets").map(_._1)
This gives you an RDD of the file names, on which you can do your operations. I'm a newbie to Hadoop too, but anyway... hope that helps.

how to delete some data from hdfs file in Hadoop

I uploaded 50 GB of data to a Hadoop cluster, but now I want to delete the first row of the data file.
It would be time consuming to remove that data manually and then upload the file to HDFS again.
Please reply.
HDFS files are immutable (for all practical purposes).
You need to upload the modified file(s). You can do the change programmatically with an M/R job that does a near-identity transformation, e.g. running a streaming shell script that does sed, but the gist of it is that you need to create new files; HDFS files cannot be edited.
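As one concrete way to do that near-identity transformation, here is a rough sketch using Spark instead of a streaming job (the paths are examples, and sc is a spark-shell SparkContext): read the file, drop the very first line, and write the result to a new location.

// HDFS files cannot be edited in place, so the result goes to a new path.
val lines = sc.textFile("hdfs:///data/bigfile.txt")
val withoutFirstRow = lines.zipWithIndex()
  .filter { case (_, idx) => idx != 0 }   // drop only line 0 of the file
  .map { case (line, _) => line }
withoutFirstRow.saveAsTextFile("hdfs:///data/bigfile_without_first_row")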
