I'm new to Big Data and related technologies, so I'm unsure if we can append data to an existing ORC file. I'm writing the ORC file using the Java API, and when I close the Writer, I'm unable to open the file again to write new content to it, basically to append new data.
Is there a way I can append data to an existing ORC file, either using the Java API, Hive, or any other means?
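For reference, a minimal sketch of the kind of Writer usage in question, using the org.apache.orc core API (the schema, path and values below are placeholders, not the actual code from this question); once close() is called, the footer and metadata are written at the end of the file and the same file cannot be reopened to add rows:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.vector.LongColumnVector;
import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class OrcWriteOnce {
  public static void main(String[] args) throws Exception {
    // placeholder schema and output path
    TypeDescription schema = TypeDescription.fromString("struct<a:bigint,b:bigint>");
    Writer writer = OrcFile.createWriter(
        new Path("/tmp/data.orc"),
        OrcFile.writerOptions(new Configuration()).setSchema(schema));

    VectorizedRowBatch batch = schema.createRowBatch();
    LongColumnVector a = (LongColumnVector) batch.cols[0];
    LongColumnVector b = (LongColumnVector) batch.cols[1];
    int row = batch.size++;
    a.vector[row] = 10;
    b.vector[row] = 20;
    writer.addRowBatch(batch);

    // close() writes the file footer/metadata at the end of the file;
    // after this the file cannot be reopened to append more rows
    writer.close();
  }
}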
One more clarification: when saving a Java util.Date object into an ORC file, the ORC type is stored as:
struct<timestamp:struct<fasttime:bigint,cdate:struct<cachedyear:int,cachedfixeddatejan1:bigint,cachedfixeddatenextjan1:bigint>>,
and for a Java BigDecimal it's:
<margin:struct<intval:struct<signum:int,mag:struct<>,bitcount:int,bitlength:int,lowestsetbit:int,firstnonzerointnum:int>
Are these correct and is there any info on this?
Update 2017
Yes, now you can! Hive provides new support for ACID, but you can also append data to your table using append mode, mode("append"), with Spark.
Below is an example:
Seq((10, 20)).toDF("a", "b").write.mode("overwrite").saveAsTable("tab1")
Seq((20, 30)).toDF("a", "b").write.mode("append").saveAsTable("tab1")
sql("select * from tab1").show
Or a more complete example with ORC here; below is an extract:
val command = spark.read.format("jdbc").option("url" .... ).load()
command.write.mode("append").format("orc").option("orc.compression","gzip").save("command.orc")
No, you cannot append directly to an ORC file. Nor to a Parquet file. Nor to any columnar format with a complex internal structure with metadata interleaved with data.
Quoting the official "Apache Parquet" site...
Metadata is written after the data to allow for single pass writing.
Then quoting the official "Apache ORC" site...
Since HDFS does not support changing the data in a file after it is written, ORC stores the top level index at the end of the file (...) The file's tail consists of 3 parts; the file metadata, file footer and postscript.
Well, technically, nowadays you can append to an HDFS file; you can even truncate it. But these tricks are only useful for some edge cases (e.g. Flume feeding messages into an HDFS "log file", micro-batch-wise, flushing from time to time).
For Hive transaction support they use a different trick: creating a new ORC file on each transaction (i.e. micro-batch) with periodic compaction jobs running in the background, à la HBase.
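To make that concrete, here is a rough sketch of appending through Hive ACID instead of touching the ORC files directly; the JDBC URL, credentials and table name are placeholders, and it assumes a Hive version with transactional ORC tables enabled:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveAcidAppend {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    // placeholder connection details
    try (Connection con = DriverManager.getConnection(
             "jdbc:hive2://localhost:10000/default", "hive", "");
         Statement stmt = con.createStatement()) {

      // Before Hive 3, transactional tables must be bucketed and stored as ORC
      stmt.execute("CREATE TABLE IF NOT EXISTS events (id INT, payload STRING) "
          + "CLUSTERED BY (id) INTO 2 BUCKETS STORED AS ORC "
          + "TBLPROPERTIES ('transactional'='true')");

      // Each insert lands in a new delta ORC file; the background
      // compaction jobs later merge the deltas into base files
      stmt.execute("INSERT INTO events VALUES (1, 'first append')");
      stmt.execute("INSERT INTO events VALUES (2, 'second append')");
    }
  }
}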
Yes, this is possible through Hive, in which you can basically 'concatenate' newer data. From the official Hive documentation: https://cwiki.apache.org/confluence/display/Hive/Hive+Transactions#HiveTransactions-WhatisACIDandwhyshouldyouuseit?
We are working on a POC to figure out which compression technique is better for saving files in a compressed format while still getting good read performance out of it. We have 4 formats: *.gz, *.zlib, *.snappy & *.lz4.
We figured out that *.gz and *.zlib have a better compression ratio, but they have performance issues when reading the compressed data, since these files are not splittable and the number of mappers and reducers is always 1. These formats are accepted by default in Hive 0.14.
But we want to test other compression techniques for our text files, like *.lz4, *.lzo and *.snappy.
Can anyone help me with how to configure Hive to read input files compressed with *.lzo, *.snappy and *.lz4, and also Avro?
Are these compression techniques present in Hive 0.14, or do I need to upload the *.jar files (I'm a .NET guy, no idea about Java) and use a SerDe for serialization and deserialization?
Can anyone tell me whether Hive by default accepts file formats like *.lzo, *.snappy, *.lz4 and Avro for reading compressed files, or whether I need to configure Hive to read them? I'm looking for the best performance when reading compressed files. It's OK to compromise on compression ratio, but reading should perform better.
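For what it's worth, a small sketch of how Hadoop (and Hive on top of it) resolves a decompression codec from a file extension; the codec list and file names are only illustrative, and the LZO entry assumes the separate hadoop-lzo jar is on the classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class CodecCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Codecs bundled with Hadoop; the LzopCodec entry requires the separate
    // hadoop-lzo jar on the classpath (remove it if that jar is not installed)
    conf.set("io.compression.codecs",
        "org.apache.hadoop.io.compress.GzipCodec,"
      + "org.apache.hadoop.io.compress.DefaultCodec,"
      + "org.apache.hadoop.io.compress.SnappyCodec,"
      + "org.apache.hadoop.io.compress.Lz4Codec,"
      + "com.hadoop.compression.lzo.LzopCodec");

    CompressionCodecFactory factory = new CompressionCodecFactory(conf);
    for (String name : new String[] {"data.gz", "data.snappy", "data.lz4", "data.lzo"}) {
      CompressionCodec codec = factory.getCodec(new Path(name));
      System.out.println(name + " -> "
          + (codec == null ? "no codec registered" : codec.getClass().getName()));
    }
  }
}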
How can I create a Scalding Source that will handle conversions between Avro and Parquet?
The solution should:
1. Read from the Parquet format and convert to the Avro in-memory representation
2. Write Avro objects into a Parquet file
Note: I noticed Cascading has a module for leveraging Thrift and Parquet. It occurs to me that this would be a good place to start looking. I also opened a thread on google-groups/scalding-dev.
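This is not a Scalding Source, but a minimal sketch of the underlying Avro-to-Parquet round trip such a Source would have to wrap, assuming the org.apache.parquet:parquet-avro module (the schema and path are placeholders):

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.avro.AvroParquetReader;
import org.apache.parquet.avro.AvroParquetWriter;
import org.apache.parquet.hadoop.ParquetReader;
import org.apache.parquet.hadoop.ParquetWriter;

public class AvroParquetRoundTrip {
  public static void main(String[] args) throws Exception {
    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Event\",\"fields\":[{\"name\":\"id\",\"type\":\"long\"}]}");
    Path path = new Path("/tmp/events.parquet");

    // Write Avro GenericRecords into a Parquet file
    try (ParquetWriter<GenericRecord> writer =
             AvroParquetWriter.<GenericRecord>builder(path).withSchema(schema).build()) {
      GenericRecord rec = new GenericData.Record(schema);
      rec.put("id", 1L);
      writer.write(rec);
    }

    // Read the Parquet file back as Avro GenericRecords
    try (ParquetReader<GenericRecord> reader =
             AvroParquetReader.<GenericRecord>builder(path).build()) {
      GenericRecord rec;
      while ((rec = reader.read()) != null) {
        System.out.println(rec);
      }
    }
  }
}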
Try our latest changes in this fork -
https://github.com/epishkin/scalding/tree/parquet_avro/scalding-parquet
I am a newbie to Parquet!
I have tried the example code below to write data into a Parquet file using ParquetWriter:
http://php.sabscape.com/blog/?p=623
The above example uses ParquetWriter, but I want to use ParquetFileWriter to write data efficiently into Parquet files.
Please suggest an example of how we can write Parquet files using ParquetFileWriter.
You can probably get some ideas from a Parquet column reader that I wrote here.
I am new to HBase MapReduce and the Hadoop database. I need to read a raw text file from a MapReduce job and store the retrieved data into an HTable using the HBase MapReduce API.
I have been googling for many days, but I am not able to understand the exact flow. Can anyone please provide me with some sample code for reading data from a file?
I need to read data from text/CSV files. I can find some examples of reading data from the command prompt. Which method can we use to read an XML file, FileInputFormat or something else? Please help me learn the MapReduce API and provide me with simple read and write examples.
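A rough sketch of the kind of map-only job being described, using a recent HBase client API and assuming an existing table named "mytable" with a column family "cf" and input lines of the form rowkey,value (all of these names are placeholders):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class CsvToHBase {

  // Mapper: each input line "rowkey,value" becomes one Put into column family "cf"
  public static class CsvMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text line, Context context)
        throws IOException, InterruptedException {
      String[] parts = line.toString().split(",");
      Put put = new Put(Bytes.toBytes(parts[0]));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col1"), Bytes.toBytes(parts[1]));
      context.write(new ImmutableBytesWritable(Bytes.toBytes(parts[0])), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "csv-to-hbase");
    job.setJarByClass(CsvToHBase.class);
    job.setMapperClass(CsvMapper.class);
    job.setInputFormatClass(TextInputFormat.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));

    // Map-only job: Puts go straight from the mapper into the table "mytable"
    TableMapReduceUtil.initTableReducerJob("mytable", null, job);
    job.setNumReduceTasks(0);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}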
You can import your CSV data into HBase using the importtsv and completebulkload tools. importtsv loads CSVs into HFiles on HDFS, and completebulkload loads those HFiles into the specified HTable. You can use these tools both from the command line and from Java code. If this helps, let me know and I can provide sample code or commands.