Write Delta Encoded Parquet Files

I know that Apache Arrow's Parquet implementation can read spec-compliant delta-encoded files, but cannot write them out. I am wondering whether there is any commonly used open-source C++/Python library that can write out Parquet spec-compliant delta encoding.

There's a Rust library with Python bindings called delta-rs that has a file writer which can take an Apache Arrow Table or RecordBatch and write it to the Delta format. Note that it doesn't support transactions or checkpoints yet.
It seems like a pretty active project though, with recent contributions around Delta optimizations, so that's cool.
Note: the Delta writer feature of delta-rs is labeled Experimental, so it might not be completely stable.
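As a rough illustration, here is a minimal sketch of writing an Arrow table out with the deltalake Python bindings of delta-rs. The write_deltalake helper and its arguments are assumptions based on recent releases; the writer API has changed between versions, so check the documentation of whatever version you install.
Example (sketch):
# Minimal sketch: writing a pyarrow Table to a Delta table with delta-rs.
# The writer API is still evolving, so verify the call against your version's docs.
import pyarrow as pa
from deltalake import write_deltalake

table = pa.table({
    "id": pa.array([1, 2, 3], type=pa.int64()),
    "value": pa.array(["a", "b", "c"]),
})

# Writes Parquet data files plus the _delta_log transaction directory.
write_deltalake("/tmp/example_delta_table", table, mode="overwrite")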

Related

RETURNN Switchboard data processing

Could anybody give me pointers on how to process the Switchboard dataset for training with RETURNN? I did see the BlissDataset class, which seems to be designed for Switchboard, but it's not clear to me what I should include in the paths given in the example:
Example:
./tools/dump-dataset.py "
{'class':'BlissDataset',
'path': '/u/tuske/work/ASR/switchboard/corpus/xml/train.corpus.gz',
'bpe_file': '/u/zeyer/setups/switchboard/subwords/swb-bpe-codes',
'vocab_file': '/u/zeyer/setups/switchboard/subwords/swb-vocab'}"
The Switchboard dataset has several folders with audio files, e.g. swb1_d2/data/*.sph, and transcripts under swb1_LDC97S62/swb_ms98_transcriptions/**/*
I'm not quite sure how to proceed with this to get a dataset that can be used to train RETURNN.
At our group (RWTH Aachen University), we use the config as it was published on GitHub. As you can see, this one uses ExternSprintDataset. That dataset uses Sprint (publicly released as RWTH ASR (RASR), see here) as an external tool (run in a subprocess) to handle the data (feature extraction, etc.). Sprint gets a Bliss XML file which describes all the segments, with paths to the audio, audio offsets, and transcriptions, and it also gets further configs for the feature extraction and maybe other things. There is an open-source version of RASR which should work, but it might be a bit involved to get it running.
The BlissDataset was planned to be a simpler replacement for that. However, the implementation is incomplete. Also, you would still need to generate the Bliss XML yourself in some way (we have used some internal scripts of our own to prepare that, based on the official LDC data).
So, unfortunately, there is no simple way yet. Actually, I think the easiest way would be to come up with yet another custom format, which might be similar to the LibriSpeechDataset implementation, or maybe even the same, so that you could reuse LibriSpeechDataset, or at least parts of it. That dataset implementation takes the data in a zip format which contains the transcripts in txt files and the audio in ogg or wav files. It uses librosa to do MFCC feature extraction (or other feature types as well). I planned to implement that for Switchboard and then reproduce the results, but I have not had time yet and am not sure when I will get to it. If you want to try it on your own, I will be happy to help however I can. The starting point would be to look at LibriSpeechDataset and understand what its format looks like.
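For reference, the librosa-based feature extraction mentioned above boils down to something like the sketch below. This is not the actual LibriSpeechDataset code; the file name and parameter values are made up for illustration, and the RETURNN defaults may differ.
Example (sketch):
# Sketch of MFCC extraction with librosa, similar to what LibriSpeechDataset does.
# File path and parameter values are illustrative only.
import librosa

audio, sample_rate = librosa.load("some_utterance.wav", sr=16000)
mfccs = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=40)
# mfccs has shape (n_mfcc, num_frames); the transpose gives the usual
# (time, feature_dim) layout for the network input.
print(mfccs.T.shape)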

Change VkFormat at runtime

At the moment I am working on some performance measurements in Vulkan. I want to measure the difference between uncompressed formats such as VK_FORMAT_R32_SFLOAT and compressed formats such as VK_FORMAT_BC6H_UFLOAT_BLOCK. Is there a built-in feature in Vulkan that allows switching between formats at runtime?
Since the data is created at runtime, compressing it offline is unfortunately not an option. I also know that I could implement the compression myself, but BC6H is complex enough that I would like to avoid that if possible.
If Vulkan does not support this feature, is there some C++ lib that I could use instead?
Vulkan has no built-in on-the-fly image compression. A CPU-side encoder library such as DirectXTex, which implements BC6H/BC7 compression, should do what you want.

Uploading data to HDFS cluster from custom format

I have several machines with TBs of log data in a custom format which can be read with a C++ library. I want to upload all of the data to a Hadoop cluster (HDFS) while converting it to Parquet files.
This is an ongoing process (meaning every day I will get more data), not a one-time effort.
What is the best way to do this performance-wise (i.e., efficiently)?
Is the Parquet C++ library as good as the Java one (updates, bugs, etc.)?
The solution should handle tens of TBs per day, or even more in the future.
Log data arrives continuously and should be available immediately on the HDFS cluster.
Performance-wise, your best approach will be to gather the data in batches and then write out a new Parquet file per batch. If your data is received in single lines and you want to persist them immediately on HDFS, you could also write them out to a row-based format that supports single-line appends, e.g. Avro, and regularly run a job that compacts those files into a single Parquet file.
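A minimal sketch of that batching pattern in Python with pyarrow is shown below. The read_log_records() generator, the paths, and the batch size are placeholders: you would plug in your own reader for the custom log format and point the output at HDFS (for example via pyarrow's Hadoop filesystem support, or by moving finished files into HDFS).
Example (sketch):
# Sketch: collect decoded log records into batches and write one Parquet file per batch.
# read_log_records() is a placeholder for however you pull records out of your
# custom C++ reader; paths and sizes are illustrative only.
import time
import pyarrow as pa
import pyarrow.parquet as pq

def read_log_records():
    # Placeholder: replace with bindings to your custom C++ log reader.
    yield {"ts": 0, "msg": "example line"}

def write_batch(records, out_dir):
    # records: list of dicts sharing a fixed schema, e.g. {"ts": ..., "msg": ...}
    table = pa.Table.from_pylist(records)
    filename = "{}/logs-{}.parquet".format(out_dir, int(time.time() * 1000))
    pq.write_table(table, filename)   # one immutable Parquet file per batch

batch = []
for record in read_log_records():
    batch.append(record)
    if len(batch) >= 1000000:         # tune so the resulting files are reasonably large
        write_batch(batch, "/data/staging")
        batch = []
if batch:                             # flush whatever is left at the end
    write_batch(batch, "/data/staging")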
Library-wise, parquet-cpp is in much more active development at the moment than parquet-mr (the Java library). This is mainly because active parquet-cpp development (re)started about 1.5 years ago (winter/spring 2016). So updates to the C++ library will happen very quickly at the moment, while the Java library is very mature, having had a huge user base for quite some years. There are some features, like predicate pushdown, that are not yet implemented in parquet-cpp, but these are all on the read path, so they don't matter for writing.
We are now at a point with parquet-cpp where it already runs very stably in different production environments, so in the end, your choice between the C++ and the Java library should mainly depend on your system environment. If all your code currently runs in the JVM, then use parquet-mr; otherwise, if you're a C++/Python/Ruby user, use parquet-cpp.
Disclaimer: I'm one of the parquet-cpp developers.

MFC CDocument: How to read contents of database files created by defunct app?

I have almost zero experience coding in Visual Studio, MFC, etc. But I've got several data files that were created in a now-defunct MFC application, which I need to migrate to another format.
Unfortunately there's really no good way, within the application itself, to extract the data (short of copy-pasting hundreds or even thousands of records individually). And viewing the files themselves, e.g. in a hex editor, has proven fruitless; even though the raw data stored by the app is text-based, the database files are encoded in some cryptic binary format.
So far I've been able to determine that the app was written using MFC and that it uses the CDocument class (or a simple derivative thereof) to store the files. I understand that CDocument-based data files have something to do with serializing the data, but I'm not sure how to make sense of the encoding.
Does anyone know enough about MFC to explain to me how CDocument actually works?
Does anyone have any ideas on how I might be able to decode these files to extract the text?
I once faced an almost identical scenario. I eventually worked out the code to deserialize the data, but it wasn't easy.
Write a small MFC application to do the work; that way you can leverage the same serialization code that the original app used. The topic of reverse engineering a data format is far too complex to answer here. The data is probably not encrypted; more likely it is compressed.
If you're an experienced programmer you should be able to read the MFC source code, then apply that knowledge to the raw data. Not everything can be heuristically determined just by observing the raw data, but if you have an independent way of determining the actual content, it's certainly possible with sufficient work.

Backward compatibility of Hadoop Streaming

AFAIK, Hadoop Streaming only supports text input, which means the data is organized by lines. But the mapper code becomes messy if we want backward compatibility, i.e. supporting different versions of log lines in the same mapper program written in C++.
I used to consider Avro or Protobuf, but it seems that they are not supported in streaming mode. Is that true?
And is there any other solution?
Other input/output formats can also be used along with Hadoop Streaming.
Avro support has been added for Hadoop Streaming; see AVRO-808 and AVRO-830. This thread might also be useful.
I could not find InputFormat and OutputFormat classes for Protobuf, so they would need to be custom-built.
Just for information, Hadoop Streaming supports binary input/output.
Look for the -io rawbytes option.
I created a prototype which was able to consume SequenceFiles (I think; it was long ago).
I abandoned the idea because I had to deserialize Java Hadoop *Writables from the stream, and C#'s BinaryReader uses little-endian encoding while Java uses big-endian, so the mapper became more complicated than it should be.
Anyway, it is possible.
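To illustrate the endianness point: with -io rawbytes, each key and each value arrives on the mapper's stdin as a 4-byte length followed by that many raw bytes, and the length is written by Java in big-endian order, so the mapper has to decode it explicitly. A rough sketch in Python (the record handling is a placeholder, and the exact framing should be checked against the Streaming docs for your Hadoop version):
Example (sketch):
# Sketch of reading Hadoop Streaming "-io rawbytes" input on stdin.
# Assumed framing: 4-byte big-endian length, then that many bytes,
# alternating key / value.
import sys
import struct

def read_record(stream):
    header = stream.read(4)
    if len(header) < 4:
        return None                          # end of input
    (length,) = struct.unpack(">i", header)  # big-endian, as Java writes it
    return stream.read(length)

stdin = sys.stdin.buffer
while True:
    key = read_record(stdin)
    if key is None:
        break
    value = read_record(stdin)
    # ... handle (key, value), e.g. emit text key/value pairs for the reducer ...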
