I have a service that runs over HTTP, accepts file chunks (usually 10 MB each), and stores them on a remote file system. Together, the chunks make up a complete file.
I would like to generate some metadata about the file from the chunks. Specifically, I would like to generate the total file size and the MD5 checksum for the file.
The end file can be relatively large (500+ MB). Is there a way to iteratively generate the checksum in a distributed manner? For example, let's say I have two web servers running this service behind a load balancer that distributes the requests between them. Is it possible to generate the MD5 checksum for the complete file on the fly using something like a shared Redis server?
I am trying to avoid caching the chunks locally on disk or querying the remote data store for the actual file contents after they have been fully uploaded.
You can do this with a Merkle tree, the way Cassandra does.
Quoting from Amazon's Dynamo paper, section 4.7:
To detect the inconsistencies between replicas faster and to minimize the amount of transferred data, Dynamo uses Merkle trees. A Merkle tree is a hash tree where leaves are hashes of the values of individual keys. Parent nodes higher in the tree are hashes of their respective children. The principal advantage of Merkle tree is that each branch of the tree can be checked independently without requiring nodes to download the entire [...] data set.
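To make this concrete for your chunk case, here is a toy sketch (not code from Dynamo or Cassandra) that combines per-chunk MD5 digests into a single Merkle root. It assumes each server records the MD5 of every chunk it stores, keyed by chunk index, e.g. in a shared Redis hash. Note the root is a checksum over the chunk hashes, not the MD5 of the concatenated file, so both sides have to compute it the same way when comparing.

import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;
import java.util.SortedMap;

public class MerkleRoot {
    // Builds a Merkle root from per-chunk MD5 digests ordered by chunk index.
    // An unpaired hash at the end of a level is promoted to the next level unchanged.
    public static byte[] root(SortedMap<Integer, byte[]> chunkHashes) throws Exception {
        if (chunkHashes.isEmpty()) throw new IllegalArgumentException("no chunks");
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        List<byte[]> level = new ArrayList<>(chunkHashes.values());
        while (level.size() > 1) {
            List<byte[]> parents = new ArrayList<>();
            for (int i = 0; i + 1 < level.size(); i += 2) {
                md5.reset();
                md5.update(level.get(i));
                md5.update(level.get(i + 1));
                parents.add(md5.digest()); // hash of the two children
            }
            if (level.size() % 2 == 1) parents.add(level.get(level.size() - 1));
            level = parents;
        }
        return level.get(0);
    }
}

The total file size can be tracked the same way, by atomically adding each chunk's length to a counter in the shared store as the chunk arrives.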
Found this little gem. It allows you to calculate an MD5 sum incrementally, using a database.
https://github.com/jarl-dk/digest_extensions
Apache Ignite has two concepts: one of them is the near cache, and the other is the CacheMode enumeration.
What is the main difference between the two concepts?
A near cache is a local hot cache that keeps frequently accessed data. It significantly speeds up data processing by saving network round-trips.
CacheMode defines how your data is stored. It can be LOCAL for a single node, which means the data is not distributed in the grid. The other two, PARTITIONED and REPLICATED, mean respectively that the cache data is divided between nodes into roughly equal parts (called partitions), or that each node keeps the full data of that cache.
PARTITIONED lets you keep more data in the grid than fits on a single machine; REPLICATED gives maximum data survivability (if all nodes crash except one, you will not lose your data).
You can find more details in the documentation: https://apacheignite.readme.io/docs/near-caches and https://apacheignite.readme.io/docs/cache-modes
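As a rough illustration (Java API; the cache name and values below are placeholders), picking a cache mode and enabling a near cache looks roughly like this:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class IgniteCacheModes {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setCacheMode(CacheMode.PARTITIONED);            // or REPLICATED / LOCAL
        cacheCfg.setNearConfiguration(new NearCacheConfiguration<>()); // local "hot" copy

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);
            cache.put(1, "value");
            cache.get(1); // repeated reads are served from the near cache
        }
    }
}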
I am looking for an in-memory cache solution that can handle big data (<5 GB). For a user-entered search term, the database (Elasticsearch) will return a large amount of data, which the tool will analyze and show across different web pages of the tool. Now my problem is that I want to cache this big data temporarily until the user session ends, so that I don't have to fetch it again from Elasticsearch every time the user opens a new page. It has to be in-memory because a disk-based cache would take over a minute, which would be very slow.
I initially thought of memcached, but it has a maximum limit of 128 MB. After reading quite a bit, Redis seems suitable, but it is unclear to me whether a bunch of Redis nodes can work in tandem or not. Is it possible to set up a pool of many Redis nodes so that a suitable node is automatically chosen on SET and the data returned upon GET, without me having to specify the node?
TL;DR
Problem: Cache big data (<5GB) in an in-memory cache
Possible solution: Redis
Question: Can I pool a bunch of Redis nodes so that I can fetch a key stored in any of them without specifying a particular node? I don't need to distribute my data, since the data for a single user will fit into the RAM of a single node.
A Redis Cluster sounds like a good fit for your use case!
Redis Cluster provides a mechanism for data sharding by means of hash slots. These slots are distributed evenly over the nodes in your cluster when you set it up.
Whenever you store a value in the cluster, the hash slot for the given key is calculated and the data is forwarded to the responsible node. Querying your data afterwards works the same way. So the answer to your question is certainly yes.
However, the max value size per key is 512 MB. I'm not sure if I got your storage requirement correctly; I assume 5 GB is the estimated total amount over all users.
Check out the Redis Cluster tutorial.
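For illustration only, here is what that looks like from a Java client using Jedis (hosts, ports, and key names are placeholders):

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class SessionCache {
    public static void main(String[] args) throws Exception {
        // Seed nodes; the client discovers the rest of the cluster and the slot map.
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("127.0.0.1", 7000));
        nodes.add(new HostAndPort("127.0.0.1", 7001));

        try (JedisCluster cluster = new JedisCluster(nodes)) {
            // The key is hashed to one of 16384 slots and the command is routed
            // to whichever node owns that slot -- no node is named explicitly.
            cluster.set("session:42:results", "serialized search results");
            String cached = cluster.get("session:42:results");
            System.out.println(cached);
        }
    }
}

Keep in mind the 512 MB per-value limit mentioned above, so a very large result set may need to be split across several keys.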
You can also look into NCache (.NET) / TayzGrid (Java) by Alachisoft.
Both of these solutions provide distributed caching with dynamic clustering, which allows you to add or remove nodes in the cluster at runtime without losing any data. An intelligent client also makes sure requests go to the appropriate node to fetch or store the record for any key.
I'm new to Hadoop. I have a big file which won't be updated, and a bunch of small files. My intention is to put the big file on many different datanodes according to the hash values of the records in it. Every time I search, I want to dispatch the records in a small file to datanodes according to their hash values and then do a binary search locally.
My questions are:
How do I assign records to datanodes according to the hash values of the records?
How do I make sure the binary search is performed locally?
NOTE: I've figured it out by using both TotalOrderPartitioner and MapFile.
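For reference, a rough sketch of how those two pieces fit together (Hadoop MapReduce API; the paths, key/value types, and sampling parameters below are placeholders, and the rest of the job setup is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

public class SortedLookupSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // 1) Partition the big file by key range so each reducer writes one sorted piece.
        //    (Input/output formats, mapper, reducer, and reduce count are omitted here.)
        Job job = Job.getInstance(conf, "partition-big-file");
        job.setPartitionerClass(TotalOrderPartitioner.class);
        TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
                new Path("/tmp/partitions.lst"));
        InputSampler.writePartitionFile(job,
                new InputSampler.RandomSampler<Text, Text>(0.01, 10000, 10));

        // 2) Later, look a key up in the MapFile a reducer produced; the MapFile
        //    index avoids scanning the whole file (a binary-search-style lookup).
        MapFile.Reader reader = new MapFile.Reader(new Path("/data/sorted/part-r-00000"), conf);
        Text value = new Text();
        reader.get(new Text("someKey"), value);
        reader.close();
    }
}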
My job is to design a distributed system for static image/video files. The size of the data is about tens of terabytes. It's mostly for HTTP access (so no processing of the data, or only simple processing such as resizing; this isn't important anyway, because it can be done directly in the application).
To be a little more clear, it's a system that:
Must be distributed (horizontal scale), because the total size of data is very big.
Primarily serves small static files (such as images, thumbnails, short videos) via HTTP.
Generally, no requirement on processing the data (thus MapReduce is not needed)
HTTP access to the data should be easy to set up.
(Should have) good throughput.
I am considering:
A native network file system: but it seems not feasible, because the data cannot fit on one machine.
The Hadoop filesystem. I have worked with Hadoop MapReduce before, but I have no experience using Hadoop as a static file repository for HTTP requests, so I don't know whether it's possible or recommended.
MogileFS. It seems promising, but I feel that using MySQL to manage local files (on a single machine) will create too much overhead.
Any suggestions, please?
I am the author of Weed-FS. For your requirements, Weed-FS is ideal. Hadoop cannot handle many small files: in addition to the reasons you give, each file needs an entry in the master, and if the number of files is large, the HDFS master node cannot scale.
Weed-FS gets faster when compiled with the latest Go releases.
Many new improvements have been made to Weed-FS recently. Now you can test and compare very easily with the built-in upload tool, which uploads all files under a directory recursively.
weed upload -dir=/some/directory
You can then compare with "du -k /some/directory" to see the original disk usage and "ls -l /your/weed/volume/directory" to see the Weed-FS disk usage.
And I suppose you would need replication that is data-center- and rack-aware, etc. They are in now!
Hadoop is optimized for large files, e.g. its default block size is 64 MB. A lot of small files are both wasteful and hard to manage on Hadoop.
You can take a look at other distributed file systems, e.g. GlusterFS.
Hadoop has a REST API for accessing files. See this entry in the documentation. I feel that Hadoop is not meant for storing a large number of small files.
HDFS is not geared toward efficient access to small files: it is primarily designed for streaming access to large files. Reading through small files normally causes lots of seeks and lots of hopping from datanode to datanode to retrieve each small file, all of which is an inefficient data access pattern.
Every file, directory and block in HDFS is represented as an object in the namenode's memory, each of which occupies about 150 bytes. The block size is 64 MB, so even a 10 KB file still takes up a full block entry in the namenode's metadata (the block on disk only stores the 10 KB, but the per-block bookkeeping overhead is the same as for a 64 MB block).
If the files are very small and there are a lot of them, then each map task processes very little input, and there are many more map tasks, each of which imposes extra bookkeeping overhead. Compare a 1 GB file broken into sixteen 64 MB blocks with 10,000 or so 100 KB files: the 10,000 files use one map each, and the job time can be tens or hundreds of times slower than for the equivalent job with a single input file.
In "Hadoop Summit 2011", there was this talk by Karthik Ranganathan about Facebook Messaging in which he gave away this bit: Facebook stores data (profiles, messages etc) over HDFS but they dont use the same infra for images and videos. They have their own system named Haystack for images. Its not open source but they shared the abstract design level details about it.
This brings me to weed-fs: an open source project for inspired by Haystacks' design. Its tailor made for storing files. I have not used it till now but seems worth a shot.
If you are able to batch the files and have no requirement to update a batch after adding it to HDFS, then you could combine multiple small files into a single, larger binary sequence file. This is a more efficient way to store small files in HDFS (as Arnon points out above, HDFS is designed for large files and becomes very inefficient when working with small files).
This is the approach I took when using Hadoop to process CT images (details at Image Processing in Hadoop). Here the 225 slices of the CT scan (each an individual image) were compiled into a single, much larger, binary sequence file for long streaming reads into Hadoop for processing.
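A minimal sketch of that packing step, assuming the small files sit in a local directory and using the filename as the key (the paths below are placeholders):

import java.io.File;
import java.nio.file.Files;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class PackSmallFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path out = new Path("/data/packed/images.seq");
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(out),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
            for (File f : new File("/local/images").listFiles()) {
                // one record per small file: filename -> raw bytes
                byte[] bytes = Files.readAllBytes(f.toPath());
                writer.append(new Text(f.getName()), new BytesWritable(bytes));
            }
        }
    }
}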
Hope this helps!
G
I am working on HBase. I have a query regarding how HBase stores data in sorted order with an LSM tree.
As per my understanding, HBase uses an LSM tree for large-scale data processing. When data comes from the client, it is first stored in memory sequentially, then sorted and stored as a B-tree in a store file. The store file is then merged with the on-disk B-tree (of keys). Is that correct? Am I missing something?
If yes, then in a cluster environment there are multiple RegionServers that take the client requests. In that case, how do all the HLogs (one per RegionServer) get merged with the on-disk B-tree (since the existing keys are spread across all the datanode disks)?
Or does an HLog only merge its data with the HFiles of the same RegionServer?
You can take a look at these two articles, which describe exactly what you want:
http://blog.cloudera.com/blog/2012/06/hbase-io-hfile-input-output/
http://blog.cloudera.com/blog/2012/06/hbase-write-path/
In brief:
The client sends data to the region server responsible for handling the key
(.META. contains the key ranges for each region)
The user operation (e.g. put) is written to the Write-Ahead Log (WAL, the HLog)
(The log is used just for "safety": if the region server crashes, the log is replayed to recover data not yet written to disk)
After writing to the log, data is also written to the MemStore
...once the memstore reaches a threshold (conf property)
the memstore is flushed to disk, creating a single hfile
...when the number of hfiles grows too large (conf property), compaction (merge) kicks in
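To connect the steps above to client code, here is a minimal put using the HBase Java client (the table and column family names below are made up):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class WritePathExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("events"))) {
            Put put = new Put(Bytes.toBytes("row-001"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"), Bytes.toBytes("hello"));
            // The client locates the region server owning this key (via meta);
            // that server appends the mutation to its WAL and then to its MemStore.
            table.put(put);
        }
    }
}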
In terms of on disk data structure:
http://blog.cloudera.com/blog/2012/06/hbase-io-hfile-input-output/
The article above covers the hfile format...
it's an append-only format, and can be seen as a B+tree (keeping in mind that this B+tree cannot be modified in place)
The HLog is only used for "safety"; once the data is written to the hfiles, the logs can be thrown away
According to the LSM-tree model, in HBase the data consists of two parts: an in-memory tree, which contains the most recent updates to the data, and a disk store tree, which arranges the rest of the data into immutable, sequential B-tree-like files on disk. From time to time the HBase service decides that it has enough changes in memory to flush them into file storage. In that case it performs a rolling merge of data from memory to disk, executing an operation similar to the merge step of the merge sort algorithm.
In the HBase infrastructure this data model is built on several components, which organize all the data across the cluster as a collection of LSM-trees located on slave servers and driven by the main master service. The system is driven by the following components:
HMaster - the primary HBase service, which maintains the correct state of the slave RegionServer nodes by managing and balancing the data among them. Besides that, it drives changes to metadata in the storage, like table or column creations and updates.
ZooKeeper - a distributed coordination service used by HBase services and their clients to store reconciled, up-to-date information about naming and configuration.
RegionServers - HBase worker nodes, which perform the management and storage of pieces of the information in LSM-tree fashion
HDFS - used by the RegionServers behind the scenes for the actual storage of the data
At a low level, most of the HBase functionality is located within the RegionServer, which performs the read-write work on the tables. Every table can technically be distributed across different RegionServers as a collection of separate pieces called HRegions. A single RegionServer node can hold several HRegions of one table. Each HRegion holds a certain range of rows, shared between the memory and disk space and sorted by the key attribute. These ranges do not intersect between different regions, so we can rely on their sequential behavior across the cluster. An individual RegionServer's HRegion includes the following parts:
Write-Ahead Log (WAL) file - the first place where data is persisted on every write operation, before it gets into memory. As I've mentioned earlier, the first part of the LSM-tree is kept in memory, which means it can be affected by external factors such as power loss. Keeping a log of such operations in a separate place allows this part to be restored easily without any losses.
MemStore - keeps a sorted collection of the most recent updates to the information in memory. It is the actual implementation of the first part of the LSM-tree structure described earlier. It periodically performs rolling merges into the store files, called HFiles, on disk.
HFile - represents a small piece of data received from the MemStore and saved in HDFS. Each HFile contains a sorted KeyValue collection and a B+tree index, which allows the data to be sought without reading the whole file. Periodically, HBase performs merge (compaction) operations on these files to make them fit the configured size of a standard HDFS block and avoid the small-files problem.
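For completeness, here is what a read looks like from the client side (a sketch using the HBase Java client; the table and column names are hypothetical). The RegionServer that owns the key range answers the request, transparently merging the MemStore and the relevant HFiles:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ReadPathExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("events"))) {
            Result result = table.get(new Get(Bytes.toBytes("row-001")));
            // The value may come from the MemStore (a recent write) or from an HFile
            // on HDFS; the HFile's index keeps the seek cheap.
            byte[] payload = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("payload"));
            System.out.println(payload == null ? "not found" : Bytes.toString(payload));
        }
    }
}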
You can walk through these elements manually by pushing data through the whole LSM-tree process. I described how to do it in my recent article:
https://oyermolenko.blog/2017/02/21/hbase-as-primary-nosql-hadoop-storage/