How should I use Azure Blob Containers? - azure-blob-storage

My current approach is that I have a few containers:
raw (the actual raw files or exports, separated into folders like servicenow-cases, servicenow-users, playvox-evaluations, etc.)
staging (lightly transformed raw data)
analytics (these are Parquet file directories which consolidate and partition the files)
visualization (we use a 3rd party tool which syncs with Azure Blob, but only CSV files currently. This is almost the exact same as the analytics container)
However, it could also make some sense to create more containers and kind of use them like I would use a database schema. For example, one container for ServiceNow data, another for LogMeIn data, another for our telephony system, etc.
Is there any preferred approach?

Based on your description, it sounds like you are torn between using a small number of containers to hold a large number of blobs, or a large number of containers each holding a small number of blobs. If parallelism and scalability are your only concerns, you can rest assured and simply design whatever storage structure suits you, because partitioning in Azure Blob Storage happens at the blob level, not at the container level.
Each of the two approaches has its advantages and disadvantages.
With a small number of containers, you save on the cost of creating containers (the create-container operation is billed). On the other hand, when you list the blobs in a container, every object in it is returned; if you organize blobs into virtual folders (prefixes) inside, you have to keep drilling into subsets, so listing performs worse than with the many-containers approach. Also, any security boundary you set applies to every blob in that container, which is not necessarily what you want.
With a large number of structured containers, you get more security boundaries to work with (custom access permissions, access policies, SAS signatures per container). Listing blobs is also simpler, with no prefix juggling to reach the subset you want. The downside is the extra cost of creating containers (in extreme cases it can add up; in general it does not matter much). A site that calculates costs: https://azure.microsoft.com/en-us/pricing/calculator/?cdn=disable
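To make the contrast concrete, here is a hedged sketch using the azure-storage-blob Python SDK: listing by virtual-folder prefix inside a single raw container versus scoping a read-only SAS to a hypothetical per-source container. The connection-string environment variable and the "servicenow" container name are assumptions for illustration, not part of the question.

```python
# A minimal sketch using the azure-storage-blob SDK; connection string,
# container names and prefixes are placeholders taken from the question.
import os
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    BlobServiceClient,
    ContainerSasPermissions,
    generate_container_sas,
)

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)

# Few-containers approach: one "raw" container, virtual folders per source.
raw = service.get_container_client("raw")
for blob in raw.list_blobs(name_starts_with="servicenow-cases/"):
    print(blob.name)

# Many-containers approach: the container itself is the security boundary,
# so a read-only SAS can be scoped to a single source's container.
sas = generate_container_sas(
    account_name=service.account_name,
    container_name="servicenow",  # hypothetical per-source container
    account_key=service.credential.account_key,  # shared-key credential assumed
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
```

Either layout works with the SDK; the per-container SAS is the main thing you give up when everything lives in one container.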

Related

Highload data update architecture

I'm developing a parcel tracking system and thinking about how to improve its performance.
Right now we have one table in postgres named parcels containing things like id, last known position, etc.
Every day about 300,000 new parcels are added to this table. The parcel data is taken from an external API. We need to track all parcel positions as accurately as possible and reduce the time between API calls for any specific parcel.
Given such requirements what could you suggest about project architecture?
Right now the only solution I can think of is the producer-consumer pattern: one process selects all records from the parcels table in an infinite loop and then distributes the fetch tasks with something like Celery.
Major downsides of this solution are:
possible deadlocks, since data about the same parcel can be fetched at the same time on different machines;
the need to control the queue size.
This is a very broad topic, but I can give you a few pointers. Once you reach the limits of vertical scaling (scaling by picking more powerful machines) you have to scale horizontally (scaling by adding more machines for the same task). So to be able to design scalable architectures you have to learn about distributed systems. Here are some topics to look into:
Infrastructure & processes for hosting distributed systems, such as Kubernetes, Containers, CI/CD.
Scalable forms of persistence. For example different forms of distributed NoSQL like key-value stores, wide-column stores, in-memory databases and novel scalable SQL stores.
Scalable forms of data flows and processing, for example event-driven architectures using distributed message/event queues.
For your specific problem with parcels I would recommend considering a key-value store for your position data. These can scale to billions of insertions and retrievals per day (when querying by key).
It also sounds like your data is somewhat temporary and could be kept in in-memory hot storage while the parcel has not yet been delivered (and archived afterwards). A distributed in-memory DB can scale even further in terms of insertions and queries.
You probably also want to decouple data extraction (through your API) from processing and persistence. For that you could consider introducing a stream processing system.
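Since you already mentioned Celery, here is a rough sketch, not a drop-in design, assuming Celery with a Redis broker for decoupling and Redis again as the key-value store for the latest position per parcel. The task name, the fetch_position() wrapper and all connection details are hypothetical.

```python
# Sketch: decouple fetching (Celery workers) from persistence (Redis key-value
# store keyed by parcel id). All names and connection strings are placeholders.
import json

import redis
from celery import Celery

app = Celery("tracking", broker="redis://localhost:6379/0")
store = redis.Redis(host="localhost", port=6379, db=1)


def fetch_position(parcel_id: str) -> dict:
    """Stand-in for the real call to the external tracking API."""
    raise NotImplementedError("replace with the provider's API client")


@app.task(acks_late=True)
def update_parcel(parcel_id: str) -> None:
    """Fetch the latest position for one parcel and upsert it by key."""
    position = fetch_position(parcel_id)
    store.set(f"parcel:{parcel_id}", json.dumps(position))


def enqueue_batch(parcel_ids) -> None:
    """Producer side: fan out one task per parcel instead of looping in-process."""
    for parcel_id in parcel_ids:
        update_parcel.delay(parcel_id)
```

Keying writes by parcel id avoids the row-level contention you would get from many workers updating one Postgres table, and the queue length becomes your natural backpressure signal.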

Storing Images / Media Files in Oracle

I want to store a large number of media files in Oracle. I believe I can store these files as BLOBs using a PL/SQL procedure. However, I want to make sure there is no impact on the resolution/quality of the media files. Are there any other considerations I need to account for when storing media files in an Oracle DB?
Storing and retrieving files in a BLOB does not impact image quality or resolution. Oracle treats them as opaque binary objects: what you store is exactly what you get back when you retrieve it.
Typically such modifications are done in application-layer logic before the data is stored in the BLOB. For text-based files, compressing them before storage saves some disk space; for images, the resolution/image size is typically reduced to shrink the file, and so on. These are decisions taken while designing the application, as part of the application architecture, to reduce the overall storage requirement.
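As a quick illustration of that byte-for-byte behaviour, here is a hedged round-trip sketch using the python-oracledb driver rather than PL/SQL; the connection details and the media_files(id, content BLOB) table are made up for the example.

```python
# Minimal BLOB round-trip sketch with python-oracledb; connection details and
# the media_files table (id NUMBER, content BLOB) are hypothetical.
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Store the file exactly as read from disk.
with open("thumbnail.jpg", "rb") as f:
    original = f.read()
cur.setinputsizes(None, oracledb.DB_TYPE_BLOB)
cur.execute("INSERT INTO media_files (id, content) VALUES (:1, :2)", [1, original])
conn.commit()

# Retrieve it and verify the bytes are unchanged.
cur.execute("SELECT content FROM media_files WHERE id = :1", [1])
retrieved = cur.fetchone()[0].read()  # LOB locator -> bytes
assert retrieved == original
```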
Also consider whether this is the right design for you at all. There are implications for storage requirements, application performance and scalability. There are several threads on Stack Overflow discussing the advantages of storing images in an RDBMS vs NoSQL databases vs the filesystem. The average size of the files also matters a lot.
Some links:
Storing images in NoSQL stores
NoSQL- Is it suitable for storing images?
Storing very big files in database
https://softwareengineering.stackexchange.com/questions/150669/is-it-a-bad-practice-to-store-large-files-10-mb-in-a-database

Hadoop as document store database

We have a large document store, currently 3 TB in size and growing by 1 TB every six months. The documents are currently stored in a Windows file system, which has at times caused problems with access and retrieval. We are looking to adopt a Hadoop-based document store database. Is it a good idea to go ahead with Hadoop? Does anyone have experience with this? What challenges or technology roadblocks might we run into?
Hadoop is more for batch processing than for fast data access. You should have a look at some NoSQL systems, such as document-oriented databases. It's hard to answer without knowing what your data is like.
The number one rule of NoSQL design is to define your query scenarios first. Once you really understand how you want to query the data, you can look into the various NoSQL solutions out there. The default unit of distribution is the key, so you need to be able to split your data between your nodes effectively; otherwise you end up with a "horizontally scalable" system that still does all the work on one node (albeit with better queries, depending on the case).
You also need to think back to the CAP theorem: most NoSQL databases are eventually consistent (CP or AP) while traditional relational DBMSs are CA. This will impact the way you handle data and the way you create certain things; for example, key generation can become tricky. Obviously files in a folder are a bit different.
Also remember that some systems, such as HBase, have no indexing concept (I'm guessing you have file indexing set up on this Windows FS document store). All your indexes would need to be built by your application logic, and any updates and deletes managed the same way. With Mongo you can actually create indexes on fields and query them relatively quickly, and there is also the possibility of integrating Solr with Mongo. You don't just query by ID in Mongo as you do in HBase, which is a column-family store (aka a Google BigTable-style database) where you essentially have nested key-value pairs.
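To make the Mongo point concrete, here is a small hedged sketch with pymongo; the database, collection and field names are invented for the example.

```python
# Secondary index on an arbitrary field, then a query by that field, which is
# exactly what a column-family store like HBase would not give you for free.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
docs = client.docstore.documents  # hypothetical database and collection

docs.create_index([("title", ASCENDING)])
for doc in docs.find({"title": "Quarterly report"}):
    print(doc["_id"], doc["title"])
```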
So once again it comes down to your data: what you want to store, how you plan to store it, and most importantly how you want to access it. The Lily project looks very promising. In the work I am involved with, we take a large amount of data from the web and we store it, analyse it, strip it down, parse it, analyse it again, stream it, update it, and so on. We don't use just one system but several, each best suited to the job at hand; this gives us fast access where we need it, the ability to stream and analyse data in real time, and importantly, a way to keep track of everything as we go (data loss in a prod system is a big deal). I am using Hadoop, HBase, Hive, MongoDB, Solr, MySQL and even good old text files. Remember that productionizing a system on these technologies is a bit harder than installing Oracle on a server; some releases are not as stable, and you really need to do your testing first. At the end of the day it really depends on the level of business resistance and the mission-critical nature of your system.
Another path that no one has mentioned thus far is NewSQL, i.e. horizontally scalable RDBMSs. There are a few out there, like MySQL Cluster (I think) and VoltDB, which may suit your cause. But again it depends on your data (are the files Word docs, or text docs with info about products, invoices, instruments or something else?).
Again it comes down to understanding your data and the access patterns. NoSQL systems are also non-relational and are therefore better suited to non-relational data sets. If your data is inherently relational and you need SQL query features that really do things like Cartesian products (aka joins), then you may well be better off sticking with Oracle and investing some time in indexing, sharding and performance tuning.
My advice would be to actually play around with a few different systems. Look at:
MongoDB - Document - CP
CouchDB - Document - AP
Cassandra - Column Family - Available & Partition Tolerant (AP)
VoltDB - A really good-looking product: a relational database that is distributed and might work for your case (and may be an easier move). They also seem to provide enterprise support, which may be more suitable for a prod environment (i.e. it gives business users a sense of security).
Anyway, that's my 2c. Playing around with the systems is really the only way you're going to find out what works for your case.
HDFS does not sound like the right solution. It is optimized for massively parallel processing of data, not for use as a general-purpose file system.
Specifically, it has the following limitations that probably make it a bad choice:
a) It is sensitive to the number of files. The practical limit is around tens of millions of files.
b) Files are read-only and can only be appended to, not edited. This is fine for analytical data processing but might not suit your needs.
c) It has a single point of failure: the NameNode. So its reliability is limited.
If you need a system with comparable scalability that is not sensitive to the number of files, I would suggest OpenStack's Swift. It also has no SPOF.
My suggestion is to buy NAS storage; maybe an EMC Isilon kind of product is worth considering.
Hadoop HDFS is not for file storage. It is storage for processing data (for reports, analytics, ...).
NAS is for file sharing
SAN is more for a database
http://www.slideshare.net/jabramo/emc-sanoverviewpresentation
Declaration: I am not an EMC person, so you can consider any product. I just used EMC as a reference.

Storing images in NoSQL stores

Our application will be serving a large number of small, thumbnail-size images (about 6-12 KB each) over HTTP. I've been asked to investigate whether using a NoSQL data store is a viable solution for data storage. Ideally, we would like our data store to be fault-tolerant and distributed.
Is it a good idea to store blobs in NoSQL stores, and which one is good for it? Also, is NoSQL a good solution for our problem, or would we be better served storing the images in the file system and serving them directly from the web server (as an aside, CDN is currently not an option for us)?
Whether to store images in a DB or on the filesystem is sometimes one of those "holy war" type debates; each side feels their way of doing things is the one right way. In general:
To store in the DB:
Easier to manage: you back up/replicate everything at once, in one place.
Helps with data consistency and integrity. You can set the BLOB field to disallow NULLs, but you can't prevent an external file from being deleted. (Though this is less applicable to NoSQL, since there aren't the traditional constraints.)
To store on the filesystem:
A filesystem is designed to serve files. Let it do its job.
The DB is often the bottleneck in an application; whatever load you can take off it, the better.
Easier to serve from a CDN (which you mentioned isn't an option in your situation).
I tend to come down on the side of the filesystem because it scales much better. But depending on the size of your project, either choice will likely work fine. With NoSQL, the differences are even less apparent.
Mongo DB should work well for you. I haven't used it for blobs yet, but here is a nice FLOSS Weekly podcast interview with Michael Dirolf from the Mongo DB team where he addresses this use case.
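If you do go the MongoDB route, GridFS is its standard mechanism for binary files; below is a minimal hedged sketch with pymongo, with the filenames and connection details as placeholders.

```python
# Store a thumbnail in GridFS and read it back when serving the HTTP response.
import gridfs
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017").thumbnails  # hypothetical DB name
fs = gridfs.GridFS(db)

# Put the image in; keep the returned id alongside your own metadata.
with open("thumb_001.jpg", "rb") as f:
    file_id = fs.put(f.read(), filename="thumb_001.jpg",
                     metadata={"contentType": "image/jpeg"})

# Stream it back out by id.
image_bytes = fs.get(file_id).read()
```

GridFS chunks files across documents, so it handles replication and failover the same way the rest of your MongoDB deployment does.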
I was looking for a similar solution for a personal project and came across Riak, which, to me, seems like an amazing solution to this problem. Basically, it distributes a specified number of copies of each file to the servers in the network. It is designed such that a server coming or going is no big deal. All the copies on a server that leaves are distributed amongst the others.
With the right configuration, Riak can deal with an entire datacenter crashing.
Oh, and it has commercial support available.
Well, a CDN would be the obvious choice. Since that's out, I'd say your best bet for fault tolerance and load balancing would be your own private data center (whatever that means to you) behind two or more load balancers like an F5. This is the easiest setup to manage, and you can get as much fault tolerance as your hardware budget allows. You won't need any new software expertise, just XCOPY.
For true fault tolerance you're going to need geographic dispersion or you're subject to anyone with a backhoe.
(Gravatars?)
If you are in a Python environment, consider the y_serial module: http://yserial.sourceforge.net/
In under 10 minutes you will be able to store and access your images (in fact, any arbitrary Python object, including web pages) in compressed form; NoSQL.

Storage for Write Once Read Many

I have a list of 1 million digits. Every time the user submits an input, I need to match it against the list.
As such, the list has Write Once Read Many (WORM) characteristics.
What would be the best way to implement storage for this data?
I am thinking of several options:
A SQL database, but is it suitable for WORM? (UPDATE: using a VARCHAR field type instead of INT)
One file with the list
A directory structure like /1/2/3/4/5/6/7/8/9/0 (but this one would take too much space)
A bucket system like /12345/67890/
What do you think?
UPDATE: The application would be a web application.
To answer this question you'll need to think about one trade-off: are you trying to minimize storage space, or are you trying to minimize processing time?
Storing the data in memory will give you the fastest processing time, especially if you can optimize the data structure for your most common operation (in this case a lookup) at the cost of memory space. For persistence, you could store the data in a flat file and read it in during startup.
SQL databases are great for storing and reading relational data. For instance, names, addresses and orders can be normalized and stored efficiently. Does a flat list of digits make sense to store in a relational database? Each access carries a lot of overhead: constructing the query, building the query plan, executing the query plan, and so on. Since the data is a flat list, you wouldn't be able to create an effective index (your index would essentially be the values you are storing, which means you would do a table scan for each access).
Using a directory structure might work, but then your application is no longer portable.
If I were writing the application, I would either load the data from a file during startup and store it in memory in a hash table (which offers constant-time lookups), or write a simple indexed file accessor class that stores the data in a search-optimized order (worst case, a flat file).
Maybe you are interested in how The Pi Searcher did it. They have 200 million digits to search through and have published a description of how their indexed searches work.
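As a concrete version of the load-at-startup option above, here is a bare-bones sketch; "numbers.txt" (one value per line) is a stand-in for the real list.

```python
# Load the WORM list once at startup, then answer lookups from an in-memory set.
def load_lookup_set(path: str) -> set[str]:
    """Read the flat file once at startup into an in-memory set."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


NUMBERS = load_lookup_set("numbers.txt")  # hypothetical file name


def matches(user_input: str) -> bool:
    """Constant-time membership test against the WORM list."""
    return user_input.strip() in NUMBERS
```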
If you're concerned about speed and don't want to worry about filesystem storage, SQL is probably your best shot. You can optimize your table indexes, but it also adds another external dependency to your project.
EDIT: Seems MySQL have an ARCHIVE Storage Engine:
MySQL has supported on-the-fly compression since version 5.0 with the ARCHIVE storage engine. ARCHIVE is a write-once, read-many storage engine designed for historical data. It compresses data by up to 90%. It does not support indexes. In version 5.1 the ARCHIVE engine can be used with partitioning.
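For completeness, here is a hedged sketch of the ARCHIVE engine driven from Python with mysql-connector-python; the connection details and the worm_numbers table are invented, and note that ARCHIVE tables accept INSERT and SELECT only, with no indexes.

```python
# Create an ARCHIVE table (compressed, insert/select only) and use it as a
# write-once, read-many store; names and credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="worm"
)
cur = conn.cursor()

cur.execute(
    "CREATE TABLE IF NOT EXISTS worm_numbers "
    "(value VARCHAR(32) NOT NULL) ENGINE=ARCHIVE"
)
cur.executemany(
    "INSERT INTO worm_numbers (value) VALUES (%s)", [("12345",), ("67890",)]
)
conn.commit()

# Lookups are full scans (no indexes), acceptable for occasional matching.
cur.execute("SELECT COUNT(*) FROM worm_numbers WHERE value = %s", ("12345",))
print(cur.fetchone()[0])
```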
Two options I would consider:
Serialization: when the memory footprint of your lookup list is acceptable for your application, and the application is persistent (a daemon or server app), create the structure and store it as a binary file, then read that binary file on application startup. Upside: fast lookups. Downside: memory footprint, application initialization time.
SQL storage: when the lookup is amenable to index-based lookup and you don't want to hold the entire list in memory. Upside: reduced init time, reduced memory footprint. Downside: requires a DBMS (an extra app dependency and design expertise); fast, but not as fast as holding the whole list in memory.
If you're concerned about tampering, buy a writable DVD (or a CD if you can find a store that still carries them ...), write the list to it and then put it into a server with only a DVD drive (not a DVD writer/burner). This way the list can't be modified. Another option would be to buy a USB stick with a "write protect" switch, but they are hard to come by and the security isn't as good as with a CD/DVD.
Next, write each digit to a file on that disk, one entry per line. When you need to match the numbers, just open the file, read each line and stop when you find a match. With today's computer speeds and amounts of RAM (and therefore filesystem cache), this should be fast enough for a once-per-day access pattern.
Given that 1M numbers is not a huge amount for today's computers, why not just do pretty much the simplest thing that could work: store the numbers in a text file and read them into a hash set on application startup. On my computer, reading 1M numbers from a text file takes under a second, and after that I can do about 13M lookups per second.
