I would like to implement a simple, in-memory OLAP cube storage engine for read and write (writeback) - functionally similar to an SSAS cube, but with multiple dimensions, one measure, and only one type of aggregation (sum). As in an OLAP cube, each axis in the multidimensional space can be a multi-level hierarchy.
Can the community provide me with some hints as to which data structures and related algorithms I should be looking at? I understand that I need something capable of indexing data in many dimensions at once and of storing intermediate precomputed aggregation values.
I'd rather not glue multiple nested maps together but implement something from scratch - the goal of the exercise is not just to implement this beast but also to better understand multidimensional data structures and algorithms.
Just to clarify - I am focused on the core data structure for storing multidimensional hierarchical data for reads and writes. I do not seek to implement an MDX parser, make the cube persistent, etc.
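To make the goal concrete, here is a minimal Python sketch of the naive eager-aggregation approach I'd like to improve upon (all names, such as SumCube, are made up for illustration): each cell is addressed by one hierarchy path per dimension, and every writeback also updates every ancestor combination, so a read at any level of any hierarchy is a single lookup.

```python
from collections import defaultdict
from itertools import product

class SumCube:
    """Toy sum-only cube with eager roll-up (illustrative, not an engine)."""

    def __init__(self, num_dims):
        self.num_dims = num_dims
        self.cells = defaultdict(float)  # coordinate tuple -> aggregated sum

    @staticmethod
    def _ancestors(path):
        # ("2023", "Q1", "Jan") -> [(), ("2023",), ("2023", "Q1"), ("2023", "Q1", "Jan")]
        return [path[:i] for i in range(len(path) + 1)]

    def write(self, coords, value):
        # coords: one hierarchy path (tuple of members) per dimension.
        # Update the leaf cell and every ancestor combination.
        assert len(coords) == self.num_dims
        for combo in product(*(self._ancestors(p) for p in coords)):
            self.cells[combo] += value

    def read(self, coords):
        # Query at any level; the empty path () means "all" for that dimension.
        return self.cells.get(tuple(coords), 0.0)

cube = SumCube(2)  # dimensions: time, geography
cube.write((("2023", "Q1", "Jan"), ("EU", "DE")), 100.0)
cube.write((("2023", "Q2", "Apr"), ("EU", "FR")), 50.0)
print(cube.read((("2023",), ("EU",))))  # 150.0: year 2023, all of EU
print(cube.read(((), ())))              # 150.0: grand total
```

The obvious cost is write amplification (each write touches the product of all ancestor counts), which is exactly the kind of trade-off I'd like to understand better.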
Have a look at the list of spatial indexes at Wikipedia; one of them, such as the R-tree or k-d tree, might be what you are looking for.
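To give a flavor of the k-d tree in particular, here is a minimal Python sketch (illustrative only: unbalanced, no deletion; real implementations bulk-load and rebalance). It cycles the splitting axis by depth and answers axis-aligned range queries by pruning subtrees the query box cannot reach.

```python
class KDNode:
    def __init__(self, point):
        self.point = point
        self.left = None
        self.right = None

def insert(node, point, depth=0, k=2):
    # Split on axis (depth mod k); smaller coordinates go left.
    if node is None:
        return KDNode(point)
    axis = depth % k
    if point[axis] < node.point[axis]:
        node.left = insert(node.left, point, depth + 1, k)
    else:
        node.right = insert(node.right, point, depth + 1, k)
    return node

def range_search(node, lo, hi, depth=0, k=2, out=None):
    # Collect points p with lo[i] <= p[i] <= hi[i] on every axis i.
    if out is None:
        out = []
    if node is None:
        return out
    axis = depth % k
    if all(lo[i] <= node.point[i] <= hi[i] for i in range(k)):
        out.append(node.point)
    if lo[axis] <= node.point[axis]:   # box may extend into the left subtree
        range_search(node.left, lo, hi, depth + 1, k, out)
    if hi[axis] >= node.point[axis]:   # box may extend into the right subtree
        range_search(node.right, lo, hi, depth + 1, k, out)
    return out

root = None
for p in [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1)]:
    root = insert(root, p)
print(range_search(root, (3, 2), (9, 6)))  # [(5, 4), (9, 6)]
```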
I have started dedicating time to learning algorithms and data structures. So my first and basic question is: how do we represent data depending on the context?
I have given it time and thought and came up with the following conclusion:
Groups of the same kind of data -> Lists/Arrays
Classification of data [like classifying a population by gender, then by age, etc.] -> Trees
Relations [like relations between a product bought and other products] -> Graphs
I am posting this question to learn what the Stack Overflow community thinks about my interpretation of data structures. Since it is such a generic topic, I could not find a justification for my thinking online. Please help me if I am wrong.
This looks like oversimplifying things.
The data structure we want to use depends on what we are going to do with the data.
For example, when we store records about people and need fast access by index, we can use an array.
When we store the same records about people but need to find by name fast, we can use a search tree.
Graphs are a theoretical concept, not a data structure.
They can be stored as an adjacency matrix (two-dimensional array, suitable for small or dense graphs), or as lists of adjacent edges (array/list of dynamic arrays/lists, suitable for large or sparse graphs), or implicitly (generated on the fly), or otherwise.
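As a concrete illustration of those two storage choices (a toy example, not tied to any library):

```python
# A small undirected graph with vertices 0..3 and edges (0,1), (0,2), (2,3).
edges = [(0, 1), (0, 2), (2, 3)]
n = 4

# Adjacency matrix: O(n^2) space, O(1) edge test; fine for small/dense graphs.
matrix = [[False] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = True

# Adjacency lists: O(n + m) space; the usual choice for large/sparse graphs.
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[0][2])  # True: O(1) edge test
print(adj[2])        # [0, 3]: neighbors without scanning a whole row
```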
According to this StackOverflow question, probabilistic data structures are data structures that give approximate, as opposed to precise, answers. In particular, they have very low time and space complexities and are easily parallelizable, making them very efficient structures to use. Examples provided include Bloom Filters, Count-Min Sketch, and HyperLogLog.
However, all of these data structures are also known as "sketch" data structures - structures that approximate a large set via a compact representation for more efficient (but less precise) operation.
I don't see the difference between a "sketch" and a "probabilistic" data structure.
There are probabilistic data structures that are not approximations, for example the Skip list.
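To make the contrast concrete: a Bloom filter is probabilistic *and* approximate (membership tests can return false positives), whereas a skip list only uses randomness internally (in choosing node heights) and still returns exact answers. A toy Bloom filter, with illustrative sizing and hashing:

```python
import hashlib

class BloomFilter:
    """Approximate membership: false positives possible, false negatives never."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a big int used as a bit array

    def _positions(self, item):
        # Derive k bit positions per item (toy scheme: salted SHA-256).
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        # False => definitely absent; True => probably present.
        return all((self.bits >> pos) & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))  # True
print(bf.might_contain("bob"))    # almost certainly False at this load
```

So "sketch" usually refers to the compact-approximation aspect, while "probabilistic" refers to the use of randomness; the two sets overlap heavily but, as the skip list shows, are not identical.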
I want to use some kind of disk-based indexing for multidimensional data. I want to be able:
to perform range searches (10-20% of application usage)
to perform fast retrieval (80%)
to handle data sizes on the order of GBs and record counts on the order of billions
To be more specific, I want to implement something like an R-tree or X-tree, but I thought it would be a good idea to get started with B-trees. Although all the databases offer very efficient implementations of B-trees, I want to be able to tune the design and add application-specific heuristics to it, so I would prefer to implement something of my own or to use some library as a starting point.
Any pointers to libraries or suggestions would be very helpful. Thanks in advance.
"Retrieval" - by what? Window queries? Radius queries? Nearest neighbor queries?
How many dimensions - if it's just 2D, even simple grid approaches may work very well.
Note that most quality SQL systems (pretty much everything except MySQL, actually) have support for R-trees to some extent.
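On the "simple grid approaches" point: for low-dimensional data you can get surprisingly far by bucketing records into coarse fixed-size cells and scanning only the cells a query window overlaps. A toy in-memory sketch of the idea (a disk-based version would map each cell to a page or page list):

```python
from collections import defaultdict

class GridIndex:
    """Toy fixed-grid index for 2D points (illustrative names and sizing)."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)  # (cell_x, cell_y) -> points in cell

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, record):
        self.buckets[self._key(x, y)].append((x, y, record))

    def window(self, x0, y0, x1, y1):
        # Visit only the cells the query window overlaps, then filter exactly.
        cx0, cy0 = self._key(x0, y0)
        cx1, cy1 = self._key(x1, y1)
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for x, y, rec in self.buckets.get((cx, cy), []):
                    if x0 <= x <= x1 and y0 <= y <= y1:
                        yield rec

g = GridIndex(cell_size=10.0)
g.insert(3.0, 4.0, "a")
g.insert(55.0, 40.0, "b")
print(list(g.window(0, 0, 50, 50)))  # ['a']
```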
Which of the following data structures
R-tree,
R*-tree,
X-tree,
SS-tree,
SR-tree,
VP-tree,
metric-trees
provide reasonably good performance for inserting, updating, and searching multidimensional data stored in the corresponding form?
Is there a better data structure out there for handling multidimensional data?
What kind of multidimensional data are you talking about? The R-tree wiki states that it is used for indexing multidimensional data, but it seems clear that it will be primarily useful for data which is multidimensional in the same kind of feature -- i.e. vertical location and horizontal location, longitude and latitude, etc.
If the data is multi-dimensional simply because there are a lot of attributes for the data and it needs to be analyzed along many of these dimensions, then a relational representation is probably best.
The real issue is how to optimize the relations and indices for the types of queries you need to answer. For this, you need to do some domain analysis beforehand, and some performance analysis after the first iteration, to determine whether there are better ways to structure and index your tables.
I know a bit about database internals. I've actually implemented a small, simple relational database engine before, using ISAM structures on disk, B-tree indexes, and all that sort of thing. It was fun and very educational. I know that I'm much more cognizant about carefully designing database schemas and writing queries now that I know a little bit more about how RDBMSs work under the hood.
But I don't know anything about multidimensional OLAP data models, and I've had a hard time finding any useful information on the internet.
How is the information stored on disk? What data structures comprise the cube? If a MOLAP model doesn't use tables, with columns and records, then... what? Especially for high-dimensional data, what kinds of data structures make the MOLAP model so efficient? Do MOLAP implementations use something analogous to RDBMS indexes?
Why are OLAP servers so much better at processing ad hoc queries? The same sorts of aggregations that might take hours to process in an ordinary relational database can be processed in milliseconds in an OLAP cube. What are the underlying mechanics of the model that make that possible?
I've implemented a couple of systems that mimicked what OLAP cubes do, and here are a couple of things we did to get them to work.
The core data was held in an n-dimensional array, all in memory, and all the keys were implemented via hierarchies of pointers to the underlying array. In this way we could have multiple different sets of keys for the same data. The data in the array was the equivalent of the fact table; often it would only have a couple of pieces of data, and in one instance this was price and number sold.
The underlying array was often sparse, so once it was created we used to remove all the blank cells to save memory - lots of hardcore pointer arithmetic but it worked.
As we had hierarchies of keys, we could quite easily write routines to drill down/up a hierarchy. For instance, we would access a year of data by going through the month keys, which in turn mapped to days and/or weeks. At each level we would aggregate data as part of building the cube - this made calculations much faster.
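The level-by-level aggregation looked roughly like the following sketch (Python here for brevity; our actual code was C++, and all the names below are made up): facts are keyed by leaf members, each hierarchy maps a member to its parent, and sums are precomputed per level so that drilling up is a plain lookup.

```python
from collections import defaultdict

# Leaf-level facts: (day, product) -> amount sold.
facts = {("2023-01-05", "widget"): 10.0,
         ("2023-01-20", "widget"): 5.0,
         ("2023-02-03", "gadget"): 7.0}

def day_to_month(day):     # "2023-01-05" -> "2023-01"
    return day[:7]

def month_to_year(month):  # "2023-01" -> "2023"
    return month[:4]

# Roll up one hierarchy level at a time while "building the cube".
by_month = defaultdict(float)
for (day, product), value in facts.items():
    by_month[(day_to_month(day), product)] += value

by_year = defaultdict(float)
for (month, product), value in by_month.items():
    by_year[(month_to_year(month), product)] += value

print(by_month[("2023-01", "widget")])  # 15.0
print(by_year[("2023", "widget")])      # 15.0
```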
We didn't implement any kind of query language, but we did support drill-down on all axes (up to 7 in our biggest cubes), and that was tied directly to the UI, which the users liked.
We implemented core stuff in C++, but these days I reckon C# could be fast enough, but I'd worry about how to implement sparse arrays.
Hope that helps - it sounds like an interesting project.
The book Microsoft SQL Server 2008 Analysis Services Unleashed spells out some of the particularities of SSAS 2008 in decent detail. It's not quite a "here's exactly how SSAS works under the hood", but it's pretty suggestive, especially on the data structure side. (It's not quite as detailed/specific about the exact algorithms.) Here are a few of the things I, as an amateur in this area, gathered from the book. This is all about SSAS MOLAP:
Despite all the talk about multi-dimensional cubes, fact table (aka measure group) data is still, to a first approximation, ultimately stored in basically 2D tables, one row per fact. A number of OLAP operations seem to ultimately consist of iterating over rows in 2D tables.
The data is potentially much smaller inside MOLAP than inside a corresponding SQL table, however. One trick is that each unique string is stored only once, in a "string store". Data structures can then refer to strings in a more compact form (by string ID, basically). SSAS also compresses rows within the MOLAP store in some form. This shrinking I assume lets more of the data stay in RAM simultaneously, which is good.
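As a toy illustration of that dictionary-encoding idea (my own sketch, not SSAS's actual on-disk format): each distinct string is interned once, and the fact rows carry small integer IDs instead of the strings themselves.

```python
class StringStore:
    """Toy string store: intern each distinct string once, refer to it by ID."""

    def __init__(self):
        self.ids = {}      # string -> id
        self.strings = []  # id -> string

    def intern(self, s):
        if s not in self.ids:
            self.ids[s] = len(self.strings)
            self.strings.append(s)
        return self.ids[s]

    def lookup(self, string_id):
        return self.strings[string_id]

store = StringStore()
rows = [(store.intern("Germany"), store.intern("Bikes"), 120),
        (store.intern("Germany"), store.intern("Cars"), 80),
        (store.intern("France"), store.intern("Bikes"), 95)]
print(rows)                      # [(0, 1, 120), (0, 2, 80), (3, 1, 95)]
print(store.lookup(rows[2][0]))  # France
```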
Similarly, SSAS can often iterate over a subset of the data rather than the full dataset. A few mechanisms are in play:
By default, SSAS builds a hash index for each dimension/attribute value; it thus knows "right away" which pages on disk contain the relevant data for, say, Year=1997 (see the sketch after this list).
There's a caching architecture where relevant subsets of the data are stored in RAM separate from the whole dataset. For example, you might have cached a subcube that has only a few of your fields, and that only pertains to the data from 1997. If a query is asking only about 1997, then it will iterate only over that subcube, thereby speeding things up. (But note that a "subcube" is, to a first approximation, just a 2D table.)
If you've predefined aggregates, then these smaller subsets can also be precomputed at cube processing time, rather than merely computed/cached on demand.
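A toy illustration of that per-attribute-value hash index (my own sketch; SSAS maps values to disk pages, while this maps them to in-memory row positions):

```python
from collections import defaultdict

facts = [  # (year, country, amount)
    (1997, "DE", 10.0),
    (1998, "DE", 4.0),
    (1997, "FR", 6.0),
]

# Build a hash index for the "year" attribute: value -> row positions.
index_by_year = defaultdict(list)
for pos, (year, _, _) in enumerate(facts):
    index_by_year[year].append(pos)

# A Year=1997 query touches only the matching rows, not the whole table.
total_1997 = sum(facts[pos][2] for pos in index_by_year[1997])
print(total_1997)  # 16.0
```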
SSAS fact table rows are fixed-size, which presumably helps in some form. (In SQL, in contrast, you might have variable-width string columns.)
The caching architecture also means that, once an aggregation has been computed, it doesn't need to be refetched from disk and recomputed again and again.
These are some of the factors in play in SSAS anyway. I can't claim that there aren't other vital things as well.