I'm not sure of the term for this problem I'm having, or whether it's even an issue to worry about at all. Let's say I have a hypothetical situation like this:
It seems as though having the link from the remix objects back to the original objects makes for a somewhat complicated structure, especially if I start to add more objects into it.
If I remove the links from the remix song and remix album to the original, I can use some sort of ID and traverse the structure to still figure out the original versions, but this would require me to write code to ensure the integrity of the data, e.g. that the remix album is not pointing to an original album that no longer exists.
Question: is having a structure like this something to worry about? If so, how can I fix it, other than the solution I proposed above, which requires writing code to ensure the integrity of the data?
I don't know what programming language you're working with, but it sounds like you're describing a directed acyclic graph (DAG), which, in simple terms, is a collection of points with arrows connecting them, where following the arrows never brings you back to where you started.
This is a very common structure. For instance, it describes dependencies of software packages in operating systems with automated software installation (such as many Linux distributions). It describes citations in research papers, where a paper can cite many other papers and a paper can be cited by many other papers, but it doesn't make sense for two papers to cite each other.
The best way to represent this data structure depends on the programming language and on what you need to do with it. The simplest way in most programming languages is to have each object link to the other objects by reference, something like:
#include <string>
#include <vector>

struct Song {
    std::string name;
    std::vector<Song*> originals;  // the song(s) this one is a remix of
};
It's a simple matter to find every "original" of a given song, but it's more expensive to find every "remix". You could augment the structure with remix links and ensure consistency, but in both cases, you have to ensure there are no cycles.
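To make that concrete, here's a rough sketch in Python (rather than C++; the class and helper names are just for illustration) of checking for a cycle before adding a new "original" link. The same depth-first-search idea applies to the struct above:

class Song:
    def __init__(self, name):
        self.name = name
        self.originals = []  # songs this one is a remix of

def reachable(start, target):
    """Depth-first search: can we reach `target` from `start` via `originals` links?"""
    stack, seen = [start], set()
    while stack:
        song = stack.pop()
        if song is target:
            return True
        if song in seen:
            continue
        seen.add(song)
        stack.extend(song.originals)
    return False

def add_original(remix, original):
    """Link `remix` to `original`, refusing any link that would create a cycle."""
    if remix is original or reachable(original, remix):
        raise ValueError("adding this link would create a cycle")
    remix.originals.append(original)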
In a SQL database, you could describe the relationship like so:
CREATE TABLE songs (
    id SERIAL PRIMARY KEY,
    name TEXT
);

CREATE TABLE is_remix_of (
    remix INT REFERENCES songs(id),
    original INT REFERENCES songs(id)
);

CREATE INDEX remix_to_original ON is_remix_of(remix);
CREATE INDEX original_to_remix ON is_remix_of(original);
Again, you would have to find a way to guard against cycles.
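One way to do that check at the application level is sketched below in Python, using psycopg2-style placeholders against the tables above (the helper name is made up, and a concurrent writer would still need locking or serializable transactions):

def add_remix_link(conn, remix_id, original_id):
    """Insert a (remix, original) row only if it would not create a cycle.
    `conn` is assumed to be a DB-API connection, e.g. from psycopg2."""
    if remix_id == original_id:
        raise ValueError("a song cannot be a remix of itself")
    with conn.cursor() as cur:
        # Walk the "original" ancestry of the candidate original; if the remix
        # already appears there, the new row would close a cycle.
        cur.execute(
            """
            WITH RECURSIVE ancestors(id) AS (
                SELECT original FROM is_remix_of WHERE remix = %s
                UNION
                SELECT r.original
                FROM is_remix_of r JOIN ancestors a ON r.remix = a.id
            )
            SELECT 1 FROM ancestors WHERE id = %s
            """,
            (original_id, remix_id),
        )
        if cur.fetchone() is not None:
            raise ValueError("adding this link would create a cycle")
        cur.execute(
            "INSERT INTO is_remix_of (remix, original) VALUES (%s, %s)",
            (remix_id, original_id),
        )
    conn.commit()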
We are struggling to model our data correctly for use in Kedro. We are using the recommended Raw\Int\Prm\Ft\Mst model but are struggling with some of the concepts, e.g.:
When is a dataset a feature rather than a primary dataset? The distinction seems vague...
Is it OK for a primary dataset to consume data from another primary dataset?
Is it good practice to build a feature dataset from the INT layer? or should it always pass through Primary?
I appreciate there are no hard and fast rules with data modelling, but these are big modelling decisions, and any guidance or best practice on Kedro modelling would be really helpful; I can find just one table defining the layers in the Kedro docs.
If anyone can offer any further advice or blogs\docs talking about Kedro Data Modelling that would be awesome!
Great question. As you say, there are no hard and fast rules here and opinions do vary, but let me share my perspective as a QB data scientist and kedro maintainer who has used the layering convention you referred to several times.
For a start, let me emphasise that there's absolutely no reason to stick to the data engineering convention suggested by kedro if it's not suitable for your needs. 99% of users don't change the folder structure in data. This is not because the kedro default is the right structure for them but because they just don't think of changing it. You should absolutely add/remove/rename layers to suit yourself. The most important thing is to choose a set of layers (or even a non-layered structure) that works for your project rather than trying to shoehorn your datasets to fit the kedro default suggestion.
Now, assuming you are following kedro's suggested structure - onto your questions:
When is a dataset a feature rather than a primary dataset? The distinction seems vague...
In the case of simple features, a feature dataset can be very similar to a primary one. The distinction is maybe clearest if you think about more complex features, e.g. ones formed by aggregating over time windows. A primary dataset would have a column that gives a cleaned version of the original data, without any complex calculations on it, just simple transformations. Say the raw data is the colour of all cars driving past your house over a week. By the time the data is in primary, it will be clean (e.g. correcting "rde" to "red", maybe mapping "crimson" and "red" to the same colour). Between primary and the feature layer, we will have done some less trivial calculations on it, e.g. finding a one-hot encoding of the most common car colour each day.
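To make the car-colour example concrete, a rough pandas sketch of that primary-to-feature step might look like this (the column names and toy data are invented):

import pandas as pd

# primary layer: one row per observed car, already cleaned ("rde" -> "red", etc.)
primary = pd.DataFrame({
    "date":   ["2023-01-01", "2023-01-01", "2023-01-02", "2023-01-02", "2023-01-02"],
    "colour": ["red", "blue", "red", "red", "green"],
})

# feature layer: most common colour per day, one-hot encoded
most_common = (
    primary.groupby("date")["colour"]
    .agg(lambda s: s.mode().iloc[0])
    .rename("most_common_colour")
    .reset_index()
)
features = pd.get_dummies(most_common, columns=["most_common_colour"])
print(features)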
Is it OK for a primary dataset to consume data from another primary dataset?
In my opinion, yes. This might be necessary if you want to join multiple primary tables together. In general if you are building complex pipelines it will become very difficult if you don't allow this. e.g. in the feature layer I might want to form a dataset containing composite_feature = feature_1 * feature_2 from the two inputs feature_1 and feature_2. There's no way of doing this without having multiple sub-layers within the feature layer.
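As a sketch of what I mean (the dataset and function names here are invented, not from any real project), in kedro this is just a node whose inputs and outputs all live in the feature layer:

from kedro.pipeline import Pipeline, node

def combine_features(feature_1, feature_2):
    """Hypothetical transformation producing a composite feature table."""
    return feature_1 * feature_2

feature_pipeline = Pipeline([
    node(
        func=combine_features,
        inputs=["feature_1", "feature_2"],  # both registered as feature-layer datasets
        outputs="composite_feature",        # also a feature-layer dataset
        name="combine_features_node",
    ),
])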
However, something that is generally worth avoiding is a node that consumes data from many different layers. e.g. a node that takes in one dataset from the feature layer and one from the intermediate layer. This seems a bit strange (why has the latter dataset not passed through the feature layer?).
Is it good practice to build a feature dataset from the INT layer? or should it always pass through Primary?
Building features from the intermediate layer isn't unheard of, but it seems a bit weird. The primary layer is typically an important one which forms the basis for all feature engineering. If your data is already in a shape where you can build features from it, then it's probably primary-layer data. In that case, maybe you don't need an intermediate layer.
The above points might be summarised by the following rules (which should no doubt be broken when required):
The input datasets for a node should all come from the same layer, call it L.
The output datasets for that node should all be in the same layer too, which can be either L or L+1.
If anyone can offer any further advice or blogs\docs talking about Kedro Data Modelling that would be awesome!
I'm also interested in seeing what others think here! One possibly useful thing to note is that kedro was inspired by cookiecutter data science, and the kedro layer structure is an extended version of what's suggested there. Maybe other projects have taken this directory structure and adapted it in different ways.
Your question prompted us to write a Medium article better explaining these concepts; it's just been published on Towards Data Science.
I'm new to the graph database scene, looking into Neo4j and learning Cypher. We're trying to model a graph database, and it's a fairly simple one: we have users and movies; users can VIEW movies, RATE movies, and create playlists, and playlists can HAVE movies.
The question is regarding the Super Node performance issue. And I will quote something from a very good book I am currently reading - Learning Neo4j by Rik Van Bruggen, so here it is:
A very interesting problem then occurs in datasets where some parts of the graph are all connected to the same node. This node, also referred to as a dense node or a supernode, becomes a real problem for graph traversals because the graph database management system will have to evaluate all of the connected relationships to that node in order to determine what the next step will be in the graph traversal.
The solution to this problem proposed in the book is to have a meta node with 100 connections to it, and the 101st connection linked to a new meta node that is in turn linked to the previous meta node.
I have seen a blog post on the official Neo4j blog saying that they would fix this problem in the future (the blog post is from January 2013): http://neo4j.com/blog/2013-whats-coming-next-in-neo4j/
More exactly they say:
Another project we have planned around “bigger data” is to add some specific optimizations to handle traversals across densely-connected nodes, having very large numbers (millions) of relationships. (This problem is sometimes referred to as the “supernodes” problem.)
What are your opinions on this issue? Should we go with the meta node fanning-out pattern, or go with the basic relationship that every tutorial seems to be using? Any other suggestions?
UPDATE - October 2020. This article is the best source on this topic, covering all aspects of super nodes
(my original answer below)
It's a good question. This isn't really an answer, but why shouldn't we be able to discuss this here? Technically I think I'm supposed to flag your question as "primarily opinion based" since you're explicitly soliciting opinions, but I think it's worth the discussion.
The boring but honest answer is that it always depends on your query patterns. Without knowing what kinds of queries you're going to issue against this data structure, there's really no way to know the "best" approach.
Supernodes are problems in other areas as well. Graph databases can be very difficult to scale, because the data in them is hard to partition. If this were a relational database, we could partition vertically or horizontally. In a graph DB, when you have supernodes, everything is "close" to everything else (an Alaskan farmer likes Lady Gaga; so does a New York banker). More than just slowing graph traversal, supernodes are a big problem for all sorts of scalability.
Rik's suggestion boils down to encouraging you to create "sub-clusters" or "partitions" of the super-node. For certain query patterns, this might be a good idea, and I'm not knocking the idea, but I think hidden in here is the notion of a clustering strategy. How many meta nodes do you assign? How many max links per meta-node? How did you go about assigning this user to this meta node (and not some other)? Depending on your queries, those questions are going to be very hard to answer, hard to implement correctly, or both.
A different (but conceptually very similar) approach is to clone Lady Gaga about a thousand times, and duplicate her data and keep it in sync between nodes, then assert a bunch of "same as" relationships between the clones. This isn't that different than the "meta" approach, but it has the advantage that it copies Lady Gaga's data to the clone, and the "Meta" node isn't just a dumb placeholder for navigation. Most of the same problems apply though.
Here's a different suggestion though: you have a large-scale many-to-many mapping problem here. It's possible that if this is a really huge problem for you, you'd be better off breaking this out into a single relational table with two columns (from_id, to_id), each referencing a neo4j node ID. You then might have a hybrid system that's mostly graph (but with some exceptions). Lots of tradeoffs here; of course you couldn't traverse that rel in cypher at all, but it would scale and partition much better, and querying for a particular rel would probably be much faster.
One general observation here: whether we're talking about relational, graph, documents, K/V databases, or whatever -- when the databases get really big, and the performance requirements get really intense, it's almost inevitable that people end up with some kind of a hybrid solution with more than one kind of DBMS. This is because of the inescapable reality that all databases are good at some things, and not good at others. So if you need a system that's good at most everything, you're going to have to use more than one kind of database. :)
There is probably quite a bit neo4j can do to optimize in these cases, but it seems to me that the system would need some kind of hint about access patterns in order to do a really good job of it. Of the 2,000,000 relationships present, how do the endpoints best cluster? Are older relationships more important than newer ones, or vice versa?
Re. the Neo4j blog, dense node support should be enhanced in Neo4j 2.1 (and above), see also http://neo4j.com/blog/neo4j-2-1-graph-etl/
(disclaimer: not an answer, but some discussion)
The 2013 neo4j blog post you mentioned links to this github commit, where the intended problem scope and its solution are discussed. To summarize, it does not address the general supernode issue. Instead, it alleviates the issue when, among the multiple relationship types (and directions) that a supernode has, some of the types or directions happen to have disproportionately fewer edges than the others. The engine is then able to filter based on type and direction.
A more generic solution is the vertex-centric approach from Titan (https://stackoverflow.com/a/21385213/1311956), which sorts the edges by one property or a composite of properties, resulting in O(log(E)) search performance, where E is the number of edges in/out of the supernode.
Neo4j has the concept of an index on relationships. Unlike Titan's vertex-centric approach, the index is global. However, relationship indexes are a legacy feature in Neo4j. This is discussed in another stackoverflow thread.
Another issue with supernodes is storage: the sheer number of relationships leads to storage overhead and I/O cost.
I'm not really trying to compress a database. This is more of a logical problem. Is there any algorithm that will take a data table with lots of columns and repeated data and find a way to organize it into many tables with IDs, in such a way that in total there are as few cells as possible, and that these tables can then be joined with a query to replicate the original one?
I don't care about any particular database engine or language. I just want to see if there is a logical way of doing it. If you will post code, I like C# and SQL but you can use any.
I don't know of any automated algorithms, but what you really need to do is heavily normalize your database. This means looking at your actual functional dependencies and breaking tables apart wherever it makes sense.
The problem with trying to do this in a computer program is that it isn't always clear whether your current set of stored data represents all possible problem cases. You can't just look at the number of distinct values either. It makes little sense to break booleans off into their own table because they have only two values, for example, and this is only the tip of the iceberg.
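To illustrate the point about stored data: you can mechanically test a candidate functional dependency against the rows you happen to have, as in this rough Python sketch (the helper and column names are made up), but a dependency that holds in today's sample may not hold in the real domain.

import pandas as pd

def appears_to_determine(df, determinant, dependent):
    """True if every value of `determinant` maps to a single value of `dependent`
    in the rows we have. Holding in the sample does not mean it holds in general."""
    return (df.groupby(determinant)[dependent].nunique() <= 1).all()

orders = pd.DataFrame({
    "customer_id":  [1, 1, 2, 3],
    "customer_zip": ["10001", "10001", "94107", "10001"],
})
print(appears_to_determine(orders, "customer_id", "customer_zip"))  # True, in this sample only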
I think that at this point, nothing is going to beat good ol' patient, hand-crafted normalization. This is something to do by hand. Any possible computer algorithm will either make a total mess of things or make you define the relationships such that you might as well do it all yourself.
I am trying to store a large list of strings in a concise manner so that they can be very quickly analyzed/searched through.
A directed acyclic word graph (DAWG) suits this purpose wonderfully. However, I do not have a list of the strings to include in the first place, so it must be incrementally buildable. Additionally, when I search through it for a string, I need to bring back data associated with the result (not just a boolean saying if it was present).
I have found information on a modification of the DAWG for string data tracking here: http://www.pathcom.com/~vadco/adtdawg.html It looks extremely, extremely complex and I am not sure I am capable of writing it.
I have also found a few research papers describing incremental building algorithms, though I've found that research papers in general are not very helpful.
I don't think I am advanced enough to be able to combine both of these algorithms myself. Is there documentation of an algorithm already that features these, or an alternative algorithm with good memory use & speed?
I wrote the ADTDAWG web page. Adding words after construction is not an option. The structure is nothing more than 4 arrays of unsigned integer types. It was designed to be immutable for total CPU cache inclusion, and minimal multi-thread access complexity.
The structure is an automaton that forms a minimal and perfect hash function. It was built for speed while traversing recursively using an explicit stack.
As published, it supports up to 18 characters. Including all 26 English chars will require further augmentation.
My advice is to use a standard trie, with an array index stored in each node. Ya, it is going to seem infantile, but each END_OF_WORD node represents only one word. The ADTDAWG is a solution to the problem that, in a traditional DAWG, each END_OF_WORD node represents many, many words.
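If it helps, here is a minimal sketch of that idea in Python (the node layout and class names are just one way to do it, not the ADTDAWG): each end-of-word node carries the payload, or an index into a separate array, for exactly one word, and words can be added incrementally.

class TrieNode:
    __slots__ = ("children", "payload", "is_word")

    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.payload = None  # data associated with the word ending here
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word, payload):
        """Incrementally add a word and the data associated with it."""
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True
        node.payload = payload

    def lookup(self, word):
        """Return the payload for `word`, or None if the word is absent."""
        node = self.root
        for ch in word:
            node = node.children.get(ch)
            if node is None:
                return None
        return node.payload if node.is_word else None

t = Trie()
t.insert("cat", {"index": 0})
print(t.lookup("cat"))  # {'index': 0}
print(t.lookup("car"))  # None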
Minimal and perfect hash tables are not the sort of thing that you can just put together on the fly.
I am looking for something else to work on, or a job, so contact me, and I'll do what I can. For now, all I can say is that it is unrealistic to use heavy optimization on a structure that is subject to being changed frequently.
Java
For graph problems which require persistence, I'd take a look at the Neo4j graph DB project. Neo4j is designed to store large graphs and allow incremental building and modification of the data, which seems to meet the criteria you describe.
They have some good examples to get you going quickly and there's usually example code to get you started with most problems.
They have a DAG example with a link at the bottom to the full source code.
C++
If you're using C++, a common solution to graph building/analysis is to use the Boost graph library. To persist your graph you could maintain a file based version of the graph in GraphML (for example) and read and write to that file as your graph changes.
You may also want to look at a trie structure for this (potentially building a radix-tree). It seems like a decent 'simple' alternative structure.
I'm suggesting this for a few reasons:
I really don't have a full understanding of your result.
Definitely incremental to build.
Leaf nodes can contain any data you wish.
Subjectively, a simple algorithm.
I would use a hash table with the ISBN number as the key, as this will give me an average look-up time of O(1).
We could also use a binary search tree, where look-up time is O(log n) if the tree is balanced.
What data structure would you guys use and why?
This sounds like a homework or interview question. If I were asking it, I would be interested in more than just whether you understand a couple of data structures. I would also want to know how you analyze a real-world problem and translate it to the world of computers and data structures.
As such, you should probably think about what operations you need to perform on the data before you pick a data structure. You should also think some about real libraries and some of the "gotchas" that could come up with any data structure you chose.
If all you need to do is translate from an ISBN to the catalog entry for the corresponding book, then a hash table might be a reasonable choice. But you might want to think about how you would deal with popular books, such as best sellers, that a library could have many copies of.
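To make that concrete, here's a tiny Python sketch (the record layout and function name are invented): a dict keyed by ISBN whose value holds the shared bibliographic data plus a list of physical copies.

# Hypothetical catalog: one entry per ISBN, holding shared bibliographic data
# plus a list of the physical copies the library owns.
catalog = {}

def add_copy(isbn, title, author, barcode):
    entry = catalog.setdefault(isbn, {"title": title, "author": author, "copies": []})
    entry["copies"].append({"barcode": barcode, "checked_out": False})

add_copy("978-0140449136", "The Odyssey", "Homer", "C0001")
add_copy("978-0140449136", "The Odyssey", "Homer", "C0002")  # second copy, same ISBN

entry = catalog.get("978-0140449136")        # average O(1) lookup by ISBN
print(entry["title"], len(entry["copies"]))  # The Odyssey 2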
But is ISBN lookup really the important use case? I use my local library all the time, and I never look up books by ISBN. Some of the things that I do are:
Look up a specific book by title. Sometimes there are different books with the same title.
Browse the list of books by an author I like
Find where books on a particular subject are shelved, so I can browse them.
Librarians probably have additional uses for a catalog system:
Add new books to the catalog
Mark books as checked out
Change listing information, such as subject classification, for a book
So I guess my recommendation would be to think more carefully about what problem you want to solve before you decide on the solution.
Apologies for asking more questions instead of providing an answer. I hope this is helpful anyway.
Well... I don't think look-up speed is the hardest problem to solve when designing a data structure to store information about books.
And I would certainly not settle for a system that only allows searching if you know the ISBN. What if you only remember the author, or a few words from the title? If there are to be any gains from having a computerized system for this, it must support flexible searches, in my opinion.
I would probably look into using Dublin Core, but I'm not at all sure that's the "right" thing to do. It seems people have spent a great deal of time thinking about that one, though.