Neo4j: how to import data from Oracle

I have 5 tables: 3 for nodes and 2 for relationships between them (relationship = child). How do I transfer the data from Oracle to Neo4j?

The Neo4j site has an entire documentation section on moving data from relational databases to Neo4j. There are a bunch of different possibilities.
The simplest, though, is to export your tables to CSV with your database tools of choice and then pull the data in with Cypher's LOAD CSV command.
The data can't be transferred directly, though: your tables represent entities and the relationships between them, so moving them to Neo4j requires deciding what you want your graph data model to look like.
Because the flexibility and power Neo4j gives you ultimately depend heavily on how you model your data, give this careful consideration before you dump the CSVs and try to import them into Neo4j.
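A rough sketch of that CSV route, assuming one of the node tables is called PERSON; the connection details, table and column names are invented, and the CSV must be readable by the Neo4j server (by default from its import/ directory):

```python
# Rough sketch only: table, column and connection details are invented.
import csv

import oracledb                       # python-oracledb (cx_Oracle successor)
from neo4j import GraphDatabase

# 1. Export an Oracle node table to CSV (SQL Developer, SQL*Plus, etc. work too).
ora = oracledb.connect(user="app", password="secret", dsn="oracle-host/XEPDB1")
with ora.cursor() as cur, open("person.csv", "w", newline="") as f:
    cur.execute("SELECT person_id, name FROM person")
    writer = csv.writer(f)
    writer.writerow(["person_id", "name"])    # header row for LOAD CSV
    writer.writerows(cur)

# 2. Pull the CSV into Neo4j with LOAD CSV.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
with driver.session() as session:
    session.run("""
        LOAD CSV WITH HEADERS FROM 'file:///person.csv' AS row
        MERGE (p:Person {id: toInteger(row.person_id)})
        SET p.name = row.name
    """)
driver.close()
```

The two relationship tables can be loaded the same way: MATCH the two end nodes by their keys and MERGE a relationship (for example (:Person)-[:CHILD_OF]->(:Person)) between them.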

Related

Import Neo4J data into Oracle relational database

I would like to keep the Neo4j database as the master database, and I would like to keep the Oracle relational database tables synchronized with the Neo4j data as a read-only materialized view.
I can only find links and articles explaining how to import relational data into Neo4j, not the other way around.
Conceptually, I am looking for a kind of materialized view in Oracle that uses a Cypher query as its source. Maybe I could write a custom merge program, mapping Oracle tables to Cypher queries. Ideally I would like to run this program in Oracle (PL/SQL).
Thanks in advance,
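One possible shape for the custom merge program the question describes, sketched in Python rather than PL/SQL: run a Cypher query against the Neo4j master and MERGE the rows into an Oracle table. The names, credentials, and the person_mv table are all invented.

```python
# Rough sketch only: names, credentials and the person_mv table are invented.
import oracledb
from neo4j import GraphDatabase

neo = GraphDatabase.driver("bolt://neo4j-host:7687", auth=("neo4j", "secret"))
ora = oracledb.connect(user="app", password="secret", dsn="oracle-host/XEPDB1")

# 1. Read the current state from the Neo4j master via a Cypher query.
with neo.session() as session:
    rows = [(r["id"], r["name"]) for r in
            session.run("MATCH (p:Person) RETURN p.id AS id, p.name AS name")]

# 2. Upsert the rows into the read-only Oracle copy with a MERGE statement.
merge_sql = """
    MERGE INTO person_mv t
    USING (SELECT :1 AS id, :2 AS name FROM dual) s
       ON (t.id = s.id)
     WHEN MATCHED THEN UPDATE SET t.name = s.name
     WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name)
"""
with ora.cursor() as cur:
    cur.executemany(merge_sql, rows)
ora.commit()
neo.close()
```

This only covers inserts and updates; rows deleted in Neo4j, and scheduling the refresh (for example from a cron job or an Oracle scheduler job calling out to the script), would still need to be handled separately.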

Using cassandra in a data grid to sort and filter data

We are converting from SQL Server to Cassandra for various reasons. The back end system is converted and working and now we are focusing on the front end systems.
In the current system we have a number of Telerik data grids where the app loads all the data and search/sort/filter is done in the grid itself. We want to avoid this and are going to push the search/sort/filter down to the database. In SQL Server this is not a problem because of ad-hoc queries; in Cassandra, however, it becomes very confusing.
Of course, if every operation were allowed, a Cassandra table would have to model the data that way. But I was wondering how this is handled in real-world scenarios with large amounts of data and large numbers of columns.
For instance, if I had a grid with columns 1, 2, 3, and 4, what is the best course of action?
Highly control what the user can do
Create a lot of tables to model the data and pick the one to select from
Don't allow the user to do any data operations
Like any NoSQL system, Cassandra performs queries on primary keys best. You can of course use secondary indices, but they will be a lot slower.
So the recommended way is to create materialized views for all the queries you need to support.
Another way is to use something like Apache Ignite on top of Cassandra to do the analytics, but as I understand it you don't want to use grids for some reason.
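A small sketch of the materialized-view approach using the DataStax Python driver; the keyspace, table, and column names are invented, and materialized views must be enabled on the cluster:

```python
# Rough sketch only: keyspace, table and column names are invented.
from cassandra.cluster import Cluster

session = Cluster(["cassandra-host"]).connect("shop")

# The base table is keyed for one access pattern (lookup by order_id)...
session.execute("""
    CREATE TABLE IF NOT EXISTS orders (
        order_id uuid PRIMARY KEY,
        status   text,
        total    decimal
    )
""")

# ...so each extra filter/sort the grid needs gets its own materialized view,
# keyed by the column being filtered on.
session.execute("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS orders_by_status AS
        SELECT * FROM orders
        WHERE status IS NOT NULL AND order_id IS NOT NULL
        PRIMARY KEY (status, order_id)
""")

# The grid's "filter by status" action then becomes an ordinary key-based query.
shipped = session.execute(
    "SELECT * FROM orders_by_status WHERE status = %s", ("shipped",))
```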

Database Schema vs. Data Structure

What's the conceptual difference between a database schema and any data structure, in general? Don't both convey the organisation of data for efficiency? Or am I mixing two completely different things?
A database schema is the logical view of the entire database.
Data structures are the specific formats used to store data (files, arrays, trees, etc.).
Another way to think of it is that a database schema will contain various data structures.
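A toy illustration of the distinction, with invented names: the schema describes what data exists and how it is logically organized, while data structures are the concrete formats that hold it.

```python
# Toy illustration with invented names.

# Schema: the logical view, usually expressed as DDL -- which tables exist,
# their columns and how they relate.
SCHEMA_DDL = """
CREATE TABLE employees (
    id   INTEGER PRIMARY KEY,
    name TEXT,
    dept TEXT
);
"""

# Data structures: the concrete formats that actually hold the data, e.g. a
# list of records, a hash index keyed by id, or a B-tree inside the DB engine.
rows = [(1, "Ada", "ENG"), (2, "Grace", "OPS")]                 # array of records
by_id = {emp_id: (name, dept) for emp_id, name, dept in rows}   # hash index
```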

Who will add rows to the fact table in Mondrian? The developer or Mondrian itself?

I want to ask a very basic question related to Mondrian.
I have created a fact table to build a Mondrian cube. Currently that fact table does not contain any rows, so I would like to know: who will add rows to the fact table? The developer or Mondrian itself?
The developer.
Mondrian is, roughly speaking, simply an engine that takes MDX queries and translates them into SQL queries.
More to the point, you'll typically have a database that serves as the data warehouse (where you have your Mondrian cubes) and an operational database (or several) where the actual data comes from. Though you declared the cube in a cubename.mondrian.xml file, you have given Mondrian no indication of what the operational database looks like (it might not even look like a database -- we maintain several cubes populated from Apache logs!).
Since it's your responsibility as the developer to populate the cube, in the Pentaho world we usually use Pentaho Data Integration (also known as Kettle) as our ETL tool (which is to say, it's the tool we use to Extract data from whatever sources, Transform it into a shape more useful for our purposes, and Load it into the data warehouse).
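As a rough, hand-rolled stand-in for what a Kettle transformation would do, here is a tiny ETL sketch; SQLite and all table and column names are used purely for illustration:

```python
# Rough sketch only: SQLite and all table/column names are for illustration.
# In practice the source is your operational database or log files, and the
# target is the warehouse database Mondrian points at.
import sqlite3

src = sqlite3.connect("operational.db")   # pretend operational database
dwh = sqlite3.connect("warehouse.db")     # pretend data warehouse

# Seed a sample source row so the sketch runs standalone.
src.execute("""CREATE TABLE IF NOT EXISTS orders (
                   order_date TEXT, product_id INTEGER, price REAL, qty INTEGER)""")
src.execute("INSERT INTO orders VALUES ('2024-01-15', 42, 9.99, 3)")

dwh.execute("""CREATE TABLE IF NOT EXISTS sales_fact (
                   date_id INTEGER, product_id INTEGER, amount REAL)""")

# Extract: pull raw orders from the operational side.
orders = src.execute("SELECT order_date, product_id, price, qty FROM orders")

# Transform + Load: reshape each order into a fact row and insert it.
rows = [(int(d.replace("-", "")), pid, price * qty) for d, pid, price, qty in orders]
dwh.executemany("INSERT INTO sales_fact VALUES (?, ?, ?)", rows)
dwh.commit()
```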

Move data from Oracle to Cassandra and/or MongoDB

At work we are thinking of moving from Oracle to a NoSQL database, so I have to run some tests on Cassandra and MongoDB. I have to move a lot of tables to the NoSQL database, and the idea is to keep the data synchronized between the two platforms.
So I created a simple procedure that selects from the Oracle DB and inserts into Mongo. Some of my colleagues pointed out that there may be an easier (and more professional) way to do it.
Has anybody had this problem before? How did you solve it?
If your goal is to copy your existing structure from Oracle to a NoSQL database, then you should probably reconsider the move in the first place. By doing that you lose the benefits of going to a non-relational data store.
A good first step would be to take a long look at your existing structure and determine how it can be modified to have a positive impact on your application. Also consider a hybrid system: Cassandra is great for a lot of things, but if you need a relational system and already use a lot of Oracle functionality, it likely makes sense for most of your database to stay in Oracle while you move the pieces that require frequent writes, and would benefit from a different structure, to Mongo or Cassandra.
Once you've made the decisions about your structure, I would suggest writing scripts or programs, or adding a module to your existing app, to write the data in the new format to the new data store. That gives you the most fine-grained control over every step of the process, which is something I would want to have in a large, system-wide architectural change.
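A rough sketch of that script-based approach for the MongoDB side, reshaping rows into documents rather than copying tables one-to-one; the connection details, table names, and document shape are all invented:

```python
# Rough sketch only: connection details, table names and the document shape
# are invented.
import oracledb                    # python-oracledb (cx_Oracle successor)
from pymongo import MongoClient

ora = oracledb.connect(user="app", password="secret", dsn="oracle-host/XEPDB1")
orders_coll = MongoClient("mongodb://mongo-host:27017").shop.orders

cur = ora.cursor()
cur.execute("""
    SELECT o.order_id, o.order_date, c.customer_id, c.name, i.sku, i.qty
      FROM orders o
      JOIN customers c ON c.customer_id = o.customer_id
      JOIN order_items i ON i.order_id = o.order_id
""")

# Denormalize on the way in: one document per order, with the customer and
# the line items embedded, instead of three separate relational tables.
docs = {}
for order_id, order_date, cust_id, cust_name, sku, qty in cur:
    doc = docs.setdefault(order_id, {
        "_id": order_id,
        "date": order_date,
        "customer": {"id": cust_id, "name": cust_name},
        "items": [],
    })
    doc["items"].append({"sku": sku, "qty": qty})

if docs:
    orders_coll.insert_many(list(docs.values()))
```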
You can also consider using components of the Hadoop ecosystem to perform this kind of ETL task. For that you need to model your Cassandra DB as per your requirements.
The steps could be to migrate your Oracle table data to HDFS (preferably using Sqoop) and then write a MapReduce job to transform the data and insert it into your Cassandra data model.
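A sketch of the second half of that pipeline, assuming Sqoop has already landed the Oracle table on HDFS as CSV, and using Spark with the DataStax spark-cassandra-connector as a stand-in for a hand-written MapReduce job; the paths, keyspace, and column names are invented:

```python
# Rough sketch only: paths, keyspace and column names are invented, and the
# spark-cassandra-connector package must be supplied (e.g. via --packages).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("oracle-hdfs-to-cassandra")
         .config("spark.cassandra.connection.host", "cassandra-host")
         .getOrCreate())

# Read the Sqoop export (headerless CSV) from HDFS and name the columns.
orders = (spark.read
          .csv("hdfs:///user/etl/orders")
          .toDF("order_id", "customer_id", "order_date", "total"))

# Transform as needed to match the Cassandra table's partitioning, then write.
(orders.write
 .format("org.apache.spark.sql.cassandra")
 .options(keyspace="shop", table="orders_by_customer")
 .mode("append")
 .save())
```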
