Currently we have data in the transaction database (Oracle) and are fetching data through queries to form reports. e.g. fetch all people under company A along with their details and lookup values from some more tables. It looks something like:
Select p.name,
p.address,
(select country_name from country where country_id = p.country_id),
...
...
from
person p, company c, person_file pf...
where c.company_id = p.company_id and c.company_id = 1
.. <all joins and conditions for tables>
The query takes a long time to fetch the records when there are many people in a company. My question is: what would be a better reporting solution, in terms of design and technology, to get results faster? I don't want to stick to Oracle, as the data will grow in future. Logically, the answer would be to implement something that works in parallel, but an option like Spark seems like overkill.
First of all, if you want to keep Oracle as the existing data store, you can use Spark as your parallel data processing framework. It has some learning curve, but with Spark SQL you can use your own query to read data from Oracle. You can read the data in parallel, though this depends on how many parallel sessions are configured for your Oracle profile; please check with the DBA.
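As a rough illustration, here is a minimal sketch of a parallel JDBC read using Spark's Java API. The connection details, table, partition column and bounds are hypothetical placeholders, and numPartitions should stay within what your DBA allows:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OracleParallelRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("oracle-report-extract")
                .getOrCreate();

        // Each partition opens its own Oracle session and reads one slice of the
        // data, split on a numeric column (person_id here is hypothetical).
        Dataset<Row> people = spark.read()
                .format("jdbc")
                .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1")
                .option("driver", "oracle.jdbc.OracleDriver")
                .option("user", "report_user")
                .option("password", "secret")
                .option("dbtable", "(select * from person where company_id = 1) p")
                .option("partitionColumn", "person_id")
                .option("lowerBound", "1")
                .option("upperBound", "1000000")
                .option("numPartitions", "8") // keep within what the DBA allows
                .load();

        people.show(20);
        spark.stop();
    }
}
```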
Another option is migrating to a NoSQL database like Cassandra, so you can scale your machines horizontally rather than vertically. But the migration won't be an easy or straightforward task: NoSQL databases do not support joins by design, so the data modelling has to change accordingly. Once that is done you can use Spark on top of it. You can also consider Talend, which has predefined Spark components ready.
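To make the data-modelling point concrete, here is a hedged sketch with the DataStax 4.x Java driver; the keyspace, table and column names are hypothetical. The point is simply that the table is designed around the report query ("all people under a company"), so it becomes a single-partition read with no joins:

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.ResultSet;
import com.datastax.oss.driver.api.core.cql.Row;

public class PeopleByCompany {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().withKeyspace("reporting").build()) {
            // Denormalised, query-driven table: partitioned by company_id so the
            // report never needs a join (all names are hypothetical).
            session.execute(
                "CREATE TABLE IF NOT EXISTS person_by_company ("
              + " company_id bigint, person_id bigint,"
              + " name text, address text, country_name text,"
              + " PRIMARY KEY (company_id, person_id))");

            // "All people under company 1" is a single-partition read.
            ResultSet rs = session.execute(
                "SELECT name, address, country_name FROM person_by_company WHERE company_id = 1");
            for (Row row : rs) {
                System.out.println(row.getString("name") + " | " + row.getString("address"));
            }
        }
    }
}
```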
I'm trying to think out loud here to understand if GraphQL is a likely candidate for my need.
We have a home-grown self-service report creation tool. It is web-based. It starts with the user selecting a particular report type.
The report type in itself is a base SQL query. In subsequent screens, one can select the required columns, filters, etc. The output of all these steps is a SQL query, which is then run on an Oracle database.
As you can see, there are a lot of cons with this tool. It is tightly coupled with the Oracle OLTP tables. There are hundreds of tables.
Given the current data model, and the presence of many tables, I'm wondering if GraphQL would be the right approach to design a UI that could act like a "data explorer". If I could combine some of the closely related tables and abstract them via GraphQL into logical groups, I'm wondering if I could create a report out of them.
**Logical Group 1**
Table1
Table2
Table3
Table4
Table5
**Logical Group 2**
Table6
Table7
Table8
Table9
Table10
and so on..
Let's say I want 2 columns from tables in Logical Group 1 and 4 columns from Logical Group 2. Is this something that could be defined as a GraphQL object and retrieved, to be either rendered on a screen or written to a file?
I think I'm trying to write a data modelling UI via GraphQL. Is this even a good candidate for such a need?
We have also been evaluating Looker as a possible data modelling layer. However, it seems like there could be some limitations there as well.
Thanks.
Without understanding your data better, it is hard to say for certain, but at first glance, this does not seem like a problem that is well suited to GraphQL.
GraphQL's strength is its ability to model + traverse a graph of data. It sounds to me like you are not so much traversing a continuous graph of data as cherry picking tables from a DB. It certainly is possible, but there may be a good deal of friction since this was not its intended design.
The litmus test I would use is the following two questions:
Can you imagine your problem mapping well to a REST API?
Does your API get consumed by performance sensitive clients?
If so, then GraphQL may serve your needs well; if not, you may want to look at something like https://grpc.io/
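For what it's worth, and only as a rough sketch under stated assumptions: with graphql-java one "logical group" could be exposed like this. The schema, type names and the fetchGroup1 resolver are entirely hypothetical, and whether such resolvers can be made efficient over hundreds of Oracle tables is exactly the friction mentioned above.

```java
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

public class LogicalGroupDemo {
    public static void main(String[] args) {
        // Hypothetical schema: each "logical group" abstracts a few related tables.
        String sdl =
              "type Query { group1(companyId: ID!): Group1 } "
            + "type Group1 { personName: String, countryName: String, group2: Group2 } "
            + "type Group2 { fileStatus: String, fileDate: String }";

        TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
                .type("Query", b -> b.dataFetcher("group1",
                        env -> fetchGroup1(env.getArgument("companyId")))) // hypothetical resolver
                .build();
        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();

        // "Pick some columns from group 1 and some from group 2" becomes a query:
        ExecutionResult result = graphQL.execute(
                "{ group1(companyId: \"1\") { personName countryName group2 { fileStatus } } }");
        Object data = result.getData();
        System.out.println(data);
    }

    private static Object fetchGroup1(Object companyId) {
        // Placeholder: in practice this would run SQL against the Oracle tables.
        return java.util.Map.of(
                "personName", "Alice",
                "countryName", "US",
                "group2", java.util.Map.of("fileStatus", "SENT"));
    }
}
```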
We are converting from SQL Server to Cassandra for various reasons. The back end system is converted and working and now we are focusing on the front end systems.
In the current system we have a number of Telerik data grids where the app loads all the data and search/sort/filter is done in the grid itself. We want to avoid this and are going to push the search/sort/filter to the DB. In SQL Server this is not a problem because of ad-hoc queries. However in Cassandra it becomes very confusing.
If any operation were allowed then of course a Cassandra table would have to be modelled to support it. However, I was wondering how this is handled in real-world scenarios with large amounts of data and large numbers of columns.
For instance, if I had a grid with columns 1, 2, 3, 4 what is the best course of action?
Highly control what the user can do
Create a lot of tables to model the data and pick the one to select from
Don't allow the user to do any data operations
Like any NoSQL system, Cassandra performs queries on primary keys best. You can of course use secondary indices, but they will be a lot slower.
So the recommended way is to create Materialized Views for all possible queries.
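As a hedged illustration (4.x Java driver assumed; the table, view and column names are hypothetical), a materialized view lets the grid filter on a second column without you maintaining the extra table yourself:

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class GridQueryViews {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().withKeyspace("grids").build()) {
            // Base table keyed for the default grid view (hypothetical names).
            session.execute(
                "CREATE TABLE IF NOT EXISTS orders ("
              + " order_id uuid PRIMARY KEY, customer text, status text, total decimal)");

            // Materialized view that serves "filter/sort by status" without a second,
            // manually maintained table. The view key must contain every base primary
            // key column plus at most one other column, all restricted IS NOT NULL.
            session.execute(
                "CREATE MATERIALIZED VIEW IF NOT EXISTS orders_by_status AS"
              + " SELECT * FROM orders"
              + " WHERE status IS NOT NULL AND order_id IS NOT NULL"
              + " PRIMARY KEY (status, order_id)");

            // The grid's "filter by status" now hits a partition key.
            session.execute("SELECT * FROM orders_by_status WHERE status = 'OPEN'");
        }
    }
}
```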
Another way is to use something like Apache Ignite on top of Cassandra to do analytics, but as I understand it you don't want to use grids for some reason.
We seem to have a bit of a debate on a discussion point in our team.
We are working on a Data Warehouse in the Microsoft SQL Server 2012 platform. We have followed the Kimball Architecture to build this Data Warehouse.
Issue:
A reporting solution (built on SSRS), which sources data from this Warehouse, has significant performance issues when sourcing data from fact and dim tables. Some of our team members suggest that we extract data from the facts and dims into a new set of tables using SSIS packages. This would mean we denormalise these tables into ‘Snapshot’ tables. In this way we would not need to join these tables to create data sets within the reports; data could be read out of these tables directly.
I do have my own worries about this: inconsistencies, maintenance of different data structures, duplication of data, etc., to name a few.
Question:
Would you consider creating snapshot tables (by denormalising fact and dim tables) for reporting the right approach?
Would like to hear your thoughts on this.
Cheers
Nithin
I don't think there is anything wrong with snapshot tables. The two most important aspects of a data warehouse are:
The data is correct.
The data is useful.
If your users are unable to extract the totals they require, in a reasonable timescale, they won't use the warehouse.
My own solution includes 3 snapshot tables. Like you, I was worried about inconsistencies. To address this we built an automated checking process. This sub-system executes a series of queries, stored on a network drive, once an hour. Any records returned by the queries are considered a fail. Fails are reported and immediately investigated by my ETL team. This sub-system ensures the snapshots and underlying facts are always aligned and consistent with each other. Drift is prevented.
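These are not the actual checks described above, but a minimal sketch of the kind of reconciliation query such a process might run each hour over JDBC, with hypothetical table and column names; any returned row counts as a fail:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SnapshotDriftCheck {
    public static void main(String[] args) throws Exception {
        // Any row returned means the snapshot has drifted from the underlying facts.
        String checkSql =
              "SELECT s.calendar_date, s.total_amount AS snapshot_total, agg.total_amount AS fact_total "
            + "FROM rpt.sales_snapshot s "
            + "JOIN (SELECT d.calendar_date, SUM(f.amount) AS total_amount "
            + "      FROM dw.fact_sales f "
            + "      JOIN dw.dim_date d ON d.date_key = f.date_key "
            + "      GROUP BY d.calendar_date) agg "
            + "  ON agg.calendar_date = s.calendar_date "
            + "WHERE agg.total_amount <> s.total_amount";

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:sqlserver://dwhost;databaseName=DW", "etl_check", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(checkSql)) {
            while (rs.next()) {
                // In the real process this would raise an alert for the ETL team.
                System.out.printf("FAIL %s: snapshot=%s fact=%s%n",
                        rs.getDate("calendar_date"),
                        rs.getBigDecimal("snapshot_total"),
                        rs.getBigDecimal("fact_total"));
            }
        }
    }
}
```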
That said, additional tables equals additional complexity. And that requires more time/effort to manage. Before introducing another layer to your warehouse, you should investigate why these queries are underperforming. If joins are to blame:
Are you using an inappropriate data type for your P/F keys?
Are the FKeys indexed (some RDBMS do this by default, others do not)?
Have you looked at the execution plans for the offending queries?
Is the join really to blame, or is it a filter applied to the dim table?
For raw cube performance my advice would be to always try to denormalize your tables and have one fact table and one table for each dimension (a star schema).
If you are unsure whether it will actually help, you could start by creating materialized views. These are kind of the best of both worlds; in the long run you should alter your ETL.
In my previous job we only had flattened tables, which worked quite well. Currently we have a normalized schema but flatten it in the last step.
We have different sets of data in different systems like Hadoop, Cassandra and MongoDB, but our analytics team wants to get stitched data from these different systems. For example, customer information with demographics will be in one system and their transactions will be in another system. Analysts should be able to run queries like: what was the transaction volume from US users? We need to develop an application to provide an easy way to interact with the different systems. What is the best way to do this?
Another requirement:
If we want to provide a custom workspace in a system like MongoDB so they can easily play with it, what is the best strategy to pull data from one system to another on demand?
Any pointers or common architectures used to solve this kind of problem would be really helpful.
I see two questions here:
How can I consolidate data from different systems into one system?
How can I create some data in Mongo for people to experiment with?
Here we go ... =)
I would pick one system and target that for consolidation. In other words, between Hadoop, Cassandra and MongoDB, which one does your team have the most experience with? Which one do you find easiest to query with? Which one do you have set up to scale well?
Each one has pros and cons to scale, storage and queryability.
I would pick one and then pump all data to that system. At a recent job, that ended up being MongoDB. It was easy to move data to Mongo and it had by far the best query language. It also had a great community and setting up nodes was easier than Hadoop, etc.
Once you have solved (1), you can trim your data set and create a scaled down sandbox for people to run ad-hoc queries against. That would be my approach. You don't want to support the entire data set, because it would likely be too expensive and complicated.
If you were doing this in a relational database, I would say just run a
select top 1000 * from [table]
query on each table and use that data for people to play with.
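A rough Mongo equivalent of that "top 1000" sandbox, sketched with the MongoDB Java driver (4.x sync driver assumed; database and collection names are hypothetical):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Aggregates;
import org.bson.Document;

import java.util.Collections;

public class BuildSandbox {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> source =
                    client.getDatabase("prod").getCollection("transactions");
            MongoCollection<Document> sandbox =
                    client.getDatabase("sandbox").getCollection("transactions");

            // Take a random 1000-document sample and copy it into the sandbox
            // collection for people to play with.
            source.aggregate(Collections.singletonList(Aggregates.sample(1000)))
                  .forEach(sandbox::insertOne);
        }
    }
}
```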
I have a daily batch process that involves selecting out a large number of records and formatting up a file to send to an external system. I also need to mark these records as sent so they are not transmitted again tomorrow.
In my naive JDBC way, I would prepare and execute a statement and then begin to loop through the recordset. As I only go forwards through the recordset there is no need for my application server to hold the whole result set in memory at one time. Groups of records can be fed across from the database server.
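A rough sketch of that plain-JDBC approach, with a hypothetical table, columns and file format; fetch behaviour varies by driver (Oracle streams in batches of the fetch size by default, while some other drivers need extra settings beyond setFetchSize):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DailyExtract {
    // Streams unsent records without holding the whole result set in memory.
    static void export(Connection conn, java.io.Writer out) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                 "SELECT id, payload FROM outbound_record WHERE sent = 'N'",
                 ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            ps.setFetchSize(500); // fetch rows from the server in groups of ~500
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    out.write(formatLine(rs.getLong("id"), rs.getString("payload")));
                }
            }
        }
    }

    private static String formatLine(long id, String payload) {
        return id + "|" + payload + System.lineSeparator(); // hypothetical file format
    }
}
```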
Now, let's say I'm using Hibernate. Won't I end up with a bunch of objects representing the whole result set in memory at once?
Hibernate can also iterate over the result set so that only one row is kept in memory at a time; this is the default. If you want it to load everything greedily, you must tell it to do so.
Reasons to use Hibernate:
"Someone" was "creative" with the column names (PRXFC0315.XXFZZCC12)
The DB design is still in flux and/or you want one place where column names are mapped to Java.
You're using Hibernate anyway
You have complex queries and you're not fluent in SQL
Reasons not to use Hibernate:
The rest of your app is pure JDBC
You don't need any of the power of Hibernate
You have complex queries and you're fluent in SQL
You need a specific feature of your DB to make the SQL perform
Hibernate offers some possibilities to keep the session small.
You can use Query.scroll(), Criteria.scroll() for JDBC-like scrolling. You can use Session.evict(Object entity) to remove entities from the session. You can use a StatelessSession to suppress dirty-checking. And there are some more performance optimizations, see the Hibernate documentation.
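A minimal sketch of the scroll-plus-StatelessSession approach, assuming a Hibernate 5.x-style API and a hypothetical OutboundRecord entity with a boolean sent property:

```java
import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.SessionFactory;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

public class DailyBatch {
    // Streams OutboundRecord entities (a hypothetical mapped entity with a
    // boolean "sent" property) one at a time instead of loading them all.
    static void run(SessionFactory sessionFactory) {
        StatelessSession session = sessionFactory.openStatelessSession();
        Transaction tx = session.beginTransaction();
        ScrollableResults results = session
                .createQuery("from OutboundRecord r where r.sent = false")
                .setFetchSize(500)
                .scroll(ScrollMode.FORWARD_ONLY);
        try {
            while (results.next()) {
                OutboundRecord record = (OutboundRecord) results.get(0);
                // ... format the record and write it to the outbound file here ...
                record.setSent(true);
                session.update(record); // StatelessSession: no dirty checking, no cache
            }
            tx.commit();
        } finally {
            results.close();
            session.close();
        }
    }
}
```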
Hibernate, like any ORM framework, is intended for developing and maintaining systems based on object-oriented programming principles. But most databases are relational, not object-oriented, so an ORM is always a trade-off between convenient OOP programming and optimized, most effective DB access.
I wouldn't use ORM for specific isolated tasks, but rather as an overall architectural choice for application persistence layer.
In my opinion I would NOT use Hibernate, since it makes your application a whole lot bigger and less maintainable, and you do not really have a chance of quickly optimizing the generated SQL.
Furthermore, with plain JDBC you can use all the SQL functionality your driver supports and are not limited to Hibernate's functionality. Another thing is that you also get the limitations that come along with each additional layer of code.
But in the end it is a philosophical question and you should do it the way that fits your way of thinking best.
If there are possible performance issues then stick with the JDBC code.
There are a number of well-known pure SQL optimisations which would be very difficult to do in Hibernate:
Only select the columns you use! (No "select *" stuff ).
Keep the SQL as simple as possible, e.g. don't include small reference tables like currency codes in the join. Instead, load the currency table into memory and resolve currency descriptions with a program lookup.
Depending on the DBMS, minor re-ordering of the SQL WHERE predicates can have a major effect on performance.
If you are updating/inserting, only commit every 100 to 1000 updates, i.e. do not commit every unit of work, but keep a counter so you commit less often (see the sketch after this list).
Take advantage of the aggregate functions of your database. If you want totals by DEPT code then do it in the SQL with "SUM(amount) ... GROUP BY DEPT".
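And here is the commit-every-N point from the list above as a rough JDBC sketch; the batch size, table and column names are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class MarkSent {
    // Marks records as sent, committing every 500 updates instead of per row.
    static void markSent(Connection conn, List<Long> sentIds) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE outbound_record SET sent = 'Y' WHERE id = ?")) {
            int pending = 0;
            for (long id : sentIds) {
                ps.setLong(1, id);
                ps.addBatch();
                if (++pending % 500 == 0) {
                    ps.executeBatch();
                    conn.commit();   // commit every 500, not every row
                }
            }
            ps.executeBatch();       // flush the remainder
            conn.commit();
        }
    }
}
```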