I've set up the Syncfusion Dashboard Platform to work with some imported test data (Syncfusion Big Data).
I used the Integration Platform to import the data into Hadoop (the files are stored in Avro format).
Then I loaded the Avro data from Hadoop into Hive tables via Spark SQL (converting Avro to tables); in total I have 400 rows.
Then I used this Hive Server2 as a data source in the Dashboard Platform.
When I try to fetch the data, it is very slow.
Can somebody point me in the right direction? The Syncfusion settings are at their defaults.
Thanks!
Recommended Solution:
Hive Server2 (MapReduce) is well suited for batch processing of large data sets, so I recommend using the Spark SQL data source for near real-time analytics such as dashboard visualization, because Spark SQL processes data in-memory and avoids multiple disk I/O operations.
• Tables created under Hive can also be accessed from “Spark SQL” in the Syncfusion distribution, as both use the same metastore database.
So you can use the “Spark SQL” connection type in the Syncfusion Dashboard Platform instead of “Hive”.
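If you want to verify the shared metastore outside the dashboard, a small PySpark session can read the same Hive table directly. This is only a minimal sketch; the database and table names are placeholders, not part of the Syncfusion setup:

```python
# Minimal PySpark sketch; database/table names are placeholders.
from pyspark.sql import SparkSession

# Enable Hive support so Spark SQL reads the same metastore the Hive tables live in.
spark = (SparkSession.builder
         .appName("dashboard-query")
         .enableHiveSupport()
         .getOrCreate())

# The table created through Hive is visible here without redefining any schema.
df = spark.sql("SELECT * FROM default.test_data LIMIT 10")
df.show()
```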
We are going to move from SQL Server to Snowflake as the target database for our warehouse.
Today, most of our ETL development is done in ODI (Oracle Data Integrator).
So I'm interested to know whether anyone is using ODI together with Snowflake and how it's working.
I'm also interested in your experience with and recommendations for other ETL tools with Snowflake as the target.
For example:
Matillion
DBT
Xplenty
Today we have started using NiFi to move the data from the source to Azure Blob Storage.
But we are not sure whether ODI is the right tool for the rest once we are in the cloud.
I'm really looking forward to seeing your answers.
Snowflake supports transformations both during loading (ETL) and after loading (ELT).
Snowflake works with a wide range of data integration tools, including Informatica, Talend, Tableau, Matillion and others.
In data engineering, new tools and self-service pipelines are eliminating traditional tasks such as manual ETL coding and data cleaning. With easy ETL or ELT options via Snowflake, data engineers can instead spend more time on critical data strategy and pipeline optimization projects.
With Snowflake as your data lake and data warehouse, ETL can be effectively eliminated, as no pre-transformations or pre-defined schemas are needed.
In addition, Snowflake Snowpark is designed to make building complex data pipelines a breeze and to allow developers to interact with Snowflake directly without moving data. Read more about Snowpark here.
https://www.snowflake.com/trending/etl-tools
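As a rough illustration of that ELT style, here is a minimal Snowpark for Python sketch. The connection parameters and table names are placeholders, not a definitive pipeline:

```python
# Minimal Snowpark for Python sketch; credentials and table names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Transform raw data that was already loaded (ELT), entirely inside Snowflake.
raw = session.table("RAW_ORDERS")
cleaned = (raw
           .filter(col("ORDER_STATUS") == "COMPLETED")
           .select("ORDER_ID", "CUSTOMER_ID", "ORDER_TOTAL"))

# Persist the transformed result as a new table without pulling data out of Snowflake.
cleaned.write.mode("overwrite").save_as_table("ANALYTICS_ORDERS")
```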
Since you have already started transferring data from the source to Azure Blob Storage, I assume you have an Azure subscription, and possibly your Snowflake account itself is hosted in an Azure region.
In that case, I recommend using Azure Data Factory directly, so you keep everything with one provider and get built-in support for migrating data from SQL Server.
Link to documentation: Copy and transform data in Snowflake using Azure Data Factory
Good day,
I have an Oracle DB and I need to load some tables so I can query them in BigQuery.
Is there a way to load the data into Google BigQuery automatically every 24 hours?
Any approach would work. It could be loading into Cloud Storage and creating the tables from there, or loading into Google Drive from the server.
I really need some ideas; I have read a lot of articles with no luck.
Check this tutorial by Progress:
https://www.progress.com/tutorials/cloud-and-hybrid/etl-on-premises-oracle-data-to-google-bigquery-using-google-cloud-dataflow
In this tutorial, the main goal is to connect to an on-premises Oracle database, read the data, apply a simple transformation, and write it to BigQuery. The code for this project has been uploaded to GitHub for reference.
This solution uses Dataflow and Progress' Hybrid Data Pipeline tool:
Google Cloud Dataflow is a data processing service for both batch and real-time data streams. Dataflow allows you to build pipelines to ingest data, then transform and process it according to your needs before making that data available to analysis tools. DataDirect Hybrid Data Pipeline can be used to ingest both on-premises and cloud data with Google Cloud Dataflow.
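Independent of the Dataflow tutorial, the basic daily extract-and-load step can also be sketched directly in Python; the connection string, table names, and credentials below are placeholders. Scheduling it every 24 hours would then be a matter of cron, Cloud Composer, or a similar scheduler:

```python
# Minimal sketch of a daily extract-and-load job (not the Dataflow tutorial itself).
# Table names, credentials, and the connection string are placeholders.
import oracledb
import pandas as pd
from google.cloud import bigquery

# Pull the table (or a day's delta) from Oracle into a DataFrame.
conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost:1521/ORCLPDB1")
df = pd.read_sql("SELECT * FROM sales WHERE updated_at >= TRUNC(SYSDATE) - 1", conn)
conn.close()

# Load the DataFrame into BigQuery, replacing (or appending to) the target table.
client = bigquery.Client()
job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
job = client.load_table_from_dataframe(df, "my_project.my_dataset.sales", job_config=job_config)
job.result()  # wait for the load job to finish
```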
I'm new to HCatalog (HCat). We would like to know in which use cases/scenarios HCat is used, the benefits of using HCat, and whether any performance improvement can be gained from HCatalog. Can anyone provide information on when to use HCatalog?
Apache HCatalog is a table and storage management layer for Hadoop that enables users with different data processing tools – Apache Pig, Apache Map/Reduce, and Apache Hive – to more easily read and write data on the grid.
HCatalog creates a table abstraction layer over data stored on an HDFS cluster. This table abstraction layer presents the data in a familiar relational format and makes it easier to read and write data using familiar query language concepts.
HCatalog data structures are defined using Hive's data definition language (DDL), and the Hive metastore stores them. Using the command-line interface (CLI), users can create, alter, and drop tables. Tables are organized into databases, or are placed in the default database if none is specified for the table. Once tables are created, you can explore their metadata using commands such as SHOW TABLES and DESCRIBE.
HCatalog commands are the same as Hive's DDL commands.
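For example, that DDL can be issued from Python through HiveServer2 (here via PyHive); this is only a sketch, and the host, user, and table are placeholders:

```python
# Minimal PyHive sketch issuing Hive DDL; host/port, user, and table are placeholders.
from pyhive import hive

conn = hive.Connection(host="hiveserver2.example.com", port=10000, username="etl_user")
cur = conn.cursor()

# Define the table once; the same definition is then visible to Pig and Map/Reduce jobs via HCatalog.
cur.execute("""
    CREATE TABLE IF NOT EXISTS web_logs (
        ts TIMESTAMP,
        user_id STRING,
        url STRING
    )
    STORED AS ORC
""")

# Inspect the metadata that was just registered in the metastore.
cur.execute("DESCRIBE web_logs")
for column_name, column_type, comment in cur.fetchall():
    print(column_name, column_type)
```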
HCatalog ensures that users need not worry about where or in what format their data is stored. HCatalog presents data from RCFile format, text files, or sequence files in a tabular view. It also provides REST APIs so that external systems can access these tables’ metadata.
HCatalog opens up the Hive metadata to other Map/Reduce tools. Every Map/Reduce tool has its own notion of HDFS data (for example, Pig sees HDFS data as a set of files, while Hive sees it as tables). Tools with HCatalog support do not need to care about where or in which format the data is stored.
It assists integration with other tools and supplies read and write interfaces for Pig, Hive, and Map/Reduce.
It provides a shared schema and data types for Hadoop tools, so you do not have to explicitly define the data structures in each program.
It exposes the information through a REST interface for external data access.
It also integrates with Sqoop, which is a tool designed to transfer data back and forth between Hadoop and relational databases such as SQL Server and Oracle.
It provides APIs and a web service wrapper for accessing metadata in the Hive metastore.
HCatalog also exposes a REST interface so that you can create custom tools and applications to interact with Hadoop data structures.
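As an illustration, the REST interface (WebHCat, which by default listens on port 50111) can be called with plain HTTP; the host and user name below are placeholders:

```python
# Minimal sketch against the WebHCat (Templeton) REST API; host and user are placeholders.
import requests

WEBHCAT = "http://webhcat.example.com:50111/templeton/v1"

# List the tables registered in the 'default' database of the shared metastore.
resp = requests.get(f"{WEBHCAT}/ddl/database/default/table",
                    params={"user.name": "etl_user"})
resp.raise_for_status()
print(resp.json())

# Fetch the schema of a single table.
resp = requests.get(f"{WEBHCAT}/ddl/database/default/table/web_logs",
                    params={"user.name": "etl_user"})
print(resp.json())
```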
This allows us to use the right tool for the right job. For example, we can load data into Hadoop using HCatalog, perform some ETL on the data using Pig, and then aggregate the data using Hive. After the processing, you could then send the data to your data warehouse housed in SQL Server using Sqoop. You can even automate the process using Oozie.
How it works:
Pig - HCatLoader and HCatStorer interfaces
Map/Reduce - HCatInputFormat and HCatOutputFormat interfaces
Hive - no interface necessary; direct access to the metadata
References:
Microsoft Big Data Solution
http://hortonworks.com/hadoop/hcatalog/
Answer to your question:
As described earlier, HCatalog provides a shared schema and data types for Hadoop tools, which simplifies your work during data processing. If you have created a table using HCatalog, you can directly access that Hive table through Pig or Map/Reduce (without HCatalog, you cannot simply access a Hive table through Pig or Map/Reduce). You don't need to create a schema for every tool.
If you are working with shared data that is used by multiple teams (some using Hive, some using Pig, some using Map/Reduce), then HCatalog is useful, as each team only needs the table in order to access the data for processing.
It is not a replacement for any tool; it is a facility that provides a single point of access to many tools.
Performance depends on your Hadoop cluster. You should run some benchmarks on your cluster to measure performance.
I have a SAS application where I pull data from Oracle and produce Excel reports using Base SAS and SAS macros. The problem is that the database is getting bigger by the day, fetching data from Oracle takes longer and longer, and as a result my jobs run slowly.
So I want to rebuild my application on Hadoop for reporting and analysis purposes. Can someone please suggest an approach and the tools I would need to use for this?
The short answer is: it depends.
For unloading data from Oracle, I would recommend Sqoop (http://sqoop.apache.org/); it is designed for this specific use case, can even do incremental loads, and can create a Hive table for the unloaded data.
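A rough sketch of such an incremental import, driven from Python by shelling out to the standard sqoop import command (the JDBC URL, credentials, table, and check column are placeholders):

```python
# Minimal sketch that shells out to the standard `sqoop import` command.
# The JDBC URL, credentials, table, and check column are placeholders.
import subprocess

subprocess.run([
    "sqoop", "import",
    "--connect", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
    "--username", "scott",
    "--password-file", "/user/etl/.oracle_password",
    "--table", "SALES",
    "--target-dir", "/data/sales",   # lands as files under this HDFS directory
    "--incremental", "append",       # only pull rows newer than the last run
    "--check-column", "SALE_ID",
    "--last-value", "0",             # Sqoop prints the new value to carry forward
    "--num-mappers", "4",
], check=True)
```

The initial full load could instead use --hive-import so that Sqoop creates the matching Hive table; whether you drive the recurring runs from cron, Oozie, or a Sqoop saved job is a separate choice.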
Once the data is unloaded, you can use Impala to build the report you need. Impala can natively work with Hive tables, so things are really simple. Of course, you would have to rewrite your SAS code as a set of SQL statements that run on top of Impala.
Next, if you need a visualization tool on top of it, you can try something like Tableau or any other tool that can use ODBC/JDBC to connect to Impala.
Finally, I think Hadoop + Sqoop + Impala would cover your needs. But I'd also recommend taking a look at MPP databases, because using SAS means your data is fairly structured, and an MPP database could be a better fit for this case.
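To illustrate the reporting step, a Hive table that Sqoop populated can be queried from Python through Impala's DB-API driver (impyla); the host, port, and query are placeholders:

```python
# Minimal sketch querying Impala via impyla; host, port, and table are placeholders.
from impala.dbapi import connect
from impala.util import as_pandas

conn = connect(host="impalad.example.com", port=21050)
cur = conn.cursor()

# Impala reads the same Hive table that Sqoop created, so no extra schema work is needed.
cur.execute("""
    SELECT region, SUM(amount) AS total_amount
    FROM default.sales
    GROUP BY region
""")

report = as_pandas(cur)   # DataFrame that can be written to Excel, CSV, etc.
print(report)
```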
I have a requirement to ingest data from an Oracle database into Hadoop in real time.
What's the best way to achieve this on Hadoop?
The important problem here is getting the data out of the Oracle DB in real time. This is usually called Change Data Capture, or CDC. The complete solution depends on how you do this part.
Other things that matter for this answer are:
What is the target for the data and what are you going to do with it?
just store plain HDFS files and access them for ad-hoc queries with something like Impala?
store in HBase for use in other apps?
use in a CEP solution like Storm?
...
What tools is your team familiar with?
Do you prefer the DIY approach, gluing together existing open-source tools and writing code for the missing parts?
or do you prefer a Data integration tool like Informatica?
Coming back to CDC, there are three different approaches to it:
Easy: if you don't need true real time and have a way to identify new data with a SQL query that executes fast enough for the required data latency, you can run this query over and over and ingest its results (the exact method depends on the target, the size of each chunk, and the preferred tools); a minimal polling sketch follows this list.
Complicated: roll your own CDC solution: read the database logs, parse them into a series of inserts/updates/deletes, and ingest these into Hadoop.
Expensive: buy a CDC solution that does this for you (such as GoldenGate or Attunity).
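To make the "easy" option concrete, here is a minimal polling sketch in Python; the table, watermark column, and output path are placeholders, and a real job would persist the watermark and write to HDFS/Kafka rather than a local file:

```python
# Minimal sketch of the "easy" polling approach; table, column, and paths are placeholders.
import time
import oracledb

POLL_SECONDS = 60
last_seen_id = 0  # persist this watermark somewhere durable in a real job

conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost:1521/ORCLPDB1")

while True:
    cur = conn.cursor()
    # Identify new rows using a monotonically increasing key (or a timestamp column).
    cur.execute(
        "SELECT order_id, customer_id, amount FROM orders WHERE order_id > :last_id ORDER BY order_id",
        last_id=last_seen_id,
    )
    rows = cur.fetchall()
    cur.close()

    if rows:
        last_seen_id = rows[-1][0]
        # Hand the chunk off to the ingestion target, e.g. append it to a file that is
        # later pushed to HDFS, or write it to Kafka/HBase depending on the design.
        with open("/tmp/orders_delta.csv", "a") as out:
            for order_id, customer_id, amount in rows:
                out.write(f"{order_id},{customer_id},{amount}\n")

    time.sleep(POLL_SECONDS)
```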
Expanding a bit on what #Nickolay mentioned, there are a few options, but which is best would be too opinion-based to state.
Tungsten (open source)
Tungsten Replicator is an open source replication engine supporting a variety of different extractor and applier modules. Data can be extracted from MySQL, Oracle, and Amazon RDS, and applied to transactional stores including MySQL, Oracle, and Amazon RDS; NoSQL stores such as MongoDB; and data warehouse stores such as Vertica, Hadoop, and Amazon Redshift.
Oracle GoldenGate
Oracle GoldenGate is a comprehensive software package for real-time data integration and replication in heterogeneous IT environments. The product set enables high availability solutions, real-time data integration, transactional change data capture, data replication, transformations, and verification between operational and analytical enterprise systems. It provides a handler for HDFS.
Dell Shareplex
SharePlex™ Connector for Hadoop® loads and continuously replicates changes from an Oracle® database to a Hadoop® cluster. This gives you all the benefits of maintaining a real-time or near real-time copy of source tables.
Apache Sqoop is a data transfer tool for moving bulk data from any RDBMS with JDBC connectivity (including Oracle) into Hadoop HDFS.