I am new to Cognos and trying to create reports on top of Hadoop using the Hive JDBC driver. I can connect to Hive through JDBC and generate reports, but the reports run very slowly. I did the same job connecting to DB2, with the same data as in Hadoop, and those reports ran very quickly compared to the ones on top of Hive. I'm using the same data sets in both Hadoop and DB2, but can't figure out why the reports on top of Hadoop are so slow. I installed Hadoop in pseudo-distributed mode and connected through JDBC.
I installed the following software versions:
IBM Cognos 10.2.1 with fix pack 11,
Apache Hadoop 2.7.2,
Apache Hive 0.12.
They are installed on different systems: Cognos on Windows 7 and Hadoop on Red Hat.
Can anyone tell me where I might have gone wrong in setting up Cognos or Hadoop? Is there any way to speed up report run times in Cognos on top of Hadoop?
When you say you installed Hadoop in pseudo-distributed mode, are you saying you are only running it on a single server? If so, it's never going to be as fast as DB2. Hadoop and Hive are designed to run on a cluster and scale out. Get 3 or 4 servers running in a cluster and you should find that you start to see some impressive query speeds over large datasets.
Check that you have allowed the Cognos Query Service to access more than the default amount of memory for its Java heap (http://www-01.ibm.com/support/docview.wss?uid=swg21587457). I currently run an initial size of 8 GB and a max of 12 GB, but still manage to blow through this occasionally.
The next issue you will run into is that Cognos doesn't know Hive SQL specifics (or Impala, which is what I am using). This means that any non-basic query is going to be converted to a SELECT FROM and maybe a GROUP BY. The big missing piece will be the WHERE clause, which means Cognos will try to pull in all of the data from the Hive table and then do the filtering in Cognos, rather than pass that work off to Hive where it belongs. Cognos knows how to write DB2 SQL with all its specifics, so it can push that workload through.
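As a rough illustration of the difference (the calls table and its columns here are hypothetical, not from the report in question): the first query is what you want Hive to receive, the second is the shape of what Cognos tends to send when it cannot translate the filter.

    -- Filter pushed down: only matching rows leave the cluster.
    SELECT region, SUM(call_minutes) AS total_minutes
    FROM   calls
    WHERE  call_date = '2016-05-01'
    GROUP BY region;

    -- No WHERE clause: every row is pulled back to Cognos
    -- and the filtering happens locally.
    SELECT region, call_date, call_minutes
    FROM   calls;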
More complex queries, and anything using platform-specific functions (date functions, analytic functions, etc.), will generally not be passed to Hive, so try to structure your data and queries so that such functions are not needed in your filters.
Use the Hive query logs to monitor the queries that Cognos is actually running. Also try things like adding fields to the query and then dragging that field into the filter, rather than dragging it straight from the model into the filter. I have found this can help in getting Cognos to include the filter in a WHERE clause.
The other option is to use pass-through SQL queries in Report Studio and just write it all in Hive's SQL. I have just done this for a set of dashboards which required a stack of top 5s from a fact table with 5 million rows. For 5 rows, Cognos was extracting all 5 million rows and then ranking them within Cognos. Do this a number of times and all of a sudden Cognos is going to struggle. With a pass-through query I could use the Impala RANK() function and get back only 5 rows: much, much faster, and faster than what DB2 would do, seeing as I am running on a proper (but small) cluster.
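A minimal sketch of that kind of pass-through query (the fact table and measure names are made up for the example; the actual report used different ones):

    -- Top 5 products by revenue, ranked inside Impala/Hive so that
    -- only 5 rows ever come back to Cognos.
    SELECT product, revenue
    FROM (
        SELECT product,
               revenue,
               RANK() OVER (ORDER BY revenue DESC) AS rnk
        FROM   sales_fact
    ) ranked
    WHERE rnk <= 5;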
Another consideration with Hive is whether you are using Hive on MapReduce or Hive on Tez. From what a colleague has found, Hive on Tez is much faster at the type of queries Cognos runs than Hive on MapReduce.
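If your Hive build supports Tez (the Hive 0.12 mentioned in the question does not; Tez support arrived in Hive 0.13), switching the engine is a session-level setting:

    -- Assumes a Hive release with Tez support (0.13 or later).
    SET hive.execution.engine=tez;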
What is the best way to ingest data from a Teradata database into Hadoop with parallel data movement?
If we create a job which simply opens one session to the Teradata database, it will take a lot of time to load a huge table.
If we create a set of sessions to load data in parallel, and issue a SELECT in each of those sessions, it will run a set of full table scans on Teradata to produce the data.
What is the recommended best practice for loading data in parallel streams without putting unnecessary workload on Teradata?
If Teradata supports table partitioning like Oracle, you could try reading the table based on partition boundaries, which will enable parallelism in the read.
The other option you have is to split the table into multiple portions by adding a WHERE clause on an indexed column. This ensures an index scan and avoids a full table scan.
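A rough sketch of that splitting, assuming a hypothetical table with an indexed numeric id column; each parallel session runs one non-overlapping slice:

    -- Session 1
    SELECT * FROM big_table WHERE id >= 1       AND id < 1000000;
    -- Session 2
    SELECT * FROM big_table WHERE id >= 1000000 AND id < 2000000;
    -- ...and so on, one range per parallel session.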
The most scalable way to ingest data into Hadoop from Teradata that I have found is to use the Teradata Connector for Hadoop. It is included in the Cloudera and Hortonworks distributions. I will show an example based on the Cloudera documentation, but the same works with Hortonworks as well.
Informatica Big Data Edition uses a standard Sqoop invocation via the command line and submits a set of parameters to it. So the main question is which driver to use to make parallel connections between the two MPP systems.
Here is the link to the Cloudera documentation:
Using the Cloudera Connector Powered by Teradata
And here is a digest from that documentation (you will find that this connector supports different methods for balancing the load across connections):
Cloudera Connector Powered by Teradata supports the following methods for importing data from Teradata to Hadoop:
split.by.amp
split.by.value
split.by.partition
split.by.hash
split.by.amp Method
This optimal method retrieves data from Teradata. The connector creates one mapper per available Teradata AMP, and each mapper subsequently retrieves data from each AMP. As a result, no staging table is required. This method requires Teradata 14.10 or higher.
If you use partition names in the SELECT clause, PowerCenter will select only the rows within that partition, so there won't be duplicate reads (don't forget to choose Database partitioning at the Informatica session level). However, if you use key range partitioning, you have to choose the ranges in the settings, as you mentioned. Usually we use the Oracle NTILE analytic function to split the table into multiple portions so that each SELECT reads a unique slice. Please let me know if you have any questions. If you have a range/auto-generated/surrogate key column in the table, use it in the WHERE clause - write a sub-query to divide the table into multiple portions.
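A minimal sketch of the NTILE approach (the table and key column names are hypothetical); each of, say, four parallel sessions reads one bucket:

    -- Split the table into 4 buckets on the key column;
    -- session N reads the rows where bucket_id = N.
    SELECT *
    FROM (
        SELECT t.*,
               NTILE(4) OVER (ORDER BY t.row_key) AS bucket_id
        FROM   source_table t
    ) s
    WHERE bucket_id = 1;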
I was trying to migrate data from a SQL database to Hadoop. I have successfully done this by configuring Hive, HBase and Hadoop.
My problem is that I was using BIRT and Tableau with my SQL database and was able to load 10 million rows in 5-10 minutes, but my newly configured Hadoop, Hive and HBase system takes around 50 minutes to fetch 10 million entries.
How can I improve this performance?
As Hadoop is specially developed for processing tons of data, why am I not able to do so?
Is there any special configuration for performance?
After a lot of research into this question, I went through HDP as well. I came to the conclusion that we cannot compare the performance of a SQL database with Hadoop, as the two are used for different purposes.
Also, Hadoop only shows its strength once the data grows past several TBs, i.e. the point at which a SQL database fails. So you should first check what the application actually requires. If the requirement is fast response times on modest data volumes, Hadoop is not a good option; go for a SQL database. But if the application has a huge amount of data and you have to analyse data at a scale where a SQL database fails, then Hadoop is the right choice.
I am working on migrating data from a SQL database to Hadoop, for which I have used HBase and Hadoop as well. I have successfully imported my data from the SQL database into Hadoop, HBase and Hive. But the problem is the performance of the system: I was getting results over millions of entries within 5-10 minutes in the SQL database, but it takes around 1 hour to fetch 10 million rows from HBase and Hive. Can anyone help me improve the performance of my Hadoop system?
Data in HBase is only 'indexed' by rowkey. If you're querying in Hive on anything other than rowkey prefixes, you will generally be performing a full table scan.
There are some optimizations that can be made with HBase filters (e.g., when using a FamilyFilter you may be able to skip entire regions), but I doubt Hive is doing that.
How to improve performance depends on how your data is shaped and what analysis you need to perform on it. If you are performing frequent ad-hoc analysis, you may be better served by exporting the data from HBase into something like Parquet files on HDFS and running your analysis against those with Hive (or Drill, Spark, Impala, etc.).
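As a sketch of that export, assuming a hypothetical HBase table named events with a single cf:payload column, and a Hive release recent enough to support STORED AS PARQUET (0.13 or later):

    -- Hive table mapped onto the existing HBase table; only the rowkey
    -- (:key) is indexed, filtering on anything else means a full scan.
    CREATE EXTERNAL TABLE hbase_events (
        rowkey  STRING,
        payload STRING
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:payload')
    TBLPROPERTIES ('hbase.table.name' = 'events');

    -- One-off export to Parquet on HDFS; later analysis hits this copy
    -- instead of scanning the HBase region servers.
    CREATE TABLE events_parquet STORED AS PARQUET AS
    SELECT rowkey, payload
    FROM   hbase_events;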
We presently load CDRs into an Oracle warehouse using a combination of Bash shell scripts and SQL*Loader with multiple threads. We are hoping to offload this process to Hadoop because we envisage that the increase in data due to the growth of our subscriber base will soon max out the current system. We also want to gradually introduce Hadoop into our data warehouse environment.
Will loading from Hadoop be faster?
If so, what is the best set of Hadoop tools for this?
Further info:
We usually get a continuous stream of pipe-delimited text files through FTP into a folder, add two more fields to each record, load them into temp tables in Oracle, and run a procedure to load them into the final table. How would you advise the process flow should look in terms of the tools to use? For example:
files are FTPed to the Linux file system (or is it possible to FTP straight to Hadoop?) and Flume loads them into Hadoop;
fields are added (what would be best for this? Pig, Hive, Spark or any other recommendation? see the Hive sketch below);
files are then loaded into Oracle using Sqoop;
the final procedure is called (can Sqoop make an Oracle procedure call? If not, what tool would be best to execute the procedure and help control the whole process?)
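As a rough sketch of the "fields are added" step only (the file layout, column names and the two derived fields here are hypothetical; adjust to your CDR format), this can be done in Hive once Flume has landed the files in HDFS:

    -- Raw CDRs as landed by Flume (pipe-delimited text), hypothetical columns.
    CREATE EXTERNAL TABLE cdr_raw (
        caller   STRING,
        callee   STRING,
        call_ts  STRING,
        duration INT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    LOCATION '/data/cdr/incoming';

    -- Enriched copy with two extra fields, ready to be exported to Oracle
    -- (e.g. with Sqoop) or queried directly.
    CREATE TABLE cdr_enriched AS
    SELECT caller,
           callee,
           call_ts,
           duration,
           'MSC01'                         AS source_system,  -- example added field
           from_unixtime(unix_timestamp()) AS load_time       -- example added field
    FROM   cdr_raw;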
Also, how can one control the level of parallelism? Does it equate to the number of mappers running the job?
I had a similar task of exporting data from a < 6 node Hadoop cluster to an Oracle data warehouse.
I've tested the following:
Sqoop
OraOop
Oracle Loader for Hadoop from the "Oracle BigData Connectors" suite
A Hadoop streaming job which uses SQL*Loader (sqlldr) as the mapper; in its control file you can read from stdin using: load data infile "-"
Considering speed alone, the Hadoop streaming job with SQL*Loader as the mapper was the fastest way to transfer the data, but you have to install SQL*Loader on each machine in your cluster. It was more of a personal curiosity; I would not recommend using this approach to export data, as the logging capabilities are limited and it is likely to have a bigger impact on your data warehouse's performance.
The winner was Sqoop. It is pretty reliable, it is the import/export tool of the Hadoop ecosystem, and it was the second fastest solution according to my tests (about 1.5x slower than first place).
Sqoop with OraOop (last updated in 2012) was slower than the latest version of Sqoop, and requires extra configuration on the cluster.
Finally, the worst time was obtained using Oracle's Big Data Connectors. If you have a big cluster (>100 machines) it should not be as bad as the time I obtained. The export was done in two steps: the first step preprocesses the output and converts it to an Oracle format that plays nicely with the data warehouse; the second step transfers the result to the data warehouse. This approach is better if you have a lot of processing power, and it does not impact the data warehouse's performance as much as the other solutions.
I want to use the SAS/ACCESS 9.3M2 interface to connect SAS with my Hive.
My question is:
does SAS import Hive cubes into the SAS environment and query them there,
or
does it hit Hive again for reporting, so that it runs MapReduce, which degrades my reporting performance beyond 2-4 seconds?
If it imports Hive tables into its environment, how would its performance compare with normal SQL cubes?
I am totally new to SAS. I want my reports generated within 2-4 seconds; my aggregated data is in Hive tables and I have created cube dimensions over them.
Thanks...
What SAS/ACCESS is there for:
- to provide the ability to read and write data from/to a data source, taking care of data type conversions
- to provide metadata about a data store (list of tables, fields, data types)
- to provide a means to (at least partially) translate SAS code into data-source-specific code, usually a SQL variant (implicit pass-through)
- to provide a means for you to write data-source-specific code yourself and send it from SAS for execution in the data source
I'm totally new to Hadoop :-) so I'll just guess that SAS/ACCESS to Hadoop (via the LIBNAME statement) reads relational data from Hadoop; the documentation mentions JDBC, so I guess that's what is used for data access.
I doubt SAS/ACCESS is able to query the cubes from Hadoop (is that your question? - "I have created cube dimensions over that" - meaning in Hadoop?).
Generally, SAS/ACCESS tries to minimize data transfers from data sources and tries to push the processing down to the data source.
From http://blog.cloudera.com/blog/2013/05/how-the-sas-and-cloudera-platforms-work-together:
SAS/ACCESS to Hadoop
SAS/ACCESS provides the ability to access data sets stored in Hadoop in SAS natively. With SAS/Access to Hadoop:
LIBNAME statements can be used to make Hive tables look like SAS data sets on top of which SAS Procedures and SAS DATA steps can interact.
PROC SQL commands provide the ability to execute direct Hive SQL commands on Hadoop.
PROC HADOOP provides the ability to directly submit MapReduce, Apache Pig, and HDFS commands from the SAS execution environment to your CDH cluster.
The SAS/ACCESS interface is available from the SAS 9.3M2 release and supports CDH 3U2 as well as CDH 4.01 and higher.
The documentation for PROC HADOOP might also be helpful:
http://support.sas.com/documentation/cdl/en/proc/65145/HTML/default/viewer.htm#p1esotuxnkbuepn1w443ueufw8in.htm