Improving Teradata offloading performance using Apache Spark

I have more than 1 TB of data in Teradata that I need to import and process in Spark. I have already tried tuning the spark-submit settings (executor memory, core count, driver memory and cores), yet the job still takes a long time in the cluster environment. My cluster has 4 data nodes and more than 500 GB. What other main considerations are there to improve performance? Please help.

If you post your code for loading the DataFrame I may be able to help more. In situations like this I usually find the Spark connector is doing something unexpected during the load. If you have a way to inspect the queries that Spark is running on the Teradata cluster you might find that it's loading in an inefficient order.
What are you doing with the data after it's loaded? Is there a reason you need 100% of your dataset in memory in your Spark cluster? Teradata stores data on disk in a more compressed format than it will be in memory on your Spark cluster. You may have more success pushing predicates (or a whole query) down to Teradata to limit the amount of data you're pulling into Spark. See "Push down a query to the database engine" and the "Push down optimization" section that follows it for more, and the sketch below.
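As an illustration of that pushdown idea (not the asker's actual code), here is a minimal Scala sketch; the host, table, column names and partition bounds are hypothetical and would need to match your Teradata environment:

import org.apache.spark.sql.SparkSession

// Sketch only: push a filtering query down to Teradata instead of pulling the whole table.
val spark = SparkSession.builder().appName("TeradataOffload").getOrCreate()

val pushedDown = spark.read
  .format("jdbc")
  .option("url", "jdbc:teradata://td-host/DATABASE=mydb") // adjust to your environment
  .option("driver", "com.teradata.jdbc.TeraDriver")
  .option("dbtable", "(SELECT id, amount, txn_date FROM sales WHERE txn_date >= DATE '2023-01-01') t") // only the rows/columns you need
  .option("user", "user")
  .option("password", "password")
  .option("numPartitions", "8")      // parallel JDBC reads
  .option("partitionColumn", "id")
  .option("lowerBound", "1")
  .option("upperBound", "100000000")
  .load()

pushedDown.write.mode("overwrite").parquet("hdfs:///staging/sales_filtered")

Supplying a subquery as the dbtable option means the filtering and column pruning happen on the Teradata side, so far less data crosses the wire into Spark.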

Related

Impala vs Hive. How Impala circumvents MapReduce?

How is Impala able to achieve lower latency than Hive in query processing?
I was going through http://impala.apache.org/overview.html, where it is stated:
To avoid latency, Impala circumvents MapReduce to directly access the data through a specialized distributed query engine that is very similar to those found in commercial parallel RDBMSs. The result is order-of-magnitude faster performance than Hive, depending on the type of query and configuration.
How does Impala fetch the data without MapReduce (as Hive does)?
Can we say that Impala is closer to HBase and should be compared with HBase instead of comparing with Hive?
Edit:
Or can we say that, classically, Hive runs on top of MapReduce and therefore requires less memory to work, while Impala does everything in memory (and hence requires more memory) by having the data already cached in memory and acted upon on request?
Just read Impala Architecture and Components
Impala is a massively parallel processing (MPP) database engine. It consists of different daemon processes that run on specific hosts.... Impala is different from Hive and Pig because it uses its own daemons that are spread across the cluster for queries.
It circumvents MapReduce containers by having a long running daemon on every node that is able to accept query requests. There is no singular point of failure that handles requests like HiveServer2; all impala engines are able to immediately respond to query requests rather than queueing up MapReduce YARN containers.
Impala does, however, rely on the Hive Metastore service, because it is simply a useful service for mapping metadata stored in the RDBMS to the Hadoop filesystem. Pig, Spark, PrestoDB, and other query engines also share the Hive Metastore without communicating through HiveServer.
Data is not "already cached" in Impala. Similar to Spark, you must read the data into a large portion of memory in order for operations to be quick. Unlike Spark, the daemons and statestore services remain active for handling subsequent queries.
Impala can query HBase, but it is not similar in architecture and in my experience, a well designed HBase table is faster to query than Impala. Impala is probably closer to Kudu.
Also worth mentioning that it's not really recommended to use MapReduce Hive anymore. Tez is far better, and Hortonworks states Hive LLAP is better than Impala, although as you quoted, it largely "depends on the type of query and configuration."
Impala uses the Impala Daemon service to read data directly from the DataNodes (the daemon must be installed on the same hosts as the DataNodes). It caches only the location of files and some statistics in memory, not the data itself.
That is why Impala cannot see new files created within a table: you must run INVALIDATE METADATA or REFRESH (depending on your case) so that Impala caches the new file metadata and can read the files directly.
Since Impala works in memory, you need enough memory for the data read by the query. If your query uses more data than your memory allows (a complex query with aggregations on huge tables), use Hive with the Spark engine instead of the default MapReduce:
set hive.execution.engine=spark;
just before the query; you can run the same query in Hive with the Spark engine.
Impala is a Cloudera product; you won't find it on Hortonworks or MapR (or others). Tez, for example, is not included with Cloudera. It all depends on the platform you are using.

Apache Spark-SQL vs Sqoop benchmarking while transferring data from RDBMS to hdfs

I am working on a use case where I have to transfer data from an RDBMS to HDFS. We have benchmarked this case using Sqoop and found that we can transfer around 20 GB of data in 6-7 minutes.
Whereas when I try the same with Spark SQL, the performance is very low (1 GB of records takes 4 minutes to transfer from Netezza to HDFS). I am trying to do some tuning and increase its performance, but it is unlikely I can tune it to the level of Sqoop (around 3 GB of data per minute).
I agree that Spark is primarily a processing engine, but my main question is: both Spark and Sqoop use a JDBC driver internally, so why is there such a big difference in performance (or maybe I am missing something)? I am posting my code here.
object helloWorld {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Netezza_Connection").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
    sqlContext.read.format("jdbc")
      .option("url", "jdbc:netezza://hostname:port/dbname")
      .option("dbtable", "POC_TEST")
      .option("user", "user")
      .option("password", "password")
      .option("driver", "org.netezza.Driver")
      .option("numPartitions", "14")
      .option("lowerBound", "0")
      .option("upperBound", "13")
      .option("partitionColumn", "id")
      .option("fetchSize", "100000")
      .load()
      .registerTempTable("POC")
    val df2 = sqlContext.sql("select * from POC")
    val partitioner = new org.apache.spark.HashPartitioner(14)
    val rdd = df2.rdd.map(x => (String.valueOf(x.get(1)), x)).partitionBy(partitioner).values
    rdd.saveAsTextFile("hdfs://Hostname/test")
  }
}
I have checked many other posts but could not get a clear answer about the internal workings and tuning of Sqoop, nor did I find a Sqoop vs Spark SQL benchmark. Kindly help me understand this issue.
You are using the wrong tools for the job.
Sqoop will launch a slew of processes (on the data nodes) that will each make a connection to your database (see --num-mappers) and each extract a part of the dataset. I don't think you can achieve that kind of read parallelism with Spark.
Get the dataset with Sqoop and then process it with Spark.
You can try the following:
Read the data from Netezza without any partitions and with fetchSize increased to a million:
sqlContext.read.format("jdbc")
  .option("url", "jdbc:netezza://hostname:port/dbname")
  .option("dbtable", "POC_TEST")
  .option("user", "user")
  .option("password", "password")
  .option("driver", "org.netezza.Driver")
  .option("fetchSize", "1000000")
  .load()
  .registerTempTable("POC")
Repartition the data before writing it to the final file:
val df3 = df2.repartition(10) // to reduce the shuffle
ORC is more optimized than text; write the final output to Parquet/ORC:
df3.write.format("orc").save("hdfs://Hostname/test")
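Stitching those three steps together, a minimal end-to-end sketch could look like the following. This is only an illustration of the suggestion above; the SparkSession setup, the output path and the choice of 10 partitions are assumptions, not part of the original answer.

import org.apache.spark.sql.SparkSession

// Sketch only: read with a large fetch size, repartition once, write a columnar copy.
val spark = SparkSession.builder().appName("NetezzaToHdfs").getOrCreate()

val df = spark.read.format("jdbc")
  .option("url", "jdbc:netezza://hostname:port/dbname")
  .option("dbtable", "POC_TEST")
  .option("user", "user")
  .option("password", "password")
  .option("driver", "org.netezza.Driver")
  .option("fetchSize", "1000000") // fewer JDBC round trips per partition
  .load()

df.repartition(10)                  // controls write parallelism and output file count
  .write
  .mode("overwrite")
  .format("orc")                    // columnar output instead of text
  .save("hdfs://Hostname/test_orc") // hypothetical output path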
@amitabh
Although marked as an answer, I disagree with it.
Once you give the predicate to partition the data while reading over JDBC, Spark will run separate tasks for each partition. In your case the number of tasks should be 14 (you can confirm this using the Spark UI).
I notice that you are using local as the master, which provides only 1 core for the executor. Hence there will be no parallelism, which is what is happening in your case.
Now to get the same throughput as sqoop you need to make sure that these tasks are running in parallel. Theoretically this can be done either by:
1. Using 14 executors with 1 core each
2. Using 1 executor with 14 cores (other end of the spectrum)
Typically, I would go with 4-5 cores per executor. So I would test the performance with 15/5 = 3 executors (I added 1 to 14 to account for 1 core for the driver running in cluster mode).
Use spark.executor.cores and spark.executor.instances in SparkConf.set to play with these configs, as in the sketch below.
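A minimal sketch of that SparkConf approach, assuming a YARN deployment; the executor memory value is a hypothetical placeholder:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: 3 executors with 5 cores each, following the reasoning above.
// The same values can be passed to spark-submit via --num-executors, --executor-cores, --executor-memory.
val conf = new SparkConf()
  .setAppName("Netezza_Connection")
  .set("spark.executor.instances", "3") // honored on YARN with static allocation
  .set("spark.executor.cores", "5")
  .set("spark.executor.memory", "8g")   // hypothetical value; size to your nodes
val sc = new SparkContext(conf)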
If this does not significantly increase performance, the next thing would be to look at the executor memory.
Finally, I would tweak the application logic to look at mapRDD sizes, partition sizes and shuffle sizes.
I had the same problem, because the piece of code you are using does not work for partitioning.
sqlContext.read.format("jdbc")
  .option("url", "jdbc:netezza://hostname:port/dbname")
  .option("dbtable", "POC_TEST")
  .option("user", "user")
  .option("password", "password")
  .option("driver", "org.netezza.Driver")
  .option("numPartitions", "14")
  .option("lowerBound", "0")
  .option("upperBound", "13")
  .option("partitionColumn", "id")
  .option("fetchSize", "100000")
  .load()
  .registerTempTable("POC")
You can check the number of partitions created in your Spark job with:
df.rdd.partitions.length
You can use the following code to connect to the DB:
sqlContext.read.jdbc(
  url = db_url,
  table = tableName,
  columnName = "ID",
  lowerBound = 1L,
  upperBound = 100000L,
  numPartitions = numPartitions,
  connectionProperties = connectionProperties)
To optimize your Spark job, the following are the main parameters:
1. number of partitions
2. --num-executors
3. --executor-cores
4. --executor-memory
5. --driver-memory
6. fetchSize
Options 2, 3, 4 and 5 depend on your cluster configuration.
You can monitor your Spark job in the Spark UI.
Sqoop and Spark SQL both use JDBC connectivity to fetch data from RDBMS engines, but Sqoop has an edge here since it is built specifically to migrate data between an RDBMS and HDFS.
Every single option available in Sqoop has been fine-tuned to get the best performance while doing the data ingestion.
You can start with the option -m, which controls the number of mappers.
This is what you need to do to fetch data in parallel from the RDBMS. Can I do it in Spark SQL?
Of course, but the developer would need to take care of the "multithreading" that Sqoop handles automatically, for example with a partitioned JDBC read as sketched below.
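As a hedged illustration (not part of the original answer), the Spark-side counterpart of Sqoop's parallel import is the partitioned JDBC read; the connection details and the ID range below are hypothetical placeholders:

import java.util.Properties
import org.apache.spark.sql.SparkSession

// Sketch only: Sqoop's "-m 14 --split-by ID" maps onto numPartitions + partitionColumn + bounds.
val spark = SparkSession.builder().appName("ParallelJdbcRead").getOrCreate()

val props = new Properties()
props.setProperty("user", "user")
props.setProperty("password", "password")
props.setProperty("driver", "org.netezza.Driver")

val df = spark.read.jdbc(
  url = "jdbc:netezza://hostname:port/dbname",
  table = "POC_TEST",
  columnName = "ID",     // like Sqoop's --split-by
  lowerBound = 1L,       // like the result of Sqoop's boundary query
  upperBound = 10000000L,
  numPartitions = 14,    // like Sqoop's -m 14
  connectionProperties = props)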
The solution below helped me:
var df = spark.read.format("jdbc")
  .option("url", "url")
  .option("user", "user")
  .option("password", "password")
  .option("dbtable", "dbTable")
  .option("fetchSize", "10000")
  .load()
df.registerTempTable("tempTable")
var dfRepart = spark.sql("select * from tempTable distribute by primary_key") // this will repartition the data evenly
dfRepart.write.format("parquet").save("hdfs_location")
Apache Sqoop is retired now - https://attic.apache.org/projects/sqoop.html
Using Apache Spark is a good option. This link shows how Spark can be used instead of Sqoop - https://medium.com/zaloni-engineering/apache-spark-vs-sqoop-engineering-a-better-data-pipeline-ef2bcb32b745
Alternatively, one can choose a cloud service like Azure Data Factory or Amazon Redshift.

Loading data into HIVE to support front end application

We have a data warehousing application which we are planning to convert to Hadoop.
Currently, there are 20 feeds that we receive on a daily basis, and we load this data into a MySQL database.
As the data is getting large, we are planning to move to Hadoop for faster query processing.
As the first step we are planning to load the data into HIVE on a daily basis instead of MySQL.
Questions:
1. Can I use Hadoop like a DWH application to process files on a daily basis?
2. When I load the data onto the master node, will it be synced automatically?
It really depends on the size of your data. The question is a bit complex, but in general you will have to design your own pipeline.
If you are analyzing raw logs, HDFS will be a good choice to start from. You can use Java, Python or Scala to schedule the Hive jobs on a daily basis, and use Sqoop if you still need some MySQL data.
In Hive you will have to create a partitioned table so the data is synced and available upon query execution. Partition creation can also be scheduled, roughly as sketched below.
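As a rough, hedged sketch of that daily pattern (table, column and path names are hypothetical, and the raw daily drop is assumed to be JSON):

import org.apache.spark.sql.SparkSession

// Sketch only: a partitioned Hive table for daily feeds, loaded one partition per day.
val spark = SparkSession.builder()
  .appName("DailyFeedLoad")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("""
  CREATE TABLE IF NOT EXISTS feeds (
    feed_id STRING,
    payload STRING
  )
  PARTITIONED BY (load_date STRING)
  STORED AS PARQUET
""")

// Scheduled daily: register the day's raw drop and insert it as that day's partition.
spark.read.json("hdfs:///landing/feeds/2024-01-15/")
  .createOrReplaceTempView("staging_feed")

spark.sql("""
  INSERT INTO feeds PARTITION (load_date = '2024-01-15')
  SELECT feed_id, payload FROM staging_feed
""")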
I would suggest going with Impala instead of Hive, as it is more tunable, fault tolerant and easier to use.

Tableau, Hadoop & Birt

I was trying to migrate data from a SQL DB to Hadoop. I have successfully done this by configuring Hive, HBase & Hadoop.
My problem is that I was using BIRT & Tableau with my SQL DB and was able to load 10 million rows in 5-10 minutes, but my newly configured Hadoop, Hive & HBase system takes around 50 minutes to fetch 10 million entries.
How can I improve this performance?
As Hadoop is specially developed for processing tons of data, why am I not able to do so?
Is there any special configuration for performance?
After a lot of research on this question, I also went through HDP. I came to the conclusion that we cannot compare the performance of a SQL DB with Hadoop, as both are used for different purposes.
Also, Hadoop shows its performance advantage only after the data crosses a threshold of several TBs, i.e. the cases in which a SQL database fails. So it is better to first check what the application actually needs: if the requirement is performance at this data size, choosing Hadoop is not a good option; go for the SQL database. But if the application has to analyze a huge amount of data where a SQL DB fails, then Hadoop is the right choice.

Performance Issue in Hadoop,HBase & Hive

I am working on migrating data from a SQL database to Hadoop, in which I have also used HBase and Hive. I have successfully imported my data from the SQL DB into Hadoop, HBase and Hive. But the problem is the performance of the system: I was getting results for millions of entries within 5-10 minutes in the SQL DB, but it takes around 1 hour to fetch 10 million rows from HBase and Hive. Can anyone help me improve the performance of my Hadoop system?
Data in HBase is only 'indexed' by rowkey. If you're querying in Hive on anything other than rowkey prefixes, you will generally be performing a full table scan.
There are some optimizations that can be made with HBase filters e.g., when using a FamilyFilter, you may be able to skip entire regions, but I doubt Hive is doing that.
How to improve performance depends on how your data is shaped and what analysis you need to perform on it. For frequent ad-hoc analysis, you may be better served by exporting data from HBase into something like Parquet files on HDFS and running your analysis against those with Hive (or Drill, Spark, Impala, etc.), roughly as sketched below.
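Assuming you already have the HBase data loaded as a DataFrame (via whichever HBase-Spark connector you use), a minimal sketch of that export-then-query pattern might look like this; the schema, partition column and paths are hypothetical:

import org.apache.spark.sql.{DataFrame, SparkSession}

val spark = SparkSession.builder().appName("HBaseExport").getOrCreate()

// `events` is assumed to have been read from HBase already; its schema is hypothetical.
def exportToParquet(events: DataFrame): Unit = {
  events.write
    .mode("overwrite")
    .partitionBy("event_date")   // enables partition pruning for ad-hoc queries
    .parquet("hdfs:///warehouse/events_parquet")
}

// Ad-hoc analysis then runs against the columnar copy instead of scanning HBase.
val dailyCounts = spark.read.parquet("hdfs:///warehouse/events_parquet")
  .where("event_date = '2024-01-15'")
  .groupBy("event_type")
  .count()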
