HBase cluster running Phoenix is slow using JDBC and Python phoenixdb - performance

I have a cluster running HBase and a Phoenix Query Server. Currently my cluster contains a master node and 3 slaves. The table I am connecting to consists of 124 columns and a total of 16 million rows. A simple COUNT(*) or DISTINCT "value" query takes around 1-2 minutes, which, as far as I understand, shouldn't be the case - How fast is Phoenix? Why is it so fast?
In the documentation linked above, a full table scan of 100 million rows should take around 20 seconds, and since my table is significantly smaller I don't understand why my queries take that long. What could I do to optimize my queries? I plan on reconstructing my table using column families (which I know improves performance), but I was wondering if there are other ways to get a quick performance boost, as reconstructing my current table would be quite a big task.
I am using Phoenix 4.9 and HBase 1.2.
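For illustration, the kind of column-family split I have in mind would look roughly like this in Phoenix DDL (the table and column names are placeholders, not my real 124-column schema):

CREATE TABLE sensor_data (
    id VARCHAR NOT NULL PRIMARY KEY,
    a."value" VARCHAR,      -- columns I query often go into family "a"
    b.raw_payload VARCHAR   -- rarely touched columns go into family "b"
);

-- a query that only references family "a" should then avoid reading family "b" at all
SELECT COUNT(DISTINCT "value") FROM sensor_data;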

Related

Hive query joining 15 tables is expected to generate 1 billion records, on 3 datanodes with 16 GB RAM each - is this the right way to do it?

My name is Vitthal.
The Hortonworks HDP 2.4 cluster on Amazon has 3 datanodes, with the masters on separate instances.
7 instances, 16 GB RAM each.
1 TB total HDD space.
3 data nodes.
Hadoop version 2.7.
I have pulled data from Postgres into the Hadoop distributed environment.
The data is 15 tables; 4 of them have 15 million records each, and the rest are master tables.
I've pulled them into HDFS, compressed as ORC with the Snappy codec, and created Hive external tables with the schema.
Now I'm firing a query which joins all 15 tables and selects the columns I need into a final flat table. More than 1.5 billion records are expected.
I have tuned Hive, YARN, and the MapReduce engine, viz. parallel execution, vectorization, optimized joins, the small-table condition, heap size, etc.
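Concretely, the settings I applied look roughly like the sketch below (the values are placeholders rather than my exact configuration):

SET hive.exec.parallel=true;                     -- parallel execution of independent stages
SET hive.vectorized.execution.enabled=true;      -- vectorization
SET hive.auto.convert.join=true;                 -- map-side joins for small tables
SET hive.mapjoin.smalltable.filesize=25000000;   -- "small table" threshold, in bytes
SET hive.tez.container.size=4096;                -- Tez container size, in MB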
The query has been running on the cluster (Hive on Tez) for 20 hours and has reached 90%, where the last reducer is running. It hit 90% long ago and has been stuck there for about the last 18 hours.
Am I doing it the right way?
If I understand, you have effectively copied tables in their raw form from your RDBMS into Hadoop in order to create a flattened view in one or more new tables. You're using Hive to do this. All of this sounds fine.
There are many possibilities why this is taking so long, but several come to mind.
First, YARN will allocate containers (one per CPU core, typically) that mappers and reducers will use to run the parallelized parts of the query. This should allow you to utilize all of the resources you have available.
I use Cloudera, but I assume Hortonworks has similar tools that let you see how many containers are in use, how many mappers and reducers are created by Hive, and so on. You should see that most or all of your available CPUs are in use constantly. Jobs should be finishing at some reasonable rate (perhaps every minute, or every 15 minutes). Depending on the query, Hive is often able to break it into distinct "stages" that are executed distinctly from others, then reassembled at the end.
If this is the case, everything may be fine, but your cluster may be under-resourced. But before you throw more AWS instances at the problem, consider the query itself.
First, Hive has several tools that are essential for optimizing performance, most importantly, partitioning. When you create tables, you should find some means of partitioning the resulting datasets into roughly equal subsets. A common method is to use dates, for example year+month+day (perhaps 20160417), or if you expect to have lots of historical data, maybe just year+month. This will also allow you to dramatically optimize queries that can be constrained by date. I seem to recall that Hive (or maybe it's YARN) will allocate partitions to different containers, so if you don't see all your workers working, then this would be a possible cause. Use the PARTITIONED BY clause in your CREATE TABLE statement.
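As a hedged sketch (the table and column names below are made up, not taken from your schema), a date-partitioned table plus a dynamic-partition load might look like this:

CREATE EXTERNAL TABLE sales_flat (
    order_id    BIGINT,
    customer_id BIGINT,
    amount      DECIMAL(10,2)
)
PARTITIONED BY (load_date STRING)   -- e.g. '20160417'
STORED AS ORC;

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE sales_flat PARTITION (load_date)
SELECT order_id, customer_id, amount, load_date   -- partition column goes last
FROM raw_sales;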
The reason to choose something like date is that presumably your data is relatively evenly distributed over time (dates). We had chosen a customer_id as a partition key in an early implementation but as we grew, so did our customers. Hundreds of smaller customers would finish in a few minutes, then hundreds of mid-sized customers would finish in an hour, then a couple of our largest customers would take 10 or more hours to complete. We would see complete utilization of the cluster for that first hour, then only a couple containers in use for the last couple of customers. Not good.
This phenomenon is known as "data skew", so you want to choose your partitions carefully to avoid skew. There are some options involving SKEWED BY and CLUSTER BY that can help you get evenly sized or smaller data files, which you could consider.
Note that the raw import data should also be partitioned, as partitions act like indexes in an RDBMS and so are important for performance. In this case, choose partitions that use the keys that your larger query joins on. It is possible and common to have multiple partition levels, so a date-based top-level partition with a sub-partition on the join key could be helpful ... maybe ... it depends on your data.
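For instance (again with invented names), the staging table for a big fact table could be partitioned by month and bucketed on the join key, or declared with its heaviest key values marked as skewed:

CREATE TABLE fact_staged (
    join_key STRING,
    payload  STRING
)
PARTITIONED BY (load_year INT, load_month INT)
CLUSTERED BY (join_key) INTO 32 BUCKETS
STORED AS ORC;

-- alternatively, if a handful of key values dominate the data
CREATE TABLE fact_skewed (
    join_key STRING,
    payload  STRING
)
SKEWED BY (join_key) ON ('BIG_KEY_1', 'BIG_KEY_2')
STORED AS ORC;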
We have also found that it's very important to optimize the query itself. Hive has some hinting mechanisms that can direct it to run the query differently. While quite rudimentary compared to an RDBMS's, EXPLAIN is very helpful for understanding how Hive will break up the query and when it needs to scan a full dataset. The output is hard to read, so get comfortable with the Hive documentation :-).
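For example, prefixing a stand-in version of the big join with EXPLAIN prints the stage plan without executing anything (table names here are hypothetical):

EXPLAIN
SELECT f.join_key, COUNT(*) AS cnt
FROM fact_staged f
JOIN dim_lookup d ON f.join_key = d.join_key
GROUP BY f.join_key;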
Lastly, if you can't make Hive do things in a sensible manner (if its optimizer still results in imbalanced stages), you can create intermediate tables with an additional Hive query that runs to create a partially transformed dataset before building the final one. This seems expensive, since you're adding an additional write and read of new tables, but in the case you describe it may be much faster overall. Also, it's sometimes useful to have intermediate tables just to test or sample data.
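A rough sketch of that intermediate-table approach, with invented table and column names:

-- stage 1: materialize the expensive part of the join once
CREATE TABLE stage1_joined STORED AS ORC AS
SELECT f.join_key, f.payload, d.attr
FROM fact_staged f
JOIN dim_lookup d ON f.join_key = d.join_key;

-- stage 2: build the final flat table from the smaller intermediate result
CREATE TABLE final_flat STORED AS ORC AS
SELECT s.join_key, s.payload, s.attr, m.other_attr
FROM stage1_joined s
JOIN dim_other m ON s.join_key = m.join_key;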
Writing Hive is a lot less like writing regular software -- you can get the Hive query written pretty quickly in most cases, but getting it to run fast has taken us 10 or 15 tries in a few cases. Good luck, and I hope this is helpful.

Does HBase use the compute capacity of all nodes in the cluster for query execution?

We have a setup of 1 master and 2 slave nodes. The data is set up in Postgres and in HBase, and it's a similar dataset (same number of rows) - 65 million rows. Yet we don't see a measurable increase in performance from HBase for the same query.
My first thought is - does HBase use the compute capacity of all nodes to fork the query out? Perhaps this is why the performance is not measurably better.
Any other reasons for why the performance between Postgres and HBase would be about the same? Any specific configuration items to look for?
EDIT: Something I found while researching this: http://www.flurry.com/2012/06/12/137492485#.VaQP_5QpBpg
This is kind of a yes and no answer. Depending on what you are doing for your 'query' and your region distribution, you may or may not be using all the nodes. For example, if you are running a scan across the table it will run against each region (assuming more than one) in sequence. However, if you are using a multi-get for keys that are in different regions, this will run in parallel.
The real benefit is going to come as the number of regions increases and you start parallelizing requests (multiple clients). Regions will be distributed across region servers by the Master as regions are split.

What can I expect about hive and hadoop in performance?

I'm currently trying to implement a solution with Hadoop using Hive on CDH 5.0 with YARN. My architecture is:
1 NameNode
3 DataNodes
I'm querying ~123 million rows with 21 columns.
My nodes are virtualized with 2 vCPUs @ 2.27 GHz and 8 GB RAM.
So I tried some queries and got some results, and then I ran the same queries against a basic MySQL setup with the same dataset in order to compare the results.
And in fact MySQL is much faster than Hive. So I'm trying to understand why. I know I get some bad performance because of my hosts. My main question is: is my cluster sized correctly?
Do I need to add more DataNodes for this amount of data (which is not very enormous, in my opinion)?
And if anyone has tried similar queries with approximately the same architecture, you are welcome to share your results.
Thanks!
I'm querying ~123 million rows with 21 columns [...] which is not very enormous in my opinion
That's exactly the problem, it's not enormous. Hive is a big data solution and is not designed to run on small data-sets like the one you're using. It's like trying to use a forklift to take out your kitchen trash. Sure, it will work, but it's probably faster to just take it out by hand.
Now, having said all that, you have a couple of options if you want realtime performance closer to that of a traditional RDBMS.
Hive 0.13+, which uses Tez, ORC, and a number of other optimizations that greatly improve response time (see the sketch after this list)
Impala (part of CDH distributions), which bypasses MapReduce altogether but is more limited in file format support.
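As a rough sketch of the first option (table names are placeholders, and check what your CDH build actually supports, since stock CDH may not ship Tez):

-- store the data as ORC so reads can be skipped and vectorized
CREATE TABLE events_orc STORED AS ORC AS
SELECT * FROM events_text;

-- vectorized execution works on ORC-backed tables in Hive 0.13+
SET hive.vectorized.execution.enabled=true;

-- if your Hive build ships Tez, switch the execution engine too
SET hive.execution.engine=tez;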
Edit:
I'm saying that with 2 datanodes I get the same performance as with 3
That's not surprising at all. Since Hive uses MapReduce to handle query operators (join, group by, ...) it incurs all the cost that comes with MapReduce. This cost is more or less constant regardless of the size of data and number of datanodes.
Let's say you have a dataset with 100 rows in it. You might see 98% of your processing time in MapReduce initialization and 2% in actual data processing. As the size of your data increases, the cost associated with MapReduce becomes negligible compared to the total time taken.

Amazon EMR not utilizing all the nodes

I am using 4 core nodes.
I am using hive to run queries on a table.
Various queries seem to be under utilizing the capacity.
My table consists of 8 integer fields and about 1000 rows.
Queries of the form
select avg(col1-col2) from tbl;
select count(*) from tbl;
and every other query I tried produce
number of reducers=1, number of mappers=1
I have tried using set mapred.reduce.tasks=4;
but it doesn't work.
The weirdest thing is that when I use mapred.job.tracker=local, which means one map and one reduce run on the local node itself, the task finishes twice as fast.
All the reduce/map slots except one are open all the time.
Why isn't adding capacity even slightly improving execution time?
Is my data sample so small that increasing capacity doesn't matter and localizing the mapping and reduction actually improves the time?
The reason you are getting a single mapper is because your table is so small. I'm assuming your 1000-row table is one file which is much smaller than your HDFS block size. Try a million-row table or larger and you will start seeing it utilize multiple mappers. The answers to this question have some more information on how the number of mappers is chosen.
The reason you are getting a single reducer is a combination of two things. First, you are working with a tiny amount of data (for Hive), so you end up with one reducer. Second, some queries (like SELECT COUNT(*) FROM some_table) must have one reducer (see the question here).
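To see the contrast on your own table (reusing the tbl/col1 names from your post), a grouped aggregate is allowed to fan out across reducers, while the global count funnels through one final reducer:

-- global aggregate: ends in a single reducer
select count(*) from tbl;

-- grouped aggregate: can use several reducers once there is enough data
set mapred.reduce.tasks=4;
select col1, count(*) from tbl group by col1;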
You nailed it on why running the job locally is faster. 1000 row tables are great for testing the logic of your queries, but not for determining things like runtime. Running Hive on a cluster instead of locally will probably only start being better once you have data on the order of GBs. Hive is definitely not the "right tool for the job" until you get into queries that touch at least 10's of GBs, though 100's of GBs or TBs (or more) is easier to justify.

Join performance on AWS Elastic MapReduce running Hive

I am running a simple join query
select count(*) from t1 join t2 on t1.sno=t2.sno
Tables t1 and t2 both have 20 million records each, and column sno is of string data type.
The table data is imported into HDFS from Amazon S3 in RCFile format.
The query took 109 seconds with 15 Amazon large instances; however, it takes 42 seconds on SQL Server with 16 GB RAM and 16 CPU cores.
Am I missing anything? I can't understand why I am getting slow performance on Amazon.
Some questions to help you tune Hadoop performance:
What does your IO utilization look like on those instances? Maybe large instances are not the right balance of CPU / Disk / Memory for the job.
How are your files stored? Is it a single file, or many small files? Hadoop isn't so hot with many small files, even if they're combinable
How many reducers did you run? You want about 0.9 * totalReduceCapacity as the ideal (see the sketch after this list).
How skewed is your data? If there are many records with the same key they will all go to the same reducer, and you'll have O(n*n) upper bound in that reducer if you're not careful.
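On the reducer point, a back-of-the-envelope sketch (the 2 reduce slots per large instance is an assumption; check your cluster's actual slot count):

-- 15 large instances x 2 reduce slots each = 30 slots (assumed); 0.9 * 30 = 27
set mapred.reduce.tasks=27;
select count(*) from t1 join t2 on t1.sno=t2.sno;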
SQL Server might be fine with 40 million records, but wait till you have 2 billion records and see how it does. It will probably just break. I'd see Hive more as a clever wrapper for MapReduce than as an alternative to a real database.
Also, from experience, I think having 15 c1.mediums might perform just as well as the large machines, if not better. The large machines honestly don't have the right balance of CPU/memory.
