Why is counting items in Pig so much slower than Hive - hadoop

I have a dataset (~1 TB) that I imported into both Hive and Pig. Both jobs use our entire Hadoop cluster, yet I see huge time differences: Hive is significantly faster than Pig for just counting the number of records.
In Hive:
select count(*) from india_tab;
Time taken: 61.103 seconds, Fetched: 1 row(s)
In Pig:
data = LOAD 'warehouse/india_tab/*' USING PigStorage()
AS (ac_id:int, c_code01:chararray, longitude:float, latitude:float, satillite:chararray, month:chararray, day:chararray, timestamp:int, cm:int, li:double, tir:int,vis:int);
grpd = GROUP data ALL;
cnt = FOREACH grpd GENERATE COUNT(data);
DUMP cnt;
Runtime: 6m 9s

Hive uses only one reducer for a full-aggregate COUNT, irrespective of the input data size. If you want more efficiency, turn on hive.map.aggr=true so that partial aggregation is done on the map side.
Pig sets the number of reducers using a heuristic based on the size of the input data:
reducers = MIN (pig.exec.reducers.max, total input size (in bytes) / bytes per reducer)
By default the reducer max is 999 and bytes per reducer is about 1 GB.
Your total input size is roughly 10^12 bytes, and 10^12 / 10^9 = 1000,
so for your Pig job the reducer count comes out as 999. That is probably the reason it is slower than Hive.
Use the PARALLEL keyword or set default_parallel to override the number of reducers.
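To make the heuristic above concrete, here is the arithmetic as a quick Python sketch (the 1 GB bytes-per-reducer figure is Pig's usual default and an assumption about this particular configuration):
# Rough sketch of Pig's reducer estimate for this job; values taken from the question.
pig_exec_reducers_max = 999              # default cap on reducers
bytes_per_reducer = 1_000_000_000        # assumed default (~1 GB per reducer)
total_input_bytes = 1_000_000_000_000    # ~1 TB of input
reducers = min(pig_exec_reducers_max, total_input_bytes // bytes_per_reducer)
print(reducers)  # 999 -- versus the single reducer Hive uses for the final count
In the script above, that override would look something like grpd = GROUP data ALL PARALLEL 1; or set default_parallel 1; at the top of the script.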

Related

Multiple consecutive join operations on PySpark

I am running a PySpark application where we are comparing two large datasets of 3GB each. There are some differences in the datasets, which we are filtering via outer join.
mismatch_ids_row = (sourceonedf.join(sourcetwodf, on=primary_key, how='outer').where(condition).select(primary_key))
mismatch_ids_row.count()
So the output of the join, after the count, is a small dataset of, say, 10 records. The number of shuffle partitions at this point is about 30, computed as amount of data / partition size (100 MB).
After this join, each of the two original datasets is joined with the resulting dataframe to filter out the mismatched data on each side.
df_1 = sourceonedf.join(mismatch_ids_row, on=primary_key, how='inner').dropDuplicates()
df_2 = sourcetwodf.join(mismatch_ids_row, on=primary_key, how='inner').dropDuplicates()
Here we drop duplicates, since the result of the first (outer) join contains doubled rows where some values are null.
These two dataframes are then joined to compare at the column level and pinpoint exactly where the data is mismatched.
df = (df_1.join(df_2,on=some condition, how="full_outer"))
result_df = df
result_df.count()
The resultant dataset is then used to display as:
result_df.show()
The issue is that the first join, with more data, uses a sort-merge join with 30 shuffle partitions, which is fine since the dataset is fairly large.
After the first join has completed, there are only 10 mismatched rows, yet joining them back with the 3 GB datasets is a costly operation, and using broadcast didn't help.
The major issue, in my opinion, comes when joining the two small resultant datasets in the second join to produce the result. Here, too many shuffle partitions are killing the performance.
The application is running in client mode as a Spark run for testing purposes, and the parameters are sufficient for it to run on the driver node.
Here is the DAG for the last operation:
As an example:
data1 = [(335008138387,83165192,"yellow","2017-03-03",225,46),
(335008138384,83165189,"yellow","2017-03-03",220,4),
(335008138385,83165193,"yellow","2017-03-03",210,11),
(335008138386,83165194,"yellow","2017-03-03",230,12),
(335008138387,83165195,"yellow","2017-03-03",240,13),
(335008138388,83165196,"yellow","2017-03-03",250,14)
]
data2 = [(335008138387,83165192,"yellow","2017-03-03",300,46),
(335008138384,83165189,"yellow","2017-03-03",220,10),
(335008138385,83165193,"yellow","2017-03-03",210,11),
(335008138386,83165194,"yellow","2017-03-03",230,12),
(335008138387,83165195,"yellow","2017-03-03",240,13),
(335008138388,83165196,"yellow","2017-03-03",250,14)
]
from pyspark.sql.types import StructType, StructField, LongType, IntegerType, StringType
field = [
StructField("row_num",LongType(),True),
StructField("tripid",IntegerType(),True),
StructField("car_type",StringType(),True),
StructField("dates", StringType(), True),
StructField("pickup_location_id", IntegerType(), True),
StructField("trips", IntegerType(), True)
]
schema = StructType(field)
sourceonedf = spark.createDataFrame(data=data1,schema=schema)
sourcetwodf = spark.createDataFrame(data=data2,schema=schema)
They have just two differences; on a larger dataset, think of these as 10 or more differences.
df_1 will get rows from sourceonedf based on mismatch_ids_row, and likewise df_2 from sourcetwodf. They are then joined to create another resultant dataframe which outputs the data.
How can we optimize this piece of code so that the optimal number of partitions is used and it performs faster than it does now?
At this point it takes ~500 seconds to do the whole activity, when it could take about 200 seconds less. And why does the show() take time as well? There are only 10 records, so it should print pretty fast if they are all in one partition, I guess.
Any suggestions are appreciated.
You should be able to go without df_1 and df_2. After the first 'outer' join you have all the data in that table already.
Cache the result of the first join (as you said, the dataframe is small):
# (Removed the select after the first join)
mismatch_ids_row = sourceonedf.join(sourcetwodf, on=primary_key, how='outer').where(condition)
mismatch_ids_row.cache()
mismatch_ids_row.count()
Then you should be able to create a self-join condition. When joining, use dataframe aliases for explicit control:
result_df = (
mismatch_ids_row.alias('a')
.join(mismatch_ids_row.alias('b'), on=some condition...)
.select(...)
)
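Not part of the original answer, but since the complaint is specifically about too many shuffle partitions hurting the small second join, one option worth trying is to shrink the shuffle partition count (or the cached dataframe itself) before that join. A minimal sketch, assuming the SparkSession object is named spark and 8 is just an illustrative value:
# Fewer shuffle partitions for the small self-join (tune 8 to your data volume).
spark.conf.set("spark.sql.shuffle.partitions", "8")
# Or collapse the small cached result into a handful of partitions before joining.
mismatch_small = mismatch_ids_row.coalesce(1)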

What is the best or most lightweight/efficient/cheapest RDD action to perform on a huge/large RDD in Apache Spark

I am new to Apache Spark.
Below is the code snippet which demonstrates my sample code.
val x = 5
val arrayVal = (1 to 100000)
val rdd1 = sc.parallelize(arrayVal, x)//Has Huge RDD of Min 10000 to 100000
var rdd2 = rdd1.map(x => (x, x))
rdd2 = rdd2.cache()
rdd2.count()
val cartesianRDD = rdd2.cartesian(rdd2)
var filteredRDD = cartesianRDD.filter(f => (f._1._1 < f._2._1))
filteredRDD = filteredRDD.repartition(x/2)
rdd2 = rdd2.unpersist(false)
filteredRDD.persist(StorageLevel.MEMORY_ONLY)//To avoid re-calculation
filteredRDD.count()
The count on this RDD takes many minutes. I want to know the best or most efficient/cheapest/lightweight way to trigger the RDD transformations.
I have also tried rdd.take(1) and rdd.first(), which take about the same time.
Ultimately my goal is to reduce the time taken by any of these actions, so that the total execution time is reduced.
Thanks in advance.
rdd.first() is the cheapest one you can have since it only materializes the first partition.
The cheapest action that will materialize all partitions is rdd.foreachPartition(_ => ()).
Ultimately my goal is to reduce the time taken by any of these actions, so that the total execution time is reduced.
However, the action you take will not affect the time taken by the previous steps. If you want to decrease total time, you have to optimize other things.
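For reference, the same comparison written as a short PySpark sketch (rdd stands in for any cached RDD; this is an illustration, not code from the question):
first_elem = rdd.first()                      # materializes only the first non-empty partition
rdd.foreachPartition(lambda partition: None)  # touches every partition but returns nothing to the driver
n = rdd.count()                               # touches every partition and ships one count per partition back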

Cassandra WordCount Hadoop

Can anyone explain to me the following lines from Cassandra 2.1.15 WordCount example?
CqlConfigHelper.setInputCQLPageRowSize(job.getConfiguration(), "3");
CqlConfigHelper.setInputCql(job.getConfiguration(), "select * from " + COLUMN_FAMILY + " where token(id) > ? and token(id) <= ? allow filtering");
How do I define concrete values which will be used to replace "?" in the query?
And what is meant by page row size?
How do I define concrete values which will be used to replace "?" in the query?
You don't. These parameterized values are set by the splits created by the input format. They are set automatically but can be adjusted (to a degree) by adjusting the split size.
And what is meant by page row size?
Page row size determines the number of CQL Rows retrieved in a single request by a mapper during execution. If a C* partition contains 10000 CQL rows and the page row size is set to 1000, it will take 10 requests to retrieve all of the data.

Hive cluster by vs order by vs sort by

As far as I understand:
sort by only sorts within the reducer
order by orders things globally but shoves everything into one reducer
cluster by intelligently distributes stuff into reducers by the key hash and does a sort by
So my question is: does cluster by guarantee a global order? distribute by puts the same keys into the same reducers, but what about adjacent keys?
The only document I can find on this is here, and from the example it seems like it orders them globally. But from the definition I feel like it doesn't always do that.
A shorter answer: yes, CLUSTER BY guarantees global ordering, provided you're willing to join the multiple output files yourself.
The longer version:
ORDER BY x: guarantees global ordering, but does this by pushing all data through just one reducer. This is basically unacceptable for large datasets. You end up with one sorted file as output.
SORT BY x: orders data at each of N reducers, but each reducer can receive overlapping ranges of data. You end up with N or more sorted files with overlapping ranges.
DISTRIBUTE BY x: ensures each of N reducers gets non-overlapping ranges of x, but doesn't sort the output of each reducer. You end up with N or more unsorted files with non-overlapping ranges.
CLUSTER BY x: ensures each of N reducers gets non-overlapping ranges, then sorts by those ranges at the reducers. This gives you global ordering, and is the same as doing (DISTRIBUTE BY x and SORT BY x). You end up with N or more sorted files with non-overlapping ranges.
Make sense? So CLUSTER BY is basically the more scalable version of ORDER BY.
Let me clarify first: clustered by only distributes your keys into different buckets; clustered by ... sorted by gets the buckets sorted.
With a simple experiment (see below) you can see that you will not get global order by default. The reason is that default partitioner splits keys using hash codes regardless of actual key ordering.
However you can get your data totally ordered.
Motivation is "Hadoop: The Definitive Guide" by Tom White (3rd edition, Chapter 8, p. 274, Total Sort), where he discusses TotalOrderPartitioner.
I will answer your TotalOrdering question first, and then describe several sort-related Hive experiments that I did.
Keep in mind: what I'm describing here is a 'proof of concept'; I was able to handle a single example using Cloudera's CDH3 distribution.
Originally I hoped that org.apache.hadoop.mapred.lib.TotalOrderPartitioner would do the trick. Unfortunately it did not, because it looks like Hive partitions by value, not key. So I patched it (it should have been a subclass, but I did not have time for that):
Replace
public int getPartition(K key, V value, int numPartitions) {
return partitions.findPartition(key);
}
with
public int getPartition(K key, V value, int numPartitions) {
return partitions.findPartition(value);
}
Now you can set (patched) TotalOrderPartitioner as your Hive partitioner:
hive> set hive.mapred.partitioner=org.apache.hadoop.mapred.lib.TotalOrderPartitioner;
hive> set total.order.partitioner.natural.order=false;
hive> set total.order.partitioner.path=/user/yevgen/out_data2;
I also used
hive> set hive.enforce.bucketing = true;
hive> set mapred.reduce.tasks=4;
in my tests.
File out_data2 tells TotalOrderPartitioner how to bucket values.
You generate out_data2 by sampling your data. In my tests I used 4 buckets and keys from 0 to 10. I generated out_data2 using an ad-hoc approach:
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.hive.ql.io.HiveKey;
import org.apache.hadoop.fs.FileSystem;
public class TotalPartitioner extends Configured implements Tool{
public static void main(String[] args) throws Exception{
ToolRunner.run(new TotalPartitioner(), args);
}
@Override
public int run(String[] args) throws Exception {
Path partFile = new Path("/home/yevgen/out_data2");
FileSystem fs = FileSystem.getLocal(getConf());
HiveKey key = new HiveKey();
NullWritable value = NullWritable.get();
SequenceFile.Writer writer = SequenceFile.createWriter(fs, getConf(), partFile, HiveKey.class, NullWritable.class);
key.set( new byte[]{1,3}, 0, 2);//partition at 3; 1 came from Hive -- do not know why
writer.append(key, value);
key.set( new byte[]{1, 6}, 0, 2);//partition at 6
writer.append(key, value);
key.set( new byte[]{1, 9}, 0, 2);//partition at 9
writer.append(key, value);
writer.close();
return 0;
}
}
Then I copied the resulting out_data2 to HDFS (into /user/yevgen/out_data2).
With these settings I got my data bucketed/sorted (see the last item in my experiment list).
Here are my experiments.
Create sample data
bash> echo -e "1\n3\n2\n4\n5\n7\n6\n8\n9\n0" > data.txt
Create basic test table:
hive> create table test(x int);
hive> load data local inpath 'data.txt' into table test;
Basically this table contains values from 0 to 9 without order.
Demonstrate how table copying works (really the mapred.reduce.tasks parameter, which sets the MAXIMAL number of reduce tasks to use):
hive> create table test2(x int);
hive> set mapred.reduce.tasks=4;
hive> insert overwrite table test2
select a.x from test a
join test b
on a.x=b.x; -- stupid join to force non-trivial map-reduce
bash> hadoop fs -cat /user/hive/warehouse/test2/000001_0
1
5
9
Demonstrate bucketing. You can see that keys are assigned at random, without any sort order:
hive> create table test3(x int)
clustered by (x) into 4 buckets;
hive> set hive.enforce.bucketing = true;
hive> insert overwrite table test3
select * from test;
bash> hadoop fs -cat /user/hive/warehouse/test3/000000_0
4
8
0
Bucketing with sorting. Results are partially sorted, not totally sorted:
hive> create table test4(x int)
clustered by (x) sorted by (x desc)
into 4 buckets;
hive> insert overwrite table test4
select * from test;
bash> hadoop fs -cat /user/hive/warehouse/test4/000001_0
1
5
9
You can see that values are sorted in ascending order. Looks like a Hive bug in CDH3?
Getting partially sorted data without a cluster by statement:
hive> create table test5 as
select x
from test
distribute by x
sort by x desc;
bash> hadoop fs -cat /user/hive/warehouse/test5/000001_0
9
5
1
Use my patched TotalOrderPartitioner:
hive> set hive.mapred.partitioner=org.apache.hadoop.mapred.lib.TotalOrderPartitioner;
hive> set total.order.partitioner.natural.order=false;
hive> set total.order.partitioner.path=/user/training/out_data2;
hive> create table test6(x int)
clustered by (x) sorted by (x) into 4 buckets;
hive> insert overwrite table test6
select * from test;
bash> hadoop fs -cat /user/hive/warehouse/test6/000000_0
1
2
0
bash> hadoop fs -cat /user/hive/warehouse/test6/000001_0
3
4
5
bash> hadoop fs -cat /user/hive/warehouse/test6/000002_0
7
6
8
bash> hadoop fs -cat /user/hive/warehouse/test6/000003_0
9
CLUSTER BY does not produce global ordering.
The accepted answer (by Lars Yencken) misleads by stating that the reducers will receive non-overlapping ranges. As Anton Zaviriukhin correctly points out via the BucketedTables documentation, CLUSTER BY is basically DISTRIBUTE BY (the same mechanism as bucketing) plus SORT BY within each bucket/reducer. DISTRIBUTE BY simply hashes and mods into buckets, and while the hash function may preserve order (hash of i > hash of j if i > j), the mod of the hash value does not.
Here's a better example showing overlapping ranges:
http://myitlearnings.com/bucketing-in-hive/
As I understand it, the short answer is no.
You'll get overlapping ranges.
From the SortBy documentation:
"Cluster By is a short-cut for both Distribute By and Sort By."
"All rows with the same Distribute By columns will go to the same reducer."
But there is no information that Distribute By guarantees non-overlapping ranges.
Moreover, from the DDL BucketedTables documentation:
"How does Hive distribute the rows across the buckets? In general, the bucket number is determined by the expression hash_function(bucketing_column) mod num_buckets."
I suppose that Cluster By in a SELECT statement uses the same principle to distribute rows between reducers, because its main use is for populating bucketed tables with data.
I created a table with one integer column "a" and inserted numbers from 0 to 9 into it.
Then I set the number of reducers to 2:
set mapred.reduce.tasks = 2;
And selected data from this table with a Cluster By clause:
select * from my_tab cluster by a;
And received the result that I expected:
0
2
4
6
8
1
3
5
7
9
So, the first reducer (number 0) got the even numbers (because their value mod 2 gives 0)
and the second reducer (number 1) got the odd numbers (because their value mod 2 gives 1).
So that's how "Distribute By" works.
And then "Sort By" sorts the results inside each reducer.
Use case: when there is a large dataset, one should go for sort by, because with sort by all the reducers sort their data internally before it is clubbed together, and that enhances performance. With order by, performance for a larger dataset suffers because all the data is passed through a single reducer, which increases the load and hence takes a longer time to execute the query.
Please see the example below on an 11-node cluster (the Order By, Sort By, and Cluster By example output screenshots are not reproduced here).
What I observed: the figures for sort by, cluster by, and distribute by are the SAME, but the internal mechanism is different. With DISTRIBUTE BY, rows with the same column value go to one reducer, e.g. DISTRIBUTE BY(City) sends the Bangalore data to one reducer and the Delhi data to another.
Cluster by is per-reducer sorting, not global. In many books it is described incorrectly or confusingly. It has a particular use case: say you distribute each department to a specific reducer and then sort by employee name within each department, but do not care about the order of department numbers; then cluster by is the one to use, and it is more performant because the workload is distributed among the reducers.
SortBy: N or more sorted files with overlapping ranges.
OrderBy: single output, i.e. fully ordered.
Distribute By: ensures each of N reducers gets non-overlapping ranges of the column, but doesn't sort the output of each reducer.
For more information: http://commandstech.com/hive-sortby-vs-orderby-vs-distributeby-vs-clusterby/
ClusterBy: referring to the same example as above, if we use Cluster By x, the two reducers will further sort rows on x.
If I understood it correctly:
1. sort by - only sorts the data within each reducer.
2. order by - orders things globally by pushing the entire dataset to a single reducer. If we have a lot of (skewed) data, this process will take a lot of time.
3. cluster by - intelligently distributes stuff into reducers by the key hash and does a sort by, but does not guarantee global ordering. One key (k1) can be placed into two reducers. The 1st reducer gets 10K of k1 data, the second one might get 1K of k1 data.

HBase quickly count number of rows

Right now I implement row count over ResultScanner like this
for (Result rs = scanner.next(); rs != null; rs = scanner.next()) {
number++;
}
If the data reaches the millions, the time to compute is large. I want to compute in real time and I don't want to use MapReduce.
How can I quickly count the number of rows?
Use RowCounter in HBase
RowCounter is a mapreduce job to count all the rows of a table. This is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns of metadata inconsistency. It will run the mapreduce all in a single process but it will run faster if you have a MapReduce cluster in place for it to exploit.
$ hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename>
Usage: RowCounter [options]
<tablename> [
--starttime=[start]
--endtime=[end]
[--range=[startKey],[endKey]]
[<column1> <column2>...]
]
You can use the count method in hbase to count the number of rows. But yes, counting rows of a large table can be slow.
count 'tablename' [interval]
Return value is the number of rows.
This operation may take a LONG time (Run ‘$HADOOP_HOME/bin/hadoop jar
hbase.jar rowcount’ to run a counting mapreduce job). Current count is shown
every 1000 rows by default. Count interval may be optionally specified. Scan
caching is enabled on count scans by default. Default cache size is 10 rows.
If your rows are small in size, you may want to increase this
parameter.
Examples:
hbase> count 't1'
hbase> count 't1', INTERVAL => 100000
hbase> count 't1', CACHE => 1000
hbase> count 't1', INTERVAL => 10, CACHE => 1000
The same commands also can be run on a table reference. Suppose you had a reference to table 't1', the corresponding commands would be:
hbase> t.count
hbase> t.count INTERVAL => 100000
hbase> t.count CACHE => 1000
hbase> t.count INTERVAL => 10, CACHE => 1000
If you cannot use RowCounter for whatever reason, then a combination of these two filters should be an optimal way to get a count:
FirstKeyOnlyFilter() AND KeyOnlyFilter()
The FirstKeyOnlyFilter will result in the scanner only returning the first column qualifier it finds, as opposed to returning all of the column qualifiers in the table, which minimizes the network bandwidth. What about simply picking one column qualifier to return? This would work if you could guarantee that the column qualifier exists for every row, but if that is not true then you would get an inaccurate count.
The KeyOnlyFilter will result in the scanner only returning the column family, and will not return any value for the column qualifier. This further reduces the network bandwidth, which in the general case wouldn't account for much of a reduction, but there can be an edge case where the first column picked by the previous filter just happens to be an extremely large value.
I tried playing around with scan.setCaching but the results were all over the place. Perhaps it could help.
I had 16 million rows between a start and stop key, on which I did the following pseudo-empirical testing:
With FirstKeyOnlyFilter and KeyOnlyFilter activated:
With caching not set (i.e., the default value), it took 188 seconds.
With caching set to 1, it took 188 seconds
With caching set to 10, it took 200 seconds
With caching set to 100, it took 187 seconds
With caching set to 1000, it took 183 seconds.
With caching set to 10000, it took 199 seconds.
With caching set to 100000, it took 199 seconds.
With FirstKeyOnlyFilter and KeyOnlyFilter disabled:
With caching not set, (i.e., the default value), it took 309 seconds
I didn't bother to do proper testing on this, but it seems clear that the FirstKeyOnlyFilter and KeyOnlyFilter are good.
Moreover, the cells in this particular table are very small - so I think the filters would have been even better on a different table.
Here is a Java code sample:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
public class HBaseCount {
public static void main(String[] args) throws IOException {
Configuration config = HBaseConfiguration.create();
HTable table = new HTable(config, "my_table");
Scan scan = new Scan(
Bytes.toBytes("foo"), Bytes.toBytes("foo~")
);
if (args.length == 1) {
scan.setCaching(Integer.valueOf(args[0]));
}
System.out.println("scan's caching is " + scan.getCaching());
FilterList allFilters = new FilterList();
allFilters.addFilter(new FirstKeyOnlyFilter());
allFilters.addFilter(new KeyOnlyFilter());
scan.setFilter(allFilters);
ResultScanner scanner = table.getScanner(scan);
int count = 0;
long start = System.currentTimeMillis();
try {
for (Result rr = scanner.next(); rr != null; rr = scanner.next()) {
count += 1;
if (count % 100000 == 0) System.out.println(count);
}
} finally {
scanner.close();
}
long end = System.currentTimeMillis();
long elapsedTime = end - start;
System.out.println("Elapsed time was " + (elapsedTime/1000F));
}
}
Here is a pychbase code sample:
from pychbase import Connection
c = Connection()
t = c.table('my_table')
# Under the hood this applies the FirstKeyOnlyFilter and KeyOnlyFilter
# similar to the happybase example below
print t.count(row_prefix="foo")
Here is a Happybase code sample:
from happybase import Connection
c = Connection(...)
t = c.table('my_table')
count = 0
for _ in t.scan(filter='FirstKeyOnlyFilter() AND KeyOnlyFilter()'):
count += 1
print count
Thanks to @Tuckr and @KennyCason for the tip.
Use the HBase rowcount map/reduce job that's included with HBase
A simple, effective, and efficient way to count rows in HBase:
Whenever you insert a row, trigger this API, which will increment that particular cell:
Htable.incrementColumnValue(Bytes.toBytes("count"), Bytes.toBytes("details"), Bytes.toBytes("count"), 1);
To check the number of rows present in the table, just use the "Get" or "Scan" API for that particular row 'count'.
By using this method you can get the row count in less than a millisecond.
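For a Python-side illustration of the same counter pattern, happybase (already used in an answer above) exposes HBase's atomic counters; this is only a sketch, and the host, table, and column names are made up:
from happybase import Connection
connection = Connection('hbase-host')    # hypothetical host
table = connection.table('my_table')     # hypothetical table
# Bump the counter cell on every insert, mirroring Htable.incrementColumnValue above.
table.counter_inc(b'count', b'details:count', 1)
# Read the running total back whenever the row count is needed.
print(table.counter_get(b'count', b'details:count'))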
To get the HBase table record count on a proper YARN cluster, you have to set the MapReduce job queue name as well:
hbase org.apache.hadoop.hbase.mapreduce.RowCounter -Dmapreduce.job.queuename=<Your Q Name which you have SUBMIT access> <TABLE_NAME>
You can use the coprocessor mechanism available since HBase 0.92. See Coprocessor, AggregateProtocol, and the example.
Two ways worked for me to get the row count from an HBase table with speed.
Scenario #1
If the HBase table size is small, then log in to the hbase shell with a valid user and execute:
>count '<tablename>'
Example
>count 'employee'
6 row(s) in 0.1110 seconds
Scenario #2
If the HBase table size is large, then execute the inbuilt RowCounter MapReduce job:
Log in to the Hadoop machine with a valid user and execute:
/$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter '<tablename>'
Example:
/$HBASE_HOME/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'employee'
....
....
....
Virtual memory (bytes) snapshot=22594633728
Total committed heap usage (bytes)=5093457920
org.apache.hadoop.hbase.mapreduce.RowCounter$RowCounterMapper$Counters
ROWS=6
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
If you're using a scanner, try to have it return the smallest number of qualifiers possible. In fact, the qualifier(s) that you do return should be the smallest (in byte size) that you have available. This will speed up your scan tremendously.
Unfortunately this will only scale so far (millions, billions?). To take it further, you can do it in real time, but you will first need to run a MapReduce job to count all rows.
Store the Mapreduce output in a cell in HBase. Every time you add a row, increment the counter by 1. Every time you delete a row, decrement the counter.
When you need to access the number of rows in real time, you read that field in HBase.
There is no fast way to count the rows otherwise in a way that scales. You can only count so fast.
You can find a sample example here:
/**
* Used to get the number of rows of the table
* @param tableName
* @param familyNames
* @return the number of rows
* @throws IOException
*/
public long countRows(String tableName, String... familyNames) throws IOException {
long rowCount = 0;
Configuration configuration = connection.getConfiguration();
// Increase RPC timeout, in case of a slow computation
configuration.setLong("hbase.rpc.timeout", 600000);
// Default is 1, set to a higher value for faster scanner.next(..)
configuration.setLong("hbase.client.scanner.caching", 1000);
AggregationClient aggregationClient = new AggregationClient(configuration);
try {
Scan scan = new Scan();
if (familyNames != null && familyNames.length > 0) {
for (String familyName : familyNames) {
scan.addFamily(Bytes.toBytes(familyName));
}
}
rowCount = aggregationClient.rowCount(TableName.valueOf(tableName), new LongColumnInterpreter(), scan);
} catch (Throwable e) {
throw new IOException(e);
}
return rowCount;
}
Go to the HBase home directory and run this command:
./bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 'namespace:tablename'
This will launch a MapReduce job, and the output will show the number of records in the HBase table.
You could try the HBase API methods!
org.apache.hadoop.hbase.client.coprocessor.AggregationClient
