How does HBase handle duplicate records? - hadoop

I want to understand how HBase internally handles duplicate records coming from a file.
To experiment with this, I created an EXTERNAL table in Hive with the HBase-specific configuration properties: table properties, the SerDe, and the column family.
I also had to create the table in HBase with the column family, which I did.
I then performed an insert overwrite into this Hive table from a source table that has duplicate records.
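Roughly like this (a simplified sketch; the actual table, column and family names I used may differ):
CREATE EXTERNAL TABLE hbase_person (id STRING, name STRING, surname STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,cf:name,cf:surname')
TBLPROPERTIES ('hbase.table.name' = 'person');

INSERT OVERWRITE TABLE hbase_person
SELECT id, name, surname FROM source_table;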
By duplicate records I mean rows like this:
ID | Name | Surname
1 | Ritesh | Rai
1 | RiteshKumar | Rai
Now, after performing the insert overwrite, I queried my Hive table for id 1 and got the second record as output:
1 RiteshKumar Rai
I want to understand how HBase decides which one is kept. Does it just write the data sequentially, so that the last record overwrites the earlier one and is considered the latest? Or how does it work?
Thanks in advance.
Regards,
Govind

You are on the right track!
The HBase data model can be seen as a 'multidimensional map', and each cell value is associated with a timestamp (the insertion time by default):
row:column_family:column_qualifier:timestamp:value
NOTE: The timestamp is associated with each single value, not with the entire row (this enables several nice features)!
At read time you get the latest version by default unless you specify otherwise. The number of versions kept is configurable per column family (older HBase releases defaulted to 3; current ones keep only 1 unless you raise VERSIONS). HBase does a 'merge read' and returns the latest cell value for each row.
Please try this from your hbase shell (not really tested before posting):
put 'table_name', '1', 'f:name', 'Ritesh'
put 'table_name', '1', 'f:surname', 'Rai'
put 'table_name', '1', 'f:name', 'RiteshKumar'
put 'table_name', '1', 'f:surname', 'Rai'
put 'table_name', '1', 'f:other', 'Some other stuff'
# Data on 'disk' (which might just be the memstore for now) will look like this:
# 1:f:name:1234567890:'Ritesh'
# 1:f:surname:1234567891:'Rai'
# 1:f:name:1234567892:'RiteshKumar'
# 1:f:surname:1234567893:'Rai'
# 1:f:other:1234567894:'Some other stuff'
# Now try the get below... and you will get 'RiteshKumar', 'Rai', 'Some other stuff'
get 'table_name', '1'
# To get the previous versions of the data use the following:
get 'table_name', '1', {COLUMN => 'f', VERSIONS => 2}
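If you want HBase to actually keep those older versions around, make sure the column family is configured for it (a quick sketch, assuming the family is called f as above):
alter 'table_name', {NAME => 'f', VERSIONS => 3}
describe 'table_name'   # check the VERSIONS setting that is in effect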
Don't forget to take a look at the best practices for schema design.

Related

HBase : Confusion between COLUMN and FILTER (SingleColumnValueFilter)

I have installed HBase and I have access to the command shell.
I have a table with 2 column families, like this:
create 'arbres', 'emplacement', 'propriete'
This request works fine:
scan 'arbres',{FILTER=>"SingleColumnValueFilter('emplacement', 'lieu_adresse', =,'binary:VOIE INCONNUE')", COLUMNS=>['emplacement'], COLUMN=>15}
But this second one lists all rows, without filtering:
scan 'arbres',{FILTER=>"SingleColumnValueFilter('emplacement', 'lieu_adresse', =,'binary:VOIE INCONNUE')", COLUMNS=>['propriete'], COLUMN=>15}
I don't understand why, and I can't find the reason in the documentation.
Can you please explain the reason a little?
regards
The second command has its filter on a column family and column that you are not accessing.
The filter push-down requires the filtered column to be accessed, meaning you should have that column family and column mentioned in COLUMNS=>[].
The reason one would have two different column families is to make access easier and lightweight, since each column family gets its own files.
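For example, a scan that still reads the 'propriete' data but also selects the filtered column could look like this (an untested sketch; it assumes the COLUMN=>15 in your commands was meant to be LIMIT=>15):
scan 'arbres', {FILTER=>"SingleColumnValueFilter('emplacement', 'lieu_adresse', =, 'binary:VOIE INCONNUE')", COLUMNS=>['emplacement:lieu_adresse', 'propriete'], LIMIT=>15}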

What happens when two updates for the same record come in one file while loading into the DB using INFORMATICA

Suppose I have a table xyz:
id name add city act_flg start_dtm end_dtm
1 amit abc,z pune Y 21012018 null
and this table is loaded from a file using Informatica with SCD2.
Suppose there is one file that contains two records with id=2,
ie. 2 vipul abc,z mumbai
2 vipul asdf bangalore
so how will this be loaded into the DB?
It depends on how you are doing the SCD type 2. If you are using a lookup with a static cache, both records will be inserted with the end date as null.
The best approach in this scenario is to use a dynamic lookup cache and read your source data in such a way that the latest record is read last. This ensures one record is expired with an end date and only one active record (i.e. end date is null) exists per id.
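With hypothetical load dates (the 22012018 values below are made up for illustration), the target would end up like this:
id name add city act_flg start_dtm end_dtm
2 vipul abc,z mumbai N 22012018 22012018
2 vipul asdf bangalore Y 22012018 null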
Hmm, one of two possibilities depending on what you mean. If you mean that you're pulling data from different source systems which sometimes share the same ids, then it's easy: just stamp both the natural key (i.e. the id) and a source-system value on the dimension row, along with the arbitrary surrogate key which is unique to your target table (this is a data warehousing basic, so read Kimball).
If you mean that you are somehow tracking real-time changes to a single record in the source system and writing those changes to the input files of your ETL job, then you need to agree with your client whether they're happy for you to aggregate the changes based on their timestamps and just pick the most recent one, or to create 2 records, one with its expiry datetime set and the other still open (which is the standard SCD approach; again, read Kimball).

Increase scan performance in Apache Hbase

I am working on a use case and need help improving scan performance.
Visits to our website are generated as logs, which we process with Apache Pig and insert directly into an HBase table (test) using HBaseStorage. This is done every morning. The data consists of the following columns:
Customerid | Name | visitedurl | timestamp | location | companyname
I have only one column family (test_family).
As of now I generate a random number for each row and insert it as the row key for that table. For example, with the following data to be inserted into the table:
1725|xxx|www.something.com|127987834 | india |zzzz
1726|yyy|www.some.com|128389478 | UK | yyyy
I will add 1 as the row key for the first row, 2 for the second one, and so on.
Note: the same id can be repeated on different days, so I chose a random number as the row key.
When querying data from the table with scan 'test', {FILTER=>"SingleColumnValueFilter('test_family','Customerid',=,'binary:1002')"} it takes more than 2 minutes to return the results.
Suggest a way to bring this down to 1 to 2 seconds, since I am using it for real-time analytics.
Thanks
Based on the query you mentioned, I am assuming you need records by Customer ID. If so, then to improve performance you should use the Customer ID as the row key.
However, there can be multiple entries for a single Customer ID, so it is better to design the row key as CustomerID|unique number. This unique number could be the timestamp; it depends on your requirements.
To scan the data in this case, use a PrefixFilter on the row key. This will give you much better performance.
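With a CustomerID|timestamp row key, the lookup for customer 1002 becomes a short prefix scan (a sketch; the table name follows the question, and it assumes timestamps are plain digits):
scan 'test', {FILTER => "PrefixFilter('1002|')"}
# or, equivalently, bounded by start/stop rows:
scan 'test', {STARTROW => '1002|', STOPROW => '1002|~'}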
Hope this helps.

HBase row key design for reads and updates

I'm trying to understand the best way to design the key for my HBase table.
My use case :
Structure right now
PersonID | BatchDate | PersonJSON
When something about a person is modified, a new PersonJSON and a new BatchDate are inserted into HBase, updating the old record. And every 4 hours, a scan of all the people who were modified is pushed to Hadoop for further processing.
If my key is just PersonID, it is great for updating the data. But my read performance suffers because I have to add a filter on the BatchDate column to scan all the rows greater than a given batch date.
If my key is a composite key like BatchDate|PersonID, I could use startrow and endrow on the row key and get all the rows that have been modified. But then I would have a lot of duplicates, since the key is not unique, and I could no longer update a person in place.
Is a bloom filter on row+col (PersonID+BatchDate) an option?
Any help is appreciated.
Thanks,
Abhishek
In addition to the table with PersonID as the rowkey, it sounds like you need a dual-write secondary index, with BatchDate as the rowkey.
Another option would be Apache Phoenix, which provides support for secondary indexes.
I usually do it in two steps:
Create table one, whose key is the combination BatchDate|PersonID; the value can be empty.
Create table two just as you normally would: the key is PersonID and the value is the whole data.
For a date-range query, query table one first to get the PersonIDs, and then use the HBase batch get API to fetch the data in batches. It will be very fast.
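A rough hbase-shell sketch of that two-table layout (the table, family and key values here are made up for illustration):
create 'person_index', 'd'   # row key = BatchDate|PersonID, value left empty
create 'person_data', 'd'    # row key = PersonID, value = PersonJSON

put 'person_index', '20180121|p42', 'd:x', ''
put 'person_data', 'p42', 'd:json', '{"name":"..."}'

# all people modified on 2018-01-21; then batch-get their rows from person_data
scan 'person_index', {STARTROW => '20180121', STOPROW => '20180122'}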

Hive: How to have a derived column that stores the sentiment value from a sentiment analysis API

Here's the scenario:
Say you have a Hive table that stores Twitter data.
Say it has 5 columns, one of them being the text data.
Now, how do you add a 6th column that stores the sentiment value from sentiment analysis of the Twitter text data? I plan to use a sentiment analysis API like Sentiment140 or Viralheat.
I would appreciate any tips on how to implement the "derived" column in Hive.
Thanks.
Unfortunately, while the Hive API lets you add a new column to your table (using ALTER TABLE foo ADD COLUMNS (bar binary)), those new columns will be NULL and cannot be populated in place. The only way to add data to these columns is to clear the table's rows and load data from a new file, this new file having that new column's data.
To answer your question: you can't, in Hive. To do what you propose, you would have to have a file with 6 columns, the 6th already containing the sentiment analysis data. This could then be loaded into HDFS and queried using Hive.
EDIT: I just tried an example where I exported the table as a .csv after adding the new column (see above) and popped that into MS Excel, where I was able to perform functions on the table values. After adding the functions, I saved and re-uploaded the .csv and rebuilt the table from it. Not sure if this is helpful to you specifically (since it's not likely that sentiment analysis can be done in Excel), but it may be of use to anyone else just wanting computed columns in Hive.
References:
https://cwiki.apache.org/Hive/gettingstarted.html#GettingStarted-DDLOperations
http://comments.gmane.org/gmane.comp.java.hadoop.hive.user/6665
You can do this in two steps without a separate table:
Alter the original table to add the required column.
Do an INSERT OVERWRITE ... SELECT of all the existing columns plus your computed column from the original table back into the original table (see the sketch below).
Caveat: this has not been tested on a clustered installation.
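A minimal HiveQL sketch of those two steps, assuming the table is called tweets, the text column is tweet_text, and sentiment_udf is a hypothetical UDF wrapping whichever sentiment API you choose:
ALTER TABLE tweets ADD COLUMNS (sentiment STRING);

INSERT OVERWRITE TABLE tweets
SELECT user_id, created_at, lang, retweets, tweet_text,
       sentiment_udf(tweet_text) AS sentiment
FROM tweets;
-- the five source column names above are placeholders for your actual schema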
