Hive: How to have a derived column that stores the sentiment value from a sentiment analysis API - hadoop

Here's the scenario:
Say you have a Hive table that stores Twitter data.
Say it has 5 columns, one of which is the text data.
Now, how do you add a 6th column that stores the sentiment value from sentiment analysis of the tweet text? I plan to use a sentiment analysis API like Sentiment140 or Viralheat.
I would appreciate any tips on how to implement this "derived" column in Hive.
Thanks.

Unfortunately, while the Hive API lets you add a new column to your table (using ALTER TABLE foo ADD COLUMNS (bar binary)), those new columns will be NULL and cannot be populated in place. The only way to add data to these columns is to clear the table's rows and load the data from a new file that already contains the new column's data.
To answer your question: You can't, in Hive. To do what you propose, you would have to have a file with 6 columns, the 6th already containing the sentiment analysis data. This could then be loaded into your HDFS, and queried using Hive.
EDIT: Just tried an example where I exported the table as a .csv after adding the new column (see above), and popped that into M$ Excel where I was able to perform functions on the table values. After adding functions, I just saved and uploaded the .csv, and rebuilt the table from it. Not sure if this is helpful to you specifically (since it's not likely that sentiment analysis can be done in Excel), but may be of use to anyone else just wanting to have computed columns in Hive.
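For the reload step, a minimal HiveQL sketch, assuming a hypothetical tweets table layout and an edited file tweets_with_sentiment.csv already uploaded to HDFS (adjust the columns, types, and path to your data):

-- Rebuild the table from the edited CSV (hypothetical names and layout).
DROP TABLE IF EXISTS tweets;
CREATE TABLE tweets (
  id STRING,
  user_name STRING,
  created_at STRING,
  lang STRING,
  text_data STRING,
  sentiment STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
-- Load the edited file; OVERWRITE replaces any existing data in the table.
LOAD DATA INPATH '/user/hive/staging/tweets_with_sentiment.csv' OVERWRITE INTO TABLE tweets;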
References:
https://cwiki.apache.org/Hive/gettingstarted.html#GettingStarted-DDLOperations
http://comments.gmane.org/gmane.comp.java.hadoop.hive.user/6665

You can do this in two steps without a separate table. Steps:
Alter the original table to add the required column
Do an "overwrite table select" of all columns + your computed column from the original table into the original table.
Caveat: This has not been tested on a clustered installation.
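A minimal HiveQL sketch of these two steps, assuming a hypothetical tweets table whose text lives in a text_data column and a hypothetical compute_sentiment() UDF wrapping the external API:

-- Step 1: add the derived column.
ALTER TABLE tweets ADD COLUMNS (sentiment STRING);
-- Step 2: rewrite the table, selecting every original column plus the computed value.
-- compute_sentiment() is a placeholder for whatever UDF or expression produces the score.
INSERT OVERWRITE TABLE tweets
SELECT col1, col2, col3, col4, text_data,
       compute_sentiment(text_data) AS sentiment
FROM tweets;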

Related

Power query - strategy for handling repeating rows

Given a report which has a table with repeated row headings, is there a good strategy for using Power Query/M to extract the data in a clean format?
For example, the report available here has an Excel file (which at the time of writing points to August 2021):
https://www.opec.org/opec_web/static_files_project/media/downloads/publications/MOMR%20Appendix%20Tables%20(August%202021).xlsx
In this example:
we have the World demand table portion
the Non-OPEC Liquids production portion
both of these have rows for Americas/Europe/Asia Pacific,
which makes them hard to distinguish in Power Query.
What is the right approach for extracting data from this type of table?
I would add a column ... custom column ... with formula
=if [2018] = null then [Column] else null
and then right click the new column and fill down
That would put World Demand and non-OPEC as a column that you could additionally filter on

I would like to compare data between tables

I would like to compare data between two tables, say source and destination, and output the differences.
The problem is that there is a mapping table which stores the columns of the source table and the corresponding columns of the destination table.
For example,
Table: T_MAP
SourceTableName   SourceTableColumn   DestinationTable   DestinationTableColumn
s_t1              s_t1_col1           d_t1               d_t1_col1
s_t1              s_t1_col2           d_t1               d_t1_col2
s_t2              s_t2_col1           d_t2               d_t2_col1
...
So the question is how to compare the data between the two tables using the map table.
My current idea is to use a dynamic cursor to generate dynamic SQL statements, then use MINUS + UNION ALL to compare the data (sketched below), but the performance may be a big problem.
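For illustration, the generated statement for the first s_t1 -> d_t1 mapping might look roughly like this (a sketch only; in practice the table and column lists would be assembled from T_MAP at run time):

-- Rows present in the source but not the destination, and vice versa,
-- for the s_t1 -> d_t1 mapping taken from T_MAP.
SELECT 'only_in_source' AS diff_side, s_t1_col1, s_t1_col2
FROM  (SELECT s_t1_col1, s_t1_col2 FROM s_t1
       MINUS
       SELECT d_t1_col1, d_t1_col2 FROM d_t1)
UNION ALL
SELECT 'only_in_destination' AS diff_side, d_t1_col1, d_t1_col2
FROM  (SELECT d_t1_col1, d_t1_col2 FROM d_t1
       MINUS
       SELECT s_t1_col1, s_t1_col2 FROM s_t1);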
Are there any other thoughts?
Please help.
Thanks in advance.

Oracle - build dimension from a file based data source

I'm trying to build a star schema in Oracle 12c. In my case the data source is not a relational database but a single Excel/CSV file which is populated via a Google Form, which means I don't have any reference from a source system such as auto-incremented keys/IDs. What would be the best approach to building a star schema given this condition?
File row sample:
<submitted timestamp>,<submitted by user>,<region>,<country>,<branch>,<branch location>,<branch area>,<branch type>,<branch name>,<branch private? yes/no value>,<the following would be all "fact" values (measurements),...,...,...
In case I wanted to build a "branch" dimension, how would I handle updates/inserts after the first load into the dimension table?
Thought solution so far:
I had thought of making a concatenated string "key" from the branch values, which would make it unique (an underscore would be the "glue" to concatenate the values), e.g.:
<region>_<country>_<branch>_<branch location> as branch_key
I would insert all the distinct branches into a staging table, including the branch_key column for each one of them; then, when loading into the dimension, I could compare which keys do not exist yet in my dimension table and insert those, as sketched below. As for updates, I'm a bit stuck on how to handle them; I had thought of having another file mapping which branches are active, with an expiration date column. Basically I am trying to simulate what I could do if the data were in a database instead of CSV files.
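A minimal Oracle sketch of that key-based load, assuming hypothetical stg_branch and dim_branch tables and a dim_branch_seq sequence for the surrogate key:

-- Insert branches whose natural key is not yet present in the dimension.
MERGE INTO dim_branch d
USING (
  SELECT DISTINCT
         region || '_' || country || '_' || branch || '_' || branch_location AS branch_key,
         region, country, branch, branch_location, branch_area, branch_type, branch_name
  FROM stg_branch
) s
ON (d.branch_key = s.branch_key)
WHEN NOT MATCHED THEN
  INSERT (branch_sk, branch_key, region, country, branch, branch_location,
          branch_area, branch_type, branch_name)
  VALUES (dim_branch_seq.NEXTVAL, s.branch_key, s.region, s.country, s.branch,
          s.branch_location, s.branch_area, s.branch_type, s.branch_name);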
This is all I can think of so far; do you have any other recommendations/ideas on how to implement this? Take into consideration that the data source cannot change, i.e. I have to read these CSV files, since the data is not stored anywhere else.
Thank you.

Cross table with two datasets (one as the row and the other as the column)

I have two datasets in my birt report :
Lesson (date)
Student (name)
and I would like to know how to create a cross table using the date (red) as the column names and the name (blue) as the row names, as shown below:
The cells will stay empty.
I have tried to use the Cross Tab, but it seems that I can only use one dataset.
For information, I am stuck with version 2.5.2. I say this in case someone writes about a practical functionality only available in a later version of BIRT... :-)
Where both datasets are coming from the same relational data source, the simplest way to achieve this would normally be:
Replace the existing two datasets with a single dataset, in which the two original datasets are cross-joined to each other;
Create a crosstab from the new dataset, with the new dataset columns as the data cube groups.
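For example, if the two original datasets come from hypothetical LESSON and STUDENT tables, the combined dataset could be as simple as:

-- Every (date, name) pair appears once; the crosstab then pivots dates into columns.
SELECT l.lesson_date, s.student_name
FROM lesson l
CROSS JOIN student s
ORDER BY s.student_name, l.lesson_date;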

HBase row key design for reads and updates

I'm trying to understand the best way to design the row key for my HBase table.
My use case :
Structure right now
PersonID | BatchDate | PersonJSON
When something about the person is modified, a new PersonJSON and a new BatchDate are inserted into HBase, updating the old record. And every 4 hours, a scan of all the people who were modified is pushed to Hadoop for further processing.
If my key is just PersonID, it is great for updating the data, but my scan performance suffers because I have to add a filter on the BatchDate column to find all the rows greater than a given batch date.
If my key is a composite key like BatchDate|PersonID, I could use startrow and endrow on the row key and get all the rows that have been modified, but then I would have a lot of duplicates since the key is not unique per person, and I could no longer update a person in place.
Is a bloom filter on row+col (PersonID+BatchDate) an option?
Any help is appreciated.
Thanks,
Abhishek
In addition to the table with PersonID as the rowkey, it sounds like you need a dual-write secondary index, with BatchDate as the rowkey.
Another option would be Apache Phoenix, which provides support for secondary indexes.
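For example, with Phoenix the secondary index can be declared in SQL; a sketch using assumed, hypothetical table and column names:

-- Main table keyed on the unique PersonID, so updates overwrite in place.
CREATE TABLE person (
    person_id   VARCHAR NOT NULL PRIMARY KEY,
    batch_date  TIMESTAMP,
    person_json VARCHAR
);
-- Secondary index on BatchDate gives the efficient range scan,
-- without duplicating person rows in the main table.
CREATE INDEX person_by_batch_date ON person (batch_date) INCLUDE (person_json);
-- Rows modified after a given point in time:
SELECT person_id, person_json
FROM person
WHERE batch_date > TO_TIMESTAMP('2014-01-01 00:00:00');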
I usually do two steps:
Create table one, whose key is a combination of BatchDate+PersonId; the value can be empty.
Create table two just as you do now: the key is PersonId and the value is the whole data.
For a date range query, query table one first to get the PersonIds, and then use the HBase batch get API to fetch the data from table two in a batch. It would be very fast.
