Create view on Hive table: comments for each variable are lost

I created a Hive table and added a description in the "comment" field for each variable, as shown below:
spark.sql("create table test_comment (col string comment 'col comment') comment 'hello world table comment ' ")
spark.sql("describe test_comment").show()
+--------+---------+-----------+
|col_name|data_type| comment|
+--------+---------+-----------+
| col| string|col comment|
+--------+---------+-----------+
All is good and we see the comment "col comment" in the comment field of the variable "col".
Now when I create a view on this table, the "comment" field is not propagated to the view and the "comment" column is empty:
spark.sql("""create view test_comment_view as select * from test_comment""")
spark.sql("describe test_comment_view")
+--------+---------+-------+
|col_name|data_type|comment|
+--------+---------+-------+
| col| string| null|
+--------+---------+-------+
Is there a way to keep the values of the comment field when creating a view? What is the reason for this behaviour?
I am using:
Hadoop 2.6.0-cdh5.8.0
Hive 1.1.0-cdh5.8.0
Spark 2.1.0.cloudera1

What I have observed is that comments are not inherited even when creating a table from another table with CREATE TABLE ... AS SELECT; it looks like this is the default behaviour.
create table t1 like another_table;
desc t1; -- includes comments
+-----------+------------+------------------+--+
| col_name | data_type | comment |
+-----------+------------+------------------+--+
| id | int | new employee id |
| name | string | employee name |
+-----------+------------+------------------+--+
create table t1 as select * from another_table;
desc t1; -- excludes comments
+-----------+------------+----------+--+
| col_name | data_type | comment |
+-----------+------------+----------+--+
| id | int | |
| name | string | |
+-----------+------------+----------+--+
But there is a workaround: you can specify individual columns with comments when creating a view.
create view v2(id2 comment 'vemp id', name2 comment 'vemp name') as select * from another_table;
+-----------+------------+------------+--+
| col_name | data_type | comment |
+-----------+------------+------------+--+
| id2 | int | vemp id |
| name2 | string | vemp name |
+-----------+------------+------------+--+
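Applied to the table from the question, the workaround could look like the sketch below, which simply restates the base table's column comment by hand:
drop view if exists test_comment_view;
create view test_comment_view(col comment 'col comment') as select * from test_comment;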

Related

How to drop hive partitions with hivevar passed as partition variable?

I have been trying to run this piece of code to drop the current day's partition from a Hive table, and for some reason it does not drop the partition. Not sure what's wrong.
Table Name : prod_db.products
desc:
+----------------------------+-----------------------+-----------------------+--+
| col_name | data_type | comment |
+----------------------------+-----------------------+-----------------------+--+
| name | string | |
| cost | double | |
| load_date | string | |
| | NULL | NULL |
| # Partition Information | NULL | NULL |
| # col_name | data_type | comment |
| | NULL | NULL |
| load_date | string | |
+----------------------------+-----------------------+-----------------------+--+
I am using the following code:
SET hivevar:current_date=current_date();
ALTER TABLE prod_db.products DROP PARTITION(load_date='${current_date}');
Before and After picture of partitions:
+-----------------------+--+
| partition |
+-----------------------+--+
| load_date=2022-04-07 |
| load_date=2022-04-11 |
| load_date=2022-04-18 |
| load_date=2022-04-25 |
+-----------------------+--+
It runs without any error but won't drop the partition. The table is internal/managed.
I tried different ways mentioned on Stack Overflow but it is just not working for me.
Help.
You don't need to set a variable; hivevar substitution is plain text replacement, so the partition spec above looks for the literal value 'current_date()' rather than today's date. You can drop the partition directly with SQL:
Alter table prod_db.products
drop partition (load_date = current_date());
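If the variable approach is still wanted, the value must arrive as a literal date. One way (a sketch, assuming the hive CLI or beeline is available) is to compute it in the shell and pass it in:
hive --hivevar current_date="$(date +%Y-%m-%d)" -f drop_partition.hql
-- where drop_partition.hql contains:
alter table prod_db.products drop partition (load_date = '${hivevar:current_date}');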

How to convert row values of a column to columns - JDBCTemplate and PostgreSQL

I currently have a table:
id | info | value | date
1 | desc | description | 19-01-1990 10:01:23
2 | lname | Doe | 19-11-1990 10:01:23
1 | fname | John | 19-08-1990 10:01:23
1 | dob | dob | 19-05-1990 10:01:23
3 | fname | Jo | 19-01-1990 10:01:23
I would like to query and grab data and do joins with multiple tables later on, so I need it to be:
id | desc | lname | fname | dob | desc | date | ... |
1 | description | Doe | John | dob | description | 19-01-1990 10:01:23 | ... |
2 | ......... | ..... | Jo | | | ... | ... |
I have tried crosstab but it does not seem to work. Any help is appreciated
Your current table is a typical denormalized key-value store. You may generate the pivoted output you want by aggregating by id and then using MAX(CASE ...) expressions:
SELECT
    id,
    MAX(CASE WHEN info = 'desc'  THEN value END) AS "desc",
    MAX(CASE WHEN info = 'lname' THEN value END) AS lname,
    MAX(CASE WHEN info = 'fname' THEN value END) AS fname,
    MAX(CASE WHEN info = 'dob'   THEN value END) AS dob
FROM yourTable
GROUP BY id
ORDER BY id;
Note that desc is a reserved word in PostgreSQL, hence the quoted alias. There is no column for the date, as you did not give the logic for which date value should be retained for each id.
As for the Spring part of your question, you would probably have to execute the above as a native query.
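For reference, the same pivot can also be done with the tablefunc extension's crosstab, which the question mentions trying; the sketch below assumes the extension is installed (CREATE EXTENSION tablefunc) and that id is an int:
SELECT *
FROM crosstab(
    $$SELECT id, info, value FROM yourTable ORDER BY 1$$,
    $$VALUES ('desc'), ('lname'), ('fname'), ('dob')$$
) AS t(id int, "desc" text, lname text, fname text, dob text);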

Change table column name parquet format Hadoop

I have a table with columns a, b, c.
The data is stored on HDFS as Parquet. Is it possible to change a specific column name even though the Parquet files were already written with the schema a, b, c?
Read each file in a loop.
Create a new df with the changed column name.
Write the new df in append mode to another dir.
Move the new dir to the read dir.
import subprocess

# list the files under the output dir
cmd = ['hdfs', 'dfs', '-ls', OutDir]
process = subprocess.Popen(cmd, stdout=subprocess.PIPE)
for i in process.communicate():
    if i:
        for j in i.decode('utf-8').strip().split():
            if j.endswith('snappy.parquet'):
                print('reading file ', j)
                mydf = spark.read.format("parquet").option("inferSchema", "true") \
                    .option("header", "true") \
                    .load(j)
                print('df built on bad file')
                # rename the columns through a SQL projection
                mydf.createOrReplaceTempView("dtl_rev")
                ssql = """select old_name as new_name,
                          old_col as new_col from dtl_rev"""
                newdf = spark.sql(ssql)
                print('df built on renamed file')
                newdf.write.format("parquet").mode("append").save(newdir)
We cannot rename a column in the existing files: Parquet stores the schema inside the data files themselves.
We can check the schema using the command below:
parquet-tools schema part-m-00000.parquet
So we have to take a backup into a temp table and re-ingest the history data.
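A minimal sketch of that backup-and-reingest approach, reusing the a, b, c columns from the question (the table name my_table is an assumption for illustration):
create table my_table_backup as select * from my_table;
drop table my_table;
create table my_table (new_a string, b string, c string) stored as parquet;
-- columns map by position, so a lands in new_a
insert into table my_table select a, b, c from my_table_backup;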
Try using ALTER TABLE:
desc p;
+-------------------------+------------+----------+--+
| col_name | data_type | comment |
+-------------------------+------------+----------+--+
| category_id | int | |
| category_department_id | int | |
| category_name | string | |
+-------------------------+------------+----------+--+
alter table p change column category_id id int;
desc p;
+-------------------------+------------+----------+--+
| col_name | data_type | comment |
+-------------------------+------------+----------+--+
| id | int | |
| category_department_id | int | |
| category_name | string | |
+-------------------------+------------+----------+--+

Querying HIVE Metadata

I need to query the following table and view information from my Apache HIVE cluster:
Each row needs to contain the following:
TABLE SCHEMA
TABLE NAME
TABLE DESCRIPTION
COLUMN NAME
COLUMN DATA TYPE
COLUMN LENGTH
COLUMN PRECISION
COLUMN SCALE
NULL OR NOT NULL
PRIMARY KEY INDICATOR
This can be easily queried from most RDBMS (metadata tables/views), but I am struggling to find much information about the equivalent metadata tables/views in HIVE.
Please help :)
This information is available from the Hive metastore. The below example query is for a MySQL-backed metastore (Hive version 1.2).
SELECT
    DBS.NAME AS TABLE_SCHEMA,
    TBLS.TBL_NAME AS TABLE_NAME,
    TBL_COMMENTS.TBL_COMMENT AS TABLE_DESCRIPTION,
    COLUMNS_V2.COLUMN_NAME AS COLUMN_NAME,
    COLUMNS_V2.TYPE_NAME AS COLUMN_DATA_TYPE_DETAILS
FROM DBS
JOIN TBLS ON DBS.DB_ID = TBLS.DB_ID
JOIN SDS ON TBLS.SD_ID = SDS.SD_ID
JOIN COLUMNS_V2 ON COLUMNS_V2.CD_ID = SDS.CD_ID
JOIN
(
    -- one row per table: its 'comment' parameter if present, else an empty
    -- string; aggregating avoids duplicate rows for tables that also carry
    -- other parameters (e.g. transient_lastDdlTime)
    SELECT TBL_ID,
           MAX(CASE WHEN PARAM_KEY = 'comment' THEN PARAM_VALUE ELSE '' END) AS TBL_COMMENT
    FROM TABLE_PARAMS
    GROUP BY TBL_ID
) TBL_COMMENTS
ON TBLS.TBL_ID = TBL_COMMENTS.TBL_ID;
Sample output:
+--------------+----------------------+-----------------------+-------------------+------------------------------+
| TABLE_SCHEMA | TABLE_NAME | TABLE_DESCRIPTION | COLUMN_NAME | COLUMN_DATA_TYPE_DETAILS |
+--------------+----------------------+-----------------------+-------------------+------------------------------+
| default | temp003 | This is temp003 table | col1 | string |
| default | temp003 | This is temp003 table | col2 | array<string> |
| default | temp003 | This is temp003 table | col3 | array<string> |
| default | temp003 | This is temp003 table | col4 | int |
| default | temp003 | This is temp003 table | col5 | decimal(10,2) |
| default | temp004 | | col11 | string |
| default | temp004 | | col21 | array<string> |
| default | temp004 | | col31 | array<string> |
| default | temp004 | | col41 | int |
| default | temp004 | | col51 | decimal(10,2) |
+--------------+----------------------+-----------------------+-------------------+------------------------------+
Metastore tables referenced in the query:
DBS: Details of databases/schemas.
TBLS: Details of tables.
COLUMNS_V2: Details of columns.
SDS: Details of storage.
TABLE_PARAMS: Details of table parameters (key-value pairs).
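Note that column length, precision, and scale are not stored as separate metastore columns; they are embedded in COLUMNS_V2.TYPE_NAME (as in decimal(10,2) in the sample output) and have to be parsed out, for example (a rough MySQL sketch):
SELECT COLUMN_NAME,
       TYPE_NAME,
       SUBSTRING_INDEX(SUBSTRING_INDEX(TYPE_NAME, '(', -1), ')', 1) AS TYPE_ARGS
FROM COLUMNS_V2
WHERE TYPE_NAME LIKE '%(%';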

Automatically generating documentation about the structure of the database

There is a database that contains several views and tables.
I need to create a report (documentation of the database) with a list of all the fields in these tables, indicating the type and, if possible, the minimum/maximum values and the value from the first row. For example:
.------------.--------.--------.--------------.--------------.--------------.
| Table name | Column | Type | MinValue | MaxValue | FirstRow |
:------------+--------+--------+--------------+--------------+--------------:
| Table1 | day | date | ‘2010-09-17’ | ‘2016-12-10’ | ‘2016-12-10’ |
:------------+--------+--------+--------------+--------------+--------------:
| Table1 | price | double | 1030.8 | 29485.7 | 6023.8 |
:------------+--------+--------+--------------+--------------+--------------:
| … | | | | | |
:------------+--------+--------+--------------+--------------+--------------:
| TableN | day | date | ‘2014-06-20’ | ‘2016-11-28’ | ‘2016-11-16’ |
:------------+--------+--------+--------------+--------------+--------------:
| TableN | owner | string | NULL | NULL | ‘Joe’ |
'------------'--------'--------'--------------'--------------'--------------'
I think that executing many queries like
SELECT MAX(column_name) AS max_value, MIN(column_name) AS min_value
FROM table_name
will be inefficient on the huge tables stored in Hadoop.
After reading the documentation I found an article about "Statistics in Hive".
It seems I must use a statement like this:
ANALYZE TABLE tablename COMPUTE STATISTICS FOR COLUMNS;
But this command ended with error:
Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.ColumnStatsTask
Do I understand correctly that this statement adds information to the table's metadata rather than displaying a result? Will it work with views?
Please suggest how to efficiently and automatically create documentation for a database in Hive.
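For reference, statistics computed by ANALYZE are indeed stored in the metastore rather than displayed, and can be read back per column afterwards; a sketch using Table1 and its price column from the example above:
ANALYZE TABLE Table1 COMPUTE STATISTICS FOR COLUMNS;
DESCRIBE FORMATTED Table1 price;
-- the column's min, max, num_nulls, distinct_count etc. appear in the output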
