Inserting a Row Hash in an H2 database - h2

I would like to configure an H2 database so that a hash value is calculated for every saved record. The hash should cover all columns of the table.
Example:
There is this table user:
id | name | age | hash
and a user John with an age of 45 is added. The database should then create this record:
1 | John | 45 | hash(1, john, 45)
Is it possible to create this setup for an H2 database within a Liquibase changelog?
I thought it might work with valueComputed and the H2 hash function (http://www.h2database.com/html/functions.html#hash).

I'm not familiar with H2, but whether you are using XML, JSON, or YAML (or formatted SQL, of course) you can always write custom SQL to take advantage of DBMS-specific SQL syntax.
XML example using the Oracle-specific to_date function:
<sql>
insert into table1 (id, start_time)
values (1, to_date('05/03/2021 13:00:00','mm/dd/yyyy hh24:mi:ss'));
</sql>
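Applied to the original H2 question, the same idea would be to compute the hash in custom SQL with H2's built-in HASH and STRINGTOUTF8 functions. A minimal sketch, assuming the user table from the question and a VARBINARY hash column; for a hash computed automatically on every insert you would still need a trigger or computed column on top of this:
<sql dbms="h2">
-- hash over the concatenated column values, mirroring the example record from the question
insert into "user" (id, name, age, hash)
values (1, 'John', 45, hash('SHA256', stringtoutf8('1' || 'John' || '45'), 1));
</sql>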

Related

"order by" postgres and oracle have different results

When I do an "order by" on the varchar field in the Oracle database, here is the result:
DATA:
1>33
1>31>33
1>31
112
11
1
Is there any way to achieve the desired result below?
DATA:
112
11
1>33
1>31>33
1>31
1
For Postgres it works perfectly, but for other databases it does not sort as it should.
If someone can help me, thank you very much.
Oracle_19
Postgres_13
source:
create table test(
data char(50)
);
insert into test values('112');
insert into test values('11');
insert into test values('1>33');
insert into test values('1>31>33');
insert into test values('1>31');
insert into test values('1');
select * from test order by data desc
I don't pretend to have a lot of knowledge of collation, but you can achieve the result you want in Oracle with a UCA collation and its treatment of variable characters and weighting, either for a specific query with nlssort():
select * from test order by nlssort(data, 'NLS_SORT=UCA0700_DUCET_VN') desc
or by setting the NLS_SORT session (or system) parameter:
alter session set nls_sort = UCA0700_DUCET_VN;
select * from test order by data desc
Both give the result you want:
DATA
----------
112
11
1>33
1>31>33
1>31
1
db<>fiddle
Presumably your PostgreSQL environment is configured to do something similar.
I've never used PostgreSQL, but it looks like this collation does the same:
select * from test order by data collate "vi-VN-x-icu" desc
db<>fiddle
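If you need to locate a matching ICU collation on the PostgreSQL side, the available collations can be listed from the pg_collation catalog. A small sketch (collation names vary with the installed ICU version):
-- list ICU-provided collations whose names start with 'vi' (Vietnamese)
select collname, collprovider
from pg_collation
where collprovider = 'i'
and collname like 'vi%';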

TSQL: Fastest way to convert data_type and update it

My database performance skills are not really good and I could not find any good Google result, so I need your help.
I am trying to convert all columns of a table. All data in this table is currently of datatype varchar.
I have a reference table which contains wrong data but correct metadata (Column_Name, Data_Type, etc.), so I try to use the table with the correct metadata to convert the table with the correct data. As in the following example, the dynamic script is meant to convert a column that should actually be datetime:
IF @Datatype IN ('datetime')
BEGIN
    SET @sqlDate = '
        UPDATE dbo.' + @Table + '
        SET ' + @Column + ' = TRY_CONVERT(datetime, ' + @Column + ', 105);
        ALTER TABLE dbo.' + @Table + '
        ALTER COLUMN ' + @Column + ' datetime;
    ';
    EXEC (@sqlDate);
END
So my goal is to convert a table like this:
+----------------+----------------+
| Col1 (varchar) | Col2 (varchar) |
+----------------+----------------+
| '01.01.2000' | '124.5' |
+----------------+----------------+
To this:
+-------------------------+--------------+
| Col1(datetime) | Col2 (float) |
+-------------------------+--------------+
| yyyy-mm-dd hh:mi:ss.mmm | 124.5 |
+-------------------------+--------------+
(based on the correct metadata table)
Do you think it's better to first convert the data into a #TempTable and update the original column via the pre-converted #TempTable? Is there any better solution?
Thanks a lot!
Here's how I would do it.
First, create and populate a sample table (please spare us this step in your future questions):
CREATE TABLE Sample
(
DateTimeColumn varchar(50),
FloatColumn varchar(50)
);
INSERT INTO Sample(DateTimeColumn, FloatColumn) VALUES ('01.01.2000', '124.5');
Then, alter the table to add new columns with the correct data types:
ALTER TABLE Sample
ADD AsDateTime DateTime,
AsFloat float;
Populate the new columns:
UPDATE Sample
SET AsDateTime = TRY_CONVERT(datetime, DateTimeColumn, 105),
AsFloat = TRY_CAST(FloatColumn AS float);
At this point, you should pause and check if you really did get correct values.
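One quick way to check is to look for rows where the original varchar value is present but TRY_CONVERT/TRY_CAST returned NULL (a sketch against the sample table above):
-- rows where the conversion failed and produced NULL
SELECT DateTimeColumn, FloatColumn
FROM Sample
WHERE (DateTimeColumn IS NOT NULL AND AsDateTime IS NULL)
   OR (FloatColumn IS NOT NULL AND AsFloat IS NULL);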
Once the data in the new columns is verified, you can drop the old columns
ALTER TABLE Sample
DROP COLUMN DateTimeColumn;
ALTER TABLE Sample
DROP COLUMN FloatColumn;
and rename the new columns:
EXEC sp_rename 'dbo.Sample.AsDateTime', 'DateTimeColumn', 'COLUMN';
EXEC sp_rename 'dbo.Sample.AsFloat', 'FloatColumn', 'COLUMN';
A quick select to verify the change:
SELECT DateTimeColumn, FloatColumn
FROM Sample;
Results:
DateTimeColumn FloatColumn
2000-01-01 00:00:00 124.5

Check if a hive table is partitioned on a given column

I have a list of Hive tables, some of which are partitioned. Given a column, I need to check whether a particular table is partitioned on that column or not.
I have searched and found that desc formatted tablename would return all the details of the table.
Since I have to iterate over all the tables to build the list, desc formatted would not help.
Is there any other way this can be done?
You can connect directly to the metastore and query it:
metastore=# select d."NAME" as DATABASE,
t."TBL_NAME" as TABLE,
p."PKEY_NAME" as PARTITION_KEY
from "PARTITION_KEYS" p
join "TBLS" t on p."TBL_ID"=t."TBL_ID"
join "DBS" d on t."DB_ID"=d."DB_ID";
database | table | partition_key
----------+-------------+---------------
default | src_union_1 | ds
default | cbo_t1 | dt
default | cbo_t2 | dt
The exact syntax of querying your metastore depends on your particular choice of metastore (in my case it is a PostgreSQL one).
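To answer the "given a column" part directly, the same join can be filtered on the database, table, and partition-key names. A sketch against a PostgreSQL-backed metastore (the database, table, and column names are placeholders):
-- hypothetical check: is 'my_table' in database 'default' partitioned on column 'ds'?
select count(*) > 0 as is_partitioned_on_column
from "PARTITION_KEYS" p
join "TBLS" t on p."TBL_ID" = t."TBL_ID"
join "DBS" d on t."DB_ID" = d."DB_ID"
where d."NAME" = 'default'
  and t."TBL_NAME" = 'my_table'
  and p."PKEY_NAME" = 'ds';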

Update of two same tables on different oracle schemas using primary key

I am scratching my head trying to resolve this issue with good performance. We managed to find a solution in Java using a hash map, but as the table contains 1L (100,000) records it is pretty tough to manage this part.
I am looking for the best possible option.
I have two schemas in the same Oracle database. I need to update a table using the same table in the other schema, matching on the primary key (we only update if the primary key row exists; we should not insert).
Suppose my Oracle database is TEST and I have two schemas, SCHEMA1 & SCHEMA2.
Both SCHEMA1 & SCHEMA2 contain the table SAMPLE1.
Structure:
ID NUMBER ==> PRIMARY KEY
NAME VARCHAR ==> PRIMARY KEY
LASTNAME VARCHAR ==> NORMAL COLUMN
SCHEMA1.SAMPLE1 contains the data below:
1) 123 'TEMP' 'TEMPOARY1'
2) 234 'TEMP2' 'TEMPORARY2'
3) 345 'TEMP3' 'TEMPORARY3'
SCHEMA2.SAMPLE1 contains the data below:
1) 123 'TEMP' 'TEMP1'
2) 23 'TEMP23' 'TEMP2'
3) 235 'TEMP2' 'TEMP3'
Now my target is to sync table SAMPLE1 of SCHEMA1 with table SAMPLE1 of SCHEMA2, and the result should be as below:
1) 123 'TEMP' 'TEMP1'
2) 234 'TEMP2' 'TEMPORARY2'
3) 345 'TEMP3' 'TEMPORARY3'
Thank you for your help
Try something like this:
declare
    procedure fncUpdate(pId PLS_INTEGER, pName VARCHAR2, pLastname VARCHAR2) as
        vId pls_integer;
    begin
        UPDATE SCHEMA2.SAMPLE1
        SET lastname = pLastname
        WHERE id = pId
          AND name = pName
        RETURNING id INTO vId;
        DBMS_OUTPUT.PUT_LINE('id : ' || vId);
    end fncUpdate;
begin
    for cur in (
        SELECT id, name, lastname
        FROM SCHEMA1.SAMPLE1
    )
    loop
        fncUpdate(cur.id, cur.name, cur.lastname);
    end loop;
end;
Regarding the original title, "Update of two same tables on different oracle databases", and the statement "I have two schemas": I have edited your question title and changed "database" to "schema", since you clearly mention schemas in your question body. Do not confuse a DATABASE with a SCHEMA; I have seen SQL Server developers often interpret a schema as a relative term for a database. A schema is the set of objects (tables, views, indexes, etc.) that belongs to a user.
No need for PL/SQL; do it in plain SQL.
You could use a MERGE statement.
For example,
MERGE INTO schema2.table2 t2
USING (SELECT * FROM schema1.table1) t1
ON (t2.primarykey = t1.key)
WHEN MATCHED THEN
UPDATE SET
    t2.column2 = t1.column2,
    t2.column3 = t1.column3
/
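Applied to the question's SAMPLE1 tables (composite key id, name, with SCHEMA1 updated from SCHEMA2 as the expected result suggests), the MERGE could look like this sketch:
MERGE INTO schema1.sample1 t1
USING (SELECT id, name, lastname FROM schema2.sample1) t2
ON (t1.id = t2.id AND t1.name = t2.name)
WHEN MATCHED THEN
UPDATE SET t1.lastname = t2.lastname;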

Column name is masked in oracle indexes

I have a table in an Oracle DB which has a unique index composed of two columns (id and valid_from). The column valid_from is of type timestamp with time zone.
When I query SYS.USER_IND_COLUMNS to see which columns my table is using in the unique index, I cannot see the name of the valid_from column, but instead something like SYS_NC00027$.
Is there any possibility to display the name valid_from rather than SYS_NC00027$?
Apparently Oracle creates a function-based index for TIMESTAMP WITH TIME ZONE columns.
The definitions of these index expressions can be found in the view ALL_IND_EXPRESSIONS.
Something like this should get you started:
select ic.index_name,
ic.column_name,
ie.column_expression
from all_ind_columns ic
left join all_ind_expressions ie
on ie.index_owner = ic.index_owner
and ie.index_name = ic.index_name
and ie.column_position = ic.column_position
where ic.table_name = 'FOO';
Unfortunately column_expression is a (deprecated) LONG column and cannot easily be used in a coalesce() or nvl() function.
Use the query below to verify the column info:
select column_name, virtual_column, hidden_column, data_default
from user_tab_cols
where table_name = 'EMP';
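For a TIMESTAMP WITH TIME ZONE column, the expression behind the system-generated column is typically the UTC-normalized value, e.g. SYS_EXTRACT_UTC("VALID_FROM"), so the SYS_NC... name can be resolved back to valid_from via USER_IND_EXPRESSIONS. A small sketch (the table name is a placeholder):
-- resolve system-generated index columns back to their expressions
select index_name, column_position, column_expression
from user_ind_expressions
where table_name = 'MY_TABLE';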
