How to find the table record difference after changing the ROW_FORMAT of a MySQL table - mysql-8.0

I want to migrate from MySQL 5.6 to MySQL 8 but got some warnings related to the maximum allowed size (8126) for a record on an index leaf page.
So I need to confirm that the data is safe after changing the row format. Initially I tried a checksum, but the checksum also differs for a few of the tables if we change the row format. So is there any other way to match the table records before and after changing the row format, and even after migration?
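One format-independent approach is to compute an aggregate hash over the row data yourself, since the result of CHECKSUM TABLE can depend on the storage and row format. A minimal sketch, assuming a hypothetical table t with primary key id and columns a and b (nullable columns are wrapped in IFNULL so NULLs cannot silently collapse):

SELECT COUNT(*) AS row_count,
       SUM(CRC32(CONCAT_WS('#', id, IFNULL(a, ''), IFNULL(b, '')))) AS data_hash
FROM t;

Run the same query before and after the ROW_FORMAT change (or the migration); matching row_count and data_hash values indicate the data is intact. For large production tables, a purpose-built tool such as pt-table-checksum from Percona Toolkit gives stronger guarantees.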

Related

Update key column type in ClickHouse

I am trying to update a column type from DateTime(TZ) to DateTime, but it is a key column and cannot be changed. Dropping and recreating the table has no effect - it looks like the metadata is stored in ZK.
Can I change the table structure (I can drop/create the table) without changing the ZK records? Or is it required to remove the metadata from ZK?
You need to drop the table on all replicas. If you lost a replica and did not drop its table, you need to clean ZK manually.
Or you can just use another ZK path. The table name does matter.
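For the new-path option, a hedged sketch (the table, cluster, columns, and ZK path below are all hypothetical):

DROP TABLE IF EXISTS events ON CLUSTER my_cluster SYNC;
CREATE TABLE events ON CLUSTER my_cluster
(
    ts DateTime,
    payload String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/events_v2', '{replica}')
ORDER BY ts;

Pointing the new table at a fresh ZooKeeper path (events_v2 instead of events) sidesteps the stale metadata left behind by the old table, so no manual ZK cleanup is needed.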

Populating Tables into Oracle in-memory segments

I am trying to load tables into the Oracle in-memory database. I enabled the tables for INMEMORY using the SQL*Plus command ALTER TABLE table_name INMEMORY. The table also contains data, i.e. the table is populated. But when I run the command SELECT v.owner, v.segment_name name, v.populate_status status FROM v$im_segments v;, it shows no rows selected.
What can be the problem?
Have you considered this?
https://docs.oracle.com/database/121/CNCPT/memory.htm#GUID-DF723C06-62FE-4E5A-8BE0-0703695A7886
Population of the IM Column Store in Response to Queries
Setting the INMEMORY attribute on an object means that this object is a candidate for population in the IM column store, not that the database immediately populates the object in memory.
By default (INMEMORY PRIORITY is set to NONE), the database delays population of a table in the IM column store until the database considers it useful. When the INMEMORY attribute is set for an object, the database may choose not to materialize all columns when the database determines that the memory is better used elsewhere. Also, the IM column store may populate a subset of columns from a table.
You probably need to run a select against the data first.
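A minimal sketch of both ways to trigger population, assuming a hypothetical table sales:

-- Option 1: a full scan makes the database populate the table
SELECT /*+ FULL(sales) */ COUNT(*) FROM sales;

-- Option 2: raise the priority so the database populates it eagerly
ALTER TABLE sales INMEMORY PRIORITY CRITICAL;

-- Then re-check the in-memory segments
SELECT v.owner, v.segment_name name, v.populate_status status
FROM v$im_segments v;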

Number format not matching between Pentaho Kettle and Oracle?

I have a database table in Oracle 11g created and populated with the following code:
CREATE TABLE TEST_TABLE (CODE NUMBER(1,0));
INSERT INTO TEST_TABLE (CODE) VALUES (3);
Now, I want to use this table as a lookup table in a Pentaho Kettle transformation. I want to make sure that the value of a column comes from this table, and abort if it does not. I have the following setup:
The data frame has a single column called Test of type integer, and a single row. The lookup is configured like this:
However, the lookup always fails and the transformation is aborted, no matter whether the value of Test is 3 (should be OK) or 4 (should be aborted). But if I check the "Load all data from table" box, it works as expected.
So my question is this: Why does it not work unless I cache the whole table?
Two further observations:
When it works, and the row is printed in log, I notice that Test is printed like [ 3] and From DB is printed like [3] (without the extra space). I don't know if this is of any significance, though.
If I change the database table so that CODE is created as INT, it works. This leads me to believe it is somehow related to the number formatting. (In my actual application, I can not change the database tables.) I guess I should change the format of Test, but to what? Setting it to Number does not help, nor does Number with length 1 and precision 0.
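One way to see why NUMBER(1,0) and INT behave differently is to compare what the data dictionary records for each declaration, since the JDBC driver derives the Java type (and hence Kettle's field type) from the stored precision and scale. A hedged sketch; TEST_TABLE_INT is a hypothetical second table:

CREATE TABLE TEST_TABLE_INT (CODE INT);
SELECT table_name, column_name, data_type, data_precision, data_scale
FROM user_tab_columns
WHERE table_name IN ('TEST_TABLE', 'TEST_TABLE_INT');

If the recorded precision and scale differ, the NUMBER(1,0) column likely arrives in Kettle as a BigNumber rather than an Integer, which would explain why the exact-match lookup against the integer Test field fails.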

Drop column in compressed table

Is there any chance to drop a column in a compressed table?
I checked Google and it seems like it is not possible at all.
To be sure, I am asking here.
Regards
Set that column to unused, then drop the unused columns:
ALTER TABLE TEST SET UNUSED (column_name);
ALTER TABLE TEST DROP UNUSED COLUMNS;
Note: The SET UNUSED statement does not actually remove the target column data or restore the disk space occupied by these columns. However, a column that is marked as unused is not displayed in queries or data dictionary views, and its name is removed so that a new column can reuse that name. All constraints, indexes, and statistics defined on the column are also removed.
If that does not work for you for some reason, you can try to move the table into a non-compressed format and then drop the column and compress again.
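For the move route, a hedged sketch (table, column, and index names are hypothetical):

ALTER TABLE test MOVE NOCOMPRESS;
ALTER TABLE test DROP COLUMN col_to_drop;
ALTER TABLE test MOVE COMPRESS;
-- MOVE leaves the table's indexes unusable, so rebuild them afterwards
ALTER INDEX test_idx REBUILD;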

Adding column with default value

I have a table A (3 columns) in production with around 10 million records. I want to add one more column to that table, with a default value of 1. Is it going to impact production DB performance if I add a column with default value 1 or something else? What would be the best approach to avoid any kind of performance impact on the DB? Your thoughts are much appreciated!
In Oracle 11g the process of adding a new column with a default value was considerably optimized. If a newly added column is specified as NOT NULL, the default value for that column is maintained in the data dictionary, and it is no longer necessary to store the default value in every record of the table, so each record no longer has to be updated with the default value. This optimization considerably reduces the amount of time the table is exclusively locked during the operation.
alter table <tab_name> add (<col_name> <data_type> default <def_val> not null);
Moreover, a column with a default value added that way will not consume space until you deliberately start to update that column or insert a record with a non-default value for that column. So the operation of adding a new column with a default value and a NOT NULL constraint completes pretty quickly.
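A concrete instance of that syntax, assuming a hypothetical table t and a numeric flag column:

ALTER TABLE t ADD (flag NUMBER(1) DEFAULT 1 NOT NULL);

On 11g and later this is a metadata-only change, so it should complete quickly even on a 10-million-row table.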
I think it is better to create a backup table first, with this syntax:
CREATE TABLE BackUpTable AS SELECT * FROM YourTable;
ALTER TABLE BackUpTable ADD (newColumn NUMBER(5,0) DEFAULT 1);
