Liquibase column data types from generateChangeLog - oracle

I have two databases, one on DB2 and one on Oracle, and I generated a changelog file via the generateChangeLog command. It produced a correct XML file, but one that is only valid for Oracle. I ran the command against the Oracle database, and the result contained column data types like NUMBER(*,0), which are not valid on DB2. How can I run generateChangeLog so that Liquibase emits unified data types?
Does Liquibase have a list of data types that are, let's say, portable across all databases?

Reverse engineering an existing DB schema into a Liquibase XML file always produces DBMS-specific data types. You will have to edit the generated XML file to use JDBC types.
The supported "cross-platform" types are documented in the manual:
http://www.liquibase.org/documentation/column.html
To help make scripts database-independent, the following “generic” data types will be converted to the correct database implementation:
BOOLEAN
CURRENCY
UUID
CLOB
BLOB
DATE
DATETIME
TIME
BIGINT
Specifying a java.sql.Types.* type will also be converted to the correct type. If needed, precision can be included. Here are some examples:
java.sql.Types.TIMESTAMP
java.sql.Types.VARCHAR(255)
In my experience the first list is missing INTEGER and DECIMAL, which can also be used without problems (at least on Oracle and Postgres; I don't have DB2 around to test it).
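For illustration, a hand-edited changelog using only the generic types above might look like this (the changeSet id, author, and table/column names are made up for the example):

<changeSet id="1" author="example">
    <createTable tableName="invoice">
        <column name="id" type="BIGINT">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="paid" type="BOOLEAN"/>
        <column name="note" type="java.sql.Types.VARCHAR(255)"/>
        <column name="created_at" type="DATETIME"/>
    </createTable>
</changeSet>

Liquibase translates each of these to the native type of the target database (for example, BOOLEAN becomes NUMBER(1) on Oracle).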

Related

Is there any performance loss when using ANSI data types in Oracle?

If I use any ANSI-supported data type like INTEGER, NUMERIC, or REAL as the data type for a column or a PL/SQL variable, will it have an additional cost for the database?
What are the pros and cons of using the ANSI-supported data types in an Oracle database? (Database version: 19c)
ANSI data types are just aliases for Oracle data types and will be converted to the equivalent Oracle data type.
From the documentation:
ANSI, DB2, and SQL/DS Data Types
SQL statements that create tables and clusters can also use ANSI data types and data types from the IBM products SQL/DS and DB2. Oracle recognizes the ANSI or IBM data type name that differs from the Oracle Database data type name. It converts the data type to the equivalent Oracle data type, records the Oracle data type as the name of the column data type, and stores the column data in the Oracle data type based on the conversions shown in the tables that follow.
ANSI SQL Data Type                        Oracle Data Type
NUMERIC[(p,s)], DECIMAL[(p,s)] (Note 1)   NUMBER(p,s)
INTEGER, INT, SMALLINT                    NUMBER(38)
FLOAT (Note 2)                            FLOAT(126)
DOUBLE PRECISION (Note 3)                 FLOAT(126)
REAL (Note 4)                             FLOAT(63)
What are the pros and cons of using the ANSI-supported data types in an Oracle database?
There are no performance benefits or penalties, as the type will be converted to the equivalent Oracle type. The main benefit is portability of code between different RDBMSs.
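A quick way to see the aliasing in action (the table name here is just an example):

-- Create a table using ANSI type names
CREATE TABLE ansi_demo (
    qty INTEGER,
    amt NUMERIC(10,2)
);

-- Oracle records the equivalent native types in the data dictionary
SELECT column_name, data_type, data_precision, data_scale
FROM user_tab_columns
WHERE table_name = 'ANSI_DEMO';

Expected result: both columns come back as DATA_TYPE = NUMBER (QTY with precision 38 and scale 0, AMT with precision 10 and scale 2), confirming the conversion table above.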

how to obtain blob type data for insertion in oracle?

Hello friends, I have a problem with the BLOB data type. I want to migrate some data from one DB to another, but I haven't managed to do it for some tables that have BLOB columns. What I have tried is to export a single record in the following way.
First I select the record I want to export to my other DB:
select TEMPLATE_DOCUMENT_ID, blob_file from example_table where template_document_id = 32;
Then I export the result to obtain the INSERT statement. That gives me a script with the data of the record I want to migrate, but when I run the script I get the following error:
Error report -
ORA-01465: invalid hex number
Do you have any idea how I could get the correct data to make my insert?
Note: the migration is done from one Oracle database to another Oracle database.
Obviously the source database is Oracle, but you did not mention what the target database is. In case it is Oracle as well, I would suggest using the Oracle Data Pump tool (expdp/impdp). The documentation is here: https://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_overview.htm
One feature I use quite often, in case you need it, is the VIEWS_AS_TABLES option of the tool, as it allows exporting a subset of the data.
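For a single-record copy like the one above, a minimal sketch might look like this (the connect strings, directory object, and file names are assumptions to adapt to your environment):

expdp myuser@source_db tables=EXAMPLE_TABLE directory=DATA_PUMP_DIR dumpfile=example.dmp logfile=exp.log query='EXAMPLE_TABLE:"WHERE template_document_id = 32"'

impdp myuser@target_db tables=EXAMPLE_TABLE directory=DATA_PUMP_DIR dumpfile=example.dmp logfile=imp.log table_exists_action=append

Depending on your shell, the quotes in the QUERY clause may need escaping, or the parameters can go in a parameter file (parfile) instead. Data Pump moves the BLOB contents in its own binary format, so there is no need to generate hex literals for an INSERT at all.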

H2 set schema changes schema_search_path

If I have a SCHEMA_SEARCH_PATH set and I wish to create a bunch of tables using a common script, by setting the schema rather than spelling the schema out in each CREATE TABLE (so the common script can be used in multiple schemas), then SET SCHEMA also resets the schema_search_path to just the specified schema. This seems like an undesirable side effect.
The value set by SET SCHEMA_SEARCH_PATH is not affected by any other command.
But this value is only used when an identifier is not qualified with a schema and an object with that name doesn't exist in the current schema (which is affected by the SET SCHEMA command).
For example, tables referenced by non-qualified names are searched in the following order:
1. Tables of the current schema.
2. Local temporary tables. (Currently these also include query aliases from WITH clauses, but this may change if a separate scope of identifiers is ever implemented for these views.)
3. Tables of each schema from SCHEMA_SEARCH_PATH, if any. When multiple schemas are specified, their order matters: they are processed in the listed order.
4. Legacy or compatibility tables, such as DUAL or SYSDUMMY1 in DB2 and Derby compatibility modes.
The first table matched by its name will be used.
This is a complex case; for most types of database objects, only steps (1) and (3) are performed.
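A minimal sketch of the distinction, assuming made-up schema and table names:

CREATE SCHEMA s1;
CREATE SCHEMA s2;
CREATE TABLE s2.t(id INT);

SET SCHEMA_SEARCH_PATH s1, s2;
SET SCHEMA s1;        -- changes the current schema only
SELECT * FROM t;      -- not found in S1, so resolved to S2.T via the search path

The SET SCHEMA in the middle does not clear the search path; T is still found through it.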
If you think something is not working as described here and you can create a standalone test case (Java / JDBC / SQL only, no third-party libraries), you can file a bug report on GitHub:
https://github.com/h2database/h2database/issues

How to export data from tables with BLOBs as SQL inserts

I need to export data from one schema and import it into another. In the second schema the tables have different names, different attribute names, and so on, but they are suitable for the data in the first schema. So I export the data as SQL inserts and manually rewrite the names in those inserts.
My problem is with tables that have columns of type BLOB. PL/SQL Developer throws this error:
Table MySchema.ENT_NOTIFICATIONS contains one or more BLOB columns.
Cannot export in SQL format, use PL/SQL Developer format instead.
But when I use the PL/SQL Developer format (.pde), the output is some kind of raw byte data and I can't change what I need.
Is there any solution to manage this?
Note: I use PL/SQL Developer 10.0.5.1710 and Oracle Database 12c.

What Oracle data type is easily converted to BIT in MSSQL via SSIS?

I have a Data Flow from an Oracle table to an MSSQL table with one field of data type BIT. The Oracle table currently stores the characters Y and N (I'm unsure of the exact data type and have no way of checking), but the MSSQL column needs to be of type BIT. What cast can I use in the Oracle query so that the data is pulled over smoothly?
Cast the value to CHAR(1) in the Oracle query, then use a Derived Column transformation like this:
(DT_BOOL)(OracleField == "Y"?1:0)
Give this column a name like OracleFieldAsBool, and then use it instead of the original column in the rest of your data flow.
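On the Oracle side, the source query could look something like this (table and column names are assumptions):

SELECT CAST(flag_column AS CHAR(1)) AS OracleField
FROM some_table;

The explicit CHAR(1) keeps SSIS from inferring a wider string type, so the derived column expression above gets a predictable input.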
