I have a Vertica 8.0 database. I created a schema WAREHOUSE with a table that includes a column of type LONG VARCHAR. Now when I try to execute a SELECT, for example
SELECT * FROM WAREHOUSE.ALL_EVENTS a
WHERE
a.original_data like '%d963%'
it returns the following error:
SQL Error [4286] [42883]: [Vertica][VJDBC](4286) ERROR: Operator does not exist: long varchar ~~ unknown
com.vertica.util.ServerException: [Vertica][VJDBC](4286) ERROR: Operator does not exist: long varchar ~~ unknown
In Oracle I used the dbms_lob package for CLOB fields.
Does Vertica have a similar package for LONG VARCHAR types?
How can I do a LIKE on a LONG VARCHAR column?
As explained in the fine manual, the (SQL standard) LIKE predicate in Vertica accepts the CHAR, VARCHAR, BINARY and VARBINARY data types.
To perform LIKE-style operations on LONG VARCHAR columns you can use REGEXP_LIKE (no need to install or use any special package). This way:
SELECT *
FROM WAREHOUSE.ALL_EVENTS a
WHERE REGEXP_LIKE(a.original_data, 'd963');
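Note that the second argument of REGEXP_LIKE is a regular expression, not a LIKE pattern, so the wildcards translate. A rough sketch against the same (assumed) column:
-- LIKE '%d963%'  ->  plain substring match (REGEXP_LIKE is unanchored by default)
WHERE REGEXP_LIKE(a.original_data, 'd963')
-- LIKE 'd963%'   ->  anchor the pattern at the start of the value
WHERE REGEXP_LIKE(a.original_data, '^d963')
-- LIKE '%d_63%'  ->  the single-character wildcard "_" becomes "."
WHERE REGEXP_LIKE(a.original_data, 'd.63')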
That's all.
Related
In an old application (no source code available), long texts with over 40,000 characters are stored in an Oracle database in a table column of type LONG RAW. We now want to transfer the texts into another Oracle database and display their content. Somehow the old application was capable of doing so. However, we always run into a 4000 byte limit...
How can we export/import and display the content in a human-readable VARCHAR2 (or multiple ones)?
All the conversion functions we tried, for example TO_LOB or TO_CLOB, do not seem to work:
ORA-00932: inconsistent datatypes: expected - got LONG BINARY
ORA-00932: inconsistent datatypes: expected CHAR got LONG BINARY
From the documentation:
TO_LOB converts LONG or LONG RAW values in the column long_column to LOB values. You can apply this function only to a LONG or LONG RAW column, and only in the select list of a subquery in an INSERT statement.
So you can't do:
select to_lob(old_column) from old_table;
ORA-00932: inconsistent datatypes: expected - got LONG BINARY
But you can move the data into a BLOB column in another table:
insert into new_table(new_column)
select to_lob(old_column) from old_table;
db<>fiddle
Once you have the data as a BLOB you can manage it as a binary value, or if it represents text then you can convert it to a CLOB - using the appropriate character set - with the dbms_lob package.
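For example, a minimal sketch of that conversion, assuming the new_table/new_column names from above and that the bytes are text encoded in the database character set:
declare
  l_blob         blob;
  l_clob         clob;
  l_dest_offset  integer := 1;
  l_src_offset   integer := 1;
  l_lang_context integer := dbms_lob.default_lang_ctx;
  l_warning      integer;
begin
  -- fetch one of the migrated BLOB values
  select new_column into l_blob from new_table where rownum = 1;
  dbms_lob.createtemporary(l_clob, true);
  -- convert the binary data to character data using the database character set
  dbms_lob.converttoclob(
    dest_lob     => l_clob,
    src_blob     => l_blob,
    amount       => dbms_lob.lobmaxsize,
    dest_offset  => l_dest_offset,
    src_offset   => l_src_offset,
    blob_csid    => dbms_lob.default_csid,
    lang_context => l_lang_context,
    warning      => l_warning);
  -- print the first 4000 characters as a quick check
  dbms_output.put_line(dbms_lob.substr(l_clob, 4000, 1));
end;
/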
When developers use Laravel to create queries against an MSSQL database that uses the default SQL collation, and the tables use varchar (ASCII) columns for the vast majority of their columns, the included Laravel ORM generates queries whose bound parameters are of the nvarchar data type, i.e. Unicode/UTF-8. The values passed in therefore have the wrong data type for the WHERE clauses (predicate evaluation) against the varchar data in the columns. This mismatch in the data types results in an implicit conversion in the execution plan, rendering the indexes non-searchable: SQL Server falls back to a seek/scan on the clustered index, which is its default behavior when no index can be used. This may not be an issue on small data sets, but in tables with billions of rows and monthly partitioning it can mean SQL scanning hundreds of GB of data, or scanning every partition to find the data, which would take milliseconds rather than minutes if the index were used. This issue is not specific to the Laravel Eloquent ORM; it crops up in many ORMs that assume a UTF-8 character set while the database was designed with English as the only language and therefore uses ASCII varchar data types.
I found some similar questions (Laravel, SQL varchar) on Stack Overflow that referenced the SQL grammar file https://github.com/laravel/framework/blob/9.x/src/Illuminate/Database/Schema/Grammars/SqlServerGrammar.php and suggested changing the data type declarations in it from nvarchar to varchar, but they seemed to be talking about migrations, not the queries generated by the Eloquent ORM itself. Whether changing the SQL grammar file would also result in queries being generated with varchar bound parameters is not apparent from those answers, nor from the ones I found on Laracasts. What I am looking for is a definitive answer on how to handle the data type mismatches generated by the ORM.
Here is how the code looks. The code generated by Eloquent is shown below; the parameters @P1, ... are all passed to MSSQL as nvarchar(4000), which is why a CONVERT function (Geohash7 = CONVERT(VARCHAR(7), @P1)) had to be added to the WHERE clause using a raw SQL overload.
declare @p1 int
set @p1=1
exec sp_prepexec @p1 output,N'@P1 nvarchar(4000),@P2 nvarchar(4000),@P3 nvarchar(4000),@P4 nvarchar(4000) ...
..
..
,N'select GMTDate date, SUM(Scans) scans from DB1.dbo.Geohash with (nolock) where (Geohash7 = CONVERT(VARCHAR(7), @P1) or Geohash7 = CONVERT(VARCHAR(7), @P2) or Geohash7 = CONVERT(VARCHAR(7), @P3) or Geohash7 = CONVERT(VARCHAR(7), @P4) ...
..
..
,N'9vffnth',N'9vffnsu',N'9vffnt5',N'9vffnsg'....
If the CONVERT is not added via a raw SQL overload, the code passed in, straight from Eloquent, looks like this:
declare @p1 int
set @p1=1
exec sp_prepexec @p1 output,N'@P1 nvarchar(4000),@P2 nvarchar(4000),@P3 nvarchar(4000),@P4 nvarchar(4000) ...
..
..
,N'select GMTDate date, SUM(Scans) scans from DB1.dbo.Geohash with (nolock) where (Geohash7 = @P1 or Geohash7 = @P2 or Geohash7 = @P3 or Geohash7 = @P4 ...
..
..
,N'9vffnth',N'9vffnsu',N'9vffnt5',N'9vffnsg'....
The SQL execution plan generated by the first set of code will use an index to seek/scan for records. The execution plan for the second set will not use an index: SQL Server does an implicit conversion on the nvarchar(4000) parameter, which rules out any index use, so it always does a clustered index seek/scan, which is not an efficient or viable plan when the table has billions of rows. The goal is for Eloquent not to pass nvarchar(4000) parameters but varchar(XXX), or even varchar(4000), so that SQL Server can use the indexes built on the underlying varchar table data.
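To see the difference in isolation, a hypothetical repro (assuming Geohash7 is an indexed varchar column) is to run the same statement with the parameter declared once as nvarchar and once as varchar and compare the plans:
-- parameter typed as nvarchar(4000): the varchar column side is implicitly
-- converted to nvarchar, so the index on Geohash7 cannot be used for a seek
exec sp_executesql
    N'select GMTDate, SUM(Scans) from DB1.dbo.Geohash where Geohash7 = @P1 group by GMTDate',
    N'@P1 nvarchar(4000)',
    @P1 = N'9vffnth';

-- parameter typed as varchar(7): it matches the column's data type,
-- so the index on Geohash7 can be seeked
exec sp_executesql
    N'select GMTDate, SUM(Scans) from DB1.dbo.Geohash where Geohash7 = @P1 group by GMTDate',
    N'@P1 varchar(7)',
    @P1 = '9vffnth';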
I have a column defined in DB2 as
"column name" VARBINARY(2000) default Binary(X'20')
I need the equivalent column data type and default value for Oracle.
Use the RAW or LONG RAW datatype. However, Oracle recommends the BLOB and BFILE datatypes for large amounts of binary data.
Check this link for further information
Use the Oracle type RAW or LONG RAW, and use an equivalent default value.
Example:
,mycol raw(2000) default hextoraw('20')
The data type replacement for VARBINARY in Oracle is RAW (as per the other answers).
But note that the default value Binary(X'20') represents a space:
the hexadecimal value for a space is 20, which is why it is written as X'20'.
select rawtohex(' ') from dual; will give you 20.
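Putting the two together, a sketch of the Oracle definition (the table name here is a placeholder):
create table my_table (
  "column name" raw(2000) default hextoraw('20')
);

-- sanity check: the default byte decodes back to a space
select utl_raw.cast_to_varchar2(hextoraw('20')) from dual;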
I am comparing two databases which have similar schemas. Both should support Unicode characters.
When I describe the same table in both databases, DB 1 shows all the varchar fields with CHAR length semantics, e.g. VARCHAR2(20 CHAR), but DB 2 shows them without, e.g. VARCHAR2(20),
so the second schema supports only one byte per character.
When I compare nls_database_parameters and v$nls_parameters in both databases, they are all the same.
Could someone let me know what the difference may be here?
Have you checked NLS_LENGTH_SEMANTICS? You can set the default to BYTE or CHAR for CHAR/VARCHAR2 types.
If these parameters are the same on both databases, then maybe the table was created by explicitly specifying it that way.
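For example (a sketch; the table t and its columns are hypothetical):
-- check the current default length semantics on each database
select value from v$nls_parameters where parameter = 'NLS_LENGTH_SEMANTICS';

-- the semantics can also be stated explicitly per column, overriding the default
create table t (
  byte_col varchar2(20 byte),   -- up to 20 bytes
  char_col varchar2(20 char)    -- up to 20 characters, regardless of byte length
);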
I have created an Oracle table with an indexed varchar2 column, called 'ID'.
Software I'm using reads this table, but instead of running queries like
select * from table_name where ID='something'
it does (notice the extra "N" before the value)
select * from table_name where ID=N'something'
which is causing some kind of character conversion.
The problem is that, while the 1st query is performing a range scan, the 2nd is performing a full table scan.
Since I cannot modify the queries that this software is running, which data type should I use, instead of varchar2, so that the conversion performed by the 'N' function does not imply a full table scan?
The prefix N before the string is used to specify an NVARCHAR2 or NCHAR datatype.
When comparing NVARCHAR2 and VARCHAR2 values, Oracle converts the VARCHAR2 side to NVARCHAR2. Here the converted side is your indexed ID column, so the index cannot be used and you get a FULL SCAN.
Use an NVARCHAR2 column instead of a VARCHAR2 in your table if you can't modify the query.
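A minimal sketch (the table name, column sizes, and index name are placeholders):
create table table_name (
  id   nvarchar2(50),
  data varchar2(100)
);
create index table_name_id_ix on table_name (id);

-- the N'...' literal now matches the column's datatype, so no implicit
-- conversion happens on the column and an index range scan is possible
select * from table_name where id = N'something';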