A strange problem with MariaDB TEXT losing data - JDBC

In some cases, MariaDB loses the data in a TEXT field.
My table is
create table tt (
id int(11) AUTO_INCREMENT PRIMARY KEY,
info text
)
My SQL is
update tt join
(select 'StringValue' as info, 1 as id) a using(id)
set tt.info = a.info
StringValue is a string of more than 65535 bytes.
I can execute this SQL successfully from Java via JDBC, but only a few bytes end up being written.
For example, when StringValue is 65538 bytes long, tt.info contains only 2 bytes after the update.
My MariaDB version is 10.4.7 and innodb_page_size is 16KB.

That seems to be simply this:
UPDATE tt
SET info = 'StringValue'
WHERE id = 1;
But if the string is bigger than TEXT can hold (65535 bytes), the statement will either give a warning or an error, depending on whether strict SQL mode (sql_mode) is enabled. Is that what you are asking about? Does your code check for warnings and errors?
If you change the column definition from TEXT to MEDIUMTEXT, the limit is 16MB instead of 64KB. (Both of those numbers are in bytes, not characters.)
You say it has 2 bytes -- do they happen to be the last 2 bytes?
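A minimal sketch for checking which case applies (it assumes a session where sql_mode can be changed; the warning and error codes shown are the usual MariaDB/MySQL ones):
-- without strict mode, the oversized value is truncated and a warning is raised
SET SESSION sql_mode = '';
UPDATE tt SET info = REPEAT('x', 65538) WHERE id = 1;
SHOW WARNINGS; -- expect something like: Warning 1265 "Data truncated for column 'info'"
-- with strict mode, the same statement fails instead of truncating
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
UPDATE tt SET info = REPEAT('x', 65538) WHERE id = 1; -- expect: ERROR 1406 "Data too long for column 'info'"
-- or raise the column's limit so a 65538-byte value fits
ALTER TABLE tt MODIFY info MEDIUMTEXT;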

Related

How can I decline an INSERT when column is set to NOT NULL

From the documentation, you have to put a NOT NULL modifier in the column definition to mark it as such, just like for other SQL databases.
Consider this table:
CREATE TABLE test (
name String NOT NULL,
isodate DateTime('Europe/Berlin') NOT NULL
) ENGINE = MergeTree()
ORDER BY (isodate)
If I try to insert NULL into both columns (or at least one of them), the expected behaviour is that ClickHouse declines the insertion, since the columns are marked as NOT NULL. Instead, ClickHouse creates a new row where isodate is 1970-01-01 01:00:00 and name is an empty string, which are apparently the default values for those data types.
What do I have to do so that ClickHouse declines such inserts?
My ClickHouse server version is 21.12.3.
In ClickHouse, NULL and NOT NULL do change the behaviour of the data type, but not in the way other relational databases do. The syntax is compatible with other relational databases, but the semantics are not: an Int32 NULL is the same as Nullable(Int32), and an Int32 NOT NULL is the same as Int32. A column defined as NOT NULL does not mean ClickHouse will refuse to insert rows whose values are NULL in the insert statement; it means ClickHouse will use the column's default expression (or, if no default expression is specified in the column definition, the default value for the data type). This behaviour is expected when input_format_null_as_default is enabled, which is the default in ClickHouse 21.12.3.
To make such inserts throw exceptions instead, change the setting input_format_null_as_default to 0. If you use clickhouse-client, you can disable it when connecting:
clickhouse-client -h ... --input_format_null_as_default 0
or after connecting:
clickhouse> SET input_format_null_as_default=0
This way, a statement like insert into test (name, isodate) values (NULL, NULL); will behave more like it would in most relational databases.
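With the setting disabled, the insert is rejected instead of being filled with defaults. A sketch of the expected interaction (the exact exception text can vary between ClickHouse versions):
clickhouse> SET input_format_null_as_default = 0;
clickhouse> INSERT INTO test (name, isodate) VALUES (NULL, NULL);
-- expect an exception along the lines of:
-- DB::Exception: Cannot convert NULL value to non-Nullable type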
ClickHouse's behaviour with NOT NULL constraints is not compatible with other databases.
You can work around it using CHECK constraints: https://clickhouse.com/docs/en/sql-reference/statements/create/table/#constraints
CREATE TABLE test (
name String NOT NULL,
isodate DateTime('Europe/Berlin') NOT NULL,
CONSTRAINT isodate_not_null CHECK isodate <> toDateTime(0, 'Europe/Berlin')
) ENGINE = MergeTree()
ORDER BY (isodate)
insert into test(name) values ('x');
DB::Exception: Constraint `isodate_not_null` for table default.test (f589312a-1592-426a-b589-312a1592b26a) is violated at row 1. Expression: (isodate != toDateTime(0)). Column values: isodate = 0. (VIOLATED_CONSTRAINT)
insert into test values ('x', now());
OK.
The reason is performance: OLAP databases need to ingest data as fast as possible.
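Following the same pattern, a guard for the name column can sit alongside the existing one (a sketch; the constraint name name_not_null is made up):
CREATE TABLE test (
name String NOT NULL,
isodate DateTime('Europe/Berlin') NOT NULL,
CONSTRAINT isodate_not_null CHECK isodate <> toDateTime(0, 'Europe/Berlin'),
CONSTRAINT name_not_null CHECK name <> ''
) ENGINE = MergeTree()
ORDER BY (isodate)
With this, insert into test(isodate) values (now()) should be rejected the same way, since name would default to the empty string.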

How to check the size of input to avoid exceeding the DB column limit

I have an input field on my page with size=8.
In the DB, the corresponding column is VARCHAR2(8).
But if I enter a string of length 8 containing a special (non-ASCII) character in the field, I get the following exception:
ORA-12899: value too large for column xxxx (actual: 10, maximum: 8)
I tried to catch this in the validator by checking myString.getBytes().length, which is also 8.
I know one solution on the DB side: change the column to VARCHAR2(8 CHAR).
Is there another solution, so that I can check this in the controller?
The error is telling you that you've given 10 bytes but the column only allows 8. I am assuming it's bytes because of your use of the Chinese character set. So I believe the column was created as if it were VARCHAR2(8 BYTE).
If you describe the table, you'll see what's going on. Compare that describe with a describe of this one:
create table x (a varchar2(30), b varchar2(30 byte), c varchar2(30 char));
The code you are executing to obtain the number of bytes is almost correct. Instead of:
myString.getBytes().length /* this probably returns 8 */
you need to execute this:
myString.getBytes("UTF-8").length /* this probably returns 10 */
This should help you; it returns the actual size in bytes:
SELECT LENGTHB ('é')
FROM DUAL;
The above returns 2. So, for whatever characters you are using, you can size the column in bytes accordingly, e.g. MY_VARCHAR_FIELD VARCHAR2(2 BYTE).
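To see both length semantics side by side, here is a small sketch (the table name len_demo is made up; it assumes an AL32UTF8 database character set, where 'é' and 'à' take two bytes each):
CREATE TABLE len_demo (
b VARCHAR2(8 BYTE),
c VARCHAR2(8 CHAR)
);
-- 'déjà vu!' is 8 characters but 10 bytes in UTF-8
INSERT INTO len_demo (c) VALUES ('déjà vu!'); -- succeeds: 8 characters fit
INSERT INTO len_demo (b) VALUES ('déjà vu!'); -- fails: ORA-12899 (actual: 10, maximum: 8)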

What can I do to ensure fields longer than column width go to the BAD File?

When creating Oracle external tables, how should I phrase the reject rows clause to ensure that any field which exceeds its column width is rejected and goes to the BADFILE?
This is my current design. I don't want records longer than 20 characters; I want them to go to the BADFILE instead. Yet they still appear when I select * from foobar.
DROP TABLE FOOBAR CASCADE CONSTRAINTS;
CREATE TABLE FOOBAR
(
FOO_MAX20 VARCHAR2(20 CHAR)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY FOOBAR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
BADFILE 'foobar_bad_rec.txt'
DISCARDFILE 'foobar_discard_rec.txt'
LOGFILE 'foobar_logfile.txt'
FIELDS
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
FOO_MAX20 POSITION(1:20)
)
)
LOCATION (foobar:'foobar.txt') )
REJECT LIMIT UNLIMITED
PARALLEL ( DEGREE DEFAULT INSTANCES DEFAULT )
NOMONITORING;
Here is my external file foobar.txt
1234567
1234567890123456
126464843750476074218751012345678901234567890
7135009765625
048669433593
7
527
You can't do this with the reject rows clause, as it only accepts one form (REJECT ROWS WITH ALL NULL FIELDS).
You have a variable-length (delimited) record, but a fixed-length field. Everything after the last position you specify (20 in this case) is seen as filler that you want to ignore. That isn't an error condition; you might have rubbish at the end that isn't relevant to your table. Nothing says characters 21-45 in your third record shouldn't be there, just that you aren't interested in them.
It would be nice if you could discard them with the LOAD WHEN clause, but you don't seem to be able to compare, say, (21:21) to null or an empty string: the former isn't recognised and the latter causes an internal error, which isn't good.
You can make the longer records go to the bad file by forcing a SQL error when the loader tries to put a longer parsed value from the file into the field, by changing:
FOO_MAX20 POSITION(1:20)
to
FOO_MAX20 POSITION(1:21)
Values that are up to 20 characters are still loaded:
select * from foobar;
FOO_MAX20
--------------------
1234567
1234567890123456
7135009765625
048669433593
7
527
6 rows selected
but for anything longer than 20 characters it will try to put 21 characters into the database's 20-character field, which produces this in the log file:
error processing column FOO_MAX20 in row 3 for datafile /path/to/dir/foobar.txt
ORA-12899: value too large for column FOO_MAX20 (actual: 21, maximum: 20)
And the bad file gets that record:
126464843750476074218751012345678901234567890
Have a CHECK constraint on the column to disallow any value exceeding the length.
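If the rows are later copied into an ordinary (non-external) table, such a length check could look like this (a sketch; the table and constraint names are made up):
ALTER TABLE foobar_target
ADD CONSTRAINT foo_max20_len CHECK (LENGTH(foo_max20) <= 20);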

SQL*Loader POSITION

I'm new to SQL*Loader and a bit confused about POSITION.
Let's use the following sample data as reference:
Munising  49862 MI
Shingleton49884 MI
Seney     49883 MI
And here is the load statement:
LOAD DATA
INFILE 'zipcodes.dat'
REPLACE INTO TABLE zipcodes (
city_name POSITION(1) CHAR(10),
zip_code POSITION(*) CHAR(5),
state_abbr POSITION(*+1) CHAR(2)
)
In the load statement, city_name has POSITION(1). How does SQL*Loader know where the field ends? Is CHAR(10) the trick here? Counting the two trailing spaces after 'Munising', it is 10 characters.
Also, why is zip_code declared as CHAR even though it contains nothing but digits?
Thank you.
Yes. When the end position is not specified, it is derived from the datatype. This documentation explains the POSITION clause.
city_name POSITION(1) CHAR(10)
Here the starting position of the data field is 1. The ending position is not specified, but is derived from the datatype: 10.
zip_code POSITION(*) CHAR(5)
Here * specifies that the data field immediately follows the previous field and should be 5 bytes long.
state_abbr POSITION(*+1) CHAR(2)
Here +1 specifies an offset from the previous field. SQL*Loader skips 1 byte and reads the next 2 bytes, as derived from the CHAR(2) datatype.
As to why zip_code is CHAR: the zip code is treated simply as a fixed-length string. You are not going to do any arithmetic on it, so CHAR is appropriate for it.
Also have a look at the SQL*Loader datatypes. In the control file you are telling SQL*Loader how to interpret the data, and this can differ from the table structure. In this example you could also specify INTEGER EXTERNAL for the zip code.
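For instance, the control file from the question could declare the zip code as a number in character form instead (a sketch; for this data the loaded result is the same):
LOAD DATA
INFILE 'zipcodes.dat'
REPLACE INTO TABLE zipcodes (
city_name POSITION(1) CHAR(10),
zip_code POSITION(*) INTEGER EXTERNAL(5),
state_abbr POSITION(*+1) CHAR(2)
)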
You need three text files and one batch file to load the data.
Suppose your file location is 'D:\loaddata' and the input file is 'D:\loaddata\abc.CSV':
1. D:\loaddata\abc.bad -- empty
2. D:\loaddata\abc.log -- empty
3. D:\loaddata\abc.ctl -- the control file below:
OPTIONS ( SKIP=1, DIRECT=TRUE, ERRORS=10000000, ROWS=5000000)
load data
infile 'D:\loaddata\abc.CSV'
TRUNCATE
into table Your_table
(
a_column POSITION (1:7) char,
b_column POSITION (8:10) char,
c_column POSITION (11:12) char,
d_column POSITION (13:13) char,
f_column POSITION (14:20) char
)
D:\loaddata\abc.bat -- for execution:
sqlldr db_user/db_password@your_tns control=D:\loaddata\abc.ctl log=D:\loaddata\abc.log
After double-clicking the 'D:\loaddata\abc.bat' file, your data will be loaded into the desired Oracle table. If anything goes wrong, check your 'D:\loaddata\abc.bad' and 'D:\loaddata\abc.log' files.

How to make a varchar2 field shorter in Oracle?

I have a field in a table that is VARCHAR2, 4000 bytes. There are over 50000 rows, and not all rows have data in this field. Few of the values are over 255 bytes, but some are 4000. To move the table into a new application, I need to shorten the field to 255 bytes.
Is there a SQL statement that will reduce the length to 255? I realize data will be lost; that is part of the cost of the new application. The cut can be arbitrary: just stop the data at 255 bytes no matter the circumstance.
First truncate the existing data:
update b set text2 = substr(text2,1,255);
then alter the table to set the column length to 255:
alter table b MODIFY "TEXT2" varchar2(255 byte);
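Putting the two steps together (a sketch using the table b and column text2 from the answer):
update b set text2 = substr(text2,1,255) where length(text2) > 255;
commit;
alter table b MODIFY "TEXT2" varchar2(255 byte);
Note that SUBSTR counts characters; if the column can contain multi-byte data, the truncated value could still exceed 255 bytes, in which case SUBSTRB (which counts bytes) can be used instead.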
