This may be a feature rather than a bug, so I thought I'd post it on SO instead of MariaDB's Jira.
Yesterday I updated my MariaDB install on Homebrew from 10.1.23 to 10.2.6. All my selects are still working correctly, but in my legacy app I now get a bunch of errors on inserts where the code "assumes" MariaDB will set a default value. For example...
INSERT INTO `table` SET
email = 'some#email.com', -- varchar
phone_number = '', -- bigint
ts = '2017-05-30 23:51:23', -- datetime
some_val = '689728' -- varchar
This code was working fine before, but since I've upgraded I now get the following couple of errors...
Error 1 (is_some_toggle is a tinyint that is not set in the query above; the code assumes MariaDB will just insert a 0)
Field 'is_some_toggle' doesn't have a default value
Error 2 (after I set a default value for is_some_toggle)
Incorrect integer value: '' for column 'phone_number' at row 1
I'm guessing this is a feature, not a bug. I've looked through the changelogs for the 10.2 series and nothing jumps out, but there's a lot, so I could have missed it. I saw a server config for OLD_SQL, but that didn't seem to be what I was looking for. Any thoughts?
macOS Sierra 10.12.5 btw
CREATE TABLE `table` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`email` varchar(200) NOT NULL,
`phone_number` bigint(20) NOT NULL,
`some_val` varchar(6) NOT NULL,
`ts` datetime DEFAULT NULL,
`is_some_toggle` tinyint(1) NOT NULL,
PRIMARY KEY (`id`),
KEY `email_code` (`email`(15),`some_val`),
KEY `phone_number_code` (`phone_number`,`some_val`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1
Looks like going through the changelog VERY slowly worked for me.
sql_mode was updated, as outlined in this article.
Option | Old default value | New default value
sql_mode | NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION | STRICT_TRANS_TABLES, ERROR_FOR_DIVISION_BY_ZERO, NO_AUTO_CREATE_USER, NO_ENGINE_SUBSTITUTION
I changed it back to the old default and I'm good as gold.
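For reference, a hedged sketch of restoring the old default at runtime (the values are copied from the table above; putting the same value under [mysqld] in the server config makes it survive restarts):
-- Restore the pre-10.2.4 default sql_mode (assumes you have the privilege to set globals):
SET GLOBAL sql_mode = 'NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
-- Check what is currently in effect:
SELECT @@GLOBAL.sql_mode, @@SESSION.sql_mode;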
Related
We use a MariaDB database on our projects. I noticed the program was running slowly on my machine, so I ran some tests using HeidiSQL. Updating 2000 rows on my machine takes 9 seconds, but when I open a virtual machine on my computer and run the same query, it takes 0.7 seconds. Other computers in our office show the same result: they all execute the query in around 0.7 seconds.
The table create code is like this.
CREATE TABLE `t_tag` (
`Id` INT(10) NOT NULL AUTO_INCREMENT,
`TagName` VARCHAR(100) NOT NULL DEFAULT ' ' COLLATE 'utf8mb3_general_ci',
`DataType` INT(10) NOT NULL DEFAULT '0',
`DataBlock` INT(10) NOT NULL DEFAULT '0',
`VarType` INT(10) NOT NULL DEFAULT '0',
`ByteAddress` INT(10) NOT NULL DEFAULT '0',
`BitAddress` INT(10) NOT NULL DEFAULT '0',
`PlcId` INT(10) NOT NULL DEFAULT '0',
`TagLogTimerId` INT(10) NOT NULL DEFAULT '0',
`ValueOffset` DOUBLE NOT NULL DEFAULT '1',
`Digit` INT(10) NOT NULL DEFAULT '0',
`ModbusType` INT(10) NOT NULL DEFAULT '0',
`TagValue` VARCHAR(50) NOT NULL DEFAULT '' COLLATE 'utf8mb3_general_ci',
`TagMaxValue` DOUBLE NULL DEFAULT NULL,
`TagMinValue` DOUBLE NULL DEFAULT NULL,
`LastReadTime` DATETIME NOT NULL DEFAULT current_timestamp(),
PRIMARY KEY (`Id`) USING BTREE,
INDEX `FK_t_tag_t_plc_address` (`PlcId`) USING BTREE,
CONSTRAINT `FK_t_tag_t_plc_address` FOREIGN KEY (`PlcId`) REFERENCES `t_plc_address` (`Id`) ON UPDATE RESTRICT ON DELETE RESTRICT
)
COLLATE='utf8mb3_general_ci'
ENGINE=InnoDB
AUTO_INCREMENT=4006
;
and the update queries are like this.
UPDATE t_tag SET TagValue = '91' WHERE Id = 1;
UPDATE t_tag SET TagValue = '90' WHERE Id = 2;
UPDATE t_tag SET TagValue = '89' WHERE Id = 3;
UPDATE t_tag SET TagValue = '88' WHERE Id = 4;
(and so on, for a total of 2000 rows)
All the machines have the same config files and the same MariaDB version. I removed and clean-installed MariaDB several times, but the result didn't change.
If anyone has encountered a problem like this or knows how to solve it, that would be a big help. Otherwise my last resort is reinstalling Windows, but it is too much work to set up the whole development environment again.
I have tried installing and uninstalling MariaDB.
I have tried different versions of MariaDB.
I tried the same query on different machines.
I tried the same query on my computer, but in a virtual machine.
I expected the query to take a similar time on all machines, but it takes 9 seconds on my computer and 0.7 seconds on the others.
When running many queries in HeidiSQL, it can make a big difference to execute them in one (or a few) larger batches. Executing them one by one adds significant overhead. Try it out via the dropdown menu beside the blue "Execute" button.
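If changing how HeidiSQL sends the statements is not an option, a related server-side trick (a hedged sketch, not what the answer above describes) is to wrap the single-row updates in one transaction so they share one commit instead of paying the autocommit cost 2000 times:
-- With autocommit on, each UPDATE flushes its own commit; one transaction
-- pays that cost once for the whole batch.
START TRANSACTION;
UPDATE t_tag SET TagValue = '91' WHERE Id = 1;
UPDATE t_tag SET TagValue = '90' WHERE Id = 2;
-- ... remaining rows ...
COMMIT;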
I need some help!
I have an API built in Laravel 5.8, and I am upgrading the platform to 6.2.
After all the changes to the configuration files and some PHP scripts, all my tests which run the migrations on SQLite are broken.
The following error is displayed:
SQLSTATE[HY000]: General error: 1 no such collation sequence: utf8_general_ci (SQL: CREATE TABLE events (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, event VARCHAR(255) NOT NULL COLLATE BINARY, description CLOB DEFAULT NULL COLLATE BINARY, invitation CLOB DEFAULT NULL COLLATE BINARY, sale CLOB DEFAULT NULL COLLATE BINARY, information CLOB DEFAULT NULL COLLATE BINARY, city VARCHAR(255) NOT NULL COLLATE BINARY,
location VARCHAR(255) NOT NULL COLLATE BINARY, date_start DATE NOT NULL, time_start TIME NOT NULL, date_end DATE DEFAULT NULL, flyer CLOB DEFAULT NULL COLLATE BINARY, atv BOOLEAN DEFAULT '1', created_at DATETIME DEFAULT NULL, updated_at DATETIME DEFAULT NULL, location_map VARCHAR(255) DEFAULT NULL COLLATE utf8_general_ci --IFRAME with the event location.
My intention is to update to 7.x after resolving all issues on 6.2.
In the migrations, it was enough to remove the collation option from the fields. Example:
Initially this way:
$table->string('field', 255)->charset('utf8')->collation('utf8_general_ci')->change();
The solution was as follows:
$table->string('field', 255)->charset('utf8')->change();
Removing this option means the migration no longer forces a collation that SQLite does not accept.
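A quick way to see the underlying SQLite behaviour (a sketch for the sqlite3 shell; the table names are made up): SQLite only ships the BINARY, NOCASE and RTRIM collations, so a MySQL collation name triggers exactly the error above.
CREATE TABLE demo_ok (city VARCHAR(255) NOT NULL);                          -- accepted
CREATE TABLE demo_fail (city VARCHAR(255) NOT NULL COLLATE utf8_general_ci); -- Error: no such collation sequence: utf8_general_ci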
I'm getting an error when I use DataMapper's auto_upgrade! method to add fields in an SQLite3 db based on the properties defined in my code:
DataObjects::SyntaxError at /history
Cannot add a NOT NULL column with default value NULL
An example of an offending line would be:
property :fieldname, Text, required: true
The error goes away if I (a) remove the line, (b) remove required: true, (c) change true to false, or (d) add a default value.
SQLite does not require a default value to be specified for every field, so this problem is definitely with DataMapper, not SQLite.
How can I get around this, so DataMapper can specify that a field is required without assuming that not specifying a default value automatically means the default should be NULL?
(If you want to know more about why I'm designing this way: there will be another client process accessing SQLite and logging data into the SQLite database, while a Sinatra app will be pulling data out of the db for display in a browser. I want the database therefore to enforce the field requirements, but DM's auto_upgrade is a very convenient way to be able to upgrade the database as needed—so long as it doesn't foul things up in the process.)
You are requiring the field, hence it cannot be NULL. These are simple table properties.
When DataMapper runs auto_upgrade!, it runs SQL commands like the following against the database.
CREATE TABLE Test
(
P_Id int NOT NULL,
lname varchar(255) NOT NULL,
fname varchar(255),
Address varchar(255),
City varchar(255)
)
And doing something like this won't work.
CREATE TABLE Test
(
P_Id int NOT NULL,
lname varchar(255) NOT NULL DEFAULT NULL,
fname varchar(255),
Address varchar(255),
City varchar(255)
)
I tested it in MySQL and this is the error.
02:52:43  CREATE TABLE TestTest ( P_Id int NOT NULL, lname varchar(255) NOT NULL DEFAULT NULL, fname varchar(255), Address varchar(255), City varchar(255) )
Error Code: 1067. Invalid default value for 'lname'  0.062 sec
Correction: SQLite does allow you to create a table with such properties. However, trying to insert anything into that table then throws an error, whether the field is NULL or not. So DataMapper might be doing some sanitization for you before even creating the table.
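A small sketch of that behaviour in the sqlite3 shell (the table name is made up):
-- SQLite accepts the contradictory definition at CREATE time...
CREATE TABLE t (fieldname TEXT NOT NULL DEFAULT NULL);
-- ...but fails as soon as an insert has to fall back on that default:
INSERT INTO t DEFAULT VALUES; -- Error: NOT NULL constraint failed: t.fieldname (wording varies by SQLite version)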
It is not clear to me if you are creating a new table or modifying an existing one.
If you have an existing table and are trying to alter it with a column defined as NOT NULL, then you must provide a default value so that the existing rows can be migrated. The RDBMS needs to know what to put in the field for pre-existing rows.
If you are creating a new table, then the property definition you have should be fine.
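For the existing-table case, a hedged sketch of the kind of DDL involved (the table and column names are made up): SQLite rejects the first form outright with exactly the error from the question, while the second form gives it something to fill the column with.
-- Rejected: "Cannot add a NOT NULL column with default value NULL"
-- ALTER TABLE entries ADD COLUMN fieldname TEXT NOT NULL;
-- Accepted, because rows get a non-NULL default:
ALTER TABLE entries ADD COLUMN fieldname TEXT NOT NULL DEFAULT '';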
For one of my custom modules I have the following SQL setup script:
$installer = $this;
$installer->startSetup();
$installer->run("
CREATE TABLE {$this->getTable('wholesale')} (
`wholesale_id` int(11) unsigned NOT NULL auto_increment,
`title` varchar(255) NOT NULL default '',
`percentage` float(5,3) NOT NULL,
`status` smallint(6) NOT NULL default '0',
`created_time` datetime NULL,
`update_time` datetime NULL,
PRIMARY KEY (`wholesale_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
");
$installer->endSetup();
The value passed to the percentage float(5,3) NOT NULL column is saved, but the decimal part is always 00. For example, if I save the value 10.23 it is stored in the database as 10.00.
Please help.
I ran into the same issue. After clearing the cache with "Flush Magento cache", it started to save properly. In my case, I had altered the column and it was being cached.
I truncated the table, inserted one record manually, and it worked for me. I am not sure why this happened, but it worked.
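If it happens again, a quick sanity check (assuming $this->getTable('wholesale') resolves to a table literally named wholesale; adjust for any table prefix) is to look at the column definition the installer actually created:
SHOW COLUMNS FROM wholesale LIKE 'percentage';
-- The Type column should read float(5,3); anything else means the old
-- definition is still in place (or cached metadata is being used).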
I am trying to do automatic DB migration, and I am using dbdeploy for this.
I followed the steps in this link: http://blog.codeborne.com/2012/09/using-dbdeploy-in-gradle.html
I created the change log table as:
CREATE TABLE changelog (
change_number INTEGER NOT NULL,
delta_set VARCHAR(10) NOT NULL,
start_dt TIMESTAMP NOT NULL,
complete_dt TIMESTAMP NULL,
applied_by VARCHAR(100) NOT NULL,
description VARCHAR(500) NOT NULL
);
ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number, delta_set);
The updateDatabase task in build.gradle is:
task updateDatabase << {
ant.dbdeploy(driver: dbDriver,
url: dbUrl,
userid: dbUsername,
password: dbPassword,
dir: './src/main/resources/deploy/sql',
dbms: 'mysql'
)
}
When I do gradle updateDatabase, I get com.dbdeploy.exceptions.SchemaVersionTrackingException: Could not update change log because: Field 'delta_set' doesn't have a default value.
I tried assigning 'Main' as the default value in the changelog table creation script:
delta_set VARCHAR(10) NOT NULL DEFAULT 'Main'
But, I still got the same exception.
I also removed the delta_set attribute, but I got the same exception. This really confused me.
I am completely new to data migration, so any help regarding this error and how I should go about it will be deeply appreciated.
Thank you in advance.
The DBDeploy documentation isn't very explicit about this, but the changelog table format changed between versions 2.X and 3.X (see the upgrade instructions). I suspect that you are using DBDeploy 3.X.
You need to:
Remove the old changelog table:
DROP TABLE changelog;
Recreate it using the new format:
CREATE TABLE changelog (
change_number INTEGER NOT NULL,
complete_dt TIMESTAMP NOT NULL,
applied_by VARCHAR(100) NOT NULL,
description VARCHAR(500) NOT NULL
);
ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number);
After this, everything should work.