MariaDB MAX_JOIN_SIZE error without index hint - mariadb-10.4

There is a MariaDB 10.4.14 server with max_join_size = 300M and an InnoDB table coin with ~150,000 records.
A simple enough query produces a MAX_JOIN_SIZE error:
SELECT * FROM coin z -- USE INDEX(PRIMARY)
WHERE z.id IN (5510, 5511, 5512 /* more item IDs up to 250 */)
AND z.currency_id IN (8, 227)
AND z.distribution_id IN (1, 2);
Error Code: 1104
The SELECT would examine more than MAX_JOIN_SIZE rows; check your WHERE and use SET SQL_BIG_SELECTS=1 or SET MAX_JOIN_SIZE=# if the SELECT is okay
But the same query with an index hint works fine. It does not matter which index is in the hint ["PRIMARY", "currency_id_idx", "distribution_id"] or even issue_date_idx.
It even works with an empty hint, USE INDEX(), i.e. with no indexes at all.
What could be wrong here? Why doesn't the query work without a hint?
By the way, this query works well on MariaDB 10.3.24 and doesn't work on 10.5.5
OPTIMIZE TABLE coin; -- didn't help
Table DDL and query EXPLAIN
CREATE TABLE `coin` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`name` varchar(90) NOT NULL DEFAULT '',
`country_id` smallint(5) unsigned NOT NULL DEFAULT 0,
`currency_id` smallint(5) unsigned NOT NULL DEFAULT 0,
`distribution_id` tinyint(3) unsigned NOT NULL,
`issue_date` date NOT NULL DEFAULT '0000-00-00',
-- and others; 29 fields in total
PRIMARY KEY (`id`),
KEY `issue_date_idx` (`issue_date`),
KEY `currency_id_idx` (`currency_id`),
KEY `distribution_id` (`distribution_id`),
-- and others; 21 indexes in total on other fields, none of which use currency_id or distribution_id
CONSTRAINT `coin_ibfk_4` FOREIGN KEY (`currency_id`) REFERENCES `currency` (`id`),
CONSTRAINT `coin_ibfk_11` FOREIGN KEY (`distribution_id`) REFERENCES `distribution` (`id`),
-- and others; 13 CONSTRAINTs in total
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
EXPLAIN FORMAT=JSON -- for SELECT above without index hint
{
  "query_block": {
    "select_id": 1,
    "table": {
      "table_name": "z",
      "access_type": "range",
      "possible_keys": ["PRIMARY", "currency_id_idx", "distribution_id"],
      "key": "PRIMARY",
      "key_length": "3",
      "used_key_parts": ["id"],
      "rows": 50,
      "filtered": 100,
      "attached_condition": "z.`id` in (5510,5511,5512, /* ... total 100 */) and z.currency_id in (8,227) and z.distribution_id in (1,2)"
    }
  }
}
The above SELECT works without an index hint with the maximum possible MAX_JOIN_SIZE value and fails with the maximum value minus 1:
SET MAX_JOIN_SIZE=18446744073709551615 -- this works
SET MAX_JOIN_SIZE=18446744073709551614 -- this doesn't work

The MAX_JOIN_SIZE error can be fixed by restoring the previous default value:
optimizer_use_condition_selectivity = 1 # new default is 4 since 10.4.1
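A quick way to check this fix without editing the server configuration is to set the variable for the current session only. This is just a sketch; the SQL_BIG_SELECTS line is the alternative workaround suggested by the error message itself:

SET SESSION optimizer_use_condition_selectivity = 1;  -- pre-10.4.1 default, session scope only
SELECT * FROM coin z
WHERE z.id IN (5510, 5511, 5512 /* more item IDs up to 250 */)
  AND z.currency_id IN (8, 227)
  AND z.distribution_id IN (1, 2);

-- alternatively, lift the estimated-row limit for this session only
SET SESSION SQL_BIG_SELECTS = 1;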

Related

How to select only those rows which are greater than modified time using Spring Data JPA

For example, I have created a table:
CREATE DATABASE es_db;
USE es_db;
DROP TABLE IF EXISTS es_table;
CREATE TABLE es_table (
id BIGINT(20) UNSIGNED NOT NULL,
PRIMARY KEY (id),
UNIQUE KEY unique_id (id),
client_name VARCHAR(32) NOT NULL,
modification_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
insertion_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
Now assume I have to select the rows whose modification time is greater than a time I give as input. Consider this query, for example:
SELECT *, UNIX_TIMESTAMP(modification_time) AS unix_ts_in_secs
FROM es_table
WHERE (UNIX_TIMESTAMP(modification_time) > :sql_last_modifiedvalue AND modification_time < NOW())
ORDER BY modification_time ASC
Is there a way to translate this into a native query? I can achieve the same with JdbcTemplate, but would like to know if it is possible with a native query.

Missing right parenthesis error while creating a table with SQL code generated in Vertabelo

I want to create a database for my school project. Some tables were created without an error, but when I wanted to create a more complex table, I got this error:
ORA-00907: missing right parenthesis
The code is as follows (the table and attribute names are in Romanian):
CREATE TABLE Curse (
id_cursa int NOT NULL,
id_tura int NULL,
moment_inceput_cursa timestamp NULL,
moment_sfarsit_cursa timestamp NULL,
adresa_initiala text NULL,
GPS_punct_start text NULL,
adresa_destinatie text NULL,
destinatie_GPS text NULL,
stare_cursa char NOT NULL DEFAULT 0,
modalitate_plata int NULL,
pret decimal NULL,
CONSTRAINT Curse_pk PRIMARY KEY (id_cursa)
);
The actual fault is that the DEFAULT clause must come before the NOT NULL clause. So this is the correct syntax:
stare_cursa char DEFAULT 0 NOT NULL
Beyond that, you need to change the text datatype to something like varchar2(1000) or whatever length you need.
A few remarks:
the TEXT datatype is invalid in Oracle; I used VARCHAR2(100), which could be larger, or a CLOB
the correct order is DEFAULT 0 NOT NULL (not NOT NULL DEFAULT 0)
although it is not an error, you don't have to specify that NULL values are allowed
CREATE TABLE curse(
  id_cursa             INT NOT NULL,
  id_tura              INT NULL,
  moment_inceput_cursa TIMESTAMP NULL,
  moment_sfarsit_cursa TIMESTAMP NULL,
  adresa_initiala      VARCHAR2(100) NULL,
  gps_punct_start      VARCHAR2(100) NULL,
  adresa_destinatie    VARCHAR2(100) NULL,
  destinatie_gps       VARCHAR2(100) NULL,
  stare_cursa          CHAR DEFAULT 0 NOT NULL,
  modalitate_plata     INT NULL,
  pret                 DECIMAL,
  CONSTRAINT curse_pk PRIMARY KEY(id_cursa)
);

Table created.
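As a quick sanity check (the values below are made up for illustration only), the DEFAULT now kicks in when stare_cursa is omitted:

INSERT INTO curse (id_cursa) VALUES (1);
SELECT id_cursa, stare_cursa FROM curse;
-- stare_cursa contains '0', supplied by the DEFAULT clause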

H2DB - executeUpdate() returns 0 or 1 on DELETE depending on table definition

I wonder if someone could explain the behaviour of the H2 JDBC driver when deleting an entry from a rather simple table.
When using the following table definition, the method executeUpdate() for a PreparedStatement instance returns 1 if one entry has been deleted (expected behaviour).
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL
);
When adding a PRIMARY KEY constraint on the CODE column, the same method returns 0 although the entry gets deleted successfully (behaviour not expected).
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("CODE")
);
Most interestingly, when adding an INT-typed column to serve as PRIMARY KEY, the return value is 1 again:
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"ID" INT NOT NULL AUTO_INCREMENT,
"CODE" VARCHAR(5) NOT NULL,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("ID")
);
Is someone able to reproduce this behaviour and perhaps explain it to me?
I have included the current version of H2 via Maven.
EDIT:
If I then add a UNIQUE constraint to the CODE column, the return value is 0 again:
CREATE TABLE IF NOT EXISTS "MATERIAL" (
"ID" INT NOT NULL AUTO_INCREMENT,
"CODE" VARCHAR(5) NOT NULL UNIQUE,
"NAME" VARCHAR(100) NOT NULL,
PRIMARY KEY ("CODE")
);
EDIT 2:
The statement used to delete an entry looks like the following (used in a PreparedStatement):
DELETE FROM MATERIAL WHERE CODE = ?
SOLUTION:
I'm sorry to have bothered you with this. Actually, there was no problem with the table definition or the JDBC driver. It was my test data: from earlier testing I had tried to INSERT two entries having the same CODE. It was a multi-row insert, and this obviously failed whenever CODE was the primary key or had a UNIQUE index. Thus, in those cases executeUpdate() could only return 0 because there was no data in the table at all.
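For illustration, a multi-row insert of the kind described (with invented values) fails as a whole once CODE must be unique, so the table stays empty and the later DELETE reports 0 rows:

-- fails entirely when CODE is the primary key or carries a UNIQUE constraint
INSERT INTO MATERIAL ("CODE", "NAME") VALUES
('AB123', 'Steel'),
('AB123', 'Aluminium');  -- duplicate CODE -> unique index violation, nothing is inserted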

optimize an inner join between two multi-million row tables

I'm new to Postgres and even newer to understanding how EXPLAIN works. I have a typical query below; I just replace the date:
explain
select account_id,
security_id,
market_value_date,
sum(market_value) market_value
from market_value_history mvh
inner join holding_cust hc on hc.id = mvh.owning_object_id
where
hc.account_id = 24766
and market_value_date = '2015-07-02'
and mvh.created_by = 'HoldingLoad'
group by account_id, security_id, market_value_date
order by security_id, market_value_date;
Attached is a screenshot of the EXPLAIN output.
The holding_cust table has 2 million rows and the market_value_history table has 163 million rows.
Below are the table definitions and indexes for market_value_history and holding_cust:
I'd appreciate any advice you may be able to give me on tuning this query.
CREATE TABLE public.market_value_history
(
id integer NOT NULL DEFAULT nextval('market_value_id_seq'::regclass),
market_value numeric(18,6) NOT NULL,
market_value_date date,
holding_type character varying(25) NOT NULL,
owning_object_type character varying(25) NOT NULL,
owning_object_id integer NOT NULL,
created_by character varying(50) NOT NULL,
created_dt timestamp without time zone NOT NULL,
last_modified_dt timestamp without time zone NOT NULL,
CONSTRAINT market_value_history_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.market_value_history
OWNER TO postgres;
-- Index: public.ix_market_value_history_id
-- DROP INDEX public.ix_market_value_history_id;
CREATE INDEX ix_market_value_history_id
ON public.market_value_history
USING btree
(owning_object_type COLLATE pg_catalog."default", owning_object_id);
-- Index: public.ix_market_value_history_object_type_date
-- DROP INDEX public.ix_market_value_history_object_type_date;
CREATE UNIQUE INDEX ix_market_value_history_object_type_date
ON public.market_value_history
USING btree
(owning_object_type COLLATE pg_catalog."default", owning_object_id, holding_type COLLATE pg_catalog."default", market_value_date);
CREATE TABLE public.holding_cust
(
id integer NOT NULL DEFAULT nextval('holding_cust_id_seq'::regclass),
account_id integer NOT NULL,
security_id integer NOT NULL,
subaccount_type integer,
trade_date date,
purchase_date date,
quantity numeric(18,6),
net_cost numeric(18,2),
adjusted_net_cost numeric(18,2),
open_date date,
close_date date,
created_by character varying(50) NOT NULL,
created_dt timestamp without time zone NOT NULL,
last_modified_dt timestamp without time zone NOT NULL,
CONSTRAINT holding_cust_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE public.holding_cust
OWNER TO postgres;
-- Index: public.ix_holding_cust_account_id
-- DROP INDEX public.ix_holding_cust_account_id;
CREATE INDEX ix_holding_cust_account_id
ON public.holding_cust
USING btree
(account_id);
-- Index: public.ix_holding_cust_acctid_secid_asofdt
-- DROP INDEX public.ix_holding_cust_acctid_secid_asofdt;
CREATE INDEX ix_holding_cust_acctid_secid_asofdt
ON public.holding_cust
USING btree
(account_id, security_id, trade_date DESC);
-- Index: public.ix_holding_cust_security_id
-- DROP INDEX public.ix_holding_cust_security_id;
CREATE INDEX ix_holding_cust_security_id
ON public.holding_cust
USING btree
(security_id);
-- Index: public.ix_holding_cust_trade_date
-- DROP INDEX public.ix_holding_cust_trade_date;
CREATE INDEX ix_holding_cust_trade_date
ON public.holding_cust
USING btree
(trade_date);
Two things:
As Dmitry pointed out, you should look at creating an index on the market_value_date field (see the sketch after these two points). It's possible that after that you get a completely different query plan, which may or may not bring up other bottlenecks, but it should certainly remove this seq scan.
Minor (since I doubt it affects performance), but secondly, if you aren't enforcing the field length by design, you may want to change the created_by field to TEXT. As can be seen in the plan, all created_by values are being cast to TEXT for this query.
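A sketch of the first point (the index name is arbitrary, and whether to also include created_by is a judgement call to verify with EXPLAIN afterwards):

CREATE INDEX ix_market_value_history_mv_date
ON public.market_value_history
USING btree
(market_value_date);

-- possible composite variant, if created_by is selective enough:
-- CREATE INDEX ix_market_value_history_mv_date_created_by
-- ON public.market_value_history (market_value_date, created_by);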

How to create a check constraint between two columns in SQL?

I am trying to create a basic pay (BP) table with:
CREATE TABLE bp (
bpid VARCHAR(5),
FOREIGN KEY (bpid) REFERENCES designation(desigid),
upperlimit DECIMAL(10,2) NOT NULL,
lowerlimit DECIMAL(10,2) NOT NULL,
increment DECIMAL(10,2) NOT NULL
CONSTRAINT llvalid CHECK (upperlimit > lowerlimit)
);
As you can see near the end, I want to check that upperlimit is greater than lowerlimit. How can I do that?
It might (and probably does) depend on the database you use.
Comparing with the Oracle syntax (e.g. here: http://www.techonthenet.com/oracle/check.php), what you are missing might be a ',' between NOT NULL and CONSTRAINT.
The problem is that you have defined it as a column-level constraint, but it references other columns. You must define the constraint at the table level.
ALTER TABLE bp
ADD CONSTRAINT CK_limit CHECK ( upperlimit > lowerlimit)
Here's the proper SQL query:
CREATE TABLE bp (bpid VARCHAR(5),
FOREIGN KEY (bpid) REFERENCES designation(desigid),
upperlimit DECIMAL(10,2) NOT NULL,
lowerlimit DECIMAL(10,2) NOT NULL,
increment DECIMAL(10,2) NOT NULL,
CONSTRAINT llvalid CHECK (upperlimit > lowerlimit));
Note the comma after NOT NULL, just before the CONSTRAINT clause.
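A quick test of the constraint (values are purely illustrative, and bpid is left NULL to keep the foreign key out of the picture):

-- rejected by llvalid because upperlimit is not greater than lowerlimit
INSERT INTO bp (bpid, upperlimit, lowerlimit, increment)
VALUES (NULL, 1000.00, 2000.00, 50.00);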
