SQLite SELECT queries are slowing down after bulk INSERT - Windows

I have two connections open to an SQLite database. On one connection, a task runs simple SELECT queries against a single table, using only the PK in the WHERE clause. This query is prepared once and executed repeatedly. On the other connection, INSERT queries are run periodically against the same table, about 2000 inserts per run.
What I observed is that the SELECT queries are fast (<1 ms), but after the second connection has executed the INSERTs, the SELECT queries slow down to ~20 ms.
When I restart the application, using the same database file, the SELECT queries are fast again.
This only happens on Windows (MSVC 2019, vc142); on Linux (GCC 11) the INSERTs have no effect on the SELECTs.
What could cause this slowdown, and is there a workaround?
Extra info:
Each connection runs the following statements right after connecting:
PRAGMA synchronous = NORMAL;
PRAGMA journal_mode = WAL;
PRAGMA busy_timeout = 1000;
Versions of SQLite used: 3.36.0 (Windows), 3.37.2 (Linux)
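For reference, a minimal sketch of the access pattern described above (the table and column names are hypothetical):
-- Connection 1 (reader): prepared once, executed repeatedly with different keys
SELECT payload FROM items WHERE id = ?;
-- Connection 2 (writer): ~2000 rows per run, wrapped in one transaction
BEGIN;
INSERT INTO items (id, payload) VALUES (?, ?);
-- ... repeated for each of the ~2000 rows ...
COMMIT;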

Related

T-SQL: why is running multiple SQL statements in one batch slower without GO?

I have come across a very interesting problem (at least for me).
When I run the following SQL:
SELECT count(*) AS [count]
FROM [dbo].[contract_v] AS [contract_v]
WHERE 1 = 0;
SELECT *
FROM [dbo].[contract] AS [contract]
LEFT JOIN ([dbo].[contract_accepted_garbage_type] AS [garbageTypes->contract_accepted_garbage_type]
INNER JOIN [dbo].[garbage_type] AS [garbageTypes] ON [garbageTypes].[id] = [garbageTypes->contract_accepted_garbage_type].[garbage_type_id])
ON [contract].[id] = [garbageTypes->contract_accepted_garbage_type].[contract_id]
WHERE [contract].[id] IN (125018);
Execution takes 21 seconds.
However, when I add a GO statement as follows:
SELECT count(*) AS [count]
FROM [dbo].[contract_v] AS [contract_v]
WHERE 1 = 0;
GO
SELECT *
FROM [dbo].[contract] AS [contract]
LEFT JOIN ([dbo].[contract_accepted_garbage_type] AS [garbageTypes->contract_accepted_garbage_type]
INNER JOIN [dbo].[garbage_type] AS [garbageTypes] ON [garbageTypes].[id] = [garbageTypes->contract_accepted_garbage_type].[garbage_type_id])
ON [contract].[id] = [garbageTypes->contract_accepted_garbage_type].[contract_id]
WHERE [contract].[id] IN (125018);
It takes only 2 seconds.
The view used in the first SQL statement is based on the table queried in the second statement.
Could you please explain this behaviour to me? I know that the GO statement makes the database create a separate execution plan for every batch. I have checked the execution plans, and the actual steps are identical.
Thank you!
The GO keyword separates execution batches. If the underlying tables are the same in both queries, and the queries are executed in the same batch, both have to be executed within the same transaction context. This ensures that the underlying data in both tables is the same during both executions.
If you use separate batches (a GO statement in between), you cannot guarantee consistency: rows could theoretically be modified between the executions.
If you don't care about the chance of the data changing between queries, then by all means use GO for the performance gain. If you do care, consider it a dangerous move.
SQL Server applications can send multiple Transact-SQL statements to an instance of SQL Server for execution as a batch. The statements in the batch are then compiled into a single execution plan. Programmers executing ad hoc statements in the SQL Server utilities, or building scripts of Transact-SQL statements to run through the SQL Server utilities, use GO to signal the end of a batch.
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/sql-server-utilities-statements-go?view=sql-server-ver15

Set a timeout value for ALTER TABLE ADD COLUMN in Oracle (DDL_LOCK_TIMEOUT does not work)

Question
How can I set a timeout value for nonblocking DDL (ALTER TABLE ... ADD COLUMN) in Oracle, so that if some DML locks the table for a long time (several hours), my DDL fails fast instead of waiting for hours? (We expect Oracle to raise an error like ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired to interrupt our DDL.)
P.S.: DDL_LOCK_TIMEOUT does not work here (see 'What I tried' below).
Background
I'm working on a big Oracle database (Oracle Database 19c). A legacy application runs an aggregation job every hour to calculate the data from the past hour, such as AVG and SUM of the counters. Production has 40 CPUs and 200 GB+ of memory, and the aggregation job normally runs for around 30 minutes. In some cases, though, such as after a maintenance break, the aggregation jobs are delayed; more data has to be handled in the next run, which can keep the job running for a few hours.
Those legacy applications are out of my control. It's not possible to change the aggregation job.
Edition-Based Redefinition is not used.
My job is to update the database tables (new counters were added). We use ALTER TABLE to add new columns to the existing tables. But in some cases the aggregation job locks a table for hours, which makes my script hang for hours as well. That makes the customer unhappy, so I want my script to fail fast.
What I tried
After a long time googling, DDL_LOCK_TIMEOUT seemed to be the simplest solution.
However, our tests showed that DDL_LOCK_TIMEOUT does not work in our case. After googling for a long time again, we found that the Oracle documentation clearly states:
The DDL_LOCK_TIMEOUT parameter affects blocking DDL statements (but not nonblocking DDL statements)
ALTER TABLE ... ADD COLUMN is exactly a 'nonblocking DDL', as listed in the List of Nonblocking DDLs.
Expectation
When some DML locks the table for one hour, e.g. SELECT * FROM MY_TABLE FOR UPDATE with a commit one hour later, I want my DDL, e.g. ALTER TABLE MY_TABLE ADD (COL_A NUMBER), to time out after 10 minutes instead of waiting for the full hour.
Other Solutions
1
One solution I have in mind is to first issue LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600 to acquire the lock up front (a sketch is shown below). But before going with that, I want to ask whether there is a simpler solution, like DDL_LOCK_TIMEOUT, where only one parameter needs to be set.
2
According to the Oracle docs, enabling Supplemental Logging downgrades nonblocking DDL to blocking DDL. But Supplemental Logging is a database-level configuration, and I do not have the permission to make such a change.
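For reference, a minimal sketch of option 1, assuming the up-front exclusive lock is acceptable (MY_TABLE and COL_A are the names from the question):
-- Wait up to 600 seconds for the lock; ORA-00054 is raised if it cannot
-- be acquired before the timeout expires.
LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600;
-- Note: the ALTER TABLE performs an implicit commit, which releases the
-- lock, so there is a brief window in which another session could get in.
ALTER TABLE MY_TABLE ADD (COL_A NUMBER);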

How Oracle GATHER_SCHEMA_STATS works

One of our systems performs quite a bit of database activity in terms of INSERT/UPDATE/DELETE statements against various tables. Because of this, the statistics become stale, and this is reflected in overall performance.
We want to create a scheduled job that periodically invokes DBMS_STATS.GATHER_SCHEMA_STATS. Because we don't want the stats gathering itself to impact the system's processing even more, we are thinking of collecting statistics quite frequently using the GATHER STALE option:
DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'MY_SCHEMA', OPTIONS =>'GATHER STALE')
This executes almost instantly, but running the statement below before and after the stats gathering seems to bring back the same records with the same values:
SELECT * FROM user_tab_modifications WHERE inserts + updates + deletes > 0;
The very short execution time, and the fact that the user_tab_modifications content stays the same, make me question whether OPTIONS => 'GATHER STALE' actually does what we expect it to do. On the other hand, if I run the block below before and after statistics gathering, I can see that the tables reported as stale before are no longer reported as stale after:
DECLARE
  stale dbms_stats.objecttab;
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MY_SCHEMA', OPTIONS => 'LIST STALE', objlist => stale);
  -- print the objects currently considered stale
  FOR i IN 1 .. stale.count
  LOOP
    dbms_output.put_line(stale(i).objName);
  END LOOP;
END;
/
On the other hand, if, say, MY_TABLE is one of the tables that appears in user_tab_modifications with inserts + updates + deletes > 0 and I run the statement below, I can see MY_TABLE no longer being reported as having changes:
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'MY_SCHEMA', tabname => 'MY_TABLE');
So my questions are:
Is my approach correct? Can I trust that I am getting fresh stats just by running OPTIONS => 'GATHER STALE', or should I manually collect stats for all tables that come back with a reasonable number of inserts, updates, and deletes?
When does user_tab_modifications actually get reset? Obviously the GATHER STALE option does not seem to do it.
We are using Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
I got the following info from the Oracle docs.
You should enable monitoring if you use GATHER_DATABASE_STATS or GATHER_SCHEMA_STATS with the GATHER AUTO or GATHER STALE options.
This view USER_TAB_MODIFICATIONS is populated only for tables with the MONITORING attribute. It is intended for statistics collection over a long period of time. For performance reasons, the Oracle Database does not populate this view immediately when the actual modifications occur. Run the FLUSH_DATABASE_MONITORING_INFO procedure in the DBMS_STATS PL/SQL package to populate this view with the latest information. The ANALYZE_ANY system privilege is required to run this procedure.
Hope this helps you to identify which of your assumptions are incorrect and understand the correct usage of "GATHER STALE".
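Putting the quoted advice together, a hedged sketch of the adjusted job (schema name taken from the question; FLUSH_DATABASE_MONITORING_INFO requires the ANALYZE ANY privilege):
-- Flush the in-memory DML monitoring counters first, so that staleness
-- is evaluated against current modification counts.
EXECUTE DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME => 'MY_SCHEMA', OPTIONS => 'GATHER STALE');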

SAS connection to Oracle hung up for 2 hours

In SAS we have a library which is actually an Oracle schema, and today I encountered a strange event when trying to query a table in this library.
A regular SAS SQL query:
proc sql;
delete from table where id=123;
quit;
It hung for two hours, whereas it usually takes a few seconds:
NOTE: PROCEDURE SQL used (Total process time):
real time 2:00:33.49
cpu time 0.03 seconds
While this operation was in progress I tried to delete a nearby row in Oracle SQL Developer, but that delete request hung as well. However, deleting a row that was not near these rows did not cause any problems. So how can I find out the possible reason? I guess it was a sort of deadlock.
It sounds like someone has locked a row that your session is trying to delete. You should be able to spot this by querying v$session:
select sid, schemaname, osuser, terminal, program, event
from v$session
where type != 'BACKGROUND';
and checking whether your session has an event of "enq: TX - row lock contention" (or similar). If so, you'll have to work out who holds the blocking lock (if you have access to Toad's session browser this is easy to do, but Google should throw up something that can help; or, if your database is Oracle 11.2, there's a view, v$session_blockers, that ought to pinpoint the blocking session), and then get them to either commit or roll back their transaction.
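As an alternative sketch, assuming Oracle 10g or later, v$session also has a blocking_session column that points at the blocker directly:
-- sessions currently waiting on a blocker, and who is blocking them
select sid, serial#, blocking_session, event
from v$session
where blocking_session is not null;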

Running a SQL script on a slow network

I have a SQL script which creates my application's tables, sequences, triggers, etc., and inserts around 10k rows of data.
I am on a slow network, and when I run this script from my local machine it takes a long time to finish.
I am wondering whether there is any support in SQL*Plus (or SQL Developer) for running the script on the server, so that the entire script is first transported to the server, executed there, and then, say, a log file of the execution is returned.
No, there is not. There are some things you can do that might make the data load go faster, such as using SQL*Loader if you are doing individual inserts, and increasing your commit interval. However, I would have to see the code to really help very much.
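A hedged sketch of the commit-interval idea as a PL/SQL block (the table name and batch size are hypothetical):
begin
  -- insert 10k rows, committing every 1,000 rows instead of per row
  for i in 1 .. 10000 loop
    insert into some_table values(i);
    if mod(i, 1000) = 0 then
      commit;
    end if;
  end loop;
  commit;  -- pick up the final partial batch
end;
/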
If you have access to the remote server on which the database is hosted, and you are able to execute SQL*Plus on that server, sure you can:
Log in or SSH (depending on the OS - Windows or *nix) to the server.
Create your SQL script (myscript.sql) there.
Log in to SQL*Plus and execute the script with the command @myscript.sql.
There is rarely a need to run these kinds of scripts on the server. A few simple changes to batch commands can significantly improve performance. All of the changes below combine multiple statements, reducing the number of network round-trips. Some of them also decrease parsing time, which will significantly improve performance even if the scripts run on the server.
Combine INSERTs into a single statement
Replace individual inserts:
insert into some_table values(1);
insert into some_table values(2);
...
with combined inserts like this:
insert into some_table
select 1 from dual union all
select 2 from dual union all
...
Use PL/SQL blocks
Replace individual DDL:
create sequence sequence1;
create sequence sequence2;
with a PL/SQL block:
begin
  execute immediate 'create sequence sequence1';
  execute immediate 'create sequence sequence2';
end;
/
Use inline constraints
Combine DDL as much as possible. For example, use this statement:
create table some_table(a number not null);
Instead of this:
create table some_table(a number);
alter table some_table modify a not null;
