I have a SQL Server database set up to accept insert transactions from a Visual Studio application that runs multiple threads. The transactions are now getting deadlocked, and my workaround is that when the Visual Studio 2010 code detects a timeout, it simply reattempts the insert.
Looking at the text logs I have set up, this is happening far too often and causing performance issues.
Some online resources suggest finding the offending transaction and killing it, but since my application depends on the results obtained from the database, that may not be an option. Are there any suggestions for how to deal with this? I am using the Parallel TaskFactory in Visual Studio 2010, so there are at least 1000 threads running at any given time.
Some code to view: my insert code.
Any ideas are really appreciated.
SQL table schema:
Table PostFeed
id int
entryid varchar(100)
feed varchar(MAX)
pubdate varchar(50)
authorName nvarchar(100)
authorId nvarchar(100)
age nvarchar(50)
locale nvarchar(50)
pic nvarchar(50)
searchterm nvarchar(100)
dateadded datetime
PK - entryid + searchterm
Stored procedure for insert: it does a whole bunch of inserts and relies on the primary key constraint for dupe checking.
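(The proc itself isn't reproduced here, but as a minimal sketch of that pattern, with a hypothetical name and parameter list of my own and columns taken from the complete table create below, it might look like this; the UPDLOCK/HOLDLOCK variant filters dupes with an explicit check instead of waiting for the constraint to throw:)
CREATE PROCEDURE dbo.InsertPostFeed          -- hypothetical name and parameters
    @entryId    varchar(100),
    @searchterm nvarchar(400),
    @feed       varchar(max),
    @pubDate    varchar(50),
    @authorName nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    -- Insert only if the key is not already present; UPDLOCK/HOLDLOCK keep two
    -- threads from both passing the existence check for the same key at once.
    INSERT INTO dbo.PostFeed (entryId, searchterm, feed, pubDate, authorName, dateadded)
    SELECT @entryId, @searchterm, @feed, @pubDate, @authorName, GETDATE()
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.PostFeed WITH (UPDLOCK, HOLDLOCK)
                      WHERE entryId = @entryId
                        AND searchterm = @searchterm);
END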
Complete table create:
USE [Feeds]
GO
/****** Object: Table [dbo].[PostFeed] Script Date: 09/21/2011 11:21:38 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PostFeed](
[id] [int] IDENTITY(1,1) NOT NULL,
[entryId] [varchar](100) NOT NULL,
[feed] [varchar](max) NULL,
[entryContent] [nvarchar](max) NULL,
[pubDate] [varchar](50) NOT NULL,
[authorName] [nvarchar](100) NOT NULL,
[authorId] [nvarchar](100) NULL,
[age] [nvarchar](50) NULL,
[sex] [nvarchar](50) NULL,
[locale] [nvarchar](50) NULL,
[pic] [nvarchar](100) NULL,
[fanPage] [nvarchar](400) NULL,
[faceTitle] [nvarchar](100) NULL,
[feedtype] [varchar](50) NULL,
[searchterm] [nvarchar](400) NOT NULL,
[clientId] [nvarchar](100) NULL,
[dateadded] [datetime] NULL,
[matchfound] [nvarchar](50) NULL,
[hashField] AS ([dbo].[getMd5Hash]([entryId])),
CONSTRAINT [PK_Feed] PRIMARY KEY CLUSTERED
(
[entryId] ASC,
[searchterm] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
I tried exporting the deadlock graph but could not get it to work, so here are a few lines from the trace:
Lock:Timeout 60468 sa 0X01 247 G01 sa 2011-09-21 16:00:44.750 1:4557807 3463314605 .Net SqlClient Data Provider 0 0XEF8B45000100000000000000070006 22164 7 postFeeds 0 2011-09-21 16:00:44.750 1 G02 0 - LOCK 8 - IX 0 72057594075152384 1 - TRANSACTION 0 6 - PAGE
Lock:Timeout 60469 sa 0X01 478 G01 sa 2011-09-21 16:00:44.887 (7bf23fc490ce) 3463299315 .Net SqlClient Data Provider 0 0X3802000000017BF23FC490CE070007 17900 7 postFeeds 0 2011-09-21 16:00:44.887 1 G02 0 - LOCK 5 - X 0 72057594075152384 1 - TRANSACTION 0 7 - KEY
Lock:Timeout 60470 sa 0X01 803 G01 sa 2011-09-21 16:00:44.887 (379349b72c77) 3463296982 .Net SqlClient Data Provider 0 0X380200000001379349B72C77070007 17900 7 postFeeds 0 2011-09-21 16:00:44.887 1 G02 0 - LOCK 5 - X 0 72057594075152384 1 - TRANSACTION 0 7 - KEY
Lock:Timeout 60471 tdbuser 0X048D73EF643661429B907E6106F78358 93 G01 tdbuser 2011-09-21 16:02:41.333 1:14386936 3463346220 .Net SqlClient Data Provider 0
I've run into the same situation inserting a boatload of records with parallel threads. To get around my deadlock issue I specified a table lock upon insert:
INSERT dbo.MyTable WITH (TABLOCKX) (Column1)
VALUES (SomeValue);
I tried using lower-level locks but I still got deadlocks. The TABLOCKX slowed the throughput down some, but it was still a ton faster than serial inserts, and no more deadlocks.
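If a full table lock proves too coarse, another pattern (a sketch, not part of the answer above; MyTable and Column1 are the same placeholders) is to retry inside the batch when it is picked as the deadlock victim, which SQL Server reports as error 1205:
DECLARE @attempts int = 0;
WHILE @attempts < 3
BEGIN
    BEGIN TRY
        INSERT dbo.MyTable (Column1) VALUES (SomeValue);
        BREAK;  -- insert succeeded, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1205   -- this batch was chosen as the deadlock victim
            SET @attempts = @attempts + 1;
        ELSE
        BEGIN
            ;THROW;  -- anything else: re-raise (THROW needs SQL Server 2012+)
        END
    END CATCH
END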
Related
We use a MariaDB database on our projects. I noticed the program was running slowly on my machine, so I ran some tests using HeidiSQL. Updating 2000 rows on my machine takes 9 seconds, but when I open a virtual machine on my computer and run the same query, it takes 0.7 seconds. The other computers in our office also show the same result: they all execute the query in around 0.7 seconds.
The table create code is like this:
CREATE TABLE `t_tag` (
`Id` INT(10) NOT NULL AUTO_INCREMENT,
`TagName` VARCHAR(100) NOT NULL DEFAULT ' ' COLLATE 'utf8mb3_general_ci',
`DataType` INT(10) NOT NULL DEFAULT '0',
`DataBlock` INT(10) NOT NULL DEFAULT '0',
`VarType` INT(10) NOT NULL DEFAULT '0',
`ByteAddress` INT(10) NOT NULL DEFAULT '0',
`BitAddress` INT(10) NOT NULL DEFAULT '0',
`PlcId` INT(10) NOT NULL DEFAULT '0',
`TagLogTimerId` INT(10) NOT NULL DEFAULT '0',
`ValueOffset` DOUBLE NOT NULL DEFAULT '1',
`Digit` INT(10) NOT NULL DEFAULT '0',
`ModbusType` INT(10) NOT NULL DEFAULT '0',
`TagValue` VARCHAR(50) NOT NULL DEFAULT '' COLLATE 'utf8mb3_general_ci',
`TagMaxValue` DOUBLE NULL DEFAULT NULL,
`TagMinValue` DOUBLE NULL DEFAULT NULL,
`LastReadTime` DATETIME NOT NULL DEFAULT current_timestamp(),
PRIMARY KEY (`Id`) USING BTREE,
INDEX `FK_t_tag_t_plc_address` (`PlcId`) USING BTREE,
CONSTRAINT `FK_t_tag_t_plc_address` FOREIGN KEY (`PlcId`) REFERENCES `t_plc_address` (`Id`) ON UPDATE RESTRICT ON DELETE RESTRICT
)
COLLATE='utf8mb3_general_ci'
ENGINE=InnoDB
AUTO_INCREMENT=4006
;
and the update query is like this:
UPDATE t_tag SET TagValue = '91' WHERE Id = 1;
UPDATE t_tag SET TagValue = '90' WHERE Id = 2;
UPDATE t_tag SET TagValue = '89' WHERE Id = 3;
UPDATE t_tag SET TagValue = '88' WHERE Id = 4;
... and so on, 2000 rows in total.
All the machines have the same config files and the same MariaDB version. I removed and clean-installed MariaDB several times, but there was no change in the result.
If anyone has encountered a problem like this or knows how to solve it, it would be a big help; otherwise the last resort is reinstalling Windows, but it is too much work to set up the whole development environment again.
I have tried installing and uninstalling MariaDB.
I have tried different versions of MariaDB.
I tried the same query on different machines.
I tried the same query on my computer, but in a virtual machine.
I expected the query to take a similar time on all machines, but it takes 9 seconds on my computer and 0.7 seconds on the other computers.
When running many queries in HeidiSQL, it can make a big difference to execute them in one (or a few) larger batches. Executing them one by one adds significant overhead. Try it out via the dropdown menu beside the blue "execute" button.
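The same idea can be expressed at the SQL level (a sketch): wrapping the statements in one transaction means a single flush to disk at COMMIT instead of one per statement under autocommit, which is often exactly the machine-dependent cost being measured here.
START TRANSACTION;
UPDATE t_tag SET TagValue = '91' WHERE Id = 1;
UPDATE t_tag SET TagValue = '90' WHERE Id = 2;
-- ... the remaining rows ...
COMMIT;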
For an application based on Spring Boot and relying on a PostgreSQL 9.6 database, I'm using Spring Batch to schedule a few operations which must take place every n seconds (customizable, but usually ranging between a few seconds and a few minutes); as a result, by the end of the day a lot of jobs have been performed by the system and a lot of information has been persisted by Spring Batch.
The fact is that I'm not really interested in keeping a history of those jobs, so at the beginning I used the in-memory version of Spring Batch to avoid persisting such (to me) useless information.
However, for configurations with a small n running in environments with low resources, this approach led to performance issues, so I decided to try the database way.
Unfortunately, those tables grow quite fast, and I would like to implement a cleanup procedure to get rid of all data older than, for instance, a day.
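(As a sketch of what that dated cleanup might look like, using PostgreSQL interval syntax; CREATE_TIME lives on BATCH_JOB_EXECUTION, per the schema below:)
-- Delete step contexts belonging to executions older than one day; the other
-- five tables would get the same treatment, children before parents.
DELETE FROM BATCH_STEP_EXECUTION_CONTEXT
WHERE STEP_EXECUTION_ID IN (
    SELECT se.STEP_EXECUTION_ID
    FROM BATCH_STEP_EXECUTION se
    JOIN BATCH_JOB_EXECUTION je ON je.JOB_EXECUTION_ID = se.JOB_EXECUTION_ID
    WHERE je.CREATE_TIME < now() - interval '1 day');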
Here comes the pain: even if nothing is locking those tables (the main application is down and no one is interacting with the database), it takes forever to clean them, and I really cannot understand why.
Spring Batch (4.0.1) provides the following PG script to generate those tables:
CREATE TABLE BATCH_JOB_INSTANCE (
JOB_INSTANCE_ID BIGINT NOT NULL PRIMARY KEY ,
VERSION BIGINT ,
JOB_NAME VARCHAR(100) NOT NULL,
JOB_KEY VARCHAR(32) NOT NULL,
constraint JOB_INST_UN unique (JOB_NAME, JOB_KEY)
) ;
CREATE TABLE BATCH_JOB_EXECUTION (
JOB_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY ,
VERSION BIGINT ,
JOB_INSTANCE_ID BIGINT NOT NULL,
CREATE_TIME TIMESTAMP NOT NULL,
START_TIME TIMESTAMP DEFAULT NULL ,
END_TIME TIMESTAMP DEFAULT NULL ,
STATUS VARCHAR(10) ,
EXIT_CODE VARCHAR(2500) ,
EXIT_MESSAGE VARCHAR(2500) ,
LAST_UPDATED TIMESTAMP,
JOB_CONFIGURATION_LOCATION VARCHAR(2500) NULL,
constraint JOB_INST_EXEC_FK foreign key (JOB_INSTANCE_ID)
references BATCH_JOB_INSTANCE(JOB_INSTANCE_ID)
) ;
CREATE TABLE BATCH_JOB_EXECUTION_PARAMS (
JOB_EXECUTION_ID BIGINT NOT NULL ,
TYPE_CD VARCHAR(6) NOT NULL ,
KEY_NAME VARCHAR(100) NOT NULL ,
STRING_VAL VARCHAR(250) ,
DATE_VAL TIMESTAMP DEFAULT NULL ,
LONG_VAL BIGINT ,
DOUBLE_VAL DOUBLE PRECISION ,
IDENTIFYING CHAR(1) NOT NULL ,
constraint JOB_EXEC_PARAMS_FK foreign key (JOB_EXECUTION_ID)
references BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
) ;
CREATE TABLE BATCH_STEP_EXECUTION (
STEP_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY ,
VERSION BIGINT NOT NULL,
STEP_NAME VARCHAR(100) NOT NULL,
JOB_EXECUTION_ID BIGINT NOT NULL,
START_TIME TIMESTAMP NOT NULL ,
END_TIME TIMESTAMP DEFAULT NULL ,
STATUS VARCHAR(10) ,
COMMIT_COUNT BIGINT ,
READ_COUNT BIGINT ,
FILTER_COUNT BIGINT ,
WRITE_COUNT BIGINT ,
READ_SKIP_COUNT BIGINT ,
WRITE_SKIP_COUNT BIGINT ,
PROCESS_SKIP_COUNT BIGINT ,
ROLLBACK_COUNT BIGINT ,
EXIT_CODE VARCHAR(2500) ,
EXIT_MESSAGE VARCHAR(2500) ,
LAST_UPDATED TIMESTAMP,
constraint JOB_EXEC_STEP_FK foreign key (JOB_EXECUTION_ID)
references BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
) ;
CREATE TABLE BATCH_STEP_EXECUTION_CONTEXT (
STEP_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
SHORT_CONTEXT VARCHAR(2500) NOT NULL,
SERIALIZED_CONTEXT TEXT ,
constraint STEP_EXEC_CTX_FK foreign key (STEP_EXECUTION_ID)
references BATCH_STEP_EXECUTION(STEP_EXECUTION_ID)
) ;
CREATE TABLE BATCH_JOB_EXECUTION_CONTEXT (
JOB_EXECUTION_ID BIGINT NOT NULL PRIMARY KEY,
SHORT_CONTEXT VARCHAR(2500) NOT NULL,
SERIALIZED_CONTEXT TEXT ,
constraint JOB_EXEC_CTX_FK foreign key (JOB_EXECUTION_ID)
references BATCH_JOB_EXECUTION(JOB_EXECUTION_ID)
) ;
CREATE SEQUENCE BATCH_STEP_EXECUTION_SEQ MAXVALUE 9223372036854775807 NO CYCLE;
CREATE SEQUENCE BATCH_JOB_EXECUTION_SEQ MAXVALUE 9223372036854775807 NO CYCLE;
CREATE SEQUENCE BATCH_JOB_SEQ MAXVALUE 9223372036854775807 NO CYCLE;
Respecting the reference precedence, I try to clean up those tables by executing the following deletions:
delete from BATCH_STEP_EXECUTION_CONTEXT;
delete from BATCH_STEP_EXECUTION;
delete from BATCH_JOB_EXECUTION_CONTEXT;
delete from BATCH_JOB_EXECUTION_PARAMS;
delete from BATCH_JOB_EXECUTION;
delete from BATCH_JOB_INSTANCE;
Everything is fine with the first four tables but, as soon as I reach BATCH_JOB_EXECUTION, it takes something like 30 minutes to remove a few hundred thousand rows.
Even worse, after deleting everything from the first five tables, the last one (which now references nothing) takes even longer.
Can you see a reason why this simple operation takes so long to complete? Of course it has to check for constraint violations, but it seems unreasonably slow nonetheless.
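(One likely explanation, though it's an assumption since no EXPLAIN output is shown: for every parent row deleted, PostgreSQL must verify that no child row still references it, and the script above declares the foreign keys without indexing the referencing columns, so each check is a sequential scan of the child table. Indexing those columns makes the checks cheap; the index names here are mine:)
CREATE INDEX IDX_JOB_EXEC_INSTANCE ON BATCH_JOB_EXECUTION (JOB_INSTANCE_ID);
CREATE INDEX IDX_JOB_EXEC_PARAMS   ON BATCH_JOB_EXECUTION_PARAMS (JOB_EXECUTION_ID);
CREATE INDEX IDX_STEP_EXEC_JOB     ON BATCH_STEP_EXECUTION (JOB_EXECUTION_ID);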
Plus, is there a better way to use Spring Batch without wasting disk space on unnecessary job information?
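(And if the history really can be discarded wholesale, a blunter sketch: TRUNCATE skips the per-row constraint checks entirely and, unlike DELETE, returns the disk space immediately:)
TRUNCATE TABLE BATCH_STEP_EXECUTION_CONTEXT,
               BATCH_STEP_EXECUTION,
               BATCH_JOB_EXECUTION_CONTEXT,
               BATCH_JOB_EXECUTION_PARAMS,
               BATCH_JOB_EXECUTION,
               BATCH_JOB_INSTANCE;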
I want to create a database for my school project. Some tables were created without an error, but when I wanted to create a more complex table, I got this error:
ORA-00907: missing right parenthesis
The code follows (the names of the table and attributes are in Romanian):
CREATE TABLE Curse (
id_cursa int NOT NULL,
id_tura int NULL,
moment_inceput_cursa timestamp NULL,
moment_sfarsit_cursa timestamp NULL,
adresa_initiala text NULL,
GPS_punct_start text NULL,
adresa_destinatie text NULL,
destinatie_GPS text NULL,
stare_cursa char NOT NULL DEFAULT 0,
modalitate_plata int NULL,
pret decimal NULL,
CONSTRAINT Curse_pk PRIMARY KEY (id_cursa)
);
The actual fault is that the DEFAULT clause must come before the NOT NULL clause, so this is the correct syntax:
stare_cursa char DEFAULT 0 NOT NULL
Beyond that, you need to change the TEXT datatype to something like VARCHAR2(1000), or whatever length you need.
A few objections:
The TEXT datatype is invalid; I used VARCHAR2(100), but it could be larger, or a CLOB.
The correct order is DEFAULT 0 NOT NULL (not NOT NULL DEFAULT 0).
Although it is not an error, you don't have to specify that NULL values are allowed.
SQL> CREATE TABLE curse(
2 id_cursa INT NOT NULL,
3 id_tura INT NULL,
4 moment_inceput_cursa TIMESTAMP NULL,
5 moment_sfarsit_cursa TIMESTAMP NULL,
6 adresa_initiala VARCHAR2(100) NULL,
7 gps_punct_start VARCHAR2(100) NULL,
8 adresa_destinatie VARCHAR2(100) NULL,
9 destinatie_gps VARCHAR2(100) NULL,
10 stare_cursa CHAR DEFAULT 0 NOT NULL,
11 modalitate_plata INT NULL,
12 pret DECIMAL,
13 CONSTRAINT curse_pk PRIMARY KEY(id_cursa)
14 );
Table created.
SQL>
I'm trying to retrieve the foreign keys of a given table with JDBC metadata.
For that, I'm using the getImportedKeys method.
For my table 'cash_mgt_strategy', it gives this in the ResultSet:
PKTABLE_CAT : 'HAWK'
PKTABLE_SCHEM : 'dbo'
PKTABLE_NAME : 'fx_execution_strategy_policy'
PKCOLUMN_NAME : 'fx_execution_strategy_policy_id'
FKTABLE_CAT : 'HAWK'
FKTABLE_SCHEM : 'dbo'
FKTABLE_NAME : 'cash_mgt_strategy'
FKCOLUMN_NAME : 'fx_est_execution_strategy_policy'
KEY_SEQ : '1'
UPDATE_RULE : '1'
DELETE_RULE : '1'
FK_NAME : 'fk_fx_est_execution_strategy_policy'
PK_NAME : 'cash_mgt_s_1283127861'
DEFERRABILITY : '7'
The problem is that the FKCOLUMN_NAME, 'fx_est_execution_strategy_policy', is not a real column of my table; it seems to be truncated (the "_id" at the end is missing).
When using an official Sybase SQL client (Sybase Workspace), displaying the DDL of the table gives, for this constraint / foreign key:
ALTER TABLE dbo.cash_mgt_strategy ADD CONSTRAINT fk_fx_est_execution_strategy_policy FOREIGN KEY (fx_est_execution_strategy_policy_id)
REFERENCES HAWK.dbo.fx_execution_strategy_policy (fx_execution_strategy_policy_id)
So I'm wondering how to retrieve the full FKCOLUMN_NAME.
Note that I'm using jConnect 6.0.
I've tested with jConnect 7.0; same problem.
Thanks
You haven't provided your ASE version so I'm going to assume the following:
dataserver was running ASE 12.x at some point (descriptor names limited to 30 characters)
dataserver was upgraded to ASE 15.x/16.x (descriptor names extended to 255 characters)
DBA failed to upgrade/update the sp_jdbc* procs after the upgrade to ASE 15.x/16.x (hence the old ASE 12.x version of the procs - descriptors limited to 30 characters - are still in use in the dataserver)
If the above is true then sp_version should show the older versions of the jdbc procs running in the dataserver.
The (obvious) solution would be to have the DBA load the latest version of the jdbc stored procs (typically found under ${SYBASE}/jConnect*/sp).
NOTE: Probably wouldn't hurt to have the DBA review the output from sp_version to see if there are any other upgrade scripts that need to be loaded (eg, installmodel, installsecurity, installcommit, etc).
OK, so I've done some searching on my DB server and I've found the code of the stored proc sp_jdbc_importkey. In this code one can see:
create table #jfkey_res(
PKTABLE_CAT varchar(32) null,
PKTABLE_SCHEM varchar(32) null,
PKTABLE_NAME varchar(257) null,
PKCOLUMN_NAME varchar(257) null,
FKTABLE_CAT varchar(32) null,
FKTABLE_SCHEM varchar(32) null,
FKTABLE_NAME varchar(257) null,
FKCOLUMN_NAME varchar(257) null,
KEY_SEQ smallint,
UPDATE_RULE smallint,
DELETE_RULE smallint,
FK_NAME varchar(257),
PK_NAME varchar(257) null)
create table #jpkeys(seq int, keys varchar(32) null)
create table #jfkeys(seq int, keys varchar(32) null)
The temporary tables #jpkeys and #jfkeys, used to store the column names (for the PK and FK), are typed as varchar(32) instead of 257!!
Now I need to work out how to patch / update these stored procs.
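A sketch of that patch, assuming the proc is simply edited and reloaded with the rest of its body unchanged: widen the scratch tables to match the result table.
-- Widened so column names are no longer truncated at 32 characters:
create table #jpkeys(seq int, keys varchar(257) null)
create table #jfkeys(seq int, keys varchar(257) null)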
First, some basics:
Java 6
OJDBC6
Oracle 10.2.0.4 (also the same result in 11g version)
I am seeing a SQL statement behave differently when executed from Java with the OJDBC6 client than from the tool SQL Gate, which probably uses a native/OCI driver. For some reason the optimizer chooses to use a hash join for the statement executed from Java, but not for the other.
Here is the table:
CREATE TABLE DPOWNERA.XXX_CHIP (
xxxCH_ID NUMBER(22) NOT NULL,
xxxCHP_ID NUMBER(22) NOT NULL,
xxxSP_ID NUMBER(22) NULL,
xxxCU_ID NUMBER(22) NULL,
xxxFT_ID NUMBER(22) NULL,
UEMTE_ID NUMBER(38) NULL,
xxxCH_CHIPID VARCHAR2(30) NOT NULL
)
The index:
ALTER TABLE DPOWNERA.XXX_CHIP ADD
(
CONSTRAINT IX_AK1_XXX_CHIPV2
UNIQUE ( XXXCH_CHIPID )
USING INDEX
TABLESPACE DP_DATA01
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 128 K
NEXT 128 K
MINEXTENTS 1
MAXEXTENTS UNLIMITED
)
);
Here is the SQL I used:
SELECT *
FROM (SELECT m2.*,
rownum rnum
FROM (SELECT m_chip.xxxch_id,
m_chip.xxxch_chipid
FROM xxx_chip m_chip
ORDER BY m_chip.xxxch_chipid) m2
WHERE rownum < 101)
WHERE rnum >= 1;
And finally, excerpts from the explain plans:
SQL Tool Query:
OPERATION OBJECT_NAME COST CARDINALITY CPU_COST
---------------- ------------------- ----- ----------- ----------
SELECT STATEMENT NULL 2 10 11740
VIEW NULL 2 10 11740
COUNT NULL NULL NULL NULL
VIEW NULL 2 10 11740
NESTED LOOPS NULL 2 10 11740
TABLE ACCESS XXX_CHIP 1 1000000 3319
INDEX IX_AK1_XXX_CHIPV2 1 10 2336
TABLE ACCESS XXX_CUSTOMER 1 1 842
INDEX IX_PK_XXX_CUSTOMER 1 1 105
Java Query, OJDBC thin client:
OPERATION OBJECT_NAME COST CARDINALITY CPU_COST
SELECT STATEMENT NULL 15100 100 1538329415
VIEW NULL 15100 100 1538329415
COUNT NULL NULL NULL NULL
VIEW NULL 15100 1000000 1538329415
SORT NULL 15100 1000000 1538329415
HASH JOIN NULL 1639 1000000 424719850
VIEW index$_join$_004 3 3 2268646
HASH JOIN NULL NULL NULL NULL
INDEX IX_AK1_XXX_CUSTOMER 1 3 965
INDEX IX_PK_XXX_CUSTOMER 1 3 965
TABLE ACCESS xxx_CHIP 1614 1000000 320184788
So, I am lost as to why the hash join is chosen by the optimizer.
My guess is that the VARCHAR2 is treated differently.
I found an answer, and it was simpler than I thought. It all has to do with the VARCHAR2 datatype of the index column. My database was set to language and country "en", "US", but locally I have another language and region. Therefore the optimizer rightly discarded the index, since it wasn't configured with the same language and country as the client.
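(A quick way to verify this kind of mismatch, as a sketch: compare what the session negotiated against the database defaults. With a linguistic NLS_SORT/NLS_COMP, Oracle cannot use a binary-sorted index on a VARCHAR2 column to satisfy the ORDER BY:)
-- What did this session (e.g. the JDBC one) negotiate?
SELECT parameter, value FROM nls_session_parameters
WHERE parameter IN ('NLS_LANGUAGE', 'NLS_SORT', 'NLS_COMP');
-- And what are the database defaults?
SELECT parameter, value FROM nls_database_parameters
WHERE parameter IN ('NLS_LANGUAGE', 'NLS_SORT', 'NLS_COMP');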
So what I did to test it was to start my Eclipse with some extra -D parameters entered in my eclipse.ini file:
-Duser.language=en
-Duser.country=US
-Duser.region=US
Then, in the Data Source Explorer in Eclipse, I used the connection I had created, ran my statement, and it worked like a charm.
So the lesson learned is to always make sure the client and the database are language-compatible. We will probably change the database to UTF-8 so it is the same for every installation; otherwise you have to configure it for every installation depending on country and language.
Hope this helps someone. If the answer is unclear, please post a comment.