Centura Column Locking Technique - Centura

How to implement child table column locking in Centura?
Column locking gives better clarity to table windows that have a large number of columns by locking the key column(s). Locked columns do not scroll horizontally; they stay fixed to the left side of the table window.

Implementation code snippet:
Set nColumnID = SalTblQueryColumnID( tablename.columnname )
Call SalTblSetLockedColumns( tablename, nColumnID )

Related

Approaches that would make the MERGE operation in SQL work faster

Could you recommend approaches that would make the MERGE operation in SQL work faster?
I believe the question is about knowledge and experience and should not be considered opinion-based: anything that makes the operation faster is appropriate for the question, and the faster the operation becomes, the better the answer.
In my particular case I have approximately 1.7 million records, which I fetch in a recurring job and use to update the existing records. In order to lock the real table ([LegalContractors]) as little as possible, I use a temporary table ([LegalContractorTemps]) into which I add all the records from non-SQL (C#) code, and after that I run the MERGE.
Here is what I am trying:
DELETE FROM [dbo].[LegalContractorTemps] WHERE [Code] IS NULL;

DELETE FROM [dbo].[LegalContractorTemps]
WHERE [Id] IN (
    SELECT [Id]
    FROM [dbo].[LegalContractorTemps] [Temp]
    JOIN (
        SELECT [Code], [Status], MAX([Id]) AS [MaxId]
        FROM [dbo].[LegalContractorTemps]
        GROUP BY [Code], [Status]
        HAVING COUNT([Id]) > 1
    ) [TempGroup]
        ON ([Temp].[Code] = [TempGroup].[Code] AND [Temp].[Status] = [TempGroup].[Status] AND [MaxId] != [Id])
);

CREATE UNIQUE INDEX [CodeStatus]
    ON [dbo].[LegalContractorTemps] ([Code], [Status]);

SELECT GETDATE() AS [beginTime];

MERGE [dbo].[LegalContractors] AS TblTarget
USING [dbo].[LegalContractorTemps] AS TblSource
    ON (TblSource.[Code] = TblTarget.[Code] AND TblSource.[Status] = TblTarget.[Status])
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([Code], [ShortName], [Name], [LegalAddress], [Status], [LastModified])
    VALUES (TblSource.[Code], TblSource.[ShortName], TblSource.[Name], TblSource.[LegalAddress], TblSource.[Status], GETDATE())
WHEN MATCHED AND
    (TblTarget.[ShortName] != TblSource.[ShortName] OR
     TblTarget.[Name] != TblSource.[Name] OR
     TblTarget.[LegalAddress] != TblSource.[LegalAddress]) THEN
    UPDATE SET
        TblTarget.[ShortName] = TblSource.[ShortName],
        TblTarget.[Name] = TblSource.[Name],
        TblTarget.[LegalAddress] = TblSource.[LegalAddress],
        TblTarget.[LastModified] = GETDATE()
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

SELECT GETDATE() AS [endTime];

DROP INDEX [CodeStatus] ON [dbo].[LegalContractorTemps];
Right now the code shown above takes approximately 2 minutes to run.
I found this answer, but I was not able to apply it to my case, because I need the WHEN NOT MATCHED clause and I will have to perform a full scan anyway (whether or not I use the MERGE).
I would consider doing a modified flush and fill, rather than doing a MERGE at all.
The method I've had the most success with uses partition switching. You build three identical tables: the main table that your users pull from, a staging table that you use for applying CRUD operations, and a holding table that you'll only use during the transition period after your updates.
This will require a little re-tooling to shift your LastModified logic right into the CRUD operations you're performing during your updates.
Then, after the staging table is ready for prime time, truncate yesterday's copy of the holding table. Next, switch the data from the main table to the now-empty holding table. Switch the data from staging to main. Probably wrap all of that in an explicit transaction.
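As a rough sketch, the transition might look like the following. It assumes three identically structured tables on the same filegroup; the _Staging and _Holding names are illustrative, not from the original post.
BEGIN TRANSACTION;

    -- Throw away yesterday's backup copy.
    TRUNCATE TABLE [dbo].[LegalContractors_Holding];

    -- Move the live data into the now-empty holding table.
    ALTER TABLE [dbo].[LegalContractors] SWITCH TO [dbo].[LegalContractors_Holding];

    -- Move the freshly prepared staging data into the (now empty) live table.
    ALTER TABLE [dbo].[LegalContractors_Staging] SWITCH TO [dbo].[LegalContractors];

COMMIT TRANSACTION;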
Boom. Your table is up-to-date. And you have a back up copy of yesterday's data in the holding table, just in case.
Tons of additional detail in these articles:
Comparison: Switching Tables vs. sp_rename
Why You Should Switch in Staging Tables Instead of Renaming Them

Performance issue on SQLite closure table implementation

I start by saying that I am fairly new to SQL and database systems, so please excuse any newbie mistakes I may be making.
I am using a closure table to insert hierarchical data in a SQLite database. I am using C# (.NET 4.6.1) and SQLite precompiled 32-bit DLL (x86) for SQLite version 3.26.0. The hierarchical data inserted contains ~240000 elements, and the max tree depth is not greater than 7.
My hierarchical element table is:
CREATE TABLE element (elementId INTEGER PRIMARY KEY, parentId INTEGER, elementName TEXT, FOREIGN KEY (parentId) REFERENCES element(elementId));
And my closure table is defined by:
CREATE TABLE hierarchy (parentId INTEGER, childId INTEGER, depth INTEGER, FOREIGN KEY(parentId) REFERENCES element(elementId), FOREIGN KEY(childId) REFERENCES element(elementId));
The elements are inserted using a classic stack, which begins with the “root” element and to which each element’s children are added as they are processed, inside a transaction, using:
INSERT INTO element VALUES (<ELEMENT_ID>, <PARENT_ID>, '<ELEMENT_NAME>');
And I initialize the closure table with the “self” relationship, using:
INSERT INTO hierarchy(parentId, childId, depth) VALUES (<ELEMENT_ID>, <ELEMENT_ID>, 0);
These inserts cause no issues, taking a few seconds to execute.
Next, I go through all the elements again using the same stack method to build the closure table (NOTE: I could probably do this at the same time as the previous inserts; however, I do it in a separate loop to isolate the performance issue), using the following code (inside another transaction):
INSERT INTO hierarchy SELECT p.parentId, c.childId, p.depth+c.depth+1 FROM hierarchy p, hierarchy c WHERE p.childId=<PARENT_ID> AND c.parentId=<ELEMENT_ID>;
However, this query takes HOURS, maybe even days, to execute, and its execution time keeps growing as it progresses. I know that it inserts a lot of rows into the closure table (one entry per relationship between the current element and each of its ancestors), but I would like to know if anything can be done to improve the performance here?
Thanks
You need indexes on child keys and parent keys. Also wrap everything in a transaction.
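For the schema above, that might look like the following (the index names are illustrative):
-- Cover the columns used by the closure self-join and the parent lookup.
CREATE INDEX idx_hierarchy_parent ON hierarchy(parentId);
CREATE INDEX idx_hierarchy_child ON hierarchy(childId);
CREATE INDEX idx_element_parent ON element(parentId);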
Better yet, use a single recursive CTE to generate the closure table, something like
with recursive
closure as (
    select elementId, elementId as parentId, 0 as depth
    from element
    union all
    select closure.elementId, element.parentId, 1 + depth as depth
    from closure, element
    where closure.parentId = element.elementId
)
select * from closure
To actually create the table, use something like CREATE TABLE hierarchy AS ... to put the above result into a table.
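A minimal sketch of that, keeping the (parentId, childId, depth) layout from the question; it assumes the hierarchy table has not been created yet (or has been dropped first):
CREATE TABLE hierarchy AS
WITH RECURSIVE closure(childId, parentId, depth) AS (
    -- Every element is its own ancestor at depth 0.
    SELECT elementId, elementId, 0 FROM element
    UNION ALL
    -- Walk up one level: the parent of the current ancestor.
    SELECT closure.childId, element.parentId, closure.depth + 1
    FROM closure
    JOIN element ON element.elementId = closure.parentId
    WHERE element.parentId IS NOT NULL
)
SELECT parentId, childId, depth FROM closure;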

Process SQL result set entirely

I need to work with a SQL result set in order to do some processing for each column (medians, standard deviations, and several control statements).
The SQL is dynamic, so I don't know the number of columns or rows.
First I tried to use temporary tables, views, etc. to store the results; however, I did not manage to overcome the 30-character limit on Oracle column names when using the SQL below:
create table (or view or global temporary table) as select * from (
    SELECT
        DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
        SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) <-- exceeds the 30 character limit
    FROM DMTTBF_MAT_MATURATO_BILL_POS
    WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= '201301'
    GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)
My second choice was to use some PL/SQL types to store the entire table information, so I could address it as in other programming languages (e.g. a matrix result[i][j]), but I could not find anything similar.
The third variant, using files for reading and writing: I have not tried it yet; I'm still hoping for a more elegant PL/SQL solution.
It's possible that I have the wrong approach here so any advice is more than welcome.
UPDATE: Modifying the input SQL is not an option. The program has to accept any select statement.
Note that you can alias both tables and fields. Using a table alias keeps references to it from producing walls of text in the query. Using one for a field gives it a new name in the output.
SELECT A.LONG_FIELD_NAME_HERE AS SHORTNAME
FROM REALLY_LONG_TABLE_NAME_HERE A
The automatic naming adds _1, _2, etc. to differentiate the same column name coming from different table references; this often pushes a field that is already borderline over the limit. Giving the fields names yourself bypasses this.
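For the concrete query from the question, giving the derived column a short alias might look like this (the table name result_tmp and the alias SUM_NUM_EVENTI are illustrative):
-- Short alias keeps the generated column name within the 30-character limit.
CREATE TABLE result_tmp AS
SELECT *
FROM (
    SELECT
        DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
        SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ
            + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) AS SUM_NUM_EVENTI
    FROM DMTTBF_MAT_MATURATO_BILL_POS
    WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= '201301'
    GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
);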
You can also put the alias in dynamic SQL:
sqlstr := 'create table (or view or global temporary table) as select * from (
    SELECT
        DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE,
        SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI) AS "'||SUBSTR('SUM(DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI_CHZ + DMTTBF_MAT_MATURATO_BILL_POS.MAT_N_NUM_EVENTI)', 1, 30)||'"
    FROM DMTTBF_MAT_MATURATO_BILL_POS
    WHERE DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE >= ''201301''
    GROUP BY DMTTBF_MAT_MATURATO_BILL_POS.MAT_V_COD_ANNOMESE
)';

Executing triggers in Oracle for copying the old values to a Mirror table

We are trying to copy the current row of a table to a mirror table by using a trigger before delete/update. Below is the working trigger body:
BEFORE UPDATE OR DELETE
ON CurrentTable FOR EACH ROW
BEGIN
INSERT INTO MirrorTable
( EMPFIRSTNAME,
EMPLASTNAME,
CELLNO,
SALARY
)
VALUES
( :old.EMPFIRSTNAME,
:old.EMPLASTNAME,
:old.CELLNO,
:old.SALARY
);
END;
But the problem is that we have more than 50 columns in the current table and we don't want to mention all those column names. Is there a way to select all columns, like
:old.*
SELECT * INTO MirrorTable FROM CurrentTable
Any suggestions would be helpful.
Thanks,
Realistically, no. You'll need to list all the columns.
You could, of course, dynamically generate the trigger code pulling the column names from DBA_TAB_COLUMNS. But that is going to be dramatically more work than simply typing in 50 column names.
If your table happens to be an object table, :new would be an instance of that object so you could insert that. But it would be rather rare to have an object table.
If your 'current' and 'mirror' tables have EXACTLY the same structure you may be able to use something like
INSERT INTO MirrorTable
SELECT *
FROM CurrentTable
WHERE CurrentTable.primary_key_column = :old.primary_key_column
Honestly, I think that this is a poor choice and wouldn't do it, but it's a more-or-less free world and you're free (more or less :-) to make your own choices.
Share and enjoy.
For what it's worth, I've been writing the same stuff and used this to generate the code:
SQL> set pagesize 0
SQL> select ':old.'||COLUMN_NAME||',' from all_tab_columns where table_name='BIGTABLE' and owner='BOB';
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
...
If you feed all columns, no need to mention them twice (and you may use NULL for empty columns):
INSERT INTO bigtable VALUES (
:old.COL1,
:old.COL2,
:old.COL3,
:old.COL4,
:old.COL5,
NULL,
NULL);
People writing tables with that many columns should have no desserts ;-)

How to protect a running column within Oracle/PostgreSQL (kind of MAX-result locking or something)

I need advice on the following situation with Oracle/PostgreSQL:
I have a db table with a "running counter" and would like to protect it in the following situation with two concurrent transactions:
T1                                              T2
SELECT MAX(C) FROM TABLE WHERE CODE='xx';
-- C for new : result + 1
                                                SELECT MAX(C) FROM TABLE WHERE CODE='xx';
                                                -- C for new : result + 1
INSERT INTO TABLE...
                                                INSERT INTO TABLE...
So, in both cases, the column value for INSERT is calculated from the old result added by one.
Given this, a running counter handled by the DB would be fine, but that wouldn't work because:
the counter values of existing rows are sometimes changed
sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned): with different values for CODE, the counters would be independent.
With some other databases this can be handled with the SERIALIZABLE isolation level, but at least with Oracle and PostgreSQL the phantom reads are prevented and yet the table still ends up with two distinct rows holding the same counter value. This seems to have to do with predicate locking, i.e. locking "all the possible rows covered by the query"; some other DBs end up locking the whole table or something similar.
SELECT ... FOR UPDATE statements seem to be meant for other purposes, and they don't even seem to work with the MAX() function.
Setting a UNIQUE constraint on the column would probably be a solution, but are there other ways to prevent the situation?
b.r. Touko
EDIT: One more option could probably be manual locking, even though it doesn't seem nice to me.
Both Oracle and PostgreSQL support what are called sequences, and they are a perfect fit for your problem. You can have a regular int column, but define one sequence per group, and do a single query like
--PostgreSQL
insert into table (id, ... ) values (nextval(sequence_name_for_group_xx), ... )
--Oracle
insert into table (id, ... ) values (sequence_name_for_group_xx.nextval, ... )
Increments in sequences are atomic, so your problem just wouldn't exist. It's only a matter of creating the required sequences, one per group.
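A minimal sketch, with one sequence per CODE group; the sequence, table, and column names are illustrative, not from the original post:
-- PostgreSQL
CREATE SEQUENCE seq_counter_xx;
CREATE SEQUENCE seq_counter_yy;

INSERT INTO the_table (code, c) VALUES ('xx', nextval('seq_counter_xx'));
INSERT INTO the_table (code, c) VALUES ('yy', nextval('seq_counter_yy'));

-- Oracle equivalent of the insert
INSERT INTO the_table (code, c) VALUES ('xx', seq_counter_xx.NEXTVAL);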
the counter values of existing rows are sometimes changed
You should put a unique constraint on that column if this would be a problem for your app. Doing so would guarantee that a transaction at the SERIALIZABLE isolation level would abort if it tried to use the same id as another transaction.
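A minimal sketch of such a constraint, assuming the counter C should be unique within each CODE group (the table and constraint names are illustrative):
ALTER TABLE the_table ADD CONSTRAINT uq_code_counter UNIQUE (code, c);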
One more option could probably be manual locking, even though it doesn't seem nice to me.
Manual locking in this case is pretty easy: just take a SHARE UPDATE EXCLUSIVE or stronger lock on the table before selecting the maximum. This will kill concurrent performance, though.
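A minimal sketch of that manual locking in PostgreSQL, using the question's pseudo-schema; note it only protects against writers that take the same lock, so a stronger mode (e.g. EXCLUSIVE) would be needed to also block inserts that skip it:
BEGIN;
-- Serializes all writers that take this lock before reading MAX(c).
LOCK TABLE the_table IN SHARE UPDATE EXCLUSIVE MODE;
INSERT INTO the_table (code, c)
SELECT 'xx', COALESCE(MAX(c), 0) + 1
FROM the_table
WHERE code = 'xx';
COMMIT;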
sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned): with different values for CODE, the counters would be independent.
This leads me to the Right Solution for this problem: sequences. Set up several sequences, one for each "value group" you want to get IDs in their own range. See Section 9.15 of The Manual for the details of sequences and how to use them; it looks like they're a perfect fit for you. Sequences will never give the same value twice, but might skip values: if a transaction gets the value '2' from a sequence and aborts, the next transaction will get the value '3' rather than '2'.
The sequence answer is common, but might not be right. The viability of this solution depends on what you actually need. If what you semantically want is "some guaranteed to be unique number" then that is what a sequence is for. However, if what you want is to make sure that your value increases by exactly one on each insert (as you have asked), then DO NOT USE A SEQUENCE! I have run into this trap before myself.

Sequences are not guaranteed to be sequential! They can skip numbers. Depending on what sort of optimizations you have configured, they can skip LOTS of numbers. Even if you have things configured just right so that you shouldn't skip any numbers, that is not guaranteed, and is not what sequences are for. So, you are only asking for trouble if you (mis)use them like that.
A slightly better solution is to bundle the SELECT into the INSERT, like so:
INSERT INTO table(code, c, ...)
VALUES ('XX', (SELECT MAX(c) + 1 AS c FROM table WHERE code = 'XX'), ...);
(I haven't test run that query, but I'm pretty sure it should work. My apologies if it doesn't.) But, doing something like that reflects the semantic intent of what you are trying to do. However, this is inefficient, because you have to do a scan for MAX, and the inference I am taking from your sample is that you have a small number of code values relative to the size of the table, so you are going to do an expensive, full table scan on every insert. That isn't good. Also, this doesn't even get you the ACID guarantee you are looking for. The select is not transactionally tied to the insert. You can't "lock" the result of the MAX() function. So, you could still have two transactions running this query and they both do the sub-select and get the same max, both add one, and then both try to insert. It's a much smaller window, but you may still technically have a race condition here.
Ultimately, I would challenge that you probably have the wrong data model if you are trying to increment on insert. You should insert with a unique key, most commonly a sequence value (at least as an easy, surrogate key for any natural key). That gets the data safely inserted. Then, if you need a count of things, then have one table that stores your counts.
CREATE TABLE code_counts (
    code  VARCHAR(2),  -- or whatever
    count NUMBER
);
If you really want to store the code count of each item as it is inserted, the separate count table also allows you to do so correctly, transactionally, like so:
UPDATE code_counts SET count = count + 1 WHERE code = 'XX' RETURNING count INTO :count;
INSERT INTO table(code, c, ...) VALUES ('XX', :count, ...);
COMMIT;
The key is that the update locks the counter table and reserves that value for you. Then your insert uses that value. And all of that is committed as one transactional change. You have to do this in a transaction. Having a separate count table avoids the full table scan of doing SELECT MAX(...). In essence, what this does is re-implement a sequence, but it also guarantees you sequential, ordered use.
Without knowing your whole problem domain and data model, it is hard to say, but abstracting your counts out to a separate table like this where you don't have to do a select max to get the right value is probably a good idea. Assuming, of course, that a count is what you really care about. If you are just doing logging or something where you want to make sure things are unique, then use a sequence, and a timestamp to sort by.
Note that I'm saying not to sort by a sequence either. Basically, never trust a sequence to be anything other than unique. Because when you get to caching sequence values on a multi-node system, your application might even consume them out of order.
This is why you should use the Serial datatype, which defers the lookup of C to the time of insert (which uses table locks, I presume). You would then not specify C; it would be generated automatically. If you need C for some intermediate calculation, you would need to save first, then read C, and finally update with the derived values.
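A minimal sketch of that approach in PostgreSQL, with illustrative table and column names; SERIAL simply attaches a sequence as the column default, so the earlier caveats about gaps still apply:
CREATE TABLE counted_rows (
    c    SERIAL PRIMARY KEY,   -- generated automatically at insert time
    code VARCHAR(2) NOT NULL
);

INSERT INTO counted_rows (code) VALUES ('xx');  -- c is assigned by the sequence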
Edit: Sorry, I didn't read your whole question. What about solving your other problems with normalization? Just create a second table for each specific type (for each x where A='x'), where you have another auto-increment. Manually edited sequences could be another column in the same table, which uses the generated sequence as a base (i.e. if pk = 34 you can have another column mypk='34Changed').
You can create a sequential column by using a sequence as the default value:
First, you have to create the sequence counter:
CREATE SEQUENCE SEQ_TABLE_1 START WITH 1 INCREMENT BY 1;
Then you can use it as the default value:
CREATE TABLE T (
    COD NUMERIC(10) DEFAULT NEXTVAL('SEQ_TABLE_1') NOT NULL,
    column1 ...,
    column2 ...
);
Now you don't need to worry about the sequence when inserting rows:
INSERT INTO T (collumn1, collumn2) VALUES (value1, value2);
Regards.

Resources