SQLite: how to enable counting the number of rows modified by a trigger on a view

Is there any way to enable counting of the rows that a trigger modified in SQLite?
I know it is disabled (https://www.sqlite.org/c3ref/changes.html) and I understand why, but can I enable it somehow?
CREATE TABLE Users_data (
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Deleted BOOLEAN DEFAULT (0),
Name STRING
);
CREATE VIEW Users AS
SELECT Id, Name
FROM Users_data
WHERE Deleted = 0;
CREATE TRIGGER UsersDelete2UsersData
INSTEAD OF DELETE
ON Users
FOR EACH ROW
BEGIN
UPDATE Users_data SET Deleted = 1 WHERE Id = OLD.Id;
END;
-- etc for insert & update
Then DELETE FROM Users WHERE Name LIKE 'foo'; works fine (it doesn't even need Id = 1), but the number of modified rows is, as the documentation says, always zero.
(I can't modify my DAL to automatically add WHERE Deleted = 0, so the backup plan is to have a Users_deleted table and an ON DELETE trigger on the Users table without any view, but then I have to keep tracking FKs (for example, what to do when someone deletes from an FK table) and so on...)
Edit: The returned number is used for database concurrency checking.
Edit 2: To be clearer: as I said, I cannot modify my DAL (Entity Framework 6), so the preferred answer should work with the following pseudo code: int affectedRows = query("delete from Users where Name like 'foo';").Execute();
It's all about SQLite's "trigger on view" behavior.

Use sqlite3_total_changes() instead:
This function returns the total number of rows inserted, modified or deleted by all INSERT, UPDATE or DELETE statements completed since the database connection was opened, including those executed as part of trigger programs.
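A minimal sketch of that idea in plain SQL, using the built-in total_changes() function (which wraps the same C API); since it counts every change made on the connection, you take the difference around the statement you care about:

-- total_changes() also counts rows changed inside trigger programs,
-- unlike changes() / sqlite3_changes()
SELECT total_changes();                    -- remember this value, say N
DELETE FROM Users WHERE Name LIKE 'foo';   -- the INSTEAD OF trigger updates Users_data
SELECT total_changes();                    -- N + number of rows the trigger flagged as deleted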

It's impossible in sqlite3 (as of 2015).
Basically I was looking for an INSTEAD OF trigger on a view (as in the question) with a returned row count, which is not supported in SQLite.
By the way, PostgreSQL (and I believe some other full database servers) can do it.
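For comparison only, a rough sketch of the PostgreSQL equivalent (assuming a users_data table with a boolean deleted column): an INSTEAD OF trigger function that returns OLD makes each deleted row count toward the command's affected-row total, which is exactly what SQLite does not offer.

CREATE FUNCTION users_delete_fn() RETURNS trigger AS $$
BEGIN
    UPDATE users_data SET deleted = TRUE WHERE id = OLD.id;
    RETURN OLD;  -- returning a non-NULL row makes it count in "DELETE n"
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_instead_of_delete
INSTEAD OF DELETE ON users
FOR EACH ROW EXECUTE PROCEDURE users_delete_fn();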

Related

Oracle APEX ARP process not performed just because another process fails?

You are an awesome community. This is the first time I couldn't find an answer to my questions, and I have had millions.
Go easy on me. I'm a super noob and I already feel bad for making you waste your time with my dummy question.
So... I'm trying to make an app with Oracle APEX. I have a form with an interactive report for table1. On the form page I have 3 processes, in this order:
Automatic Row Processing (DML) that APEX automatically made for me,
a PL/SQL process I made, and
the reset page process APEX made.
The ARP updates, creates and deletes and is triggered by any of the buttons (SAVE, CREATE, DELETE).
My process deletes a row in another table (table2) and is performed when DELETE is clicked and ITEM1 is not null (because ITEM1 stores the PK of the row in the second table).
The last process is the usual reset page process that should clear all item values when DELETE is pressed.
Firing point is by default "Processing" for all 3.
Sometimes my process fails (and returns the error I set) because of an FK constraint.
Now here is the thing: if my process fails, the other 2 seem not to be executed. Is that possible? If I set the condition (to be executed) of my process to Never, the other 2 work. What am I missing?
You aren't missing anything.
When you push a button that fires those processes, they make a transaction. If any of them raises an error, all of them (executed so far) are rolled back.
If you want to continue processing regardless of what your own procedure (the 2nd one) does (I mean: whether it succeeds or not), then handle it somehow.
A trivial (and not the best) option is to ignore possible errors, e.g.
begin
    delete from child_table where id = :P1_ITEM1;
exception
    when others then null;  -- ignore any errors
end;
A smarter way would be to intercept the errors you expect. If you know (and yes, you do) that there's a possibility that the foreign key constraint will be violated, check whether child rows exist; if not, delete the master row.
declare
    l_id child_table.id%type;
begin
    -- If row(s) with such an ID exists, L_ID will be set to that value.
    -- In that case, don't do anything
    select m.id
      into l_id
      from child_table m
     where m.id = :P1_ITEM1
       and rownum = 1;

    -- The above query returned something; don't do anything
    null;
exception
    when no_data_found then
        -- The above query didn't return anything, so - delete a row
        delete from child_table where id = :P1_ID;
end;
Now, that can/could/should be modified, depending on what you really have; it is just an idea of what to look at.
Yet another option is to set the foreign key constraint to ON DELETE CASCADE, which means that deleting a master record automatically deletes its detail records. Doing so, you wouldn't care about such problems, and your 2nd process would be as simple as
delete from child_table where id = :P1_ID;
(unless you hit another kind of an error, of course).
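A sketch of that change, with made-up table and constraint names (Oracle has no way to alter the delete rule of an existing constraint in place, so the FK is dropped and re-created):

ALTER TABLE detail_table DROP CONSTRAINT detail_master_fk;

ALTER TABLE detail_table ADD CONSTRAINT detail_master_fk
    FOREIGN KEY (master_id)
    REFERENCES master_table (id)
    ON DELETE CASCADE;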
If you want to let users decide whether they want to delete rows or not, change the button's action to "Redirect to URL" (currently it is "Submit", I presume). The target URL will be something like this (suppose that the button's name is P1_START_PROCESSES):
javascript:if(confirm('Are you sure you want to delete all rows related to this document?')){doSubmit('P1_START_PROCESSES');}

Populate a column on update (create too?), and why "FOR EACH ROW"?

I have a table of people who belong to various sites. These sites can change, but don't very often. So when we create an attendance record (a learner_session object) we don't store the site. But this has caused a problem in reporting how many training hours a site has, because some people have changed sites over the years. Not by much, but we'd like to get this right.
So I've added a site_at_the_time column to the learner_session table. I want to auto-populate this with the site the person was at when they attended the session. But I'm not sure how to reference this. For some reason (I'm guessing to speed development or something) the learner_id is allowed to be null. So I'm currently planning to do an update trigger. The learner_id shouldn't ever get updated, and if it ever did somehow, the entire record would be junk so I'm not worried about it overwriting it.
The trigger I have now is
create trigger set_site_at_the_time
after update of learner_id on lrn_session
begin
:new.site_at_the_time:= (select site_id from learner who where :new.learner_id = who.learner_id);
end;
which leads me to the following error:
ORA-04082: NEW or OLD references not allowed in table level triggers
Now, I've done some research and found I need to use a FOR EACH ROW - and I'm wondering what exactly this FOR EACH ROW does - is it every row captured by the trigger? Or is it every row in the table?
Also, will this trigger when I create a record too? So if I do insert into learner_session(id,learner_id,...) values(learner_session_id_seq.nextval,1234,...) will this capture that appropriately?
And while I'm here, I might as well see if there's something else I'm doing wrong with this trigger. But I'm mainly asking to figure out what the FOR EACH ROW is supposed to do and if it triggers properly. =)
FOR EACH ROW means that the trigger will fire once for each row that is updated by your SQL statement. Without this clause, the trigger will only fire once, no matter how many rows are affected. If you want to change values as they're being inserted, you have to use FOR EACH ROW, because otherwise the trigger can't know which :new and :old values to use.
As written, the trigger only fires on update. To make it also fire upon insert, you'd need to change the definition:
CREATE TRIGGER set_site_at_the_time
BEFORE INSERT OR UPDATE OF learner_id
ON lrn_session
FOR EACH ROW
BEGIN
    SELECT site_id
      INTO :new.site_at_the_time
      FROM learner who
     WHERE :new.learner_id = who.learner_id;
END set_site_at_the_time;
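With that version in place, an insert like the one in the question (columns trimmed to the two that matter here, values made up) should populate site_at_the_time with no extra code:

INSERT INTO lrn_session (id, learner_id)
VALUES (learner_session_id_seq.NEXTVAL, 1234);
-- the BEFORE INSERT part of the trigger looks up learner 1234's site_id
-- and writes it into :new.site_at_the_time before the row is stored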

oracle select and concurrent insert :: To check email availability

We have a simple case: a table with a unique emailId column, using an Oracle DB.
Question #1
Multiple concurrent users can check whether some email id is available or not, e.g. 2 users check the availability of abc@test.com at the same time:
session1: select emailid from user_table;
//If not present allow user to complete rest of the process & insert info
session2: select emailid from user_table;
Now both sessions will see that this email id (abc@test.com) is available and both will try to insert. I know one of them will get an error upon insertion, BUT how can we make sure that only 1 user sees it as available and the other sees it as not available upon select?
Question #2
Also, in case both sessions insert the same value, the first will succeed; is there a way for the 2nd session to update that row instead of throwing an error? Like, we have another column for a timestamp, and we want the 2nd session, instead of throwing an error, to simply update the timestamp column.
As this is a rather abstract question, here are only some general guidelines:
To deal with concurrent inserts in a table, you need a unique index, and you must be prepared in your code to deal with the ORA-00001 error (unique constraint violated). Never rely only on a check before insert (unless you somehow have exclusive access to your table -- and even if so... as for myself, I would add a unique index: it doesn't cost much and makes me sleep better).
Oracle has a MERGE statement that allows you to update or insert based on a condition. This operation is sometimes called an upsert; by searching for that keyword you should be able to find more information. See Oracle: how to UPSERT (update or insert into a table?) for example.
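A minimal sketch of such a MERGE against the question's user_table (the timestamp column name is made up):

MERGE INTO user_table t
USING (SELECT 'abc@test.com' AS emailid FROM dual) src
   ON (t.emailid = src.emailid)
 WHEN MATCHED THEN
      UPDATE SET t.last_seen = SYSTIMESTAMP          -- hypothetical timestamp column
 WHEN NOT MATCHED THEN
      INSERT (emailid, last_seen)
      VALUES (src.emailid, SYSTIMESTAMP);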
Now for some thoughts about your specific case:
The only way for the system to work as you suggested would be to make some kind of reservation when you check for availability (i.e. immediately insert the row, instead of just selecting), and then update the row when the user confirms. But that means (1) you will have to somehow deal with never-confirmed reservations, and (2) it doesn't exempt you from having a unique index and dealing with ORA-00001.
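A rough sketch of that reservation idea, with a made-up STATUS column:

-- "check availability" becomes an attempted reservation
INSERT INTO user_table (emailid, status)
VALUES ('abc@test.com', 'RESERVED');
-- ORA-00001 here means another session already took that address

-- when the user completes the rest of the process
UPDATE user_table
   SET status = 'CONFIRMED'
 WHERE emailid = 'abc@test.com';

-- never-confirmed reservations still need a cleanup job, e.g. deleting
-- rows left in 'RESERVED' after some timeout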

DB2 duplicate key error when inserting, BUT working after select count(*)

I have an issue that is unknown to me, and I don't know the logic/cause behind it. When I try to insert a record into a table I get a DB2 error saying:
[SQL0803] Duplicate key value specified: A unique index or unique constraint *N in *N
exists over one or more columns of table TABLEXXX in SCHEMAYYY. The operation cannot
be performed because one or more values would have produced a duplicate key in
the unique index or constraint.
Which is quite a clear message to me. But actually there would be no duplicate key if I inserted my new record, judging by the records that are already in there. When I do a SELECT COUNT(*) FROM SCHEMAYYY.TABLEXXX and then try to insert the record, it works flawlessly.
How can it be that when performing the SELECT COUNT(*) I can suddenly insert the records? Is there some sort of index associated with it which might give issues because it is out of sync? I didn't design the data model, so I don't have deep knowledge of the system yet.
The original DB2 SQL is:
-- Generate SQL
-- Version: V6R1M0 080215
-- Generated on: 19/12/12 10:28:39
-- Relational Database: S656C89D
-- Standards Option: DB2 for i
CREATE TABLE TZVDB.PRODUCTCOSTS (
ID INTEGER GENERATED BY DEFAULT AS IDENTITY (
START WITH 1 INCREMENT BY 1
MINVALUE 1 MAXVALUE 2147483647
NO CYCLE NO ORDER
CACHE 20 )
,
PRODUCT_ID INTEGER DEFAULT NULL ,
STARTPRICE DECIMAL(7, 2) DEFAULT NULL ,
FROMDATE TIMESTAMP DEFAULT NULL ,
TILLDATE TIMESTAMP DEFAULT NULL ,
CONSTRAINT TZVDB.PRODUCTCOSTS_PK PRIMARY KEY( ID ) ) ;
ALTER TABLE TZVDB.PRODUCTCOSTS
ADD CONSTRAINT TZVDB.PRODCSTS_PRDCT_FK
FOREIGN KEY( PRODUCT_ID )
REFERENCES TZVDB.PRODUCT ( ID )
ON DELETE RESTRICT
ON UPDATE NO ACTION;
I'd like to see the statements... but since this question is a year old... I won't hold my breath.
I'm thinking the problem may be the
GENERATED BY DEFAULT
And instead of passing NULL for the identity column, you're accidentally passing zero or some other duplicate value the first time around.
Either always pass NULL, pass a non-duplicate value, or switch to GENERATED ALWAYS.
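In other words, let the database generate the value. A sketch of an insert that simply leaves the identity column out (the values are made up):

INSERT INTO TZVDB.PRODUCTCOSTS (PRODUCT_ID, STARTPRICE, FROMDATE, TILLDATE)
VALUES (42, 19.99, CURRENT TIMESTAMP, NULL);   -- ID is generated, so it cannot collide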
Look at preceding messages in the joblog for specifics as to what caused this. I don't understand how the INSERT can suddenly work after the COUNT(*). Please let us know what you find.
Since it shows *N (i.e. n/a) as the name of the index or constraint, this suggests to me that it is not a standard DB2 object, and therefore may be a "logical file" [LF] defined with DDS rather than SQL, with a key structure different from what you were doing your COUNT(*) on.
Your shop may have better tools to view keys on dependent files, but the method below will work anywhere.
If your table might not be the actual "physical file", check this using Display File Description, DSPFD TZVDB.PRODUCTCOSTS, in a 5250 ("green screen") session.
Use the Display Database Relations command, DSPDBR TZVDB.PRODUCTCOSTS, to find what files are defined over your table. You can then DSPFD on each of these files to see the definition of the index key. Also check there that each of these indexes is maintained *IMMED, rather than *REBUILD or *DELAY. (A wild longshot guess as to a remotely possible cause of your strange anomaly.)
You will find the DB2 for i message finder here in the IBM i 7.1 Information Center, or the one for other releases.
Is it a paging issue? We seem to get -0803 on inserts occasionally when a row is being held for update and it locks a page that probably contains the index needed for the insert. This is only a guess, but it appears to me that that is what is happening.
I know it is an old topic, but this is what Google showed me in the first place.
I had the same issue yesterday, and it caused me a lot of headache. I did the same as above: checked the table definitions, keys, existing items...
Then I found out the problem was with my INSERT statement. It was trying to insert two identical records at once, but as the constraint prevented the commit, I could not find anything in the database.
Advice: review your INSERT statement carefully! :)

Oracle Apex - Updating a view with instead-of trigger

Apex beginner here. I have a view in my Oracle database of the form:
create or replace view vw_awkward_view as
select unique tab1.some_column1,
tab2.some_column1,
tab2.some_column2,
tab2.some_column3
from table_1 tab1,
table_2 tab2
WHERE ....
I need the 'unique' clause on 'tab1.some_column1' because it has many entries in its underlying table. I also need to include 'tab1.some_column1' in my view because the rest of the data doesn't make much sense without it.
In Apex, I want to create a report on this view with a form for editing it (update only). I do NOT need to edit tab1.some_column1. Only the other columns in the view need to be editable. I can normally achieve this using an 'instead-of' trigger, but this doesn't look possible when the view contains a 'distinct', 'unique' or 'group by' clause.
If I try to update a row on this view I get the following error:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
How can I avoid this error? I want my 'instead-of' trigger to kick in and perform the update and I don't need to edit the column which has the 'unique' clause, so I think it should be possible to do this.
I think that you should be able to remove the "unique".
If tab2.some_column1, tab2.some_column2, tab2.some_column3 are not unique, then how do you want to update them?
If they are unique, then the whole result (tab1.some_column1, tab2.some_column1, tab2.some_column2, tab2.some_column3) is unique.
When you state "unique" or "distinct" in a SQL query, it applies to all columns, not only to 'tab1.some_column1'.
Hope I'm in the correct direction with your question here ;)
Your query could be achieved by doing something like:
select a.some_column1, tab2.some_column1, tab2.some_column2, tab2.some_column3
from table_2 tab2
join (select distinct some_column1 from table_1) a
on tab2.column_in_tab1 = a.some_column1
The reason you get the ORA-02014 error is because of the automatically generated ApplyMRU process. This process will attempt to lock a (the) changed row(s):
begin
    for r in (select ...
                from vw_awkward_view
               where <your first defined PK column> = 'value for PK1'
                 for update nowait)
    loop
        null;
    end loop;
end;
That's a bummer, and means you won't be able to use the generated process. You'll have to write your own process which does the updating.
For this, you'll have to use the F## arrays in apex_application.
If this sounds totally unfamiliar, take a look at:
Custom submit process, and on using the apex_application arrays.
Also, here is a how-to for apex from 2004 from Oracle itself. It still uses lots of htmldb references, but the gist of it is there.
(it might be a good idea to use the apex_item interface to build up your form, and have control over what is generated and what array it takes.)
What it comes down to is: loop over the array containing your items and do an UPDATE on your view with the submitted values.
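A bare-bones sketch of such a process, assuming the report was built with apex_item so that f01 holds the key and f02/f03 hold the two editable columns (the array positions and the column names here are made up):

BEGIN
    FOR i IN 1 .. apex_application.g_f01.COUNT LOOP
        UPDATE vw_awkward_view v
           SET v.some_column2 = apex_application.g_f02(i),
               v.some_column3 = apex_application.g_f03(i)
         WHERE v.key_column = apex_application.g_f01(i);   -- fires the INSTEAD OF trigger
    END LOOP;
END;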
Of course, you don't have locking this way, nor a way to prevent unnecessary updates.
Locking you can do yourself, for example by using the SELECT FOR UPDATE method. You'd have to lock the correct rows in the table(s) you want to alter before you update them. If the locking fails, then your process should fail.
As for the 'lost update' story: here you'd need to check the MD5 checksums. A checksum is generated from the editable columns in your form and put in the HTML code. On submit, this checksum is compared to a newly generated checksum from those same columns, but with the values from the database at the time of submit. If the checksums differ, it means the record has changed between the page load and the page submit. Your process should fail because the record has been altered, and you don't want those changes overwritten. (If you go the apex_item way, don't forget to include an MD5_CHECKSUM call (or MD5_HIDDEN).)
Important note though: checksums generated by either using apex_item or simply the standard form functionality build up a string to be hashed. As you can see in apex_item.md5_hidden, checksums are generated using DBMS_OBFUSCATION_TOOLKIT.MD5.
You can get the checksum of the values in the DB in 2 ways: wwv_flow_item.md5 or using dbms_obfuscation.
However, what the documentation fails to mention is this: OTN Apex discussion on MD5 checksums. Pipes are added in the generated checksums! Don't forget this, or it'll blow up in your face and you'll be left wondering for days what the hell is wrong with it.
Example:
select utl_raw.cast_to_raw(dbms_obfuscation_toolkit.md5(input_string=>
"COLUMN1" ||'|'||
"COLUMN2" ||'|'||
"COLUMN5" ||'|'||
"COLUMN7" ||'|'||
"COLUMN10" ||'|'||
"COLUMN12" ||'|'||
"COLUMN14" ||
'|||||||||||||||||||||||||||||||||||||||||||'
)) md5
from some_table
To get the checksum of a row of the some_table table, where columns 1,2,5,7,10,12,14 are editable!
In the end, this is how it should be structured:
loop over the array:
    generate a checksum for the current value of the editable columns from the database;
    compare this checksum with the submitted checksum (apex_application.g_fcs if generated); if the checksums match, proceed with the update, if not, fail the process here;
    lock the correct records for updating, specifying nowait; if locking fails, fail the process;
    update your view with the submitted values; your instead-of trigger will fire. Be sure you use correct values in your update statement so that only this one record is updated.
Don't commit in between. It's either all or nothing.
I almost feel like I went overboard, and it might feel like it is all a bit much, but once you know the pitfalls it's actually not that hard to pull this custom process off! It was very educational for me to play with it :p
The answer by Tom is a correct way of dealing with this issue, but I think it is overkill for your requirements, if I understand them correctly.
The easiest way may be to create a form on the table you want to edit, then have the report's edit link take the user to this form, which will only update the needed columns of the one table. If you need the value of the column from the other table displayed, it is simple to pass that value to the form when you create the link; the form can contain a display-only item to show it.
