I'm going through some Visual FoxPro 7.0 code, which is new to me. I'm having a bit of trouble deciphering a DELETE command inside a DO WHILE that uses two different work areas.
I'm just not clear on what is actually being deleted. When the DELETE command is issued, Work Area 1 is selected, but I'm not looping through any of its records. If the DELETE command IS being used against tblpay (Work Area 1), then it seems to be deleting the record that was just inserted, which makes no sense. Can someone clue me in, please?
select 1 (tblpay)
USE tblpay
select 2 (tblfac)
USE tblfac
GOTO TOP
DO WHILE NOT EOF()
    lcfy = fy
    lcindex_no = index_no
    lcpca = pca
    lnpys = padl(alltrim(str(cum_pys)),13,' ')
    select 1 (tblpay)
    LOCATE FOR fy = lcfy AND index_no = lcindex_no AND pca = lcpca
    IF NOT FOUND()
        INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
            values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
        DELETE
    ENDIF
    select 2 (tblfac)
    SKIP
ENDDO
Admittedly the code you show is not very clear.
Some suggested changes:
select 1 (tblpay)
USE tblpay
should be
USE tblpay IN 0 && Open table tblpay into next available workspace
and
select 2 (tblfac)
USE tblfac
should be
USE tblfac IN 0 && Open table tblfac into next available workspace
Then you would no longer have to remember SELECT 1 or SELECT 2 (now what did I have in #1 or #2?).
Instead you would choose the table by its alias, such as: SELECT tblfac
The rest of the code doesn't make much sense either.
* You choose table tblfac and SCAN its records for values
* Then you go to table tblpay and attempt to LOCATE one or more specific records
* If the tblpay record is NOT FOUND, you then use the values from tblfac (and other info) and INSERT a new record into tblpay using a SQL command (you could also have used the VFP commands APPEND BLANK followed by REPLACE)
* The DELETE that follows will delete the record that the pointer is currently on - however, the way your code is written, that might not be what you want.
The way it looks, if you have NOT FOUND() a matching record in tblpay, your record pointer is still in that table, but it is now at EOF() (End Of File) and not on any actual record, so an attempt to DELETE will not do anything.
In your VFP development environment, you should use the debug tools to actually 'see' which table the record pointer is in and which record it is on.
To do that you might want to use SET STEP ON in the following manner.
IF NOT FOUND()
INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
SET STEP ON && Added Here for Debug Purposes ONLY
DELETE
ENDIF
Then, when you execute the code in the VFP development environment and execution hits that line, it will suspend execution and open the Debug Trace window, allowing you to investigate the record pointer, etc.
Good Luck
What Dhugalmac has said is partially correct, but not entirely. If the record searched for is not found, then you insert a record and then delete that newly inserted record. The pointer is NOT at EOF() but on that new record.
As Dhugalmac said, do not use work area numbers but aliases instead. The code above is not the real code; it wouldn't run without an error.
If you are using this code and the text it is in for learning, stop reading it immediately and throw it away. The code is terrible and doesn't seem to have a purpose (besides having errors).
If your intent is to learn how to delete from VB.Net, just use VFPOLEDB and a DELETE - SQL command with ExecuteNonQuery (just as you would against SQL Server, PostgreSQL, MySQL ... any ANSI database). From VB.Net, most of the xBase commands have no place (nor do those DO WHILE ... SKIP ... ENDDO loops; you wouldn't use them even from within VFP).
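To illustrate the set-based pattern being recommended (connect, run one parameterized DELETE, read the affected-row count), here is a minimal Python/sqlite3 sketch; sqlite3 stands in for the VFPOLEDB connection, and the table and values are made up for the example:

```python
import sqlite3

# Stand-in for the OLE DB connection; sqlite3 plays the same role here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblpay (fy TEXT, index_no TEXT, pca TEXT)")
conn.executemany("INSERT INTO tblpay VALUES (?, ?, ?)",
                 [("15", "001", "A"), ("15", "002", "B")])

# One parameterized DELETE replaces the whole DO WHILE ... SKIP loop;
# this is the ExecuteNonQuery equivalent, returning the affected-row count.
cur = conn.execute("DELETE FROM tblpay WHERE fy = ? AND index_no = ?",
                   ("15", "001"))
print(cur.rowcount)  # 1
```

The same DELETE statement, with the same placeholders, is what you would hand to an OleDbCommand from VB.Net.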
Related
I've gotten to one of those places where I've been toying with something for a little while trying to figure out why it's not working, and figured I would ask here. I am currently in the middle of making adjustments to a batch process that involves creating an external table A used for staging and then transferring the data from that table over to Table B for further processing.
There's a step in the batch that was there before to load all that data and it goes like this:
INSERT INTO TABLE B SELECT * FROM TABLE A
Upon running this statement in batch and outside of it in Oracle Developer I get the following error:
Run query ORA-00932: inconsistent datatypes: expected DATE got NUMBER
I went through my adjustments line by line and made sure I had the right data types. I also went over the data itself as best I could, and from what I can tell it seems normal too. In an effort to find which individual field could be causing the error, I attempted to load the data from Table A to Table B one column at a time... Doing this I received no errors, which shocked me somewhat. If I use the SQL below, with all the fields listed out individually, the load of all the data works flawlessly. Can someone explain why this might be? Does the statement below perform some internal Oracle working that the previous one does not?
insert into TABLE B (
COLUMN_ONE,
COLUMN_TWO,
COLUMN_THREE
.
.
.)
select
COLUMN_ONE,
COLUMN_TWO,
COLUMN_THREE
.
.
.
from TABLE A;
Well, if you posted the descriptions of tables A and B, we could see it ourselves. As it is now, we have to trust what you're saying, i.e. that everything matches (but Oracle disagrees), so I don't know what to say.
On the other hand, I've learnt that using
INSERT INTO TABLE B SELECT * FROM TABLE A
is a poor way of handling things (unless it's quick & dirty testing). I try to always name all the columns I'm working with, no matter how many of them are involved in the operation. As you noticed, that seems to be working well for you too, so I'd suggest you keep doing it.
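The underlying reason is that INSERT ... SELECT * matches columns by position, not by name, so two tables whose columns are declared in a different order cross-wire the values (which is exactly what an ORA-00932 datatype mismatch smells like). A hedged sketch of the effect, reproduced in Python with SQLite and made-up column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, created TEXT, amount REAL);
    -- Same column NAMES as a, but declared in a different ORDER:
    CREATE TABLE b (id INTEGER, amount REAL, created TEXT);
    INSERT INTO a VALUES (1, '2015-06-01', 9.99);
""")

# SELECT * pairs columns by POSITION: a.created lands in b.amount.
conn.execute("INSERT INTO b SELECT * FROM a")
print(conn.execute("SELECT amount FROM b").fetchone()[0])  # '2015-06-01'

# Naming the columns pairs them correctly regardless of declared order.
conn.execute("DELETE FROM b")
conn.execute("INSERT INTO b (id, created, amount) "
             "SELECT id, created, amount FROM a")
print(conn.execute("SELECT amount FROM b").fetchone()[0])  # 9.99
```

SQLite silently stores the misplaced text; Oracle, being strictly typed, raises the inconsistent-datatypes error instead. Either way, naming the columns is the fix.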
Is there any way to enable counting of rows modified by a trigger in SQLite?
I know it is disabled (https://www.sqlite.org/c3ref/changes.html) and I understand why, but can I enable it somehow?
CREATE TABLE Users_data (
Id INTEGER PRIMARY KEY AUTOINCREMENT,
Deleted BOOLEAN DEFAULT (0),
Name STRING
);
CREATE VIEW Users AS
SELECT Id, Name
FROM Users_data
WHERE Deleted = 0;
CREATE TRIGGER UsersDelete2UsersData
INSTEAD OF DELETE
ON Users
FOR EACH ROW
BEGIN
UPDATE Users_data SET Deleted = 1 WHERE Id = OLD.Id;
END;
-- etc for insert & update
Then delete from Users where Name like 'foo' /* doesn't even need 'Id = 1' */; works fine, but the number of modified rows is, as the documentation says, always zero.
(I can't modify my DAL to automatically add "where Deleted = 0", so the backup plan is to have a table Users_deleted and an 'on delete' trigger on the Users table without any view, but then I have to keep tracking FKs (for example, what to do when someone deletes from an FK table) and so on...)
Edit: The returned number is used for checking database concurrency.
Edit2: To be more clear: as I said, I cannot modify my DAL (Entity Framework 6), so the preferred answer should work with the following pseudo code: int affectedRow = query("delete from Users where Name like 'foo';").Execute();
It's all about SQLite's "trigger on view" behavior.
Use sqlite3_total_changes() instead:
This function returns the total number of rows inserted, modified or deleted by all INSERT, UPDATE or DELETE statements completed since the database connection was opened, including those executed as part of trigger programs.
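A small Python/sqlite3 sketch of the difference, using the schema from the question: the statement-level count stays zero for a DELETE routed through an INSTEAD OF trigger, while the connection-level total (sqlite3_total_changes, exposed in Python as Connection.total_changes) does count the trigger's UPDATE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Users_data (
        Id INTEGER PRIMARY KEY AUTOINCREMENT,
        Deleted BOOLEAN DEFAULT (0),
        Name STRING
    );
    CREATE VIEW Users AS SELECT Id, Name FROM Users_data WHERE Deleted = 0;
    CREATE TRIGGER UsersDelete2UsersData
    INSTEAD OF DELETE ON Users
    FOR EACH ROW
    BEGIN
        UPDATE Users_data SET Deleted = 1 WHERE Id = OLD.Id;
    END;
    INSERT INTO Users_data (Name) VALUES ('foo'), ('bar');
""")

before = conn.total_changes
cur = conn.execute("DELETE FROM Users WHERE Name LIKE 'foo'")
print(cur.rowcount)                 # statement count: trigger work is ignored
print(conn.total_changes - before)  # connection total: the UPDATE is counted
```

Whether the connection-wide total is usable depends on the DAL: it only works if nothing else has run on the connection between capturing `before` and reading the total.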
It's impossible in SQLite3 (as of 2015).
Basically I was looking for an instead-of trigger on a view (as in the question) with a returned row count, which is not supported in SQLite.
By the way, PostgreSQL (and I believe some other full DB servers) can do it.
When I LOCATE a record in Visual FoxPro,
LOCATE FOR studentID = 1
IF FOUND()
DELETE
PACK
ENDIF
On the PACK command I get the error message "File must be open exclusively".
Do I have to close all tables before PACK?
To get exclusive use, you need to tell it so when you open...
select YourTable
use YourTable EXCLUSIVE
PACK
use YourTable SHARED
However, packing the table all the time is not efficient, and there is an easier way to "ignore" records marked for deletion:
Look into
SET DELETED ON
SET DELETED OFF
SET DELETED ON means HIDE ANY RECORDS that are marked for deletion, so you don't need to pack each time. SET DELETED OFF means SHOW ALL records, even those marked for deletion.
But also, having EXCLUSIVE use can cause an issue in a multi-user environment if others are using the table too. You would typically have some database maintenance routine that tries to get exclusive use on the tables and does them all during a cleanup process.
Also, LOCATE/FOUND is sort of old school; once you have the table open exclusively, just do:
delete from mytable where studentid=1
pack
Apex beginner here. I have a view in my Oracle database of the form:
create or replace view vw_awkward_view as
select unique tab1.some_column1,
tab2.some_column1,
tab2.some_column2,
tab2.some_column3
from table_1 tab1,
table_2 tab2
WHERE ....
I need the 'unique' clause on 'tab1.some_column1' because it has many entries in its underlying table. I also need to include 'tab1.some_column1' in my view because the rest of the data doesn't make much sense without it.
In Apex, I want to create a report on this view with a form for editing it (update only). I do NOT need to edit tab1.some_column1. Only the other columns in the view need to be editable. I can normally achieve this using an 'instead-of' trigger, but this doesn't look possible when the view contains a 'distinct', 'unique' or 'group by' clause.
If I try to update a row on this view I get the following error:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
How can I avoid this error? I want my 'instead-of' trigger to kick in and perform the update and I don't need to edit the column which has the 'unique' clause, so I think it should be possible to do this.
I think that you should be able to remove the "unique".
If tab2.some_column1, tab2.some_column2, tab2.some_column3 are not unique, then how do you want to update them?
If they are unique, then the whole result (tab1.some_column1, tab2.some_column1, tab2.some_column2, tab2.some_column3) is unique.
When you state "unique" or "distinct" in a SQL query, it applies to all columns, not only tab1.some_column1.
Hope I'm in the correct direction of your question here ;)
Your query could be achieved by doing something like:
select a.some_column1, tab2.some_column1, tab2.some_column2, tab2.some_column3
from table_2 tab2
join (select distinct some_column1 from table_1) a
on tab2.column_in_tab1 = a.some_column1
The reason you get the ORA-02014 error is because of the automatically generated ApplyMRU process. This process will attempt to lock a (the) changed row(s):
begin
for r in (select ...
from vw_awkward_view
where <your first defined PK column>= 'value for PK1'
for update nowait)
loop
null;
end loop;
end;
That's a bummer, and means you won't be able to use the generated process. You'll have to write your own process which does the updating.
For this, you'll have to use the F## arrays in apex_application.
If this sounds totally unfamiliar, take a look at:
Custom submit process, and on using the apex_application arrays.
Also, here is a how-to for apex from 2004 from Oracle itself. It still uses lots of htmldb references, but the gist of it is there.
(it might be a good idea to use the apex_item interface to build up your form, and have control over what is generated and what array it takes.)
What it comes down to is: loop over the array containing your items and do an UPDATE on your view with the submitted values.
Of course, you don't have locking this way, nor a way to prevent unnecessary updates.
Locking you can do yourself, with for example using the select for update method. You'd have to lock the correct rows in the table(s) you want to alter, before you update them. If the locking fails, then your process should fail.
As for the 'lost update' story: here you'd need to check the MD5 checksums. A checksum is generated from the editable columns in your form and put in the HTML code. On submit, this checksum is then compared to a newly generated checksum from those same columns, but with the values from the database at the time of submit. If the checksums differ, it means the record has changed between the page load and the page submit. Your process should fail because the record has been altered, and you don't want those changes overwritten. (If you go the apex_item way, then don't forget to include an MD5_CHECKSUM call (or MD5_HIDDEN).)
Important note though: checksums generated by either using apex_item or simply the standard form functionality build up a string to be hashed. As you can see in apex_item.md5_hidden, checksums are generated using DBMS_OBFUSCATION_TOOLKIT.MD5.
You can get the checksum of the values in the DB in 2 ways: wwv_flow_item.md5 or using dbms_obfuscation.
However, what the documentation fails to mention is this: OTN Apex discussion on MD5 checksums. Pipes are added in the generated checksums! Don't forget this, or it'll blow up in your face and you'll be left wondering for days what the hell is wrong with it.
Example:
select utl_raw.cast_to_raw(dbms_obfuscation_toolkit.md5(input_string=>
"COLUMN1" ||'|'||
"COLUMN2" ||'|'||
"COLUMN5" ||'|'||
"COLUMN7" ||'|'||
"COLUMN10" ||'|'||
"COLUMN12" ||'|'||
"COLUMN14" ||
'|||||||||||||||||||||||||||||||||||||||||||'
)) md5
from some_table
This gets the checksum of a row of the some_table table, where columns 1, 2, 5, 7, 10, 12, and 14 are editable.
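The pipe-joined MD5 idea is easy to sanity-check outside the database. A hedged Python sketch (hashlib.md5 standing in for DBMS_OBFUSCATION_TOOLKIT.MD5; the column values and the count of trailing pipes are illustrative, not Apex's exact internals):

```python
import hashlib

def row_checksum(values, padded_len=50):
    # Join the editable column values with pipes and pad with empty
    # trailing columns - this mimics the '|||...' suffix shown above.
    parts = [str(v) for v in values] + [""] * (padded_len - len(values))
    return hashlib.md5("|".join(parts).encode()).hexdigest()

# Checksum computed when the page was rendered:
loaded = row_checksum(["805", "2015", "A1"])

# Meanwhile another session changed the row; recompute at submit time:
current = row_checksum(["805", "2016", "A1"])

print(loaded == current)  # False: the record changed, so fail the update
```

The point of the padding is exactly the pitfall from the OTN thread: if your own checksum omits the trailing pipes that Apex appends, the two hashes will never match even for unchanged rows.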
In the end, this is how it should be structured:
* Loop over the array.
* Generate a checksum for the current values of the editable columns from the database.
* Compare this checksum with the submitted checksum (apex_application.g_fcs if generated). If the checksums match, proceed with the update; if not, fail the process here.
* Lock the correct records for updating. Specify NOWAIT, and if locking fails, fail the process.
* Update your view with the submitted values. Your instead-of trigger will fire. Be sure to use correct values in your update statement so that only this one record is updated.
* Don't commit in between. It's either all or nothing.
I almost feel like I went overboard, and it might all seem like a bit much, but once you know the pitfalls it's actually not so hard to pull off this custom process! I learned a lot from playing with it :p
The answer by Tom is a correct way of dealing with this issue, but I think it is overkill for your requirements if I understand them correctly.
The easiest way may be to create a form on the table you want to edit, then have the report's edit link take the user to this form, which will only update the needed columns from the one table. If you need the value of the column from the other table displayed, it is simple to pass this value to the form when you create the link; the form can contain a display-only item to show it.
I have a wrapper to the VFP TABLEUPDATE() function which, among other things, logs additions and changes to other tables that are made. The log table gets thrashed on occasion, due to multiple users saving and editing throughout the app, which results in a 'File is in use' error on my log table. The table is not open when the INSERT is called.
I am reasonably sure no process has the file open exclusively. Ideally, I want to:
Check and see if the file is available to open
Write to the file using INSERT INTO
Get out as fast as I can
Records are never edited, only INSERTed. Is there a way I can test the table before issuing the INSERT?
If you receive "File is in use" (Error 3), then according to the Visual FoxPro manual: "You have attempted a USE, DELETE, or RENAME command on a file that is currently open." You say DELETE and RENAME are out of the question, so it must be the USE IN SELECT("cTableName").
If EXCLUSIVE is OFF, there is no need to check whether the file is open.
Do not open the table before the INSERT. Just execute the INSERT and there will be no need to close the table afterwards.
And so you can get rid of the UNLOCK IN cTableName and the USE IN SELECT("cTableName").
My first thought is that you're holding the table open for too long, and any preliminary checks you add will just tie the table up for longer. Do you close the table after your INSERT?
You say that the log table isn't open at the start of the process. This means that Fox will silently open the table for you so that the SQL can run. Are you opening it exclusively, and are you explicitly closing it afterwards?
Have you tried locking the table in your insert routine?
IF FLOCK("mytable")
INSERT INTO ......
ELSE
WAIT WINDOW "Unable to lock"
ENDIF
Perhaps put this into a DO WHILE loop?
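That DO WHILE retry idea generalizes to a small lock-retry helper. A hedged Python sketch of the pattern (the function names and retry counts are made up; in VFP the try_insert callback would be the FLOCK() + INSERT INTO pair above):

```python
import time

def insert_with_retry(try_insert, attempts=5, delay=0.05):
    """Keep trying to lock-and-insert, pausing briefly between attempts;
    give up after a fixed number of tries instead of looping forever."""
    for _ in range(attempts):
        if try_insert():  # True once the lock was obtained and the row written
            return True
        time.sleep(delay)
    return False

# Simulate a log table that is busy for the first two attempts:
state = {"calls": 0}
def fake_insert():
    state["calls"] += 1
    return state["calls"] >= 3

print(insert_with_retry(fake_insert))  # True, on the third attempt
```

Bounding the attempts (rather than an open-ended DO WHILE) matters in a multi-user app: if the lock never frees, you want to surface the failure instead of hanging the save.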