When I LOCATE a record in Visual FoxPro:
LOCATE FOR studentID = 1
IF FOUND()
DELETE
PACK
ENDIF
The PACK command fails with the error message "File must be opened exclusively".
Do I have to close all tables before issuing PACK?
To get exclusive use, you need to ask for it when you open the table:
SELECT YourTable
USE YourTable EXCLUSIVE
PACK
USE YourTable SHARED
However, packing the table every time is not efficient, and there is an easier way to "ignore" records marked for deletion.
Look into
SET DELETED ON
SET DELETED OFF
SET DELETED ON means: hide any records that are marked for deletion, so you don't need to PACK each time. SET DELETED OFF means: show all records, even those marked for deletion.
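A minimal sketch of that pattern (the table and field names are placeholders, not from the original question):

```
USE students SHARED        && no exclusive access needed
SET DELETED ON             && deleted records are hidden from queries
LOCATE FOR studentID = 1
IF FOUND()
    DELETE                 && mark the record; it disappears from view
ENDIF
* PACK can be deferred to an off-hours maintenance job
```

With SET DELETED ON, the marked record no longer shows up in browses, LOCATEs, or SQL SELECTs, so the physical PACK can wait until a quiet maintenance window.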
But also, having EXCLUSIVE use can cause an issue in a multi-user environment if others are using the table too. You would typically have some database maintenance routine that would try to check for exclusive use on tables and do them all during a cleanup process.
Also, LOCATE/FOUND() is somewhat old-school; once you have the table open exclusively, just do:
DELETE FROM mytable WHERE studentid = 1
PACK
I'm going through some Visual FoxPro 7.0 code, which is new to me. I'm having a bit of trouble deciphering a DELETE command inside a DO WHILE loop that uses two different work areas.
I'm just not clear on what is actually being deleted. When the DELETE command is issued, I'm using work area 1, but I'm not looping through any records. If the DELETE command IS being used against tblpay (work area 1), then it seems that it's deleting the record that was just inserted, which makes no sense. Can someone clue me in, please?
select 1 (tblpay)
USE tblpay
select 2 (tblfac)
USE tblfac
GOTO top
DO WHILE NOT EOF()
lcfy = fy
lcindex_no = index_no
lcpca = pca
lnpys = padl(alltrim(str(cum_pys)),13,' ')
select 1 (tblpay)
LOCATE FOR fy = lcfy AND index_no = lcindex_no AND pca = lcpca
IF NOT FOUND()
INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
DELETE
ENDIF
select 2 (tblfac)
SKIP
ENDDO
Admittedly the code you show is not very clear.
Some suggested changes:
select 1 (tblpay)
USE tblpay
should be
USE tblpay IN 0 && Open table tblpay into next available workspace
and
select 2 (tblfac)
USE tblfac
should be
USE tblfac IN 0 && Open table tblfac into next available workspace
Then you would no longer have to remember SELECT 1 or SELECT 2 ("now, what did I have in #1 or #2?").
Instead you would select the table by its alias, such as: SELECT tblfac
The rest of the code doesn't make much sense either.
* You select table tblfac and scan its records for values.
* Then you go to table tblpay and attempt to LOCATE one or more specific records.
* If the tblpay record is NOT FOUND(), you then use the values from tblfac (and other info) and INSERT a new record into tblpay using a SQL command (you could also have used the VFP commands APPEND BLANK followed by REPLACE).
* The DELETE that follows will delete the record that the table is currently pointing to - however, the way your code is written, that might not be what you want.
The way it looks, if you have NOT FOUND() a matching record in tblpay, your record pointer is still pointing to that table, but it is now at EOF() (End Of File) and not at any actual record, and an attempt to DELETE will not do anything.
In your VFP development environment, you should use the debug methods to actually 'see' which table and which record the record pointer is 'looking' at.
To do that you might want to use SET STEP ON in the following manner:
IF NOT FOUND()
INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
SET STEP ON && Added Here for Debug Purposes ONLY
DELETE
ENDIF
Then when you execute the code in the VFP Development mode and execution hits that line, it will Suspend execution and open the Debug TRACE window - thereby allowing you to investigate the record pointer, etc.
Good Luck
What Dhugalmac has said is partially correct, but not entirely. If the searched record is not found, then you are inserting a record and then deleting that newly inserted record. The pointer is NOT at EOF() but on that new record.
As Dhugalmac said, do not use work area numbers but aliases instead. The code above is not the real code; it wouldn't run without an error.
If you are using this code and the text it is in for learning, stop reading it immediately and throw it away. The code is terrible and doesn't seem to have a purpose (besides having errors).
If your intent is to learn how to delete from VB.Net, just use VFPOLEDB and a DELETE - SQL command with ExecuteNonQuery (just as you would do against SQL Server, PostgreSQL, MySQL ... any ANSI database). With VB.Net, most of the xBase commands have no place (nor do those DO WHILE ... SKIP ... ENDDO loops - you wouldn't even use them from within VFP).
I need to ZAP a table in order to refill it with fresh data. I set Exclusive ON, I USE the table EXCLUSIVE, and yet, when I try to ZAP the table I get the error message "File must be opened exclusively".
What am I doing wrong?
The table is not opened anywhere else, according to the Watch window Exclusive is really ON and the table IS opened exclusively, and I get the same result with DELETE ALL/PACK as with ZAP.
Thank you.
/bernard
Bernard, that is not counterproductive. USE ... EXCLUSIVE will ALWAYS override the SET EXCLUSIVE setting, so even with SET EXCLUSIVE OFF, a USE table EXCLUSIVE will always work.
Now, the issue is that if some other user has the program and table open, the attempt to open the table exclusively (which is what allows the PACK to occur) should FAIL.
Try to set exclusive mode before opening the file:
SET EXCLUSIVE ON
USE "D:\somefile.dbf"
ZAP
Note that ZAP takes no filename; it operates on the table open in the current work area.
I have a wrapper to the VFP TABLEUPDATE() function which, among other things, logs additions and changes to other tables that are made. The log table gets thrashed on occasion, due to multiple users saving and editing throughout the app, which results in a 'File is in use' error on my log table. The table is not open when the INSERT is called.
I am reasonably sure no process has the file opened exclusively. Ideally, I want to:
1. Check and see if the file is available to open
2. Write to the file using INSERT INTO
3. Get out as fast as I can
Records are never edited, only INSERTed. Is there a way I can test the table before issuing the INSERT?
If you receive "File is in use" (Error 3), then according to the Visual FoxPro manual, you have attempted a USE, DELETE, or RENAME command on a file that is currently open. You say DELETE and RENAME are out of the question, so it must be the USE IN SELECT("cTableName").
If EXCLUSIVE is OFF, there is no need to check whether the file is open.
Do not open the table before the INSERT. Just execute the INSERT, and there will be no need to close the table afterwards.
That way you can also get rid of the UNLOCK IN cTableName and the USE IN SELECT("cTableName").
My first thought is that you're holding the table open for too long and that any preliminary checks that you add will just tie the table up for longer. Do you close the table after your INSERT?
You say that the log table isn't open at the start of the process. This means that Fox will open the table for you silently so that the SQL can run. Are you opening it exclusive and are you explicitly closing it afterwards?
Have you tried locking the table in your insert routine?
IF FLOCK("mytable")
INSERT INTO ......
ELSE
WAIT WINDOW "Unable to lock"
ENDIF
Perhaps put this into a DO WHILE loop?
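A minimal retry sketch along those lines (the retry count, timeout, and table/field names are placeholders, not from the original answer):

```
lnTries = 0
DO WHILE lnTries < 5
    IF FLOCK("mytable")
        INSERT INTO mytable (logtime) VALUES (DATETIME())
        UNLOCK IN mytable
        EXIT                && success, leave the loop
    ENDIF
    lnTries = lnTries + 1
    WAIT WINDOW "Table locked, retrying..." TIMEOUT 1
ENDDO
```

This keeps the lock held only for the duration of the INSERT and gives up after a bounded number of attempts instead of blocking the user indefinitely.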
How do I see what pending changes have been made to an SPFILE before I bounce the database? I know I can see changes in alert log, but it may have been a few months when the change had been made.
For example:
alter system set sga_max_size=1024M scope=spfile;
This doesn't become active until the next bounce.
You'll get some noise in the results from this for various reasons, but you can get close by:
select name, value from v$spparameter where isspecified = 'TRUE'
minus
select name, value from v$parameter;
I don't know of an official view that does this, but it ought to be feasible to read the spfile as an external table and join it to v$parameter.
In 11g you can do:
CREATE PFILE='dir/init_current.ora' FROM MEMORY;
and
CREATE PFILE='dir/init_spfile.ora' FROM SPFILE;
and then just compare these text files (sort lines in both files first if necessary).
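For example, the two exported files could be compared like this on the database host (the `dir/` paths are the ones used in the CREATE PFILE commands above):

```shell
# sort both exported parameter files, then compare the sorted copies
sort dir/init_current.ora > /tmp/init_current.sorted
sort dir/init_spfile.ora  > /tmp/init_spfile.sorted
diff /tmp/init_current.sorted /tmp/init_spfile.sorted
```

Lines prefixed with `<` are in-memory values; lines prefixed with `>` are the SPFILE values that will take effect at the next bounce.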
Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "special admin stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
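As a sketch, the incremental query could look like this (the table name and bind variable are placeholders; remember that without ROWDEPENDENCIES the SCN is block-level, so some already-processed rows may reappear):

```sql
-- Fetch only rows whose (block-level) SCN is newer than the last run's high-water mark
SELECT ora_rowscn, t.*
  FROM myTable t
 WHERE ora_rowscn > :last_seen_scn;
```

After each run, store MAX(ora_rowscn) from the fetched rows as the next value of :last_seen_scn.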
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. They can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
Google "Oracle auditing" for more info...
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
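A hedged sketch of the idea (the table and column names are invented for illustration; ORA_HASH is a real built-in, but check NULL handling and column delimiters against your own data):

```sql
-- Combine per-row hashes into a single table-level checksum:
-- SUM over ORA_HASH of a delimited concatenation of the columns of interest.
SELECT SUM(ORA_HASH(col1 || '|' || col2 || '|' || col3)) AS table_checksum
  FROM my_table;
```

Store the checksum locally after each export; if the next run produces the same value, the export can be skipped.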
Oracle can watch tables for changes and, when a change occurs, can execute a callback function in PL/SQL or OCI. The callback gets an object that is a collection of the tables which changed; each has a collection of the rowids which changed, along with the type of action (insert, update, or delete).
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than CDC as Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these require changes to the APPLICATION.
The caveat is that CDC is fine for high volume tables, DCN is not.
If table monitoring is enabled on the server (it feeds ALL_TAB_MODIFICATIONS and is on by default in 10g), simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, delete that sets a value in another table to sysdate.
When you run application, it would read the value and save it somewhere so that the next time it is run it has a reference to compare.
Would you consider that "Special Admin Stuff"?
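A minimal sketch of such a trigger (all object names here are invented; the one-row log table must be created first):

```sql
-- One-row bookkeeping table, created once
CREATE TABLE table_change_log (
  table_name  VARCHAR2(30) PRIMARY KEY,
  last_change DATE
);

-- Statement-level trigger: stamp the log row on any DML against my_table
CREATE OR REPLACE TRIGGER trg_my_table_touch
AFTER INSERT OR UPDATE OR DELETE ON my_table
BEGIN
  UPDATE table_change_log
     SET last_change = SYSDATE
   WHERE table_name = 'MY_TABLE';
END;
/
```

The batch application then only needs a SELECT on table_change_log to decide whether anything changed since its last run.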
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature, which came with Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners that trigger a notification back to the application.
Please use the below statement
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'