VB6 application keeps a lock on an Access (.mdb) database after creation, causing error 3028

Our application builds an Access database (.mdb) and then uses the Shell command to start a different application, which needs read/write access to that same database. The problem is that on some systems our application erratically retains an exclusive lock on the database, preventing the other application from accessing it. Only after the first application is closed can the other application proceed.
The specific error raised is error 3028, which seems to be specific to DAO 3.51 (Access '97), which we indeed use. I cannot understand why some systems are affected (and even then not consistently) while others never are. I thought it might be a timing issue and built in a Sleep period between building the database and launching the other application, but that did not help.
What is going on?
EDIT:
I have now created a workaround: the database is built in a separate file, which is then copied. The second program should now always be able to access the copy, and any remaining lock problems will surface in the first program, which I maintain. I will follow up once our users have been able to test this.
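In VB6 terms the workaround boils down to something like this (BuildDatabase and the paths are illustrative placeholders, not our actual code):

' Build the .mdb in a scratch location, close every DAO object,
' then give the second program a fresh copy it can open freely.
BuildDatabase "C:\Temp\build.mdb"                        ' hypothetical helper
FileCopy "C:\Temp\build.mdb", "C:\App\data.mdb"
Shell "C:\App\Second.exe C:\App\data.mdb", vbNormalFocus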

Are you closing the connection to the DB before passing control to another EXE?
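For DAO that means releasing every object that touches the .mdb before the Shell call. A minimal sketch, assuming Recordset and Database variables named rs and db:

rs.Close                    ' close any open Recordset
Set rs = Nothing
db.Close                    ' close the Database object itself
Set db = Nothing
DBEngine.Idle dbFreeLocks   ' give the Jet engine a chance to release pending locks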

I previously had a similar issue which wasn't quite the same, but from what you have described this is the approach I would try.
Before launching the secondary application with the Shell command, alongside the sleep period you have already employed, you will also need to close the original program which generated the .mdb file.
I achieved this by shelling a Windows batch file and then immediately exiting the original program.
Batch file makeup as follows:
rem Wait roughly 5 seconds so the original process can exit and release the .mdb
ping -n 5 localhost >NUL
rem Launch the secondary application (MS Access in my case) against the database
start MSAccess.exe "C:\DB.mdb"
exit
This allows about 5 seconds for the .mdb file to be freed up before launching; you could replace my MS Access call with your secondary program.

Related

Avoiding Go panic when Oracle Database client libraries not found

I have a server written in Go that accesses an Oracle database. It works fine. However, there will be multiple instances running at different locations (currently 2), some of which do not need to access the database. (They get the same info passed on to them from their peer servers.)
I want the same executable running in all places, but some will be configured not to use the database since they don't need it. The problem is that once I import the OCI package, its init() function is called, which panics when it can't talk to the database.
I am running Go 1.12.5 on Windows Server 2019.
I tried adding OCI.DLL to the same directory as the .EXE but it still panics.
import _ "github.com/mattn/go-oci8"
When I run on the server (without DB drivers) I get the error:
panic: OCIEnvCreate error
goroutine 1 [running]:
github.com/mattn/go-oci8.init.0()
D:/Golang/src/github.com/mattn/go-oci8/globals.go:160 +0x130
I want to avoid this panic when I don't need database access. I would prefer one .EXE without the mess of conditional builds.
Swap to the Go goracle driver, which delays Oracle client library initialization until a connection is opened, precisely to handle your situation, where not all instances of the app connect to an Oracle DB.
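A minimal sketch of that approach (the DSN and the useDB flag are assumptions for illustration). Because goracle registers itself with database/sql and defers loading the Oracle client, merely importing it is safe on hosts without the libraries:

package main

import (
	"database/sql"
	"log"

	_ "gopkg.in/goracle.v2" // registers the "goracle" driver; no OCI init at import time
)

func main() {
	useDB := false // instances that don't need Oracle leave this switched off
	if !useDB {
		log.Println("running without database access")
		return
	}
	// The Oracle client libraries are only loaded when a connection is made.
	db, err := sql.Open("goracle", "user/password@host:1521/service")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err = db.Ping(); err != nil { // first real use of the client libraries
		log.Fatal(err)
	}
}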
As you said, adding the DLLs to the same directory as the exe solves your problem, so if you want to keep distributing a single file you can have the exe copy all of the DLLs over when it starts, and even delete them when it is done. This way you can transfer one file to multiple locations, but there is most likely no way to keep it a single file while running.

Erlang: Disable MnesiaCore upon Mnesia Errors

I have an application using the Mnesia database, running in a memory-constrained environment. Is there any way to disable the MnesiaCore file creation that happens by default when something goes wrong with Mnesia? Or is it possible to put a limit on the size of the file, or a constraint on the number of files?
Since the MnesiaCore file is created with a different name each time, it does not overwrite the previous one. This floods the working directory of my application with MnesiaCore files if the application crashes multiple times.

Can I delete .dmp and .phd files of Liberty Profile server?

In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. These files are between 130 MB and 1.3 GB in size, while my deployed app uses only 4 MB.
Can I delete these *.dmp and *.phd files?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryError, or you have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually you do this to capture the state of your application at the point where something goes wrong so that you can examine it with analysis tools and try to work out what went wrong.
For example, by default the IBM JVM will perform a dump when an OutOfMemoryError is thrown. This allows you to look at the dump file and see what is using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something's asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options which can be used to cause the JVM to create a dump in response to certain events. More information on that in the Knowledge Center.

PostgreSQL statistics issue - could not rename temporary statistics file

I am running PostgreSQL 9.4 on Windows, and constantly get the error:
2015-06-15 09:35:36 EDT LOG could not rename temporary statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": Permission denied
I also see constant 200-800k writes to global.stat and global.tmp. I have seen other users with the same issue, but no solution.
It is a big database server, with 300 GB of data and 6,000 databases.
I tried setting
track_activities = off
in the config file, but it did not seem to have any effect.
Any help with the error, or with reducing the writes?
After my initial answer, I decided to research the operation of the stats collector and in particular what it is doing with the files in pg_stat_tmp. I've substantially re-written the answer as a result.
What are the global.stat / global.tmp files used for?
PostgreSQL contains functionality to collect statistics and status information about its operation. The feature is described in Section 27.2 of the manual.
This information is collated by the stats collector process. It is made available to the other PostgreSQL processes via the global.stat file. The first time you run a query that accesses this data within a transaction, the backend you are connected to reads the global.stat file and caches the result, using it until the end of the transaction.
To keep this file up to date, the stats collector process periodically rewrites it with updated information, typically several times a second. The process (sketched in code below) is as follows:
Create a new file global.tmp
Write data to this file
Rename global.tmp as global.stat, overwriting the previous global.stat
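The same write-then-rename pattern in miniature (a generic sketch, not PostgreSQL source; the file names just mirror the ones above):

package main

import (
	"log"
	"os"
)

// writeStats mimics the collector's update cycle: write a fresh copy,
// then rename it over the live file in one step.
func writeStats(data []byte) error {
	tmp, err := os.Create("global.tmp") // step 1: new temporary file
	if err != nil {
		return err
	}
	if _, err := tmp.Write(data); err != nil { // step 2: write the data
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	// Step 3: replace the old file. On Windows this rename fails with a
	// permission error if a reader still has global.stat open.
	return os.Rename("global.tmp", "global.stat")
}

func main() {
	if err := writeStats([]byte("fresh stats")); err != nil {
		log.Fatal(err)
	}
}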
The global.tmp and global.stat files are written into the directory configured by the stats_temp_directory configuration parameter. Normally this is set to $PGDATA/pg_stat_tmp.
On shutdown, the stats file is written into the file $PGDATA/global/pgstat.stat, and the files in the tmp dir above are removed. This file is then read and removed when the database is started up again.
Why is the stats collector processor creating so much I/O load?
Normally, the amount of data written to global.stat is relatively modest and writing it does not generate much I/O traffic. However, under some circumstances it does seem to get very bloated. When this happens, the amount of load generated can become excessive, as the entire file is rewritten more than once a second.
I have had one experience where it grew by a factor of 10 or more compared to other similar servers. The machine did have an unusually large number of databases (for our application at least: 30-40 databases, but nothing like the 6,000 you say you have). It is possible that having a large number of databases exacerbates this.
Some of the references below describe a pattern of creating and dropping lots of tables causing bloat in these files, and suggest that autovacuum may not be running aggressively enough to remove the associated bloat. You may wish to review your autovacuum settings.
Why do I get 'Permission Denied' errors on Windows?
After examining the PostgreSQL source code, I think there may be a race condition in accessing the global.stat file which can happen at any time, but is exacerbated by the size of the file.
The default mode of operation in Windows is that it is not possible to rename or remove a file while another process has it open. This is different to Linux (or Unix) where a file can be renamed or removed while other processes are accessing it.
In the sequence above you can see that if one of the backend processes is reading the file at the same time as the stats collector is rewriting it, then the backend process may still have the file open at the time the rename is attempted. That leads to the 'Permission Denied' error you are seeing.
Naturally when the file becomes very large, then the amount of time taken to read it becomes more significant, therefore the probability of the stats collector process attempting a rename while a backend still has it open increases.
However, since the file is frequently rewritten, the impact of these errors is relatively mild. It just means that this particular update fails, leaving the backends with slightly out-of-date statistics. The next update will probably succeed.
Note that Windows does offer a file-opening mode (the FILE_SHARE_DELETE sharing flag) which allows a file to be deleted or renamed while it is open in another process, but as far as I could tell this mode is not used by PostgreSQL. I could not find any bug report on this; it seems like it should be reported.
In summary, these errors are a side effect of the main problem, which is the excessive size of the global.stat file.
I've turned track_activities off but the file is still being written - Why?
From what I can see, track_activities affects only one of the sets of information that the stats collector collects.
In addition, it looks as though the stats collector process is started regardless of these settings, and will continue to re-write the file. The settings appear to control only the collection of fresh data.
My conclusion is that once the file has become bloated, it will remain so and continue to be re-written, even once all of the stats collection options are turned off.
What can I do to avoid this problem?
Once the file has become bloated, it seems that the easiest way to get the database back into a good working state is to remove the file, using the following steps:
Stop the database
When the DB is stopped, the pg_stat_tmp directory is empty and a file $PGDATA/global/pgstat.stat is written. We renamed this file to pgstat.stat.old.
Start the database. It creates a fresh set of pgstat files. After confirming the server is operating correctly, you can remove the old file you renamed.
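On Windows the steps might look like this from a command prompt (the data directory path is an assumption; adjust it to your install, or use the Services applet / net stop to stop and start the service):

pg_ctl stop -D "C:\PGDATA"
ren "C:\PGDATA\global\pgstat.stat" pgstat.stat.old
pg_ctl start -D "C:\PGDATA"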
This is the process we used when one of our servers suffered from this problem.
Needless to say be very careful when manually manipulating any files under the Postgresql Data directory.
After this you may want to monitor the server to see if the file becomes bloated again. If it does, then here are some additional ideas to consider:
As mentioned above, I have seen some references to this file becoming bloated if autovacuum is not running aggressively enough. You may wish to tune the autovacuum settings.
Disabling any of the track_xxx options described in Section 18.9.1 of the manual which are not required may help.
It is possible to place the pg_stat_tmp directory on a tmpfs filesystem (or whatever equivalent RAM-based filesystem is available on Windows). Doing so should eliminate I/O as a concern for these files.
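For instance, with a RAM disk mounted as R: the relevant postgresql.conf line might read (drive letter and path are assumptions):

stats_temp_directory = 'R:/pg_stat_tmp'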
References:
Postgres stats collector showing high disk I/O
Too much I/O generated by postgres stats collector process
stats collector suddenly causing lots of IO
This might be a solution for your problem: https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug
Another possibility could be your antivirus settings. Try turning it off temporarily.
It happened to me a few days ago. I rebooted the machine, but the error did not disappear.
I don't know why, but performing a VACUUM ANALYZE VERBOSE did the trick, and the error has stopped showing up.

What to do when a file is left open because a remote application crashes or forgets to close it

I have not worked much with files, so I am wondering about possible issues with accessing remote files on another computer. What if the distant application crashes and doesn't close the file?
My aim is to use this win32 function:
HFILE WINAPI OpenFile(LPCSTR lpFileName, LPOFSTRUCT lpReOpenBuff, UINT uStyle);
Using the flag OF_SHARE_EXCLUSIVE assures me that any concurrent access will be denied (because several machines write to this file from time to time).
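For reference, a minimal sketch of that call (the UNC path is a placeholder; note that Microsoft recommends CreateFile with sharing flags for new code):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    OFSTRUCT of = {0};
    /* Open for exclusive access: any other attempt to open the file
       fails until this handle is closed. */
    HFILE hf = OpenFile("\\\\server\\share\\data.bin", &of,
                        OF_READWRITE | OF_SHARE_EXCLUSIVE);
    if (hf == HFILE_ERROR) {
        printf("OpenFile failed, error %u\n", of.nErrCode);
        return 1;
    }
    /* ... read/write via _lread/_lwrite, or cast to a HANDLE ... */
    _lclose(hf);
    return 0;
}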
But what if the file is left open, for example because the application crashed?
How do I put the file back to normal?
What if the distant application crashes and doesn't close the file?
Then the O/S should close the file when it cleans up after the "crashed" application.
This won't help with a "hung" application (an application which stays open but does nothing forever).
I don't know what the issues are with network access: for example, if the network connection disappears while a client has the file open (or if the client machine switches off or reboots). I'd guess there are timeouts which might eventually close the file on the server machine, but I don't know.
It might be better to use a database engine instead of a file: because database engines are explicitly built to handle concurrent access, locking, timeouts, etc.
I came across the same problem using VMware, which sometimes does not release file handles on the host when files are closed on the guest.
You can close such handles using the handle utility from www.sysinternals.com.
First determine the file handle id by passing part of the filename; handle will show all open files where the given string matches part of the file name:
D:\sysinternals\>handle myfile
deadhist.exe pid: 744 3C8: D:\myfile.txt
Then close the handle using the parameters -c and -p:
D:\sysinternals\>handle -c 3c8 -p 744
3C8: File (---) D:\myfile.txt
Close handle 3C8 in LOCKFILE.exe (PID 744)? (y/n) y
Handle closed.
handle does not care about the application holding the file handle. You are now able to reopen, remove, rename, etc. the file.
