Erlang: Disable MnesiaCore upon Mnesia Errors - debugging

I have an application using the Mnesia database, running in a memory-constrained environment. Is there any way to disable MnesiaCore file creation when something goes wrong with Mnesia, which is the default behaviour? Or is it possible to put a limit on the size of the file, or a cap on the number of files?
Since the MnesiaCore file is created with a different name each time, it will not overwrite the previous one. This floods the working directory of my application with MnesiaCore files if the application has crashed multiple times.

Related

Can I delete .dmp and .phd files of Liberty Profile server?

In the folder <WAS Liberty Profile root>\<profile>\usr\servers\defaultServer there are many files named core.*.dmp and heapdump.*.phd. The size of these files is between 130 MB and 1.3 GB, while my deployed app uses only 4 MB.
Can I delete these *.dmp and *.phd files?
What are these files for?
Short answer: yes, it's safe to delete them, but you should find out why they're appearing, as it could indicate that your application is not running correctly.
If your dump files were created a long time ago, or you know you were debugging an OutOfMemoryError or have been running server javadump --include=heap,system, then go ahead and delete the files. If, however, you keep getting new dump files and don't know why, then read on.
The core and heapdump files contain a snapshot of the memory of the application from a specific point in time. Usually you do this to capture the state of your application at the point where something goes wrong so that you can examine it with analysis tools and try to work out what went wrong.
For example, by default the IBM JVM will perform a dump when an OutOfMemoryError is thrown. This allows you to look at the dump file and see what's using up all the memory.
If you have a corresponding javacore file, the fourth line or so should say why the memory dump was made.
e.g. 1TISIGINFO Dump Requested By User (00100000) Through com.ibm.jvm.Dump.javaDumpToFile (caused by running server javadump)
or 1TISIGINFO Dump Event "user" (00004000) received (caused by running kill -3)
If it's a "user" event, then something is asking the JVM to create a dump. If not, and you're still not sure what's causing it, check your jvm.options file for any -Xdump options, which can be used to make the JVM create a dump in response to certain events. There is more information on that in the Knowledge Center.
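If, once you have found the cause, you decide you really do want to tune or suppress those agents, the jvm.options entries might look roughly like the sketch below. This is illustrative only - check the -Xdump documentation for your exact IBM JVM level before relying on it.
# Print the registered dump agents at startup so you can see what is configured
-Xdump:what
# Remove the default heapdump (.phd) and system dump (core.*.dmp) agents
-Xdump:heap:none
-Xdump:system:none
# Optionally re-add a heapdump that fires only on OutOfMemoryError
-Xdump:heap:events=systhrow,filter=java/lang/OutOfMemoryError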

PostgreSQL statistics issue - could not rename temporary statistics file

I am running PostgreSQL 9.4 on Windows, and constantly get the error:
2015-06-15 09:35:36 EDT LOG could not rename temporary statistics file "pg_stat_tmp/global.tmp" to "pg_stat_tmp/global.stat": Permission denied
I also see constant 200-800k writes to global.stat and global.tmp. I have seen other users with the same issue, but no solution.
It is a big database server, with 300 GB of data and 6,000 databases.
I tried setting,
track_activities=off
in the config file, but it did not seem to have any effect.
Any help with the error, or with reducing the writes?
After my initial answer, I decided to research the operation of the stats collector and in particular what it is doing with the files in pg_stat_tmp. I've substantially re-written the answer as a result.
What are the global.stat / global.tmp files used for?
PostgreSQL contains functionality to collect statistics and status information about its operation. This functionality is described in Section 27.2 of the manual.
This information is collated by the stats collector process. It is made available to the other PostgreSQL processes via the global.stat file. The first time you run a query that accesses this data within a transaction, the backend you are connected to will read the global.stat file and cache the result, using it until the end of the transaction.
To keep this file up to date, the stats collector process periodically re-writes it with updated information. It typically does this several times a second. The process is as follows:
Create a new file global.tmp
Write data to this file
Rename global.tmp as global.stat, overwriting the previous global.stat
The global.tmp and global.stat files are written into the directory configured by the stats_temp_directory configuration parameter. Normally this is set to $PGDATA/pg_stat_tmp.
On shutdown, the stats file is written into the file $PGDATA/global/pgstat.stat, and the files in the tmp dir above are removed. This file is then read and removed when the database is started up again.
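To see where these files live on your server, you can ask PostgreSQL directly; for example, from psql:
SHOW data_directory;
SHOW stats_temp_directory;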
Why is the stats collector process creating so much I/O load?
Normally, the amount of data written to global.stat is relatively modest and writing it does not generate that much I/O traffic. However, under some circumstances it does seem to get very bloated. When this happens, the amount of load generated can start to get excessive, as the entire file is rewritten more than once a second.
I have had one experience where it grew by a factor of 10 or more compared to other similar servers. This machine did have an unusually large number of databases (for our application at least - 30-40 databases - but nothing like the 6,000 you say you have). It is possible that having a large number of databases exacerbates this.
Some of the references below talk about a pattern of creating/dropping lots of tables causing bloat in these files, and suggest that perhaps autovacuum is not running aggressively enough to remove the associated bloat. You may wish to review your autovacuum settings.
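A quick way to gauge whether you are affected is to check the current size of the stats file and review the autovacuum settings. A sketch, assuming the default stats_temp_directory (pg_stat_file requires superuser rights, and the relative path is resolved against the data directory):
-- Size of the statistics file, in bytes
SELECT size FROM pg_stat_file('pg_stat_tmp/global.stat');
-- Current autovacuum-related settings
SELECT name, setting FROM pg_settings WHERE name LIKE 'autovacuum%';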
Why do I get 'Permission Denied' errors on Windows?
After examining the PostgreSQL source code, I think there may be a race condition in accessing the global.stat file which could happen at any time, but which is exacerbated by the size of the file.
The default mode of operation in Windows is that it is not possible to rename or remove a file while another process has it open. This is different to Linux (or Unix) where a file can be renamed or removed while other processes are accessing it.
In the sequence above you can see that if one of the backend processes is reading the file at the same time as the stats collector is rewriting it, then the backend process may still have the file open at the time the rename is attempted. That leads to the 'Permission Denied' error you are seeing.
Naturally when the file becomes very large, then the amount of time taken to read it becomes more significant, therefore the probability of the stats collector process attempting a rename while a backend still has it open increases.
However, since the file is frequently being rewritten, the impact of these errors is relatively mild. It just means that this particular update fails, leaving the backends with slightly out-of-date statistics. The next update will probably succeed.
Note that Windows does offer a file-opening mode which allows files to be deleted or renamed while they are open in another process; however, as far as I could tell, this mode is not used by PostgreSQL. I could not find any bug report on this - it seems like it should be reported.
In summary, these errors are a side effect of the main problem, which is the excessive size of the global.stat file.
I've turned track_activities off but the file is still being written - Why?
From what I can see, track_activities affects only one of the sets of information that the stats collector is collecting.
In addition, it looks as though the stats collector process is started regardless of these settings, and will continue to re-write the file. The settings appear to control only the collection of fresh data.
My conclusion is that once the file has become bloated, it will remain so and continue to be re-written, even once all of the stats collection options are turned off.
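For reference, these are the relevant postgresql.conf settings (a sketch of what disabling them would look like, not a recommendation to turn them all off - note in particular that autovacuum depends on track_counts):
track_activities = off
track_counts = off        # autovacuum needs this, so disabling it effectively disables autovacuum
track_io_timing = off
track_functions = none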
What can I do to avoid this problem?
Once the file has become bloated, it seems that the easiest way to get the database back into a good working state is to remove the file, using the following steps:
Stop the database
When the DB is stopped, the pg_stat_tmp directory is empty and a file $PGDATA/global/pgstat.stat is written. We renamed this file to pgstat.stat.old.
Start the database. It creates a fresh set of pgstat files. After confirming the server was operating correctly you can remove the old file you have renamed.
This is the process we used when one of our servers suffered from this problem.
Needless to say, be very careful when manually manipulating any files under the PostgreSQL data directory.
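On Windows the whole sequence might look something like this from an elevated command prompt - the service name and data-directory path are illustrative and will differ on your installation:
net stop postgresql-x64-9.4
ren "C:\PostgreSQL\9.4\data\global\pgstat.stat" pgstat.stat.old
net start postgresql-x64-9.4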
After this you may want to monitor the server to see if the file becomes bloated again. If it does, then here are some additional ideas to consider:
As mentioned above, I have seen some references to this file becoming bloated if autovacuum is not running aggressively enough. You may wish to tune the autovacuum settings.
Disabling any of the track_xxx options described in Section 18.9.1 of the manual that are not required may help.
It is possible to place the pg_stat_tmp directory on a tmpfs filesystem (or whatever equivalent RAM-based filesystem is available in Windows). Doing so should eliminate I/O as a concern for these files.
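For that last idea, the setting to change is stats_temp_directory in postgresql.conf; the paths below are purely illustrative (a tmpfs mount on Linux, or whatever RAM-disk drive letter you have created on Windows):
stats_temp_directory = '/run/postgresql/pg_stat_tmp'
# on Windows, e.g.: stats_temp_directory = 'R:\pg_stat_tmp'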
References:
Postgres stats collector showing high disk I/O
Too much I/O generated by postgres stats collector process
stats collector suddenly causing lots of IO
This might be a solution for your problem: https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug
Another possibility could be antivirus settings. Try to turn it off temporarily.
It happened to me a few days ago. I rebooted the machine, but the error did not disappear.
I don't know why, but performing a VACUUM VERBOSE ANALYZE did the trick, and the error has stopped showing up.
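For anyone wanting to try the same thing, the statement is simply the following (it scans every table, so it can take a while on a big database):
VACUUM VERBOSE ANALYZE;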

VB6 application keeps lock on Access (.mdb) database after creation, causing an error 3028

Our application builds an Access database (.mdb) and then starts a different application with the Shell command, which needs read/write access to this very database. The problem is that on some systems our application seems to erratically retain an exclusive lock on the database, preventing the other application from accessing it. Only after closing down the first application can the other application proceed.
The specific error that is raised is Error 3028, which seems to be specific to DAO 3.51 (Access '97), which we indeed use. I cannot understand why some systems are affected (and even then not consistently) while others never are. I thought it might be a timing issue and built in a Sleep period between building the database and launching the other application, but that does not help.
What is going on?
EDIT:
I now created a workaround by creating the database in a separate file and then copying it. Now the second program should always be able to access it and any remaining lock problems will surface in the first program, which I maintain. I will follow up later when our users have been able to test this.
Are you closing the connection to the DB before passing control to another EXE?
I had a similar issue previously. It wasn't quite the same, but from what you have described, this is the approach I would try:
Before launching the secondary application with the Shell command, alongside the sleep period you have already employed, you will also need to close the original program which generated the .mdb file.
I achieved this by shelling a windows batch file, and then immediately exiting the original program.
The batch file is made up as follows:
REM Wait roughly five seconds for the .mdb to be released (5 pings, about one second apart)
ping -n 5 localhost >NUL
REM Launch the secondary application (MS Access shown here as an example)
start MSAccess.exe "C:\DB.mdb"
exit
This allows about five seconds for the .mdb file to be freed up before launching; you could replace my MS Access call with your secondary program.

Read application log written on Windows Azure

I have 10 applications that all use the same logic to write their log to a text file located in the application root folder.
I have another application which reads the log files of all the applications and shows the details in a web page.
Can the same be achieved on Windows Azure? I don't want to use the 'DiagnosticMonitor' APIs, as I cannot change the logging logic of the applications.
Thanks,
Aman
Even if this is technically possible, it is not advisable, as the Fabric Controller can re-create any role at a whim (well, with good reasons, but unpredictably nonetheless), and whenever this happens you will lose any files stored locally on a role.
So - primarily you should be looking for a different place to store those logs, and there are many options, but all require that you change the logging logic of the application.
You could do this, but aside from the issue Yossi pointed out (the log would be ephemeral; it could get deleted at any time), you'd have a different log file on each role instance (VM). That means when you hit your web page to view the log, you'd see whatever happened to be on the log on that particular VM, instead of what you presumably want (a roll-up of the log files across all VMs).
Windows Azure Diagnostics could help, since you can configure it to copy log files off to blob storage (so no need to change the logging). But honestly I find Diagnostics a bit cumbersome for this. It will end up creating a lot of different blobs, and you'll have to change the log viewer to read all those blobs and combine them.
I personally would suggest writing a separate piece of code that monitors the log file and, for each new line, stores the line as an entity (row) in table storage. This bit of code could be launched as a startup task and just run continuously as a separate process (leaving everything else unchanged). Then modify the log viewer to read the last n entities from table storage and display them.
(I'm assuming you can modify the log viewer even if you can't modify the apps that log to the file.)
What about writing the logs to something like an Azure Storage table? You just need to define a unique PartitionKey/RowKey, and then you can easily retrieve the log for the web page.

Transaction Log files in edb database

In my attempt to extract data (dumps and selective reading of columns) from a diverse collection of edb databases, I was faced with a fundamental problem. I have an edb database that comes with a couple of log files. I know what information there is within the database, but I only get half of it extracted. I fear that the remaining half sleeps somewhere in the log files. I assumed the EDB engine knows where the log files are and automagically loads them when attaching the database (JET_paramSystemPath, JET_paramLogFilePath and JET_paramBaseName are properly set). Is that a wrong assumption? If so, what should I do to have the logs loaded as well?
Alternatively, would it be possible to simply commit the transactions to the EDB file and get rid of the logs?
If there are uncommitted transactions then the database will be marked as 'inconsistent'. You can check this using ESENTUTL /MH against the database. Calling JetAttachDatabase against an inconsistent database will always fail.
So, if your program is able to attach and open the database then it is consistent. There are two ways a database can be made consistent:
A clean shutdown of ESENT.
Running recovery using the logfiles at JetInit time.
The first thing that JetInit does is to look for the logfiles specified by JET_paramLogFilePath and JET_paramBaseName. Logfiles contain the full paths of the database(s) they reference and the transactions in the logfiles are then committed to the database(s). So, if you set the system parameters properly then ESENT will load the logs when attaching the database.
On the other hand, if you don't set the parameters properly, then your program will still work, but only on databases that don't require recovery. JetInit won't find any logfiles, so it won't do anything, and the attach will succeed because the database is consistent.
One further twist is that the logfiles contain the full path to the database. This means that if you have copied/moved the database then recovery will not work. Starting with Windows Server 2003 you can deal with this by setting JET_paramAlternateDatabaseRecoveryPath to the directory containing the database.
Important: to be safe you should always attach and open the database using the read-only flags. This will avoid any problems caused by bad logfile settings. A common problem is that read-only applications end up creating a set of logfiles in a different directory which prevent the database from being recovered properly.
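To make those points concrete, here is a minimal C sketch of a read-only attach with the recovery parameters set. The paths, base name and instance name are illustrative, error handling is reduced to a bare check, and depending on your SDK headers you may need to define JET_VERSION high enough for JET_paramAlternateDatabaseRecoveryPath to be declared:
#include <windows.h>
#include <esent.h>   /* link with esent.lib */
#include <stdio.h>
#include <stdlib.h>

static void check(JET_ERR err, const char *what)
{
    if (err < JET_errSuccess) {   /* negative values are errors, positive are warnings */
        fprintf(stderr, "%s failed with error %ld\n", what, (long)err);
        exit(1);
    }
}

int main(void)
{
    JET_INSTANCE instance = 0;
    JET_SESID sesid = 0;
    JET_DBID dbid = 0;

    /* Tell recovery where the logfiles (edb*.log) and checkpoint live. */
    check(JetCreateInstance(&instance, "edb-reader"), "JetCreateInstance");
    check(JetSetSystemParameter(&instance, JET_sesidNil, JET_paramSystemPath, 0, "C:\\extract\\db\\"), "JET_paramSystemPath");
    check(JetSetSystemParameter(&instance, JET_sesidNil, JET_paramLogFilePath, 0, "C:\\extract\\db\\"), "JET_paramLogFilePath");
    check(JetSetSystemParameter(&instance, JET_sesidNil, JET_paramBaseName, 0, "edb"), "JET_paramBaseName");
    /* If the database has been copied or moved, point recovery at its current directory. */
    check(JetSetSystemParameter(&instance, JET_sesidNil, JET_paramAlternateDatabaseRecoveryPath, 0, "C:\\extract\\db\\"), "JET_paramAlternateDatabaseRecoveryPath");

    /* JetInit runs recovery, replaying the logfiles into the database if needed. */
    check(JetInit(&instance), "JetInit");
    check(JetBeginSession(instance, &sesid, NULL, NULL), "JetBeginSession");

    /* Attach and open read-only so we cannot modify the database or create stray logfiles. */
    check(JetAttachDatabase(sesid, "C:\\extract\\db\\data.edb", JET_bitDbReadOnly), "JetAttachDatabase");
    check(JetOpenDatabase(sesid, "C:\\extract\\db\\data.edb", NULL, &dbid, JET_bitDbReadOnly), "JetOpenDatabase");

    /* ... enumerate tables and read columns here ... */

    JetCloseDatabase(sesid, dbid, 0);
    JetDetachDatabase(sesid, "C:\\extract\\db\\data.edb");
    JetEndSession(sesid, 0);
    JetTerm(instance);
    return 0;
}
If JetInit or JetAttachDatabase still fails with these parameters set, running ESENTUTL /MH against the database, as mentioned above, is the quickest way to see whether it is still marked as inconsistent.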
